My training data has 10680 samples and I've set up the fit call like this:
model.fit(X, y, batch_size=32, epochs=5, verbose=1, validation_split=0.1)
That should mean it uses 90% for training (9612 samples) and the remaining 10% for validation, right?
But when I run it, the epoch output shows:
Epoch 1/5
301/301 [==============================] - 25s 85ms/step - loss: nan - accuracy: 0.4999 - val_loss: nan - val_accuracy: 0
It's using only 301 samples to fit.
What am I doing wrong or not seeing here?
I know this isn't right because it worked before and showed 9612 samples.
This is the model:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense

X = keras.utils.normalize(X)  # normalize returns a new array, so assign the result back to X

model = Sequential()
model.add(Conv2D(128, (3, 3), input_shape=X.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())  # converts 3D feature maps to 1D feature vectors
model.add(Dense(128, kernel_regularizer=tf.keras.regularizers.l2(5e-5)))
model.add(Activation("relu"))
model.add(Dense(1, kernel_regularizer=tf.keras.regularizers.l2(5e-5)))
model.add(Activation("sigmoid"))
sgd = tf.keras.optimizers.SGD(learning_rate=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss="binary_crossentropy",
              optimizer=sgd,
              metrics=['accuracy'])
model.summary()
The 301 you see is the number of batches (steps) per epoch, not the number of samples. Since you have 9612 training samples and batch_size=32, the expected number of batches is 9612 / 32 = 300.375, which rounds up to 301 (the last batch simply contains fewer samples). Newer Keras versions show steps per epoch in the progress bar, while older versions showed the running sample count, which is why you saw 9612 there before.
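You can verify the arithmetic with plain Python (the variable names below are just for illustration):

import math

samples = 10680
validation_split = 0.1
batch_size = 32

train_samples = int(samples * (1 - validation_split))    # 9612 samples used for training
steps_per_epoch = math.ceil(train_samples / batch_size)  # ceil(300.375) = 301 batches
print(train_samples, steps_per_epoch)                    # 9612 301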