While training the CNN, moving from one epoch to the next takes a long time: each epoch itself completes in 60-80 s, but it takes almost 5 minutes before the next epoch starts. I have provided my code below; is there anything I am missing?
#importing the libraries
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
# initializing the CNN
classifier = Sequential()
# Convolutional layer
classifier.add(Conv2D(64,(3,3),input_shape =(128, 128, 3), activation = 'relu'))
#pooling layer
classifier.add(MaxPooling2D(pool_size = (2,2)))
#second convolutional layer
classifier.add(Conv2D(128,(3,3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2,2)))
# flatten
classifier.add(Flatten())
#full connection
classifier.add(Dense(output_dim = 128, activation = 'relu'))
classifier.add(Dense(output_dim = 1, activation = 'sigmoid'))
#compiling the cnn
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# data generators: training set with augmentation, test set with rescaling only
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('dataset/training_set',
                                                 target_size = (128, 128),
                                                 batch_size = 32,
                                                 class_mode = 'binary')
test_set = test_datagen.flow_from_directory('dataset/test_set',
                                            target_size = (128, 128),
                                            batch_size = 32,
                                            class_mode = 'binary')
classifier.fit_generator(training_set,
                         samples_per_epoch = 8000,
                         nb_epoch = 25,
                         validation_data = test_set,
                         nb_val_samples = 2000)
There is no need to set samples_per_epoch and nb_val_samples if you use ImageDataGenerator, because the iterator it returns is a Sequence that internally knows its own length (if you use a recent Keras version, of course). The problem is that nb_val_samples is used for the parameter validation_steps, and I think you just set this value much higher than the right value: validation_steps counts batches, not samples, so 2000 means 2000 batches of 32 images (64,000 images) of validation after every epoch, which is what makes the pause between epochs so long.
If needed, you should set steps_per_epoch and validation_steps to the correct values yourself; if you set validation_steps to a value larger than len(val_data) / batch_size, you are effectively telling Keras to do validation with more data than necessary, slowing down the validation step.
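If you prefer to set the values explicitly, here is a sketch using len() on the iterators, assuming a Keras version where the flow_from_directory iterators report their length in batches:

# Sketch: explicit step counts so validation covers the test set exactly once
# per epoch (with batch_size = 32: 8000/32 = 250 and 2000/32 -> 63 batches).
classifier.fit_generator(training_set,
                         steps_per_epoch = len(training_set),
                         epochs = 25,
                         validation_data = test_set,
                         validation_steps = len(test_set))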