Tags: python, keras, jupyter-notebook, conv-neural-network, out-of-memory

CNN data doesn't fit in memory (kernel dies)


I am working with this dataset (the image part), normalized to 1103 images of size 1396 x 1676. I am trying to feed it into a CNN (taken from here) with this architecture:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

n, w, h = X_train.shape  # which is (n, 1396, 1676)
X_train = X_train.reshape(n, w, h, 1)  # Conv2D expects a trailing channel dimension

cnn = Sequential()
cnn.add(Conv2D(32, (3, 3), activation="relu", input_shape=(w, h, 1)))  # input_shape only needed on the first layer
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Conv2D(32, (3, 3), activation="relu"))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Conv2D(32, (3, 3), activation="relu"))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Conv2D(64, (3, 3), activation="relu"))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Conv2D(64, (3, 3), activation="relu"))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Flatten())
cnn.add(Dense(units=128, activation='relu'))
cnn.add(Dense(units=64, activation='relu'))
cnn.add(Dense(units=1, activation='sigmoid'))  # binary output
cnn.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

early = EarlyStopping(monitor="val_loss", mode="min", patience=3)
learning_rate_reduction = ReduceLROnPlateau(monitor='val_loss', patience=2, verbose=1, factor=0.3, min_lr=0.000001)
callbacks_list = [early, learning_rate_reduction]

# validation_split=0.7 holds out 70% of the data for validation
cnn.fit(X_train, Y_train, epochs=25, validation_split=0.7, callbacks=callbacks_list)

Both in Google Colab and in a Jupyter notebook on my 16 GB machine, this kills the kernel. I tried using a generator to feed just 10 images at a time, but it doesn't produce any good results (0.5 accuracy).
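
For reference, the generator was along these lines (a minimal sketch rather than my exact code; the batch size matches the 10 images mentioned above):

import numpy as np

def batch_generator(X, y, batch_size=10):
    # Yield small slices so the model only receives one batch at a time.
    while True:  # Keras expects the generator to loop indefinitely
        for i in range(0, len(X), batch_size):
            yield X[i:i + batch_size], y[i:i + batch_size]

steps = int(np.ceil(n / 10))  # batches per epoch
cnn.fit(batch_generator(X_train, Y_train), steps_per_epoch=steps, epochs=25)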

Could you suggest a way to solve this, perhaps by tweaking some parameters as well?


Solution

  • Whatever you are ultimately training for (setting the specific data and goal aside), the first step is to not train at the images' full resolution. Start small (256x256 or 512x512): resize the images first (use OpenCV to resize), then train the model on the downscaled copies, as in the sketch below. That alone will give you (and the hardware) some breathing room.
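
For example, here is a minimal sketch of that resizing step, assuming the images are already loaded in an array like the X_train above (the 256x256 target is just the suggested starting point):

import cv2
import numpy as np

TARGET = (256, 256)  # (width, height), the order cv2.resize expects

# Downscale every image before training; INTER_AREA works well for shrinking.
# cv2.resize prefers uint8 or float32 input, hence the cast.
X_small = np.stack([cv2.resize(img.astype(np.float32), TARGET,
                               interpolation=cv2.INTER_AREA)
                    for img in X_train])
X_small = X_small.reshape(-1, 256, 256, 1)  # restore the channel dimension for Conv2D

Besides shrinking the activations of the early convolutional layers, this also makes the flattened feature map (and with it the first Dense layer) far smaller, so the whole model fits much more comfortably in memory.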