I'm training a CNN on 2403 images, each 1280x720 px. This is the code I'm running:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D,MaxPooling2D,Activation,Dense,Flatten,Dropout
model = keras.Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(1280,720,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
# augmentation configuration for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    '/gdrive/MyDrive/shot/training',
    target_size=(1280, 720),
    batch_size=640,
    class_mode='categorical')
history = model.fit(
    train_generator,
    steps_per_epoch=2403 // 640,
    epochs=15,
)
The session is crashing before the first epoch. Is there anything that I can do to reduce RAM usage? What other alternatives do I have?
It looks like your batch size is too large and is consuming all the RAM, so first try a smaller batch size, such as 32 or 64. Your images are also very large; you can reduce their size for initial experiments.
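For a rough sense of scale, here's a back-of-the-envelope estimate of what one batch costs (assuming the generator yields float32 arrays, which is what rescale=1. / 255 produces):

# Approximate host RAM for one decoded batch (inputs only; float32 = 4 bytes)
batch_size = 640
height, width, channels = 1280, 720, 3
batch_bytes = batch_size * height * width * channels * 4
print(f"{batch_bytes / 1024**3:.1f} GiB")  # ~6.6 GiB, before any activations

At batch_size=32 the same batch is only ~0.3 GiB, and with target_size=(256, 256) it drops to ~24 MiB. With that in mind, here is the adjusted generator: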
train_generator = train_datagen.flow_from_directory(
    '/gdrive/MyDrive/shot/training',
    target_size=(256, 256),  # -> Change the image size
    batch_size=32,           # -> Reduce batch size
    class_mode='categorical'
)
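One caveat: if you change target_size, the first layer's input_shape has to match it, otherwise Keras will raise a shape mismatch error on the first batch. A minimal adjustment, keeping the rest of your architecture as-is (assuming the 256x256 setting above):

model = keras.Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(256, 256, 3)))  # was (1280, 720, 3)
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(3))
model.add(Activation('softmax'))

This shrinks the model itself, too: at 1280x720 the Flatten layer emits roughly 890k values, so Dense(64) alone holds about 57M weights, while at 256x256 the Flatten output is 57,600 values and that layer drops to about 3.7M weights. Also remember to keep steps_per_epoch in sync with the new batch size (2403 // 32), or simply drop the argument; Keras infers the number of steps from the generator's length.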