Tags: keras, tensorflow2.0, transfer-learning, mobilenet, overfitting-underfitting

Validation loss goes up after some epochs (transfer learning)


My validation loss decreases at a good rate for the first 50 epochs, but after that it stops decreasing for ten epochs. I'm using MobileNet, freezing its layers, and adding my own custom head. My custom head is as follows:

from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout

def addTopModelMobileNet(bottom_model, num_classes):
    top_model = bottom_model.output
    top_model = GlobalAveragePooling2D()(top_model)
    top_model = Dense(64, activation='relu')(top_model)
    top_model = Dropout(0.25)(top_model)
    top_model = Dense(32, activation='relu')(top_model)
    top_model = Dropout(0.10)(top_model)
    top_model = Dense(num_classes, activation='softmax')(top_model)
    return top_model
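For reference, here is a minimal sketch of how a head like this might be attached to a frozen MobileNet base; the input shape, number of classes, and MobileNet arguments are assumptions for illustration, not taken from the question:

from tensorflow.keras.applications import MobileNet
from tensorflow.keras.models import Model

# Assumed input shape and number of classes; adjust to the actual setup.
base = MobileNet(weights='imagenet', include_top=False,
                 input_shape=(128, 128, 3), alpha=0.25)

# Freeze the convolutional base so only the custom head is trained.
for layer in base.layers:
    layer.trainable = False

model = Model(inputs=base.input,
              outputs=addTopModelMobileNet(base, num_classes=10))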

I'm using alpha 0.25, a learning rate of 0.001 with per-epoch learning-rate decay, and Nesterov momentum of 0.8. I'm also using an EarlyStopping callback with a patience of 10 epochs.
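In code, that optimizer and callback setup might look roughly like the sketch below; the per-epoch decay factor is an assumption, since only "decay learning rate / epoch" is stated:

from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import EarlyStopping, LearningRateScheduler

# SGD with Nesterov momentum 0.8 and an initial learning rate of 0.001.
opt = SGD(learning_rate=0.001, momentum=0.8, nesterov=True)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])

callbacks = [
    LearningRateScheduler(lambda epoch, lr: lr * 0.95),  # assumed per-epoch decay factor
    EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True),
]
# history = model.fit(train_data, validation_data=val_data, epochs=100, callbacks=callbacks)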

[Plots: training/validation loss and accuracy curves]


Solution

  • This phenomenon is called overfitting. At around epoch 70, the model starts to overfit noticeably.

    There are a few possible reasons behind this.

    1. Data: Analyze your data first. Balance the classes if they are imbalanced. Use augmentation if the data lacks variation (a minimal augmentation sketch follows this list).

    2. Layer tuning: Try tuning the dropout rates a little more. I would also suggest adding a BatchNormalization layer (see the modified head sketched after this list).

    3. Finally, try decreasing the learning rate to 0.0001 and increasing the total number of epochs. Do not use EarlyStopping for now, and look at the full training history: sometimes a good minimum can't be reached because the optimizer gets stuck in a poor local minimum (a sketch of this setup also follows).
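For point 1, a minimal augmentation sketch with ImageDataGenerator; the specific transform ranges and the directory path are assumptions to illustrate the idea:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Light geometric augmentation plus rescaling; tune the ranges to the dataset.
train_gen = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    rescale=1.0 / 255)

# train_flow = train_gen.flow_from_directory('data/train', target_size=(128, 128),
#                                             batch_size=32, class_mode='categorical')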
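For point 2, one way the custom head could be modified with BatchNormalization and slightly different dropout rates; the exact values are only a suggestion:

from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout, BatchNormalization

def addTopModelMobileNetBN(bottom_model, num_classes):
    x = bottom_model.output
    x = GlobalAveragePooling2D()(x)
    x = Dense(64, activation='relu')(x)
    x = BatchNormalization()(x)   # normalize activations before dropout
    x = Dropout(0.4)(x)           # somewhat stronger dropout (assumed value)
    x = Dense(32, activation='relu')(x)
    x = BatchNormalization()(x)
    x = Dropout(0.2)(x)
    return Dense(num_classes, activation='softmax')(x)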
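For point 3, re-compiling with the smaller learning rate and training longer without EarlyStopping might look like this; the epoch count and data objects are placeholders:

from tensorflow.keras.optimizers import SGD

# Lower learning rate, same Nesterov momentum; train longer and inspect the full history.
model.compile(optimizer=SGD(learning_rate=0.0001, momentum=0.8, nesterov=True),
              loss='categorical_crossentropy', metrics=['accuracy'])

# history = model.fit(train_flow, validation_data=val_flow, epochs=200)
# Plot history.history['loss'] against history.history['val_loss'] to inspect the curves.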