Tags: conv-neural-network, data-science, training-data

After a few epochs, the difference between validation loss and training loss increases


I'm trying to train a model on the MagnaTagATune dataset. Is the model training properly? Does anyone know what the problem is? Will training for more epochs solve it?

The results are shown in the image below.

[Image: training loss and validation loss per epoch]


Thank you pseudo_random_here for your answer. Your tips were helpful, but the problem persisted.

Unfortunately, changing the learning rate did not work. Following your advice, I am now using the SGD optimizer with a learning rate of 0.1. I also tried another model designed for this task, but that did not solve the problem either.

from keras.optimizers import SGD

# SGD optimizer with the suggested learning rate of 0.1
opt = SGD(lr=0.1)
model.compile(loss="categorical_crossentropy", optimizer=opt)
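
Note: in recent Keras/TensorFlow releases the lr argument is deprecated in favour of learning_rate, so the equivalent setup would look like this (a minimal sketch; model is assumed to be an already-built Keras model):

from tensorflow.keras.optimizers import SGD

# Same optimizer, using the current keyword name
opt = SGD(learning_rate=0.1)
model.compile(loss="categorical_crossentropy", optimizer=opt)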

Solution

  • Short answer: I would say your val_loss is too high, and waiting is unlikely to solve your problem.

    Explanation: I believe there are two possibilities here:

    • Your architecture is not suitable for the data
    • Your learning rate is too small (a quick learning-rate sweep, sketched below, can test this)

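    One quick way to test the second possibility is to train short runs over a range of learning rates and compare the resulting val_loss. A minimal sketch, assuming a hypothetical build_model() helper that returns a fresh, uncompiled copy of your network, and that x_train/y_train hold your data:

    from tensorflow.keras.optimizers import SGD

    # Compare how validation loss responds to different learning rates
    for lr in [0.001, 0.01, 0.1, 1.0]:
        model = build_model()  # hypothetical: rebuilds the network from scratch
        model.compile(loss="categorical_crossentropy",
                      optimizer=SGD(learning_rate=lr))
        history = model.fit(x_train, y_train, validation_split=0.2,
                            epochs=20, verbose=0)
        print(f"lr={lr}: final val_loss={history.history['val_loss'][-1]:.4f}")

    If a larger learning rate brings the validation loss down noticeably, the optimizer was the bottleneck; if all runs behave the same, the architecture is the more likely culprit.
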
    P.S. It would help a lot if you provided more details: which network architecture you are using, which loss function you are training with, and what exactly you are predicting.