Tags: tensorflow, neural-network, keras, generative-adversarial-network, dcgan

Sudden drop in validation_loss after reloading the model(s)


I am testing a cGAN in Keras / TensorFlow, and after 1000 epochs I saved the model.

After some time I restored:

  1. the generator model + weights
  2. the discriminator model + weights
  3. the GAN weights (the model is recreated)

This is the resulting val_loss:

[plot: sudden drop in val_loss]

The plot clearly shows a sharp drop in val_loss right after restoring the model.

Could someone explain why this happens, or what could have caused it?


Solution

  • Further analysis might be required to prove this, but you may have unintentionally discovered a technique called "warm restarting". Simply put, you train your model with an annealing learning rate as usual, stop, reset the learning rate, and start over again. Intuitively, this gives the model opportunities to jump out of local minima, and that might explain the observed behavior.
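
The warm-restart idea described above is usually implemented as a cyclic learning-rate schedule: within each cycle the rate anneals down, then jumps back to its initial value at the restart, which is roughly what reloading the model and training with a fresh optimizer does. A minimal sketch of such a schedule (the function name and hyperparameter values here are illustrative, not from the question):

```python
import math

def warm_restart_lr(step, eta_max=1e-3, eta_min=1e-5, cycle_len=1000):
    """Cosine-annealed learning rate with warm restarts.

    Within each cycle of `cycle_len` steps the rate decays from
    eta_max to eta_min along a cosine curve; at the start of the
    next cycle it jumps back to eta_max (the "restart").
    """
    t = step % cycle_len  # position within the current cycle
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t / cycle_len))
```

In Keras this kind of schedule is typically wired in via a learning-rate scheduler callback or a `LearningRateSchedule` passed to the optimizer; the key point is that stopping training and recreating the optimizer with its initial learning rate has the same effect as the restart step above.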