tensorflow, deep-learning, artificial-intelligence, generative-adversarial-network

Is it bad if my GAN discriminator loss goes to 0?


I've been training my Pix2Pix GAN, and the discriminator loss starts dropping to 0 around the 20th epoch, then stays consistently at 0 from around the 30th epoch onwards.

The generator loss keeps decreasing, however: during the first few epochs it was between 50 and 60, around the 100th epoch it was about 4-5, and from the 150th to the 350th epoch it hovered between 1 and 3.

So is it bad that the discriminator loss goes to 0? And how would I fix it?


Solution

  • Basically, you don't want the Discriminator loss to go to zero, because that would mean the Discriminator is doing too good a job (and, most importantly, the Generator too bad a one), i.e. it can easily discriminate between fake and real data (i.e. the Generator's creations are not close enough to real data).

    To sum it up, it's important to define the Discriminator's loss this way because we do want the Discriminator to try to reduce it, but the ultimate goal of the whole GAN system is to have the two losses balance out. Hence, if one loss goes to zero, that's a failure mode (no more learning happens).
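    To see why learning stops, note that a "perfect" Discriminator pushes its sigmoid output to saturation, and the sigmoid's gradient is s(x)(1 - s(x)), which vanishes when the output is pinned near 0 or 1, so almost no signal flows back to the Generator. A minimal framework-free sketch of this:

    ```python
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def sigmoid_grad(x):
        # derivative of the sigmoid: s(x) * (1 - s(x))
        s = sigmoid(x)
        return s * (1.0 - s)

    # A balanced discriminator outputs logits near 0 (probability ~0.5):
    print(sigmoid_grad(0.0))   # 0.25 -> healthy gradient

    # A "perfect" discriminator outputs large logits (probability ~1):
    print(sigmoid_grad(20.0))  # ~2e-9 -> gradient has effectively vanished
    ```

    Once the Discriminator sits in that saturated regime, its loss reads 0 and the Generator receives essentially no gradient, which matches the plateau you are seeing.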

    To avoid this, make sure that your last Discriminator layer is not a sigmoid layer and that your loss is not constrained to [0, 1]. You could try a BCE loss computed directly on the logits, or something similar.
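    One concrete way to apply that advice is to compute binary cross-entropy from the raw logits instead of applying a sigmoid first (in Keras this corresponds to `tf.keras.losses.BinaryCrossentropy(from_logits=True)`). The sketch below, in plain Python with hypothetical helper names, compares the naive sigmoid-then-log computation with the numerically stable from-logits form:

    ```python
    import math

    def naive_bce(logit, label):
        # sigmoid first, then log: the log term underflows/overflows
        # once the discriminator becomes very confident
        p = 1.0 / (1.0 + math.exp(-logit))
        return -(label * math.log(p) + (1 - label) * math.log(1 - p))

    def bce_from_logits(logit, label):
        # stable form: max(x, 0) - x*y + log(1 + exp(-|x|)),
        # the formulation used by sigmoid-cross-entropy-with-logits ops
        return max(logit, 0.0) - logit * label + math.log1p(math.exp(-abs(logit)))

    # For moderate logits the two agree:
    print(naive_bce(2.0, 1.0))        # ~0.1269
    print(bce_from_logits(2.0, 1.0))  # ~0.1269

    # For an extremely confident wrong prediction, naive_bce overflows,
    # while the from-logits version stays finite and keeps a gradient:
    print(bce_from_logits(-800.0, 1.0))  # 800.0
    ```

    Working on logits this way keeps the loss well-defined (and the gradient non-zero) even when the Discriminator is very confident, which is exactly the regime where your training currently stalls.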