I can't understand the loss function in the GAN model from the TensorFlow documentation. Why use tf.ones_like() as the labels for the real outputs and tf.zeros_like() as the labels for the fake outputs?
```python
# cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True),
# as defined earlier in the tutorial
def discriminator_loss(real_output, fake_output):
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss
```
We have the following objective, which we need to optimize in a mini-max fashion (or min-max if you wish to call it that):

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$

where real_output = $D(x)$, the discriminator's predictions on real images, and fake_output = $D(G(z))$, its predictions on generated images.
Now, with this in mind, let's see what the code snippet in TensorFlow's documentation represents:
real_loss = cross_entropy(tf.ones_like(real_output), real_output)

evaluates to

$$-\mathbb{E}[\log D(x)],$$

because binary cross-entropy with targets of all ones reduces to $-\log(\text{prediction})$.
fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)

evaluates to

$$-\mathbb{E}[\log(1 - D(G(z)))],$$

because with targets of all zeros it reduces to $-\log(1 - \text{prediction})$.
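You can check these two reductions numerically with a small pure-Python sketch. The probability values for real_output and fake_output below are made up for illustration, and binary_cross_entropy is a hand-rolled stand-in for tf.keras.losses.BinaryCrossentropy (on probabilities, i.e. without from_logits):

```python
import math

def binary_cross_entropy(targets, preds):
    # Mean of -[t*log(p) + (1 - t)*log(1 - p)] over the batch,
    # the standard binary cross-entropy formula.
    return sum(-(t * math.log(p) + (1 - t) * math.log(1 - p))
               for t, p in zip(targets, preds)) / len(targets)

real_output = [0.9, 0.8]   # hypothetical D(x): predictions on real images
fake_output = [0.2, 0.1]   # hypothetical D(G(z)): predictions on fakes

# Targets of all ones reduce BCE to -mean(log D(x)):
real_loss = binary_cross_entropy([1.0] * len(real_output), real_output)
assert abs(real_loss - (-(math.log(0.9) + math.log(0.8)) / 2)) < 1e-12

# Targets of all zeros reduce BCE to -mean(log(1 - D(G(z)))):
fake_loss = binary_cross_entropy([0.0] * len(fake_output), fake_output)
assert abs(fake_loss - (-(math.log(1 - 0.2) + math.log(1 - 0.1)) / 2)) < 1e-12
```

So the ones/zeros labels are not "real labels" in the dataset sense; they just select which term of the cross-entropy survives.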
total_loss = real_loss + fake_loss

evaluates to

$$-\left(\mathbb{E}[\log D(x)] + \mathbb{E}[\log(1 - D(G(z)))]\right).$$
Clearly, minimizing total_loss maximizes $V(D, G)$ with respect to the discriminator, which is exactly the discriminator's role in the mini-max game.
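To see that minimizing this loss really does push the discriminator the right way, here is a toy gradient-descent sketch with a hypothetical one-parameter-per-example discriminator, $D = \sigma(w)$; the gradients are written out by hand, and nothing here comes from the tutorial itself:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical scalar "discriminator": D outputs sigmoid(w_real) on one
# real example and sigmoid(w_fake) on one generated example.
w_real, w_fake = 0.0, 0.0
lr = 0.5
for _ in range(1000):
    # total_loss = -log(sigmoid(w_real)) - log(1 - sigmoid(w_fake))
    # d/dw_real [-log(sigmoid(w_real))]    = sigmoid(w_real) - 1
    # d/dw_fake [-log(1 - sigmoid(w_fake))] = sigmoid(w_fake)
    w_real -= lr * (sigmoid(w_real) - 1)
    w_fake -= lr * sigmoid(w_fake)

# Descent on total_loss drives D(x) -> 1 and D(G(z)) -> 0:
assert sigmoid(w_real) > 0.99
assert sigmoid(w_fake) < 0.01
```

In other words, gradient descent on total_loss trains the discriminator to output values near 1 on real data and near 0 on fakes, which is the maximization step of the mini-max game.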