Tags: tensorflow, neural-network, nlp, generative-adversarial-network

Easy way to clamp Neural Network outputs between 0 and 1?


So I'm writing a GAN, and I want to clamp my network's output: set it to 0 if it is less than 0, set it to 1 if it is greater than 1, and leave it unchanged otherwise. I'm pretty new to TensorFlow and don't know of a TensorFlow function or activation that does this without unwanted side effects. So I wrote my loss function to calculate the loss as if the output were clamped, with this code:

def discriminator_loss(real_output, fake_output):
    real_output_clipped = min(max(real_output.numpy()[0], 0), 1)
    fake_output_clipped = min(max(fake_output.numpy()[0], 0), 1)

    real_clipped_tensor = tf.Variable([[real_output_clipped]], dtype="float32")
    fake_clipped_tensor = tf.Variable([[fake_output_clipped]], dtype="float32")

    real_loss = cross_entropy(tf.ones_like(real_output), real_clipped_tensor)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_clipped_tensor)

    total_loss = real_loss + fake_loss
    return total_loss

but I get this error:

ValueError: No gradients provided for any variable: ['dense_50/kernel:0', 'dense_50/bias:0', 'dense_51/kernel:0', 'dense_51/bias:0', 'dense_52/kernel:0', 'dense_52/bias:0', 'dense_53/kernel:0', 'dense_53/bias:0'].

Does anyone know a better way to do this, or a way to fix this error?

Thanks!


Solution

  • The error comes from calling .numpy() inside the loss: that pulls the values out of TensorFlow's computation graph, so there is no path for gradients to reach the network's weights. To clamp within the graph instead, you can apply a ReLU layer from Keras as your final layer and set max_value=1.0. For example:

    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(32, input_shape=(16,)))
    model.add(tf.keras.layers.Dense(32))
    model.add(tf.keras.layers.ReLU(max_value=1.0))
    

    You can read more about it here: https://www.tensorflow.org/api_docs/python/tf/keras/layers/ReLU
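
    As a quick sanity check of the clamping behavior (a minimal sketch, assuming TensorFlow 2.x with eager execution):

    import tensorflow as tf

    # ReLU with max_value=1.0 maps x < 0 to 0, x > 1 to 1,
    # and leaves values in between unchanged
    clamp = tf.keras.layers.ReLU(max_value=1.0)
    print(clamp(tf.constant([-0.5, 0.3, 1.7])))  # -> [0.  0.3 1. ]

    Because this is an ordinary tensor op, gradients flow through it: the gradient is 1 for inputs strictly between 0 and 1 and 0 outside, so training proceeds normally. If you'd rather clamp inside the loss instead of the model, tf.clip_by_value(real_output, 0.0, 1.0) does the same clamping while staying in the graph, unlike the .numpy() round-trip in your original code.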