tensorflow, error-handling, keras, deep-learning, loss-function

Implementing a custom WARP loss function in Keras/TensorFlow with error: LookupError: No gradient defined for operation


I am creating a custom loss function. I have written others before this one and they work fine, but this one fails when the gradients are computed:

LookupError: No gradient defined for operation 'loss/target_global_pool_loss/while/RandomShuffle' (op type: RandomShuffle)

I am unsure whether the problem is how I handle things inside the TensorFlow while loop; however, if I open a Python terminal I do get a float value out:

import tensorflow as tf
from warp_loss import warp_loss

a = [0, 1, 0, 1, 1, 1, 0, 0, 1]                    # true labels
b = [0.5, 0.5, 0.3, 0.7, 0.8, 0.9, 0., 0.2, 0.2]   # predicted scores
a = tf.constant(a)
b = tf.constant(b)

sess = tf.InteractiveSession()
loss = warp_loss(a, b)
loss.eval()
# 0.41588834
loss
# <tf.Tensor 'while_3/Exit_1:0' shape=() dtype=float32>

import tensorflow as tf
from keras import backend as K

def warp_loss(y_true, y_pred):
    """
    Implementation of the WARP loss function.

    Arguments:
    y_true -- true binary labels (0/1), required by the Keras loss signature.
    y_pred -- prediction scores in the range 0-1.

    Returns:
    loss -- real number, value of the loss
    """

    # 1.0 where the label is 0 (negative class), 0.0 where it is 1 (positive class)
    neg_mask  = tf.where(tf.equal(y_true, 0), tf.ones_like(y_pred), tf.zeros_like(y_pred))

    # Get positive and negative scores   
    positives = tf.boolean_mask(y_pred,y_true)
    negatives = tf.boolean_mask(y_pred,neg_mask)

    loss = tf.constant(0, dtype=tf.float32)
    p    = tf.constant(0)

    # Loop all positives
    while_condition = lambda p, loss: tf.less(p, tf.shape(positives)[0])
    def sampling(p, loss):
        # Simulate random sampling without resampling
        shuffled  = tf.random.shuffle(negatives)

        # Index of the first shuffled negative that scores above the positive,
        # or -1 if no negative scores above it (low-loss case)
        above     = K.cast(K.greater(shuffled, positives[p]), K.floatx())
        sample_i  = tf.cond(K.sum(above) > 0,
                            lambda: tf.cast(tf.argmax(above), tf.float32),
                            lambda: tf.cast(-1, tf.float32))

        # Every positive is weighted equally; when sample_i == -1 (no violating
        # negative was found) the loss for this positive is skipped further down
        L = tf.log(tf.cast(tf.shape(negatives)[0], tf.float32) / (sample_i + 1.))
        distance = tf.cast(shuffled[tf.cast(sample_i,tf.int32)], tf.float32)-tf.cast(positives[p], tf.float32)

        # Sum up loss
        individual_loss  = tf.cond( sample_i >= 0 , lambda: L*distance , lambda: tf.cast(0, tf.float32 ) )

        return [tf.add(p, 1), tf.add(loss, individual_loss)]

    _, loss = tf.while_loop(while_condition, sampling, [p, loss])

    return loss
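
The forward pass evaluates fine, so the gradient failure can also be reproduced in isolation by asking for gradients of the loss with respect to a variable (a minimal sketch in the same TF 1.x graph mode; the variable is only there so that something requires a gradient):

import tensorflow as tf

y_true     = tf.constant([0, 1, 0, 1, 1, 1, 0, 0, 1])
y_pred_var = tf.Variable([0.5, 0.5, 0.3, 0.7, 0.8, 0.9, 0., 0.2, 0.2], dtype=tf.float32)

loss = warp_loss(y_true, y_pred_var)

# Raises the same LookupError: RandomShuffle sits on the path between
# `loss` and the variable, and no gradient is registered for it.
grads = tf.gradients(loss, [y_pred_var])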

I expected the output to be just a float value, as my other loss functions produce.

My input is an (i, j, channels) array and the output is a binary vector of potential classes. I call train_on_batch with one sample per batch, and this is where it fails (a rough sketch of the training call follows the traceback below):

 File "train.py", line 319, in <module>
    batch_out = model.train_on_batch(np.array([npzobj['features']]), np.array([npzobj['targets']]))
  File "/lib/python3.5/site-packages/keras/engine/training.py", line 1216, in train_on_batch
    self._make_train_function()
  File "/lib/python3.5/site-packages/keras/engine/training.py", line 509, in _make_train_function
    loss=self.total_loss)
  File "/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/lib/python3.5/site-packages/keras/optimizers.py", line 184, in get_updates
    grads = self.get_gradients(loss, params)
  File "/lib/python3.5/site-packages/keras/optimizers.py", line 89, in get_gradients
    grads = K.gradients(loss, params)
  File "/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 2757, in gradients
    return tf.gradients(loss, variables, colocate_gradients_with_ops=True)
  File "/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py", line 664, in gradients
    unconnected_gradients)
  File "/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py", line 923, in _GradientsHelper
    (op.name, op.type))
LookupError: No gradient defined for operation 'loss/target_global_pool_loss/while/RandomShuffle' (op type: RandomShuffle)
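
For reference, the loss is attached to the model roughly like this (a minimal sketch; the model architecture and array names are illustrative, not my actual code):

import numpy as np
from keras.models import Model
from keras.layers import Input, Conv2D, GlobalAveragePooling2D, Dense

# Illustrative model: (i, j, channels) input, one sigmoid score per class
inp = Input(shape=(None, None, 3))
x   = Conv2D(16, 3, activation='relu')(inp)
x   = GlobalAveragePooling2D()(x)
out = Dense(9, activation='sigmoid')(x)

model = Model(inp, out)
model.compile(optimizer='adam', loss=warp_loss)

features  = np.random.rand(1, 32, 32, 3)              # one sample per batch
targets   = np.random.randint(0, 2, size=(1, 9))
batch_out = model.train_on_batch(features, targets)   # gradients are built here, raising the LookupError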


Solution

  • Apparently the random shuffle does not have a gradient; however, a workaround following this solution, GPU kernel for tf.random_shuffle, solved my problem.

    shuffled  = tf.gather(negatives, tf.random.shuffle(tf.range(tf.shape(negatives)[0])))
    
    # Instead of
    
    shuffled  = tf.random.shuffle(negatives)
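
  • Why this helps: tf.random.shuffle is now only applied to integer indices, which never need a gradient, while the differentiable path to the scores goes through tf.gather. A minimal check (same TF 1.x graph mode; the variable name is only for illustration):

    import tensorflow as tf

    x        = tf.Variable([0.1, 0.4, 0.3], dtype=tf.float32)
    perm     = tf.random.shuffle(tf.range(tf.shape(x)[0]))  # integer permutation, no gradient needed
    shuffled = tf.gather(x, perm)                           # differentiable w.r.t. x
    loss     = tf.reduce_sum(shuffled * shuffled)

    # No LookupError any more: RandomShuffle no longer lies on the path
    # between `loss` and `x`.
    grads = tf.gradients(loss, [x])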