python-3.x, tensorflow, keras, loss-function

Custom loss is missing an operation for gradient


I'm not sure how to deal with this error or why I am getting it:

    raise ValueError('An operation has `None` for gradient. '
    ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
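
For context, the ops named in the message, like rounding, have no registered gradient, so a loss that depends on them comes back as None when TensorFlow tries to differentiate it. A tiny TF 1.x illustration (not part of my model):

    import tensorflow as tf

    x = tf.Variable([1.0, 2.0])
    y = tf.round(x)             # Round has no registered gradient
    grads = tf.gradients(y, x)  # returns [None]; Keras raises the error above when it sees this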

So I am using a custom triplet loss as the loss function, taken from this blog: https://omoindrot.github.io/triplet-loss. I am running it in Keras, which should not be an issue, but I cannot get it to work correctly with my model.

This is their loss function; the other code it needs is a direct copy from the blog:

def batch_hard_triplet_loss(embeddings, labels, margin=0.3, squared=False):
    # Get the pairwise distance matrix
    pairwise_dist = pairwise_distances(embeddings, squared=squared)

    # Hardest positive for each anchor: mask out invalid (a, p) pairs, then take the largest distance
    mask_anchor_positive = _get_anchor_positive_triplet_mask(labels)
    mask_anchor_positive = tf.to_float(mask_anchor_positive)
    anchor_positive_dist = tf.multiply(mask_anchor_positive, pairwise_dist)
    hardest_positive_dist = tf.reduce_max(anchor_positive_dist, axis=1, keepdims=True)

    # Hardest negative for each anchor: push invalid (a, n) pairs to the row-wise max, then take the smallest
    mask_anchor_negative = _get_anchor_negative_triplet_mask(labels)
    mask_anchor_negative = tf.to_float(mask_anchor_negative)
    max_anchor_negative_dist = tf.reduce_max(pairwise_dist, axis=1, keepdims=True)
    anchor_negative_dist = pairwise_dist + max_anchor_negative_dist * (1.0 - mask_anchor_negative)
    hardest_negative_dist = tf.reduce_min(anchor_negative_dist, axis=1, keepdims=True)

    # Combine biggest d(a, p) and smallest d(a, n) into final triplet loss
    triplet_loss = tf.maximum(hardest_positive_dist - hardest_negative_dist + margin, 0.0)
    triplet_loss = tf.reduce_mean(triplet_loss)
    #triplet_loss = k.mean(triplet_loss) # use keras mean

    return triplet_loss
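
For reference, the helpers it calls (pairwise_distances, _get_anchor_positive_triplet_mask, _get_anchor_negative_triplet_mask) are copied from the same blog and assume labels is a 1-D tensor of integer class ids. Roughly condensed, they look like this (the blog's exact code may differ slightly):

    def pairwise_distances(embeddings, squared=False):
        # ||a - b||^2 = ||a||^2 - 2 <a, b> + ||b||^2 for every pair in the batch
        dot_product = tf.matmul(embeddings, tf.transpose(embeddings))
        square_norm = tf.diag_part(dot_product)
        distances = tf.expand_dims(square_norm, 1) - 2.0 * dot_product + tf.expand_dims(square_norm, 0)
        distances = tf.maximum(distances, 0.0)
        if not squared:
            # add a small epsilon where the distance is 0 so sqrt has a finite gradient
            mask = tf.to_float(tf.equal(distances, 0.0))
            distances = tf.sqrt(distances + mask * 1e-16) * (1.0 - mask)
        return distances

    def _get_anchor_positive_triplet_mask(labels):
        # True where labels[i] == labels[j] and i != j
        indices_not_equal = tf.logical_not(tf.cast(tf.eye(tf.shape(labels)[0]), tf.bool))
        labels_equal = tf.equal(tf.expand_dims(labels, 0), tf.expand_dims(labels, 1))
        return tf.logical_and(indices_not_equal, labels_equal)

    def _get_anchor_negative_triplet_mask(labels):
        # True where labels[i] != labels[j]
        return tf.logical_not(tf.equal(tf.expand_dims(labels, 0), tf.expand_dims(labels, 1)))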

Now this is the model I am using.

train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    ....
    validation_split=0.2) # set validation split

train_generator = train_datagen.flow_from_directory(
    IMAGE_DIR,
    target_size=(224, 224),
    batch_size=BATCHSIZE,
    class_mode='categorical',
    subset='training') # set as training data

validation_generator = train_datagen.flow_from_directory(
    IMAGE_DIR, # same directory as training data
    target_size=(224, 224),
    batch_size=BATCHSIZE,
    class_mode='categorical',
    subset='validation') # set as validation data
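
Side note: with class_mode='categorical', each batch from these generators is a pair of images and one-hot labels. A quick sanity check (illustrative snippet, not part of my actual script):

    x_batch, y_batch = next(train_generator)
    print(x_batch.shape)  # (BATCHSIZE, 224, 224, 3)
    print(y_batch.shape)  # (BATCHSIZE, num_classes), one-hot encoded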

print("Initializing Model...")
# Get base model
input_layer = preloadmodel.get_layer('model_1').get_layer('input_1').input
layer_output = preloadmodel.get_layer('model_1').get_layer('glb_avg_pool').output
# Make extractor
base_network = Model(inputs=input_layer, outputs=layer_output)

# Define new model
input_images = Input(shape=(224, 224, 3), name='input_image')  # input layer for images
#input_labels = Input(shape=(num_classes,), name='input_label')  # input layer for labels
embeddings = base_network(input_images)  # output of network -> embeddings
output = Dense(1, activation='sigmoid')(embeddings)
model = Model(inputs=input_images,  outputs=output)
# Compile model
model.compile(loss=batch_hard_triplet_loss, optimizer='adam')
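
For reference, Keras calls whatever is passed to loss= as loss(y_true, y_pred), with y_true coming from the generator and y_pred being the model output. A minimal sketch of that two-argument signature (placeholder body, hypothetical name):

    from keras import backend as K

    def keras_compatible_loss(y_true, y_pred):
        # y_true: one-hot labels from the generator, y_pred: output of the model
        # placeholder body -- the real triplet logic would have to fit this signature
        return K.mean(K.square(y_pred - y_true), axis=-1)

    model.compile(loss=keras_compatible_loss, optimizer='adam')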

Solution

  • OK, I solved these issues after a lot of research. It didn't fix my overall problem, as the code still does not work, but the issue with the loss function is fixed. I followed this blog: https://medium.com/@Bloomore/how-to-write-a-custom-loss-function-with-additional-arguments-in-keras-5f193929f7a0

    I changed the loss function to this:

    def batch_hard_triplet_loss(embeddings, labels, margin = 0.3, squared=False):
        # Get the pairwise distance matrix
        pairwise_dist = pairwise_distances(embeddings, squared=squared)
        mask_anchor_positive = _get_anchor_positive_triplet_mask(labels)
        mask_anchor_positive = tf.to_float(mask_anchor_positive)
        anchor_positive_dist = tf.multiply(mask_anchor_positive, pairwise_dist)
        hardest_positive_dist = tf.reduce_max(anchor_positive_dist, axis=1, keepdims=True)
        mask_anchor_negative = _get_anchor_negative_triplet_mask(labels)
        mask_anchor_negative = tf.to_float(mask_anchor_negative)
        max_anchor_negative_dist = tf.reduce_max(pairwise_dist, axis=1, keepdims=True)
        anchor_negative_dist = pairwise_dist + max_anchor_negative_dist * (1.0 - mask_anchor_negative)
        hardest_negative_dist = tf.reduce_min(anchor_negative_dist, axis=1, keepdims=True)
        def loss(y_true, y_pred):
            # y_true and y_pred are not used here; everything is captured from the enclosing scope
            # Combine biggest d(a, p) and smallest d(a, n) into final triplet loss
            #triplet_loss = tf.maximum(hardest_positive_dist - hardest_negative_dist + margin, 0.0)
            #triplet_loss = tf.reduce_mean(triplet_loss)
            triplet_loss = k.maximum(hardest_positive_dist - hardest_negative_dist + margin, 0.0)
            triplet_loss = k.mean(triplet_loss) # use keras mean
            return triplet_loss
    
        return loss
    

    And then call it in the model like this:

    batch_loss = batch_hard_triplet_loss(embeddings, input_labels, 0.4, False)
    model = Model(inputs=input_images,  outputs=embeddings)
    model.compile(loss=batch_loss, optimizer='adam')
    

    It now gives me this error:

    tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'input_label' with dtype float and shape [?,99]
     [[{{node input_label}}]]
    

    But hey, we're moving on up. The problem is that Keras only accepts a loss function with two parameters (y_true, y_pred), so you need to return the loss from another function, like I did here.
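
    In general terms the pattern looks like this (a minimal sketch with hypothetical names, not my actual loss): the outer function captures any extra tensors or hyperparameters, and the inner function keeps the (y_true, y_pred) signature that Keras calls:

    from keras import backend as k

    def make_margin_loss(margin=0.3):
        # outer function: captures the extra arguments
        def loss(y_true, y_pred):
            # inner function: the two-argument signature Keras actually calls
            # placeholder expression just to show where the real loss goes
            return k.mean(k.maximum(y_pred - y_true + margin, 0.0))
        return loss

    model.compile(loss=make_margin_loss(0.4), optimizer='adam')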