Tags: python, tensorflow, keras, tensorflow2.0, tf.keras

Keras Compute loss between 2 Ragged Tensors


I have two ragged tensors, defined as follows:

import tensorflow as tf

# Tensor 1
pred_score = tf.ragged.constant([
    [[-0.51760715], [-0.18927467], [-0.10698503]],
    [[-0.58782816], [-0.13076714], [-0.04999146], [-0.1772059], [-0.14299354]]
])
pred_score = tf.squeeze(pred_score, axis=-1)
pred_score_dist = tf.nn.softmax(pred_score, axis=-1)
print(pred_score_dist)
print(pred_score_dist.shape)

>> <tf.RaggedTensor [[0.25664675, 0.35639265, 0.38696054],
                     [0.1358749, 0.21460423, 0.23265839, 0.20486614, 0.21199636]]>
(2, None)


# Tensor 2
actual_score = tf.ragged.constant([
    [3.0, 2.0, 2.0], 
    [3.0, 3.0, 1.0, 1.0, 0.0]
])
actual_score_dist = tf.nn.softmax(actual_score, axis=-1)
print(actual_score_dist)
print(actual_score_dist.shape)

>> <tf.RaggedTensor [[0.5761169, 0.21194157, 0.21194157],
                  [0.4309495, 0.4309495, 0.05832267, 0.05832267, 0.021455714]]>
(2, None)

I want to compute the KL divergence row by row, and then the overall divergence. I tried the following, but it raises an error:

loss = tf.keras.losses.KLDivergence()
batch_loss = loss(actual_score_dist, pred_score_dist)

ValueError: TypeError: object of type 'RaggedTensor' has no len()

Can someone please help me?


Solution

  • Here is one way to make it run:

    def custom_kld(y_value, y_pred):
        # Keras losses can't handle RaggedTensors directly, so convert each
        # ragged input to a dense tensor first. to_tensor() pads shorter rows
        # with zeros; KLDivergence clips values to a small epsilon, so the
        # padded positions contribute nothing to the loss.
        if isinstance(y_value, tf.RaggedTensor):
            y_value = y_value.to_tensor()

        if isinstance(y_pred, tf.RaggedTensor):
            y_pred = y_pred.to_tensor()

        return tf.keras.losses.KLDivergence()(y_value, y_pred)

    custom_kld(actual_score_dist, pred_score_dist)
    <tf.Tensor: shape=(), dtype=float32, numpy=0.41143954>
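
  • Alternatively, since both ragged tensors share the same row lengths, the divergence can be computed directly on the ragged values without padding at all. Elementwise ops and `tf.reduce_sum` are ragged-aware, so the KL formula sum(y_true * log(y_true / y_pred)) can be applied per row. A minimal sketch, with the distribution values copied from the question:

    ```python
    import tensorflow as tf

    actual_score_dist = tf.ragged.constant([
        [0.5761169, 0.21194157, 0.21194157],
        [0.4309495, 0.4309495, 0.05832267, 0.05832267, 0.021455714]
    ])
    pred_score_dist = tf.ragged.constant([
        [0.25664675, 0.35639265, 0.38696054],
        [0.1358749, 0.21460423, 0.23265839, 0.20486614, 0.21199636]
    ])

    # KL divergence per ragged row: sum over each row's own length.
    per_row = tf.reduce_sum(
        actual_score_dist * tf.math.log(actual_score_dist / pred_score_dist),
        axis=-1,
    )  # dense tensor of shape (2,): one divergence per row
    overall = tf.reduce_mean(per_row)  # batch mean, as KLDivergence() reports
    ```

    Because `to_tensor()` pads with zeros that `KLDivergence` clips away, both approaches agree on the overall value here, and the ragged version additionally exposes the per-row divergences.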