
How does TensorFlow SparseCategoricalCrossentropy work?


I'm trying to understand the SparseCategoricalCrossentropy loss function in TensorFlow, but I don't get it. All the other loss functions need outputs and labels of the same shape; this one doesn't.

Source code:

import tensorflow as tf

scce = tf.keras.losses.SparseCategoricalCrossentropy()
Loss = scce(
    tf.constant([ 1,    1,    1,    2   ], tf.float32),
    tf.constant([[1,2],[3,4],[5,6],[7,8]], tf.float32)
)
print("Loss:", Loss.numpy())

The error is:

InvalidArgumentError: Received a label value of 2 which is outside the valid range of [0, 2).  
Label values: 1 1 1 2 [Op:SparseSoftmaxCrossEntropyWithLogits]

How do I provide proper arguments to the loss function SparseCategoricalCrossentropy?


Solution

  • SparseCategoricalCrossentropy and CategoricalCrossentropy both compute categorical cross-entropy. The only difference is in how the targets/labels should be encoded.

    When using SparseCategoricalCrossentropy, each target is the index of the correct category (starting from 0). Your outputs have shape 4x2, which means you have two categories. Therefore, the targets should be a vector of length 4 whose entries are class indices, here either 0 or 1. For example:

    scce = tf.keras.losses.SparseCategoricalCrossentropy()
    Loss = scce(
      tf.constant([ 0,    0,    0,    1   ], tf.float32),
      tf.constant([[1,2],[3,4],[5,6],[7,8]], tf.float32))
    

    This in contrast to CategoricalCrossentropy where the labels should be one-hot encoded:

    cce = tf.keras.losses.CategoricalCrossentropy()
    Loss = cce(
      tf.constant([[1,0],[1,0],[1,0],[0,1]], tf.float32),
      tf.constant([[1,2],[3,4],[5,6],[7,8]], tf.float32))
    

    SparseCategoricalCrossentropy is more efficient when you have many categories, since the integer labels avoid materializing a large one-hot vector for every sample.
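    To see that the two losses really compute the same quantity, here is a small sketch that scores the same 4x2 outputs with both, once with integer labels and once with their one-hot equivalent (built with `tf.one_hot`). It passes `from_logits=True` so the raw scores are interpreted as logits; by default both losses expect probabilities.

    ```python
    import tensorflow as tf

    # The same 4x2 raw scores, treated as logits for two classes.
    logits = tf.constant([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])

    sparse_labels = tf.constant([0, 0, 0, 1])           # class indices
    onehot_labels = tf.one_hot(sparse_labels, depth=2)  # one-hot equivalent

    scce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True)

    sparse_loss = scce(sparse_labels, logits).numpy()
    onehot_loss = cce(onehot_labels, logits).numpy()

    # Both compute the mean categorical cross-entropy over the batch,
    # so the two values agree.
    print("sparse:", sparse_loss)
    print("one-hot:", onehot_loss)
    ```

    The only difference is how the labels are encoded; the sparse variant just skips the one-hot step internally.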