Tags: python, tensorflow, keras, tensorflow2.0, loss-function

How do you gather the elements of y_pred that do not correspond to the true label in a Keras/tf2.0 custom loss function?


Below is a simple example in numpy of what I would like to do:

import numpy as np

y_true = np.array([0,0,1])
y_pred = np.array([0.1,0.2,0.7])

yc = (1-y_true).astype('bool')

desired = y_pred[yc]

>>> desired
array([0.1, 0.2])

So the prediction corresponding to the ground truth is 0.7. I want to operate on an array containing all the elements of y_pred except for the ground-truth element.
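For context, inside a Keras loss y_pred arrives batched, so the 1-D recipe above also needs a reshape to keep one row per sample. A minimal numpy sketch of the batched case (array values are illustrative):

```python
import numpy as np

# Batched version of the same idea: each row is one sample.
y_true = np.array([[0, 0, 1],
                   [1, 0, 0]])
y_pred = np.array([[0.1, 0.2, 0.7],
                   [0.8, 0.1, 0.1]])

# Boolean mask selecting everything except the true-label position.
yc = (1 - y_true).astype('bool')

# Boolean indexing flattens the result to 1D...
flat = y_pred[yc]

# ...so reshape back to (batch, classes - 1) to keep per-sample rows.
desired = flat.reshape(y_pred.shape[0], -1)
print(desired)  # [[0.1 0.2]
                #  [0.1 0.1]]
```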

I am unsure of how to make this work within Keras. Here is a minimal example that reproduces the problem in the loss function. Right now 'desired' isn't used for anything, but it is what I need to work with:

# using tensorflow 2.0.0 and keras 2.3.1

import tensorflow.keras.backend as K
import tensorflow as tf
from tensorflow.keras.layers import Input,Dense,Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Normalize data.
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

# Convert class vectors to binary class matrices.
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)

input_shape = x_train.shape[1:]


x_in = Input(input_shape)

x = Flatten()(x_in)
x = Dense(256, activation='relu')(x)
x = Dense(256, activation='relu')(x)
x = Dense(256, activation='relu')(x)

out = Dense(10, activation='softmax')(x)




def loss(y_true,y_pred):


    yc = tf.math.logical_not(K.cast(y_true, 'bool'))
    desired = tf.boolean_mask(y_pred, yc, axis=1)    # Remove axis=1 and it runs


    CE = tf.keras.losses.categorical_crossentropy(
        y_true,
        y_pred)

    L = CE

    return L



model = Model(x_in,out)

model.compile('adam',loss = loss,metrics = ['accuracy'])


model.fit(x_train,y_train)

I end up getting an error

ValueError: Shapes (10,) and (None, None) are incompatible

Here 10 is the number of categories. The end purpose is to implement this: ComplementEntropy in Keras, where my issue seems to correspond to lines 26-28 of that implementation.
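For reference, that complement-entropy term operates on exactly this masked array: it renormalizes the non-true-class probabilities and maximizes their entropy. A rough numpy sketch of the computation (my reading of the idea, with an `eps` I added for numerical safety; not a verified port of that repository):

```python
import numpy as np

def complement_entropy(y_true, y_pred, eps=1e-7):
    """Negative mean entropy of the predicted distribution over the
    complement (non-true) classes; minimizing it flattens that distribution."""
    yc = (1 - y_true).astype('bool')
    # Gather the non-true-class probabilities, one row per sample.
    comp = y_pred[yc].reshape(y_pred.shape[0], -1)   # (batch, classes - 1)
    # Renormalize over the complement classes.
    comp = comp / (comp.sum(axis=1, keepdims=True) + eps)
    # Negative entropy, averaged over the batch.
    return np.mean(np.sum(comp * np.log(comp + eps), axis=1))
```

A perfectly flat complement distribution gives the minimum value, -log(classes - 1).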


Solution

  • You can remove axis=1 from the boolean_mask call and it will run. And frankly, I don't see why you need axis=1 here.

    def loss(y_true,y_pred):
    
    
        yc = tf.math.logical_not(K.cast(y_true, 'bool'))
        print(yc.shape)
        desired = tf.boolean_mask(y_pred, yc)    #Remove axis=1 and it runs
    
    
        CE = tf.keras.losses.categorical_crossentropy(
            y_true,
            y_pred)
    
        L = CE
    
        return L
    

    This is probably what happens: y_pred is a 2D tensor (N = 2), and the mask yc is also 2D (K = 2). tf.boolean_mask requires axis + K <= N, so passing axis=1 fails (1 + 2 > 2); with the default axis=0 the condition holds, and the result is a flattened 1D tensor of the masked values.
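The rank condition can be checked with a numpy analogue, since numpy's boolean indexing behaves like tf.boolean_mask with the default axis=0:

```python
import numpy as np

y_pred = np.array([[0.1, 0.2, 0.7],
                   [0.8, 0.1, 0.1]])    # rank N = 2
mask = np.array([[True, True, False],
                 [False, True, True]])  # rank K = 2

# tf.boolean_mask requires axis + K <= N; the mask "consumes" K
# dimensions starting at `axis`. With axis=0: 0 + 2 <= 2, valid.
# The K masked dimensions collapse into one, so the result is 1D.
out = y_pred[mask]
print(out.shape)  # (4,)

# With axis=1 the condition would be 1 + 2 <= 2, which fails:
# there is no room for a rank-2 mask to start at the second dimension.
```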