
Keras Custom Losses: Want to track each loss value at the end of each epoch


I would like to check the values of self.losses['RMSE'], self.losses['CrossEntropy'], and self.losses['OtherLoss'] at the end of each epoch. Currently, I can only check the total loss self.losses['total'].

def train_test(self):
    def custom_loss(y_true, y_pred):
        ## (...) Calculate several losses inside this function
        self.losses['total'] = self.losses['RMSE'] + self.losses['CrossEntropy'] + self.losses['OtherLoss']
        return self.losses['total']


    ## (...) Generate Deep learning model & Read Inputs
    logits = keras.layers.Dense(365, activation=keras.activations.softmax)(concat)
    self.model = keras.Model(inputs=[...], outputs=logits)

    self.model.compile(optimizer=keras.optimizers.Adam(0.001),
                       loss=custom_loss)

    self.history = self.model.fit_generator(
        generator=self.train_data,
        steps_per_epoch=train_data_size//FLAGS.batch_size,
        epochs=5,
        callbacks=[TrackTestDataPerformanceCallback(self.losses)])

class TrackTestDataPerformanceCallback(keras.callbacks.Callback):
    def __init__(self, losses):
        super().__init__()
        self.losses = losses

    def on_epoch_end(self, epoch, logs=None):
        for key in self.losses.keys():
            print('Type of loss: {}, Value: {}'.format(key, K.eval(self.losses[key])))

I passed self.losses to the callback TrackTestDataPerformanceCallback in order to print the sub-loss values at the end of each epoch. However, it gives an error message as follows:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'input_3' with dtype float and shape [?,5]
 [[Node: input_3 = Placeholder[dtype=DT_FLOAT, shape=[?,5], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
 [[Node: loss/dense_3_loss/survive_rates/while/LoopCond/_881 = _HostRecv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_360_loss/dense_3_loss/survive_rates/while/LoopCond", tensor_type=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:CPU:0"](^_clooploss/dense_3_loss/survive_rates/while/strided_slice_4/stack_2/_837)]]

I could pass the training data to the callback again and run the predictions myself to track each loss value, but I suspect there is a better solution that I don't know yet.
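For reference, the predict-in-a-callback workaround mentioned here could look something like the following sketch. It assumes TensorFlow 2.x / tf.keras and that each sub-loss is available as a standalone function of (y_true, y_pred); the model, data, and `SubLossCallback` name are toy stand-ins, not the original code:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

class SubLossCallback(keras.callbacks.Callback):
    """Re-evaluates each sub-loss on held-out data at the end of every epoch."""
    def __init__(self, x_val, y_val, sublosses):
        super().__init__()
        self.x_val, self.y_val = x_val, y_val
        self.sublosses = sublosses   # dict: name -> fn(y_true, y_pred)
        self.history = {}            # name -> list of per-epoch values

    def on_epoch_end(self, epoch, logs=None):
        # Extra forward pass over the held-out data, then each sub-loss
        # is computed on the resulting predictions.
        y_pred = self.model.predict(self.x_val, verbose=0)
        for name, fn in self.sublosses.items():
            value = float(fn(tf.constant(self.y_val), tf.constant(y_pred)))
            self.history.setdefault(name, []).append(value)
            print('Type of loss: {}, Value: {}'.format(name, value))

# Toy demo (hypothetical model and data).
model = keras.Sequential([keras.Input(shape=(3,)), keras.layers.Dense(1)])
model.compile(optimizer='adam', loss='mse')
x = np.random.rand(16, 3).astype('float32')
y = np.random.rand(16, 1).astype('float32')
cb = SubLossCallback(x, y, {'RMSE': lambda t, p: tf.sqrt(tf.reduce_mean(tf.square(t - p)))})
model.fit(x, y, epochs=2, verbose=0, callbacks=[cb])
```

Note that this costs one extra forward pass per epoch, which is exactly the computation cost the question is trying to avoid.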

Summary: How can I track the values of several losses computed inside a custom loss function after each epoch?

Constraints: To reduce computation cost, I would like to keep the several losses inside a single custom_loss function for now. But if I have to wrap each loss in its own function, that is OK.


Solution

  • I had to maintain a combined custom_loss for our model, so I found a way to track several sub-losses by passing them to the metrics parameter. Each sub-loss is defined as a separate function.

    def custom_loss(y_true, y_pred):
        return subloss1(y_true, y_pred) + subloss2(y_true, y_pred) + subloss3(y_true, y_pred)
    
    def subloss1(y_true, y_pred):
        ...
        return value1
    
    def subloss2(y_true, y_pred):
        ...
        return value2
    
    def subloss3(y_true, y_pred):
        ...
        return value3
    
    
    self.model.compile(optimizer=keras.optimizers.Adam(0.001),
                       loss=custom_loss,
                       metrics=[subloss1, subloss2, subloss3])
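    As a concrete, runnable illustration of this approach, here is a minimal sketch with a toy model and made-up sub-losses (assuming TensorFlow 2.x / tf.keras; the names subloss1/subloss2 stand in for RMSE, CrossEntropy, etc.):

    ```python
    import numpy as np
    import tensorflow as tf
    from tensorflow import keras

    def subloss1(y_true, y_pred):
        # RMSE-style component
        return tf.sqrt(tf.reduce_mean(tf.square(y_true - y_pred)))

    def subloss2(y_true, y_pred):
        # cross-entropy-style component
        return tf.reduce_mean(keras.losses.binary_crossentropy(y_true, y_pred))

    def custom_loss(y_true, y_pred):
        return subloss1(y_true, y_pred) + subloss2(y_true, y_pred)

    model = keras.Sequential([
        keras.Input(shape=(4,)),
        keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer=keras.optimizers.Adam(0.001),
                  loss=custom_loss,
                  metrics=[subloss1, subloss2])

    x = np.random.rand(64, 4).astype('float32')
    y = np.random.randint(0, 2, size=(64, 1)).astype('float32')
    history = model.fit(x, y, epochs=2, verbose=0)

    # Each sub-loss appears in the training history under its function name,
    # alongside the combined 'loss', with one value per epoch.
    print(sorted(history.history.keys()))
    ```

    Since each metric is averaged and logged per epoch, no custom callback is needed: history.history (and any standard logging callback) already exposes every sub-loss value.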