tensorflow · keras · neural-network · loss-function

Keras custom loss function


I would like to implement the following custom loss function, with argument x as the output of the last layer. Until now I have implemented this function as a Lambda layer, coupled with the built-in Keras MAE loss, but I no longer want to do that.

def GMM_UNC2(self, x):
    tmp = self.create_mr(x) # get mr series
    mr  = k.sum(tmp, axis=1) # sum over time
    tmp = k.square((1/self.T_i) * mr)
    tmp = k.dot(tmp, k.transpose(self.T_i))
    tmp = (1/(self.T * self.N)) * tmp

    f   = self.create_factor(x) # get factor
    std = k.std(f)
    mu  = k.mean(f)
    tmp = tmp + std/mu 

    def loss(y_true, y_pred=tmp):
        return k.abs(y_true-y_pred)

    return loss

self.y_true = np.zeros((1,1))
self.sdf_net = Model(inputs=[self.in_ma, self.in_mi, self.in_re, self.in_si], outputs=w)
self.sdf_net.compile(optimizer=self.optimizer, loss=self.GMM_UNC2(w))
self.sdf_net.fit([self.macro, self.micro, self.R, self.R_sign], self.y_true, epochs=epochs, verbose=1)

The code runs, but it doesn't actually use tmp as an input to loss (as a test, I multiplied tmp by some number, and the loss stayed the same).

What am I doing wrong?


Solution

  • It is not completely clear from your question whether you want to apply the GMM_UNC2 function to the predictions, or whether it should be applied only once to build the loss. If it is the first option, then all of that code should live inside the loss and be applied to y_pred, like

    def GMM_UNC2(self):
    
        def loss(y_true, y_pred):
            tmp = self.create_mr(y_pred) # get mr series
            mr  = k.sum(tmp, axis=1) # sum over time
            tmp = k.square((1/self.T_i) * mr)
            tmp = k.dot(tmp, k.transpose(self.T_i))
            tmp = (1/(self.T * self.N)) * tmp
            f   = self.create_factor(y_pred) # get factor (x is not defined here; use y_pred)
            std = k.std(f)
            mu  = k.mean(f)
            tmp = tmp + std/mu 
            return k.abs(y_true - tmp) # compare the target against the GMM statistic, not the raw prediction
    
        return loss
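
    With this variant the factory no longer needs the output tensor at build time, so (assuming the rest of your setup is unchanged) the compile call would simply be:

        self.sdf_net.compile(optimizer=self.optimizer, loss=self.GMM_UNC2())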
    

    If it is the second option: in general, passing objects as default values in a Python function definition is not a good idea, because defaults are bound once, when the function is defined, and are silently ignored whenever the caller supplies that argument. That is exactly what happens here: you assume the second argument of the loss keeps its default tmp, but Keras calls the loss positionally, as loss(y_true, y_pred), so y_pred always receives the model's predictions (a plain-Python sketch of this is shown after the snippet below). In summary, you could try an explicit comparison inside the loss, like

        def loss(y_true, y_pred=None):  # default must be None for the check below to ever fire
            if y_pred is None:
                y_pred = tmp
            return k.abs(y_true - y_pred)
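
    To see why your original version never picks up tmp: Python binds default argument values once, and a positional call always overrides them. A plain-Python sketch (all names illustrative):

        def make_loss(tmp):
            def loss(y_true, y_pred=tmp):   # same pattern as your GMM_UNC2
                return y_pred
            return loss

        f = make_loss("tmp_value")
        print(f("target"))                  # 'tmp_value' -- default is used
        print(f("target", "model_output"))  # 'model_output' -- the positional call wins,
                                            # which is how Keras invokes the loss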
    

    If what you want is to ignore the predictions and always use tmp, then you can ignore the y_pred argument of the loss and use only tmp, like

        def loss(y_true, _):
            return k.abs(y_true - tmp)
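
    A minimal self-contained sketch of this last pattern, evaluated with the Keras backend (all names are illustrative, and tmp here is just a stand-in for your precomputed GMM term):

        import numpy as np
        from tensorflow.keras import backend as K

        tmp = K.constant(2.0)  # stand-in for the precomputed tensor

        def make_loss(tmp):
            def loss(y_true, _):
                # the model prediction is deliberately ignored
                return K.abs(y_true - tmp)
            return loss

        loss_fn = make_loss(tmp)
        y_true = K.constant(np.zeros((1, 1)))
        y_pred = K.constant(np.ones((1, 1)))    # ignored by the loss
        print(K.eval(loss_fn(y_true, y_pred)))  # [[2.]] -- depends only on tmp

    Keep in mind that for training to do anything useful, tmp has to depend on the network's trainable weights (as it does in your create_mr/create_factor pipeline); a loss that is constant with respect to the weights produces no gradients.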