
Custom loss with weight arrays of batch size in TensorFlow/Keras


I am creating a custom loss function: MAE(y_true, y_pred) weighted by two arrays, a and b, where all four arrays have the same size (10000 samples/timesteps).

from tensorflow.keras import backend as K

def custom_loss(y_true, y_pred, a, b):
    # element-wise MAE, weighted by the two extra arrays
    mae = K.abs(y_true - y_pred)
    loss = mae * a * b
    return loss

Question: How can I feed a and b into the function? Both should be split and shuffled just like y_true and y_pred.
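(For context: a loss function passed to compile only ever receives y_true and y_pred, so the obvious workaround is a closure over a and b, as in the hypothetical sketch below; but the captured arrays would not be split and shuffled along with each batch, which is exactly the problem.)

from tensorflow.keras import backend as K

# Hypothetical closure workaround -- does NOT solve the problem:
# a and b are captured as whole arrays, so they are never split
# and shuffled per batch together with y_true and y_pred.
def make_weighted_mae(a, b):
    def loss(y_true, y_pred):
        return K.abs(y_true - y_pred) * a * b
    return loss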

So far, I am using an LSTM trained on data X of shape (samples x time steps x variables). I tried tf's add_loss to get this done, but passing a and b as further input layers resulted in errors due to mismatched data shapes.

# LSTM
from tensorflow.keras.layers import Input, LSTM
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

input_layer = Input(shape=input_shape)
x = LSTM(20, activation='relu', return_sequences=True)(input_layer)
out = LSTM(1, activation='linear', return_sequences=False)(x)

layer_a = Input(shape=(10000,))
layer_b = Input(shape=(10000,))

model = Model(inputs=[input_layer, layer_a, layer_b], outputs=out)
model.add_loss(custom_loss(input_layer, out, layer_a, layer_b))
model.compile(loss=None, optimizer=Adam(0.01))

# X = data of shape 20 variables x 10000 timesteps; y, a, b = data of shape 10000 timesteps
model.fit(x=[X, a, b], y=y, batch_size=1, shuffle=True)

How do I do this correctly?


Solution

  • As you noted, you have to use add_loss. Remember to pass your loss function all the variables it needs (true values, predictions, and the extra tensors) in the correct format:

    import numpy as np
    from tensorflow.keras.layers import Input, LSTM
    from tensorflow.keras.models import Model
    from tensorflow.keras.optimizers import Adam
    from tensorflow.keras import backend as K

    n_sample = 100
    timesteps = 30
    features = 5

    # dummy data: one target value and one (a, b) weight pair per sample
    X = np.random.uniform(0,1, (n_sample,timesteps,features))
    y = np.random.uniform(0,1, n_sample)
    a = np.random.uniform(0,1, n_sample)
    b = np.random.uniform(0,1, n_sample)
    
    def custom_loss(y_true, y_pred, a, b):
        # element-wise MAE, weighted by the two extra tensors
        mae = K.abs(y_true - y_pred)
        loss = mae * a * b
        return loss
    
    
    input_layer = Input(shape=(timesteps, features))
    x = LSTM(20, activation='relu', return_sequences=True)(input_layer)
    out = LSTM(1, activation='linear')(x)
    
    # the target and the weights enter the graph as extra inputs,
    # so Keras batches and shuffles them together with X
    layer_a = Input(shape=(1,))
    layer_b = Input(shape=(1,))
    target = Input(shape=(1,))

    model = Model(inputs=[target, input_layer, layer_a, layer_b], outputs=out)
    model.add_loss(custom_loss(target, out, layer_a, layer_b))
    model.compile(loss=None, optimizer=Adam(0.01))

    # pass the arrays in the same order as the Input layers above;
    # no separate y is needed since the target is already an input
    model.fit(x=[y, X, a, b], y=None, shuffle=True, epochs=3)
    

    To use the model in inference mode, build a new model that keeps only the data input (the target input is no longer needed, nor are a and b unless you still use them):

    # reuse the trained layers, exposing only the X input
    final_model = Model(model.inputs[1], model.output)
    final_model.predict(X)
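
    As a quick sanity check (a minimal sketch, assuming the arrays defined above), you can recompute the weighted MAE that training minimises from the inference model's predictions:

    # final_model shares its weights with the trained model, so its
    # predictions already reflect the weighted-loss training
    preds = final_model.predict(X).squeeze()

    # the quantity the custom loss minimises, recomputed with numpy
    weighted_mae = np.mean(np.abs(y - preds) * a * b)
    print(weighted_mae)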