Tags: tensorflow, keras, loss-function

Scaling back data in a customized Keras training loss function


I define a customized loss function (RMSE) for my LSTM model as follows:

from tensorflow.keras import backend as K

def RMSE(y_true, y_pred):
    return K.sqrt(K.mean(K.square(y_pred - y_true)))

Everything is fine so far, but the issue is that I scale my input data to the range [-1, 1], so the reported loss is in that scale as well. I want the model to report the training loss in the range of my original data, for example by applying scaler.inverse_transform to y_true and y_pred somehow, but I have had no luck doing it: they are tensors, and scaler.inverse_transform requires a NumPy array.

Any idea how to rescale the data inside the loss function so that the loss values are reported in the original scale?


Solution

  • For sklearn.preprocessing.MinMaxScaler, scaler.inverse_transform essentially uses the scaler.min_ and scaler.scale_ attributes to map scaled data back to the original range, i.e. x_original = (x_scaled - min_) / scale_. An example:

    from sklearn.preprocessing import MinMaxScaler
    import numpy as np

    data = np.array([[-1, 2], [-0.5, 6], [0, 10], [1, 18]])
    scaler = MinMaxScaler()
    data_trans = scaler.fit_transform(data)
    print('transform:\n', data_trans)

    # Manually undo the scaling with the fitted parameters:
    # x_original = (x_scaled - min_) / scale_
    data_inverse = (data_trans - scaler.min_) / scaler.scale_
    print('inverse transform:\n', data_inverse)

    # Output:
    transform:
     [[0.   0.  ]
     [0.25 0.25]
     [0.5  0.5 ]
     [1.   1.  ]]
    inverse transform:
     [[-1.   2. ]
     [-0.5  6. ]
     [ 0.  10. ]
     [ 1.  18. ]]
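
    Continuing the snippet above, a quick check confirms that the manual formula matches what scaler.inverse_transform does and recovers the original data:

    # Both assertions hold: the manual inversion is equivalent to
    # scaler.inverse_transform and reproduces the original array.
    assert np.allclose(data_inverse, scaler.inverse_transform(data_trans))
    assert np.allclose(data_inverse, data)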
    

    So you just need to use them inside the RMSE function to report the loss in the original scale:

    def RMSE_inverse(y_true, y_pred):
        # Map targets and predictions back to the original scale
        # before computing the RMSE.
        y_true = (y_true - K.constant(scaler.min_)) / K.constant(scaler.scale_)
        y_pred = (y_pred - K.constant(scaler.min_)) / K.constant(scaler.scale_)
        return K.sqrt(K.mean(K.square(y_pred - y_true)))
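
    For completeness, here is a minimal sketch of how this metric could be wired into training, assuming a Keras model object named model (e.g. the LSTM from the question) and a scaler already fitted on the target data. The optimizer still minimizes the scaled loss, while RMSE_inverse is reported in the original units:

    # Minimal sketch (model name and optimizer choice are assumptions):
    # optimize on the [-1, 1] scale, report RMSE in the original scale.
    model.compile(optimizer='adam',
                  loss=RMSE,
                  metrics=[RMSE_inverse])

    Note that scaler.min_ and scaler.scale_ are per-feature arrays, so the last dimension of y_true / y_pred must match the number of features the scaler was fitted on for the broadcasting in RMSE_inverse to work.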