Tags: python, tensorflow, neural-network, time-series, metrics

Best way to collect metrics for real-number predictions with negatives?


I have a neural net that predicts real-number values in a time series, and some of these predictions are negative. I am looking for the best way to measure the error or accuracy of these predictions. The TensorFlow docs describe their accuracy metrics as only counting how many predictions were correct, not how correct they were.

Is there a metric that takes the average of the accuracy of each prediction, subtracting values with the wrong sign by how wrong they were? For example:

    Output:     -1,    1,    2,    4
    Predicts:  1.5,    1,    1,    3
    Accuracy: -1.5,    1,   .5,  .75
    Average:  (-1.5 + 1 + .5 + .75) / 4 = 3/16 = 0.1875
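For reference, the accuracy numbers above work out to the ratio prediction / ground truth, averaged over all predictions. Below is a minimal NumPy sketch of that interpretation; the function name signed_ratio_score is hypothetical, and the metric is undefined when a ground-truth value is zero:

    import numpy as np

    def signed_ratio_score(y_true, y_pred):
        # Average of prediction / ground truth; a wrong sign gives a
        # negative contribution, matching the worked example above.
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        return np.mean(y_pred / y_true)

    print(signed_ratio_score([-1, 1, 2, 4], [1.5, 1, 1, 3]))  # 0.1875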


Solution

  • It is standard to start with mean squared error for fitting continuous numerical values.

    For your example:

    Ground Truth:    -1,  1,  2,  4
    Predictions:    1.5,  1,  1,  3
    Error:          2.5,  0, -1, -1
    Squared Error: 6.25,  0,  1,  1
    

    Which leads to a mean squared error of:

    (6.25 + 0 + 1 + 1) / 4 = 2.0625
    

    This will guide optimization to avoid very large errors, but there is no explicit penalty for getting the sign wrong.

    This should be available in your DL library of choice, for example torch.nn.MSELoss, keras.losses.mean_squared_error, or tf.keras.losses.MSE.
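
    A quick way to confirm the value above is to call tf.keras.losses.MSE directly; this is a minimal sketch assuming simple 1-D float tensors:

    import tensorflow as tf

    y_true = tf.constant([-1.0, 1.0, 2.0, 4.0])
    y_pred = tf.constant([1.5, 1.0, 1.0, 3.0])

    # tf.keras.losses.MSE averages the squared errors over the last axis.
    mse = tf.keras.losses.MSE(y_true, y_pred)
    print(float(mse))  # 2.0625

    When training a Keras model, the same loss is typically selected with model.compile(optimizer="adam", loss="mse").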