I'm trying to use machine learning techniques to predict time-to-event. My predictions are probability vectors v of length 20, where v[i] is the probability that the event occurs in i + 1 days (i ranging from 0 to 19).
How can I test the custom loss and metric functions I write?
I'd like to use the following loss and metric to train the model:
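In symbols, with v the true vector and \hat{v} the predicted one, what I'm after is roughly:

L(v, \hat{v}) = \sum_{i=0}^{19} (i + 1)\,(\hat{v}_i - v_i)^2, \qquad M(v, \hat{v}) = \sum_{i=0}^{19} (i + 1)\,(\hat{v}_i - v_i)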
Here's how I tried to implement it:
import numpy as np
from keras import backend as K

def weighted_meansquare(y_true, y_pred):
    # weights 1..20, one per day
    w = K.constant(np.array([i + 1 for i in range(20)]))
    return K.sum(K.square(w * y_pred - w * y_true))

def esperance_metric(y_true, y_pred):
    w = K.constant(np.array([i + 1 for i in range(20)]))
    return K.sum(w * y_true - w * y_true)
I expected the model to minimize the metric (which is basically an expectation, since my model returns a probability vector). Yet when I fit the model, I see that the metric is always 0.0000e+00.
What I'm looking for is:

- some specific tips on how to code these functions
- some general tips about testing keras.backend functions
You have a typo in your definition of esperance_metric: you use y_true - y_true instead of y_pred - y_true, which is why your metric is always 0.
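With that fixed, the metric would read something like:

def esperance_metric(y_true, y_pred):
    # expected-day gap: sum over days of (i + 1) * (pred - true)
    w = K.constant(np.array([i + 1 for i in range(20)]))
    return K.sum(w * y_pred - w * y_true)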
I also see a mistake in weighted_meansquare. You should multiply by w after taking the square, as follows:

K.sum(w * K.square(y_pred - y_true))
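So a corrected version of the loss could look like this, keeping your weight vector as-is:

def weighted_meansquare(y_true, y_pred):
    # weight each day's squared error by (i + 1)
    w = K.constant(np.array([i + 1 for i in range(20)]))
    return K.sum(w * K.square(y_pred - y_true))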
In general, if you want to test backend functions, you can evaluate them with K.eval. For example:
y_pred = K.constant([1.] * 20)
y_true = K.constant([0.] * 20)
print(K.eval(esperance_metric(y_true, y_pred)))
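More generally, a quick way to test such functions is to compare the K.eval output against a plain NumPy reference on small hand-built vectors. A minimal sketch, assuming the corrected esperance_metric above:

import numpy as np
from keras import backend as K

# hand-built example: true event on day 5, uniform prediction
y_true_np = np.zeros(20, dtype="float32")
y_true_np[4] = 1.0
y_pred_np = np.full(20, 1.0 / 20.0, dtype="float32")

# NumPy reference: sum over days of (i + 1) * (pred - true)
w = np.arange(1, 21, dtype="float32")
expected = np.sum(w * (y_pred_np - y_true_np))

# backend value, evaluated the same way as above
actual = K.eval(esperance_metric(K.constant(y_true_np), K.constant(y_pred_np)))
np.testing.assert_allclose(actual, expected, rtol=1e-5)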