I'm trying to implement the LogCosh function as a custom loss function. When I do, the fitting phase returns NaN as the loss. Even weirder, when I run it repeatedly it sometimes produces real loss values for a while, then reaches a point where it starts returning NaN again.
My model:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(81,)),
    tf.keras.layers.Dense(300, activation='relu'),
    tf.keras.layers.Dense(1)
])
My loss function and data fitting:
def custom_loss(y_true, y_pred):
    x = y_true - y_pred
    return tf.math.log(tf.math.cosh(x))

model.compile(optimizer='adam',
              loss=custom_loss,
              metrics=['MeanAbsoluteError'])

model.fit(train_features, train_labels, epochs=3)
This gives NaN:
Train on 21263 samples
Epoch 1/3
21263/21263 [==============================] - 1s 65us/sample - loss: nan - MeanAbsoluteError: nan
Epoch 2/3
21263/21263 [==============================] - 1s 51us/sample - loss: nan - MeanAbsoluteError: nan
Epoch 3/3
21263/21263 [==============================] - 1s 57us/sample - loss: nan - MeanAbsoluteError: nan
Why is the loss NaN, and how do I fix this so the loss actually works?
You don't need to write a custom function for that: LogCosh is already available as a built-in loss function in TF 2.4.
model.compile(optimizer='adam',
              loss=tf.keras.losses.LogCosh(),
              metrics=['MeanAbsoluteError'])
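As for why your custom version blows up: tf.math.cosh overflows float32 once |y_true - y_pred| exceeds roughly 88, and autodiff then evaluates the gradient as sinh(x)/cosh(x) = inf/inf = NaN, which poisons the weights. That would also explain the intermittent behavior you saw: when the initial predictions happen to land close enough to the targets, the loss stays finite for a while. If you still want to write it yourself, here is a minimal sketch (the name stable_logcosh is my own) using the overflow-safe identity log(cosh(x)) = x + softplus(-2x) - log(2), which is the same rewrite the built-in loss relies on:

import tensorflow as tf

def stable_logcosh(y_true, y_pred):
    # log(cosh(x)) = x + softplus(-2x) - log(2); this never
    # evaluates cosh directly, so it cannot overflow float32.
    x = y_pred - y_true
    return tf.reduce_mean(
        x + tf.math.softplus(-2.0 * x) - tf.cast(tf.math.log(2.0), x.dtype),
        axis=-1)

Passing loss=stable_logcosh to model.compile should then train without NaNs, assuming the features and labels themselves are finite.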