I just want to define a loss function and test it. As an example I used the Euclidean distance:
from keras import backend as K

def euc_dist_keras(y_true, y_pred):
    return K.sqrt(K.sum(K.square(y_true - y_pred), axis=-1, keepdims=True))
Since the net has to output a list of (x, y) pairs, I want to test this outside the NN.
So I used:
y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [1., 0.]]
With just:
edk = euc_dist_keras(y_true, y_pred)
I obtained the error: TypeError: unsupported operand type(s) for -: 'list' and 'list'
So I used:
y_true_array = np.array(y_true)
y_pred_array = np.array(y_pred)
edk = euc_dist_keras(y_true_array, y_pred_array)
But I obtained:
Tensor("Sqrt:0", shape=(2, 1), dtype=float64)
instead of the expected value, 1.
How can I obtain the desired value? And will the same euc_dist_keras, used in:
model.compile(loss=euc_dist_keras, optimizer=opt)
behave exactly the same way as in my test?
Thanks!
Edit: with
with tf.Session() as sess: print(edk.eval())
I obtained: [[1.] [1.]]
I expected: 1.
Maybe I made a mistake in the definition? Or is the mean over all samples only taken when the loss is used in model.compile?
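To check the math itself, here is a minimal NumPy sketch of the same computation (euc_dist_np is just a hypothetical NumPy twin of the Keras loss). It shows that the function returns one distance per sample, and that averaging those per-sample values gives the single scalar you expected:

```python
import numpy as np

def euc_dist_np(y_true, y_pred):
    # Same formula as the Keras loss: per-sample Euclidean distance
    return np.sqrt(np.sum(np.square(y_true - y_pred), axis=-1, keepdims=True))

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[1., 1.], [1., 0.]])

per_sample = euc_dist_np(y_true, y_pred)
print(per_sample)         # [[1.] [1.]] -- one distance per sample
print(per_sample.mean())  # 1.0 -- the scalar mean over all samples
```

So the definition is fine: a Keras loss is expected to return a per-sample value, and the reduction to a single scalar loss happens inside Keras during training, not inside your function.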
It sounds like this is TensorFlow v1 code. If so, the Tensor you printed is just a symbolic graph node; you must run the operations within a "session" in order to evaluate them. See the SO post "How to print the value of a Tensor object in TensorFlow?" and the TF v1 documentation on sessions:
https://www.tensorflow.org/api_docs/python/tf/compat/v1/Session
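A minimal sketch of that session-based evaluation, written against the tf.compat.v1 API so it also runs on TF 2.x (the loss is re-expressed with plain TF ops here, but it computes the same thing as the Keras-backend version):

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # assume TF1-style graph mode

def euc_dist(y_true, y_pred):
    # Per-sample Euclidean distance, same formula as the Keras loss
    return tf.sqrt(tf.reduce_sum(tf.square(y_true - y_pred),
                                 axis=-1, keepdims=True))

edk = euc_dist(np.array([[0., 1.], [0., 0.]]),
               np.array([[1., 1.], [1., 0.]]))

with tf.compat.v1.Session() as sess:
    result = sess.run(edk)  # evaluate the graph node to a NumPy array
    print(result)           # [[1.] [1.]]
```

As noted above, the per-sample shape (2, 1) is expected; Keras reduces it to a scalar for you at training time.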