I am playing around with tf.tensordot in TensorFlow. However, I am running into some inconsistencies that are bugging me. Below is a reproducible example:
import numpy as np
import tensorflow as tf

tf.reset_default_graph()
tf.set_random_seed(42)
np.random.seed(42)
X = np.random.rand(150, 196, 268).astype(np.float32)
W = tf.Variable(initial_value=tf.random_normal([268, 22], stddev=0.1))
dotted_150 = tf.tensordot(X, W, axes=[[2], [0]])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_150 = sess.run(dotted_150)
This returns a tensor of shape (150, 196, 22).
tf.reset_default_graph()
tf.set_random_seed(42)
np.random.seed(42)
X = np.random.rand(1, 196, 268).astype(np.float32)
W = tf.Variable(initial_value=tf.random_normal([268, 22], stddev=0.1))
dotted_1 = tf.tensordot(X, W, axes=[[2], [0]])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_1 = sess.run(dotted_1)
This returns a tensor of shape (1, 196, 22).
Now, if we test whether the first element of output_150 is almost equal to the first (and only) element of output_1, the assertion fails with a mismatch between the two arrays:
np.testing.assert_allclose(output_1[0], output_150[0])
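One way to check whether the mismatch is just rounding-level noise is to loosen the tolerance; the rtol/atol values below are arbitrary guesses on my part, not anything prescribed by NumPy or TensorFlow:

np.testing.assert_allclose(output_1[0], output_150[0], rtol=1e-5, atol=1e-6)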
On the other hand, if we do:
np.random.seed(42)
input_150 = np.random.rand(150, 196, 268).astype(np.float32)
np.random.seed(42)
input_1 = np.random.rand(1, 196, 268).astype(np.float32)
np.testing.assert_equal(input_150[0], input_1[0])
We see that the inputs are exactly the same. Given that, I would expect the outputs of tf.tensordot to be the same as well, but they are not.
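My working guess (an assumption on my part, not something I have verified in the TensorFlow source) is that the reduction over the 268-long axis is accumulated in a different order or blocking depending on the batch size, and in float32 that alone can change the last bits. A minimal NumPy sketch of that effect, separate from the TF code above:

import numpy as np

np.random.seed(0)
x = np.random.rand(268).astype(np.float32)
w = np.random.rand(268).astype(np.float32)

# Same mathematical dot product, two different accumulation orders.
blas_dot = np.dot(x, w)           # vectorized / BLAS reduction
naive_dot = np.float32(0.0)
for xi, wi in zip(x, w):          # strict left-to-right accumulation
    naive_dot += xi * wi

print(blas_dot, naive_dot, blas_dot == naive_dot)  # the two may differ in the last bits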
On the same note, here is a tf.tensordot equivalent using tf.reshape and tf.matmul:
tf.reset_default_graph()
tf.set_random_seed(42)
np.random.seed(42)
X = np.random.rand(150, 196, 268).astype(np.float32)
W = tf.Variable(initial_value=tf.random_normal([268, 22], stddev=0.1))
reshaped = tf.reshape(X, [-1, 268])
mulled_150 = tf.reshape(tf.matmul(reshaped, W), [-1, 196, 22])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_150 = sess.run(mulled_150)
tf.reset_default_graph()
tf.set_random_seed(42)
np.random.seed(42)
X = np.random.rand(1, 196, 268).astype(np.float32)
W = tf.Variable(initial_value=tf.random_normal([268, 22], stddev=0.1))
reshaped = tf.reshape(X, [-1, 268])
mulled_1 = tf.reshape(tf.matmul(reshaped, W), [-1, 196, 22])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_1 = sess.run(mulled_1)
np.testing.assert_allclose(output_1[0], output_150[0])
The outcome is exactly the same: a mismatch between the output arrays. How can that be?
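As a sanity check that the two formulations really compute the same thing, one can build both ops on the same X and W in a single graph and compare the results; whether they agree bitwise is exactly what I am unsure about, so the tolerance below is a guess:

tf.reset_default_graph()
tf.set_random_seed(42)
np.random.seed(42)
X = np.random.rand(150, 196, 268).astype(np.float32)
W = tf.Variable(initial_value=tf.random_normal([268, 22], stddev=0.1))
dotted = tf.tensordot(X, W, axes=[[2], [0]])
mulled = tf.reshape(tf.matmul(tf.reshape(X, [-1, 268]), W), [-1, 196, 22])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    d, m = sess.run([dotted, mulled])
np.testing.assert_allclose(d, m, rtol=1e-5, atol=1e-6)  # tolerance is a guess, not a documented bound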
Apparently, if I use tf.float64 precision instead of tf.float32, the results are identical.
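For completeness, by "using tf.float64" I mean roughly the following change to the first block (my paraphrase, only the dtypes differ; np.random.rand already returns float64, so the cast is just for explicitness):

tf.reset_default_graph()
tf.set_random_seed(42)
np.random.seed(42)
X = np.random.rand(150, 196, 268).astype(np.float64)
W = tf.Variable(initial_value=tf.random_normal([268, 22], stddev=0.1, dtype=tf.float64))
dotted_150 = tf.tensordot(X, W, axes=[[2], [0]])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_150 = sess.run(dotted_150)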