
Compare tensor of unsigned int to python int


I want to compare a TensorFlow tensor of unsigned ints (e.g. tf.uint32) to the Python integer 1. How do I do that? All of the following fail

ones = tf.ones((2, 3), dtype=tf.uint32)
ones == 1
ones == [[1, 1, 1], [1, 1, 1]]
ones == tf.constant(1, dtype=tf.uint32)
ones == tf.ones((2, 3), dtype=tf.uint32)  # ?!!! I'm surprised this doesn't work
ones == tf.constant([[1, 1, 1], [1, 1, 1]], dtype=tf.uint32)

with the cryptic error

tensorflow.python.framework.errors_impl.NotFoundError: Could not find valid device for node.
Node:{{node Equal}}
All kernels registered for op Equal :
  device='XLA_GPU'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, ..., DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_COMPLEX128, DT_HALF]
  device='XLA_CPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, ..., DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_COMPLEX128, DT_HALF]
  device='XLA_GPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, ..., DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_COMPLEX128, DT_HALF]
  device='GPU'; T in [DT_BOOL]
  device='GPU'; T in [DT_COMPLEX128]
  device='GPU'; T in [DT_COMPLEX64]
  device='GPU'; T in [DT_INT64]
  device='GPU'; T in [DT_INT16]
  device='GPU'; T in [DT_INT8]
  device='GPU'; T in [DT_INT32]
  device='GPU'; T in [DT_UINT8]
  device='GPU'; T in [DT_DOUBLE]
  device='GPU'; T in [DT_HALF]
  device='GPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_BOOL]
  device='CPU'; T in [DT_STRING]
  device='CPU'; T in [DT_COMPLEX128]
  device='CPU'; T in [DT_COMPLEX64]
  device='CPU'; T in [DT_INT64]
  device='CPU'; T in [DT_INT32]
  device='CPU'; T in [DT_BFLOAT16]
  device='CPU'; T in [DT_INT16]
  device='CPU'; T in [DT_INT8]
  device='CPU'; T in [DT_UINT8]
  device='CPU'; T in [DT_DOUBLE]
  device='CPU'; T in [DT_HALF]
  device='CPU'; T in [DT_FLOAT]
  device='XLA_CPU'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, ..., DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_COMPLEX128, DT_HALF]
 [Op:Equal]

while with tf.int32 it works fine:

>>> tf.ones((2, 3), dtype=tf.int32) == 1
<tf.Tensor: shape=(2, 3), dtype=bool, numpy=
array([[ True,  True,  True],
       [ True,  True,  True]])>

Solution

  • You can compare with uint8. For example, the following snippet gives the correct output:

    ones = tf.ones((2, 3), dtype=tf.uint8)
    ones == 1
    
    <tf.Tensor: shape=(2, 3), dtype=bool, numpy=
    array([[ True,  True,  True],
           [ True,  True,  True]])>
    

    You can also cast any unsupported dtype to a supported one before comparing.

    ones = tf.ones((2, 3), dtype=tf.uint32)
    ones_c = tf.cast(ones, dtype=tf.int32)
    ones_c == 1
    
    <tf.Tensor: shape=(2, 3), dtype=bool, numpy=
    array([[ True,  True,  True],
           [ True,  True,  True]])>
    

    Since this appears to be a bug, here's a GitHub issue tracking the behavior:

    https://github.com/tensorflow/tensorflow/issues/39457

    Update: you can now use tf-nightly, which supports comparing uint16 and uint32 tensors.

    https://pypi.org/project/tf-nightly-gpu/
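
    The casting workaround above can be wrapped in a small helper so callers don't have to remember which integer dtypes the Equal kernel supports. This is a hypothetical sketch (safe_equal is not part of TensorFlow's API); it casts the affected unsigned dtypes to int64, which can represent every uint32 value, and then compares with tf.equal:

    ```python
    import tensorflow as tf

    def safe_equal(x, y):
        """Elementwise equality that works around missing Equal kernels
        for uint16/uint32 in older TensorFlow versions (hypothetical helper)."""
        x = tf.convert_to_tensor(x)
        if x.dtype in (tf.uint16, tf.uint32):
            # int64 covers the full uint16/uint32 range, so the cast is lossless.
            x = tf.cast(x, tf.int64)
            y = tf.cast(y, tf.int64)
        return tf.equal(x, y)

    ones = tf.ones((2, 3), dtype=tf.uint32)
    result = safe_equal(ones, 1)  # bool tensor, all True
    ```

    On recent TensorFlow builds the cast branch is unnecessary, but the helper still returns the same result, so it degrades gracefully.
    
    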