
Why can't you use bitwise & with numba and uint64?


I have the following MWE:

import numba as nb

@nb.njit(nb.uint64(nb.uint64))
def popcount(x): 
      b=0
      while(x > 0):
          x &= x - 1   
          b+=1
      return b


print(popcount(43))

It fails with:

numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<built-in function iand>) found for signature:
 
 >>> iand(float64, float64)
 
There are 8 candidate implementations:
  - Of which 4 did not match due to:
  Overload of function 'iand': File: <numerous>: Line N/A.
    With argument(s): '(float64, float64)':
   No match.
  - Of which 2 did not match due to:
  Operator Overload in function 'iand': File: unknown: Line unknown.
    With argument(s): '(float64, float64)':
   No match for registered cases:
    * (bool, bool) -> bool
    * (int64, int64) -> int64
    * (int64, uint64) -> int64
    * (uint64, int64) -> int64
    * (uint64, uint64) -> uint64
  - Of which 2 did not match due to:
  Overload in function 'gen_operator_impl.<locals>._ol_set_operator': File: numba/cpython/setobj.py: Line 1508.
    With argument(s): '(float64, float64)':
   Rejected as the implementation raised a specific error:
     TypingError: All arguments must be Sets, got (float64, float64)
  raised from /home/user/python/mypython3.10/lib/python3.10/site-packages/numba/cpython/setobj.py:108

During: typing of intrinsic-call at /home/user/python/popcount.py (7)

File "popcount.py", line 7:
def popcount(x): 
    <source elided>
      while(x > 0):
          x &= x - 1   
          ^

What is wrong with using uint64 for this?


The code fails with the same message even if I use:

print(popcount(nb.uint64(43)))

Solution

  • At first, I thought this was NumPy uint64 awkwardness. Turns out it's slightly different Numba uint64 awkwardness.

    By NumPy dtype rules, a standard Python int is handled as the numpy.int_ dtype, which is signed. There's no integer dtype big enough to hold all values of both uint64 and a signed dtype, so in mixed uint64/signed-int operations, NumPy converts both operands to float64!

    You can't use & with floating-point dtypes, so that's where the error would come from with NumPy type handling.
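
    You can see that with plain NumPy (a quick sketch; this is the legacy promotion behaviour of NumPy 1.x, while NumPy 2.0's NEP 50 rules keep the result unsigned):

    import numpy as np

    x = np.uint64(43)
    y = x - 1            # mixed uint64 / Python int operation
    print(y, y.dtype)    # under legacy promotion: 42.0 float64
    # x & (x - 1) then fails, since bitwise_and has no float64 loop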

    It turns out Numba uses different type handling, though. Under Numba rules, an operation on a uint64 and a signed integer produces int64, not float64. But then the assignment:

    x &= x - 1
    

    tries to assign an int64 value to a variable that initially held a uint64 value.
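
    If you want to check that claim, one quick way (a sketch, not output pasted from a run) is to look at the signature Numba infers for a function that only does the subtraction:

    import numba as nb
    import numpy as np

    @nb.njit
    def sub_one(x):
        return x - 1     # uint64 argument minus a Python int literal

    sub_one(np.uint64(43))
    print(sub_one.nopython_signatures)   # expect something like [(uint64,) -> int64]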

    This is the part where Numba gets awkward. By Numba type inference rules,

    A type variable holds the type of each variable (in the Numba IR). Conceptually, it is initialized to the universal type and, as it is re-assigned, it stores a common type by unifying the new type with the existing type. The common type must be able to represent values of the new type and the existing type. Type conversion is applied as necessary and precision loss is accepted for usability reason.

    Rather than converting to uint64 or keeping int64, the Numba compiler tries to unify uint64 and int64 into a common type that can represent both, and no integer type can. This is where Numba accepts precision loss: it types this version of the x variable as float64, and then you get the error about & not working on float64.
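
    If that explanation is right, the fix that falls out of it (a sketch, not benchmarked) is to keep the right-hand side unsigned, so x is only ever re-assigned a uint64 value, for example with an explicit np.uint64 constant:

    import numba as nb
    import numpy as np

    @nb.njit(nb.uint64(nb.uint64))
    def popcount(x):
        b = 0
        while x > 0:
            # uint64 - uint64 stays uint64, so re-assigning x never forces
            # unification with int64 (and from there float64)
            x &= x - np.uint64(1)
            b += 1
        return b

    print(popcount(43))   # 4

    Any other way of keeping that expression unsigned (e.g. casting the result back with np.uint64 before assigning it to x) should compile for the same reason.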