Tags: python, numpy, floating-point, precision, floating-accuracy

robust numpy.float64 equality testing


Is there a robust way to test for equality of floating point numbers, or to generally ensure that floats that should be equal actually do equal each other to within the float's precision? For example, here is a distressing situation:

>>> np.mod(2.1, 2) == 0.1
False

I realize that this occurs because of floating-point rounding error:

>>> np.mod(2.1, 2)
0.10000000000000009

I am familiar with the np.isclose(a, b, rtol, atol) function, but using it makes me uncomfortable: I might get false positives, i.e. be told that values are equal when they really should not be. There is also the documented note that np.isclose(a, b) may differ from np.isclose(b, a), which is even worse.
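(For context, that asymmetry follows from how np.isclose is defined: it checks `|a - b| <= atol + rtol * |b|`, so the relative tolerance scales with the second argument only. A small sketch, with an exaggerated rtol chosen purely for illustration:)

```python
import numpy as np

# np.isclose(a, b) tests |a - b| <= atol + rtol * |b|: the relative
# tolerance is scaled by the *second* argument, so swapping the
# arguments can flip the answer. rtol is exaggerated here on purpose.
print(np.isclose(10.0, 11.05, rtol=0.1, atol=0.0))  # True:  1.05 <= 0.1 * 11.05
print(np.isclose(11.05, 10.0, rtol=0.1, atol=0.0))  # False: 1.05 >  0.1 * 10.0
```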

I am wondering: is there a more robust way to determine equality of floats, without false positives or false negatives, without a == b differing from b == a, and without having to fiddle with tolerances? If not, what are the best practices for choosing tolerances to ensure robust behavior?


Solution

  • You stated that you want the check to return True if the infinite-precision forms of the two values are equal. In that case you need to use an exact, arbitrary-precision representation, for example fractions.Fraction:

    >>> from fractions import Fraction
    >>> Fraction(21, 10) % 2 == Fraction(1, 10)
    True
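One caveat here: constructing a Fraction directly from a float literal reproduces the float's binary approximation exactly, so build your Fractions from integer pairs or strings to get the value you actually mean:

```python
from fractions import Fraction

# Fraction(2.1) captures the exact binary float nearest to 2.1,
# which is *not* 21/10. Strings and integer pairs are exact.
print(Fraction("2.1") == Fraction(21, 10))  # True
print(Fraction(2.1) == Fraction(21, 10))    # False
```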
    

    NumPy also has (albeit slow!) support for arrays containing Python objects:

    >>> import numpy as np
    >>> arr = np.array([Fraction(1, 10), Fraction(11, 10), Fraction(21, 10), 
    ...                 Fraction(31, 10), Fraction(41, 10)])
    >>> arr % 2 == Fraction(1, 10)
    array([ True, False,  True, False,  True], dtype=bool)
    

    You just have to make sure you don't lose the infinite-precision objects along the way (which is easy to do, since several numpy/scipy functions silently convert to float).
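As one way this can happen: mixing a single float into the arithmetic silently coerces every element back to float. This is Fraction's own coercion rule rather than anything numpy-specific, but object arrays inherit it elementwise:

```python
from fractions import Fraction
import numpy as np

arr = np.array([Fraction(1, 10), Fraction(21, 10)])  # dtype=object

# Integer operands keep the elements exact ...
print(type((arr % 2)[0]))    # <class 'fractions.Fraction'>
# ... but one float operand converts every element to float,
# and the infinite precision is gone.
print(type((arr * 0.5)[0]))  # <class 'float'>
```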

    In your case you could even just operate on integers:

    >>> 21 % 20 == 1
    True
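If your values are known to be multiples of some fixed step (0.1 here), the same idea extends to floats by rounding into integer units first. A minimal sketch with a hypothetical helper, assuming the accumulated floating-point error stays below half a step:

```python
import numpy as np

def to_tenths(x):
    """Hypothetical helper: map a value meant to be a multiple of 0.1
    to integer tenths, absorbing small floating-point error."""
    return int(round(x * 10))

# The original failing comparison now succeeds in exact integer arithmetic.
print(to_tenths(np.mod(2.1, 2)) == to_tenths(0.1))  # True
```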