Tags: python, python-3.x, floating-point, floating-point-precision

How to avoid floating point errors?


I was trying to write a function to approximate square roots (I know there's the math module...I want to do it myself), and I was getting screwed over by the floating point arithmetic. How can you avoid that?

def sqrt(num):
    root = 0.0
    while root * root < num:
        root += 0.01
    return root

Running this gives these results:

>>> sqrt(4)
2.0000000000000013
>>> sqrt(9)
3.00999999999998

I realize I could just use round(), but I want to make this really accurate. I want to calculate out to 6 or 7 digits, which wouldn't be possible if I'm rounding. I want to understand how to properly handle floating-point calculations in Python.


Solution

  • This really has nothing to do with Python - you'd see the same behavior in any language using your hardware's binary floating-point arithmetic. First read the "Floating Point Arithmetic: Issues and Limitations" section of the Python tutorial.

    After you read that, you'll better understand that you're not adding one one-hundredth in your code. This is exactly what you're adding:

    >>> from decimal import Decimal
    >>> Decimal(.01)
    Decimal('0.01000000000000000020816681711721685132943093776702880859375')
    

    That string shows the exact decimal value of the binary floating-point ("double precision" in C) approximation to the exact decimal value 0.01. The thing you're really adding is a little bigger than 1/100.
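
    One quick way to see that at the interactive prompt (a small check of my own, not part of the original answer) is to compare the float 0.01 against the true fraction 1/100, both as Decimals:

    >>> from decimal import Decimal
    >>> Decimal(0.01) == Decimal("0.01")   # the float is not exactly 1/100 ...
    False
    >>> Decimal(0.01) > Decimal("0.01")    # ... it is a little bigger
    True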

    Controlling floating-point numeric errors is the field called "numerical analysis", and is a very large and complex topic. So long as you're startled by the fact that floats are just approximations to decimal values, use the decimal module. That will take away a world of "shallow" problems for you. For example, given this small modification to your function:

    from decimal import Decimal as D
    
    def sqrt(num):
        root = D(0)
        while root * root < num:
            root += D("0.01")
        return root
    

    then:

    >>> sqrt(4)
    Decimal('2.00')
    >>> sqrt(9)
    Decimal('3.00')
    

    It's not really more accurate, but may be less surprising in simple examples because now it's adding exactly one one-hundredth.

    An alternative is to stick to floats and add something that is exactly representable as a binary float: values of the form I/2**J. For example, instead of adding 0.01, add 0.125 (1/8) or 0.0625 (1/16).
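
    For example, here's a hypothetical variant of your loop (my own sketch, not code from the original answer) that steps by 1/1024. Both the step and every partial sum k/1024 are exact binary floats, so the repeated addition never rounds:

    def sqrt_dyadic(num, step=2.0 ** -10):
        # step is 1/1024, exactly representable in binary; each partial
        # sum root = k * step is an integer times a power of two, so the
        # additions below introduce no rounding error at all.
        root = 0.0
        while root * root < num:
            root += step
        return root

    >>> sqrt_dyadic(4)
    2.0
    >>> sqrt_dyadic(9)
    3.0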

    Then look up "Newton's method" for computing square roots ;-)
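
    To give a flavor of where that leads, here's a minimal, hypothetical sketch of Newton's iteration for square roots (my own illustration, not code from the original answer). Each step roughly doubles the number of correct digits, so the 6 or 7 digits you asked about arrive after just a handful of iterations:

    def newton_sqrt(num, rel_tol=1e-12):
        # Newton's method applied to f(x) = x*x - num; each update is
        # x -> (x + num / x) / 2, the average of x and num / x.
        if num < 0:
            raise ValueError("no real square root of a negative number")
        if num == 0:
            return 0.0
        guess = num if num >= 1 else 1.0   # any positive starting guess will do
        while abs(guess * guess - num) > rel_tol * num:
            guess = (guess + num / guess) / 2
        return guess

    >>> round(newton_sqrt(2), 7)
    1.4142136
    >>> round(newton_sqrt(9), 7)
    3.0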