Floating point arithmetic: summation versus multiplication of error


I'm trying to understand the floating-point arithmetic behind this simple example. Both snippets are mathematically equivalent, yet the repeated additions accumulate noticeably more error than the single multiplication.

s=0.0
for i in range(10):
    s += 0.1
print(s)
print('%.30f' % s)

0.9999999999999999
0.999999999999999888977697537484

but:

s=0.1
s *= 10
print(s)
print('%.30f' % s)
1.0
1.000000000000000000000000000000

I would like to understand what is going on behind the scenes.

I understand that the binary representation of the decimal 0.1 is never accurate, and that can be verified by:

print(0.1)
print('%.30f' % 0.1)
0.1
0.100000000000000005551115123126

So in a sequence of summations, that remainder of roughly 5.55e-18 keeps accumulating in the variable, and the error grows quickly.

However, when multiplying, I'd expect that the same remainder is also multiplied and it would grow, but that doesn't happen. Why is that? Any sort of optimisation before converting to binary?


Solution

  • It just has to do with how results are rounded (internally, in binary). 0.1 converts to

    0.1000000000000000055511151231257827021181583404541015625

    which is

    0.0001100110011001100110011001100110011001100110011001101 in binary.

    Multiply that by 10 (1010 in binary) and you get

    1.000000000000000000000000000000000000000000000000000001
    

    That is 55 significant bits; rounded to 53 bits it equals 1.0.
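    Both claims can be checked from Python itself; here is a small sketch using the standard decimal module (Decimal(f) shows the exact value a float stores, and float.hex() shows its binary significand):

    ```python
    from decimal import Decimal

    # Exact decimal value of the double nearest to 0.1
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # float.hex() exposes the binary significand directly
    print((0.1).hex())  # 0x1.999999999999ap-4

    # Multiplying by 10 produces a result that rounds to exactly 1.0
    print(0.1 * 10 == 1.0)    # True
    print(Decimal(0.1 * 10))  # 1
    ```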

    Add 0.1 ten times and you'll go through a sequence of roundings (your assumption that the error "keeps adding up to the variable and very quickly it grows" is wrong -- if it were, why would adding 0.1 ten times come out less than 1.0?). If you print the full decimal values after each iteration, you should see

    0.1000000000000000055511151231257827021181583404541015625
    0.200000000000000011102230246251565404236316680908203125
    0.3000000000000000444089209850062616169452667236328125
    0.40000000000000002220446049250313080847263336181640625
    0.5
    0.59999999999999997779553950749686919152736663818359375
    0.6999999999999999555910790149937383830547332763671875
    0.79999999999999993338661852249060757458209991455078125
    0.899999999999999911182158029987476766109466552734375
    0.99999999999999988897769753748434595763683319091796875
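
    That table can be reproduced with the standard decimal module, which reveals the exact value stored after each iteration:

    ```python
    from decimal import Decimal

    s = 0.0
    for i in range(10):
        s += 0.1
        # Decimal(s) shows the exact binary value s holds,
        # not the shortest repr that Python would normally print
        print(Decimal(s))
    ```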
    

    Look at what happens between 0.5 and 0.6, for example. Add the internal binary values for 0.5 (which is exactly 0.1 in binary) and 0.1:

    0.1 + 0.0001100110011001100110011001100110011001100110011001101

    The answer is

    0.1001100110011001100110011001100110011001100110011001101
    

    That is 55 bits; rounded to 53 bits it's

    0.10011001100110011001100110011001100110011001100110011
    

    which in decimal is

    0.59999999999999997779553950749686919152736663818359375

    which is less than the true decimal 0.6 -- though, since 0.5 is exact and the stored 0.1 is slightly greater than 0.1, you might have expected the sum to be greater.
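
    That rounding-down step can be confirmed directly; here is a small check with the standard decimal module. Note that the rounded sum is nevertheless the double closest to 0.6, so comparing it to the float literal 0.6 reports equality:

    ```python
    from decimal import Decimal

    # 0.5 is exact in binary; adding the double nearest 0.1 rounds the sum down
    print(Decimal(0.5 + 0.1))
    # 0.59999999999999997779553950749686919152736663818359375

    # The stored sum sits just below the true decimal 0.6 ...
    print(Decimal(0.5 + 0.1) < Decimal('0.6'))  # True

    # ... yet it is also the double closest to 0.6, so the floats compare equal
    print(0.5 + 0.1 == 0.6)  # True
    ```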