Tags: python, floating-point, ieee-754

The 36-digit patch in the output of print('%.70f' % (0.2 + 0.1))


I understand why 0.1 + 0.2 yields this in Python 3:

>>> 0.1 + 0.2
0.30000000000000004

...but I don't understand why there is a 36-digit stretch of (mostly) non-zero digits in the middle of the output shown below:

>>> print('%.70f' % (0.2 + 0.1))
0.3000000000000000444089209850062616169452667236328125000000000000000000

I do expect there to be a difference between 0.1 + 0.2 and the binary IEEE 754 float nearest to 0.1 + 0.2, but I don't understand why this difference would result in a 36-digit representation (corresponding to roughly 120 bits of precision).

I could understand it if the error had much smaller (< 53-bit) precision, or apparently infinite precision, perhaps due to an artifact of the algorithm that evaluates '%.70f' % (0.2 + 0.1). But I can't make sense of an error that produces the 36-digit patch shown above.


Solution

  • The Python implementation you are using apparently uses IEEE-754 binary64 for floating-point. (This is common, but Python does not mandate it.)

    In this format, numbers are represented as multiples of powers of two, where the specific power of two used depends on the magnitude of the number. (The floating-point format is also described in other ways that are mathematically equivalent to this, such as using a significand with a fixed number of fractional bits instead of an integer multiple as I am using here. This description is easier for the explanation at hand.)
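One quick way to see this "integer multiple of a power of two" view in Python is float.as_integer_ratio(), which returns the stored value as an exact fraction whose denominator is a power of two:

```python
# The exact stored value of 0.1 + 0.2, as a reduced fraction.
# The denominator is a power of two, confirming that the value is
# an integer multiple of a (negative) power of two.
n, d = (0.1 + 0.2).as_integer_ratio()
print(n, d)          # → 1351079888211149 4503599627370496
print(d == 2**52)    # → True
```

(The fraction is reduced, so the denominator shown is 2^52 rather than the 2^54 used in the description above; the two forms describe the same value.)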

For numbers around .3, the power of two used is 2^-54. The result of adding .1 and .2, after rounding to fit in the floating-point format, is 5404319552844596 times 2^-54, or 5404319552844596 / 2^54.
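This specific multiple can be checked directly with the fractions module, which converts a float to its exact rational value:

```python
from fractions import Fraction

s = 0.1 + 0.2
# The stored sum is exactly 5404319552844596 * 2**-54.
print(Fraction(s) == Fraction(5404319552844596, 2**54))  # → True
# float.hex() shows the significand and binary exponent directly.
print(s.hex())  # → 0x1.3333333333334p-2
```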

That number, 5404319552844596 / 2^54, is exactly 0.3000000000000000444089209850062616169452667236328125.
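The exact decimal expansion can be confirmed with the decimal module: Decimal(float) converts the binary value with no rounding. Since 2^-54 = 5^54 / 10^54, the expansion terminates, which is why '%.70f' can only pad the remaining places with zeros:

```python
from decimal import Decimal

s = 0.1 + 0.2
# Every digit of 5404319552844596 / 2**54 -- 52 fractional digits in all.
print(Decimal(s))
# → 0.3000000000000000444089209850062616169452667236328125
# Beyond those 52 digits, '%.70f' appends 18 zeros to reach 70 places.
print('%.70f' % s)
```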