Tags: python, python-decimal

Why is the precision accurate when Decimal() takes a string instead of a float in Python?


Why are these two results different, and what causes the difference?

>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.1') + Decimal('0.1') - Decimal('0.3')
Decimal('0.0')

>>> Decimal(0.1) + Decimal(0.1) + Decimal(0.1) - Decimal(0.3)
Decimal('2.775557561565156540423631668E-17')

Solution

  • This is explained well in the decimal module's source code, in the docstring of Decimal.from_float(): when the input is a float, the constructor performs the same exact conversion as the class method Decimal.from_float():

    Note that Decimal.from_float(0.1) is not the same as Decimal('0.1'). Since 0.1 is not exactly representable in binary floating point, the value is stored as the nearest representable value which is 0x1.999999999999ap-4. The exact equivalent of the value in decimal is 0.1000000000000000055511151231257827021181583404541015625.
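
    To illustrate (a minimal REPL check; assumes Python 3.2+, where Decimal() accepts floats directly):

    >>> (0.1).hex()                     # the binary value actually stored for 0.1
    '0x1.999999999999ap-4'
    >>> Decimal(0.1) == Decimal.from_float(0.1)
    True
    >>> Decimal(0.1)                    # that stored value, converted exactly
    Decimal('0.1000000000000000055511151231257827021181583404541015625')
    >>> Decimal('0.1') == Decimal(0.1)  # the string form is the exact decimal 0.1
    False

    So when exactness matters, build Decimal values from strings, e.g. Decimal('0.1'), rather than from floats.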