Tags: python, floating-point, decimal, precision

Confused about Python's decimal library


I am doing quantitative analysis in Python. Comparing results with a colleague, we found the typical float discrepancies around the 17th decimal place, caused by differences in operation order.

Looking for a solution, I found the decimal library, read the docs, and ran some examples:

from decimal import Decimal

# This uses decimal fixed-point arithmetic, resulting in what we would expect
Decimal('0.1') + Decimal('0.2') == Decimal('0.3')  # True

# This uses binary floating-point arithmetic, and thus can never reach
# the exact number in base 2: the famous 0.30000000000000004
x = 0.1 + 0.2

# But what is this doing? It outputs 0.3000000000000000166533453694, which
# suggests floating point is involved, but with a different error
x = Decimal(0.1) + Decimal(0.2)

So at this point I am really confused: why is this error different from the vanilla Python float error, and if I want to use this library alongside Pandas, do I have to cast every single operand to str?


Solution

  • When you use Decimal(0.1) you don't get exactly 0.1: the constant 0.1 that you fed to Decimal is already off, due to the usual floating-point inaccuracies - see Is floating point math broken? It's an exact representation of an inexact number. When you use Decimal('0.1') you do get an exact 0.1, because the conversion from the string is performed by Decimal itself.
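
    Here is a minimal sketch of that difference; printing the values shows exactly what each constructor stored:

        from decimal import Decimal

        # Decimal(0.1) captures the nearest binary double to 0.1,
        # exactly, digit for digit
        print(Decimal(0.1))
        # 0.1000000000000000055511151231257827021181583404541015625

        # Decimal('0.1') is parsed by Decimal itself, so it is exact
        print(Decimal('0.1'))
        # 0.1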

    The conversion from float to str applies a tiny bit of rounding (Python picks the shortest string that round-trips to the same float); it may fix things, or it may not.
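
    For example, str(0.1) happens to round-trip to '0.1', so Decimal(str(0.1)) is exact, but str(0.1 + 0.2) is '0.30000000000000004', so an error already baked into the float survives the trip. As for Pandas: the values in a float64 column are binary floats, so each one has to be converted; a minimal sketch (with made-up sample values) is mapping a column to Decimal objects:

        from decimal import Decimal
        import pandas as pd

        # Casting via str recovers the intended digits when the float
        # prints as the value you meant...
        print(Decimal(str(0.1)))        # 0.1
        # ...but it cannot undo an error the float already carries
        print(Decimal(str(0.1 + 0.2)))  # 0.30000000000000004

        # With pandas, convert each value; the column dtype becomes object
        s = pd.Series([0.1, 0.2, 0.3])
        s_dec = s.map(lambda v: Decimal(str(v)))
        print(s_dec[0] + s_dec[1] == s_dec[2])  # True

    Note that object-dtype Decimal columns give up NumPy's vectorized arithmetic, so the cast is usually worth it only when exactness matters more than speed.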