I'm using the `decimal` module to avoid float rounding errors. In my case the values are money, so I want two decimal places. To do that I set `decimal.getcontext().prec = 2`, but then I get some surprising results, which makes me think I'm missing something. In this code the first assertion passes, but the second fails:
```python
from decimal import getcontext, Decimal

assert Decimal("3000") + Decimal("20") == 3020
getcontext().prec = 2
assert Decimal("3000") + Decimal("20") == 3020  # fails
```
Since `3000` and `20` are integers, I was expecting this to hold, but I get `3000` instead. Any ideas on what is happening?
`decimal` does not implement fixed-point arithmetic directly. It implements base-10 floating-point arithmetic. The precision (`prec`) is the total number of significant digits retained, and has nothing to do with the position of the radix point.
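A quick sketch of the difference (the unary plus applies the current context's rounding to a value):

```python
from decimal import getcontext, Decimal

getcontext().prec = 2  # 2 significant digits, NOT 2 decimal places

# Both values get rounded to two significant digits;
# where the decimal point sits is irrelevant.
print(+Decimal("1.2345"))  # 1.2     (coincidentally looks like "two places")
print(+Decimal("12345"))   # 1.2E+4  (the integer part is truncated too)
```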
Try displaying the computed value in your last example:
```python
>>> Decimal("3000") + Decimal("20")
Decimal('3.0E+3')
```
The exact result (3020) is rounded back to the 2 most significant digits (because you set `prec` to 2), so the trailing "20" is thrown away.
If, e.g., you want 2 places after the decimal point, you'll have to arrange for that yourself. Search the `decimal` docs for the FAQ entry "Once I have valid two place inputs, how do I maintain that invariant throughout an application?".
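The approach the docs suggest is `quantize`. A minimal sketch (the `to_money` helper name is mine; this assumes the default context precision of 28, not the `prec = 2` from the question):

```python
from decimal import Decimal, ROUND_HALF_UP

TWOPLACES = Decimal("0.01")

def to_money(value):
    """Round a value to exactly two decimal places (hypothetical helper)."""
    return Decimal(value).quantize(TWOPLACES, rounding=ROUND_HALF_UP)

total = to_money("3000") + to_money("20")
print(total)  # 3020.00
```

Quantizing the inputs (and the result of any multiplication or division) keeps the two-place invariant without touching the context precision.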