I just learned about the Decimal class in Python, and I am having trouble controlling the precision of decimal numbers. Code:
from decimal import *

def main():
    getcontext().prec = 50
    print Decimal(748327402479023).sqrt()
    print Decimal(2).sqrt()

if __name__ == '__main__':
    main()
The output:
27355573.51764029457632865944595074348085555311409538
1.414213562373095048801688724209698078569671875376948
Instead of showing 50 decimal digits, it shows 50 digits in total. Is there a way to fix this?
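(For anyone else hitting this: the context precision counts *significant* digits, not digits after the decimal point, which is why both outputs have 50 digits in total. A minimal Python 3 illustration:)

```python
from decimal import Decimal, getcontext

# prec limits significant digits, not decimal places
getcontext().prec = 5
print(Decimal(2).sqrt())    # 1.4142  -> 5 significant digits
print(Decimal(200).sqrt())  # 14.142  -> still 5 significant digits, fewer decimals
```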
Edit:
I am solving a problem that requires very high floating-point accuracy; that's why I have to use Python. If the answer to the problem is, for example, 0.55, I should print 0.55 followed by 48 zeroes...
One way of doing it is to set prec much higher than you need, then use round(). Python 3:
>>> getcontext().prec=100
>>> print(round(Decimal(748327402479023).sqrt(), 50))
27355573.51764029457632865944595074348085555311409538175920
>>> print(round(Decimal(7483).sqrt(), 50))
86.50433515148243713481567854198645604944135142208905
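Applied to your 0.55 example (here I'm assuming an input like Decimal('0.3025'), whose square root is exactly 0.55, as a stand-in for your actual problem), round() pads the result out to the requested 50 decimal places:

```python
from decimal import Decimal, getcontext

getcontext().prec = 100
# sqrt(0.3025) is exactly 0.55; round() quantizes it to 50 decimal places,
# padding with trailing zeroes
answer = round(Decimal('0.3025').sqrt(), 50)
print(answer)  # 0.55 followed by 48 zeroes
```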
For Python 2, do:
>>> print Decimal(748327402479023).sqrt().quantize(Decimal('1E-50'))
27355573.51764029457632865944595074348085555311409538175920
The value for prec depends on how large your numbers are: it has to cover all the digits of the quantized result (roughly the log10 of your number plus the 50 decimal places), or you will get a decimal.InvalidOperation: quantize result has too many digits for current context exception.
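To make that failure mode concrete (a sketch: the root here has 8 integer digits, so quantizing to 50 decimal places needs 58 significant digits in total):

```python
from decimal import Decimal, getcontext, InvalidOperation

getcontext().prec = 50  # too small: the quantized result needs 8 + 50 = 58 digits
try:
    Decimal(748327402479023).sqrt().quantize(Decimal('1E-50'))
except InvalidOperation as exc:
    print('quantize failed:', exc)

getcontext().prec = 58  # just enough for 8 integer digits + 50 decimal places
print(Decimal(748327402479023).sqrt().quantize(Decimal('1E-50')))
```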