In a long program of about 600 lines, I have one part of the code that calculates something weird.
idl = 0
print type(dl), dl
idl = int(dl*10)+1
print idl
This returns:
<type 'float'> 0.1
1
This calculation is done inside a function definition in my code. This is obviously not the result I expected. The weird thing is that when I copy the code above into a separate Python file:
idl = 0
dl = 0.1
print type(dl), dl
idl = int(dl*10)+1
print idl
I get:
<type 'float'> 0.1
2
What could be the origin of this problem? I've extracted these parts to keep the example simple, but I can give more information if needed.
Eric Postpischil's comment is on point.
Python tries to hide some of the ugliness of floating-point numbers from the casual user, and most of the time that's fine; sometimes you get surprised. Many decimal numbers cannot be represented precisely in binary form -- they become repeating or extremely long binary fractions. Python's display code converts the stored binary value back to decimal, rounding it off to a short, readable form. That is why your print shows 0.1 even though the value of dl in your program is evidently stored a hair below 0.1: dl*10 then comes out just under 1.0, int() truncates toward zero, and you get 0 + 1 = 1. The literal 0.1 happens to be stored a hair above 0.1, so 0.1*10 rounds to exactly 1.0 and you get 2.
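Here's a minimal sketch of the effect. The expression 0.3 - 0.2 is just an assumed stand-in for whatever computation produces dl in your program; any arithmetic that lands a hair below 0.1 behaves the same way:

dl = 0.3 - 0.2          # stored as roughly 0.09999999999999998
print type(dl), dl      # <type 'float'> 0.1 -- str() rounds for display
print repr(dl)          # 0.09999999999999998 -- the actual stored value
print int(dl * 10) + 1  # dl*10 is just under 1.0; int() truncates it to 0, so: 1

dl = 0.1                # the literal is stored a hair *above* 0.1
print int(dl * 10) + 1  # 0.1*10 rounds to exactly 1.0, so: 2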
Here's a solid reference on the topic: http://docs.python.org/2/tutorial/floatingpoint.html
The code snippet you provided is correct as far as it goes, but you may wish to use int(round(dl*10)) instead: round() snaps the almost-integer product to the nearest whole number before int() truncates it.
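For example, with the same assumed stand-in value as above:

dl = 0.3 - 0.2                 # again a hair below 0.1
print int(dl * 10) + 1         # 1 -- plain truncation loses the intended result
print int(round(dl * 10)) + 1  # 2 -- round() lifts 0.999... up to 1.0 first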
If the imprecise representation of decimal values in floating point causes you consternation -- for instance, if you are working with money -- check out the decimal module: http://docs.python.org/2/library/decimal.html#module-decimal
The decimal module provides excellent facilities for doing decimal math, but it is somewhat more cumbersome to use than floating point.
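A quick sketch of the same calculation with decimal (again using the assumed 0.3 - 0.2 stand-in; note that you construct Decimals from strings, not floats, so you don't import the binary error):

from decimal import Decimal

dl = Decimal('0.3') - Decimal('0.2')  # exactly 0.1 -- no binary rounding error
print dl                              # 0.1
print int(dl * 10) + 1                # 2, as intended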
For what it's worth, this problem is not unique to Python. You will find the same behavior in most programming languages (and a similar decimal-math library to work around it).