I am new to Python 3.6.0. I am trying to divide two numbers whose quotient is very large. However, using
return ans1 // ans2
produces 55347740058143507128 while using
return int(ans1 / ans2)
produces 55347740058143506432.
Which is more accurate and why is that so?
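For reference, here is a minimal reproduction with made-up operands (my real ans1 and ans2 are not shown) that exhibits the same two results on CPython:
>>> ans1 = 55347740058143507128 * 7   # made-up operands whose exact quotient is the value above
>>> ans2 = 7
>>> ans1 // ans2                      # floor division: pure integer arithmetic
55347740058143507128
>>> int(ans1 / ans2)                  # true division: goes through a float
55347740058143506432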
The first one (floor division with //) is more accurate, since it computes the exact integer quotient.
The second one computes the intermediate result as a float. Floats have limited precision (a 53-bit mantissa), whereas this result needs 66 bits to be represented exactly, so the lowest bits are lost to rounding.
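You can check both bit counts directly: bit_length() gives the number of bits needed to hold the exact quotient, and sys.float_info.mant_dig gives the mantissa size of a Python float:
>>> import sys
>>> (55347740058143507128).bit_length()   # bits needed for the exact result
66
>>> sys.float_info.mant_dig               # mantissa bits available in a float
53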
If we look at the hex representation of both results:
>>> hex(55347740058143507128)
'0x3001aac56864d42b8'
>>> hex(55347740058143506432)
'0x3001aac56864d4000L'
we can see that the least-significant bits of the exact result, the ones that do not fit in the 53-bit mantissa, were all rounded away to zero.
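Since the rounded result here is exactly the original value with its lowest 13 bits (66 - 53) cleared, you can reproduce the rounding with plain integer bit operations; the discarded part, 0x2b8 = 696, is less than half of 2^13, so the value was rounded down rather than up:
>>> exact = 55347740058143507128
>>> (exact >> 13) << 13            # drop the 13 bits that don't fit into 53
55347740058143506432
>>> exact & 0x1fff                 # the discarded low bits: 0x2b8 == 696
696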
One way to see the rounding directly, without any complications brought about by the division, is:
>>> int(float(55347740058143507128))
55347740058143506432
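If the exact value matters, one way to avoid the rounding is to keep the computation in integer (or exact rational) arithmetic instead of going through a float; a minimal sketch using the same value as above:
>>> from fractions import Fraction
>>> big = 55347740058143507128
>>> (big * 7) // 7                  # integer floor division stays exact
55347740058143507128
>>> int(Fraction(big * 7, 7))       # Fraction does exact rational arithmetic, no float involved
55347740058143507128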