I'm trying to write a program that finds the sum of all multiples of 3 below a given n. So far I've come up with this solution (using the formula for an arithmetic series):
```python
def sumof3(n):
    n = int((n - 1) / 3)
    return int((n / 2) * (6 + 3 * (n - 1)))
```
It works pretty well but for some reason starts to fail with large numbers. For example, for n = 232471924 the return value is 9007199280122284, while it should be 9007199280122283.

Can anybody tell me where the bug is?
Python has arbitrary-precision integers, but its floats have standard limited (double) precision. In Python 3, dividing two integers with `/` produces a float, which means, for example, that
```python
>>> 10**50/10**25
1e+25
>>> int(10**50/10**25)
10000000000000000905969664
```
but if we work purely with integers using `//`, we get:
```python
>>> 10**50//10**25
10000000000000000000000000
```
In your code, both `(n-1)/3` and `n/2` produce float output, which means you've only got about 15–17 significant decimal digits of precision. If we rework your function to work purely with integers:
```python
def sumof3b(n):
    n = (n - 1) // 3
    return (6 + 3 * (n - 1)) * n // 2
```
Then we get agreement for the low values:
```python
>>> all(sumof3(n) == sumof3b(n) for n in range(10**7))
True
```
but at high values the integer version preserves full precision:
```python
>>> n = 232471924
>>> sumof3(n)   # bad
9007199280122284
>>> sumof3b(n)  # good
9007199280122283
```
[Here we can reorder the integer operations so that no fractional data is ever lost, but I sometimes find the `fractions` module comes in handy too.]
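For illustration, here is a minimal sketch of that `fractions` approach (the name `sumof3_frac` is my own, hypothetical): `Fraction` does exact rational arithmetic, so the division by 2 in the series formula never rounds.

```python
from fractions import Fraction

def sumof3_frac(n):
    # Hypothetical variant: same arithmetic-series formula,
    # 3 + 6 + ... + 3k = 3k(k+1)/2, with an exact rational division.
    k = (n - 1) // 3                      # number of multiples of 3 below n
    return int(Fraction(3 * k * (k + 1), 2))
```

Since k*(k+1) is always even, the `Fraction` is always a whole number, so the final `int()` conversion loses nothing.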