python-3.x · greatest-common-divisor · largenumber · lcm

Python code showing wrong output when computing least common multiple for large integers


I implemented an LCM computation using the following algorithm in Python. My code is given below:

# Uses python3
import sys

def gcd_efficient(a, b):
    # earlier brute-force attempt, left commented out:
    # current_gcd = 1
    # for d in range(2, min(a, b) + 1):
    #     if a % d == 0 and b % d == 0:
    #         if d > current_gcd:
    #             current_gcd = d
    # return current_gcd

    # Euclidean algorithm
    remainder = max(a, b) % min(a, b)
    newMax = min(a, b)
    if remainder == 0:
        return newMax

    return gcd_efficient(newMax, remainder)


def lcm_efficient(a, b):
    # earlier brute-force attempt, left commented out:
    # for l in range(1, a*b + 1):
    #     if l % a == 0 and l % b == 0:
    #         return l

    product = a*b
    gcd = gcd_efficient(a, b)
    lcm = product/gcd
    return int(lcm)

print(lcm_efficient(226553150, 1023473145))

I have used the above code to compute the LCM of large integers given as input.

However, I find that for some large integers the result is wrong. For example, when the input is 226553150 1023473145, the output from the Python console is 46374212988031352, but the correct output is 46374212988031350.

The computed value differs from the correct one by just 2. What confuses me is why the Python interpreter gives a wrong result when executing the above code.

Can this error be eliminated?

Waiting for the answers!!


Solution

  • Your problem is here:

    lcm = product/gcd
    

    It should be

    lcm = product//gcd
    

    to guarantee integer division. Your translation from C++ assumed that the / operator works identically in C++ and Python. In Python 2 it does, but not in Python 3. In Python 3, / is true (floating-point) division, so the quotient is produced as a float. A Python float carries only 53 bits of precision, which is not enough to represent 46374212988031350 exactly, so the result is rounded to the nearest representable value, 46374212988031352. Your return statement then converts that float back to int, which masks the problem instead of fixing it.
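
    Here is a minimal sketch of the difference for this particular input. The GCD of 226553150 and 1023473145 works out to 5 (which is what gcd_efficient returns), so the two kinds of division can be compared directly:

    a, b = 226553150, 1023473145
    product = a * b                  # 231871064940156750, an exact Python int
    g = 5                            # gcd(226553150, 1023473145)

    print(product // g)              # 46374212988031350  (exact integer division)
    print(int(product / g))          # 46374212988031352  (float quotient rounded, off by 2)

    # Why: a float has 53 bits of precision, and the true quotient exceeds 2**53,
    # so it cannot be stored exactly and is rounded to the nearest representable value.
    print(46374212988031350 > 2**53)        # True
    print(float(46374212988031350))         # a float equal to 46374212988031352.0

    With //, the operands and the result all stay arbitrary-precision ints, so no rounding can occur no matter how large the inputs get.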