Tags: python, python-3.x, time-complexity, big-o, exponentiation

Is the following divide-and-conquer recursive algorithm for exponentiation more efficient than the iterative one for large numbers?


I have the following two algorithms. My analysis says that both of them are O(m^2 4^n), i.e. they are equivalent for big numbers. Is this right? Note that m and n are the numbers of bits of x and y, respectively.

def pow1(x, y):
    # Iterative: multiply x into the running product (y - 1) times.
    if y == 0:
        return 1
    temp = x
    while y > 1:
        y -= 1
        temp *= x
    return temp

def pow2(x, y):
    # Divide and conquer: square pow2(x, y // 2), times x if y is odd.
    if y == 0:
        return 1
    temp = pow2(x, y // 2)
    if y & 1:
        return temp * temp * x
    return temp * temp
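
Both functions agree with Python's built-in pow; as a quick sanity check (assuming the two definitions above are in scope):

for base in (2, 3, 10):
    for exp in (0, 1, 2, 7, 31):
        assert pow1(base, exp) == pow2(base, exp) == pow(base, exp)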



Solution

  • Whether the divide-and-conquer algorithm is more efficient depends on a ton of factors. In Python it is more efficient.

    Your analysis is right: assuming standard grade-school multiplication, divide-and-conquer does fewer but more expensive multiplications, and asymptotically that makes the total runtime a wash. Constant factors probably matter -- I'd still guess divide-and-conquer would be faster, because the majority of its work happens in optimized C rather than in Python loop overhead, but that's just a hunch, and it's hard to test under that assumption given that Python doesn't use an elementary multiplication algorithm (a rough timing sketch is included below).

    Before going further, note that big-integer multiplication in Python is little-o of m^2. In particular, CPython uses Karatsuba multiplication, which is around O(m^0.58 n) for an m-bit integer times an n-bit integer with m <= n.

    The small terms that still use ordinary multiplication won't matter asymptotically, so focusing on the large ones and substituting this multiplication cost, your iterative algorithm is around O(4^n m^1.58) and your divide-and-conquer solution is around O(3^n m^1.58). The 3^n comes from the final squaring of a roughly 2^(n-1) m-bit number, which dominates the recursion, and (2^n)^1.58 is about 3^n; the rough cost-model sketch below illustrates the gap.
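
    To make the operation-count argument concrete, here is a back-of-the-envelope cost model (a sketch, not a measurement). It assumes multiplying an a-bit by a b-bit integer costs about min(a, b)^0.585 * max(a, b) units, and the helper names karatsuba_cost, cost_pow1 and cost_pow2 are made up for this illustration; the printed ratio should grow roughly like (4/3)^n.

    # Rough cost model, not a benchmark. Assumption: Karatsuba-style
    # multiplication of an a-bit by a b-bit integer costs about
    # min(a, b)**0.585 * max(a, b) "units".

    def karatsuba_cost(a_bits, b_bits):
        small, big = sorted((a_bits, b_bits))
        return small ** 0.585 * big

    def cost_pow1(m, y):
        # pow1 multiplies the m-bit x into a running product y - 1 times;
        # after k multiplications the product holds about k * m bits.
        return sum(karatsuba_cost(m, k * m) for k in range(1, y))

    def cost_pow2(m, y):
        # pow2 squares pow2(x, y // 2), then multiplies by x if y is odd.
        if y == 0:
            return 0
        half_bits = (y // 2) * m
        total = cost_pow2(m, y // 2) + karatsuba_cost(half_bits, half_bits)
        if y & 1:
            total += karatsuba_cost(2 * half_bits, m)
        return total

    m = 64                        # bits in x (arbitrary)
    for n in range(4, 16):        # n = bits in y
        y = 2 ** n - 1            # worst case: every bit of y set
        print(n, round(cost_pow1(m, y) / cost_pow2(m, y), 1))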
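
    As for the hunch that the divide-and-conquer version wins in practice, a quick timing sketch with timeit is below. It assumes pow1 and pow2 from the question are defined in the same module, and the exact numbers will of course vary with the machine and Python version.

    import timeit

    x, y = 3, 10_000    # an arbitrary base and a fairly large exponent

    t_iter = timeit.timeit(lambda: pow1(x, y), number=10)
    t_dc = timeit.timeit(lambda: pow2(x, y), number=10)
    print(f"pow1 (iterative):          {t_iter:.3f} s")
    print(f"pow2 (divide and conquer): {t_dc:.3f} s")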