Tags: python, time-complexity, runtime-error, simplify, pow

Efficient graph size calculation with power function


I am looking to improve the following simple function (written in Python), which calculates the maximum size of a specific graph:

def max_size_BDD(n):
    i = 1
    size = 2
    while i <= n:
        size += min(pow(2, i-1), pow(2, pow(2, n-i+1)) - pow(2, pow(2, n-i)))
        i += 1
        print(str(i) + " // " + str(size))
    return size

If I give it n = 45 as input, the process gets killed (probably because it takes too long; I don't think it is a memory thing, right?). How can I redesign this algorithm so that it can handle larger inputs?
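
Just to give a sense of the scale (a rough back-of-the-envelope estimate, not measured): in the very first iteration (i = 1), the second argument of the min alone is an integer with about 2**45 bits.

n = 45
bits = pow(2, n)                 # bit count of pow(2, pow(2, n)), the huge term at i = 1
print(bits // (8 * 10**9))       # roughly 4398 GB just to represent that single value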


Solution

  • My proposal: while the original function starts to run into trouble at around n = 10, this version has practically no limitations (even for n = 100000000 it stays below 1 s).

    def exp_base_2(n):
        return 1 << n
    
    
    def max_size_bdd(n):
        # find i at which the min branch switches
        start_i = n
        while exp_base_2(n - start_i + 1) < start_i:
            start_i -= 1
    
        # evaluate all to that point
        size = 1 + 2 ** start_i
    
        # evaluate remaining terms (in an uncritical range of n - i)
        for i in range(start_i + 1, n + 1):
            val = exp_base_2(exp_base_2(n - i))
            size += val * (val - 1)
            print(f"{i} // {size}")
        return size
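
    A quick cross-check of the rewrite (just a rough sanity test, only feasible for small n because it recomputes the huge powers): a direct translation of the original formula should give the same results as max_size_bdd above.

    def max_size_bdd_reference(n):
        # direct translation of the original formula; only feasible for small n
        size = 2
        for i in range(1, n + 1):
            size += min(pow(2, i - 1),
                        pow(2, pow(2, n - i + 1)) - pow(2, pow(2, n - i)))
        return size

    for n in range(1, 8):
        # max_size_bdd prints its progress; the asserts still hold
        assert max_size_bdd(n) == max_size_bdd_reference(n)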
    

    Remarks:

    • Core idea: avoid computing the huge powers of 2 explicitly; once you know which branch of the min wins, they are not needed (see the short identity check after these remarks).
    • I did all this in a rush; I may add more explanation later if anyone is interested, and could then also verify the new implementation more thoroughly.
    • The effect of exp_base_2 (a bit shift instead of pow) should be negligible compared to the algebraic optimization; I added it before doing the actual analysis.
    • Maybe a complete closed-form solution is possible; I did not invest the time to investigate further.
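
    To spell out the algebra behind the val * (val - 1) line and the 1 + 2 ** start_i shortcut: since 2**(n-i+1) = 2 * 2**(n-i), the second branch pow(2, pow(2, n-i+1)) - pow(2, pow(2, n-i)) equals v*v - v = v * (v - 1) with v = pow(2, pow(2, n-i)); and the first start_i terms 2**0 + 2**1 + ... + 2**(start_i - 1) form a geometric series summing to 2**start_i - 1, which together with the initial size = 2 gives 1 + 2**start_i. A small sanity check of both identities:

    # verify the two identities used in max_size_bdd on small inputs (illustration only)
    for n in range(1, 10):
        for i in range(1, n + 1):
            v = pow(2, pow(2, n - i))
            assert pow(2, pow(2, n - i + 1)) - pow(2, pow(2, n - i)) == v * (v - 1)
        assert sum(pow(2, k - 1) for k in range(1, n + 1)) == pow(2, n) - 1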