Tags: algorithm, time-complexity, big-o, computer-science

Is it possible to have two valid big O runtimes that depend on different variables?


Consider the subset sum problem: you are given a set of positive integers and wish to count the number of ways a subset of them can sum to a desired value. Essentially, this is counting the solutions to a coin-change problem when you have only one coin of each denomination. This algorithm (taken from here: https://www.youtube.com/watch?v=nqlNzOcnCfs) computes this:

[Image: the recursive implementation from the video, memoized on the pair (total, i)]
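The implementation from the video is only shown as an image, but based on the snippet quoted in the answer below, it can be sketched roughly like this (a reconstruction, not the author's exact code):

```python
def count_subsets(nums, total):
    """Count the subsets of nums (each number usable at most once)
    that sum to exactly total."""
    mem = {}

    def rec(i, total):
        key = (i, total)
        if key in mem:
            return mem[key]          # reuse a previously computed answer
        if total == 0:
            result = 1               # nothing left to make: one valid way
        elif total < 0 or i == len(nums):
            result = 0               # overshot the target, or ran out of numbers
        else:
            # last else: either skip nums[i] or include it (two recursive calls)
            result = rec(i + 1, total) + rec(i + 1, total - nums[i])
        mem[key] = result
        return result

    return rec(0, total)
```

For example, `count_subsets([1, 2, 3, 4], 5)` returns 2, for the subsets {1, 4} and {2, 3}.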

The author presented the runtime as O(n * total), where n is the number of integers available to make up the sum (here n = 4) and total is the target value.

But in the worst case, the recursion splits at most twice in the last else statement, and the depth of the recursion is at most n, so couldn't you also say the recursion tree makes at most O(2^n) calls, and that this is another valid runtime complexity? Is there a logical contradiction here, since this big O does not depend on total at all?


Solution

  • But in the worst case, the recursion splits at most twice in the last else statement, and the depth of the recursion is at most n, so couldn't you also say the recursion tree makes at most O(2^n) calls

    That would have been the case if it were not for this line at the top of the implementation:

    if key in mem:
        return mem[key]
    

    It is this line that prevents the code from entering that last else whenever the return value for the same combination of total and i has already been computed. There are at most n * total such combinations, so that is what limits the number of recursive calls and gives the big O(...) for the algorithm. This dynamic-programming technique is called memoization. Note that O(2^n) is still a valid upper bound, since big O only bounds the runtime from above; there is no contradiction, and the tighter statement is O(min(n * total, 2^n)). Because the running time depends on the numeric value of total rather than only on the length of the input, the algorithm is pseudo-polynomial.
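    One way to see the effect of the memoization is to instrument the recursion and count calls and distinct cached states (illustrative instrumentation, not part of the original answer; the helper name `count_calls` is made up here):

    ```python
    def count_calls(nums, total):
        """Run the memoized subset-sum count and report
        (total recursive calls, distinct memoized states)."""
        calls = 0
        mem = {}

        def rec(i, t):
            nonlocal calls
            calls += 1
            key = (i, t)
            if key in mem:
                return mem[key]
            if t == 0:
                res = 1
            elif t < 0 or i == len(nums):
                res = 0
            else:
                res = rec(i + 1, t) + rec(i + 1, t - nums[i])
            mem[key] = res
            return res

        rec(0, total)
        return calls, len(mem)

    # 20 numbers: a naive recursion could make on the order of 2^20 calls,
    # but the number of distinct (i, t) states is at most (n + 1) * (total + 1).
    calls, states = count_calls([1] * 20, 10)
    print(calls, states)  # both far below 2**20
    ```

    Each distinct state does the expensive work at most once, so the call count stays close to the state count rather than growing exponentially.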