Tags: algorithm, recursion, big-o, recurrence

Computing Tightest Big-Oh Bounds on Recurrence


In my Algorithms class we're learning about recurrences, but I'm completely lost and have no idea what to do. I found a PDF from Bowdoin, Solving Recurrences with the Iteration/Recursion-tree, and it explains things a bit better, but the examples provided don't include Big-Oh. One of my problems is listed below. How do we manipulate the recurrence's iteration tree to incorporate the O(n^2) term? I would appreciate it if someone could explain what to do in the case of Big-Oh involving recurrences. Thank you.

T(n) = T(n−4)+O(n^2)

Solution

  • I would like to follow up on the other answers by stating the following:

    In the general case, you cannot simply multiply the number of recursive calls by the complexity of each individual call.

    The safe thing to do would be to evaluate the recurrence precisely - i.e. without the O-notation, and "re-apply" the O-notation at the end by only taking the leading order term.


    Let's look at the example. Define a function S(n) which computes the precise series value:

    S(n) = S(n − 4) + n^2

    Let's assume that the recursion stops when n <= 0; then there are floor(n/4) terms in the expanded series:

    S(n) = Σ_{k=0}^{m−1} (n − 4k)^2,   where m = floor(n/4)
         = m·n^2 − 8n·Σ_{k=0}^{m−1} k + 16·Σ_{k=0}^{m−1} k^2
     (*) = m·n^2 − 4n·m(m−1) + (8/3)·m(m−1)(2m−1)
         = n^3/12 + O(n^2)

    Where in step (*) we used the standard formulae for summing powers of natural numbers (positive integers), and in the last step we gathered all terms proportional to n^3, which is the leading power. Thus, we can safely conclude that T(n) = O(S(n)) = O(n^3).
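    As a sanity check, here is a short Python sketch (mine, not part of the original derivation) that evaluates S(n) exactly from the recurrence and compares it to the predicted leading term n^3/12:

    ```python
    # Exact evaluation of S(n) = S(n - 4) + n^2, with S(n) = 0 for n <= 0.
    def S(n):
        total = 0
        while n > 0:
            total += n * n
            n -= 4
        return total

    # The closed form's leading term is n^3 / 12, so this ratio should
    # approach 1 as n grows.
    for n in (100, 1000, 10000):
        print(n, S(n) / (n ** 3 / 12))
    ```

    For n = 10000 the ratio is already within a fraction of a percent of 1, consistent with T(n) = O(n^3).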


    Where might a naive approach fail? Consider the following example:

    T(n) = T(n − 1) + O(log(N / n))

    Where N is a parameter equal to the initial value of n, i.e.:

    n = N at the top-level call, so the recursion runs through n = N, N−1, …, 1.

    Now, what assumptions can we make in the event of a naive approach?

    1. Since n = N initially, assume that the lower bound of the complexity is simply given by assuming n = N for all of the log terms. But this gives 0!
    2. Assuming that the stopping condition is when n = 1, the largest term is also the last - log N. There are obviously N terms in the entire sum, so the final complexity is O(N log N) = O(n log n).

    (2) seems reasonable, but is it correct?

    Let's use the above procedure:

        S(N) = Σ_{n=1}^{N} log(N / n)
     (1)     = Σ_{n=1}^{N} (log N − log n)
     (2)     = N·log N − log(N!)
     (3)     ≈ N·log N − (N·log N − N) = N

    Where we used logarithm rules in steps (1) and (2), and Stirling's approximation in step (3). Therefore, the final complexity T(n) is:

    T(n) = O(S(N)) = O(N) = O(n)

    So as you can see, the naive multiplication approach gave us the wrong result of O(n log n) instead of O(n).
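    The same kind of numerical check (again a sketch of mine, not from the original answer) makes the difference visible: the sum grows like N, not like N log N.

    ```python
    import math

    # Exact evaluation of S(N) = sum_{n=1}^{N} log(N / n).
    def S(N):
        return sum(math.log(N / n) for n in range(1, N + 1))

    # By Stirling, S(N) = N log N - log(N!) ~ N, so S(N) / N tends to 1,
    # while S(N) / (N log N) tends to 0 -- the naive bound is not tight.
    for N in (100, 1000, 10000):
        print(N, S(N) / N, S(N) / (N * math.log(N)))
    ```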

    Why did I bore you (apologies!) by choosing this rather abstruse counter-example? Because I have seen this mistake made before.