Tags: algorithm, time-complexity, big-o, asymptotic-complexity

What functions are included in Big-O Notation?


I am learning about big-O notation and working on an assignment I am stuck on. Basically, I have been given different functions and have to write the big-O for each of them. I think my confusion lies in which functions can appear inside big-O. I understand the hierarchy as follows: O(1), O(log n), O(n), O(n log n), O(n^2), O(2^n), O(n!).

I also understand why constants and smaller terms are left out, since we are just looking for a bound. My question is what happens when a function is not written in these terms. For example (this is not my exact question, but similar), 3^n is not a constant multiple of 2^n. Is the big-O then O(3^n), or still O(2^n)? My thinking is O(3^n), since 3^n grows faster than 2^n and big-O is an upper bound. But I haven't seen big-O expressed with an exponential base other than 2, as listed above. Is this correct thinking?


Solution

  • For your specific question, O(3^n) is different from O(2^n). One way to see this is to look at the ratio of the values as n --> infinity. In this case, the ratio is:

    3^n / 2^n = (1.5)^n
    

    And this grows without bound.

    Similarly, n^3 is different from n^2 because the ratio is:

    n^3 / n^2 = n
    

    And this grows without bound as n --> infinity.

    On the other hand, 3*n and 2*n belong to the same big-O class, O(n). Their ratio is:

    3n / 2n = 1.5
    

    This does not grow as n --> infinity. (The short numeric sketch below tabulates all three ratios.)
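
    If it helps to see numbers, here is a minimal Python sketch (standard library only; the sample values of n are just an arbitrary choice of mine) that tabulates the three ratios above for growing n, so you can see which ones blow up and which one stays flat:

    # Tabulate the three ratios discussed above for a few values of n.
    #   3^n / 2^n = (1.5)^n  -> grows without bound  => different big-O classes
    #   n^3 / n^2 = n        -> grows without bound  => different big-O classes
    #   3n  / 2n  = 1.5      -> constant             => same class, O(n)
    for n in (1, 10, 50, 100):
        exp_ratio = 3**n / 2**n           # (1.5)^n
        poly_ratio = n**3 / n**2          # n
        linear_ratio = (3 * n) / (2 * n)  # always 1.5
        print(f"n={n:>3}  3^n/2^n={exp_ratio:.3e}  n^3/n^2={poly_ratio:.0f}  3n/2n={linear_ratio}")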

    It is very important to understand that not every function is used as the argument of big-O. Basically, the "argument" in big-O stands for a whole class of functions with the same asymptotic behavior as n --> infinity, and the simplest member of that class is usually chosen for the notation. For example, O(2n + 7), O(3n), and O(n) all name the same class, and O(n) is the conventional way to write it.

    Remember that big-O is an upper bound on complexity. When you actually analyze an algorithm, you might use more precise functions and keep the constants. In fact, these constants can be very important and explain why an algorithm such as bubble sort may be optimal for small data sets, even though it is not optimal for large data sets; a rough timing sketch follows below.
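
    To make that last point concrete, here is a rough timing sketch (standard library only; the list sizes, repeat count, and the textbook bubble-sort and merge-sort implementations are my own illustrative choices, and the exact numbers will vary by machine) comparing an O(n^2) bubble sort with an O(n log n) merge sort on small and larger random lists:

    import random
    import timeit

    def bubble_sort(items):
        # Textbook O(n^2) bubble sort, working on a copy of the input.
        a = list(items)
        for i in range(len(a)):
            for j in range(len(a) - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a

    def merge_sort(items):
        # Textbook O(n log n) top-down merge sort.
        if len(items) <= 1:
            return list(items)
        mid = len(items) // 2
        left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    for n in (8, 64, 512):
        data = [random.random() for _ in range(n)]
        t_bubble = timeit.timeit(lambda: bubble_sort(data), number=200)
        t_merge = timeit.timeit(lambda: merge_sort(data), number=200)
        print(f"n={n:>3}  bubble={t_bubble:.4f}s  merge={t_merge:.4f}s")

    On very small inputs, bubble sort's low overhead can make it competitive or even faster, while on larger inputs the O(n log n) sort pulls far ahead, which is exactly the point about constants mattering for small data sets.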