big-o, asymptotic-complexity

Role of lower order terms in big O notation


In big-O notation, we are always told to ignore constant factors and lower-order terms in most cases. That is, rather than writing

3n^2 - 100n + 6

we are almost always satisfied with

n^2

since that is the fastest-growing term in the expression.
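
To see why in concrete numbers (my own quick check, not from any course): at n = 1000, 3n^2 is 3,000,000 while 100n is only 100,000 and the 6 is negligible, so the whole expression is within a few percent of 3n^2 alone, and that gap only shrinks as n grows.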

But I have found that many algorithm courses start by comparing functions with many terms, e.g.

2n^2 + 120n + 5 = O(n^2)

and then finding c and n0 for those long functions, before finally recommending that we ignore the lower-order terms.
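
(For example, one way such a derivation goes, with constants I picked myself: for n >= 1 we have 120n <= 120n^2 and 5 <= 5n^2, so

2n^2 + 120n + 5 <= 2n^2 + 120n^2 + 5n^2 = 127n^2

and the pair c = 127, n0 = 1 witnesses the bound. Tighter constants also work, e.g. c = 3 with n0 = 121, but any valid pair suffices.)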

My question is: what would I gain from trying to understand and analyse these kinds of functions with many terms? So far I have been comfortable understanding what O(1), O(n), O(log n), and O(n^3) mean. But am I missing some important concepts if I just rely on these typically used functions? What will I miss if I skip analysing those long functions?


Solution

  • Ever have intermediate steps in your work? That is likely what this is: when you compute a big O, chances are you don't know in advance which term will be the highest-order one, so you keep track of them all and then determine which complexity class makes sense at the end. There is also something to be said for understanding why the lower-order terms can be ignored.


    Take some graph algorithms, like minimum spanning tree or shortest path. Just by looking at the algorithm, can you tell what the highest-order term will be? I know I couldn't, so I'd trace through the algorithm and collect a bunch of terms.

    If you want another example, consider sorting algorithms and whether or not you want to memorize all their complexities. Bubble Sort, Shell Sort, Merge Sort, Quick Sort, Radix Sort and Heap Sort are a few of the more common algorithms out there. You could either memorize both the algorithm and its complexity, or memorize just the algorithm and derive the complexity from the pseudocode if you know how to trace it (see the sketch below).
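
    As a minimal sketch of what that tracing looks like, here is Bubble Sort in Python with a comparison counter added purely for illustration (the counter is my addition, not part of the standard algorithm). The count comes out as n(n - 1)/2, a polynomial whose n^2 term dominates, which is exactly why we report O(n^2) and drop the rest:

        def bubble_sort_with_count(items):
            """Bubble sort, instrumented to count comparisons.

            The outer loop makes n - 1 passes and the inner loop
            shrinks by one each pass, so the comparison count is
            (n - 1) + (n - 2) + ... + 1 = n(n - 1)/2, i.e. an
            n^2/2 term plus a lower-order -n/2 term.
            """
            a = list(items)                 # sort a copy, leave input intact
            n = len(a)
            comparisons = 0
            for i in range(n - 1):          # n - 1 passes
                for j in range(n - 1 - i):  # inner loop shrinks each pass
                    comparisons += 1
                    if a[j] > a[j + 1]:
                        a[j], a[j + 1] = a[j + 1], a[j]
            return a, comparisons

        # At n = 100 the count is 100 * 99 / 2 = 4950; the quadratic term
        # swamps everything else, so the lower-order terms get dropped.
        _, comps = bubble_sort_with_count(range(100, 0, -1))
        print(comps)  # 4950

    Running it once with a concrete n makes the constant factors and lower-order terms visible before you throw them away; that is exactly the intermediate-step bookkeeping described above.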