Tags: algorithm, complexity-theory, big-o, notation

Why does Big-O Notation use O(1) instead of O(k)?


If I understand Big-O notation correctly, k would be a constant representing the time an algorithm takes (its efficiency). Why is constant time written as O(1) rather than O(k), given that the constant can vary from algorithm to algorithm? Linear growth with a constant term ( O(n + k) ) uses this variable to shift the running time up by a specific amount, so why not do the same for constant complexity?


Solution

  • There is no such thing as a linear-growth asymptotic O(n + k) where k is a constant. If k were a constant and you went back to the limit representation of algorithmic growth rates, you'd see that O(n + k) = O(n), because constants drop out in limits (a short derivation follows this paragraph).
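
    As a quick sketch of that derivation (this assumes k is a fixed constant with k >= 0; d and n_0 refer to the formal definition quoted later in this answer):

        \lim_{n \to \infty} \frac{n + k}{n}
          = \lim_{n \to \infty} \left( 1 + \frac{k}{n} \right)
          = 1,
        \qquad\text{and}\qquad
        n + k \le 2n \ \text{for every } n \ge k,
        \ \text{so}\ n + k \in O(n) \ \text{with witnesses } d = 2,\ n_0 = k.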

    Your answer may legitimately be O(n + k) when k is a variable that is fundamentally independent of the other input size n. You commonly see this when compares and moves are counted separately in sorting-algorithm analysis (a sketch with two independent inputs follows this paragraph).
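
    As a hypothetical illustration of two genuinely independent input sizes (the function and names below are made up for this sketch, not taken from the question), consider answering k membership queries against a collection of n items. Neither size bounds the other, so the O(n + k) total cannot be simplified; it is the same kind of sum you get when compares and moves are counted separately:

        def answer_queries(items, queries):
            # n = len(items), k = len(queries); the two sizes are independent,
            # so the total cost is O(n + k) and neither term can be dropped.
            seen = set(items)                      # O(n): one pass over the items
            return [q in seen for q in queries]    # O(k): constant expected work per query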

    To answer your question about why we drop the k in Big-O notation (something I think is taught poorly, which leads to exactly this kind of confusion), one definition (as I recall) of O() is as follows:

        f(n) \in O(g(n)) \iff \exists\, d,\ n_0 \ \text{s.t.} \ \forall\, n > n_0 : \ f(n) \le d \cdot g(n)

    Read: f(n) is in O( g(n) ) iff there exist d and n_0 such that, for all n > n_0,
          f(n) <= d * g(n)
    

    Let's apply it to our problem, where k is a constant and thus f(n) = k and g(n) = 1.

    • Do a d and an n_0 exist that satisfy these requirements?

    Trivially, the answer is of course yes: choose any d > k and n_0 = 0. Then for every n > n_0 we have f(n) = k <= d = d * g(n), so the definition holds and k is in O(1).
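
    For concreteness, here is a small spot check of those witnesses (the helper name and the cut-off n_max are made up for this sketch, and a finite loop is only evidence, not a proof):

        def satisfies_definition(f, g, d, n0, n_max=10_000):
            # Check f(n) <= d * g(n) for every n with n0 < n <= n_max.
            return all(f(n) <= d * g(n) for n in range(n0 + 1, n_max + 1))

        k = 7                                         # any fixed constant
        print(satisfies_definition(lambda n: k,       # f(n) = k
                                   lambda n: 1,       # g(n) = 1
                                   d=k + 1, n0=0))    # witnesses from above: d > k, n_0 = 0
        # Prints True, consistent with a constant k being in O(1).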