Tags: time-complexity, binary-search

Why does a binary search have a time complexity of O(log n), when the actual time complexity is a step function?


This is the definition of logarithmic time complexity one normally comes across:

Logarithmic running time O(log n) essentially means that the running time grows in proportion to the logarithm of the input size.

But in some cases, the time does not seem to grow strictly in proportion to the logarithm of the input size. For example, consider binary search. It is said that the algorithm has a time complexity of O(log n). I've generated arrays with sizes ranging from 2 to 1000, and for each array I've calculated the number of iterations (i.e., array splits) in the worst-case scenario. This is the result:

[Plot: worst-case number of iterations vs. array size n, a step function that coincides with log2(n) only at powers of two]
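For reference, here is a minimal Python sketch of this experiment (a reconstruction, not the original code: it assumes the worst case keeps the larger half at every split and counts splits until a single element remains):

```python
def worst_case_splits(n: int) -> int:
    """Worst-case number of array splits binary search performs
    on an array of n elements, counting until one element is left."""
    splits = 0
    while n > 1:
        n = (n + 1) // 2   # worst case: the larger half survives
        splits += 1
    return splits

# Under this counting scheme the count is the step function ceil(log2(n)),
# which equals log2(n) exactly only when n is a power of two.
for n in range(2, 1001):
    assert worst_case_splits(n) == (n - 1).bit_length()  # == ceil(log2(n)) for n >= 2
```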

Evidently, the actual number of iterations is not log2(n): the two functions produce the same output only when log2(n) is an integer, i.e., when n is a power of two.

So my question is: why do we say that the binary search algorithm has O(log n) complexity, when its running time is in fact a step function? (The usual derivation, which starts with 1 = N/2^x and after some algebraic manipulation ends up with x = log2(N), doesn't explain this.)


Solution

  • You are right in your observations. However, big-O is asymptotic notation. When discussing time complexity, we are generally not interested in a function that gives the exact number of operations, but in how the running time grows as a function of the input size, specifically for sufficiently large inputs.
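    For concreteness, the standard textbook definition is the following (stated here for reference):

    ```latex
    f(n) = O(g(n))
    \iff
    \exists\, c > 0,\ \exists\, n_0 \in \mathbb{N}:\quad
    f(n) \le c \cdot g(n) \ \text{ for all } n \ge n_0
    ```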

    If you take a moment to understand this definition, you will find some interesting properties. One such property is that O(f(n)) = O(c * f(n) + d), where c > 0 and d are constants, provided f(n) is eventually at least 1 (as log n is).
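    The argument behind that property is one line, assuming f(n) >= 1 for all sufficiently large n:

    ```latex
    c \cdot f(n) + d
    \;\le\; c \cdot f(n) + |d| \cdot f(n)
    \;=\; (c + |d|) \cdot f(n) ,
    ```

    so c * f(n) + d = O(f(n)), with c + |d| as the witness constant required by the definition above.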

    Let us take the step function s(x) from your example. For every x >= 2 it is sandwiched between the logarithm and the logarithm plus a constant: log2(x) <= s(x) <= log2(x) + 1. We recognise that +1 is just an additive constant, so it follows from the property above that O(s(x)) = O(log(x)).
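    A quick numerical check of this sandwich, reusing the counting scheme sketched in the question (so the same assumption about how splits are counted applies):

    ```python
    import math

    def worst_case_splits(n: int) -> int:
        # Same counting scheme as the sketch in the question.
        splits = 0
        while n > 1:
            n = (n + 1) // 2
            splits += 1
        return splits

    for x in range(2, 1001):
        s = worst_case_splits(x)                     # s(x) = ceil(log2(x))
        assert math.log2(x) <= s <= math.log2(x) + 1
    ```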