Tags: algorithm, time-complexity, binary-search

Why is binary search's running time complexity considered to be log2(n)?


Can someone explain why, when it comes to binary search, we say the running time complexity is O(log n)? I searched on Google and found the following:

"The number of times that you can halve the search space is the same as log2 n".

I know we keep halving the search space until we find the search key in the data structure, but why does that give log2(n)? I understand that e^x is exponential growth and that log2(n) is the corresponding binary decay, but I am unable to interpret binary search in terms of my understanding of the logarithm's definition.


Solution

  • Think of it like this:

    If you can afford to halve something m times (i.e., you can afford to spend time proportional to m), then how large an array can you afford to search?

    Obviously, arrays of size 2^m, right?

    So if you can search an array of size n = 2^m, then the time it takes is proportional to m, and solving for m in terms of n looks like this:

    n = 2^m

    log2(n) = log2(2^m)

    log2(n) = m


    Put another way: Performing a binary search on an array of size n = 2^m takes time proportional to m, or equivalently, proportional to log2(n).
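
To see this concretely, here is a minimal Python sketch (my own illustration, not taken from the original answer): a standard iterative binary search that also counts how many times it halves the search space. For a sorted array of length n, the loop runs at most floor(log2(n)) + 1 times, so the printed count comes out right around log2(n). The function name `binary_search` and the driver code are just illustrative.

```python
import math

def binary_search(sorted_items, target):
    """Return (index, steps), where steps counts how many times the
    search space was halved; index is -1 if target is absent."""
    lo, hi = 0, len(sorted_items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2          # midpoint of the current search space
        if sorted_items[mid] == target:
            return mid, steps
        elif sorted_items[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1, steps

if __name__ == "__main__":
    m = 20
    n = 2 ** m                        # n = 2^m elements
    data = list(range(n))
    _, steps = binary_search(data, n - 1)   # target near the worst case
    print(f"n = {n}, log2(n) = {int(math.log2(n))}, halvings = {steps}")
```

Running this with n = 2^20 prints a halving count of about 20 (at most 21, i.e., floor(log2(n)) + 1), which is exactly the proportionality to log2(n) described above.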