Tags: algorithm, data-structures, complexity-theory, big-o, logarithm

Difference between O(n) and O(log(n)) - which is better and what exactly is O(log(n))?


This is my first course in data structures, and in every lecture and TA session we talk about O(log(n)). This is probably a dumb question, but I'd appreciate it if someone could explain to me exactly what it means!


Solution

  • It means that the thing in question (usually running time) scales in a manner that is consistent with the logarithm of its input size.

    Big-O notation doesn't give an exact equation, but rather a bound on growth. For instance, the outputs of the following functions are all O(n):

    f(x) = 3x
    g(x) = 0.5x
    m(x) = x + 5
    

    Because as you increase x, their outputs all increase linearly: if there's a 6:1 ratio between f(n) and g(n), there will also be approximately a 6:1 ratio between f(10*n) and g(10*n), and so on (see the sketch below).
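
    Here is a rough sketch of that constant-ratio behaviour in Python (my own illustration, not part of the original answer); the names f, g, and m just mirror the definitions above:

    # Sketch: all three functions grow linearly, so the ratio between any
    # two of them stays roughly constant no matter how large x gets.

    def f(x):
        return 3 * x

    def g(x):
        return 0.5 * x

    def m(x):
        return x + 5

    for x in (10, 100, 1000, 10000):
        # f/g is exactly 6.0; f/m approaches 3.0 as the "+5" term becomes negligible
        print(x, f(x) / g(x), f(x) / m(x))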


    As for whether O(n) or O(log n) is better, consider: if n = 1000, then log n = 3 (for log-base-10). Which would you rather have your algorithm take to run: 1000 seconds, or 3 seconds?
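
    To make that concrete, here is a small sketch of my own (an assumed example, not from the original answer) comparing worst-case step counts for a linear scan, which is O(n), against binary search, which is O(log n). The base of the logarithm only changes a constant factor, so base 2 here gives about 10 steps for n = 1000, still far fewer than 1000:

    # Sketch: count worst-case comparisons for linear scan vs. binary search
    # on a sorted list of n items.

    def linear_search_steps(items, target):
        steps = 0
        for value in items:          # O(n): may look at every element
            steps += 1
            if value == target:
                break
        return steps

    def binary_search_steps(items, target):
        steps = 0
        lo, hi = 0, len(items) - 1
        while lo <= hi:              # O(log n): halves the remaining range each pass
            steps += 1
            mid = (lo + hi) // 2
            if items[mid] == target:
                break
            elif items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return steps

    data = list(range(1000))
    # Worst case: the target is the last element.
    print(linear_search_steps(data, 999))   # 1000 steps
    print(binary_search_steps(data, 999))   # 10 steps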