algorithm, performance, big-o, complexity-theory

What does scaling of the upper bound of your algorithm's runtime tell you?


Suppose the only thing you know is that your algorithm runs in O(n^2) time in the worst case. From this fact alone you know that the upper bound looks like Cn^2 for some C > 0. Thus you know how the upper bound of your algorithm scales: if you double the input size, the upper bound quadruples.
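For instance, a quick timing sketch like the following shows the quadrupling (quadratic_work is just a made-up stand-in for any O(n^2) routine, and the exact times will vary by machine):

```python
import time

def quadratic_work(n):
    """Deliberately O(n^2): touches all n*n index pairs."""
    total = 0
    for i in range(n):
        for j in range(n):
            total += i + j
    return total

for n in (1000, 2000, 4000):
    start = time.perf_counter()
    quadratic_work(n)
    elapsed = time.perf_counter() - start
    print(f"n={n:5d}  elapsed={elapsed:.3f}s")
# Each doubling of n should roughly quadruple the elapsed time,
# which is exactly how the Cn^2 upper bound scales.
```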

Question: What practical question can you answer if you know how the upper bound scales? I just can't see whether this particular piece of knowledge is helpful in any way.


Solution

  • If you know of an alternative algorithm whose running time is not in O(n²), then you may conclude that there is some minimum input size above which your algorithm will outperform that alternative.

    This is because if g(n) is not in O(n²), then there exist no C and N such that g(n) < Cn² for every n > N. But then you also cannot find C and N such that g(n) < Cn or g(n) < Cn log n, etc., because n and n log n are themselves bounded by n² (for n ≥ 1). So g(n) is not in O(1), O(n), O(n log n), ... precisely because it is not in O(n²).

    Still, the input size at which your algorithm starts to win could be so astronomically large that this is of no practical value, and you would still prefer the alternative algorithm. The sketch at the end of this answer shows how such a crossover point can be located for concrete cost models.


    Please also realize that when coders state a worst-case time complexity, they most often mean it as a tight bound on the worst case. For example, if someone presents a sorting algorithm with a worst-case time complexity of O(n²), they mean that it does not achieve the O(n log n) worst-case bound of the more efficient sorting algorithms.
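To make the first point concrete, here is a minimal sketch with made-up cost models: f(n) stands for the step count of a hypothetical O(n²) algorithm with a large constant factor, and g(n) for an alternative whose 0.01·n³ cost is not in O(n²). Both functions and their constants are illustrative assumptions, not measurements of real algorithms.

```python
# Hypothetical cost models; the constants are illustrative, not measured.
def f(n):
    return 1000 * n * n   # "your" algorithm: 1000 * n^2 steps

def g(n):
    return 0.01 * n ** 3  # alternative: 0.01 * n^3 steps, not in O(n^2)

# Scan for the first input size at which the quadratic algorithm is cheaper.
n = 1
while f(n) >= g(n):
    n += 1
print(f"the O(n^2) algorithm wins for every n >= {n}")  # 100001 here
```

Even with these tame constants the crossover sits beyond n = 100000; shrink the constant in g(n) further and it moves out arbitrarily far, which is exactly the practical caveat above.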