Suppose the only thing you know is that your algorithm runs in O(n^2) time in the worst case. From this fact alone you know that the upper bound looks like Cn^2 for some C > 0. Thus you know how the upper bound of your algorithm scales: if you double the input size, the upper bound quadruples.
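One concrete thing you can do with that scaling is sanity-check it empirically. Here is a minimal sketch (quadratic_pairs is a made-up stand-in for any routine whose running time really does grow quadratically; note that O(n^2) by itself only promises that the *upper bound* scales this way):

```python
import timeit

def quadratic_pairs(xs):
    """Made-up O(n^2) routine: count pairs (i, j) with xs[i] > xs[j]."""
    count = 0
    for i in range(len(xs)):
        for j in range(len(xs)):
            if xs[i] > xs[j]:
                count += 1
    return count

for n in (500, 1000, 2000):
    data = list(range(n))
    t = timeit.timeit(lambda: quadratic_pairs(data), number=3)
    print(f"n = {n:4d}: {t:.3f} s")  # each doubling of n should roughly quadruple t
```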
Question: What practical question can you answer if you know how the upper bound scales? I just can't see how this particular piece of knowledge is helpful.
If you know of an alternative algorithm that is not in O(n^2), then you may conclude that there is some minimum input size above which your algorithm will outperform the alternative algorithm.
This is because if f(n) is not in O(n^2), then there do not exist C and N such that f(n) < Cn^2 for every n > N. You would then also not find C and N such that f(n) < Cn or f(n) < Cn log n for every n > N, since for n ≥ 1 we have Cn ≤ Cn^2 and Cn log n ≤ Cn^2, so either of those bounds would imply the n^2 bound. Hence f(n) is not O(1), O(n), O(n log n), ... given that it is not O(n^2).
Still, the input size above which your algorithm outperforms the alternative could be so astronomically large that it never occurs in practice, in which case you would still prefer the alternative algorithm.
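A toy illustration of both points (the constants C1 and C2 are purely hypothetical, and ours/alt are made-up cost models, not real algorithms):

```python
# ours(n) is quadratic, so it is in O(n^2); alt(n) grows like n^3, so it is not.
C1, C2 = 100.0, 1e-10

def ours(n):
    return C1 * n * n      # your algorithm: heavy constant factor, slower growth

def alt(n):
    return C2 * n ** 3     # alternative: tiny constant factor, faster growth

# ours(n) < alt(n) exactly when n > C1 / C2, so a crossover is guaranteed...
crossover = C1 / C2
print(crossover)                    # 1e12 -- ...but it may be enormous
print(ours(10**13) < alt(10**13))   # True: beyond the crossover, ours wins
```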
Please also realize that when coders speak of a time complexity for the worst case, they most often mean that this is a tight bound for the worst case. For example, if someone presents a sorting algorithm with a worst-case time complexity of O(n^2), they mean that it does not have the worst-case time complexity of O(n log n) that more efficient sorting algorithms have.
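Insertion sort is the standard example of such a tight quadratic worst case: on already-sorted input it does about n comparisons, but on reverse-sorted input it does about n^2/2. A minimal sketch that counts comparisons:

```python
def insertion_sort_comparisons(xs):
    """Insertion sort; returns the number of comparisons performed."""
    xs = list(xs)
    comparisons = 0
    for i in range(1, len(xs)):
        key = xs[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if xs[j] > key:
                xs[j + 1] = xs[j]   # shift larger element right
                j -= 1
            else:
                break
        xs[j + 1] = key
    return comparisons

n = 1000
print(insertion_sort_comparisons(range(n)))         # 999: sorted input, ~n
print(insertion_sort_comparisons(range(n, 0, -1)))  # 499500: reversed input, ~n^2/2
```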