algorithm · time-complexity · runtime · big-o

When should we use soft-O, soft-Omega, soft-Theta


I'm new to the field of algorithms and have encountered soft notations many times, but I don't understand how they are used. According to Wikipedia,

'Essentially, it is big O notation, ignoring logarithmic factors because the growth-rate effects of some other super-logarithmic function indicate a growth-rate explosion for large-sized input parameters that is more important to predicting bad run-time performance than the finer-point effects contributed by the logarithmic-growth factor(s).' https://en.wikipedia.org/wiki/Big_O_notation#Extensions_to_the_Bachmann%E2%80%93Landau_notations

I don't really understand the usage of soft-O from its definition. Could someone give me an example where we should use soft-O instead of big-O? Many thanks.


Solution

  • You use different notation for different levels of precision. For example, you can say

    • Mergesort uses at most n ⌈lg n⌉ comparisons

    • Mergesort uses O(n log n) comparisons

    • Mergesort uses Õ(n) comparisons

    The first is fussy but precise; the second is the usual level of detail; the third is imprecise but still enough to compare mergesort to insertion sort (Õ(n) versus Θ(n²)).

    Using Õ to suppress a single log is a little silly, but you can also write things like log²n √(log log n) = Õ(1) or even extend it to 1/ε terms for approximation schemes, and in a complicated algorithm, those factors get really annoying to keep track of.

    You never have to use soft-O notation though. You don't have to use big-O notation either. You could be like Knuth and quote instruction counts of a particular implementation for the MMIX processor. It's just convenient.
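To make the first, precise statement concrete, here is a small sketch (my own illustration, not part of the original answer) that counts the comparisons an actual mergesort performs and checks them against the n ⌈lg n⌉ bound:

```python
import math
import random

def merge_sort(a):
    """Sort a list; return (sorted_list, number_of_element_comparisons)."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, cl = merge_sort(a[:mid])
    right, cr = merge_sort(a[mid:])
    merged, comparisons = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1          # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comparisons

n = 1000
data = random.sample(range(10 * n), n)
sorted_data, comps = merge_sort(data)
bound = n * math.ceil(math.log2(n))   # the precise n * ceil(lg n) bound
print(comps, "<=", bound)
```

The observed count stays under the bound for any input, which is exactly the kind of fine-grained fact that O(n log n), and a fortiori Õ(n), deliberately throws away.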
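As a numeric illustration of why polylogarithmic factors are safe to suppress: the expression log²n √(log log n) from above is eventually dominated by any polynomial n^ε. A quick check (n^0.1 and the sample sizes n = 2^k are arbitrary choices for illustration):

```python
import math

def polylog(n):
    """The polylog factor log^2(n) * sqrt(log log n) mentioned above."""
    return math.log(n) ** 2 * math.sqrt(math.log(math.log(n)))

# Ratio against the polynomial n**0.1: once n is large enough, the ratio
# shrinks toward 0, so the polylog term is negligible next to any n^eps.
for k in (100, 200, 400):
    n = 2 ** k
    print(f"n = 2^{k}: polylog(n) / n^0.1 = {polylog(n) / n ** 0.1:.3g}")
```

The ratio drops steeply as k grows, which is the formal content of log²n √(log log n) = Õ(1).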