Why is ω(n) smaller than O(n)?
I know what little omega is (for example, n = ω(log n)), but I can't understand why ω(n) is smaller than O(n).
Algorithmic complexity has a mathematical definition.
If f and g are two functions, f = O(g) if you can find two constants c > 0 and n such that f(x) < c * g(x) for every x > n.
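For instance, here is a minimal numeric sketch (the functions f(x) = 3x + 5 and g(x) = x and the constants c = 4, n = 5 are made up for illustration, and checking a finite range is of course not a proof):

    # Illustrate f = O(g): verify f(x) < c * g(x) for every x > n over a finite range.
    def f(x):
        return 3 * x + 5   # the function being bounded

    def g(x):
        return x           # the bounding function

    c, n = 4, 5            # hand-picked constants for the definition of O
    assert all(f(x) < c * g(x) for x in range(n + 1, 100_000))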
For Ω, it is the opposite: you can find constants c and n such that f(x) > c * g(x) for every x > n.
f = Θ(g) if there are three constants c, d and n such that c * g(x) < f(x) < d * g(x) for every x > n.
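Another hedged numeric sketch (f(x) = 2x + 10, g(x) = x and the constants c = 2, d = 3, n = 10 are again made up): note that the left inequality alone is the Ω condition and the right one alone is the O condition.

    # Illustrate f = Theta(g): check c * g(x) < f(x) < d * g(x) for every x > n
    # over a finite range. The left inequality is the Omega part, the right one the O part.
    def f(x):
        return 2 * x + 10

    def g(x):
        return x

    c, d, n = 2, 3, 10     # hand-picked constants for the definition of Theta
    assert all(c * g(x) < f(x) < d * g(x) for x in range(n + 1, 100_000))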
In short, O means your function is dominated by the other one, Θ means it is equivalent to the other function (up to constant factors), and Ω means the other function is a lower bound for it.
So when you use Θ, your estimate is tighter, because you are "wrapping" your function between two bounds, whereas O only sets a maximum. Ditto for Ω (a minimum).
To sum up:
O(n): your algorithm's cost grows at most like n (an upper bound; this is what you usually quote for the worst case)
Ω(n): your algorithm's cost grows at least like n (a lower bound; think of the best case)
Θ(n): your algorithm's cost grows exactly like n, up to constant factors (both bounds at once)
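As a concrete, standard illustration of the worst-case / best-case wording above, here is linear search sketched in Python (the example is mine, not from the question):

    # Linear search: 1 comparison in the best case (target is first) and n comparisons
    # in the worst case (target is last or absent), so its running time is O(n) and
    # Omega(1); only the worst case taken by itself is Theta(n).
    def linear_search(items, target):
        for i, value in enumerate(items):
            if value == target:   # best case: found immediately
                return i
        return -1                 # worst case: scanned all n elements

    data = list(range(1000))
    assert linear_search(data, 0) == 0      # best case
    assert linear_search(data, 999) == 999  # worst case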
To conclude, the comparison in your question is not quite the right one: ω is not "smaller" than O; the two bound your function from opposite sides. ω is the strict version of Ω: f = ω(g) if, for every constant c > 0, f(x) > c * g(x) once x is large enough. As you may know, n > c * log(n) for any constant c when n has a huge value, so n = ω(log(n)) according to the previous definitions, and n is therefore not O(log(n)) (nor Θ(log(n))).
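If it helps, a quick numeric sanity check (a sketch, not a proof; the constants 1, 10, 100, 1000 are arbitrary choices):

    import math

    # n = omega(log n) means: for every constant c, n > c * log(n) once n is large enough.
    # The loop finds, for a few values of c, where that starts to hold.
    for c in (1, 10, 100, 1000):
        n = 2
        while n <= c * math.log(n):
            n += 1
        print(f"c = {c:4d}: n > c * log(n) first holds around n = {n}")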