Could someone please help me understand the notation mentioned on that page? I am trying to understand Big O notation, and under the "Family of Bachmann–Landau notations" table there is a "Formal definition" column with lots of notation and equations I have not come across before. Is anyone familiar with these? https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann–Landau_notations
The logic behind these definitions is actually quite simple. It basically says that no matter what constant multiplies the result, from some point where n is big enough, one of the functions starts being bigger/smaller and it stays that way.
To see the real difference, I will explain little-o (which says that one function has strictly smaller complexity than another). The definition says that for every k bigger than zero you can find a value of n, called n_0, such that all n bigger than n_0 follow this pattern: f(n) <= k*g(n).
So you have two functions that take n as a parameter. Then no matter what you put as k, you can always find a value of n for which f(n) <= k*g(n), and every value bigger than the one you found will also satisfy this inequality.
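Written with the symbols the Wikipedia table uses (∀ means "for all", ∃ means "there exists"), the same little-o definition reads:

∀ k > 0  ∃ n_0  ∀ n > n_0 :  f(n) <= k*g(n)

(The table writes |f(n)| instead of f(n) so that negative values are covered too; for running times you can ignore the absolute value.)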
Consider for example:
f(n) = n * 100
g(n) = n^2
So if you try to put, say, n=5 in there, it does not tell you which one has bigger complexity, because 5*100=500 and 5^2=25. If you put in a number big enough, say n=100, then f(n)=100*100=10000 and g(n)=100^2=100*100=10000, so we reach the same value. If you try anything bigger than that, g(n) will keep pulling further ahead of f(n).
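A quick numeric check makes the crossover visible. This is a minimal Python sketch (the function names f and g just mirror the formulas above, nothing more):

def f(n):
    return 100 * n   # f(n) = n * 100

def g(n):
    return n ** 2    # g(n) = n^2

# g(n) catches up with f(n) exactly at n = 100 and stays ahead afterwards
for n in (5, 50, 100, 200, 1000):
    print(n, f(n), g(n), g(n) > f(n))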
It also has to satisfy the inequality f(n) <= k*g(n). For example, if I put k=0.1, then:

100*n <= 0.1*n^2      (multiply both sides by 10)
1000*n <= n^2         (divide both sides by n)
1000 <= n
So with these functions you can see that for k=0.1 you need n_0 = 1000 to fulfill the inequality, but that is enough: for all n > 1000, k*g(n) will always be at least f(n), therefore g(n) has the higher complexity. (OK, the real proof is not that easy, but you can see the pattern.) The point is, no matter what k is, even if it is k=0.000000001, there is always a breaking point n_0, and from that point on, k*g(n) will be bigger than f(n).
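If you solve 100*n <= k*n^2 for n, you get n >= 100/k, so for these two particular functions the breaking point is n_0 = 100/k. A small Python sketch to check that (the formula for n_0 comes from this derivation and only applies to this pair of functions):

def condition_holds(n, k):
    # little-o condition f(n) <= k*g(n) for f(n) = 100n, g(n) = n^2
    return 100 * n <= k * n ** 2

for k in (0.1, 0.01, 0.000000001):
    n0 = 100 / k   # breaking point derived above
    print(k, n0, condition_holds(n0, k), condition_holds(10 * n0, k))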
We can also try a negative example, to see that a constant factor is not the same kind of difference as the one between O(n) and O(n^2).

Let's take:

f(n) = n
g(n) = 10*n

In standard algebra g(n) > f(n), right? But in complexity theory we need to know whether it grows bigger, and if so, whether it grows faster than just a constant multiple.
So if we pick k=0.01, then you can see that no matter how big n gets, you will never find an n_0 that fulfills f(n) <= k*g(n), so f(n) != o(g(n)).
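You can make the failure explicit numerically: the ratio f(n)/g(n) is constantly 0.1, so it can never drop below k=0.01, no matter how big n gets. A short sketch:

# f(n) = n, g(n) = 10*n: the ratio f(n)/g(n) is constantly 0.1,
# so f(n) <= 0.01*g(n) would mean 0.1 <= 0.01, which is never true
for n in (10, 1000, 10**9):
    f, g = n, 10 * n
    print(n, f / g, f <= 0.01 * g)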
In terms of complexity theory you can read the notations as smaller/bigger relations, so:

f(n) = o(g(n)) -> f(n) < g(n)
f(n) = O(g(n)) -> f(n) <= g(n)
f(n) = Big-Theta(g(n)) -> f(n) === g(n)
//... etc. Remember these equations are not algebraic, they are just intuition for complexity.
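For reference, the other notations in that table follow exactly the same pattern; only the quantifier on k flips between "there exists" and "for all", and the inequality flips direction. A sketch of the standard definitions (assuming non-negative functions, and using the Knuth version of Omega that computer science uses; the table also lists equivalent limit-based forms):

f(n) = O(g(n))     :  ∃ k > 0  ∃ n_0  ∀ n > n_0 :  f(n) <= k*g(n)
f(n) = o(g(n))     :  ∀ k > 0  ∃ n_0  ∀ n > n_0 :  f(n) <= k*g(n)
f(n) = Omega(g(n)) :  ∃ k > 0  ∃ n_0  ∀ n > n_0 :  f(n) >= k*g(n)
f(n) = omega(g(n)) :  ∀ k > 0  ∃ n_0  ∀ n > n_0 :  f(n) >= k*g(n)
f(n) = Theta(g(n)) :  f(n) = O(g(n)) and f(n) = Omega(g(n))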