So, everybody is familiar with constant O(1) or linear O(N) space complexity. But is there any case where the space complexity of an algorithm is proportional to O(N log N) or O(N^2)? If that is possible, what is the advantage of it?
P.S. I have searched various sites but didn't find a satisfactory answer.
Almost any algorithm can be made to use O(N^2) memory. Consider some f(a, b) where 0 < a, b < N and f is expensive to calculate. An obvious way to reduce runtime is a lookup table of size N * N holding precalculated results. This kind of trade-off between runtime and memory usage comes up very often.
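As a minimal sketch of that trade-off (the function `f` here is a made-up placeholder for any expensive computation):

```python
def f(a, b):
    # Stand-in for an expensive computation; illustrative only.
    return sum(i * j for i in range(a + 1) for j in range(b + 1))

N = 50

# Precompute all N * N results up front: O(N^2) memory,
# but every later query is an O(1) table lookup.
table = [[f(a, b) for b in range(N)] for a in range(N)]

def f_fast(a, b):
    return table[a][b]
```

After the one-time precomputation cost, repeated calls to `f_fast` never recompute anything, which is exactly why you might accept quadratic memory.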
In general, algorithms that work with matrices typically take N * N memory to store them. For example, to rotate a point in N = 3 dimensions, you can use a 3x3 rotation matrix.