So far, understanding Big-O notation and how it's calculated has been OK; most of the situations are easy to understand. However, I just came across this one problem that I cannot for the life of me figure out.
Directions: select the best big-O notation for the expression.
(n^2 + lg(n))(n-1) / (n + n^2)
The answer is O(n). That's all fine and dandy, but how is that rationalized, given the n^3 factor in the numerator? n^3 isn't the best bound, but I thought there was like a "minimum" requirement for f(n) <= O(g(n))?
The book has not explained any mathematical inner workings; everything has sort of been injected into a possible solution (taking f(n) and generating a g(n) that's slightly greater than f(n)).
Kinda stumped. Go crazy on the math, or math referencing, if you must.
Also, given a piece of code, how does one determine the time units per line? How do you determine logarithmic times based off of a line of code (or multiple lines of code)? I understand that declaring and setting a variable is considered 1 unit of time, but when things get nasty, how would I approach a solution?
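To be concrete about that second part, here is a toy example of my own (not from the book): a loop whose control variable doubles each pass is, as far as I understand, the classic source of a log term, and below is how I'd try to charge time units to each line. Is this the right way to think about it?

```
# Toy sketch (my own counting convention, not the book's):
# a loop whose control variable doubles each pass.
import math

def count_units(n):
    units = 0          # 1 unit: declare and set the counter
    i = 1              # 1 unit: declare and set the loop variable
    while i < n:       # the test runs about log2(n) + 1 times
        i = i * 2      # the update runs about log2(n) times
        units += 2     # tally one test + one update per pass
    return units       # roughly 2*log2(n) + a constant, i.e. O(log n)

for n in (16, 1024, 10**6):
    print(n, count_units(n), round(2 * math.log2(n)))
```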
Non-rigorous answer:
The numerator expands to n^3 + n log(n) - n^2 - log(n), which grows as n^3 for large n, and the denominator n + n^2 grows as n^2 for large n. So the whole expression grows as n^(3-2) = n for large n, or O(n).
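As a quick sanity check (a sketch, assuming SymPy is available; not part of the argument above), the limit of the expression divided by n is a finite nonzero constant, which is exactly what "grows like n" means:

```
# Sketch: check with SymPy that the expression grows like n.
import sympy as sp

n = sp.symbols('n', positive=True)
f = (n**2 + sp.log(n, 2)) * (n - 1) / (n + n**2)

# If f is Theta(n), then f/n should tend to a finite nonzero constant.
print(sp.limit(f / n, n, sp.oo))   # prints 1
```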