
How do I know that n^(0.1) = w((log n)^10) with regard to algorithm complexity?


Assume w denotes little omega (ω).

  • one algorithm takes n^(0.1) steps to complete a task

  • another algorithm takes (log n)^10 steps to complete the task

The question asked is whether n^(0.1) = w((log n)^10) is true. I answered false and got it wrong, but I'm not sure why: graphically, (log n)^10 appears to dominate n^(0.1) even at n values in the trillions.

What I have tried:

I have tried taking the limit of f(x)/g(x) to see if it goes to infinity. However, after one round of L'Hôpital's rule it seems this approach would be a very involved process.

I have also tried simply assessing graphically whether n^(0.1) grows faster than (log n)^10, and perhaps this is the source of my confusion. My professor said that in the case of logs vs polynomials, any polynomial always dominates the log. However, I have tried graphing these functions against each other, and even at n = 9999999, (log n)^10 seems to clearly dominate n^(0.1).

Am I missing something here? Will n^0.1 dominate (log n)^10 at some point?

Note: I should have clarified that this is all in relation to big O notation and algorithm complexity, where n^(0.1) and (log n)^10 represent the number of steps a given algorithm takes to perform a task. I've added some description to the question which will hopefully make things clearer.
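For a quick numerical check, the two functions can be compared in the log domain (so huge n values don't overflow a float). This sketch assumes base-2 logarithms; the base only shifts constants and doesn't change which function eventually wins. Writing n = 2^k:

```python
import math

# Compare f(n) = n**0.1 against g(n) = (log2 n)**10 for n = 2**k.
# Working with log10 of each value avoids overflow at astronomically
# large n. Base-2 logarithm is an assumption; any constant base works.
def f_beats_g(k):
    log10_f = 0.1 * k * math.log10(2)  # log10 of (2**k)**0.1 = 2**(0.1*k)
    log10_g = 10 * math.log10(k)       # log10 of (log2 2**k)**10 = k**10
    return log10_f > log10_g

for k in (20, 40, 1000, 2000):
    print(f"n = 2**{k}: n^0.1 beats (log n)^10? {f_beats_g(k)}")
```

At n = 2^40 (about a trillion) the log term still wins by many orders of magnitude, which matches the graphs; only around n = 2^1000 does n^0.1 take over for good.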


Solution

  • Since you didn't specify the base of the log, let's make it a constant B.

    Now the easiest way to prove what you want is to let n = B^(10m).

    Then n^(0.1) / (log_B n)^10 = B^m / (10m)^10

    Now we're comparing a simple exponential to a simple polynomial, and we can prove if necessary that the exponential will dominate... though n will get quite large before it does.

    For instance, if we find m such that 2^m = (10m)^10, we get m = 100 or so, so n = 2^1000.
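To put a number on that crossover, here is a minimal sketch (assuming B = 2) that searches for the first m where B^m overtakes (10m)^10, comparing natural logs so the values stay small:

```python
import math

# Find the first m where 2**m exceeds (10*m)**10. The exponential
# overtakes the polynomial exactly once and stays ahead afterwards.
# B = 2 is an assumed base for the log.
def exp_wins(m):
    return m * math.log(2) > 10 * math.log(10 * m)

m = 1
while not exp_wins(m):
    m += 1
print(m)                   # crossover exponent m
print(f"n = 2**{10 * m}")  # since n = B**(10*m)
```

This agrees with the estimate above: the crossover lands at m = 100, i.e. near n = 2^1000, which is why no graph at "everyday" values of n will ever show n^0.1 winning.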