I'm trying to solve a question that compares two algorithms in terms of their worst-case running times: finding the input size for which one algorithm becomes faster than the other.
The two algorithms are:
A1 = 2n log10 n
A2 = 0.1n^2
Basically, I am trying to solve the following inequality for n:
2n log10 n < 0.1n^2
Can anyone please point me in the right direction?
I have managed to get up to:
log10 n < 0.05n ==> n < 10^(0.05n)
But I have no idea what to do from here (or perhaps I have gone about the wrong way trying to solve it).
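One thing I did try was brute-force tabulation (a throwaway Python script, since I couldn't make progress algebraically) just to see roughly where the two sides cross:

```python
import math

def a1(n):
    # Worst-case running time of the first algorithm: 2 n log10 n
    return 2 * n * math.log10(n)

def a2(n):
    # Worst-case running time of the second algorithm: 0.1 n^2
    return 0.1 * n * n

# Print both running times at a few sample sizes to spot the crossover
for n in [5, 10, 20, 29, 30, 40]:
    winner = "A1 faster" if a1(n) < a2(n) else "A2 faster"
    print(n, round(a1(n), 2), round(a2(n), 2), winner)
```

It looks like the winner flips somewhere between n = 29 and n = 30, but I'd like to see how to get there analytically.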
Thank you for your help in advance!
Actually, you are trying to solve the inequality

0.1n^2 <= 2n log10 n

because the 0.1n^2 algorithm is only going to be faster for a very short time, and then for any larger values of n, the 2n log10 n algorithm will be faster.
Ignore the case n <= 0, multiply by 10, and divide by n to produce:

n <= 20 log10 n
Then divide by 20 and exponentiate both sides with a base of 10:

10^(n/20) <= n
Use a numeric solver to find the zeros of f(x) = 10^(x/20) - x on the interval [1, 40], since clearly 40 is an upper bound (because 10^(40/20) = 100 > 40).
For instance, in Matlab:
>> fzero(@(x) 10^(x/20)- x, 20)
ans =
29.3531
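If you don't have Matlab handy, the same root can be found by plain bisection; here's a minimal Python sketch (the bracket [20, 40] is my choice — it just needs to straddle the larger of the two zeros, and f(20) = -10 < 0 while f(40) = 60 > 0):

```python
def f(x):
    # Zero of f marks where the two running times cross: 10^(x/20) = x
    return 10 ** (x / 20) - x

# Bisection on [20, 40]: f changes sign exactly once in this bracket,
# so halving the interval repeatedly converges to the crossover point
lo, hi = 20.0, 40.0
for _ in range(80):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

root = (lo + hi) / 2
print(root)  # agrees with Matlab's 29.3531 to several decimals
```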
So for any integer n from 2 up through 29, the 0.1n^2 algorithm is faster, and for n > 29, the 2n log10 n algorithm wins. (At n = 1, log10 n = 0, so 2n log10 n is trivially the smaller of the two.)
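To double-check the integer cutoff, a brute-force comparison over the range capped by the upper bound of 40 from above (plain Python, not part of the derivation):

```python
import math

def t1(n):
    # 2 n log10 n
    return 2 * n * math.log10(n)

def t2(n):
    # 0.1 n^2
    return 0.1 * n * n

# Collect the integers (starting at 2, since log10(1) = 0 is degenerate)
# for which the quadratic algorithm has the smaller running time
faster_quadratic = [n for n in range(2, 41) if t2(n) < t1(n)]
print(faster_quadratic[0], faster_quadratic[-1])  # 2 29
```

The list is exactly the integers 2 through 29, matching the root at 29.3531.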