I know that if a loop variable grows exponentially with a constant exponent k, the loop has a complexity of O(log log n). But I couldn't wrap my head around analyzing this function.
void foo(int n) {
    int a = 0;
    for (int i = 2; i <= n; i++) {
        if (i % 2 == 0) {
            a++;
        } else {
            i = (i - 1) * i;
        }
    }
}
Let's look at the values that i takes on over the iterations of the loop.

i = 2.
i = 3, since we take the "if" branch (which only increments a) and then add one.
i = 7, since we compute i = 3 * (3 - 1) and then add one.
i = 43, since we compute i = 7 * (7 - 1) and then add one.
i = 1807, since we compute i = 43 * (43 - 1) and then add one.

In what I can only describe as a magical coincidence, I happen to recognize the sequence of numbers 2, 3, 7, 43, 1807 as the start of Sylvester's sequence, which I just learned about a few weeks ago. Checking Wikipedia confirms that, indeed, Sylvester's sequence is given by the recurrence relation

S(n+1) = S(n) * (S(n) - 1) + 1, with S(0) = 2,

which is exactly the update this loop performs.
It's known that this sequence grows doubly-exponentially, with S(n) approximately equal to E^(2^n) for E = 1.264084735... . So this indeed means that the number of iterations of this algorithm as a function of n will be O(log log n), since i grows doubly-exponentially quickly.
But let's say that you didn't have the dumb luck of having found this exact sequence on a random Wikipedia search. Could you still show that the runtime is Θ(log log n)? The answer is yes, and here's one way to do it.
The first observation we can make is that this procedure causes i to grow more slowly than if we instead set i = i^2. You can check this by comparing the corresponding terms of the two sequences:

i = i(i - 1) + 1:  2, 3, 7, 43, ...
i = i^2:           2, 4, 16, 256, ...
If we were to grow i at this faster rate, then after k iterations of the loop we'd have i = 2^(2^k). (Notice that 2, 4, 16, 256, ... is 2^(2^0), 2^(2^1), 2^(2^2), ...). We stop once 2^(2^k) exceeds n, which happens after k = Θ(log log n) iterations. Since the real sequence grows more slowly than this one, this provides a lower bound on how many steps the algorithm will take.
To get an upper bound, imagine that we instead initialized i = 1.2 and then updated i = i^2 on each iteration. Then we'd get these numbers:

i = i(i - 1) + 1:  2, 3, 7, 43, ...
i = i^2:           1.2, 1.44, 2.07, 4.30, ...

You can prove, if you'd like, that the original sequence continues to outpace this smaller one term by term. And that smaller sequence has the nice property that the value of i on iteration k is (1.2)^(2^k), which means that we (again) stop after Θ(log log n) iterations.
In other words, our sequence is sandwiched between two other sequences, each of which grows doubly-exponentially, so this sequence grows doubly-exponentially as well. Therefore, this loop stops after Θ(log log n) steps.
Hope this helps!