I'm trying to come up with the recurrence relation for Stein's Algorithm (the binary GCD algorithm), but my ability to trace through it is proving not up to scratch.
I'm completely stumped by the multiple paths and recursive calls, as well as the fact that we're dealing with the total bits to represent our values rather than the values themselves.
Here's the algorithm:
stein(u, v):
if either of u or v is 0: return their sum.
if either of u or v is 1: return 1.
if both u and v are even: return 2 * stein(u/2, v/2)
if u is even and v is odd: return stein(u/2, v)
if u is odd and v is even: return stein(u, v/2)
if both u and v are odd: return stein((larger - smaller)/2, smaller)
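In case it helps with tracing, here's a direct Python transcription of that pseudocode (the variable names and integer-division details are just my own reading of it):

    def stein(u, v):
        if u == 0 or v == 0:                 # gcd(x, 0) = x, so returning the sum works
            return u + v
        if u == 1 or v == 1:                 # gcd with 1 is always 1
            return 1
        if u % 2 == 0 and v % 2 == 0:        # both even: pull out a common factor of 2
            return 2 * stein(u // 2, v // 2)
        if u % 2 == 0:                       # u even, v odd: the factor of 2 can't be in the gcd
            return stein(u // 2, v)
        if v % 2 == 0:                       # u odd, v even
            return stein(u, v // 2)
        larger, smaller = max(u, v), min(u, v)
        return stein((larger - smaller) // 2, smaller)   # both odd: subtract, then halve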
I'm trying to find the recurrence relation T(n) where n is the total number of bits needed to represent both u and v. I think the first step here for me should be working out when worst-case performance occurs.
I think that each division operation reduces the number of bits by 1, but that's about the most I understand thus far.
I've tried tracing the algorithm but to no avail. I've also read the relevant section of Knuth Vol. 2 but I think it's a little outside the current scope of my understanding because it made little sense to me.
You want a recurrence relation in terms of the number of bits in u and v, not in terms of the value of stein(u, v), so let's reason it through a bit.
Given arbitrary u and v, what are the best and worst cases?
Best case (we finish quickly): we hit one of the constant-time base cases.
Worst case: a recursive call to stein(u/2, v/2), stein(u, v/2), or stein((larger - smaller)/2, smaller).
In the first scenario we're halving both values, which removes one binary digit from each, i.e. two digits total, and costs us one operation: T(n) = T(n-2) + 1.
In the second scenario we're only halving one of the values, so we end up with one digit fewer than we started with. That also costs one operation: T(n) = T(n-1) + 1.
The third scenario gets uglier. A subtraction iterates over all n digits. This means that if we lose m digits but spend n steps on the subtraction, our recurrence is T(n) >= T(n-m) + n. We still have to pin down m, and if we could show this step eliminates many digits (e.g. m = n/2), the recurrence might not be too slow.
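(For a sense of scale, assuming that optimistic m = n/2 actually held: T(n) = T(n/2) + n telescopes to n + n/2 + n/4 + ... <= 2n, i.e. linear in the bit count. So everything hinges on how small m can be forced to be.)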
Unfortunately, we can easily come up with a very bad scenario.
Set v to 3. That guarantees it's odd, and it will always be the smaller of the two. Now if we pick u such that (u-v)/2 keeps coming out odd, the recursion will keep taking that third case. And with v = 3, (u-v)/2 is only 1 digit shorter than u. This means in the worst case m is 1 ==> T(n) = T(n-1) + n.
Example of that bad scenario: (21, 3) -> (9, 3) -> (3, 3). We can keep constructing larger bad numbers by taking u' = 2u + 3 (3, 9, 21, 45, ...), with v staying at 3. As you can see, these "bad" numbers grow one binary digit at a time, and every step goes down the third path. That's why m is 1.
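If you want to see that cost model in action, here's a small instrumented sketch (my own, not part of the algorithm) that uses the same charging scheme as above: 1 unit per halving step, and the current total bit count per odd-odd subtraction step, run against that (u, 3) family:

    def stein_cost(u, v):
        """Return (gcd, cost): 1 unit per halving, bits(u) + bits(v) per odd-odd step."""
        if u == 0 or v == 0:
            return u + v, 0
        if u == 1 or v == 1:
            return 1, 0
        if u % 2 == 0 and v % 2 == 0:
            g, c = stein_cost(u // 2, v // 2)
            return 2 * g, c + 1
        if u % 2 == 0:
            g, c = stein_cost(u // 2, v)
            return g, c + 1
        if v % 2 == 0:
            g, c = stein_cost(u, v // 2)
            return g, c + 1
        larger, smaller = max(u, v), min(u, v)
        g, c = stein_cost((larger - smaller) // 2, smaller)
        return g, c + u.bit_length() + v.bit_length()    # the subtraction touches every digit

    u = 3
    for _ in range(12):
        u = 2 * u + 3                                    # 9, 21, 45, ... the bad family
        n = u.bit_length() + (3).bit_length()            # total input bits
        print(n, stein_cost(u, 3)[1])                    # charged cost grows quadratically in n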
That last recurrence is unfortunately O(n^2).
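You can see the quadratic directly by unrolling it: T(n) = T(n-1) + n = T(n-2) + (n-1) + n = ... = 1 + 2 + ... + n = n(n+1)/2.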