While looking for an efficient algorithm for multiplying two large numbers, I came across the C function below in a forum:
...
typedef unsigned long long ULL;

ULL multiply(ULL a, ULL b)
{
    ULL result = 0;
    while (a > 0)
    {
        if (a & 1)           /* if the lowest bit of a is set...      */
        {
            result += b;     /* ...add the current (shifted) b        */
        }
        a >>= 1;             /* move on to the next bit of a          */
        b <<= 1;             /* double b to match the next bit weight */
    }
    return result;
}
...
The algorithm above doesn't require a multiply instruction; it uses only bit-shift and addition operations (thus making it quicker).
I've checked that the function produces correct results, but I don't fully understand how it works. An explanation would be helpful.
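For what it's worth, this is roughly the kind of check I ran (the test values are arbitrary examples I picked; the snippet assumes it is compiled in the same file as the multiply function above):

#include <stdio.h>
#include <assert.h>

typedef unsigned long long ULL;

ULL multiply(ULL a, ULL b);   /* the function from the forum post above */

int main(void)
{
    /* a few spot checks against the built-in * operator */
    assert(multiply(0, 12345) == 0);
    assert(multiply(25, 7) == 25ULL * 7ULL);
    assert(multiply(123456789ULL, 54321ULL) == 123456789ULL * 54321ULL);
    printf("all checks passed\n");
    return 0;
}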
It's iterating over all the 1 bits in a, adding b shifted by the appropriate amount.
First, note that if a is 11001 (in binary), it can be treated as 10000 + 1000 + 1, so a * b is (10000*b) + (1000*b) + (1*b).
Then, note that 10000*b is just b << 4.
Each time through the loop, b is shifted left by 1 to reflect the current shift amount, and a is shifted right by 1 so that the low-order bit can be tested. In this example, it ends up adding b + (b << 3) + (b << 4), which is just (1*b) + (1000*b) + (10000*b).
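If it helps, here is an instrumented copy of the same loop (the printf calls and the name multiply_traced are my additions) that prints what happens on each iteration for a = 25, i.e. 11001 in binary:

#include <stdio.h>

typedef unsigned long long ULL;

/* same algorithm as in the question, with a trace print per iteration */
static ULL multiply_traced(ULL a, ULL b)
{
    ULL result = 0;
    while (a > 0)
    {
        if (a & 1)
            result += b;     /* low bit of a is set: add the current shifted b */
        printf("a=%llu  b=%llu  result=%llu\n", a, b, result);
        a >>= 1;
        b <<= 1;
    }
    return result;
}

int main(void)
{
    multiply_traced(25, 7);  /* prints the three additions: b, b<<3, b<<4 */
    return 0;
}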