Below is an algorithm I picked up somewhere (I forget where exactly, possibly from this answer) for calculating the number of bits set in an integer, i.e. its Hamming weight.
function hamming_weight($i)
{
    // count the bits set in each 2-bit pair
    $i = $i - (($i >> 1) & 0x55555555);
    // sum adjacent pairs into 4-bit nibble counts
    $i = ($i & 0x33333333) + (($i >> 2) & 0x33333333);
    // fold nibbles into per-byte counts, then multiply to sum all bytes into the top byte;
    // mask to 32 bits first, since PHP ints are wider than the 32-bit C original assumes
    return (((($i + ($i >> 4)) & 0x0F0F0F0F) * 0x01010101) & 0xFFFFFFFF) >> 24;
}
(I happened to have it handy in PHP, but this could really be any language.)
If I'm not terribly mistaken, this runs in O(1) - it's a fixed sequence of operations, with no branches or loops at all.
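For what it's worth, a quick sanity check with a few values I picked myself (assuming 32-bit inputs):

var_dump(hamming_weight(0));          // int(0)
var_dump(hamming_weight(7));          // int(3)
var_dump(hamming_weight(0xFF));       // int(8)
var_dump(hamming_weight(0xFFFFFFFF)); // int(32)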
Now here's a bit-counting function I wrote myself, which, readability aside, I deem inferior:
function hamming_weight_2($i)
{
    $weight = 0;
    // walk all 32 bit positions: $k is the single-bit mask, $s the position
    for ($k = 1, $s = 0; $k < 0xFFFFFFFF; $k *= 2, $s++)
    {
        // isolate bit $s and shift it down, adding 0 or 1 to the total
        $weight += (($i & $k) >> $s);
    }
    return $weight;
}
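Just to convince myself the two really agree, here's a quick throwaway cross-check of my own over random values:

for ($t = 0; $t < 1000; $t++) {
    $x = (mt_rand(0, 0xFFFF) << 16) | mt_rand(0, 0xFFFF); // random 32-bit value
    assert(hamming_weight($x) === hamming_weight_2($x));
}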
However, in what way is it inferior? At first I thought "well there's a loop, so this should run in linear time", but then I realized the loop doesn't depend on the size of the input at all. No matter the size of $i, the number of iterations stays the same.
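To make that concrete, here's a throwaway count of the iterations (the variable names are mine):

// the loop condition never looks at $i, so the iteration count is fixed
$iterations = 0;
for ($k = 1, $s = 0; $k < 0xFFFFFFFF; $k *= 2, $s++) {
    $iterations++;
}
echo $iterations; // always 32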
What I'm thus wondering is this: when the number of bits is fixed like this, does it even make sense to compare the two functions in terms of big-O complexity - and if not, how should I reason about why the first one is better?
In this case, looking at the question in terms of big-O complexity doesn't make sense, because there is a fixed number of bits in your variable. Instead, you should count the individual operations:
Algorithm 1: 4 shifts, 5 ANDs (including the 32-bit mask), 2 additions, 1 subtraction and 1 multiplication - about a dozen operations in total, regardless of the input.
Algorithm 2: 32 iterations, and each pass does a comparison, an AND, a shift, an addition, a multiplication ($k *= 2) and an increment - roughly 6 × 32 ≈ 190 operations.
Even allowing for replacing those multiplications with additional bitshifts, significantly more work is being done in the second algorithm.
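For example, the loop's $k *= 2 can simply become $k <<= 1, and the final multiplication in the first algorithm can be replaced by the usual shift-and-add folding - a sketch (the function name is my own):

function hamming_weight_no_mul($i)
{
    $i = $i - (($i >> 1) & 0x55555555);
    $i = ($i & 0x33333333) + (($i >> 2) & 0x33333333);
    $i = ($i + ($i >> 4)) & 0x0F0F0F0F;
    // fold the four per-byte counts together with shifts instead of * 0x01010101
    $i += $i >> 8;
    $i += $i >> 16;
    return $i & 0x3F; // the total is at most 32, so 6 bits are enough
}

That version comes to about 15 elementary operations - still a small fraction of the ~190 the loop performs.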