There are already questions about counting how many 1s there are in a number, but this question is about judging whether the number of 1s is even or odd.
No loop or conditional (including switch) statements are allowed. Division, multiplication, and modulus operators should also be avoided. To be specific, we may assume the input is a 32-bit unsigned integer.
Actually I've got an implementation already, but I cannot work out the reason why it works. Any proof of its correctness or any new idea would be very helpful.
int even_ones(unsigned x)
{
    x ^= x >> 16;
    x ^= x >> 8;
    x ^= x >> 4;
    x ^= x >> 2;
    x ^= x >> 1;
    return !(x & 1);
}
I assume you know what the exclusive-or operation ^ does: if two bits have the same value, the result is 0; otherwise it is 1. So when I have two one-bit numbers A and B, then A^B will be zero if A and B are both 0, or both 1. In other words, A^B is zero exactly when the sum of ones in A and B is even...
Now let's do this two bits at a time: C and D are two-bit numbers. Here are the possible combinations:
C    D    C^D   parity
00 00 00 even
01 00 01 odd
10 00 10 odd
11 00 11 even
00 01 01 odd
01 01 00 even
10 01 11 even
11 01 10 odd
00 10 10 odd
01 10 11 even
10 10 00 even
11 10 01 odd
00 11 11 even
01 11 10 odd
10 11 01 odd
11 11 00 even
As you can see, the operation in each instance reduces the number of bits by half, and the resulting number of 1 bits is odd if you started out with an odd number (because pairs of 1s cancel out, but all the others are unchanged).
Now it should be obvious to see why the same thing continues to be true when you start with larger numbers (4 bit, 8 bit, 16 bit). In essence, your algorithm starts with a 32 bit number and splits it into two 16 bit numbers. By getting rid of "double 1's", it reduces the number of bits by half; then operates on the half that remains, and repeats until there is just a single bit left. By testing whether that bit is a one (odd) or zero (even), you get your answer.
In case it's not clear: an operation like x ^= x>>16 shifts the top 16 bits down into the bottom 16 and XORs them there. It doesn't actually clear the top bits, so "a mess is left behind" up there, but the algorithm simply ignores that mess in what follows. See the following trace (starting with 8 bits for simplicity):
x = abcdefgh
x >> 4 = 0000abcd
new x = abcdijkl
x >> 2 = 00abcdij
new x = abmnopqr
x >> 1 = 0abmnopq
new x = astuvwxy
In this, the last digit y is the XOR of r and q, which in turn are the XORs of l,j and k,i; these in turn are the XORs of h,d, f,b, g,c, and e,a respectively. As you can see, you end up with the XOR of all the original bits; and as I explained above, that means the count of 1s is odd if the least significant bit is now a 1, and even if it is a 0.
I hope this helped.