I've recently encountered a piece of code that supposedly works fine, but I don't quite understand why.
#include <bitset>
#include <cstddef>
#include <iostream>

size_t a = 19;
std::cout << std::bitset<8>(a) << std::endl;
a ^= a & -a;
std::cout << std::bitset<8>(a) << std::endl;
This piece of code will invert the least significant bit of a given unsigned integer. I would prefer to just write a ^= 1;, but I'm puzzled by why the piece of code above actually works. I would have thought that making an unsigned int negative results in undefined behavior?
a & -a gives you the least significant 1-bit set in a. For an odd number it is indeed 1, but that's not the case in general, of course.
Making an unsigned value negative is well-defined and occasionally useful notation: -a for positive a is -a + 2^N, where N is the number of bits in the type. An alternative to writing size_t a = std::numeric_limits<size_t>::max(); is to write size_t a = -1; for example.
So a ^= a & -a; flips the least significant 1-bit of a to 0, i.e. it clears the lowest set bit.
Rather clever really.