I'm working on some simple bit manipulation problems in C++, and came across this while trying to visualize my steps. I understand that the number of bits assigned to different primitive types may vary from system to system. On my machine, sizeof(int) outputs 4, so I've got 4 chars' worth of bits for my value. I also know that a byte is usually defined as 8 bits, but that this is not necessarily the case. When I output CHAR_BIT I get 8. I therefore expect there to be a total of 32 bits for my int values.
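A minimal sketch of that arithmetic (just printing the two factors and their product) looks like this:

#include <climits>   // CHAR_BIT
#include <iostream>

int main() {
    // chars per int, bits per char, and their product
    std::cout << "sizeof(int) = " << sizeof(int) << '\n';
    std::cout << "CHAR_BIT    = " << CHAR_BIT << '\n';
    std::cout << "total bits  = " << sizeof(int) * CHAR_BIT << '\n';
}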
I can then go ahead and print the binary value of my int to the screen:
int max=~0; //All my bits are turned on now
std::cout<<std::bitset<sizeof(int)*CHAR_BIT>(max)<<std::endl;
$:11111111111111111111111111111111
I can increase the bitset size if I want though:
int max=~0;
std::cout<<std::bitset<sizeof(int)*CHAR_BIT*3>(max)<<std::endl;
$:000000000000000000000000000000001111111111111111111111111111111111111111111111111111111111111111
Why are there so many ones? I would have expected only 32 ones, padded with zeros. Instead there are twice as many. What's going on?
When I repeat the experiment with unsigned int, which has the same size as int, the extra ones don't appear:
unsigned int unmax=~0;
std::cout<<std::bitset<sizeof(unsigned int)*CHAR_BIT*3>(unmax)<<std::endl;
$:000000000000000000000000000000000000000000000000000000000000000011111111111111111111111111111111
The constructor of std::bitset takes an unsigned long long. When you convert -1 (which is what ~0 is as an int) to an unsigned long long, the conversion happens modulo 2^64, so you end up with 8 bytes (64 bits) worth of 1s.
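A minimal sketch of that conversion in isolation (the variable names are just for illustration) might look like this:

#include <bitset>
#include <iostream>

int main() {
    int max = ~0;                                   // -1 on a two's complement machine
    unsigned long long widened = max;               // converted modulo 2^64 -> all 64 bits set
    std::cout << widened << '\n';                   // prints 18446744073709551615
    std::cout << std::bitset<96>(widened) << '\n';  // 32 zeros followed by 64 ones
}

The 64-bit value produced by that conversion is what the bitset stores; the template size only controls how many bits are displayed above it as zero padding.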
It doesn't happen with unsigned int because you are passing the value 4294967295 instead of -1. Converting that to an unsigned long long preserves the value, and 4294967295 is just 32 1s in the low bits of an unsigned long long.
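For comparison, here is a sketch of the unsigned case, plus one way to get the picture you expected from a signed int by casting it to unsigned int first (the cast is a suggestion on my part, not something bitset requires):

#include <bitset>
#include <iostream>

int main() {
    unsigned int unmax = ~0;    // ~0 is the int -1; stored in an unsigned int it becomes 4294967295
    std::cout << std::bitset<96>(unmax) << '\n';
    // 64 zeros followed by 32 ones

    int max = ~0;
    // Casting to unsigned int first keeps only the 32 low bits,
    // so widening to unsigned long long no longer produces the extra ones.
    std::cout << std::bitset<96>(static_cast<unsigned int>(max)) << '\n';
    // 64 zeros followed by 32 ones as well
}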