When I construct a bitset with std::bitset<N>::bitset( unsigned long long ) and access it via operator[], the bits seem to be ordered in little-endian fashion. Example:
std::bitset<4> b(3ULL);
std::cout << b[0] << b[1] << b[2] << b[3];
prints 1100 instead of 0011, i.e. the little end (the LSB) is at the lowest index, 0.
Looking up the standard, it says
initializing the first M bit positions to the corresponding bit values in val
Programmers naturally think of binary digits as numbered from LSB to MSB (right to left), so the first M bit positions understandably run LSB → MSB, and bit 0 ends up at b[0].
However, under shifting, the definition goes
The value of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are zero-filled.
Here one has to read the bits of E1 as going from MSB → LSB and then shift them left E2 positions. Had they been written LSB → MSB, only a right shift of E2 positions would give the same result.
I'm surprised: everywhere else in C++ (bitwise operations, shifting, etc.) the language seems to follow the natural English left-to-right writing order. Why be different here?
There is no notion of endianness as far as the standard is concerned. When it comes to std::bitset, [template.bitset]/3 defines bit position:
When converting between an object of class bitset<N> and a value of some integral type, bit position pos corresponds to the bit value 1<<pos. The integral value corresponding to two or more bits is the sum of their bit values.
Using this definition of bit position in your standard quote

initializing the first M bit positions to the corresponding bit values in val

a val with binary representation 11 leads to a bitset<N> b with b[0] = 1, b[1] = 1 and the remaining bits set to 0.