As we know, there are two types of endianness: big endian and little endian. Let's say an integer takes 4 bytes; then the layout of the integer 1 should be 0x01 0x00 0x00 0x00 for little endian and 0x00 0x00 0x00 0x01 for big endian. To check whether a machine is little endian or big endian, we can write code like this:
#include <stdio.h>

int main(void)
{
    int a = 1;
    char *p = (char *)&a;
    // *p == 1 means little endian; otherwise, big endian
    printf("%s endian\n", *p == 1 ? "little" : "big");
    return 0;
}
As I understand it, *p evaluates to the first byte: 0x01 for little endian and 0x00 for big endian (the first byte of each layout above); that's how the code works.
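To make this concrete, here is a small sketch that dumps every byte of a (assuming a 4-byte int; reading an object's bytes through an unsigned char pointer is well-defined):

#include <stdio.h>

int main(void)
{
    int a = 1;
    unsigned char *p = (unsigned char *)&a;

    // Prints "01 00 00 00" on a little-endian machine
    // and "00 00 00 01" on a big-endian one.
    for (size_t i = 0; i < sizeof a; i++)
        printf("%02X ", (unsigned)p[i]);
    printf("\n");
    return 0;
}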
Now I don't quite understand how bit-fields work with different endianness. Let's say we have such a struct:
typedef struct {
    unsigned char a1 : 1;
    unsigned char a2 : 1;
    unsigned char a6 : 3;
} Bbit;
And we make these assignments:
Bbit bit;
bit.a1 = 1;
bit.a2 = 1;
Is this piece of code implementation-specific? That is, are the values of bit.a1 and bit.a2 1 on little endian but 0 on big endian, or are they definitely 1 regardless of endianness?
Let's say we have a struct:

typedef struct {
    unsigned char a1 : 1;
    unsigned char a2 : 1;
    unsigned char a6 : 3;
} Bbit;

and a definition:

Bbit byte;

Suppose byte is stored in a single byte and is currently zeroed out: 0000 0000.
byte.a1 = 1;
This sets the bit called a1 to 1. If a1 is the first bit, then byte has become 1000 0000, but if a1 is the fifth bit, then byte has become 0000 1000, and if a1 is the eighth bit, then byte has become 0000 0001.
byte.a2 = 1;
This sets the bit called a2 to 1. If a2 is the second bit, then byte has (likely) become 1100 0000, but if a2 is the sixth bit, then byte has (likely) become 0000 1100, and if a2 is the seventh bit, then byte has become 0000 0011. (These are only "likely" because there is no guarantee that the bits follow some reasonable order. It's just unlikely that a compiler will go out of its way to mess up this example.)
Endianness is not a factor when it comes to the values that are stored. Only the bits representing the specified bit-field are changed with each assignment, and the value being assigned is reduced to that number of bits (for an unsigned bit-field like these, an oversized value simply wraps modulo 2^width; for a signed bit-field, an out-of-range value gives an implementation-defined result).
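To close the loop on the question asked: bit.a1 and bit.a2 read back as 1 on any conforming compiler, whatever the endianness. A minimal sketch, reusing the Bbit struct from the question:

#include <stdio.h>

typedef struct {
    unsigned char a1 : 1;
    unsigned char a2 : 1;
    unsigned char a6 : 3;
} Bbit;

int main(void)
{
    Bbit bit = {0};

    bit.a1 = 1;
    bit.a2 = 1;

    // Prints "a1 = 1, a2 = 1" regardless of endianness;
    // only the physical position of the bits varies.
    printf("a1 = %d, a2 = %d\n", bit.a1, bit.a2);
    return 0;
}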