Suppose I have

    struct A
    {
        signed char a:1;
        unsigned char b:1;
    };
If I have

    A two, three;
    two.a = 2; two.b = 2;
    three.a = 3; three.b = 3;

then two will contain 0s in its fields, while three will contain 1s. So this makes me think that assigning a number to a single-bit field keeps only the least significant bit (2 is 10 in binary and 3 is 11).
So, my question is: is this correct and cross-platform, or does it depend on the machine, the compiler, etc.? Does the standard say anything about this, or is it completely implementation-defined?
Note: The same result may be achieved by assigning 0 and 1 instead of 2 and 3, respectively. I used 2 and 3 just to illustrate my question; I wouldn't use them in a real-world situation.
P.S. And yes, I'm interested in both C and C++, so please don't tell me they are different languages, because I know that :)
The rules in this case are no different from those for full-width arithmetic. Bit-fields behave the same way as the corresponding full-size types, except that their width is limited to the value you specified in the bit-field declaration (6.7.2.1/9 in C99).
Assigning an out-of-range value to a signed bit-field leads to an implementation-defined result, which means that the behavior you observe with bit-field a is generally not portable.
Assigning an overflowing value to an unsigned bit-field uses the rules of modulo arithmetic: the value is taken modulo 2^N, where N is the width of the bit-field. This means, for example, that assigning even numbers to your bit-field b will always produce the value 0, while assigning odd numbers will always produce 1.