This code:

```c
signed char a = 128;
a++;
printf("%d", a);
```
prints '-127'. I understand why: the value "wraps around" when it reaches the limit and continues from there. But I can't find out whether that behaviour is specified in the standard, or whether it's just something my particular compiler happens to do.
Assuming the most common platforms with 8-bit `char` types and 2's complement representation (x86, ARM, MIPS, PPC, MSP430, etc.), the following happens:

`128` is too large for a `signed char`, whose range on these platforms is -128 to 127. It is converted in an implementation-defined way to a `signed char`. Typically the value is simply truncated and bit-copied (1:1). In 2's complement, `0b10000000` is the representation of decimal -128 in a `signed char`, so `a` starts out as -128, and `-128 + 1` yields `-127`.

On other platforms the result varies, but most likely they also just use the lower bits of the `int` constant `128`.
To detect such flaws (though not all of them), enable compiler warnings and pay heed to them. gcc will report a truncation warning (`-Wconversion`) for the initialiser.