I have a simple program with a short variable declaration:
short int v=0XFFFD;
printf("v = %d\n",v);
printf("v = %o\n",v);
printf("v = %X\n",v);
The result is:
v = -3
v = 37777777775
v = FFFFFFFD
I don't understand how to calculate these values. I know that a short variable can hold values between -32768 and 32767, and the value 0XFFFD causes an overflow, but I don't know how to calculate the exact value, which is -3 in this case.
Also, if my declaration is v=0XFFFD, why does the %X conversion print FFFFFFFD?
First of all, a short can be as short as 16 bits (which is probably the case with your compiler). This means that 65533 can't be represented; the assignment overflows, and on a two's-complement machine it wraps to -3 (since short int is a signed type; strictly speaking, the result of an out-of-range conversion to a signed type is implementation-defined). But you already knew that.
Secondly, when passed as an argument to printf, the short int is automatically promoted to int (the default argument promotions); since v contains -3, that is the value printf receives.
Thirdly, the %o and %X conversions expect an unsigned int, which is not quite what you've supplied. This is undefined behavior in theory, but in practice it's quite predictable: the bit pattern of -3 is interpreted as an unsigned integer instead, which on machines with a 32-bit int happens to be 0xFFFFFFFD.