In section 2.7.1, Integer constants, the book says:
To illustrate some of the subtleties of integer constants, assume that type int uses a 16-bit twos-complement representation, type long uses a 32-bit twos-complement representation, and type long long uses a 64-bit twos-complement representation. We list in Table 2-6 some interesting integer constants...
An interesting point to note from this table is that integers in the range 2^15 through 2^16 - 1 will have positive values when written as decimal constants but negative values when written as octal or hexadecimal constants (and cast to type int).
But, as far as I know, integers in the range 2^15 through 2^16 - 1 written as hex/octal constants also have positive values when cast to type unsigned. Is the book wrong?
In the described setup, decimal literals in the range [32768, 65535] have type long int, and hexadecimal literals in that range have type unsigned int. So the constant 0xFFFF is an unsigned int with value 65535, and the constant 65535 is a signed long int with value 65535.
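You can probe a constant's type on a real implementation with C11's _Generic. The sketch below is my own illustration (the TYPE_NAME macro is a hypothetical helper, not from the book): on the book's assumed 16-bit-int platform the two constants would report different types, while on a typical platform with 32-bit int both simply report "int".

    #include <stdio.h>

    /* Hypothetical helper: maps a constant's type to a printable name. */
    #define TYPE_NAME(x) _Generic((x),                    \
        int:                "int",                        \
        unsigned int:       "unsigned int",               \
        long:               "long int",                   \
        unsigned long:      "unsigned long int",          \
        long long:          "long long int",              \
        unsigned long long: "unsigned long long int",     \
        default:            "something else")

    int main(void)
    {
        /* On the hypothetical 16-bit-int platform, 65535 would be long int
           and 0xFFFF would be unsigned int.  On a typical 32-bit-int
           platform, both fit in int and both lines print "int". */
        printf("65535  has type %s\n", TYPE_NAME(65535));
        printf("0xFFFF has type %s\n", TYPE_NAME(0xFFFF));
        return 0;
    }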
I think your text is trying to discuss the cases (int)0xFFFF and (int)65535.
Now, since int cannot represent the value 65535, both of these casts perform an out-of-range conversion, whose result is implementation-defined (or which may raise an implementation-defined signal). Most commonly (in fact, on every two's-complement system I have heard of), the value is truncated to 16 bits and reinterpreted as signed, giving -1 in both cases.
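There is no portable way to get a 16-bit int on a modern machine, but a rough sketch of my own (not from the book) using int16_t as a stand-in shows the typical result of that out-of-range conversion:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* int16_t stands in for the hypothetical 16-bit int.  Converting
           65535 (or 0xFFFF) to it is out of range, so the result is
           implementation-defined; on common two's-complement systems the
           low 16 bits are kept and reinterpreted as signed, giving -1. */
        int16_t from_hex = (int16_t)0xFFFF;
        int16_t from_dec = (int16_t)65535;

        printf("(int16_t)0xFFFF = %d\n", from_hex);  /* typically -1 */
        printf("(int16_t)65535  = %d\n", from_dec);  /* typically -1 */
        return 0;
    }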
So the last paragraph of your quote is a bit strange. 65535 and 0xFFFF are both large positive numbers; (int)0xFFFF and (int)65535 are (probably) both negative numbers; but if you cast one and don't cast the other, you get a discrepancy, which is not surprising.
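For completeness, without any cast the two constants compare equal as the same positive value; this little sketch (again mine, not the book's) also holds on the hypothetical setup, because the usual arithmetic conversions turn both operands into a type that can represent 65535:

    #include <stdio.h>

    int main(void)
    {
        /* With no cast, both constants denote the value 65535, whatever
           their exact types are, so the comparison is true. */
        if (65535 == 0xFFFF)
            printf("65535 == 0xFFFF (both are %lu)\n", (unsigned long)0xFFFF);
        return 0;
    }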