(assuming a 64-bit machine)
e.g.
int n = 0xFFFFFFFF; // max 32-bit unsigned number
printf("%u\n", n);
The maximum positive number that a regular signed integer (32-bit) can store is 0x7FFFFFFF.
In the above example I'm assigning the maximum unsigned integer value to a regular signed integer, yet I receive no warnings or errors from GCC (even with -Wall -Wextra), and the result is printed without problems.
Appending U or L to the hex constant changes nothing.
Why is that?
Assume that int and unsigned int are 32 bits, which is the case on most platforms you're likely to be using (both 32-bit and 64-bit systems). Then the constant 0xFFFFFFFF is of type unsigned int and has the value 4294967295.
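If you want to verify that on your own system, C11's _Generic can report the type the compiler assigns to the constant (a quick sketch; the program is my own illustration, not from the question):

#include <stdio.h>

int main(void) {
    // A hexadecimal constant gets the first type in the sequence
    // int, unsigned int, long, unsigned long, ... that can hold its value.
    const char *type = _Generic(0xFFFFFFFF,
                                int: "int",
                                unsigned int: "unsigned int",
                                long: "long",
                                unsigned long: "unsigned long",
                                default: "something else");
    printf("0xFFFFFFFF has type %s\n", type); // "unsigned int" when int is 32 bits
    return 0;
}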
This:

int n = 0xFFFFFFFF;

implicitly converts that value from unsigned int to int. The result of the conversion is implementation-defined; there is no undefined behavior. (In principle, it can also cause an implementation-defined signal to be raised, but I know of no implementations that do that.)
Most likely the value stored in n will be -1.
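You can see the typical outcome with a minimal program (assuming 32-bit int on a two's-complement implementation):

#include <stdio.h>

int main(void) {
    int n = 0xFFFFFFFF; // implementation-defined conversion from unsigned int
    printf("%d\n", n);  // %d matches the type of n; typically prints -1
    return 0;
}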
printf("%u\n", n);
Here you use a %u format specifier, which requires an argument of type unsigned int, but you pass it an argument of type int. The standard says that values of corresponding signed and unsigned types are interchangeable as function arguments, but only for values that are within the range of both types, which is not the case here.
This call does not perform a conversion from int to unsigned int. Rather, an int value is passed to printf, which assumes that the value it received is of type unsigned int. The behavior is undefined. (Again, this would be a reasonable thing to warn about.)
The most likely result is that the int value of -1, which (assuming 2's-complement) has the same representation as 0xFFFFFFFF, will be treated as if it were an unsigned int value of 0xFFFFFFFF, which is printed in decimal as 4294967295.
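If the unsigned reinterpretation is actually what you want, you can get it with fully defined behavior by converting explicitly before the call; unlike passing a mismatched argument through varargs, signed-to-unsigned conversion is completely specified by the standard (a sketch under the same 32-bit assumptions):

#include <stdio.h>

int main(void) {
    int n = -1;
    // An explicit cast performs a signed-to-unsigned conversion, which is
    // well defined: the result is n reduced modulo 2^32, i.e. 4294967295.
    printf("%u\n", (unsigned int)n);
    return 0;
}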
You can get a warning on int n = 0xFFFFFFFF; by using the -Wconversion or -Wsign-conversion option. These options are not included in -Wextra or -Wall. (You'd have to ask the gcc maintainers why.)
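For example (demo.c is a hypothetical file containing the snippet above; the exact diagnostic text varies between GCC versions):

gcc -Wall -Wextra -Wconversion demo.c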
I don't know of an option that will cause a warning on the printf call.
(Of course the fix is to define n as an unsigned int, which makes everything correct and consistent.)
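That is:

unsigned int n = 0xFFFFFFFF; // the constant is already an unsigned int, so no conversion
printf("%u\n", n);           // argument type matches %u; prints 4294967295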