What is the difference between using
#define CONSTANT_1 (256u)
#define CONSTANT_2 (0XFFFFu)
and
#define CONSTANT_1 (256)
#define CONSTANT_2 (0XFFFF)
When do I really need to add the u suffix, and what problems can we get into if we don't?
I am mostly interested in example expressions where one usage can go wrong while the other works.
The trailing u makes the constant have unsigned type. For the examples given, this is probably unnecessary and may have surprising consequences:
#include <stdio.h>

#define CONSTANT_1 (256u)

int main() {
    /* -1 is converted to unsigned int (UINT_MAX), so the comparison is false */
    if (CONSTANT_1 > -1) {
        printf("expected this\n");
    } else {
        printf("but got this instead!\n");
    }
    return 0;
}
The reason for this surprising result is that the comparison is performed using unsigned arithmetic, -1 being implicitly converted to unsigned int with value UINT_MAX. Enabling extra warnings will save the day on modern compilers (-Wall -Wextra -Werror for gcc and clang).
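A minimal sketch of that conversion in isolation (the variable name u is just for illustration; output assumes a typical 32-bit unsigned int):

#include <limits.h>
#include <stdio.h>

int main() {
    /* converting -1 to unsigned int wraps around modulo UINT_MAX + 1 */
    unsigned int u = -1;
    printf("%u\n", u);             /* e.g. 4294967295 with 32-bit unsigned int */
    printf("%d\n", u == UINT_MAX); /* 1: this conversion is guaranteed by the standard */
    return 0;
}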
256u has type unsigned int, whereas 256 has type int. The other example is more subtle: 0xFFFFu has type unsigned int, and 0xFFFF has type int, except on systems where int has just 16 bits, where it has type unsigned int.
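As a sketch of where that subtlety can bite (macro names are made up for illustration, and the output assumes the common case of a 32-bit int), a comparison against a negative value flips depending on the suffix:

#include <stdio.h>

#define MASK_SIGNED   (0xFFFF)
#define MASK_UNSIGNED (0xFFFFu)

int main() {
    /* With 32-bit int: -1 < 0xFFFF compares two ints and is true,       */
    /* but -1 < 0xFFFFu converts -1 to UINT_MAX, so the result is false. */
    printf("-1 < 0xFFFF  : %d\n", -1 < MASK_SIGNED);   /* prints 1 */
    printf("-1 < 0xFFFFu : %d\n", -1 < MASK_UNSIGNED); /* prints 0 */
    return 0;
}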
Some industry standards such as MISRA-C mandate such constant typing, a counterproductive recommendation in my humble opinion.