Take, for example, the following C code:
```c
#include <stdio.h>

int main(int argc, char *argv[])
{
    signed char i;
    unsigned char count = 0xFF;

    for (i = 0; i < count; i++)
    {
        printf("%x\n", i);
    }
    return 0;
}
```
This code runs in an infinite loop, even when I compile it as follows:
# gcc -Wall -Wpedantic -Wconversion -Wsign-compare -Wtype-limits -Wsign-conversion test.c -o test
Does anyone know of a compiler flag that would warn about this kind of issue?
To be clear, I'm not asking why I get an infinite loop, but whether there is a way to catch it with a compiler flag or a static-analysis tool.
`i` is a `signed char`, so incrementing it beyond `SCHAR_MAX` has an implementation-defined effect. The computation `i + 1` is performed after promotion of `i` to `int`, and it does not overflow (unless `sizeof(int) == 1` and `SCHAR_MAX == INT_MAX`). Yet this value is beyond the range of `i`, and since `i` has a signed type, either the stored result is implementation-defined or an implementation-defined signal is raised (C11 6.3.1.3p3, Signed and unsigned integers).
By definition, the compiler is the implementation, so the behavior is defined for each specific system. On x86 architectures, where storing the value masks off the high-order bits, `gcc` should be aware that the loop test is definitely constant, making it an infinite loop.
Note that `clang` does not detect the constant test either, but `clang 3.9.0` will if `count` is declared `const`, and unlike `gcc` it does issue a warning if `i < count` is replaced with `i < 0xff`.
Neither compiler complains about the signed/unsigned comparison issue, because both operands are actually promoted to `int` before the comparison.
You found a meaningful issue here, especially significant because some coding conventions insist on using the smallest possible type for every variable, resulting in such oddities as `int8_t` or `uint8_t` loop index variables. Such choices are indeed error-prone, and I have not yet found a way to get the compiler to warn the programmer about silly errors such as the one you posted.