Lately I read about an issue regarding the three distinct character types in C: char, unsigned char, and signed char. The problem is not something I have run into so far: my program works correctly on every computer I have tested, and it only targets little-endian machines (basically all modern desktops and servers running Windows/Linux, right?). I frequently reuse a char array I defined for holding a "string" (not a real string, of course) as temporary storage; e.g., instead of pushing another char onto the stack, I just reuse one of its elements, such as array[0]. However, I based this tactic on the assumption that char is always signed, until I read today that its signedness actually depends on the implementation. What happens if I now have a char and assign a negative value to it?
char unknownsignedness = -1;
If I wrote
unsigned char A = -1;
I think the conversion will simply reinterpret the bits, so the value that A represents as an unsigned type becomes different. Am I right that these conversions (there is no explicit cast here, but I assume the same rules apply to a C-style cast) are simply a reinterpretation of the bits? I am referring to signed <-> unsigned conversions.
So if an implementation has char as unsigned, would my program stop working as intended? Take the last variable: if I now write
if (A == -1)
am I now comparing an unsigned char to a signed value? Will this simply compare the bits without caring about signedness, or will it return false because A obviously cannot be -1? I am confused about what happens in this case. It is also my greatest concern, as I use chars like this frequently.
The following code prints No:
```c
#include <stdio.h>

int main(void)
{
    unsigned char a;
    a = -1;
    if (a == -1)
        printf("Yes\n");
    else
        printf("No\n");
    return 0;
}
```
The assignment `a = -1` converts -1 to `unsigned char` by adding `UCHAR_MAX + 1` until the result is in range, so `a` becomes `UCHAR_MAX`. The conversion itself is well-defined; only the value of `UCHAR_MAX` is implementation-dependent, and on machines with 8-bit chars `a` will be 255. The test `a == -1` compares an `unsigned char` to an `int`, so the usual promotion rules apply: `a` is promoted to `int`, and the test is interpreted as `(int)a == -1`. Since `a` is 255, `(int)a` is still 255, and the test yields false.