
Char conversion in gcc


What are the implicit conversion rules for char? The following code gives an awkward output of -172.

char x = 200;
char y = 140;
printf("%d", x+y);

My guess is that, char being signed, x is converted to 72 and y to 12, which should give 84 as the answer; however, that is not the case, as mentioned above. I am using gcc on Ubuntu.


Solution

  • The following code gives an awkward output of -172.

    The behavior when an out-of-range value is assigned to a signed char is implementation-defined, but evidently in your case (and mine) a char is signed, has 8 bits, and uses two's complement representation. So the binary representations of the unsigned values 200 and 140 are 11001000 and 10001100, which, read as signed chars, are -56 and -116, and -56 + -116 equals -172 (the chars are promoted to int before the addition).

    Example forcing x and y to be signed whatever the default for char:

    #include <stdio.h>
    
    int main()
    {
      signed char x = 200;
      signed char y = 140;
    
      printf("%d %d %d\n", x, y, x+y);
      return 0;
    }
    

    Compilation and execution:

    pi@raspberrypi:/tmp $ gcc -Wall c.c
    pi@raspberrypi:/tmp $ ./a.out
    -56 -116 -172
    pi@raspberrypi:/tmp $ 
    
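    To see the promotion at work, here is a small sketch (my own addition, not from the original answer): the sum x+y has type int because both signed char operands undergo integer promotion, so -172 is representable even though it is out of signed char range.

    ```c
    #include <stdio.h>

    int main(void)
    {
        signed char x = -56, y = -116;

        /* sizeof x is 1, but sizeof(x + y) is sizeof(int): both operands
           are promoted to int before the addition. */
        printf("%zu %zu\n", sizeof x, sizeof(x + y));

        /* The sum is computed in int, so no wraparound occurs. */
        printf("%d\n", x + y); /* -172 */
        return 0;
    }
    ```

    On a typical platform where int is 32 bits, the first line prints `1 4`.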

    My guess is that being signed, x is casted into 72, and y is casted into 12

    You supposed the high-order bit is simply removed (11001000 -> 1001000 and 10001100 -> 0001100), but that is not how two's complement works; unlike IEEE floats, which reserve a dedicated sign bit, removing the top bit of a two's complement value does not yield its magnitude.
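    A quick sketch contrasting the two readings (my own illustration, not from the original answer): masking off the high bit gives the values you guessed, while the actual two's complement interpretation subtracts 256 when the high bit is set.

    ```c
    #include <stdio.h>

    int main(void)
    {
        unsigned char ux = 200, uy = 140;

        /* The guessed behavior: drop the high (sign) bit.
           200 & 0x7F == 72, 140 & 0x7F == 12. */
        printf("%d %d\n", ux & 0x7F, uy & 0x7F);

        /* The actual two's complement reading: when bit 7 is set,
           the signed value is the unsigned value minus 256.
           200 - 256 == -56, 140 - 256 == -116. */
        printf("%d %d\n", ux - 256, uy - 256);
        return 0;
    }
    ```

    The second line matches the -56 and -116 printed by the program above.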