Tags: c, unsigned-integer, signed-integer, internal-representation

About int and unsigned int


Consider the following program:

#include <stdio.h>

int main(void)
{
   int i=2147483647;          /* INT_MAX when int is 32 bits */
   unsigned int j=4294967295; /* UINT_MAX when unsigned int is 32 bits */
   printf("%d %d %d\n",i,i+1,i+2);
   printf("%u %u %u\n",j,j+1,j+2);
   return 0;
}

Why is i+2 not equal to -2147483646?

Why is j+2 not equal to 2?

The result is different from what I expected. What happens during execution?

EDIT

The result I get is:

  • i=2147483647
  • i+1=-2147483648
  • i+2=-2147483647
  • j=4294967295
  • j+1=0
  • j+2=1

Solution

  • If you output the value of j in hexadecimal notation, for example

    unsigned int j = UINT_MAX;  /* UINT_MAX requires <limits.h> */
    printf( "j = %u, j = %#x\n", j, j );
    

    You will get the following output

    j = 4294967295, j = 0xffffffff
    

    So adding 1 to 0xffffffff gives 0x00000000, and adding 1 again gives 0x00000001.
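
    A minimal sketch that makes this wrap-around visible, assuming a 32-bit unsigned int as in the output above:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int j = UINT_MAX;

        /* unsigned arithmetic wraps around: UINT_MAX + 1 becomes 0 */
        printf( "j     = %#x\n", j );      /* 0xffffffff */
        printf( "j + 1 = %#x\n", j + 1 );  /* 0 (the # flag adds no prefix for zero) */
        printf( "j + 2 = %#x\n", j + 2 );  /* 0x1 */

        return 0;
    }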

    From the C Standard (6.2.5 Types)

    1. ... A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.
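
    In concrete numbers, for a 32-bit unsigned int the result is reduced modulo 4294967296, that is UINT_MAX + 1. A small sketch that repeats the same arithmetic explicitly in a wider type (assuming unsigned long long is wider than unsigned int, which is the usual case):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned long long wide = ( unsigned long long )UINT_MAX + 2;  /* 4294967297 */
        unsigned long long mod  = ( unsigned long long )UINT_MAX + 1;  /* 4294967296 */

        /* the reduction the standard describes: 4294967297 % 4294967296 == 1 */
        printf( "%llu %% %llu = %llu\n", wide, mod, wide % mod );

        unsigned int j = UINT_MAX;
        printf( "j + 2 = %u\n", j + 2 );   /* the same result, 1 */

        return 0;
    }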

    As for the signed integer variable i, in general the result is undefined because of the overflow.
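
    Because of this, portable code should avoid the signed overflow rather than rely on any particular result. One common pattern, shown here only as a sketch for a positive increment, is to compare against INT_MAX before adding:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int i = INT_MAX;
        int step = 2;

        /* the check happens before the addition, so the overflow never occurs */
        if ( i <= INT_MAX - step )
        {
            printf( "i + %d = %d\n", step, i + step );
        }
        else
        {
            printf( "i + %d would overflow int\n", step );
        }

        return 0;
    }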

    If the internal representation of integers is two's complement, then an implementation can silently wrap around on overflow. In that case, for the signed integer you will have

    int i = INT_MAX;
    
    printf( "i = %d, i = %#x\n", i, ( unsigned int )i );
    printf( "i + 1 = %d, i + 1 = %#x\n", i + 1, ( unsigned int )( i + 1 ) );
    printf( "i + 2 = %d, i + 2 = %#x\n", i + 2, ( unsigned int )( i + 2 ) );
    

    The output is

    i = 2147483647, i = 0x7fffffff
    i + 1 = -2147483648, i + 1 = 0x80000000
    i + 2 = -2147483647, i + 2 = 0x80000001
    

    That is, the hexadecimal representation 0x80000000 of an object of type int corresponds to the minimal value that can be stored in the object (only the sign bit is set), and the representation 0x80000001 corresponds to the value that immediately follows the minimal value.
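
    Put differently, in the two's complement representation the sign bit carries the weight -2^31 while the remaining bits keep their usual positive weights. A short sketch, assuming a 32-bit int (long long is wide enough to hold these values), reconstructs the printed values from the hexadecimal patterns:

    #include <stdio.h>

    int main(void)
    {
        unsigned int patterns[] = { 0x7fffffffu, 0x80000000u, 0x80000001u };

        for ( size_t n = 0; n < sizeof( patterns ) / sizeof( patterns[0] ); n++ )
        {
            unsigned int p = patterns[n];

            /* the sign bit contributes -2^31, the low 31 bits add their positive weights */
            long long value = ( long long )( p & 0x7fffffff ) - ( ( long long )( p >> 31 ) << 31 );

            printf( "%#10x -> %lld\n", p, value );
        }

        return 0;
    }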