I have the following source, compiled on Linux 3.10.0-957.5.1.el7.x86_64 with g++ version 4.8.5.
Case1:
printf("INT_MAX=(%d) , INT_MIN=(%d) \n",INT_MAX,INT_MIN);
int ix= 500 ;
long int lx1=0,lx2=0;
lx1=2147483647 + 10 ;
lx2=2100000000 ;
if( ix < (lx1-lx2) )
printf("ix is not bigger \n");
else
printf("ix is bigger \n");
It compiles with a warning:
warning: integer overflow in expression [-Woverflow]
lx1=2147483647 + 10 ;
and produces the output:
INT_MAX=(2147483647) , INT_MIN=(-2147483648)
ix is bigger
And the following source, Case2:
printf("INT_MAX=(%d) , INT_MIN=(%d) \n",INT_MAX,INT_MIN);
int ix= 500 ;
long int lx1=0,lx2=0;
lx1=2200000000 + 10 ;
lx2=2100000000 ;
if( ix < (lx1-lx2) )
printf("ix is not bigger \n");
else
printf("ix is bigger \n");
This compiles without any warning, and the output is:
INT_MAX=(2147483647) , INT_MIN=(-2147483648)
ix is not bigger
My question: why can the output of Case1 be wrong? lx1 and lx2 are both long int, which is 8 bytes on this box. How come 2200000000 is fine for lx1, but 2147483647 is not?
I have read "Comparing int with long and others", but I still cannot figure it out.
The calculation of 2147483647 + 10 happens with int operands, because both values fit into an int. The result overflows, and only afterwards is the result extended to a long, but that's too late.
Suffix the number with an L to make it a long: 2147483647L + 10.
2200000000 is too big for an int, therefore the literal itself already has type long, so the addition works as expected.