I am a Java newbie studying int overflow. While playing with some integers, I was surprised by some weird results:
```java
int x = 2147483647 + 1;          // x ==> -2147483648
int x = 2147483647 + 2;          // x ==> -2147483647
int x = 2147483647 + 2147483647; // x ==> -2
int x = 2147483647 + 2147483648; // **compile error**
```
I thought integer overflow would not cause a compile error. It is also hard for me to understand how the overflow results are calculated (e.g., why does `int x = 2147483647 + 1` give `x ==> -2147483648`?). Can anybody please explain the logic behind these results?
Thanks!!
From the language spec:
> The largest decimal literal of type int is 2147483648 (2^31).
>
> All decimal literals from 0 to 2147483647 may appear anywhere an int literal may appear. The decimal literal 2147483648 may appear only as the operand of the unary minus operator - (§15.15.4).
>
> It is a compile-time error if the decimal literal 2147483648 appears anywhere other than as the operand of the unary minus operator; or if a decimal literal of type int is larger than 2147483648 (2^31).
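In practice, that rule means the bare literal 2147483648 only compiles when it is being negated. A minimal sketch (the class and variable names here are just illustrative):

```java
public class IntLiterals {
    public static void main(String[] args) {
        int min = -2147483648;       // OK: 2147483648 appears as the operand of unary minus
        // int tooBig = 2147483648;  // compile error: integer number too large
        long big = 2147483648L;      // OK: the L suffix makes it a long literal instead
        System.out.println(min);     // prints -2147483648
        System.out.println(big);     // prints 2147483648
    }
}
```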
You can't use 2147483648 as an `int` literal, because an `int` literal is an `int` expression and thus must have an `int` value; but 2147483648 is too large to be an `int` value.
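As for the values you observed: Java `int` arithmetic is two's complement and wraps around modulo 2^32 (JLS §15.18.2). So 2147483647 + 1 wraps past `Integer.MAX_VALUE` to `Integer.MIN_VALUE`, and 2147483647 + 2147483647 = 4294967294, which maps into the signed range as 4294967294 - 2^32 = -2. A minimal sketch demonstrating this, plus `Math.addExact`, which throws instead of silently wrapping:

```java
public class OverflowDemo {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE + 1);                 // -2147483648 (wraps to MIN_VALUE)
        System.out.println(Integer.MAX_VALUE + 2);                 // -2147483647
        System.out.println(Integer.MAX_VALUE + Integer.MAX_VALUE); // -2 (4294967294 - 2^32)

        // Math.addExact (Java 8+) detects overflow instead of wrapping:
        try {
            Math.addExact(Integer.MAX_VALUE, 1);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}
```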