I'm trying to understand why INT_MIN is equal to -2^31 and not -(2^31 - 1).

My understanding is that an int is 4 bytes = 32 bits. Of these 32 bits, I assume 1 bit is used for the +/- sign, leaving 31 bits for the actual value. As such, INT_MAX is equal to 2^31 - 1 = 2147483647. On the other hand, why is INT_MIN equal to -2^31 = -2147483648? Wouldn't this exceed the '4 bytes' allotted for int? Based on my logic, I would have expected INT_MIN to equal -(2^31 - 1) = -2147483647.
Most modern systems use two's complement to represent signed integer types. In this representation, one of the states on the non-negative side is used to represent zero, so there is one fewer positive value than there are negative values. In fact, this is one of the prime advantages two's complement has over the sign-magnitude system, where zero has two representations, +0 and -0. Since zero has only one representation in two's complement, the state that would otherwise encode -0 is free to represent one more negative number.
Let's take a small data type, say 4 bits wide, to understand this better. The number of possible states with this toy integer type is 2⁴ = 16. Using two's complement for signed numbers, we get 8 negative numbers, 7 positive numbers, and zero; in the sign-magnitude system, we'd get two zeros, 7 positive numbers, and 7 negative numbers.
Bin Dec
0000 = 0
0001 = 1
0010 = 2
0011 = 3
0100 = 4
0101 = 5
0110 = 6
0111 = 7
1000 = -8
1001 = -7
1010 = -6
1011 = -5
1100 = -4
1101 = -3
1110 = -2
1111 = -1
I think you are confused because you are imagining that sign-magnitude representation is used for signed numbers; although the language standards have historically allowed this, it is very unlikely to be implemented in practice, as two's complement is a significantly better representation.
As of C++20, only two's complement is allowed for signed integers; source.