
Why does -INT_MIN == INT_MIN, and how does negation operate at the bit level?


Can't understand how -INT_MIN transforms into INT_MIN at the bit level


Solution

  • On common architectures, negative values use two's complement representation.

    This has an immediate consequence: -INT_MIN would be greater (by 1) than INT_MAX. That means the result is undefined(*). And yet, on common architectures, it just happens to evaluate to INT_MIN again.

    Just look at what happens for signed bytes (the values are smaller, so easier to follow): MIN is -128 and MAX is 127. In hex, they are respectively 0x80 and 0x7F. But the low byte of 128 represented as an int is again 0x80. So when you convert that back to a signed byte type, you find that -(-128) gives -128...


    As char has a lower rank than int, the char is promoted to int, the negation gives a result that can be represented as an int, and the conversion back to char gives an implementation-defined result. But -INT_MIN is an expression whose result cannot be represented as an int value. This is an exceptional condition, and the behaviour is explicitly undefined per the standard. That being said, common implementations handle everything the same way...
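
    As a concrete illustration of the signed-byte case, here is a small C program. The conversion back to signed char is implementation-defined, but on typical two's-complement machines it wraps exactly as described:

    ```c
    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        signed char b = SCHAR_MIN;       /* -128, stored as bit pattern 0x80 */
        int promoted = -b;               /* b is promoted to int first, so -(-128) == 128 fits fine */
        signed char back = (signed char)promoted; /* implementation-defined: commonly wraps to -128 */

        printf("-(-128) after promotion to int = %d\n", promoted);
        printf("converted back to signed char  = %d\n", (int)back);
        return 0;
    }
    ```

    On a typical two's-complement platform this prints 128 for the promoted value and -128 after the conversion back, because both 128-as-low-byte and -128 share the bit pattern 0x80.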
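
    The same round trip can be made visible for int itself without invoking undefined behaviour, by performing the two's-complement negation (~x + 1) on an unsigned copy of the bit pattern. This sketch assumes a 32-bit int:

    ```c
    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        unsigned int u = (unsigned int)INT_MIN; /* bit pattern 0x80000000 on a 32-bit int */
        unsigned int neg = ~u + 1u;             /* two's-complement negation, well defined on unsigned */

        printf("INT_MIN  bits: 0x%08X\n", u);
        printf("-INT_MIN bits: 0x%08X\n", neg); /* identical: this pattern maps to itself */
        return 0;
    }
    ```

    0x80000000 is the one nonzero pattern that two's-complement negation maps to itself, which is exactly why -INT_MIN "comes back" as INT_MIN on such hardware.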