From The Open Group Base Specifications Issue 7, IEEE Std 1003.1-2008:
The signbit() macro shall return a non-zero value if and only if the sign of its argument value is negative.
Why does `signbit(-0)` return 0? I just want to understand the logic behind this decision.
In `signbit(-0)`:

- `0` is a constant of type `int`.
- `-0` is the result of negating `0`, so it is a zero of type `int`.
- `signbit(-0)` produces 0.

If you do `signbit(-0.)` instead:

- `0.` is a constant of type `double`.
- `-0.` is the result of negating `0.`, so it is a negative zero of type `double`.
- `signbit(-0.)` produces 1.

The key is that `-0` negates an integer type, and the integer types typically do not encode negative zero as distinct from positive zero. When an integer zero is converted to floating point, the result is a simple (positive) zero. However, `-0.` negates a floating-point type, and the floating-point types do encode negative zero distinctly from positive zero.