I came across this question in a forum. The answer is something like this:
#define ISUNSIGNED(a) (a >= 0 && ~a >= 0)
//Alternatively, assuming the argument is to be a type, one answer would use type casts:
#define ISUNSIGNED(type) ((type)0 - 1 > 0)
I have a few questions regarding this. Why do we need to check ~a >= 0? What is the second solution all about? I did not understand the statement "argument is to be a type". More importantly, the author states that the first #define will not work in ANSI C (but will work in K&R C). Why not?
#define ISUNSIGNED(a) (a >= 0 && ~a >= 0)
For a signed value which is positive, a >= 0 will be true (obviously) and ~a >= 0 will be false, since we've flipped the bits so the sign bit is now set, resulting in a negative value. The entire expression is therefore false.
For a signed value which is negative, a >= 0 will be false (obviously) and, because && short-circuits, the rest of the expression will not be evaluated; the overall result for the expression is false.
For an unsigned value, a >= 0 will always be true (obviously, since unsigned values can't be negative). If we flip the bits then ~a >= 0 is also true: even with the most significant bit set to 1, the result is still an unsigned value and therefore non-negative, because an unsigned type has no sign bit.
So, the expression returns true only when both the original value and its bitwise inverse are non-negative, i.e. when it's an unsigned value.
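To see it in action, here's a minimal test program (my own sketch, not part of the original answer); note that compilers may warn that an unsigned >= 0 comparison is always true:

#include <stdio.h>

#define ISUNSIGNED(a) (a >= 0 && ~a >= 0)

int main(void)
{
    int s = 42;
    unsigned int u = 42u;

    printf("%d\n", ISUNSIGNED(s)); /* 0: ~42 is -43, which fails ~a >= 0 */
    printf("%d\n", ISUNSIGNED(u)); /* 1: ~42u is a large unsigned value, still >= 0 */
    return 0;
}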
#define ISUNSIGNED(type) ((type)0 - 1 > 0)
This is to be called with a type rather than a value: ISUNSIGNED(int) or ISUNSIGNED(unsigned int), for example.
For an int, the macro expands to ((int)0 - 1 > 0), which is false, since -1 is not greater than 0.
For an unsigned int, the macro expands to ((unsigned int)0 - 1 > 0). Under the usual arithmetic conversions, the 1 (and the final 0) are converted to unsigned int to match the casted 0, so the entire expression is evaluated using unsigned arithmetic. 0 - 1 in unsigned arithmetic wraps around to the largest possible unsigned value (all bits set to 1), which is greater than 0, so the result is true.
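Again, a small sketch of my own to illustrate both expansions:

#include <stdio.h>

#define ISUNSIGNED(type) ((type)0 - 1 > 0)

int main(void)
{
    printf("%d\n", ISUNSIGNED(int));          /* 0: (int)0 - 1 is -1 */
    printf("%d\n", ISUNSIGNED(unsigned int)); /* 1: 0u - 1 wraps to UINT_MAX */
    return 0;
}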
As to why it would work in K&R C but not in ANSI C, maybe this article can shed some light:
When an unsigned char or unsigned short is widened, the result type is int if an int is large enough to represent all the values of the smaller type. Otherwise, the result type is unsigned int. The value preserving rule produces the least surprise arithmetic result for most expressions.
I guess that means that when comparing an unsigned short to 0, for example, the unsigned value is converted to a signed int (since an int can represent every unsigned short value), which breaks the behaviour of the macro.
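For example (my own sketch, assuming the usual case where int is wider than short), an ANSI compiler gives the "wrong" answer for an unsigned short:

#include <stdio.h>

#define ISUNSIGNED(a) (a >= 0 && ~a >= 0)

int main(void)
{
    unsigned short us = 1;
    /* Value-preserving promotion: us becomes a signed int before ~ and >=
       are applied, so ~us is the negative int -2 and the macro wrongly
       prints 0 for an unsigned type. */
    printf("%d\n", ISUNSIGNED(us));
    return 0;
}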
You can probably work around this by writing (a - a), which evaluates to either a signed or an unsigned zero as appropriate, instead of the literal 0, which is always signed.
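If I've read that suggestion right, the reworked macro might look like the sketch below (ISUNSIGNED2 is my own hypothetical name). One caveat: for types narrower than int, the integer promotions convert a to int before the subtraction, so (a - a) may still be a signed zero there and the sketch may not rescue unsigned short or unsigned char.

#include <stdio.h>

/* Hypothetical variant: replaces the literal 0 with (a - a), a zero that
   takes on a's type under the usual arithmetic conversions. */
#define ISUNSIGNED2(a) ((a) >= ((a) - (a)) && ~(a) >= ((a) - (a)))

int main(void)
{
    unsigned int u = 1u;
    int s = 1;

    printf("%d\n", ISUNSIGNED2(u)); /* 1: every comparison is unsigned */
    printf("%d\n", ISUNSIGNED2(s)); /* 0: ~1 is -2, below the signed zero */
    return 0;
}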