I have the following code:
NSUInteger one = 1;
CGPoint p = CGPointMake(-one, -one);
NSLog(@"%@", NSStringFromCGPoint(p));
Its output:
{4.29497e+09, 4.29497e+09}
On the other hand:
NSUInteger one = 1;
NSLog(@"%i", -one); // prints -1
I know there’s probably some kind of overflow going on, but why do the two cases differ, and why doesn’t it work the way I want? Should I always remind myself of the particular numeric type of my variables and expressions even when doing trivial arithmetic?
P.S. Of course I could use unsigned int instead of NSUInteger; it makes no difference.
When you apply the unary - to an unsigned value, the value is negated and then forced back into unsigned garb by having Utype_MAX + 1 (one more than the type's maximum value) repeatedly added to that value. When you pass that to CGPointMake(), that (very large) unsigned value is then assigned to a CGFloat.
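A minimal sketch of that wraparound, pasteable into any method of a project that links Foundation and CoreGraphics; the values in the comments assume a 32-bit NSUInteger, matching the output in the question:

NSUInteger one = 1;
NSUInteger wrapped = -one;              // negation wraps around to NSUIntegerMax
CGFloat f = wrapped;                    // this huge unsigned value is what CGPointMake() receives
NSLog(@"%lu", (unsigned long)wrapped);  // 4294967295 with a 32-bit NSUInteger
NSLog(@"%g", (double)f);                // ~4.29497e+09, matching the question's output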
You don't see this in your NSLog() statement because you are logging it as a signed integer. Convert that back to a signed integer and you indeed get -1. Try using NSLog(@"%u", -one) and you'll find you're right back at 4294967295.
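If the goal is the point {-1, -1}, one fix (a sketch, not the only option) is to convert to a floating-point type before negating, so the negation never happens in unsigned arithmetic:

NSUInteger one = 1;
CGPoint p = CGPointMake(-(CGFloat)one, -(CGFloat)one);  // convert first, then negate
NSLog(@"%@", NSStringFromCGPoint(p));                   // {-1, -1}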
unsigned int versus NSUInteger DOES make a difference: unsigned int is half the size of NSUInteger under an LP64 architecture (x86_64, ppc64) or when you compile with NS_BUILD_32_LIKE_64 defined. NSUInteger happens to always be pointer-sized (but use uintptr_t if you really need an integer that's the size of a pointer!); unsigned is not when you're using the LP64 model.
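A quick way to see the size difference on your own build (the exact numbers depend on the architecture; the comments assume an LP64 target such as x86_64):

NSLog(@"%zu", sizeof(unsigned int));  // 4
NSLog(@"%zu", sizeof(NSUInteger));    // 8 on LP64, 4 on a 32-bit build
NSLog(@"%zu", sizeof(void *));        // 8 on LP64; NSUInteger tracks the pointer size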