unsigned __int64 difference;
difference = (64*33554432);
printf("size %I64u \n", difference);
difference = (63*33554432);
printf("size %I64u \n", difference);
The first number is ridiculously large; the second is the correct answer. How does changing the multiplier from 63 to 64 cause such a change?
The first value printed is 18446744071562067968, and the second is 2113929216.
Unless qualified otherwise, integer literals are of type int. I would assume that on your platform an int is 32-bit, so the calculation (64*33554432) overflows and becomes negative. You then assign this negative value to an unsigned __int64, where it is converted modulo 2^64 and comes out as a very large positive integer.
Voila:
#include <stdio.h>

int main()
{
int a1 = (64*33554432);
int a2 = (63*33554432);
printf("%08x\n", a1); // 80000000 (negative)
printf("%08x\n", a2); // 7e000000 (positive)
unsigned __int64 b1 = a1;
unsigned __int64 b2 = a2;
printf("%016llx\n", b1); // ffffffff80000000
printf("%016llx\n", b2); // 000000007e000000
}
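
The usual fix is to force the multiplication itself to happen in 64 bits, either with a 64-bit literal suffix or by casting one operand before multiplying. A minimal sketch, assuming an MSVC-style compiler where unsigned __int64 is available and %llx prints a 64-bit value (the variable names here are just illustrative):

#include <stdio.h>

int main()
{
    // Promote one operand to 64 bits so the multiplication
    // never takes place in 32-bit int arithmetic.
    unsigned __int64 ok1 = 64ULL * 33554432;                 // 64-bit literal suffix
    unsigned __int64 ok2 = (unsigned __int64)64 * 33554432;  // explicit cast

    printf("%016llx\n", ok1); // 0000000080000000
    printf("%016llx\n", ok2); // 0000000080000000
}

Both lines print 0x80000000 (2147483648), the value the original code was presumably after.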