Consider the following code:
int32_t x = -2;
cout << uint64_t(x) << endl;
The cast on the second line conceptually involves two steps: widening from 32 bits to 64 bits, and reinterpreting the value as unsigned instead of signed. Compiling this with g++ and running it prints 18446744073709551614. This suggests that the widening happens first (as a sign extension) and the change of signed/unsigned interpretation afterwards, i.e. that the code above is equivalent to:
int32_t x = -2;
cout << uint64_t(int64_t(x)) << endl;
What confuses me is that one could also interpret x as an unsigned 32-bit bit vector first and then zero-extend it to 64 bits, i.e.
int32_t x = -2;
cout << uint64_t(uint32_t(x)) << endl;
This would yield 4294967294. Would someone please confirm that the behavior of g++ is required by the standard and is not implementation-defined? I would be most grateful if you could point me to the wording in the standard that actually covers this; I tried to find it myself but failed.
Thanks in advance!
You are looking for Standard section 4.7 ([conv.integral], Integral conversions). In particular, paragraph 2 says:
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type).
In the given example, we have that 18446744073709551614 ≡ -2 (mod 2^64).