I have this operation:

```cpp
uint32_t DIM = // ...
int32_t x = // ...

// Operation:
x & (DIM-1u)
```
How does implicit type conversion work in the expression `x & (DIM-1u)`? Is `x` converted to `uint32_t`, or is `(DIM-1u)` converted to `int32_t`? Is the type of the result `uint32_t` or `int32_t`?
Two scenarios, noting that `1u` is a literal of type `unsigned int`:

1. `unsigned int` is narrower than 32 bits (anywhere in the inclusive range of 16 to 31 bits). Then the type of `DIM - 1u` is `uint32_t`, and the whole expression is `uint32_t`. This is because, under the usual arithmetic conversions, the signed operand of a binary expression is converted to the other operand's unsigned type when that unsigned type has equal or greater rank; here `x` is converted from `int32_t` to `uint32_t`. (The `decltype` sketch after this list checks exactly this outcome.)

2. `unsigned int` is 32 bits or wider. Then `uint32_t` has rank no greater than `unsigned int`, so after the integral promotions and the usual arithmetic conversions the type of `DIM - 1u` is `unsigned int`, and the same for the type of the whole expression.
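
On most platforms `unsigned int` is exactly 32 bits and `std::uint32_t` is `unsigned int` itself, so the two scenarios give the same observable answer. Here is a minimal sketch under that assumption, verifying the deduced types with `decltype`; the values of `DIM` and `x` are hypothetical and chosen only for illustration:

```cpp
#include <cstdint>
#include <type_traits>

int main() {
    std::uint32_t DIM = 64;  // hypothetical value for illustration
    std::int32_t  x   = -1;  // hypothetical value for illustration

    // DIM - 1u: uint32_t combined with unsigned yields uint32_t here.
    static_assert(std::is_same<decltype(DIM - 1u), std::uint32_t>::value,
                  "DIM - 1u has type uint32_t on this platform");

    // x & (DIM - 1u): the signed operand x is converted to uint32_t,
    // so the whole expression is uint32_t.
    static_assert(std::is_same<decltype(x & (DIM - 1u)), std::uint32_t>::value,
                  "the whole expression has type uint32_t on this platform");

    (void)DIM; (void)x;  // silence unused-variable warnings
}
```

A practical consequence of that conversion: a negative `x` wraps modulo 2^32 before the mask is applied, so for `x = -1` and `DIM = 8`, `x & (DIM - 1u)` evaluates to `7`, which is usually the intended behavior when masking with a power-of-two `DIM`.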
Finally, note that the C++ standard permits `unsigned int` and `std::uint32_t` to be the same type; i.e. `std::cout << std::is_same<std::uint32_t, unsigned>::value;` is allowed to print `1`.
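
As a quick probe, here is a sketch, assuming nothing beyond the standard headers, that reports whether the two types coincide and how wide `unsigned int` is on the platform at hand (which tells you which scenario applies):

```cpp
#include <cstdint>
#include <iostream>
#include <limits>
#include <type_traits>

int main() {
    // Prints 1 if uint32_t and unsigned int are the same type.
    std::cout << std::is_same<std::uint32_t, unsigned>::value << '\n';

    // Width of unsigned int in bits (value bits; unsigned has no sign bit).
    std::cout << std::numeric_limits<unsigned>::digits << '\n';
}
```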