I have two uint16_t values that I want to combine into one 32-bit number:
uint16_t var1 = 255; // 0000 0000 1111 1111
uint16_t var2 = 255; // 0000 0000 1111 1111
uint32_t var3 = (var1 << 16) + var2;
I expect var3 to be 0000 0000 1111 1111 0000 0000 1111 1111, i.e. 16711935 in decimal, but I get 255 (0000 0000 0000 0000 0000 0000 1111 1111).
Any ideas?
Thanks
To some extent, this is platform-dependent. On my nearest system, we get the expected results, which can be demonstrated with a short program:
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>
int main(void)
{
    uint16_t var1 = 255; // 0000 0000 1111 1111
    uint16_t var2 = 255; // 0000 0000 1111 1111
    uint32_t var3 = (var1 << 16) + var2;
    printf("%#" PRIx32 "\n", var3);
}
Output is 0xff00ff.
However, your var1 and var2 undergo the integer promotions (to int) before any arithmetic. If the promoted type can't hold the intermediate result, part of the calculation can be lost, as you see: on a platform where int is only 16 bits wide, var1 << 16 shifts every bit out of the value (strictly, shifting by the full width of the type is undefined behaviour), so only var2 contributes to the sum.
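One quick way to see whether this is what's biting you is to check how wide int actually is on your target. This is just a diagnostic sketch, not part of the fix:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* The shift operands are promoted to int, so its width determines
       whether (var1 << 16) can survive as an intermediate value. */
    printf("int is %zu bits wide\n", sizeof(int) * CHAR_BIT);
}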
You can avoid the problem by explicitly widening var1 before the arithmetic:
uint32_t var3 = ((uint32_t)var1 << 16) + var2;
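If you combine halves like this in more than one place, you could wrap the cast-and-shift in a small helper; pack_u16 is just an illustrative name, not something from your code:

#include <stdint.h>

/* Hypothetical helper: widen first, then shift, so the result is the same
   whatever width int happens to have. OR and + are equivalent here because
   the low 16 bits of the shifted value are all zero. */
static inline uint32_t pack_u16(uint16_t hi, uint16_t lo)
{
    return ((uint32_t)hi << 16) | lo;
}

// usage: uint32_t var3 = pack_u16(var1, var2); // 0xff00ff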
The equivalent failing program on my system is:
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>
int main(void)
{
    uint16_t var1 = 255; // 00ff
    uint16_t var2 = 255; // 00ff
    uint64_t var3 = (var1 << 32) + var2;
    printf("%#" PRIx64 "\n", var3);
}
This produces 0x1fe instead of 0xff000000ff, because var1 is promoted only to a 32-bit int before the shift, and on this system shifting a 32-bit value by 32 happens to be a no-op. Widening var1 with a (uint64_t) cast, just as above, gives the expected result.
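For completeness, here is the same program with that cast applied, which should print the expected value:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint16_t var1 = 255; // 00ff
    uint16_t var2 = 255; // 00ff
    // Widen var1 to 64 bits first, so the shift by 32 is well defined.
    uint64_t var3 = ((uint64_t)var1 << 32) + var2;
    printf("%#" PRIx64 "\n", var3); // 0xff000000ff
}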