I am programming the 8051 in C using the Si Labs IDE. I currently have three bytes: address_byte3, address_byte2, and address_byte1. I initialized a variable address_sum as an unsigned long int and then performed the following operation on it:
address_sum=(address_byte3<<16)+(address_byte2<<8)+(address_byte1);
This operation led me to believe that if address_byte3, address_byte2, and address_byte1 were 0x92, 0x56, and 0x78 respectively, the value loaded into address_sum would be 0xXX925678. Instead I am getting 0xXX005678. My logic seems sound, but then again I am the one writing the code, so I'm biased and could be blinded by my own ignorance. Does anyone have a solution or an explanation as to why the value of address_byte3 is "lost"?

Thank you.
Variables narrower than int are promoted to int when calculations are done on them. It seems your int type is 16 bits wide (as is typical for 8051 compilers), so shifting it left by 16 shifts every bit out of the result and the top byte is lost (strictly speaking, a shift count equal to or greater than the type's width is undefined behaviour). You should explicitly cast the variables to the result type, unsigned long:
address_sum = ((unsigned long)address_byte3<<16) +
((unsigned long)address_byte2<<8) +
(unsigned long)address_byte1;
The last cast is superfluous, but it doesn't hurt.
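If it helps, here is a minimal sketch contrasting the two forms; it assumes a 16-bit int (as on typical 8051 compilers) and reuses the byte values from the question. The values in the comments apply to such a target.

#include <stdio.h>

int main(void)
{
    unsigned char address_byte3 = 0x92;
    unsigned char address_byte2 = 0x56;
    unsigned char address_byte1 = 0x78;

    /* Without casts the bytes are promoted only to int (16 bits here),
       so << 16 shifts the top byte away (strictly, undefined behaviour). */
    unsigned long bad  = (address_byte3 << 16) + (address_byte2 << 8) + address_byte1;

    /* With casts each operand is widened to unsigned long (32 bits)
       before shifting, so all three bytes survive. */
    unsigned long good = ((unsigned long)address_byte3 << 16) +
                         ((unsigned long)address_byte2 << 8) +
                          (unsigned long)address_byte1;

    printf("bad  = 0x%08lX\n", bad);   /* 0x00005678 when int is 16 bits */
    printf("good = 0x%08lX\n", good);  /* 0x00925678 */
    return 0;
}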