Tags: c, visual-studio-2010, casting, unsigned, signed

Visual Studio casting issue


I'm trying to do what I imagined to be a fairly basic task. I have two unsigned char variables and I'm trying to combine them into a single signed int. The problem is that the unsigned chars start out as signed chars, so I have to cast them to unsigned first.

I've done this task in three IDEs: MPLAB (as this is an embedded application), MATLAB, and now Visual Studio. Visual Studio is the only one having problems with the casting.

For example, say the two signed chars are -5 and 94. In MPLAB I first cast the two chars to unsigned chars:

unsigned char a = (unsigned char)-5;
unsigned char b = (unsigned char)94;

This gives me 251 and 94 respectively (converting -5 to unsigned char wraps modulo 256, so it becomes 256 - 5 = 251). I then want to do some bit shifting and concatenate them:

int c = (int)((((unsigned int) a) << 8) | (unsigned int) b);

In MPLAB and MATLAB this gives me the correct signed value of -1186. However, the exact same code in Visual Studio refuses to output the result as a signed value, only unsigned (64350). I've checked this both by stepping through the code in the debugger and by printing the result:

printf("%d\n", c);

What am I doing wrong? This is driving me insane. The application is an electronic device that collects sensor data and stores it on an SD card for later decoding using a program written in C. I technically could do all the calculations in MPLAB and then store those on the SD card, but I refuse to let Microsoft win.

I understand my method of casting is very unoptimised and you could probably do it in one line, but having had this problem for a couple of days now, I've tried to break the steps down as much as possible.
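
For what it's worth, the condensed one-liner I have in mind is roughly the following (hi and lo are just made-up names for the two raw signed chars coming off the sensor):

signed char hi = -5;
signed char lo = 94;
int c = (int)((((unsigned int)(unsigned char)hi) << 8) | (unsigned int)(unsigned char)lo);

and it behaves the same way for me: -1186 in MPLAB, 64350 in Visual Studio.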

Any help is most appreciated!


Solution

  • The problem is that an int on most desktop systems is 32 bits (on your embedded target it is presumably 16 bits, which is why the same code behaves differently there). If you concatenate two 8-bit quantities and store the result in a 32-bit quantity, you get a positive integer because you are never setting the sign bit, which is the most significant bit. More specifically, you are only populating the lower 16 bits of a 32-bit integer, which will naturally be interpreted as a positive number.

    You can fix this by explicitly using a 16-bit signed int.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        unsigned char a = (unsigned char)-5;   /* wraps to 251 */
        unsigned char b = (unsigned char)94;
        /* Build the 16-bit pattern, then reinterpret it as a signed 16-bit value. */
        int16_t c = (int16_t)((((unsigned int) a) << 8) | (unsigned int) b);
        printf("%d\n", c);                     /* prints -1186 */
        return 0;
    }
    

    Note that I am on a Linux system. If your version of Visual Studio doesn't provide stdint.h, you can substitute Microsoft's 16-bit signed type (__int16, or simply short, which is 16 bits on Windows) for int16_t, but otherwise this should work as-is.
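
    If you would rather not rely on a 16-bit type at all, another option is to sign-extend the 16-bit pattern by hand. This is just a minimal sketch of that idea, not your exact decoder: if bit 15 of the combined value is set, subtract 0x10000 so the result becomes the corresponding negative value.

    #include <stdio.h>

    int main(void) {
        unsigned char a = (unsigned char)-5;   /* 251 */
        unsigned char b = (unsigned char)94;

        unsigned int raw = (((unsigned int) a) << 8) | (unsigned int) b;   /* 0xFB5E = 64350 */
        int c = (raw & 0x8000u) ? (int) raw - 0x10000 : (int) raw;         /* manual sign extension */

        printf("%d\n", c);   /* prints -1186 */
        return 0;
    }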