c#, types, hex, unsigned, signed

Can't assign 32-bit hexadecimal value to integer


First, this question has a related post: Why Int32 maximum value is 0x7FFFFFFF?

However, I want to know why the hexadecimal value is always treated as an unsigned quantity.

See the following snippet:

byte  a = 0xFF;               //No error (byte is an unsigned type).
short b = 0xFFFF;             //Error! (even though both types are 16 bits).
int   c = 0xFFFFFFFF;         //Error! (even though both types are 32 bits).
long  d = 0xFFFFFFFFFFFFFFFF; //Error! (even though both types are 64 bits).

The reason for the error is that the hexadecimal literals are treated as unsigned quantities, regardless of the data type they are assigned to. Hence, the value is 'too large' for the target data type.
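
For reference, the compiler's choice can be seen directly (a quick sketch using var and GetType(); the c and d names just mirror the snippet above):

var c = 0xFFFFFFFF;              //Inferred as uint.
var d = 0xFFFFFFFFFFFFFFFF;      //Inferred as ulong.
Console.WriteLine(c.GetType());  //Prints System.UInt32.
Console.WriteLine(d.GetType());  //Prints System.UInt64.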


For instance, I expected:

int c = 0xFFFFFFFF;

To store the value:

-1

And not the value:

4294967295

Simply because int is a signed type.


So, why are hexadecimal values always treated as unsigned, even though the signedness could be inferred from the data type used to store them?

How can I store these bit patterns in these data types without resorting to ushort, uint, and ulong?

In particular, how can I achieve this for the long data type, considering that there is no larger signed data type to fall back on?


Solution

  • What's going on is that a literal is intrinsically typed. 0.1 is a double, which is why you can't say float f = 0.1. You can cast a double to a float (float f = (float)0.1), but you may lose precision. Similarly, the literal 0xFFFFFFFF is intrinsically a uint: an integer literal is given the first of the types int, uint, long, and ulong that can represent its value, and 4294967295 does not fit in an int. You can cast it to an int, but only after the compiler has already interpreted it as a uint. The compiler doesn't use the variable to which you are assigning it to figure out its type; the literal's type is defined by what sort of literal it is.
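
  • Since the literal's type is fixed before the assignment is considered, here is one way (a sketch, not the only option) to force these bit patterns into the signed types: an explicit cast wrapped in unchecked, which suppresses the compile-time overflow check that C# applies to constant expressions.

short b = unchecked((short)0xFFFF);             //-1
int   c = unchecked((int)0xFFFFFFFF);           //-1
long  d = unchecked((long)0xFFFFFFFFFFFFFFFF);  //-1

Without unchecked, the compiler still rejects each cast, because a constant conversion that overflows the target type is a compile-time error.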