I can create a literal `long` by appending an `L` to the value; why can't I create a literal `short` or `byte` in some similar way? Why do I need to use an `int` literal with a cast?

And if the answer is "Because there was no short literal in C", then why are there no short literals in C?

This doesn't actually affect my life in any meaningful way; it's easy enough to write `(short) 0` instead of `0S` or something. But the inconsistency makes me curious; it's one of those things that bother you when you're up late at night. Someone at some point made a design decision to make it possible to enter literals for some of the primitive types, but not for all of them. Why?
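For concreteness, here is a minimal sketch of the asymmetry (plain Java; the class name is just illustrative):

```java
public class LiteralDemo {
    public static void main(String[] args) {
        long l = 10_000_000_000L; // 'L' suffix: a genuine long literal
        int i = 42;               // plain int literal
        short s = (short) 42;     // no 'S' suffix exists; cast an int literal
        byte b = (byte) 42;       // same story for byte
        // In a constant assignment Java narrows implicitly, so
        // `short s2 = 42;` also compiles -- but there is still no
        // short *literal*; the cast is needed in other contexts,
        // such as passing 42 to a method expecting a short.
        short s2 = 42;
        System.out.println(l + " " + i + " " + s + " " + b + " " + s2);
    }
}
```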
In C, `int` at least was meant to have the "natural" word size of the CPU, and `long` was probably meant to be the "larger natural" word size (I'm not sure about that last part, but it would also explain why `int` and `long` have the same size on x86).
Now, my guess is: for `int` and `long`, there's a natural representation that fits exactly into the machine's registers. On most CPUs, however, the smaller types `byte` and `short` would have to be padded to an `int` anyway before being used. If that's the case, you might as well have a cast.
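That guess lines up with how Java itself behaves: arithmetic on `byte` and `short` operands is carried out in `int` (binary numeric promotion), so you end up casting the result back anyway. A small sketch:

```java
public class PromotionDemo {
    public static void main(String[] args) {
        byte a = 1, b = 2;
        // byte c = a + b;         // won't compile: a + b is promoted to int
        byte c = (byte) (a + b);   // the explicit cast mirrors the literal case
        short x = 100, y = 200;
        short z = (short) (x + y); // same promotion rule applies to short
        System.out.println(c + " " + z);
    }
}
```

The JVM reflects this too: there are `iadd` and `ladd` instructions but no byte- or short-sized arithmetic; `byte` and `short` values are widened to `int` on the operand stack, which supports the register-width argument.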