I experimented today with how the compiler determines the types for numbers declared as var.
var a = 255; //Type = int. Value = byte.MaxValue. Why isn't this byte?
var b = 32767; //Type = int. Value = short.MaxValue. Why isn't this short?
var c = 2147483647; //Type = int. Value = int.MaxValue. int as expected.
var d = 2147483648; //Type = uint. Value = int.MaxValue + 1. uint is fine but could have been long?
var e = 4294967296; //Type = long. Value = uint.MaxValue + 1. Type is long as expected.
Why is int the default for any number that is between Int32.MinValue and Int32.MaxValue?
Wouldn't it be better to use the smallest possible data type to save memory? (I understand that memory is cheap these days, but still, saving memory isn't a bad thing, especially when it's this easy to do.)
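After all, the size difference between the types is easy to check (sizeof on the built-in numeric types is allowed in safe code):

Console.WriteLine(sizeof(byte));  // 1 byte
Console.WriteLine(sizeof(short)); // 2 bytes
Console.WriteLine(sizeof(int));   // 4 bytes
Console.WriteLine(sizeof(long));  // 8 bytes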
If the compiler did use the smallest data type, and you had a variable holding 255 but knew that later on you would want to store a value like 300, then you could just declare it short instead of using var.
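For example, the explicit declaration I have in mind (just a sketch of that workaround):

short s = 255; // declared short up front instead of var
s = 300;       // fine: 300 fits comfortably in short (short.MaxValue is 32767)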
Why is var d = 2147483648 implicitly uint and not long?
It seems as though the compiler will always try to use a 32-bit integer if it can: first signed, then unsigned, then long.
"It seems as though the compiler will always try to use a 32-bit integer if it can: first signed, then unsigned, then long."
That is exactly right. The C# Language Specification says that an integer literal with no suffix gets the first of the types int, uint, long, ulong in which its value can be represented, i.e. the type that uses the smallest possible number of bytes. Here is the relevant explanation from the language specification:
To permit the smallest possible int and long values to be written as decimal integer literals, the following two rules exist:

- When a decimal-integer-literal with the value 2147483648 and no integer-type-suffix appears as the token immediately following a unary minus operator token, the result is a constant of type int with the value −2147483648. In all other situations, such a decimal-integer-literal is of type uint.
- When a decimal-integer-literal with the value 9223372036854775808 and no integer-type-suffix or the integer-type-suffix L or l appears as the token immediately following a unary minus operator token, the result is a constant of type long with the value −9223372036854775808. In all other situations, such a decimal-integer-literal is of type ulong.
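You can see the first of these rules in action (a quick experiment; the parentheses in the second line keep the minus sign from being part of the literal):

var f = -2147483648;   // int: the minus and the literal together match the rule above
var g = -(2147483648); // the literal alone is uint; negating a uint yields a long
Console.WriteLine(f.GetType()); // System.Int32
Console.WriteLine(g.GetType()); // System.Int64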
Note that the language specification mentions your var d = ... example explicitly, requiring the result to be of type uint.
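To tie this back to your experiment, you can confirm the int → uint → long → ulong ladder directly (mirroring your variable names, with one more added for the ulong case; output shown in the comments):

var a = 255;                 // fits in int
var d = 2147483648;          // too big for int, fits in uint
var e = 4294967296;          // too big for uint, fits in long
var h = 9223372036854775808; // too big for long, fits in ulong
Console.WriteLine(a.GetType()); // System.Int32
Console.WriteLine(d.GetType()); // System.UInt32
Console.WriteLine(e.GetType()); // System.Int64
Console.WriteLine(h.GetType()); // System.UInt64

As for why the ladder starts at int rather than byte or short: the specification does not justify that here, but note that C# performs arithmetic on byte and short operands in int anyway, so smaller inferred types would mostly just add casts.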