I am new to C# and want to understand how values work. If I look at a normal integer value, it has three important parts: the type, the name and the value.
int testInt = 3;
 |     |      |
Type  Name  Value
But when I see a float value, it confuses me a bit because of the suffix F.
float testFloat = 3.0F;
  |       |        |  |
Type     Name    Value Type
Now there are two types in it, and without the F suffix the value would be a double. But why is this happening when I can declare a double variable with
double testDouble = 3.0D;
The double as the first word should be enough, shouldn't it? The same goes for the decimal value with the suffix M:
decimal testDecimal = 3.0M;
Then it gets really confusing for me when it comes to the other suffixes:
ulong bigOne = 2985825802805280508UL;
I used ulong in a test before and know that the u is for "unsigned" and lets the value be twice as high as normal. Then you get the U again as a suffix, and the L is for "literal", as Google said. As I understand it, "literals" are value types that contain numbers. But what I don't understand is: why does this ulong work even without the suffix?
ulong bigOne = 2985825802805280508;
Then I tried something different to understand the importance of the suffix:
byte testLong = 12312UL;
This didn't work, because the value is too high for a byte (max 255) and the suffix does not convert it to a long variable.
Why isn't the first word (the type) enough for the declaration? It should be enough to tell the compiler the type. Is it best practice to always give the values a suffix?
You are confusing two different things here:
float testFloat = 3.0F;
The float tells the compiler that the variable testFloat will be a floating point value. The F tells the compiler that the literal 3.0 is a float. The compiler needs to know both pieces before it can decide whether or not it can assign the literal to the variable with either no conversion or an implicit conversion.
For example, you can do this:
float testFloat = 3;
And that's okay, because the compiler sees 3 as a literal integer, but it knows it can assign that to a float without loss of precision (this is an implicit conversion). But if you do this:
float testFloat = 3.0;
3.0 is a literal double (because that's the default without a suffix), and the compiler can't implicitly (i.e. automatically) convert a double to a float because a float has less precision. In other words, information might be lost. So you either tell the compiler that it's a literal float:
float testFloat = 3.0f;
Or you tell it you are okay with any loss of precision by using an explicit cast:
float testFloat = (float)3.0;
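As for your ulong question: as far as I remember the rules, an integer literal without a suffix gets the first type out of int, uint, long and ulong that can actually hold its value, and a non-negative constant of type long may still be assigned to a ulong. That would explain why your unsuffixed example compiles. Here is a small sketch of that (evenBigger is just a made-up name for the example; the commented-out line is your byte attempt):

// The literal below has no suffix. It is too big for int and uint, but it fits
// in a long, so the compiler treats it as a long constant. A non-negative long
// constant may be assigned to a ulong, so this compiles without UL.
ulong bigOne = 2985825802805280508;

// This value is bigger than long.MaxValue, so even without a suffix the literal
// would already be a ulong; writing UL just makes that explicit.
ulong evenBigger = 9300000000000000000UL;

// Here the UL suffix makes 12312 a ulong constant, but 12312 does not fit into
// a byte (0..255), so the next line does not compile:
// byte testLong = 12312UL;

System.Console.WriteLine(bigOne);
System.Console.WriteLine(evenBigger);

So no, you don't need a suffix on every literal. It matters when the default type of the literal can't be converted to the variable's type without losing information (as with 3.0 and float, or 3.0 and decimal); in those cases you either add the suffix or write an explicit cast.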