In C#, a numeric literal is typed by default as either an int or a double:
double var1 = 56.1;
int var2 = 51;
These are the default types the literals are assigned. However, the game engine I'm working on uses floats for position, rotation, etc. When a float is assigned a double literal, e.g.

float varFloat = 75.4;

the compiler throws an error saying that the double literal cannot be implicitly converted to float, which is correct. So one has to turn the double literal into a float literal, e.g.

float varFloat = 75.4f;
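For what it's worth, the diagnostic in question is CS0664 (error text quoted from memory, so treat it as approximate):

// error CS0664: Literal of type double cannot be implicitly converted to
// type 'float'; use an 'F' suffix to create a literal of this type
float bad = 75.4;    // does not compile
float good = 75.4f;  // fine: the f suffix makes it a float literal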
However, when a float is given an int literal, the int is implicitly converted to a float:
float varFloat = 44; // This is fine.
My question is: is the compiler smart enough to realize that 44 should be a float literal? If not, then every time the literal is accessed, a conversion is also being performed. In most cases this really doesn't matter, but in high-performance code it could become an issue (even if a minor one) if int literals are used all over the place instead of float literals. As far as I know, there is no way to change these literals into floats short of combing through the source code line by line, which isn't time well spent at all.
So, does the compiler convert the int literal into a float literal? If not, what can be done about this waste of processing power other than trying to avoid it?
The answer is yes: the compiler is smart enough to do the conversion at compile time.
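For reference, a minimal program along the lines of the one I disassembled (a sketch reconstructed from the IL below, so the class name and build settings are assumptions; the nop and the surviving unused local suggest a Debug build) looks like this:

internal class Program
{
    private static void Main(string[] args)
    {
        // An int literal assigned to a float local, as in the question.
        float x = 44;
    }
}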
This is the disassembled IL (courtesy of ILSpy) of that program. As you can see from the ldc.r4 instruction, the constant is embedded directly as a 32-bit float, so no conversion takes place at runtime:
.method private hidebysig static
    void Main (
        string[] args
    ) cil managed
{
    // Method begins at RVA 0x2050
    // Code size 8 (0x8)
    .maxstack 1
    .entrypoint
    .locals init (
        [0] float32 x
    )

    IL_0000: nop
    IL_0001: ldc.r4 44
    IL_0006: stloc.0
    IL_0007: ret
} // end of method Program::Main
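As a follow-up, two points worth noting (based on my understanding of the compiler; verify with ILSpy on your own build). First, the folding extends to whole constant expressions, so this also compiles straight to ldc.r4 44:

float x = 40 + 4;   // constant expression, folded and converted at compile time

Second, the compile-time conversion only applies to constants. A non-constant int really is converted at runtime:

int i = 44;
float x = i;        // here the compiler emits a conv.r4 instruction

So the concern in the question is real for int variables, just not for literals.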