I'm wondering why this C# code
long b = 20;
is compiled to
ldc.i4.s 0x14
conv.i8
(Because it takes 3 bytes instead of the 9 required by ldc.i8 20. See this for more information.)
while this code
double a = 20;
is compiled to the 9-byte instruction
ldc.r8 20
instead of this 3-byte sequence
ldc.i4.s 0x14
conv.r8
(Using mono 4.8.)
Is this a missed opportunity, or does the cost of the conv.r8 conversion outweigh the gain in code size?
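For reference, here is a back-of-the-envelope byte count of the encodings in question, based on the opcode and operand sizes defined in ECMA-335 (the sketch itself is just illustrative arithmetic, not compiler code):

```python
# Per ECMA-335: ldc.i4.s is a 1-byte opcode with a 1-byte signed operand;
# conv.i8 and conv.r8 are single 1-byte opcodes; ldc.i8 and ldc.r8 are
# 1-byte opcodes with 8-byte operands.
LDC_I4_S = 1 + 1   # opcode + int8 operand
CONV_I8  = 1       # opcode only
CONV_R8  = 1       # opcode only
LDC_I8   = 1 + 8   # opcode + int64 operand
LDC_R8   = 1 + 8   # opcode + float64 operand

short_long    = LDC_I4_S + CONV_I8   # what the compiler emits for long b = 20
long_direct   = LDC_I8               # the direct encoding it avoids
short_double  = LDC_I4_S + CONV_R8   # the hypothetical short form for double
double_direct = LDC_R8               # what the compiler actually emits
print(short_long, long_direct, short_double, double_direct)  # → 3 9 3 9
```

So the size saving would be identical in both cases (3 bytes vs 9); the question is why only the integer case takes advantage of it.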
I doubt you will get a more satisfactory answer than "because no one thought it necessary to implement it."
The fact is, they could have made it this way, but as Eric Lippert has stated many times, features are chosen to be implemented rather than chosen not to be implemented. In this particular case the feature's gain didn't outweigh its costs, e.g. additional testing and the non-trivial runtime conversion between int and float, whereas in the ldc.i4.s case the int-to-long conversion is not much trouble. It's also better not to bloat the jitter with more optimization rules.
As shown by the Roslyn source code, this conversion is done only for long. All in all, it's entirely possible to add this feature for float or double as well, but it wouldn't be of much use beyond producing shorter CIL code (helpful when inlining is desired), and when you write a float constant you usually actually mean a floating-point number (i.e. not an integer).