The following C# program silently and implicitly calls an explicit decimal-to-long conversion operator, losing precision.
I don't understand why this happens. As far as I understand, an explicit operator should never be invoked implicitly by the language, especially when, as in this case, the silent explicit conversion loses precision (1.1M => 1L).
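For reference, decimal-to-long is defined as an explicit conversion precisely because it truncates; here is a minimal standalone snippet (independent of the code below) showing the cast the language normally requires:

using System;

class TruncationDemo
{
    static void Main()
    {
        decimal d = 1.1m;

        // Decimal-to-long is an *explicit* conversion, so a cast is required:
        long l = (long)d;
        Console.WriteLine(l); // prints 1: the fractional part is truncated

        // long l2 = d; // error CS0266: cannot implicitly convert 'decimal' to 'long'
    }
}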
This odd behavior actually caused a bug in my program.
Here is the simplified code:
// Custom number class
struct Num
{
    long Raw;

    public static implicit operator Num(long v) => new Num { Raw = v };
}
class Program
{
    static void Main()
    {
        decimal d = 1.1m;

        // The following line implicitly converts d to long (silently losing precision),
        // then calls Num.op_Implicit(long)
        Num num = (Num)d; // <=== should not compile???
    }
}
Here is the resulting IL. You can see that System.Decimal::op_Explicit is called, even though it's never asked for.
IL_0000: ldc.i4.s 11
IL_0002: ldc.i4.0
IL_0003: ldc.i4.0
IL_0004: ldc.i4.0
IL_0005: ldc.i4.1
IL_0006: newobj instance void [mscorlib]System.Decimal::.ctor(int32, int32, int32, bool, uint8)
IL_000b: stloc.0
IL_000c: ldloc.0
IL_000d: call int64 [mscorlib]System.Decimal::op_Explicit(valuetype [mscorlib]System.Decimal) // <=== ???
IL_0012: call valuetype Num Num::op_Implicit(int64)
IL_0017: stloc.1
IL_0018: ret
The rules for user-defined explicit conversions are given in the C# spec, §10.5.5 “User-defined explicit conversions”. The rules are quite complex, but they specifically allow additional standard explicit conversions to be performed “implicitly” as part of the user-defined conversion:
… If E does not already have the type Sₓ, then a standard explicit conversion from E to Sₓ is performed. …
(Here E is the source expression, of type decimal, and Sₓ is the type your conversion operator converts from: long.)
Note that you are performing an explicit conversion to Num (even though it uses the implicit operator you have defined), so the rules for explicit conversions are followed, and those rules allow the extra “implicit” standard explicit conversion from decimal to long.