The following code:
double c1 = 182273d;
double c2 = 0.888d;
Expression c1e = Expression.Constant(c1, typeof(double));
Expression c2e = Expression.Constant(c2, typeof(double));
Expression<Func<double, double>> sinee = a => Math.Sin(a);
Expression sine = ((MethodCallExpression)sinee.Body).Update(null, new[] { c1e });
Expression sum = Expression.Add(sine, c2e);
Func<double> f = Expression.Lambda<Func<double>>(sum).Compile();
double r = f();
double rr = Math.Sin(c1) + c2;
Console.WriteLine(r.ToString("R"));
Console.WriteLine(rr.ToString("R"));
Will output:
0.082907514933846488
0.082907514933846516
Why are r and rr different?
Update:
Found that this is reproduced if I select the "x86" platform target, or check "Prefer 32-bit" with "Any CPU". In x64 mode it works correctly.
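A quick way to verify which mode the process actually ended up running in is Environment.Is64BitProcess (just a convenience check, not part of the repro):
Console.WriteLine(Environment.Is64BitProcess ? "x64" : "x86 (32-bit)");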
I'm not an expert on such things, but I'll give my view on this.
First, the problem appears only when compiling with the debug flag (it does not appear in release mode), and indeed only when running as x86.
If we decompile the method to which your expression compiles, we will see this (the same in both debug and release):
IL_0000: ldc.r8 182273 // push first value
IL_0009: call float64 [mscorlib]System.Math::Sin(float64) // call Math.Sin()
IL_000e: ldc.r8 0.888 // push second value
IL_0017: add // add
IL_0018: ret
However, if we look at the IL code of a similar method compiled in debug mode, we will see:
.locals init (
[0] float64 V_0
)
IL_0001: ldc.r8 182273
IL_000a: call float64 [mscorlib]System.Math::Sin(float64)
IL_000f: ldc.r8 0.888
IL_0018: add
IL_0019: stloc.0 // save to local
IL_001a: br.s IL_001c // basically nop
IL_001c: ldloc.0 // V_0 // pop from local to stack
IL_001d: ret // return
You can see that the compiler added an (unnecessary) save and load of the result to a local variable (probably for debugging purposes). Now here I'm not sure, but as far as I have read, on the x86 architecture double values might be stored in 80-bit CPU registers (quote from here):
By default, in code for x86 architectures the compiler uses the coprocessor's 80-bit registers to hold the intermediate results of floating-point calculations. This increases program speed and decreases program size. However, because the calculation involves floating-point data types that are represented in memory by less than 80 bits, carrying the extra bits of precision—80 bits minus the number of bits in a smaller floating-point type—through a lengthy calculation can produce inconsistent results.
So my guess would be that this store to a local and load back causes the value to be rounded from the 80-bit register representation down to 64 bits (and then extended again on reload), which causes the difference you observe.
Another explanation might be that the JIT behaves differently in debug and release modes (which might still be related to keeping intermediate results in 80-bit registers).
Hopefully people who know more about this can confirm whether I'm right.
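If you want to poke at the 80-bit theory yourself, here is a small experiment I would try (only a sketch, assuming an x86 / "Prefer 32-bit" debug build; the names direct and rounded are mine). The C# spec says an explicit cast can be used to force a floating-point value back to the exact precision of its type, so the cast below should play roughly the same role as the extra local in the debug IL:
double direct = Math.Sin(182273d) + 0.888d;           // intermediate may stay at 80-bit register precision
double rounded = (double)Math.Sin(182273d) + 0.888d;  // cast asks for the Sin result to be rounded to 64 bits
Console.WriteLine(direct.ToString("R"));
Console.WriteLine(rounded.ToString("R"));
If the two printed values differ (and one of them matches your r), that supports the extended-precision explanation; if they are identical, the debug/release JIT difference is the more likely cause.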
Update in response to comment. One way to decompile an expression is to create a dynamic assembly, compile the expression to a method there, save it to disk, and then look at it with any decompiler (I use JetBrains dotPeek). Example:
// Requires: using System.Linq.Expressions; using System.Reflection; using System.Reflection.Emit;
var asm = AppDomain.CurrentDomain.DefineDynamicAssembly(
    new AssemblyName("dynamic_asm"),
    AssemblyBuilderAccess.Save);                 // Save lets us write the assembly to disk
var module = asm.DefineDynamicModule("dynamic_mod", "dynamic_asm.dll");
var type = module.DefineType("DynamicType");
var method = type.DefineMethod(
    "DynamicMethod", MethodAttributes.Public | MethodAttributes.Static);

// Compile the expression into the static method instead of into a delegate
Expression.Lambda<Func<double>>(sum).CompileToMethod(method);

type.CreateType();
asm.Save("dynamic_asm.dll");                     // write dynamic_asm.dll to disk