c#, .net, .net-core, implicit-conversion

Seeking an explanation for the change in order of widening operations from .NET Framework 4.8 to .NET 8


We are updating our application from .NET Framework 4.8 to .NET 8.

During regression testing, we noticed that implicit widening conversions seem to happen in a different order, resulting in some changes to results.

It appears that in .NET Framework, an operation like d1 = f1 * f2 will first convert f1 and f2 to doubles before executing the multiplication, while in .NET 8 the multiplication between the floats is performed first, and then the widening happens.

I am aware of how binary floating-point math behaves. I am not trying to claim that one of these results is "wrong"; I am just trying to understand why the behavior was changed.

And: Is there any way to temporarily change the .NET 8 behavior back to the .NET Framework behavior so we can understand our regression test better?

(P.S. yes, I am aware that this would not be a problem if we didn't have the implicit conversions in the first place. But this is a large legacy codebase and I cannot change that easily.)

Console app testing code:

Console.WriteLine(".NET 8");

Console.WriteLine("Input Float: 0.3333333F");
float f1 = 0.3333333F;
Console.WriteLine("Decimal: " + f1.ToString());
Console.WriteLine("Binary: " + GetFloatBinary(f1));
Console.WriteLine();

Console.WriteLine("Float multiplication first, then conversion");
Console.WriteLine("f2 = f1 * f1");
float f2 = f1 * f1;
Console.WriteLine("Decimal: " + f2.ToString());
Console.WriteLine("Binary: " + GetFloatBinary(f2));
Console.WriteLine("d1 = (double)f2");
double d1 = (double)f2;
Console.WriteLine("Decimal: " + d1.ToString());
Console.WriteLine("Binary: " + GetDoubleBinary(d1));
Console.WriteLine();

Console.WriteLine("Conversion first, multiplication second");
Console.WriteLine("d2 = (double)f1 * (double)f1");
double d2 = (double)f1 * (double)f1;
Console.WriteLine("Decimal: " + d2.ToString());
Console.WriteLine("Binary: " + GetDoubleBinary(d2));
Console.WriteLine();

Console.WriteLine("Let the platform decide");
Console.WriteLine("d3 = f1 * f1");
double d3 = f1 * f1;
Console.WriteLine("Decimal: " + d3.ToString());
Console.WriteLine("Binary: " + GetDoubleBinary(d3));
Console.WriteLine();

Console.ReadLine();

static string GetFloatBinary(float value)
{
    const int bitCount = sizeof(float) * 8;
    int intValue = System.BitConverter.ToInt32(BitConverter.GetBytes(value), 0);
    return Convert.ToString(intValue, 2).PadLeft(bitCount, '0');
}

static string GetDoubleBinary(double value)
{
    const int bitCount = sizeof(double) * 8;
    // A double is 8 bytes wide, so read all 64 bits; ToInt32 would only capture the low 32 bits.
    long longValue = System.BitConverter.ToInt64(BitConverter.GetBytes(value), 0);
    return Convert.ToString(longValue, 2).PadLeft(bitCount, '0');
}

Results:

.NET FRAMEWORK
Input Float: 0.3333333F
Decimal: 0.3333333
Binary: 00111110101010101010101010101010

Float multiplication first, then conversion
f2 = f1 * f1
Decimal: 0.1111111
Binary: 00111101111000111000111000110111
d1 = (double)f2
Decimal: 0.111111097037792
Binary: 0000000000000000000000000000000011100000000000000000000000000000

Conversion first, multiplication second
d2 = (double)f1 * (double)f1
Decimal: 0.111111097865635
Binary: 0000000000000000000000000000000011100011100011100011100100000000

Let the platform decide
d3 = f1 * f1
Decimal: 0.111111097865635
Binary: 0000000000000000000000000000000011100011100011100011100100000000
.NET 8
Input Float: 0.3333333F
Decimal: 0.3333333
Binary: 00111110101010101010101010101010

Float multiplication first, then conversion
f2 = f1 * f1
Decimal: 0.1111111
Binary: 00111101111000111000111000110111
d1 = (double)f2
Decimal: 0.1111110970377922
Binary: 0000000000000000000000000000000011100000000000000000000000000000

Conversion first, multiplication second
d2 = (double)f1 * (double)f1
Decimal: 0.11111109786563489
Binary: 0000000000000000000000000000000011100011100011100011100100000000

Let the platform decide
d3 = f1 * f1
Decimal: 0.1111110970377922
Binary: 0000000000000000000000000000000011100000000000000000000000000000

I think I have a clear understanding of what changed, but I can't find any documentation as to why.

Thanks for any insight!


Solution

  • The problem is reproducible for me only on 32-bit builds (or AnyCPU with "Prefer 32-bit" on .NET Framework).

    The IL for both is the same, and there is no mention of a change in the ECMA addendum, so presumably there were no real changes on that front.

    On 64-bit, .NET Framework behaves the same as .NET 6 in my case (and gives the same results as in the question): the floats are multiplied first.
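
    A quick way to confirm which bitness a given test run actually uses is shown below (a minimal sketch; Environment.Is64BitProcess exists on both .NET Framework 4.x and .NET 8):

    // Prints the bitness of the current process, which determines which JIT code path you get.
    Console.WriteLine(Environment.Is64BitProcess ? "64-bit process" : "32-bit process");
    Console.WriteLine("IntPtr.Size = " + IntPtr.Size); // 8 on 64-bit, 4 on 32-bit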

    On 32-bit, however, the disassembly for .NET Framework is:

    00120AD2 D9 45 C0             fld         dword ptr [ebp-40h]  
    00120AD5 D8 C8                fmul        st,st(0)  
    00120AD7 DD 5D A4             fstp        qword ptr [ebp-5Ch]
    

    Regarding the fld instruction:

    The fld instruction loads a 32-bit, 64-bit, or 80-bit floating-point value onto the stack. This instruction converts 32- and 64-bit operands to an 80-bit extended-precision value before pushing the value onto the floating-point stack.

    This inconsistency is, however, consistent with the ECMA-335 standard:

    I.12.1.3 Handling of floating-point data types:

    Everywhere else (on the evaluation stack, as arguments, as return types, and as local variables) floating-point numbers are represented using an internal floating-point type. In each such instance, the nominal type of the variable or expression is either float32 or float64, but its value can be represented internally with additional range and/or precision. The size of the internal floating-point representation is implementation-dependent, can vary, and shall have precision at least as great as that of the variable or expression being represented.

    This variability across runtime implementations (different JITs emitting different instructions) is discussed in this issue on GitHub, especially in these comments (my clarification in []):

    In any case, the current x86 JIT [.NET Framework] uses x87's transcendental instructions and the precision of those instructions is up for debate.

    @mikedn the x86 port of RyuJIT will be using SSE. [.NET Core] However, for this issue, unless a program explicitly changes the evaluation mode the evaluation will be in double anyway. I think that @gafter is actually more concerned that this is not a requirement of the spec, and there is also no way to force it to happen.
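
    While there is no runtime switch to force a particular internal precision, the evaluation order can be pinned down at the source level with explicit casts, which both runtimes honour per the explicit-conversion rules in ECMA-335 and the C# spec. A minimal sketch, reusing f1 from the question (variable names are mine):

    float f1 = 0.3333333F;

    // Widen the operands first, then multiply in double precision
    // (this matches the question's .NET Framework 32-bit result for d3).
    double wideFirst = (double)f1 * (double)f1;

    // Round the product to float first, then widen
    // (this matches the question's .NET 8 result for d3).
    double narrowFirst = (float)(f1 * f1);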

    There are also several SO questions/answers that discuss the topic in greater detail, as well as this blog post, which summarizes the main issue well (which instruction set the JIT compiler uses in which implementation) and also suggests possible solutions (using decimal or a third-party library):

    To be clear, it is the runtime that JIT-compiles the CIL code when the application executes, and it is the runtime that decides whether to use the FPU (if we are in 32-bit) or the SSE instructions (if we are in 64-bit, and if SSE is available).
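
    For completeness, here is a small sketch of the "use decimal" route suggested above. decimal arithmetic is done in base 10 in software, so it is slower and has a smaller range than double, but it gives the same result on every runtime and bitness:

    decimal m1 = 0.3333333m;
    decimal m2 = m1 * m1;   // exact base-10 product, identical on .NET Framework and .NET 8
    Console.WriteLine(m2);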