Tags: c#, compilation, floating-point, roslyn

Why does the modulus operator return different results depending on the compiler and variables used?


I have some code that produces different results depending on what compiler I use:

float a = 1f;
float b = 6f;
float result = a % (a / b);
float result1 = 1f % (1f / 6f);
Console.WriteLine("{0} , {1}", result, result1);

When running this code with Roslyn, the output is:

5.551115E-17 , 0.1666666

When I build project with this code in Unity for Windows (Mono), I get

0 , 0.1666666

Same project, on Unity, for Android (il2cpp):

0.1666666 , 0.1666666

Same with .NET 4.7.2:

5.551115E-17 , 5.551115E-17

Same with .NET 5:

0.16666664 , 0.16666664

First, the difference for Mono seems to be that it optimises the whole floating-point arithmetic away: it notices that the value before the modulus operator and the numerator of the fraction are the same, and immediately replaces the whole expression with zero. That makes sense logically, but it undermines the whole "I explicitly typed that those are all floats" thing. But why all the other differences, especially the first one? Is that behaviour defined somewhere (as in, does one of those compilers do it "the right way" and the others not)?


Solution

  • Basically, IL doesn't have separate "stack types" for float vs double; it only has F (ECMA-335 I.12.1.3).

    For float a = 1f; float b = 6f; float result = a % (a / b); the C# compiler emits essentially:

    ldc.r4 1      // stack: 1f
    ldc.r4 6      // stack: 1f, 6f
    stloc.0       // b = 6f;       stack: 1f
    dup           // stack: 1f, 1f
    ldloc.0       // stack: 1f, 1f, 6f
    div           // stack: 1f, (1f / 6f)
    rem           // stack: 1f % (1f / 6f)
    

    Which, due to how the runtime works with its single F type, means that it is free to treat this as:

    float a = 1f; float b = 6f;
    double tmp1 = (a / b);
    double tmp2 = a % tmp1;
    float result = (float)tmp2;
    

    When treated like this, the result is always 5.551115E-17.
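    To make the difference concrete, here is a small sketch in C (rather than C#, so the precision of each step is explicit) that evaluates the expression both ways: once entirely in float, and once with double intermediates rounded back to float only at the end, which is effectively what the legacy JIT does:

    ```c
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        float a = 1.0f, b = 6.0f;

        // Everything in float: the division rounds to the nearest float
        // before the remainder is taken.
        float all_float = fmodf(a, a / b);

        // Double intermediates, rounded to float only at the end: the
        // extra precision in (a / b) changes which multiple of the
        // divisor gets subtracted, so the remainder collapses to ~2^-54.
        float via_double = (float)fmod((double)a, (double)a / (double)b);

        printf("%.9g\n%.9g\n", all_float, via_double);
        return 0;
    }
    ```

    This prints 0.166666642 (the .NET 5 result) followed by 5.55111512e-17 (the Roslyn / .NET 4.7.2 result), reproducing both behaviours from one source file.
    
    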

    This is largely only an issue with the legacy 32-bit JIT, because it had to use the x87 FPU stack, which, due to the prohibitive cost of changing the rounding mode, just does all operations as 64-bit. For legacy reasons, it also did not insert intermediate casts to float between operations.

    RyuJIT (the modern 64-bit JIT, used by all of .NET Core since 2.1, IIRC) uses the x86 SIMD instructions (SSE/SSE2) instead, which natively support 32-bit and 64-bit operations, so it doesn't have this problem. All operations are simply done directly as the "correct" type (noting that there may still be some edge cases I'm not remembering).

    The "fix" to make this consistent everywhere is to insert an explicit cast to float after each "operation". For example:

    float a = 1f; float b = 6f;
    float result = a % (float)(a / b);
    

    Likewise:

    float a = 1f; float b = 6f;
    float result = a / b;
    result = a % result;
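
    The same discipline can be mimicked in C: storing each intermediate into a float variable forces a round-to-float at that point, so the final result no longer depends on how wide the machine's intermediate precision happens to be. A minimal sketch:

    ```c
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        float a = 1.0f, b = 6.0f;

        // Force the quotient through a float, even if the division itself
        // was performed in a wider type.
        float q = (float)((double)a / (double)b);
        float result = fmodf(a, q);

        printf("%.9g\n", result);  // 0.166666642 on any conforming compiler
        return 0;
    }
    ```

    Because the rounding point is now explicit in the source, every compiler and JIT is obliged to produce the same 0.166666642 here.
    
    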