Tags: c#, floating-point, rounding-error

Cast float to decimal in C#: why doesn't rounding make (decimal)0.1F == 0.1M false?


If I evaluate the following in C#, it yields true:

(decimal)0.1F == 0.1M

Why doesn't converting 0.1 to float and then to decimal introduce any rounding error?
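
For reference, a minimal console program that reproduces this (the printed result is what I see on Microsoft's .NET implementation):

using System;

class Repro
{
    static void Main()
    {
        // 0.1F is a float literal, 0.1M a decimal literal; the cast converts the float to decimal.
        Console.WriteLine((decimal)0.1F == 0.1M);   // prints True
    }
}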


Solution

  • The cause of the observed behavior is that Microsoft’s C# implementation converts float to decimal using at most seven significant decimal digits.

    Microsoft’s implementation of C# uses .NET. When .NET converts a single-precision floating-point number to decimal, it produces at most seven significant digits, rounding any residue using round-to-nearest.

    The source text 0.1F becomes the single-precision value 0.100000001490116119384765625. When this is converted to decimal with seven significant digits, the result is exactly 0.1. Thus, in Microsoft’s C#, (decimal)0.1F produces 0.1, so (decimal)0.1F == 0.1M is true.
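
    A short sketch of this on Microsoft’s .NET; the printed values are the ones described above:

    using System;

    class SevenDigitConversion
    {
        static void Main()
        {
            float f = 0.1F;                        // stored as 0.100000001490116119384765625

            // float-to-decimal conversion keeps at most seven significant digits,
            // so the residue rounds away and the result is exactly 0.1.
            Console.WriteLine((decimal)f);         // prints 0.1
            Console.WriteLine((decimal)f == 0.1M); // prints True
        }
    }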

    We can compare this with a non-Microsoft implementation, Mono C#. There, Console.WriteLine((decimal)0.1F); prints “0.100000001490116”, and (decimal)0.1F == 0.1M evaluates to false. Mono C# appears to produce more than seven significant digits when converting float to decimal.

    Microsoft’s C# documentation for explicit conversions says “When you convert float or double to decimal, the source value is converted to decimal representation and rounded to the nearest number after the 28th decimal place if required.” I would have interpreted this to mean that the true value of the float, 0.100000001490116119384765625, is exactly converted to decimal (since it requires fewer than 28 digits), but apparently this is not the case.
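
    One way to see that this is not a precision limit of the decimal type itself: the float’s exact value has 27 significant digits, and decimal can hold 28–29, so it could store that value exactly. A small sketch, again assuming Microsoft’s .NET:

    using System;

    class DecimalCanHoldIt
    {
        static void Main()
        {
            // The exact value of the float 0.1F, written out as a decimal literal.
            // All 27 significant digits fit in a decimal, so it is stored exactly.
            decimal exact = 0.100000001490116119384765625M;
            Console.WriteLine(exact);                   // prints 0.100000001490116119384765625

            // The cast from float nevertheless keeps only seven significant digits.
            Console.WriteLine((decimal)0.1F);           // prints 0.1
            Console.WriteLine(exact == (decimal)0.1F);  // prints False
        }
    }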

    We can further confirm this and illustrate what is happening by converting float to double and then to decimal. Microsoft’s C# converts double to decimal using 15 significant digits. If we convert 0.1F to double, the value does not change, because double can exactly represent every float value. So (double)0.1F has exactly the same value as 0.1F, 0.100000001490116119384765625. Now, however, when it is converted to decimal, 15 significant digits are produced. In a Microsoft C# implementation, Console.WriteLine((decimal)(double)0.1F); prints “0.100000001490116”, and (decimal)(double)0.1F == 0.1M evaluates to false.
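
    The same comparison routed through double, again on Microsoft’s .NET:

    using System;

    class ThroughDouble
    {
        static void Main()
        {
            // Widening float to double never changes the value; every float
            // is exactly representable as a double.
            double d = (double)0.1F;                // still 0.100000001490116119384765625

            // double-to-decimal conversion keeps 15 significant digits, so the
            // rounding error of the original float literal is now visible.
            Console.WriteLine((decimal)d);          // prints 0.100000001490116
            Console.WriteLine((decimal)d == 0.1M);  // prints False
        }
    }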