Tags: floating-point, language-agnostic, rounding, precision

Does floating-point arithmetic always err away from zero?


In C# / .NET, the expression String.Format("{0:R}", 0.1 * 199) yields 19.900000000000002.

Because it's a floating-point number, I obviously never expect an exact result of 19.9. From my tests, though, the error always seems to be positive, never negative. That is, my result is always a tiny bit larger than it should be, never a tiny bit smaller.
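
For example, this is a rough sketch of the kind of spot-check I mean (not my actual test code): print the round-trip ("R") form of a few products and eyeball whether the trailing digits overshoot or undershoot the exact decimal value.

    using System;

    class ErrorSignSpotCheck
    {
        static void Main()
        {
            // Print the round-trip representation of 0.1 * n and compare
            // it by eye against the exact decimal value n / 10.
            for (int n = 1; n <= 20; n++)
            {
                Console.WriteLine("0.1 * {0,2} = {1:R}", n, 0.1 * n);
            }
        }
    }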

Can I always count on that behavior? Or am I just doing the wrong tests?

(I assume this is a language-agnostic principle, not exclusive to C# / .NET)


Solution

  • Any single operation rounds to the floating-point value closest to the exact result, so the error can fall on either side of the true value; try 0.1 + 0.7 (see the sketch below).
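
A quick C# check of both cases (a minimal sketch; the values in the comments assume IEEE 754 double precision, and the second output is described only approximately):

    using System;

    class NearestRoundingDemo
    {
        static void Main()
        {
            // The question's case: the double nearest to the exact product
            // lies above 19.9, so the error is away from zero.
            Console.WriteLine("{0:R}", 0.1 * 199);   // 19.900000000000002

            // Counterexample: the double nearest to 0.1 + 0.7 lies below
            // the ideal decimal result 0.8, so the error is toward zero.
            Console.WriteLine("{0:R}", 0.1 + 0.7);   // prints a value just below 0.8
        }
    }

So the direction of the error depends on which neighbouring representable value happens to be closer, not on any systematic bias away from zero.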