
Float epsilon is different in C++ than in C#


A small question: I've been looking into moving part of my C# code to C++ for performance reasons. When I compare `float.Epsilon` in C# with its C++ counterpart, the values are different.

In C# the value, as documented by Microsoft, is 1.401298E-45.

In C++ the value, as documented by cppreference, is 1.19209e-07.

How can it be that the smallest possible value for a float/single can be different between these languages?

If I'm correct, the binary representations should be equal in terms of number of bytes, and maybe even their bit patterns. Or am I looking at this the wrong way?

Hope someone can help me, thanks!


Solution

  • The second value you quoted is the machine epsilon for IEEE binary32 values.

    The first value you quoted is NOT the machine epsilon. From the documentation you linked:

    The value of the Epsilon property is not equivalent to machine epsilon, which represents the upper bound of the relative error due to rounding in floating-point arithmetic.

    From the Variant definitions section of the Wikipedia article on machine epsilon:

    The IEEE standard does not define the terms machine epsilon and unit roundoff, so differing definitions of these terms are in use, which can cause some confusion.

    ...

    The following different definition is much more widespread outside academia: Machine epsilon is defined as the difference between 1 and the next larger floating point number.

    The C# documentation is using that variant definition.

    So the answer is that you are comparing two different definitions of epsilon: C++'s `FLT_EPSILON` is the machine epsilon (the gap between 1 and the next larger float), while C#'s `float.Epsilon` is the smallest positive (subnormal) float.
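
    Both quoted values can in fact be reproduced from C++ alone; a minimal sketch (assuming a standard-conforming compiler with IEEE binary32 `float` and subnormal support enabled):

    ```cpp
    #include <cmath>
    #include <cstdio>
    #include <limits>

    int main() {
        // Machine epsilon: the difference between 1.0f and the next larger float.
        // This is what C++'s FLT_EPSILON / std::numeric_limits<float>::epsilon() report.
        float machine_eps = std::numeric_limits<float>::epsilon();
        std::printf("machine epsilon: %g\n", machine_eps);  // 1.19209e-07

        // The same value derived from the variant definition quoted above.
        std::printf("nextafter(1)-1:  %g\n", std::nextafter(1.0f, 2.0f) - 1.0f);

        // Smallest positive subnormal float: the value C#'s float.Epsilon reports.
        float denorm_min = std::numeric_limits<float>::denorm_min();
        std::printf("denorm min:      %g\n", denorm_min);   // 1.4013e-45
        return 0;
    }
    ```

    So if you need the C# `float.Epsilon` value in C++, `std::numeric_limits<float>::denorm_min()` is the counterpart, not `FLT_EPSILON`.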