
Machine epsilon vs least positive number


What is the difference between machine epsilon and the least positive number in a floating-point representation?

If I plot floating-point numbers on a number line, is the gap between exact 0 and the first positive number that floating point can represent different from the gap between two successive representable numbers?

Which one is generally smaller, and which field does each value depend on (mantissa or exponent)?


Solution

  • Machine epsilon is the relative error of a floating-point number system; from it you can derive absolute errors. How? In IEEE 754 single precision you have a 23-bit mantissa and an 8-bit biased exponent. Machine epsilon is determined by the mantissa width: it equals 2^-23, the smallest positive value for which 1 + epsilon does not round back to 1.

    To find the absolute error (spacing) around any number, multiply machine epsilon by 2 raised to that number's exponent. The gap between successive floats therefore grows with magnitude.

    The least positive number, by contrast, is the smallest positive value the representation can hold, and it depends on the exponent range. In IEEE 754 single precision it is the smallest subnormal, 2^-149 (exponent field all zeros, only the lowest mantissa bit set); the smallest normal number is 2^-126. Both are far smaller than machine epsilon (2^-23 ≈ 1.19 × 10^-7), so the gap between 0 and the first representable positive number is much smaller than the gap between 1 and the next float.

    Both are different things...
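A quick sketch of these relationships in Python. Note that Python floats are IEEE 754 *double* precision (52-bit mantissa, 11-bit exponent), so the constants differ from the single-precision values above, but the same relationships hold:

```python
import sys
import math

# Machine epsilon for doubles: 2**-52, set by the mantissa width.
eps = sys.float_info.epsilon
print(eps)                      # 2.220446049250313e-16
print(1.0 + eps != 1.0)         # True: eps is distinguishable at 1.0
print(1.0 + eps / 2 == 1.0)     # True: half of eps rounds away

# Least positive number: the smallest subnormal, 2**-1074,
# set by the exponent range (plus gradual underflow).
smallest_subnormal = 5e-324
print(smallest_subnormal < eps) # True: least positive << machine epsilon

# The gap between successive floats scales with the exponent:
print(math.ulp(1.0) == eps)             # spacing at 1.0 is exactly eps
print(math.ulp(1024.0) == 1024 * eps)   # spacing grows with magnitude
print(math.ulp(0.0) == smallest_subnormal)  # gap between 0 and first float
```

Here `math.ulp` (Python 3.9+) returns the spacing between a value and the next representable float, which makes the exponent-dependent gaps easy to see directly.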