Tags: floating-point, precision

How can I generally estimate the relevance of error introduced by a calculation in a program using floating point numbers


I'm currently learning more deeply about representation errors when using floating-point numbers. I understand that some decimal numbers cannot be exactly represented in floating point.

It's unsettling that when I write a program that performs many calculations on such numbers, the errors can accumulate until they cause significant issues.
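To make the concern concrete, here is a minimal Python sketch of how representation error accumulates: 0.1 has no exact binary representation, so adding it repeatedly drifts away from the decimal answer.

```python
# Summing 0.1 a thousand times: the decimal answer is exactly 100,
# but each addition carries a tiny representation/rounding error.
total = 0.0
for _ in range(1000):
    total += 0.1

print(total)           # close to, but not exactly, 100.0
print(total - 100.0)   # the accumulated error (tiny, but nonzero)
```

The error here is harmless, but the same mechanism can matter in long-running accumulations or when nearly equal values are later subtracted.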

How can I estimate how big the error can be, and whether floating-point numbers will be problematic enough to justify an alternative, given that I know the equation I'm applying and the set of numbers it operates on?

I'm sure there must be literature about how to do that, but I haven't found the right keywords. Could anyone share resources specifically about estimating how much error a given formula will introduce over the range of your data?

I tried reading about the topic and working through some examples, but I could not extract a general equation or method.


Solution

  • It's hard to pack six months of college numerical analysis into a Stack Overflow answer, but here is something for the OP to start on.


    How can I estimate how big an error can be (?)

    Some guidelines:

    • What is "error"? Are we looking for the absolute error, x - expected_x, or the relative error, (x - expected_x)/expected_x? (Usually it's the relative one, unless the values are near 0.0; sometimes it's the error in ULPs.)

    • First, understand unit in the last place (ULP) for floating point types.

    • Converting decimal text to/from floating-point types imparts about 0.5 ULP of error.

    • Multiplication, division, and sqrt() tend to impart about 0.5 ULP of error each.

    • Addition and subtraction tend to impart anywhere from 0.5 ULP to many ULPs of error; subtracting nearly equal values (catastrophic cancellation) can destroy most significant digits.

    • Trig functions tend to impart a few ULPs of error.

    • Functions like exp() can greatly amplify absolute errors: error_y ≈ error_x * exp(x).

    • Edge cases near 0.0 and infinity often need special consideration.
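    The three error measures above can be computed directly. A small Python sketch (using `math.ulp`, available since Python 3.9, and `Fraction` to get the exact value): the double nearest to decimal 0.1 differs from the true value by a little under half an ULP, which is the rounding guarantee mentioned above.

    ```python
    import math
    from fractions import Fraction

    x = 0.1                        # the double nearest to decimal 0.1
    true_value = Fraction(1, 10)   # the exact decimal value

    # Fraction(x) is the exact binary value actually stored.
    abs_err = abs(Fraction(x) - true_value)   # absolute error
    rel_err = abs_err / true_value            # relative error

    print(float(abs_err))                 # absolute representation error
    print(float(rel_err))                 # relative error, near 2**-54
    print(float(abs_err) / math.ulp(x))   # error in ULPs: under 0.5, per the guideline
    ```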

    I could not extract a general equation or method.

    Oversimplified: apply the propagation of uncertainty (variance) formula, which estimates the output error from the input errors and the partial derivatives of your formula.
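    As a sketch of that idea for a single variable: to first order, an input error sigma_x produces an output error of about |f'(x)| * sigma_x. For y = exp(x) the derivative is exp(x) itself, which is the amplification noted earlier. The values below are illustrative, not from the original answer.

    ```python
    import math

    # First-order propagation of uncertainty for y = exp(x):
    #   sigma_y ≈ |dy/dx| * sigma_x = exp(x) * sigma_x
    x, sigma_x = 5.0, 1e-6        # example input and its uncertainty

    sigma_y = math.exp(x) * sigma_x

    # Sanity check: perturb the input directly and compare.
    direct = math.exp(x + sigma_x) - math.exp(x)

    print(sigma_y, direct)   # the two estimates agree to first order
    ```

    For formulas of several variables, the same idea sums the squared contributions of each input's uncertainty, weighted by the corresponding partial derivative.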

    Also review Errors and residuals.