floating-point, ieee-754, elementary-functions

Why does table-based sin approximation literature always use this formula when another formula seems to make more sense?


The literature on computing the elementary function sin with tables refers to the formula:

sin(x) = sin(Cn) * cos(h) + cos(Cn) * sin(h)

where x = Cn + h, Cn is a constant for which sin(Cn) and cos(Cn) have been pre-computed and are available in a table, and, if following Gal's method, Cn has been chosen so that both sin(Cn) and cos(Cn) are very close to representable floating-point numbers (so the tabulated values are nearly exact). The quantity h is close to 0.0. One example of this formula in the literature is this article (page 7).

I don't understand why this makes sense: cos(h), however it is computed, will be wrong by nearly 0.5 ULP for some values of h (even if it is correctly rounded), and since cos(h) is close to 1.0, an absolute error of that size seems to have a drastic effect on the accuracy of the result sin(x) when computed this way.
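To put rough numbers on this (my own illustration, using the magnitudes of the single-precision example further down): one ULP of cos(h), which sits just below 1.0, is far larger than one ULP of the final result, which is about 0.018 in that example. A minimal check:

#include <stdio.h>
#include <math.h>

int main(void)
{
  /* cos(h) lies just below 1.0; sin(x) is about 0.018 in the example below.
     Both magnitudes are only illustrative. */
  float result = 0.018f;

  printf("one ULP of cos(h) just below 1.0: %a\n",
         1.0f - nextafterf(1.0f, 0.0f));          /* prints 0x1p-24 */
  printf("one ULP of a result near 0.018:   %a\n",
         nextafterf(result, 1.0f) - result);      /* prints 0x1p-29 */
  return 0;
}

With these magnitudes, an error of up to 0.5 ULP in cos(h), after multiplication by sin(Cn), already contributes on the order of 0.3 ULP of error to the result, before the rounding of the product and of the final addition are even counted.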

I do not understand why the formula below is not used instead:

sin(x) = sin(Cn) + (sin(Cn) * (cos(h) - 1.0) + cos(Cn) * sin(h))

Then the two quantities (cos(h) - 1.0) and sin(h) can be approximated with polynomials that are easy to make accurate, as they produce results near zero. The values of sin(Cn) * (cos(h) - 1.0), of cos(Cn) * sin(h), and of their sum are still small, and their absolute error is measured in ULPs of the small quantity that the sum represents, so adding this quantity to sin(Cn) is almost correctly rounded.

Am I missing something that makes the earlier, popular, simpler formula behave well too? Do the writers take it for granted that the readers will understand that the first formula is actually implemented as the second formula?

EDIT: Example

A single-precision table used to compute single-precision sinf() and cosf() might contain the following entry (the "actual value" rows show how close sin(Cn) and cos(Cn) are to the stored single-precision values, which is what Gal's method selects Cn for):

         f             |        cos f          |       sin f      
-----------------------+-----------------------+---------------------
0.017967 0x1.2660bcp-6 |    0x1.ffead8p-1      |    0x1.265caep-6
                       |    (actual value:)    |    (actual value:)
                       | ~0x1.ffead8000715dp-1 | ~0x1.265cae000e6f9p-6

The following are specialized single-precision functions intended for use around 0.017967:

float sinf_trad(float x)
{
  float h = x - 0x1.2660bcp-6f;  /* h = x - Cn */

  /* sin(Cn) * cos(h) + cos(Cn) * sin(h), with cos_0 and sin_0
     approximating cos and sin near 0 */
  return 0x1.265caep-6f * cos_0(h) + 0x1.ffead8p-1f * sin_0(h);
}

float sinf_new(float x)
{
  float h = x - 0x1.2660bcp-6f;  /* h = x - Cn */

  /* sin(Cn) + (sin(Cn) * (cos(h) - 1) + cos(Cn) * sin(h)), with cosm1_0
     approximating cos(h) - 1 near 0 */
  return 0x1.265caep-6f + (0x1.265caep-6f * cosm1_0(h) + 0x1.ffead8p-1f * sin_0(h));
}
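The helpers cos_0, sin_0 and cosm1_0 are not shown here (they are defined in the complete program mentioned below). For a self-contained reading, the following is a minimal sketch under the assumption that low-order Taylor polynomials are accurate enough on this narrow range; the actual program may use different approximations:

/* Sketch only: my own low-order Taylor polynomials for |h| < 0.01,
   not necessarily the approximations used in the complete program. */

static float sin_0(float h)      /* approximates sin(h) for h near 0 */
{
  return h - h * h * h * (1.0f / 6.0f);
}

static float cosm1_0(float h)    /* approximates cos(h) - 1 directly, so the result stays near 0 */
{
  return h * h * (-0.5f + h * h * (1.0f / 24.0f));
}

static float cos_0(float h)      /* approximates cos(h); the result is rounded near 1.0 */
{
  return 1.0f + cosm1_0(h);
}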

Testing these two functions between 0.01f and 0.025f seems to show that the new formula gives more accurate results:

$ gcc -std=c99 test.c && ./a.out 
relative error, traditional: 2.169624e-07, new: 1.288049e-07
sum of squares of absolute error, traditional: 6.616202e-12, new: 2.522784e-12

I took several shortcuts, so please look at the complete program.
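For reference, a test driver in the same spirit might look like the sketch below. It is not the question's actual program: the choice of double-precision sin() as the reference, the enumeration of every float in the range, and the reading of "relative error" as the maximum over the range are my assumptions.

#include <stdio.h>
#include <math.h>

/* Assumes sinf_trad(), sinf_new() and the helpers sketched above are
   defined in the same file. */

int main(void)
{
  double max_rel_trad = 0.0, max_rel_new = 0.0;
  double sq_trad = 0.0, sq_new = 0.0;

  for (float x = 0.01f; x < 0.025f; x = nextafterf(x, 1.0f))
  {
    double ref = sin((double)x);       /* reference value */
    double e_trad = sinf_trad(x) - ref;
    double e_new = sinf_new(x) - ref;

    if (fabs(e_trad / ref) > max_rel_trad) max_rel_trad = fabs(e_trad / ref);
    if (fabs(e_new / ref) > max_rel_new) max_rel_new = fabs(e_new / ref);
    sq_trad += e_trad * e_trad;
    sq_new += e_new * e_new;
  }

  printf("relative error, traditional: %e, new: %e\n", max_rel_trad, max_rel_new);
  printf("sum of squares of absolute error, traditional: %e, new: %e\n", sq_trad, sq_new);
  return 0;
}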


Solution

  • Well, this formula is a start. Then other transformations could be done, depending on the context. I agree that if the formula sin(x) = sin(Cn) * cos(h) + cos(Cn) * sin(h) is applied in the target precision, then the rounding error of sin(Cn) * cos(h) alone is up to 1/2 ulp of the result, which is bad if the goal is to get an accurate result. However, some terms are sometimes expressed in greater precision by using pseudo-expansions. For instance, a number can be represented by a pair (a,b) where b is much smaller than a and whose value is regarded as a+b. In such a case, cos(h) could be represented by the pair (1,h'), and the computation would be equivalent to what you suggest (see the sketch after this answer).

    Alternatively, the implementation can be detailed once the formulas to evaluate cos(h) and sin(h) are given. See Section 3.1 in Stehlé and Zimmermann's paper you cited: they define C*(h) = C(h) − 1, and use C* in the final formula, which is basically what you suggest.

    Note: I'm not sure that using the above formula is the best choice. One could start with sin(x) = sin(Cn) + error_term, and compute the error term in some other way.
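To make the pair idea in the first paragraph concrete, here is a small sketch of my own (not from either cited paper): cos(h) is carried as the unevaluated pair (1, c_lo), where c_lo approximates cos(h) - 1 and is computed directly by a polynomial such as the cosm1_0 sketched earlier. Multiplying by the leading part 1 is exact, so the contribution sin(Cn) is never rounded, and the expression collapses to the second formula from the question.

/* Pseudo-expansion sketch: a value v is carried as a pair (hi, lo) with
   v ~= hi + lo and |lo| much smaller than |hi|. */
struct f_pair { float hi, lo; };

/* sin(Cn + h) with cos(h) held as the pair (1, c_lo):
     sin(Cn) * (1 + c_lo) + cos(Cn) * sin(h)
   hi is exactly 1.0f, so sin_cn * cos_h.hi is exact and the whole thing
   evaluates as sin_cn + (sin_cn * c_lo + cos_cn * sin_h). */
static float sin_with_pair(float sin_cn, float cos_cn, struct f_pair cos_h, float sin_h)
{
  float small = sin_cn * cos_h.lo + cos_cn * sin_h;  /* small correction */
  return sin_cn * cos_h.hi + small;
}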