
Computational trigonometry: Taylor-series sin() and cos() losing precision as x grows


Hello, I am computing trigonometric functions such as sin(x) and cos(x) with Taylor series expansions.

Problem: my values are not wrong, just not very precise.

My question is whether I can improve the accuracy of these functions. I think I have tried everything, but I need your suggestions.

double trig::funcsin(double value)  // was int, which truncated non-integer x
{
    sum = 0;
    factorial fac;
    // sum the first 7 terms of the Taylor series for sin(x)
    for(int i = 0; i < 7; i++)
    {
        sum += pow(-1, i) * pow(value, 2*i + 1) / fac.fact(2*i + 1);
    }
    return sum;
}
double trig::funccos(double value)
{
    factorial fac;
    sum = 0;
    // sum the first 7 terms of the Taylor series for cos(x)
    for(int i = 0; i < 7; i++)
    {
        sum += pow(-1, i) * pow(value, 2*i) / fac.fact(2*i);
    }
    return sum;
}

Example:

Real: -0.7568024953

Mine: -0.73207

Real: -0.27941549819

Mine: -0.501801

Also, as x becomes larger, the output values become less precise at an exponential rate.

I am using the GCC compiler; please give me your suggestions.


Solution

  • The following code demonstrates the Taylor series (about x==0) for the sin() function. As you know, the sine function repeats an identical cycle for every 2*pi interval. But the Taylor series is just a polynomial -- it needs a lot of terms to approximate a wiggly function like sine. And trying to approximate the sine function at some point far away from the origin will require so many terms that accumulated errors will give an unsatisfactory result.

    To avoid this problem, my function starts by remapping x into a single cycle's range centered around zero, between -pi and +pi.

    It's best to avoid using pow and factorial functions if you can instead cheaply update components at each step in the summation. For example, I keep a running value for pow(x, 2*n+1): It starts off set to x (at n==0), then every time n is incremented, I multiply this by x*x. So it only costs a single multiplication to update this value at each step. A similar optimization is used for the factorial term.

    This series alternates between positive and negative terms, so to avoid the hassle of keeping track of whether we need to add or subtract a term, the loop handles two terms on each iteration -- it adds the first and subtracts the second.

    Each time a new sum is calculated, it is compared with the previous sum. If the two are equal (indicating the updates have surpassed the sum variable's precision), the function returns. This isn't a great way to test for a terminating condition, but it makes the function simpler.

    #include <cmath>
    #include <iostream>
    #include <iomanip>
    
    double mod_pi(double x) {
        static const double two_pi = 2 * 3.141592653589793;
        // std::floor(v + 0.5) rounds to the nearest cycle count even for
        // negative x, where a cast to int would truncate toward zero and
        // leave values like x == -4 outside [-pi, +pi]
        const double q = std::floor(x / two_pi + 0.5);
        return x - two_pi * q;
    }
    
    double func_sin(double x) {
        x = mod_pi(x);
    
        double sum = 0;
        double a   = 1;  // 2*n+1   [1, 3, 5, 7, ...]
        double b   = x;  // x^a
        double c   = 1;  // (2*n+1)!
    
        const double x_sq = x * x;
    
        for(;;) {
            const double tp = b / c;
    
            // update for negative term
            c *= (a+1) * (a+2);
            a += 2;
            b *= x_sq;
    
            const double tn = b / c;
            const double ns = tp - tn + sum;
            if(ns == sum) return ns;
            sum = ns;
    
            // update for positive term (at top of loop)
            c *= (a+1) * (a+2);
            a += 2;
            b *= x_sq;
        }
    }
    
    int main() {
        const double y = func_sin(-0.858407346398077);
        std::cout << std::setprecision(13) << y << std::endl;
    }