c, math, time-complexity, calculus, taylor-series

What could be the time complexity of the sine function computed by Taylor series?


I did search everywhere, even on SO, and the scholarly articles on Google are far beyond my mental capabilities. This is as close as it gets, but it doesn't answer my question. As far as I have looked, there is no answer for this. Believe me (but feel free to prove me wrong).

I have a homework problem: calculate the time complexity of the sine function using the Taylor expansion of sin(x). I am not asking for the Taylor series or a program that computes it, but for its time complexity. I know the next term in the Taylor expansion is given as:

Term in x^n = (Term in x^(n-2)) * x * x / (n * (n - 1))
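
This recurrence follows directly from the general term x^n/n! of the sine series: dividing consecutive odd-order terms,

    (x^n / n!) / (x^(n-2) / (n-2)!) = x * x / (n * (n - 1))

which is exactly the update used in the loop below.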

The function snippet is this:

#include <math.h>   /* fabs(), isgreater() */

double sine(double x) {
    int n = 3;
    double sum = x;   /* first term: sin(x) = x - x^3/3! + x^5/5! - ... */
    double term = x;  /* most recently computed term */
    while (isgreater(fabs(term), 0.0000000001)) {
        term = (term * x * x) / (n * (n - 1));
        if (n % 4 == 3)   /* signs alternate: subtract for n = 3, 7, 11, ... */
            sum -= term;
        else
            sum += term;
        n = n + 2;
    }
    return sum;
}

fabs() returns the absolute value, and 0.0000000001 is the required precision. If my understanding is correct, the loop stops once the value of the last computed term is less than or equal to that precision.

My deduction so far: maybe the time complexity depends on x^2/n^2? Or is it not deducible, because we don't know at which specific index the term drops below the precision threshold?

Math is not strong with me, but thankfully there are masters like you out there :)


Solution

  • what could be the time complexity of sine function by Taylor's series?
    Math is not strong with me

    As an adjunct to LutzL's fine analysis: sometimes a sanity check, simply timing the code and graphing the result, provides insight into the time complexity.

    #include <stdio.h>
    #include <time.h>

    #define N 1000000
    clock_t test_sine(double x) {
      clock_t c1 = clock();
      for (int i = 0; i < N; i++) {
        sine(x);
      }
      clock_t c2 = clock();
      return c2 - c1;
    }

    void test_sines(void) {
      double x0 = 0;
      double x1 = 70;
      double dx = 0.2;
      for (double x = x0; x <= x1; x += dx) {
        printf("%f \t %d\n", x, (int) test_sine(x));
        fflush(stdout);
      }
    }

    int main(void) {
      test_sines();
    }
    

    Sample output

    0   16
    0.2 78
    0.4 93
    0.6 94
    0.8 94
    1   124
    ...
    

    Graphing this shows fairly linear growth once x > 3.

    [Plot: execution time vs. x, roughly linear for x > 3]

    Looking closer at small x, we see a step in time at every power, and the trend looks very close to t = pow(x, 1.0/3):

    void test_sines(void) {
      double x0 = 0.0000000001 / 2;
      double x1 = 1;
      double dx = pow(2, 1.0/5);
      for (double x = x0; x <= x1; x *= dx) {
        printf("%f \t %d\n", x, (int) test_sine(x));
        fflush(stdout);
      }
    }


    [Plot: execution time vs. x on a geometric x scale, trend near t = pow(x, 1.0/3)]


    Using clock() is fairly crude, yet it provides a visual to support the analysis.

    With the x * x factor in the update, I would have expected term to keep isgreater() true for about another sqrt(x) worth of steps, as each such step obliges another loop iteration.

    Conclusion: the time complexity is about pow(x, 1.0/d) (2.0 <= d <= 3.1) for small values and linear for large values.
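
    A rough justification for the linear regime (my sketch, not part of the original timing analysis): the loop runs until x^n/n! drops below the precision eps = 1e-10. By Stirling's approximation, n! is about (n/e)^n, so the smallest such N satisfies roughly

        (e * x / N)^N ≈ eps

    The left side only starts shrinking once N exceeds x; writing N = a*x and solving gives a approaching e as x grows. So N is about e*x plus lower-order terms, i.e. the iteration count, and hence the time, is asymptotically linear in x, matching the graph.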


    Note there are a number of issues with the quality of OP's sine() that render its result weak for many x. For many values x > 900, sine(x) looped forever.