Tags: c, gdb, precision, taylor-series

Taylor series for sin(x) with accuracy of 0.00001 prints the same value as entered by the user (C code)


I'm trying to calculate sin(x) using the Taylor series for sin x, with an accuracy of 0.00001 (meaning the loop should keep running until the sum goes below the precision of 0.00001).

(x is given in radians).
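For reference, this is the series I'm using:

    sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...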

The problem is that my function to calculate sin (using the Taylor series formula) prints out the same value it was given (for example, if 7 is given it will print out 7.00000 instead of 0.656987). I tried to debug my code using gdb and couldn't figure out why it stops after the first iteration. Here's my C code for calculating sin(x) using the Taylor series.

double my_sin(double x) {
    int i = 3, sign = 1;   // sign is used for the - and + alternation inside the loop;
                           // i is used for the power and the factorial division
    double sum = x, accuracy = 0.000001;  // sum is set to the first term, x
    for (i = 3; fabs(sum) < accuracy; i += 2) {  // starting from the power of 3
        sign *= -1;  // sign changes each iteration from - to +
        sum += sign * (pow(x, i) / factorial(i));  // the formula itself (factorial is a simple helper for the division)
    }
    return (sum);
}

Any help would be appreciated. Thanks


Solution

  • I tried to debug my code using gdb and couldn't figure out why it stops after the first iteration.

    Well, let's do it again, step by step.

    1. sum = x (input is 7.0, so sum == 7.0).
    2. for(i=3; fabs(sum) < accuracy; i+=2) { ...
      Since sum is 7.0, it is not less than accuracy, so the loop body never executes.
    3. return sum; -- sum is still 7.0, so that's what your function returns.

    Your program does exactly what you asked it to do.
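    If you want to see this in isolation, here is a minimal standalone sketch of just that loop guard, using the values from your code:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
      double sum = 7.0, accuracy = 0.000001;
      /* the loop condition from the question: it is already false
         on the very first check, so the loop body never runs */
      printf("%d\n", fabs(sum) < accuracy);  /* prints 0 */
      return 0;
    }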

    P.S. Here is the code you probably intended to write:

    #include <math.h>   /* fabs, pow */
    #include <float.h>  /* DBL_MAX */

    double my_sin(double x) {
      double sum = x, accuracy = 0.000001;
      double delta = DBL_MAX;  /* last term added; start large so the loop runs at least once */
      for(int i = 3, sign = -1; accuracy < fabs(delta); i += 2, sign = -sign) {
        delta = sign * pow(x, i) / factorial(i);  /* factorial() is your existing helper */
        sum += delta;
      }
      return sum;
    }
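
    For a quick end-to-end check, here is one way it could be wired up and tested. The factorial helper below is just an assumption, since yours wasn't posted; it returns double on purpose, because an integer version overflows at 21! and the series for x = 7 needs terms well past that.

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    /* assumed helper: the question's factorial() wasn't shown */
    static double factorial(int n) {
      double f = 1.0;
      for (int k = 2; k <= n; k++)
        f *= k;
      return f;
    }

    double my_sin(double x) {
      double sum = x, accuracy = 0.000001;
      double delta = DBL_MAX;
      for (int i = 3, sign = -1; accuracy < fabs(delta); i += 2, sign = -sign) {
        delta = sign * pow(x, i) / factorial(i);
        sum += delta;
      }
      return sum;
    }

    int main(void) {
      printf("my_sin(7.0) = %f\n", my_sin(7.0));  /* about 0.656987 */
      printf("sin(7.0)    = %f\n", sin(7.0));     /* library value for comparison */
      return 0;
    }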