Tags: precision, numerical-analysis, taylor-series, significance

Using Taylor Series to Avoid Loss of Precision


I'm trying to use Taylor series to develop a numerically sound algorithm for evaluating a function. I've been at it for quite a while but haven't had any luck yet, and I'm not sure what I'm doing wrong.

The function is

f(x) = 1 + x - sin(x)/ln(1+x),   x ≈ 0

Also: why does loss of precision even occur in this function? When x is close to zero, sin(x)/ln(1+x) isn't anywhere near the same number as x, so I don't see where significance is being lost.
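
A quick way to test whether digits really are being lost is to evaluate the formula in single and double precision and compare. Here is a minimal sketch, assuming Python with NumPy; float32 is used only to make any digit loss visible at larger x than it would be in doubles:

    import numpy as np

    def f_naive(x, dtype):
        # Evaluate 1 + x - sin(x)/ln(1+x) directly in the given precision.
        x = dtype(x)
        return dtype(1) + x - np.sin(x) / np.log1p(x)

    for x in (1e-1, 1e-3, 1e-5):
        print(f"x={x:g}  float32: {f_naive(x, np.float32):.7e}  "
              f"float64: {f_naive(x, np.float64):.7e}")

If the two columns agree in only a few leading digits for small x, significance is being lost somewhere in the subtraction.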

In order to solve this, I believe that I will need to use the Taylor expansions for sin(x) and ln(1+x), which are

x - x^3/3! + x^5/5! - x^7/7! + ...

and

x - x^2/2 + x^3/3 - x^4/4 + ...

respectively. I have attempted to put the x and sin(x)/ln(1+x) terms over a common denominator, and even to combine all three terms, but nothing seems to work out correctly in the end. Any help is appreciated.
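
One way to keep the bookkeeping straight is to do the series arithmetic with exact rational coefficients. The sketch below (plain Python, using fractions.Fraction; the truncation order N is an arbitrary choice) divides the truncated sin(x) series by the truncated ln(1+x) series and subtracts the quotient from 1 + x:

    from fractions import Fraction
    from math import factorial

    N = 6  # number of series coefficients to carry (arbitrary choice)

    # sin(x)  = x - x^3/3! + x^5/5! - ...
    sin_c = [Fraction((-1) ** (k // 2), factorial(k)) if k % 2 else Fraction(0)
             for k in range(N + 1)]
    # ln(1+x) = x - x^2/2 + x^3/3 - ...
    log_c = [Fraction(0)] + [Fraction((-1) ** (k + 1), k) for k in range(1, N + 1)]

    # Both series start with x, so cancel one factor of x before dividing.
    num, den = sin_c[1:], log_c[1:]

    # Long division of power series: find q with q * den == num (mod x^N).
    q = [Fraction(0)] * N
    for k in range(N):
        q[k] = (num[k] - sum(q[j] * den[k - j] for j in range(k))) / den[0]

    # f(x) = 1 + x - q(x); the constant terms cancel exactly.
    f_c = [Fraction(1), Fraction(1)] + [Fraction(0)] * (N - 2)
    for k in range(N):
        f_c[k] -= q[k]
    print([str(c) for c in f_c])  # ['0', '1/2', '1/4', '1/24', ...]

If the division is right, the two constant terms cancel exactly and what is left starts at x/2.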


Solution

  • The method used in the question is correct: substitute the Taylor expansions and carry out the series division. Just make sure your calculator (or math library) is in radians mode when checking the results.
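
As a concrete sketch of that method (Python here for illustration; the three-term truncation and the 1e-4 switch-over threshold are illustrative choices, not fixed by the answer), the series division above gives f(x) = x/2 + x^2/4 + x^3/24 + ..., which can be evaluated directly near zero:

    import math

    def f(x):
        if abs(x) < 1e-4:
            # Combined expansion near 0: the 1's cancel symbolically, so no
            # cancellation happens in floating point. The truncation error
            # is O(x^4), negligible on this interval.
            return x/2 + x*x/4 + x**3/24
        # Away from 0 the direct formula is well-conditioned.
        return 1 + x - math.sin(x) / math.log1p(x)

    for x in (1e-2, 1e-6, 1e-10):
        naive = 1 + x - math.sin(x) / math.log1p(x)
        print(f"x={x:g}  guarded: {f(x):.15e}  naive: {naive:.15e}")

The crossover point is a judgment call: below it the truncated series is more accurate than the cancelling subtraction, and above it the direct formula is fine.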