Tags: matlab, numerical-methods, scientific-computing, epsilon

What's the minimum step size that can be used in Euler's method before it becomes unreliable?


In particular, if Euler's method is implemented on a computer, what's the minimum step size that can be used before rounding errors cause the Euler approximations to become completely unreliable?

I presume it's when the step size reaches machine epsilon? E.g. if machine epsilon is about 1e-16, then once the step size is roughly 1e-16, the Euler approximations are unreliable.
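The epsilon intuition is easy to check directly: once the increment h·f(y) falls below half a unit in the last place of y, the floating-point update y + h·f(y) rounds back to y, so the iteration makes no progress at all. A minimal double-precision sketch (in Python for convenience; the test ODE y' = y is my own choice for illustration):

```python
import numpy as np

eps = np.finfo(np.float64).eps  # ~2.22e-16 for IEEE double precision
y = 1.0
h = 1e-17                       # step size below machine epsilon

# Euler update for y' = y: the increment h*y is smaller than half an
# ulp of y, so y + h*y rounds back to y and the solution never advances.
y_next = y + h * y
print(y_next == y)              # True -- the update is completely lost
```

So at h near machine epsilon the method is not just inaccurate but stalled entirely; the interesting question is how much earlier rounding error starts to dominate.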


Solution

  • For Euler's method, the smallest step size you would want to use is about h0 = 1e-8, since that is where the total error attains its minimum. The global truncation error of Euler's method shrinks in proportion to h, while the accumulated rounding error grows like ε/h, where ε is the unit roundoff of each floating-point operation (about 1e-16 in your case). The total error, roughly E(h) ≈ C·h + ε/h, is therefore minimized at h0 on the order of the square root of ε, hence 1e-8. Below that, reducing h further makes the approximation worse, not better.
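A quick way to see where 1e-8 comes from (a sketch only; the truncation constant C is problem-dependent, so h0 = sqrt(ε) is an order-of-magnitude estimate, not an exact threshold). Shown in Python; MATLAB's built-in eps gives the same value:

```python
import numpy as np

# Unit roundoff for IEEE double precision (the "e-16" in the question).
eps = np.finfo(np.float64).eps

# Error model: E(h) ~ C*h (truncation) + eps/h (accumulated rounding).
# Setting dE/dh = C - eps/h^2 = 0 gives h0 = sqrt(eps/C); with C ~ 1:
h0 = np.sqrt(eps)
print(h0)  # ~1.49e-08: below this, rounding error dominates
```

In MATLAB the same estimate is simply sqrt(eps).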