Tags: python, math, differential-equations, calculus

Euler's Approximation


I'm trying to write a program for Euler's approximation of a differential equation (a problem in a book asked me to). I managed to write something that works for everything I've tested, but for some reason it fails at step size h = 0.1. Here is the code:

from math import pow as p

y0: int = 3

h = input("Step size: ")
h = float(h)
h1 = h
x_n = 0
while x_n < 1:
    yn = y0 + h*(3*p(x_n, 2)*(2 - y0))
    x_n = h1
    h1 = h1 + h
    y0 = yn
else:
    print(yn)

# h=1,     f(1)=3
# h=0.1,   f(1)=2.39279
# h=0.01,  f(1)=2.37011
# h=0.001, f(1)=2.36810

It works for h=1, h=0.5, h=0.2, h=0.05, h=0.01, h=0.001 and so on... For some reason it only fails when h=0.1.

The differential equation in question is y'=3(x^2)(2-y)

Thank you for the help!


Solution

  • (It would look better if the variable names had a uniform construction.)

    If you also print the x value of the last step

    for h in [1.0,0.5,0.2,0.1,0.05]:
        x_n, y_0 = 0.0, 3.0
        while x_n < (1):
            y_n = y_0 + h*(3*(x_n**2)*(2-y_0))
            x_n = x_n + h
            y_0 = y_n
        else:
        print(f"h={h:8.5f}: f({x_n:12.10f}) = {y_n:12.10f}")
    

    then you find in the result

    h= 1.00000: f(1.0000000000) = 3.0000000000
    h= 0.50000: f(1.0000000000) = 2.6250000000
    h= 0.20000: f(1.0000000000) = 2.4261034230
    h= 0.10000: f(1.1000000000) = 2.2749557398
    h= 0.05000: f(1.0000000000) = 2.3795619396
    

    that at h=0.1 the iteration indeed only stops at x=1.1. This is due to the floating-point representation of the variables: the point x_n that should land exactly on x=1 can fall slightly below it, so that another step is performed. One way to prevent this is to adapt the last step (another is to use a "close-by" comparison):

    for h in [1.0,0.5,0.2,0.1,0.05]:
        x_n, y_0, x_f = 0.0, 3.0, 1.0
        y_n = y_0
        while x_n < x_f:
            if x_n + 1.001*h > x_f: h = x_f - x_n
            y_n = y_n + h*(3*(x_n**2)*(2-y_n))
            x_n = x_n + h
        else:
            print(f"h={h:8.5f}: f({x_n:12.10f}) = {y_n:12.10f}")
    

    which gives the more consistent result

    h= 1.00000: f(1.0000000000) = 3.0000000000
    h= 0.50000: f(1.0000000000) = 2.6250000000
    h= 0.20000: f(1.0000000000) = 2.4261034230
    h= 0.10000: f(1.0000000000) = 2.3927939140
    h= 0.05000: f(1.0000000000) = 2.3795619396
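The "close-by" comparison mentioned above is the other way to stop the loop at the right place. A minimal sketch (not part of the original answer) using Python's `math.isclose`:

```python
import math

for h in [1.0, 0.5, 0.2, 0.1, 0.05]:
    x_n, y_n = 0.0, 3.0
    # treat x_n as "arrived" once it is within a tolerance of the end point,
    # so an accumulated value like 0.9999999999999999 no longer triggers
    # an extra step
    while x_n < 1.0 and not math.isclose(x_n, 1.0, rel_tol=1e-9):
        y_n = y_n + h*(3*(x_n**2)*(2 - y_n))
        x_n = x_n + h
    print(f"h={h:8.5f}: f({x_n:12.10f}) = {y_n:12.10f}")
```

Here the final step is still taken with the full width h, so for h=0.1 the last x is 0.9999999999999999 rather than exactly 1.0, but the printed y value should agree with the adapted-step version up to rounding. For reference, the exact solution of y' = 3x²(2−y) with y(0)=3 is y(x) = 2 + exp(−x³), so y(1) = 2 + e⁻¹ ≈ 2.3679, which the Euler values approach as h shrinks.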
    

    To explore the influence of the floating-point representation of the numbers, print out in full what the floating-point numbers really are:

    import numpy as np
    print(", ".join("%20.17f" % x for x in np.cumsum(10*[0.1])))
    

    gives

    0.10000000000000001,  0.20000000000000001,  0.30000000000000004,  
    0.40000000000000002,  0.50000000000000000,  0.59999999999999998,  
    0.69999999999999996,  0.79999999999999993,  0.89999999999999991,  
    0.99999999999999989
    

    This shows that the rounding in the last mantissa bit after floating-point operations can have rather counter-intuitive results.
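For example, ten accumulated steps of 0.1 land just below 1.0, which is precisely why the loop condition x_n < 1 in the question admits an eleventh step (no NumPy needed):

```python
x = 0.0
for _ in range(10):
    x += 0.1          # binary 0.1 is not exactly 1/10

print(repr(x))        # 0.9999999999999999
print(x < 1.0)        # True -> the while loop takes one more step
```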