Tags: r, ode, differential-equations

Determine the right time step for solving an ODE in R (deSolve)


As posted in this question, the times argument used when solving an ODE changes the results.

Basically, for the exact same func, times = seq(1, 10, 0.01) and times = seq(1, 10, 1) yield entirely different results.

My question is: what is the "right" result, and how do I determine the "right" time step (0.01, 1, or some other value)?

I would decrease the time step gradually, solve the ODE, and see whether the result stops changing. But is there a better way to do it?
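
For example, something like this step-halving check (func below is just a made-up one-equation stand-in for my actual model):

    library(deSolve)
    
    # made-up stand-in for the actual model
    func  <- function(t, y, parms) list(-parms["k"] * y)
    y0    <- c(y = 1)
    parms <- c(k = 0.5)
    
    dt <- 1
    for (i in 1:20) {
      coarse <- ode(y0, seq(1, 10, dt),     func, parms)
      fine   <- ode(y0, seq(1, 10, dt / 2), func, parms)
      # stop once halving the step no longer changes the final state
      if (max(abs(coarse[nrow(coarse), -1] - fine[nrow(fine), -1])) < 1e-6) break
      dt <- dt / 2
    }
    dt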

Thanks!


Solution

  • The differences in the link are the result of the chaotic system magnifying minimal errors to the size of the chaotic attractor over a time span of less than 30. You get this effect even when comparing a segmentation into 100 sub-intervals with one into 101 sub-intervals. It is thus practically impossible to compute a reliable result over time spans larger than 20 for this system.

    Python scipy's odeint uses the same lsoda Fortran code as the R package deSolve. Using the same parameters and function as in the test example of the linked question/answer, and setting the error tolerances rather loose,

    import numpy as np
    from scipy.integrate import odeint
    
    # Lorenz and u0 as defined in the linked question/answer
    atol, rtol = 1e-3, 1e-6
    
    for N in [10, 100, 1000]:
        t = np.linspace(0, 10, N + 1)
        u = odeint(Lorenz, u0, t, atol=atol, rtol=rtol)
        print("N =%6d" % N, ", (t,u) =", (t[-1], u[-1]))
    

    I get the output

    N =    10 , (t,u) = (10.0, array([ 15.9506689 ,  -6.49172355, -10.50061322]))
    N =   100 , (t,u) = (10.0, array([ 15.86686806,  -6.39874131, -10.3567592 ]))
    N =  1000 , (t,u) = (10.0, array([ 15.87163076,  -6.40449548, -10.36581067]))
    

    One can still see the influence of the segmentation; however, the differences are proportional to the error tolerances.

    With the error tolerances set to more reasonable (stricter) values, the differences in the outputs shrink and then vanish, as the internal steps of the integrator become much smaller than the time step of the output sequence:

    atol, rtol = 1e-6, 1e-8
    
    N =    10 , (t,u) = (10.0, array([ 16.76057876,  -7.28291262, -11.68849765]))
    N =   100 , (t,u) = (10.0, array([ 16.76049974,  -7.28284578, -11.68840157]))
    N =  1000 , (t,u) = (10.0, array([ 16.76049991,  -7.28284592, -11.68840176]))
    
    ---
    atol, rtol = 1e-12, 1e-14
    
    N =    10 , (t,u) = (10.0, array([ 16.76043932,  -7.28277217, -11.68828488]))
    N =   100 , (t,u) = (10.0, array([ 16.76043932,  -7.28277217, -11.68828488]))
    N =  1000 , (t,u) = (10.0, array([ 16.76043932,  -7.28277217, -11.68828488]))
    

    Thus, if you see an influence of the output time segmentation on the integration result, then the error tolerances (given or default) are not strict enough.
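
    Translated back to R's deSolve, the fix is to pass stricter rtol and atol to ode(). A minimal sketch, assuming the default lsoda solver; the Lorenz system and parameters below are only placeholders for the actual model of the linked question:

    library(deSolve)
    
    # placeholder Lorenz system standing in for the model of the linked question
    Lorenz <- function(t, state, parms) {
      with(as.list(c(state, parms)), {
        list(c(sigma * (Y - X),
               X * (rho - Z) - Y,
               X * Y - beta * Z))
      })
    }
    state <- c(X = 1, Y = 1, Z = 1)
    parms <- c(sigma = 10, rho = 28, beta = 8 / 3)
    
    # with strict tolerances the output grid no longer changes the result
    for (dt in c(1, 0.1, 0.01)) {
      out <- ode(state, seq(1, 10, dt), Lorenz, parms,
                 rtol = 1e-8, atol = 1e-6)
      print(tail(out, 1))
    }

    If the final rows printed for the three values of dt agree to within the requested tolerance, the output time step itself is irrelevant; only the tolerances control the accuracy.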