I want to use a switching function to integrate a simple ODE. The switching function ensures that the gradient is zero for certain values of t and a numerical constant for all other values of t.
I can get the desired result by using dummy states, but when I repeat the calculation with a single state, the output from solve_ivp is different. I would like to understand why this is the case.
Here is the code to reproduce the results:
#==============================================================================
# import modules
#==============================================================================
from scipy.integrate import solve_ivp
from matplotlib import pyplot as plt
import numpy as np
#==============================================================================
# user defined functions
#==============================================================================
def ode_multi(t, y):
    dy1_dt = 0 if t < 3 else 1
    dy2_dt = 0 if t < 4 else 1
    dy3_dt = dy1_dt - dy2_dt
    return [dy1_dt, dy2_dt, dy3_dt]
def ode_single(t, y):
    dy1_dt = 0 if t < 3 else 1
    dy2_dt = 0 if t < 4 else 1
    dy3_dt = dy1_dt - dy2_dt
    return [dy3_dt]
#==============================================================================
# solve ode
#==============================================================================
sol_multi = solve_ivp(ode_multi, [0,6], [0,0,0], method='LSODA', t_eval = np.linspace(0,6,100))
sol_single = solve_ivp(ode_single, [0,6], [0], method='LSODA', t_eval = np.linspace(0,6,100))
plt.figure(1)
plt.plot(sol_multi.t, sol_multi.y[0], label='y1')
plt.plot(sol_multi.t, sol_multi.y[1], label='y2')
plt.plot(sol_multi.t, sol_multi.y[2], label='y3')
plt.legend(); plt.grid()
plt.figure(2)
plt.plot(sol_single.t, sol_single.y[0], label='y3')
plt.legend(); plt.grid()
I tried using numpy.heaviside instead of if statements for the switching function to ensure the gradients are differentiable. The output is the same, so I used if statements in the code sample to simplify the explanation.
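For reference, a heaviside-based version of the single-state right-hand side (equivalent to the if statements above, with the second heaviside argument set to 1 so the step takes the value 1 at the jump point) looks like:

```python
import numpy as np

def ode_single_smooth(t, y):
    # np.heaviside(x, 1) returns 0.0 for x < 0 and 1.0 for x >= 0,
    # matching "0 if t < 3 else 1" and "0 if t < 4 else 1"
    dy1_dt = np.heaviside(t - 3, 1)
    dy2_dt = np.heaviside(t - 4, 1)
    return [dy1_dt - dy2_dt]
```

As the question notes, this makes no difference to the solver output, since the function is still a step function as far as the integrator is concerned.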
Your inputs are appreciated!
Set a limit on the step size of at most half the distance between the "singular" events; max_step=0.5
should work in this case. Reducing the error tolerances can have a similar effect.
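Applied to the single-state case from the question, this is a minimal sketch of the max_step fix:

```python
from scipy.integrate import solve_ivp
import numpy as np

def ode_single(t, y):
    # gradient is 1 only on the interval [3, 4)
    return [(0 if t < 3 else 1) - (0 if t < 4 else 1)]

# max_step=0.5 forces the solver to evaluate inside [3, 4],
# so the non-zero segment can no longer be stepped over
sol = solve_ivp(ode_single, [0, 6], [0], method='LSODA',
                max_step=0.5, t_eval=np.linspace(0, 6, 100))
print(sol.y[0][-1])  # expect a value close to 1, the area under the step
```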
It seems that the integrator jumps over the interval where the derivative is non-zero. This can happen because the internal step size is adapted to the derivative function as the solver sees it. In the initial zero segment the solver builds up the hypothesis that the derivative function is the zero function, and increases the step size accordingly. With some luck, no point in the interval [3, 4]
gets evaluated, so nothing invalidates the hypothesis.
This is not the case in the first example, as the first two components do change and thus force the solver to treat the jump points carefully.
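One way to check this explanation is to record every t at which the solver actually evaluates the right-hand side (a diagnostic sketch; the wrapper make_counting_rhs is a hypothetical helper, not part of scipy):

```python
from scipy.integrate import solve_ivp

def make_counting_rhs():
    # wraps the single-state RHS and records every t the solver samples
    samples = []
    def rhs(t, y):
        samples.append(t)
        return [(0 if t < 3 else 1) - (0 if t < 4 else 1)]
    return rhs, samples

rhs, samples = make_counting_rhs()
solve_ivp(rhs, [0, 6], [0], method='LSODA')
inside = [t for t in samples if 3 <= t <= 4]
# if inside is empty, the solver never saw the non-zero segment,
# which is exactly the failure mode described above
print(len(inside))
```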