I am solving a stochastic differential equation, and I have a function that implements the algorithm to solve it (similar to a Runge-Kutta method, but with a random variable), so I have to call that function at each time step. Since the solution is random, I also have to solve the equation many times in order to average over all the solutions. That is why I want to know the most efficient way to call this function at each iteration.
There are standard ways to optimize repeated function calls, such as memoizing (caching) results for inputs you have already seen, or precomputing values outside the loop.
However, since you say that your application is a variation on Runge-Kutta, neither of these is likely to work: the value of t and the modeled state vector vary at every step, so you must call the function inside the loop, and its arguments are constantly changing.
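For concreteness, here is a minimal sketch of why the call has to sit inside the loop, assuming an Euler-Maruyama-style step (the simplest stochastic cousin of Runge-Kutta); `drift`, `diffusion`, and all the parameters here are hypothetical placeholders, not your actual model:

```python
import numpy as np

def drift(t, x):
    return -x          # placeholder drift term a(t, x); replace with your SDE's

def diffusion(t, x):
    return 0.5         # placeholder diffusion term b(t, x); replace with your SDE's

def simulate(x0, t0, t1, n_steps, rng):
    """One sample path via Euler-Maruyama: x += a(t, x)*dt + b(t, x)*dW."""
    dt = (t1 - t0) / n_steps
    x, t = x0, t0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))   # fresh random increment every step
        x = x + drift(t, x) * dt + diffusion(t, x) * dW
        t += dt
    return x

rng = np.random.default_rng(0)
print(simulate(1.0, 0.0, 1.0, 1000, rng))
```

Because both t and x feed into every call and change every iteration, there is nothing to cache.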
If the algorithm itself is slow, it won't matter how efficient you make the function calls; the call overhead is not the bottleneck. Look at optimizing the function body so it runs faster (or compiling it with Cython).
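One way to do that before reaching for Cython is to vectorize with NumPy, so that each step advances all realizations at once instead of looping over paths in Python. A sketch under the same assumed Euler-Maruyama step and placeholder coefficients as above:

```python
import numpy as np

def drift(t, x):
    return -x          # placeholder drift a(t, x)

def diffusion(t, x):
    return 0.5         # placeholder diffusion b(t, x)

def simulate_batch(x0, t0, t1, n_steps, n_paths, rng):
    """Advance all n_paths realizations together at every step."""
    dt = (t1 - t0) / n_steps
    x = np.full(n_paths, x0, dtype=float)
    t = t0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # one increment per path
        x = x + drift(t, x) * dt + diffusion(t, x) * dW  # vectorized update
        t += dt
    return x

rng = np.random.default_rng(0)
paths = simulate_batch(1.0, 0.0, 1.0, 1000, 10_000, rng)
print(paths.mean(), paths.std())
```

This turns the "solve many times" outer loop into array arithmetic, which is usually a much bigger win than shaving call overhead.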
EDIT: I see that you are running this multiple times to characterize the range of outcomes, given the stochastic nature of the simulation. In that case, you should use multiprocessing to run independent simulations on separate CPU cores; since the runs don't depend on one another, this parallelizes well and should speed things up some.
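A minimal sketch of that approach with the standard-library `multiprocessing.Pool`, assuming each run can be wrapped in a top-level function that takes a seed (the solver body here is a placeholder for yours):

```python
import multiprocessing as mp
import numpy as np

def run_simulation(seed):
    """One full simulation; seeded per run so the paths are independent."""
    rng = np.random.default_rng(seed)
    x, dt = 1.0, 1e-3                    # placeholder initial condition and step
    for _ in range(1000):
        x += -x * dt + 0.5 * rng.normal(0.0, np.sqrt(dt))
    return x

if __name__ == "__main__":
    n_runs = 10_000
    with mp.Pool() as pool:              # one worker per CPU core by default
        results = pool.map(run_simulation, range(n_runs))
    print(np.mean(results))
```

Passing a distinct seed to each run keeps the random streams independent across workers, which matters when the whole point is averaging over independent realizations.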