Tags: python, scipy, mathematical-optimization

Resuming an optimization in scipy.optimize?


scipy.optimize offers many different methods for local and global optimization of multivariate systems. However, I have a very long optimization run that may be interrupted (and in some cases I may want to interrupt it deliberately). Is there any way to restart... well, any of them? Clearly one can provide the last, most-optimized set of parameters found as the initial guess, but parameters aren't the only state in play: there are also gradients (Jacobians), the population in differential evolution, etc. I obviously don't want these to have to start over as well.

I see no way to provide these to scipy, nor to save its state. For functions that take a Jacobian, for example, there is a Jacobian argument ("jac"), but it's either a boolean (indicating that your evaluation function returns a Jacobian, which mine doesn't) or a callable (I would only have the single result from the last run to provide). Nothing takes just an array holding the last known Jacobian. And with differential evolution, losing the population would be terrible for performance and convergence.

Are there any solutions to this? Any way to resume optimizations at all?


Solution

  • The general answer is no: there is no general solution apart from, just as you say, restarting from the last estimate of the previous run (a minimal warm-restart sketch appears below).

    For differential evolution specifically, though, you can instantiate the DifferentialEvolutionSolver directly, which you can pickle at a checkpoint and unpickle later to resume (see the second sketch below). The suggestion comes from https://github.com/scipy/scipy/issues/6517.
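
    First, a minimal sketch of the warm-restart pattern for scipy.optimize.minimize: minimize's callback parameter is invoked with the current parameter vector after each iteration, so it can checkpoint the latest iterate for a later run to start from. The rosen objective and the "latest_x.npy" file name here are just placeholders; note that internal state such as gradient and Hessian approximations is rebuilt from scratch on resume, which is exactly the limitation the question describes.

        import numpy as np
        from scipy.optimize import minimize, rosen

        def save_latest(xk):
            # Checkpoint the current iterate after every iteration.
            np.save("latest_x.npy", xk)

        # First (possibly interrupted) run.
        res = minimize(rosen, x0=np.array([1.3, 0.7]), callback=save_latest)

        # A later run resumes from the saved iterate; only the parameter
        # vector survives -- gradients etc. are recomputed from scratch.
        x0 = np.load("latest_x.npy")
        res = minimize(rosen, x0=x0, callback=save_latest)
        print(res.x, res.fun)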
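
    Second, a sketch of the pickling approach from that issue. DifferentialEvolutionSolver lives in a private module (scipy.optimize._differentialevolution), so the import is an implementation detail that may move between scipy versions, and pickling assumes the objective itself is picklable (e.g. a module-level function like rosen here). The generation counts and checkpoint file name are arbitrary.

        import pickle

        from scipy.optimize import rosen
        # Private module: an implementation detail, not public API.
        from scipy.optimize._differentialevolution import DifferentialEvolutionSolver

        solver = DifferentialEvolutionSolver(rosen, bounds=[(0, 2), (0, 2)], seed=1)

        # Each next() advances the population by one generation and returns
        # the current best parameter vector and its energy.
        for _ in range(50):
            best_x, best_energy = next(solver)

        # Checkpoint: the solver object carries the population, its energies
        # and the RNG state, so pickling preserves everything needed to resume.
        with open("de_checkpoint.pkl", "wb") as f:
            pickle.dump(solver, f)

        # ...the process may exit here; later, resume where we left off...
        with open("de_checkpoint.pkl", "rb") as f:
            solver = pickle.load(f)

        for _ in range(50):
            best_x, best_energy = next(solver)

        print(best_x, best_energy)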