In Python, I'd like to minimize hundreds of thousands of scalar-valued functions. Is there a faster way than a for-loop over a def
of the objective function and a call to scipy.optimize.minimize_scalar?
Basically, I'm looking for a vectorized version, where I could pass a function foo
and a 2D numpy array whose rows are each given as extra data to the objective function.
This is not currently possible in scipy without a workaround such as multiprocessing, threads, or converting the many one-dimensional problems into a single large multidimensional one. There is, however, a pull request currently open to address this.
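For reference, here is a minimal sketch of the for-loop baseline the question describes, with each row of a 2D array passed to the objective via the `args` parameter of `minimize_scalar`. The objective `foo` (a quadratic whose minimum location depends on the row) is a made-up example, not from the question:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def foo(x, row):
    # Hypothetical objective: a quadratic a*(x - b)**2 whose
    # minimizer is the row's second entry, b.
    a, b = row
    return a * (x - b) ** 2

# Each row holds the extra data (a, b) for one scalar problem.
data = np.array([[1.0, 2.0],
                 [3.0, -1.0],
                 [0.5, 4.0]])

# The for-loop baseline: one minimize_scalar call per row.
minima = np.array([minimize_scalar(foo, args=(row,)).x for row in data])
```

At hundreds of thousands of rows, the per-call Python overhead of this loop dominates, which is exactly why a natively vectorized variant would help.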