I'm finding the maximum of a function f over the parameter nu in Python using SciPy's built-in differential evolution, while keeping the other terms (args) fixed. My code
max = scipy.optimize.differential_evolution(lambda nu:-f(args,nu),bounds)
fopt = max.fun
gives me the correct value I desire. However, now I want to do the same thing but vary over two parameters; call them nu and mu. I tried
max = scipy.optimize.differential_evolution(lambda nu,mu:-f(args,nu,mu),bounds)
fopt = max.fun
But I get an error. What is the correct way to implement optimization over several parameters using the above?
You can optimize over multiple parameters by passing them to scipy.optimize.differential_evolution as a single list. For example, assume the following function that takes a list x as its argument:
def fun(x):
    return x[0] + x[1]
Let's assume the first parameter is nu (corresponding to x[0]) and the second one is mu (corresponding to x[1]). Define the bounds accordingly for nu and mu, and then you can optimize both at once:
from scipy.optimize import differential_evolution
bounds = [(0, 2), (0, 3)]  # e.g. (0, 2) for nu and (0, 3) for mu
max = differential_evolution(fun, bounds)
fopt = max.fun
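For reference, the OptimizeResult returned by differential_evolution also carries the optimizing parameters, so for the toy fun above you can read both the minimizer and the minimum back out:
print(max.x)    # array with the optimal [nu, mu]
print(max.fun)  # the corresponding minimum of fun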
Using a lambda:
from scipy.optimize import differential_evolution
fun = lambda x: x[0]+x[1]
bounds = [(0, 2), (0, 3)]  # e.g. (0, 2) for nu and (0, 3) for mu
max = differential_evolution(fun, bounds)
fopt = max.fun
In your specific case, your lambda function lambda nu,mu: -f(args,nu,mu) returns the negative of the function f, but differential_evolution passes all trial parameters to the objective as a single array, which is why the two-argument lambda raises an error. Instead, pass nu and mu together in a list x, either as lambda x: -f(args, x) (and unpack x into nu and mu inside f), or, without modifying f, as lambda x: -f(args, x[0], x[1]), where x[0] and x[1] play the roles of nu and mu respectively.
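Putting it together, a minimal sketch of how this could look for your case (the f and args below are only stand-ins for your actual function and fixed terms):
from scipy.optimize import differential_evolution

args = 1.0                        # stand-in for your fixed terms
def f(args, nu, mu):              # stand-in for your actual f
    return args - (nu - 1.0)**2 - (mu - 2.0)**2

bounds = [(0, 2), (0, 3)]         # (0, 2) for nu, (0, 3) for mu

# pack nu and mu into a single vector x and unpack inside the lambda
result = differential_evolution(lambda x: -f(args, x[0], x[1]), bounds)

nu_opt, mu_opt = result.x         # maximizing nu and mu
fopt = -result.fun                # maximum of f (undo the sign flip)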