Abstract problem to be solved:
we have n d-dimensional design variables, say {k_1, k_2, ..., k_n}
maximize the minimum of [f(k_1), f(k_2), ..., f(k_n)], where f() is a nonlinear function, i.e. maximin
constraint: mean([k_1, k_2, ..., k_n]) == m, where m is a known constant
Can someone provide an example of how this can be solved (maximin, d-dim variables) via pyOpt?
EDIT: I tried this:
from pyOpt.pyOpt_optimization import Optimization
from pyOpt.pyALPSO.pyALPSO import ALPSO

def __objfunc(x, **kwargs):
    # pyOpt minimizes by default, so negate to *maximize* the minimum
    f = -min(x[0] + x[1], x[2] + x[3])
    # mean([k_1, k_2]) == 5, written as g[0] == 0
    g = [0.0]
    g[0] = ((x[0] + x[1]) + (x[2] + x[3])) / 2.0 - 5
    fail = 0
    return f, g, fail

if __name__ == '__main__':
    op = Optimization('test', __objfunc)
    # ALPSO is a particle swarm, so give the variables finite bounds
    op.addVarGroup('p0', 4, type='c', lower=-10.0, upper=10.0)
    op.addObj('f')
    op.addCon('eq', 'e')  # the mean constraint is an equality, not an inequality
    o = ALPSO()
    o(op)
    print(op._solutions[0])
suppose 2-dimensional design variables (the four scalars encode n=2 variables with d=2, and f is the coordinate sum)
is there any better way?
I probably would reformulate this as:

    maximize t
    subject to: f(k_i) >= t            for i = 1, ..., n
                (k_1 + ... + k_n)/n == m
The min() function you used is non-differentiable (and thus dangerous). Also the mean() function can be replaced by a linear constraint (which is easier).
I am not familiar with the ALPSO solver, but this reformulation would usually be helpful for more traditional solvers like SNOPT, NLPQL and FSQP.
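For what it's worth, here is a minimal sketch of that epigraph reformulation using scipy's SLSQP in place of pyOpt (scipy was already imported in your attempt). The toy objective f(k) = k[0] + k[1] and the values n=2, d=2, m=5 are taken from your example; I interpret the mean constraint componentwise over the vectors k_i, which is one possible reading of the question.

```python
import numpy as np
from scipy.optimize import minimize

# Toy f from the question: f(k) = k[0] + k[1] for a 2-dimensional k.
def f(k):
    return k[0] + k[1]

n, d, m = 2, 2, 5.0  # two design variables, each 2-dimensional, target mean m

# Decision vector z = [t, k_1, ..., k_n] flattened; t is the epigraph variable.
def neg_t(z):
    return -z[0]  # maximize t  <=>  minimize -t

def maximin_cons(z):
    # f(k_i) - t >= 0 for every design variable k_i
    ks = z[1:].reshape(n, d)
    return np.array([f(k) - z[0] for k in ks])

def mean_cons(z):
    # componentwise mean(k_1, ..., k_n) == m (a linear equality)
    ks = z[1:].reshape(n, d)
    return ks.mean(axis=0) - m

# Feasible starting point: all k_i at the mean, t = 0.
z0 = np.concatenate(([0.0], np.full(n * d, m)))
res = minimize(neg_t, z0, method='SLSQP',
               constraints=[{'type': 'ineq', 'fun': maximin_cons},
                            {'type': 'eq', 'fun': mean_cons}])
print(res.x[0])                    # the achieved maximin value t
print(res.x[1:].reshape(n, d))     # the design variables k_1, ..., k_n
```

Since min() never appears in the objective or constraints, SLSQP sees only smooth functions; for this toy f the problem is in fact linear, so the optimal t is forced by the mean constraint (here the f(k_i) must average to 10, so the best minimum is 10, attained when both are equal).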