I'm trying to minimize the following function:
import math

def func(x, *args):
    """ Objective function """
    return ((2*math.pi*2000*((x[0]/2)**2)) + (2**(x[0]/2)*(x[1])*1000))
with the constraints, bounds, and initial values:
guess = [4, 5]
d1 = 3
d2 = 10
h1 = 3
h2 = 10
v = 50
cons = ({'type': 'ineq', 'fun': lambda x: math.pi*x[1]*((x[0]/2)**2) - v})
bnds = ((d1, d2), (h1, h2))
and the optimization call:
scipy.optimize.minimize(func,guess, method='SLSQP',bounds=bnds,
constraints=cons)
but I keep getting no solution:
fun: 48281.04745868263
jac: array([ 25783.35449219, 2828.42675781])
message: 'Positive directional derivative for linesearch'
nfev: 12
nit: 7
njev: 3
status: 8
success: False
x: array([ 3. , 7.07344142])
Please help me.
This looks highly unstable from a numerical-optimization standpoint. It might work when tight bounds keep it in check, but fast-growing terms like 2^n are asking for trouble.
If I interpret your function correctly, you can divide it by 1000, which yields smaller values that the optimizer handles much better. It's basically a scaling of the objective.
Compare the result with your original objective:
# ((2*math.pi*2000*((x[0]/2)**2))+(2**(x[0]/2)*(x[1])*1000))
fun: 48258.32083419573
jac: array([ 25775.48237605, 2828.42712477])
message: 'Positive directional derivative for linesearch'
nfev: 44
nit: 10
njev: 6
status: 8
success: False
x: array([ 3. , 7.06540634])
with the scaled one:
# ((2*math.pi*2*((x[0]/2)**2))+(2**(x[0]/2)*(x[1])*1))
fun: 48.2813631259886
jac: array([ 25.78346395, 2.82842684])
message: 'Optimization terminated successfully.'
nfev: 12
nit: 3
njev: 3
status: 0
success: True
x: array([ 3. , 7.07355302])
If you need the original objective value, multiply the result back by 1000 as a postprocessing step!
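A minimal runnable sketch of the whole scaling approach, assuming the same constraint, bounds, and initial guess as in your question (the function name `func_scaled` is just an illustrative choice):

```python
import math
import scipy.optimize

def func_scaled(x):
    # Original objective divided by 1000: the factors 2000 and 1000
    # become 2 and 1, keeping values in a range SLSQP handles well
    return (2*math.pi*2*((x[0]/2)**2)) + (2**(x[0]/2)*x[1])

v = 50
# Volume constraint: pi * x1 * (x0/2)^2 >= v
cons = ({'type': 'ineq', 'fun': lambda x: math.pi*x[1]*((x[0]/2)**2) - v})
bnds = ((3, 10), (3, 10))
guess = [4, 5]

res = scipy.optimize.minimize(func_scaled, guess, method='SLSQP',
                              bounds=bnds, constraints=cons)

# Postprocessing: multiply back to recover the original objective value
original_value = res.fun * 1000
print(res.success, res.x, original_value)
```

With this scaling the solver terminates successfully at roughly x = [3, 7.07], matching the result shown above.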