I'm trying to optimize a function obtained with machine learning (polynomial regression), so I don't have any analytical expression for it. The function has 17 input parameters/independent variables (they are geometric parameters), and each parameter is limited to a specific range. The lists with these limits are shown below:
minimum_values = [4.03, 15.03, 15.06, 20.02, 90.03, 75.0, 12.01, 12.03, 23.04, 24.01, 21.0, 35.09, 24.01, 21.08, 18.03, 30.04, 66.01]
maximum_values = [11.98, 21.99, 22.99, 29.99, 99.99, 83.0, 21.98, 21.96, 33.0, 29.98, 26.98, 42.94, 30.0, 26.99, 25.92, 42.76, 81.95]
All the articles and guides I have found show only the simplest analytical functions with one or two independent variables. I tried several methods, but their examples only deal with scalar bounds, while I have to pass a whole vector of them. I see that I should use "constraints" or "bounds" from scipy.optimize, but I don't understand what to write in the constraint. So far, I've just written down:
x_initial = [10, 20, 20, 25, 95, 80, 20, 20, 30, 27, 25, 40, 25, 25, 20, 35, 70]
max_efficient = scipy.optimize.fmin(lambda x: -polynom_regression(data, x), x_initial, callback=cb, retall=True)
And it works, but of course the algorithm searches for the extremum in an unbounded space, so the parameter limits are ignored.
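From the documentation I think the limits need to be paired up per parameter, something like the sketch below (just my understanding of the shape that minimize's bounds argument expects, not code I have working):

# Sketch: pair each lower limit with its upper limit; scipy.optimize.minimize's
# `bounds` argument expects one (min, max) pair per element of x
bounds = list(zip(minimum_values, maximum_values))
# e.g. bounds[0] == (4.03, 11.98) for the first geometric parameter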
I would be grateful for any help!
I was able to solve this problem with only one method. The code from this question helped me: Does scipy's minimize function with method "COBYLA" accept bounds?. Below is the code:
import scipy.optimize

# Lower and upper limits for each of the 17 geometric parameters
minimum_values = [4.03, 15.03, 15.06, 20.02, 90.03, 75.0, 12.01, 12.03, 23.04, 24.01, 21.0, 35.09, 24.01, 21.08, 18.03, 30.04, 66.01]
maximum_values = [11.98, 21.99, 22.99, 29.99, 99.99, 83.0, 21.98, 21.96, 33.0, 29.98, 26.98, 42.94, 30.0, 26.99, 25.92, 42.76, 81.95]
x_initial = [10, 20, 20, 25, 95, 80, 20, 20, 30, 27, 25, 40, 25, 25, 20, 35, 70]

# Collect the limits as [lower, upper] pairs, one per parameter
bounds = []
for i in range(len(minimum_values)):
    bounds.append([minimum_values[i], maximum_values[i]])

# Turn every [lower, upper] pair into two inequality constraints,
# since COBYLA only supports constraints of type 'ineq'.
# The default arguments (lb=..., ub=..., i=...) freeze the current
# limit and index inside each lambda.
cons = []
for factor in range(len(bounds)):
    min_value, max_value = bounds[factor]
    minimum = {'type': 'ineq',
               'fun': lambda x, lb=min_value, i=factor: x[i] - lb}
    maximum = {'type': 'ineq',
               'fun': lambda x, ub=max_value, i=factor: ub - x[i]}
    cons.append(minimum)
    cons.append(maximum)

# Maximize the regression model by minimizing its negative
res = scipy.optimize.minimize(lambda x: -polynom_regression(data, x), x_initial, constraints=cons, method='COBYLA')
But so far I have only been able to solve this problem with the 'COBYLA' method; many of the other methods don't work with these constraints.
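For completeness, here is a rough sketch of how the same limits could be passed through the bounds argument instead of constraints (I haven't verified this with my regression model; polynom_regression and data are the same objects as above, and 'L-BFGS-B' is just one of the methods that accepts bounds, alongside e.g. 'TNC', 'SLSQP' and 'trust-constr'):

import scipy.optimize

# Same limits as above, packed into a Bounds object
bnds = scipy.optimize.Bounds(minimum_values, maximum_values)

# Maximize the model inside the box by minimizing its negative;
# 'L-BFGS-B' honours the `bounds` argument
res_bounded = scipy.optimize.minimize(
    lambda x: -polynom_regression(data, x),
    x_initial,
    method='L-BFGS-B',
    bounds=bnds,
)
print(res_bounded.x)     # parameters at the (local) maximum found
print(-res_bounded.fun)  # corresponding model value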