Tags: python, scipy, scipy-optimize, scipy-optimize-minimize

How to use minimize from scipy correctly with multiple variables?


I have five variables that I would like to pass to scipy.optimize.minimize in order to find the solution I am seeking in terms of A, B, C, D, and E. First, I imported minimize from scipy.optimize and defined the initial guesses (based on lab experiments, so they should be valid).

from scipy.optimize import minimize

A0 = 1.90
B0 = 6.40
C0 = 11.7
D0 = 3.70
E0 = 2.50

ABCDE0 = [A0, B0, C0, D0, E0]

Second, I defined the individual functions that make up the objective function, conveniently named Objective. I also tried to combine F, G, H, and I into one function without any luck, so I decided to leave it like this for the time being.

def F(abcde):
    a, b, c, d, e = abcde
    return c - (b ** 2) / (a - e)

def G(abcde):
    a, b, c, d, e = abcde
    return (4 * e * ((a - e) * c - b ** 2)) / (a * c - b ** 2)

def H(abcde):
    a, b, c, d, e = abcde
    return b / (2 * (a - e))

def I(abcde):
    a, b, c, d, e = abcde
    return (2 * e * b) / (a * c - b ** 2)

def Objective(abcde):
    return (F(abcde) / G(abcde)) / (H(abcde) / I(abcde))

Third, I defined the constraint (i.e. (F/G)/(H/I) = 1) and also the bounds, named bnds, as +/- 10% of the initial guesses for the sake of simplicity.

def constraint(x):
    F = x[0]
    G = x[1]
    H = x[2]
    I = x[3]
    return (F / G) / (H / I) - 1

con = {'type': 'eq', 'fun': constraint}

min_per = 0.9
max_per = 1.1
bnds = ((A0*min_per, A0*max_per), (B0*min_per, B0*max_per), 
        (C0*min_per, C0*max_per), (D0*min_per, D0*max_per), 
        (E0*min_per, E0*max_per))

Fourth and finally, minimize provided me with a solution termed sol.

sol = minimize(Objective, ABCDE0, method='SLSQP', bounds=bnds, constraints=con, options={'disp': True})

Printing sol with print(sol) produces the following output.

Positive directional derivative for linesearch    (Exit mode 8)
            Current function value: 1.0
            Iterations: 18
            Function evaluations: 188
            Gradient evaluations: 14
     fun: 1.0
     jac: array([ 0.00000000e+00,  1.49011612e-08, -7.45058060e-09,  0.00000000e+00,
        0.00000000e+00])
 message: 'Positive directional derivative for linesearch'
    nfev: 188
     nit: 18
    njev: 14
  status: 8
 success: False
       x: array([ 2.09      ,  5.76      , 10.53      ,  4.07      ,  2.50000277])

To my novice mind, constraint appears to be the problem, but I am not sure due to my lack of experience with minimize.

  • What am I doing wrong?
  • Why is it not executing successfully?
  • Is it better to use root_scalar as suggested by @fuglede in this scenario?
  • Is it possible to include all the individual functions into one function to avoid making a mess?

Please be aware that D0 and d are not included in any of the functions but are important in the grand scheme of things as one of the five independent variables.


Solution

  • Several things are going on here:

    Firstly, as a matter of taste, I would probably have written the functions F, ..., I as taking multiple inputs, to avoid having to unpack the list. The objective function does need to take a single list (or array) argument, though; i.e. you could do something like

    def F(a, b, c, d, e):
        return c - (b ** 2) / (a - e)
    
    ...
    
    def objective(abcde):
        return (F(*abcde) / G(*abcde)) / (H(*abcde) / I(*abcde))
    

    That's just style. More importantly, your constraint method won't quite do what you want: as I understand it, you want to evaluate the functions F, ..., I on the inputs, but that never happens; instead, the value of the variable F (an unfortunate name, as it shadows the function itself) ends up being simply a. What you want is something like

    def constraint(x):
        return (F(*x) / G(*x)) / (H(*x) / I(*x)) - 1
    

    Now, constraint(x) is nothing but objective(x) - 1, so your constraint ends up stating that your objective must be equal to 1 at a feasible solution. This means that there's really not much optimization going on at all: any feasible solution is optimal. While a priori minimize will indeed attempt to find a feasible solution for you, chances are that you'll have more luck using some of the root-finding capabilities of scipy.optimize, as what you're doing is trying to find the roots of the function objective - 1.
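
    For instance, a minimal sketch of that root-finding idea might use scipy.optimize.least_squares (one possible choice among several) to drive the residual objective(x) - 1 to zero within the same bounds:

    from scipy.optimize import least_squares

    # Sketch: look for a root of objective(x) - 1 inside the same
    # +/- 10% box constraints; least_squares takes bounds as a pair
    # (lower_bounds, upper_bounds) rather than per-variable tuples.
    lower = [lo for lo, hi in bnds]
    upper = [hi for lo, hi in bnds]
    res = least_squares(lambda x: objective(x) - 1, ABCDE0,
                        bounds=(lower, upper))
    print(res.x, res.cost)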

    Now finally, this actually turns out to be straightforward: your function objective is equal to 1 wherever it is defined, so any input is a feasible solution. First, by writing F = ((a-e)*c - b**2)/(a-e), note that F/G = (a*c - b**2)/(4*e*(a-e)). Then, (F/G)*I = (2*e*b)/(4*e*(a-e)) = b/(2*(a-e)) = H, so 1 = (F/G)*(I/H) = (F/G)/(H/I).
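
    A quick numerical sanity check of that identity, evaluating objective at a few random points inside the bounds, might look like this:

    import numpy as np

    # objective should equal 1 (up to floating-point error) at any
    # point where it is defined, e.g. random points inside bnds.
    rng = np.random.default_rng(0)
    for _ in range(5):
        x = [rng.uniform(lo, hi) for lo, hi in bnds]
        print(objective(x))  # ~1.0 every time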