Tags: python, python-2.7, scipy, mathematical-optimization

Using Scipy minimize (scipy.optimize.minimize) with a large equality constraint matrix


I need to minimize a function of, say, five variables (x[0] to x[4]).

The scalar function to be minimized is given by X'*H*X. The objective function would look similar to this:

def objfun(x):
    # x arrives from the optimizer as a flat array of length 5
    H = 0.1*np.ones([5, 5])
    f = np.dot(x, np.dot(H, x))   # scalar x'Hx
    return f

This returns a single scalar value.

The question is, how do I implement constraint equations given by:

A*X - B = 0

where A and B may be different on each run. A random example would be:

A = 
array([[ 1,  2,  3,  4,  5],
       [ 2,  1,  3,  4,  5],
       [-1,  2,  3,  0,  0],
       [ 0, -5,  6,  3,  2],
       [-3,  5,  6,  2,  8]])

B = 
array([[ 0],
       [ 2],
       [ 3],
       [-2],
       [-7]])

A and B cannot be hard-coded into a constraint function as they may be different in each run. There are no bounds on the variables and the optimization method need not be specified.
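
One way to handle that (a sketch of my own, not from the original post, assuming scipy.optimize.minimize with the SLSQP method, which accepts equality constraints directly; solve_qp is an illustrative name) is to build the constraint function from whatever A and b the current run provides, for example via a closure:

import numpy as np
from scipy.optimize import minimize

def solve_qp(H, A, b, x0):
    """Minimize x'Hx subject to A.dot(x) == b, for this run's A and b."""
    def objfun(x):
        return np.dot(x, np.dot(H, x))
    # The constraint closes over the A and b passed in, so nothing is
    # hard-coded; 'eq' tells SLSQP to drive A.dot(x) - b to zero.
    cons = [{'type': 'eq', 'fun': lambda x: np.dot(A, x) - b.ravel()}]
    return minimize(objfun, x0, method='SLSQP', constraints=cons)

Each call builds a fresh constraint from that run's A and b; b.ravel() accepts either a flat array or a column vector like B above.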

EDIT

I realized that with 5 independent constraint equations in 5 variables, the solution is already unique and can be found just by solving the equations; there is nothing left to optimize.
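
As a quick check of that claim (a sketch, not part of the original post; np.linalg.solve applies because the first A above is square and nonsingular):

import numpy as np

A = np.array([[ 1,  2,  3,  4,  5],
              [ 2,  1,  3,  4,  5],
              [-1,  2,  3,  0,  0],
              [ 0, -5,  6,  3,  2],
              [-3,  5,  6,  2,  8]], dtype=float)
B = np.array([0., 2., 3., -2., -7.])

# Five independent equations in five unknowns: the feasible set is a
# single point, so no optimizer is needed.
x = np.linalg.solve(A, B)

So how about a case where A may instead be defined as: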

A = 
array([[ 1,  2,  3,  4,  5],
       [ 2,  1,  3,  4,  5],
       [-1,  2,  3,  0,  0],
       [ 0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0]])


B = 
array([[ 0],
       [ 2],
       [ 3],
       [ 0],
       [ 0]])

So now we have a 5-variable optimization problem with only 3 effective linear constraints.
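
A runnable sketch of this underdetermined case (again assuming scipy.optimize.minimize with SLSQP; dropping the all-zero rows first is my own housekeeping, not something the question specifies):

import numpy as np
from scipy.optimize import minimize

A = np.array([[ 1,  2,  3,  4,  5],
              [ 2,  1,  3,  4,  5],
              [-1,  2,  3,  0,  0],
              [ 0,  0,  0,  0,  0],
              [ 0,  0,  0,  0,  0]], dtype=float)
b = np.array([0., 2., 3., 0., 0.])

# Keep only the rows that actually constrain x; the all-zero rows are
# trivially satisfied and only add redundancy.
keep = np.any(A != 0, axis=1)
A_red, b_red = A[keep], b[keep]

H = 0.1 * np.ones((5, 5))

def objfun(x):
    return np.dot(x, np.dot(H, x))

cons = [{'type': 'eq', 'fun': lambda x: np.dot(A_red, x) - b_red}]
res = minimize(objfun, np.zeros(5), method='SLSQP', constraints=cons)
print(res.x, res.fun)   # one minimizer on the 2-D feasible subspace

Because this H is singular (rank 1), the minimizer need not be unique; the solver returns one point satisfying the three constraints.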


Solution

  • You could try the scipy.optimize.fmin_cobyla function. I don't know the numerical details, so check it against values for which you know the expected answer, and play with the tolerance arguments rhobeg and rhoend to see whether it meets your needs. A sample program could look like this:

    import numpy as np
    import scipy.optimize
    
    A = \
    np.array([[ 1,  2,  3,  4,  5],
              [ 2,  1,  3,  4,  5],
              [-1,  2,  3,  0,  0],
              [ 0,  0,  0,  0,  0],
              [ 0,  0,  0,  0,  0]])
    
    B = \
    np.array([[0],
              [2],
              [3],
              [0],
              [0]])
    
    def objfun(x):
        H = 0.1*np.ones([5, 5])
        return np.dot(x, np.dot(H, x))   # scalar x'Hx
    
    def constr1(x):
        """ The constraint is satisfied when the return value is >= 0 """
        sol = np.dot(A, x)               # matrix-vector product, not A*x
        if np.allclose(sol, B.ravel()):
            return 0.01
        else:
            # Return the negative distance between the expected and the
            # actual value for a somewhat meaningful magnitude
            return -np.linalg.norm(B.ravel() - sol)
    
    scipy.optimize.fmin_cobyla(objfun, [0.0, 0.0, 0.0, 0.0, 0.0], [constr1])
    # np.linalg.solve(A, B) is not an option here: this A is singular
    

    Please note that np.linalg.solve cannot be used here, since this A is singular; the three non-trivial equations are underdetermined, so there are infinitely many feasible points. I am not completely sure that the constraint function is properly defined, so try to find something that works well for you. For better results, try to provide an initial guess that is actually feasible instead of [0.0, 0.0, 0.0, 0.0, 0.0].

    Check the official documentation for more details: http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_cobyla.html#scipy.optimize.fmin_cobyla

    Edit: Also, depending on what kind of solution you are looking for, you could probably form a better constraint function, perhaps accepting values within a certain tolerance of the expected solution even if not completely exact, and returning a value that grows above 0 as x gets closer to feasibility instead of the constant 0.01; see the sketch below.
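
    For instance, a tolerance-based variant could look like this (a sketch; make_constraint and tol are illustrative names/values of my own, with A and B as in the program above):

    import numpy as np

    def make_constraint(A, B, tol=1e-6):
        """Build a COBYLA-style constraint: the return value is >= 0
        exactly when A.dot(x) lies within tol of B in the 2-norm, and
        it grows as x approaches feasibility."""
        def constr(x):
            return tol - np.linalg.norm(np.dot(A, x) - B.ravel())
        return constr

    # usage: scipy.optimize.fmin_cobyla(objfun, x0, [make_constraint(A, B)])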