python, optimization, scipy, linear-programming

How to solve linear optimization problems with a 1D-array constraint in scipy?


I have an optimization problem where one of the constraints is:
(weighted sum of x_i) = constant.

Here is an example:

import numpy as np

# Objective: minimize ones @ x
# weights: a given 1D array of length n
ones = np.ones_like(weights)
coefficient = 1/2 * len(weights) * np.prod(weights)
# constraint I want: weights @ x == coefficient

scipy.optimize.linprog requires a 2D array as the coefficient matrix of the equality constraint (A_eq) and a 1D array for the right-hand side (b_eq). But in my case the coefficients are a 1D array (which could easily be turned into a diagonal matrix).
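
For reference, here is a small toy call with made-up numbers showing the shapes linprog expects: A_eq is 2D with one row per equality constraint, and b_eq is 1D with one entry per row.

import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 1.0, 1.0])       # minimize x0 + x1 + x2
A_eq = np.array([[1.0, 2.0, 3.0],   # row 1:  x0 + 2*x1 + 3*x2 == 6
                 [1.0, 0.0, 0.0]])  # row 2:  x0 == 1
b_eq = np.array([6.0, 1.0])         # one right-hand-side value per row

res = linprog(c, A_eq=A_eq, b_eq=b_eq)  # default bounds: x >= 0
print(res.x)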

The right-hand side is also a single scalar that can't be expanded into a vector: there are many different solutions where the weighted sum is the same but the individual values x_i differ, so I can't simply constrain each x_i on its own.

How can I solve a problem with such a constraint?


Solution

  • I got it. You can create a 2D array for the coefficients and a 1D array for the right-hand side like this:

    b_eq = np.zeros_like(weights, dtype=np.float64)
    b_eq[0] = coefficient   # only the first entry is non-zero

    A_eq_tmp = np.zeros((len(weights), len(weights)))
    A_eq_tmp[0] = weights   # only the first row carries the real constraint

    So basically you fill the entire matrix with zeros and do the same for the right-hand side. Then you put your weights into one row of the matrix and the corresponding value into the right-hand side; the remaining all-zero rows just read 0 == 0, so they are satisfied trivially.
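
    For completeness, here is a hedged end-to-end sketch of this approach with made-up weights; linprog's default bounds of x >= 0 are assumed, which keeps the minimization bounded.

    import numpy as np
    from scipy.optimize import linprog

    weights = np.array([0.2, 0.3, 0.5])                   # hypothetical example data
    coefficient = 1/2 * len(weights) * np.prod(weights)   # right-hand side from the question
    ones = np.ones_like(weights)                          # objective: minimize ones @ x

    # Zero-padded equality constraint: only the first row/entry is non-trivial.
    b_eq = np.zeros_like(weights, dtype=np.float64)
    b_eq[0] = coefficient
    A_eq_tmp = np.zeros((len(weights), len(weights)))
    A_eq_tmp[0] = weights

    res = linprog(c=ones, A_eq=A_eq_tmp, b_eq=b_eq)  # default bounds: x >= 0
    print(res.x, weights @ res.x)  # should match coefficient up to solver tolerance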