
Using scipy linprog


Let's say I have the following problem:

  1. objective function c1x1 + c2x2 (we need to minimize it)
  2. -x1 + x2 <= 0
  3. 0 <= x1 <= 3
  4. 0 <= x2 <= 2

We also assume that c1 = 1 and c2 = -0.5, and that another constraint has been added: c1x1 + c2x2 = P, where P = -1, 0, 1.

What is the right way to use linprog to solve this problem? Does bounds refer to the lower bounds?

Thanks for helping.


Solution

  • Ignoring the issue about the impossible equality constraint mentioned in the comment, we only have to refer to the documentation:

    A_ub @ x <= b_ub
    A_eq @ x == b_eq
    lb <= x <= ub
    

    Here, lb is 0 by default, so you don't have to bother with the lower bounds at all, and your problem boils down to c = [1, -0.5], A_ub = [[-1, 1], [1, 0], [0, 1]], and b_ub = [0, 3, 2]:

    In [57]: linprog(c, A_ub, b_ub)                                                                          
    Out[57]: 
         con: array([], dtype=float64)
         fun: 9.37460773860971e-11
     message: 'Optimization terminated successfully.'
         nit: 4
       slack: array([7.91922932e-11, 3.00000000e+00, 2.00000000e+00])
      status: 0
     success: True
           x: array([1.08299862e-10, 2.91075683e-11])
    
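    To answer the question about bounds: the bounds argument takes one (lower, upper) pair per variable, so the two box constraints can be passed there instead of as extra rows of A_ub. A minimal sketch of this equivalent formulation (the exact result fields may differ between scipy versions, but the optimum is the same):

    ```python
    from scipy.optimize import linprog

    c = [1, -0.5]
    A_ub = [[-1, 1]]        # -x1 + x2 <= 0
    b_ub = [0]
    bounds = [(0, 3),       # 0 <= x1 <= 3
              (0, 2)]       # 0 <= x2 <= 2

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(res.x, res.fun)   # optimum at x1 = x2 = 0
    ```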

    That is, up to numerical noise, the optimal solution is x1 = x2 = 0 (which is also fairly obvious: the only way to get a negative objective would be to let x2 be positive, but your first constraint would then require that x1 be at least as large as x2, which in turn causes your objective to become positive).
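
    As for the extra constraint c1x1 + c2x2 = P: equality constraints go into the A_eq/b_eq arguments. A sketch trying each of the three P values (note that x2 <= x1 forces x1 - 0.5*x2 >= 0.5*x1 >= 0, so P = -1 is the infeasible case mentioned above):

    ```python
    from scipy.optimize import linprog

    c = [1, -0.5]
    A_ub = [[-1, 1]]            # -x1 + x2 <= 0
    b_ub = [0]
    bounds = [(0, 3), (0, 2)]   # box constraints on x1, x2

    results = {}
    for P in (-1, 0, 1):
        # Extra equality constraint: 1*x1 - 0.5*x2 == P
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      A_eq=[[1, -0.5]], b_eq=[P], bounds=bounds)
        results[P] = res.success
        print(P, res.success)
    ```

    P = 0 and P = 1 solve successfully, while P = -1 is reported as infeasible.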