Tags: optimization, scipy, cvxpy

Is there a wrapper library for solving optimisation problems by declaring known and unknown variables?


cvxpy has a very neat way to write out an optimisation problem without worrying too much about converting it into a "standard" matrix form, as that conversion is done internally. Best to explain with an example:

import cvxpy as cp


def cvxpy_implementation():
    # declare the unknowns by name
    var1 = cp.Variable()
    var2 = cp.Variable()

    # constraints are written exactly as you would on paper
    constraints = [
        var1 <= 3,
        var2 >= 2,
    ]

    obj_fun = cp.Minimize(var1**2 + var2**2)

    problem = cp.Problem(obj_fun, constraints)
    problem.solve()

    return var1.value, var2.value


import numpy as np
from scipy.optimize import LinearConstraint, minimize


def scipy_implementation1():
    # bounds written as lb <= A @ x <= ub, with infs wherever a side is unbounded
    A = np.diag(np.ones(2))
    lb = np.array([-np.inf, 2])
    ub = np.array([3, np.inf])

    con = LinearConstraint(A, lb, ub)

    def obj_fun(x):
        return (x**2).sum()

    result = minimize(obj_fun, [0, 0], constraints=con)
    return result.x

def scipy_implementation2():
    # each 'ineq' constraint means fun(x) >= 0, and I have to remember
    # that x[0] is the first variable and x[1] the second
    con = [
        {'type': 'ineq', 'fun': lambda x: 3 - x[0]},
        {'type': 'ineq', 'fun': lambda x: x[1] - 2},
    ]

    def obj_fun(x):
        return (x**2).sum()

    result = minimize(obj_fun, [0, 0], constraints=con)
    return result.x

All of the above give the correct result, but the cvxpy implementation is much "easier" to write out. Specifically, I don't have to worry about the direction of the inequalities, and I can give the variables useful names when writing out the constraints. Compare that to the scipy1 and scipy2 implementations: in the first case I have to write out the extra infs, and in the second case I have to remember which index corresponds to which variable. You can imagine a case with 100 variables where, even though concatenating them into a single vector will ultimately need to be done, I'd like to be able to write the problem out like in cvxpy. A small sketch of the difference follows below.
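To make the comparison concrete, here is a small sketch of my own (the third variable, its bound, and the names are made up, not part of the implementations above) showing what adding one more variable costs in each style:

import cvxpy as cp
import numpy as np
from scipy.optimize import LinearConstraint, minimize

# cvxpy: just declare the new variable and state its constraint by name
var1, var2, var3 = cp.Variable(), cp.Variable(), cp.Variable()
constraints = [var1 <= 3, var2 >= 2, var3 >= -1]
cp.Problem(cp.Minimize(var1**2 + var2**2 + var3**2), constraints).solve()

# scipy: extend the bound arrays and remember that index 2 now means var3
A = np.eye(3)
lb = np.array([-np.inf, 2, -1])
ub = np.array([3, np.inf, np.inf])
result = minimize(lambda x: (x**2).sum(), np.zeros(3),
                  constraints=LinearConstraint(A, lb, ub))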

Question: Has anyone implemented this for scipy? Or is there an alternative library that could make this work?

Thank you


Solution

  • I wrote something up that does this and seems to cover the main issues I had in mind.

    The general idea is that you define variables, build a simple expression as you would normally write it out, and then the solver class optimises over the defined variables.

    https://github.com/evan54/optimisation/blob/master/var.py

    The example below illustrates a simple use case

    import numpy as np
    from var import Variable, Problem  # assuming the linked var.py is importable as `var`

    # fake data
    a = 2
    m = 3
    x = np.linspace(0, 10)
    y = a * x + m + np.random.randn(len(x))

    # unknowns to fit
    a_ = Variable()
    m_ = Variable()
    y_ = a_ * x + m_
    error = y_ - y

    prob = Problem((error**2).sum(), None)
    prob.minimize()

    print(f'a = {a}, a_ = {a_}')
    print(f'm = {m}, m_ = {m_}')
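
    In case it helps to see how such a wrapper can sit on top of scipy, below is a minimal self-contained sketch of the same idea (my own illustration, not the code in the linked var.py): each Variable lazily reads its own slot of a flat parameter vector, expressions record how to evaluate themselves against that vector, and Problem hands the flattened objective to scipy.optimize.minimize.

    import numpy as np
    from scipy.optimize import minimize


    class Expr:
        """An expression evaluated lazily against the flat vector of all variable values."""

        def __init__(self, fun):
            self._fun = fun

        def evaluate(self, vec):
            return self._fun(vec)

        def _combine(self, other, op):
            # constants (numbers / numpy arrays) are wrapped so everything evaluates uniformly
            other_eval = other.evaluate if isinstance(other, Expr) else (lambda v, o=other: o)
            return Expr(lambda v: op(self.evaluate(v), other_eval(v)))

        # keep the Expr on the left when mixing with numpy arrays in this sketch
        def __add__(self, other): return self._combine(other, lambda a, b: a + b)
        def __sub__(self, other): return self._combine(other, lambda a, b: a - b)
        def __rsub__(self, other): return self._combine(other, lambda a, b: b - a)
        def __mul__(self, other): return self._combine(other, lambda a, b: a * b)
        def __pow__(self, p): return Expr(lambda v: self.evaluate(v) ** p)

        def sum(self):
            return Expr(lambda v: np.sum(self.evaluate(v)))


    class Variable(Expr):
        """A scalar unknown that reads its own slot of the flat parameter vector."""

        _registry = []  # every Variable created so far (module-level state, fine for a sketch)

        def __init__(self):
            self.index = len(Variable._registry)
            Variable._registry.append(self)
            super().__init__(lambda v: v[self.index])
            self.value = None


    class Problem:
        """Minimise an Expr objective over all Variables created so far via scipy."""

        def __init__(self, objective, constraints=None):
            self.objective = objective
            self.constraints = constraints  # ignored in this sketch

        def minimize(self):
            n = len(Variable._registry)
            res = minimize(lambda v: self.objective.evaluate(v), np.zeros(n))
            for var in Variable._registry:
                var.value = res.x[var.index]
            return res


    # the same straight-line fit as above, expressed with the sketch classes
    x = np.linspace(0, 10)
    y = 2 * x + 3 + np.random.randn(len(x))

    a_ = Variable()
    m_ = Variable()
    error = a_ * x + m_ - y

    Problem((error**2).sum()).minimize()
    print(a_.value, m_.value)  # should come out close to 2 and 3

    The sketch only shows the flattening trick that makes the cvxpy-style syntax possible on top of scipy; see the linked var.py for the fuller implementation.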