
SciPy nonlinear constraints for monotonically increasing values


I'm having trouble with global optimisation for a function that has specific constraints on the values. Let me explain the situation.

We have a function F(x1, x2, ..., xn, a, b), and our goal is to find its global minimum. We also have a constraint from physics: x[i] <= x[i+1] for all i. All of the x[i], as well as a and b, are bounded to (0, 1).

How can I set up the correct nonlinear constraint for x in SciPy?

I've already tried this option:

import numpy as np
from scipy.optimize import NonlinearConstraint, differential_evolution

def con(x):
    out = np.ones(N)

    for i in range(N-1):
        if x[i] > x[i + 1]:
            out[i:] = -1
            return out
        else:
            out[i] = 1

    if x[N - 1] < x[N - 2]: 
        out[N - 1] = -1
    else:
        out[N - 1] = 1

    return out

const = NonlinearConstraint(con, 0, 1)

bounds = [(0, 1) for i in range(N+2)]

solution = differential_evolution(f, bounds, workers=-1, constraints=const)

But it doesn't work. Some values in x are outside the bounds (< 0, how???) and it ignores the constraint x[i]<=x[i+1]. I think I completely misunderstand how the NonlinearConstraint works. I would appreciate any help!


Solution

  • NonlinearConstraint expects a function that returns a single value or an array of values to be kept within the given bounds. Here, con should return the array of differences x[i+1] - x[i]: a non-negative difference means the constraint is satisfied, a negative one means it is violated. Returning hand-crafted ±1 flags gives the solver no information about how badly a constraint is broken.
  • You are currently packing multiple inequalities into a single NonlinearConstraint with bounds (0, 1). Instead, either return all the differences from one vectorized constraint, or define each constraint individually, which lets the solver handle them more effectively.
  • differential_evolution is a global optimization method, and it might not be the most suitable choice given these constraints. Consider a local solver via scipy.optimize.minimize with trust-constr or SLSQP, which are better suited to handling inequality constraints.
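
    As a small sketch of the first point, np.diff produces exactly the array of differences x[i+1] - x[i], so one vectorized constraint can cover all of the ordering inequalities at once (N here is an example value, and the vector layout [x1, ..., xN, a, b] follows the question):

        import numpy as np
        from scipy.optimize import NonlinearConstraint

        N = 5  # example number of x variables

        # np.diff(v[:N]) returns [x[1]-x[0], ..., x[N-1]-x[N-2]];
        # requiring every entry to lie in [0, inf) enforces x[i] <= x[i+1].
        ordering = NonlinearConstraint(lambda v: np.diff(v[:N]), 0, np.inf)

        # A sorted x part yields only non-negative differences ...
        sorted_v = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.9])
        # ... while an out-of-order x part yields at least one negative entry.
        unsorted_v = np.array([0.1, 0.5, 0.3, 0.4, 0.5, 0.7, 0.9])

    The solver then sees by how much each inequality is violated, rather than a flat ±1 flag.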

    from scipy.optimize import minimize, NonlinearConstraint
    import numpy as np
    
    N = ...  # Number of x variables
    
    def objective(x):
        # ... your objective function F(x1, x2, ..., xn, a, b)
        pass
    
    # Individual constraints (one for each x[i] <= x[i+1])
    constraints = [NonlinearConstraint(lambda x, i=i: x[i+1] - x[i], 0, np.inf) for i in range(N-1)]
    
    # Bounds for x and other parameters (a, b)
    bounds = [(0, 1) for _ in range(N)] + [(0, 1), (0, 1)]  # Assume a and b are the last two elements
    
    # Initial guess for x, a, and b
    x0 = np.linspace(0, 1, N)  # Or a better guess if you have one
    x0 = np.append(x0, [0.5, 0.5])  # Initial values for a and b
    
    result = minimize(objective, x0, bounds=bounds, constraints=constraints, method='SLSQP') 
    # or method='trust-constr'
    
    # Optimal solution
    if result.success:
        optimal_x = result.x[:N]
        optimal_params = result.x[N:]  # a and b
        print("Optimal x:", optimal_x)
        print("Optimal parameters (a, b):", optimal_params)
    else:
        print("Optimization failed:", result.message)
    
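One further observation, beyond the original answer: each inequality x[i+1] - x[i] >= 0 is linear in the decision vector, so the whole feasible set can also be expressed as a single LinearConstraint with a banded matrix. This form is accepted by differential_evolution too, so you can stay with a global method if you prefer. A minimal sketch, with an example N and a toy objective standing in for F:

    import numpy as np
    from scipy.optimize import LinearConstraint, differential_evolution

    N = 4  # example number of x variables; a and b are the last two entries

    # Row i of A encodes x[i+1] - x[i]: a -1 at column i and a +1 at column i+1.
    A = np.zeros((N - 1, N + 2))
    for i in range(N - 1):
        A[i, i] = -1.0
        A[i, i + 1] = 1.0
    ordering = LinearConstraint(A, 0, np.inf)

    def f(v):
        # Toy stand-in for F(x1, ..., xn, a, b); its minimum has all x equal to a.
        x, a, b = v[:N], v[N], v[N + 1]
        return np.sum((x - a) ** 2) + (b - 0.5) ** 2

    bounds = [(0, 1)] * (N + 2)
    result = differential_evolution(f, bounds, constraints=(ordering,), seed=0)

With the constraint expressed this way, the x part of result.x comes out nondecreasing (up to solver tolerance), and no hand-written feasibility flags are needed.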

    I hope it works!