To solve my nonlinear program, I first solve the zero-cost version of the problem (i.e. a feasibility problem) and use the result as an initial guess for the full optimization problem.
The feasibility problem has many constraints, among which are constraints of the form x * y = slack, where x, y, and slack are all decision variables (I use this to represent a relaxed complementarity constraint). The optimization problem is just the feasibility problem with a quadratic cost on the slack added, warm-started from the feasibility solution.
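In case the structure helps, here is a minimal sketch of the two-stage setup on a toy problem. The variables and constraints are hypothetical, and scipy's SLSQP (an SQP method) stands in for my actual solver; it is only meant to illustrate the structure, not reproduce my program.

```python
# Minimal sketch of the two-stage setup on a toy problem (hypothetical
# variables/constraints; scipy's SLSQP stands in for the actual solver).
import numpy as np
from scipy.optimize import minimize

# Decision vector z = [x, y, slack].
constraints = [
    # Relaxed complementarity: x * y = slack.
    {"type": "eq", "fun": lambda z: z[0] * z[1] - z[2]},
    # Some other feasibility constraint, e.g. x + y = 2.
    {"type": "eq", "fun": lambda z: z[0] + z[1] - 2.0},
]
bounds = [(0.0, None)] * 3  # x, y, slack >= 0

# Stage 1: feasibility problem (zero cost).
feas = minimize(lambda z: 0.0, x0=np.ones(3),
                bounds=bounds, constraints=constraints, method="SLSQP")

# Stage 2: same constraints, quadratic cost on the slack,
# warm-started from the feasibility solution.
opt = minimize(lambda z: z[2] ** 2, x0=feas.x,
               bounds=bounds, constraints=constraints, method="SLSQP")
print(feas.x, opt.x, opt.message)
```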
The curious thing is that even with a feasible guess as the starting point, the solver still reports the optimization problem as infeasible. Why is that the case? Shouldn't the program at least return the initial feasible guess as a solution?
The full program is too long and complicated to include in this question, and I have yet to distill it into a minimally reproducible example. Perhaps the answer to this question will point me in the right direction to investigate. Thanks in advance!
You used the SNOPT solver for this problem, and SNOPT uses a sequential quadratic programming (SQP) approach, which does not guarantee that the iterate in every iteration is feasible, even if the initial guess is.
You could refer to the SNOPT paper https://web.stanford.edu/group/SOL/papers/SNOPT-SIGEST.pdf for more details. What SNOPT does is define a "merit function", which combines the cost and the constraint violation (written as an augmented Lagrangian of the constrained optimization problem), and in each iteration SNOPT attempts to reduce this merit function. So it can increase the constraint violation, as long as the decrease in the cost compensates for the increase in violation.
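As a hypothetical illustration (made-up numbers, and a simplified l1-type merit rather than SNOPT's exact merit function): a step can lower the merit value even while moving away from feasibility, which is why a feasible start does not pin the iterates to the feasible set.

```python
# Hypothetical illustration (made-up numbers): a simplified l1-type merit
# phi = cost + mu * |constraint violation| can decrease across an SQP step
# even though the step moves away from feasibility.
def merit(cost, violation, mu=1.0):
    return cost + mu * abs(violation)

# Feasible initial guess: zero violation, but a large slack cost.
print(merit(cost=4.0, violation=0.0))   # 4.0

# Candidate next iterate: much lower cost, slightly violated constraint.
print(merit(cost=0.5, violation=0.2))   # 0.7 -> accepted despite infeasibility
```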