Convex nonlinear programming


Convex nonlinear programming refers to optimization problems of the form:

 minimize $f(\mathbf{x})$ with respect to $\mathbf{x}$ subject to $\mathbf{g}(\mathbf{x}) \leq \mathbf{0}$, $\mathbf{h}(\mathbf{x}) = \mathbf{0}$, $\mathbf{x} \in \mathbb{R}^n$

where $f(\mathbf{x})$ and $\mathbf{g}(\mathbf{x})$ are convex functions of the vector $\mathbf{x}$, $\mathbf{h}(\mathbf{x})$ is an affine function of $\mathbf{x}$, and $n$ is a positive integer.
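A small problem of this form can be solved with an SQP-type general-purpose solver. The sketch below uses SciPy's SLSQP method on a hypothetical example (the objective, constraints, and starting point are illustrative, not from the article); note that SciPy expects inequality constraints in the form $g(\mathbf{x}) \geq 0$, so the sign is flipped.

```python
# Hypothetical convex NLP: minimize f(x) = x1^2 + x2^2
# subject to g(x) = 1 - x1 - x2 <= 0 (convex inequality)
# and h(x) = x1 - x2 = 0 (affine equality).
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2          # convex objective
g = lambda x: 1.0 - x[0] - x[1]          # convex inequality, g(x) <= 0
h = lambda x: x[0] - x[1]                # affine equality, h(x) = 0

# SciPy's 'ineq' convention is fun(x) >= 0, so pass -g.
constraints = [
    {"type": "ineq", "fun": lambda x: -g(x)},
    {"type": "eq", "fun": h},
]

result = minimize(f, x0=np.array([2.0, 0.0]), method="SLSQP",
                  constraints=constraints)
print(result.x)  # optimum at (0.5, 0.5)
```

Because the objective and inequality constraint are convex and the equality constraint is affine, any local optimum found by the solver is also the global optimum.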

Software

• MINOS
• Author: Systems Optimization Laboratory at Stanford University
• Method:
• LP: Primal simplex method
• Linearly constrained nonlinear programs: reduced gradient method combined with a quasi-Newton algorithm
• Nonlinear objective + nonlinear constraints: projected Lagrangian algorithm: a series of linearly constrained subproblems (generated by linearizing the nonlinear constraints) is solved by the reduced gradient method, with an augmented Lagrangian penalty term in the outer loop
• CONOPT
• Author: ARKI Consulting and Development in Denmark
• Method: A GRG-based (generalized reduced gradient) algorithm with several enhancements, including customized LP updating methods and SQP-like iterations that are employed depending on the degree of nonlinearity of the model. There are currently three versions: the old CONOPT1, CONOPT2, and CONOPT3, which has several new features (e.g., an inner-loop SQP for highly nonlinear models) that enable the solution of larger NLPs and faster convergence.
• SNOPT
• Author: Philip Gill, University of California at San Diego and Walter Murray and Michael Saunders, Stanford University
• Method: SQP algorithm that obtains search directions from a sequence of quadratic programming subproblems. Each QP minimizes a quadratic model of a certain Lagrangian function subject to a linearization of the constraints. An augmented Lagrangian merit function is reduced along each search direction to ensure convergence from any starting point.
• Comparison
• CONOPT, MINOS, and SNOPT behave differently on most models: while CONOPT is superior for some models, MINOS or SNOPT will be superior for others. CONOPT is well suited for models with very nonlinear constraints and for models where feasibility is difficult to achieve. If MINOS has trouble maintaining feasibility during the optimization, try CONOPT. On the other hand, if a model has few nonlinearities outside the objective function, then either MINOS or SNOPT may be the best solver. SNOPT is most efficient if only some of the variables enter nonlinearly, or if the number of active constraints (including simple bounds) is nearly as large as the number of variables. Further details can be found in Relative Merits of MINOS and CONOPT.
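The core step of an SQP method like the one described for SNOPT is solving a QP subproblem built from a quadratic model of the Lagrangian and a linearization of the constraints. For an equality-constrained problem this reduces to a KKT linear system. The sketch below shows that iteration on a hypothetical toy problem (illustrative only; real SQP codes such as SNOPT add inequality handling, quasi-Newton Hessian updates, and a merit-function line search):

```python
# Minimal SQP / Newton-KKT iteration sketch (not SNOPT's implementation):
# minimize f(x) = x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0.
import numpy as np

def grad_f(x):
    return 2.0 * x                        # gradient of the objective

def hess_L(x, lam):
    return 2.0 * np.eye(2)                # Hessian of the Lagrangian

def h(x):
    return np.array([x[0] + x[1] - 1.0])  # affine equality constraint

A = np.array([[1.0, 1.0]])                # constraint Jacobian

x = np.array([3.0, -1.0])                 # arbitrary starting point
lam = np.zeros(1)                         # Lagrange multiplier estimate
for _ in range(5):
    H = hess_L(x, lam)
    # KKT system of the QP subproblem: minimize a quadratic model of the
    # Lagrangian subject to the linearized constraint A p + h(x) = 0.
    KKT = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([-grad_f(x), -h(x)])
    sol = np.linalg.solve(KKT, rhs)
    p, lam = sol[:2], sol[2:]
    x = x + p   # full step; a merit-function line search is omitted here

print(x)  # converges to (0.5, 0.5)
```

Because this toy problem is itself a QP, the iteration converges in a single step; on general nonlinear constraints the same subproblem is re-solved at each iterate.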