# Convex nonlinear programming


Convex nonlinear programming refers to optimization problems of the form:

$$
\begin{aligned}
\min_{\mathbf{x}\in\mathbb{R}^n}\quad & f(\mathbf{x})\\
\text{subject to}\quad & \mathbf{g}(\mathbf{x}) \leq \mathbf{0}\\
& \mathbf{h}(\mathbf{x}) = \mathbf{0}
\end{aligned}
$$

where $f(\mathbf{x})$ and each component of $\mathbf{g}(\mathbf{x})$ are convex functions of the vector $\mathbf{x}$, $\mathbf{h}(\mathbf{x})$ is an affine function of $\mathbf{x}$, and $n$ is a positive integer. Because the objective is convex and the feasible region is a convex set, every local minimum of such a problem is also a global minimum.
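A small problem in this form can be solved with SciPy's SLSQP routine (itself an SQP method). The toy objective, constraints, and starting point below are assumptions chosen for illustration, not part of the text:

```python
# Illustrative convex NLP:
#   minimize    f(x) = x0^2 + x1^2          (convex objective)
#   subject to  g(x) = 0.7 - x0 <= 0        (convex inequality)
#               h(x) = x0 + x1 - 1 = 0      (affine equality)
from scipy.optimize import minimize

f = lambda x: x[0] ** 2 + x[1] ** 2
constraints = [
    # SciPy expects inequalities as fun(x) >= 0, so g(x) <= 0 becomes -g(x) >= 0.
    {"type": "ineq", "fun": lambda x: x[0] - 0.7},
    {"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0},
]
res = minimize(f, x0=[0.0, 0.0], method="SLSQP", constraints=constraints)
```

Without the inequality the minimizer on the line $x_0 + x_1 = 1$ would be $(0.5, 0.5)$; the bound $x_0 \geq 0.7$ is active, so the solution moves to $(0.7, 0.3)$.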

# Software

• MINOS
  • Author: Systems Optimization Laboratory, Stanford University
  • Methods:
    • LP: primal simplex method
    • Linearly constrained nonlinear programs: reduced-gradient method combined with a quasi-Newton algorithm
    • Nonlinear objective + nonlinear constraints: projected Lagrangian algorithm, i.e. a series of linearly constrained subproblems (generated by linearizing the nonlinear constraints), each solved by the reduced-gradient method, with an augmented Lagrangian penalty term in the outer loop
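The outer-loop idea can be sketched with a plain augmented Lagrangian method with first-order multiplier updates. This is a deliberate simplification, not MINOS's projected Lagrangian: the inner minimization here is unconstrained gradient descent rather than a reduced-gradient solve of a linearly constrained subproblem, and the toy problem, penalty parameter, and step sizes are all assumptions:

```python
# Simplified augmented-Lagrangian sketch (not MINOS itself):
#   minimize    f(x) = x0^2 + x1^2
#   subject to  c(x) = x0*x1 - 0.25 = 0     (nonlinear equality)
# Inner loop: gradient descent on  L_A(x) = f(x) + lam*c(x) + (rho/2)*c(x)^2.
def solve(rho=10.0, lam=0.0, x=(1.0, 1.0)):
    x0, x1 = x
    for outer in range(30):
        for inner in range(2000):
            c = x0 * x1 - 0.25
            w = lam + rho * c                # current multiplier estimate
            g0 = 2 * x0 + w * x1             # dL_A/dx0
            g1 = 2 * x1 + w * x0             # dL_A/dx1
            x0 -= 0.02 * g0
            x1 -= 0.02 * g1
        lam += rho * (x0 * x1 - 0.25)        # first-order multiplier update
    return x0, x1

x0, x1 = solve()
```

As the multiplier estimate converges (here to $\lambda = -2$), the constraint violation is driven to zero and the iterates approach the minimizer $(0.5, 0.5)$.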
• CONOPT
  • Author: ARKI Consulting and Development, Denmark
  • Method: a GRG-based (generalized reduced gradient) algorithm with several enhancements, including customized LP updating methods and SQP-like iterations that are employed depending on the degree of nonlinearity of the model. There are currently three versions: the old CONOPT1, CONOPT2, and CONOPT3; the latter has several new features (e.g., an inner-loop SQP for highly nonlinear models) that enable solution of larger NLPs and faster convergence.
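The core reduced-gradient idea can be shown on a toy linearly constrained problem (the problem and the basic/nonbasic partition are assumptions): use the constraint to eliminate a "basic" variable, then descend on the gradient of the resulting reduced objective in the remaining "nonbasic" variable.

```python
# Reduced-gradient sketch:  minimize x0^2 + 2*x1^2  s.t.  x0 + x1 = 1.
# Choose x1 as the basic variable, so x1 = 1 - x0, and the reduced objective is
#   phi(x0) = x0^2 + 2*(1 - x0)^2,  with  phi'(x0) = 2*x0 - 4*(1 - x0) = 6*x0 - 4.
x0 = 0.0
for _ in range(200):
    x0 -= 0.1 * (6 * x0 - 4)   # descend on the reduced gradient
x1 = 1 - x0                    # recover the basic variable from the constraint
```

The iteration converges to $x_0 = 2/3$, $x_1 = 1/3$; every iterate satisfies the constraint exactly, which is the appeal of reduced-gradient methods.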
• SNOPT
  • Authors: Philip Gill (University of California, San Diego), and Walter Murray and Michael Saunders (Stanford University)
  • Method: an SQP (sequential quadratic programming) algorithm that obtains search directions from a sequence of quadratic programming subproblems. Each QP minimizes a quadratic model of a certain Lagrangian function subject to a linearization of the constraints. An augmented Lagrangian merit function is reduced along each search direction to ensure convergence from any starting point.
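For an equality-constrained problem, each QP subproblem reduces to a linear KKT system. The sketch below shows one such step on an assumed toy problem; it uses the exact Hessian and takes a full step, omitting the quasi-Newton updates and merit-function line search that a production SQP code like SNOPT would use:

```python
# One SQP step for  minimize f(x) = x0^2 + x1^2  s.t.  h(x) = x0 + x1 - 1 = 0.
# The QP subproblem  min  0.5 p^T H p + grad_f^T p   s.t.  A p = -h(x)
# is solved via its KKT system  [[H, A^T], [A, 0]] [p; mu] = [-grad_f; -h].
import numpy as np

def sqp_step(x):
    grad_f = np.array([2 * x[0], 2 * x[1]])     # gradient of f
    H = np.array([[2.0, 0.0], [0.0, 2.0]])      # exact Hessian of the Lagrangian
    A = np.array([[1.0, 1.0]])                  # Jacobian of h
    h = x[0] + x[1] - 1.0
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([-grad_f, [-h]])
    sol = np.linalg.solve(K, rhs)               # [p0, p1, mu]
    return x + sol[:2]                          # full step (no line search here)

x = sqp_step(np.array([3.0, -1.0]))
```

Because this objective is quadratic and the constraint is affine, the QP model is exact and a single step lands on the minimizer $(0.5, 0.5)$ from any starting point; for genuinely nonlinear problems the step is repeated and safeguarded by the merit function described above.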