
Constrained optimization is the minimization of an objective function subject to constraints on the possible values of the independent variable. Constraints can be either equality constraints or inequality constraints. Because the scalar-variable case follows easily from the vector one, only the latter is discussed in detail here.

Equality constraints

The typical constrained optimization problem has the form
$$\min_{x} f(x) \quad \text{subject to} \quad g(x) = 0,$$
where $f$ is the scalar-valued objective function and $g$ is the vector-valued constraint function. Strict convexity of the objective function is not sufficient to guarantee a unique minimum; in addition, each component of the constraint must be strictly convex to guarantee that the problem has a unique solution. Because of the constraint, stationary points of $f$ alone need not solve the constrained problem: they may not satisfy the constraints. In fact, solutions to the constrained problem are often not stationary points of the objective function. Consequently, the ad hoc technique of searching for all stationary points of the objective function that also satisfy the constraint does not work.

The classical approach to solving constrained optimization problems is the method of Lagrange multipliers. This approach converts the constrained optimization problem into an unconstrained one, thereby allowing use of the techniques described in the previous section. The Lagrangian of a constrained optimization problem is defined to be the scalar-valued function
$$L(x, \lambda) = f(x) + \lambda^T g(x).$$
Essentially, the following theorem states that stationary points of the Lagrangian are potential solutions of the constrained optimization problem: as always, each candidate solution must be tested to determine which minimizes the objective function.
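The stationary points of the Lagrangian can be found symbolically for small problems. As a hypothetical worked example (not from the text), consider minimizing $f(x) = x_1^2 + x_2^2$ subject to the single constraint $g(x) = x_1 + x_2 - 1 = 0$; the sketch below uses SymPy to set the gradient of $L(x, \lambda)$ to zero and solve.

```python
import sympy as sp

# Hypothetical example problem: minimize f(x) = x1^2 + x2^2
# subject to g(x) = x1 + x2 - 1 = 0.
x1, x2, lam = sp.symbols('x1 x2 lam', real=True)
f = x1**2 + x2**2
g = x1 + x2 - 1

# Lagrangian L(x, lam) = f(x) + lam * g(x)
L = f + lam * g

# Stationary points: set the gradient of L with respect to
# (x1, x2, lam) to zero and solve the resulting system.
stationary = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)],
                      (x1, x2, lam), dict=True)
print(stationary)  # x1 = x2 = 1/2 with multiplier lam = -1
```

Note that differentiating $L$ with respect to $\lambda$ simply recovers the constraint $g(x) = 0$, so the constraint is enforced automatically at any stationary point of the Lagrangian.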

Let $x^\circ$ denote a local solution to the constrained optimization problem given above, where the gradients $\nabla_x g_1(x^\circ), \ldots, \nabla_x g_M(x^\circ)$ of the constraint function's components are linearly independent. There then exists a unique vector $\lambda^\circ$ such that
$$\nabla_x L(x^\circ, \lambda^\circ) = 0.$$
Furthermore, the quadratic form $y^T \nabla_x^2 L(x^\circ, \lambda^\circ)\, y$ is non-negative for all $y$ satisfying $\nabla_x g(x^\circ)\, y = 0$.
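The second-order condition can be checked numerically. As a hypothetical illustration, take the problem of minimizing $f(x) = x_1^2 + x_2^2$ subject to $x_1 + x_2 - 1 = 0$, whose solution is $x^\circ = (1/2, 1/2)$ with $\lambda^\circ = -1$. Because the constraint is linear, the Hessian of the Lagrangian in $x$ is just the Hessian of $f$, namely $2I$; the sketch below verifies the quadratic form is non-negative on the tangent space of the constraint.

```python
import numpy as np

# Hypothetical example: f(x) = x1^2 + x2^2, g(x) = x1 + x2 - 1,
# solution x* = (1/2, 1/2) with multiplier lam* = -1.
H = np.array([[2.0, 0.0],
              [0.0, 2.0]])      # Hessian of L in x (the linear constraint drops out)
grad_g = np.array([1.0, 1.0])   # gradient of the constraint at x*

# A direction y in the tangent space { y : grad_g @ y = 0 }
y = np.array([1.0, -1.0])
assert abs(grad_g @ y) < 1e-12  # y is indeed orthogonal to grad_g

quad_form = y @ H @ y
print(quad_form)  # 4.0, non-negative as the theorem requires
```

Only directions satisfying $\nabla_x g(x^\circ)\, y = 0$ need to be tested: directions that leave the constraint surface are irrelevant to the constrained problem.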

The latter result in the theorem says that the Hessian of the Lagrangian evaluated at its stationary points is non-negative definite with respect to all vectors orthogonal to the gradient of the constraint. This result generalizes the notion of a positive definite Hessian in unconstrained problems.

The rather abstract result of the preceding theorem has a simple geometric interpretation. As shown in the figure below, the constraint corresponds to a contour in the $x$ plane.

Geometric interpretation of Lagrange multipliers.

The thick line corresponds to the contour of the values of $x$ satisfying the constraint equation $g(x) = 0$. The thinner lines are contours of constant values of the objective function $f(x)$. The contour corresponding to the smallest value of the objective function that is just tangent to the constraint contour gives the solution to the optimization problem with equality constraints.
A contour map of the objective function indicates those values of $x$ for which $f(x) = c$. In this figure, as $c$ becomes smaller, the contours shrink to a small circle in the center of the figure. The solution to the constrained optimization problem occurs when the smallest value of $c$ is chosen for which the contour just touches the constraint contour. At that point, the gradients of the objective function and of the constraint contour are proportional to each other. The proportionality factor is $\lambda$, the so-called Lagrange multiplier. The Lagrange multiplier's exact value must be such that the constraint is exactly satisfied. Note that the constraint can be tangent to the objective function's contour map for larger values of $c$. These potential, but erroneous, solutions can be discarded only by evaluating the objective function.
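In practice, equality-constrained problems of this kind are also solved numerically. As a hypothetical check (the problem below is an illustrative example, not one from the text), SciPy's SLSQP solver handles equality constraints directly and converges to the tangency point described above; when several tangencies exist, a numerical solver returns only one candidate, so each candidate must still be compared by its objective value.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example: minimize f(x) = x1^2 + x2^2
# subject to the equality constraint x1 + x2 - 1 = 0.
f = lambda x: x[0]**2 + x[1]**2
cons = {'type': 'eq', 'fun': lambda x: x[0] + x[1] - 1}

res = minimize(f, x0=[0.0, 0.0], method='SLSQP', constraints=[cons])
print(res.x)  # approximately [0.5, 0.5], the tangency point
```

The starting point `x0` is arbitrary here; for non-convex objectives, different starting points may land on different tangency points, which is exactly why candidates must be discarded only after evaluating the objective function.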





Source:  OpenStax, Statistical signal processing. OpenStax CNX. Dec 05, 2011 Download for free at http://cnx.org/content/col11382/1.1
