Constrained optimization is the minimization of an objective function subject to constraints on the possible values of the independent variable. Constraints can be either equality constraints or inequality constraints. Because the scalar-variable case follows easily from the vector one, only the latter is discussed in detail here.
The typical constrained optimization problem has the form

$$\min_{x} f(x) \quad \text{subject to} \quad g(x) = 0$$

where $f(\cdot)$ is the scalar-valued objective function and $g(\cdot)$ is the vector-valued constraint function. Strict convexity of the objective function is not sufficient to guarantee a unique minimum; in addition, each component of the constraint must be strictly convex to guarantee that the problem has a unique solution. Because of the constraint, stationary points of $f(\cdot)$ alone may not be solutions to the constrained problem: they may not satisfy the constraints. In fact, solutions to the constrained problem are often not stationary points of the objective function.
The classical approach to solving constrained optimization problems is the method of Lagrange multipliers. This approach converts the constrained optimization problem into an unconstrained one, thereby allowing use of the techniques described in the previous section. The Lagrangian of a constrained optimization problem is defined to be the scalar-valued function

$$L(x, \lambda) = f(x) + \lambda^{T} g(x)$$
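As a concrete sketch (a hypothetical problem, not one from the text): for a quadratic objective with a single linear equality constraint, setting the gradient of the Lagrangian $L(x, \lambda) = f(x) + \lambda^{T} g(x)$ to zero yields a linear system in $(x, \lambda)$ that can be solved directly.

```python
# Hypothetical example: minimize f(x) = x1^2 + x2^2 subject to
# g(x) = x1 + x2 - 1 = 0.  Setting the gradient of the Lagrangian
# L(x, lam) = f(x) + lam * g(x) to zero gives the linear system
#   2*x1 + lam = 0
#   2*x2 + lam = 0
#   x1 + x2    = 1
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

A = [[2.0, 0.0, 1.0],
     [0.0, 2.0, 1.0],
     [1.0, 1.0, 0.0]]
b = [0.0, 0.0, 1.0]
x1, x2, lam = solve3(A, b)
print(x1, x2, lam)  # 0.5 0.5 -1.0
```

The stationary point is $x = (0.5, 0.5)$ with multiplier $\lambda = -1$; because this objective is strictly convex and the constraint is linear, the stationary point is here the unique solution.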
Essentially, the following theorem states that stationary points of the Lagrangian are potential solutions of the constrained optimization problem: as always, each candidate solution must be tested to determine which minimizes the objective function.
Let $x^{\circ}$ denote a local solution to the constrained optimization problem given above where the gradients $\nabla_x g_1(x^{\circ}), \ldots, \nabla_x g_M(x^{\circ})$ of the constraint function's components are linearly independent. There then exists a unique vector $\lambda^{\circ}$ such that

$$\nabla_x L(x^{\circ}, \lambda^{\circ}) = 0$$

Furthermore, the quadratic form $y^{T}\left[\nabla_x^2 L(x^{\circ}, \lambda^{\circ})\right]y$ is non-negative for all $y$ satisfying $\left[\nabla_x g(x^{\circ})\right]^{T} y = 0$.
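The theorem's two conditions can be checked numerically. A minimal sketch, again for the hypothetical problem of minimizing $x_1^2 + x_2^2$ subject to $x_1 + x_2 - 1 = 0$, whose solution is $x^{\circ} = (0.5, 0.5)$ with multiplier $\lambda^{\circ} = -1$:

```python
# Check the first- and second-order conditions of the theorem for the
# hypothetical problem min x1^2 + x2^2 subject to x1 + x2 - 1 = 0.
x_opt = (0.5, 0.5)
lam = -1.0

# First-order condition: grad_x L = grad f + lam * grad g = 0 at x_opt.
grad_f = (2 * x_opt[0], 2 * x_opt[1])   # gradient of x1^2 + x2^2
grad_g = (1.0, 1.0)                     # gradient of x1 + x2 - 1
grad_L = tuple(gf + lam * gg for gf, gg in zip(grad_f, grad_g))
print(grad_L)  # (0.0, 0.0)

# Second-order condition: y^T [grad^2_x L] y >= 0 for every y with
# grad_g . y = 0.  Here grad^2_x L = 2*I, and the directions tangent
# to the constraint have the form y = (t, -t):
for t in (-2.0, -0.5, 1.0, 3.0):
    y = (t, -t)                           # satisfies grad_g . y = 0
    quad = 2 * y[0] ** 2 + 2 * y[1] ** 2  # y^T (2I) y
    assert quad >= 0.0
```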
The rather abstract result of the preceding theorem has a simple geometric interpretation. As shown in the figure, the constraint corresponds to a contour in the plane. A contour map of the objective function indicates those values of $x$ for which $f(x) = c$. In this figure, as $c$ becomes smaller, the contours shrink to a small circle in the center of the figure. The solution to the constrained optimization problem occurs when the smallest value of $c$ is chosen for which the contour just touches the constraint contour. At that point, the gradient of the objective function and the gradient of the constraint contour are proportional to each other. This proportionality vector is $\lambda$, the so-called Lagrange multiplier. The Lagrange multiplier's exact value must be such that the constraint is exactly satisfied. Note that the constraint can be tangent to the objective function's contour map for larger values of $c$. These potential, but erroneous, solutions can be discarded only by evaluating the objective function.
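The tangency picture can be reproduced numerically. A minimal sketch, assuming circular contours $f(x) = x_1^2 + x_2^2$ (a hypothetical stand-in for the figure's objective) and the line constraint $x_1 + x_2 - 1 = 0$: scanning $f$ along the constraint finds the smallest contour value $c$ whose contour touches the line.

```python
# Hypothetical example: circular contours f(x) = x1^2 + x2^2 with the
# line constraint g(x) = x1 + x2 - 1 = 0.  The constrained minimum is
# the smallest contour value c for which f(x) = c touches the line.
def f(x1, x2):
    return x1 ** 2 + x2 ** 2

# Parameterize the constraint as x2 = 1 - x1 and scan x1 over [-1, 2].
samples = [i / 1000 for i in range(-1000, 2001)]
c_min, x1_best = min((f(x1, 1 - x1), x1) for x1 in samples)
print(c_min, x1_best)  # 0.5 0.5 -- the contour f = 0.5 is tangent at (0.5, 0.5)
```

Any larger value of $c$ gives a contour that crosses the constraint line at two points rather than touching it, which is the two-intersection situation the text warns must be discarded by evaluating the objective.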