The least squares optimal filter design problem is quadratic in the filter coefficients:
$\epsilon^2 = \sigma_d^2 - 2 W^T P + W^T R W$
If $R$ is positive definite, the error surface $\epsilon^2(w_0, w_1, \ldots, w_{N-1})$ is a unimodal "bowl" in $\mathbb{R}^N$.
For a quadratic error surface, the bottom of the bowl can be found in one step by computing $W_{opt} = R^{-1} P$. Most modern nonlinear optimization methods (which are used, for example, to solve the optimal IIR filter design problem!) locally approximate a nonlinear function with a second-order (quadratic) Taylor series approximation and step to the bottom of this quadratic approximation on each iteration. However, an older and simpler approach to nonlinear optimization exists, based on gradient descent.
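As an illustration of the one-step quadratic solution, here is a minimal sketch with assumed numeric values for $R$ and $P$ (a hypothetical 2-tap example; in practice these come from the signal statistics):

```python
import numpy as np

# Hypothetical 2-tap example: R is the input autocorrelation matrix and
# P the cross-correlation vector between input and desired signal.
R = np.array([[2.0, 0.5],
              [0.5, 2.0]])   # positive definite -> unimodal "bowl"
P = np.array([1.0, 0.3])

# For a quadratic error surface, the bottom of the bowl is found in one
# step by solving R W_opt = P, i.e. W_opt = R^{-1} P.
W_opt = np.linalg.solve(R, P)

# The gradient of eps^2, namely -2P + 2 R W, vanishes at W_opt.
grad = -2 * P + 2 * R @ W_opt
print(W_opt, np.linalg.norm(grad))
```

Solving the linear system directly (rather than explicitly inverting $R$) is the standard numerically preferred way to compute $R^{-1} P$.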
[Figure: contour plot of the error surface $\epsilon^2$]
By updating the coefficient vector with a step opposite the gradient direction, $W_{k+1} = W_k - \mu \nabla_W \epsilon^2$, we go (locally) "downhill" in the steepest direction, which seems a sensible way to iteratively solve a nonlinear optimization problem. The performance obviously depends on the step size $\mu$: if $\mu$ is too large, the iterations could bounce back and forth, up and out of the bowl. However, if $\mu$ is too small, it could take many iterations to approach the bottom. We will determine criteria for choosing $\mu$ later.
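The step-size sensitivity can be seen on a toy one-dimensional quadratic (values chosen only for illustration; the formal criteria for $\mu$ come later). For $\epsilon^2(w) = (w - 2)^2$ the gradient is $2(w - 2)$:

```python
# Gradient descent on the 1-D quadratic e(w) = (w - 2)^2, whose
# gradient is 2*(w - 2). Toy values chosen only for illustration.
def descend(mu, steps=50, w=0.0):
    for _ in range(steps):
        w = w - mu * 2 * (w - 2.0)   # step opposite the gradient
    return w

w_small = descend(mu=0.05)   # small mu: steady but slow approach to w = 2
w_large = descend(mu=1.2)    # mu too large: iterates bounce up out of the bowl
print(w_small, abs(w_large))
```

With $\mu = 0.05$ the iterate creeps toward the minimum at $w = 2$, while $\mu = 1.2$ makes each step overshoot by a growing amount and the iteration diverges.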
In summary, the gradient descent algorithm for solving the Wiener filter problem is: $W_{k+1} = W_k + 2\mu (P - R W_k)$, since $\nabla_W \epsilon^2 = -2P + 2 R W$. The gradient descent idea is used in the LMS adaptive filter algorithm. As presented, this algorithm costs $O(N^2)$ computations per iteration and doesn't appear very attractive, but LMS requires only $O(N)$ computations and is stable, so it is very attractive when computation is an issue, even though it converges more slowly than the RLS algorithms we have discussed so far.
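The summary update above can be sketched end to end, again with assumed values for $R$ and $P$ and a step size picked small enough for stability (the stability criterion itself is derived later):

```python
import numpy as np

# Hypothetical autocorrelation matrix R and cross-correlation vector P.
R = np.array([[2.0, 0.5],
              [0.5, 2.0]])
P = np.array([1.0, 0.3])

# Gradient descent iteration: W_{k+1} = W_k + 2*mu*(P - R W_k).
# mu = 0.1 is assumed here; it is well below 1/lambda_max(R) = 0.4.
mu = 0.1
W = np.zeros(2)
for _ in range(200):
    W = W + 2 * mu * (P - R @ W)   # O(N^2) cost per iteration (R @ W)

# Compare against the one-step quadratic solution W_opt = R^{-1} P.
W_direct = np.linalg.solve(R, P)
print(W, W_direct)
```

After enough iterations the descent iterate matches the closed-form Wiener solution; the point of LMS is to replace the $O(N^2)$ product $R W_k$ with an $O(N)$ instantaneous estimate.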