
One of the most commonly used quasi-Newton methods in IIR filter design is the Davidon-Fletcher-Powell (DFP) method [link]. In 1970 K. Steiglitz [link] used the DFP method to solve an IIR magnitude approximation to a desired real frequency response. For stability reasons he used a cascade form of the IIR filter given in [link], namely

$$ H(z) = \alpha \prod_{r=1}^{M} \frac{1 + a_r z^{-1} + b_r z^{-2}}{1 + c_r z^{-1} + d_r z^{-2}} $$

Therefore he considered the following problem,

$$ \min_{a_r,\, b_r,\, c_r,\, d_r} \; \sum_{\omega_k} \left| \alpha \prod_{r=1}^{M} \frac{1 + a_r e^{-j\omega_k} + b_r e^{-2j\omega_k}}{1 + c_r e^{-j\omega_k} + d_r e^{-2j\omega_k}} - D(\omega_k) \right|^2 $$

His method is a direct application of the DFP algorithm to problem [link].
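To make the objective concrete, the following Python/NumPy sketch evaluates the cascade-form response on a frequency grid and hands the resulting squared error to a quasi-Newton routine. SciPy does not ship a DFP update, so BFGS (a closely related quasi-Newton method) is used here as a stand-in; the function names, parameter packing, and initial guess are illustrative assumptions, not Steiglitz's original formulation.

```python
import numpy as np
from scipy.optimize import minimize

def cascade_response(alpha, a, b, c, d, w):
    """H(e^{jw}) for a cascade of M second-order sections (arrays a, b, c, d)."""
    z1, z2 = np.exp(-1j * w), np.exp(-2j * w)
    H = alpha * np.ones_like(w, dtype=complex)
    for ar, br, cr, dr in zip(a, b, c, d):
        H *= (1 + ar * z1 + br * z2) / (1 + cr * z1 + dr * z2)
    return H

def fit_cascade(w, D, M):
    """Quasi-Newton least-squares fit of the cascade coefficients to D(w_k)."""
    def objective(x):
        alpha = x[0]
        a, b, c, d = x[1:].reshape(4, M)
        return np.sum(np.abs(cascade_response(alpha, a, b, c, d, w) - D) ** 2)

    x0 = np.zeros(1 + 4 * M)
    x0[0] = 1.0                                    # alpha = 1, all sections trivial
    return minimize(objective, x0, method='BFGS')  # BFGS stands in for DFP
```

Nothing in this unconstrained formulation keeps the poles inside the unit circle, so in practice a stability check on the denominator sections (such as the pole reflection discussed below) would be added.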

In 1972 Andrew Deczky [link] employed the DFP algorithm to solve a complex IIR least-$p$ approximation to a desired frequency response. Like Steiglitz, Deczky chose the cascaded IIR structure of [link], mainly for stability reasons but also because he claims that for this structure it is simpler to derive the first-order information required by the DFP method.

The MATLAB Signal Processing Toolbox includes a function called INVFREQZ, originally written by J. Smith and J. Little [link]. Invfreqz uses the algorithm by Levy (see § [link]) as an initial step and then runs an iterative algorithm based on the damped Gauss-Newton method [link] to minimize the solution error $\varepsilon_s$ in the least-squares sense. The method performs a line search after every iteration to determine the step along the descent direction. Invfreqz evaluates the roots of $A(z)$ after each iteration to verify that the poles of $H(z)$ lie inside the unit circle; any pole found outside is replaced by its reciprocal. This approach guarantees a stable filter.
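The pole-reflection step can be illustrated with a short sketch (this is not the Toolbox code, just a minimal NumPy version of the idea; using the conjugate reciprocal keeps the coefficients real for real filters):

```python
import numpy as np

def stabilize(a_coeffs):
    """Reflect any pole outside the unit circle back inside.

    a_coeffs holds the denominator coefficients of A(z) in descending powers,
    as expected by np.roots; the leading coefficient is preserved.
    """
    poles = np.roots(a_coeffs)
    outside = np.abs(poles) > 1.0
    poles[outside] = 1.0 / np.conj(poles[outside])   # conjugate reciprocal reflection
    return np.real(a_coeffs[0] * np.poly(poles))     # rebuild A(z) from the new poles
```

Reflecting a pole in this way preserves the shape of the magnitude response up to a constant gain factor, which is why it is a common stabilization device.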

Among other Newton-based approaches, Spanos and Mingori [link] use a Newton algorithm combined with the Levenberg-Marquardt technique to improve the algorithm's convergence properties. Their idea is to express the denominator function $A(\omega)$ as a sum of second-order rational polynomials. Thus $H(\omega)$ can be written as

$$ H(\omega) = \sum_{r=1}^{L-1} \frac{b_r + j\omega\,\beta_r}{a_r + j\omega\,\alpha_r - \omega^2} + d $$

Their global descent approach is similar to the one presented in [link]. Like any Newton-based method, this approach suffers from a poor initial guess and is not guaranteed to converge to a local minimum; when it does converge, however, convergence is quadratic.
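The sketch below shows how such a sum-of-second-order-sections model could be fitted with a Levenberg-Marquardt routine in Python/SciPy. It is not Spanos and Mingori's algorithm (they combine Newton steps with Levenberg-Marquardt damping); the parameter packing, the independent numerator and denominator coefficients per section, and the real/imaginary residual stacking are assumptions made for this illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def sos_sum_response(x, w, L):
    """Model: sum of L-1 second-order rational terms plus a constant d.

    x packs, per section, (b_r, beta_r, a_r, alpha_r), followed by d.
    """
    d = x[-1]
    sections = x[:-1].reshape(L - 1, 4)
    H = np.full_like(w, d, dtype=complex)
    for b, beta, a, alpha in sections:
        H += (b + 1j * w * beta) / (a + 1j * w * alpha - w ** 2)
    return H

def fit_sos_sum(w, D, L, x0):
    """Levenberg-Marquardt fit of the model to complex samples D(w_k)."""
    def residuals(x):
        e = sos_sum_response(x, w, L) - D
        return np.concatenate([e.real, e.imag])   # LM needs real residuals
    return least_squares(residuals, x0, method='lm')
```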

Kumaresan's method [link] follows a three-step approach. It is not clear whether his method attempts to minimize the equation error or the solution error. He uses divided differences [link] to reformulate the solution error in terms of the coefficients $a_k$. Using Lagrange multiplier theory, he defines

$$ E = y^T C^T \left[ C C^T \right]^{-1} C y $$

where $y = [H_0 \; H_1 \; \cdots \; H_{L-1}]^T$ contains the frequency samples and $C$ is a composition matrix containing the frequency divided differences and the coefficients $a_k$ (a more detailed derivation can be found in [link]). Equation [link] is iterated until the coefficient vector $\hat{a}$ converges. This vector is then used as the initial guess in the second step, a Newton-Raphson search for the optimal $\hat{a}$ that minimizes $E^2$. Finally, the vector $\hat{b}$ is found by solving a linear system of equations.
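For reference, the quadratic form above can be evaluated without forming an explicit inverse; the construction of $C$ from the divided differences and the $a_k$ coefficients follows the cited derivation and is not reproduced in this small sketch (real-valued $y$ and $C$ are assumed):

```python
import numpy as np

def lagrange_error(y, C):
    """E = y^T C^T [C C^T]^{-1} C y, evaluated via a linear solve."""
    v = C @ y                                     # v = C y
    return float(v @ np.linalg.solve(C @ C.T, v))
```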

Equation error linearization methods

General-purpose optimization tools typically prove effective in finding a solution. In the context of IIR filter design, however, they often require a rather large number of iterations, generate large matrices, or involve complicated steps such as solving for or estimating (and often inverting) vectors and matrices of first- and second-order information [link]. Using gradient-based tools for nonlinear problems like [link] can therefore be a suboptimal approach. Typical Newton-based methods converge quickly (quadratically), yet they rely on assumptions about radii of convergence and initial proximity to the solution; otherwise performance degrades. In the context of filter design one should ask whether better performance could be achieved by exploiting the structure of the problem. This section introduces the concept of linearization, an alternative to general optimization methods that has proven successful in the context of rational approximation. The main idea behind linearization approaches is to transform a complicated nonlinear problem into a sequence of linear ones, an idea parallel to the approach followed in our development of IRLS $l_p$ optimization.
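To illustrate what a single linearization step looks like, here is a minimal NumPy sketch of the equation-error least-squares problem in the spirit of Levy's method cited above. The function name, coefficient ordering, and the choice of a monic denominator ($a_0 = 1$) are assumptions made for this example:

```python
import numpy as np

def equation_error_fit(w, D, N, M):
    """Solve min over {b_n, a_m} of sum_k |B(e^{jw_k}) - D(w_k) A(e^{jw_k})|^2.

    With A monic, the problem is linear in the unknown coefficients and
    reduces to an ordinary (real) least-squares system.
    """
    z = np.exp(-1j * np.outer(w, np.arange(max(N, M) + 1)))  # columns e^{-j w_k n}
    # Unknowns: b_0..b_N, then a_1..a_M (a_0 is fixed to 1).
    Phi = np.hstack([z[:, :N + 1], -D[:, None] * z[:, 1:M + 1]])
    rhs = D                                 # the a_0 D(w_k) term moves to the right side
    # Stack real and imaginary parts so the least-squares problem is real.
    x, *_ = np.linalg.lstsq(np.vstack([Phi.real, Phi.imag]),
                            np.concatenate([rhs.real, rhs.imag]), rcond=None)
    return x[:N + 1], np.concatenate([[1.0], x[N + 1:]])     # (b, a) coefficient vectors
```

Iterative schemes built on this idea re-weight or otherwise correct the linear problem at each pass, paralleling the sequence-of-linear-problems structure mentioned above.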





Source:  OpenStax, Iterative design of l_p digital filters. OpenStax CNX. Dec 07, 2011 Download for free at http://cnx.org/content/col11383/1.1
