
Jackson's method

The following is a recent approach (from 2008) by Leland Jackson [link], formulated in the frequency domain. Consider vectors $a \in \mathbb{R}^N$ and $b \in \mathbb{R}^M$ such that

$$H(\omega) = \frac{B(\omega)}{A(\omega)}$$

where $H(\omega)$, $B(\omega)$, $A(\omega)$ are the Fourier transforms of $h$, $b$ and $a$ respectively. For a discrete frequency set one can describe the Fourier transform vectors $B = W_b\, b$ and $A = W_a\, a$ (where $W_b$, $W_a$ are the discrete Fourier kernel matrices for $b$ and $a$ respectively). Define

$$H_a(\omega_k) = \frac{1}{A(\omega_k)}$$

In vector notation, let $D_a = \mathrm{diag}(H_a) = \mathrm{diag}(1/A)$. Then

$$H(\omega) = \frac{B(\omega)}{A(\omega)} = H_a(\omega)\,B(\omega) \quad\Longrightarrow\quad H = D_a B$$

Let $H_d(\omega)$ be the desired complex frequency response and define $D_d = \mathrm{diag}(H_d)$. Then one wants to solve

$$\min \; E^*E = \|E\|_2^2$$

where $E = H - H_d$. From [link] one can write $H = H_d + E$ as

$$H = D_a B = D_a W_b\, b$$

Therefore

$$H_d = H - E = D_a W_b\, b - E$$

Solving [link] for $b$ in the least-squares sense one gets

$$b = (D_a W_b)^\dagger H_d$$

where $(\cdot)^\dagger$ denotes the Moore-Penrose pseudoinverse.

Also,

$$H_d = D_d \hat{I} = D_d D_a A = D_a D_d A = D_a D_d W_a\, a$$

where $\hat{I}$ is a unit column vector; the second equality uses $D_a A = \hat{I}$, since $D_a = \mathrm{diag}(1/A)$. Therefore

$$H - E = H_d = D_a D_d W_a\, a$$

From [link] we get

$$D_a W_b\, b - E = D_a D_d W_a\, a$$

or

$$D_a D_d W_a\, a + E = D_a W_b\, b$$

which in a least squares sense results in

$$a = (D_a D_d W_a)^\dagger (D_a W_b\, b)$$

From [link] one gets

$$a = (D_a D_d W_a)^\dagger \left( D_a W_b \left[ (D_a W_b)^\dagger H_d \right] \right)$$

In summary, at the $i$-th iteration one can write [link] and [link] as follows,

$$b_i = \left( \mathrm{diag}(1/A_{i-1})\, W_b \right)^\dagger H_d$$

$$a_i = \left( \mathrm{diag}(1/A_{i-1})\, \mathrm{diag}(H_d)\, W_a \right)^\dagger \left( \mathrm{diag}(1/A_{i-1})\, W_b\, b_i \right)$$
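As an illustration, the two updates above can be sketched with NumPy least-squares solves. This is a minimal sketch, not Jackson's original code: the helper names (`real_lstsq`, `jackson_iteration`) are made up here, the pseudoinverses are applied via `np.linalg.lstsq` on stacked real/imaginary parts so the coefficients stay real, and the denominator vector is renormalized after each solve so its leading coefficient is 1.

```python
import numpy as np

def real_lstsq(C, d):
    # Least-squares solve of C x ~ d for a REAL vector x with complex C, d,
    # by stacking real and imaginary parts.
    x, *_ = np.linalg.lstsq(np.vstack([C.real, C.imag]),
                            np.concatenate([d.real, d.imag]), rcond=None)
    return x

def jackson_iteration(Hd, w, M, N, iters=10, a_init=None):
    # Sketch of Jackson's iteration:
    #   b_i = (diag(1/A_{i-1}) W_b)^+ H_d
    #   a_i = (diag(1/A_{i-1}) diag(H_d) W_a)^+ (diag(1/A_{i-1}) W_b b_i)
    Wb = np.exp(-1j * np.outer(w, np.arange(M + 1)))  # Fourier kernel for b
    Wa = np.exp(-1j * np.outer(w, np.arange(N + 1)))  # Fourier kernel for a
    a = np.r_[1.0, np.zeros(N)] if a_init is None else np.asarray(a_init, float)
    b = None
    for _ in range(iters):
        Da = 1.0 / (Wa @ a)                   # diagonal of diag(1/A_{i-1})
        b = real_lstsq(Da[:, None] * Wb, Hd)
        a = real_lstsq((Da * Hd)[:, None] * Wa, Da * (Wb @ b))
        a = a / a[0]   # assumption: renormalize so that a_0 = 1
    return b, a
```

When $H_d$ is exactly realizable with the chosen orders, the true filter is a fixed point of this iteration: starting from the true denominator, one pass of the two solves reproduces the true coefficients.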

Soewito's quasilinearization method

Consider the equation error residual function

$$\begin{aligned}
e(\omega_k) &= B(\omega_k) - D(\omega_k)\,A(\omega_k) \\
&= \sum_{n=0}^{M} b_n e^{-j\omega_k n} - D(\omega_k)\left[ 1 + \sum_{n=1}^{N} a_n e^{-j\omega_k n} \right] \\
&= b_0 + b_1 e^{-j\omega_k} + \cdots + b_M e^{-j\omega_k M} - D_k - D_k a_1 e^{-j\omega_k} - \cdots - D_k a_N e^{-j\omega_k N} \\
&= b_0 + \cdots + b_M e^{-j\omega_k M} - D_k\left( a_1 e^{-j\omega_k} + \cdots + a_N e^{-j\omega_k N} \right) - D_k
\end{aligned}$$

with $D_k = D(\omega_k)$. The last equation indicates that one can represent the equation error in matrix form as follows,

$$e = Fh - D$$

where

$$F = \begin{bmatrix}
1 & e^{-j\omega_0} & \cdots & e^{-j\omega_0 M} & -D_0 e^{-j\omega_0} & \cdots & -D_0 e^{-j\omega_0 N} \\
\vdots & & & & & & \vdots \\
1 & e^{-j\omega_{L-1}} & \cdots & e^{-j\omega_{L-1} M} & -D_{L-1} e^{-j\omega_{L-1}} & \cdots & -D_{L-1} e^{-j\omega_{L-1} N}
\end{bmatrix}$$

and

$$h = \begin{bmatrix} b_0 \\ b_1 \\ \vdots \\ b_M \\ a_1 \\ \vdots \\ a_N \end{bmatrix}
\qquad\text{and}\qquad
D = \begin{bmatrix} D_0 \\ \vdots \\ D_{L-1} \end{bmatrix}$$
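For illustration, the matrix $F$ can be assembled directly with NumPy (the helper name `build_F` is an invention of this sketch):

```python
import numpy as np

def build_F(w, D, M, N):
    # Equation-error matrix F (L x (M+N+1)); row k is
    # [1, e^{-jw_k}, ..., e^{-jw_k M}, -D_k e^{-jw_k}, ..., -D_k e^{-jw_k N}]
    cols_b = np.exp(-1j * np.outer(w, np.arange(M + 1)))
    cols_a = -D[:, None] * np.exp(-1j * np.outer(w, np.arange(1, N + 1)))
    return np.hstack([cols_b, cols_a])
```

With $h = [b_0, \ldots, b_M, a_1, \ldots, a_N]^T$, the product $Fh - D$ reproduces $e(\omega_k) = B(\omega_k) - D(\omega_k)A(\omega_k)$ at every grid frequency.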

Consider now the solution error residual function

$$s(\omega_k) = H(\omega_k) - D(\omega_k) = \frac{B(\omega_k)}{A(\omega_k)} - D(\omega_k) = \frac{1}{A(\omega_k)}\left[ B(\omega_k) - D(\omega_k)\,A(\omega_k) \right] = W(\omega_k)\, e(\omega_k)$$

Therefore one can write the solution error in matrix form as follows

$$s = W(Fh - D)$$

where $W$ is a diagonal matrix with $\frac{1}{A(\omega)}$ on its diagonal. From [link] the least-squares solution error $\varepsilon_s = s^* s$ is minimized by

$$h = (F^* W^2 F)^{-1} F^* W^2 D$$

From [link] an iteration could be defined as follows (Soewito refers to this expression as the Steiglitz-McBride Mode-1 in the frequency domain):

$$h_{i+1} = (F^* W_i^2 F)^{-1} F^* W_i^2 D$$

where the weighting matrix $W_i$ in [link] is built from $A_i(\omega)$, the Fourier transform of the current solution for $a$ (that is, $W_i = \mathrm{diag}(1/A_i)$).
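A minimal NumPy sketch of this Mode-1 iteration follows (the function name is made up here; the update $(F^*W_i^2F)^{-1}F^*W_i^2D$ is carried out as a least-squares solve on $(W_iF,\ W_iD)$, with real/imaginary stacking so the coefficients stay real):

```python
import numpy as np

def smb_mode1(D, w, M, N, iters=5):
    # Steiglitz-McBride Mode-1 in the frequency domain:
    #   h_{i+1} = (F* W_i^2 F)^{-1} F* W_i^2 D,  with W_i = diag(1/A_i)
    E = np.exp(-1j * np.outer(w, np.arange(max(M, N) + 1)))
    F = np.hstack([E[:, :M + 1], -D[:, None] * E[:, 1:N + 1]])
    Wi = np.ones(len(w), dtype=complex)   # A_0(w) = 1: first pass is plain equation error
    h = None
    for _ in range(iters):
        C, d = Wi[:, None] * F, Wi * D
        h, *_ = np.linalg.lstsq(np.vstack([C.real, C.imag]),
                                np.concatenate([d.real, d.imag]), rcond=None)
        Wi = 1.0 / (1 + E[:, 1:N + 1] @ h[M + 1:])   # 1/A_i from the new a
    return h   # h = [b_0..b_M, a_1..a_N]
```

For a desired response that is exactly realizable at the chosen orders, the residual of the very first (unweighted) solve is already zero, so the iteration returns the true coefficients.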

A more formal approach to minimizing $\varepsilon_s$ consists in using a gradient method (such approaches are often referred to as Newton-like methods). First one needs to compute the Jacobian matrix $J$ of $s$, where the $pq$-th term of $J$ is given by $J_{pq} = \frac{\partial s_p}{\partial h_q}$, with $s$ as defined in [link]. Note that the $p$-th element of $s$ is given by

$$s_p = H_p - D_p = \frac{B_p}{A_p} - D_p$$

For simplicity one can consider these reduced-form expressions for the independent components of $h$,

$$\frac{\partial s_p}{\partial b_q} = \frac{1}{A_p} \frac{\partial}{\partial b_q} \sum_{n=0}^{M} b_n e^{-j\omega_p n} = W_p\, e^{-j\omega_p q}$$

$$\frac{\partial s_p}{\partial a_q} = B_p \frac{\partial}{\partial a_q} \frac{1}{A_p} = -\frac{B_p}{A_p^2} \frac{\partial}{\partial a_q}\left[ 1 + \sum_{n=1}^{N} a_n e^{-j\omega_p n} \right] = -\frac{1}{A_p}\cdot\frac{B_p}{A_p}\cdot e^{-j\omega_p q} = -W_p H_p\, e^{-j\omega_p q}$$

Therefore one can express the Jacobian $J$ as follows,

$$J = WG$$

where

$$G = \begin{bmatrix}
1 & e^{-j\omega_0} & \cdots & e^{-j\omega_0 M} & -H_0 e^{-j\omega_0} & \cdots & -H_0 e^{-j\omega_0 N} \\
\vdots & & & & & & \vdots \\
1 & e^{-j\omega_{L-1}} & \cdots & e^{-j\omega_{L-1} M} & -H_{L-1} e^{-j\omega_{L-1}} & \cdots & -H_{L-1} e^{-j\omega_{L-1} N}
\end{bmatrix}$$
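The structure $J = WG$ can be verified numerically. In the sketch below (helper names invented here), `build_G` assembles $G$ from the current response $H$ and `solution_error` evaluates $s$; a centered finite-difference Jacobian of $s$ should then match $WG$.

```python
import numpy as np

def build_G(w, H, M, N):
    # Same structure as the equation-error matrix F, but with the current
    # response H(w_k) in place of the desired response D(w_k).
    cols_b = np.exp(-1j * np.outer(w, np.arange(M + 1)))
    cols_a = -H[:, None] * np.exp(-1j * np.outer(w, np.arange(1, N + 1)))
    return np.hstack([cols_b, cols_a])

def solution_error(h, w, D, M, N):
    # s(w_k) = B(w_k)/A(w_k) - D(w_k) for h = [b_0..b_M, a_1..a_N]
    B = np.exp(-1j * np.outer(w, np.arange(M + 1))) @ h[:M + 1]
    A = 1 + np.exp(-1j * np.outer(w, np.arange(1, N + 1))) @ h[M + 1:]
    return B / A - D
```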

Consider the solution error least-squares problem given by

$$\min_h\; f(h) = s^T s$$

where $s$ is the solution error residual vector as defined in [link] and depends on $h$. It can be shown [link] that the gradient of the squared error $f(h)$ (namely $\nabla f$) is given by

$$\nabla f = J^* s$$

A necessary condition for a vector $h$ to be a local minimizer of $f(h)$ is that the gradient $\nabla f$ be zero at that vector. With this in mind, combining [link] and [link] in [link] one gets

$$\nabla f = G^* W^2 (Fh - D) = 0$$

Solving the system [link] gives

$$h = (G^* W^2 F)^{-1} G^* W^2 D$$

An iteration can be defined as follows (Soewito refers to this expression as the Steiglitz-McBride Mode-2 in the frequency domain; compare it to the Mode-1 expression and note the use of $G_i$ instead of $F$):

$$h_{i+1} = (G_i^* W_i^2 F)^{-1} G_i^* W_i^2 D$$

where matrices W and G reflect their dependency on current values of a and b .
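A NumPy sketch of the Mode-2 update follows. The helper name is made up here; also, since $h$ is real while the system $G_i^* W_i^2 F\,h = G_i^* W_i^2 D$ is complex, this sketch solves the real part of the normal equations and initializes the model response at the target, both of which are assumptions of the sketch rather than steps stated in the text.

```python
import numpy as np

def smb_mode2(D, w, M, N, iters=5):
    # Steiglitz-McBride Mode-2 in the frequency domain:
    #   h_{i+1} = (G_i* W_i^2 F)^{-1} G_i* W_i^2 D
    E = np.exp(-1j * np.outer(w, np.arange(max(M, N) + 1)))
    F = np.hstack([E[:, :M + 1], -D[:, None] * E[:, 1:N + 1]])
    A = np.ones(len(w), dtype=complex)
    H = D.copy()          # assumption: initialize the model response at D
    h = None
    for _ in range(iters):
        G = np.hstack([E[:, :M + 1], -H[:, None] * E[:, 1:N + 1]])
        Wsq = (1.0 / A) ** 2                     # W_i^2 as written above
        lhs = (G.conj().T * Wsq) @ F             # G* W^2 F
        rhs = (G.conj().T * Wsq) @ D             # G* W^2 D
        h = np.linalg.solve(lhs.real, rhs.real)  # real-part trick for real h
        A = 1 + E[:, 1:N + 1] @ h[M + 1:]        # update A_i and H_i
        H = (E[:, :M + 1] @ h[:M + 1]) / A
    return h
```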

Atmadji Soewito [link] extended the quasilinearization method of Bellman and Kalaba [link] to the design of IIR filters. To understand his method, consider the first-order Taylor expansion of $H_{i+1}(z)$ about $H_i(z)$, given by

$$\begin{aligned}
H_{i+1}(z) &= H_i(z) + \frac{[B_{i+1}(z) - B_i(z)]\,A_i(z) - [A_{i+1}(z) - A_i(z)]\,B_i(z)}{A_i^2(z)} \\
&= H_i(z) + \frac{B_{i+1}(z) - B_i(z)}{A_i(z)} - \frac{B_i(z)\,[A_{i+1}(z) - A_i(z)]}{A_i^2(z)}
\end{aligned}$$

Using the last result in the solution error residual function $s(\omega)$ and simplifying leads to

$$s(\omega) = \frac{B_{i+1}(\omega)}{A_i(\omega)} - \frac{H_i(\omega)\,A_{i+1}(\omega)}{A_i(\omega)} + \frac{B_i(\omega)}{A_i(\omega)} - D(\omega) = \frac{1}{A_i(\omega)} \left[ B_{i+1}(\omega) - H_i(\omega)\,A_{i+1}(\omega) + B_i(\omega) - A_i(\omega)\,D(\omega) \right]$$

Equation [link] can be expressed (dropping the use of $\omega$ for simplicity) as

$$s = W\left\{ \left[ B_{i+1} - H_i(A_{i+1} - 1) \right] - H_i + \left[ B_i - D(A_i - 1) \right] - D \right\}$$

One can recognize the two terms in brackets as $Gh_{i+1}$ and $Fh_i$ respectively. Therefore [link] can be represented in matrix notation as follows,

$$s = W\left[ Gh_{i+1} - (D + H_i - Fh_i) \right]$$

with $H = [H_0, H_1, \ldots, H_{L-1}]^T$. Therefore one can minimize $s^T s$ from [link] with

$$h_{i+1} = (G_i^* W_i^2 G_i)^{-1} G_i^* W_i^2 (D + H_i - Fh_i)$$

since all the terms inside the parentheses in [link] are constant at the $(i+1)$-th iteration. In a sense, [link] is similar to [link], where the desired function is updated from iteration to iteration as in [link].
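The quasilinearization update can likewise be sketched in NumPy (the function name is invented here; the weighted solve is carried out as least squares on $W_i G_i$ with target $W_i(D + H_i - Fh_i)$, stacking real and imaginary parts to keep $h$ real, and the starting point is taken to be the unweighted equation-error solution, an assumption of this sketch):

```python
import numpy as np

def quasilinearization(D, w, M, N, iters=5):
    # Soewito's quasilinearization update:
    #   h_{i+1} = (G_i* W_i^2 G_i)^{-1} G_i* W_i^2 (D + H_i - F h_i)
    E = np.exp(-1j * np.outer(w, np.arange(max(M, N) + 1)))
    F = np.hstack([E[:, :M + 1], -D[:, None] * E[:, 1:N + 1]])

    def rlstsq(C, d):   # real least squares from complex data
        x, *_ = np.linalg.lstsq(np.vstack([C.real, C.imag]),
                                np.concatenate([d.real, d.imag]), rcond=None)
        return x

    h = rlstsq(F, D)    # assumption: start from the equation-error solution
    for _ in range(iters):
        A = 1 + E[:, 1:N + 1] @ h[M + 1:]            # current A_i, H_i
        H = (E[:, :M + 1] @ h[:M + 1]) / A
        G = np.hstack([E[:, :M + 1], -H[:, None] * E[:, 1:N + 1]])
        Wi = (1.0 / A)[:, None]
        h = rlstsq(Wi * G, Wi[:, 0] * (D + H - F @ h))   # updated target
    return h
```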

It is important to note that any of the three algorithms can be modified to solve a weighted $l_2$ IIR approximation using a weighting function $W(\omega)$ by defining

$$V(\omega) = \frac{W(\omega)}{A(\omega)}$$

Taking [link] into account, the following is a summary of the three different updates discussed so far:

SMB Frequency Mode-1: $h_{i+1} = (F^* V_i^2 F)^{-1} F^* V_i^2 D$

SMB Frequency Mode-2: $h_{i+1} = (G_i^* V_i^2 F)^{-1} G_i^* V_i^2 D$

Soewito's quasilinearization: $h_{i+1} = (G_i^* V_i^2 G_i)^{-1} G_i^* V_i^2 (D + H_i - Fh_i)$





Source:  OpenStax, Iterative design of l_p digital filters. OpenStax CNX. Dec 07, 2011 Download for free at http://cnx.org/content/col11383/1.1
