The following is a recent approach (from 2008) by Leland Jackson [link], formulated in the frequency domain. Consider vectors and such that
where are the Fourier transforms of and respectively. For a discrete frequency set one can describe Fourier transform vectors and (where correspond to the discrete Fourier kernels for respectively). Define
In vector notation, let . Then
Let be the desired complex frequency response. Define . Then one wants to solve
where . From [link] one can write as
Therefore
Solving [link] for one gets
Also,
where is a unit column vector. Therefore
From [link] we get
or
which in a least squares sense results in
From [link] one gets
As a summary, at the -th iteration one can write [link] and [link] as follows,
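The vector notation used in this derivation can be made concrete with a short sketch that builds the discrete Fourier-kernel matrices and evaluates the rational response H = B/A on a frequency grid. All names here (`kernels`, `freq_response`, the grid) are assumptions for illustration, not Jackson's original code.

```python
import numpy as np

def kernels(w, M, N):
    """Discrete Fourier kernel matrices for the numerator B (order M)
    and denominator A (order N), sampled at frequencies w (rad/sample)."""
    Eb = np.exp(-1j * np.outer(w, np.arange(M + 1)))     # L x (M+1)
    Ea = np.exp(-1j * np.outer(w, np.arange(1, N + 1)))  # L x N
    return Eb, Ea

def freq_response(w, b, a):
    """Evaluate H = B/A on the grid w, with A(w) = 1 + sum_n a_n e^{-jwn}."""
    Eb, Ea = kernels(w, len(b) - 1, len(a))
    return (Eb @ b) / (1 + Ea @ a)
```

The kernel matrices defined here are reused by all of the iterations that follow: each algorithm differs only in how it sets up and reweights a linear least-squares problem built from `Eb` and `Ea`.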
Consider the equation error residual function
with . The last equation indicates that one can represent the equation error in matrix form as follows,
where
and
Consider now the solution error residual function
Therefore one can write the solution error in matrix form as follows
where is a diagonal matrix with on its diagonal. From [link] the least-squared solution error can be minimized by
From [link] an iteration could be defined as follows (Soewito refers to this expression as the Steiglitz-McBride Mode-1 in frequency domain)
by setting the weights in [link] equal to , the Fourier transform of the current solution for .
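As an illustrative sketch (not Soewito's original code), the Mode-1 iteration can be implemented as a reweighted linear least-squares loop: each pass minimizes the equation error weighted by the reciprocal magnitude squared of the previous denominator. Function and variable names are assumptions for the example.

```python
import numpy as np

def sm_mode1(D, w, M, N, iters=10):
    """Steiglitz-McBride Mode-1 in the frequency domain (sketch).

    D : desired complex response sampled at frequencies w (rad/sample)
    M, N : numerator and denominator orders of H = B/A
    Returns coefficient vectors (b, a), with A(w) = 1 + sum_n a_n e^{-jwn}.
    """
    L = len(w)
    Eb = np.exp(-1j * np.outer(w, np.arange(M + 1)))     # kernels for B
    Ea = np.exp(-1j * np.outer(w, np.arange(1, N + 1)))  # kernels for A
    A = np.ones(L, dtype=complex)                        # initial A_0 = 1
    for _ in range(iters):
        v = 1.0 / np.abs(A)        # square root of the Mode-1 weight 1/|A_i|^2
        # weighted equation error: v * (D*A - B) = v * (D + D*(Ea@a) - Eb@b)
        C = np.hstack([-(D[:, None] * Ea), Eb]) * v[:, None]
        rhs = D * v
        # stack real/imag parts so the least-squares solution is real-valued
        x, *_ = np.linalg.lstsq(np.vstack([C.real, C.imag]),
                                np.concatenate([rhs.real, rhs.imag]),
                                rcond=None)
        a, b = x[:N], x[N:]
        A = 1 + Ea @ a
    return b, a
```

Stacking the real and imaginary parts of the complex system is a standard device to keep the filter coefficients real while fitting a complex response.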
A more formal approach to minimizing consists of using a gradient method (such approaches are often referred to as Newton-like methods). First one needs to compute the Jacobian matrix of , where the -th term of is given by with as defined in [link] . Note that the -th element of is given by
For simplicity one can consider these reduced-form expressions for the independent components of ,
Therefore one can express the Jacobian as follows,
where
Consider the solution error least-squares problem given by
where is the solution error residual vector as defined in [link] and depends on . It can be shown [link] that the gradient of the squared error (namely ) is given by
A necessary condition for a vector to be a local minimizer of is that the gradient be zero at such a vector. With this in mind and combining [link] and [link] in [link] one gets
Solving the system [link] gives
An iteration can be defined as follows (Soewito refers to this expression as the Steiglitz-McBride Mode-2 in frequency domain; compare it to the Mode-1 expression and note the use of instead of )
where matrices and reflect their dependency on current values of and .
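The Jacobian-based update above is essentially a Gauss-Newton step on the solution error. A hedged sketch of one such step follows; the Jacobian columns use the partial derivatives of eps = D - B/A with respect to the denominator and numerator coefficients. Names (`gn_step`, argument order) are assumptions for the example.

```python
import numpy as np

def gn_step(D, w, b, a):
    """One Gauss-Newton step on the solution error eps = D - B/A (sketch).

    b, a are the current coefficient estimates; returns updated (b, a).
    """
    M, N = len(b) - 1, len(a)
    Eb = np.exp(-1j * np.outer(w, np.arange(M + 1)))
    Ea = np.exp(-1j * np.outer(w, np.arange(1, N + 1)))
    A = 1 + Ea @ a
    B = Eb @ b
    eps = D - B / A                    # solution error residual
    # Jacobian of eps with respect to [a, b]
    Ja = (B / A**2)[:, None] * Ea      # d(eps)/d(a_n) = B e^{-jwn} / A^2
    Jb = -Eb / A[:, None]              # d(eps)/d(b_m) = -e^{-jwm} / A
    J = np.hstack([Ja, Jb])
    # solve J @ dx = -eps in the least-squares sense (real-stacked system)
    dx, *_ = np.linalg.lstsq(np.vstack([J.real, J.imag]),
                             -np.concatenate([eps.real, eps.imag]),
                             rcond=None)
    return b + dx[N:], a + dx[:N]
```

Starting from a reasonable initial guess (for instance, an equation-error solution), repeating this step drives the solution error toward a local minimum.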
Atmadji Soewito [link] extended the quasilinearization method of Bellman and Kalaba [link] to the design of IIR filters. To understand his method, consider the first-order Taylor expansion near , given by
Substituting this result into the solution error residual function and simplifying leads to
Equation [link] can be expressed (dropping the use of for simplicity) as
One can recognize the two terms in brackets as and respectively. Therefore [link] can be represented in matrix notation as follows,
with . Therefore one can minimize from [link] with
since all the terms inside the parenthesis in [link] are constant at the -th iteration. In a sense, [link] is similar to [link] , where the desired function is updated from iteration to iteration as in [link] .
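A hedged sketch of the quasilinearization iteration follows. Its hallmark, as noted above, is that the right-hand side ("desired") vector is refreshed each pass using the current B_i/A_i, while the system matrix is built from the same linearization. The initialization (A_0 = 1, B_0 = 0) and all names are assumptions for the example.

```python
import numpy as np

def ql_iir(D, w, M, N, iters=25):
    """Quasilinearization iteration for fitting H = B/A to D (sketch)."""
    L = len(w)
    Eb = np.exp(-1j * np.outer(w, np.arange(M + 1)))
    Ea = np.exp(-1j * np.outer(w, np.arange(1, N + 1)))
    A = np.ones(L, dtype=complex)    # current A_i (A_0 = 1)
    B = np.zeros(L, dtype=complex)   # current B_i (B_0 = 0)
    for _ in range(iters):
        g = B / A**2
        # linearized solution error: rhs - [-g*Ea, Eb/A] @ [a; b]
        rhs = D - B / A + g          # "desired" vector updated each pass
        C = np.hstack([-g[:, None] * Ea, Eb / A[:, None]])
        x, *_ = np.linalg.lstsq(np.vstack([C.real, C.imag]),
                                np.concatenate([rhs.real, rhs.imag]),
                                rcond=None)
        a, b = x[:N], x[N:]
        A = 1 + Ea @ a
        B = Eb @ b
    return b, a
```

On the first pass B_0 = 0, so the step reduces to a plain FIR least-squares fit of the numerator; subsequent passes refine both coefficient vectors jointly.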
It is important to note that any of the three algorithms can be modified to solve a weighted version of the problem.
Taking [link] into account, the following is a summary of the three different updates discussed so far: