Since a linear weighted least squares problem like [link] has a closed-form solution (see Appendix [link] ), it can be solved in one step. The solution is then used to update the weighting function, which is held constant for the next closed-form solve, and so on (as discussed in [link] ).
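As an illustration, the basic IRLS loop can be sketched in a few lines of NumPy. The matrix C, vector d, and function name below are hypothetical stand-ins (C mapping filter coefficients to frequency-response samples on a grid, d holding the desired response); this is a sketch of the iteration, not the exact implementation discussed here.

```python
import numpy as np

def basic_irls(C, d, p, iters=50):
    """Basic IRLS sketch for min_a ||C a - d||_p.

    C (m x n) and d (m,) are hypothetical stand-ins: C maps filter
    coefficients to frequency-response samples on a grid, and d holds
    the desired response samples."""
    a = np.linalg.lstsq(C, d, rcond=None)[0]   # start from the plain LS solution
    for _ in range(iters):
        e = d - C @ a                          # error on the grid
        w = np.abs(e) ** ((p - 2) / 2)         # lp weights: w^2 = |e|^(p-2)
        # closed-form weighted least squares solve (one step)
        a = np.linalg.lstsq(C * w[:, None], w * d, rcond=None)[0]
    return a
```

Each pass alternates exactly as described above: one closed-form weighted solve, then a weight update from the new error.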
One of the earlier works on the use of IRLS methods for approximation was written by Charles Lawson [link] , [link] , [link] , in part motivated by problems lacking a suitable algorithm. He examined a basic form of the IRLS method and extended it by proposing a multiplicative update of the weighting coefficients at each iteration: each weight is scaled by the magnitude of the current error and the weights are renormalized. Lawson's method triggered a number of papers and ideas; however, his method is sensitive to the weights becoming numerically zero, in which case the algorithm must restart. A number of ideas [link] , [link] have been proposed (some by Lawson himself) to prevent or deal with these occurrences, and in general his method is considered somewhat slow.
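The multiplicative update itself is a one-liner: scale each weight by the current error magnitude, then renormalize. The sketch below assumes error samples on a discrete grid; the eps floor is an illustrative guard against the numerically-zero-weight failure mode mentioned above, not part of Lawson's original scheme.

```python
import numpy as np

def lawson_update(w, e, eps=1e-12):
    """Lawson-style multiplicative weight update (sketch): scale each
    weight by the current error magnitude, then renormalize. The eps
    floor is an illustrative guard against weights collapsing to
    numerical zero, the failure mode noted in the text."""
    w_new = np.maximum(w * np.abs(e), eps)
    return w_new / w_new.sum()

# example: weights grow where the error is large
w = lawson_update(np.full(4, 0.25), np.array([1.0, 2.0, 3.0, 4.0]))
```

Without the floor, a weight that reaches exact zero stays zero for every later iteration, which is why restarts are needed in the original method.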
John Rice and Karl Usow [link] , [link] extended Lawson's method to the general problem by developing an algorithm, based on Lawson's, that also updates the weights in a multiplicative form. They used the results of Theorem [link] by Motzkin and Walsh [link] , [link] to guarantee that a solution indeed exists for the problem. Their update combines the previous weights with the current error magnitude, the mixture being controlled by a convergence parameter. The rest of the algorithm works the same way as the basic IRLS method; however, a proper selection of the convergence parameter can allow for strong convergence (and a particular choice of it recovers the basic IRLS algorithm).
Another approach to solving [link] consists of a partial updating strategy on the filter coefficients rather than on the weights, using a temporary coefficient vector given by the weighted least squares solution

â_{k+1} = arg min_a Σ_ω w_k(ω) |e(ω; a)|² , with w_k(ω) = |e_k(ω)|^{p−2}

The filter coefficients after each iteration are then calculated by

a_{k+1} = λ â_{k+1} + (1 − λ) a_k

where λ is a convergence parameter (with 0 ≤ λ ≤ 1). This approach is known as the Karlovitz method [link] , and it has been claimed to converge to the global optimal solution for even values of p such that 4 ≤ p < ∞. In practice, however, several convergence problems have been found even under these assumptions. One drawback is that the convergence parameter λ has to be optimized at each iteration via an expensive line search, so the overall execution time becomes rather large.
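A single Karlovitz-style iteration might be sketched as follows; the crude grid search over λ stands in for the expensive line search noted above, and all names are illustrative stand-ins for the design quantities.

```python
import numpy as np

def lp_error(C, d, a, p):
    """Sum of |error|^p over the grid (illustrative objective)."""
    return np.sum(np.abs(d - C @ a) ** p)

def karlovitz_step(C, d, a, p):
    """One Karlovitz-style iteration (sketch): weighted LS solve with
    w = |e|^(p-2), then a partial update a <- lam*a_hat + (1-lam)*a,
    with lam picked by a crude grid search standing in for the
    expensive line search mentioned in the text."""
    e = d - C @ a
    w = np.abs(e) ** ((p - 2) / 2)         # square root of |e|^(p-2)
    a_hat = np.linalg.lstsq(C * w[:, None], w * d, rcond=None)[0]
    grid = np.linspace(0.0, 1.0, 101)      # candidate values of lam
    lam = min(grid, key=lambda t: lp_error(C, d, t * a_hat + (1 - t) * a, p))
    return lam * a_hat + (1 - lam) * a
```

Because the grid includes λ = 0 (keep the old coefficients), the lp error can never increase across a step; the cost is the many objective evaluations per iteration.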
S. W. Kahng [link] developed an algorithm based on the Newton-Raphson method that uses

λ = 1 / (p − 1)

to get

a_{k+1} = (1 / (p − 1)) â_{k+1} + ((p − 2) / (p − 1)) a_k

This selection of λ is based upon Newton's method for minimizing the lp error (the same result was derived independently by Fletcher, Grant and Hebden [link] ). The rest of the algorithm follows Karlovitz's approach; however, since λ is fixed, there is no need to perform the line search for its best value. Since Kahng's method is based on Newton's method, it converges quadratically to the optimal solution. Kahng proved that his method converges for all cases of p and for any problem (at least in theory). It can be seen that Kahng's method is a particular case of Karlovitz's algorithm, with λ as defined in [link] . Newton-Raphson-based algorithms are not guaranteed to converge to the optimal solution unless they start somewhat close to it, since they require knowing and inverting the Hessian matrix of the objective function (which must be positive definite).
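One Kahng-style iteration can be sketched as follows; with λ fixed at 1/(p − 1) the line search disappears. As before, C and d are hypothetical stand-ins for the design equations.

```python
import numpy as np

def kahng_step(C, d, a, p):
    """One Kahng-style iteration (sketch): the same weighted LS solve
    as in Karlovitz's method, but with the fixed Newton-derived mixing
    parameter lam = 1/(p-1), so no line search is needed."""
    e = d - C @ a
    w = np.abs(e) ** ((p - 2) / 2)               # lp weights: w^2 = |e|^(p-2)
    a_hat = np.linalg.lstsq(C * w[:, None], w * d, rcond=None)[0]
    lam = 1.0 / (p - 1)                          # fixed Newton step size
    return lam * a_hat + (1 - lam) * a
```

For p = 2 this reduces to a full update (λ = 1), i.e. the plain least squares solve; larger p gives heavier damping.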
Burrus, Barreto and Selesnick developed a method [link] , [link] , [link] that combines the powerful quadratic convergence of Newton's method with the robust initial convergence of the basic IRLS method, thus overcoming both the initial sensitivity of Newton-based algorithms and the slow linear convergence of Lawson-based methods. To accelerate initial convergence, their approach to solving [link] gradually increases the working value of p: at each iteration p grows by a factor σ (a convergence parameter with σ > 1), capped so as to satisfy

p_k = min(p_des, σ · p_{k−1})

where p_des corresponds to the desired norm. The implementation of each iteration follows Karlovitz's method using the particular selection of λ given by [link] .
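The p-ramping strategy can be sketched as a loop that applies the cap p_k = min(p_des, σ·p_{k−1}) and takes a damped weighted-LS step (λ = 1/(p − 1)) at each stage. Names and default values below are illustrative, not the authors' reference implementation.

```python
import numpy as np

def homotopy_irls(C, d, p_des, sigma=1.5, iters=60):
    """Sketch of the p-ramping IRLS idea: start at p = 2 and raise p
    by a factor sigma each iteration, capped at the target norm via
    p_k = min(p_des, sigma * p_{k-1}), taking a damped step with
    lam = 1/(p-1) at each stage."""
    a = np.linalg.lstsq(C, d, rcond=None)[0]   # l2 (p = 2) starting point
    p = 2.0
    for _ in range(iters):
        p = min(p_des, sigma * p)              # ramp the norm upward
        e = d - C @ a
        w = np.abs(e) ** ((p - 2) / 2)
        a_hat = np.linalg.lstsq(C * w[:, None], w * d, rcond=None)[0]
        lam = 1.0 / (p - 1)                    # fixed damping at this p
        a = lam * a_hat + (1 - lam) * a
    return a
```

A smaller σ tracks the continuum of solutions more closely at the cost of more iterations spent before reaching p_des.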
It is worth noting that the method outlined above combines several ideas into a powerful approach. By not solving the desired lp problem from the first iteration, one avoids the potential issues of Newton-based methods, whose convergence is only guaranteed within a radius of convergence. It is well known that for p between 2 and ∞ there exists a continuum of lp solutions (as shown in [link] ). By slowly increasing p from iteration to iteration, one hopes to follow this continuum of solutions from the l2 solution towards the desired lp one. By choosing a reasonable σ, the method need spend only one iteration at any given p and still remain close enough to the optimal path. Once the algorithm reaches a neighborhood of the desired p, it can be allowed to iterate at that p in order to converge to the optimal solution. This process is analogous to homotopy , a commonly used family of optimization methods [link] .
While least-squares (l2) and Chebyshev (l∞) designs offer meaningful approaches to filter design, the Constrained Least Squares (CLS) problem offers an interesting tradeoff between the two [link] . In the context of filter design, the CLS problem appears to have been first presented by John Adams [link] in 1991. The problem Adams posed is a Quadratic Programming (QP) problem, well suited for off-the-shelf QP tools such as those based on Lagrange multiplier theory [link] . However, Adams posed the problem in such a way that a transition band is required (as shown in [link] ). Burrus et al. presented a formulation [link] , [link] , [link] in which only a transition frequency is required; the transition band is induced : it does exist, but it is not specified (it adjusts itself optimally according to the constraint specifications). The method by Burrus et al. is based on Lagrange multipliers and the Karush-Kuhn-Tucker (KKT) conditions.
An alternative to the KKT-based method mentioned above is the use of IRLS methods in which a suitable weighting function serves as the constraining function over frequencies that exceed the constraint tolerance; elsewhere no weights are used, effectively forcing a least-squares solution. While this idea was suggested by Burrus et al., one of the main contributions of this work is a thorough investigation of the approach, as well as proper documentation of numerical results, theoretical findings and code.
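A minimal sketch of this constraint-by-weighting idea follows: frequencies whose error exceeds the tolerance get their weight boosted, while all others revert to unit weight (plain least squares). The boost factor, the cap, and the multiplicative update are illustrative choices, not the exact scheme in the referenced work.

```python
import numpy as np

def cls_irls(C, d, tol, iters=50, boost=10.0, w_max=1e8):
    """Sketch of constrained LS via IRLS weighting: where the error
    magnitude exceeds tol, multiply the weight by boost (capped at
    w_max); elsewhere reset the weight to 1, i.e. ordinary least
    squares. boost and w_max are illustrative, hypothetical choices."""
    w = np.ones(len(d))
    a = np.linalg.lstsq(C, d, rcond=None)[0]   # unconstrained LS start
    for _ in range(iters):
        e = d - C @ a
        w = np.where(np.abs(e) > tol,          # punish constraint violations
                     np.minimum(w * boost, w_max),
                     1.0)                      # otherwise: unweighted LS
        sw = np.sqrt(w)                        # weighted LS solve
        a = np.linalg.lstsq(C * sw[:, None], sw * d, rcond=None)[0]
    return a
```

The cap w_max keeps the weighted system numerically well conditioned even when a constraint remains active for many iterations.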