Barreto showed in [link] that the modified version of Kahng's method (or BBS) typically converges faster than the RUL algorithm. However, this approach presents some peculiar problems that depend on the transition bandwidth: for certain bandwidth values, the BBS method produces an ill-conditioned weight matrix that causes the error to increase dramatically after a few iterations, as illustrated in [link].
Two facts can be observed in the examples in [link]: for this particular bandwidth the error increased slightly after the fifth and eleventh iterations, and dramatically after the sixteenth. It is also worth noting that after each such increase the error again decreased quadratically, and that at a certain point the error became flat, reaching the numerical accuracy limits of the digital system.
The effects of different transition bandwidths were studied to determine whether a relationship between the bandwidth and the error increase could be established. [link] shows the error for a range of bandwidth values. Some particular bandwidths clearly cause the algorithm to produce a very large error.
Our studies (as well as previous work by J. A. Barreto [link]) indicate that this error explosion occurs only for a small range of bandwidth specifications; under most circumstances the BBS method converges quickly to the desired solution. At this point, however, it is not understood what causes the error increase, and therefore the event cannot be anticipated. In order to avoid this problem, I propose the use of an adaptive scheme that modifies the BBS step. As p increases, the step from the current l_p guess to the next one also increases, as described in [link]. In other words, at the i-th iteration one approximates the l_p solution for the current exponent (as long as the algorithm has not yet reached the desired p); at the next iteration one approximates the solution for the next, larger exponent. These two solutions may lie far enough apart that the descent step leaves the new guess too far from the actual solution, and the iteration becomes unstable.
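The p-stepping idea can be illustrated with a short sketch. The geometric update rule below is the form commonly used in BBS-type IRLS methods (the exact schedule in the referenced work may differ); it shows how a larger step parameter produces fewer but larger jumps between successive l_p targets:

```python
# Sketch of geometric p-stepping in BBS-type IRLS methods.
# Assumed form: p_k = min(p_des, sigma * p_{k-1}); the referenced
# implementation may differ in detail.

def p_schedule(p_des, sigma, p0=2.0):
    """Return the sequence of l_p exponents visited on the way to p_des."""
    ps = [p0]
    while ps[-1] < p_des:
        ps.append(min(p_des, sigma * ps[-1]))
    return ps

small_steps = p_schedule(p_des=100.0, sigma=1.1)   # many small jumps
big_steps   = p_schedule(p_des=100.0, sigma=1.75)  # few large jumps
print(len(small_steps), len(big_steps))
```

With a larger step parameter each jump between consecutive l_p targets is bigger, which is exactly the situation in which one intermediate solution can land too far from the next target.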
The conclusions derived above suggest the possibility of using an adaptive algorithm [link] that changes the value of the step parameter so that the error always decreases. This idea was implemented by calculating temporary new weight and filter coefficient vectors, which do not become the updated versions unless their resulting error is smaller than the previous one. If this is not the case, the algorithm "tries" two new values of the step parameter, one scaled down and one scaled up by an update factor. The resulting error for each attempt is calculated, and the step parameter is set to whichever value produced the smaller error. The error for this new step parameter is then compared to the error of the non-updated weights and coefficients; if it is smaller, the vectors are updated, and otherwise the step parameter is updated again. The modified adaptive IRLS algorithm can be summarized as follows:
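Since the algorithm listing itself did not survive in this copy, the following Python sketch shows one way the adaptive step described above can be organized. All names here are hypothetical (they are not from the source), and the inner IRLS update is replaced by a toy stand-in; only the tentative-update/rescale logic is the point:

```python
# Hypothetical sketch of the adaptive step (not the source's exact listing):
# a tentative update is computed first and committed only if it reduces the
# error; otherwise the step parameter sigma is rescaled by the factor lam.

def adaptive_irls_step(state, p, sigma, lam, update, error, max_tries=50):
    """Try to advance from exponent p to p*sigma; rescale sigma up or down
    (by the update factor lam) until the error actually decreases."""
    e_old = error(state, p)
    for _ in range(max_tries):
        p_new = p * sigma
        cand = update(state, p_new)              # tentative update only
        if error(cand, p_new) < e_old:
            return cand, p_new, sigma            # commit
        down, up = sigma / lam, sigma * lam      # try both rescalings
        e_down = error(update(state, p * down), p * down)
        e_up = error(update(state, p * up), p * up)
        sigma = down if e_down <= e_up else up   # keep the better one
    raise RuntimeError("no acceptable step found")

# Toy stand-ins for the IRLS machinery: the "solution" for exponent q is q
# itself; one update contracts toward it when the jump is small but
# overshoots when the jump is large (mimicking the BBS failure mode).
def toy_update(x, q):
    return q + 0.6 * (x - q) * abs(x - q)

def toy_error(x, q):
    return abs(x - q)

state, p, sigma = adaptive_irls_step(2.2, 2.0, 1.75, 1.5, toy_update, toy_error)
print(round(sigma, 4))  # the step parameter was reduced below its initial 1.75
```

In this toy run the initial step parameter of 1.75 is rejected (the overshoot grows the error) and a smaller value is accepted, mirroring the behavior described in the text.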
The algorithm described above changes the value of the step parameter whenever it would cause a large error. The step parameter is updated as many times as necessary without changing the weights, the filter coefficients, or the exponent p. If a workable value of the step parameter exists, the algorithm finds it and continues with this new value until another update becomes necessary.
The algorithm described above was implemented for several combinations of exponent p and transition bandwidth; in every case the new algorithm converged faster than the BBS algorithm (except for specifications where the error never increases). The results are shown in [link].a for the specifications from [link]. Whereas the BBS method for this particular case produces a large error after the sixteenth iteration, the adaptive method converged in fewer than ten iterations.
[link].b illustrates the change in the step parameter per iteration in the adaptive method. The error stops decreasing after the fifth iteration (where the BBS method introduces the large error); however, the adaptive algorithm adjusts the step parameter so that the error continues to decrease. The algorithm reduced the step parameter from its initial value of 1.75 to a final value of 1.4175, at the expense of only one additional iteration.
One result worth noting is the relationship between the l_2 and l_∞ solutions and how l_p designs compare to them. [link] shows a comparison of designs for a length-21 Type-I linear-phase lowpass FIR filter with a fixed transition band. The curve shows the l_2 versus l_∞ errors (denoted here ε_2 and ε_∞) obtained by sweeping the exponent p over a range of values (Matlab's firls and firpm functions were used to design the l_2 and l_∞ filters, respectively). Note the very small decrease in ε_∞ once p reaches 100. The curve suggests that a better compromise between ε_2 and ε_∞ can be reached by choosing moderate values of p; indeed, one can concentrate on relatively small values of p, for which no numerical complications arise and convergence is reached in a few iterations.
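The two endpoints of this trade-off can be reproduced with standard tools. The sketch below uses SciPy's firls and remez functions (counterparts of Matlab's firls and firpm) for a length-21 lowpass design; the band edges 0.2 and 0.24 are illustrative assumptions, since the exact specification did not survive in this copy:

```python
import numpy as np
from scipy.signal import firls, remez, freqz

numtaps = 21
fp, fst = 0.2, 0.24              # assumed band edges (normalized, Nyquist = 1)

h_ls = firls(numtaps, [0, fp, fst, 1], [1, 1, 0, 0])      # l2-optimal design
h_pm = remez(numtaps, [0, fp, fst, 1], [1, 0], fs=2)      # linf-optimal design

def band_errors(h):
    """Return (l2, linf) approximation errors over pass- and stopband."""
    w, H = freqz(h, worN=4096)
    f = w / np.pi
    mask = (f <= fp) | (f >= fst)              # ignore the transition band
    desired = (f <= fp).astype(float)
    err = np.abs(np.abs(H) - desired)[mask]
    return np.sqrt(np.mean(err**2)), err.max()

e2_ls, einf_ls = band_errors(h_ls)
e2_pm, einf_pm = band_errors(h_pm)
print(e2_ls <= e2_pm, einf_pm <= einf_ls)    # each design wins its own norm
```

The least-squares design attains the smaller l_2 error and the equiripple design the smaller l_∞ error; l_p designs for intermediate p trace out the curve between these two extremes.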