
Barreto showed in [link] that the modified version of Kahng's method (or BBS) typically converges faster than the RUL algorithm. However, this approach presents some peculiar problems that depend on the transition bandwidth β. For some particular values of β, the BBS method results in an ill-posed weight matrix that causes the l_p error to increase dramatically after a few iterations, as illustrated in [link] (where f = ω/2π).

Figure: Error jumps on IRLS methods. Two plots of the l_60 error versus iteration: the upper plot corresponds to f_p = 0.2, f_s = 0.25 and the lower plot to f_p = 0.2, f_s = 0.248.

Two facts can be derived from the examples in [link]: for this particular bandwidth the error increased slightly after the fifth and eleventh iterations, and dramatically after the sixteenth. It is also worth noting that after such an increase the error started to decrease quadratically and that, at a certain point, the error became flat (thus reaching the numerical accuracy limits of the digital system).

The effects of different values of σ were studied to determine whether a relationship between σ and the error increase could be established. [link] shows the l_p error for different values of β and for σ = 1.7. It can be seen that some particular bandwidths cause the algorithm to produce a very large error.

Figure: Relationship between bandwidth and error jumps. Three-dimensional plot of the l_p error versus iteration and bandwidth for f_p = 0.2; the most prominent error jumps occur at bandwidths of 0.01 and 0.05.

Our studies (as well as previous work by J. A. Barreto [link]) demonstrate that this error explosion occurs only for a small range of bandwidth specifications. Under most circumstances the BBS method exhibits fast convergence to the desired solution. However, at this point it is not understood what causes the error increase, and therefore this event cannot be anticipated. In order to avoid this problem, I propose the use of an adaptive scheme that modifies the BBS step. As p increases, the step from a current l_p guess to the next also increases, as described in [link]. In other words, at the i-th iteration one approximates the l_{2σ^i} solution (as long as the algorithm has not yet reached the desired p); at the next iteration one approximates the l_{2σ^(i+1)} solution. There is always a possibility that these two solutions lie far enough apart that the descent step taken by the algorithm leaves the l_{2σ^(i+1)} guess too far from the actual l_{2σ^(i+1)} solution. This is better illustrated in [link].
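To make the growth of the exponent concrete, the short sketch below prints the sequence of targets 2σ, 2σ^2, 2σ^3, ... capped at the desired p. This is only an illustration of the rule stated above; the function name and the chosen parameter values are assumptions, not code from the original work.

```python
# A small sketch of the exponent schedule implied by the rule above:
# each pass targets p_i = 2*sigma**i until the desired p is reached.
def p_schedule(sigma, p_des, n_iter):
    ps, p = [], 2.0                  # the algorithm starts from the l_2 solution
    for _ in range(n_iter):
        p = min(p_des, p * sigma)    # move from l_{2 sigma^i} to l_{2 sigma^(i+1)}
        ps.append(p)
    return ps

print(p_schedule(sigma=1.7, p_des=60, n_iter=8))
# ~ [3.4, 5.8, 9.8, 16.7, 28.4, 48.3, 60, 60]: with a large sigma the
# consecutive targets quickly move far apart, which is the "step too long"
# situation sketched in the figure below.
```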

Figure: A step too long for IRLS methods. A curve with four points marked along it, from left to right: l_2, l_{2σ^k}, l_{2σ^(k+1)}, and l_{p_des}; an arrow at l_{2σ^k} indicates the descent step.

The conclusions derived above suggest the possibility of using an adaptive algorithm [link] that changes the value of σ so that the error always decreases. This idea was implemented by calculating temporary new weight and filter coefficient vectors that do not become the updated versions unless their resulting error is smaller than the previous one. If this is not the case, the algorithm "tries" two values of σ, namely

σ_L = σ(1 − δ) and σ_H = σ(1 + δ)

(where δ is an updating variable). The resulting errors for each attempt are calculated, and σ is updated to the value that produced the smaller error. The error obtained with this new σ is compared to the error of the non-updated weights and coefficients; if the new σ produces a smaller error, those vectors are updated, otherwise another update of σ is performed. The modified adaptive IRLS algorithm can be summarized as follows (a code sketch of this loop is given after the list),

  1. Find the unweighted approximation a_0 = (C^T C)^{-1} C^T D and use p_0 = 2σ (with 1 ≤ σ ≤ 2)
  2. Iteratively solve [link] and [link] using λ_i = 1/(p_i − 1) and find the resulting error ε_i for the i-th iteration
  3. If ε_i ≥ ε_{i−1},
    • Calculate [link]
    • Select the smaller of ε_{σ_L} and ε_{σ_H} and compare it with ε_i, repeating until a value of σ is found that results in a decreasing error
    Otherwise iterate as in the BBS algorithm.
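A minimal sketch of this loop, in Python with NumPy, is given below under stated assumptions: the problem is posed as a weighted least-squares fit of a desired response D by C a on a dense frequency grid, the weights are the usual l_p reweighting |e|^(p−2) (normalized, with a small floor added purely as a numerical guard), and the partial-update factor is λ_i = 1/(p_i − 1). The helper names (weighted_ls, irls_step, adaptive_irls), the grid, the trial limit, and the example specifications are illustrative assumptions, not code from the original work.

```python
import numpy as np

def weighted_ls(C, D, w):
    """Solve min_a sum_k w_k (D_k - (C a)_k)^2 via the normal equations."""
    CW = C * w[:, None]
    return np.linalg.solve(CW.T @ C, CW.T @ D)

def lp_err(C, D, a, p):
    """l_p norm of the approximation error on the grid."""
    return np.sum(np.abs(D - C @ a) ** p) ** (1.0 / p)

def irls_step(C, D, a, p, sigma, p_des):
    """One BBS-style step: raise the exponent, reweight, partial update."""
    p_new = min(p_des, max(p, sigma * p))         # next target exponent (never decreasing)
    e = np.abs(D - C @ a)
    w = (e / (e.max() + 1e-300)) ** (p_new - 2.0) + 1e-12   # normalized l_p reweighting
    a_hat = weighted_ls(C, D, w)
    lam = 1.0 / (p_new - 1.0)                     # Kahng's partial-update factor
    return lam * a_hat + (1.0 - lam) * a, p_new

def adaptive_irls(C, D, p_des, sigma=1.75, delta=0.1, n_iter=25):
    a = weighted_ls(C, D, np.ones(len(D)))        # step 1: unweighted (l_2) start
    p = 2.0
    err = lp_err(C, D, a, p_des)
    for _ in range(n_iter):                       # step 2: BBS iterations
        a_try, p_try = irls_step(C, D, a, p, sigma, p_des)
        err_try = lp_err(C, D, a_try, p_des)
        trials = 0
        while err_try >= err and trials < 20:     # step 3: error did not decrease
            cands = []
            for s in (sigma * (1 - delta), sigma * (1 + delta)):   # try sigma_L, sigma_H
                a_s, p_s = irls_step(C, D, a, p, s, p_des)
                cands.append((lp_err(C, D, a_s, p_des), s, a_s, p_s))
            err_try, sigma, a_try, p_try = min(cands, key=lambda c: c[0])
            trials += 1                           # bounded trials keep the sketch finite
        a, p, err = a_try, p_try, err_try         # accept the update
    return a

# Illustrative use: length-21 Type-I lowpass with f_p = 0.2 and f_s = 0.25,
# amplitude model A(f) = sum_n a_n cos(2 pi f n) on a dense grid.
f = np.concatenate([np.linspace(0.0, 0.2, 81), np.linspace(0.25, 0.5, 101)])
C = np.cos(2 * np.pi * np.outer(f, np.arange(11)))
D = (f <= 0.2).astype(float)
a = adaptive_irls(C, D, p_des=60)
```

Note that the weights, filter coefficients, and p are only committed at the end of each outer iteration; inside the while loop only σ is re-tried, mirroring the description above.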
Figure: FIR design example using the adaptive method. a) l_p error obtained with the adaptive method (f_p = 0.2, β = 0.048, K_α = 1.75); b) change of σ per iteration.

The algorithm described above changes the value of σ whenever the current value would cause the algorithm to produce a large error. The value of σ is updated as many times as necessary without changing the weights, the filter coefficients, or p. If an optimal value of σ exists, the algorithm will find it and continue with this new value until another update of σ becomes necessary.

This adaptive algorithm was implemented for several combinations of σ and β; in all cases it converged faster than the BBS algorithm (unless the values of σ and β were such that the error never increased). The results are shown in [link].a for the specifications from [link]. Whereas the BBS method for this particular case produces a large error after the sixteenth iteration, the adaptive method converged in fewer than ten iterations.

[link].b illustrates the change of σ per iteration in the adaptive method, using an update factor of δ = 0.1. The l_p error stops decreasing after the fifth iteration (where the BBS method introduces the large error); however, the adaptive algorithm adjusts the value of σ so that the l_p error continues to decrease. The algorithm decreased the initial value of σ from 1.75 to its final value of 1.4175, at the expense of only one additional iteration with σ = 1.575; that is, two downward updates: 1.75 × (1 − 0.1) = 1.575 and 1.575 × (1 − 0.1) = 1.4175.

Figure: Relationship between l_2 and l_∞ errors for l_p FIR filter design. The curve plots ε_∞ (y-axis) against ε_2 (x-axis).

One result worth noting is the relationship between the l_2 and l_∞ solutions and how they compare to l_p designs. [link] shows a comparison of designs for a length-21 Type-I linear-phase lowpass FIR filter with transition band defined by f = {0.2, 0.24}. The curve shows the l_2 versus l_∞ errors (namely ε_2 and ε_∞); the values of p used to generate this curve were p = {2, 2.2, 2.5, 3, 4, 5, 7, 10, 15, 20, 30, 50, 60, 100, 150, 200, 400, ∞} (Matlab's firls and firpm functions were used to design the l_2 and l_∞ filters respectively). Note the very small decrease in ε_∞ once p reaches 100. The curve suggests that a better compromise between ε_2 and ε_∞ can be reached by choosing 2 < p < ∞. Furthermore, to get better results one can concentrate on values between p = 5 and p = 20; fortunately, for such low values of p no numerical complications arise and convergence is reached in a few iterations.
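The two endpoints of such a trade-off curve can be reproduced approximately with standard tools. The sketch below uses SciPy's firls and remez functions as stand-ins for Matlab's firls and firpm; the grid density and the particular definitions of ε_2 and ε_∞ over the pass and stop bands are assumptions made for illustration, not the exact setup used to produce the figure.

```python
import numpy as np
from scipy import signal

# Length-21 Type-I lowpass with transition band f = {0.2, 0.24}
# (frequencies in cycles/sample).
N = 21
bands = [0.0, 0.2, 0.24, 0.5]

h_l2 = signal.firls(N, bands, [1, 1, 0, 0], fs=1.0)    # least-squares (l_2) design
h_linf = signal.remez(N, bands, [1, 0], fs=1.0)        # equiripple (l_inf) design

# Evaluate both designs over the pass band and stop band only.
f = np.concatenate([np.linspace(0.0, 0.2, 401), np.linspace(0.24, 0.5, 521)])
D = (f <= 0.2).astype(float)

def band_errors(h):
    _, H = signal.freqz(h, worN=2 * np.pi * f)          # response at the grid frequencies
    e = np.abs(H) - D
    return np.sqrt(np.mean(e ** 2)), np.max(np.abs(e))  # (eps_2, eps_inf)

print("l_2 design:   eps_2 = %.4f, eps_inf = %.4f" % band_errors(h_l2))
print("l_inf design: eps_2 = %.4f, eps_inf = %.4f" % band_errors(h_linf))
# Designing the intermediate l_p filters (e.g. with an IRLS routine like the
# sketch earlier) and plotting eps_inf against eps_2 traces out the curve.
```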

Source:  OpenStax, Iterative design of l_p digital filters. OpenStax CNX. Dec 07, 2011 Download for free at http://cnx.org/content/col11383/1.1