In some applications, the effects of phase are not a necessary factor to consider when designing a filter; for these applications, control of the filter's magnitude response is the designer's priority. In such cases it is better not to specify a phase explicitly, so that the optimization algorithm can search for the best filter that approximates the specified magnitude without also being constrained to match a phase response.
The magnitude approximation problem can be formulated as follows:

$$\min_{h}\;\big\|\,D(\omega)-|H(\omega;h)|\,\big\|_{p}$$

where $D(\omega)$ is the desired magnitude response and $H(\omega;h)$ is the frequency response of the filter with coefficient vector $h$.
Unfortunately, the second term inside the norm (namely the absolute value function) is not differentiable when its argument is zero. Although one could devise ways to work around this problem, I propose the use of a different design criterion, namely the approximation of a desired magnitude squared. The resulting problem is

$$\min_{h}\;\big\|\,D(\omega)^{2}-|H(\omega;h)|^{2}\,\big\|_{p}$$
The autocorrelation $r(n)$ of a causal length-$N$ FIR filter $h(n)$ is given by

$$r(n) = h(n) * h(-n) = \sum_{k=-\infty}^{\infty} h(k)\,h(n+k)$$

which is nonzero only for $-(N-1) \leq n \leq N-1$.
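As a concrete illustration, here is a minimal NumPy sketch (the function name is mine, not from the text), assuming real coefficients: the autocorrelation is simply the convolution of $h(n)$ with its time reversal.

```python
import numpy as np

def fir_autocorrelation(h):
    """One-sided autocorrelation r(0), ..., r(N-1) of a real FIR filter h.

    r(n) = sum_k h(k) h(n+k); for real h, r(-n) = r(n), so the
    one-sided sequence carries all the information.
    """
    h = np.asarray(h, dtype=float)
    full = np.convolve(h, h[::-1])       # h(n) * h(-n), lags -(N-1), ..., N-1
    return full[len(h) - 1:]             # keep lags n = 0, ..., N-1

h = np.array([0.25, 0.5, 0.25])          # a short example filter
print(fir_autocorrelation(h))            # [0.375  0.25   0.0625]
```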
The Fourier transform of the autocorrelation $r(n)$ is known as the Power Spectral Density function [link] (or simply the PSD), and is defined as follows,

$$R(\omega) = \sum_{n=-(N-1)}^{N-1} r(n)\,e^{-j\omega n}$$
From the properties of the Fourier transform [link] one can show that there exists a frequency-domain relationship between $h(n)$ and $r(n)$: since $r(n) = h(n) * h(-n)$ and, for real $h(n)$, the transform of $h(-n)$ is $H^{*}(\omega)$, it follows that

$$R(\omega) = H(\omega)\,H^{*}(\omega) = |H(\omega)|^{2}$$
This relationship suggests a way to design magnitude-squared filters, namely by using the filter's autocorrelation coefficients instead of the filter coefficients themselves. In this way, one can avoid the use of the non-differentiable magnitude response.
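As a quick sanity check of this relationship, one can compare the transform of the autocorrelation against the squared magnitude of the filter's frequency response on an FFT grid; a minimal sketch, assuming an arbitrary real test filter:

```python
import numpy as np

# Numerical check of R(w) = |H(w)|^2 for an arbitrary real test filter.
rng = np.random.default_rng(0)
h = rng.standard_normal(8)
r = np.convolve(h, h[::-1])              # autocorrelation, lags -(N-1), ..., N-1

nfft = 512
H = np.fft.fft(h, nfft)
# The FFT of the lag array only differs from R(w) by a linear-phase
# factor from the index shift; since R(w) >= 0, |FFT| recovers R(w).
R = np.abs(np.fft.fft(r, nfft))
print(np.allclose(R, np.abs(H) ** 2))    # True up to rounding
```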
An important property to note at this point is that, since the filter coefficients are real, one can see from [link] that the autocorrelation function is symmetric, $r(-n) = r(n)$; thus it is sufficient to consider its last $N$ values. As a result, the PSD can be written as

$$R(\omega) = r(0) + 2\sum_{n=1}^{N-1} r(n)\cos(\omega n)$$
in a similar way to the linear phase problem.
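The cosine-sum form is convenient because it evaluates $R(\omega)$ directly from the $N$ one-sided autocorrelation values. A small sketch (the helper name is hypothetical) that checks it against $|H(\omega)|^{2}$ on a frequency grid:

```python
import numpy as np

def psd_from_autocorrelation(r_onesided, w):
    """Evaluate R(w) = r(0) + 2 * sum_{n=1}^{N-1} r(n) cos(w n) on a grid w."""
    n = np.arange(len(r_onesided))
    weights = np.concatenate(([1.0], 2.0 * np.ones(len(r_onesided) - 1)))
    return np.cos(np.outer(w, n)) @ (weights * r_onesided)

h = np.array([0.25, 0.5, 0.25])
r = np.convolve(h, h[::-1])[len(h) - 1:]               # r(0), ..., r(N-1)
w = np.linspace(0, np.pi, 64)
H = np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h   # H(w) = sum_k h(k) e^{-jwk}
print(np.allclose(psd_from_autocorrelation(r, w), np.abs(H) ** 2))  # True
```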
The symmetry property introduced above allows for the use of the linear phase algorithm of [link] to obtain the autocorrelation coefficients of $h(n)$. However, there is an important step missing in this discussion: how to obtain the filter coefficients from their autocorrelation. To achieve this goal, one can follow a procedure known as Spectral Factorization. The objective is to use the autocorrelation coefficients $r(n)$, instead of the filter coefficients $h(n)$, as the optimization variables. The variable transformation is done using [link], which is not a one-to-one transformation; as a consequence, a condition must hold for a vector $r$ to be a valid autocorrelation vector of a filter. This is summarized in [link] as the spectral factorization theorem, which states that $r(n)$ is the autocorrelation function of a filter $h(n)$ if and only if $R(\omega) \geq 0$ for all $\omega$. This turns out to be a necessary and sufficient condition [link] for the existence of $h(n)$. Once the autocorrelation vector $r$ is found using existing robust interior-point algorithms, the filter coefficients can be calculated via spectral factorization techniques.
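Below is a minimal root-based sketch of spectral factorization (not necessarily the technique referenced above): since the zeros of $R(z)$ occur in $(z_{0}, 1/z_{0})$ pairs, keeping the zeros inside the unit circle yields a minimum-phase factor. This assumes $R(\omega) > 0$ strictly (no zeros on the unit circle); for long filters, cepstral methods are usually better conditioned.

```python
import numpy as np

def spectral_factor(r_onesided):
    """Minimum-phase spectral factor h of a valid autocorrelation r(0..N-1).

    Root-based sketch: the zeros of R(z) come in (z0, 1/z0) pairs, so
    keeping those inside the unit circle gives a minimum-phase factor.
    Assumes R(w) > 0 strictly (no zeros on |z| = 1).
    """
    r_full = np.concatenate((r_onesided[::-1], r_onesided[1:]))  # lags -(N-1), ..., N-1
    roots = np.roots(r_full)                                     # zeros of z^(N-1) R(z)
    h = np.real(np.poly(roots[np.abs(roots) < 1.0]))             # monic, minimum phase
    h *= np.sqrt(r_onesided[0] / np.sum(h ** 2))   # rescale so sum h(k)^2 = r(0)
    return h                                       # note: -h is an equally valid factor

# Round trip: the recovered filter reproduces the autocorrelation.
h0 = np.array([1.0, 0.4, -0.3])
r = np.convolve(h0, h0[::-1])[len(h0) - 1:]
h = spectral_factor(r)
print(np.allclose(np.convolve(h, h[::-1])[len(h) - 1:], r))      # True
```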
Assuming a valid vector $r$ can be found for a particular filter $h$, the problem presented in [link] can be rewritten as

$$\min_{r}\;\big\|\,D(\omega)^{2}-R(\omega)\,\big\|_{p} \qquad \text{subject to} \qquad R(\omega) \geq 0, \;\; \forall\,\omega \in [0,\pi]$$
Since $R(\omega)$ is linear in the vector $r$, the existence condition $R(\omega) \geq 0$ in [link] is a linear inequality in $r$; likewise, bounding the error in epigraph form, $|D(\omega)^{2} - R(\omega)| \leq \delta$, gives a pair of linear inequalities in $r$ for each $\omega$. The constraint set is therefore convex in $r$. Thus the change of variable transforms a nonconvex optimization problem in $h$ into a convex problem in $r$.
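To illustrate the convexity argument, the Chebyshev ($l_{\infty}$) version of the magnitude-squared problem can be discretized on a frequency grid and solved as a linear program in $r$. The lowpass specification below is hypothetical, and enforcing $R(\omega) \geq 0$ only on the grid is an approximation of the semi-infinite constraint; the filter itself would then be recovered by spectral factorization as sketched above.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical lowpass spec: D(w)^2 = 1 on [0, 0.4*pi], 0 on [0.5*pi, pi].
N = 15                                    # number of one-sided autocorrelation values
w = np.linspace(0, np.pi, 400)
keep = (w <= 0.4 * np.pi) | (w >= 0.5 * np.pi)   # leave the transition band unconstrained
w = w[keep]
D2 = (w <= 0.4 * np.pi).astype(float)     # desired magnitude squared on the grid

# R(w_i) = C[i] @ r, from R(w) = r(0) + 2*sum_n r(n) cos(w n).
C = np.cos(np.outer(w, np.arange(N)))
C[:, 1:] *= 2.0

M = len(w)
obj = np.zeros(N + 1)                     # variables x = [r(0), ..., r(N-1), delta]
obj[-1] = 1.0                             # minimize delta
A = np.block([[ C, -np.ones((M, 1))],     #  R(w) - delta <= D^2
              [-C, -np.ones((M, 1))],     # -R(w) - delta <= -D^2
              [-C,  np.zeros((M, 1))]])   #  R(w) >= 0 on the grid (existence condition)
b = np.concatenate([D2, -D2, np.zeros(M)])
res = linprog(obj, A_ub=A, b_ub=b, bounds=[(None, None)] * N + [(0, None)])
r, delta = res.x[:N], res.x[-1]
print(delta)                              # optimal Chebyshev error in magnitude squared
```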