Recall the Wiener filter problem:
$\{x_k\}$, $\{d_k\}$ jointly wide-sense stationary. Find $W$ minimizing $E[\epsilon_k^2]$, where $$\epsilon_k = d_k - y_k = d_k - \sum_{i=0}^{M-1} w_i x_{k-i} = d_k - {X^k}^T W^k$$ $$X^k = \begin{pmatrix} x_k \\ x_{k-1} \\ \vdots \\ x_{k-M+1} \end{pmatrix} \qquad W^k = \begin{pmatrix} w_0^k \\ w_1^k \\ \vdots \\ w_{M-1}^k \end{pmatrix}$$ The superscript denotes absolute time, and the subscript denotes a time delay or a vector index.
The solution can be found by setting the gradient of $E[\epsilon_k^2]$ with respect to $W$ to zero.
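Carrying this out (using the standard Wiener filter notation, with $R$ the input autocorrelation matrix and $P$ the cross-correlation vector):

$$\nabla_W E[\epsilon_k^2] = -2 E[\epsilon_k X^k] = -2\left(P - R W\right) = 0 \quad\Rightarrow\quad W_{\mathrm{opt}} = R^{-1} P$$

where $R = E[X^k {X^k}^T]$ and $P = E[d_k X^k]$.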
In practice the exact signal statistics are unknown, so some approximations are necessary to find an (approximate) Wiener filter. As always, the key is to make the right approximations!
The LMS algorithm is often called a stochastic gradient algorithm, since the update $$W^{k+1} = W^k + 2\mu\epsilon_k X^k$$ replaces the true gradient with the noisy instantaneous estimate $\hat{\nabla}_k = -2\epsilon_k X^k$. This is by far the most commonly used adaptive filtering algorithm, because of its computational simplicity:
| To Compute | $y_k$ | $\epsilon_k$ | $W^{k+1}$ | Total |
|---|---|---|---|---|
| multiplies | $M$ | $0$ | $M+1$ | $2M+1$ |
| adds | $M-1$ | $1$ | $M$ | $2M$ |
So the LMS algorithm is $O(M)$ per sample. In fact, it is nicely balanced in that the filter computation and the adaptation require the same amount of computation.
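As a concrete sketch (a hypothetical implementation, not from the original module), the per-sample LMS recursion above can be written as:

```python
import numpy as np

def lms(x, d, M, mu):
    """LMS adaptive filter: returns final weights and the per-sample error."""
    N = len(x)
    W = np.zeros(M)                        # W^0 = 0
    e = np.zeros(N)
    for k in range(M - 1, N):
        X = x[k - M + 1:k + 1][::-1]       # X^k = [x_k, x_{k-1}, ..., x_{k-M+1}]
        y = W @ X                          # filter output y_k  (M multiplies)
        e[k] = d[k] - y                    # error eps_k        (1 add)
        W = W + 2 * mu * e[k] * X          # stochastic-gradient update (M+1 multiplies)
    return W, e
```

With white input and a noiseless desired signal generated by an FIR system, the weights converge to that system's coefficients.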
Note that the parameter $\mu$ plays a very important role in the LMS algorithm. It can also be varied with time, but usually a constant $\mu$ (the "convergence weight factor," or step size) is used, chosen after experimentation for a given application.
Large $\mu$: fast convergence, fast adaptivity.
Small $\mu$: accurate $W$ (less misadjustment error), better stability.
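This trade-off can be seen in a small system-identification experiment (a hypothetical setup with illustrative values of $\mu$, not from the original module): the larger step size converges faster but settles at a higher steady-state error, while the smaller step size converges slowly to a lower error.

```python
import numpy as np

def lms(x, d, M, mu):
    """LMS adaptive filter: returns final weights and the per-sample error."""
    N = len(x)
    W = np.zeros(M)
    e = np.zeros(N)
    for k in range(M - 1, N):
        X = x[k - M + 1:k + 1][::-1]       # X^k = [x_k, ..., x_{k-M+1}]
        e[k] = d[k] - W @ X                # error eps_k
        W = W + 2 * mu * e[k] * X          # LMS update
    return W, e

rng = np.random.default_rng(1)
N, M = 20000, 3
x = rng.standard_normal(N)
h = np.array([0.5, -0.3, 0.2])                             # unknown FIR system
d = np.convolve(x, h)[:N] + 0.5 * rng.standard_normal(N)   # noisy desired signal

_, e_big = lms(x, d, M, mu=0.05)      # large mu: fast adaptation, more misadjustment
_, e_small = lms(x, d, M, mu=0.001)   # small mu: slow adaptation, less misadjustment

mse_big = np.mean(e_big[-10000:] ** 2)     # steady-state MSE, large mu
mse_small = np.mean(e_small[-10000:] ** 2) # steady-state MSE, small mu
```

In this setup the large-$\mu$ filter has nearly converged within the first hundred samples, while the small-$\mu$ filter ultimately reaches a lower steady-state error.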