
In "normalized" LMS, the gradient step factor μ is normalized by the energy of the data vector:

    μ_NLMS = α / (σ + X_k^H X_k)

where α is usually 1/2 and σ is a very small number introduced to prevent division by zero if X_k^H X_k is very small. The update equation becomes

    W_{k+1} = W_k + (1 / (σ + X_k^H X_k)) e_k X_k

(with the standard LMS convention W_{k+1} = W_k + 2μ e_k X_k, the choice α = 1/2 yields the unit coefficient above). The normalization has several interpretations:

  • α = 1/2 corresponds to the 2nd-order convergence bound
  • it makes the algorithm independent of signal scalings
  • it adjusts W_{k+1} to give zero error with the current input: W_{k+1}^H X_k = d_k
  • it minimizes the mean effort at time k+1
NLMS usually converges much more quickly than LMS at very little extra cost, so it is very commonly used. In some applications, normalization is so universal that "we use the LMS algorithm" is taken to imply normalization as well.
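The update above can be sketched in a few lines of NumPy. This is an illustrative implementation for real-valued signals, not code from the original module; the function name `nlms` and its parameters are assumptions, and α = 1/2 is folded into the 2μ of the standard LMS update so the normalized step is simply 1/(σ + X_k^T X_k).

```python
import numpy as np

def nlms(x, d, num_taps, sigma=1e-8):
    """Normalized LMS sketch (real-valued signals; names are illustrative).

    x: input signal, d: desired signal, num_taps: filter length,
    sigma: small constant preventing division by zero when the
    data-vector energy X_k^T X_k is very small.
    """
    w = np.zeros(num_taps)          # filter weights W_k
    e = np.zeros(len(x))            # a priori error e_k
    for k in range(num_taps - 1, len(x)):
        X = x[k - num_taps + 1 : k + 1][::-1]   # data vector X_k (newest first)
        e[k] = d[k] - w @ X                     # a priori error
        w = w + e[k] * X / (sigma + X @ X)      # normalized update
    return w, e
```

Because the step is divided by the data-vector energy, scaling the input x (and d) by any constant leaves the weight trajectory unchanged, which is the "independent of signal scalings" interpretation above; with σ ≈ 0, each update also drives the error on the current input to zero.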





Source:  OpenStax, Adaptive filters. OpenStax CNX. May 12, 2005 Download for free at http://cnx.org/content/col10280/1.1
