
Adaptive Equalization

Another type of equalization, capable of tracking a slowly time-varying channel response, is known as adaptive equalization. It can be implemented to perform tap-weight adjustments periodically or continually. Periodic adjustments are accomplished by periodically transmitting a preamble, or short training sequence of digital data, known by the receiver. Continual adjustments are accomplished by replacing the known training sequence with a sequence of data symbols estimated from the equalizer output and treated as known data. When performed continually and automatically in this way, the adaptive procedure is referred to as decision directed.

If the probability of error exceeds one percent, the decision-directed equalizer might not converge. A common solution to this problem is to initialize the equalizer with an alternate process, such as a preamble, to provide good error performance, and then switch to decision-directed mode.
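The two operating modes differ only in where the desired symbol z(k) comes from: a known training symbol during the preamble, or the equalizer's own hard decision afterward. The helper below is an illustrative sketch (the function name, BPSK slicer, and signature are assumptions, not from the text):

```python
def error_signal(z_hat, training_symbol=None):
    """Compute the adaptation error e(k) = z(k) - z_hat(k).

    Training mode: the known preamble symbol supplies z(k).
    Decision-directed mode: the equalizer's hard decision (here a
    BPSK slicer, the sign of the output) is treated as known data.
    """
    if training_symbol is not None:
        z = training_symbol              # known training sequence
    else:
        z = 1.0 if z_hat >= 0 else -1.0  # slicer decision stands in for z(k)
    return z - z_hat
```

Once the equalizer output is reliable enough (error rate well below one percent), the slicer decisions are almost always correct, so switching from training to decision-directed mode changes nothing but the source of z(k).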

The simultaneous equations described in equation (4) of the module “Transversal Equalizer” do not include the effects of channel noise. To obtain stable filter weights, either the data must be averaged to obtain stable signal statistics, or the noisy solutions obtained from the noisy data must be averaged. The most robust of the algorithms that average noisy solutions is the least-mean-square (LMS) algorithm. Each iteration of this algorithm uses a noisy estimate of the error gradient to adjust the weights in the direction that reduces the average mean-square error.

The noisy gradient is simply the product e(k) r_x of an error scalar e(k) and the data vector r_x.

e(k) = z(k) − ẑ(k)   (1)

where z(k) and ẑ(k) are, respectively, the desired output signal (a sample free of ISI) and its estimate at time k.

ẑ(k) = cᵀ r_x = ∑_{n=−N}^{N} x(k − n) c_n   (2)

where cᵀ is the transpose of the weight vector at time k.

The iterative process that updates the set of weights is obtained as follows:

c(k + 1) = c(k) + Δ e(k) r_x   (3)

where c(k) is the vector of filter weights at time k, and Δ is a small term that limits the coefficient step size and thus controls both the rate of convergence of the algorithm and the variance of the steady-state solution. Stability is assured if the parameter Δ is smaller than the reciprocal of the energy of the data in the filter. Thus we want the convergence parameter Δ to be large for fast convergence, but not so large as to be unstable, and small enough to keep the steady-state variance low.
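Equations (1)–(3) can be exercised end to end with a minimal NumPy sketch. The channel taps, noise level, step size Δ, and tap count below are illustrative assumptions, not values from the text; BPSK training symbols play the role of the known data z(k):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: BPSK training symbols through a mild ISI channel.
num_symbols = 5000
symbols = rng.choice([-1.0, 1.0], size=num_symbols)     # known data z(k)
channel = np.array([0.1, 1.0, 0.2])                     # assumed (slowly varying) channel
received = np.convolve(symbols, channel, mode="full")
received += 0.01 * rng.standard_normal(received.shape)  # channel noise

N = 4                       # equalizer spans taps c_{-N} .. c_N
taps = 2 * N + 1
c = np.zeros(taps)
c[N] = 1.0                  # initialize as a pass-through filter
delta = 0.01                # step size Δ, well below 1 / (energy of data in the filter)

delay = N + (len(channel) - 1) // 2   # align desired symbol with the center tap
errors = []
for k in range(taps, num_symbols):
    r_x = received[k - taps + 1 : k + 1][::-1]  # data vector r_x (newest sample first)
    z_hat = c @ r_x                             # estimate   eq. (2)
    e = symbols[k - delay] - z_hat              # error      eq. (1)
    c = c + delta * e * r_x                     # update     eq. (3)
    errors.append(e**2)
```

Plotting `errors` (the squared error per iteration) shows the learning curve: the mean-square error falls from its initial value toward a small steady-state floor set by the residual ISI, the channel noise, and the choice of Δ.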





Source:  OpenStax, Principles of digital communications. OpenStax CNX. Jul 29, 2009 Download for free at http://cnx.org/content/col10805/1.1
