But why do the two algorithms converge to different places? The facile answer is that they are different because they minimize different performance functions. Indeed, the error surfaces in [link] show minima in different locations. The convergent value of $a\approx 0.38$ for ${J}_{N}\left(a\right)$ is explicable because ${0.38}^{2}\approx 0.15={\mathbf{s}}^{2}$ . The convergent value of $a=0.22$ for ${J}_{LS}\left(a\right)$ is calculated in closed form in Exercise [link] . This value does a good job of minimizing its cost, but it has not solved the problem of making ${a}^{2}$ close to ${\mathbf{s}}^{2}$ . Rather, ${J}_{LS}\left(a\right)$ calculates a smaller gain that makes $\text{avg}\left\{{s}^{2}\right\}\approx {\mathbf{s}}^{2}$ . The minima are different. The moral is this: be wary of your performance functions; they may do what you ask.
Use agcgrad.m to investigate the AGC algorithm. What range of stepsizes mu works? Can the stepsize be too small? Can the stepsize be too large? How does the stepsize mu affect the convergence rate? How does the variance of the input affect the convergent value of $a$ ? What range of averaging lengths lenavg works? Can lenavg be too small? Can lenavg be too large? How does lenavg affect the convergence rate?

Show that the value of $a$ that achieves the minimum of ${J}_{LS}\left(a\right)$ can be expressed as $a=\pm \sqrt{\frac{{\sum }_{k}{\mathbf{s}}^{2}{r}^{2}\left[k\right]}{{\sum }_{k}{r}^{4}\left[k\right]}}$ .
Is there a way to use this (closed form) solution to replace the iteration [link] ?
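The iteration and the closed form can be compared numerically. Below is a NumPy sketch of an agcgrad.m-style gradient descent (the original script is MATLAB; the parameter names mu, lenavg, and ds for ${\mathbf{s}}^{2}$ follow the text), together with the closed-form minimizer obtained by setting the derivative of ${J}_{LS}\left(a\right)$ to zero, assuming ${J}_{LS}\left(a\right)=\text{avg}\left\{{\left({s}^{2}\left[k\right]-{\mathbf{s}}^{2}\right)}^{2}/4\right\}$ :

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000                     # number of iterations
r = rng.standard_normal(n)    # unit-power random input
ds = 0.15                     # desired output power (sbar^2)
mu = 0.001                    # algorithm stepsize
lenavg = 10                   # length of the averaging window
a = np.zeros(n)
a[0] = 2.0                    # initial gain estimate
avec = np.zeros(lenavg)       # buffer of recent gradient terms

for k in range(n - 1):
    s = a[k] * r[k]                         # AGC output sample
    avec = np.roll(avec, 1)                 # shift the buffer
    avec[0] = np.sign(a[k]) * (s**2 - ds)   # newest gradient term
    a[k + 1] = a[k] - mu * avec.mean()      # averaged gradient step

# Closed-form minimizer of J_LS: setting dJ_LS/da = 0 gives
# a^2 * sum(r^4[k]) = sbar^2 * sum(r^2[k]).
a_ls = np.sqrt(ds * np.sum(r**2) / np.sum(r**4))
print(a[-1], a_ls)   # near sqrt(0.15) ~ 0.39 and near 0.22, respectively
```

For zero-mean Gaussian inputs, $\text{avg}\left\{{r}^{4}\right\}\approx 3\,{\left(\text{avg}\left\{{r}^{2}\right\}\right)}^{2}$, which is why the closed form lands near the value $a=0.22$ quoted above while the iteration settles near $\sqrt{0.15}\approx 0.39$.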
Consider the alternative objective function $J\left(a\right)=\frac{1}{2}{a}^{2}\left(\frac{1}{2}\cdot \frac{{s}^{2}\left[k\right]}{3}-{\mathbf{s}}^{2}\right)$ . Calculate the derivative and implement a variation of the AGC algorithm that minimizes this objective. How does this version compare to the algorithms [link] and [link] ? Draw the error surface for this algorithm. Which version is preferable?
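With $s\left[k\right]=a\,r\left[k\right]$ , the derivative of the stated objective works out to $\frac{dJ}{da}=a\left(\frac{{s}^{2}\left[k\right]}{3}-{\mathbf{s}}^{2}\right)$ . A hypothetical NumPy sketch of the resulting update (not from the text; stepsize and input are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
r = rng.standard_normal(n)   # unit-power input
ds = 0.15                    # sbar^2, the desired output power
mu = 0.001                   # stepsize
a = np.zeros(n)
a[0] = 1.0
for k in range(n - 1):
    s = a[k] * r[k]
    # dJ/da = a (s^2/3 - sbar^2) for J(a) = (1/2) a^2 (s^2/6 - sbar^2)
    a[k + 1] = a[k] - mu * a[k] * (s**2 / 3 - ds)
print(a[-1], a[-1]**2)
```

In this sketch the fixed point satisfies ${a}^{2}=3{\mathbf{s}}^{2}$ for unit-power input, so the output power settles near $3{\mathbf{s}}^{2}=0.45$ rather than ${\mathbf{s}}^{2}$ ; comparing that behavior against the other two algorithms is part of what the exercise asks.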
Try initializing the estimate a(1)=-2 in agcgrad.m . Which minimum does the algorithm find? What happens to the data record?
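A quick numerical check of what to expect (a NumPy adaptation of the gradient update, assuming the form $a\leftarrow a-\mu \,\text{sign}\left(a\right)\left({s}^{2}-{\mathbf{s}}^{2}\right)$ and omitting the averaging window for brevity): because the error surface is symmetric in $a$ , a negative start should converge to the negative minimum, leaving the output data record sign-flipped but with the same power.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000
r = rng.standard_normal(n)
ds = 0.15
mu = 0.001
a = np.zeros(n)
a[0] = -2.0   # negative initialization, as in the exercise
for k in range(n - 1):
    s = a[k] * r[k]
    a[k + 1] = a[k] - mu * np.sign(a[k]) * (s**2 - ds)
print(a[-1])   # converges near -sqrt(0.15)
```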
Create your own objective function $J\left(a\right)$ for the AGC problem. Calculate the derivative and implement a variation of the AGC algorithm that minimizes this objective. How does this version compare to the algorithms [link] and [link] ? Draw the error surface for your algorithm. Which version do you prefer?
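As one illustration of the kind of objective you might invent (this example is hypothetical, not from the text), penalize the deviation of the output magnitude from the target amplitude:

```latex
J(a) = \operatorname{avg}\bigl\{ \bigl( |s[k]| - \sqrt{\mathbf{s}^2} \bigr)^2 \bigr\},
\qquad s[k] = a\, r[k].
```

Since $\frac{d\left|s\left[k\right]\right|}{da}=\text{sign}\left(a\right)\left|r\left[k\right]\right|$ , a gradient step (absorbing the constant 2 into $\mu$ ) is

```latex
a[k+1] = a[k] - \mu \bigl( |s[k]| - \sqrt{\mathbf{s}^2} \bigr)\,
\operatorname{sign}(a[k])\, |r[k]|.
```

Like ${J}_{LS}\left(a\right)$ , the minimizer of this objective depends on the statistics of the input, which is part of what the comparison in the exercise explores.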
Investigate how the error surface depends on the input signal. Replace randn with rand in agcerrorsurf.m and draw the error surfaces for both ${J}_{N}\left(a\right)$ and ${J}_{LS}\left(a\right)$ .
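The error surfaces can also be probed numerically. The sketch below (a NumPy stand-in for the MATLAB script agcerrorsurf.m, assuming the cost forms ${J}_{N}\left(a\right)=\text{avg}\left\{\left|a\right|\left({s}^{2}\left[k\right]/3-{\mathbf{s}}^{2}\right)\right\}$ and ${J}_{LS}\left(a\right)=\text{avg}\left\{{\left({s}^{2}\left[k\right]-{\mathbf{s}}^{2}\right)}^{2}/4\right\}$ , consistent with the gradient updates) evaluates both surfaces on a grid for uniform inputs and locates their minima:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10000
ds = 0.15                     # sbar^2
r = rng.random(n)             # uniform on [0,1], analogous to rand

agrid = np.linspace(0.01, 1.5, 500)   # grid of candidate gains
# Empirical error surfaces J_N(a) and J_LS(a) over the grid
jn = np.array([np.mean(np.abs(a) * ((a * r)**2 / 3 - ds)) for a in agrid])
jls = np.array([np.mean(((a * r)**2 - ds)**2 / 4) for a in agrid])
print(agrid[jn.argmin()], agrid[jls.argmin()])
```

With rand the minima shift (to roughly 0.67 and 0.50 for these cost forms) because the uniform input has different second and fourth moments than the Gaussian; substituting standard_normal for random recovers minima near the 0.38 and 0.22 quoted earlier.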
One of the impairments encountered in transmission systems is the degradation due to fading, when the strength of the received signal changes in response to changes in the transmission path. (Recall the discussion in [link] .) This section shows how an AGC can be used to counteract the fading, assuming the rate of the fading is slow, and provided the signal does not disappear completely.
Suppose that the input consists of a random sequence undulating slowly up and down in magnitude, as in the top plot of [link] . The adaptive AGC compensates for the amplitude variations, growing small when the power of the input is large, and large when the power of the input is small. This is shown in the middle graph. The resulting output is of roughly constant amplitude, as shown in the bottom plot of [link] .
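The scenario in the figure can be sketched numerically. The following NumPy simulation (a hypothetical setup, not the book's script: a slow sinusoidal fading envelope, an agcgrad-style update, and a larger stepsize so the gain can track the fade) shows the output power staying near the target even as the input power swings by a factor of four:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30000
ds = 0.15                    # desired output power
mu = 0.01                    # larger stepsize so the gain tracks the fade
env = 0.6 + 0.2 * np.sin(2 * np.pi * np.arange(n) / n)  # slow fading envelope
r = env * rng.standard_normal(n)    # faded received signal
a = np.zeros(n)                     # adaptive gain
s = np.zeros(n)                     # AGC output
a[0] = 1.0
for k in range(n - 1):
    s[k] = a[k] * r[k]
    a[k + 1] = a[k] - mu * np.sign(a[k]) * (s[k]**2 - ds)
s[-1] = a[-1] * r[-1]

# Input power varies between 0.4^2 and 0.8^2 (a factor of 4), yet after the
# initial transient the average output power hovers near ds.
print(np.mean(s[n // 2:]**2))   # roughly ds = 0.15
```

Plotting r, a, and s against k reproduces the three panels described above: the gain grows during the deep fade and shrinks when the input is strong, leaving an output of roughly constant amplitude.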