The term ${a}^{2}{r}^{2}\left(kT\right)$ inside the parentheses is equal to ${s}^{2}\left[k\right]$ . The term $a{r}^{2}\left(kT\right)$ outside the parentheses is not directly available to the assessment mechanism, though it can reasonably be approximated by $\frac{{s}^{2}\left[k\right]}{a}$ . Substituting the derivative into [link] and evaluating at $a=a\left[k\right]$ gives the algorithm
Care must be taken when implementing [link] to ensure that $a\left[k\right]$ does not approach zero, since the update divides by $a\left[k\right]$ .
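As a concrete sketch of this least-squares update, the following Python translation drops the explicit averaging (an instantaneous-gradient variant) and adds a small floor on $|a[k]|$ so the division stays safe; the variable names and the `eps` guard are illustrative assumptions, not from the text:

```python
import numpy as np

# Sketch of the least-squares AGC update
#   a[k+1] = a[k] - mu * (s^2[k]/a[k]) * (s^2[k] - ds)
# with a floor eps on |a[k]| so the division never blows up.
# (Instantaneous-gradient variant; names and eps are illustrative.)

rng = np.random.default_rng(0)
n = 10000
r = rng.standard_normal(n)   # unit-power random input
ds = 0.15                    # desired output power
mu = 0.001                   # algorithm stepsize
eps = 1e-3                   # keep |a[k]| away from zero

a = np.zeros(n)
a[0] = 1.0
s = np.zeros(n)
for k in range(n - 1):
    s[k] = a[k] * r[k]                        # scaled output
    denom = a[k] if abs(a[k]) >= eps else (eps if a[k] >= 0 else -eps)
    a[k + 1] = a[k] - mu * (s[k] ** 2 / denom) * (s[k] ** 2 - ds)
```

The guard matters only when $a\left[k\right]$ wanders near zero; for this Gaussian input the gain settles at a stationary point where ${a}^{2}\,\text{avg}\left\{{r}^{4}\right\}=ds\,\text{avg}\left\{{r}^{2}\right\}$ .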
Of course, ${J}_{LS}\left(a\right)$ of [link] is not the only possible goal for the AGC problem. What is important is not the exact form of the performance function, but where the performance function has its optimal points. Another performance function that has a similar error surface (peek ahead to [link] ) is
Taking the derivative gives
where the approximation arises from swapping the order of the differentiation and the averaging, and where the derivative of $|\cdot|$ is the signum or sign function, which holds as long as the argument is nonzero. Evaluating this at $a=a\left[k\right]$ and substituting into [link] gives another AGC algorithm
Consider the “logic” of this algorithm. Suppose that $a$ is positive. Since $\mathbf{s}$ is fixed,
Thus, if the average energy in $s\left[k\right]$ exceeds ${\mathbf{s}}^{2}$ , $a$ is decreased. If the average energy in $s\left[k\right]$ is less than ${\mathbf{s}}^{2}$ , $a$ is increased. The update ceases when $\text{avg}\left\{{s}^{2}\left[k\right]\right\}\approx {\mathbf{s}}^{2}$ , that is, where ${a}^{2}\approx \frac{{\mathbf{s}}^{2}}{\text{avg}\left\{{r}^{2}\right\}}$ , as desired. (An analogous logic applies when $a$ is negative.)
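A quick numeric check of this logic (the specific sample values are hypothetical; $\mu$ and $ds$ follow the text's example): when the recent average of ${s}^{2}\left[k\right]$ exceeds the target, the update term is positive and $a$ is pushed down, and vice versa.

```python
import numpy as np

mu, ds = 0.001, 0.15   # stepsize and desired power, as in the text's example
a = 1.0                # current (positive) gain

# average output energy too high -> update term positive -> a decreases
s2_high = np.array([0.30, 0.25, 0.28])           # avg{s^2} > ds
a_next_high = a - mu * np.sign(a) * np.mean(s2_high - ds)

# average output energy too low -> update term negative -> a increases
s2_low = np.array([0.05, 0.08, 0.06])            # avg{s^2} < ds
a_next_low = a - mu * np.sign(a) * np.mean(s2_low - ds)
```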
The two performance functions [link] and [link] define the updates for the two adaptive elements in [link] and [link] . ${J}_{LS}\left(a\right)$ minimizes the square of the deviation of the power in $s\left[k\right]$ from the desired power ${\mathbf{s}}^{2}$ . This is a kind of “least square” performance function (hence the subscript LS). Such squared-error objectives are common, and will reappear in phase tracking algorithms in Chapter [link] , in clock recovery algorithms in Chapter [link] , and in equalization algorithms in Chapter [link] . On the other hand, the algorithm resulting from ${J}_{N}\left(a\right)$ has a clear logical interpretation (the $N$ stands for “naive”), and the update is simpler, since [link] has fewer terms and no divisions.
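The claim that the two cost functions have similarly placed optima can be checked numerically by evaluating both surfaces over a grid of gains. This Python sketch is my own construction: it assumes ${J}_{LS}\left(a\right)=\text{avg}\left\{\left(1/4\right){\left({a}^{2}{r}^{2}-ds\right)}^{2}\right\}$ (consistent with the derivative quoted earlier) and the naive cost $\text{avg}\left\{|a|\left(\left(1/3\right){a}^{2}{r}^{2}-ds\right)\right\}$ , with a unit-power Gaussian input:

```python
import numpy as np

# Evaluate both cost surfaces over a grid of candidate gains a > 0.
rng = np.random.default_rng(1)
r = rng.standard_normal(20000)     # unit-power Gaussian input samples
ds = 0.15                          # desired output power
avals = np.linspace(0.01, 1.0, 500)

JLS = np.array([np.mean(0.25 * (a**2 * r**2 - ds)**2) for a in avals])
JN = np.array([np.mean(np.abs(a) * ((1/3) * a**2 * r**2 - ds)) for a in avals])

a_ls = avals[np.argmin(JLS)]       # minimizer of the least-squares cost
a_n = avals[np.argmin(JN)]         # minimizer of the naive cost
```

Both surfaces have a single minimum at a positive gain, so either cost steers $a$ toward a sensible operating point; the minimizers differ somewhat for a Gaussian input because $\text{avg}\left\{{r}^{4}\right\}$ is not equal to ${\left(\text{avg}\left\{{r}^{2}\right\}\right)}^{2}$ .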
To experiment concretely with these algorithms, agcgrad.m provides an implementation in Matlab . It is easy to control the rate at which $a\left[k\right]$ changes by choice of stepsize: a larger $\mu $ allows $a\left[k\right]$ to change faster, while a smaller $\mu $ allows greater smoothing. Thus, $\mu $ can be chosen by the system designer to trade off the bandwidth of $a\left[k\right]$ (the speed at which $a\left[k\right]$ can track variations in the energy levels of the incoming signal) versus the amount of jitter or noise. Similarly, the length over which the averaging is done (specified by the parameter lenavg ) will also affect the speed of adaptation; longer averages imply slower moving, smoother estimates, while shorter averages imply faster moving, more jittery estimates.
n=10000;                        % number of steps in simulation
vr=1.0;                         % power of the input
r=sqrt(vr)*randn(n,1);          % generate random inputs
ds=0.15;                        % desired power of output
mu=0.001;                       % algorithm stepsize
lenavg=10;                      % length over which to average
a=zeros(n,1); a(1)=1;           % initialize AGC parameter
s=zeros(n,1);                   % initialize outputs
avec=zeros(1,lenavg);           % vector to store terms for averaging
for k=1:n-1
  s(k)=a(k)*r(k);               % normalize by a(k)
  avec=[sign(a(k))*(s(k)^2-ds),avec(1:lenavg-1)]; % incorporate new update into avec
  a(k+1)=a(k)-mu*mean(avec);    % average adaptive update of a(k)
end
agcgrad.m minimizes the performance function $J\left(a\right)=\text{avg}\left\{|a|\left(\left(1/3\right){a}^{2}{r}^{2}-ds\right)\right\}$ by choice of $a$ (download file)
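For readers working outside Matlab, the following is a Python/NumPy sketch of the same naive-gradient recursion, wrapped in a function (my own packaging; names follow the listing) and run with two stepsizes to make the bandwidth-versus-jitter trade-off described above visible:

```python
import numpy as np

def agcgrad(r, ds=0.15, mu=0.001, lenavg=10):
    """Naive-gradient AGC (sketch after agcgrad.m); returns gain trajectory."""
    n = len(r)
    a = np.zeros(n)
    a[0] = 1.0                           # initialize AGC parameter
    avec = np.zeros(lenavg)              # terms for averaging
    for k in range(n - 1):
        s = a[k] * r[k]                  # scale input by current gain
        avec = np.concatenate(([np.sign(a[k]) * (s**2 - ds)], avec[:-1]))
        a[k + 1] = a[k] - mu * avec.mean()   # averaged adaptive update
    return a

rng = np.random.default_rng(2)
r = rng.standard_normal(10000)           # unit-power random input
a_slow = agcgrad(r, mu=0.001)            # smoother, but slower to converge
a_fast = agcgrad(r, mu=0.01)             # converges quickly, but jitters more
```

Both trajectories settle where the output power matches $ds$ , that is, near $\sqrt{ds/\text{avg}\left\{{r}^{2}\right\}}=\sqrt{0.15}\approx 0.39$ ; the larger stepsize gets there sooner at the cost of visible jitter in the steady state.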