To apply steepest descent to the minimization of the polynomial $J\left(x\right)$ in [link] , suppose that a current estimate of $x$ is available at time $k$ , denoted $x\left[k\right]$ . A new estimate of $x$ at time $k+1$ can be made using $x[k+1]=x[k]-\mu \left.\frac{dJ(x)}{dx}\right|_{x=x[k]},$
where $\mu $ is a small positive number called the stepsize, and where the gradient (derivative) of $J\left(x\right)$ is evaluated at the current point $x\left[k\right]$ . This is then repeated again and again as $k$ increments. This procedure is shown in [link] . When the current estimate $x\left[k\right]$ is to the right of the minimum, the negative of the gradient points left. When the current estimate is to the left of the minimum, the negative gradient points to the right. In either case, as long as the stepsize is suitably small, the new estimate $x[k+1]$ is closer to the minimum than the old estimate $x\left[k\right]$ ; that is, $J\left(x\right[k+1\left]\right)$ is less than $J\left(x\right[k\left]\right)$ .
To make this explicit, the iteration defined by [link] is $x[k+1]=x[k]-\mu \left(2x[k]-4\right),$ or, rearranging, $x[k+1]=\left(1-2\mu \right)x[k]+4\mu .$
In principle, if [link] is iterated over and over, the sequence $x\left[k\right]$ should approach the minimum value $x=2$ . Does this actually happen?
There are two ways to answer this question. It is straightforward to simulate the process. Here is some Matlab code that takes an initial estimate of $x$ called x(1) and iterates [link] for N=500 steps.
N=500;                       % number of iterations
mu=.01;                      % algorithm stepsize
x=zeros(1,N);                % initialize x to zero
x(1)=3;                      % starting point x(1)
for k=1:N-1
  x(k+1)=(1-2*mu)*x(k)+4*mu; % update equation
end
polyconverge.m: find the minimum of $J\left(x\right)={x}^{2}-4x+4$ via steepest descent (download file)
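For readers without Matlab, the same iteration can be sketched in Python. This is a direct translation of polyconverge.m, not part of the original text; the variable names mirror the Matlab version.

```python
# Steepest descent for J(x) = x^2 - 4x + 4, whose minimum is at x = 2.
# Python translation of the Matlab script polyconverge.m (a sketch).

N = 500          # number of iterations
mu = 0.01        # algorithm stepsize
x = [0.0] * N    # preallocate the trajectory
x[0] = 3.0       # starting point, x(1) in the Matlab version

for k in range(N - 1):
    x[k + 1] = (1 - 2 * mu) * x[k] + 4 * mu   # update equation

print(x[-1])     # final iterate, close to the minimum at 2
```

Since the error $x[k]-2$ shrinks by the factor $1-2\mu =0.98$ at each step, after 500 steps it is on the order of $0.98^{499}\approx 4\times {10}^{-5}$.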
[link] shows the output of polyconverge.m for 50 different x(1) starting values superimposed; all converge smoothly to the minimum at $x=2$ .
Explore the behavior of steepest descent by running polyconverge.m with different parameters.

Try mu = -.01, 0, .0001, .02, .03, .05, 1.0, 10.0. Can mu be too large or too small?

Try N = 5, 40, 100, 5000. Can N be too large or too small?

Try different values of x(1). Can x(1) be too large or too small?

As an alternative to simulation, observe that the process [link] is itself a linear time-invariant system, of the general form $x[k+1]=ax[k]+b,$
which is stable as long as $\left|a\right|<1$ . For a constant input, the final value theorem of z-transforms (see [link] ) can be used to show that the asymptotic (convergent) output value is ${lim}_{k\to \infty}x\left[k\right]=\frac{b}{1-a}$ . To see this without reference to arcane theory, observe that if $x\left[k\right]$ is to converge, then it must converge to some value, say ${x}^{*}$ . At convergence, $x[k+1]=x\left[k\right]={x}^{*}$ , and so [link] implies that ${x}^{*}=a{x}^{*}+b$ , which implies that ${x}^{*}=\frac{b}{1-a}$ . (This holds assuming $\left|a\right|<1$ .) For example, [link] has $a=1-2\mu $ and $b=4\mu $ , so ${x}^{*}=\frac{4\mu}{1-(1-2\mu )}=2$ , which is indeed the minimum.
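The fixed-point argument can be checked numerically. The short Python sketch below (an illustration, not from the original text) iterates $x[k+1]=ax[k]+b$ with $a=1-2\mu $ and $b=4\mu $ for several stepsizes; whenever $\left|a\right|<1$ , i.e. $0<\mu <1$ , the iterate settles at ${x}^{*}=\frac{4\mu}{2\mu}=2$ regardless of the starting point.

```python
# Numerical check that x[k+1] = a*x[k] + b converges to x* = b/(1-a)
# when |a| < 1, with a = 1 - 2*mu and b = 4*mu as in the text.

def iterate(mu, x0=3.0, steps=2000):
    """Run the steepest-descent recursion and return the final iterate."""
    a, b = 1 - 2 * mu, 4 * mu
    x = x0
    for _ in range(steps):
        x = a * x + b
    return x

for mu in (0.001, 0.01, 0.1, 0.5):
    print(mu, iterate(mu))   # each final value is near the fixed point x* = 2
```

A stepsize outside this range (say $\mu =1.1$ , so $a=-1.2$ ) makes $\left|a\right|>1$ and the recursion diverges, which matches the behavior seen when experimenting with large mu in polyconverge.m.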
Thus, both simulation and analysis suggest that the iteration [link] is a viable way to find the minimum of the function $J\left(x\right)$ , as long as $\mu $ is suitably small. As will become clearer in later sections, such solutions to optimization problems are almost always possible—as long as the function $J\left(x\right)$ is differentiable. Similarly, it is usually quite straightforward to simulate the algorithm to examine its behavior in specific cases, though it is not always so easy to carry out a theoretical analysis.