
To apply steepest descent to the minimization of the polynomial J(x) in [link], suppose that a current estimate of x is available at time k, which is denoted x[k]. A new estimate of x at time k+1 can be made using

x[k+1] = x[k] - μ (dJ(x)/dx)|_{x = x[k]},

where μ is a small positive number called the stepsize, and where the gradient (derivative) of J(x) is evaluated at the current point x[k]. This is then repeated again and again as k increments. This procedure is shown in [link]. When the current estimate x[k] is to the right of the minimum, the negative of the gradient points left. When the current estimate is to the left of the minimum, the negative gradient points to the right. In either case, as long as the stepsize is suitably small, the new estimate x[k+1] is closer to the minimum than the old estimate x[k]; that is, J(x[k+1]) is less than J(x[k]).

Steepest descent finds the minimum of a function by always pointing in the direction that leads downhill.
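The update in [link] can also be coded directly using a function handle for the derivative, which makes the structure of the algorithm explicit. The following is a minimal sketch (not code from the text); the handle dJ is set to the derivative of J(x) = x^2 - 4x + 4, anticipating the example below, and the parameter values are illustrative assumptions.

% sketch of the general steepest descent update (illustrative, not from the text)
dJ = @(x) 2*x - 4;           % assumed derivative of J(x)=x^2-4x+4
mu = 0.01;                   % stepsize (assumed value)
x = 3;                       % initial estimate (assumed value)
for k = 1:100
  x = x - mu*dJ(x);          % move against the gradient
end
x                            % ends up near the minimum at x=2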

To make this explicit, note that the derivative of J(x) = x^2 - 4x + 4 is dJ(x)/dx = 2x - 4, so the iteration defined by [link] is

x[k+1] = x[k] - μ(2x[k] - 4),

or, rearranging,

x[k+1] = (1 - 2μ)x[k] + 4μ.

In principle, if [link] is iterated over and over, the sequence x[k] should approach the minimum value x = 2. Does this actually happen?

There are two ways to answer this question. It is straightforward to simulate the process. Here is some Matlab code that takes an initial estimate of x called x(1) and iterates [link] for N=500 steps.

N=500;                          % number of iterations
mu=.01;                         % algorithm stepsize
x=zeros(1,N);                   % initialize x to zero
x(1)=3;                         % starting point x(1)
for k=1:N-1
  x(k+1)=(1-2*mu)*x(k)+4*mu;    % update equation
end
polyconverge.m: find the minimum of J(x) = x^2 - 4x + 4 via steepest descent

[link] shows the output of polyconverge.m for 50 different x(1) starting values superimposed; all converge smoothly to the minimum at x = 2 .

The program polyconverge.m attempts to locate the smallest value of J(x) = x^2 - 4x + 4 by descending the gradient. Fifty different starting values all converge to the same minimum at x = 2.
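The figure can be approximated with a small modification of polyconverge.m that runs the same update from many starting values at once. This is a sketch; the particular choice of 50 random starting points drawn from [-10, 10] is an assumption, not necessarily the values used to generate the original figure.

N = 500; mu = 0.01;
X = zeros(50,N);                      % one row per starting value
X(:,1) = 20*rand(50,1) - 10;          % 50 assumed starting points in [-10,10]
for k = 1:N-1
  X(:,k+1) = (1-2*mu)*X(:,k) + 4*mu;  % same update applied to every row
end
plot(X')                              % every trajectory settles at x=2
xlabel('iteration k'), ylabel('x[k]')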

Explore the behavior of steepest descent by running polyconverge.m with different parameters.

  1. Try mu = -.01, 0, .0001, .02, .03, .05, 1.0, 10.0. Can mu be too large or too small?
  2. Try N= 5, 40, 100, 5000. Can N be too large or too small?
  3. Try a variety of values of x(1) . Can x(1) be too large or too small?

As an alternative to simulation, observe that the process [link] is itself a linear time-invariant system, of the general form

x[k+1] = a x[k] + b,

which is stable as long as |a| < 1. For a constant input, the final value theorem of z-transforms (see [link]) can be used to show that the asymptotic (convergent) output value is lim_{k→∞} x[k] = b/(1-a). To see this without reference to arcane theory, observe that if x[k] is to converge, then it must converge to some value, say x*. At convergence, x[k+1] = x[k] = x*, and so [link] implies that x* = a x* + b, which implies that x* = b/(1-a). (This holds assuming |a| < 1.) For example, for [link], x* = 4μ/(1 - (1 - 2μ)) = 4μ/(2μ) = 2, which is indeed the minimum.
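As a quick numerical check (a sketch, not part of the original program), the predicted limit b/(1-a) can be compared with the value the iteration actually settles to:

mu = 0.01;
a = 1 - 2*mu;  b = 4*mu;     % coefficients of x[k+1] = a*x[k] + b
xstar = b/(1-a)              % predicted limit: 4*mu/(2*mu) = 2
x = 3;                       % any starting value
for k = 1:1000
  x = a*x + b;               % iterate the update
end
x                            % agrees with xstar, the minimum of J(x)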

Thus, both simulation and analysis suggest that the iteration [link] is a viable way to find the minimum of the function J(x), as long as μ is suitably small. As will become clearer in later sections, such solutions to optimization problems are almost always possible, as long as the function J(x) is differentiable. Similarly, it is usually quite straightforward to simulate the algorithm to examine its behavior in specific cases, though it is not always so easy to carry out a theoretical analysis.

Source:  OpenStax, Software receiver design. OpenStax CNX. Aug 13, 2013 Download for free at http://cnx.org/content/col11510/1.3