
Optimization theory is the branch of applied mathematics whose purpose is to find a set of parameters that maximizes or minimizes a given mathematical expression. Being an applied discipline, its problems usually arise from real-life situations in areas such as science, engineering, and finance (among many others). This section presents some basic concepts for completeness and is not meant to replace a treatise on the subject. The reader is encouraged to consult further references for more information.

Solution of linear weighted least squares problems

Consider the quadratic problem

$$\min_h \; \| d - C h \|_2$$

which can be written as

$$\min_h \; ( d - C h )^T ( d - C h )$$

where the square root has been omitted since it does not change the minimizer. The resulting problem is strictly convex (provided $C$ has full column rank), so its unique (and thus global) solution is found at the point where the partial derivatives with respect to the optimization variable are equal to zero. That is,

$$\nabla_h \, ( d - C h )^T ( d - C h ) = \nabla_h \left( d^T d - 2\, d^T C h + h^T C^T C h \right) = -2\, C^T d + 2\, C^T C h = 0 \;\;\Longrightarrow\;\; C^T C h = C^T d$$

The solution of [link] is given by

$$h = \left( C^T C \right)^{-1} C^T d$$

where the term $\left( C^T C \right)^{-1} C^T$ is referred to in [link] , [link] as the Moore-Penrose pseudoinverse of $C$.

In the case of a weighted version of [link] ,

$$\min_h \; \| w \odot ( d - C h ) \|_2^2 = \sum_k w_k^2 \, | d_k - C_k h |^2$$

where $\odot$ denotes elementwise multiplication and $C_k$ is the $k$-th row of $C$, one can write [link] as

$$\min_h \; \big( W ( d - C h ) \big)^T \, W ( d - C h )$$

where $W = \operatorname{diag}(w)$ contains the weighting vector $w$. The solution is therefore given by

$$h = \left( C^T W^T W C \right)^{-1} C^T W^T W \, d$$
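As a quick numerical illustration, the closed-form solution above can be checked against a standard least-squares solver. The following is a minimal sketch in Python/NumPy; the data, sizes, and variable names are arbitrary assumptions made for the example, not taken from the text.

```python
import numpy as np

# Minimal sketch: weighted least squares, h = (C^T W^T W C)^{-1} C^T W^T W d
rng = np.random.default_rng(0)
L, M = 50, 8                           # illustrative sizes
C = rng.standard_normal((L, M))
d = rng.standard_normal(L)
w = rng.uniform(0.1, 2.0, size=L)      # positive weights
W = np.diag(w)

# Closed-form solution from the weighted normal equations
h_normal = np.linalg.solve(C.T @ W.T @ W @ C, C.T @ W.T @ W @ d)

# Equivalent formulation: scale the rows by w and solve ordinary least squares
h_lstsq, *_ = np.linalg.lstsq(W @ C, W @ d, rcond=None)

print(np.allclose(h_normal, h_lstsq))  # True; w = 1 recovers the unweighted solution
```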

Newton's method and the approximation of linear systems in an l_p sense

Newton's method and l_p linear-phase systems

Consider the problem

$$\min_a \; g(a) = \| A(\omega; a) - D(\omega) \|_p$$

for $a \in \mathbb{R}^{M+1}$. Problem [link] is equivalent to the better-posed problem

$$\min_a \; f(a) = g(a)^p = \| A(\omega; a) - D(\omega) \|_p^p = \sum_{i=0}^{L} \left| C_i a - D_i \right|^p$$

where $D_i = D(\omega_i)$, $\omega_i \in [0, \pi]$, $C_i = [\, C_{i,0}, \ldots, C_{i,M} \,]$, and

$$C = \begin{bmatrix} C_0 \\ \vdots \\ C_L \end{bmatrix}$$

The $ij$-th element of $C$ is given by $C_{i,j} = \cos \omega_i ( M - j )$, where $0 \leq i \leq L$ and $0 \leq j \leq M$. From [link] we have that

$$\nabla f(a) = \begin{bmatrix} \dfrac{\partial}{\partial a_0} f(a) \\ \vdots \\ \dfrac{\partial}{\partial a_M} f(a) \end{bmatrix}$$

where $a_j$ is the $j$-th element of $a \in \mathbb{R}^{M+1}$ and

$$\frac{\partial}{\partial a_j} f(a) = \frac{\partial}{\partial a_j} \sum_{i=0}^{L} \left| C_i a - D_i \right|^p = \sum_{i=0}^{L} \frac{\partial}{\partial a_j} \left| C_i a - D_i \right|^p = p \sum_{i=0}^{L} \left| C_i a - D_i \right|^{p-1} \cdot \frac{\partial}{\partial a_j} \left| C_i a - D_i \right|$$

Now,

$$\frac{\partial}{\partial a_j} \left| C_i a - D_i \right| = \operatorname{sign}( C_i a - D_i ) \cdot \frac{\partial}{\partial a_j} ( C_i a - D_i ) = C_{i,j} \operatorname{sign}( C_i a - D_i )$$

where

$$\operatorname{sign}(x) = \begin{cases} 1 & x > 0 \\ 0 & x = 0 \\ -1 & x < 0 \end{cases}$$

Note that

$$\lim_{u(a) \to 0^+} \frac{\partial}{\partial a_j} \left| u(a) \right|^p = \lim_{u(a) \to 0^-} \frac{\partial}{\partial a_j} \left| u(a) \right|^p = 0$$

Therefore the Jacobian of f ( a ) is given by

$$\nabla f(a) = \begin{bmatrix} p \displaystyle\sum_{i=0}^{L} C_{i,0} \left| C_i a - D_i \right|^{p-1} \operatorname{sign}( C_i a - D_i ) \\ \vdots \\ p \displaystyle\sum_{i=0}^{L} C_{i,M} \left| C_i a - D_i \right|^{p-1} \operatorname{sign}( C_i a - D_i ) \end{bmatrix}$$

The Hessian of $f(a)$ is the matrix $\nabla^2 f(a)$ whose $jm$-th element ($0 \leq j, m \leq M$) is given by

$$\nabla^2_{j,m} f(a) = \frac{\partial^2}{\partial a_j \, \partial a_m} f(a) = \frac{\partial}{\partial a_m} \frac{\partial}{\partial a_j} f(a) = \frac{\partial}{\partial a_m} \sum_{i=0}^{L} p \, C_{i,j} \left| C_i a - D_i \right|^{p-1} \operatorname{sign}( C_i a - D_i ) = \sum_{i=0}^{L} \alpha \, \frac{\partial}{\partial a_m} \big( b(a) \, d(a) \big)$$

where, for the sake of simplicity, we have substituted $\alpha = p \, C_{i,j}$, $b(a) = \left| C_i a - D_i \right|^{p-1}$ and $d(a) = \operatorname{sign}( C_i a - D_i )$. We have

$$\frac{\partial}{\partial a_m} b(a) = \frac{\partial}{\partial a_m} \left| C_i a - D_i \right|^{p-1} = (p-1) \, C_{i,m} \left| C_i a - D_i \right|^{p-2} \operatorname{sign}( C_i a - D_i )$$
$$\frac{\partial}{\partial a_m} d(a) = \frac{\partial}{\partial a_m} \operatorname{sign}( C_i a - D_i ) = 0$$

Note that the partial derivative of $d(a)$ is not defined at $C_i a - D_i = 0$. Therefore

$$\frac{\partial}{\partial a_m} \big( b(a) \, d(a) \big) = b(a) \frac{\partial}{\partial a_m} d(a) + d(a) \frac{\partial}{\partial a_m} b(a) = (p-1) \, C_{i,m} \left| C_i a - D_i \right|^{p-2} \operatorname{sign}^2( C_i a - D_i )$$

Note that $\operatorname{sign}^2( C_i a - D_i ) = 1$ for all $C_i a - D_i \neq 0$; at $C_i a - D_i = 0$ the expression is not defined. Then

$$\nabla^2_{j,m} f(a) = p \, (p-1) \sum_{i=0}^{L} C_{i,j} \, C_{i,m} \left| C_i a - D_i \right|^{p-2}$$

except at $C_i a - D_i = 0$, where it is not defined.

Based on [link] and [link] , one can apply Newton's method to problem [link] as follows,

  • Given $a_0 \in \mathbb{R}^{M+1}$, $D \in \mathbb{R}^{L+1}$, $C \in \mathbb{R}^{(L+1) \times (M+1)}$
  • For $i = 0, 1, \ldots$
    1. Find $\nabla f(a_i)$.
    2. Find $\nabla^2 f(a_i)$.
    3. Solve $\nabla^2 f(a_i) \, s = -\nabla f(a_i)$ for $s$.
    4. Let $a_{+} = a_i + s$.
    5. Check for convergence and iterate if necessary.
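The iteration above translates directly into a short driver that takes the gradient and the Hessian as callables. The following is a minimal Python/NumPy sketch (the function names, tolerance, and iteration limit are illustrative choices, not from the text); it is reused in the later examples.

```python
import numpy as np

def newton(a0, grad, hess, tol=1e-10, max_iter=100):
    """Minimal Newton iteration: solve hess(a) s = -grad(a), then a <- a + s.

    `grad` and `hess` are callables returning the gradient vector and the
    Hessian matrix of the objective at a given point.
    """
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iter):
        s = np.linalg.solve(hess(a), -grad(a))  # steps 1-3
        a = a + s                               # step 4
        if np.linalg.norm(s) < tol:             # step 5: convergence check
            break
    return a
```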

Note that for problem [link] the Jacobian of f ( a ) can be written as

$$\nabla f(a) = p \, C^T y$$

where

$$y = \left| C a_i - D \right|^{p-1} \odot \operatorname{sign}( C a_i - D ) = \left| C a_i - D \right|^{p-2} \odot ( C a_i - D )$$

Also,

$$\nabla^2_{j,m} f(a) = p \, (p-1) \, C_j^T Z \, C_m$$

where

$$Z = \operatorname{diag}\!\left( \left| C a_i - D \right|^{p-2} \right)$$

and

$$C_j = \begin{bmatrix} C_{0,j} \\ \vdots \\ C_{L,j} \end{bmatrix}$$

Therefore

$$\nabla^2 f(a) = ( p^2 - p ) \, C^T Z \, C$$

From [link] , the Hessian 2 f ( a ) can be expressed as

$$\nabla^2 f(a) = ( p^2 - p ) \, C^T W^T W \, C$$

where

$$W = \operatorname{diag}\!\left( \left| C a_i - D \right|^{\frac{p-2}{2}} \right)$$

The matrix $C \in \mathbb{R}^{(L+1) \times (M+1)}$ is given by

$$C = \begin{bmatrix}
\cos M\omega_0 & \cos (M-1)\omega_0 & \cdots & \cos (M-j)\omega_0 & \cdots & \cos \omega_0 & 1 \\
\cos M\omega_1 & \cos (M-1)\omega_1 & \cdots & \cos (M-j)\omega_1 & \cdots & \cos \omega_1 & 1 \\
\vdots & & & \vdots & & & \vdots \\
\cos M\omega_i & \cos (M-1)\omega_i & \cdots & \cos (M-j)\omega_i & \cdots & \cos \omega_i & 1 \\
\vdots & & & \vdots & & & \vdots \\
\cos M\omega_{L-1} & \cos (M-1)\omega_{L-1} & \cdots & \cos (M-j)\omega_{L-1} & \cdots & \cos \omega_{L-1} & 1 \\
\cos M\omega_L & \cos (M-1)\omega_L & \cdots & \cos (M-j)\omega_L & \cdots & \cos \omega_L & 1
\end{bmatrix}$$
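The cosine matrix above is straightforward to assemble. The sketch below (Python/NumPy) builds it for a given order $M$ and frequency grid; the uniform grid on $[0, \pi]$ used in the example is only one possible choice and is an assumption here.

```python
import numpy as np

def cosine_matrix(M, omega):
    """Return C with entries C[i, j] = cos(omega_i * (M - j)), 0 <= j <= M.

    The last column (j = M) is cos(0) = 1, matching the matrix above.
    """
    omega = np.asarray(omega, dtype=float)
    j = np.arange(M + 1)
    return np.cos(np.outer(omega, M - j))

# Example: L + 1 = 64 uniformly spaced frequencies on [0, pi] (an assumption)
omega = np.linspace(0.0, np.pi, 64)
C = cosine_matrix(M=10, omega=omega)   # shape (64, 11)
```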

The matrix $H = \nabla^2 f(a)$ is positive definite (for $p > 1$). To see this, write $H = ( p^2 - p ) \, K^T K$ where $K = W C$, and note that $p^2 - p > 0$ for $p > 1$. Let $z \in \mathbb{R}^{M+1}$, $z \neq 0$. Then

$$z^T H z = ( p^2 - p ) \, z^T K^T K z = ( p^2 - p ) \, \| K z \|_2^2 > 0$$

unless $z \in \mathcal{N}(K)$. But since $W$ is diagonal with nonzero diagonal entries and $C$ has full column rank, $\mathcal{N}(K) = \{ 0 \}$. Thus $z^T H z > 0$ for all $z \neq 0$, and so $H$ is positive definite.
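Putting the matrix forms together, the l_p linear-phase objective can be plugged into the Newton driver sketched earlier. The snippet below is only a sketch: the ideal lowpass target $D$, the choice $p = 4$, and the $l_2$ starting point are illustrative assumptions, and no safeguard is included for grid points where $C_i a - D_i = 0$ (where, as noted above, the Hessian is not defined; in practice this matters mainly for $1 < p < 2$).

```python
import numpy as np

def lp_grad_hess(C, D, p):
    """Gradient and Hessian callables for f(a) = sum_i |C_i a - D_i|^p."""
    def grad(a):
        r = C @ a - D
        y = np.abs(r) ** (p - 2) * r             # |Ca - D|^(p-1) * sign(Ca - D)
        return p * (C.T @ y)                     # grad f = p C^T y

    def hess(a):
        r = C @ a - D
        z = np.abs(r) ** (p - 2)                 # diagonal of Z
        return (p * p - p) * (C.T * z) @ C       # (p^2 - p) C^T Z C

    return grad, hess

# Illustrative use of `newton`, `omega`, and `C` from the earlier sketches
p = 4
D = np.where(omega <= np.pi / 2, 1.0, 0.0)       # ideal lowpass target (assumption)
a0 = np.linalg.lstsq(C, D, rcond=None)[0]        # start from the l_2 solution
grad, hess = lp_grad_hess(C, D, p)
a_star = newton(a0, grad, hess)
```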

Newton's method and l_p complex linear systems

Consider the problem

$$\min_x \; e(x) = \| A x - b \|_p^p$$

where $A \in \mathbb{C}^{m \times n}$, $x \in \mathbb{R}^n$ and $b \in \mathbb{C}^m$. One can write [link] in terms of the real and imaginary parts of $A$ and $b$,

$$e(x) = \sum_{i=1}^{m} \left| A_i x - b_i \right|^p = \sum_{i=1}^{m} \left| \operatorname{Re}\{ A_i x - b_i \} + j \operatorname{Im}\{ A_i x - b_i \} \right|^p = \sum_{i=1}^{m} \left| ( R_i x - \alpha_i ) + j ( Z_i x - \gamma_i ) \right|^p = \sum_{i=1}^{m} \left( ( R_i x - \alpha_i )^2 + ( Z_i x - \gamma_i )^2 \right)^{p/2} = \sum_{i=1}^{m} g_i(x)^{p/2}$$

where $A = R + j Z$ and $b = \alpha + j \gamma$. The gradient $\nabla e(x)$ is the vector whose $k$-th element is given by

$$\frac{\partial}{\partial x_k} e(x) = \frac{p}{2} \sum_{i=1}^{m} \frac{\partial g_i(x)}{\partial x_k} \, g_i(x)^{\frac{p-2}{2}} = \frac{p}{2} \, q_k(x) \, \hat{g}(x)$$

where $\hat{g}(x)$ is the column vector whose $i$-th element is $g_i(x)^{\frac{p-2}{2}}$ and $q_k(x)$ is the row vector whose $i$-th element is

$$q_{k,i}(x) = \frac{\partial g_i(x)}{\partial x_k} = 2 ( R_i x - \alpha_i ) R_{ik} + 2 ( Z_i x - \gamma_i ) Z_{ik} = 2 R_{ik} R_i x + 2 Z_{ik} Z_i x - \left[ 2 \alpha_i R_{ik} + 2 \gamma_i Z_{ik} \right]$$

Therefore one can express the gradient of $e(x)$ as $\nabla e(x) = \frac{p}{2} \, Q \, \hat{g}(x)$, where $Q = [ q_{k,i} ]$ as above. Note that one can also write the gradient in vector form as follows

$$\nabla e(x) = p \left[ R^T \operatorname{diag}( R x - \alpha ) + Z^T \operatorname{diag}( Z x - \gamma ) \right] \cdot \left( ( R x - \alpha )^2 + ( Z x - \gamma )^2 \right)^{\frac{p-2}{2}}$$

The Hessian $H(x)$ is the matrix of second derivatives whose $kl$-th entry is given by

$$H_{k,l}(x) = \frac{\partial^2}{\partial x_k \, \partial x_l} e(x) = \frac{\partial}{\partial x_l} \left( \frac{p}{2} \sum_{i=1}^{m} q_{k,i}(x) \, g_i(x)^{\frac{p-2}{2}} \right) = \frac{p}{2} \sum_{i=1}^{m} \left[ q_{k,i}(x) \frac{\partial}{\partial x_l} g_i(x)^{\frac{p-2}{2}} + g_i(x)^{\frac{p-2}{2}} \frac{\partial}{\partial x_l} q_{k,i}(x) \right]$$

Now,

$$\frac{\partial}{\partial x_l} g_i(x)^{\frac{p-2}{2}} = \frac{p-2}{2} \, \frac{\partial g_i(x)}{\partial x_l} \, g_i(x)^{\frac{p-4}{2}} = \frac{p-2}{2} \, q_{l,i}(x) \, g_i(x)^{\frac{p-4}{2}}$$
$$\frac{\partial}{\partial x_l} q_{k,i}(x) = 2 R_{ik} R_{il} + 2 Z_{ik} Z_{il}$$

Substituting [link] and [link] into [link] we obtain

$$H_{k,l}(x) = \frac{p(p-2)}{4} \sum_{i=1}^{m} q_{k,i}(x) \, q_{l,i}(x) \, g_i(x)^{\frac{p-4}{2}} + p \sum_{i=1}^{m} \left( R_{ik} R_{il} + Z_{ik} Z_{il} \right) g_i(x)^{\frac{p-2}{2}}$$

Note that H ( x ) can be written in matrix form as

$$H(x) = \frac{p(p-2)}{4} \, Q \operatorname{diag}\!\left( g(x)^{\frac{p-4}{2}} \right) Q^T + p \left[ R^T \operatorname{diag}\!\left( g(x)^{\frac{p-2}{2}} \right) R + Z^T \operatorname{diag}\!\left( g(x)^{\frac{p-2}{2}} \right) Z \right]$$
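In code these matrix forms translate directly. The sketch below (Python/NumPy; the function and variable names are illustrative assumptions) returns the gradient and Hessian as callables, and assumes $1 < p < \infty$ with $g_i(x) > 0$ on the grid, since the exponents $(p-2)/2$ and $(p-4)/2$ are otherwise problematic at zero residuals.

```python
import numpy as np

def complex_lp_grad_hess(A, b, p):
    """Gradient and Hessian callables for e(x) = sum_i |A_i x - b_i|^p, x real."""
    R, Z = A.real, A.imag                          # A = R + jZ
    alpha, gamma = b.real, b.imag                  # b = alpha + j*gamma

    def parts(x):
        u = R @ x - alpha                          # Re{A x - b}
        v = Z @ x - gamma                          # Im{A x - b}
        return u, v, u * u + v * v                 # g_i(x) = |A_i x - b_i|^2

    def grad_e(x):
        u, v, g = parts(x)
        w = g ** ((p - 2) / 2)
        return p * (R.T @ (w * u) + Z.T @ (w * v))

    def hess_e(x):
        u, v, g = parts(x)
        Q = 2 * (R.T * u + Z.T * v)                # Q[k, i] = q_{k,i}(x)
        H = (p * (p - 2) / 4) * (Q * g ** ((p - 4) / 2)) @ Q.T
        w = g ** ((p - 2) / 2)
        return H + p * ((R.T * w) @ R + (Z.T * w) @ Z)

    return grad_e, hess_e
```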

Therefore to solve [link] one can use Newton's method as follows: given an initial point $x_0$, each iteration gives a new estimate $x_+$ according to the formulas

$$H(x_c) \, s = -\nabla e(x_c)$$
$$x_+ = x_c + s$$

where $H(x_c)$ and $\nabla e(x_c)$ correspond to the Hessian and gradient of $e(x)$ as defined previously, evaluated at the current point $x_c$. Since the $p$-norm is convex for $1 < p < \infty$, problem [link] is convex. Therefore Newton's method will converge to the global minimizer $x^\star$ as long as $H(x_c)$ is not ill-conditioned.
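Under these assumptions the complex-case problem uses the same driver as before. A brief illustrative run is sketched below; the random $A$ and $b$ and the choice $p = 3$ are purely for demonstration (they keep the residuals, and hence $g_i(x)$, away from zero).

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 40, 6, 3                               # illustrative sizes and norm
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1j * rng.standard_normal(m)

grad_e, hess_e = complex_lp_grad_hess(A, b, p)   # sketch above
x_star = newton(np.zeros(n), grad_e, hess_e)     # Newton driver sketched earlier
```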





Source: OpenStax, Iterative design of l_p digital filters. OpenStax CNX. Dec 07, 2011. Download for free at http://cnx.org/content/col11383/1.1