The mean-squared error minimizing scalar quantizer (the Lloyd-Max quantizer) is derived here using Lagrange optimization. Background on Lagrange optimization is also provided. Finally, error variance is derived for the asymptotic case of many quantization levels.
  • Though uniform quantization is convenient for implementation and analysis, non-uniform quantization yields lower $\sigma_q^2$ when $p_x(x)$ is non-uniformly distributed. By decreasing $|q(x)|$ for frequently occurring $x$ (at the expense of increasing $|q(x)|$ for infrequently occurring $x$), the average error power can be reduced.
  • Lloyd-Max Quantizer: The MSE-optimal thresholds $\{x_k\}$ and outputs $\{y_k\}$ can be determined for a given input distribution $p_x(x)$, and the result is the Lloyd-Max quantizer. Necessary conditions on $\{x_k\}$ and $\{y_k\}$ are
    $$\frac{\partial \sigma_q^2}{\partial x_k} = 0 \ \text{ for } k \in \{2,\dots,L\} \quad\text{and}\quad \frac{\partial \sigma_q^2}{\partial y_k} = 0 \ \text{ for } k \in \{1,\dots,L\}.$$
    Using equation 2 from Memoryless Scalar Quantization (third equation) together with the Leibniz rules $\frac{\partial}{\partial b}\int_a^b f(x)\,dx = f(b)$ and $\frac{\partial}{\partial a}\int_a^b f(x)\,dx = -f(a)$, the conditions above become
    $$\frac{\partial \sigma_q^2}{\partial x_k} = (x_k - y_{k-1})^2 p_x(x_k) - (x_k - y_k)^2 p_x(x_k) = 0 \;\Rightarrow\; x_k = \frac{y_k + y_{k-1}}{2}, \quad k \in \{2,\dots,L\}, \ \ x_1 = -\infty, \ x_{L+1} = \infty,$$
    $$\frac{\partial \sigma_q^2}{\partial y_k} = -2\int_{x_k}^{x_{k+1}} (x - y_k)\, p_x(x)\, dx = 0 \;\Rightarrow\; y_k = \frac{\int_{x_k}^{x_{k+1}} x\, p_x(x)\, dx}{\int_{x_k}^{x_{k+1}} p_x(x)\, dx}, \quad k \in \{1,\dots,L\}.$$
    It can be shown that these conditions are sufficient for the global MMSE when $\partial^2 \log p_x(x)/\partial x^2 \le 0$, i.e., when $p_x(x)$ is log-concave; this holds for the uniform, Gaussian, and Laplace pdfs, but not the Gamma pdf. Note:
    • the optimum decision thresholds lie halfway between neighboring output values,
    • the optimum output values are the centroids of the pdf within the corresponding interval, i.e., they are given by the conditional means
      $$y_k = \mathrm{E}\{x \mid x \in \mathcal{X}_k\} = \int x\, p_x(x \mid x \in \mathcal{X}_k)\, dx = \int x\, \frac{p_x(x,\, x \in \mathcal{X}_k)}{\Pr(x \in \mathcal{X}_k)}\, dx = \frac{\int_{x_k}^{x_{k+1}} x\, p_x(x)\, dx}{\int_{x_k}^{x_{k+1}} p_x(x)\, dx}.$$
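As a quick numerical check of the centroid formula (using an assumed standard Gaussian pdf, chosen for illustration rather than taken from the text), the conditional mean over an interval can be computed both from the closed form for a truncated normal and by direct integration of the ratio above:

```python
import math

# Assumed pdf for illustration: standard normal.  Its centroid on [a, b]
# has the closed form (phi(a) - phi(b)) / (Phi(b) - Phi(a)).
phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)   # pdf
Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))          # cdf

a, b = 0.5, 1.5
closed_form = (phi(a) - phi(b)) / (Phi(b) - Phi(a))

# midpoint-rule evaluation of the ratio of integrals in the equation above
n = 100000
dx = (b - a) / n
xs = [a + (i + 0.5) * dx for i in range(n)]
numerator = sum(x * phi(x) for x in xs) * dx
denominator = sum(phi(x) for x in xs) * dx
numeric = numerator / denominator
```

Both routes agree to high precision, and the centroid of course lands inside $(a, b)$.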
    Iterative Procedure to Find $\{x_k\}$ and $\{y_k\}$:
    1. Choose $\hat{y}_1$.
    2. For $k = 1, \dots, L-1$:
      given $\hat{y}_k$ and $\hat{x}_k$, solve the centroid equation (lower equation above) for $\hat{x}_{k+1}$;
      given $\hat{y}_k$ and $\hat{x}_{k+1}$, solve the midpoint equation (upper equation above) for $\hat{y}_{k+1}$.
    3. Compare $\hat{y}_L$ to the $y_L$ calculated from the centroid equation using $\hat{x}_L$ and $x_{L+1} = \infty$. Adjust $\hat{y}_1$ accordingly, and go to step 1.
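The same two necessary conditions can also be iterated in alternating (fixed-point) fashion: update all thresholds as midpoints of neighboring outputs, then update all outputs as centroids of their intervals, and repeat. The sketch below is a minimal illustration of that alternating form, assuming a standard normal pdf truncated to $[-5, 5]$; the function name `lloyd_max` and all parameter choices are illustrative, not from the text.

```python
import math

def lloyd_max(pdf, x_lo, x_hi, L, iters=100, grid=5001):
    """Alternate the two necessary conditions until they settle:
    thresholds = midpoints of neighboring outputs, and
    outputs = centroids (conditional means) of their intervals."""
    xs = [x_lo + (x_hi - x_lo) * i / (grid - 1) for i in range(grid)]
    w = [pdf(x) for x in xs]
    dx = xs[1] - xs[0]
    # start from uniformly spaced outputs
    y = [x_lo + (x_hi - x_lo) * (k + 0.5) / L for k in range(L)]
    t = []
    for _ in range(iters):
        t = [x_lo] + [(y[k] + y[k + 1]) / 2 for k in range(L - 1)] + [x_hi]
        for k in range(L):
            num = den = 0.0
            for x, p in zip(xs, w):
                if t[k] <= x < t[k + 1]:
                    num += x * p * dx
                    den += p * dx
            if den > 0.0:
                y[k] = num / den
    return t, y

# illustrative run: standard normal pdf (an assumption, not from the text)
normal = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
t, y = lloyd_max(normal, -5.0, 5.0, L=4)
# tabulated L = 4 Gaussian Lloyd-Max values: outputs near +/-0.4528 and
# +/-1.510, inner thresholds near +/-0.9816 and 0
```

The alternating form skips the endpoint comparison of step 3 above at the cost of recomputing every interval per pass; both converge to the same fixed point for log-concave pdfs.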
  • Lloyd-Max Performance for large $L$: As with the uniform quantizer, we can analyze the quantization error performance for large $L$. Here, we assume that
    • the pdf $p_x(x)$ is constant over $x \in \mathcal{X}_k$ for $k \in \{1,\dots,L\}$,
    • the pdf $p_x(x)$ is symmetric about $x = 0$,
    • the input is bounded, i.e., $x \in (-x_{\max}, x_{\max})$ for some (potentially large) $x_{\max}$.
    So, with the assumption
    $$p_x(x) = p_x(y_k) \quad\text{for } x, y_k \in \mathcal{X}_k$$
    and the definition
    $$\Delta_k := x_{k+1} - x_k,$$
    we can write
    $$P_k := \Pr\{x \in \mathcal{X}_k\} = p_x(y_k)\,\Delta_k, \quad\text{where we require } \sum_k P_k = 1,$$
    and thus, from equation 2 from Memoryless Scalar Quantization (lower equation), $\sigma_q^2$ becomes
    $$\sigma_q^2 = \sum_{k=1}^{L} \frac{P_k}{\Delta_k} \int_{x_k}^{x_{k+1}} (x - y_k)^2\, dx.$$
    For the MSE-optimal $\{y_k\}$, we know
    $$0 = \frac{\partial \sigma_q^2}{\partial y_k} = -\frac{2 P_k}{\Delta_k} \int_{x_k}^{x_{k+1}} (x - y_k)\, dx \;\Rightarrow\; y_k = \frac{x_k + x_{k+1}}{2},$$
    which is expected, since the centroid of a flat pdf over $\mathcal{X}_k$ is simply the midpoint of $\mathcal{X}_k$. Plugging this $y_k$ into the expression for $\sigma_q^2$,
    $$\sigma_q^2 = \sum_{k=1}^{L} \frac{P_k}{3\Delta_k} \Big[\Big(x - \tfrac{x_k}{2} - \tfrac{x_{k+1}}{2}\Big)^3\Big]_{x_k}^{x_{k+1}} = \sum_{k=1}^{L} \frac{P_k}{3\Delta_k} \Big[\Big(\tfrac{x_{k+1}}{2} - \tfrac{x_k}{2}\Big)^3 - \Big(\tfrac{x_k}{2} - \tfrac{x_{k+1}}{2}\Big)^3\Big] = \sum_{k=1}^{L} \frac{P_k}{3\Delta_k} \cdot 2\Big(\frac{\Delta_k}{2}\Big)^3 = \frac{1}{12} \sum_{k=1}^{L} P_k \Delta_k^2.$$
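The flat-pdf approximation behind $\sigma_q^2 = \frac{1}{12}\sum_k P_k \Delta_k^2$ can be sanity-checked numerically. The sketch below (with an assumed Gaussian pdf and uniform thresholds; all names are illustrative) compares the exact error, computed by per-interval integration, against the approximation:

```python
import math

# assumed pdf for illustration: standard normal
pdf = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def exact_vs_approx(thresholds, n=200):
    """Exact sigma_q^2 for midpoint outputs (by midpoint-rule integration)
    next to the flat-pdf approximation (1/12) * sum_k P_k * Delta_k^2."""
    exact = approx = 0.0
    for a, b in zip(thresholds[:-1], thresholds[1:]):
        y = (a + b) / 2                       # midpoint output
        dx = (b - a) / n
        xs = [a + (i + 0.5) * dx for i in range(n)]
        P = sum(pdf(x) for x in xs) * dx      # P_k = Pr{x in X_k}
        exact += sum((x - y) ** 2 * pdf(x) for x in xs) * dx
        approx += P * (b - a) ** 2 / 12       # per-interval approximation
    return exact, approx

# L = 64 uniform intervals on (-4, 4): the pdf is nearly flat in each one
L = 64
thresholds = [-4 + 8 * k / L for k in range(L + 1)]
exact, approx = exact_vs_approx(thresholds)
```

For intervals this narrow the two numbers agree to well under one percent, which is exactly the regime the large-$L$ analysis assumes.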
    Note that for uniform quantization ($\Delta_k = \Delta$), the expression $\sigma_q^2 = \frac{1}{12}\sum_k P_k \Delta_k^2$ reduces to the $\Delta^2/12$ derived earlier. Now we minimize $\sigma_q^2$ with respect to $\{\Delta_k\}$. The trick here is to define
    $$\alpha_k := \sqrt[3]{p_x(y_k)}\,\Delta_k \quad\text{so that}\quad \sigma_q^2 = \frac{1}{12} \sum_{k=1}^{L} p_x(y_k)\,\Delta_k^3 = \frac{1}{12} \sum_{k=1}^{L} \alpha_k^3.$$
    Since, for $p_x(x)$ constant over $\mathcal{X}_k$ and $y_k \in \mathcal{X}_k$,
    $$\sum_{k=1}^{L} \alpha_k = \sum_{k=1}^{L} \sqrt[3]{p_x(y_k)}\,\Delta_k \,\Big|_{y_k = \frac{x_k + x_{k+1}}{2}} = \int_{-x_{\max}}^{x_{\max}} \sqrt[3]{p_x(x)}\, dx = C_x \ \text{(a known constant)},$$
    we have the following constrained optimization problem:
    $$\min_{\{\alpha_k\}} \sum_k \alpha_k^3 \quad\text{s.t.}\quad \sum_k \alpha_k = C_x.$$
    This may be solved using Lagrange multipliers.

    Optimization via Lagrange Multipliers

    Consider the problem of minimizing an $N$-dimensional real-valued cost function $J(\mathbf{x})$, where $\mathbf{x} = (x_1, x_2, \dots, x_N)^t$, subject to $M < N$ real-valued equality constraints $f_m(\mathbf{x}) = a_m$, $m = 1, \dots, M$. This may be converted into an unconstrained optimization of dimension $N + M$ by introducing additional variables $\boldsymbol{\lambda} = (\lambda_1, \dots, \lambda_M)^t$ known as Lagrange multipliers. The unconstrained cost function is
    $$J_u(\mathbf{x}, \boldsymbol{\lambda}) = J(\mathbf{x}) + \sum_m \lambda_m \big(f_m(\mathbf{x}) - a_m\big),$$
    and necessary conditions for its minimization are
    $$\nabla_{\mathbf{x}} J_u(\mathbf{x}, \boldsymbol{\lambda}) = \mathbf{0} \;\Rightarrow\; \nabla_{\mathbf{x}} J(\mathbf{x}) + \sum_m \lambda_m \nabla_{\mathbf{x}} f_m(\mathbf{x}) = \mathbf{0},$$
    $$\nabla_{\boldsymbol{\lambda}} J_u(\mathbf{x}, \boldsymbol{\lambda}) = \mathbf{0} \;\Rightarrow\; f_m(\mathbf{x}) = a_m \ \text{ for } m = 1, \dots, M.$$
    The typical procedure used to solve for the optimal $\mathbf{x}$ is the following:
    1. Equations for $x_n$, $n = 1, \dots, N$, in terms of $\{\lambda_m\}$, are obtained from the gradient condition (upper equation above).
    2. These $N$ equations are used in the constraint condition (lower equation above) to solve for the $M$ optimal $\lambda_m$.
    3. The optimal $\{\lambda_m\}$ are plugged back into the $N$ equations for $x_n$, yielding the optimal $\{x_n\}$.
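As a minimal worked instance of this three-step procedure (a toy problem chosen for illustration, not from the text), consider minimizing $J(\mathbf{x}) = \sum_n x_n^2$ subject to $\sum_n x_n = 1$:

```python
import random

# Step 1: d/dx_n [ J + lam * (sum_n x_n - 1) ] = 2*x_n + lam = 0
#         =>  x_n = -lam / 2 for every n
# Step 2: constraint sum_n x_n = 1 gives N * (-lam / 2) = 1
#         =>  lam = -2 / N
# Step 3: plug back:  x_n = -lam / 2 = 1 / N
N = 5
lam = -2.0 / N
x_opt = [-lam / 2.0] * N                  # all coordinates equal 1/N
J_opt = sum(v * v for v in x_opt)         # = 1/N

# sanity check: random feasible perturbations never do better
random.seed(0)
no_better = True
for _ in range(1000):
    d = [random.uniform(-1.0, 1.0) for _ in range(N)]
    mean = sum(d) / N
    d = [di - mean for di in d]           # sum(d) == 0 keeps the constraint
    J = sum((xi + di) ** 2 for xi, di in zip(x_opt, d))
    no_better = no_better and (J >= J_opt - 1e-12)
```

The equal-allocation answer mirrors the quantizer result that follows: a symmetric cost under a sum constraint is minimized by making every term identical.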
    Returning to the quantizer problem, the necessary conditions are
    $$\frac{\partial}{\partial \alpha_\ell} \Big[ \sum_k \alpha_k^3 + \lambda \Big( \sum_k \alpha_k - C_x \Big) \Big] = 0 \;\Rightarrow\; 3\alpha_\ell^2 + \lambda = 0 \;\Rightarrow\; \alpha_\ell = \sqrt{-\lambda/3},$$
    $$\frac{\partial}{\partial \lambda} \Big[ \sum_k \alpha_k^3 + \lambda \Big( \sum_k \alpha_k - C_x \Big) \Big] = 0 \;\Rightarrow\; \sum_k \alpha_k = C_x,$$
    which can be combined to solve for $\lambda$:
    $$\sum_{k=1}^{L} \sqrt{-\lambda/3} = C_x \;\Rightarrow\; \lambda = -3\Big(\frac{C_x}{L}\Big)^2.$$
    Plugging $\lambda$ back into the expression for $\alpha_\ell$, we find
    $$\alpha_\ell = \frac{C_x}{L}, \quad \ell \in \{1, \dots, L\}.$$
    Using the definition of $\alpha_\ell$, the optimal decision spacing is
    $$\Delta_\ell = \frac{C_x}{L\,\sqrt[3]{p_x(y_\ell)}} = \frac{\int_{-x_{\max}}^{x_{\max}} \sqrt[3]{p_x(x)}\, dx}{L\,\sqrt[3]{p_x(y_\ell)}},$$
    and the minimum quantization error variance is
    $$\sigma_q^2\big|_{\min} = \frac{1}{12} \sum_\ell p_x(y_\ell)\,\Delta_\ell^3 = \frac{1}{12} \sum_\ell p_x(y_\ell)\, \frac{\Big( \int_{-x_{\max}}^{x_{\max}} \sqrt[3]{p_x(x)}\, dx \Big)^3}{L^3\, p_x(y_\ell)} = \frac{1}{12 L^2} \Big( \int_{-x_{\max}}^{x_{\max}} \sqrt[3]{p_x(x)}\, dx \Big)^3.$$
    An interesting observation is that $\alpha_\ell^3$, the $\ell$th interval's optimal contribution to $\sigma_q^2$, is constant over $\ell$.
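The closed form can be sanity-checked against a case with a known answer. For a pdf assumed uniform on $(-x_{\max}, x_{\max})$ (an illustrative choice), $p_x(x) = 1/(2x_{\max})$, so $\int \sqrt[3]{p_x(x)}\,dx = (2x_{\max})^{2/3}$ and $\sigma_q^2|_{\min} = (2x_{\max})^2/(12L^2) = \Delta^2/12$ with $\Delta = 2x_{\max}/L$, matching the uniform-quantizer result:

```python
# Assumed setup for illustration: uniform pdf on (-x_max, x_max), where
# p_x(x) = 1 / (2 * x_max) and the optimal spacing is itself uniform.
x_max, L = 3.0, 16
p = 1.0 / (2.0 * x_max)

# C_x = integral of p_x(x)^(1/3) over (-x_max, x_max), done numerically
n = 10000
dx = 2.0 * x_max / n
xs = [-x_max + (i + 0.5) * dx for i in range(n)]
C_x = sum(p ** (1.0 / 3.0) for x in xs) * dx

sigma_min = C_x ** 3 / (12.0 * L ** 2)    # the closed form above
delta = 2.0 * x_max / L                   # uniform spacing for this pdf
# sigma_min should equal delta**2 / 12, the uniform-quantizer result
```

For non-uniform pdfs the same two lines (numerical $C_x$, then $C_x^3 / 12L^2$) give the asymptotic Lloyd-Max error without running the iteration at all.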

Source:  OpenStax, An introduction to source-coding: quantization, dpcm, transform coding, and sub-band coding. OpenStax CNX. Sep 25, 2009 Download for free at http://cnx.org/content/col11121/1.2
