Here we analyze the optimal reconstruction error for transform coding. As the number of channels grows to infinity, the performance gain over PCM is shown to depend on the spectral flatness measure. Moreover, the performance of transform coding with an infinite number of channels is shown to equal that of DPCM with an infinite-length predictor. However, when the DPCM predictor length and the number of transform coding channels are equal and finite, we show that DPCM always yields better performance.

Asymptotic performance analysis

  • For an $N \times N$ transform coder, Equation 1 from "Gain over PCM" presented an expression for the reconstruction error variance $\sigma_r^2|_{\mathrm{TC}}$ written in terms of the quantizer input variances $\{\sigma_{y_k}^2\}$. Noting the $N$-dependence of $\sigma_r^2|_{\mathrm{TC}}$ in that equation and rewriting it as $\sigma_r^2|_{\mathrm{TC},N}$, a reasonable question is: what is $\sigma_r^2|_{\mathrm{TC},N}$ as $N \to \infty$?
  • When using the KLT, we know that $\sigma_{y_k}^2 = \lambda_k$, where $\lambda_k$ denotes the $k^{\mathrm{th}}$ eigenvalue of $\mathbf{R}_x$. Plugging these $\sigma_{y_k}^2$ into Equation 1 from "Gain over PCM", we get
    $$\sigma_r^2|_{\mathrm{TC},N} = \gamma_y\, 2^{-2R} \left( \prod_{k=0}^{N-1} \lambda_k \right)^{1/N}.$$
    Writing $\bigl(\prod_k \lambda_k\bigr)^{1/N} = \exp\bigl(\tfrac{1}{N}\sum_k \ln \lambda_k\bigr)$ and using the Toeplitz Distribution Theorem (see Grenander & Szegő),
    $$\lim_{N \to \infty} \frac{1}{N} \sum_{k=0}^{N-1} f(\lambda_k) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f\bigl(S_x(e^{j\omega})\bigr)\, d\omega \quad \text{for continuous } f(\cdot),$$
    with $f(\cdot) = \ln(\cdot)$, we find that
    $$\lim_{N \to \infty} \sigma_r^2|_{\mathrm{TC},N} = \gamma_y\, 2^{-2R} \exp\left( \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln S_x(e^{j\omega})\, d\omega \right) = \gamma_y\, \sigma_x^2\, 2^{-2R}\, \mathrm{SFM}_x,$$
    where $\mathrm{SFM}_x$ denotes the spectral flatness measure of $x(n)$, repeated below for convenience:
    $$\mathrm{SFM}_x = \frac{\exp\left( \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln S_x(e^{j\omega})\, d\omega \right)}{\frac{1}{2\pi} \int_{-\pi}^{\pi} S_x(e^{j\omega})\, d\omega}.$$
    Thus, with the optimal transform and optimal bit allocation, the asymptotic gain over uniformly quantized PCM is
    $$G_{\mathrm{TC},\infty} = \frac{\sigma_r^2|_{\mathrm{PCM}}}{\sigma_r^2|_{\mathrm{TC},\infty}} = \frac{\gamma_x\, \sigma_x^2\, 2^{-2R}}{\gamma_y\, \sigma_x^2\, 2^{-2R}\, \mathrm{SFM}_x} = \frac{\gamma_x}{\gamma_y}\, \mathrm{SFM}_x^{-1}.$$
  • Recall that, for the optimal DPCM system,
    $$G_{\mathrm{DPCM},\infty} = \frac{\sigma_r^2|_{\mathrm{PCM}}}{\sigma_r^2|_{\mathrm{DPCM},\infty}} = \frac{\sigma_x^2}{\sigma_e^2|_{\min}},$$
    where we assumed that the signal applied to the DPCM quantizer is distributed similarly to the signal applied to the PCM quantizer, and where $\sigma_e^2|_{\min}$ denotes the prediction error variance resulting from use of the optimal infinite-length linear predictor:
    $$\sigma_e^2|_{\min} = \exp\left( \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln S_x(e^{j\omega})\, d\omega \right).$$
    Making this latter assumption for the transform coder (implying $\gamma_y = \gamma_x$) and plugging in $\sigma_e^2|_{\min}$ yields the following asymptotic result:
    $$G_{\mathrm{TC},\infty} = G_{\mathrm{DPCM},\infty} = \mathrm{SFM}_x^{-1}.$$
    In other words, transform coding with infinite-dimensional optimal transformation and optimal bit allocation performs equivalently to DPCM with infinite-length optimal linear prediction. (A numerical check of this equality for a concrete process is sketched directly below.)
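To make the asymptotic result concrete, here is a minimal Python sketch, assuming an AR(1) process $x(n) = a\, x(n-1) + w(n)$ driven by white noise $w(n)$; the process model and the values of $a$ and $\sigma_w^2$ are illustrative assumptions, not from the text. The frequency-domain integrals are approximated by averages over a dense grid of $\omega$ values.

```python
import numpy as np

# Illustrative assumption: AR(1) process x(n) = a x(n-1) + w(n), w(n) white.
a = 0.9           # AR(1) pole, |a| < 1
sigma_w2 = 1.0    # variance of the white driving noise w(n)

# Dense grid on [-pi, pi); means over the grid approximate (1/2pi) * integrals.
omega = np.linspace(-np.pi, np.pi, 2**16, endpoint=False)
Sx = sigma_w2 / np.abs(1.0 - a * np.exp(-1j * omega))**2  # power spectrum S_x(e^{jw})

sigma_x2 = Sx.mean()                      # (1/2pi) * int S_x dw  = sigma_x^2
sigma_e2_min = np.exp(np.log(Sx).mean())  # exp((1/2pi) * int ln S_x dw)
sfm = sigma_e2_min / sigma_x2             # spectral flatness measure SFM_x

print("SFM_x (numeric vs. closed form)     :", sfm, 1 - a**2)
print("G_TC,inf   = 1/SFM_x                :", 1.0 / sfm)
print("G_DPCM,inf = sigma_x^2/sigma_e^2|min:", sigma_x2 / sigma_e2_min)
```

For $a = 0.9$, both printed gains come out near $1/(1 - a^2) \approx 5.26$, matching the closed-form result $\mathrm{SFM}_x = 1 - a^2$ for an AR(1) process.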

Finite-dimensional analysis: comparison to DPCM

  • The fact that optimal transform coding performs as well as DPCM in the limiting case does not tell us the relative performance of these methods at practical levels of implementation, e.g., when the transform dimension and the predictor length are equal and finite. Below we compare the reconstruction error variances of TC and DPCM when the transform dimension equals the predictor length. Recalling that
    $$G_{\mathrm{DPCM},N-1} = \frac{\sigma_x^2}{\sigma_e^2|_{\min,N-1}}$$
    and
    $$\sigma_e^2|_{\min,N-1} = \frac{|\mathbf{R}_N|}{|\mathbf{R}_{N-1}|},$$
    where $\mathbf{R}_N$ denotes the $N \times N$ autocorrelation matrix of $x(n)$, we find
    $$G_{\mathrm{DPCM},N-1} = \sigma_x^2\, \frac{|\mathbf{R}_{N-1}|}{|\mathbf{R}_N|}, \qquad G_{\mathrm{DPCM},N-2} = \sigma_x^2\, \frac{|\mathbf{R}_{N-2}|}{|\mathbf{R}_{N-1}|}, \qquad G_{\mathrm{DPCM},N-3} = \sigma_x^2\, \frac{|\mathbf{R}_{N-3}|}{|\mathbf{R}_{N-2}|}, \quad \ldots$$
    Recursively applying the equations above, the determinants telescope, and since $|\mathbf{R}_1| = \sigma_x^2$ we find
    $$\prod_{k=1}^{N-1} G_{\mathrm{DPCM},k} = (\sigma_x^2)^{N-1}\, \frac{|\mathbf{R}_1|}{|\mathbf{R}_N|} = \frac{(\sigma_x^2)^N}{|\mathbf{R}_N|},$$
    which means that we can write
    $$|\mathbf{R}_N| = (\sigma_x^2)^N \left( \prod_{k=1}^{N-1} G_{\mathrm{DPCM},k} \right)^{-1}.$$
    If in the previously derived TC reconstruction error variance expression
    $$\sigma_r^2|_{\mathrm{TC},N} = \gamma_y\, 2^{-2R} \left( \prod_{\ell=0}^{N-1} \lambda_\ell \right)^{1/N}$$
    we assume that $\gamma_y = \gamma_x$ and apply the eigenvalue property $\prod_{\ell=0}^{N-1} \lambda_\ell = |\mathbf{R}_N|$, the TC gain over PCM becomes
    $$G_{\mathrm{TC},N} = \frac{\sigma_r^2|_{\mathrm{PCM}}}{\sigma_r^2|_{\mathrm{TC},N}} = \frac{\gamma_x\, \sigma_x^2\, 2^{-2R}}{\gamma_x\, 2^{-2R}\, \sigma_x^2 \left( \prod_{k=1}^{N-1} G_{\mathrm{DPCM},k} \right)^{-1/N}} = \left( \prod_{k=1}^{N-1} G_{\mathrm{DPCM},k} \right)^{1/N} < G_{\mathrm{DPCM},N}.$$
    The strict inequality follows from the fact that $G_{\mathrm{DPCM},k}$ is monotonically increasing in $k$. To summarize, DPCM with optimal length-$N$ prediction performs better than TC with optimal $N \times N$ transformation and optimal bit allocation for any finite value of $N$. There is an intuitive explanation for this: the propagation of memory in the DPCM prediction loop makes the effective memory of DPCM greater than $N$, while in TC the effective memory is exactly $N$. (A numerical check of the finite-$N$ comparison is sketched directly below.)
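As a check on the finite-$N$ comparison, here is a minimal Python sketch under the same illustrative AR(1) assumption as before, now specified through its autocorrelation $r(k) = \sigma_x^2\, a^{|k|}$. It builds the Toeplitz matrices $\mathbf{R}_n$, evaluates $G_{\mathrm{DPCM},k} = \sigma_x^2\, |\mathbf{R}_k| / |\mathbf{R}_{k+1}|$, and verifies both the geometric-mean identity for $G_{\mathrm{TC},N}$ derived above and the inequality $G_{\mathrm{TC},N} < G_{\mathrm{DPCM},N}$.

```python
import numpy as np

# Illustrative assumption: AR(1) autocorrelation r(k) = sigma_x^2 * a^|k|.
a, sigma_x2, N = 0.9, 1.0, 8

r = sigma_x2 * a ** np.arange(N + 1)   # r(0), ..., r(N)

def detR(n):
    """Determinant |R_n| of the n x n Toeplitz autocorrelation matrix."""
    idx = np.arange(n)
    return np.linalg.det(r[np.abs(idx[:, None] - idx[None, :])])

# G_DPCM,k = sigma_x^2 |R_k| / |R_{k+1}|: gain of optimal length-k prediction.
G_dpcm = np.array([sigma_x2 * detR(k) / detR(k + 1) for k in range(1, N + 1)])

G_tc = np.prod(G_dpcm[:N - 1]) ** (1.0 / N)     # geometric-mean identity
G_tc_direct = sigma_x2 / detR(N) ** (1.0 / N)   # equivalently sigma_x^2 / |R_N|^{1/N}

print("G_TC,N (two equivalent forms):", G_tc, G_tc_direct)
print("G_DPCM,N                     :", G_dpcm[N - 1])
print("G_TC,N < G_DPCM,N            :", G_tc < G_dpcm[N - 1])
```

With $a = 0.9$ and $N = 8$ this prints $G_{\mathrm{TC},N} \approx 4.28$ against $G_{\mathrm{DPCM},N} \approx 5.26$. Note that for an AR(1) process $G_{\mathrm{DPCM},k}$ is constant in $k$ (length-1 prediction is already optimal), so in this example the strict inequality comes from the $1/N$ exponent acting on a product of only $N-1$ terms.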

Source: OpenStax, An introduction to source-coding: quantization, dpcm, transform coding, and sub-band coding. OpenStax CNX. Sep 25, 2009. Download for free at http://cnx.org/content/col11121/1.2