Motivated by the practical problem of non-stationary sources, adaptation of the uniform quantizer's stepsize is discussed. In particular, adaptive quantization based on forward estimation (AQF) and backward estimation (AQB) are discussed, in both block-based and recursive forms.
Previously we have considered the case of stationary source processes, though in reality the source signal may be highly non-stationary. For example, the variance, pdf, and/or mean may vary significantly with time.
Here we concentrate on the problem of adapting the uniform quantizer stepsize Δ to a signal with unknown variance. This is accomplished by estimating the input standard deviation σ̂_x(n) and setting the quantizer stepsize appropriately:

    Δ(n) = 2 φ_x σ̂_x(n) / 2^R,

where R is the number of quantizer bits. Here φ_x is a constant that depends on the distribution of the input signal x; its function is to prevent input values of magnitude up to φ_x σ̂_x(n) from being clipped by the quantizer (see [link]). Comparing to the non-adaptive stepsize relation Δ = 2 x_max / 2^R, we see that φ_x σ̂_x(n) plays the role of x_max.
Adaptive quantization stepsize
As long as the reconstruction levels are the same at the encoder and decoder, the actual values chosen for quantizer design are arbitrary.
Assuming integer values as in [link], the quantization rule becomes

    q(n) = round( x(n) / Δ(n) ),

i.e., x(n)/Δ(n) is rounded to the nearest integer codeword, and the decoder forms the reconstruction x̂(n) = Δ(n) q(n).
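As a concrete sketch of the two relations above (assuming an R-bit midtread rule, a loading factor φ_x of 4, and hypothetical function and variable names), the adaptive stepsize and integer quantization rule might be implemented as:

    def adaptive_quantize(x, sigma_hat, R=3, phi_x=4.0):
        """R-bit uniform midtread quantizer whose stepsize tracks the current
        standard-deviation estimate sigma_hat (assumes sigma_hat > 0)."""
        L = 2 ** R                                # number of quantization levels
        delta = 2.0 * phi_x * sigma_hat / L       # adaptive stepsize Delta(n)
        q = int(round(x / delta))                 # integer codeword (transmitted)
        q = max(-(L // 2), min(L // 2 - 1, q))    # clip codeword to the L levels
        x_hat = q * delta                         # decoder reconstruction
        return q, x_hat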
AQF and AQB: [link] shows two structures for stepsize adaptation: (a) adaptive quantization with forward estimation (AQF) and (b) adaptive quantization with backward estimation (AQB). The advantage of AQF is that variance estimation may be accomplished more accurately, since it operates directly on the source rather than on a quantized (noisy) version of the source. The advantage of AQB is that the variance estimates do not need to be transmitted as side information for decoding. In fact, practical AQF encoders transmit variance estimates only occasionally, e.g., once per block.
(a) AQF and (b) AQB
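To make the structural difference concrete, here is a minimal sketch of the two encoder loops. Here estimate_sigma stands for whichever variance estimator is used (block or recursive, developed below), quantize for a stepsize-adaptive quantizer such as the adaptive_quantize sketch above, and the function names and start-up value are illustrative assumptions rather than part of the original scheme:

    def aqf_encode(x, N, estimate_sigma, quantize):
        """AQF sketch: estimate sigma from a block of N *clean* input samples,
        quantize that block with the matching stepsize, and transmit the
        estimate as side information (once per block)."""
        codewords, side_info = [], []
        for start in range(0, len(x), N):
            block = x[start:start + N]
            sigma = estimate_sigma(block)          # accurate: unquantized data
            side_info.append(sigma)                # must be sent to the decoder
            codewords += [quantize(xn, sigma)[0] for xn in block]
        return codewords, side_info

    def aqb_encode(x, estimate_sigma, quantize):
        """AQB sketch: estimate sigma from past *reconstructed* samples, which
        the decoder can recompute itself, so no side information is sent."""
        codewords, recon = [], []
        for xn in x:
            sigma = estimate_sigma(recon) if recon else 1.0   # arbitrary start-up
            q, x_hat = quantize(xn, sigma)
            codewords.append(q)
            recon.append(x_hat)
        return codewords

A matching AQB decoder runs the same estimate_sigma update on its own reconstructions, so the encoder and decoder stepsizes remain synchronized without side information.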
Block Variance Estimation: When operating on finite blocks of data, the structures in [link] perform variance estimation as follows. Over each block of N input samples (assumed zero-mean), AQF computes

    σ̂_x^2 = (1/N) Σ_{m in block} x^2(m)

and uses this estimate, which is also transmitted, to set the stepsize for that same block; AQB computes the analogous average over the previous block of quantized outputs, which the decoder can reproduce without side information.
N is termed the learning period, and its choice may significantly impact quantizer SNR performance: choosing N too large prevents the quantizer from adapting to the local statistics of the input, while choosing N too small results in overly noisy AQB variance estimates and excessive AQF side information.
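A minimal block estimator consistent with the formula above (hypothetical function name, zero-mean source assumed) might look like:

    import numpy as np

    def block_sigma(x_block):
        """Standard-deviation estimate over one block of N samples, assuming a
        zero-mean source: sigma_hat^2 = (1/N) * sum of x(m)^2 over the block."""
        return float(np.sqrt(np.mean(np.square(x_block))))

With the aqf_encode sketch above, block_sigma would be supplied as estimate_sigma, with N chosen to balance tracking speed against estimate noise and side-information rate.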
[link] demonstrates these two schemes for two choices of N.
Block AQF and AQB estimates of σ_x(n) for two choices of N. SNR achieved: (a) 22.6 dB, (b) 28.8 dB, (c) 21.2 dB, and (d) 28.8 dB.
Recursive Variance Estimation: The recursive method of estimating the variance is as follows:

    σ̂_x^2(n) = α σ̂_x^2(n−1) + (1−α) x^2(n−1),

where α is a forgetting factor in the range 0 < α < 1 and typically near 1. (For AQB, the quantized output takes the place of x(n−1).) This leads to an exponential data window, as can be seen below. Plugging the expression for σ̂_x^2(n−1) into that for σ̂_x^2(n),

    σ̂_x^2(n) = α^2 σ̂_x^2(n−2) + (1−α) [ x^2(n−1) + α x^2(n−2) ].

Then plugging σ̂_x^2(n−2) into the above,

    σ̂_x^2(n) = α^3 σ̂_x^2(n−3) + (1−α) [ x^2(n−1) + α x^2(n−2) + α^2 x^2(n−3) ].

Continuing this process N times, we arrive at

    σ̂_x^2(n) = α^N σ̂_x^2(n−N) + (1−α) Σ_{m=1}^{N} α^(m−1) x^2(n−m).

Taking the limit as N → ∞ (so that α^N → 0),

    σ̂_x^2(n) = (1−α) Σ_{m=1}^{∞} α^(m−1) x^2(n−m).

The fact that (1−α) Σ_{m=1}^{∞} α^(m−1) = 1 ensures that σ̂_x^2(n) is an unbiased estimate of the variance of a stationary, zero-mean x.
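In code, the recursion amounts to a one-line update per sample. The following sketch is illustrative only: it reuses the hypothetical adaptive_quantize from earlier and drives the update from past reconstructions in AQB fashion, so that a decoder could run the identical recursion:

    import numpy as np

    def recursive_sigma2_update(sigma2_prev, x_prev, alpha=0.98):
        """One step of the exponentially weighted variance recursion:
        sigma2(n) = alpha * sigma2(n-1) + (1 - alpha) * x(n-1)^2."""
        return alpha * sigma2_prev + (1.0 - alpha) * x_prev ** 2

    # AQB-flavored use: the recursion is driven by past reconstructions, so the
    # decoder can run the identical update and stay synchronized with the encoder.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(2000) * np.linspace(0.2, 2.0, 2000)   # toy non-stationary signal
    sigma2, codewords = 1.0, []                                    # arbitrary start-up variance
    for xn in x:
        q, x_hat = adaptive_quantize(xn, np.sqrt(sigma2))          # sketch from earlier section
        codewords.append(q)
        sigma2 = recursive_sigma2_update(sigma2, x_hat)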
Exponential AQF and AQB estimates of σ_x(n). SNR achieved: (a) 20.5 dB, (b) 28.0 dB, (c) 22.2 dB, (d) 24.1 dB.
Source: OpenStax, An introduction to source-coding: quantization, dpcm, transform coding, and sub-band coding. OpenStax CNX, Sep 25, 2009. Download for free at http://cnx.org/content/col11121/1.2