Lloyd-Max Quantizer: MSE-optimal thresholds $\{x_k\}$ and outputs $\{y_k\}$ can be determined given an input distribution $p_x(x)$, and the result is the Lloyd-Max quantizer.
Necessary conditions on $\{x_k\}$ and $\{y_k\}$ are
$$\frac{\partial}{\partial x_k}\sigma_q^2 = 0 \quad\text{and}\quad \frac{\partial}{\partial y_k}\sigma_q^2 = 0 \quad\text{for all } k.$$
Using equation 2 from Memoryless Scalar Quantization (third equation), $\frac{\partial}{\partial x_k}\sigma_q^2 = 0$, and $\frac{\partial}{\partial y_k}\sigma_q^2 = 0$ above,
$$x_k = \frac{y_k + y_{k+1}}{2}, \quad k = 1, \dots, L-1,$$
$$y_k = \frac{\int_{x_{k-1}}^{x_k} x\, p_x(x)\, dx}{\int_{x_{k-1}}^{x_k} p_x(x)\, dx}, \quad k = 1, \dots, L.$$
It can be shown that the conditions above are sufficient for global MMSE when
$$\frac{\partial^2}{\partial x^2} \ln p_x(x) \le 0,$$
which holds for uniform, Gaussian, and Laplace pdfs, but not Gamma.
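The log-concavity condition can be probed numerically by a central-difference estimate of $\frac{\partial^2}{\partial x^2}\ln p_x(x)$. In the sketch below, the Gaussian case is exact, while the Gamma-family member is an illustrative one-sided density with shape parameter $1/2$ (the text does not specify which Gamma pdf is meant):

```python
import math

# Central-difference estimate of the second derivative of ln p(x).
# The Gaussian is log-concave everywhere; a Gamma density with shape 1/2
# (an illustrative choice, not from the text) is not.
def d2_log(p, x, h=1e-4):
    return (math.log(p(x - h)) - 2 * math.log(p(x)) + math.log(p(x + h))) / h**2

gauss = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
gamma_half = lambda x: x ** (-0.5) * math.exp(-x) / math.gamma(0.5)

print(all(d2_log(gauss, x) < 0 for x in [0.5, 1.0, 2.0]))       # True
print(any(d2_log(gamma_half, x) > 0 for x in [0.5, 1.0, 2.0]))  # True
```

For the Gaussian, $\ln p_x(x) = -x^2/2 + \text{const}$, so the estimate is $\approx -1$ at every point; for the shape-$1/2$ Gamma density, $\frac{\partial^2}{\partial x^2}\ln p(x) = \frac{1}{2x^2} > 0$, violating the condition.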
Note: Iterative Procedure to Find $\{x_k\}$ and $\{y_k\}$:
- Choose $\hat{y}_1$.
- For $k = 1, \dots, L-1$: given $\hat{y}_k$ and $x_{k-1}$, solve [link] (lower equation) for $x_k$; given $x_k$ and $\hat{y}_k$, solve [link] (upper equation) for $\hat{y}_{k+1}$. end;
- Compare $\hat{y}_L$ to the $y_L$ calculated from [link] (lower equation) based on $x_{L-1}$ and $x_L$. Adjust $\hat{y}_1$ accordingly, and go to step 1.
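The iterative procedure above can be sketched numerically. In the following sketch, the standard Gaussian pdf truncated to $[-4, 4]$, the choice $L = 4$, and the bisection bracket for the initial guess $\hat{y}_1$ are all illustrative assumptions, not specified in the text:

```python
import math

# Sketch of the iterative procedure for a standard Gaussian input truncated
# to [-X_MAX, X_MAX].  X_MAX, L, and the y_1 bracket are assumptions.
X_MAX = 4.0
L = 4  # number of output levels

def pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def centroid(a, b, n=300):
    # Conditional mean of the pdf over (a, b): the "lower equation".
    xs = [a + (i + 0.5) * (b - a) / n for i in range(n)]
    return sum(x * pdf(x) for x in xs) / sum(pdf(x) for x in xs)

def next_threshold(x_prev, y_k):
    # Solve centroid(x_prev, x_k) = y_k for x_k by bisection;
    # the centroid increases monotonically with x_k.
    lo, hi = x_prev + 1e-9, X_MAX
    for _ in range(45):
        mid = 0.5 * (lo + hi)
        if centroid(x_prev, mid) < y_k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def run_chain(y1):
    # Steps 1-2: march across the quantizer from a guess for y_1, returning
    # thresholds, outputs, and the terminal mismatch checked in step 3.
    y = [y1]
    x = [-X_MAX]
    for _ in range(L - 1):
        x_k = next_threshold(x[-1], y[-1])
        x.append(x_k)
        y.append(2.0 * x_k - y[-1])  # midpoint ("upper") equation
    return x + [X_MAX], y, y[-1] - centroid(x[-1], X_MAX)

# Step 3: adjust y_1 by bisection until the terminal mismatch vanishes.
lo, hi = -1.8, -1.2  # assumed bracket for this pdf and L
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if run_chain(mid)[2] < 0:
        lo = mid
    else:
        hi = mid
x, y, _ = run_chain(0.5 * (lo + hi))
print([round(v, 3) for v in y])  # outputs close to +/-0.4528, +/-1.510
```

With $L = 4$ this recovers Max's classic Gaussian design: outputs near $\pm 0.4528$ and $\pm 1.510$, thresholds near $0$ and $\pm 0.9816$ (the truncation at $\pm 4$ has negligible effect).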
Lloyd-Max Performance for large $L$: As with the uniform quantizer, we can analyze quantization error performance for large $L$. Here, we assume that
- the pdf $p_x(x)$ is constant over $\mathcal{X}_k = (x_{k-1}, x_k]$ for $k = 1, \dots, L$,
- the pdf $p_x(x)$ is symmetric about $x = 0$,
- the input is bounded, i.e., $|x| \le x_{\max}$ for some (potentially large) $x_{\max}$.
So with assumption 1 and the definitions $\Delta_k := x_k - x_{k-1}$ and $P_k := \Pr\{x \in \mathcal{X}_k\}$, we can write
$$p_x(x) = \frac{P_k}{\Delta_k} \quad \text{for } x \in \mathcal{X}_k,$$
and thus, from equation 2 from Memoryless Scalar Quantization (lower equation), $\sigma_q^2$ becomes
$$\sigma_q^2 = \sum_{k=1}^{L} \frac{P_k}{\Delta_k} \int_{x_{k-1}}^{x_k} (x - y_k)^2\, dx.$$
For MSE-optimal $\{y_k\}$, we know
$$y_k = \frac{x_{k-1} + x_k}{2},$$
which is expected since the centroid of a flat pdf over $\mathcal{X}_k$ is simply the midpoint of $\mathcal{X}_k$.
Plugging this $y_k$ into [link],
$$\sigma_q^2 = \frac{1}{12} \sum_{k=1}^{L} P_k \Delta_k^2.$$
Note that for uniform quantization ($\Delta_k = \Delta$ for all $k$), the expression above reduces to the one derived earlier, $\sigma_q^2 = \Delta^2/12$.
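As a quick numerical sanity check on the expression above, the sketch below compares $\frac{1}{12}\sum_k P_k \Delta_k^2$ against direct numerical integration of $(x - y_k)^2\, p_x(x)$ with midpoint outputs; the uniform pdf on $[-1, 1]$ (so the flat-pdf assumption holds exactly) and the particular nonuniform thresholds are illustrative choices:

```python
# Compare (1/12) * sum(P_k * Delta_k^2) with direct numerical integration of
# the quantization error for a uniform pdf on [-1, 1] and arbitrary thresholds.
thresholds = [-1.0, -0.4, 0.1, 0.55, 1.0]  # illustrative nonuniform choice
p = 0.5                                     # pdf height on [-1, 1]

def interval_mse(a, b, y, n=2000):
    # numerically integrate (x - y)^2 * p over (a, b) by the midpoint rule
    h = (b - a) / n
    return sum((a + (i + 0.5) * h - y) ** 2 * p for i in range(n)) * h

direct, formula = 0.0, 0.0
for a, b in zip(thresholds, thresholds[1:]):
    delta = b - a
    y_k = 0.5 * (a + b)                     # centroid of a flat pdf = midpoint
    direct += interval_mse(a, b, y_k)
    formula += (p * delta) * delta ** 2 / 12.0  # P_k * Delta_k^2 / 12

print(abs(direct - formula) < 1e-6)  # True
```

Since $P_k = p\,\Delta_k$ here, both quantities equal $\sum_k p\,\Delta_k^3/12$; the only discrepancy is midpoint-rule discretization error.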
Now we minimize $\sigma_q^2$ w.r.t. the $\{\Delta_k\}$. The trick here is to define
$$\alpha_k := p_x(x)^{1/3}\, \Delta_k \quad \text{for } x \in \mathcal{X}_k,$$
so that $P_k \Delta_k^2 = p_x(x)\Delta_k^3 = \alpha_k^3$ and thus $\sigma_q^2 = \frac{1}{12}\sum_{k=1}^{L} \alpha_k^3$. For $p_x(x)$ constant over $\mathcal{X}_k$ and large $L$, the sum
$$\sum_{k=1}^{L} \alpha_k = \sum_{k=1}^{L} p_x(x)^{1/3}\, \Delta_k \approx \int_{-x_{\max}}^{x_{\max}} p_x(x)^{1/3}\, dx =: C$$
is (approximately) a constant independent of the $\{\Delta_k\}$, and so we have the following constrained optimization problem:
$$\min_{\{\alpha_\ell\}} \; \frac{1}{12}\sum_{\ell=1}^{L} \alpha_\ell^3 \quad \text{subject to} \quad \sum_{\ell=1}^{L} \alpha_\ell = C.$$
This may be solved using Lagrange multipliers.
Optimization via Lagrange multipliers
Consider the problem of minimizing the $N$-dimensional real-valued cost function $J(\mathbf{x})$, where $\mathbf{x} = (x_1, \dots, x_N)^t$, subject to $M$ real-valued equality constraints $f_m(\mathbf{x}) = a_m$, $m = 1, \dots, M$. This may be converted into an unconstrained optimization of dimension $N + M$ by introducing additional variables $\{\lambda_m\}_{m=1}^{M}$ known as Lagrange multipliers. The unconstrained cost function is
$$J_u(\mathbf{x}, \boldsymbol{\lambda}) = J(\mathbf{x}) + \sum_{m=1}^{M} \lambda_m \big( f_m(\mathbf{x}) - a_m \big),$$
and necessary conditions for its minimization are
$$\frac{\partial}{\partial x_n} J_u(\mathbf{x}, \boldsymbol{\lambda}) = 0, \quad n = 1, \dots, N,$$
$$\frac{\partial}{\partial \lambda_m} J_u(\mathbf{x}, \boldsymbol{\lambda}) = f_m(\mathbf{x}) - a_m = 0, \quad m = 1, \dots, M.$$
The typical procedure used to solve for the optimal $\mathbf{x}$ is the following:
- Equations for $x_n$, $n = 1, \dots, N$, in terms of $\{\lambda_m\}$ are obtained from [link] (upper equation).
- These $N$ equations are used in [link] (lower equation) to solve for the $M$ optimal $\lambda_m$.
- The optimal $\{\lambda_m\}$ are plugged back into the $N$ equations for $x_n$, yielding the optimal $\{x_n\}$.
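The three-step procedure can be illustrated on a small problem that is not from the text: minimize $J(\mathbf{x}) = \sum_n x_n^2$ subject to the single constraint $\sum_n x_n = 1$.

```python
import random

# The three-step Lagrange recipe on an illustrative problem:
# minimize J(x) = sum(x_n^2) subject to sum(x_n) = 1  (N variables, M = 1).
N = 5

# Step 1: d/dx_n [sum(x_k^2) + lam*(sum(x_k) - 1)] = 2*x_n + lam = 0,
#         so x_n = -lam/2 for every n.
# Step 2: plug into the constraint: N*(-lam/2) = 1  =>  lam = -2/N.
lam = -2.0 / N
# Step 3: plug lam back in to get the optimal x_n.
x_opt = [-lam / 2.0] * N  # x_n = 1/N for every n

J = lambda x: sum(v * v for v in x)

# Sanity check: perturbations that preserve the constraint never lower the cost.
random.seed(0)
for _ in range(100):
    d = [random.uniform(-1, 1) for _ in range(N)]
    mean = sum(d) / N
    d = [v - mean for v in d]  # zero-sum perturbation keeps feasibility
    assert J([a + b for a, b in zip(x_opt, d)]) >= J(x_opt)
print(x_opt[0])  # 0.2
```

The check works because $J(\mathbf{x}_{\mathrm{opt}} + \mathbf{d}) = J(\mathbf{x}_{\mathrm{opt}}) + \sum_n d_n^2$ whenever $\sum_n d_n = 0$, so every feasible perturbation strictly increases the cost.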
Necessary conditions are
$$\frac{\partial}{\partial \alpha_\ell}\left[\frac{1}{12}\sum_{k=1}^{L} \alpha_k^3 + \lambda\Big(\sum_{k=1}^{L} \alpha_k - C\Big)\right] = \frac{1}{4}\alpha_\ell^2 + \lambda = 0, \quad \ell = 1, \dots, L,$$
$$\frac{\partial}{\partial \lambda}\left[\frac{1}{12}\sum_{k=1}^{L} \alpha_k^3 + \lambda\Big(\sum_{k=1}^{L} \alpha_k - C\Big)\right] = \sum_{\ell=1}^{L} \alpha_\ell - C = 0,$$
which can be combined to solve for $\lambda$: the first condition gives $\alpha_\ell = 2\sqrt{-\lambda}$ for every $\ell$, and substituting this into the second gives $2L\sqrt{-\lambda} = C$, so that
$$\lambda = -\frac{C^2}{4L^2}.$$
Plugging $\lambda$ back into the expression for $\alpha_\ell$, we find
$$\alpha_\ell = \frac{C}{L} = \frac{1}{L}\int_{-x_{\max}}^{x_{\max}} p_x(x)^{1/3}\, dx \quad \text{for all } \ell.$$
Using the definition of $\alpha_\ell$, the optimal decision spacing is
$$\Delta_\ell = \frac{p_x(x)^{-1/3}}{L}\int_{-x_{\max}}^{x_{\max}} p_x(x')^{1/3}\, dx' \quad \text{for } x \in \mathcal{X}_\ell,$$
and the minimum quantization error variance is
$$\sigma_{q,\min}^2 = \frac{1}{12}\sum_{\ell=1}^{L}\left(\frac{C}{L}\right)^3 = \frac{1}{12 L^2}\left(\int_{-x_{\max}}^{x_{\max}} p_x(x)^{1/3}\, dx\right)^3.$$
An interesting observation is that $\frac{1}{12}\alpha_\ell^3$, the $\ell^{\mathrm{th}}$ interval's optimal contribution to $\sigma_q^2$, is constant over $\ell$.
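The closed-form results above can be checked numerically. The sketch below places thresholds so that each interval receives an equal share of $\int p_x(x)^{1/3}\, dx$ (i.e., constant $\alpha_\ell$) and compares the resulting error variance with $C^3/(12L^2)$; the truncated-Laplace pdf and $L = 64$ are illustrative assumptions:

```python
import math

# Numerical check of the optimal-spacing result: equal shares of the integral
# of p^(1/3) per interval, versus the closed-form minimum C^3 / (12 L^2).
# The pdf (Laplace truncated to [-4, 4]) and L are illustrative assumptions.
X_MAX, L, N = 4.0, 64, 100_000

c = 1.0 / (2.0 * (1.0 - math.exp(-X_MAX)))  # normalization on [-X_MAX, X_MAX]
pdf = lambda x: c * math.exp(-abs(x))

h = 2 * X_MAX / N
xs = [-X_MAX + (i + 0.5) * h for i in range(N)]
w = [pdf(x) ** (1.0 / 3.0) * h for x in xs]
C = sum(w)  # numerical integral of p^(1/3)

# thresholds: the running sum of p^(1/3) crosses successive multiples of C/L
thresholds = [-X_MAX]
acc, k = 0.0, 1
for x, wi in zip(xs, w):
    acc += wi
    if k < L and acc >= k * C / L:
        thresholds.append(x)
        k += 1
thresholds.append(X_MAX)

# actual error variance with midpoint outputs on each interval
sigma2 = 0.0
for a, b in zip(thresholds, thresholds[1:]):
    y, m = 0.5 * (a + b), 200
    hh = (b - a) / m
    sigma2 += sum((a + (j + 0.5) * hh - y) ** 2 * pdf(a + (j + 0.5) * hh)
                  for j in range(m)) * hh

predicted = C ** 3 / (12.0 * L ** 2)
print(abs(sigma2 / predicted - 1.0) < 0.02)  # True
```

The small residual gap comes from the high-resolution (flat-pdf) approximation itself: within each interval the pdf is not exactly constant, so the agreement tightens as $L$ grows.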