where $Q$ is the prior induced by the coding length function $l$ .
Note that
Therefore,
In fact, Gallager showed that ${R}_{n}^{+}={R}_{n}^{-}$ . That is, the min-max and max-min redundancies are equal.
Let us revisit the Bernoulli source ${p}_{\theta}$ where $\theta \in \Lambda =[0,1]$ . From the definition of [link] , which relies on a uniform prior over the sources, i.e., $w\left(\theta \right)=1,\forall \theta \in \Lambda $ , it can be shown that there exists a universal code with length function $l$ such that
$l\left(x\right)\le n{h}_{2}\left(\frac{{n}_{1}}{n}\right)+log\left(n+1\right),$
where ${n}_{1}$ is the number of ones in $x$ and ${h}_{2}\left(p\right)=-plog\left(p\right)-(1-p)log(1-p)$ is the binary entropy. That is, the redundancy is approximately $log\left(n\right)$ bits. Clarke and Barron [link] studied the weighting approach,
and constructed a prior that achieves ${R}_{n}^{-}={R}_{n}^{+}$ precisely for memoryless sources.
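The uniform-prior construction for the Bernoulli source can be checked numerically. The sketch below (a Python illustration; the helper names are mine, not from the text) uses the fact that under $w\left(\theta \right)=1$ the mixture probability of a sequence with ${n}_{0}$ zeros and ${n}_{1}$ ones is a Beta integral, $q\left(x\right)={\int }_{0}^{1}{\theta }^{{n}_{1}}{\left(1-\theta \right)}^{{n}_{0}}d\theta =\frac{{n}_{0}!{n}_{1}!}{(n+1)!}$ , and compares the resulting code length against the empirical entropy $n{h}_{2}\left({n}_{1}/n\right)$ :

```python
import math

def mixture_code_length_bits(n0, n1):
    """Code length -log2 q(x) of the uniform-prior mixture code, where
    q(x) = integral_0^1 theta^n1 (1-theta)^n0 dtheta = n0! n1! / (n+1)!."""
    log_q = (math.lgamma(n0 + 1) + math.lgamma(n1 + 1)
             - math.lgamma(n0 + n1 + 2))          # natural log of q(x)
    return -log_q / math.log(2)

def empirical_entropy_bits(n0, n1):
    """n * h2(n1/n): the code length had theta = n1/n been known in advance."""
    n = n0 + n1
    return sum(-k * math.log2(k / n) for k in (n0, n1) if k > 0)

# Example sequence statistics (arbitrary illustrative values).
n0, n1 = 600, 400
n = n0 + n1
redundancy = mixture_code_length_bits(n0, n1) - empirical_entropy_bits(n0, n1)
print(redundancy, math.log2(n + 1))  # redundancy stays below log2(n+1) bits
```

The redundancy of this code never exceeds $log\left(n+1\right)$ bits, matching the claim that the uniform prior pays roughly $log\left(n\right)$ bits.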
Theorem 5 [link] For a memoryless source with an alphabet of size $r$ and parameter vector $\theta =\left(p\left(0\right),p\left(1\right),\cdots ,p\left(r-1\right)\right)$ ,
${R}_{n}^{-}={R}_{n}^{+}=\frac{r-1}{2}\frac{log\left(\frac{n}{2\pi e}\right)}{n}+\frac{1}{n}log{\int }_{\Lambda }\sqrt{det\left(I\left(\theta \right)\right)}\phantom{\rule{0.16em}{0ex}}d\theta +\frac{{O}_{n}\left(1\right)}{n},$
where ${O}_{n}\left(1\right)$ vanishes uniformly as $n\to \infty $ for any compact subset of $\Lambda $ , and the matrix with entries
${I}_{ij}\left(\theta \right)=E\left[\frac{\partial log{p}_{\theta }\left(x\right)}{\partial {\theta }_{i}}\cdot \frac{\partial log{p}_{\theta }\left(x\right)}{\partial {\theta }_{j}}\right]$
is the Fisher information.
Note that when the distribution is sensitive to changes in the parameter, the Fisher information $I\left(\theta \right)$ is large, which increases the redundancy. That is, a parameter that is easy to estimate is expensive to describe: good sensitivity means bad universal compression.
Denote
$J\left(\theta \right)=\frac{\sqrt{det\left(I\left(\theta \right)\right)}}{{\int }_{\Lambda }\sqrt{det\left(I\left(\phi \right)\right)}\phantom{\rule{0.16em}{0ex}}d\phi };$
this is known as Jeffreys' prior . Using $w\left(\theta \right)=J\left(\theta \right)$ , it can be shown that ${R}_{n}^{-}={R}_{n}^{+}$ .
Let us derive the Fisher information $I\left(\theta \right)$ for the Bernoulli source. Because ${p}_{\theta }\left(1\right)=\theta $ and ${p}_{\theta }\left(0\right)=1-\theta $ ,
$E\left[{\left(\frac{\partial log{p}_{\theta }\left(x\right)}{\partial \theta }\right)}^{2}\right]=\theta \cdot \frac{1}{{\theta }^{2}}+\left(1-\theta \right)\cdot \frac{1}{{\left(1-\theta \right)}^{2}}=\frac{1}{\theta }+\frac{1}{1-\theta }.$
Therefore, the Fisher information satisfies $I\left(\theta \right)=\frac{1}{\theta (1-\theta )}$ .
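This value can be verified empirically: the Fisher information is the expected squared score, so averaging the squared score over simulated Bernoulli draws should approach $\frac{1}{\theta (1-\theta )}$ . A minimal Monte Carlo sketch (my own illustration, not from the text):

```python
import random
import math

def score(x, theta):
    """Score d/dtheta log p_theta(x) for a Bernoulli sample x in {0, 1}:
    1/theta when x = 1, and -1/(1-theta) when x = 0."""
    return 1.0 / theta if x == 1 else -1.0 / (1.0 - theta)

def empirical_fisher(theta, trials=200_000, seed=0):
    """Estimate I(theta) = E[score^2] by Monte Carlo."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = 1 if rng.random() < theta else 0
        total += score(x, theta) ** 2
    return total / trials

theta = 0.3
print(empirical_fisher(theta), 1.0 / (theta * (1.0 - theta)))  # both near 4.76
```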
Recall the Krichevsky–Trofimov coding, which was mentioned in [link] . Using the definition of Jeffreys' prior [link] , we see that $J\left(\theta \right)\propto \frac{1}{\sqrt{\theta (1-\theta )}}$ . Taking the integral over Jeffreys' prior,
${\int }_{0}^{1}\frac{d\theta }{\sqrt{\theta (1-\theta )}}=\frac{\Gamma \left(\frac{1}{2}\right)\Gamma \left(\frac{1}{2}\right)}{\Gamma \left(1\right)}=\pi ,$
where we used the gamma function, and hence $J\left(\theta \right)=\frac{1}{\pi \sqrt{\theta (1-\theta )}}$ . It can be shown that
$q\left(x\right)={\int }_{0}^{1}J\left(\theta \right){\theta }^{{n}_{1}}{\left(1-\theta \right)}^{{n}_{0}}d\theta =\frac{\Gamma \left({n}_{0}+\frac{1}{2}\right)\Gamma \left({n}_{1}+\frac{1}{2}\right)}{\pi \Gamma \left(n+1\right)},$
where ${n}_{0}$ and ${n}_{1}$ denote the numbers of zeros and ones in $x$ , respectively.
Similar to before, this universal code can be implemented sequentially. It is due to Krichevsky and Trofimov [link] ; its redundancy satisfies Theorem 5 by Clarke and Barron [link] , and it is commonly used in universal lossless compression.
Let us consider – on an intuitive level – why ${C}_{n}\approx \frac{r-1}{2}\frac{log\left(n\right)}{n}$ . Expending $\frac{r-1}{2}log\left(n\right)$ bits allows us to differentiate between ${\left(\sqrt{n}\right)}^{r-1}$ parameter vectors. That is, each of the $r-1$ free parameters is resolved to one of $\sqrt{n}$ levels. Now consider a Bernoulli RV with (unknown) parameter $\theta $ .
One perspective is that with $n$ drawings of the RV, the standard deviation in the number of 1's is $O\left(\sqrt{n}\right)$ . That is, $\sqrt{n}$ levels differentiate between parameter levels up to a resolution that reflects the randomness of the experiment.
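This first perspective is easy to simulate: the standard deviation of the number of 1's in $n$ Bernoulli drawings is $\sqrt{n\theta (1-\theta )}=O\left(\sqrt{n}\right)$ . A small sketch (illustrative values of my choosing):

```python
import random
import statistics
import math

def count_std(theta, n, trials=1000, seed=1):
    """Std of the number of 1's across repeated length-n Bernoulli experiments."""
    rng = random.Random(seed)
    counts = [sum(rng.random() < theta for _ in range(n)) for _ in range(trials)]
    return statistics.pstdev(counts)

theta, n = 0.3, 2500
print(count_std(theta, n), math.sqrt(n * theta * (1 - theta)))  # both near 22.9
```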
A second perspective is that of coding a sequence of Bernoulli outcomes with an imprecise parameter, where it is convenient to think of a universal code in terms of first quantizing the parameter and then using that (imprecise) parameter to encode the input $x$ . For the Bernoulli example, the maximum likelihood parameter satisfies ${\theta }_{ML}=\frac{{n}_{1}}{n}$ , where ${n}_{1}$ is the number of ones in $x$ .
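A numeric sketch of this second perspective (the sequence statistics below are arbitrary illustrative values): quantizing the parameter to one of roughly $\sqrt{n}$ levels costs only $O\left(1\right)$ extra bits relative to encoding with the maximum likelihood parameter, while describing the chosen level costs about $\frac{1}{2}log\left(n\right)$ bits.

```python
import math

def code_length_bits(n0, n1, theta):
    """Ideal code length (bits) for a sequence with n0 zeros and n1 ones
    when it is encoded using the (possibly imprecise) parameter theta."""
    return -(n1 * math.log2(theta) + n0 * math.log2(1.0 - theta))

n0, n1 = 5800, 4200
n = n0 + n1
theta_ml = n1 / n                      # maximum-likelihood parameter n1/n

# Quantize the parameter to one of ~sqrt(n) levels inside (0, 1).
levels = math.isqrt(n)                 # 100 levels for n = 10000
k = min(levels - 1, max(0, round(theta_ml * levels - 0.5)))
theta_q = (k + 0.5) / levels           # grid point nearest to theta_ml

excess = code_length_bits(n0, n1, theta_q) - code_length_bits(n0, n1, theta_ml)
describe = math.log2(levels)           # bits to tell the decoder which level
print(excess, describe)  # excess is O(1) bits; describe = (1/2) log2(n)
```

The total overhead, quantization loss plus the description of the level, is roughly $\frac{1}{2}log\left(n\right)+O\left(1\right)$ bits, matching the $\frac{r-1}{2}log\left(n\right)$ intuition with $r=2$ .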