In the module Use of Laplacian PDFs in Image Compression we assumed that ideal entropy coding was used when calculating the bit rates for the coded data. In practice we must use real codes, and we shall now see how this affects the compression performance.
There are three main techniques for achieving entropy coding. First we consider the change in compression performance if simple Huffman coding is used to code the subimages of the 4-level Haar transform.
The calculation of entropy in our earlier discussion assumed that each message with probability ${p}_{i}$ could be represented by a word of length ${l}_{i}=-\log_{2}{p}_{i}$ bits. Huffman codes require the ${l}_{i}$ to be integers, and hence effectively assume that the ${p}_{i}$ are adjusted to become: $$\hat{p}_{i}=2^{-l_{i}}$$
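To make the integer-length constraint concrete, here is an illustrative Python sketch (not part of the module, which refers to MATLAB code): it derives Huffman code lengths for a small, hypothetical probability set using a heap, then compares the true entropy $H$ with the mean word length $\widehat{H}=\sum_{i}p_{i}l_{i}$ obtained from the integer lengths.

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Code-word lengths of a binary Huffman code for the given probabilities."""
    # Heap items: (probability, unique counter for tie-breaking, symbols in subtree).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)   # two least probable subtrees
        p2, c2, s2 = heapq.heappop(heap)
        for s in s1 + s2:                 # each symbol in a merged subtree gains one bit
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, c2, s1 + s2))
    return lengths

probs = [0.6, 0.2, 0.1, 0.1]              # hypothetical example histogram
lengths = huffman_lengths(probs)
H = -sum(p * log2(p) for p in probs)              # ideal entropy
H_hat = sum(p * l for p, l in zip(probs, lengths))  # Huffman mean word length
print(lengths, round(H, 3), round(H_hat, 3))
```

The rounding of $-\log_{2}p_{i}$ to integer lengths is what makes $\widehat{H}\geq H$, with the gap growing as the $p_{i}$ move away from powers of $1/2$.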
We can use the probability histograms that generated the entropy plots in the figures of level 1, level 2, level 3 and level 4 energies to calculate the Huffman entropies $\widehat{H}$ for each subimage, and compare these with the true entropies to see the loss in performance caused by using real Huffman codes.
An algorithm for finding the optimum code sizes ${l}_{i}$ is recommended in the JPEG specification [the JPEG Book, Appendix A, Annex K.2, fig. K.1], and a MATLAB M-file implementing it is provided.
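The Annex K.2 flowchart can be transcribed into Python as a sketch (this is my reading of the JPEG CODE_SIZE procedure, not the module's M-file; ties are broken here by lowest index, whereas the spec prefers the larger index, which can swap lengths between equally frequent symbols):

```python
def jpeg_code_sizes(freq):
    """Huffman code lengths via the JPEG code-size procedure (Annex K.2, fig. K.1)."""
    freq = list(freq)            # the procedure modifies frequencies, so copy
    n = len(freq)
    codesize = [0] * n
    others = [-1] * n            # chains together symbols merged into one subtree
    while True:
        # v1: least-frequent active symbol; v2: next least frequent.
        active = sorted((i for i in range(n) if freq[i] > 0),
                        key=lambda i: freq[i])
        if len(active) < 2:
            return codesize
        v1, v2 = active[0], active[1]
        freq[v1] += freq[v2]     # merge v2's subtree into v1's
        freq[v2] = 0
        codesize[v1] += 1        # every symbol chained to v1 gains one bit
        while others[v1] != -1:
            v1 = others[v1]
            codesize[v1] += 1
        others[v1] = v2          # append v2's chain to the end of v1's chain
        codesize[v2] += 1        # every symbol chained to v2 gains one bit
        while others[v2] != -1:
            v2 = others[v2]
            codesize[v2] += 1

print(jpeg_code_sizes([1, 1, 2, 4]))   # dyadic frequencies give exact lengths
```

For dyadic frequencies such as these the resulting lengths satisfy the Kraft inequality with equality, so the mean word length equals the entropy.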
| Column: | 1 | 2 | 3 | 4 | 5 | 6 | |
|---|---|---|---|---|---|---|---|
| | | | 0.0264 | 0.0265 | 0.0264 | 0.0266 | |
| | | | 0.0220 | 0.0222 | 0.0221 | 0.0221 | Level 4 |
| | | | 0.0186 | 0.0187 | 0.0185 | 0.0186 | |
| | | | 0.0171 | 0.0172 | 0.0171 | 0.0173 | |
| | | | 0.0706 | 0.0713 | 0.0701 | 0.0705 | |
| | | | 0.0556 | 0.0561 | 0.0557 | 0.0560 | Level 3 |
| | 3.7106 | 3.7676 | 0.0476 | 0.0482 | 0.0466 | 0.0471 | |
| | | | 0.1872 | 0.1897 | 0.1785 | 0.1796 | |
| | | | 0.1389 | 0.1413 | 0.1340 | 0.1353 | Level 2 |
| | | | 0.1096 | 0.1170 | 0.1038 | 0.1048 | |
| | | | 0.4269 | 0.4566 | 0.3739 | 0.3762 | |
| | | | 0.2886 | 0.3634 | 0.2691 | 0.2702 | Level 1 |
| | | | 0.2012 | 0.3143 | 0.1819 | 0.1828 | |
| Totals: | 3.7106 | 3.7676 | 1.6103 | 1.8425 | 1.4977 | 1.5071 | |
The table above lists the results of applying this algorithm to the probability histograms. Columns 1 and 2 compare the ideal entropy with the mean word length or bit rate obtained from a Huffman code (the Huffman entropy) for the case of the untransformed image, where the original pels are quantized with ${Q}_{\mathrm{step}}=15$. We see that the increase in bit rate from using the real code is: $$\frac{3.7676}{3.7106}-1=1.5\%$$ But when we do the same for the 4-level transformed subimages, we get columns 3 and 4. Here we see that real Huffman codes require an increase in bit rate of: $$\frac{1.8425}{1.6103}-1=14.4\%$$ Comparing the results for each subimage in columns 3 and 4, we see that most of the increase in bit rate arises in the three level-1 subimages at the bottom of the columns. This is because each of the probability histograms for these subimages (see the level 1 histograms) contains one probability that is greater than 0.5. Huffman codes cannot allocate a word length of less than 1 bit to a given event, and so they start to lose efficiency rapidly when $-\log_{2}{p}_{i}$ becomes less than 1, i.e. when ${p}_{i}>0.5$.
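The 1-bit floor is easiest to see for a two-symbol source, where a Huffman code must spend exactly 1 bit per symbol however skewed the probabilities are. This small Python check (an illustration, not data from the module's histograms) tabulates the resulting excess bit rate as $p_{i}$ grows past 0.5:

```python
from math import log2

# Binary source: one symbol with probability p, the other with 1 - p.
# A Huffman code has no choice but to assign 1 bit to each symbol.
for p in (0.5, 0.6, 0.7, 0.8, 0.9):
    H = -p * log2(p) - (1 - p) * log2(1 - p)   # ideal entropy in bits/symbol
    print(f"p = {p:.1f}:  H = {H:.3f} bits,  Huffman = 1 bit,  "
          f"excess = {1 / H - 1:.0%}")
```

At $p=0.5$ the code is exactly optimal, while by $p=0.9$ the entropy has fallen to about 0.47 bits and the fixed 1-bit word costs more than double the ideal rate, which is why the level-1 subimages dominate the 14.4% loss above.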