Entropy of source information was discussed in the third-year E5 Information and Coding course. For an image $x$, quantised to $M$ levels, the entropy ${H}_{x}$ is defined as:
$${H}_{x} = \sum_{i=1}^{M} {p}_{i} \log_2 \frac{1}{{p}_{i}}$$
where ${p}_{i}$ is the probability of the $i$th quantiser level.
${H}_{x}$ represents the mean number of bits per pel with which the quantised image $x$ can be represented using an ideal variable-length entropy code. A Huffman code usually approximates this bit rate quite closely.
To obtain the total number of bits needed to code an image (or subimage) $x$ containing $N$ pels, multiply the entropy by the number of pels: $N{H}_{x}$.
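The calculation above can be sketched in NumPy. This is a minimal illustration, not code from the course: `entropy_bits_per_pel` is a hypothetical helper that quantises an image with a uniform step size, estimates the level probabilities ${p}_{i}$ from a histogram, and returns ${H}_{x}$ together with the total bit count $N{H}_{x}$.

```python
import numpy as np

def entropy_bits_per_pel(x, q_step):
    """Entropy H_x (bits/pel) of image x after uniform quantisation
    with step size q_step, plus the total bits N * H_x for the image."""
    # Uniform quantisation: map each pel to an integer bin index.
    q = np.round(x / q_step).astype(int)
    # Estimate the probability p_i of each occupied quantiser level.
    _, counts = np.unique(q, return_counts=True)
    p = counts / q.size
    # H_x = sum_i p_i * log2(1 / p_i); unoccupied levels contribute nothing.
    h_x = float(np.sum(p * np.log2(1.0 / p)))
    return h_x, h_x * q.size

# Example on a synthetic zero-mean "image" (stand-in for real pel data).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 30.0, size=(256, 256))
h, total = entropy_bits_per_pel(x, q_step=15)
```

With real image data the same helper would reproduce the per-image and per-subimage entropies discussed below.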
The probabilities ${p}_{i}$ and entropies ${h}_{i}$ are shown for the original Lenna image and for each of the subimages in the previous figure, assuming a uniform quantiser with a step size ${Q}_{\mathrm{step}}=15$ in each case. The original Lenna image contained pel values from 3 to 238, and a mean level of 120 was subtracted from each pel value before the image was analysed or transformed, so that all samples would be approximately evenly distributed about zero (a natural feature of highpass subimages).
The Haar transform preserves energy, so the expected distortion energy from quantising the transformed image $y$ with a given step size ${Q}_{\mathrm{step}}$ will be approximately the same as that from quantising the input image $x$ with the same step size. This is because quantising errors can usually be modeled as independent random processes with variance (energy) $\frac{{Q}_{\mathrm{step}}^{2}}{12}$, and the total squared quantising error (distortion) will tend to the sum of these variances over all pels. This applies whether the error energies are summed before or after the inverse transform (reconstruction) in the decoder.
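The $\frac{{Q}_{\mathrm{step}}^{2}}{12}$ figure follows from assuming the quantising error is uniformly distributed over $(-{Q}_{\mathrm{step}}/2, {Q}_{\mathrm{step}}/2)$, and is easy to check numerically. A quick sketch (synthetic data, not from the course):

```python
import numpy as np

# Numerical check that uniform quantisation error has variance Q^2/12,
# assuming smoothly distributed input so the error is roughly uniform
# over (-Q/2, Q/2).
rng = np.random.default_rng(1)
q_step = 15
x = rng.uniform(-128, 128, size=1_000_000)   # densely spread pel values
err = x - q_step * np.round(x / q_step)      # quantising error per sample
measured = err.var()                         # should approach 15**2/12 = 18.75
predicted = q_step ** 2 / 12
```

The measured variance converges on 18.75 as the sample count grows, matching the model used above.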
Hence equal quantiser step sizes before and after an energy-preserving transformation should generate equivalent quantising distortions, providing a fair estimate of the compression achieved by the transformation.
The first two columns of the table (original and level 1) compare the entropy (mean bit rate) per pel of the original image (3.71 bit / pel) with that of the Haar transformed image in the previous figure (2.08 bit / pel), using ${Q}_{\mathrm{step}}=15$. Notice that the entropy of the original image is almost as great as the 4 bit / pel that would be needed to code the 16 levels using a simple fixed-length code, because the histogram is relatively uniform.
The level 1 column of the table shows the contribution of each of the subimages in the previous figure to the total entropy per pel (the entropies from the table have been divided by 4, since each subimage contains one quarter of the total number of pels). The Lo-Lo subimage contributes 56% of the total entropy (bit rate) and has spatial correlations similar to those of the original image. Hence it is a logical step to apply the Haar transform again to this subimage.
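The level-1 decomposition and the divide-by-4 entropy accounting can be sketched as follows. This is an illustrative implementation of one level of the orthonormal (energy-preserving) 2-D Haar transform, using sum/difference pairs scaled by $1/\sqrt{2}$; `haar_level1` and `entropy` are hypothetical helper names, and the random array stands in for a real image.

```python
import numpy as np

def haar_level1(x):
    """One level of the orthonormal 2-D Haar transform: split x into
    four half-size subimages (Lo-Lo, Hi-Lo, Lo-Hi, Hi-Hi)."""
    # 1-D Haar along rows: scaled sums and differences of pel pairs.
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Repeat along columns of each half.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)   # Lo-Lo
    hl = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)   # Hi-Lo
    lh = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)   # Lo-Hi
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)   # Hi-Hi
    return ll, hl, lh, hh

def entropy(x, q_step):
    """Entropy (bits/pel) of x quantised with uniform step q_step."""
    q = np.round(x / q_step).astype(int)
    _, counts = np.unique(q, return_counts=True)
    p = counts / q.size
    return float(np.sum(p * np.log2(1.0 / p)))

rng = np.random.default_rng(2)
x = rng.normal(0.0, 30.0, size=(128, 128))
subs = haar_level1(x)
# Each subimage holds 1/4 of the pels, so its contribution to the
# overall bit rate is its entropy divided by 4.
contributions = [entropy(s, 15) / 4 for s in subs]
```

Because the transform is orthonormal, the total energy of the four subimages equals that of the input, which is what justifies using the same ${Q}_{\mathrm{step}}$ before and after transformation. (On random noise the Lo-Lo subimage will not dominate as it does for real images, since noise has no spatial correlation.)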