
Source compression

Consider the following 5-letter source.

Letter Probability
a 0.5
b 0.25
c 0.125
d 0.0625
e 0.0625
  1. Find this source's entropy.
  2. Show that the simple binary coding is inefficient.
  3. Find an unequal-length codebook for this sequence that satisfies the Source Coding Theorem. Does your code achieve the entropy limit?
  4. How much more efficient is this code than the simple binary code? (A Matlab check sketch follows this problem.)
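
As a quick numerical check on parts 1, 2, and 4, here is a minimal Matlab sketch. The prefix code it assumes for part 3 (a = 0, b = 10, c = 110, d = 1110, e = 1111) is one illustrative candidate, not the only valid answer.

    % Entropy of the 5-letter source, in bits per letter
    p = [0.5 0.25 0.125 0.0625 0.0625];
    H = -sum(p .* log2(p));
    % Simple binary coding of 5 letters needs ceil(log2(5)) = 3 bits per letter
    L_simple = ceil(log2(length(p)));
    % Candidate prefix-code lengths for a=0, b=10, c=110, d=1110, e=1111
    len = [1 2 3 4 4];
    L_avg = sum(p .* len);
    fprintf('H = %.4f, simple = %d, candidate = %.4f bits/letter\n', ...
            H, L_simple, L_avg);

Because every probability here is a power of 1/2, this candidate code's average length equals the entropy exactly.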

Source compression

Consider the following 5-letter source.

Letter Probability
a 0.4
b 0.2
c 0.15
d 0.15
e 0.1
  1. Find this source's entropy.
  2. Show that the simple binary coding is inefficient.
  3. Find the Huffman code for this source. What is its average code length? (A check sketch follows this list.)
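
As a check on parts 1 and 3, a minimal Matlab sketch, assuming the Communications Toolbox function huffmandict is available:

    % Entropy of the source
    p = [0.4 0.2 0.15 0.15 0.1];
    H = -sum(p .* log2(p));
    % Huffman code and its average length (requires Communications Toolbox)
    [dict, avglen] = huffmandict({'a','b','c','d','e'}, p);
    fprintf('H = %.4f bits, Huffman average length = %.4f bits\n', H, avglen);

Because these probabilities are not all powers of 1/2, the Huffman average length exceeds the entropy, though it stays well below the 3 bits per letter of simple binary coding.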

Speech compression

When we sample a signal, such as speech, we quantize the signal's amplitude to a set of integers. For a b-bit converter, signal amplitudes are represented by 2^b integers. Although these integers could be represented by a binary code for digital transmission, we should consider whether a Huffman coding would be more efficient.

  1. Load into Matlab the segment of speech contained in y.mat. Its sampled values lie in the interval (-1, 1). To simulate a 3-bit converter, we use Matlab's round function to create quantized amplitudes corresponding to the integers [0 1 2 3 4 5 6 7].
    • y_quant = round(3.5*y + 3.5);
    Find the relative frequency of occurrence of quantized amplitude values. The following Matlab program computes the number of times each quantized value occurs.
    • for n = 0:7, count(n+1) = sum(y_quant == n); end
    Find the entropy of this source.
  2. Find the Huffman code for this source. How would you characterize this source code in words?
  3. How many fewer bits would be used in transmitting this speech segment with your Huffman code in comparison to simple binary coding? (A check sketch follows this problem.)
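
A minimal Matlab sketch tying parts 1 and 3 together, assuming y.mat defines the speech vector y described above:

    load y.mat                        % speech samples y, amplitudes in (-1, 1)
    y_quant = round(3.5*y + 3.5);     % 3-bit quantization to integers 0..7
    count = zeros(1, 8);
    for n = 0:7
        count(n+1) = sum(y_quant == n);
    end
    p = count / sum(count);           % relative frequencies
    p = p(p > 0);                     % drop empty bins before taking log2
    H = -sum(p .* log2(p));           % empirical entropy, bits per sample
    nbits_simple = 3 * sum(count);    % simple binary coding: 3 bits per sample

Comparing nbits_simple with the average Huffman length times the number of samples gives the savings asked for in part 3.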

Digital communication

In a digital cellular system, a signal bandlimited to 5 kHz is sampled with a two-bit A/D converter at its Nyquist frequency. The sample values are found to have the relative frequencies shown below.

Sample Value Probability
0 0.15
1 0.35
2 0.3
3 0.2

We send the bit stream consisting of Huffman-coded samples using one of the two depicted signal sets.

  1. What is the data rate of the compressed source? (A check sketch follows this problem.)
  2. Which choice of signal set maximizes the communication system's performance?
  3. With no error-correcting coding, what signal-to-noise ratio would be needed for your chosen signal set to guarantee that the bit error probability will not exceed 10^-3? If the receiver moves twice as far from the transmitter (relative to the distance at which the 10^-3 error rate was obtained), how does the performance change?
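
A check sketch for part 1, again assuming huffmandict is available; parts 2 and 3 depend on the depicted signal sets and are not reproduced here:

    % Compressed data rate = (average Huffman length) x (Nyquist sampling rate)
    p = [0.15 0.35 0.3 0.2];              % sample-value probabilities
    [dict, avglen] = huffmandict(0:3, p); % requires Communications Toolbox
    fs = 2 * 5000;                        % Nyquist rate for a 5 kHz bandlimit, samples/s
    R = avglen * fs;                      % compressed data rate, bits/s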

Signal compression

Letters drawn from a four-symbol alphabet have the indicated probabilities.

Letter Probability
a 1/3
b 1/3
c 1/4
d 1/12
  1. What is the average number of bits necessary to represent this alphabet?
  2. Using a simple binary code for this alphabet, a two-bit block of data bits naturally emerges. Find an error-correcting code for two-bit data blocks that corrects all single-bit errors.
  3. Can you modify your code so that the probability of the letter a being confused with the letter d is minimized? If so, what is your new code; if not, demonstrate that this goal cannot be achieved. (A check sketch for parts 1 and 2 follows this problem.)
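
A minimal Matlab check for parts 1 and 2. The (5,2) codeword set below is a hypothetical candidate; any code whose minimum Hamming distance is at least 3 corrects all single-bit errors.

    % Part 1: entropy of the four-letter alphabet, bits per letter
    p = [1/3 1/3 1/4 1/12];
    H = -sum(p .* log2(p));
    % Part 2: candidate (5,2) code; rows are codewords for 00, 01, 10, 11
    C = [0 0 0 0 0;
         0 1 1 0 1;
         1 0 1 1 0;
         1 1 0 1 1];
    dmin = inf;
    for i = 1:size(C,1)-1
        for j = i+1:size(C,1)
            dmin = min(dmin, sum(C(i,:) ~= C(j,:)));
        end
    end
    % Single-bit error correction requires dmin >= 3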

Source: OpenStax, Fundamentals of Electrical Engineering I. OpenStax CNX, Aug 6, 2008. Download for free at http://legacy.cnx.org/content/col10040/1.9