
There are also eight remaining codes or table entries, as 32 - 24 = 8, and these represent double error patterns which, as can be seen, lie an equal Hamming distance from at least 2 of the initial 4 codewords in the top row. Note, for example, that errors in the first two digits of the 00000 codeword result in us receiving 11000. However, this bit pattern is identified in [link] as a single error from codeword 11100, as we assume that one error is a much more likely occurrence than two errors!
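The distance comparison above can be checked directly. This is a minimal sketch using only the two codewords quoted in the text (00000 and 11100); the full (5, 2) code has two further codewords not listed here.

```python
def hamming_distance(a: str, b: str) -> int:
    """Count the bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

received = "11000"  # 00000 with its first two digits in error
print(hamming_distance(received, "00000"))  # 2: a double error from 00000
print(hamming_distance(received, "11100"))  # 1: a single error from 11100
```

Since 11000 is only distance 1 from 11100, a nearest neighbour decoder picks 11100, on the assumption that a single error is more likely.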

These represent some of the double error patterns, which can thus be detected here, but they cannot be corrected, as not all the possible double error patterns have a unique representation in [link] .

Soft decision decoding

Nearest neighbour decoding can also be done on a soft decision basis, with real (non-binary) numbers from the receiver. The codeword at the nearest Euclidean distance (nearest to the received values in terms of a 5-dimensional geometry) is then chosen, and this gives a considerable performance increase over the hard decision decoding described here.
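A soft decision decoder of this kind can be sketched as below. The four-codeword set is an assumption for illustration (the text only names 00000 and 11100), and bits are mapped to antipodal +/-1 levels before measuring Euclidean distance.

```python
import math

# Hypothetical (5, 2) codeword set for illustration
codewords = ["00000", "11100", "00111", "11011"]

def to_levels(word: str) -> list:
    """Map bits to antipodal transmit levels: 1 -> +1.0, 0 -> -1.0."""
    return [1.0 if b == "1" else -1.0 for b in word]

def euclidean(u, v) -> float:
    """Euclidean distance between two points in 5-D space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Soft receiver outputs: real numbers, not hard 0/1 decisions
received = [0.9, 0.8, -0.2, -1.1, -0.7]

best = min(codewords, key=lambda c: euclidean(to_levels(c), received))
print(best)  # 11100: nearest codeword in Euclidean distance
```

The decoder exploits the reliability information in each sample (here the weak -0.2) that a hard decision slicer would throw away.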

Hamming bound

This defines mathematically the error correcting performance of a block coder. The upper bound on the performance of block codes is given by the Hamming bound, sometimes called the sphere packing bound. If we are trying to create a code to correct t errors with a block length of n and k information digits, then [link] shows the Hamming bound equation.
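For reference, the Hamming bound is commonly written in the following standard form, consistent with the table accounting described below (the [link] equation is the version used in this module):

```latex
% Hamming (sphere-packing) bound for an (n, k) binary block code
% correcting t errors:
2^{k} \le \frac{2^{n}}{\displaystyle\sum_{i=0}^{t} \binom{n}{i}}
```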

Hamming bound calculation for (n, k) block code to establish the number of terms which can be included in the denominator and hence arrive at the code's error correcting power t

Here the denominator terms, the binomial coefficients, give the number of possible patterns or positions in which 1, 2, ..., t errors can occur in an n-bit codeword.

Note the relationship between the decoding table in [link] and the Hamming bound equation in [link] . The 2^k = 4 left-hand entry represents the number of transmitted codewords, or columns in the table. The numerator 2^n = 32 represents the total possible number of unique entries in the table. The denominator represents the number of rows which can be accommodated within the table. Here the first denominator term (1) represents the first row (i.e. the transmitted codewords) and the second term (n) the 5 single error patterns. Subsequent terms then represent all the possible double, triple error patterns, etc. The denominator has to be restricted to t terms to ensure the inequality holds, and this defines the error correction capability as t.

If the equation in [link] is satisfied then the design of such an (n, k) code is possible with the error correcting power of t. If the equation is not satisfied, then we must be less ambitious by reducing t or k (for the same block length n) or by increasing n (while maintaining t and k).

Example 2

Comment on the ability of a (5, 2) code to correct 1 error, and on the possibility of a (5, 2) code correcting 2 errors.

Solution

For single error correction: k = 2, n = 5 and t = 1, which leads to the case summarized in [link] .

Calculation to assess whether (5, 2) block code can correct t = 1 error - Answer yes

which is true, so such a code design is possible.

However if we try to design a (5, 2) code to correct 2 errors we have k = 2, n = 5 and t = 2, which is summarized in [link] .

Calculation to assess whether (5, 2) block code can correct t = 2 errors - Answer no

This inequality is not satisfied, and thus this short code cannot be designed with a t = 2 error correcting power or capability.
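The two cases in Example 2 can be checked numerically against the 2^n = 32 limit:

```python
from math import comb  # binomial coefficient C(n, i)

n, k = 5, 2

# t = 1: 2^2 * (1 + 5) = 24 <= 32, so single error correction is possible
print((2 ** k) * sum(comb(n, i) for i in range(2)))  # 24

# t = 2: 2^2 * (1 + 5 + 10) = 64 > 32, so double error correction is not
print((2 ** k) * sum(comb(n, i) for i in range(3)))  # 64
```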

This provides a mathematical justification of the error correcting performance limits of the nearest neighbour decoding table shown previously in [link] , where we could correct all single error patterns but not all the possible double error patterns.

A full decoding table need not be created: by checking the Hamming bound, one can identify the block size and number of parity check bits required for a given error correction capability in a block or group coder design.

[link] shows the performance of various block codes, all of rate ½, whose performance progressively improves as the block length increases from 7 to 511, even though the coding rate remains the same.

The power of these forward error correcting codes (FECC) is quantified as the coding gain, i.e. the reduction in the required Eb/N0 ratio (the energy required to transmit each bit divided by the spectral noise density) for a given bit error ratio or error probability.

For example, in [link] the (31, 16) code has a coding gain over the uncoded case of around 1.8 dB at a Pb of 10^-5.

Error performance of 1/2 rate block coders with differing block lengths
This module has been created from lecture notes originated by P M Grant and D G M Cruickshank which are published in I A Glover and P M Grant, "Digital Communications", Pearson Education, 2009, ISBN 978-0-273-71830-7. Powerpoint slides plus end of chapter problem examples/solutions are available for instructor use via password access at http://www.see.ed.ac.uk/~pmg/DIGICOMMS/

Source:  OpenStax, Communications source and channel coding with examples. OpenStax CNX. May 07, 2009 Download for free at http://cnx.org/content/col10601/1.3