Example 2: If we have an error correcting code which can correct 3 errors within a block length n of 10, what is the probability that the code cannot correct a received block if the per digit error probability is ${P}_{e}$ = 0.01?
Solution: The code cannot correct the received block if there are more than 3 errors. Thus:
P(> 3 errors) = 1 - P(0 errors) - P(1 error) - P(2 errors) - P(3 errors).
[link] shows the component parts of this calculation.
Thus the probability that the code cannot correct a received block is then:
1 - 0.9043821 - 0.0913517 - 0.0041523 - 0.0001118 = 0.0000021.
This illustrates that the overall error probability remaining after correction of up to three errors is much less than the original per-digit error probability, ${P}_{e}$ = 0.01. Note also the need for high-precision arithmetic: an eight-digit calculator may not be good enough to calculate the answer to more than 1 significant figure. Note also in [link] the much lower probability of t + 1 errors occurring, compared to t errors, as is implied in FECC.
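The calculation above can be reproduced directly from the binomial distribution; the sketch below assumes independent per-digit errors, as in the worked example:

```python
from math import comb

n, p = 10, 0.01  # block length and per-digit error probability

# P(k errors in a block) follows the binomial distribution:
# C(n, k) * p^k * (1 - p)^(n - k)
def p_errors(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The code fails when more than t = 3 errors occur in the block
p_uncorrectable = 1 - sum(p_errors(k) for k in range(4))
print(f"{p_uncorrectable:.7f}")  # approximately 0.0000020
```

Note that the individual terms match the component parts listed in the table, and that the final subtraction of numbers very close to 1 is exactly where limited-precision arithmetic loses significant figures.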
Group codes are a special kind of block code. They comprise a set of codewords, C1 … CN, which contains the all-zeros codeword (e.g. 00000) and exhibits a special property called closure. This property means that if any two valid codewords are combined by a bit-wise exclusive-OR (XOR) operation, they produce another valid codeword in the set.
The closure property means that to find the minimum Hamming distance (see below), all that is required is to compare each of the remaining codewords with the all-zeros codeword, instead of comparing all possible pairs of codewords.
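The closure property can be verified mechanically. The source does not list the codewords of its (5, 2) example, so the set below is a hypothetical group code chosen purely to illustrate the check:

```python
# Hypothetical (5, 2) group code: 4 codewords including all-zeros.
# (Assumed for illustration; not taken from the source's table.)
codewords = {0b00000, 0b01101, 0b10110, 0b11011}

# Closure: the XOR of any two valid codewords is another valid codeword
for a in codewords:
    for b in codewords:
        assert (a ^ b) in codewords, "closure violated"

print("closure holds")
```

Because every codeword XORed with itself gives the all-zeros word, closure also guarantees that 00000 is in any group code.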
The saving grows with the number of codewords. For example, a code set with 100 codewords requires only 99 comparisons for a group code design (each nonzero codeword against the all-zeros codeword), compared with 99 + 98 + … + 2 + 1 = 4950 pairwise comparisons for a non-group code!
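The two comparison counts can be checked with a short calculation, counting each nonzero codeword against the all-zeros word for a group code versus every distinct pair for a general code:

```python
from math import comb

N = 100  # number of codewords in the set

group_code_comparisons = N - 1     # each nonzero codeword vs all-zeros
pairwise_comparisons = comb(N, 2)  # every distinct pair of codewords

print(group_code_comparisons, pairwise_comparisons)  # 99 4950
```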
In group codes the ${D}_{\mathrm{min}}$ calculation simplifies further to finding the minimum codeword weight, i.e. the minimum number of 1 digits in any nonzero codeword in the set.
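For a group code, then, ${D}_{\mathrm{min}}$ can be read off as the smallest weight among the nonzero codewords. A minimal sketch, reusing the same hypothetical (5, 2) codeword set assumed earlier:

```python
# Hypothetical (5, 2) group code (assumed for illustration)
codewords = [0b00000, 0b01101, 0b10110, 0b11011]

# D_min of a group code = minimum weight (count of 1 bits)
# over the nonzero codewords
d_min = min(bin(c).count("1") for c in codewords if c != 0)
print(d_min)  # 3 for this example set
```

A minimum distance of 3 means this example set can correct t = 1 error per block, since ${D}_{\mathrm{min}}$ ≥ 2t + 1.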
Nearest neighbour decoding assumes that the codeword nearest in Hamming distance to the received word is the one that was transmitted, as shown in Example 1 above. This inherently assumes that the probability of a small number t of errors is greater than the probability of the larger number t + 1 of errors, i.e. that ${P}_{e}$ is small.
A nearest neighbour decoding table for an (n, k) = (5, 2), i.e. 5-digit, group code is shown in [link]. Recall that for an n = 5 bit codeword there are ${2}^{5}$ = 32 unique patterns generated by all possible combinations of the 5 digits.
[link] starts by forming a table with the 4 codewords across the top row. All the single-error patterns, each of which differs in only one bit from one of the transmitted codewords, can be readily and uniquely assigned back to an error-free codeword. Thus the next 5 rows represent these single errors in positions 1 through 5 of each of the 4 codewords. Up to this point the table has a total of 4 × 6 = 24 unique entries (4 codewords plus 4 × 5 single-error patterns), so this code is capable of correcting all single errors.
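A decoder equivalent to such a table can be sketched by searching for the codeword at minimum Hamming distance from the received word. The codeword set below is a hypothetical (5, 2) group code assumed for illustration, not the one in the source's table:

```python
# Hypothetical (5, 2) group code (assumed for illustration)
codewords = [0b00000, 0b01101, 0b10110, 0b11011]

def hamming(a, b):
    """Hamming distance: number of bit positions where a and b differ."""
    return bin(a ^ b).count("1")

def decode(received):
    """Nearest neighbour decoding: return the closest valid codeword."""
    return min(codewords, key=lambda c: hamming(c, received))

# A single-bit error in 01101 (second-from-right bit flipped to give
# 01111) is corrected back to the transmitted codeword:
print(format(decode(0b01111), "05b"))  # 01101
```

In practice the table is precomputed so that decoding is a single lookup, but the search above makes the nearest-neighbour rule explicit.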