In 1993, Claude Berrou and coauthors published a paper at the ICC conference that shook the field of forward error correction coding (FECC). It described a method of creating much more powerful block error correcting codes with comparatively little effort. Its main features were two recursive convolutional encoders (RCEs) interconnected via an interleaver. The data is fed into the first encoder directly, and into the second encoder after interleaving (reordering) of the input data.
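To make the recursive structure concrete, here is a minimal sketch of a rate-1/2 recursive systematic convolutional encoder in Python. It uses the common short (1, 5/7) octal generator pair (feedback 1+D+D², feedforward 1+D²) purely for illustration; Berrou's original codes used longer generators.

```python
def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional (RSC) encoder.

    Illustrative (1, 5/7) octal generators:
      feedback    1 + D + D^2
      feedforward 1 + D^2
    Returns (systematic bits, parity bits).
    """
    s1 = s2 = 0                  # two-stage shift register
    systematic, parity = [], []
    for d in bits:
        a = d ^ s1 ^ s2          # recursive (feedback) sum
        p = a ^ s2               # feedforward parity tap: 1 + D^2
        systematic.append(d)
        parity.append(p)
        s2, s1 = s1, a           # shift the register
    return systematic, parity
```

The systematic output is the data itself; only the parity stream depends on the encoder state, which is what makes the code "recursive systematic".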
The important features are the use of two recursive convolutional encoders and the design of the interleaver, which gives a block code with block size equal to the interleaver size, [link] . Random interleavers tend to work better than row and column interleavers. Note that recursive convolutional encoders were known well before their use in turbo codes, but the difficulty of driving them into a known state made them less popular than the non-recursive convolutional encoders described in the previous module.
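A random interleaver is simply a fixed pseudo-random permutation of the block indices, shared by encoder and decoder. A minimal sketch, with the seed and block size as arbitrary choices:

```python
import random

def make_interleaver(n, seed=0):
    """A fixed pseudo-random permutation of n block positions."""
    rng = random.Random(seed)    # fixed seed so encoder and decoder agree
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def interleave(block, perm):
    """Reorder a block according to the permutation."""
    return [block[i] for i in perm]

def deinterleave(block, perm):
    """Invert the permutation, restoring the original order."""
    out = [None] * len(perm)
    for j, i in enumerate(perm):
        out[i] = block[j]
    return out
```

Deinterleaving an interleaved block recovers the original data, which is the property the decoder relies on when passing extrinsic information between its two constituent decoders.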
The name turbo decoder came from the turbocharger in an automobile, where the exhaust gases drive a compressor in a feedback loop to increase the fuel intake and hence the vehicle's ultimate performance.
The desired ½ output rate was initially achieved by puncturing (discarding every second parity output) from each of the encoders.
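The puncturing pattern can be sketched as follows: the two parity streams are thinned by alternately keeping one bit from each, so that together with the systematic bits the transmitted stream carries two bits per data bit (rate ½). This is an illustrative sketch, not Berrou's exact puncturing table.

```python
def puncture(p1, p2):
    """Alternate between the two parity streams: keep p1 at even
    positions and p2 at odd positions, discarding the rest.

    With the systematic bits sent unpunctured, the overall rate is
    1 data bit per 2 transmitted bits, i.e. rate 1/2.
    """
    assert len(p1) == len(p2)
    return [p1[k] if k % 2 == 0 else p2[k] for k in range(len(p1))]
```

Without puncturing, sending the systematic stream plus both full parity streams would give a rate-⅓ code; puncturing trades some of that redundancy for a higher rate.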
Turbo decoding is iterative. The decoding is also soft: the values that flow around the whole decoder are real values, not binary representations (with the exception of the hard decisions taken at the end of however many iterations you are prepared to perform). They are usually log likelihood ratios (LLRs): the log of the ratio of the probability that a particular bit was a logic 1 to the probability that the same bit was a logic 0.
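The LLR definition above translates directly into code. A minimal sketch:

```python
import math

def llr(p_one):
    """Log likelihood ratio: ln( P(bit = 1) / P(bit = 0) ).

    Positive values favour logic 1, negative values favour logic 0,
    and the magnitude expresses confidence.
    """
    return math.log(p_one / (1.0 - p_one))

def hard_decision(L):
    """The final hard decision is just the sign of the LLR."""
    return 1 if L > 0 else 0
```

A probability of 0.5 gives an LLR of zero (no information), which is why the soft values exchanged between decoders are initialised to zero before the first iteration.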
Decoding is accomplished by first demultiplexing the incoming data stream into d, ${y}_{1}$ , ${y}_{2}$ . d and ${y}_{1}$ go into the decoder for the first code, [link] . This gives an estimate of the extrinsic information from the first decoder, which is interleaved and passed on to the second decoder. The second decoder thus has three inputs: the extrinsic information from the first decoder, the interleaved data d, and the received values for ${y}_{2}$ . It produces its own extrinsic information, which is deinterleaved and passed back to the first decoder. This process is then repeated, or iterated, as required until the final solution is obtained from the deinterleaver of the second decoder.
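The iterative exchange described above can be sketched structurally in Python. This is only a skeleton: `siso` stands in for a real soft-in/soft-out constituent decoder (SOVA or MAP), which is far more involved; everything else (the permutation handling, the iteration loop, the final hard decision) mirrors the description in the text.

```python
def turbo_decode(d_llr, y1_llr, y2_llr, perm, siso, n_iters=8):
    """Structural sketch of iterative turbo decoding.

    d_llr, y1_llr, y2_llr : channel LLRs for the systematic and the
        two parity streams.
    perm : the interleaver permutation (a list of indices).
    siso : placeholder for a soft-in/soft-out decoder taking
        (systematic LLRs, parity LLRs, a-priori LLRs) and returning
        extrinsic LLRs.  A real implementation would be SOVA or MAP.
    """
    def il(x):                       # interleave
        return [x[i] for i in perm]

    def dil(x):                      # deinterleave (inverse permutation)
        out = [0.0] * len(perm)
        for j, i in enumerate(perm):
            out[i] = x[j]
        return out

    n = len(d_llr)
    apriori1 = [0.0] * n             # no prior information initially
    for _ in range(n_iters):
        # Decoder 1: systematic + first parity + a-priori from decoder 2
        ext1 = siso(d_llr, y1_llr, apriori1)
        # Decoder 2 sees the interleaved systematic stream
        ext2 = siso(il(d_llr), y2_llr, il(ext1))
        # Deinterleave and feed back as a-priori info for decoder 1
        apriori1 = dil(ext2)
    # Hard decision only at the very end, from the combined soft values
    total = [d_llr[k] + ext1[k] + apriori1[k] for k in range(n)]
    return [1 if L > 0 else 0 for L in total]
```

Note that only extrinsic information crosses between the decoders; each decoder's own channel inputs stay local, which prevents the same evidence from being counted twice as the iterations proceed.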
The decoders themselves generally use the soft output Viterbi algorithm (SOVA) to decode the received data. However, the preferred turbo decoding method is the maximum a posteriori (MAP) algorithm, but this is too mathematical to discuss here!
[link] shows these ½ rate decoders operating at much lower $\frac{{E}_{b}}{{N}_{0}}$ or SNR values than the convolutional Viterbi decoders of the previous section and, further, as the number of iterations increases beyond 15, the performance comes very close to the theoretical Shannon bound.
This is the attraction that has excited the FECC community, who were unable to achieve such low error rates before 1993! Now that iterative decoding has been introduced for turbo decoders, it is also being re-applied in low-density parity-check (LDPC) decoders with equal enthusiasm and success.
[link] includes a turbo decoding example (an animated PowerPoint slide) that shows the black-dot noise-induced errors being corrected on each subsequent iteration, with the black dots progressively reduced in the upper cartoon.