This module illustrates how ideas from compressive sensing can be applied to the design and decoding of error-correcting codes for real-valued vectors subject to sparse corruptions.

In communications, error correction refers to mechanisms that can detect and correct errors in data that appear due to distortion in the transmission channel. Standard approaches for error correction rely on repetition schemes, redundancy checks, or nearest neighbor code search. We consider the particular case in which a signal x with M entries is encoded by taking length-N linearly independent codewords {φ_1, ..., φ_M}, with N > M, and summing them using the entries of x as coefficients. The coded message is the length-N vector f = x_1 φ_1 + ⋯ + x_M φ_M = Φx, where Φ is the N × M matrix whose columns are the codewords. We assume that the transmission channel corrupts the entries of f additively, so that the received data is y = Φx + e, where e is an error vector.
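As a concrete illustration of the encoding step, the following sketch generates a random Gaussian codeword matrix Φ, forms the coded message f = Φx, and corrupts K of its entries. The matrix dimensions, the choice of Gaussian codewords, and all variable names here are illustrative assumptions, not specifications from the original module.

```python
import numpy as np

rng = np.random.default_rng(0)

M, N, K = 64, 128, 8   # signal length, code length, number of corrupted entries

Phi = rng.standard_normal((N, M)) / np.sqrt(N)   # columns are the codewords phi_1, ..., phi_M
x = rng.standard_normal(M)                       # signal to transmit
f = Phi @ x                                      # coded message f = Phi x

e = np.zeros(N)                                  # K-sparse error introduced by the channel
support = rng.choice(N, size=K, replace=False)
e[support] = 10 * rng.standard_normal(K)         # large corruptions on K randomly chosen entries

y = f + e                                        # received data y = Phi x + e
```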

The techniques developed for sparse recovery in the context of compressive sensing (CS) provide a number of methods to estimate the error vector e, and therefore to correct it and obtain the signal x, when e is sufficiently sparse [link]. To estimate the error, we build an (N - M) × N matrix Θ whose rows form a basis for the orthogonal complement of the column span of Φ, so that ΘΦ = 0. Multiplying the received data by this matrix yields ỹ = Θy = ΘΦx + Θe = Θe. If the matrix Θ is well suited for CS (i.e., it satisfies a condition such as the restricted isometry property) and e is sufficiently sparse, then the error vector e can be estimated accurately using CS. Once the estimate ê is obtained, the error-free measurements can be estimated as ŷ = y - ê, and the signal can be recovered as x̂ = Φ†ŷ = Φ†(y - ê), where Φ† denotes the pseudoinverse of Φ. As an example, when the codewords φ_m have random independent and identically distributed sub-Gaussian entries, a K-sparse error can be corrected provided that M < N - CK log(N/K) for a fixed constant C (see "Matrices that satisfy the RIP").
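Continuing the encoding sketch above (and reusing Phi, y, x, and e from it), the following sketch illustrates the decoding steps: building Θ as a basis for the orthogonal complement of the span of Φ, estimating the sparse error by ℓ1 minimization, and recovering the signal via the pseudoinverse. Using SciPy's null_space, linprog, and lstsq routines is one possible implementation choice; the module itself does not prescribe a particular solver.

```python
import numpy as np
from scipy.linalg import null_space, lstsq
from scipy.optimize import linprog

# Rows of Theta span the orthogonal complement of the column span of Phi,
# so Theta @ Phi = 0 (up to numerical precision).
Theta = null_space(Phi.T).T          # shape (N - M, N)
y_tilde = Theta @ y                  # Theta y = Theta e: the signal term drops out

# Estimate e by l1 minimization: minimize ||e||_1 subject to Theta e = y_tilde.
# Splitting e = e_pos - e_neg with e_pos, e_neg >= 0 turns this into a linear program.
c = np.ones(2 * N)
A_eq = np.hstack([Theta, -Theta])
res = linprog(c, A_eq=A_eq, b_eq=y_tilde, bounds=[(0, None)] * (2 * N))
e_hat = res.x[:N] - res.x[N:]

# Correct the received data and recover the signal with the pseudoinverse of Phi
# (computed here via least squares).
y_hat = y - e_hat
x_hat, *_ = lstsq(Phi, y_hat)

print("recovery error:", np.linalg.norm(x_hat - x))
```

For the dimensions used above (M = 64, N = 128, K = 8), the ℓ1 program typically recovers e exactly, so x̂ matches x up to numerical precision.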

Source: OpenStax, An introduction to compressive sensing. OpenStax CNX. Apr 02, 2011. Download for free at http://legacy.cnx.org/content/col11133/1.5