
Review

Last time we proved that for each $k \le c_0\, n/\log(N/n)$, there exists an $n \times N$ matrix $\Phi$ and a decoder $\Delta$ such that

  • (a) $\|x - \Delta(\Phi(x))\|_1 \le c_0\, \sigma_k(x)_1$
  • (b) $\|x - \Delta(\Phi(x))\|_2 \le c_0\, \frac{\sigma_k(x)_1}{\sqrt{k}}$
Recall that we can find such a $\Phi$ by setting the entries $[\Phi]_{j,k} = \varphi_{j,k}(\omega)$ to be realizations of independent and identically distributed Gaussian random variables.
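
Both bounds are driven by the best $k$-term approximation error $\sigma_k(x)_p$, which is simply the $\ell_p$ norm of what remains after keeping the $k$ largest-magnitude entries of $x$. A minimal Python sketch (the function name sigma_k is ours, for illustration):

    import numpy as np

    def sigma_k(x, k, p=1):
        """Best k-term approximation error sigma_k(x)_p: the l_p norm
        of x with its k largest-magnitude entries removed."""
        order = np.argsort(np.abs(x))[::-1]   # indices by decreasing magnitude
        return np.linalg.norm(x[order[k:]], ord=p)

    # A k-sparse signal has sigma_k(x)_1 = 0, so (a) forces exact recovery.
    x = np.array([0.0, 3.0, 0.0, -1.5, 0.0])
    print(sigma_k(x, k=2))   # 0.0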

Deficiencies

Decoding is not implementable

Our decoding “algorithm” is:

$\Delta(y) := \arg\min_{x \in F(y)} \sigma_k(x)_1,$
where $F(y) := \{x : \Phi(x) = y\}$. In general, this algorithm is not implementable. This deficiency, however, is easily repaired. Specifically, define
$\Delta_1(y) := \arg\min_{x \in F(y)} \|x\|_1.$
Then (a) and (b) hold for $\Delta_1$ in place of $\Delta$. This decoding algorithm is equivalent to solving a linear programming problem, so it is tractable and can be solved using techniques such as the interior point method or the simplex method. In general, these algorithms have computational complexity $O(N^3)$. For very large signals this can become prohibitive, and hence there has been a considerable amount of research into faster decoders (such as decoding using greedy algorithms).
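
To make the linear program explicit: writing $x = u - v$ with $u, v \ge 0$ turns $\min_{x \in F(y)} \|x\|_1$ into a standard-form LP in the variables $(u, v)$. Below is a minimal Python sketch using SciPy (the helper name l1_decode and the toy problem sizes are ours, for illustration):

    import numpy as np
    from scipy.optimize import linprog

    def l1_decode(Phi, y):
        """Basis pursuit: min ||x||_1 subject to Phi @ x = y,
        recast as an LP via the split x = u - v with u, v >= 0."""
        n, N = Phi.shape
        c = np.ones(2 * N)              # objective: sum(u) + sum(v) = ||x||_1
        A_eq = np.hstack([Phi, -Phi])   # equality constraint: Phi @ (u - v) = y
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
        return res.x[:N] - res.x[N:]

    # Toy check: recover a 2-sparse x in R^30 from n = 10 Gaussian measurements.
    rng = np.random.default_rng(0)
    N, n = 30, 10
    Phi = rng.standard_normal((n, N)) / np.sqrt(n)
    x = np.zeros(N); x[[3, 17]] = [1.0, -2.0]
    print(np.round(l1_decode(Phi, Phi @ x), 3))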

We cannot generate such $\Phi$

The construction of a $\Phi$ from realizations of Gaussian random variables is guaranteed to work with high probability. However, we would like to know, given a particular instance of $\Phi$, whether (a) and (b) still hold. Unfortunately, this is impossible to check in practice (since, to show that $\Phi$ satisfies the MRIP for $k$, we would need to consider all possible submatrices of $\Phi$). Furthermore, we would like to build $\Phi$ that can be implemented in circuits, and we might also want fast decoders $\Delta$ for these $\Phi$. Thus we may need to be more restrictive in building $\Phi$. Two possible approaches that move in this direction are as follows:

  • Find $\Phi$ that we can build such that we can prove instance optimality in $\ell_1$ for a smaller range of $k$, i.e.,
    $\|x - \Delta(\Phi(x))\|_1 \le c_0\, \sigma_k(x)_1$
    for $k < K$. If we are willing to sacrifice and let $K$ be smaller than before, for example $K \sim \sqrt{n}$, then we might be able to prove that $\Phi_T^t \Phi_T$ is diagonally dominant for all $T$ such that $|T| = 2k$, which would ensure that $\Phi$ satisfies the MRIP. (A numerical sketch of this diagonal dominance check follows the list.)
  • Consider $\Phi(\omega)$, where $\omega$ is a random seed that generates a $\Phi$. It is possible to show that, given $x$, with high probability $\Phi(\omega)(x) = y$ encodes $x$ in an $\ell_2$-instance-optimal fashion:
    $\|x - \bar{x}\|_2 \le 2\, \sigma_k(x)_2,$
    where $\bar{x}$ denotes the decoded signal, for $k \le c_0\, n / (\log(N/n))^{5/2}$. Thus, by generating many such matrices, we can recover any $x$ with high probability.
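
As a rough illustration of the first approach, the sketch below (the function name and parameters are ours, for illustration) checks strict diagonal dominance of $\Phi_T^t \Phi_T$ on a single support $T$; by Gershgorin's theorem, diagonal dominance of every such Gram submatrix pins its eigenvalues near the diagonal, which is what the MRIP asks for. Note that certifying all $\binom{N}{2k}$ supports this way is exactly the combinatorial obstruction mentioned above.

    import numpy as np

    def diag_dominant_on(Phi, T):
        """Check strict diagonal dominance of the Gram matrix
        Phi_T^t Phi_T for the columns of Phi indexed by T."""
        G = Phi[:, T].T @ Phi[:, T]
        diag = np.abs(np.diag(G))
        off = np.abs(G).sum(axis=1) - diag    # row sums of off-diagonal magnitudes
        return bool(np.all(diag > off))

    # Example: column-normalized Gaussian Phi, one random support of size 2k.
    rng = np.random.default_rng(1)
    n, N, k = 128, 256, 3
    Phi = rng.standard_normal((n, N))
    Phi /= np.linalg.norm(Phi, axis=0)        # unit-norm columns
    print(diag_dominant_on(Phi, rng.choice(N, size=2 * k, replace=False)))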

Encoding signals

Another practical problem is that of encoding the measurements $y$. In a real system these measurements must be quantized. This problem was addressed by Candès, Romberg, and Tao in their paper Stable Signal Recovery from Incomplete and Inaccurate Measurements. They prove that if $y$ is quantized to $\bar{y}$, and if $x \in U(\ell_p)$ (the unit ball of $\ell_p$) for $p \le 1$, then we get optimal performance in terms of the number of bits required for a given accuracy. Notice that their result applies only to the case where $p \le 1$.
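
Quantization fits naturally into the $\ell_1$ decoder above: if every entry of $\bar{y}$ is within $\delta$ of the corresponding entry of $y$, we can relax the equality constraint to $\|\Phi x - \bar{y}\|_\infty \le \delta$ and still solve a linear program. Candès, Romberg, and Tao actually work with an $\ell_2$ constraint; the $\ell_\infty$ variant below is our LP-friendly stand-in, and all names and sizes are illustrative:

    import numpy as np
    from scipy.optimize import linprog

    def l1_decode_quantized(Phi, y_bar, delta):
        """min ||x||_1 subject to ||Phi @ x - y_bar||_inf <= delta,
        via the split x = u - v with u, v >= 0."""
        n, N = Phi.shape
        c = np.ones(2 * N)
        A = np.hstack([Phi, -Phi])
        A_ub = np.vstack([A, -A])    # Phi(u-v) - y_bar <= delta, and its negation
        b_ub = np.concatenate([y_bar + delta, delta - y_bar])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
        return res.x[:N] - res.x[N:]

    # Uniform quantization with step q keeps each entry within q/2 of y.
    rng = np.random.default_rng(2)
    N, n, q = 30, 12, 0.05
    Phi = rng.standard_normal((n, N)) / np.sqrt(n)
    x = np.zeros(N); x[[4, 20]] = [1.0, -0.8]
    y_bar = q * np.round(Phi @ x / q)
    print(np.round(np.abs(x - l1_decode_quantized(Phi, y_bar, q / 2)).max(), 3))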

One might expect that this argument could be extended to $p$ between 1 and 2, but a warning is in order at this stage: Fix $1 < p \le 2$. Then there exist $\Phi$ and $\Delta$ satisfying

$\|x - \Delta(\Phi(x))\|_p \le C_0\, \sigma_k(x)_p$

if

$k \le c_0\, N^{\frac{2 - 2/p}{1 - 2/p}} \left( \frac{n}{\log(N/n)} \right)^{\frac{p}{2 - p}}.$
Furthermore, this range of k is the best possible (save for the log term).

Examples:

  • $p = 1$: we get our original results ($k \le c_0\, n/\log(N/n)$)
  • $p = 2$: we do not get instance optimality even for $k = 1$ unless $n \gtrsim N$
  • $p = \frac{3}{2}$: we only get instance optimality if $k \le c_0\, N^{-2} \left( n/\log(N/n) \right)^3$
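
To see where these come from, evaluate the two exponents in the bound at each $p$. For $p = 3/2$:

$\frac{2 - 2/p}{1 - 2/p} = \frac{2 - 4/3}{1 - 4/3} = \frac{2/3}{-1/3} = -2, \qquad \frac{p}{2 - p} = \frac{3/2}{1/2} = 3,$

which gives $k \le c_0\, N^{-2} (n/\log(N/n))^3$. At $p = 1$ the $N$-exponent is $0$ and the other exponent is $1$, recovering the original range, while as $p \to 2$ both exponents blow up, reflecting the fact that $\ell_2$ instance optimality forces $n$ to be comparable to $N$.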

Source:  OpenStax, Compressive sensing. OpenStax CNX. Sep 21, 2007 Download for free at http://cnx.org/content/col10458/1.1
