This module establishes a simple performance guarantee for $\ell_1$ minimization when recovering signals from noise-free measurements.

We now begin our analysis of

$$\hat{x} = \mathop{\arg\min}_{z} \|z\|_1 \quad \text{subject to} \quad z \in \mathcal{B}(y)$$

for various specific choices of $\mathcal{B}(y)$. In order to do so, we require the following general result, which builds on Lemma 4 from "$\ell_1$ minimization proof". The key ideas in this proof follow from [link].

Suppose that $\Phi$ satisfies the restricted isometry property (RIP) of order $2K$ with $\delta_{2K} < \sqrt{2} - 1$. Let $x, \hat{x} \in \mathbb{R}^N$ be given, and define $h = \hat{x} - x$. Let $\Lambda_0$ denote the index set corresponding to the $K$ entries of $x$ with largest magnitude and $\Lambda_1$ the index set corresponding to the $K$ entries of $h_{\Lambda_0^c}$ with largest magnitude. Set $\Lambda = \Lambda_0 \cup \Lambda_1$. If $\|\hat{x}\|_1 \le \|x\|_1$, then

$$\|h\|_2 \le C_0 \frac{\sigma_K(x)_1}{\sqrt{K}} + C_1 \frac{\left|\langle \Phi h_{\Lambda}, \Phi h \rangle\right|}{\|h_{\Lambda}\|_2},$$

where

$$C_0 = 2\,\frac{1 - (1 - \sqrt{2})\,\delta_{2K}}{1 - (1 + \sqrt{2})\,\delta_{2K}}, \qquad C_1 = \frac{2}{1 - (1 + \sqrt{2})\,\delta_{2K}}.$$
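
To get a feel for these constants, here is a minimal numerical sketch, assuming Python with NumPy (the helper name `recovery_constants` is ours, not from the source). Both constants are modest for small $\delta_{2K}$ and blow up as $\delta_{2K}$ approaches $\sqrt{2} - 1 \approx 0.4142$:

```python
import numpy as np

def recovery_constants(delta_2k):
    """Evaluate C0 and C1 from the lemma for a given delta_2K.

    Only valid for delta_2K < sqrt(2) - 1, which keeps the shared
    denominator positive.
    """
    assert delta_2k < np.sqrt(2) - 1
    denom = 1 - (1 + np.sqrt(2)) * delta_2k
    C0 = 2 * (1 - (1 - np.sqrt(2)) * delta_2k) / denom
    C1 = 2 / denom
    return C0, C1

for delta in [0.1, 0.2, 0.3, 0.4]:
    C0, C1 = recovery_constants(delta)
    print(f"delta_2K = {delta:.2f}:  C0 = {C0:6.2f},  C1 = {C1:6.2f}")
```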

We begin by observing that $h = h_{\Lambda} + h_{\Lambda^c}$, so that from the triangle inequality

$$\|h\|_2 \le \|h_{\Lambda}\|_2 + \|h_{\Lambda^c}\|_2.$$

We first aim to bound $\|h_{\Lambda^c}\|_2$. From Lemma 3 from "$\ell_1$ minimization proof" we have

$$\|h_{\Lambda^c}\|_2 = \Big\| \sum_{j \ge 2} h_{\Lambda_j} \Big\|_2 \le \sum_{j \ge 2} \|h_{\Lambda_j}\|_2 \le \frac{\|h_{\Lambda_0^c}\|_1}{\sqrt{K}},$$

where the $\Lambda_j$ are defined as before, i.e., $\Lambda_1$ is the index set corresponding to the $K$ largest entries of $h_{\Lambda_0^c}$ (in absolute value), $\Lambda_2$ is the index set corresponding to the next $K$ largest entries, and so on.
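
This inequality is easy to sanity-check numerically. The following sketch, assuming Python with NumPy (a purely illustrative setup of ours, not part of the proof), partitions the complement of $\Lambda_0$ into size-$K$ blocks sorted by decreasing magnitude and verifies that $\sum_{j \ge 2} \|h_{\Lambda_j}\|_2 \le \|h_{\Lambda_0^c}\|_1 / \sqrt{K}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 256, 8
h = rng.standard_normal(N)

# Sort all indices of h by decreasing magnitude; take Lambda_0 to be the
# first K (any size-K set works for this inequality) and split the rest,
# already sorted, into consecutive blocks Lambda_1, Lambda_2, ... of size K.
order = np.argsort(-np.abs(h))
rest = order[K:]
blocks = [rest[i:i + K] for i in range(0, len(rest), K)]

lhs = sum(np.linalg.norm(h[b]) for b in blocks[1:])   # sum_{j>=2} ||h_{Lambda_j}||_2
rhs = np.linalg.norm(h[rest], 1) / np.sqrt(K)         # ||h_{Lambda_0^c}||_1 / sqrt(K)
print(lhs <= rhs)                                     # True on every draw
```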

We now wish to bound $\|h_{\Lambda_0^c}\|_1$. Since $\|x\|_1 \ge \|\hat{x}\|_1$, by applying the triangle inequality we obtain

$$\|x\|_1 \ge \|x + h\|_1 = \|x_{\Lambda_0} + h_{\Lambda_0}\|_1 + \|x_{\Lambda_0^c} + h_{\Lambda_0^c}\|_1 \ge \|x_{\Lambda_0}\|_1 - \|h_{\Lambda_0}\|_1 + \|h_{\Lambda_0^c}\|_1 - \|x_{\Lambda_0^c}\|_1.$$

Rearranging and again applying the triangle inequality,

$$\|h_{\Lambda_0^c}\|_1 \le \|x\|_1 - \|x_{\Lambda_0}\|_1 + \|h_{\Lambda_0}\|_1 + \|x_{\Lambda_0^c}\|_1 \le \|x - x_{\Lambda_0}\|_1 + \|h_{\Lambda_0}\|_1 + \|x_{\Lambda_0^c}\|_1.$$

Recalling that $\sigma_K(x)_1 = \|x_{\Lambda_0^c}\|_1 = \|x - x_{\Lambda_0}\|_1$,

$$\|h_{\Lambda_0^c}\|_1 \le \|h_{\Lambda_0}\|_1 + 2\sigma_K(x)_1.$$
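
This bound is a "cone constraint": it holds for any candidate $\hat{x}$ with $\|\hat{x}\|_1 \le \|x\|_1$, whether or not $\hat{x}$ is close to $x$. A quick randomized check, again assuming Python with NumPy (an illustration of ours, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 64, 4

for _ in range(1000):
    x = rng.standard_normal(N)
    # Draw any x_hat and rescale so that ||x_hat||_1 <= ||x||_1,
    # matching the hypothesis of the lemma.
    x_hat = rng.standard_normal(N)
    x_hat *= rng.uniform() * np.linalg.norm(x, 1) / np.linalg.norm(x_hat, 1)

    h = x_hat - x
    on_support = np.zeros(N, dtype=bool)
    on_support[np.argsort(-np.abs(x))[:K]] = True   # Lambda_0

    sigma_K = np.linalg.norm(x[~on_support], 1)     # sigma_K(x)_1
    assert (np.linalg.norm(h[~on_support], 1)
            <= np.linalg.norm(h[on_support], 1) + 2 * sigma_K + 1e-9)

print("cone constraint verified on 1000 random draws")
```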

Combining this with [link] we obtain

$$\|h_{\Lambda^c}\|_2 \le \frac{\|h_{\Lambda_0}\|_1 + 2\sigma_K(x)_1}{\sqrt{K}} \le \|h_{\Lambda_0}\|_2 + 2\frac{\sigma_K(x)_1}{\sqrt{K}},$$

where the last inequality follows from standard bounds on $\ell_p$ norms (Lemma 1 from "The RIP and the NSP"). Observing that $\|h_{\Lambda_0}\|_2 \le \|h_{\Lambda}\|_2$, this combines with [link] to yield

$$\|h\|_2 \le 2\|h_{\Lambda}\|_2 + 2\frac{\sigma_K(x)_1}{\sqrt{K}}.$$

We now turn to establishing a bound for $\|h_{\Lambda}\|_2$. Combining Lemma 4 from "$\ell_1$ minimization proof" with [link] and again applying standard bounds on $\ell_p$ norms, we obtain

$$\begin{aligned}
\|h_{\Lambda}\|_2 &\le \alpha \frac{\|h_{\Lambda_0^c}\|_1}{\sqrt{K}} + \beta \frac{\left|\langle \Phi h_{\Lambda}, \Phi h \rangle\right|}{\|h_{\Lambda}\|_2} \\
&\le \alpha \frac{\|h_{\Lambda_0}\|_1 + 2\sigma_K(x)_1}{\sqrt{K}} + \beta \frac{\left|\langle \Phi h_{\Lambda}, \Phi h \rangle\right|}{\|h_{\Lambda}\|_2} \\
&\le \alpha \|h_{\Lambda_0}\|_2 + 2\alpha \frac{\sigma_K(x)_1}{\sqrt{K}} + \beta \frac{\left|\langle \Phi h_{\Lambda}, \Phi h \rangle\right|}{\|h_{\Lambda}\|_2}.
\end{aligned}$$

Since $\|h_{\Lambda_0}\|_2 \le \|h_{\Lambda}\|_2$,

$$(1 - \alpha)\|h_{\Lambda}\|_2 \le 2\alpha \frac{\sigma_K(x)_1}{\sqrt{K}} + \beta \frac{\left|\langle \Phi h_{\Lambda}, \Phi h \rangle\right|}{\|h_{\Lambda}\|_2}.$$

Since Lemma 4 gives $\alpha = \sqrt{2}\,\delta_{2K}/(1 - \delta_{2K})$ and $\beta = 1/(1 - \delta_{2K})$, the assumption that $\delta_{2K} < \sqrt{2} - 1$ ensures that $\alpha < 1$. Dividing by $(1 - \alpha)$ and combining with [link] results in

$$\|h\|_2 \le \left( \frac{4\alpha}{1 - \alpha} + 2 \right) \frac{\sigma_K(x)_1}{\sqrt{K}} + \frac{2\beta}{1 - \alpha} \frac{\left|\langle \Phi h_{\Lambda}, \Phi h \rangle\right|}{\|h_{\Lambda}\|_2}.$$

Plugging in for α and β yields the desired constants.
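
As a sanity check on the algebra, a short SymPy sketch (assuming the Lemma 4 constants $\alpha = \sqrt{2}\,\delta_{2K}/(1 - \delta_{2K})$ and $\beta = 1/(1 - \delta_{2K})$ named above) confirms that $4\alpha/(1 - \alpha) + 2$ and $2\beta/(1 - \alpha)$ reduce to the stated $C_0$ and $C_1$:

```python
import sympy as sp

delta = sp.symbols('delta', positive=True)      # stands in for delta_2K
alpha = sp.sqrt(2) * delta / (1 - delta)        # constant from Lemma 4
beta = 1 / (1 - delta)                          # constant from Lemma 4

C0_claimed = 2 * (1 - (1 - sp.sqrt(2)) * delta) / (1 - (1 + sp.sqrt(2)) * delta)
C1_claimed = 2 / (1 - (1 + sp.sqrt(2)) * delta)

# Both differences simplify to zero, confirming the constants.
print(sp.simplify(4 * alpha / (1 - alpha) + 2 - C0_claimed))   # 0
print(sp.simplify(2 * beta / (1 - alpha) - C1_claimed))        # 0
```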

[link] establishes an error bound for the class of $\ell_1$ minimization algorithms described by [link] when combined with a measurement matrix $\Phi$ satisfying the RIP. In order to obtain specific bounds for concrete examples of $\mathcal{B}(y)$, we must examine how requiring $\hat{x} \in \mathcal{B}(y)$ affects $\langle \Phi h_{\Lambda}, \Phi h \rangle$. As an example, in the case of noise-free measurements we obtain the following theorem.

(Theorem 1.1 of [link])

Suppose that $\Phi$ satisfies the RIP of order $2K$ with $\delta_{2K} < \sqrt{2} - 1$ and we obtain measurements of the form $y = \Phi x$. Then when $\mathcal{B}(y) = \{ z : \Phi z = y \}$, the solution $\hat{x}$ to [link] obeys

$$\|\hat{x} - x\|_2 \le C_0 \frac{\sigma_K(x)_1}{\sqrt{K}}.$$

Since $x \in \mathcal{B}(y)$, we can apply [link] to obtain that for $h = \hat{x} - x$,

$$\|h\|_2 \le C_0 \frac{\sigma_K(x)_1}{\sqrt{K}} + C_1 \frac{\left|\langle \Phi h_{\Lambda}, \Phi h \rangle\right|}{\|h_{\Lambda}\|_2}.$$

Furthermore, since $x, \hat{x} \in \mathcal{B}(y)$ we also have that $y = \Phi x = \Phi \hat{x}$, and hence $\Phi h = 0$. Therefore the second term vanishes, and we obtain the desired result.

[link] is rather remarkable. By considering the case where $x \in \Sigma_K = \{ x : \|x\|_0 \le K \}$ (so that $\sigma_K(x)_1 = 0$), we can see that, provided $\Phi$ satisfies the RIP (which, as shown earlier, allows for as few as $O(K \log(N/K))$ measurements), we can recover any $K$-sparse $x$ exactly. This result may seem improbable on its own, and one might suspect that the procedure would be highly sensitive to noise, but we will see next that [link] can also be used to demonstrate that this approach is in fact stable.
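
The exact recovery promised by this theorem is easy to witness numerically. Below is a minimal sketch, assuming Python with NumPy and SciPy (the dimensions and the use of `scipy.optimize.linprog` are illustrative choices of ours, not prescribed by the source), which recasts $\min \|z\|_1$ subject to $\Phi z = y$ as a linear program by splitting $z = u - v$ with $u, v \ge 0$:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(42)
N, M, K = 128, 64, 5                       # ambient dimension, measurements, sparsity

# A random Gaussian matrix satisfies the RIP with high probability when
# M is on the order of K log(N/K).
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)
y = Phi @ x                                # noise-free measurements

# min ||z||_1 s.t. Phi z = y, as an LP: z = u - v with u, v >= 0,
# minimize sum(u) + sum(v) subject to Phi u - Phi v = y.
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]

print(np.linalg.norm(x_hat - x))           # ~0: exact recovery, up to solver tolerance
```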

Note that [link] assumes that $\Phi$ satisfies the RIP. One could easily modify the argument to replace this with the assumption that $\Phi$ satisfies the null space property (NSP) instead. Specifically, if we are only interested in the noiseless setting, in which case $h$ lies in the null space of $\Phi$, then [link] simplifies and its proof can be broken into two steps: (i) show that if $\Phi$ satisfies the RIP then it satisfies the NSP (as shown in "The RIP and the NSP"), and (ii) show that the NSP implies the simplified version of [link]. The second step directly mirrors the proof of [link]; thus, by the same argument, it is straightforward to show that if $\Phi$ satisfies the NSP then it will obey the same error bound.

Source:  OpenStax, An introduction to compressive sensing. OpenStax CNX. Apr 02, 2011 Download for free at http://legacy.cnx.org/content/col11133/1.5