
Compressive sampling matching pursuit (CoSaMP)

Greedy pursuit algorithms (such as MP and OMP) alleviate the computational complexity encountered in optimization-based sparse recovery, but they lose the associated strong guarantees of uniform signal recovery given a requisite number of measurements of the signal. In addition, it is unknown whether these greedy algorithms are robust to signal and/or measurement noise.

There have been some recent attempts to develop greedy algorithms (Regularized OMP [link], [link], Compressive Sampling Matching Pursuit (CoSaMP) [link], and Subspace Pursuit [link]) that bridge this gap between uniformity and complexity. Intriguingly, the restricted isometry property (RIP), developed in the context of analyzing ℓ_1 minimization, plays a central role in such algorithms. Indeed, if the matrix Φ satisfies the RIP of order K, then every subset of K columns of the matrix is approximately orthonormal. This property is used to prove strong convergence results for these greedy-like methods.
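As a rough empirical illustration of this property (not taken from the text, with dimensions and random seed chosen arbitrarily), the following NumPy sketch draws a Gaussian Φ whose columns have unit norm in expectation and checks that the Gram matrix of a random K-column submatrix is close to the identity, which is what approximate orthonormality of column subsets expresses.

import numpy as np

rng = np.random.default_rng(0)
M, N, K = 128, 512, 10
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # columns have unit norm in expectation

cols = rng.choice(N, size=K, replace=False)      # a random subset of K columns
G = Phi[:, cols].T @ Phi[:, cols]                # Gram matrix of the K-column submatrix
eigs = np.linalg.eigvalsh(G)                     # deviation from 1 reflects the isometry constant
print("eigenvalue range:", eigs.min(), eigs.max())  # expected to lie near 1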

One variant of such an approach is employed by the CoSaMP algorithm. An interesting feature of CoSaMP is that, unlike in MP, OMP, and StOMP, indices can be both added to and deleted from the current set of chosen indices in the signal estimate. In contrast, greedy pursuit algorithms suffer from the fact that a chosen index (or, equivalently, a chosen atom from the dictionary Φ) remains in the signal representation until the end. A brief description of CoSaMP is as follows: at the start of a given iteration i, suppose the signal estimate is x̂_{i-1}.

  • Form signal residual estimate: e ← Φ^T r.
  • Find the biggest 2K coefficients of the signal residual e; call this set of indices Ω.
  • Merge supports: T ← Ω ∪ supp(x̂_{i-1}).
  • Form signal estimate b by subspace projection: b|_T ← Φ_T^† y, b|_{T^C} ← 0.
  • Prune b by retaining its K largest coefficients. Call this new estimate x̂_i.
  • Update measurement residual: r ← y − Φ x̂_i.

This procedure is summarized in pseudocode form below.

Inputs: Measurement matrix Φ, measurements y, signal sparsity K
Output: K-sparse approximation x̂ to true signal representation x
Initialize: x̂_0 = 0, r = y, i = 0
while halting criterion false do
    1. i ← i + 1
    2. e ← Φ^T r                      {form signal residual estimate}
    3. Ω ← supp(T(e, 2K))             {prune signal residual estimate}
    4. T ← Ω ∪ supp(x̂_{i-1})          {merge supports}
    5. b|_T ← Φ_T^† y, b|_{T^C} ← 0   {form signal estimate}
    6. x̂_i ← T(b, K)                  {prune signal estimate}
    7. r ← y − Φ x̂_i                  {update measurement residual}
end while
return x̂ ← x̂_i
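The following is a minimal NumPy sketch of this pseudocode for real-valued Φ and y. The function name, the least-squares solve used for the subspace projection (numpy.linalg.lstsq in place of the pseudoinverse Φ_T^†), and the halting criterion (an iteration cap plus a residual tolerance) are implementation choices made for illustration rather than part of the original description.

import numpy as np

def cosamp(Phi, y, K, max_iter=50, tol=1e-6):
    M, N = Phi.shape
    x_hat = np.zeros(N)                  # x̂_0 = 0
    r = y.astype(float).copy()           # initial measurement residual r = y
    for _ in range(max_iter):            # assumed halting criterion: iteration cap or small residual
        e = Phi.T @ r                                    # 2. signal residual estimate
        Omega = np.argsort(np.abs(e))[-2 * K:]           # 3. indices of the 2K largest entries
        T = np.union1d(Omega, np.nonzero(x_hat)[0]).astype(int)  # 4. merge supports
        b = np.zeros(N)
        sol, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)      # 5. least-squares subspace projection
        b[T] = sol
        keep = np.argsort(np.abs(b))[-K:]                # 6. prune to the K largest coefficients
        x_hat = np.zeros(N)
        x_hat[keep] = b[keep]
        r = y - Phi @ x_hat                              # 7. update measurement residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(y):
            break
    return x_hat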

As discussed in [link], the key computational issues for CoSaMP are the formation of the signal residual estimate and the method used for the subspace projection in the signal estimation step. Under certain general assumptions, the computational cost of CoSaMP can be shown to be O(MN), which is independent of the sparsity of the original signal. This represents an improvement over both greedy algorithms and convex methods.

While CoSaMP arguably represents the state of the art in sparse recovery algorithm performance, it possesses one drawback: the algorithm requires prior knowledge of the sparsity K of the target signal. An incorrect choice of the input sparsity may yield a guarantee that is worse than the error actually incurred by a weaker algorithm such as OMP. The stability bounds accompanying CoSaMP ensure that the error due to an incorrect parameter choice is bounded, but it is not yet known how these bounds translate into practice.

Iterative hard thresholding

Iterative Hard Thresholding (IHT) is a well-known algorithm for solving nonlinear inverse problems. The structure of IHT is simple: starting from an initial estimate x̂_0, it obtains a sequence of estimates using the iteration:

x̂_{i+1} = T( x̂_i + Φ^T (y − Φ x̂_i), K ).

In [link], Blumensath and Davies proved that this sequence of iterations converges to a fixed point x̂; further, if the matrix Φ possesses the RIP, they showed that the recovered signal x̂ satisfies an instance-optimality guarantee of the type described earlier. The guarantees (as well as the proof techniques) are reminiscent of those derived in the development of other algorithms such as ROMP and CoSaMP.
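A minimal NumPy sketch of this iteration is given below; the iteration cap and the stopping test on successive estimates are assumptions made for illustration, not part of the analysis in [link].

import numpy as np

def iht(Phi, y, K, max_iter=100, tol=1e-6):
    N = Phi.shape[1]
    x_hat = np.zeros(N)                          # initial estimate x̂_0 = 0
    for _ in range(max_iter):
        g = x_hat + Phi.T @ (y - Phi @ x_hat)    # gradient-style step toward the measured data
        x_next = np.zeros(N)
        keep = np.argsort(np.abs(g))[-K:]        # hard threshold T(., K): keep the K largest entries
        x_next[keep] = g[keep]
        if np.linalg.norm(x_next - x_hat) <= tol:  # stop once the estimates stabilize
            return x_next
        x_hat = x_next
    return x_hat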

Discussion

While convex optimization techniques are powerful methods for computing sparse representations, there are also a variety of greedy/iterative methods for solving such problems. Greedy algorithms rely on iterative approximation of the signal coefficients and support, either by iteratively identifying the support of the signal until a convergence criterion is met, or alternatively by obtaining an improved estimate of the sparse signal at each iteration by accounting for the mismatch to the measured data. Some greedy methods can actually be shown to have performance guarantees that match those obtained for convex optimization approaches. In fact, some of the more sophisticated greedy algorithms are remarkably similar to those used for ℓ_1 minimization described previously. However, the techniques required to prove performance guarantees are substantially different. There also exist iterative techniques for sparse recovery based on message passing schemes for sparse graphical models. In fact, some greedy algorithms (such as those in [link], [link]) can be directly interpreted as message passing methods [link].





Source:  OpenStax, An introduction to compressive sensing. OpenStax CNX. Apr 02, 2011 Download for free at http://legacy.cnx.org/content/col11133/1.5
