
Compressive sampling matching pursuit (CoSaMP)

Greedy pursuit algorithms (such as MP and OMP) alleviate the computational complexity of optimization-based sparse recovery, but they lose the accompanying strong guarantees of uniform signal recovery given a requisite number of measurements of the signal. In addition, it is unknown whether these greedy algorithms are robust to signal and/or measurement noise.

There have been some recent attempts to develop greedy algorithms (Regularized OMP  [link] , [link] , Compressive Sampling Matching Pursuit (CoSaMP)  [link] , and Subspace Pursuit  [link] ) that bridge this gap between uniformity and complexity. Intriguingly, the restricted isometry property (RIP), developed in the context of analyzing ℓ_1 minimization , plays a central role in such algorithms. Indeed, if the matrix Φ satisfies the RIP of order K , then every subset of K columns of the matrix is approximately orthonormal. This property is used to prove strong convergence results for these greedy-like methods.
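Concretely, Φ is said to satisfy the RIP of order K with constant δ_K ∈ (0, 1) if

(1 − δ_K) ‖x‖_2^2 ≤ ‖Φx‖_2^2 ≤ (1 + δ_K) ‖x‖_2^2

holds for every K-sparse vector x; the approximate orthonormality of every K-column submatrix follows directly from this pair of inequalities.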

One variant of such an approach is employed by the CoSaMP algorithm. An interesting feature of CoSaMP is that, unlike in MP, OMP, and StOMP, indices can be added to as well as deleted from the current set of chosen indices in the signal estimate. In contrast, the earlier greedy pursuit algorithms suffer from the fact that a chosen index (or equivalently, a chosen atom from the dictionary Φ) remains in the signal representation until the end. A brief description of CoSaMP is as follows: at the start of a given iteration i , suppose the signal estimate is x̂_{i-1} .

  • Form signal residual estimate: e ← Φ^T r.
  • Identify the 2K largest-magnitude coefficients of the signal residual e ; call this set of indices Ω.
  • Merge supports: T ← Ω ∪ supp(x̂_{i-1}).
  • Form signal estimate b by subspace projection: b|_T ← Φ_T† y, b|_{T^C} ← 0, where Φ_T denotes the submatrix of Φ with columns indexed by T and † denotes the pseudoinverse.
  • Prune b by retaining its K largest coefficients; call this new estimate x̂_i.
  • Update measurement residual: r ← y − Φ x̂_i.

This procedure is summarized in pseudocode form below.

Inputs: Measurement matrix Φ , measurements y , signal sparsity K
Output: K-sparse approximation x̂ to true signal representation x
Initialize: x̂_0 = 0, r = y, i = 0
while halting criterion false do
  1. i ← i + 1
  2. e ← Φ^T r {form signal residual estimate}
  3. Ω ← supp(T(e, 2K)) {prune signal residual estimate}
  4. T ← Ω ∪ supp(x̂_{i-1}) {merge supports}
  5. b|_T ← Φ_T† y, b|_{T^C} ← 0 {form signal estimate}
  6. x̂_i ← T(b, K) {prune signal estimate}
  7. r ← y − Φ x̂_i {update measurement residual}
end while
return x̂ ← x̂_i

Here T(·, K) denotes the hard thresholding operator that retains the K largest-magnitude entries of its argument and sets the remaining entries to zero.
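As a concrete illustration, the following is a minimal sketch of this procedure in Python with NumPy. It is not taken from the original reference: the function name cosamp, the iteration cap, the halting tolerance, and the use of an ordinary least-squares solve for the subspace projection step are all illustrative assumptions.

import numpy as np

def cosamp(Phi, y, K, max_iter=30, tol=1e-6):
    # Illustrative sketch of CoSaMP: recover a K-sparse x from y = Phi @ x.
    M, N = Phi.shape
    x_hat = np.zeros(N)
    r = y.copy()
    for _ in range(max_iter):
        e = Phi.T @ r                                 # signal residual estimate
        Omega = np.argsort(np.abs(e))[-2 * K:]        # 2K largest-magnitude entries
        T = np.union1d(Omega, np.flatnonzero(x_hat))  # merge supports
        sol, _, _, _ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
        b = np.zeros(N)
        b[T] = sol                                    # subspace projection onto columns in T
        keep = np.argsort(np.abs(b))[-K:]             # K largest-magnitude entries of b
        x_hat = np.zeros(N)
        x_hat[keep] = b[keep]                         # prune to a K-sparse estimate
        r = y - Phi @ x_hat                           # update measurement residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(y):
            break                                     # halting criterion met
    return x_hat

For instance, with a 128 × 512 Gaussian measurement matrix and a 10-sparse signal, a handful of iterations of this sketch typically recovers the support exactly.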

As discussed in  [link] , the key computational issues for CoSaMP are the formation of the signal residual and the method used for subspace projection in the signal estimation step. Under certain general assumptions, the computational cost of CoSaMP can be shown to be O ( M N ) , which is independent of the sparsity of the original signal. This represents an improvement over both greedy algorithms and convex methods.

While CoSaMP arguably represents the state of the art in sparse recovery algorithm performance, it possesses one drawback: the algorithm requires prior knowledge of the sparsity K of the target signal. An incorrect choice of input sparsity may lead to a worse guarantee than the actual error incurred by a weaker algorithm such as OMP. The stability bounds accompanying CoSaMP ensure that the error due to an incorrect parameter choice is bounded, but it is not yet known how these bounds translate into practice.

Iterative hard thresholding

Iterative Hard Thresholding (IHT) is a well-known algorithm for solving nonlinear inverse problems. The structure of IHT is simple: starting with an initial estimate x̂_0 , the algorithm obtains a sequence of estimates using the iteration:

x̂_{i+1} = T( x̂_i + Φ^T ( y − Φ x̂_i ), K ).
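A minimal sketch of this iteration in Python with NumPy is below; the function name and the fixed iteration count are illustrative assumptions, and the plain iteration is only guaranteed to converge when Φ is scaled so that its spectral norm is less than one (otherwise a step size must be introduced in front of the gradient term).

import numpy as np

def iht(Phi, y, K, num_iter=100):
    # Illustrative sketch of IHT for y = Phi @ x with x K-sparse.
    # Assumes ||Phi||_2 < 1; otherwise scale the gradient term by a step size.
    N = Phi.shape[1]
    x_hat = np.zeros(N)
    for _ in range(num_iter):
        g = x_hat + Phi.T @ (y - Phi @ x_hat)  # gradient step on ||y - Phi x||_2^2
        keep = np.argsort(np.abs(g))[-K:]      # indices of the K largest magnitudes
        x_hat = np.zeros(N)
        x_hat[keep] = g[keep]                  # hard threshold: x_hat = T(g, K)
    return x_hat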

In  [link] , Blumensath and Davies proved that this sequence of iterations converges to a fixed point x̂ ; further, if the matrix Φ possesses the RIP, they showed that the recovered signal x̂ satisfies an instance-optimality guarantee of the type described earlier . The guarantees (as well as the proof technique) are reminiscent of those derived in the development of other algorithms such as ROMP and CoSaMP.

Discussion

While convex optimization techniques are powerful methods for computing sparse representations, there are also a variety of greedy/iterative methods for solving such problems. Greedy algorithms rely on iterative approximation of the signal coefficients and support, either by iteratively identifying the support of the signal until a convergence criterion is met, or alternatively by obtaining an improved estimate of the sparse signal at each iteration by accounting for the mismatch to the measured data. Some greedy methods can actually be shown to have performance guarantees that match those obtained for convex optimization approaches. In fact, some of the more sophisticated greedy algorithms are remarkably similar to those used for ℓ_1 minimization described previously . However, the techniques required to prove performance guarantees are substantially different. There also exist iterative techniques for sparse recovery based on message passing schemes for sparse graphical models. In fact, some greedy algorithms (such as those in  [link] , [link] ) can be directly interpreted as message passing methods  [link] .


Source:  OpenStax, An introduction to compressive sensing. OpenStax CNX. Apr 02, 2011 Download for free at http://legacy.cnx.org/content/col11133/1.5