These steps are repeated until convergence. This is known as Orthogonal Matching Pursuit (OMP) [link] . Tropp and Gilbert [link] proved that OMP can be used to recover a sparse signal with high probability using compressive measurements. The algorithm converges in at most $K$ iterations, where $K$ is the sparsity of the signal, but requires the added computational cost of orthogonalization at each iteration. Indeed, the total complexity of OMP can be shown to be $O\left(MNK\right)$ .
While OMP is provably fast and can be shown to lead to exact recovery, the guarantees accompanying OMP for sparse recovery are weaker than those associated with optimization techniques. In particular, the reconstruction guarantees are not uniform , i.e., it cannot be shown that a single measurement matrix with $M=CK\log N$ rows can be used to recover every possible $K$ -sparse signal. (Although it is possible to obtain such uniform guarantees when it is acceptable to take more measurements. For example, see [link] .) Another issue with OMP is robustness to noise: it is unknown whether the solution obtained by OMP will only be perturbed slightly by the addition of a small amount of noise in the measurements. Nevertheless, OMP is an efficient method for CS recovery, especially when the signal sparsity $K$ is low. A pseudocode representation of OMP is shown below.
Inputs: Measurement matrix $\Phi $ , signal measurements $y$
Outputs: Sparse representation $\widehat{x}$
Initialize: ${\widehat{\theta}}_{0}=0$ , $r=y$ , $\Omega =\varnothing $ , $i=0$ .
while halting criterion false do
1. $i\leftarrow i+1$
2. $b\leftarrow {\Phi}^{T}r$ {form residual signal estimate}
3. $\Omega \leftarrow \Omega \cup \mathrm{supp}\left(\mathbf{T}\left(b,1\right)\right)$ {add index of residual's largest magnitude entry to signal support}
4. ${\widehat{x}}_{i}{|}_{\Omega}\leftarrow {\Phi}_{\Omega}^{\dagger}y$ , ${\widehat{x}}_{i}{|}_{{\Omega}^{C}}\leftarrow 0$ {form signal estimate}
5. $r\leftarrow y-\Phi {\widehat{x}}_{i}$ {update measurement residual}
end while
return $\widehat{x}\leftarrow {\widehat{x}}_{i}$
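As a concrete illustration, the loop above can be sketched in Python with NumPy. The halting criterion used here, a budget of $K$ iterations plus a residual-norm tolerance, is an illustrative assumption, as is the use of a least-squares solve in place of an explicit pseudoinverse.

```python
import numpy as np

def omp(Phi, y, K, tol=1e-6):
    """Sketch of Orthogonal Matching Pursuit: recover a K-sparse x from y = Phi @ x."""
    M, N = Phi.shape
    r = y.copy()              # measurement residual
    support = []              # indices in Omega
    x_hat = np.zeros(N)
    for _ in range(K):        # at most K iterations (assumed halting criterion)
        b = Phi.T @ r                      # correlate residual with columns
        j = int(np.argmax(np.abs(b)))      # largest-magnitude entry of b
        if j not in support:
            support.append(j)
        # least-squares estimate on the current support (the dagger step)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_hat = np.zeros(N)
        x_hat[support] = coef
        r = y - Phi @ x_hat                # update measurement residual
        if np.linalg.norm(r) < tol:
            break
    return x_hat
```

On a well-conditioned random $\Phi$ with $M$ comfortably larger than $K\log N$, this sketch typically recovers the true support exactly, after which the least-squares step reproduces the signal up to floating-point precision.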
Orthogonal Matching Pursuit is ineffective when the signal is not very sparse, since the cost of the orthogonalization step grows quadratically with the number of nonzeros $K$ . In this setting, Stagewise Orthogonal Matching Pursuit (StOMP) [link] is a better choice for approximately sparse signals in a large-scale setting.
StOMP offers considerable computational advantages over ${\ell}_{1}$ minimization and Orthogonal Matching Pursuit for large-scale problems with sparse solutions. The algorithm starts with an initial residual ${r}_{0}=y$ and calculates the set of all projections ${\Phi}^{T}{r}_{k-1}$ at the ${k}^{\mathrm{th}}$ stage (as in OMP). However, instead of picking a single dictionary element, it uses a threshold parameter $\tau $ to determine the next best set of columns of $\Phi $ whose correlations with the current residual exceed $\tau $ . The new residual is calculated using a least squares estimate of the signal using this expanded set of columns, just as before.
Unlike OMP, the number of iterations in StOMP is fixed and chosen beforehand; $S=10$ is recommended in [link] . In general, the complexity of StOMP is $O(KN\log N)$ , a significant improvement over OMP. However, StOMP offers no comparable reconstruction guarantees. StOMP also has moderate memory requirements compared to OMP, where the orthogonalization requires maintaining a Cholesky factorization of the dictionary elements.
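A minimal sketch of the staged selection described above, in the same NumPy style. The threshold form $\tau = t\,\sigma$ with $\sigma$ taken as the formal noise level $\|r\|/\sqrt{M}$ , and the default $t=2.5$ , are illustrative assumptions rather than the definitive parameter choices of [link] .

```python
import numpy as np

def stomp(Phi, y, S=10, t=2.5):
    """Sketch of Stagewise OMP: select ALL columns whose correlation with
    the residual exceeds a per-stage threshold, for a fixed number of stages S."""
    M, N = Phi.shape
    r = y.copy()
    support = np.zeros(N, dtype=bool)
    x_hat = np.zeros(N)
    for _ in range(S):                       # fixed number of stages
        b = Phi.T @ r                        # all projections of the residual
        sigma = np.linalg.norm(r) / np.sqrt(M)   # assumed noise-level estimate
        new = np.abs(b) > t * sigma          # every column exceeding the threshold
        if not new.any():
            break
        support |= new                       # expand the support set
        cols = np.flatnonzero(support)
        # least-squares estimate on the expanded support, as in OMP
        coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
        x_hat = np.zeros(N)
        x_hat[cols] = coef
        r = y - Phi @ x_hat                  # update measurement residual
        if np.linalg.norm(r) < 1e-6:
            break
    return x_hat
```

Because each stage can admit many columns at once, StOMP usually terminates in far fewer stages than OMP needs iterations; a few spurious columns in the support are harmless, since the least-squares step assigns them near-zero coefficients once the true columns are included.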