The ability to perfectly reconstruct a sparse signal from noise-free measurements represents a promising result. However, in most real-world systems the measurements are likely to be contaminated by some form of noise. For instance, in order to process data in a computer we must be able to represent it using a finite number of bits, and hence the measurements will typically be subject to quantization error. Moreover, systems which are implemented in physical hardware will be subject to a variety of different types of noise depending on the setting.
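The quantization error mentioned above fits naturally into the bounded-noise model: if each of the $M$ measurements is rounded to a uniform grid of step $\Delta$, every entry of the error vector is at most $\Delta/2$ in magnitude, so $\|e\|_2 \le \sqrt{M}\,\Delta/2$. The following numerical sketch (with hypothetical dimensions and quantizer step) illustrates this bound:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, Delta = 128, 256, 0.05           # hypothetical sizes and quantizer step

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, size=8, replace=False)] = rng.standard_normal(8)

y = Phi @ x                            # noise-free measurements
y_q = Delta * np.round(y / Delta)      # uniform quantization of each entry
e = y_q - y                            # quantization error acts as bounded noise

# each entry is off by at most Delta/2, so ||e||_2 <= sqrt(M) * Delta / 2
epsilon = np.sqrt(M) * Delta / 2
print(np.linalg.norm(e), "<=", epsilon)
```

This gives a concrete value of $\epsilon$ that can be plugged into the recovery guarantees below.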
Perhaps somewhat surprisingly, one can show that it is possible to modify the $\ell_1$-minimization program to stably recover sparse signals under a variety of common noise models [link] , [link] , [link] . As might be expected, the restricted isometry property (RIP) is extremely useful in establishing performance guarantees in noise.
In our analysis we will make repeated use of Lemma 1 from "Noise-free signal recovery" , so we repeat it here for convenience.
Suppose that $\Phi$ satisfies the RIP of order $2K$ with $\delta_{2K} < \sqrt{2}-1$. Let $x, \widehat{x} \in \mathbb{R}^{N}$ be given, and define $h = \widehat{x} - x$. Let $\Lambda_0$ denote the index set corresponding to the $K$ entries of $x$ with largest magnitude and $\Lambda_1$ the index set corresponding to the $K$ entries of $h_{\Lambda_0^c}$ with largest magnitude. Set $\Lambda = \Lambda_0 \cup \Lambda_1$. If $\|\widehat{x}\|_1 \le \|x\|_1$, then

$$\|h\|_2 \le C_0 \frac{\sigma_K(x)_1}{\sqrt{K}} + C_1 \frac{\left| \langle \Phi h_{\Lambda}, \Phi h \rangle \right|}{\|h_{\Lambda}\|_2},$$

where

$$C_0 = 2\,\frac{1 - (1-\sqrt{2})\delta_{2K}}{1 - (1+\sqrt{2})\delta_{2K}}, \qquad C_1 = \frac{2}{1 - (1+\sqrt{2})\delta_{2K}}.$$
We first provide a bound on the worst-case performance for uniformly bounded noise, as first investigated in [link] .
Suppose that $\Phi$ satisfies the RIP of order $2K$ with $\delta_{2K} < \sqrt{2}-1$ and let $y = \Phi x + e$ where $\|e\|_2 \le \epsilon$. Then when $\mathcal{B}(y) = \{z : \|\Phi z - y\|_2 \le \epsilon\}$, the solution $\widehat{x}$ to [link] obeys

$$\|\widehat{x} - x\|_2 \le C_0 \frac{\sigma_K(x)_1}{\sqrt{K}} + C_2 \epsilon,$$

where

$$C_0 = 2\,\frac{1 - (1-\sqrt{2})\delta_{2K}}{1 - (1+\sqrt{2})\delta_{2K}}, \qquad C_2 = 4\,\frac{\sqrt{1+\delta_{2K}}}{1 - (1+\sqrt{2})\delta_{2K}}.$$
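To get a feel for the size of constants of this form, namely $C_0 = 2\bigl(1-(1-\sqrt{2})\delta_{2K}\bigr)/\bigl(1-(1+\sqrt{2})\delta_{2K}\bigr)$ and $C_2 = 4\sqrt{1+\delta_{2K}}/\bigl(1-(1+\sqrt{2})\delta_{2K}\bigr)$, one can evaluate them numerically for sample values of $\delta_{2K}$. The sketch below is purely illustrative:

```python
import numpy as np

def recovery_constants(delta):
    """Evaluate the constants C0 and C2 in the bounded-noise
    recovery bound for a given value of delta_2K."""
    assert delta < np.sqrt(2) - 1, "bound requires delta_2K < sqrt(2) - 1"
    denom = 1 - (1 + np.sqrt(2)) * delta
    C0 = 2 * (1 - (1 - np.sqrt(2)) * delta) / denom
    C2 = 4 * np.sqrt(1 + delta) / denom
    return C0, C2

for delta in (0.05, 0.1, 0.2):
    print(delta, recovery_constants(delta))
```

Note that as $\delta_{2K} \to \sqrt{2}-1$ the denominator $1-(1+\sqrt{2})\delta_{2K}$ vanishes and both constants blow up, which is why the theorem requires $\delta_{2K} < \sqrt{2}-1$.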
We are interested in bounding $\|h\|_2 = \|\widehat{x} - x\|_2$. Since $\|e\|_2 \le \epsilon$, $x \in \mathcal{B}(y)$, and therefore we know that $\|\widehat{x}\|_1 \le \|x\|_1$. Thus we may apply [link] , and it remains to bound $\left| \langle \Phi h_{\Lambda}, \Phi h \rangle \right|$. To do this, we observe that

$$\|\Phi h\|_2 = \|\Phi \widehat{x} - \Phi x\|_2 = \|\Phi \widehat{x} - y + y - \Phi x\|_2 \le \|\Phi \widehat{x} - y\|_2 + \|y - \Phi x\|_2 \le 2\epsilon,$$

where the last inequality follows since $x, \widehat{x} \in \mathcal{B}(y)$. Combining this with the RIP and the Cauchy-Schwarz inequality we obtain

$$\left| \langle \Phi h_{\Lambda}, \Phi h \rangle \right| \le \|\Phi h_{\Lambda}\|_2 \|\Phi h\|_2 \le 2\epsilon \sqrt{1+\delta_{2K}}\, \|h_{\Lambda}\|_2.$$

Thus,

$$\|h\|_2 \le C_0 \frac{\sigma_K(x)_1}{\sqrt{K}} + C_1 \frac{2\epsilon \sqrt{1+\delta_{2K}}\, \|h_{\Lambda}\|_2}{\|h_{\Lambda}\|_2} = C_0 \frac{\sigma_K(x)_1}{\sqrt{K}} + C_2 \epsilon,$$

completing the proof.
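The theorem concerns the constrained program $\min \|z\|_1$ subject to $z \in \mathcal{B}(y)$, whose exact solution requires a second-order cone solver. As a lightweight numerical sketch, the snippet below instead solves the closely related unconstrained LASSO surrogate via iterative soft-thresholding (ISTA); the problem sizes, noise level, and regularization weight are all hypothetical choices, not prescribed by the theorem:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 80, 256, 5                   # hypothetical problem sizes
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

epsilon = 0.01
e = rng.standard_normal(M)
e *= epsilon / np.linalg.norm(e)       # noise scaled so that ||e||_2 = epsilon
y = Phi @ x + e

# ISTA for the LASSO surrogate: min 0.5*||Phi z - y||^2 + lam*||z||_1
lam = 0.01 * np.max(np.abs(Phi.T @ y))   # small, heuristic regularization weight
L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
z = np.zeros(N)
for _ in range(2000):
    grad = Phi.T @ (Phi @ z - y)
    w = z - grad / L
    z = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold

print(np.linalg.norm(z - x))   # reconstruction error, small relative to ||x||_2
```

Consistent with the theorem (here $x$ is exactly $K$-sparse, so $\sigma_K(x)_1 = 0$), the reconstruction error is on the order of the noise level $\epsilon$ rather than of $\|x\|_2$.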
In order to place this result in context, consider how we would recover a sparse vector $x$ if we happened to already know the $K$ locations of the nonzero coefficients, which we denote by $\Lambda_0$. This is referred to as the oracle estimator . In this case a natural approach is to reconstruct the signal using a simple pseudoinverse:

$$\widehat{x}_{\Lambda_0} = \Phi_{\Lambda_0}^{\dagger} y = \left(\Phi_{\Lambda_0}^T \Phi_{\Lambda_0}\right)^{-1} \Phi_{\Lambda_0}^T y, \qquad \widehat{x}_{\Lambda_0^c} = 0.$$
The implicit assumption in [link] is that $\Phi_{\Lambda_0}$ has full column rank (here $\Phi_{\Lambda_0}$ denotes the $M \times K$ matrix obtained from $\Phi$ by removing the columns indexed by $\Lambda_0^c$), so that there is a unique solution to the equation $y = \Phi_{\Lambda_0} x_{\Lambda_0}$. With this choice, the recovery error is given by

$$\|\widehat{x} - x\|_2 = \left\| \Phi_{\Lambda_0}^{\dagger} \left( \Phi_{\Lambda_0} x_{\Lambda_0} + e \right) - x_{\Lambda_0} \right\|_2 = \left\| \Phi_{\Lambda_0}^{\dagger} e \right\|_2.$$
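A short numerical check of the oracle estimator (with hypothetical dimensions) confirms that, given the true support, the error of the pseudoinverse reconstruction reduces exactly to the pseudoinverse applied to the noise alone:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, K = 64, 200, 6                   # hypothetical sizes
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

support = np.sort(rng.choice(N, size=K, replace=False))  # oracle's Lambda_0
x = np.zeros(N)
x[support] = rng.standard_normal(K)

e = 0.01 * rng.standard_normal(M)
y = Phi @ x + e

# oracle estimate: least squares on the known support, zero elsewhere
Phi_sub = Phi[:, support]              # the M x K submatrix Phi_{Lambda_0}
x_hat = np.zeros(N)
x_hat[support] = np.linalg.pinv(Phi_sub) @ y

# recovery error equals ||pinv(Phi_{Lambda_0}) e||_2
err = np.linalg.norm(x_hat - x)
print(err, np.linalg.norm(np.linalg.pinv(Phi_sub) @ e))
```

Since a Gaussian $\Phi_{\Lambda_0}$ with $M > K$ has full column rank almost surely, $\Phi_{\Lambda_0}^{\dagger}\Phi_{\Lambda_0} = I$ and the signal term cancels exactly, leaving only the noise term.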