Let $f=\sum_{m\in \Gamma}a[m]\,g_{m}$ be the representation of $f$ in an orthonormal basis $\mathcal{B}=\{g_{m}\}_{m\in \Gamma}$. An approximation must be recovered from the degraded, noisy measurements $Y = Uf + W$.
A basis $\mathcal{B}$ of singular vectors diagonalizes $U^{*}U$. Then $U$ transforms a subset of $Q$ vectors $\{g_{m}\}_{m\in \Gamma_{Q}}$ of $\mathcal{B}$ into an orthogonal basis $\{Ug_{m}\}_{m\in \Gamma_{Q}}$ of $\mathrm{Im}\,U$ and sets all other vectors to zero. A singular value decomposition estimates the coefficients $a[m]$ of $f$ by projecting $Y$ on this singular basis and renormalizing the resulting coefficients,
where the $h_{m}^{2}$ are regularization parameters.
Such estimators recover nonzero coefficients in a space of dimension $Q$ and thus bring no super-resolution. If $U$ is a convolution operator, then $\mathcal{B}$ is the Fourier basis and a singular value estimation implements a regularized inverse convolution.
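As an illustration, here is a minimal NumPy sketch of such a regularized inverse convolution. For a circular convolution the discrete Fourier basis diagonalizes $U^{*}U$, so the estimator is diagonal in frequency; a single constant `eps` stands in for the regularization parameters $h_m^2$. The kernel, signal, and parameter values below are toy choices for illustration, not taken from the text.

```python
import numpy as np

def regularized_deconvolution(y, h, eps=1e-3):
    """Regularized inverse convolution, diagonal in the Fourier basis.

    y   : observation, y = h (*) f + noise, with (*) circular convolution
    h   : known blur kernel (same length as y)
    eps : regularization constant standing in for the parameters h_m^2
    """
    H = np.fft.fft(h)
    Yf = np.fft.fft(y)
    # |H[m]| are the singular values of the convolution operator; the
    # denominator attenuates frequencies where |H[m]| is small instead
    # of amplifying the noise by 1 / H[m].
    return np.real(np.fft.ifft(np.conj(H) * Yf / (np.abs(H) ** 2 + eps)))

# Toy example: circularly blur two spikes with an exponential kernel.
rng = np.random.default_rng(0)
N = 128
t = np.arange(N)
f = np.zeros(N); f[30] = 1.0; f[80] = -0.5
h = np.exp(-0.2 * np.minimum(t, N - t)); h /= h.sum()
y = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)))
y += 0.005 * rng.standard_normal(N)
f_rec = regularized_deconvolution(y, h)
```

The estimate localizes the two spikes but cannot recover frequencies where the kernel's spectrum is negligible, which is why such estimators bring no super-resolution.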
The basis that diagonalizes $U^{*}U$ rarely provides a sparse signal representation. For example, a Fourier basis that diagonalizes convolution operators does not efficiently approximate signals that include singularities.
Donoho (Donoho:95) introduced more flexibility by looking for a basis $\mathcal{B}$ providing a sparse signal representation, where a subset of $Q$ vectors $\{g_{m}\}_{m\in \Gamma_{Q}}$ is transformed by $U$ into a Riesz basis $\{Ug_{m}\}_{m\in \Gamma_{Q}}$ of $\mathrm{Im}\,U$, while the others are set to zero. With an appropriate renormalization, $\{\tilde{\lambda}_{m}^{-1}\,Ug_{m}\}_{m\in \Gamma_{Q}}$ has a biorthogonal basis $\{\tilde{\phi}_{m}\}_{m\in \Gamma_{Q}}$ normalized so that $\|\tilde{\phi}_{m}\|=1$. The sparse coefficients of $f$ in $\mathcal{B}$ can then be estimated with a thresholding,
for appropriately chosen thresholds $T_{m}$.
For classes of signals that are sparse in $\mathcal{B}$, such thresholding estimators may yield a nearly minimax risk, but they provide no super-resolution since this nonlinear projector remains in a space of dimension $Q$. This result applies to classes of convolution operators $U$ in wavelet or wavelet packet bases. Diagonal inverse estimators are computationally efficient and potentially optimal in cases where super-resolution is not possible.
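As a toy illustration of the thresholding step alone, consider the simplest setting where the sparsifying basis is the canonical one and $U$ is the identity, so no inverse problem is involved. A hard thresholding at the universal threshold $T=\sigma\sqrt{2\log N}$, one common choice for the $T_m$, keeps the few large coefficients and suppresses the noise; the signal and noise level below are invented for the example.

```python
import numpy as np

def hard_threshold_estimate(Y, sigma):
    """Keep coefficients above the universal threshold, set the rest to zero."""
    T = sigma * np.sqrt(2 * np.log(len(Y)))   # one common choice of threshold
    return np.where(np.abs(Y) > T, Y, 0.0)

rng = np.random.default_rng(1)
N = 256
f = np.zeros(N); f[[10, 50, 200]] = [5.0, -4.0, 3.0]   # sparse signal
sigma = 0.5
Y = f + sigma * rng.standard_normal(N)                 # noisy observation
F = hard_threshold_estimate(Y, sigma)
```

The estimator's risk is driven by the number of significant coefficients rather than by $N$, which is why sparsity in $\mathcal{B}$ yields a nearly minimax risk.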
Suppose that $f$ has a sparse representation in some dictionary $\mathcal{D}=\{g_{p}\}_{p\in \Gamma}$ of $P$ normalized vectors. The $P$ vectors of the transformed dictionary $\mathcal{D}_{U}=U\mathcal{D}=\{Ug_{p}\}_{p\in \Gamma}$ belong to the space $\mathrm{Im}\,U$ of dimension $Q<P$ and thus define a redundant dictionary. Vectors in the approximation support $\Lambda$ of $f$ are not restricted a priori to a particular subspace of $\mathbb{C}^{N}$. Super-resolution is possible if the approximation support $\Lambda$ of $f$ in $\mathcal{D}$ can be estimated by decomposing the noisy data $Y$ over $\mathcal{D}_{U}$. This depends on the properties of the support $\Lambda$ of $f$ in $\Gamma$.
Let $w_{\Lambda}=f-f_{\Lambda}$ be the approximation error of a sparse representation $f_{\Lambda}=\sum_{p\in \Lambda}a[p]\,g_{p}$ of $f$. The observed signal can then be written as
$$Y = U f_{\Lambda} + U w_{\Lambda} + W .$$
If the support $\Lambda$ can be identified by finding a sparse approximation of $Y$ in $\mathcal{D}_{U}$,
$$Y \approx \sum_{p\in \tilde{\Lambda}} \tilde{a}[p]\, U g_{p} ,$$
then we can recover a super-resolution estimation of $f$:
$$\tilde{F} = \sum_{p\in \tilde{\Lambda}} \tilde{a}[p]\, g_{p} .$$
This shows that super-resolution is possible if the approximation support $\Lambda$ can be identified by decomposing $Y$ in the redundant transformed dictionary $\mathcal{D}_{U}$. If the exact recovery criterion $ERC(\Lambda)<1$ is satisfied and if $\{Ug_{p}\}_{p\in \Lambda}$ is a Riesz basis, then $\Lambda$ can be recovered using pursuit algorithms with controlled error bounds.
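A minimal sketch of support identification by a pursuit algorithm, under strong simplifying assumptions: the dictionary $\mathcal{D}$ is the canonical basis (so $\mathcal{D}_U$ consists of the columns of $U$), $U$ is a random Gaussian matrix, and the data are noiseless. Orthogonal matching pursuit, one standard pursuit algorithm (not the specific construction of the text), greedily estimates the support, after which the recovered coefficients give the super-resolved estimate; all sizes and amplitudes below are toy choices.

```python
import numpy as np

def omp(Y, DU, M):
    """Orthogonal matching pursuit over the transformed dictionary DU."""
    residual = Y.copy()
    support = []
    a = np.zeros(0)
    for _ in range(M):
        # Correlate the residual with the normalized atoms of DU.
        corr = np.abs(DU.T @ residual) / np.linalg.norm(DU, axis=0)
        corr[support] = 0.0                       # do not reselect an atom
        support.append(int(np.argmax(corr)))
        # Orthogonal projection on the selected atoms, then update residual.
        a, *_ = np.linalg.lstsq(DU[:, support], Y, rcond=None)
        residual = Y - DU[:, support] @ a
    return support, a

# f is 3-sparse in the canonical dictionary D, so D_U = U itself.
rng = np.random.default_rng(0)
N, Q = 64, 48
U = rng.standard_normal((Q, N)) / np.sqrt(Q)      # random measurement operator
f = np.zeros(N); f[[5, 20, 40]] = [1.5, -1.0, 0.8]
Y = U @ f                                         # noiseless observations

support, a = omp(Y, U, 3)
F = np.zeros(N); F[support] = a                   # super-resolved estimate
```

Although $Y$ lives in a space of dimension $Q=48$, the estimate $\tilde F = F$ recovers coefficients anywhere among the $N=64$ dictionary positions, which is the super-resolution effect described above.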
For most operators $U$, not all sparse approximation sets can be recovered. It is necessary to impose further geometric conditions on $\Lambda$ in $\Gamma$, which makes super-resolution difficult and often unstable. Numerical applications to sparse spike deconvolution, tomography, super-resolution zooming, and inpainting illustrate these results.
Candès and Tao (candes-near-optimal), and Donoho (donoho-cs), proved that stable super-resolution is possible for any sufficiently sparse signal $f$ if $U$ is an operator with random coefficients. Compressive sensing then becomes possible by recovering a close approximation of $f\in \mathbb{C}^{N}$ from $Q\ll N$ linear measurements (candes-cs-review).
A recovery is stable for a sparse approximation set $|\Lambda|\le M$ only if the corresponding dictionary family $\{Ug_{m}\}_{m\in \Lambda}$ is a Riesz basis of the space it generates. The $M$-restricted isometry condition of Candès, Tao, and Donoho (donoho-cs) imposes uniform Riesz bounds for all sets $\Lambda \subset \Gamma$ with $|\Lambda|\le M$: there exists $0<\delta_{M}<1$ such that
$$(1-\delta_{M})\,\|a\|^{2} \;\le\; \Big\|\sum_{m\in \Lambda} a[m]\, U g_{m}\Big\|^{2} \;\le\; (1+\delta_{M})\,\|a\|^{2} .$$
This is a strong incoherence condition on the $P$ vectors of $\{Ug_{m}\}_{m\in \Gamma}$: it supposes that any subset of fewer than $M$ vectors is nearly uniformly distributed on the unit sphere of $\mathrm{Im}\,U$.
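The uniform Riesz bounds can be probed numerically. The sketch below draws random supports of size $M$ (the condition formally requires all $\binom{N}{M}$ supports, not a random sample) and records the worst squared singular values of the corresponding submatrices of a Gaussian $U$; the dimensions are toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q, M = 128, 64, 5
U = rng.standard_normal((Q, N)) / np.sqrt(Q)   # columns have nearly unit norm

# The squared singular values of each Q x M submatrix should stay within
# [1 - delta_M, 1 + delta_M]; track the worst ones over random supports.
worst_low, worst_high = 1.0, 1.0
for _ in range(200):
    supp = rng.choice(N, size=M, replace=False)
    s = np.linalg.svd(U[:, supp], compute_uv=False)
    worst_low = min(worst_low, s[-1] ** 2)
    worst_high = max(worst_high, s[0] ** 2)

delta_M = max(1.0 - worst_low, worst_high - 1.0)
```

For Gaussian matrices the empirical $\delta_M$ stays well below 1 when $M$ is small relative to $Q$, in agreement with the incoherence interpretation above.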
For an orthogonal basis $\mathcal{D}=\{g_{m}\}_{m\in \Gamma}$, this is possible for $M\le C\,Q\,(\log N)^{-1}$ if $U$ is a matrix with independent Gaussian random coefficients. A pursuit algorithm then provides a stable approximation of any $f\in \mathbb{C}^{N}$ having a sparse approximation from vectors in $\mathcal{D}$.
These results open a new compressive-sensing approach to signal acquisition and representation. Instead of first discretizing the signal linearly at a high resolution $N$ and then computing a nonlinear representation over $M$ coefficients in some dictionary, compressive sensing directly measures $M$ randomized linear coefficients. A signal is then reconstructed by a nonlinear algorithm, producing an error that can be of the same order of magnitude as the error obtained by the more classic two-step approximation process, with a more economical acquisition process. These results remain valid for several types of random matrices $U$. Examples of applications to single-pixel cameras, video super-resolution, new analog-to-digital converters, and MRI imaging are described.
Sparsity in redundant dictionaries also provides efficient strategies to separate a family of signals $\{f_{s}\}_{0\le s<S}$ that are linearly mixed in $K\le S$ observed signals with noise:
$$Y_{k}[n] = \sum_{s=0}^{S-1} u_{k,s}\, f_{s}[n] + W_{k}[n] \quad \text{for} \quad 0\le k<K .$$
Separating the sounds of $S$ musical instruments from a stereo recording is an example of source separation with $K=2$. Most often the mixing matrix $U=\{u_{k,s}\}_{0\le k<K,\,0\le s<S}$ is unknown. Source separation is a super-resolution problem, since $S\,N$ data values must be recovered from $Q=K\,N\le S\,N$ measurements. Not knowing the operator $U$ complicates the problem further.
If each source $f_{s}$ has a sparse approximation support $\Lambda_{s}$ in a dictionary $\mathcal{D}$, with $\sum_{s=0}^{S-1}|\Lambda_{s}|\ll N$, then it is likely that the sets $\{\Lambda_{s}\}_{0\le s<S}$ are nearly disjoint. In this case, the operator $U$, the supports $\Lambda_{s}$, and the sources $f_{s}$ are approximated by computing sparse approximations of the observed data $Y_{k}$ in $\mathcal{D}$. The distribution of these coefficients identifies the coefficients of the mixing matrix $U$ and the nearly disjoint source supports. Time-frequency separation of sounds illustrates these results.
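A toy sketch of this separation principle, under strong simplifying assumptions: two sources sparse in the canonical basis with disjoint supports, a noiseless $2\times 2$ mixture, and an unknown mixing matrix with distinct unit-norm column directions. At each active coefficient the data vector $(Y_0[n], Y_1[n])$ points along one column of $U$, so clustering the angles identifies the supports and mixing directions (up to sign and permutation); all supports, amplitudes, and angles below are invented for illustration.

```python
import numpy as np

N = 256
# Two sources, sparse in the canonical basis, with disjoint supports.
f0 = np.zeros(N); f0[[20, 60, 100]] = [2.0, -1.5, 1.0]
f1 = np.zeros(N); f1[[150, 200]] = [1.8, -2.2]

# Unknown unit-norm mixing directions (angles 0.3 and 1.2 are arbitrary).
U = np.array([[np.cos(0.3), np.cos(1.2)],
              [np.sin(0.3), np.sin(1.2)]])
Y = U @ np.vstack([f0, f1])                    # K = 2 noiseless mixtures

# Each active column Y[:, n] is proportional to one column of U;
# its direction (angle mod pi, to absorb the sign) labels the source.
active = np.flatnonzero(np.linalg.norm(Y, axis=0) > 1e-9)
angles = np.arctan2(Y[1, active], Y[0, active]) % np.pi

# Split the angles at their largest gap into two clusters.
order = np.argsort(angles)
gap = int(np.argmax(np.diff(angles[order])))
labels = np.zeros(len(active), dtype=int)
labels[order[gap + 1:]] = 1

# Estimate each mixing direction and project the data back on it.
est = np.zeros((2, N))
for s in (0, 1):
    idx = active[labels == s]
    u_s = Y[:, idx[0]] / np.linalg.norm(Y[:, idx[0]])
    est[s, idx] = u_s @ Y[:, idx]
```

Real separation works with a sparsifying dictionary such as a time-frequency transform, noisy data, and robust clustering, but the geometric idea is the same: nearly disjoint sparse supports make the mixing directions identifiable from the coefficient distribution.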