
Singular value decompositions

Let $f = \sum_{m \in \Gamma} a[m]\, g_m$ be the representation of $f$ in an orthonormal basis $\mathcal{B} = \{g_m\}_{m \in \Gamma}$. An approximation must be recovered from

$$Y = \sum_{m \in \Gamma} a[m]\, U g_m + W .$$

A basis $\mathcal{B}$ of singular vectors diagonalizes $U^* U$. Then $U$ transforms a subset of $Q$ vectors $\{g_m\}_{m \in \Gamma_Q}$ of $\mathcal{B}$ into an orthogonal basis $\{U g_m\}_{m \in \Gamma_Q}$ of $\mathrm{Im}\,U$ and sets all other vectors to zero. A singular value decomposition estimates the coefficients $a[m]$ of $f$ by projecting $Y$ on this singular basis and by renormalizing the resulting coefficients:

$$\forall\, m \in \Gamma, \quad \tilde a[m] = \frac{\langle Y, U g_m \rangle}{\|U g_m\|^2 + h_m^2},$$

where the $h_m^2$ are regularization parameters.

Such estimators recover nonzero coefficients in a space of dimension $Q$ and thus bring no super-resolution. If $U$ is a convolution operator, then $\mathcal{B}$ is the Fourier basis and a singular value estimation implements a regularized inverse convolution.
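
As an illustration, here is a minimal NumPy sketch of this estimator for a circular convolution operator $U$, which the Fourier basis diagonalizes. The blur kernel, noise level, and constant regularizer $h^2$ below are illustrative assumptions, not values taken from the text.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 256

    # Illustrative test signal and Gaussian blur kernel (hypothetical choices).
    f = np.cumsum(rng.standard_normal(N))
    u = np.exp(-0.5 * ((np.arange(N) - N // 2) / 2.0) ** 2)
    u /= u.sum()

    # Observation Y = U f + W, with U a circular convolution.
    u_hat = np.fft.fft(np.fft.ifftshift(u))
    Y = np.real(np.fft.ifft(np.fft.fft(f) * u_hat)) + 0.05 * rng.standard_normal(N)

    # In the Fourier basis, ||U g_m||^2 = |u_hat[m]|^2, so the estimator
    # a~[m] = <Y, U g_m> / (||U g_m||^2 + h_m^2) is a regularized inverse filter.
    h2 = 1e-3                                  # regularization parameter h_m^2 (constant here)
    a_tilde = np.fft.fft(Y) * np.conj(u_hat) / (np.abs(u_hat) ** 2 + h2)
    F_tilde = np.real(np.fft.ifft(a_tilde))    # regularized deconvolution estimate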

Diagonal thresholding estimation

The basis that diagonalizes $U^* U$ rarely provides a sparse signal representation. For example, a Fourier basis that diagonalizes convolution operators does not efficiently approximate signals that include singularities.

Donoho (Donoho:95) introduced more flexibility by looking for a basis $\mathcal{B}$ providing a sparse signal representation, where a subset of $Q$ vectors $\{g_m\}_{m \in \Gamma_Q}$ is transformed by $U$ into a Riesz basis $\{U g_m\}_{m \in \Gamma_Q}$ of $\mathrm{Im}\,U$, while the others are set to zero. With an appropriate renormalization, $\{\tilde\lambda_m^{-1}\, U g_m\}_{m \in \Gamma_Q}$ has a biorthogonal basis $\{\tilde\varphi_m\}_{m \in \Gamma_Q}$ that is normalized: $\|\tilde\varphi_m\| = 1$. The sparse coefficients of $f$ in $\mathcal{B}$ can then be estimated with a thresholding

$$\forall\, m \in \Gamma_Q, \quad \tilde a[m] = \rho_{T_m}\big(\tilde\lambda_m^{-1} \langle Y, \tilde\varphi_m \rangle\big) \quad\text{with}\quad \rho_T(x) = x\, \mathbf{1}_{|x| > T},$$

for appropriately defined thresholds $T_m$.

For classes of signals that are sparse in $\mathcal{B}$, such thresholding estimators may yield a nearly minimax risk, but they provide no super-resolution since this nonlinear projector remains in a space of dimension $Q$. This result applies to classes of convolution operators $U$ in wavelet or wavelet packet bases. Diagonal inverse estimators are computationally efficient and potentially optimal in cases where super-resolution is not possible.
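
A minimal sketch of the hard-thresholding function $\rho_T$, applied to hypothetical renormalized coefficients; the values and threshold below are placeholders, not quantities derived from an actual operator.

    import numpy as np

    def rho_T(x, T):
        """Hard thresholding: rho_T(x) = x * 1_{|x| > T}."""
        return np.where(np.abs(x) > T, x, 0.0)

    # Placeholder coefficients standing in for lambda~_m^{-1} <Y, phi~_m>.
    coeffs = np.array([2.1, -0.03, 0.8, -1.7, 0.05])
    T = 0.5                    # threshold, typically tied to the noise level
    print(rho_T(coeffs, T))    # -> [ 2.1  0.   0.8 -1.7  0. ]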

Super-resolution and compressive sensing

Suppose that $f$ has a sparse representation in some dictionary $\mathcal{D} = \{g_p\}_{p \in \Gamma}$ of $P$ normalized vectors. The $P$ vectors of the transformed dictionary $\mathcal{D}_U = U \mathcal{D} = \{U g_p\}_{p \in \Gamma}$ belong to the space $\mathrm{Im}\,U$ of dimension $Q < P$ and thus define a redundant dictionary. Vectors in the approximation support $\Lambda$ of $f$ are not restricted a priori to a particular subspace of $\mathbb{C}^N$. Super-resolution is possible if the approximation support $\Lambda$ of $f$ in $\mathcal{D}$ can be estimated by decomposing the noisy data $Y$ over $\mathcal{D}_U$. It depends on the properties of the approximation support $\Lambda$ of $f$ in $\Gamma$.

Geometric conditions for super-resolution

Let $w_\Lambda = f - f_\Lambda$ be the approximation error of a sparse representation $f_\Lambda = \sum_{p \in \Lambda} a[p]\, g_p$ of $f$. The observed signal can be written as

$$Y = U f + W = \sum_{p \in \Lambda} a[p]\, U g_p + U w_\Lambda + W .$$

If the support $\Lambda$ can be identified by finding a sparse approximation of $Y$ in $\mathcal{D}_U$,

$$Y_\Lambda = \sum_{p \in \Lambda} \tilde a[p]\, U g_p ,$$

then we can recover a super-resolution estimation of $f$:

$$\tilde F = \sum_{p \in \Lambda} \tilde a[p]\, g_p .$$

This shows that super-resolution is possible if the approximation support $\Lambda$ can be identified by decomposing $Y$ in the redundant transformed dictionary $\mathcal{D}_U$. If the exact recovery criterion $ERC(\Lambda) < 1$ is satisfied and if $\{U g_p\}_{p \in \Lambda}$ is a Riesz basis, then $\Lambda$ can be recovered using pursuit algorithms with controlled error bounds.
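
The following sketch illustrates this two-step recovery with a small orthogonal matching pursuit: it identifies a support $\Lambda$ by decomposing $Y$ in $\mathcal{D}_U = U \mathcal{D}$, then lifts the estimated coefficients back through $\mathcal{D}$. The random dictionary, operator, and sparsity level are illustrative assumptions.

    import numpy as np

    def omp(D_U, Y, n_atoms):
        """Greedy orthogonal matching pursuit over the columns of D_U."""
        residual, support = Y.copy(), []
        for _ in range(n_atoms):
            correlations = np.abs(D_U.T @ residual)
            correlations[support] = 0.0          # never reselect an atom
            support.append(int(np.argmax(correlations)))
            a, *_ = np.linalg.lstsq(D_U[:, support], Y, rcond=None)
            residual = Y - D_U[:, support] @ a
        return support, a

    rng = np.random.default_rng(1)
    N, P, Q = 64, 128, 32
    D = rng.standard_normal((N, P))
    D /= np.linalg.norm(D, axis=0)               # P normalized dictionary vectors
    U = rng.standard_normal((Q, N)) / np.sqrt(Q) # hypothetical operator with Q < P
    true_support = rng.choice(P, size=4, replace=False)
    a = np.zeros(P)
    a[true_support] = rng.standard_normal(4)
    Y = U @ (D @ a) + 0.01 * rng.standard_normal(Q)

    # Identify Lambda in the transformed dictionary D_U = U D, then lift back.
    support, a_tilde = omp(U @ D, Y, n_atoms=4)
    F_tilde = D[:, support] @ a_tilde            # super-resolution estimate of f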

For most operators $U$, not all sparse approximation sets can be recovered. It is necessary to impose further geometric conditions on $\Lambda$ in $\Gamma$, which makes super-resolution difficult and often unstable. Numerical applications to sparse spike deconvolution, tomography, super-resolution zooming, and inpainting illustrate these results.

Compressive sensing with randomness

Candès and Tao (candes-near-optimal) and Donoho (donoho-cs) proved that stable super-resolution is possible for any sufficiently sparse signal $f$ if $U$ is an operator with random coefficients. Compressive sensing then becomes possible by recovering a close approximation of $f \in \mathbb{C}^N$ from $Q \ll N$ linear measurements (candes-cs-review).

A recovery is stable for a sparse approximation set $|\Lambda| \leq M$ only if the corresponding dictionary family $\{U g_m\}_{m \in \Lambda}$ is a Riesz basis of the space it generates. The $M$-restricted isometry conditions of Candès, Tao, and Donoho (donoho-cs) impose uniform Riesz bounds for all sets $\Lambda \subset \Gamma$ with $|\Lambda| \leq M$:

$$\forall\, c \in \mathbb{C}^{|\Lambda|}, \quad (1 - \delta_M)\, \|c\|^2 \leq \Big\| \sum_{p \in \Lambda} c[p]\, U g_p \Big\|^2 \leq (1 + \delta_M)\, \|c\|^2 .$$

This is a strong incoherence condition on the $P$ vectors of $\{U g_m\}_{m \in \Gamma}$, which supposes that any subset of fewer than $M$ vectors is nearly uniformly distributed on the unit sphere of $\mathrm{Im}\,U$.
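
These uniform Riesz bounds can be probed numerically. The sketch below estimates $\delta_M$ for a Gaussian random matrix from the extreme singular values of random $M$-column submatrices; it is an illustrative experiment over a few random supports, not a certification of the condition, which would require checking all $\binom{N}{M}$ subsets.

    import numpy as np

    rng = np.random.default_rng(2)
    N, Q, M, trials = 512, 128, 8, 200
    U = rng.standard_normal((Q, N)) / np.sqrt(Q)    # columns have unit norm on average

    delta = 0.0
    for _ in range(trials):
        Lam = rng.choice(N, size=M, replace=False)  # a random support of size M
        s = np.linalg.svd(U[:, Lam], compute_uv=False)
        delta = max(delta, abs(s[0] ** 2 - 1.0), abs(s[-1] ** 2 - 1.0))
    print(f"empirical delta_{M} over {trials} random supports: {delta:.3f}")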

For an orthogonal basis $\mathcal{D} = \{g_m\}_{m \in \Gamma}$, this is possible for $M \leq C\, Q\, (\log N)^{-1}$ if $U$ is a matrix with independent Gaussian random coefficients. A pursuit algorithm then provides a stable approximation of any $f \in \mathbb{C}^N$ having a sparse approximation from vectors in $\mathcal{D}$.
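
A minimal compressive-sensing experiment along these lines, using scikit-learn's orthogonal matching pursuit as the recovery algorithm; the dimensions, sparsity, and noise level are arbitrary illustrative choices, and the signal is taken sparse in the canonical basis.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(3)
    N, Q, M = 512, 128, 10
    U = rng.standard_normal((Q, N)) / np.sqrt(Q)   # random Gaussian measurement matrix

    f = np.zeros(N)                                # M-sparse signal (canonical basis)
    f[rng.choice(N, size=M, replace=False)] = rng.standard_normal(M)
    Y = U @ f + 0.01 * rng.standard_normal(Q)      # Q << N noisy measurements

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=M, fit_intercept=False).fit(U, Y)
    print("support recovered:",
          set(np.flatnonzero(omp.coef_)) == set(np.flatnonzero(f)))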

These results open a new compressive-sensing approach to signal acquisition and representation. Instead of first discretizing the signal linearly at a high resolution $N$ and then computing a nonlinear representation over $M$ coefficients in some dictionary, compressive sensing directly measures $M$ randomized linear coefficients. A reconstructed signal is then recovered by a nonlinear algorithm, producing an error that can be of the same order of magnitude as the error obtained by the more classic two-step approximation process, with a more economical acquisition process. These results remain valid for several types of random matrices $U$. Examples of applications to single-pixel cameras, video super-resolution, new analog-to-digital converters, and MRI imaging are described.

Blind source separation

Sparsity in redundant dictionaries also provides efficient strategies to separate a family of signals $\{f_s\}_{0 \leq s < S}$ that are linearly mixed in $K \leq S$ observed signals with noise:

$$Y_k[n] = \sum_{s=0}^{S-1} u_{k,s}\, f_s[n] + W_k[n] \quad\text{for}\quad 0 \leq n < N \ \text{and}\ 0 \leq k < K .$$

From a stereo recording, separating the sounds of $S$ musical instruments is an example of source separation with $K = 2$. Most often the mixing matrix $U = \{u_{k,s}\}_{0 \leq k < K,\, 0 \leq s < S}$ is unknown. Source separation is a super-resolution problem, since $S N$ data values must be recovered from $Q = K N \leq S N$ measurements. Not knowing the operator $U$ makes it even more complicated.

If each source $f_s$ has a sparse approximation support $\Lambda_s$ in a dictionary $\mathcal{D}$, with $\sum_{s=0}^{S-1} |\Lambda_s| \ll N$, then it is likely that the sets $\{\Lambda_s\}_{0 \leq s < S}$ are nearly disjoint. In this case, the operator $U$, the supports $\Lambda_s$, and the sources $f_s$ are approximated by computing sparse approximations of the observed data $Y_k$ in $\mathcal{D}$. The distribution of these coefficients identifies the coefficients of the mixing matrix $U$ and the nearly disjoint source supports. Time-frequency separation of sounds illustrates these results.
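
As a rough illustration of this idea, the sketch below separates a synthetic two-source stereo mixture by clustering short-time Fourier coefficients according to their amplitude ratio across the two channels, in the spirit of time-frequency masking. The sources, mixing matrix, and clustering threshold are all hypothetical choices.

    import numpy as np
    from scipy.signal import stft, istft

    rng = np.random.default_rng(4)
    N, fs = 8192, 8000.0
    t = np.arange(N) / fs
    sources = np.stack([np.sin(2 * np.pi * 440.0 * t),        # S = 2 synthetic sources
                        np.sign(np.sin(2 * np.pi * 587.0 * t))])
    U_mix = np.array([[1.0, 0.6],
                      [0.4, 1.0]])                            # unknown in practice
    Y = U_mix @ sources + 0.01 * rng.standard_normal((2, N))  # K = 2 observed channels

    # Cluster time-frequency points by the amplitude ratio |Y_1|/|Y_0|; it
    # concentrates near a column ratio of U_mix wherever one source dominates.
    _, _, Z0 = stft(Y[0], fs=fs, nperseg=256)
    _, _, Z1 = stft(Y[1], fs=fs, nperseg=256)
    ratio = np.abs(Z1) / (np.abs(Z0) + 1e-12)
    masks = [ratio < 0.7, ratio >= 0.7]        # crude split between 0.4 and ~1.67
    estimates = [istft(Z0 * m, fs=fs, nperseg=256)[1] for m in masks]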





Source: OpenStax, A wavelet tour of signal processing, the sparse way. OpenStax CNX. Sep 14, 2009. Download for free at http://cnx.org/content/col10711/1.3