
To improve the sparsity of images having edges along regular geometric curves, Candès and Donoho (CandesD:99) introduced curvelet frames, with elongated waveforms having different directions, positions, and scales. Images with piecewise regular edges have representations that are asymptotically sparser by thresholding curvelet coefficients than wavelet coefficients.

Ideal dictionary approximations

In a redundant dictionary D = {φ_p}_{p∈Γ}, we would like to find the best approximation support Λ with M = |Λ| vectors, which minimizes the error ‖f − f_Λ‖². Chapter 12 proves that this is equivalent to finding the Λ_T that minimizes the corresponding approximation Lagrangian

L₀(T, f, Λ) = ‖f − f_Λ‖² + T² |Λ|,

for some multiplier T .
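When the dictionary is an orthonormal basis, the Lagrangian separates over coefficients, and the optimal support keeps exactly the coefficients whose magnitude exceeds T. A minimal numerical sketch of this special case (the coefficient values below are hypothetical, chosen for illustration):

```python
import numpy as np

def lagrangian(coeffs, support, T):
    """L0(T, f, Lambda) when the dictionary is an orthonormal basis:
    squared error of the dropped coefficients plus T^2 per kept vector."""
    coeffs = np.asarray(coeffs, dtype=float)
    kept = np.zeros(len(coeffs), dtype=bool)
    kept[list(support)] = True
    return np.sum(coeffs[~kept] ** 2) + T ** 2 * kept.sum()

def best_support(coeffs, T):
    """Each index p contributes min(c_p^2, T^2) to the Lagrangian, so the
    minimizer keeps exactly the coefficients with |c_p| > T (hard thresholding)."""
    return {p for p, c in enumerate(coeffs) if abs(c) > T}

coeffs = [3.0, -0.5, 1.2, 0.1, -2.0]
T = 1.0
opt = best_support(coeffs, T)  # keeps indices {0, 2, 4}
```

In a redundant dictionary this coefficient-by-coefficient argument no longer applies, which is the source of the combinatorial difficulty discussed below.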

Compression and denoising are two applications of redundant dictionary approximations. When compressing signals by quantizing dictionary coefficients, the distortion rate varies, like the Lagrangian above, with a multiplier T that depends on the quantization step. Optimizing the coder is thus equivalent to minimizing this approximation Lagrangian. For sparse representations, most of the bits are devoted to coding the geometry of the sparse approximation set Λ_T in Γ.

Estimators reducing noise from observations X = f + W are also optimized by finding a best orthogonal projector over a set of dictionary vectors. The model selection theory of Barron, Birgé, and Massart (massart-birge-barron) proves that finding the Λ̃_T that minimizes this same Lagrangian L₀(T, X, Λ) defines an estimator whose risk is of the same order as the minimum approximation error ‖f − f_{Λ_T}‖², up to a logarithmic factor. This is similar to the optimality result obtained for thresholding estimators in an orthonormal basis.

The bad news is that minimizing the approximation Lagrangian L₀ is an NP-hard problem and is therefore computationally intractable. It is therefore necessary to find algorithms that are sufficiently fast to compute suboptimal, but “good enough,” solutions.
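One standard family of such fast, suboptimal algorithms is greedy pursuit; the section does not name a specific method, so as an illustration, a minimal matching-pursuit sketch over a dictionary matrix with unit-norm columns:

```python
import numpy as np

def matching_pursuit(f, D, n_iter):
    """Greedy sparse approximation: at each step, select the dictionary
    column most correlated with the current residual and subtract its
    projection. Fast, but generally suboptimal compared with the NP-hard
    exact minimization of the Lagrangian.
    D is a (signal_dim x dictionary_size) matrix with unit-norm columns."""
    residual = np.asarray(f, dtype=float).copy()
    support, coeffs = [], []
    for _ in range(n_iter):
        corr = D.T @ residual                # correlations with residual
        p = int(np.argmax(np.abs(corr)))     # best-matching atom
        support.append(p)
        coeffs.append(corr[p])
        residual = residual - corr[p] * D[:, p]
    return support, coeffs, residual
```

When D is orthonormal, this greedy selection recovers the exact thresholding solution; redundancy is what makes the greedy path only approximate.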

Dictionaries of orthonormal bases

To reduce the complexity of optimal approximations, the search can be restricted to subfamilies of orthogonal dictionary vectors. In a dictionary of orthonormal bases, any family of orthogonal dictionary vectors can be complemented to form an orthogonal basis B included in D. As a result, the best approximation of f from orthogonal vectors in B is obtained by thresholding the coefficients of f in a “best basis” in D.

For tree dictionaries of orthonormal bases obtained by a recursive split of orthogonal vector spaces, the fast dynamic programming algorithm of Coifman and Wickerhauser (CoifmanMW:92) finds such a best basis with O(P) operations, where P is the dictionary size.
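The dynamic program works bottom-up: a node keeps its own basis if its cost is no larger than the sum of its children's best costs, and splits otherwise. A minimal sketch with a hypothetical tree representation (each node is a pair of coefficient list and child nodes) and an additive Lagrangian-style cost:

```python
def best_basis(node, cost):
    """Coifman-Wickerhauser-style bottom-up best-basis search.
    `node` is (coeffs, children); `cost` is an additive cost functional.
    Each node is visited once, so the search is linear in tree size."""
    coeffs, children = node
    own = cost(coeffs)
    if not children:
        return own, [coeffs]
    child_results = [best_basis(c, cost) for c in children]
    split_cost = sum(r[0] for r in child_results)
    if own <= split_cost:
        return own, [coeffs]           # keep this node's basis whole
    basis = []
    for _, b in child_results:         # otherwise adopt the children's bases
        basis.extend(b)
    return split_cost, basis

# Additive Lagrangian cost: each coefficient contributes min(c^2, T^2).
T = 1.0
cost = lambda c: sum(min(x * x, T * T) for x in c)
```

Additivity of the cost is what makes this local keep-or-split decision globally optimal over the tree.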

Wavelet packet and local cosine bases are examples of tree dictionaries of time-frequency orthonormal bases of size P = N log₂ N. A best basis is a time-frequency tiling that best matches the signal's time-frequency structures.

To approximate geometrically regular edges, wavelets are not as efficient as curvelets, but wavelets provide sparser representations of singularities that are not distributed along geometrically regular curves. Bandlet dictionaries, introduced by Le Pennec, Mallat, and Peyré (bandelets-siam, bandlets-peyre), are dictionaries of orthonormal bases that can adapt to the variability of images' geometric regularity. Minimax optimal asymptotic rates are derived for compression and denoising.





Source:  OpenStax, A wavelet tour of signal processing, the sparse way. OpenStax CNX. Sep 14, 2009 Download for free at http://cnx.org/content/col10711/1.3
