
Nonlinear smoothing with wavelets

Hard-, soft-thresholding and wavelet estimator

Given the regression model [link], we can decompose the empirical detail coefficient $\hat{d}_{jk}$ in [link] as

$$\hat{d}_{jk} = \frac{1}{n}\sum_{i=1}^{n} m(x_i)\,\psi_{jk}(x_i) + \frac{1}{n}\sum_{i=1}^{n} \epsilon_i\,\psi_{jk}(x_i) = d_{jk} + \rho_{jk}.$$

If the function $m(x)$ admits a sparse wavelet representation, only a small number of detail coefficients $d_{jk}$ contribute to the signal and are non-negligible. However, every empirical coefficient $\hat{d}_{jk}$ carries a non-zero contribution from the noise part $\rho_{jk}$.

Note the link between the coefficients $d_{jk}$ in [link] and the theoretical coefficients $d_{jk}$ in [link]:

$$d_{jk} = \frac{1}{n}\sum_{i=1}^{n} m(x_i)\,\psi_{jk}(x_i) = \int m(x)\,\psi_{jk}(x)\,dx + O\!\left(\frac{1}{n}\right) = d_{jk} + O\!\left(\frac{1}{n}\right).$$

In words, $d_{jk}$ constitutes a first-order approximation (using the trapezoidal rule) of the integral $d_{jk}$. For the scaling coefficients $s_{jk}$, it can be proved [link] that the order of accuracy of the trapezoidal rule is equal to $N-1$, where $N$ is the order of the MRA associated with the scaling function.
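This approximation can be checked numerically. The sketch below assumes the Haar wavelet (so that $\psi_{jk}$ has a simple closed form) and a toy test function $m(x) = \sin(2\pi x)$; both are illustrative choices, not prescribed by the text.

```python
import numpy as np

def haar_psi(x, j, k):
    # Haar wavelet psi_{jk}(x) = 2^{j/2} * psi(2^j x - k), where
    # psi = +1 on [0, 1/2), -1 on [1/2, 1), and 0 elsewhere.
    u = 2.0 ** j * x - k
    return 2.0 ** (j / 2) * (((0.0 <= u) & (u < 0.5)).astype(float)
                             - ((0.5 <= u) & (u < 1.0)).astype(float))

m = lambda x: np.sin(2 * np.pi * x)   # toy test function on [0, 1]
j, k = 2, 1                           # an arbitrary detail coefficient;
                                      # psi_{2,1} is supported on [1/4, 1/2)

# Exact d_jk = int m(x) psi_jk(x) dx, via the antiderivative of sin(2*pi*x)
F = lambda x: -np.cos(2 * np.pi * x) / (2 * np.pi)
d_exact = 2.0 ** (j / 2) * ((F(0.375) - F(0.25)) - (F(0.5) - F(0.375)))

# Discrete approximation (1/n) sum_i m(x_i) psi_jk(x_i) on n design points
errors = {}
for n in (64, 1024):
    xi = (np.arange(n) + 0.5) / n     # equispaced design on [0, 1]
    d_n = np.mean(m(xi) * haar_psi(xi, j, k))
    errors[n] = abs(d_n - d_exact)
print(errors)                         # the error shrinks as n grows
```

The discrete sum converges to the integral as the number of design points increases, consistent with the $O(1/n)$ statement above.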

Suppose the noise level is not too high, so that the signal can be distinguished from the noise. Then, by the sparsity of the wavelet representation, only the largest detail coefficients should be included in the wavelet estimator. Hence, when estimating an unknown function, it makes sense to include only those coefficients that are larger than some specified threshold value $t$:

$$\eta_H(\hat{d}_{jk}, t) = \hat{d}_{jk}\,\mathbf{1}\{|\hat{d}_{jk}| > t\}.$$

This `keep-or-kill' operation is called hard thresholding, see [link] (a).

Now, since each empirical coefficient consists of both a signal part and a noise part, it may be desirable to shrink even the coefficients that are larger than the threshold:

d ^ j k t : = η S ( d ^ j k , t ) = sign ( d ^ j k ) ( | d ^ j k | - t ) + .

Since the function $\eta_S$ is continuous in its first argument, this procedure is called soft thresholding. More complex thresholding schemes have been proposed in the literature [link], [link], [link]. They often appear as a compromise between soft and hard thresholding; see [link] (b) for an example.
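Both rules are one-liners in practice. A minimal sketch in Python/NumPy (the function names are illustrative, not from the text):

```python
import numpy as np

def hard_threshold(d, t):
    # keep-or-kill: zero out every coefficient with |d| <= t
    return d * (np.abs(d) > t)

def soft_threshold(d, t):
    # shrink toward zero: sign(d) * (|d| - t)_+
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

d_hat = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
print(hard_threshold(d_hat, 1.0))   # values -3, 0, 0, 1.5, 4
print(soft_threshold(d_hat, 1.0))   # values -2, 0, 0, 0.5, 3
```

Note that the soft rule is continuous at $|d| = t$ (it returns exactly zero there), whereas the hard rule jumps from $0$ to $t$, which is what the naming in the text reflects.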

In (a), hard thresholding is shown as a solid line: a coefficient $\hat{d}_{jk}$ with absolute value below $t$ is set equal to zero. Soft thresholding is shown as a dashed line: coefficients with absolute value above the threshold $t$ are shrunk by an amount equal to $t$. In (b), a more complex thresholding procedure, the SCAD threshold devised by Antoniadis and Fan [link], is represented.
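As an example of such a compromise rule, a sketch of the SCAD threshold is given below, using the piecewise form from Antoniadis and Fan with the conventional shape parameter $a = 3.7$; the function name and the test values are illustrative.

```python
import numpy as np

def scad_threshold(d, t, a=3.7):
    # SCAD thresholding: soft-thresholds small coefficients,
    # leaves large ones untouched, and interpolates linearly in
    # between, so the rule stays continuous in d.
    d = np.asarray(d, dtype=float)
    small = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)    # |d| <= 2t
    mid = ((a - 1) * d - np.sign(d) * a * t) / (a - 2)     # 2t < |d| <= a*t
    return np.where(np.abs(d) <= 2 * t,
                    small,
                    np.where(np.abs(d) <= a * t, mid, d))

d_hat = np.array([0.5, 1.5, 3.0, 6.0])
print(scad_threshold(d_hat, t=1.0))
```

The three branches agree at their boundaries ($|d| = 2t$ and $|d| = at$), which is what makes the rule a continuous compromise between soft and hard thresholding.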

For a given threshold value $t$ and a thresholding scheme $\eta(\cdot)$, the nonlinear wavelet estimator is given by

$$\hat{m}(x) = \sum_{k} \hat{s}_{j_0 k}\,\phi_{j_0 k}(x) + \sum_{j \geq j_0}\sum_{k} \eta(\hat{d}_{jk}, t)\,\psi_{jk}(x),$$

where $j_0$ denotes the primary resolution level. It indicates the level above which the detail coefficients are manipulated.

Let now $\hat{\mathbf{d}}_j = \{\hat{d}_{jk},\ k = 0, \ldots, 2^j - 1\}$ denote the vector of empirical detail coefficients at level $j$, and define $\hat{\mathbf{s}}_j$ similarly. In practice, a nonlinear wavelet estimator is obtained in three steps.

  1. Apply the analyzing (forward) wavelet transform to the observations $\{Y_i\}_{i=1}^n$, yielding $\hat{\mathbf{s}}_{j_0}$ and $\hat{\mathbf{d}}_j$ for $j = j_0, \ldots, J-1$.
  2. Manipulate the detail coefficients above the level $j_0$, e.g. by soft-thresholding them.
  3. Invert the wavelet transform, producing an estimate of $m$ at the design points: $\{\hat{m}(x_i)\}_{i=1}^n$.

If necessary, a continuous estimator $\hat{m}$ can then be constructed by an appropriate interpolation of the estimated values $\hat{m}(x_i)$ [link].
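The three steps above can be sketched end to end. The following is an illustrative implementation only: it assumes the Haar wavelet (whose transform takes a few lines), an equispaced design with $n = 2^J$, soft thresholding, and the universal threshold $t = \hat\sigma\sqrt{2\log n}$ with $\hat\sigma$ estimated from the finest-level details; none of these specific choices, nor the helper names, are prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_analysis(y, j0):
    """Step 1: forward orthonormal Haar transform of y (length 2^J),
    returning the coarse coefficients at level j0 and the details
    for levels J-1 down to j0 (finest level first)."""
    s, details = y.astype(float), []
    while len(s) > 2 ** j0:
        d = (s[0::2] - s[1::2]) / np.sqrt(2)
        s = (s[0::2] + s[1::2]) / np.sqrt(2)
        details.append(d)
    return s, details

def haar_synthesis(s, details):
    """Step 3: invert the transform, coarsest level first."""
    for d in reversed(details):
        out = np.empty(2 * len(s))
        out[0::2] = (s + d) / np.sqrt(2)
        out[1::2] = (s - d) / np.sqrt(2)
        s = out
    return s

def soft(d, t):
    # soft thresholding: sign(d) * (|d| - t)_+
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

# Noisy observations Y_i = m(x_i) + eps_i on an equispaced design
n = 1024
x = np.arange(n) / n
m_true = np.where(x < 0.5, np.sin(4 * np.pi * x), -1.0)  # piecewise signal
y = m_true + 0.3 * rng.standard_normal(n)

# Steps 1-3: analyze, soft-threshold the details above j0, synthesize
j0 = 4
s, details = haar_analysis(y, j0)
sigma_hat = np.median(np.abs(details[0])) / 0.6745       # noise scale (MAD)
t = sigma_hat * np.sqrt(2 * np.log(n))                   # universal threshold
m_hat = haar_synthesis(s, [soft(d, t) for d in details]) # Step 2 applied inline

print(np.mean((m_hat - m_true) ** 2) < np.mean((y - m_true) ** 2))
```

Without the thresholding step the synthesis reproduces the data exactly (the transform is orthonormal); with it, the mean squared error of $\hat{m}$ at the design points drops below that of the raw observations.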





Source:  OpenStax, An introduction to wavelet analysis. OpenStax CNX. Sep 14, 2009 Download for free at http://cnx.org/content/col10566/1.3
