Suppose we are given as signal the projection of a function $f$ onto the space $V_J$:

$$f_J = \sum_k s_{J,k}\,\varphi_{J,k}, \qquad s_{J,k} = \langle f, \tilde\varphi_{J,k}\rangle.$$

Using the dual refinement equations, we have:

$$s_{j,k} = \langle f, \tilde\varphi_{j,k}\rangle = \sum_l \tilde h_{l-2k}\, s_{j+1,l},$$

where the coefficients $s_{j,k}$ are called scaling coefficients, since they are related to scaling functions. Similarly, the wavelet or detail coefficients $d_{j,k}$ are obtained as

$$d_{j,k} = \langle f, \tilde\psi_{j,k}\rangle = \sum_l \tilde g_{l-2k}\, s_{j+1,l}.$$

The coefficients $s_{j,k}$ and $d_{j,k}$ are obtained from the $s_{j+1,l}$ by `moving average' schemes, using the filter coefficients $\tilde h$ and $\tilde g$ as `weights', with the exception that these moving averages are sampled only at the even integers, i.e. a downsampling is performed. Such a transform allows, once we have computed the $s_{J,k}$ for a fine level $J$, to compute the $s_{j,k}$ and $d_{j,k}$ for all coarser levels $j < J$ without evaluating the integrals.
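The filter-and-downsample step described above can be sketched as follows. This is a minimal illustration, not code from the text: the function name is hypothetical, the filters are assumed to be indexed from 0, and periodic boundary handling is assumed for simplicity.

```python
import numpy as np

def analysis_step(s, h_dual, g_dual):
    """One level of the fast wavelet transform (periodic boundaries):
    a moving average of s with the dual low-pass filter h_dual and the
    dual high-pass filter g_dual, evaluated only at even shifts
    (i.e. with downsampling by 2)."""
    n = len(s)
    s_coarse = np.zeros(n // 2)  # scaling coefficients s_{j,k}
    d_detail = np.zeros(n // 2)  # detail coefficients d_{j,k}
    for k in range(n // 2):
        for l, (hl, gl) in enumerate(zip(h_dual, g_dual)):
            s_coarse[k] += hl * s[(2 * k + l) % n]
            d_detail[k] += gl * s[(2 * k + l) % n]
    return s_coarse, d_detail
```

With the Haar filters $\tilde h = (1/\sqrt{2},\, 1/\sqrt{2})$ and $\tilde g = (1/\sqrt{2},\, -1/\sqrt{2})$, this computes pairwise scaled sums and differences of the input.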
Suppose now we are given the values of $f$ at equispaced design points. The scaling functions $\tilde\varphi_{j,k}$ are compactly supported and localized around $2^{-j}k$. Hence the coefficients $s_{j,k}$ are weighted and scaled averages of $f$ on a neighborhood of $2^{-j}k$ which becomes smaller as $j$ tends to infinity. Consequently, it makes sense to replace the integral by the (scaled) value of $f$ at $2^{-j}k$. More complicated quadrature formulae have been developed in [link] , [link] , [link] .
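As a sketch of this replacement (an assumption on my part, not a formula from the text): for $f$ on $[0,1]$ sampled at the dyadic points $2^{-J}k$, the finest-level coefficients are approximated by scaled point values, $s_{J,k} \approx 2^{-J/2} f(2^{-J}k)$. The function name below is hypothetical.

```python
import numpy as np

def scaling_coeffs_from_samples(f, J):
    """Approximate the finest-level scaling coefficients of f on [0, 1]
    by scaled point values: s_{J,k} ~ 2**(-J/2) * f(2**(-J) * k)."""
    k = np.arange(2 ** J)
    return 2 ** (-J / 2) * f(k / 2 ** J)
```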
With $s_j = (s_{j,k})_k$ and $d_j = (d_{j,k})_k$, the forward (or analyzing) wavelet transform given by [link] - [link] can be rewritten as

$$s_j = \tilde H_j^{*}\, s_{j+1}, \qquad d_j = \tilde G_j^{*}\, s_{j+1},$$

where $M^{*}$ denotes the Hermitian conjugate of the matrix $M$.
The inverse (or synthesis) transform is found by using the primal refinement equations and the fact that $V_{j+1} = V_j \oplus W_j$, i.e.

$$f_{j+1} = \sum_k s_{j,k}\,\varphi_{j,k} + \sum_k d_{j,k}\,\psi_{j,k},$$

from which it follows that

$$s_{j+1,l} = \sum_k h_{l-2k}\, s_{j,k} + \sum_k g_{l-2k}\, d_{j,k}.$$

In matrix form, we have

$$s_{j+1} = H_j\, s_j + G_j\, d_j.$$
In the finite and classical setting, the matrices $H_j$, $G_j$, $\tilde H_j$ and $\tilde G_j$ are of size $2^{j+1} \times 2^j$. Moreover, if the basis functions are compactly supported, the four filters ($h$, $g$, $\tilde h$, $\tilde g$) have only a finite number of nonzero elements, and hence all these matrices are banded.
In case of the orthogonal Haar transform, $\tilde H_j = H_j$ and $H_j^{*}$ is of the form

$$H_j^{*} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 & & & \\ & & 1 & 1 & \\ & & & & \ddots \end{pmatrix},$$

since only $h_0$ and $h_1$ are different from zero: $h_0 = h_1 = 1/\sqrt{2}$. The high-pass filter is such that $g_0 = 1/\sqrt{2}$ and $g_1 = -1/\sqrt{2}$. The forward transform [link] - [link] reduces to

$$s_{j,k} = \frac{s_{j+1,2k} + s_{j+1,2k+1}}{\sqrt{2}}, \qquad d_{j,k} = \frac{s_{j+1,2k} - s_{j+1,2k+1}}{\sqrt{2}},$$
and the reconstruction is given by

$$s_{j+1,2k} = \frac{s_{j,k} + d_{j,k}}{\sqrt{2}}, \qquad s_{j+1,2k+1} = \frac{s_{j,k} - d_{j,k}}{\sqrt{2}}.$$
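One Haar analysis/synthesis step can be written directly from these pairwise formulas. A minimal sketch (function names are my own); the synthesis step inverts the analysis step exactly:

```python
import numpy as np

def haar_analysis(s):
    """One Haar analysis step: scaled pairwise sums (coarse scaling
    coefficients) and scaled pairwise differences (detail coefficients)."""
    s = np.asarray(s, dtype=float)
    even, odd = s[0::2], s[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_synthesis(s_coarse, d):
    """Invert one Haar step: recover the even- and odd-indexed samples."""
    out = np.empty(2 * len(s_coarse))
    out[0::2] = (s_coarse + d) / np.sqrt(2)
    out[1::2] = (s_coarse - d) / np.sqrt(2)
    return out
```

Applying `haar_analysis` recursively to its first output yields the full multi-level decomposition; applying `haar_synthesis` level by level reconstructs the original signal.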
The wavelet transform has been successfully applied to compress images, which are modelled as functions defined on a regular two-dimensional grid.
The easiest way to build a two-dimensional MRA is probably to use tensor products of spaces, see [link] , [link] . In terms of wavelet transforms, this leads to applying a one-dimensional transform twice: first on the `rows' of the signal matrix, and second on the `columns' of the two resulting matrices, see [link] . In this figure, we see that, at each level of the decomposition, three types of detail coefficients are produced: $d^{(h)}$, $d^{(v)}$ and $d^{(d)}$. These superscripts recall that, in an image, horizontal edges will lead to large values of $d^{(h)}$, vertical edges will show up in $d^{(v)}$, and $d^{(d)}$ will be sensitive to diagonal lines.
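The rows-then-columns scheme can be sketched for the Haar case as follows. This is my own illustration, not code from the text, and it assumes the common convention that the row-low-pass/column-high-pass sub-band carries the horizontal details $d^{(h)}$:

```python
import numpy as np

def haar_step_1d(s):
    """One 1-D Haar step: scaled pairwise sums and differences."""
    even, odd = s[0::2], s[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_step_2d(x):
    """One level of the separable 2-D Haar transform: transform the rows,
    then the columns of the two resulting matrices, giving four sub-bands:
    approximation plus horizontal/vertical/diagonal details."""
    x = np.asarray(x, dtype=float)
    # First pass: transform each row into low-pass and high-pass halves.
    lo = np.empty((x.shape[0], x.shape[1] // 2))
    hi = np.empty_like(lo)
    for i, row in enumerate(x):
        lo[i], hi[i] = haar_step_1d(row)

    # Second pass: transform the columns of each half.
    def cols(a):
        lo_c = np.empty((a.shape[0] // 2, a.shape[1]))
        hi_c = np.empty_like(lo_c)
        for j in range(a.shape[1]):
            lo_c[:, j], hi_c[:, j] = haar_step_1d(a[:, j])
        return lo_c, hi_c

    approx, d_h = cols(lo)  # d_h: responds to horizontal edges
    d_v, d_d = cols(hi)     # d_v: vertical edges; d_d: diagonal features
    return approx, d_h, d_v, d_d
```

An image consisting of a single horizontal edge produces energy only in the approximation and in $d^{(h)}$, while $d^{(v)}$ and $d^{(d)}$ stay zero.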
However, such a transform is not able to compress efficiently an image that contains curves. More complex bidimensional bases have been proposed in the literature to better model discontinuities along curves, see for example [link] , [link] , [link] , [link] .