The redundant wavelet transform (RWT) is widely used to denoise signals and images. Here, we consider two denoising methods from the literature and apply them to astronomical images, with the aim of obtaining images in which very faint objects can be distinguished from noise.
The paper is organized as follows. In "Redundant Wavelet Transform", we introduce a few algorithms used to compute the RWT. In "Denoising Algorithms based on the RWT", we discuss some denoising methods based on the RWT. In "Denoising Simulation", we describe the simulation and the results from the implemented methods, which are further discussed in "Conclusions".
The redundant discrete wavelet transform, similar in nature to the discrete wavelet transform, decomposes data into low-pass scaling (trend) and high-pass wavelet (detail) coefficients to obtain a projective decomposition of the data into different scales. More specifically, at each level the transform uses the scaling coefficients to compute the next level of scaling and wavelet coefficients. The difference lies in the fact that none of the latter are discarded through decimation as in the discrete wavelet transform; instead all coefficients are retained, introducing a redundancy. This transform is well suited to denoising images, as the noise is usually spread over a small number of neighboring pixels. The Rice Wavelet Toolbox used to compute the transform in the simulation implements the redundant wavelet transform through the undecimated algorithm, which, as its name suggests, is similar to the discrete wavelet transform but omits downsampling (decimation) in computation of the transform and upsampling in computation of the inverse transform.
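To make the idea concrete, here is a minimal sketch of one level of the undecimated transform for a 1-D signal, using the Haar filter pair for illustration (the filter choice and function name are assumptions, not part of the original toolbox): the signal is convolved with both filters, but every output sample is kept, so the approximation and detail bands each have the same length as the input.

```python
import numpy as np

def undecimated_level(x):
    """One level of the undecimated (redundant) wavelet transform.

    Convolves the signal with a low-pass filter h and a high-pass
    filter g but, unlike the decimated DWT, keeps every sample
    (no downsampling). Haar filters are used here for illustration.
    """
    h = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass (scaling) filter
    g = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass (wavelet) filter
    n = len(x)
    # periodic (circular) convolution keeps output the same length as x
    approx = np.array([sum(h[k] * x[(i - k) % n] for k in range(2))
                       for i in range(n)])
    detail = np.array([sum(g[k] * x[(i - k) % n] for k in range(2))
                       for i in range(n)])
    return approx, detail

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = undecimated_level(x)
print(len(a), len(d))  # both 8: no coefficients are discarded
```

For the Haar pair the inverse of this single level is simply `(a + d) / sqrt(2)`, which recovers `x` exactly; deeper levels would feed `a` back into the same routine.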
Another method of computing the redundant wavelet transform, the à trous algorithm, differs from the undecimated algorithm by modifying the low-pass and high-pass filters at each consecutive level. The algorithm up-samples the low-pass filter at each level by inserting zeros between each of the filter's coefficients. The high-pass coefficients are then computed as the difference between the low-pass images from the two consecutive levels. To compute the inverse transform, the detail coefficients from all levels are added to the final low-resolution image. While inefficient in implementation, the à trous algorithm provides additional insight into the redundant discrete wavelet transform.
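The steps above can be sketched in a few lines for a 1-D signal. This is an illustrative assumption-laden sketch, not the toolbox implementation: the B3-spline kernel [1, 4, 6, 4, 1]/16 is a common choice for the low-pass filter in the à trous literature, the filter taps are spaced 2^j samples apart at level j (the "holes"), and each detail band is the difference of consecutive smooths, so reconstruction is just the sum of all details plus the final smooth.

```python
import numpy as np

def atrous_decompose(x, levels=3):
    """A trous redundant wavelet decomposition of a 1-D signal.

    At level j the low-pass filter taps are spaced 2**j samples apart
    (zeros inserted between coefficients); the detail coefficients are
    the difference between consecutive smoothed versions of the signal.
    The B3-spline kernel below is an assumed, conventional choice.
    """
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    c = x.astype(float)
    details = []
    for j in range(levels):
        step = 2 ** j
        n = len(c)
        c_next = np.zeros(n)
        for i in range(n):
            # filter "with holes": taps step samples apart, periodic boundary
            c_next[i] = sum(h[k] * c[(i + (k - 2) * step) % n]
                            for k in range(5))
        details.append(c - c_next)  # high-pass = difference of smooths
        c = c_next
    return details, c

x = np.sin(np.linspace(0, 2 * np.pi, 32))
details, smooth = atrous_decompose(x)
# inverse transform: add all detail levels back onto the final smooth image
recon = smooth + sum(details)
print(np.allclose(recon, x))  # True: the differences telescope exactly
```

The exact reconstruction is built in: the detail bands telescope, so summing them with the coarsest smooth recovers the original signal, which is why the inverse described above is so simple.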