The Bayesian approach imposes an a priori model for the wavelet coefficients, designed to capture the sparseness of the wavelet expansion common to most applications. A usual prior model for each wavelet coefficient ${\widehat{d}}_{jk}$ is a mixture of two distributions, one associated with negligible coefficients and the other with significant coefficients. Two types of mixtures have been widely used: one employs two normal distributions, while the other uses one normal distribution and a point mass at zero.
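As a minimal sketch (not the chapter's own code), the posterior-mean rule induced by the normal-plus-point-mass mixture can be written in a few lines. The parameter values `sigma`, `tau`, and `gamma` are illustrative assumptions, not values taken from the text:

```python
import math

def bayes_shrink(d_hat, sigma=1.0, tau=3.0, gamma=0.5):
    """Posterior-mean shrinkage of an empirical wavelet coefficient under the
    mixture prior  d ~ gamma * N(0, tau^2) + (1 - gamma) * delta_0,
    with Gaussian noise  d_hat = d + N(0, sigma^2).
    Parameter values are illustrative, not from the text."""
    def npdf(x, var):
        # Normal density with mean 0 and variance var
        return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    # Posterior probability that the coefficient comes from the "significant" component;
    # the marginal of d_hat is N(0, tau^2 + sigma^2) under the slab, N(0, sigma^2) under the spike.
    num = gamma * npdf(d_hat, tau ** 2 + sigma ** 2)
    p = num / (num + (1.0 - gamma) * npdf(d_hat, sigma ** 2))
    # Linear shrinkage applied to the significant component, weighted by p
    return p * (tau ** 2 / (tau ** 2 + sigma ** 2)) * d_hat
```

Small empirical coefficients are shrunk almost to zero (the point mass dominates), while large ones are only mildly shrunk, which is the qualitative behavior the mixture prior is designed to produce.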
After mathematical manipulation, it can be shown that an estimator for the underlying signal can be written as (Equation ):
i.e., the scaling coefficients are estimated by the empirical scaling coefficients, while the wavelet coefficients are estimated by a Bayesian rule (BR) that takes into account the observed empirical wavelet coefficient and the noise level.
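The display equation referenced above did not survive extraction; consistent with the description in the text, the estimator takes the standard wavelet-shrinkage form (a reconstruction, with $\phi $ and $\psi $ denoting the scaling function and mother wavelet):

$\widehat{f}(t)=\sum _{k}{\widehat{c}}_{{j}_{0}k}{\phi }_{{j}_{0}k}(t)+\sum _{j={j}_{0}}^{J-1}\sum _{k}\text{BR}({\widehat{d}}_{jk};\widehat{\sigma }){\psi }_{jk}(t)$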
Huang and Cressie (2000) proposed a method that takes into account the value of the prior mean for each wavelet coefficient, by introducing an estimator for this parameter into the general wavelet shrinkage model. These authors assumed that the underlying signal is composed of a piecewise-deterministic portion with an added zero-mean stochastic part.
If ${\widehat{\mathbf{c}}}_{{j}_{0}}$ is the vector of empirical scaling coefficients, ${\widehat{\mathbf{d}}}_{j}$ the vector of empirical wavelet coefficients, ${\mathbf{c}}_{{j}_{0}}$ the vector of underlying scaling coefficients, and ${\mathbf{d}}_{j}$ the vector of underlying wavelet coefficients, then the Bayesian model can be written as (Equation ):
with $\omega ={({\widehat{\mathbf{c}}}_{{j}_{0}}^{\text{'}},{\widehat{\mathbf{d}}}_{{j}_{0}}^{\text{'}},...,{\widehat{\mathbf{d}}}_{J-1}^{\text{'}})}^{\text{'}}$, and the underlying signal $\beta ={({\mathbf{c}}_{{j}_{0}}^{\text{'}},{\mathbf{d}}_{{j}_{0}}^{\text{'}},...,{\mathbf{d}}_{J-1}^{\text{'}})}^{\text{'}}$ is assumed to follow an a priori distribution (Equation )
where $\mu $ is the deterministic mean structure and $\Sigma \left(\theta \right)$ accounts for the uncertainty and correlation in the underlying signal. Notice that if $\eta $, following a distribution $N(0,\Sigma (\theta ))$, is defined as the stochastic component representing the small-scale (high-frequency) variation in the signal, then $\mu $ can be interpreted as the deterministic component accounting for the large-scale variation in $\beta $. So, it is possible to rewrite $\beta $ as (Equation ),
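Since the display equations referenced above did not survive extraction, a reconstructed sketch of the model, consistent with the definitions in the text, is: $\omega =\beta +\epsilon $ with $\epsilon \sim N(0,{\sigma }^{2}I)$; the prior $\beta \sim N(\mu ,\Sigma (\theta ))$; and equivalently $\beta =\mu +\eta $ with $\eta \sim N(0,\Sigma (\theta ))$.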
Using this model, a shrinkage rule can be established by calculating the mean of $\beta $ conditional on the observed coefficients $\omega $ (for given ${\sigma }^{2}$), which is expressed as (Equation ),
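Under the Gaussian model above, this conditional mean has the standard closed form (a reconstruction via Gaussian conjugacy; the original equation is missing):

$E\left[\beta \mid \omega \right]=\mu +\Sigma (\theta ){\left(\Sigma (\theta )+{\sigma }^{2}I\right)}^{-1}(\omega -\mu )$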
In order to assess the efficiency and accuracy of the proposed methods, a number of simulations have been conducted. To this end, data have been generated according to the following scheme:
where the data $\left\{{x}_{i}\right\}$ are considered equally spaced in the interval $[0,1]$. The signal-to-noise ratio has been taken equal to 3. In these simulations the Symmlet 8 wavelet basis has been used. Given the random nature of $\left\{{\epsilon }_{i}\right\}$, 100 realizations of the function $\left\{{y}_{i}\right\}$ have been produced, in order to apply the comparison criteria to the ensemble average of the realizations. Since the primary goal of the simulations is the comparison of the different denoising methods, the following criteria are introduced:
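As an illustration only, the generation scheme described here can be sketched in Python. The test function (a Doppler-type signal common in the wavelet literature), the sample size, and the SNR definition (sd of signal over sd of noise) are assumptions, since the text specifies only SNR = 3 and 100 realizations; the Symmlet 8 wavelet step itself is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def doppler(x):
    # A common test signal in wavelet-shrinkage studies;
    # the text does not name the function used, so this is illustrative.
    return np.sqrt(x * (1 - x)) * np.sin(2 * np.pi * 1.05 / (x + 0.05))

n = 1024                        # dyadic length, equally spaced on [0, 1]
x = np.arange(n) / n
f = doppler(x)

snr = 3.0                       # signal-to-noise ratio used in the text
sigma = f.std() / snr           # one common definition: sd(signal) / sd(noise) = SNR
realizations = np.array([f + sigma * rng.standard_normal(n) for _ in range(100)])

# Ensemble average over the 100 realizations, as described in the text;
# averaging reduces the noise variance by a factor of 100.
ensemble_avg = realizations.mean(axis=0)
mse = np.mean((ensemble_avg - f) ** 2)
```

A denoising method would then be applied to each realization (or to the ensemble average), and the comparison criteria evaluated against the known underlying signal `f`.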