
and so $E[\|x - \hat{x}\|_2^2]$, the mean square error, is double the MMSE [link].

Let us pause to reflect on this result. When the SNR is high, i.e., $\|x\|_2^2 \gg \|z\|_2^2$, the MMSE should be rather low, and double the MMSE seems pretty good. On the other hand, when the SNR is low, the MMSE could be almost as large as $E[\|x\|_2^2]$, and double the MMSE could then exceed $E[\|x\|_2^2]$ by as much as a factor of two. That is, guessing $\hat{x} = E[x]$ could give better signal estimation performance than using the Kolmogorov sampler. This pessimistic result encourages us to search for better signal reconstruction methods.
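The factor-of-two relationship can be checked numerically for a toy scalar prior. The sketch below is not from the module; the $\pm 1$ prior, noise level, and variable names are illustrative assumptions. It compares the MSE of the posterior mean (the MMSE estimator) with that of a single draw from the posterior, which plays the role of the Kolmogorov sampler's output.

```python
# Toy check that an estimate drawn from the posterior incurs about twice the
# MMSE.  Assumed model: x is +/-1 with equal probability, y = x + z,
# z ~ N(0, sigma^2); none of this comes from the module itself.
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 1_000_000, 1.0

x = rng.choice([-1.0, 1.0], size=n)
y = x + sigma * rng.standard_normal(n)

# Posterior P(x = +1 | y) is a logistic function of y for this prior.
p_plus = 1.0 / (1.0 + np.exp(-2.0 * y / sigma**2))

x_mmse = 2.0 * p_plus - 1.0                              # posterior mean E[x | y]
x_samp = np.where(rng.random(n) < p_plus, 1.0, -1.0)     # one posterior sample

mmse = np.mean((x - x_mmse) ** 2)
mse_sampler = np.mean((x - x_samp) ** 2)
print(mmse, mse_sampler, mse_sampler / mmse)             # ratio close to 2
```

The ratio is close to 2 because, conditioned on $y$, the true signal and the posterior sample are independent draws from the same posterior, so their expected squared distance is twice the posterior variance.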

Arbitrary channels: So far we have considered the Kolmogorov sampler for the white scalar channel, $y = x + z$. Suppose instead that $x$ is processed or measured by a more complicated system,

$$y = J(x) + z.$$

Note that $J$ is known; e.g., in a compressed sensing application [link], [link], $J$ would be a known matrix. An even more involved system would be $y = z(J(x))$, where $J(x)$ is the application of a mapping to $x$, $J(x) \in \mathbb{R}^L$, and $z(\cdot)$ denotes the application of a random noise operator to $J(x)$. To keep the presentation simple, we use the additive noise setting [link].
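As a concrete illustration of the additive noise setting, the short sketch below builds a known nonlinear $J$ and generates measurements $y = J(x) + z$; the particular mapping, dimensions, and noise level are assumptions made for illustration, not part of the module.

```python
# Minimal sketch of the measurement model y = J(x) + z with a known,
# possibly nonlinear mapping J: R^n -> R^L.  The choice of J here (a random
# projection followed by a saturation) is purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, L, sigma = 64, 32, 0.1

A = rng.standard_normal((L, n)) / np.sqrt(L)

def J(x):
    """Known system mapping an n-dimensional signal to L outputs."""
    return np.tanh(A @ x)

x = rng.standard_normal(n)          # signal to be estimated
z = sigma * rng.standard_normal(L)  # additive noise
y = J(x) + z                        # observed measurements
```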

How can the Kolmogorov sampler [link] be applied to the additive noise setting? Recall that for the scalar channel, the Kolmogorov sampler minimizes $K(w)$ subject to $\|y - w\|_2^2 \le n$. For the arbitrary mapping $J$ with additive noise [link], this implies $\|y - J(w)\|_2^2 \le n$. Therefore, we get

$$\hat{x} = \arg\min_{w:\, \|y - J(w)\|_2^2 \le n} K(w).$$

Another similar approach relies on optimization via Lagrange multipliers,

$$\hat{x} = \arg\min_w \left[ K(w) - \log_2\big(f_z(y - J(w))\big) \right],$$

where the Lagrange multiplier is 1, because both $K(w)$ and $-\log_2(f_z(y - J(w)))$ are quantified in bits.
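To make the shape of this Lagrangian objective concrete, here is a rough sketch. Kolmogorov complexity is not computable, so a general-purpose compressor's output length is used below as a crude stand-in for $K(w)$; the quantization step, the choice of compressor, and the Gaussian noise model are all assumptions for illustration and not the module's algorithm.

```python
# Sketch of the objective K(w) - log2 f_z(y - J(w)), with both terms in bits.
# K(w) is approximated by the length of a compressed, quantized copy of w,
# which is only a heuristic proxy for Kolmogorov complexity.
import zlib
import numpy as np

def complexity_bits(w, step=0.1):
    """Crude proxy for K(w): bits used by zlib on a quantized copy of w."""
    q = np.round(np.asarray(w) / step).astype(np.int32)
    return 8 * len(zlib.compress(q.tobytes()))

def neg_log2_fz(residual, sigma):
    """-log2 f_z(r) for i.i.d. zero-mean Gaussian noise of standard deviation sigma."""
    r = np.asarray(residual)
    nats = np.sum(r**2) / (2 * sigma**2) + 0.5 * r.size * np.log(2 * np.pi * sigma**2)
    return nats / np.log(2)

def objective_bits(w, y, J, sigma):
    """Total codelength-style objective K(w) - log2 f_z(y - J(w)), in bits."""
    return complexity_bits(w) + neg_log2_fz(y - J(w), sigma)
```

A search procedure, such as the MCMC algorithm discussed below, would then look for the $w$ minimizing this total codelength; the constrained form in the previous equation could be handled similarly by restricting the search to $w$ with $\|y - J(w)\|_2^2 \le n$.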

What is the performance of the Kolmogorov sampler for an arbitrary $J$? We speculate [link], [link] that $\hat{x}$ is generated by the posterior, and so $E[\|x - \hat{x}\|_2^2]$ is double the MMSE, where the expectation is taken over the source $x$ and the noise $z$. These results remain to be shown rigorously.

Convergence of the MCMC algorithm

We will now prove a substantial result: the MCMC algorithm [link], [link], [link] converges to the globally minimal energy solution for the specific case of compressed sensing [link], [link]. An extension of this proof to arbitrary channels $J$ is in progress.

If the operator $J$ in [link] is a matrix, and we denote it by $\Phi \in \mathbb{R}^{m \times n}$ where $m < n$, then the setup is known as compressed sensing (CS) [link], [link], and the estimation problem is commonly referred to as recovery or reconstruction. By posing a sparsity or compressibility requirement on the signal and using it as a prior during recovery, it is indeed possible to accurately estimate $x$ from $y$ in the CS setting.
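For concreteness, a minimal CS measurement setup might look as follows; the dimensions, sparsity level, and noise level are illustrative assumptions rather than values from the module.

```python
# Compressed sensing setup: a k-sparse signal x in R^n observed through a
# known matrix Phi in R^{m x n} with m < n, plus additive Gaussian noise.
import numpy as np

rng = np.random.default_rng(2)
n, m, k, sigma = 200, 80, 10, 0.05   # ambient dim, measurements, nonzeros, noise std

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)              # k-sparse signal

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # known measurement matrix
y = Phi @ x + sigma * rng.standard_normal(m)     # noisy measurements
```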

With the quantization alphabet definition in [link], $\hat{\alpha}$ will quantize $x$ to a greater resolution as $N$ increases. We will show that, under suitable conditions on $f_X$, performing maximum a posteriori (MAP) estimation over the discrete alphabet $\hat{\alpha}$ asymptotically converges to the MAP estimate over the continuous distribution $f_X$. This reduces the complexity of the estimation problem from continuous to discrete.

We assume for exposition that we know the input statistics $f_X$. Given the measurements $y$, the MAP estimator for $x$ has the form

$$x_{\mathrm{MAP}} \triangleq \arg\max_w f_X(w)\, f_{Y|X}(y \mid w).$$

Because $z$ is i.i.d. Gaussian with mean zero and known variance $\sigma_Z^2$, the likelihood takes the form
$$f_{Y|X}(y \mid w) = (2\pi\sigma_Z^2)^{-m/2} \exp\!\left(-\frac{\|y - \Phi w\|_2^2}{2\sigma_Z^2}\right).$$
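Taking negative logarithms, the MAP search over the discrete alphabet reduces to minimizing an energy that balances the prior codelength against a quadratic data-fit term. The sketch below assumes an i.i.d. prior over the finite alphabet $\hat{\alpha}$ and drops additive constants; the function and variable names are hypothetical, and exhaustive minimization is exponential in the signal length, which is precisely what motivates the MCMC approach discussed in this section.

```python
# Negative log MAP objective (energy) for the CS model, with an i.i.d. prior
# over a finite quantization alphabet.  `alphabet` and `prior` are NumPy
# arrays holding the reproduction levels and their probabilities; w_idx gives
# each entry of the candidate w as an index into `alphabet`.  Constants dropped.
import numpy as np

def map_energy(w_idx, y, Phi, sigma_z, alphabet, prior):
    """-log [ f_X(w) * f_{Y|X}(y|w) ] up to additive constants."""
    w = alphabet[w_idx]
    prior_term = -np.sum(np.log(prior[w_idx]))             # -log f_X(w)
    residual = y - Phi @ w
    data_term = np.sum(residual**2) / (2 * sigma_z**2)     # from the Gaussian noise
    return prior_term + data_term
```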

Source:  OpenStax, Universal algorithms in signal processing and communications. OpenStax CNX. May 16, 2013 Download for free at http://cnx.org/content/col11524/1.1