This module provides an overview of the application of Bayesian methods to compressive sensing and sparse recovery.

Setup

Throughout this course, we have almost exclusively worked within a deterministic signal framework; that is, our signal x is fixed and belongs to a known set of signals. In this section, we depart from that framework and assume that the sparse (or compressible) signal of interest arises from a known probability distribution, i.e., we place sparsity-promoting priors on the elements of x and recover from the measurements y = Φ x a probability distribution on each nonzero element of x. Such an approach falls under the purview of Bayesian methods for sparse recovery.

The algorithms discussed in this section represent a departure from the conventional sparse recovery techniques typically used in compressive sensing (CS). We note that none of these algorithms are accompanied by guarantees on the number of measurements required or on the fidelity of signal reconstruction; indeed, in a Bayesian signal modeling framework there is no well-defined notion of “reconstruction error”. However, such methods do provide insight into developing recovery algorithms for rich classes of signals, and may be of considerable practical interest.

Sparse recovery via belief propagation

As we will see later in this course, there are significant parallels between error correcting codes and sparse recovery [link]. In particular, sparse codes such as LDPC codes have enjoyed great success. The advantages of sparse coding matrices, namely efficient encoding of signals and low-complexity decoding algorithms, carry over to CS encoding and decoding when sparse sensing matrices Φ are used. The sparsity of the Φ matrix plays the same role as the sparsity of LDPC coding graphs.

Figure: Factor graph depicting the relationship between the variables involved in CS decoding using BP. Variable nodes are black and constraint nodes are white.

A sensing matrix Φ that defines the relation between the signal x and the measurements y can be represented as a bipartite graph of signal coefficient nodes x(i) and measurement nodes y(i) [link], [link]. The factor graph in [link] represents the relationship between the signal coefficients and measurements in the CS decoding problem.
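As a rough illustration of this idea, the following sketch constructs a sparse, LDPC-like sensing matrix and reads off the bipartite graph it defines. The dimensions and the per-column degree are illustrative assumptions, not values from the text.

```python
# Minimal sketch: build a sparse sensing matrix and view it as a bipartite
# graph in which coefficient node x(i) connects to measurement node y(j)
# wherever Phi[j, i] != 0.  N, M, and L are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, M, L = 16, 8, 3          # signal length, number of measurements, nonzeros per column

Phi = np.zeros((M, N))
for i in range(N):
    rows = rng.choice(M, size=L, replace=False)   # L measurement nodes per coefficient
    Phi[rows, i] = rng.choice([-1.0, 1.0], size=L)

# Edges of the factor graph: (coefficient node i, measurement node j).
edges = [(i, j) for j in range(M) for i in range(N) if Phi[j, i] != 0]
print(f"{len(edges)} edges; each coefficient node has degree {L}")
```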

The choice of signal probability density is of practical interest. In many applications, the signals of interest need to be modeled as compressible (as opposed to strictly sparse). This behavior can be modeled by a two-state Gaussian mixture distribution, with each signal coefficient taking either a “large” or a “small” value. Assuming that the elements of x are i.i.d., the mixture is chosen so that small coefficients occur far more frequently than large coefficients. Other distributions besides the two-state Gaussian mixture may also be used to model the coefficients, e.g., an i.i.d. Laplace prior on the coefficients of x.
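A minimal sketch of drawing a compressible signal from such a two-state Gaussian mixture is shown below; the mixing weight and the two variances are illustrative assumptions.

```python
# Minimal sketch: sample an i.i.d. compressible signal from a two-state
# Gaussian mixture, where the "small" state is far more probable than the
# "large" one.  All numerical values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 1000
p_large = 0.05                    # large coefficients occur rarely
sigma_large, sigma_small = 10.0, 0.1

state = rng.random(N) < p_large                  # True -> "large" state
x = np.where(state,
             rng.normal(0.0, sigma_large, N),    # large-variance component
             rng.normal(0.0, sigma_small, N))    # small-variance component
print("fraction of large-state coefficients:", state.mean())
```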

The ultimate goal is to estimate (i.e., decode) x given y and Φ. The decoding problem takes the form of a Bayesian inference problem in which we want to approximate the marginal distribution of each coefficient x(i) conditioned on the observed measurements y. We can then compute the maximum likelihood (ML) or maximum a posteriori (MAP) estimates of the coefficients from these distributions. This sort of inference can be carried out by a variety of methods; for example, the popular belief propagation (BP) method [link] can be applied to solve for the coefficients approximately. Although exact inference in arbitrary graphical models is an NP-hard problem, inference using BP becomes tractable when Φ is sparse enough, i.e., when most of the entries in the matrix are equal to zero.
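To make the last step concrete, the sketch below shows how point estimates follow once an inference routine such as BP has produced approximate per-coefficient marginals; the value grid and the marginals here are placeholders standing in for the output of such a routine.

```python
# Minimal sketch: given approximate marginals for each coefficient over a
# discretized value grid, the MAP estimate is the per-coefficient argmax and
# the posterior mean gives an MMSE-style estimate.  The marginals below are
# random placeholders, not the output of an actual BP run.
import numpy as np

grid = np.linspace(-5, 5, 101)                        # discretized coefficient values
marginals = np.random.default_rng(2).random((4, grid.size))
marginals /= marginals.sum(axis=1, keepdims=True)     # normalize each marginal

x_map = grid[np.argmax(marginals, axis=1)]            # MAP estimate per coefficient
x_mean = marginals @ grid                             # posterior-mean estimate
print(x_map, x_mean)
```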

Sparse Bayesian learning

Another probabilistic approach to estimating the components of x uses Relevance Vector Machines (RVMs). An RVM is essentially a Bayesian learning method that produces sparse classification by linearly weighting a small number of fixed basis functions from a large dictionary of potential candidates (for more details the interested reader may refer to [link], [link]). From the CS perspective, we may view this as a method of determining the elements of a sparse x which linearly weight the basis functions comprising the columns of Φ.

The RVM setup employs a hierarchy of priors: first, a Gaussian prior is assigned to each of the N elements of x; subsequently, a Gamma prior is assigned to the inverse variance (precision) α_i of the i-th Gaussian prior. Each α_i therefore controls the strength of the prior on its associated weight x_i. If x is the sparse vector to be reconstructed, its associated Gaussian prior is given by:

p(\mathbf{x} \mid \boldsymbol{\alpha}) = \prod_{i=1}^{N} \mathcal{N}(x_i \mid 0, \alpha_i^{-1})

and the Gamma prior on α is written as:

p(\boldsymbol{\alpha} \mid a, b) = \prod_{i=1}^{N} \Gamma(\alpha_i \mid a, b)

The overall prior on x can be evaluated analytically and turns out to be a Student-t distribution, which can be designed to peak sharply at x_i = 0 with an appropriate choice of a and b. This encourages the desired solution x to be sparse. The RVM approach can be visualized using a graphical model similar to the one in "Sparse recovery via belief propagation". Using the observed measurements y, the posterior density of each x_i is estimated by an iterative algorithm (e.g., Markov chain Monte Carlo (MCMC) methods). For a detailed analysis of the RVM with a measurement noise prior, refer to [link], [link].
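The sparsity-promoting effect of this hierarchy can be seen by simulation: drawing each precision from the Gamma prior and then each coefficient from the corresponding Gaussian yields heavy-tailed, zero-peaked samples. A minimal sketch follows; the values of a and b are illustrative assumptions.

```python
# Minimal sketch of the RVM prior hierarchy: alpha_i ~ Gamma(a, b) (rate
# parameterization), then x_i ~ N(0, 1/alpha_i).  Marginally over alpha this
# gives a Student-t-like prior on x_i: sharply peaked at zero with heavy tails.
import numpy as np

rng = np.random.default_rng(3)
N, a, b = 10000, 0.5, 1e-2                          # a, b are illustrative choices

alpha = rng.gamma(shape=a, scale=1.0 / b, size=N)   # Gamma prior on the precisions
x = rng.normal(0.0, 1.0 / np.sqrt(alpha))           # Gaussian prior given alpha

# Most draws cluster near zero, with occasional very large values.
print("median |x_i|:", np.median(np.abs(x)), " max |x_i|:", np.abs(x).max())
```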

Alternatively, we can eliminate the need to set the hyperparameters a and b as follows. Assuming Gaussian measurement noise with mean 0 and variance σ², we can directly form the marginal log-likelihood for α and maximize it using the EM algorithm (or by direct differentiation) to obtain estimates of α:

\mathcal{L}(\boldsymbol{\alpha}) = \log p(\mathbf{y} \mid \boldsymbol{\alpha}, \sigma^2) = \log \int p(\mathbf{y} \mid \mathbf{x}, \sigma^2)\, p(\mathbf{x} \mid \boldsymbol{\alpha})\, d\mathbf{x}.
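A minimal sketch of this type-II maximum likelihood procedure is given below, alternating between the Gaussian posterior over x (given α and σ²) and the standard fixed-point updates for α and σ². Φ and y are assumed given; the iteration count and initialization are illustrative assumptions.

```python
# Minimal sketch of type-II ML for the RVM: iterate between the Gaussian
# posterior over x and fixed-point updates for alpha and sigma^2 that
# (locally) maximize the marginal likelihood L(alpha).
import numpy as np

def rvm_type2_ml(Phi, y, n_iter=100):
    M, N = Phi.shape
    alpha = np.ones(N)               # precisions of the Gaussian priors on x_i
    sigma2 = 0.1 * np.var(y)         # initial guess for the noise variance
    for _ in range(n_iter):
        # Posterior over x given alpha and sigma^2 is Gaussian:
        Sigma = np.linalg.inv(np.diag(alpha) + Phi.T @ Phi / sigma2)
        mu = Sigma @ Phi.T @ y / sigma2
        # gamma_i measures how well determined x_i is by the data.
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = gamma / (mu**2 + 1e-12)                       # update for alpha
        sigma2 = np.sum((y - Phi @ mu)**2) / max(M - gamma.sum(), 1e-12)
    return mu, Sigma, alpha, sigma2
```

Coefficients whose α_i grow very large are effectively pruned, since their priors force them toward zero.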

Bayesian compressive sensing

Unfortunately, evaluating the log-likelihood in the original RVM setup involves inverting an N × N matrix, which makes the algorithm's complexity O(N³). A fast alternative algorithm for the RVM is available which monotonically maximizes the marginal likelihood of the priors by gradient ascent, resulting in an algorithm with complexity O(NM²). Here, basis functions are sequentially added and deleted, thus building the model up constructively, and the true sparsity of the signal x is exploited to minimize model complexity. This approach is known as Fast Marginal Likelihood Maximization, and is employed by the Bayesian Compressive Sensing (BCS) algorithm [link] to efficiently evaluate the posterior densities of the x_i.
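The sketch below illustrates only the per-basis decision rule at the heart of this sequential scheme, in which a "sparsity factor" s_i and a "quality factor" q_i determine whether a column is added, re-estimated, or deleted; the efficient bookkeeping needed to compute s_i and q_i from the current model is omitted here, and the function names are hypothetical.

```python
# Minimal sketch of the add/re-estimate/delete rule used in fast marginal
# likelihood maximization: given sparsity factor s[i] and quality factor q[i]
# for column i, the optimal alpha_i is finite only when q[i]^2 > s[i].
import numpy as np

def update_basis(i, s, q, alpha, in_model):
    """Decide the fate of column i given its sparsity/quality factors."""
    theta = q[i]**2 - s[i]
    if theta > 0:
        alpha[i] = s[i]**2 / theta          # add or re-estimate with the optimal alpha_i
        in_model[i] = True
    elif in_model[i]:
        alpha[i] = np.inf                   # prune: the prior forces x_i to zero
        in_model[i] = False
    return alpha, in_model
```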

A key advantage of the BCS algorithm is that it enables evaluation of “error bars” on each estimated coefficient of x ; these give us an idea of the (in)accuracies of these estimates. These error bars could be used to adaptively select the linear projections (i.e., the rows of the matrix Φ ) to reduce uncertainty in the signal. This provides an intriguing connection between CS and machine learning techniques such as experimental design and active learning  [link] , [link] .
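As a rough sketch of these two ideas, with a Gaussian posterior N(μ, Σ) on x the error bars are the posterior standard deviations, and one simple adaptive strategy discussed in the BCS literature is to choose the next projection along the direction of largest posterior uncertainty. The inputs below are assumed to come from a routine such as the RVM sketch above.

```python
# Minimal sketch: error bars from the posterior covariance, and a candidate
# next projection chosen as the direction of largest posterior variance
# (the top eigenvector of Sigma).
import numpy as np

def error_bars_and_next_row(Sigma):
    error_bars = np.sqrt(np.diag(Sigma))      # per-coefficient uncertainty
    eigvals, eigvecs = np.linalg.eigh(Sigma)  # eigenvalues in ascending order
    next_row = eigvecs[:, -1]                 # direction of largest variance
    return error_bars, next_row
```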





Source:  OpenStax, An introduction to compressive sensing. OpenStax CNX. Apr 02, 2011 Download for free at http://legacy.cnx.org/content/col11133/1.5
