This module provides an overview of the application of Bayesian methods to compressive sensing and sparse recovery.

Setup

Throughout this course, we have almost exclusively worked within a deterministic signal framework. In other words, our signal x is fixed and belongs to a known set of signals. In this section, we depart from this framework and assume that the sparse (or compressible) signal of interest arises from a known probability distribution, i.e., we assume sparsity-promoting priors on the elements of x, and recover from the stochastic measurements y = Φx a probability distribution on each nonzero element of x. Such an approach falls under the purview of Bayesian methods for sparse recovery.

The algorithms discussed in this section mark a departure from the conventional sparse recovery techniques typically used in compressive sensing (CS). We note that none of these algorithms is accompanied by guarantees on the number of measurements required or on the fidelity of signal reconstruction; indeed, in a Bayesian signal modeling framework, there is no well-defined notion of “reconstruction error”. However, such methods do provide insight into developing recovery algorithms for rich classes of signals, and they may be of considerable practical interest.

Sparse recovery via belief propagation

As we will see later in this course, there are significant parallels to be drawn between error correcting codes and sparse recovery [link]. In particular, sparse codes such as LDPC codes have been highly successful. The advantages that sparse coding matrices offer, namely efficient encoding of signals and low-complexity decoding algorithms, carry over to CS encoding and decoding when a sparse sensing matrix Φ is used. The sparsity of the Φ matrix is analogous to the sparsity of LDPC coding graphs.

Factor graph depicting the relationship between the variables involved in CS decoding using BP. Variable nodes are black and the constraint nodes are white.

A sensing matrix Φ that defines the relation between the signal x and the measurements y can be represented as a bipartite graph of signal coefficient nodes x(i) and measurement nodes y(i) [link], [link]. The factor graph in [link] represents the relationship between the signal coefficients and the measurements in the CS decoding problem.
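As a concrete illustration, the sketch below (Python with NumPy; the sizes N and M and the column degree d are illustrative choices) builds a sparse Φ with a fixed number of ±1 entries per column and reads the bipartite graph directly off its sparsity pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 256, 64   # signal length and number of measurements (illustrative)
d = 4            # nonzero entries per column of Phi (column degree)

# Sparse sensing matrix: each signal node x(i) connects to d randomly
# chosen measurement nodes y(j) through a +/-1 entry.
Phi = np.zeros((M, N))
for i in range(N):
    rows = rng.choice(M, size=d, replace=False)
    Phi[rows, i] = rng.choice([-1.0, 1.0], size=d)

# The bipartite (factor) graph is just the sparsity pattern of Phi:
# each measurement node is connected to the signal nodes in its row support.
neighbors_of_y = [np.flatnonzero(Phi[j]) for j in range(M)]
neighbors_of_x = [np.flatnonzero(Phi[:, i]) for i in range(N)]
```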

The choice of signal probability density is of practical interest. In many applications, the signals of interest need to be modeled as compressible (as opposed to strictly sparse). This behavior is modeled by a two-state Gaussian mixture distribution, with each signal coefficient taking either a “large” or a “small” coefficient value state. Assuming that the elements of x are i.i.d., it can be shown that small coefficients occur more frequently than large coefficients. Other distributions besides the two-state Gaussian mixture may also be used to model the coefficients, e.g., an i.i.d. Laplace prior on the coefficients of x.
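For instance, a compressible signal of this kind can be drawn from the two-state mixture as in the following sketch (Python/NumPy; the state probability and the two standard deviations are illustrative, not values prescribed by the references).

```python
import numpy as np

rng = np.random.default_rng(1)

N = 256
p_large = 0.05                         # probability of the "large" state (illustrative)
sigma_large, sigma_small = 1.0, 0.01   # standard deviations of the two states

# Each coefficient independently picks a state and then draws a zero-mean
# Gaussian value with that state's variance; since the "small" state is far
# more likely, the resulting signal is compressible rather than strictly sparse.
state = rng.random(N) < p_large
x = np.where(state,
             rng.normal(0.0, sigma_large, N),
             rng.normal(0.0, sigma_small, N))
```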

The ultimate goal is to estimate (i.e., decode) x, given y and Φ. The decoding problem takes the form of a Bayesian inference problem in which we want to approximate the marginal distribution of each coefficient x(i) conditioned on the observed measurements y. We can then form the maximum likelihood (ML) or maximum a posteriori (MAP) estimates of the coefficients from these distributions. This sort of inference can be carried out using a variety of methods; for example, the popular belief propagation (BP) method [link] can be applied to solve for the coefficients approximately. Although exact inference in arbitrary graphical models is an NP-hard problem, inference using BP can be employed when Φ is sparse enough, i.e., when most of the entries in the matrix are equal to zero.
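Once an inference routine such as BP has produced an approximate marginal for each coefficient, point estimates follow immediately. The sketch below (Python/NumPy) assumes the marginals are tabulated on a common grid of candidate values; the randomly generated table merely stands in for the output of an actual BP implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder marginals: in practice these would be produced by BP run on the
# factor graph of Phi.  Each row is an approximate marginal for one x(i).
grid = np.linspace(-3.0, 3.0, 601)                 # candidate values for x(i)
marginals = rng.random((256, grid.size))
marginals /= marginals.sum(axis=1, keepdims=True)  # normalize each marginal

x_map = grid[np.argmax(marginals, axis=1)]         # per-coefficient MAP estimate
x_mean = marginals @ grid                          # per-coefficient posterior mean
```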

Sparse Bayesian learning

Another probabilistic approach to estimating the components of x uses the Relevance Vector Machine (RVM). An RVM is essentially a Bayesian learning method that produces sparse solutions by linearly weighting a small number of fixed basis functions from a large dictionary of potential candidates (for more details the interested reader may refer to [link], [link]). From the CS perspective, we may view this as a method for determining the elements of a sparse x that linearly weight the basis functions comprising the columns of Φ.

The RVM setup employs a hierarchy of priors: first, a Gaussian prior is assigned to each of the N elements of x; subsequently, a Gamma prior is assigned to the inverse variance α_i of the i-th Gaussian prior. Therefore, each α_i controls the strength of the prior on its associated weight x_i. If x is the sparse vector to be reconstructed, its associated Gaussian prior is given by:

p(x \mid \alpha) = \prod_{i=1}^{N} \mathcal{N}(x_i \mid 0, \alpha_i^{-1})

and the Gamma prior on α is written as:

p(\alpha \mid a, b) = \prod_{i=1}^{N} \Gamma(\alpha_i \mid a, b)

The overall prior on x can be evaluated analytically and is a Student-t distribution, which can be designed to peak sharply at x_i = 0 with an appropriate choice of a and b. This enables the desired solution x to be sparse. The RVM approach can be visualized using a graphical model similar to the one in "Sparse recovery via belief propagation". Using the observed measurements y, the posterior density of each x_i is estimated by an iterative algorithm (e.g., Markov chain Monte Carlo (MCMC) methods). For a detailed analysis of the RVM with a measurement noise prior, refer to [link], [link].
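The hierarchy is straightforward to simulate: draw each inverse variance α_i from the Gamma prior and then draw x_i from the corresponding Gaussian, as in the sketch below (Python/NumPy; the values of a and b are illustrative, and b is treated as a rate parameter). The samples are noticeably heavier-tailed and more sharply peaked at zero than a Gaussian, consistent with the Student-t marginal.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 100_000
a, b = 3.0, 1.0   # illustrative Gamma hyperparameters (b is a rate)

# alpha_i ~ Gamma(a, rate b) on the inverse variance; NumPy's gamma takes a
# scale parameter, so rate b corresponds to scale 1/b.
alpha = rng.gamma(shape=a, scale=1.0 / b, size=N)
x = rng.normal(0.0, 1.0 / np.sqrt(alpha))

# Heavier tails than a Gaussian: this ratio is 3 for Gaussian samples.
print("fourth-moment ratio:", np.mean(x**4) / np.mean(x**2) ** 2)
```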

Alternatively, we can eliminate the need to set the hyperparameters a and b as follows. Assuming Gaussian measurement noise with mean zero and variance σ², we can form the marginal log-likelihood for α directly and maximize it with the EM algorithm (or by direct differentiation) to obtain estimates of α:

L(\alpha) = \log p(y \mid \alpha, \sigma^2) = \log \int p(y \mid x, \sigma^2) \, p(x \mid \alpha) \, dx.
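Because both factors in the integrand are Gaussian, the integral can be evaluated in closed form: y is zero-mean Gaussian with covariance C = σ²I + Φ A⁻¹ Φᵀ, where A = diag(α). A minimal NumPy sketch of this evaluation (the function name is ours) is:

```python
import numpy as np

def marginal_log_likelihood(Phi, y, alpha, sigma2):
    """log p(y | alpha, sigma^2) after integrating out x.

    With x ~ N(0, diag(1/alpha)) and y = Phi x + noise of variance sigma2,
    y is zero-mean Gaussian with covariance C = sigma2*I + Phi A^{-1} Phi^T.
    """
    M = Phi.shape[0]
    C = sigma2 * np.eye(M) + (Phi / alpha) @ Phi.T   # Phi A^{-1} Phi^T via broadcasting
    _, logdet = np.linalg.slogdet(C)
    quad = y @ np.linalg.solve(C, y)
    return -0.5 * (M * np.log(2.0 * np.pi) + logdet + quad)
```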

Bayesian compressive sensing

Unfortunately, evaluating the log-likelihood in the original RVM setup involves inverting an N × N matrix, which makes the algorithm's complexity O(N³). A fast alternative algorithm for the RVM is available that monotonically maximizes the marginal likelihoods of the priors by gradient ascent, resulting in an algorithm with complexity O(NM²). Here, basis functions are sequentially added and deleted, building the model up constructively, and the true sparsity of the signal x is exploited to minimize model complexity. This procedure, known as Fast Marginal Likelihood Maximization, is employed by the Bayesian Compressive Sensing (BCS) algorithm [link] to efficiently evaluate the posterior densities of the x_i.
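To make the baseline concrete, the sketch below (Python/NumPy) implements the basic O(N³) re-estimation loop of the original RVM, alternating between the Gaussian posterior over x and the standard update α_i ← γ_i/μ_i² with γ_i = 1 - α_i Σ_ii. It is not the fast sequential add/delete algorithm used by BCS, and the initialization, iteration count, and pruning cap are illustrative choices.

```python
import numpy as np

def rvm_estimate(Phi, y, sigma2, n_iter=200, alpha_init=1.0, alpha_cap=1e6):
    """Baseline RVM re-estimation (type-II maximum likelihood), O(N^3) per pass."""
    M, N = Phi.shape
    alpha = np.full(N, alpha_init)
    for _ in range(n_iter):
        # Gaussian posterior over x for the current alpha.
        Sigma = np.linalg.inv(np.diag(alpha) + Phi.T @ Phi / sigma2)
        mu = Sigma @ Phi.T @ y / sigma2
        # Standard re-estimation of the inverse variances.
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = gamma / np.maximum(mu**2, 1e-12)
        # Capping alpha stands in for pruning basis functions whose weights
        # the model has driven to (effectively) zero.
        alpha = np.minimum(alpha, alpha_cap)
    return mu, Sigma, alpha
```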

A key advantage of the BCS algorithm is that it enables evaluation of “error bars” on each estimated coefficient of x; these give us an idea of the (in)accuracies of the estimates. The error bars can be used to adaptively select the linear projections (i.e., the rows of the matrix Φ) so as to reduce the uncertainty in the signal. This provides an intriguing connection between CS and machine learning techniques such as experimental design and active learning [link], [link].
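Given the Gaussian posterior from the sketch above, the error bars are simply the posterior standard deviations, i.e., the square roots of the diagonal entries of Σ. One natural heuristic for adaptive selection, sketched below under the same assumptions (the function name is ours), is to take the next row of Φ along the eigenvector of Σ with the largest eigenvalue, the direction in which the reconstruction is currently most uncertain.

```python
import numpy as np

def error_bars_and_next_row(Sigma):
    """Posterior error bars and a candidate next projection from Sigma."""
    err = np.sqrt(np.diag(Sigma))          # per-coefficient error bars
    eigvals, eigvecs = np.linalg.eigh(Sigma)
    next_row = eigvecs[:, -1]              # eigh sorts eigenvalues ascending
    return err, next_row
```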

Source:  OpenStax, An introduction to compressive sensing. OpenStax CNX. Apr 02, 2011 Download for free at http://legacy.cnx.org/content/col11133/1.5