
In the case where the class of signals of interest corresponds to a low-dimensional subspace, a truncated, simplified sparse approximation can be applied as a detection algorithm; this approach has been dubbed IDEA [link] . In simple terms, the algorithm marks a detection when a large enough fraction of the energy in the measurements lies in the projected subspace. Since this problem requires only a decision about whether the signal belongs to the subspace of interest, rather than an accurate estimate of the signal values, the number of measurements necessary is much smaller than that required for reconstruction, as shown in [link] .
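As a rough illustration, the following Python sketch mimics this idea with a few greedy (orthogonal matching pursuit) iterations against the compressed dictionary: it declares a detection when the selected atoms capture a large enough fraction of the measurement energy. The dictionary Phi_Psi, the iteration count, and the 0.9 energy threshold are illustrative assumptions, not values from the IDEA paper.

```python
import numpy as np

def idea_detect(y, Phi_Psi, num_iters=5, energy_frac=0.9):
    """Toy IDEA-style detector: run a few greedy (OMP) iterations and
    declare a detection if the selected atoms capture enough of the
    measurement energy. All parameters here are illustrative."""
    residual = y.copy()
    support = []
    for _ in range(num_iters):
        # Pick the dictionary column most correlated with the residual.
        idx = int(np.argmax(np.abs(Phi_Psi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the current support, then update the residual.
        A = Phi_Psi[:, support]
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coeffs
    captured = 1.0 - np.linalg.norm(residual) ** 2 / np.linalg.norm(y) ** 2
    return captured >= energy_frac
```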

Performance for IDEA. (Top) Sample wideband chirp signal and the same chirp embedded in strong narrowband interference. (Bottom) Probability of error to reconstruct and detect chirp signals embedded in strong sinusoidal interference (SIR = -6 dB) using greedy algorithms. In this case, detection requires 3× fewer measurements and 4× fewer computations than reconstruction for an equivalent probability of success. Taken from [link] .

Classification

Similarly, random projections have long been used for a variety of classification and clustering problems. The Johnson-Lindenstrauss Lemma is often exploited in this setting to compute approximate nearest neighbors, a problem naturally related to classification. The key result that random projections yield a near-isometric embedding allows this work to be generalized to several new classification algorithms and settings [link] .
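A minimal sketch of this pipeline, assuming a Gaussian projection matrix scaled so that squared lengths are preserved in expectation (the embedding behind the JL Lemma), pairs the projection with a simple 1-nearest-neighbor rule; the function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_project(X, M):
    """Embed the N-dimensional rows of X into M dimensions with a
    Gaussian matrix; by the JL Lemma, pairwise distances are
    approximately preserved for M = O(log(#points))."""
    N = X.shape[1]
    Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
    return X @ Phi.T, Phi

def nn_classify(train_proj, labels, test_proj):
    """1-nearest-neighbor classification in the projected domain."""
    labels = np.asarray(labels)
    d = np.linalg.norm(test_proj[:, None, :] - train_proj[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]
```

Note that training and test data must be projected with the same Φ, which is why the sketch returns the matrix alongside the projected data.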

Classification can also be performed when more elaborate models are used for the different classes. Suppose the signal/image class of interest can be modeled as a low-dimensional manifold in the ambient space. In such a case it can be shown that, even under random projections, certain geometric properties of the signal class are preserved up to a small distortion; for example, interpoint Euclidean (ℓ2) distances are preserved [link] . This enables the design of classification algorithms in the projected domain. One such algorithm is known as the smashed filter [link] . As an example, under equal prior probabilities among classes and a Gaussian noise setting, the smashed filter is equivalent to building a nearest-neighbor (NN) classifier in the measurement domain. Further, it has been shown that for a K-dimensional manifold, M = O(K log N) measurements are sufficient to perform reliable compressive classification. Thus, the number of measurements scales with the dimension of the signal class, as opposed to the sparsity of the individual signal. Some example results are shown in [link] (a).
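A toy version of this measurement-domain NN rule is sketched below; manifold_samples is a hypothetical mapping from class labels to arrays of signals sampled from each class's manifold, and the classifier simply picks the class whose projected sample lies closest to the measurements y.

```python
import numpy as np

def smashed_filter(y, Phi, manifold_samples):
    """Toy smashed-filter classifier: choose the class whose sampled
    manifold point, after projection by Phi, is nearest to y. This is
    the NN rule described above; the interface is illustrative."""
    best_label, best_dist = None, np.inf
    for label, X in manifold_samples.items():
        # X has shape (num_samples, N); compare all projected samples to y.
        d = np.linalg.norm(Phi @ X.T - y[:, None], axis=0)
        if d.min() < best_dist:
            best_label, best_dist = label, float(d.min())
    return best_label
```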

Results for smashed filter image classification and parameter estimation experiments. (a) Classification rates and (b) average estimation error for varying numbers of measurements M and noise levels σ for a set of images of several objects under varying shifts. As M increases, the distances between the projected manifolds increase as well, improving the noise tolerance; accordingly, classification and estimation performance improve as σ decreases and M increases in all cases. Taken from [link] .

Estimation

Consider a signal x ∈ ℝ^N, and suppose that we wish to estimate some function f(x) but only observe the measurements y = Φx, where Φ is again an M × N matrix. The data streaming community has previously analyzed this problem for many common functions, such as linear functions, ℓp norms, and histograms. These estimates are often based on so-called sketches, which can be thought of as random projections.
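As a concrete instance (using a dense Gaussian sketch rather than the hashing-based sketches common in the streaming literature), the ℓ2 norm of a long vector can be estimated from a much shorter random projection; the sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# With i.i.d. N(0, 1/M) entries, E[||Phi x||^2] = ||x||^2, so the
# norm of the sketch estimates the norm of the full vector.
N, M = 10_000, 200
x = rng.standard_normal(N)
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
y = Phi @ x                       # the sketch: M numbers instead of N
print(np.linalg.norm(x))          # true l2 norm
print(np.linalg.norm(y))          # estimate from the sketch
```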

As an example, in the case where f is a linear function, one can show that the estimation error (relative to the norms of x and f) can be bounded by a constant determined by M. This result holds for a wide class of random matrices, and can be viewed as a straightforward consequence of the same concentration of measure inequality that has proven useful for CS and in proving the JL Lemma [link] .
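To make this concrete under the same Gaussian assumption as above: if Φ has i.i.d. N(0, 1/M) entries, then ⟨Φf, Φx⟩ is an unbiased estimate of the linear functional ⟨f, x⟩, and its deviation shrinks as M grows.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 10_000, 400
x = rng.standard_normal(N)
f = rng.standard_normal(N)
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

# E[<Phi f, Phi x>] = <f, x>; the error concentrates around zero
# at a rate governed by M (the concentration-of-measure inequality).
estimate = (Phi @ f) @ (Phi @ x)
print(f @ x, estimate)
```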

Parameter estimation can also be performed when the signal class is modeled as a low-dimensional manifold. Suppose an observed signal x can be parameterized by a K-dimensional parameter vector θ, where K ≪ N. Then it can be shown that with M = O(K log N) measurements, the parameter vector can be obtained via multiscale manifold navigation in the compressed domain [link] . Some example results are shown in [link] (b).
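Multiscale manifold navigation is beyond a short example, but the underlying principle, searching the parameter space for the θ whose modeled signal best explains the compressive measurements, can be illustrated with a crude grid search; signal_model and the tone example are hypothetical stand-ins, not the algorithm of [link].

```python
import numpy as np

def estimate_parameter(y, Phi, signal_model, theta_grid):
    """Pick the candidate theta whose modeled signal, projected by Phi,
    best matches the measurements y (a brute-force stand-in for the
    multiscale navigation described above)."""
    errs = [np.linalg.norm(y - Phi @ signal_model(t)) for t in theta_grid]
    return theta_grid[int(np.argmin(errs))]

# Example: estimate a tone's frequency from M = 50 random projections.
rng = np.random.default_rng(3)
N, M = 512, 50
t = np.arange(N) / N
model = lambda freq: np.sin(2 * np.pi * freq * t)
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
theta_hat = estimate_parameter(Phi @ model(37.0), Phi, model,
                               np.arange(1.0, 100.0))
print(theta_hat)  # expected: 37.0 in this noise-free toy setting
```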





Source:  OpenStax, An introduction to compressive sensing. OpenStax CNX. Apr 02, 2011 Download for free at http://legacy.cnx.org/content/col11133/1.5
