This module provides a brief review of some of the key concepts in vector spaces that will be required in developing the theory of compressive sensing.

For much of its history, signal processing has focused on signals produced by physical systems. Many natural and man-made systems can be modeled as linear. Thus, it is natural to consider signal models that complement this kind of linear structure. This notion has been incorporated into modern signal processing by modeling signals as vectors living in an appropriate vector space. This captures the linear structure that we often desire, namely that if we add two signals together then we obtain a new, physically meaningful signal. Moreover, vector spaces allow us to apply intuitions and tools from geometry in $\mathbb{R}^3$, such as lengths, distances, and angles, to describe and compare signals of interest. This is useful even when our signals live in high-dimensional or infinite-dimensional spaces.

Throughout this course, we will treat signals as real-valued functions having domains that are either continuous or discrete, and either infinite or finite. These assumptions will be made clear as necessary in each chapter. We will assume that the reader is relatively comfortable with the key concepts in vector spaces, so we provide only a brief review of those required in developing the theory of compressive sensing (CS). For a more thorough review of vector spaces, see this introductory course in Digital Signal Processing.

We will typically be concerned with normed vector spaces, i.e., vector spaces endowed with a norm. In the case of a discrete, finite domain, we can view our signals as vectors in an $N$-dimensional Euclidean space, denoted by $\mathbb{R}^N$. When dealing with vectors in $\mathbb{R}^N$, we will make frequent use of the $\ell_p$ norms, which are defined for $p \in [1, \infty]$ as

$$\|x\|_p = \begin{cases} \left( \sum_{i=1}^{N} |x_i|^p \right)^{\frac{1}{p}}, & p \in [1, \infty); \\ \max\limits_{i = 1, 2, \dots, N} |x_i|, & p = \infty. \end{cases}$$
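As a concrete check of these definitions, here is a minimal NumPy sketch (the vector `x` and the helper `lp_norm` are our own illustrative choices, not part of the original module):

```python
import numpy as np

def lp_norm(x, p):
    """l_p norm of x for finite p in [1, inf); use max(|x_i|) for p = inf."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

x = np.array([3.0, -4.0])
print(lp_norm(x, 1))      # 7.0  (l_1 norm)
print(lp_norm(x, 2))      # 5.0  (l_2 norm)
print(np.abs(x).max())    # 4.0  (l_inf norm)

# The same values via NumPy's built-in norm:
print(np.linalg.norm(x, 1), np.linalg.norm(x, 2), np.linalg.norm(x, np.inf))
```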

In Euclidean space we can also consider the standard inner product in $\mathbb{R}^N$, which we denote

$$\langle x, z \rangle = z^T x = \sum_{i=1}^{N} x_i z_i.$$

This inner product leads to the $\ell_2$ norm: $\|x\|_2 = \sqrt{\langle x, x \rangle}$.
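For example (a small sketch with made-up vectors), the inner product and the $\ell_2$ norm it induces can be verified numerically:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
z = np.array([3.0, 0.0, -1.0])

print(np.dot(z, x))            # <x, z> = sum_i x_i z_i = 1.0
print(np.sqrt(np.dot(x, x)))   # ||x||_2 = sqrt(<x, x>) = 3.0
print(np.linalg.norm(x, 2))    # agrees: 3.0
```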

In some contexts it is useful to extend the notion of $\ell_p$ norms to the case where $p < 1$. In this case, the "norm" defined in [link] fails to satisfy the triangle inequality, so it is actually a quasinorm. We will also make frequent use of the notation $\|x\|_0 := |\mathrm{supp}(x)|$, where $\mathrm{supp}(x) = \{ i : x_i \neq 0 \}$ denotes the support of $x$ and $|\mathrm{supp}(x)|$ denotes the cardinality of $\mathrm{supp}(x)$. Note that $\|\cdot\|_0$ is not even a quasinorm, but one can easily show that

$$\|x\|_0 = \lim_{p \to 0} \|x\|_p^p = |\mathrm{supp}(x)|,$$

justifying this choice of notation. The $\ell_p$ (quasi-)norms have notably different properties for different values of $p$. To illustrate this, in [link] we show the unit sphere, i.e., $\{ x : \|x\|_p = 1 \}$, induced by each of these norms in $\mathbb{R}^2$. Note that for $p < 1$ the corresponding unit sphere is nonconvex (reflecting the quasinorm's violation of the triangle inequality).
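The limit above is easy to verify numerically. The sketch below (using an arbitrary example vector of our own) shows $\|x\|_p^p$ approaching $|\mathrm{supp}(x)|$ as $p \to 0$:

```python
import numpy as np

x = np.array([0.0, 2.0, 0.0, -0.5, 1.0])
print(np.count_nonzero(x))            # ||x||_0 = |supp(x)| = 3

# ||x||_p^p = sum_i |x_i|^p -> |supp(x)| as p -> 0, since |x_i|^p -> 1
# for each nonzero x_i while the zero entries contribute nothing.
for p in [1.0, 0.5, 0.1, 0.01]:
    print(p, np.sum(np.abs(x) ** p))  # 3.5, 3.12..., 3.004..., 3.0000...
```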

Figure: Unit spheres in $\mathbb{R}^2$ for the $\ell_p$ norms with $p = 1, 2, \infty$, and for the $\ell_p$ quasinorm with $p = \frac{1}{2}$.

We typically use norms as a measure of the strength of a signal, or the size of an error. For example, suppose we are given a signal $x \in \mathbb{R}^2$ and wish to approximate it using a point in a one-dimensional affine space $A$. If we measure the approximation error using an $\ell_p$ norm, then our task is to find the $\hat{x} \in A$ that minimizes $\|x - \hat{x}\|_p$. The choice of $p$ will have a significant effect on the properties of the resulting approximation error. An example is illustrated in [link]. To compute the closest point in $A$ to $x$ using each $\ell_p$ norm, we can imagine growing an $\ell_p$ sphere centered on $x$ until it intersects with $A$. This will be the point $\hat{x} \in A$ that is closest to $x$ in the corresponding $\ell_p$ norm. We observe that larger $p$ tends to spread out the error more evenly between the two coefficients, while smaller $p$ leads to an error that is more unevenly distributed and tends to be sparse. This intuition generalizes to higher dimensions and plays an important role in the development of CS theory.

Figure: Best approximation of a point in $\mathbb{R}^2$ by a one-dimensional subspace using the $\ell_p$ norms with $p = 1, 2, \infty$, and the $\ell_p$ quasinorm with $p = \frac{1}{2}$.
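This geometric picture can be reproduced by a brute-force search along the line. In the sketch below, the signal $x$, the line $A$, and the grid resolution are all illustrative assumptions of ours, not values from the original module:

```python
import numpy as np

# Illustrative setup: approximate x by a point on the affine line
# A = {a + t*d : t in R}; x, a, and d are made-up values.
x = np.array([1.0, 0.2])
a = np.array([0.0, 1.0])
d = np.array([1.0, 0.5])

ts = np.linspace(-5.0, 5.0, 20001)   # dense grid over the line parameter
points = a + ts[:, None] * d         # candidate points xhat in A

for p in [0.5, 1.0, 2.0, np.inf]:
    errs = np.abs(x - points)
    # For finite p, minimizing ||x - xhat||_p is equivalent to minimizing
    # sum_i |e_i|^p, since u -> u^(1/p) is monotone; for p = inf take the max.
    cost = errs.max(axis=1) if p == np.inf else (errs ** p).sum(axis=1)
    xhat = points[np.argmin(cost)]
    print(f"p = {p}: xhat = {xhat}, error = {x - xhat}")
```

With these values, $p \le 1$ drives one error coordinate (nearly) to zero, while $p = \infty$ balances the magnitudes of the two coordinates, matching the behavior described above.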

Source: OpenStax, An introduction to compressive sensing. OpenStax CNX. Apr 02, 2011. Download for free at http://legacy.cnx.org/content/col11133/1.5