
Hilbert Spaces
A Hilbert space $\mathcal{H}$ is a closed, normed linear vector space, with the norm induced by an inner product, which contains all of its limit points: if $x_n$ is any sequence of elements in $\mathcal{H}$ that converges to $x$, then $x$ is also contained in $\mathcal{H}$. $x$ is termed the limit point of the sequence.

Let the space consist of all rational numbers and let the inner product be simple multiplication: $\langle x, y \rangle = x y$. However, the limit point of the sequence $x_n = 1 + 1 + \frac{1}{2!} + \cdots + \frac{1}{n!}$ is $e$, which is not a rational number. Consequently, this space is not a Hilbert space. However, if we define the space to consist of all real numbers, we have a Hilbert space.
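A small numerical aside (Python, not part of the original text): each partial sum of this sequence is rational, yet the sums approach the irrational number $e$, so the limit escapes the space of rationals.

```python
# Partial sums x_n = 1 + 1 + 1/2! + ... + 1/n! are exact rationals,
# but they converge to e, which is irrational.
from fractions import Fraction
import math

x = Fraction(0)
term = Fraction(1)            # term = 1/k! for the current k
for k in range(0, 11):        # accumulate x_10 = sum_{k=0}^{10} 1/k!
    x += term
    term /= (k + 1)

print(x)                      # exact rational partial sum
print(float(x), math.e)       # 2.71828180..., 2.718281828459045
```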

orthogonal
If $\mathcal{M}$ is a subspace of $\mathcal{H}$, the vector $x$ is orthogonal to the subspace $\mathcal{M}$ if, for every $y \in \mathcal{M}$, $\langle x, y \rangle = 0$.

We now arrive at a fundamental theorem.

Let $\mathcal{H}$ be a Hilbert space and $\mathcal{M}$ a subspace of it. Any element $x \in \mathcal{H}$ has the unique decomposition $x = y + z$, where $y \in \mathcal{M}$ and $z$ is orthogonal to $\mathcal{M}$. Furthermore, $\| x - y \| = \min_{v \in \mathcal{M}} \| x - v \|$: the distance between $x$ and all elements of $\mathcal{M}$ is minimized by the vector $y$. This element $y$ is termed the projection of $x$ onto $\mathcal{M}$.

Geometrically, $\mathcal{M}$ is a line or a plane passing through the origin. Any vector $x$ can be expressed as the sum of a vector lying in $\mathcal{M}$ and a vector orthogonal to $\mathcal{M}$. This theorem is of extreme importance in linear estimation theory and plays a fundamental role in detection theory.
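A minimal NumPy sketch of the projection theorem in a finite-dimensional setting; the matrix `A`, whose columns span the subspace, and the vector `x` are made-up examples, not from the module.

```python
# The projection y of x onto M = span of the columns of A satisfies
# x = y + z with z orthogonal to M, and y is the closest point of M to x.
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])            # columns span the subspace M
x = np.array([3.0, -1.0, 2.0])

coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)   # least-squares coefficients
y = A @ coeffs                        # projection of x onto M
z = x - y                             # component orthogonal to M

print(A.T @ z)                        # ~[0, 0]: z is orthogonal to M
print(np.linalg.norm(x - y))          # the minimal distance from x to M
```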

Separable vector spaces

separable
A Hilbert space $\mathcal{H}$ is said to be separable if there exists a set of vectors $\{\phi_i\}$, $i = 1, 2, \ldots$, elements of $\mathcal{H}$, that express every element $x \in \mathcal{H}$ as
$$x = \sum_{i=1}^{\infty} x_i \phi_i$$
where $x_i$ are scalar constants associated with $\phi_i$ and $x$, and where "equality" is taken to mean that the distance between each side becomes zero as more terms are taken in the right:
$$\lim_{m \to \infty} \left\| x - \sum_{i=1}^{m} x_i \phi_i \right\| = 0 .$$

The set of vectors $\{\phi_i\}$ is said to form a complete set if the above relationship is valid. A complete set is said to form a basis for the space $\mathcal{H}$. Usually the elements of the basis for a space are taken to be linearly independent. Linear independence implies that the expansion of the zero vector in terms of a basis can only be made with zero coefficients:
$$\sum_{i=1}^{\infty} x_i \phi_i = 0 \iff x_i = 0 \text{ for all } i .$$
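For column vectors, linear independence can be checked numerically: the matrix whose columns are the candidate basis vectors must have full column rank. A small sketch with made-up vectors (not from the text):

```python
# A set of column vectors is linearly independent exactly when the only
# combination producing the zero vector uses all-zero coefficients,
# i.e. the matrix formed from those columns has full column rank.
import numpy as np

independent = np.array([[1.0, 0.0],
                        [0.0, 1.0],
                        [1.0, 1.0]])
dependent   = np.array([[1.0, 2.0],
                        [1.0, 2.0],
                        [1.0, 2.0]])   # second column = 2 * first column

print(np.linalg.matrix_rank(independent))  # 2: full column rank, independent
print(np.linalg.matrix_rank(dependent))    # 1: rank deficient, dependent
```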
The representation theorem states simply that separable vector spaces exist. The representation of the vector $x$ is the sequence of coefficients $\{x_i\}$.

The space consisting of column matrices of length $N$ is easily shown to be separable. Let the vector $\phi_i$ be given by a column matrix having a one in the $i$th row and zeros in the remaining rows: $\phi_i = \begin{bmatrix} 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \end{bmatrix}^{T}$. The set of vectors $\{\phi_i\}$, $i = 1, \ldots, N$, constitutes a basis for the space. Obviously, if the vector $x$ is given by $x = \begin{bmatrix} x_1 & x_2 & \cdots & x_N \end{bmatrix}^{T}$, it may be expressed as $x = \sum_{i=1}^{N} x_i \phi_i$ using the basis vectors just defined.

In general, the upper limit on the sum in the expansion above is infinite. For the previous example, the upper limit is finite. The number of basis vectors required to express every element of a separable space is said to be the dimension of the space. In this example, the dimension of the space is $N$. There exist separable vector spaces for which the dimension is infinite.

orthonormal
The basis for a separable vector space is said to be an orthonormal basis if the elements of the basis satisfy the following two properties:
  • The inner product between distinct elements of the basis is zero (i.e., the elements of the basis are mutually orthogonal):
    $$\langle \phi_i, \phi_j \rangle = 0, \quad i \neq j .$$
  • The norm of each element of the basis is one (normality):
    $$\| \phi_i \| = 1, \quad i = 1, 2, \ldots$$

For example, the basis given above for the space of $N$-dimensional column matrices is orthonormal. For clarity, two facts must be explicitly stated. First, not every basis is orthonormal. If the vector space is separable, a complete set of vectors can be found; however, this set does not have to be orthonormal to be a basis. Secondly, not every set of orthonormal vectors can constitute a basis. When the vector space $L^2$ is discussed in detail, this point will be illustrated.

Despite these qualifications, an orthonormal basis exists for every separable vector space. There is an explicit algorithm, the Gram-Schmidt procedure, for deriving an orthonormal set of functions from a complete set. Let $\{\phi_i\}$ denote a basis; the orthonormal basis $\{\psi_i\}$ is sought. The Gram-Schmidt procedure is:

  • 1. $\psi_1 = \frac{\phi_1}{\| \phi_1 \|}$. This step makes $\psi_1$ have unit length.
  • 2. $\psi_2' = \phi_2 - \langle \psi_1, \phi_2 \rangle \psi_1$. Consequently, the inner product between $\psi_2'$ and $\psi_1$ is zero. We obtain $\psi_2$ from $\psi_2'$ by forcing the vector to have unit length.
  • 2'. $\psi_2 = \frac{\psi_2'}{\| \psi_2' \|}$.

The algorithm now generalizes.

  • k. $\psi_k' = \phi_k - \sum_{i=1}^{k-1} \langle \psi_i, \phi_k \rangle \psi_i$
  • k'. $\psi_k = \frac{\psi_k'}{\| \psi_k' \|}$

By construction, this new set of vectors $\{\psi_i\}$ is an orthonormal set. As the original set of vectors $\{\phi_i\}$ is a complete set, and as each $\psi_k$ is just a linear combination of $\phi_i$, $i = 1, \ldots, k$, the derived set $\{\psi_i\}$ is also complete. Because of the existence of this algorithm, a basis for a vector space is usually assumed to be orthonormal.
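As a concrete illustration, here is a minimal NumPy sketch of the Gram-Schmidt procedure for column vectors; the function name and the example vectors are illustrative, not taken from the module.

```python
# Gram-Schmidt: subtract from each new vector its components along the
# orthonormal vectors already found, then normalize.
import numpy as np

def gram_schmidt(basis):
    """Return an orthonormal set spanning the same space as `basis`,
    a list of linearly independent 1-D arrays."""
    ortho = []
    for phi in basis:
        # Remove the projections of phi onto the vectors found so far.
        psi = phi - sum(np.dot(p, phi) * p for p in ortho)
        ortho.append(psi / np.linalg.norm(psi))   # force unit length
    return ortho

phis = [np.array([1.0, 1.0, 0.0]),
        np.array([1.0, 0.0, 1.0]),
        np.array([0.0, 1.0, 1.0])]
psis = gram_schmidt(phis)

# Check orthonormality: the matrix of inner products should be the identity.
print(np.round([[np.dot(a, b) for b in psis] for a in psis], 10))
```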

A vector's representation with respect to an orthonormal basis $\{\phi_i\}$ is easily computed. The vector $x$ may be expressed by:

$$x = \sum_{i=1}^{\infty} x_i \phi_i$$
$$x_i = \langle \phi_i, x \rangle$$
This formula is easily confirmed by substituting it into the expansion and using the properties of an inner product. Note that the exact element values of a given vector's representation depend upon both the vector and the choice of basis. Consequently, a meaningful specification of the representation of a vector must include the definition of the basis.
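A small NumPy sketch of this computation, using a made-up orthonormal basis of three-dimensional column vectors (the basis and the vector are hypothetical):

```python
# With an orthonormal basis, each representation coefficient is an inner
# product, x_i = <psi_i, x>, and summing x_i * psi_i recovers the vector.
import numpy as np

s = 1.0 / np.sqrt(2.0)
psis = [np.array([s,  s, 0.0]),        # an orthonormal basis of R^3
        np.array([s, -s, 0.0]),
        np.array([0.0, 0.0, 1.0])]
x = np.array([3.0, -1.0, 2.0])

coeffs = [np.dot(psi, x) for psi in psis]          # x_i = <psi_i, x>
x_reconstructed = sum(c * psi for c, psi in zip(coeffs, psis))

print(coeffs)                                      # representation of x
print(np.allclose(x, x_reconstructed))             # True
```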

The mathematical representation of a vector, expressed by the preceding equations, can also be expressed geometrically. This expression is a generalization of the Cartesian representation of numbers. Perpendicular axes are drawn; these axes correspond to the orthonormal basis vectors used in the representation. A given vector is represented as a point in the "plane", with the value of the component along the $\phi_i$ axis being $x_i$.

An important relationship follows from this mathematical representation of vectors. Let $x$ and $y$ be any two vectors in a separable space. These vectors are represented with respect to an orthonormal basis by the coefficients $\{x_i\}$ and $\{y_i\}$, respectively. The inner product $\langle x, y \rangle$ is related to these representations by
$$\langle x, y \rangle = \sum_{i=1}^{\infty} x_i y_i .$$
This result is termed Parseval's theorem. Consequently, the inner product between any two vectors can be computed from their representations. A special case of this result corresponds to the Cartesian notion of the length of a vector; when $x = y$, Parseval's relationship becomes
$$\| x \|^2 = \sum_{i=1}^{\infty} x_i^2 .$$
These two relationships are key results of the representation theorem. The implication is that any inner product computed from vectors can also be computed from their representations. There are circumstances in which the latter computation is more manageable than the former and, furthermore, of greater theoretical significance.
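A brief numerical check of Parseval's theorem, reusing the hypothetical orthonormal basis from the previous sketch together with a made-up second vector:

```python
# Parseval: <x, y> equals the sum of products of the representation
# coefficients, and ||x||^2 equals the sum of squared coefficients.
import numpy as np

s = 1.0 / np.sqrt(2.0)
psis = [np.array([s,  s, 0.0]),
        np.array([s, -s, 0.0]),
        np.array([0.0, 0.0, 1.0])]
x = np.array([3.0, -1.0, 2.0])
y = np.array([1.0,  4.0, -2.0])

xi = [np.dot(p, x) for p in psis]      # representation of x
yi = [np.dot(p, y) for p in psis]      # representation of y

print(np.dot(x, y), sum(a * b for a, b in zip(xi, yi)))   # equal values
print(np.dot(x, x), sum(a * a for a in xi))               # ||x||^2 both ways
```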

Source:  OpenStax, Statistical signal processing. OpenStax CNX. Dec 05, 2011 Download for free at http://cnx.org/content/col11382/1.1