Definition 1 A linear vector space $(X,R,+,\cdot)$ is given by a set $X$ of signals (called vectors), a set of scalars $R$ , an addition operation $+:X\times X\to X$ , and a scalar multiplication operation $\cdot:R\times X\to X$ , such that for all $x,y,z\in X$ and $a,b\in R$:
1. $x+y=y+x$ (commutativity of addition);
2. $(x+y)+z=x+(y+z)$ (associativity of addition);
3. there exists a zero vector $0\in X$ such that $x+0=x$;
4. for each $x\in X$ there exists $-x\in X$ such that $x+(-x)=0$;
5. $a\cdot (x+y)=a\cdot x+a\cdot y$ and $(a+b)\cdot x=a\cdot x+b\cdot x$ (distributivity);
6. $(ab)\cdot x=a\cdot (b\cdot x)$ (associativity of scalar multiplication);
7. $1\cdot x=x$ for the multiplicative identity $1\in R$.
Example 1 Here are some examples of vector spaces: $(\mathbb{R}^{n},\mathbb{R},+,\cdot)$ , the space of real $n$-tuples with entrywise addition and scalar multiplication, and $(C[T],\mathbb{R},+,\cdot)$ , the space of continuous functions on $T$ with pointwise addition and scalar multiplication.
Definition 2 A subset $M\subseteq X$ is a linear subspace of $X$ if $M$ itself is a linear vector space. Note that, in particular, this implies that any subspace $M$ must contain the zero vector, $0\in M$ .
Example 2 Here are some examples of subspaces: the trivial subspace $\{0\}$ and the whole space $X$ are subspaces of any vector space $X$ ; any line through the origin is a subspace of $\mathbb{R}^{2}$ .
Proposition 1 If $M$ and $N$ are subspaces of $X$ , then $M\cap N$ is also a subspace.
Proof: We assume that $M$ and $N$ satisfy the properties of a linear vector space, and show that $M\cap N$ does as well. For closure under addition, let $x,y\in M\cap N$ . Since $x,y\in M$ and $M$ is a subspace, we have $x+y\in M$ ; likewise $x+y\in N$ , and therefore $x+y\in M\cap N$ . Similarly, for $a\in R$ and $x\in M\cap N$ , we have $a\cdot x\in M$ and $a\cdot x\in N$ , so $a\cdot x\in M\cap N$ .
The other properties are shown in a similar fashion.
Definition 3 A vector $x\in X$ , where $(X,R,+,\cdot)$ is a vector space, is a linear combination of a set $\{{x}_{1},{x}_{2},...,{x}_{n}\}\subseteq X$ if it can be written as $x={\sum}_{i=1}^{n}{a}_{i}\cdot{x}_{i}$ , ${a}_{i}\in R$ . The set of all linear combinations of a set of points $\{{x}_{1},{x}_{2},...,{x}_{n}\}$ forms a linear subspace of $X$ .
Example 3 The set $Q=\{{\sum}_{i=0}^{2}{a}_{i}{x}^{i}:{a}_{i}\in\mathbb{R}\}$ of all quadratic functions is a linear subspace of $(C[T],\mathbb{R},+,\cdot)$ , as it corresponds to all linear combinations of the set of functions $\{{x}^{2},x,1\}$ .
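As a small numerical illustration of Example 3 (the sampling grid below is an assumption for demonstration; the text works with continuous functions in $C[T]$), a quadratic function is recovered as a linear combination of the sampled basis functions $\{x^2, x, 1\}$:

```python
import numpy as np

# Hypothetical sampling grid on [0, 1] (an assumption; the chapter
# works with continuous functions on C[T]).
t = np.linspace(0.0, 1.0, 5)

# The basis functions {x^2, x, 1} of Q, sampled on the grid.
basis = np.stack([t**2, t, np.ones_like(t)])

# The quadratic q(x) = 3x^2 - 2x + 1 is the linear combination
# with coefficients a = (3, -2, 1).
a = np.array([3.0, -2.0, 1.0])
q = a @ basis

# Agrees with evaluating the quadratic directly.
assert np.allclose(q, 3*t**2 - 2*t + 1)
```

Any other quadratic arises the same way by changing the coefficient vector `a`.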
Definition 4 For the set $S=\{{x}_{1},{x}_{2},...,{x}_{n}\}\subseteq X$ , the span of $S$ is written as
$$\left[S\right]=\left\{{\sum}_{i=1}^{n}{a}_{i}\cdot{x}_{i}:{a}_{i}\in R\right\}.$$
Example 4 The space of quadratic functions $Q$ is written as $Q=\left[{S}_{1}\right]$ , with ${S}_{1}=\{{x}^{2},x,1\}$ . The space can also be written as $\left[{S}_{2}\right]$ with ${S}_{2}=\{1,x,{x}^{2}-2\}$ , i.e. $\left[{S}_{1}\right]=\left[{S}_{2}\right]$ . To prove this, we need to show $\left[{S}_{2}\right]\subseteq \left[{S}_{1}\right]$ and $\left[{S}_{1}\right]\subseteq \left[{S}_{2}\right]$ . For the former case we have
$$1=0\cdot{x}^{2}+0\cdot x+1\cdot 1,\quad x=0\cdot{x}^{2}+1\cdot x+0\cdot 1,\quad {x}^{2}-2=1\cdot{x}^{2}+0\cdot x-2\cdot 1,$$
which means that every element of ${S}_{2}$ is a linear combination of elements of ${S}_{1}$ ; hence every element that can be spanned by ${S}_{2}$ can also be spanned by ${S}_{1}$ , and so $\left[{S}_{2}\right]\subseteq \left[{S}_{1}\right]$ . The latter case can be shown in a similar manner.
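The span equality of Example 4 can be checked numerically: writing each element of $S_2$ as a coefficient vector over the basis $S_1 = \{x^2, x, 1\}$ gives a change-of-basis matrix, and its invertibility shows that the two sets span each other. This is a sketch; the matrix entries below are read off from the expansions in the example.

```python
import numpy as np

# Rows: coefficients of S2 = {1, x, x^2 - 2} in the basis
# S1 = {x^2, x, 1}; each row is (coeff of x^2, coeff of x, coeff of 1).
M = np.array([
    [0.0, 0.0, 1.0],   # 1       = 0*x^2 + 0*x + 1*1
    [0.0, 1.0, 0.0],   # x       = 0*x^2 + 1*x + 0*1
    [1.0, 0.0, -2.0],  # x^2 - 2 = 1*x^2 + 0*x - 2*1
])

# M invertible: each element of S1 is in turn a combination of S2,
# so [S2] ⊆ [S1] and [S1] ⊆ [S2], i.e. the spans coincide.
assert abs(np.linalg.det(M)) > 1e-12
M_inv = np.linalg.inv(M)   # coefficients of S1 in terms of S2
```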
Definition 5 A set $S=\{{x}_{1},{x}_{2},...,{x}_{n}\}$ is a linearly independent set if
$${\sum}_{i=1}^{n}{a}_{i}\cdot{x}_{i}=0\quad\text{implies}\quad {a}_{i}=0\text{ for }i=1,...,n.$$
Otherwise, the set $S$ is linearly dependent .
Definition 6 A finite set $S$ of linearly independent vectors is a basis for the space $X$ if $\left[S\right]=X$ , i.e. if $X$ is spanned by $S$ .
Definition 7 The dimension of $X$ is the number of elements $\left|S\right|$ of any basis $S$ for $X$ . A vector space for which a finite basis does not exist is called an infinite-dimensional space .
Theorem 1 Any two bases of a subspace have the same number of elements.
Proof: We prove by contradiction: assume that ${S}_{1}=\{{x}_{1},...,{x}_{n}\}$ and ${S}_{2}=\{{y}_{1},...,{y}_{m}\}$ , $m>n$ , are two bases for a subspace $X$ with different numbers of elements. Since ${y}_{1}\in X$ , it can be written as a linear combination of ${S}_{1}$ :
$${y}_{1}={\sum}_{i=1}^{n}{a}_{i}{x}_{i}.$$
Order the elements of ${S}_{1}$ above so that ${a}_{1}$ is nonzero; since ${y}_{1}$ must be nonzero, at least one such ${a}_{i}$ must exist. Solving the above equation for ${x}_{1}$ yields
$${x}_{1}=\frac{1}{{a}_{1}}\left({y}_{1}-{\sum}_{i=2}^{n}{a}_{i}{x}_{i}\right).$$
Thus $\{{y}_{1},{x}_{2},{x}_{3},...,{x}_{n}\}$ is a basis, in terms of which we can write any vector of the space $X$ , including ${y}_{2}$ :
$${y}_{2}={b}_{1}{y}_{1}+{\sum}_{i=2}^{n}{b}_{i}{x}_{i}.$$
Since ${y}_{1},{y}_{2}$ are linearly independent, at least one of the values of ${b}_{i},\phantom{\rule{3.33333pt}{0ex}}i\geq 2,$ must be nonzero. Sort the remaining ${x}_{i}$ so that ${b}_{2}$ is nonzero. Solving for ${x}_{2}$ results in
$${x}_{2}=\frac{1}{{b}_{2}}\left({y}_{2}-{b}_{1}{y}_{1}-{\sum}_{i=3}^{n}{b}_{i}{x}_{i}\right).$$
Therefore, $\{{y}_{1},{y}_{2},{x}_{3},...,{x}_{n}\}$ is a basis for $X$ . Continuing in this way, we can eliminate each ${x}_{i}$ , showing that $\{{y}_{1},{y}_{2},...,{y}_{n}\}$ is a basis for $X$ . Thus, since $m>n$ , we have ${y}_{n+1}={\sum}_{i=1}^{n}{c}_{i}{y}_{i}$ , or equivalently:
$${y}_{n+1}-{\sum}_{i=1}^{n}{c}_{i}{y}_{i}=0.$$
As a result, ${S}_{2}$ is linearly dependent and is not a basis. Therefore, all bases of $X$ must have the same number of elements.
Having a basis in hand for a given subspace allows us to express the points in the subspace in more than one way. For each point $x\in \left[S\right]$ in the span of a basis $S=\{{x}_{1},{x}_{2},...,{x}_{n}\}$ , that is,
$$x={\sum}_{i=1}^{n}{a}_{i}{x}_{i},$$
there is a one-to-one map (i.e., an equivalence) between $x\in \left[S\right]$ and $a=({a}_{1},...,{a}_{n})\in {R}^{n}$ , that is, both $x$ and $a$ uniquely identify the point in $\left[S\right]$ . This is stated more formally as a theorem.
Theorem 2 If $S=\{{x}_{1},...,{x}_{n}\}$ is a linearly independent set, then
$${\sum}_{i=1}^{n}{a}_{i}{x}_{i}={\sum}_{i=1}^{n}{b}_{i}{x}_{i}$$
if and only if ${a}_{i}={b}_{i}$ for $i=1,2,...,n$ .
Proof: If ${a}_{i}={b}_{i}$ for each $i$ , the two sums are trivially equal. For the converse, assume that indeed
$${\sum}_{i=1}^{n}{a}_{i}{x}_{i}={\sum}_{i=1}^{n}{b}_{i}{x}_{i}.$$
This implies
$${\sum}_{i=1}^{n}\left({a}_{i}-{b}_{i}\right){x}_{i}=0.$$
Since the elements of $S$ are linearly independent, each one of the scalars of the sum must be zero, that is, ${a}_{i}-{b}_{i}=0$ and so ${a}_{i}={b}_{i}$ for each $i=1,...,n$ .
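Theorem 2 can be exercised numerically: for a linearly independent sampled set, the coefficients representing a given vector are the unique solution of a linear system. The grid and the target function below are illustrative assumptions:

```python
import numpy as np

# Three sample points for three basis functions (an assumed grid).
t = np.linspace(0.0, 1.0, 3)                 # [0, 0.5, 1]

# Columns of S are the sampled basis vectors {x^2, x, 1}.
S = np.stack([t**2, t, np.ones_like(t)]).T

# Target vector: x = 4t^2 + 0.5t - 3 sampled on the grid.
x = 4*t**2 + 0.5*t - 3

# Because the columns of S are linearly independent, the square
# system S a = x has exactly one solution: the coefficients of x.
a = np.linalg.solve(S, x)
assert np.allclose(a, [4.0, 0.5, -3.0])
```

Solving the same system with any other right-hand side in the span yields that vector's unique coefficient tuple.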
Example 5 (Digital Communications) A transmitter sends two waveforms:
The signal $r\left(t\right)$ recorded by the receiver is continuous, that is, $r\left(t\right)\in C\left[T\right]$ . Assuming that the propagation delay is known and corrected at the receiver, the received signal must be in the span of the possible transmitted signals, i.e., $r\left(t\right)\in span({S}_{1}\left(t\right),{S}_{0}\left(t\right))$ . One can check that ${S}_{1}\left(t\right)$ and ${S}_{0}\left(t\right)$ are linearly independent. Thus, there is a unique choice of coefficients ${a}_{0}$ and ${a}_{1}$ that denotes whether bit 0 or bit 1 is transmitted and captures the amount of attenuation caused by the transmission:
$$r\left(t\right)={a}_{0}{S}_{0}\left(t\right)+{a}_{1}{S}_{1}\left(t\right).$$
The uniqueness of this representation can only be obtained if the transmitted signals ${S}_{0}\left(t\right)$ and ${S}_{1}\left(t\right)$ are linearly independent. The waveforms above are used in phase shift keying (PSK); other similar examples include frequency shift keying (FSK) and quadrature amplitude modulation (QAM).
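The receiver idea in Example 5 can be sketched as follows. Since the recorded signal lies in the span of the two transmitted waveforms, solving for the unique coefficients reveals which bit was sent and the attenuation. The waveforms below are illustrative assumptions (the chapter's exact $S_0(t)$ and $S_1(t)$ are not reproduced here); any linearly independent pair behaves the same way:

```python
import numpy as np

# Assumed sampling of one symbol interval.
t = np.linspace(0.0, 1.0, 100, endpoint=False)

# Hypothetical linearly independent waveforms (stand-ins for the
# chapter's S0, S1).
S0 = np.sin(2*np.pi*1*t)   # waveform for bit 0
S1 = np.sin(2*np.pi*2*t)   # waveform for bit 1

# Simulated reception: bit 1 sent, attenuated by a factor 0.3,
# so r = 0*S0 + 0.3*S1.
r = 0.3 * S1

# Recover the unique coefficients (a0, a1) with least squares over
# the matrix whose columns span the signal space.
A = np.stack([S0, S1]).T
coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
a0, a1 = coeffs

# The dominant coefficient identifies the transmitted bit, and its
# magnitude gives the attenuation.
bit = 1 if abs(a1) > abs(a0) else 0
assert bit == 1
```

With noise added to `r`, the same least-squares projection still yields the closest point in the span, which is why this viewpoint underlies practical PSK/FSK/QAM detection.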