Having an orthonormal basis for the subspace of interest significantly simplifies the projection operator.
Lemma 1 Let $x\in X$ , a Hilbert space, and let $S$ be a closed subspace of $X$ . If $\{b_1,b_2,\ldots\}$ is an orthonormal basis for $S$ , then the closest point ${s}_{0}\in S$ to $x$ is given by ${s}_{0}=\sum_{i}\langle x,b_i\rangle b_i$ .
We begin by noting that, since ${s}_{0}\in S$ and $\{b_1,b_2,\ldots\}$ is an orthonormal basis for $S$ , we can write ${s}_{0}=\sum_{i}\langle s_0,b_i\rangle b_i$ .
Now, since ${s}_{0}$ is the projection of $x$ onto $S$ , the projection theorem requires that $x-{s}_{0}\perp S$ , and so for each basis element $b_i$ we must have $\langle x-s_0,b_i\rangle=0$ , i.e., $\langle x,b_i\rangle=\langle s_0,b_i\rangle$ . Thus, we obtain
$${s}_{0}=\sum_{i}\langle s_0,b_i\rangle b_i=\sum_{i}\langle x,b_i\rangle b_i,$$
proving the lemma.
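The lemma can be checked numerically. The sketch below (an illustrative example, not part of the text) works in the finite-dimensional Hilbert space $\mathbb{R}^3$ with the standard inner product, taking $S$ to be the $xy$-plane with orthonormal basis $b_1, b_2$; the choice of vectors is arbitrary.

```python
import numpy as np

# S is the xy-plane in R^3, with orthonormal basis b1, b2 (our choice).
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([0.0, 1.0, 0.0])
x = np.array([3.0, -2.0, 5.0])

# The lemma: s0 = sum_i <x, b_i> b_i is the closest point of S to x.
s0 = np.dot(x, b1) * b1 + np.dot(x, b2) * b2

# The error x - s0 is orthogonal to every basis vector of S ...
assert abs(np.dot(x - s0, b1)) < 1e-12
assert abs(np.dot(x - s0, b2)) < 1e-12

# ... and s0 is at least as close to x as any other point of S we try.
rng = np.random.default_rng(0)
for _ in range(100):
    s = rng.normal(size=2) @ np.vstack([b1, b2])  # random point in S
    assert np.linalg.norm(x - s0) <= np.linalg.norm(x - s) + 1e-12

print(s0)  # [ 3. -2.  0.]
```

The projection simply keeps the components of $x$ that lie along the basis of $S$ and discards the rest.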
Consider the case of a communications receiver that records a continuous-time signal $r\left(t\right)=s\left(t\right)+n\left(t\right)$ over $0\le t\le 1$ , where $s\left(t\right)$ is one of $m$ codeword signals $\{{s}_{1}\left(t\right),...,{s}_{m}\left(t\right)\}$ , and $n\left(t\right)$ is additive white Gaussian noise. The receiver must decide, as reliably as possible, which codeword was sent given the observation $r\left(t\right)$ ; this usually involves removing as much of the noise as possible from $r\left(t\right)$ .
We analyze this problem in the context of the Hilbert space ${L}_{2}[0,1]$ . To remove as much of the noise as possible, we define the subspace $S=\mathrm{span}\left(\{{s}_{1}\left(t\right),...,{s}_{m}\left(t\right)\}\right)$ . Any component of $r\left(t\right)$ that is not contained in this subspace is guaranteed to be part of the noise $n\left(t\right)$ . Now, to obtain the projection onto $S$ , we need to find an orthonormal basis $\{{e}_{1}\left(t\right),...,{e}_{n}\left(t\right)\}$ for $S$ (with $n\le m$ ), which can be done for example by applying the Gram-Schmidt procedure to the vectors $\{{s}_{1}\left(t\right),...,{s}_{m}\left(t\right)\}$ . The projection is then obtained according to the lemma as
$$\hat{s}\left(t\right)=\sum_{i=1}^{n}\langle r\left(t\right),e_i\left(t\right)\rangle\, e_i\left(t\right),$$
where $\langle r\left(t\right),e_i\left(t\right)\rangle=\int_{0}^{1}r\left(t\right)e_i\left(t\right)\,dt$ .
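A minimal numerical sketch of this projection, under assumptions of our own: $[0,1]$ is discretized into $N$ samples so that $\langle f,g\rangle\approx\sum_k f[k]g[k]\,\Delta t$, the two codeword signals are sinusoids chosen for illustration, and QR factorization is used in place of hand-rolled Gram-Schmidt (it produces the same orthonormalization).

```python
import numpy as np

# Discretized L2[0,1]: N samples, <f, g> ~ sum f[k] g[k] * dt.
N = 1000
t = np.linspace(0.0, 1.0, N, endpoint=False)
dt = 1.0 / N

# Two example codeword signals (illustrative, not from the text).
s1 = np.sin(2 * np.pi * t)
s2 = np.sin(4 * np.pi * t)

# Orthonormal basis for S = span(s1, s2): QR performs the same
# orthonormalization as Gram-Schmidt; rescale so the columns are
# orthonormal under the discretized L2 inner product.
Q, _ = np.linalg.qr(np.column_stack([s1, s2]))
E = Q / np.sqrt(dt)           # columns e_i satisfy <e_i, e_j> ~ delta_ij

# Noisy received signal: r(t) = s1(t) + n(t).
rng = np.random.default_rng(1)
r = s1 + 0.3 * rng.normal(size=N)

# Projection: s_hat = sum_i <r, e_i> e_i
coeffs = E.T @ r * dt         # <r, e_i> for each basis function
s_hat = E @ coeffs

# The projection keeps the in-subspace signal and discards almost all
# of the noise, which lives mostly outside the 2-dimensional S.
err_before = np.linalg.norm(r - s1) * np.sqrt(dt)
err_after = np.linalg.norm(s_hat - s1) * np.sqrt(dt)
assert err_after < err_before
```

Because $S$ here is only two-dimensional while the noise is spread over all $N$ dimensions, the projection removes the overwhelming majority of the noise energy.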
After the projection $\hat{s}\left(t\right)$ is obtained, an optimal receiver proceeds by finding the value of $k$ that minimizes the squared distance
$$\left\|\hat{s}\left(t\right)-s_k\left(t\right)\right\|_2^2=\left\|\hat{s}\left(t\right)\right\|_2^2-2\langle \hat{s}\left(t\right),s_k\left(t\right)\rangle+\left\|s_k\left(t\right)\right\|_2^2;$$
note here that the first term does not depend on $k$ , so it suffices to find the value of $k$ that minimizes the “cost”
$$C_k=\left\|s_k\left(t\right)\right\|_2^2-2\langle \hat{s}\left(t\right),s_k\left(t\right)\rangle.$$
In practice, the codeword signals are designed so that their norms $\left\|s_k\left(t\right)\right\|_2=\sqrt{\langle s_k\left(t\right),s_k\left(t\right)\rangle}$ are all equal. This design choice reduces the problem above to finding the value of $k$ that maximizes the score
$$\langle \hat{s}\left(t\right),s_k\left(t\right)\rangle.$$
Thus, the receiver can be designed according to the diagram in [link] .
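The receiver can be sketched in code as follows. This is an illustrative example with parameters of our own choosing: four equal-energy sinusoidal codewords on a discretized $[0,1]$. Note that since each $s_k\in S$, we have $\langle \hat{s},s_k\rangle=\langle r,s_k\rangle$, so the receiver may correlate $r$ directly with each codeword without computing the projection explicitly.

```python
import numpy as np

N = 1000
t = np.linspace(0.0, 1.0, N, endpoint=False)
dt = 1.0 / N

# Equal-energy codewords (||s_k||_2 = 1 for every k): orthogonal
# sinusoids at frequencies 1..4, scaled by sqrt(2).
codewords = [np.sqrt(2) * np.sin(2 * np.pi * (k + 1) * t) for k in range(4)]

def receive(r):
    """Return the index k maximizing the score <r, s_k>."""
    scores = [np.sum(r * s) * dt for s in codewords]
    return int(np.argmax(scores))

# Transmit codeword 2 through additive Gaussian noise.
rng = np.random.default_rng(2)
r = codewords[2] + 0.5 * rng.normal(size=N)
assert receive(r) == 2
```

Each branch of the receiver is a correlator $\int_0^1 r(t)s_k(t)\,dt$, and the decision block simply picks the largest output, matching the structure described above.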
Let $x\in X$ , a Hilbert space, let ${y}_{1},...,{y}_{n}$ be elements of $X$ , and define the closed, finite-dimensional subspace of $X$ given by $S=\mathrm{span}({y}_{1},...,{y}_{n})$ . We wish to find the best approximation of $x$ in terms of the vectors ${y}_{i}$ , that is, the linear combination $\sum_{i=1}^{n}a_i y_i$ with the smallest error $e=x-\sum_{i=1}^{n}a_i y_i$ . To measure the size of the error, we use the induced norm $\left\|e\right\|=\left\|x-\sum_{i=1}^{n}a_i y_i\right\|$ .
To solve this problem, we rely on the projection theorem: we are indeed looking for the closest point to $x$ in $S=\mathrm{span}({y}_{1},...,{y}_{n})$ . The projection theorem tells us that the closest point ${s}_{0}=\sum_{i=1}^{n}a_i y_i$ must satisfy $x-{s}_{0}\perp S$ , i.e., $e\perp S$ , which implies in turn that $\left\langle x-\sum_{i=1}^{n}a_i y_i,\,y_j\right\rangle=0$ for all $j=1,...,n$ . This requirement can be rewritten as $\langle x,y_j\rangle=\left\langle \sum_{i=1}^{n}a_i y_i,\,y_j\right\rangle=\sum_{i=1}^{n}a_i\langle y_i,y_j\rangle$ for each $j=1,...,n$ . These requirements can be collected and written in matrix form as
$$\begin{bmatrix}\langle y_1,y_1\rangle & \langle y_2,y_1\rangle & \cdots & \langle y_n,y_1\rangle\\ \langle y_1,y_2\rangle & \langle y_2,y_2\rangle & \cdots & \langle y_n,y_2\rangle\\ \vdots & \vdots & \ddots & \vdots\\ \langle y_1,y_n\rangle & \langle y_2,y_n\rangle & \cdots & \langle y_n,y_n\rangle\end{bmatrix}\begin{bmatrix}a_1\\ a_2\\ \vdots\\ a_n\end{bmatrix}=\begin{bmatrix}\langle x,y_1\rangle\\ \langle x,y_2\rangle\\ \vdots\\ \langle x,y_n\rangle\end{bmatrix}.$$
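These normal equations can be solved directly. The sketch below is an assumed example in $\mathbb{R}^4$ with the standard inner product and two non-orthogonal vectors $y_1, y_2$ of our own choosing; the Gram matrix $G$ with entries $G_{ji}=\langle y_i,y_j\rangle$ and right-hand side $b_j=\langle x,y_j\rangle$ follow the derivation above.

```python
import numpy as np

# Columns of Y are y_1, y_2 (deliberately not orthogonal).
Y = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [2.0, 1.0]])
x = np.array([1.0, 2.0, 3.0, 4.0])

G = Y.T @ Y                   # Gram matrix: G[j, i] = <y_i, y_j>
b = Y.T @ x                   # right-hand side: b[j] = <x, y_j>
a = np.linalg.solve(G, b)     # coefficients of the best approximation

s0 = Y @ a                    # closest point in span(y_1, y_2)
e = x - s0

# The error is orthogonal to every y_j, as the projection theorem requires.
assert np.allclose(Y.T @ e, 0.0)
```

This is exactly the least-squares problem, and indeed `a` agrees with the solution returned by `np.linalg.lstsq(Y, x, rcond=None)`; when the $y_i$ are orthonormal, $G$ is the identity and the coefficients reduce to $a_i=\langle x,y_i\rangle$, recovering the lemma.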