This brief tutorial on some key terms in linear algebra is not meant to replace a full course or to give deep insight into the subject. Rather, it introduces some of the terms and ideas of linear algebra as background for those trying to understand eigenvectors and eigenfunctions, which play a big role in deriving a few important ideas on Signals and Systems. These concepts will provide a background for signal decomposition and lead up to the derivation of the Fourier Series.
A set of vectors $\{{x}_{1}, {x}_{2}, \dots , {x}_{k}\}$, with ${x}_{i}\in \mathbb{C}^{n}$, is linearly independent if none of them can be written as a linear combination of the others. Equivalently, the only way to satisfy $${c}_{1}{x}_{1}+{c}_{2}{x}_{2}+\dots +{c}_{k}{x}_{k}=0$$ is with ${c}_{1}={c}_{2}=\dots ={c}_{k}=0$.
We are given the following two vectors: $${x}_{1}=\left(\begin{array}{c}3\\ 2\end{array}\right)$$ $${x}_{2}=\left(\begin{array}{c}1\\ 2\end{array}\right)$$ These are linearly independent since $${c}_{1}{x}_{1}=-({c}_{2}{x}_{2})$$ only if ${c}_{1}={c}_{2}=0$. Based on the definition, this shows that these vectors are indeed linearly independent. We could also graph these two vectors (see [link]) to check for linear independence.
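This independence check can also be done numerically. The following is a minimal sketch using NumPy (our choice of library, not part of the original text): two vectors are linearly independent exactly when the matrix having them as columns has full rank.

```python
import numpy as np

# Stack x1 = (3, 2) and x2 = (1, 2) as the columns of a 2x2 matrix.
A = np.array([[3, 1],
              [2, 2]])

# The columns are linearly independent exactly when the matrix has full
# rank (equivalently, a nonzero determinant for a square matrix).
print(np.linalg.matrix_rank(A))  # 2 (full rank), so independent
print(np.linalg.det(A))          # 3*2 - 1*2 = 4, nonzero
```

The determinant test only applies to square matrices (as many vectors as dimensions); the rank test works in general.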
Are $\{{x}_{1}, {x}_{2}, {x}_{3}\}$ linearly independent? $${x}_{1}=\left(\begin{array}{c}3\\ 2\end{array}\right)$$ $${x}_{2}=\left(\begin{array}{c}1\\ 2\end{array}\right)$$ $${x}_{3}=\left(\begin{array}{c}-1\\ 0\end{array}\right)$$
By playing around with the vectors and doing a little trial and error, we will discover the following relationship: $${x}_{1}-{x}_{2}+2{x}_{3}=0$$ Thus we have found a linear combination of these three vectors that equals zero without setting all the coefficients to zero. Therefore, these vectors are not linearly independent!
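Both the discovered relation and the dependence can be verified numerically; here is a short sketch with NumPy (an assumption of ours, the original module does not use code):

```python
import numpy as np

x1 = np.array([3, 2])
x2 = np.array([1, 2])
x3 = np.array([-1, 0])

# The discovered relation: x1 - x2 + 2*x3 should be the zero vector.
combo = x1 - x2 + 2 * x3
print(combo)  # [0 0]

# Equivalently, the matrix with these vectors as columns has rank < 3,
# so the three vectors are linearly dependent.
A = np.column_stack([x1, x2, x3])
print(np.linalg.matrix_rank(A))  # 2
```

Note that three vectors in a two-dimensional space can never be independent: the rank of a 2x3 matrix is at most 2.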
As we have seen in the two examples above, the independence of vectors can often be seen easily through a graph. However, this may not be as easy when we are given three or more vectors. Can you easily tell whether or not these vectors are independent from [link]? Probably not, which is why the method used in the above solution becomes important.
We are given the following vector $${e}_{i}=\left(\begin{array}{c}0\\ \vdots \\ 0\\ 1\\ 0\\ \vdots \\ 0\end{array}\right)$$ where the $1$ is always in the $i$ th place and the remaining values are zero. Then the standard basis for $\mathbb{C}^{n}$ is $$\{{e}_{i}\mid i=1, 2, \dots , n\}$$
$${h}_{1}=\left(\begin{array}{c}1\\ 1\end{array}\right)$$ $${h}_{2}=\left(\begin{array}{c}1\\ -1\end{array}\right)$$ $\{{h}_{1}, {h}_{2}\}$ is a basis for $\mathbb{C}^{2}$.
If $\{{b}_{1}, \dots , {b}_{n}\}$ is a basis for $\mathbb{C}^{n}$, then we can express any $x\in \mathbb{C}^{n}$ as a linear combination of the ${b}_{i}$ 's: $$x={\alpha}_{1}{b}_{1}+{\alpha}_{2}{b}_{2}+\dots +{\alpha}_{n}{b}_{n}, \quad {\alpha}_{i}\in \mathbb{C}$$
Given the following vector, $$x=\left(\begin{array}{c}1\\ 2\end{array}\right)$$ writing $x$ in terms of $\{{e}_{1}, {e}_{2}\}$ gives us $$x={e}_{1}+2{e}_{2}$$
Try to write $x$ in terms of $\{{h}_{1}, {h}_{2}\}$ (defined in the previous example).
$$x=\frac{3}{2}{h}_{1}-\frac{1}{2}{h}_{2}$$
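Rather than guessing, the coefficients can be found by solving the linear system $B\alpha = x$, where $B$ has the basis vectors as its columns. A minimal NumPy sketch (the code and variable names are our own illustration):

```python
import numpy as np

# Basis vectors h1 = (1, 1) and h2 = (1, -1) as the columns of B.
B = np.array([[1,  1],
              [1, -1]])
x = np.array([1, 2])

# Solve B @ alpha = x for the expansion coefficients (alpha_1, alpha_2).
alpha = np.linalg.solve(B, x)
print(alpha)  # [ 1.5 -0.5], i.e. x = (3/2) h1 - (1/2) h2
```

Because a basis is linearly independent, $B$ is invertible and this system always has a unique solution.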
In the two basis examples above, $x$ is the same vector in both cases, but we can express it in many different ways (we give only two of the many possibilities). You can take this even further by extending the idea of a basis to function spaces.