Figure 1. Ten Cases for the Pseudoinverse.
This is a setting for frames and sparse representations.
In cases 1a and 3a, $\mathbf{b}$ is necessarily in the span of $\mathbf{A}$. In addition to these classifications, the possible orthogonality of the columns or rows of the matrices gives special characteristics.
Case 1: Here we see a 3 x 3 square matrix, which is an example of case 1 in Figures 1 and 2.
If the matrix has rank 3, then the $\mathbf{b}$ vector will necessarily be in the space spanned by the columns of $\mathbf{A}$, which puts it in case 1a. This can be solved for $\mathbf{x}$ by inverting $\mathbf{A}$ or by using some more robust method. If the matrix has rank 1 or 2, then $\mathbf{b}$ may or may not lie in the spanned subspace, so the classification will be 1b or 1c, and minimization of $||\mathbf{x}||_{2}^{2}$ yields a unique solution.
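As a numerical sketch of case 1 (with hypothetical matrices, using NumPy's `np.linalg` routines), a full-rank square system can be solved directly, while a rank-deficient square matrix puts us in case 1b or 1c, where the pseudoinverse supplies the minimum-norm least-squares solution:

```python
import numpy as np

# Hypothetical full-rank 3 x 3 system (case 1a): b is necessarily in the
# column span of A, and x = A^{-1} b solves it exactly.
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 4.0, 2.0])
x = np.linalg.solve(A, b)          # exact solution, zero equation error

# Hypothetical rank-2 square matrix (case 1b or 1c): among all x
# minimizing the equation error ||A2 x - b||_2, the pseudoinverse
# returns the unique one of minimum ||x||_2.
A2 = np.array([[1.0, 2.0, 3.0],
               [2.0, 4.0, 6.0],    # row 2 = 2 * row 1, so rank <= 2
               [0.0, 1.0, 1.0]])
x2 = np.linalg.pinv(A2) @ b
```

Note that `np.linalg.solve` would fail on `A2` (it is singular), while `np.linalg.pinv` handles both cases uniformly.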
Case 2: If $\mathbf{A}$ is 4 x 3, then we have more equations than unknowns: the overspecified or overdetermined case.
If this matrix has the maximum rank of 3, then we have case 2a or 2b, depending on whether $\mathbf{b}$ is in the span of $\mathbf{A}$ or not. In either case, a unique solution $\mathbf{x}$ exists which can be found by [link] or [link]. For case 2a, we have a single exact solution with no equation error, $\epsilon = \mathbf{0}$, just as in case 1a. For case 2b, we have a single optimal approximate solution with the least possible equation error. If the matrix has rank 1 or 2, the classification will be 2c or 2d, and minimization of $||\mathbf{x}||_{2}^{2}$ yields a unique solution.
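A minimal sketch of the overdetermined case, assuming a hypothetical full-rank 4 x 3 matrix: `np.linalg.lstsq` returns the $\mathbf{x}$ minimizing the equation error, and that optimal residual is orthogonal to the columns of $\mathbf{A}$ (the normal equations):

```python
import numpy as np

# Hypothetical 4 x 3 overdetermined system (case 2): more equations
# than unknowns.  lstsq returns the x minimizing ||A x - b||_2.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])

x, residual, rank, sv = np.linalg.lstsq(A, b, rcond=None)

# rank == 3 here, so this is case 2a or 2b.  The equation error
# eps = b - A x is zero only if b happens to lie in the span of A;
# either way, eps is orthogonal to every column of A.
eps = b - A @ x
```

The orthogonality check `A.T @ eps == 0` is a useful sanity test in practice: it holds for any least-squares solution, whether or not the error itself is zero.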
Case 3: If $\mathbf{A}$ is 3 x 4, then we have more unknowns than equations or the underspecified case.
If this matrix has the maximum rank of 3, then we have case 3a and $\mathbf{b}$ must be in the span of $\mathbf{A}$. For this case, many exact solutions $\mathbf{x}$ exist, all having zero equation error, and a single one can be found with minimum solution norm $||\mathbf{x}||_{2}$ using [link] or [link]. If the matrix has rank 1 or 2, the classification will be 3b or 3c.
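A sketch of case 3a with a hypothetical rank-3, 3 x 4 matrix: the pseudoinverse picks out the exact solution of minimum norm, and adding any null-space vector of $\mathbf{A}$ gives another exact solution that is strictly longer:

```python
import numpy as np

# Hypothetical 3 x 4 underdetermined system (case 3a): rank 3, so b is
# in the span of A and infinitely many exact solutions exist.
A = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])

# The pseudoinverse selects the exact solution of minimum ||x||_2.
x_min = np.linalg.pinv(A) @ b

# Any other exact solution differs from x_min by a null-space vector.
# The last right singular vector spans the (one-dimensional) null space.
_, _, Vt = np.linalg.svd(A)
n = Vt[-1]                         # A @ n is (numerically) zero
x_other = x_min + n                # also exact, but with larger norm
```

Because `x_min` lies in the row space of $\mathbf{A}$ and `n` in its null space, the two are orthogonal, so `x_other` is necessarily longer than `x_min`.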
There are several assumptions or side conditions that could be used in order to define a useful unique solution of [link]. The side conditions used to define the Moore-Penrose pseudo-inverse are that the ${l}_{2}$ norm squared of the equation error $\epsilon$ be minimized and, if there is ambiguity (several solutions with the same minimum error), the ${l}_{2}$ norm squared of $\mathbf{x}$ also be minimized. A useful alternative to minimizing the norm of $\mathbf{x}$ is to require certain entries in $\mathbf{x}$ to be zero (sparse) or fixed to some non-zero value (equality constraints).
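As a quick numerical check (using a hypothetical rank-1 matrix, which has no ordinary inverse), the matrix computed by NumPy's `np.linalg.pinv` satisfies the four Penrose conditions that characterize the Moore-Penrose pseudo-inverse defined by these side conditions:

```python
import numpy as np

# Hypothetical rank-1 matrix: no inverse exists, but the Moore-Penrose
# pseudoinverse is still uniquely defined.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
P = np.linalg.pinv(A)

# The four Penrose conditions:
c1 = np.allclose(A @ P @ A, A)        # A A+ A = A
c2 = np.allclose(P @ A @ P, P)        # A+ A A+ = A+
c3 = np.allclose((A @ P).T, A @ P)    # A A+ is symmetric
c4 = np.allclose((P @ A).T, P @ A)    # A+ A is symmetric
```

Any matrix satisfying all four conditions is unique and yields the minimum-error, minimum-norm solution $\mathbf{x} = \mathbf{A}^{+}\mathbf{b}$ described above.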