When sparsity is desired in posing a signal processing problem (e.g., compressive sensing), an ${l}_{1}$ norm (or even an ${l}_{0}$ “pseudo-norm”) can be used to obtain solutions with zero components where possible [link] , [link] .
In addition to using side conditions to achieve a unique solution, side conditions are sometimes part of the original problem. One interesting case requires that certain of the equations be satisfied with no error while the approximation is achieved with the remaining equations.
If the ${l}_{2}$ norm is used, a unique generalized solution to [link] always exists such that the norm squared of the equation error ${\epsilon}^{T}\epsilon$ and the norm squared of the solution ${\mathbf{x}}^{T}\mathbf{x}$ are both minimized. This solution is denoted by $\mathbf{x}={\mathbf{A}}^{+}\mathbf{b}$
where ${\mathbf{A}}^{+}$ is called the Moore-Penrose inverse [link] of $\mathbf{A}$ (and is also called the generalized inverse [link] and the pseudoinverse [link] ).
Roger Penrose [link] showed that for all $\mathbf{A}$ , there exists a unique ${\mathbf{A}}^{+}$ satisfying the four conditions: $\mathbf{A}{\mathbf{A}}^{+}\mathbf{A}=\mathbf{A}$ , ${\mathbf{A}}^{+}\mathbf{A}{\mathbf{A}}^{+}={\mathbf{A}}^{+}$ , ${\left(\mathbf{A}{\mathbf{A}}^{+}\right)}^{T}=\mathbf{A}{\mathbf{A}}^{+}$ , and ${\left({\mathbf{A}}^{+}\mathbf{A}\right)}^{T}={\mathbf{A}}^{+}\mathbf{A}$ .
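These four conditions are easy to check numerically. A minimal NumPy sketch (the matrix `A` below is an arbitrary example, not one from the text):

```python
import numpy as np

# Arbitrary rectangular example matrix (any shape or rank would do).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

Ap = np.linalg.pinv(A)  # Moore-Penrose pseudoinverse, computed via the SVD

# The four Penrose conditions:
assert np.allclose(A @ Ap @ A, A)        # A A+ A = A
assert np.allclose(Ap @ A @ Ap, Ap)      # A+ A A+ = A+
assert np.allclose((A @ Ap).T, A @ Ap)   # A A+ is symmetric
assert np.allclose((Ap @ A).T, Ap @ A)   # A+ A is symmetric
```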
There is a large literature on this problem. Five useful books are [link] , [link] , [link] , [link] , [link] . The Moore-Penrose pseudo-inverse can be calculated in Matlab [link] by the pinv(A,tol) function, which uses a singular value decomposition (SVD) to calculate it. There are a variety of other numerical methods given in the above references, each with some advantages and some disadvantages.
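For readers working outside Matlab, the same tolerance-based SVD computation can be sketched in Python with NumPy; the function name `pinv_svd` and the default tolerance are illustrative choices, not the text's own:

```python
import numpy as np

def pinv_svd(A, tol=None):
    """Pseudo-inverse via the SVD, in the spirit of Matlab's pinv(A, tol).

    Singular values at or below tol are treated as exactly zero, which is
    what keeps the computation well behaved for rank-deficient A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if tol is None:  # a common default: machine epsilon scaled by the matrix
        tol = max(A.shape) * np.finfo(float).eps * s.max()
    s_inv = np.array([1.0 / sv if sv > tol else 0.0 for sv in s])
    return Vt.T @ (s_inv[:, None] * U.T)   # V diag(s_inv) U^T

# A rank-deficient example where naive inversion of A^T A would fail:
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.allclose(pinv_svd(A), np.linalg.pinv(A))
```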
For cases 2a and 2b in Figure 1, the following $N$ by $N$ system of equations, called the normal equations [link] , [link] , has a unique minimum squared equation error solution (minimum ${\epsilon}^{T}\epsilon$ ): ${\mathbf{A}}^{T}\mathbf{A}\,\mathbf{x}={\mathbf{A}}^{T}\mathbf{b}$ . Here we have the over-specified case with more equations than unknowns. A derivation is outlined in "Derivations" , equation [link] below.
The solution to this equation is often used in least squares approximation problems. For these two cases ${\mathbf{A}}^{T}\mathbf{A}$ is non-singular and the $N$ by $M$ pseudo-inverse is simply ${\mathbf{A}}^{+}={\left({\mathbf{A}}^{T}\mathbf{A}\right)}^{-1}{\mathbf{A}}^{T}$ .
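As a quick numerical check of this relationship, the following NumPy sketch (with an arbitrary example system) solves the normal equations and confirms the result matches the pseudo-inverse solution:

```python
import numpy as np

# Over-specified example: three equations, two unknowns, full column rank,
# so A^T A is non-singular.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Least squares solution of the normal equations  A^T A x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# The same solution via the pseudo-inverse  A+ = (A^T A)^{-1} A^T
A_plus = np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(A_plus, np.linalg.pinv(A))
assert np.allclose(x_normal, A_plus @ b)
```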
A more general problem can be solved by minimizing the weighted equation error, ${\epsilon}^{T}{\mathbf{W}}^{T}\mathbf{W}\epsilon$ , where $\mathbf{W}$ is a positive semi-definite diagonal matrix of the error weights. The solution to that problem [link] is $\mathbf{x}={\left({\mathbf{A}}^{T}{\mathbf{W}}^{T}\mathbf{W}\mathbf{A}\right)}^{-1}{\mathbf{A}}^{T}{\mathbf{W}}^{T}\mathbf{W}\mathbf{b}$ .
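A sketch of the weighted case, assuming the error is defined as $\epsilon = \mathbf{A}\mathbf{x}-\mathbf{b}$ and using the standard weighted least squares solution (the matrices and weights here are arbitrary examples):

```python
import numpy as np

# Weighted least squares sketch: minimize (W e)^T (W e) with e = A x - b.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])
W = np.diag([1.0, 1.0, 10.0])   # weight the third equation heavily

# Standard weighted solution  x = (A^T W^T W A)^{-1} A^T W^T W b
WA, Wb = W @ A, W @ b
x_w = np.linalg.solve(WA.T @ WA, WA.T @ Wb)

# Equivalently, ordinary least squares on the pre-weighted system W A x = W b:
x_ls, *_ = np.linalg.lstsq(WA, Wb, rcond=None)
assert np.allclose(x_w, x_ls)
```

Note that heavily weighting an equation drives its residual toward zero, which connects to the earlier remark about requiring certain equations to be satisfied with no error.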
For case 3a in Figure 1, with more unknowns than equations, $\mathbf{A}{\mathbf{A}}^{T}$ is non-singular and there is a unique solution of minimum norm $\left|\right|\mathbf{x}\left|\right|$ . The $N$ by $M$ pseudoinverse is simply ${\mathbf{A}}^{+}={\mathbf{A}}^{T}{\left(\mathbf{A}{\mathbf{A}}^{T}\right)}^{-1}$ ,
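The minimum-norm case can also be checked numerically; in this NumPy sketch the wide example matrix is arbitrary:

```python
import numpy as np

# Under-specified example: two equations, three unknowns, full row rank,
# so A A^T is non-singular and exact solutions exist.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

# Pseudo-inverse  A+ = A^T (A A^T)^{-1}  and the minimum-norm solution
A_plus = A.T @ np.linalg.inv(A @ A.T)
x = A_plus @ b

assert np.allclose(A_plus, np.linalg.pinv(A))
assert np.allclose(A @ x, b)   # the equations are satisfied exactly
```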
and the corresponding formula for the solution of minimum weighted norm $\left|\right|\mathbf{W}\mathbf{x}\left|\right|$ is $\mathbf{x}={\left({\mathbf{W}}^{T}\mathbf{W}\right)}^{-1}{\mathbf{A}}^{T}{\left(\mathbf{A}{\left({\mathbf{W}}^{T}\mathbf{W}\right)}^{-1}{\mathbf{A}}^{T}\right)}^{-1}\mathbf{b}$ .
For these three cases, either [link] or [link] can be directly calculated, but not both. However, they are equal, so you simply use the one with the non-singular matrix to be inverted. The equality can be shown from an equivalent definition [link] of the pseudo-inverse given in terms of a limit by ${\mathbf{A}}^{+}=\underset{\delta \to 0}{lim}{\left({\mathbf{A}}^{T}\mathbf{A}+{\delta }^{2}\mathbf{I}\right)}^{-1}{\mathbf{A}}^{T}=\underset{\delta \to 0}{lim}{\mathbf{A}}^{T}{\left(\mathbf{A}{\mathbf{A}}^{T}+{\delta }^{2}\mathbf{I}\right)}^{-1}$ .
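The limit definition can be illustrated numerically by using a small but nonzero regularization `d` in the standard limit formulas; the matrices in this NumPy sketch are arbitrary examples:

```python
import numpy as np

# Tall, full-column-rank A:  A+ = lim_{d -> 0} (A^T A + d I)^{-1} A^T
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
d = 1e-10
approx = np.linalg.inv(A.T @ A + d * np.eye(2)) @ A.T
assert np.allclose(approx, np.linalg.pinv(A))

# Wide, full-row-rank B:  B+ = lim_{d -> 0} B^T (B B^T + d I)^{-1}
B = A.T
approx2 = B.T @ np.linalg.inv(B @ B.T + d * np.eye(2))
assert np.allclose(approx2, np.linalg.pinv(B))
```

Both forms converge to the same pseudo-inverse, which is why either expression may be used depending on which matrix is non-singular.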
For the other 6 cases, SVD or other approaches must be used. Some properties [link] , [link] are:
It is informative to consider the range and null spaces [link] of $\mathbf{A}$ and ${\mathbf{A}}^{+}$ .
The four Penrose equations in [link] are remarkable in defining a unique pseudoinverse for any $\mathbf{A}$ of any shape and any rank, for any of the ten cases listed in Figure 1. However, only four of the ten cases have analytical solutions (actually, all do if you use the SVD).