A new theory known as Compressed Sensing (CS) has recently emerged that can also be categorized as a type of dimensionality reduction. Like manifold learning, CS is strongly model-based (relying on sparsity in particular). However, unlike many of the standard techniques in dimensionality reduction (such as manifold learning or the JL lemma), the goal of CS is to maintain a low-dimensional representation of a signal $x$ from which a faithful approximation to $x$ can be recovered. In this sense, CS more closely resembles the traditional problem of data compression (see Compression ). In CS, however, the encoder requires no a priori knowledge of the signal structure; only the decoder uses the model (sparsity) to recover the signal. We justify such an approach again using geometric arguments.
Consider a signal $x\in {\mathbb{R}}^{N}$ , and suppose that the basis $\Psi $ provides a $K$ -sparse representation of $x$ ; that is, $x=\Psi \alpha $ , where the coefficient vector $\alpha $ has only $K$ nonzero entries.
As we discussed in Compression , the standard procedure for compressing sparse signals, known as transform coding, is to (i) acquire the full $N$ -sample signal $x$ ; (ii) compute the complete set of transform coefficients $\alpha $ ; (iii) locate the $K$ largest, significant coefficients and discard the (many) small coefficients; (iv) encode the values and locations of the largest coefficients.
This procedure has three inherent inefficiencies: First, for a high-dimensional signal, we must start with a large number of samples $N$ . Second, the encoder must compute all $N$ of the transform coefficients $\alpha $ , even though it will discard all but $K$ of them. Third, the encoder must encode the locations of the large coefficients, which requires increasing the coding rate since the locations change with each signal.
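The transform coding procedure above can be sketched in a few lines of NumPy. This is a minimal illustration, not an implementation from the text: the orthonormal basis $\Psi$ (built here via a QR factorization of a random matrix) and all variable names are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 128, 5  # signal length, sparsity level (example values)

# An orthonormal basis Psi (Q factor of a random matrix; stand-in for any basis)
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))

# A signal that is exactly K-sparse in Psi
alpha_true = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
alpha_true[support] = rng.standard_normal(K)
x = Psi @ alpha_true

# (i)   acquire all N samples of x (done above)
# (ii)  compute the complete set of transform coefficients
alpha = Psi.T @ x
# (iii) locate the K largest coefficients and discard the rest
locations = np.argsort(np.abs(alpha))[-K:]
values = alpha[locations]
# (iv)  "encode" the kept values and locations; the decoder then rebuilds x
x_hat = Psi[:, locations] @ values
```

Note the inefficiencies in miniature: all $N$ samples of `x` are acquired and all $N$ entries of `alpha` are computed, only for $N-K$ of them to be thrown away, and `locations` must be transmitted alongside `values`.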
This raises a simple question: For a given signal, is it possible to directly estimate the set of large $\alpha \left(n\right)$ 's that will not be discarded? While this seems improbable, Candès, Romberg, and Tao [link] , [link] and Donoho [link] have shown that a reduced set of projections can contain enough information to reconstruct sparse signals. An offshoot of this work, often referred to as Compressed Sensing (CS) [link] , [link] , [link] , [link] , [link] , [link] , [link] , has emerged that builds on this principle.
In CS, we do not measure or encode the $K$ significant $\alpha \left(n\right)$ directly. Rather, we measure and encode $M<N$ projections $y\left(m\right)=\langle x,{\phi }_{m}\rangle $ of the signal onto a second set of functions $\left\{{\phi }_{m}\right\},m=1,2,...,M$ . In matrix notation, we measure $y=\Phi x$ , where $y$ is an $M\times 1$ column vector and the measurement matrix $\Phi $ is $M\times N$ with the ${\phi }_{m}^{T}$ as its rows.
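The measurement step can be sketched as follows, again with assumed names and example dimensions. For simplicity the signal is taken sparse in the canonical basis ($\Psi = I$), and the decoder shown is an "oracle" that is handed the support and solves a least-squares problem; it is a stand-in for the sparsity-exploiting decoders in the cited work, used here only to illustrate that $M<N$ random projections retain the information needed to recover $x$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, M = 128, 5, 50  # ambient dimension, sparsity, number of measurements

# A signal that is K-sparse in the canonical basis (Psi = I for simplicity)
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# The encoder needs no knowledge of x's structure: Phi is simply random
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Measure M < N projections y(m) = <x, phi_m>, i.e. y = Phi x
y = Phi @ x

# Oracle decoder: given the support, recover the nonzero entries by least
# squares on the corresponding K columns of Phi (full column rank since M > K)
x_hat = np.zeros(N)
x_hat[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
```

Here the encoder computes only $M$ inner products and never sees $\alpha$, the support, or any model of the signal; all use of sparsity is deferred to the decoder.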