Review
Last time we proved that for each
$k\le {c}_{0}\frac{n}{\log (N/n)}$ ,
there exists an
$n\times N$ matrix
$\Phi $ and a decoder
$\Delta $ such that

(a)
$\|x-\Delta (\Phi (x))\|_{{\ell}_{1}}\le {c}_{0}{\sigma}_{k}(x{)}_{{\ell}_{1}}$

(b)
$\|x-\Delta (\Phi (x))\|_{{\ell}_{2}}\le {c}_{0}\frac{{\sigma}_{k}(x{)}_{{\ell}_{1}}}{\sqrt{k}}$
Recall that we can find such a
$\Phi $ by setting the entries
$[\Phi ]_{j,k}={\phi}_{j,k}(\omega )$ to be realizations of
independent and identically distributed Gaussian random variables.
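For concreteness, such a $\Phi$ can be sampled in a few lines (a minimal NumPy sketch, not from the original notes; the $1/\sqrt{n}$ scaling is one common normalization, chosen so that columns have expected unit norm):

```python
import numpy as np

rng = np.random.default_rng(0)       # fixed seed for reproducibility
n, N = 64, 256                       # n measurements, ambient dimension N

# i.i.d. Gaussian entries with variance 1/n, so E[||column||^2] = 1
Phi = rng.standard_normal((n, N)) / np.sqrt(n)
```

With high probability such a draw satisfies the restricted-isometry-type conditions used in the proof, but, as discussed below, verifying this for a particular realization is intractable.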
Deficiencies
Decoding is not implementable
Our decoding “algorithm” is:
$$\Delta \left(y\right):=\operatorname*{argmin}_{x\in \mathcal{F}\left(y\right)}{\sigma}_{k}{\left(x\right)}_{{\ell}_{1}}$$
where
$\mathcal{F}\left(y\right):=\{x:\Phi (x)=y\}$ . In general, this
algorithm is not implementable. This deficiency, however, is easily repaired. Specifically, define
$${\Delta}_{1}\left(y\right):=\operatorname*{argmin}_{x\in \mathcal{F}\left(y\right)}\|x\|_{{\ell}_{1}}.$$
Then (a) and (b) hold for
${\Delta}_{1}$ in place of
$\Delta $ . This
decoding algorithm is equivalent to solving a linear programming problem, thus it is tractable and can be solved using techniques
such as the interior point method or the simplex method. In general, these algorithms have computational complexity
$O\left({N}^{3}\right)$ .
For very large signals this can become prohibitive, and hence there has been a considerable amount of research in faster
decoders (such as decoding using greedy algorithms).
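The reduction of ${\Delta}_{1}$ to a linear program can be sketched as follows (an illustrative SciPy implementation, not from the original notes; the split $x=u-v$ with $u,v\ge 0$ is the standard trick for turning $\|x\|_{\ell_1}$ into a linear objective, and the problem sizes are chosen only for demonstration):

```python
import numpy as np
from scipy.optimize import linprog

def l1_decoder(Phi, y):
    """Basis-pursuit decoder Delta_1(y) = argmin ||x||_1 s.t. Phi x = y.

    Write x = u - v with u, v >= 0; then ||x||_1 = sum(u) + sum(v)
    at the optimum, giving a linear program in 2N variables.
    """
    n, N = Phi.shape
    c = np.ones(2 * N)                  # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])       # constraint: Phi (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:N], res.x[N:]
    return u - v

rng = np.random.default_rng(0)
n, N = 10, 30
Phi = rng.standard_normal((n, N)) / np.sqrt(n)   # i.i.d. Gaussian matrix
x = np.zeros(N)
x[[3, 17]] = [1.5, -2.0]                         # a 2-sparse signal
y = Phi @ x
x_hat = l1_decoder(Phi, y)                       # decode from n < N measurements
```

The decoded $\hat{x}$ is always feasible ($\Phi \hat{x}=y$) with $\|\hat{x}\|_{\ell_1}\le \|x\|_{\ell_1}$; for sufficiently sparse $x$ it coincides with $x$ itself.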
We cannot generate such $\Phi$
The construction of a
$\Phi $ from realizations of Gaussian random
variables is guaranteed to work with high probability. However, we would like to know, given a particular instance of
$\Phi $ , whether (a) and (b) still hold. Unfortunately, this is
impossible to check (since, to show that
$\Phi $ satisfies the MRIP
for
$k$ , we need to consider all possible submatrices of
$\Phi $ ). Furthermore, we would like to build
$\Phi $ that can be
implemented in circuits. We also might want fast decoders
$\Delta $ for these
$\Phi $ . Thus we also may need to be more restrictive in
building
$\Phi $ . Two possible approaches that move in this
direction are as follows:
Encoding signals
Another practical problem is that of
encoding the measurements
$y$ . In a real system these measurements
must be quantized. This problem was addressed by Candès, Romberg, and Tao in their paper Stable Signal Recovery from Incomplete
and Inaccurate Measurements. They prove that if
$y$ is quantized
to
$\overline{y}$ , and if
$x\in U\left({\ell}_{p}\right)$ for
$p\le 1$ , then
we get optimal performance in terms of the number of bits required for a given accuracy. Notice that their result applies only to
the case where
$p\le 1$ . One might expect that this argument
could be extended to
$p$ between 1 and 2, but a warning is in
order at this stage:
Fix
$1<p\le 2$ . Then there exist
$\Phi $ and
$\Delta $ satisfying
$$\|x-\Delta (\Phi (x))\|_{{\ell}_{p}}\le {C}_{0}{\sigma}_{k}(x{)}_{{\ell}_{p}}$$
if
$$k\le {c}_{0}{N}^{\frac{2-2/p}{1-2/p}}{\left(\frac{n}{\log (N/n)}\right)}^{\frac{p}{2-p}}.$$
Furthermore, this range of $k$ is the best possible (save for the
$\log $ term).
Examples:

$p=1$ , we get our original results
(the exponent on $N$ is $0$ and the exponent on $\frac{n}{\log (N/n)}$ is $1$ )

$p=2$ , we do not get instance optimality even for
$k=1$ unless
$n\approx N$

$p=\frac{3}{2}$ , we only get instance optimality if
$k\le {c}_{0}{N}^{-2}{\left(\frac{n}{\log (N/n)}\right)}^{3}$
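As a quick sanity check on these examples (a small script, assuming the bound as stated in the theorem above, with exponents $a=\frac{2-2/p}{1-2/p}$ on $N$ and $b=\frac{p}{2-p}$ on $\frac{n}{\log (N/n)}$):

```python
from fractions import Fraction

def exponents(p):
    """Exponents (a, b) in the bound k <= c0 * N^a * (n / log(N/n))^b.

    a = (2 - 2/p) / (1 - 2/p),  b = p / (2 - p).
    Note both formulas degenerate (divide by zero) at p = 2,
    consistent with instance optimality failing there unless n ~ N.
    """
    p = Fraction(p)
    a = (2 - 2 / p) / (1 - 2 / p)
    b = p / (2 - p)
    return a, b

print(exponents(1))                # p = 1: a = 0, b = 1, the original result
print(exponents(Fraction(3, 2)))   # p = 3/2: a = -2, b = 3, as in the example
```

Exact rational arithmetic (via `fractions`) avoids any floating-point ambiguity in checking the two example cases.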