In addition to convex optimization and greedy pursuit approaches, there is another important class of sparse recovery algorithms that we will refer to as combinatorial algorithms. These algorithms, mostly developed by the theoretical computer science community, in many cases pre-date the compressive sensing literature but are highly relevant to the sparse signal recovery problem.
The oldest combinatorial algorithms were developed in the context of group testing [link], [link], [link]. In the group testing problem, we suppose that there are $N$ total items, of which an unknown subset of $K$ elements are anomalous and need to be identified. For example, we might wish to identify defective products in an industrial setting, or identify a subset of diseased tissue samples in a medical context. In both of these cases the vector $x$ indicates which elements are anomalous, i.e., ${x}_{i}\ne 0$ for the $K$ anomalous elements and ${x}_{i}=0$ otherwise. Our goal is to design a collection of tests that allow us to identify the support (and possibly the values of the nonzeros) of $x$ while also minimizing the number of tests performed. In the simplest practical setting, these tests are represented by a binary matrix $\Phi$ whose entries ${\phi}_{ij}$ are equal to 1 if and only if the ${j}^{\mathrm{th}}$ item is used in the ${i}^{\mathrm{th}}$ test. If the output of the test is linear with respect to the inputs, then the problem of recovering the vector $x$ is essentially the same as the standard sparse recovery problem.
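To make this concrete, the following is a small numerical sketch of nonadaptive group testing with a random binary pooling design. The problem sizes, the pooling probability, and the simple decoding rule (discard any item that appears in a test with zero output, in the spirit of the COMP decoder from the group testing literature) are illustrative assumptions, not specifics from the discussion above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 100, 3, 50          # items, anomalies, tests (illustrative sizes)

# Sparse indicator vector: x_i != 0 exactly for the K anomalous items.
support = rng.choice(N, size=K, replace=False)
x = np.zeros(N)
x[support] = 1

# Binary test design: phi_ij = 1 iff item j is pooled into test i.
Phi = (rng.random((M, N)) < 0.3).astype(int)

# Linear test outputs: y_i counts the anomalous items in test i.
y = Phi @ x

# Decoding: an item that appears in any test with zero output cannot be
# anomalous; the items never ruled out form the support estimate.
ruled_out = Phi[y == 0].sum(axis=0) > 0
estimate = np.flatnonzero(~ruled_out)

print(np.sort(support), estimate)
```

With enough tests relative to $K$, the estimate typically coincides with the true support; with too few tests the rule still returns a superset of the support, since anomalous items can never be ruled out.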
Another application area in which combinatorial algorithms have proven useful is computation on data streams [link], [link]. Suppose that ${x}_{i}$ represents the number of packets passing through a network router with destination $i$. Simply storing the vector $x$ is typically infeasible since the total number of possible destinations (represented by a 32-bit IP address) is $N={2}^{32}$. Thus, instead of attempting to store $x$ directly, one can store $y=\Phi x$ where $\Phi$ is an $M\times N$ matrix with $M\ll N$. In this context the vector $y$ is often called a sketch. Note that in this problem $y$ is computed in a different manner than in the compressive sensing context. Specifically, in the network traffic example we never observe ${x}_{i}$ directly; rather, we observe increments to ${x}_{i}$ (when a packet with destination $i$ passes through the router). Thus we construct $y$ iteratively by adding the ${i}^{\mathrm{th}}$ column of $\Phi$ to $y$ each time we observe an increment to ${x}_{i}$, which we can do since $y=\Phi x$ is linear in $x$. When the network traffic is dominated by traffic to a small number of destinations, the vector $x$ is compressible, and thus the problem of recovering $x$ from the sketch $\Phi x$ is again essentially the same as the sparse recovery problem.
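The increment-driven construction of the sketch is easy to see in code. The following simulation maintains $y$ one packet at a time and checks that it matches the batch product $\Phi x$; the dimensions, the Gaussian $\Phi$, and the skewed traffic pattern are illustrative assumptions (in particular, $N$ is shrunk far below the ${2}^{32}$ of the text).

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 2**14, 256             # destinations and sketch length, with M << N
Phi = rng.standard_normal((M, N))

y = np.zeros(M)               # the sketch, maintained as packets stream by
x = np.zeros(N)               # true counts, kept here only to verify linearity

heavy = rng.choice(N, size=5, replace=False)   # a few dominant destinations
for _ in range(10_000):
    if rng.random() < 0.9:
        i = rng.choice(heavy)                  # most traffic is concentrated,
    else:                                      # so x ends up compressible
        i = rng.integers(N)
    x[i] += 1                 # the increment we never store explicitly...
    y += Phi[:, i]            # ...only its effect on the sketch y = Phi x

# Because y = Phi x is linear in x, the streamed sketch equals the batch product.
assert np.allclose(y, Phi @ x)
```

Only the $M$-dimensional sketch $y$ (and $\Phi$, or a hash-based rule for generating its columns on the fly) needs to be stored; the $N$-dimensional vector $x$ appears above purely to verify the equivalence.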
Several combinatorial algorithms for sparse recovery have been developed in the literature. A non-exhaustive list includes Random Fourier Sampling [link], HHS Pursuit [link], and Sparse Sequential Matching Pursuit [link]. We do not provide a full discussion of each of these algorithms; instead, we describe two simple methods that convey the flavor of combinatorial sparse recovery: count-min and count-median.
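As a preview of the count-min idea, here is a minimal sketch of the data structure: each item is hashed into one counter per row, updates increment those counters, and a point query returns the row-wise minimum, which upper-bounds the true count. The width and depth values and the simple affine hashes are illustrative stand-ins for the pairwise-independent hash families used in the formal analysis.

```python
import numpy as np

class CountMinSketch:
    """Minimal count-min sketch over nonnegative integer updates."""

    PRIME = 2**31 - 1   # modulus for the toy hash functions

    def __init__(self, width=272, depth=5, seed=0):
        rng = np.random.default_rng(seed)
        self.width, self.depth = width, depth
        self.table = np.zeros((depth, width), dtype=np.int64)
        # One random affine hash per row (a toy stand-in for a
        # pairwise-independent hash family).
        self.a = rng.integers(1, self.PRIME, size=depth)
        self.b = rng.integers(0, self.PRIME, size=depth)

    def _cells(self, item):
        return (self.a * item + self.b) % self.PRIME % self.width

    def update(self, item, count=1):
        # Increment one counter in each row.
        self.table[np.arange(self.depth), self._cells(item)] += count

    def query(self, item):
        # Collisions only inflate counters, so the minimum over the rows
        # is an upper bound on (and an estimate of) the true count.
        return int(self.table[np.arange(self.depth), self._cells(item)].min())

cms = CountMinSketch()
for dest, hits in [(42, 1000), (7, 500), (99, 3)]:
    cms.update(dest, hits)
print(cms.query(42), cms.query(7), cms.query(99))   # approx 1000, 500, 3
```

Taking the minimum is sound here because updates are nonnegative; handling signed updates is part of what motivates the median-based variant discussed next.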