
The count-min sketch

Define $H$ as the set of all discrete-valued functions $h : \{1, \ldots, N\} \to \{1, \ldots, m\}$. Note that $H$ is a finite set of size $m^N$. Each function $h \in H$ can be specified by a binary characteristic matrix $\phi(h)$ of size $m \times N$, where the $i$th column is a binary vector with exactly one 1, located in row $j = h(i)$. To construct the overall sampling matrix $\Phi$, we choose $d$ functions $h_1, \ldots, h_d$ independently from the uniform distribution defined on $H$, and vertically concatenate their characteristic matrices. Thus, if $M = md$, $\Phi$ is a binary matrix of size $M \times N$ with each column containing exactly $d$ ones.
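To make this construction concrete, the following minimal Python sketch (the function name and interface are illustrative, not part of the original text) draws $d$ hash functions uniformly at random and stacks their characteristic matrices to form $\Phi$:

import numpy as np

def build_sampling_matrix(N, m, d, seed=None):
    # Draw d hash functions h_1, ..., h_d uniformly from H by sampling,
    # for each l and each index i, a bucket h_l(i) in {0, ..., m-1}.
    rng = np.random.default_rng(seed)
    hashes = rng.integers(0, m, size=(d, N))
    # Stack the d characteristic matrices vertically: block l occupies
    # rows l*m, ..., (l+1)*m - 1, and column i of that block has its
    # single 1 in row h_l(i).
    Phi = np.zeros((m * d, N), dtype=int)
    for l in range(d):
        Phi[l * m + hashes[l], np.arange(N)] = 1
    return Phi, hashes

Every column of the resulting $\Phi$ contains exactly $d$ ones, one per block, matching the construction above.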

Now, given any signal $x$, we acquire linear measurements $y = \Phi x$. It is easy to visualize the measurements via the following two properties. First, the coefficients of the measurement vector $y$ are naturally grouped according to the “mother” binary functions $\{h_1, \ldots, h_d\}$. Second, consider the $i$th coefficient of the measurement vector $y$, which corresponds to the mother binary function $h$. Then the expression for $y_i$ is simply given by:

$$ y_i = \sum_{j \,:\, h(j) = i} x_j. $$

In other words, for a fixed signal coefficient index $j$, each measurement $y_i$ as expressed above consists of an observation of $x_j$ corrupted by the other signal coefficients mapped to the same $i$ by the function $h$. Signal recovery essentially consists of estimating the signal values from these “corrupted” observations.
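As an illustration (reusing the hypothetical hashes table from the sketch above), the measurements $y = \Phi x$ can be computed directly as bucket sums, one block per hash function, without forming $\Phi$ explicitly:

import numpy as np

def measure(x, hashes, m):
    # y has one block of m entries per hash function; within block l,
    # entry i accumulates the sum of all x_j with h_l(j) = i.
    d, N = hashes.shape
    y = np.zeros(d * m)
    for l in range(d):
        np.add.at(y, l * m + hashes[l], x)
    return y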

The count-min algorithm is useful in the special case where the entries of the original signal are positive. Given measurements $y$ using the sampling matrix $\Phi$ as constructed above, the estimate of the $j$th signal entry is given by:

$$ \hat{x}_j = \min_{l} \left\{ y_i : h_l(j) = i \right\}. $$

Intuitively, this means that the estimate of $x_j$ is formed by simply looking at all measurements that consist of $x_j$ corrupted by other signal values, and picking the one with the lowest magnitude. Despite the simplicity of this algorithm, it is accompanied by an arguably powerful instance-optimality guarantee: if $d = C \log N$ and $m = 4K/\alpha$, then with high probability the recovered signal $\hat{x}$ satisfies:

$$ \| x - \hat{x} \|_{\infty} \leq \frac{\alpha}{K} \cdot \| x - x^* \|_1, $$

where $x^*$ represents the best $K$-term approximation of $x$ in the $\ell_1$ sense.
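A minimal sketch of the count-min estimate, assuming the hypothetical measure and hashes objects from the earlier snippets and a nonnegative signal $x$:

import numpy as np

def count_min_recover(y, hashes, m):
    # For each index j, take the smallest of the d measurements that
    # contain x_j. Since all colliding coefficients are nonnegative,
    # each such measurement overestimates x_j, so the minimum is the
    # least-corrupted observation.
    d, N = hashes.shape
    x_hat = np.empty(N)
    for j in range(N):
        x_hat[j] = min(y[l * m + hashes[l, j]] for l in range(d))
    return x_hat

For example, x_hat = count_min_recover(measure(x, hashes, m), hashes, m) returns entrywise overestimates of $x$ that, under the stated choices of $d$ and $m$, satisfy the bound above with high probability.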

The count-median sketch

For the general setting when the coefficients of the original signal could be either positive or negative, a similar algorithm known as count-median can be used. Instead of picking the minimum of the measurements, we compute the median of all those measurements that consist of a corrupted version of $x_j$ and declare it as the signal coefficient estimate, i.e.,

$$ \hat{x}_j = \operatorname{median}_{l} \left\{ y_i : h_l(j) = i \right\}. $$
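A corresponding sketch for count-median, again under the assumed helpers from the snippets above and with no sign restriction on $x$:

import numpy as np

def count_median_recover(y, hashes, m):
    # For each index j, take the median of the d measurements that
    # contain x_j; collisions now perturb x_j in either direction,
    # so the median gives a robust estimate.
    d, N = hashes.shape
    x_hat = np.empty(N)
    for j in range(N):
        x_hat[j] = np.median([y[l * m + hashes[l, j]] for l in range(d)])
    return x_hat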

The recovery guarantees for count-median are similar to those for count-min, with a different value of the failure probability constant. An important feature of both count-min and count-median is that they require the measurements to be perfectly noiseless, in contrast to optimization/greedy algorithms, which can tolerate small amounts of measurement noise.

Summary

Although we ultimately wish to recover a sparse signal from a small number of linear measurements in both of these settings, there are some important differences between such settings and the compressive sensing setting studied in this course. First, in these settings it is natural to assume that the designer of the reconstruction algorithm also has full control over $\Phi$, and is thus free to choose $\Phi$ in a manner that reduces the amount of computation required to perform recovery. For example, it is often useful to design $\Phi$ so that it has few nonzeros, i.e., the sensing matrix itself is also sparse [link], [link], [link]. In general, most methods involve careful construction of the sensing matrix $\Phi$, which is in contrast with the optimization and greedy methods that work with any matrix satisfying a generic condition such as the restricted isometry property. This additional degree of freedom can lead to significantly faster algorithms [link], [link], [link], [link].

Second, note that the computational complexity of all the convex methods and greedy algorithms described above is always at least linear in $N$, since in order to recover $x$ we must at least incur the computational cost of reading out all $N$ entries of $x$. This may be acceptable in many typical compressive sensing applications, but it becomes impractical when $N$ is extremely large, as in the network monitoring example. In this context, one may seek to develop algorithms whose complexity is linear only in the length of the representation of the signal, i.e., its sparsity $K$. In this case the algorithm does not return a complete reconstruction of $x$ but instead returns only its $K$ largest elements (and their indices). As surprising as it may seem, such algorithms are indeed possible. See [link], [link] for examples.

Source: OpenStax, Introduction to compressive sensing. OpenStax CNX. Mar 12, 2015. Download for free at http://legacy.cnx.org/content/col11355/1.4