
Semi-predictive approach

Recall that a context tree source is similar to a Markov source, but with a greatly reduced number of states. Let T be the set of leaves of a context tree source; then the redundancy is

$$\rho \approx \frac{|T|\,(r-1)}{2}\,\log\!\left(\frac{n}{|T|}\right) + O(1),$$

where $|T|$ is the number of leaves and $r$ is the alphabet size, and we have $\log(n/|T|)$ instead of $\log(n)$ because each state generated $n/|T|$ symbols, on average. In contrast, the redundancy for a Markov representation of the tree source $T$ is much larger. Therefore, tree sources are greatly preferable in practice, as they offer a significant reduction in redundancy.
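For a rough sense of the gap (this order-$D$ comparison is our own illustration, assuming the same per-state Krichevsky-Trofimov redundancy applies to every state), a full order-$D$ Markov representation has $r^D$ states, so its redundancy scales as

$$\rho_{\text{Markov}} \approx \frac{r^{D}(r-1)}{2}\,\log\!\left(\frac{n}{r^{D}}\right) + O(1),$$

which is far larger than the tree-source redundancy whenever $|T| \ll r^D$.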

How can we compress universally over the parametric class of tree sources? Suppose that we knew $T$, that is, we knew the set of leaves. Then we could process $x$ sequentially, where for each $x_i$ we determine what state its context is in, that is, the unique suffix of $x_1^{i-1}$ that belongs to the set of leaf labels of $T$. Having determined that we are in some state $s$, $\Pr(x_i = 0 \mid s, x_1^{i-1})$ can be computed by examining all previous times that we were in state $s$ and computing the probability with the Krichevsky-Trofimov approach, based on the number of times that the following symbol (after $s$) was 0 or 1. In fact, we can store symbol counts $n_x(s,0)$ and $n_x(s,1)$ for all $s \in T$, update them sequentially as we process $x$, and compute $\Pr(x_i = 0 \mid s, x_1^{i-1})$ efficiently. (The actual translation to bits is performed with an arithmetic encoder.)
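As a concrete illustration, here is a minimal Python sketch of this sequential probability assignment when the leaf set $T$ is known, using the KT sequential estimate $\Pr(a \mid s) = (n_x(s,a) + 1/2)/(n_x(s,0) + n_x(s,1) + 1)$. The function names and the startup convention (assign probability 1/2 until a state can be identified) are our own assumptions, and the arithmetic-encoding step is omitted.

```python
def find_state(history, leaves):
    """Return the unique suffix of `history` that is a leaf label of T.
    `history` and the leaf labels are strings over {'0', '1'}."""
    for d in range(1, len(history) + 1):
        s = history[-d:]
        if s in leaves:
            return s
    return None  # not enough history yet to reach a leaf

def sequential_kt_probabilities(x, leaves):
    """Assign Pr(x_i | state) sequentially with the KT estimator,
    assuming the leaf set T (`leaves`) of the tree source is known."""
    counts = {s: [0, 0] for s in leaves}   # n_x(s, 0), n_x(s, 1)
    probs = []
    for i in range(len(x)):
        s = find_state(x[:i], leaves)
        if s is None:                      # startup: no state identified yet
            probs.append(0.5)
            continue
        n0, n1 = counts[s]
        a = int(x[i])
        probs.append((counts[s][a] + 0.5) / (n0 + n1 + 1.0))  # KT estimate
        counts[s][a] += 1                  # update the symbol count for state s
    return probs
```

For instance, with leaves = {'0', '01', '11'} (a depth-2 tree) and x = '0110100', the negative base-2 logarithm of the product of the returned probabilities approximates the code length an arithmetic encoder would need.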

While promising, the approach above requires knowledge of $T$. How do we compute the optimal $T^*$ from the data?

Figure: Tree pruning in the semi-predictive approach.

Semi-predictive coding : The semi-predictive approach to encoding for context tree sources  [link] is to scan the data twice: in the first scan we estimate $T^*$, and in the second scan we encode $x$ using $T^*$, as described above. Let us describe a procedure for computing the optimal $T^*$ among tree sources whose depth is bounded by $D$. This procedure is visualized in [link]. As suggested above, we count $n_x(s,a)$, the number of times that each possible symbol $a$ appeared in context $s$, for all $s \in \alpha^D$ and $a \in \alpha$. Having computed all the symbol counts, we process the depth-$D$ tree in a bottom-up fashion, from the leaves to the root, where for each internal node $s$ of the tree (that is, $s \in \alpha^d$ with $d < D$) we track $T_s^*$, the optimal tree structure rooted at $s$ for encoding symbols whose context ends with $s$, and $\mathrm{MDL}(s)$, the minimum description length (MDL) required for encoding these symbols.
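A minimal sketch of the first-scan counting step, assuming $x$ is a binary string and using a function name of our own choosing, might look as follows:

```python
def depth_D_counts(x, D):
    """Count n_x(s, a) for every depth-D context s in the binary string x."""
    counts = {}
    for i in range(D, len(x)):
        s = x[i - D:i]                    # the depth-D context preceding x[i]
        n = counts.setdefault(s, [0, 0])
        n[int(x[i])] += 1                 # increment n_x(s, x[i])
    return counts
```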

Without loss of generality, consider the simple case of a binary alphabet $\alpha = \{0, 1\}$. When processing $s$ we have already computed the symbol counts $n_x(0s,0)$, $n_x(0s,1)$, $n_x(1s,0)$, and $n_x(1s,1)$, the optimal trees $T_{0s}^*$ and $T_{1s}^*$, and the minimum description lengths $\mathrm{MDL}(0s)$ and $\mathrm{MDL}(1s)$. We have two options for state $s$, and we keep whichever yields the smaller coding length (see the sketch after this list).

  1. Keep $T_{0s}^*$ and $T_{1s}^*$. The coding length required to do so is $\mathrm{MDL}(0s) + \mathrm{MDL}(1s) + 1$, where the extra bit is spent to describe the structure of the tree, namely that $s$ remains an internal node.
  2. Merge both states (this is also called tree pruning ). The symbol counts become $n_x(s,a) = n_x(0s,a) + n_x(1s,a)$ for $a \in \{0,1\}$, and the coding length will be
    $\mathrm{KT}(n_x(s,0), n_x(s,1)) + 1$,
    where $\mathrm{KT}(\cdot,\cdot)$ is the Krichevsky-Trofimov length  [link] , and we again include an extra bit for the structure of the tree, namely that $s$ becomes a leaf.
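Taking $\mathrm{MDL}(s)$ to be the smaller of the two coding lengths and propagating this choice from depth $D$ up to the root yields the optimal tree $T^*$. The following is a minimal recursive Python sketch under our assumptions: contexts are strings over {'0', '1'}, children are formed by prepending a symbol ($0s$, $1s$), counts come from depth_D_counts above, and no structure bit is charged at depth $D$ since such nodes cannot be split (a convention we assume here, not stated above).

```python
import math

def kt_length(n0, n1):
    """Krichevsky-Trofimov code length in bits for a binary sequence
    containing n0 zeros and n1 ones, via log-gamma for numerical stability."""
    if n0 + n1 == 0:
        return 0.0
    log2_p = (math.lgamma(n0 + 0.5) + math.lgamma(n1 + 0.5)
              - math.lgamma(n0 + n1 + 1.0) - 2.0 * math.lgamma(0.5)) / math.log(2.0)
    return -log2_p

def prune(s, counts, D):
    """Return (MDL(s), leaves of T_s*, (n0, n1)) for context s, where `counts`
    maps each depth-D context to its pair [n_x(s,0), n_x(s,1)]."""
    if len(s) == D:                               # deepest level: forced leaf
        n0, n1 = counts.get(s, (0, 0))
        return kt_length(n0, n1), [s], (n0, n1)
    mdl0, tree0, (a0, a1) = prune('0' + s, counts, D)
    mdl1, tree1, (b0, b1) = prune('1' + s, counts, D)
    n0, n1 = a0 + b0, a1 + b1                     # merged counts n_x(s, a)
    split_cost = mdl0 + mdl1 + 1.0                # option 1: keep both subtrees
    leaf_cost = kt_length(n0, n1) + 1.0           # option 2: prune (merge) at s
    if split_cost < leaf_cost:
        return split_cost, tree0 + tree1, (n0, n1)
    return leaf_cost, [s], (n0, n1)

# Example: estimate T* for a binary string x with depth bound D = 3.
# mdl, leaves, _ = prune('', depth_D_counts(x, 3), 3)
```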

Source:  OpenStax, Universal algorithms in signal processing and communications. OpenStax CNX, May 16, 2013. Download for free at http://cnx.org/content/col11524/1.1