Example of Recursive Dyadic Partition (RDP) growing ($\mathcal{X} = [0,1]^2$).

In the following we are going to consider the 2-dimensional case, but all the results can be easily generalized to the $d$-dimensional case ($d \ge 2$), provided the dyadic tree construction is defined properly. Consider a recursive dyadic partition of the feature space into $k$ boxes of equal size. Associated with this partition is a tree $T$. Minimizing the empirical risk with respect to this partition produces the histogram classifier with $k$ equal-sized bins. Consider also all the possible partitions corresponding to pruned versions of the tree $T$. Minimizing the empirical risk with respect to those other partitions results in other classifiers (dyadic decision trees) that are fundamentally different from the histogram rule we analyzed earlier.
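To make the construction concrete, here is a minimal Python sketch (the function name grow_rdp and the cell representation are illustrative assumptions, not part of the text) that grows a balanced recursive dyadic partition of $[0,1]^2$ by splitting every cell at its midpoint, alternating the split coordinate with depth; stopping at depth $2j$ produces $4^j$ equal-sized boxes, i.e. the equal-bin histogram partition.

def grow_rdp(cell=(0.0, 1.0, 0.0, 1.0), depth=0, max_depth=4):
    """Return the leaf cells (x_lo, x_hi, y_lo, y_hi) of a balanced RDP of [0,1]^2."""
    if depth == max_depth:
        return [cell]
    x_lo, x_hi, y_lo, y_hi = cell
    if depth % 2 == 0:                        # even depth: split along the x-coordinate
        x_mid = (x_lo + x_hi) / 2.0
        children = [(x_lo, x_mid, y_lo, y_hi), (x_mid, x_hi, y_lo, y_hi)]
    else:                                     # odd depth: split along the y-coordinate
        y_mid = (y_lo + y_hi) / 2.0
        children = [(x_lo, x_hi, y_lo, y_mid), (x_lo, x_hi, y_mid, y_hi)]
    return [leaf for child in children for leaf in grow_rdp(child, depth + 1, max_depth)]

cells = grow_rdp(max_depth=4)                 # 2^4 = 16 equal boxes: a 4 x 4 grid
print(len(cells))

Pruning subtrees of the associated tree $T$ recovers the coarser partitions referred to above.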

Pruning

Let $\mathcal{F}$ be the collection of all possible dyadic decision trees corresponding to recursive dyadic partitions of the feature space. Each such tree can be prefix encoded with a bit-string of length proportional to the number of leaves in the tree as follows: encode the structure of the tree in a top-down fashion: (i) assign a zero to each branch node and a one to each leaf node (terminal node); (ii) read the code in a breadth-first fashion, top-down, left-to-right. The figure below exemplifies this coding strategy. Notice that, since we are considering binary trees, the total number of nodes is twice the number of leaves minus one; that is, if the number of leaves in the tree is $k$ then the number of nodes is $2k-1$. Therefore to encode a tree with $k$ leaves we need $2k-1$ bits.
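The coding step can be written down directly; the sketch below (class and function names are illustrative, not from the text) performs the breadth-first traversal, emitting a zero at every branch node and a one at every leaf, and checks the $2k-1$ bit count on a small tree with $k=3$ leaves.

from collections import deque

class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right    # both None means the node is a leaf

def encode_structure(root):
    """Breadth-first 0/1 encoding of the structure of a binary tree."""
    bits, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        if node.left is None and node.right is None:
            bits.append(1)                     # leaf (terminal node)
        else:
            bits.append(0)                     # branch node
            queue.append(node.left)
            queue.append(node.right)
    return bits

# A tree with k = 3 leaves: the root splits, and its left child splits again.
tree = Node(Node(Node(), Node()), Node())
print(encode_structure(tree))                  # [0, 0, 1, 1, 1]: 2*3 - 1 = 5 bits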

Since we want to use the partition associated with this tree for classification we need to assign a decision label (either zero or one) to each leaf. Hence, to encode a decision tree in this fashion we need $3k-1$ bits, where $k$ is the number of leaves. For a tree with $k$ leaves the first $2k-1$ bits of the codeword encode the tree structure, and the remaining $k$ bits encode the classification labels. This is easily shown to be a prefix code, therefore we can use it in our classification scenario.
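For example, a dyadic decision tree with $k = 4$ leaves is encoded with $2 \cdot 4 - 1 = 7$ structure bits followed by $4$ label bits, for a total of $3 \cdot 4 - 1 = 11$ bits.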

Illustration of the tree coding technique: example of a tree and corresponding prefix code.

Let

$$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \left\{ \hat{R}_n(f) + \sqrt{\frac{(3k-1)\log 2 + \frac{1}{2}\log n}{2n}} \right\},$$

where $k$ is the number of leaves of the tree corresponding to $f$.

This optimization can be solved through a bottom-up pruning process (starting from a very large initial tree $T_0$) in $O(|T_0|^2)$ operations, where $|T_0|$ is the number of leaves in the initial tree. The complexity regularization theorem tells us that
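As a minimal sketch of the selection step (assuming a pruning routine has already produced, for each leaf count $k$, the smallest empirical risk attainable by a pruned subtree of $T_0$ with $k$ leaves; the dictionary best_empirical_risk below is a hypothetical stand-in for that output), the penalized criterion above can be minimized as follows.

from math import log, sqrt

def penalty(k, n):
    """Penalty attached to a (3k - 1)-bit prefix code with n training samples."""
    return sqrt(((3 * k - 1) * log(2) + 0.5 * log(n)) / (2 * n))

def select_leaf_count(best_empirical_risk, n):
    """Return the leaf count minimizing empirical risk plus penalty."""
    return min(best_empirical_risk,
               key=lambda k: best_empirical_risk[k] + penalty(k, n))

# Hypothetical pruning output: deeper trees fit the data better but pay a larger penalty.
best_empirical_risk = {1: 0.40, 4: 0.22, 16: 0.18, 64: 0.17}
n = 1000
k_star = select_leaf_count(best_empirical_risk, n)
print(k_star, best_empirical_risk[k_star] + penalty(k_star, n))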

$$E[R(\hat{f}_n)] \le \min_{f \in \mathcal{F}} \left\{ R(f) + \sqrt{\frac{(3k-1)\log 2 + \frac{1}{2}\log n}{2n}} \right\} + \frac{1}{\sqrt{n}}.$$

Comparison between histogram classifiers and classification trees

In the following we will illustrate the idea behind complexity regularization by applying the basic theorem to histogram classifiers and classification trees (using our setup above).

Consider the classification setup described in "Classification", with $\mathcal{X} = [0,1]^2$.

Histogram risk bound

Recall the setup and results of a previous lecture (the description here is slightly different from the one in the previous lecture). Let

$$\mathcal{F}_k^H = \left\{ \text{histogram rules with } k^2 \text{ bins} \right\}.$$

Then $|\mathcal{F}_k^H| = 2^{k^2}$. Let $\mathcal{F}^H = \bigcup_{k \ge 1} \mathcal{F}_k^H$. We can encode each element $f$ of $\mathcal{F}^H$ with $c^H(f) = k + k^2$ bits, where the first $k$ bits indicate the smallest $k$ such that $f \in \mathcal{F}_k^H$ and the following $k^2$ bits encode the labels of each bin. This is a prefix encoding of all the elements in $\mathcal{F}^H$.
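One concrete way to realize this code (an assumption, since the text only says that the first $k$ bits identify $k$) is to announce $k$ with $k-1$ zeros followed by a one and then append the $k^2$ bin labels, as in the short sketch below.

def encode_histogram(labels):
    """Prefix-encode a k x k table of 0/1 bin labels using k + k^2 bits."""
    k = len(labels)
    header = [0] * (k - 1) + [1]               # k bits announcing the value of k (assumed unary scheme)
    body = [bit for row in labels for bit in row]   # k^2 label bits, bin by bin
    return header + body

labels = [[0, 1], [1, 1]]                       # k = 2: a 2 x 2 histogram rule
print(len(encode_histogram(labels)))            # k + k^2 = 2 + 4 = 6 bits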

Source:  OpenStax, Statistical learning theory. OpenStax CNX. Apr 10, 2009 Download for free at http://cnx.org/content/col10532/1.3