
The dyadic decision trees studied here are different from classical tree rules, such as CART or C4.5. Those techniques select a tree according to

$$\hat{k} = \arg\min_{k \ge 1} \hat{R}_n\big(\hat{f}_n^{(k)}\big) + \alpha k,$$

for some $\alpha > 0$, whereas ours was roughly

$$\hat{k} = \arg\min_{k \ge 1} \hat{R}_n\big(\hat{f}_n^{(k)}\big) + \alpha \sqrt{k},$$

for $\alpha = \sqrt{\frac{3 \log 2}{2n}}$. The square-root penalty is essential for the risk bound; no such bound exists for CART or C4.5. Moreover, recent experimental work has shown that the square-root penalty often performs better in practice. Finally, recent results show that a slightly tighter bounding procedure for the estimation error can be used to show that dyadic decision trees (with a slightly different pruning procedure) achieve a rate of

$$E\big[R(\hat{f}_n^{\hat{T}})\big] - R^* = O(n^{-1/2}), \quad \text{as } n \to \infty,$$

which turns out to be the minimax optimal rate (i.e., under the boundary assumptions above, no method can achieve a faster rate of convergence to the Bayes error).
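As a small illustration of the two selection criteria, here is a minimal sketch. The risk curve `emp_risk` and the linear-penalty weight `0.01` are synthetic stand-ins, not the output of an actual tree learner; only the form of the two penalties follows the discussion above.

```python
import numpy as np

n = 1000                       # sample size (assumed for illustration)
ks = np.arange(1, 51)          # candidate numbers of leaves k

# stand-in for the empirical risks R_hat_n(f_hat_n^(k)) of the best
# pruned trees with k leaves: decreasing with diminishing returns
emp_risk = 0.3 * np.exp(-ks / 10.0) + 0.05

# CART/C4.5-style selection: penalty linear in k (alpha chosen arbitrarily)
k_linear = ks[np.argmin(emp_risk + 0.01 * ks)]

# dyadic-tree-style selection: square-root penalty, alpha = sqrt(3 log 2 / (2n))
alpha = np.sqrt(3 * np.log(2) / (2 * n))
k_sqrt = ks[np.argmin(emp_risk + alpha * np.sqrt(ks))]

print(f"linear penalty picks k = {k_linear}, square-root penalty picks k = {k_sqrt}")
```

The square-root penalty grows more slowly in $k$, so it tolerates larger trees once the empirical risk curve has flattened out.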

Box-counting dimension

The notion of dimension of a set arises in many aspects of mathematics, and it is particularly relevant to the study of fractals (which, besides some important applications, make really cool t-shirts). The dimension indicates how we should measure the contents of a set (length, area, volume, etc.). The box-counting dimension is a simple definition of the dimension of a set. The main idea is to cover the set with boxes of side length $r$. Let $N(r)$ denote the smallest number of such boxes needed to cover the set; the box-counting dimension is then defined as

$$\lim_{r \to 0} \frac{\log N(r)}{-\log r}.$$

For instance, a unit-length line segment requires $N(r) \approx 1/r$ boxes, giving dimension 1, while the filled unit square requires $N(r) \approx 1/r^2$ boxes, giving dimension 2. Although the boxes considered above do not need to be aligned on a rectangular grid (and can in fact overlap), we can usually consider boxes on a grid and obtain an upper bound on the box-counting dimension. To illustrate the main ideas, let's consider a simple example and connect it to the classification scenario considered before.

Let $f : [0,1] \to [0,1]$ be a Lipschitz function with Lipschitz constant $L$ (i.e., $|f(a) - f(b)| \le L\,|a - b|$ for all $a, b \in [0,1]$). Define the set

$$A = \{x = (x_1, x_2) : x_2 = f(x_1)\},$$

that is, the set $A$ is the graph of the function $f$.

Consider a partition of $[0,1]^2$ into $k^2$ square boxes (just like the ones we used for histograms). The points in the set $A$ intersect at most $C' k$ boxes, with $C' = 1 + L$ (and the number of intersected boxes is at least $k$, since the graph passes through every column of the grid). The side length of the boxes is $1/k$, therefore the box-counting dimension of $A$ satisfies

$$\dim_B(A) = \lim_{1/k \to 0} \frac{\log (C' k)}{-\log(1/k)} = \lim_{k \to \infty} \frac{\log C' + \log k}{\log k} = 1.$$

The result above holds for any “normal” set $A \subseteq [0,1]^2$ that does not occupy any area. For most sets the box-counting dimension is an integer, but for some “weird” sets (called fractal sets) it is not. For example, the Koch curve has box-counting dimension $\log(4)/\log(3) = 1.26186\ldots$. This means that it is not quite as small as a 1-dimensional curve, but not as big as a 2-dimensional set (hence it occupies no area).
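To make the computation above concrete, here is a small numerical check. It is a minimal sketch: the sinusoid below is an arbitrary Lipschitz choice for $f$, and the exact intersection count is approximated by sampling each grid column densely. The ratio $\log N / \log k$ should drift toward 1 as $k$ grows.

```python
import numpy as np

def box_count(f, k):
    """Count the k x k grid boxes (side length 1/k) that the graph of f hits."""
    hit = set()
    for j in range(k):
        # sample x1 densely within column j of the grid
        x1 = np.linspace(j / k, (j + 1) / k, 50)
        rows = np.minimum((f(x1) * k).astype(int), k - 1)
        hit.update((j, r) for r in np.unique(rows))
    return len(hit)

# an arbitrary Lipschitz function mapping [0,1] into [0,1] (here L = 1.2 * pi)
f = lambda x: 0.5 + 0.3 * np.sin(4 * np.pi * x)

for k in [10, 100, 1000]:
    N = box_count(f, k)
    print(f"k = {k:4d}: N = {N:5d}, log N / log k = {np.log(N) / np.log(k):.3f}")
```

With 50 samples per column the count can slightly miss boxes the graph only grazes, but the slope estimate, and hence the dimension, is unaffected.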

To connect these concepts to our classification scenario, consider a simple example. Let $\eta(x) = P(Y = 1 | X = x)$ and assume $\eta(x)$ has the form

$$\eta(x) = \frac{1}{2} + x_2 - f(x_1), \qquad x \equiv (x_1, x_2) \in \mathcal{X},$$

where $f : [0,1] \to [0,1]$ is Lipschitz with Lipschitz constant $L$. The Bayes classifier is then given by

$$f^*(x) = \mathbf{1}\{\eta(x) \ge 1/2\} = \mathbf{1}\{x_2 \ge f(x_1)\}.$$

This is depicted in the figure below. Note that this is a special, restricted class of problems; that is, we are considering the subset of all classification problems such that the joint distribution $P_{XY}$ satisfies $P(Y = 1 | X = x) = 1/2 + x_2 - f(x_1)$ for some Lipschitz function $f$. The Bayes decision boundary is therefore given by

$$A = \{x = (x_1, x_2) : x_2 = f(x_1)\}.$$

As we observed before, this set has box-counting dimension 1.

[Figure: Bayes decision boundary for the setup described above.]
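The following is a minimal sketch of this setup. The particular boundary $f$ below is an illustrative choice, and $\eta$ is clipped into $[0,1]$ because $1/2 + x_2 - f(x_1)$ can fall outside that range; the clipping does not move the $\eta(x) = 1/2$ level set, so the Bayes rule is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# an illustrative Lipschitz boundary; any Lipschitz f: [0,1] -> [0,1] works
f = lambda x1: 0.5 + 0.25 * np.sin(2 * np.pi * x1)

def eta(x1, x2):
    """eta(x) = 1/2 + x2 - f(x1), clipped to [0,1] so it is a probability."""
    return np.clip(0.5 + x2 - f(x1), 0.0, 1.0)

def bayes_classifier(x1, x2):
    """f*(x) = 1{eta(x) >= 1/2} = 1{x2 >= f(x1)}."""
    return (x2 >= f(x1)).astype(int)

# draw X uniformly on [0,1]^2 and labels Y ~ Bernoulli(eta(X))
X = rng.uniform(size=(10_000, 2))
Y = (rng.uniform(size=10_000) < eta(X[:, 0], X[:, 1])).astype(int)

pred = bayes_classifier(X[:, 0], X[:, 1])
print("empirical risk of the Bayes rule:", np.mean(pred != Y))
```

The printed value approximates the Bayes risk $R^* = E[\min(\eta(X), 1 - \eta(X))]$, the best achievable risk for this problem.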
