
Data adaptive model spaces

Structural risk minimization (SRM)

The basic idea is to select $\mathcal{F}_n$ based on the training data themselves. Let $\mathcal{F}_1, \mathcal{F}_2, \ldots$ be a sequence of model spaces of increasing sizes/complexities with

$$\lim_{k \to \infty} \inf_{f \in \mathcal{F}_k} R(f) = R^* .$$

Let

$$\hat{f}_{n,k} = \arg\min_{f \in \mathcal{F}_k} \hat{R}_n(f)$$

be a function from $\mathcal{F}_k$ that minimizes the empirical risk. This gives us a sequence of selected models $\hat{f}_{n,1}, \hat{f}_{n,2}, \ldots$ Also associate with each set $\mathcal{F}_k$ a value $C_{n,k} > 0$ that measures the complexity or “size” of the set $\mathcal{F}_k$. Typically, $C_{n,k}$ is monotonically increasing in $k$ (since the sets are of increasing complexity) and decreasing in $n$ (since we become more confident with more training data). More precisely, suppose that the $C_{n,k}$ are chosen so that

$$\mathbb{P}\left( \sup_{f \in \mathcal{F}_k} | \hat{R}_n(f) - R(f) | > C_{n,k} \right) < \delta$$

for some small $\delta > 0$. Then we may conclude that with very high probability (at least $1 - \delta$) the empirical risk $\hat{R}_n$ is within $C_{n,k}$ of the true risk $R$ uniformly over the class $\mathcal{F}_k$. This type of bound suffices to bound the estimation error (variance) of the model selection process via an upper bound of the form $R(f) \le \hat{R}_n(f) + C_{n,k}$, and SRM selects the final model by minimizing this bound over all functions in $\cup_{k \ge 1} \mathcal{F}_k$. The selected model is given by $\hat{f}_{n,\hat{k}}$, where

$$\hat{k} = \arg\min_{k \ge 1} \left\{ \hat{R}_n(\hat{f}_{n,k}) + C_{n,k} \right\} .$$

A typical example is the use of the VC dimension to characterize the complexity of the collection of model spaces, i.e., $C_{n,k}$ is derived from a bound on the estimation error.
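To make the procedure concrete, here is a minimal sketch in Python. It assumes, purely for illustration, nested classes $\mathcal{F}_k$ of histogram classifiers on $[0,1]$ with $k$ bins (so $|\mathcal{F}_k| = 2^k$) and a Hoeffding/union-bound style penalty $C_{n,k} = \sqrt{(k \log 2 + \log(2/\delta))/(2n)}$; neither choice is prescribed by the discussion above.

import numpy as np

def erm_histogram(X, Y, k):
    """Empirical risk minimizer in F_k: majority label in each of k equal-width bins on [0, 1]."""
    bins = np.minimum((X * k).astype(int), k - 1)            # bin index of each training point
    labels = np.array([int(Y[bins == j].mean() >= 0.5) if np.any(bins == j) else 0
                       for j in range(k)])                   # majority vote per bin
    emp_risk = np.mean(labels[bins] != Y)                    # training error of this ERM rule
    return labels, emp_risk

def srm_select(X, Y, k_max=50, delta=0.05):
    """SRM: pick k minimizing empirical risk plus the complexity penalty C_{n,k}."""
    n = len(Y)
    best_k, best_score = None, np.inf
    for k in range(1, k_max + 1):
        _, emp_risk = erm_histogram(X, Y, k)
        C_nk = np.sqrt((k * np.log(2) + np.log(2 / delta)) / (2 * n))   # assumed penalty form
        if emp_risk + C_nk < best_score:
            best_k, best_score = k, emp_risk + C_nk
    return best_k, best_score

# Example: noisy threshold labels; SRM balances fit against the number of bins
rng = np.random.default_rng(0)
X = rng.uniform(size=500)
Y = ((X < 0.3) ^ (rng.uniform(size=500) < 0.1)).astype(int)
print(srm_select(X, Y))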

Complexity regularization

Consider a very large class of candidate models $\mathcal{F}$. To each $f \in \mathcal{F}$ assign a complexity value $C_n(f)$. Assume that the complexity values are chosen so that

$$\mathbb{P}\left( \sup_{f \in \mathcal{F}} \left\{ | \hat{R}_n(f) - R(f) | - C_n(f) \right\} > 0 \right) < \delta .$$

This probability bound also implies an upper bound on the estimation error, and complexity regularization is based on the criterion

$$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \left\{ \hat{R}_n(f) + C_n(f) \right\} .$$

Complexity regularization and SRM are very similar, and are equivalent in certain instances. A distinguishing feature of SRM and complexity regularization techniques is that the complexity and structure of the model are not fixed prior to examining the data; the data aid in the selection of the best complexity. In fact, the key difference compared to the Method of Sieves is that these techniques can allow the data to play an integral role in deciding where and how to average the data.
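Operationally, the criterion is just a penalized minimization over the candidate set. The sketch below assumes the candidate models and their complexity values $C_n(f)$ are supplied by the user (how to construct $C_n(f)$ so that the probability bound above holds is a separate question); all names and numbers are illustrative only.

import numpy as np

def complexity_regularized_fit(models, penalties, X, Y):
    """Select the model minimizing empirical risk + per-model complexity value C_n(f).

    models:    list of classifiers (callables mapping an array of inputs to 0/1 labels)
    penalties: list of complexity values C_n(f), one per model
    """
    emp_risks = [np.mean(f(X) != Y) for f in models]          # empirical risk of each candidate
    scores = [r + c for r, c in zip(emp_risks, penalties)]    # penalized criterion
    best = int(np.argmin(scores))
    return models[best], scores[best]

# Hypothetical usage: three simple threshold classifiers on scalar inputs
models = [lambda x, t=t: (x < t).astype(int) for t in (0.2, 0.3, 0.5)]
penalties = [0.05, 0.08, 0.12]                                # placeholder complexity values
rng = np.random.default_rng(1)
X = rng.uniform(size=200)
Y = (X < 0.3).astype(int)
f_hat, score = complexity_regularized_fit(models, penalties, X, Y)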

Probably approximately correct (PAC) learning

Probability bounds of the forms in [link] and [link] are the foundation for SRM and complexity regularization techniques. The simplest of these bounds are known as PAC bounds in the machine learning community.

Approximation and estimation errors

In order to develop complexity regularization schemes we will need to revisit the estimation error / approximation error trade-off. Let $\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f)$ for some space of models $\mathcal{F}$. Then the excess risk decomposes as

$$R(\hat{f}_n) - R^* = \underbrace{R(\hat{f}_n) - \inf_{f \in \mathcal{F}} R(f)}_{\text{estimation error}} + \underbrace{\inf_{f \in \mathcal{F}} R(f) - R^*}_{\text{approximation error}}$$

The approximation error depends on how close $f^*$ is to $\mathcal{F}$, and without making assumptions, this is unknown. The estimation error is quantifiable, and depends on the complexity or size of $\mathcal{F}$. The error decomposition is illustrated in [link]. The estimation error quantifies how much we can “trust” the empirical risk minimization process to select a model close to the best in a given class.

Relationship between the errors

Probability bounds of the forms in [link] and [link] guarantee that the empirical risk is uniformly close to the true risk, and using [link] and [link] it is possible to show that with high probability the selected model $\hat{f}_n$ satisfies

$$R(\hat{f}_n) - \inf_{f \in \mathcal{F}_k} R(f) \le C_{n,k}$$

or

$$R(\hat{f}_n) - \inf_{f \in \mathcal{F}} R(f) \le C_n(\hat{f}_n) .$$
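For concreteness, here is a sketch of one standard argument behind bounds of this type, written in the SRM notation; it produces the stated form with an extra factor of 2 in the penalty, a constant that is often absorbed into the definition of $C_{n,k}$. On the event $\sup_{f \in \mathcal{F}_k} |\hat{R}_n(f) - R(f)| \le C_{n,k}$, which by the probability bound above has probability at least $1 - \delta$, we have for every $f \in \mathcal{F}_k$

$$R(\hat{f}_{n,k}) \le \hat{R}_n(\hat{f}_{n,k}) + C_{n,k} \le \hat{R}_n(f) + C_{n,k} \le R(f) + 2 C_{n,k} ,$$

where the middle inequality holds because $\hat{f}_{n,k}$ minimizes the empirical risk over $\mathcal{F}_k$. Taking the infimum over $f \in \mathcal{F}_k$ gives $R(\hat{f}_{n,k}) - \inf_{f \in \mathcal{F}_k} R(f) \le 2 C_{n,k}$ with probability at least $1 - \delta$.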

The PAC learning model

The estimation error will be small if $R(\hat{f}_n)$ is close to $\inf_{f \in \mathcal{F}} R(f)$. PAC learning expresses this as follows. We want $\hat{f}_n$ to be a “probably approximately correct” (PAC) model from $\mathcal{F}$. Formally, we say that $\hat{f}_n$ is $\varepsilon$-accurate with confidence $1 - \delta$, or $(\varepsilon, \delta)$-PAC for short, if

$$\mathbb{P}\left( R(\hat{f}_n) - \inf_{f \in \mathcal{F}} R(f) > \varepsilon \right) < \delta .$$

This says that the difference between $R(\hat{f}_n)$ and $\inf_{f \in \mathcal{F}} R(f)$ is greater than $\varepsilon$ with probability less than $\delta$. Sometimes, especially in the machine learning community, PAC bounds are stated as, “with probability at least $1 - \delta$, $| R(\hat{f}_n) - \inf_{f \in \mathcal{F}} R(f) | \le \varepsilon$.”

To introduce PAC bounds, let us consider a simple case. Let $\mathcal{F}$ consist of a finite number of models, and let $|\mathcal{F}|$ denote that number. Furthermore, assume that $\min_{f \in \mathcal{F}} R(f) = 0$. For example:

$\mathcal{F}$ = the set of all histogram classifiers with $M$ bins, so $|\mathcal{F}| = 2^M$ (each bin is assigned one of two class labels).

$\min_{f \in \mathcal{F}} R(f) = 0$ means that there is a classifier in $\mathcal{F}$ with zero probability of error.
Theorem

Assume $|\mathcal{F}| < \infty$ and $\min_{f \in \mathcal{F}} R(f) = 0$, where $R(f) = \mathbb{P}(f(X) \neq Y)$. Let $\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f)$, where $\hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{\{f(X_i) \neq Y_i\}}$. Then for every $n$ and $\varepsilon > 0$,

$$\mathbb{P}\left( R(\hat{f}_n) > \varepsilon \right) \le |\mathcal{F}| \, e^{-n\varepsilon} \equiv \delta .$$

Proof: Since $\min_{f \in \mathcal{F}} R(f) = 0$, it follows that $\hat{R}_n(\hat{f}_n) = 0$ (the classifier achieving zero true risk makes no errors on the training data, with probability one). In fact, there may be several $f \in \mathcal{F}$ such that $\hat{R}_n(f) = 0$. Let $G = \{ f : \hat{R}_n(f) = 0 \}$.

$$\begin{aligned}
\mathbb{P}( R(\hat{f}_n) > \varepsilon ) &\le \mathbb{P}\Big( \bigcup_{f \in G} \{ R(f) > \varepsilon \} \Big) = \mathbb{P}\Big( \bigcup_{f \in \mathcal{F}} \{ R(f) > \varepsilon ,\, \hat{R}_n(f) = 0 \} \Big) \\
&= \mathbb{P}\Big( \bigcup_{f \in \mathcal{F} : R(f) > \varepsilon} \{ \hat{R}_n(f) = 0 \} \Big) \le \sum_{f \in \mathcal{F} : R(f) > \varepsilon} \mathbb{P}( \hat{R}_n(f) = 0 ) \le |\mathcal{F}| \, (1 - \varepsilon)^n
\end{aligned}$$

The last inequality follows from the fact that if $R(f) = \mathbb{P}(f(X) \neq Y) > \varepsilon$, then the probability that all $n$ i.i.d. samples satisfy $f(X_i) = Y_i$ is less than or equal to $(1 - \varepsilon)^n$. Note that this is simply the probability that $\hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{\{f(X_i) \neq Y_i\}} = 0$. Finally, apply the inequality $1 - x \le e^{-x}$ to obtain the desired result.

Note that for $n$ sufficiently large, $\delta = |\mathcal{F}| e^{-n\varepsilon}$ is arbitrarily small. To achieve an $(\varepsilon, \delta)$-PAC bound for a desired $\varepsilon > 0$ and $\delta > 0$ we require at least $n = \frac{\log|\mathcal{F}| - \log\delta}{\varepsilon}$ training examples.
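As a quick numerical illustration of this sample-size requirement, using the histogram-classifier example above where $|\mathcal{F}| = 2^M$ and hence $\log|\mathcal{F}| = M \log 2$ (the particular values of $M$, $\varepsilon$, and $\delta$ are arbitrary):

import math

def pac_sample_size(num_models, eps, delta):
    """Smallest n with |F| * exp(-n * eps) <= delta, i.e. n >= (log|F| - log delta) / eps."""
    return math.ceil((math.log(num_models) - math.log(delta)) / eps)

M = 20                                                  # histogram classifiers with M bins: |F| = 2**M
print(pac_sample_size(2 ** M, eps=0.05, delta=0.01))    # -> 370 training examples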

Corollary

Assume that $|\mathcal{F}| < \infty$ and $\min_{f \in \mathcal{F}} R(f) = 0$. Then for every $n$,

$$\mathbb{E}[ R(\hat{f}_n) ] \le \frac{1 + \log|\mathcal{F}|}{n} .$$

Proof: Recall that for any non-negative random variable $Z$ with finite mean, $\mathbb{E}[Z] = \int_0^\infty \mathbb{P}(Z > t) \, dt$. This follows from an application of integration by parts.

$$\begin{aligned}
\mathbb{E}[ R(\hat{f}_n) ] &= \int_0^\infty \mathbb{P}( R(\hat{f}_n) > t ) \, dt = \int_0^u \underbrace{\mathbb{P}( R(\hat{f}_n) > t )}_{\le\, 1} \, dt + \int_u^\infty \mathbb{P}( R(\hat{f}_n) > t ) \, dt , \quad \text{for any } u > 0 \\
&\le u + |\mathcal{F}| \int_u^\infty e^{-n t} \, dt = u + \frac{|\mathcal{F}|}{n} e^{-n u}
\end{aligned}$$

Minimizing with respect to $u$ (setting the derivative $1 - |\mathcal{F}| e^{-n u}$ to zero) produces the smallest upper bound, attained at $u = \frac{\log|\mathcal{F}|}{n}$. Substituting this value back in gives $\mathbb{E}[ R(\hat{f}_n) ] \le \frac{\log|\mathcal{F}|}{n} + \frac{1}{n} = \frac{1 + \log|\mathcal{F}|}{n}$, as claimed.

Source:  OpenStax, Statistical learning theory. OpenStax CNX. Apr 10, 2009 Download for free at http://cnx.org/content/col10532/1.3