
Data adaptive model spaces

Structural risk minimization (SRM)

The basic idea is to select $\mathcal{F}_n$ based on the training data themselves. Let $\mathcal{F}_1, \mathcal{F}_2, \ldots$ be a sequence of model spaces of increasing sizes/complexities with

$$\lim_{k \to \infty} \, \inf_{f \in \mathcal{F}_k} R(f) = R^*.$$

Let

$$\hat{f}_{n,k} = \arg\min_{f \in \mathcal{F}_k} \hat{R}_n(f)$$

be a function from $\mathcal{F}_k$ that minimizes the empirical risk. This gives us a sequence of selected models $\hat{f}_{n,1}, \hat{f}_{n,2}, \ldots$ Also associate with each set $\mathcal{F}_k$ a value $C_{n,k} > 0$ that measures the complexity or "size" of the set $\mathcal{F}_k$. Typically, $C_{n,k}$ is monotonically increasing with $k$ (since the sets are of increasing complexity) and decreasing with $n$ (since we become more confident with more training data). More precisely, suppose that the $C_{n,k}$ are chosen so that

$$P\left( \sup_{f \in \mathcal{F}_k} \left| \hat{R}_n(f) - R(f) \right| > C_{n,k} \right) < \delta$$

for some small $\delta > 0$. Then we may conclude that with very high probability (at least $1 - \delta$) the empirical risk $\hat{R}_n$ is within $C_{n,k}$ of $R$ uniformly over the class $\mathcal{F}_k$. This type of bound suffices to bound the estimation error (variance) of the model selection process, since it yields bounds of the form $R(f) \le \hat{R}_n(f) + C_{n,k}$ for all $f \in \mathcal{F}_k$, and SRM selects the final model by minimizing this bound over all functions in $\bigcup_{k \ge 1} \mathcal{F}_k$. The selected model is given by $\hat{f}_{n,\hat{k}}$, where

$$\hat{k} = \arg\min_{k \ge 1} \left\{ \hat{R}_n(\hat{f}_{n,k}) + C_{n,k} \right\}.$$

A typical example is the use of the VC dimension to characterize the complexity of the collection of model spaces; i.e., $C_{n,k}$ is derived from a bound on the estimation error.
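To make the procedure concrete, here is a minimal sketch in Python. Everything in it is illustrative: the nested classes are 1-D histogram classifiers (so $|\mathcal{F}_k| = 2^k$), and the penalty $C_{n,k} = \sqrt{(\log|\mathcal{F}_k| + \log(1/\delta))/(2n)}$ is one standard choice obtained from Hoeffding's inequality plus a union bound, not the only valid one.

```python
import numpy as np
from itertools import product

def empirical_risk(f, X, y):
    """Fraction of training points misclassified by f."""
    return np.mean(f(X) != y)

def srm_select(model_classes, X, y, delta=0.05):
    """SRM over a sequence of finite model classes of increasing complexity.

    The penalty C_{n,k} is a Hoeffding/union-bound deviation value for the
    finite class F_k (an illustrative choice, not part of the general method).
    """
    n = len(y)
    best = None
    for k, F_k in enumerate(model_classes, start=1):
        f_hat = min(F_k, key=lambda f: empirical_risk(f, X, y))  # ERM within F_k
        C_nk = np.sqrt((np.log(len(F_k)) + np.log(1.0 / delta)) / (2.0 * n))
        score = empirical_risk(f_hat, X, y) + C_nk               # penalized risk
        if best is None or score < best[0]:
            best = (score, k, f_hat)
    return best  # (penalized risk, selected k-hat, selected model)

def histogram_class(k):
    """All 2^k labelings of k equal-width bins on [0, 1)."""
    def make_f(labels):
        arr = np.array(labels)
        return lambda X: arr[np.minimum((X * k).astype(int), k - 1)]
    return [make_f(labels) for labels in product([0, 1], repeat=k)]

# Toy usage: noiseless labels with a boundary at x = 1/3.
rng = np.random.default_rng(0)
X = rng.uniform(size=200)
y = (X > 1/3).astype(int)
score, k_hat, f_hat = srm_select([histogram_class(k) for k in range(1, 7)], X, y)
```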

Complexity regularization

Consider a very large class of candidate models $\mathcal{F}$. To each $f \in \mathcal{F}$ assign a complexity value $C_n(f)$. Assume that the complexity values are chosen so that

$$P\left( \sup_{f \in \mathcal{F}} \left\{ \left| \hat{R}_n(f) - R(f) \right| - C_n(f) \right\} > 0 \right) < \delta.$$

This probability bound also implies an upper bound on the estimation error, and complexity regularization is based on the criterion

$$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \left\{ \hat{R}_n(f) + C_n(f) \right\}.$$

Complexity regularization and SRM are very similar, and equivalent in certain instances. A distinguishing feature of SRM and complexity regularization techniques is that the complexity and structure of the model is not fixed prior to examining the data; the data aid in the selection of the best complexity. In fact, the key difference compared to the Method of Sieves is that these techniques allow the data to play an integral role in deciding where and how to average the data.
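A corresponding sketch for complexity regularization, under the same illustrative assumptions as the SRM sketch above: $\mathcal{F}$ is one flat finite class, $c(f)$ is a user-supplied complexity value (e.g., a description length in bits), and the penalty converts it through a Hoeffding-style deviation bound; none of these specific choices come from the text.

```python
import numpy as np

def complexity_regularized_erm(F, c, X, y, delta=0.05):
    """Penalized ERM over one large finite class F.

    c(f) is a complexity value assigned to each model (e.g. a code length
    in bits); the penalty C_n(f) below is an illustrative Hoeffding-style
    choice in which more complex models pay a larger deviation allowance.
    """
    n = len(y)
    def penalized_risk(f):
        emp = np.mean(f(X) != y)                                  # empirical risk
        C_nf = np.sqrt((c(f) * np.log(2) + np.log(1.0 / delta)) / (2.0 * n))
        return emp + C_nf
    return min(F, key=penalized_risk)
```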

Probably approximately correct (PAC) learning

Probability bounds of the forms in [link] and [link] are the foundation for SRM and complexity regularization techniques. The simplest of these bounds are known as PAC bounds in the machine learning community.

Approximation and estimation errors

In order to develop complexity regularization schemes we will need to revisit the estimation error / approximation error trade-off. Let $\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f)$ for some space of models $\mathcal{F}$. Then the excess risk decomposes as

$$R(\hat{f}_n) - R^* = \underbrace{R(\hat{f}_n) - \inf_{f \in \mathcal{F}} R(f)}_{\text{estimation error}} + \underbrace{\inf_{f \in \mathcal{F}} R(f) - R^*}_{\text{approximation error}}$$

The approximation error depends on how close $f^*$ is to $\mathcal{F}$, and without making assumptions this is unknown. The estimation error is quantifiable, and depends on the complexity or size of $\mathcal{F}$. The error decomposition is illustrated in [link]. The estimation error quantifies how much we can "trust" the empirical risk minimization process to select a model close to the best in a given class.
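As a toy numeric illustration of the trade-off (all specifics here are assumptions made for the example): take $X$ uniform on $[0,1]$, $Y = \mathbf{1}_{\{X > 1/3\}}$ with no label noise, and let $\mathcal{F}_M$ be the histogram classifiers on $M$ equal-width bins. The approximation error is then exactly the distance from $1/3$ to the nearest bin edge, and as a stand-in for the estimation error we use the Hoeffding-type value $\sqrt{(M\log 2 + \log(1/\delta))/(2n)}$, since $|\mathcal{F}_M| = 2^M$.

```python
import numpy as np

# Approximation vs. estimation as the class F_M of M-bin histogram
# classifiers grows (toy distribution: X ~ U[0,1], Y = 1{X > 1/3}).
n, delta = 500, 0.05
print(" M   approx_err  estim_bound  total")
for M in [1, 2, 4, 8, 16, 32, 64]:
    edges = np.arange(M + 1) / M
    approx = np.abs(edges - 1/3).min()    # exact for this toy distribution
    estim = np.sqrt((M * np.log(2) + np.log(1/delta)) / (2*n))  # |F_M| = 2^M
    print(f"{M:3d}   {approx:.4f}      {estim:.4f}       {approx+estim:.4f}")
```

The first term shrinks and the second grows with $M$; their sum is minimized at an intermediate complexity (around $M = 8$ for these settings).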

Relationship between the errors

Probability bounds of the forms in [link] and [link] guarantee that the empirical risk is uniformly close to the true risk, and using [link] and [link] it is possible to show that with high probability the selected model $\hat{f}_n$ satisfies

$$R(\hat{f}_n) - \inf_{f \in \mathcal{F}_k} R(f) \le C_{n,k}$$

or

$$R(\hat{f}_n) - \inf_{f \in \mathcal{F}} R(f) \le C_n(\hat{f}_n).$$

The PAC learning model

The estimation error will be small if $R(\hat{f}_n)$ is close to $\inf_{f \in \mathcal{F}} R(f)$. PAC learning expresses this as follows. We want $\hat{f}_n$ to be a "probably approximately correct" (PAC) model from $\mathcal{F}$. Formally, we say that $\hat{f}_n$ is $\epsilon$-accurate with confidence $1 - \delta$, or $(\epsilon, \delta)$-PAC for short, if

$$P\left( R(\hat{f}_n) - \inf_{f \in \mathcal{F}} R(f) > \epsilon \right) < \delta.$$

This says that the difference between $R(\hat{f}_n)$ and $\inf_{f \in \mathcal{F}} R(f)$ is greater than $\epsilon$ with probability less than $\delta$. Sometimes, especially in the machine learning community, PAC bounds are stated as: "with probability at least $1 - \delta$, $|R(\hat{f}_n) - \inf_{f \in \mathcal{F}} R(f)| \le \epsilon$."

To introduce PAC bounds, let us consider a simple case. Let $\mathcal{F}$ consist of a finite number of models, and let $|\mathcal{F}|$ denote that number. Furthermore, assume that $\min_{f \in \mathcal{F}} R(f) = 0$. For example:

$$\mathcal{F} = \text{set of all histogram classifiers with } M \text{ bins} \;\Longrightarrow\; |\mathcal{F}| = 2^M.$$

$$\min_{f \in \mathcal{F}} R(f) = 0 \;\Longleftrightarrow\; \text{there is a classifier in } \mathcal{F} \text{ with zero probability of error.}$$
Theorem

Assume $|\mathcal{F}| < \infty$ and $\min_{f \in \mathcal{F}} R(f) = 0$, where $R(f) = P(f(X) \ne Y)$. Let $\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f)$, where $\hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{\{f(X_i) \ne Y_i\}}$. Then for every $n$ and every $\epsilon > 0$,

$$P\left( R(\hat{f}_n) > \epsilon \right) \le |\mathcal{F}| \, e^{-n\epsilon} \equiv \delta.$$

Proof: Since $\min_{f \in \mathcal{F}} R(f) = 0$, it follows that $\hat{R}_n(\hat{f}_n) = 0$. In fact, there may be several $f \in \mathcal{F}$ such that $\hat{R}_n(f) = 0$. Let $G = \{f : \hat{R}_n(f) = 0\}$. Then

$$\begin{aligned}
P\left( R(\hat{f}_n) > \epsilon \right) &\le P\left( \bigcup_{f \in G} \{ R(f) > \epsilon \} \right) \\
&= P\left( \bigcup_{f \in \mathcal{F}} \{ R(f) > \epsilon, \, \hat{R}_n(f) = 0 \} \right) \\
&= P\left( \bigcup_{f \in \mathcal{F}: R(f) > \epsilon} \{ \hat{R}_n(f) = 0 \} \right) \\
&\le \sum_{f \in \mathcal{F}: R(f) > \epsilon} P\left( \hat{R}_n(f) = 0 \right) \\
&\le |\mathcal{F}| \, (1 - \epsilon)^n.
\end{aligned}$$

The last inequality follows from the fact that if $R(f) = P(f(X) \ne Y) > \epsilon$, then the probability that $n$ i.i.d. samples all satisfy $f(X_i) = Y_i$ is at most $(1 - \epsilon)^n$; note that this is simply the probability that $\hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{\{f(X_i) \ne Y_i\}} = 0$. Finally, apply the inequality $1 - x \le e^{-x}$ to obtain the desired result.
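The theorem is easy to check by simulation. The setup below is entirely hypothetical: a class of $2^M$ histogram classifiers over $M = 5$ equally likely bins that contains the true labeling, so $\min_{f \in \mathcal{F}} R(f) = 0$ and $R(f)$ equals the fraction of bins on which $f$ disagrees with the truth.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
M, n, eps, trials = 5, 100, 0.05, 2000
F = list(product([0, 1], repeat=M))     # |F| = 2^M histogram classifiers
f_star = (0, 0, 1, 1, 1)                # truth lies in F, so min R(f) = 0
# X uniform over bins => R(f) = (number of bins where f differs from f_star) / M
true_risk = {f: np.mean(np.array(f) != np.array(f_star)) for f in F}

exceed = 0
for _ in range(trials):
    bins = rng.integers(0, M, size=n)   # bin index of each sample X_i
    y = np.array(f_star)[bins]
    # ERM: return any model with zero empirical risk (the first one found).
    f_hat = next(f for f in F if np.all(np.array(f)[bins] == y))
    exceed += true_risk[f_hat] > eps
print("empirical P(R(f_hat) > eps):", exceed / trials)        # typically ~0 here
print("bound |F| e^{-n eps}       :", len(F) * np.exp(-n * eps))  # ~0.216
```

The bound holds but is loose here (ERM recovers $f^*$ almost always at $n = 100$), which is typical of union-bound arguments.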

Note that for $n$ sufficiently large, $\delta = |\mathcal{F}| e^{-n\epsilon}$ is arbitrarily small. To achieve an $(\epsilon, \delta)$-PAC bound for a desired $\epsilon > 0$ and $\delta > 0$, we require at least $n = \frac{\log |\mathcal{F}| - \log \delta}{\epsilon}$ training examples.
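This sample-size requirement is worth computing. The helper below simply inverts $|\mathcal{F}| e^{-n\epsilon} \le \delta$; the numbers plugged in are arbitrary examples.

```python
import numpy as np

def pac_sample_size(F_size, eps, delta):
    """Smallest n with |F| e^{-n*eps} <= delta, i.e. n >= (log|F| - log(delta)) / eps."""
    return int(np.ceil((np.log(F_size) - np.log(delta)) / eps))

# Histogram classifiers with M = 20 bins: |F| = 2**20.
print(pac_sample_size(2**20, eps=0.05, delta=0.01))   # -> 370
```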

Corollary

Assume that $|\mathcal{F}| < \infty$ and $\min_{f \in \mathcal{F}} R(f) = 0$. Then for every $n$,

$$E[R(\hat{f}_n)] \le \frac{1 + \log |\mathcal{F}|}{n}.$$

Proof: Recall that for any non-negative random variable $Z$ with finite mean, $E[Z] = \int_0^\infty P(Z > t) \, dt$. This follows from an application of integration by parts. Then, for any $u > 0$,

$$\begin{aligned}
E[R(\hat{f}_n)] &= \int_0^\infty P\left( R(\hat{f}_n) > t \right) dt \\
&= \int_0^u P\left( R(\hat{f}_n) > t \right) dt + \int_u^\infty P\left( R(\hat{f}_n) > t \right) dt \\
&\le \int_0^u 1 \, dt + \int_u^\infty |\mathcal{F}| \, e^{-nt} \, dt \\
&= u + \frac{|\mathcal{F}|}{n} e^{-nu}.
\end{aligned}$$

Minimizing with respect to $u$ produces the smallest upper bound, attained at $u = \frac{\log |\mathcal{F}|}{n}$. Substituting this value gives $E[R(\hat{f}_n)] \le \frac{\log |\mathcal{F}|}{n} + \frac{1}{n} = \frac{1 + \log |\mathcal{F}|}{n}$, which proves the corollary.
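A quick numeric check of this minimization, with arbitrary illustrative values $|\mathcal{F}| = 1024$ and $n = 500$: the grid minimizer of $u + \frac{|\mathcal{F}|}{n} e^{-nu}$ agrees with $u = \frac{\log|\mathcal{F}|}{n}$, and the minimum value matches $\frac{1 + \log|\mathcal{F}|}{n}$.

```python
import numpy as np

F_size, n = 1024, 500
u = np.linspace(1e-6, 0.1, 100_000)
bound = u + (F_size / n) * np.exp(-n * u)   # the upper bound as a function of u
print(u[np.argmin(bound)], np.log(F_size) / n)   # ~0.013863 both
print(bound.min(), (1 + np.log(F_size)) / n)     # ~0.015863 both
```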

Source: OpenStax, Statistical learning theory. OpenStax CNX, Apr 10, 2009. Download for free at http://cnx.org/content/col10532/1.3