
And so just to give this result a name, this is one instance of what's called a uniform convergence result, and the term uniform convergence alludes to the fact that as M becomes large, these epsilon hats will all simultaneously converge to epsilon of h. That is, training error will become very close to generalization error simultaneously for all hypotheses h. That's what the term uniform refers to: the fact that this convergence holds for all hypotheses h, and not just for one hypothesis. And so what we've shown is one example of a uniform convergence result. Okay? So let me clean a couple more boards, and I'll come back and ask what questions you have about this. We should take another look at this and make sure it all makes sense. Yeah, okay. What questions do you have about this?
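For reference, the two quantities under discussion here are the training error and the generalization error of a hypothesis h. A minimal sketch in the lecture notes' standard notation (lowercase m for the training set size called M in the transcript, D for the data distribution):

```latex
% Training (empirical) error: the fraction of the m training
% examples that the hypothesis h misclassifies.
\hat{\epsilon}(h) = \frac{1}{m} \sum_{i=1}^{m} 1\{\, h(x^{(i)}) \neq y^{(i)} \,\}

% Generalization error: the probability that h misclassifies a
% fresh example drawn from the data distribution D.
\epsilon(h) = P_{(x,y) \sim \mathcal{D}}\bigl(\, h(x) \neq y \,\bigr)
```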

Student:

How is the value of gamma computed [inaudible]?

Instructor (Andrew Ng): Right. Yeah. So let's see, the question is how is the value of gamma computed? For these purposes, gamma is a constant. Imagine gamma is some constant that we chose in advance, and this is a bound that holds true for any fixed value of gamma. Later on, as we take this bound and develop the result further, we'll choose specific values of gamma as a [inaudible] of this bound. For now, just note that what we've proved holds true for any value of gamma. Any questions? Yeah?

Student: [Inaudible] hypothesis class is infinite [inaudible]?

Instructor (Andrew Ng): Yes, if the hypothesis class is infinite, then this simple result won't work in its present form, but we'll generalize this – probably won't get to it today – but we'll generalize this at the beginning of the next lecture to infinite hypothesis classes.

Student: How do we use this theory [inaudible]?

Instructor (Andrew Ng): How do you use these theorems in practice? So let me – I might get to a little of that later today, and we'll talk concretely about algorithms and the consequences of understanding these things in the next lecture as well. Yeah, okay? Cool. Can you just raise your hand if the things I've proved so far make sense? Okay. Cool. Great. Thanks.

All right. Let me just take this uniform convergence bound and rewrite it in a couple of other forms. So this is a bound on a probability. It's saying: suppose I fix my training set size and fix my threshold, my error threshold gamma, what is the probability that uniform convergence holds? And that's my formula that gives the answer. This is the probability of something happening.
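The formula being referred to on the board is, per the standard development in the lecture notes, the finite-class uniform convergence bound obtained by combining Hoeffding's inequality (for a single hypothesis) with the union bound (over all k = |H| hypotheses):

```latex
% With probability at least 1 - 2k exp(-2 gamma^2 m), training error
% is within gamma of generalization error for every h simultaneously.
P\bigl(\, \forall\, h \in \mathcal{H} :\ |\hat{\epsilon}(h) - \epsilon(h)| \le \gamma \,\bigr)
\;\ge\; 1 - 2k \exp\!\left(-2\gamma^{2} m\right)
```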

So there are actually three parameters of interest. One is, "What is this probability?" The second is, "What's the training set size M?" And the third is, "What is the value of this error threshold gamma?" I'm not gonna vary K for these purposes. So there are two other equivalent forms of the bound. What we proved was: given gamma and given M, what is the probability of uniform convergence? One of the other equivalent forms asks: given gamma, and given a probability delta of making a large error, how large a training set size M do you need in order to get a uniform convergence bound with parameters gamma and delta?
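To make those alternate forms concrete, here is a short Python sketch (not from the lecture; the function names are my own) that inverts the bound 2k·exp(−2γ²m) ≤ δ both ways: given γ and δ it returns the required training set size m, and given m and δ it returns the guaranteed error threshold γ:

```python
import math

def sample_complexity(gamma: float, delta: float, k: int) -> int:
    """Smallest m with 2k * exp(-2 * gamma**2 * m) <= delta, i.e. so that
    uniform convergence holds with probability at least 1 - delta."""
    # Solving 2k exp(-2 gamma^2 m) <= delta for m gives
    #   m >= log(2k / delta) / (2 gamma^2).
    return math.ceil(math.log(2 * k / delta) / (2 * gamma ** 2))

def error_threshold(m: int, delta: float, k: int) -> float:
    """Smallest gamma guaranteed with probability at least 1 - delta
    for a training set of size m: sqrt(log(2k / delta) / (2m))."""
    return math.sqrt(math.log(2 * k / delta) / (2 * m))

# Example: k = 10,000 hypotheses, 95% confidence (delta = 0.05),
# error threshold gamma = 0.05.
print(sample_complexity(gamma=0.05, delta=0.05, k=10_000))  # 2580
print(error_threshold(m=2580, delta=0.05, k=10_000))        # ~0.0500
```

Note that the required m grows only logarithmically in the number of hypotheses k, which is what makes the bound useful even for fairly large hypothesis classes.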
