
Introduction

Overview of the learning problem

The fundamental problem in learning from data is proper model selection. As we have seen in the previous lectures, a model that is too complex could overfit the training data (causing an estimation error), and a model that is too simple could be a bad approximation of the function that we are trying to estimate (causing an approximation error). The estimation error arises because we do not know the true joint distribution of the data in the input and output space, and therefore we minimize the empirical risk (which, for each candidate model, is a random number depending on the data) and estimate the average risk from the limited number of training samples we have. The approximation error measures how well the functions in the chosen model space can approximate the underlying relationship between the input space and the output space, and in general it improves as the “size” of our model space increases.
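This tradeoff is often summarized by decomposing the excess risk of the selected model. The display below is a standard identity written in notation introduced here only for illustration (it is not fixed by the text): $R(f)$ is the true risk of $f$, $R^*$ the risk of the best possible predictor, and $\mathcal{F}$ the chosen model space.

$$
R(\hat{f}_n) - R^* \;=\; \underbrace{\Big( R(\hat{f}_n) - \inf_{f \in \mathcal{F}} R(f) \Big)}_{\text{estimation error}} \;+\; \underbrace{\Big( \inf_{f \in \mathcal{F}} R(f) - R^* \Big)}_{\text{approximation error}}
$$

Enlarging $\mathcal{F}$ shrinks the second term but tends to inflate the first, and this is the tension that model selection must resolve.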

Lecture outline

In the preceding lectures, we looked at some solutions to deal with the overfitting problem. The basic approach followed was the Method of Sieves, in which the complexity of the model space was chosen as a function of the number of training samples. In particular, both the denoising and classification problems we looked at consider estimators based on histogram partitions. The size of the partition was an increasing function of the number of training samples. In this lecture, we will refine our learning methods further and introduce model selection procedures that automatically adapt to the distribution of the training data, rather than basing the model class solely on the number of samples. This sort of adaptivity will play a major role in the design of more effective classifiers and denoising methods. The key to designing data-adaptive model selection procedures is obtaining useful upper bounds on the estimation error. To this end, we will introduce the idea of “Probably Approximately Correct” (PAC) learning methods.
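As a rough preview of what a PAC guarantee looks like (the precise definitions come later), the goal is an upper bound on the estimation error that holds with high probability; the accuracy and confidence parameters $\epsilon$ and $\delta$ below are generic symbols, not notation taken from the text:

$$
\mathbb{P}\Big( R(\hat{f}_n) - \inf_{f \in \mathcal{F}_n} R(f) > \epsilon \Big) \;\le\; \delta .
$$

In words, with probability at least $1 - \delta$ the selected model is “approximately correct,” i.e., within $\epsilon$ of the best model in the class.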

Recap: method of sieves

The method of sieves underpinned our approaches in the denoising problem and in the histogram classification problem. Recall that the basic idea is to define a sequence of model spaces $\mathcal{F}_1, \mathcal{F}_2, \ldots$ of increasing complexity, and then, given the training data $\{X_i, Y_i\}_{i=1}^n$, select a model according to

$$
\hat{f}_n \;=\; \arg\min_{f \in \mathcal{F}_n} \hat{R}_n(f) .
$$

The choice of the model space $\mathcal{F}_n$ (and hence the model complexity and structure) is determined completely by the sample size $n$, and does not depend on the (empirical) distribution of the training data. This is a major limitation of the sieve method. In a nutshell, the method of sieves tells us to average the data in a certain way (e.g., over a partition of $\mathcal{X}$) based on the sample size, independent of the sample values themselves.
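To make this rigidity concrete, the following short Python sketch implements a sieve-style estimator for a one-dimensional denoising problem: a piecewise-constant (histogram) fit on $[0,1]$ whose number of bins is fixed by the sample size alone. The bin-count rule $k_n \approx n^{1/3}$ and the function names are illustrative choices, not prescriptions from the text.

import numpy as np

def sieve_histogram_estimator(x, y, n_bins=None):
    """Empirical-risk-minimizing piecewise-constant fit on [0, 1].

    The partition size depends only on the sample size n (the sieve);
    it never looks at the observed sample values themselves.
    """
    n = len(x)
    if n_bins is None:
        # Sieve rule: complexity grows with n only (n**(1/3) is an
        # illustrative rate, not the one derived in the lectures).
        n_bins = max(1, int(round(n ** (1 / 3))))
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each sample to a cell of the fixed partition.
    bins = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    # Minimizing empirical squared-error risk over piecewise-constant
    # functions reduces to averaging y within each cell.
    fit = np.zeros(n_bins)
    for b in range(n_bins):
        in_cell = bins == b
        fit[b] = y[in_cell].mean() if in_cell.any() else 0.0

    def f_hat(x_new):
        cell = np.clip(np.digitize(x_new, edges[1:-1]), 0, n_bins - 1)
        return fit[cell]

    return f_hat

# Example: noisy samples of a smooth function on [0, 1].
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=200)
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=200)
f_hat = sieve_histogram_estimator(x, y)
print(f_hat(np.array([0.1, 0.5, 0.9])))

Note that the partition, and hence where the averaging happens, is decided before the responses are ever examined; a data-adaptive procedure would instead let the training data influence the number or placement of the cells, which is the direction this lecture takes.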

In general, learning comprises two things:

  1. Averaging data to reduce variability
  2. Deciding where (or how) to average

Sieves basically force us to deal with (2) a priori (before we analyze the training data). This will lead to suboptimal classifiers and estimators, in general. Indeed, deciding where/how to average is the really interesting and fundamental aspect of learning; once this is decided, we have effectively solved the learning problem. There are at least two possibilities for breaking the rigidity of the method of sieves, as we shall see in the following section.





Source:  OpenStax, Statistical learning theory. OpenStax CNX. Apr 10, 2009 Download for free at http://cnx.org/content/col10532/1.3
