The fundamental problem in learning from data is proper model selection. As we have seen in the previous lectures, a model that is too complex could overfit the training data (causing an estimation error), and a model that is too simple could be a bad approximation of the function that we are trying to estimate (causing an approximation error). The estimation error arises because we do not know the true joint distribution of the data over the input and output spaces; therefore we minimize the empirical risk (which, for each candidate model, is a random quantity depending on the data) and estimate the average risk from the limited number of training samples we have. The approximation error measures how well the functions in the chosen model space can approximate the underlying relationship between the input and output spaces, and in general it improves as the “size” of our model space increases.
In the preceding lectures, we looked at some solutions to the overfitting problem. The basic approach followed was the Method of Sieves, in which the complexity of the model space was chosen as a function of the number of training samples. In particular, both the denoising and classification problems we looked at considered estimators based on histogram partitions, where the size of the partition was an increasing function of the number of training samples. In this lecture, we will refine our learning methods further and introduce model selection procedures that automatically adapt to the distribution of the training data, rather than basing the model class solely on the number of samples. This sort of adaptivity will play a major role in the design of more effective classifiers and denoising methods. The key to designing data-adaptive model selection procedures is obtaining useful upper bounds on the estimation error. To this end, we will introduce the idea of “Probably Approximately Correct” learning methods.
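As a preview of the kind of estimation-error bound we have in mind, consider a finite model class $\mathcal{F}$ and a loss bounded in $[0,1]$ . A standard result (obtained from Hoeffding's inequality together with a union bound over $\mathcal{F}$ ) states that, with probability at least $1-\delta$ ,

$$R(f)\phantom{\rule{4pt}{0ex}}\le \phantom{\rule{4pt}{0ex}}{\hat{R}}_{n}(f)+\sqrt{\frac{log|\mathcal{F}|+log(1/\delta )}{2n}}\phantom{\rule{1em}{0ex}}\text{for all}\phantom{\rule{4pt}{0ex}}f\in \mathcal{F}.$$

Bounds of this form make the “probably” (the confidence $1-\delta$ ) and the “approximately” (the square-root deviation term) in PAC learning precise.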
The Method of Sieves underpinned our approaches in the denoising problem and in the histogram classification problem. Recall that the basic idea is to define a sequence of model spaces ${\mathcal{F}}_{1}$ , ${\mathcal{F}}_{2}$ , ... of increasing complexity, and then, given the training data ${\{{X}_{i},{Y}_{i}\}}_{i=1}^{n}$ , select a model according to

$${\hat{f}}_{n}=arg\underset{f\in {\mathcal{F}}_{n}}{min}\phantom{\rule{4pt}{0ex}}{\hat{R}}_{n}(f),$$

where ${\hat{R}}_{n}(f)=\frac{1}{n}{\sum }_{i=1}^{n}\ell (f({X}_{i}),{Y}_{i})$ is the empirical risk of $f$ on the training data.
The choice of the model space ${\mathcal{F}}_{n}$ (and hence the model complexity and structure) is determined completely by the sample size $n$ , and does not depend on the (empirical) distribution of the training data. This is a major limitation of the sieve method. In a nutshell, the method of sieves tells us to average the data in a certain way (e.g., over a partition of $\mathcal{X}$ ) based on the sample size, independent of the sample values themselves.
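To make this concrete, here is a minimal sketch (an illustrative Python implementation, not taken from these notes) of a sieve-style histogram denoising estimator on $[0,1]$ . The partition size is set by the sample size alone, via the assumed rule ${m}_{n}\approx {n}^{1/3}$ :

```python
import numpy as np

def fit_sieve_histogram(x, y, n_bins):
    """Average the responses y within each of n_bins equal-width
    bins of [0, 1]; returns the bin edges and per-bin means."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # digitize maps each x to a bin index in 1..n_bins; clip guards the edges
    idx = np.clip(np.digitize(x, edges), 1, n_bins) - 1
    means = np.array([y[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(n_bins)])
    return edges, means

def predict(edges, means, x_new):
    """Evaluate the piecewise-constant estimate at new points."""
    n_bins = len(means)
    idx = np.clip(np.digitize(x_new, edges), 1, n_bins) - 1
    return means[idx]

rng = np.random.default_rng(0)
n = 512
x = rng.uniform(size=n)
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=n)

# The sieve rule fixes the partition size from n alone,
# e.g. m_n ~ n^(1/3), before looking at the sample values.
m_n = int(np.ceil(n ** (1 / 3)))
edges, means = fit_sieve_histogram(x, y, m_n)
print(m_n, predict(edges, means, np.array([0.3]))[0])
```

Note that `m_n` is computed before `x` and `y` are ever inspected; two very different datasets of the same size would be averaged over exactly the same partition.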
In general, learning basically comprises two things:

1. averaging data to reduce variability, and
2. deciding where (i.e., over which regions of the input space) and how the averaging should be carried out.
Sieves basically force us to deal with (2) a priori (before we analyze the training data). This will lead to suboptimal classifiers and estimators, in general. Indeed, deciding where/how to average is the really interesting and fundamental aspect of learning; once this is decided, we have effectively solved the learning problem. There are at least two possibilities for breaking the rigidity of the method of sieves, as we shall see in the following section.
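One way to let the data influence the choice of model space is to compare several candidate partition sizes and pick the one minimizing the empirical risk plus a complexity penalty. The sketch below (illustrative Python; the penalty shape $c\phantom{\rule{2pt}{0ex}}m\phantom{\rule{2pt}{0ex}}log(n)/n$ and the constant `c` are assumptions for demonstration, not a rule from these notes) shows such a penalized selection for histogram regression:

```python
import numpy as np

def fit_histogram(x, y, m):
    """Piecewise-constant fit on [0, 1] with m equal-width bins;
    returns the fitted values at the training points."""
    edges = np.linspace(0.0, 1.0, m + 1)
    idx = np.clip(np.digitize(x, edges), 1, m) - 1
    means = np.array([y[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(m)])
    return means[idx]

def adaptive_bin_count(x, y, candidates, c=0.5):
    """Pick the partition size minimizing empirical risk plus a
    complexity penalty (an illustrative penalized-ERM rule)."""
    n = len(y)
    scores = {}
    for m in candidates:
        fitted = fit_histogram(x, y, m)
        emp_risk = np.mean((y - fitted) ** 2)
        scores[m] = emp_risk + c * m * np.log(n) / n
    return min(scores, key=scores.get)

rng = np.random.default_rng(1)
n = 1000
x = rng.uniform(size=n)
# Piecewise-constant target with jumps at multiples of 0.25
y = np.sign(np.sin(4 * np.pi * x)) + 0.2 * rng.normal(size=n)

m_star = adaptive_bin_count(x, y, candidates=[1, 2, 4, 8, 16, 32, 64])
print(m_star)  # chosen by the data, not by n alone
```

Unlike the sieve rule, the selected size `m_star` depends on the sample values: a smooth target would favor a coarse partition, while a rapidly varying one would favor a fine partition, at the same sample size. Upper bounds on the estimation error, of the PAC type discussed above, are what justify penalties of this kind.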