Revisit the polynomial regression example (Lecture 2, Ex. 4), and incorporate a penalty term $C\left(f\right)$ which is proportional to the degree of $f$ , or to the derivative of $f$ . In essence, this approach penalizes functions which are too “wiggly”, the intuition being that the true function is probably smooth, so a function which is very wiggly will overfit the data.
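As a concrete sketch of this penalized fit, the snippet below selects a polynomial degree by minimizing empirical risk plus a penalty proportional to the degree. The penalty weight `lam`, the data-generating function, and the use of mean squared error are assumptions for illustration, not details from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = np.sort(rng.uniform(-1, 1, n))
# smooth "true" function plus noise (an assumed example)
y = np.sin(np.pi * x) + 0.2 * rng.standard_normal(n)

def empirical_risk(deg):
    """Mean squared error of the least-squares polynomial fit of this degree."""
    coeffs = np.polyfit(x, y, deg)
    return np.mean((y - np.polyval(coeffs, x)) ** 2)

# penalized criterion: empirical risk + C(f), with C(f) proportional to degree
lam = 0.01
penalized = {d: empirical_risk(d) + lam * d for d in range(11)}
best = min(penalized, key=penalized.get)
```

Without the penalty, degree 10 would (weakly) minimize the empirical risk, since higher-degree least-squares fits never increase the training error; the penalty trades that training error off against complexity.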
How do we decide how to restrict or penalize the empirical risk minimization process? Approaches which have appeared in the literature include the following.
Perhaps the simplest approach is to try to limit the size of $\mathcal{F}$ in a way that depends on the number of training data $n$ . The more data we have, the more complex the space of models we can entertain. Let the class of candidate functions grow with $n$ . That is, take a sequence of classes ${\mathcal{F}}_{1},{\mathcal{F}}_{2},{\mathcal{F}}_{3},\ldots ,$
where $|{\mathcal{F}}_{i}|$ grows as $i\to \infty $ . In other words, consider a sequence of spaces with increasing complexity or degrees of freedom depending on the number of training data samples, $n$ .
Given samples ${\{{X}_{i},{Y}_{i}\}}_{i=1}^{n}$ i.i.d. distributed according to ${P}_{XY}$ , select $f\in {\mathcal{F}}_{n}$ to minimize the empirical risk

$${\hat{f}}_{n}=\arg \min_{f\in {\mathcal{F}}_{n}}{\hat{R}}_{n}\left(f\right), \qquad {\hat{R}}_{n}\left(f\right)=\frac{1}{n}\sum_{i=1}^{n}\ell \left(f\left({X}_{i}\right),{Y}_{i}\right),$$

where $\ell$ denotes the loss function.
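A minimal sketch of this growing-class idea: the degree schedule $d(n)=\lfloor \log_2 n \rfloor$ below is an assumed choice for illustration, not the lecture's; the point is only that the class ${\mathcal{F}}_{n}$, here polynomials of degree at most $d(n)$, expands as the sample size grows.

```python
import numpy as np

def sieve_fit(x, y):
    """ERM over F_n = polynomials of degree <= floor(log2(n)) (assumed schedule)."""
    n = len(x)
    deg_n = int(np.log2(n))            # complexity of F_n grows slowly with n
    coeffs = np.polyfit(x, y, deg_n)   # least squares = empirical risk minimizer
    return deg_n, coeffs

# with n = 64 samples, F_n allows polynomials up to degree 6
x = np.linspace(-1.0, 1.0, 64)
y = x ** 2
deg, coeffs = sieve_fit(x, y)
```

Doubling the amount of data raises the allowed degree by one under this schedule, so the model space never outpaces the evidence available to fit it.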
In the next lecture we will consider an example using the method of sieves. The basic idea is to design the sequence of model spaces in such a way that the excess risk decays to zero as $n\to \infty $ . This sort of idea has been around for decades, but Grenander's method of sieves is often cited as a nice formalization of the idea: Abstract Inference , Wiley, New York.
In certain cases, the empirical risk happens to be a (log) likelihood function, and one can then interpret the cost $C\left(f\right)$ as reflecting prior knowledge about which models are more or less likely. In this case, ${e}^{-C\left(f\right)}$ is like a prior probability distribution on the space $\mathcal{F}$ . The cost $C\left(f\right)$ is large if $f$ is highly improbable, and $C\left(f\right)$ is small if $f$ is highly probable.
Alternatively, if we restrict $\mathcal{F}$ to be small, and denote the space of all measurable functions as $\mathbb{F}=\mathcal{F}\cup {\mathcal{F}}^{c}$ , then it is essentially as if we have placed a uniform prior over all functions in $\mathcal{F}$ , and zero prior probability on the functions in ${\mathcal{F}}^{c}$ .
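A tiny numeric sketch of the cost-to-prior correspondence above; the four model names and their costs are made up for the example:

```python
import math

# assumed costs C(f) for a toy finite class F; larger cost = more complex model
costs = {"constant": 0.0, "linear": 1.0, "cubic": 3.0, "degree-10": 10.0}

# p(f) proportional to exp(-C(f)): normalize so the prior sums to one
z = sum(math.exp(-c) for c in costs.values())
prior = {f: math.exp(-c) / z for f, c in costs.items()}
```

High-cost models receive exponentially small prior mass, which is exactly the "improbable" reading of a large $C\left(f\right)$ described above.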
Description length methods represent each $f$ with a string of bits. More complicated functions require more bits to represent. Accordingly, we can then set the cost $C\left(f\right)$ proportional to the number of bits needed to describe $f$ (the description length). This results in what is known as the minimum description length (MDL) approach, where the minimum description length is given by

$$\min_{f\in \mathcal{F}}\left\{{\hat{R}}_{n}\left(f\right)+C\left(f\right)\right\}.$$
In the Bayesian setting, $p\left(f\right)\propto {e}^{-C\left(f\right)}$ can be interpreted as a prior probability density on $\mathcal{F}$ , with more complex models being less probable and simpler models being more probable. In that sense, both the Bayesian and MDL approaches have a similar spirit.
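A toy two-part MDL selection might look as follows. The encoding choices here are assumptions, not the lecture's: 32 bits per polynomial coefficient for the model part, and a Gaussian relative codelength for the residuals with additive constants dropped.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.sort(rng.uniform(-1, 1, n))
y = np.sin(np.pi * x) + 0.3 * rng.standard_normal(n)  # assumed example data

def description_length(deg):
    """Two-part codelength: bits for the model plus (relative) bits for the data."""
    coeffs = np.polyfit(x, y, deg)
    sse = np.sum((y - np.polyval(coeffs, x)) ** 2)
    model_bits = 32 * (deg + 1)              # assumed: 32 bits per coefficient
    data_bits = 0.5 * n * np.log2(sse / n)   # Gaussian codelength, constants dropped
    return model_bits + data_bits

best = min(range(11), key=description_length)
```

The data term shrinks as the fit improves while the model term grows with the degree, so the minimizer lands strictly between the trivial constant fit and the most complex model in the class.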
The Vapnik-Cervonenkis (VC) dimension measures the complexity of a class $\mathcal{F}$ relative to a random sample of $n$ training data. For example, take $\mathcal{F}$ to be all linear classifiers in 2-dimensional feature space. Clearly, the space of linear classifiers is infinite (there are an infinite number of lines which can be drawn in the plane). However, many of these linear classifiers would assign the same labels to the training data.
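One way to see this finiteness empirically (a sketch, not the formal VC argument): sample many random lines and count the distinct labelings they induce on four fixed points. The corners of the unit square are an assumed choice; for them, the two "XOR" labelings are not linearly separable, so fewer than $2^4 = 16$ labelings can occur.

```python
import numpy as np

rng = np.random.default_rng(0)
points = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])

labelings = set()
for _ in range(20000):
    w = rng.standard_normal(2)   # random normal vector of a line
    b = rng.uniform(-2, 2)       # random offset
    labels = tuple(((points @ w + b) > 0).astype(int).tolist())
    labelings.add(labels)

count = len(labelings)           # number of distinct labelings (provably < 16)
```

Infinitely many lines, but only finitely many behaviors on a fixed sample: that gap between the size of $\mathcal{F}$ and the number of induced labelings is what the VC dimension quantifies.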