
Regularization and model selection

Suppose we are trying to select among several different models for a learning problem. For instance, we might be using a polynomial regression model $h_\theta(x) = g(\theta_0 + \theta_1 x + \theta_2 x^2 + \cdots + \theta_k x^k)$, and wish to decide if $k$ should be 0, 1, ..., or 10. How can we automatically select a model that represents a good tradeoff between the twin evils of bias and variance? (Given that we said in the previous set of notes that bias and variance are two very different beasts, some readers may be wondering if we should be calling them "twin" evils here. Perhaps it'd be better to think of them as non-identical twins. The phrase "the fraternal twin evils of bias and variance" doesn't have the same ring to it, though.) Alternatively, suppose we want to automatically choose the bandwidth parameter $\tau$ for locally weighted regression, or the parameter $C$ for our $\ell_1$-regularized SVM. How can we do that?

For the sake of concreteness, in these notes we assume we have some finite set of models $\mathcal{M} = \{M_1, \ldots, M_d\}$ that we're trying to select among. For instance, in our first example above, the model $M_i$ would be an $i$-th order polynomial regression model. (The generalization to infinite $\mathcal{M}$ is not hard. If we are trying to choose from an infinite set of models, say corresponding to the possible values of the bandwidth $\tau \in \mathbb{R}^+$, we may discretize $\tau$ and consider only a finite number of possible values for it. More generally, most of the algorithms described here can be viewed as performing optimization search in the space of models, and we can perform this search over infinite model classes as well.) Alternatively, if we are trying to decide between using an SVM, a neural network, or logistic regression, then $\mathcal{M}$ may contain these models.
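As a small illustration of the discretization idea mentioned above, here is a sketch (not from the notes; the grid endpoints and number of candidates are arbitrary assumptions) of turning the infinite class of bandwidths $\tau \in \mathbb{R}^+$ into a finite model set:

```python
# Minimal sketch: discretize the bandwidth tau to obtain a finite model class.
# The grid below (20 log-spaced values in [0.01, 100]) is an arbitrary choice.
import numpy as np

taus = np.logspace(-2, 2, num=20)                       # candidate bandwidths
models = [("locally_weighted_regression", tau) for tau in taus]
print(len(models), "candidate models")                  # finite M = {M_1, ..., M_20}
```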

Cross validation

Let's suppose we are, as usual, given a training set $S$. Given what we know about empirical risk minimization, here's what might initially seem like an algorithm, resulting from using empirical risk minimization for model selection:

  1. Train each model $M_i$ on $S$, to get some hypothesis $h_i$.
  2. Pick the hypothesis with the smallest training error.

This algorithm does not work. Consider choosing the order of a polynomial. The higher the order of the polynomial, the better it will fit the training set $S$, and thus the lower the training error. Hence, this method will always select a high-variance, high-degree polynomial model, which we saw previously is often a poor choice.
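To see this concretely, here is a small sketch (not from the notes; the synthetic data, noise level, and degrees below are assumptions made for illustration) showing that the training error only goes down as the polynomial degree grows, so selecting by training error always picks the largest degree:

```python
# Minimal sketch: training error decreases monotonically (in practice) with degree,
# so "pick the smallest training error" always selects the highest-degree model.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.shape)  # noisy targets

for k in [0, 1, 3, 5, 10]:
    coeffs = np.polyfit(x, y, deg=k)                             # fit a degree-k polynomial
    train_err = np.mean((np.polyval(coeffs, x) - y) ** 2)        # training MSE
    print(f"degree {k:2d}: training error {train_err:.4f}")
```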

Here's an algorithm that works better. In hold-out cross validation (also called simple cross validation), we do the following:

  1. Randomly split $S$ into $S_{\text{train}}$ (say, 70% of the data) and $S_{\text{cv}}$ (the remaining 30%). Here, $S_{\text{cv}}$ is called the hold-out cross validation set.
  2. Train each model $M_i$ on $S_{\text{train}}$ only, to get some hypothesis $h_i$.
  3. Select and output the hypothesis $h_i$ that had the smallest error $\hat{\varepsilon}_{S_{\text{cv}}}(h_i)$ on the hold-out cross validation set. (Recall, $\hat{\varepsilon}_{S_{\text{cv}}}(h)$ denotes the empirical error of $h$ on the set of examples in $S_{\text{cv}}$.)

By testing on a set of examples $S_{\text{cv}}$ that the models were not trained on, we obtain a better estimate of each hypothesis $h_i$'s true generalization error, and can then pick the one with the smallest estimated generalization error. Usually, somewhere between 1/4 and 1/3 of the data is used in the hold-out cross validation set, and 30% is a typical choice.
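Below is a minimal sketch of hold-out cross validation for the polynomial selection example, assuming the same synthetic setup as the earlier snippet (the 70/30 split follows the text; the function and variable names here are ours, not from the notes):

```python
# Minimal sketch of hold-out (simple) cross validation over polynomial degrees.
import numpy as np

def holdout_select(x, y, degrees, train_frac=0.7, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_train = int(train_frac * len(x))
    train, cv = idx[:n_train], idx[n_train:]                 # S_train and S_cv

    best_deg, best_err, best_model = None, np.inf, None
    for k in degrees:
        coeffs = np.polyfit(x[train], y[train], deg=k)       # train M_k on S_train only
        cv_err = np.mean((np.polyval(coeffs, x[cv]) - y[cv]) ** 2)  # error on S_cv
        if cv_err < best_err:                                # keep the smallest hold-out error
            best_deg, best_err, best_model = k, cv_err, coeffs
    return best_deg, best_model

# Example usage with synthetic data:
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + 0.3 * np.random.default_rng(1).standard_normal(50)
k_star, model = holdout_select(x, y, degrees=range(11))
print("selected degree:", k_star)
```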

