
Introduction

Overview of the learning problem

The fundamental problem in learning from data is proper model selection. As we have seen in the previous lectures, a model that is too complex could overfit the training data (causing an estimation error), and a model that is too simple could be a bad approximation of the function that we are trying to estimate (causing an approximation error). The estimation error arises because we do not know the true joint distribution of the data over the input and output spaces, and therefore we minimize the empirical risk (which, for each candidate model, is a random quantity depending on the data) and estimate the average risk from the limited number of training samples we have. The approximation error measures how well the functions in the chosen model space can approximate the underlying relationship between the input and output spaces, and in general it improves as the “size” of our model space increases.
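To see the two error terms side by side, one standard way to write this tradeoff is sketched below; the notation is ours and not from the original text, with $R(f)$ the true (average) risk, $f^*$ the best possible predictor, $\mathcal{F}$ the chosen model space, and $\hat{f}_n$ the model selected from the training data:

$$
\underbrace{R(\hat{f}_n) - R(f^*)}_{\text{excess risk}}
= \underbrace{\Big(R(\hat{f}_n) - \inf_{f \in \mathcal{F}} R(f)\Big)}_{\text{estimation error}}
+ \underbrace{\Big(\inf_{f \in \mathcal{F}} R(f) - R(f^*)\Big)}_{\text{approximation error}} .
$$

The first term tends to shrink as we gather more data or simplify $\mathcal{F}$, while the second term shrinks only as $\mathcal{F}$ grows.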

Lecture outline

In the preceding lectures, we looked at some solutions to the overfitting problem. The basic approach followed was the method of sieves, in which the complexity of the model space was chosen as a function of the number of training samples. In particular, both the denoising and classification problems we looked at considered estimators based on histogram partitions, where the size of the partition was an increasing function of the number of training samples. In this lecture, we will refine our learning methods further by introducing model selection procedures that automatically adapt to the distribution of the training data, rather than basing the model class solely on the number of samples. This sort of adaptivity will play a major role in the design of more effective classifiers and denoising methods. The key to designing data-adaptive model selection procedures is obtaining useful upper bounds on the estimation error. To this end, we will introduce the idea of “Probably Approximately Correct” (PAC) learning methods.

Recap: method of sieves

The method of sieves underpinned our approaches in the denoising problem and in the histogram classification problem. Recall that the basic idea is to define a sequence of model spaces $\mathcal{F}_1, \mathcal{F}_2, \ldots$ of increasing complexity, and then, given the training data $\{X_i, Y_i\}_{i=1}^n$, select a model according to

$$ \hat{f}_n = \arg\min_{f \in \mathcal{F}_n} \hat{R}_n(f) . $$

The choice of the model space $\mathcal{F}_n$ (and hence the model complexity and structure) is determined completely by the sample size $n$, and does not depend on the (empirical) distribution of the training data. This is a major limitation of the sieve method. In a nutshell, the method of sieves tells us to average the data in a certain way (e.g., over a partition of $\mathcal{X}$) based on the sample size, independent of the sample values themselves.
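As a concrete illustration, here is a minimal Python sketch (not from the original lecture) of a sieve-type histogram estimator for regression on $[0,1]$. The function name, the squared-error loss, and the $n^{1/3}$ growth rate for the partition size are illustrative assumptions; the point is only that the partition depends on $n$ alone, not on the observed sample values.

```python
import numpy as np

def sieve_histogram_regression(x, y, n_bins=None):
    """Sieve-style histogram regression on [0, 1] (illustrative sketch).

    The partition size is a function of the sample size n only (here
    roughly n^(1/3), an arbitrary illustrative rate), so the model space
    F_n is fixed before looking at the sample values.
    """
    n = len(x)
    if n_bins is None:
        n_bins = max(1, int(round(n ** (1.0 / 3.0))))  # complexity grows with n only
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Empirical risk minimization over piecewise-constant functions on this
    # fixed partition (squared-error loss) reduces to averaging y in each bin.
    bin_idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    bin_means = np.array([
        y[bin_idx == j].mean() if np.any(bin_idx == j) else 0.0
        for j in range(n_bins)
    ])

    def f_hat(x_new):
        idx = np.clip(np.digitize(x_new, edges) - 1, 0, n_bins - 1)
        return bin_means[idx]

    return f_hat

# Usage: noisy samples of a smooth function on [0, 1].
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=200)
f_hat = sieve_histogram_regression(x, y)
print(f_hat(np.array([0.1, 0.5, 0.9])))
```

Within each $\mathcal{F}_n$ (piecewise-constant functions on the fixed partition), minimizing the empirical risk amounts to averaging the responses in each bin; the partition itself never adapts to the data, which is exactly the rigidity discussed next.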

In general, learning basically comprises two things:

  1. Averaging data to reduce variability
  2. Deciding where (or how) to average

Sieves basically force us to deal with (2) a priori (before we analyze the training data). This will lead to suboptimal classifiers and estimators, in general. Indeed, deciding where/how to average is the really interesting and fundamental aspect of learning; once this is decided, we have effectively solved the learning problem. There are at least two possibilities for breaking the rigidity of the method of sieves, as we shall see in the following section.

Source:  OpenStax, Statistical learning theory. OpenStax CNX. Apr 10, 2009 Download for free at http://cnx.org/content/col10532/1.3