To formulate the basic learning from data problem, we must specify several basic elements: data spaces, probability measures, loss functions, and statistical risk.
Learning from data begins with a specification of two spaces: the input space $\mathcal{X}$ and the output space $\mathcal{Y}$ .
The input space is also sometimes called the “feature space” or “signal domain.” The output space is also called the “class label space,” “outcome space,” “response space,” or “signal range.”
A classic example is estimating a signal $f$ in noise:
$$Y = f(X) + W,$$
where $X$ is a random sample point on the real line and $W$ is a noise independent of $X$ .
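The signal-plus-noise model can be sketched in a short simulation. The particular signal $f(x)=\sin(2\pi x)$ , the uniform distribution of $X$ , and the Gaussian noise level are illustrative assumptions, not choices made in the text.

```python
import math
import random

def f(x):
    # An illustrative signal; the text leaves f unspecified.
    return math.sin(2 * math.pi * x)

def sample(n, noise_std=0.1, seed=0):
    """Draw n pairs (X, Y) from the model Y = f(X) + W."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.random()               # X sampled uniformly on [0, 1)
        w = rng.gauss(0.0, noise_std)  # noise W, independent of X
        data.append((x, f(x) + w))
    return data

pairs = sample(5)
```

Each observed pair $(X_i, Y_i)$ is the signal value corrupted by an independent noise draw.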
Define a joint probability distribution on $\mathcal{X}\times \mathcal{Y}$ denoted ${P}_{X,Y}$ . Let $(X,Y)$ denote a pair of random variables distributed according to ${P}_{X,Y}$ . We will also have use for marginal and conditional distributions. Let ${P}_{X}$ denote the marginal distribution on $X$ , and let ${P}_{Y|X}$ denote the conditional distribution of $Y$ given $X$ . For any distribution $P$ , let $p$ denote its density function with respect to the corresponding dominating measure; e.g., Lebesgue measure for continuous random variables or counting measure for discrete random variables.
Define the expectation operator:
$${E}_{X,Y}\left[g(X,Y)\right] \equiv \int g(x,y)\, d{P}_{X,Y}(x,y),$$
for any measurable function $g$ .
We will also make use of corresponding marginal and conditional expectations such as ${E}_{X}$ and ${E}_{Y|X}$ .
Wherever convenient and obvious based on context, we may drop the subscripts (e.g., $E$ instead of ${E}_{X,Y}$ ) for notational ease.
A loss function is a mapping
$$\ell : \mathcal{Y} \times \mathcal{Y} \mapsto \mathbb{R}.$$
In binary classification problems, $\mathcal{Y}=\{0,1\}$ . The $0/1$ loss function is usually used: $\ell ({y}_{1},{y}_{2})={1}_{{y}_{1}\ne {y}_{2}},$ where ${1}_{A}$ is the indicator function which takes a value of 1 if condition $A$ is true and zero otherwise. We typically will compare a true label $y$ with a prediction $\widehat{y}$ , in which case the $0/1$ loss simply counts misclassifications.
In regression or estimation problems, $\mathcal{Y}=\mathbb{R}$ . The squared error loss function is often employed: $\ell ({y}_{1},{y}_{2})={({y}_{1}-{y}_{2})}^{2},$ the square of the difference between ${y}_{1}$ and ${y}_{2}$ . In application, we are interested in a true value $y$ in comparison to an estimate $\widehat{y}$ .
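The two loss functions above translate directly into code:

```python
def zero_one_loss(y1, y2):
    """0/1 loss: 1 if the labels disagree, 0 if they agree."""
    return 1 if y1 != y2 else 0

def squared_error_loss(y1, y2):
    """Squared error loss: the square of the difference."""
    return (y1 - y2) ** 2
```

Comparing a true label $y$ with a prediction $\widehat{y}$ , `zero_one_loss` counts a misclassification, while `squared_error_loss` penalizes large estimation errors quadratically.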
The basic problem in learning is to determine a mapping $f:\mathcal{X}\mapsto \mathcal{Y}$ that takes an input $x\in \mathcal{X}$ and predicts the corresponding output $y\in \mathcal{Y}$ . The performance of a given map $f$ is measured by its expected loss or risk :
$$R(f) \equiv {E}_{X,Y}\left[\ell (f(X),Y)\right].$$
The risk tells us how well, on average, the predictor $f$ performs with respect to the chosen loss function. A key quantity of interest is the minimum risk value, defined as
$${R}^{*} = \underset{f}{\inf}\, R(f),$$
where the infimum is taken over all measurable functions.
Suppose that $(X,Y)$ are distributed according to ${P}_{X,Y}$ ( $(X,Y)\sim {P}_{X,Y}$ for short). Our goal is to find a map so that $f\left(X\right)\approx Y$ with high probability. Ideally, we would choose $f$ to minimize the risk $R\left(f\right)=E\left[\ell \left(f\left(X\right),Y\right)\right]$ . However, in order to compute the risk (and hence optimize it) we need to know the joint distribution ${P}_{X,Y}$ . In many problems of practical interest, the joint distribution is unknown, and minimizing the risk is not possible.
Suppose that we have some exemplary samples from the distribution. Specifically, consider $n$ samples ${\{{X}_{i},{Y}_{i}\}}_{i=1}^{n}$ distributed independently and identically (iid) according to the otherwise unknown ${P}_{X,Y}$ . Let us call these samples training data , and denote the collection by ${D}_{n}\equiv {\{{X}_{i},{Y}_{i}\}}_{i=1}^{n}$ . Let's also define a collection of candidate mappings $\mathcal{F}$ . We will use the training data ${D}_{n}$ to pick a mapping ${f}_{n}\in \mathcal{F}$ that we hope will be a good predictor. This is sometimes called the Model Selection problem. Note that the selected model ${f}_{n}$ is a function of the training data:
$${f}_{n}(x) = f(x; {D}_{n}),$$
which is what the subscript $n$ in ${f}_{n}$ refers to. The risk of ${f}_{n}$ is given by
$$R({f}_{n}) = {E}_{X,Y}\left[\ell ({f}_{n}(X),Y)\right].$$
Note that since ${f}_{n}$ depends on ${D}_{n}$ in addition to a new random pair $(X,Y)$ , the risk is a random variable (i.e., a function of the training data ${D}_{n}$ ). Therefore, we are interested in the expected risk , computed over random realizations of the training data:
$${E}_{{D}_{n}}\left[R({f}_{n})\right].$$
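The expected risk can be estimated numerically by Monte Carlo: repeatedly draw a training set ${D}_{n}$ , select ${f}_{n}$ , estimate its risk on fresh samples, and average. The sketch below is not from the text; it assumes an illustrative distribution ( $Y = X + W$ with uniform $X$ and Gaussian $W$ ), squared error loss, and the simplest possible model class, constant predictors, for which the empirical risk minimizer is the sample mean.

```python
import random

def estimate_expected_risk(n=20, trials=200, test_size=200, seed=0):
    """Monte Carlo estimate of E_{D_n}[R(f_n)] under squared error loss."""
    rng = random.Random(seed)

    def draw_pair():
        # Illustrative joint distribution P_{X,Y}: Y = X + noise.
        x = rng.random()
        return x, x + rng.gauss(0.0, 0.1)

    total = 0.0
    for _ in range(trials):
        # Draw a training set D_n and select f_n: the sample mean,
        # which minimizes empirical squared error among constants.
        train = [draw_pair() for _ in range(n)]
        f_n = sum(y for _, y in train) / n
        # Estimate R(f_n) on fresh (X, Y) pairs.
        test = [draw_pair() for _ in range(test_size)]
        total += sum((f_n - y) ** 2 for _, y in test) / test_size
    return total / trials
```

Averaging over many independent training sets is exactly what distinguishes the expected risk of the selection *algorithm* from the risk of any single map $f$ .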
We hope that ${f}_{n}$ produces a small expected risk.
The notion of expected risk can be interpreted as follows. We would like to define an algorithm (a model selection process) that performs well on average, over any random sample of $n$ training data. The expected risk is a measure of the expected performance of the algorithm with respect to the chosen loss function. That is, we are not gauging the risk of a particular map $f\in \mathcal{F}$ , but rather we are measuring the performance of the algorithm that takes any realization of training data and selects an appropriate model in $\mathcal{F}$ .
This course is concerned with determining “good” model spaces $\mathcal{F}$ and useful and effective model selection algorithms.