The third of these assumptions might seem the least well justified of the above, and it might be better thought of as a “design choice” in our recipe for designing GLMs, rather than as an assumption per se. These three assumptions/design choices will allow us to derive a very elegant class of learning algorithms, namely GLMs, that have many desirable properties such as ease of learning. Furthermore, the resulting models are often very effective for modelling different types of distributions over $y$ ; for example, we will shortly show that both logistic regression and ordinary least squares can be derived as GLMs.
To show that ordinary least squares is a special case of the GLM family of models, consider the setting where the target variable $y$ (also called the response variable in GLM terminology) is continuous, and we model the conditional distribution of $y$ given $x$ as a Gaussian $\mathcal{N}(\mu ,{\sigma}^{2})$ . (Here, $\mu $ may depend on $x$ .) So, we let the $\mathrm{ExponentialFamily}\left(\eta \right)$ distribution above be the Gaussian distribution. As we saw previously, in the formulation of the Gaussian as an exponential family distribution, we had $\mu =\eta $ . So, we have

$${h}_{\theta}(x) = \mathrm{E}[y|x;\theta] = \mu = \eta = {\theta}^{T}x.$$
The first equality follows from Assumption 2, above; the second equality follows from the fact that $y|x;\theta \sim \mathcal{N}(\mu ,{\sigma}^{2})$ , and so its expected value is given by $\mu $ ; the third equality follows from Assumption 1 (and our earlier derivation showing that $\mu =\eta $ in the formulation of the Gaussian as an exponential family distribution); and the last equality follows from Assumption 3.
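The chain of equalities above can be sketched numerically. In the short example below, the particular values of $\theta$ and $x$ are illustrative assumptions, not taken from the text; the point is only that for the Gaussian GLM the canonical response is the identity, so the hypothesis collapses to the familiar linear model ${h}_{\theta}(x) = {\theta}^{T}x$ .

```python
# Illustrative sketch: Gaussian GLM hypothesis.
# theta and x are made-up example values, not from the derivation itself.

def natural_parameter(theta, x):
    # Assumption 3: the natural parameter is linear in the inputs, eta = theta^T x
    return sum(t * xi for t, xi in zip(theta, x))

def gaussian_response(eta):
    # For the Gaussian family, mu = eta (the identity response function)
    return eta

theta = [0.5, -1.0, 2.0]
x = [1.0, 2.0, 0.5]

eta = natural_parameter(theta, x)
h = gaussian_response(eta)  # h_theta(x) = E[y|x; theta] = mu = eta = theta^T x
print(h)  # 0.5*1.0 - 1.0*2.0 + 2.0*0.5 = -0.5
```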
We now consider logistic regression. Here we are interested in binary classification, so $y\in \{0,1\}$ . Given that $y$ is binary-valued, it therefore seems natural to choose the Bernoulli family of distributions to model the conditional distribution of $y$ given $x$ . In our formulation of the Bernoulli distribution as an exponential family distribution, we had $\Phi =1/(1+{e}^{-\eta})$ . Furthermore, note that if $y|x;\theta \sim \mathrm{Bernoulli}(\Phi )$ , then $\mathrm{E}[y|x;\theta ]=\Phi $ . So, following a similar derivation as the one for ordinary least squares, we get:

$${h}_{\theta}(x) = \mathrm{E}[y|x;\theta] = \Phi = 1/(1+{e}^{-\eta}) = 1/(1+{e}^{-{\theta}^{T}x}).$$
So, this gives us hypothesis functions of the form ${h}_{\theta}\left(x\right)=1/(1+{e}^{-{\theta}^{T}x})$ . If you were previously wondering how we came up with the form of the logistic function $1/(1+{e}^{-z})$ , this gives one answer: Once we assume that $y$ conditioned on $x$ is Bernoulli, it arises as a consequence of the definition of GLMs and exponential family distributions.
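As a small sketch of this hypothesis function, the code below composes the two pieces of the derivation: the linear natural parameter $\eta ={\theta}^{T}x$ and the Bernoulli canonical response $1/(1+{e}^{-\eta})$ . The values of $\theta$ and $x$ are illustrative assumptions, not from the text.

```python
import math

# Illustrative sketch of the logistic-regression hypothesis derived above:
# h_theta(x) = 1 / (1 + exp(-theta^T x)).

def h_theta(theta, x):
    z = sum(t * xi for t, xi in zip(theta, x))  # eta = theta^T x (Assumption 3)
    return 1.0 / (1.0 + math.exp(-z))           # Bernoulli canonical response

# Made-up example values: here theta^T x = 1.0*3.0 + (-2.0)*1.0 = 1.0
theta = [1.0, -2.0]
x = [3.0, 1.0]
p = h_theta(theta, x)  # estimated probability that y = 1
print(p)
```

Because the logistic function maps any real $z$ into $(0,1)$ , the output can always be read as a probability, which is exactly what a model of a Bernoulli mean requires.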
To introduce a little more terminology, the function $g$ giving the distribution's mean as a function of the natural parameter ( $g\left(\eta \right)=\mathrm{E}[T(y);\eta ]$ ) is called the canonical response function . Its inverse, ${g}^{-1}$ , is called the canonical link function . Thus, the canonical response function for the Gaussian family is just the identity function; and the canonical response function for the Bernoulli is the logistic function. Many texts use $g$ to denote the link function, and ${g}^{-1}$ to denote the response function; but the notation we're using here, inherited from the early machine learning literature, will be more consistent with the notation used in the rest of the class.
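The response/link pairs named above can be checked numerically: for each family, applying the link function after the response function should recover the original natural parameter. The sketch below does this for the Gaussian (identity/identity) and Bernoulli (logistic/logit) families; the test value of $\eta$ is an arbitrary assumption.

```python
import math

# Canonical response functions g(eta) and their inverse link functions g^{-1}(mu),
# for the two families discussed in the text.

def gaussian_response(eta):
    return eta                             # Gaussian: mu = eta (identity)

def gaussian_link(mu):
    return mu                              # inverse of the identity is the identity

def bernoulli_response(eta):
    return 1.0 / (1.0 + math.exp(-eta))    # Bernoulli: the logistic function

def bernoulli_link(phi):
    return math.log(phi / (1.0 - phi))     # the logit, inverse of the logistic

# Round-trip check at an arbitrary eta: g^{-1}(g(eta)) should give back eta.
eta = 0.7
gauss_roundtrip = gaussian_link(gaussian_response(eta))
bern_roundtrip = bernoulli_link(bernoulli_response(eta))
print(gauss_roundtrip, bern_roundtrip)  # both should be (approximately) 0.7
```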