
The third of these assumptions might seem the least well justified of the above, and it might be better thought of as a "design choice" in our recipe for designing GLMs, rather than as an assumption per se. These three assumptions/design choices will allow us to derive a very elegant class of learning algorithms, namely GLMs, that have many desirable properties such as ease of learning. Furthermore, the resulting models are often very effective for modelling different types of distributions over $y$; for example, we will shortly show that both logistic regression and ordinary least squares can be derived as GLMs.

Ordinary least squares

To show that ordinary least squares is a special case of the GLM family of models, consider the setting where the target variable $y$ (also called the response variable in GLM terminology) is continuous, and we model the conditional distribution of $y$ given $x$ as a Gaussian $\mathcal{N}(\mu, \sigma^2)$. (Here, $\mu$ may depend on $x$.) So, we let the $\text{ExponentialFamily}(\eta)$ distribution above be the Gaussian distribution. As we saw previously, in the formulation of the Gaussian as an exponential family distribution, we had $\mu = \eta$. So, we have

$$h_\theta(x) = E[y \mid x; \theta] = \mu = \eta = \theta^T x.$$

The first equality follows from Assumption 2, above; the second equality follows from the fact that $y \mid x; \theta \sim \mathcal{N}(\mu, \sigma^2)$, and so its expected value is given by $\mu$; the third equality follows from Assumption 1 (and our earlier derivation showing that $\mu = \eta$ in the formulation of the Gaussian as an exponential family distribution); and the last equality follows from Assumption 3.
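As a small aside (not part of the original derivation), the resulting hypothesis is easy to write down concretely. The sketch below, in Python with NumPy and with hypothetical values for theta and x, simply plugs the identity response $\mu = \eta$ into $\eta = \theta^T x$:

```python
import numpy as np

# Minimal sketch (ours, not from the notes): the GLM hypothesis for a Gaussian
# response.  Since the canonical response for the Gaussian is the identity
# (mu = eta), and eta = theta^T x by Assumption 3, the prediction is linear.
def h_gaussian(theta, x):
    """GLM prediction when y | x; theta ~ N(mu, sigma^2): h(x) = theta^T x."""
    eta = theta @ x  # natural parameter (Assumption 3)
    mu = eta         # canonical response for the Gaussian: identity
    return mu

# Illustrative values only (hypothetical theta and x).
theta = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 3.0, 0.5])
print(h_gaussian(theta, x))  # 0.5*1 - 1*3 + 2*0.5 = -1.5
```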

Logistic regression

We now consider logistic regression. Here we are interested in binary classification, so $y \in \{0, 1\}$. Given that $y$ is binary-valued, it therefore seems natural to choose the Bernoulli family of distributions to model the conditional distribution of $y$ given $x$. In our formulation of the Bernoulli distribution as an exponential family distribution, we had $\phi = 1/(1 + e^{-\eta})$. Furthermore, note that if $y \mid x; \theta \sim \text{Bernoulli}(\phi)$, then $E[y \mid x; \theta] = \phi$. So, following a similar derivation as the one for ordinary least squares, we get:

$$h_\theta(x) = E[y \mid x; \theta] = \phi = 1/(1 + e^{-\eta}) = 1/(1 + e^{-\theta^T x}).$$

So, this gives us hypothesis functions of the form $h_\theta(x) = 1/(1 + e^{-\theta^T x})$. If you were previously wondering how we came up with the form of the logistic function $1/(1 + e^{-z})$, this gives one answer: once we assume that $y$ conditioned on $x$ is Bernoulli, it arises as a consequence of the definition of GLMs and exponential family distributions.
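The corresponding hypothesis can be sketched the same way. Again this is an illustrative snippet with hypothetical theta and x, not something from the original text; it composes the logistic (sigmoid) response with the linear natural parameter:

```python
import numpy as np

# Minimal sketch (ours, not from the notes): the GLM hypothesis for a Bernoulli
# response.  The canonical response is the logistic function, so composing it
# with eta = theta^T x gives the logistic regression hypothesis.
def h_bernoulli(theta, x):
    """GLM prediction when y | x; theta ~ Bernoulli(phi): h(x) = 1 / (1 + exp(-theta^T x))."""
    eta = theta @ x                   # natural parameter (Assumption 3)
    phi = 1.0 / (1.0 + np.exp(-eta))  # canonical response for the Bernoulli: logistic
    return phi                        # interpreted as P(y = 1 | x; theta)

# Illustrative values only (hypothetical theta and x).
theta = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 3.0, 0.5])
print(h_bernoulli(theta, x))  # sigmoid(-1.5) ~ 0.18
```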

To introduce a little more terminology, the function $g$ giving the distribution's mean as a function of the natural parameter ($g(\eta) = E[T(y); \eta]$) is called the canonical response function. Its inverse, $g^{-1}$, is called the canonical link function. Thus, the canonical response function for the Gaussian family is just the identity function, and the canonical response function for the Bernoulli is the logistic function. Many texts use $g$ to denote the link function and $g^{-1}$ to denote the response function, but the notation we're using here, inherited from the early machine learning literature, will be more consistent with the notation used in the rest of the class.
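To make the terminology concrete, here is a small illustrative listing of the two canonical response functions mentioned above together with their inverses (the canonical link functions), written as plain Python functions; the dictionary layout is our own convenience, not notation from the text:

```python
import numpy as np

# Canonical response functions g (natural parameter eta -> mean) and canonical
# link functions g^{-1} (mean -> eta) for the two families discussed above.
canonical_response = {
    "gaussian":  lambda eta: eta,                         # identity
    "bernoulli": lambda eta: 1.0 / (1.0 + np.exp(-eta)),  # logistic
}
canonical_link = {
    "gaussian":  lambda mu: mu,                           # identity
    "bernoulli": lambda mu: np.log(mu / (1.0 - mu)),      # logit
}

# Round-tripping eta -> mean -> eta recovers the natural parameter.
eta = 0.7
mu = canonical_response["bernoulli"](eta)
print(canonical_link["bernoulli"](mu))  # ~0.7
```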





Source:  OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013 Download for free at http://cnx.org/content/col11500/1.4
