This is an introductory module on using Logistic Regression to solve large-scale classification tasks. In the first section, we will review the statistical background behind generalized linear modeling for regression analysis, and then proceed to describe logistic regression, which has become something of a workhorse in industry and academia. This module assumes basic exposure to vector/matrix notation, enough to understand the equations presented.
Regression Analysis is in essence the minimization of a cost function J that models the squared difference between the exact values y of a dataset and one's estimate h of that dataset. Often, it is referred to as fitting a curve (the estimate) to a set of points, based on some quantified measure of how well the curve fits the data. Formally, the most general form of the equation modeling this process is:

$$\min_{\theta} J\left(\theta\right)$$

where $J\left(\theta\right)$ quantifies the discrepancy between the estimate and the data.
This minimization function models all regression analysis, but for the sake of understanding, this general form is not the most useful. How exactly do we model the estimate? How exactly do we minimize? To answer these questions and to be more specific, we shall begin by considering the simplest regression form, linear regression.
In linear regression, we model the cost function's equation as:

$$J\left(\theta\right) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_{\theta}\left(x^{(i)}\right) - y^{(i)}\right)^{2}$$

where $m$ is the number of points in the dataset and the hypothesis is linear in the parameters: $h_{\theta}\left(x\right) = \theta^{T}x$.
What does this mean? Essentially, ${h}_{\theta}\left(x\right)$ is a vector that models one's hypothesis, an estimate parameterized by $\theta$, of every point of the dataset, and $y$ is the exact value of every point in the dataset. Taking the squared difference between these two at every point creates a new vector that quantifies the error between one's guess and the actual value. We then seek to minimize the average value of this vector, because if this is minimized, then our estimate is as close as possible to the actual value for as many points as possible, given our choice of hypothesis.
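The squared-error cost and its minimization by gradient descent can be sketched as follows. This is a minimal illustration, not part of the original module; the toy dataset and step size are assumptions chosen for demonstration.

```python
import numpy as np

def cost(theta, X, y):
    """J(theta) = (1/2m) * sum of squared residuals between X @ theta and y."""
    m = len(y)
    residual = X @ theta - y
    return residual @ residual / (2 * m)

def gradient_descent(X, y, alpha=0.1, iters=1000):
    """Minimize J by batch gradient descent on the linear hypothesis theta^T x."""
    theta = np.zeros(X.shape[1])
    m = len(y)
    for _ in range(iters):
        # Gradient of J with respect to theta: (1/m) * X^T (X theta - y)
        theta -= alpha * (X.T @ (X @ theta - y)) / m
    return theta

# Toy dataset generated from y = 1 + 2x; a column of ones models the intercept.
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = 1 + 2 * x

theta = gradient_descent(X, y)  # converges near [1.0, 2.0]
```

Because the data here are exactly linear, the cost can be driven essentially to zero; on noisy data, the minimum is the least-squares fit.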
As the above discussion demonstrates, linear regression is simply about fitting a line to the data, whether that line is straight or contains an arbitrary number of polynomial features. But that has not yet brought us to our goal of classification, so we need more tools.
As was stated in the beginning, there are many ways to describe the cost function. In the above description, we used the simplest linear model for the hypothesis, but a range of functions can serve as the hypothesis, and they can be grouped into families. We can construct a Generalized Linear Model to describe these extensions systematically, by modeling the response with a distribution from the exponential family. For classification, this leads to the sigmoid function:

$$g\left(z\right) = \frac{1}{1 + e^{-z}}$$

which, applied to the linear combination $\theta^{T}x$, gives the logistic hypothesis $h_{\theta}\left(x\right) = g\left(\theta^{T}x\right)$.
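A minimal sketch of logistic regression built from the sigmoid follows. The toy dataset, step size, and iteration count are assumptions for illustration; gradient descent on the logistic (cross-entropy) cost has the same update form as in the linear case, with the sigmoid applied to the hypothesis.

```python
import numpy as np

def sigmoid(z):
    """g(z) = 1 / (1 + e^{-z}), mapping any real value into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, alpha=0.5, iters=5000):
    """Fit theta by gradient descent on the logistic regression cost."""
    theta = np.zeros(X.shape[1])
    m = len(y)
    for _ in range(iters):
        h = sigmoid(X @ theta)               # predicted probabilities
        theta -= alpha * (X.T @ (h - y)) / m  # gradient of the cross-entropy cost
    return theta

# Toy 1-D classification problem: the label is 1 when x > 2.
x = np.array([0.0, 1.0, 1.5, 2.5, 3.0, 4.0])
X = np.column_stack([np.ones_like(x), x])  # intercept column plus feature
y = (x > 2).astype(float)

theta = fit_logistic(X, y)
preds = (sigmoid(X @ theta) >= 0.5).astype(float)  # threshold at 0.5
```

Thresholding the sigmoid output at 0.5 turns the regression into a classifier, which is exactly the step that takes us from curve fitting to classification.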