This module gives an introduction to a machine learning algorithm: Logistic Regression. First, we describe the theoretical background of regression analysis using simple linear regression and the Generalized Linear Model. Then, we describe the Logistic Regression algorithm itself and its solution using gradient descent. Finally, we provide an intuitive demonstration of how it works in a classification application, with figures (including the MATLAB code used to generate them) and links for learning about more real-world applications.

Introduction

This is an introductory module on using Logistic Regression to solve large-scale classification tasks. In the first section, we review the statistical background behind generalized linear modeling for regression analysis, and then proceed to describe logistic regression, which has become something of a workhorse in industry and academia. This module assumes basic exposure to vector/matrix notation, enough to understand:

$$M = \begin{bmatrix} 2 & 2 \\ 1 & 0 \end{bmatrix}, \qquad x = \begin{bmatrix} 3 \\ -1 \end{bmatrix}, \qquad x_1 = \,?, \qquad Mx = \,?$$
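For readers who want to check themselves, the worked arithmetic (ours, not part of the original module) is:

$$x_1 = 3, \qquad Mx = \begin{bmatrix} 2 \cdot 3 + 2 \cdot (-1) \\ 1 \cdot 3 + 0 \cdot (-1) \end{bmatrix} = \begin{bmatrix} 4 \\ 3 \end{bmatrix}$$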

What is all this about?

Regression Analysis is, in essence, the minimization of a cost function $J$ that models the squared difference between the exact values $y$ of a dataset and one's estimate $h$ of that dataset. It is often referred to as fitting a curve (the estimate) to a set of points, based on some quantified measure of how well the curve fits the data. Formally, the most general form of the equation that models this process is:

$$\underset{\theta}{\text{minimize}} \quad J(\theta) \qquad \text{subject to} \quad h_\theta(x)$$

This minimization models all regression analysis, but on its own the general form is not the most useful for building understanding. How exactly do we model the estimate? How exactly do we minimize? To answer these questions and be more specific, we begin by considering the simplest form of regression: linear regression.

Linear regression

In linear regression, we model the cost function's equation as:

$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x_i) - y_i \right)^2$$

What does this mean? Essentially, $h_\theta(x)$ is a vector containing one's hypothesis, the initial guess, at every point of the dataset, and $y$ is the vector of exact values at every point. Taking the squared difference between the two at every point yields a new vector that quantifies the error between the guess and the actual value. We then seek to minimize the average value of this vector: once it is minimized, our estimate is as close to the actual values as possible, for as many points as possible, given our choice of hypothesis.
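To make this concrete, here is a minimal MATLAB sketch of minimizing this cost with batch gradient descent. The tiny dataset, the learning rate, the iteration count, and all variable names are our own illustrative assumptions, not taken from the original module:

% A minimal sketch of the linear-regression cost and batch gradient descent.
X = [ones(5,1), (1:5)'];        % design matrix: intercept column plus one feature
y = [2.1; 3.9; 6.2; 8.1; 9.8];  % made-up observed values (roughly y = 2x)
theta = zeros(2,1);             % initial guess for the parameters
alpha = 0.01;                   % learning rate, chosen by hand
m = length(y);                  % number of data points

for iter = 1:1000
    h = X * theta;                      % hypothesis h_theta(x) at every point
    J = (1/(2*m)) * sum((h - y).^2);    % the cost from the equation above
    grad = (1/m) * X' * (h - y);        % gradient of J with respect to theta
    theta = theta - alpha * grad;       % gradient-descent update step
end

fprintf('theta = [%.3f; %.3f], final cost J = %.4f\n', theta(1), theta(2), J);

Each iteration moves theta a small step in the direction that decreases the cost; the learning rate alpha trades off speed of convergence against stability.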

Linear regression, then, is simply about fitting a curve to data, whether that curve is a straight line or includes an arbitrary number of polynomial features (see the sketch below). But this has not quite gotten us to where we want to be, which is classification, so we need more tools.
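Polynomial features fit into the same machinery simply by adding columns to the design matrix; a hypothetical MATLAB sketch:

% Hypothetical sketch: polynomial features in the same linear framework.
% The hypothesis stays linear in theta; only the features change.
x = (1:5)';                          % a single raw input feature
X = [ones(size(x)), x, x.^2, x.^3];  % columns for 1, x, x^2, x^3
% X can now replace the design matrix in the gradient-descent example above.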

Generalized linear model

As was stated in the beginning, there are many ways to describe the cost function. In the above description, we used the simplest linear model for the hypothesis, but a whole range of functions can serve as the hypothesis, and they can be grouped into families. The Generalized Linear Model lets us construct these extensions systematically, describing the estimate and the actual points with distributions drawn from the exponential family. In our example, we shall use the sigmoid (logistic) function:

$$g(z) = \frac{1}{1 + e^{-z}}$$
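A minimal MATLAB sketch of this function, with plotting details of our own choosing rather than the module's original figure code:

% The sigmoid function g(z) = 1/(1 + e^(-z)), vectorized to work on arrays.
g = @(z) 1 ./ (1 + exp(-z));

z = linspace(-10, 10, 200);   % evaluation grid, chosen for illustration
plot(z, g(z));
xlabel('z'); ylabel('g(z)');
title('Sigmoid function');    % output saturates at 0 and 1

The key property for classification is that the output is squashed into the interval (0, 1), so it can be read as a probability of class membership.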





Source:  OpenStax, Introductory survey and applications of machine learning methods. OpenStax CNX. Dec 22, 2011 Download for free at http://legacy.cnx.org/content/col11400/1.1