
The perceptron and large margin classifiers

In this final set of notes on learning theory, we will introduce a different model of machine learning. Specifically, we have so far been considering batch learning settings, in which we are first given a training set to learn with, and our hypothesis h is then evaluated on separate test data. In this set of notes, we will consider the online learning setting, in which the algorithm has to make predictions continuously even while it is learning.

In this setting, the learning algorithm is given a sequence of examples $(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \ldots, (x^{(m)}, y^{(m)})$ in order. Specifically, the algorithm first sees $x^{(1)}$ and is asked to predict what it thinks $y^{(1)}$ is. After making its prediction, the true value of $y^{(1)}$ is revealed to the algorithm (and the algorithm may use this information to perform some learning). The algorithm is then shown $x^{(2)}$ and again asked to make a prediction, after which $y^{(2)}$ is revealed, and it may again perform some more learning. This proceeds until we reach $(x^{(m)}, y^{(m)})$. In the online learning setting, we are interested in the total number of errors made by the algorithm during this process. Thus, it models applications in which the algorithm has to make predictions even while it is still learning.
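To make the protocol concrete, here is a minimal sketch of the online loop just described. The `learner` object and its `predict`/`update` methods are hypothetical placeholders, not something defined in these notes; the only point is that each prediction is made before the true label is revealed, and every wrong prediction counts toward the online error.

```python
# Minimal sketch of the online learning protocol described above.
# `learner` is a hypothetical object exposing predict(x) and update(x, y).
def online_error_count(learner, examples):
    mistakes = 0
    for x, y in examples:            # examples arrive one at a time, in order
        y_hat = learner.predict(x)   # predict before the true label is revealed
        if y_hat != y:
            mistakes += 1            # count every error made while still learning
        learner.update(x, y)         # the true label y is now revealed
    return mistakes
```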

We will give a bound on the online learning error of the perceptron algorithm. To make our subsequent derivations easier, we will use the notational convention of denoting the class labels by $y \in \{-1, 1\}$.

Recall that the perceptron algorithm has parameters $\theta \in \mathbb{R}^{n+1}$, and makes its predictions according to

$$h_\theta(x) = g(\theta^T x)$$

where

$$g(z) = \begin{cases} 1 & \text{if } z \ge 0 \\ -1 & \text{if } z < 0. \end{cases}$$

Also, given a training example $(x, y)$, the perceptron learning rule updates the parameters as follows. If $h_\theta(x) = y$, then it makes no change to the parameters. Otherwise, it performs the update

$$\theta := \theta + y x.$$

(This looks slightly different from the update rule we had written down earlier in the quarter because here we have changed the labels to be $y \in \{-1, 1\}$. Also, the learning rate parameter $\alpha$ has been dropped; its only effect is to scale all the parameters $\theta$ by some fixed constant, which does not affect the behavior of the perceptron.)
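As an illustrative sketch (not part of the original notes), the class below implements exactly this prediction rule and mistake-driven update in NumPy, assuming each input $x$ already includes the intercept term so that $\theta$ lives in $\mathbb{R}^{n+1}$ as above. It exposes the predict/update interface used in the online loop sketched earlier.

```python
import numpy as np

class Perceptron:
    """Online perceptron with labels y in {-1, +1} and no learning rate."""

    def __init__(self, dim):
        self.theta = np.zeros(dim)          # theta starts at zero

    def predict(self, x):
        # g(z) = +1 if z >= 0, and -1 if z < 0
        return 1 if self.theta @ x >= 0 else -1

    def update(self, x, y):
        # Update only when the current prediction is wrong: theta := theta + y * x
        if self.predict(x) != y:
            self.theta = self.theta + y * x
```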

The following theorem gives a bound on the online learning error of the perceptron algorithm, when it is run as an online algorithm that performs an update each time it gets an example wrong. Note that the bound below on the number of errors does not have an explicit dependence on the number of examples $m$ in the sequence, or on the dimension $n$ of the inputs (!).

Theorem (Block, 1962, and Novikoff, 1962). Let a sequence of examples $(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \ldots, (x^{(m)}, y^{(m)})$ be given. Suppose that $\|x^{(i)}\| \le D$ for all $i$, and further that there exists a unit-length vector $u$ ($\|u\|_2 = 1$) such that $y^{(i)} \cdot (u^T x^{(i)}) \ge \gamma$ for all examples in the sequence (i.e., $u^T x^{(i)} \ge \gamma$ if $y^{(i)} = 1$, and $u^T x^{(i)} \le -\gamma$ if $y^{(i)} = -1$, so that $u$ separates the data with a margin of at least $\gamma$). Then the total number of mistakes that the perceptron algorithm makes on this sequence is at most $(D/\gamma)^2$.

Proof. The perceptron updates its weights only on those examples on which it makes a mistake. Let $\theta^{(k)}$ be the weights that were being used when it made its $k$-th mistake. So, $\theta^{(1)} = \vec{0}$ (since the weights are initialized to zero), and if the $k$-th mistake was on the example $(x^{(i)}, y^{(i)})$, then $g((x^{(i)})^T \theta^{(k)}) \ne y^{(i)}$, which implies that

$$(x^{(i)})^T \theta^{(k)} y^{(i)} \le 0. \qquad (1)$$

Also, from the perceptron learning rule, we would have that $\theta^{(k+1)} = \theta^{(k)} + y^{(i)} x^{(i)}$.

We then have

$$(\theta^{(k+1)})^T u = (\theta^{(k)})^T u + y^{(i)} (x^{(i)})^T u \ge (\theta^{(k)})^T u + \gamma. \qquad (2)$$

By a straightforward inductive argument, (2) implies that

$$(\theta^{(k+1)})^T u \ge k\gamma. \qquad (3)$$

Also, we have that

$$\begin{aligned} \|\theta^{(k+1)}\|^2 &= \|\theta^{(k)} + y^{(i)} x^{(i)}\|^2 \\ &= \|\theta^{(k)}\|^2 + \|x^{(i)}\|^2 + 2\, y^{(i)} (x^{(i)})^T \theta^{(k)} \\ &\le \|\theta^{(k)}\|^2 + \|x^{(i)}\|^2 \\ &\le \|\theta^{(k)}\|^2 + D^2. \qquad (4) \end{aligned}$$

The third step above used Equation (1). Moreover, again by applying a straightforward inductive argument, we see that (4) implies

$$\|\theta^{(k+1)}\|^2 \le k D^2. \qquad (5)$$

Putting together (3) and (5), we find that

$$\sqrt{k}\, D \ge \|\theta^{(k+1)}\| \ge (\theta^{(k+1)})^T u \ge k\gamma.$$

The second inequality above follows from the fact that $u$ is a unit-length vector (and $z^T u = \|z\| \cdot \|u\| \cos\phi \le \|z\| \cdot \|u\|$, where $\phi$ is the angle between $z$ and $u$). Rearranging $\sqrt{k}\, D \ge k\gamma$ gives $\sqrt{k} \le D/\gamma$, i.e., $k \le (D/\gamma)^2$. Hence, if the perceptron made a $k$-th mistake, then $k \le (D/\gamma)^2$, so the total number of mistakes is at most $(D/\gamma)^2$.
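As a quick sanity check on this bound (again, a sketch under assumed parameters rather than part of the notes), the snippet below draws a linearly separable sequence with margin at least gamma around a fixed unit vector u, runs the mistake-driven perceptron on it, and compares the observed number of mistakes with $(D/\gamma)^2$. The dimension, sequence length, and margin are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, m, gamma = 5, 2000, 0.1

# Fix a unit-length separator u and keep only points with margin at least gamma.
u = rng.normal(size=dim)
u /= np.linalg.norm(u)

xs, ys = [], []
while len(xs) < m:
    x = rng.uniform(-1.0, 1.0, size=dim)
    if abs(u @ x) >= gamma:                  # ensures y * (u^T x) >= gamma below
        xs.append(x)
        ys.append(1 if u @ x > 0 else -1)

D = max(np.linalg.norm(x) for x in xs)       # ||x(i)|| <= D for all i
bound = (D / gamma) ** 2                     # the theorem's mistake bound

theta = np.zeros(dim)
mistakes = 0
for x, y in zip(xs, ys):
    if (1 if theta @ x >= 0 else -1) != y:   # prediction g(theta^T x) was wrong
        mistakes += 1
        theta = theta + y * x                # update only on mistakes

print(f"mistakes = {mistakes}, (D/gamma)^2 = {bound:.1f}")
```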
