
Mixtures of Gaussians and the EM algorithm

In this set of notes, we discuss the EM (Expectation-Maximization) algorithm for density estimation.

Suppose that we are given a training set $\{x^{(1)}, \ldots, x^{(m)}\}$ as usual. Since we are in the unsupervised learning setting, these points do not come with any labels.

We wish to model the data by specifying a joint distribution $p(x^{(i)}, z^{(i)}) = p(x^{(i)} \mid z^{(i)})\, p(z^{(i)})$. Here, $z^{(i)} \sim \text{Multinomial}(\phi)$ (where $\phi_j \geq 0$, $\sum_{j=1}^k \phi_j = 1$, and the parameter $\phi_j$ gives $p(z^{(i)} = j)$), and $x^{(i)} \mid z^{(i)} = j \sim \mathcal{N}(\mu_j, \Sigma_j)$. We let $k$ denote the number of values that the $z^{(i)}$'s can take on. Thus, our model posits that each $x^{(i)}$ was generated by randomly choosing $z^{(i)}$ from $\{1, \ldots, k\}$, and then $x^{(i)}$ was drawn from one of $k$ Gaussians depending on $z^{(i)}$. This is called the mixture of Gaussians model. Also, note that the $z^{(i)}$'s are latent random variables, meaning that they're hidden/unobserved. This is what will make our estimation problem difficult.
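To make the generative story concrete, here is a minimal NumPy sketch of sampling from this model. The two-component toy parameters below are illustrative choices of ours, not anything from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

phi = np.array([0.3, 0.7])                      # mixing proportions, sum to 1
mu = np.array([[0.0, 0.0], [3.0, 3.0]])         # one mean per Gaussian
Sigma = np.array([np.eye(2), 0.5 * np.eye(2)])  # one covariance per Gaussian

m, k = 500, len(phi)
z = rng.choice(k, size=m, p=phi)                # latent z^(i) ~ Multinomial(phi)
x = np.array([rng.multivariate_normal(mu[j], Sigma[j]) for j in z])
# In the density estimation problem we observe only x; z stays hidden.
```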

The parameters of our model are thus $\phi$, $\mu$ and $\Sigma$. To estimate them, we can write down the likelihood of our data:

$$\ell(\phi, \mu, \Sigma) = \sum_{i=1}^m \log p(x^{(i)}; \phi, \mu, \Sigma) = \sum_{i=1}^m \log \sum_{z^{(i)}=1}^k p(x^{(i)} \mid z^{(i)}; \mu, \Sigma)\, p(z^{(i)}; \phi).$$

However, if we set to zero the derivatives of this formula with respect to the parameters and try to solve, we'll find that it is not possible to find the maximum likelihood estimates of the parameters in closed form. (Try this yourself at home.)
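For concreteness, here is a sketch of evaluating $\ell$ numerically (assuming SciPy; the function name is ours). Note how the sum over the values of $z^{(i)}$ sits inside the log, which is exactly what blocks a closed-form maximizer; the log-sum-exp trick is used only for numerical stability:

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def log_likelihood(x, phi, mu, Sigma):
    """ell(phi, mu, Sigma) = sum_i log sum_j p(x^(i) | z=j) p(z=j)."""
    k = len(phi)
    # log p(x^(i) | z^(i)=j) + log p(z^(i)=j): an (m, k) matrix of joint log-densities
    log_joint = np.stack(
        [multivariate_normal.logpdf(x, mean=mu[j], cov=Sigma[j]) + np.log(phi[j])
         for j in range(k)], axis=1)
    # the inner sum over j happens inside the log, term by term for each i
    return logsumexp(log_joint, axis=1).sum()
```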

The random variables $z^{(i)}$ indicate which of the $k$ Gaussians each $x^{(i)}$ had come from. Note that if we knew what the $z^{(i)}$'s were, the maximum likelihood problem would have been easy. Specifically, we could then write down the likelihood as

$$\ell(\phi, \mu, \Sigma) = \sum_{i=1}^m \log p(x^{(i)} \mid z^{(i)}; \mu, \Sigma) + \log p(z^{(i)}; \phi).$$

Maximizing this with respect to $\phi$, $\mu$ and $\Sigma$ gives the parameters:

$$\phi_j = \frac{1}{m} \sum_{i=1}^m 1\{z^{(i)} = j\},$$
$$\mu_j = \frac{\sum_{i=1}^m 1\{z^{(i)} = j\}\, x^{(i)}}{\sum_{i=1}^m 1\{z^{(i)} = j\}},$$
$$\Sigma_j = \frac{\sum_{i=1}^m 1\{z^{(i)} = j\}\, (x^{(i)} - \mu_j)(x^{(i)} - \mu_j)^T}{\sum_{i=1}^m 1\{z^{(i)} = j\}}.$$

Indeed, we see that if the $z^{(i)}$'s were known, then maximum likelihood estimation becomes nearly identical to what we had when estimating the parameters of the Gaussian discriminant analysis model, except that here the $z^{(i)}$'s play the role of the class labels. There are other minor differences in the formulas here from what we'd obtained in PS1 with Gaussian discriminant analysis, first because we've generalized the $z^{(i)}$'s to be multinomial rather than Bernoulli, and second because here we are using a different $\Sigma_j$ for each Gaussian.
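As a sanity check, the three closed-form estimates above are easy to transcribe directly. A sketch (the function name is ours, and it assumes every class $j$ contains at least a few points):

```python
import numpy as np

def mle_given_z(x, z, k):
    """Closed-form maximum likelihood estimates when the z^(i)'s are observed."""
    phi = np.array([(z == j).mean() for j in range(k)])        # fraction of points in class j
    mu = np.array([x[z == j].mean(axis=0) for j in range(k)])  # per-class sample mean
    Sigma = np.array([np.cov(x[z == j].T, bias=True)           # per-class MLE covariance
                      for j in range(k)])                      # (bias=True divides by n, not n-1)
    return phi, mu, Sigma
```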

However, in our density estimation problem, the $z^{(i)}$'s are not known. What can we do?

The EM algorithm is an iterative algorithm that has two main steps. Applied to our problem, in the E-step, it tries to “guess” the values of the $z^{(i)}$'s. In the M-step, it updates the parameters of our model based on our guesses. Since in the M-step we are pretending that the guesses in the first part were correct, the maximization becomes easy. Here's the algorithm:

  • Repeat until convergence: {
    • (E-step) For each $i, j$, set
      $$w_j^{(i)} := p(z^{(i)} = j \mid x^{(i)}; \phi, \mu, \Sigma)$$
    • (M-step) Update the parameters:
      $$\phi_j := \frac{1}{m} \sum_{i=1}^m w_j^{(i)}, \qquad \mu_j := \frac{\sum_{i=1}^m w_j^{(i)}\, x^{(i)}}{\sum_{i=1}^m w_j^{(i)}}, \qquad \Sigma_j := \frac{\sum_{i=1}^m w_j^{(i)}\, (x^{(i)} - \mu_j)(x^{(i)} - \mu_j)^T}{\sum_{i=1}^m w_j^{(i)}}$$
  • }

In the E-step, we calculate the posterior probability of the $z^{(i)}$'s, given $x^{(i)}$ and using the current setting of our parameters. I.e., using Bayes' rule, we obtain:

$$p(z^{(i)} = j \mid x^{(i)}; \phi, \mu, \Sigma) = \frac{p(x^{(i)} \mid z^{(i)} = j; \mu, \Sigma)\, p(z^{(i)} = j; \phi)}{\sum_{l=1}^k p(x^{(i)} \mid z^{(i)} = l; \mu, \Sigma)\, p(z^{(i)} = l; \phi)}$$

Here, $p(x^{(i)} \mid z^{(i)} = j; \mu, \Sigma)$ is given by evaluating the density of a Gaussian with mean $\mu_j$ and covariance $\Sigma_j$ at $x^{(i)}$; $p(z^{(i)} = j; \phi)$ is given by $\phi_j$, and so on. The values $w_j^{(i)}$ calculated in the E-step represent our “soft” guesses for the values of $z^{(i)}$. (The term “soft” refers to our guesses being probabilities and taking values in $[0, 1]$; in contrast, a “hard” guess is one that represents a single best guess, such as taking values in $\{0, 1\}$ or $\{1, \ldots, k\}$.)
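The Bayes' rule formula above translates almost line for line into code. A sketch of the E-step (assuming SciPy's `multivariate_normal`; the function name `e_step` is ours):

```python
import numpy as np
from scipy.stats import multivariate_normal

def e_step(x, phi, mu, Sigma):
    """Responsibilities w_j^(i) = p(z^(i)=j | x^(i); phi, mu, Sigma)."""
    k = len(phi)
    # numerator of Bayes' rule: p(x^(i) | z^(i)=j) * p(z^(i)=j), for every i and j
    w = np.stack([phi[j] * multivariate_normal.pdf(x, mean=mu[j], cov=Sigma[j])
                  for j in range(k)], axis=1)
    # the denominator is the same sum over all l=1..k, so just normalize each row
    return w / w.sum(axis=1, keepdims=True)
```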

Also, you should contrast the updates in the M-step with the formulas we had when the $z^{(i)}$'s were known exactly. They are identical, except that instead of the indicator functions “$1\{z^{(i)} = j\}$” indicating from which Gaussian each datapoint had come, we now instead have the $w_j^{(i)}$'s.
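The M-step is likewise a direct transcription: take the known-$z$ formulas from before and replace each indicator $1\{z^{(i)} = j\}$ with the weight $w_j^{(i)}$. A sketch (again, the function name is ours):

```python
import numpy as np

def m_step(x, w):
    """M-step: the known-z formulas with 1{z^(i)=j} replaced by w_j^(i)."""
    m = x.shape[0]
    nk = w.sum(axis=0)                    # "effective" number of points per Gaussian
    phi = nk / m                          # phi_j = (1/m) sum_i w_j^(i)
    mu = (w.T @ x) / nk[:, None]          # weighted means
    Sigma = np.array([
        ((x - mu[j]).T * w[:, j]) @ (x - mu[j]) / nk[j]   # weighted covariances
        for j in range(len(nk))])
    return phi, mu, Sigma
```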

The EM algorithm is also reminiscent of the K-means clustering algorithm, except that instead of the “hard” cluster assignments $c^{(i)}$, we instead have the “soft” assignments $w_j^{(i)}$. Similar to K-means, it is also susceptible to local optima, so reinitializing at several different initial parameters may be a good idea.
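Putting the pieces together, here is a sketch of the full EM loop with several random restarts, in the spirit of the reinitialization advice above. It reuses the `e_step`, `m_step`, and `log_likelihood` sketches from earlier, and the initialization scheme is one common choice of ours, not something prescribed by the notes:

```python
import numpy as np

def fit_em(x, k, n_restarts=5, n_iters=100, seed=0):
    """Run EM from several random initializations and keep the best fit.
    A real implementation would also test convergence of the likelihood
    rather than running a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    best_ll, best_params = -np.inf, None
    for _ in range(n_restarts):
        # crude random initialization: uniform mixing weights, means drawn
        # from the data, shared overall data covariance for every component
        phi = np.full(k, 1.0 / k)
        mu = x[rng.choice(len(x), size=k, replace=False)]
        Sigma = np.array([np.cov(x.T, bias=True)] * k)
        for _ in range(n_iters):
            w = e_step(x, phi, mu, Sigma)          # E-step: soft assignments
            phi, mu, Sigma = m_step(x, w)          # M-step: re-fit parameters
        ll = log_likelihood(x, phi, mu, Sigma)
        if ll > best_ll:                           # keep the restart with highest ell
            best_ll, best_params = ll, (phi, mu, Sigma)
    return best_params
```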

It's clear that the EM algorithm has a very natural interpretation of repeatedly trying to guess the unknown $z^{(i)}$'s; but how did it come about, and can we make any guarantees about it, such as regarding its convergence? In the next set of notes, we will describe a more general view of EM, one that will allow us to easily apply it to other estimation problems in which there are also latent variables, and which will allow us to give a convergence guarantee.


Source:  OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013 Download for free at http://cnx.org/content/col11500/1.4