$$
\begin{aligned}
\ell(\theta^{(t+1)}) &\ge \sum_i \sum_{z^{(i)}} Q_i^{(t)}(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \theta^{(t+1)})}{Q_i^{(t)}(z^{(i)})} \\
&\ge \sum_i \sum_{z^{(i)}} Q_i^{(t)}(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \theta^{(t)})}{Q_i^{(t)}(z^{(i)})} \\
&= \ell(\theta^{(t)})
\end{aligned}
$$

This first inequality comes from the fact that

$$\ell(\theta) \ge \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}$$

holds for any values of $Q_i$ and $\theta$, and in particular holds for $Q_i = Q_i^{(t)}$, $\theta = \theta^{(t+1)}$. To get the second inequality above, we used the fact that $\theta^{(t+1)}$ is chosen explicitly to be

$$\arg\max_\theta \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})},$$

and thus this formula evaluated at $\theta^{(t+1)}$ must be equal to or larger than the same formula evaluated at $\theta^{(t)}$. Finally, the step used to get the last equality was shown earlier, and follows from $Q_i^{(t)}$ having been chosen to make Jensen's inequality hold with equality at $\theta^{(t)}$.

Hence, EM causes the likelihood to converge monotonically. In our description of the EM algorithm, we said we'd run it until convergence. Given the result that we just showed, one reasonable convergence test would be to check if the increase in $\ell(\theta)$ between successive iterations is smaller than some tolerance parameter, and to declare convergence if EM is improving $\ell(\theta)$ too slowly.
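As a concrete illustration, the following is a minimal Python sketch of this convergence test; the functions log_likelihood(X, params) and em_step(X, params) are hypothetical placeholders for whatever latent-variable model is being fit.

```python
def run_em(X, params, log_likelihood, em_step, tol=1e-4, max_iters=1000):
    """Run EM until the increase in l(theta) between successive
    iterations falls below `tol` (or max_iters is reached).

    `log_likelihood(X, params)` and `em_step(X, params)` are assumed
    to be supplied by the particular model being fit."""
    prev_ll = log_likelihood(X, params)
    for _ in range(max_iters):
        params = em_step(X, params)     # one E-step followed by one M-step
        ll = log_likelihood(X, params)
        if ll - prev_ll < tol:          # monotonicity guarantees ll - prev_ll >= 0
            break                       # EM is improving l(theta) too slowly
        prev_ll = ll
    return params
```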

Remark. If we define

$$J(Q, \theta) = \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})},$$

then we know $\ell(\theta) \ge J(Q, \theta)$ from our previous derivation. EM can also be viewed as a coordinate ascent on $J$, in which the E-step maximizes it with respect to $Q$ (check this yourself), and the M-step maximizes it with respect to $\theta$.
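To make the coordinate ascent view concrete, here is a small illustrative Python sketch of evaluating $J(Q, \theta)$ when each $z^{(i)}$ ranges over $\{1, \dots, k\}$; the responsibility matrix W (with W[i, j] holding $Q_i(z^{(i)} = j)$) and the function log_joint(x, j, theta) returning $\log p(x, z = j; \theta)$ are names assumed only for this sketch.

```python
import numpy as np

def lower_bound_J(W, X, theta, log_joint):
    """J(Q, theta) = sum_i sum_j W[i, j] * (log p(x^(i), z=j; theta) - log W[i, j])."""
    m, k = W.shape
    total = 0.0
    for i in range(m):
        for j in range(k):
            if W[i, j] > 0:  # treat 0 * log 0 as 0
                total += W[i, j] * (log_joint(X[i], j, theta) - np.log(W[i, j]))
    return total

# The E-step sets W[i, :] to the posterior p(z | x^(i); theta), which maximizes
# J over Q for fixed theta; the M-step then maximizes J over theta for fixed Q.
```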

Mixture of Gaussians revisited

Armed with our general definition of the EM algorithm, let's go back to our old example of fitting the parameters $\Phi$, $\mu$ and $\Sigma$ in a mixture of Gaussians. For the sake of brevity, we carry out the derivations for the M-step updates only for $\Phi$ and $\mu_j$, and leave the updates for $\Sigma_j$ as an exercise for the reader.

The E-step is easy. Following our algorithm derivation above, we simply calculate

$$w_j^{(i)} = Q_i(z^{(i)} = j) = P(z^{(i)} = j \mid x^{(i)}; \Phi, \mu, \Sigma).$$

Here, “$Q_i(z^{(i)} = j)$” denotes the probability of $z^{(i)}$ taking the value $j$ under the distribution $Q_i$.
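Concretely, this is just Bayes' rule evaluated at the current parameter values. Below is a minimal NumPy/SciPy sketch, assuming (purely for illustration) that X is an $m \times n$ data matrix, phi a length-$k$ vector of mixing proportions, mu a $k \times n$ matrix of means, and Sigma a list of $k$ covariance matrices.

```python
import numpy as np
from scipy.stats import multivariate_normal

def e_step(X, phi, mu, Sigma):
    """Responsibilities w[i, j] = Q_i(z^(i)=j) = p(z^(i)=j | x^(i); phi, mu, Sigma)."""
    m, k = X.shape[0], len(phi)
    w = np.zeros((m, k))
    for j in range(k):
        # numerator of Bayes' rule: p(x^(i) | z^(i)=j; mu, Sigma) * p(z^(i)=j; phi)
        w[:, j] = multivariate_normal.pdf(X, mean=mu[j], cov=Sigma[j]) * phi[j]
    w /= w.sum(axis=1, keepdims=True)  # normalize over j: divide by p(x^(i))
    return w
```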

Next, in the M-step, we need to maximize, with respect to our parameters $\Phi, \mu, \Sigma$, the quantity

$$
\begin{aligned}
\sum_{i=1}^m \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \Phi, \mu, \Sigma)}{Q_i(z^{(i)})}
&= \sum_{i=1}^m \sum_{j=1}^k Q_i(z^{(i)} = j) \log \frac{p(x^{(i)} \mid z^{(i)} = j; \mu, \Sigma)\, p(z^{(i)} = j; \Phi)}{Q_i(z^{(i)} = j)} \\
&= \sum_{i=1}^m \sum_{j=1}^k w_j^{(i)} \log \frac{\frac{1}{(2\pi)^{n/2} |\Sigma_j|^{1/2}} \exp\!\left(-\tfrac{1}{2}(x^{(i)} - \mu_j)^T \Sigma_j^{-1} (x^{(i)} - \mu_j)\right) \cdot \Phi_j}{w_j^{(i)}}
\end{aligned}
$$

Let's maximize this with respect to $\mu_l$. If we take the derivative with respect to $\mu_l$, we find

$$
\begin{aligned}
\nabla_{\mu_l} \sum_{i=1}^m \sum_{j=1}^k w_j^{(i)} \log \frac{\frac{1}{(2\pi)^{n/2} |\Sigma_j|^{1/2}} \exp\!\left(-\tfrac{1}{2}(x^{(i)} - \mu_j)^T \Sigma_j^{-1} (x^{(i)} - \mu_j)\right) \cdot \Phi_j}{w_j^{(i)}}
&= -\nabla_{\mu_l} \sum_{i=1}^m \sum_{j=1}^k w_j^{(i)} \frac{1}{2} (x^{(i)} - \mu_j)^T \Sigma_j^{-1} (x^{(i)} - \mu_j) \\
&= \frac{1}{2} \sum_{i=1}^m w_l^{(i)} \nabla_{\mu_l} \left( 2\mu_l^T \Sigma_l^{-1} x^{(i)} - \mu_l^T \Sigma_l^{-1} \mu_l \right) \\
&= \sum_{i=1}^m w_l^{(i)} \left( \Sigma_l^{-1} x^{(i)} - \Sigma_l^{-1} \mu_l \right)
\end{aligned}
$$

Setting this to zero and solving for $\mu_l$ therefore yields the update rule

$$\mu_l := \frac{\sum_{i=1}^m w_l^{(i)} x^{(i)}}{\sum_{i=1}^m w_l^{(i)}},$$

which was what we had in the previous set of notes.
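In the vectorized notation of the E-step sketch above (a responsibility matrix w of shape $m \times k$), this update can be written in a single line; again, this is only an illustrative sketch.

```python
def update_means(X, w):
    """mu_l := sum_i w[i, l] x^(i) / sum_i w[i, l], computed for all l at once."""
    # (k x m) @ (m x n) gives the weighted sums; divide each row by the total weight
    return (w.T @ X) / w.sum(axis=0)[:, None]
```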

Let's do one more example, and derive the M-step update for the parameters $\Phi_j$. Grouping together only the terms that depend on $\Phi_j$, we find that we need to maximize

$$\sum_{i=1}^m \sum_{j=1}^k w_j^{(i)} \log \Phi_j.$$

However, there is an additional constraint that the $\Phi_j$'s sum to 1, since they represent the probabilities $\Phi_j = p(z^{(i)} = j; \Phi)$. To deal with the constraint that $\sum_{j=1}^k \Phi_j = 1$, we construct the Lagrangian

$$\mathcal{L}(\Phi) = \sum_{i=1}^m \sum_{j=1}^k w_j^{(i)} \log \Phi_j + \beta \left( \sum_{j=1}^k \Phi_j - 1 \right),$$

where $\beta$ is the Lagrange multiplier. We don't need to worry about the constraint that $\Phi_j \ge 0$, because as we'll shortly see, the solution we'll find from this derivation will automatically satisfy that anyway. Taking derivatives, we find

$$\frac{\partial}{\partial \Phi_j} \mathcal{L}(\Phi) = \sum_{i=1}^m \frac{w_j^{(i)}}{\Phi_j} + \beta$$

Setting this to zero and solving, we get

$$\Phi_j = \frac{\sum_{i=1}^m w_j^{(i)}}{-\beta}$$

i.e., $\Phi_j \propto \sum_{i=1}^m w_j^{(i)}$. Using the constraint that $\sum_j \Phi_j = 1$, we easily find that $-\beta = \sum_{i=1}^m \sum_{j=1}^k w_j^{(i)} = \sum_{i=1}^m 1 = m$. (This used the fact that $w_j^{(i)} = Q_i(z^{(i)} = j)$, and since probabilities sum to 1, $\sum_j w_j^{(i)} = 1$.) We therefore have our M-step updates for the parameters $\Phi_j$:

$$\Phi_j := \frac{1}{m} \sum_{i=1}^m w_j^{(i)}.$$
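Continuing the same illustrative notation, this update is just the column average of the responsibility matrix w.

```python
def update_phi(w):
    """phi_j := (1/m) * sum_i w[i, j], for each j."""
    return w.mean(axis=0)
```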

The derivation of the M-step update for $\Sigma_j$ is also entirely straightforward.

Source:  OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013 Download for free at http://cnx.org/content/col11500/1.4