# Segment 30. Expectation Maximization (EM) Methods


#### Watch this segment

(Don't worry, what you see statically below is not the beginning of the segment. Press the play button to start at the beginning.)

{{#widget:Iframe |url=http://www.youtube.com/v/StQOzRqTNsw&hd=1 |width=800 |height=625 |border=0 }}

Links to the slides: PDF file or PowerPoint file

### Problems

#### To Calculate

1. For a set of positive values $\{x_i\}$, use Jensen's inequality to show (a) that the mean of their squares is never less than the square of their mean, and (b) that their arithmetic mean is never less than their harmonic mean.
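Neither part requires computation, but a quick numerical sanity check of both inequalities is easy to write (a minimal sketch; the sample data and NumPy usage are invented for the example, not part of the problem):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 10.0, size=1000)  # a set of positive values, as in the problem

mean = x.mean()
mean_of_squares = (x ** 2).mean()
harmonic_mean = 1.0 / (1.0 / x).mean()

# (a) x^2 is convex, so E[x^2] >= (E[x])^2
assert mean_of_squares >= mean ** 2
# (b) 1/x is convex for x > 0, so E[1/x] >= 1/E[x], i.e. arithmetic >= harmonic
assert mean >= harmonic_mean
```

Both assertions hold for any positive sample; the gap closes only when all the $x_i$ are equal.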

2. Sharpen the argument about the termination of E-M methods that was given in slide 4, as follows: Suppose that $g(x) \ge f(x)$ for all $x$, for two functions $f$ and $g$. Prove that, at any local maximum $x_m$ of $f$, one of these two conditions must hold: (1) $g(x_m) > f(x_m)$ [in which case the E-M algorithm has not yet terminated], or (2) $x_m$ is a local maximum of $g$ [in which case the E-M algorithm terminates at a maximum of $g$, as advertised]. You may make any reasonable assumptions about the continuity of the functions.
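To see the monotonicity that this problem sharpens, here is a minimal E-M sketch for a two-component 1-D Gaussian mixture (not the course's code; the synthetic data, initialization, and tolerances are all invented for the example). The assertion inside the loop is exactly the property whose termination behavior the problem asks you to analyze: the log-likelihood never decreases from one iteration to the next.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: two Gaussian components (parameters invented for the example).
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.5, 300)])

def log_likelihood(x, w, mu, sig):
    # log L(theta) = sum_i log sum_k w_k N(x_i | mu_k, sig_k^2)
    comp = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))
    return np.log(comp.sum(axis=1)).sum()

# Crude starting guess for weights, means, and standard deviations.
w, mu, sig = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

prev = -np.inf
for _ in range(200):
    # E step: responsibilities r[i, k] = P(component k | x_i, current theta)
    comp = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))
    r = comp / comp.sum(axis=1, keepdims=True)
    # M step: responsibility-weighted maximum-likelihood updates
    n_k = r.sum(axis=0)
    w = n_k / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    cur = log_likelihood(x, w, mu, sig)
    assert cur >= prev - 1e-9  # the E-M guarantee: L(theta) never decreases
    if cur - prev < 1e-8:      # terminate when the increase stalls
        break
    prev = cur
```

On this synthetic sample the means converge near the true values $-2$ and $3$.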

#### To Think About

1. Jensen's inequality says something like "any concave function of a mixture of things is greater than the same mixture of the individual concave functions". What "mixture of things" is this idea being applied to in the proof of the E-M theorem (slide 4)?
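For reference, one standard way to state the inequality being paraphrased (a sketch; $\varphi$, $\lambda_k$, and $y_k$ are generic names, not necessarily the slide's notation):

```latex
% Jensen's inequality: for a concave function \varphi and a "mixture",
% i.e., weights \lambda_k \ge 0 with \sum_k \lambda_k = 1,
\varphi\Big( \sum_k \lambda_k \, y_k \Big) \;\ge\; \sum_k \lambda_k \, \varphi(y_k)
% (the inequality reverses, \le, when \varphi is convex)
```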

2. So slide 4 proves that some function is less than the actual function of interest, namely $L(\theta)$. What makes this such a powerful idea?

### Activity

The class activity for Friday can be found at EM activity.