Segment 27. Mixture Models

Watch this segment

(Don't worry, what you see statically below is not the beginning of the segment. Press the play button to start at the beginning.)

{{#widget:Iframe |url=http://www.youtube.com/v/9pWnZcpYh44&hd=1 |width=800 |height=625 |border=0 }}

The direct YouTube link is http://youtu.be/9pWnZcpYh44

Links to the slides: [http://wpressutexas.net/coursefiles/27.MixtureModels.pdf PDF file] or [http://wpressutexas.net/coursefiles/27.MixtureModels.ppt PowerPoint file]

===Problems===

====To Calculate====

The file Media:Mixturevals.txt contains 1000 values, each drawn either with probability from the distribution (for some constant ), or otherwise (with probability ) from the distribution .

1. Write down an expression for the probability of the file's data given some values for the parameters and .

2. Calculate numerically the maximum likelihood values of and .

3. Estimate numerically the Bayes posterior distribution of , marginalizing over as a nuisance parameter. (You'll of course have to make some assumption about priors; an illustrative numerical sketch follows after this list.)
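
The following is a minimal numerical sketch for Problems 1-3, offered only as one possible pattern, not as the intended solution. Because a sketch has to commit to concrete densities, it assumes, purely for illustration, that one component is Exponential(beta) with density beta*exp(-beta*x) and mixing probability c, that the other component is the fixed half-Cauchy (2/pi)/(1+x^2) on x > 0, and that Mixturevals.txt contains one value per line; substitute the actual component densities and parameter names from the problem statement.

<syntaxhighlight lang="python">
# A hedged, illustrative sketch for Problems 1-3.  The component densities
# and parameter names below (c, beta, half-Cauchy) are ASSUMPTIONS made only
# so the code runs; replace them with the ones in the problem statement.
import numpy as np
from scipy.optimize import minimize

# Load the course data file if present (assumed format: one value per line);
# otherwise synthesize 1000 draws so the script still runs end to end.
try:
    data = np.loadtxt("Mixturevals.txt")
except OSError:
    rng = np.random.default_rng(0)
    pick = rng.random(1000) < 0.7
    data = np.where(pick, rng.exponential(scale=1/3.0, size=1000),
                    np.abs(rng.standard_cauchy(1000)))

def log_like(params, x):
    """Problem 1: log prod_i [ c*beta*exp(-beta*x_i) + (1-c)*(2/pi)/(1+x_i^2) ]."""
    c, beta = params
    if not (0.0 < c < 1.0) or beta <= 0.0:
        return -np.inf                      # outside the allowed parameter range
    p1 = beta * np.exp(-beta * x)           # assumed component 1: Exponential(beta)
    p2 = (2.0 / np.pi) / (1.0 + x**2)       # assumed component 2: fixed half-Cauchy
    return np.sum(np.log(c * p1 + (1.0 - c) * p2))

# Problem 2: maximize the log-likelihood numerically over (c, beta).
fit = minimize(lambda p: -log_like(p, data), x0=[0.5, 1.0], method="Nelder-Mead")
c_mle, beta_mle = fit.x
print("MLE estimates:  c =", c_mle, " beta =", beta_mle)

# Problem 3: posterior of c with beta marginalized out on a grid,
# assuming flat priors on c in (0, 1) and on beta in (0, 10).
c_grid = np.linspace(0.01, 0.99, 99)
beta_grid = np.linspace(0.01, 10.0, 200)
logL = np.array([[log_like((c, b), data) for b in beta_grid] for c in c_grid])
L = np.exp(logL - logL.max())               # subtract the max to avoid underflow
post_c = np.trapz(L, beta_grid, axis=1)     # integrate beta out of each row
post_c /= np.trapz(post_c, c_grid)          # normalize to a density in c
print("posterior mean of c:", np.trapz(c_grid * post_c, c_grid))
</syntaxhighlight>

Whatever the true densities turn out to be, the grid limits and the priors should be chosen generously enough that essentially all of the posterior mass lies inside them.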

====To Think About====

1. In problem 3, above, you assumed some definite prior for . What if is itself drawn (just once for the whole data set) from a distribution , with unknown hyperparameters ? How would you now estimate the Bayes posterior distribution of , marginalizing over everything else?
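
One way to think about the hierarchical version, again under purely illustrative assumptions (the mixing probability c drawn once from Beta(a, b) with unknown hyperparameters a and b, and a flat hyperprior over a grid of (a, b) values): the data enter only through the likelihood, so the extra level simply replaces the definite prior on c with an effective prior p_eff(c) obtained by averaging Beta(c | a, b) over the hyperprior. The sketch below continues the code above, reusing c_grid and post_c.

<syntaxhighlight lang="python">
# A hedged sketch for the hierarchical case, continuing the code above
# (reuses c_grid and post_c).  ASSUMED, for illustration only: c ~ Beta(a, b)
# with unknown hyperparameters (a, b), and a flat hyperprior on a grid.
import numpy as np
from scipy.stats import beta as beta_dist

a_grid = np.linspace(0.5, 5.0, 20)          # assumed hyperprior support for a
b_grid = np.linspace(0.5, 5.0, 20)          # assumed hyperprior support for b

# Effective prior on c: average Beta(c | a, b) over the flat hyperprior grid.
p_eff = np.zeros_like(c_grid)
for a in a_grid:
    for b in b_grid:
        p_eff += beta_dist.pdf(c_grid, a, b)
p_eff /= np.trapz(p_eff, c_grid)            # normalize to a density in c

# Posterior of c, now marginalized over beta AND the hyperparameters:
# post_c above is (up to a constant) the flat-prior marginal likelihood of c,
# so reweighting it by p_eff and renormalizing gives the hierarchical answer.
post_c_hier = post_c * p_eff
post_c_hier /= np.trapz(post_c_hier, c_grid)
print("hierarchical posterior mean of c:", np.trapz(c_grid * post_c_hier, c_grid))
</syntaxhighlight>

With a different assumed family or hyperprior, only the construction of p_eff changes; if the hyperparameters were higher dimensional, one would typically replace the grids with MCMC sampling.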