Multinomial parameter estimation

From Computational Statistics Course Wiki

The past two segments have been about Bayesian parameter estimation.

In Segment 5. Bernoulli Trials, we did Bayesian parameter estimation of the rate parameter of a binomial distribution. The setup was: we observed the outcomes of a series of independent trials, each with two possible outcomes: the jailer says B, or the jailer says C. There was one parameter of interest: x, the probability that the jailer says B. (What about the probability that the jailer says C?) The goal was to compute the posterior distribution of x, given data in the form of counts of observed outcomes.

In this exercise, we will generalize this to a multinomial setting. Each trial is now a chess game, to which there are three possible outcomes: white wins, black wins, or the players draw. We have data on the outcomes of 10,000 real chess games. We want to use this data to learn how likely each outcome is. In other words, we assume that due to the structure of the game of chess, there is some inherent probability of each outcome occurring, and we want to figure out what these probabilities are. The parameters of interest are w, the probability that white wins, and b, the probability that black wins. (What about the probability that they draw?) To do this, we will take counts of the outcomes observed and compute the joint posterior distribution of w and b given this data.

Notational conventions
w = probability that white wins
b = probability that black wins
d = probability that the players draw
N = total number of games observed
W = number of white wins in these games
B = number of black wins in these games
D = number of draws in these games
Activity checkpoints
  1. What does a joint uniform prior on w and b look like?
  2. Suppose we know that w=0.4, b = 0.3, and d = 0.3. If we watch N = 10 games, what is the probability that W = 3, B = 5, and D = 2?
  3. For general w, b, d, W, B, D, what is P(W, B, D | w, b, d)?
  4. Applying Bayes, what is P(w, b, d | W, B, D)? (The Bayes denominator is tricky - if you present us with the integral to evaluate, we will provide the answer.)
  5. Here is the real data - chess_outcomes.txt. Each line represents the outcome of one game. Count the outcomes of the first N games and produce a visualization of the joint posterior of the win rates for N = 0, 3, 10, 100, 1000, and 10000.
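For checkpoints 2 and 3, a minimal sketch of the multinomial probability, written directly from its formula N!/(W! B! D!) · w^W · b^B · d^D (the helper function name is ours, not part of the exercise):

```python
from math import factorial

def multinomial_pmf(counts, probs):
    """P(counts | probs) for a multinomial with N = sum(counts) trials."""
    n = sum(counts)
    coef = factorial(n)
    for c in counts:
        coef //= factorial(c)          # N! / (W! * B! * D!)
    p = 1.0
    for c, q in zip(counts, probs):
        p *= q ** c                    # w^W * b^B * d^D
    return coef * p

# Checkpoint 2: w=0.4, b=0.3, d=0.3; observe W=3, B=5, D=2 in N=10 games.
print(multinomial_pmf([3, 5, 2], [0.4, 0.3, 0.3]))  # ~0.0353
```

Note that under a uniform prior, the posterior in checkpoint 4 is proportional to this same expression viewed as a function of (w, b, d) with the counts held fixed.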

If you do this in Python, the data is already on the class server - check out the "Jeff Hussmann 01-31-14 reading a file" notebook to see how to access it.
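Tallying the first N outcomes can be done with `collections.Counter`. A sketch, assuming each line of the file holds a single outcome token; the token names `'white'`, `'black'`, and `'draw'` here are hypothetical, so check the actual file (and the class notebook) for the real encoding:

```python
from collections import Counter

def count_outcomes(lines, n):
    """Tally the first n outcomes; assumes one outcome token per line."""
    counts = Counter(line.strip() for line in lines[:n])
    return counts['white'], counts['black'], counts['draw']

# With the real file (token names are an assumption):
# with open('chess_outcomes.txt') as f:
#     W, B, D = count_outcomes(f.readlines(), 1000)
```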

Some snippets demonstrating library functions for evaluating and visualizing a function on a 2D grid of points can be found here.
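As one possible approach to checkpoint 5, the unnormalized posterior w^W b^B (1-w-b)^D can be evaluated on a 2D grid with NumPy. A sketch with made-up counts (the real W, B, D come from the data file); working in log space avoids underflow when the counts are large:

```python
import numpy as np

# Hypothetical counts for illustration; real values come from chess_outcomes.txt.
W, B, D = 4000, 3000, 3000

# Evaluate the unnormalized posterior w^W * b^B * (1-w-b)^D on a grid.
w = np.linspace(0.001, 0.999, 500)
b = np.linspace(0.001, 0.999, 500)
ww, bb = np.meshgrid(w, b)
d = 1.0 - ww - bb
valid = d > 0                          # only points inside the simplex
logpost = np.full(ww.shape, -np.inf)
logpost[valid] = (W * np.log(ww[valid]) + B * np.log(bb[valid])
                  + D * np.log(d[valid]))
post = np.exp(logpost - logpost.max())  # rescale so the peak is 1

i, j = np.unravel_index(np.argmax(post), post.shape)
print(ww[i, j], bb[i, j])               # mode sits near (W/N, B/N)
```

The `post` array can then be visualized with, e.g., `matplotlib.pyplot.contourf(ww, bb, post)`.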