Eleisha's Segment 31: A Tale of Model Selection


To Calculate

(These problems will be the class activity on Monday, but you can get a head start on them if you want.) I measured the temperature of my framitron manifold every minute for 1000 minutes, with the same accuracy for each measurement. The data is plotted on the right (with data points connected by straight lines), and is in the file Modelselection.txt.

1. From the data, estimate the measurement error \sigma. (You can make any reasonable assumptions that follow from looking at the data.)
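
One possible head start, as a sketch rather than the required method: if the underlying signal varies slowly from one minute to the next, differences of consecutive measurements are dominated by noise. This assumes Modelselection.txt holds two whitespace-separated columns (time, temperature).

<pre>
import numpy as np

# Assumed layout: two whitespace-separated columns (time, temperature).
# Adjust the load call if the file is arranged differently.
t, y = np.loadtxt("Modelselection.txt", unpack=True)

# For iid errors of width sigma on a slowly varying signal,
# Var(y[i+1] - y[i]) ~ 2*sigma^2, so:
sigma = np.std(np.diff(y)) / np.sqrt(2.0)
print(f"estimated measurement error sigma ~ {sigma:.4f}")
</pre>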

2. Write down a few guesses for functional forms that might be good models for the data, with different (or adjustable) numbers of parameters. Order these by model complexity (number of parameters), from least to most.
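
For concreteness, here is one illustrative family of candidates in Python; the specific forms and starting guesses below are assumptions, not the required answer:

<pre>
import numpy as np

# Illustrative models, ordered by number of parameters k (least to most).
def constant(t, a):
    return a * np.ones_like(t)

def straight_line(t, a, b):
    return a + b * t

def exp_decay(t, a, b, tau):
    return a + b * np.exp(-t / tau)

def exp_decay_plus_line(t, a, b, tau, c):
    return a + b * np.exp(-t / tau) + c * t

# (name, function, initial parameter guesses) -- the p0 values are placeholders.
models = [
    ("constant (k=1)",         constant,            [20.0]),
    ("line (k=2)",             straight_line,       [20.0, 0.0]),
    ("exp decay (k=3)",        exp_decay,           [20.0, 1.0, 100.0]),
    ("exp decay + line (k=4)", exp_decay_plus_line, [20.0, 1.0, 100.0, 0.0]),
]
</pre>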

3. Fit each of your models to the data, obtaining the parameters and \chi^2_{min} for each. (Hint: write your code generally enough that you can change from model to model by changing only one or two lines.)
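
A generic fitting loop in the spirit of the hint, reusing t, y, sigma, and models from the sketches above so that switching models means editing only that list; scipy.optimize.curve_fit does the weighted least squares:

<pre>
import numpy as np
from scipy.optimize import curve_fit

results = {}
for name, f, p0 in models:
    # absolute_sigma=True treats sigma as a known error bar, not a relative weight.
    popt, pcov = curve_fit(f, t, y, p0=p0,
                           sigma=sigma * np.ones_like(y), absolute_sigma=True)
    chi2_min = np.sum(((y - f(t, *popt)) / sigma) ** 2)
    results[name] = (popt, chi2_min)
    print(f"{name}: chi2_min = {chi2_min:.1f}, params = {popt}")
</pre>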

4. Which of your models "wins" the model selection contest if you use AIC? Which if you use BIC?
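
A sketch of the scoring, continuing from the loop above and using the known-error chi-squared forms AIC = \chi^2_{min} + 2k and BIC = \chi^2_{min} + k \ln N; the winner under each criterion is the model with the smallest score:

<pre>
import numpy as np

N = len(y)  # 1000 measurements
for name, (popt, chi2_min) in results.items():
    k = len(popt)
    print(f"{name}: AIC = {chi2_min + 2 * k:.1f}, "
          f"BIC = {chi2_min + k * np.log(N):.1f}")
</pre>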

To Think About

1. Both AIC and BIC decide whether to allow a new parameter based on a \Delta\chi^2. So it is possible to think about each as a p-value test for whether a null hypothesis ("no new parameter") is ruled out at some significance level. Viewed in this way, what are the critical p-values being used by each test?
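
A quick way to put numbers on this, assuming the parameters are added one at a time so that under the null hypothesis \Delta\chi^2 is chi-square distributed with one degree of freedom; AIC's threshold is then \Delta\chi^2 = 2 and BIC's is \Delta\chi^2 = \ln N:

<pre>
import numpy as np
from scipy.stats import chi2

N = 1000  # number of data points in this problem

p_aic = chi2.sf(2.0, df=1)        # tail probability at AIC's threshold, ~0.157
p_bic = chi2.sf(np.log(N), df=1)  # at BIC's threshold for N=1000, ~0.0086
print(f"AIC acts like a test with critical p ~ {p_aic:.3f}")
print(f"BIC (N={N}) acts like a test with critical p ~ {p_bic:.4f}")
</pre>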

2. Can you give a reasonable rationale, one that might be used by a proponent of BIC, for why its \Delta\chi^2 threshold should be larger as N (the number of data points) increases?
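
One possible line of argument (a sketch of the standard derivation, not the only acceptable answer): BIC approximates minus twice the log of the Bayesian evidence. In a Laplace approximation to the evidence integral, each fitted parameter's posterior width shrinks like N^{-1/2}, contributing an Occam factor whose log grows like (1/2)\ln N per parameter, so that

-2 \ln P(D \mid M) \approx \chi^2_{min} + k \ln N,

which is why the penalty, and hence the required \Delta\chi^2, grows with N.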

Back To: Eleisha Jackson