Eleisha's Segment 31: A Tale of Model Selection

From Computational Statistics Course Wiki
Latest revision as of 22:13, 6 April 2014

To Calculate

(These problems will be the class activity on Monday, but you can get a head start on them if you want.) I measured the temperature of my framitron manifold every minute for 1000 minutes, with the same accuracy for each measurement. The data is plotted on the right (with data points connected by straight lines), and is in the file File:Modelselection.txt.

1. From the data, estimate the measurement error <math>\sigma</math>. (You can make any reasonable assumptions that follow from looking at the data.)
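One reasonable assumption is that the underlying signal varies slowly from minute to minute, so successive differences of the data are dominated by noise. A minimal sketch of that estimator, using made-up synthetic data in place of Modelselection.txt (the true signal and noise level here are illustrative assumptions, not taken from the file):

```python
import numpy as np

# Hypothetical stand-in for Modelselection.txt: a slowly varying
# signal plus Gaussian noise with a known sigma, 1000 "minutes".
rng = np.random.default_rng(0)
t = np.arange(1000.0)
true_sigma = 0.5
y = np.sin(t / 200.0) + rng.normal(0.0, true_sigma, size=t.size)

# If the signal changes little between adjacent samples, then
# Var(y[i+1] - y[i]) ~ 2 sigma^2, so sigma can be read off the
# standard deviation of the first differences.
sigma_est = np.std(np.diff(y)) / np.sqrt(2.0)
print(sigma_est)
```

With 1000 points the estimate lands close to the true value of 0.5; on the real data the same two lines (load, difference, rescale) give the needed <math>\sigma</math>.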

2. Write down a few guesses for functional forms, with different (or adjustable) numbers of parameters that might be good models for the data. Order these by their model complexity (number of parameters) from least to most.

3. Fit each of your models to the data, obtaining the parameters and <math>\chi^2_{min}</math> for each. (Hint: write your code generally enough that you can change from model to model by changing only one or two lines.)
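One way to honor the hint is to isolate the model in a single function, so switching models means editing only that function and the initial guess. A sketch using scipy.optimize.curve_fit on synthetic data (the quadratic model and the fake data here are illustrative assumptions; the real models and data come from the problem above):

```python
import numpy as np
from scipy.optimize import curve_fit

# The model function is the only thing that changes between fits.
# A quadratic is shown as a placeholder.
def model(t, a, b, c):
    return a + b * t + c * t**2

# Hypothetical data: a linear trend plus Gaussian noise, with a
# sigma as estimated in problem 1.
rng = np.random.default_rng(1)
t = np.arange(1000.0)
sigma = 0.5
y = 2.0 + 0.003 * t + rng.normal(0.0, sigma, size=t.size)

p0 = np.ones(3)  # one initial guess per parameter
popt, pcov = curve_fit(model, t, y, p0=p0,
                       sigma=np.full(t.size, sigma),
                       absolute_sigma=True)
chi2_min = np.sum(((y - model(t, *popt)) / sigma) ** 2)
print(popt, chi2_min)
```

For a good model, <math>\chi^2_{min}</math> should come out near <math>N - k</math> (here, about 997), which is itself a useful sanity check on the fit.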

4. Which of your models "wins" the model selection contest if you use AIC? Which for BIC?
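Assuming Gaussian measurement errors, <math>-2\ln L = \chi^2 + \mathrm{const}</math>, so the two criteria reduce to penalized <math>\chi^2_{min}</math> scores: <math>\mathrm{AIC} = \chi^2_{min} + 2k</math> and <math>\mathrm{BIC} = \chi^2_{min} + k \ln N</math>, with the smallest score winning. A sketch of the comparison, using made-up <math>\chi^2_{min}</math> values for three hypothetical nested models (not results from the actual data):

```python
import numpy as np

# Under Gaussian errors, -2 ln L = chi^2 + const, so the criteria
# become penalized chi^2 scores; smaller is better.
def aic(chi2_min, k):
    return chi2_min + 2 * k

def bic(chi2_min, k, n):
    return chi2_min + k * np.log(n)

# Illustrative (made-up) fit results for a 1000-point data set.
n = 1000
chi2s = {"2-param": 1100.0, "3-param": 1010.0, "4-param": 1007.0}
ks = {"2-param": 2, "3-param": 3, "4-param": 4}

best_aic = min(chi2s, key=lambda m: aic(chi2s[m], ks[m]))
best_bic = min(chi2s, key=lambda m: bic(chi2s[m], ks[m], n))
print(best_aic, best_bic)
```

In this contrived example the two criteria disagree: the 4-parameter model's improvement of 3 in <math>\chi^2_{min}</math> beats AIC's penalty of 2 but not BIC's penalty of <math>\ln 1000 \approx 6.9</math>, which is exactly the kind of split this problem asks you to look for.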

To Think About

1. Both AIC and BIC decide whether to allow a new parameter based on a <math>\Delta\chi^2</math>. So it is possible to think about each as a p-value test for whether a null hypothesis ("no new parameter") is ruled out at some significance level. Viewed in this way, what are the critical p-values being used by each test?
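One way to explore this numerically: adding one parameter changes a chi-square statistic by one degree of freedom, so the tail probability of a <math>\chi^2_1</math> distribution at the criterion's threshold is the implied critical p-value. A sketch for the AIC case (the threshold <math>\Delta\chi^2 = 2</math> follows from AIC's penalty of <math>2k</math>; the BIC threshold is left for you, since it depends on <math>N</math>):

```python
from scipy.stats import chi2

# AIC accepts one extra parameter when Delta chi^2 > 2; the tail
# probability of chi^2 with 1 degree of freedom at that threshold
# is the implied critical p-value.
p_aic = chi2.sf(2.0, df=1)
print(p_aic)  # roughly 0.157
```

Repeating this with BIC's <math>N</math>-dependent threshold shows why its effective significance level tightens as the data set grows.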

2. Can you give a reasonable rationale, that might be used by a proponent of BIC, for why its <math>\Delta\chi^2</math> should be larger in magnitude as N (the number of data points) increases?

Back To: Eleisha Jackson