Segment 14. Bayesian Criticism of P-Values
Latest revision as of 14:31, 22 April 2016

Watch this segment


The direct YouTube link is http://youtu.be/IKV6Pn18C7o

Links to the slides: PDF file (http://wpressutexas.net/coursefiles/14.BayesianCriticismOfPValues.pdf) or PowerPoint file (http://wpressutexas.net/coursefiles/14.BayesianCriticismOfPValues.ppt)

Problems

To Calculate

1. Suppose the stopping rule is "flip exactly 10 times" and the data is that 8 out of 10 flips are heads. With what p-value can you rule out the hypothesis that the coin is fair? Is this statistically significant?
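A quick check of this p-value — a minimal sketch, assuming the conventional two-sided test under the fair-coin null (all outcomes at least as extreme as 8 heads, in either direction):

```python
from math import comb

# Two-sided p-value for 8 heads in 10 flips under the fair-coin null:
# sum the probabilities of the outcomes at least as extreme as 8 heads
# (8, 9, or 10 heads, plus the symmetric tail 0, 1, or 2 heads).
n = 10
p_value = sum(comb(n, k) for k in (0, 1, 2, 8, 9, 10)) / 2**n
print(p_value)  # 0.109375 -- not significant at the conventional 0.05 level
```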

2. Suppose that, as a Bayesian, you see 10 flips of which 8 are heads. Also suppose that your prior for the coin being fair is 0.75. What is the posterior probability that the coin is fair? (Make any other reasonable assumptions about your prior as necessary.)
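One way to carry out this calculation — a sketch in which the "other reasonable assumption" is that an unfair coin's heads probability is uniform on [0, 1]:

```python
from math import comb, factorial

# Posterior probability the coin is fair, given 8 heads in 10 flips.
# Assumption (our choice, not specified in the problem): if the coin is
# unfair, its heads probability p is uniform on [0, 1].
prior_fair = 0.75

# Likelihood under the fair-coin hypothesis: Binomial(10, 1/2) at 8 heads.
like_fair = comb(10, 8) * 0.5**10

# Marginal likelihood under the unfair hypothesis:
# integral of C(10,8) p^8 (1-p)^2 dp over [0,1]
#   = C(10,8) * B(9, 3) = C(10,8) * 8! * 2! / 11! = 1/11.
like_unfair = comb(10, 8) * factorial(8) * factorial(2) / factorial(11)

post_fair = (prior_fair * like_fair
             / (prior_fair * like_fair + (1 - prior_fair) * like_unfair))
print(round(post_fair, 4))  # ~0.59: the data only mildly erode the prior
```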

3. For the experiment in the segment, what if the stopping rule was (perversely) "flip until I see five consecutive heads followed immediately by a tail, then count the total number of heads"? What would be the p-value?
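Since this stopping rule has no simple closed form, one can approximate the sampling distribution of the head count by Monte Carlo — a sketch, with the observed head count from the segment's experiment left as a parameter rather than filled in:

```python
import random

# Monte Carlo sketch of the perverse stopping rule: flip a fair coin until
# five (or more) consecutive heads are immediately followed by a tail, then
# record the total number of heads seen.
def heads_at_stopping(rng):
    heads = 0
    run = 0  # length of the current run of consecutive heads
    while True:
        if rng.random() < 0.5:  # heads
            heads += 1
            run += 1
        else:                   # tails
            if run >= 5:
                return heads
            run = 0

rng = random.Random(42)
counts = [heads_at_stopping(rng) for _ in range(20_000)]

def p_value(observed):
    # One-sided tail probability: the fraction of simulated experiments
    # that produce at least as many heads as were observed.
    return sum(c >= observed for c in counts) / len(counts)
```

The p-value for any observed count is then just `p_value(observed)`; the point of the exercise is that this distribution (and hence the p-value) depends entirely on the experimenter's private stopping rule, even though the flips themselves are unchanged.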

To Think About

1. If biology journals require p<0.05 for results to be published, does this mean that one in twenty published biology results is wrong (in the sense that the uninteresting null hypothesis is actually true rather than disproved)? Why might the situation be worse, or better, than this? (See also the provocative paper by Ioannidis, and this blog in Technology Review, whose main source is this article. Also this news story about ESP research. You can Google for other interesting references.)