Segment 16. Multiple Hypotheses

From Computational Statistics (CSE383M and CS395T)

Watch this segment

(Don't worry, what you see statically below is not the beginning of the segment. Press the play button to start at the beginning.)


The direct YouTube link is

Links to the slides: PDF file or PowerPoint file


To Calculate

1. Simulate the following: You have M=50 p-values, none actually causal, so that they are drawn from a uniform distribution. Not knowing this sad fact, you apply the Benjamini-Hochberg prescription with <math>\alpha=0.05</math> and possibly declare some of them to be discoveries. By repeated simulation, estimate the probability of thus getting N wrongly called discoveries, for N = 0, 1, 2, and 3.
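
A minimal simulation sketch for this exercise (the function name <code>bh_discoveries</code> and the trial count are my own choices, not from the segment). It applies the standard Benjamini-Hochberg rule, rejecting the k smallest p-values where k is the largest index with <math>p_{(k)} \le k\alpha/M</math>, to repeated draws of 50 uniform p-values:

```python
import numpy as np

rng = np.random.default_rng(0)

def bh_discoveries(pvals, alpha):
    """Number of Benjamini-Hochberg discoveries at FDR level alpha."""
    p = np.sort(pvals)
    m = len(p)
    # Indices where the sorted p-value falls under the BH line k*alpha/m
    hits = np.nonzero(p <= alpha * np.arange(1, m + 1) / m)[0]
    return 0 if hits.size == 0 else hits[-1] + 1  # largest such k

M, alpha, trials = 50, 0.05, 100_000
counts = np.zeros(4)
for _ in range(trials):
    n = bh_discoveries(rng.uniform(size=M), alpha)
    if n < 4:
        counts[n] += 1
probs = counts / trials
print("P(N=0..3) estimates:", probs)
```

Under the global null, one should find that about 95% of trials yield no discoveries at all, with the remaining probability falling off rapidly over N = 1, 2, 3.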

2. Does the distribution that you found in problem 1 depend on M? On <math>\alpha</math>? Can you derive its form analytically for the usual case of <math>\alpha \ll 1</math>?
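
Before deriving the analytic form, it can help to probe the dependence empirically. This sketch (function name and parameter grid are mine, not from the segment) sweeps M and <math>\alpha</math> and records the fraction of all-null trials in which BH makes at least one discovery:

```python
import numpy as np

rng = np.random.default_rng(1)

def frac_any_discovery(M, alpha, trials=50_000):
    """Fraction of all-null trials with >= 1 BH discovery at level alpha."""
    p = np.sort(rng.uniform(size=(trials, M)), axis=1)  # sorted p-values, one row per trial
    thresholds = alpha * np.arange(1, M + 1) / M        # the BH line k*alpha/M
    return np.mean((p <= thresholds).any(axis=1))       # any order statistic under the line

results = {}
for M in (10, 50, 200):
    for alpha in (0.05, 0.2):
        results[(M, alpha)] = frac_any_discovery(M, alpha)
        print(f"M={M:4d}  alpha={alpha:.2f}  P(N>=1) ~ {results[(M, alpha)]:.4f}")
```

The observed fraction should hover near <math>\alpha</math> for every M, consistent with the exact result that for independent uniform p-values BH yields <math>P(N \ge 1) = \alpha</math> under the global null.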

To Think About

1. Suppose you have M independent trials of an experiment, each of which yields an independent p-value. Fisher proposed combining them by forming the statistic

<math>S = -2\sum_{i=1}^{M}\log(p_i)</math>

Show that, under the null hypothesis, S is distributed as <math>\text{Chisquare}(2M)</math> and describe how you would obtain a combined p-value for this statistic.
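
As a numerical check on this claim (my own sketch, assuming SciPy is available): since <math>-2\log U \sim \text{Chisquare}(2)</math> for uniform U, the sum of M independent such terms is <math>\text{Chisquare}(2M)</math>, and the combined p-value is the chi-square tail probability of S:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
M, trials = 8, 100_000

p = rng.uniform(size=(trials, M))    # M null p-values per trial
S = -2 * np.log(p).sum(axis=1)       # Fisher's combined statistic
combined = chi2.sf(S, df=2 * M)      # combined p-value: P(Chisquare(2M) >= S)

print("mean of S:", S.mean(), " (Chisquare(2M) mean is", 2 * M, ")")
print("variance of S:", S.var(), " (Chisquare(2M) variance is", 4 * M, ")")
print("mean combined p-value:", combined.mean())
```

If S really is <math>\text{Chisquare}(2M)</math> under the null, its sample mean and variance should match 2M and 4M, and the combined p-values should be uniform on (0,1), hence average about 0.5.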

2. Fisher is sometimes credited, on the basis of problem 1, with having invented "meta-analysis", whereby results from multiple investigations can be combined to get an overall more significant result. Can you see any pitfalls in this?

Class Activity