Eleisha's Segment 12: P-Value Tests

From Computational Statistics Course Wiki

To Calculate:

1. What is the critical region for a 5% two-sided test if, under the null hypothesis, the test statistic is distributed as ? That is, what values of the test statistic disprove the null hypothesis with p < 0.05? (OK to use Python, MATLAB, or Mathematica.)

Let t = the value of the test statistic. If, under the null hypothesis, the test statistic follows the given distribution (symmetric about zero), then for a two-sided test the critical region is |t| > t_crit. The cutoff t_crit can be calculated by taking the inverse of the CDF of the probability distribution and evaluating it at (1 - 0.05/2).
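
Numerically, this is a single inverse-CDF call. Below is a minimal Python/SciPy sketch of the same idea; the standard normal used here is only a stand-in for whatever distribution the problem assigns to the test statistic under the null hypothesis:

from scipy import stats

def two_sided_critical_value(dist, alpha=0.05):
    # ppf is SciPy's inverse CDF; for a distribution symmetric about zero,
    # |t| > dist.ppf(1 - alpha/2) is the two-sided critical region at level alpha.
    return dist.ppf(1 - alpha / 2)

print(two_sided_critical_value(stats.norm(0, 1)))  # about 1.96 for a standard normal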

2. For an exponentially distributed test statistic with mean (under the null hypothesis), when is the null hypothesis disproved with p < 0.01 for a one-sided test? For a two-sided test?

Let t = the value of the test statistic

The pdf for an exponentially distributed test statistic with rate parameter λ is p(x) = λ e^(-λx) for x ≥ 0.

Since the mean of p(x) is 1/λ, we take λ to be the reciprocal of the given mean.

We can solve for the critical region in a similar manner to question one by determining the inverse of the CDF of p(x). The CDF is F(x) = 1 - e^(-λx), so F^(-1)(q) = -ln(1 - q)/λ; we evaluate it at (1 - 0.01) for a one-sided test and at (1 - 0.01/2) for a two-sided test.

For a one-sided test the null hypothesis is disproved with p < 0.01 when t > ln(100)/λ ≈ 4.61/λ.

For a two-sided test the null hypothesis is disproved with p < 0.01 when t > ln(200)/λ ≈ 5.30/λ.

Below is the Mathematica code that I used to solve for the critical regions:

[Image: Eleisha math 12.png]
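
For comparison, the same critical regions for the exponential case can be sketched in Python with SciPy; the value lam = 1 below is only a placeholder for the rate parameter defined above:

from scipy import stats

lam = 1.0                          # rate parameter lambda; placeholder value (lam = 1/mean)
expo = stats.expon(scale=1 / lam)  # SciPy parameterizes the exponential by scale = 1/lambda

t_one_sided = expo.ppf(1 - 0.01)      # ln(100)/lam, about 4.61 for lam = 1
t_two_sided = expo.ppf(1 - 0.01 / 2)  # ln(200)/lam, about 5.30 for lam = 1
print(t_one_sided, t_two_sided)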


To Think About:

1. P-value tests require an initial choice of a test statistic. What goes wrong if you choose a poor test statistic? What would make it poor?

A poor test statistic would be one where the associated distribution under the null hypothesis is not known or cannot be computed, so the observed value cannot be compared with that distribution to obtain a p-value.

2. If the null hypothesis is that a coin is fair, and you record the results of N flips, what is a good test statistic? Are there any other possible test statistics?

A good test statistic would be the number of heads in the N flips, which under the null hypothesis is binomially distributed with parameter p = 0.5.
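
As a quick illustration of how that statistic would be used (the flip counts below are made up, not part of the assignment), the two-sided p-value for an observed number of heads can be computed directly from the binomial distribution:

from scipy.stats import binom

N, k = 100, 61                  # hypothetical data: 61 heads in 100 flips
p0 = 0.5                        # fair-coin null hypothesis
# Two-sided p-value: probability of a count at least as extreme as k.
# With p0 = 0.5 the distribution is symmetric, so double the upper tail.
p_value = 2 * binom.sf(k - 1, N, p0)  # P(X >= 61) + P(X <= 39)
print(p_value)                        # about 0.035, so p < 0.05 here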

3. Why is it so hard for a Bayesian to do something as simple as, given some data, disproving a null hypothesis? Can't she just compute a Bayes odds ratio, P(null hypothesis is true)/P(null hypothesis is false) and derive a probability that the null hypothesis is true?

Even with the calculation of the Bayes odds ratio, one cannot definitively say that the null hypothesis is false. Even if the Bayesian constructs an EME (exhaustive, mutually exclusive) set of hypotheses for the data and calculates odds ratios among them, this still is not a rejection of the null hypothesis, because the ratios depend on which alternative hypotheses were included in the set.

Back To: Eleisha Jackson