# Segment 12 Sanmit Narvekar


#### To Calculate

1. What is the critical region for a 5% two-sided test if, under the null hypothesis, the test statistic is distributed as $\text{Student}(0,\sigma,4)$? That is, what values of the test statistic disprove the null hypothesis with p < 0.05? (OK to use Python, MATLAB, or Mathematica.)

Answer via Mathematica:

Thus, values of the test statistic greater than $2.776\sigma$ or less than $-2.776\sigma$ disprove the null hypothesis in a two-tailed p-value test at the 5% significance level.
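The same critical value can be computed in Python (which the problem also allows); a sketch using `scipy.stats.t`:

```python
# Two-sided 5% critical region for a Student-t statistic with 4 degrees
# of freedom: put 2.5% of probability in each tail.
from scipy.stats import t

alpha = 0.05
crit = t.ppf(1 - alpha / 2, df=4)  # upper critical value; lower is -crit by symmetry
print(round(crit, 3))  # 2.776
```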

2. For an exponentially distributed test statistic with mean $\mu$ (under the null hypothesis), when is the null hypothesis disproved with p < 0.01 for a one-sided test? For a two-sided test?

And again via Mathematica:

For the one-tailed test, the null hypothesis is rejected when the statistic exceeds $\mu \ln 100 \approx 4.605\,\mu$. For the two-tailed test, with 0.005 of probability in each tail, it is rejected when the statistic exceeds $\mu \ln 200 \approx 5.298\,\mu$ or falls below $-\mu \ln 0.995 \approx 0.00501\,\mu$.
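These thresholds can also be checked directly: if $X$ is exponential with mean $\mu$, then $P(X > x) = e^{-x/\mu}$, so each threshold follows from solving for the required tail probability. A sketch in Python, with thresholds expressed in units of $\mu$:

```python
# Rejection thresholds for an Exponential(mean = mu) statistic at p < 0.01,
# expressed in units of mu.
import math

# One-sided: P(X > x) = exp(-x/mu) < 0.01  =>  x > mu * ln(100)
one_sided = math.log(100)

# Two-sided, 0.005 in each tail:
upper = math.log(200)           # P(X > x) < 0.005  =>  x > mu * ln(200)
lower = -math.log(1 - 0.005)    # P(X < x) < 0.005  =>  x < -mu * ln(0.995)

print(round(one_sided, 3), round(upper, 3), round(lower, 5))  # 4.605 5.298 0.00501
```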

#### To Think About

1. P-value tests require an initial choice of a test statistic. What goes wrong if you choose a poor test statistic? What would make it poor?

A poor test statistic is one that doesn't capture deviations from what the null hypothesis predicts. If you choose a poor statistic, then you are not really measuring how likely it would be, by chance, to see data as extreme as yours, given that the underlying model is your null hypothesis.

2. If the null hypothesis is that a coin is fair, and you record the results of N flips, what is a good test statistic? Are there any other possible test statistics?

3. Why is it so hard for a Bayesian to do something as simple as, given some data, disproving a null hypothesis? Can't she just compute a Bayes odds ratio, P(null hypothesis is true)/P(null hypothesis is false), and derive a probability that the null hypothesis is true?