# Eleisha's Segment 19: The Chi-Square Statistic

## To Calculate

1. Prove the assertion on lecture slide 5, namely that, for a multivariate normal distribution, the quantity $({\mathbf {x} -\mathbf {\mu } })^{T}{\mathbf {\Sigma } }^{-1}({\mathbf {x} -\mathbf {\mu } }),$ where $\mathbf {x}$ is a random draw from the multivariate normal, is $\chi ^{2}$ distributed.

We must show that $({\mathbf {x} -\mathbf {\mu } })^{T}{\mathbf {\Sigma } }^{-1}({\mathbf {x} -\mathbf {\mu } })$ is $\chi ^{2}$ distributed.

Since $\mathbf {x}$ is a random draw from the multivariate normal, $\mathbf{x}$ can be written as $\mathbf {x} = \mathbf{L}\mathbf{y} + \mathbf{\mu}$, where $\mathbf{L}$ is the Cholesky factor of the covariance matrix (so $\mathbf{\Sigma} = \mathbf{L}\mathbf{L}^{T}$) and $\mathbf{y}$ is filled with $y_{i}{\text{'s}}$ drawn independently from a normal with mean $\mu = 0$ and standard deviation $\sigma = 1$. (Shown in Lecture 17.)
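This Cholesky construction can be sketched numerically. The 3-dimensional $\mathbf{\mu}$ and $\mathbf{\Sigma}$ below are made up for illustration; $\mathbf{L}$ comes from `numpy.linalg.cholesky`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative mean vector and (positive-definite) covariance matrix.
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.5]])

L = np.linalg.cholesky(Sigma)   # lower-triangular L with Sigma = L L^T
y = rng.standard_normal(3)      # independent N(0, 1) draws
x = L @ y + mu                  # a single draw from N(mu, Sigma)

# By construction, L y = x - mu.
print(np.allclose(L @ y, x - mu))
```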

Rearranging $\mathbf {x} = \mathbf{L}\mathbf{y} + \mathbf{\mu}$ gives $\mathbf{L}\mathbf{y} = \mathbf{x} - \mathbf{\mu}$. Therefore,

$({\mathbf {x} -\mathbf {\mu } })^{T}{\mathbf {\Sigma } }^{-1}({\mathbf {x} -\mathbf {\mu } }) = \mathbf{(Ly)}^{T}\mathbf{\Sigma}^{-1}\mathbf{(Ly)} = (\mathbf{y}^{T}\mathbf{L}^{T})(\mathbf{L}\mathbf{L}^{T})^{-1}(\mathbf{L}\mathbf{y}) = \mathbf{y}^{T}\mathbf{L}^{T}(\mathbf{L}^{T})^{-1}\mathbf{L}^{-1}\mathbf{L}\mathbf{y} = \mathbf{y}^{T}\mathbf{y} = \sum_{i} y_{i}^{2}$

Since the $y_{i}{\text{'s}}$ are drawn from normal distributions with mean $\mu = 0$ and standard deviation $\sigma = 1$, $\sum_{i} y_{i}^{2}$ is a sum of squared standard normal variates, which is by definition $\chi^{2}$ distributed with degrees of freedom equal to the dimension of $\mathbf{x}$.

This means that $({\mathbf {x} -\mathbf {\mu } })^{T}{\mathbf {\Sigma } }^{-1}({\mathbf {x} -\mathbf {\mu } })$ is $\chi ^{2}$ distributed.
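As a sanity check on this result, one can draw many $\mathbf{x}$'s and verify that the quadratic form has the mean $n$ and variance $2n$ of a $\chi^{2}_{n}$ distribution. A minimal sketch, with a made-up 3-dimensional $\mathbf{\mu}$ and $\mathbf{\Sigma}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented mean vector and positive-definite covariance matrix.
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.5]])
n = len(mu)

# Draw many x's from N(mu, Sigma) and form the quadratic statistic for each.
N = 100_000
X = rng.multivariate_normal(mu, Sigma, size=N)
d = X - mu
Sinv = np.linalg.inv(Sigma)
q = np.einsum('ij,jk,ik->i', d, Sinv, d)   # q[i] = d[i]^T Sigma^{-1} d[i]

# A chi^2_n variate has mean n and variance 2n.
print(f"sample mean = {q.mean():.2f} (expect ~{n})")
print(f"sample var  = {q.var():.2f} (expect ~{2 * n})")
```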

2. Suppose you measure a bunch of quantities $x_{i}$, each of which is measured with a measurement accuracy $\sigma _{i}$ and has a theoretically expected value $\mu _{i}$. Describe in detail how you might use a chi-square statistic as a p-value test to see whether your theory is viable. Should your test be one-tailed or two-tailed?
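One way such a test could look (the measurements, accuracies, and predictions below are entirely invented): with independent measurements, $\mathbf{\Sigma}$ is diagonal, so the statistic from part 1 reduces to $\chi^{2} = \sum_{i} (x_{i} - \mu_{i})^{2}/\sigma_{i}^{2}$, which under the theory is $\chi^{2}$ distributed with $n$ degrees of freedom. The p-value is then the upper-tail probability, here via `scipy.stats.chi2.sf`:

```python
import numpy as np
from scipy import stats

# Hypothetical measured values, measurement accuracies, and theory predictions.
x     = np.array([4.9, 10.3, 1.8, 7.1])
sigma = np.array([0.2,  0.5, 0.1, 0.3])
mu    = np.array([5.0, 10.0, 2.0, 7.0])

# Chi-square statistic: sum of squared standardized residuals.
chi2 = np.sum(((x - mu) / sigma) ** 2)
dof = len(x)   # no fitted parameters here, so dof = number of measurements

# Upper-tail p-value: probability of a chi-square value at least this large
# if the theory is correct. A small p would argue against the theory.
p = stats.chi2.sf(chi2, df=dof)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

On the tails question: the conventional choice is one-tailed (upper tail), since only an implausibly large $\chi^{2}$ signals disagreement with the theory; an implausibly small $\chi^{2}$ does not favor the theory, but it can indicate overestimated $\sigma_{i}$'s, which is why some practitioners also inspect the lower tail.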