Eleisha's Segment 23: Bootstrap Estimation of Uncertainty

To Compute:

1. Generate 100 i.i.d. random draws from the beta distribution, for example using MATLAB's betarnd or Python's random.betavariate. Use these to estimate this statistic of the underlying distribution: "value of the 75th percentile point minus value of the 25th percentile point". Now use statistical bootstrap to estimate the distribution of uncertainty of your estimate, for example as a histogram.

After generating 100 i.i.d. random values, my estimate of the statistic was approximately:

After bootstrapping (nboot = 100,000), the mean of my test statistic was approximately 0.204 and the standard deviation was approximately 0.024 (see the sample output below).

Below is a histogram of the bootstrapping values:

Eleisha HW23 Figure1.png
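
The code used for this write-up is not shown on the page; the following is a minimal NumPy sketch of the problem 1 procedure. The beta-distribution parameters are not recoverable here, so a = 2.5, b = 5.0 below are only placeholders.

import numpy as np

# Placeholder parameters: the beta-distribution parameters from the assignment
# are not visible on this page, so a = 2.5, b = 5.0 is only an assumption.
a, b = 2.5, 5.0
rng = np.random.default_rng(0)

# One sample of 100 i.i.d. draws and the point estimate of the statistic:
# 75th percentile minus 25th percentile.
data = rng.beta(a, b, size=100)
estimate = np.percentile(data, 75) - np.percentile(data, 25)

# Bootstrap: resample the 100 values with replacement nboot times and
# recompute the statistic on each resample.
nboot = 100_000
resamples = rng.choice(data, size=(nboot, data.size), replace=True)
boot_stats = np.percentile(resamples, 75, axis=1) - np.percentile(resamples, 25, axis=1)

print("Point estimate:         ", estimate)
print("Bootstrap mean:         ", boot_stats.mean())
print("Bootstrap std deviation:", boot_stats.std())
# matplotlib.pyplot.hist(boot_stats, bins=50) then gives a histogram like the one above.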

2. Suppose instead that you can draw any number of desired samples (each 100 draws) from the distribution. How does the histogram of the desired statistic from these samples compare with the bootstrap histogram from problem 1?

When drawing from the true distribution, the mean was approximately 0.231 and the standard deviation was approximately 0.027 (see the sample output below).

Below is a histogram of these values:

Eleisha HW23 Figure2.png
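
A sketch of the problem 2 comparison, again with placeholder beta parameters: instead of resampling one fixed data set, draw many independent samples of 100 directly from the true distribution and compute the statistic for each.

import numpy as np

# Placeholder beta parameters again (assumption; not recoverable from this page).
a, b = 2.5, 5.0
rng = np.random.default_rng(0)

# Many independent samples of 100 from the true distribution, one statistic per sample.
nsamples = 100_000
samples = rng.beta(a, b, size=(nsamples, 100))
true_stats = np.percentile(samples, 75, axis=1) - np.percentile(samples, 25, axis=1)

print("Mean of values:              ", true_stats.mean())
print("Standard deviation of values:", true_stats.std())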

3. What is the actual value of the desired statistic for this beta distribution, computed numerically (that is, not by random sampling)? (Hint: I did this in Mathematica in three lines.)

The actual value of the desired statistic for this beta distribution is approximately 0.23295. This was calculated by evaluating the inverse CDF of the beta distribution at the 75th and 25th percentiles and taking the difference.
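The original calculation was done in Mathematica; the sketch below does the same thing in Python with scipy.stats, again with placeholder beta parameters, since the actual ones are not shown on this page.

from scipy.stats import beta

# Placeholder beta parameters (assumption; the actual ones are not shown here).
a, b = 2.5, 5.0

# The statistic of the distribution itself is its interquartile range:
# inverse CDF (percent-point function) at 0.75 minus inverse CDF at 0.25.
actual = beta.ppf(0.75, a, b) - beta.ppf(0.25, a, b)
print("Actual value:", actual)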

Sample output:

Bootstrapping Values
Mean of values: 0.204169593645
Standard Deviation of values: 0.0242746089495

Values when sampling from the True Distribution
Mean of values: 0.231145899062
Standard Deviation of values: 0.0265314651351
Actual Value:  0.232952354264


To Think About

1. Suppose your desired statistic (for a sample of N i.i.d. data values) was "minimum of the N values". What would the bootstrap estimate of the uncertainty look like in this case? Does this violate the bootstrap theorem? Why or why not?

2. If you knew the distribution, how would you compute the actual distribution for the statistic "minimum of N sampled values", not using random sampling in your computation?

3. For N data points, can you design a statistic so perverse (and different from one suggested in the segment) that the statistical bootstrap fails, even asymptotically as N becomes large?

Back To: Eleisha Jackson