#1




Lecture #5
Attached is lecture #5 as given on 2/3/10.

#2




Rejection method and ratio-of-uniforms methods
To encourage more active discussion on the forums, as part of HW 3, someone should:
- post an intuitive explanation of why the rejection method for producing random deviates works
- post a rigorous proof of why the rejection method works
- post a proof of why the ratio-of-uniforms method works
#3




Intuitive Explanation of the Rejection Method
Having run up against a wall on HW2, I decided to attempt to post an intuitive explanation of the rejection method, as was requested in class today.
First, it's worth explaining the transformation method as intuitively as possible. Note that for the transformation method, I use the terms p and f interchangeably; they make more sense in the context of the rejection method. Think of the integral of f(x), call it F(x), as a transformation. As long as your deviate is between 0 and A (the total area under f), it will always be possible to map it onto f(x) and p(x). Further, a deviate has the greatest likelihood of being transformed where the slope of F is greatest (and hence, where the value of f is greatest). This helps ensure that uniformly distributed deviates are mapped at the correct frequency given the probability density.

Now, the rejection method specifically: there is going to be some deviation between p and f, and that's where rejection comes in. The bounding function is only so faithful to p, of course, so F doesn't transform exactly onto p. As a result, we need to reject a fraction of the deviates. That fraction will be proportional to (f(x) - p(x))/f(x), which ensures accepted deviates adhere to the given probability function.

I sort of feel like this is a proof, but the best proofs should be intuitive explanations, right?
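To make the transformation step concrete, here is a minimal sketch in Python (my own illustration, not from the lecture): it draws exponential deviates by inverting the integral of the density, which is exactly the mapping described above, and deviates do pile up where the slope of the CDF is steepest.

```python
import math
import random

def sample_exponential(lam, n, seed=0):
    """Transformation (inverse-CDF) method for p(x) = lam * exp(-lam * x).

    The CDF is P(x) = 1 - exp(-lam * x); inverting it maps a uniform
    deviate u onto x = -ln(1 - u) / lam.  Deviates are densest where
    the slope of P (i.e. the value of p) is greatest, near x = 0.
    """
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

samples = sample_exponential(2.0, 100_000)
mean = sum(samples) / len(samples)  # should be close to 1/lam = 0.5
```

The function names and the exponential target are my own choices, picked only because the inverse CDF has a closed form here.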
#4




Rejection Method
An attempt at a proof of the acceptance-rejection method.

#5




I did not really understand the Ratio of Uniforms method. Any references?

#6




Quote:
Kinderman, A. J., and Monahan, J. F., "Computer Generation of Random Variables Using the Ratio of Uniform Deviates," ACM Transactions on Mathematical Software 3, 257 (1977).
It turns out that the paper also includes a proof of the ratio-of-uniforms method.
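Until you get hold of the paper, here is a minimal sketch of the method for the standard normal (my own illustration; the bound b = sqrt(2/e) is the standard box for this case). Draw (u, v) uniformly over the region 0 < u <= sqrt(p(v/u)); then x = v/u has density proportional to p.

```python
import math
import random

def ratio_of_uniforms_normal(n, seed=0):
    """Ratio-of-uniforms sampler for the standard normal.

    With unnormalized p(x) = exp(-x^2 / 2), accept (u, v) uniform in
    the box (0, 1] x [-b, b], b = sqrt(2/e), whenever
    u <= sqrt(p(v/u)), i.e. v^2 <= -4 u^2 ln u; then x = v/u.
    """
    rng = random.Random(seed)
    b = math.sqrt(2.0 / math.e)
    out = []
    while len(out) < n:
        u = rng.random()
        v = (2.0 * rng.random() - 1.0) * b
        # guard u > 0 so log(u) is defined; accept inside the region
        if u > 0.0 and v * v <= -4.0 * u * u * math.log(u):
            out.append(v / u)
    return out

xs = ratio_of_uniforms_normal(100_000)
mean = sum(xs) / len(xs)                # should be near 0
var = sum(x * x for x in xs) / len(xs)  # should be near 1
```

About 73% of the (u, v) pairs land inside the region for this target, so the loop wastes little work.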
#7




Thanks a lot. I'll have a look.

#8




Quote:
Let's look at the rejection method in reverse. We want some set of random deviates with frequency p(x). The frequency is critical: if p(x) is 0 for some x, we don't want any deviates with that value. We also want to reject as few as possible, so we need to find a way to transform "cleanly" from a uniform distribution to the p-distribution. The obvious answer is to simply transform with the integral of p(x), P(x). Unfortunately, we've chosen the rejection method because p(x) is a computationally complicated distribution.

We could simply approximate p(x) using some similar but easier-to-calculate function f(x) and its integral F(x). But we don't really want an approximation. That's fine. We can still use f(x) and get an exact result, and that is the meat of the rejection method. F(x) transforms the initial uniform deviate to an x-value distributed according to f(x). This is an approximation, and if the bounding function is good, it's a pretty good approximation. But we know how often that approximation will be "correct," and not just an approximation. That frequency is given by h = p(x)/f(x), and so we can simply reject a fraction 1 - h of the transformed x-values. We do so by taking a second deviate (scaled to 1). If that deviate is less than h, we accept our transformed x-value. If it is between h and 1, we reject it and start the process over.

Make sense? I'd love feedback.
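Here is how that recipe looks in code, for a toy target of my own choosing (purely illustrative): p(x) = (3/2) x^2 on [-1, 1], with the constant bounding function f(x) = 3/2, so that h = p(x)/f(x) = x^2 and transforming with F^{-1} is just a uniform draw on [-1, 1].

```python
import random

def rejection_sample(n, seed=0):
    """Rejection sampling for p(x) = (3/2) x^2 on [-1, 1].

    Bounding function: f(x) = 3/2, a constant sitting above p
    everywhere.  Draw x from f via its (trivial) inverse CDF, then
    take a second deviate scaled to 1 and accept x when the deviate
    is below h = p(x)/f(x) = x^2; otherwise start over.
    """
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = -1.0 + 2.0 * rng.random()   # transform uniform deviate via F^{-1}
        if rng.random() < x * x:        # accept with probability h = p(x)/f(x)
            out.append(x)
    return out

xs = rejection_sample(100_000)
second_moment = sum(x * x for x in xs) / len(xs)  # E[x^2] under p is 3/5
```

With this bound the areas under p and f are 1 and 3, so about a third of the draws are accepted; a tighter bounding function would waste fewer deviates.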
#9




Rejection method - intuitive proof
I will try to post an intuitive explanation for the rejection method. We are interested in getting a sample from a candidate distribution p(x). However, most of the time the inverse of the CDF cannot be computed in closed form, and hence the transformation method cannot be used.
To overcome this problem, we select a simpler distribution, known as the proposal distribution q(x), which bounds p(x) from above. Now, we first sample a point y from the distribution q(x) using the transformation method. Following this, a second sample is obtained that is uniformly distributed as z ~ U[0, q(y)]. Note that since z has a uniform distribution conditioned on y, the joint distribution of (y, z) will be uniform over the region under the curve q(x). Now, the sample y is accepted if z lies below the curve p(x). Hence, the accepted pairs (y, z) will be uniform over the region spanned by the curve p(x), so the distribution of the finally accepted points y will be the same as p(x), as desired.

Note that we can always find a proposal distribution which upper bounds the candidate distribution via positive scaling by a factor k. The factor k represents the lift of q(x) over p(x), and hence is inversely proportional to the probability of accepting a particular point under the candidate distribution.
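A minimal sketch of this proposal-distribution version, with a target and proposal of my own choosing (illustrative only): candidate p(x) = sqrt(2/pi) exp(-x^2/2) (half-normal), proposal q(x) = exp(-x) (unit exponential), scaled by k = sqrt(2e/pi) so that k*q(x) >= p(x) for all x >= 0.

```python
import math
import random

def halfnormal_via_rejection(n, seed=0):
    """Rejection sampling with a non-uniform proposal.

    Sample y from q(x) = exp(-x) by the transformation method, then
    z ~ U[0, k*q(y)] with k = sqrt(2e/pi); keep y when z falls below
    the candidate density p(y) = sqrt(2/pi) * exp(-y^2/2).
    """
    rng = random.Random(seed)
    k = math.sqrt(2.0 * math.e / math.pi)
    out = []
    while len(out) < n:
        y = -math.log(1.0 - rng.random())    # y ~ Exp(1) via inverse CDF
        z = rng.random() * k * math.exp(-y)  # z ~ U[0, k*q(y)]
        if z < math.sqrt(2.0 / math.pi) * math.exp(-y * y / 2.0):
            out.append(y)
    return out

ys = halfnormal_via_rejection(100_000)
mean = sum(ys) / len(ys)  # half-normal mean is sqrt(2/pi) ~ 0.798
```

Here 1/k ~ 0.76 of the proposals are accepted, matching the point above that k measures the lift of the proposal over the candidate.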
#10




I will use the terms as they are on page 10.
An intuitive explanation: We assume f(x) >= p(x) for every x. Every x0 sampled from f(x) by the transformation method can be considered as a uniformly random point in the area under the curve f(x). We then generate a second deviate u from U(0, f(x0)). If u > p(x0), we reject the sample x0; otherwise we accept it. Therefore, if the number of sampled values is large enough, we can expect the fraction of rejected samples to be proportional to the size of the area between the curves f(x) and p(x). Hence the accepted samples are expected to be uniformly distributed in the area under the curve p(x), and therefore the samples have the desired distribution p(x). If we don't have f(x) >= p(x) for every x, we need to scale f(x) to M*f(x), where M > 1, to guarantee that M*f(x) >= p(x) everywhere.

A proof: P(x | u < p(x)/f(x)) is proportional to P(u < p(x)/f(x) | x) * P(x) = [p(x)/f(x)] * f(x) = p(x), because we have P(u < p(x)/f(x) | x) = p(x)/f(x) and P(x) = f(x).

