/Segment41

From Computational Statistics (CSE383M and CS395T)

To Calculate

1. Show that the waiting times (times between events) in a Poisson process are Exponentially distributed. (I think we've done this before.) <math> p(\tau)\, d\tau = </math> P(0 counts in <math> \tau </math>) * P(a count in the last <math> d\tau </math>) <math> = \mathrm{Poisson}(0, \lambda \tau) \, (\lambda \, d\tau) = \frac{(\lambda\tau)^{0}}{0!} e^{-\lambda \tau} \lambda \, d\tau = \lambda e^{-\lambda \tau} d\tau</math>
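This result can be sanity-checked by simulation (a sketch; the rate <math>\lambda</math>, window <math>T</math>, and seed are assumptions, not from the problem): a Poisson process on <math>[0, T]</math> has a Poisson number of events placed uniformly, and the gaps between sorted event times should pass a Kolmogorov-Smirnov test against the Exponential(<math>\lambda</math>) distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lam, T = 2.0, 50_000.0  # assumed rate and observation window

# A Poisson process on [0, T]: a Poisson(lam*T) number of events
# placed at independent uniform positions, then sorted.
n = rng.poisson(lam * T)
times = np.sort(rng.uniform(0.0, T, size=n))
gaps = np.diff(times)

# KS test of the gaps against Exponential with rate lam (scale 1/lam)
stat, pval = stats.kstest(gaps, 'expon', args=(0, 1.0 / lam))
```

With a window this large the KS statistic should be tiny, consistent with exponentially distributed waiting times.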


2. Plot the pdfs of the waiting times between (a) every other Poisson event, and (b) every Poisson event at half the rate. Poi.png

from scipy.stats import poisson
import numpy as np
import matplotlib.pyplot as plt

# Compare the Poisson count pmfs: every other event (rate 4, even k only)
# vs. every event at half the rate (rate 2)
k2 = np.arange(0, 50, 2)
k = np.arange(0, 50, 1)
plt.plot(k2, poisson.pmf(k2, 4), color='r')
plt.plot(k, poisson.pmf(k, 2), color='g')
plt.show()
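The waiting-time pdfs themselves can also be plotted directly: the gap between every other event is the sum of two exponentials, i.e. Gamma(2, <math>\lambda</math>), while the gap at half the rate is Exponential with rate <math>\lambda/2</math>. A sketch, assuming <math>\lambda = 1</math> (the rate is an assumption, not given in the problem):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gamma, expon

lam = 1.0  # assumed rate
t = np.linspace(0, 10, 400)

# (a) waiting time between every other event: Gamma(shape=2, scale=1/lam)
pdf_a = gamma.pdf(t, a=2, scale=1.0 / lam)
# (b) waiting time at half the rate: Exponential(rate lam/2), scale 2/lam
pdf_b = expon.pdf(t, scale=2.0 / lam)

plt.plot(t, pdf_a, 'r', label=r'every other event: Gamma(2, $\lambda$)')
plt.plot(t, pdf_b, 'g', label=r'half rate: Exp($\lambda/2$)')
plt.xlabel(r'$\tau$')
plt.ylabel(r'$p(\tau)$')
plt.legend()
plt.show()
```

Both distributions have the same mean <math>2/\lambda</math>, but the Gamma pdf starts at zero and peaks at <math>\tau = 1/\lambda</math>, while the exponential is largest at <math>\tau = 0</math>.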

3. Show, using characteristic functions, that the waiting times between every Nth event in a Poisson process are Gamma distributed. (I think we've also done this before, but it is newly relevant in this segment.)


Let <math> \tau </math> be the duration in which N events happen. Then

<math> p(\tau)d\tau=</math> P(N-1 counts in <math> \tau</math> ) * P( last count in <math> d\tau</math>)

<math> = \frac {(\lambda\tau)^{(N-1)} *e^{-\lambda\tau} \lambda}{(N-1)!} d\tau =\frac {\lambda^N \tau^{(N-1)} e^{-\lambda\tau} }{(N-1)!} d\tau </math>

<math> \Phi( t) = \int_0^\infty e^{it\tau} *\frac {\lambda^N \tau^{(N-1)} e^{-\lambda\tau} }{(N-1)!} d\tau=\lambda ^N(\lambda-it)^{-N}</math>

This is the characteristic function of <math> \mathrm{Gamma}(N, \lambda) </math>.
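The closed form <math>\lambda^N(\lambda - it)^{-N}</math> can be checked numerically against the empirical characteristic function of sums of N exponential gaps (a sketch; the rate, N, test frequency, and sample size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, N = 2.0, 3   # assumed rate and step
t = 0.7           # one assumed test frequency

# Waiting time to every Nth event = sum of N independent exponential gaps
tau = rng.exponential(scale=1.0 / lam, size=(200_000, N)).sum(axis=1)

# Empirical characteristic function E[exp(i t tau)]
phi_emp = np.mean(np.exp(1j * t * tau))
# Closed form from the derivation above
phi_gamma = lam**N * (lam - 1j * t) ** (-N)
```

The two agree to within the Monte Carlo error of the sample mean.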


In class

Teamed up with Dan and Rcardenas. Rcardenas was our coder.


import numpy as np
import scipy.stats as sp
import matplotlib.pyplot as plt


def counts(x):
    """Count how many times each digit 0-9 appears in a row."""
    n = np.zeros(10)
    for i in x:
        n[int(i)] += 1
    return n

data = np.loadtxt('data/Gibbs_data.txt')

# Random initial assignment of each row to model P (1) or Q (0)
assignments = sp.binom.rvs(1, 0.5, size=1000)
hist_p = []
for it in range(1000):
    ps = np.zeros(10)
    qs = np.zeros(10)
    # Count digit frequencies for the rows currently assigned to p and q
    for i in range(1000):
        if assignments[i] == 1:  # P
            ps += counts(data[i])
        else:
            qs += counts(data[i])
    # Normalize the counts into probability vectors
    ps = ps / sum(ps)
    qs = qs / sum(qs)

    hist_p.append(ps)

    # Reassign each row in proportion to its likelihood under p vs. q
    for i in range(1000):
        p = np.prod([ps[int(x)] for x in data[i]])
        q = np.prod([qs[int(x)] for x in data[i]])
        rel = p / (p + q)
        if rel > np.random.uniform():
            assignments[i] = 1
        else:
            assignments[i] = 0

plt.hist(hist_p, bins=30)
print(hist_p[-1])
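One caveat on the reassignment step: for long rows the per-row likelihood products can underflow. A log-space variant of that step avoids this (a sketch with hypothetical stand-in values for ps, qs, and one data row, not the actual Gibbs_data.txt values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the quantities in the sampler above
ps = np.full(10, 0.1)            # uniform digit probabilities
w = np.arange(1.0, 11.0)
qs = w / w.sum()                 # a skewed digit distribution
row = rng.integers(0, 10, size=100)  # one row of 100 digits

# Sum log-probabilities instead of multiplying probabilities,
# then normalize with the log-sum-exp trick
logp = np.sum(np.log(ps[row]))
logq = np.sum(np.log(qs[row]))
m = max(logp, logq)
rel = np.exp(logp - m) / (np.exp(logp - m) + np.exp(logq - m))
```

rel is the same P-vs-Q assignment probability as p/(p+q) in the sampler, computed without forming the raw products.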

Gibbs.png

Here's our p vector:


[ 0.12601942  0.08621359  0.11456311  0.06932039  0.08679612  0.11805825
  0.11902913  0.07825243  0.11572816  0.08601942]