
Segment 2 Jan 18th, Fri

Homeworks

To Calculate

1. If the knight had captured a Gnome instead of a Troll, what would his chances be of crossing safely?

H1: the knight crosses bridge 1; H2: he crosses bridge 2; H3: he crosses bridge 3

G: come across (and capture) a gnome at the bridge

P(Hi|G) is proportional to P(G|Hi)P(Hi). With the priors and gnome probabilities from the segment (P(H1) = P(H2) = 1/5, P(H3) = 3/5; P(G|H1) = 3/5, P(G|H2) = 4/5, P(G|H3) = 1):

P(H3|G) = P(G|H3)P(H3) / [P(G|H1)P(H1) + P(G|H2)P(H2) + P(G|H3)P(H3)]

= [1*(3/5)] / [(3/5)*(1/5) + (4/5)*(1/5) + 1*(3/5)] = 15/22

and likewise P(H1|G) = 3/22, P(H2|G) = 4/22.

Crossing safely means meeting a gnome rather than a troll on the crossing itself, so we average the gnome probability over the posterior:

P(cross safely|G) = P(G|H1)P(H1|G) + P(G|H2)P(H2|G) + P(G|H3)P(H3|G)

= (3/5)*(3/22) + (4/5)*(4/22) + 1*(15/22) = 100/110 = 10/11

(The same calculation with a captured troll instead gives the 2/3 quoted in the segment.)
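As a quick check on this arithmetic, here is a minimal Monte Carlo sketch in Python (the encounter model and the numbers are the ones assumed above; the seed is an arbitrary choice):

```python
import random

# Bridge priors and per-encounter gnome probabilities assumed above.
PRIORS = [1/5, 1/5, 3/5]
P_GNOME = [3/5, 4/5, 1.0]

rng = random.Random(42)
n_gnome = n_safe = 0
for _ in range(100_000):
    bridge = rng.choices([0, 1, 2], weights=PRIORS)[0]
    captured_gnome = rng.random() < P_GNOME[bridge]  # creature captured at the bridge
    crossed_gnome = rng.random() < P_GNOME[bridge]   # independent encounter while crossing
    if captured_gnome:            # condition on having captured a gnome
        n_gnome += 1
        n_safe += crossed_gnome   # safe crossing = no troll met
print(n_safe / n_gnome)           # should be close to 10/11 ~ 0.909
```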

2. Suppose that we have two identical boxes, A and B. A contains 5 red balls and 3 blue balls. B contains 2 red balls and 4 blue balls. A box is selected at random and exactly one ball is drawn from the box.

What is the probability that it is blue?

P(Blue) = (3/8)*(1/2) + (4/6)*(1/2) = 3/16 + 1/3 = 25/48

If it is blue, what is the probability that it came from box B?

P(B|Blue) = P(Blue|B)P(B) / [P(Blue|B)P(B) + P(Blue|A)P(A)] = [(2/3)*(1/2)] / [(3/8)*(1/2) + (2/3)*(1/2)] = (16/48) / (25/48) = 16/25
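The same kind of sanity check works here; a short Python sketch with the box contents as given:

```python
import random

# Box contents from the problem statement.
BOXES = {"A": ["red"] * 5 + ["blue"] * 3,
         "B": ["red"] * 2 + ["blue"] * 4}

rng = random.Random(0)
n_trials = 100_000
n_blue = n_blue_from_b = 0
for _ in range(n_trials):
    box = rng.choice(["A", "B"])   # a box is selected at random
    ball = rng.choice(BOXES[box])  # exactly one ball is drawn from it
    if ball == "blue":
        n_blue += 1
        n_blue_from_b += (box == "B")
print(n_blue / n_trials)           # ~ 25/48 ~ 0.521
print(n_blue_from_b / n_blue)      # ~ 16/25 = 0.64
```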

To Think About

1. Do you think that the human brain's intuitive "inference engine" obeys the commutativity and associativity of evidence? For example, are we more likely to be swayed by recent, rather than older, evidence? How can evolution get this wrong if the mathematical formulation is correct?

I think the brain's intuitive inference engine is constrained by its "software" design, and so shows imperfect performance on certain tasks. Yes, I think we are more influenced by recent than by older evidence. From a Bayesian point of view, equivalent background information can be weighted differently depending on when it was perceived and applied to a prediction. That is still acceptable to a Bayesian, since the prior is subjective anyway. This "mistake" is the result of memory decay and of proactive and retroactive interference.


The left ventrolateral prefrontal cortex (VLPFC) and the left anterior prefrontal cortex (APFC) are believed to be involved in resolving proactive interference. The following fMRI images show the brain regions with increased activation during (A) forgetting versus control, and (B) recent-interference trials.

Nee DE, Jonides J, Berman MG (December 2007). "Neural mechanisms of proactive interference-resolution". NeuroImage 38(4): 740–751. doi:10.1016/j.neuroimage.2007.07.066. PMC 2206737. PMID 17904389.

A Forget.png B Recency.png

Then why didn't brains evolve against information decay and interference, along with their other limitations, and become perfect? It is not easy. Any change comes with a tradeoff: a property that is advantageous in one circumstance can be disadvantageous in another. Selection, moreover, acts on individuals as an integrated whole, so adaptation along one axis has to be traded against others. A brain with better memory, for example, may consume more energy, and so can be selected against when selection for conserving energy outweighs selection for good memory. When the evolutionary trajectory gets close to the balanced optimum among multiple traits, further change is expected to slow down. The following figure from Orr (2005) helps visualize this idea.

Adaptation.png


2. How would you simulate the Knight/Troll/Gnome problem on a computer, so that you could run it 100,000 times and see if the Knight's probability of crossing safely converges to 2/3?

3. Since different observers have different background information, isn't Bayesian inference useless for making social decisions (like what to do about climate change, for example)? How can there ever be any consensus on probabilities that are fundamentally subjective?

Class Exercise

1. Simulate the Knight/Troll/Gnome problem 100,000 times. Plot (fraction of safe crossings so far) vs. (number of simulated trials so far) to confirm that this fraction converges to the probability calculated in the segment.
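One possible implementation, sketched in Python with matplotlib (the troll probabilities are the complements of the gnome probabilities used in the homework above, and only episodes in which the knight actually captures a troll count as trials):

```python
import random
import matplotlib.pyplot as plt

PRIORS = [1/5, 1/5, 3/5]   # probability of being at bridge 1, 2, 3
P_TROLL = [2/5, 1/5, 0.0]  # per-encounter troll probabilities

rng = random.Random(1)
n_kept = safe_so_far = 0
fractions = []
while n_kept < 100_000:
    bridge = rng.choices([0, 1, 2], weights=PRIORS)[0]
    if rng.random() < P_TROLL[bridge]:                  # the knight captures a troll ...
        n_kept += 1
        safe_so_far += rng.random() >= P_TROLL[bridge]  # ... then crosses; safe if no troll
        fractions.append(safe_so_far / n_kept)

plt.plot(range(1, n_kept + 1), fractions)
plt.axhline(2/3, linestyle="--")  # the probability calculated in the segment
plt.xlabel("number of simulated trials so far")
plt.ylabel("fraction of safe crossings so far")
plt.show()
```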

Troll.png Cfplot.png

2. You are an oracle that, when asked, says "yes" with probability 1/2 and "no" with probability 1/2. How do you do this using only a coin that comes up heads with unknown but constant probability p?

We found von Neumann's (1951) method:

Say the probability of getting heads is p; then the probability of getting tails is 1-p.

If we toss the coin twice, the four possible outcomes have the following probabilities:

HH: p*p HT: p*(1-p) TH: (1-p)*p TT: (1-p)*(1-p)

HT and TH have the same probability, so we answer "yes" on HT and "no" on TH; on HH or TT we discard the pair and toss twice again.

Screen Shot 2013-01-23 at 11.32.09 AM.png

This figure is taken from Elchanan Mossel and Yuval Peres (2004), "New coins from old: computing with unknown bias".

• Simulate your scheme to confirm that it works.
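A minimal Python sketch of the scheme (the bias value 0.3 and the seed are arbitrary test choices):

```python
import random

def biased_coin(p, rng):
    """Heads (True) with unknown but constant probability p."""
    return rng.random() < p

def oracle(p, rng):
    """Von Neumann's trick: toss twice; answer only on HT or TH."""
    while True:
        first, second = biased_coin(p, rng), biased_coin(p, rng)
        if first != second:
            return first  # HT -> "yes", TH -> "no"; each occurs with probability p*(1-p)

rng = random.Random(2)
answers = [oracle(0.3, rng) for _ in range(100_000)]
print(sum(answers) / len(answers))  # ~ 0.5 regardless of the coin's bias
```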

Newcoin.png