Elad Segment 2

Segment 2 Problems

1) Let's just calculate using Bayes' Theorem. The chance of being in H1, H2, or H3 given that we've observed a troll is:

P(Hi | troll) = P(troll | Hi) P(Hi) / [P(troll | H1) P(H1) + P(troll | H2) P(H2) + P(troll | H3) P(H3)]

The only dangerous setting here is the last one (in the first and second, no trolls remain under the bridge once one has been captured). Nonetheless, the knight's chances of crossing safely given that he's seen a troll are pretty good, roughly 0.682, better than the prior probability of 0.6...
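
For concreteness, here is a minimal sketch of the same update in Python. The priors and likelihoods below are hypothetical placeholders (the actual multinomial values come from the segment 2 slides and aren't reproduced on this page); only the structure of the calculation matters:

    # The two troll-free settings drop out (their likelihood of producing a
    # captured troll is zero), leaving H1, H2, H3. All values below are
    # HYPOTHETICAL placeholders for the prior and likelihoods in the slides.
    prior = {"H1": 0.15, "H2": 0.15, "H3": 0.10}     # P(H_i), placeholder values
    likelihood = {"H1": 1.0, "H2": 0.5, "H3": 1.0}   # P(troll | H_i), placeholders

    # Bayes' Theorem: P(H_i | troll) = P(troll | H_i) P(H_i) / sum_j P(troll | H_j) P(H_j)
    evidence = sum(prior[h] * likelihood[h] for h in prior)
    posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

    # Only the last setting leaves a troll under the bridge after the capture,
    # so the knight crosses safely whenever H1 or H2 holds.
    p_safe = posterior["H1"] + posterior["H2"]
    print(posterior, p_safe)

With the real slide values plugged into prior and likelihood, p_safe comes out to the 0.682 figure above.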

2) To answer the question "what is the probability that a randomly drawn ball is blue?", we need only consider the two EME (exhaustive, mutually exclusive) options for which box was randomly chosen (I assume with equal probability...), times the chance to draw a blue ball from that box, i.e.

P(blue) = P(A) P(blue | A) + P(B) P(blue | B) = (1/2) [P(blue | A) + P(blue | B)]

Now, to answer the question "what is the probability that box B was chosen given that we drew a blue ball?", we need to calculate using Bayes' Theorem (obviously...). So:

P(B | blue) = P(blue | B) P(B) / P(blue) = P(blue | B) / [P(blue | A) + P(blue | B)]
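
Again for concreteness, a small Python sketch of both calculations; the box contents below are hypothetical placeholders standing in for the counts given in the problem statement:

    # Both calculations, with HYPOTHETICAL box contents in place of the counts
    # from the problem statement.
    boxes = {"A": {"blue": 2, "red": 3},   # placeholder contents of box A
             "B": {"blue": 4, "red": 1}}   # placeholder contents of box B
    p_box = {"A": 0.5, "B": 0.5}           # each box chosen with equal probability

    def p_blue_given(box):
        # P(blue | box): the fraction of blue balls in the chosen box
        return boxes[box]["blue"] / sum(boxes[box].values())

    # Total probability over the two EME options:
    p_blue = sum(p_box[b] * p_blue_given(b) for b in boxes)

    # Bayes' Theorem:
    p_B_given_blue = p_blue_given("B") * p_box["B"] / p_blue
    print(p_blue, p_B_given_blue)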


Segment 2 "Think About" Problems

1) Our cognitive "inference engine" as humans (and I believe this holds for animals as well) clearly does not obey the commutativity and associativity properties of evidence. More recent information often receives greater weight than it would under straightforward posterior inference. Evolution is not wrong, however: because the world isn't stationary, it doesn't obey the rules of commutativity and associativity of evidence either. For instance, when estimating the expected value of a stock from its recent prices, a stock worth x on day i and y on day i+1 is in a completely different situation than one worth y on day i and x on day i+1. On the other hand, I think we are often also swayed by first impressions, giving greater weight to preliminary evidence than to any information that follows. E.g., if we first read that "the color green can give you cancer", no matter what evidence we later see to the contrary, we are likely to stay wary of green things. In this specific example it's partly because we are risk averse (and tend to overweight information regarding risk, generally speaking), but I believe it might also be because we have an unintentional bias toward initial, "mind-setting" information about objects.
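
To make the commutativity property itself concrete: for a fixed model with conditionally independent observations, the Bayesian posterior is identical whichever order the evidence arrives in. A quick Python illustration, using a made-up two-hypothesis coin model (the numbers are arbitrary):

    # Order-invariance of Bayesian updating, illustrated with a made-up model:
    # two hypotheses about a coin's heads probability, updated on two flips.
    prior = {"fair": 0.5, "biased": 0.5}     # P(H)
    p_heads = {"fair": 0.5, "biased": 0.8}   # P(heads | H)

    def update(belief, obs):
        # One Bayes update; obs is "H" (heads) or "T" (tails).
        like = {h: p_heads[h] if obs == "H" else 1.0 - p_heads[h] for h in belief}
        z = sum(belief[h] * like[h] for h in belief)
        return {h: belief[h] * like[h] / z for h in belief}

    b1 = update(update(prior, "H"), "T")     # heads first, then tails
    b2 = update(update(prior, "T"), "H")     # tails first, then heads
    print(b1, b2)                            # identical (up to floating point)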

2) The straightforward thing is to just simulate the experiment: first draw a setting at random (out of the 5 possible ones, based on the provided multinomial distribution values), and then have the knight catch a creature at random. I would then keep only the occurrences in which the knight captures a troll, and see in what fraction of those he crosses the bridge safely. We will need to run the generator until we've observed 100,000 of the "troll caught" cases, to satisfy the requirement...
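
A sketch of that generator in Python, with the same caveat as before: the five settings and their multinomial prior below are hypothetical placeholders for the values provided in the slides ('T' = troll, 'G' = gnome):

    import random

    # HYPOTHETICAL settings and prior, standing in for the slide values.
    settings = [["G"], ["G", "G"], ["T"], ["T", "G"], ["T", "T"]]
    weights = [0.30, 0.30, 0.15, 0.15, 0.10]

    caught_troll = 0
    crossed_safely = 0
    while caught_troll < 100_000:             # run until 100,000 troll-caught cases
        creatures = random.choices(settings, weights=weights)[0][:]  # draw a setting
        caught = creatures.pop(random.randrange(len(creatures)))    # catch one creature
        if caught != "T":
            continue                          # knight caught a gnome: discard the trial
        caught_troll += 1
        if "T" not in creatures:              # no trolls left under the bridge
            crossed_safely += 1

    print(crossed_safely / caught_troll)      # estimate of P(cross safely | troll caught)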

3) While our individual priors are distinct, we can sometimes agree on expected values of estimates. I also believe that by sharing beliefs, evidence, and hypotheses, we can, as a community, collectively converge to such expected values (or at least to ranges: "most of the community believes the expected value of X is in [i,j]", etc.). In some situations, though, there cannot be consensus (climate change is a good example of such a contentious subject).