Segment 37. A Few Bits of Information Theory

From Computational Statistics Course Wiki
Revision as of 13:45, 22 April 2016 by Bill Press (talk | contribs)

Watch this segment


The direct YouTube link is

Links to the slides: PDF file or PowerPoint file

Class Activity

There is no general way to estimate the entropy of a (non i.i.d.) process from the data it generates, because you may or may not be able to recognize its entropy-lowering internal structure. So, in general, even an accurate "estimate" is only an upper bound on the entropy.

Let's see how well we can do at estimating the true entropy of five different strings in the alphabet A, C, G, T. (Bill knows the answer, because he knows how they were generated. But he's not telling!)

The more you study the data, the better you'll do! (If you know how to use Hidden Markov Models, which we didn't have room for in this course, you might do even better.)
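A natural first attack on the activity is the plug-in (empirical) estimate: count length-k substrings, take the Shannon entropy of their frequencies, and difference successive orders to estimate the entropy rate in bits per symbol. For a stationary source this order-k estimate is non-increasing in k, so studying longer contexts can only tighten your upper bound. Here is a minimal sketch; the function names are ours, not from the segment, and finite-sample effects will bias the estimates for large k.

```python
import math
from collections import Counter

def block_entropy(s, k):
    # Shannon entropy (bits) of the empirical distribution of length-k substrings
    counts = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_rate_estimate(s, k):
    # Plug-in estimate of the conditional entropy H(X_k | X_1..X_{k-1}),
    # i.e. H_k - H_{k-1}, in bits per symbol
    if k == 1:
        return block_entropy(s, 1)
    return block_entropy(s, k) - block_entropy(s, k - 1)

# A string with obvious internal structure: single-letter frequencies look
# maximally random (2 bits/symbol), but the order-2 estimate detects the
# determinism and drops to essentially zero.
s = "ACGT" * 1000
for k in (1, 2, 3):
    print(k, round(entropy_rate_estimate(s, k), 4))
```

For a genuinely i.i.d. uniform string over {A, C, G, T} the estimates stay near 2 bits/symbol at every order, while hidden structure (periodicity, Markov dependence) shows up as a drop at the order long enough to capture it. Note that for large k the number of possible k-grams outruns the data, so the plug-in estimate is eventually biased low; that is one reason an honest answer for Bill's five strings is only an upper bound.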