Segment 20. Nonlinear Least Squares Fitting

From Computational Statistics Course Wiki

Watch this segment

(Don't worry, what you see statically below is not the beginning of the segment. Press the play button to start at the beginning.)


The direct YouTube link is

Links to the slides: PDF file or PowerPoint file


To Calculate

1. (See lecture slide 3.) For one-dimensional <math>x</math>, the model <math>y(x\,|\,\mathbf{b})</math> is called "linear" if <math>y(x\,|\,\mathbf{b}) = \sum_k b_k X_k(x)</math>, where the <math>X_k(x)</math> are arbitrary known functions of <math>x</math>. Show that minimizing <math>\chi^2</math> produces a set of linear equations (called the "normal equations") for the parameters <math>b_k</math>.

2. A simple example of a linear model is <math>y(x\,|\,\mathbf{b}) = b_1 + b_2 x</math>, which corresponds to fitting a straight line to data. What are the MLE estimates of <math>b_1</math> and <math>b_2</math> in terms of the data: the <math>x_i</math>'s, <math>y_i</math>'s, and <math>\sigma_i</math>'s?
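As a sanity check on the calculations above, here is a minimal numerical sketch (using made-up straight-line data, not the class data file): form the sigma-weighted design matrix for a general linear model and solve the normal equations directly.

```python
import numpy as np

# Illustrative sketch: fit y(x|b) = sum_k b_k * X_k(x) by minimizing chi^2.
# The data below are synthetic, generated here purely for demonstration.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
sigma = np.full_like(x, 0.5)              # per-point measurement errors
y = 1.0 + 2.0 * x + rng.normal(0.0, sigma)  # true parameters b = (1, 2)

# Weighted design matrix A_ik = X_k(x_i)/sigma_i and data vector d_i = y_i/sigma_i.
# Then chi^2 = |A b - d|^2, and setting its gradient to zero gives the
# normal equations (A^T A) b = A^T d, which are linear in b.
X = np.column_stack([np.ones_like(x), x])  # basis functions X_1 = 1, X_2 = x
A = X / sigma[:, None]
d = y / sigma
b_hat = np.linalg.solve(A.T @ A, A.T @ d)

print(b_hat)  # should be close to the true (1, 2)
```

The same pattern works for any linear model: only the columns of the design matrix change.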

To Think About

1. We often rather casually assume a uniform prior on the parameters <math>\mathbf{b}</math>. If the prior is not uniform, then is minimizing <math>\chi^2</math> the right thing to do? If not, then what should you do instead? Can you think of a situation where the difference would be important?

2. What if, in lecture slide 2, the measurement errors were <math>e_i \sim \mathrm{Cauchy}(0,\sigma_i)</math> instead of <math>e_i \sim N(0,\sigma_i)</math>? How would you find MLE estimates for the parameters <math>\mathbf{b}</math>?
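One possible answer to the first question: with a non-uniform prior, maximize the posterior rather than the likelihood, i.e. minimize <math>\tfrac{1}{2}\chi^2(\mathbf{b}) - \ln p(\mathbf{b})</math>. A minimal sketch, assuming made-up data and a Gaussian prior on the slope (both are illustrative choices, not from the lecture):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic, sparse, noisy data, so that the prior actually matters.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 10)
sigma = np.full_like(x, 1.0)
y = 1.0 + 2.0 * x + rng.normal(0.0, sigma)

tau = 0.5  # assumed width of a Gaussian prior b2 ~ N(0, tau^2) on the slope

def chi2(b):
    return np.sum(((y - (b[0] + b[1] * x)) / sigma) ** 2)

def neg_log_posterior(b):
    # MAP objective: chi^2/2 minus log prior (up to constants).
    return 0.5 * chi2(b) + 0.5 * (b[1] / tau) ** 2

b_ls = minimize(chi2, x0=[0.0, 0.0]).x               # plain chi^2 fit
b_map = minimize(neg_log_posterior, x0=[0.0, 0.0]).x  # MAP fit

# The zero-centered prior shrinks the slope relative to the chi^2 fit:
print(b_ls[1], b_map[1])
```

With a uniform prior the extra term is constant and the two fits coincide, which is why minimizing <math>\chi^2</math> alone is usually harmless.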
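For the second question, consider the concrete case of Cauchy-distributed errors <math>e_i \sim \mathrm{Cauchy}(0,\sigma_i)</math>. The negative log-likelihood is then <math>\sum_i \ln\!\left[1 + \left((y_i - y(x_i\,|\,\mathbf{b}))/\sigma_i\right)^2\right]</math> (plus a constant), which is not quadratic in <math>\mathbf{b}</math>, so there are no normal equations and we must minimize numerically. A sketch with made-up data:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic straight-line data with heavy-tailed (Cauchy) measurement errors.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
sigma = np.full_like(x, 0.5)
y = 1.0 + 2.0 * x + sigma * rng.standard_cauchy(x.size)

def neg_log_like(b):
    # -ln L for Cauchy errors, up to an additive constant.
    r = (y - (b[0] + b[1] * x)) / sigma
    return np.sum(np.log1p(r * r))

b0 = np.polyfit(x, y, 1)[::-1]  # ordinary least-squares starting guess
b_hat = minimize(neg_log_like, x0=b0, method="Nelder-Mead").x

print(b_hat)  # should be near the true (1, 2)
```

Note that this fit is robust: the log term grows only logarithmically in the residual, so outliers pull the estimates far less than they would in a <math>\chi^2</math> fit.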

Class Activity

Here is some data: Media:Chisqfitdata.txt

In class we will work on fitting this to some models as explained here.

Here are Bill's numerical answers, so that you can see whether you are on the right track (or whether Bill got it wrong!): Media:Chisqfitanswers.txt