# Segment 20. Nonlinear Least Squares Fitting

#### Watch this segment

(Don't worry, what you see statically below is not the beginning of the segment. Press the play button to start at the beginning.)

{{#widget:Iframe |url=http://www.youtube.com/v/xtBCGPHRcb0&hd=1 |width=800 |height=625 |border=0 }}

Links to the slides: PDF file or PowerPoint file

### Problems

#### To Calculate

1. (See lecture slide 3.) For one-dimensional data $x_i$, the model is called "linear" if $y(x) = \sum_k b_k X_k(x)$, where the $X_k(x)$ are arbitrary known functions of $x$. Show that minimizing $\chi^2$ produces a set of linear equations (called the "normal equations") for the parameters $b_k$.

2. A simple example of a linear model is $y(x) = b_1 + b_2 x$, which corresponds to fitting a straight line to data. What are the MLE estimates of $b_1$ and $b_2$ in terms of the data: the $x_i$'s, $y_i$'s, and $\sigma_i$'s?
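The two calculations above can be checked numerically. Here is a minimal sketch (the synthetic straight-line data, the noise level, and the choice of basis functions $1$ and $x$ are all assumptions for illustration): it forms and solves the normal equations of problem 1, then compares the result with the closed-form weighted straight-line estimates of problem 2.

```python
import numpy as np

# Assumed synthetic data for illustration: straight line plus Gaussian noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
sigma = np.full_like(x, 0.1)                 # measurement errors sigma_i
y = 2.0 + 3.0 * x + rng.normal(0.0, sigma)   # "true" parameters b1=2, b2=3

# Design matrix with rows X_k(x_i)/sigma_i; the basis functions here are
# 1 and x, but any known functions X_k(x) keep the model linear in the b_k
A = np.column_stack([np.ones_like(x), x]) / sigma[:, None]
rhs = y / sigma

# Normal equations: (A^T A) b = A^T rhs, a linear system for the parameters
b_normal = np.linalg.solve(A.T @ A, A.T @ rhs)

# Closed-form weighted straight-line fit (problem 2) via the standard sums
w = 1.0 / sigma**2
S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
Delta = S * Sxx - Sx**2
b1 = (Sxx * Sy - Sx * Sxy) / Delta   # intercept
b2 = (S * Sxy - Sx * Sy) / Delta     # slope

print(b_normal, (b1, b2))            # the two routes agree
```

Note that for this two-parameter case the matrix of the normal equations is just $\begin{pmatrix} S & S_x \\ S_x & S_{xx} \end{pmatrix}$, so the closed form is literally Cramer's rule applied to the linear system.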

#### To Think About

1. We often rather casually assume a uniform prior on the parameters $\mathbf{b}$. If the prior is not uniform, then is minimizing $\chi^2$ the right thing to do? If not, what should you do instead? Can you think of a situation where the difference would be important?
2. What if, in lecture slide 2, the measurement errors were Cauchy distributed, $e_i \sim \mathrm{Cauchy}(0,\sigma_i)$, instead of $e_i \sim N(0,\sigma_i)$? How would you find MLE estimates for the parameters $\mathbf{b}$?
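One way to approach the last question: with Cauchy errors the negative log-likelihood is (up to constants) $\sum_i \log\!\bigl(1 + r_i^2\bigr)$ with $r_i = (y_i - y(x_i \mid \mathbf{b}))/\sigma_i$, which is not quadratic in the parameters, so the normal equations no longer apply and one must minimize numerically. A sketch, where the synthetic data and the choice of SciPy's Nelder-Mead optimizer are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed synthetic data: straight line with Cauchy-distributed errors
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)
sigma = np.full_like(x, 0.1)
y = 1.0 + 2.0 * x + sigma * rng.standard_cauchy(x.size)

# Cauchy negative log-likelihood, constants dropped: sum of log(1 + r_i^2)
# with r_i = (y_i - model_i)/sigma_i.  Not quadratic in b, so we cannot
# reduce the minimization to a linear system; use a general optimizer.
def nll(b):
    r = (y - (b[0] + b[1] * x)) / sigma
    return np.sum(np.log1p(r**2))

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
print(fit.x)   # MLE estimates of the two parameters
```

The $\log(1+r^2)$ term grows much more slowly than the Gaussian $r^2/2$, so this fit is far less sensitive to outliers, which is exactly why heavy-tailed error models are worth thinking about here.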