# Eleisha's Segment 20: Non-linear Least Squares Fitting

## To Calculate

1. (See lecture slide 3.) For one-dimensional $x$, the model $\displaystyle y(x | \mathbf b)$ is called "linear" if $\displaystyle y(x | \mathbf b) = \sum_k b_k X_k(x)$, where the $\displaystyle X_k(x)$ are arbitrary known functions of $\displaystyle x$. Show that minimizing $\displaystyle \chi^2$ produces a set of linear equations (called the "normal equations") for the parameters $\displaystyle b_k$.

2. A simple example of a linear model is $\displaystyle y(x | \mathbf b) = b_0 + b_1 x$, which corresponds to fitting a straight line to data. What are the MLE estimates of $\displaystyle b_0$ and $\displaystyle b_1$ in terms of the data, the $\displaystyle x_i$'s, $\displaystyle y_i$'s, and $\displaystyle \sigma_i$'s?
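A minimal numerical check of both items above (a sketch assuming NumPy; the synthetic data and true parameter values are invented for illustration): build the design matrix with entries $X_k(x_i)/\sigma_i$, solve the normal equations directly, and confirm the result matches the closed-form weighted least-squares formulas for the straight-line fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical straight-line data: true b0 = 1, b1 = 2, Gaussian errors
x = np.linspace(0., 10., 50)
sigma = 0.5 * np.ones_like(x)                 # per-point measurement errors
y = 1.0 + 2.0 * x + rng.normal(0., sigma)

# Design matrix A with A_ik = X_k(x_i) / sigma_i; for the straight line
# the basis functions are X_0(x) = 1 and X_1(x) = x.
A = np.column_stack([np.ones_like(x), x]) / sigma[:, None]
rhs = y / sigma

# Normal equations (A^T A) b = A^T rhs -- linear in b, as item 1 asks you to show
b_normal = np.linalg.solve(A.T @ A, A.T @ rhs)

# Closed-form weighted least-squares estimates with weights w_i = 1/sigma_i^2
w = 1.0 / sigma**2
S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
Delta = S * Sxx - Sx**2
b0 = (Sxx * Sy - Sx * Sxy) / Delta
b1 = (S * Sxy - Sx * Sy) / Delta

print(b_normal)     # MLE (b0, b1) from the normal equations
print(b0, b1)       # identical values from the closed-form expressions
```

Because the model is linear in $\mathbf b$, the two routes agree to machine precision; the normal-equations form generalizes to any basis functions $X_k$, while the closed form is the item-2 answer specialized to a straight line.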

## To Think About

1. We often rather casually assume a uniform prior $\displaystyle P(\mathbf b) = \text{constant}$ on the parameters $\displaystyle \mathbf b$. If the prior is not uniform, is minimizing $\displaystyle \chi^2$ still the right thing to do? If not, what should you do instead? Can you think of a situation where the difference would be important?
2. What if, in lecture slide 2, the measurement errors were $\displaystyle e_i \sim \text{Cauchy}(0,\sigma_i)$ instead of $\displaystyle e_i \sim N(0,\sigma_i)$? How would you find MLE estimates for the parameters $\displaystyle \mathbf b$?
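For Cauchy errors the likelihood is no longer Gaussian, so minimizing $\chi^2$ is not the MLE; instead one minimizes the negative log-likelihood $\sum_i \ln\!\left[1 + \left((y_i - y(x_i|\mathbf b))/\sigma_i\right)^2\right]$, which has no closed-form solution and calls for a numerical optimizer. A sketch using `scipy.optimize.minimize` (the synthetic data, seed, and robust starting guess are all invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical straight-line data with Cauchy(0, sigma_i) measurement errors
x = np.linspace(0., 10., 200)
sigma = 0.5 * np.ones_like(x)
y = 1.0 + 2.0 * x + sigma * rng.standard_cauchy(x.size)   # true b0 = 1, b1 = 2

def neg_log_like(b):
    """Cauchy negative log-likelihood (additive constants dropped)."""
    resid = (y - (b[0] + b[1] * x)) / sigma
    return np.sum(np.log1p(resid**2))

# Robust starting guess (median pairwise slope, then median intercept),
# since Cauchy outliers can badly corrupt an ordinary least-squares start
n = x.size // 2
b1_init = np.median((y[n:] - y[:n]) / (x[n:] - x[:n]))
b0_init = np.median(y - b1_init * x)

# Minimize the negative log-likelihood numerically instead of chi^2
fit = minimize(neg_log_like, x0=[b0_init, b1_init], method="Nelder-Mead")
b0_hat, b1_hat = fit.x
print(b0_hat, b1_hat)   # MLE estimates of intercept and slope
```

The `log1p` term grows only logarithmically in the residual, so single wild outliers barely move the fit; this is the sense in which the Cauchy MLE is a "robust" estimator compared with $\chi^2$.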