The maximum likelihood estimate $\hat{\theta}_{\mathrm{ML}}(r)$ of a nonrandom parameter is, simply, that value which maximizes the likelihood function (the probability density of the observations, viewed as a function of the unknown parameter).
Let $r(l)$ be a sequence of independent, identically distributed Gaussian random variables having an unknown mean $\theta$ but a known variance $\sigma_{n}^{2}$. Often, we cannot assign a probability density to a parameter of a random variable's density; we simply do not know what the parameter's value is. Maximum likelihood estimates are often used in such problems. In the specific case here, the derivative of the logarithm of the likelihood function equals $$\frac{\partial \ln p(\theta, r)}{\partial \theta}=\frac{1}{\sigma_{n}^{2}}\sum_{l=0}^{L-1}\left(r(l)-\theta\right)$$ The solution of this equation is the maximum likelihood estimate, which equals the sample average: $$\hat{\theta}_{\mathrm{ML}}=\frac{1}{L}\sum_{l=0}^{L-1} r(l)$$ The expected value of this estimate, $E\left[\hat{\theta}_{\mathrm{ML}}\right]$, equals the actual value $\theta$, showing that the maximum likelihood estimate is unbiased. The mean-squared error equals $\frac{\sigma_{n}^{2}}{L}$, and we infer that this estimate is consistent.
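As a quick numerical sanity check, the sample-average estimate and its claimed properties can be sketched with NumPy; the specific values of the mean, noise variance, and record length below are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 3.0      # true (nonrandom) mean, unknown to the estimator
sigma_n = 2.0    # known noise standard deviation (illustrative value)
L = 50           # number of observations per record

# One realization: the maximum likelihood estimate is the sample average.
r = rng.normal(theta, sigma_n, size=L)
theta_ml = r.mean()

# Monte Carlo check of unbiasedness and of the mean-squared error sigma_n^2 / L.
trials = 20000
estimates = rng.normal(theta, sigma_n, size=(trials, L)).mean(axis=1)
print(estimates.mean())                    # close to theta (unbiased)
mse = ((estimates - theta) ** 2).mean()
print(mse, sigma_n**2 / L)                 # empirical MSE close to sigma_n^2 / L
```

Because the mean-squared error shrinks as $1/L$, rerunning with larger $L$ drives the estimate toward the true mean, which is the consistency claim above.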
The maximum likelihood procedure (as well as the others being discussed) can be easily generalized to situations where more than one parameter must be estimated. Letting $\theta$ denote the parameter vector, the likelihood function is now expressed as $p(\theta, r)$. The maximum likelihood estimate $\hat{\theta}_{\mathrm{ML}}$ of the parameter vector is given by the location of the maximum of the likelihood function (or equivalently of its logarithm). Using derivatives, the calculation of the maximum likelihood estimate becomes $$\left.\nabla_{\theta}\ln p(\theta, r)\right|_{\theta=\hat{\theta}_{\mathrm{ML}}}=0$$
Let's extend the previous example to the situation where neither the mean nor the variance of a sequence of independent Gaussian random variables is known. The likelihood function is, in this case, $$p(\theta, r)=\prod_{l=0}^{L-1}\frac{1}{\sqrt{2\pi\theta_{2}}}e^{-\frac{1}{2\theta_{2}}(r(l)-\theta_{1})^{2}}$$ Evaluating the partial derivatives of the logarithm of this quantity, we find the following set of two equations to solve for $\theta_{1}$, representing the mean, and $\theta_{2}$, representing the variance: $$\frac{1}{\theta_{2}}\sum_{l=0}^{L-1}\left(r(l)-\theta_{1}\right)=0$$ $$-\frac{L}{2\theta_{2}}+\frac{1}{2\theta_{2}^{2}}\sum_{l=0}^{L-1}\left(r(l)-\theta_{1}\right)^{2}=0$$ The solutions are the sample mean and the $\frac{1}{L}$-normalized sample variance: $$\hat{\theta}_{1}^{\mathrm{ML}}=\frac{1}{L}\sum_{l=0}^{L-1} r(l),\qquad \hat{\theta}_{2}^{\mathrm{ML}}=\frac{1}{L}\sum_{l=0}^{L-1}\left(r(l)-\hat{\theta}_{1}^{\mathrm{ML}}\right)^{2}$$
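A minimal sketch can confirm that the sample mean and the $\frac{1}{L}$-normalized sample variance jointly satisfy both stationarity conditions of the log-likelihood; the true mean and variance used to generate the data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 100
r = rng.normal(3.0, 2.0, size=L)   # Gaussian data; mean and variance unknown to the estimator

# Joint maximum likelihood estimates: sample mean, 1/L-normalized sample variance.
theta1_ml = r.mean()
theta2_ml = ((r - theta1_ml) ** 2).mean()

# Substitute the estimates back into the two partial-derivative equations;
# both should be (numerically) zero at the maximum of the log-likelihood.
eq1 = (r - theta1_ml).sum() / theta2_ml
eq2 = -L / (2 * theta2_ml) + ((r - theta1_ml) ** 2).sum() / (2 * theta2_ml**2)
print(eq1, eq2)   # both approximately zero
```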
The expected value of $\hat{\theta}_{1}^{\mathrm{ML}}$ equals the actual value of $\theta_{1}$; thus, this estimate is unbiased. However, the expected value of the estimate of the variance equals $\theta_{2}\frac{L-1}{L}$. The estimate of the variance is biased, but asymptotically unbiased. This bias can be removed by replacing the normalization of $L$ in the averaging computation for $\hat{\theta}_{2}^{\mathrm{ML}}$ by $L-1$.
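The bias factor $\frac{L-1}{L}$ and its removal can be demonstrated empirically; NumPy's `ddof` argument to `var` switches between the $\frac{1}{L}$ and $\frac{1}{L-1}$ normalizations, and the true parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
theta1, theta2 = 0.0, 4.0   # true mean and variance (illustrative values)
L, trials = 10, 50000       # short records make the bias visible

data = rng.normal(theta1, np.sqrt(theta2), size=(trials, L))

# 1/L normalization (the ML estimate): its average is biased low by (L-1)/L.
var_ml = data.var(axis=1, ddof=0).mean()
# 1/(L-1) normalization: replacing L by L-1 removes the bias.
var_unbiased = data.var(axis=1, ddof=1).mean()

print(var_ml, theta2 * (L - 1) / L)   # empirical mean of the ML estimate vs. theta2*(L-1)/L
print(var_unbiased, theta2)           # bias-corrected estimate vs. theta2
```

With $L=10$ the ML variance estimate averages about $0.9\,\theta_{2}$, while increasing $L$ shrinks the gap, illustrating the asymptotic unbiasedness noted above.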