The maximum likelihood estimator (MLE) is an alternative to the minimum variance unbiased estimator (MVUE). For many estimation problems, the MVUE does not exist. Moreover, even when it does exist, there is no systematic procedure for finding it. In contrast, the MLE does not necessarily satisfy any optimality criterion, but it can almost always be computed, either through exact formulas or numerical techniques. For this reason, the MLE is one of the most common estimation procedures used in practice.
These properties make the MLE an important and widely used type of estimator.
Suppose the data $X$ is distributed according to the density or mass function $p(\theta, x)$ . The likelihood function for $\theta$ is defined by $$l(x, \theta)\equiv p(\theta, x)$$ At first glance, the likelihood function is nothing new - it is simply a way of rewriting the pdf/pmf of $X$ . The difference between the likelihood and the pdf or pmf lies in what is held fixed and what is allowed to vary. When we talk about the likelihood, we view the observation $x$ as being fixed, and the parameter $\theta$ as freely varying.
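The fixed-versus-varying distinction can be sketched numerically. The following is a minimal illustration (not from the original module) using an assumed Gaussian model $N(\theta, 1)$: the same formula $p(\theta, x)$ is evaluated with the data held fixed while $\theta$ sweeps over a grid, and the grid point maximizing the likelihood lands at the sample mean, the well-known Gaussian MLE.

```python
import math

def gaussian_pdf(x, theta, sigma=1.0):
    """Density p(theta, x) of a N(theta, sigma^2) model, evaluated at x."""
    return math.exp(-(x - theta) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def likelihood(theta, data, sigma=1.0):
    """Likelihood l(x, theta): the same formula, now viewed as a function of
    theta with the observations held fixed (independent samples multiply)."""
    prod = 1.0
    for x in data:
        prod *= gaussian_pdf(x, theta, sigma)
    return prod

data = [1.8, 2.1, 2.4]          # fixed observations (sample mean is 2.1)
grid = [i / 100 for i in range(0, 401)]   # candidate theta values in [0, 4]
theta_hat = max(grid, key=lambda t: likelihood(t, data))  # peaks at the mean
```

In this sketch the maximization is a brute-force grid search purely for illustration; in practice one maximizes the log-likelihood analytically or with a numerical optimizer.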
The information brought by an observation $x$ about $\theta$ is entirely contained in the likelihood function $p(\theta, x)$ . Moreover, if ${x}_{1}$ and ${x}_{2}$ are two observations depending on the same parameter $\theta$ , such that there exists a constant $c$ satisfying $p(\theta, {x}_{1})=cp(\theta, {x}_{2})$ for every $\theta$ , then they bring the same information about $\theta$ and must lead to identical estimators.
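A classic concrete instance of this proportionality (my example, not from the original module) compares two experiments with the same success probability $\theta$: observing 7 successes in a fixed run of 10 Bernoulli trials (binomial), versus running trials until the 3rd failure and having it arrive on trial 10 (negative binomial). Both likelihoods equal a constant times $\theta^7(1-\theta)^3$, so the constant $c$ is independent of $\theta$ and the two observations yield the same MLE:

```python
from math import comb

def lik_binomial(theta):
    # P(7 successes in 10 trials) = C(10,7) * theta^7 * (1-theta)^3
    return comb(10, 7) * theta ** 7 * (1 - theta) ** 3

def lik_negbinomial(theta):
    # P(3rd failure occurs on trial 10) = C(9,2) * theta^7 * (1-theta)^3
    return comb(9, 2) * theta ** 7 * (1 - theta) ** 3

grid = [i / 1000 for i in range(1, 1000)]  # theta in (0, 1)

# The ratio of the two likelihoods is the same constant c = C(10,7)/C(9,2)
# at every theta, so it carries no information about theta ...
ratios = {round(lik_binomial(t) / lik_negbinomial(t), 9) for t in grid}

# ... and both experiments lead to the identical estimate, theta = 7/10.
mle_binom = max(grid, key=lik_binomial)
mle_negbin = max(grid, key=lik_negbinomial)
```

Here the grid search again stands in for an analytic maximization; differentiating $\theta^7(1-\theta)^3$ gives $\hat{\theta} = 0.7$ for both models.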