Maximum Likelihood Estimation
So far we've had two ideas for building an estimator for a statistical functional T(\nu): one is to plug the empirical distribution \widehat{\nu} into T, and the other—kernel density estimation—is closely related (we just smear the probability mass of \widehat{\nu} out around each observed data point before substituting into T). In this section, we'll learn another approach which has some compelling properties and is suitable for choosing from a parametric family of densities or mass functions.
Let's revisit the example from the first section where we looked for the Gaussian distribution which best fits a given set of measurements of the heights of 50 adults. This time, we'll include a goodness score for each choice of \mu and \sigma, so we don't have to select a best fit subjectively.
The goodness function we'll use is called the log likelihood function, which we define to be the log of the product of the density function evaluated at each of the observed data points. This function rewards density functions which have larger values at the observed data points and penalizes functions which have very small values at some of the points. This is a rigorous way of capturing the idea that a given density function is consonant with the observed data.
Adjust the knobs to get the goodness score as high as possible (hint: you can get it up to about ).
[Interactive figure: sliders for \mu and \sigma, with a live readout of the log likelihood.]
The best \mu value is ___, and the best \sigma value is ___.
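In place of the interactive sliders, here is a minimal Python sketch of the same search, assuming a synthetic stand-in for the 50 height measurements and a hypothetical grid of candidate (\mu, \sigma) values (the data, grid ranges, and use of scipy are all assumptions, not part of the course):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 50 height measurements (in cm).
heights = rng.normal(170, 8, size=50)

# Evaluate the Gaussian log likelihood on a grid of (mu, sigma) candidates
# and keep the highest-scoring pair.
mus = np.linspace(150, 190, 201)
sigmas = np.linspace(1, 20, 191)
ll, mu_hat, sigma_hat = max(
    (norm.logpdf(heights, mu, sigma).sum(), mu, sigma)
    for mu in mus for sigma in sigmas
)
print(f"best mu = {mu_hat:.2f}, best sigma = {sigma_hat:.2f}, log likelihood = {ll:.2f}")
```

The grid maximizer lands essentially on the sample mean and the sample standard deviation of the heights, which foreshadows the Gaussian MLE example below.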
Definitions
Consider a parametric family \{f_{\boldsymbol{\theta}} : \boldsymbol{\theta} \in \Theta\} of PDFs or PMFs. For example, the parametric family might consist of all Gaussian distributions, all geometric distributions, or all discrete distributions on a particular finite set.
Given an observation vector \mathbf{X} = (X_1, \ldots, X_n), the likelihood \mathcal{L}_{\mathbf{X}} is defined by

\begin{align*}\mathcal{L}_{\mathbf{X}}(\boldsymbol{\theta}) = \prod_{i=1}^n f_{\boldsymbol{\theta}}(X_i).\end{align*}
The idea is that if \mathbf{X} is a vector of independent observations drawn from f_{\boldsymbol{\theta}}, then \mathcal{L}_{\mathbf{X}}(\widetilde{\boldsymbol{\theta}}) is small or zero when \widetilde{\boldsymbol{\theta}} is not in concert with the observed data.
Because the likelihood is defined as a product of many factors, its values are often extremely small, and we may encounter underflow issues. Furthermore, sums are often easier to reason about than products. For both of these reasons, we often compute the logarithm of the likelihood instead:

\begin{align*}\log \mathcal{L}_{\mathbf{X}}(\boldsymbol{\theta}) = \sum_{i=1}^n \log f_{\boldsymbol{\theta}}(X_i).\end{align*}
Maximizing the likelihood is the same as maximizing the log likelihood because the natural logarithm is a monotonically increasing function.
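Here is a quick sketch of why the log matters numerically, using a made-up standard-normal sample (the data and sample size are assumptions for illustration): multiplying many density values underflows to zero in floating point, while summing their logarithms stays perfectly well behaved.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.normal(0, 1, size=10_000)  # made-up sample

likelihood = np.prod(norm.pdf(x))        # product of many values less than 1
log_likelihood = np.sum(norm.logpdf(x))  # sum of their logarithms

print(likelihood)      # prints 0.0 because the product underflows
print(log_likelihood)  # an ordinary (large negative) number
```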
Example Suppose f_\theta is the density of a uniform random variable on [0,\theta]. We observe four samples drawn from this distribution, the largest of which lies between 5 and 7. Find the likelihood at \theta = 5, at a very large value of \theta, and at \theta = 7.
Solution. The likelihood at 5 is zero, since at least one of the observations lies outside [0,5], where f_5 is zero. The likelihood at a very large value of \theta is positive but tiny, since each of the four factors is the reciprocal of that large number. The likelihood at 7 is larger: (1/7)^4 \approx 0.00042.
As illustrated in this example, the likelihood has the property of being zero or small at implausible values of \theta, and larger at more reasonable values. Thus we propose the maximum likelihood estimator

\begin{align*}\widehat{\theta}_{\mathrm{MLE}} = \operatorname{argmax}_{\theta}\, \mathcal{L}_{\mathbf{X}}(\theta).\end{align*}
Example Suppose that f_{\mu, \sigma^2} is the normal density with mean \mu and variance \sigma^2. Find the maximum likelihood estimator for \mu and \sigma^2.
Solution. The maximum likelihood estimator is the maximizer of the logarithm of the likelihood function, which works out to

\begin{align*}\log \mathcal{L}_{\mathbf{X}}(\mu, \sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{(X_1-\mu)^2 + \cdots + (X_n-\mu)^2}{2\sigma^2},\end{align*}

since \log f_{\mu,\sigma^2}(x) = -\frac{1}{2}\log(2\pi\sigma^2) - \frac{(x-\mu)^2}{2\sigma^2} for each x.
Setting the derivatives with respect to v = \sigma^2 and \mu equal to zero, we find

\begin{align*}
\frac{\partial}{\partial v}\log \mathcal{L} &= -\frac{n}{2v} + \frac{(X_1-\mu)^2 + \cdots + (X_n-\mu)^2}{2v^2} = 0 \\
\frac{\partial}{\partial \mu}\log \mathcal{L} &= \frac{(X_1-\mu) + \cdots + (X_n-\mu)}{v} = 0,
\end{align*}

which implies \mu = \overline{X} = \frac{1}{n}(X_1 + \cdots + X_n) (from solving the second equation) as well as v = \sigma^2 = \frac{1}{n}((X_1-\overline{X})^2 + \cdots + (X_n-\overline{X})^2) (from solving the first equation). Since there's only one critical point, and since the log likelihood tends to -\infty as (\mu, \sigma^2) approaches the boundary of the parameter space, the log likelihood must have a global maximum at this critical point.
So we may conclude that the maximum likelihood estimator agrees with the plug-in estimator for \mu and \sigma^2.
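As a sanity check, here is a short sketch comparing a numerical maximization of the Gaussian log likelihood with the closed-form answers \overline{X} and \frac{1}{n}\sum_i (X_i - \overline{X})^2. The synthetic data and the use of scipy's Nelder-Mead minimizer are assumptions made for the illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
X = rng.normal(5, 3, size=500)  # synthetic sample

def neg_log_likelihood(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf  # reject invalid standard deviations
    return -np.sum(norm.logpdf(X, mu, sigma))

res = minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead")
mu_mle, sigma_mle = res.x

print(mu_mle, X.mean())                          # essentially equal
print(sigma_mle**2, np.mean((X - X.mean())**2))  # plug-in variance, not the n-1 version
```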
Exercise Consider a Poisson random variable X with parameter \lambda. In other words, \mathbb{P}(X = x) = \frac{\lambda^x \operatorname{e}^{-\lambda}}{x!} for every nonnegative integer x.
Show that, given n independent observations X_1, \ldots, X_n, the maximum likelihood estimator \widehat{\lambda} is equal to the sample mean \bar{X}, and explain why this makes sense intuitively.
Solution. The log likelihood of the observations X_1, \ldots, X_n is

\begin{align*}\log \mathcal{L}(\lambda) = \left(\sum_{i=1}^n X_i\right)\log\lambda - n\lambda - \sum_{i=1}^n \log(X_i!).\end{align*}

When we take the derivative with respect to \lambda and set it equal to zero, we get
\begin{align*}\frac{\sum_{i = 1}^n X_i }{\widehat{\lambda}} - n = 0 ,\end{align*}
which gives us \widehat{\lambda} = \frac{\sum_{i=1}^n X_i}{n} = \bar{X}, the sample mean. This makes sense intuitively because the mean of a Poisson distribution with parameter \lambda is \lambda itself, so the estimator matches the parameter to the average of the observations.
Taking a second derivative gives -\frac{\sum_{i = 1}^n X_i }{\lambda^2}. Since this quantity is everywhere negative, the log likelihood is concave. Therefore, the log likelihood has a local maximum at the critical point \widehat{\lambda}, and that local maximum is also a global maximum.
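A small numerical check of this exercise (the synthetic Poisson sample and the grid of candidate \lambda values are assumptions): the grid maximizer of the Poisson log likelihood lands on the sample mean.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)
X = rng.poisson(4.2, size=1000)  # synthetic Poisson sample

# Evaluate the log likelihood on a grid of candidate lambda values.
lambdas = np.linspace(0.1, 10, 2000)
log_liks = np.array([poisson.logpmf(X, lam).sum() for lam in lambdas])

print(lambdas[log_liks.argmax()])  # approximately...
print(X.mean())                    # ...the sample mean
```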
Example Suppose Y_i = X_i\beta + \epsilon_i for i = 1, 2, \ldots, n, where the vector \epsilon = (\epsilon_1, \ldots, \epsilon_n) has distribution \mathcal{N}(0, \sigma^2 I). Treat \sigma as known and \beta as the only unknown parameter. Suppose that n observations (X_1, Y_1), \ldots, (X_n, Y_n) are made.
Show that the least squares estimator for \beta is the same as the MLE for \beta by examining the log likelihood.
Solution. The log likelihood works out to

\begin{align*}\log \mathcal{L}(\beta) = -\frac{n}{2}\log(2\pi\sigma^2) - \sum_{i=1}^n \frac{(Y_i - X_i\beta)^2}{2\sigma^2}.\end{align*}

The only term that depends on \beta is the second one, so maximizing the log likelihood is the same as maximizing - \sum_{i=1}^n \frac{(Y_i - X_i\beta)^2}{2\sigma^2}, which in turn is the same as minimizing \sum_{i=1}^n(Y_i - X_i\beta)^2.
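Here is a sketch under assumed synthetic data and a hypothetical true slope: numerically maximizing the Gaussian log likelihood over \beta lands on the same value as the ordinary least squares formula for a no-intercept model.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(4)
n, beta_true, sigma = 200, 2.5, 1.0          # hypothetical setup
X = rng.uniform(-3, 3, size=n)
Y = beta_true * X + rng.normal(0, sigma, size=n)

# MLE: maximize the Gaussian log likelihood over beta (sigma treated as known).
neg_ll = lambda beta: -norm.logpdf(Y, loc=beta * X, scale=sigma).sum()
beta_mle = minimize_scalar(neg_ll).x

# Least squares: minimize the sum of squared residuals (closed form, no intercept).
beta_ols = (X @ Y) / (X @ X)

print(beta_mle, beta_ols)  # the two agree up to optimizer tolerance
```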
Exercise (a) Consider the family of distributions which are uniform on [0,b], where b \in (0,\infty). Explain why the MLE for the distribution maximum b is the sample maximum.
(b) Show that the MLE for a Bernoulli distribution with parameter p is the empirical success rate \frac{1}{n} \sum_{i=1}^n X_i.
Solution. (a) The likelihood associated with any value of b smaller than the sample maximum is zero, since at least one of the density values is zero in that case. The likelihood is a decreasing function of b as b ranges from the sample maximum to \infty, since it's equal to (1/b)^n. Therefore, the maximal value is at the sample maximum.
(b) The log likelihood is s\log p + (n-s)\log(1-p), where s = X_1 + \cdots + X_n is the number of successes. The derivative of the log likelihood function is therefore

\begin{align*}\frac{s}{p} - \frac{n-s}{1-p}.\end{align*}

Setting the derivative equal to zero and solving for p, we find p = s/n.
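A quick numerical check of both parts (the synthetic data and the grids of candidate values are assumptions): the grid maximizer of the uniform likelihood sits at the sample maximum, and the maximizer of the Bernoulli log likelihood sits at the empirical success rate.

```python
import numpy as np

rng = np.random.default_rng(5)

# (a) Uniform on [0, b]: the likelihood is (1/b)^n for b >= max(X), and 0 otherwise.
X = rng.uniform(0, 7.0, size=50)
bs = np.linspace(0.01, 15, 5000)
lik = np.where(bs >= X.max(), (1 / bs) ** len(X), 0.0)
print(bs[lik.argmax()], X.max())   # maximizer is essentially the sample maximum

# (b) Bernoulli(p): the log likelihood is s*log(p) + (n - s)*log(1 - p).
flips = rng.binomial(1, 0.3, size=200)
s, n = flips.sum(), len(flips)
ps = np.linspace(0.001, 0.999, 999)
ll = s * np.log(ps) + (n - s) * np.log(1 - ps)
print(ps[ll.argmax()], s / n)      # maximizer is essentially the success rate
```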
Properties of the Maximum Likelihood Estimator
MLE enjoys several nice properties: under certain regularity conditions, we have
Consistency: \mathbb{E}[(\widehat{\theta}_{\mathrm{MLE}} - \theta)^2] \to 0 as the number of samples goes to \infty. In other words, the average squared difference between the maximum likelihood estimator and the parameter it's estimating converges to zero.
Asymptotic normality: (\widehat{\theta}_{\mathrm{MLE}} - \theta)/\sqrt{\operatorname{Var} \widehat{\theta}_{\mathrm{MLE}}} converges in distribution to \mathcal{N}(0,1) as the number of samples goes to \infty. This means that we can calculate good confidence intervals for the maximum likelihood estimator, assuming we can accurately approximate its mean and variance.
Asymptotic optimality: the MSE of the MLE converges to 0 approximately as fast as the MSE of any other consistent estimator. Thus the MLE is not wasteful in its use of data to produce an estimate.
Equivariance: Suppose \widehat{\theta} is the MLE of \theta for the family f_{\theta}. Then the MLE of g(\theta) is g(\widehat{\theta}). This is a useful property: it says that a transformation of the parameter of interest (say, shifting the mean of a normal distribution by a constant, or squaring the standard deviation) is no obstacle, because we can simply apply the same transformation to the MLE (see the sketch after this list).
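To make equivariance concrete, here is a small sketch; the synthetic normal data (with mean treated as known and equal to 0) and the use of scipy's bounded scalar minimizer are assumptions. Maximizing the likelihood over \sigma directly gives the square root of the variance MLE.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(6)
X = rng.normal(0, 2.0, size=400)  # synthetic sample; mean treated as known (0)

# MLE of the variance v.
neg_ll_v = lambda v: -norm.logpdf(X, 0, np.sqrt(v)).sum()
v_mle = minimize_scalar(neg_ll_v, bounds=(1e-6, 100), method="bounded").x

# MLE of the standard deviation sigma, found directly.
neg_ll_s = lambda s: -norm.logpdf(X, 0, s).sum()
s_mle = minimize_scalar(neg_ll_s, bounds=(1e-3, 10), method="bounded").x

print(s_mle, np.sqrt(v_mle))  # equivariance: the two agree
```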
Example Show that the plug-in variance estimator for a sequence of n i.i.d. samples from a Gaussian distribution \mathcal{N}(\mu, \sigma^2) converges to \sigma^2 as n\to\infty.
Solution. We've seen that the plug-in variance estimator is the maximum likelihood estimator for the variance. Therefore, it converges to \sigma^2 by MLE consistency.
Exercise Show that it is not possible to estimate the mean of a distribution in a way that converges to the true mean at a rate asymptotically faster than 1/\sqrt{n}, where n is the number of observations.
Solution. The sample mean is the maximum likelihood estimator of the mean, and it converges to the mean at a rate proportional to the inverse square root of the number of observations. Therefore, by the asymptotic optimality of the MLE, no other consistent estimator can converge at an asymptotically faster rate.
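A simulation sketch of the 1/\sqrt{n} rate (standard normal data and the particular sample sizes are assumptions): the typical error of the sample mean shrinks by roughly a factor of \sqrt{10} each time the sample size grows by a factor of 10.

```python
import numpy as np

rng = np.random.default_rng(7)
for n in [100, 1_000, 10_000]:
    # 1000 independent experiments, each estimating the true mean 0 from n samples.
    errors = rng.normal(0, 1, size=(1000, n)).mean(axis=1)
    print(n, np.sqrt(np.mean(errors ** 2)))  # RMS error shrinks like 1/sqrt(n)
```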
Drawbacks of maximum likelihood estimation
The maximum likelihood estimator is not a panacea. We've already seen that the maximum likelihood estimator can be biased: the sample maximum, which is the MLE for the family of uniform distributions on [0,b] with b \in (0,\infty), never exceeds the true value of b and therefore underestimates it on average. There are several other issues that can arise when maximizing likelihoods.
Computational difficulties. It might be difficult to work out where the maximum of the likelihood occurs, either analytically or numerically. This is a particular concern in high dimensions (that is, when there are many parameters) or when the likelihood function is not concave.
Misspecification. The MLE may be inaccurate if the distribution of the observations is not in the specified parametric family. For example, if we assume the underlying distribution is Gaussian, when in fact its shape is not even close to that of a Gaussian, we very well might get unreasonable results.
Unbounded likelihood. If the likelihood function is not bounded, then \widehat{\theta}_{\mathrm{MLE}} is not even defined:
Exercise Consider the family of distributions on \mathbb{R} given by the set of density functions of the form

\begin{align*}f(x) = \begin{cases} \gamma & \text{if } a \le x \le b \\ \delta & \text{if } c \le x \le d \\ 0 & \text{otherwise,} \end{cases}\end{align*}

where a < b < c < d, and where \gamma and \delta are nonnegative real numbers such that \gamma(b-a) + \delta(d-c) = 1. Show that the likelihood function has no maximum for this family of functions.
[Interactive figure: sliders for a, b, c, d, and \gamma, with a live readout of the likelihood.]
Solution. We identify the largest value in our data set and choose c to be \epsilon less than that value and d to be \epsilon more than it. We choose a and b so that the interval [a,b] contains all of the other observations (since otherwise we would get a likelihood value of zero). Then we can send \epsilon to zero while holding a, b, and \gamma fixed. That sends \delta to \infty, which in turn causes the likelihood to grow without bound.
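Here is a numerical illustration of this argument, with made-up data and a hypothetical choice of a, b, and \gamma: shrinking the spike interval [c, d] around the largest observation sends the likelihood upward without bound.

```python
import numpy as np

rng = np.random.default_rng(8)
data = np.sort(rng.normal(0, 1, size=20))   # made-up observations
a = data[0] - 1.0
b = (data[-2] + data[-1]) / 2               # [a, b] covers every point but the largest
gamma = 0.5 / (b - a)                       # hypothetical choice; gamma * (b - a) = 0.5
gap = data[-1] - b

for eps in [0.5 * gap, 0.05 * gap, 0.005 * gap, 0.0005 * gap]:
    c, d = data[-1] - eps, data[-1] + eps   # ever-narrower spike around the largest point
    delta = (1 - gamma * (b - a)) / (d - c) # enforces gamma*(b-a) + delta*(d-c) = 1
    density = np.where((data >= c) & (data <= d), delta, gamma)
    print(eps, np.prod(density))            # the likelihood keeps growing as eps shrinks
```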
One further disadvantage of the maximum likelihood estimator is that it doesn't provide a smooth mechanism for incorporating prior knowledge. For example, if we flip a coin twice and see heads both times, our (real-world) belief about the coin's heads probability would still be that it's about 50%. Only once we saw quite a few heads in a row would we begin to use that as evidence to move the needle on our strong prior belief that coins encountered in daily life are not heavily weighted to one side or the other.
Bayesian statistics provides an alternative framework which addresses this shortcoming of maximum likelihood estimation.