William H. Greene - PowerPoint PPT Presentation

1
William H. Greene, Econometric Analysis, Chapter 17
Maximum Likelihood Estimation
  • Part 1 by Jan Wrampelmeyer
  • Part 2 by Johannes Stolte

2
Agenda
  • Part 1
  • Introduction
  • The likelihood function
  • Identification of the parameters
  • Maximum Likelihood Estimation
  • Properties of MLEs
  • Part 2
  • Properties of MLEs
  • Consistency
  • Asymptotic normality
  • Asymptotic efficiency
  • Invariance
  • Estimating the asymptotic variance of the MLE
  • Conditional likelihoods and econometric models

3
Introduction
  • The maximum likelihood estimator is the preferred
    estimator in many settings.
  • It lacks optimality in small samples.
  • It is optimal in the theoretical limit of an infinite
    sample.
  • It has very attractive asymptotic properties.

4
The likelihood function
  • The likelihood function is the joint density (pdf, or pmf in
    the discrete case) of the sample, viewed as a function of the
    parameter θ: L(θ|y) = f(y|θ).
  • The log-likelihood function is ln L(θ|y).
  • In the case of iid observations, ln L(θ|y) = Σi ln f(yi|θ).

5
Identification of the parameters
  • If the parameters are not identifiable, estimation is
    not possible.
  • In words: a parameter is identifiable if we can
    uniquely determine the value of θ given all the
    information there is about the parameters (an
    infinite sample).

6
Maximum Likelihood Estimation (1)
  • The principle of maximum likelihood provides a means
    of choosing an asymptotically efficient estimator
    for a set of parameters.
  • Example: 10 observations drawn from a Poisson distribution.

7
Maximum Likelihood Estimation (2)
  • Maximum likelihood estimate: the value of θ
    that makes the observation of this particular
    sample most probable. The values that maximize
    L(θ|y) are the MLEs, denoted θ̂.
  • The necessary condition for maximizing L(θ|y) is
    ∂L(θ|y)/∂θ = 0, or equivalently, in terms of the
    log-likelihood, ∂ ln L(θ|y)/∂θ = 0.
  • This is called the likelihood equation.
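The Poisson case can be sketched in code. This is a minimal illustration with made-up data (the slide's own numbers are not reproduced in this transcript); for the Poisson model the likelihood equation has the closed-form root λ̂ = sample mean:

```python
import math

# Illustrative data: 10 counts treated as draws from a Poisson distribution
y = [5, 0, 1, 1, 0, 3, 2, 3, 4, 1]
n = len(y)

def log_likelihood(lam):
    # ln L(lambda | y) = sum_i [ -lambda + y_i ln(lambda) - ln(y_i!) ]
    return sum(-lam + yi * math.log(lam) - math.lgamma(yi + 1) for yi in y)

def score(lam):
    # Likelihood equation: d ln L / d lambda = -n + (sum_i y_i) / lambda = 0
    return -n + sum(y) / lam

lam_hat = sum(y) / n   # closed-form root of the likelihood equation
print(lam_hat)                      # 2.0
print(abs(score(lam_hat)) < 1e-12)  # True: the likelihood equation holds
```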

8
Maximum Likelihood Estimation (3)
  • Example (likelihood equations for the normal
    distribution): for an iid sample x1, …, xn from N(μ, σ²).
  • The likelihood equations are
    ∂ ln L/∂μ = (1/σ²) Σi (xi − μ) = 0
  • and
    ∂ ln L/∂σ² = −n/(2σ²) + (1/(2σ⁴)) Σi (xi − μ)² = 0.
  • Solving these equations yields the two MLEs
  • μ̂ = x̄  and  σ̂² = (1/n) Σi (xi − x̄)².
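A quick numerical check of the closed-form normal MLEs, using a small hypothetical sample: the log-likelihood at (μ̂, σ̂²) should dominate nearby parameter values.

```python
import math

# Illustrative (hypothetical) sample from a normal population
x = [2.1, 3.4, 1.9, 2.8, 3.0, 2.5]
n = len(x)

# Closed-form MLEs from the two likelihood equations
mu_hat = sum(x) / n
sigma2_hat = sum((xi - mu_hat) ** 2 for xi in x) / n   # divisor n, not n - 1

def log_likelihood(mu, sigma2):
    return sum(-0.5 * math.log(2 * math.pi * sigma2)
               - (xi - mu) ** 2 / (2 * sigma2) for xi in x)

# The closed-form solution beats nearby parameter values
best = log_likelihood(mu_hat, sigma2_hat)
print(best >= log_likelihood(mu_hat + 0.1, sigma2_hat))
print(best >= log_likelihood(mu_hat - 0.1, sigma2_hat))
print(best >= log_likelihood(mu_hat, sigma2_hat * 1.2))
```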

9
Properties of MLEs (1)
  • Under regularity, the MLE is asymptotically
    efficient.
  • Regularity conditions: assume a random sample from a
    population with pdf (or pmf) f(yi|θ).
  • R1: The first three derivatives of ln f(yi|θ)
    with respect to θ are continuous and finite.
  • R2: The conditions necessary to obtain the
    expectations of the first and second
    derivatives of ln f(yi|θ) are met.
  • R3: For all values of θ, the third derivative of
    ln f(yi|θ) is less than a function that has a
    finite expectation.

10
Properties of MLEs (2)
  • Theorem 17.2 (Moments of the Derivatives of the
    Log-Likelihood)
  • Random sampling yields random samples of the random variables
  • gi = ∂ ln f(yi|θ)/∂θ (gradient or score r.v.)
  • Hi = ∂² ln f(yi|θ)/∂θ∂θ′ (Hessian r.v.)
  • ZES (zero expected score) rule: E[gi(θ0)] = 0.
  • Information identity: Var[gi(θ0)] = -E[Hi(θ0)].
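Both moment results can be checked by simulation. A sketch for a scalar Poisson parameter (the true value λ0 = 2 and the replication count are arbitrary choices), using only the standard library:

```python
import math, random

random.seed(0)
lam0 = 2.0       # true parameter of a Poisson population (illustrative)
R = 100_000      # Monte Carlo replications

def rpois(lam):
    # Poisson draw by inversion of the CDF (standard library only)
    u, p, k = random.random(), math.exp(-lam), 0
    c = p
    while u > c:
        k += 1
        p *= lam / k
        c += p
    return k

# For one observation, score g = d ln f/d lambda, Hessian H = d^2 ln f/d lambda^2
g = lambda y: y / lam0 - 1
H = lambda y: -y / lam0 ** 2

ys = [rpois(lam0) for _ in range(R)]
mean_g = sum(map(g, ys)) / R
var_g = sum(g(y) ** 2 for y in ys) / R - mean_g ** 2
mean_negH = -sum(map(H, ys)) / R

print(abs(mean_g) < 0.02)                 # ZES rule: E[g] = 0
print(abs(var_g - 1 / lam0) < 0.02)       # Var[g] = 1/lambda here
print(abs(mean_negH - 1 / lam0) < 0.02)   # information identity: Var[g] = -E[H]
```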

11
Properties of MLEs (3)
  • The likelihood equation
  • Information matrix equality

12
(Part 2 title slide: repeats slide 1.)

13
(Part 2 agenda: repeats slide 2.)
14
Properties of MLEs (1)
  • Maximum likelihood estimators (MLEs) are attractive
    chiefly because of their large-sample, or
    asymptotic, properties.
  • Asymptotic efficiency: an estimator is
    asymptotically efficient if it
  • is consistent,
  • is asymptotically normally distributed (CAN), and
  • has an asymptotic covariance matrix that is not
    larger than the asymptotic covariance matrix of
    any other consistent, asymptotically normally
    distributed estimator.
  • When a minimum variance unbiased estimator exists,
    as it does when sampling is from an exponential
    family of distributions, it will be the MLE, so the
    MLE is efficient in finite samples as well.

15
Properties of MLEs (2)
  • Notation: θ̂ is the maximum likelihood estimator,
    θ0 denotes the true value of the parameter vector,
    and θ denotes another possible value of the
    parameter vector.
  • Theorem 17.1 (Properties of an MLE)
  • Under regularity, the maximum likelihood
    estimator (MLE) has the following asymptotic
    properties:
  • M1. Consistency: plim θ̂ = θ0.
  • M2. Asymptotic normality: θ̂ is asymptotically
    distributed as N[θ0, {I(θ0)}⁻¹], where
    I(θ0) = -E[∂² ln L/∂θ0∂θ0′].
  • M3. Asymptotic efficiency: θ̂ is asymptotically
    efficient and achieves the Cramér-Rao lower bound
    for consistent estimators, given in M2.
  • M4. Invariance: the maximum likelihood estimator
    of γ0 = c(θ0) is c(θ̂) if c(θ0) is
    a continuous and continuously differentiable
    function.

16
Properties of MLEs Consistency
17
Properties of MLEs Consistency (2)
In words, the expected value of the log-likelihood is
maximized at the true value of the parameters:
E[(1/n) ln L(θ|y)] attains its maximum at θ = θ0.
(1/n) ln L(θ|y) is the sample mean of the n iid random
variables ln f(yi|θ), which converges in probability to
the population mean E[ln f(yi|θ)].
Because θ̂ is the MLE, (1/n) ln L(θ̂|y) ≥ (1/n) ln L(θ0|y)
for every sample. Taking probability limits preserves the
inequality, while the limiting function E[ln f(yi|θ)] is
uniquely maximized at θ0 by identification. It follows that
the probability limit of θ̂ can be no point other than θ0,
and therefore, under our assumptions, plim θ̂ = θ0.
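The convergence can be illustrated by simulation. A sketch using an exponential population with an arbitrarily chosen true rate; the MLE of the rate is the reciprocal of the sample mean, and it homes in on the true value as n grows:

```python
import random

random.seed(1)
lam0 = 1.5   # true rate of an exponential population (illustrative)

# MLE of the exponential rate is n / sum(x); watch it approach lam0
for n in (10, 1_000, 100_000):
    xs = [random.expovariate(lam0) for _ in range(n)]
    lam_hat = n / sum(xs)
    print(n, round(lam_hat, 3))
```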
18
Properties of MLEs Asymptotic Normality
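A simulation sketch of asymptotic normality for the Poisson MLE: standardized by its asymptotic standard deviation, the estimator should fall inside ±1.96 roughly 95% of the time. The true parameter, sample size, and replication count below are arbitrary choices.

```python
import math, random

random.seed(2)
lam0, n, R = 2.0, 400, 2_000   # true parameter, sample size, replications

def rpois(lam):
    # Poisson draw by CDF inversion (standard library only)
    u, p, k = random.random(), math.exp(-lam), 0
    c = p
    while u > c:
        k += 1
        p *= lam / k
        c += p
    return k

# Standardize each replication's MLE by its asymptotic standard
# deviation sqrt(lam0 / n); the result should behave like N(0, 1)
z = []
for _ in range(R):
    lam_hat = sum(rpois(lam0) for _ in range(n)) / n
    z.append((lam_hat - lam0) / math.sqrt(lam0 / n))

coverage = sum(abs(zi) < 1.96 for zi in z) / R
print(round(coverage, 2))   # should be close to 0.95
```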
19
Properties of MLEs Asymptotic Efficiency
20
Properties of MLEs Invariance
  • The MLE is invariant to one-to-one
    transformations of θ.
  • If we are interested in γ = c(θ) and θ̂ is
    the MLE of θ, then the MLE of our transformation is
    γ̂ = c(θ̂).
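Invariance in miniature, with hypothetical Poisson counts: the MLE of γ = P(Y = 0) = e^(−λ) is obtained by plugging λ̂ into the transformation, with no new maximization.

```python
import math

# Illustrative Poisson counts; the MLE of lambda is the sample mean
y = [5, 0, 1, 1, 0, 3, 2, 3, 4, 1]
lam_hat = sum(y) / len(y)   # 2.0

# Quantity of interest: gamma = c(lambda) = P(Y = 0) = exp(-lambda).
# By invariance (M4), the MLE of gamma is simply c(lam_hat).
gamma_hat = math.exp(-lam_hat)
print(round(gamma_hat, 4))   # 0.1353
```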

21
Estimating the asymptotic variance
  • There are three ways to estimate the asymptotic
    covariance matrix of the MLE:
  • the inverse of the information matrix,
    {-E[∂² ln L/∂θ0∂θ0′]}⁻¹,
  • the negative inverse of the actual Hessian evaluated
    at the MLE, {-∂² ln L/∂θ̂∂θ̂′}⁻¹, and
  • the inverse of the outer product of the individual
    gradients, [Σi ĝi ĝi′]⁻¹.

22
Estimating the asymptotic variance
  • The outer-product estimator is extremely convenient,
    because it does not require any computation beyond
    those required to solve the likelihood equation.
  • It is always nonnegative definite.
  • It is known as the BHHH estimator or the outer
    product of gradients (OPG) estimator.
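The three variance estimators (inverse expected information, negative inverse actual Hessian, and OPG) can be compared on a small hypothetical Poisson sample. For this model the first two coincide at the MLE, while the OPG estimate differs:

```python
# Illustrative Poisson sample; the MLE of lambda is the sample mean
y = [5, 0, 1, 1, 0, 3, 2, 3, 4, 1]
n = len(y)
lam_hat = sum(y) / n   # 2.0

# (1) Inverse of the expected information, [n / lambda]^{-1}
v_expected = lam_hat / n
# (2) Negative inverse of the actual Hessian, [sum_i y_i / lambda^2]^{-1}
v_hessian = lam_hat ** 2 / sum(y)
# (3) BHHH / OPG, [sum_i g_i^2]^{-1} with g_i = y_i / lambda - 1
v_opg = 1 / sum((yi / lam_hat - 1) ** 2 for yi in y)

# For the Poisson model, (1) and (2) coincide at the MLE; (3) need not
print(v_expected, v_hessian, round(v_opg, 4))   # 0.2 0.2 0.1538
```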

23
Conditional likelihoods and econometric models
  • Until now we have only allowed for observed random
    variables yt, but econometric models will involve
    exogenous or predetermined variables xt as well.
  • The xt denote a mix of random variables and
    constants that enter the conditional density of
    yt.
  • We then rewrite the log-likelihood function as a sum
    of conditional log-densities:
    ln L(θ|y) = Σt ln f(yt|xt, θ).
  • The following minimal assumptions are made, under
    which these estimators have the properties of our
    maximum likelihood estimators:
  • Parameter space: the parameter space must be convex
    and have no gaps.
  • Identifiability: estimation must be feasible.
  • Well-behaved data.
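A minimal sketch of a conditional log-likelihood, here a Poisson regression in which the conditional mean of yt given xt is exp(b0 + b1·xt); the data and the grid-search ranges are hypothetical.

```python
import math

# Hypothetical data for a Poisson regression with
# E[y_t | x_t] = exp(b0 + b1 * x_t)
x = [0.0, 1.0, 2.0, 3.0]
y = [1, 2, 4, 9]

def cond_loglik(b0, b1):
    ll = 0.0
    for xt, yt in zip(x, y):
        mu = math.exp(b0 + b1 * xt)   # conditional mean given x_t
        ll += -mu + yt * math.log(mu) - math.lgamma(yt + 1)
    return ll

# Crude grid search for the maximizer; a real application would apply
# Newton's method to the likelihood equations instead
grid = [v / 50 for v in range(-50, 101)]
ll_hat, b0_hat, b1_hat = max((cond_loglik(b0, b1), b0, b1)
                             for b0 in grid for b1 in grid)
print(round(b0_hat, 2), round(b1_hat, 2))
```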

24
The end
  • Questions?