Maximum Likelihood Estimation
1
Maximum Likelihood Estimation
  • Multivariate Normal distribution

2
The Method of Maximum Likelihood
  • Suppose that the data x1, ..., xn has joint
    density function
  • f(x1, ..., xn; θ1, ..., θp)
  • where θ = (θ1, ..., θp) are unknown parameters
    assumed to lie in Ω (a subset of p-dimensional
    space).
  • We want to estimate the parameters θ1, ..., θp.

3
Definition: The Likelihood function
  • Suppose that the data x1, ..., xn has joint
    density function
  • f(x1, ..., xn; θ1, ..., θp)
  • Then, given the data, the Likelihood function is
    defined to be
  • L(θ1, ..., θp) = f(x1, ..., xn; θ1, ..., θp)
  • Note: the domain of L(θ1, ..., θp) is the set Ω.

4
Definition: Maximum Likelihood Estimators
  • Suppose that the data x1, ..., xn has joint
    density function
  • f(x1, ..., xn; θ1, ..., θp)
  • Then the Likelihood function is defined to be
  • L(θ1, ..., θp) = f(x1, ..., xn; θ1, ..., θp)
  • and the Maximum Likelihood estimators of the
    parameters θ1, ..., θp are the values that
    maximize L(θ1, ..., θp).

5
  • i.e. the Maximum Likelihood estimators of the
    parameters θ1, ..., θp are the values θ̂1, ..., θ̂p
    such that
    \[ L(\hat\theta_1, \dots, \hat\theta_p) = \max_{(\theta_1, \dots, \theta_p) \in \Omega} L(\theta_1, \dots, \theta_p). \]
  • Note: maximizing L(θ1, ..., θp) is equivalent to maximizing
    the log-likelihood function ln L(θ1, ..., θp), as sketched below.
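For illustration, here is a minimal numerical sketch (not part of the original slides) of maximizing a log-likelihood: the simulated data, the univariate normal model, and the log-σ parameterization are all assumptions of the example, chosen so that scipy can do the maximization directly.

```python
# A minimal sketch (assumed example): find the MLE numerically by minimizing
# the negative log-likelihood of a univariate normal N(mu, sigma^2) sample.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=200)      # simulated data (assumed example)

def neg_log_likelihood(params):
    mu, log_sigma = params                        # optimize log(sigma) so sigma stays positive
    sigma = np.exp(log_sigma)
    return -np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2))

result = minimize(neg_log_likelihood, x0=[0.0, 0.0])
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)      # close to the analytic MLEs x.mean() and x.std(ddof=0)
```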
6
The Multivariate Normal Distribution
  • Maximum Likelihood Estimation

7
Let x1, x2, ..., xn denote a sample (independent)
from the p-variate normal distribution
with mean vector μ and covariance matrix Σ.
Note: xi = (xi1, xi2, ..., xip)′.
8
The n × p matrix
\[ X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1p} \\ x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{np} \end{pmatrix} \]
is called the data matrix.
9
The np × 1 vector obtained by stacking the observations,
\[ \vec{x} = (x_1', x_2', \dots, x_n')', \]
is called the data vector.
10
The mean vector μ = (μ1, μ2, ..., μp)′.
11
The vector x̄ = (x̄1, x̄2, ..., x̄p)′ is called the sample mean vector.
Note:
\[ \bar{x}_j = \frac{1}{n}\sum_{i=1}^{n} x_{ij}, \quad j = 1, \dots, p. \]
12
Also
\[ \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i = \frac{1}{n} X' \vec{1}_n, \]
where 1n is the n × 1 vector of ones.
13
In terms of the data vector,
\[ \bar{x} = \frac{1}{n}\,(\vec{1}_n' \otimes I_p)\,\vec{x}, \]
where Ip is the p × p identity matrix and ⊗ denotes the Kronecker product.
14
Graphical representation of sample mean vector
The sample mean vector is the centroid of the
data vectors.
15
The Sample Covariance matrix
16
The sample covariance matrix
\[ S = \bigl(s_{jk}\bigr), \quad\text{where}\quad s_{jk} = \frac{1}{n-1}\sum_{i=1}^{n}(x_{ij}-\bar{x}_j)(x_{ik}-\bar{x}_k). \]
17
There are different ways of representing the sample
covariance matrix, for example
\[ S = \frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})(x_i-\bar{x})' = \frac{1}{n-1}\Bigl(\sum_{i=1}^{n} x_i x_i' - n\,\bar{x}\bar{x}'\Bigr); \]
a numerical check follows below.
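As an illustration (the example data and variable names are assumptions, not from the slides), the following sketch computes the sample mean vector as the centroid of the rows of the data matrix, and checks that the two representations of S above agree with each other and with numpy's sample covariance.

```python
# A minimal sketch (assumed example): sample mean vector and two equivalent
# representations of the sample covariance matrix S.
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3
X = rng.normal(size=(n, p))            # data matrix: one row per observation

x_bar = X.mean(axis=0)                 # sample mean vector = centroid of the rows

D = X - x_bar                          # deviations from the sample mean
S1 = D.T @ D / (n - 1)                 # representation 1: outer products of deviations

S2 = (X.T @ X - n * np.outer(x_bar, x_bar)) / (n - 1)   # representation 2

assert np.allclose(S1, S2)
assert np.allclose(S1, np.cov(X, rowvar=False))          # agrees with numpy's sample covariance
```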
18
(No Transcript)
19
(No Transcript)
20
(No Transcript)
21
Maximum Likelihood Estimation
  • Multivariate Normal distribution

22
Let x1, x2, ..., xn denote a sample (independent)
from the p-variate normal distribution
with mean vector μ and covariance matrix Σ.
Then the joint density function of x1, x2, ..., xn is
\[ f(x_1, \dots, x_n) = \prod_{i=1}^{n} \frac{1}{(2\pi)^{p/2}|\Sigma|^{1/2}} \exp\Bigl\{-\tfrac{1}{2}(x_i-\mu)'\Sigma^{-1}(x_i-\mu)\Bigr\} = \frac{1}{(2\pi)^{np/2}|\Sigma|^{n/2}} \exp\Bigl\{-\tfrac{1}{2}\sum_{i=1}^{n}(x_i-\mu)'\Sigma^{-1}(x_i-\mu)\Bigr\}. \]
23
The Likelihood function is
\[ L(\mu, \Sigma) = \frac{1}{(2\pi)^{np/2}|\Sigma|^{n/2}} \exp\Bigl\{-\tfrac{1}{2}\sum_{i=1}^{n}(x_i-\mu)'\Sigma^{-1}(x_i-\mu)\Bigr\} \]
and the Log-likelihood function is
\[ \ell(\mu, \Sigma) = \ln L(\mu, \Sigma) = -\frac{np}{2}\ln(2\pi) - \frac{n}{2}\ln|\Sigma| - \frac{1}{2}\sum_{i=1}^{n}(x_i-\mu)'\Sigma^{-1}(x_i-\mu). \]
24
To find the Maximum Likelihood estimators of μ and Σ
we need to find the values μ̂ and Σ̂
that maximize L(μ, Σ),
or equivalently maximize ℓ(μ, Σ) = ln L(μ, Σ).
25
Note
\[ \sum_{i=1}^{n}(x_i-\mu)(x_i-\mu)' = \sum_{i=1}^{n}(x_i-\bar{x})(x_i-\bar{x})' + n(\bar{x}-\mu)(\bar{x}-\mu)', \]
thus
\[ \sum_{i=1}^{n}(x_i-\mu)'\Sigma^{-1}(x_i-\mu) = \operatorname{tr}\Bigl[\Sigma^{-1}\sum_{i=1}^{n}(x_i-\bar{x})(x_i-\bar{x})'\Bigr] + n(\bar{x}-\mu)'\Sigma^{-1}(\bar{x}-\mu), \]
hence
\[ \ell(\mu, \Sigma) = -\frac{np}{2}\ln(2\pi) - \frac{n}{2}\ln|\Sigma| - \frac{1}{2}\operatorname{tr}\Bigl[\Sigma^{-1}\sum_{i=1}^{n}(x_i-\bar{x})(x_i-\bar{x})'\Bigr] - \frac{n}{2}(\bar{x}-\mu)'\Sigma^{-1}(\bar{x}-\mu). \]
26
Now, for any fixed Σ, the only term of ℓ(μ, Σ) involving μ is
\[ -\frac{n}{2}(\bar{x}-\mu)'\Sigma^{-1}(\bar{x}-\mu) \le 0, \]
which is maximized (equal to 0) when μ = x̄; hence μ̂ = x̄.
27
Now, writing
\[ A = \sum_{i=1}^{n}(x_i-\bar{x})(x_i-\bar{x})', \]
maximizing
\[ \ell(\bar{x}, \Sigma) = -\frac{np}{2}\ln(2\pi) - \frac{n}{2}\ln|\Sigma| - \frac{1}{2}\operatorname{tr}\bigl(\Sigma^{-1}A\bigr) \]
over Σ gives Σ̂ = (1/n) A.
28
Summary: the Maximum Likelihood estimators of μ and Σ are
\[ \hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i \]
and
\[ \hat{\Sigma} = \frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})(x_i-\bar{x})' = \frac{n-1}{n}\,S. \]
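A minimal sketch of these closed-form estimators on simulated data (the data matrix, the "true" parameter values, and the variable names are assumptions of the example):

```python
# A minimal sketch (assumed example): closed-form MLEs for the p-variate normal
# computed from a data matrix X (n rows, p columns).
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 3
mu_true = np.array([1.0, -2.0, 0.5])                     # assumed "true" parameters
Sigma_true = np.array([[2.0, 0.5, 0.0],
                       [0.5, 1.0, 0.3],
                       [0.0, 0.3, 1.5]])
X = rng.multivariate_normal(mu_true, Sigma_true, size=n)

mu_hat = X.mean(axis=0)                                  # MLE of the mean vector
D = X - mu_hat
Sigma_hat = D.T @ D / n                                  # MLE of the covariance (1/n divisor)

S = D.T @ D / (n - 1)                                    # sample covariance matrix S
assert np.allclose(Sigma_hat, (n - 1) / n * S)           # Sigma_hat = (n-1)/n * S
print(mu_hat)
print(Sigma_hat)
```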
29
Sampling distribution of the MLEs
30
Note: the joint density function of the data vector
\(\vec{x} = (x_1', x_2', \dots, x_n')'\) is
\[ f(\vec{x}) = \frac{1}{(2\pi)^{np/2}|\Sigma|^{n/2}} \exp\Bigl\{-\tfrac{1}{2}\sum_{i=1}^{n}(x_i-\mu)'\Sigma^{-1}(x_i-\mu)\Bigr\}. \]
31
This distribution is np-variate normal with mean
vector 1n ⊗ μ = (μ′, μ′, ..., μ′)′ and covariance matrix In ⊗ Σ.
32
Thus the distribution of x̄
is p-variate normal with mean vector μ and covariance matrix (1/n) Σ.
33
(No Transcript)
34
Summary
  • The sampling distribution of x̄
  • is p-variate normal with mean vector μ and covariance
    matrix (1/n) Σ, i.e. x̄ ~ Np(μ, (1/n) Σ).
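A Monte Carlo sketch of this result (the sample size, dimension, and parameter values are assumptions of the example): the simulated sample means average out close to μ, with covariance close to Σ/n.

```python
# A minimal sketch (assumed example): Monte Carlo check that the sample mean
# vector of n observations from N_p(mu, Sigma) has mean mu and covariance Sigma/n.
import numpy as np

rng = np.random.default_rng(3)
n, p, reps = 20, 2, 50_000
mu = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])

X = rng.multivariate_normal(mu, Sigma, size=(reps, n))   # reps samples, each of size n
x_bars = X.mean(axis=1)                                   # one sample mean vector per replication

print(x_bars.mean(axis=0))                    # approximately mu
print(np.cov(x_bars, rowvar=False) * n)       # approximately Sigma (covariance of x-bar is Sigma/n)
```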

35
The sampling distribution of the sample
covariance matrix S and of Σ̂ = ((n-1)/n) S
36
The Wishart distribution
  • A multivariate generalization of the χ²
    distribution

37
Definition: the p-variate Wishart distribution
  • Let z1, z2, ..., zk be k independent random p-vectors,
    each having a p-variate normal distribution with mean
    vector 0 and covariance matrix Σ, and let
    \[ U = \sum_{i=1}^{k} z_i z_i'. \]
Then U is said to have the p-variate Wishart
distribution with k degrees of freedom, written U ~ Wp(k, Σ).
38
The density of the p-variate Wishart distribution
  • Suppose U ~ Wp(k, Σ) with k ≥ p.
Then the joint density of U is
\[ f(U) = \frac{|U|^{(k-p-1)/2} \exp\bigl\{-\tfrac{1}{2}\operatorname{tr}(\Sigma^{-1}U)\bigr\}}{2^{kp/2}\,|\Sigma|^{k/2}\,\Gamma_p(k/2)}, \]
where Γp(·) is the multivariate gamma function.
It can be easily checked that when p = 1 and Σ = 1
the Wishart distribution becomes the χ²
distribution with k degrees of freedom.
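A sketch of the definition and of the χ² special case by simulation (the degrees of freedom, dimension, and Σ are assumptions of the example):

```python
# A minimal sketch (assumed example): a Wishart W_p(k, Sigma) variate built from
# its definition as a sum of outer products, plus a check that the p = 1,
# Sigma = 1 case behaves like a chi-square with k degrees of freedom.
import numpy as np

rng = np.random.default_rng(2)
k, p = 10, 3
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

Z = rng.multivariate_normal(np.zeros(p), Sigma, size=k)  # k independent N_p(0, Sigma) vectors
U = Z.T @ Z                                              # U = sum of z_i z_i'  ~  W_p(k, Sigma)

# p = 1, Sigma = 1: U is a sum of k squared N(0, 1) variables, i.e. chi-square(k).
u = (rng.standard_normal((100_000, k)) ** 2).sum(axis=1)
print(u.mean(), u.var())                                 # approximately k and 2k
```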
39
Theorem
  • Suppose

then
Corollary 1
Corollary 2
Proof
40
Theorem
  • Suppose U1 ~ Wp(k1, Σ) and U2 ~ Wp(k2, Σ)
    are independent; then U1 + U2 ~ Wp(k1 + k2, Σ).
Theorem
are independent and
Suppose
then
41
Theorem
Let x1, x2, ..., xn be a sample from Np(μ, Σ); then
\[ \bar{x} \sim N_p\Bigl(\mu, \tfrac{1}{n}\Sigma\Bigr). \]
Theorem
Let x1, x2, ..., xn be a sample from Np(μ, Σ); then
\[ \sum_{i=1}^{n}(x_i-\bar{x})(x_i-\bar{x})' = n\hat{\Sigma} \sim W_p(n-1, \Sigma). \]
42
Theorem
Proof
etc
43
Theorem
Let x1, x2, ..., xn be a sample from Np(μ, Σ); then
x̄ is independent of
\[ \hat{\Sigma} = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})'. \]
Proof
be orthogonal
Then
44
Note H is also orthogonal
45
Properties of the Kronecker product
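The slide's own list of properties was not transcribed; the following sketch checks a few standard Kronecker-product identities numerically with numpy.kron (the matrices are arbitrary examples), including the fact that a Kronecker product of orthogonal matrices is orthogonal, consistent with the preceding note that H is also orthogonal.

```python
# A minimal sketch (assumed examples): standard Kronecker-product identities.
import numpy as np

rng = np.random.default_rng(4)
A, B = rng.normal(size=(2, 2)), rng.normal(size=(3, 3))   # arbitrary example matrices
C, D = rng.normal(size=(2, 2)), rng.normal(size=(3, 3))

# (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# (A ⊗ B)' = A' ⊗ B'
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))

# (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}, when A and B are invertible
assert np.allclose(np.linalg.inv(np.kron(A, B)),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))

# A Kronecker product of orthogonal matrices is orthogonal.
Q1, _ = np.linalg.qr(rng.normal(size=(3, 3)))
Q2, _ = np.linalg.qr(rng.normal(size=(2, 2)))
K = np.kron(Q1, Q2)
assert np.allclose(K @ K.T, np.eye(6))
```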
46
(No Transcript)
47
(No Transcript)
48
Thus the distribution of
is np-variate normal with mean vector
49
Thus the joint distribution of
is np-variate normal with mean vector
50
Thus the joint distribution of
is np-variate normal with mean vector
51
Summary: Sampling distribution of MLEs for the
multivariate Normal distribution
Let x1, x2, ..., xn be a sample from Np(μ, Σ); then
\[ \hat{\mu} = \bar{x} \sim N_p\Bigl(\mu, \tfrac{1}{n}\Sigma\Bigr) \]
and
\[ n\hat{\Sigma} = \sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})' \sim W_p(n-1, \Sigma), \]
with x̄ independent of Σ̂ (as shown above).
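A Monte Carlo sketch consistent with this summary (all numerical values are assumptions of the example): the average of n·Σ̂ over many replications is close to (n−1)·Σ, the mean of a Wp(n−1, Σ) variate.

```python
# A minimal sketch (assumed example): check that n * Sigma_hat has expectation
# (n - 1) * Sigma, consistent with a W_p(n-1, Sigma) sampling distribution.
import numpy as np

rng = np.random.default_rng(5)
n, p, reps = 10, 2, 20_000
mu = np.zeros(p)
Sigma = np.array([[1.0, 0.4],
                  [0.4, 0.5]])

total = np.zeros((p, p))
for _ in range(reps):
    X = rng.multivariate_normal(mu, Sigma, size=n)
    D = X - X.mean(axis=0)
    total += D.T @ D                           # n * Sigma_hat for this replication

print(total / reps)                            # approximately (n - 1) * Sigma
print((n - 1) * Sigma)                         # the mean of a W_p(n-1, Sigma) matrix
```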