# Mean Squared Error and Maximum Likelihood

1
Mean Squared Error and Maximum Likelihood
• Lecture XVIII

2
Mean Squared Error
• As stated in our discussion on closeness, one potential measure of the goodness of an estimator $\hat\theta$ of the parameter θ is the mean squared error
$$\mathrm{MSE}\big(\hat\theta\big) = E\big[\big(\hat\theta - \theta\big)^2\big]$$

3
• In the preceding example, the mean squared error of the estimate can be written as
$$\mathrm{MSE}(T) = E\big[(T - \theta)^2\big]$$
• where θ is the true parameter value between zero and one.

4
• This expected value is conditioned on the probability of T at each value of θ, as the sketch below illustrates.
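
Since the earlier example is not reproduced in this transcript, the following sketch assumes a binomial setting consistent with θ lying between zero and one: X ~ Binomial(n, θ) with T = X/n. It computes the MSE exactly by weighting each outcome of T by its probability at a given θ, which is the conditioning described above; the names n, T, and mse are illustrative.

```python
import numpy as np
from scipy.stats import binom

# Hypothetical setup -- the original example is not reproduced here:
# X ~ Binomial(n, theta), and T = X / n estimates theta in [0, 1].
n = 10
T = lambda x: x / n

def mse(estimator, theta):
    """Exact E[(estimator(X) - theta)^2]: each outcome x = 0, ..., n is
    weighted by its binomial probability at this value of theta."""
    x = np.arange(n + 1)
    return np.sum(binom.pmf(x, n, theta) * (estimator(x) - theta) ** 2)

for theta in (0.1, 0.3, 0.5):
    print(theta, mse(T, theta))  # equals theta * (1 - theta) / n here
```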

6
MSEs of Each Estimator
7
• Definition 7.2.1. Let X and Y be two estimators of θ. We say that X is better (or more efficient) than Y if $E(X - \theta)^2 \le E(Y - \theta)^2$ for all θ in Θ, and strictly less for at least one θ in Θ.
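
A numeric illustration of Definition 7.2.1, again under the assumed binomial setup; the shrinkage estimator W here is an illustrative stand-in, not the W of the earlier lecture. Neither estimator's MSE lies below the other's at every θ, so neither is better in the sense of the definition.

```python
import numpy as np
from scipy.stats import binom

n = 10
thetas = np.linspace(0.01, 0.99, 99)

def mse(estimator, theta):
    x = np.arange(n + 1)
    return np.sum(binom.pmf(x, n, theta) * (estimator(x) - theta) ** 2)

T = lambda x: x / n              # sample proportion (unbiased)
W = lambda x: (x + 1) / (n + 2)  # illustrative shrinkage estimator

mse_T = np.array([mse(T, t) for t in thetas])
mse_W = np.array([mse(W, t) for t in thetas])

# "Better" requires MSE <= everywhere and < for at least one theta:
print(np.all(mse_T <= mse_W))  # False
print(np.all(mse_W <= mse_T))  # False -- neither dominates the other
```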

8
• When an estimator is dominated by another estimator, the dominated estimator is inadmissible.
• Definition 7.2.2. Let $\hat\theta$ be an estimator of θ. We say that $\hat\theta$ is inadmissible if there is another estimator which is better in the sense that it produces a lower mean squared error of the estimate. An estimator that is not inadmissible is admissible.

9
Strategies for Choosing an Estimator
• Subjective strategy: this strategy considers the likely outcome of θ and selects the estimator that is best in that likely neighborhood.
• Minimax strategy: according to the minimax strategy, we choose the estimator for which the largest possible value of the mean squared error is the smallest.

10
• Definition 7.2.3. Let $\hat\theta$ be an estimator of θ. It is a minimax estimator if, for any other estimator $\tilde\theta$ of θ, we have
$$\max_{\theta} E\big(\hat\theta - \theta\big)^2 \le \max_{\theta} E\big(\tilde\theta - \theta\big)^2$$
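
Continuing the same illustrative binomial sketch (reusing mse_T and mse_W from the block under Definition 7.2.1), the minimax comparison reduces to comparing worst-case MSEs over the θ grid:

```python
# Reusing mse_T and mse_W from the sketch under Definition 7.2.1:
print(mse_T.max())  # worst-case MSE of T over the grid, ~0.025
print(mse_W.max())  # worst-case MSE of W, ~0.017 -- the smaller of
                    # the two, so W is the minimax choice in this pair
```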

11
Best Linear Unbiased Estimator
• Definition 7.2.4. $\hat\theta$ is said to be an unbiased estimator of θ if
$$E\big(\hat\theta\big) = \theta$$
• for all θ in Θ. We call $E\big(\hat\theta\big) - \theta$ the bias.

12
• In our previous discussion, T and S are unbiased estimators while W is biased.
• Theorem 7.2.10. The mean squared error is the sum of the variance and the bias squared. That is, for any estimator $\hat\theta$ of θ,
$$E\big(\hat\theta - \theta\big)^2 = V\big(\hat\theta\big) + \big[E\big(\hat\theta\big) - \theta\big]^2$$
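
The proof takes one line: add and subtract $E(\hat\theta)$ inside the square; the cross term vanishes because $E\big[\hat\theta - E(\hat\theta)\big] = 0$:
$$E\big(\hat\theta - \theta\big)^2 = E\big[\hat\theta - E(\hat\theta) + E(\hat\theta) - \theta\big]^2 = E\big[\hat\theta - E(\hat\theta)\big]^2 + \big[E(\hat\theta) - \theta\big]^2 = V\big(\hat\theta\big) + \mathrm{Bias}\big(\hat\theta\big)^2$$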

13
• Theorem 7.2.11. Let $X_i$, $i = 1, 2, \ldots, n$, be independent with a common mean μ and variance σ². Consider the class of linear estimators of μ, which can be written in the form
$$\hat\mu = \sum_{i=1}^{n} a_i X_i$$
• and impose the unbiasedness condition
$$E\big(\hat\mu\big) = \mu$$

14
• Then
$$V\big(\bar{X}\big) \le V\left(\sum_{i=1}^{n} a_i X_i\right)$$
• for all $a_i$ satisfying the unbiasedness condition. Further, this condition holds with equality only for $a_i = 1/n$.
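
A minimal numeric check of the theorem, with illustrative values: for independent $X_i$ with common variance σ², a linear estimator $\sum a_i X_i$ has variance $\sigma^2 \sum a_i^2$, and weights that sum to one never beat the sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 5, 1.0  # illustrative values

# Random weight vectors that satisfy the unbiasedness condition
# (they sum to one) always have variance at least sigma^2 / n:
for _ in range(3):
    a = rng.random(n)
    a /= a.sum()                   # enforce sum(a_i) = 1
    print(sigma2 * np.sum(a**2), ">=", sigma2 / n)
```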

15
• To prove these points, note that the $a_i$'s must sum to one for unbiasedness:
$$E\left(\sum_{i=1}^{n} a_i X_i\right) = \mu \sum_{i=1}^{n} a_i = \mu \;\Rightarrow\; \sum_{i=1}^{n} a_i = 1$$
• The final condition can be demonstrated through the identity
$$\sum_{i=1}^{n} a_i^2 = \sum_{i=1}^{n}\left(a_i - \frac{1}{n}\right)^2 + \frac{1}{n}$$
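
Expanding the square verifies the identity under the constraint $\sum a_i = 1$:
$$\sum_{i=1}^{n}\left(a_i - \frac{1}{n}\right)^2 = \sum_{i=1}^{n} a_i^2 - \frac{2}{n}\sum_{i=1}^{n} a_i + \frac{1}{n} = \sum_{i=1}^{n} a_i^2 - \frac{1}{n}$$
Hence $V\big(\sum a_i X_i\big) = \sigma^2 \sum a_i^2 = \sigma^2\big[\sum (a_i - 1/n)^2 + 1/n\big] \ge \sigma^2/n = V(\bar{X})$, with equality only when every $a_i = 1/n$, that is, for the sample mean.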

17
• Theorem 7.2.12. Consider the problem of minimizing
• with respect to the $a_i$, subject to the condition

18
• The solution to this problem is given by
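
The objective, condition, and solution on these two slides are not reproduced in this transcript. One reading consistent with the surrounding material takes the objective to be the estimator's variance and the condition to be the unbiasedness restriction; under that assumption, a Lagrangian sketch recovers the sample-mean weights:
$$\min_{a_1,\ldots,a_n} \; \sigma^2 \sum_{i=1}^{n} a_i^2 \quad \text{subject to} \quad \sum_{i=1}^{n} a_i = 1$$
$$\mathcal{L} = \sigma^2 \sum_{i=1}^{n} a_i^2 - \lambda\left(\sum_{i=1}^{n} a_i - 1\right), \qquad \frac{\partial \mathcal{L}}{\partial a_i} = 2\sigma^2 a_i - \lambda = 0 \;\Rightarrow\; a_i = \frac{\lambda}{2\sigma^2}$$
The first-order conditions force all $a_i$ to be equal, and the constraint then gives $a_i = 1/n$.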

19
Asymptotic Properties
• Definition 7.2.5. We say that $\hat\theta_n$ is a consistent estimator of θ if, for every $\epsilon > 0$,
$$\lim_{n \to \infty} P\big(\big|\hat\theta_n - \theta\big| > \epsilon\big) = 0$$
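
An illustrative simulation of this definition for the sample mean; the distribution and the values of μ and ε below are arbitrary choices. The fraction of samples whose mean misses μ by more than ε shrinks toward zero as n grows.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, eps = 0.0, 0.1  # arbitrary illustrative values

# Fraction of simulated samples whose mean misses mu by more than eps;
# consistency says this fraction should vanish as n grows.
for n in (10, 100, 1000, 10000):
    means = rng.normal(mu, 1.0, size=(500, n)).mean(axis=1)
    print(n, np.mean(np.abs(means - mu) > eps))
```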

20
Maximum Likelihood
• The basic concept behind maximum likelihood estimation is to choose the set of parameters that maximizes the likelihood of drawing a particular sample.
• Let the sample be X = {5, 6, 7, 8, 10}. The probability of each of these points based on the unknown mean, μ, can be written as
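
The slide's density is not reproduced in this transcript. Assuming, as the mean-based solution on the later slides suggests, a normal distribution with known variance σ², each point's density would be
$$f\big(X_i; \mu\big) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(X_i - \mu)^2}{2\sigma^2}\right)$$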

22
• Assuming that the sample is independent, so that the joint distribution function can be written as the product of the marginal distribution functions, the probability of drawing the entire sample based on a given mean can then be written as
$$L(\mu) = \prod_{i=1}^{n} f\big(X_i; \mu\big)$$

23
• The value of μ that maximizes the likelihood function of the sample can then be defined by
$$\hat\mu = \arg\max_{\mu} L(\mu)$$
• Under the current scenario, we find it easier, however, to maximize the natural logarithm of the likelihood function
$$\ln L(\mu) = \sum_{i=1}^{n} \ln f\big(X_i; \mu\big)$$
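
A minimal numeric sketch under the same assumed normal likelihood with σ = 1 (an assumption, since the slide's density is not shown): a grid search over candidate means recovers the analytic answer, the sample mean, which is 7.2 for this sample.

```python
import numpy as np

x = np.array([5.0, 6.0, 7.0, 8.0, 10.0])

def log_likelihood(mu, sigma=1.0):
    """Normal log-likelihood of the sample at mean mu (sigma assumed known)."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu) ** 2 / (2 * sigma**2))

# Grid search over candidate means; the maximizer is the sample mean.
grid = np.linspace(4.0, 11.0, 7001)
mu_hat = grid[np.argmax([log_likelihood(m) for m in grid])]
print(mu_hat, x.mean())  # both ~7.2
```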
