Title: Multiple Regression Model: Asymptotic Properties of the OLS Estimator
Slide 1: Multiple Regression Model: Asymptotic Properties of the OLS Estimator
- $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k + u$
 
Slide 2: Lecture 5: The Multiple Regression Model: Asymptotic Properties of the OLS Estimator
Professor Victor Aguirregabiria
- OUTLINE
 - Convergence in Probability and the Law of Large Numbers (LLN)
 - Consistency of OLS
 - Convergence in Distribution and the Central Limit Theorem (CLT)
 - Asymptotic Normality of OLS
 - Asymptotic Tests
 - Asymptotic Efficiency of OLS
 
Slide 3: 1. Convergence in Probability and the Law of Large Numbers
- Consider a sequence of random variables $Z_1, Z_2, \dots, Z_n, \dots$
- We say that this sequence converges in probability to a constant c if, for any small constant $\varepsilon > 0$,
  $$\lim_{n \to \infty} \Pr(|Z_n - c| > \varepsilon) = 0$$
- That is, as n increases, the probability distribution of $Z_n$ becomes more and more concentrated around the constant c.
- In the limit, the random variable $Z_n$ equals the constant c with probability 1.
Slide 4: Law of Large Numbers (LLN)
- Let $y_1, y_2, \dots, y_n$ be a random sample on a random variable Y that has a finite variance.
- Define the sample mean $\bar{y}_n = \frac{1}{n} \sum_{i=1}^{n} y_i$
- The LLN says that the sample mean converges in probability to the population mean of Y: $\text{plim } \bar{y}_n = E(Y)$.
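A quick way to see the LLN at work is to simulate it. The sketch below is illustrative and not part of the original slides; the choice of an exponential population and the particular sample sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Population: exponential with scale 2, so E(Y) = 2 (an arbitrary choice).
pop_mean = 2.0

for n in [10, 100, 10_000, 1_000_000]:
    y = rng.exponential(scale=pop_mean, size=n)
    # The sample mean settles near E(Y) = 2 as n grows (LLN).
    print(f"n = {n:>9,}: sample mean = {y.mean():.4f}")
```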
Slide 5: Consistency
- Under the Gauss-Markov assumptions, OLS is BLUE.
- However, when we relax some of the assumptions (e.g., constant variance), it won't always be possible to find unbiased estimators.
- In those cases, we may still find estimators that are consistent, meaning that as n → ∞, the distribution of the estimator collapses to the true parameter value.
Slide 6: Sampling Distributions as n → ∞
[Figure: sampling distributions of the OLS estimator for sample sizes n1 < n2 < n3; as n grows, the distribution becomes more tightly concentrated around the true parameter value β1.]
Slide 7: 2. Consistency of OLS
- Under the Gauss-Markov assumptions, the OLS estimator is consistent (and unbiased).
- Consistency can be proved for the simple regression case in a manner similar to the proof of unbiasedness.
- We will need to take the probability limit (plim) to establish consistency; see the simulation and the sketch of the proof below.
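To illustrate consistency numerically, the following simulation (an illustrative sketch; the data-generating process and sample sizes are assumptions, not from the slides) shows the OLS slope estimate collapsing toward the true value as n grows.

```python
import numpy as np

rng = np.random.default_rng(1)
beta0, beta1 = 1.0, 0.5  # true parameters (assumed for the demo)

for n in [50, 500, 5_000, 50_000]:
    x = rng.normal(size=n)
    u = rng.normal(size=n)          # E(u) = 0 and Cov(x, u) = 0 hold here
    y = beta0 + beta1 * x + u
    # OLS slope: sample Cov(x, y) / sample Var(x)
    b1_hat = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    print(f"n = {n:>6,}: b1_hat = {b1_hat:.4f} (true beta1 = {beta1})")
```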
Slide 8: Proving Consistency
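For the simple regression model $y = \beta_0 + \beta_1 x_1 + u$, a sketch of the standard argument:

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_{i1} - \bar{x}_1)\, y_i}{\sum_{i=1}^{n} (x_{i1} - \bar{x}_1)^2} = \beta_1 + \frac{n^{-1} \sum_{i=1}^{n} (x_{i1} - \bar{x}_1)\, u_i}{n^{-1} \sum_{i=1}^{n} (x_{i1} - \bar{x}_1)^2}$$

By the LLN, the numerator of the second term converges in probability to $\mathrm{Cov}(x_1, u)$ and the denominator to $\mathrm{Var}(x_1)$, so

$$\text{plim } \hat{\beta}_1 = \beta_1 + \frac{\mathrm{Cov}(x_1, u)}{\mathrm{Var}(x_1)} = \beta_1$$

since $\mathrm{Cov}(x_1, u) = 0$.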
Slide 9: A Weaker Assumption
- For unbiasedness, we assumed a zero conditional mean: $E(u \mid x_1, x_2, \dots, x_k) = 0$.
- For consistency, we can use the weaker assumption of zero mean and zero correlation: $E(u) = 0$ and $\mathrm{Cov}(x_j, u) = 0$ for all j.
- Without this assumption, OLS will be biased and inconsistent!
Slide 10: Deriving the Inconsistency
- Just as we derived the omitted variable bias earlier, we can now think about the inconsistency, or asymptotic bias, that arises when the error is correlated with a regressor.
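Using the same decomposition as in the consistency proof, but without imposing zero correlation between $x_1$ and $u$, the simple regression case gives

$$\text{plim } \hat{\beta}_1 = \beta_1 + \frac{\mathrm{Cov}(x_1, u)}{\mathrm{Var}(x_1)}$$

so the asymptotic bias is $\mathrm{Cov}(x_1, u) / \mathrm{Var}(x_1)$, and its sign is the sign of the covariance between the regressor and the error.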
Slide 11: Asymptotic Bias
- Thinking about the direction of the asymptotic bias is just like thinking about the direction of the bias from an omitted variable.
- The main difference is that the asymptotic bias uses the population variance and covariance, while the finite-sample bias uses their sample counterparts.
- Remember, inconsistency is a large-sample problem: it doesn't go away as we add data.
Slide 12: 3. Convergence in Distribution and the Central Limit Theorem
- Recall that under the CLM assumptions, the sampling distributions are exactly normal, so we could derive t and F distributions for testing.
- This exact normality was due to assuming that the population error distribution was normal.
- This assumption of normal errors implied that the distribution of y, given the x's, was normal as well.
Slide 13: 3. Convergence in Distribution
- It is easy to come up with examples for which this exact normality assumption fails.
- Any clearly skewed variable, like wages, arrests, or savings, can't be normal, since a normal distribution is symmetric.
- The normality assumption is not needed to conclude that OLS is BLUE; it is needed only for inference.
Slide 14: Central Limit Theorem
- The central limit theorem states that the standardized sample mean from any population with mean $\mu$ and finite variance $\sigma^2$ is asymptotically N(0, 1):
  $$Z_n = \frac{\bar{y}_n - \mu}{\sigma / \sqrt{n}} \xrightarrow{d} N(0, 1)$$
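A simulation makes the point concrete. The sketch below uses illustrative assumptions (an exponential population, arbitrary sample sizes, and 20,000 replications): it standardizes sample means from a skewed distribution and checks that they behave like draws from N(0, 1).

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 2.0, 2.0      # exponential(scale=2): mean 2, standard deviation 2
reps = 20_000             # number of simulated sample means

for n in [5, 30, 500]:
    samples = rng.exponential(scale=mu, size=(reps, n))
    z = (samples.mean(axis=1) - mu) / (sigma / np.sqrt(n))
    # For N(0, 1): mean ~ 0, std ~ 1, and P(Z <= 1.96) ~ 0.975.
    print(f"n = {n:>3}: mean(Z) = {z.mean():+.3f}, "
          f"std(Z) = {z.std():.3f}, P(Z <= 1.96) = {(z <= 1.96).mean():.3f}")
```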
Slide 15: 4. Asymptotic Normality of OLS
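Stated formally, the standard large-sample result under the Gauss-Markov assumptions is (the notation $a_j^2$ and $\hat{r}_{ij}$ follows common textbook usage):

$$\sqrt{n}\,(\hat{\beta}_j - \beta_j) \xrightarrow{d} N\!\left(0, \frac{\sigma^2}{a_j^2}\right)$$

where $a_j^2 = \text{plim}\, n^{-1} \sum_{i} \hat{r}_{ij}^2$ and the $\hat{r}_{ij}$ are the residuals from regressing $x_j$ on the other regressors. In practice we use the feasible version:

$$\frac{\hat{\beta}_j - \beta_j}{\mathrm{se}(\hat{\beta}_j)} \stackrel{a}{\sim} N(0, 1)$$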
Slide 16: 4. Asymptotic Normality of OLS
- Because the t distribution approaches the normal distribution as the degrees of freedom grow large, we can also say that $(\hat{\beta}_j - \beta_j)/\mathrm{se}(\hat{\beta}_j) \stackrel{a}{\sim} t_{n-k-1}$.
- Note that while we no longer need to assume normality with a large sample, we do still need homoskedasticity.
Slide 17: Asymptotic Standard Errors
- If u is not normally distributed, we sometimes refer to the standard error as an asymptotic standard error, since $\mathrm{se}(\hat{\beta}_j) \approx c_j / \sqrt{n}$, where $c_j$ is a positive constant that does not depend on n.
- So, we can expect standard errors to shrink at a rate proportional to the inverse of $\sqrt{n}$: for example, quadrupling the sample size roughly halves the standard error.
Slide 18: 5. Asymptotic Tests: The Lagrange Multiplier (LM) Test
- Once we are using large samples and relying on asymptotic normality for inference, we can use more than t and F statistics.
- The Lagrange multiplier, or LM, statistic is an alternative for testing multiple exclusion restrictions.
Slide 19: LM Test
- Suppose we have a standard model,
  $$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k + u$$
- and our null hypothesis imposes q exclusion restrictions, implying a restricted model: $H_0: \beta_{k-q+1} = 0, \dots, \beta_k = 0$.
Slide 20: LM Test
- First, we run an OLS regression for the restricted model and save the residuals $\tilde{u}$.
- Second, we regress the residuals $\tilde{u}$ on all of the independent variables $x_1, \dots, x_k$ and obtain the R-squared from this auxiliary regression, $R_u^2$.
Slide 21: LM Test
- Under the null hypothesis, we have that $LM = n R_u^2 \stackrel{a}{\sim} \chi^2_q$, where q is the number of restrictions being tested.
- With a large sample, the result from an F test and from an LM test should be similar.
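As a concrete sketch of the procedure (the simulated data, variable names, and sample size are assumptions for illustration, not from the slides):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, q = 400, 2                             # sample size; number of exclusion restrictions
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.0 + 0.5 * x1 + rng.normal(size=n)   # true model: x2 and x3 are irrelevant

def ols_fit(X, y):
    """Return OLS coefficients and residuals for X (with intercept column)."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, y - X @ b

# Step 1: estimate the restricted model (H0: coefficients on x2 and x3 are zero).
X_r = np.column_stack([np.ones(n), x1])
_, u_tilde = ols_fit(X_r, y)

# Step 2: regress the restricted residuals on ALL regressors, get R-squared.
X_full = np.column_stack([np.ones(n), x1, x2, x3])
_, e = ols_fit(X_full, u_tilde)
tss = (u_tilde - u_tilde.mean()) @ (u_tilde - u_tilde.mean())
r2_u = 1 - (e @ e) / tss

# Step 3: LM = n * R^2 is asymptotically chi-squared with q degrees of freedom.
lm = n * r2_u
p_value = stats.chi2.sf(lm, df=q)
print(f"LM = {lm:.3f}, p-value = {p_value:.3f}")   # large p => do not reject H0
```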
Slide 22: 6. Asymptotic Efficiency of OLS
- Estimators other than OLS can also be consistent.
- However, under the Gauss-Markov assumptions, the OLS estimators have the smallest asymptotic variances.
- We say that OLS is asymptotically efficient.
- It is important to remember our assumptions, though: if the errors are not homoskedastic, this result no longer holds.