Inference with Time Series Data: An Introduction (Chapters 10, 11, 12)

Transcript and Presenter's Notes

1
Inference with Time Series Data: An
Introduction (Chapters 10, 11, 12)
  • Course: Applied Econometrics
  • Lecturer: Zhigang Li

2
Time-Series Modelling
  • Time Series Process (or Stochastic Process)
  • A sequence of random variables indexed by time.
  • Why time-series modelling?
  • Data availability
  • Utilize past information to approximate
    unobserved variables.
  • Natural process
  • Population growth
  • Capital accumulation

3
Key Time Series Properties
  • Covariance Stationary Process X
  • E(xt), Var(xt), and Cov(xt, xt+h) are all
    constant over time (they do not depend on t)
  • If any of the conditions are violated, the
    process is nonstationary.
  • A process with time trend is nonstationary (its
    mean changes over time).
  • Weakly Dependent Process X
  • A covariance stationary process X is weakly
    dependent if Cov(xt, xt+h) converges to zero fast
    enough as h gets large.
  • Otherwise, the process is strongly dependent
    (or highly persistent).
  • Random Walk
  • yt = yt-1 + ut (ut is i.i.d. with mean zero); see
    the simulation sketch below.
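
A minimal simulation sketch in Python (not part of the
original slides; the parameter values are illustrative)
contrasting a stable AR(1) process, whose autocorrelations
die out (weak dependence), with a random walk, whose
autocorrelations stay close to one (high persistence):

    import numpy as np

    rng = np.random.default_rng(0)
    T, rho = 1000, 0.5
    u = rng.normal(size=T)

    ar1 = np.zeros(T)  # stable AR(1): y_t = rho*y_{t-1} + u_t (weakly dependent)
    rw = np.zeros(T)   # random walk:  y_t = y_{t-1} + u_t     (highly persistent)
    for t in range(1, T):
        ar1[t] = rho * ar1[t - 1] + u[t]
        rw[t] = rw[t - 1] + u[t]

    def sample_corr(y, h):
        # sample correlation between y_t and y_{t+h}
        return np.corrcoef(y[:-h], y[h:])[0, 1]

    for h in (1, 5, 25):
        print(f"h={h}: AR(1) {sample_corr(ar1, h):.2f}, "
              f"random walk {sample_corr(rw, h):.2f}")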

4
Why is Inference with Time Series Data Typically
More Difficult?
  • Serial Correlation in Errors
  • More restrictive assumptions to achieve
    consistent estimates
  • Arbitrary Model Structure
  • Small sample

5
Unbiasedness of OLS Estimates in Small Samples
  • With Strict Exogeneity
  • If ut is uncorrelated with xs for all s and t,
    coefficient estimates are generally unbiased
    (linear model, no perfect collinearity).
  • This condition rules out feedback from y on
    future values of x (e.g. crime rate and number of
    policemen).
  • This condition also rules out models with lagged
    dependent variables.
  • Random sampling assumption not needed.
  • ut can be heteroskedastic and correlated over
    time.

6
Consistency of OLS Estimates in Large Samples
  • Key Assumptions
  • Xs and u are contemporaneously uncorrelated.
  • All variables (Xs and y) are stationary and
    weakly dependent.
  • Loosely speaking, xt and xt+h are almost
    independent as h increases without bound.
  • A special case: A covariance stationary time
    series is weakly dependent if Corr(xt, xt+h) goes
    to zero sufficiently quickly as h approaches
    infinity. (pp. 362 or 382)

7
Typical Time Series Models
  • Moving Average Process of Order One, MA(1)
  • yt = et + a·et-1 (et is i.i.d. with zero mean)
  • The MA(1) process is stationary and weakly
    dependent. (What is Corr(yt, yt+h)? See the
    sketch below.)
  • Autoregressive Process of Order One, AR(1)
  • yt = a + ρ·yt-1 + ut (ut is i.i.d.)
  • If |ρ| < 1, it is called a stable AR(1) process,
    which is stationary and weakly dependent, with
    Corr(yt, yt+h) = ρ^h
  • ARMA(1,1) is a combination of the above two
  • A trending series, while certainly nonstationary,
    can be weakly dependent (called a
    trend-stationary process).
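
A quick simulation check (illustrative parameters, not from
the slides) of the autocorrelations referred to above: for
the MA(1), Corr(yt, yt+1) = a/(1 + a²) and Corr(yt, yt+h) = 0
for h ≥ 2; for the stable AR(1), Corr(yt, yt+h) = ρ^h:

    import numpy as np

    rng = np.random.default_rng(1)
    T, a, rho = 100_000, 0.8, 0.8
    e = rng.normal(size=T + 1)

    ma1 = e[1:] + a * e[:-1]       # MA(1): y_t = e_t + a*e_{t-1}
    ar1 = np.zeros(T)              # AR(1): y_t = rho*y_{t-1} + e_t
    for t in range(1, T):
        ar1[t] = rho * ar1[t - 1] + e[t]

    def acorr(y, h):
        # sample correlation between y_t and y_{t+h}
        return np.corrcoef(y[:-h], y[h:])[0, 1]

    print("MA(1), h=1:", round(acorr(ma1, 1), 3),
          "theory:", round(a / (1 + a**2), 3))
    print("MA(1), h=2:", round(acorr(ma1, 2), 3), "theory: 0")
    print("AR(1), h=3:", round(acorr(ar1, 3), 3),
          "theory:", round(rho**3, 3))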

8
Inconsistency of Standard Errors with Serial
Correlation in ut
  • OLS standard errors are not valid with serially
    correlated errors. (pp. 392 or 413)

9
Testing for Serial Correlation
  • Durbin-Watson test
  • Test for AR(1) serial correlation based on the
    OLS residuals
  • DW is approximately 2(1 - ρ), where ρ is the
    AR(1) correlation coefficient of the errors
    (estimated from the OLS residuals)
  • When DW is significantly less than two, serial
    correlation is present. (See the sketch below.)
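
A sketch of the calculation (the simulated data and variable
names are illustrative), using statsmodels' built-in
Durbin-Watson statistic and the 2(1 - ρ) approximation:

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    rng = np.random.default_rng(2)
    T = 200
    x = rng.normal(size=T)
    u = np.zeros(T)
    for t in range(1, T):                  # AR(1) errors with rho = 0.6
        u[t] = 0.6 * u[t - 1] + rng.normal()
    y = 1.0 + 2.0 * x + u

    resid = sm.OLS(y, sm.add_constant(x)).fit().resid
    rho_hat = sm.OLS(resid[1:], resid[:-1]).fit().params[0]  # AR(1) coef of residuals

    print("DW:         ", round(durbin_watson(resid), 3))
    print("2*(1 - rho):", round(2 * (1 - rho_hat), 3))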

10
Solution 1: Differencing (pp. 409 or 431)
  • Differencing the time series model gives us
    Δyt = a1·Δxt + Δut
  • If the process u follows a random walk, then
    differencing makes Δut a serially uncorrelated
    process.
  • Even if u does not follow a random walk, if ρ is
    positive and large, first differencing will
    eliminate part of the serial correlation. (You
    may check this by testing for serial correlation
    in the residuals; see the sketch below.)
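
A sketch with simulated data (the data-generating process is
an assumption, for illustration only) showing how first
differencing reduces the serial correlation left in the
residuals when the error is a highly persistent AR(1):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    T = 300
    x = np.cumsum(rng.normal(size=T))      # persistent regressor
    u = np.zeros(T)
    for t in range(1, T):                  # highly persistent AR(1) error
        u[t] = 0.95 * u[t - 1] + rng.normal()
    y = 0.5 + 2.0 * x + u

    def resid_ar1(yv, xv):
        # AR(1) coefficient of the OLS residuals of yv on xv
        r = sm.OLS(yv, sm.add_constant(xv)).fit().resid
        return sm.OLS(r[1:], r[:-1]).fit().params[0]

    print("residual AR(1) coef, levels:     ", round(resid_ar1(y, x), 2))
    print("residual AR(1) coef, differences:", round(resid_ar1(np.diff(y), np.diff(x)), 2))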

11
Solution 2: Use an Estimator that is Robust to
Serial Correlation (I)
  • Davidson-MacKinnon Approach
  • Estimate yt = a0 + a1·xt1 + a2·xt2 + ut (we are
    interested in the estimate of a1).
  • Regress xt1 on the other independent variables
    and obtain the residuals rt.
  • The asymptotic standard error of the a1 estimate
    is SD(Σ rt·ut)/(Σ rt²).
  • Assumption: Once the terms are farther apart than
    a few periods, the correlation is essentially
    zero (as under weak dependence).
  • Robust to both heteroskedasticity and serial
    correlation.

12
Solution 2 Use Estimator that is Robust to
Serial Correlation II
  • Newey-West-Wooldridge Approach
  • The serial correlation-robust standard error of
    a1 estimate is SE(a1)/s2(v)1/2
  • Where v is a given function of an integer g,
    which controls how much serial correlation we are
    allowing in computing the standard error.
  • VSa22S1g1-h/(g1)(Satat-h)
  • The larger the g, the more terms are included to
    correct for serial correlation. (The choice of g
    can be done by statistics software but there is
    no strict rule.)
  • Also robust to arbitrary heteroskedasticity.
  • For correction under the strict exogeneity
    assumption (which is very restrictive), read
    textbook pp. 402 or 424.
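
In practice the correction is available in standard software;
a minimal statsmodels sketch (the simulated data and the
choice g = 4 are assumptions for illustration) comparing the
usual and the serial correlation-robust (HAC) standard
errors:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    T = 200
    x = rng.normal(size=T)
    u = np.zeros(T)
    for t in range(1, T):                  # AR(1) errors, rho = 0.7
        u[t] = 0.7 * u[t - 1] + rng.normal()
    y = 1.0 + 2.0 * x + u
    X = sm.add_constant(x)

    usual = sm.OLS(y, X).fit()             # usual (nonrobust) OLS standard errors
    hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})  # g = 4

    print("usual se(a1):", round(usual.bse[1], 3))
    print("HAC   se(a1):", round(hac.bse[1], 3))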

13
Solution 2: Use an Estimator that is Robust to
Serial Correlation (III)
  • Why has the use of SC-robust standard errors
    lagged behind the use of
    heteroskedasticity-robust standard errors?
  • Large cross sections are more common than large
    time series. The SC-robust standard errors can
    behave poorly even for time periods as large as
    100.
  • The choice of g is arbitrary.
  • In the presence of severe serial correlation, OLS
    can be very inefficient. After correcting the
    standard errors for serial correlation, the
    coefficients are often insignificant. In this
    case, the correction approach under the strict
    exogeneity assumption may have an advantage
    because it is more efficient.

14
Solution 3: Re-specify the Model until the Serial
Correlation Disappears
  • A model is called dynamically complete if its
    errors are serially uncorrelated. Therefore, with
    serially correlated errors, we may re-specify the
    model by including lags so that it becomes
    dynamically complete.
  • A model should be dynamically complete if our
    purpose is to fit the data or to forecast.
  • Example
  • yt = a0 + a1·yt-1 + ut and ut = ρ·ut-1 + et
    (|ρ| < 1)
  • The model can be rewritten as
  • yt = a0(1 - ρ) + (a1 + ρ)·yt-1 - a1ρ·yt-2 + et,
    which has a serially uncorrelated error

15
Testing for Serial Correlation (p. 395 or 416)
  • Ideas: (1) Estimate the time series model and
    obtain the residuals. (2) Then estimate
    autoregressive models with the residuals and test
    whether the coefficients of the autoregressive
    terms are statistically significant.
  • The tests can usually be done with ready-to-use
    econometrics packages. Read the textbook for more
    detail.
  • t-test and Durbin-Watson test under strict
    exogeneity
  • A regression-based test robust to the lack of
    strict exogeneity:
  • Regress y on the Xs to obtain the residuals u
  • Regress ut on the Xs and ut-1 and test whether
    the coefficient of ut-1 is significant. (See the
    sketch below.)
  • Note that the test is invalid for a unit root
    process.
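
A sketch of the regression-based test just described (the
simulated data and variable names are illustrative): regress
y on the Xs, then regress the residual ut on the Xs and ut-1
and look at the t statistic on ut-1:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    T = 250
    x = rng.normal(size=T)
    u = np.zeros(T)
    for t in range(1, T):                  # AR(1) errors, rho = 0.5
        u[t] = 0.5 * u[t - 1] + rng.normal()
    y = 1.0 + 2.0 * x + u

    # step 1: regress y on the Xs and keep the residuals
    uhat = sm.OLS(y, sm.add_constant(x)).fit().resid

    # step 2: regress u_t on the Xs and u_{t-1}; a significant
    # coefficient on u_{t-1} indicates AR(1) serial correlation
    Z = sm.add_constant(np.column_stack([x[1:], uhat[:-1]]))
    aux = sm.OLS(uhat[1:], Z).fit()
    print("coef on u_{t-1}:", round(aux.params[-1], 3),
          " t-stat:", round(aux.tvalues[-1], 2))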

16
Unit Root Process
  • Unit Root Process
  • yt = yt-1 + ut (ut is a weakly dependent process)
  • A random walk (ut is i.i.d. with mean zero) is a
    special case of the unit root process.
  • No matter how far into the future we look and how
    much information we have about the past, our best
    prediction of the future is today's value.
  • The expected value of a random walk does not
    depend on t
  • The variance of a random walk increases as a
    linear function of time (nonstationary).
  • High persistence: Corr(yt, yt+h) = [t/(t+h)]^(1/2)
    (see the sketch below)
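
A simulation sketch (the number of replications and the dates
t and h are illustrative) checking the two properties quoted
above: Var(yt) grows linearly in t, and
Corr(yt, yt+h) = [t/(t+h)]^(1/2) for a random walk started at
y0 = 0:

    import numpy as np

    rng = np.random.default_rng(6)
    reps, T = 20_000, 100
    paths = np.cumsum(rng.normal(size=(reps, T)), axis=1)  # random walks, y_0 = 0

    t, h = 40, 60
    print("Var(y_t)/t:        ", round(paths[:, t - 1].var() / t, 2))
    print("Corr(y_t, y_{t+h}):",
          round(np.corrcoef(paths[:, t - 1], paths[:, t + h - 1])[0, 1], 3))
    print("[t/(t+h)]^(1/2):   ", round(np.sqrt(t / (t + h)), 3))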

17
Trends
  • Fixed Time Trend
  • yt = a0 + a1·t + ut (linear trend)
  • ln(yt) = a0 + a1·t + ut (exponential trend)
  • Stochastic Time Trend
  • A random walk with drift: yt = a + yt-1 + ut
  • The best prediction of yt+h at time t is yt plus
    the drift a·h.
  • Ignoring the fact that two sequences y and x are
    trending (possibly due to changes in other
    variables) can lead to a spurious relationship
    between x and y (actually an omitted variable
    problem).
  • Solution: Include t in the regression of y on x
    for a fixed time trend; take the first difference
    of the equation for a stochastic time trend. (See
    the sketch below.)
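
A sketch of the spurious-regression point (two simulated,
unrelated trending series; parameters illustrative): the
slope on x looks highly "significant" until the time trend t
is included:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    T = 200
    t = np.arange(T)
    x = 0.05 * t + rng.normal(size=T)      # trending x
    y = 0.10 * t + rng.normal(size=T)      # trending y, unrelated to x

    spurious = sm.OLS(y, sm.add_constant(x)).fit()
    detrended = sm.OLS(y, sm.add_constant(np.column_stack([x, t]))).fit()

    print("t-stat on x, trend omitted: ", round(spurious.tvalues[1], 1))
    print("t-stat on x, trend included:", round(detrended.tvalues[1], 1))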

18
Seasonal Adjustment
  • Method 1: Include a set of seasonal dummy
    variables to account for seasonality in the
    dependent and independent variables. (pp. 353-4;
    see the sketch below)
  • Method 2: Use autoregressive models.
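
A sketch of Method 1 (quarterly data and the variable names
are assumptions for illustration): include quarter dummies
alongside the regressor of interest:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(8)
    T = 120                                # 30 years of quarterly data
    quarter = np.tile([1, 2, 3, 4], T // 4)
    x = rng.normal(size=T)
    y = 1.0 + 0.5 * x + 2.0 * (quarter == 4) + rng.normal(size=T)  # Q4 effect

    df = pd.DataFrame({"y": y, "x": x, "quarter": quarter})
    res = smf.ols("y ~ x + C(quarter)", data=df).fit()  # C() adds seasonal dummies
    print(res.params)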

19
Infinite Distributed Lag Models
  • yt = a + a0·xt + a1·xt-1 + a2·xt-2 + ... + ut
  • Geometric Distributed Lag
  • aj = γ·ρ^j, |ρ| < 1
  • Therefore, we can rewrite the model as
  • yt = a(1 - ρ) + γ·xt + ρ·yt-1 + ut - ρ·ut-1
  • The errors are thus correlated with yt-1, causing
    an endogeneity problem. A natural IV for yt-1 is
    xt-1. (See the sketch below.)
  • Rational Distributed Lag
  • Augment the GDL model by adding a lag of x (xt-1)
    to the independent variables.
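
A sketch of the IV idea (simulated data; a hand-rolled
just-identified IV estimator rather than a packaged routine):
xt-1 is correlated with yt-1 but not with ut - ρ·ut-1, so it
can instrument yt-1:

    import numpy as np

    rng = np.random.default_rng(9)
    T, gamma, rho = 5000, 1.0, 0.5
    x = rng.normal(size=T)
    u = rng.normal(size=T)
    y = np.zeros(T)
    for t in range(1, T):                  # rewritten GDL model
        y[t] = 0.3 + gamma * x[t] + rho * y[t - 1] + u[t] - rho * u[t - 1]

    # regressors W = [1, x_t, y_{t-1}]; instruments Z = [1, x_t, x_{t-1}]
    W = np.column_stack([np.ones(T - 2), x[2:], y[1:-1]])
    Z = np.column_stack([np.ones(T - 2), x[2:], x[1:-1]])
    b_iv = np.linalg.solve(Z.T @ W, Z.T @ y[2:])   # just-identified IV estimator
    print("IV estimates (const, gamma, rho):", np.round(b_iv, 2))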