Univariate Time Series
1
Univariate Time Series
  • Stationary and Nonstationary Series

2
Strictly Stationary Processes
  • Issues relating to the stationarity of a time
    series are important because they help us
    understand the behavior and properties of a
    series.
  • A series that is not stationary has some
    undesirable properties that make hypothesis
    testing using standard techniques incorrect.
  • A series is strictly stationary if the
    distribution of its values remains the same as
    time progresses: the probability that y falls
    within a particular interval is the same now as
    at any time in the past or future.

3
A Weakly Stationary Series
  • If a series satisfies the following conditions
    for t = 1, 2, 3, ..., ∞ it is said to be weakly or
    covariance stationary:
  • (1) E(yt) = μ: the series has constant mean
  • (2) var(yt) = σ² < ∞: the series has constant
    variance
  • (3) cov(yt, yt−s) = γs, a function of s only: the
    series has constant autocovariance at each lag

4
Autocovariance
  • Autocovariance determines how y is related to its
    previous values. For a (strictly or weakly)
    stationary series the autocovariances depend only
    on the difference between t1 and t2, so that the
    covariance between yt and yt−1 is the same as the
    covariance between yt−10 and yt−11.
  • The moment E[(yt − E(yt))(yt−s − E(yt−s))] = γs
    is the autocovariance function. When s = 0 the
    autocovariance at lag zero is obtained (the
    covariance of yt with itself, i.e. its variance).

5
Autocorrelation
  • Recall that the covariance is not that useful a
    measure because it depends on the units of
    measurement of y.
  • Autocorrelations are autocovariances normalized by
    dividing by the variance: τs = γs / γ0.
  • The series τs has the standard properties of
    correlation coefficients: it is bounded by ±1.
  • If we plot τs against s we get the
    autocorrelation function (acf) or correlogram.
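As an aside, the definitions above translate directly into code. Below is a minimal pure-Python sketch (the function and variable names are mine, not from the slides), applied to a simulated AR(1) series whose true lag-1 autocorrelation is 0.8, so the estimated τ1 should land near that value and decay thereafter:

```python
import random

def acf(y, max_lag):
    """Sample autocorrelations tau_s = gamma_s / gamma_0 for s = 1..max_lag."""
    T = len(y)
    mean = sum(y) / T
    # autocovariance at lag s, with the common 1/T normalization
    def gamma(s):
        return sum((y[t] - mean) * (y[t - s] - mean) for t in range(s, T)) / T
    g0 = gamma(0)  # lag-zero autocovariance = sample variance
    return [gamma(s) / g0 for s in range(1, max_lag + 1)]

random.seed(1)
# AR(1) with coefficient 0.8: autocorrelations decay roughly geometrically
y = [0.0]
for _ in range(4999):
    y.append(0.8 * y[-1] + random.gauss(0, 1))
taus = acf(y, 5)
```

Plotting `taus` against the lag index would reproduce the correlogram idea behind Stata's `ac`.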

6
Autocorrelation of tbill (. ac tbill)
7
Autocorrelation of grow (. ac grow)
8
A White Noise Process
  • Definition: a white noise process is one with no
    discernible structure. Formally:
  • E(yt) = μ, var(yt) = σ², and γs = 0 for all s ≠ 0
  • That is, a white noise process has constant mean
    and variance and zero autocovariances (except at
    lag zero).

9
Generating White Noise in STATA
  • set obs 100
  • gen obs = _n
  • tsset obs
  • gen e = invnorm(uniform())
  • tsline e
  • ac e
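For readers without Stata, the same simulation can be mirrored in plain Python (an illustration, not the slide's actual code; `random.gauss` plays the role of `invnorm(uniform())`):

```python
import random

random.seed(42)
T = 100
# e ~ N(0, 1): the analogue of Stata's  gen e = invnorm(uniform())
e = [random.gauss(0, 1) for _ in range(T)]

# White noise should have sample mean near 0 and sample variance near 1
mean = sum(e) / T
var = sum((x - mean) ** 2 for x in e) / T
```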

10
Testing Autocorrelations
  • If yt is generated by a white noise process, the
    sample autocorrelation coefficients are
    approximately normally distributed with mean zero
    and variance 1/T.
  • This lets us construct confidence intervals for
    autocorrelations (see the ac graphs above).
  • It is also possible to test the joint hypothesis
    that all m of the τk correlation coefficients
    are simultaneously equal to zero using the Q
    statistic developed by Box and Pierce (1970).

11
  • BP Q-statistic: Q = T Σ(k=1 to m) τ̂k²
  • The correlation coefficients are squared so that
    positive and negative coefficients do not
    cancel each other out. The Q-statistic is
    asymptotically distributed as χ² with df equal to
    the number of squared terms in the sum (m).
  • However, the Box-Pierce test has poor small-sample
    properties, leading to incorrect inferences in
    small samples.

12
Ljung-Box (1978) Q
  • Ljung-Box Q statistic:
    Q* = T(T + 2) Σ(k=1 to m) τ̂k² / (T − k)
  • Asymptotically the (T + 2) and (T − k) terms
    cancel each other out, so the LB formulation is
    equivalent to the BP test.
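The Ljung-Box formula is simple enough to sketch directly in Python (a toy implementation with names of my own choosing, not Stata's `wntestq`). Applied to white noise it should yield a value that looks like an ordinary χ²(m) draw; applied to a strongly autocorrelated series it should land far out in the tail:

```python
import random

def ljung_box(y, m):
    """Ljung-Box Q* = T(T+2) * sum_{k=1}^{m} tau_k^2 / (T - k)."""
    T = len(y)
    mean = sum(y) / T
    g0 = sum((x - mean) ** 2 for x in y) / T
    q = 0.0
    for k in range(1, m + 1):
        # sample autocorrelation at lag k
        tau_k = sum((y[t] - mean) * (y[t - k] - mean) for t in range(k, T)) / T / g0
        q += tau_k ** 2 / (T - k)
    return T * (T + 2) * q

random.seed(0)
e = [random.gauss(0, 1) for _ in range(500)]
q_wn = ljung_box(e, 10)   # white noise: roughly a chi2(10) draw

# a random walk built from the same shocks is highly autocorrelated
rw, total = [], 0.0
for shock in e:
    total += shock
    rw.append(total)
q_rw = ljung_box(rw, 10)  # far beyond any chi2(10) critical value
```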

13
Q Statistics in STATA
  • . corrgram grow, lag(20)     /* default is 40 lags */
  • LAG       AC       PAC        Q     Prob>Q
  • ------------------------------------------
  •   1    0.8342   0.8346   146.84   0.0000
  •   2    0.5408  -0.5108   208.84   0.0000
  •   3    0.2059  -0.2212   217.88   0.0000
  •   4   -0.0894  -0.0723   219.59   0.0000
  •   5   -0.2101   0.3722   229.09   0.0000
  •   6   -0.2225  -0.1546   239.79   0.0000
  •   7   -0.1648  -0.0888   245.69   0.0000
  •   8   -0.0756  -0.0107   246.94   0.0000
  •   9   -0.0098   0.1056   246.96   0.0000
  •  10    0.0004  -0.2159   246.96   0.0000
  •  11   -0.0420  -0.1205   247.36   0.0000
  •  12   -0.1234  -0.0698   250.75   0.0000
  •  13   -0.1668   0.3070   256.98   0.0000
  •  14   -0.1600  -0.1141   262.75   0.0000

14
  • . wntestq grow, lag(20)
  • Portmanteau test for white noise
  • ---------------------------------------
  • Portmanteau (Q) statistic = 246.9643
  • Prob > chi2(10)           =   0.0000

15
  • . corrgram e, lag(20)     /* simulation example */
  • LAG       AC       PAC        Q     Prob>Q
  • ------------------------------------------
  •   1    0.0737   0.0746   .56021   0.4542
  •   2    0.0512   0.0467   .83278   0.6594
  •   3   -0.0023  -0.0091   .83331   0.8415
  •   4   -0.0283  -0.0312   .91835   0.9219
  •   5    0.0833   0.0954    1.664   0.8934
  •   6    0.0347   0.0277   1.7949   0.9376
  •   7    0.0050  -0.0143   1.7977   0.9702
  •   8   -0.0959  -0.1036   2.8165   0.9453
  •   9   -0.0205   0.0087   2.8635   0.9695
  •  10   -0.0627  -0.0533   3.3091   0.9732
  •  11   -0.1578  -0.1807   6.1612   0.8624
  •  12   -0.1765  -0.1834   9.7712   0.6360
  •  13   -0.1144  -0.0809   11.307   0.5851
  •  14   -0.0179  -0.0031   11.345   0.6588

16
Stationarity and unit root testing
  • Focus on tests for weak stationarity. Recall
    that weak stationarity requires constant mean,
    constant variance and constant autocovariance at
    each lag.
  • Why is stationarity important?
  • The stationarity of a series can influence its
    behavior. For example, we use the word "shock"
    to denote an unexpected change in the value of a
    variable (or error). For a stationary series a
    shock will gradually die away. That is, the
    effect of a shock during time t will have a
    smaller effect in time t+1, a smaller effect in
    time t+2, etc.
  • A shock to a non-stationary series will persist
    indefinitely: its effect never dies away.

17
  • The use of non-stationary data can lead to
    spurious regressions. If two stationary series
    are generated as independent random series, then
    when we regress one on the other we expect the
    t-statistic on the slope to be insignificant and
    the R² to be close to zero.
  • This may seem obvious, but if two series are
    trending over time then a regression of one on
    the other can have a high R² and a significant
    t-statistic even if the two series are totally
    unrelated.

18
Two Types of Non-Stationarity
  • Random Walk with a Drift
  • Trend-stationary process (stationary around a
    trend)

19
  • Note that the random walk model can be
    generalized to the case where yt is an explosive
    process: yt = μ + φyt−1 + ut with φ > 1.
  • In practice this case is often ignored because
    there are few series where this condition is
    satisfied. The two more relevant cases are
    φ = 1 and φ < 1.

20
Random Walk
  • yt = yt−1 + ut
  • yt = 0.95yt−1 + ut

21
Random Walk
  • yt = 0.5yt−1 + ut
  • yt = 0.1yt−1 + ut

22
Explosive Case
  • yt = 1.1yt−1 + ut
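The stationary, random-walk, and explosive cases on the last three slides can be simulated in a few lines of Python (a sketch; the function name and seeds are mine). Using the same shocks for every value of the coefficient makes the contrast easy to see:

```python
import random

def simulate_ar1(phi, T, seed):
    """y_t = phi * y_{t-1} + u_t  with  u_t ~ N(0, 1)."""
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(T - 1):
        y.append(phi * y[-1] + rng.gauss(0, 1))
    return y

stationary = simulate_ar1(0.5, 200, seed=3)  # shocks die away quickly
walk = simulate_ar1(1.0, 200, seed=3)        # unit root: shocks persist
explosive = simulate_ar1(1.1, 200, seed=3)   # |y_t| grows without bound
```

Plotting the three series (Stata's `tsline`) would show the stationary series hovering around zero, the random walk wandering, and the explosive series running off the chart.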

23
Trend-Stationary Case
  • Recall the trend-stationary model
    yt = α + βt + ut.
  • We could run a regression on this model and
    obtain the residuals, which would have the linear
    trend removed.

24
  • set obs 100
  • gen t = _n
  • gen e = invnorm(uniform())
  • tsset t
  • gen y = 2*t + e
  • tsline y
  • reg y t
  • predict r, resid
  • tsline r
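The same detrending exercise can be sketched in plain Python using the closed-form OLS slope and intercept (names are mine; I assume a trend-stationary series y = 2t + e, matching the slide's simulation). The estimated slope should be near 2, and the residuals, which carry no trend, sum to zero by construction:

```python
import random

random.seed(7)
T = 100
t = list(range(1, T + 1))
e = [random.gauss(0, 1) for _ in range(T)]
# trend-stationary series: y = 2*t + e
y = [2 * ti + ei for ti, ei in zip(t, e)]

# OLS of y on a linear trend (the analogue of  reg y t )
tbar = sum(t) / T
ybar = sum(y) / T
beta = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
        / sum((ti - tbar) ** 2 for ti in t))
alpha = ybar - beta * tbar
# residuals = detrended series (the analogue of  predict r, resid )
resid = [yi - (alpha + beta * ti) for ti, yi in zip(t, y)]
```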

25
Differencing
  • If a non-stationary series yt must be
    differenced d times before it becomes stationary,
    then it is said to be integrated of order d.
  • This is written yt ~ I(d).
  • If yt ~ I(d) then Δ^d yt ~ I(0). This means that
    applying the difference operator (Δ) d times
    leads to an I(0) process, a process with no unit
    roots.
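Differencing is mechanical enough to sketch in a few lines of Python (illustrative names, not from the slides). A random walk is I(1), so differencing it once should recover exactly the white-noise shocks that built it:

```python
import random

def difference(y, d=1):
    """Apply the first-difference operator d times: returns (1 - L)^d y."""
    for _ in range(d):
        y = [y[t] - y[t - 1] for t in range(1, len(y))]
    return y

random.seed(5)
u = [random.gauss(0, 1) for _ in range(50)]   # white-noise shocks
y = [0.0]
for shock in u:                               # random walk: y_t = y_{t-1} + u_t
    y.append(y[-1] + shock)
dy = difference(y)                            # should equal u (up to rounding)
```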

26
Testing for Unit Roots
  • One approach would be to examine the
    autocorrelation function of the series of
    interest. This would be incorrect, however,
    because the ACF for a unit root process (a random
    walk) decays only slowly towards zero, so it may
    be mistaken for that of a highly persistent but
    stationary process.
  • Thus ACFs cannot be used to test for unit roots.

27
Dickey Fuller Test
  • The basic idea is to test φ = 1 in
  • yt = φyt−1 + ut
  • against the one-sided alternative φ < 1. The
    hypotheses of interest are
  • H0: series contains a unit root
  • HA: series is stationary.
  • In practice we estimate
  • Δyt = ψyt−1 + ut, where ψ = φ − 1,
  • so that a test of φ = 1 is equivalent to a test
    of ψ = 0.

28
Dickey Fuller Tests in Stata
  • . dfuller grow
  • Dickey-Fuller test for unit root          Number of obs = 207
  •               ---------- Interpolated Dickey-Fuller ---------
  •          Test        1% Critical   5% Critical   10% Critical
  •       Statistic         Value         Value          Value
  • --------------------------------------------------------------
  • Z(t)    -4.296          -3.474        -2.883         -2.573
  • --------------------------------------------------------------
  • MacKinnon approximate p-value for Z(t) = 0.0005
  • . dfuller tbill
  • Dickey-Fuller test for unit root          Number of obs = 207
  •               ---------- Interpolated Dickey-Fuller ---------
  •          Test        1% Critical   5% Critical   10% Critical
  •       Statistic         Value         Value          Value

29
Phillips-Perron Tests
  • Just like Dickey-Fuller tests but they include an
    automatic correction to the DF procedure to allow
    for autocorrelated residuals. The tests often
    give similar results.
  • . pperron tbill
  • Phillips-Perron test for unit root        Number of obs   = 207
  •                                           Newey-West lags = 4
  •               ---------- Interpolated Dickey-Fuller ---------
  •          Test        1% Critical   5% Critical   10% Critical
  •       Statistic         Value         Value          Value
  • --------------------------------------------------------------
  • Z(rho)  -8.741         -20.157       -13.914        -11.143
  • Z(t)    -2.227          -3.474        -2.883         -2.573
  • --------------------------------------------------------------
  • MacKinnon approximate p-value for Z(t) = 0.1965

30
Criticism of DF and PP Tests
  • The most important criticism is that their power
    is low if a process is stationary but with a root
    close to the non-stationary boundary (for
    example, φ = 0.95).
  • An alternative is the KPSS test, which reverses
    the hypotheses: stationarity is the null.

31
kpss is a user-written Stata command
  • . kpss grow
  • KPSS test for grow
  • Maxlag = 14 chosen by Schwert criterion
  • Autocovariances weighted by Bartlett kernel
  • Critical values for H0: grow is trend stationary
  • 10%: 0.119   5%: 0.146   2.5%: 0.176   1%: 0.216
  • Lag order Test statistic
  • 0 .113
  • 1 .0619
  • 2 .046
  • 3 .0393
  • 4 .0367
  • 5 .036
  • 6 .0363

32
Next Topics
  • Using time series in regression
  • Instrumental variables
  • Vector Autoregression
  • Causality