
Applied Econometrics
Lecture 9: Autocorrelation
“It is never possible to step twice into the same river”
1) Introduction
Autocorrelation (also called serial correlation) is a violation of the assumption that the error terms are
uncorrelated, i.e., with autocorrelation E(ε_i ε_j) ≠ 0 for i ≠ j. That is, the error in period t is not
independent of previous errors.
Since we do not know the population line, we do not know the actual errors (the ε's), but we estimate
them by the residuals (e). Hence we look at the residual plot for a regression that (i) has no
autocorrelation; (ii) has positive autocorrelation; and (iii) has negative autocorrelation. Positive
autocorrelation is the common problem in economics.
2) Consequences of autocorrelation
Ordinary least squares (OLS) estimates in the presence of autocorrelation will not have the desirable
statistical properties. With positive autocorrelation the standard errors are too low (underestimated).
This adversely affects the t statistics (overestimated), so we may reject the null when it is in fact
valid. Likewise the R² and the related F statistic are likely to be overestimated.


3) Detecting autocorrelation
There are many ways to check for autocorrelation, such as (1) looking at the residual plot; (2)
observing the correlogram; (3) using the runs test; and (4) using the Durbin – Watson statistic. This
section presents the runs test and the Durbin – Watson test.

3.1) Runs test
Autocorrelation can show up in the residual plot. A non-autocorrelated error should jump around
the mean (zero) in a random manner. With positive autocorrelation (which we are most likely to get with
economic data) the error is more likely to stay above or below the mean for successive observations.
(With negative autocorrelation it will jump above and below very frequently.)
We can formalize this approach in the runs test, by counting the number of runs in the data. A run is
defined as a succession of positive or negative residuals (even a single observation counts as a run).
If there is positive autocorrelation then there will be rather fewer runs than we
should expect from a series with no autocorrelation. On the other hand, if there is negative
autocorrelation then there are more runs than with no autocorrelation.
Written by Nguyen Hoang Bao May 31, 2004
The table for the runs test gives a confidence interval: if the observed number of runs falls outside this
interval we reject the null hypothesis of no autocorrelation. If the actual number of runs is less than the
lower bound of the confidence interval then we reject in favor of positive autocorrelation. If it is
higher than the upper bound we reject in favor of negative autocorrelation. We may sometimes need to calculate the
interval ourselves.
E(R) = 2N1N2 / n + 1

s²_R = 2N1N2(2N1N2 – n) / [n²(n – 1)]

Where N1 is the number of positive residuals, N2 is the number of negative residuals, R is the total
number of runs, and n is the number of observations (so n = N1 + N2).
The confidence interval at the 5 percent level of significance is given by:

E(R) – 1.96 s_R ≤ R ≤ E(R) + 1.96 s_R
We accept the null hypothesis of no autocorrelation if the observed number of runs falls within the
confidence interval.
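The runs test above can be sketched in a few lines of code. This is a minimal sketch using numpy; the function name and interface are our own, and the 1.96 default corresponds to the 5 percent level used in the text.

```python
import numpy as np

def runs_test(residuals, z=1.96):
    """Runs test: count runs of same-signed residuals and compare the
    count with the interval E(R) +/- z*s_R implied by no autocorrelation."""
    signs = np.sign(residuals)
    signs = signs[signs != 0]                 # drop exact-zero residuals
    n1 = int(np.sum(signs > 0))               # N1: positive residuals
    n2 = int(np.sum(signs < 0))               # N2: negative residuals
    n = n1 + n2
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))  # sign changes + 1
    e_r = 2 * n1 * n2 / n + 1                        # E(R)
    var_r = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    s_r = np.sqrt(var_r)
    return runs, (e_r - z * s_r, e_r + z * s_r)
```

Too few runs (below the interval) point to positive autocorrelation; too many point to negative autocorrelation.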


3.2) Durbin – Watson test
A second (and the most common) test is the Durbin – Watson (DW) test. The DW statistic is defined as:

d = Σ_{t=2..n} (e_t – e_{t-1})² / Σ_{t=1..n} e_t²

Note that d ≈ 2(1 – ρ), with –1 ≤ ρ ≤ +1, so d will be zero with extreme positive autocorrelation, 4 with
extreme negative autocorrelation, and 2 if there is no autocorrelation.
The null hypothesis is that DW = 2, which corresponds to no autocorrelation.
The decision regions are:

0 ≤ d < dL          Reject H0: positive autocorrelation
dL ≤ d < dU         Zone of indecision
dU ≤ d ≤ 4 – dU     Accept H0: no autocorrelation
4 – dU < d ≤ 4 – dL Zone of indecision
4 – dL < d ≤ 4      Reject H0: negative autocorrelation
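The d statistic itself is straightforward to compute from the residuals. The following is a minimal numpy sketch (the function name is our own); the critical values dL and dU must still come from the DW tables.

```python
import numpy as np

def durbin_watson(e):
    """Durbin-Watson d = sum_{t=2..n}(e_t - e_{t-1})^2 / sum_{t=1..n} e_t^2."""
    e = np.asarray(e, dtype=float)
    return float(np.sum(np.diff(e) ** 2) / np.sum(e ** 2))
```

By d ≈ 2(1 – ρ), identical residuals give d = 0, while alternating residuals push d toward 4.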
Testing for autocorrelation with a lagged dependent variable
If the model contains a lagged dependent variable, d will be biased toward 2 (this bias may lead us to accept
the null when in fact autocorrelation is present). In such cases we must instead use Durbin's h:
Written by Nguyen Hoang Bao May 31, 2004
Applied Econometrics Autocorrelation
3
h = (1 – d/2) · √[ n / (1 – n·var(b1)) ]

Where var(b1) is the square of the standard error of the coefficient on the lagged dependent variable
and n is the number of observations. The test may not be used if n·var(b1) is greater than one.
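Durbin's h is a one-line calculation once d, n, and var(b1) are in hand. A minimal sketch (the function name is our own); the guard mirrors the condition that the test is unusable when n·var(b1) ≥ 1. Under the null of no autocorrelation, h is asymptotically standard normal, so |h| > 1.96 rejects at the 5 percent level.

```python
import numpy as np

def durbin_h(d, n, var_b1):
    """Durbin's h = (1 - d/2) * sqrt(n / (1 - n*var(b1)))."""
    if n * var_b1 >= 1:
        raise ValueError("Durbin's h is undefined when n*var(b1) >= 1")
    return float((1 - d / 2) * np.sqrt(n / (1 - n * var_b1)))
```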
The runs test and DW are not equivalent: they may give different answers. Also, the fact that DW
may frequently fall in the indecision zone means that some judgment is required. If DW is in the
indecision zone, but fairly close to dU, and the runs test indicates no autocorrelation, then you can
assume no autocorrelation.
4) Why do we get autocorrelation?
We test for autocorrelation on the residuals; but these are only a good proxy for the true error if the
model is correct. The presence of autocorrelation will very often indicate misspecification,
including:

- Incorrect functional form
- Omitted variable(s)¹
- Structural instability
- Influential points
- Spurious regression
Spurious regression is a very serious problem in time series data. A rule of thumb is that R² > d
indicates that a regression is spurious (if R² > d the regression is almost certainly spurious, but if
R² < d the regression may still be spurious).
A note on cross–section data: autocorrelation must be a time series problem, as we can always
remove autocorrelation from cross–section data by re–ordering the data. However, if the data are
sorted by one of the independent variables then the apparent presence of autocorrelation can still
indicate misspecification. Reordering is not usually an option in time – series data, and certainly not
so if the equation includes any lags.
5) Remedial measures
The first thing to do is to interpret autocorrelation as a symptom of misspecification and so to carry
out various specification tests (e.g. for omitted variables, structural breaks, etc.). This will nearly
always cure the problem. If the autocorrelation is genuine you can remove the autocorrelated errors
by one of the following procedures:


1 The exclusion of relevant variable(s) will bias the estimates of the coefficients of all variables included in the model (unless they
happen to be orthogonal to the excluded variable). The normal t-tests cannot tell us if the model is misspecified on account of
omitted variables, since they are calculated on the assumption that the estimated model is the correct one.
Written by Nguyen Hoang Bao May 31, 2004
Applied Econometrics Autocorrelation

4
5.1 The Cochrane – Orcutt procedure
Suppose we have the model: Y_t = β1 + β2 X_t + u_t
It is usually assumed that the u_t follow the first-order autoregressive scheme, namely,
u_t = ρ u_{t-1} + ε_t
Cochrane and Orcutt (1949) then recommend the following steps to estimate ρ:

1 Estimate the two-variable model and calculate the residuals, e_t.
2 Run the following regression: e_t = ρ e_{t-1} + v_t
3 Using the estimated ρ, run the generalized difference equation:
(Y_t – ρY_{t-1}) = β1(1 – ρ) + β2(X_t – ρX_{t-1}) + (u_t – ρu_{t-1})
that is, Y*_t = β1* + β2* X*_t + e*_t
4 Calculate the new residuals: e**_t = Y*_t – β1* – β2* X*_t
5 Estimate the regression: e**_t = ρ e**_{t-1} + w_t

This second-round estimate of ρ may not be the best estimate. We can go on to a third-round
estimate and so on. We may stop iterating when the successive estimates of ρ differ by a very
small amount, say 0.01 to 0.005.
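The steps above can be sketched as follows. This is a minimal numpy implementation of the iteration for the two-variable model; the function names, tolerance default, and interface are our own.

```python
import numpy as np

def ols(y, X):
    """Least-squares coefficients for y = X b + e."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def cochrane_orcutt(y, x, tol=0.005, max_iter=20):
    """Iterated Cochrane-Orcutt for Y_t = b1 + b2 X_t + u_t with
    u_t = rho u_{t-1} + eps_t; stops when rho changes by less than tol."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    b = ols(y, np.column_stack([np.ones(n), x]))   # step 1: OLS
    rho = 0.0
    for _ in range(max_iter):
        e = y - b[0] - b[1] * x                    # residuals (steps 1, 4)
        rho_new = float(ols(e[1:], e[:-1].reshape(-1, 1))[0])  # steps 2, 5
        y_star = y[1:] - rho_new * y[:-1]          # step 3: generalized
        x_star = x[1:] - rho_new * x[:-1]          # differences
        bs = ols(y_star, np.column_stack([np.ones(n - 1), x_star]))
        b = np.array([bs[0] / (1 - rho_new), bs[1]])  # undo b1* = b1(1 - rho)
        if abs(rho_new - rho) < tol:
            rho = rho_new
            break
        rho = rho_new
    return b, rho
```

On simulated data with AR(1) errors, the iteration recovers both the slope and ρ; in practice you would feed it your own residual-producing regression.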

5.2 The Durbin procedure
Durbin (1960) suggested an alternative method of estimating ρ. The generalized difference
equation can be written as:
Y_t = β1(1 – ρ) + β2 X_t – ρβ2 X_{t-1} + ρY_{t-1} + e_t
so that the estimated coefficient on Y_{t-1} provides an estimate of ρ. Once an estimate of ρ is
obtained, we regress the transformed variable Y* on X* as in:
Y*_t = β1* + β2* X*_t + e*_t
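Durbin's two-step procedure can be sketched in the same style. A minimal numpy sketch for the two-variable model; the function name and interface are our own.

```python
import numpy as np

def durbin_two_step(y, x):
    """Durbin (1960) two-step estimator for Y_t = b1 + b2 X_t + u_t,
    u_t = rho u_{t-1} + eps_t.

    Step 1: regress Y_t on (1, X_t, X_{t-1}, Y_{t-1}); the coefficient
    on Y_{t-1} estimates rho.
    Step 2: rerun OLS on the rho-differenced data."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    X1 = np.column_stack([np.ones(len(y) - 1), x[1:], x[:-1], y[:-1]])
    c, *_ = np.linalg.lstsq(X1, y[1:], rcond=None)
    rho = float(c[3])               # coefficient on lagged Y
    y_star = y[1:] - rho * y[:-1]   # generalized differences
    x_star = x[1:] - rho * x[:-1]
    X2 = np.column_stack([np.ones(len(y_star)), x_star])
    b_star, *_ = np.linalg.lstsq(X2, y_star, rcond=None)
    b1 = float(b_star[0] / (1 - rho))   # undo b1* = b1 (1 - rho)
    return b1, float(b_star[1]), rho
```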


5.3 The Theil – Nagar procedure
Theil and Nagar (1961) estimate ρ based on the d statistic (in small samples):

ρ = [N²(1 – d/2) + k²] / (N² – k²)

Where N is the total number of observations, d is DW, k is the number of coefficients including the
intercept.


5.4 The Hildreth – Lu procedure
From the first-order autoregressive scheme u_t = ρ u_{t-1} + ε_t, Hildreth and Lu (1960) recommend
selecting values of ρ between –1 and +1 at intervals of 0.1, transforming the data by the generalized
difference equation for each value, and obtaining the associated RSS. Hildreth and Lu suggest choosing
the ρ which minimizes the RSS.
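The grid search can be sketched directly. A minimal numpy sketch for the two-variable model; the function name, return values, and grid endpoints are our own.

```python
import numpy as np

def hildreth_lu(y, x):
    """Grid-search rho over (-1, 1) at 0.1 steps; for each rho run OLS on
    the generalized differences and keep the rho with the smallest RSS."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    best_rho, best_rss, best_b = None, np.inf, None
    for rho in np.arange(-0.9, 0.95, 0.1):     # -0.9, -0.8, ..., 0.9
        y_star = y[1:] - rho * y[:-1]          # generalized differences
        x_star = x[1:] - rho * x[:-1]
        X = np.column_stack([np.ones(len(y_star)), x_star])
        b, *_ = np.linalg.lstsq(X, y_star, rcond=None)
        rss = float(np.sum((y_star - X @ b) ** 2))
        if rss < best_rss:
            best_rho, best_rss, best_b = round(float(rho), 1), rss, b
    return best_rho, best_rss, best_b
```

A finer grid (e.g. 0.01 steps around the winning value) refines the estimate at the cost of more regressions.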

The differencing procedure loses one observation. To avoid this, the first observations on Y and X are
transformed as follows: Y_1(1 – ρ²)^0.5 and X_1(1 – ρ²)^0.5 (Prais – Winsten, 1971).
5.5 Detrending by including a time trend as one of the regressors
Y_t = β1 + β2 X_t + β3 t + u_t
The first-difference transformation of this is:
ΔY_t = β2 ΔX_t + β3 + ε_t
There is an intercept term in the first-difference form. It signifies that there was a linear trend term in
the original model. If β3 > 0, there is an upward trend after removing the influence of the variable X.
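The first-difference transformation can be verified numerically. The following sketch simulates a trended model and shows that the trend coefficient reappears as the intercept of the differenced regression; the coefficients and noise level are invented for illustration.

```python
import numpy as np

# Simulated data for Y_t = b1 + b2*X_t + b3*t + u_t  (trend b3 = 0.5)
rng = np.random.default_rng(1)
n = 200
t = np.arange(n)
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + 0.5 * t + rng.normal(scale=0.1, size=n)

# First differences: dY_t = b2*dX_t + b3 + du_t -- the trend coefficient
# b3 becomes the intercept of the differenced regression.
dy, dx = np.diff(y), np.diff(x)
X = np.column_stack([np.ones(n - 1), dx])
b, *_ = np.linalg.lstsq(X, dy, rcond=None)
# b[0] estimates the trend b3, b[1] estimates the slope b2
```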
We emphasize that these techniques are only to be used if you are sure that there is no problem of
misspecification.

6) An example
Regression of crop output on the price index and fertilizer input (Table 6.1) was found to be badly
autocorrelated: the DW statistic was 0.96 against a critical value d_L of 1.28. We found that the
autocorrelation arose from a problem of omitted variable bias. But for illustrative purposes we shall
see how the autocorrelation may be removed using the Cochrane – Orcutt correction. To do this we
carry out the following steps:

(1) The estimated equation with OLS gives DW = 0.958; thus ρ = 1 – d/2 = 0.521.

(2) Calculate Q*_t = Q_t – 0.521 Q_{t-1}, and similarly for P* and F*, for observations 1962 to 1990.
The results are shown in Table 6.1.


(3) Apply the Prais – Winsten transformation to get Q*_1961 = (1 – 0.521²)^1/2 · Q_1961, and similarly for
the 1961 values of P* and F*. (Although we do have 1960 values for P and F, though not Q, and so
could apply the Cochrane – Orcutt procedure to the 1960 observations, the fact that we use the
Prais – Winsten transformation for one variable means that we must also use it for the others.)
The resulting values are shown in Table 6.1.
