1. Review: Spectral density estimation, sample autocovariance.
2. The periodogram and sample autocovariance.
• We have seen that the spectral density gives an alternative view of
stationary time series.
• Given a realization x1, . . . , xn of a time series, how can we estimate
the spectral density?
• One approach: replace γ(·) in the definition

      f(ν) = Σ_{h=−∞}^{∞} γ(h) e^{−2πiνh}

  with the sample autocovariance γ̂(·).
• These two approaches are identical at the Fourier frequencies ν = k/n.
• The asymptotic expectation of the periodogram I(ν) is f(ν). We can
derive some asymptotic properties, and hence do hypothesis testing.
If a time series {X_t} has autocovariance γ satisfying Σ_{h=−∞}^{∞} |γ(h)| < ∞,
then we define its spectral density as

      f(ν) = Σ_{h=−∞}^{∞} γ(h) e^{−2πiνh}.
Idea: use the sample autocovariance γ̂(·), defined by

      γ̂(h) = (1/n) Σ_{t=1}^{n−|h|} (x_{t+|h|} − x̄)(x_t − x̄),   for −n < h < n,

as an estimate of the autocovariance γ(·), and then use

      f̂(ν) = Σ_{h=−n+1}^{n−1} γ̂(h) e^{−2πiνh}.
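As a concrete illustration, this plug-in estimator can be sketched in a few lines of numpy (the helper names `sample_acvf` and `spectral_estimate` are ours, not from the notes):

```python
import numpy as np

def sample_acvf(x):
    """Sample autocovariance gamma_hat(h), h = 0, ..., n-1 (divisor n, mean removed)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    return np.array([np.sum(xc[h:] * xc[:n - h]) / n for h in range(n)])

def spectral_estimate(x, nu):
    """Plug-in estimate f_hat(nu) = sum_{h=-n+1}^{n-1} gamma_hat(h) e^{-2 pi i nu h}."""
    g = sample_acvf(x)
    n = len(x)
    h = np.arange(-(n - 1), n)
    gh = np.concatenate([g[:0:-1], g])  # use gamma_hat(-h) = gamma_hat(h)
    return np.sum(gh * np.exp(-2j * np.pi * nu * h)).real

x = np.array([1.0, 2.0, 3.0, 4.0])
print(sample_acvf(x)[0])          # equals the sample variance (1/n) sum (x_t - xbar)^2
print(spectral_estimate(x, 0.25))
```

The imaginary parts cancel because γ̂ is symmetric, so taking the real part only discards rounding noise.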
For a sequence (x_1, . . . , x_n), define the discrete Fourier transform (DFT) as
(X(ν_0), X(ν_1), . . . , X(ν_{n−1})), where

      X(ν_k) = (1/√n) Σ_{t=1}^{n} x_t e^{−2πiν_k t},

and ν_k = k/n (for k = 0, 1, . . . , n − 1) are called the Fourier frequencies.
(Think of {ν_k : k = 0, . . . , n − 1} as the discrete version of the frequency
range ν ∈ [0, 1].)
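For intuition, this DFT can be computed directly from the definition. Note that the sum here starts at t = 1, so it differs from `np.fft.fft` (which sums from t = 0) only by a unit-modulus phase factor (a sketch, assuming numpy):

```python
import numpy as np

def dft(x):
    """X(nu_k) = n^{-1/2} sum_{t=1}^{n} x_t e^{-2 pi i nu_k t}, with nu_k = k/n."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = np.arange(1, n + 1)
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, t) / n) @ x / np.sqrt(n)

x = np.random.default_rng(0).normal(size=8)
n = len(x)
k = np.arange(n)
# np.fft.fft sums from t = 0, so it differs by the factor e^{-2 pi i k / n}
via_fft = np.exp(-2j * np.pi * k / n) * np.fft.fft(x) / np.sqrt(n)
assert np.allclose(dft(x), via_fft)
```

The phase factor has no effect on |X(ν_k)|², which is what the periodogram below uses.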
Consider the space C^n of vectors of n complex numbers, with inner product
⟨a, b⟩ = a*b, where a* is the complex conjugate transpose of the vector a ∈ C^n.
Suppose that a set {φ_j : j = 0, 1, . . . , n − 1} of n vectors in C^n is
orthonormal:

      ⟨φ_j, φ_k⟩ = 1 if j = k,  0 otherwise.

Then these {φ_j} span the vector space C^n, and so for any vector x, we can
write x in terms of this new orthonormal basis:

      x = Σ_{j=0}^{n−1} ⟨φ_j, x⟩ φ_j.
Consider the following set of n vectors in C^n:

      e_j = (1/√n) (e^{2πiν_j}, e^{2πi2ν_j}, . . . , e^{2πinν_j})′,   j = 0, . . . , n − 1.
It is easy to check that these vectors are orthonormal:

      ⟨e_j, e_k⟩ = (1/n) Σ_{t=1}^{n} e^{2πit(ν_k−ν_j)} = (1/n) Σ_{t=1}^{n} e^{2πi(k−j)t/n}

                 = 1 if j = k, and otherwise

                 = (1/n) e^{2πi(k−j)/n} (1 − (e^{2πi(k−j)/n})^n) / (1 − e^{2πi(k−j)/n}) = 0,

since (e^{2πi(k−j)/n})^n = e^{2πi(k−j)} = 1 for integers j ≠ k. Here we have used
the fact that S_n = Σ_{t=1}^{n} α^t satisfies αS_n = S_n + α^{n+1} − α, and so
S_n = α(1 − α^n)/(1 − α) for α ≠ 1.
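The orthonormality claim is easy to verify numerically (a quick numpy check; the matrix names `E` and `G` are ours):

```python
import numpy as np

n = 6
t = np.arange(1, n + 1)
# Column j is e_j = n^{-1/2} (e^{2 pi i nu_j}, e^{2 pi i 2 nu_j}, ..., e^{2 pi i n nu_j})'
E = np.exp(2j * np.pi * np.outer(t, np.arange(n)) / n) / np.sqrt(n)
G = E.conj().T @ E  # Gram matrix: G[j, k] = <e_j, e_k>
assert np.allclose(G, np.eye(n))
```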
So we can represent the real vector x = (x_1, . . . , x_n)′ ∈ C^n in terms of this
orthonormal basis:

      x = Σ_{j=0}^{n−1} ⟨e_j, x⟩ e_j = Σ_{j=0}^{n−1} X(ν_j) e_j.
That is, the vector of discrete Fourier transform coefficients
(X(ν_0), . . . , X(ν_{n−1})) gives the coordinates of x with respect to the
orthonormal basis {e_j}.
An alternative way to represent the DFT is by separately considering the
real and imaginary parts,

      X(ν_j) = ⟨e_j, x⟩ = (1/√n) Σ_{t=1}^{n} e^{−2πitν_j} x_t
             = (1/√n) Σ_{t=1}^{n} cos(2πtν_j) x_t − i (1/√n) Σ_{t=1}^{n} sin(2πtν_j) x_t
             = X_c(ν_j) − i X_s(ν_j).
The periodogram is defined as

      I(ν_j) = |X(ν_j)|² = (1/n) |Σ_{t=1}^{n} e^{−2πitν_j} x_t|² = X_c²(ν_j) + X_s²(ν_j),

where

      X_c(ν_j) = (1/√n) Σ_{t=1}^{n} cos(2πtν_j) x_t,
      X_s(ν_j) = (1/√n) Σ_{t=1}^{n} sin(2πtν_j) x_t.
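A small numpy sketch confirming that the cosine/sine form of the periodogram matches the squared-modulus form (the function name `periodogram` is ours):

```python
import numpy as np

def periodogram(x):
    """I(nu_j) = X_c^2(nu_j) + X_s^2(nu_j) at the Fourier frequencies nu_j = j/n."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = np.arange(1, n + 1)
    nus = np.arange(n) / n
    Xc = np.cos(2 * np.pi * np.outer(nus, t)) @ x / np.sqrt(n)
    Xs = np.sin(2 * np.pi * np.outer(nus, t)) @ x / np.sqrt(n)
    return Xc**2 + Xs**2

x = np.random.default_rng(1).normal(size=16)
# The squared-modulus form |X(nu_j)|^2, computed via the FFT, must agree
assert np.allclose(periodogram(x), np.abs(np.fft.fft(x))**2 / len(x))
```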
Since I(ν_j) = |X(ν_j)|² for each of the Fourier frequencies ν_j = j/n (for
j = 0, 1, . . . , n − 1), the orthonormality of the e_j implies that we can write

      x*x = (Σ_{j=0}^{n−1} X(ν_j) e_j)* (Σ_{j=0}^{n−1} X(ν_j) e_j)
          = Σ_{j=0}^{n−1} |X(ν_j)|² = Σ_{j=0}^{n−1} I(ν_j).
For x̄ = 0, we can write this as

      σ̂_x² = (1/n) Σ_{t=1}^{n} x_t² = (1/n) Σ_{j=0}^{n−1} I(ν_j).
This is the discrete analog of the identity

      σ_x² = γ_x(0) = ∫_{−1/2}^{1/2} f_x(ν) dν.

(Think of I(ν_j) as the discrete version of f(ν) at the frequency ν_j = j/n,
and think of (1/n) Σ_{ν_j} · as the discrete version of ∫ · dν.)
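This sum-of-periodogram-ordinates identity is easy to check numerically (a numpy sketch; we demean x so that the x̄ = 0 assumption holds):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=32)
x = x - x.mean()                  # enforce xbar = 0
n = len(x)
I = np.abs(np.fft.fft(x))**2 / n  # periodogram ordinates I(nu_j), j = 0, ..., n-1
# sigma_hat^2 = (1/n) sum_t x_t^2 equals (1/n) sum_j I(nu_j)
assert np.isclose(np.mean(x**2), np.mean(I))
```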
Why is the periodogram at a Fourier frequency (that is, ν = ν_j) the same as
computing f(ν) from the sample autocovariance?
Almost the same—they are not the same at ν_0 = 0 when x̄ ≠ 0.
But if either x̄ = 0, or we consider a Fourier frequency ν_j ≠ 0, then

      I(ν_j) = (1/n) |Σ_{t=1}^{n} e^{−2πitν_j} x_t|²
             = (1/n) |Σ_{t=1}^{n} e^{−2πitν_j} (x_t − x̄)|²
             = (1/n) (Σ_{t=1}^{n} e^{−2πitν_j} (x_t − x̄)) (Σ_{t=1}^{n} e^{2πitν_j} (x_t − x̄))
             = (1/n) Σ_{s,t} e^{−2πi(s−t)ν_j} (x_s − x̄)(x_t − x̄)
             = Σ_{h=−n+1}^{n−1} γ̂(h) e^{−2πihν_j},

where the second equality holds because ν_j ≠ 0 implies Σ_{t=1}^{n} e^{−2πitν_j} = 0
(we showed this when checking the orthonormality of the e_j).
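The equivalence just derived can be verified numerically at every nonzero Fourier frequency (a numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=12)
n = len(x)
xc = x - x.mean()
h = np.arange(-(n - 1), n)
# gamma_hat(h) for h = -(n-1), ..., n-1, using symmetry in |h|
gamma_hat = np.array([np.sum(xc[abs(k):] * xc[:n - abs(k)]) / n for k in h])
t = np.arange(1, n + 1)
for j in range(1, n):  # skip nu_0 = 0, where the two estimates can differ
    nu = j / n
    I = np.abs(np.sum(x * np.exp(-2j * np.pi * nu * t)))**2 / n
    f_hat = np.sum(gamma_hat * np.exp(-2j * np.pi * nu * h)).real
    assert np.isclose(I, f_hat)
```

Note that the periodogram here is computed from the raw x, not the demeaned series; for ν_j ≠ 0 the x̄ term sums to zero, exactly as in the derivation.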
We want to understand the asymptotic behavior of the periodogram I(ν) at
a particular frequency ν, as n increases. We’ll see that its expectation
converges to f(ν).
We’ll start with a simple example. Suppose that X_1, . . . , X_n are
i.i.d. N(0, σ²) (Gaussian white noise). From the definitions

      X_c(ν_j) = (1/√n) Σ_{t=1}^{n} cos(2πtν_j) x_t,
      X_s(ν_j) = (1/√n) Σ_{t=1}^{n} sin(2πtν_j) x_t,

we have that X_c(ν_j) and X_s(ν_j) are normal, with mean zero. Also,

      Var(X_c(ν_j)) = (σ²/n) Σ_{t=1}^{n} cos²(2πtν_j)
                    = (σ²/2n) Σ_{t=1}^{n} (cos(4πtν_j) + 1) = σ²/2,

for ν_j ≠ 0, 1/2, so that the cosine sum vanishes.
Also,

      Cov(X_c(ν_j), X_s(ν_j)) = (σ²/n) Σ_{t=1}^{n} cos(2πtν_j) sin(2πtν_j)
                              = (σ²/2n) Σ_{t=1}^{n} sin(4πtν_j) = 0,

and, for j ≠ k,

      Cov(X_c(ν_j), X_c(ν_k)) = 0,
      Cov(X_s(ν_j), X_s(ν_k)) = 0,
      Cov(X_c(ν_j), X_s(ν_k)) = 0.
That is, if X_1, . . . , X_n are i.i.d. N(0, σ²) (Gaussian white noise; f(ν) = σ²),
then the X_c(ν_j) and X_s(ν_j) are all i.i.d. N(0, σ²/2). Thus,

      (2/σ²) I(ν_j) = (2/σ²) (X_c²(ν_j) + X_s²(ν_j)) ∼ χ²₂.
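A quick simulation illustrating this χ²₂ distribution for Gaussian white noise (a sketch; the choice j = 5 is an arbitrary nonzero Fourier frequency, and the tolerances are loose Monte Carlo bounds):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2 = 2.0
n, reps = 64, 4000
j = 5  # a fixed Fourier frequency nu_j = j/n, away from 0 and 1/2
stats = np.empty(reps)
for r in range(reps):
    x = rng.normal(scale=np.sqrt(sigma2), size=n)
    stats[r] = 2 * np.abs(np.fft.fft(x)[j])**2 / (n * sigma2)
# A chi-squared(2) variable has mean 2 and variance 4
assert abs(stats.mean() - 2) < 0.2
assert abs(stats.var() - 4) < 0.8
```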
So for the case of Gaussian white noise, the periodogram has a chi-squared
distribution that depends on the variance σ² (which, in this case, is the
spectral density).
Under more general conditions (e.g., normal {X_t}, or a linear process {X_t}
with rapidly decaying ACF), the X_c(ν_j), X_s(ν_j) are all asymptotically
independent and N(0, f(ν_j)/2).
Consider a frequency ν. For a given value of n, let ν̂^(n) be the closest
Fourier frequency (that is, ν̂^(n) = j/n for a value of j that minimizes
|ν − j/n|). As n increases, ν̂^(n) → ν, and (under the same conditions that
ensure the asymptotic normality and independence of the sine/cosine
transforms) f(ν̂^(n)) → f(ν).
In that case, we have

      (2/f(ν)) I(ν̂^(n)) = (2/f(ν)) (X_c²(ν̂^(n)) + X_s²(ν̂^(n))) →_d Z_1² + Z_2²,

where Z_1, Z_2 are independent N(0, 1). Thus,

      E I(ν̂^(n)) = (f(ν)/2) E[(2/f(ν)) (X_c²(ν̂^(n)) + X_s²(ν̂^(n)))]
                  → (f(ν)/2) E(Z_1² + Z_2²) = f(ν),

so the asymptotic expectation of the periodogram I(ν) is f(ν).