HW4 Solutions
Problem 1
part (a)
Note that
$$X_t - 1.6X_{t-1} + 0.64X_{t-2} = w_t$$
$$(1 - 1.6B + 0.64B^2)X_t = w_t$$
The AR polynomial is $\phi(z) = 1 - 1.6z + 0.64z^2 = (1 - 0.8z)^2$, which has a double root $z_0 = 1/0.8 = 1.25$. Since $|z_0| > 1$, the process is causal.
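As a quick numerical cross-check (a Python/numpy sketch rather than the R used elsewhere in these solutions), the roots of $\phi(z)$ can be computed directly:

```python
import numpy as np

# phi(z) = 1 - 1.6 z + 0.64 z^2; np.roots wants descending powers of z
roots = np.roots([0.64, -1.6, 1.0])

print(roots)                      # double root at z = 1.25
print(np.all(np.abs(roots) > 1))  # True: both roots outside the unit circle -> causal
```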
part (b)
[Figure: plot of the ACF against lag for lags 1-20.]
part (c)
Applying the R function psi <- ARMAtoMA(ar=c(1.6,-.64), ma=0, lag.max=20) returns the numerical values
## [1] 1.6000000 1.9200000 2.0480000 2.0480000 1.9660800 1.8350080 1.6777216
## [8] 1.5099494 1.3421773 1.1811160 1.0307922 0.8933532 0.7696581 0.6597070
## [15] 0.5629500 0.4785075 0.4053240 0.3422736 0.2882304 0.2421135
for $\psi_1, \ldots, \psi_{20}$ (read row-wise), which are decreasing in absolute value. Since $\psi_0 = 1$ always holds, the first three correlations can be approximated from these weights via $\rho(h) \approx \sum_j \psi_j \psi_{j+h} / \sum_j \psi_j^2$, which in R gives
## [1] 0.9747543 0.9190079 0.8458620
The values are $\rho(1) = 0.9747543$, $\rho(2) = 0.9190079$, and $\rho(3) = 0.8458620$ (with $\rho(0) = 1$). For comparison, the R function ARMAacf(ar=c(1.6,-.64), ma=0, lag.max=3) returns the values $\rho(0) = 1$, $\rho(1) = 0.9756$, $\rho(2) = 0.9209$, and $\rho(3) = 0.8491$ instead. The values are close; the small differences come from truncating the $\psi$-weight expansion at lag 20.
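The $\psi$-weights and the truncated correlations can also be reproduced from first principles; the sketch below (Python/numpy standing in for the R session) applies the AR(2) recursion $\psi_j = 1.6\psi_{j-1} - 0.64\psi_{j-2}$ and the truncated formula above:

```python
import numpy as np

# psi-weights of the causal AR(2): psi_0 = 1, psi_1 = 1.6,
# psi_j = 1.6 psi_{j-1} - 0.64 psi_{j-2} for j >= 2
psi = np.empty(21)
psi[0], psi[1] = 1.0, 1.6
for j in range(2, 21):
    psi[j] = 1.6 * psi[j - 1] - 0.64 * psi[j - 2]

# truncated ACF: rho(h) ~ sum_j psi_j psi_{j+h} / sum_j psi_j^2
denom = psi @ psi
rho = [psi[:-h] @ psi[h:] / denom for h in (1, 2, 3)]
print(rho)  # approximately [0.9747543, 0.9190079, 0.8458620]
```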
Problem 2
part a
The AR polynomial $\phi(z) = 1 - 0.75z + 0.5625z^2$ has roots $z = 2(1 \pm i\sqrt{3})/3$, which lie outside the unit circle ($|z| = 4/3 > 1$). The process is therefore causal. On the other hand, the MA polynomial $\theta(z) = 1 + 1.25z$ has a root at $z = -0.8$, which lies inside the unit circle, and hence the process is not invertible. Since the AR and MA polynomials have no root in common, the order of the ARMA process is (2, 1).
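The moduli of the roots can be verified numerically (a Python/numpy sketch):

```python
import numpy as np

# AR polynomial 1 - 0.75 z + 0.5625 z^2 (descending powers for np.roots)
ar_roots = np.roots([0.5625, -0.75, 1.0])
# MA polynomial 1 + 1.25 z
ma_root = -1 / 1.25

print(np.abs(ar_roots))  # both |z| = 4/3 > 1  -> causal
print(abs(ma_root))      # 0.8 < 1             -> not invertible
```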
part b
$$X_t = 0.80X_{t-1} - 0.15X_{t-2} + W_t - 0.30W_{t-1}$$
The model can be factored as
$$(1 - 0.5B)(1 - 0.3B)X_t = (1 - 0.3B)W_t,$$
and cancelling the common factor $(1 - 0.3B)$ simplifies it to
$$(1 - 0.5B)X_t = W_t.$$
Hence there is parameter redundancy, and the original model reduces to the AR(1) model $X_t = 0.5X_{t-1} + W_t$, whose AR polynomial $\phi(z) = 1 - 0.5z$ has root $z = 2$ with $|z| > 1$. Hence the model is causal.
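The cancellation can be seen from the roots of the two polynomials (a Python/numpy sketch; the AR polynomial is $1 - 0.8z + 0.15z^2$ and the MA polynomial is $1 - 0.3z$):

```python
import numpy as np

# AR polynomial 1 - 0.8 z + 0.15 z^2 and MA polynomial 1 - 0.3 z
ar_roots = np.roots([0.15, -0.8, 1.0])  # roots 2 and 10/3
ma_root = 1 / 0.3                       # root 10/3

# the shared root 10/3 cancels, leaving the AR(1) factor (1 - 0.5 z),
# whose root is the remaining AR root z = 2
print(np.sort(np.abs(ar_roots)), ma_root)
```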
part c
The MA polynomial $\theta(z) = 1 - 0.5z - 0.5z^2$ has roots $z = 1$ and $z = -2$; since the root $z = 1$ lies on the unit circle, the process is not invertible. Since the process is a pure MA(2), it is automatically causal.
Problem 3
$$X_t - 0.8X_{t-2} = w_t + 0.6w_{t-1}, \qquad \{w_t\} \sim \mathrm{WN}(0, 1)$$
By multiplying both sides of the equation by $X_{t-2}$, $X_{t-1}$, and $X_t$, taking expectations, and noting the causal representation
$$X_t = w_t + 0.6w_{t-1} + 0.8w_{t-2} + \cdots,$$
we obtain the following equations.
$$\gamma(2) - 0.8\gamma(0) = 0$$
$$\gamma(1) - 0.8\gamma(1) = 0.6$$
$$\gamma(0) - 0.8\gamma(2) = 1 + 0.6^2.$$
By solving the above system of linear equations, we obtain
$$\gamma(2) = \frac{1.36 \times 0.8}{1 - 0.8^2} = 3.022222,$$
$$\gamma(0) = 0.8\gamma(2) + 1.36 = 3.777778,$$
$$\gamma(1) = 0.6/0.2 = 3.$$
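The two coupled equations can be checked with a linear solve (a Python/numpy sketch of the same system):

```python
import numpy as np

# unknowns ordered as (gamma(0), gamma(2)); gamma(1) decouples
#   -0.8*gamma(0) + gamma(2)     = 0
#    gamma(0)     - 0.8*gamma(2) = 1 + 0.6**2
A = np.array([[-0.8, 1.0],
              [1.0, -0.8]])
b = np.array([0.0, 1.36])
g0, g2 = np.linalg.solve(A, b)
g1 = 0.6 / (1 - 0.8)  # from gamma(1) - 0.8*gamma(1) = 0.6

print(g0, g1, g2)  # 3.7777..., 3.0, 3.0222...
```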
Problem 4
$$X_t = W_t + \theta W_{t-1}, \qquad |\theta| < 1, \qquad \{W_t\} \sim \mathrm{WN}(0, \sigma^2)$$
Recall that
$$\phi_{11} = \rho(1) = \frac{\theta}{1 + \theta^2}$$
and
$$\phi_{22} = \mathrm{Corr}(X_{t+2} - \hat{X}_{t+2},\, X_t - \hat{X}_t),$$
where $\hat{X}_{t+2} = \beta X_{t+1}$ is such that $E[(X_{t+2} - \hat{X}_{t+2})^2]$ is minimized. Similarly, $\hat{X}_t = \eta X_{t+1}$ is such that $E[(X_t - \hat{X}_t)^2]$ is minimized. Note
$$E[(X_{t+2} - \hat{X}_{t+2})^2] = \gamma(0) - 2\beta\gamma(1) + \beta^2\gamma(0),$$
whose minimum is attained at $\beta = \rho(1)$. Similarly, we can show that $\eta = \rho(1)$. Thus,
$$\phi_{22} = \mathrm{Corr}(X_{t+2} - \hat{X}_{t+2},\, X_t - \hat{X}_t)
= \frac{\mathrm{Cov}\!\left(X_{t+2} - \frac{\theta}{1+\theta^2}X_{t+1},\; X_t - \frac{\theta}{1+\theta^2}X_{t+1}\right)}{\sqrt{\mathrm{Var}\!\left(X_{t+2} - \frac{\theta}{1+\theta^2}X_{t+1}\right)\mathrm{Var}\!\left(X_t - \frac{\theta}{1+\theta^2}X_{t+1}\right)}}
= \frac{-\theta^2}{1 + \theta^2 + \theta^4}.$$
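The closed form agrees with the Durbin-Levinson expression $\phi_{22} = (\rho(2) - \rho(1)^2)/(1 - \rho(1)^2)$; a numeric sketch (Python, with an arbitrary illustrative value of $\theta$):

```python
import numpy as np

theta = 0.5  # an arbitrary illustrative value with |theta| < 1

# MA(1) ACF: rho(1) = theta/(1+theta^2), rho(h) = 0 for h >= 2
rho1 = theta / (1 + theta ** 2)

# Durbin-Levinson expression for the lag-2 PACF (rho(2) = 0 here)
phi22_dl = (0.0 - rho1 ** 2) / (1 - rho1 ** 2)

# closed form derived above
phi22 = -theta ** 2 / (1 + theta ** 2 + theta ** 4)

print(phi22_dl, phi22)  # both equal -0.190476...
```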
Problem 5
part a
Observe that
1. the ACF ordinates decay exponentially fast to zero;
2. the PACF ordinates are all zero after lag 2, and the PACF ordinate at lag 2 is nonzero.
This is consistent with the ACF and PACF of an AR(2) process.
part b
Observe that
1. the ACF ordinates are all zero after lag 2, and the ACF ordinate at lag 2 is nonzero;
2. the PACF ordinates decay exponentially fast to zero.
This is consistent with the ACF and PACF of an MA(2) process.
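These cut-off and tail-off patterns can be checked against theory; the sketch below (Python/numpy, using the causal AR(2) from Problem 1 as an illustrative example) computes the exact ACF from the Yule-Walker recursion and the PACF by the Durbin-Levinson algorithm, confirming that the PACF vanishes after lag 2:

```python
import numpy as np

phi1, phi2 = 1.6, -0.64  # the causal AR(2) from Problem 1 (illustrative)

# exact ACF from rho(h) = phi1*rho(h-1) + phi2*rho(h-2), rho(1) = phi1/(1-phi2)
K = 6
rho = np.empty(K + 1)
rho[0], rho[1] = 1.0, phi1 / (1 - phi2)
for h in range(2, K + 1):
    rho[h] = phi1 * rho[h - 1] + phi2 * rho[h - 2]

# PACF via the Durbin-Levinson recursion
pacf = np.zeros(K + 1)
pacf[1] = rho[1]
phi_prev = np.array([rho[1]])  # (phi_{k-1,1}, ..., phi_{k-1,k-1})
for k in range(2, K + 1):
    num = rho[k] - phi_prev @ rho[k - 1:0:-1]
    den = 1.0 - phi_prev @ rho[1:k]
    pkk = num / den
    phi_prev = np.append(phi_prev - pkk * phi_prev[::-1], pkk)
    pacf[k] = pkk

print(pacf[1:5])  # phi_11 = rho(1), phi_22 = phi2 = -0.64, then ~0
```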
Problem 6
part a
[Figure: time plot of the series x.]
The time series plot does not show any apparent departure from stationarity.
part b
We can identify an appropriate ARMA model for the data by inspecting the plots of the ACF and PACF. Observe that the ACF tails off, the first two PACF ordinates are significant, and the PACF ordinates after lag 2 are not significant. This indicates that an AR(2) process is an appropriate model for the data.
[Figure: sample ACF (top) and PACF (bottom) of the series x, lags 0-40.]
part c
Since AR(2) is an appropriate model for the data, we can use the Yule-Walker estimator to estimate the parameters of the model. Let $\{X_t\}$ be the time series considered in this problem. Then
$$X_t = \phi_1 X_{t-1} + \phi_2 X_{t-2} + \omega_t, \qquad \omega_t \sim \mathrm{WN}(0, \sigma^2).$$
The Yule-Walker estimator for $\Phi = (\phi_1, \phi_2)'$ is
$$\hat{\Phi} = \hat{R}_2^{-1}\hat{\rho}, \qquad \text{where } \hat{\rho} = (\hat{\rho}(1), \hat{\rho}(2))' \text{ and } \hat{R}_2 = \begin{bmatrix} 1 & \hat{\rho}(1) \\ \hat{\rho}(1) & 1 \end{bmatrix}.$$
In addition, $\hat{\sigma}^2 = \hat{\gamma}(0)[1 - \hat{\Phi}'\hat{\rho}]$.
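The whole Yule-Walker procedure can be rehearsed on simulated data (a Python/numpy sketch standing in for the R session; the AR coefficients below are illustrative, not the ones estimated from the course data):

```python
import numpy as np

rng = np.random.default_rng(42)
phi_true = np.array([1.5, -0.75])  # illustrative causal AR(2) coefficients
n, burn = 900, 200

# simulate the AR(2) with unit-variance white noise and a burn-in
w = rng.standard_normal(n + burn)
x = np.zeros(n + burn)
for t in range(2, n + burn):
    x[t] = phi_true[0] * x[t - 1] + phi_true[1] * x[t - 2] + w[t]
x = x[burn:]

# sample autocovariances gamma_hat(0..2) with the 1/n convention
xc = x - x.mean()
gamma = np.array([xc[: n - h] @ xc[h:] / n for h in range(3)])
r1, r2 = gamma[1] / gamma[0], gamma[2] / gamma[0]

# Yule-Walker: Phi_hat = R2^{-1} rho_hat, sigma2_hat = gamma(0)(1 - Phi_hat' rho_hat)
R2 = np.array([[1.0, r1], [r1, 1.0]])
phi_hat = np.linalg.solve(R2, np.array([r1, r2]))
sigma2_hat = gamma[0] * (1.0 - phi_hat @ np.array([r1, r2]))

print(phi_hat, sigma2_hat)  # close to (1.5, -0.75) and 1
```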
Estimates for the parameters of the AR(2) process can be obtained as follows:
acf.x <- acf(x)
[Figure: sample ACF of the series x produced by acf(x).]
r1 <- acf.x$acf[2]                   # rho_hat(1); acf[1] is lag 0
r2 <- acf.x$acf[3]                   # rho_hat(2)
g0 <- sum((x-mean(x))^2)/length(x)   # gamma_hat(0)
R <- matrix(c(1,r1, r1,1), nrow=2)   # R_hat_2
phi.x <- solve(R)%*%c(r1,r2)         # Yule-Walker estimate of (phi1, phi2)
sigma2 <- g0*(1-t(c(r1,r2))%*%phi.x) # sigma2_hat
Estimated parameters are:
phi.x
## [,1]
## [1,] 1.4863259
## [2,] -0.7353793
sigma2
## [,1]
## [1,] 0.9401966
To obtain 95% confidence intervals for the estimated coefficients, we need to compute the estimated asymptotic covariance matrix $\frac{\hat{\sigma}^2}{n\hat{\gamma}(0)}\hat{R}_2^{-1}$.
n <- length(x)
S <- as.vector(sigma2/(n*g0))*solve(R)  # asymptotic covariance matrix of phi-hat
se <- sqrt(diag(S))                     # standard errors
Thus a 95% confidence interval for $\phi_1$ can be constructed as follows:
c(phi.x[1] - 1.96*se[1], phi.x[1] + 1.96*se[1])
Similarly, a 95% confidence interval for $\phi_2$ can be constructed as follows:
c(phi.x[2] - 1.96*se[2], phi.x[2] + 1.96*se[2])
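Since S estimates a covariance matrix, interval half-widths should be built from the standard errors $\sqrt{S_{ii}}$, not from the variances themselves; a numeric sketch (Python, with illustrative values standing in for the data-dependent quantities):

```python
import numpy as np

# illustrative values standing in for the data-dependent quantities
# (NOT the actual values from the data set analysed above)
phi_hat = np.array([1.486, -0.735])
sigma2, g0, n, r1 = 0.94, 7.7, 900, 0.857

R2 = np.array([[1.0, r1], [r1, 1.0]])
S = (sigma2 / (n * g0)) * np.linalg.inv(R2)  # asymptotic covariance of phi_hat

se = np.sqrt(np.diag(S))                     # standard errors, not variances
ci_phi1 = (phi_hat[0] - 1.96 * se[0], phi_hat[0] + 1.96 * se[0])
print(ci_phi1)
```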