Time Series Analysis
STA457H1
ARIMA Models
Lecture 5
Dr. Esam Mahdi
University of Toronto - Department of Statistical Sciences
September 3, 2022
Dr. Esam Mahdi (University of Toronto - Department of Statistical Sciences)Time Series Analysis STA457H1 ARIMA Models Lecture 5 September 3, 2022 1 / 28
Learning Objectives
By the end of this chapter, you should be able to do the following:
Understand the necessary and sufficient conditions for AR, MA, and ARMA
models.
Evaluate stationarity, causality, and invertibility in time series models.
Use R with real time series data and interpret the outcomes of analyses.
Linear Processes
The Box-Jenkins methodology requires that the model used in describing and forecasting a time
series be both stationary and invertible.
Definition
The process {xt} is stationary if it remains in statistical equilibrium, with probabilistic
properties that do not change over time; in particular, it varies about a fixed constant mean
level with constant variance.
The process {xt} is invertible if its weights decline (and do not depend on time), and hence
the Box-Jenkins model can be used to express xt as a function of past x observations (that
is, xt−1, xt−2, . . . ).
For a stationary time series without trends or seasonal effects (that is, any trends and
seasonal or cyclical effects have, if necessary, already been removed from the series), we can
construct a linear model that accounts for the autocorrelation.
The most important special cases of linear processes are:
Autoregressive (AR) Model.
Moving-Average (MA) Model.
Autoregressive Moving Average (ARMA) Model.
Allowing for nonstationarity in ARMA models leads to the autoregressive
integrated moving average (ARIMA) model.
Autoregressive (AR) Models with Zero Mean
Definition
A time series {xt}, with zero mean, is called an autoregressive process of order p,
denoted AR(p), if it can be written in the form:
xt = ϕ1xt−1 + ϕ2xt−2 + · · ·+ ϕpxt−p + wt (1)
where wt is a random shock, usually assumed to be white noise, i.e.,
wt ∼ wn(0, σ2w ), and ϕ1, · · · , ϕp(ϕp ̸= 0) are parameters.
With the backshift operator, the AR process can be written as
Φp(B)xt = wt , (2)
where Φp(B) = 1− ϕ1B − ϕ2B2 − · · · − ϕpBp is a polynomial of degree p.
Autoregressive (AR) Models with Mean µ
If the mean, µ, of xt is not zero, replace xt by xt − µ to get:
xt − µ = ϕ1(xt−1 − µ) + ϕ2(xt−2 − µ) + · · ·+ ϕp(xt−p − µ) + wt (3)
or write
xt = δ + ϕ1xt−1 + ϕ2xt−2 + · · ·+ ϕpxt−p + wt , (4)
where
δ = µ(1− ϕ1 − ϕ2 − · · · − ϕp)
For example, the AR(2) model
xt = 1.5 + 1.2xt−1 − 0.5xt−2 + wt
is
xt − µ = 1.2(xt−1 − µ)− 0.5(xt−2 − µ) + wt
where 1.5 = µ(1− 1.2− (−0.5))⇒ 1.5 = 0.3µ.
Thus, the model has mean µ = 5 and the model can be written as
xt − 5 = 1.2(xt−1 − 5)− 0.5(xt−2 − 5) + wt
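As a quick sanity check (not part of the original slides), the mean calculation above can be reproduced in R:

```r
# Mean of the AR(2) model x_t = 1.5 + 1.2 x_{t-1} - 0.5 x_{t-2} + w_t:
# mu = delta / (1 - phi1 - phi2)
delta <- 1.5
phi <- c(1.2, -0.5)
mu <- delta / (1 - sum(phi))
mu
## [1] 5
```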
A Simulated AR(1) Series: xt = 0.8xt−1 + wt
set.seed(12345)
ar1.sim<-arima.sim(n=100,list(order=c(1,0,0),ar=0.8))
par(mfrow=c(1,3))
plot(ar1.sim, ylab="x(t)", main="Simulated AR(1)",lwd=2)
acf(ar1.sim, 50, main="Autocorrelation Function",lwd=2)
acf(ar1.sim, type="partial", main="Partial ACF",lwd=2)
Figure: A simulated AR(1) process: (1− 0.8B)xt = wt
A Simulated AR(1) Series: xt = −0.8xt−1 + wt
set.seed(12345)
ar11.sim<-arima.sim(n=100,list(order=c(1,0,0),ar=-0.8))
par(mfrow=c(1,3))
plot(ar11.sim, ylab="x(t)", main="Simulated AR(1)",lwd=2)
acf(ar11.sim, main="Autocorrelation Function",lwd=2)
acf(ar11.sim, type="partial", main="Partial ACF",lwd=2)
Figure: A simulated AR(1) process: (1 + 0.8B)xt = wt
A Simulated AR(2) Series: xt = 1.1xt−1 − 0.3xt−2 + wt
set.seed(1965)
ar2<-arima.sim(n=200,list(order=c(2,0,0),ar=c(1.1,-0.3)))
par(mfrow=c(1,3))
plot(ar2, ylab="x(t)", main="Simulated AR(2)",lwd=2)
acf(ar2, main="Autocorrelation Function",lwd=2)
acf(ar2, type="partial", main="Partial ACF",lwd=2)
Figure: A simulated AR(2) process: (1− 1.1B + 0.3B2)xt = wt
Causal Conditions for an AR(1) Process
The autoregressive process of order 1, AR(1), xt = ϕxt−1 + wt (equivalently
(1− ϕB)xt = wt) is a causal process if it is stationary with values that do not
depend on the future. In this case, the absolute value of the root of (1− ϕz) = 0
must lie outside the unit circle.
That is, the AR(1) is causal process if
|z| = |1/ϕ| > 1, or equivalently, |ϕ| < 1 (5)
Note that causality implies stationarity but not the other way around.
If the process is not stationary, then it is not causal.
Examples
1.) (1− 0.4B)xt = wt is a causal process because the absolute value of the root of
(1− 0.4z) = 0 is |z| = |1/0.4| = 2.5 > 1.
2.) (1 + 1.8B)xt = wt is not a stationary process (not causal) because the root of
(1 + 1.8z) = 0 in absolute value is |z| = |1/(−1.8)| ≈ 0.56 < 1.
3.) xt = 0.5xt−1 + wt is a causal process because |0.5| < 1.
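The AR(1) causality condition can be checked with a one-line helper (a sketch, not from the slides; the function name is illustrative):

```r
# An AR(1) (1 - phi*B) x_t = w_t is causal iff the root z = 1/phi of
# 1 - phi*z = 0 satisfies |z| > 1, i.e. |phi| < 1
is_causal_ar1 <- function(phi) abs(1 / phi) > 1
is_causal_ar1(0.4)   # TRUE:  example 1 is causal
is_causal_ar1(-1.8)  # FALSE: example 2 is not causal
is_causal_ar1(0.5)   # TRUE:  example 3 is causal
```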
Causal Conditions for an AR(2) Process
The AR(2) model,
xt = ϕ1xt−1 + ϕ2xt−2 + wt ,
is causal when the two roots of 1− ϕ1z − ϕ2z2 lie outside of the unit circle. Using the quadratic
formula, this requirement can be written as
|(ϕ1 ± √(ϕ1² + 4ϕ2)) / (−2ϕ2)| > 1.
In this case, the necessary and sufficient conditions for causality are:
|ϕ2| < 1, ϕ1 + ϕ2 < 1, and ϕ2 − ϕ1 < 1
Recall that the roots of the quadratic equation p(z) = a+ bz + cz² are
z = (−b ± √(b² − 4ac)) / (2c)
Examples
1.) xt = 1.1xt−1 − 0.4xt−2 + wt is causal.
2.) xt = 0.6xt−1 − 1.3xt−2 + wt is not stationary (|ϕ2| ≮ 1).
3.) xt = 0.6xt−1 + 0.8xt−2 + wt is not stationary (ϕ1 + ϕ2 ≮ 1).
4.) xt = −0.4xt−1 + 0.7xt−2 + wt is not stationary (ϕ2 − ϕ1 ≮ 1).
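The three conditions above can be coded directly (an illustrative sketch; the function name is an assumption):

```r
# Necessary and sufficient causality conditions for an AR(2) with
# coefficients (phi1, phi2): |phi2| < 1, phi1 + phi2 < 1, phi2 - phi1 < 1
ar2_causal <- function(phi1, phi2) {
  abs(phi2) < 1 && phi1 + phi2 < 1 && phi2 - phi1 < 1
}
ar2_causal(1.1, -0.4)  # TRUE:  example 1
ar2_causal(0.6, -1.3)  # FALSE: example 2, |phi2| >= 1
ar2_causal(0.6,  0.8)  # FALSE: example 3, phi1 + phi2 >= 1
ar2_causal(-0.4, 0.7)  # FALSE: example 4, phi2 - phi1 >= 1
```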
Causal Conditions for an AR(p) Process
The autoregressive process of order p, AR(p),
xt = ϕ1xt−1 + · · ·+ ϕpxt−p + wt ,
is said to be causal if all the roots of
1− ϕ1z − ϕ2z2 − · · · − ϕpzp = 0 lie outside the unit circle in absolute value.
Examples
1.) xt = −0.4xt−1 + 0.21xt−2 + wt is a causal process, as the roots of
1 + 0.4z − 0.21z2 = (1− 0.3z)(1 + 0.7z) = 0 in absolute value are |1/0.3| ≈ 3.33 > 1 and
|1/(−0.7)| ≈ 1.43 > 1.
2.) xt = 1.7xt−1 − 0.42xt−2 + wt is not a stationary process because the
absolute value of one of the two roots of
1− 1.7z + 0.42z2 = (1− 0.3z)(1− 1.4z) = 0 lies inside the unit circle,
namely |1/1.4| ≈ 0.71 < 1.
3.) xt = −0.25xt−2 + wt is a causal process because the roots of
1 + 0.25z2 = 0 are ±2i, with |± 2i| = 2 > 1, where i = √−1.
Finding the Roots in R
In R, the function polyroot(a), where a is the vector of polynomial coefficients
in increasing order, can be used to find the zeros of a real or complex polynomial
of degree n − 1,
p(x) = a1 + a2x + · · ·+ an x^(n−1).
The function Mod() can be used to check whether the roots of p(x) have
modulus > 1 or not.
Example
The AR(4) process xt = 0.3xt−1 − 0.7xt−2 + 1.2xt−3 − 0.1xt−4 + wt is not stationary
as
roots <- polyroot(c(1,-0.3, 0.7,-1.2,0.1))
Mod(roots)
## [1] 0.8829242 0.8829242 1.1250090 11.4024243
Moving Average (MA) Models
Definition
A time series {xt}, with zero mean, is called a moving average process of order
q, denoted MA(q), if it can be written in the form:
xt = wt + θ1wt−1 + θ2wt−2 + · · ·+ θqwt−q (6)
where wt is a white noise, wt ∼ wn(0, σ2w ), and θ1, · · · , θq(θq ̸= 0) are
parameters.
With the backshift operator, the MA process can be written as
xt = Θq(B)wt , (7)
where Θq(B) = 1 + θ1B + θ2B2 + · · ·+ θqBq is a polynomial of degree q.
The MA(q) process, for any q ≥ 1, is always stationary, regardless of the
values of θj , j = 1,2, · · · ,q.
If all the roots of 1 + θ1z + θ2z2 + · · ·+ θqzq = 0 lie outside the unit
circle (modulus > 1), the MA(q) process is said to be invertible.
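The root condition can be checked with polyroot() (a sketch, not from the slides; ma_invertible is an illustrative name):

```r
# An MA(q) is invertible iff all roots of 1 + theta1*z + ... + thetaq*z^q = 0
# lie outside the unit circle (modulus > 1)
ma_invertible <- function(theta) all(Mod(polyroot(c(1, theta))) > 1)
ma_invertible(0.8)          # TRUE:  x_t = w_t + 0.8 w_{t-1}
ma_invertible(1.8)          # FALSE: the root has modulus 1/1.8 < 1
ma_invertible(c(0.8, 0.6))  # TRUE:  x_t = w_t + 0.8 w_{t-1} + 0.6 w_{t-2}
```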
A Simulated MA(1) Series: xt = wt + 0.8wt−1
set.seed(6436)
ma1.sim<-arima.sim(n=200,list(order=c(0,0,1),ma=0.8))
par(mfrow=c(1,3))
plot(ma1.sim, ylab="x(t)", main="Simulated MA(1)",lwd=2)
acf(ma1.sim, main="Autocorrelation Function",lwd=2)
acf(ma1.sim, type="partial", main="Partial ACF",lwd=2)
Figure: A simulated MA(1) process: xt = (1 + 0.8B)wt
A Simulated MA(1) Series: xt = wt − 0.8wt−1
set.seed(6436)
ma11.sim<-arima.sim(n=200,list(order=c(0,0,1),ma=-0.8))
par(mfrow=c(1,3))
plot(ma11.sim, ylab="x(t)", main="Simulated MA(1)",lwd=2)
acf(ma11.sim, main="Autocorrelation Function",lwd=2)
acf(ma11.sim, type="partial", main="Partial ACF",lwd=2)
Figure: A simulated MA(1) process: xt = (1− 0.8B)wt
A Simulated MA(2) Series: xt = wt +0.8wt−1 +0.6wt−2
set.seed(14762)
ma2<-arima.sim(n=200,list(order=c(0,0,2),ma=c(0.8,0.6)))
par(mfrow=c(1,3))
plot(ma2, ylab="x(t)", main="Simulated MA(2)",lwd=2)
acf(ma2, main="Autocorrelation Function",lwd=2)
acf(ma2, type="partial", main="Partial ACF",lwd=2)
Figure: A simulated MA(2) process: xt = (1 + 0.8B + 0.6B2)wt
Non-uniqueness of MA Models and Invertibility
If we replace θ by 1/θ, the autocorrelation function of the MA(1),
ρ(1) = θ/(1 + θ2) (studied later), does not change. There are thus two processes,
xt = wt + θwt−1
and
xt = wt + (1/θ)wt−1,
that show identical autocorrelation patterns, and hence the MA(1) coefficient is
not uniquely identified.
For the general moving average process, MA(q), there is a similar
identifiability problem.
The problem can be resolved by requiring that the following operator be
invertible,
1 + θ1B + θ2B2 + · · ·+ θqBq ,
i.e., that all roots of 1 + θ1z + θ2z2 + · · ·+ θqzq = 0 lie outside the
unit circle in absolute value.
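The identifiability problem can be seen numerically with the built-in ARMAacf() function (not part of the slides):

```r
# theta = 0.5 and theta = 1/0.5 = 2 give the same MA(1) autocorrelations;
# only the invertible choice |theta| < 1 is identified
acf_a <- ARMAacf(ma = 0.5, lag.max = 3)
acf_b <- ARMAacf(ma = 2.0, lag.max = 3)
acf_a  # lags 0..3: 1, 0.4, 0, 0 since rho(1) = theta/(1 + theta^2) = 0.4
acf_b  # identical to acf_a
```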
Examples - Invertibility of MA Models
Example
1.) The MA(1) process xt = wt + 0.4wt−1 is an invertible process because
|θ| = |0.4| < 1 (or equivalently, the root of 1 + 0.4z = 0 in absolute value
is |z| = 1/0.4 = 2.5 > 1).
2.) The MA(1) process xt = wt + 1.8wt−1 is not an invertible process because
the root of 1 + 1.8z = 0 in absolute value is |z| = 1/1.8 ≈ 0.56 < 1.
3.) The MA(3) process xt = wt − 0.3wt−1 + 0.7wt−2 − 1.2wt−3 is not
invertible as
roots <- polyroot(c(1,-0.3, 0.7,-1.2))
Mod(roots)
## [1] 0.8810509 0.8810509 1.0735363
Mixed Autoregressive-Moving Average (ARMA)
Models
Definition
A time series {xt}, with zero mean, is called an autoregressive moving average
process of order (p,q), denoted ARMA(p,q), if it can be written as
xt = ∑_{i=1}^{p} ϕi xt−i + ∑_{j=0}^{q} θj wt−j , (8)
where the first sum is the autoregressive part and the second sum is the
moving average part,
where wt ∼ wn(0, σ2w ) is white noise, ϕi & θj for i = 1,2, · · · ,p, j = 0,1, · · · ,q,
are the parameters of the Autoregressive & Moving Average parts
respectively, and θ0 = 1.
Mixed Autoregressive-Moving Average (ARMA)
Models
With the backshift operator, the process (autoregressive part on the left,
moving average part on the right) can be rewritten as:
Φp(B) xt = Θq(B) wt , (9)
where Φp(B) = 1 − ∑_{i=1}^{p} ϕi B^i and Θq(B) = 1 + ∑_{j=1}^{q} θj B^j .
If the mean, µ, of xt is not zero, replace xt by xt − µ to get:
Φp(B)(xt − µ) = Θq(B)wt .
can also be written as:
xt = δ + ϕ1xt−1 + · · ·+ ϕpxt−p + wt + θ1wt−1 + · · ·+ θqwt−q , (10)
where δ = µ(1− ϕ1 − ϕ2 − · · · − ϕp), and µ is the quantity reported as the
intercept in the output of the R function arima().
Three Conditions for ARMA Models
The ARMA model is assumed to be stationary, invertible and identifiable:
The condition for the stationarity is the same for the pure AR(p) process:
i.e., the roots of 1− ϕ1z − ϕ2z2 − · · · − ϕpzp lie outside the unit circle.
The condition for the invertibility is the same for the pure MA(q) process:
i.e., the roots of 1 + θ1z + θ2z2 + · · ·+ θqzq lie outside the unit circle.
The identifiability condition means that the model is not redundant: i.e.,
Φp(B) = 0 and Θq(B) = 0 have no common roots.
Stationary and Invertibility Conditions for ARMA
Models
Model                                      Stationarity conditions   Invertibility conditions
MA(1): xt = δ + at + θ1at−1                None                      |θ1| < 1
MA(2): xt = δ + at + θ1at−1 + θ2at−2       None                      θ1 + θ2 > −1, θ1 − θ2 < 1,
                                                                     |θ2| < 1
AR(1): xt = δ + ϕ1xt−1 + at                |ϕ1| < 1                  None
AR(2): xt = δ + ϕ1xt−1 + ϕ2xt−2 + at       ϕ1 + ϕ2 < 1,              None
                                           ϕ2 − ϕ1 < 1, |ϕ2| < 1
ARMA(1, 1): xt = δ + ϕ1xt−1 + at + θ1at−1  |ϕ1| < 1                  |θ1| < 1
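The stationarity and invertibility checks can be combined into one helper for a general ARMA(p, q) under the sign conventions above (a sketch, not from the slides; arma_check is an illustrative name):

```r
# Stationarity: roots of 1 - phi1*z - ... - phip*z^p outside the unit circle.
# Invertibility: roots of 1 + theta1*z + ... + thetaq*z^q outside the unit circle.
arma_check <- function(phi = numeric(0), theta = numeric(0)) {
  stationary <- length(phi) == 0 || all(Mod(polyroot(c(1, -phi))) > 1)
  invertible <- length(theta) == 0 || all(Mod(polyroot(c(1, theta))) > 1)
  c(stationary = stationary, invertible = invertible)
}
arma_check(phi = 0.8, theta = -0.6)  # ARMA(1,1): stationary and invertible
arma_check(phi = c(0.6, 0.8))        # AR(2): not stationary
```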
Example: Model Redundancy (Common Roots in ARMA Models)
Consider the ARMA(1,2) model
xt = 0.2xt−1 + wt − 1.1wt−1 + 0.18wt−2.
This model can be written as
(1− 0.2B)xt = (1− 1.1B + 0.18B2)wt
or equivalently
(1− 0.2B)xt = (1− 0.2B)(1− 0.9B)wt
Cancelling (1− 0.2B) from both sides gives:
xt = (1− 0.9B)wt
Thus, the process is not really an ARMA(1,2); it is an MA(1) ≡ ARMA(0,1).
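The cancellation can be confirmed by computing the roots of the two polynomials (not part of the slides):

```r
# The AR polynomial 1 - 0.2z and the MA polynomial 1 - 1.1z + 0.18z^2
# share the root z = 5, so the common factor (1 - 0.2B) cancels
ar_roots <- polyroot(c(1, -0.2))
ma_roots <- polyroot(c(1, -1.1, 0.18))
ar_roots  # 5
ma_roots  # 5 and 1/0.9 (approx. 1.11)
```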
A simulated ARMA(1,1) series: xt = 0.8xt−1 + wt − 0.6wt−1
set.seed(6436);par(mfrow=c(1,3))
arma1.sim<-arima.sim(n=200,list(order=c(1,0,1),ar=0.8,ma=-0.6))
plot(arma1.sim,ylab="x(t)",main="Simulated ARMA(1,1)")
acf(arma1.sim, main="Autocorrelation Function",lwd=2)
acf(arma1.sim, type="partial", main="Partial ACF",lwd=2)
Figure: A simulated ARMA(1, 1) process: (1− 0.8B)xt = (1− 0.6B)wt
A simulated ARMA(2,1) series:
xt = 0.8xt−1 − 0.4xt−2 + wt − 0.6wt−1
set.seed(6436);par(mfrow=c(1,3))
arma21.sim<-arima.sim(n=200,list(order=c(2,0,1),ar=c(0.8,-0.4),
ma=-0.6))
plot(arma21.sim,ylab="x(t)",main="Simulated ARMA(2,1)",lwd=2)
acf(arma21.sim, main="Autocorrelation Function",lwd=2)
acf(arma21.sim, type="partial", main="Partial ACF",lwd=2)
Figure: A simulated process: (1− 0.8B + 0.4B2)xt = (1− 0.6B)wt
A simulated ARMA(1,2) series:
xt = 0.8xt−1 + wt − 0.6wt−1 + 0.4wt−2
set.seed(6436);par(mfrow=c(1,3))
arma12.sim<-arima.sim(n=200,list(order=c(1,0,2),ar=0.8,
ma=c(-0.6,0.4)))
plot(arma12.sim,ylab="x(t)",main="Simulated ARMA(1,2)",lwd=2)
acf(arma12.sim, main="Autocorrelation Function",lwd=2)
acf(arma12.sim, type="partial", main="Partial ACF",lwd=2)
Figure: A simulated process: (1− 0.8B)xt = (1− 0.6B + 0.4B2)wt
Special Forms of ARMA(p,q) Process
1 AR(1) ≡ ARMA(1,0) process may be written as:
xt = ϕxt−1 + wt or (1− ϕB)xt = wt
2 AR(2) ≡ ARMA(2,0) process may be written as:
xt = ϕ1xt−1 + ϕ2xt−2 + wt or (1− ϕ1B − ϕ2B2)xt = wt
3 MA(1) ≡ ARMA(0,1) process may be written as:
xt = wt + θwt−1 or xt = (1 + θB)wt
4 MA(2) ≡ ARMA(0,2) process may be written as:
xt = wt + θ1wt−1 + θ2wt−2 or xt = (1 + θ1B + θ2B2)wt
Special Forms of ARMA(p,q) Process
1 ARMA(1,1) process may be written as:
xt = ϕxt−1 + wt + θwt−1 or (1− ϕB)xt = (1 + θB)wt
2 ARMA(2,1) process may be written as:
xt = ϕ1xt−1 + ϕ2xt−2 + wt + θwt−1 or (1− ϕ1B − ϕ2B2)xt = (1 + θB)wt
3 ARMA(1,2) process may be written as:
xt = ϕxt−1 + wt + θ1wt−1 + θ2wt−2 or (1− ϕB)xt = (1 + θ1B + θ2B2)wt
4 ARMA(2,2) process may be written as:
xt = ϕ1xt−1 + ϕ2xt−2 + wt + θ1wt−1 + θ2wt−2 or Φ2(B)xt = Θ2(B)wt ,
where Φ2(B) = 1− ϕ1B − ϕ2B2 & Θ2(B) = 1 + θ1B + θ2B2

