Backward and Forward Closed Solutions of Multivariate ARMA Models.
Ludlow-Wiechers, Jorge
Universidad Autónoma Metropolitana, Azcapotzalco
25 March 2012
Online at https://mpra.ub.uni-muenchen.de/37635/
MPRA Paper No. 37635, posted 27 Mar 2012 16:46 UTC
Jorge Ludlow-Wiechers.
Economics. CSH.
Universidad Autónoma Metropolitana, Azcapotzalco.
Av. San Pablo 180, Col Reynosa-Tamaulipas.
México City 02200.
jlw@correo.azc.uam.mx
Mathematics Subject Classification: 60G12 General second-order processes.
JEL classification: C01; C22; C32; C50.
Keywords: causal models, non-causal models, invertible models, non-invertible models, backward solution, forward solution.
Abstract. Some of the most widely used models in economics involve variables not yet observed, so that their specification depends on future observations; the theory that underpins these models delivers the backward/forward solution. We present a unified construction that starts from a more general specification of an ARMA model, yet is capable of delivering closed-form solutions in both the backward and forward cases, leading to an alternative presentation of the causal/non-causal and invertible/non-invertible cases.
A general discussion of the model

$$\Phi_{-p_1}Y_{t+p_1} + \cdots + \Phi_{-1}Y_{t+1} + \Phi_0 Y_t + \Phi_1 Y_{t-1} + \cdots + \Phi_{p_2}Y_{t-p_2} = \Theta_{-q_1}X_{t+q_1} + \cdots + \Theta_{-1}X_{t+1} + \Theta_0 X_t + \Theta_1 X_{t-1} + \cdots + \Theta_{q_2}X_{t-q_2}$$

is given, first for the case of two stationary random vectors $\{Y_t\}$, $\{X_t\}$, and then for the case in which $\{X_t\}$ is white noise. The case in which future dates are involved through a conditional expectation is also considered.
1 Introduction.
Let $L^2(\Omega,\mathcal{F},P) = \{\,Y:\Omega\to\mathbb{R} \mid E[Y^2] = \int_\Omega Y^2(\omega)\,dP(\omega) < \infty\,\}$ be the Hilbert space of square-integrable real-valued random variables defined on the probability space $(\Omega,\mathcal{F},P)$, where $\mathcal{F}$ is a sigma-algebra of subsets of $\Omega$ and $P$ is a probability measure defined on $\mathcal{F}$, in which the inner product $\langle Y_1,Y_2\rangle = E(Y_1Y_2)$ and the norm $\|Y\| = \sqrt{E[Y^2]}$ are both defined. An $m$-variate time series process is a sequence of column $m$-vectors $\{Y_t\}$, $Y_t' = (Y_t(1), Y_t(2), \ldots, Y_t(m))$, formed of elements $Y_t(i) \in L^2(\Omega,\mathcal{F},P)$.
The mean of an $m$-variate process is $\mu_t \equiv E[Y_t] = (E[Y_t(i)]) = (\mu_t(i))$, and the autocovariance is $\Gamma_Y(t+j,t) \equiv E[(Y_{t+j}-\mu_{t+j})(Y_t-\mu_t)']$. A process is second-order stationary if the mean and covariance do not depend on the integer variable $t$, which represents time. The case considered herein has a zero mean, hence $E[Y_t] \equiv 0$ and $\Gamma_Y(j) \equiv E[Y_{t+j}Y_t']$.
$\{A_t\}$ is white noise: a countable collection of stationary random vectors with mean zero, $E[A_t] \equiv 0$, and autocovariance $\Gamma_A(j) \equiv E[A_{t+j}A_t'] = \Sigma$ if $j = 0$ and $\Gamma_A(j) \equiv 0$ if $j \neq 0$. The $m \times m$ matrix $\Sigma$ is invertible, positive definite and symmetric, and is termed the covariance.
The lag operator is defined by $L^k(Y_t(i)) = Y_{t-k}(i)$, where $k$ is an integer. The notation $\sum_{s \geq 0} \|B_s\| < \infty$ means absolute summability, where $\|\cdot\|$ is a matrix norm.
The present study offers an admissible exposition that encompasses the causal/non-causal and invertible/non-invertible cases and delivers closed solutions associated with the specification based on a conditional expectation:

$$\Phi_{-p_1}E_t[Y_{t+p_1}] + \cdots + \Phi_{-1}E_t[Y_{t+1}] + \Phi_0 Y_t + \Phi_1 Y_{t-1} + \cdots + \Phi_{p_2}Y_{t-p_2} = \Theta_{-q_1}E_t[X_{t+q_1}] + \cdots + \Theta_{-1}E_t[X_{t+1}] + \Theta_0 X_t + \Theta_1 X_{t-1} + \cdots + \Theta_{q_2}X_{t-q_2}$$
Starting with the theoretical model:

$$\Phi_{-p_1}Y_{t+p_1} + \cdots + \Phi_{-1}Y_{t+1} + \Phi_0 Y_t + \Phi_1 Y_{t-1} + \cdots + \Phi_{p_2}Y_{t-p_2} = \Theta_{-q_1}X_{t+q_1} + \cdots + \Theta_{-1}X_{t+1} + \Theta_0 X_t + \Theta_1 X_{t-1} + \cdots + \Theta_{q_2}X_{t-q_2}$$
The focus then shifts to the multivariate ARMA case. The remainder of the paper is divided into three sections. In Section 2, a constructive presentation is used to describe a general procedure for dealing with a linear filter. In Section 3, the direct application to the VARMA case is considered, followed by a proposal for dealing with an applied case using a model with an expectative.
2 Linear processes.
Let us take two stationary second-order processes {Yt}, {Xt}. Both are m-variate and have a zero mean, and we now consider a linear model of order (p1, p2, q1, q2):
$$\Phi_{-p_1}Y_{t+p_1} + \cdots + \Phi_{-1}Y_{t+1} + \Phi_0 Y_t + \Phi_1 Y_{t-1} + \cdots + \Phi_{p_2}Y_{t-p_2} = \Theta_{-q_1}X_{t+q_1} + \cdots + \Theta_{-1}X_{t+1} + \Theta_0 X_t + \Theta_1 X_{t-1} + \cdots + \Theta_{q_2}X_{t-q_2}$$

In this equation $\Phi_{-p_1} \neq 0$, $\Phi_{p_2} \neq 0$, $\Theta_{-q_1} \neq 0$, $\Theta_{q_2} \neq 0$; all the coefficients are real $m \times m$ matrices, and $\Phi_0$ and $\Theta_0$ are $m \times m$ identity matrices.
The standard notation is the usual $\Phi(L)Y_t = \Theta(L)X_t$. The linear operators

$$\Phi(L) = \Phi_{-p_1}L^{-p_1} + \cdots + \Phi_{-1}L^{-1} + \Phi_0 + \Phi_1 L + \cdots + \Phi_{p_2}L^{p_2}$$

$$\Theta(L) = \Theta_{-q_1}L^{-q_1} + \cdots + \Theta_{-1}L^{-1} + \Theta_0 + \Theta_1 L + \cdots + \Theta_{q_2}L^{q_2}$$
in fact have alternative formulations:

$$\Phi(L) = L^{-p_1}\Phi_{p_1}(L) = L^{p_2}\Phi_{p_2}(L)$$

$$\Phi_{p_1}(L) \equiv L^{p_1}\Phi(L) = \Phi_{-p_1} + \Phi_{-p_1+1}L + \cdots + \Phi_0 L^{p_1} + \cdots + \Phi_{p_2}L^{p_1+p_2}$$

$$\Phi_{p_2}(L) \equiv L^{-p_2}\Phi(L) = \Phi_{-p_1}L^{-(p_1+p_2)} + \cdots + \Phi_0 L^{-p_2} + \cdots + \Phi_{p_2-1}L^{-1} + \Phi_{p_2}$$

$$\Theta(L) = L^{-q_1}\Theta_{q_1}(L) = L^{q_2}\Theta_{q_2}(L)$$

$$\Theta_{q_1}(L) \equiv L^{q_1}\Theta(L) = \Theta_{-q_1} + \Theta_{-q_1+1}L + \cdots + \Theta_0 L^{q_1} + \cdots + \Theta_{q_2}L^{q_1+q_2}$$

$$\Theta_{q_2}(L) \equiv L^{-q_2}\Theta(L) = \Theta_{-q_1}L^{-(q_1+q_2)} + \cdots + \Theta_0 L^{-q_2} + \cdots + \Theta_{q_2-1}L^{-1} + \Theta_{q_2}$$
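These factorizations are pure index shifts of a single coefficient list, which a short numerical check makes concrete. The following is an added sketch (not part of the original text) with hypothetical scalar ($1 \times 1$) coefficients, evaluating each form at a point $z$:

```python
import numpy as np

# A Laurent lag polynomial Phi(L) = sum_j Phi_j L^j, j = -p1..p2, stored as the
# coefficient list [Phi_{-p1}, ..., Phi_0, ..., Phi_{p2}] plus the offset p1.
p1, p2 = 1, 2
coeffs = [np.array([[2.0]]), np.eye(1), np.array([[0.5]]), np.array([[0.25]])]

def eval_laurent(coeffs, low, z):
    """Evaluate sum_j coeffs[j] * z**(low + j)."""
    return sum(c * z ** (low + j) for j, c in enumerate(coeffs))

# Phi_{p1}(L) = L^{p1} Phi(L): same coefficients, exponents now starting at 0.
# Phi_{p2}(L) = L^{-p2} Phi(L): same coefficients, exponents now ending at 0.
z = 0.7
lhs = eval_laurent(coeffs, -p1, z)                 # Phi(z)
mid = z ** -p1 * eval_laurent(coeffs, 0, z)        # z^{-p1} * Phi_{p1}(z)
rhs = z ** p2 * eval_laurent(coeffs, -p1 - p2, z)  # z^{p2} * Phi_{p2}(z)
print(np.allclose(lhs, mid) and np.allclose(lhs, rhs))  # True
```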
The backward and forward solutions respectively require the following two representations:

$$L^{-p_1}\Phi_{p_1}(L)Y_t = L^{-q_1}\Theta_{q_1}(L)X_t \quad\text{and}\quad L^{p_2}\Phi_{p_2}(L)Y_t = L^{q_2}\Theta_{q_2}(L)X_t$$
The backward analysis:

$$Y_t = \Phi(L)^{-1}\Theta(L)X_t = L^{p_1}\Phi_{p_1}(L)^{-1}L^{-q_1}\Theta_{q_1}(L)X_t = L^{p_1-q_1}\Phi_{p_1}(L)^{-1}\Theta_{q_1}(L)X_t$$

$$Y_t = L^{p_1-q_1}[\Phi_{-p_1} + \cdots + \Phi_0 L^{p_1} + \cdots + \Phi_{p_2}L^{p_1+p_2}]^{-1}[\Theta_{-q_1} + \cdots + \Theta_0 L^{q_1} + \cdots + \Theta_{q_2}L^{q_1+q_2}]X_t$$

$$Y_t = L^{p_1-q_1}[\Psi_0 + \Psi_1 L + \Psi_2 L^2 + \cdots][\Theta_{-q_1} + \cdots + \Theta_0 L^{q_1} + \cdots + \Theta_{q_2}L^{q_1+q_2}]X_t$$
If all the roots of the polynomial

$$\det[\Phi_{p_1}(z)] = \det[\Phi_{-p_1} + \Phi_{-p_1+1}z + \cdots + \Phi_0 z^{p_1} + \cdots + \Phi_{p_2}z^{p_1+p_2}]$$

lie outside the unit circle, it is known that the inversion is guaranteed, and the selection fulfills the condition that the real $m \times m$ matrices $\{\Psi_s\}$ are such that $\sum_{s \geq 0} \|\Psi_s\| < \infty$.
It is well known that the product of the matrix series with a matrix polynomial is another well-defined matrix series with $\sum_{s \geq 0} \|\Lambda_s\| < \infty$:

$$L^{p_1-q_1}\Phi_{p_1}(L)^{-1}\Theta_{q_1}(L) = L^{p_1-q_1}[\Lambda_0 + \Lambda_1 L + \Lambda_2 L^2 + \cdots]$$

Hence, the backward stationary solution is:

$$Y_t = \Lambda_0 X_{t-p_1+q_1} + \Lambda_1 X_{t-p_1+q_1-1} + \Lambda_2 X_{t-p_1+q_1-2} + \Lambda_3 X_{t-p_1+q_1-3} + \cdots$$
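As a computational aside (an added sketch, not part of the original derivation), the matrices $\{\Psi_s\}$ can be generated by the standard recursion for inverting a matrix polynomial $B(L) = B_0 + B_1 L + \cdots + B_p L^p$ with $B_0$ invertible: $\Psi_0 = B_0^{-1}$ and $\Psi_s = -B_0^{-1}\sum_{j=1}^{\min(s,p)} B_j \Psi_{s-j}$. A minimal NumPy sketch on a hypothetical bivariate VAR(1), where $\Psi_s = A^s$ is known in closed form:

```python
import numpy as np

def inverse_series(B, n_terms):
    """Coefficients Psi_s of B(L)^{-1} = Psi_0 + Psi_1 L + ..., where B is a
    list [B_0, B_1, ..., B_p] of m x m matrices with B_0 invertible.
    Recursion from sum_j B_j Psi_{s-j} = 0 for s >= 1."""
    m = B[0].shape[0]
    B0_inv = np.linalg.inv(B[0])
    Psi = [B0_inv]
    for s in range(1, n_terms):
        acc = np.zeros((m, m))
        for j in range(1, min(s, len(B) - 1) + 1):
            acc += B[j] @ Psi[s - j]
        Psi.append(-B0_inv @ acc)
    return Psi

# Bivariate VAR(1): Phi_{p1}(L) = I - A*L with spectral radius of A below one,
# so det[Phi_{p1}(z)] has all its roots outside the unit circle.
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])
Psi = inverse_series([np.eye(2), -A], 10)
# For this example the solution coefficients are Psi_s = A^s.
print(np.allclose(Psi[3], np.linalg.matrix_power(A, 3)))  # True
```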
The forward analysis:
The rationale for obtaining the forward solution is that the filter $\Phi(L)Y_t = \Theta(L)X_t$ and the dual filter $\Phi(L^{-1})Y_t = \Theta(L^{-1})X_t$ are related, because travelling forward along one corresponds to travelling backward along the other. The forward solution is the backward solution of the dual case, pulled back.
Now let us take:

$$L^{p_2}\Phi_{p_2}(L)Y_t = L^{q_2}\Theta_{q_2}(L)X_t$$

$$Y_t = \Phi(L)^{-1}\Theta(L)X_t = L^{-p_2}\Phi_{p_2}(L)^{-1}L^{q_2}\Theta_{q_2}(L)X_t = L^{q_2-p_2}\Phi_{p_2}(L)^{-1}\Theta_{q_2}(L)X_t$$

$$\Phi_{p_2}(L)^{-1}\Theta_{q_2}(L) = (\Phi_{-p_1}L^{-(p_1+p_2)} + \cdots + \Phi_0 L^{-p_2} + \cdots + \Phi_{p_2-1}L^{-1} + \Phi_{p_2})^{-1}(\Theta_{-q_1}L^{-(q_1+q_2)} + \cdots + \Theta_0 L^{-q_2} + \cdots + \Theta_{q_2-1}L^{-1} + \Theta_{q_2})$$
and let us apply the transformation $L \to L^{-1}$:

$$\Phi_{p_2}(L^{-1})^{-1}\Theta_{q_2}(L^{-1}) = (\Phi_{-p_1}L^{p_1+p_2} + \cdots + \Phi_0 L^{p_2} + \cdots + \Phi_{p_2-1}L + \Phi_{p_2})^{-1}(\Theta_{-q_1}L^{q_1+q_2} + \cdots + \Theta_0 L^{q_2} + \cdots + \Theta_{q_2-1}L + \Theta_{q_2})$$
If $\det[\Phi_{p_2}(1/z)]$ is a polynomial with roots that lie outside the unit circle, we may write

$$\Phi_{p_2}(L^{-1})^{-1} = (\Phi_{-p_1}L^{p_1+p_2} + \cdots + \Phi_0 L^{p_2} + \cdots + \Phi_{p_2-1}L + \Phi_{p_2})^{-1} = \Delta_0 + \Delta_1 L + \Delta_2 L^2 + \cdots$$

This selection fulfills the condition that the real $m \times m$ matrices $\{\Delta_s\}$ are such that $\sum_{s \geq 0} \|\Delta_s\| < \infty$.
Therefore $\Phi_{p_2}(L^{-1})^{-1}\Theta_{q_2}(L^{-1}) = (\Delta_0 + \Delta_1 L + \Delta_2 L^2 + \cdots)(\Theta_{-q_1}L^{q_1+q_2} + \cdots + \Theta_0 L^{q_2} + \cdots + \Theta_{q_2})$, and we again have a product of a matrix series with a matrix polynomial:

$$\Phi_{p_2}(L^{-1})^{-1}\Theta_{q_2}(L^{-1}) = \Pi_0 + \Pi_1 L + \Pi_2 L^2 + \cdots$$

Now, going backwards by applying the transformation $L \to L^{-1}$:

$$\Phi_{p_2}(L)^{-1}\Theta_{q_2}(L) = \Pi_0 + \Pi_1 L^{-1} + \Pi_2 L^{-2} + \cdots$$
It should be noted that $\det[\Phi_{p_2}(1/z)]$ has roots that all lie outside the unit circle if and only if all the roots of the dual polynomial $\det[\Phi_{p_2}(z)]$ lie inside the unit circle and are non-zero. It is therefore required that the roots of $\det[\Phi_{p_2}(z)]$ all lie inside the unit circle and are not null, in order to ensure the existence of the required convergent matrix series.
$$\Phi(L)^{-1}\Theta(L) = L^{q_2-p_2}[\Pi_0 + \Pi_1 L^{-1} + \Pi_2 L^{-2} + \cdots]$$

and the real $m \times m$ matrices $\{\Pi_s\}$ are such that $\sum_{s \geq 0} \|\Pi_s\| < \infty$. Hence, the forward stationary solution is:

$$Y_t = \Pi_0 X_{t-q_2+p_2} + \Pi_1 X_{t-q_2+p_2+1} + \Pi_2 X_{t-q_2+p_2+2} + \Pi_3 X_{t-q_2+p_2+3} + \cdots$$
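To make the forward solution concrete, here is an added numerical sketch (not in the original text) for a hypothetical non-causal scalar AR(1), $Y_t - \varphi Y_{t-1} = X_t$ with $\varphi = 2$: then $p_1 = q_1 = q_2 = 0$, $p_2 = 1$, $\Phi_{p_2}(1/z) = z - \varphi$ has its root outside the unit circle, and the construction gives $\Pi_s = -\varphi^{-(s+1)}$ with shift $q_2 - p_2 = -1$. The truncated forward sum reproduces the defining equation:

```python
import numpy as np

rng = np.random.default_rng(0)
phi = 2.0            # det[Phi_{p2}(1/z)] = z - phi: root at z = 2, outside the unit circle
T, tail = 200, 400   # sample length and truncation horizon for the forward sum
X = rng.standard_normal(T + tail)

# Forward solution Y_t = sum_{s>=0} Pi_s X_{t+1+s} with Pi_s = -phi^{-(s+1)}
Pi = -phi ** -(np.arange(tail) + 1.0)
Y = np.array([Pi @ X[t + 1 : t + 1 + tail] for t in range(T)])

# Check that the forward solution satisfies Y_t - phi*Y_{t-1} = X_t;
# the residual is pure truncation error, of order phi**(-tail).
resid = Y[1:] - phi * Y[:-1] - X[1:T]
print(np.max(np.abs(resid)))
```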
In summary, the backward case uses $L^{-p_1}\Phi_{p_1}(L)Y_t = L^{-q_1}\Theta_{q_1}(L)X_t$ and solves

$$Y_t = L^{p_1-q_1}\Phi_{p_1}(L)^{-1}\Theta_{q_1}(L)X_t = \sum_{s \geq 0} \Lambda_s X_{t-p_1+q_1-s}$$

and the forward model uses $L^{p_2}\Phi_{p_2}(L)Y_t = L^{q_2}\Theta_{q_2}(L)X_t$, then

$$Y_t = L^{q_2-p_2}\Phi_{p_2}(L)^{-1}\Theta_{q_2}(L)X_t = \sum_{s \geq 0} \Pi_s X_{t-q_2+p_2+s}$$
Collecting the results of the previous analysis, we have proved the closed solution of a linear model, as stated below.
Closed solution of a linear model.
Let us first consider a multivariate backward solution. Let $\{Y_t\}$ and $\{X_t\}$ be two $m$-variate stationary second-order processes, and consider the stochastic equation $\Phi(L)Y_t = \Theta(L)X_t$ of order $(p_1, p_2, q_1, q_2)$. Let the polynomial $\det[\Phi_{p_1}(z)]$ be such that all its roots lie outside the unit circle; then there exists an integer index given by $k = p_1 - q_1$ and a countable collection of real $m \times m$ matrices $\{\Psi_s\}$ with $\sum_{s \geq 0} \|\Psi_s\| < \infty$ such that

$$Y_t = \Psi_0 X_{t-k} + \Psi_1 X_{t-k-1} + \Psi_2 X_{t-k-2} + \cdots = \sum_{s \geq 0} \Psi_s X_{t-k-s}$$

is the stationary solution.
Let us now consider a multivariate forward solution. Let $\{Y_t\}$ and $\{X_t\}$ be two $m$-variate stationary second-order processes, and consider the stochastic equation $\Phi(L)Y_t = \Theta(L)X_t$ of order $(p_1, p_2, q_1, q_2)$. Let the polynomial $\det[\Phi_{p_2}(z)]$ be such that all its roots are non-zero and lie inside the unit circle; then there exists an integer index given by $k = q_2 - p_2$ and a countable collection of real $m \times m$ matrices $\{\Psi_s\}$ with $\sum_{s \geq 0} \|\Psi_s\| < \infty$ such that

$$Y_t = \Psi_0 X_{t-k} + \Psi_1 X_{t-k+1} + \Psi_2 X_{t-k+2} + \cdots = \sum_{s \geq 0} \Psi_s X_{t-k+s}$$

is the stationary solution.
3 Multivariate ARMA processes.
Let us now assume that $X_t = A_t$, with the additional assumption that $\{A_t\}$ is white noise. In this section we set $X_t = A_t$ and solve either for $Y_t$ or for $A_t$; there are now four cases:

1. VMA backward
2. VAR backward
3. VMA forward
4. VAR forward

Take a zero-mean stationary process $\{Y_t\}$; this series is a solution of the VARMA$(p_1, p_2, q_1, q_2)$ stochastic equation if it satisfies:

$$\Phi_{-p_1}Y_{t+p_1} + \cdots + \Phi_{-1}Y_{t+1} + \Phi_0 Y_t + \Phi_1 Y_{t-1} + \cdots + \Phi_{p_2}Y_{t-p_2} = \Theta_{-q_1}A_{t+q_1} + \cdots + \Theta_{-1}A_{t+1} + \Theta_0 A_t + \Theta_1 A_{t-1} + \cdots + \Theta_{q_2}A_{t-q_2}$$

with $\Phi_{-p_1} \neq 0$, $\Phi_{p_2} \neq 0$, $\Theta_{-q_1} \neq 0$, $\Theta_{q_2} \neq 0$, $\det[\Phi_0] \neq 0$ and $\det[\Theta_0] \neq 0$.
Corollary 1: VMA backward. Let $\{Y_t\}$ be a stationary vector process, let $\{A_t\}$ be white noise, and consider the stochastic equation $\Phi(L)Y_t = \Theta(L)A_t$ in the form $L^{-p_1}\Phi_{p_1}(L)Y_t = L^{-q_1}\Theta_{q_1}(L)A_t$. If the polynomial $\det[\Phi_{p_1}(z)]$ is such that all its roots lie outside the unit circle, there then exists an integer key $k = p_1 - q_1$ and a countable collection of real $m \times m$ matrices $\{\Psi_s\}$ with $\sum_{s \geq 0} \|\Psi_s\| < \infty$, and the solution is given by:

$$Y_t = \Psi_0 A_{t-k} + \Psi_1 A_{t-k-1} + \Psi_2 A_{t-k-2} + \Psi_3 A_{t-k-3} + \cdots = \sum_{s \geq 0} \Psi_s A_{t-k-s}$$
Corollary 2: VAR backward. Let $\{Y_t\}$ be a stationary vector process, let $\{A_t\}$ be white noise, and consider the stochastic equation $\Phi(L)Y_t = \Theta(L)A_t$ in the form $L^{-p_1}\Phi_{p_1}(L)Y_t = L^{-q_1}\Theta_{q_1}(L)A_t$. If the polynomial $\det[\Theta_{q_1}(z)]$ is such that all its roots lie outside the unit circle, there then exists an integer key $k = q_1 - p_1$ and a countable collection of real $m \times m$ matrices $\{\Psi_s\}$ with $\sum_{s \geq 0} \|\Psi_s\| < \infty$, and the solution is given by:

$$A_t = \Psi_0 Y_{t-k} + \Psi_1 Y_{t-k-1} + \Psi_2 Y_{t-k-2} + \Psi_3 Y_{t-k-3} + \cdots = \sum_{s \geq 0} \Psi_s Y_{t-k-s}$$
Corollary 3: VMA forward. Let $\{Y_t\}$ be a stationary vector process, let $\{A_t\}$ be white noise, and consider the stochastic equation $\Phi(L)Y_t = \Theta(L)A_t$ in the form $L^{p_2}\Phi_{p_2}(L)Y_t = L^{q_2}\Theta_{q_2}(L)A_t$. If the polynomial $\det[\Phi_{p_2}(z)]$ is such that all its roots lie inside the unit circle and are not null, there then exists an integer key $k = q_2 - p_2$ and a countable collection of real $m \times m$ matrices $\{\Psi_s\}$ with $\sum_{s \geq 0} \|\Psi_s\| < \infty$, and the solution is given by:

$$Y_t = \Psi_0 A_{t-k} + \Psi_1 A_{t-k+1} + \Psi_2 A_{t-k+2} + \Psi_3 A_{t-k+3} + \cdots = \sum_{s \geq 0} \Psi_s A_{t-k+s}$$
Corollary 4: VAR forward. Let $\{Y_t\}$ be a stationary vector process, let $\{A_t\}$ be white noise, and consider the stochastic equation $\Phi(L)Y_t = \Theta(L)A_t$ in the form $L^{p_2}\Phi_{p_2}(L)Y_t = L^{q_2}\Theta_{q_2}(L)A_t$. If the polynomial $\det[\Theta_{q_2}(z)]$ is such that all its roots lie inside the unit circle and are not null, there then exists an integer key $k = p_2 - q_2$ and a countable collection of real $m \times m$ matrices $\{\Psi_s\}$ with $\sum_{s \geq 0} \|\Psi_s\| < \infty$, and the solution is given by:

$$A_t = \Psi_0 Y_{t-k} + \Psi_1 Y_{t-k+1} + \Psi_2 Y_{t-k+2} + \Psi_3 Y_{t-k+3} + \cdots = \sum_{s \geq 0} \Psi_s Y_{t-k+s}$$
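The four cases above can be exercised numerically. Here is an added sketch (under assumed parameter values, not from the original text) of Corollary 2 on a hypothetical invertible scalar MA(1), $Y_t = A_t + \theta A_{t-1}$ with $|\theta| < 1$: then $p_1 = p_2 = q_1 = 0$, $q_2 = 1$, so $k = q_1 - p_1 = 0$ and $\Psi_s = (-\theta)^s$, and the noise is recovered from the observables:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.4   # Theta_{q1}(z) = 1 + theta*z has its root -1/theta outside the unit circle
N = 500
A = rng.standard_normal(N)
Y = np.empty(N)
Y[0] = A[0]                       # initializing with Y_0 = A_0 makes the finite sum exact
Y[1:] = A[1:] + theta * A[:-1]    # invertible MA(1): Y_t = A_t + theta*A_{t-1}

# Corollary 2 (VAR backward): k = q1 - p1 = 0 and Psi_s = (-theta)^s,
# so A_t = sum_{s>=0} (-theta)^s Y_{t-s}.
t = N - 1
Psi = (-theta) ** np.arange(t + 1)
A_hat = Psi @ Y[t::-1]            # Y[t::-1] lists Y_t, Y_{t-1}, ..., Y_0
print(abs(A_hat - A[t]))          # negligible (floating-point error only)
```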
4 Models that depend on future observations.
Let us now consider an application in which the economic agents incorporate expectations into their plans. We may then consider

$$\Phi_{-p_1}E_t[Y_{t+p_1}] + \cdots + \Phi_{-1}E_t[Y_{t+1}] + \Phi_0 Y_t + \Phi_1 Y_{t-1} + \cdots + \Phi_{p_2}Y_{t-p_2} = \Theta_{-q_1}E_t[X_{t+q_1}] + \cdots + \Theta_{-1}E_t[X_{t+1}] + \Theta_0 X_t + \Theta_1 X_{t-1} + \cdots + \Theta_{q_2}X_{t-q_2}$$
where $E_t[Z_{t+j}]$ is the conditional expectation of $Z_{t+j}$ with respect to the sigma-field generated by all the past information in the collection $\{Z_t, Z_{t-1}, Z_{t-2}, \ldots\}$ and fulfills $E_t[Z_{t-j}] = Z_{t-j}$ for $j = 0, 1, \ldots$. In the case where $Z_t = A_t$ is given by white noise, the notation will imply the use of the residuals $E_t[A_{t-j}] = \hat{A}_{t-j}$, $j = 0, 1, \ldots$
The conditional expectation is linear, thus:

$$E_t[\Phi_{-p_1}Y_{t+p_1} + \cdots + \Phi_0 Y_t + \cdots + \Phi_{p_2}Y_{t-p_2}] = E_t[\Theta_{-q_1}X_{t+q_1} + \cdots + \Theta_0 X_t + \cdots + \Theta_{q_2}X_{t-q_2}]$$
Let us solve for the skeleton:

$$\Phi_{-p_1}Y_{t+p_1} + \cdots + \Phi_0 Y_t + \cdots + \Phi_{p_2}Y_{t-p_2} = \Theta_{-q_1}X_{t+q_1} + \cdots + \Theta_0 X_t + \cdots + \Theta_{q_2}X_{t-q_2}$$
We now have the relation:

$$Y_t = \sum_{s \geq 0} \Psi_s X_{t-k-s}$$

(backward form; in the forward case the index is $t-k+s$).
By applying a conditional expectation and using the concept of a limit, it is possible to simplify the expression above according to the rules:

$$E_t[Y_{t-j}] = Y_{t-j}, \quad j = 0, 1, 2, 3, \ldots \qquad E_t[X_{t-s}] = X_{t-s}, \quad s = 0, 1, 2, 3, \ldots$$
Either in backward or forward form, we may conclude that

$$E_t[Y_t] = \sum_{s \geq 0} \Psi_s E_t[X_{t-k-s}]$$

(with index $t-k+s$ in the forward case).
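As an applied sketch (an added example under assumed dynamics, not part of the original text), consider a Cagan-style special case $Y_t = \varphi E_t[Y_{t+1}] + X_t$, a forward-looking member of the family above, with $X_t$ following an AR(1) with parameter $\rho$, so that $E_t[X_{t+s}] = \rho^s X_t$. The forward conclusion then collapses to $Y_t = X_t/(1-\varphi\rho)$:

```python
import numpy as np

phi, rho = 0.5, 0.8   # |phi*rho| < 1, so the forward sum converges
# Forward solution: Y_t = sum_{s>=0} phi**s * E_t[X_{t+s}], and with AR(1)
# dynamics E_t[X_{t+s}] = rho**s * X_t the sum is a geometric series.
x_t = 2.0
horizon = 200         # truncation horizon for the forward sum
Y_t = sum(phi**s * rho**s * x_t for s in range(horizon))
print(np.isclose(Y_t, x_t / (1 - phi * rho)))  # True
```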
It is possible to apply these ideas in two ways: by invoking a learning procedure to obtain an estimate of each expectation, or alternatively by using surveys to fill in the expectation terms on the right-hand side. However, a sudden change in the information set might alter the anticipated values. The $\{Y_t\}$ path depends not only on past information but also on expected future values. An inherent uncertainty is present: the policy maker cannot work in isolation behind a desk; instead, he or she must try to gauge public opinion in order to address this uncertainty. We may say that there is time consistency if the sequence of expected values remains constant, and inconsistency when the sequence of expected values changes rapidly.