
3. Expectation formation using statistical predictors

3.1 Parametric prediction models

3.1.2 State-space modeling

The MA polynomial is $\theta(L) = 1 + \theta_1 L + \theta_2 L^2 + \cdots + \theta_q L^q$. An ARMA model is stationary provided that the roots of $\phi(L) = 0$ lie outside the unit circle. The process is invertible if the roots of $\theta(L) = 0$ lie outside the unit circle. Low-order ARMA models are of much interest, since many real data sets are well approximated by them rather than by a pure AR or pure MA model. In general, ARMA models need fewer parameters to describe the process.

In most cases economic time series are non-stationary, so ARMA models cannot be applied to them directly. One way to remove the problem is to difference the series so as to make it stationary. Non-stationary series often become stationary after taking a first difference ($X_t - X_{t-1} = (1-L)X_t$). If the original time series is differenced $d$ times, the model is said to be an ARIMA($p, d, q$), where "I" stands for integrated and $d$ denotes the number of differences taken. Such a model is described by

$$\phi(L)(1-L)^d X_t = \alpha + \theta(L)\varepsilon_t$$

The combined AR operator is now $\phi(L)(1-L)^d$. The polynomials $\phi(z)$ and $\theta(z)$ have all their roots outside the unit circle. The model is called integrated of order $d$ and the process is said to have $d$ unit roots.
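
As a minimal illustration with simulated data, first differencing turns a random walk, the simplest integrated process (one unit root), back into its stationary innovations:

```python
import numpy as np

# Illustration with simulated data: a random walk X_t = X_{t-1} + e_t is
# non-stationary, but its first difference (1 - L)X_t = e_t is white noise.
rng = np.random.default_rng(0)
e = rng.normal(size=1000)     # stationary innovations
x = np.cumsum(e)              # random walk: ARIMA(0,1,0), one unit root
dx = np.diff(x)               # first difference, d = 1

# Differencing exactly recovers the innovations e_2, ..., e_T
assert np.allclose(dx, e[1:])
```

In practice $d$ is rarely larger than 1 or 2 for economic series; over-differencing introduces a non-invertible MA component.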

Following Harvey (1991, 1993), let $y_t$ be an $N \times 1$ vector of variables observed at time $t$, related to an $m \times 1$ state vector $\alpha_t$ through the measurement equation

$$y_t = Z_t \alpha_t + d_t + \varepsilon_t, \quad t = 1,\ldots,T \quad (1)$$

where $Z_t$ is an $N \times m$ matrix, $d_t$ an $N \times 1$ vector, and $\varepsilon_t$ an $N \times 1$ vector of serially uncorrelated disturbances with mean zero and covariance matrix $H_t$. The unknown vector $\alpha_t$ is assumed to follow a first-order Markov process,

$$\alpha_t = T_t \alpha_{t-1} + c_t + R_t \eta_t, \quad t = 1,\ldots,T \quad (2)$$

where $T_t$ is an $m \times m$ matrix, $c_t$ an $m \times 1$ vector, $R_t$ an $m \times g$ matrix, and $\eta_t$ a $g \times 1$ vector of serially uncorrelated disturbances with mean zero and covariance matrix $Q_t$. Equation (2) is called the transition equation. The matrices $Z_t$, $d_t$, and $H_t$ in the measurement equation and the matrices $T_t$, $c_t$, $R_t$, and $Q_t$ in the transition equation are referred to as the system matrices. The model is said to be time-invariant or time-homogeneous if the system matrices do not change over time; otherwise it is time-variant. For instance, the AR(1) plus noise model

$$y_t = \mu_t + \varepsilon_t$$
$$\mu_t = \phi \mu_{t-1} + \eta_t$$

is a time-invariant state-space model with $\mu_t$ being the state.
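
To make the example concrete, the AR(1) plus noise model can be simulated directly. The parameter values below ($\phi = 0.8$ and unit disturbance variances) are illustrative assumptions, not values from the text:

```python
import numpy as np

# Simulate y_t = mu_t + eps_t, mu_t = phi * mu_{t-1} + eta_t (assumed values).
# In state-space terms: Z = 1, d = 0, H = var(eps); T = phi, c = 0, R = 1, Q = var(eta).
rng = np.random.default_rng(1)
phi, n = 0.8, 5000

mu = np.zeros(n)                         # the state
for t in range(1, n):
    mu[t] = phi * mu[t - 1] + rng.normal()
y = mu + rng.normal(size=n)              # observed series

# The state is stationary with variance Q / (1 - phi^2); the observation
# adds the measurement-noise variance H on top of it.
print(round(mu.var(), 2), round(1 / (1 - phi**2), 2))
```

The sample variance of the simulated state should sit close to the stationary value $Q/(1-\phi^2)$, while the observed series is noisier.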

Kalman filter

The Kalman filter can be applied to the state-space form to estimate time-varying parameters. The estimation is carried out in three steps: prediction, updating, and smoothing. The first step calculates the optimal estimator of the state vector given all currently available information; on reaching the end of the series, an optimal prediction of the state vector for the next period can be made. The updating step is performed as each new observation becomes available. In the final step, the estimators are smoothed over the full sample using a backward recursion. These steps are presented below in more detail.

Let $a_t$ denote the optimal estimate of the state vector $\alpha_t$ based on all observations up to time $t$, and $P_t$ the $m \times m$ covariance matrix of the estimation error, that is,

$$P_t = E[(\alpha_t - a_t)(\alpha_t - a_t)']$$

Now assume that we are at time $t-1$, and that $a_{t-1}$ and $P_{t-1}$ are given. The optimal estimate of $\alpha_t$ is then given by the prediction equations

$$a_{t|t-1} = T_t a_{t-1} + c_t$$
$$P_{t|t-1} = T_t P_{t-1} T_t' + R_t Q_t R_t', \quad t = 1,\ldots,T$$

while the corresponding estimate of $y_t$ is

$$\tilde{y}_{t|t-1} = Z_t a_{t|t-1} + d_t, \quad t = 1,\ldots,T$$

Once the new observation $y_t$ becomes available, the estimator of the state can be updated with the updating equations

$$a_t = a_{t|t-1} + P_{t|t-1} Z_t' F_t^{-1} v_t$$
$$P_t = P_{t|t-1} - P_{t|t-1} Z_t' F_t^{-1} Z_t P_{t|t-1}, \quad t = 1,\ldots,T$$

where $v_t = y_t - Z_t a_{t|t-1} - d_t$ is the prediction error and $F_t = Z_t P_{t|t-1} Z_t' + H_t$ its MSE.
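
The prediction and updating recursions can be sketched for the scalar AR(1) plus noise model, where the system matrices collapse to scalars ($Z = 1$, $d = 0$, $c = 0$, $R = 1$, $T = \phi$). The parameter values and starting point are illustrative assumptions:

```python
import numpy as np

# Prediction and updating steps of the Kalman filter for the scalar
# AR(1) plus noise model (Z = 1, d = 0, c = 0, R = 1); assumed parameters.
def kalman_filter(y, phi, H, Q, a0=0.0, P0=10.0):
    a, P = a0, P0
    filtered = []
    for yt in y:
        # prediction: a_{t|t-1} = T a_{t-1},  P_{t|t-1} = T P_{t-1} T' + R Q R'
        a_pred = phi * a
        P_pred = phi**2 * P + Q
        # updating, with prediction error v_t and its MSE F_t
        v = yt - a_pred                  # v_t = y_t - Z a_{t|t-1} - d_t
        F = P_pred + H                   # F_t = Z P_{t|t-1} Z' + H_t
        a = a_pred + P_pred / F * v      # a_t = a_{t|t-1} + P Z' F^{-1} v_t
        P = P_pred - P_pred**2 / F       # P_t = P_{t|t-1} - P Z' F^{-1} Z P
        filtered.append(a)
    return np.array(filtered)

# simulate the model, then filter the noisy observations
rng = np.random.default_rng(2)
n, phi = 500, 0.8
mu = np.zeros(n)
for t in range(1, n):
    mu[t] = phi * mu[t - 1] + rng.normal()
y = mu + rng.normal(size=n)

a = kalman_filter(y, phi=phi, H=1.0, Q=1.0)
# Filtering should track the true state better than the raw observations do
print(np.mean((a - mu)**2) < np.mean((y - mu)**2))
```

The gain term $P_{t|t-1}/F_t$ weights the prediction error: the noisier the observation relative to the state uncertainty, the less the update moves the estimate.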

The prediction and updating equations use the information available at time $t$ to estimate the state vector, while the smoothing step uses information that becomes available after time $t$. Applying the fixed-interval smoothing algorithm, the last step starts from the final quantities $a_T$ and $P_T$ and works backwards. The smoothing equations are

$$a_{t|T} = a_t + P_t^*(a_{t+1|T} - T_{t+1} a_t - c_{t+1})$$
$$P_{t|T} = P_t + P_t^*(P_{t+1|T} - P_{t+1|t})P_t^{*\prime}$$
$$P_t^* = P_t T_{t+1}' P_{t+1|t}^{-1}, \quad t = T-1,\ldots,1$$

with $a_{T|T} = a_T$ and $P_{T|T} = P_T$.
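
A sketch of the fixed-interval smoother for the same scalar AR(1) plus noise model (assumed parameters as before): the forward pass stores the filtered and one-step-ahead quantities, and the backward pass applies the smoothing equations.

```python
import numpy as np

# Fixed-interval smoother for the scalar AR(1) plus noise model.
# Forward pass stores a_t, P_t and the predictions a_{t|t-1}, P_{t|t-1};
# backward pass applies a_{t|T} = a_t + P*_t (a_{t+1|T} - T a_{t+1|t}-ish
# recursion with P*_t = P_t T' P_{t+1|t}^{-1}.  Parameters are assumed.
def filter_and_smooth(y, phi, H, Q, a0=0.0, P0=10.0):
    n = len(y)
    a = np.zeros(n); P = np.zeros(n)        # filtered estimates
    ap = np.zeros(n); Pp = np.zeros(n)      # one-step predictions
    at, Pt = a0, P0
    for t in range(n):
        ap[t] = phi * at
        Pp[t] = phi**2 * Pt + Q
        F = Pp[t] + H
        at = ap[t] + Pp[t] / F * (y[t] - ap[t])
        Pt = Pp[t] - Pp[t]**2 / F
        a[t], P[t] = at, Pt
    # backward recursion, starting from a_{T|T} = a_T, P_{T|T} = P_T
    a_s = a.copy(); P_s = P.copy()
    for t in range(n - 2, -1, -1):
        Pstar = P[t] * phi / Pp[t + 1]
        a_s[t] = a[t] + Pstar * (a_s[t + 1] - phi * a[t])
        P_s[t] = P[t] + Pstar * (P_s[t + 1] - Pp[t + 1]) * Pstar
    return a, a_s

rng = np.random.default_rng(3)
n, phi = 500, 0.8
mu = np.zeros(n)
for t in range(1, n):
    mu[t] = phi * mu[t - 1] + rng.normal()
y = mu + rng.normal(size=n)

a_filt, a_smooth = filter_and_smooth(y, phi=phi, H=1.0, Q=1.0)
# Smoothing uses the full sample, so it should beat the filtered estimate
print(np.mean((a_smooth - mu)**2), np.mean((a_filt - mu)**2))
```

Because the smoother conditions on the whole sample, its error variance is never larger than that of the filter at any interior point.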

Estimating the state vector

Estimation of the parameters of the model (the vector $\psi$, referred to as the hyperparameters), given initial values $a_0$ and $P_0$ for $a_t$ and $P_t$¹, can be carried out by the method of maximum likelihood. For a multivariate model the likelihood is

$$L(y; \psi) = \prod_{t=1}^{T} p(y_t \mid Y_{t-1})$$

where $p(y_t \mid Y_{t-1})$ denotes the distribution of $y_t$ conditional on $Y_{t-1} = \{y_{t-1}, y_{t-2}, \ldots, y_1\}$. For a Gaussian model, the likelihood function above can be written as

$$\log L(\psi) = -\frac{NT}{2}\log 2\pi - \frac{1}{2}\sum_{t=1}^{T}\log|F_t| - \frac{1}{2}\sum_{t=1}^{T} v_t' F_t^{-1} v_t$$
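
This prediction error decomposition means the log-likelihood falls out of the Kalman filter at no extra cost: $v_t$ and $F_t$ are already computed in the updating step. A sketch for the scalar AR(1) plus noise model, with assumed parameter values:

```python
import numpy as np

# Gaussian log-likelihood via the prediction error decomposition,
#   log L = -NT/2 log 2pi - 1/2 sum log|F_t| - 1/2 sum v_t' F_t^{-1} v_t,
# accumulated inside the Kalman filter for the scalar AR(1) plus noise model.
def log_likelihood(y, phi, H, Q, a0=0.0, P0=10.0):
    a, P, ll = a0, P0, 0.0
    for yt in y:
        a_pred, P_pred = phi * a, phi**2 * P + Q
        v = yt - a_pred                  # prediction error v_t
        F = P_pred + H                   # its MSE F_t
        ll += -0.5 * (np.log(2 * np.pi) + np.log(F) + v * v / F)
        a = a_pred + P_pred / F * v
        P = P_pred - P_pred**2 / F
    return ll

rng = np.random.default_rng(4)
n = 2000
mu = np.zeros(n)
for t in range(1, n):
    mu[t] = 0.8 * mu[t - 1] + rng.normal()
y = mu + rng.normal(size=n)

# The likelihood should prefer the true phi = 0.8 to a badly wrong value
print(log_likelihood(y, 0.8, 1.0, 1.0) > log_likelihood(y, 0.1, 1.0, 1.0))
```

In practice this function is handed to a numerical optimizer over $\psi$ to obtain the maximum likelihood estimates of the hyperparameters.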

Time-varying parameter models and state-space form

Time-varying parameter models can be analyzed by casting them in state-space form. Consider the linear model

$$y_t = x_t'\beta + \varepsilon_t, \quad t = 1,\ldots,T$$

where $x_t$ is a $k \times 1$ vector of exogenous variables and $\beta$ the corresponding $k \times 1$ vector of unknown parameters. We can use the state-space model and the Kalman filter to estimate the time-varying parameter model. In this case, $\beta$ is allowed to evolve over time according to various stochastic processes. Now let us examine different forms of the time-varying parameter model.

---

1. The initial values for a stationary, time-invariant transition equation are given by $a_0 = (I - T)^{-1}c$ and $\mathrm{vec}(P_0) = [I - T \otimes T]^{-1}\mathrm{vec}(RQR')$. If the transition equation is non-stationary, the initial values must be obtained differently. There are two approaches. The first assumes the initial state is fixed, with $P_0 = 0$, and estimates it as an unknown parameter of the model. The second assumes that the initial state is random and has a diffuse distribution with $P_0 = \kappa I$, where $\kappa$ goes to infinity.

Consider first the random walk model, in which the time-varying coefficients follow a random walk. The state-space form is as follows

$$y_t = x_t'\beta_t + \varepsilon_t, \quad t = 1,\ldots,T$$
$$\beta_t = \beta_{t-1} + \eta_t$$

where $\varepsilon_t \sim NID(0, \sigma^2)$, $\eta_t \sim NID(0, \sigma^2 Q)$, and $\beta_t$ denotes the state vector. The $k \times k$ positive semi-definite matrix $Q$ determines to what extent the coefficients may vary. If $Q = 0$, the model reduces to an ordinary linear regression model because $\beta_t = \beta_{t-1}$. But if $Q$ is positive definite, all coefficients will be time-varying.
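
A sketch of filtering this model in the scalar case ($k = 1$): the measurement matrix is simply $Z_t = x_t$ and the transition matrix is the identity. All numerical values below are illustrative assumptions:

```python
import numpy as np

# Random-walk time-varying coefficient (assumed data-generating values):
#   y_t = x_t * beta_t + eps_t,   beta_t = beta_{t-1} + eta_t.
# Kalman filter with Z_t = x_t, T = 1, c = 0, R = 1.
rng = np.random.default_rng(5)
n = 1000
x = rng.normal(size=n)
beta = np.cumsum(rng.normal(scale=0.05, size=n))   # slowly drifting coefficient
y = x * beta + rng.normal(scale=0.5, size=n)

H, Q = 0.25, 0.05**2
b, P = 0.0, 10.0
b_filt = np.zeros(n)
for t in range(n):
    P_pred = P + Q                       # random-walk transition: T = 1
    v = y[t] - x[t] * b                  # prediction error with Z_t = x_t
    F = x[t]**2 * P_pred + H
    b = b + P_pred * x[t] / F * v
    P = P_pred - (P_pred * x[t])**2 / F
    b_filt[t] = b

# The filtered path should track the drifting coefficient reasonably well
print(np.mean((b_filt - beta)**2))
```

When $x_t$ is close to zero the observation carries little information about $\beta_t$, and the filter relies more on the transition equation.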

The second time-varying form may be referred to as the return-to-normality model. In this model, the time-varying coefficients are generated by a stationary vector AR(1) process. The state-space form can be represented as

$$y_t = x_t'\beta_t + \varepsilon_t, \quad t = 1,\ldots,T$$
$$\beta_t - \bar{\beta} = \phi(\beta_{t-1} - \bar{\beta}) + \eta_t$$

where $\varepsilon_t \sim NID(0, \sigma^2)$ and $\eta_t \sim NID(0, \sigma^2 Q)$. The stationary coefficients evolve around a constant mean $\bar{\beta}$. If the matrix $\phi = 0$, the model is called the random-coefficient model; in this case the coefficients have a fixed mean $\bar{\beta}$ but are allowed to evolve randomly around it.

Applying the Kalman filter and letting $\beta_t^* = \beta_t - \bar{\beta}$, the return-to-normality model can be rewritten as

$$y_t = (x_t' \;\; x_t')\,\alpha_t + \varepsilon_t, \quad t = 1,\ldots,T$$

and

$$\alpha_t = \begin{bmatrix} \beta_t^* \\ \bar{\beta} \end{bmatrix} = \begin{bmatrix} \phi & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} \beta_{t-1}^* \\ \bar{\beta} \end{bmatrix} + \begin{bmatrix} \eta_t \\ 0 \end{bmatrix}$$

A diffuse prior is used for $\bar{\beta}$, meaning that starting values are constructed from the first $k$ observations. The initial value of $\beta_t^*$ is given by a zero vector and the initial covariance matrix by $\mathrm{vec}(P_0) = [I - T \otimes T]^{-1}\mathrm{vec}(RQR')$.