2.A. Finding optimal policies and welfare gains
This section describes the algorithm that we use to derive the optimal federal RI scheme.
As discussed in Section 2.4.2, we wish to have closed-form expressions for the objective function values (for given member-state policies and a given federal RI scheme). To obtain optimal federal RI in Sections 2.5.1 and 2.5.2, we find federal and member-state policies that maximize the unconditional mean of the objective function. When accounting for the transition, in Sections 2.5.3 and 2.5.4, we find policies that maximize the conditional expectation of the objective. In these cases, we condition on the initial state being the non-stochastic steady state implied by our calibrated model (Table 2.4).
The next section describes how we find the optimal federal RI scheme. We describe this for the case in which the member state can choose policies once and we account for the transition. The other cases are handled analogously.
2.A.1. Finding the optimal federal RI scheme. The algorithm proceeds as follows.
(1) Fix Chebyshev nodes for the federal RI scheme as described in Section 2.4.2. Keep these nodes fixed throughout.
(2) The goal is to find values $\phi = [\phi_1, \phi_2, \phi_3, \phi_4]' \in \mathbb{R}^4$ and $\tau_F$ that solve the federal government's problem (2.21), anticipating the member states' policy choices. The values of $\phi$ induce the payout function $B_F(\cdot;\cdot)$.
(3) Find $\phi$ by numerical optimization. For each trial $\phi$, evaluate the federal government's objective function using either unconditional expectations (Sections 2.5.1 to 2.5.2) or conditional expectations for a given initial state (Sections 2.5.3 and 2.5.4).
(4) In order to evaluate the federal planner's objective function for a given $\phi$, make sure that the scheme is feasible in light of the member states' optimal response. In particular, a federal RI policy has to be self-financing in light of the member states' responses; recall (2.19). For fixed $\phi$, we iterate as follows.
(a) Mark the iterations by $(n)$. Set $n = 0$. Start from an initial value $\tau_F^{(-1)}$.
(b) Set $\tau_F = \tau_F^{(n-1)}$. For the given federal RI policy $\phi$ and $\tau_F = \tau_F^{(n-1)}$, let the member state solve (2.23).
(c) Label the maximizing member-state policies $\{\tau_v^{i,(n)}, \tau_\xi^{i,(n)}, b^{i,(n)}\}$. These induce a law of motion $\mu_0^{(n)}$ and a value for the objective function of $\int W_0^{(n)} \, d\mu_0^{(n)}$.
(d) For the given member-state policies $\{\tau_v^{i,(n)}, \tau_\xi^{i,(n)}, b^{i,(n)}\}$ and the given federal policy $\phi$, find a value $\tau_F^{(n)}$ that solves the federal RI scheme's financing constraint (2.19) for these policies and the induced dynamics of the member-state economies.
(e) If $\tau_F^{(n)}$ is not sufficiently close to $\tau_F^{(n-1)}$, set $n = n + 1$ and go to step 4b. Else, set $\tau_F = \tau_F^{(n)}$ and go to step 5.
(5) The federal policy implied by $\phi$ and $\tau_F$ is feasible. Set $\int W_0 \, d\mu_0 = \int W_0^{(n)} \, d\mu_0^{(n)}$.
(6) Continue the numerical optimization started in step 3 until the maximum of the federal government's objective is found.
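The inner fixed-point step (4a)-(4e) can be sketched as follows. Here `member_state_response` and `balanced_tau_F` are hypothetical stand-ins for the routines that solve (2.23) and the financing constraint (2.19); their names and signatures are illustrative, not the paper's actual code.

```python
def solve_federal_scheme(phi, member_state_response, balanced_tau_F,
                         tau_F_init=0.0, tol=1e-8, max_iter=500):
    """Inner fixed point (step 4): find a self-financing tau_F for given phi.

    member_state_response(phi, tau_F) -> (policies, welfare)  # solves (2.23)
    balanced_tau_F(phi, policies)     -> tau_F'               # solves (2.19)
    """
    tau_F = tau_F_init                                       # step 4a
    for _ in range(max_iter):
        policies, welfare = member_state_response(phi, tau_F)  # steps 4b-4c
        tau_F_new = balanced_tau_F(phi, policies)              # step 4d
        if abs(tau_F_new - tau_F) < tol:                       # step 4e
            return tau_F_new, welfare                          # feasible (step 5)
        tau_F = tau_F_new
    raise RuntimeError("tau_F iteration did not converge")
```

The outer optimization over $\phi$ (steps 3 and 6) would then maximize the returned welfare value with any derivative-free optimizer.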
2.A.2. Welfare gains from federal UI. The consumption-equivalent welfare gains reported in Table 2.5 are computed as follows. For each policy setting, compute welfare under optimal federal RI as detailed in Section 2.A.1. This gives $\int W_0 \, d\mu_0$. Then follow the same program as in Section 2.A.1, but setting $\phi = 0$ (so that $B_F(\cdot;\cdot) = 0$) and $\tau_F = 0$; that is, solve for optimal member-state policy in autarky. Denote the induced welfare by $\int W_0^{aut} \, d\mu_0^{aut}$. The consumption-equivalent welfare gain is computed as the value of the direct transfer, received every period, that would make households indifferent between living in autarky and living under the federal RI scheme.
The welfare gain is expressed as a percentage of the steady-state consumption level.
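The indifference condition can be solved numerically as a root-finding problem. A minimal sketch by bisection, assuming a hypothetical model routine `welfare_autarky(delta)` that returns autarky welfare when households receive a per-period transfer `delta` (welfare increasing in the transfer):

```python
def consumption_equivalent(W_fed, welfare_autarky, lo=0.0, hi=1.0, tol=1e-12):
    """Transfer delta such that autarky welfare with the transfer equals
    welfare under the federal RI scheme (bisection on a monotone function)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if welfare_autarky(mid) < W_fed:
            lo = mid        # transfer still too small
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The reported number is then `100 * delta / c_ss`, i.e., the transfer in percent of steady-state consumption.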
2.B. Calculating fourth-order-accurate unconditional first moments
In this section we consider a pruned perturbation solution to a dynamic stochastic general equilibrium (DSGE) model. We derive closed-form solutions for fourth-order-accurate unconditional first moments of the model's endogenous variables. The exposition builds heavily on Andreasen et al. (2018), who provide the formulae up to third order of accuracy. Our sole contribution is to provide the corresponding fourth-order formulae.
2.B.1. Preliminaries. We consider the following class of DSGE models. Let $y_t \in \mathbb{R}^{n_y}$ be a vector of control variables and $x_t \in \mathbb{R}^{n_x+1}$ a vector of state variables, which includes a perturbation parameter $\sigma \ge 0$. Consider a perturbation solution to a DSGE model around the steady state $x_{SS} = 0$. The exact solution to the model is given by
$$ y_t = g(x_t), $$
$$ x_{t+1} = h(x_t) + \sigma\eta\epsilon_{t+1}, \qquad (2.41) $$
where $\epsilon_{t+1}$ follows an $n_\epsilon$-dimensional multivariate normal distribution and is independently and identically distributed in each period. Solving a DSGE model amounts to finding the unknown functions $g$ and $h$.
For most DSGE models, the full solution to system (2.41) cannot be found explicitly. The perturbation solution approximates the true solution using a Taylor series expansion around the steady state, $x_t = x_{t+1} = 0$. Up to fourth order, we have
$$ x_{t+1} = h_x x_t + \frac{1}{2} h_{xx} x_t^{\otimes 2} + \frac{1}{6} h_{xxx} x_t^{\otimes 3} + \frac{1}{24} h_{xxxx} x_t^{\otimes 4} + \sigma\eta\epsilon_{t+1}, \qquad (2.42) $$
where $h_x, h_{xx}, \ldots$ denote the first-, second-, etc., order derivatives of the function $h$ with respect to the vector $x$. The superscript $\otimes n$ represents the $n$-th Kronecker power, i.e., $x^{\otimes n} = \underbrace{x \otimes x \otimes \cdots \otimes x}_{n\ \text{times}}$.
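For concreteness, the Kronecker-power notation and the update (2.42) can be evaluated numerically as follows. This is a sketch; it assumes the derivative arrays $h_x, h_{xx}, \ldots$ are stored as matrices of sizes $n_x \times n_x$, $n_x \times n_x^2$, etc., matching the flattened Kronecker convention of the text.

```python
import numpy as np

def kron_pow(x, n):
    """n-th Kronecker power x^{(⊗n)} of a vector x."""
    out = x
    for _ in range(n - 1):
        out = np.kron(out, x)
    return out

def step_fourth_order(x, eps, hx, hxx, hxxx, hxxxx, sigma, eta):
    """One step of the fourth-order Taylor law of motion (2.42)."""
    return (hx @ x
            + 0.5 * hxx @ kron_pow(x, 2)
            + hxxx @ kron_pow(x, 3) / 6.0
            + hxxxx @ kron_pow(x, 4) / 24.0
            + sigma * (eta @ eps))
```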
However, the system (2.42) may display explosive dynamics and may not have any finite unconditional moments (Andreasen et al. 2018). The solution to this problem suggested by Kim et al. (2008) is to prune the state space of the approximated solution so as to remove explosive paths. As shown by Andreasen et al. (2018), the pruned fourth-order approximation to the perturbation solution reads
$$ x_{t+1} = x_{t+1}^f + x_{t+1}^s + x_{t+1}^{rd} + x_{t+1}^{4th}, $$
where
$$ x_{t+1}^f = h_x x_t^f + \sigma\eta\epsilon_{t+1}, \qquad x_{t+1}^s = h_x x_t^s + \frac{1}{2} h_{xx} \left(x_t^f\right)^{\otimes 2}, $$
$$ x_{t+1}^{rd} = h_x x_t^{rd} + \frac{1}{2} h_{xx} \left(2\left(x_t^f \otimes x_t^s\right)\right) + \frac{1}{6} h_{xxx} \left(x_t^f\right)^{\otimes 3}, \quad \text{and} $$
$$ x_{t+1}^{4th} = h_x x_t^{4th} + \frac{1}{2} h_{xx} \left(2\left(x_t^f \otimes x_t^{rd}\right) + \left(x_t^s\right)^{\otimes 2}\right) + \frac{1}{6} h_{xxx} \left(3\left(x_t^f\right)^{\otimes 2} \otimes x_t^s\right) + \frac{1}{24} h_{xxxx} \left(x_t^f\right)^{\otimes 4}. $$
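One period of the pruned recursion can be advanced directly from these four components (a sketch under the same storage conventions as above; not the authors' code):

```python
import numpy as np

def pruned_step(xf, xs, xrd, x4, eps, hx, hxx, hxxx, hxxxx, sigma, eta):
    """Advance the pruned fourth-order state (x^f, x^s, x^rd, x^4th) one period."""
    k2 = np.kron(xf, xf)                                   # (x^f)^{⊗2}
    xf_new = hx @ xf + sigma * (eta @ eps)
    xs_new = hx @ xs + 0.5 * hxx @ k2
    xrd_new = (hx @ xrd
               + 0.5 * hxx @ (2.0 * np.kron(xf, xs))
               + hxxx @ np.kron(k2, xf) / 6.0)
    x4_new = (hx @ x4
              + 0.5 * hxx @ (2.0 * np.kron(xf, xrd) + np.kron(xs, xs))
              + hxxx @ (3.0 * np.kron(k2, xs)) / 6.0
              + hxxxx @ np.kron(k2, k2) / 24.0)
    return xf_new, xs_new, xrd_new, x4_new
```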
Note that if the shock is drawn from the standard normal distribution, as is the case in the model developed in the current paper, then
$$ E\left(\epsilon_t^{\otimes 2}\right) = \mathrm{vec}(I_{n_\epsilon}), \quad E\left(\epsilon_t^{\otimes 3}\right) = 0, \quad \text{and} \quad E\left(\epsilon_t^{\otimes 5}\right) = 0. $$
Let $M_4 \equiv E\left(\epsilon_t^{\otimes 4}\right)$ denote the fourth moment of the standard multivariate normal distribution.
In the course of the proofs we will make extensive use of the following well-known properties of the Kronecker product (see, for example, Magnus and Neudecker 1999):
$$ A \otimes (B + C) = A \otimes B + A \otimes C, \qquad (A + B) \otimes C = A \otimes C + B \otimes C, $$
$$ (kA) \otimes B = A \otimes (kB) = k(A \otimes B), \qquad (A \otimes B) \otimes C = A \otimes (B \otimes C), $$
$$ (AC) \otimes (BD) = (A \otimes B)(C \otimes D), \qquad \mathrm{vec}(ABC) = (C' \otimes A)\,\mathrm{vec}(B). $$
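These identities are easy to spot-check numerically with small random matrices of conformable dimensions (note that vec stacks columns, hence `order='F'`):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(2, 3)), rng.normal(size=(4, 5))
C, D = rng.normal(size=(3, 2)), rng.normal(size=(5, 4))

# (AC) ⊗ (BD) = (A ⊗ B)(C ⊗ D)
assert np.allclose(np.kron(A @ C, B @ D), np.kron(A, B) @ np.kron(C, D))

# (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C)
assert np.allclose(np.kron(np.kron(A, B), C), np.kron(A, np.kron(B, C)))

# vec(ABC) = (C' ⊗ A) vec(B)
vec = lambda M: M.reshape(-1, order="F")
A2, B2, C2 = rng.normal(size=(2, 3)), rng.normal(size=(3, 4)), rng.normal(size=(4, 5))
assert np.allclose(vec(A2 @ B2 @ C2), np.kron(C2.T, A2) @ vec(B2))
```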
We say that a matrix $K_{m,n}$ of size $mn \times mn$ is a commutation matrix if it has the following property: let $A$ be an $(m \times n)$ matrix and $B$ a $(p \times q)$ matrix; then
$$ K_{m,p}(A \otimes B)K_{q,n} = B \otimes A. $$
That is, the commutation matrix reverses the order of the Kronecker product. The commutation matrix $K_{n,m}$ can be defined explicitly as
$$ K_{n,m} = \sum_{i=1}^{m} \sum_{j=1}^{n} \left( \left(e_i^m (e_j^n)'\right) \otimes \left(e_j^n (e_i^m)'\right) \right), $$
where $e_i^m$ is the $i$-th unit column vector of order $m$. For any commutation matrix, $K_{p,q} = K_{q,p}^{-1}$.
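The explicit construction and the two properties above can be checked numerically. Under this convention, `commutation(p, q)` maps $a \otimes b$ to $b \otimes a$ for $a \in \mathbb{R}^p$, $b \in \mathbb{R}^q$ (a sketch):

```python
import numpy as np

def commutation(p, q):
    """Commutation matrix: K @ kron(a, b) == kron(b, a) for a in R^p, b in R^q."""
    K = np.zeros((p * q, p * q))
    for i in range(p):
        for j in range(q):
            K[j * p + i, i * q + j] = 1.0   # entry (i*q+j) of a⊗b goes to (j*p+i)
    return K

rng = np.random.default_rng(1)
m, n, p, q = 2, 3, 4, 5
A, B = rng.normal(size=(m, n)), rng.normal(size=(p, q))

# defining property: K_{m,p} (A ⊗ B) K_{q,n} = B ⊗ A
assert np.allclose(commutation(m, p) @ np.kron(A, B) @ commutation(q, n),
                   np.kron(B, A))
# inverse property: K_{p,q}^{-1} = K_{q,p}
assert np.allclose(commutation(p, q) @ commutation(q, p), np.eye(p * q))
```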
2.B.2. Analytical expressions for the first moments. We are ready to derive formulas for the unconditional first moments of the endogenous variables. Our goal is to characterize the expression
$$ E_0 x_t = E_0 x_t^f + E_0 x_t^s + E_0 x_t^{rd} + E_0 x_t^{4th}. $$
Andreasen et al. (2018) showed that $E_0 x_t^f = E_0 x_t^{rd} = 0$. The first equality is the certainty equivalence of the linear approximation. The second equality, $E_0 x_t^{rd} = 0$, results from the symmetry of the normal distribution (i.e., skewness is zero).
We write the perturbation parameter $\sigma$ as a separate variable, not included in the state $x_t$. Note that $h_\sigma = h_{x\sigma} = h_{xx\sigma} = h_{xxx\sigma} = h_{xxxx\sigma} = 0$ (see, for example, Theorem 7 in Jin and Judd (2002)).
For completeness, we derive expressions for the unconditional first moments for the solution approximations of all orders from one up to four. Derivations for orders of approximation up to three are based on Andreasen et al. (2018). The formulas for the fourth order are our contribution.
2.B.2.1. First and second order of approximation. We start with the formulae accurate up to second order. We have
$$ x_{t+1}^f = h_x x_t^f + \sigma\eta\epsilon_{t+1}, $$
$$ x_{t+1}^s = h_x x_t^s + \frac{1}{2} h_{xx} \left(x_t^f\right)^{\otimes 2} + \frac{1}{2} h_{\sigma\sigma} \sigma^2, $$
$$ E x_t^f = 0, $$
$$ E\left(x_t^f\right)^{\otimes 2} = \left(I_{n_x^2} - h_x \otimes h_x\right)^{-1} \left(\sigma^2 \eta \otimes \eta\right) \mathrm{vec}(I), $$
$$ E x_t^s = \left(I_{n_x} - h_x\right)^{-1} \left[ \frac{1}{2} h_{xx} \left(I_{n_x^2} - h_x \otimes h_x\right)^{-1} \left(\sigma^2 \eta \otimes \eta\right) \mathrm{vec}(I) + \frac{1}{2} h_{\sigma\sigma} \sigma^2 \right]. $$
Proof.
$$ x_{t+1}^f = h_x x_t^f + \sigma\eta\epsilon_{t+1}, $$
$$ E x_{t+1}^f = h_x E x_t^f \ \text{(stationarity)} \quad \Rightarrow \quad \left(I - h_x\right) E x_t^f = 0 \quad \Rightarrow \quad E x_t^f = 0. $$
Next,
$$ \left(x_{t+1}^f\right)^{\otimes 2} = \left(h_x x_t^f + \sigma\eta\epsilon_{t+1}\right) \otimes \left(h_x x_t^f + \sigma\eta\epsilon_{t+1}\right) $$
$$ = \left(h_x x_t^f\right) \otimes \left(h_x x_t^f\right) + \left(h_x x_t^f\right) \otimes \left(\sigma\eta\epsilon_{t+1}\right) + \left(\sigma\eta\epsilon_{t+1}\right) \otimes \left(h_x x_t^f\right) + \left(\sigma\eta\epsilon_{t+1}\right) \otimes \left(\sigma\eta\epsilon_{t+1}\right), $$
$$ E\left(x_t^f\right)^{\otimes 2} = \left(h_x \otimes h_x\right) E\left(x_t^f\right)^{\otimes 2} + \left(\sigma^2 \eta \otimes \eta\right) E\left(\epsilon_{t+1}\right)^{\otimes 2}, $$
$$ E\left(x_t^f\right)^{\otimes 2} = \left(I - h_x \otimes h_x\right)^{-1} \left(\sigma^2 \eta \otimes \eta\right) \mathrm{vec}(I). $$
Finally,
$$ x_{t+1}^s = h_x x_t^s + \frac{1}{2} h_{xx} \left(x_t^f\right)^{\otimes 2} + \frac{1}{2} h_{\sigma\sigma} \sigma^2, $$
$$ E x_t^s = h_x E x_t^s + \frac{1}{2} h_{xx} E\left(x_t^f\right)^{\otimes 2} + \frac{1}{2} h_{\sigma\sigma} \sigma^2, $$
$$ E x_t^s = \left(I - h_x\right)^{-1} \left[ \frac{1}{2} h_{xx} \left(I - h_x \otimes h_x\right)^{-1} \left(\sigma^2 \eta \otimes \eta\right) \mathrm{vec}(I) + \frac{1}{2} h_{\sigma\sigma} \sigma^2 \right]. $$
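The second-order moment formulas above reduce to two linear solves (a sketch; `hss` denotes $h_{\sigma\sigma}$ and shapes follow the flattened Kronecker convention used in this appendix):

```python
import numpy as np

def second_order_means(hx, hxx, hss, eta, sigma):
    """E[(x^f)^{⊗2}] and E[x^s] from the second-order formulas above."""
    nx, ne = eta.shape
    vecI = np.eye(ne).reshape(-1, order="F")
    # E (x^f)^{⊗2} = (I - hx⊗hx)^{-1} (sigma^2 eta⊗eta) vec(I)
    Exf2 = np.linalg.solve(np.eye(nx**2) - np.kron(hx, hx),
                           sigma**2 * np.kron(eta, eta) @ vecI)
    # E x^s = (I - hx)^{-1} [ 1/2 hxx E(x^f)^{⊗2} + 1/2 hss sigma^2 ]
    Exs = np.linalg.solve(np.eye(nx) - hx,
                          0.5 * hxx @ Exf2 + 0.5 * hss * sigma**2)
    return Exf2, Exs
```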
2.B.2.2. Third order. Next, we tackle the third-order approximation. We replicate the results in Andreasen et al. (2018) in the following:
$$ x_{t+1}^{rd} = h_x x_t^{rd} + \frac{1}{2} h_{xx} \left(2\left(x_t^f \otimes x_t^s\right)\right) + \frac{1}{6} h_{xxx} \left(x_t^f\right)^{\otimes 3} + 3 \cdot \frac{1}{6} h_{x\sigma\sigma} x_t^f \sigma^2 + \frac{1}{6} h_{\sigma\sigma\sigma} \sigma^3, $$
$$ E\left(x_{t+1}^f \otimes x_{t+1}^s\right) = \left(I - h_x^{\otimes 2}\right)^{-1} \left(h_x \otimes \frac{1}{2} h_{xx}\right) E\left(x_t^f\right)^{\otimes 3}, $$
$$ E\left(x_{t+1}^f\right)^{\otimes 3} = 0, \qquad E x_{t+1}^{rd} = 0. $$
Proof.
$$ x_{t+1}^f \otimes x_{t+1}^s = \left(h_x x_t^f + \sigma\eta\epsilon_{t+1}\right) \otimes \left(h_x x_t^s + \frac{1}{2} h_{xx} \left(x_t^f\right)^{\otimes 2} + \frac{1}{2} h_{\sigma\sigma} \sigma^2\right) $$
$$ = \left(h_x \otimes h_x\right)\left(x_t^f \otimes x_t^s\right) + \left(h_x \otimes \frac{1}{2} h_{xx}\right)\left(x_t^f\right)^{\otimes 3} + \left(h_x x_t^f\right) \otimes \frac{1}{2} h_{\sigma\sigma} \sigma^2 $$
$$ \quad + \left(\sigma\eta \otimes h_x\right)\left(\epsilon_{t+1} \otimes x_t^s\right) + \left(\sigma\eta \otimes \frac{1}{2} h_{xx}\right)\left(\epsilon_{t+1} \otimes \left(x_t^f\right)^{\otimes 2}\right) + \left(\eta\epsilon_{t+1}\right) \otimes \frac{1}{2} h_{\sigma\sigma} \sigma^3. $$
Taking expectations, all terms involving a single $\epsilon_{t+1}$ or a single $x_t^f$ vanish, so
$$ E\left(x_t^f \otimes x_t^s\right) = \left(I - h_x^{\otimes 2}\right)^{-1} \left(h_x \otimes \frac{1}{2} h_{xx}\right) E\left(x_t^f\right)^{\otimes 3}. $$
Next,
$$ \left(x_{t+1}^f\right)^{\otimes 3} = \left(h_x x_t^f + \sigma\eta\epsilon_{t+1}\right) \otimes \left(x_{t+1}^f\right)^{\otimes 2} $$
$$ = \left(h_x x_t^f + \sigma\eta\epsilon_{t+1}\right) \otimes \left[ \left(h_x \otimes h_x\right)\left(x_t^f\right)^{\otimes 2} + \left(\sigma\eta \otimes \sigma\eta\right)\epsilon_{t+1}^{\otimes 2} + \left(\sigma\eta \otimes h_x\right)\left(\epsilon_{t+1} \otimes x_t^f\right) + \left(h_x \otimes \sigma\eta\right)\left(x_t^f \otimes \epsilon_{t+1}\right) \right] $$
$$ = \left(h_x \otimes h_x \otimes h_x\right)\left(x_t^f\right)^{\otimes 3} + \left(\sigma\eta\epsilon_{t+1}\right) \otimes \left(h_x \otimes h_x\right)\left(x_t^f\right)^{\otimes 2} + \left(h_x x_t^f\right) \otimes \left(\sigma\eta \otimes \sigma\eta\right)\epsilon_{t+1}^{\otimes 2} + \left(\sigma\eta\epsilon_{t+1}\right) \otimes \left(\sigma\eta \otimes \sigma\eta\right)\epsilon_{t+1}^{\otimes 2} $$
$$ \quad + \left(h_x x_t^f\right) \otimes \left(\sigma\eta \otimes h_x\right)\left(\epsilon_{t+1} \otimes x_t^f\right) + \left(\sigma\eta\epsilon_{t+1}\right) \otimes \left(\sigma\eta \otimes h_x\right)\left(\epsilon_{t+1} \otimes x_t^f\right) + \left(h_x x_t^f\right) \otimes \left(h_x \otimes \sigma\eta\right)\left(x_t^f \otimes \epsilon_{t+1}\right) + \left(\sigma\eta\epsilon_{t+1}\right) \otimes \left(h_x \otimes \sigma\eta\right)\left(x_t^f \otimes \epsilon_{t+1}\right) $$
$$ = \left(h_x \otimes h_x \otimes h_x\right)\left(x_t^f\right)^{\otimes 3} + \text{terms zero in expectation}. $$
The last equality follows from the fact that $x_t^f$ and $\epsilon_{t+1}$ are independent; therefore $E\left(x_t^f \otimes \epsilon_{t+1}\right) = E x_t^f \otimes E\epsilon_{t+1} = 0$, and $E\left[\left(h_x x_t^f\right) \otimes \left(\sigma\eta \otimes \sigma\eta\right)\epsilon_{t+1}^{\otimes 2}\right] = 0$ since $E\left(h_x x_t^f\right) = 0$.
Therefore,
$$ E\left(x_{t+1}^f\right)^{\otimes 3} = \left(h_x \otimes h_x \otimes h_x\right) E\left(x_t^f\right)^{\otimes 3}, $$
and by stationarity $\left(I - h_x^{\otimes 3}\right) E\left(x_t^f\right)^{\otimes 3} = 0$, hence $E\left(x_t^f\right)^{\otimes 3} = 0$.
Finally,
$$ x_{t+1}^{rd} = h_x x_t^{rd} + \frac{1}{2} h_{xx} \left(2\left(x_t^f \otimes x_t^s\right)\right) + \frac{1}{6} h_{xxx} \left(x_t^f\right)^{\otimes 3} + 3 \cdot \frac{1}{6} h_{x\sigma\sigma} x_t^f \sigma^2 + \frac{1}{6} h_{\sigma\sigma\sigma} \sigma^3, $$
$$ E x_{t+1}^{rd} = h_x E x_t^{rd} + \frac{1}{6} h_{\sigma\sigma\sigma} \sigma^3, \qquad \left(I - h_x\right) E x_t^{rd} = 0, $$
the last line following since $h_{\sigma\sigma\sigma} = 0$ for a symmetric distribution (see Andreasen, 2012).
2.B.2.3. Fourth order. Finally, we derive the solutions accurate up to fourth order.
We have that the fourth-order-accurate law of motion of the states is given by
$$ x_{t+1}^{4th} = h_x x_t^{4th} + \frac{1}{2} h_{xx} \left(2\left(x_t^f \otimes x_t^{rd}\right) + \left(x_t^s\right)^{\otimes 2}\right) + \frac{1}{6} h_{xxx} \left(3\left(x_t^f\right)^{\otimes 2} \otimes x_t^s\right) + \frac{1}{24} h_{xxxx} \left(x_t^f\right)^{\otimes 4} $$
$$ \quad + \frac{3}{6} h_{\sigma\sigma x} \sigma^2 x_t^s + 6 \cdot \frac{1}{24} h_{\sigma\sigma xx} \sigma^2 \left(x_t^f\right)^{\otimes 2} + 4 \cdot \frac{1}{24} h_{\sigma\sigma\sigma x} \sigma^3 x_t^f + \frac{1}{24} h_{\sigma\sigma\sigma\sigma} \sigma^4. $$
The unconditional mean of $x_t^{4th}$ is given by
$$ E x_t^{4th} = \left(I_{n_x} - h_x\right)^{-1} \Big[ \frac{1}{2} h_{xx} E\left(2\left(x_t^f \otimes x_t^{rd}\right) + \left(x_t^s\right)^{\otimes 2}\right) + \frac{1}{6} h_{xxx} E\left(3\left(x_t^f\right)^{\otimes 2} \otimes x_t^s\right) $$
$$ \quad + \frac{1}{24} h_{xxxx} E\left(x_t^f\right)^{\otimes 4} + \frac{3}{6} h_{\sigma\sigma x} \sigma^2 E x_t^s + 6 \cdot \frac{1}{24} h_{\sigma\sigma xx} \sigma^2 E\left(x_t^f\right)^{\otimes 2} + \frac{1}{24} h_{\sigma\sigma\sigma\sigma} \sigma^4 \Big]. $$
(The term involving $h_{\sigma\sigma\sigma x}$ drops out since $E x_t^f = 0$.)
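Given the component expectations derived in the remainder of this section, $E x_t^{4th}$ is a single linear solve (a sketch; the arguments named `E_*` are the precomputed moments, and `hssx`, `hssxx`, `hssss` denote $h_{\sigma\sigma x}$, $h_{\sigma\sigma xx}$, $h_{\sigma\sigma\sigma\sigma}$):

```python
import numpy as np

def mean_x4th(hx, hxx, hxxx, hxxxx, hssx, hssxx, hssss, sigma,
              E_f_rd, E_s2, E_f2_s, E_f4, E_s, E_f2):
    """E[x^{4th}] = (I - hx)^{-1} [ ... ] per the formula above."""
    nx = hx.shape[0]
    rhs = (0.5 * hxx @ (2.0 * E_f_rd + E_s2)
           + hxxx @ (3.0 * E_f2_s) / 6.0
           + hxxxx @ E_f4 / 24.0
           + (3.0 / 6.0) * sigma**2 * (hssx @ E_s)
           + (6.0 / 24.0) * sigma**2 * (hssxx @ E_f2)
           + hssss * sigma**4 / 24.0)
    return np.linalg.solve(np.eye(nx) - hx, rhs)
```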
The respective terms for each component of the fourth-order pruned state space are listed in the following.
$$ E\left(x_{t+1}^f\right)^{\otimes 4} = \sigma^2 \left(I_{n_x^4} - h_x^{\otimes 4}\right)^{-1} \Big[ \sigma^2 \eta^{\otimes 4} M_4 + \Big( \left(h_x^{\otimes 2} \otimes \eta^{\otimes 2}\right) K_{n_\epsilon^2, n_x^2} $$
$$ \quad + \left(h_x \otimes \eta \otimes h_x \otimes \eta\right)\left(I_{n_x} \otimes K_{n_x,n_\epsilon} \otimes I_{n_\epsilon}\right) K_{n_\epsilon^2, n_x^2} + \left(h_x \otimes \eta \otimes \eta \otimes h_x\right)\left(I_{n_x} \otimes K_{n_x,n_\epsilon^2}\right) K_{n_\epsilon^2, n_x^2} $$
$$ \quad + \left(\eta \otimes h_x \otimes h_x \otimes \eta\right)\left(I_{n_\epsilon} \otimes K_{n_\epsilon,n_x^2}\right) + \left(\eta \otimes h_x \otimes \eta \otimes h_x\right)\left(I_{n_\epsilon} \otimes K_{n_\epsilon,n_x} \otimes I_{n_x}\right) + \left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right) \Big) \left(\mathrm{vec}(I_{n_\epsilon}) \otimes E\left(x_t^f\right)^{\otimes 2}\right) \Big]. $$
This we can calculate given $E\left(x_t^f\right)^{\otimes 2} = \sigma^2 \left(I - h_x^{\otimes 2}\right)^{-1} \left(\eta^{\otimes 2}\right) \mathrm{vec}(I_{n_\epsilon})$ from further above.
Regarding the remaining vectors that span the fourth-order pruned state space, we have
$$ E\left[\left(x_{t+1}^s\right)^{\otimes 2}\right] = \left(I_{n_x^2} - h_x^{\otimes 2}\right)^{-1} \Big( \frac{1}{2}\left(K_{n_x,n_x} + I_{n_x^2}\right)\left(h_{xx} \otimes h_x\right) E\left(\left(x_t^f\right)^{\otimes 2} \otimes x_t^s\right) + \frac{1}{4} h_{xx}^{\otimes 2} E\left(x_t^f\right)^{\otimes 4} $$
$$ \quad + \left(K_{n_x,n_x} + I_{n_x^2}\right)\left[\frac{1}{2}\left(h_x \otimes h_{\sigma\sigma}\right)\left(\sigma^2 E x_t^s\right) + \frac{1}{4}\left(h_{xx} \otimes h_{\sigma\sigma}\right)\left(\sigma^2 E\left(x_t^f\right)^{\otimes 2}\right)\right] + \frac{1}{4} h_{\sigma\sigma} \otimes h_{\sigma\sigma} \sigma^4 \Big), $$
$$ E\left[\left(x_t^f\right)^{\otimes 2} \otimes x_t^s\right] = \left(I_{n_x^3} - h_x^{\otimes 3}\right)^{-1} \Big( \left(\sigma^2 \eta^{\otimes 2} \otimes h_x\right)\left(E\left[\epsilon_{t+1}^{\otimes 2}\right] \otimes E x_t^s\right) + \frac{1}{2}\left(h_x^{\otimes 2} \otimes h_{xx}\right) E\left(x_t^f\right)^{\otimes 4} $$
$$ \quad + \frac{1}{2}\left(\sigma^2 \eta^{\otimes 2} \otimes h_{xx}\right)\left(\mathrm{vec}(I_{n_\epsilon}) \otimes E\left(x_t^f\right)^{\otimes 2}\right) + \frac{1}{2}\left(h_x^{\otimes 2} \otimes h_{\sigma\sigma}\right) \sigma^2 E\left[\left(x_t^f\right)^{\otimes 2}\right] + \frac{1}{2}\left(\eta^{\otimes 2} \otimes h_{\sigma\sigma}\right) \sigma^4 E \epsilon_{t+1}^{\otimes 2} \Big), $$
$$ E\left[x_{t+1}^f \otimes x_{t+1}^{rd}\right] = \left(I_{n_x^2} - h_x^{\otimes 2}\right)^{-1} \Big( \left(h_x \otimes h_{xx}\right) E\left(\left(x_t^f\right)^{\otimes 2} \otimes x_t^s\right) + \frac{1}{6}\left(h_x \otimes h_{xxx}\right) E\left(x_t^f\right)^{\otimes 4} + \frac{3}{6}\left(h_x \otimes h_{x\sigma\sigma}\right) \sigma^2 E\left[\left(x_t^f\right)^{\otimes 2}\right] \Big). $$
Proof. We derive each of the terms.
$$ \left(x_{t+1}^f\right)^{\otimes 4} = \left(h_x x_t^f + \sigma\eta\epsilon_{t+1}\right) \otimes \left(x_{t+1}^f\right)^{\otimes 3} = \left(h_x x_t^f\right) \otimes \left(x_{t+1}^f\right)^{\otimes 3} + \left(\sigma\eta\epsilon_{t+1}\right) \otimes \left(x_{t+1}^f\right)^{\otimes 3}. \qquad (2.43) $$
We tackle separately each of the summands in (2.43). For the first summand,
$$ \left(h_x x_t^f\right) \otimes \left(x_{t+1}^f\right)^{\otimes 3} = \left(h_x x_t^f\right) \otimes \Big( h_x^{\otimes 3}\left(x_t^f\right)^{\otimes 3} + \sigma\left(h_x^{\otimes 2} \otimes \eta\right)\left(\left(x_t^f\right)^{\otimes 2} \otimes \epsilon_{t+1}\right) + \sigma\left(h_x \otimes \eta \otimes h_x\right)\left(x_t^f \otimes \epsilon_{t+1} \otimes x_t^f\right) $$
$$ \quad + \sigma^2\left(h_x \otimes \eta^{\otimes 2}\right)\left(x_t^f \otimes \epsilon_{t+1}^{\otimes 2}\right) + \sigma\left(\eta \otimes h_x^{\otimes 2}\right)\left(\epsilon_{t+1} \otimes \left(x_t^f\right)^{\otimes 2}\right) + \sigma^2\left(\eta \otimes h_x \otimes \eta\right)\left(\epsilon_{t+1} \otimes x_t^f \otimes \epsilon_{t+1}\right) + \sigma^2\left(\eta^{\otimes 2} \otimes h_x\right)\left(\epsilon_{t+1}^{\otimes 2} \otimes x_t^f\right) + \sigma^3 \eta^{\otimes 3} \epsilon_{t+1}^{\otimes 3} \Big), $$
so that
$$ \left(h_x x_t^f\right) \otimes \left(x_{t+1}^f\right)^{\otimes 3} = h_x^{\otimes 4}\left(x_t^f\right)^{\otimes 4} + \sigma^2\left(h_x^{\otimes 2} \otimes \eta^{\otimes 2}\right)\left(\left(x_t^f\right)^{\otimes 2} \otimes \epsilon_{t+1}^{\otimes 2}\right) $$
$$ \quad + \sigma^2\left(h_x \otimes \eta \otimes h_x \otimes \eta\right)\left(x_t^f \otimes \epsilon_{t+1} \otimes x_t^f \otimes \epsilon_{t+1}\right) + \sigma^2\left(h_x \otimes \eta^{\otimes 2} \otimes h_x\right)\left(x_t^f \otimes \epsilon_{t+1}^{\otimes 2} \otimes x_t^f\right) + \text{terms zero in expectation}. \qquad (2.44) $$
The last equality follows since $\epsilon_{t+1} \perp x_t^f$, so that every remaining term has zero expectation; for example, $E\left(\left(x_t^f\right)^{\otimes 3} \otimes \epsilon_{t+1}\right) = E\left(x_t^f\right)^{\otimes 3} \otimes E\epsilon_{t+1} = 0$.
We use the commutation matrix $K_{n_x,n_\epsilon}$ to change the order of $\otimes$. For instance,
$$ x_t^f \otimes \epsilon_{t+1} \otimes \epsilon_{t+1} \otimes x_t^f = x_t^f \otimes \left(I_{n_\epsilon} \otimes K_{n_x,n_\epsilon}\right)\left(\epsilon_{t+1} \otimes x_t^f \otimes \epsilon_{t+1}\right) $$
$$ = x_t^f \otimes \left(I_{n_\epsilon} \otimes K_{n_x,n_\epsilon}\right)\left(K_{n_x,n_\epsilon} \otimes I_{n_\epsilon}\right)\left(x_t^f \otimes \epsilon_{t+1}^{\otimes 2}\right) = \left[I_{n_x} \otimes \left(I_{n_\epsilon} \otimes K_{n_x,n_\epsilon}\right)\left(K_{n_x,n_\epsilon} \otimes I_{n_\epsilon}\right)\right]\left(\left(x_t^f\right)^{\otimes 2} \otimes \epsilon_{t+1}^{\otimes 2}\right). $$
This can be simplified further using $\left(I_r \otimes K_{m,s}\right)\left(K_{m,s} \otimes I_s\right) = K_{m,rs}$. Thus
$$ E\left[x_t^f \otimes \epsilon_{t+1} \otimes \epsilon_{t+1} \otimes x_t^f\right] = \left(I_{n_x} \otimes K_{n_x,n_\epsilon^2}\right) E\left(\left(x_t^f\right)^{\otimes 2} \otimes \epsilon_{t+1}^{\otimes 2}\right). $$
Similarly,
$$ E\left(x_t^f \otimes \epsilon_{t+1} \otimes x_t^f \otimes \epsilon_{t+1}\right) = \left(I_{n_x} \otimes K_{n_x,n_\epsilon} \otimes I_{n_\epsilon}\right) E\left(\left(x_t^f\right)^{\otimes 2} \otimes \epsilon_{t+1}^{\otimes 2}\right). $$
Going back to (2.44), we have
$$ E\left[\left(h_x x_t^f\right) \otimes \left(x_{t+1}^f\right)^{\otimes 3}\right] = h_x^{\otimes 4} E\left(x_t^f\right)^{\otimes 4} + \sigma^2 \Big[ \left(h_x^{\otimes 2} \otimes \eta^{\otimes 2}\right) + \left(h_x \otimes \eta \otimes h_x \otimes \eta\right)\left(I_{n_x} \otimes K_{n_x,n_\epsilon} \otimes I_{n_\epsilon}\right) $$
$$ \quad + \left(h_x \otimes \eta \otimes \eta \otimes h_x\right)\left(I_{n_x} \otimes K_{n_x,n_\epsilon^2}\right) \Big] \left(E\left[\left(x_t^f\right)^{\otimes 2}\right] \otimes \mathrm{vec}(I_{n_\epsilon})\right), $$
since $E\left(\left(x_t^f\right)^{\otimes 2} \otimes \epsilon_{t+1}^{\otimes 2}\right) = E\left[\left(x_t^f\right)^{\otimes 2}\right] \otimes \mathrm{vec}(I_{n_\epsilon})$.
Regarding the other summand in (2.43),
$$ \left(\sigma\eta\epsilon_{t+1}\right) \otimes \left(x_{t+1}^f\right)^{\otimes 3} = \left(\sigma\eta\epsilon_{t+1}\right) \otimes \Big( h_x^{\otimes 3}\left(x_t^f\right)^{\otimes 3} + \sigma\left(h_x^{\otimes 2} \otimes \eta\right)\left(\left(x_t^f\right)^{\otimes 2} \otimes \epsilon_{t+1}\right) + \sigma\left(h_x \otimes \eta \otimes h_x\right)\left(x_t^f \otimes \epsilon_{t+1} \otimes x_t^f\right) $$
$$ \quad + \sigma^2\left(h_x \otimes \eta^{\otimes 2}\right)\left(x_t^f \otimes \epsilon_{t+1}^{\otimes 2}\right) + \sigma\left(\eta \otimes h_x^{\otimes 2}\right)\left(\epsilon_{t+1} \otimes \left(x_t^f\right)^{\otimes 2}\right) + \sigma^2\left(\eta \otimes h_x \otimes \eta\right)\left(\epsilon_{t+1} \otimes x_t^f \otimes \epsilon_{t+1}\right) + \sigma^2\left(\eta^{\otimes 2} \otimes h_x\right)\left(\epsilon_{t+1}^{\otimes 2} \otimes x_t^f\right) + \sigma^3 \eta^{\otimes 3} \epsilon_{t+1}^{\otimes 3} \Big) $$
$$ = \sigma^2\left(\eta \otimes h_x \otimes h_x \otimes \eta\right)\left(\epsilon_{t+1} \otimes x_t^f \otimes x_t^f \otimes \epsilon_{t+1}\right) + \sigma^2\left(\eta \otimes h_x \otimes \eta \otimes h_x\right)\left(\epsilon_{t+1} \otimes x_t^f \otimes \epsilon_{t+1} \otimes x_t^f\right) $$
$$ \quad + \sigma^2\left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right)\left(\epsilon_{t+1}^{\otimes 2} \otimes \left(x_t^f\right)^{\otimes 2}\right) + \sigma^4 \eta^{\otimes 4} \epsilon_{t+1}^{\otimes 4} + \text{terms that are zero in expectation}. \qquad (2.45) $$
Using the commutation matrices,
$$ E\left(\epsilon_{t+1}^{\otimes 2} \otimes \left(x_t^f\right)^{\otimes 2}\right) = \mathrm{vec}(I_{n_\epsilon}) \otimes E\left(x_t^f\right)^{\otimes 2}, $$
$$ E\left(\epsilon_{t+1} \otimes x_t^f \otimes \epsilon_{t+1} \otimes x_t^f\right) = \left(I_{n_\epsilon} \otimes K_{n_\epsilon,n_x} \otimes I_{n_x}\right)\left(\mathrm{vec}(I_{n_\epsilon}) \otimes E\left(x_t^f\right)^{\otimes 2}\right), $$
$$ E\left(\epsilon_{t+1} \otimes x_t^f \otimes x_t^f \otimes \epsilon_{t+1}\right) = \left(I_{n_\epsilon} \otimes K_{n_\epsilon,n_x^2}\right)\left(\mathrm{vec}(I_{n_\epsilon}) \otimes E\left(x_t^f\right)^{\otimes 2}\right). $$
Plugging into (2.45) delivers
$$ E\left[\left(\sigma\eta\epsilon_{t+1}\right) \otimes \left(x_{t+1}^f\right)^{\otimes 3}\right] = \sigma^2 \Big[ \left(\eta \otimes h_x \otimes h_x \otimes \eta\right)\left(I_{n_\epsilon} \otimes K_{n_\epsilon,n_x^2}\right) + \left(\eta \otimes h_x \otimes \eta \otimes h_x\right)\left(I_{n_\epsilon} \otimes K_{n_\epsilon,n_x} \otimes I_{n_x}\right) $$
$$ \quad + \left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right) \Big] \left(\mathrm{vec}(I_{n_\epsilon}) \otimes E\left(x_t^f\right)^{\otimes 2}\right) + \sigma^4 \eta^{\otimes 4} M_4, $$
where $M_4 \equiv E\left[\epsilon_{t+1}^{\otimes 4}\right]$.
Going back to the original formula (2.43),
$$ E\left(x_{t+1}^f\right)^{\otimes 4} = E\left[\left(h_x x_t^f\right) \otimes \left(x_{t+1}^f\right)^{\otimes 3}\right] + E\left[\left(\sigma\eta\epsilon_{t+1}\right) \otimes \left(x_{t+1}^f\right)^{\otimes 3}\right] $$
$$ = h_x^{\otimes 4} E\left(x_t^f\right)^{\otimes 4} + \sigma^2 \Big[ \left(h_x^{\otimes 2} \otimes \eta^{\otimes 2}\right) + \left(h_x \otimes \eta \otimes h_x \otimes \eta\right)\left(I_{n_x} \otimes K_{n_x,n_\epsilon} \otimes I_{n_\epsilon}\right) + \left(h_x \otimes \eta \otimes \eta \otimes h_x\right)\left(I_{n_x} \otimes K_{n_x,n_\epsilon^2}\right) \Big] \left(E\left[\left(x_t^f\right)^{\otimes 2}\right] \otimes \mathrm{vec}(I_{n_\epsilon})\right) $$
$$ \quad + \sigma^2 \Big[ \left(\eta \otimes h_x \otimes h_x \otimes \eta\right)\left(I_{n_\epsilon} \otimes K_{n_\epsilon,n_x^2}\right) + \left(\eta \otimes h_x \otimes \eta \otimes h_x\right)\left(I_{n_\epsilon} \otimes K_{n_\epsilon,n_x} \otimes I_{n_x}\right) + \left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right) \Big] \left(\mathrm{vec}(I_{n_\epsilon}) \otimes E\left(x_t^f\right)^{\otimes 2}\right) + \sigma^4 \eta^{\otimes 4} M_4. $$
Hence, using stationarity and the fact that $E\left[\left(x_t^f\right)^{\otimes 2}\right] \otimes \mathrm{vec}(I_{n_\epsilon}) = K_{n_\epsilon^2,n_x^2}\left(\mathrm{vec}(I_{n_\epsilon}) \otimes E\left(x_t^f\right)^{\otimes 2}\right)$, we have
$$ E\left(x_{t+1}^f\right)^{\otimes 4} = \sigma^2\left(I_{n_x^4} - h_x^{\otimes 4}\right)^{-1} \Big[ \sigma^2 \eta^{\otimes 4} M_4 + \Big( \left(h_x^{\otimes 2} \otimes \eta^{\otimes 2}\right) K_{n_\epsilon^2,n_x^2} + \left(h_x \otimes \eta \otimes h_x \otimes \eta\right)\left(I_{n_x} \otimes K_{n_x,n_\epsilon} \otimes I_{n_\epsilon}\right) K_{n_\epsilon^2,n_x^2} $$
$$ \quad + \left(h_x \otimes \eta \otimes \eta \otimes h_x\right)\left(I_{n_x} \otimes K_{n_x,n_\epsilon^2}\right) K_{n_\epsilon^2,n_x^2} + \left(\eta \otimes h_x \otimes h_x \otimes \eta\right)\left(I_{n_\epsilon} \otimes K_{n_\epsilon,n_x^2}\right) + \left(\eta \otimes h_x \otimes \eta \otimes h_x\right)\left(I_{n_\epsilon} \otimes K_{n_\epsilon,n_x} \otimes I_{n_x}\right) + \left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right) \Big) \left(\mathrm{vec}(I_{n_\epsilon}) \otimes E\left(x_t^f\right)^{\otimes 2}\right) \Big]. $$
Next, we calculate $E\left(\left(x_{t+1}^s\right)^{\otimes 2}\right)$.
$$ \left(x_{t+1}^s\right)^{\otimes 2} = \left( h_x x_t^s + \frac{1}{2} h_{xx}\left(x_t^f\right)^{\otimes 2} + \frac{1}{2} h_{\sigma\sigma}\sigma^2 \right) \otimes \left( h_x x_t^s + \frac{1}{2} h_{xx}\left(x_t^f\right)^{\otimes 2} + \frac{1}{2} h_{\sigma\sigma}\sigma^2 \right) $$
$$ = h_x^{\otimes 2}\left(x_t^s\right)^{\otimes 2} + \left(h_x x_t^s\right) \otimes \left(\frac{1}{2} h_{xx}\left(x_t^f\right)^{\otimes 2}\right) + \left(\frac{1}{2} h_{xx}\left(x_t^f\right)^{\otimes 2}\right) \otimes \left(h_x x_t^s\right) + \left(\frac{1}{2} h_{xx}\left(x_t^f\right)^{\otimes 2}\right) \otimes \left(\frac{1}{2} h_{xx}\left(x_t^f\right)^{\otimes 2}\right) $$
$$ \quad + \frac{1}{2}\left(h_{\sigma\sigma} \otimes h_x\right)\left(\sigma^2 x_t^s\right) + \frac{1}{4}\left(h_{\sigma\sigma} \otimes h_{xx}\right)\left(\sigma^2\left(x_t^f\right)^{\otimes 2}\right) + \frac{1}{2}\left(h_x \otimes h_{\sigma\sigma}\right)\left(\sigma^2 x_t^s\right) + \frac{1}{4}\left(h_{xx} \otimes h_{\sigma\sigma}\right)\left(\sigma^2\left(x_t^f\right)^{\otimes 2}\right) + \frac{1}{4} h_{\sigma\sigma} \otimes h_{\sigma\sigma}\sigma^4 $$
$$ = h_x^{\otimes 2}\left(x_t^s\right)^{\otimes 2} + \left(\frac{1}{2} h_x \otimes h_{xx}\right) K_{n_x^2,n_x}\left(\left(x_t^f\right)^{\otimes 2} \otimes x_t^s\right) + \left(\frac{1}{2} h_{xx} \otimes h_x\right)\left(\left(x_t^f\right)^{\otimes 2} \otimes x_t^s\right) + \frac{1}{4} h_{xx}^{\otimes 2}\left(x_t^f\right)^{\otimes 4} $$
$$ \quad + \frac{1}{2}\left(h_x \otimes h_{\sigma\sigma} + h_{\sigma\sigma} \otimes h_x\right)\left(\sigma^2 x_t^s\right) + \frac{1}{4}\left(h_{\sigma\sigma} \otimes h_{xx} + h_{xx} \otimes h_{\sigma\sigma}\right)\left(\sigma^2\left(x_t^f\right)^{\otimes 2}\right) + \frac{1}{4} h_{\sigma\sigma} \otimes h_{\sigma\sigma}\sigma^4 $$
$$ = h_x^{\otimes 2}\left(x_t^s\right)^{\otimes 2} + \frac{1}{2}\left(K_{n_x,n_x} + I_{n_x^2}\right)\left(h_{xx} \otimes h_x\right)\left(\left(x_t^f\right)^{\otimes 2} \otimes x_t^s\right) + \frac{1}{4} h_{xx}^{\otimes 2}\left(x_t^f\right)^{\otimes 4} $$
$$ \quad + \left(K_{n_x,n_x} + I_{n_x^2}\right)\left[\frac{1}{2}\left(h_x \otimes h_{\sigma\sigma}\right)\left(\sigma^2 x_t^s\right) + \frac{1}{4}\left(h_{xx} \otimes h_{\sigma\sigma}\right)\left(\sigma^2\left(x_t^f\right)^{\otimes 2}\right)\right] + \frac{1}{4} h_{\sigma\sigma} \otimes h_{\sigma\sigma}\sigma^4, $$
where the last line follows since $\left(h_x \otimes h_{xx}\right) K_{n_x^2,n_x} = K_{n_x,n_x}\left(h_{xx} \otimes h_x\right)$. (Note: $K_{p,q}^{-1} = K_{q,p}$, so $K_{n_x,n_x} = K_{n_x,n_x}^{-1}$, and $K_{1,n} = K_{n,1} = I_n$.)
Hence
$$ E\left[\left(x_{t+1}^s\right)^{\otimes 2}\right] = \left(I_{n_x^2} - h_x^{\otimes 2}\right)^{-1}\Big( \frac{1}{2}\left(K_{n_x,n_x} + I_{n_x^2}\right)\left(h_{xx} \otimes h_x\right) E\left(\left(x_t^f\right)^{\otimes 2} \otimes x_t^s\right) + \frac{1}{4} h_{xx}^{\otimes 2} E\left(x_t^f\right)^{\otimes 4} $$
$$ \quad + \left(K_{n_x,n_x} + I_{n_x^2}\right)\left[\frac{1}{2}\left(h_x \otimes h_{\sigma\sigma}\right)\left(\sigma^2 E x_t^s\right) + \frac{1}{4}\left(h_{xx} \otimes h_{\sigma\sigma}\right)\left(\sigma^2 E\left(x_t^f\right)^{\otimes 2}\right)\right] + \frac{1}{4} h_{\sigma\sigma} \otimes h_{\sigma\sigma}\sigma^4 \Big). $$
Next we calculate $E\left(\left(x_{t+1}^f\right)^{\otimes 2} \otimes x_{t+1}^s\right)$.
$$ \left(x_{t+1}^f\right)^{\otimes 2} \otimes x_{t+1}^s = \Big( \left(h_x x_t^f\right) \otimes \left(h_x x_t^f\right) + \left(h_x x_t^f\right) \otimes \left(\sigma\eta\epsilon_{t+1}\right) + \left(\sigma\eta\epsilon_{t+1}\right) \otimes \left(h_x x_t^f\right) + \left(\sigma\eta\epsilon_{t+1}\right) \otimes \left(\sigma\eta\epsilon_{t+1}\right) \Big) \otimes \left( h_x x_t^s + \frac{1}{2} h_{xx}\left(x_t^f\right)^{\otimes 2} + \frac{1}{2} h_{\sigma\sigma}\sigma^2 \right) $$
$$ = h_x^{\otimes 3}\left(\left(x_t^f\right)^{\otimes 2} \otimes x_t^s\right) + \left(\sigma^2\eta^{\otimes 2} \otimes h_x\right)\left(\epsilon_{t+1}^{\otimes 2} \otimes x_t^s\right) + \frac{1}{2}\left(h_x^{\otimes 2} \otimes h_{\sigma\sigma}\right)\sigma^2\left(x_t^f\right)^{\otimes 2} + \frac{1}{2}\left(\eta^{\otimes 2} \otimes h_{\sigma\sigma}\right)\sigma^4\epsilon_{t+1}^{\otimes 2} $$
$$ \quad + \frac{1}{2}\left(h_x^{\otimes 2} \otimes h_{xx}\right)\left(x_t^f\right)^{\otimes 4} + \frac{1}{2}\left(\sigma^2\eta^{\otimes 2} \otimes h_{xx}\right)\left(\epsilon_{t+1}^{\otimes 2} \otimes \left(x_t^f\right)^{\otimes 2}\right) + \text{terms zero in expectation}. $$
Hence
$$ E\left[\left(x_t^f\right)^{\otimes 2} \otimes x_t^s\right] = \left(I_{n_x^3} - h_x^{\otimes 3}\right)^{-1}\Big( \left(\sigma^2\eta^{\otimes 2} \otimes h_x\right)\left(E\left[\epsilon_{t+1}^{\otimes 2}\right] \otimes E x_t^s\right) + \frac{1}{2}\left(h_x^{\otimes 2} \otimes h_{xx}\right) E\left(x_t^f\right)^{\otimes 4} $$
$$ \quad + \frac{1}{2}\left(\sigma^2\eta^{\otimes 2} \otimes h_{xx}\right)\left(\mathrm{vec}(I_{n_\epsilon}) \otimes E\left(x_t^f\right)^{\otimes 2}\right) + \frac{1}{2}\left(h_x^{\otimes 2} \otimes h_{\sigma\sigma}\right)\sigma^2 E\left[\left(x_t^f\right)^{\otimes 2}\right] + \frac{1}{2}\left(\eta^{\otimes 2} \otimes h_{\sigma\sigma}\right)\sigma^4 E\epsilon_{t+1}^{\otimes 2} \Big), $$
where the equality follows since $\epsilon_{t+1}$ and $x_t^s$ are orthogonal.
For the last missing term,
$$ x_{t+1}^f \otimes x_{t+1}^{rd} = \left(h_x x_t^f + \sigma\eta\epsilon_{t+1}\right) \otimes \left( h_x x_t^{rd} + \frac{1}{2} h_{xx}\left(2\left(x_t^f \otimes x_t^s\right)\right) + \frac{1}{6} h_{xxx}\left(x_t^f\right)^{\otimes 3} + \frac{3}{6} h_{x\sigma\sigma}\sigma^2 x_t^f + \frac{1}{6} h_{\sigma\sigma\sigma}\sigma^3 \right) $$
$$ = h_x^{\otimes 2}\left(x_t^f \otimes x_t^{rd}\right) + \left(h_x \otimes h_{xx}\right)\left(\left(x_t^f\right)^{\otimes 2} \otimes x_t^s\right) + \frac{1}{6}\left(h_x \otimes h_{xxx}\right)\left(x_t^f\right)^{\otimes 4} + \frac{3}{6}\left(h_x \otimes h_{x\sigma\sigma}\right)\sigma^2\left(x_t^f\right)^{\otimes 2} + \text{terms zero in expectation}, $$
so that
$$ E\left[x_{t+1}^f \otimes x_{t+1}^{rd}\right] = \left(I_{n_x^2} - h_x^{\otimes 2}\right)^{-1}\Big( \left(h_x \otimes h_{xx}\right) E\left(\left(x_t^f\right)^{\otimes 2} \otimes x_t^s\right) + \frac{1}{6}\left(h_x \otimes h_{xxx}\right) E\left(x_t^f\right)^{\otimes 4} + \frac{3}{6}\left(h_x \otimes h_{x\sigma\sigma}\right)\sigma^2 E\left[\left(x_t^f\right)^{\otimes 2}\right] \Big). $$
This completes the proof.
2.B.2.4. Simplifying the fourth-order expressions. For the model used in the current paper, the fourth-order Kronecker products can be simplified further. Namely,
$$ \left(h_x^{\otimes 2} \otimes \eta^{\otimes 2}\right) K_{n_\epsilon^2,n_x^2} = K_{n_x^2,n_x^2}\left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right). $$
For the current model we have $n_\epsilon = 1$, so that $K_{n_\epsilon,n_x} = K_{n_x,n_\epsilon} = I_{n_x}$ and $I_{n_\epsilon} = \mathrm{vec}(I_{n_\epsilon}) = 1$.
$$ E\left(x_{t+1}^f\right)^{\otimes 4} = \sigma^2\left(I_{n_x^4} - h_x^{\otimes 4}\right)^{-1}\Big[ \sigma^2\eta^{\otimes 4} M_4 + \Big( K_{n_x^2,n_x^2}\left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right) + \left(h_x \otimes \eta \otimes h_x \otimes \eta\right)\left(I_{n_x} \otimes I_{n_x}\right) + \left(h_x \otimes \eta \otimes \eta \otimes h_x\right)\left(I_{n_x} \otimes I_{n_x}\right) $$
$$ \quad + \left(\eta \otimes h_x \otimes h_x \otimes \eta\right) + \left(\eta \otimes h_x \otimes \eta \otimes h_x\right)\left(I_{n_x} \otimes I_{n_x}\right) + \left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right) \Big) E\left(x_t^f\right)^{\otimes 2} \Big]. $$
Since $I_{n_x} \otimes I_{n_x} = I_{n_x^2}$, this becomes
$$ E\left(x_{t+1}^f\right)^{\otimes 4} = \sigma^2\left(I_{n_x^4} - h_x^{\otimes 4}\right)^{-1}\Big[ \sigma^2\eta^{\otimes 4} M_4 + \Big( \left(h_x \otimes \eta \otimes h_x \otimes \eta\right) + \left(h_x \otimes \eta \otimes \eta \otimes h_x\right) + \left(\eta \otimes h_x \otimes h_x \otimes \eta\right) + \left(\eta \otimes h_x \otimes \eta \otimes h_x\right) $$
$$ \quad + \left(K_{n_x^2,n_x^2} + I_{n_x^4}\right)\left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right) \Big) E\left(x_t^f\right)^{\otimes 2} \Big]. $$
Moreover,
$$ h_x \otimes \eta \otimes h_x \otimes \eta = h_x \otimes \left[K_{n_x,n_x}\left(h_x \otimes \eta\right)K_{n_\epsilon,n_x}\right] \otimes \eta = \left(I_{n_x} \otimes K_{n_x,n_x} \otimes I_{n_x}\right)\left(h_x^{\otimes 2} \otimes \eta^{\otimes 2}\right) = \left(I_{n_x} \otimes K_{n_x,n_x} \otimes I_{n_x}\right) K_{n_x^2,n_x^2}\left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right), $$
$$ h_x \otimes \eta \otimes \eta \otimes h_x = h_x \otimes \eta \otimes \left[K_{n_x,n_x}\left(h_x \otimes \eta\right)K_{n_\epsilon,n_x}\right] = \left(I_{n_x^2} \otimes K_{n_x,n_x}\right)\left(h_x \otimes \eta \otimes h_x \otimes \eta\right) = \left(I_{n_x^2} \otimes K_{n_x,n_x}\right)\left(I_{n_x} \otimes K_{n_x,n_x} \otimes I_{n_x}\right) K_{n_x^2,n_x^2}\left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right), $$
$$ \eta \otimes h_x \otimes \eta \otimes h_x = \eta \otimes \left[K_{n_x,n_x}\left(\eta \otimes h_x\right)K_{n_x,n_\epsilon}\right] \otimes h_x = \left(I_{n_x} \otimes K_{n_x,n_x} \otimes I_{n_x}\right)\left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right), $$
$$ \eta \otimes h_x \otimes h_x \otimes \eta = \eta \otimes h_x \otimes \left[K_{n_x,n_x}\left(\eta \otimes h_x\right)K_{n_x,n_\epsilon}\right] = \left(I_{n_x^2} \otimes K_{n_x,n_x}\right)\left(\eta \otimes h_x \otimes \eta \otimes h_x\right) = \left(I_{n_x^2} \otimes K_{n_x,n_x}\right)\left(I_{n_x} \otimes K_{n_x,n_x} \otimes I_{n_x}\right)\left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right). $$
Hence
$$ E\left(x_{t+1}^f\right)^{\otimes 4} = \sigma^2\left(I_{n_x^4} - h_x^{\otimes 4}\right)^{-1}\Big[ \sigma^2\eta^{\otimes 4} M_4 + \Big( \left(I_{n_x} \otimes K_{n_x,n_x} \otimes I_{n_x}\right) K_{n_x^2,n_x^2} + \left(I_{n_x^2} \otimes K_{n_x,n_x}\right)\left(I_{n_x} \otimes K_{n_x,n_x} \otimes I_{n_x}\right) K_{n_x^2,n_x^2} $$
$$ \quad + \left(I_{n_x^2} \otimes K_{n_x,n_x}\right)\left(I_{n_x} \otimes K_{n_x,n_x} \otimes I_{n_x}\right) + \left(I_{n_x} \otimes K_{n_x,n_x} \otimes I_{n_x}\right) + \left(K_{n_x^2,n_x^2} + I_{n_x^4}\right) \Big)\left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right) E\left(x_t^f\right)^{\otimes 2} \Big], $$
or
$$ E\left(x_{t+1}^f\right)^{\otimes 4} = \sigma^2\left(I_{n_x^4} - h_x^{\otimes 4}\right)^{-1}\Big[ \sigma^2\eta^{\otimes 4} M_4 + \Big( \left(I_{n_x^2} \otimes K_{n_x,n_x}\right)\left(I_{n_x} \otimes K_{n_x,n_x} \otimes I_{n_x}\right) + I_{n_x^4} + \left(I_{n_x} \otimes K_{n_x,n_x} \otimes I_{n_x}\right) \Big)\left(K_{n_x^2,n_x^2} + I_{n_x^4}\right)\left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right) E\left(x_t^f\right)^{\otimes 2} \Big], $$
hence
$$ E\left(x_{t+1}^f\right)^{\otimes 4} = \sigma^2\left(I_{n_x^4} - h_x^{\otimes 4}\right)^{-1}\Big[ \sigma^2\eta^{\otimes 4} M_4 + \Big( \left(I_{n_x^2} \otimes K_{n_x,n_x} + I_{n_x^4}\right)\left(I_{n_x} \otimes K_{n_x,n_x} \otimes I_{n_x}\right) + I_{n_x^4} \Big)\left(K_{n_x^2,n_x^2} + I_{n_x^4}\right)\left(\eta^{\otimes 2} \otimes h_x^{\otimes 2}\right) E\left(x_t^f\right)^{\otimes 2} \Big]. $$
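This chain of simplifications can be verified numerically for a small $n_x$ with $n_\epsilon = 1$: the simplified bracket must equal the sum of the six original terms, in which all $K_{\cdot,n_\epsilon}$ factors reduce to identities. A sketch, with `commutation(p, q)` as defined earlier:

```python
import numpy as np

def commutation(p, q):
    K = np.zeros((p * q, p * q))
    for i in range(p):
        for j in range(q):
            K[j * p + i, i * q + j] = 1.0
    return K

rng = np.random.default_rng(2)
nx = 2
hx = 0.5 * rng.normal(size=(nx, nx))
eta = rng.normal(size=(nx, 1))          # n_eps = 1

hh, ee = np.kron(hx, hx), np.kron(eta, eta)
I4 = np.eye(nx**4)
Kxx = commutation(nx, nx)
Kq = commutation(nx**2, nx**2)
P = np.kron(np.kron(np.eye(nx), Kxx), np.eye(nx))   # I ⊗ K_{nx,nx} ⊗ I
B = np.kron(np.eye(nx**2), Kxx)                     # I_{nx^2} ⊗ K_{nx,nx}

# six original terms (n_eps = 1: every K_{.,n_eps} is an identity)
direct = (np.kron(hh, ee) + np.kron(ee, hh)
          + np.kron(np.kron(hx, eta), np.kron(hx, eta))
          + np.kron(np.kron(hx, eta), np.kron(eta, hx))
          + np.kron(np.kron(eta, hx), np.kron(hx, eta))
          + np.kron(np.kron(eta, hx), np.kron(eta, hx)))

# simplified bracket from the final expression
simplified = ((B + I4) @ P + I4) @ (Kq + I4) @ np.kron(ee, hh)

assert np.allclose(direct, simplified)
```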