
the PC-OLS and two-step PC-GLS estimators. Consequently, this leaves more room for improvement with respect to the factor loadings and, furthermore, the regressors for the additional steps of the iterated estimator are estimated quite precisely, which additionally boosts the performance of that estimator.

Similar findings arise for the estimated common factors, although here standard PC is more efficient than it is for the factor loadings. Nevertheless, employing two-step PC-GLS and, in particular, the iterated PC-GLS estimator leads to more precise estimates compared to OLS. The gain in efficiency of two-step PC-GLS over standard PC is already quite noticeable, whereas the subsequent increase from iterating the PC-GLS estimator is not as large as for the factor loadings. This also stems from the particular relation between N and T present in this simulation. As noted above, increasing T relative to N positively affects the two-step estimates of the common factors, so that there is less room for improvement via iterating the PC-GLS estimator. Furthermore, since the relation between N and T adversely affects the accuracy of the estimated factor loadings, the reduction in the error of the regressors in the subsequent iteration steps is not that large. Overall, this table clearly illustrates the advantages of employing the two-step PC-GLS estimator or its iterated version when confronted with a real-life data set.

Monte Carlo experiments suggest that in small samples the hybrid estimator suffers from the poor properties of the estimated weight matrix. We therefore recommend the (iterated) PC-GLS estimator for datasets of moderate sample size.
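The estimation strategy discussed above can be illustrated with a short simulation. The sketch below is our own minimal construction (not the authors' code): it assumes a one-factor panel with AR(1) idiosyncratic errors, computes the standard PC (OLS) estimates, and then performs one GLS update of the loadings by quasi-differencing with the estimated AR coefficients.

```python
# Minimal sketch of the two-step PC-GLS idea (our construction, for
# illustration only): one factor, AR(1) idiosyncratic errors.
import numpy as np

rng = np.random.default_rng(0)
N, T, rho = 50, 200, 0.7

F = rng.standard_normal(T)            # common factor
lam = rng.standard_normal(N)          # true loadings
e = np.zeros((T, N))                  # AR(1) idiosyncratic errors
eps = rng.standard_normal((T, N))
for t in range(1, T):
    e[t] = rho * e[t - 1] + eps[t]
X = np.outer(F, lam) + e              # T x N data panel

# Step 1: standard PC (OLS): leading eigenvector of X X' / (N T)
_, v = np.linalg.eigh(X @ X.T / (N * T))
F_hat = v[:, -1] * np.sqrt(T)         # normalization F_hat'F_hat / T = 1
lam_hat = X.T @ F_hat / T

# Step 2: GLS update of the loadings: estimate an AR(1) coefficient for
# each residual series, quasi-difference data and factor, re-run OLS.
resid = X - np.outer(F_hat, lam_hat)
lam_gls = np.empty(N)
for i in range(N):
    r_i = resid[:, i]
    rho_i = (r_i[1:] @ r_i[:-1]) / (r_i[:-1] @ r_i[:-1])
    x_f = X[1:, i] - rho_i * X[:-1, i]
    F_f = F_hat[1:] - rho_i * F_hat[:-1]
    lam_gls[i] = (F_f @ x_f) / (F_f @ F_f)
```

Up to the usual sign and scale indeterminacy of PC, `lam_gls` tracks the true loadings; repeating the two steps until convergence would give the iterated PC-GLS estimator.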

If one is willing to accept the framework of a strict factor model (that is, a model with cross-sectionally uncorrelated factors and idiosyncratic errors), then our approach can also be employed for inference. For example, recent work by Breitung and Eickmeier (2011) shows that a Chow-type test for structural breaks can be derived using the iterated PC-GLS estimator. Other possible applications are LR tests for the number of common factors or tests of hypotheses on the factor space.

The following lemma plays a central role in the proofs of the following theorems:

Lemma A.1: For all k ≤ p_i it holds that

(i) \[ T^{-1}\sum_{t=p_i+1}^{T}(\hat F_t - H'F_t)F_{t-k}' = O_p(\delta_{NT}^{-2}), \qquad T^{-1}\sum_{t=p_i+1}^{T}(\hat F_t - H'F_t)\hat F_{t-k}' = O_p(\delta_{NT}^{-2}); \]

(ii) \[ T^{-1}\sum_{t=p_i+1}^{T}\hat F_t\hat F_{t-k}' = T^{-1}\sum_{t=p_i+1}^{T}H'F_tF_{t-k}'H + O_p(\delta_{NT}^{-2}); \]

(iii) \[ T^{-1}\sum_{t=p_i+1}^{T}(\hat F_t - H'F_t)e_{i,t-k} = O_p(\delta_{NT}^{-2}); \]

(iv) \[ N^{-1}\sum_{i=1}^{N}\frac{1}{\omega_i^2}(\hat\lambda_i - H^{-1}\lambda_i)\lambda_i' = O_p(\delta_{NT}^{-2}), \qquad N^{-1}\sum_{i=1}^{N}\frac{1}{\omega_i^2}(\hat\lambda_i - H^{-1}\lambda_i)\hat\lambda_i' = O_p(\delta_{NT}^{-2}); \]

(v) \[ N^{-1}\sum_{i=1}^{N}\frac{1}{\omega_i^2}(\hat\lambda_i - H^{-1}\lambda_i)e_{it} = O_p(\delta_{NT}^{-2}). \]

Proof: (i) The proof follows closely the proof for k = 0 provided by Bai (2003, Lemmas B.2 and B.3). We therefore present only the main steps.

We start from the representation
\[
\hat F_t - H'F_t = \frac{1}{NT}V_{NT}^{-1}\left(\hat F'F\Lambda' e_t + \hat F'e\Lambda F_t + \hat F'e\,e_t\right),
\]
where \(e_t = (e_{1t},\dots,e_{Nt})'\), \(e = [e_1,\dots,e_T]'\), and \(V_{NT}\) is the r×r diagonal matrix of the r largest eigenvalues of \((NT)^{-1}XX'\) (Bai 2003, Theorem 1). Consider

\[
\frac{1}{T}\sum_{t=p_i+1}^{T}(\hat F_t - H'F_t)F_{t-k}'
= \frac{1}{NT^2}V_{NT}^{-1}\left(\hat F'F\Lambda'\sum_{t=p_i+1}^{T}e_tF_{t-k}' + \hat F'e\Lambda\sum_{t=p_i+1}^{T}F_tF_{t-k}' + \hat F'e\sum_{t=p_i+1}^{T}e_tF_{t-k}'\right)
= I + II + III.
\]


From Assumption 1 (v) it follows that
\[
\Lambda'\sum_{t=p_i+1}^{T}e_tF_{t-k}' = \sum_{i=1}^{N}\sum_{t=p_i+1}^{T}e_{it}\lambda_iF_{t-k}' = O_p(\sqrt{NT}),
\]

and using Lemma B.2 of Bai (2003) it follows that \(T^{-1}\hat F'F = T^{-1}H'F'F + T^{-1}(\hat F - FH)'F = T^{-1}H'F'F + O_p(\delta_{NT}^{-2})\). Thus, we obtain
\[
I = V_{NT}^{-1}\left(T^{-1}\hat F'F\right)\left(\frac{1}{\sqrt{NT}}\Lambda'\sum_{t=p_i+1}^{T}e_tF_{t-k}'\right)\frac{1}{\sqrt{NT}} = O_p\!\left(\frac{1}{\sqrt{NT}}\right).
\]

Next, we consider
\[
\Lambda'e'\hat F = \Lambda'\sum_{t=1}^{T}e_tF_t'H + \Lambda'\sum_{t=1}^{T}e_t(\hat F_t - H'F_t)'.
\]
Following Bai (2003, p. 160), we have
\[
\frac{1}{NT}\Lambda'\sum_{t=1}^{T}e_tF_t'H = O_p\!\left(\frac{1}{\sqrt{NT}}\right), \qquad
\frac{1}{NT}\Lambda'\sum_{t=1}^{T}e_t(\hat F_t - H'F_t)' = O_p\!\left(\frac{1}{\delta_{NT}\sqrt{N}}\right).
\]

Using \(T^{-1}\sum_{t=p_i+1}^{T}F_tF_{t-k}' = O_p(1)\), we obtain
\[
II = V_{NT}^{-1}\left(\frac{1}{NT}\hat F'e\Lambda\right)\left(\frac{1}{T}\sum_{t=p_i+1}^{T}F_tF_{t-k}'\right)
= \left[O_p\!\left(\frac{1}{\sqrt{NT}}\right) + O_p\!\left(\frac{1}{\delta_{NT}\sqrt{N}}\right)\right]O_p(1).
\]

For the remaining term, we obtain
\[
\frac{1}{NT^2}\hat F'e\sum_{t=p_i+1}^{T}e_tF_{t-k}'
= \frac{1}{NT^2}\sum_{s=1}^{T}\sum_{t=p_i+1}^{T}e_s'e_t\,\hat F_sF_{t-k}'
= \frac{1}{T^2}\sum_{s=1}^{T}\sum_{t=p_i+1}^{T}\hat F_sF_{t-k}'\,\zeta_{NT}(s,t) + \frac{1}{T^2}\sum_{s=1}^{T}\sum_{t=p_i+1}^{T}\hat F_sF_{t-k}'\,\gamma_N(s,t),
\]
where
\[
\zeta_{NT}(s,t) = e_s'e_t/N - \gamma_N(s,t), \qquad \gamma_N(s,t) = E(e_s'e_t/N).
\]
As in Bai (2003, p. 164f), we obtain
\[
III = V_{NT}^{-1}\left[O_p\!\left(\frac{1}{\delta_{NT}\sqrt{T}}\right) + O_p\!\left(\frac{1}{\delta_{NT}\sqrt{N}}\right)\right].
\]

Collecting these results, we obtain
\[
I + II + III = O_p\!\left(\frac{1}{\sqrt{NT}}\right) + O_p\!\left(\frac{1}{\sqrt{T}\,\delta_{NT}}\right) + O_p\!\left(\frac{1}{\sqrt{N}\,\delta_{NT}}\right) = O_p\!\left(\frac{1}{\delta_{NT}^{2}}\right).
\]

The proof of the second result in (i) is a similar modification of Lemma A.1 in Bai (2003) and is therefore omitted.

(ii) Consider
\[
T^{-1}\sum_{t=p_i+1}^{T}\hat F_t\hat F_{t-k}'
= T^{-1}\sum_{t=p_i+1}^{T}\big[H'F_t + (\hat F_t - H'F_t)\big]\big[H'F_{t-k} + (\hat F_{t-k} - H'F_{t-k})\big]'
\]
\[
= T^{-1}\sum_{t=p_i+1}^{T}\Big(H'F_tF_{t-k}'H + (\hat F_t - H'F_t)F_{t-k}'H + H'F_t(\hat F_{t-k} - H'F_{t-k})' + (\hat F_t - H'F_t)(\hat F_{t-k} - H'F_{t-k})'\Big)
\]
\[
= T^{-1}\sum_{t=p_i+1}^{T}H'F_tF_{t-k}'H
+ \underbrace{T^{-1}\sum_{t=p_i+1}^{T}H'F_t(\hat F_{t-k} - H'F_{t-k})'}_{a}
+ \underbrace{T^{-1}\sum_{t=p_i+1}^{T}(\hat F_t - H'F_t)\hat F_{t-k}'}_{b}.
\]
Using (i), the terms a and b can be shown to be \(O_p(\delta_{NT}^{-2})\).

(iii) The proof for k = 0 is given in Bai (2003, Lemma B.1). It is not difficult to see that the result remains unchanged if k ≠ 0.

(iv) Following Bai (2003, p. 165) we have
\[
\hat\lambda_i - H^{-1}\lambda_i = T^{-1}H'F'e_i + T^{-1}\hat F'(F - \hat F H^{-1})\lambda_i + T^{-1}(\hat F - FH)'e_i, \qquad (1.30)
\]
where \(e_i = (e_{i1},\dots,e_{iT})'\). Post-multiplying by \(\omega_i^{-2}\lambda_i'\) and averaging yields
\[
N^{-1}\sum_{i=1}^{N}\frac{1}{\omega_i^2}(\hat\lambda_i - H^{-1}\lambda_i)\lambda_i'
= T^{-1}H'F'\left(N^{-1}\sum_{i=1}^{N}\frac{1}{\omega_i^2}e_i\lambda_i'\right)
+ T^{-1}\hat F'(F - \hat F H^{-1})\left(N^{-1}\sum_{i=1}^{N}\frac{1}{\omega_i^2}\lambda_i\lambda_i'\right)
+ T^{-1}(\hat F - FH)'\left(N^{-1}\sum_{i=1}^{N}\frac{1}{\omega_i^2}e_i\lambda_i'\right).
\]
From Bai (2003, p. 165) it follows that the last two terms are \(O_p(\delta_{NT}^{-2})\). From Assumption 1 (v) and Assumption 2 (i) it follows that
\[
\left\|(NT)^{-1}\sum_{i=1}^{N}\sum_{t=1}^{T}\frac{1}{\omega_i^2}H'F_t\lambda_i'e_{it}\right\|
\le \frac{1}{\omega_{\min}^2}\left\|(NT)^{-1}\sum_{i=1}^{N}\sum_{t=1}^{T}H'F_t\lambda_i'e_{it}\right\| = O_p(1/\sqrt{NT}),
\]
where \(\omega_{\min} = \min(\omega_1,\dots,\omega_N)\). Thus, the first part of (iv) is \(O_p(\delta_{NT}^{-2})\). The second equation can be shown by using the first part and Lemma A.1 (v).

(v) From (1.30) it follows that
\[
N^{-1}\sum_{i=1}^{N}(\hat\lambda_i - H^{-1}\lambda_i)e_{it}
= N^{-1}T^{-1}\sum_{s=1}^{T}\sum_{i=1}^{N}\hat F_s e_{is}e_{it}
+ N^{-1}T^{-1}\sum_{s=1}^{T}\sum_{i=1}^{N}\hat F_s(F_s - H'^{-1}\hat F_s)'\lambda_ie_{it}
= a + b.
\]

For expression a we write
\[
N^{-1}T^{-1}\sum_{s=1}^{T}\hat F_s\sum_{i=1}^{N}e_{is}e_{it}
= T^{-1}\sum_{s=1}^{T}\hat F_s\left[N^{-1}\sum_{i=1}^{N}\big(e_{is}e_{it} - E(e_{is}e_{it})\big)\right]
+ T^{-1}\sum_{s=1}^{T}\hat F_s\gamma_N(s,t).
\]
From Lemma A.2 (a) and (b) of Bai (2003), it follows that the first term on the r.h.s. is \(O_p(N^{-1/2}\delta_{NT}^{-1})\), whereas the second term is \(O_p(T^{-1/2}\delta_{NT}^{-1})\).

To analyze b we note that by Lemma A.1 (i) and Assumption 1 (v)
\[
\left[T^{-1}\sum_{s=1}^{T}\hat F_s(F_s - H'^{-1}\hat F_s)'\right]\left[N^{-1}\sum_{i=1}^{N}\lambda_ie_{it}\right]
= O_p(\delta_{NT}^{-2})\,O_p(N^{-1/2}).
\]

Collecting these results, it follows that
\[
\left\|N^{-1}\sum_{i=1}^{N}\frac{1}{\omega_i^2}(\hat\lambda_i - H^{-1}\lambda_i)e_{it}\right\|
\le \frac{1}{\omega_{\min}^2}\left\|N^{-1}\sum_{i=1}^{N}(\hat\lambda_i - H^{-1}\lambda_i)e_{it}\right\|
= O_p(T^{-1/2}\delta_{NT}^{-1}) + O_p(N^{-1/2}\delta_{NT}^{-1}) + O_p(N^{-1/2}\delta_{NT}^{-2}) = O_p(\delta_{NT}^{-2}).
\]

Proof of Lemma 1:

Let
\[
z_t = \begin{pmatrix} e_{it} \\ \vdots \\ e_{i,t-p_i+1} \end{pmatrix}
\quad\text{and}\quad
\hat z_t = \begin{pmatrix} x_{it} - \hat\lambda_i'\hat F_t \\ \vdots \\ x_{i,t-p_i+1} - \hat\lambda_i'\hat F_{t-p_i+1} \end{pmatrix}.
\]

Using the same arguments as in Lemma 4 of Bai and Ng (2002), it can be shown that
\[
T^{-1}\sum_{t=p_i+1}^{T}\hat e_{it}\hat z_{t-1} - T^{-1}\sum_{t=p_i+1}^{T}e_{it}z_{t-1} = O_p(\delta_{NT}^{-2})
\]
and \(T^{-1}\sum_{t=p_i+1}^{T}(\hat z_{t-1}\hat z_{t-1}' - z_{t-1}z_{t-1}') = O_p(\delta_{NT}^{-2})\). Therefore, we obtain for the least-squares estimator of \(\rho^{(i)}\)
\[
\hat\rho^{(i)} = \rho^{(i)} + \left(\sum_{t=p_i+1}^{T}z_{t-1}z_{t-1}'\right)^{-1}\sum_{t=p_i+1}^{T}z_{t-1}\varepsilon_{it} + O_p(\delta_{NT}^{-2})
= \rho^{(i)} + O_p(T^{-1/2}) + O_p(\delta_{NT}^{-2})
\]
and, similarly, for the least-squares estimator of \(\omega_i^2\):
\[
\hat\omega_i^2 = \omega_i^2 + \left(T^{-1}\sum_{t=p_i+1}^{T}e_{it}^2 - \omega_i^2\right) + T^{-1}\sum_{t=p_i+1}^{T}\left(\hat e_{it}^2 - e_{it}^2\right)
= \omega_i^2 + O_p(T^{-1/2}) + O_p(\delta_{NT}^{-2}).
\]
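The least-squares estimators \(\hat\rho^{(i)}\) and \(\hat\omega_i^2\) treated in Lemma 1 can be computed from a residual series as follows; the function below is a hedged sketch (name and interface are ours, not the paper's).

```python
# Hedged sketch: least-squares AR(p) fit for an (estimated) idiosyncratic
# error series, returning rho_hat and the innovation variance omega2_hat.
import numpy as np

def ar_ls(e_hat, p):
    """Regress e_t on e_{t-1}, ..., e_{t-p}; estimate the innovation variance."""
    T = len(e_hat)
    Z = np.column_stack([e_hat[p - k:T - k] for k in range(1, p + 1)])  # lags 1..p
    y = e_hat[p:]
    rho_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
    omega2_hat = np.mean((y - Z @ rho_hat) ** 2)   # residual variance
    return rho_hat, omega2_hat

# usage on a simulated AR(1) series with rho = 0.6 and unit innovations
rng = np.random.default_rng(1)
e = np.zeros(500)
u = rng.standard_normal(500)
for t in range(1, 500):
    e[t] = 0.6 * e[t - 1] + u[t]
rho_hat, omega2_hat = ar_ls(e, 1)
```

In the two-step procedure the input would be the PC residuals \(\hat e_{it} = x_{it} - \hat\lambda_i'\hat F_t\) rather than the (unobserved) errors themselves.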

Proof of Theorem 1:

The feasible two-step estimator of \(\lambda_i\) is obtained as
\[
\tilde\lambda_{i,\hat\rho}
= \big[\hat F'R(\hat\rho^{(i)})'R(\hat\rho^{(i)})\hat F\big]^{-1}\hat F'R(\hat\rho^{(i)})'R(\hat\rho^{(i)})X_i
= \big[\hat F'R(\hat\rho^{(i)})'R(\hat\rho^{(i)})\hat F\big]^{-1}\hat F'R(\hat\rho^{(i)})'R(\hat\rho^{(i)})(F\lambda_i + e_i)
\]
\[
= \big[\hat F'R(\hat\rho^{(i)})'R(\hat\rho^{(i)})\hat F\big]^{-1}\hat F'R(\hat\rho^{(i)})'R(\hat\rho^{(i)})\big\{[\hat F + (FH - \hat F)]H^{-1}\lambda_i + e_i\big\},
\]
so that
\[
\tilde\lambda_{i,\hat\rho} - H^{-1}\lambda_i
= \big[\hat F'R(\hat\rho^{(i)})'R(\hat\rho^{(i)})\hat F\big]^{-1}\hat F'R(\hat\rho^{(i)})'R(\hat\rho^{(i)})\big[(FH - \hat F)H^{-1}\lambda_i + e_i\big],
\]
where \(e_i = (e_{i1},\dots,e_{iT})'\).

Using Lemma A.1 (ii) and Lemma 1, we obtain
\[
\frac{1}{T}\hat F'R(\hat\rho^{(i)})'R(\hat\rho^{(i)})\hat F
= \frac{1}{T}\sum_{t=p_i+1}^{T}(\hat F_t - \hat\rho_{1,i}\hat F_{t-1} - \dots - \hat\rho_{p_i,i}\hat F_{t-p_i})(\hat F_t - \hat\rho_{1,i}\hat F_{t-1} - \dots - \hat\rho_{p_i,i}\hat F_{t-p_i})'
\]
\[
= \frac{1}{T}\hat F'R(\rho^{(i)})'R(\rho^{(i)})\hat F + O_p(T^{-1/2}) + O_p(\delta_{NT}^{-2}) \quad\text{[by Lemma 1]}
\]
\[
= \frac{1}{T}H'F'R(\rho^{(i)})'R(\rho^{(i)})FH + O_p(T^{-1/2}) + O_p(\delta_{NT}^{-2}) \quad\text{[by Lemma A.1 (ii)]}.
\]

Lemma A.1 (i) yields \(T^{-1}\sum_{t=p_i+1}^{T}\hat F_{t-k}(\hat F_{t-k} - H'F_{t-k})' = O_p(\delta_{NT}^{-2})\) and, by using Lemma 1,
\[
T^{-1}\hat F'R(\hat\rho^{(i)})'R(\hat\rho^{(i)})(\hat F - FH)H^{-1}\lambda_i = O_p(\delta_{NT}^{-2}) + O_p(\delta_{NT}^{-2}T^{-1/2}).
\]

Next, we consider
\[
T^{-1/2}\sum_{t=p_i+1}^{T}[\hat\rho_i(L)\hat F_t]\,[\hat\rho_i(L)e_{it}]
= T^{-1/2}\sum_{t=p_i+1}^{T}\hat\rho_i(L)\big[H'F_t + (\hat F_t - H'F_t)\big]\,\hat\rho_i(L)e_{it}
\]
\[
= T^{-1/2}\sum_{t=p_i+1}^{T}\rho_i(L)H'F_t\,[\rho_i(L)e_{it}] + O_p(\sqrt{T}/\delta_{NT}^{2}) + O_p(T^{-1/2}),
\]
where Lemma A.1 (iii) and Lemma 1 are invoked. Hence, we find

\[
\sqrt{T}(\tilde\lambda_{i,\hat\rho} - H^{-1}\lambda_i)
= \big[T^{-1}H'F'R(\rho^{(i)})'R(\rho^{(i)})FH\big]^{-1}T^{-1/2}H'F'R(\rho^{(i)})'R(\rho^{(i)})e_i
+ O_p(\sqrt{T}/\delta_{NT}^{2}) + O_p(T^{-1/2}),
\]
where \(\sqrt{T}/\delta_{NT}^{2} \to 0\) if \(\sqrt{T}/N \to 0\). Finally, Assumption 1 (v) implies
\[
T^{-1/2}H'F'R(\rho^{(i)})'R(\rho^{(i)})e_i \;\stackrel{d}{\to}\; N\big(0, \tilde V_{Fe}^{(i)}\big),
\]
where \(\tilde V_{Fe}^{(i)}\) is defined in Theorem 1. With these results, part (i) of the theorem follows.
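In practice, the two-step loading estimator analyzed in part (i) filters both the observed series and the estimated factors with \(\hat\rho_i(L)\) and then runs OLS on the filtered data. A hedged sketch (function name and interface are ours):

```python
# Hedged sketch of the quasi-differencing (GLS) step for the loadings:
# apply the AR filter rho_i(L) to data and factors, then run OLS.
import numpy as np

def gls_loading(x_i, F_hat, rho_i):
    """x_i: length-T series; F_hat: T x r factor estimates; rho_i: AR coefficients."""
    p = len(rho_i)
    xf = x_i[p:].astype(float).copy()
    Ff = F_hat[p:].astype(float).copy()
    for k in range(1, p + 1):                        # subtract rho_k * (lag k)
        xf -= rho_i[k - 1] * x_i[p - k:len(x_i) - k]
        Ff -= rho_i[k - 1] * F_hat[p - k:len(F_hat) - k]
    coef, *_ = np.linalg.lstsq(Ff, xf, rcond=None)   # OLS on filtered data
    return coef

# usage: with noiseless data x = F lam, any AR filter leaves lam unchanged
rng = np.random.default_rng(2)
F = rng.standard_normal((200, 2))
lam_true = np.array([1.5, -0.5])
lam_est = gls_loading(F @ lam_true, F, np.array([0.6]))
```

The filter is linear, so the loading is invariant in the noiseless case; with noisy data the filtering delivers the efficiency gain analyzed above.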

The proof of part (ii) is similar. We therefore present the main steps only. The feasible two-step estimator of the common factors is given by
\[
\tilde F_{t,\hat\omega}
= (\hat\Lambda'\hat\Omega^{-1}\hat\Lambda)^{-1}\hat\Lambda'\hat\Omega^{-1}X_t
= (\hat\Lambda'\hat\Omega^{-1}\hat\Lambda)^{-1}\hat\Lambda'\hat\Omega^{-1}\big\{[\hat\Lambda + (\Lambda H'^{-1} - \hat\Lambda)]H'F_t + e_t\big\},
\]
so that
\[
\tilde F_{t,\hat\omega} - H'F_t = (\hat\Lambda'\hat\Omega^{-1}\hat\Lambda)^{-1}\hat\Lambda'\hat\Omega^{-1}\big[(\Lambda H'^{-1} - \hat\Lambda)H'F_t + e_t\big],
\]
where \(e_t = (e_{1t},\dots,e_{Nt})'\). By Lemma 1, the (diagonal) elements of \(\hat\Omega^{-1} - \Omega^{-1}\) are \(O_p(T^{-1/2}) + O_p(\delta_{NT}^{-2})\). Following Bai (2003) and using Lemma A.1 (iv) and (v),

we obtain

\[
N^{-1}\hat\Lambda'\hat\Omega^{-1}\hat\Lambda = N^{-1}H^{-1}\Lambda'\Omega^{-1}\Lambda H'^{-1} + O_p(T^{-1/2}) + O_p(\delta_{NT}^{-2}) \;\stackrel{p}{\to}\; \tilde\Psi_\Lambda,
\]
\[
\left\|N^{-1}\hat\Lambda'\hat\Omega^{-1}(\hat\Lambda - \Lambda H'^{-1})\right\|
\le \left\|N^{-1}\hat\Lambda'\Omega^{-1}(\hat\Lambda - \Lambda H'^{-1})\right\|
+ \left(N^{-1}\sum_{i=1}^{N}|\hat\omega_i^{-2} - \omega_i^{-2}|^2\right)^{1/2}\left(N^{-1}\big\|(\hat\Lambda - \Lambda H'^{-1})'\hat\Lambda\big\|^2\right)^{1/2}
= O_p(\delta_{NT}^{-2}) + O_p(T^{-1/2}\delta_{NT}^{-2}),
\]
\[
N^{-1}(\hat\Lambda - \Lambda H'^{-1})'\hat\Omega^{-1}e_t
= N^{-1}(\hat\Lambda - \Lambda H'^{-1})'\Omega^{-1}e_t
+ N^{-1}\sum_{i=1}^{N}\left(\frac{1}{\hat\omega_i^2} - \frac{1}{\omega_i^2}\right)e_{it}(\hat\lambda_i - H^{-1}\lambda_i)
= O_p(\delta_{NT}^{-2}) + O_p(T^{-1/2}\delta_{NT}^{-2}),
\]
\[
N^{-1/2}H^{-1}\Lambda'\hat\Omega^{-1}e_t = N^{-1/2}H^{-1}\Lambda'\Omega^{-1}e_t + O_p(T^{-1/2}) + O_p(\delta_{NT}^{-2}) \;\stackrel{d}{\to}\; N\big(0, \tilde V_{\lambda e}(t)\big),
\]
with
\[
\tilde V_{\lambda e}(t) = E\left[\lim_{N\to\infty}N^{-1}H^{-1}\Lambda'\Omega^{-1}e_te_t'\Omega^{-1}\Lambda H'^{-1}\right]
= \lim_{N\to\infty}N^{-1}\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{1}{\omega_i^2\omega_j^2}H^{-1}\lambda_i\lambda_j'H'^{-1}E(e_{it}e_{jt}).
\]

From these results the limit distribution stated in Theorem 1 (ii) follows.
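Computationally, the two-step factor estimator in part (ii) is just a cross-sectional weighted least-squares fit at each t, with weights \(1/\hat\omega_i^2\). The sketch below assumes a diagonal \(\hat\Omega\) (function and variable names are ours):

```python
# Hedged sketch: feasible GLS factor estimates
# F_tilde_t = (Lam' Om^{-1} Lam)^{-1} Lam' Om^{-1} x_t, for all t at once.
import numpy as np

def gls_factors(X, Lam_hat, om2):
    """X: T x N panel; Lam_hat: N x r loadings; om2: length-N variances."""
    W = Lam_hat / om2[:, None]                # Omega^{-1} Lam_hat
    A = np.linalg.inv(Lam_hat.T @ W)          # (Lam' Omega^{-1} Lam)^{-1}
    return X @ W @ A                          # T x r factor estimates

# usage: with noiseless data X = F Lam', the factors are recovered exactly
rng = np.random.default_rng(3)
F = rng.standard_normal((100, 2))
Lam = rng.standard_normal((30, 2))
F_tilde = gls_factors(F @ Lam.T, Lam, np.ones(30))
```

With noisy data the down-weighting of high-variance series is what produces the efficiency gain over the unweighted PC-OLS fit.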

Proof of Theorem 2:

First, we compare the asymptotic covariance matrices of the PC-OLS estimator \(\hat F_t\) and the PC-GLS estimator \(\tilde F_t\) (where for notational convenience the dependence on \(\hat\omega\) is suppressed). Using the results presented in Theorem 1 (ii), the asymptotic covariance matrix of \(\tilde F_t\) can be written as
\[
\lim_{N,T\to\infty} N\,(\Lambda_0'\Omega^{-1}\Lambda_0)^{-1}\Lambda_0'\Omega^{-1}E(e_te_t')\Omega^{-1}\Lambda_0(\Lambda_0'\Omega^{-1}\Lambda_0)^{-1},
\]
where \(\Lambda_0 = \Lambda H'^{-1}\). If the covariance structure is correctly specified, then \(E(e_te_t') = \Omega\) and the asymptotic covariance matrix reduces to
\[
\lim_{N,T\to\infty}\left(\frac{1}{N}\Lambda_0'\Omega^{-1}\Lambda_0\right)^{-1}.
\]
The asymptotic covariance matrix of the PC-OLS estimator is (Bai 2003)
\[
\lim_{N,T\to\infty} N\,(\Lambda_0'\Lambda_0)^{-1}\Lambda_0'E(e_te_t')\Lambda_0(\Lambda_0'\Lambda_0)^{-1}
= \lim_{N,T\to\infty} N\,(\Lambda_0'\Lambda_0)^{-1}\Lambda_0'\Omega\Lambda_0(\Lambda_0'\Lambda_0)^{-1},
\]
if the covariance matrix \(\Omega\) is correctly specified.

Let \(\hat F_t = \tilde F_t + \delta_t\), where \(\delta_t = \hat F_t - \tilde F_t\). From
\[
\operatorname{var}(\hat F_t) = \operatorname{var}(\tilde F_t) + \operatorname{var}(\delta_t) + \operatorname{cov}(\tilde F_t, \delta_t) + \operatorname{cov}(\delta_t, \tilde F_t)
\]
it follows that \(\tilde F_t\) is asymptotically more efficient than \(\hat F_t\) if \(N\operatorname{cov}(\tilde F_t, \delta_t) \to 0\) or, equivalently,
\[
\lim_{N,T\to\infty} N\,E\big[(\tilde F_t - H'F_t)(\hat F_t - H'F_t)'\big]
= \lim_{N,T\to\infty} N\,E\big[(\tilde F_t - H'F_t)(\tilde F_t - H'F_t)'\big].
\]
Since
\[
\lim_{N,T\to\infty} N\,E\big[(\tilde F_t - H'F_t)(\hat F_t - H'F_t)'\big]
= \lim_{N,T\to\infty} N\,(\Lambda_0'\Omega^{-1}\Lambda_0)^{-1}\Lambda_0'\Omega^{-1}E(e_te_t')\Lambda_0(\Lambda_0'\Lambda_0)^{-1}
= \lim_{N,T\to\infty}\left(\frac{1}{N}\Lambda_0'\Omega^{-1}\Lambda_0\right)^{-1},
\]
it follows that the difference between the asymptotic covariance matrix of the PC-OLS estimator and that of the PC-GLS estimator is positive semidefinite.
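The efficiency ranking for the factors can also be checked numerically: for any loading matrix \(\Lambda_0\) and diagonal \(\Omega\), the PC-OLS sandwich covariance minus the GLS covariance is positive semidefinite. The construction below is ours and purely illustrative.

```python
# Numerical check (our construction): V_OLS - V_GLS is positive
# semidefinite for a random Lam0 and a diagonal, heteroskedastic Omega.
import numpy as np

rng = np.random.default_rng(4)
N, r = 40, 2
Lam0 = rng.standard_normal((N, r))
om2 = rng.uniform(0.5, 3.0, size=N)                   # idiosyncratic variances
Om = np.diag(om2)

A = np.linalg.inv(Lam0.T @ Lam0)
V_ols = N * A @ (Lam0.T @ Om @ Lam0) @ A              # OLS sandwich form
V_gls = np.linalg.inv(Lam0.T @ np.linalg.inv(Om) @ Lam0 / N)

min_eig = np.linalg.eigvalsh(V_ols - V_gls).min()     # >= 0 up to rounding
```

This mirrors the Gauss-Markov logic of the proof: weighting by \(\Omega^{-1}\) cannot increase the asymptotic variance when \(\Omega\) is correctly specified.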

In a similar manner, it can be shown that the PC-GLS estimator of \(\lambda_i\) is asymptotically more efficient than the PC-OLS estimator. Let \(F_0 = FH\) and let \(R(\rho^{(i)})\) be as defined in Section 1.3. The asymptotic covariance matrix of the PC-GLS estimator \(\tilde\lambda_i\) presented in Theorem 1 (i) can be written as
\[
\lim_{N,T\to\infty} E\left[\frac{1}{T}F_0'R(\rho^{(i)})'R(\rho^{(i)})F_0\right]^{-1}
\times \lim_{N,T\to\infty} E\left[\frac{1}{T}F_0'R(\rho^{(i)})'R(\rho^{(i)})e_ie_i'R(\rho^{(i)})'R(\rho^{(i)})F_0\right]
\times \lim_{N,T\to\infty} E\left[\frac{1}{T}F_0'R(\rho^{(i)})'R(\rho^{(i)})F_0\right]^{-1}.
\]
If the autoregressive model for the idiosyncratic errors is correctly specified, we have \(E(\varepsilon_i\varepsilon_i') = \sigma_i^2 I_{T-p_i}\), where \(\varepsilon_i = R(\rho^{(i)})e_i\). If \(F_0\) is independent of \(\varepsilon_i\), it follows from the law of iterated expectations that the asymptotic covariance matrix of the PC-GLS estimator \(\tilde\lambda_i\) reduces to
\[
\sigma_i^2 \lim_{N,T\to\infty} E\left[\frac{1}{T}F_0'R(\rho^{(i)})'R(\rho^{(i)})F_0\right]^{-1}.
\]

Consider, writing \(R = R(\rho^{(i)})\) for brevity,
\[
\lim_{N,T\to\infty} T\,E\big[(\tilde\lambda_i - H^{-1}\lambda_i)(\hat\lambda_i - H^{-1}\lambda_i)'\big]
= \lim_{N,T\to\infty} E\left[\frac{1}{T}F_0'R'RF_0\right]^{-1}
\times \lim_{N,T\to\infty} E\left[\frac{1}{T}F_0'R'R\,e_ie_i'F_0\right]
\times \lim_{N,T\to\infty} E\left[\frac{1}{T}F_0'F_0\right]^{-1}
\]
\[
= \lim_{N,T\to\infty} E\left[\frac{1}{T}F_0'R'RF_0\right]^{-1}
\times \lim_{N,T\to\infty} E_{F_0}\left\{E\left[\frac{1}{T}F_0'R'R\,e_ie_i'R'R\,[R'R]^{-1}F_0 \,\Big|\, F_0\right]\right\}
\times \lim_{N,T\to\infty} E\left[\frac{1}{T}F_0'F_0\right]^{-1}
\]
\[
= \sigma_i^2 \lim_{N,T\to\infty} E\left[\frac{1}{T}F_0'R'RF_0\right]^{-1}
\]
and, therefore, the asymptotic covariance between \(\tilde\lambda_i\) and \(\hat\lambda_i - \tilde\lambda_i\) tends to zero. It follows that the PC-GLS estimator \(\tilde\lambda_i\) is asymptotically more efficient than the PC-OLS estimator \(\hat\lambda_i\).

Optimal monetary policy under labor market frictions: the role of wage rigidity and markup shocks