
5.1.1 Rotations and Identification Restrictions

The idea behind FAVARs is to combine the advantages of FMs and VARs. Although VARs are well established in the literature and offer many methods for measuring the impact of single variables on the whole system, they are restricted to a few time series and can therefore incorporate only a limited number of variables. By contrast, FA enables a significant reduction in dimension, as the panel data is described by a few latent factors. Similar to DFMs, FAVARs have a transition equation and a measurement equation.

However, the factor vector of a FAVAR is partially observed. In this section, we stick to FAVARs with complete time series, that is, neither the panel data nor the observed factor variables have any data gaps.

Definition 5.1.1 (Factor-Augmented Vector Autoregression Model)

For any point in time $t$, the vectors $X_t \in \mathbb{R}^N$ and $Y_t \in \mathbb{R}^M$ consist of panel data and other variables. Here, $X_t$ and $Y_t$ are not necessarily disjoint, i.e., they can share some variables. Furthermore, all univariate time series that are part of the processes $\{X_t\}$ and $\{Y_t\}$ are supposed to be complete, of the same frequency and standardized with zero mean and standard deviation of one. If the vector $F_t \in \mathbb{R}^K$ denotes the unobserved factors at time $t$, the measurement and transition equations of a FAVAR are defined as follows:

\[
X_t = \begin{bmatrix} \Lambda^f & \Lambda^y \end{bmatrix}
\begin{bmatrix} F_t \\ Y_t \end{bmatrix} + e_t,
\qquad e_t \sim \mathcal{N}\!\left(0_N, \Sigma_e\right) \ \text{iid}, \tag{5.1}
\]
\[
\begin{bmatrix} F_t \\ Y_t \end{bmatrix}
= \Phi(L) \begin{bmatrix} F_{t-1} \\ Y_{t-1} \end{bmatrix} + v_t
= \begin{bmatrix} \Phi^{ff}(L) & \Phi^{fy}(L) \\ \Phi^{yf}(L) & \Phi^{yy}(L) \end{bmatrix}
\begin{bmatrix} F_{t-1} \\ Y_{t-1} \end{bmatrix} + v_t,
\qquad v_t \sim \mathcal{N}\!\left(0_{K+M}, \Sigma_v\right) \ \text{iid}, \tag{5.2}
\]
with constant loadings matrices $\Lambda^f \in \mathbb{R}^{N \times K}$ and $\Lambda^y \in \mathbb{R}^{N \times M}$. The operator $\Phi(L)$ in (5.2) represents a conformable lag polynomial of order $p \geq 1$ given by $\Phi(L) = \Phi_1 + \Phi_2 L^1 + \ldots + \Phi_p L^{p-1}$ with constant coefficient matrices $\Phi_j \in \mathbb{R}^{(K+M)\times(K+M)}$ for $1 \leq j \leq p$. Moreover, we assume the idiosyncratic shocks $e_t \in \mathbb{R}^N$ and $v_t \in \mathbb{R}^{K+M}$ to be iid Gaussian and independent of each other. That is, $e_t \perp v_s$ for all $t, s$.
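For illustration purposes, the following minimal Python sketch simulates a small FAVAR according to (5.1)-(5.2) with lag order $p = 1$; all dimensions, coefficient values and the diagonal shock covariances are arbitrary assumptions and not part of Definition 5.1.1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions): N panel series, K latent factors,
# M observed factors, T observations, VAR order p = 1.
N, K, M, T = 20, 2, 1, 300

Lambda_f = rng.normal(size=(N, K))          # loadings on the latent factors
Lambda_y = rng.normal(size=(N, M))          # loadings on the observed factors
Phi1 = 0.5 * np.eye(K + M)                  # VAR(1) coefficient matrix (stationary choice)

Sigma_e = 0.1 * np.eye(N)                   # idiosyncratic shock covariance (diagonal here)
Sigma_v = 0.2 * np.eye(K + M)               # VAR shock covariance

FY = np.zeros((T, K + M))                   # stacked state [F_t', Y_t']'
X = np.zeros((T, N))
for t in range(1, T):
    v_t = rng.multivariate_normal(np.zeros(K + M), Sigma_v)
    FY[t] = Phi1 @ FY[t - 1] + v_t                                  # transition equation (5.2)
    e_t = rng.multivariate_normal(np.zeros(N), Sigma_e)
    X[t] = Lambda_f @ FY[t, :K] + Lambda_y @ FY[t, K:] + e_t        # measurement equation (5.1)
```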

The above FAVAR definition complies with the FAVAR definition in Bernanke et al. (2005). They combine PCA and OLS regressions for parameter estimation. To this end, they assume the shocks $v_t$ to be zero-mean with covariance matrix $\Sigma_v$, but do not pinpoint a concrete distribution. For the errors $e_t$, they assume zero mean, but admit two correlation cases: the errors $e_t$ are either uncorrelated or weakly cross-sectionally correlated. In the sequel, we will discuss restrictions to avoid parameter ambiguity and thus, identification issues. Therefore, we do not comment further on the shock distributions in Definition 5.1.1 at this point. In particular, we do not assume the covariance matrix $\Sigma_e$ to be diagonal and follow the argumentation in Doz et al. (2012) to justify that weakly cross-sectionally correlated errors can be ignored as in Bańbura and Modugno (2014). Note, Bernanke et al. (2005) also presented a Bayesian estimation method, which exceeds the scope of this section.
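The two-step idea of Bernanke et al. (2005) can be sketched as follows: the latent factors are approximated by the first $K$ principal components of the panel, and the stacked vector is regressed on its own lag. The sketch below continues the simulated data from above and is a deliberately simplified variant; in particular, it omits the step in Bernanke et al. (2005) that purges the estimated factors of the observed variables $Y_t$.

```python
import numpy as np

# Continues the simulation sketch above: X is the (T x N) panel,
# Y = FY[:, K:] contains the observed factors, K is the number of latent factors.
Y = FY[:, K:]

# Step 1 (PCA): estimate the latent factors as the first K principal components of X.
X_c = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(X_c, full_matrices=False)
F_hat = X_c @ Vt[:K].T                      # estimated factors (identified up to rotation only)

# Step 2 (OLS): regress [F_hat_t', Y_t']' on its first lag to obtain VAR(1) coefficients.
Z = np.hstack([F_hat, Y])
Phi_hat, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)
Phi_hat = Phi_hat.T                         # so that Z_t is approximated by Phi_hat @ Z_{t-1}

# OLS for the loadings: regress each panel series on [F_hat_t', Y_t']'.
Lambda_hat, *_ = np.linalg.lstsq(Z, X_c, rcond=None)
Lambda_hat = Lambda_hat.T                   # (N x (K+M)) loadings estimate
```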

In the sequel, we assume the VAR(p) process in (5.2) to be covariance-stationary as in Definition A.2.1.

Equation (5.2) is a standard VAR($p$) in the observed variables $Y_t$, if all matrix elements of $\Phi(L)$ covering the impact of $F_{t-1}$ on $Y_t$ are zero (Bernanke et al., 2005). Otherwise, Bernanke et al. (2005) call (5.2) the transition equation of a FAVAR. Moreover, they note: First, the FAVAR in (5.2) nests a VAR($p$), supporting comparisons with general VAR($p$) results and assessments of the marginal contribution of the factors $F_t$. Second, if the true system is a FAVAR, neglecting the hidden factors $F_t$ and sticking to the simple VAR in $Y_t$ will cause biased estimation results such that the interpretation of Impulse Response Functions (IRFs) and Forecast Error Variance Decompositions (FEVDs) may be faulty.
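To illustrate how IRFs arise from the transition equation, the following sketch iterates a VAR(1) coefficient matrix on a one-off unit shock; the function name and the example coefficients are illustrative assumptions, and structural identification of the shock as well as the companion form for $p > 1$ are omitted.

```python
import numpy as np

def var1_irf(Phi1: np.ndarray, shock: np.ndarray, horizon: int) -> np.ndarray:
    """Responses of a VAR(1) system z_t = Phi1 z_{t-1} + v_t to a one-off shock at t = 0.

    Returns an array of shape (horizon + 1, dim): row h is the response after h periods.
    Structural identification of the shock is left to the caller.
    """
    responses = [shock]
    for _ in range(horizon):
        responses.append(Phi1 @ responses[-1])
    return np.asarray(responses)

# Illustrative example: response of a 3-dimensional system to a unit shock in the first element.
Phi1 = np.array([[0.5, 0.1, 0.0],
                 [0.0, 0.4, 0.2],
                 [0.1, 0.0, 0.3]])
irf = var1_irf(Phi1, shock=np.array([1.0, 0.0, 0.0]), horizon=12)
```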

For $M = 0$, i.e., in the absence of observed factor elements, FAVARs coincide with ADFMs by definition.

However, for $M > 0$, Bork (2009) and Marcellino and Sivec (2016) showed that special loadings constraints and properly sorted panel data, where the observed variables $Y_t$ are a subset of $X_t$, result in the common state-space representation of ADFMs. Since the variables $Y_t$ contained in $X_t$ on the left-hand side of (5.1) are identically mapped to the vector $Y_t$ on the right-hand side of (5.1), some identification problems of the model parameters are implicitly solved. But this mapping also implies that the respective entries of the idiosyncratic shocks $e_t$ must be zero and thus, the covariance matrix $\Sigma_e$ does not have full rank. From a theoretical perspective, the reduced rank of the covariance matrix $\Sigma_e$ causes its inverse $\Sigma_e^{-1}$, which enters the log-likelihood function, to be not well-defined. In addition, the reduced rank of $\Sigma_e$ and the fact that the variables $Y_t$ are observed, such that the covariance matrix of $Y_t$ conditioned on the information up to time $t$ is a zero matrix, have to be addressed when the standard KF and KS in Section 2.1.4 are applied. In such cases, the inverse matrices in the Kalman gains of Lemmata 2.1.8 and 2.1.9 may not always exist. If this reduced-rank issue is neglected on purpose, since the standard KF and KS still work, one implicitly ignores the observability of $Y_t$ on the right-hand side of (5.2). Regardless of the performed adjustments, specific ADFMs rather than real FAVARs are estimated.

Because of (5.1), the vector $\left[F_t', Y_t'\right]'$ drives the dynamics of $X_t$. This explains why Bernanke et al. (2005) regard all $X_t$ as “noisy measures of the underlying unobserved factors $F_t$”. For monetary policy analysis, the variables $Y_t$ often cover policy instruments, e.g., the US Effective Federal Funds Rate (FEDFUNDS) or the Monetary Base. By contrast, in conventional VARs, the vector $Y_t$ contains all data and thus, lacks the ability to include additional information in the form of the factors $F_t$. In case of FAVARs, the size of the panel data can be large such that we often have: $K + M \ll N$. For reasons of clarity, we proceed with the assumption $N < T$ in the sequel.

Lemma 5.1.2 (Ambiguity of FAVAR Parameters)

Let $R \in \mathbb{R}^{(K+M)\times(K+M)}$ be a non-singular matrix defined as follows:
\[
R = \begin{pmatrix} R_1 & R_2 \\ O_{M \times K} & I_M \end{pmatrix},
\]
with $O_{M \times K} \in \mathbb{R}^{M \times K}$ as zero matrix. The matrices $R_1 \in \mathbb{R}^{K \times K}$ and $R_2 \in \mathbb{R}^{K \times M}$ are arbitrary as long as the non-singularity of the matrix $R$ is kept. Then, the transformed vector $\left[\breve{F}_t{}', Y_t'\right]' = R \left[F_t', Y_t'\right]' \in \mathbb{R}^{K+M}$ can be equivalently rewritten in the form of (5.1)-(5.2).

Proof:

In case of (5.1), we receive with the non-singular matrix $R$:
\[
X_t = \begin{bmatrix} \Lambda^f & \Lambda^y \end{bmatrix}
\begin{bmatrix} F_t \\ Y_t \end{bmatrix} + e_t
= \begin{bmatrix} \Lambda^f & \Lambda^y \end{bmatrix} R^{-1} R
\begin{bmatrix} F_t \\ Y_t \end{bmatrix} + e_t. \tag{5.3}
\]

As mentioned in Bai et al. (2015), the matrix $R$ has to map the observed vector $Y_t$ to itself, i.e., we get:
\[
Y_t = \begin{bmatrix} R_3 & R_4 \end{bmatrix}
\begin{bmatrix} F_t \\ Y_t \end{bmatrix}
\quad \Leftrightarrow \quad
\begin{bmatrix} R_3 & \left(R_4 - I_M\right) \end{bmatrix}
\begin{bmatrix} F_t \\ Y_t \end{bmatrix} = 0_M.
\]

Since this must hold for all points in time $t$, we obtain: $R_3 = O_{M \times K}$ and $R_4 = I_M$. For the inverse of the resulting matrix $R$ we have:

\[
R^{-1} = \begin{pmatrix} R_1^{-1} & -R_1^{-1} R_2 \\ O_{M \times K} & I_M \end{pmatrix}, \tag{5.4}
\]
such that $K(K+M)$ degrees of freedom are still left. $\Box$

Lemma 5.1.2 confirms that the model in (5.1)-(5.2) is econometrically unidentified. For instance, $R_1 = I_K$ and $R_2 = O_{K \times M}$ or $R_1 = I_K$ and $R_2 = 1_K 1_M'$ are possible choices. Moreover, Lemma 5.1.2 emphasizes a feature of FAVARs, that is, the observability of the vector $Y_t$ constrains the matrix $R$ to map $Y_t$ onto itself.
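Lemma 5.1.2 can also be checked numerically: transforming the stacked state with a matrix $R$ of the required shape and adjusting the loadings by $R^{-1}$ leaves $X_t$ unchanged, while $Y_t$ is mapped onto itself. The dimensions and random matrices in the sketch below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M, N = 2, 1, 6

# Matrix R with the shape required by Lemma 5.1.2: arbitrary R1 (non-singular) and R2.
R1 = rng.normal(size=(K, K)) + 2 * np.eye(K)
R2 = rng.normal(size=(K, M))
R = np.block([[R1, R2],
              [np.zeros((M, K)), np.eye(M)]])

Lam = rng.normal(size=(N, K + M))           # stacked loadings [Lambda_f, Lambda_y]
FY = rng.normal(size=(K + M,))              # some stacked state [F_t', Y_t']'

X_original = Lam @ FY
X_rotated = (Lam @ np.linalg.inv(R)) @ (R @ FY)   # transformed loadings and state as in (5.3)

assert np.allclose(X_original, X_rotated)   # observationally equivalent
assert np.allclose((R @ FY)[K:], FY[K:])    # the observed block Y_t is mapped onto itself
```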

Next, we follow the ideas in Bai et al. (2015) once again to further simplify our FAVAR representation.

Lemma 5.1.3 (FAVAR Formulation with Partially Uncorrelated VAR Shocks)

For the non-singular matrix $R \in \mathbb{R}^{(K+M)\times(K+M)}$ in Lemma 5.1.2, the vector $\breve{v}_t = R v_t \in \mathbb{R}^{K+M}$ denotes the transformed errors of the VAR($p$) in (5.2) and is iid Gaussian as follows:
\[
\breve{v}_t \sim \mathcal{N}\!\left(0_{K+M}, \Sigma_{\breve{v}}\right) \ \text{iid}, \qquad
\Sigma_{\breve{v}} = R\, \Sigma_v\, R' =
\begin{bmatrix}
\Sigma^{ff}_{\breve{v}} & \Sigma^{fy}_{\breve{v}} \\
\Sigma^{yf}_{\breve{v}} & \Sigma^{yy}_{\breve{v}}
\end{bmatrix}.
\tag{5.5}
\]
By defining the matrix $H \in \mathbb{R}^{(K+M)\times(K+M)}$ as
\[
H =
\begin{bmatrix}
\left(\Sigma^{ff}_{\breve{v}|y}\right)^{-1/2} & -\left(\Sigma^{ff}_{\breve{v}|y}\right)^{-1/2} \Sigma^{fy}_{\breve{v}} \left(\Sigma^{yy}_{\breve{v}}\right)^{-1} \\
O_{M \times K} & I_M
\end{bmatrix},
\qquad
\Sigma^{ff}_{\breve{v}|y} = \Sigma^{ff}_{\breve{v}} - \Sigma^{fy}_{\breve{v}} \left(\Sigma^{yy}_{\breve{v}}\right)^{-1} \Sigma^{yf}_{\breve{v}},
\]
for the invertible matrix $R$ in Lemma 5.1.2, the transformed FAVAR is given by:
\[
X_t =
\begin{bmatrix} \bar{\Lambda}^f & \bar{\Lambda}^y \end{bmatrix}
\begin{bmatrix} \bar{F}_t \\ Y_t \end{bmatrix} + e_t,
\tag{5.6}
\]
\[
\begin{bmatrix} \bar{F}_t \\ Y_t \end{bmatrix}
= \bar{\Phi}(L)
\begin{bmatrix} \bar{F}_{t-1} \\ Y_{t-1} \end{bmatrix} + \bar{v}_t,
\qquad \bar{v}_t \sim \mathcal{N}\!\left(0_{K+M}, \Sigma_{\bar{v}}\right) \ \text{iid},
\tag{5.7}
\]
with $\left[\bar{F}_t{}', Y_t'\right]' = HR \left[F_t', Y_t'\right]'$, $\begin{bmatrix} \bar{\Lambda}^f & \bar{\Lambda}^y \end{bmatrix} = \begin{bmatrix} \Lambda^f & \Lambda^y \end{bmatrix} (HR)^{-1}$, $\bar{\Phi}(L) = HR\, \Phi(L)\, (HR)^{-1}$, $\bar{v}_t = H \breve{v}_t$ and block-diagonal covariance matrix $\Sigma_{\bar{v}} = \operatorname{diag}\!\left(I_K, \Sigma^{yy}_{\breve{v}}\right)$.

Analogously to (5.3), the FAVAR in (5.1)-(5.2) transformed with the matrix $R$ alone reads:
\[
X_t =
\begin{bmatrix} \Lambda^f & \Lambda^y \end{bmatrix} R^{-1}
\begin{bmatrix} \breve{F}_t \\ Y_t \end{bmatrix} + e_t,
\tag{5.8}
\]
\[
\begin{bmatrix} \breve{F}_t \\ Y_t \end{bmatrix}
= R\, \Phi(L)\, R^{-1}
\begin{bmatrix} \breve{F}_{t-1} \\ Y_{t-1} \end{bmatrix} + \breve{v}_t,
\tag{5.9}
\]
with $\left[\breve{F}_t{}', Y_t'\right]' = R \left[F_t', Y_t'\right]'$. The matrices $\Sigma^{ff}_{\breve{v}|y}$ and $\Sigma^{yy}_{\breve{v}}$ are supposed to have full rank such that their inverse matrices exist and thus, the matrix $H$ is well-defined. The inverse matrix of $H$ is given by:
\[
H^{-1} =
\begin{bmatrix}
\left(\Sigma^{ff}_{\breve{v}|y}\right)^{1/2} & \Sigma^{fy}_{\breve{v}} \left(\Sigma^{yy}_{\breve{v}}\right)^{-1} \\
O_{M \times K} & I_M
\end{bmatrix},
\tag{5.10}
\]
which is well-defined, too. Note, the matrix $H$ is non-singular and has the shape Lemma 5.1.2 calls for. Hence, it also belongs to the mentioned class of transformation matrices. In particular, we can prove that the product of the matrices $H$ and $R$ remains in this transformation class:
\[
HR =
\begin{bmatrix}
\left(\Sigma^{ff}_{\breve{v}|y}\right)^{-1/2} R_1 & \left(\Sigma^{ff}_{\breve{v}|y}\right)^{-1/2} \left( R_2 - \Sigma^{fy}_{\breve{v}} \left(\Sigma^{yy}_{\breve{v}}\right)^{-1} \right) \\
O_{M \times K} & I_M
\end{bmatrix}.
\tag{5.11}
\]
With this in mind, we insert the matrix $H$ in (5.8)-(5.9) and obtain:
\[
X_t =
\begin{bmatrix} \Lambda^f & \Lambda^y \end{bmatrix} (HR)^{-1} HR
\begin{bmatrix} F_t \\ Y_t \end{bmatrix} + e_t
=
\begin{bmatrix} \bar{\Lambda}^f & \bar{\Lambda}^y \end{bmatrix}
\begin{bmatrix} \bar{F}_t \\ Y_t \end{bmatrix} + e_t,
\]
and analogously for the transition equation. Next, the covariance matrix $\Sigma_{\bar{v}}$ can be simplified in the following manner:
\[
\Sigma_{\bar{v}} = \operatorname{Cov}\!\left(H \breve{v}_t\right) = H\, \Sigma_{\breve{v}}\, H' =
\begin{bmatrix}
I_K & O_{K \times M} \\
O_{M \times K} & \Sigma^{yy}_{\breve{v}}
\end{bmatrix}.
\tag{5.12}
\]
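The construction of $H$ sketched above can be verified numerically. The following sketch assumes the symmetric inverse square root of $\Sigma^{ff}_{\breve{v}|y}$ (a Cholesky-based factor would work analogously up to a rotation) and checks that the transformed shock covariance equals $\operatorname{diag}(I_K, \Sigma^{yy}_{\breve{v}})$; all dimensions and the random covariance matrix are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
K, M = 3, 2

# A random positive definite covariance of the transformed VAR shocks, partitioned into blocks.
A = rng.normal(size=(K + M, K + M))
Sigma_v_breve = A @ A.T + np.eye(K + M)
Sff, Sfy = Sigma_v_breve[:K, :K], Sigma_v_breve[:K, K:]
Syf, Syy = Sigma_v_breve[K:, :K], Sigma_v_breve[K:, K:]

# Conditional covariance of the factor shocks given the Y-shocks and its symmetric inverse square root.
Sff_given_y = Sff - Sfy @ np.linalg.solve(Syy, Syf)
w, V = np.linalg.eigh(Sff_given_y)
Sff_given_y_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# H as sketched above: decorrelates the factor shocks from the Y-shocks and normalizes them.
H = np.block([[Sff_given_y_inv_sqrt, -Sff_given_y_inv_sqrt @ Sfy @ np.linalg.inv(Syy)],
              [np.zeros((M, K)), np.eye(M)]])

Sigma_v_bar = H @ Sigma_v_breve @ H.T
assert np.allclose(Sigma_v_bar[:K, :K], np.eye(K))        # unit covariance of the factor shocks
assert np.allclose(Sigma_v_bar[:K, K:], 0, atol=1e-10)    # factor and Y-shocks are uncorrelated
assert np.allclose(Sigma_v_bar[K:, K:], Syy)              # Y-shock covariance unchanged
```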

As shown in Lemma 5.1.2, the FAVAR parameters in Definition 5.1.1 are unique except for a non-singular transformation with $K(K+M)$ degrees of freedom. To eliminate this parameter ambiguity, the one-step estimation method of Bernanke et al. (2005) restricted the first $K$ rows of the loadings matrix as follows:
\[
\bar{\Lambda}^f =
\begin{bmatrix}
I_K \\
\bar{\Lambda}^f_{(N-K) \times K}
\end{bmatrix}.
\tag{5.13}
\]
The same method is used in Marcellino and Sivec (2016). Although Bork (2009) took over this idea, he gained flexibility such that he admitted the structure in (5.13) to be scattered across any $K$ rows of the loadings matrix. That is, not necessarily the first $K$ rows matter.

In this chapter, linear parameter constraints also guarantee parameter identifiability. For this purpose, we linearly constrain the loadings matrix in (5.6) or the VAR($p$) coefficients in (5.7). The transformed model in (5.6)-(5.7) shows that the total mapping $HR$ does not affect the observed variables $Y_t$, but it simplifies the covariance matrix of the errors $\bar{v}_t$ in the transition equation. Furthermore, the special shape of the matrix $H$ decreases the number of degrees of freedom to $K(K-1)/2$. To verify this, let $\tilde{R} \in \mathbb{R}^{(K+M)\times(K+M)}$ be a non-singular matrix defined as follows:
\[
\tilde{R} =
\begin{pmatrix}
\tilde{R}_1 & \tilde{R}_2 \\
O_{M \times K} & I_M
\end{pmatrix}.
\tag{5.14}
\]
To preserve the structure of the covariance matrix $\Sigma_{\bar{v}}$, any additional transformation has to satisfy:
\[
\tilde{R}\, \Sigma_{\bar{v}}\, \tilde{R}' = \Sigma_{\bar{v}}
\quad \Leftrightarrow \quad
\tilde{R}_2 = O_{K \times M} \ \text{and} \ \tilde{R}_1 \tilde{R}_1' = I_K,
\]
such that we have $K(K-1)/2$ degrees of freedom left. Thus, the matrix $\tilde{R}_1$ must be a rotation matrix.

Therefore, we do not impose restrictions on the matrix $\bar{\Lambda}^y$ in (5.6), when we talk about loadings constraints. However, for the matrix $\bar{\Lambda}^f$ in (5.6), we propose the following formulation:
\[
\bar{\Lambda}^f =
\begin{bmatrix}
\bar{\Lambda}^f_{K \times K} \\
\bar{\Lambda}^f_{(N-K) \times K}
\end{bmatrix},
\qquad \bar{\Lambda}^f_{K \times K} \in \mathbb{R}^{K \times K} \ \text{lower triangular},
\tag{5.15}
\]
where the matrix $\bar{\Lambda}^f_{(N-K) \times K} \in \mathbb{R}^{(N-K) \times K}$ comprises the unconstrained last $N-K$ rows of $\bar{\Lambda}^f$ and the upper $K \times K$-dimensional submatrix of $\bar{\Lambda}^f$ represents a lower triangular matrix. Before we proceed, we explain in more detail how to obtain the shape of $\bar{\Lambda}^f$ in (5.15) using the rotation matrix $\tilde{R}$ in (5.14).

Referring to Golub and Van Loan (1996, p. 215, Section 5.1.8), Givens Rotations enable us to zero single entries of a vector and belong to the class of rotation matrices. In the sequel, let $\bar{\Lambda}^f_{K \times K} \in \mathbb{R}^{K \times K}$ be the unrestricted upper block matrix of $\bar{\Lambda}^f$ in (5.6) and let $G_{i,j} \in \mathbb{R}^{K \times K}$ be the Givens Rotation to zero the element in row $i$ and column $j$ of $\bar{\Lambda}^f_{K \times K}$. If $\tilde{G} \in \mathbb{R}^{K \times K}$ collects the product of all Givens Rotations $G_{i,j}$ needed to zero the elements above the main diagonal of $\bar{\Lambda}^f_{K \times K}$, then for the matrix $\tilde{R}$ defined by:
\[
\tilde{R} =
\begin{bmatrix}
\tilde{G} & O_{K \times M} \\
O_{M \times K} & I_M
\end{bmatrix}
\]
the FAVAR in (5.6)-(5.7) keeps the special structure of the covariance matrix $\Sigma_{\bar{v}}$.
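As a minimal sketch of this construction, the helper below (a hypothetical function, not taken from a library) accumulates Givens Rotations applied from the right until all entries above the main diagonal of the upper $K \times K$ block are zero; the accumulated orthogonal matrix plays the role of $\tilde{G}$, since $\bar{\Lambda}^f_{K \times K}\, \tilde{G}'$ becomes lower triangular as required in (5.15).

```python
import numpy as np

def right_givens_lower_triangular(A: np.ndarray):
    """Accumulate Givens Rotations from the right such that A @ G.T is lower triangular.

    Returns (L, G) with L = A @ G.T lower triangular and G orthogonal.
    """
    K = A.shape[0]
    L = A.astype(float).copy()
    G = np.eye(K)
    for i in range(K):                  # zero the entries above the main diagonal row by row
        for j in range(i + 1, K):
            a, b = L[i, i], L[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            rot = np.eye(K)             # Givens Rotation acting on columns i and j
            rot[i, i], rot[j, j] = c, c
            rot[j, i], rot[i, j] = s, -s
            L = L @ rot
            G = rot.T @ G
    return L, G

# Illustrative example: lower-triangularize a random upper K x K loadings block.
rng = np.random.default_rng(3)
Lam_ff = rng.normal(size=(4, 4))
L, G = right_givens_lower_triangular(Lam_ff)
assert np.allclose(np.triu(L, k=1), 0)          # entries above the diagonal are zero
assert np.allclose(G @ G.T, np.eye(4))          # G is orthogonal (a product of rotations)
assert np.allclose(Lam_ff @ G.T, L)             # L is the rotated loadings block
```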

Finally, we put the FAVAR in (5.6)-(5.7) with the loadings restrictions in (5.15) in relation to the results from Bai et al. (2015). If $\Sigma_e$ is a diagonal matrix, “Assumptions A-D” and the identification restrictions “IRb” in Bai et al. (2015) are satisfied such that their asymptotic distributions of the factor loadings, the coefficient matrices and the IRFs remain valid. Unfortunately, their lengthy expressions of the distribution parameters appear cumbersome and unattractive, when it comes to their implementation.