


3.3.2 Monte Carlo results: first-step estimates

We focus on four different sets of moment matches to estimate the seven macro parameters in ξ_M, as described in Section 3.2.5. Table 3.5 shows that the number of moments m used for the GMM estimation ranges from exact identification (m = 7) to ample over-identification (m = 185).6 All four setups include the five moment matches of Equation (3.12) and then add increasing numbers of auto-moments selected from Equation (3.13). The maximum lag order is L1 = L2 = L3 = 60 (m = 185), meaning that we use auto-moments up to a lag of five years, assuming a monthly frequency. The intermediate cases use L1 = L2 = L3 = 10 (m = 35) and L1 = L2 = L3 = 36 (m = 113).
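
As a quick cross-check on these counts (our reading of Table 3.5, not an additional result), the over-identified variants are consistent with the five moment matches of Equation (3.12) plus one auto-moment match per lag in each of the three blocks of Equation (3.13), whereas the exactly identified case adds only the first two auto-moments of consumption growth:

\[
m = 5 + L_1 + L_2 + L_3: \qquad 5 + 3\cdot 10 = 35, \quad 5 + 3\cdot 36 = 113, \quad 5 + 3\cdot 60 = 185, \qquad \text{versus} \qquad m = 5 + 2 = 7.
\]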

We obtain the first-stage GMM macro estimates by using W_T^M = I_m in Equation (3.9). To check whether an asymptotically efficient weighting scheme is beneficial in smaller samples, we also compute second-stage GMM estimates, based on the distance matrix

\[
W_T^M = \left[ \operatorname{Var}_T\!\left( g_t^M - \mathrm{E}\!\left[ g^M\!\left(q_t;\, \hat{\xi}_M^{(1)}\right) \right] \right) \right]^{-1}, \qquad (3.22)
\]

where ξ̂_M^(1) is the first-stage GMM estimate, g^M is the observation function pertaining to the respective macro moment match, and Var_T(·) denotes a sample variance-covariance matrix. Asymptotically efficient GMM estimation should use a distance matrix W_T = Ŝ^{-1} →_p S^{-1} in Equation (3.3), where S = lim_{T→∞} Var(T^{-1/2} G_T(ξ_0)).

We experimented with alternative estimators of S that account for serial correlation in g_t^M, but the estimate in Equation (3.22) delivered the best results in finite samples.

6 To produce the results in Panel C of Table 3.2, we use the m = 185 variant for the first-step estimation and the theory-based financial moment matches G_T^P for the second-step estimation.
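
For concreteness, the following minimal sketch shows how the distance matrix of Equation (3.22) could be computed in practice. It is an illustration only: the function and argument names are hypothetical, and the two inputs are assumed to be the stacked moment contributions computed from the data and the model-implied moments evaluated at the first-stage estimate (e.g., obtained from a long model simulation).

    import numpy as np

    def second_stage_weighting(g_obs, g_model_mean):
        """Distance matrix in the spirit of Equation (3.22).

        g_obs        : (T, m) array, moment contributions g_t^M computed from the data
        g_model_mean : (m,) array, model-implied moments E[g^M(q_t; xi_hat_1)]
                       evaluated at the first-stage estimate
        """
        u = g_obs - g_model_mean           # demeaned moment contributions
        S_hat = (u.T @ u) / u.shape[0]     # sample variance-covariance matrix (m x m)
        return np.linalg.inv(S_hat)        # W_T^M = [Var_T(...)]^{-1}

With m = 185 moments and short samples, the sample covariance matrix can be poorly conditioned; replacing the inverse with a pseudo-inverse (np.linalg.pinv) would be a pragmatic fallback, although this is a design choice of the sketch rather than something discussed in the text.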

We intentionally choose starting values located at some distance from the true parameters.7 Poor initial values make the problem harder for the optimization algorithm and the Monte Carlo study more time-consuming, but they also guard against the threat of overly optimistic results. In light of the aforementioned results, we seek to avoid this fallacy at all costs. Any replications for which the optimization algorithm failed or that produced implausible estimates are excluded from Table 3.6 and Figure 3.2.8 The number of successful estimations, which we report in Panel H of Table 3.6, is itself an interesting statistic, because it indicates how well the respective moment matches define the optimization problem. Table 3.6 contains the means and standard deviations of the macro parameter estimates computed across successful replications, and Figure 3.2 illustrates the results.

The T = 100k results show that the GMM estimation strategy works and that the macro moment matches can identify the macro parameters in ξ_M. The bias in the estimates vanishes, the standard deviation shrinks, and estimation failure is a rare event. There is a notable exception, though: the bias and standard deviation of the estimates of ρ and ϕ_e remain considerably large for m = 7; the bias of the estimate of φ is small, but its standard deviation is not. The moment sensitivity analysis in Section 3.2.6 has already suggested that these three parameters, which are associated with the latent growth component x_t, may prove difficult to estimate. Note that the four sets of moment matches differ only with respect to the number of auto-moments. The m = 7 variant uses just the first two auto-moments of consumption growth, and the simulation results indicate that this is not enough: auto-moment matches that involve higher lags are required to identify ρ, ϕ_e, and φ.

7 The starting values are µ_c = 0.018, µ_d = 0.018, ρ = 0.881, σ = 0.082, ϕ_e = 0.003, φ = 7.389, and ϕ_d = 7.389.

8 An estimation result is considered implausible if one of the parameter values to which the NM algorithm converges differs from the true parameter by a factor of 10 or more. In an empirical application, a treatment of problematic data could use different starting values and optimization algorithms, and tune the algorithm's parameters. However, such a clinical approach is impractical in a large-scale simulation study.
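
To illustrate how a single replication is handled, the sketch below minimizes a generic GMM criterion from the poor starting values of footnote 7 and then applies the factor-of-10 screen of footnote 8. Everything beyond those two ingredients is hypothetical: the function names, the moment inputs, the optimizer options, and the reading of "NM" as the Nelder-Mead algorithm; the screen assumes nonzero true parameters.

    import numpy as np
    from scipy.optimize import minimize

    # Deliberately poor starting values from footnote 7
    # (order: mu_c, mu_d, rho, sigma, phi_e, phi, phi_d)
    START = np.array([0.018, 0.018, 0.881, 0.082, 0.003, 7.389, 7.389])

    def gmm_objective(xi, g_data, model_moments, W):
        """Quadratic GMM criterion G_T(xi)' W G_T(xi); `model_moments` is a
        hypothetical function returning the model-implied moments for xi."""
        G = g_data - model_moments(xi)
        return float(G @ W @ G)

    def estimate_one_replication(g_data, model_moments, W, true_xi, factor=10.0):
        """Nelder-Mead minimization from the poor starting values, followed by
        the plausibility screen of footnote 8."""
        res = minimize(gmm_objective, START, args=(g_data, model_moments, W),
                       method="Nelder-Mead",
                       options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-10})
        ratio = np.abs(res.x / np.asarray(true_xi))
        plausible = bool(res.success and np.all((ratio < factor) & (ratio > 1.0 / factor)))
        return res.x, plausible   # exclude the replication if not plausible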

The T = 5k results corroborate the benefits of exploiting the information contained in higher-order auto-moments, but now sampling error takes effect. As might be expected from the 100k results, the parameters associated with x_t prove hard to estimate, but the results can be improved by using higher auto-moment matches. The improvement is most striking for the autoregressive parameter ρ. A comparison of the m = 7 and m = 35 results on the one hand, and the m = 113 and m = 185 results on the other hand, shows that estimation precision increases with the use of higher auto-moments. Figure 3.2 also illustrates the substantial improvement from m = 7 and m = 35 to m = 113, whereas the further gain from using m = 185 is more marginal. Asymptotically efficient weighting is particularly useful to hone the estimation results for ϕ_e, though only in combination with higher auto-moment matches. In general, using an asymptotically efficient distance matrix cannot replace the use of higher auto-moments.

The estimation precision is good for µ_c, σ, and ϕ_d, confirming the conjecture that the moment matches in Equation (3.12) should identify these parameters quite well. The mean of dividend growth, µ_d, proves hard to estimate, because the dividend growth series is volatile. Using auto-moments is no remedy here.

The T = 1k results confirm these conclusions, although sampling error becomes more of an issue, as does the increasing number of failed estimations. Estimation precision is reduced in particular for the critical parameters ρ, ϕ_e, and φ. However, the use of higher auto-moments can again mitigate these problems. Estimation precision improves when moving from m = 7 to m = 113, and the number of failed replications decreases. A comparison of m = 35 with m = 113 shows a substantial improvement. The effect of increasing the number of auto-moments further, e.g. from m = 113 to m = 185, is less pronounced.

The T = 1k results indicate that the favorably small asymptotic standard errors reported in empirical estimations of LRR models should be taken with a grain of salt: these applications use much smaller sample sizes, and the simulation results show that estimation precision with the currently available sample sizes must be limited. Using the information contained in higher auto-moments is beneficial, but time is a constraining factor. The available consumption and dividend time series are relatively short, creating the familiar trade-off between efficiency (allowing for a high lag order) and robustness. The improvement in estimation quality from m = 35 (max. lag: <1 year) to m = 113 (max. lag: 3 years) is considerable, but the incremental benefit of using m = 185 (max. lag: 5 years) may be offset by picking up noise from the data.