

3.4.4 Approximation of Survey Expectations

Having seen that the forecasting properties of the proposed models are not generally better than those of simple backward–looking ones, it is now important to see whether the simulated forecast series $\pi^f_{t+h|t}$ match the survey–based measures $\pi^e_{t+h|t}$. In principle, the approximation properties can also be tested by the augmented version of the Diebold–Mariano statistic.¹⁴ The resulting test statistics can be inferred from tables 3.4 to 3.6. In addition, the sample is split into two sub–samples: the first panel of each table covers the whole sample 1980–2007, the middle panel the Volcker disinflation period 1980–1987, and the last panel the more moderate period 1988–2007.

¹⁴ Here, the null hypothesis is given by $H_0: \mathrm{E}\big[\,|\pi^e_{t+h|t} - \pi^{f,i}_{t+h|t}| - |\pi^e_{t+h|t} - \pi^{f,j}_{t+h|t}|\,\big] = 0$.
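The text and table notes describe the statistic only briefly, so the following Python snippet is a minimal illustration rather than the author's code: it computes a modified Diebold–Mariano statistic on absolute deviations from a survey measure, assuming the Harvey–Leybourne–Newbold small–sample correction and a rectangular lag window of length h−1 for the long–run variance. The function name and arguments are hypothetical.

```python
import numpy as np

def modified_dm_stat(survey, fcast_i, fcast_j, h):
    """Modified Diebold-Mariano statistic on absolute deviations.

    Loss differential: d_t = |pi^e - pi^{f,i}| - |pi^e - pi^{f,j}|.
    Returns the small-sample corrected statistic, to be compared with
    a t-distribution with n-1 degrees of freedom.
    """
    d = np.abs(survey - fcast_i) - np.abs(survey - fcast_j)
    n = d.size
    d_bar = d.mean()
    # Long-run variance of d_t with a rectangular window up to lag h-1,
    # since h-step forecast errors are at most MA(h-1) under the null.
    var = np.mean((d - d_bar) ** 2)
    for k in range(1, h):
        var += 2.0 * np.mean((d[k:] - d_bar) * (d[:-k] - d_bar))
    dm = d_bar / np.sqrt(var / n)
    # Harvey-Leybourne-Newbold correction factor for small samples.
    return np.sqrt((n + 1 - 2 * h + h * (h - 1) / n) / n) * dm
```

Under the null, the corrected statistic is compared with a t–distribution with n − 1 degrees of freedom, which is also how the critical values quoted in the notes to tables 3.4 to 3.6 are obtained.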

        SPF h=1, 1980–2007                              |        SPF h=4, 1980–2007
Model      I      II     III     IV      V     VI    VII |      I     II    III     IV      V     VI    VII
I       0.00   5.84   6.45   4.95   6.37   4.42  -0.61   |   0.00  -2.21   0.49  -1.41   0.00  -0.83  -1.98
II     -5.84   0.00   2.04   2.05   1.67   1.41  -2.44   |   2.21   0.00   1.51  -0.94   1.26   0.28  -1.67
III    -6.45  -2.04   0.00   1.10  -2.20   0.10  -3.16   |  -0.49  -1.51   0.00  -2.04  -2.13  -1.33  -2.15
IV     -4.95  -2.05  -1.10   0.00  -1.62  -2.72  -3.34   |   1.41   0.94   2.04   0.00   1.79   1.57  -1.09
V      -6.37  -1.67   2.20   1.62   0.00   0.64  -2.95   |   0.00  -1.26   2.13  -1.79   0.00  -1.01  -1.97
VI     -4.42  -1.41  -0.10   2.72  -0.64   0.00  -2.94   |   0.83  -0.28   1.33  -1.57   1.01   0.00  -1.78
VII     0.61   2.44   3.16   3.34   2.95   2.94   0.00   |   1.98   1.67   2.15   1.09   1.97   1.78   0.00

        SPF h=1, 1980–1987                              |        SPF h=4, 1980–1987
Model      I      II     III     IV      V     VI    VII |      I     II    III     IV      V     VI    VII
I       0.00   3.33   3.56   2.27   3.12   1.82  -0.80   |   0.00  -1.85   1.15  -0.93   0.72  -0.39  -1.47
II     -3.33   0.00   1.47   0.95   0.69   0.47  -1.91   |   1.85   0.00   2.09  -0.61   1.72   0.40  -1.25
III    -3.56  -1.47   0.00   0.15  -4.49  -0.53  -2.68   |  -1.15  -2.09   0.00  -2.24  -2.78  -1.47  -2.23
IV     -2.27  -0.95  -0.15   0.00  -0.82  -2.04  -2.34   |   0.93   0.61   2.24   0.00   1.79   1.15  -1.00
V      -3.12  -0.69   4.49   0.82   0.00   0.15  -2.23   |  -0.72  -1.72   2.78  -1.79   0.00  -1.10  -1.90
VI     -1.82  -0.47   0.53   2.04  -0.15   0.00  -1.95   |   0.39  -0.40   1.47  -1.15   1.10   0.00  -1.52
VII     0.80   1.91   2.68   2.34   2.23   1.95   0.00   |   1.47   1.25   2.23   1.00   1.90   1.52   0.00

        SPF h=1, 1988–2007                              |        SPF h=4, 1988–2007
Model      I      II     III     IV      V     VI    VII |      I     II    III     IV      V     VI    VII
I       0.00   4.83   6.38   6.12   6.27   5.99  -0.08   |   0.00  -1.43  -0.71  -1.23  -1.00  -0.93  -1.63
II     -4.83   0.00   1.59   3.43   2.53   3.36  -1.56   |   1.43   0.00   0.13  -0.77  -0.05  -0.17  -1.23
III    -6.38  -1.59   0.00   3.14   0.81   1.91  -1.88   |   0.71  -0.13   0.00  -0.99  -0.78  -0.30  -1.29
IV     -6.12  -3.43  -3.14   0.00  -2.89  -2.00  -2.37   |   1.23   0.77   0.99   0.00   0.89   3.19  -0.50
V      -6.27  -2.53  -0.81   2.89   0.00   1.99  -1.95   |   1.00   0.05   0.78  -0.89   0.00  -0.12  -1.25
VI     -5.99  -3.36  -1.91   2.00  -1.99   0.00  -2.18   |   0.93   0.17   0.30  -3.19   0.12   0.00  -1.07
VII     0.08   1.56   1.88   2.37   1.95   2.18   0.00   |   1.63   1.23   1.29   0.50   1.25   1.07   0.00

Note: Numbers are modified Diebold–Mariano (DM) test statistics which follow a t–distribution with n−1 degrees of freedom. Here, n = 108 is the number of out–of–sample forecasts for the full sample. H0: DM = 0 can be rejected on the 5% level if the test statistic exceeds 1.98 in absolute values (two–sided test). A negative number means that the model in row i has a lower measured deviation than the model in column j. The first part captures results based on the whole sample, whereas the second and third part contain results for two sub–samples split at the end of 1987.
Table 3.4: Modified Diebold–Mariano test on deviations of SPF and forecasting models

        LIV h=1, 1980–2007                              |        LIV h=2, 1980–2007
Model      I      II     III     IV      V     VI    VII |      I     II    III     IV      V     VI    VII
I       0.00   0.17   0.76   0.34   1.50   0.93  -1.43   |   0.00   0.43  -0.30  -0.24  -0.52  -0.36  -1.20
II     -0.17   0.00   0.71   0.28   1.46   0.86  -1.39   |  -0.43   0.00  -0.34  -0.28  -0.55  -0.39  -1.22
III    -0.76  -0.71   0.00  -0.50   3.54   0.47  -1.52   |   0.30   0.34   0.00   0.42  -1.04  -0.15  -1.13
IV     -0.34  -0.28   0.50   0.00   1.38   3.36  -1.47   |   0.24   0.28  -0.42   0.00  -1.03  -0.61  -1.12
V      -1.50  -1.46  -3.54  -1.38   0.00  -0.54  -1.81   |   0.52   0.55   1.04   1.03   0.00   0.60  -0.94
VI     -0.93  -0.86  -0.47  -3.36   0.54   0.00  -1.71   |   0.36   0.39   0.15   0.61  -0.60   0.00  -1.07
VII     1.43   1.39   1.52   1.47   1.81   1.71   0.00   |   1.20   1.22   1.13   1.12   0.94   1.07   0.00

        LIV h=1, 1980–1987                              |        LIV h=2, 1980–1987
Model      I      II     III     IV      V     VI    VII |      I     II    III     IV      V     VI    VII
I       0.00   0.65   1.49   1.10   1.59   1.16  -1.24   |   0.00   0.52   0.12   0.24  -0.09   0.04  -0.89
II     -0.65   0.00   1.31   0.90   1.42   0.97  -1.26   |  -0.52   0.00   0.06   0.18  -0.14  -0.01  -0.92
III    -1.49  -1.31   0.00  -0.16   0.98  -0.03  -1.72   |  -0.12  -0.06   0.00   0.93  -1.31  -0.37  -1.28
IV     -1.10  -0.90   0.16   0.00   0.32   0.67  -1.84   |  -0.24  -0.18  -0.93   0.00  -1.52  -1.57  -1.30
V      -1.59  -1.42  -0.98  -0.32   0.00  -0.21  -1.78   |   0.09   0.14   1.31   1.52   0.00   0.46  -1.02
VI     -1.16  -0.97   0.03  -0.67   0.21   0.00  -1.80   |  -0.04   0.01   0.37   1.57  -0.46   0.00  -1.12
VII     1.24   1.26   1.72   1.84   1.78   1.80   0.00   |   0.89   0.92   1.28   1.30   1.02   1.12   0.00

        LIV h=1, 1988–2007                              |        LIV h=2, 1988–2007
Model      I      II     III     IV      V     VI    VII |      I     II    III     IV      V     VI    VII
I       0.00  -0.65  -1.24  -1.70   0.09  -0.54  -0.69   |   0.00   0.00  -0.98  -1.20  -0.95  -0.96  -0.90
II      0.65   0.00  -0.96  -1.38   0.38  -0.23  -0.61   |   0.00   0.00  -0.93  -1.17  -0.93  -0.92  -0.87
III     1.24   0.96   0.00  -1.22   3.79   1.44  -0.29   |   0.98   0.93   0.00  -0.77   0.13   0.48  -0.04
IV      1.70   1.38   1.22   0.00   3.35   4.01  -0.16   |   1.20   1.17   0.77   0.00   1.14   1.93   0.06
V      -0.09  -0.38  -3.79  -3.35   0.00  -1.25  -0.66   |   0.95   0.93  -0.13  -1.14   0.00   0.37  -0.06
VI      0.54   0.23  -1.44  -4.01   1.25   0.00  -0.51   |   0.96   0.92  -0.48  -1.93  -0.37   0.00  -0.12
VII     0.69   0.61   0.29   0.16   0.66   0.51   0.00   |   0.90   0.87   0.04  -0.06   0.06   0.12   0.00

Note: Numbers are modified Diebold–Mariano (DM) test statistics which follow a t–distribution with n−1 degrees of freedom. Here, n = 54 is the number of out–of–sample forecasts. H0: DM = 0 can be rejected on the 5% level if the test statistic exceeds 2.00 in absolute values (two–sided test). A negative number means that the model in row i has a lower measured deviation than the model in column j. The first part captures results based on the whole sample, whereas the second and third part contain results for two sub–samples split at the end of 1987.
Table 3.5: Modified Diebold–Mariano test on deviations of LIV and forecasting models

        MHS h=12, 1980–2007
Model      I      II     III     IV      V     VI    VII
I       0.00  -2.79  -0.99  -0.99  -0.98   0.04  -0.16
II      2.79   0.00  -0.74  -0.73  -0.73   0.44   0.21
III     0.99   0.74   0.00   0.41   0.26   1.31   0.92
IV      0.99   0.73  -0.41   0.00  -0.04   1.34   0.93
V       0.98   0.73  -0.26   0.04   0.00   1.30   0.90
VI     -0.04  -0.44  -1.31  -1.34  -1.30   0.00  -0.25
VII     0.16  -0.21  -0.92  -0.93  -0.90   0.25   0.00

        MHS h=12, 1980–1987
Model      I      II     III     IV      V     VI    VII
I       0.00  -2.30  -0.87  -0.80  -0.96   0.44   0.75
II      2.30   0.00  -0.70  -0.63  -0.77   0.67   0.91
III     0.87   0.70   0.00   0.95  -0.32   1.69   1.68
IV      0.80   0.63  -0.95   0.00  -0.74   1.61   1.62
V       0.96   0.77   0.32   0.74   0.00   1.99   1.88
VI     -0.44  -0.67  -1.69  -1.61  -1.99   0.00   0.61
VII    -0.75  -0.91  -1.68  -1.62  -1.88  -0.61   0.00

        MHS h=12, 1988–2007
Model      I      II     III     IV      V     VI    VII
I       0.00  -1.86  -0.42  -0.67  -0.16  -0.64  -1.86
II      1.86   0.00  -0.11  -0.30   0.20  -0.32  -1.66
III     0.42   0.11   0.00  -1.04   1.54  -0.32  -1.02
IV      0.67   0.30   1.04   0.00   8.12  -0.08  -0.95
V       0.16  -0.20  -1.54  -8.12   0.00  -0.74  -1.35
VI      0.64   0.32   0.32   0.08   0.74   0.00  -0.96
VII     1.86   1.66   1.02   0.95   1.35   0.96   0.00

Note: Numbers are modified Diebold–Mariano (DM) test statistics which follow a t–distribution with n−1 degrees of freedom. Here, n = 324 is the number of out–of–sample forecasts. H0: DM = 0 can be rejected on the 5% level if the test statistic exceeds 1.97 in absolute values (two–sided test). A negative number means that the model in row i has a lower measured deviation than the model in column j. The first part captures results based on the whole sample, whereas the second and third part contain results for two sub–samples split at the end of 1987.
Table 3.6: Modified Diebold–Mariano test on deviations of MHS and forecasting models
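As a purely illustrative complement, the pairwise layout of tables 3.4 to 3.6 can be generated by looping the statistic sketched above over all model pairs; the container names and the way the forecast series are supplied are assumptions, not taken from the text.

```python
import numpy as np

def dm_matrix(survey, forecasts, h):
    """Pairwise modified DM statistics for a dict {model label: forecast series}.

    Uses modified_dm_stat() from the sketch above. Entry (i, j) is negative
    when the model in row i deviates less from the survey measure than the
    model in column j, matching the sign convention of the table notes.
    """
    labels = list(forecasts)
    stats = np.zeros((len(labels), len(labels)))
    for a, name_i in enumerate(labels):
        for b, name_j in enumerate(labels):
            if a != b:
                stats[a, b] = modified_dm_stat(
                    survey, forecasts[name_i], forecasts[name_j], h
                )
    return labels, stats

# Hypothetical usage, corresponding to one panel of table 3.4:
# labels, stats = dm_matrix(spf_h1, forecasts, h=1)
```

Restricting the series to the observations before or after the end of 1987 before calling such a helper would reproduce the two sub–sample panels.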

SPF h=1. Considering the whole sample, Model IV clearly shows negative values for the modified Diebold–Mariano test statistic throughout and, hence, dominates the other out–of–sample forecasts. Moreover, its approximation error is even significantly lower than that of the forecasts obtained from Models I and II. This means that, for SPF h=1, learning by signal extraction clearly gives a better approximation of survey expectations than the simple backward–looking forecasting scheme of Model I. It also outperforms recursive least squares learning of coefficients, represented by Model II, and, interestingly, it performs much better than the recursively estimated Models V and VI. Also note that all learning models III to VI yield a closer approximation of SPF h=1 than the models which are not characterized by signal extraction. As argued in section 3.2.2, rational expectation formation is a poor proxy for survey expectations. Splitting the sample does not alter these results. Considering SPF h=4, which has a forecasting horizon of one year, it becomes apparent that Model III, the simplest signal extraction model, yields the best approximation. The difference is even significant, with the exception of the comparison with Model I. This basically remains true for the first sub–sample. During the moderate period after 1987, however, the naive model proxies SPF h=4 closest, but when tested against Models III to VI the difference is not significant.

Turning now to the left part of table 3.5, it is apparent that Model V yields the approximation closest to LIV h=1. Again, it outperforms Models I and II significantly when the test is based on the whole sample and on the first sub–sample. Survey expectations from LIV h=1 cannot be approximated by rational expectations, which perform worst of all models. Looking at the right panel, results are mostly insignificant. For the whole sample period, recursive least squares learning seems to yield the smallest deviation from LIV h=2 and, again, rational expectations perform worst. During the period of disinflation, however, signal extraction gives the best description of expectation formation, as Model IV performs best in the first sub–sample. The second sub–sample confirms the results found for the whole sample period.

Coming now to table 3.6, which contains results for MHS h=12, findings are rather mixed. During the whole period, the recursively estimated Model VI gives the closest approximation of MHS h=12. Thus, one could conclude that, also in this case, signal extraction provides the best explanation for survey expectations. However, the results are not significantly better than those obtained from rational expectations or from naive and simple autoregressive forecasting schemes. Moreover, for the first sample period, rational expectations seem to give the best approximation of MHS h=12. A comparison with figure 3.3 makes clear that, during the first period, forecast errors do not show any sign of persistence, in contrast to the other survey measures of expectations, which may explain this last result. For the second sub–sample, the naive forecasting scheme outperforms the other models.

To sum up, signal extraction gives a fairly good approximation of the expectation formation process. This is in particular the case for SPF h=1 and LIV h=1. Here, learning by signal extraction generally outperforms the other forecasting schemes, and it gives a significantly better approximation of expectation formation than recursive least squares learning. However, it remains unclear whether, in general, expectations are better characterized by signal extraction models whose estimated parameters are updated over time. For SPF h=1, recursively estimated signal extraction models do not outperform learning models with fixed structural parameters, whereas for LIV h=1 the recursively estimated model is better. Considering longer forecasting horizons of one year, as in SPF h=4 and LIV h=2, learning by signal extraction approximates survey expectations best at least during the Volcker period. Consequently, I conclude that agents seem to change their forecasting scheme over time, as during the second sample period naive forecasting schemes seem to be more important. Note, however, that the performance of these models is not significantly better than that of Models III to VI.

In the case of MHS h=12, results are not that clear–cut. In this respect, the findings from section 3.3.3 are confirmed. Here again, it might play a role that this series is characterized by a large overlap of twelve periods, so that additional information from month–to–month observations should play an important role in the process of expectation formation.