

3.2. Empirical Asset Pricing Methods

The event-study methodology described here is applied in the first research paper summarized in section 4.

[Figure 4: The structure of an event study. Timeline showing the estimation window from t−k to t−l, followed by the calculation window from t−j to t+j, centered on the event day t.]

The estimation window spans a time period before the event of interest in which the parameters of asset pricing models are estimated. The models are supposed to capture expected returns. For instance, one might assume the FFM is an adequate description of expected returns and use the estimation window to estimate a stock's loadings on the market, size and value factors. The estimation window is separated from the calculation window because parameter estimation rests on the assumption that the stock price of interest evolves normally throughout the estimation window and is not affected by the event of interest. In figure 4, the boundaries of the estimation window, and thus its separation from the calculation window, are indicated by the time subscripts k and l. There is no rule as to how many days the estimation window should span, but it must be long enough to make the regression procedures feasible. As a next step, the estimated parameters are used to compute fitted values as a proxy for expected returns in the calculation window, which may contain $2j + 1$ days centered on the event day denoted as t in figure 4. The variables of interest in the calculation window are the residuals of the asset pricing models, obtained by subtracting the fitted values from the actually observed returns. These residuals are called abnormal returns (AR). Cumulating AR yields an abnormal holding period return called the cumulative abnormal return (CAR). The conventional null hypothesis is $CAR = 0$.
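To make the mechanics concrete, the following sketch estimates a simple market model in the estimation window and computes AR and CAR in the calculation window. The use of the market model (instead of the FFM), the window lengths and the data layout are illustrative assumptions, not prescriptions from the text.

```python
import pandas as pd
import statsmodels.api as sm

def event_study_car(stock_ret, market_ret, event_pos, est_len=250, gap=30, j=10):
    """Estimate a market model in the estimation window and return abnormal
    returns (AR) and cumulative abnormal returns (CAR) for the 2j+1 days of
    the calculation window around the event day (illustrative window sizes)."""
    X = pd.DataFrame({"const": 1.0, "mkt": market_ret})

    # estimation window [t-k, t-l]: returns assumed unaffected by the event here
    est = slice(event_pos - gap - est_len, event_pos - gap)
    fit = sm.OLS(stock_ret.iloc[est], X.iloc[est]).fit()

    # calculation window [t-j, ..., t, ..., t+j], centered on the event day
    calc = slice(event_pos - j, event_pos + j + 1)
    expected = X.iloc[calc] @ fit.params      # fitted values = proxy for expected returns

    ar = stock_ret.iloc[calc] - expected      # abnormal returns
    car = ar.cumsum()                         # cumulative abnormal return path
    return ar, car
```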

A battery of hypothesis tests has been proposed. The power of these event-study tests suffers from a few methodological problems: return variances increase around the event day, CAR tend to be highly autocorrelated, and the whole procedure is very sensitive to outliers (Brown & Warner 1985). More recently, the methodology has been enhanced by new, more powerful nonparametric tests (Kolari & Pynnonen 2011).
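For illustration, the following is a minimal sketch of the conventional cross-sectional t-test of the null $CAR = 0$. It relies on exactly the independence and distributional assumptions that the problems listed above undermine, and it does not implement the nonparametric refinements of Kolari & Pynnonen (2011).

```python
import numpy as np
from scipy import stats

def car_ttest(cars):
    """Conventional cross-sectional t-test of H0: E[CAR] = 0.
    cars: one CAR per firm, e.g. cumulated over [t-j, t+j].
    Assumes independent, identically distributed CARs -- the very assumptions
    that event-induced variance, autocorrelation and outliers violate."""
    cars = np.asarray(cars, dtype=float)
    t_stat, p_value = stats.ttest_1samp(cars, popmean=0.0)
    return t_stat, p_value
```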

If firm distress risk is indeed a state variable in the ICAPM, tests should reject the null when markets learn about distress risk. Specifically, they should discount the value of a firm when default risk increases and respond with increasing prices to news about decreasing distress risk. These are the mechanics required to generate a premium for distressed firms. Furthermore, since state variables reflect macroeconomic activity, there should be a significant dependence of the reaction on the business cycle. We would expect the discount to distressed firms to be especially large when the economy is in a recession because recessions are typically accompanied by high agency costs of lending and tight credit markets. A firm suffering from a downgrade in such a setting should be more affected than a firm downgraded in a booming economy. This is precisely what the first research paper summarized in section 4 tests.

3.2.2. Approaches to Assess Long-Run Relationships

The short-run event study is limited to analyzing immediate reactions; the calculation window depicted in figure 4 spans a few weeks at most. Obviously, a look at the long-run relationship is necessary to gauge whether distress risk explains patterns in equity returns. From a methodological point of view, this requires something like a multivariate panel regression of long-run returns r on several characteristics C of each firm i, including distress risk (Cochrane 2011):

$r_{i,t+1} = a + b\,C_{i,t} + \epsilon_{i,t+1}. \qquad (19)$

Almost any research sample in finance consists of repeated observations of a cross-section, i.e. empirical research in finance is mainly based on panel data. Still, real panel regressions like (19) are relatively uncommon in empirical asset pricing (Cochrane 2011, Freyberger et al. 2016). One can only speculate about the reasons; perhaps one of them is that tests of the EMH became important in the 1960s, when panel data analysis was still in its infancy. Instead, a whole subbranch of econometrics devoted to the problem stated in (19) has evolved in asset pricing. Below I discuss portfolio sorts and Fama & MacBeth (1973) regressions, two methods which have been applied in the thesis as ways to test long-run relationships. These are still the most commonly used methods. Obviously, there is not really anything new about them, but the vast body of literature on "anomalies" summarized by Harvey et al. (2015) (see section 2) has given rise to a new debate about inference in finance, which I add to the short methodological discussion here.
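Before turning to those methods, note that a regression like (19) can in principle be taken to the data directly as a pooled OLS regression on a long-format panel. The sketch below does exactly that; the column names (ret_fwd for $r_{i,t+1}$ and the chosen characteristics) are hypothetical, and the plain OLS standard errors it produces are refined at the end of this subsection.

```python
import pandas as pd
import statsmodels.api as sm

def pooled_panel_ols(panel: pd.DataFrame,
                     characteristics=("distress", "size", "book_to_market")):
    """Pooled OLS estimate of r_{i,t+1} = a + b'C_{i,t} + eps, i.e. equation (19).
    panel: one row per firm-month with hypothetical columns
    "ret_fwd" (next-period return) and the characteristic columns."""
    X = sm.add_constant(panel[list(characteristics)])
    return sm.OLS(panel["ret_fwd"], X, missing="drop").fit()
```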

Portfolio sorts appeared in the late 1970s; Basu (1977) was a very early adopter. The technique is appealing to intuition because it mirrors the actual experience of an investor ranking firms according to some firm characteristic and then forming portfolios based on this ranking. The long-run performance of such investment strategies is used to infer the relationship between the characteristic and returns. Usually, the final tests are based on long-short portfolios which assume a long (short) position in the top- (bottom-) ranked firms. Apart from its intuition, this method has two compelling advantages. First, the sorts themselves are nonparametric (though, admittedly, standard regression tests of the long-short portfolios are typically parametric). Second, sorts can be used to detect non-linear relationships. Many theories suggest a non-linear relation between characteristics and returns (Garlappi et al. 2006, Garlappi & Yan 2011), and a linear regression will fail in these instances.

On the other hand, portfolio sorts have several downsides. Cochrane (2011) emphasizes they are simply not able to deal with the multidimensionality characterizing modern asset pricing. Double-sorts are still feasible, in some instances maybe even triple-sorts, but sorting can get us nowhere near controlling for the plethora of characteristics that has been proposed in the literature. Furthermore, Lo & MacKinlay (1999) point out that sorting on a variable showing in-sample correlation with returns gives rise to a data-snooping bias. In the same vein, Berk (2000) shows that sorts affect the variance structure in the panel: sorting a cross-section on a characteristic that is known to be correlated with returns yields portfolio returns with high variance between the portfolios but, compared to the full sample, lower within variance. Reducing the within variance of portfolios or test assets can artificially inflate the explanatory power of asset pricing models. Lastly, a researcher applying portfolio sorts has to question the assumptions underlying the technique: Are the assets in a portfolio really liquid and are short sales feasible? Are the extreme portfolios, which are used to compute long-short strategy returns, especially affected by these limitations? These issues are less of a problem with US data, where data quality is generally high and control variables for liquidity are available. However, my personal research experience prompts me to express more doubts when the sample consists of European stocks and the variable of interest (distress) is per se associated with extreme return behavior.
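A minimal sketch of a univariate sort follows: in each month firms are ranked into quantile portfolios on a characteristic (e.g. a distress measure), equal-weighted portfolio returns are computed, and the top-minus-bottom spread gives the long-short strategy return. The column names and the choice of deciles are assumptions for illustration only.

```python
import pandas as pd

def sort_portfolios(panel: pd.DataFrame, char="distress", ret="ret_fwd", n=10):
    """Univariate portfolio sort.
    panel: long format with columns ["date", char, ret], one row per firm-month.
    Returns equal-weighted portfolio returns per month plus the long-short spread.
    Assumes enough firms per month to populate all n bins."""
    df = panel.dropna(subset=[char, ret]).copy()

    # rank firms into n quantile bins within each month
    df["bin"] = df.groupby("date")[char].transform(
        lambda x: pd.qcut(x, n, labels=False, duplicates="drop")
    )

    # equal-weighted portfolio return per month and bin
    ports = df.groupby(["date", "bin"])[ret].mean().unstack("bin")

    # long the top-ranked, short the bottom-ranked portfolio
    ports["long_short"] = ports[n - 1] - ports[0]
    return ports
```

The significance of the long-short column is then typically assessed with a (parametric) regression of its return series on a constant and, possibly, factor returns.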

Portfolio sorts treat the cross-section and the time series dimension separately: the sorts address the cross-section and regressions address the time series. Cochrane (2011) argues this is effectively equivalent to nonparametric cross-sectional regressions with histogram weights. After all, sorts might not be that different from true panel data models like (19). The connection between panel data analysis and the two-pass Fama & MacBeth (1973) regressions is even more straightforward.

This approach can be used to assess whether a certain factor or characteristic is priced in the cross-section. When the variable of interest is a factor, the procedure begins with an estimation of factor loadings, for instance an estimation of CAPM betas for each asset. Thereafter, returns are regressed in the cross-section, at each point in time, on the factor loadings or on characteristics, which do not need to be estimated beforehand. That is, Fama & MacBeth (1973) regressions are a series of T cross-sectional regressions similar to (19), just without the time dimension, resulting in T estimates of cross-sectional coefficients and standard errors or t-values. The most obvious thing to do next in order to obtain an estimate of the long-run relation between the exogenous variable and expected returns is to average the coefficient estimates and t-values.²³ After all, Fama & MacBeth (1973) regressions are also related to panel data methods like (19). Cochrane (2005) discusses circumstances under which the two-pass procedure outlined above is equivalent to a pooled Ordinary Least Squares (OLS) regression.
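A minimal sketch of the second pass, assuming a long-format panel with hypothetical column names: one cross-sectional OLS per date, followed by time-series averages of the coefficients and plain Fama & MacBeth t-statistics. The errors-in-variables correction of Shanken (1992) mentioned in footnote 23 and the clustering adjustments discussed next are omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fama_macbeth(panel: pd.DataFrame, ret="ret_fwd",
                 chars=("distress", "size", "book_to_market")):
    """Second pass of Fama & MacBeth (1973): one cross-sectional OLS per date,
    then time-series averages of the coefficients and simple t-statistics."""
    coefs = []
    for date, cs in panel.groupby("date"):
        X = sm.add_constant(cs[list(chars)])
        coefs.append(sm.OLS(cs[ret], X, missing="drop").fit().params.rename(date))
    coefs = pd.DataFrame(coefs)                 # T x (1 + #chars) coefficient estimates

    mean = coefs.mean()
    t_stats = mean / (coefs.std(ddof=1) / np.sqrt(len(coefs)))   # plain FM t-statistics
    return pd.DataFrame({"coef": mean, "t": t_stats})
```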

A common assumption underlying regression frameworks like (19) and the Fama & MacBeth (1973) cross-sectional regressions is that the error terms $\epsilon$ are independent and identically distributed. More recently, two important concerns about standard errors in finance have been voiced by Cameron et al. (2006), Petersen (2008) and Gow et al. (2010). The first concern refers to cross-sectionally correlated error terms: whenever some unobservable factor affects returns contemporaneously, the assumption $Cov[\epsilon_{i,t}, \epsilon_{j,t}] = 0$ for $i \neq j$ is violated in each cross-sectional regression and standard errors are subject to a time effect. Second, returns might be autocorrelated in time, i.e. $Cov[\epsilon_{i,t}, \epsilon_{i,t+s}] = 0$ for $s \neq 0$ does not hold, which is called a firm effect. These effects may cause standard errors to be severely downward-biased. According to the literature survey of Petersen (2008), 42% of research papers in finance did not adjust standard errors for these problems, which is likely to render their results incorrect. Cameron et al. (2006) have proposed computing errors clustered on firm and time in order to deal with these issues. Petersen (2008) as well as Gow et al. (2010) present simulation evidence underlining the importance of these adjustments in finance. The former recommends computing several standard errors in procedures like Fama & MacBeth (1973) regressions or the general framework (19): conventional errors should be shown alongside firm-level, time-level and two-way clustered errors.²⁴
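To make the adjustment concrete, the following sketch implements the one-way clustered variance estimator given in footnote 24 with plain numpy, next to the conventional OLS variance; the two-way (firm and time) estimator follows the same sandwich logic and is described in Cameron et al. (2006). No finite-sample correction is applied here, which actual implementations typically add.

```python
import numpy as np

def ols_with_clustered_se(X, y, clusters):
    """OLS coefficients with conventional and one-way cluster-robust standard errors.
    X: (N, K) regressors incl. a constant, y: (N,) returns,
    clusters: (N,) cluster labels (e.g. firm identifiers for firm-level clustering)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    clusters = np.asarray(clusters)

    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta
    n, k = X.shape

    # conventional OLS variance: s^2 (X'X)^{-1}
    v_ols = (e @ e) / (n - k) * XtX_inv

    # clustered sandwich: (X'X)^{-1} (sum_j u_j u_j') (X'X)^{-1}, u_j = sum_{i in j} e_i x_i
    meat = np.zeros((k, k))
    for c in np.unique(clusters):
        u = X[clusters == c].T @ e[clusters == c]
        meat += np.outer(u, u)
    v_clustered = XtX_inv @ meat @ XtX_inv

    return beta, np.sqrt(np.diag(v_ols)), np.sqrt(np.diag(v_clustered))
```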

²³ When the exogenous variables in the cross-sectional regressions are estimated factor loadings, a correction of the standard errors for the errors-in-variables problem according to Shanken (1992) is recommendable.

²⁴ The conventional OLS variance estimator is $V = s^2 (X'X)^{-1}$ with $s^2 = \frac{1}{N-K} \sum_{i=1}^{N} e_i^2$, where $N$ and $K$ denote the numbers of observations and parameters, respectively. With one-way clustering the variance estimator is $V_{clustered} = (X'X)^{-1} \left( \sum_{j=1}^{N_c} u_j u_j' \right) (X'X)^{-1}$, where $N_c$ denotes the number of clusters (e.g. firms or years) and $u_j$ is the sum of $e_i x_i$ within each cluster. See Cameron et al. (2006) for further details and the variance estimator in the case of two-way clustering.

To recap, methods in empirical asset pricing have been living a life of their own until the end of the last decade. Portfolio sorts and Fama & MacBeth (1973) regressions are both in some way related to a generic panel regression framework like (19), but they tend to deal with variation in the cross-section and in the time series in separate steps. Portfolio sorts are nonparametric and able to deal with nonlinearity, whereas Fama & MacBeth (1973) regressions are better suited to deal with multidimensional problems. Important recent methodological contributions are beginning to acknowledge the panel structure of finance data more directly, and Cochrane (2011) recommends uniting time series and cross-sections in true panel data models for future research. The empirical research in this thesis follows this recommendation and applies portfolio sorts, Fama & MacBeth (1973) regressions as well as panel data models in the narrower sense.