

2.2 Market Counterfactual

2.2.2 Methodology

hypothesis. As such, it is concerned with rating levels. If it holds true, we should observe lower rating levels after the introduction of the DFA, holding firm-specific credit risk and macroeconomic conditions constant. The second hypothesis, H2, provides a novel perspective on the impact of the DFA on credit ratings using rating changes. I estimate the likelihood of up- and downgrades based on changes in firm-specific credit risk to pinpoint which credit risk measures play an increased role in the rating decisions of CRAs after the regulatory overhaul. Rating changes are interesting from two perspectives: the anticipation of rating changes and the reaction to them.

Under H2, the disciplining hypothesis, I should observe changes to the lead-lag relationship between market-implied risk measures and credit ratings. With increased liability, CRAs might want to align their ratings faster with market-implied measures to avoid accusations of unjustified optimism. One would therefore expect higher-order lags of the market-implied risk measures to have a lower impact on current rating changes.

(2016) and Bauer and Agarwal (2014) confirm the superior performance of the failure score over credit ratings. Further, these measures are also highly practice-oriented.

Equity-implied default probabilities are calculated using the Merton (1974a) model.

According to the model, equity can be interpreted as a call option on the firm's assets and priced accordingly. In this context, the book value of liabilities (X_t) represents the strike price of the call option. The equity value is only positive if the value of assets (V_{A,t}) exceeds the book value of liabilities. Solving the model for the probability that equity has no value yields the following equation:⁵

\[
\text{Equity PD} = N\!\left( - \frac{\ln\!\left(\frac{V_{A,t}}{X_t}\right) + \left(\mu - 0.5\,\sigma_A^2\right) T}{\sigma_A \sqrt{T}} \right)
\]

Note that the actual volatility of assets, σ_A, is not directly observable. I adopt the iterative method of Vassalou and Xing (2004) to pin down σ_A. The equity-implied probability of default is given by the cumulative normal distribution function, which is applied to the negative of the distance-to-default. T is the time horizon of the default probability. For all measures in this chapter, I normalise the PDs to describe the one-year-ahead default probability.
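To make the iterative procedure concrete, the following Python sketch backs asset values and asset volatility out of a window of observed equity values and converts the resulting distance-to-default into a one-year PD. The function names, the 252-day annualisation and the starting value for σ_A are illustrative assumptions rather than the exact implementation used here.

```python
# Hedged sketch of the iterative Vassalou and Xing (2004) procedure for the
# Merton equity-implied PD. Inputs and helper names are illustrative only.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def merton_call(V, X, r, sigma_A, T=1.0):
    """Equity value as a call option on firm assets (Merton, 1974)."""
    d1 = (np.log(V / X) + (r + 0.5 * sigma_A**2) * T) / (sigma_A * np.sqrt(T))
    d2 = d1 - sigma_A * np.sqrt(T)
    return V * norm.cdf(d1) - X * np.exp(-r * T) * norm.cdf(d2)

def equity_implied_pd(E, X, r, T=1.0, tol=1e-4, max_iter=100):
    """E: array of daily equity values, X: book liabilities, r: risk-free rate."""
    sigma_A = np.std(np.diff(np.log(E))) * np.sqrt(252)   # start from equity volatility
    for _ in range(max_iter):
        # Back out the daily asset values implied by the observed equity values
        V = np.array([brentq(lambda v: merton_call(v, X, r, sigma_A, T) - e,
                             e, e + 2 * X) for e in E])
        log_ret = np.diff(np.log(V))
        new_sigma = np.std(log_ret) * np.sqrt(252)
        mu = np.mean(log_ret) * 252
        if abs(new_sigma - sigma_A) < tol:
            sigma_A = new_sigma
            break
        sigma_A = new_sigma
    # Distance-to-default and the equity-implied one-year default probability
    dd = (np.log(V[-1] / X) + (mu - 0.5 * sigma_A**2) * T) / (sigma_A * np.sqrt(T))
    return norm.cdf(-dd)
```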

CDS-implied default probabilities are calculated by adopting the constant hazard rate model of Duffie (1999). Assume that the probability of default in each period t is equal to p(t) = 1 − e^{−λt}, with λ being the hazard rate. If fairly priced, the insurance against credit risk in the form of the CDS should be equal to the expected loss. Writing the recovery rate as RR and the risk-free interest rate as r, the CDS spread can be written as:

\[
\text{CDS spread} = \frac{\sum_{t=1}^{T} e^{-rt}\left(e^{-\lambda (t-1)} - e^{-\lambda t}\right)}{\sum_{t=1}^{T} e^{-(\lambda + r)t}}\,(1 - RR)
\]

⁵ A more detailed description of the model is provided by Vassalou and Xing (2004).

Observing the CDS spread, one can back out the underlying hazard rate and thereby the default probability. I set the recovery rate (RR) equal to the conventional level of 40% and solve for λ numerically. CDS spreads are available for different tenors, restructuring clauses and seniorities. To mitigate the effect of liquidity on the spread, I pick the five-year senior CDS spreads with the modified restructuring clause, as this is the most liquid combination of tenor, restructuring clause and seniority. Again, I annualize the implied CDS PD to make this measure comparable with the other models.
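As an illustration of this step, the short Python sketch below backs a constant hazard rate out of an observed five-year spread and converts it into a one-year-ahead default probability. The annual payment grid, the 2% risk-free rate and the function names are simplifying assumptions for exposition, not the exact implementation.

```python
# Hedged sketch: backing out the constant hazard rate from an observed five-year
# CDS spread in the spirit of Duffie (1999). Annual periods are assumed.
import numpy as np
from scipy.optimize import brentq

def model_spread(lam, r=0.02, rr=0.40, tenor=5):
    t = np.arange(1, tenor + 1)
    protection = np.sum(np.exp(-r * t) * (np.exp(-lam * (t - 1)) - np.exp(-lam * t)))
    premium = np.sum(np.exp(-(lam + r) * t))
    return (1 - rr) * protection / premium

def cds_implied_pd(observed_spread, r=0.02, rr=0.40, tenor=5):
    """observed_spread in decimal terms, e.g. 0.0150 for 150 bps."""
    lam = brentq(lambda l: model_spread(l, r, rr, tenor) - observed_spread, 1e-8, 5.0)
    return 1 - np.exp(-lam)   # one-year-ahead default probability

# Example: a 150 bps spread maps into roughly a 2-3% one-year PD at RR = 40%
print(cds_implied_pd(0.0150))
```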

The failure score (F-Score) proposed by Campbell et al. (2008) is based on a large database of defaults, which is used to calibrate a logistic regression model with traditional accounting ratios and stock market data as explanatory variables. Note that this predictor reflects real-world PDs rather than the risk-neutral probabilities captured by asset prices. As I do not have access to the default data, I rely on a calibration of the model provided by Hilscher and Wilson (2016). All of the accounting ratios and stock variables used are well documented and can easily be reconstructed from the COMPUSTAT and CRSP databases. Hilscher and Wilson (2016) estimated the model of Campbell et al. (2008) for a set of rated, North American firms and a sample period ranging from 1986 to 2013.⁶ As my sample comprises very similar firms and the summary statistics of the independent variables are similar to those of Hilscher and Wilson (2016), the obtained default probabilities are not a perfect, but a valid, proxy for the real-world default probability. In the remainder of the chapter, the resulting probability of default is referred to as CHS PD. Appendix .1 contains a detailed description of the construction of this default measure and the summary statistics.

⁶ More specifically, the calibration of column 4, rated-only, in Table 2 of Hilscher and Wilson (2016) is used.
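For illustration, the sketch below shows how a pre-estimated logit calibration of this type maps the standard CHS covariates into a failure probability. The coefficient values are placeholders only; the actual numbers come from the rated-only calibration of Hilscher and Wilson (2016) referenced above.

```python
# Hedged sketch: applying a pre-estimated CHS-style logit calibration to the
# standard covariates. Coefficient values are placeholders, not the calibration
# of Hilscher and Wilson (2016).
import numpy as np

coefs = {
    "const": -9.0, "NIMTAAVG": -20.0, "TLMTA": 2.0, "EXRETAVG": -7.0,
    "SIGMA": 1.5, "RSIZE": -0.1, "CASHMTA": -2.5, "MB": 0.1, "PRICE": -0.1,
}

def chs_pd(x):
    """x: dict of covariate values built from COMPUSTAT/CRSP; returns a PD."""
    z = coefs["const"] + sum(coefs[k] * v for k, v in x.items())
    return 1.0 / (1.0 + np.exp(-z))   # logistic link gives a real-world PD
```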

2.2.2.2 Identifying rating policy changes

I relate agency ratings to these benchmarks in an ordered logistic regression model to determine whether the introduction of the DFA prompted CRAs to assign lower issuer ratings relative to the benchmark. The rationale behind the benchmarking is the following: unlike the agency ratings, the market-based risk measures are not affected by the regulation. If the regulation does indeed influence the assigned rating levels, the mapping of PDs into ratings should change. By using market perceptions of risk rather than mere balance sheet variables, I am able to control for trends in default risk that are common to both ratings and my set of PDs, but not captured by the balance sheet variables. For example, lower ratings might reflect a more negative outlook on future developments that is not yet visible in balance sheet variables. Further, more stringent ratings might not be the result of tighter regulation, but rather of a general shift in the judgement of default risk. Therefore, looking at the evolution of ratings vis-à-vis other measures of default risk is essential to test my research hypotheses. To foster comparability with the literature, I use a specification similar to that of Dimitrov et al. (2015). Formally, the regression equation for the rating levels reads as follows:

\[
\text{Numerical Rating}_{i,t+1} = \alpha + \beta \times DFA_t + \gamma \times PD_{i,t} + \rho \times K_{i,t} + \phi \times X_t + \epsilon_{i,t} \tag{2.1}
\]

Numerical Rating is a numerical transformation of the alphanumeric credit ratings, with AAA = 21, . . . , CC = 1, for every firm i in period t. DFA is a dummy which takes the value of one for every observation after July 2010 and zero otherwise. PD_{i,t} is one of the three credit risk measures specified above. I estimate a separate version of Equation 2.1 for each credit risk measure. K_{i,t} is the same set of firm control variables used by Dimitrov et al. (2015). The vector of macroeconomic control variables X_t includes the natural logarithm of GDP, the VIX and the level of the S&P 500. The coefficient of interest is β. If β is negative and significant, rating levels are lower after the DFA than before. As one firm might experience several rating changes in the sample, the standard errors are clustered at the firm level.⁷ The unit of observation in this regression is the firm-quarter. Although ratings are available at monthly frequency, the accounting ratios are not, so I estimate Equation 2.1 using quarterly data. Ratings are collapsed to quarterly frequency by using the level of the first month in the quarter. To address possible endogeneity concerns, I use the forwarded value of the numerical rating as the dependent variable.
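A minimal sketch of how Equation 2.1 can be estimated on a firm-quarter panel is given below, using an ordered logit. The file and column names are illustrative assumptions, the numeric rating is assumed to be pre-coded on the AAA = 21, . . . , CC = 1 scale, and the firm-level clustering of standard errors is omitted for brevity.

```python
# Hedged sketch of the ordered-logit benchmark regression in Equation 2.1.
# "panel_quarterly.csv", the column names and the control set are illustrative.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("panel_quarterly.csv")                        # hypothetical firm-quarter panel
df["dfa"] = (pd.to_datetime(df["date"]) >= "2010-07-01").astype(int)
df["fwd_rating"] = df.groupby("firm_id")["num_rating"].shift(-1)   # rating in t+1

cols = ["fwd_rating", "dfa", "pd", "leverage", "size", "log_gdp", "vix", "sp500"]
est = df[cols].dropna()
model = OrderedModel(est["fwd_rating"], est.drop(columns="fwd_rating"), distr="logit")
res = model.fit(method="bfgs")
print(res.summary())    # beta is the coefficient on "dfa"
```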

Lastly, I investigate the impact of the DFA on the policy of CRAs in more detail using monthly data on upgrades and downgrades. The methodology closely follows the standard literature on rating predictability, i.e. the lead-lag relationship between credit ratings and other risk measures (Alsakka and ap Gwilym, 2010; Güttler and Wahrenburg, 2007; Milidonis, 2013). To test predictability, I regress the rating changes on lagged changes in the market-based risk measures (∆Implied_{i,t−k}). Formally, the model is captured by the following equation:

\[
\text{Rating Change}_{i,t} = \sum_{k=1}^{6} \kappa_k \times \text{Rating Change}_{i,t-k} + \sum_{k=1}^{6} \mu_k \times \Delta\text{Implied}_{i,t-k} + \sum_{k=1}^{6} \pi_k \times DFA_t \times \Delta\text{Implied}_{i,t-k} + \epsilon_{i,t} \tag{2.2}
\]

where Rating Change_{i,t} is the change in the issuer rating by S&P of firm i in year-month t. The dependent variable can take five values. If the rating is lifted or reduced by two or more notches, Rating Change_{i,t} is set to 2 and -2, respectively. Changes by one notch translate into a value of 1 for an upgrade and -1 for a downgrade. If the rating remains unchanged in month t, Rating Change_{i,t} is set to zero.⁸

⁷ Clustering the standard errors at the year-month level or at the rating grade before the downgrade generates marginally stronger results.
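A minimal sketch of how this five-valued dependent variable can be constructed from the numerical ratings is given below; the column names are illustrative assumptions.

```python
# Hedged sketch: constructing the five-valued monthly rating-change variable.
import numpy as np
import pandas as pd

def rating_change(df):
    """df: firm-month panel with columns 'firm_id' and 'num_rating'."""
    diff = df.groupby("firm_id")["num_rating"].diff()
    # Cap multi-notch moves at +/-2; one-notch moves map to +/-1; no change to 0
    return np.sign(diff) * np.minimum(diff.abs(), 2)
```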

∆Implied_{i,t−k} is the absolute change in the implied PD measure. To control for possible autocorrelation in the dependent variable, the lags of Rating Change_{i,t} are included. To elicit possible changes to the lead-lag relationship, the interactions of DFA_t and ∆Implied_{i,t−k} are added. If markets anticipate rating changes, or, alternatively, CRAs follow market sentiment, the implied PDs should increase before a downgrade and decrease before an upgrade. Hence, µ_k is expected to be negative. The coefficients of interest are π_k. If the interactions of the implied risk measure and the post-DFA dummy are significant and negative, the predictability of rating changes has increased. In essence, Equation 2.2 tests the predictability of up- and downgrades by changes in the market-implied risk measures, while controlling for possible autocorrelation in the dependent variable. Note that, unlike for Equation 2.1, all data items required to estimate Equation 2.2 are available at monthly frequency. Hence, the unit of observation for Equation 2.2 is the firm-month rather than the firm-quarter. As before, standard errors are clustered at the firm level.
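The following sketch illustrates one way to build the six-lag structure of Equation 2.2 and estimate it with firm-clustered standard errors. A linear specification is used purely for brevity, and the file and column names ("panel_monthly.csv", "implied_pd", "rating_change") are illustrative assumptions.

```python
# Hedged sketch: lag construction and estimation of Equation 2.2 with
# firm-clustered standard errors; a linear model stands in for brevity.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel_monthly.csv").sort_values(["firm_id", "date"])   # hypothetical firm-month panel
df["dfa"] = (pd.to_datetime(df["date"]) >= "2010-07-01").astype(int)
df["d_implied"] = df.groupby("firm_id")["implied_pd"].diff()             # Delta Implied_{i,t}

for k in range(1, 7):
    df[f"rc_l{k}"] = df.groupby("firm_id")["rating_change"].shift(k)     # kappa_k regressors
    df[f"dpd_l{k}"] = df.groupby("firm_id")["d_implied"].shift(k)        # mu_k / pi_k regressors

rhs = " + ".join([f"rc_l{k}" for k in range(1, 7)]
                 + [f"dpd_l{k}" for k in range(1, 7)]
                 + [f"dfa:dpd_l{k}" for k in range(1, 7)])
cols = ["rating_change", "dfa", "firm_id"] + \
       [f"rc_l{k}" for k in range(1, 7)] + [f"dpd_l{k}" for k in range(1, 7)]
est = df[cols].dropna()
res = smf.ols(f"rating_change ~ {rhs}", data=est).fit(
    cov_type="cluster", cov_kwds={"groups": est["firm_id"]})
print(res.summary())   # the pi_k are the coefficients on the dfa:dpd_lk interactions
```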
