
Capital Requirements for Credit Trading under Basel III & FRTB

Master of Science in Finance: Quantitative Finance

Submission date: 01.07.2018

Student's name: Anna Tsutsunava

Student's number: 10597743

Supervisor: Peter Boswijk

Master Thesis

Public version


Statement of Originality

This document is written by Anna Tsutsunava, who declares to take full responsibility for the contents of this document. I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it. The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Preface

This thesis has been written as part of an internship at a financial institution whose name shall not be mentioned for confidentiality purposes. The methodological approaches explored throughout this research are partly based on internal models developed by this financial institution, and therefore cannot be associated by name. It must therefore be acknowledged that several technical approaches, input parameters and data sets have been provided by this outside party in order to complete the analysis within this thesis. More specifically, the positions to which the methodology within this thesis is applied form a specific credit trading portfolio provided by this institution. As a result, I would like to express my deep gratitude for the support and access provided to me throughout my internship.

Furthermore, I would like to thank all my colleagues who have welcomed me to the team, and offered an open and positive working environment. Moreover, I would like to thank my daily supervisor David, for his continued help, guidance and commitment offered throughout my internship. I would further like to thank my colleague Kristof, who was always available to answer my questions, and very patiently helped me with programming-related issues.

I also would like to thank my supervisor from the University of Amsterdam, Peter Boswijk, for his guidance and insights that have led to the successful completion of this thesis. Lastly, I would like to thank my family for their continued support and for the opportunities they have given me over my academic career.


Abstract

The Incremental Risk Charge and the Default Risk Charge are models developed in order to meet the capital requirements for market risk set out by the Basel Committee on Banking Supervision. As part of the Fundamental Review of the Trading Book, the current approach to modeling credit trading risk, the Incremental Risk Charge, will be replaced as of 2019. The Incremental Risk Charge was developed in order to model credit migrations and default risk of issuers on a portfolio level. New regulation will replace the IRC model with the Default Risk Charge, which will solely focus on the default risk of obligors. The question to investigate is whether the change in modeling approach significantly impacts the capital charge, and which default modeling approaches drive this impact. This research finds that the Default Risk Charge leads to significantly higher capital requirements when compared to the current Incremental Risk Charge. Based on similar modeling assumptions for both the IRC and the DRC, the higher DRC charges seem to be largely driven by the use of a two-factor model as opposed to the single-factor model explored within the Incremental Risk Charge model.


Contents

1 Introduction
2 Literature review
  2.1 Supervisory Framework
  2.2 Credit Risk
    2.2.1 The Merton Model
    2.2.2 The Vasicek Extension
    2.2.3 CreditMetrics
    2.2.4 Copulas
  2.3 FRTB
  2.4 Modeling Framework
3 Incremental Risk Charge
4 The Default Risk Charge
  4.1 Factor Calibration
  4.2 The Algorithm
    4.2.1 Stochastic Recovery Rate
  4.3 Student-t Copula
    4.3.1 Factor Correlations
    4.3.2 Stochastic Recovery Rate: Calibration
    4.3.3 DRC Simulation
5 Data
  5.1 Factor Correlations
  5.2 Stochastic Recovery Rate
6 Calibrated Parameters
  6.1 The Default Risk Charge
7 Results
  7.1 The Default Risk Charge
    7.1.1 Sensitivity analysis
  7.2 The Incremental Risk Charge
8 Conclusion & Outlook
9 Appendix


List of Figures

2.1 Comparison between credit and equity returns. Source: Gupton et al. (2007)
3.1 Example concerning binning. This figure indicates the thresholds concerning credit quality changes, denoted as Φ⁻¹(Pr(c)). Source: Gupton et al. (2007)
6.1 Gaussian model: scatter plot of correlation structure
6.2 Student-t model: scatter plot of correlation structure
6.3 Gaussian kernel density estimation
6.4 Student-t model kernel density estimation
6.5 Standardized t-densities

List of Tables

5.1 Regional factor mapping
5.2 Sector factor mapping
5.3 Factor correlations
5.4 Probability of default for each credit rating for calibration purposes
5.5 Mean and standard deviation of recovery rates
5.6 Re-scaling factor per seniority and credit rating
5.7 Product types within DRC
6.1 Stochastic recovery calibration: Gaussian
6.3 Stochastic recovery calibration: Bade et al. (2011)
7.1 DRC results
7.2 DRC sensitivity analysis
7.3 Unscaled transition matrix. Source: Standard & Poor's
7.4 Scaled transition matrix. Source: Standard & Poor's
7.5 Credit spreads
7.6 IRC result in millions
7.7 Portfolio composition
7.8 Portfolio composition
9.1 Mapping of market seniority to DRC seniority
9.2 Equity return regression model
9.3 Issuer input
9.4 Stochastic recovery re-calibration: Gaussian model


1 | Introduction

The 2007-2008 financial crisis revealed a significant discrepancy in the methods of modeling credit risk (Laurent et al., 2016). Banks that were heavily exposed to unsecuritized credit products witnessed severe losses on their trading books which were not captured by the 99% 10-day VaR (Basel Committee on Banking Supervision, 2009b). These losses were not due to actual defaults, but rather resulted from issuers migrating to new credit ratings (Bharathulwar & Udatha, 2011). In light of this, the Basel Committee on Banking Supervision imposed new regulation outlining the requirements to which banks must adhere when modeling credit risk. These guidelines stipulated the implementation of the Incremental Risk Charge.

The Incremental Risk Charge is one of several capital charges, alongside VaR, stressed VaR and CVA (credit valuation adjustment), that make up the overall regulatory capital charge for banks (Martin et al., 2011). After the financial crisis, the Basel Committee (2009a) promoted stricter requirements concerning the level of capital to be held, as the losses witnessed were significantly higher than the minimum capital charge under the Pillar 1 regulation.

The modeling approach of the Incremental Risk Charge captures both migration and default risk by simulating the joint changes in credit quality of specific issuers. The Basel Committee on Banking Supervision (2009a) requires the IRC to cover unsecuritized credit products over a one-year time horizon, taking into consideration the liquidity¹ horizon of a position, or sets of positions.

Institutions subject to the published regulation are allowed to use internal modeling approaches for the Incremental Risk Charge, provided that they follow the requirements. The use of such internal methodologies, however, has led to large disparities in risk measure levels resulting from variability in modeling approaches amongst banks, as outlined by Laurent et al. (2016). In order to address this variation, the Basel Committee (2013) decided to establish a more prescriptive default risk model: the Default Risk Charge.

¹The time required to sell a position in stressed markets (Basel Committee on Banking Supervision).

The Default Risk Charge (DRC) model is set to replace the Incremental Risk Charge as a result of the implementation of 'Basel IV' regulation. The new set of accords outlines several changes to the minimum capital requirements concerning market risk. This revised FRTB² regulation indicates the change from Value at Risk to Expected Shortfall in order to sufficiently capture 'tail risk' during periods of financial stress (Basel Committee on Banking Supervision, 2016). As outlined by the documentation, banks must model a range of Expected Shortfall charges based on a series of risk classes such as equity, commodity, interest rate, FX and credit spread risk. The Committee (2013) quantifies credit spread risk as the change in market value of credit products as a result of credit spread volatility, where the spread indicates the excess rate earned on a corporate bond over the risk-free rate (J. C. Hull, 2016). The credit spread charge captured by the Expected Shortfall thus coincides with the quantification of migration risk within the Incremental Risk Charge. In order to avoid the double counting of credit spread risk, the DRC solely focuses on the default scenarios of issuers (Basel Committee on Banking Supervision, 2016).

The guidelines concerning the Default Risk Charge provided by the Basel Committee indicate a more conservative approach to modeling credit risk. Therefore, the revised methodology is expected to have a significant effect on the overall regulatory capital charges. Banks are to develop internal modeling approaches based on their interpretation of the regulation published by the Committee. The question to be addressed is whether the change from the IRC to the DRC has a significant impact on the regulatory capital charge. Furthermore, which modeling approaches drive this impact?

A wide range of existing literature explores several methods to accurately model credit risk. After the recent financial crisis, much attention has been devoted to the adequacy of certain joint distributional assumptions concerning credit returns. One of these assumptions included the use of the Gaussian copula, a method initially used by David X. Li to price collateralized debt obligations (CDOs). The use of the Gaussian copula to price complex credit derivatives has been coined "the formula that killed Wall Street" (Salmon, 2009). As the Default Risk Charge models the joint default of issuers, the choice of copula to portray the distribution of default scenarios is of great significance (Wilkens & Predescu, 2018). Existing literature such as the recent paper by Wilkens & Predescu (2018) explores the effect of various copula assumptions on the Default Risk capital charge. However, little to no published literature explores these assumptions with reference to the Incremental Risk Charge. Institutions adhering to FRTB regulation are expected to perform annual validations of their internal modeling approaches (Basel Committee on Banking Supervision, 2016). These validations include comparisons and analyses of new model implementations, signifying the relevance and contribution of the research conducted within this thesis. The methodology explored throughout this research concerns the simulation of joint changes in credit quality and the event of default. A popular benchmark approach is the Merton (1974)-Vasicek (2002) factor model, which can be used in combination with Monte Carlo simulation to generate default and migration scenarios. The Merton-Vasicek factor model indicates that the credit state, or the asset returns, of an obligor depends on a systematic and an idiosyncratic factor, where the systematic factor represents the 'overall state of the economy' (J. C. Hull, 2015). The obligor is correlated to this systematic factor with a specific correlation parameter. The Basel Committee on Banking Supervision (2005) provides an empirically derived formulation of the correlation parameter to be used within the factor model.

²Fundamental Review of the Trading Book. This documentation is published by the Basel Committee on Banking Supervision.

The probabilities of migrating to new credit ratings within the IRC model are extracted from transition matrices, published annually by credit-rating agencies. The modeling approach concerning the Default Risk Charge is based on a multi-factor model, in which two different systematic factors are used to depict an issuer's asset returns, as required by Basel (2016) regulation.

This thesis is outlined as follows. Firstly, literature concerning the modeling approaches for both the IRC and DRC is presented in chapter 2. This subsequently leads to the description of the methodology concerning both models in chapters 3 and 4. The data used throughout the research is summarized and described in chapter 5. The analyses and results are discussed in chapters 6 and 7, followed by concluding remarks.


2 | Literature review

2.1 Supervisory Framework

The Basel Committee on Banking Supervision was established in 1974 in order to enhance financial stability and facilitate coherent supervision worldwide (Basel Committee on Banking Supervision, 2009b). In 1988, the organization published the Basel Capital Accord, focusing on setting a common minimum capital requirement across the banking industry, largely addressing credit risk and risk-weighted assets (RWA) (Basel Committee on Banking Supervision, 1988). The Committee (1988) required banks to hold capital equal to at least 8% of their assets weighted by credit risk (RWA). However, the published Accords (1988) did not take into account or set capital requirements concerning market risks. Therefore, in 1996, the BCBS published a consultative document amending the original accords in order to include requirements for market risk (Basel Committee on Banking Supervision, 1996), after which a final version became available in 2006, known as the Basel II accords. The Basel II framework amendment covered a broader scope of risk, including market, operational and credit risks, commonly known as the Pillar 1 minimum capital requirements.

The regulation outlined two alternative approaches for measuring credit risk: the Standardized and the Internal Ratings-Based Approach (Basel Committee on Banking Supervision, 2006). The Standardized Approach measures credit risk with the support of external credit rating agencies, whilst the Internal Ratings-Based Approach allows banks to internally assess credit risk parameters, subject to supervisory approval (Basel Committee on Banking Supervision, 2006). According to the regulation (2006), the parameters to be estimated internally by banks are the probability of default (PD), the loss given default (LGD) and the exposure at default (EAD).

The Basel Committee on Banking Supervision (2006) outlines several approaches to obtain the probability of default parameters within the Internal Ratings-Based Approach. Banks subject to the regulation can either use data based on their own default experience, map to external data or use developed statistical default models (Basel Committee on Banking Supervision, 2006). The loss given default parameter can similarly be derived from historical data or estimated internally, given banks are able to demonstrate their estimates to be sufficiently robust (Basel Committee on Banking Supervision, 2006). As outlined by the Committee (2006), the EAD estimate measures the amount exposed to the counterparty in case of their default and can similarly be derived from the bank's internal credit risk systems, provided they fulfill supervisory requirements. As a response to the aftermath of the financial crisis, stricter capital requirements were drafted by the Basel Committee on Banking Supervision, known as Basel III. Among other things, the new regulation raised the capital level concerning common equity from the original 2% to 4.5% (Basel Committee on Banking Supervision, 2010), introduced counter-cyclical capital buffers during periods of excess credit growth, and implemented reforms concerning leverage and liquidity requirements (Basel Committee on Banking Supervision, 2010). As part of the new Basel III regulation, the Committee (2009a) decided to expand the scope of the regulatory capital charge in order to include losses incurred from trading credit products. The large losses witnessed within the trading books led to the development of the Incremental Risk Charge, capturing default and migration risk concerning unsecuritized credit products.

Complementary to the guidelines provided by the BCBS, the European Banking Authority (EBA) published additional guidance concerning the methodologies to calculate the Incremental Risk Charge. Banks are allowed to use an Internal Model Approach (IMA), to which these published EBA guidelines apply (European Banking Authority, 2012). According to their documentation (2012), the IRC model should take into account the interdependence between the credit risk experienced by different issuers. Therefore, the methodology used by banks must model the correlation between the default and migration events across obligors (European Banking Authority, 2012). This correlation is to be modeled using an idiosyncratic and one or several systematic factors (European Banking Authority, 2012). The methodology concerning these requirements will be further discussed and outlined in chapter 3 of this thesis.

In order to provide a general outline, various approaches exist to quantify credit risk and the dependence across events and obligors. The following sections explore existing literature concerning these methodologies.


2.2 Credit Risk

According to Christoffersen (2012), credit risk can be defined as the risk of incurred losses due to a counterparty's failure to meet its obligation partially, or in full. The quantification of credit risk is argued to be dependent on movements in unobservable latent variables (Gordy, 2000). Gordy (2000) indicates that these latent variables are determined by certain external risk factors, which drive the dependence of credit events across obligors. This correlation among credit events and across obligors can pose a challenge from a risk management perspective. The CreditMetrics documentation, initially released in 1997 by J.P. Morgan, outlines the challenges of modeling credit risk on a portfolio level. The figure below indicates the difference between equity and credit returns. As can be seen from Figure 2.1, credit returns exhibit a skewed distribution with a fatter left tail than the relatively symmetric equity returns (Gupton et al., 2007).

Figure 2.1: Comparison between credit and equity returns. Source: (Gupton et al., 2007)

As outlined by the technical documentation (2007), the skewness in the credit return distribution is caused by defaults. This leads the portfolio to incur relatively small profits whilst, with a small probability, witnessing relatively large losses (Gupton et al., 2007).

2.2.1 The Merton Model

Merton (1974) provides a methodology for pricing corporate debt. The author incorporates the Black-Scholes formula in order to model a firm's asset and debt value. According to Merton (1974), a firm will go into default once its asset value falls below the face value of its debt. The stockholders of the firm can be considered the residual claimants of the firm, whose equity value can be seen as a call option on the firm's assets (Christoffersen, 2012). As further stipulated by Hull (2016), the asset value follows a Geometric Brownian Motion with volatility σ_A, indicating that the return on the assets follows a normal distribution over small periods of time. The Black-Scholes formula can then be used to calculate the equity value held by the stockholders. The payoff of the option at time T is given by:

$$\text{Equity} = \max(A_T - D,\ 0) \qquad (2.1)$$

As a result, the expected discounted payoff at time T is given by:

$$A\,\Phi(d) - D e^{-r_f T}\,\Phi\!\left(d - \sigma_A \sqrt{T}\right) \qquad (2.2)$$

where d follows:

$$d = \frac{\ln(A/D) + \left(r_f + \sigma_A^2/2\right) T}{\sigma_A \sqrt{T}} \qquad (2.3)$$

The model uses the risk-free rate r_f earned on a government bond and the time to maturity T. The probability of default corresponds to the asset value A falling below the debt value D at time T. Christoffersen (2012) provides a simple formulation of the methodology followed by Merton (1974):

$$\Pr(A < D) = 1 - \Phi\!\left(d - \sigma_A \sqrt{T}\right) = \Phi\!\left(\sigma_A \sqrt{T} - d\right) = \Phi(-dd) \qquad (2.4)$$

Here, dd indicates the distance to default, measuring the number of standard deviations the asset value must move in order for the firm to go into bankruptcy (Christoffersen, 2012). The model outlined by Merton (1974) looks at a single firm, whereas the interest within credit risk includes returns on a portfolio level. Moreover, corporate defaults have proven to be highly correlated events across firms (Christoffersen, 2012), which has to be taken into account when modeling credit risk. Vasicek (2002) extends the Merton (1974) model in order to incorporate the interdependence between obligors in case of default.
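As an illustration of equations (2.1)-(2.4), the sketch below computes the Merton equity value and default probability in Python, using only the standard library. The function names and parameter values are illustrative assumptions, not part of the thesis' model.

```python
from math import exp, log, sqrt
from statistics import NormalDist


def merton_d(A, D, r_f, sigma_A, T):
    """d from eq. (2.3)."""
    return (log(A / D) + (r_f + 0.5 * sigma_A**2) * T) / (sigma_A * sqrt(T))


def merton_equity_value(A, D, r_f, sigma_A, T):
    """Equity as a call option on the firm's assets, eq. (2.2)."""
    N = NormalDist().cdf
    d = merton_d(A, D, r_f, sigma_A, T)
    return A * N(d) - D * exp(-r_f * T) * N(d - sigma_A * sqrt(T))


def merton_default_probability(A, D, r_f, sigma_A, T):
    """Pr(A_T < D) = Phi(-dd), eq. (2.4), with dd = d - sigma_A*sqrt(T)."""
    dd = merton_d(A, D, r_f, sigma_A, T) - sigma_A * sqrt(T)
    return NormalDist().cdf(-dd)
```

For example, under the hypothetical inputs of assets 100, debt 80, a 2% risk-free rate, 25% asset volatility and a one-year horizon, the risk-neutral default probability comes out at roughly 20%.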

2.2.2 The Vasicek Extension

Vasicek (2002) extends Merton's methodology to a portfolio level, assigning a factor structure to a firm's asset values. This factor structure ensures default occurrences to be correlated across issuers. In this case, the asset values are assigned a log-normal distribution for each borrower i.

$$\ln A_{i,t+T} = \ln A_{i,t} + \left(r_f - \tfrac{1}{2}\sigma_{A,i}^2\right) T + \sigma_{A,i} \sqrt{T}\, X_i \qquad (2.5)$$

Within this implementation, X_i is a standard normal random variable following a factor structure:

$$X_i = \rho_i F + \sqrt{1 - \rho_i^2}\, Z_i, \qquad X_i, F, Z_i \sim N(0, 1) \qquad (2.6)$$

The above relationship indicates that X_i is exposed to a systematic factor F and a firm-specific factor Z_i, with correlation ρ_i and √(1 − ρ_i²), respectively (Vasicek, 2002). From the Merton (1974) implementation the following is known:

$$\Pr(A_{i,t+T} < D_i) = \Pr\big(\ln(A_{i,t+T}) < \ln(D_i)\big) = \Phi(-dd_i) \qquad (2.7)$$

If the common factor F remains constant, the conditional default probability is given by the following:

$$\Pr(X_i < -dd_i \mid F) = \Pr\!\left(Z_i < \frac{-dd_i - \rho_i F}{\sqrt{1 - \rho_i^2}}\right) \qquad (2.8)$$

This conditional probability implies that portfolio credit risk depends on the correlation parameter ρ_i.
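The factor representation in equations (2.5)-(2.8) lends itself to Monte Carlo simulation. The sketch below simulates correlated defaults for a homogeneous portfolio under a single systematic factor; the function name and parameter values are illustrative assumptions, not the calibration used in this thesis.

```python
import numpy as np
from statistics import NormalDist


def simulate_defaults(pd, rho, n_issuers, n_scenarios, seed=0):
    """One-factor model (eq. 2.6): X_i = rho*F + sqrt(1 - rho^2)*Z_i.

    An issuer defaults when X_i falls below the threshold Phi^{-1}(pd),
    so the unconditional default probability of each issuer equals pd.
    Returns the simulated number of defaults per scenario.
    """
    rng = np.random.default_rng(seed)
    threshold = NormalDist().inv_cdf(pd)                # -dd_i
    F = rng.standard_normal((n_scenarios, 1))           # systematic factor
    Z = rng.standard_normal((n_scenarios, n_issuers))   # idiosyncratic factors
    X = rho * F + np.sqrt(1.0 - rho**2) * Z
    return (X < threshold).sum(axis=1)
```

Raising ρ leaves the expected number of defaults unchanged but fattens the tail of the loss distribution, which is precisely why the correlation parameter matters for portfolio credit risk.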

2.2.3 CreditMetrics

The CreditMetrics technical documentation (2007), published by J.P. Morgan's Risk Management Research Group, was developed in order to model the volatility in asset value due to credit quality changes. As witnessed in the financial crisis, the losses incurred on the trading book were largely due to issuers migrating to new credit ratings (Bharathulwar & Udatha, 2011), emphasizing the relevance of the methodology explored within the CreditMetrics documentation.

As opposed to quantifying a firm's asset volatility in isolation as under Merton's (1974) approach, CreditMetrics (2007) assumes a credit rating for each obligor, which subsequently corresponds to a specific default probability. The authors extend the Merton model to include not only default thresholds, but also credit migration thresholds. The documentation indicates the use of transition matrices provided by credit rating agencies as an input parameter for modeling migration risk.

The methodology pursued by the authors of CreditMetrics (2007) includes modeling a firm's asset value and mapping it to a certain credit rating, leading to the asset value thresholds. The authors then specify that once the asset thresholds are known, the change in the asset value must be modeled in order to describe the issuer's credit rating evolution (Gupton et al., 2007).

For large portfolios, the CreditMetrics (2007) approach uses Monte Carlo simulation to generate several default or migration scenarios. The generated scenarios represent the credit rating of an issuer within the portfolio. To do so, firstly, normally distributed asset values are simulated for each issuer, assuming the asset thresholds are known. Subsequently, the simulated values are mapped to credit ratings according to the thresholds. The authors re-value the portfolio based on the changes in the credit quality of obligors (Gupton et al., 2007). This provides a distribution of the portfolio losses due to either default or migration.
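The simulation steps described above (draw standard-normal asset values, then bin them against rating thresholds derived from cumulative transition probabilities) can be sketched as follows. The transition probabilities below are hypothetical placeholders, not a published transition matrix.

```python
import numpy as np
from statistics import NormalDist

# Hypothetical one-year transition probabilities for a BBB-rated issuer,
# ordered from default ("D") up to the best rating ("AAA").
states = ["D", "B", "BB", "BBB", "A", "AA", "AAA"]
probs = [0.002, 0.010, 0.050, 0.880, 0.050, 0.006, 0.002]

# Thresholds are Phi^{-1} of the cumulative probabilities: an asset
# return below Phi^{-1}(0.002) means default, and so on up the scale.
cum = np.cumsum(probs)[:-1]
thresholds = [NormalDist().inv_cdf(p) for p in cum]


def simulate_ratings(n_scenarios, seed=0):
    """Map simulated standard-normal asset returns to rating states."""
    rng = np.random.default_rng(seed)
    returns = rng.standard_normal(n_scenarios)
    idx = np.searchsorted(thresholds, returns)  # bin each return falls into
    return [states[i] for i in idx]
```

By construction, the simulated rating frequencies reproduce the input transition probabilities, which is exactly the calibration property the binning thresholds are meant to guarantee.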

Within the approach outlined above, the asset returns are assumed to be normally distributed. However, as presented in Figure 2.1, credit returns do not follow a bell-shaped or symmetric distribution. As previously presented, the apparent skewness in the return distribution is caused by correlated default events across obligors (Gupton et al., 2007; Christoffersen, 2012). A popular method to model the dependence between default and migration scenarios across issuers is the copula (Frey et al., 2001). The following section reviews literature concerning copula modeling and its application to credit risk.

2.2.4 Copulas

Copulas provide a way to combine various univariate distributions, known as the marginal distributions, in order to create a multivariate density (Christoffersen, 2012). The original theory concerning copulas can be summarized by Sklar's Theorem. A definition as provided by Embrechts et al. (2002) is outlined below.

Definition 1. The authors indicate that a copula is any function C : [0, 1]ⁿ → [0, 1] with three distinct properties:

1. C(x_1, …, x_n) is increasing in each component x_i;
2. C(1, …, 1, x_i, 1, …, 1) = x_i for every component x_i;
3. C is n-increasing.

Furthermore, the following proposition is outlined by Frey et al. (2001).

Proposition 1. Let F represent a joint distribution with F_1, …, F_n as continuous marginals. In this case there exists a unique copula function C that links together the marginals in order to create a joint distribution (Christoffersen, 2012). In this case, copula function C ensures that

$$F(x_1, \ldots, x_n) = C\big(F_1(x_1), \ldots, F_n(x_n)\big) \qquad (2.9)$$

holds. This implies that the given function F is a joint distribution with F_1, …, F_n as marginals. Likewise, one can extract a unique copula C from the multivariate distribution function F, where again F_1, …, F_n represent the marginals (Frey et al., 2001). The copula is then found by calculating the following:

$$C(u_1, \ldots, u_n) = F\big(F_1^{-1}(u_1), \ldots, F_n^{-1}(u_n)\big) \qquad (2.10)$$

where F_1^{-1}, …, F_n^{-1} represent the inverse univariate distribution functions of F_1, …, F_n.

The use of copula modeling within financial applications was originally pioneered by David X. Li in order to price complex credit derivatives such as collateralized debt obligations (CDOs). Li (2000) studies the problem of default correlation and applies a normal copula function to model the survival time of either an entity or a financial instrument. Within his approach, Li (2000) defines a bivariate copula C(u, v; ρ), where ρ indicates the correlation parameter between the two default events. For a portfolio of n credits, the type of copula used by Li (2000) is the Gaussian copula, indicating normally distributed marginals and thereby a joint normal distribution:

$$C(u_1, \ldots, u_n) = \Phi\big(\Phi^{-1}(u_1), \ldots, \Phi^{-1}(u_n)\big) \qquad (2.11)$$

Here, Φ and Φ⁻¹ represent the normal cumulative distribution function and its inverse, respectively. The copula function can thus indicate the joint distribution of the survival times of the n credits in the portfolio.
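A minimal sketch of sampling from the Gaussian copula in equation (2.11), assuming an equicorrelated correlation matrix; the helper names and the parameter values are illustrative, not taken from Li (2000).

```python
import numpy as np
from statistics import NormalDist


def gaussian_copula_uniforms(rho, n_credits, n_samples, seed=0):
    """Draw samples from an equicorrelated Gaussian copula (eq. 2.11).

    Each row is (u_1, ..., u_n) with uniform marginals and a joint
    normal dependence structure with pairwise correlation rho.
    """
    corr = np.full((n_credits, n_credits), rho)
    np.fill_diagonal(corr, 1.0)
    rng = np.random.default_rng(seed)
    x = rng.multivariate_normal(np.zeros(n_credits), corr, size=n_samples)
    phi = np.vectorize(NormalDist().cdf)
    return phi(x)                                   # u_i = Phi(x_i)


def joint_default_probability(u, pd):
    """Fraction of samples in which every credit defaults within the
    horizon, i.e. all u_i fall below the marginal default probability."""
    return float((u < pd).all(axis=1).mean())
```

With a positive ρ, the simulated joint default probability exceeds the product of the marginal default probabilities, which is the dependence effect the copula is introduced to capture.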

Brigo et al. (2009) outline several issues concerning Li's (2000) approach to modeling default dependence. Their main point addresses the unrealistic assumption of condensing the large pool of correlation parameters within portfolios into one correlation ρ. Furthermore, Brigo et al. (2009) indicate that these correlation parameters may change, highlighting the lack of dynamics within the Gaussian copula approach. Kole et al. (2007) test the accuracy of several copulas for risk management purposes. Using a portfolio based on stocks, bonds and real estate, the authors find the best fit to be the Student-t copula. Through a goodness-of-fit test, Kole et al. (2007) conclude that the Gaussian copula underestimates the risk found in the distribution's tails, not capturing the joint probability of extreme downward movements. The authors argue that the Student-t copula assigns probability to extreme negative returns, and witnesses stronger dependence within these tails, even if correlation parameters are zero (Kole et al., 2007).

Meneguzzo & Vecchiato (2004) focus on the choice of an appropriate copula, specifically based on a credit trading portfolio. From the range of copula families, the fit of the Gaussian and Student-t copulas are compared. The authors test their assumptions by pricing collateralized debt obligations (CDOs) and basket default swaps. CDOs are essentially asset-backed securities where the underlying assets consist of bonds; the securities are then 'tranched' based on seniority (J. C. Hull, 2016). A basket credit default swap follows the same notion as a single-name credit default swap; however, instead of referring to a single entity, the basket refers to a group of entities (J. C. Hull, 2016). Once one of the entities in the basket defaults, settlement occurs (J. C. Hull, 2016). The nature of these products implies the necessity of modeling the joint dependency between the obligors within the underlying pools and tranches of securities. Meneguzzo & Vecchiato (2004) test the use of the Student-t copula by quasi-maximum likelihood estimation in order to capture the tail 'fatness' of the return series. Similarly to Kole et al. (2007), the authors conclude that the Student-t copula exhibits the best fit to the return series used.
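For intuition on the Student-t copula discussed above: a multivariate t draw can be built by dividing correlated normal draws by a scaled chi-square variable, after which each margin is mapped to (0, 1). The sketch below uses an empirical rank transform in place of the exact univariate t CDF to stay dependency-free; it is an illustrative construction, not the estimation procedure of the papers cited.

```python
import numpy as np


def student_t_copula_uniforms(rho, nu, n_credits, n_samples, seed=0):
    """Approximate samples from an equicorrelated Student-t copula.

    normals / sqrt(chi2_nu / nu) yields multivariate t draws; ranking
    each margin approximates the probability-integral transform, giving
    uniform marginals with t-copula dependence (fatter joint tails than
    the Gaussian copula for the same rho).
    """
    rng = np.random.default_rng(seed)
    corr = np.full((n_credits, n_credits), rho)
    np.fill_diagonal(corr, 1.0)
    normals = rng.multivariate_normal(np.zeros(n_credits), corr, size=n_samples)
    chi2 = rng.chisquare(nu, size=(n_samples, 1))
    t_draws = normals / np.sqrt(chi2 / nu)
    # Empirical rank transform per margin: rank / (n + 1) lies in (0, 1).
    ranks = t_draws.argsort(axis=0).argsort(axis=0)
    return (ranks + 1) / (n_samples + 1)
```

The shared chi-square divisor is what creates tail dependence: a single small draw inflates all margins at once, so extreme co-movements are far more likely than under the Gaussian copula.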

2.3 FRTB

The Basel Committee periodically publishes final revisions to the Basel regulatory guidelines, commonly known within risk management as the Fundamental Review of the Trading Book (FRTB). The newly published regulation (2016) indicates several changes to the minimum capital requirements for market risk, to be implemented in 2019. Within these changes, the Committee (2016) has decided to replace the Incremental Risk Charge with the Default Risk Charge. Banks are again allowed to use either a Standardized Approach provided by the regulators, or to develop models according to the Internal Model Approach, subject to supervisory approval. The following paragraphs shortly summarize the regulation concerning the Internal Model Approach, as provided in FRTB (Basel Committee on Banking Supervision, 2016).

The Default Risk Charge is to be modeled using a 99.9% VaR measure, where the default simulation model must contain two different systematic factors. This is in contrast with the IRC requirements, where banks had the freedom to choose either one or multiple systematic factors (European Banking Authority, 2012). The data series concerning the correlation and calibration process must cover a period of 10 years, including a period of stress (Basel Committee on Banking Supervision, 2016). The regulation (2016) further requires that the data used within this process consist of either listed equity prices or CDS spreads. The positions within the scope of the DRC model must include defaulted debt positions, sovereign exposures and equity positions (Basel Committee on Banking Supervision, 2016). The default of equity must be modeled as its value dropping to zero (Basel Committee on Banking Supervision, 2016).
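To make these requirements concrete, the sketch below simulates defaults under two systematic factors and reads off the 99.9% quantile of the simulated loss distribution. This is a toy example under strong simplifying assumptions (independent factors, homogeneous loadings, deterministic LGD) with hypothetical names and parameters; it is not the internal model discussed later in this thesis.

```python
import numpy as np
from statistics import NormalDist


def drc_sketch(exposures, pds, lgd, beta_region, beta_sector,
               n_scenarios=100_000, seed=0):
    """Illustrative two-factor default simulation for a DRC-style charge.

    Asset return: X_i = b_r*F_region + b_s*F_sector
                        + sqrt(1 - b_r^2 - b_s^2)*Z_i.
    The charge is taken as the 99.9% quantile of the loss distribution
    (a VaR-style measure, per the FRTB IMA requirements).
    """
    exposures = np.asarray(exposures, dtype=float)
    pds = np.asarray(pds, dtype=float)
    thresholds = np.array([NormalDist().inv_cdf(p) for p in pds])

    rng = np.random.default_rng(seed)
    n = len(exposures)
    F_r = rng.standard_normal((n_scenarios, 1))   # regional factor
    F_s = rng.standard_normal((n_scenarios, 1))   # sector factor
    Z = rng.standard_normal((n_scenarios, n))     # idiosyncratic factors
    idio = np.sqrt(1.0 - beta_region**2 - beta_sector**2)
    X = beta_region * F_r + beta_sector * F_s + idio * Z

    losses = ((X < thresholds) * exposures * lgd).sum(axis=1)
    return float(np.quantile(losses, 0.999))
```

Because the charge is read from the extreme 99.9% tail, it is dominated by scenarios in which both systematic factors are deeply negative and defaults cluster, rather than by the expected loss.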

The Committee (2016) further outlines the requirements concerning the probabilities of default (PD) used within the DRC methodology. If an institution has been approved to use IRB input parameters, the corresponding PDs must be used. PDs based on external sources may also be consulted, provided they can be proven to be relevant for the bank's trading portfolio (Basel Committee on Banking Supervision, 2016). The same applies to the LGD parameters: the regulation (2016) states that estimates can be based on the IRB approach, and must reflect the type of position and its seniority. Furthermore, FRTB (2016) requires recovery rates (1 − LGD) to be dependent on the economic cycle.

2.4 Modeling Framework

Literature concerning modeling frameworks for both the IRC and the DRC is scarce. However, several articles provide example approaches for either charge based on their interpretation of the regulation. Martin et al. (2011) provide an approach for modeling the IRC, and suggest the use of a factor model to obtain firm asset values, for which asset thresholds are defined. The authors (2011) then use the thresholds to model either a migration or default scenario, depending on the threshold interval into which the asset return falls. The thresholds are calibrated such that the probability of a credit rating change equals the corresponding migration probability obtained from a transition matrix (Martin et al., 2011).


The implied factor structure allows dependency across joint migration and default scenarios for different obligors, as required by regulation (Martin et al., 2011). The authors (2011) outline both a single and multi-factor approach, using a three-factor industry-country combination for the multi-factor approach, based on a portfolio of bonds. The results confirm their hypothesis of witnessing only a small change in the capital charge when comparing the single and multi-factor approaches (Martin et al., 2011).

A modeling framework concerning the Default Risk Charge is extensively outlined by Wilkens & Predescu (2015), based on their interpretation of the FRTB guidelines. As required, their approach uses a two-factor model, along with a stochastic process for the recovery rate estimation. The approach used by the authors is based on a Gaussian copula factor model. Their methodology is used as a starting point for the methods used within this thesis, and will be discussed throughout the remaining chapters.

Additionally, separately from their previous paper, Wilkens & Predescu (2018) analyze several copulas for the purpose of modeling the Default Risk Charge. Besides the Gaussian copula factor model, the authors test the fit of the Student-t and Clayton copulas. In this case the authors (2018) do not impose the required factor structure upon the issuers' asset returns, and use a constant recovery rate as opposed to a stochastic process. Due to these limitations, the authors emphasize that their research can be used for sensitivity analyses and stress testing rather than as a modeling framework in line with the FRTB requirements. This thesis thus attempts to fill the apparent gap within existing literature by combining the required FRTB DRC modeling approaches with the analysis of a different copula assumption, and comparing the resulting charges to the capital requirements indicated by the IRC model currently in use by banks.

The following chapters present the approaches for modeling the IRC and DRC, along with data descriptions and results. The modeling approaches for both charges are based on regulation set out by the Basel Committee on Banking Supervision and the European Banking Authority.


3 | Incremental Risk Charge

The regulation outlined by the Basel Committee on Banking Supervision stipulates that the Incremental Risk Charge model must take into consideration default and credit migration risk of un-securitized credit products over a one-year time horizon at a 99.9 percent confidence level (Basel Committee on Banking Supervision, 2009a). Current regulation requires the Incremental Risk Charge to model interdependence between different issuers using systematic and idiosyncratic factors (European Banking Authority, 2012). As a result, a single-factor Gaussian copula approach can be used to model either the default or migration of a specific issuer, based on the original work by Merton (1974) and Vasicek (2002). The Gaussian copula single-factor model is outlined by J. C. Hull (2015) and Christoffersen (2012) as follows.

$$X_i = \rho_i F + \sqrt{1 - \rho_i^2}\, Z_i \qquad (3.1)$$
$$X, F, Z \sim N(0, 1)$$

This implies

$$\mathrm{Cov}(X_1, X_2) = \mathrm{Cov}\!\left(\rho_1 F + \sqrt{1 - \rho_1^2}\, Z_1,\ \rho_2 F + \sqrt{1 - \rho_2^2}\, Z_2\right) = \rho_1 \rho_2 = \mathrm{Corr}(X_1, X_2) \qquad (3.2)$$

The relation indicates that the credit state $X_i$ of issuer $i$ is determined by a common systematic factor $F$, representing the state of the economy, along with a firm-specific, idiosyncratic factor $Z_i$. The parameter $\rho_i$ is the correlation between the credit state of the obligor and the common factor $F$, which is assumed to be independent from the idiosyncratic factor $Z_i$. Condition (3.1) further implies the correlation between issuers.
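As a minimal sketch of (3.1)-(3.2), assuming numpy is available, the single-factor model can be simulated and the implied issuer correlation checked empirically (all parameter values below are illustrative, not calibrated inputs):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_credit_states(rho, n_sims, rng):
    """Simulate X_i = rho_i * F + sqrt(1 - rho_i^2) * Z_i  (eq. 3.1)."""
    rho = np.asarray(rho)                         # shape (n_issuers,)
    F = rng.standard_normal(n_sims)               # common systematic factor
    Z = rng.standard_normal((n_sims, rho.size))   # idiosyncratic factors
    return rho * F[:, None] + np.sqrt(1.0 - rho**2) * Z

rho = np.array([0.3, 0.5])
X = simulate_credit_states(rho, 200_000, rng)
# The implied pairwise correlation should be close to rho_1 * rho_2 = 0.15 (eq. 3.2)
print(np.corrcoef(X.T)[0, 1])
```

With 200,000 simulations the sample correlation converges closely to the theoretical value $\rho_1 \rho_2$.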

The factors explaining the credit state of each obligor were originally introduced by Basel II's explanatory note concerning the Internal Ratings-Based Approach (Basel Committee on Banking Supervision, 2005), which had adopted the methods of Merton (1974) and Vasicek (2002) in order to model the capital requirements for credit risk.

The CreditMetrics technical documentation (Gupton et al., 2007) largely defines the methodology concerning credit migration and default risk applicable for modelling the IRC. As outlined by the documentation, scenarios concerning the systematic factor $F$ and firm-specific factor $Z$ are generated using Monte Carlo simulation. The correlation parameter $\rho$ is calculated using the IRB Basel II correlation formula indicated below (Basel Committee on Banking Supervision, 2005).

$$\rho = 0.12\left(\frac{1 - e^{-50 \times PD}}{1 - e^{-50}}\right) + 0.24\left(1 - \frac{1 - e^{-50 \times PD}}{1 - e^{-50}}\right) \qquad (3.3)$$

The latter condition has been derived from time series data analysis for corporate, bank and sovereign exposures by the G10 supervisors (Basel Committee on Banking Supervision, 2005). As outlined by the IRB documentation, a higher probability of default indicates a larger effect of the idiosyncratic component on default risk, whereas the overall state of the economy plays a smaller role (Basel Committee on Banking Supervision, 2005). The correlation parameter thus depends on the probability of default of an issuer, which is uniquely determined by its credit rating. These credit ratings are extracted from the Issuer Risk Report database and indicate the transition or default probability of an obligor. The PDs based on the credit ratings are extracted from transition matrices provided annually by Standard & Poor's. The transition matrices are based on annual data, whereas the IRC methodology uses a liquidity horizon [1] shorter than one year. If the annual transition matrix provided by S&P is denoted by $P$, then the matrix for each liquidity horizon $LH$ (expressed in years) can be calculated as $P_{LH} = P^{LH}$. If, for example, the liquidity horizon is set to 3 months, the matrix is scaled by raising the entire matrix to the power 1/4. In order to ensure non-negative probability values and rows adding up to one, a re-scaling procedure is subsequently applied. Firstly, negative values are replaced by zero. Secondly, the rows are scaled as follows.

$$P(r, c) = \frac{P(r, c)}{\sum_{i=1}^{8} P(r, i)} \qquad (3.4)$$

[1] Time required to liquidate a position in stressed conditions. The liquidity horizon for a

i ∈ [AAA, AA, A, BBB, BB, B, CCC, D]
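The liquidity-horizon scaling and the row re-scaling of (3.4) can be sketched as follows, assuming scipy is available; the 3-state matrix below is a hypothetical toy example, not the S&P data used in the thesis:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def scale_transition_matrix(P, horizon_years):
    """Scale an annual transition matrix to a shorter liquidity horizon.

    P_LH = P ** LH as a matrix power; negative entries are floored at
    zero and each row is re-normalized to sum to one (eq. 3.4)."""
    P_lh = fractional_matrix_power(P, horizon_years).real
    P_lh = np.clip(P_lh, 0.0, None)                 # replace negatives by zero
    return P_lh / P_lh.sum(axis=1, keepdims=True)   # rows sum to one

# Toy 3-state matrix (two ratings plus an absorbing default state)
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])
P_q = scale_transition_matrix(P, 0.25)   # 3-month liquidity horizon
print(P_q.round(4))
```

The quarterly matrix keeps the default state absorbing, and its fourth matrix power recovers (approximately) the annual matrix.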

In equation (3.4), $P(r, c)$ denotes the probability of migrating from credit rating $r$ to rating $c$. The correlation parameter (3.3) is calibrated such that the probability of $X_i$ falling in a certain range of thresholds (bin) coincides with the corresponding probability in the transition matrix. Figure 3.1 shows an example of binning, extracted from the CreditMetrics publication. The thresholds of each rating are given by $\Phi^{-1}(P(r, c))$, where $\Phi^{-1}$ is the inverse of the standard normal cumulative distribution function. The probability $P(r, c)$ is extracted from a transition matrix.

Figure 3.1: Example concerning binning. This figure indicates the thresholds concerning credit quality changes, denoted as $\Phi^{-1}(P(r, c))$. Source: (Gupton et al., 2007)
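The binning illustrated in Figure 3.1 can be sketched with a small example; the transition-row probabilities below are hypothetical, not the S&P matrices used in the thesis:

```python
import numpy as np
from scipy.stats import norm

RATINGS = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC", "D"]

def rating_thresholds(row):
    """Upper bin thresholds Phi^{-1}(cumulative P), ordered worst rating first."""
    cum = np.cumsum(row[::-1])   # cumulative probability from the D state upwards
    cum[-1] = 1.0                # guard against floating-point rounding
    return norm.ppf(cum)         # thresholds for [D, CCC, ..., AAA]

def migrate(row, x):
    """Return the new rating for a simulated credit state x."""
    z = rating_thresholds(row)
    idx = np.searchsorted(z, x)  # first threshold at or above x
    return RATINGS[::-1][min(idx, len(RATINGS) - 1)]

# Hypothetical BBB transition row, ordered [AAA, ..., CCC, D]; P(default) = 1%
row = np.array([0.0002, 0.003, 0.05, 0.85, 0.06, 0.02, 0.0068, 0.01])
print(migrate(row, -3.0))   # deep in the left tail -> "D"
print(migrate(row, 0.0))    # near the center -> rating retained, "BBB"
```

A credit state below the lowest threshold maps to default, while a state in the central bin leaves the rating unchanged.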

As required by the Basel Committee, the correlation parameter for financial firms is multiplied by 1.25 (Basel Committee on Banking Supervision, 2016). The credit rating of obligor $i$ migrates if the credit-state index $X_i$ falls into a bin different from its original rating. Default occurs once $X_i$ falls in category D (Default in Figure 3.1). In case of migration, the financial impact on the instrument is calculated as the change in its mark-to-market value.

$$\delta V_i(Z) = CS01_i \cdot \left[\frac{1 - e^{-\delta Z \cdot T_i}}{-T_i}\right] \qquad (3.5)$$

The credit sensitivity of the instrument is denoted as $CS01_i$, which indicates the change in the instrument's value for a one basis point move in its credit spread, whereas $\delta Z$ represents the change in the credit spread. The shift in credit spread is based on data sourced from Markit Ltd., an independent source providing financial information concerning credit derivatives. The spread is calculated by taking the average per product type (bonds/CDS), per rating and across currency (EUR and USD). The calculation occurs on a monthly basis and is based on data of a specific day, usually the first day of the month. The tenor of the instrument (time until the contract expires) is denoted as $T_i$. Credit sensitivities $CS01_i$ are given per tenor bucket [2], per issuer, currency and instrument.

In case of default, the financial impact is the difference between the mark-to-market value before and after default. Apart from losing the market value of the instrument, the holder receives the recovered amount with respect to the notional value $N$.

$$\text{Financial impact} = MtM - N \cdot RR = MtM - N \cdot (1 - LGD) = EAD - N \cdot (1 - LGD) \qquad (3.6)$$

According to IRC guidelines, the LGD estimates are required to comply with the Internal Ratings-Based approach (Basel Committee on Banking Supervision, 2009a). The IRB guidelines report that institutions must use LGDs which reflect higher loss severities due to economic downturns (Basel Committee on Banking Supervision, 2005). These parameters are modeled by banks internally and are therefore used as inputs.

The IRC algorithm is thus as follows: per liquidity horizon a systematic factor $F$ is generated, whilst for each issuer an idiosyncratic factor $Z_i$ is obtained. This leads to the credit state $X_i$ of each issuer. Based on the credit state, the issuer either defaults, migrates or retains the same credit rating. In case there is no change, the financial impact is zero. In the remaining cases, the financial impact is calculated for each issuer. Each liquidity horizon consists of three months, so the four quarterly financial impacts are aggregated over issuers $i$ to obtain a one-year impact. These steps are repeated for a large number of simulations, e.g. 20 million, of which the 99.9th percentile is the Incremental Risk Charge.
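A stripped-down, default-only sketch of this simulation loop is given below; migration effects are omitted and all PDs, correlations and loss amounts are illustrative, so this is not the institution's IRC implementation:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def irc_default_only(pd, rho, loss_per_issuer, n_sims=200_000, n_horizons=4):
    """Toy default-only IRC: per quarterly horizon, draw one systematic
    factor F and per-issuer idiosyncratic Z; an issuer defaults when its
    credit state falls below Phi^{-1}(PD). Quarterly losses are summed
    over the year and the 99.9th percentile of total loss is the charge.
    pd holds illustrative per-horizon default probabilities."""
    pd = np.asarray(pd); rho = np.asarray(rho)
    threshold = norm.ppf(pd)                       # per-horizon default bin
    loss = np.zeros(n_sims)
    for _ in range(n_horizons):
        F = rng.standard_normal((n_sims, 1))
        Z = rng.standard_normal((n_sims, pd.size))
        X = rho * F + np.sqrt(1 - rho**2) * Z
        loss += (X < threshold) @ loss_per_issuer  # add defaulted losses
    return np.quantile(loss, 0.999)

charge = irc_default_only(pd=[0.005, 0.02], rho=[0.3, 0.4],
                          loss_per_issuer=np.array([1.0e6, 0.5e6]))
print(f"toy IRC: {charge:,.0f}")
```

The structure mirrors the algorithm above: four quarterly loops, aggregation across issuers, and a 99.9th-percentile read-off at the end.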

[2] A set of risk positions grouped by common characteristics (Basel Committee on Banking Supervision, 2016).

4 | The Default Risk Charge

4.1 Factor Calibration

The Basel Committee on Banking Supervision (2016) requires default risk to be measured using a VaR model with a one-year time horizon at a 99.9% confidence level. The accords further stipulate that banks are required to use a default simulation model with two types of systematic risk factors (Basel Committee on Banking Supervision, 2016). As a result, the Default Risk Charge can be based on the multi-factor Merton model (Merton, 1974). The generic version using multiple factors can be formulated as proposed by J. C. Hull (2015).

$$X_i = \rho_{i1} F_1 + \rho_{i2} F_2 + \cdots + \rho_{iM} F_M + \sqrt{1 - \rho_{i1}^2 - \rho_{i2}^2 - \cdots - \rho_{iM}^2}\, Z_i \qquad (4.1)$$
$$X, F, Z \sim N(0, 1)$$

The model indicates that $X_i$ is determined by multiple systematic factors $F_1, \cdots, F_M$ and an idiosyncratic component $Z_i$, where the systematic factors are uncorrelated with the idiosyncratic components and with each other. In this case, the correlation structure between $X_i$ and $X_j$ is as follows (J. C. Hull, 2015):

$$\mathrm{Corr}(X_i, X_j) = \sum_{m=1}^{M} \rho_{im} \rho_{jm} \qquad (4.2)$$

Martin et al. (2011) indicate that the factor loadings concerning the systematic factors are assumed to be identical for firms within the same country-industry category. The factors considered within the DRC algorithm thus concern a common global factor, along with regional and sector factors. The following paragraphs outline the process of estimating the parameters used as inputs within the DRC algorithm presented in section 4.2. A Monte Carlo simulation is used in order to simulate $m + n + 1$ independent, identically distributed standard normal random vectors, each of the same size as the number of simulations, $N_{sim}$. These vectors represent $m$ regions and $n$ sectors, in addition to one global factor. To model the dependence between the factors, the Pearson correlation coefficients are estimated for each pair of index log-returns $(R_1, \ldots, R_m, S_1, \ldots, S_n, G)$, creating a correlation matrix $M$, where the last column represents the global factor $G$, denoted by $S_{n+1}$. Using the same set of log index returns, the standard deviation of each region and sector is calculated, henceforth denoted by $\sigma_{R_i}$ and $\sigma_{S_i}$, respectively.

$$M = \begin{pmatrix} 1 & \rho_{R_1,R_2} & \cdots & \rho_{R_1,S_{n+1}} \\ \rho_{R_2,R_1} & 1 & \cdots & \rho_{R_2,S_{n+1}} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{S_{n+1},R_1} & \rho_{S_{n+1},R_2} & \cdots & 1 \end{pmatrix}$$

In order to correlate the simulated factors, the Cholesky decomposition of the correlation matrix $M$ is calculated, as indicated by Christoffersen (2012). The Cholesky decomposition of the symmetric matrix $M$ is given by $M = LL'$, where $L$ represents a lower triangular matrix and $L'$ indicates its transpose.
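As an illustration of this step, using a hypothetical 3×3 correlation matrix (two regions plus the global factor) rather than the matrix estimated from index returns:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative correlation matrix, not the estimated one
M = np.array([[1.0, 0.6, 0.7],
              [0.6, 1.0, 0.5],
              [0.7, 0.5, 1.0]])

L = np.linalg.cholesky(M)                # M = L L'
U = rng.standard_normal((500_000, 3))    # uncorrelated simulated factors
C = U @ L.T                              # correlated factor vectors

print(np.corrcoef(C.T).round(2))         # sample correlations approach M
```

Multiplying the uncorrelated draws by $L'$ reproduces the target correlation structure up to sampling error.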

The correlated factors are obtained by multiplying the matrix of uncorrelated simulated factors by the transpose of the Cholesky factor of the correlation matrix $M$. The correlated regional and sector vectors are denoted $Z_R$ and $Z_S$, respectively; the global factor vector is represented by $Z_G$. The calibration of the model further concerns the correlation of each issuer to its region and sector, which is modeled by an ordinary least squares regression. The FRTB regulation (2016) requires correlation estimates and the remaining parameters to be calculated for each issuer. However, due to a lack of data covering every issuer over a 10-year history, proxies must be used for the missing data. A definitive DRC model should include parameters for each individual issuer; in this case, however, the following approach is applied.

The list of issuers is divided in two parts: $\{Issuer_1, \ldots, Issuer_{top}\}$ and $\{Issuer_{top+1}, \ldots, Issuer_k\}$, where $Issuer_{top}$ represents the top issuers within the portfolio for which 10 years of historical equity or CDS data is available. These issuers are selected per region and sector as those having the highest exposure-at-default. In order to explain the variation in the return of each top issuer within $\{Issuer_1, \ldots, Issuer_{top}\}$, the standardized issuer returns $I_{k,t}$ are regressed on the corresponding regional and sector index returns.

$$I_{k,t} = \alpha_k R_{i_k,t} + \beta_k S_{j_k,t} + \epsilon_{k,t} \qquad (4.3)$$

The regression analysis is an adaptation of the method used by Wilkens & Predescu (2015), where the authors capture the factor dependence by regressing the country and industry returns separately on global returns. The output of regression (4.3) provides the estimated coefficients $\hat{\alpha}_k$ for the regional index returns and $\hat{\beta}_k$ for the sector index returns, along with $R_k^2$ as the coefficient of determination.

For non-top issuers $\{Issuer_{top+1}, \ldots, Issuer_k\}$ within a certain region and sector (bucket), proxy parameters are defined. The non-top issuer is assigned the median values of the parameters $\alpha$, $\beta$ and $R^2$ of the set of top issuers within the corresponding region and sector. If this set is empty (there is no data available for this specific bucket), then the non-top issuer is assigned the median values of $\alpha$, $\beta$ and $R^2$ over the entire set of top issuers.
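A sketch of the per-issuer calibration of (4.3) and the median-proxy fallback follows, on synthetic return series with known loadings; the function names and all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def calibrate_issuer(issuer_ret, region_ret, sector_ret):
    """OLS fit of I_t = alpha * R_t + beta * S_t + eps_t (eq. 4.3),
    without intercept, returning (alpha, beta, R^2)."""
    X = np.column_stack([region_ret, sector_ret])
    coef, *_ = np.linalg.lstsq(X, issuer_ret, rcond=None)
    resid = issuer_ret - X @ coef
    r2 = 1.0 - resid.var() / issuer_ret.var()
    return coef[0], coef[1], r2

# Synthetic returns with known loadings alpha = 0.8, beta = 0.4
n = 2500                                   # roughly 10 years of daily data
R = rng.standard_normal(n); S = rng.standard_normal(n)
I = 0.8 * R + 0.4 * S + 0.3 * rng.standard_normal(n)
alpha, beta, r2 = calibrate_issuer(I, R, S)
print(round(alpha, 2), round(beta, 2), round(r2, 2))

# Non-top issuers in the same bucket receive the bucket medians
top_params = np.array([[0.8, 0.4, 0.9], [0.6, 0.5, 0.8], [0.7, 0.3, 0.85]])
proxy = np.median(top_params, axis=0)      # proxy [alpha, beta, R^2]
print(proxy)
```

The recovered loadings match the true values closely, and the proxy is the elementwise median over the bucket's top issuers.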

4.2 The Algorithm

Using the parameters calibrated in section 4.1, the Default Risk Charge can be computed using a factor model. The idiosyncratic factor is generated by simulating an i.i.d. random standard normal vector, represented by $\xi_k$. The size of the vector is determined by the number of simulations, $N_{sim}$. The asset return for each issuer $k$, denoted by $X_k$, is estimated as follows.

$$X_k = \sqrt{\frac{R_k^2}{\Psi}}\left(\alpha_k \sigma_{R_{i_k}} Z_{R_{i_k}} + \beta_k \sigma_{S_{i_k}} Z_{S_{i_k}}\right) + \sqrt{1 - R_k^2}\, \xi_k \qquad (4.4)$$
$$\Psi = \alpha_k^2 \sigma_{R_{i_k}}^2 + \beta_k^2 \sigma_{S_{i_k}}^2 + 2 \alpha_k \beta_k \sigma_{R_{i_k}} \sigma_{S_{i_k}} \rho_{R_{i_k},S_{i_k}} \qquad (4.5)$$
$$X_k \sim N(0, 1)$$
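The normalization in (4.4)-(4.5) can be verified on simulated inputs; all parameter values below are illustrative, and the check confirms that $X_k$ has approximately unit variance by construction:

```python
import numpy as np

rng = np.random.default_rng(11)

def asset_returns(alpha, beta, sig_r, sig_s, rho_rs, r2, z_r, z_s, xi):
    """X_k per eq. (4.4)-(4.5): systematic part scaled by sqrt(R^2/Psi),
    idiosyncratic part by sqrt(1 - R^2), so that Var(X_k) = 1."""
    psi = alpha**2 * sig_r**2 + beta**2 * sig_s**2 \
        + 2 * alpha * beta * sig_r * sig_s * rho_rs
    systematic = alpha * sig_r * z_r + beta * sig_s * z_s
    return np.sqrt(r2 / psi) * systematic + np.sqrt(1 - r2) * xi

# Illustrative inputs; z_r and z_s correlated with rho_rs = 0.5
n = 400_000
rho_rs = 0.5
z_r = rng.standard_normal(n)
z_s = rho_rs * z_r + np.sqrt(1 - rho_rs**2) * rng.standard_normal(n)
xi = rng.standard_normal(n)
X = asset_returns(0.7, 0.5, 0.2, 0.25, rho_rs, 0.6, z_r, z_s, xi)
print(round(X.std(), 3))   # close to 1 by construction
```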

In (4.4)-(4.5), the correlation between the regional and sector factors $i$ for each issuer $k$ is denoted by $\rho_{R_{i_k},S_{i_k}}$, where $\Psi$ is used as a normalization coefficient, as suggested by Wilkens & Predescu (2015). This ensures that the summation of factors results in $X_k$ following a standard normal distribution. The parameters $\alpha_k$ and $\beta_k$ used within the factor model are those estimated in regression (4.3), where $R_k^2$ is its coefficient of determination. The property $R^2 = \rho^2$ leads (4.4) to be specified similarly to the generic factor model in (4.1).

The regional and sector factors denoted by $Z_{R_{i_k}}$ and $Z_{S_{i_k}}$ are determined in section 4.1, along with $\sigma_{R_{i_k}}$ and $\sigma_{S_{i_k}}$. The issuer is in default once its asset return $X_k$ falls below a certain critical threshold (Löffler & Posch, 2011). Each issuer has a specific credit rating, resulting in a corresponding probability of default. The default scenario is defined as follows.

$$t = \operatorname{argmin}_{t \in (1, \cdots, n_T)} \left\{ X_k \leq \Phi^{-1}(PD_t) \right\} \qquad (4.6)$$

where $\Phi^{-1}$ represents the inverse of the standard normal cumulative distribution function. The latter condition indicates the time of default of an issuer, where $n_T$ represents the number of time steps within one year. In this case, $n_T$ is set to 12, indicating monthly time steps, in which a firm can either default or survive. Only in the case of default is a recovery rate defined. The probabilities of default are those provided in the 'D', or default, column of the transition matrices also used for the Incremental Risk Charge, given in Tables 7.3 and 7.4. This eventually allows for an effective comparison between the two charges. However, the PD concerning the highest rating, AAA, is floored at 3 basis points as required by FRTB regulation (Basel Committee on Banking Supervision, 2016). This restriction is eventually relaxed in order to provide additional results for an appropriate comparison between the IRC and DRC. The recovery rate can either follow a stochastic process, or be held constant at a percentage specific to each issuer. As a result, the financial impact is calculated for each defaulted path using the following expression.

$$FI_s^k = EAD_{s,t}^k - RR_s^k \cdot Notional_{s,t}^k \qquad (4.7)$$

The latter expression includes $EAD_{s,t}^k$, representing the exposure-at-default for seniority $s$ at default time $t$; $Notional_{s,t}^k$ indicates the notional for seniority $s$ at default time $t$, whilst $RR_s^k$ is the stochastic recovery rate for that specific path for seniority $s$. If a deterministic recovery rate is used, each simulation uses a constant rate per issuer. The process, however, is not iterative: if a firm has defaulted within a simulation, the process simply repeats, and another scenario is generated in which the issuer can again either default or survive. For the defaulted paths, the financial impact of each issuer is summed, and the 99.9th percentile of the resulting vector is the DRC.
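The default-timing rule (4.6) and the loss calculation (4.7) can be sketched as follows; the constant-hazard monthly PD term structure and the exposure figures are illustrative assumptions, not the calibrated inputs:

```python
import numpy as np
from scipy.stats import norm

def default_month(x_k, pd_annual, n_t=12):
    """First monthly step t at which X_k <= Phi^{-1}(PD_t)  (eq. 4.6).

    PD_t is taken as the cumulative PD up to month t under a constant
    hazard rate -- an illustrative term structure. Returns None if the
    issuer survives the year."""
    t = np.arange(1, n_t + 1)
    pd_t = 1.0 - (1.0 - pd_annual) ** (t / n_t)   # cumulative monthly PDs
    hits = np.nonzero(x_k <= norm.ppf(pd_t))[0]
    return int(hits[0]) + 1 if hits.size else None

print(default_month(-3.5, 0.02))   # deep in the left tail: early default
print(default_month(1.0, 0.02))    # survives the year: None

# Financial impact upon default (eq. 4.7), hypothetical exposure figures
fi = 1.0e6 - 0.4 * 1.0e6           # EAD - RR * Notional
print(fi)
```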

The following section defines the methodology concerning the stochastic recovery rate model used within a default scenario.


4.2.1 Stochastic Recovery Rate

This section addresses the FRTB requirement to introduce a dependency between the recovery rates and the systematic factors: the estimated losses in case of default must reflect the economic cycle (Basel Committee on Banking Supervision, 2016). A factor model proposed by Wilkens & Predescu (2015), based on prior work by Pykhtin (2003), captures this dependency. This model is calibrated to the empirical distribution of recovery rates upon default of senior unsecured bonds, as suggested by Wilkens & Predescu (2015). The historically observed mean and standard deviation of the recovery rates are obtained from Altman & Kalotay (2014), in which the estimates are calculated from a set of 2,828 corporate bonds over the period 1988 to 2011.

$$Y_q^k = \gamma_q + \sigma_q\left(\sqrt{\rho_Y}\, Z_G + \sqrt{1 - \rho_Y}\, \epsilon_q^k\right) \qquad (4.8)$$
$$RR_q^k = e^{Y_q^k}, \qquad q \in [AAA, AA, A, BBB, BB, B, CCC, CC, C] \qquad (4.9)$$
$$Z_G,\ \epsilon_q^k \sim N(0, 1)$$

The variables $\gamma_q$ and $\sigma_q$ indicate the historical mean and the volatility of the log-recovery rates per credit rating, respectively, conditional on default. The idiosyncratic component of $Y_q^k$ for each issuer $k$ with credit rating $q$ is given by $\epsilon_q^k$. The proposed model takes into account the correlation between the default process and the recovery rates, based on the global factor $Z_G$ generated in section 4.1 and correlated to the systematic region and sector factors. The parameter $\rho_Y$ represents the correlation between recovery rates. Wilkens & Predescu (2015) suggest fixing the correlation parameter based on previous work by Bade et al. (2011). The latter authors use maximum likelihood estimation to jointly fit the asset and recovery rate processes using 188,000 annual observations on non-financial bonds from 1982 until 2009, based on a 1% default rate. Bade et al. (2011) find a correlation estimate of approximately 4.11%, suggesting that log-recovery rates upon default are largely determined by idiosyncratic factors. The approach provided by Wilkens & Predescu (2015), however, does not ensure that the recovery rate remains within the interval [0, 1]. In order to overcome this limitation, a logit-normal model is used to define the recovery rate.

$$RR_q^k = \frac{e^{Y_q^k}}{1 + e^{Y_q^k}} \qquad (4.10)$$

The use of a logit-normal model requires re-calibration of the correlation parameter presented by Bade et al. (2011), as their estimate is based on a log-normal distribution. The calibration is conducted by matching the first two moments of the empirically observed recovery rates upon default of an issuer with the simulated parameters conditional on default, as suggested by Wilkens & Predescu (2015). In this case, we condition on default by defining an average "global issuer" $X_G$, which follows the same model as presented in section 4.2.

$$X_G = \sqrt{\frac{R^2}{\Psi}}\left(\alpha \sigma_R Z_R + \beta \sigma_S Z_S\right) + \sqrt{1 - R^2}\, \xi_G \qquad (4.11)$$
$$\Psi = \alpha^2 \sigma_R^2 + \beta^2 \sigma_S^2 + 2\alpha\beta\sigma_R\sigma_S\rho_{R,S} \qquad (4.12)$$
$$X_G \sim N(0, 1)$$

$X_G$ represents the simulated asset return of the global issuer. The parameters used to define $X_G$ are the median values for the set 'Europe' and 'Government'. The simulated $Z_R$ and $Z_S$ therefore represent this specific pair of region and sector factors. The method of calibration is shown in (4.13). The process minimizes the squared difference between the conditional expectation of the recovery rate and the historical average mean provided by Altman & Kalotay (2014), along with the squared difference between the conditional standard deviation and the historical standard deviation from the same source.

$$\min \sum_q \left[\left(E[RR_q \mid X_G < \Phi^{-1}(PD_q)] - \hat{\mu}_q\right)^2 + \left(\mathrm{std}[RR_q \mid X_G < \Phi^{-1}(PD_q)] - \hat{\sigma}_q\right)^2\right] \qquad (4.13)$$
$$q \in [AAA, AA, A, BBB, BB, B, CCC, CC, C]$$

The values given by Altman & Kalotay (2014) are reported in chapter 5, Table 5.5, along with the corresponding probabilities of default used within the calibration process in Table 5.4. PDs for calibration purposes must be based on historical observation of default data, including default events and price declines, and should cover a minimum of 5 years (Basel Committee on Banking Supervision, 2016). In case an issuer defaults, the calibrated parameters are used as inputs within the recovery rate model and, subsequently, the DRC algorithm.


The FRTB requirements outlined by the Basel Committee (2016) indicate that the LGD parameters, and thereby the recovery rates, must be based on the seniority of the product within the trading portfolio. In order to address this requirement, the recovery rates are re-scaled according to seniority. The re-scaling factor is obtained by dividing the average recovery rate for senior secured products and non-senior products by the overall average rate provided by Altman & Kalotay (2014). The recovery rates are then multiplied by the re-scaling factors according to the issuer's seniority and credit rating. The re-scaling parameters are provided in chapter 5, section 5.2. The average recovery rates for senior secured and non-senior products are determined according to the IRB methodology.

For each defaulted path, an i.i.d. standard normal $\epsilon_q^k$ is simulated, representing the idiosyncratic component of the stochastic recovery rate model. The estimated parameters define condition (4.8), and thereby the recovery rate used within the DRC algorithm presented in section 4.2.
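The logit-normal recovery draw described above can be sketched as follows; $\gamma$, $\sigma$ and $\rho_Y$ are illustrative placeholders for the per-rating parameters obtained from the moment-matching calibration:

```python
import numpy as np

rng = np.random.default_rng(5)

def logit_normal_recovery(gamma, sigma, rho_y, z_g, eps):
    """Stochastic recovery: a latent Gaussian Y driven by the global
    factor Z_G plus an idiosyncratic shock, mapped through the logistic
    function so that RR stays inside (0, 1)."""
    y = gamma + sigma * (np.sqrt(rho_y) * z_g + np.sqrt(1 - rho_y) * eps)
    return np.exp(y) / (1.0 + np.exp(y))

# Illustrative parameters; rho_y of about 4% follows Bade et al. (2011)
n = 100_000
z_g = rng.standard_normal(n)
rr = logit_normal_recovery(gamma=-0.4, sigma=0.8, rho_y=0.04,
                           z_g=z_g, eps=rng.standard_normal(n))
print(round(rr.mean(), 2), bool(((rr > 0) & (rr < 1)).all()))
```

The logistic mapping guarantees the [0, 1] bound that the plain log-normal specification lacks, while the small systematic loading keeps the recovery mostly idiosyncratic.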

4.3 Student-t Copula

4.3.1 Factor Correlations

The DRC methodology provided in sections 4.1-4.2 concerns the use of a Gaussian copula factor model. FRTB regulation requires banks to benchmark different internal modeling approaches to assess the overall accuracy of their Default Risk Charge models (Basel Committee on Banking Supervision, 2016). Therefore, this section describes the methodology used to implement a Student-t copula within the two-factor model presented in section 4.2. The parameters estimated within the Gaussian model $[\alpha_k, \beta_k, \sigma_{R_{i_k}}, \sigma_{S_{i_k}}, \rho_{R_{i_k},S_{i_k}}, R_k^2, \Psi]$ are applied throughout the Student-t copula implementation.

Daul et al. (2003) outline the use of the Student-t copula in credit risk applications. Firstly, standard normal vectors for the regional, sector and one global factor are generated and correlated as in section 4.1. These vectors can be considered a matrix $Y$ of size $N_{sim} \times (m + n + 1)$, of which the last column contains the global factor $Z_G$. As described by Daul et al. (2003), let $Y \sim N(0, \Sigma)$, whilst $S^2 \sim \chi^2(\nu)/\nu$ follows a scaled Chi-squared distribution with $\nu$ degrees of freedom, independent from $Y$. Matrix $Y$ is thus transformed to matrix $Y^*$, given by

$$Y^* = \begin{pmatrix} Z_{1,1}/S_1 & Z_{1,2}/S_1 & \cdots & Z_{1,G}/S_1 \\ Z_{2,1}/S_2 & Z_{2,2}/S_2 & \cdots & Z_{2,G}/S_2 \\ \vdots & \vdots & \ddots & \vdots \\ Z_{N_{sim},1}/S_{N_{sim}} & Z_{N_{sim},2}/S_{N_{sim}} & \cdots & Z_{N_{sim},G}/S_{N_{sim}} \end{pmatrix}$$

which now follows a Student-t distribution with $\nu$ degrees of freedom. As described by Christoffersen (2012), in order to obtain a standardized Student-t random variable, matrix $Y^*$ is scaled by $\sqrt{(\nu - 2)/\nu}$, resulting in an expectation of zero and a variance of one.
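The construction of correlated, standardized Student-t factors can be sketched as follows, with an illustrative 2×2 correlation matrix and $\nu = 5$:

```python
import numpy as np

rng = np.random.default_rng(9)

def student_t_factors(n_sims, nu, corr_chol, rng):
    """Correlated Student-t systematic factors: correlated normals are
    divided by a common sqrt(chi2(nu)/nu) draw per simulation, then
    scaled by sqrt((nu - 2)/nu) to unit variance."""
    n_factors = corr_chol.shape[0]
    Z = rng.standard_normal((n_sims, n_factors)) @ corr_chol.T
    S = np.sqrt(rng.chisquare(nu, size=(n_sims, 1)) / nu)
    return (Z / S) * np.sqrt((nu - 2) / nu)

M = np.array([[1.0, 0.5], [0.5, 1.0]])     # illustrative correlation matrix
Y = student_t_factors(1_000_000, nu=5.0, corr_chol=np.linalg.cholesky(M),
                      rng=rng)
print(round(Y.var(axis=0).mean(), 2))      # unit variance after standardization
```

Because both columns share the same mixing variable $S$ per row, the linear correlation of the target matrix is preserved while the tails become heavier than Gaussian.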

The idiosyncratic component $\xi_k$ is again generated independently from the systematic factors, as per the definition of the general factor model proposed by J. C. Hull (2015). However, under the t-copula implementation, $\xi_k$ is a vector of size $N_{sim}$ following a Student-t distribution, obtained using a similar transformation as for matrix $Y$: the generated elements within the vector are divided by $S$ and scaled by $\sqrt{(\nu - 2)/\nu}$, independently from the systematic factors. The applied methodology follows the approach provided by Hull (2015), in which the author indicates that other factor copula models are obtained by using other zero-mean, unit-variance distributions for the systematic and idiosyncratic factors. This approach is implemented in Hull & White (2004), in which the authors assign Student-t distributions with 5 degrees of freedom to both systematic and idiosyncratic factors, subsequently scaling them to unit variance. The authors (2004) note that this approach fits market prices surprisingly well when considering credit derivative market data such as iTraxx and CDX quotes for CDOs and CDS spreads.

The aim of the Student-t implementation is to keep the marginal simulated asset distribution the same as under the benchmark Gaussian model. The study by Oh & Patton (2017) evaluates several copula assumptions using factor models. The authors set the marginal distributions to N(0, 1) across copula estimations, separating the distribution of the asset return from the copula applied to the latent variables.

The simulated factors in matrix $Y^*$ now follow a Student-t distribution. These factors are then used as inputs in the DRC factor model prescribed by condition (4.4). Originally, under the Gaussian benchmark model, this factor model produced the asset returns $X_1, \cdots, X_k$ for each issuer $k$. However, as a result of using the Student-t distributed factors, the asset returns prescribed by condition (4.4) now follow a Student-t distribution, represented by $X_1^*, \cdots, X_k^*$.

Therefore, in order to obtain the eventual default time and subsequently the financial impact as described by formulas (4.6) and (4.7), the simulated asset returns $X_1^*, \cdots, X_k^*$ are transformed to standard normal as follows. Firstly, as suggested by both Daul et al. (2003) and Demarta & McNeil (2005), we define the vector $U$ from the simulated asset returns $X_1^*, \cdots, X_k^*$ of each issuer.

$$U = (t_\nu(X_1^*), \cdots, t_\nu(X_k^*)) \qquad (4.14)$$
$$U = (u_1, \cdots, u_k) \qquad (4.15)$$
$$X = \left(\Phi^{-1}(u_1), \cdots, \Phi^{-1}(u_k)\right) \qquad (4.16)$$

The simulated asset returns for each issuer are Student-t distributed, as mentioned previously. Applying the cumulative Student-t distribution function to each issuer's asset returns, as given by vector $U$, creates uniformly distributed variables; the cumulative distribution function of a standard univariate t-distribution is denoted by $t_\nu$. Subsequently, this vector is transformed to standard normal in order to obtain the same distribution for each issuer's asset returns as in the benchmark Gaussian model: vector $X$ applies the inverse of the standard normal cumulative distribution function, denoted by $\Phi^{-1}$, to each of the elements $u_1, \cdots, u_k$. The transformations described in (4.14)-(4.16) are repeated for every simulation, $N_{sim}$ in total. This procedure ensures the use of a Student-t copula whilst keeping the marginal simulated asset returns standard normal.
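The marginal transformation (4.14)-(4.16) can be illustrated directly, assuming scipy is available; for simplicity the draws below are standard (unscaled) Student-t variates:

```python
import numpy as np
from scipy.stats import norm, t as student_t

nu = 5.0
# Simulated Student-t asset returns (standard t, for illustration only)
x_star = student_t.rvs(df=nu, size=200_000, random_state=13)

u = student_t.cdf(x_star, df=nu)   # eq. (4.14)-(4.15): t CDF -> uniform
x = norm.ppf(u)                    # eq. (4.16): uniform -> standard normal

# The marginals are now standard normal, while the rank ordering of the
# original t draws (and hence the copula) is preserved
print(round(x.mean(), 2), round(x.std(), 2))
```

Because both transformations are strictly increasing, every simulation keeps its relative position, which is exactly what "changing the marginal but not the copula" means.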

4.3.2 Stochastic Recovery Rate: Calibration

The implementation of the Student-t copula affects both the calibration and the simulation of the stochastic recovery rate. Wilkens & Predescu (2018) analyze the impact of the use of a Student-t copula within the DRC framework whilst keeping the recovery rate constant. The implementation of the Student-t copula will thus be tested using both a stochastic and a deterministic recovery rate. The calibration is based on the default of one average "global issuer", which follows the model specified by (4.11). The asset value for the global issuer $X_G$ is simulated and transformed similarly to the issuer asset returns $X_k^*$.

Again, the following steps apply. Firstly, standard normal vectors for the regional, sector and one global factor are re-generated as in section 4.1. These vectors can be considered a matrix $Y$ of size $N_{sim} \times (m + n + 1)$, where the last column contains the global factor $Z_G$. Secondly, matrix $Y$ is transformed to matrix $Y^*$, resulting in Student-t distributed systematic factors. The idiosyncratic factor $\xi_G$ is similarly simulated as standard normal and subsequently transformed to a Student-t distributed factor. The latter process leads the simulated asset value of the global issuer $X_G$ to follow a Student-t distribution. $X_G$ is defined as

$$X_G = \sqrt{\frac{R^2}{\Psi}}\left(\alpha \sigma_R Z_R + \beta \sigma_S Z_S\right) + \sqrt{1 - R^2}\, \xi_G \qquad (4.17)$$
$$\Psi = \alpha^2 \sigma_R^2 + \beta^2 \sigma_S^2 + 2\alpha\beta\sigma_R\sigma_S\rho_{R,S} \qquad (4.18)$$

The asset value $X_G$ is used within the calibration of the stochastic recovery rate. However, seeing as the recovery rate follows a logit-normal distribution, the eventual input parameters must follow a Gaussian distribution. The recovery rate is defined as:

$$Y_q^k = \gamma_q + \sigma_q\left(\sqrt{\rho_Y}\, Z_G + \sqrt{1 - \rho_Y}\, \epsilon_q^k\right) \qquad (4.19)$$
$$RR_q^k = \frac{e^{Y_q^k}}{1 + e^{Y_q^k}} \qquad (4.20)$$

The input to (4.19) includes the global factor $Z_G$, corresponding to the last column of matrix $Y^*$. As a result, the factor $Z_G$ follows a Student-t distribution. However, in order to obtain the recovery rate, the global factor must be transformed back to standard normal using the same steps as in (4.14)-(4.16).

The calibration of the model can again be described as the minimization problem outlined in (4.13), restated here as (4.21). As before, the process minimizes the squared difference between the conditional expectation of the recovery rate and the historical average mean provided by Altman & Kalotay (2014), along with the squared difference between the conditional standard deviation and the historical parameter from the same source.

$$\min \sum_q \left[\left(E[RR_q \mid X_G < \Phi^{-1}(PD_q)] - \hat{\mu}_q\right)^2 + \left(\mathrm{std}[RR_q \mid X_G < \Phi^{-1}(PD_q)] - \hat{\sigma}_q\right)^2\right] \qquad (4.21)$$

Seeing as the simulated asset value of the global issuer follows a Student-t distribution, $X_G$ must be transformed to standard normal as in (4.14)-(4.16). This approach allows $X_G$ to have a correlation structure based on a Student-t copula, whilst using a stochastic recovery rate based on a logit-normal distribution. The calibration procedure then follows the same process as under the Gaussian model, leading to the estimated parameters $\gamma_q$, $\sigma_q$ and $\rho_Y$.

4.3.3 DRC Simulation

The following paragraph describes the DRC simulation steps within the Student-t copula implementation with respect to the recovery rate and the subsequent financial impact. The recovery rate is defined conditional on default: if the simulated return X_k falls below a certain threshold, the issuer is in default. The recovery rate is then either estimated using the stochastic process, or kept constant using the recovery rates determined for each issuer, listed in Table 9.3 in the appendix. In case the stochastic recovery rate is used, the global factor Z_G, given by the last column of matrix Y*, is transformed as in (4.14-4.16) to have a standard normal distribution, in order to be used as an input within the logit-normal model (4.19-4.20). Subsequently, upon default, a standard normal ε_q^k is simulated, representing the idiosyncratic component of the stochastic recovery rate model. The parameters estimated through calibration are then used to define the recovery rate. For each defaulted path, the financial impact is determined as in (4.7) and aggregated across issuers; the 99.9th percentile of the resulting loss vector is the DRC.

Y_q^k = γ_q + σ_q (√ρ_Y · Z_G + √(1 − ρ_Y) · ε_q^k)        (4.22)

RR_q^k = e^{Y_q^k} / (1 + e^{Y_q^k})        (4.23)

FI_s^k = EAD_{s,t}^k − RR_s^k · Notional_{s,t}^k        (4.24)


5 | Data

5.1 Factor Correlations

The data used to estimate the factor correlations are extracted from Bloomberg and consist of equity spot returns for each sector and region, along with an overall global index. An overview of each region and sector, together with the corresponding Bloomberg ticker, is provided below. The global factor is mapped to the MSCI World Index, given by the MXWO Index ticker.

Regional Factors

Factor                 Index                   Bloomberg ticker
Europe                 MSCI EUROPE             MXEU Index
Asia                   MSCI AC ASIA            MXAS Index
North America          MSCI NORTH AMERICA      MXNA Index
Latin America          MSCI EM LATIN AMERICA   MXLA Index
Africa & Middle East   MSCI FM AFRICA          MXFMAF Index
Pacific                MSCI PACIFIC            MXPC Index

Table 5.1: Regional factor mapping

Sector Factors

Factor              Index                           Bloomberg ticker
Materials           MSCI WRLD/MATERIALS             MXWO0MT Index
Consumer products   MSCI WRLD/CONSUMER              MXWO0CD Index
Services            MSCI WRLD/CONSUMER SVC          MXWO0HR Index
Financials          MSCI WRLD/FINANCIALS            MXWO0FN Index
Industrials         MSCI WRLD/INDUSTRIALS           MXWO0IN Index
Government          Itraxx Sovx Glob. Liquid Inv.   GLIG CDSI Corp

Table 5.2: Sector factor mapping

The FRTB guidelines require the factor correlations to be calibrated over a minimum 10-year window that includes a period of stress (Basel Committee on Banking Supervision, 2016). This period of stress corresponds to a moment in time in which correlations are at a peak; in this case, the most recent period of stress is the 2008 financial crisis. The available historical equity data range from 01-01-2006 to 18-05-2016. Due to the unavailability of the full series for the government factor index, the Itraxx Sovx Global Liquid Investment Grade data range from 02-01-2009 to 01-06-2016. Even though FRTB (2016) specifies correlations to be estimated over a one-year period, the correlations here are based on monthly returns, as suggested by Wilkens & Predescu (2015), who argue that correlations measured over monthly and annual intervals are identical.
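The correlation estimation reduces to: resample the daily index levels to month-end, take log-returns, and compute the sample correlation matrix. The sketch below uses synthetic random-walk prices and three invented column names in place of the Bloomberg series:

```python
import numpy as np
import pandas as pd

# Synthetic daily price series standing in for the Bloomberg index levels
rng = np.random.default_rng(3)
dates = pd.bdate_range("2006-01-01", "2016-05-18")
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0, 0.01, (len(dates), 3)), axis=0)),
    index=dates, columns=["MXWO", "MXEU", "MXAS"],
)

# Last observation of each month, then monthly log-returns
monthly = prices.groupby(prices.index.to_period("M")).last()
log_ret = np.log(monthly / monthly.shift(1)).dropna()

# Sample correlation matrix across the factor indices
corr = log_ret.corr()
```

With the actual index histories in `prices`, `corr` reproduces the structure of Table 5.3.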


(Columns follow the same order as the rows.)

Europe               1     0.79  0.81  0.57  0.29  0.78  0.74  0.84  0.71  0.83  0.84  0.72  0.87
Asia                 0.79  1     0.84  0.80  0.41  0.96  0.85  0.84  0.76  0.88  0.86  0.76  0.89
North America        0.81  0.84  1     0.77  0.33  0.82  0.86  0.93  0.89  0.92  0.96  0.78  0.98
Latin America        0.57  0.80  0.77  1     0.42  0.75  0.86  0.71  0.68  0.76  0.77  0.72  0.80
Africa & Middle East 0.29  0.41  0.33  0.42  1     0.35  0.39  0.29  0.29  0.38  0.36  0.32  0.36
Pacific              0.78  0.96  0.82  0.75  0.35  1     0.81  0.82  0.76  0.88  0.84  0.72  0.89
Materials            0.74  0.85  0.86  0.86  0.39  0.80  1     0.82  0.76  0.83  0.88  0.74  0.89
Consumer products    0.84  0.84  0.93  0.71  0.29  0.82  0.82  1     0.89  0.89  0.93  0.77  0.94
Services             0.71  0.76  0.89  0.68  0.29  0.76  0.76  0.89  1     0.80  0.87  0.64  0.88
Financials           0.83  0.88  0.92  0.76  0.38  0.88  0.83  0.89  0.80  1     0.94  0.83  0.96
Industrials          0.84  0.86  0.96  0.77  0.36  0.84  0.88  0.93  0.87  0.94  1     0.79  0.97
Government           0.72  0.76  0.78  0.72  0.32  0.72  0.74  0.77  0.64  0.83  0.79  1     0.82
Global               0.86  0.89  0.98  0.80  0.36  0.89  0.89  0.94  0.88  0.96  0.97  0.82  1

Table 5.3: Factor correlations

This table presents the factor-to-factor correlations based on region and sector. The correlations are calculated from monthly log-returns of the equity indices of each corresponding factor. The table includes 6 different regional factors and 6 sector factors. Each issuer is mapped to a certain region and sector, representing the systematic factors Z_R and Z_S used in the DRC algorithm, more specifically condition (4.4). The last column and row provide the correlation estimates for the global factor against the remaining factors. This global factor, Z_G, is used as input within the stochastic recovery rate model, outlined in sections 4.2.1 and 4.3.2 for the Gaussian and Student-t implementations, respectively.


5.2 Stochastic Recovery Rate

The calibration of the stochastic recovery rate requires data on historically observed recovery rates and the probability of default for every credit rating. The mean and standard deviation of the observed recovery rates are taken from Altman & Kalotay (2014). The recovery rates obtained through calibration are then re-scaled based on seniority for each credit rating. The re-scaling factors are presented in Table 5.6 and are calculated by dividing the µ parameter for Senior-Secured and Non-Senior in Table 5.5 by the overall average recovery rate of 0.449. The mapping of market seniority to DRC seniority buckets is provided in Table 9.1 in the appendix.

Calibration inputs

Rating   PD
AAA      0.0003
AA       0.0003
A        0.0003
BBB      0.0003
BB       0.009271
B        0.050624
CCC      0.238914
CC       0.238914
C        0.238914

Table 5.4: Probability of default for each credit rating for calibration purposes

Parameters               µ        σ
A. External parameters
Overall 1988-2011        0.4490   0.3790
B. Internal parameters
Senior Secured           0.5365   0.2513
Non-Senior               0.2824   0.2196

Table 5.5: Mean and standard deviation of recovery rates
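The re-scaling factors are a direct division of the seniority-bucket means in Table 5.5 by the overall average recovery rate:

```python
# Re-scaling factors: seniority-bucket mean recovery (Table 5.5) divided
# by the overall average recovery rate of 0.449
overall_mu = 0.4490
senior_secured_mu = 0.5365
non_senior_mu = 0.2824

scale_senior = senior_secured_mu / overall_mu      # ~ 1.195
scale_non_senior = non_senior_mu / overall_mu      # ~ 0.629
```

Senior-Secured positions are thus scaled up by roughly 19.5%, while Non-Senior positions are scaled down to roughly 63% of the calibrated recovery.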

The probabilities of default are grouped for certain credit ratings, such as AAA to BBB and CCC to C, as shown in Table 5.4. The probabilities of default for AAA/AA/A/BBB listed in Table 5.4 are floored values according to the FRTB guidelines (Basel Committee on Banking Supervision, 2016). The remaining probabilities are determined following the Internal Ratings-Based approach, along with the
