
Deriving the SCR for Counterparty Default Risk: An Extended Pareto Case

Department of Economics, Econometrics & Finance

Vasileios Manthatis (s2559455)

Supervisor: Dr C. (Kees) Praagman

June 6, 2018

Abstract

Solvency II has been in force since the beginning of 2016. Under this framework, all EU insurance companies need to comply with the qualitative and quantitative requirements of the directive. In particular, insurance companies need to derive the Solvency Capital Requirement (SCR) based on the Solvency II standard formula. This master thesis proposes an alternative method for the calculation of the SCR for Counterparty Default Risk, whose purpose is to adequately approximate a reinsurer's portfolio loss distribution.

Acknowledgements

First and foremost, I would like to thank my supervisor Dr C. Praagman for his guidance and understanding throughout the whole writing of this thesis. I would also like to thank my family and my friends for supporting me all these years and I know that they will always keep doing so.

Declaration


Contents

1 Introduction
2 Academic Review
   2.1 CRM in Insurance
3 Methodology
   3.1 Solvency II Formula
   3.2 Alternative Method
       3.2.1 Stochastic Parameters
   3.3 Shocks
4 Data
5 Results
   5.1 Solvency SCR
   5.2 The EPD Case
   5.3 Stochastic Parameters
   5.4 Second Reinsurance Year
6 Discussion
A Appendix
   A.1 Life Insurance
   A.2 Stochastic Premiums


Chapter 1

Introduction

Credit risk is an extremely important issue for the risk management department of an insurance company. Part of credit risk is Counterparty Default Risk (CDR), which reflects the change in the value of assets and liabilities caused by unexpected default or deterioration in the credit standing of independent counterparties and debtors [18]. The Solvency II Directive, imposed by the European Union and in full force since 2016, splits CDR into two types of exposures (1 and 2). The first type covers reinsurance arrangements, securitisations, derivatives (excluding credit derivatives, which are already treated under the spread risk module) and deposits with ceding and credit institutions. The second type applies to receivables from intermediaries and policyholder debtors [44]. Insurers as well as reinsurers use many risk metrics in order to assess this risk. These include transition matrices from rating agencies, default rates, probabilities of default (PDs) and other risk metrics such as Loss Given Default (LGD), Recovery Rates (RR) and Exposure at Default (EAD). According to the Solvency II Glossary, a PD is defined as the likelihood that a counterparty will not repay contractual obligations according to the agreement. The LGD of an exposure is conceptually defined as the loss of basic own funds which the insurer would incur if the counterparty defaulted [11]. Basic own funds comprise the excess of assets over liabilities, which is broadly represented by ordinary share capital, the equivalent for mutual-type undertakings, and reserves together with subordinated liabilities [17]. At this point it is important to clarify the use of the LGD factor. In general, the LGD is a percentage and a basic element of credit risk management. In Solvency II, however, the LGD is defined in a different way (as in formula 3.7). In this thesis the LGD is used in the way Solvency II prescribes; the only case in which the LGD is used as a percentage is in Section 3.2, in the replicating stochastic portfolio. In order to avoid any confusion, when the LGD is used as Solvency II prescribes, it carries the superscript (Sol).


The purpose of this master thesis is to study and propose a formal risk management approach for the CDR from a reinsurer's perspective (type 1 exposures) under the Solvency II regime. Specifically, the reinsurer has to deal with the CDR of its counterparties. From the above it is clear that an in-depth risk management analysis should be provided, with the proper use of risk metrics (e.g. Value at Risk, loss distribution approximation, Conditional Tail Value at Risk, Hill estimates). One of the most vital parts of every risk analysis is the calculation of the Solvency Capital Requirement (SCR).¹

This thesis will also propose an alternative method for the SCR calculation which could be useful for the reinsurer and might serve as a benchmark internal model. The reinsurer's SCR for CDR has not been thoroughly examined, at least not on an academic level, and this master thesis aims to add more insight into CDR management. In this thesis, the research provides a complex CDR analysis in the life reinsurance sector for which the academic literature is limited, while accounting for the shortcomings of the underlying loss distribution assumptions in Solvency II.

In the Solvency II calibration paper for the CDR, there is a fixed formula for the SCR in which the variance of the loss distribution and the quantile factor are approximated [12]. According to the Directive, the 99.5% quantile of the loss distribution is estimated by multiplying the standard deviation by a fixed quantile factor q. In addition, the choice of that factor is based on the assumption of a skewed lognormal loss distribution [12]. However, this assumption may not always hold, since the underlying loss distribution could be completely different from the one assumed by Solvency II. False assumptions regarding the loss distribution can expose insurers/reinsurers to severe losses and a misspecified SCR, resulting in adverse effects on other business lines within the company.

The method proposed in this master thesis aims to tackle this problem by appropriately approximating the loss distribution of the life reinsurer's portfolio. If the distribution is approximated correctly, the reinsurer is in a position to estimate the 99.5% quantile of this distribution (the VaR at 99.5%) directly, instead of relying on the assumptions of the Solvency II formula. The alternative method also has the interesting property of focusing on the tails of the loss distribution, in which heavy losses occur. This implies that the proper risk metric to use is the Conditional Tail Expectation (CTE) in an EVT setting. Why the CTE? The majority of banks and insurance companies focus their risk analysis on the VaR. In recent years, however, this approach has been criticized by market participants and academics, and the CTE is increasingly considered a better option for the calibration of capital requirements. For instance, the Basel Accord for banks issued new instructions on the use of Expected Shortfall (ES, a special case of the CTE) instead of the VaR for the market risk module [37]. Osmundsen (2017) [37] studied the use of ES for credit risk and found that it captures tail risk better, is subadditive, and is a "better" risk metric when the loss distribution is heavy-tailed. Critical VaR reviews came from Bernard et al. (2015) [7], who showed that VaR assessments for credit portfolios at high confidence levels (the Solvency II case) remain subject to model uncertainty. The Society of Actuaries [43] has also proposed that the CTE is a more appropriate risk metric than the VaR, suggesting that the CTE covers insurance risks with "fatter" tails. In addition, they consider that the CTE provides a closer approximation of an insurer's risk profile and a more accurate reflection of extreme events. In the same vein, the International Actuarial Association [27] favours the use of the CTE at a 99% confidence level when calculating capital requirements.

¹ The SCR is the amount of funds that insurance and reinsurance undertakings are required to hold.

The alternative method also addresses another problem. Solvency II proposes fixed values of PDs and LGDs for companies that are rated by credit rating firms (e.g. Moody's), and in some cases even for unrated ones. However, the Directive does not provide any evidence regarding SCR levels in the industry. The reinsurer can tackle this problem by simulating a replicating credit portfolio and deriving its SCR.

Replicating Credit Portfolio. Insurers and reinsurers sometimes want to have an idea of how their SCR compares with the industry's SCR. Since they do not have any credible information about their competitors' SCRs, the only way of getting an indication is to simulate a large credit portfolio with similar characteristics and derive its SCR. The simulated portfolio has to be constructed carefully, since it has to have both the characteristics of the reinsurer (number of countries/counterparties, more weight in the life insurance sector) and the attributes of the industry (higher PDs and LGDs, more business lines).

The replicating credit portfolio can have another use. The reinsurance company under consideration has good credit quality, meaning that each of its counterparties has a small PD. However, the reinsurer's risk department might want to examine the impact of the simulated PDs from the replicating portfolio on the company's SCR.

The analysis' objective is to shed light on the following research questions:

1. What is the required capital buffer under the two methods (Solvency II/alternative) for the first and the second year of the reinsurance agreement?

2. Which can be considered the most reliable method for estimating the required capital for the reinsurance company?²

3. Do the fixed parameters of the alternative method significantly alter the results compared to the use of stochastic ones?

4. What would change in the first research question if the VaR is used instead of the CTE?

5. How large is the sector's SCR and where does the reinsurer's SCR stand compared to it (is it higher or lower)?

6. After estimating the SCR with both methods, what are the implications regarding the choice of this form of reinsurance?

² Note that this research question applies to the reliability of each method based on this specific data.


Chapter 2

Academic Review

This chapter covers an academic literature review of credit risk management (which includes CDR) and of credit risk models in insurance/reinsurance and finance. Furthermore, these concepts are reviewed in a Solvency II context.

2.1 CRM in Insurance

Credit risk management (CRM) includes models that are used to determine the loss distribution of a portfolio over a predefined period of time [34]. The next step usually covers the inclusion of risk metrics in order to assess the risk and properly allocate capital across different business lines. One of the most prominent CRM models is the one proposed by Merton (1974) [35], which included assets following a stochastic process and a single debt obligation. An extension of Merton's model is the KMV (Kealhofer, McQuown & Vasicek) model. CDR was examined thoroughly by Hull & White (1995) [26], who proposed how the CDR can be managed by the use of derivative securities.¹ Another important paper is that of Jarrow (2001) [29], in which he aimed to "detach" default and recovery risk² by examining debt and equity prices. His core assumption was that in an event of default, equity value (that is, the market value of assets minus the market value of liabilities) equals zero. The first standard industry model implemented was CreditMetrics, which was developed by JPMorgan and the RiskMetrics Group [28]. This model is a primary example of CRM based on credit migration. In this approach, every firm is assigned a specific credit rating highlighting its credit quality. The credit migration approach presumes that PDs are directly linked to the credit ratings [34]. Another important industry model is that proposed by Credit Suisse Financial Products (1997) [34], which has the structure of a Poisson mixture model and also provides a variety of mixture distributions.

Another important aspect of CRM is credit-scoring models, which are implemented in order to forecast a company's default. There are three types of these models: linear discriminant analysis, regression models and heuristic inductive models [44]. The first type focuses on the identification of variables that separate the "healthy" companies from the problematic ones [22]. Metrics used for this discrimination include Wilks' lambda and Altman's Z-score, from which a PD can be derived [2]. Regression models (logit, probit) use variables that may lead to default; these variables are given specific weights according to their significance and role when a company defaults. The mechanics behind their implementation can be summarized in four steps: sample selection, independent variable selection, estimation of coefficients and estimation of the PD. Finally, neural networks use an inductive process: starting from the data sample, an empirical regularity is found and is used in an uncritical way to forecast future defaults by other companies [44].

¹ Derivatives were originally used in financial institutions. Insurance companies, especially in later years, tend to possess a large amount of derivatives in their portfolios.

Insurance companies, especially in recent years, have developed asset management and investment departments. An insurance company can have assets such as bonds or investments in stocks on its balance sheet. This fact has led insurance companies to "borrow" capital market models from the banking industry. These models use stocks and bonds as inputs in order to assess the likelihood of default by the issuing company [44]. Part of the capital market models is the approach based on corporate bond spreads.³ Insurance companies, because of embedded options in their policyholders' contracts, tend to invest their capital in long-term assets such as corporate or government bonds. Eckert et al. (2016) [19] conducted an analysis in which they found that the credit risk associated with bonds has a strong impact on the fair valuation and risk measurement of life insurance contracts.

The impact of CDR in the reinsurance industry is studied by Bernard & Ludkovski (2012) [6], who examined a default risk model with partial recovery where the probability of the reinsurer's default depends on the loss incurred by the insurer. They found that the reinsurance buyer wishes to over-insure above a deductible level. Moreover, Burren (2013) [9] proposed suggestions to the regulatory authorities in the case of exponential claims and analyzed welfare-maximizing capital requirements for insurance companies. He obtained closed-form solutions for exponential claims while proposing a tractable model for insurance demand in continuous time. A useful book covering credit risk management concepts in finance and insurance, portfolio management, credit models and rating agencies is that of Altman et al. (1988) [13]. A newer effort in covering credit risk management comes from Duffie & Singleton (2003) [16]. They cover default by examining historical patterns and statistical models, and they provide an in-depth analysis of credit swaps, optional credit pricing, collateralized debt obligations (CDOs) and correlated defaults. An interesting paper based on credit risk optimization using the Conditional VaR (Tail-VaR) is that of Andersson et al. (2001) [3]. They approach the credit risk distribution by Monte Carlo simulations, and the optimization problem, that is, minimizing the Conditional VaR, is solved effectively by linear programming. Their algorithm is very efficient, as it can handle hundreds of instruments and thousands of scenarios.


Chapter 3

Methodology

Before entering into further detail regarding the methods that will be applied, let us briefly explain the hypothesis behind the methodology. The life reinsurer has to deal with the CDR of 400 life insurers, meaning that the firm should be covered in the event of a default.¹ In order to have sufficient protection, the reinsurance company has to calculate its SCR. For this master thesis, because real portfolio data from life insurance companies are lacking, 400 fictitious life portfolios are constructed in a realistic way and the individual premiums of their policyholders are calculated. The main assumption is that the reinsurer has a portfolio of good credit quality. The reinsurer enters into a quota share treaty with its counterparties and adds an excess-of-loss option. This means that the ceding companies have to pay a certain percentage of their received premiums in advance to the reinsurer, and the reinsurer will cover their claims (up to a specific loss limit) if they default. The starting point of the methodology is to calculate the premiums of each insurance policy and then the Net Asset Value (NAV)², which will be used in the upcoming analysis.

3.1 Solvency II Formula

First of all, the Solvency II standard formula for the calculation of the SCR will be presented. An essential part of the method is the use of the PDs and the LGDs of the relevant counterparties. Second, the formula depends on the variance (V) of the portfolio's underlying loss distribution³ and on a quantile factor q. In general, the SCR for type 1 exposures over a number of counterparties is calculated as [11]

$$ \mathrm{SCR}_{\mathrm{def},1} = \min\left(\sum_{i=1}^{n} \mathrm{LGD}_i^{\mathrm{Sol}},\; q\sqrt{V}\right) \tag{3.1} $$

¹ In case a life insurance company defaults, the reinsurer is responsible for covering/reimbursing the company's policyholders.
² That is, Assets minus Liabilities.
³ Obtain an appropriate probability model that adequately describes the insurance losses and how to

However, in the above formula there is a clause stating that the SCR equals $3\sqrt{V}$ if $\sqrt{V} \leq 5\%\sum_i \mathrm{LGD}_i^{\mathrm{Sol}}$. If this condition does not hold, then a higher quantile factor should be chosen; specifically, the Directive proposes a quantile factor equal to 5. The sum is taken over all n independent counterparties with type 1 exposures. The reason behind the assumption of independent counterparties is the reduction of the calculations' complexity, and the fact that this is also the base-case scenario in Solvency II. The Directive proposes a fixed formula for the calculation of the variance of the loss distribution. Once again the relevant LGDs are the starting point. For each rating class j, $y_j$ and $z_j$ are defined as follows [21]:

$$ y_j = \sum_{i \in j} \mathrm{LGD}_i^{\mathrm{Sol}}, \qquad z_j = \sum_{i \in j} \left(\mathrm{LGD}_i^{\mathrm{Sol}}\right)^2 \tag{3.2} $$

The aforementioned sums cover all independent counterparties i in rating class j. Using these quantities, the variance formula used in the calculation of the SCR is

$$ V = \sum_j \sum_k u_{j,k}\, y_j\, y_k + \sum_j v_j\, z_j \tag{3.3} $$

where j and k in the sums run over all rating classes and $u_{j,k}$ and $v_j$ are fixed parameters which only depend on the rating classes. These parameters are derived directly from the individual PDs of the counterparties and a fixed parameter $\gamma = 0.25$ [21]. It is clear that the PDs have a large impact on the variance of the loss distribution, since they are its basic building block.

$$ u_{j,k} = \frac{p_j(1-p_j)\,p_k(1-p_k)}{(1+\gamma)(p_j+p_k) - p_j p_k} \tag{3.4} $$

$$ v_j = \frac{(1+2\gamma)\,p_j(1-p_j)}{2 + 2\gamma - p_j} \tag{3.5} $$

In the above formulas the terms $p_j$, $p_k$ are the PDs assigned to the counterparties in each rating class. QIS 5⁴ assigns specific PDs to companies based on their credit rating. Additionally, it provides PDs for companies that are not rated, based on their solvency ratio [21]. The latter is a key metric used to measure a company's ability to meet its debt and other obligations; it indicates whether a company's cash flow is sufficient to meet its short-term and long-term liabilities. In this master thesis, in order to have a diversified portfolio, some of the life insurance companies will not have a credit rating and the others will have different credit ratings. However, companies with a solvency ratio below 80% and with a credit rating below B will not be used: a life reinsurance firm is not very likely to insure companies with these characteristics, due to the fact that they have a high probability of default.
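To make the mechanics of formulas (3.1)-(3.5) concrete, the following is a minimal sketch (not taken from the thesis) of how the standard-formula SCR could be computed once the LGD^Sol values, the rating class of each counterparty and the corresponding QIS 5 PDs are available. The function name and the data layout are illustrative assumptions.

```python
import numpy as np

def scr_type1(lgd_sol, rating_class, pd_by_class, gamma=0.25):
    """Sketch of the Solvency II type 1 SCR of formulas (3.1)-(3.5).

    lgd_sol      : array of LGD^Sol_i per counterparty
    rating_class : array of rating-class labels per counterparty
    pd_by_class  : dict mapping a rating-class label to its PD p_j
    """
    classes = sorted(pd_by_class)
    p = np.array([pd_by_class[j] for j in classes])
    # y_j and z_j per rating class, formula (3.2)
    y = np.array([lgd_sol[rating_class == j].sum() for j in classes])
    z = np.array([(lgd_sol[rating_class == j] ** 2).sum() for j in classes])
    # u_{j,k} and v_j, formulas (3.4) and (3.5)
    pj, pk = np.meshgrid(p, p, indexing="ij")
    u = pj * (1 - pj) * pk * (1 - pk) / ((1 + gamma) * (pj + pk) - pj * pk)
    v = (1 + 2 * gamma) * p * (1 - p) / (2 + 2 * gamma - p)
    # variance of the loss distribution, formula (3.3)
    V = y @ u @ y + v @ z
    # quantile factor: 3 if sqrt(V) <= 5% of the total LGD, otherwise 5
    q = 3.0 if np.sqrt(V) <= 0.05 * lgd_sol.sum() else 5.0
    return min(lgd_sol.sum(), q * np.sqrt(V))  # formula (3.1)

# Illustrative use with three counterparties in two rating classes:
lgd = np.array([2.0e6, 1.5e6, 3.0e6])
cls = np.array(["AA", "A", "AA"])
print(scr_type1(lgd, cls, {"AA": 0.0001, "A": 0.0005}))
```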

⁴ The figures on the following page are taken from the QIS 5 technical specifications and are approved by the

At this point it should be mentioned that the Solvency II formula can also be used with simulated⁵ PDs instead of the fixed PDs. With simulated PDs, the SCR for the CDR is defined as

$$ \mathrm{SCR}_{\mathrm{def},1} = \min\left(\sum_{i=1}^{n} \mathrm{LGD}_i^{\mathrm{Sol}},\; q\sqrt{V^{\mathrm{Sim}}}\right) \tag{3.6} $$

where the factor $V^{\mathrm{Sim}}$ denotes the variance of the loss distribution when simulated PDs are used.

Figure 3.1: No ratings assigned. Credit rating is not available for these companies.
Figure 3.2: Credit ratings assigned and PDs.

Now let us explain the LGD in a bit more detail. Under the Solvency II regime, the LGD of an exposure is conceptually defined as the loss of basic own funds which the insurer would incur if the counterparty defaulted. In formula (3.1) the term $\mathrm{LGD}_i$ can be defined as follows:

$$ \mathrm{LGD}_i^{\mathrm{Sol}} = \max\bigl(50\% \cdot (\mathrm{Recoverables}_i + \mathrm{RM}_i - \mathrm{Collateral}_i);\; 0\bigr) \tag{3.7} $$

However, if a reinsurance counterparty has tied up an amount for collateralisation commitments greater than 60% of the assets on its balance sheet⁶, then the factor 50% is replaced by 90%. The factor 50% is the RR set by the regulatory authorities. In the above formula, $\mathrm{Recoverables}_i$ denotes the best-estimate recoverables from the reinsurance contract; reinsurance recoverables include the amount owed to the ceding companies by the reinsurer for claims and claims-related expenses. The second term underpins the risk-mitigating effect of the reinsurance arrangement on underwriting risk. $\mathrm{Collateral}_i$ is the risk-adjusted value of the initial collateral posted in relation to the reinsurance arrangement, and in this case it is fixed at 5 million for every counterparty of the reinsurance company. The general interpretation of collateral is an asset pledged by a borrower to a lender, usually in return for a loan; the lender has the right to seize the collateral if the borrower defaults on the obligation. The recoverables in this analysis are defined as the Actuarial Present Value (APV)⁷ derived from the life insurance counterparties. A value for the risk-mitigating effect should be carefully selected. The risk-mitigating effect is an approximation of the difference between the (hypothetical) capital requirement for underwriting risk under the condition that the reinsurance arrangement is not taken into account and the capital requirement for underwriting risk [11]. Due to the lack of real data from a reinsurance company, the risk-mitigating effect is set at zero.

⁵ See Subsection 3.2.1.
⁶ This means that the monetary value of the collateral posted by the ceding company is greater than
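Formula (3.7) itself is straightforward to evaluate. The helper below is a hypothetical illustration (the names and the explicit recovery-factor argument are assumptions, not the thesis' code):

```python
def lgd_sol(recoverables, risk_mitigation, collateral, factor=0.50):
    """LGD^Sol of formula (3.7); pass factor=0.90 when more than 60% of the
    counterparty's assets are tied up as collateral (see the text above)."""
    return max(factor * (recoverables + risk_mitigation - collateral), 0.0)

# e.g. APV-based recoverables of 12 million, zero risk mitigation, 5 million collateral:
print(lgd_sol(12e6, 0.0, 5e6))  # 3.5 million
```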

3.2 Alternative Method

The backbone of the alternative method for the CDR capital requirement is the proper approximation of the portfolio's loss distribution. In general, insurers and reinsurers prefer to use heavy-tailed distributions to fit their losses, since these distributions have the advantage of being able to cover severe losses. This method's focal point is therefore the proper estimation of the reinsurer's loss distribution. The first step is to try to fit an Extended Pareto Distribution (EPD) [5]:

$$ F(x) = \begin{cases} 1 - \bigl(x\,(1 + \kappa - \kappa x^{\tau})\bigr)^{-1/\gamma} & \text{if } x > 1 \\ 0 & \text{otherwise} \end{cases} $$

Note that an EPD with parameters $\tau = -1$ and $\kappa = (\gamma/\sigma) - 1$ is a special case of a Generalized Pareto distribution with fixed parameters $\mu = 1$, $\gamma$, $\sigma$ [5]. The reason this distribution was selected in the first place is that it is widely applied in reinsurance undertakings [1]. It also has the interesting property of playing a major role in Extreme Value Theory (EVT). The latter covers a group of models for threshold exceedances, which are applied to large observations that exceed some high level; it is considered one of the most useful practical tools because of its efficient use of the limited data on extreme outcomes [34]. Moreover, in order to provide a better fit to the loss distribution, it is possible to use a splicing model. That means that the "body" of the loss distribution can follow another distribution while its tail follows an EPD. Hence, even if an EPD proves to be an inappropriate model for the complete loss distribution, it is still a heavy-tailed distribution that can be used to assess the tails of the loss distribution, in which large losses occur.
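For reference, the EPD distribution function defined above can be evaluated directly. The sketch below is an illustrative implementation; parameter names mirror the text, and no validation is attempted beyond restricting the support to x > 1:

```python
import numpy as np

def epd_cdf(x, gamma, kappa, tau):
    """CDF of the Extended Pareto Distribution:
    F(x) = 1 - (x * (1 + kappa - kappa * x**tau))**(-1/gamma) for x > 1, else 0."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    valid = x > 1.0
    xv = x[valid]
    out[valid] = 1.0 - (xv * (1.0 + kappa - kappa * xv ** tau)) ** (-1.0 / gamma)
    return out

# With tau = -1 and kappa = gamma/sigma - 1 this reduces to the GPD case noted above.
print(epd_cdf([1.5, 5.0, 50.0], gamma=0.6, kappa=-0.4, tau=-1.0))
```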

The next step is to check the goodness of fit of this distribution on the data. A common measure of how well a distribution is fitted to the data is the AIC. Suppose that there are n statistical models and that model i has $K_i$ parameters, denoted by $\theta_i$, and a likelihood function $L_i(\theta_i; X)$. According to Akaike's approach, the best choice is the model minimizing

$$ \mathrm{AIC}(i) = -2 \ln L_i(\hat{\theta}_i; X) + 2K_i \tag{3.8} $$

in which $\hat{\theta}_i$ denotes the maximum likelihood estimator of $\theta_i$ [34].

Assuming that there is a good fit of the loss distribution, the next step is to provide a formal risk analysis of the reinsurer's loss portfolio, that is, the calculation of the VaR and the Conditional Tail Expectation, which is defined as [1]

$$ \mathrm{CTE}_{1-p} = E\bigl(X \mid X > Q(1-p)\bigr) = E\bigl(X \mid X > \mathrm{VaR}_{1-p}\bigr) = \mathrm{VaR}_{1-p} + \Pi(\mathrm{VaR}_{1-p})/p \tag{3.9} $$

In the above formula, $\Pi(u) = E\bigl((X-u)_+\bigr)$ is the premium of the excess-of-loss insurance with retention u. For this expression to be valid, it is required that $u \geq X_{n-k,n}$, the (k+1)-th largest observation [1]; if this condition does not hold, the premium cannot be calculated. The excess-of-loss premium is estimated using the parameters of the EPD. When there is an insurance limit L, the premium formula is modified to $\Pi(u, L) = E\bigl(\min\{(X-u)_+,\, L\}\bigr)$. The CTE is basically the Tail-VaR, which will be used later [42].
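The CTE of formula (3.9) also has a simple empirical counterpart that can be computed from losses simulated from, or observed under, the fitted model. The sketch below is illustrative and is not the thesis' own estimator, which works with the fitted (spliced) EPD parameters:

```python
import numpy as np

def var_cte(losses, level=0.995):
    """Empirical VaR and CTE at the given confidence level: the CTE is the
    average of the losses exceeding the VaR (Tail-VaR)."""
    losses = np.asarray(losses, dtype=float)
    var = float(np.quantile(losses, level))
    tail = losses[losses > var]
    cte = float(tail.mean()) if tail.size else var
    return var, cte

# Illustrative use with simulated portfolio losses:
rng = np.random.default_rng(1)
sample = rng.pareto(2.0, 100_000) * 1e6
print(var_cte(sample))
```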

Assuming a good fit of the distribution, formula (3.1) under the alternative method is modified into

$$ \mathrm{SCR}_{\mathrm{def},1} = \min\left(\sum_{i=1}^{n} \mathrm{LGD}_i^{\mathrm{Sol}},\; \mathrm{CTE}_{1-p}\right) \tag{3.10} $$

Now assume that the EPD does not serve as a good model for the loss distribution. There are two options: either fit another heavy-tailed distribution (e.g. the Burr) or approximate the distribution or its tail by using the Gram-Charlier formula [1]

$$ \hat{F}_{GC}(x) = \Phi(z) + \phi(z)\left(-\frac{v}{6}h_2(z) - \frac{k}{24}h_3(z)\right) \tag{3.11} $$

in which $z = (x - \mu)/\sigma$, $h_2(z) = z^2 - 1$ and $h_3(z) = z^3 - 3z$. This approach will also provide the first three moments of the approximated distribution, as well as a vector of probabilities for the points of the distribution that are estimated. Another statistical model for the approximation of the loss distribution is the Edgeworth method [14].

$$ \hat{F}_{E}(x) = \Phi(z) + \phi(z)\left(-\frac{v}{6}h_2(z) - \frac{3k\,h_3(z) + \gamma_3^2\,h_5(z)}{72}\right) \tag{3.12} $$

where $z = (x - \mu)/\sigma$, $h_2(z) = z^2 - 1$, $h_3(z) = z^3 - 3z$ and $h_5(z) = z^5 - 10z^3 + 15z$. The factors $\phi$ and $\Phi$ are the standard normal probability density and cumulative distribution functions. Intuitively, the parameters are very similar to those of the Gram-Charlier formula. Once again, these two approximations will be compared in order to decide which one better approaches the loss distribution. By using the above formulas it is possible to approximate the loss distribution in a different way and also obtain an estimate of its variance; this variance estimate can then be plugged directly into the term $q\sqrt{V}$ of formula (3.1).
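As an illustration of formula (3.11), the Gram-Charlier approximation of the loss CDF can be evaluated from the first four moments. The sketch below assumes, as is standard for this series, that v denotes the skewness and k the excess kurtosis; the text above does not spell this out, so treat the parameter names as assumptions:

```python
import numpy as np
from scipy.stats import norm

def gram_charlier_cdf(x, mu, sigma, skew, exkurt):
    """Gram-Charlier approximation (3.11) of a CDF from mean, standard
    deviation, skewness (v) and excess kurtosis (k)."""
    z = (np.asarray(x, dtype=float) - mu) / sigma
    h2 = z ** 2 - 1.0
    h3 = z ** 3 - 3.0 * z
    return norm.cdf(z) + norm.pdf(z) * (-skew / 6.0 * h2 - exkurt / 24.0 * h3)

# With skew = exkurt = 0 the approximation collapses to the normal CDF:
print(gram_charlier_cdf([0.0, 1.0, 2.0], mu=0.0, sigma=1.0, skew=0.0, exkurt=0.0))
```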


Note that the success of the alternative method relies heavily on how well the loss distribution of the reinsurer is estimated. Furthermore, it should be emphasized that all of the above depends on the use of fixed parameters; these refer to the PDs, which are taken directly from Solvency II's instructions. In the following subsection, the alternative method is used with stochastic starting parameters. The calculations concerning the algorithms/methods and the ideas behind the data are from Reynkens et al. (2017) [41], Lang et al. (2015) [40] and Verbelen et al. (2015) [47].

3.2.1 Stochastic Parameters

Before proceeding further, it is important to explain why the use of stochastic parameters may be convenient. Insurance companies, as well as reinsurers, especially when they want to make an appropriate long-run business/risk plan, find it difficult to estimate PDs and LGDs for the forthcoming years. It is therefore common to use stochastic simulations and scenarios in order to assess their CDR, calculate their SCR and estimate their loss distribution. At this point, it is important to explain the different interpretation of the term LGD compared to the Solvency II definition: here the LGD is defined as the percentage of an asset/claim that is lost if a counterparty defaults.

The idea behind this form of simulation comes from Jakob & Fischer [30]. Their paper provides an array of building blocks for simulating PDs, LGDs and EADs while accounting for different distributional assumptions, and is based on the CreditRisk+ and CreditMetrics modelling approaches. The simulation steps are presented in the bullet points below; a short code sketch follows the list.

• Create a portfolio of 400 counterparties, three business lines (with a focus on the life insurance sector) and five countries. This simulated portfolio includes randomly generated PDs, EADs and LGDs; those values are the initial simulated portfolio data. Constraints are added to the PDs: the maximum PD value equals 0.1 and the minimum is set at 0.

• Draw random sector variances and weights.

• Draw 100,000 sector realizations. So, for 400 counterparties and three business lines or sectors (K = 3), there are $s^{(1)}, \ldots, s^{(100000)} \in \mathbb{R}^{K}$ sector realizations.

• Calculate the conditional $PD^{Sim}$ such that

$$ PD^{Sim} = \Phi\left(\frac{\Phi^{-1}(PD) - w^{T}x}{\sqrt{1 - w^{T}\Sigma w}}\right) \tag{3.14} $$

In the above formula, PD is the randomly generated probability of default as described in the first bullet point, x is the vector of sector drawings, w is the vector of sector weights and $\Sigma$ is the correlation matrix of the sector variables.

• Draw the default of each counterparty i for every sector according to a Bernoulli distribution, $Df_i \sim \mathrm{Ber}(PD^{Sim})$.
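The bullet points above translate into a small simulation. The sketch below is illustrative only: the portfolio size is as described, but the number of scenarios is reduced for memory, and the sector weights and correlation matrix are placeholder choices rather than the thesis' calibration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_cp, n_sec, n_sim = 400, 3, 10_000            # counterparties, sectors, scenarios

pd0 = rng.uniform(1e-4, 0.1, n_cp)             # unconditional PDs, capped at 10%
w = rng.dirichlet(np.ones(n_sec), n_cp)        # sector weights per counterparty
Sigma = np.array([[1.00, 0.25, 0.25],
                  [0.25, 1.00, 0.25],
                  [0.25, 0.25, 1.00]])         # illustrative sector correlations

chol = np.linalg.cholesky(Sigma)
x = rng.standard_normal((n_sim, n_sec)) @ chol.T   # sector realizations s(1)...s(n_sim)

# conditional PDs, formula (3.14): one value per scenario and counterparty
num = norm.ppf(pd0)[None, :] - x @ w.T
den = np.sqrt(1.0 - np.einsum("ij,jk,ik->i", w, Sigma, w))
pd_sim = norm.cdf(num / den[None, :])

defaults = rng.random((n_sim, n_cp)) < pd_sim      # Bernoulli default indicators
print(pd_sim.mean(), defaults.mean())
```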


The purpose is to estimate the aforementioned factors by simulating a large credit portfolio, while taking into consideration the correlation between different business lines. The random weights are assigned to each insurance sector⁸, with the major weight assigned to life insurance. An input for the correlation will be the Solvency II correlation matrix for separate business lines [12].

The rationale behind this approach stems from the desire to replicate a credit portfolio⁹. The simulations will produce estimates of PDs, LGDs and EADs for all counterparties and all business lines. However, only the estimates of these parameters for life insurance will be used, since the reinsurance company under consideration operates in the life insurance sector. A nice property of simulating the starting parameters is that they can be used instead of the fixed ones¹⁰, but they can also provide an "extension" to the alternative method. Let us now focus on this extended version. All notations and definitions are as in Jakob & Fischer [30] and Sironi & Resti [44]. Another formula that will be used in accordance with the simulations is the Expected Loss (EL):

$$ E(L) = \sum_{i=1}^{n} \mathrm{LGD}_i \cdot PD_i \cdot \mathrm{EAD}_i \tag{3.15} $$

In addition, another formula that will be applied is the widely known risk metric VaR. Formally, it is defined as the smallest number l such that the probability that the loss L exceeds l is not larger than (1 - α) [34]. Essentially, the VaR at a specific confidence level α equals the corresponding quantile of the loss distribution.

$$ \mathrm{VaR}_{\alpha} = \inf\{\, l \in \mathbb{R} : P(L > l) \leq 1 - \alpha \,\} \tag{3.16} $$

Finally, the fourth formula is the stochastic SCR in case of counterparty default:

$$ \mathrm{SCR}_{\mathrm{def}1,\mathrm{Stoch}} = \mathrm{VaR}_{\alpha} - E(L) \tag{3.17} $$

The above formula is taken from Jakob & Fischer, but it is generic and can also be found in standard textbooks [32] and papers. Note that this extension addresses the fact that the Solvency II formula for the CDR does not involve the use of the EAD as it is defined in the credit risk literature.
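Formulas (3.15)-(3.17) combine into a short Monte Carlo routine. The sketch below is illustrative; for brevity it draws independent defaults rather than coupling them through the sector realizations of (3.14):

```python
import numpy as np

def stochastic_scr(pd, lgd, ead, n_sim=100_000, alpha=0.995, seed=0):
    """Expected loss (3.15), VaR (3.16) and SCR_def1,Stoch = VaR - E(L) (3.17),
    with the LGD interpreted as a percentage, as in this subsection."""
    rng = np.random.default_rng(seed)
    pd, lgd, ead = (np.asarray(a, dtype=float) for a in (pd, lgd, ead))
    el = float(np.sum(lgd * pd * ead))                      # formula (3.15)
    defaults = rng.random((n_sim, pd.size)) < pd            # independent defaults
    losses = defaults.astype(float) @ (lgd * ead)           # portfolio loss per scenario
    var = float(np.quantile(losses, alpha))                 # formula (3.16)
    return var - el                                         # formula (3.17)

# Illustrative use with three counterparties:
print(stochastic_scr(pd=[0.01, 0.02, 0.005], lgd=[0.4, 0.5, 0.6], ead=[1e6, 2e6, 5e5]))
```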

The replicating credit portfolio can be important for the reinsurer for two reasons. Firstly, note that the reinsurance company does not have any information about the level of the SCR in the reinsurance industry. Reinsurance companies provide their SCR only to the regulatory authorities, as part of their Own Risk and Solvency Assessment (ORSA); no company would report this kind of information publicly. However, it would be useful for the reinsurer to have a sense of the sector's SCR. Simulating a large credit portfolio and estimating its SCR is therefore important, because the latter may be used as an index for the level of the SCR in the reinsurance industry, and as a tool for the reinsurance firm to check whether its SCR is higher or lower than the industry's. Secondly, the reinsurer can use the $PD^{Sim}$ ¹¹ in order to calculate the SCR and assess the impact of the simulated PDs on its SCR value.

⁸ Insurance companies do not always operate on even terms across their business lines. For instance, a company could invest more in life insurance than in casualty/property insurance.
⁹ Large reinsurance firms do not focus on just one industry sector; they provide services for all insurance sectors.

3.3 Shocks

For the calculation of the SCR for the second year of the reinsurance, the company has to apply shocks to the PDs of the counterparties. For the second reinsurance year, a life insurance company's PD may be higher or lower according to the shock applied to it. The methodology is that prescribed by Solvency II [11]. A risk analyst could generate random shocks, but it is better to use the Directive's instructions. It is proposed that a common shock follows a probability distribution with one shape parameter α and 0 < s < 1, that is

$$ P(S \leq s) = s^{\alpha} \tag{3.18} $$

From this probability distribution, 400 shock values are drawn and applied to the PDs of the counterparties. The shocked PD is driven by the shock size $s_i$ and its formulation is

$$ PD_i^{\mathrm{new}} = b_i + (1 - b_i)\, s_i^{\tau / b_i} \tag{3.19} $$

in which $b_i$ is defined as a baseline default probability and τ is a shape parameter. In the Solvency II calibration paper [12], there is a relation between the two shape parameters, namely α/τ = 4; the idea behind this is that market default rates are high (because of the financial crisis). The baseline default probability incorporates the original PD of each counterparty and can be defined as

$$ b_i = \frac{PD_i}{(\alpha/\tau)(1 - PD_i) + 1} \tag{3.20} $$

Once again, it should be noted that a shocked PD is assigned to each of the 400 counterparties. The shocked PDs are applied in both methods; for the Solvency II method, the shocked PDs are plugged into formulas (3.4) and (3.5), which are used in the calculation of the loss distribution's variance (3.3). The values below give the descriptive statistics of the shocked and the original PDs.

Mean: $\overline{PD^{\mathrm{new}}} = 0.00963$, $\overline{PD} = 0.00662$. Variance: $\sigma^2(PD^{\mathrm{new}}) = 0.00462$, $\sigma^2(PD) = 0.000424$.

Finally, the ratio of the shocked probabilities' average to the original probabilities' average is equal to 1.46. A plot of the shocked PDs versus the original PDs is also provided.
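The shock mechanism of formulas (3.18)-(3.20) is easy to reproduce. The sketch below is illustrative: α and τ are placeholder values satisfying α/τ = 4, and strictly positive PDs are assumed so that the exponent τ/b_i is well defined.

```python
import numpy as np

def shocked_pds(pd, alpha=4.0, tau=1.0, seed=0):
    """Draw shocks with P(S <= s) = s**alpha (inverse-CDF sampling) and map
    each original PD to a shocked PD via formulas (3.19) and (3.20)."""
    rng = np.random.default_rng(seed)
    pd = np.asarray(pd, dtype=float)                    # assumed strictly positive
    s = rng.random(pd.size) ** (1.0 / alpha)            # shock sizes from (3.18)
    b = pd / ((alpha / tau) * (1.0 - pd) + 1.0)         # baseline PDs, formula (3.20)
    return b + (1.0 - b) * s ** (tau / b)               # shocked PDs, formula (3.19)

# Illustrative use: shock a vector of small PDs and compare the averages.
pd = np.full(400, 0.0066)
new = shocked_pds(pd)
print(pd.mean(), new.mean())
```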

¹¹ While calculating the SCR, the reinsurer uses the PDs for the life insurance sector, derived from


Chapter 4

Data

The data for this master thesis are constructed manually. Specifically, 400 life insurance portfolios are built. Each of these portfolios represents a life insurance firm that has entered into a quota share reinsurance agreement with the life reinsurance company. Consequently, the reinsurer has to face the probability of default¹ of the ceding companies. In order to fabricate these life portfolios in a realistic way, the proper choice of their parameters is vital.

The inputs for the portfolio construction consist of four major parts: the type of the life insurance program, the value of the policy, the demographic data of the population and the interest rate of each policy. The type of the insurance program covers the variety of life insurance products offered, for instance single, deferred, joint-status, decreasing and increasing annuities. The value of the policy is the amount of money the insurance company has to pay the beneficiary when the insurance contract is terminated or when the beneficiary dies. The demographic data contain actuarial life tables and survival probabilities from different countries for males and females of every age; these data represent the differences in the populations of different countries. For this research, demographic data for Italy, the USA, Canada and China are used, taken from the Society of Actuaries (SOA). Regarding the proper choice of the interest rate (all insurance companies should use an interest rate in order to discount their long-term liabilities), it is set at 4.2%, that is, the Ultimate Forward Rate (UFR) set by the EU regulatory authorities. This discount rate is meant for the valuation of long-term liabilities.² However, insurance companies are allowed to slightly alter this UFR according to their discount function.

These inputs are plugged into the life actuarial functions of the aforementioned annuities and return the Actuarial Present Value (APV)³ of the policies. Part of the data is also the calculation of premiums for each life policy; the functions that derive the premium of each policy from its APV are taken from Gerber et al. [8].
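As a small illustration of how such APVs can be obtained, the sketch below prices a whole-life annuity-due from a vector of one-year survival probabilities and the 4.2% UFR as a flat discount rate. It is a simplified, hypothetical stand-in for the actuarial functions of Bowers, Gerber et al. [8], not the thesis' portfolio generator:

```python
import numpy as np

def annuity_due_apv(px, i=0.042, payment=1.0):
    """APV of a whole-life annuity-due: sum over k of v^k * kpx * payment,
    where px holds the one-year survival probabilities from the life table,
    starting at the insured's current age."""
    v = 1.0 / (1.0 + i)
    kpx = np.concatenate(([1.0], np.cumprod(np.asarray(px, dtype=float))))
    discount = v ** np.arange(kpx.size)
    return payment * float(np.sum(discount * kpx))

# Illustrative use with a toy survival curve:
px = np.linspace(0.99, 0.0, 45)
print(annuity_due_apv(px, payment=10_000))
```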

¹ When a company defaults, it is unable to pay its claims and financial obligations.
² Life insurance products are considered long-term liabilities since their "lifespan" is more than 15 years.
³ The APV is the expected value of the present value of a contingent cash flow stream (i.e. a series of

Chapter 5

Results

5.1 Solvency SCR

First of all, the Solvency II formula for the CDR capital requirement is presented. Before continuing, the different use of the PDs assigned to the 400 life insurance portfolios should be mentioned. The original PDs, as illustrated in Figures 3.1 and 3.2, are the input parameters used to construct the terms $u_{j,k}$, $v_j$ in the Solvency II SCR calculation. After this step, the variance (3.3) is calculated. The procedure behind the SCR calculation is exactly as described in the methodology chapter. The SCR is calculated as

$$ \mathrm{SCR} = q\sqrt{V} = 19{,}105{,}541 \tag{5.1} $$

with a quantile factor q equal to 3 and V the variance of the loss distribution as defined in formula (3.3). The motive behind the choice of the quantile factor is that, for a portfolio in which the credit quality of the counterparties is good, it seems appropriate to base the factor on a skewed distribution like the lognormal [11]. If the initial PDs assigned to each counterparty were higher (which would have reflected a low credit quality portfolio), it is assumed that the resulting distribution would be much more skewed than the lognormal and hence a higher quantile factor would have been chosen (e.g. q = 5) [11].

5.2 The EPD Case


EPD, and in general the Pareto distribution, the reinsurer can use EVT concepts such as Hill estimators, threshold exceedances, the Extreme Value Index (EVI), etc.

Before continuing further let's explain some of the concepts presented later on.

• Erlang distributions: The Erlang distribution is a two-parameter family of continuous distributions with one scale and one shape parameter; it is a special case of the Gamma distribution. In this thesis, a mixture of Erlangs (ME) is used to model the body of the loss distribution and a Pareto to model its tail. The motive behind choosing a mixture of Erlang distributions is that they provide flexibility in modelling the main part of the loss distribution: any positive continuous distribution can be approximated up to any accuracy by an ME distribution [42]. Another reason for this choice is that, according to Klugman et al. (2012) [31], instead of trying many standard distributions in order to model the complete loss distribution, splicing two or more distributions is a better option.

• EVI: This term is basically a tail index. Since its inception, EVT has been used to describe and predict extreme events in a limited-data environment. The problem of estimating the high quantiles of the loss distribution is linked to the accurate modelling of its tails, and in EVT it is well documented that the EVI governs the tail behaviour of loss distributions: the heaviness of the tails strongly depends on the EVI [33].

• Hill estimator: One of the most well-known approaches to the tail behaviour of a loss distribution was introduced by Hill (1975) [24], who proposed an estimator for the tail index of a Pareto distribution function. It is based on a number of upper order statistics from the positive part of a general sample. Hill estimates of positive extreme value indices, adapted for interval censoring, are computed as a function of the number of order statistics (see Appendix A.3); a short sketch of the classical, uncensored estimator follows this list.
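For the uncensored case, the classical Hill estimator takes only a few lines. The sketch below is illustrative and omits the censoring adjustment used in the thesis:

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimate of the extreme value index gamma based on the k largest
    observations of a positive sample x: mean of log(X_{n-i+1,n}) over the top
    k order statistics minus log(X_{n-k,n})."""
    xs = np.sort(np.asarray(x, dtype=float))
    top = xs[-(k + 1):]                 # the k+1 upper order statistics
    return float(np.mean(np.log(top[1:])) - np.log(top[0]))

# Illustrative use on a strict Pareto sample with gamma = 0.5:
rng = np.random.default_rng(2)
sample = (1.0 - rng.random(2_000)) ** -0.5
print(hill_estimator(sample, k=200))
```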


The Hill plot shows the EVI estimates for the censored data. However, two problems arise. First, for small samples, Hill estimates have high volatility and their performance is affected. Secondly, the big jump in the value of γ at the last order statistics suggests that the Hill estimates may be poor and should hence be ignored. Censoring the data in an EVT context means that the NAVs and the LGDs are censored in the sense that NAVs > LGDs (see Appendix A.3). The EVI is closely related to general EVT. The next step is to calculate the parameter estimates for the censored EPD and check the Pareto QQ-plot for the censored data; this can be a good way to check whether the Pareto distribution provides a good fit to the reinsurer's portfolio loss distribution.

The QQ-plot has the same quantiles as a normal Pareto QQ-plot, but the theoretical quantiles are replaced. Combined with the aforementioned, the EPD estimates for the EVI are provided.

Figure 5.2: Pareto QQ-plot adapted for right censoring. Figure 5.3: Censored EPD estimates of the EVI.
Figure 5.4: Exponential QQ-plot for censored data. Figure 5.5: Exponential QQ-plot for original data.


the data. The shape parameters of the distribution are based on the Hill estimators. The table below gives the parameter estimates from fitting the data.

Parameter    ME (Mixed Erlang)          PA (Pareto)
π            0.842013                   0.1579887
t            24601535                   –
Π            0.295, 0.6118, 0.08974     –
Shape par.   1, 3, 15                   –
θ            –                          105377
µ            3                          3
γ            –                          0.6117699
LogLik       -5855.051                  -5855.051
AIC          11726.1                    11726.10
BIC          11758.03                   11758.03

Table 5.1: Mixed Erlang with Pareto for censored data.

At this point it is important to underline the parameters' meaning. First, the term t represents the splicing point, while Π denotes the splicing weights, which help define the splicing density. The parameter θ is the scale parameter of the distribution and µ captures the number of Erlang mixtures. The AIC and BIC are the information criteria that help an analyst choose which statistical model to use. It is important to note that a spliced Pareto is fitted to the uncensored data as well; the uncensored data are the original NAVs from the reinsurer's portfolio. The following table depicts the parameter estimates that the reinsurer obtained from the original data.

Parameter    ME (Mixed Erlang)               PA (Pareto)
π            0.87                            0.13
t            24601555                        –
Π            0.3068, 0.5817209, 0.1114       –
Shape par.   2, 7, 22                        –
θ            –                               609087
µ            3                               3
γ            –                               0.4107613
LogLik       -68033.69                       -68033.69
AIC          13683.4                         13683.4
BIC          13715.33                        13715.33

Table 5.2: Mixed Erlang with Pareto for uncensored data.

Apart from the above cases, an ME with a Generalized Pareto distribution (GPD) is fitted to the data for both the censored and the uncensored case. Note that the selection of the "best" statistical model will be based on the AIC and BIC criteria, which in turn are based on the log-likelihood values.


Parameter    ME (Mixed Erlang)               GPD
π            0.99                            0.01
t            64083322                        –
Π            0.7893, 0.153071, 0.05759       –
Shape par.   2, 10, 25                       –
θ            –                               2734032
µ            3                               3
γ            –                               -1.261028
σ            –                               3948257
LogLik       -682513                         -682513
AIC          13675.03                        13675.03
BIC          13710.95                        13710.95

Table 5.3: Mixed Erlang with GPD for uncensored data.

Parameter    ME (Mixed Erlang)               GPD
π            0.995                           0.005
t            64083322                        –
Π            0.77446, 0.108811, 0.11671      –
Shape par.   1, 8, 24                        –
θ            –                               3393887
µ            3                               3
γ            –                               -1.489694
σ            –                               4664206
LogLik       -6825.13                        -6825.13
AIC          13307.99                        13307.99
BIC          13343.92                        13343.92

Table 5.4: Mixed Erlang with GPD for censored data.


Now let us discuss the plots presented previously. The Hill plot should be used cautiously because of the small set of order statistics. In addition, Hill's success relies on the Pareto distribution assumption for the whole dataset, which is not true in this case; for small datasets the Hill estimator is biased [25]. The large jump in the γ value after the 300th order statistic indicates that after this point the Hill estimates cannot be trusted [15]. For the EPD estimates of the EVI, which rely on the Hill estimator, the same problems exist. The final steps of the analysis include the calculation of the risk metrics that will play an important role in the calculation of the SCR. However, in order to check one last time whether the EPD provides a good fit to the tail, the Gram-Charlier (GC) and the Edgeworth approximations of the fitted loss distribution are used.

Figure 5.7: Comparison of the GC and the Edgeworth for the EPD.

The graph shows that the EPD approximation is adequate in both cases¹. Now the last step is to calculate and plot the risk metrics. The following plots are based on the spliced Pareto fit, and on the y-axis the term p denotes the exceedance probability. In addition, the VaR for the loss portfolio at a 99.5% confidence level is calculated, together with the CTE at the same confidence level. These results, if applied to formula (3.10), give the SCR for the alternative method. However, one should be very careful about which of the four values to select; the proper selection is based on the information criteria, which indicate the best statistical model.

Risk metric   VaR (censored)   VaR        CTE (censored)   CTE
ME-PA         2162654          2062115    13130391         11073881
ME-GPD        1987659          4087092    7594634          10734474

Table 5.5: Risk metrics at a 99.5% confidence level.

¹ The Edgeworth and GC are approximation series regarding the probability distribution based on its


Figure 5.8: CTE for censored data. Figure 5.9: CTE for uncensored data.
Figure 5.10: CTE for ME-GPD using uncensored data. Figure 5.11: CTE for ME-GPD using censored data.

Apart from the risk metrics and the calculation of the SCR, the excess-of-loss reinsurance premiums are plotted. As mentioned previously, the type of reinsurance used is a quota share treaty with a stop-loss term fixed at 150 million; this signifies that the reinsurer will stop reimbursing the ceding company if its losses are larger than this limit. In addition, each of the ceding companies pays for its losses up to 10 million (including the collateral, see p. 8). For the fitted spliced distribution, the premium that should be charged is set at 14740.36. The nominal² individual premium's mean value is 22070.28, which indicates that the reinsurer has entered into a favourable agreement with its counterparties.

Furthermore, the excess-of-loss premiums³ are plotted. The plot shows that if the claims for a counterparty exceeded 150 million, the reinsurer should either charge the excess-of-loss premiums or seek reinsurance from a larger firm. However, the excess-of-loss premium estimates are based on the Pareto distributional assumption⁴ for the dataset and on the Hill estimates. In any case, because of the unreliability of the Hill estimates, the reinsurer should be sceptical when the excess-of-loss premiums are taken into consideration.

² The nominal premium is the original premium charged by the ceding companies to their policyholders. The reinsurance company receives part of these premiums.
³ The excess-of-loss premium is the premium that should be charged if the claims' size exceeds 150 million.
⁴ The Pareto distribution hypothesis for the whole dataset was rejected previously. It was proved that

Figure 5.12: Excess-of-loss premium estimates for the reinsurance company compared to nominal individual premiums.

5.3 Stochastic Parameters


Figure 5.13: Simulated credit portfolio's loss distribution.

The reinsurance company can use the simulated PDs from the stochastic portfolio in the Solvency II formula for the SCR calculation:

$$ \mathrm{SCR} = q\sqrt{V^{\mathrm{Sim}}} = 84{,}319{,}651.54 \tag{5.2} $$

The above SCR uses the simulated PDs as inputs, and hence the term $V^{\mathrm{Sim}}$ denotes the variance of the loss distribution when the simulated PDs are used. Alternatively, the reinsurer can calculate the SCR of the replicating stochastic portfolio by using formula (3.17); according to that formula the SCR equals 30,980,000. There is a big difference between the result in (5.2) and the above. One reason is that, compared to the original PDs prescribed by Solvency II, the simulated PDs are much higher; another reason for this discrepancy is the different method of SCR calculation. Furthermore, the SCR of the replicating portfolio is not close to the SCR from formula (5.1), which shows that the stochastic portfolio does not describe the properties of the original portfolio very well and hence should not be used as an "internal index" of where the SCR should be. A good approach would be to run the simulated portfolio multiple times in order to get a better approximation of the SCR. Nevertheless, the stochastic SCR stemming from the simulated portfolio can be used as a measure for monitoring where the reinsurer's SCR stands within the insurance industry.

Finally, let us devote some time to explaining the ES that was calculated. ES is defined as the average of all losses that are greater than or equal to the VaR; in mathematical terms, $ES_{\alpha} = E(L \mid L \geq \mathrm{VaR}_{\alpha})$. ES is equivalent to the CTE when the underlying loss distribution function is continuous at $\mathrm{VaR}_{\alpha}$ [46]. For the replicating credit portfolio


5.4 Second Reinsurance Year

As mentioned previously, the reinsurance company has decided to cover the 400 ceding companies for two years, so it is important for the reinsurer to calculate the relevant risk metrics again for the forthcoming year. An important aspect of the computation of the SCR for the second reinsurance year is the application of shocks to the original Solvency II PDs. The method used to generate these shocks is described in Section 3.3. Once the shocks are generated, the shocked PDs are calculated. The SCR stemming from the Solvency II method is

$$ \mathrm{SCR} = q\sqrt{V} = 20{,}899{,}470 \tag{5.3} $$

in which q = 3. The next step is to present the rest of the risk metrics for the EPD case.

Parameter    ME (Mixed Erlang)    PA (Pareto)
π            0.770663             0.229337
t            25516209             –
Π            1                    –
Shape par.   1                    –
θ            –                    6063885
µ            1                    1
γ            –                    0.605668
LogLik       -4675.68             -4675.68
AIC          9359.360             9359.360
BIC          9375.326             9375.326

Table 5.6: Mixed Erlang with Pareto for censored data for the second year.

Parameter    ME (Mixed Erlang)    PA (Pareto)
π            0.34                 0.66
t            4459518              –
Π            0.000831, 0.9991     –
Shape par.   2, 14                –
θ            –                    1048027
µ            2                    2
γ            –                    0.9002188
LogLik       -6844.757            -6844.757
AIC          13701.51             13701.51
BIC          13725.46             13725.46

Table 5.7: Mixed Erlang with Pareto for uncensored data. Second year.

5.4. SECOND REINSURANCE YEAR CHAPTER 5. RESULTS Mix.Erl GPD ME GPD π 0.34 0.66 t 4459518  Π 0.000831, 0.9901  Shape Par 2,14  θ  1048027 µ 2 2 γ  0.5942095 σ  5397643 LogLik -6841.412 -6841.412 AIC 13696.82 13696.82 BIC 13724.71 13724.71

Table 5.8: Mixed Erlang with GPD for uncensored data. Second year.

Parameter    ME (Mixed Erlang)                      GPD
π            0.9975                                 0.0025
t            66960196                               –
Π            0.83494427, 0.11651586, 0.04853986     –
Shape par.   2, 12, 29                              –
θ            –                                      2218315
µ            3                                      3
γ            –                                      -1.645352
σ            –                                      418096
LogLik       -6719.937                              -6719.937
AIC          13457.87                               13457.87
BIC          13493.80                               13493.80

Table 5.9: Mixed Erlang with GPD for censored data. Second year.

Once again, in order to decide which model to use for the SCR calculation, the reinsurer should focus on the AIC, BIC and log-likelihood values; the smaller the AIC and BIC, the better the model. The next step is to present the risk metrics for each of these models. Note that the term CTE denotes the SCR for the EPD case. It is evident

Risk metric   VaR (censored)   VaR        CTE (censored)   CTE
ME-PA         3881070          2006475    19312459         30383330
ME-GPD        2499297          2100685    8837883          12518961

Table 5.10: Risk metrics at a 99.5% confidence level for the second year.

Chapter 6

Discussion

In this chapter, the answers to the research questions are given, accompanied by comments on the results and the methods, as well as the causes behind these outcomes.

Now let us focus on the research questions. The first research question concerns the SCR under the Solvency II and the alternative method for the first and the second reinsurance year. The Solvency II SCR for the first year is almost 19 million (formula 5.1) and for the second it is close to 21 million (formula 5.3). Regarding the SCR under the alternative method, there are four possible SCR values. The reinsurer should choose which of the four to use based on which statistical model best fits the data. For the constructed portfolio, the selection is based on the AIC/BIC criteria, which are based on the value of the log-likelihood. The model with the smaller AIC/BIC values is the ME-PA for censored data. If one opted to focus just on the likelihood value of each model, a likelihood ratio test could be used instead. The SCR is equal to the CTE of this model (13 million, Table 5.5). The same procedure should be followed for the second year of the reinsurance. Checking the selection criteria once again, the ME-PA for censored data remains the favoured model, and the SCR is equal to its CTE (19 million, Table 5.10). Note that the selection criteria have a big impact on the choice of the most appropriate model. If these criteria had implied a different model (e.g. for the first compared to the second reinsurance year), then the reinsurer should choose the model that satisfies the criteria. It should be apparent that the AIC/BIC criteria "govern" the choice of the appropriate model.

The second research question concerns the reliability of each method. Once again, the reinsurer should provide the regulatory authorities with results stemming from the Solvency II method. The use of the alternative method within the reinsurance company is as a benchmark model with respect to its risk profile and dataset. Note that the alternative method could be inadequate for another dataset, for instance a portfolio with smaller claim sizes but a large number of counterparties.

The third research question is easily answered by a simple fact: the simulated PDs generated using formula (3.14) produce much higher values than the fixed PDs of Solvency II. This is a sign that the replicating credit portfolio does not have good credit quality. The simulated PDs are produced by the CreditMetrics approach, which is based on simulating values from the standard normal distribution function Φ; it is possible that, if another distribution were used, the simulated PDs would be different. Another explanation for the large simulated PDs is that the mechanics of the CreditMetrics model are quite different from those of the Solvency II model. In addition, the entries of the correlation matrix Σ may also influence the values of the simulated PDs. Empirically, higher PDs imply larger capital requirements for insurers/reinsurers, and in this thesis this stylised fact does not change. With the use of stochastic probabilities the SCR skyrockets, and the reinsurer would be wrong to take such a big difference in the SCRs into account. The reinsurance company operates with counterparties of good credit quality, and it would be unwise to use high PDs that do not reflect its credit portfolio profile. In the author's opinion, the simulated probabilities should even be dismissed from the analysis.

The alternative method's key risk metric is the Conditional Tail VaR, or CTE. However, one might opt to choose the VaR instead of the CTE. The fourth research question examines what would change in the SCR value if the VaR were used. In the answer to the first research question it was shown that, under the alternative method, the SCR is lower compared to Solvency II. If the VaR is used instead of the CTE, the SCR is even lower. Specifically, for the first reinsurance year, the difference between the SCR stemming from the VaR and the Solvency SCR is 17 million; for the second reinsurance year, the difference is 16 million. This is anticipated, since the CTE focuses on the expected loss conditional on the fact that a loss at the 99.5% confidence level has already occurred. Judging from the VaR values, the decision to use the CTE as the key risk metric of the alternative method seems the wiser one.


reinsurance industry, then this means that the reinsurer's portfolio has better credit quality than its competitors in the reinsurance business.

The sixth research question concerns the implications of this form of reinsurance. Under the Solvency II method, and without any additional data, the reinsurance company bases its decision on the amount of capital required. The total SCR, that is the first year's SCR plus the second year's SCR, is roughly equal to 40 million. In the author's opinion that alone is quite high, and it may indicate that the reinsurer should seek larger premium holdings or larger collateral from the ceding companies. Another option for the reinsurer would be to seek reinsurance from a larger reinsurance firm. However, this decision is based solely on the SCR under Solvency II. The alternative method has the interesting property that, besides the SCR value, the premiums are also calculated (see pp. 23, 28). These indicate that for the first reinsurance year the reinsurer has chosen a profitable policy, by securing higher premiums from the ceding companies. In addition, the excess-of-loss premiums are estimated and plotted; because these premiums are based on Hill estimates, the reinsurer should use them with care. For the second reinsurance year, the reinsurer should ask for higher premiums or seek reinsurance from a larger firm. Intuitively, a stricter form of reinsurance would lead to lower SCR estimates under both the Solvency II and the alternative method.
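Because the excess-of-loss premiums mentioned above rely on Hill estimates of the tail index, a brief illustration of the Hill estimator [24] is given below; the simulated Pareto sample and the choices of k are purely illustrative and not the thesis' claims data.

```python
import numpy as np

def hill_estimator(data: np.ndarray, k: int) -> float:
    """Hill estimate of the extreme value index using the k largest observations."""
    x = np.sort(data)[::-1]                      # descending order statistics
    return float(np.mean(np.log(x[:k]) - np.log(x[k])))

# Illustrative Pareto sample with true extreme value index 1/2.5 = 0.4.
rng = np.random.default_rng(1)
sample = rng.pareto(a=2.5, size=5_000) + 1

for k in (50, 100, 200):
    print(f"k={k:4d}  Hill gamma = {hill_estimator(sample, k):.3f}")
```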


References

[1] Albrecher Hansjörg, Beirlant Jan, Teugels Jozef. Reinsurance: Actuarial and Statistical Aspects. 2017. John Wiley & Sons Ltd.

[2] Altman Edward. Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy. // Journal of Finance. 1968. 189-209.

[3] Andersson Fredrik, Mausser Helmut, Rosen Dan, Uryasev Stanislav. Credit risk optimization with Conditional Value-at-Risk criterion // Mathematical Programming. 2001. Vol 89, 273-291.

[4] Beirlant Jan, Delafosse Emmanuel, Guillou Armelle. Estimation of the extreme value index and high quantiles under random censoring. // Extremes. 2007.

[5] Beirlant Jan, Joossens Elisabeth, Segers Johan. Second Order Refined Peaks-Over-Threshold Modelling for Heavy-Tailed Distributions. // Journal of Statistical Planning and Inference. 2009. Vol 139, no. 8.

[6] Bernard Carole, Ludkovski Mike. Impact of Counterparty Risk on the Reinsurance Market // North American Actuarial Journal. 2012. Vol 16.

[7] Bernard Carole, Rüschendorf Ludger, Vanduffel Steven, Yao Jing. How robust is the value-at-risk of credit risk portfolios? // The European Journal of Finance. 2015. Vol 23.

[8] Bowers Newton, Gerber Hans, Hickman James, Jones Donald, Nesbitt Cecil. Actuarial Mathematics. 1997. Society of Actuaries, ISBN: 978-0938959465.

[9] Burren Daniel. Insurance demand and welfare-maximizing risk capital - Some hints for the regulator in the case of exponential preferences and exponential claims. // Insurance: Mathematics and Economics. 2013. Vol 53, 551-568.

[10] CAS. Statement of Principles Regarding Property and Casualty Insurance Ratemaking // Casualty Actuarial Society. 1988. Adopted by the Board of Directors of the CAS.

[11] CEIOPS. CEIOPS' Advice for Level 2 Implementing Measures on Solvency II: SCR standard formula - Counterparty default risk module // Committee of European Insurance and Occupational Pensions Supervisors. 2009. CEIOPS-DOC-23/09.

[12] CEIOPS. Solvency II Calibration Paper // Committee of European Insurance and Occupational Pensions Supervisors. 2010.

[13] Caouette John, Altman Edward, Narayan Paul. Managing Credit Risk: The Next Great Financial Challenge. 1998. John Wiley & Sons.

[14] Cheah P.K., Fraser A., Reid N. Some Alternatives to Edgeworth // The Canadian Journal of Statistics. 1993. Vol 21, p.131-138.

[15] Danielsson Jon, Ergun Lerby, de Vries Casper, de Haan Laurens. Tail Index Estimation: Quantile Driven Threshold Selection. // Risk Research. 2016.

[16] Duffie Darrell, Singleton Kenneth. Credit Risk: Pricing, Measurement, and Management. 2003. Princeton University Press.

[17] EIOPA. EIOPA Report on the fifth Quantitative Impact Study (QIS5) for Solvency II // European Insurance and Occupational Pensions Authority. 2011. EIOPA-TFQIS5-11/001.

[18] EIOPA. The underlying assumptions in the standard formula for the Solvency Capital Requirement calculation // European Insurance and Occupational Pensions Authority. 2014. EIOPA-14-322.

[19] Eckert Johanna, Gatzert Nadine, Martin Michael. Valuation and risk assessment of participating life insurance in the presence of credit risk // Insurance: Mathematics and Economics. 2016. Vol 71. 382-393.

[20] Einmahl John, Fils-Villetard Amélie, Guillou Armelle. Statistics of extremes under random censoring. // Bernoulli. 2008. DOI: 10.3150/07-BEJ104, 207-227.

[21] European Commission. QIS5 Technical Specifications // European Commission, Internal Market and Services DG, Financial Institutions, Insurance and Pensions. 2010. Annex to Call for Advice from CEIOPS on QIS5.

[22] Fisher Ronald. The Use of Multiple Measurements in Taxonomic Problems. // Annals of Eugenics. 1936. Vol 7, 179-188.

[23] Garrido Myriam, Lezaud Pascal. Extreme Value Analysis: an Introduction. // Journal de la Société Française de Statistique. 2013. Vol.154, No. 2.

[24] Hill Bruce. A Simple General Approach to Inference about the Tail of a Distribution // The Annals of Statistics. 1975. Vol. 3, 1163-1174.

[25] Huisman Ronald, Koedijk Kees, Kool Clemens, Palm Franz. Tail-Index Estimates in Small Samples. // Journal of Business and Economic Statistics. 2001. American Statistical Association, Vol.19, No1.

[26] Hull John, White Alan. The Impact of Default Risk on the Prices of Options and Other Derivative Securities // Journal of Banking and Finance. 1995. Vol 19, p.299-322.

[27] IAA. A Global Framework for Insurer Solvency Assessment. // International Actuarial Association. 2004.


[29] Jarrow Robert. Default Parameters Estimation Using Market Prices // Financial Analysts Journal. 2001. Vol 57.

[30] Jakob Kevin, Fischer Matthias. GCPM: A Flexible Package to Explore Credit Portfolio Risk // Austrian Journal of Statistics. 2016. Vol 45, p.25-44.

[31] Klugman Stuart, Panjer Harry, Willmot Gordon. Loss Models: From Data to Decisions. 2012. Wiley Series in Probability and Statistics, ISBN: 9780470391341.

[32] Lütkebohmert Eva. Concentration Risk in Credit Portfolios. 2009. Springer, ISBN: 978-3-540-70869-8.

[33] Matthys Gunter, Beirlant Jan. Estimating the Extreme Value Index and High Quantiles with Exponential Regression Models // Statistica Sinica. 2003. Vol. 13, 853-880.

[34] McNeil Alexander, Frey Rudiger, Embrechts Paul. Quantitative Risk Management. 2005. Princeton University Press, ISBN13: 978-0-691-12255-7.

[35] Merton Robert. On the Pricing of Corporate Debt: The Risk Structure of Interest Rates. // The Journal of Finance. 1974. Vol 29, 449-470.

[36] Minkah Richard, Amponsah Kwabena, DeWet Tertius. On Extreme Value Index Estimation under Random Censoring. // Research Gate. 2017.

[37] Osmundsen Kjartan. Using Expected Shortfall for Credit Risk Regulation. // EconPapers. 2017. Örebro University School of Business.

[38] Pickands James. Statistical inference using extreme order statistics. // The Annals of Statistics. 1975. Vol.3, 119-131.

[39] Promislow David. Fundamentals of Actuarial Mathematics. 2015. John Wiley & Sons Ltd, ISBN: 9781118782460.

[40] Reynkens Tom, Antonio Katrien, Gong Lang, Badescu Andrei. Fitting Mixtures of Erlangs to Censored and Truncated Data Using the EM Algorithm // Astin Bulletin. 2015. 729-758, vol45.

[41] Reynkens Tom, Beirlant Jan, Verbelen Roel, Antonio Katrien. Modelling Censored Losses Using Splicing: a Global Fit Strategy With Mixed Erlang and Extreme Value Distributions // Insurance: Mathematics and Economics. 2017. Vol 77, 65-77.

[42] Reynkens Tom, Verbelen Roel, Beirlant Jan, Antonio Katrien. Modelling Censored Losses Using Splicing: a Global Fit Strategy With Mixed Erlang and Extreme Value Distributions // Insurance: Mathematics and Economics. 2017. p.65-77.

[43] SOA. Risk Measurement - Is VaR the right measure? // Society of Actuaries. 2011.

[44] Sironi Andrea, Resti Andrea. Risk Management and Shareholder's Value: From Risk Measurement Models to Capital Allocation Policies. 2007. John Wiley and Sons Ltd, ISBN: 978-0-470-82521-1.


[46] Rachev Svetlozar, Stoyanov Stoyan, Fabozzi Frank. Advanced Stochastic Models, Risk Assessment, and Portfolio Optimization: The Ideal Risk, Uncertainty, and Performance Measures. 2008. John Wiley, ISBN: 978-0-470-05316-4.


Appendix A

Appendix

A.1 Life Insurance

The actuarial present value of one unit of a whole life insurance issued to an individual (x) is symbolized in actuarial notation as $A_x$. Some other important features, definitions and symbols are presented below, taken from Bowers et al. (1997) [8].

• $T = T(G,x)$ is the future life span variable, the time elapsed between a person's age $x$ and his/her age when the benefit is paid.

• $G$ is the age at death, a random variable denoting the age at which the individual (x) dies.

• $V = u^T = e^{-\delta T}$ is the present value random variable of a whole life insurance that pays €1 at time $T$, where $\delta$ denotes the force of interest.

• The Actuarial Present Value (APV) of the benefit is calculated as the expected value of $V$, that is $A_x = E[V]$.

Assuming that the benefit is paid at the end of the year of death, then $T(G,x) = \lceil G - x \rceil$ represents the number of years, rounded upwards, that a person of age $x$ lived beyond that age; if the benefit is paid at the moment of death, then $T(G,x) = G - x$. The APV of a whole life insurance policy is given by
$$A_x = E[V] = \int_0^{\infty} u^t f_T(t)\,dt = \int_0^{\infty} u^t\,{}_tp_x\,\mu_{x+t}\,dt$$

in which $f_T$ is the probability density function of $T$, ${}_tp_x$ is the probability that a policyholder aged $x$ survives for $t$ more years, and $\mu_{x+t}$ is the mortality rate (force of mortality) at age $x+t$ for a person currently aged $x$. Besides the above general formula, the APV of an $n$-year term insurance with a benefit payable at the moment of death is obtained by simply changing the integration interval from $0$ up to $n$ years. For the case of an $n$-year endowment insurance the APV can be calculated as
$$\bar{A}_{x:\overline{n}|} = \int_0^{n} u^t\,{}_tp_x\,\mu_{x+t}\,dt + u^n\,{}_np_x$$
The relevant information needed for the above calculations can be found in actuarial and life tables.
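As a quick numerical illustration of the integrals above, the sketch below evaluates the whole life, term and endowment APVs under an assumed constant force of mortality and constant force of interest (parameters chosen only for illustration) and compares the whole life value with its closed form $\mu/(\mu+\delta)$.

```python
import numpy as np
from scipy.integrate import quad

# Assumed constant force of mortality and force of interest (illustrative only).
mu, delta = 0.02, 0.04
n = 20  # term in years

tpx = lambda t: np.exp(-mu * t)          # survival probability under constant force
v   = lambda t: np.exp(-delta * t)       # discount factor u^t = e^{-delta t}

whole_life, _ = quad(lambda t: v(t) * tpx(t) * mu, 0, np.inf)
term, _       = quad(lambda t: v(t) * tpx(t) * mu, 0, n)
endowment     = term + v(n) * tpx(n)     # pure endowment part added

print(f"whole life A_x   : {whole_life:.4f}  (closed form {mu / (mu + delta):.4f})")
print(f"{n}-year term      : {term:.4f}")
print(f"{n}-year endowment : {endowment:.4f}")
```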

For a decreasing annuity of one unit, the present value can be calculated as follows. Let $a_{\overline{n}|}$ be the present value of a simple (level) annuity, which can be computed as
$$a_{\overline{n}|} = \frac{1 - (1+i)^{-n}}{i}$$
where $i$ denotes the interest rate. The above formula is the result of the sum of a geometric progression, and by using it the present value of an $n$-year decreasing annuity can be calculated as
$$(Da)_{\overline{n}|} = \frac{n - a_{\overline{n}|}}{i}$$
in which $P = n$ is the number of periods and $D = -1$ is the change in the payment per period. In general, $P$ symbolizes the level of the annuity payments.

Figure A.1: Decreasing annuity for P=n and D = −1
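The identity above can be checked numerically: the snippet below compares the direct sum $\sum_{t=1}^{n}(n-t+1)v^t$ with $(n - a_{\overline{n}|})/i$ for an arbitrarily assumed interest rate and term.

```python
# Check of the decreasing-annuity identity (Da)_n = (n - a_n) / i for assumed i, n.
i, n = 0.03, 10
v = 1 / (1 + i)

a_n = (1 - v**n) / i                                   # level annuity-immediate
Da_direct = sum((n - t + 1) * v**t for t in range(1, n + 1))
Da_formula = (n - a_n) / i

print(f"a_n = {a_n:.6f}")
print(f"(Da)_n direct sum: {Da_direct:.6f}, via formula: {Da_formula:.6f}")
```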

All of the aforementioned formulas are standard and can be found, under different notations, in many textbooks. A recent publication covering the principles of actuarial mathematics, with guidelines and examples, is that of David Promislow (2015) [39].

For a joint status insurance policy of a unit amount payable at the moment of failure, the present value and the APV of the policy are computed as ([8], chapter 9)
$$Z = u^T, \qquad \bar{A}_u = \int_0^{\infty} u^t\,{}_tp_u\,\mu_u(t)\,dt$$

and for the last survivor of (x) and (y) the APV would be calculated as
$$\bar{A}_{\overline{xy}} = \int_0^{\infty} u^t\,{}_tp_{\overline{xy}}\,\mu_{\overline{xy}}(t)\,dt$$
and after further manipulation the above formula's final state is
$$\bar{A}_{\overline{xy}} = \int_0^{\infty} u^t\left[\,{}_tp_x\,\mu(x+t) + {}_tp_y\,\mu(y+t) - {}_tp_{xy}\,\mu_{xy}(t)\,\right]dt$$
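A small numerical check of this decomposition, assuming independent lives with constant forces of mortality (all parameters illustrative), is sketched below; the integral is compared with the identity $\bar{A}_{\overline{xy}} = \bar{A}_x + \bar{A}_y - \bar{A}_{xy}$ using the constant-force closed forms.

```python
import numpy as np
from scipy.integrate import quad

# Assumed independent lives with constant forces (illustrative parameters).
mu_x, mu_y, delta = 0.02, 0.03, 0.04
v   = lambda t: np.exp(-delta * t)
px  = lambda t: np.exp(-mu_x * t)
py  = lambda t: np.exp(-mu_y * t)
pxy = lambda t: px(t) * py(t)            # joint-life survival under independence

# Last-survivor APV via the decomposition in the formula above.
integrand = lambda t: v(t) * (px(t) * mu_x + py(t) * mu_y - pxy(t) * (mu_x + mu_y))
A_last, _ = quad(integrand, 0, np.inf)

# Cross-check: A_x + A_y - A_xy with constant-force closed forms.
A_x  = mu_x / (mu_x + delta)
A_y  = mu_y / (mu_y + delta)
A_xy = (mu_x + mu_y) / (mu_x + mu_y + delta)
print(f"integral: {A_last:.4f}, closed-form check: {A_x + A_y - A_xy:.4f}")
```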

For an annuity payable continuously at the rate of 1 unit per annum as long as at least one of (x) and (y) survives, the APV, with $Y = \bar{a}_{\overline{T}|}$ and $W = \bar{a}_{\overline{t}|}$, is calculated as
$$\bar{a}_{\overline{xy}} = \int_0^{\infty} W\left[\,{}_tp_x\,\mu(x+t) + {}_tp_y\,\mu(y+t) - {}_tp_{xy}\,\mu_{xy}(t)\,\right]dt$$

The APV of a life annuity of one unit payable in monthly installments at the beginning of each month while (x) survives is denoted $\ddot{a}_x^{(m)}$. The present value of this annuity is $Y$, and it is a function of the interest rate and the variables $K$ and $J$, where $K$ refers to the number of complete years lived and $J = \lfloor (T - K)m \rfloor$ (the greatest integer function), so that $J$ is the number of complete months of a year lived in the year of death. For this annuity type there are $m$ monthly payments in each of the $K$ complete years and $J + 1$ payments of $1/m$ in the year of death. The present value of this life annuity is calculated as
$$Y = \sum_{j=0}^{mK+J} \frac{1}{m}\,u^{j/m} = \frac{1 - u^{K+(J+1)/m}}{d^{(m)}}$$

and after additional calculations using the present value, the APV is derived as
$$E[Y] = \ddot{a}_x^{(m)} = \frac{1 - A_x^{(m)}}{d^{(m)}}$$

where $d^{(m)}$ is the nominal rate of discount convertible $m$ times per year. As a general principle, for all the $n$-year annuities the calculations differ only in that the upper integration limit changes from infinity to $n$. The above formulas are normally used together with life tables. In this master thesis actuarial and life tables were used in the valuation of the liabilities' portfolios, and hence an example of a life table is illustrated below; it starts from 0 days.

Figure A.2: Life Table for the US population 1979-81. Source: Actuarial Mathematics, Bowers et al. [8].
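To show how such a table feeds the formulas above, the sketch below computes a curtate whole life APV and the corresponding annuity-due from a small made-up set of mortality rates (not the 1979-81 US table of Figure A.2), using the $m = 1$ case of the relation $\ddot{a}_x = (1 - A_x)/d$.

```python
import numpy as np

# Toy mortality rates q_x with certain death at the end of the table
# (illustrative only, not the table shown in Figure A.2).
q = np.array([0.01, 0.012, 0.014, 0.017, 1.0])
i = 0.03
v, d = 1 / (1 + i), i / (1 + i)

# k-year survival probabilities kp_x and the curtate whole-life APV
# A_x = sum_k v^(k+1) * kp_x * q_{x+k}.
kpx = np.concatenate(([1.0], np.cumprod(1 - q[:-1])))
A_x = np.sum(v ** (np.arange(len(q)) + 1) * kpx * q)

# Annuity-due via the standard relation (m = 1 case of the formula above).
a_due = (1 - A_x) / d
print(f"A_x = {A_x:.4f}, annuity-due = {a_due:.4f}")
```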
