Capital requirements; a new methodology based on the expected shortfall

Master Thesis Econometrics, Operations Research and Actuarial Studies

Author: Boris van Grevenhof

Abstract

After almost 20 years of using Value-at-Risk (VaR) measures with a 99% confidence level, regulators have decided it is time to change the way capital requirements are calculated for market risk. Basel III is scheduled to be introduced on 31 March 2019. The coherent risk measure Expected Shortfall (ES) will replace the current VaR based risk measures, and the confidence level is likely to change from 99% to 97.5%. This paper investigates for which confidence levels α the VaR_α produces the same capital requirements as a risk measure based on the ES_β with confidence level β. Besides that, we illustrate the changes in capital requirements caused by switching from a VaR_0.99 based risk measure to a risk measure based on the ES_0.975. A distinction is made between three financial institutions, to see the effect of holding different portfolios. We find that for all institutions the implementation of an ES_0.975 based risk measure will cause an increase in capital requirements.


Contents

1 Introduction
  1.1 Pension funds
  1.2 Banks
  1.3 Insurance companies
2 Data collection
  2.1 Univariate data analysis
3 Univariate econometric models
  3.1 Univariate distributions
  3.2 The risk measures VaR and ES
4 Econometric models for the VaR and ES and the methodology of a copula
  4.1 Historical Simulation
  4.2 Variance-Covariance Method
  4.3 Monte Carlo
  4.4 Copula
    4.4.1 Sklar's theorem
    4.4.2 Gaussian copula
    4.4.3 t copula
  4.5 Gumbel copula
5 Results
  5.1 Univariate results
  5.2 Marginal distributions
  5.3 Copula estimates
  5.4 Random sampling from the different copulas
  5.5 Historical simulation results
  5.6 Variance-covariance results
  5.7 Monte Carlo simulation results
  5.8 Capital requirements
6 Conclusion
  6.1 Recommendations/Limitations
7 Appendix
  7.1 Generalized hyperbolic distribution
  7.2 Invariance proposition
  7.3 Delta method
  7.4 VaR=ES for the Gaussian distribution
  7.5 Graphs and Figures

1 Introduction

During the last decades of the 20th century, the financial world experienced a couple of remarkable financial crises. In the 1970s, many banks in developed countries provided loans to developing countries, during the boom in lending to developing countries. As a result, a number of banks became technically insolvent when the debt crisis occurred in 1982, as stated by Barth, Caprio, and Levine (2008). Similar problems occurred during the Asian crisis of 1997 and the Russian financial crisis of 1998, also known as the Ruble crisis, during which some banks took large unhedged positions in response to market movements. The most recent world crisis is the credit crisis of the summer of 2007, which started in the subprime mortgage market in the US. This crisis developed into an international banking crisis, which reached a peak in autumn 2008 with the collapse of Lehman Brothers in September of that year. After analysing these crises, one can conclude that proper risk management in the financial sector is very important.

To elaborate on risk management and the requirement to adhere to certain rules for risk management, this paper will focus on three large players in the financial sector, namely banks, insurance companies and pension funds. For example, Dutch pension funds need to calculate their financial position on the basis of rules from the Financieel Toetsingskader (FTK), which is part of the Dutch pension law (Pw). For banks, a different set of rules is applicable. Banks are obliged to follow the rules drawn up in Basel II, which is scheduled to be replaced by Basel III in 2019. The rules in Basel are meant to ensure that banks reserve enough capital to cover the risks they are exposed to, to remain solvent and to promote economic stability. Basel contains a set of capital requirements. Nowadays, these capital requirements are mostly based on a so-called Value-at-Risk (VaR) method. In the new regulation (Basel III), the Value-at-Risk approach is likely to be replaced by a similar method called the Expected Shortfall (ES). Besides the new methodology, the confidence level is likely to change from 99% to 97.5%. In 2002, a paper by Acerbi and Tasche (2002) already explored the use of an ES based risk measure. They stated that the ES is more complete: it produces a unique global assessment for portfolios exposed to different sources of risk. They also stated that the ES is (even more than VaR) a simple concept, since it is the answer to a natural and legitimate question on the risks run by a portfolio. The paper by Yamai and Yoshiba (2005) compares the VaR with the ES; they emphasize the problem of tail risk, whereby VaR disregards losses beyond the VaR level. This problem can cause serious real-world problems, since information provided by VaR may mislead investors.

In this paper, the following research question will be answered: "Which confidence level β for ES_β produces the same capital requirement as α for the VaR_α?". This research question will be answered for different univariate models, and for a portfolio consisting of stocks, bonds and real estate. Secondly, the question of what the effect is of switching from a VaR_0.99 to an ES_0.975 for a particular portfolio will be answered. To answer this question, first a comparison is made to find out the effect of using different confidence levels α for the VaR_α, comparing them to the confidence levels β of the ES_β. The comparison is made with univariate models, and figures are used to present the results. These figures show which ES_β produces the same value as the VaR_α, so for which confidence levels the methods are similar. Secondly, after the univariate models we follow with a simple historical simulation, the variance-covariance method and a Monte Carlo simulation where copulas are used to capture the dependence between different asset classes. As mentioned before, three asset classes will be considered in the comparison, namely stocks, bonds and real estate. These asset classes will be mimicked by indexes. Finally, a comparison is made for the various financial institutions. Before this comparison, an introduction to the three financial institutions (pension funds, banks and insurance companies) will be given. The remainder of this paper is structured as follows. Section 2 starts with a univariate data analysis. Sections 3 and 4 discuss the different econometric models and copulas used in this paper. Section 5 presents the estimation results of the different approaches. Section 6 critically discusses the research and presents a final conclusion.

1.1 Pension funds

The pension funds are the first to be explored further. When comparing the ratio of pension assets to gross domestic product, the Dutch pension funds are the world leader (168%), followed by Australia (126%) and Switzerland (123%). This is shown in the Global Pension Assets Study 2017, an annual study by Willis Towers Watson that compares the 22 most important pension markets in the world. The 168% does not say anything about the financial situation of a particular pension fund, as it does not include the value of pension obligations. To get an indication of the financial situation of a particular pension fund, the coverage ratio is most useful. This ratio can be used as a (global) indicator of the capital position of a pension fund. The coverage ratio is a simple ratio where the current value of the assets is divided by the present value of the pension obligations/liabilities:

$$\text{Coverage ratio} = \frac{\text{Current value of assets}}{\text{Present value of pension obligations}}.$$

To determine the present value of pension obligations, two factors play an important role, namely the survival probability of people and the yield curve. The current interest rates are historically low, which makes future obligations more expensive. Relatively small fluctuations in the interest rates can cause the present value of the obligations to fluctuate enormously, which would result in an unstable coverage ratio. This situation is undesirable for pension funds. Since the first of January 2015, Dutch pension funds are required to use the policy coverage ratio (in Dutch: Beleidsdekkingsgraad) for policy decisions. The policy coverage ratio uses smoothed interest rates over recent years, making it more stable than the coverage ratio.

Every year, pension funds are required to determine their own required funds (VEV, in Dutch: Vereist Eigen Vermogen). The VEV can be seen as the minimum level of equity that a pension fund should have. If a fund does not have this level of equity, there is a deficit situation, and the fund is then required to submit a recovery plan to the Dutch Central Bank (DNB). The VEV is aimed at the statutory security level of 97.5%. That is, in the equilibrium situation, there is a 2.5% probability that a pension fund will have a policy coverage ratio of less than 100% over a period of one year. The level of required equity depends on the risk profile of the fund, which follows from the fund's strategic investment policy. So the VEV can be seen as a 97.5% VaR. There is a standard model to calculate the VEV. Each year, a fund must assess whether the standard model fits adequately with the risk profile of the fund. In case of material deviations, the fund must contact DNB. In case of deviations in the risk profile, adaptations of the standard model are needed to ensure the model fits the situation at hand. Although an exactly matching risk profile seems unlikely, and thus adaptation of the model seems likely, most pension funds (around 90%, as stated by employees of Willis Towers Watson) use either the standard model or the standard model with a small modification.

The standard model makes a distinction between ten risk categories, namely Interest risk (S_1), Equity and real estate risk (S_2), Currency risk (S_3), Raw material risk (S_4), Credit risk (S_5), Insurance risk (S_6), Liquidity risk (S_7), Concentration risk (S_8), Operational risk (S_9) and Active management risk (S_10). The definitions of these specific risk categories can be found on the website of the DNB.

The standard formula to calculate the VEV is given by the following square-root function:

$$\text{VEV} = 100\% + \sqrt{S_1^2 + S_2^2 + \rho_{1,2} S_1 S_2 + S_3^2 + S_4^2 + S_5^2 + \rho_{1,5} S_1 S_5 + \rho_{2,5} S_2 S_5 + S_6^2 + S_7^2 + S_8^2 + S_9^2 + S_{10}^2},$$

where ρ_{1,2} = 0.40, ρ_{1,5} = 0.40 if a decline in the interest rate is assumed for S_1 and ρ_{1,5} = 0 if S_1 is based on an increase in the interest rate, and ρ_{2,5} = 0.50. Pension funds that take more risk will have a higher VEV. On the other hand, diversification in a portfolio will lead to a lower VEV. Actuarially, the VEV is expected to vary between 105% and 135% for pension funds. So the important point for this type of financial institution is that pension funds use a risk measure based on a Value-at-Risk approach with the confidence level at 97.5%.
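To make the formula concrete, here is a minimal sketch in Python; the risk charges S below are hypothetical and only illustrate the mechanics of the square-root function:

```python
import math

def vev(S, rho12=0.40, rho15=0.40, rho25=0.50):
    """Standard-model VEV from the ten risk charges S[0]..S[9] (= S_1..S_10),
    in percentage points; the correlation defaults follow the text above."""
    radicand = (S[0]**2 + S[1]**2 + rho12 * S[0] * S[1]
                + S[2]**2 + S[3]**2 + S[4]**2
                + rho15 * S[0] * S[4] + rho25 * S[1] * S[4]
                + S[5]**2 + S[6]**2 + S[7]**2 + S[8]**2 + S[9]**2)
    return 100.0 + math.sqrt(radicand)

# Hypothetical risk charges, for illustration only.
S = [8.0, 15.0, 3.0, 2.0, 4.0, 1.0, 0.0, 0.0, 2.0, 1.0]
print(f"VEV = {vev(S):.1f}%")  # ~120%, inside the 105%-135% range mentioned above
```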

1.2 Banks

Prior to 1988, there was no global regulation for banks; within each country, bank regulators applied country-specific rules. In most countries, minimum levels for the ratio of capital to total assets were defined. In 1988 global banking regulation changed with the implementation of an agreement known as the Basel Accord (Basel I). Basel I aims to improve the stability of the financial sector by improving supervision. The main target of this regulation was to make sure that banks keep enough capital for the risks they take. Of course, Basel I does not completely remove the possibility of a bank failure, but governments aim to make the likelihood of default smaller. So Basel I was implemented to enlarge the stability of the financial sector. In the current financial system, there are many interlinkages and interdependencies between banks. For this reason, the failure of a large bank can start a chain reaction affecting other banks as well. This risk is known as systemic risk and is a major concern of governments. Difficult situations can arise when a bank or other financial institution does get into problems. If the government does not intervene when a large bank is failing, more banks could fail and the larger financial system could be at risk. On the other hand, if governments do save all banks from failure, they implicitly send a strong message to the financial market, a message known as "too big to fail" (Sorkin, 2011). In the crisis of 2008 many financial institutions were bailed out, but in September 2008 Lehman Brothers (back then the fourth-largest investment bank in the US) was allowed to fail. The impact of the Lehman Brothers failure was enormous, and the decision to allow it has been criticized for making the credit crisis even worse.

The implementation of the rules of Basel II began in the years prior to 2008. Basel II had only been implemented in the major economies when the financial crisis intervened, before it could become fully effective. These rules of Basel II apply to all banks in Europe and, in the US, to internationally active banks. The level of particular risk-weighted assets increased, and Basel II is based on a three-pillar concept:

1. Minimum Capital Requirements
2. Supervisory Review Process
3. Market Discipline

This paper will focus on the first pillar, Minimum Capital Requirements, with a special focus on market risk. The capital requirements under Basel II are defined as

$$C_A = \max[\text{VaR}_{t-1},\; m_c \times \text{VaR}_{avg}] + SRC,$$

where C_A is the capital charge, VaR_{t-1} is yesterday's 10-day 99% VaR, VaR_{avg} is the average 10-day 99% VaR over the past 60 days and SRC is a specific risk charge. During the credit crisis, new insights arose with respect to market risk. A few changes were made, referred to as Basel II.5, one of which relates to the capital requirement for market risk: a stressed VaR was added to the calculations. The idea behind a stressed VaR is to get an idea of possible losses under terrible market conditions. The new formula for the total capital charge for market risk became

$$C_A = \max[\text{VaR}_{t-1},\; m_c \times \text{VaR}_{avg}] + \max[\text{sVaR}_{t-1},\; m_s \times \text{sVaR}_{avg}],$$

where sVaR_{t-1} is yesterday's stressed 10-day 99% VaR, sVaR_{avg} is the average stressed 10-day 99% VaR over the past 60 days, and m_c and m_s are multiplication factors. These factors are at least equal to 3 and are determined by bank supervisors. It is expected that the capital charges in Basel III will be based on the expected shortfall method. Besides the new methods, banks will need to hold more capital based on new quality requirements; for example, banks will become subject to liquidity rules and a liquidity coverage ratio will be introduced. This paper, however, will only focus on the implementation of the expected shortfall method and compare it to the VaR methodology.

1.3 Insurance companies

The regulatory framework for insurance companies is known as Solvency II. This is a fairly new risk-based insurance framework, which entered into force on January 1, 2016. The framework consists of the Solvency II Directive (2009/138/EC) and its further amendments in the form of the Delegated Regulation, Technical Standards and Guidelines. The Delegated Regulation contains important information and requirements for, among other things, the determination of the balance sheet, equity, capital requirements, internal business management, internal models, reporting and group supervision under Solvency II. In section 1.1 the VEV was mentioned, which is a measure for the required own funds of a pension fund in the Netherlands. The solvency capital requirements (SCR) are similar to those for pension funds. The SCR is the amount of capital an insurance company should have to ensure that the probability of insolvency over a one-year period is less than or equal to 0.5%. The standard formula for the market-risk capital requirement for insurance companies is defined as

$$\text{SCR}_{market} = \sqrt{\sum_{i,j} \text{Corr}_{i,j} \times \text{SCR}_i \times \text{SCR}_j}.$$

There are some differences between the SCR for pension funds and insurance companies. For example, Solvency II stops after calculating the SCR, whereas Dutch pension funds iterate the SCR until they find an equilibrium value of the VEV. Besides this, insurance companies use more sources of risk in general, since an insurance company generally has broader activities. S_7, S_8 and S_9 are not required for pension funds; the risks associated with these categories are covered by the prudent person rule (Section 135 of the Pw), which ensures that the asset allocation of a pension fund links to the risk attitude of the participants of the fund. The Corr_{i,j} covers more dependencies between different risk factors, whereas the VEV only takes a few correlations into consideration.

2 Data collection

Unfortunately, data corresponding to the financial institutions mentioned above are not freely available. A possible solution to this problem is to replicate the assets of a financial institution. The idea is to assume a general investor who holds a portfolio consisting of different asset classes. The considered asset classes are stocks, bonds and real estate. Indexes are used to replicate the historical movements of these asset classes. The data used in this paper is retrieved from Bloomberg. For stocks we use the Standard & Poor's 500, for bonds the J.P. Morgan Hedged USD GBI Global index, and for real estate the EPRA/NAREIT Developed Europe Index. Historical data from the 31st of March 1993 until the 27th of September 2017 is collected.

The empirical data is similar to the data used in the paper by Kole, Koedijk, and Verbeek (2007). The historical data contains the daily prices of the different indexes. During weekends and holidays there is no trading. The dates for which at least one index did not have a value are eliminated from the observations, which reduced the sample to 6,169 observations. The elimination of these observations has little to no effect on the estimation of the marginal return distributions and, subsequently, the estimation of the particular copulas. The movements of the different indexes over time are shown in Figure 1.

Figure 1: Time series of the different indexes.

Figure 1 shows the big bubble occurring before the financial crisis of 2008. This bubble is known as the United States housing bubble and can be seen as one of the main drivers of the financial crisis of 2008. It is interesting to compare the level of fluctuation of the different indexes. To get more insight into the data, the daily logarithmic losses are calculated and the main summary statistics are placed in Table 1, where all values are stated in percentages, except the skewness and the kurtosis. It is important to remember that this study uses daily losses, which are calculated by the following formula:

$$L_{t,i} = -\ln\left(\frac{S_{t,i}}{S_{t-1,i}}\right), \quad i \in \{s, b, r\},$$

where t is the time indicator and s, b, r stand for the asset classes stocks, bonds and real estate. So L_{t,i} indicates the daily loss at time t for asset class i.
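A minimal sketch of this loss calculation (the price paths below are hypothetical):

```python
import numpy as np

def log_losses(prices):
    """Daily logarithmic losses L_{t,i} = -ln(S_{t,i} / S_{t-1,i}) for a
    (T x 3) array of index prices, columns ordered as stocks, bonds, real estate."""
    return -np.diff(np.log(np.asarray(prices, dtype=float)), axis=0)

prices = np.array([[100.0, 50.0, 200.0],
                   [ 99.0, 50.1, 198.0],
                   [101.0, 50.0, 199.5]])
L = log_losses(prices)  # row t holds the day-t losses for i in {s, b, r}
print(L.round(4))       # a positive entry is a loss, a negative one a gain
```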

Table 1: Summary statistics of the daily losses

              Stocks    Bonds    Real estate
Mean         -0.0278   -0.0217     -0.0191
Median       -0.0545   -0.0243     -0.0576
Volatility      1.14      0.18        1.03
Skewness        0.26      0.23        0.60
Kurtosis       11.86      4.52       12.09
Minimum      -10.96     -0.97       -7.14
Maximum         9.47      0.95        9.38
α_90            1.20      0.194       0.98
α_95            1.78      0.284       1.50
α_97.5          2.37      0.355       2.19
α_99            3.16      0.460       3.25

The quantiles in Table 1 show that the stock and real estate markets are heavy tailed, since we see, for example, in 10% of the daily returns a loss of more than 1% for the stock and real estate markets.

Having the historical data for stocks, bonds and real estate does not directly give us historical data for a particular financial institution. This paper assumes the asset allocations per financial institution shown in Table 2.

Table 2: Asset allocation per financial institution

                     Stocks   Bonds   Real estate
Bank                   60%     25%        15%
Insurance company      35%     35%        30%
Pension fund           32%     56%        12%

Define W_B, W_I and W_P as three-dimensional vectors containing the asset allocation weights for stocks, bonds and real estate for the corresponding financial institution; e.g. W_P = (32%, 56%, 12%) is the assumed asset allocation for a pension fund. In reality these weights can differ significantly per pension fund. The weights we picked are based on the average asset allocation of Dutch pension funds in 2016. Dutch banks and insurance companies are less transparent about their asset allocation, which is why we assume a fictitious asset allocation for these institutions: a risky bank and a diversified insurance company. In reality these weights can differ and there will be a broader variety of asset classes, but that is outside the scope of this paper.
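A small sketch of how these weight vectors turn asset-class losses into institution-level portfolio losses (the loss matrix below is hypothetical):

```python
import numpy as np

# Asset allocation weights from Table 2: (stocks, bonds, real estate).
W = {"bank":      np.array([0.60, 0.25, 0.15]),
     "insurance": np.array([0.35, 0.35, 0.30]),
     "pension":   np.array([0.32, 0.56, 0.12])}

def portfolio_losses(L, w):
    """Daily portfolio loss as the weighted sum of asset-class losses,
    under the constant-asset-allocation assumption made in this paper."""
    return L @ w

L = np.array([[ 0.010, -0.002, 0.005],   # hypothetical daily losses
              [-0.004,  0.001, 0.012]])
for name, w in W.items():
    print(name, portfolio_losses(L, w).round(5))
```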

2.1 Univariate data analysis


We analyse and compare the goodness of fit of the different univariate distributions based on a few criteria. The first criterion used is the modified Kolmogorov-Smirnov (KS) test statistic, a nonparametric test. The KS test quantifies a distance between the empirical distribution function and a given continuous hypothesised distribution function (the null distribution) to test whether the data was sampled from the hypothesised distribution. We use an averaged version of the standard KS test, suggested in the paper by Kole et al. (2007). The second criterion is the average Anderson-Darling (AD) test, also suggested by Kole et al. (2007). The AD test is similar to the KS test, but it gives more weight to deviations in the tails, whereas the KS test is more sensitive to deviations in the center of the distribution. Moreover, the AD test is considered more powerful than the Kolmogorov-Smirnov test when testing for normality, as stated by Razali and Wah (2011). The last criterion to compare the goodness of fit of the different candidate distributions is the log-likelihood value, which can be used for a likelihood ratio test. This is a statistical test for comparing the goodness of fit of two statistical models, also known as the test of over-identifying restrictions (Hayashi, 2000). This test will be used to test a full model against an alternative model. In mathematical terms the test statistics are defined as follows:

$$\text{KS} = \frac{1}{n} \sum_{t=1}^{n} \left| F_E(x_t) - F_H(x_t) \right|,$$

$$\text{AD} = \frac{1}{n} \sum_{t=1}^{n} \frac{\left| F_E(x_t) - F_H(x_t) \right|}{\sqrt{F_H(x_t)\,(1 - F_H(x_t))}},$$

$$D = -2 \ln\left(\frac{\text{Likelihood for null model}}{\text{Likelihood for alternative model}}\right),$$

where KS is the average Kolmogorov-Smirnov test statistic, with F_E the empirical distribution function, F_H the hypothesised distribution and x_t the logarithmic loss at day t; AD is the average Anderson-Darling test statistic; and D is the likelihood ratio statistic.
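A minimal sketch of the two averaged statistics, assuming numpy and scipy are available; the sample and the fitted null distribution below are only illustrative:

```python
import numpy as np
from scipy import stats

def avg_ks_ad(x, dist):
    """Average KS and AD statistics of sample x against a fitted
    scipy.stats frozen distribution, following the formulas above."""
    x = np.sort(np.asarray(x))
    n = len(x)
    F_E = np.arange(1, n + 1) / n                   # empirical CDF at order stats
    F_H = np.clip(dist.cdf(x), 1e-12, 1 - 1e-12)    # guard against tail underflow
    diff = np.abs(F_E - F_H)
    return diff.mean(), (diff / np.sqrt(F_H * (1 - F_H))).mean()

# Illustration: a fat-tailed t sample tested against a fitted normal.
rng = np.random.default_rng(42)
x = rng.standard_t(df=3, size=5000)
ks, ad = avg_ks_ad(x, stats.norm(loc=x.mean(), scale=x.std()))
print(f"avg KS = {ks:.4f}, avg AD = {ad:.4f}")  # AD inflated by the heavy tails
```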

3 Univariate econometric models

3.1 Univariate distributions

This section introduces the univariate distributions which are estimated to see which one is most appropriate for each asset class, starting with the Gaussian distribution.

Gaussian distribution

The Gaussian (or normal) distribution is a common continuous probability distribution. The univariate density is defined as:

$$f(x; \mu, \sigma) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}},$$

where µ is the mean or expectation of the distribution and σ is the standard deviation.

Student t distribution

The second distribution has a similar shape to the normal distribution. The standard Student's t-distribution has the probability density function given by:

The second distribution has shape as the normal distribution. The standard Student’s t-distribution has the probability density function given by:

f (y; ν) = Γ( ν+1 2 ) √ νπΓ(ν2)  1 +y 2 ν −ν+12 ,

where ν is the parameter for the degrees of freedom and Γ is the well-known gamma function3. This distribution can be generalized to a three parameter location-scale family. This can done by introducing a scale parameter σ and a location parameter µ. Using the following relation

$$Y = \frac{X - \mu}{\sigma},$$

where Y has a standard Student's t distribution with ν degrees of freedom. X then follows a scaled Student's t-distribution, which has the following density function:

$$f(x; \nu, \mu, \sigma) = \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\!\left(\frac{\nu}{2}\right)\sigma} \left(1 + \frac{1}{\nu}\left(\frac{x-\mu}{\sigma}\right)^2\right)^{-\frac{\nu+1}{2}},$$

from which it can be seen that the density is now parametrized by three parameters: ν is the degrees of freedom, µ corresponds to the mean of the distribution and σ is the scaling parameter. Note that the variance of this distribution is $\sigma^2 \frac{\nu}{\nu-2}$ for ν > 2, and not σ².

Generalized hyperbolic distribution

The last candidate distribution is the generalized hyperbolic distribution (GHD). This distribution is the most flexible, since it has more parameters, which also makes it somewhat harder to interpret. The density function can be found in Appendix 7.1, and for more information about this distribution one can take a look at section 3.2.2 of the book by McNeil, Frey, and Embrechts (2005), where the distribution is explained in detail. The estimation results of these marginal distributions will be presented in section 5; the next subsection introduces the risk measures VaR and ES.


3.2 The risk measures VaR and ES

Both risk measures, VaR and ES, can be calculated in different ways, for example by historical simulation, the variance-covariance method or the Monte Carlo method. Before explaining the VaR and ES in detail, we start with the definition of a coherent risk measure. For a risk measure to be coherent, it has to satisfy four general properties stated by Artzner, Delbaen, Eber, and Heath (1999), who explain that a coherent risk measure is a function ϱ satisfying the following properties.

Assume that M is a convex cone, i.e. that L_1 ∈ M and L_2 ∈ M imply that L_1 + L_2 ∈ M and λL_1 ∈ M for every λ > 0. Here we interpret ϱ(L) as the amount of capital that should be added to a position with loss given by L so that the position becomes acceptable to an external or internal risk controller, as cited by McNeil et al. (2005). Now we can introduce the axioms that a risk measure ϱ : M → ℝ on a convex cone M should satisfy to be called coherent.

1. Monotonicity: for L_1, L_2 ∈ M such that L_1 ≤ L_2 almost surely, we have ϱ(L_1) ≤ ϱ(L_2).

2. Sub-additivity: for all L_1, L_2 ∈ M we have ϱ(L_1 + L_2) ≤ ϱ(L_1) + ϱ(L_2).

3. Homogeneity: for all L ∈ M and every λ > 0 we have ϱ(λL) = λϱ(L).

4. Translational invariance: for all L ∈ M and every l ∈ ℝ we have ϱ(L + l) = ϱ(L) + l.

These properties will play an important role in the comparison of the VaR and the ES. The VaR of a portfolio at a given confidence level α ∈ (0, 1) is given by the smallest number l such that the probability that the loss L exceeds l is no larger than 1 − α. In mathematical terms it is defined as

$$\text{VaR}_\alpha = \inf\{l \in \mathbb{R} : P(L > l) \leq 1 - \alpha\} = \inf\{l \in \mathbb{R} : F_L(l) \geq \alpha\}. \tag{1}$$

From this definition one can see that the VaR is thus a quantile of the loss distribution. Continuing with the risk measure ES: the ES is related to the VaR and is the risk measure preferred by modern-day risk managers, since it gives an expectation of the loss. Furthermore, when looking at the properties of a coherent risk measure by Artzner et al. (1999), one can show that the VaR does not satisfy the property of sub-additivity; the VaR has therefore been criticized as a risk measure. The ES satisfies all four properties of a coherent risk measure. The mathematical definition of the ES is as follows. For a loss L with E(|L|) < ∞ and distribution function F_L, the expected shortfall at confidence level β ∈ (0, 1) is defined as:

$$\text{ES}_\beta = \frac{1}{1-\beta} \int_\beta^1 q_u(F_L)\, du, \tag{2}$$

where q_u(F_L) = F_L^←(u) is the quantile function of F_L. From this definition one can see that the ES can be interpreted as an average of the VaR_u over all levels u ≥ β.

These definitions can be found in the book by McNeil et al. (2005). By looking at the definitions (equations 1 and 2), it is clear that the assumption on F_L will play a crucial role. This will be emphasized in the following section, where the historical simulation, the variance-covariance and the Monte Carlo method are explained in detail.
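As a numeric check of equation (2), a minimal sketch (assuming scipy is available): for a standard normal loss, averaging the quantiles q_u over u ∈ (β, 1) reproduces the closed-form expected shortfall used later in section 4.2.

```python
import numpy as np
from scipy import stats

beta = 0.975
# Equation (2): ES_beta = (1/(1-beta)) * integral of q_u from beta to 1,
# approximated here by the mean quantile on a fine grid (u = 1 is dropped).
u = np.linspace(beta, 1.0, 200001)[:-1]
es_numeric = stats.norm.ppf(u).mean()
es_closed = stats.norm.pdf(stats.norm.ppf(beta)) / (1.0 - beta)
print(es_numeric, es_closed)  # both approximately 2.338
```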

4 Econometric models for the VaR and ES and the methodology of a copula

4.1 Historical Simulation

The first method discussed to estimate the VaR and ES is the historical simulation. This method is easy to implement and does not require parametric estimation of a loss distribution function F_L. The historical simulation works as follows. We estimate the distribution function of the daily losses by the empirical distribution function. The simulated losses will be indicated by L̃_s, s = 1, ..., t. This method implicitly assumes that the past is a good predictor of the future. It applies equal weights to all returns in the data set; one could argue that this is inconsistent with the diminishing predictability of data that are further away from the present. Nevertheless, this is irrelevant if we assume i.i.d. historical data. Having our simulated data L̃_s, we order them and denote the ordered values by L̃_{n,n} ≤ ... ≤ L̃_{1,n}. Now the VaR_α estimator is simply defined as L̃_{[n(1−α)],n}, where [n(1 − α)] denotes the largest integer not exceeding n(1 − α). So this is simply a quantile of the simulated distribution function. For example, for a data set with n = 10,000 and α = 0.99, the VaR_0.99 is estimated by the 100th largest loss. Calculating the ES_β based on historical simulation is similar to calculating the VaR_α: instead of taking the quantile, we average all values greater than the quantile. For example, for a data set with n = 10,000 and β = 0.99, the ES_0.99 is estimated by the average of the 100 largest losses. It is important
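A minimal sketch of both estimators (the loss sample below is simulated and purely illustrative):

```python
import numpy as np

def hs_var_es(losses, alpha=0.99, beta=0.975):
    """Historical-simulation VaR and ES as described above: VaR_alpha is the
    [n(1-alpha)]-th largest loss, ES_beta the average of the losses beyond
    the corresponding quantile."""
    L = np.sort(np.asarray(losses))[::-1]     # descending order statistics
    n = len(L)
    k_var = max(int(n * (1 - alpha)), 1)      # e.g. n=10,000, alpha=0.99 -> 100
    k_es = max(int(n * (1 - beta)), 1)
    return L[k_var - 1], L[:k_es].mean()

rng = np.random.default_rng(0)
losses = 0.01 * rng.standard_t(df=4, size=10_000)  # hypothetical daily losses
print(hs_var_es(losses))                           # (VaR_0.99, ES_0.975)
```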


4.2 Variance-Covariance Method

The second method discussed is the variance-covariance method. This method assumes that the daily losses L_t have a multivariate normal distribution, denoted by L_t ∼ N(µ, Σ). To calculate different risk measures using a variance-covariance approach, the first step is to estimate the parameters µ and Σ. This can be done easily, since we know that their maximum likelihood estimates are given by the sample means and the sample variance-covariance matrix of the daily losses. For the univariate case the VaR estimates are calculated as follows:

$$\text{VaR}_\alpha = \mu + \sigma\,\Phi^{-1}(\alpha),$$

where µ and σ are estimated by their sample counterparts and Φ^{-1} is the inverse of the normal distribution function. The expected shortfall for the variance-covariance method boils down to the following equation:

$$\text{ES}_\beta = \mu + \sigma\,\frac{\phi(\Phi^{-1}(\beta))}{1 - \beta}.$$

Instead of assuming a normal distribution for the losses, one might argue to use a Student t distribution. So assume that the daily losses L_t follow a Student t distribution, denoted by L_t ∼ t(ν, µ, σ²), with E[L_t] = µ and var(L_t) = σ²ν/(ν−2) when ν > 2. The implied VaR and ES when assuming a Student t distribution are similar to those for the normal distribution and are given by:

$$\text{VaR}_\alpha = \mu + \sigma\, t_\nu^{-1}(\alpha),$$

$$\text{ES}_\beta = \mu + \sigma\, \frac{g_\nu\!\left(t_\nu^{-1}(\beta)\right)}{1 - \beta} \left(\frac{\nu + \left(t_\nu^{-1}(\beta)\right)^2}{\nu - 1}\right),$$

where t_ν denotes the distribution function and g_ν the density of the standard t. These results can easily be verified by following the definitions of the VaR and ES in equations (1) and (2). The variance-covariance method is the basis for the current models for capital requirements in Basel, Solvency and the FTK. It is not clear why this is the case; perhaps because the variance-covariance method offers a simple analytical solution to the risk-measurement problem. However, the assumption of normality is unlikely to be realistic for the distribution of the daily losses. The last method considered to estimate the risk measures is explained in the following subsection.
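Both closed forms are easy to evaluate; a minimal sketch assuming scipy is available (the loss parameters below are hypothetical):

```python
from scipy import stats

def var_es_normal(mu, sigma, alpha=0.99, beta=0.975):
    """Closed-form VaR and ES under normal losses, per the formulas above."""
    var = mu + sigma * stats.norm.ppf(alpha)
    es = mu + sigma * stats.norm.pdf(stats.norm.ppf(beta)) / (1 - beta)
    return var, es

def var_es_t(mu, sigma, nu, alpha=0.99, beta=0.975):
    """Closed-form VaR and ES under scaled Student t losses, per the formulas above."""
    q = stats.t.ppf(beta, nu)
    var = mu + sigma * stats.t.ppf(alpha, nu)
    es = mu + sigma * stats.t.pdf(q, nu) / (1 - beta) * (nu + q**2) / (nu - 1)
    return var, es

print(var_es_normal(0.0, 1.0))   # VaR ~ 2.326, ES ~ 2.338: nearly equal
print(var_es_t(0.0, 1.0, nu=4))  # fatter tails push ES_0.975 above VaR_0.99
```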

4.3 Monte Carlo

The Monte Carlo method is based on simulating a large number of scenarios for the daily losses. These simulated losses will be indicated by L̃_s, s = 1, ..., t. Having these simulations, methods similar to those of the historical simulation can be applied to calculate risk measures such as the VaR and the ES. The number of replications can be chosen very large, which in general results in more accurate VaR and ES estimates than in the case of historical simulation. To find the best multivariate model, a two-step approach is used: first the appropriate univariate models are estimated, and thereafter a copula is estimated to capture the dependence structure. The concept of a copula is explained in the next section.

4.4 Copula

In this section the main idea behind copulas and the copulas used in this paper are explained. A copula can be seen as a multivariate probability distribution for which the marginal probability distribution of each variable is uniform. Copulas can be used to model and describe the dependence between random variables. The advantage of a copula is that it helps in understanding the dependence at a deeper level; it goes further than the standard correlation-based methods. Copulas measure the dependence on a quantile scale. This quantile scale can be very useful in a risk-management environment, since we saw that the VaR is defined as a quantile of the loss distribution. In probabilistic terms, a d-dimensional copula C : [0,1]^d → [0,1] is a distribution function on the unit cube [0,1]^d with standard uniform marginal distributions. So a copula C can be seen as a distribution function:

$$C(u_1, \ldots, u_d) = P(F_1(X_1) \leq u_1, \ldots, F_d(X_d) \leq u_d).$$

In analytic terms, the following properties must hold for a copula:

(i) C(u_1, ..., u_d) is increasing in each component u_i.

(ii) C(1, ..., 1, u_i, 1, ..., 1) = u_i for all i ∈ {1, ..., d}, u_i ∈ [0, 1].

(iii) For all (a_1, ..., a_d), (b_1, ..., b_d) ∈ [0, 1]^d with a_i ≤ b_i we have

$$\sum_{i_1=1}^{2} \cdots \sum_{i_d=1}^{2} (-1)^{i_1 + \cdots + i_d}\, C(u_{1 i_1}, \ldots, u_{d i_d}) \geq 0,$$

where u_{j1} = a_j and u_{j2} = b_j for all j ∈ {1, ..., d}.

4.4.1 Sklar’s theorem

Sklar's theorem is the most important theorem in the world of copulas; it is named after Abe Sklar. This theorem provides the foundation for the application of copulas and was obtained in 1959. The theorem states that every multivariate joint distribution F can be expressed in terms of its marginals F_1, ..., F_d and a copula C. Secondly, the theorem states that one can combine marginal distributions with a copula to construct a joint distribution. The version below is taken from McNeil et al. (2005) and is stated as follows:

Let F be a joint distribution function with margins F_1, ..., F_d. Then there exists a copula C : [0,1]^d → [0,1] such that, for all x_1, ..., x_d in $\bar{\mathbb{R}} = [-\infty, \infty]$,

$$F(x_1, \ldots, x_d) = C(F_1(x_1), \ldots, F_d(x_d)). \tag{3}$$

If the margins are continuous, then C is unique; otherwise C is uniquely determined on Ran F_1 × Ran F_2 × ... × Ran F_d, where Ran F_i = F_i($\bar{\mathbb{R}}$) denotes the range of F_i. Conversely, if C is a copula and F_1, ..., F_d are univariate distribution functions, then the function F defined in equation (3) is a joint distribution function with margins F_1, ..., F_d. Continuing from this fundamental definition, we turn to the implicit copulas, which are defined in the next sections. These implicit copulas are extracted from well-known multivariate distributions using Sklar's theorem and are used in this paper. The Gaussian and the t copula are explained next.

4.4.2 Gaussian copula

The Gaussian copula is constructed from a multivariate normal distribution. If Y ∼ N(µ, Σ) is a vector which follows a Gaussian distribution, then its copula is the Gaussian copula. Note that the copula of a distribution is invariant under strictly increasing transformations of the marginals; for more clarification one might take a look at the invariance proposition in Appendix 7.2. This property is very useful, since it implies that the copula of Y is equivalent to the copula of X ∼ N(0, P), where P is the correlation matrix of Y. Now we find the definition of the Gaussian copula, which is given by:

$$C^{Gauss}_P(u) = \Phi_P\!\left(\phi^{-1}(u_1), \ldots, \phi^{-1}(u_d)\right),$$

where φ denotes the standard normal distribution function and Φ_P is the joint distribution function of X. This copula is parametrized by the parameters of the correlation matrix P. It is interesting to see that two fundamental copulas are special cases of the Gaussian copula: the independence copula occurs when the correlation matrix P is equal to the identity matrix, and comonotonicity is obtained if the correlation matrix P consists entirely of ones. A disadvantage of the Gaussian copula is that it does not exhibit tail dependence, which is a measure of the strength of the dependence in the tails of a bivariate distribution. The next implicit copula discussed is the t copula.

4.4.3 t copula

In a similar way as the Gaussian copula, the t copula can be constructed. The d-dimensional t copula has the following form:

$$C^{t}_{\nu,P}(u) = t_{\nu,P}\!\left(t_\nu^{-1}(u_1), \ldots, t_\nu^{-1}(u_d)\right),$$

where t_ν is the distribution function of the standard t distribution, t_{ν,P} is the d-dimensional joint distribution function of the vector X ∼ t_d(ν, 0, P) and P is the correlation matrix. The Gaussian copula can be seen as a limiting case of the t copula as ν → ∞. In contrast with the Gaussian copula, the t copula exhibits lower and upper tail dependence. Another contrast with the Gaussian copula is that the independence copula is not obtained when P equals the identity matrix. As for the Gaussian copula, comonotonicity is obtained for the t copula when P consists entirely of ones. Both the Gaussian and the t copula are implied by known multivariate distribution functions and do not have simple closed forms; they can be expressed as an integral over the density. The next copula discussed is the Gumbel copula, which does have a simple closed form.

4.5 Gumbel copula

Before introducing the Gumbel copula, we start with a general two-dimensional Archimedean copula, which is defined as:

$$C(u_1, u_2) = \Psi^{-1}\!\left(\Psi(u_1) + \Psi(u_2)\right),$$

where the function Ψ is the so-called copula-generating function. Well-known copulas of this category are the Gumbel and the Clayton copula. The Clayton copula possesses lower tail dependence, whereas the Gumbel copula possesses upper tail dependence. The latter is of our main interest, as we consider high quantile values in the loss distribution. The generating function for the Gumbel copula is defined as Ψ(t) = (−ln(t))^θ, with the restriction θ ≥ 1 on the parameter. Combining this with the general formula of a two-dimensional Archimedean copula gives the Gumbel copula:

$$C^{GU}_\theta(u_1, u_2) = \exp\!\left(-\left[(-\ln u_1)^\theta + (-\ln u_2)^\theta\right]^{1/\theta}\right), \quad \theta \in [1, \infty). \tag{4}$$

This copula interpolates between the independence copula when θ = 1 and the two-dimensional comonotonicity copula as θ → ∞. The copula can be extended to higher dimensions by a compound method proposed by Bouyé (2002). This extension is characterized by specific parameters θ_i and has the following form:

$$C^{GU}(u_1, \ldots, u_n; \theta_1, \ldots, \theta_{n-1}) = \begin{cases} C^{GU}(u_1, u_2; \theta_1) & \text{if } n = 2, \\ C^{GU}\!\left(C^{GU}(u_1, \ldots, u_{n-1}; \theta_1, \ldots, \theta_{n-2}),\, u_n; \theta_{n-1}\right) & \text{if } n > 2, \end{cases}$$

with θ_1 ≥ θ_2 ≥ ... ≥ θ_{n−1} ≥ 1, where C^{GU} denotes the standard bivariate Gumbel copula as defined in equation (4). The restriction on the parameters θ_i is important: it ensures a descending dependence order. The dependence between the first two components u_1 and u_2, controlled by θ_1, is at least as strong as the dependence between u_3 on the one hand and u_1 and u_2 on the other, controlled by θ_2. This restriction will play a crucial role in the estimation procedure, as it makes the ordering of the variables important. The three-dimensional version used in this paper is characterized by:

$$C^{GU}(u_1, u_2, u_3; \theta_1, \theta_2) = C^{GU}\!\left(C^{GU}(u_1, u_2; \theta_1),\, u_3; \theta_2\right),$$

with the same parameter restriction as the general version. The estimation procedure of the different models and copulas, together with the results, can be found in the next section.
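A minimal sketch evaluating the bivariate copula of equation (4) and the compound three-dimensional version above:

```python
import numpy as np

def gumbel2(u1, u2, theta):
    """Bivariate Gumbel copula C^GU_theta(u1, u2) of equation (4), theta >= 1."""
    return np.exp(-(((-np.log(u1))**theta + (-np.log(u2))**theta) ** (1.0 / theta)))

def gumbel3(u1, u2, u3, theta1, theta2):
    """Compound three-dimensional Gumbel copula: the bivariate copula applied
    recursively, with the descending-order restriction theta1 >= theta2 >= 1."""
    assert theta1 >= theta2 >= 1.0
    return gumbel2(gumbel2(u1, u2, theta1), u3, theta2)

print(gumbel2(0.9, 0.9, 1.0))            # 0.81 = 0.9 * 0.9: independence
print(gumbel2(0.9, 0.9, 5.0))            # ~0.886: approaching comonotonicity
print(gumbel3(0.9, 0.9, 0.9, 2.0, 1.0))  # (u1, u2) dependent, u3 independent
```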

5 Results

5.1 Univariate results

The first research question aimed to find out for which confidence level β the ES_β would be equal to the VaR_α, i.e. which ES_β produces the same risk measure as the VaR_α in a univariate setting. We considered multiple univariate distributions: the empirical, Gaussian, Student t and GHD. These univariate distributions are estimated by a maximum likelihood procedure; the parameter estimates can be found in Tables 9 to 13 in the Appendix, with standard errors indicated between brackets. Based on these parameter estimates, we created a sample of 10 million random drawings from the estimated distributions for each asset class and for each distribution. Based on these drawings we calculated the VaR_α and ES_β. Thereafter, the equation VaR_α = ES_β was solved for each distribution separately, which yields Figure 2 for stocks; the figures for bonds and real estate can be found in the Appendix (Figures 13 to 16). For the Gaussian distribution we did not need the random drawings, since we were able to solve the equation algebraically; this is done in Appendix section 7.4, where we see that the estimates for µ and σ are irrelevant. For the Student t distribution this is not the case, as the parameter ν plays an important role as well.
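The matching exercise itself is simple; a minimal sketch, with a t distribution standing in for any of the fitted distributions:

```python
import numpy as np

def match_beta(sample, alphas, betas=np.linspace(0.70, 0.9999, 2000)):
    """For each alpha, find beta with ES_beta = VaR_alpha on a simulated
    sample -- the computation behind the VaR_alpha = ES_beta figures."""
    L = np.sort(sample)
    n = len(L)
    es = np.array([L[int(b * n):].mean() for b in betas])  # increasing in beta
    out = []
    for a in alphas:
        var_a = L[int(a * n)]                              # empirical VaR_alpha
        out.append(betas[min(np.searchsorted(es, var_a), len(betas) - 1)])
    return np.array(out)

rng = np.random.default_rng(1)
sample = rng.standard_t(df=5, size=1_000_000)   # stand-in fitted distribution
print(match_beta(sample, alphas=[0.99]))        # beta below 0.975 for fat tails
```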


The dark line indicates the empirical distribution, blue the Gaussian distribution, green the GHD and red the Student t, where the lighter green and red are the symmetric versions of the distributions. The blue line is above the dark one. This is probably caused by the fact that the normal distribution is not able to capture the fat tails of the empirical distribution, as implied by a kurtosis of around 12 for stocks (see Table 1). From Figure 2 we see that the GHD is most in line with the empirical distribution. This is in line with the next section, where we obtain the most suitable marginal distributions for the copulas. When looking at the red line, one can see that it is below the GHD; this is probably caused by the low estimates of the degrees of freedom parameter ν. The estimate for ν was around 2.7 (see Table 11), which results in a very fat-tailed distribution and causes extreme events in the random drawings. The right figure is a zoomed-in version of the left figure. An interesting observation is that for all distributions the ES_0.975 corresponds to a VaR with a confidence level between 99% and 99.5%, meaning that switching from a VaR_0.99 to an ES_0.975 based risk measure will increase the associated capital for risk management.

5.2 Marginal distributions

In section 3.1 we introduced several marginal distributions which are fitted to the particular indexes. Based on the three criteria mentioned in section 2.1, we decide for each of the indexes which distribution is most appropriate to use. The KS and AD tests are appropriate to test the equality of our empirical sample distribution to the introduced marginal distributions. Table 3 shows the P-values of the corresponding test statistics; the test statistics themselves are not that relevant, but for completeness they can be found in the Appendix (Table 14).

Table 3: P-values of the KS and AD tests

              Symmetric   Symmetric     Skewed        Symmetric   Skewed
              Gaussian    Student's t   Student's t   GHD         GHD
              KS    AD    KS    AD      KS    AD      KS    AD    KS    AD

Table 4: Likelihood ratio test, a skewed vs a symmetric distribution

               Symmetric     Skewed                        Symmetric   Skewed
               Student's t   Student's t                   GHD         GHD
               Log L         Log L        D      P-value   Log L       Log L      D      P-value
Stocks         19627.51      19623.45     8.12   0.0044    19646.86    19641.33   11.06  0.00088
Bonds          30398.99      30391.28    15.41   0.00009   30401.4     30393.67   15.45  0.00008
Real estate    20596.54      20592.32     8.44   0.0037    20610.05    20604.14   11.82  0.00059

Looking at Table 4, we see significant results in favour of a skewed distribution. This is in line with the skewness observed in section 2. Besides the tables with the three criteria, we created 15 QQ-plots, in which the preference for the GHD appears; these figures can be found in the Appendix (Figures 17 to 31). Combining the results of Tables 3 and 4 and the QQ-plots, we find a clear preference for the skewed version of the GHD. This distribution will play an important role in the Monte Carlo simulations, where we use copulas to capture the dependence between the different asset classes; the marginal distributions will be assumed to be generalized hyperbolic. The next subsection discusses the estimation results for the different copulas; thereafter the VaR and ES are calculated for the particular financial institutions.

5.3 Copula estimates

As discussed in section 4.4, we base our analysis on three different copulas. The Gaussian, the Student t and the Gumbel copula are used to capture the dependence between the different asset classes, where the marginals are assumed to follow a generalized hyperbolic distribution. The parameters of the Gaussian and the Student t copula are estimated using the R package by Hofert and Mächler (2011). The estimation of the parameters of our trivariate Gumbel copula was not that straightforward. The parameters of the Gumbel copula are estimated by a maximum likelihood procedure. The maximum likelihood estimator (MLE) of the parameter vector is obtained by maximizing the log likelihood

$$\ln L(\theta_1, \theta_2;\, \hat{u}_1, \hat{u}_2, \hat{u}_3) = \sum_{t=1}^{n} \ln c_\theta(\hat{U}_t)$$

with respect to the parameter vector θ. Here the copula density is denoted by c_θ and Û_t denotes a pseudo-observation from the copula. This density c_θ is obtained by taking the derivative of the copula, as shown in the following equation:

$$c_\theta(u_1, u_2, u_3) = \frac{\partial^3 C^{GU}_{\theta_1, \theta_2}(u_1, u_2, u_3)}{\partial u_1\, \partial u_2\, \partial u_3}.$$

Examples of copulas that do not have a joint density are the comonotonicity and countermonotonicity copulas. A second difficulty in optimizing the likelihood is the pair of parameter restrictions θ_2 ≥ 1 and θ_1 ≥ θ_2. These restrictions cause problems in the optimization process, and therefore a reparametrization is suggested to tackle the problem. We rewrite θ_1 and θ_2 using the function G, which we define as:

$$G(x, y) = \begin{pmatrix} g_1(x, y) \\ g_2(x, y) \end{pmatrix} = \begin{pmatrix} 1 + e^{y} + x^2 \\ 1 + x^2 \end{pmatrix},$$

so θ_1 = g_1(x, y) = 1 + e^y + x² and θ_2 = g_2(x, y) = 1 + x², allowing x, y ∈ ℝ. Using this function G, the likelihood is optimized with respect to x and y, and the standard errors of θ̂_1 and θ̂_2 are calculated by applying the delta method. The definition of the delta method can be found in Appendix section 7.3. To apply the delta method we calculate the matrix of first-order derivatives of our function G(x, y), which we denote by G'(x, y):

$$G'(x, y) = \begin{pmatrix} \frac{\partial g_1(x,y)}{\partial x} & \frac{\partial g_1(x,y)}{\partial y} \\[4pt] \frac{\partial g_2(x,y)}{\partial x} & \frac{\partial g_2(x,y)}{\partial y} \end{pmatrix} = \begin{pmatrix} 2x & e^{y} \\ 2x & 0 \end{pmatrix}.$$

After applying this delta method we found our parameter estimates with their standard errors for the Gumbel copula. These parameter estimates are combined with the estimates of the Gaussian and the Student t copula, and are summarized in Table 5, where the standard errors are indicated between brackets.

Table 5: Parameter estimates for the copulas

Gaussian copula              Student's t copula           Gumbel copula
ρ_s,b   -0.1528 (0.011)      ρ_s,b   -0.1309 (0.014)      θ_1   1.2968 (0.0123)
ρ_s,r    0.3646 (0.010)      ρ_s,r    0.3457 (0.013)      θ_2   1.000 (3.313e-06)
ρ_b,r   -0.1422 (0.012)      ρ_b,r   -0.1198 (0.015)
                             ν        5.6296 (0.320)
Log L   540.3                Log L   747.8                Log L   498.1
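A small sketch of the reparametrization and the delta-method step described above; the optimizer output (x̂, ŷ and its covariance) is hypothetical and merely chosen to land near the Table 5 estimates:

```python
import numpy as np

def G(x, y):
    """Unconstrained map: theta1 = 1 + e^y + x^2 >= theta2 = 1 + x^2 >= 1
    holds automatically for all real x, y."""
    return np.array([1.0 + np.exp(y) + x**2, 1.0 + x**2])

def G_prime(x, y):
    """Jacobian of G, used by the delta method."""
    return np.array([[2.0 * x, np.exp(y)],
                     [2.0 * x, 0.0]])

def delta_method_se(x, y, cov_xy):
    """Standard errors of (theta1, theta2) via Var(G) ~ G' Cov(x, y) G'^T."""
    J = G_prime(x, y)
    return np.sqrt(np.diag(J @ cov_xy @ J.T))

x_hat, y_hat = 0.02, np.log(0.2966)                 # hypothetical MLE output
cov_hat = np.array([[1.0e-4, 0.0], [0.0, 2.0e-3]])  # hypothetical covariance
print(G(x_hat, y_hat))                              # (theta1, theta2) ~ (1.297, 1.000)
print(delta_method_se(x_hat, y_hat, cov_hat))
```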

In Table 5 we see similar estimates for the Student t and the Gaussian copula: positive correlation between stocks and real estate, and negative correlation between bonds and the other two asset classes. For the Gumbel copula we observe positive dependence between stocks and real estate, controlled by θ_1, and independence between bonds on the one hand and stocks and real estate on the other, controlled by θ_2. The choice of θ_1 as the parameter controlling the dependence between stocks and real estate follows from the restriction θ_1 ≥ θ_2, since this pair exhibits the strongest dependence.

5.4 Random sampling from the different copulas

We want to simulate three-dimensional samples such that they have the distribution implied by the copula. For the simulation of a univariate distribution, the idea is to simulate uniform variables and transform them using the inverse cumulative distribution function, so that they are distributed according to the appropriate distribution. This technique is not directly applicable to multivariate distributions. Fortunately, simulating random data corresponding to our estimated Gaussian and Student t copulas is still relatively simple and can be done with the R package by Hofert and Mächler (2011). The simulation of our three-dimensional Gumbel copula is not so trivial. Genest and Rivest (1993) explain how to simulate from bivariate Archimedean copulas; unfortunately, this technique will not work for our three-dimensional Gumbel copula. We suggest the following general algorithm:

1. Generate N independent uniform variates (v_1, ..., v_n, ..., v_N);

2. Generate the u_n recursively as

$$u_n = C^{-1}_{(u_1, \ldots, u_{n-1})}(v_n),$$

with

$$C_{(u_1, \ldots, u_{n-1})}(u_n) = \Pr\{U_n \leq u_n \mid (U_1, \ldots, U_{n-1}) = (u_1, \ldots, u_{n-1})\} = \frac{\partial^{\,n-1}_{(u_1, \ldots, u_{n-1})} C(u_1, \ldots, u_n, 1, \ldots, 1)}{\partial^{\,n-1}_{(u_1, \ldots, u_{n-1})} C(u_1, \ldots, u_{n-1}, 1, \ldots, 1)}.$$

For more details on this algorithm, one might read Bouyé, Durrleman, Nikeghbali, Riboulet, and Roncalli (2000). For the specific elaboration of our three-dimensional Gumbel copula, N is equal to three. These techniques to simulate three-dimensional samples from the corresponding copulas are used in combination with the inverse transformation sampling technique to generate the implied stock, bond and real estate daily losses based on the marginal distributions. These samples are used to calculate the implied VaR and ES for banks, insurance companies and pension funds, based on the assumption of a constant asset allocation, using a Monte Carlo simulation. But first the results of the historical simulation are presented in the next section.
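For the bivariate case (N = 2), the algorithm reduces to inverting the conditional distribution C_{u1}(u2) = ∂C(u1, u2)/∂u1, which has a closed form for the Gumbel copula. A minimal sketch, with the inversion done by numerical root finding:

```python
import numpy as np
from scipy.optimize import brentq

def gumbel_cond_cdf(u2, u1, theta):
    """Conditional CDF of U2 given U1 = u1 for the bivariate Gumbel copula:
    the partial derivative of C^GU_theta(u1, u2) with respect to u1."""
    a, b = -np.log(u1), -np.log(u2)
    A = a**theta + b**theta
    return np.exp(-A ** (1.0 / theta)) * A ** (1.0 / theta - 1.0) * a ** (theta - 1.0) / u1

def sample_gumbel2(n, theta, rng):
    """Algorithm above for N = 2: u1 = v1, and u2 solves C_{u1}(u2) = v2."""
    v = rng.uniform(1e-9, 1 - 1e-9, size=(n, 2))
    out = np.empty_like(v)
    for i, (v1, v2) in enumerate(v):
        u2 = brentq(lambda u: gumbel_cond_cdf(u, v1, theta) - v2, 1e-12, 1 - 1e-12)
        out[i] = (v1, u2)
    return out

rng = np.random.default_rng(7)
U = sample_gumbel2(2000, theta=2.0, rng=rng)
print(np.corrcoef(U.T)[0, 1])  # clearly positive dependence on the uniform scale
```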

5.5 Historical simulation results

Using the historical daily losses of the three portfolios, we calculated the VaR and ES for confidence levels ranging from 90% to 99.99%, and for the ES even from 70% to 99.99%. Based on these simulations we created Figures 4 and 5.

Figure 4: VaR_α = ES_β based on a historical simulation

Figure 5: VaR_α = ES_β based on a historical simulation

In the figures above, the black line corresponds to a bank, blue to an insurance company and red to a pension fund. The lines are not smooth; this can be seen as a drawback of the historical simulation method and happens because we can only simulate from a limited amount of empirical data. Looking at the different lines, we do not spot anything remarkable. The only thing clearly visible in Figure 5 is that switching from a risk measure based on the VaR_0.99 to a method based on the ES_0.975 will increase the required amount of capital. The next section continues with the variance-covariance method.

5.6 Variance-covariance results

The variance-covariance method assumes that the daily losses L_t have a multivariate normal distribution. The maximum likelihood parameter estimates of this multivariate normal distribution are simply given by the mean and the variance-covariance matrix of the daily losses of the asset classes. To calculate the risk measures for the portfolios of the different financial institutions, we used 10 million random drawings from the estimated multivariate normal distribution. Based on these VaR_α and ES_β estimates we created Figures 6 and 7.

Figure 6: VaR_α = ES_β based on the variance-covariance method

Figure 7: VaR_α = ES_β based on the variance-covariance method

The black line indicates a bank, blue an insurance company and red again a pension fund. As in the univariate case, the lines lie exactly on top of each other. This makes sense, since any linear combination of normal distributions follows a normal distribution. More interesting is the change in capital requirements when switching from VaR_0.99 to ES_0.975; this is discussed in section 5.8.

5.7 Monte Carlo simulation results

The Monte Carlo simulation is the most comprehensive method used in this paper. The analysis started with finding the most appropriate marginal distribution; the generalized hyperbolic distribution turned out to be the best candidate. After finding the best marginal distributions, we estimated several copulas, for which the parameter estimates were presented in Table 5. To create confidence intervals around our VaR and ES estimates, we created 200 parameter drawings based on the estimated Hessian matrix. For each parameter set we created 100,000 univariate data points, which were translated into daily losses via the estimated generalized hyperbolic distributions. We were then able to calculate the appropriate risk measures, based on the different asset allocations per financial institution. Based on these VaR_α and ES_β estimates we created Figure 8 for an insurance company and Figure 9 for a pension fund.

Figure 8: VaR_α = ES_β for an insurance company based on a Monte Carlo simulation

Figure 9: VaR_α = ES_β for a pension fund based on a Monte Carlo simulation

Looking at Figure 8, we see that the dark line is above the blue one, and the blue line is above the red one. The dark line is created using the Gaussian copula, the blue using the Student t copula, and the red line indicates the Gumbel copula. So for an insurance company with an asset allocation of (35%, 35%, 30%) over stocks, bonds and real estate, we see that the VaR_α with confidence levels from 90% up to 98% corresponds to certain levels of β. For the Gumbel copula these β are the lowest, followed by the Student t and finally the Gaussian copula. This is in line with the general expectation that the Gaussian copula does not capture extreme joint movements as well as the Student t and the Gumbel copula. Interestingly, however, this does not hold for a pension fund, whose asset allocation is assumed to be (32%, 56%, 12%). Here we see that the Gumbel copula produces results similar to the Gaussian copula, which is probably caused by the large amount invested in bonds. The Gumbel copula assumes that bonds on the one hand and stocks and real estate on the other are uncorrelated, since the estimate of θ_2 was equal to 1.00 in Table 5, whereas the Gaussian and Student t copulas do capture correlations between bonds and the other two asset classes. The figure for banks can be found in the Appendix (see Figure 32); it looks similar to the figure for an insurance company. The next section discusses the effects on the capital requirements of the different financial institutions.

5.8 Capital requirements

In this section, the change in capital requirements per financial institution will be explored. We start with the historical simulation; this will be our benchmark when comparing the other two methods. Based on the historical simulation, the VaR_0.99 and the ES_0.975 are calculated. The results are placed in Table 6, where the VaR and ES estimates are in percentage points.

Table 6: Historical simulation

                     VaR_0.99   ES_0.975   Delta      %
Bank                   2.10       2.31      0.21    9.91
Insurance company      1.75       1.86      0.11    6.59
Pension fund           1.17       1.26      0.09    8.11

This table makes clear that the institution with the highest allocation to risky assets, the bank, has the highest VaR and ES estimates. For a pension fund, which invests a significant amount in bonds, the lowest VaR and ES estimates can be noted. More interesting are the changes in capital requirements from VaR_0.99 to ES_0.975. For the bank we see the highest increase, in absolute as well as in relative terms: based on the historical simulation, the risk measure for a bank went up from 2.10% to 2.31%, an increase of almost 10%. The increases for the insurance company and the pension fund are 6.6% and 8.1%, respectively.

The second method we investigate is the variance-covariance method. This method is the building block of the current models for capital requirements in Basel, Solvency and the FTK. The assumption of this method is that the risk factors follow a multivariate normal distribution. Based on this assumption and 10 million drawings, we calculated the associated risk measures per institution. These findings are placed in Table 7, where the VaR and ES estimates are in percentage points.

Table 7: Variance Covariance

                     VaR_0.99   ES_0.975   Delta      %
Bank                   1.733      1.741     0.008   0.47
Insurance company      1.344      1.351     0.007   0.51
Pension fund           0.961      0.965     0.005   0.50

It is interesting to see the relatively small increase when changing from a VaR_0.99 to an ES_0.975 based risk measure. Under normality this is to be expected: for the Gaussian distribution the VaR_0.99 and the ES_0.975 are almost identical (see Appendix 7.4), which explains deltas of only about half a percent.
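This near-equality is quickly verified numerically (a small check assuming scipy is available):

```python
from scipy import stats

z99 = stats.norm.ppf(0.99)                             # VaR_0.99 factor: ~2.326
es975 = stats.norm.pdf(stats.norm.ppf(0.975)) / 0.025  # ES_0.975 factor: ~2.338
print(f"increase: {100 * (es975 / z99 - 1):.2f}%")     # ~0.5%, matching Table 7
```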

The third method is the Monte Carlo simulation. Here a distinction is made between the three different copulas used in the analysis: the Gaussian, the Student t and the Gumbel. The risk measures are calculated based on 10 million random drawings, which are translated into daily losses using the generalized hyperbolic marginal distributions. Table 8 presents the results per copula and per institution.

Table 8: Monte Carlo simulation with different copulas

                                        VaR0.99   ES0.975   Δ (pp)   Δ (%)
    Bank with Gaussian                  2.20      2.28      0.08     3.68
    Bank with Student t                 2.22      2.31      0.09     4.21
    Bank with Gumbel                    2.32      2.42      0.10     4.51
    Insurance company with Gaussian     1.68      1.74      0.06     3.91
    Insurance company with Student t    1.70      1.79      0.09     5.37
    Insurance company with Gumbel       1.84      1.95      0.11     5.85
    Pension fund with Gaussian          1.21      1.25      0.04     3.61
    Pension fund with Student t         1.23      1.29      0.06     4.54
    Pension fund with Gumbel            1.32      1.38      0.06     4.91

The effects of using a different copula are clearly present in the Table. For all institutions we observe the highest risk measures when using the extreme value Gumbel copula, followed by the Student t and the Gaussian copula respectively. These findings are in line with the results of Kole et al. (2007), who found, in a slightly different context, that the Gaussian copula underestimates potential risks and that the Gumbel copula overestimates them. One could argue that this occurs in our analysis as well: we observe higher risk estimates for the Gumbel copula than under the historical simulation method, while the estimates of the Student t copula are most in line with the historical simulation.
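A sketch of the Gaussian-copula variant of this simulation (the Student t and Gumbel cases differ only in how the uniforms are generated). The copula correlation matrix R would come from the estimates in section 5.3; marginal_ppfs stands for the quantile functions of the fitted generalized hyperbolic margins, which are not available in scipy and are assumed here to be obtained by numerically inverting the fitted GHD cdfs:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)

    def gaussian_copula_losses(R, marginal_ppfs, n=1_000_000):
        # step 1: correlated standard normals with the copula correlation R
        z = rng.multivariate_normal(np.zeros(len(R)), R, size=n)
        # step 2: probability transform to uniform margins (Sklar's theorem)
        u = norm.cdf(z)
        # step 3: map each uniform margin through its fitted quantile function
        return np.column_stack([ppf(u[:, j]) for j, ppf in enumerate(marginal_ppfs)])

The resulting (n × 3) loss matrix is then aggregated with the portfolio weights and fed to the same empirical VaR and ES computation as before.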

The historical simulation and the variance-covariance method are straightforward to use, and their results are not affected by the choice of a model. The Monte Carlo simulation, by contrast, requires choosing your own models, and these models bring along uncertainty. To capture some of this uncertainty we created the following Figures. Under the assumption of correctly specified marginals, we draw 200 parameter sets for the copulas based on the estimated Hessian matrix. Per parameter set the VaR and ES were calculated. For a bank these Figures are plotted (see Figures 10 to 12). The blue line indicates the VaR estimates and the green line the ES estimates. The 95% confidence intervals are plotted around these estimates to see whether the confidence interval around the VaR0.99 overlaps with the one around the ES0.975.


Figure 10: VaR and ES based on the Gaussian copula including a 95% confidence interval for a bank

Figure 11: VaR and ES based on the Student t copula including a 95% confidence interval for a bank

Figure 12: VaR and ES based on the Gumbel copula including a 95% confidence interval for a bank


In each of these Figures the VaR and ES levels are displayed. The horizontal dotted red lines indicate the 95% confidence interval for the ES0.975 and the black lines the 95% confidence interval for the VaR0.99. Of course these confidence intervals are based on the same sample set, which will increase the type II error. Nonetheless we think they give a good indication of whether the increase from VaR0.99 to ES0.975 is significant. Interestingly, these intervals are non-overlapping in all nine figures, which indicates that the change in capital requirements is significant for all financial institutions. The figures for an insurance company and a pension fund can be found in the Appendix, see Figures 33 to 38.
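The construction behind these Figures can be summarized by the following sketch (theta_hat is the copula ML estimate, cov_hat its asymptotic covariance from the inverse Hessian, and simulate_fn a hypothetical function that reruns the Monte Carlo of the previous paragraphs for one parameter set and returns the (VaR0.99, ES0.975) pair):

    import numpy as np

    rng = np.random.default_rng(3)

    def risk_measure_bands(theta_hat, cov_hat, simulate_fn, n_sets=200, level=0.95):
        # draw parameter sets from the asymptotic normal of the ML estimator
        draws = rng.multivariate_normal(theta_hat, cov_hat, size=n_sets)
        vals = np.array([simulate_fn(t) for t in draws])   # shape (n_sets, 2)
        lo = (1 - level) / 2
        # pointwise confidence bands for (VaR, ES) across the parameter draws
        return np.quantile(vals, [lo, 1 - lo], axis=0)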

6 Conclusion

The first research question to answer was: "Which confidence level β for ESβ produces the same capital requirement as α for the VaRα?". This question has been answered for a number of univariate models, illustrated with the figures in section 5.1. We found that for the Gaussian distribution the levels of β were the highest, caused by the fact that the normal distribution is not able to capture the fat tails of the empirical distribution, whereas the Student t produced the lowest levels of β, caused by the low estimates of the degrees of freedom parameter ν. The estimates of the generalized hyperbolic distribution were most in line with the empirical distribution for stocks, bonds and real estate. Furthermore we found that the GHD distribution was the most suitable choice when combining the asset classes in a multivariate setting. In this multivariate setting, copulas were used to capture the dependence structure between the different asset classes. By combining the results of the Monte Carlo simulation with copulas with those of the other two methods, the historical simulation and the variance-covariance method, we answered the second research question, which asked what the effect is of switching from a VaR0.99 to an ES0.975 for a particular portfolio. We found that a shift from the VaR0.99 to the ES0.975 would lead to a significant increase in the capital requirements. This held for the portfolios of all assumed financial institutions and for all methods we used. Interestingly, the effect was least extreme for the variance-covariance method, which is the standard model in Basel, Solvency and the FTK: for the standard models based on this method, the effect of switching from VaR0.99 to ES0.975 was only about 0.5%. The effect was more notable in the setting of the Monte Carlo simulation, where the increases ranged from 3.6% to 5.9% depending on the copula used.

6.1 Recommendations/Limitations

In our analysis we considered three financial institutions, where the differences between the institutions were captured only via different asset allocations. In reality, asset allocation is a broader study in which the dimensionality is not bounded by three. Besides that, the idea of a pension fund is completely different from the main concepts of an insurance company. Pension funds came into existence to ensure a stable lifetime income for individuals, whereas an insurance company tries to make a profit through risk diversification. The different intentions per financial institution will play a major role in future decisions on changing the risk management of a particular financial institution. Beyond this, we think there is room for further research. It would be interesting to investigate a similar research question in a time-varying setting. This could reveal the effects for conditional methods such as a generalized autoregressive conditional heteroscedasticity (GARCH) model. Such models should be able to capture more of the stylized facts of financial time series, such as volatility clustering, than the static models considered here. To capture both the dependence between asset classes and these stylized facts, one could combine copulas with GARCH-type models, as sketched below. GARCH-copula based methods have been applied earlier in the papers by Jondeau and Rockinger (2006) and Palaro and Hotta (2006). The idea is to model the margins by dynamic models such as an ARMA-GARCH model, and to model the innovation vector by using copulas. A disadvantage of these kinds of models is that they are far removed from the current static models in Basel, Solvency and the FTK, which makes the link to the current situation more difficult. Nevertheless, we think this would be a nice addition to our research.
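As a pointer for this extension, one possible two-step recipe is sketched below (an illustration only; it assumes the third-party arch package for GARCH estimation and normal innovations, neither of which is used in this thesis):

    import numpy as np
    from arch import arch_model      # assumed available: pip install arch
    from scipy.stats import norm

    def garch_pit(returns):
        # step 1 of a GARCH-copula model: fit a GARCH(1,1) to one margin
        res = arch_model(returns, p=1, q=1).fit(disp="off")
        z = res.resid / res.conditional_volatility   # standardized residuals
        return norm.cdf(z)                           # PITs, approximately uniform

    # step 2: stack the PITs of stocks, bonds and real estate column-wise and
    # estimate the copula of choice on them, as in section 4.4:
    # u = np.column_stack([garch_pit(r) for r in (stocks, bonds, real_estate)])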

Acknowledgment

I want to thank Professor dr. R.H. (Ruud) Koning for his helpful comments and the enjoyable meetings.

References

Acerbi, C. and D. Tasche (2002). Expected shortfall: a natural coherent alternative to value at risk. Economic Notes 31 (2), 379–388.

Artzner, P., F. Delbaen, J.M. Eber, and D. Heath (1999). Coherent measures of risk. Mathematical Finance 9 (3), 203–228.

Barth, J.R., G. Caprio, and R. Levine (2008). Rethinking Bank Regulation: Till Angels Govern. Cambridge University Press.

Bouyé, E., V. Durrleman, A. Nikeghbali, G. Riboulet, and T. Roncalli (2000). Copulas for finance: a reading guide and some applications.

Chavez-Demoulin, V., A.C. Davison, and A.J. McNeil (2005). Estimating value-at-risk: a point process approach. Quantitative Finance 5 (2), 227–234.

Genest, C. and L.-P. Rivest (1993). Statistical inference procedures for bivariate Archimedean copulas. Journal of the American Statistical Association 88 (423), 1034–1043.

Hayashi, F. (2000). Econometrics. Princeton University Press.

Hofert, M. and M. Mächler (2011). Nested Archimedean copulas meet R: The nacopula package. Journal of Statistical Software 39 (9), 1–20.

Jondeau, E. and M. Rockinger (2006). The copula-GARCH model of conditional dependencies: An international stock market application. Journal of International Money and Finance 25, 827–853.

Kole, E., K. Koedijk, and M. Verbeek (2007). Selecting copulas for risk management. Journal of Banking & Finance 31, 2405–2423.

McNeil, A.J., R. Frey, and P. Embrechts (2005). Quantitative Risk Management. Princeton University Press.

Palaro, H.P. and L.K. Hotta (2006). Using conditional copula to estimate value at risk. Journal of Data Science 4, 93–115.

Pérignon, C. and D.R. Smith (2010). The level and quality of value-at-risk disclosure by commercial banks. Journal of Banking & Finance 34 (2), 362–377.

Razali, N.M. and Y.B. Wah (2011). Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. Journal of Statistical Modeling and Analytics 2 (1), 21–33.

Sorkin, A.R. (2011). Too Big to Fail. Viking Press.

7 Appendix

7.1 Generalized hyperbolic distribution

The density function of the d-dimensional generalized hyperbolic distribution is given by
\[
f(x) = c \, \frac{K_{\lambda - d/2}\left(\sqrt{(\chi + Q(x))(\psi + \gamma'\Sigma^{-1}\gamma)}\right) e^{(x-\mu)'\Sigma^{-1}\gamma}}{\left(\sqrt{(\chi + Q(x))(\psi + \gamma'\Sigma^{-1}\gamma)}\right)^{d/2 - \lambda}},
\qquad Q(x) = (x-\mu)'\Sigma^{-1}(x-\mu),
\]
with normalizing constant
\[
c = \frac{\left(\sqrt{\chi\psi}\right)^{-\lambda} \psi^{\lambda} \left(\psi + \gamma'\Sigma^{-1}\gamma\right)^{d/2 - \lambda}}{(2\pi)^{d/2}\, |\Sigma|^{1/2}\, K_{\lambda}\left(\sqrt{\chi\psi}\right)},
\]
where $K_{\lambda}$ denotes a modified Bessel function of the third kind with index $\lambda$.
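For reference, a direct transcription of this density into Python (a sketch; scipy's kv is the modified Bessel function Kλ, and the parameter names mirror the formula above):

    import numpy as np
    from scipy.special import kv   # modified Bessel function of the third kind

    def ghd_pdf(x, lam, chi, psi, mu, Sigma, gamma):
        # d-dimensional generalized hyperbolic density evaluated at one point x
        x, mu, gamma = np.asarray(x), np.asarray(mu), np.asarray(gamma)
        Sigma = np.atleast_2d(Sigma)
        d = mu.size
        Sinv = np.linalg.inv(Sigma)
        Q = (x - mu) @ Sinv @ (x - mu)        # Mahalanobis-type quadratic form
        g = gamma @ Sinv @ gamma
        arg = np.sqrt((chi + Q) * (psi + g))
        c = (np.sqrt(chi * psi) ** (-lam) * psi ** lam * (psi + g) ** (d / 2 - lam)
             / ((2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(Sigma))
                * kv(lam, np.sqrt(chi * psi))))
        return c * kv(lam - d / 2, arg) * np.exp((x - mu) @ Sinv @ gamma) \
               / arg ** (d / 2 - lam)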

7.2 Invariance proposition

Let (X1, ..., Xd) be a random vector with continuous margins and copula C and let T1, ..., Td be

strictly increasing functions. Then (T1(X1), ..., Td(Xd)) also has copula C.

7.3 Delta method

The definition of the Delta method can be found in the book of Hayashi (2000) and reads as follows. Suppose $\{x_n\}$ is a sequence of $K$-dimensional random vectors such that $x_n \xrightarrow{p} \beta$ and
\[
\sqrt{n}\,(x_n - \beta) \xrightarrow{d} z,
\]
and suppose $a(\cdot): \mathbb{R}^K \to \mathbb{R}^r$ has continuous first derivatives, with $A(\beta)$ denoting the $r \times K$ matrix of first derivatives evaluated at $\beta$:
\[
A(\beta) \equiv \frac{\partial a(\beta)}{\partial \beta'}.
\]
Then
\[
\sqrt{n}\,[a(x_n) - a(\beta)] \xrightarrow{d} A(\beta)z.
\]
In particular,
\[
\sqrt{n}\,(x_n - \beta) \xrightarrow{d} N(0, \Sigma) \;\Longrightarrow\; \sqrt{n}\,[a(x_n) - a(\beta)] \xrightarrow{d} N(0, A(\beta)\Sigma A(\beta)').
\]
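As a small illustration, the delta-method standard error of a smooth scalar function of the estimates can be computed with a numerical gradient (a sketch; cov_hat is the estimated asymptotic covariance of the estimator, already scaled by 1/n):

    import numpy as np

    def delta_method_se(a, theta_hat, cov_hat, h=1e-6):
        # numerical gradient of a(.) at theta_hat (central differences)
        k = len(theta_hat)
        e = np.eye(k)
        grad = np.array([(a(theta_hat + h * e[i]) - a(theta_hat - h * e[i]))
                         / (2 * h) for i in range(k)])
        # standard error of a(theta_hat) via Avar = A(beta) Sigma A(beta)'
        return float(np.sqrt(grad @ cov_hat @ grad))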


7.4 VaR=ES for the Gaussian distribution

By solving the equation VaRα = ESβ under the assumption of a normal distribution, we see that the solution is the same for every normal distribution, irrespective of µ and σ:
\begin{align*}
\text{VaR}_\alpha &= \text{ES}_\beta \\
\mu + \sigma\,\Phi^{-1}(\alpha) &= \mu + \sigma\,\frac{\phi(\Phi^{-1}(\beta))}{1-\beta} \\
\Phi^{-1}(\alpha) &= \frac{\phi(\Phi^{-1}(\beta))}{1-\beta} \\
\alpha &= \Phi\left(\frac{\phi(\Phi^{-1}(\beta))}{1-\beta}\right).
\end{align*}
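The last line can be checked numerically (a sketch):

    from scipy.stats import norm

    def alpha_matching_es(beta):
        # alpha such that VaR_alpha = ES_beta for every normal distribution
        return norm.cdf(norm.pdf(norm.ppf(beta)) / (1 - beta))

    print(alpha_matching_es(0.975))   # ~0.990: ES_0.975 matches VaR_0.990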

7.5 Graphs and Figures

Figure 15: VaRα = ESβ for real estate
Figure 16: VaRα = ESβ for real estate
Figure 17: QQ-plot of the daily stock losses against a Gaussian distribution
Figure 19: QQ-plot of the daily real estate losses against a Gaussian distribution
Figure 20: QQ-plot of the daily stock losses against a Student t distribution
Figure 21: QQ-plot of the daily bond losses against a Student t distribution
Figure 23: QQ-plot of the daily stock losses against a symmetric Student t distribution
Figure 24: QQ-plot of the daily bond losses against a symmetric Student t distribution
Figure 25: QQ-plot of the daily real estate losses against a symmetric Student t distribution
Figure 27: QQ-plot of the daily bond losses against a GHD distribution
Figure 28: QQ-plot of the daily real estate losses against a GHD distribution
Figure 29: QQ-plot of the daily stock losses against a symmetric GHD distribution
Figure 31: QQ-plot of the daily real estate losses against a symmetric GHD distribution
Figure 32: VaRα = ESβ for a bank based on a Monte Carlo simulation
Figure 33: VaR and ES based on the Gaussian copula including a 95% confidence interval for an insurance company
Figure 35: VaR and ES based on the Gumbel copula including a 95% confidence interval for an insurance company
Figure 36: VaR and ES based on the Gaussian copula including a 95% confidence interval for a pension fund
Figure 37: VaR and ES based on the Student t copula including a 95% confidence interval for a pension fund

7.6 Tables

Table 9: Gaussian distribution

             Stocks                    Bonds                     Real estate
    µ       -2.78·10⁻⁴ (1.46·10⁻⁴)   -2.17·10⁻⁴ (2.28·10⁻⁴)    -1.91·10⁻⁴ (1.31·10⁻⁴)
    σ        0.0114 (1.03·10⁻⁴)       0.00178 (1.61·10⁻⁵)       0.0103 (9.29·10⁻⁴)
    Log L    18818.65                 30268.53                  19461.74

Table 10: Symmetric t distribution

             Stocks                    Bonds                     Real estate
    µ       -5.959·10⁻⁴ (1.07·10⁻⁴)  -2.478·10⁻⁴ (2.22·10⁻⁴)   -5.211·10⁻⁴ (8.754·10⁻⁵)
    σ        0.0132 (0.049)           0.0018 (0.013)            0.0142 (0.097)
    ν        2.7215 (0.161)           6.5705 (0.119)            2.3424 (0.252)
    Log L    19623.45                 30391.28                  20592.32

Table 11: Skewed Student t distribution

             Stocks                    Bonds                     Real estate
    µ       -8.899·10⁻⁴ (1.53·10⁻⁴)  -4.851·10⁻⁴ (6.83·10⁻⁵)   -7.201·10⁻⁴ (1.138·10⁻⁴)
    σ        0.0131 (0.048)           0.0018 (0.0129)           0.0139 (0.092)
    ν        2.7401 (0.159)           6.8023 (0.120)            2.3584 (0.241)
    γ        7.207·10⁻⁴ (2.65·10⁻⁴)   2.685·10⁻⁴ (7.211·10⁻⁵)   8.285·10⁻⁴ (3.12·10⁻⁴)
    Log L    19627.51                 30398.99                  20596.54

Table 12: Generalized hyperbolic distribution

             Stocks                    Bonds                     Real estate
    λ       -0.2801 (0.200)          -0.4130 (1.57)            -0.7517 (0.111)
    χ        0.3179 (0.073)*          1.9315 (0.116)*           0.4138 (0.136)*
    ψ        0.5686 (0.073)*          2.0753 (0.116)*           0.1702 (0.136)*
    µ       -9.584·10⁻⁴ (1.49·10⁻⁴)  -4.807·10⁻⁴ (6.60·10⁻⁵)   -7.951·10⁻⁴ (1.16·10⁻⁴)
    σ        0.0113 (0.018)           0.0018 (0.012)            0.0103 (0.024)
    γ        6.772·10⁻⁴ (2.09·10⁻⁴)   2.631·10⁻⁴ (6.98·10⁻⁴)    5.990·10⁻⁴ (1.79·10⁻⁴)
    Log L    19646.86                 30401.40                  20610.05

* indicates the standard error of ᾱ in the following parametrization: ψ = ᾱ K_{λ+1}(ᾱ)/K_λ(ᾱ) and χ = ᾱ²/ψ, so that √(χψ) = ᾱ.

Table 13: Symmetric generalized hyperbolic distribution

             Stocks                    Bonds                     Real estate
    λ       -0.2775 (0.204)           0.1126 (1.67)            -0.7753 (0.108)
    χ        0.3130 (0.073)*          1.4465 (0.178)*           0.4186 (0.144)*
    ψ        0.5659 (0.073)*          2.4526 (0.178)*           0.1550 (0.144)*
    µ       -6.131·10⁻⁴ (1.03·10⁻⁴)  -2.4813·10⁻⁴ (2.23·10⁻⁵)  -5.360·10⁻⁴ (8.65·10⁻⁵)
    σ        0.0113 (0.018)           0.0018 (0.0012)           0.0104 (0.0025)
    Log L    19641.33                 30393.67                  20604.14

Table 14: Table containing the KS and AD statistics
