
Faculty of Economics and Business
Bachelor Thesis - Actuarial Sciences

Modeling Aggregation of Dependent Risks

Performance of the Rearrangement Algorithm for

calculating the worst-case VaR

Name: Kerim Kes
Student number: 10542760
Supervisor: dr. Sami Umut Can
Date: 28 June 2016


I, Kerim Kes, hereby declare that I have written this thesis myself and that I take full responsibility for its contents. I confirm that the text and the work presented in this thesis are original and that I have used no sources other than those mentioned in the text and the references. The Faculty of Economics and Business is responsible solely for the supervision of the thesis up to its submission, not for its contents.


Contents

1 Introduction
2 Theoretical framework
  2.1 Loss distributions
    2.1.1 Pareto distribution
    2.1.2 Gamma distribution
    2.1.3 Log-Normal distribution
    2.1.4 Weibull distribution
    2.1.5 Burr distribution
  2.2 Worst-case VaR and the Rearrangement Algorithm
    2.2.1 Loss and Value-at-Risk
    2.2.2 Rearrangement Algorithm
  2.3 Dual bound and standard bound
    2.3.1 Dual bound
    2.3.2 Standard bound
  2.4 Criteria
3 Computing the worst-case VaR
  3.1 Data
  3.2 Worst-case VaR by RA, dual bound and standard bound
4 Results
  4.1 The homogeneous case
  4.2 The inhomogeneous case
  4.3 Some improvements
5 Conclusions
References


1 Introduction

A well-known and widely used risk measure is the Value-at-Risk (VaR). Besides serving as a risk measure, the VaR is also regularly used as a capital requirement for insurance companies (Denuit, Dhaene, Goovaerts, & Kaas, 2005). For an insurance company, modeling the aggregation of risks in a portfolio is an important problem. Assuming independence between the individual risks simplifies the problem, but this assumption is clearly unrealistic in many situations. Therefore, Embrechts, Puccetti, and Rüschendorf (2013) developed a numerical algorithm that can calculate sharp bounds for the VaR of high-dimensional portfolios in which the individual risks are dependent. For brevity, this algorithm is called 'the Rearrangement Algorithm (RA) to compute bounds on the VaR'.

The upper limit of the VaR is of particular interest for insurance companies, as it indicates the capital requirement in the worst-case scenario. However, there are hardly any methods to calculate this worst-case VaR analytically when the individual variables are dependent and have different distributions. The existing methods are often very inefficient, and it has not been proven that they are accurate for portfolios with more than two differently distributed risks, see Embrechts et al. (2013) and Embrechts and Puccetti (2006). Therefore, this thesis examines the following question: to what extent does the Rearrangement Algorithm for calculating the worst-case VaR of aggregate dependent risks work accurately and efficiently?

To answer this question, a distinction must be made between two cases: the homogeneous case and the inhomogeneous case. In the homogeneous case, the marginal risks are all identically distributed. In this situation, the worst-case VaR can be calculated analytically with the so-called 'dual bound', under some assumptions (Embrechts et al., 2013). In the inhomogeneous case, however, the marginal risks are no longer identically distributed, which makes the calculation of the worst-case VaR considerably harder. If there are only two risks with different marginal distributions, the worst-case VaR can be computed analytically by the so-called 'standard bound', see Puccetti and Rüschendorf (2012a) and Embrechts and Puccetti (2006). But when the portfolio contains more than two risks with different marginal distributions, the standard bound is no longer useful. In that case, the worst-case VaR can be calculated by the dual bound (Embrechts et al., 2013). However, this computation may become numerically cumbersome, because the complexity of the dual bound increases with the number of differently distributed risks. Moreover, this bound has not been proved to be sharp for inhomogeneous marginals in dimensions three or higher (Embrechts et al., 2013). Therefore, the RA for calculating the worst-case VaR is compared both with the dual bound with multiple variables in the homogeneous case, and with the standard bound with two variables in the inhomogeneous case.

In order to compare the algorithm with the dual bound and the standard bound, the next chapter first reviews which marginal distributions are most commonly used for modeling losses. After that, the worst-case VaR is defined and the Rearrangement Algorithm is described. Next, the dual bound is described for the homogeneous case and the standard bound for the inhomogeneous case. Finally, the criteria for the accuracy and efficiency of the algorithm are proposed and described.

The third chapter describes how the research is conducted. It first describes which data is used and how it is obtained, and then discusses the parameter values for the marginal distribution functions. The second section of that chapter describes how the worst-case VaR is computed and how it is compared across the different methods. After the worst-case VaR has been computed and compared for the different methods, the results are presented and analysed in chapter four. Finally, the central question is answered in chapter five.

2 Theoretical framework

This chapter contains the literature review needed for the comparison between the RA and the existing analytical methods to calculate the worst-case VaR. First, the marginal distributions most widely used for modeling risks or losses are reviewed. Thereafter, the worst-case VaR is defined and the Rearrangement Algorithm is introduced, with an explanation of how the algorithm works. After describing the RA, the dual bound and the standard bound are explained and discussed. Finally, the criteria for the accuracy and efficiency of the algorithm in comparison with the dual bound and standard bound are described.

2.1 Loss distributions

In order to compare the algorithm with the dual bound and the standard bound, it is necessary to know how the individual risks are distributed. According to Bernardi, Maruotti, and Petrella (2012), it is well-recognized that the distribution of losses is usually strongly skewed and has heavy tails. Rockafellar and Uryasev (2002) likewise suggest that risks are often distributed with fat tails, which makes the VaR numerically difficult to work with. Examples of such distributions are the Pareto, the Gamma, the log-Normal, the Weibull and the Burr distribution (Bernardi et al., 2012). Below, the probability density functions of these distributions are briefly described.

2.1.1 Pareto distribution

The Pareto distribution with parameters θ and α has a probability density function given by:

f(x) = αθ^α / (x + θ)^(α+1),   x > 0

with scale parameter θ > 0 and shape parameter α > 0, see (Denuit et al., 2005).

2.1.2 Gamma distribution

The Gamma distribution with parameters α and τ has a probability density function given by:

f(x) = x^(α−1) τ^α e^(−τx) / Γ(α),   x > 0

with shape parameter α > 0 and rate parameter τ > 0, see (Denuit et al., 2005).

2.1.3 Log-Normal distribution

The log-Normal distribution with parameters µ and σ has a probability density function given by:

f(x) = 1 / (xσ√(2π)) · e^(−(log(x) − µ)² / (2σ²)),   x > 0

with parameters µ ∈ R and σ > 0, see (Denuit et al., 2005) and (Foss, Korshunov, & Zachary, 2013).

2.1.4 Weibull distribution

The Weibull distribution with parameters κ and α has a probability density function given by:

f(x) = (α/κ) (x/κ)^(α−1) e^(−(x/κ)^α),   x > 0

with scale parameter κ > 0 and shape parameter α > 0, see (Foss et al., 2013). This is a heavy-tailed distribution for 0 < α < 1.

2.1.5 Burr distribution

The Burr distribution with parameters α, κ and τ has a probability density function given by:

f(x) = ατκ^α x^(τ−1) / (x^τ + κ)^(α+1),   x > 0

with shape parameters α > 0 and τ > 0 and scale parameter κ > 0, see (Foss et al., 2013).
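For reference, the five densities above correspond to standard parameterizations available in scipy.stats. The mapping below is an illustrative Python sketch (the thesis itself uses R); the example parameter values are arbitrary choices, not the ones used later in the thesis, and the scipy constructors shown are the intended equivalents of the formulas in this section.

```python
import numpy as np
from scipy import stats

# Generic example parameters (NOT the thesis's choices) for the five densities.
theta, alpha, tau, mu, sigma, kappa = 1.0, 2.0, 2.0, 0.0, 1.0, 1.0

pareto  = stats.lomax(c=alpha, scale=theta)                 # a th^a / (x+th)^(a+1)
gamma   = stats.gamma(a=alpha, scale=1.0 / tau)             # x^(a-1) t^a e^(-t x) / Gamma(a)
lognorm = stats.lognorm(s=sigma, scale=float(np.exp(mu)))   # log X ~ N(mu, sigma^2)
weibull = stats.weibull_min(c=alpha, scale=kappa)           # (a/k)(x/k)^(a-1) e^(-(x/k)^a)
burr    = stats.burr12(c=tau, d=alpha, scale=kappa ** (1.0 / tau))  # a t k^a x^(t-1)/(x^t+k)^(a+1)
```

Checking each density at a convenient point (e.g. x = 1) against the closed-form expressions above is an easy way to confirm the parameter mapping.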

Now that it has become clear how individual risks are often distributed, the next section deals with the Rearrangement Algorithm for calculating the worst-case VaR.

2.2 Worst-case VaR and the Rearrangement Algorithm

2.2.1 Loss and Value-at-Risk

Before the Rearrangement Algorithm is explained, the aggregate loss and the worst-case VaR are first defined. Let the aggregate loss be defined as:

L+ = Σ_{i=1}^{d} Li

where L1, ..., Ld are the individual random loss variables over a fixed period of time (Embrechts et al., 2013). The Value-at-Risk of the aggregate loss L+ at a probability level α ∈ (0, 1) is defined as:

VaR_α(L+) = F_{L+}^{-1}(α) = inf{x ∈ R : F_{L+}(x) ≥ α}

where F_{L+}(x) = P(L+ ≤ x) is the cdf of L+. The VaR is a measure of extreme loss when α is close to 1.

In the case when the marginal risks are perfectly positively correlated (comonotonic), the VaR can be calculated as

VaR_α(L+) = VaR+_α(L+) = Σ_{i=1}^{d} VaR_α(Li),

see (Denuit et al., 2005) and (Embrechts et al., 2013). However, the VaR is generally not subadditive, and VaR_α(L+) can therefore take higher values than the sum of the VaRs of the individual risks (Denuit et al., 2005). Therefore, this comonotonic sum is not a correct upper bound for the VaR.
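As a small numerical illustration of these definitions, the empirical VaR (the smallest observation x with F(x) ≥ α) and the additivity of the VaR under comonotonicity can be checked directly. This is an illustrative Python sketch (the thesis itself uses R), and var_alpha is a made-up helper name:

```python
import numpy as np

def var_alpha(losses, alpha):
    # empirical VaR: smallest observation x with F_n(x) >= alpha
    x = np.sort(np.asarray(losses, dtype=float))
    cdf = np.arange(1, len(x) + 1) / len(x)       # empirical cdf at the order statistics
    return x[np.searchsorted(cdf, alpha)]         # first index with F_n(x) >= alpha

# comonotonic risks: L2 = 2 * L1, so the VaR of the sum is additive
L1 = np.arange(1.0, 101.0)                        # losses 1, 2, ..., 100
L2 = 2.0 * L1
```

For these comonotonic samples, var_alpha(L1 + L2, 0.95) equals var_alpha(L1, 0.95) + var_alpha(L2, 0.95); under other dependence structures the VaR of the sum can exceed this value, which is exactly why the comonotonic sum is not a valid upper bound.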

In order to find such an upper bound, it must first be well-defined. Embrechts et al. (2013) define the worst-case VaR as:

VaR̄_α(L+) = sup{ VaR_α(L1 + ... + Ld) : F_L ∈ F(F1, ..., Fd) }

Here, L = (L1, ..., Ld)′ and F(F1, ..., Fd) is the Fréchet class of all possible distributions F_L on R^d with the given marginals F1, ..., Fd (Embrechts et al., 2013). It follows directly from the definitions of the VaR and the worst-case VaR that:

VaR_α(L+) ≤ VaR̄_α(L+)

So the worst-case VaR is indeed an upper bound for the VaR. The next subsection explains how the value of VaR̄_α(L+) can be computed numerically.

2.2.2 Rearrangement Algorithm

For the calculation of sharp bounds on the VaR, Embrechts et al. (2013) developed the Rearrangement Algorithm (RA). This algorithm is based on an earlier version of the RA in (Puccetti & Rüschendorf, 2012b), which calculated bounds on the distribution function of a sum of dependent risks. This earlier version worked well for small dimensions up to 30, but it was adapted and improved by Embrechts et al. (2013) in order to calculate sharp bounds for the VaR, especially in the inhomogeneous case. They argue that this RA for computing bounds on the VaR is fast and accurate and can process portfolios with more than 1000 dimensions, for any set of distribution functions Fi.

Since the lower bound of the VaR is not of interest in this thesis, only the RA to calculate the worst-case VaR is discussed. First, the operator s(X) is defined as the minimum of the row sums of an (N × d)-matrix X with entries x_{i,j} (Embrechts et al., 2013):

s(X) = min_{1≤i≤N} Σ_{j=1}^{d} x_{i,j}


Then, the RA to calculate the worst-case VaR is described by Embrechts et al. (2013, p. 2757) as follows:

Rearrangement Algorithm (RA) to compute VaR̄_α(L+)

1. Fix an integer N and the desired level of accuracy ε > 0.
2. Define the matrices X^α = (x^α_{i,j}) and X̄^α = (x̄^α_{i,j}) as

   x^α_{i,j} = F_j^{-1}(α + (1 − α)(i − 1)/N),   x̄^α_{i,j} = F_j^{-1}(α + (1 − α)i/N),

   for 1 ≤ i ≤ N, 1 ≤ j ≤ d.
3. Permute randomly the elements in each column of X^α and X̄^α.
4. Iteratively rearrange the j-th column of the matrix X^α so that it becomes oppositely ordered to the sum of the other columns, for 1 ≤ j ≤ d. A matrix Y^α is found.
5. Repeat Step 4 until s(Y^α) − s(X^α) < ε. A matrix X^∗ is found.
6. Apply Steps 4-5 to the matrix X̄^α until a matrix X̄^∗ is found.
7. Define s_N = s(X^∗) and s̄_N = s(X̄^∗). Then we have s_N ≤ s̄_N, and in practice we find that s_N ≈ s̄_N ≈ VaR̄_α(L+) as N → ∞.

The idea behind the algorithm is simple. Each column of the matrices X^α and X̄^α contains N discrete, stochastically ordered points (Embrechts et al., 2013). These points come from the (1 − α) upper part of the support of each marginal risk. The columns of matrix X^α are rearranged into X^∗ to find the maximal value of s_N for which the componentwise sum of each row of X^∗ is larger than this value (Embrechts et al., 2013). In the same way, the columns of X̄^α are rearranged into X̄^∗ to find the maximal value of s̄_N for which the componentwise sum of each row of X̄^∗ is larger than this value. If N is large, this results in s_N ≤ VaR̄_α(L+) ≈ s̄_N, see (Embrechts et al., 2013).
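The steps above can be sketched in code. The following illustrative Python implementation (the thesis itself works with R's qrmtools) builds only the lower matrix X^α and therefore returns the lower estimate s_N; the upper matrix X̄^α is handled analogously. The function name and defaults are ad-hoc choices for this sketch:

```python
import numpy as np

def ra_lower(qfs, alpha, N=1000, eps=1e-6, seed=1):
    """Lower RA estimate s_N of the worst-case VaR (sketch of the algorithm
    of Embrechts et al.); qfs is a list of marginal quantile functions."""
    rng = np.random.default_rng(seed)
    d = len(qfs)
    i = np.arange(N)                                    # i = 0, ..., N-1
    # Step 2: discretize the (1 - alpha) upper tail of each marginal
    X = np.column_stack([qf(alpha + (1 - alpha) * i / N) for qf in qfs])
    # Step 3: randomly permute each column
    for j in range(d):
        X[:, j] = rng.permutation(X[:, j])
    s_old = -np.inf
    while True:
        # Step 4: make column j oppositely ordered to the sum of the others
        for j in range(d):
            rest = X.sum(axis=1) - X[:, j]
            ranks = np.argsort(np.argsort(-rest))       # rank 0 = largest rest
            X[:, j] = np.sort(X[:, j])[ranks]           # largest rest -> smallest x
        s_new = X.sum(axis=1).min()                     # the operator s(X)
        # Step 5: stop once an iteration improves s(X) by less than eps
        if s_new - s_old < eps:
            return s_new
        s_old = s_new
```

For two Uniform(0, 1) marginals and alpha = 0, the rearrangement pairs the columns countermonotonically, so every row sum equals (N − 1)/N.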

For more details concerning the Rearrangement Algorithm, the reader is referred to Puccetti and Rüschendorf (2012b) and Embrechts et al. (2013). The first article explains the mathematical aspects of the algorithm, while the second focuses on the more practical points of the RA. In contrast with this section, the next section deals with methods to calculate the worst-case VaR analytically.

2.3 Dual bound and standard bound

As described in the introduction, the worst-case VaR can be calculated analytically with multiple variables in the homogeneous case, as well as with two variables in the inhomogeneous case. First, the dual bound is described, which can calculate VaR̄_α(L+) in the homogeneous case for high-dimensional portfolios. After that, the standard bound is described, which can calculate VaR̄_α(L+) in the inhomogeneous case for a portfolio with just two variables.

2.3.1 Dual bound

The dual bound is defined in Embrechts et al. (2013) as:

D(s) = inf_{t < s/d}  d ∫_t^{s−(d−1)t} F̄(x) dx / (s − dt),    (1)

The relevance of the dual bound to the worst-case VaR is explained by the following result; see Proposition 4 in Embrechts et al. (2013):

Proposition (Dual bound). In the homogeneous case Fi = F, 1 ≤ i ≤ d, with d ≥ 3, let F be a continuous distribution with unbounded support and an ultimately decreasing density. Suppose that for any sufficiently large threshold s the infimum in (1) is attained at some a < s/d, that is, assume that

D(s) = d ∫_a^b F̄(x) dx / (b − a) = F̄(a) + (d − 1) F̄(b),    (2)

where b = s − (d − 1)a, with F^{-1}(1 − D(s)) ≤ a < s/d. Then, for any sufficiently large confidence level α, we have that

VaR̄_α(L+) = D^{-1}(1 − α).    (3)

Here, F̄(x) = 1 − F(x). Embrechts et al. (2013) argue that equation (3) almost always holds for the distributions and confidence levels that are common in quantitative risk management. Computing the worst-case VaR by the dual bound takes place in two steps: first, calculate the dual function D(s) by numerically solving equation (2); second, numerically calculate the inverse D^{-1} at the level (1 − α), see Embrechts et al. (2013). This method is useful when the portfolio contains variables with identically distributed marginals.
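As an illustration of this two-step procedure, the Python sketch below evaluates D(s) directly from definition (1) for a Pareto survival function F̄(x) = (1 + x)^(−2), for which the tail integral has a closed form, and then inverts D with a root finder. The function names, the restriction of t to [0, s/d), and the bracketing interval are ad-hoc choices for this example (the thesis itself uses R):

```python
from scipy.optimize import brentq, minimize_scalar

def D(s, d=10):
    """Dual bound D(s) = inf_{t < s/d} d * int_t^{s-(d-1)t} Fbar(x) dx / (s - d t)
    for Pareto(theta = 1, alpha = 2): int_t^u (1+x)^(-2) dx = 1/(1+t) - 1/(1+u)."""
    def objective(t):
        u = s - (d - 1) * t
        integral = 1.0 / (1.0 + t) - 1.0 / (1.0 + u)
        return d * integral / (s - d * t)
    # search t on [0, s/d); the upper bound keeps the denominator positive
    res = minimize_scalar(objective, bounds=(0.0, s / d - 1e-9), method="bounded")
    return res.fun

def worst_case_var(alpha, d=10):
    """Second step: invert D numerically, giving VaR_bar = D^{-1}(1 - alpha)."""
    return brentq(lambda s: D(s, d) - (1.0 - alpha), d * 0.1, 1e8)
```

Since the comonotonic dependence structure is a member of the Fréchet class, the value returned by worst_case_var must exceed the comonotonic sum d * ((1 − α)^(−1/2) − 1) of the individual Pareto VaRs.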

2.3.2 Standard bound

In contrast to the previous subsection, this subsection deals with the calculation of the worst-case VaR for a portfolio with differently distributed risks. This can be done by means of the standard bound. However, this method can only produce sharp bounds for portfolios with just two variables. Consistent with the notation in the other sections, the standard bound for two variables is defined as follows (Embrechts & Puccetti, 2006):

P[L+ < s] ≥ τ(s) = sup_{x ∈ R} [φ(x)]_+

where the function φ: R → R is defined as

φ(x) = F1^−(x) + F2^−(s − x) − 1.

Here, F^−(x) := P[X < x], x ∈ R, see Embrechts and Puccetti (2006). They argue that this bound can be translated to the worst-case VaR in the following way:

VaR_α(L+) ≤ τ^{-1}(α) = VaR̄_α(L+),   α ∈ [0, 1]

Now that it has become clear how the worst-case VaR can be computed by the Rearrangement Algorithm, the dual bound and the standard bound, it is necessary to define some criteria for the comparison between the RA and the two bounds. This is considered in the next section.

2.4 Criteria

In order to compare the Rearrangement Algorithm with the dual bound and the standard bound, some criteria must be established. The accuracy of the RA is judged on the basis of the absolute and relative differences between the range values s_N and s̄_N and the exactly calculated values of the worst-case VaR by the dual bound and the standard bound. Besides that, the efficiency of the RA is judged on the basis of the ease of the calculation in comparison with the calculation of the dual bound and the standard bound; in particular, the durations of the calculations are compared with each other. In this thesis, the Rearrangement Algorithm is said to be accurate when the relative difference between the range values and the exact values of the worst-case VaR is smaller than 0.5%. Further, the calculations are called efficient when the duration of a calculation is no longer than 15 minutes. These measurement standards are subjectively chosen, since there are no established standards for accuracy and efficiency in this setting.

This chapter has discussed the most important theory needed to determine to what extent the RA for computing the worst-case VaR works accurately and efficiently. The next chapter describes how the research is set up in order to find an answer to the central question.

3 Computing the worst-case VaR

The first section of this chapter explains which data is used and how it is obtained, and discusses the parameter values for the marginal distribution functions. The second section then explains how the worst-case VaR is computed and how it is compared across the different methods.


3.1 Data

The data used in this research is simulated with the programming language R. These simulations produce pseudo-random values from the distributions in section 2.1. R offers various packages to generate values from these distributions. In order to execute the simulations, the parameter values of the distributions have to be set.

These parameter values are chosen in such a way that the assumptions concerning the RA, the dual bound and the standard bound are satisfied. For the Pareto distribution, parameter values of θ = 1 and α = 2 are chosen. Next, the parameters of the Gamma distribution are set at α = 2 and τ = 1/3. For the log-Normal distribution, the location parameter is set at µ = 4 and the scale parameter at σ = 3. Then, for the Weibull distribution, parameter values of κ = 5 and α = 1/2 are chosen. Finally, the Burr distribution is the only distribution with three parameters, which are chosen as α = 3, τ = 2 and κ = 1/6.

3.2 Worst-case VaR by RA, dual bound and standard bound

Now that it is known how the data is generated, this section describes how the worst-case Value-at-Risk is computed. It must first be determined, for both the homogeneous and the inhomogeneous case, which distribution functions, confidence levels and dimensions are used.

For the homogeneous case, all the distributions from section 2.1 (with parameters as specified in section 3.1) are considered, except the Pareto distribution, because the Pareto distribution has already been tested by Embrechts et al. (2013) in this case. For each of the other distributions, the worst-case VaR is computed by both the Rearrangement Algorithm and the dual bound at confidence levels α = 0.95, α = 0.975, α = 0.99 and α = 0.995. Besides the confidence levels, the dimensions are also varied: for each distribution at a given confidence level, the RA and the dual bound are calculated for d = 10, d = 50 and d = 250, where d denotes the dimension of the portfolio. So, for example, consider a portfolio with d = 50 identically distributed Gamma marginals with parameters α = 2 and τ = 1/3 and confidence level α_VaR = 0.975. In this case, the worst-case VaR can be calculated by both the RA and the dual bound, so that it is possible to determine the accuracy and efficiency of the RA.


In contrast with the homogeneous case, the RA can only be compared with the standard bound for portfolios of dimension d = 2 in the inhomogeneous case. Here, the same confidence levels of α = 0.95, α = 0.975, α = 0.99 and α = 0.995 are used as in the homogeneous case. For each confidence level α, the worst-case VaR is computed by the RA and the standard bound for a portfolio with two differently distributed risks. In this thesis, each combination of two different marginals from section 2.1 is considered. For example, consider a portfolio with a Pareto distributed risk and a Weibull distributed risk, including the parameter values as stated in section 3.1. Now, for a certain confidence level, say α = 0.995, the worst-case VaR can be computed by the RA and the standard bound to determine the accuracy and efficiency in the inhomogeneous case.

Now that it is clear how the parameters are set in both the homogeneous case and the inhomogeneous case and which distributions are used, the worst-case VaR can be calculated by the Rearrangement Algorithm from section 2.2.2, the dual bound from section 2.3.1 and the standard bound from section 2.3.2. In this thesis, the R package 'qrmtools' is used to obtain functions for the calculations of the RA and the dual bound. With the help of this package, it is possible to compute the worst-case VaR by the Rearrangement Algorithm with the function RA(), see Hofert, Memartoluie, Saunders, and Wirjanto (2015). The input variables are set at N = 25,000 and abstol = ε = 0.001. Hofert et al. (2015) state that this version of the RA contains a little more information than the RA in Embrechts et al. (2013). Besides the Rearrangement Algorithm, the worst-case VaR by the dual bound can be calculated with the function worst_VaR_hom(..., method="dual"), see Hofert et al. (2015). They note that an initial interval must be supplied as input for this function; such an interval is easy to construct with the help of the function crude_VaR_bounds() (Hofert et al., 2015), which is also included in the qrmtools package. For more details concerning the implementation of those functions, the reader is referred to Hofert et al. (2015).

In contrast with the RA and the dual bound, the standard bound cannot be computed with 'qrmtools'. In this thesis, the worst-case VaR by the standard bound is computed in the following way. First, calculate the function τ(s) as defined in section 2.3.2. After that, invert τ(s) = α iteratively with the help of some 'if-else' and 'while' statements in R. Finally, this inverted function takes input α and gives the desired output VaR̄_α(L+). See the appendix for the R code that has been used.
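The same compute-then-invert idea can be sketched in Python (the thesis itself uses R; the function names here are illustrative). The sketch assumes the standard bound takes the form τ(s) = sup_x [F1(x) + F2(s − x) − 1]^+ for continuous marginals, evaluates the supremum on a grid, and inverts τ with a root finder; the grid size and bracketing interval are ad-hoc choices:

```python
import numpy as np
from scipy.optimize import brentq

def tau(s, F1, F2, grid=20001):
    # tau(s) = sup_x [F1(x) + F2(s - x) - 1]^+, evaluated on a grid over [0, s]
    # (for continuous marginals, F^- coincides with the cdf F)
    x = np.linspace(0.0, s, grid)
    return max(0.0, float(np.max(F1(x) + F2(s - x) - 1.0)))

def worst_var_standard(alpha, F1, F2):
    # invert tau(s) = alpha: the worst-case VaR is tau^{-1}(alpha)
    return brentq(lambda s: tau(s, F1, F2) - alpha, 1e-6, 1e7)

F_pareto = lambda x: 1.0 - (1.0 + x) ** (-2.0)   # Pareto cdf with theta = 1, alpha = 2
```

For two such Pareto marginals, the supremum is attained at x = s/2 by symmetry, so τ(s) = 1 − 2(1 + s/2)^(−2) and hence τ^{-1}(α) = 2(√(2/(1 − α)) − 1), which gives a closed-form check of the numerical inversion.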


After these calculations have been made, the accuracy of the RA can be determined by means of the relative differences:

|s_N − VaR̄_α(L+)| / VaR̄_α(L+) · 100%,    |s̄_N − VaR̄_α(L+)| / VaR̄_α(L+) · 100%

where s_N and s̄_N denote the range values of the RA and VaR̄_α(L+) denotes the exactly calculated value of the worst-case VaR by the dual bound or the standard bound. If these differences are less than 0.5%, the RA is said to be accurate. Moreover, the efficiency of the RA can be determined in R with the function system.time(). If the duration of the calculation is less than 15 minutes, the RA is said to be efficient. In the next chapter, the results are presented and analysed.
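This accuracy criterion is straightforward to code. A minimal Python sketch (the function names are illustrative; the thesis's own implementation is in R) applies the 0.5% threshold to both endpoints of an RA range:

```python
def relative_difference(value, exact):
    # absolute relative difference, expressed in percent
    return abs(value - exact) / exact * 100.0

def ra_is_accurate(s_lo, s_hi, exact, threshold_pct=0.5):
    # the RA counts as accurate when both range endpoints differ from the
    # exact worst-case VaR by less than the threshold (0.5% in this thesis)
    return max(relative_difference(s_lo, exact),
               relative_difference(s_hi, exact)) < threshold_pct
```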

4 Results

The previous chapter explained how the worst-case VaR is computed by the RA, the dual bound and the standard bound, and how the data for the calculations is simulated. In this chapter, the results of those calculations are presented and analysed. The results of the homogeneous case are shown in the first section, followed by the results of the inhomogeneous case in the second section. Finally, some improvements and adjustments are proposed for further research in the last section.

4.1 The homogeneous case

In this section, the results of the performance of the Rearrangement Algorithm in comparison with the dual bound are presented and analysed. Each of the first four tables shows the results for a portfolio with identically distributed risks. The results for a portfolio with Gamma-distributed risks are presented in Table 1. In this table, d and α represent the dimension and the confidence level as described in chapter 3. Besides that, VaR+_α(L+) denotes the sum of the individual VaRs, see section 2.2.1. The exact value of the worst-case VaR, computed with the help of the dual bound, is given by VaR̄_α(L+). Furthermore, VaR̄_α(L+) (RA) presents the values s_N and s̄_N, which are calculated by the RA. With these values, the accuracy of the RA is measured by the relative difference; the maximum of the relative differences of s_N and s̄_N respectively is reported as Max. RD. (%). Finally, the efficiency of the Rearrangement Algorithm is represented by the value of Time (s), which shows the running time of the RA in seconds. In the same manner as in Table 1, the results for log-Normal-, Weibull- and Burr-distributed risks are presented in Table 2, Table 3 and Table 4 respectively.

Table 1
Results Gamma

d     α       VaR+_α(L+)   VaR̄_α(L+)   VaR̄_α(L+) (RA)      Max. RD. (%)   Time (s)
10    0.95    142.32       177.54      177.53-177.54       0.0042         1.84
10    0.975   167.15       201.71      201.71-201.72       0.0036         1.92
10    0.99    199.15       233.08      233.07-233.08       0.0031         1.99
10    0.995   222.9        256.46      256.45-256.47       0.0027         2.08
50    0.95    711.58       887.69      887.65-887.73       0.0045         8.36
50    0.975   835.75       1008.57     1008.53-1008.6      0.0039         8.8
50    0.99    995.75       1165.39     1165.35-1165.42     0.0033         9.18
50    0.995   1114.52      1282.31     1282.27-1282.34     0.003          9.64
250   0.95    3557.9       4438.47     4438.28-4438.63     0.0044         40.54
250   0.975   4178.73      5042.86     5042.66-5043.02     0.0039         43.2
250   0.99    4978.76      5826.96     5826.76-5827.11     0.0034         44.79
250   0.995   5572.6       6411.58     6411.37-6411.72     0.0032         45.43

Table 2
Results log-Normal

d     α       VaR+_α(L+)   VaR̄_α(L+)   VaR̄_α(L+) (RA)        Max. RD. (%)   Time (s)
10    0.95    75898        524233      524114-524343         0.0227         1.39
10    0.975   195336       1086264     1086035-1086476       0.0211         1.38
10    0.99    586325       2643726     2643217-2644202       0.0193         1.45
10    0.995   1239320      4950073     4949162-4950909       0.0184         1.45
50    0.95    379490       3639844     3636197-3643484       0.1002         6.9
50    0.975   976681       7139771     7133027-7146523       0.0946         6.87
50    0.99    2931623      16466893    16452296-16481319     0.0886         6.44
50    0.995   6196598      29896804    29870158-29920373     0.0891         6.28
250   0.95    1897451      20831173    20731467-20930106     0.4786         30.17
250   0.975   4883403      39621655    39512170-39873555     0.6358         32.28
250   0.99    14658113     88265103    88677676-89442505     1.3339         28.09
250   0.995   30982988     158123365   158582476-159889473   1.1169         30.57


Table 3
Results Weibull

d     α       VaR+_α(L+)   VaR̄_α(L+)   VaR̄_α(L+) (RA)      Max. RD. (%)   Time (s)
10    0.95    448.72       846.55      846.47-846.63       0.0098         1.03
10    0.975   680.39       1147.8      1147.7-1147.89      0.0085         0.95
10    0.99    1060.38      1619.65     1619.54-1619.76     0.0072         0.95
10    0.995   1403.61      2032.32     2032.19-2032.44     0.0064         0.92
50    0.95    2243.6       4241.47     4240.39-4242.21     0.0254         2.62
50    0.975   3401.96      5746.4      5745.24-5747.2      0.0202         2.6
50    0.99    5301.9       8104.48     8103.21-8105.38     0.0157         2.54
50    0.995   7018.04      10167.2     10165.85-10168.16   0.0133         2.63
250   0.95    11218.01     21207.36    21201.96-21211.05   0.0255         9.5
250   0.975   17009.79     28732       28726.19-28736.03   0.0202         9.36
250   0.99    26509.49     40533.95    40516.07-40526.9    0.0441         9.46
250   0.995   35090.21     50847.64    50829.24-50840.82   0.0362         9.45

Table 4
Results Burr

d     α       VaR+_α(L+)   VaR̄_α(L+)   VaR̄_α(L+) (RA)      Max. RD. (%)   Time (s)
10    0.95    52.37        67.75       67.75-67.75         0.0048         0.81
10    0.975   62.22        78.84       78.84-78.85         0.0044         0.81
10    0.99    76.33        95.03       95.02-95.03         0.0042         0.8
10    0.995   88.07        108.66      108.66-108.67       0.004          0.81
50    0.95    261.87       338.96      338.91-338.98       0.0128         2.83
50    0.975   311.12       394.47      394.43-394.5        0.0121         2.72
50    0.99    381.66       475.45      475.39-475.48       0.0118         2.75
50    0.995   440.37       543.67      543.61-543.71       0.0115         2.71
250   0.95    1309.36      1694.78     1694.56-1694.91     0.0128         11.95
250   0.975   1555.62      1972.37     1972.13-1972.52     0.0122         11.92
250   0.99    1908.29      2377.51     2376.95-2377.4      0.0234         11.91
250   0.995   2201.83      2718.68     2718.04-2718.54     0.0235         11.93

From the tables, it can be observed that the values of the worst-case VaR by the RA are close to the exact values of the worst-case VaR by the dual bound. On the other hand, the values of VaR+_α(L+) are significantly lower than the exact values of the worst-case VaR, as was expected from section 2.2.1. In this thesis, the RA is said to be accurate when the relative difference between the range values and the exact values of the worst-case VaR is smaller than 0.5%. This is the case for almost all the values, except for the results of the log-Normal-distributed risks with dimension d = 250 at α = 0.975, 0.99 and 0.995. It is striking that in two of these three cases the exact value of the worst-case VaR is not within the RA range. This may be due to a loss of accuracy of the dual bound in R when using the function called in section 3.2 with method="dual". In section 4.3, an adjusted method is introduced which might be able to solve this problem. Besides the accuracy, the calculations are called efficient when the duration of a calculation is no longer than 15 minutes. All values comply with this efficiency standard with ease, so the Rearrangement Algorithm is called efficient for the homogeneous case in this thesis.

4.2 The inhomogeneous case

In contrast with the homogeneous case, the results of the inhomogeneous case concern portfolios that contain only two variables, each with a different distribution, as explained in earlier chapters. The first column of the next four tables shows how each of the two risks is distributed. The other columns represent the same values as described for the homogeneous case, except that VaR̄_α(L+) has been calculated with the help of the standard bound instead of the dual bound. The results have been calculated for the confidence levels α = 0.95, 0.975, 0.99 and 0.995, which are presented in Table 5, Table 6, Table 7 and Table 8 respectively.

Table 5
Results α = 0.95

Marginals            VaR+_α(L+)   VaR̄_α(L+)   VaR̄_α(L+) (RA)      Max. RD. (%)   Time (s)
Pareto-Gamma         17.7         22.03       22.03-22.03         0.0013         0.22
Pareto-Lognormal     7593.27      7703.17     7702.73-7703.62     0.0058         0.1
Pareto-Weibull       48.34        60.42       60.42-60.42         0.0025         0.09
Pareto-Burr          8.71         11.3        11.3-11.3           0.0016         0.09
Gamma-Lognormal      7604.03      7634.52     7634.08-7634.97     0.0058         0.22
Gamma-Weibull        59.1         70.37       70.37-70.38         0.002          0.22
Gamma-Burr           19.47        22.51       22.51-22.51         0.0009         0.22
Lognormal-Weibull    7634.67      7984.3      7983.85-7984.74     0.0056         0.09
Lognormal-Burr       7595.04      7618.16     7617.72-7618.6      0.0059         0.11
Weibull-Burr         50.11        56.49       56.49-56.49         0.0023         0.09


Table 6
Results α = 0.975

Marginals            VaR+_α(L+)   VaR̄_α(L+)   VaR̄_α(L+) (RA)       Max. RD. (%)   Time (s)
Pareto-Gamma         22.04        27.04       27.04-27.04          0.0012         0.24
Pareto-Lognormal     19538.94     19721.94    19720.93-19722.95    0.0051         0.1
Pareto-Weibull       73.36        89.42       89.42-89.42          0.0021         0.09
Pareto-Burr          11.55        14.67       14.67-14.67          0.0015         0.11
Gamma-Lognormal      19550.33     19583.18    19582.18-19584.19    0.0051         0.22
Gamma-Weibull        84.75        96.53       96.53-96.54          0.0017         0.24
Gamma-Burr           22.94        26.07       26.07-26.07          0.0008         0.25
Lognormal-Weibull    19601.65     20058.09    20057.07-20059.1     0.005          0.1
Lognormal-Burr       19539.83     19568.95    19567.95-19569.96    0.0051         0.11
Weibull-Burr         74.26        81.45       81.45-81.45          0.002          0.09

Table 7
Results α = 0.99

Marginals            VaR+_α(L+)   VaR̄_α(L+)   VaR̄_α(L+) (RA)       Max. RD. (%)   Time (s)
Pareto-Gamma         28.92        34.9        34.9-34.9            0.0011         0.23
Pareto-Lognormal     58641.45     58986.16    58983.52-58988.81    0.0045         0.09
Pareto-Weibull       115.04       137.9       137.9-137.91         0.0017         0.09
Pareto-Burr          16.63        20.64       20.64-20.64          0.0015         0.11
Gamma-Lognormal      58652.37     58688.02    58685.43-58690.26    0.0044         0.24
Gamma-Weibull        125.95       138.28      138.28-138.29        0.0015         0.23
Gamma-Burr           27.55        30.85       30.85-30.85          0.0007         0.25
Lognormal-Weibull    58738.49     59347.86    59345.22-59350.52    0.0045         0.08
Lognormal-Burr       58640.09     58678.85    58676.33-58681.61    0.0047         0.1
Weibull-Burr         113.67       122.04      122.03-122.04        0.0016         0.09


Table 8
Results α = 0.995

Marginals           VaR+α(L+)   VaRα(L+)    VaRα(L+) (RA)           Max. RD. (%)   Time (s)
Pareto-Gamma        35.43       42.24       42.24-42.25             0.0011         0.22
Pareto-Lognormal    123945.09   124489.02   124483.86-124494.17     0.0041         0.09
Pareto-Weibull      153.5       182.98      182.98-182.98           0.0015         0.09
Pareto-Burr         21.95       26.8        26.8-26.8               0.0015         0.09
Gamma-Lognormal     123954.24   123991.82   123987.12-123994.45     0.0038         0.23
Gamma-Weibull       162.65      175.33      175.33-175.34           0.0013         0.22
Gamma-Burr          31.1        34.55       34.55-34.55             6e-04          0.23
Lognormal-Weibull   124072.31   124805.41   124800.27-124810.57     0.0041         0.08
Lognormal-Burr      123940.76   123988.38   123983.52-123993.81     0.0044         0.09
Weibull-Burr        149.17      158.52      158.52-158.52           0.0014         0.09

The tables show that the RA values of the worst-case VaR are even closer to the exact values than in the homogeneous case. The gap between VaR+α(L+) and the exact worst-case VaR is also smaller. Both observations may be a consequence of the portfolio containing only two variables. Unlike in the homogeneous case, the maximum relative difference is smaller than 0.5% for all values in the inhomogeneous case. Moreover, each calculation takes far less than 15 minutes. By the criteria of this thesis, the Rearrangement Algorithm is therefore both accurate and efficient in the inhomogeneous case. The next section discusses some improvements and adjustments for further research.
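The "Max. RD." column can be made precise with a small helper. The sketch below is written in Python rather than the thesis's R, purely for illustration; the function name is hypothetical, but the formula (maximum distance of the two RA range endpoints to the exact value, relative to the exact value, in percent) reproduces the tabulated figures.

```python
def max_relative_difference(ra_low, ra_high, exact):
    """Maximum relative difference (in %) between the two RA range
    endpoints and the exact worst-case VaR."""
    return 100 * max(abs(ra_low - exact), abs(ra_high - exact)) / exact

# Pareto-Lognormal row of Table 5: RA range 7702.73-7703.62,
# exact worst-case VaR 7703.17 -> 0.0058%, well below the 0.5% threshold.
print(round(max_relative_difference(7702.73, 7703.62, 7703.17), 4))  # 0.0058
```

A value below 0.5% is what this thesis calls "accurate"; the printed figure matches the Max. RD. entry in Table 5.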

4.3 Some improvements

As stated in section 4.1, the function used to compute the worst-case VaR by the dual bound in R may become inaccurate for larger values of the dimension d. One possible way around this problem is to use the same function from section 3.2 with method="Wang" instead of method="dual". Hofert et al. (2015) state that this approach is still not straightforward to apply, though it is easier and numerically more stable than the dual bound. For more details on the mathematics and implementation of this method, the interested reader is referred to Hofert et al. (2015). Table 9 presents the results for log-Normal-distributed risks, where this so-called 'Wang method' is used to compute the exact worst-case VaR in order to check whether the RA is accurate in this case. All other input variables are unchanged.

Table 9
Results log-Normal (method="Wang")

d     α       VaR+α(L+)   VaRα(L+)    VaRα(L+) (RA)           Max. RD. (%)   Time (s)
10    0.95    75898       524233      524114-524343           0.0227         1.42
10    0.975   195336      1086264     1086035-1086476         0.021          1.39
10    0.99    586325      2643726     2643217-2644202         0.0193         1.48
10    0.995   1239320     4950073     4949162-4950909         0.0184         1.45
50    0.95    379490      3639841     3636197-3643484         0.1001         7.41
50    0.975   976681      7139770     7133027-7146523         0.0946         6.99
50    0.99    2931623     16466817    16452296-16481319       0.0882         6.52
50    0.995   6196598     29895262    29870158-29920373       0.084          6.37
250   0.95    1897451     20831047    20731467-20930106       0.478          30.64
250   0.975   4883403     39692332    39512170-39873555       0.4566         32.43
250   0.99    14658113    89055288    88677676-89442505       0.4348         28.31
250   0.995   30982988    159245525   158582476-159889473     0.4164         30.51

With this approach, the maximum relative difference between the RA-range values and the 'Wang bound' is less than 0.5%, which implies that the RA is accurate in this case. The calculations take roughly as long as with the dual bound. These results indicate that the assessment of the Rearrangement Algorithm's accuracy depends on the method used to calculate the exact worst-case VaR in the homogeneous case. Although both methods should lead to equivalent bounds, differences may arise due to numerical complexities (Hofert et al., 2015). To assess the accuracy of the RA better, further research is necessary to improve the implementation of the dual bound or the Wang bound.

Besides the implementation of the dual bound or the Wang bound, the Rearrangement Algorithm itself can be improved. Hofert et al. (2015) introduce the 'Adaptive Rearrangement Algorithm (ARA)', an improved version of the RA: they argue that it is algorithmically refined, has more meaningful tuning parameters and returns more information. Another way to improve the algorithm is with the help of higher-dimensional dependence information (Embrechts et al., 2013): if such information is available, it leads to narrower RA bounds on the worst-case VaR. Finally, the parameters N and ε as mentioned in section 3.2 may also be varied to obtain more information about the trade-off between accuracy and efficiency.
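For intuition on what N and ε control, here is a bare-bones sketch of the RA's core rearrangement step, written in Python for illustration only (the function name is hypothetical). It operates on an already-discretized N × d quantile matrix and omits the full upper/lower-bound bookkeeping of Embrechts et al. (2013); N governs the discretization fineness and ε the stopping tolerance.

```python
def ra_lower_bound(X, eps=1e-6, max_iter=100):
    """Core rearrangement step: X is a list of N rows, each a list of
    d discretized marginal quantiles.  Each column is rearranged so it
    is oppositely ordered to the sum of the other columns; iteration
    stops once the minimal row sum improves by less than eps."""
    X = [row[:] for row in X]          # work on a copy
    N, d = len(X), len(X[0])
    m_old = float("-inf")
    for _ in range(max_iter):
        for j in range(d):
            rest = [sum(row) - row[j] for row in X]
            # largest entry of column j goes to the row where the sum
            # of the other columns is smallest (opposite ordering)
            order = sorted(range(N), key=lambda i: rest[i])
            col = sorted((row[j] for row in X), reverse=True)
            for i, v in zip(order, col):
                X[i][j] = v
        m_new = min(sum(row) for row in X)
        if m_new - m_old <= eps:
            break
        m_old = m_new
    return m_new

# Two identical discretized marginals: the countermonotonic
# rearrangement makes all row sums equal.
print(ra_lower_bound([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))  # 4.0
```

In the full algorithm this step is run on two slightly shifted discretizations to obtain the lower and upper endpoint of the RA range; a finer grid (larger N) narrows the range at the cost of longer run times.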


These adjustments are suggestions for further research.

5 Conclusions

In this final chapter, the most important findings of this thesis are summarized in order to answer the central question. Furthermore, the limitations of this research and the recommendations for future research are restated.

Embrechts et al. (2013) developed a numerical algorithm that can calculate sharp bounds on the VaR of high-dimensional portfolios in which the individual risks are dependent. This algorithm is called 'the Rearrangement Algorithm (RA) to compute bounds on the VaR'. The upper bound on the VaR is of particular interest to insurance companies, as it indicates the capital requirement in the worst-case scenario. However, there are hardly any methods to calculate this worst-case VaR analytically when the individual variables are dependent and differently distributed. Moreover, the existing methods are often very inefficient, and it has not been proven that they are accurate for portfolios with more than two differently distributed risks. Therefore, this thesis has examined the following question: to what extent does the Rearrangement Algorithm for calculating the worst-case VaR of aggregate dependent risks work accurately and efficiently?

To answer this question, a distinction has been made between two cases: the homogeneous case and the inhomogeneous case. In the homogeneous case, where all individual risks are identically distributed, the RA has been compared to the dual bound. These risks are assumed to be Gamma-, log-Normal-, Weibull- or Burr-distributed. For each distribution, three different dimensions and four different confidence levels have been considered. The same four confidence levels have also been used in the inhomogeneous case. Here, the RA has been compared to the standard bound for only two variables, which are differently distributed. Every pairwise combination of the distributions just mentioned, now including the Pareto distribution, has been used.

To actually compute the values of the worst-case VaR by the Rearrangement Algorithm, the dual bound and the standard bound, the R package qrmtools has been used. With these values, the accuracy of the RA is determined by means of the relative difference. In this thesis, the Rearrangement Algorithm is said to be accurate when the relative difference between the RA-range values and the exact values of the worst-case VaR is smaller than 0.5%. Further, the calculations are called efficient when a single calculation takes no longer than 15 minutes.

It was found that the calculations by the RA are efficient in both the homogeneous and the inhomogeneous case. The values produced by the RA are also very accurate in both cases, except for three values in a large portfolio containing only log-Normal-distributed risks. This exception may be due to a loss of accuracy of the dual bound in R when using the function worst_VaR_hom(..., method="dual") from the package qrmtools. One possible way around this problem is to use the same function with method="Wang" instead of method="dual". With this approach, the maximum relative difference between the RA-range values and the 'Wang bound' is less than 0.5%, which means that the RA is accurate in this case as well. These results imply that the assessment of the RA's accuracy depends on the method used to compute the exact worst-case VaR in the homogeneous case.

To assess the accuracy of the RA better, further research is necessary to improve the implementation of the dual bound or the Wang bound. Beyond these bounds, the algorithm itself can be improved: Hofert et al. (2015) introduce the 'Adaptive Rearrangement Algorithm (ARA)', an improved version of the RA. Another way to improve the algorithm is to exploit higher-dimensional dependence information. Finally, the parameters N and ε may be varied to obtain more information about the trade-off between accuracy and efficiency. These adjustments are suggestions for further research.

References

Bernardi, M., Maruotti, A., & Petrella, L. (2012). Skew mixture models for loss distributions: A Bayesian approach. Insurance: Mathematics and Economics, 51, 617-623.

Denuit, M., Dhaene, J., Goovaerts, M., & Kaas, R. (2005). Actuarial Theory for Dependent Risks. Chichester, England: Wiley.

Embrechts, P., & Puccetti, G. (2006). Aggregating risk capital, with an application to operational risk. The Geneva Risk and Insurance Review, 31(2), 71-90.

Embrechts, P., Puccetti, G., & Rüschendorf, L. (2013). Model uncertainty and VaR aggregation. Journal of Banking & Finance, 37, 2750-2764.


Foss, S., Korshunov, D., & Zachary, S. (2013). An Introduction to Heavy-Tailed and Subexponential Distributions (2nd ed.). New York, NY: Springer.

Hofert, M., Memartoluie, A., Saunders, D., & Wirjanto, T. (2015, December 29). Improved Algorithms for Computing Worst Value-at-Risk: Numerical Challenges and the Adaptive Re-arrangement Algorithm. Retrieved from http://arxiv.org/abs/1505.02281

Puccetti, G., & Rüschendorf, L. (2012a). Bounds for joint portfolios of dependent risks. Statistics & Risk Modeling, 29(2), 107-131.

Puccetti, G., & Rüschendorf, L. (2012b). Computation of sharp bounds on the distribution of a function of dependent risks. Journal of Computational and Applied Mathematics, 236(7), 1833-1840.

Rockafellar, R. T., & Uryasev, S. (2002). Conditional value-at-risk for general loss distributions. Journal of Banking & Finance, 26, 1443-1471.


A Appendix

The function τ(s) as defined in section 2.3.2 has been implemented by the following R code:

standard_bound <- function(s, pF1, pF2) {
  # tau(s): maximize F1(x) + F2(s - x) - 1 over x in [0, s]
  sb <- function(x) {
    pF1(x) + pF2(s - x) - 1
  }
  standardbound <- optimize(f = sb, interval = c(0, s), maximum = TRUE)
  return(standardbound$objective)
}

Using the function above, the value of τ−1(α) = VaRα(L+) for α ∈ [0, 1] has been computed by the R function VaR_standard:

VaR_standard <- function(alpha, pF1, pF2) {
  # search for s with standard_bound(s) = alpha, widening or narrowing
  # the bracket [s_laag, s_hoog] around the previous iterate s_vorig
  # (Dutch variable names: vorig = previous, hoog = high, laag = low)
  s <- 100
  s_vorig <- 0
  s_hoog <- 150
  s_laag <- 75
  while (abs(s - s_vorig) > 0.00001) {
    s_vorig <- s
    if (standard_bound(s, pF1 = pF1, pF2 = pF2) == alpha) {
      s <- s
    } else if (standard_bound(s, pF1 = pF1, pF2 = pF2) < alpha) {
      s <- (s_vorig + s_hoog) / 2
      s_hoog <- s_laag * 2
      s_laag <- s_vorig
    } else {
      s <- (s_vorig + s_laag) / 2
      s_hoog <- s_vorig
      s_laag <- s_hoog / 2
    }
  }
  return(s)
}
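As an illustrative cross-check of the R code above, the same computation can be sketched in Python. The grid-based inner maximization and the plain bisection (which assumes τ is nondecreasing in s) are simplifying assumptions of this sketch, not the thesis's implementation.

```python
def tau(s, pF1, pF2, n=1000):
    """Standard bound tau(s): approximate the supremum over x in [0, s]
    of F1(x) + F2(s - x) - 1 on an equally spaced grid of n + 1 points."""
    return max(pF1(s * k / n) + pF2(s - s * k / n) - 1 for k in range(n + 1))

def var_standard(alpha, pF1, pF2, lo=0.0, hi=100.0, tol=1e-8):
    """Invert tau by bisection: tau is nondecreasing in s, so shrink
    the bracket [lo, hi] until it pins down s with tau(s) = alpha."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tau(mid, pF1, pF2) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With two Uniform(0, 1) marginals, tau(s) = s - 1 for s in [1, 2],
# so the worst-case VaR at level alpha is 1 + alpha.
F = lambda x: min(max(x, 0.0), 1.0)
print(round(var_standard(0.95, F, F, lo=0.0, hi=2.0), 4))  # 1.95
```

The uniform example has a closed-form answer, which makes it a convenient sanity check before plugging in the Pareto, Gamma, log-Normal, Weibull or Burr distribution functions used in the thesis.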
