

Determining Value-at-Risk Bounds of Aggregate Risks through Copula Theory

— Using Rearranging Methods in R

Maurits Carsouw

Master’s Thesis to obtain the degree in Actuarial Science and Mathematical Finance
University of Amsterdam
Faculty of Economics and Business
Amsterdam School of Economics

Author: Maurits Carsouw
Student nr: 10455043
Email: maurits carsouw@live.nl
Date: August 31, 2014
Supervisor: Dr. Umut Can


Abstract

In financial risk management, risk measures are used to determine the minimum extra cash required on top of a given financial position to make the related risks acceptable to the regulator. The main disadvantage of the widely used risk measure called Value-at-Risk (VaR) is that it is not subadditive and hence fails to be coherent. The VaR of the aggregate risk of an asset portfolio is subject to the dependence structure between the individual asset risks. Copula theory provides a mathematical way to uniquely describe any such dependence structure. Given a portfolio of risky assets, the worst- and best-case VaR scenarios are characterized by the dependence structures (copulas) yielding the sharp upper and lower VaR bounds. For portfolios consisting of d = 2 assets, general analytical expressions for the VaR bounds are known. For higher dimensions d > 2, known analytical expressions require the assumption of a “homogeneous” portfolio consisting of identically distributed asset risks. However, distribution fitting of the log-returns of the stock market indices CAC 40, SMI, and Dow Jones exemplifies that in practice, portfolios are typically non-homogeneous. In particular, in the case of these non-homogeneous portfolios the Rearrangement Algorithm (RA) can be applied to numerically compute VaR bounds up to dimensions d ≈ 600. The performance of the RA is analyzed by applying the RA to (light-tailed) Exponential and (heavy-tailed) Pareto marginals and comparing the results to their analytical counterparts. Also the size of the “RA range” of possible VaR bounds is investigated. The claim that the accuracy of the RA is not affected by the type of the marginals used cannot be rejected, since the analytical VaR bound lies in the RA range in both the Exponential and the Pareto case, for all considered dimensions d and probability levels α. However, it is found that the performance of the RA in terms of RA range size does depend on the choice of marginals.

Keywords Coherence, Comonotonicity, Copula, Dual bound, Fréchet class, Fréchet–Hoeffding bounds, Rearrangement Algorithm, Risk measure, Value-at-Risk, VaR bounds.


Contents

Preface

1 Introduction
  1.1 Value-at-Risk
  1.2 Coherence
  1.3 Alternatives to VaR
    1.3.1 Tail Value-at-Risk
    1.3.2 Conditional Tail Expectation
    1.3.3 Expected Shortfall
  1.4 Risk aggregation and copula theory
  1.5 Best- and worst-case scenarios

2 Analytical VaR bounds
  2.1 Standard bounds
  2.2 Dual bounds
  2.3 Homogeneous case

3 Rearrangement Algorithm
  3.1 Opposite order
  3.2 RA
  3.3 Performance

4 Partial dependence information
  4.1 Systems of marginals
  4.2 Non-overlapping marginal classes
  4.3 Applying the RA

5 Distribution fitting
  5.1 French stock market index (CAC 40)
  5.2 Swiss market index (SMI)
  5.3 Dow Jones index
  5.4 Portfolio of stocks

6 Conclusions and further research
  6.1 Conclusions
  6.2 Further research

Appendix A: R-Code VaRbound()

Appendix B: Other R-Codes

References


Preface

This thesis is submitted at the University of Amsterdam (UvA) to fulfil the requirements of a Master’s degree in Actuarial Science and Mathematical Finance. The thesis supervisor is Dr. Umut Can of the Faculty of Economics and Business, section Actuarial Science. The research was done during the period 7 May 2014 – 31 August 2014. The main focus lies on measuring the total risk of a portfolio of assets using the widely applied risk measure known as the Value-at-Risk (VaR). The VaR of a portfolio’s aggregate risk is subject to the dependence structure between the individual risks of the corresponding assets. Currently, finding “optimal” dependence structures yielding sharp bounds for the VaR of an aggregate risk is a hot topic in the field of actuarial science. The approach introduced by Puccetti & Rüschendorf (January 2012) and modified by Embrechts et al. (2013) applies rearrangement methods based on copula theory. These methods are further explored in this thesis.

First of all, we thank Dr. Umut Can for the inspiration for the topic of this thesis and for providing the essential literature. During the project, Dr. Umut Can steered the thesis in the right direction without restricting our freedom and made inspirational suggestions for possible additions, leading among other things to Chapter 5 on distribution fitting. Chapter 5 shows the practical applications of the Rearrangement Algorithm (RA) and, we feel, enriches the thesis as a whole. Furthermore, we thank Prof. Dr. Paul Embrechts, Dr. Giovanni Puccetti, and Prof. Dr. Ludger Rüschendorf for their work on the RA. All literature used in this thesis is cited in the References. Finally, we thank Dr. Bernhard Pfaff for sharing with us the R-code for the function VaRbound().


Chapter 1

Introduction

In financial risk management, risk measures are used to determine the minimum extra cash required on top of a given financial position to make the related risks acceptable to the regulator. The risk that is measured should be thought of as a random variable that captures the variability of the future value of the financial position. The risk measure expresses the risk in a single number and is in fact a real-valued function on the set of all risks.

1.1 Value-at-Risk

A well-known and perhaps the most widely used risk measure is the Value-at-Risk (VaR). For a given portfolio, time horizon and probability level α ∈ (0, 1), the VaR is an upper bound for the future loss X on the portfolio with probability α. Usually the probability level is a value α close to 1. Mathematically speaking, VaR_α(X) simply equals the α-quantile of X,

VaR_α(X) = F_X^{-1}(α) = inf {x | F_X(x) ≥ α},  (1.1)

see also Figure 1.1.
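Definition (1.1) is easy to check numerically. The snippet below (in Python, purely as an illustration; the thesis itself works in R) evaluates the α-quantile of a standard normal loss, matching the setting of Figure 1.1:

```python
from statistics import NormalDist

def value_at_risk(inv_cdf, alpha):
    """VaR_alpha(X) = F_X^{-1}(alpha): the alpha-quantile of the loss X, cf. (1.1)."""
    return inv_cdf(alpha)

# Standard normal loss at probability level alpha = 0.95, as in Figure 1.1.
z = value_at_risk(NormalDist().inv_cdf, 0.95)
print(round(z, 4))  # 1.6449, the 95% standard normal quantile
```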

This thesis focuses in particular on the VaR and the range of values it can assume in the case where the considered risk is a sum of separate (possibly dependent) risks.

1.2 Coherence

What properties should an acceptable risk measure satisfy? In answer to this question, we recall the concept of coherence (see e.g. Artzner et al., 1999). A risk measure ρ is coherent if it satisfies each of the following four conditions.

1. Positive homogeneity: for all non-negative constants λ and risks X,

ρ(λX) = λρ(X). (1.2)

2. Subadditivity: for all risks X1 and X2,

ρ(X_1 + X_2) ≤ ρ(X_1) + ρ(X_2).  (1.3)

3. Translation invariance: for all risks X and real numbers γ,

ρ(X + γr) = ρ(X) − γ, (1.4)

where r is the end-of-period price of a risk-free asset with initial value 1, so that the initial value of X equals X/r.



Figure 1.1: Given a probability level α and the distribution of loss X (for a certain time horizon) on a portfolio, the VaR simply equals the α-quantile of X. This figure shows the VaR for a standard normally distributed risk X and probability level α = 0.95.

4. Monotonicity: for all risks X_1 and X_2 with X_1 ≤ X_2 a.s.,

ρ(X_1) ≥ ρ(X_2).  (1.5)

Positive homogeneity (1.2) refers to the fact that position size directly influences the corresponding risk. The intuitive interpretation of subadditivity (1.3) is that spreading assets over a wider portfolio reduces risk, i.e.: diversification benefits are non-negative. A risk X, or a position with risk X, is called acceptable if ρ(X) ≤ 0. Correspondingly, translation invariance (1.4) implies that

ρ(X + ρ(X) · r) = 0, (1.6)

i.e.: adding an amount ρ(X) of cash (or more) to the initial position with risk X (taking into account the time-value of money) ensures an acceptable position. Furthermore, the sign of a coherent risk measure ρ(X) is such that it is monotonically decreasing (1.5) in X. In this view the VaR is often alternatively defined as

VaR^−_α(X) = −F_X^{-1}(α) = − inf {x | F_X(x) ≥ α}.  (1.7)

Besides positive homogeneity (1.2) and translation invariance (1.4), the added minus sign makes sure the VaR also satisfies monotonicity (1.5). However, in general the VaR fails subadditivity (1.3). The next example shows that the (alternative) VaR is not subadditive and hence that the VaR is not coherent.

Consider the risks X_1 and X_2 given by (X_1, X_2) = (−1, 0), (0, −1), (1, 1) with respective probabilities 0.496, 0.496, 0.008. Then F_{X_i}(−1) = 0.496, F_{X_i}(0) = 0.992, F_{X_i}(1) = 1 for i = 1, 2, and F_{X_1+X_2}(−1) = 0.992, F_{X_1+X_2}(2) = 1, so that subadditivity fails:

VaR^−_{0.99}(X_1 + X_2) = 1 > 0 + 0 = VaR^−_{0.99}(X_1) + VaR^−_{0.99}(X_2).
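This counterexample can be verified mechanically. The sketch below (Python rather than the thesis’s R, purely as a check) computes the quantile F^{-1}(α) = inf{x | F(x) ≥ α} for each discrete distribution and confirms that VaR^− fails subadditivity at α = 0.99:

```python
# Joint distribution of (X1, X2) from the counterexample above.
outcomes = [((-1, 0), 0.496), ((0, -1), 0.496), ((1, 1), 0.008)]

def quantile(dist, alpha):
    """F^{-1}(alpha) = inf{x : F(x) >= alpha} for a discrete distribution."""
    total = 0.0
    for x, p in sorted(dist):
        total += p
        if total >= alpha - 1e-12:
            return x
    return x

x1_dist = [(x1, p) for (x1, _), p in outcomes]
x2_dist = [(x2, p) for (_, x2), p in outcomes]
sum_dist = [(a + b, p) for (a, b), p in outcomes]

alpha = 0.99
var_minus = lambda dist: -quantile(dist, alpha)  # VaR^-_alpha(X) = -F_X^{-1}(alpha), cf. (1.7)
print(var_minus(sum_dist), var_minus(x1_dist) + var_minus(x2_dist))  # 1 0  -> 1 > 0 + 0
```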

1.3 Alternatives to VaR

The VaR captures an important part of the risk in a single number, always exists and is easy to interpret and communicate. However, the non-subadditivity of the VaR is a definite shortcoming. Other risk measures have been proposed in the risk theory literature as possible alternatives to the VaR. We briefly discuss some examples.


1.3.1 Tail Value-at-Risk

A well-known alternative for the VaR is the Tail Value-at-Risk (TVaR). This risk measure is not only coherent, but it also considers the entire tail of the distribution as opposed to the VaR, which only reports a single quantile. This means that besides the probability that big losses occur, the TVaR gives information on the size of big losses whenever they occur. On the downside, the TVaR does not always exist and is possibly more difficult to estimate. For any probability level α ∈ (0, 1) the TVaR is defined as the mean of all β-quantiles with β > α (see Figure 1.2),

TVaR_α(X) = (1/(1 − α)) ∫_α^1 VaR_β(X) dβ.  (1.8)

1.3.2 Conditional Tail Expectation

The Conditional Tail Expectation (CTE) is defined as the average loss in the worst (1 − α) × 100% cases (see Kaas et al., 2008),

CTE_α(X) = E[X | X > VaR_α(X)].  (1.9)

For continuous distributions, the CTE and the TVaR coincide. In fact, CTE_α(X) ≠ TVaR_α(X) holds if and only if α < F_X(VaR_α(X)), i.e.: if and only if F_X jumps over level α. The CTE is subadditive only for continuous distributions.

1.3.3 Expected Shortfall

The Expected Shortfall (ES) is defined as the expected (non-negative) loss after the VaR is used to cover the initial risk X,

ES_α(X) = E[max{X − VaR_α(X), 0}].  (1.10)

It considers the size of big losses and measures the difference between the CTE and the VaR,

ES_α(X) = CTE_α(X) − VaR_α(X).  (1.11)

Hence the ES is non-subadditive as a sum of non-subadditive functions. The relation between the TVaR, the VaR and the ES can be expressed as

TVaR_α(X) = VaR_α(X) + (1/(1 − α)) ES_α(X).  (1.12)
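Relation (1.12) can be sanity-checked numerically. In the sketch below (Python, for illustration only) the TVaR of a standard normal loss is computed directly from definition (1.8) by averaging quantiles over a fine grid, while the ES is taken from the standard-normal closed form E[max{X − z, 0}] = φ(z) − z(1 − α) with φ the standard normal density; the two sides of (1.12) then agree:

```python
import math
from statistics import NormalDist

nd = NormalDist()
alpha = 0.95

# TVaR via (1.8): average the quantiles VaR_beta over a fine grid of beta in (alpha, 1).
n = 100_000
tvar = sum(nd.inv_cdf(alpha + (1 - alpha) * (i + 0.5) / n) for i in range(n)) / n

var_ = nd.inv_cdf(alpha)                                                   # VaR_alpha, cf. (1.1)
es = math.exp(-var_**2 / 2) / math.sqrt(2 * math.pi) - var_ * (1 - alpha)  # ES_alpha, cf. (1.10)

lhs, rhs = tvar, var_ + es / (1 - alpha)                                   # relation (1.12)
print(round(lhs, 3), round(rhs, 3))  # both about 2.063
```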

1.4 Risk aggregation and copula theory

Often it is not a single asset, but a portfolio of assets or even a group of portfolios of which the risk needs to be measured. Therefore risk aggregation is of interest. It considers measuring the risk of a sum of possibly dependent risks X_1, . . . , X_d. Given the marginal distributions (marginals) F_1, . . . , F_d of these separate risks, the measurement ρ(X_1 + · · · + X_d) is a function of the dependence structure (joint distribution) between the marginals. This thesis in particular analyses the measurement ρ(X) = VaR(X) of the aggregate risk X = X_1 + · · · + X_d from the perspective of a variable dependence structure between the marginals. One of the studies that enables a view from this perspective is that of copula theory. Standard references for an introduction to copula theory are Joe (1997) and Nelsen (1999). A d-dimensional copula C : [0, 1]^d → [0, 1] is any cumulative distribution function (cdf) with uniform marginals. This definition implies that given marginals F_1, . . . , F_d, a copula C always defines a joint cdf F by

F(x_1, . . . , x_d) = C(F_1(x_1), . . . , F_d(x_d)),  (1.13)



Figure 1.2: The ES, the VaR and the TVaR for the standard normal distribution. The ES is the mean of all horizontal (dashed) lines between 0 and α. The TVaR is the mean of all horizontal (dotted) lines between α and 1.

and thus fixes the dependence structure between the marginals. The well-known Sklar’s Theorem (see e.g. Schmidt, 2006) conversely states that given any joint cdf F with continuous marginals F_1, . . . , F_d, there exists a unique copula coupling the marginals to their dependence structure, i.e.: satisfying (1.13). Thus, copula theory provides a mathematical way to uniquely describe any dependence structure. For continuous random variables a copula can alternatively be defined as the joint cdf of the ranks F_1(X_1), . . . , F_d(X_d), i.e.:

C(α_1, . . . , α_d) = P(F_1(X_1) ≤ α_1, . . . , F_d(X_d) ≤ α_d).  (1.14)

Next, we consider the two-dimensional or bivariate case and give some examples of copulas for a pair (X, Y) of random variables.

Example 1.4.1. If X and Y are independent, F_{X,Y}(x, y) = F_X(x) F_Y(y). Hence, the independence copula is given by

C(u, v) = uv. (1.15)

Example 1.4.2. If the pair (X, Y) is comonotonic, i.e.: in the case of perfect positive dependence, the copula of (X, Y) is given by the Fréchet–Hoeffding upper bound,

C(u, v) = min{u, v}. (1.16)

The copula in (1.16) is also known as the comonotonic or maximum copula and is a sharp upper bound for all (bivariate) copulas (see Figure 1.3).

Example 1.4.3. If the pair (X, Y) is countermonotonic, i.e.: in the case of perfect negative dependence, the copula of (X, Y) is given by the Fréchet–Hoeffding lower bound,

C(u, v) = max{u + v − 1, 0}.  (1.17)

The copula in (1.17) is also known as the countermonotonic or minimum copula and is a sharp lower bound for all (bivariate) copulas (see Figure 1.3).

A number of parametric families of copulas are named after and generated from well-known (bivariate) probability distributions.

Example 1.4.4. The bivariate Normal copula is given by

C(u, v) = Φ_{2,ρ}(Φ^{-1}(u), Φ^{-1}(v)),  (1.18)


Figure 1.3: The pyramid (tetrahedron) shown in the figure contains the images of all bivariate copulas C(u, v). The surface of the pyramid corresponds to the Fréchet–Hoeffding bounds. The front side is given by the comonotonic copula C(u, v) = min{u, v} and the back side and base are given by the countermonotonic copula C(u, v) = max{u + v − 1, 0}.

where Φ is the standard normal cdf and Φ_{2,ρ} is the bivariate standard normal cdf with correlation parameter ρ ∈ (−1, 1). Taking the limits ρ ↓ −1 and ρ ↑ 1 in (1.18) yields the respective Fréchet–Hoeffding bounds (1.17) and (1.16).

Example 1.4.5. The bivariate Pareto copula is given by

C(u, v) = ((1 − u)^{−1/γ} + (1 − v)^{−1/γ} − 1)^{−γ} + u + v − 1,  (1.19)

where the difference with the Fréchet–Hoeffding lower bound (1.17) is determined by the dependence parameter γ > 0.

Archimedean copula families arise from convex continuous generator functions φ : [0, 1] → [0, ∞) that strictly decrease to φ(1) = 0. Let φ^{[−1]}(t) = φ^{−1}(t) 1_{[0,φ(0)]}(t) denote the pseudo-inverse of a given generator φ; then an Archimedean copula family is generated by

C(u, v) = φ^{[−1]}(φ(u) + φ(v)).  (1.20)

Example 1.4.6. The Archimedean copula family generated by φ(t) = (−log(t))^θ with θ ∈ [1, ∞) is known as the Gumbel copula family and is given by

C(u, v) = exp( −[ (−log(u))^θ + (−log(v))^θ ]^{1/θ} ).  (1.21)

In particular, the independence copula (1.15) is the Gumbel copula with θ = 1.

Example 1.4.7. The Archimedean copula family generated by φ(t) = (1/θ)(t^{−θ} − 1) with θ ∈ [−1, ∞)\{0} is known as the Clayton copula family and is given by

C(u, v) = (max(u^{−θ} + v^{−θ} − 1, 0))^{−1/θ}.  (1.22)

Note that the Fréchet–Hoeffding lower bound (1.17) is the Clayton copula with θ = −1.
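The limiting relations in Examples 1.4.2–1.4.7 are cheap to verify on a grid. The Python sketch below (an illustration of the formulas above, not thesis code) checks that the independence copula lies between the Fréchet–Hoeffding bounds (1.16)–(1.17), that the Gumbel copula with θ = 1 reduces to (1.15), and that the Clayton copula with θ = −1 reduces to (1.17):

```python
import math

indep = lambda u, v: u * v              # independence copula (1.15)
M = lambda u, v: min(u, v)              # Frechet-Hoeffding upper bound (1.16)
W = lambda u, v: max(u + v - 1.0, 0.0)  # Frechet-Hoeffding lower bound (1.17)

def gumbel(u, v, theta):                # Gumbel family (1.21)
    return math.exp(-((-math.log(u)) ** theta + (-math.log(v)) ** theta) ** (1 / theta))

def clayton(u, v, theta):               # Clayton family (1.22)
    return max(u ** -theta + v ** -theta - 1.0, 0.0) ** (-1 / theta)

grid = [i / 10 for i in range(1, 10)]
for u in grid:
    for v in grid:
        assert W(u, v) - 1e-9 <= indep(u, v) <= M(u, v) + 1e-9  # Frechet-Hoeffding bounds
        assert abs(gumbel(u, v, 1.0) - indep(u, v)) < 1e-9      # Gumbel theta = 1
        assert abs(clayton(u, v, -1.0) - W(u, v)) < 1e-9        # Clayton theta = -1
print("all copula identities hold on the grid")
```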



1.5 Best- and worst-case scenarios

The total risk of a portfolio of risky assets is subject to the dependence structure between the assets. This dependence structure is in general difficult to ascertain precisely, but can be estimated when market data is available. In particular the interrelation between the economic sectors to which the assets belong can be of influence. However, any uncertainty in the true dependence structure leads to uncertainty in the measured risk. This uncertainty can be taken into account by establishing a range for the measured risk, using the fact that the measured risk is a (mathematical) function of the dependence structure between the individual risks. In fact, assuming fixed marginals, the range of risk measurements arises as the image of the risk measure on the set of all dependence structures. The boundaries of this range equal the measured risk in the best- and worst-case dependence scenarios.

In risk aggregation many optimization problems are solved by the Fréchet–Hoeffding bounds for copulas. Indeed, the idea that negatively dependent risks cancel each other out and stabilize a portfolio (hedging), while positively dependent risks increase volatility and hence the risk, is intuitive. However, this idea can be misleading in case the VaR is used as a risk measure, meaning that the Fréchet–Hoeffding bounds do not necessarily provide the minimum and maximum VaRs of an aggregate risk. This is related to the fact that the VaR is non-subadditive. In fact, under comonotonicity the VaR is additive (see McNeil et al., 2005), while for any probability level there exists a dependence structure that yields a VaR that is larger than the VaR under comonotonicity. Key questions in VaR aggregation are: What dependence structure between the individual risks minimizes/maximizes the VaR of the aggregate risk? Recent articles focusing on these questions include e.g. Embrechts et al. (2005), Embrechts et al. (2006), Laeven (2009), Puccetti & Rüschendorf (January 2012), Puccetti & Rüschendorf (June 2012), and Puccetti & Rüschendorf (2013). Possible solutions and analytical solvability may depend on marginal information and are non-trivial in particular due to the non-subadditivity of the VaR, making VaR aggregation a topic of academic interest in the fields of financial and actuarial risk management.

Next, we give a brief overview of the research questions on which this thesis focuses. Given a random vector X = (X_1, . . . , X_d) of risks with fixed marginals F_1, . . . , F_d, let the Fréchet class F(F_1, . . . , F_d) of X denote the set of all possible joint distributions F_X on R^d having marginals F_1, . . . , F_d. Then the VaR bounds are obtained by fixing the marginals and taking the infimum and supremum of the VaR over the set of all possible dependence structures. For the aggregate risk X = X_1 + · · · + X_d the lower and upper VaR bounds at probability level α are defined as

\underline{VaR}_α(X) = inf { VaR_α(X) | F_X ∈ F(F_1, . . . , F_d) },  (1.23)

\overline{VaR}_α(X) = sup { VaR_α(X) | F_X ∈ F(F_1, . . . , F_d) }.  (1.24)

Let C_X and C_d denote respectively the copula of X and the set of all d-dimensional copulas; then Sklar’s Theorem implies that the bounds in (1.23) and (1.24) can equivalently be defined as

\underline{VaR}_α(X) = inf { VaR_α(X) | C_X ∈ C_d },  (1.25)

\overline{VaR}_α(X) = sup { VaR_α(X) | C_X ∈ C_d }.  (1.26)

We will also refer to (1.25) and (1.26) as the best- and worst-case VaR, respectively.

• Under what circumstances can VaR bounds be found analytically?

Embrechts et al. (2013) investigate the above question applying the so-called “dual bound”, but particularly introduce a numerical algorithm that is of use in case VaR bounds cannot be found analytically. Along the lines of the following questions, we analyse their Rearrangement Algorithm, test its accuracy in different circumstances (heavy-tailed vs. light-tailed distributions) and apply it in an example of dependence modelling through copulas using real market data.

• How is the Rearrangement Algorithm of Embrechts et al. (2013) applied to estimate the VaR bounds numerically?

• Does the Rearrangement Algorithm cover all circumstances where the VaR bounds cannot be found analytically?

• Can the numerical VaR bounds be improved in terms of sharpness?

• How can the Rearrangement Algorithm be applied to real market data?

The software that is used to create the numerical examples in this thesis is R, a language and environment for statistical computing. In particular the R-package “QRM” (Pfaff et al., 2014) is applied to examine quantitative risk management concepts. Throughout the text, some examples are complemented with corresponding R-codes. For other cases we refer to the Appendix.


Chapter 2

Analytical VaR bounds

This chapter analytically describes best and worst VaR scenarios and contains three sections. Known solutions for the case d = 2 (see e.g. Rüschendorf, 1982) are presented as a particular case of the more general so-called standard bounds (Section 2.1) introduced by Frank et al. (1987). The standard bounds fail to be sharp for dimensions d ≥ 3, in which case it is of interest to investigate improved versions of the standard bounds (Section 2.2) known as the dual bounds, introduced by Embrechts & Puccetti (2006). In general, the dual bounds are difficult to evaluate. However, in the “homogeneous” case (Section 2.3), where the marginals F_1 = . . . = F_d are identical, analytical results have been obtained by Wang & Wang (2011). Analytical results are in particular difficult to find in the case where different marginals and any number of dimensions d ≥ 3 are allowed. In fact, an analytical expression for the best-case VaR does not exist in this general setting. An analytical expression for the worst-case VaR (assuming its existence) depends, in contrast to the two-dimensional case, on the given marginals (see e.g. Rüschendorf, 1982), but such an expression has not yet been found.

2.1 Standard bounds

The question of the worst- and best-case VaR in the two-dimensional scenario was first answered by Makarov (1981). Shortly after, Rüschendorf (1982) provided another proof of the same solution using duality theory. The result of Rüschendorf (1982, Proposition 1) translates to the equations

inf {P(X_1 + X_2 < s) | C_X ∈ C_2} = sup_{x∈R} {F_1(x−) + F_2(s − x)} − 1,  (2.1)

and

sup {P(X_1 + X_2 ≤ s) | C_X ∈ C_2} = inf_{x∈R} {F_1(x−) + F_2(s − x)},  (2.2)

where F_1(x−) denotes the left-hand limit of F_1 at x. Note that equations (2.1) and (2.2) do not explicitly express sharp VaR bounds, but determine sharp bounds for the cdf of X_1 + X_2. Since cdfs are non-decreasing, optimizing the cdf is equivalent to optimizing the VaR. Furthermore, the sharp bounds (2.1) and (2.2) are expressed in terms of the marginals F_1 and F_2 and thus provide full information on the optimal dependence structure in between.
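For continuous marginals F(x−) = F(x), so the sharp bound (2.1) can be approximated by a simple grid search over x (truncated at 0, as in (2.3)). The Python sketch below (an illustration, not the thesis’s R code) does this for two Exponential(1) risks; for these marginals the supremum is attained at x = s/2, giving the closed form max{1 − 2e^{−s/2}, 0}:

```python
import math

def F(x):
    """Exponential(1) cdf; continuous, so F(x-) = F(x)."""
    return 1.0 - math.exp(-x) if x > 0 else 0.0

def lower_bound_cdf(s, n=20_000):
    """Sharp lower bound (2.1) on P(X1 + X2 < s), grid search over x in [0, s]."""
    best = 0.0
    for i in range(n + 1):
        x = s * i / n
        best = max(best, F(x) + F(s - x) - 1.0)
    return best

s = 8.0
numeric = lower_bound_cdf(s)
closed_form = 1.0 - 2.0 * math.exp(-s / 2)
print(round(numeric, 6), round(closed_form, 6))  # both 0.963369
```

Setting this bound equal to α and solving for s recovers the known worst-case VaR, 2 log(2/(1 − α)), for two Exponential(1) risks.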

In fact, the result of Rüschendorf (1982) for d = 2 is a particular case of the later developed so-called standard bounds for dimensions d ≥ 2 (see e.g. Puccetti & Rüschendorf, June 2012, Theorem 2.7). Indeed, for the aggregate risk X = X_1 + · · · + X_d with given marginals F_1, . . . , F_d, define the standard lower bound

\underline{B}_X(s) = max( sup_{t∈T(s)} {F_1(t_1−) + F_2(t_2) + · · · + F_d(t_d)} − (d − 1), 0 )  (2.3)


and the standard upper bound

\overline{B}_X(s) = min( inf_{t∈T(s)} {F_1(t_1−) + · · · + F_d(t_d−)} , 1 ),  (2.4)

where

T(s) = { t ∈ R^d : ∑_{i=1}^d t_i = s }.  (2.5)

Then (2.3) and (2.4) are bounds for the lower tail probability

P(X < s) ∈ [\underline{B}_X(s), \overline{B}_X(s)].  (2.6)

It must be noted that the standard bounds fail to be sharp when d ≥ 3. The dual bounds introduced by Embrechts & Puccetti (2006) are based on the dual approach of Rüschendorf (1982) and lead to significant improvements of the standard bounds. In the next section, the dual bounds are further investigated.

2.2 Dual bounds

A useful tool for finding bounds for functions of sums of dependent risks is the dual approach introduced by Rüschendorf (1982). Applying this tool to the standard bounds \underline{B}_X(s) and \overline{B}_X(s) as defined in (2.3) and (2.4) yields their dual counterparts \underline{D}_X(s) and \overline{D}_X(s). The idea is to transform the standard bounds for the lower tail probability (2.6) into correspondingly sharper (dual) bounds for the upper tail probability

P(X > s) ∈ [\underline{D}_X(s), \overline{D}_X(s)].  (2.7)

Embrechts & Puccetti (2006) define the dual bounds as

\underline{D}_X(s) = sup_{t ∈ \underline{T}(s)} max( [ ∑_{i=1}^d ∫_{t_i}^{s − ∑_{j≠i} t_j} \tilde{F}_i(x) dx ] / (s − t_1 − · · · − t_d) − (d − 1), 0 )  (2.8)

and

\overline{D}_X(s) = inf_{t ∈ \overline{T}(s)} min( [ ∑_{i=1}^d ∫_{t_i}^{s − ∑_{j≠i} t_j} \tilde{F}_i(x) dx ] / (s − t_1 − · · · − t_d), 1 ),  (2.9)

where

\underline{T}(s) = { t ∈ R^d : ∑_{i=1}^d t_i > s },  \overline{T}(s) = { t ∈ R^d : ∑_{i=1}^d t_i < s },  (2.10)

and where \tilde{F}_i is the complementary cdf (ccdf) of F_i given by \tilde{F}_i(t) = 1 − F_i(t), i = 1, . . . , d. This construction of the dual bounds satisfies (2.7). It is easy to see that this implies the possibly non-sharp VaR bounds given by

\underline{D}_X^{-1}(1 − α) ≤ \underline{VaR}_α(X)  and  \overline{VaR}_α(X) ≤ \overline{D}_X^{-1}(1 − α),  (2.11)

see e.g. Figure 2.1. The dual bounds improve the standard bounds in the following sense,

\underline{B}_X^{-1}(α) ≤ \underline{D}_X^{-1}(1 − α)  and  \overline{D}_X^{-1}(1 − α) ≤ \overline{B}_X^{-1}(α).  (2.12)

Still, the dual bounds are not sharp in general. The next section in particular investigates under which assumptions the upper dual bound is sharp, yielding analytical results for the worst-case VaR.



Figure 2.1: The graph shows the cdf F and ccdf \tilde{F} = 1 − F of a Pareto distributed random variable X. The upper dual bound satisfies \tilde{F} ≤ \overline{D}_X, so that VaR_α(X) = \tilde{F}^{-1}(1 − α) ≤ \overline{D}_X^{-1}(1 − α). For Pareto distributed variables X this bound is sharp, yielding \overline{VaR}_α(X) = \overline{D}_X^{-1}(1 − α) for all α ∈ (0, 1). Analogously, \underline{VaR}_α(X) = \underline{D}_X^{-1}(1 − α).

2.3 Homogeneous case

Consider the homogeneous case where the risk vector X = (X1, . . . , Xd) has fixed identical continuous marginals F1 = · · · = Fd and an arbitrary dependence structure. Under some extra assumptions, analytical results for the best- and worst-case VaR are given by Wang & Wang (2011) (in fact, they solve a more general problem). This section presents the necessary assumptions to ensure sharpness of the upper dual bound. Under these assumptions, the analytical upper VaR bound is obtained and applied in two examples.

In the homogeneous case F_1 = · · · = F_d, the dual bounds as defined in (2.8) and (2.9) reduce to

\underline{D}_X(s) = sup_{t > s/d} { [ d ∫_t^{s−(d−1)t} \tilde{F}_1(x) dx ] / (s − d·t) − (d − 1) }  (2.13)

and

\overline{D}_X(s) = inf_{t < s/d} { [ d ∫_t^{s−(d−1)t} \tilde{F}_1(x) dx ] / (s − d·t) }.  (2.14)
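The reduced upper dual bound (2.14) can be evaluated directly by combining a grid search over t with numerical integration of the ccdf. The Python sketch below (an illustration under the assumption that the minimizer lies in [0, s/d), which holds here since t_0 in (2.20) is positive) does this for d = 8 Pareto(2) marginals and compares the result with the closed form 4d(d − 1)/(d + s)² obtained later in (2.21):

```python
import math

d = 8
ccdf = lambda x: (1.0 + x) ** -2.0      # Pareto(2) ccdf, cf. (2.18)

def upper_dual(s, n_t=400, n_int=400):
    """Upper dual bound (2.14): minimize the averaged ccdf integral over t in [0, s/d)."""
    best = float("inf")
    for k in range(n_t):
        t = (s / d) * k / n_t
        hi = s - (d - 1) * t            # upper integration limit t1
        width = hi - t                  # equals s - d*t
        # midpoint rule for the integral of the ccdf over [t, t1]
        integral = sum(ccdf(t + width * (i + 0.5) / n_int) for i in range(n_int)) * width / n_int
        best = min(best, d * integral / width)
    return best

s = 100.0
numeric = upper_dual(s)
closed = 4.0 * d * (d - 1) / (d + s) ** 2   # closed form, cf. (2.21)
print(round(numeric, 6), round(closed, 6))
```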

Sharpness of the upper dual bound (2.14), i.e.:

\overline{D}_X(s) = sup {P(X > s) | C_X ∈ C_d},  (2.15)

holds if and only if \overline{D}_X^{-1}(1 − α) = \overline{VaR}_α(X). Embrechts et al. (2013, Proposition 4) investigate sharpness of the upper dual bound. Let continuous marginals F_1 = · · · = F_d with probability density functions (pdf’s) f_1 = · · · = f_d satisfy the following criteria.

1. Unbounded support: the support of f_1 is not contained in a finite interval;

2. Ultimately decreasing densities: there exists an x_0 ∈ R such that f_1(x) ≥ f_1(y) for all x < y larger than x_0;

3. There exists a threshold s_0 ∈ R such that for all s > s_0, the infimum in (2.14) is attained at some t_0 = t_0(s) < s/d, i.e.: for t_1 = s − (d − 1)t_0 the upper dual bound equals

\overline{D}_X(s) = [ d ∫_{t_0}^{t_1} \tilde{F}(x) dx ] / (t_1 − t_0) = \tilde{F}(t_0) + (d − 1)\tilde{F}(t_1),  (2.16)

where t_0 ∈ [F_1^{-1}(1 − \overline{D}_X(s)), s/d).

Then for α ∈ (0, 1) sufficiently large, the upper dual bound is sharp, i.e.: the sharp analytical upper VaR bound is obtained via

\overline{VaR}_α(X) = \overline{D}_X^{-1}(1 − α).  (2.17)

Next, we show two homogeneous examples of how to compute the upper VaR bound using (2.17).

Example 2.3.1. Let X_1, . . . , X_d be Pareto distributed with tail parameter θ = 2, i.e.:

F_i(x) = 1 − (1 + x)^{−2}, x > 0,  (2.18)

for i = 1, . . . , d. Then (2.16) can be written as

\overline{D}_X(s) = d / [(1 + t_0)(1 + t_1)] = (1 + t_0)^{−2} + (d − 1)(1 + t_1)^{−2},  (2.19)

where t_1 = s − (d − 1)t_0. Solving the second equation in (2.19) yields expressions for t_0 and t_1,

t_0 = (s − d + 2) / (2(d − 1)),   t_1 = (s + d)/2 − 1.  (2.20)

Thus, expressions for \overline{D}_X and its inverse \overline{D}_X^{-1} are obtained,

\overline{D}_X(s) = 4d(d − 1) / (d + s)^2,   \overline{D}_X^{-1}(s) = −d + 2√(d(d − 1)) / √s.  (2.21)

Finally, equation (2.17) is applied to obtain the worst-case VaR. Take e.g. α = 0.99 and d = 8, then

\overline{VaR}_{0.99}(X) = −8 + 40√14 ≈ 141.66630.  (2.22)

In particular, (2.22) is an upper bound for the VaR in the case of d = 8 comonotonic Pareto(2) marginals, where the (additive) VaR for α = 0.99 equals

VaR^+_{0.99}(X) = ∑_{i=1}^{8} VaR_{0.99}(X_i) = 8((1 − 0.99)^{−1/2} − 1) = 72.  (2.23)
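The numbers in (2.22) and (2.23) are easy to reproduce. The Python sketch below (illustrative only; the thesis works in R) evaluates the worst-case VaR via the inverse upper dual bound (2.21) at s = 1 − α, together with the comonotonic (additive) VaR for d = 8 Pareto(2) marginals:

```python
import math

d, alpha = 8, 0.99

# Worst-case VaR via (2.17) and (2.21): inverse upper dual bound at 1 - alpha.
worst = -d + 2.0 * math.sqrt(d * (d - 1)) / math.sqrt(1.0 - alpha)

# Comonotonic VaR via (2.23): sum of d identical Pareto(2) quantiles.
comonotonic = d * ((1.0 - alpha) ** -0.5 - 1.0)

print(round(worst, 5), round(comonotonic, 5))  # about 141.6663 vs 72.0
```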

Example 2.3.2. Let X_1, . . . , X_d be Exponentially distributed with rate parameter λ = 1, i.e.:

F_i(x) = 1 − exp(−x), x ≥ 0,  (2.24)

for i = 1, . . . , d. Then equation (2.16) with t_1 = s − (d − 1)t_0 becomes

\overline{D}_X(s) = [ d e^{−t_0} (1 − exp(−s + t_0 d)) ] / (s − t_0 d) = e^{−t_0} (1 + (d − 1) exp(−s + t_0 d)).  (2.25)

Numerically solving the second equation in (2.25) for t_0 yields an expression for \overline{D}_X(s) and hence \overline{D}_X^{-1}(s). We use the substitution x_d = s − t_0 d and solve

(1 − d − d/x_d) exp(−x_d) + d/x_d − 1 = 0  (2.26)

for x_d (e.g.: d = 8 yields x_d ≈ 7.97812). Next,

t_0 = (s − x_d) / d   and   t_1 = (s − x_d + d·x_d) / d

imply

\overline{D}_X(s) = (d/x_d) (1 − exp(−x_d)) exp((x_d − s)/d)  (2.28)

and hence

\overline{D}_X^{-1}(s) = d log(1 − exp(−x_d)) + d log(d) − d log(s) − d log(x_d) + x_d.  (2.29)

In conclusion, the worst-case VaR is obtained via (2.17). Take e.g. α = 0.99 and d = 8, then

\overline{VaR}_{0.99}(X) ≈ 44.83865.  (2.30)

In particular, (2.30) is an upper bound for the VaR in the case of d = 8 comonotonic Exponential(1) marginals, where the (additive) VaR for α = 0.99 equals

VaR^+_{0.99}(X) = ∑_{i=1}^{8} VaR_{0.99}(X_i) = −8 log(1 − 0.99) ≈ 36.84136.  (2.31)
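Both numbers in this example can be reproduced by solving (2.26) with a simple root finder and substituting into (2.29). A Python sketch (illustrative only):

```python
import math

d, alpha = 8, 0.99

# Solve (2.26): (1 - d - d/x) e^{-x} + d/x - 1 = 0 by bisection on [1, 50].
f = lambda x: (1.0 - d - d / x) * math.exp(-x) + d / x - 1.0
lo, hi = 1.0, 50.0                     # f(lo) > 0 and f(hi) < 0 for d = 8
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
xd = 0.5 * (lo + hi)

# Worst-case VaR via (2.17): the inverse upper dual bound (2.29) at s = 1 - alpha.
s = 1.0 - alpha
worst = (d * math.log(1.0 - math.exp(-xd)) + d * math.log(d)
         - d * math.log(s) - d * math.log(xd) + xd)
print(round(xd, 4), round(worst, 4))  # about 7.9781 and 44.8386
```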

For more examples and applications of the dual bounds, see e.g. Puccetti & Rüschendorf (June 2012, Section 5).


Chapter 3

Rearrangement Algorithm

In this chapter we analyse the Rearrangement Algorithm (RA) introduced by Embrechts et al. (2013). The RA produces accurate numerical estimates of the VaR bounds as defined in (1.23) and (1.24) for dimensions up to d ≈ 600. The RA is a modification of the RA introduced by Puccetti & Rüschendorf (January 2012), which works well for dimensions up to d ≈ 30. A particular advantage of the RA is that it can also be used in the inhomogeneous case, where different marginals are allowed and analytical solutions may not be available. Section 3.1 explains the concept of opposite order, on which the RA is based. Section 3.2 contains a formal definition of the RA and an example of its applications in R. Section 3.3 tests the accuracy of the RA on heavy-tailed and light-tailed marginals.

3.1 Opposite order

Two vectors x, y ∈ R^N are called oppositely ordered if and only if (x_j − x_k)(y_j − y_k) ≤ 0 holds for all j, k ∈ {1, . . . , N}, i.e.: x and y are oppositely ordered if and only if the slopes of the functions i ↦ x_i and i ↦ y_i are of opposite sign (or at least one of the slopes is zero). It is intuitively clear that the variance of the sum x_i + y_i tends to be small if the vectors x and y are oppositely ordered. More generally, given a matrix, oppositely ordering each column (by rearranging its entries) w.r.t. the sum of the other columns minimizes the variance of the row-wise sums. In the rest of this thesis any matrix is called oppositely ordered if and only if each of its columns is oppositely ordered to the sum of the other columns.

The RA is based on the concept of opposite order and estimates the lower and upper VaR bounds analogously. To compute \overline{VaR}_α(X), the RA fixes a typically large integer N ≥ 1 and defines an N-point discretization α_1 < · · · < α_N of the interval (α, 1). For each marginal F_j the probability levels α_1, . . . , α_N are iteratively rearranged into permutations (α_{1j}, . . . , α_{Nj}), j = 1, . . . , d that minimize the variance over i of the sums ∑_{j=1}^d F_j^{-1}(α_{ij}). Indeed, a smaller variance yields a more accurate estimate of the VaR bound,

\overline{VaR}_α(X) ≈ min_{1≤i≤N} ∑_{j=1}^d F_j^{-1}(α_{ij}).  (3.1)

In fact, the permutations (α_{1j}, . . . , α_{Nj}), j = 1, . . . , d oppositely order the matrix (F_j^{-1}(α_{ij}))_{ij} and thus minimize the variance of its row-wise sums. Furthermore, the true RA uses two separate N-point discretizations of (α, 1) simultaneously to obtain extra information on the VaR bound (a formal definition of the RA is given in Section 3.2). To compute the lower VaR bound, the RA uses N-point discretizations of (0, α) instead of (α, 1) and estimates

\underline{VaR}_α(X) ≈ max_{1≤i≤N} ∑_{j=1}^d F_j^{-1}(α_{ij}).  (3.2)


14 Maurits Carsouw — Determining Value-at-Risk Bounds through Copula Theory

Next, using R we reproduce an example on opposite orderings and the RA from a recent talk of Embrechts et al. (2013).

Example 3.1.1. Consider d marginals of Pareto(2) type, i.e.:

    F_i(x) = 1 − (1 + x)^{−2},  x > 0,    (3.3)

for i = 1, . . . , d and define an N-point discretization of (α, 1) by

    α_i = α + (1 − α)(i − 1)/N,  i = 1, . . . , N.    (3.4)

Note that the matrix X^α = (x^α_{ij}) given by x^α_{ij} = F_j^{−1}(α_i) is now fixed. For d = 3, α = 0.99 and N = 30, this example applies rearranging methods to find d permutations (α_{1j}, . . . , α_{Nj}), j = 1, . . . , d of (α_1, . . . , α_N) such that the matrix X∗ = (x∗_{ij}) given by x∗_{ij} = F_j^{−1}(α_{ij}) is oppositely ordered. The following R-code produces the matrix X^α including row-wise and column-wise sums, also see Table 3.1.

alpha <- 0.99
N <- 30
mqf <- function(x){(1-x)^(-1/2)-1}        # Pareto(2) quantile function, see (3.3)
qmarginals <- list(mqf, mqf, mqf)
d <- length(qmarginals)                   # Fix d=3
alpha.N <- rep(NA, N)
for(i in 1:N){
  alpha.N[i] <- alpha+(1-alpha)*(i-1)/N   # see (3.4)
}
X <- matrix(data=NA, nrow=N, ncol=d)
for(j in 1:d){
  X[,j] <- qmarginals[[j]](alpha.N)       # define the matrix F_j^{-1}(alpha_i)
}
X.rowsums <- cbind(X, rowSums(X))
TableA <- rbind(X.rowsums, colSums(X.rowsums))
TableA[N+1, d+1] <- NA
print(TableA)

            [,1]       [,2]       [,3]          Σ
 [1,]   9.000000   9.000000   9.000000   27.00000
 [2,]   9.170953   9.170953   9.170953   27.51286
 [3,]   9.350983   9.350983   9.350983   28.05295
 [4,]   9.540926   9.540926   9.540926   28.62278
 [5,]   9.741723   9.741723   9.741723   29.22517
 [6,]   9.954451   9.954451   9.954451   29.86335
 [7,]  10.180340  10.180340  10.180340   30.54102
 [8,]  10.420805  10.420805  10.420805   31.26241
 [9,]  10.677484  10.677484  10.677484   32.03245
[10,]  10.952286  10.952286  10.952286   32.85686
[11,]  11.247449  11.247449  11.247449   33.74235
[12,]  11.565617  11.565617  11.565617   34.69685
[13,]  11.909944  11.909944  11.909944   35.72983
[14,]  12.284223  12.284223  12.284223   36.85267
[15,]  12.693064  12.693064  12.693064   38.07919
[16,]  13.142136  13.142136  13.142136   39.42641
[17,]  13.638501  13.638501  13.638501   40.91550


[18,]  14.191091  14.191091  14.191091   42.57327
[19,]  14.811388  14.811388  14.811388   44.43416
[20,]  15.514456  15.514456  15.514456   46.54337
[21,]  16.320508  16.320508  16.320508   48.96152
[22,]  17.257419  17.257419  17.257419   51.77226
[23,]  18.364917  18.364917  18.364917   55.09475
[24,]  19.701967  19.701967  19.701967   59.10590
[25,]  21.360680  21.360680  21.360680   64.08204
[26,]  23.494897  23.494897  23.494897   70.48469
[27,]  26.386128  26.386128  26.386128   79.15838
[28,]  30.622777  30.622777  30.622777   91.86833
[29,]  37.729833  37.729833  37.729833  113.18950
[30,]  53.772256  53.772256  53.772256  161.31677
 Σ    494.999201 494.999201 494.999201

Table 3.1: The matrix X^α = (x^α_{ij}) given by x^α_{ij} = F_j^{−1}(α_i) for marginals F_j, j = 1, . . . , d of the Pareto(2) type, where α = 0.99, N = 30, d = 3 and α_i as defined in (3.4). The columns are identical since this case is homogeneous. Row-wise and column-wise sums are added. The minimum of the row-wise sums equals 27.00000.

Next, we iteratively rearrange the entries of the columns of X^α such that an oppositely ordered matrix X∗ is obtained. This maximizes the minimum and minimizes the variance of the row-wise sums. There are many possible iterative processes that can be used. Our approach is to run through a list of the N(N − 1)/2 pairs {i_1, i_2} of entries of the j-th column and to swap entries i_1 and i_2 if and only if the pair is not yet oppositely ordered to the corresponding pair in the vector of row-wise sums (Σ_{k≠j} x^α_{ik})_i of the other columns, j = 1, 2, . . . . This process does not necessarily stop after the d-th column, but is repeated until no more swaps occur. The following R-code implements our approach.

count.x <- 0
stop.x <- FALSE
for(j in 1:d){
  X[,j] <- sample(X[,j])                              # Random permutation
}
repeat{
  for(j in 1:d){
    for(i in 1:(N*(N-1)/2)){
      if(diff(combn(X[,j],2)[,i])                     # If pair i in column j ...
         *diff(combn(rowSums(X)-X[,j],2)[,i]) > 0){   # ...is not oppositely ordered ...
        count.x <- 0
        copy.x <- X[(combn(1:N,2))[2,i],j]
        X[(combn(1:N,2))[2,i],j] <- X[(combn(1:N,2))[1,i],j]
        X[(combn(1:N,2))[1,i],j] <- copy.x            # ...swap!
      } else{
        count.x <- count.x+1
        if(count.x >= d*N*(N-1)/2){                   # Otherwise, check for convergence.
          stop.x <- TRUE
          break
        }
      }
    }
    if(stop.x) break
  }
  if(stop.x) break
}

The above R-code implies that the process stops if and only if d × N(N − 1)/2 consecutive iteration steps take place without further swaps, which means that the entries in each column of the initial matrix X^α are rearranged until an oppositely ordered matrix X∗ = (x∗_{ij}), x∗_{ij} = F_j^{−1}(α_{ij}), is found. The matrix X∗ including column-wise and row-wise sums is printed using the R-code below, also see Table 3.2.

X.rowsums <- cbind(X, rowSums(X))
TableB <- rbind(X.rowsums, colSums(X.rowsums))
TableB[N+1, d+1] <- NA
print(TableB)

            [,1]       [,2]       [,3]          Σ
 [1,]  21.360680  11.247449  12.284223   44.89235
 [2,]  10.420805  10.180340  26.386128   46.98727
 [3,]  11.247449  21.360680  12.693064   45.30119
 [4,]  26.386128  10.420805  10.180340   46.98727
 [5,]  10.677484  10.952286  23.494897   45.12467
 [6,]   9.350983   9.540926  37.729833   56.62174
 [7,]  30.622777   9.741723   9.954451   50.31895
 [8,]  37.729833   9.350983   9.540926   56.62174
 [9,]  17.257419  11.909944  15.514456   44.68182
[10,]  23.494897  10.677484  10.952286   45.12467
[11,]  53.772256   9.000000   9.170953   71.94321
[12,]  13.638501  19.701967  11.565617   44.90609
[13,]   9.741723   9.954451  30.622777   50.31895
[14,]  18.364917  12.284223  14.191091   44.84023
[15,]  19.701967  13.638501  11.247449   44.58792
[16,]  16.320508  16.320508  11.909944   44.55096
[17,]  10.952286  23.494897  10.677484   45.12467
[18,]   9.170953  53.772256   9.000000   71.94321
[19,]  15.514456  13.142136  16.320508   44.97710
[20,]  12.693064  12.693064  19.701967   45.08809
[21,]  11.565617  11.565617  21.360680   44.49191
[22,]  10.180340  26.386128  10.420805   46.98727
[23,]  11.909944  15.514456  17.257419   44.68182
[24,]  14.811388  14.811388  14.811388   44.43416
[25,]   9.540926  37.729833   9.350983   56.62174
[26,]  14.191091  17.257419  13.638501   45.08701
[27,]   9.954451  30.622777   9.741723   50.31895
[28,]   9.000000   9.170953  53.772256   71.94321
[29,]  13.142136  18.364917  13.142136   44.64919
[30,]  12.284223  14.191091  18.364917   44.84023
 Σ    494.999201 494.999201 494.999201

Table 3.2: The matrix X∗ = (x∗_{ij}) given by x∗_{ij} = F_j^{−1}(α_{ij}) is obtained by oppositely ordering the matrix X^α using iterative rearranging methods. Note that the minimum 44.43416 of the row-wise sums of X∗ is significantly larger than the minimum 27.00000 of the row-wise sums of X^α (see Table 3.1).


As the rearrangement of entries occurs only within columns, the column-wise sums of the matrices X^α and X∗ are identical (compare Tables 3.1 and 3.2), i.e.: the rearrangement does not affect the marginals. In contrast, the rearrangement determines which (couplings of) entries F_1^{−1}(·), . . . , F_d^{−1}(·) occur in each row and hence the row-wise sums. Thus, rearranging entries within columns of X^α actually means considering different couplings (copulas) of the fixed marginals. In particular, the oppositely ordered matrix X∗ represents the copula that maximizes the minimum of the row-wise sums of the marginal quantiles. Applying equation (3.1) to X∗ = (x∗_{ij}) given by x∗_{ij} = F_j^{−1}(α_{ij}) estimates the upper VaR bound. (The lower VaR bound can be estimated in an analogous example using (3.2).) For details, we refer to Section 3.2.
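Once an oppositely ordered matrix X∗ is available, evaluating the estimate (3.1) is a one-liner: it is the minimum of the row-wise sums. The sketch below uses a toy 3 × 3 stand-in matrix of our own (not the values of Table 3.2) so that the line is self-contained:

```r
# Toy stand-in for an oppositely ordered matrix X* (arbitrary illustrative values)
X.star <- cbind(c(9, 12, 21), c(21, 12, 9), c(12, 9, 21))
VaR.upper.est <- min(rowSums(X.star))   # estimate (3.1) of the upper VaR bound
print(VaR.upper.est)                    # 33 for this toy matrix
```

For the actual matrix X∗ of Table 3.2, the same line returns the minimum row-wise sum 44.43416 reported there.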

3.2 RA

In this section we formally define the RA, show how to apply it using R and give an example in which we estimate the worst-case VaR of an inhomogeneous portfolio of risks.

Rearrangement Algorithm to estimate the upper VaR bound

1. Fix a probability level α ∈ (0, 1) and an integer N ≥ 1. Larger values of N yield better final estimates, but longer computation times.

2. Consider two N-point discretizations {α_i} and {α_i^+} of the interval (α, 1), given by

       α_i = α + (1 − α)(i − 1)/N  and  α_i^+ = α + (1 − α)i/N,    (3.5)

   i = 1, . . . , N, and define two matrices X^α = (x^α_{ij}) and Y^α = (y^α_{ij}) by

       x^α_{ij} = F_j^{−1}(α_i)  and  y^α_{ij} = F_j^{−1}(α_i^+),    (3.6)

   i = 1, . . . , N, j = 1, . . . , d. Note that x^α_{ij} = y^α_{i−1,j}, so that the j-th columns of the matrices X^α and Y^α represent two stochastically ordered N-point discretizations of the upper (1 − α) parts of the supports of the marginal risks X_j, j = 1, . . . , d.

3. Randomly permute the entries x^α_{ij}, i = 1, . . . , N of the j-th column of X^α for j = 1, . . . , d. Do the same for the matrix Y^α.

4. The actual rearranging comes down to the execution of an iterative process that ultimately transforms the matrix X^α into an oppositely ordered matrix X∗. This process sequentially jumps from column to column and iteratively rearranges the entries within each column to establish an opposite ordering w.r.t. the sum of the other columns, see Example 3.1.1. Step 4 is to execute one iteration step.

5. Repeat Step 4 until no further changes occur, i.e.: until an oppositely ordered matrix X∗ = (x∗_{ij}) is found. Note that X∗ may not be unique.

6. Apply Steps 4 and 5 to the matrix Y^α and thus obtain an oppositely ordered matrix Y∗ = (y∗_{ij}). Note that Y∗ may not be unique.

7. Define

       s̲_N = min_{1≤i≤N} Σ_{1≤j≤d} x∗_{ij}  and  s̄_N = min_{1≤i≤N} Σ_{1≤j≤d} y∗_{ij}.    (3.7)

   Then s̲_N ≤ s̄_N. In conclusion, the upper VaR bound is estimated to lie in the "RA range", i.e.:

       VaR̄_α(X) ∈ [s̲_N, s̄_N].    (3.8)



In fact, for fixed d and N → ∞, the size (s̄_N − s̲_N) of the RA range asymptotically goes to zero at rate o(1/N). For N large enough we have

    s̲_N ≤ VaR̄_α(X) ≈ s̄_N    (3.9)

(see Puccetti & Rüschendorf, January 2012, Theorem 3.1), while in the limit equalities hold,

    VaR̄_α(X) = lim_{N→∞} s̲_N = lim_{N→∞} s̄_N.    (3.10)

Rearrangement Algorithm to estimate the lower VaR bound

After taking into account the following alterations, the above description of the RA to estimate the upper VaR bound can also be used to estimate the lower VaR bound.

• In Step 2, instead of the N-point discretizations (3.5) of (α, 1), take N-point discretizations of (0, α) given by

       α_i = α(i − 1)/N  and  α_i^+ = αi/N,    (3.11)

  i = 1, . . . , N.

• In Step 7, instead of s̲_N and s̄_N, define the RA range by

       t̲_N = max_{1≤i≤N} Σ_{1≤j≤d} x∗_{ij}  and  t̄_N = max_{1≤i≤N} Σ_{1≤j≤d} y∗_{ij},    (3.12)

  with t̲_N ≤ t̄_N. Then the final estimate of the lower VaR bound is given by

       VaR̲_α(X) ∈ [t̲_N, t̄_N].    (3.13)

Analogously to the previous remarks on (s̄_N − s̲_N), for fixed d and N → ∞ the size (t̄_N − t̲_N) of the RA range in (3.13) asymptotically goes to zero as o(1/N). Furthermore, for N large enough we have

    t̲_N ≈ VaR̲_α(X) ≤ t̄_N,    (3.14)

while equalities are obtained by taking the limit,

    VaR̲_α(X) = lim_{N→∞} t̲_N = lim_{N→∞} t̄_N.    (3.15)

The R-package "QRM" (Pfaff et al., 2014) is an excellent tool for examining quantitative risk management concepts. In particular, the package contains the function VaRbound(), which uses the RA to compute the RA range for the lower or upper VaR bound. The function usage is given by

VaRbound(alpha, N, qmargins, bound=c("upper","lower"), verbose=FALSE)

with arguments

alpha a probability level in (0, 1),

N the tail discretization parameter in {1, 2, . . .},

qmargins a list containing the marginal quantile functions Fi−1(·),

bound a character string indicating the VaR bound to be approximated, either "upper" (default) or "lower",

verbose logical indicating whether progress information is displayed, either FALSE (default) or TRUE.


N.B.: We thank Dr. Bernhard Pfaff for sharing the R-code for the function VaRbound(), which is included in Appendix A.

Example 3.2.1. The RA is particularly useful because it can also be applied in the inhomogeneous case. Consider the case d = 3 with marginal risks

    X_1 ∼ Exponential(1),  X_2 ∼ Normal(0, 1),  X_3 ∼ Pareto(2),    (3.16)

and fix α = 0.99. Using the function VaRbound(), it is easy to estimate both VaR bounds of the aggregate risk X = X_1 + X_2 + X_3. In particular, the upper VaR bound VaR̄_α(X) is computed using the following R-code.

alpha <- 0.99
N <- 10^5
q1 <- qexp                               # Exponential(1) quantile function
q2 <- qnorm                              # Standard Normal quantile function
q3 <- function(x){(1-x)^(-1/2)-1}        # Pareto(2) quantile function
qmargins <- list(q1, q2, q3)
VaRbound(alpha, N, qmargins, "upper")

The resulting output is the RA range given by

   lower    upper
19.59521 19.59542

Hence, we conclude that for probability level α = 0.99 and aggregate risk X = X_1 + X_2 + X_3 with marginal risks X_i, i = 1, 2, 3 given by (3.16), the upper VaR bound is estimated by

    VaR̄_{0.99}(X) ≈ 19.60.    (3.17)
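As a quick plausibility check, VaR is additive under comonotonicity (cf. (4.18) in Chapter 4), so the comonotonic VaR of this portfolio follows directly from the marginal quantiles; the estimated upper bound of about 19.60 indeed exceeds it, illustrating once more that the worst-case copula is not the comonotonic one:

```r
alpha <- 0.99
q1 <- qexp                            # Exponential(1) quantile function
q2 <- qnorm                           # Standard Normal quantile function
q3 <- function(x){(1-x)^(-1/2)-1}     # Pareto(2) quantile function
VaR.comonotone <- q1(alpha) + q2(alpha) + q3(alpha)
print(VaR.comonotone)                 # about 15.93, below the estimated upper bound 19.60
```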

3.3 Performance

Embrechts et al. (2013) mainly apply the RA to marginals of the Pareto type and state that in general, the accuracy of the RA is not affected by the type of the marginals used. This section tests the correctness of that statement by applying the RA to both light-tailed and heavy-tailed marginals and comparing the results to their analytical counterparts. Specifically, we test the performance of the RA in the case of (heavy-tailed) marginals of the Pareto type and (light-tailed) marginals of the Exponential type, for different values of α and d.

Consider aggregate risks X = X_1 + · · · + X_d and Y = Y_1 + · · · + Y_d with marginal risks given by X_i ∼ Pareto(2) and Y_i ∼ Exponential(1), i = 1, . . . , d, i.e.:

    F_{X_i}(x) = 1 − (1 + x)^{−2}  and  F_{Y_i}(x) = 1 − exp(−x),    (3.18)

for x ≥ 0 and i = 1, . . . , d. Then the marginal risks have unbounded support and ultimately decreasing densities while the upper dual bound satisfies (2.16), implying

    VaR̄_α(Z) = D_Z^{−1}(1 − α),    (3.19)

for Z = X, Y and α ∈ (0, 1). Expressions for the upper dual bounds of X and Y are produced in Examples 2.3.1 and 2.3.2; we have

    D_X^{−1}(s) = −d + 2√(d(d − 1))/√s    (3.20)

and

    D_Y^{−1}(s) = d log(d(1 − exp(−x_d))) − d log(x_d s) + x_d,    (3.21)

where x_d satisfies

    (1 − d − d/x_d) exp(−x_d) + d/x_d − 1 = 0.    (3.22)

Hence, using (3.19) it is easy to obtain exact values for VaRα(Z), Z = X, Y for various values of d and α. In particular, we use (3.19) to produce the upper VaR bounds of X and Y for d = 8, 24, 40, 56 and α = 0.99, 0.995, 0.999, see the R-code below.

# Upper VaR bound, Pareto(2) marginals:
for(d in c(8,24,40,56)){
  for(alpha in c(0.99,0.995,0.999)){
    print(-d+2*sqrt(d*(d-1))/sqrt(1-alpha))                 # see (3.20)
  }
}

# Upper VaR bound, Exponential(1) marginals:
for(d in c(8,24,40,56)){
  f <- function(x){(1-d-d/x)*exp(-x)+d/x-1}                 # see (3.22)
  x.d <- uniroot(f,c(0.001,100))$root
  for(alpha in c(0.99,0.995,0.999)){
    print(d*log(d*(1-exp(-x.d)))-d*log(x.d*(1-alpha))+x.d)  # see (3.21)
  }
}

Next, we use the RA to estimate VaR̄_α(Z), Z = X, Y, for d = 8, 24, 40, 56 and α = 0.99, 0.995, 0.999. We print the corresponding RA ranges in two columns, for X and Y respectively, as follows.

N <- 10^5
for(d in c(8,24,40,56)){
  qpars <- vector("list", d)
  qexps <- vector("list", d)
  for(i in 1:d){
    qpars[[i]] <- function(x){(1-x)^(-1/2)-1}   # Pareto(2) quantile function
    qexps[[i]] <- qexp                          # Exponential(1) quantile function
  }
  for(alpha in c(0.99,0.995,0.999)){
    print(c(VaRbound(alpha,N,qpars,"upper"),
            VaRbound(alpha,N,qexps,"upper")))
  }
}

Thus, for each of the aggregate risks X = X1 + · · · + Xd and Y = Y1 + · · · + Yd, both exact and estimated values of the upper VaR bound are obtained, see Table3.3.


 d    α       VaR̄α(X)   RA range              VaR̄α(Y)   RA range
 8    0.99    141.666    141.663 – 141.669      44.839    44.838 – 44.839
      0.995   203.660    203.656 – 203.664      50.384    50.383 – 50.384
      0.999   465.286    465.276 – 465.295      63.259    63.259 – 63.260
24    0.99    445.894    445.865 – 445.922     134.524   134.522 – 134.528
      0.995   640.530    640.490 – 640.570     151.160   151.158 – 151.164
      0.999  1461.934   1461.845 – 1462.023    189.786   189.784 – 189.790
40    0.99    749.937    749.858 – 750.016     224.207   224.204 – 224.220
      0.995  1077.139   1077.027 – 1077.251    251.933   251.930 – 251.946
      0.999  2457.999   2457.749 – 2458.249    316.310   316.307 – 316.324
56    0.99   1053.955   1053.799 – 1054.110    313.890   313.886 – 313.917
      0.995  1513.713   1513.493 – 1513.933    352.706   352.702 – 352.733
      0.999  3453.986   3453.494 – 3454.476    442.834   442.830 – 442.862

Table 3.3: Exact values and RA estimates of the upper VaR bound for aggregate risks X = X_1 + · · · + X_d and Y = Y_1 + · · · + Y_d with X_i ∼ Pareto(2) and Y_i ∼ Exponential(1), i = 1, . . . , d. The values in the columns "VaR̄α(X)" and "VaR̄α(Y)" are obtained using the upper dual bound, see (3.19). The corresponding RA estimates are computed using N = 10^5 and are presented in the columns "RA range".

In both the Pareto case (X_i ∼ Pareto(2)) and the Exponential case (Y_i ∼ Exponential(1)), the analytical upper VaR bound lies, as desired, in the corresponding RA range for all d = 8, 24, 40, 56 and α = 0.99, 0.995, 0.999 (see Table 3.3). The size of the RA range is small in general and increases in d and in α. We stress that the only difference between the two cases is the choice of marginals, and make the following remark: for fixed d and α, the size of the RA range is in general larger in the Pareto case than in the Exponential case. This is true both in the absolute sense and in the relative (w.r.t. the analytical value) sense, see also Table 3.4.

/1000              Risk X                          Risk Y
 d    α     Size RA range  Relative size    Size RA range  Relative size
 8    0.99        6             4 %               1             2 %
      0.995       8             4 %               1             2 %
      0.999      19             4 %               1             2 %
24    0.99       57            13 %               1             1 %
      0.995      80            12 %               6             4 %
      0.999     178            12 %               6             3 %
40    0.99      158            21 %              16             7 %
      0.995     224            21 %              16             6 %
      0.999     500            20 %              17             5 %
56    0.99      311            30 %              31            10 %
      0.995     440            29 %              31             9 %
      0.999     982            28 %              32             7 %

Table 3.4: For d = 8, 24, 40, 56 and α = 0.99, 0.995, 0.999 this table shows the (relative) size of the RA range for the upper VaR bounds of aggregate risks X = X_1 + · · · + X_d and Y = Y_1 + · · · + Y_d with marginals of type Pareto(2) and Exponential(1), respectively. The relative size of the RA range is defined as the ratio between the absolute size of the RA range and the analytical upper VaR bound. Values are displayed in thousandths.


Chapter 4

Partial dependence information

In this chapter we consider a portfolio X = (X_1, . . . , X_d) of risks for which not only stochastic information on the separate risks X_1, . . . , X_d is available, but also dependence information between (some of) the separate risks. For fixed marginals F_1, . . . , F_d we investigate how additional fixed joint distributions of given subvectors of X can be applied to tighten the VaR bounds of the aggregate risk X = X_1 + · · · + X_d. Furthermore, we describe the appropriate application of the RA in the presence of partial dependence information.

4.1 Systems of marginals

Let E ⊂ P({1, . . . , d}) be a subset of the power set of {1, . . . , d}, so that elements J ∈ E are subsets J ⊂ {1, . . . , d}. In particular, let J ∈ E if and only if the joint marginal distribution F_J is known. W.l.o.g. assume ∪{J | J ∈ E} = {1, . . . , d}; then it is of interest to consider the generalized Fréchet class

    F_E = F(F_J, J ∈ E)    (4.1)

of all probability measures on R^d having subvector models F_J on R^J for all J ∈ E. We refer to a collection E as a system of marginals and to sets J ∈ E as marginal classes. The generalized Fréchet class F_E is a subclass of all possible dependence structures, i.e.:

    F_E ⊂ F(F_1, . . . , F_d).    (4.2)

In fact, F_E is the smallest subclass that contains all possible dependence structures under the restrictions of the partial dependence information that this chapter assumes. Assuming fixed marginals and partial dependence information, Puccetti & Rüschendorf (June 2012) analyse bounds for the tail risk P(X_1 + · · · + X_d ≥ s). Given a system E, the standard bounds (2.3) and (2.4) and dual bounds (2.8) and (2.9) can be improved in terms of sharpness. This leads to the reduced standard and dual bounds of Puccetti & Rüschendorf (June 2012, Theorems 3.6 & 3.7).

Example 4.1.1. In the following we use the general notation F_{i_1···i_n} to indicate the joint cdf of the marginal risks with corresponding indices.

(i) Assuming fixed marginals, the simple system E = {{1}, . . . , {d}} is the most general system, allowing all dependence structures, and defines the Fréchet class

    F_E = F(F_1, . . . , F_d).    (4.3)

(ii) The star-like system E = {{1, j} | 2 ≤ j ≤ d} assumes a fixed dependence relation within each pair of marginal risks that includes risk X_1 and defines the Fréchet class

    F_E = F(F_{12}, F_{13}, . . . , F_{1d}).    (4.4)


(iii) The series system E = {{j, j+1} | 1 ≤ j ≤ d−1} assumes fixed dependence relations within all pairs of risks having consecutive indices and defines the Fréchet class

    F_E = F(F_{12}, F_{23}, . . . , F_{d−1,d}).    (4.5)

(iv) The pairwise system E = {{i, j} | 1 ≤ i < j ≤ d} fixes all bivariate distributions and defines the Fréchet class

    F_E = F(F_{ij}, 1 ≤ i < j ≤ d).    (4.6)

Section 4.3 analyses another actuarially relevant example. For even d and n = d/2, the system E = {{2j − 1, 2j} | 1 ≤ j ≤ n} defines the Fréchet class

    F_E = F(F_{12}, F_{34}, . . . , F_{d−1,d}).    (4.7)

In this case the marginal classes J ∈ E are called non-overlapping, i.e.: the set {1, . . . , d} is a disjoint union of its subsets J ∈ E. Section 4.2 uses partial dependence information in the case of non-overlapping marginal classes to reduce the Fréchet class of all possible dependence scenarios.
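In R, such systems can be represented simply as lists of index vectors; the following sketch (helper names are our own, not from the thesis) builds three of the systems above for d = 6 and checks the non-overlapping property, i.e. that no index occurs in two marginal classes:

```r
d <- 6
star.system   <- lapply(2:d, function(j) c(1, j))             # system (ii)
series.system <- lapply(1:(d-1), function(j) c(j, j+1))       # system (iii)
paired.system <- lapply(1:(d/2), function(j) c(2*j-1, 2*j))   # system (4.7)
# Non-overlapping iff no index is shared between marginal classes:
is.nonoverlapping <- function(E) !any(duplicated(unlist(E)))
print(is.nonoverlapping(paired.system))   # TRUE
print(is.nonoverlapping(series.system))   # FALSE
```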

4.2 Non-overlapping marginal classes

In this section we consider systems with non-overlapping marginal classes J ∈ E. For 1 ≤ n ≤ d, let E = {J_1, . . . , J_n} be a collection of disjoint sets with union {1, . . . , d}, i.e.:

    J_1 ∪ · · · ∪ J_n = {1, . . . , d}  and  J_i ∩ J_j = ∅,  i ≠ j.    (4.8)

For aggregate risk X = X_1 + · · · + X_d with fixed marginals F_{X_1}, . . . , F_{X_d}, any known joint distributions of given subvectors of X are given by the collection E. Therefore, using additional dependence information to tighten the VaR bounds actually means finding reduced VaR bounds

    VaR̲_α^E(X) = inf{VaR_α(X) | F_X ∈ F_E}    (4.9)

and

    VaR̄_α^E(X) = sup{VaR_α(X) | F_X ∈ F_E}.    (4.10)

From (4.2) it follows that

    VaR̲_α(X) ≤ VaR̲_α^E(X) ≤ VaR_α(X) ≤ VaR̄_α^E(X) ≤ VaR̄_α(X).    (4.11)

By associating to X a risk vector Y = (Y_1, . . . , Y_n), the problem of finding the VaR bounds VaR̲_α^E(X) and VaR̄_α^E(X) given additional dependence information can be reduced to the original problem of finding the VaR bounds VaR̲_α(Y) and VaR̄_α(Y) without any dependence information, see Puccetti & Rüschendorf (June 2012) and Embrechts et al. (2013). Indeed, define

    Y_j = Σ_{i∈J_j} X_i,  j = 1, . . . , n.    (4.12)

Then, for non-overlapping marginal classes, we have

    X = Σ_{i=1}^d X_i = Σ_{j=1}^n Y_j = Y.    (4.13)

On the one hand, the marginals F_{Y_j}, j = 1, . . . , n of the risk vector Y are fixed by construction. Indeed, a class J_j ∈ E of marginals corresponds to fixed marginals X_i, i ∈ J_j of which the joint distribution is known; hence J_j fixes the distribution of the sum Y_j = Σ_{i∈J_j} X_i, j = 1, . . . , n. On the other hand, the dependence structure between any of the marginals F_{Y_j}, j = 1, . . . , n is unknown. We define

    VaR̲_α^r(X) = inf{VaR_α(Y) | F_Y ∈ F(F_{Y_1}, . . . , F_{Y_n})},    (4.14)

    VaR̄_α^r(X) = sup{VaR_α(Y) | F_Y ∈ F(F_{Y_1}, . . . , F_{Y_n})}.    (4.15)

Using (4.13), Puccetti & Rüschendorf (June 2012, Proposition 3.3) show that in the case of non-overlapping marginal classes, we have

    VaR̲_α^r(X) = VaR̲_α^E(X)  and  VaR̄_α^r(X) = VaR̄_α^E(X).    (4.16)

Hence, (4.11) can be rewritten as

    VaR̲_α(X) ≤ VaR̲_α^r(X) ≤ VaR_α(X) ≤ VaR̄_α^r(X) ≤ VaR̄_α(X).    (4.17)

To summarize, given joint cdf's of non-overlapping subvectors of X, the associated risk vector Y fully captures the given dependence information, yields an equal aggregate risk X = Y, and has fixed marginals F_{Y_1}, . . . , F_{Y_n}. Thus, instead of estimating VaR bounds for X by considering the restricted set F_E of dependence structures of the risk vector X, one can equivalently estimate VaR bounds for Y by considering all possible dependence structures of the risk vector Y.
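The reduction X → Y is mechanical once scenarios of X are available. The sketch below illustrates (4.12) and (4.13) on an arbitrary simulated sample (the sample itself carries no meaning; it only demonstrates that the aggregate risk is unchanged):

```r
set.seed(1)
d <- 4
X.sample <- matrix(rexp(1000 * d), ncol = d)   # 1000 simulated scenarios of (X_1,...,X_d)
E <- list(c(1, 2), c(3, 4))                    # non-overlapping marginal classes J_1, J_2
# Y_j = sum over i in J_j of X_i, see (4.12):
Y.sample <- sapply(E, function(J) rowSums(X.sample[, J, drop = FALSE]))
# The aggregate risk is unchanged, see (4.13):
print(all.equal(rowSums(X.sample), rowSums(Y.sample)))   # TRUE
```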

4.3 Applying the RA

In this section, we use the RA and the methods described in Section 4.2 to compute upper VaR bounds applying partial dependence information. To incorporate any given dependence information in the RA, one simply applies the marginals of the reduced risk vector Y instead of risk vector X. Therefore, the key is to find expressions for the marginals of Y . For α = 0.99, 0.995, 0.999 we evaluate upper VaR bounds under different dependence scenarios.

Table 3 of Embrechts et al. (2013) presents upper VaR bounds for a portfolio of d = 600 Pareto(2) marginals under four different dependence scenarios, given by (1) the maximum copula, (2) the system E = {{2j − 1, 2j} | 1 ≤ j ≤ n}, n = d/2, in combination with the bivariate independence copula, (3) the same system E in combination with the bivariate Pareto(γ = 3/2) copula, and (4) the simple system (4.3) where no dependence information is available. For d = 8 we produce the similar Table 4.1, in which the bivariate Pareto(γ = 3/2) copula of dependence scenario (3) is generalized.

Example 4.3.1. Maximal copula; For the maximal copula the VaR of the total portfolio equals the sum of the marginal VaR's. Indeed, comonotonicity implies

    VaR_α^+(X) = Σ_{i=1}^d VaR_α(X_i) = VaR_α(Σ_{i=1}^d X_i) = VaR_α(X).    (4.18)

Hence, under comonotonicity, the aggregate VaR of d = 8 Pareto(2) marginals equals d × F_1^{−1}(α) = 8 × ((1 − α)^{−1/2} − 1).
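For instance, the first column of Table 4.1 can be reproduced directly from this formula:

```r
d <- 8
for(alpha in c(0.99, 0.995, 0.999)){
  print(d * ((1-alpha)^(-1/2) - 1))   # comonotonic VaR, see (4.18)
}
# 72, 105.1371, 244.9822 -- matching 72.0, 105.0, 245.0 in Table 4.1
```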

Example 4.3.2. Bivariate independence copula; For d = 8 Pareto(2) marginals, consider the system E = {{2j − 1, 2j} | 1 ≤ j ≤ n}, n = d/2, having non-overlapping marginal classes J ∈ E and defining the Fréchet class

    F_E = F(F_{12}, F_{34}, . . . , F_{d−1,d}).    (4.19)

In case of independent risks X_{2j−1} and X_{2j}, the sum Y_j = X_{2j−1} + X_{2j} is distributed as

    F_{Y_j}(x) = ∫_0^x F_1(x − y) dF_1(y),    (4.20)

j = 1, . . . , n, where F_1 is the Pareto(2) cdf given by F_1(x) = 1 − (1 + x)^{−2}. The marginals F_{Y_1} = · · · = F_{Y_n} are computed in R as follows.

# cdf of the independent Pareto(2) sum
cdf <- function(x){
  value <- rep(NA, length(x))
  for(i in 1:length(x)){
    if(x[i] > 10000){ value[i] <- 1 }
    else{
      value[i] <- integrate(function(y){
        (2*(1+y)^(-3))*(1-(1+x[i]-y)^(-2))
      }, 0, x[i])[[1]]
    }
  }
  return(value)
}

The R-function inverse() defined below transforms F_{Y_1} into the quantile function F_{Y_1}^{−1}.

# Quantile function of the independent Pareto(2) sum
inverse <- function(f){
  function(y){
    value <- rep(NA, length(y))
    for(i in 1:length(y)){
      if(y[i] == 1){ value[i] <- Inf }
      else{
        value[i] <- uniroot(function(x){ f(x) - y[i] },
                            lower = 0.0000001, upper = 1000)[[1]]
      }
    }
    return(value)
  }
}
quantile <- inverse(cdf)
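A quick numerical sanity check of this inversion approach: evaluating the constructed quantile function at a probability level should recover the known quantile. The snippet below repeats the inverse() construction so it is self-contained and applies it to the plain Pareto(2) cdf, whose quantile function (1 − y)^{−1/2} − 1 is known in closed form (the names F1 and q1 are ours):

```r
inverse <- function(f){
  function(y){
    value <- rep(NA, length(y))
    for(i in 1:length(y)){
      if(y[i] == 1){ value[i] <- Inf }
      else{
        value[i] <- uniroot(function(x){ f(x) - y[i] },
                            lower = 0.0000001, upper = 1000)[[1]]
      }
    }
    return(value)
  }
}
F1 <- function(x){ 1 - (1+x)^(-2) }   # Pareto(2) cdf
q1 <- inverse(F1)                     # numerical quantile function
print(q1(0.99))                       # numerically close to the exact value (1-0.99)^(-1/2)-1 = 9
```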

Example 4.3.3. Bivariate Pareto copula; Fixing the dependence relations within the n = d/2 pairs (X_{2j−1}, X_{2j}), j = 1, . . . , n of Pareto(2) marginals using the bivariate Pareto copula (1.19) with dependence parameter γ > 0, it directly follows that the bivariate distributions F_{2j−1,2j} = F_{12}, j = 1, . . . , n are given by

    F_{12}(x_1, x_2) = 1 + ((1 + x_1)^{2/γ} + (1 + x_2)^{2/γ} − 1)^{−γ} − (1 + x_1)^{−2} − (1 + x_2)^{−2}.    (4.21)

For this dependence scenario the conditional distribution F_{2|x_1} of (X_2 | X_1 = x_1) exists in closed form and is given by

    F_{2|x_1}(x) = 1 − (1 + x_1)^{2/(γ+2)} ((1 + x)^{2/γ} + (1 + x_1)^{2/γ} − 1)^{−γ−1}.    (4.22)

Finally, the cdf of the risk Y_j = X_{2j−1} + X_{2j} is obtained using

    F_{Y_j}(x) = ∫_0^x F_{2|x_1}(x − x_1) dF_1(x_1),    (4.23)

j = 1, . . . , n.



Figure 4.1: The VaR of the sum of two marginal risks X_1, X_2 ∼ Pareto(2) having the Pareto(γ) copula, plotted against the dependence parameter γ > 0 (see Appendix B for the R-code), α = 0.99.

gamma <- 3/2   # or e.g. 5, or 50
cdf <- function(x){
  value <- rep(NA, length(x))
  for(i in 1:length(x)){
    if(x[i] > 10000){ value[i] <- 1 }
    else{
      value[i] <- integrate(function(y){
        (2*(1+y)^(-3))*(1-(1+y)^(2/(2+gamma))
          *((1+x[i]-y)^(2/gamma)+(1+y)^(2/gamma)-1)^(-1-gamma))
      }, 0, x[i])[[1]]
    }
  }
  return(value)
}

The corresponding quantile functions F_{Y_1}^{−1} = · · · = F_{Y_n}^{−1} are computed using the R-function inverse() as described in the case of the bivariate independence copula (Example 4.3.2), also see Figure 4.1.

Example 4.3.4. Simple system; The described reduction methods of Section 4.2 can be applied effectively to all non-overlapping marginal classes, except to the simple system (see (4.3)), where all marginal classes are one-dimensional and no dependence information is available.


 α       VaR_α^+(X)   [a] VaR̄_α^r(X)   VaR̄_α(X)
 0.99       72.0           98.0          141.5
 0.995     105.0          139.0          203.5
 0.999     245.0          310.0          465.0

 α       [b1] VaR̄_α^r(X)   [b2] VaR̄_α^r(X)   [b3] VaR̄_α^r(X)
 0.99        66.5                74.0               80.5
 0.995       95.0               105.0              115.0
 0.999      216.0               233.5              261.0

Table 4.1: The upper VaR bounds of aggregate risk X having d = 8 Pareto(2) marginals under different dependence scenarios and for different values of α, as estimated by the RA and rounded to half units. The first column "VaR_α^+(X)" represents the comonotonic copula; the second column "[a] VaR̄_α^r(X)" assumes independent bivariate marginals F_{2j−1,2j}; the third column "VaR̄_α(X)" assumes no dependence information; the fourth, fifth and sixth columns "[b1], [b2] and [b3] VaR̄_α^r(X)" assume bivariate marginals F_{2j−1,2j} having the Pareto copula, with dependence parameters γ = 3/2, 5, 50, respectively.

Table 4.1 shows once again that the VaR is not subadditive. Indeed, subadditivity requires the VaR in the comonotonic case to be equal to the upper VaR bound in the case of no dependence information (compare the first and third columns). However, even if we require bivariate independence for the pairs (X_{2j−1}, X_{2j}), j = 1, . . . , n, the upper VaR bound exceeds the VaR in the comonotonic case (compare the second and the first columns). Also note that in the case of bivariate marginals F_{2j−1,2j}, j = 1, . . . , n having the Pareto copula with dependence parameter γ = 3/2, the upper VaR bound is smaller than the VaR in the comonotonic case, but larger if γ = 5, 50 (compare the fourth, fifth and sixth columns to the first).


Chapter 5

Distribution fitting

A stock market index is a measurement of the value of a specific part of the stock market. Typically, it is a weighted average of the prices of the selected stocks. In this chapter we consider three well-known stock market indices, the CAC 40 index (France), the SMI index (Switzerland), and the Dow Jones (USA). Using historical market data, we observe these stock market indices during ten consecutive years. Based on the data, we estimate the return distributions for each of the stock market indices and compute the corresponding VaR’s including the VaR of the portfolio that contains one stock of each of the three markets.

5.1 French stock market index (CAC 40)

The return r_t of any stock index Index[t] at time t is defined as the ratio between consecutive data points,

    r_t = Index[t] / Index[t − 1].    (5.1)

Typically, the log-return is considered, obtained by taking the difference between the logarithms of consecutive data points,

    log r_t = log(Index[t]) − log(Index[t − 1]).    (5.2)

In fact, the R-function returns() computes the vector of log-returns for any time series. For a stock market index, the observed risk X = X_t at time t is defined as the negative of the log-return at that time,

    X_t = − log r_t.    (5.3)

Indeed, small risks are generally preferred to large risks.
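As a concrete toy illustration of (5.1)–(5.3) (with made-up index values, not market data), the log-returns and the associated risks can also be computed by hand with diff() and log():

```r
index  <- c(100, 102, 101, 105)   # toy index values, one per day (made up)
log.r  <- diff(log(index))        # log-returns, see (5.2)
X.risk <- -log.r                  # observed risks, see (5.3)
print(round(X.risk, 4))           # -0.0198  0.0099 -0.0388
```

A positive risk corresponds to a drop of the index, in line with the sign convention of (5.3).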

The CAC 40 (Cotation Assistée en Continu Quarante) is a French stock market index and measures the forty most significant stocks on the French stock market, which is known as Euronext Paris. Values of the index are available in the R-package "QRM" (Pfaff et al., 2014). For the ten-year period 1994/01/01 – 2003/12/31, a graph of the index is shown in Figure 5.1.

According to Bialkowski, J. (2004), the CAC 40 is one of the few stock price indices for which the normal distribution is not rejected. For simplicity, we assume the log-returns to be i.i.d. and fit the normal distribution to our sample by simply setting its mean and variance equal to the mean and variance of the sample (method of moments), see the R-code below.

# CAC 40 Index
data(cac40)
r <- -returns(cac40)
risk <- window(r[,"CAC40"], "1994-01-01", "2003-12-31")


Figure 5.1: The French stock market index CAC 40 from 1994/01/01 to 2003/12/31.

Figure 5.2: The scaled frequency of the risk X associated to the French stock market index CAC 40 during the ten-year time span 1994/01/01–2003/12/31. The line shows the normal pdf with mean µ ≈ −0.000 and standard deviation σ ≈ 0.015 equal to respectively the mean and standard deviation of the sample.

# Goodness of fit
mu <- mean(risk)
sigma <- sd(risk)
hist(risk, breaks=60, prob=TRUE, main="CAC 40 index", xlab="Risk")
curve(dnorm(x, mean=mu, sd=sigma), add=TRUE, col="blue")

The frequencies of different risk sizes, as observed over the period of ten years, give some idea of the shape of the density. Indeed, Figure 5.2 plots the (scaled) frequencies over small intervals ("bins") of risk sizes and thus estimates the pdf of the risk X_t = − log r_t that is associated to the CAC 40 index via (5.3).
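Under the fitted normal model, the VaR of the CAC 40 risk at level α is simply the α-quantile of the fitted distribution. A small sketch, using the rounded sample moments reported in Figure 5.2 (µ ≈ 0, σ ≈ 0.015) as stand-in values; the exact figures depend on the data window:

```r
# Method-of-moments normal fit: VaR_alpha is the alpha-quantile mu + sigma * qnorm(alpha)
mu    <- 0       # rounded sample mean of the CAC 40 risk (Figure 5.2)
sigma <- 0.015   # rounded sample standard deviation (Figure 5.2)
alpha <- 0.95

VaR <- qnorm(alpha, mean = mu, sd = sigma)
print(VaR)   # about 0.0247, i.e. roughly a 2.5% one-day loss at the 95% level
```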


30 Maurits Carsouw — Determining Value-at-Risk Bounds through Copula Theory

Figure 5.3: The Swiss market index from 1994/01/01 to 2003/12/31. Data are obtained from R-package “QRM” (Pfaff et al., 2014).

5.2 Swiss market index (SMI)

The SMI is the most significant stock market index of Switzerland and is made up of the twenty largest stocks of the country. It is generally considered to be a reliable measure of the overall Swiss stock market. Values of the index during the ten-year period 1994/01/01 – 2003/12/31 are shown in Figure 5.3. A theoretical distribution for the risk X that is associated to the SMI is fitted to the data. The model choice is based on results by Rensburg et al. (2009), showing that a two-component mixture of normal distributions fits the SMI better than a single normal distribution. Assume that the two normally distributed components X1 and X2 have equal means, and let f1 and f2 denote the respective pdf's; then the pdf of the mixture X is given by

fX(x) = w × f1(x) + (1 − w) × f2(x),  x ∈ R,  (5.4)

for weight w ∈ [0, 1]. Different variances of X1 and X2 are allowed. Typically, the component with the larger weight has the smaller variance, and vice versa. For sample variance s², assume

Var(X1) = (1 − w) × s²  and  Var(X2) = (1 + w) × s².  (5.5)

Parameter fitting is done by "shooting" different values for w ∈ [0, 1] and comparing the mixed density (5.4) to the empirical density as obtained from the data. In particular, Figure 5.4 is obtained from the following R-code using w = 2/3.

# Swiss Market Index
data(smi)
r <- -returns(smi)
risk <- window(r[,"SMI"], "1994-01-01", "2003-12-31")

# Mix of normal distributions
dNN <- function(x, mean, sd1, sd2, w){
  w*dnorm(x, mean, sd1) + (1-w)*dnorm(x, mean, sd2)
}

# Goodness of fit
mu <- mean(risk)
sigma <- sd(risk)


Figure 5.4: The scaled frequency of the risk X associated to the SMI during the ten-year time span 1994/01/01–2003/12/31. The blue line shows the normal pdf with mean x̄ ≈ −0.000 and standard deviation s ≈ 0.013 equal to respectively the mean and standard deviation of the sample. The red line shows the mixed density, given in (5.4), of two normal components X1 and X2 with equal means x̄ and variances given by Var(X1) = (1 − w) × s² and Var(X2) = (1 + w) × s², for w = 2/3.

hist(risk, breaks=60, prob=TRUE, main="SMI", xlab="Risk")
curve(dnorm(x, mean=mu, sd=sigma), add=TRUE, col="blue")
w <- 2/3
curve(dNN(x, mean=mu, sd1=sigma*sqrt(1-w), sd2=sigma*sqrt(1+w), w=w),
      add=TRUE, col="red")

From Figure 5.4 it is clear that the mixed normal density (5.4) fits the empirical density better than the single normal density.
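As a quick sanity check on the construction (5.4)–(5.5), the following sketch verifies numerically that dNN is a proper density. The value s ≈ 0.013 is the rounded sample standard deviation reported in Figure 5.4, used here as an assumed stand-in:

```r
# Two-component normal mixture (5.4)
dNN <- function(x, mean, sd1, sd2, w) {
  w * dnorm(x, mean, sd1) + (1 - w) * dnorm(x, mean, sd2)
}

s <- 0.013               # rounded sample sd of the SMI risk (assumed)
w <- 2/3
sd1 <- s * sqrt(1 - w)   # larger weight, smaller variance, cf. (5.5)
sd2 <- s * sqrt(1 + w)

# A convex combination of two densities integrates to 1
total <- integrate(dNN, -0.2, 0.2, mean = 0, sd1 = sd1, sd2 = sd2, w = w)$value
print(total)
```

The finite integration range of ±0.2 covers well over ten standard deviations of both components, so the truncated tail mass is negligible.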

5.3 Dow Jones index

The Dow Jones, Dow Jones Industrial, or Dow 30, is the oldest stock market index of the USA. It is a scaled average of thirty significant stocks in the USA and possibly the most famous stock market index in the world. Values of the index during the ten-year period 1994/01/01 – 2003/12/31 are shown in Figure 5.5. Based on these values, we estimate the distribution of the associated risk X = X_t. To model risk X, Stockhammar & Öller (2009) compare the normal distribution, a normal mixture, and a mixture (NAL) of a normal and an asymmetric Laplace (AL) distribution. It is found that the NAL distribution fits best. For location parameter µ ∈ R and scale parameters a, b > 0, the AL density is given by

f(x) = (1/(2a)) exp(−|x − µ|/a)  if x ≤ µ,
f(x) = (1/(2b)) exp(−|x − µ|/b)  if x > µ.  (5.6)

The (symmetric) Laplace distribution arises from (5.6) in the particular case a = b. For our purposes, an important advantage of the AL distribution is that it is skewed when a ≠ b. We estimate the scale parameters a and b as

a = sqrt( Σ_{t=1}^{n} (max{x̄ − x_t, 0})² )  and  b = sqrt( Σ_{t=1}^{m} (max{x_t − x̄, 0})² ),  (5.7)

where x̄ denotes the sample mean of the risks.
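To make (5.6)–(5.7) concrete, the sketch below implements the AL density and the scale estimates on simulated normal risks (illustration only; the scale values 0.5 and 1 and the simulated sample are assumptions, not thesis data). Note that summing max{·, 0}² over the whole sample is equivalent to summing only over the observations on the relevant side of the mean, since the other terms are clipped to zero:

```r
# AL density (5.6); a = b recovers the symmetric Laplace distribution
dAL <- function(x, mu, a, b) {
  ifelse(x <= mu,
         exp(-abs(x - mu) / a) / (2 * a),
         exp(-abs(x - mu) / b) / (2 * b))
}

# Each half of (5.6) carries probability 1/2, so dAL integrates to 1
stopifnot(abs(integrate(dAL, -Inf, 0, mu = 0, a = 0.5, b = 1)$value - 0.5) < 1e-6)

# Scale estimates (5.7) on simulated risks
set.seed(1)
x    <- rnorm(250, mean = 0, sd = 0.01)
xbar <- mean(x)
a <- sqrt(sum(pmax(xbar - x, 0)^2))   # driven by observations below the mean
b <- sqrt(sum(pmax(x - xbar, 0)^2))   # driven by observations above the mean
print(c(a, b))
```

Because the clipped squares on the two sides of the mean partition the squared deviations, a² + b² equals the total sum of squares of the sample around x̄.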



Figure 5.5:The Dow Jones from 1994/01/01 to 2003/12/31. Data are obtained from R-package “QRM” (Pfaff et al., 2014).

Figure 5.6:The scaled frequency of the risk X associated to the Dow Jones during the ten-year time span 1994/01/01–2003/12/31. The blue line shows the normal pdf with mean µ ≈ −0.000 and standard deviation σ ≈ 0.011 equal to respectively the mean and standard deviation of the sample. The red line shows the NAL mixture density (5.8) with weight w = 0.45, mean µ ≈ −0.000 and scale parameters a ≈ 0.0077 and b ≈ 0.0081 given by (5.7).
