
Goodness-of-fit tests for copulas: order of conditioning for tests on the Rosenblatt transform

Maarten Offenberg

Master's Thesis to obtain the degree in Actuarial Science and Mathematical Finance
University of Amsterdam
Faculty of Economics and Business
Amsterdam School of Economics

Author: Maarten Offenberg

Student nr: 10681612

Date: August 31, 2018

Supervisor: Dr. Umut Can


Statement of Originality

This document is written by student Maarten Offenberg, who declares to take full responsibility for the contents of this document. I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it. The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Abstract

In 2007 Genest et al. published the paper 'Goodness-of-fit tests for copulas: A review and a power study'. For that paper they carried out a large Monte Carlo experiment to assess the effect of the sample size and the strength of dependence on the level and power of the blanket tests, for various combinations of copula models under the null hypothesis and the alternative. They made use of a parametric bootstrap procedure. Three of the tests made use of Rosenblatt's transform. In this thesis we examine the effect of the order in which the conditioning is done for two of those tests. We conclude that the order of conditioning has no specific influence on the results.

Keywords: Copula; Cramér-von Mises statistics; Goodness-of-fit; Kendall's tau; Monte Carlo simulation; Order of conditioning; Parametric bootstrap; Power study; Pseudo-observations; P-values; Rosenblatt's transformation


Contents

Preface

1 Introduction

2 Preliminaries for Goodness-of-fit testing
  2.1 Estimation of the dependence parameter θ
  2.2 Goodness-of-fit testing

3 Two tests based on Rosenblatt's transform

4 Experimental design
  4.1 Two dimensions
  4.2 Three dimensions

5 Results
  5.1 Results for the two-dimensional case
    5.1.1 Testing the Clayton hypothesis
    5.1.2 Testing the Gumbel hypothesis
    5.1.3 Testing the Frank hypothesis
    5.1.4 Testing the Plackett hypothesis
    5.1.5 Testing the Joe hypothesis
  5.2 Results for the three-dimensional case
    5.2.1 Testing the Clayton hypothesis
    5.2.2 Testing the Gumbel hypothesis
    5.2.3 Testing the Frank hypothesis
  5.3 Order of transformations

6 Observations and recommendation

Appendix A: A parametric bootstrap for S_n^(C) and S_n^(B)
Appendix B: Formulas of the copulas
Appendix C: The R-code for the two-dimensional runs
Appendix D: The R-code for the three-dimensional runs
Appendix E: Additional results

References


Preface

After five years, my study in Actuarial Science and Mathematical Finance has come to an end. This thesis is the final part.

It has been a long journey for me. Especially the combination of working and studying was a challenge.

I want to thank my colleagues for asking, and sometimes not asking, how the study progressed. I want to thank my friends for supporting me in finishing this study, especially Ton, who was always willing to teach me about R. Most of all I want to thank my supervisor Umut Can from the University of Amsterdam for supervising this thesis. His patience and his kind, calm words helped me write this thesis.


Chapter 1

Introduction

A copula is a function that contains all the information about the dependence structure between two or more continuous random variables. General interest in copulas emerged relatively recently. In 1995 Roger B. Nelsen realised there was no introductory-level monograph on the subject and decided to write one. In 1998 this resulted in 'An Introduction to Copulas', a textbook for students and practitioners in statistics and probability at almost any level.

In 1959 Abe Sklar proved that a multivariate cumulative distribution function can be separated into two parts:

1. the marginal distributions,

2. the copula that describes the relation between the marginal distributions.

The copula representation of a multivariate cumulative distribution function H is given by

H(x1, . . . , xd) = C(F1(x1), . . . , Fd(xd)) (1.1)

where F_1, . . . , F_d are the marginal distributions of a random vector X = (X_1, . . . , X_d) and the function C : [0, 1]^d → [0, 1] is the copula. We see that a copula is a function that couples the multivariate distribution function to the marginal distribution functions. Because F_1(X_1), . . . , F_d(X_d) have standard uniform distributions, we also see that C is a multivariate cumulative distribution function with uniform marginals. When the marginal distribution functions are continuous, the copula is unique. For more information about copulas, we refer to Joe (1997) and the aforementioned book of Nelsen.

Because of Sklar's theorem we can model the marginal distributions and the dependence structure in separate steps. This has made copulas popular in applications in several areas such as survival analysis, hydrology, actuarial science and finance, especially in risk management, portfolio management and optimization, and derivatives pricing. For more information about these applications, we refer to Malevergne and Sornette (2006), Cherubini, Luciano and Vecchiato (2004) and McNeil, Frey and Embrechts (2005).
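As a small illustration of formula (1.1), the copula package in R can combine a copula with arbitrary marginals into a joint distribution; the sketch below uses our own choice of marginals and parameter, not an example from the literature cited above:

library(copula)

# Joint distribution H with a Clayton copula, a N(0,1) marginal and an Exp(1) marginal,
# i.e. H(x1, x2) = C(F1(x1), F2(x2)) as in formula (1.1).
H <- mvdc(copula = claytonCopula(2),
          margins = c("norm", "exp"),
          paramMargins = list(list(mean = 0, sd = 1), list(rate = 1)))

pMvdc(c(0.5, 1.0), H)   # joint cdf H(0.5, 1.0)
rMvdc(5, H)             # a few draws from the joint distribution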

In a copula model for random vector X, C is assumed to belong to a class

C0 = {Cθ: θ ∈ O} (1.2)

where θ is a parameter in the set O of admissible parameters. In most cases this parameter θ determines the strength of dependence.

There are many families of copulas. An important subclass consists of the Archimedean copulas, which in the bivariate case are defined by

C(u, v) = ϕ^(−1)(ϕ(u) + ϕ(v)) (1.3)


where ϕ is the generator, a function that satisfies certain conditions. Most Archimedean copulas have a closed-form expression, which is not true for all copulas. In this thesis, all considered copulas except the Plackett copula are Archimedean. For more information about Archimedean copulas we refer to the website of Vose Software.
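For example, the Clayton copula is Archimedean with generator ϕ(t) = (t^(−θ) − 1)/θ. The following R sketch (our own illustration) checks formula (1.3) against the copula package:

library(copula)

theta   <- 2
phi     <- function(t) (t^(-theta) - 1) / theta      # Clayton generator
phi_inv <- function(s) (1 + theta * s)^(-1 / theta)  # its inverse

u <- 0.3; v <- 0.7
phi_inv(phi(u) + phi(v))                 # copula value via the generator, formula (1.3)
pCopula(c(u, v), claytonCopula(theta))   # same value from the copula package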

When we have n independent copies X_1 = (X_11, . . . , X_1d), . . . , X_n = (X_n1, . . . , X_nd) of X, we can estimate θ. The estimation of θ has been covered extensively in the literature. The testing of the null hypothesis

H0 : C ∈ C0 (1.4)

has received less attention, but interest in it is increasing.

In the paper 'Goodness-of-fit tests for copulas: A review and a power study' by Genest, Rémillard and Beaudoin (2007), the authors address this latter subject. Their purpose was to present a critical review of the blanket goodness-of-fit tests proposed until then, to suggest variants or improvements, and to compare the relative power of these procedures. Blanket tests are goodness-of-fit tests that are applicable to all copula structures and require no strategic choice for their use. On page 203 they write:

A large-scale Monte Carlo experiment was conducted to assess the finite-sample properties of the proposed goodness-of-fit tests for various choices of dependence structures and degrees of association. Two characteristics of the tests were of interest: their ability to maintain their nominal level, arbitrarily fixed at 5% throughout the study, and their power under a variety of alternatives.

To curtail the computational effort, comparisons were limited to the bivariate case and to three degrees of dependence, viz. τ = 0.25, 0.50, 0.75. Seven one-parameter families of copulas were also considered, both under the null hypothesis and under the alternative.

Further on page 203 they write:

For every possible choice of copula and fixed value of τ , 10,000 random samples of size n = 50 were generated. An equal number of samples of size n = 150 was also obtained. Each of these samples was then used to test the goodness-of-fit of the seven families of distributions.

This was applied in turn to eight tests. They displayed the results for seven of those tests, and only for sample size n = 150.

Three of the seven tests that were conducted are based on Rosenblatt's transform, viz. S_n^(C), S_n^(B) and A_n. On page 211 they write:

In future work, it would be interesting to investigate the sensitivity of tests based on the Rosenblatt transform to the order in which conditioning is done.

In this thesis we examine the effect of the order of conditioning for the tests S_n^(C) and S_n^(B).

We started by trying to replicate their results, and then we performed the tests with the order of conditioning changed. We found that we could not replicate their results. Regardless of this problem, we were still able to investigate the sensitivity of tests based on the Rosenblatt transform to the order in which conditioning is done.

Their Monte Carlo study was conducted only for the bivariate case. We added several trivariate cases to our study, which gave us new results on how well the tests behave and on the effect of the order in which the conditioning is done.


In chapter 2 some preliminaries are presented. In chapter 3 we present the two tests that are based on Rosenblatt’s transform. In chapter 4 we outline the tests that we perform. The results of the tests are reported and discussed in chapter 5. In the last chapter we make some observations.


Chapter 2

Preliminaries for Goodness-of-fit testing

There is an important distinction between estimating the dependence parameter of a copula and testing the null hypothesis H0 : C ∈ C0. In this chapter we discuss both subjects.

2.1 Estimation of the dependence parameter θ

There are several methods for estimating the dependence parameter θ. We estimate it in the same way as Genest et al. did. Because we do not wish to make assumptions about the parametric families of the univariate marginal distributions, we first compute the pseudo-samples (Z_1, . . . , Z_n) as described in formula (2.1). Then we estimate the parameter by inversion of Kendall's tau; how we do this is described in chapter 4, 'Experimental design'.

For the three-dimensional case we estimate the three pairwise Kendall's taus, compute a parameter estimate from each of them, and take the average of the three resulting estimates. This method is also used in 'Copula goodness-of-fit testing: an overview and power comparison' by Berg (2009).

2.2 Goodness-of-fit testing

When testing the null hypothesis H0 : C ∈ C0, we again do not want to make assumptions about the parametric families of the univariate marginal distributions. As a consequence, Genest et al. carried out the testing using pseudo-observations deduced from the ranks. Suppose we have n independent samples x_1 = (x_11, . . . , x_1d), . . . , x_n = (x_n1, . . . , x_nd) from the d-dimensional random vector X. We define the pseudo-samples

Z_j = (Z_j1, . . . , Z_jd) = ( R_j1/(n + 1), . . . , R_jd/(n + 1) ), (2.1)

where R_ji is the rank of x_ji among x_1i, . . . , x_ni. The scaling factor n/(n + 1) is used to avoid potential problems at the boundaries of [0, 1]^d. These pseudo-observations can be considered samples from the underlying copula. Because they are no longer independent, a parametric bootstrap procedure is needed to obtain reliable p-value estimates. This bootstrap procedure is described in Appendix A. In our Monte Carlo experiment we used the same bootstrap procedure.
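As a small illustration, the pseudo-observations of formula (2.1) can be computed in R with the base function rank. The sketch below uses our own made-up bivariate sample, not data from the study:

# Hypothetical sample of size n = 150 with arbitrary marginals.
set.seed(1)
x <- cbind(rnorm(150), rexp(150))

# Pseudo-observations Z_ji = R_ji/(n + 1), cf. formula (2.1).
n <- nrow(x)
Z <- apply(x, 2, rank) / (n + 1)
head(Z)   # every entry now lies strictly inside (0, 1)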


Chapter 3

Two tests based on Rosenblatt's transform

In their paper, Genest et al. perform seven tests. Three of them are based on Rosenblatt's transform: the Anderson-Darling test statistic A_n and the two Cramér-von Mises statistics S_n^(C) and S_n^(B).

On page 202, Genest et al. recall the standard definition of Rosenblatt's transform:

Definition. Rosenblatt's probability integral transform of a copula C is the mapping R : (0, 1)^d → (0, 1)^d which to every u = (u_1, . . . , u_d) ∈ (0, 1)^d assigns another vector R(u) = (e_1, . . . , e_d) with e_1 = u_1 and, for each i ∈ {2, . . . , d},

e_i = [ ∂^(i−1) C(u_1, . . . , u_i, 1, . . . , 1) / ∂u_1 · · · ∂u_(i−1) ] / [ ∂^(i−1) C(u_1, . . . , u_(i−1), 1, . . . , 1) / ∂u_1 · · · ∂u_(i−1) ]. (3.1)

A critical property of Rosenblatt's transform is that U is distributed as C, denoted U ∼ C, if and only if the distribution of R(U) is the d-variate independence copula

C⊥(e_1, . . . , e_d) = e_1 × · · · × e_d, e_1, . . . , e_d ∈ [0, 1]. (3.2)

Thus H0 : U ∼ C ∈ C0 is equivalent to H0* : R_θ(U) ∼ C⊥ for some θ ∈ O.
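To make the transform concrete, here is a minimal R sketch of the bivariate case (d = 2) for the Clayton copula, using the same conditional distribution e_2 = ∂C(u_1, u_2)/∂u_1 that is coded in Appendix C; the function name is ours:

# Bivariate Rosenblatt transform under a Clayton copula with parameter th:
# e1 = u1 and e2 = dC(u1, u2)/du1, evaluated row by row.
rosenblattClayton <- function(u, th) {
  e1 <- u[, 1]
  e2 <- (u[, 1]^(-th) + u[, 2]^(-th) - 1)^(-(th + 1)/th) * u[, 1]^(-th - 1)
  cbind(e1, e2)
}

library(copula)
U <- rCopula(200, claytonCopula(2))   # data really drawn from the H0 copula
E <- rosenblattClayton(U, 2)          # E should then resemble a sample from the independence copula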

Further on page 202, they state with regard to the test statistic A_n that the validity of the parametric bootstrap procedure they used to calculate the p-values depends critically on the existence of a limiting distribution for A_n, and that the conditions (if any) under which this happens remain to be determined. Still, they included this test in their simulation study. Because of this problem, and because of the practical issue of run time, we decided to exclude this test from this thesis.

The two test statistics that we considered are

S_n^(C) = n ∫_[0,1]^d {D_n(u) − C⊥(u)}² dD_n(u) = Σ_{i=1}^{n} {D_n(E_i) − C⊥(E_i)}²

and

S_n^(B) = n ∫_[0,1]^d {D_n(u) − C⊥(u)}² du
        = n/3^d − (1/2^(d−1)) Σ_{i=1}^{n} Π_{k=1}^{d} (1 − E_ik²) + (1/n) Σ_{i=1}^{n} Σ_{j=1}^{n} Π_{k=1}^{d} (1 − max(E_ik, E_jk)).

These Cramér-von Mises statistics are based on the fact that under the null hypothesis H0, the empirical distribution function

D_n(u) = (1/n) Σ_{i=1}^{n} 1(E_i ≤ u), u ∈ [0, 1]^d, (3.3)

associated with the pseudo-observations E_1, . . . , E_n should be 'close' to C⊥. The two statistics differ only in their integration measure. To calculate the p-values, Genest et al. made use of the parametric bootstrap procedure described in Appendix A. In this thesis we examine the sensitivity of these two tests to the order in which the conditioning is done.
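For reference, here is a minimal R sketch of the two statistics, written directly from formula (3.3) and the expressions above; it condenses the functions used in Appendices C and D, and E is assumed to be the n × d matrix of transformed pseudo-observations:

# Empirical distribution function D_n of the rows of E, cf. formula (3.3).
Dn <- function(E, u) mean(apply(t(E) <= u, 2, all))

# S_n^(C): distance between D_n and the independence copula, integrated against dD_n.
SnC <- function(E) sum(apply(E, 1, function(e) (Dn(E, e) - prod(e))^2))

# S_n^(B): the closed-form expression with integration measure du.
SnB <- function(E) {
  n <- nrow(E); d <- ncol(E)
  cross <- Reduce(`*`, lapply(1:d, function(k) outer(E[, k], E[, k], function(x, y) 1 - pmax(x, y))))
  n/3^d - (1/2^(d - 1)) * sum(apply(E, 1, function(e) prod(1 - e^2))) + sum(cross)/n
}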


Chapter 4

Experimental design

In their paper, Genest et al. state that 'In future work, it would be interesting to investigate the sensitivity of tests based on the Rosenblatt transform to the order in which conditioning is done'. In this thesis we examine the effect of the order of conditioning for the tests S_n^(C) and S_n^(B).

A Monte Carlo experiment was conducted to investigate the sensitivity of the two tests based on the Rosenblatt transform, as proposed in Genest et al., to the order in which conditioning is done.

We did this for the bivariate and the trivariate case, and for three degrees of dependence, viz. τ = 0.25, 0.50, 0.75. Five families of copulas were considered, both under the null hypothesis and under the alternative hypothesis:

1. The Clayton family
2. The Frank family
3. The Gumbel-Hougaard family
4. The Plackett family
5. The Joe family

The formulas of these copulas are given in Appendix B. For practical reasons we tested the Plackett family and the Joe family only in the bivariate case. The R-codes that we used are given in Appendices C and D. For every possible choice of copula, fixed value of τ and dimension, 1000 random samples of size n = 150 were generated. Each of these samples was then used to test the goodness-of-fit of the five families of distributions, and the tests S_n^(C) and S_n^(B) were applied. In all cases the number of bootstrap samples was fixed at N = 1000. The parametric bootstrap is described in Appendix A.

When the parameter of a copula model had to be estimated, this was done by inversion of Kendall's tau. This involves solving for θ in the equation

4 ∫_[0,1]² C_θ(u_1, u_2) dC_θ(u_1, u_2) − 1 = τ_n, (4.1)

where τ_n is the sample version of τ.

Solving this equation for θ is programmed in R. For the Clayton and the Gumbel copula we used the closed-form inversions

θ = 2τ/(1 − τ) and θ = 1/(1 − τ), respectively, (4.2)

because the calculation time for these formulas is much smaller than for the numerically programmed function in R. In all families considered, the solution is unique.
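In R this estimation step looks roughly as follows (a sketch using the closed forms for Clayton and Gumbel and iTau from the copula package for the other families, as in Appendices C and D; U denotes the matrix of pseudo-observations):

library(copula)

tau_n <- cor(U[, 1], U[, 2], method = "kendall")   # sample Kendall's tau

theta_clayton <- 2 * tau_n / (1 - tau_n)           # closed-form inversion, Clayton
theta_gumbel  <- 1 / (1 - tau_n)                   # closed-form inversion, Gumbel
theta_frank   <- iTau(frankCopula(), tau_n)        # numerical inversion via the copula package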


In the three-dimensional case there are three pairwise Kendall's taus, which give us three parameter estimates. The average of these three estimates is the parameter that we use.
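A sketch of this step for the Clayton case, matching the code in Appendix D (U is again the matrix of pseudo-observations):

taus <- c(cor(U[, 1], U[, 2], method = "kendall"),
          cor(U[, 1], U[, 3], method = "kendall"),
          cor(U[, 2], U[, 3], method = "kendall"))
theta_hat <- mean(2 * taus / (1 - taus))   # average of the three pairwise Clayton estimates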

4.1 Two dimensions

In the bivariate case, there were the following factors:

C0: hypothesized copula model under H0: 5 choices
True copula: copula model from which the data were generated: 5 choices
τ: level of dependence in C, as measured by Kendall's tau: 3 choices
Test statistic: 2 choices

In each of these 5 × 5 × 3 × 2 = 150 cases, 1000 repetitions were performed in order to estimate one of the following two characteristics: the ability of the test to maintain its nominal level, arbitrarily fixed at 5%, or its power under a variety of alternatives.

First of all, these results were compared with the results provided in Genest et al. We found that our results did not agree with the results in Genest et al. for all tests. There was, however, one difference between our tests and the tests in Genest et al.: they did 10,000 repetitions instead of 1000. We chose some tests at random and performed them with 10,000 repetitions; the results were very close to the results of the tests with 1000 repetitions. After further investigation we have not found an explanation for the differences between our results and those of Genest et al. The R packages that include some of these tests gave the same results as our own implementation did. We will contact the authors of the article about this issue.

We repeated all these tests with the order of conditioning switched, so there were 300 cases in total.

4.2 Three dimensions

In the three-dimensional case there were the following factors:

C0: hypothesized copula model under H0: 3 choices
True copula: copula model from which the data were generated: 3 choices
τ: level of dependence in C, as measured by Kendall's tau: 3 choices
Test statistic: 2 choices

In each of these 3 × 3 × 3 × 2 = 54 cases, 1000 repetitions were performed in order to estimate the same two characteristics: the ability of the test to maintain its nominal level, arbitrarily fixed at 5%, or its power under a variety of alternatives. We could not compare these results with the article, because the three-dimensional case was not investigated there. In the three-dimensional case there are 3! = 6 possible orders of conditioning. We repeated all the tests for the five other orders, so there were 324 cases in total.


Chapter 5

Results

In this chapter we discuss the results of the tests that we performed. We report our results in Tables 1-18. Tables 1-6 report the two-dimensional situation; the layout is the same for these six tables. As an example we discuss Table 5.1. In the first two columns we report the percentage of rejections of H0 by S_n^(C) for data sets of size n = 150 from different copula models with τ = 0.25, for the two orders of conditioning. In columns three and four we report the average and the standard deviation of these two percentages. In the fifth column we report a p-value; this p-value represents the likelihood that the rejection counts under the different orders of conditioning come from the same distribution. We will discuss this p-value later. In columns six, seven and eight we report the percentage of samples for which H0 was rejected in zero, one or two of the two orders of conditioning.

As mentioned in chapter 4, 'Experimental design', we compared the results of our tests with the results of Genest et al. and saw considerable differences. We have not found an explanation for these differences and will contact the authors of the paper.

Tables 7-18 report the results for the three-dimensional situation. Tables 7-12 report the percentage of rejections of H0 by S_n^(C) and S_n^(B) for data sets of size n = 150 from different copula models, different taus and different orders of conditioning. They also report the average, the standard deviation and the p-value. For typographical reasons we could not present the percentages of the number of rejections of H0 in the six orders of conditioning in the same tables; these results are reported in Tables 13-18, which can be found in Appendix E.

In the next two paragraphs we discuss the results for the tests of the two and three dimensional cases. In paragraph 5.3 we discuss the results of the sensitivities of the order of transformations.

5.1 Results for the two-dimensional case

5.1.1 Testing the Clayton hypothesis

We summarize the results for the Clayton hypothesis under several fixed alternatives.

• The nominal levels of all approaches match the prescribed size of 5%. Our percentages differ more from 5% than the percentages in Genest et al.; this is due to the number of repetitions (Genest et al. did 10,000 repetitions, we did 1000).
• Both tests perform very well.
• Power increases with the level of dependence.
• S_n^(B) performs better than S_n^(C).


Table 5.1: Percentage of rejections of H0 by S_n^(C) for two-dimensional data sets of size n = 150 from different copula models with τ = 0.25 and different orders of conditioning.

True copula        1-2    2-1    avg    std   p-value   rejections in 0 / 1 / 2 orders

Copula under H0: Clayton
  Clayton          6.2    6.0    6.1    0.1   0.85      89.3 / 9.2 / 1.5
  Gumbel-Hougaard  27.2   26.9   27.1   0.2   0.88      62.9 / 20.1 / 17.0
  Frank            12.1   10.8   11.5   0.9   0.36      82.9 / 11.3 / 5.8
  Plackett         11.4   13.2   12.3   1.3   0.22      80.5 / 14.4 / 5.1
  Joe              62.6   61.2   61.9   1.0   0.52      27.8 / 20.6 / 51.6
Copula under H0: Gumbel-Hougaard
  Clayton          59.4   57.3   58.4   1.5   0.34      32.4 / 18.5 / 49.1
  Gumbel-Hougaard  4.9    4.3    4.6    0.4   0.52      91.7 / 7.4 / 0.9
  Frank            9.2    9.3    9.3    0.1   0.94      86.3 / 8.9 / 4.8
  Plackett         10.2   10.6   10.4   0.3   0.77      84.0 / 11.2 / 4.8
  Joe              9.9    10.1   10.0   0.1   0.88      85.3 / 9.4 / 5.3
Copula under H0: Frank
  Clayton          31.9   34.0   33.0   1.5   0.32      53.5 / 27.1 / 19.4
  Gumbel-Hougaard  10.0   8.7    9.4    0.9   0.32      85.5 / 10.3 / 4.2
  Frank            4.7    4.3    4.5    0.3   0.67      92.2 / 6.6 / 1.2
  Plackett         6.3    5.7    6.0    0.4   0.57      89.6 / 8.8 / 1.6
  Joe              27.4   25.9   26.7   1.1   0.45      65.0 / 16.7 / 18.3
Copula under H0: Plackett
  Clayton          27.5   30.2   28.9   1.9   0.18      57.6 / 27.1 / 15.3
  Gumbel-Hougaard  9.3    8.3    8.8    0.7   0.43      85.7 / 11.0 / 3.3
  Frank            4.9    3.4    4.2    1.1   0.09      92.8 / 6.1 / 1.1
  Plackett         5.1    5.1    5.1    0.0   1.00      90.7 / 8.4 / 0.9
  Joe              27.8   26.8   27.3   0.7   0.62      63.7 / 18.0 / 18.3
Copula under H0: Joe
  Clayton          90.9   90.2   90.6   0.5   0.59      7.6 / 3.7 / 88.7
  Gumbel-Hougaard  12.4   11.0   11.7   1.0   0.33      82.5 / 11.6 / 5.9
  Frank            34.1   33.8   34.0   0.2   0.89      58.0 / 16.1 / 25.9
  Plackett         36.3   35.9   36.1   0.3   0.85      56.7 / 14.4 / 28.9
  Joe              3.2    4.4    3.8    0.8   0.16      94.1 / 4.2 / 1.7

5.1.2 Testing the Gumbel hypothesis

We summarize the results for the Gumbel hypothesis under several fixed alternatives.

• The nominal levels of all approaches match the prescribed size of 5%.
• For the Clayton alternative both tests perform very well. For the Plackett alternative and a high level of dependence both tests perform well. For the other alternatives both tests perform poorly.

5.1.3 Testing the Frank hypothesis

We summarize the results for the Frank hypothesis under several fixed alternatives.

• The nominal levels of all approaches match the prescribed size of 5%.
• Power increases with the level of dependence.
• For a high level of dependence both tests perform well; for a low level of dependence both tests perform clearly worse.

5.1.4 Testing the Plackett hypothesis

We summarize the results for the Plackett hypothesis under several fixed alternatives.

• The nominal levels of all approaches match the prescribed size of 5%.
• For the Clayton and Joe alternatives both tests perform well.
• Power increases with the level of dependence, except for the Clayton alternative.
• S_n^(B) performs better than S_n^(C), except for the Clayton alternative.


Table 5.2: Percentage of rejections of H0 by S_n^(C) for two-dimensional data sets of size n = 150 from different copula models with τ = 0.50 and different orders of conditioning.

True copula        1-2    2-1    avg    std   p-value   rejections in 0 / 1 / 2 orders

Copula under H0: Clayton
  Clayton          6.2    6.0    6.1    0.1   0.85      89.3 / 9.2 / 1.5
  Gumbel-Hougaard  27.2   26.9   27.1   0.2   0.88      62.9 / 20.1 / 17.0
  Frank            12.1   10.8   11.5   0.9   0.36      82.9 / 11.3 / 5.8
  Plackett         11.4   13.2   12.3   1.3   0.22      80.5 / 14.4 / 5.1
  Joe              62.6   61.2   61.9   1.0   0.52      27.8 / 20.6 / 51.6
Copula under H0: Gumbel-Hougaard
  Clayton          59.4   57.3   58.4   1.5   0.34      32.4 / 18.5 / 49.1
  Gumbel-Hougaard  4.9    4.3    4.6    0.4   0.52      91.7 / 7.4 / 0.9
  Frank            9.2    9.3    9.3    0.1   0.94      86.3 / 8.9 / 4.8
  Plackett         10.2   10.6   10.4   0.3   0.77      84.0 / 11.2 / 4.8
  Joe              9.9    10.1   10.0   0.1   0.88      85.3 / 9.4 / 5.3
Copula under H0: Frank
  Clayton          31.9   34.0   33.0   1.5   0.32      53.5 / 27.1 / 19.4
  Gumbel-Hougaard  10.0   8.7    9.4    0.9   0.32      85.5 / 10.3 / 4.2
  Frank            4.7    4.3    4.5    0.3   0.67      92.2 / 6.6 / 1.2
  Plackett         6.3    5.7    6.0    0.4   0.57      89.6 / 8.8 / 1.6
  Joe              27.4   25.9   26.7   1.1   0.45      65.0 / 16.7 / 18.3
Copula under H0: Plackett
  Clayton          27.5   30.2   28.9   1.9   0.18      57.6 / 27.1 / 15.3
  Gumbel-Hougaard  9.3    8.3    8.8    0.7   0.43      85.7 / 11.0 / 3.3
  Frank            4.9    3.4    4.2    1.1   0.09      92.8 / 6.1 / 1.1
  Plackett         5.1    5.1    5.1    0.0   1.00      90.7 / 8.4 / 0.9
  Joe              27.8   26.8   27.3   0.7   0.62      63.7 / 18.0 / 18.3
Copula under H0: Joe
  Clayton          90.9   90.2   90.6   0.5   0.59      7.6 / 3.7 / 88.7
  Gumbel-Hougaard  12.4   11.0   11.7   1.0   0.33      82.5 / 11.6 / 5.9
  Frank            34.1   33.8   34.0   0.2   0.89      58.0 / 16.1 / 25.9
  Plackett         36.3   35.9   36.1   0.3   0.85      56.7 / 14.4 / 28.9
  Joe              3.2    4.4    3.8    0.8   0.16      94.1 / 4.2 / 1.7

5.1.5 Testing the Joe hypothesis

We summarize the results for the Joe hypothesis under several fixed alternatives.

• The nominal levels of all approaches match the prescribed size of 5%.
• Both tests perform rather well.
• Power increases with the level of dependence, except for the Frank alternative.
• There is no clear difference between S_n^(B) and S_n^(C).

5.2 Results for the three-dimensional case

5.2.1 Testing the Clayton hypothesis

We summarize the results for the Clayton hypothesis under several fixed alternatives.

• The nominal levels of all approaches match the prescribed size of 5%. Our percentages differ more from 5% than the percentages in Genest et al.; this is due to the number of repetitions (Genest et al. did 10,000 repetitions, we did 1000).
• Both tests perform poorly for a low level of dependence; for a high level of dependence both tests perform very well.
• S_n^(B) performs clearly better than S_n^(C).

5.2.2 Testing the Gumbel hypothesis

We summarize the results for the Gumbel hypothesis under several fixed alternatives.

• The nominal levels of all approaches match the prescribed size of 5%. Again there is a bigger difference than we see in Genest et al.


Table 5.3: Percentage of rejections of H0 by S_n^(C) for two-dimensional data sets of size n = 150 from different copula models with τ = 0.75 and different orders of conditioning.

True copula        1-2     2-1     avg     std   p-value   rejections in 0 / 1 / 2 orders

Copula under H0: Clayton
  Clayton          5.4     4.2     4.8     0.8   0.21      92.6 / 5.2 / 2.2
  Gumbel-Hougaard  99.8    99.8    99.8    0.0   1.00      0.0 / 0.4 / 99.6
  Frank            98.4    98.3    98.4    0.1   0.86      0.7 / 1.9 / 97.4
  Plackett         81.0    80.3    80.7    0.5   0.69      7.8 / 23.1 / 69.1
  Joe              100.0   100.0   100.0   0.0   1.00      0.0 / 0.0 / 100.0
Copula under H0: Gumbel-Hougaard
  Clayton          100.0   100.0   100.0   0.0   1.00      0.0 / 0.0 / 100.0
  Gumbel-Hougaard  5.2     5.1     5.2     0.1   0.92      90.8 / 8.1 / 1.1
  Frank            6.4     7.2     6.8     0.6   0.48      90.0 / 6.4 / 3.6
  Plackett         64.5    65.8    65.2    0.9   0.54      25.7 / 18.3 / 56.0
  Joe              24      23.5    23.8    0.4   0.79      67.7 / 17.1 / 15.2
Copula under H0: Frank
  Clayton          96.7    96.9    96.8    0.1   0.80      1.1 / 4.2 / 94.7
  Gumbel-Hougaard  20.3    21.5    20.9    0.8   0.51      62.1 / 34.0 / 3.9
  Frank            4.8     4.8     4.8     0.0   1.00      91.6 / 7.2 / 1.2
  Plackett         54.2    55.4    54.8    0.8   0.59      26.4 / 37.6 / 36.0
  Joe              63.5    60.7    62.1    2.0   0.20      21.1 / 33.6 / 45.3
Copula under H0: Plackett
  Clayton          19.6    18.7    19.2    0.6   0.61      74.9 / 11.9 / 13.2
  Gumbel-Hougaard  23.1    23.2    23.2    0.1   0.96      70.1 / 13.5 / 16.4
  Frank            11.6    10.7    11.2    0.6   0.52      83.8 / 10.1 / 6.1
  Plackett         4.8     4.9     4.9     0.1   0.92      91.9 / 6.5 / 1.6
  Joe              66.6    65.4    66.0    0.8   0.57      27.8 / 12.4 / 59.8
Copula under H0: Joe
  Clayton          100.0   100.0   100.0   0.0   1.00      0.0 / 0.0 / 100.0
  Gumbel-Hougaard  29.3    27.0    28.2    1.6   0.25      54.1 / 35.5 / 10.4
  Frank            51.2    50.0    50.6    0.8   0.59      38.8 / 21.2 / 40.0
  Plackett         89.0    90.5    89.8    1.1   0.27      4.6 / 11.3 / 84.1
  Joe              4.8     5.4     5.1     0.4   0.54      90.2 / 9.4 / 0.4

• For the Clayton alternative both tests perform very well; for the Frank alternative both tests perform poorly.
• For the Clayton alternative we see that the power increases with the level of dependence. For the Frank alternative we do not see this pattern.
• For a low level of dependence we see that S_n^(C) performs better than S_n^(B). For a high level of dependence we see that S_n^(B) performs better than S_n^(C).

5.2.3 Testing the Frank hypothesis

We summarize the results for the Frank hypothesis under several fixed alternatives.

• The nominal levels of all approaches match the prescribed size of 5%. Again there is a bigger difference than we see in Genest et al.
• For the Clayton alternative both tests perform very well; for the Gumbel alternative both tests perform very poorly.
• For the Clayton alternative we see that the power increases with the level of dependence. For the Gumbel alternative we see the same pattern, but only very slightly.
• For the Clayton alternative the test S_n^(C) performs better than S_n^(B), specifically for a low level of dependence.

5.3 Order of transformations

As described earlier, we performed the tests S_n^(C) and S_n^(B) for all possible orders of transformation. In the two-dimensional case there are 2! = 2 possible orders; in the three-dimensional case there are 3! = 6 possible orders.

In this thesis we want to examine the effect of the order of conditioning for the tests S_n^(C) and S_n^(B). Our null hypothesis is that the order of conditioning has no effect on the results.


Table 5.4: Percentage of rejections of H0 by S_n^(B) for two-dimensional data sets of size n = 150 from different copula models with τ = 0.25 and different orders of conditioning.

True copula        1-2    2-1    avg    std   p-value   rejections in 0 / 1 / 2 orders

Copula under H0: Clayton
  Clayton          4.5    5.2    4.9    0.5   0.47      92.2 / 5.9 / 1.9
  Gumbel-Hougaard  42.1   43.0   42.6   0.6   0.68      48.0 / 18.9 / 33.1
  Frank            23.8   22.6   23.2   0.8   0.52      69.5 / 14.6 / 15.9
  Plackett         20.5   22.4   21.5   1.3   0.30      71.0 / 15.1 / 13.9
  Joe              76.4   76.0   76.2   0.3   0.83      16.1 / 15.4 / 68.5
Copula under H0: Gumbel-Hougaard
  Clayton          54.4   54.8   54.6   0.3   0.86      36.9 / 17.0 / 46.1
  Gumbel-Hougaard  5.5    4.1    4.8    1.0   0.14      92.6 / 5.2 / 2.2
  Frank            8.5    8.8    8.7    0.2   0.81      88.0 / 6.7 / 5.3
  Plackett         8.3    11.0   9.7    1.9   0.04      85.5 / 9.7 / 4.8
  Joe              13.4   12.3   12.9   0.8   0.46      82.5 / 9.3 / 8.2
Copula under H0: Frank
  Clayton          22.4   24.4   23.4   1.4   0.29      66.9 / 19.4 / 13.7
  Gumbel-Hougaard  13.4   11.3   12.4   1.5   0.15      82.6 / 10.1 / 7.3
  Frank            4.9    4.4    4.7    0.4   0.60      92.5 / 5.7 / 1.8
  Plackett         5.5    5.0    5.3    0.4   0.62      91.9 / 5.7 / 2.4
  Joe              37.7   36.2   37     1.1   0.49      53.8 / 18.5 / 27.7
Copula under H0: Plackett
  Clayton          20.0   22.8   21.4   2.0   0.13      69.4 / 18.4 / 12.2
  Gumbel-Hougaard  13.4   10.6   12.0   2.0   0.05      82.9 / 10.2 / 6.9
  Frank            5.0    4.4    4.7    0.4   0.53      92.6 / 5.4 / 2.0
  Plackett         5.2    4.8    5.0    0.3   0.68      92.1 / 5.8 / 2.1
  Joe              37.6   37.3   37.5   0.2   0.89      53.4 / 18.3 / 28.3
Copula under H0: Joe
  Clayton          90.9   89.4   90.2   1.1   0.26      8.4 / 2.9 / 88.7
  Gumbel-Hougaard  10.0   9.6    9.8    0.3   0.76      87.1 / 6.2 / 6.7
  Frank            35.2   35.3   35.3   0.1   0.96      59.2 / 11.1 / 29.7
  Plackett         34.7   34.3   34.5   0.3   0.85      60.3 / 10.4 / 29.3
  Joe              4.2    4.2    4.2    0.0   1.00      94.0 / 3.6 / 2.4

We assume that the 1000 trials we perform for one specific situation follow a binomial distribution with n = 1000, and we test whether the hypothesis that the binomial parameters (the rejection probabilities) for the two different orders of conditioning are equal holds. For this test we make use of a χ² statistic; this method is detailed in section 10.4.2 of 'Probability and Statistics for Engineers' by Scheaffer et al. For each combination of null hypothesis C0, alternative hypothesis, level of dependence τ and test, we calculated a p-value.
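Such a comparison can be carried out in R with prop.test, which performs a χ² test for the equality of proportions; the sketch below uses made-up rejection counts, not the counts behind the tables:

# Hypothetical example: 62 rejections out of 1000 under order 1-2,
# 60 rejections out of 1000 under order 2-1.
res <- prop.test(x = c(62, 60), n = c(1000, 1000), correct = FALSE)
res$p.value   # a large p-value gives no evidence that the two rejection probabilities differ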

In the two-dimensional case for the test S_n^(C), the minimum of all 75 p-values is 0.09, so we cannot reject our null hypothesis that the parameters for the two different orders of conditioning are equal. In the two-dimensional case for the test S_n^(B), the minimum of all 75 p-values is 0.04. This occurs when we test for the Gumbel-Hougaard copula while the true copula is the Plackett copula and τ = 0.25; it is, however, the only p-value that is less than 5%, and with 75 tests it is natural to expect that at least one of them falls below this level. Again we cannot reject our null hypothesis that the parameters for the two different orders of conditioning are equal.

In the three-dimensional case for the test S_n^(C), the minimum of all 27 p-values is 0.04. This occurs when we test for the Clayton copula while the true copula is the Gumbel-Hougaard copula and τ = 0.75. Again this is the only p-value that is less than 5%; the lowest p-value besides this one is 0.12. There is no reason to reject our null hypothesis that the parameters for the six different orders of conditioning are equal. In the three-dimensional case for the test S_n^(B), the minimum of all 27 p-values is 0.09. Again there is no reason to reject our null hypothesis that the parameters for the six different orders of conditioning are equal.


Table 5.5: Percentage of rejections of H0 by S_n^(B) for two-dimensional data sets of size n = 150 from different copula models with τ = 0.50 and different orders of conditioning.

True copula        1-2    2-1    avg    std   p-value   rejections in 0 / 1 / 2 orders

Copula under H0: Clayton
  Clayton          5.0    4.1    4.6    0.6   0.33      93.2 / 4.5 / 2.3
  Gumbel-Hougaard  94.5   94.6   94.6   0.1   0.92      2.7 / 5.5 / 91.8
  Frank            84.0   82.8   83.4   0.8   0.47      12.6 / 8.0 / 79.4
  Plackett         64.5   65     64.8   0.4   0.81      28.0 / 14.5 / 57.5
  Joe              99.9   99.7   99.8   0.1   0.32      0.1 / 0.2 / 99.7
Copula under H0: Gumbel-Hougaard
  Clayton          96.3   96.8   96.6   0.4   0.54      1.6 / 3.7 / 94.7
  Gumbel-Hougaard  5.2    5.8    5.5    0.4   0.56      90.7 / 7.6 / 1.7
  Frank            12.8   13.8   13.3   0.7   0.51      79.2 / 15.0 / 5.8
  Plackett         29.3   29.1   29.2   0.1   0.92      57.9 / 25.8 / 16.3
  Joe              21.9   21.5   21.7   0.3   0.83      68.0 / 20.6 / 11.4
Copula under H0: Frank
  Clayton          64.1   65.4   64.8   0.9   0.54      19.9 / 30.7 / 49.4
  Gumbel-Hougaard  14.2   14.7   14.5   0.4   0.75      79.6 / 11.9 / 8.5
  Frank            6.6    5.8    6.2    0.6   0.46      90.4 / 6.8 / 2.8
  Plackett         14.1   14.4   14.3   0.2   0.85      77.8 / 15.9 / 6.3
  Joe              59.8   57.9   58.9   1.3   0.39      31.6 / 19.1 / 49.3
Copula under H0: Plackett
  Clayton          39.8   41.3   40.6   1.1   0.49      43.3 / 32.3 / 24.4
  Gumbel-Hougaard  19.3   19.7   19.5   0.3   0.82      74.1 / 12.8 / 13.1
  Frank            8.5    6.6    7.6    1.3   0.11      88.1 / 8.7 / 3.2
  Plackett         4.0    5.7    4.9    1.2   0.08      92.8 / 4.7 / 2.5
  Joe              66.7   64.2   65.5   1.8   0.24      26.7 / 15.7 / 57.6
Copula under H0: Joe
  Clayton          99.9   99.9   99.9   0.0   1.00      0.0 / 0.2 / 99.8
  Gumbel-Hougaard  20.7   21.7   21.2   0.7   0.58      67.6 / 22.4 / 10.0
  Frank            64.9   64.8   64.9   0.1   0.96      25.2 / 19.9 / 54.9
  Plackett         72.3   70.0   71.2   1.6   0.26      20.7 / 16.3 / 63.0
  Joe              6.3    4.8    5.6    1.1   0.14      90.9 / 7.1 / 2.0

Table 5.6: Percentage of rejections of H0 by S_n^(B) for two-dimensional data sets of size n = 150 from different copula models with τ = 0.75 and different orders of conditioning.

True copula        1-2     2-1     avg     std   p-value   rejections in 0 / 1 / 2 orders

Copula under H0: Clayton
  Clayton          5.4     4.7     5.1     0.5   0.47      94.1 / 1.7 / 4.2
  Gumbel-Hougaard  100.0   100.0   100.0   0.0   1.00      0.0 / 0.0 / 100.0
  Frank            99.6    99.5    99.6    0.1   0.74      0.4 / 0.1 / 99.5
  Plackett         92.7    92.7    92.7    0.0   1.00      4.6 / 5.4 / 90.0
  Joe              100.0   100.0   100.0   0.0   1.00      0.0 / 0.0 / 100.0
Copula under H0: Gumbel-Hougaard
  Clayton          100.0   100.0   100.0   0.0   1.00      0.0 / 0.0 / 100.0
  Gumbel-Hougaard  5.3     5.3     5.3     0.0   1.00      91.1 / 7.2 / 1.7
  Frank            11.6    12.6    12.1    0.7   0.49      81.5 / 12.8 / 5.7
  Plackett         71.8    71.6    71.7    0.1   0.92      20.9 / 14.8 / 64.3
  Joe              43      42.7    42.9    0.2   0.89      43.6 / 27.1 / 29.3
Copula under H0: Frank
  Clayton          95.2    94.3    94.8    0.6   0.37      2.5 / 5.5 / 92.0
  Gumbel-Hougaard  16.7    17.5    17.1    0.6   0.63      74.9 / 16.0 / 9.1
  Frank            5.9     5.8     5.9     0.1   0.92      91.5 / 5.3 / 3.2
  Plackett         52.5    51.4    52.0    0.8   0.62      39.5 / 17.1 / 43.4
  Joe              75.3    73.9    74.6    1.0   0.47      18.5 / 13.8 / 67.7
Copula under H0: Plackett
  Clayton          19.4    19.9    19.7    0.4   0.78      72.2 / 16.3 / 11.5
  Gumbel-Hougaard  27.4    25.7    26.6    1.2   0.39      67.4 / 12.1 / 20.5
  Frank            15.4    15.7    15.6    0.2   0.85      79.9 / 9.1 / 11.0
  Plackett         4.5     4.6     4.6     0.1   0.91      94.0 / 2.9 / 3.1
  Joe              77.5    76.7    77.1    0.6   0.67      17.5 / 10.8 / 71.7
Copula under H0: Joe
  Clayton          100.0   100.0   100.0   0.0   1.00      0.0 / 0.0 / 100.0
  Gumbel-Hougaard  32.7    30.1    31.4    1.8   0.21      51.3 / 34.6 / 14.1
  Frank            67.5    65.6    66.6    1.3   0.37      23.5 / 19.9 / 56.6
  Plackett         91.8    93.3    92.6    1.1   0.20      3.0 / 8.9 / 88.1
  Joe              5.6     5.5     5.6     0.1   0.92      90.9 / 7.1 / 2.0


Table 5.7: Percentage of rejections of H0 by S_n^(C) for three-dimensional data sets of size n = 150 from different copula models with τ = 0.25 and different orders of conditioning.

True copula        1-2-3  1-3-2  2-1-3  2-3-1  3-1-2  3-2-1   avg    std   p-value

Copula under H0: Clayton
  Clayton          5.6    5.2    5.0    5.2    5.4    5.6     5.3    0.2   0.99
  Gumbel-Hougaard  10.8   12.8   11.7   13.7   10.9   11.9    12.0   1.1   0.31
  Frank            3.2    3.6    3.3    3.3    3.8    3.3     3.4    0.2   0.98
Copula under H0: Gumbel-Hougaard
  Clayton          93.2   93.2   93.2   93.3   94     94.4    93.6   0.5   0.81
  Gumbel-Hougaard  5.0    4.5    5.6    6.4    4.8    5.6     5.3    0.7   0.45
  Frank            15.6   16.2   16.7   14.8   17.0   15.4    16.0   0.8   0.76
Copula under H0: Frank
  Clayton          79.2   80.9   80.8   79.2   80.6   80.3    80.2   0.8   0.86
  Gumbel-Hougaard  5.8    5.4    6.4    6.9    6.5    6.7     6.3    0.6   0.74
  Frank            4.4    5.3    4.0    4.4    4.2    4.2     4.4    0.5   0.78

Table 5.8: Percentage of rejections of H0 by S_n^(C) for three-dimensional data sets of size n = 150 from different copula models with τ = 0.50 and different orders of conditioning.

True copula        1-2-3  1-3-2  2-1-3  2-3-1  3-1-2  3-2-1   avg    std   p-value

Copula under H0: Clayton
  Clayton          5.0    4.8    5.0    5.5    4.5    5.1     5.0    0.3   0.95
  Gumbel-Hougaard  82.1   81.1   81.2   80.2   79.4   79.1    80.5   1.2   0.51
  Frank            66.2   64.0   64.2   64.8   65.8   66.3    65.2   1.0   0.81
Copula under H0: Gumbel-Hougaard
  Clayton          99.9   99.8   100.0  99.9   100.0  100.0   99.9   0.1   0.42
  Gumbel-Hougaard  6.0    4.7    5.4    6.2    4.5    5.9     5.5    0.7   0.43
  Frank            13.3   11.8   12.4   12.7   12.9   13.5    12.8   0.6   0.89
Copula under H0: Frank
  Clayton          99.5   99.4   99.5   99.7   99.4   99.4    99.5   0.1   0.93
  Gumbel-Hougaard  10.9   12.2   10.4   11.2   13.0   12.6    11.7   1.0   0.40
  Frank            5.2    4.9    4.7    5.0    4.9    5.3     5.0    0.2   0.99

Table 5.9: Percentage of rejections of H0 by S_n^(C) for three-dimensional data sets of size n = 150 from different copula models with τ = 0.75 and different orders of conditioning.

True copula        1-2-3  1-3-2  2-1-3  2-3-1  3-1-2  3-2-1   avg    std   p-value

Copula under H0: Clayton
  Clayton          3.8    5.3    4.1    4.5    4.5    3.7     4.3    0.6   0.52
  Gumbel-Hougaard  99.5   99.9   100    99.8   100    99.9    99.9   0.2   0.04
  Frank            98     98.5   97.8   98.8   98.6   98.5    98.4   0.4   0.47
Copula under H0: Gumbel-Hougaard
  Clayton          100.0  100.0  100.0  100.0  100.0  100.0   100.0  0.0   1.00
  Gumbel-Hougaard  4.7    4.1    6.2    5.2    4.5    4.9     4.9    0.7   0.35
  Frank            5.3    7.0    5.4    4.3    6.6    5.7     5.7    1.0   0.12
Copula under H0: Frank
  Clayton          100    100    99.9   99.9   99.9   100.0   100.0  0.1   0.70
  Gumbel-Hougaard  22.5   23.6   24.1   23.9   22.2   24.0    23.4   0.8   0.86
  Frank            5.9    5.8    5.7    4.9    5.8    5.6     5.6    0.4   0.94


Table 5.10: Percentage of rejections of H0 by S_n^(B) for three-dimensional data sets of size n = 150 from different copula models with τ = 0.25 and different orders of conditioning.

True copula        1-2-3  1-3-2  2-1-3  2-3-1  3-1-2  3-2-1   avg    std   p-value

Copula under H0: Clayton
  Clayton          5.5    4.7    4.6    5.2    5.4    5.2     5.1    0.4   0.92
  Gumbel-Hougaard  51.1   52.0   49.8   49.7   49.6   51.6    50.6   1.1   0.81
  Frank            22.8   24.7   25.0   23.8   26.7   25.2    24.7   1.3   0.45
Copula under H0: Gumbel-Hougaard
  Clayton          87.5   87.3   87.4   86.4   88.0   87.9    87.4   0.6   0.92
  Gumbel-Hougaard  4.5    4.6    4.8    5.1    4.7    5.6     4.9    0.4   0.88
  Frank            10.7   11.0   10.3   10.5   11.8   11.4    11     0.6   0.89
Copula under H0: Frank
  Clayton          61.9   61.1   61.5   62.5   62.1   62.4    61.9   0.5   0.99
  Gumbel-Hougaard  15.4   15.1   15.1   14.2   16.1   15.4    15.2   0.6   0.92
  Frank            3.6    4.2    3.7    3.5    4.5    4.6     4.0    0.5   0.70

Table 5.11: Percentage of rejections of H0 by S_n^(B) for three-dimensional data sets of size n = 150 from different copula models with τ = 0.50 and different orders of conditioning.

True copula        1-2-3  1-3-2  2-1-3  2-3-1  3-1-2  3-2-1   avg    std   p-value

Copula under H0: Clayton
  Clayton          5.2    4.6    4.8    4.7    4.2    3.7     4.5    0.5   0.68
  Gumbel-Hougaard  98.6   98.4   98.7   98.5   98.5   98.3    98.5   0.1   0.98
  Frank            94.0   93.5   92.8   92.9   93.6   93.1    93.3   0.5   0.89
Copula under H0: Gumbel-Hougaard
  Clayton          99.9   99.9   99.8   99.9   100.0  99.9    99.9   0.1   0.85
  Gumbel-Hougaard  4.6    5.1    5.5    5.3    5.8    5.9     5.4    0.5   0.81
  Frank            14.2   14.1   14.8   14.8   14.1   16.1    14.7   0.8   0.80
Copula under H0: Frank
  Clayton          97.5   97.0   97.1   96.6   96.5   96.5    96.9   0.4   0.75
  Gumbel-Hougaard  15.5   15.4   15.3   15.7   17.6   17.5    16.2   1.1   0.51
  Frank            5.3    4.7    5.6    5.6    4.8    5.9     5.3    0.5   0.81

Table 5.12: Percentage of rejections of H0 by S_n^(B) for three-dimensional data sets of size n = 150 from different copula models with τ = 0.75 and different orders of conditioning.

True copula        1-2-3  1-3-2  2-1-3  2-3-1  3-1-2  3-2-1   avg    std   p-value

Copula under H0: Clayton
  Clayton          3.9    4.7    3.5    4.1    4.7    4.4     4.2    0.5   0.73
  Gumbel-Hougaard  100.0  100.0  100.0  100.0  100.0  100.0   100.0  0.0   1.00
  Frank            100.0  100.0  99.8   99.7   99.9   99.9    99.9   0.1   0.32
Copula under H0: Gumbel-Hougaard
  Clayton          100.0  100.0  100.0  100.0  100.0  100.0   100.0  0.0   1.00
  Gumbel-Hougaard  5.7    5.2    5.8    5.5    4.5    5.4     5.4    0.5   0.83
  Frank            11.2   13.0   12.1   12.6   13.6   11.9    12.4   0.9   0.65
Copula under H0: Frank
  Clayton          99.9   100.0  99.8   99.4   99.8   99.8    99.8   0.2   0.09
  Gumbel-Hougaard  11.5   13.1   12.0   12.5   14.2   13.0    12.7   0.9   0.55
  Frank            6.6    5.3    5.3    5.7    5.6    6.0     5.8    0.5   0.81


Chapter 6

Observations and recommendation

Genest et al. carried out a comparative power study of seven existing blanket goodness-of-fit tests for copula models. The results were presented in 'Goodness-of-fit tests for copulas: A review and a power study' in 2007. The tests S_n^(B) and S_n^(C) are based on Rosenblatt's transform. We tried to replicate those results but did not succeed. Although Genest et al. did 10,000 simulations and we did only 1000, we do not believe this is the cause of the different results: we performed a few tests with 10,000 simulations and those results did not differ significantly from our tests with 1000 simulations. Further research has to be done to find out why the results are different.

In the aforementioned paper all tests were bivariate tests. We expanded our study with trivariate tests.

In general we can conclude that the test S_n^(B) performs better than the test S_n^(C), although there are specific situations where this does not hold.

We performed the tests again, but now with a different order in which the conditioning is done. In the two-dimensional case there are two possible orders; in the three-dimensional case there are six possible orders. Although we obtained different results, the differences were small. A χ²-test showed that we cannot reject the null hypothesis that the order of conditioning has no specific influence on the results. Of course this is the result we hoped for, because we want the same conclusion when we test several times on the same data.


Appendix A

A parametric bootstrap for S_n^(C) and S_n^(B)

The following algorithm is described in terms of the statistic S_n^(C). However, it is also valid, mutatis mutandis, for S_n^(B) or any other rank-based statistic.

1. Compute D_n as per formula (3.3) and estimate θ by θ_n = T_n(U_1, . . . , U_n).

2. Compute the value of S_n^(C) using the formula given in chapter 3.

3. For some large integer N, repeat the following steps for every k ∈ {1, . . . , N}:

   (a) Generate a random sample Y*_{1,k}, . . . , Y*_{n,k} from the distribution C_{θ_n} and compute the associated rank vectors R*_{1,k}, . . . , R*_{n,k}.

   (b) Compute U*_{i,k} = R*_{i,k}/(n + 1) for i ∈ {1, . . . , n}.

   (c) Estimate θ by θ*_{n,k} = T_n(U*_{1,k}, . . . , U*_{n,k}) and compute E*_{1,k}, . . . , E*_{n,k}, where E*_{i,k} = R_{θ*_{n,k}}(U*_{i,k}), i ∈ {1, . . . , n}.

   (d) Let D*_{n,k}(u) = (1/n) Σ_{i=1}^{n} 1(E*_{i,k} ≤ u), u ∈ [0, 1]^d, and set

       S^(C)*_{n,k} = Σ_{i=1}^{n} {D*_{n,k}(E*_{i,k}) − C⊥(E*_{i,k})}².

An approximate p-value for the test is then given by Σ_{k=1}^{N} 1(S^(C)*_{n,k} > S^(C)_n) / N.


Appendix B

Formulas of the bivariate copulas:

Clayton:
C(u, v) = (u^(−θ) + v^(−θ) − 1)^(−1/θ)

Gumbel:
C(u, v) = exp( −[ (−log u)^θ + (−log v)^θ ]^(1/θ) )

Frank:
C(u, v) = −(1/θ) log( 1 + (exp(−θu) − 1)(exp(−θv) − 1) / (exp(−θ) − 1) )

Plackett:
C(u, v) = [ 1 + (θ − 1)(u + v) − { (1 + (θ − 1)(u + v))² − 4θ(θ − 1)uv }^(1/2) ] / ( 2(θ − 1) )

Joe:
C(u, v) = 1 − [ (1 − u)^θ + (1 − v)^θ − (1 − u)^θ (1 − v)^θ ]^(1/θ)

Formulas of the trivariate copulas:

Clayton:
C(u, v, w) = (u^(−θ) + v^(−θ) + w^(−θ) − 2)^(−1/θ)

Gumbel:
C(u, v, w) = exp( −[ (−log u)^θ + (−log v)^θ + (−log w)^θ ]^(1/θ) )

Frank:
C(u, v, w) = −(1/θ) log( 1 + (exp(−θu) − 1)(exp(−θv) − 1)(exp(−θw) − 1) / (exp(−θ) − 1)² )


Appendix C

The R-code for the two-dimensional runs

This code tests for the Clayton copula while the true copula is Clayton as well, τ is 0.25 and the test statistic is S_n^(B). By relocating the pound signs (#) this code can be transformed into the code for all the other tests.

library(copula)

# FUNCTIONS

# We define the function e2Computation that transforms u1 and u2 to e2.
# This computation depends on the H0-copula.
e2Computation <- function(u1, u2, th){
  (u1^(-th) + u2^(-th) - 1)^-((th+1)/th) * u1^(-th-1)                                                            # Clayton
  # exp(-((-log(u1))^th+(-log(u2))^th)^(1/th))*((-log(u1))^th+(-log(u2))^th)^((1-th)/th)*(-log(u1))^(th-1)/u1    # Gumbel
  # 1/(1+(exp(-th*u1)-1)*(exp(-th*u2)-1)/(exp(-th)-1))*exp(-th*u1)*(exp(-th*u2)-1)/(exp(-th)-1)                  # Frank
  # 0.5*(1-(1+(th-1)*(u1+u2)-2*th*u2)/(((1+(th-1)*(u1+u2))^2-4*th*(th-1)*u1*u2)^0.5))                            # Plackett
  # -((1-u1)^th+(1-u2)^th-(1-u1)^th*(1-u2)^th)^(1/th-1)*(1-u1)^(th-1)*((1-u2)^th-1)                              # Joe
}

# We define the function calculateRosenblatt that transforms the (n x 2)-matrix of u's
# to the (n x 2)-matrix of e's.
calculateRosenblatt <- function(samp, th) {
  cbind(samp[,1], e2Computation(samp[,1], samp[,2], th))
}

# We define the function Dn as in formula 8 at page 203 of Genest et al.
# E is a (n x 2)-matrix, u is a two-dimensional vector.
Dn <- function(E, u) {
  r <- (E[,1] <= u[1]) * (E[,2] <= u[2])
  sum(r)/length(r)
}

# We define the function calculateSCFromSample that calculates the statistic SC
# as in formula 9 at page 203.
calculateSCFromSample <- function(samp, th) {
  E <- calculateRosenblatt(samp, th)
  sum(apply(E, 1, function(x) (Dn(E,x) - prod(x))^2))
}

# We define the function calculateSBFromSample that calculates the statistic SB
# as in formula 9 at page 203.
calculateSBFromSample <- function(samp, th) {
  E <- calculateRosenblatt(samp, th)
  f <- function(x,y) {1 - pmax(x,y)}
  firstColumn  <- outer(E[,1], E[,1], f)
  secondColumn <- outer(E[,2], E[,2], f)
  C <- sum(firstColumn * secondColumn)
  n/9 - 0.5*sum(apply(E, 1, function(x) prod(1-x^2))) + C/n
}

setwd("C:/Users/Student/Documents/Maarten Offenberg")

# END FUNCTIONS

n <- 150
Ktau <- 0.25
theta <- 2*Ktau/(1-Ktau)                 # theta of the true copula: Clayton
# theta <- 1/(1-Ktau)                    # Gumbel
# theta <- iTau(frankCopula(), Ktau)     # Frank
# theta <- iTau(plackettCopula(), Ktau)  # Plackett
# theta <- iTau(joeCopula(), Ktau)       # Joe

rej <- NA
set.seed(100)
date()
output <- numeric()

for (j in 1:1000) {
  # We create a sample Z from the 2-dim true copula
  Z <- rCopula(n, claytonCopula(theta))      # Clayton
  # Z <- rCopula(n, gumbelCopula(theta))     # Gumbel
  # Z <- rCopula(n, frankCopula(theta))      # Frank
  # Z <- rCopula(n, plackettCopula(theta))   # Plackett
  # Z <- rCopula(n, joeCopula(theta))        # Joe

  # We transform sample Z to u's.
  Zu <- cbind(rank(Z[,1]), rank(Z[,2]))/(n+1)

  # We calculate tau_n and theta_n from the sample:
  tau_n <- cor(Zu[,1], Zu[,2], method="kendall")
  theta_n <- 2*tau_n/(1-tau_n)               # from the H0 copula: Clayton
  # theta_n <- 1/(1-tau_n)                   # Gumbel
  # theta_n <- iTau(frankCopula(), tau_n)    # Frank
  # theta_n <- iTau(plackettCopula(), tau_n) # Plackett
  # theta_n <- iTau(joeCopula(), tau_n)      # Joe

  # We calculate the value of SB (formula 9 at page 203)
  SB <- calculateSBFromSample(Zu, theta_n)
  outputrij <- SB
  count <- NA
  for (i in 1:1000){
    Y1 <- rCopula(n, claytonCopula(theta_n))
    # Y1 <- rCopula(n, gumbelCopula(theta_n))
    # Y1 <- rCopula(n, frankCopula(theta_n))
    # Y1 <- rCopula(n, plackettCopula(theta_n))
    # Y1 <- rCopula(n, joeCopula(theta_n))
    U1 <- cbind(rank(Y1[,1]), rank(Y1[,2]))/(n+1)
    tau1 <- cor(U1[,1], U1[,2], method="kendall")
    theta1 <- pmax(0.0001, 2*tau1/(1-tau1))               # from the H0 copula: Clayton
    # theta1 <- pmax(0.0001, 1/(1-tau1))                  # Gumbel
    # theta1 <- pmax(0.0001, iTau(frankCopula(), tau1))   # Frank
    # theta1 <- pmax(0.0001, iTau(plackettCopula(), tau1))# Plackett
    # theta1 <- pmax(0.0001, iTau(joeCopula(), tau1))     # Joe
    SB1 <- calculateSBFromSample(U1, theta1)
    outputrij <- c(outputrij, SB1)
    count[i] <- SB1 > SB
  }
  rej[j] <- sum(count)/1000 < 0.05   # TRUE if p-value is less than 0.05
  if (j<21 | j==250 | j==500 | j==750 | j==1000){
    cat(j, "\t", date(), "\t", "SB=", SB, "\t", "rej. rate =", sum(rej)/j, "\n")
  }
  outputrij <- c(outputrij, sum(count)/1000, rej[j])
  output <- cbind(output, outputrij)
}


Appendix D

The R-code for the three-dimensional runs

This code tests for the Clayton copula while the true copula is Clayton as well, τ is 0.25 and the test statistic is S_n^(C). By relocating the pound signs (#) this code can be transformed into the code for all the other tests.

library(copula)

# FUNCTIONS

# We define the function e2Computation that transforms u1 and u2 to e2.
# This computation depends on the H0-copula.
e2Computation <- function(u1, u2, th){
  (u1^(-th) + u2^(-th) - 1)^-((th+1)/th) * u1^(-th-1)                                                            # Clayton
  # exp(-((-log(u1))^th+(-log(u2))^th)^(1/th))*((-log(u1))^th+(-log(u2))^th)^((1-th)/th)*(-log(u1))^(th-1)/u1    # Gumbel
  # 1/(1+(exp(-th*u1)-1)*(exp(-th*u2)-1)/(exp(-th)-1))*exp(-th*u1)*(exp(-th*u2)-1)/(exp(-th)-1)                  # Frank
  # 0.5*(1-(1+(th-1)*(u1+u2)-2*th*u2)/(((1+(th-1)*(u1+u2))^2-4*th*(th-1)*u1*u2)^0.5))                            # Plackett
  # -((1-u1)^th+(1-u2)^th-(1-u1)^th*(1-u2)^th)^(1/th-1)*(1-u1)^(th-1)*((1-u2)^th-1)                              # Joe
}

# We define functions that we use for the e3-calculation.
F  <- function(x, th){exp(-th*x)-1}
G2 <- function(x, y, th){(-log(x))^th+(-log(y))^th}
G3 <- function(x, y, z, th){(-log(x))^th+(-log(y))^th+(-log(z))^th}

# We define the function e3Computation that transforms u1, u2 and u3 to e3 (page 202).
# This computation depends on the H0-copula.
e3Computation <- function(u1, u2, u3, th){
  ((u1^(-th) + u2^(-th) + u3^(-th) - 2) / (u1^(-th) + u2^(-th) - 1))^-((2*th+1)/th)   # Clayton
  # exp(-G3(u1,u2,u3,th)^(1/th))*((-1/th^2+1/th)*G3(u1,u2,u3,th)^(1/th-2)-(1/th^2)*G3(u1,u2,u3,th)^(2/th-2)) /
  #   (exp(-G2(u1,u2,th)^(1/th))*((-1/th^2+1/th)*G2(u1,u2,th)^(1/th-2)-(1/th^2)*G2(u1,u2,th)^(2/th-2)))   # Gumbel
  # (-F(u1,th)*F(u3,th)/(F(1,th)^2+F(u1,th)*F(u2,th)*F(u3,th))^2 + F(u3,th)*-th*exp(-th*u1)/(F(1,th)^2+F(u1,th)*F(u2,th)*F(u3,th))) /
  #   (-F(u1,th)*F(1,th)/(F(1,th)^2+F(u1,th)*F(u2,th)*F(1,th))^2 + F(1,th)*-th*exp(-th*u1)/(F(1,th)^2+F(u1,th)*F(u2,th)*F(1,th)))   # Frank
}

# We define the function calculateRosenblatt that transforms the (n x 3)-matrix of u's
# to the (n x 3)-matrix of e's. The six lines correspond to the six orders of conditioning.
calculateRosenblatt <- function(samp, th) {
  cbind(samp[,1], e2Computation(samp[,1],samp[,2],th), e3Computation(samp[,1],samp[,2],samp[,3],th))     # 123
  # cbind(samp[,1], e3Computation(samp[,1],samp[,3],samp[,2],th), e2Computation(samp[,1],samp[,3],th))   # 132
  # cbind(e2Computation(samp[,2],samp[,1],th), samp[,2], e3Computation(samp[,2],samp[,1],samp[,3],th))   # 213
  # cbind(e3Computation(samp[,2],samp[,3],samp[,1],th), samp[,2], e2Computation(samp[,2],samp[,3],th))   # 231
  # cbind(e2Computation(samp[,3],samp[,1],th), e3Computation(samp[,3],samp[,1],samp[,2],th), samp[,3])   # 312
  # cbind(e3Computation(samp[,3],samp[,2],samp[,1],th), e2Computation(samp[,3],samp[,2],th), samp[,3])   # 321
}

# We define the function Dn as in formula 8 at page 203 of Genest et al.
# E is a (n x 3)-matrix, u is a 3-dimensional vector.
Dn <- function(E, u) {
  r <- (E[,1] <= u[1]) * (E[,2] <= u[2]) * (E[,3] <= u[3])
  sum(r)/length(r)
}

# We define a function that calculates the statistic SC as in formula 9 at page 203.
calculateSCFromSample <- function(samp, th) {
  E <- calculateRosenblatt(samp, th)
  sum(apply(E, 1, function(x) (Dn(E,x) - prod(x))^2))
}

# We define a function that calculates the statistic SB as in formula 9 at page 203.
calculateSBFromSample <- function(samp, th) {
  E <- calculateRosenblatt(samp, th)
  f <- function(x,y) 1 - pmax(x,y)
  firstColumn  <- outer(E[,1], E[,1], f)
  secondColumn <- outer(E[,2], E[,2], f)
  thirdColumn  <- outer(E[,3], E[,3], f)
  C <- sum(firstColumn * secondColumn * thirdColumn)
  n/27 - 0.25*sum(apply(E, 1, function(x) prod(1-x^2))) + C/n
}

# END FUNCTIONS

n <- 150
Ktau <- 0.25
theta <- 2*Ktau/(1-Ktau)                 # theta of the true copula: Clayton
# theta <- 1/(1-Ktau)                    # Gumbel
# theta <- iTau(frankCopula(), Ktau)     # Frank
# theta <- iTau(plackettCopula(), Ktau)  # Plackett
# theta <- iTau(joeCopula(), Ktau)       # Joe

rej <- NA
set.seed(100)
date()
output <- numeric()

for (j in 1:1000) {
  # We create a sample Z from the 3-dim true copula
  Z <- rCopula(n, claytonCopula(theta, dim=3))      # Clayton
  # Z <- rCopula(n, gumbelCopula(theta, dim=3))     # Gumbel
  # Z <- rCopula(n, frankCopula(theta, dim=3))      # Frank
  # Z <- rCopula(n, plackettCopula(theta, dim=3))   # Plackett
  # Z <- rCopula(n, joeCopula(theta, dim=3))        # Joe

  # We transform sample Z to u's.
  Zu <- cbind(rank(Z[,1]), rank(Z[,2]), rank(Z[,3]))/(n+1)

  # We calculate the pairwise taus and theta_n:
  tau_n12 <- cor(Zu[,1], Zu[,2], method="kendall")
  tau_n13 <- cor(Zu[,1], Zu[,3], method="kendall")
  tau_n23 <- cor(Zu[,2], Zu[,3], method="kendall")
  theta_n <- (2*tau_n12/(1-tau_n12)+2*tau_n13/(1-tau_n13)+2*tau_n23/(1-tau_n23))/3                        # from the H0 copula: Clayton
  # theta_n <- (1/(1-tau_n12)+1/(1-tau_n13)+1/(1-tau_n23))/3                                              # Gumbel
  # theta_n <- (iTau(frankCopula(), tau_n12)+iTau(frankCopula(), tau_n13)+iTau(frankCopula(), tau_n23))/3 # Frank
  # theta_n <- (iTau(plackettCopula(), tau_n12)+iTau(plackettCopula(), tau_n13)+iTau(plackettCopula(), tau_n23))/3 # Plackett
  # theta_n <- (iTau(joeCopula(), tau_n12)+iTau(joeCopula(), tau_n13)+iTau(joeCopula(), tau_n23))/3       # Joe

  # We calculate the value of SC as in formula 9 at page 203
  SC <- calculateSCFromSample(Zu, theta_n)
  outputrij <- SC
  count <- NA
  for (i in 1:1000) {
    Y1 <- rCopula(n, claytonCopula(theta_n, dim=3))
    # Y1 <- rCopula(n, gumbelCopula(theta_n, dim=3))
    # Y1 <- rCopula(n, frankCopula(theta_n, dim=3))
    # Y1 <- rCopula(n, plackettCopula(theta_n, dim=3))
    # Y1 <- rCopula(n, joeCopula(theta_n, dim=3))
    U1 <- cbind(rank(Y1[,1]), rank(Y1[,2]), rank(Y1[,3]))/(n+1)
    tau112 <- cor(U1[,1], U1[,2], method="kendall")
    tau113 <- cor(U1[,1], U1[,3], method="kendall")
    tau123 <- cor(U1[,2], U1[,3], method="kendall")
    theta1 <- pmax(0.0001, (2*tau112/(1-tau112)+2*tau113/(1-tau113)+2*tau123/(1-tau123))/3)                  # from the H0 copula: Clayton
    # theta1 <- pmax(0.0001, (1/(1-tau112)+1/(1-tau113)+1/(1-tau123))/3)                                     # Gumbel
    # theta1 <- pmax(0.0001, (iTau(frankCopula(), tau112)+iTau(frankCopula(), tau113)+iTau(frankCopula(), tau123))/3)             # Frank
    # theta1 <- pmax(0.0001, (iTau(plackettCopula(), tau112)+iTau(plackettCopula(), tau113)+iTau(plackettCopula(), tau123))/3)    # Plackett
    # theta1 <- pmax(0.0001, (iTau(joeCopula(), tau112)+iTau(joeCopula(), tau113)+iTau(joeCopula(), tau123))/3)                   # Joe
    SC1 <- calculateSCFromSample(U1, theta1)
    outputrij <- c(outputrij, SC1)
    count[i] <- SC1 > SC
  }
  rej[j] <- sum(count)/1000 < 0.05   # TRUE if p-value is less than 0.05
  if (j<21 | j==100 | j==250 | j==500 | j==750 | j==1000){
    cat(j, "\t", date(), "\t", "SC=", SC, "\t", "rej. rate =", sum(rej)/j, "\n")
  }
  outputrij <- c(outputrij, sum(count)/1000, rej[j])
  output <- cbind(output, outputrij)
}


Appendix E

Table 6.1: Percentage of the number of rejections of H0 in the six orders of conditioning by S_n^(C) for three-dimensional data sets of size n = 150 from different copula models with τ = 0.25.

True copula        0      1      2      3      4      5      6

Copula under H0: Clayton
  Clayton          80.6   11.6   5.1    1.4    0.8    0.2    0.3
  Gumbel           65.7   15.9   8.8    4.2    2.4    1.9    1.1
  Frank            85.6   9.7    3.5    1.0    0.2    0.0    0.0
Copula under H0: Gumbel
  Clayton          2.0    0.9    2.0    1.5    3.0    3.7    86.9
  Gumbel           82.4   9.9    4.0    1.9    1.1    0.3    0.4
  Frank            62.7   13.6   9.1    4.8    3.4    2.5    3.9
Copula under H0: Frank
  Clayton          6.0    4.7    5.0    5.8    6.9    8.3    63.3
  Gumbel           79.4   11.2   5.0    2.4    1.1    0.5    0.4
  Frank            84.8   8.8    3.8    1.3    0.6    0.4    0.3

Table 6.2: Percentage of the number of rejections of H0 in the six orders of conditioning by S_n^(C) for three-dimensional data sets of size n = 150 from different copula models with τ = 0.50.

True copula        0      1      2      3      4      5      6

Copula under H0: Clayton
  Clayton          81.9   11.1   4.0    1.8    0.7    0.4    0.1
  Gumbel           2.8    4.6    5.8    6.7    8.9    16.0   55.2
  Frank            9.2    9.1    9.1    9.2    13.8   16.4   33.2
Copula under H0: Gumbel
  Clayton          0.0    0.0    0.0    0.1    0.0    0.1    99.8
  Gumbel           79.1   12.9   5.6    1.7    0.3    0.1    0.3
  Frank            64.7   15.1   9.3    4.8    3.4    1.3    1.4
Copula under H0: Frank
  Clayton          0.1    0.1    0.0    0.1    0.4    0.9    98.4
  Gumbel           59.1   22.6   11.0   4.4    2.2    0.5    0.2
  Frank            80.7   12.2   4.8    1.2    0.9    0.2    0.0

Table 6.3: Percentage of the number of rejections of H0 in the six orders of conditioning by S_n^(C) for three-dimensional data sets of size n = 150 from different copula models with τ = 0.75.

True copula        0      1      2      3      4      5      6

Copula under H0: Clayton
  Clayton          83.0   12.1   2.9    0.7    0.8    0.3    0.2
  Gumbel           0.0    0.0    0.0    0.1    0.1    0.4    99.4
  Frank            0.0    0.0    0.1    0.2    1.6    5.6    92.5
Copula under H0: Gumbel
  Clayton          0.0    0.0    0.0    0.0    0.0    0.0    100
  Gumbel           78.6   15.7   3.9    1.3    0.4    0.0    0.1
  Frank            79.9   12.1   4.9    1.4    0.7    0.6    0.4
Copula under H0: Frank
  Clayton          0.0    0.0    0.0    0.1    0.0    0.0    99.9
  Gumbel           26.4   31.5   26.2   9.0    5.3    1.4    0.2
  Frank            77.2   15.5   4.8    1.6    0.7    0.2    0.0


Table 6.4: Percentage of the number of rejections of H0 in the six orders of conditioning by S_n^(B) for three-dimensional data sets of size n = 150 from different copula models with τ = 0.25.

True copula        0      1      2      3      4      5      6

Copula under H0: Clayton
  Clayton          85.8   6.3    3.5    2.0    1.3    0.5    0.6
  Gumbel           24.5   9.9    9.9    10.0   9.2    11.7   24.8
  Frank            49.4   15.0   10.3   8.0    5.2    4.8    7.3
Copula under H0: Gumbel
  Clayton          7.4    1.4    2.1    2.3    2.8    3.2    80.8
  Gumbel           88.0   4.6    3.2    1.3    1.1    0.8    1.0
  Frank            78.6   6.1    4.3    3.2    2.0    1.4    4.4
Copula under H0: Frank
  Clayton          20.2   7.4    7.6    5.7    7.7    7.4    44
  Gumbel           70.4   7.9    6.2    3.9    3.9    2.5    5.2
  Frank            88.4   5.7    2.2    1.9    1.1    0.3    0.4

Table 6.5: Percentage of the number of rejections of H0 in the six orders of conditioning by S_n^(B) for three-dimensional data sets of size n = 150 from different copula models with τ = 0.50.

True copula        0      1      2      3      4      5      6

Copula under H0: Clayton
  Clayton          84.2   9.5    3.5    1.4    0.8    0.3    0.3
  Gumbel           0.2    0.1    0.1    0.6    1.1    2.9    95.0
  Frank            0.9    1.2    2.0    1.8    4.0    7.3    82.8
Copula under H0: Gumbel
  Clayton          0.0    0.1    0.0    0.0    0.0    0.1    99.8
  Gumbel           81.1   11.2   4.6    1.6    0.8    0.4    0.3
  Frank            64.6   12.9   7.7    6.5    3.8    1.9    2.6
Copula under H0: Frank
  Clayton          0.3    0.4    0.6    1.0    3.1    3.4    91.2
  Gumbel           61.4   13.6   8.7    6.9    4.2    2.7    2.5
  Frank            82.9   9.4    3.4    2.3    1.2    0.8    0.0

Table 6.6: Percentage of the number of rejections of H0 in the six orders of conditioning by S_n^(B) for three-dimensional data sets of size n = 150 from different copula models with τ = 0.75.

True copula        0      1      2      3      4      5      6

Copula under H0: Clayton
  Clayton          86.3   8.4    2.4    1.0    0.9    0.5    0.5
  Gumbel           0.0    0.0    0.0    0.0    0.0    0.0    100
  Frank            0.0    0.0    0.0    0.0    0.2    0.3    99.5
Copula under H0: Gumbel
  Clayton          0.0    0.0    0.0    0.0    0.0    0.0    100
  Gumbel           80.4   11.6   5.1    1.9    0.5    0.4    0.1
  Frank            66.6   14.7   7.3    5.0    3.3    1.7    1.4
Copula under H0: Frank
  Clayton          0.0    0.1    0.0    0.0    0.2    0.4    99.3
  Gumbel           62.1   18.3   9.7    4.7    2.4    1.9    0.9
  Frank            78.7   12.7   5.6    1.9    0.7    0.3    0.1


References

Berg, D. (2009). Copula goodness-of-fit testing: an overview and power comparison.

Cherubini, U., E. Luciano and W. Vecchiato (2004). Copula Methods in Finance.

Genest, C., B. Rémillard and D. Beaudoin (2007). Goodness-of-fit tests for copulas: A review and a power study.

Joe, H. (1997). Multivariate Models and Dependence Concepts.

Malevergne, Y. and D. Sornette (2006). Extreme Financial Risks: From Dependence to Risk Management.

McNeil, A.J., R. Frey and P. Embrechts (2005). Quantitative Risk Management.

Nelsen, R. (2005). An Introduction to Copulas, second edition.

Scheaffer, R., M. Mulekar and J. McClave (2010). Probability and Statistics for Engineers, fifth edition.

https://en.wikipedia.org/wiki/Copula_(probability_theory)
