Dependence Modelling Through Archimedean Copulas
The Bivariate Case

Shadee van Vlaanderen

Abstract

In this paper the Archimedean copula method is applied to bivariate data. With copulas it is possible to model the dependence between random variables with given marginal distributions. The application of copula theory in financial studies is a relatively new concept, but copulas have been studied for decades in probability theory, and their application to financial studies is rapidly making progress. A certain group of copulas, the Archimedean copulas, is able to reduce the study of a multivariate distribution to that of a univariate function. The Archimedean copula method is illustrated and performed on random data of index losses from two separate stock markets. The progress and outcome of the method are described, and the eventual resulting estimated distribution is presented in a contour plot.

Bachelor's thesis
Bachelor Actuariële Wetenschappen, Universiteit van Amsterdam
Faculty: Economie en Bedrijfskunde, Amsterdam School of Economics
Author: Shadee van Vlaanderen
Student nr: 10221018
Email: shadee1993@hotmail.com
Date: June 19, 2015


Contents

1 Introduction to Archimedean Copulas
2 Archimedean Copula Method
 2.1 Archimedean Copulas
 2.2 Applying Various Archimedean Generators
 2.3 Finding the Best Fitting Archimedean Generators
 2.4 Marginal Distribution of Variables
 2.5 Finding the Best Fitting Model
3 Application
4 Results and Analysis
5 Conclusion
 5.1 Future Studies
Bibliography
Appendix I
Appendix II


1. Introduction to Archimedean Copulas

At all times people weigh their chances before putting their money at stake, so it is not illogical that companies, from their early days on, have always done so too. Consider an insurance company trying to compose a proper premium level for its clients. The wealthy citizens of a nation probably live in better circumstances than the poor citizens do, so higher death rates for the latter would have to be taken into account in the process of setting the price of the premium. Now consider stock losses over the years. In times of recession, stock losses have grown to unappealingly high rates, and at these times stock owners were not eager to bet on volatile stocks. Consequently, companies too are constantly trying to estimate the fluctuation of random rates like these.

Through the years, researchers have found ways of predicting such changes in random variables. They have found that once the probability distribution approximating a random variable is identified, future rates can be estimated and anticipated. The idea is to observe past data of a certain random variable and find the distribution matching this data.

However, this idea becomes a tough matter when we would like to describe data that depends on many different variables. The distribution to be found is then a multivariate distribution and is very hard to find. In the nineteenth century the distribution of such data was often estimated by a normal distribution, but this seems a rough estimate of the data in consideration, and it becomes a problem when the data depends on a higher number of variables.

More recently, Genest and Rivest (1993) came to the idea of applying copula theory to this problem. Copula theory has been known for decades in the mathematical probability literature (Schweizer, 1991), but was not earlier applied in the financial world. A copula function is a function that is able to represent the dependence structure of a multivariate distribution with known marginal distributions. This makes finding a matching distribution for the concerned data an easier task, since we can separately fit models for the marginal distributions and for the copula.

Copulas are mostly used in statistical applications where distributions of a large number of variables are dealt with. Common areas of application are risk management and actuarial analysis, for example in analysing downside regimes or modelling dependence structures. Another application is in the field of civil engineering, where for instance the reliability of mechanical structures is analysed using copulas. Copulas have also been successfully applied in the fields of medicine, warranty data analysis, weather research, random vector analysis and modelling turbulent combustion. Hence, the area of application is on the increase.

Now there is a special sort of copula function with a highly desirable characteristic: it is able to reduce the study of a multivariate distribution function to the study of a univariate function. This group of copulas is called the Archimedean copulas (Genest & MacKay, 1986a, 1986b; Sklar, 1959).

Frees and Valdez (1998) have described the application of the theory of Archimedean copulas to statistical analysis. To find out whether the method applies to random bivariate data, it has been put into practice in this paper. The focus in this thesis is on finding the Archimedean copula that best matches the considered bivariate data, using the method from Frees and Valdez (1998). The theory behind this method is elaborately described in Section 2 of this thesis. How the method has been put into practice and what data has been used is illustrated in Section 3. In Section 4, the results from applying the Archimedean copula method are displayed and discussed. In Section 5, the findings of this survey are summarized. In this last section the main question on dependence modelling by Archimedean copulas is answered, by stating whether the Archimedean copula method applies to the concerned random bivariate data and to what result the application leads.

2. Archimedean Copula Method

In this section a method to model the dependence among random variables is described.

Consider a multivariate dataset which we wish to analyse. Theoretical research shows that there exists a function which makes it possible to describe a multivariate distribution in terms of its marginal distributions. This function is called a copula. Let us assume we have a multivariate distribution F with marginal distributions F1, F2, ..., Fp. Then, according to Frees and Valdez (1998), there exists a copula C such that:

F(x1, x2, ..., xp) = C(F1(x1), F2(x2), ..., Fp(xp))     (2.1)

holds. Any multivariate distribution can be transformed into such a copula representation. In case of continuous marginal distributions, this copula is uniquely defined, according to Sklar (1959).
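As a minimal illustration of Equation (2.1) for p = 2, the sketch below builds a joint CDF from two marginals and a copula. The independence copula C(u, v) = uv and standard normal marginals are illustrative assumptions here, not the copula fitted later in this thesis.

```python
# Sketch of F(x1, x2) = C(F1(x1), F2(x2)) with an illustrative copula choice.
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def C_indep(u, v):
    # Independence copula (illustrative; not the fitted copula)
    return u * v

def F_joint(x1, x2):
    # Equation (2.1): the joint CDF expressed through the marginals
    return C_indep(norm_cdf(x1), norm_cdf(x2))

# Under independence the joint CDF factorises into the marginals:
assert abs(F_joint(0.0, 0.0) - 0.25) < 1e-12
```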


2.1 Archimedean Copulas

In this survey the focus is on finding the Archimedean copula fitting the concerned bivariate data (so p = 2 in Equation (2.1)). The Archimedean group of copulas is able to reduce the study of a multivariate copula to that of a univariate function. To retrieve the matching Archimedean copula, we first look at the standard form of Archimedean copulas, as deduced by Genest and MacKay (1986a, 1986b). This standard form is also proven by them to be a valid distribution function.

C(u, v) = φ⁻¹(φ(u) + φ(v)),  for u, v ∈ (0, 1]     (2.2)

This function contains a generator, denoted by φ, and its inverse φ⁻¹, with u, v the concerned variables. The generator function φ is convex and decreasing on (0, 1], with φ(1) = 0. Moreover, Frees and Valdez (1998) have shown that this generator function uniquely determines the form of the Archimedean copula up to a scalar multiple. So each generator function generates its own Archimedean copula. In Table 1, several families of Archimedean copulas and their generators are shown, as in Frees and Valdez (1998).
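A sketch of how Equation (2.2) turns a generator into a copula, using the Clayton/Cook-Johnson generator φ(t) = t^(−α) − 1 as the worked example (the value α = 2 is an arbitrary illustrative choice):

```python
# Building an Archimedean copula from its generator, Equation (2.2).
alpha = 2.0
phi = lambda t: t**(-alpha) - 1.0            # generator, phi(1) = 0
phi_inv = lambda s: (s + 1.0)**(-1.0/alpha)  # its inverse

def C(u, v):
    # C(u, v) = phi^{-1}( phi(u) + phi(v) )
    return phi_inv(phi(u) + phi(v))

# Boundary behaviour required of a copula: C(u, 1) = u.
assert abs(C(0.3, 1.0) - 0.3) < 1e-12
# Matches the Clayton closed form (u^-a + v^-a - 1)^(-1/a):
assert abs(C(0.3, 0.7) - (0.3**-2 + 0.7**-2 - 1.0)**(-0.5)) < 1e-12
```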

Table 1
Archimedean Copulas and Their Generators

From "Understanding Relationships Using Copulas" by Frees, E. W., & Valdez, E. A., 1998, North American Actuarial Journal, 2, p. 7.

The bivariate copulas shown result from implementing the various corresponding generators in Equation (2.2). The dependence parameter α is a measure of dependence among the considered bivariate data.

The first copula family, Independence, holds when the data is independent. Since the focus is on describing the dependency among variables, only the last three families are taken into account in the following search for a suitable Archimedean copula.


2.2 Applying Various Archimedean Generators

In order to find appropriate Archimedean copulas, the following steps of the method described in Frees and Valdez (1998) are pursued. Consider a dataset (X11, X21), ..., (X1n, X2n), which is assumed to be an i.i.d. sample from an unknown bivariate distribution F.

First, Kendall's correlation coefficient is estimated by:

τ_n = (n choose 2)⁻¹ · Σ_{i<j} sign[(X1i − X1j)(X2i − X2j)]     (2.3)
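The estimator in Equation (2.3) can be sketched directly from its definition; the toy sample below is illustrative only:

```python
# Kendall's tau estimator, Equation (2.3), on a toy bivariate sample.
from itertools import combinations

def sign(x):
    return (x > 0) - (x < 0)

def kendall_tau(x1, x2):
    n = len(x1)
    s = sum(sign((x1[i] - x1[j]) * (x2[i] - x2[j]))
            for i, j in combinations(range(n), 2))
    return s / (n * (n - 1) / 2)  # divide by (n choose 2)

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.0, 3.0, 2.0, 4.0]
# 6 pairs: 5 concordant, 1 discordant -> tau = (5 - 1)/6
assert abs(kendall_tau(x1, x2) - 4/6) < 1e-12
```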

Second, an intermediate random variable Z_i with distribution function K(z) is needed. It is defined as Z_i = F(X1i, X2i), so K(z) = P(Z_i ≤ z). For a nonparametric estimate of these, the following is defined:

Z_i = #{pairs (X1j, X2j) such that X1j < X1i and X2j < X2i} / (n − 1),  for i = 1, ..., n     (2.4)

And the nonparametric estimate of K is defined as:

K_n(z) = {fraction of Z_i's ≤ z}     (2.5)
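Equations (2.4) and (2.5) translate into code as follows; the toy sample is again illustrative:

```python
# Pseudo-observations Z_i (Equation 2.4) and the empirical K_n (Equation 2.5).
def pseudo_obs(x1, x2):
    n = len(x1)
    return [sum(1 for j in range(n) if x1[j] < x1[i] and x2[j] < x2[i]) / (n - 1)
            for i in range(n)]

def K_n(z, Z):
    # fraction of Z_i's <= z
    return sum(1 for zi in Z if zi <= z) / len(Z)

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.0, 3.0, 2.0, 4.0]
Z = pseudo_obs(x1, x2)
assert Z == [0.0, 1/3, 1/3, 1.0]
assert K_n(0.5, Z) == 0.75
```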

Genest and Rivest (1993) have shown that the relation between K(z) and the generator function φ can be described by the following equation:

K(z) = z − φ(z)/φ′(z)     (2.6)
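Equation (2.6) worked out for the Clayton/Cook-Johnson generator gives K(z) = z + (z − z^(α+1))/α, which the sketch below verifies numerically (α = 2 is an arbitrary illustrative value):

```python
# K(z) = z - phi(z)/phi'(z) for the Clayton generator phi(t) = t^(-a) - 1.
alpha = 2.0
phi  = lambda t: t**(-alpha) - 1.0
dphi = lambda t: -alpha * t**(-alpha - 1.0)   # derivative of the generator

def K(z):
    return z - phi(z) / dphi(z)

# Algebraic simplification: K(z) = z + (z - z^(alpha+1))/alpha
z = 0.4
assert abs(K(z) - (z + (z - z**(alpha + 1.0)) / alpha)) < 1e-12
# K is a distribution function on (0, 1]: since phi(1) = 0, K(1) = 1.
assert abs(K(1.0) - 1.0) < 1e-12
```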

Now a parametric estimate K_α(z) of K(z) can be acquired by implementing each of the given generator functions in this equation. For each generator, the corresponding α can be estimated using the formulas for τ in Table 2 (Frees & Valdez, 1998) and the estimate τ_n obtained from Equation (2.3).

Table 2
Archimedean Copulas and Their Measures of Dependence

Adapted from "Understanding Relationships Using Copulas" by Frees, E. W., & Valdez, E. A., 1998, North American Actuarial Journal, 2, p. 10.
Debye function: D1(x) = (1/x) ∫₀ˣ t/(eᵗ − 1) dt.

2.3 Finding the Best Fitting Archimedean Generators

To see which copula suits the considered data best, the approximation of K_n(z) by K_α(z) is compared for each implemented generator. This is best done in a graphical representation of the resulting estimates in the form of a Q-Q plot, also known as a quantile-quantile plot. This plot reports the quantiles of the nonparametric estimate against those of the parametric estimate of the distribution K. The best fitting copulas will produce a Q-Q plot that stays close to the 45° line. Once we identify the best fitting copulas this way, we can proceed to estimate parameters for the marginal distributions and fit a model for the entire bivariate distribution.

2.4 Marginal Distribution of Variables

To fit a model for the bivariate distribution, the marginal distributions of the bivariate data are also required. For now it is assumed that the bivariate dataset used in this paper has normally distributed marginals. Later on, this assumption is justified by observing the histograms of the data for each variable.


2.5 Finding the Best Fitting Model

To determine the best fitting parameters for the bivariate data in consideration, Maximum Likelihood Estimation (MLE) is used. The idea is to estimate the unknown parameters of the considered Archimedean copula as well as the considered marginal distributions by MLE, and to measure by Akaike's Information Criterion which model gives the best fit. For this, the joint probability density function of X1 and X2 is needed.

It is deduced from the marginal probability density functions f1, f2 and the marginal cumulative distribution functions F1, F2, as follows:

f(x1, x2) = ∂²F(x1, x2)/∂x1∂x2 = ∂²C[F1(x1), F2(x2)]/∂x1∂x2 = f1(x1) f2(x2) C12[F1(x1), F2(x2)]     (2.7)

where F(x1, x2) is substituted by the copula representation of this function and with:

C12(u, v) = ∂²C(u, v)/∂u∂v     (2.8)

In order to compute the log-likelihood function for MLE, the natural logarithm is applied to the computed joint density:

log f(x1, x2) = log( f1(x1) f2(x2) C12[F1(x1), F2(x2)] )     (2.9)

Then, to retrieve the log-likelihood function on the total dataset, Equation (2.9) is summed over the total number of data points. This results in the following equation:

l(μ1, σ1, μ2, σ2, α) = Σ_{j=1}^{n} log C12[F1(x1,j), F2(x2,j)] + Σ_{j=1}^{n} log f1(x1,j) + Σ_{j=1}^{n} log f2(x2,j)     (2.10)
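The structure of the log-likelihood (2.10) can be sketched as below. For illustration the copula term log C12 is set to 0, which corresponds to the independence copula (C12 = 1) rather than the copulas fitted in this thesis:

```python
# Structure of Equation (2.10) with normal marginals; the copula term is
# taken as 0 (independence) purely to show the shape of the sum.
import math

def norm_logpdf(x, mu, sd):
    return -0.5 * ((x - mu) / sd)**2 - math.log(sd * math.sqrt(2.0 * math.pi))

def loglik(data, mu1, sd1, mu2, sd2, log_c12=0.0):
    # l = sum_j log C12[...] + sum_j log f1(x1j) + sum_j log f2(x2j)
    return sum(log_c12 + norm_logpdf(x1, mu1, sd1) + norm_logpdf(x2, mu2, sd2)
               for x1, x2 in data)

data = [(0.1, -0.2), (0.0, 0.3)]
l = loglik(data, 0.0, 1.0, 0.0, 1.0)
# With log C12 = 0 the log-likelihood is the sum of the marginal terms:
expected = sum(norm_logpdf(a, 0.0, 1.0) for pair in data for a in pair)
assert abs(l - expected) < 1e-12
```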

where C is the best-fitting Archimedean copula with unknown parameter α, F1 and F2 are normal distribution functions, and f1 and f2 normal density functions with unknown parameters μ1, σ1, μ2 and σ2.

Now the Maximum Likelihood Estimators can be found by maximizing the log-likelihood function in (2.10) over the five parameters. This is done for each parameter, resulting in the following Maximum Likelihood Estimates:


μ̂1 = arg max_{μ1} l(μ1, σ1, μ2, σ2, α)     (2.11)
σ̂1 = arg max_{σ1} l(μ1, σ1, μ2, σ2, α)     (2.12)
μ̂2 = arg max_{μ2} l(μ1, σ1, μ2, σ2, α)     (2.13)
σ̂2 = arg max_{σ2} l(μ1, σ1, μ2, σ2, α)     (2.14)
α̂ = arg max_{α} l(μ1, σ1, μ2, σ2, α)     (2.15)

At last, Akaike's Information Criterion (AIC) (Frees & Valdez, 1998) is calculated from the maximized log-likelihood l(μ̂1, σ̂1, μ̂2, σ̂2, α̂) and the number of parameters k:

AIC = −2 × (maximized log-likelihood) + 2 × k     (2.16)

The model that results in the smallest AIC is taken as the best fitting model. This model comprises the best fitting parameter α̂ for the best-fitting Archimedean copula as determined above, as well as best-fitting estimates for the marginal normal distributions. In the next section we apply this methodology to a financial dataset.
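Equation (2.16) can be sketched as follows; the numbers reproduce the Frank row of Table 4 (five parameters, so k = 5):

```python
# Akaike's Information Criterion, Equation (2.16).
def aic(max_loglik, k):
    return -2.0 * max_loglik + 2 * k

# Frank's copula model from Table 4: l = -1154.582, k = 5 parameters.
assert abs(aic(-1154.582, 5) - 2319.164) < 1e-9
```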

3. Application

The Archimedean copula fitting method described in the former section is applied to a bivariate dataset, which must consist of independent, identically distributed observations. The dataset chosen is a set of price indices from two individual stock markets, retrieved from a capacious online database called Datastream. The first set is from the German stock market index, DAX-30 Performance, consisting of thirty major German companies. The second is from the Dow Jones Industrials stock market index, which also tracks thirty major companies trading in the stock market. The codes with which these datasets can be retrieved on Datastream are respectively DAXINDX(PI) and DJINDUS(PI). The dataset contains observations of daily price indices over two years, from 12-4-2013 to 14-4-2015. The price index is composed using a representative list of shares and is calculated by Datastream as follows:


I0 = index value at base date = 100

I_t = I_{t−1} · ( Σ₁ⁿ (P_t · N_t) ) / ( Σ₁ⁿ (P_{t−1} · N_t · f) )     (3.1)

where:
I_t = index value at day t
I_{t−1} = index value on the previous working day (of t)
P_t = unadjusted share price on day t
P_{t−1} = unadjusted share price on the previous working day (of t)
N_t = number of shares in issue on day t
f = adjustment factor for a capital action occurring on day t
n = number of constituents in the index
The summations are performed over the constituents as they exist on day t.

In order to get independent, identically distributed random variables, the daily percentage losses were calculated over this data for each of the two variables. The dates on which at least one of the stock markets was closed were removed from the data set, in order to clean the data of false "zero change" days.
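The loss-computation step can be sketched as follows; the price numbers below are toy values, not the Datastream series:

```python
# Daily percentage losses from a price-index series. Dates where a market
# was closed would otherwise show up as false "zero change" days and are
# removed before this step in the thesis.
def pct_losses(prices):
    # loss_t = -(P_t - P_{t-1}) / P_{t-1} * 100
    return [-(p1 - p0) / p0 * 100.0 for p0, p1 in zip(prices, prices[1:])]

idx = [100.0, 102.0, 99.96]          # toy index values
losses = pct_losses(idx)
assert abs(losses[0] - (-2.0)) < 1e-9   # index rose 2% -> loss of -2%
assert abs(losses[1] - 2.0) < 1e-9      # index fell 2% -> loss of +2%
```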

The Archimedean copula fitting method is applied to the data using the program R, a statistical programming environment for computations and graphics. The R code used for our analysis can be found in Appendix II.

Before implementing the method, I expect that there is a strong dependency between the two price-index variables. And if the method applies to random bivariate data, we should be able to approximate the distribution of the two price indices to a certain level of accuracy by an Archimedean copula.

4. Results and Analysis

In this section, the method is applied to the concerned bivariate dataset. In the first step, Kendall's correlation coefficient is estimated, which results in τ_n = 0.707125.

Then, after the nonparametric estimate K_n(z) of K(z) has been computed, the parametric estimates K_α(z) for the three generators are compared to the nonparametric estimates in the following Q-Q plots:

Figure 1
Quantile-quantile plots reporting the quantiles of the nonparametric estimates against the parametric estimates of K(z) for each of the Archimedean generators. The solid line represents the case where the quantiles of K_n(z) equal the quantiles of K_α(z).

From these plots we can see that all three parametric estimates are close to the nonparametric estimate. No estimator visibly approximates it clearly better than the others, so all three generators with their own copulas are taken into account in the following procedure.

For the MLE estimation, the marginals of the variables are needed. A histogram of each variable shows that the marginals are roughly normally distributed, see Figure 2.

Then the copula density C12 (Equation (2.8)) is needed for each of the copulas; its derivation is described in Appendix I. Optimizing the resulting log-likelihood function l(μ1, σ1, μ2, σ2, α) for each of the copulas, performed through R's optim() routine, gives the following Maximum Likelihood Estimates:

Table 3

Family                          μ̂1           σ̂1          μ̂2           σ̂2          α̂
Clayton, Cook-Johnson, Oakes    -0.07471227  1.04702885  -0.02885840  0.72385324   0.84869877
Gumbel-Hougaard                 -0.06732223  1.03141536  -0.02778218  0.72052471   1.49707257
Frank                           -0.07572207  1.04255061  -0.03582133  0.72762286  -4.38045304


Figure 2
Histograms of the daily percentage losses for each variable, with normal density curves added.

Implementing those Maximum Likelihood Estimates in the log-likelihood function gives the maximized likelihood values for each of the copula families. Table 4 lists those values of l(μ̂1, σ̂1, μ̂2, σ̂2, α̂) and the corresponding values of Akaike's Information Criterion (AIC).

Table 4

Family                            Maximized value of log-likelihood    AIC
Clayton, Cook-Johnson and Oakes   -1174.171                            2358.342
Gumbel-Hougaard                   -1162.136                            2334.272
Frank                             -1154.582                            2319.164

Frank's copula results in the lowest value of the AIC, so the best fitting bivariate model is found by implementing the Maximum Likelihood parameter values into Frank's copula with normal marginal distributions. The density function of the estimated distribution can be obtained as follows:

f(x1, x2) = ∂²F(x1, x2)/∂x1∂x2 = ∂²C[F1(x1), F2(x2)]/∂x1∂x2 = f1(x1) f2(x2) C12[F1(x1), F2(x2)]     (4.1)

with Frank's copula partially differentiated as shown in Appendix I. This leads to the following equation:

f(x1, x2) = f1(x1) f2(x2) · [ α e^{α F1(x1) + α F2(x2)} / ( (e^α − 1) · B ) − α (e^{α F1(x1)} − 1)(e^{α F2(x2)} − 1) e^{α F1(x1) + α F2(x2)} / ( (e^α − 1)² · B² ) ],
with B = (e^{α F1(x1)} − 1)(e^{α F2(x2)} − 1)/(e^α − 1) + 1     (4.2)

where F1(x1), F2(x2) are the normal cumulative distribution functions and f1(x1), f2(x2) the normal probability density functions with parameters μ̂1, σ̂1, μ̂2, σ̂2.
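The copula factor C12 of Frank's density in Equation (4.2) (the bracketed term, with the marginal pieces omitted) can be evaluated numerically; the sketch below uses the fitted α̂ = −4.38045304 from Table 3 and checks positivity and symmetry of the density:

```python
# Frank copula density C12(u, v) as in Equations (A.6)/(4.2).
import math

def frank_c12(u, v, a):
    B = math.expm1(a*u) * math.expm1(a*v) / math.expm1(a) + 1.0
    t1 = a * math.exp(a*(u + v)) / (math.expm1(a) * B)
    t2 = (a * math.expm1(a*u) * math.expm1(a*v) * math.exp(a*(u + v))
          / (math.expm1(a)**2 * B**2))
    return t1 - t2

a_hat = -4.38045304                      # fitted alpha from Table 3
# A copula density is strictly positive on (0,1)^2:
assert frank_c12(0.5, 0.5, a_hat) > 0.0
# Frank's copula is exchangeable, so its density is symmetric in (u, v):
assert abs(frank_c12(0.2, 0.8, a_hat) - frank_c12(0.8, 0.2, a_hat)) < 1e-12
```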

To compare the estimated distribution of the data by the Archimedean copula method with the original data, I have produced a contour plot of the density function (Equation (4.2)) with the original data points in it, see Figure 3.

Figure 3

Contour plot of estimated density function by Frank's copula with original data points from dataset added.

As we can see from this plot, the middle ranges of the estimated density where the probabilities are highest contain the most data points. This is an indication that the estimated density gives a good approximation of the dataset. So the copula estimation of the joint probability function of the dataset gives a close estimate to the probability distribution of the original data points.


5. Conclusion

In the end, the Archimedean copula method, confined to the use of Frank's, Gumbel-Hougaard's and Cook-Johnson's copulas, leads to an estimated distribution which is close to that of the original bivariate random dataset.

The neat steps of the method have turned a generally quite difficult concept into a less complicated task. By methodically working through the first steps of the method, the most appropriate Archimedean copulas have been determined. This can save a lot of work and time, because not all Archimedean copulas then have to be fitted. In this case, though, only three Archimedean copulas were applied, all of which seemed equally appropriate in the Q-Q plots, so here it did not save the time and effort it could save when more Archimedean copulas were considered. In the last part of the method, the most suitable Archimedean copula was determined by Maximum Likelihood Estimation; this copula was fitted and gave the best approximation to the distribution of the dataset. The Archimedean copula method thus led to a solution to the problem of modelling dependency between the two random loss variables.

5.1 Future Studies

In case of a continuation of this thesis, it might be an idea to apply the Archimedean copula fitting method with other than normal marginals for the distribution of the data, since the histograms of the variables did not perfectly match the normal distribution curve. This might improve the currently found best-fitting model.

Since the marginals seemed roughly normal, one could subsequently investigate whether fitting a bivariate normal model to the data would result in a better estimation than the best-fitting model found by the Archimedean copula method. The AIC, for instance, or other criteria could determine which model performs best.

For future studies it might also be interesting to consider the currently used data, but over the years in which the great recession took place, to see what effect this might have had on the dependence structure among the variables and to find out whether the method would lead to similar results.

I hope that this paper encourages the use of the Archimedean copula method in future studies. There may be many more matters to which this method could be applied, not just in financial studies but, for instance, also in the medical world.


Bibliography

Frees, E. W., & Valdez, E. A. (1998). Understanding Relationships Using Copulas. North American Actuarial Journal, 2, 1-25.

Genest, C., & MacKay, J. (1986a). Copules archimédiennes et familles de lois bidimensionnelles dont les marges sont données. The Canadian Journal of Statistics, 14, 154-159.

Genest, C., & MacKay, J. (1986b). The Joy of Copulas: Bivariate Distributions with Uniform Marginals. The American Statistician, 40, 280-283.

Genest, C., & Rivest, L.-P. (1993). Statistical Inference Procedures for Bivariate Archimedean Copulas. Journal of the American Statistical Association, 88, 1034-1043.

Schweizer, B. (1991). Thirty Years of Copulas. In Advances in Probability Distributions with Given Marginals: Beyond the Copulas.

Sklar, A. (1959). Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Université de Paris, 8, 229-231.


Appendix I

In this appendix, the partial derivatives of the different copulas are taken in order to retrieve the likelihood function used in Section 4. We start with Cook-Johnson's copula, which is defined as:

C(u, v) = (u^{−α} + v^{−α} − 1)^{−1/α}     (A.1)

Then take the partial derivative with respect to both u and v as follows:

C12(u, v) = ∂²C(u, v)/∂u∂v
= ∂²(u^{−α} + v^{−α} − 1)^{−1/α}/∂u∂v
= ∂/∂v [ u^{−α−1}(u^{−α} + v^{−α} − 1)^{−1/α − 1} ]
= (1 + α) u^{−α−1} v^{−α−1} (u^{−α} + v^{−α} − 1)^{−1/α − 2}     (A.2)

Then Equation (2.8) with regard to Cook-Johnson's copula is obtained by implementing the normal distribution functions in Equation (A.2).
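The closed form in Equation (A.2) can be double-checked numerically against a finite-difference approximation of the mixed partial derivative (the point (0.4, 0.6) and α = 2 are arbitrary test values):

```python
# Check d^2 C / du dv for the Cook-Johnson copula against Equation (A.2).
def C(u, v, a):
    return (u**-a + v**-a - 1.0)**(-1.0/a)

def c12_closed(u, v, a):
    # Equation (A.2)
    return (1.0 + a) * u**(-a - 1.0) * v**(-a - 1.0) * (u**-a + v**-a - 1.0)**(-1.0/a - 2.0)

def c12_fd(u, v, a, h=1e-5):
    # Central finite-difference stencil for the mixed second derivative
    return (C(u+h, v+h, a) - C(u+h, v-h, a)
            - C(u-h, v+h, a) + C(u-h, v-h, a)) / (4.0 * h * h)

u, v, a = 0.4, 0.6, 2.0
assert abs(c12_closed(u, v, a) - c12_fd(u, v, a)) < 1e-4
```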

In the same way the partial derivatives of Gumbel-Hougaard's copula are taken:

C(u, v) = exp( −[ (−log u)^α + (−log v)^α ]^{1/α} )     (A.3)

Writing S = (−log u)^α + (−log v)^α, the mixed partial derivative is:

C12(u, v) = ∂²C(u, v)/∂u∂v
= (−log u)^{α−1} (−log v)^{α−1} e^{−S^{1/α}} S^{2/α − 2} / (uv)
  − (1/α − 1) α (−log u)^{α−1} (−log v)^{α−1} e^{−S^{1/α}} S^{1/α − 2} / (uv)     (A.4)¹

And of Frank's copula:

C(u, v) = (1/α) log( 1 + (e^{αu} − 1)(e^{αv} − 1)/(e^α − 1) )     (A.5)

C12(u, v) = ∂²C(u, v)/∂u∂v
= α e^{αu + αv} / ( (e^α − 1) · B ) − α (e^{αu} − 1)(e^{αv} − 1) e^{αu + αv} / ( (e^α − 1)² · B² ),
with B = (e^{αu} − 1)(e^{αv} − 1)/(e^α − 1) + 1     (A.6)¹

Now the likelihood function in Section 2.5 can be obtained for each copula.

¹ Calculation was performed using a computational engine (https://www.wolframalpha.com, Wolfram Alpha, 2015).


Appendix II

In this appendix, the R code used to acquire the obtained results is given.

# We start with reading in the file containing the dataset
# (the columns get R's default names V1 and V2, hence header = FALSE)
x <- read.table("Directory/filename.txt", header = FALSE, sep = "")
attach(x)

# Then the Kendall's tau estimate is computed
tau <- cor(x, method = c("kendall"))[1, 2]

# Next the nonparametric estimator for K is computed
N <- length(V1)
Z <- vector(length = N)
for(i in 1:N) {Z[i] <- sum(V1 < V1[i] & V2 < V2[i])/(N - 1)}
Kn <- function(z) {sum(Z <= z)/N}

# A package is loaded to make use of the debye_1 function from package 'gsl'
library(gsl)

# A Debye function is defined for positive as well as negative values of z
debye <- function(z) ifelse(z < 0, debye_1(-z) - z/2, debye_1(z))

# Then alpha is estimated for each of the copulas, where 'Falpha' refers
# to Frank's alpha, 'Calpha' refers to Cook-Johnson's alpha and 'Galpha'
# refers to Gumbel-Hougaard's alpha
f <- function(z) {1 - 4/z*(debye(-z) - 1) - tau}
Falpha <- uniroot(f, c(-10^9, 10^9))$root
Calpha <- 2*tau/(1 - tau)
Galpha <- 1/(1 - tau)

# Cook-Johnson's generator function 'Cphi0' and its derivative 'Cphi1' are
# defined
Cphi0 <- function(z){z^(-Calpha) - 1}
Cphi1 <- function(z){-Calpha*z^(-Calpha - 1)}
# Idem for Gumbel-Hougaard's generator
Gphi0 <- function(z){(-log(z))^Galpha}
Gphi1 <- function(z){-Galpha/z*(-log(z))^(Galpha - 1)}
# Idem for Frank's generator
Fphi0 <- function(z){log((exp(Falpha*z) - 1)/(exp(Falpha) - 1))}
Fphi1 <- function(z){1/(exp(Falpha*z) - 1)*Falpha*exp(Falpha*z)}

# Now the parametric estimator for K, K(z) = z - phi(z)/phi'(z), is computed
# for each of the copula families
CKphi <- function(z){z - Cphi0(z)/Cphi1(z)}
GKphi <- function(z){z - Gphi0(z)/Gphi1(z)}
FKphi <- function(z){z - Fphi0(z)/Fphi1(z)}

# After that the Q-Q plots representing the nonparametric and parametric
# estimates of distribution K are produced
v1 <- seq(0, 1, by = 0.01)
Knvector <- sapply(v1, Kn)
CKphivector <- sapply(v1, CKphi)
GKphivector <- sapply(v1, GKphi)
FKphivector <- sapply(v1, FKphi)
qqplot(Knvector, CKphivector, xlab = "nonparametric", ylab = "Cook-Johnson")
abline(0, 1, col = "black")
qqplot(Knvector, GKphivector, xlab = "nonparametric", ylab = "Gumbel-Hougaard")
abline(0, 1, col = "black")
qqplot(Knvector, FKphivector, xlab = "nonparametric", ylab = "Frank")
abline(0, 1, col = "black")

# Then the histograms of the dataset are produced with a normal
# distribution curve added to them
h1 <- hist(V1, main = "DAX-30 Performance", freq = FALSE,
           xlab = "Daily Percentage Loss Price Indices", ylab = "Density")
curve(dnorm(x, mean = mean(V1), sd = sd(V1)), add = TRUE, col = "darkblue", lwd = 2)
h2 <- hist(V2, main = "Dow Jones Industrials", freq = FALSE,
           xlab = "Daily Percentage Loss Price Indices", ylab = "Density")
curve(dnorm(x, mean = mean(V2), sd = sd(V2)), add = TRUE, col = "darkblue", lwd = 2)

# The copula density function and log-likelihood function are defined for
# Cook-Johnson's, Gumbel-Hougaard's and Frank's copula respectively
CC12 <- function(u,v,a){(1+a)*v^(-a-1)*u^(-a-1)*(u^(-a)+v^(-a)-1)^(-1/a-2)}
Cl <- function(x){sum(log(CC12(pnorm(V1[1:N],x[1],x[2]),
  pnorm(V2[1:N],x[3],x[4]),x[5]))) + sum(log(dnorm(V1[1:N],x[1],x[2]))) +
  sum(log(dnorm(V2[1:N],x[3],x[4])))}
GC12 <- function(u,v,a){((-log(u))^(a-1)*(-log(v))^(a-1)*
  exp(-((-log(u))^a+(-log(v))^a)^(1/a))*((-log(u))^a+
  (-log(v))^a)^(2/a-2))/(u*v)-((1/a-1)*a*(-log(u))^(a-1)*(-log(v))^(a-1)*
  exp(-((-log(u))^a+(-log(v))^a)^(1/a))*((-log(u))^a+
  (-log(v))^a)^(1/a-2))/(u*v)}
Gl <- function(x){sum(log(GC12(pnorm(V1[1:N],x[1],x[2]),
  pnorm(V2[1:N],x[3],x[4]),x[5]))) + sum(log(dnorm(V1[1:N],x[1],x[2]))) +
  sum(log(dnorm(V2[1:N],x[3],x[4])))}
FC12 <- function(u,v,a){(a*exp(a*u+a*v))/((exp(a)-1)*(((exp(a*u)-1)*
  (exp(a*v)-1))/(exp(a)-1)+1))-(a*(exp(a*u)-1)*(exp(a*v)-1)*exp(a*u+a*v))/
  ((exp(a)-1)^2*(((exp(a*u)-1)*(exp(a*v)-1))/(exp(a)-1)+1)^2)}
Fl <- function(x){sum(log(FC12(pnorm(V1[1:N],x[1],x[2]),
  pnorm(V2[1:N],x[3],x[4]),x[5]))) + sum(log(dnorm(V1[1:N],x[1],x[2]))) +
  sum(log(dnorm(V2[1:N],x[3],x[4])))}

# Afterwards each of the log-likelihood functions is maximized
optim(c(0,1,0,1,1), Cl, control = list(fnscale = -1))
optim(c(0,1,0,1,1), Gl, control = list(fnscale = -1))
optim(c(0,1,0,1,1), Fl, control = list(fnscale = -1))

# Finally a contour plot is produced of the estimated density by Frank's
# copula with the original data points included
x1 <- seq(-3.5, 3.5, by = 0.01)
x2 <- seq(-3.5, 3.5, by = 0.01)
I <- matrix(, nrow = 701, ncol = 701)
for(i in -350:350){
  for(j in -350:350){
    I[i+351, j+351] <- dnorm(i/100, -0.07572207, 1.04255061)*
      dnorm(j/100, -0.03582133, 0.72762286)*
      FC12(pnorm(i/100, -0.07572207, 1.04255061),
           pnorm(j/100, -0.03582133, 0.72762286), -4.38045304)}}
contour(x = x1, y = x2, I)
