
Comparison of Historical Method and Extreme Value Theory Method in Value-at-Risk Estimation

Yi Yu

Master's Thesis to obtain the degree in Actuarial Science and Mathematical Finance
University of Amsterdam
Faculty of Economics and Business
Amsterdam School of Economics

Author: Yi Yu
Student nr: 10826424
Email: hensha2007@hotmail.com
Date: April 8, 2015
Supervisor: Dr. S. U. (Umut) Can
Second reader: Dr. Roger J. A. Laeven


Abstract

This study compares Value-at-Risk estimation under two different approaches, Extreme Value Theory and the historical method, for ING Financial Markets data. The EVT approach includes fitting excesses over high thresholds with the Generalized Pareto Distribution, as well as the Hill method. The historical method simply takes the relevant sample quantile as the VaR estimate.

Some benefits and limitations are also discussed.

Keywords Risk Measure, Extreme Value Theory, Generalized Pareto Distribution, Value-at-Risk, Hill Method, Stochastic processes, Sensitivity Analysis, Risk Management, Threshold Exceedance


Contents

Preface

1 Introduction

2 Risk Measure and Modeling Concept
   2.1 Risk Measures
   2.2 Value-at-Risk
      2.2.1 Historical Method for VaR Estimation
   2.3 Extreme Value Theory
   2.4 Threshold Exceedance
      2.4.1 Generalized Pareto Distribution
      2.4.2 Sample Mean Excess Plot
      2.4.3 Modeling VaR
      2.4.4 The Hill Method

3 Data Fitting and Parameter Estimation

4 Results and Sensitivity Analysis

5 Conclusions and Suggestions

Appendix: Data and Estimates

References


Preface

This thesis illustrates my growth, both academic and industrial, in the field of Actuarial Science. I applied what I have learned while working in the Risk Consolidation Team of the Market Risk Management Banking department at the ING headquarters, where I became familiar with risk measurement methods from the perspective of risk managers. I also deepened my understanding through the courses of the Master's programme in Actuarial Science and Mathematical Finance at the University of Amsterdam, such as risk management, premium models, and pension plan design. I would therefore like to thank all the people who have helped me; your words have broadened my knowledge and my horizons.

Firstly, I would like to thank my supervisor, Professor S. U. (Umut) Can, who guided me by providing me with relevant documents and helped me put my thoughts into practice. Every word he said was an important encouragement to carry on with my study and inspired further ideas. In addition, I want to express my gratitude to my programme coordinator, Prof. Rob Kaas, who answered my questions very clearly and communicated with me frequently and with genuine care.

Secondly, I extend my gratitude to my manager in the ING Risk Consolidation Team, Mark Van Laar, the nicest person I have met in the Netherlands. He cared about my studies, my life and my family, and the team environment was very healthy and supportive. All my colleagues were welcoming, answered my questions at work and taught me the procedures step by step. My manager also encouraged me to do my best to solve each problem I encountered at work by myself; he knew how to bring out his staff's abilities and help them grow.

Lastly, I am really thankful to my parents, my boyfriend and my friends all over the world. They supported me each in their own way, communicated with me regularly, and encouraged me to do whatever I like and believe is right. They never turned their backs on me when I faced difficulties.


Chapter 1

Introduction

This report addresses alternative ways of estimating Value-at-Risk (VaR), a risk measure that helps financial institutions set aside appropriate capital to cover future losses. The accuracy and stability of the VaR estimate directly affect the creditworthiness of a financial company. Nowadays every company has a risk department to manage the risk of catastrophic losses, a consequence of the financial crisis that hit the entire market in 2008. Several companies have still not recovered; statistics from 2009 show that major companies found themselves in a deep recession, which is still affecting the global economy today.

The Dutch government has assembled new monetary and fiscal policies to protect against the tremendous losses caused by this financial disaster, and financial institutions such as ING Amsterdam have modified their historical VaR policies. However, we would like to evaluate the effectiveness of these methods, so we recalculate VaR in an alternative way, using the Generalized Pareto Distribution and the concept of threshold exceedances in the model construction.

There are several approaches to VaR estimation, both in regulatory practice and in academic papers. In 1996 the Market Risk Amendment (MRA) to the 1988 Basel Capital Accord was adopted. Two hypothesis-testing methods are very common today: one is the binomial method embodied in the MRA, the other is the interval forecast method of Christoffersen (1997). An alternative method is proposed by Jose Lopez (1998), who introduced a loss-function-based method for evaluating VaR estimates. We can also categorize VaR estimation methods by their distributional assumptions: the delta-normal method, the historical simulation method and the Monte Carlo method.

In this thesis, we use the Generalized Pareto Distribution method to fit actual market data. After estimating the relevant parameters and the VaR figure, we compare the result with the historical simulation method to see whether there is any difference. We then evaluate the methods from the perspective of risk-averse investors.

Since understanding the basic concepts and the Generalized Pareto Distribution is not trivial, we start with the relevant terms, concepts and risk measure principles in Chapter 2. Chapter 3 applies the Generalized Pareto Distribution method and the threshold exceedance parameter estimation to the data. In Chapter 4 we derive the results, compare them with the historical simulation method, and carry out a sensitivity analysis. From the outcome we draw some conclusions; shortcomings and benefits are also discussed in Chapter 5.


Chapter 2

Risk Measure and Modeling Concept

In this chapter, we introduce some basic knowledge and concepts needed to understand risk measurement in real markets. Risk measures are useful in risk management in banking; they are also used to determine the economic capital set aside to ensure that uncertain future liabilities are covered.

2.1 Risk Measures

A risk measure is defined as a mapping from a set Ξ of random variables representing portfolio returns to the real numbers. The common notation for the risk measure associated with a random variable X is ρ(X). A risk measure ρ : Ξ → R ∪ {+∞} has the following properties.

1. Normalization: ρ(0) = 0.
2. Translation: if a ∈ R and A ∈ Ξ, then ρ(A + a) = ρ(A) − a.
3. Monotonicity: if A1, A2 ∈ Ξ and A1 ≤ A2, then ρ(A2) ≤ ρ(A1).

The risk measure is assumed to encapsulate the risk associated with a loss distribution.

2.2 Value-at-Risk

Value-at-Risk (VaR) is defined as the potential loss resulting from market data changes under normal market circumstances over a particular time period at a given confidence level. It can be written in mathematical form as

\mathrm{VaR}_p(X) = F^{-1}(p) = \inf\{x \in \mathbb{R} \mid F(x) \ge p\},

where p is any given probability between 0 and 1, F is the cumulative distribution function (cdf) of the loss X, and F^{-1}(p) is the (generalized) inverse cdf evaluated at probability level p.

From the above definition, we can simply say that VaR boils down to the p-quantile of the random variable X. As we know, the cumulative distribution function is right-continuous; its generalized inverse, and hence VaR as a function of p, is left-continuous. By choosing a confidence level p, VaR gives the loss level that is exceeded with probability at most 1 − p.


However, VaR has some deficiencies. Firstly, the VaR measure encourages the investor to take a "gambling" view of portfolios. For example, if one uses a 95% confidence level, the portfolio will perform well in 95% of the cases, but may suffer a substantial loss in the remaining 5%. The second disadvantage is that it does not take the entire tail of the loss distribution into consideration: in case of a heavy tail, investors will not be well protected by this risk measure. In addition, VaR is not always subadditive, meaning that it does not always preserve the diversification benefit. Investors can switch to other risk measures for a fuller picture, such as Tail Value-at-Risk (TVaR) and the Conditional Tail Expectation (CTE), which are not discussed further here.

2.2.1 Historical Method for VaR Estimation

The historical method has emerged as the most popular way to estimate VaR in the industry. According to the survey by Perignon and Smith (2006), around 73 percent of 60 US, Canadian and large international banks used the historical VaR methodology between 1996 and 2005. The second most popular was the Monte Carlo (MC) simulation method; the remaining banks applied the variance-covariance method or other methods.

The historical method is very simple: given a sample of historical returns, we order the corresponding losses from worst to best. We then take the p-quantile of the ordered losses as the VaR estimate, so that with confidence p the loss does not exceed the corresponding figure. The method implicitly assumes that past losses are i.i.d. and that, from a risk perspective, history will repeat itself.
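As a small illustration of this method, here is a minimal R sketch that takes the sample quantile of the losses as the VaR estimate. The function name hist.var is our own illustrative choice, and the interpolation convention of R's quantile() may differ slightly from the one used later in the thesis.

## Hedged sketch of the historical VaR estimate as a sample quantile.
## losses: vector of historical losses (positive values are losses); p: confidence level.
hist.var <- function(losses, p) {
  as.numeric(quantile(losses, probs = p))   ## p-quantile of the empirical loss distribution
}
## Example: hist.var(loss, 0.99) for a 1-day 99% VaR.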

2.3 Extreme Value Theory

Not only are we interested in the tail behaviour of the loss distribution, but we also need to take the extreme values themselves into consideration. Extreme Value Theory (EVT) is the branch of statistics dealing with extreme deviations from the mean of a distribution; it seeks to describe the behaviour of the extreme values in an ordered sample of a random variable.

The Generalized Extreme Value (GEV) distribution is defined as

H_ξ(x) = \begin{cases} \exp\big(-(1+ξx)^{-1/ξ}\big), & ξ ≠ 0, \\ \exp(-e^{-x}), & ξ = 0, \end{cases}

where 1 + ξx > 0. The parameter ξ is the shape parameter of the distribution and determines its type. There are three types of GEV distribution: if ξ < 0 it is a Weibull distribution, ξ = 0 gives the Gumbel distribution, and ξ > 0 corresponds to the Fréchet distribution.

The analysis of excess loss distributions lies at the core of Extreme Value Theory. Loss distributions generally have heavy tails. Extreme Value Theory analyzes the stochastic behaviour of extreme values of random variables, such as financial losses, insurance claims and so on. Since a large class of extreme value results concerns sample maxima, we define the maximum M_n = \max(X_1, \dots, X_n), where X_1, \dots, X_n are i.i.d. random variables with df F. We then ask for normalizing sequences c_n > 0 and d_n ∈ \mathbb{R} such that

\lim_{n→∞} P\!\left( \frac{M_n - d_n}{c_n} \le x \right) = \lim_{n→∞} F^n(c_n x + d_n) = H(x)

for some non-degenerate distribution function H. If such sequences exist, F is said to be in the maximum domain of attraction of H, written F ∈ MDA(H).

Now we introduce a famous result, the Fisher-Tippett-Gnedenko Theorem: if F ∈ MDA(H) for some non-degenerate df H, then H must be of type H_ξ, i.e. a GEV distribution.

The GEV distribution is usually defined with a location parameter μ ∈ \mathbb{R} and a scale parameter σ > 0: H_{ξ,μ,σ}(x) = H_ξ\!\left(\frac{x-μ}{σ}\right).

Suppose we have actual market profit-and-loss data, where profit-and-loss is defined as the change in value of a trade. We assume the data lie in the maximum domain of attraction of an extreme value distribution H_ξ. One way to apply the above theory is to divide the data into m blocks of size n. Denoting the maximum value in the jth block by M_n^{(j)}, we obtain the block maxima M_n^{(1)}, \dots, M_n^{(m)}. Multiple approaches, such as maximum likelihood and probability-weighted moments, can then be used to fit the GEV distribution to these block maxima. We do not discuss this block-maxima method further; our main focus is the method of threshold exceedances.

2.4 Threshold Exceedance

2.4.1 Generalized Pareto Distribution

Given a random variable X with df F, we define the excess distribution of X over the threshold u as

F_u(x) = P(X - u ≤ x \mid X > u) = \frac{F(x+u) - F(u)}{1 - F(u)}, \qquad 0 ≤ x ≤ x_F - u,

where x_F ≤ ∞ is the right endpoint of F.

The Generalized Pareto Distribution (GPD) is

G_{ξ,β}(x) = \begin{cases} 1 - (1 + ξx/β)^{-1/ξ}, & ξ ≠ 0, \\ 1 - \exp(-x/β), & ξ = 0, \end{cases}

where x ≥ 0 when ξ ≥ 0, and 0 ≤ x ≤ -β/ξ when ξ < 0. Similar to the Generalized Extreme Value distribution, the Generalized Pareto Distribution has three special cases: ξ < 0 gives the so-called Pareto type 2 distribution, ξ = 0 gives the exponential distribution with parameter 1/β, and ξ > 0 gives the ordinary Pareto distribution with α = 1/ξ and κ = β/ξ.

Pickands-Balkema-de Haan Theorem. For F ∈ MDA(H_ξ), we can find a positive function β(u) such that

\lim_{u → x_F} \; \sup_{0 ≤ x ≤ x_F - u} \left| F_u(x) - G_{ξ,β(u)}(x) \right| = 0.


2.4.2 Sample Mean Excess Plot

Suppose X_1, ..., X_n are i.i.d. loss data with df F ∈ MDA(H_ξ); we can then approximate the excess loss df F_u for high thresholds u. The procedure is to pick a threshold u and count the number of observations above u, denoted N_u. If X_1, ..., X_{N_u} denote the observations exceeding u, we define Y_i = X_i - u to represent the excess losses.

We apply maximum likelihood estimation to optimize β and ξ, with log-likelihood

\ln L(ξ, β; Y_1, ..., Y_{N_u}) = -N_u \ln β - (1 + 1/ξ) \sum_{i=1}^{N_u} \ln(1 + ξY_i/β).

First we define the mean excess function of a r.v. X with finite mean as e(u) = E(X - u \mid X > u). In a sample X_1, \dots, X_n, where Y_1, \dots, Y_{N_u} denote the exceedance amounts over u, e(u) can be estimated by

e_n(u) = \frac{1}{N_u} \sum_{j=1}^{N_u} Y_j.

Now let us introduce a further result: if F_u(x) = G_{ξ,β}(x) for some ξ < 1 and β > 0, then

e(u) = \frac{β}{1-ξ} \quad \text{and} \quad e(v) = \frac{β + ξ(v-u)}{1-ξ}, \qquad v ≥ u.

Since e(v) is a linear function of v, the sample mean excess function e_n(u) grows approximately linearly in u wherever F_u ≈ G_{ξ,β} is a good approximation. The sample mean excess plot is the plot of e_n(u) against u; it should therefore exhibit linearity from the threshold at which a good approximation is achieved. In our optimization of β and ξ, we take u near the beginning of the linear trend in the plot.

2.4.3 Modeling VaR

Under the assumption F_u(x) = G_{ξ,β}(x), we derive the following:

\bar F(x) = P(X > u)\, P(X > x \mid X > u) = \bar F(u)\, P(X - u > x - u \mid X > u) = \bar F(u)\, \bar F_u(x - u) = \bar F(u) \left( 1 + ξ \frac{x - u}{β} \right)^{-1/ξ}

Following the proposal of Smith (1987), we have the tail estimator

\hat{\bar F}(x) = \frac{N_u}{n} \left( 1 + \hat{ξ}\, \frac{x - u}{\hat{β}} \right)^{-1/\hat{ξ}}

With this tail estimator, we can estimate VaR from the following equation:

\widehat{VaR}_p = u + \frac{\hat{β}}{\hat{ξ}} \left( \left( \frac{n(1 - p)}{N_u} \right)^{-\hat{ξ}} - 1 \right)

2.4.4 The Hill Method

Besides the GPD method, there is an alternative way to estimate the tail of a distribution: the well-known Hill method, introduced by Hill (1975). Before introducing the Hill method, we define a slowly varying function as follows:


\lim_{x→∞} \frac{L(tx)}{L(x)} = 1 \quad \text{for any } t > 0.

Slowly varying functions change relatively slowly as x increases; the logarithmic function is an example.

Then we have the following theorem (Fréchet MDA, Gnedenko Theorem): for ξ > 0,

F ∈ MDA(H_ξ) ⟺ \bar F(x) = x^{-1/ξ} L(x)

for some slowly varying function L.

In other words, if F ∈ MDA(H_ξ) for some ξ > 0, then F is a heavy-tailed distribution, in the sense that its moments of order higher than 1/ξ are infinite. The bigger ξ is, the heavier the tail of F.

Assume the underlying loss distribution is in the maximum domain of attraction of the Fréchet distribution; by the above theorem, it has a tail of the form

\bar F(x) = L(x) x^{-α}

for a slowly varying function L and a positive parameter α = 1/ξ. The Hill method is based on finding a good estimator of α from i.i.d. data X_1, ..., X_n generated from F. Assume that X_{1,n} ≥ X_{2,n} ≥ ... ≥ X_{n,n} are the order statistics of the positive sample X_1, ..., X_n, so that X_{k,n} denotes the kth largest observation. Let e^* denote the mean excess function of \ln X, so we have the following equation:

e^*(\ln u) = E(\ln X - \ln u \mid \ln X > \ln u).

It is easy to show that \lim_{u→∞} e^*(\ln u) = 1/α. Therefore, we should have e^*_n(\ln X_{k,n}) ≈ 1/α for large n and suitably chosen k, where e^*_n denotes the empirical mean excess function of the log-data.

The Hill estimator based on k upper order statistics is defined as:

\hat{α}^{(H)}_{k,n} = \left( \frac{1}{k} \sum_{j=1}^{k} \ln X_{j,n} - \ln X_{k,n} \right)^{-1}

Assuming a tail of this standard form and concerning ourselves only with the risk-measure estimates, we can write the tail model as \bar F(x) = C x^{-α} above a threshold u, where x ≥ u > 0. Here C can be written as a power of u times the tail probability at u, namely C = u^α \bar F(u). Replacing \bar F(u) by the empirical estimator k/n (with u = X_{k,n}) and α by the Hill estimator, we rewrite the Hill tail estimator in the following form:

\hat{\bar F}(x) = \frac{k}{n} \left( \frac{x}{X_{k,n}} \right)^{-\hat{α}^{(H)}_{k,n}}
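As a small illustration, here is a minimal R sketch of the Hill estimator above. The function name hill.alpha and the assumption that the losses are passed already sorted in decreasing order are our own choices for this sketch, not code from the thesis.

## Hedged sketch of the Hill estimator of the tail index alpha = 1/xi.
## loss.desc: positive losses sorted in decreasing order, so loss.desc[j] = X_{j,n}.
hill.alpha <- function(loss.desc, k) {
  1 / (mean(log(loss.desc[1:k])) - log(loss.desc[k]))
}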


Chapter 3

Data Fitting and Parameter Estimation

In this chapter we use the actual daily profit-and-loss data at ING Financial Markets (FM) level in 2014; the data reflect the actual profit and loss for the 259 business days in 2014 at ING. Let us take a look at the plot of the data and discuss some of its properties.

Figure 3.1: Daily loss distribution for ING FM Level in 2014

From the plot of the loss data in Figure 3.1, we can see that the data are randomly scattered except for some extreme points. Trading at ING FM level mostly produced a profit rather than a loss in 2014: the observations fluctuate around the mean of -3312.85, with a standard deviation of 3237.908. This tells us that ING made a daily average profit of 3312.85 million euros in that year. Now let us look at the histogram of the loss distribution shown in Figure 3.2.


Figure 3.2: Histogram of loss data

We apply the Extreme Value Theory method to estimate the risk, together with the parameter estimation. The actual data table is shown in the appendix. We use R to perform the data fitting and estimation.

First, we define all the random variables needed in the estimation procedure.

f <- file.choose()        ## choose a file in the computer directory
d <- read.csv(f)          ## read the .csv data
loss <- -d[,2]            ## take the second column as loss data
u <- seq(0, 10000, 10)    ## define a range of thresholds
Nu <- rep(0, 1001)        ## number of exceedances per threshold
y <- rep(0, 259)          ## excess amounts for a given threshold
e.nu <- rep(0, 1001)      ## mean excess amount per threshold

The data cover business days, so there are 259 observations in 2014. We increase the threshold in steps of 10 from 0 to 10000, which gives 1001 threshold values in total. A for loop is then built to keep track of the number of exceedances and the actual excess amounts for each threshold.


for (i in 1:1001) {            ## loop over the 1001 candidate thresholds
  for (j in 1:259) {           ## loop over the 259 daily losses
    y[j] <- max(loss[j] - u[i], 0)
    if (y[j] > 0) { Nu[i] <- Nu[i] + 1 }
  }
  e.nu[i] <- sum(y)/Nu[i]
}

Figure 3.3: mean excess function with respect to threshold

After computing the mean excess function e_n(u), we plot it against u; this sample mean excess plot is shown in Figure 3.3. We clearly observe some decreasing straight-line segments in the plot, so it is better to use only the observed loss values as thresholds instead of the regular grid u. We also leave out the last few highest thresholds because there are too few data points above them. The following code is the second way to produce the mean excess plot:

loss <- sort(loss[loss > 0])   ## only take positive losses, and sort them from small to large
N <- length(loss)              ## number of positive losses
e.n <- rep(0, N - 1)           ## vector to store the mean excesses
for (i in 1:length(e.n)) {
  e.n[i] <- mean(loss[(i+1):N] - loss[i])   ## compute mean excesses
}



Figure 3.4: mean excess function with respect to observed loss values

As seen from the new plot in Figure 3.4, a linear trend lies in the region u ∈ (1900, 2400). Above this range the plot has no regular shape and even decreases for certain threshold values. Therefore we first make a provisional choice of threshold within this region; the sensitivity analysis is discussed in the next chapter.

u.new <- seq(1900, 2400, 10)   ## candidate thresholds
Nu.new <- Nu[191:241]          ## number of exceedances for each candidate threshold
beta <- rep(0, 51)
xi <- rep(0, 51)
a <- 0.5; b <- 2               ## initial values for (beta, xi)
for (i in 1:51) {
  log.lik <- function(p) {     ## negative GPD log-likelihood, p = c(beta, xi)
    Nu.new[i]*log(p[1]) + (1 + 1/p[2]) * sum(log(1 + p[2]*pmax(loss - u.new[i], 0)/p[1]))
  }
  beta[i] <- optim(c(a, b), log.lik)$par[1]
  xi[i]   <- optim(c(a, b), log.lik)$par[2]
}

The table of β and ξ estimates is also shown in the Appendix. Taking the mean of the ξ estimates, we get our estimate ξ̂ = 0.7245229 (SE = 0.16), and its corresponding β̂ = 624.7158 (SE = 168), taking u = 2200 as the threshold.

In order to compare the results, we also apply the Hill method to see how large the differences are. Since our data set is really small, k should be rather small compared to n = 259, where X_{k,n} denotes the kth largest order statistic in the sample. In our case, we let k range from 2 to 16. Using the equation discussed in Chapter 2,

\hat{α}^{(H)}_{k,n} = \left( \frac{1}{k} \sum_{j=1}^{k} \ln X_{j,n} - \ln X_{k,n} \right)^{-1},

with ξ = 1/α, we obtain Table 3.1, which shows the estimated ξ for each k. Figure 3.5 indicates that the stable region for ξ is roughly k from 6 to 10. We do not choose k greater than 10 because the sample size n = 259 is already small due to limited data access, so we need a rather small k compared to n.

Table 3.1: ξ with different order k

k     ξ
2     0.4292
4     0.7558
5     0.6814
6     0.5690
8     0.6203
10    0.5748
12    0.7386
14    0.6790
16    0.8606


Figure 3.5: ξ with respect to k

Therefore, we take k = 6 in our VaR estimation. Plugging into the Hill estimator of the tail of F,

\hat{\bar F}(x) = \frac{k}{n} \left( \frac{x}{X_{k,n}} \right)^{-\hat{α}^{(H)}_{k,n}},

we obtain the VaR estimator by setting \hat{\bar F}(x) = 1 - p and solving for x. The result is given in the next chapter.
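For concreteness, setting (k/n)(x/X_{k,n})^{-\hat{α}} = 1 - p and solving gives x = X_{k,n} \big(n(1-p)/k\big)^{-1/\hat{α}}. Below is a minimal R sketch of this inversion under the same assumptions as the hill.alpha sketch in Chapter 2; the function name hill.var and its arguments are our own illustrative choices, not code from the thesis.

## Hedged sketch: VaR at confidence level p from the Hill tail estimator.
## loss.desc: positive losses sorted in decreasing order; k: number of upper order statistics;
## n: total sample size used in the tail estimator (259 business days here).
hill.var <- function(loss.desc, k, p, n) {
  alpha.hat <- 1 / (mean(log(loss.desc[1:k])) - log(loss.desc[k]))   ## Hill estimate of alpha
  loss.desc[k] * (n * (1 - p) / k)^(-1 / alpha.hat)                  ## solve (k/n)(x/X_{k,n})^{-alpha} = 1 - p
}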


Chapter 4

Results and Sensitivity Analysis

In the last chapter we maximized the log-likelihood function and obtained the optimal estimates of β and ξ. Recall the VaR formula:

\widehat{VaR}_p = u + \frac{\hat{β}}{\hat{ξ}} \left( \left( \frac{n(1 - p)}{N_u} \right)^{-\hat{ξ}} - 1 \right)

We get a VaR estimate of 3386.65. Since we have 259 days, the traditional method to calculate the VaR figure is to rank the data and find the 99% percentile, which is an interpolation between the 2nd highest loss and the 3rd highest loss; in this case it is 4080.1495. From the result, we see that the GPD-based estimate is roughly 19% lower than the historical one.

On the other hand, we are curious how sensitive our ξ estimate is to the choice of threshold. To make this precise, we plot the estimates of ξ against the choice of u in Figure 4.1.

There is an increasing pattern over a range of thresholds from 1900 until about 2100; a similar pattern then starts at 2100 and continues increasing until around 2260, and so on. The pattern repeats with the same shape again and again up to the largest threshold, 2400.

Based on this pattern, we reproduce a table of the VaR estimate at each threshold in Table 4.1. The table shows that the VaR estimate stays below the historical estimate of 4080.1495 for every choice of u, indicating a less prudent VaR figure than the historical one. The standard deviation of the VaR estimates over the different u is 140.5021. Therefore our VaR figure does not vary widely, and the sensitivity analysis indicates a stable estimate.
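As an illustration of how such a table can be produced, here is a minimal R sketch that applies the GPD VaR formula to every candidate threshold and measures the spread of the resulting estimates. It assumes the vectors u.new, beta, xi and Nu.new from Chapter 3 and the illustrative gpd.var function sketched in Chapter 2, so the names are ours rather than the thesis's.

## Hedged sketch: GPD-based 99% VaR for each candidate threshold, and its spread.
var.u <- gpd.var(u = u.new, beta = beta, xi = xi, n = 259, Nu = Nu.new, p = 0.99)  ## vectorized over thresholds
sd(var.u)   ## standard deviation of the VaR estimate across thresholds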

Now we derive the result for the alternative Hill method: the VaR estimate is 3074.2995 for k = 6, which is considerably lower than the historical estimate. The VaR estimates for different choices of k are given in Table 4.2.



Figure 4.1: sensitivity of ξ with respect to u

Table 4.1: VaR with different thresholds between 1900 and 2400

Threshold   VaR        Threshold   VaR        Threshold   VaR
1900        3456.462   2070        3469.057   2240        3283.754
1910        3726.465   2080        3449.346   2250        3251.480
1920        3173.188   2090        3427.619   2260        3214.265
1930        3697.803   2100        3404.020   2270        3394.760
1940        3685.291   2110        3552.609   2280        3373.787
1950        3670.118   2120        3536.340   2290        3351.702
1960        3657.079   2130        3518.421   2300        3326.245
1970        3641.813   2140        3502.557   2310        3299.338
1980        3627.714   2150        3485.906   2320        3511.486
1990        3611.147   2160        3467.845   2330        3499.525
2000        3596.127   2170        3447.692   2340        3484.527
2010        3578.377   2180        3428.342   2350        3469.399
2020        3562.399   2190        3407.728   2360        3453.061
2030        3544.429   2200        3386.650   2370        3437.281
2040        3526.348   2210        3362.966   2380        3420.333
2050        3508.857   2220        3338.618   2390        3403.325
2060        3789.404   2230        3312.695   2400        3384.205


Table 4.2: VaR with different order k

k     VaR          k     VaR          k     VaR
4     2918.1045    5     2988.1884    6     3074.2995
8     2989.5240    10    2937.3558    12    3287.4171
14    3057.7467    16    3397.4742


Chapter 5

Conclusions and Suggestions

First, we make some comments about the historical method as used by the ING Trading Risk Consolidation Team. It uses actual market data, in the sense that no distributional assumptions need to be made, and it is fairly easy to understand. The disadvantage of the historical method is that day-to-day changes in the data lead to large volatility in the estimate. The underlying assumption is that past performance is a reflection of future behaviour; if a particular risk factor has had an upward trend in the past, then downside risks may be underestimated.

Nowadays the loss distribution in financial markets is heavily non-Gaussian. Extreme Value Theory avoids this source of inaccuracy by using a threshold to approximate the tail behaviour instead of fitting the whole data set, and it captures mostly the extreme events that might happen in the actual market. The 1-day 99% VaR at ING FM level based on the GPD is 3300.37, which is 19% less than the historical method. The standard deviation across thresholds is small, which indicates robustness. It provides more useful information for the financial capital that is set aside for stressed markets. One limitation is that the theory we apply is univariate EVT, whereas in reality multiple risk factors may be involved in the computation of the VaR of a portfolio.

By construction, VaR returns only a single number and gives limited information about the profit and loss beyond that level. Other risk measures are useful for studying the whole behaviour of the loss distribution.

In this thesis we also provide an alternative way to estimate VaR, the popular Hill method. As we can see from Table 4.2, nearly all of the Hill VaR estimates are lower than the estimate from the Generalized Pareto Distribution method. Some other papers combine Extreme Value Theory with time series models and suggest that the Hill method is preferable over short time horizons.

The historical method gives a greater VaR than both the GPD method and the Hill method, indicating that the bank is prudent about future liabilities and is therefore more likely to retain a higher credit standing in a stressed market. However, both models have their deficiencies and benefits, and the choice of model should also support consistent VaR estimates over multiple horizons.

In our method, the number of data points above the selected threshold is 8, which is too small to carry enough information. Since the scope of our data is 259 days, we are limited to the data provided due to restricted access. Furthermore, even where the GPD approximation to F_u is good, so few data points lead to a large variance. The same issue arises in the Hill method, since our choice of k is small and we do not have many data points above X_{k,n}. This kind of bias-variance trade-off is a well-known problem


and makes the results more volatile. On the other hand, the accuracy also depends on the choice of threshold separating the tail observations from the observations belonging to the centre of the distribution.

Another alternative approach is Monte Carlo VaR, which is well suited to non-linear products. Considering all these disadvantages, a combination of these models may be useful to generate a more accurate VaR. Furthermore, different risk measures are designed for different assets, so that a different length of data is used in each case. Some further work could consider new models for fitting the loss distribution.


Appendix A: Data and Estimates

Here is the actual profit and loss data in 2015 for ING FM Level (259 business days) [1] 11113.38934 4710.23830 3642.28750 2717.38928 2667.93085 [6] 2485.93751 2318.34000 2264.51336 2101.04984 1908.70370 [11] 1906.04114 1563.39386 1485.19902 1391.08362 1351.12911 [16] 1208.72625 1065.32858 1059.28096 1004.36448 972.39537 [21] 748.62376 708.94428 562.05406 537.16403 527.41893 [26] 414.23844 294.64806 273.19148 193.19699 130.67357 [31] 113.70831 -30.84082 -77.18874 -78.89830 -110.05125 [36] -191.18345 -272.30611 -291.48666 -410.07301 -423.64529 [41] -460.38248 -558.36438 -569.77637 -625.07110 -708.54076 [46] -753.53389 -828.13341 -848.37808 -873.93270 -877.53103 [51] -932.97147 -1022.92238 -1038.79545 -1073.40912 -1137.20506 [56] -1140.74137 -1178.72118 -1192.83491 -1222.26398 -1224.47784 [61] -1281.12580 -1294.26860 -1297.37333 -1300.60040 -1301.28225 [66] -1301.90702 -1302.98721 -1345.52383 -1361.70504 -1367.81190 [71] -1370.50528 -1372.15647 -1397.56016 -1402.17418 -1417.78445 [76] -1455.17388 -1466.98432 -1494.30050 -1572.31793 -1612.45268 [81] -1629.23519 -1642.79550 -1686.83756 -1760.37863 -1781.42404 [86] -1783.15755 -1804.11370 -1806.78039 -1838.51664 -1840.52574 [91] -1855.68490 -1874.92255 -1929.68959 -1981.88132 -2032.07649 [96] -2038.74091 -2039.01822 -2065.89309 -2093.50890 -2150.98165 [101] -2151.63630 -2209.61389 -2248.07940 -2259.26072 -2301.09553 [106] -2340.92415 -2391.97336 -2408.68568 -2436.67231 -2443.21262 [111] -2463.09388 -2469.10755 -2496.12505 -2499.16017 -2509.93667 [116] -2512.76945 -2525.78908 -2543.70870 -2550.04228 -2556.45216 [121] -2576.34716 -2578.17641 -2628.34347 -2674.73043 -2728.05087 [126] -2805.60192 -2811.39028 -2813.80589 -2818.92585 -2869.42267 [131] -2954.50610 -2974.67088 -3012.88572 -3122.55356 -3159.77577 [136] -3160.43838 -3170.13548 -3192.93775 -3194.78359 -3225.16093 [141] -3232.47797 -3255.09801 -3438.77371 -3491.99182 -3493.08418 [146] -3563.61045 -3634.72875 -3698.20757 -3710.50435 -3715.02675 [151] -3764.02952 -3781.14221 -3861.37628 -3953.16134 -4059.32466 [156] -4063.91282 -4077.46850 -4106.26065 -4140.08661 -4153.03887 [161] -4160.92240 -4196.93283 -4203.69250 -4217.98930 -4226.52972 [166] -4268.63966 -4301.59711 -4344.88630 -4345.77568 -4379.61492 [171] -4412.07973 -4421.66596 -4427.17150 -4440.97153 -4502.56417 [176] -4505.32856 -4511.52650 -4544.63410 -4632.98282 -4635.21374 [181] -4713.63924 -4723.70961 -4793.42732 -4804.04783 -4817.68362 [186] -4891.58131 -5034.86856 -5043.78599 -5059.14371 -5227.19684 [191] -5293.54812 -5314.84021 -5350.18971 -5360.68659 -5396.90563 [196] -5399.02518 -5428.64575 -5466.57072 -5468.17462 -5516.71824 [201] -5532.32095 -5543.55464 -5583.45305 -5605.14036 -5632.42535 [206] -5684.10013 -5691.24375 -5700.84532 -5712.38012 -5718.89302 18


19 [211] -6079.01007 -6193.05875 -6260.94308 -6279.22913 -6376.27265 [216] -6406.57974 -6513.51053 -6519.18954 -6748.12356 -6786.54065 [221] -6805.77367 -6836.45723 -6869.45484 -6901.73488 -6912.56618 [226] -6984.56161 -7026.37598 -7050.80147 -7091.60513 -7189.84435 [231] -7241.77817 -7266.00397 -7357.94523 -7412.66624 -7523.04378 [236] -7647.57731 -7770.39290 -7819.17587 -7896.99523 -7924.95549 [241] -8002.62426 -8011.18883 -8148.29323 -8273.19398 -8424.54692 [246] -8446.52246 -8577.61488 -8584.90366 -8780.14710 -8922.90063 [251] -8957.93569 -9484.08752 -9599.24991 -10592.14314 -11461.85462 [256] -12114.82108 -12269.41106 -13604.92484 -14841.75960

Here is the β estimated value for the corresponding threshold from 2000 to 2700 [1] 863.1846 838.5744 814.7519 789.9143 763.7406 738.8092 711.9101 [8] 684.4117 657.8428 628.5225 598.2936 869.3388 844.2971 817.8827 [15] 792.6039 766.5943 739.7383 711.1411 682.8023 654.2484 624.7158 [22] 592.9994 560.4720 526.5984 490.6958 450.8927 406.8985 721.7041 [29] 688.9759 654.6276 617.8774 578.9321 1051.9166 1023.5968 991.7220 [36] 959.9185 927.3479 895.0112 861.8859 827.4250 792.3493 756.5748 [43] 718.7490 679.3874 637.7698 595.0613 549.0736 499.1921 439.8348 [50] 1156.7941 1116.2933 1078.0126 1037.3587 993.7607 951.0101 904.8230 [57] 855.5133 804.1806 749.5815 691.5497 626.7876 557.7198 480.9844 [64] 398.2956 309.7003 219.5610 126.3767 1656.6983 1609.6001 1558.5286 [71] 1507.2817

Here is the ξ estimated value for the corresponding threshold from 2000 to 2700 [1] 0.5788012 0.5949471 0.6120776 0.6299175 0.6509544 0.6712704 0.6945356 [8] 0.7194243 0.7444744 0.7749369 0.8075580 0.6231593 0.6401810 0.6587569 [15] 0.6783888 0.6995423 0.7218911 0.7467272 0.7734983 0.8008619 0.8318263 [22] 0.8671288 0.9065374 0.9509743 1.0014552 1.0651508 1.1443883 0.8151227 [29] 0.8474270 0.8845830 0.9257582 0.9747023 0.6402911 0.6589757 0.6804914 [36] 0.7027280 0.7255739 0.7502976 0.7760336 0.8066667 0.8364651 0.8701421 [43] 0.9087216 0.9495105 0.9979071 1.0515363 1.1143566 1.1894942 1.2930302 [50] 0.6699216 0.6967844 0.7226019 0.7525373 0.7843616 0.8190397 0.8582376 [57] 0.9019831 0.9522193 1.0095584 1.0759691 1.1584743 1.2595984 1.3892816 [64] 1.5542640 1.7773189 2.0820348 2.5777705 0.5125712 0.5364914 0.5623533 [71] 0.5892376

References

Alexander J. McNeil et al. (2005). "Quantitative Risk Management: Concepts, Techniques and Tools", 1st edition, Princeton University Press, Princeton.

Abhay K. Singh (2011). "Value at Risk Estimation Using Extreme Value Theory". http://www.mssanz.org.au/modsim2011/D6/singh.pdf

Bulent Gokay (2009). "The 2008 World Economic Crisis: Global Shifts and Faultlines". http://www.globalresearch.ca/the-2008-world-economic-crisis-global-shifts-and-faultlines/12283

Jose A. Lopez (1998). "Methods for evaluating Value-at-Risk estimates", Federal Reserve Bank of New York, 9802.

Kay Giesecke et al. (2008). "Measuring the Risk of Large Losses", Journal of Investment Management, 6(4), 1-15.

Mary R. Hardy (2006). "An Introduction to Risk Measures for Actuarial Applications", 1st edition, Education and Examination Committee of the Society of Actuaries, U.S.A.


Meelis Käärik and Anastassia Žegulova (2012). "On estimation of loss distributions and risk measures", Acta et Commentationes Universitatis Tartuensis de Mathematica, 16, 53-67.

Meera Sharma (2012). "The Historical Simulation Method for Value-at-Risk: A Research Based Evaluation of the Industry Favorite". http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2042594

Turan G. Bali (2003). "An extreme value approach to estimating volatility and Value at Risk", The Journal of Business, 76(1), 83-108.

Viviana Fernandez (2003). "Extreme Value Theory: Value-at-Risk and returns dependence around the world". http://www.dii.uchile.cl/~ceges/publicaciones/ceges51.pdf

Warwick J. McKibbin and Andrew Stoeckel (2009). "The global financial crisis: causes and consequences", Lowy Institute, 2.09.
