
University of Amsterdam, Amsterdam Business School Master in International Finance

Estimation of Value-at-Risk by traditional methods and extreme

value theory: a comparative evaluation of their predictive

performance during the recent financial crisis.

September 2013

Abstract

Value-at-Risk is a commonly used risk measure in the financial industry. In this study, we compare three traditional VaR models (namely RiskMetrics, Historical Simulation and Monte Carlo Simulation) with a VaR model based on Extreme Value Theory. Using daily returns of the FTSE 100, DAX, SSE and BOVESPA over the period from December 27, 2001 to December 28, 2010, we find that most traditional methods tend to underestimate risk, especially RiskMetrics, which assumes normality of the returns distribution. Extreme Value Theory overcomes this problem by modeling the extreme returns in the tails of the distribution and provides a more precise and robust estimation of market risk.


By Minghe Lu (10451900)
Supervisor: Dr. Chrisostomos Florackis


Content page

PART1: Introduction

PART2: Literature Review

2.1. Parametric Models

2.2. Nonparametric Methods-Historical Simulation

2.3. Nonparametric Methods-Monte Carlo Simulation

2.4. Semi-parametric Method-Extreme Value Theory

2.5. Traditional methods vs. EVT

PART3: Methodology

3.1 Traditional methods

3.2 Extreme Value Theory

3.3 Parameter Estimation: Maximum Likelihood

3.4 Back-testing

PART4: Data description and normality test

PART5: Empirical Analysis

5.1 The extreme value approach

5.2 VaR values based on traditional methods and the EVT approach

5.3 Back-testing results for VaR models

PART6: Concluding Remarks

PART7: Appendix


PART1. Introduction:

Over the past few decades, a number of financial catastrophes have occurred due to unfavorable market movements, such as the worldwide stock market collapse in 1987, the Mexican crisis in 1995 and the 1997 financial turmoil in Asian markets. Shortcomings in risk management practices have been widely viewed as a potential contributing factor to these financial disasters (Ho et al., 2000). Financial and economic globalization has generated many investment opportunities and, at the same time, intensified the effects of unfavorable market movements; the collapse of asset prices during the 2008 financial crisis has therefore induced risk managers, researchers and regulators to readdress the importance of better risk measurement. Additionally, the awareness of regulators has given an impetus to the use of internal risk models. In 1996, the Bank for International Settlements (BIS, 1996) amended the Capital Accord of 1988 to incorporate market risk and stipulated the use of models by financial firms. At the same time, the revised Basle Accord in 1998 allowed banks to use internal market risk management models to fulfill their capital adequacy requirements.

In response to these challenges, risk management has developed various tools to measure and control market risk in order to protect investors from potential losses. Value-at-Risk (VaR) has become one of the most important risk management tools. The main advantage of VaR is its conceptual simplicity. VaR is defined as the maximum potential loss in value of a portfolio due to adverse market movements, for a given probability. Statistically, it can be defined as a lower quantile of the distribution of returns that is only rarely exceeded. However, the diversity of assets, currencies and markets in which a global bank has an interest means that the implementation of VaR faces many difficulties and is very sensitive to data requirements.

Since we attempt to assess market risk from the point of view of international investors, an international study of different VaR methods is presented in this paper. A broad range of data is analyzed, covering stock returns in four countries: China, Brazil, the United Kingdom and Germany. China and Brazil are widely regarded as the largest emerging markets and fast-growing economies. Germany has the largest national economy in Europe measured by nominal GDP, followed by the UK. More specifically, this thesis intends to compare the results for mature capital markets with those generated from emerging markets, which are believed to exhibit high volatility and liquidity crashes during recession periods.

The global financial crisis of 2007-2008 provides an interesting opportunity and gives this paper its main aim and motivation: to explore a better measurement of market risk. The rest of this paper is organized as follows. Part 2 discusses the previous literature on various VaR approaches. Part 3 introduces the details of the methodology adopted in this paper, including three traditional VaR methods, namely the analytical or RiskMetrics approach (RM), the historical simulation method (HS) and the Monte Carlo simulation method (MC), as well as Extreme Value Theory (EVT). Part 4 describes the data and tests the normality assumption of the index return distributions. In Part 5, an empirical analysis is conducted using four stock indices, namely those of the UK, Germany, China and Brazil. The period from December 27, 2001 to December 27, 2007 is used to estimate the various parameters, which are then used to calculate VaR values based on the EVT method and compared with those derived from traditional methods. These methods are then evaluated using back-testing over the period from December 28, 2007 to December 28, 2010. Finally, Part 6 summarizes and concludes the paper.

The performance of the models investigated in this paper is evaluated using a binomial-distribution back-testing method. Considering the fact that most financial return series are asymmetric and fat-tailed, the EVT approach has an advantage over the other models, especially over RiskMetrics, which is based on the normal distribution. Accordingly, the preliminary results show that traditional risk management strategies, such as Historical Simulation, RiskMetrics and Monte Carlo, perform relatively poorly compared to the EVT approach.


PART2. Literature Review:

Risk management has developed various tools to measure and control risk. VaR has become the standard measure in financial markets due to its simple and intuitive theory. As stated by Manganelli and Engle (2001), although the existing methods for VaR estimation use different methodologies, they generally follow a common procedure: 1) mark-to-market the portfolio; 2) estimate the return distribution; 3) calculate the VaR. The main differences among VaR models derive from the second step, which is the subject of several discussions and debates over how changes in portfolio value should be modeled. The existing models fall into three major categories:

• Parametric conditional (RiskMetrics and GARCH)

• Nonparametric unconditional (Historical Simulation, Monte Carlo and the hybrid model)

• Semiparametric unconditional (Extreme Value Theory and CAViaR)

The results generated by the methods mentioned above can differ greatly from each other. Beder (1995) estimated VaR using eight kinds of methods on three hypothetical portfolios and showed that there are large differences among the results, varying by more than 14 times for the same portfolio. Therefore, in order to decide which method can best capture market risk, it is essential to identify the implicit assumptions as well as the quantitative techniques used. In this paper, the three most commonly used traditional methodologies, namely the RiskMetrics approach (RM), the historical simulation method (HS) and the Monte Carlo simulation method (MC), are discussed alongside the more recently developed EVT.

2.1. Parametric Models:

The RiskMetrics model proposes a specific parameterization for the behavior of prices. It builds on the family of volatility models first introduced by Engle (1982) and Bollerslev (1986), which have been successfully applied to financial data. The method assumes that all returns are jointly normally distributed and estimates volatilities and correlations under this assumption of normality. For portfolios with little options content, parametric models provide a better prediction of downside risk (Jorion, 1997). Additionally, Jorion (1996) praises RiskMetrics for its lower sensitivity to estimation error compared to methodologies with a general distribution assumption. Furthermore, by applying variance/covariance relationships among the various portfolio assets, parametric methods perform much better than other methods under back-testing and sensitivity analysis (Linsmeier and Pearson, 1996). Jorion (1997) also states that one advantage of RiskMetrics is that it is easy to explain to management. However, Linsmeier and Pearson (1996) argue the opposite and comment that resolving this issue mainly depends on the manager's interpersonal skills and familiarity with statistical methods. A significant body of literature provides empirical evidence of excess kurtosis in financial returns, the heavy-tails effect and varying degrees of asymmetry, which together constitute the main drawback of the RiskMetrics approach. The general finding is that this procedure tends to underestimate VaR, as the normality assumption is not always consistent with financial data. On the one hand, this weakness leaves considerable room for improvement, since avoiding the normality assumption offers a large space for better performance. On the other hand, according to Manganelli and Engle (2001), both GARCH and RiskMetrics are subject to three sources of misspecification: the specification of the variance equation may be wrong, the distribution chosen to build the log-likelihood may be wrong, and the standardized residuals may not be i.i.d. However, Jorion (1996, 1997) suggests that applying an alternative cumulative distribution (such as Student's t-distribution) can largely improve accuracy when fat tails are encountered.

2.2. Nonparametric Methods-Historical Simulation:

Two of the most commonly used nonparametric methods to calculate VaR are Historical Simulation and Monte Carlo Simulation. Historical Simulation is the simplest nonparametric method. Unlike the RiskMetrics approach, it does not make any distributional assumptions, but uses historical data to predict future price movements, which simplifies the calculation procedure. Historical Simulation has been praised for its flexibility and ease of implementation (Jackson, Maude and Perraudin, 1997). According to Linsmeier and Pearson (1996), it allows deviations from normality, as it does not rely on distributional assumptions. In addition, HS is free of parameter estimation risk, as the correlations between portfolio assets are modeled implicitly (Manfredo and Leuthold, 1999). Finally, Manfredo and Leuthold (1999) also suggest that Historical Simulation is a method easily understood by managers. Despite these positive aspects of Historical Simulation, several problems derive from the implicit assumption hidden behind it: that, based on historical data, the distribution of financial returns does not change. Firstly, this method seems to be logically inconsistent: if all the financial data within the window period have the same distribution, then all the returns in different time series must have the same distribution. Secondly, empirical quantiles are prone to estimation error, as they have much larger standard errors than parametric estimates (Manfredo and Leuthold, 1999). Moreover, the length of the window is a crucial factor: it should be large enough to make statistical inference significant, and at the same time it must not be so large as to take in observations outside the volatility cluster (Manganelli and Engle, 2001). So far there is no clear solution to this issue. Another criticism is that Historical Simulation-based VaR estimates may be biased downwards if the market is moving from a relatively low-volatility period to a high-volatility period, and biased upwards in the opposite case. The reason is that it takes some time for observations from the low-volatility or high-volatility period to leave the window (Jorion, 1996, 1997). Moreover, as Danielsson and de Vries (1997) claim, Historical Simulation gives the same weight to all observations, ignoring the time variation of volatility. The final problem concerns the predictable jumps in Historical Simulation caused by extreme returns: when extreme negative returns sit at the beginning of the window, it is easy to predict that the VaR estimate will jump once they drop out, undermining the reliability of the Historical Simulation method.

Given the limitations of RM and HS, an interesting hybrid method was developed by Boudoukh, Richardson and Whitelaw (1998). This hybrid method combines RM and HS by applying exponentially declining weights to past returns of the portfolio. It is believed to improve significantly on the previous methodologies: it relaxes the assumptions made in RiskMetrics and is also more flexible than the historical approach.

2.3. Nonparametric Methods-Monte Carlo Simulation:

Instead of using past data to generate the return distribution, Monte Carlo Simulation uses a statistical distribution that is believed to adequately capture or approximate the possible changes in the market factors. Jorion (1997) and Linsmeier and Pearson (1996) claim that the Monte Carlo method is the most flexible VaR methodology: it handles options content well and has distributional flexibility. On the other hand, the flexibility of the Monte Carlo method might also be its limitation. Jorion (1997) criticizes Monte Carlo methods for their tendency to generate specification error, especially for complicated portfolios. Moreover, the underlying data-generating process must also be established; geometric Brownian motion is suggested by Jorion (1997), who also claims that time variation can be included through GARCH specifications. However, this may be misspecified in Monte Carlo simulation, which implies a trade-off between time variation and model flexibility. The major pitfall of Monte Carlo simulation is its high requirement for computational skill and a good understanding of the stochastic process.

However, one of the major pitfalls of VaR is that most of the estimation approaches discussed above, both parametric and nonparametric, are designed to estimate the whole distribution and do not sufficiently exploit the lower tail of the return distribution; as a consequence, companies and institutions may suffer substantial losses when market prices fall significantly (Tolikas, Koulakiotis and Brown, 2007). To address this problem, alternative approaches have been proposed to estimate VaR, such as Extreme Value Theory (Danielsson and de Vries, 1998), regression quantile techniques (Chernozhukov and Umantsev, 2000) and the quasi-maximum likelihood GARCH implemented by McNeil and Frey (2000). In this paper, only Extreme Value Theory is discussed further.


2.4. Semi-parametric Method-Extreme Value Theory

Responding to the inconsistencies of the previous models, and in order to capture the magnitude and probability of extreme values, Extreme Value Theory provides a sound tool to assess and model risk. It focuses on rare and extreme observations and their corresponding probabilities by providing tail estimates that extend beyond the historical sample. Although EVT has long been applied in the sciences, its application in financial and economic models to capture tail events is a recent development. Research using EVT to examine return behavior in exchange rate changes includes Danielsson and de Vries (1997) and Hols and de Vries (1991). Interest in EVT for risk management has grown significantly in recent decades. The first use of the Generalized Extreme Value (GEV) distribution in EVT was introduced by Longin (1996) for daily returns of the S&P stock index. A more recent study by Danielsson and de Vries (2000) estimates VaR for randomly selected portfolios. The set of potential distributions was then extended by McNeil (1999) and McNeil and Frey (2000), who illustrate the application of the Generalized Pareto Distribution (GPD) in the case of the DAX and S&P indices. Finally, different data frequencies are applied by Tolikas, Koulakiotis and Brown (2007) to the German stock market. The main advantages of EVT can be summarized as follows: it incorporates the GEV distribution, which covers most of the commonly used distributions (Manganelli and Engle, 2001); additionally, Danielsson et al. (1998) suggested that VaR estimated by EVT can forecast market risk better than modeling the whole distribution.


Even though there are several positive aspects of Extreme Value Theory, some derived problems also need to be considered. Firstly, the assumption of i.i.d. returns seems to be inconsistent with the characteristics of financial data. In addition, Manganelli and Engle (2001) argue that the EVT method works well only at very low probability levels (i.e., very high confidence levels), but to what extent is still under discussion and hard to tell. The third problem concerns the selection of the threshold, which determines the number of order statistics used in the Hill estimation. If the selected point is too high, there will be too few exceedances, resulting in a high-variance estimator; conversely, a threshold that is too low generates a biased estimator (Embrechts, 1999; 2000). So far, there is no clear method for selecting the threshold, which constitutes the major drawback of Extreme Value Theory.

2.5. Traditional methods vs. EVT

The choice of an appropriate risk measure, among the different VaR models, is very critical, especially for emerging countries, which are characterized by high instability (Consigli, 2002). The role of EVT as a method to predict VaR has been tested by several academics and analysts. Danielsson and de Vries (1997) examined the predictive performance of different VaR methods using seven US stock returns. They found that EVT-based VaR is more accurate for fat-tailed distributions, while the other methods to some extent under- or overestimate the risk. A similar result was obtained by Pownall and Koedijk (1999), who compared VaR estimates using data from Asian stock markets; their results showed that EVT is superior in fitting the distribution of extreme observations. The testing of different methods was extended by Longin (2000), in which VaR estimates based on different confidence intervals were examined. He reported that almost all the methods perform equally well at relatively low confidence levels; however, at a high confidence level (for example 99.95%), EVT clearly stands out. Additionally, Gencay, Selcuk and Ulugulyagci (2003) conclude that, since conditional VaR models (GARCH, RM) vary much more than unconditional ones when market volatility is high, EVT delivers a more stable and consistent performance. This result is also supported by Bekiros and Georgoutsos (2005), who conclude that EVT-based VaR can significantly reduce large losses over a long-term investment period (with varying volatility), while the traditional methods can provide sufficient information over a short investment horizon. However, Bali (2003) found that EVT-generated VaR estimates were generally 24% to 38% larger than those generated by the normal distribution. Based on these findings, he argued that the substantially higher capital requirements implied by EVT may give banks fewer incentives to use a better internal risk model. A similar result was also obtained by Danielsson et al. (1998). However, in a further study, Danielsson (2002) used US data to test different methods, including RM, HS, GARCH, EWMA and EVT, and concluded that EVT does offer superior ability to capture the features of fat-tailed distributions at high confidence intervals.


PART3: Methodology:

3.1 Traditional methods:

The basic idea of historical simulation is to re-value the portfolio based on past actual prices of the market factors that affect the portfolio. The VaR in this case is estimated by

$$\mathrm{VaR}_t(\alpha) = F_r^{-1}(q), \qquad q = 1 - \alpha, \tag{1}$$

where $F_r^{-1}(q)$ is the $q$th quantile of the sample distribution of returns $r$.

Firstly, the market factors should be identified. Then, historical data on these factors are obtained and their changes are used to construct hypothetical portfolio values. Thirdly, these changes are applied to the portfolio to calculate the daily profits and losses that would have occurred. Finally, the portfolio changes are ordered from highest to lowest, and the loss that is equaled or exceeded 5% of the time is selected as the VaR. Since no parametric form is assumed for the distribution, the historical simulation method may fit the sample well around reasonable quantiles. The MC approach has a number of similarities to HS; the main difference is that, rather than using historical data, MC uses a pseudo-random generator to generate the changes in the market factors. The RM method, by contrast, assumes that the underlying market factors are normally distributed. If every random variable $r_t$ follows $N(\mu, \sigma)$ (a normal distribution with mean $\mu$ and standard deviation $\sigma$), the VaR at confidence level $p$ can be calculated as follows (Hull, 2010):

$$\mathrm{VaR} = \mu + \sigma\, N^{-1}(1 - p), \tag{2}$$

where $N^{-1}(1-p)$ is the $(1-p)$ quantile of the standard normal distribution function. If we denote the sample by $r_t$, $t = 1, 2, \ldots, n$, with $n$ the sample size, estimates of $\mu$ and $\sigma$ can be obtained from the sample mean and the sample variance:

$$\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} r_i, \qquad \hat{\sigma}^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(r_i - \hat{\mu}\right)^2. \tag{3}$$
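As a rough illustration, the traditional estimators of equations (1)-(3) can be sketched in a few lines of Python. This code is our own, not from the thesis: the function names and the synthetic return sample are assumptions, and the constant 1.6449 hard-codes the 95% standard normal quantile for the default confidence level.

```python
import math
import random

def historical_var(returns, alpha=0.95):
    """Equation (1): VaR is the (1 - alpha) empirical quantile of returns,
    reported as a positive loss."""
    ordered = sorted(returns)              # most negative return first
    k = int((1 - alpha) * len(ordered))    # index of the lower quantile
    return -ordered[k]

def normal_var(returns, alpha=0.95):
    """Equations (2)-(3): sample mean/variance plus a normal quantile."""
    n = len(returns)
    mu = sum(returns) / n
    sigma = math.sqrt(sum((r - mu) ** 2 for r in returns) / (n - 1))
    z = 1.6449                             # N^{-1}(0.95); change for other alpha
    return -(mu - z * sigma)

random.seed(0)
# synthetic daily returns standing in for an index return series
sample = [random.gauss(0.0005, 0.013) for _ in range(1500)]
print(historical_var(sample), normal_var(sample))
```

For normally generated data the two estimates almost coincide; on real fat-tailed returns, the normal (RiskMetrics-style) estimate typically falls below the empirical quantile, which is the underestimation problem discussed in Part 2.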

3.2 Extreme Value Theory:

In addition to the traditional approaches, the EVT approach estimates VaR in a different way. Instead of modeling the entire distribution, EVT focuses only on the return data that carry information about extreme behavior. There are four main steps. Firstly, the extreme returns are extracted. Secondly, the distribution that best captures these data is determined. Thirdly, the parameter values are estimated. In the final step, VaR estimates can be calculated at a given confidence level.

There are two principal ways of identifying the extremes: the classical Block Maxima approach and the more modern Peaks over Threshold (POT). In the POT method, a threshold $u$ is first fixed and only the exceedances $y$ over $u$ are considered. According to Hull (2010), $u$ must be sufficiently high to allow a true investigation of the shape of the tail, but also sufficiently low so that an adequate number of data points is included. In this paper, we choose $u$ approximately equal to the 95th percentile of the empirical distribution. Let $F(y)$ denote the distribution function of the returns $Y$; then the cumulative distribution of the exceedances over $u$ is

$$F_u(y) = P\left(Y - u \le y \mid Y > u\right) = \frac{F(y+u) - F(u)}{1 - F(u)}, \qquad y \ge 0. \tag{4}$$

The behavior of the threshold exceedances is described by the Generalized Pareto distribution (GPD), given by

$$G_{\xi,\sigma}(y) = \begin{cases} 1 - \left(1 + \dfrac{\xi y}{\sigma}\right)^{-1/\xi}, & \xi \ne 0, \\[4pt] 1 - \exp\left(-\dfrac{y}{\sigma}\right), & \xi = 0, \end{cases} \tag{5}$$

where $\xi$ is the tail index: $\xi > 0$ corresponds to a heavy-tailed distribution, $\xi < 0$ to a short-tailed distribution, and $\xi = 0$ to an exponentially decaying tail, such as the normal and lognormal; the parameter $\sigma$ is a scale parameter. If we denote the number of exceedances by $N_u$ and assume the exceedances are independently and identically distributed (i.i.d.) with a GPD distribution, then the maximum likelihood estimators of $\sigma$ and $\xi$ are consistent as $N_u \to \infty$, provided $\xi > -0.5$. If we define $\bar{F}_u(y) = 1 - F_u(y)$ and $\bar{F}(x) = 1 - F(x)$, then from (4) we get

$$\bar{F}(u + y) = \bar{F}(u)\,\bar{F}_u(y). \tag{6}$$

In (6), we substitute $\bar{F}(u+y) = 1 - p$, where $p$ is the confidence level; $\bar{F}(u) = N_u / n$, the proportion of the data in the tail; and $\bar{F}_u(\mathrm{VaR} - u) = 1 - G_{\xi,\sigma}(\mathrm{VaR} - u)$, where $G_{\xi,\sigma}$ denotes the GPD distribution with parameters $\sigma$ and $\xi$. Solving for VaR then gives the $p$-quantile VaR based on POT:

$$\mathrm{VaR}_p = u + \frac{\hat{\sigma}}{\hat{\xi}}\left[\left(\frac{n}{N_u}(1-p)\right)^{-\hat{\xi}} - 1\right]. \tag{7}$$
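The POT VaR formula (7) translates directly into code. The helper below is a hypothetical sketch of our own; the numerical inputs are made-up values merely in the spirit of Tables 2-3, not the thesis's actual estimates.

```python
def pot_var(u, sigma_hat, xi_hat, n, n_u, p):
    """Equation (7): VaR_p = u + (sigma/xi) * [((n/N_u)*(1-p))^(-xi) - 1].

    u                  : threshold (about the 95th percentile of losses)
    sigma_hat, xi_hat  : GPD scale and shape estimates
    n                  : total number of observations
    n_u                : number of exceedances over u
    p                  : confidence level, e.g. 0.99
    """
    return u + (sigma_hat / xi_hat) * (((n / n_u) * (1 - p)) ** (-xi_hat) - 1)

# illustrative (assumed) inputs: FTSE-like threshold, 79 exceedances out of 1565
print(round(pot_var(u=0.0152, sigma_hat=0.008, xi_hat=0.1,
                    n=1565, n_u=79, p=0.99), 4))  # -> 0.0293
```

Note how the VaR grows with the confidence level $p$ and with the tail index $\xi$: a heavier estimated tail pushes the extreme quantile further out.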

3.3 Parameter Estimation: Maximum Likelihood:

Having selected the distribution, the main challenge in EVT is the estimation of the parameters of the GPD, namely the scale $\sigma$ (also written $\beta$ below) and the shape $\xi$. Several methods can be used to estimate these two parameters, such as probability-weighted moments, regression analysis and maximum likelihood. In this paper, maximum likelihood is applied. The main idea is to find the parameter values that maximize the likelihood by setting the first-order conditions to zero. Differentiating the distribution function (5) with respect to $y$ yields the GPD density, and the corresponding log-likelihood function is

$$\ln L(\xi, \sigma) = -N_u \ln \sigma - \left(1 + \frac{1}{\xi}\right) \sum_{i=1}^{N_u} \ln\left(1 + \frac{\xi v_i}{\sigma}\right), \tag{8}$$

where $v_i$ is the $i$th exceedance. After maximizing the likelihood, the estimated parameters of the extreme value distribution can be used to calculate VaR at different confidence intervals.
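As a hedged sketch of this estimation step (our own code, not the thesis's implementation), the snippet below simulates exceedances from a GPD with assumed true parameters and maximizes the log-likelihood of equation (8). A crude grid search stands in for a proper numerical optimizer, which keeps the example self-contained.

```python
import math
import random

def gpd_loglik(exceedances, xi, sigma):
    """Log-likelihood (8) for the GPD with xi != 0."""
    total = 0.0
    for v in exceedances:
        t = 1 + xi * v / sigma
        if t <= 0:
            return float("-inf")  # parameters outside the support
        total += math.log(t)
    return -len(exceedances) * math.log(sigma) - (1 + 1 / xi) * total

random.seed(1)
true_xi, true_sigma = 0.2, 1.0
# inverse-transform sampling from the GPD cdf in (5)
sample = [true_sigma / true_xi * ((1 - random.random()) ** -true_xi - 1)
          for _ in range(2000)]

# crude grid search over (xi, sigma) instead of a full optimizer
best = max(((gpd_loglik(sample, xi, sig), xi, sig)
            for xi in [i / 100 for i in range(2, 60, 2)]
            for sig in [j / 100 for j in range(60, 160, 5)]),
           key=lambda t: t[0])
print("xi_hat =", best[1], "sigma_hat =", best[2])
```

With 2,000 simulated exceedances the grid maximizer lands close to the true $(\xi, \sigma)$, illustrating the consistency property mentioned above.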

An alternative approach to estimating the tail of the distribution is the Block Maxima method, in which the data are divided into $N$ blocks of $n$ observations each, and the extreme is the maximum or minimum within each block. In this study, the POT method is applied.

3.4 Back-testing:

VaR is only useful if its predictions are sufficiently precise to capture future price movements. Back-testing is one way of verifying how accurately VaR is measured. The basic idea is that, if the method is valid, the percentage of violations (losses exceeding VaR) during the back-testing period should be equal to or less than that implied by the confidence level. One model for back-testing is the binomial distribution (Hull, 2010). The probability of $m$ or more exceptions in $n$ days is

$$P(X \ge m) = \sum_{k=m}^{n} \frac{n!}{k!\,(n-k)!}\, p^{k} (1-p)^{n-k}, \tag{9}$$

where $p$ is the theoretical probability of an exception. This can also be calculated using the BINOMDIST function in Excel. If the probability of the VaR level being exceeded on $m$ or more days is less than the chosen significance level, the hypothesis that the probability of an exception is $p$ is rejected.
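The binomial tail probability of equation (9) is easy to reproduce outside Excel. The sketch below is an illustration of our own, with made-up example numbers (15 exceptions in 250 trading days against a 99% VaR); `math.comb` computes the factorial ratio in (9).

```python
import math

def binom_tail(m, n, p):
    """Equation (9): probability of m or more exceptions in n days
    when the true exception probability is p."""
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(m, n + 1))

# hypothetical back-test: 15 exceptions in 250 days, 99% VaR (p = 0.01)
tail = binom_tail(15, 250, 0.01)
print(tail)  # far below a 5% significance level, so the model is rejected
```

Since only about 2.5 exceptions would be expected, the tail probability is tiny and the VaR model in this hypothetical case would be rejected.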


PART4: Data Description and Normality Test

The dynamics of financial markets in emerging countries show substantial differences compared to developed economies: these markets experience larger volatility. However, since mature markets invest a considerable proportion of their savings in emerging markets through hedge funds and mutual funds, the dynamics of emerging financial markets affect developed-market returns to a large extent. Hence, exploring the return distributions in those markets would benefit investors in both mature and emerging economies. The primary data used in this paper are stock indices from Thomson DataStream. The empirical analysis consists of 2,348 daily returns on four indices: FTSE 100, DAX, Shanghai Stock Exchange Composite (SSE Composite) and BOVESPA. Daily stock returns are expressed as natural log returns. The total analysis period, from 2001 through 2010, is used to estimate the various parameters. Table 1 contains descriptive statistics of the four indices' daily returns for the entire period as well as for two sub-periods. The stable period (sub-period 1), used to generate the VaR estimates, ranges from Dec 27, 2001 to Dec 27, 2007. The turmoil period (sub-period 2), ranging from Dec 28, 2007 to Dec 28, 2010, is reserved for back-testing the VaR estimated by the alternative models. Table 1 indicates that the daily returns of these four indices over the 9-year period are not normally distributed. In all cases, skewness is evident, kurtosis is greater than 3 and the Jarque-Bera statistics are highly significant. However, there are some differences between countries. Germany stands apart from the rest with a positive skewness of 0.1092, while the UK, China and Brazil all have negatively skewed distributions.

In general, the descriptive statistics show that the distributions of daily returns differed across the two sub-periods examined. Unsurprisingly, the mean daily return of each index in the second period is more negative than in the first period; the second sub-period contains significant international and domestic economic events, which affected the stock markets.

Additionally, the QQ plot is a tool for comparing the empirical distribution with a theoretical distribution (e.g., the normal distribution). If the distributions agree, the QQ plot should lie on the 45-degree line. As can be seen from Figure 1, the QQ plots of these four indices lie below the 45-degree line for large negative values and above the line for large positive values, which corresponds to fat tails. Another technique for testing normality is the Jarque-Bera test; as shown in Table 1, it indicates the same result. The presence of fat tails indicates that the assumption of a normal distribution, as in the variance-covariance method of calculating VaR, would underestimate the VaR, particularly at confidence levels greater than 95%.
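The Jarque-Bera statistic reported in Table 1 can be computed from the sample moments. The following sketch is our own illustration on synthetic data (the function name and the simulated series are assumptions); it uses the standard formula JB = n/6 * (S^2 + (K - 3)^2 / 4), where S is sample skewness and K sample kurtosis.

```python
import math
import random

def jarque_bera(returns):
    """JB = n/6 * (S^2 + (K - 3)^2 / 4) from the 2nd-4th central moments."""
    n = len(returns)
    mu = sum(returns) / n
    m2 = sum((r - mu) ** 2 for r in returns) / n
    m3 = sum((r - mu) ** 3 for r in returns) / n
    m4 = sum((r - mu) ** 4 for r in returns) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2          # kurtosis, equal to 3 under normality
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

random.seed(2)
normal_like = [random.gauss(0, 0.013) for _ in range(2000)]
print(jarque_bera(normal_like))  # small: consistent with normality
```

Under normality JB is approximately chi-squared with 2 degrees of freedom, so values in the thousands, as in Table 1, reject normality overwhelmingly.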


Table 1: Descriptive statistics for four stock indices' daily returns

                      FTSE100                                          DAX
               Whole period   Subperiod 1    Subperiod 2    Whole period   Subperiod 1    Subperiod 2
               27/12/2001-    27/12/2001-    28/12/2007-    27/12/2001-    27/12/2001-    28/12/2007-
               28/12/2010     27/12/2007     28/12/2010     28/12/2010     27/12/2007     28/12/2010
Mean           0.006%         0.014%         -0.010%        0.013%         0.029%         -0.019%
Median         0              0.0002         0              0.0005         0.0007         0.0002
Maximum        0.0938         0.059          0.0938         0.108          0.0755         0.108
Minimum        -0.0927        -0.0559        -0.0927        -0.0743        -0.0634        -0.0743
Std. Dev.      0.013133       0.010676       0.01703        0.016088       0.01493        0.018201
Skewness       -0.11157       -0.208302      -0.034535      0.109198       -0.023842      0.270736
Kurtosis       10.3119        7.510619       8.644538       8.167873       6.70365        8.932621
Jarque-Bera    5235.414       1338.025       1038.287       2617.493       894.6125       1156.356
Probability    0              0              0              0              0              0
Sum            0.1408         0.2198         -0.0758        0.3081         0.4508         -0.1463
Sum Sq. Dev.   0.404795       0.178246       0.226509       0.607482       0.348631       0.258721
Observations   2348           1565           783            2348           1565           783


Notes: This table includes descriptive statistics for the FTSE100, DAX, SSE and BOVESPA index daily returns over the period 27/12/2001 to 28/12/2010 and two sub-periods: 27/12/2001 to 27/12/2007 and 28/12/2007 to 28/12/2010. Std. Dev. indicates the standard deviation. Jarque-Bera denotes the Jarque-Bera test, which examines the hypothesis that daily returns are normally distributed.

                      SSE                                              BOVESPA
               Whole period   Subperiod 1    Subperiod 2    Whole period   Subperiod 1    Subperiod 2
               27/12/2001-    27/12/2001-    28/12/2007-    27/12/2001-    27/12/2001-    28/12/2007-
               28/12/2010     27/12/2007     28/12/2010     28/12/2010     27/12/2007     28/12/2010
Mean           0.021%         0.075%         -0.084%        0.068%         0.098%         0.008%
Median         0              0              0              0.0004         0.0006         5.00E-05
Maximum        0.0903         0.0885         0.0903         0.1368         0.0616         0.1368
Minimum        -0.0926        -0.0926        -0.0804        -0.121         -0.0686        -0.121
Std. Dev.      0.017094       0.01479        0.020934       0.018969       0.016702       0.022851
Skewness       -0.174871      -0.091312      -0.148056      -0.091646      -0.28792       0.104315
Kurtosis       6.8032         7.483643       5.322049       7.828779       3.852423       8.995334
Jarque-Bera    1427.668       1313.062       178.5434       2284.477       69.00454       1172.595
Probability    0              0              0              0              0              0
Sum            0.4946         1.1769         -0.6557        1.5978         1.5322         0.0638
Sum Sq. Dev.   0.686104       0.342123       0.34226        0.844516       0.436269       0.407826
Observations   2348           1565           783            2348           1565           783


Figure 1. QQ plots of daily returns for FTSE100, DAX, SSE and BOVESPA (normal quantiles plotted against theoretical quantiles).


PART5. Empirical Analysis:

5.1 The extreme value approach

To analyze the behavior of extreme returns in the stock markets of the UK, Germany, China and Brazil, we implement the Peaks over Threshold (POT) model. As previously mentioned, this first requires estimating the threshold, μ. We follow Hull (2010), according to whom μ is set approximately equal to the 95th percentile of the empirical

distribution. This implies that the exceedances over the threshold belong to the 5% tails; in our case the thresholds have been estimated to be 0.0152, 0.0210, 0.0238 and 0.0550 (in absolute values) for the FTSE100, DAX, SSE and BOVESPA indices, respectively. The estimated thresholds are displayed in Table 2.
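The threshold rule just described is straightforward to mechanize. The sketch below (illustrative Python; the function name is ours, not the thesis's) negates returns into losses, sets μ at the 95th percentile, and collects the excesses over it:

```python
import numpy as np

def pot_exceedances(returns, tail_fraction=0.05):
    """Select the POT threshold as the (1 - tail_fraction) quantile of the
    loss distribution and return the threshold and the excesses over it."""
    losses = -np.asarray(returns, dtype=float)    # losses are negated returns
    u = np.quantile(losses, 1.0 - tail_fraction)  # threshold mu (95th pct here)
    excesses = losses[losses > u] - u             # amounts by which losses exceed u
    return u, excesses
```

Applied to roughly 1,565 daily returns per index, this leaves about 5% of the sample in the tail, in line with the 79 exceedances per index reported in Table 2 (exact counts depend on the quantile convention used).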

After determining the subsamples, the parameters of the extreme value distribution are estimated using the maximum-likelihood method. Table 3 presents the estimates of the scale parameter, β, and the shape parameter, ξ, for the lower tails of the return distributions. The scale parameter β differs across countries; the larger the scale parameter, the more spread out the distribution. As shown in Table 3, China has the largest scale parameter (0.0669).

Table 2.

                        FTSE 100    DAX       SSE       BOVESPA
Threshold μ             0.0152      0.0210    0.0238    0.0550
Number of exceedances   79          79        79        79


The second parameter is the tail index, ξ: in general, the higher its value, the heavier the tail and the higher the quantile estimates we derive. Brazil has the only positive tail index (0.0396), which indicates the heaviest distribution tail and the highest market volatility among the four countries. The tail indices of the FTSE100, DAX and SSE are all negative. This finding is at variance with our earlier evidence on these return series (the QQ plots and Jarque-Bera tests), which shows obvious signs of heavy tails. This unexpected result may have several causes, such as an inappropriate choice of the threshold μ, since the estimates of β and ξ depend on the choice of μ. Another possible cause is sample selection bias: different choices of sample length and period affect the parameter estimates. As the VaR predictions based on these numbers do not seem unreasonable, we nevertheless use the parameters shown in Table 3 for further investigation.

Notes: β indicates the scale parameter; ξ represents the shape parameter (or tail index). All results are obtained with the Excel Solver function.
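The thesis obtains these estimates with Excel Solver; an equivalent sketch in Python (our own helper names, assuming SciPy is available) minimizes the same GPD negative log-likelihood over the excesses:

```python
import numpy as np
from scipy.optimize import minimize

def gpd_negloglik(params, excesses):
    """Negative log-likelihood of the Generalized Pareto Distribution with
    scale beta and shape xi, evaluated on excesses over the threshold
    (valid for xi != 0)."""
    beta, xi = params
    y = np.asarray(excesses, dtype=float)
    if beta <= 0:
        return np.inf                      # outside the parameter space
    z = 1.0 + xi * y / beta
    if np.any(z <= 0):
        return np.inf                      # observation outside the GPD support
    return len(y) * np.log(beta) + (1.0 + 1.0 / xi) * np.sum(np.log(z))

def fit_gpd(excesses):
    """Maximum-likelihood fit of (beta, xi), analogous to the Solver step."""
    res = minimize(gpd_negloglik, x0=[np.std(excesses), 0.1],
                   args=(excesses,), method="Nelder-Mead")
    return res.x  # (beta_hat, xi_hat)
```

Running `fit_gpd` on the 79 excesses per index would reproduce estimates of the kind shown in Table 3, up to numerical differences between optimizers.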

After determining the threshold μ and estimating β and ξ, we can derive the VaR estimates

Table 3. Maximum Likelihood Estimates (MLE) of the parameters of the Generalized Pareto Distribution (GPD)

        FTSE 100    DAX        SSE        BOVESPA
β       0.0105      0.0091     0.0669     0.0347
ξ       -0.1820     -0.3098    -0.5149    0.0396


corresponding to six different confidence levels (90%, 95%, 97.5%, 99%, 99.5% and 99.9%). The results are then compared with those generated by parametric and non-parametric models. Parametric models include the delta-normal model, GARCH(1,1) and the weighted moving average; in this paper, only the delta-normal model with an unconditional normal distribution is used. Non-parametric models include historical simulation and Monte Carlo simulation.
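The POT quantile formula used for this step (see e.g. Hull, 2010) is VaR_q = μ + (β/ξ)[(n/N_u (1-q))^(-ξ) - 1], where n is the estimation-sample size and N_u the number of exceedances. A sketch with the FTSE 100 inputs from Tables 2 and 3 follows; note that n = 1,565 (the pre-crisis estimation sample) and N_u = 79 are our assumptions about the estimation window, not figures the thesis states at this point:

```python
def pot_var(q, u, beta, xi, n, n_u):
    """POT Value-at-Risk at confidence level q, as a positive loss fraction:
    VaR_q = u + (beta / xi) * ((n / n_u * (1 - q)) ** (-xi) - 1)."""
    return u + (beta / xi) * ((n / n_u * (1.0 - q)) ** (-xi) - 1.0)

# FTSE 100: threshold and GPD parameters from Tables 2 and 3;
# estimation sample of 1,565 observations with 79 exceedances (assumed).
var_99 = pot_var(0.99, u=0.0152, beta=0.0105, xi=-0.1820, n=1565, n_u=79)
```

This yields a daily VaR of roughly 3.0% of portfolio value, in line with the 3.03 reported for EVT-POT at the 99% level in Table 4; the small gap comes from rounding of the parameters.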

5.2 VaR values based on traditional methods and the EVT approach.

In all markets, RM and MC generate relatively higher VaR values than HS and EVT-POT. At smaller confidence levels, take 90% as an example: the VaR loss under the HS method in the UK is 1.07, while applying RM and MC leads to figures of 1.69 and 1.66, respectively; the value predicted by EVT-POT is only 0.81. At larger confidence levels (99.9%), the VaR estimates based on EVT-POT are evidently greater than those obtained from the HS, RM and MC models: 4.43, 3.99 and 3.89 for HS, RM and MC, respectively, vs. 4.49 for EVT-POT. This clearly justifies the advantage of the Generalized Pareto distribution (GPD) at high confidence levels (such as 99.5% and 99.9%).

As we can see from Table 4, HS yields the figures most similar to EVT-POT, compared with the other two traditional methods. However, the EVT-POT estimates are larger at most confidence levels and in most financial markets, which


suggests that HS understates the risk relative to EVT-POT. This is most evident for the UK (95%, 97.5%, 99% and 99.9%), Germany (95%, 97.5% and 99.9%), China (95%, 97.5% and 99%) and Brazil (97.5%). It indicates that VaR estimates that depend on a discrete sample distribution may produce over- or underestimates: HS cannot generate reliable VaR estimates out of sample, whereas EVT-POT provides more accurate market risk predictions.
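The gap between a discrete empirical quantile and a fitted normal quantile is the mechanism behind this comparison. A hedged sketch of the two simplest estimators (illustrative only; the thesis's own implementations are described in Part 3):

```python
import numpy as np
from scipy.stats import norm

def hs_var(returns, q):
    """Historical-simulation VaR: the empirical (1 - q) quantile of the
    return sample, reported as a positive loss."""
    return -float(np.quantile(returns, 1.0 - q))

def delta_normal_var(returns, q):
    """Delta-normal VaR: fit mean and standard deviation, then take the
    corresponding normal quantile."""
    mu = float(np.mean(returns))
    sigma = float(np.std(returns, ddof=1))
    return -(mu + norm.ppf(1.0 - q) * sigma)
```

With fat-tailed returns, the empirical 99.9% quantile lies beyond the normal one, which is one reason HS and EVT-POT exceed the normal-based estimates at the highest confidence levels in Table 4.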

5.3 Back-testing results for VaR models.

A back-testing procedure is performed for HS, RM, MC and EVT-POT at six different confidence levels (90%, 95%, 97.5%, 99%, 99.5% and 99.9%). There are 783 daily returns in the evaluation period, which ranges from December 28, 2007 to December 28, 2010, for all four countries. A violation occurs when the loss is greater than the VaR estimate. The failure rate is then evaluated against the Binomial distribution at the 95% significance level. If the test statistic is greater than α (in this case, α = 1 - p, where

p is the confidence level), the null hypothesis that the model is correct is rejected; otherwise, the model is considered sufficiently precise.
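This decision rule can be mechanized as an unconditional-coverage binomial test. The sketch below is a simplified stand-in for the thesis's procedure (it uses a one-sided binomial p-value rather than the exact statistic tabulated in the Appendix):

```python
import numpy as np
from scipy.stats import binom

def backtest_var(returns, var_forecasts, q, alpha=0.05):
    """Count violations (loss > VaR forecast) and test coverage: under H0
    the violation count X ~ Binomial(n, 1 - q); reject the model when
    observing X or more violations is too unlikely (p-value < alpha)."""
    losses = -np.asarray(returns, dtype=float)
    var_forecasts = np.asarray(var_forecasts, dtype=float)
    violations = int(np.sum(losses > var_forecasts))
    n = len(losses)
    p_value = float(binom.sf(violations - 1, n, 1.0 - q))  # P(X >= violations)
    return violations, p_value, p_value < alpha
```

At the 99% level with 783 observations, about 7.8 violations are expected; a count such as the 32 recorded for the delta-normal model on the FTSE100 in Table 5 therefore produces a vanishing p-value and a clear rejection.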

The reason for choosing December 28, 2007 as the back-testing start date is that the global market appears to have been affected by the financial crisis and began to decline at the end of December 2007. It is worth noting that another sample (based on the Lehman Brothers bankruptcy) was also tested during the research: subperiod 1 from December


27, 2001 to September 15, 2008 for parameter estimation and subperiod 2 from September 16, 2008 to September 16, 2009 for back-testing. The back-testing results show that all four VaR models are inadequate for forecasting market risk over this period; for example, at the 99% confidence level all VaR models are rejected for all four stock indices. This finding is reasonable, as the capital markets fell significantly: the VaR estimates are too low to capture the huge losses incurred, and in turn a large number of violations are generated. As these results do not allow a clear comparison of the predictive performance of the different models, we do not report or analyze them in this paper.

Table 5 reports the number of violations and the binomial test results for the four indices. As anticipated, the number of violations is greatest at the 90% confidence level, followed by the higher levels. It is interesting to note that EVT-POT does not show any superiority at the lowest confidence level: as can be seen from Table 5, at 90% the number of exceedances derived from EVT-POT is among the largest in all four financial markets, especially for the SSE (170), the largest count across all the models. At the same time, the delta-normal model clearly outperforms the other three, producing the smallest number of exceedances. However, the advantages of EVT-POT appear gradually as the confidence level increases: at the 95% confidence level, EVT-POT already has the lowest number of violations. Moreover, at the 97.5% confidence level the null hypothesis is rejected for all models except HS for the DAX and EVT-POT for both the DAX and the SSE. The case of the higher confidence levels (99%, 99.5%


and 99.9%) reports similar results: the models based on the empirical distribution and on the GPD provide better VaR estimates than a measure of risk based on the normal distribution. This phenomenon is easily explained: normal-distribution-based VaR models tend to underestimate left-tail risk at higher confidence levels, especially in emerging markets, which have experienced significant market movements due to economic booms. It can, however, also be read the other way: VaR models based on the GPD may overestimate the real risk for short trading positions. From the perspective of risk management, EVT may therefore not be preferred by banks and financial institutions for internal modeling of market risk, as considerable capital is required to meet legal prerequisites (such as the revised Basle requirements). According to Ho et al. (2000), since Basle requires three times the VaR (at the 99% confidence level) estimated by a bank's internal models, banks have little motivation to use the EVT approach for this purpose. Despite this tendency to overestimate, it is still necessary to emphasize the importance of extreme value theory as a useful tool for evaluating tail-related risk under extreme market movements.



Notes: This table presents VaR values using four methods: HS (Historical Simulation), RM (RiskMetrics), MC (Monte Carlo Simulation) and EVT-POT (Extreme Value Theory, Peaks over Threshold). We estimate the VaR for four markets: the UK, Germany, China and Brazil.

2 Confidence levels of 96%, 97%, 98% and 99.99% were also used for VaR estimation and back-testing. As the levels above already show sufficiently conservative results, they are not discussed separately.

Table 4.

UK (FTSE 100)
Level     HS      RM      MC      EVT-POT
90.00%    1.07    1.69    1.66    0.81
95.00%    1.52    2.17    2.35    1.58
97.50%    2.16    2.59    2.83    2.26
99.00%    3.01    3.08    3.20    3.03
99.50%    3.67    3.42    3.51    3.53
99.90%    4.43    3.99    3.89    4.49

Germany (DAX)
Level     HS      RM      MC      EVT-POT
90.00%    1.45    2.96    2.89    1.40
95.00%    2.10    3.81    3.88    2.10
97.50%    2.61    4.55    4.89    2.67
99.00%    3.37    5.42    5.80    3.25
99.50%    3.63    6.01    5.93    3.60
99.90%    4.09    7.01    7.48    4.16

China (SSE)
Level     HS      RM      MC      EVT-POT
90.00%    1.46    5.89    6.40    -3.10
95.00%    2.38    7.64    8.00    2.44
97.50%    5.14    9.15    9.34    6.32
99.00%    8.77    10.93   10.56   9.72
99.50%    12.35   12.13   12.26   11.42
99.90%    17.17   14.19   16.97   13.64

Brazil (BOVESPA)
Level     HS      RM      MC      EVT-POT
90.00%    3.34    9.46    11.31   3.16
95.00%    5.50    12.28   13.31   5.44
97.50%    7.74    14.72   14.91   7.78
99.00%    12.10   17.59   17.95   10.97
99.50%    13.94   19.52   19.36   13.47
99.90%    19.58   22.85   24.28   19.53


Notes: This table shows the number of exceedances and the back-testing results at the 95% significance level. The back-testing period is from December 28, 2007 to December 28, 2010 (783 observations). Four models are tested, namely delta normal, historical simulation, Monte Carlo and EVT-POT, using four stock indices (FTSE100, DAX, SSE and BOVESPA). Shaded numbers indicate significance at the 95% level (the model is not rejected); otherwise, the model is insignificant and rejected.

Table 5.

Model                   Index      90%    95%    97.5%   99%    99.50%  99.90%
Delta Normal            FTSE100    93     68     46      32     20      17
                        DAX        68     43     31      21     19      14
                        SSE        105    74     55      40     32      17
                        BOVESPA    86     62     40      24     19      14
Historical Simulation   FTSE100    120    75     38      17     11      5
                        DAX        98     47     21      12     7       3
                        SSE        120    84     45      18     11      0
                        BOVESPA    91     59     32      17     10      6
Monte Carlo             FTSE100    115    80     72      52     32      23
                        DAX        72     49     36      27     22      18
                        SSE        131    87     70      50     41      28
                        BOVESPA    91     68     49      24     23      14
EVT-POT                 FTSE100    115    56     30      19     13      6
                        DAX        84     42     27      19     16      7
                        SSE        170    61     23      12     9       5
                        BOVESPA    89     59     40      19     9       5


PART6. Concluding Remarks:

The recent worldwide financial crisis highlighted the need for proper measurement of financial risk. The present study contributes to the risk management literature by providing a comparison of various Value-at-Risk (VaR) models (Historical Simulation, RiskMetrics, Monte Carlo Simulation and Extreme Value Theory). The traditional methods of determining VaR focus on the entire return distribution. However, these models cannot always cope with longer periods of strong financial instability and fail to estimate the extraordinary losses that arise in times of turbulence, such as the Asian financial crisis of 1997 and the recent global downturn. In response to these restrictions, extreme value theory provides models that capture the statistical behavior of extreme values, allowing a more precise estimate of market risk.

This paper has demonstrated that EVT provides several benefits in terms of more accurate forecasting of potential losses at large confidence levels. In our findings this holds for both the emerging and the mature markets. The empirical analysis conducted in this paper can be summarized as follows. Firstly, the stock markets of the four selected countries (the UK, Germany, China and Brazil) have been volatile and are all characterized by fat tails. This can be explained by the recent financial crisis spreading across the world, and is supported by the QQ plots and the Jarque-Bera statistics. However, the negative tail index ξ of the Generalized Pareto distribution (GPD) is not consistent with this heavy-tailed character (ξ is negative in our


findings except for Brazil). This points to one of the limitations of this paper: sample selection bias. Since we only include nine years of daily stock prices (from December 27, 2001 to December 28, 2010), our results are flawed to some extent, even though this period is believed to adequately capture the market movements before, during and after the 2008 financial crisis. In spite of the selection bias, the tail index still conveys valuable information: ξ takes different values in different countries, reflecting different behavior of market returns and different levels of risk. Brazil has the largest tail index and is therefore believed to be the most volatile of the four selected countries, followed by the UK, Germany and China. Secondly, based on the back-testing results, the GPD does not offer significant benefits at low confidence levels (90% and 95%). This is reasonable, since EVT mainly focuses on the extreme quantiles of the distribution rather than its central part. However, EVT clearly outperforms the other VaR methods from the 97.5% confidence level onwards. This finding shows that VaR estimates based on EVT provide more precise values than the traditional methods, and that the Generalized Pareto distribution (GPD) can better capture the magnitude of extreme movements in the return distribution. On the other hand, the only method that can compete with EVT at high confidence levels is HS. However, HS-based VaR requires a large amount of input data at high confidence levels, which is one of its main constraints. It is worth noting that all the back-testing results mentioned above are obtained for the period from December 28, 2007 to December 28, 2010. The reason for choosing this start date is that the global market appears to have been affected by the financial crisis and began to decline at the end of


December 2007. However, a back-testing period selected on other criteria, such as starting from September 16, 2008, the day the Lehman Brothers bankruptcy took effect in the markets, might generate more reasonable results. This is another potential selection bias in this paper.

To sum up, VaR estimates based on EVT can be used effectively to measure market risk, as they allow investors to have a better sense of their positions under uncertain market movements in both mature and emerging markets. However, like any other model, EVT has its drawbacks; the first is that it cannot model VaR dynamically in a stochastic-volatility or regime-switching environment. Future research could put more emphasis on cross-sectional data and model widely diversified market portfolios. Additionally, VaR should also be assessed over longer time horizons and with a wider sample of financial markets. The problems mentioned above, a growing awareness of risk, and the higher requirements of the Basel Committee's risk measurement and regulatory frameworks are believed to offer further incentives for the future development of the Value-at-Risk methodology.


PART7. Appendix:

Table A1. HS back-testing results (# of days in back-testing: 783)

P                         90.00%   95.00%   96.00%   97.00%   97.50%   99.00%   99.50%   99.90%   99.95%   99.99%
Expected # of violations  78.3     39.15    31.32    23.49    19.575   7.83     3.915    0.783    0.3915   0.0783

FTSE100
# of violations           120      75       63       46       38       17       11       5        3        2
Test of the model (95%)   1.0000   1.0000   0.9999   0.8841   0.4673   0.0000   0.0000   0.0000   0.0000   0.0000
Result: reject at 90%-97.5%; do not reject at 99%-99.99%.

DAX
# of violations           98       47       38       26       21       12       7        3        2        1
Test of the model (95%)   1.0000   0.9116   0.4673   0.0150   0.0009   0.0000   0.0000   0.0000   0.0000   0.0000
Result: reject at 90%-96%; do not reject at 97%-99.99%.

SSE
# of violations           120      84       67       56       45       18       11       0        0        0
Test of the model (95%)   1.0000   1.0000   1.0000   0.9965   0.8509   0.0001   0.0000   0.0000   0.0000   0.0000
Result: reject at 90%-97.5%; do not reject at 99%-99.99%.

BOVESPA
# of violations           91       59       48       37       32       17       10       6        4        3
Test of the model (95%)   1.0000   0.9991   0.9337   0.4023   0.1364   0.0000   0.0000   0.0000   0.0000   0.0000
Result: reject at 90%-97.5%; do not reject at 99%-99.99%.


Table A2. RM back-testing results (# of days in back-testing: 783)

P                         90.00%   95.00%   96.00%   97.00%   97.50%   99.00%   99.50%   99.90%   99.95%   99.99%
Expected # of violations  78.3     39.15    31.32    23.49    19.575   7.83     3.915    0.783    0.3915   0.0783

FTSE100
# of violations           93       68       63       52       46       32       20       17       17       16
Test of the model (95%)   1.0000   1.0000   0.9999   0.9825   0.8841   0.1364   0.0004   0.0000   0.0000   0.0000
Result: reject at 90%-99%; do not reject at 99.5%-99.99%.

DAX
# of violations           68       43       38       34       31       21       19       14       13       11
Test of the model (95%)   1.0000   0.7659   0.4673   0.2259   0.1019   0.0009   0.0002   0.0000   0.0000   0.0000
Result: reject at 90%-97.5%; do not reject at 99%-99.99%.

SSE
# of violations           105      74       65       59       55       40       32       17       17       16
Test of the model (95%)   1.0000   1.0000   1.0000   0.9991   0.9947   0.5967   0.1364   0.0000   0.0000   0.0000
Result: reject at 90%-99.5%; do not reject at 99.9%-99.99%.

BOVESPA
# of violations           86       62       58       45       40       24       19       14       13       13
Test of the model (95%)   1.0000   0.9998   0.9986   0.8509   0.5967   0.0054   0.0002   0.0000   0.0000   0.0000
Result: reject at 90%-97.5%; do not reject at 99%-99.99%.


Table A3. MC back-testing results (# of days in back-testing: 783)

P                         90.00%   95.00%   96.00%   97.00%   97.50%   99.00%   99.50%   99.90%   99.95%   99.99%
Expected # of violations  78.3     39.15    31.32    23.49    19.575   7.83     3.915    0.783    0.3915   0.0783

FTSE100
# of violations           115      80       74       72       72       52       32       23       19       19
Test of the model (95%)   1.0000   1.0000   1.0000   1.0000   1.0000   0.9825   0.1364   0.0031   0.0002   0.0002
Result: reject at all levels except 99.95% (do not reject).

DAX
# of violations           72       49       44       37       36       27       22       18       17       17
Test of the model (95%)   1.0000   0.9512   0.8115   0.4023   0.3393   0.0234   0.0017   0.0001   0.0000   0.0000
Result: reject at 90%-99%; do not reject at 99.5%-99.99%.

SSE
# of violations           131      87       81       74       70       50       41       28       22       19
Test of the model (95%)   1.0000   1.0000   1.0000   1.0000   1.0000   0.9647   0.6577   0.0355   0.0017   0.0002
Result: reject at all levels.

BOVESPA
# of violations           91       68       60       51       49       24       23       14       12       12
Test of the model (95%)   1.0000   1.0000   0.9995   0.9749   0.9512   0.0054   0.0031   0.0000   0.0000   0.0000
Result: reject at 90%-97.5%; do not reject at 99%-99.99%.


Table A4. EVT back-testing results (# of days in back-testing: 783)

P                         90.00%   95.00%   96.00%   97.00%   97.50%   99.00%   99.50%   99.90%   99.95%   99.99%
Expected # of violations  78.3     39.15    31.32    23.49    19.575   7.83     3.915    0.783    0.3915   0.0783

FTSE100
# of violations           115      56       46       36       30       19       13       6        5        4
Test of the model (95%)   1.0000   0.9965   0.8841   0.3393   0.0739   0.0002   0.0000   0.0000   0.0000   0.0000
Result: reject at 90%-97.5%; do not reject at 99%-99.99%.

DAX
# of violations           84       42       35       29       27       19       16       7        7        6
Test of the model (95%)   1.0000   0.7145   0.2801   0.0520   0.0234   0.0002   0.0000   0.0000   0.0000   0.0000
Result: reject at 90%-97%; do not reject at 97.5%-99.99%.

SSE
# of violations           170      61       40       27       23       12       9        5        5        4
Test of the model (95%)   1.0000   0.9997   0.5967   0.0234   0.0031   0.0000   0.0000   0.0000   0.0000   0.0000
Result: reject at 90%-96%; do not reject at 97%-99.99%.

BOVESPA
# of violations           89       59       51       45       40       19       9        5        3        0
Test of the model (95%)   1.0000   0.9991   0.9749   0.8509   0.5967   0.0002   0.0000   0.0000   0.0000   0.0000
Result: reject at 90%-97.5%; do not reject at 99%-99.99%.


PART8. References:

Bali, T. G. (2003) An extreme value approach to estimating volatility and value at risk, Journal of Business, 76(1), pp. 83-108.

Beder, T. S. (1995) VaR: Seductive but Dangerous, Financial Analysts Journal, Sep-Oct, pp. 12-24.

Bekiros, S. D. and Georgoutsos, D. A. (2005) Estimation of Value-at-Risk by extreme value and conventional methods: a comparative evaluation of their predictive performance, Journal of International Financial Markets, Institutions and Money, 15, pp. 209-228.

Bollerslev, T. (1986) Generalized Autoregressive Conditional Heteroskedasticity, Journal of Econometrics, 31, pp. 307-327.

Boudoukh, J., Richardson, M. and Whitelaw, R. (1998) The best of both worlds, Risk, 11, pp. 64-67.

Chernozhukov, V. and Umantsev, L. (2000) Conditional Value-at-Risk: Aspects of modeling and estimation, Stanford University, preprint.

Consigli, G. (2002) Tail estimation and mean-VaR portfolio selection in markets subject to financial instability, Journal of Banking and Finance, 26(7), pp. 1355-1382.

Danielsson, J. and de Vries, C. (1997) Value-at-Risk and Extreme Returns, Discussion Paper No. 273, LSE Financial Markets Group.

Danielsson, J. and de Vries, C. (1998) Beyond the Sample: Extreme Quantile and Probability Estimation, London School of Economics, Discussion Paper 298.

Danielsson, J. and de Vries, C. G. (2000) Value-at-Risk and extreme returns, Annales d'Economie et de Statistique, 60, pp. 239-270.

Danielsson, J., Hartmann, P. and de Vries, C. (1998) The cost of conservatism, Risk, 11(1), pp. 101-103.

Danielsson, J. (2002) The emperor has no clothes: limits to risk modelling, Journal of Banking and Finance, 26, pp. 1273-1296.

Embrechts, P. (1999) Extreme value theory in finance and insurance, Department of Mathematics, ETH, Swiss Federal Technical University.

Embrechts, P. (2000) Extreme value theory: potentials and limitations as an integrated risk management tool, Derivatives Use, Trading and Regulation, 6, pp. 449-456.

Engle, R. F. (1982) Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation, Econometrica, 50, pp. 987-1007.

Gencay, R., Selcuk, F. and Ulugulyagci, A. (2003) High volatility, thick tails and extreme value theory in value-at-risk estimation, Insurance: Mathematics and Economics, 33, pp. 337-356.

Gourieroux, C. and Jasiak, J. (1998) Truncated Maximum Likelihood, Goodness of Fit Tests and Tail Analysis.

Ho, L., Burridge, P., Cadle, J. and Theobald, M. (2000) Value-at-Risk: Applying the extreme value theory approach to Asian markets in the recent financial turmoil, Pacific-Basin Finance Journal, 8, pp. 249-275.

Hols, M. and de Vries, C. (1991) The limiting distribution of extremal exchange rate returns, Journal of Applied Econometrics, 6(3), pp. 287-302.

Jackson, P., Maude, D. J. and Perraudin, W. (1997) Bank Capital and Value at Risk, Journal of Derivatives, 4, pp. 73-89.

Jorion, P. (1996) Risk²: Measuring the Risk in Value at Risk, Financial Analysts Journal, 52, pp. 47-56.

Jorion, P. (1997) Value at Risk: The New Benchmark for Controlling Derivatives Risk, Chicago: Irwin.

Linsmeier, T. J. and Pearson, N. D. (1996) Risk Measurement: An Introduction to Value at Risk, Office for Futures and Options Research Working Paper No. 96-04, University of Illinois.

Longin, F. M. (1996) The asymptotic distribution of extreme stock market returns, Journal of Business, 69(3), pp. 383-408.

Longin, F. M. (2000) From Value at Risk to stress testing: the Extreme Value approach, Journal of Banking and Finance, 24, pp. 1097-1130.

Manfredo, M. R. and Leuthold, R. M. (1999) Value-at-Risk Analysis: A Review and the Potential for Agricultural Applications, Review of Agricultural Economics.

Manganelli, S. and Engle, R. F. (2001) Value at Risk Models in Finance, European Central Bank, Working Paper No. 75.

McNeil, A. (1999) Extreme value theory for risk managers, Internal Modelling and CAD II, London: Risk Books, pp. 93-118.

McNeil, A. J. and Frey, R. (2000) Estimation of Tail-Related Risk Measures for Heteroscedastic Financial Time Series: an Extreme Value Approach, Journal of Empirical Finance, 7, pp. 271-300.

Pownall, R. A. J. and Koedijk, K. G. (1999) Capturing downside risk in financial markets: the case of the Asian Crisis, Journal of International Money and Finance, 18(6), pp. 853-870.

Tolikas, K., Koulakiotis, A. and Brown, R. A. (2007) Extreme risk and Value-at-Risk in the German stock market, The European Journal of Finance, 13(4), pp. 373-395.
