
Specialisation: Financial Econometrics

Robustness of Expected Shortfall in comparison with Value-at-Risk in a Filtered Historical Simulation framework

Author:

Gerdie Knijp, 10433481

Supervisor UvA:

Prof. dr. H. Peter Boswijk

Second marker:

Dr. Simon A. Broda

Supervisor Deloitte Financial Risk Management:

D.P. (Niek) Crezée, MSc FRM


First of all, I would like to thank my supervisor Peter Boswijk for his ideas and useful feedback throughout this research project. Moreover, I would like to thank my colleagues from Deloitte Financial Risk Management for their input and for making writing my thesis much more enjoyable. Besides working on my thesis, they gave me the opportunity to work on several projects and to learn a lot about the practical applications of econometrics within the financial industry. I really look forward to starting work there. Thanks to Florian Chilla, for helping me out with all my MATLAB and LaTeX problems. I am very grateful to Niek Crezée, who supervised me during this internship. He was of great help, had useful suggestions and always took time to read my work and to answer my questions, even during the night or during the weekends.

Furthermore, I would like to thank my friends. Without them it would have been impossible to successfully finish this study. Special thanks to Lisanne Cock, not only for her feedback on the presentation of this thesis but especially for making studying econometrics much more fun. Finally, I would like to thank my family for their endless support and encouragement throughout my study.


Contents

1 Introduction
1.1 Goal of the thesis
1.2 Outline of the thesis

2 Basel regulatory framework
2.1 Risk management
2.2 Regulation
2.2.1 Standardised approach
2.2.2 Internal model approach
2.3 Variability in risk weighted assets
2.4 From VaR to ES

3 Methodology
3.1 Market risk measures
3.2 Return series
3.2.1 ARMA models
3.2.2 Conditional volatility models
3.2.3 Parameter estimation
3.3 Estimation of risk measures
3.3.1 Historical simulation
3.3.2 Filtered historical simulation
3.4 Robustness
3.4.1 Quantitative robustness
3.4.2 Estimation error
3.4.3 Robustness under FHS
3.4.4 Regression analysis

4 Data description
4.1 Hypothetical portfolio
4.2 Risk factors
4.3 Descriptive statistics
4.4 Risk factor mappings

5 Model estimation

6 Empirical research
6.1 Sampling period
6.2 FHS vs. HS
6.2.1 Randomness in FHS
6.2.2 VaR and ES estimates
6.2.3 Stability over time
6.3 Robustness towards parameter modifications

7 Conclusions and recommendations

Appendix
A.1 Risk factor characteristics
A.2 Bloomberg Fair Value Curve


1 Introduction

1.1 Goal of the thesis

One of the objectives of the Basel Committee is ensuring consistency of risk-weighted asset (RWA) outcomes. Risk weighted assets are the banks’ exposures weighted by certain risk factors and they are used in the calculation of the capital ratio of a bank. Risky investments have higher weights than investments that are considered less risky. The riskier a portfolio, the higher the risk weighted assets outcomes will be and the more capital a bank should keep in order to meet the requirements. The consideration of risk weighted assets in the calculation of capital ratios is important since it makes sure that banks with different risk profiles have to meet different capital requirements.

Recently, several regulators (the IMF and the BIS) have published papers (Le Leslé and Avramova, 2012; Basel Committee on Banking Supervision, 2013b) showing that risk-weighted assets differ across countries and banks. These variations are not only due to different risk profiles or different supervisory rules; it is presumed that a significant part of the variability in risk-weighted assets is caused by banks' differing methodology choices. This affects market confidence and therefore there is a need for revision.

At the same time, the Basel Committee on Banking Supervision presents a number of propositions for a revision of the trading book, as it is recognized that the old framework has some shortcomings. Banks divide their activities into trading book activities and banking book activities, where the trading book refers to assets that are regularly traded, rather than traditional banking activities that are intended to be held to maturity. One of the considerations of the trading book reviews as described in Basel Committee on Banking Supervision (2013a) is the suggestion to use expected shortfall (ES) as an alternative market risk measure for value-at-risk (VaR). The propositions are now elaborated on in more detail and it seems most likely that VaR will indeed be replaced by ES.

VaR aims to measure the maximum loss of a portfolio over a certain time period with a given confidence level. It simply is a quantile of the loss distribution over the holding period and it has become a standard measure in market risk management, as it is the recommended market risk measure of Basel II. As the financial crisis exposed some shortcomings of the current validation methods and risk measures, some changes were made in the regulatory rules. Basel 2.5 now requires banks to additionally report stressed VaR, which is VaR applied to a historical period of significant financial stress. VaR is a simple and applicable risk measure but it also has its weaknesses, as widely discussed in the literature. ES measures the expected loss of a portfolio over a certain holding period, given that the loss has exceeded the VaR level. It aims to provide more information about extreme events and therefore it could theoretically be a more reliable market risk measure.

A lot of literature focuses on VaR and ES: the differences between these market risk measures, their advantages and disadvantages, and the optimal procedure to calculate them. Given the observed variability in market risk weighted assets (mRWA) measured under VaR, we consider it worthwhile to look into the assumptions of the models that calculate VaR and ES and into the sensitivity of the outcomes of the measures to those assumptions. Ideally, a good risk measure produces the same outcome, or at least the same measured risk, for two identical firms, even if those firms use different methodologies. Particularly now that ES is proposed as a replacement for VaR, we consider it worthwhile to investigate how this would affect the consistency of mRWAs among firms with similar risk profiles. Therefore, we need to take a closer look at the robustness of VaR and ES, where we define robustness as the sensitivity of market risk measures towards certain model choices or assumption changes in a model.

There are several methods for calculating ES and VaR of a portfolio where the variance-covariance method, the Monte Carlo simulation method and historical simulation (HS) are the most common. HS is a non-parametric method that simply calculates VaR and ES over the historical dataset whereas the variance-covariance method is a fully parametric method that makes assumptions on distributions of risk factors and models correlations between risk factors. The Monte Carlo method generates future risk factor scenarios by running simulations. These three methods and their advantages and disadvantages will be explained in more detail later on.

The methodology we use for calculating VaR and ES is called filtered historical simulation (FHS) which is a method widely acknowledged in the literature. It is a semi-parametric technique which uses bootstrapping and combines historical simulation with conditional volatility modelling. FHS generates scenarios of relevant risk factor returns using historical data on these risk factors. With these returns, assets that are contained in a portfolio can be priced and ES and VaR can be calculated. Within this framework, we analyse the robustness of ES and VaR by varying assumptions and looking into the stability over time.

We try to answer the following research question:

How robust is Expected Shortfall as a market risk measure in comparison with Value-at-Risk in a filtered historical simulation framework?

In order to investigate robustness we first construct a portfolio consisting of simple equity and fixed income products. Historical data on risk factors to which these products are exposed, such as equity indices, short- and long-term interest rates and short- and long-term credit spreads for different credit ratings will be collected from Bloomberg.

We first analyse the VaR and ES estimates for this portfolio calculated using FHS for different volatility models and compare them with simple HS. Furthermore, we look into stability over time and the choice of the sampling period. Robustness of the measures is investigated by means of the sensitivity of VaR and ES towards small parameter modifications. We vary certain parameters of the conditional volatility models and investigate the effect on VaR and ES using the theory of influence functions. We will try to link our conclusions to market risk management and Basel III implementation.


In short, this research assesses the robustness of ES as a market risk measure in comparison with VaR, by looking into the effects of assumption changes on the outcomes of risk measures. This is relevant since external risk measures, those that are used by regulators, must be unambiguous, stable and must have the ability to be implemented consistently by banks, no matter what internal models these banks use or what beliefs they have.

1.2 Outline of the thesis

Chapter 2 provides a short introduction to the Basel regulatory framework and explains the latest propositions and developments regarding market risk measurement and market risk weighted assets. The methodologies used in this thesis are explained in Chapter 3. In this chapter, risk factor return models are introduced, which involves conditional volatility modelling. Also, we briefly summarise the estimation methods that are most used in practice and go into detail on FHS, as this is the method that we use in our analysis. Furthermore, the definition of robustness, and the way we measure it, is outlined in this chapter. The data on risk factors we use is described in Chapter 4, together with the hypothetical portfolio that we construct. Chapter 5 describes the model estimation process and the estimated parameters of the conditional volatility models. In Chapter 6 our empirical findings are explained: we look into the sampling period of the VaR and ES process, make a comparison between results generated by HS and FHS and present the robustness results. Finally, Chapter 7 summarises the main conclusions.


2 Basel regulatory framework

2.1 Risk management

Risk management can be seen as the core competence of a financial institution. For a bank it is challenging to manage the financial risk that arises from uncertainty in a proper way. The financial crisis showed us how important risk management is and how the financial system as a whole failed to capture extreme losses. First, adequate risk management is of importance to modern society. Nowadays everything relies on a proper functioning of the financial system and regulation is based on minimising systemic risk, which is the risk of transferring problems of a single institution to other financial institutions which can lead to failure of the entire financial system. Moreover, risk management is important to shareholders as proper risk management can increase the value of a company. Also, adequate risk management is needed in the calculation of economic capital, which is the amount of capital internally calculated by banks, which banks should keep in order to minimise the probability of default.

Banks face many types of risk, including credit risk, market risk, interest rate risk, operational risk and liquidity risk. In this research, we focus on market risk in the trading book of banks. Banks divide their activities into trading book activities and banking book activities, where the trading book refers to assets that are traded on a daily basis in order to make profit from bid and ask spreads or to use for hedging purposes. Securities held in the banking book on the other hand, are typically intended to be held to maturity. McNeil et al. (2010) define market risk as “the risk of a change in the value of a financial position due to changes in the value of the underlying components on which that position depends, such as stock and bond prices, exchange rates and commodity prices.” It basically is the risk of a change in market prices leading to changes in the value of the bank’s portfolio. In the next sections, we provide a short introduction on how market risk regulation has developed over the years and what the current findings and proposals are regarding the regulatory system.

2.2 Regulation

Regulation is needed to maintain the integrity of the financial system. Ensuring stability of the financial system is an important objective of regulation, but banking regulation also aims to offer consumer protection, preserve market confidence and reduce financial crime. The Basel Committee on Banking Supervision is the setter of the regulation rules of banks and its main objective is to "enhance understanding of key supervisory issues and improve the quality of banking supervision".

The Basel Committee on Banking Supervision was established in 1974 by the Central-Bank Governors of the Group of Ten (G-10). Basel I, the first Basel Accord as introduced in 1988, primarily focusses on credit risk, the risk of default of borrowers. Assets are weighted according to the extent of credit risk and from this, capital requirements are calculated. However, the importance of measuring market risk in addition to credit risk was acknowledged later on, since there was a need for addressing off-balance sheet products such as derivatives. In 1993 the G-30 published a report on this. Around the same time, at JPMorgan there was a request for a one-day, one-page report of the bank's market risk that could be sent to the CEO, and so RiskMetrics was developed. RiskMetrics uses several VaR methodologies and VaR was set as the standard measure for market risk. Basel Committee on Banking Supervision (1996) prescribes a standardised model for market risk, but in addition proposes the possibility for banks to use an internal model to measure market risk, provided that it is approved by the supervisor. This is intended as an incentive for banks to improve their own risk models. The advantage of an internal model based approach over the standardised approach is that banks can use more advanced models compared to the models that are presented by the Basel Committee and banks can adapt models to their own situation and portfolio. This mostly leads to more favourable capital requirements and therefore most banks prefer an internal model approach over the standardised approach.

Basel II, as introduced in 2006, focusses on three types of financial risk, namely credit risk, market risk and a newly introduced type of risk, operational risk. Operational risk is the risk of losses resulting from inadequate or failed internal processes, people and systems, or from external events (McNeil et al., 2010). However, the main focus of Basel II is still on credit risk. The risk sensitivity of the risk weights for credit risk is now increased with respect to Basel I. Basel II also introduces the three pillar concept, through which the Committee aims to achieve more interaction between risk categories. Pillar I describes the minimum capital ratio that banks are required to calculate for market risk, operational risk and credit risk individually. It is calculated using risk weighted assets and the definition of regulatory capital:

$$\text{Capital ratio} = \frac{\text{Regulatory capital}}{\text{Risk-weighted assets}}. \quad (2.1)$$

Regulatory capital consists of core capital (Tier 1) and supplementary capital (Tier 2). The total capital (Tier 1 and Tier 2) ratio should be at least 8% and the minimum capital ratio calculated for Tier 1 capital only is 4% under Basel II. Pillar II, also referred to as the supervisory review process, describes the responsibilities for supervisors. In Pillar III, Pillar I and Pillar II are complemented by a set of market disclosure requirements (Basel Committee on Banking Supervision, 2006).
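As a small worked example of (2.1) with hypothetical numbers (not taken from the thesis): a bank holding 40 billion euro of total regulatory capital against 450 billion euro of risk-weighted assets has
$$\text{Capital ratio} = \frac{40}{450} \approx 8.9\% > 8\%,$$
so it meets the Basel II total capital requirement; an increase in RWAs, for example through riskier positions or a more conservative risk model, pushes the ratio towards the 8% minimum.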

In order to calculate risk weighted assets for different risk types, banks are allowed to choose between a standardised approach or an internal model based approach, if approved by the supervisor, as mentioned before. At the moment of writing, only two large banks in the Netherlands use an internal model approach for market risk.

Both the standardised approach and the internal model based approach measure capital requirements in terms of two charges, namely general risk and specific risk. General risk is market risk in the portfolio and specific risk measures market risk that is unique to a particular instrument. An example of specific risk is the credit quality of the issuer. Furthermore, market risk is categorised into interest rate risk, equity position risk, foreign exchange risk, commodity risk and market risk measured for options.


2.2.1 Standardised approach

The standardised approach is used by banks whose business models do not require refined market risk measurements, such as small banks, by banks that only hold simple financial instruments, or by banks that fail to construct a proper internal market risk model. The standardised approach is a ‘building block’ approach and sets capital requirements for each type of market risk and is based on instrument specific rules. For each position, a fixed risk weight charge is defined, which is based on external ratings as well as market price fluctuations. We will not go into detail on the standardised approach now, since we focus on variability in risk weighted assets due to modelling choices. This is obviously related to the internal model approach, as banks are more flexible in determining which methodologies they use in the internal model approach.

2.2.2 Internal model approach

The internal model based approach requires the calculation of VaR. The Basel Committee prescribes that

• VaR is calculated over a 10-day horizon;
• VaR is calculated at a 99% confidence level;
• The minimum length of the historical dataset is one year;
• Historical data must be updated every 3 months;
• Banks are free to choose the method for calculating VaR, e.g. variance-covariance, historical simulation or Monte Carlo.

(Basel Committee on Banking Supervision, 2006)

As a response to the financial crisis, the Basel Committee came up with a revision to the Basel II framework for market risk in 2009, also referred to as Basel 2.5 (Basel Committee on Banking Supervision, 2009). Besides the most common measure of market risk, VaR, banks are now required to report stressed VaR as well. Stressed VaR is intended to show how VaR, as calculated on the bank's current portfolio, behaves under financially stressed conditions. It simply is a 10-day, 99th percentile VaR applied to a one-year historical dataset that includes a continuous 12-month period of significant financial stress. The period 2007/2008 is for example often used as a period of significant financial stress, but the exact period must be approved by the supervisor. Stressed VaR avoids the problem that periods of market stress fall out of the historical dataset. Moreover, both the Incremental Risk Charge (IRC), which addresses default risk and credit risk migration, and the Comprehensive Risk Measure (CRM), which aims to measure counterparty risk, are introduced in Basel 2.5. However, as we focus on VaR and ES, we will not go into detail on those measures in this thesis.

The main part of the capital charge for market risk, calculated under the internal model approach and used in the calculation of risk weighted assets, is now measured as a function of VaR, stressed VaR and multiplication factors:


$$c = \max\{\text{VaR}_{t-1},\; m_c \cdot \text{VaR}_{\text{avg}}\} + \max\{\text{sVaR}_{t-1},\; m_s \cdot \text{sVaR}_{\text{avg}}\}, \quad (2.2)$$

where $\text{VaR}_{\text{avg}}$ is the average VaR over the last 60 business days, $\text{sVaR}_{\text{avg}}$ the corresponding average stressed VaR, and $m_c$ and $m_s$ represent multiplication factors that are set by supervisors on the basis of a bank's risk management system. The minimum multiplication factor equals three and it can be adjusted up to four, based on backtesting results. Backtesting requires counting the violations over the last 250 trading days, where violations are historical exceedances of the calculated 99%-VaR. A method is then said to lie in the green, yellow or red zone in case the number of violations is at most 4, between 5 and 9, or 10 or more, respectively. Based on these backtesting results, the multiplication factor is adjusted, where obviously a high number of violations leads to a higher multiplication factor.
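As an illustration of the backtesting logic described above, the sketch below counts 99%-VaR violations over a 250-day window and maps them to the traffic-light zones. It is a minimal Python sketch for exposition, not the MATLAB implementation used in this thesis, and the simulated inputs are purely hypothetical.

```python
import numpy as np

def backtest_zone(losses, var_estimates):
    """Count 99%-VaR violations over the backtesting window (typically the last
    250 trading days) and map the count to the Basel traffic-light zone."""
    violations = int(np.sum(np.asarray(losses) > np.asarray(var_estimates)))
    if violations <= 4:
        zone = "green"    # multiplication factor stays at its minimum of three
    elif violations <= 9:
        zone = "yellow"   # a plus factor is added, pushing the factor towards four
    else:
        zone = "red"      # multiplication factor raised to its maximum of four
    return violations, zone

# toy usage with simulated P&L (illustration only)
rng = np.random.default_rng(0)
daily_losses = rng.standard_normal(250)
var_99 = np.full(250, 2.33)          # 99% VaR of a standard normal loss distribution
print(backtest_zone(daily_losses, var_99))
```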

The total capital charge for market risk according to the internal model approach also adds IRC and CRM where applicable; it depends on the type of product whether IRC or CRM is taken into account. Nowadays, Basel III (which also includes Basel II and Basel 2.5) is being implemented in banks. The main adjustments compared to Basel II are the strengthening of the numerator of the capital ratio, that is regulatory capital, and the increase of the capital requirement (Tier 1 and Tier 2) up to 10.5%. Furthermore, additional charges are introduced, such as the Credit Valuation Adjustment (CVA) charge for over-the-counter (OTC) derivatives. This charge requires banks to take into account the creditworthiness of counterparties to non-cleared derivatives trades. Moreover, a requirement for banks to calculate Expected Positive Exposure (EPE) under stressed market conditions is introduced. In addition, wrong-way risk is addressed, which is the risk that the exposure to a counterparty is positively correlated with the probability of counterparty default.

2.3 Variability in risk weighted assets

As RWAs form the denominator of the capital ratio, which can be seen as the key indicator of banks' solvency, it is important that these are monitored consistently. Even though some changes have been made in the calculation of risk measures, the capital framework still depends heavily on RWAs. Le Leslé and Avramova (2012) listed the key concerns about RWAs. In their IMF report they address the differences in RWA outcomes across several banks and countries, which are partly due to the business models banks use to calculate risks.

The Basel Committee also looked into the variability across RWAs recently, focusing on market risk specifically. In January 2013, the Basel Committee on Banking Supervision (2013b) published a report in which the RWAs for market risk are investigated. They analyse public reports on market risk weighted assets (mRWAs) and perform a hypothetical test portfolio exercise to investigate the variability in mRWA as observed across banks. The hypothetical portfolio they constructed consists of simple long and short positions and is well-diversified. They find that an important part of the variation in mRWA is due to modelling choices of banks. They also identify the key modelling choices that could explain this variability, where they make a distinction between VaR models and IRC models.

Since we focus on VaR and ES, we shortly summarise the drivers of variability for VaR models as identified by the Basel Committee. First, the length of the look-back period, which is the historical period used in the model, and the applied weighting scheme appear to be important drivers. Some banks use a one-year look-back period whereas others use a five-year historical period. Furthermore, the choice between non-overlapping and overlapping periods appears to affect the market risk outcomes. Other drivers of variability identified in the paper are the choice between Monte Carlo and historical simulation, the choice of scaling 1-day VaR to 10-day VaR with a scaling rule² or measuring 10-day VaR directly, and the methods banks use to calculate general and specific risks.

This inconsistency in mRWAs among banks affects market confidence. The Basel report describes several policy options that can be considered in order to reduce this variability. One of these is narrowing down the modelling choices of banks. A single approach to scale a 10-day VaR could be determined for example, or the flexibility of choosing the historical period on which VaR calculation is based could be reduced. An important policy consideration for our research is the suggestion to move from VaR and stressed VaR measures to a single ES measure.

2.4 From VaR to ES

This proposition is further outlined in Basel Committee on Banking Supervision (2012) where, based on shortcomings of VaR, a transition from VaR to ES is suggested. The committee proposes to use ES for the internal model approach and Basel will use ES for the calibration of capital requirements in the standardised approach as well. Their main argument is the property of ES of better capturing tail risk since it measures the size and probability of losses given a certain threshold. More discussion on VaR and ES measures can be found in the next chapter.

In October 2013, the Basel Committee published an additional consultative document (Basel Committee on Banking Supervision, 2013a). Here, they set out more detailed proposals for the review of the trading book. More importantly, they mention that they agreed to use a 97.5% ES for the internal models-based approach. The 97.5% confidence level is chosen since the Committee believes that this level is appropriate relative to the 99% confidence level for the current VaR measure. This comparison stems from normality assumptions, since $\text{ES}_{0.975}$ approximately equals $\text{VaR}_{0.99}$ if the underlying distribution is normal.
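A short standard calculation (not from the thesis) makes this equivalence concrete. For a normally distributed loss with mean zero and standard deviation $\sigma$,
$$\text{VaR}_{0.99} = \Phi^{-1}(0.99)\,\sigma \approx 2.33\,\sigma, \qquad \text{ES}_{0.975} = \frac{\varphi(\Phi^{-1}(0.975))}{0.025}\,\sigma = \frac{\varphi(1.96)}{0.025}\,\sigma \approx 2.34\,\sigma,$$
so the two measures almost coincide under normality, while for heavier-tailed distributions $\text{ES}_{0.975}$ typically exceeds $\text{VaR}_{0.99}$.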

Moreover, the committee believes that the calculation of both VaR and stressed VaR might be duplicative and proposes to calibrate ES on a period of significant financial stress. However, this might lead to difficulties as some risk factors only have a short window of historical data. The committee proposes that the historical data should go back to at least 2005 and to avoid problems with availability of risk factors over this observation period, an indirect method for the calculation of capital charges is introduced. A reduced set of relevant risk factors is specified for which there is a long period of historical data available. This reduced set of risk factors must explain at least 75% of the ES model.

$$\text{ES} = \text{ES}_{R,S} \cdot \frac{\text{ES}_{F,C}}{\text{ES}_{R,C}}. \quad (2.3)$$

$\text{ES}_{R,S}$ is the ES calculated over the reduced set of risk factors and over a stressed period, $\text{ES}_{F,C}$ is the ES calculated over the full set of risk factors over the current, most recent 12-month period, and $\text{ES}_{R,C}$ is the ES calculated over the reduced set of risk factors over the current period (Basel Committee on Banking Supervision, 2013a).
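To illustrate how (2.3) works, take hypothetical numbers that are not from the thesis: $\text{ES}_{R,S} = 30$, $\text{ES}_{F,C} = 12$ and $\text{ES}_{R,C} = 10$ (in millions). Then
$$\text{ES} = 30 \cdot \frac{12}{10} = 36,$$
so the ratio $\text{ES}_{F,C}/\text{ES}_{R,C}$ scales the stressed, reduced-set ES up to reflect the risk factors that the reduced set does not capture under current market conditions.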

²A common scaling method is the square-root-of-time rule, which simply multiplies 1-day VaR by $\sqrt{10}$ in order to obtain 10-day VaR.


3 Methodology

3.1 Market risk measures

A risk measure ρ represents the degree of riskiness of a portfolio over a certain horizon. Artzner et al. (1999) introduce the notion of coherent risk measures, which can be considered the first systematic attempt to quantify financial risk. The authors propose a set of properties that a risk measure must satisfy in order to be coherent. We define $\mathcal{M}$ as the set of random variables representing portfolio losses over a certain time horizon, and $L \in \mathcal{M}$ represents the loss of a certain portfolio over that horizon. A loss is represented by a positive number, meaning that a profit results in a negative value of $L$. Risk measures are functions $\rho : \mathcal{M} \to \mathbb{R}$ and a risk measure ρ is said to be coherent if it satisfies the following properties:

• Monotonicity. $\rho(L_1) \le \rho(L_2)$ for all $L_1, L_2 \in \mathcal{M}$ such that $L_1 \le L_2$;

• Translation invariance. $\rho(L + c) = \rho(L) + c$ for any $c \in \mathbb{R}$ and all $L \in \mathcal{M}$;

• Sub-additivity. $\rho(L_1 + L_2) \le \rho(L_1) + \rho(L_2)$ for all $L_1, L_2 \in \mathcal{M}$;

• Positive homogeneity. $\rho(\lambda L) = \lambda\,\rho(L)$ for all $L \in \mathcal{M}$ and for every $\lambda \ge 0$.

$L_1 \le L_2$ means that in any state of the world, $L_2$ yields a loss at least as high as the loss yielded by $L_1$, so monotonicity ensures that portfolios with larger losses are more risky. Translation invariance makes sure that an increase in the loss of the portfolio increases the risk of that portfolio by the same amount. Sub-additivity reflects the idea that diversification reduces risk, and positive homogeneity ensures that if a loss is multiplied by a factor, the risk is multiplied by the same factor as well. The risk measures we consider are VaR and ES, and in this section we describe these market risk measures.

VaR. This risk measure measures the maximum loss that could occur with confidence level $1-\alpha$ over a given holding period. Formally,
$$\text{VaR}_{1-\alpha} = \inf\{l \in \mathbb{R} : P(L > l) \le \alpha\} = \inf\{l \in \mathbb{R} : F_L(l) \ge 1-\alpha\}, \quad (3.1)$$
where $F_L$ is the loss distribution over the given time horizon. Typical values of the confidence level $1-\alpha$ are 0.95 or 0.99 and the time horizon is usually 1 or 10 days for market risk in the trading book. VaR is in fact a quantile of the loss distribution, and therefore it is equal to $q_{1-\alpha}(F_L) = F_L^{-1}(1-\alpha)$.


ES. This risk measure measures the average of the largest $100\alpha\%$ losses, which means that it is the average loss over the losses that exceed VaR. The formal definition of ES is given by
$$\text{ES}_{1-\alpha} = \frac{1}{\alpha}\int_{1-\alpha}^{1} q_u(F_L)\,du, \quad (3.2)$$
with $q_u(F_L)$ representing the quantile function of $F_L$. For an integrable loss $L$ with continuous distribution function $F_L$, we have
$$\text{ES}_{1-\alpha} = E(L \mid L \ge \text{VaR}_{1-\alpha}) = \frac{E(L;\, L \ge q_{1-\alpha}(F_L))}{P(L \ge q_{1-\alpha}(F_L))} = \frac{E(L;\, L \ge q_{1-\alpha}(F_L))}{\alpha} = \frac{1}{\alpha}\int_{1-\alpha}^{1} q_u(F_L)\,du, \quad (3.3)$$
since $E(L;\, L \ge q_{1-\alpha}(F_L)) = \int_{1-\alpha}^{1} q_u(F_L)\,du$, which is proven in McNeil et al. (2010).

ES is a special case of the spectral risk measures. This class of risk measures is given by
$$\text{SRM} = \int_{0}^{1} q_u(F_L)\,\phi(u)\,du, \quad (3.4)$$
where $\phi(u)$ is a weighting function defined on the range of cumulative probabilities $u \in [0,1]$. ES is the special case of a spectral risk measure with $\phi(u) = \frac{1}{\alpha}\,1_{\{1-\alpha \le u \le 1\}}$.

Cont et al. (2010) introduce a proposition which states that a risk measure is coherent if and only if it is a spectral risk measure. This results in ES being coherent, whereas VaR is not. VaR fails the sub-additive property, meaning that the risk of a portfolio can be larger than the sum of the individual risks of components of the portfolio measured under VaR (Artzner et al., 1999). This means that VaR may fail in stimulating diversification.
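A standard textbook-style counterexample, sketched below in Python under assumed numbers (two independent loans that each lose 100 with 4% probability), illustrates this failure: each loan's 95%-VaR is zero, while the VaR of the combined portfolio is 100.

```python
import numpy as np

# Hedged illustration (not from the thesis): VaR can fail sub-additivity.
rng = np.random.default_rng(42)
n = 1_000_000
loss1 = 100 * (rng.random(n) < 0.04)   # loan 1 defaults with probability 4%
loss2 = 100 * (rng.random(n) < 0.04)   # loan 2, independent of loan 1

def var(losses, conf=0.95):
    """Empirical VaR as the conf-quantile of the loss distribution."""
    return np.quantile(losses, conf)

print(var(loss1), var(loss2))   # both 0: a single default lies beyond the 95% quantile
print(var(loss1 + loss2))       # 100: P(at least one default) ≈ 7.8% > 5%
# VaR(L1 + L2) = 100 > 0 = VaR(L1) + VaR(L2): the diversified portfolio looks riskier.
```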

Besides VaR's failure to be a coherent risk measure, another main disadvantage is that it does not measure the magnitude of extreme losses: it only gives a quantile and does not consider extreme tail events. It is also argued that VaR lacks robustness and encourages excessive risk taking.

Moreover, as mentioned in Yamai and Yoshiba (2005), VaR fails to capture tail risk. In an example in their paper it is shown how a VaR constraint can result in a more vulnerable optimal portfolio that can lead to large losses that exceed the VaR level. In contrast, when constrained by ES, the portfolio risk significantly reduces. Furthermore, Boyle et al. (2005) show that traders have incentives to invest in riskier portfolios under a VaR constraint. Since ES is proven to be a coherent risk measure and it measures the expected loss in extreme cases, it has the quality of being a better risk measure in theory.

However, there is criticism on the use of ES instead of VaR as well, and thus on the proposition of the Basel Committee to switch from VaR to ES. One of the main arguments against this proposed transition is that VaR is simple and easy to understand, whereas ES is more difficult and more data is needed in its estimation. Furthermore, VaR satisfies the elicitability property, as described in Chen (2013), in contrast to ES, which means that there are difficulties in reliable backtesting of ES. Moreover, the failure of VaR to be sub-additive is relaxed by some researchers. Daníelsson et al. (2005) show by both theoretical properties and simulations that in most practical applications VaR is sub-additive. Also Heyde et al. (2007) summarise why the sub-additivity property can be relaxed. In their paper they additionally state that ES is not robust against noisy or unreliable data or changes in model assumptions.


3.2 Return series

For the calculation of market risk estimates such as VaR and ES, time series of returns of risk factors are used. They are constructed from historical data on daily, weekly or monthly prices of risk factors, and returns can be calculated in several ways. The most common method is to use log differences of prices, meaning that returns are continuously compounded. We consider daily returns since this is in line with the methods used by banks to model returns. Daily log returns are calculated in the following way:
$$r_t = \ln\left(\frac{p_t}{p_{t-1}}\right). \quad (3.5)$$

Alternatively, simple relative returns, $r_t = \frac{p_t - p_{t-1}}{p_{t-1}}$, can be used. Moreover, absolute returns are sometimes considered as well to model risk factors. The daily returns then become
$$r_t = p_t - p_{t-1}. \quad (3.6)$$
Returns are calculated for $t = 2,\dots,T$ trading days. $r_t$ represents the daily return, which is either logarithmic, relative or absolute, and $p_t$ is the value of a risk factor at time $t$.

Return series of many financial assets show three important statistical characteristics, also referred to as stylised facts. First, returns are not normally distributed. Second, there is hardly any autocorrelation in return series and, third, squared or absolute returns do show pronounced autocorrelation (Taylor, 2011). Moreover, volatility appears to vary over time and extreme returns appear in clusters (McNeil et al., 2010).

The fact that returns are not normally distributed is based on analysis of return series: for most financial time series it can easily be shown that the distribution of returns is not necessarily symmetric, is highly peaked and has heavy tails. Symmetry is measured by skewness, whereas the excess kurtosis of a series measures the peakedness and the extent to which the series has fat tails relative to the normal distribution, which has a kurtosis equal to three.

The absence of correlation between returns of different time periods is measured by the sample autocorrelation of returns, which is close to zero in general. The Ljung-Box Q-statistic is a test for the presence of autocorrelation. It tests $H_0: \rho_l = 0$ for all $l$ against $H_1: \rho_l \ne 0$ for some $l \le k$. The test statistic is given by
$$Q_{k,r} = T(T+2)\sum_{l=1}^{k}\frac{\hat\rho_l^2}{T-l}, \quad (3.7)$$
and is asymptotically $\chi^2$-distributed with $k$ degrees of freedom. This statistic is calculated from the first $k$ squared sample autocorrelations $\hat\rho_l^2$, where $\hat\rho_l$ is the sample autocorrelation at lag $l$ from $n$ observations. $H_0$ should be rejected if $Q > \chi^2_{1-\alpha,k}$. In case autocorrelation is present, there is need for an ARMA model. ARMA models will be explained in the next section.
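The statistic in (3.7) is straightforward to compute directly; the sketch below is an illustrative Python version (the thesis itself works in MATLAB), and in practice a library routine such as statsmodels' acorr_ljungbox could be used instead.

```python
import numpy as np

def ljung_box(x, k):
    """Ljung-Box Q-statistic (3.7) based on the first k sample autocorrelations of x.
    Compare the result with the chi-squared(k) critical value to test H0: no autocorrelation."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    rho_hat = np.array([np.sum(xc[l:] * xc[:-l]) / denom for l in range(1, k + 1)])
    return T * (T + 2) * np.sum(rho_hat ** 2 / (T - np.arange(1, k + 1)))

# usage: apply to returns and to squared returns to check the stylised facts
rng = np.random.default_rng(1)
r = rng.standard_normal(1000)
print(ljung_box(r, 10), ljung_box(r ** 2, 10))
```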

The third property of time series, namely the presence of positive correlation between the volatilities on nearby time periods, has to do with volatility clustering. This basically means that today’s volatility is positively correlated with future volatility and that squared returns are positively autocorrelated. The Ljung-Box test can also be applied to squared returns in order to measure their autocorrelation.


Conditional volatility models are constructed to account for this autocorrelation and we will use them in our framework to calculate ES and VaR.

3.2.1 ARMA models

Autoregressive moving average (ARMA) modelling can be used if returns exhibit autocorrelation, which occurs when the residuals are correlated over time. An ARMA model is then needed to remove this autocorrelation and it allows for a time-varying mean. An ARMA(1,1) model is given by

$$r_t = \phi_0 + \phi_1 r_{t-1} + a_t + \psi_1 a_{t-1}, \quad (3.8)$$
where $a_t$ is a white noise sequence satisfying $E(a_t) = 0$, $\text{Var}(a_t) = \sigma_a^2$ and $\text{Cov}(a_t, a_{t-l}) = 0$ for all $t = 1,\dots,T$ and $l \ne 0$. $\phi_0$, $\phi_1$ and $\psi_1$ are the parameters to be estimated.

We can write
$$r_t = \mu_t + a_t, \quad (3.9)$$
where $\mu_t$ represents the mean of the return series. $\mu_t$ is time dependent in case an ARMA model is estimated, and is just a constant that can be estimated when there is no need for an ARMA model.
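As a quick illustration of the mean equation (3.8), the sketch below simulates an ARMA(1,1) series with made-up parameter values; in the thesis the parameters are instead estimated from historical risk factor returns.

```python
import numpy as np

def simulate_arma11(T, phi0, phi1, psi1, sigma_a=1.0, seed=0):
    """Simulate r_t = phi0 + phi1*r_{t-1} + a_t + psi1*a_{t-1} with Gaussian white noise a_t."""
    rng = np.random.default_rng(seed)
    a = sigma_a * rng.standard_normal(T)
    r = np.zeros(T)
    for t in range(1, T):
        r[t] = phi0 + phi1 * r[t - 1] + a[t] + psi1 * a[t - 1]
    return r

r = simulate_arma11(1000, phi0=0.0, phi1=0.3, psi1=-0.1)   # hypothetical parameters
print(r[:5])
```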

3.2.2 Conditional volatility models

Volatility measures the variation in returns over time and it can simply be calculated as the standard deviation of a series of returns. This measure is called the historical volatility. However, the assumption of constant volatility does not work very well in practice. When volatility is measured over a small number of observations it is a very noisy measure, and when it is measured over a long period it becomes so smooth that it does not respond very well to new information. Moreover, as volatility clustering, meaning that large deviations from the mean tend to be followed by large deviations again, is present in most time series, conditional volatility models might be a better way of analysing the volatility. Conditional volatility measures the future volatility conditional on past information, such as past returns. It thus estimates the volatility for different time periods, rather than just calculating the volatility over the whole historical period that is considered.

Many models have been developed in the past years and the model that fits the data best should produce residuals that are independent and identically distributed and should have good forecasting power. The autoregressive conditional heteroskedasticity (ARCH) model is the most basic one. It is introduced by Engle (1982) and it basically assigns more weight to recent observations and less weight to observations that happened a long time ago. An extension of this model is the generalised ARCH (GARCH) model (Bollerslev, 1986), which appears to be a good fit for many financial time series. Additionally, we consider asymmetric volatility models. These models are based on the idea that future prices are not always symmetric functions of today’s prices. Large negative returns typically have a different impact on volatility compared to large positive shocks, in the sense that a fall in market prices usually has much more effect on next day’s volatility than a price increase. This is what we call the leverage effect. An example of a model that takes the leverage effect into account is the GJR-GARCH model, as introduced by Glosten et al. (1993). Conditional volatility models are in line with the three facts about time series of returns. More details on this can be found in Taylor (2011).


Conditional volatility modelling starts with a time series of returns. Let $\{r_t\}$ be a series of daily returns of a risk factor. It is assumed that the distribution of the daily return in period $t$, conditional on all previous returns, has a time-varying mean and volatility. We have
$$r_t \mid \mathcal{F}_{t-1} \sim D(\mu_t, h_t), \quad (3.10)$$
where $D$ represents the distribution and $\mathcal{F}_{t-1}$ the information available up to time $t-1$. The variance given past returns, $h_t$, is given by
$$h_t = \text{Var}(r_t \mid \mathcal{F}_{t-1}) \quad (3.11)$$
$$\;\;= E[(r_t - \mu_t)^2 \mid \mathcal{F}_{t-1}] \quad (3.12)$$
$$\;\;= E[a_t^2 \mid \mathcal{F}_{t-1}] \quad (3.13)$$
$$\;\;= \text{Var}(a_t \mid \mathcal{F}_{t-1}), \quad (3.14)$$
defining $a_t = r_t - \mu_t$. Obviously, the mean return can also be a constant, and in that case $\mu_t$ can be replaced by $\mu$. When we do have evidence for a time-varying mean, we can estimate it with an ARMA model; for the AR(1), for example, $\mu_t = \phi_0 + \phi_1 r_{t-1}$. The standardised residuals therefore become
$$\epsilon_t = \frac{r_t - \mu_t}{\sqrt{h_t}}, \quad (3.15)$$
with $\epsilon_t \mid \mathcal{F}_{t-1} \sim D(0,1)$. These residuals are identically and independently distributed, which implies that the conditional expected value of $a_t$ is zero and that autocorrelation disappears. A common and easy choice for the distribution of the residuals is the standard normal distribution, but other choices of the distribution of $\epsilon_t$ are possible and the Student's t-distribution is often used in practice as well. The t-distribution has fatter tails and is more peaked around the mean than the standard normal distribution and turns out to be a better fit for many financial time series.

ARCH. The simplest conditional volatility model is the ARCH model. The ARCH(m) model is given by
$$h_t = \alpha_0 + \sum_{i=1}^{m}\alpha_i a_{t-i}^2, \quad (3.16)$$
with $a_t$ a white noise term, representing the error in forecasting $r_t$. $\alpha_i$, for $i = 0,\dots,m$, are the ARCH parameters to be estimated, where $\alpha_0$ is the intercept. The restrictions are $\alpha_0 > 0$ and $\alpha_i \ge 0$ for all $i = 1,\dots,m$. The conditional volatility in this model depends only on the previous squared deviations from the mean return and, since $\alpha_i$ is positive, a large deviation from the mean in period $t-1$ implies a higher expected conditional volatility in period $t$, whereas a return close to the mean implies a lower expected conditional volatility in the next period.

GARCH. The conditional volatility model most used in empirical research is the GARCH model. It is a generalised ARCH process in the sense that the current conditional variance is allowed to depend on past conditional variances as well.

The GARCH(1,1) is given by
$$h_t = \alpha_0 + \alpha_1 a_{t-1}^2 + \beta_1 h_{t-1}. \quad (3.17)$$
The restrictions on the parameters are $\alpha_0 > 0$, $\alpha_i \ge 0$ for all $i = 1,\dots,m$ and $\beta_j \ge 0$ for all $j = 1,\dots,s$.

In practice, the GARCH(1,1) model is used most. In a GARCH model, volatility forecasts depend on both the previous volatility forecast and past squared deviations from the mean return, rather than on past squared deviations from the mean only, as is the case in the ARCH model. In the GARCH(1,1) model, periods of high volatility tend to be persistent, meaning that periods of high volatility as well as periods of low volatility tend to last for a longer period. Similar effects can occur in ARCH models, but lower-order GARCH models show this effect more clearly, as is illustrated in McNeil et al. (2010). Another way to represent a GARCH(1,1) model is to write it in terms of its unconditional variance. A GARCH(1,1) is weakly stationary if $\alpha_1 + \beta_1 < 1$; then $h_t$ converges to its unconditional variance $\bar\sigma^2$. For $\alpha_1 + \beta_1 < 1$, we can rewrite
$$h_t = \alpha_0 + \alpha_1 a_{t-1}^2 + \beta_1 h_{t-1} \quad (3.18)$$
$$\;\;= \left(\frac{\alpha_0}{1-\alpha_1-\beta_1} + \frac{\alpha_1}{1-\alpha_1-\beta_1}\,a_{t-1}^2 + \frac{\beta_1}{1-\alpha_1-\beta_1}\,h_{t-1}\right)(1-\alpha_1-\beta_1) \quad (3.19)$$
$$\;\;= \frac{\alpha_0}{1-\alpha_1-\beta_1} - \frac{\alpha_0\alpha_1}{1-\alpha_1-\beta_1} - \frac{\alpha_0\beta_1}{1-\alpha_1-\beta_1} + \frac{\alpha_1(1-\alpha_1-\beta_1)}{1-\alpha_1-\beta_1}\,a_{t-1}^2 + \frac{\beta_1(1-\alpha_1-\beta_1)}{1-\alpha_1-\beta_1}\,h_{t-1} \quad (3.20)$$
$$\;\;= \bar\sigma^2 - \alpha_1\bar\sigma^2 - \beta_1\bar\sigma^2 + \alpha_1 a_{t-1}^2 + \beta_1 h_{t-1} \quad (3.21)$$
$$\;\;= \bar\sigma^2 + \alpha_1(a_{t-1}^2 - \bar\sigma^2) + \beta_1(h_{t-1} - \bar\sigma^2), \quad (3.22)$$
with $\bar\sigma^2 = \frac{\alpha_0}{1-\alpha_1-\beta_1}$ representing the unconditional variance. The GARCH formula is now a function of its unconditional variance ($\bar\sigma^2$), the deviation of the lagged conditional variance from the unconditional variance ($h_{t-1} - \bar\sigma^2$) and the deviation of the lagged squared error term from the unconditional variance ($a_{t-1}^2 - \bar\sigma^2$).
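The recursion and the role of the unconditional variance can be illustrated with a small simulation; the sketch below uses hypothetical parameter values and is not the estimated model from this thesis.

```python
import numpy as np

def simulate_garch11(T, alpha0, alpha1, beta1, seed=2):
    """Simulate a_t = sqrt(h_t)*eps_t with h_t = alpha0 + alpha1*a_{t-1}^2 + beta1*h_{t-1},
    starting the recursion at the unconditional variance (requires alpha1 + beta1 < 1)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(T)
    sigma2_bar = alpha0 / (1.0 - alpha1 - beta1)
    h = np.empty(T)
    a = np.empty(T)
    h[0] = sigma2_bar
    a[0] = np.sqrt(h[0]) * eps[0]
    for t in range(1, T):
        h[t] = alpha0 + alpha1 * a[t - 1] ** 2 + beta1 * h[t - 1]
        a[t] = np.sqrt(h[t]) * eps[t]
    return a, h

# hypothetical parameters, chosen only to illustrate persistence and mean reversion
a, h = simulate_garch11(100_000, alpha0=0.05, alpha1=0.08, beta1=0.90)
print(a.var(), 0.05 / (1 - 0.08 - 0.90))   # sample variance is close to the unconditional variance
```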

RiskMetrics. RiskMetrics is a methodology developed by JPMorgan. It uses an exponentially weighted moving average (EWMA) model to calculate volatility. It is a special case of a GARCH(1,1) model with $\alpha_1 + \beta_1 = 1$, $\alpha_0 = 0$ and the assumption that $\mu_t = 0$. Since $\alpha_1 + \beta_1$ is not smaller than 1, this process is not stationary, and therefore the EWMA model is preferred for non-stationary processes. In this model there is no mean reversion in volatility, meaning that the volatility does not decay towards its long-run average. The EWMA(1,1) is given by
$$h_t = (1-\lambda)\,r_{t-1}^2 + \lambda\,h_{t-1}. \quad (3.23)$$
An EWMA(1,1) process can easily be derived from a GARCH(1,1) process with $\alpha_0 = 0$, $\alpha_1 + \beta_1 = 1$ and $\mu_t = 0$. Since $\mu_t = 0$, we get $a_t = r_t$, so the GARCH(1,1) process becomes
$$h_t = \alpha_1 r_{t-1}^2 + \beta_1 h_{t-1} \quad (3.24)$$
$$\;\;= (1-\lambda)\,r_{t-1}^2 + \lambda\,h_{t-1}, \quad (3.25)$$

using $\lambda = 1 - \alpha_1$. In RiskMetrics, the parameter $\lambda$, also called the decay factor, is set to 0.94 for daily data, such that recent observations are weighted more heavily than past observations. For each decay factor $\lambda$ and a certain tolerance level, the effective amount of data required by the EWMA can be computed.¹

¹For $\lambda = 0.94$ and a tolerance level of 0.01%, for example, the effective number of observations used is 149, which is based on $\sum_{t=1}^{149}\lambda^{t-1} \big/ \sum_{t=1}^{T}\lambda^{t-1}$.

GJR-GARCH. The GJR-GARCH model is an asymmetric conditional volatility model. Asymmetry can be observed in the news impact curve, which reflects the relationship between $a_{t-1}$ and $h_t$. GARCH models have a symmetric news impact curve that is centred around $a_{t-1} = 0$. For asymmetric models, the news impact curve has a minimum around $a_{t-1} = 0$ as well, but the slope parameters in the two directions are different, which makes the curve asymmetric. The GJR-GARCH(1,1) model is given by
$$h_t = \alpha_0 + (\alpha_1 + \gamma\,1_{\{a_{t-1} < 0\}})\,a_{t-1}^2 + \beta_1 h_{t-1}. \quad (3.26)$$
The parameter restrictions are $\alpha_0 > 0$, $\alpha_i \ge 0$, $\gamma_i \ge 0$ for all $i = 1,\dots,m$ and $\beta_j \ge 0$ for all $j = 1,\dots,s$. Note that this model coincides with a GARCH(m,s) model if $\gamma_i = 0$ for all $i = 1,\dots,m$, so $\gamma$ can be viewed as the extent to which the leverage effect is taken into account.

To summarise, the ARCH model is the simplest conditional volatility model where the volatility depends on past deviations from the mean. The GARCH model is an extension where the function additionally depends on lagged conditional variance terms. The EWMA model is used instead of GARCH in case volatility is not mean reverting. An asymmetric GARCH, such as the GJR-GARCH, may be chosen if there is evidence for a leverage effect in returns. We chose these different models since these are all commonly used in practice and since they have different features.

3.2.3 Parameter estimation

Maximum likelihood can be used to estimate the parameters of the volatility models. Either a normal distribution or a t-distribution is assumed for the residuals $\epsilon_t$, as these are the most common choices for the error distribution. Obviously, other choices are possible as well, but we will not go into detail on these in this thesis. Estimating parameter values in MATLAB using maximum likelihood involves manually inserting starting values for the model. Typical starting values for $a_t$ and $h_t$ are $a_0 = 0$ and $h_0 = \frac{1}{T-1}\sum_{t=2}^{T}(r_t - \bar r)^2$.

The parameters $\theta = (\phi_0, \phi_1, \psi_1, \alpha_0, \alpha_i, \beta_j, \gamma_i, \lambda)$ are then obtained from the log-likelihood function
$$\ln L(\theta) = \sum_{t=1}^{n} \ln f(r_t \mid \mathcal{F}_{t-1}, \theta), \quad (3.27)$$

with $\mathcal{F}_{t-1}$ the information set up to time $t-1$; this includes $r_{t-1}$ and $h_t$, since the volatility at time $t$ is known at time $t-1$. $\ln f(r_t \mid \mathcal{F}_{t-1}, \theta)$ is defined by
$$\ln f(r_t \mid \mathcal{F}_{t-1}, \theta) = -\frac{1}{2}\left(\ln(2\pi) + \ln(h_t(\theta)) + \frac{(r_t - \mu_t(\theta))^2}{h_t(\theta)}\right) \quad (3.28)$$
in case the conditional returns are normally distributed. If the conditional distribution of $r_t$ is chosen to be t-distributed, we get $z_t \mid \mathcal{F}_{t-1} \sim D(0,1)$ with $D$ the standardised t-distribution. Then
$$\ln f(r_t \mid \mathcal{F}_{t-1}, \theta, \nu) = -\frac{1}{2}\ln(h_t(\theta)) + \ln\!\left(\frac{\Gamma\!\left(\frac{1}{2}(\nu+1)\right)}{\Gamma\!\left(\frac{1}{2}\nu\right)\sqrt{\pi(\nu-2)}}\right) - \frac{\nu+1}{2}\ln\!\left(1 + \frac{(r_t - \mu_t(\theta))^2}{h_t(\theta)(\nu-2)}\right), \quad (3.29)$$
with $\nu$ the degrees of freedom ($\nu > 2$) and $\Gamma(\cdot)$ the gamma function. Note that if $\nu \to \infty$, the t-distribution converges to a normal distribution.

Maximising this log-likelihood function provides us with the estimated parameters $\hat\theta = (\hat\phi_0, \hat\phi_1, \hat\psi_1, \hat\alpha_0, \hat\alpha_1, \hat\beta_1, \hat\gamma_1, \hat\lambda)$, where only the parameters of the chosen model and error distribution are included.
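A minimal sketch of this estimation step is given below, assuming a constant mean and normally distributed errors; the thesis performs the estimation in MATLAB and also covers ARMA means, t-distributed errors and the EWMA and GJR-GARCH variants, none of which are included here.

```python
import numpy as np
from scipy.optimize import minimize

def negloglik(params, r):
    """Negative Gaussian log-likelihood (3.27)-(3.28) for a constant-mean GARCH(1,1)."""
    mu, alpha0, alpha1, beta1 = params
    a = r - mu
    h = np.empty(len(r))
    h[0] = np.var(r)                    # starting value for the variance recursion
    for t in range(1, len(r)):
        h[t] = alpha0 + alpha1 * a[t - 1] ** 2 + beta1 * h[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(h) + a ** 2 / h)

def fit_garch11(r):
    """Maximise the log-likelihood numerically; the bounds keep the parameters admissible."""
    x0 = np.array([np.mean(r), 0.1 * np.var(r), 0.05, 0.90])
    bounds = [(None, None), (1e-8, None), (0.0, 1.0), (0.0, 1.0)]
    res = minimize(negloglik, x0, args=(r,), bounds=bounds, method="L-BFGS-B")
    return res.x

rng = np.random.default_rng(3)
r = rng.standard_normal(2000)           # placeholder data; the thesis uses risk factor returns
print(fit_garch11(r))
```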

After fitting a suitable model on the time series, the residuals should be identically and independently distributed, which can be tested by the Ljung-Box test applied to residuals and squared residuals. Furthermore, it can be tested whether the residuals are normally distributed, which is done by the Jarque-Bera test. The Jarque-Bera statistic is given by
$$JB = \frac{T-1}{6}\left(S^2 + \frac{1}{4}(K-3)^2\right), \quad (3.30)$$
where $S$ is the sample skewness, $K$ the sample kurtosis and $T-1$ the number of observations in the return series. $JB$ is asymptotically chi-squared distributed with two degrees of freedom. The null hypothesis of normality can be rejected in case $JB > \chi^2_{1-\alpha,2}$.

3.3 Estimation of risk measures

The risk measurement procedure consists of two steps. The first step is the estimation of the return distribution and the corresponding P&L, calculated using this return distribution, the portfolio weights and the valuation methods. The next step is the application of the risk measure to the estimated loss distribution, yielding a risk estimator. The estimation of the return distribution can either be done in a parametric way or be obtained from historical data.

We start the estimation procedure of the return distribution by applying historical simulation. This is the easiest method for calculating market risk and gives us a good starting point for further research. Historical simulation is a non-parametric method that is widely used by financial institutions. However, also fully parametric methods like the variance-covariance method are often used and Monte Carlo simulation is a very common method as well. The multivariate normal variance-covariance method assumes that risk factors are normally distributed and makes assumptions on the correlation structure between certain risk factors. Using this method, VaR can easily be calculated using normal quantiles. Other fully parametric methods are sometimes considered as well, such as the non-normal variance-covariance approach and extreme value theory, which makes assumptions on the distribution of the tails. The Monte Carlo method simulates scenarios of risk factors by making assumptions on the behaviour of a risk factor. For a stock price for example, future stock price scenarios can be generated by a geometric Brownian motion, which assumes that stock prices are log-normally distributed. By generating multiple scenarios, an empirical distribution of the future returns can be generated and from this, VaR and ES can be calculated. The three methods most used in practice all have their advantages and disadvantages, which are summarised below.

Fully parametric methods:
Advantages

• There exists a simple analytical solution;
• It is possible to include extreme tail behaviour by making assumptions about the tail of the distribution;
• Similarly, it can accommodate skewness and fat tails;
• There is only a limited amount of data needed to estimate the variance-covariance matrix.

Disadvantages


• When extreme value theory is used, it is hard to draw conclusions on tail behaviour since there are only a few extreme value observations to work with;

• It is difficult to model the correlations between risk factors properly;

• Variance-covariance can only be generalised to simple parametric forms such as the t-distribution or the normal distribution.

Historical Simulation:
Advantages

• It is easy to calculate;
• It is intuitively simple;

• There are no parametric assumptions, such as specifications on distributions needed;

• It is easy to combine with parametric methods such as conditional volatility models as is done in FHS;

• It is a one dimensional problem, there are no multivariate assumptions needed such as covariance matrices.

Disadvantages

• Historical simulation is highly dependent on the quality and availability of data;

• It is subject to the so-called ‘ghost effect’. Once an extreme event enters the dataset, the risk measure suddenly increases and stays high until this event drops out of the dataset;

• The method does not account for plausible events that might occur but have not occurred yet.

Monte Carlo simulation:
Advantages

• It can easily accommodate sophisticated stochastic processes and is very flexible;
• It can handle path-dependent and non-linear portfolios;
• The accuracy can simply be increased by running more trials;
• There are no problems in handling multiple risk factors.

Disadvantages

• Assumptions on a distribution have to be made and the results are only as good as the model used;
• Calibration of parameters in the model is dependent on historical data as well;
• Running many simulations is very time consuming;
• It can be subject to high sampling errors;

• The process chosen for each risk factor might not be a good representation of reality.
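For concreteness, the Monte Carlo approach described earlier (simulating a stock price with geometric Brownian motion and reading VaR and ES off the simulated P&L) can be sketched as follows; the spot price, drift and volatility are hypothetical and the example is not part of the thesis' methodology.

```python
import numpy as np

rng = np.random.default_rng(4)
S0, mu, sigma = 100.0, 0.05, 0.20        # hypothetical spot, annualised drift and volatility
horizon, n_sims = 10 / 250, 100_000      # 10-day horizon, number of scenarios

z = rng.standard_normal(n_sims)
S_T = S0 * np.exp((mu - 0.5 * sigma ** 2) * horizon + sigma * np.sqrt(horizon) * z)
loss = S0 - S_T                          # loss on a long position of one share

var_99 = np.quantile(loss, 0.99)
es_975 = loss[loss >= np.quantile(loss, 0.975)].mean()
print(var_99, es_975)
```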

In historical simulation, there are only a few observations that describe the tail behaviour of the loss distribution, whereas in Monte Carlo simulation and fully parametric methods assumptions have to be made on the distribution of the returns. The estimation method used in this thesis is FHS, which is a form of semi-parametric bootstrapping. This simulation method allows for a combination of historical simulation and the benefits of conditional volatility modelling and therefore combines the benefits of parametric methods with the benefits of non-parametric methods. The historical returns are standardised by the corresponding conditional volatility estimates. Later they will be adjusted with the volatility forecasts such that they reflect current and future market conditions. In this way, multiple scenarios are generated and therefore FHS is able to generate losses that are more extreme than those which occurred in the past. According to Christoffersen (2011) this method should be given serious consideration by any risk management team as it has been found to perform very well in several studies. Giannopoulos and Tunaru (2005) looked into FHS for ES estimation specifically and they demonstrate that ES calculated using FHS is a coherent risk measure as well. Moreover, they state that ES calculated using FHS combines one of the best risk measures in theory with one of the best modelling techniques used in risk management.

3.3.1 Historical simulation

Basic historical simulation simply calculates the VaR and the ES over the ordered loss observations from the dataset. The h-day $\text{VaR}_{1-\alpha}$ is the $(1-\alpha)$ quantile of the h-day empirical loss distribution. If we have a dataset of 1000 observations, the 95%-VaR corresponds to the 51st highest loss value and the 95%-ES to the average of the 50 highest losses. In historical simulation, there is no need for parameter estimation. Also, no underlying parametric model has to be adopted and we do not have to make any assumptions on the mean or the volatility of the distribution.

However, this model-free nature has drawbacks. The main disadvantage is that it assumes that the historical sample period is representative for the next h days over which a risk measure is calculated and that returns are independently and identically distributed. Moreover, because the observations are equally weighted, historical simulation exhibits the so-called ghost effect: risk measures are suddenly high when an extreme event enters the sample period and continue to be high until this observation abruptly drops out of the trailing window.
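The 1000-observation example above translates directly into code; the following Python sketch (illustrative only, with simulated losses) orders the losses and reads off the 95% VaR and ES.

```python
import numpy as np

def hs_var_es(losses, conf=0.95):
    """Historical-simulation VaR and ES from the ordered losses."""
    ordered = np.sort(np.asarray(losses))[::-1]       # largest loss first
    n_tail = int(np.ceil((1 - conf) * len(ordered)))  # 50 observations for conf=0.95, n=1000
    var = ordered[n_tail]                             # the 51st highest loss
    es = ordered[:n_tail].mean()                      # average of the 50 highest losses
    return var, es

rng = np.random.default_rng(5)
losses = rng.standard_normal(1000)                    # placeholder loss sample
print(hs_var_es(losses, 0.95))
```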

3.3.2 Filtered historical simulation

FHS, as developed by Hull and White (1998), allows for a combination of historical simulation and the benefits of conditional volatility modelling. It captures conditional heteroskedasticity in the data, unlike standard historical simulation, which assumes that volatility is constant over time. It is still unrestricted about the distribution of the risk factors and therefore combines the advantages of historical simulation with conditional volatility modelling.

FHS works as follows. The first step in the FHS procedure is to estimate a conditional volatility model on the data consisting of historical daily returns $r_t$. To illustrate the method step by step, we use a simple AR(1)-GARCH(1,1) model with normally distributed error terms in this example. We have
$$r_t = \mu_t + a_t \quad (3.31)$$
$$\epsilon_t = \frac{r_t - \mu_t}{\sqrt{h_t}} \;\Rightarrow\; a_t = \sqrt{h_t}\,\epsilon_t \quad (3.32)$$
$$\epsilon_t \sim \text{i.i.d. } N(0,1) \quad (3.33)$$
$$\mu_t = \phi_0 + \phi_1 r_{t-1} \quad (3.34)$$
$$h_t = \alpha_0 + \alpha_1 a_{t-1}^2 + \beta_1 h_{t-1}. \quad (3.35)$$


The model parameters are estimated by maximum likelihood, which gives us $\hat h_t = h_t(\hat\theta)$ and $\hat\mu_t = \mu_t(\hat\theta)$. The next step is to obtain filtered residuals. Historical returns are standardised by subtracting the time-varying mean and dividing by the corresponding volatility estimates. Later they will be adjusted with the volatility forecasts such that the simulated returns reflect current and future market conditions. The filtered residuals are
$$\hat\epsilon_t = \frac{r_t - \hat\mu_t}{\sqrt{\hat h_t}}, \quad t = 1,\dots,T, \quad (3.36)$$
where $\sqrt{\hat h_t} = \sqrt{h_t(\hat\theta)}$ and $\hat\mu_t = \mu_t(\hat\theta)$ are respectively the estimated conditional volatility and the estimated mean at time $t$ that follow from the estimated parameters of the maximum likelihood estimation, and $T$ is the last time period in the dataset. Provided that the returns follow an AR(1)-GARCH(1,1) process, the filtered residuals are now approximately identically and independently distributed and volatility clustering is removed.

Next, we initialise $\tilde h_0$ as the estimated daily GARCH variance at the most recent observation of the historical dataset and $\tilde a_0$ as the last return minus the corresponding mean return, that is, $\tilde a_0 = r_T - \hat\mu_T$.

Then we repeat the following $R$ times, where $R$ is the number of simulations, to obtain $R$ different scenario paths of horizon length $\tau$:

• Draw $\tau$ residuals independently with replacement from the estimated residuals $\hat\epsilon_t$, from which we obtain a scenario for the future $\tau$ days, that is, $\tilde\epsilon_t$ for $t = 1, \ldots, \tau$;
• Calculate the corresponding volatilities and means for this $\tau$-day scenario from $\tilde h_{t-1}$ and $\tilde a_{t-1}$, given the estimated parameters $\hat\theta$, which gives us generated conditional future volatilities $\tilde h_t$ and future means $\tilde\mu_t$ for $t = 1, \ldots, \tau$;
• Calculate $\tilde a_t = \sqrt{\tilde h_t}\,\tilde\epsilon_t$ using the GARCH equation for $t = 1, \ldots, \tau$;
• Calculate a $\tau$-day simulated return scenario $\tilde r_t$ using $\tilde r_t = \tilde\mu_t + \tilde a_t$ for $t = 1, \ldots, \tau$.

Because past realisations are i.i.d., provided that the conditional volatility model is correctly specified, it is possible to make draws from the empirical distribution. Then, at each horizon day and in each simulation trial, a new risk factor return is generated using the volatility forecast, such that it takes current market conditions into account. In this way, $R \times \tau$ simulated returns are obtained, with $R$ the number of simulations and $\tau$ the risk horizon. Our dataset now becomes

$$\begin{pmatrix} \tilde r_1^{(1)} & \tilde r_2^{(1)} & \cdots & \tilde r_\tau^{(1)} \\ \tilde r_1^{(2)} & \tilde r_2^{(2)} & \cdots & \tilde r_\tau^{(2)} \\ \vdots & \vdots & & \vdots \\ \tilde r_1^{(R)} & \tilde r_2^{(R)} & \cdots & \tilde r_\tau^{(R)} \end{pmatrix}. \tag{3.37}$$

These returns represent $\tau$-day paths and therefore a simulated distribution of $\tau$-day returns conditional on $h_t$ can be obtained. The empirical $\tau$-day return distribution is constructed using
$$\tilde r^{(s)}_{1:\tau} = \sum_{i=1}^{\tau} \tilde r_i^{(s)}, \qquad s = 1, \ldots, R, \tag{3.38}$$
with $\tilde r^{(s)}_{1:\tau}$ the sum of returns up to time $\tau$. We now have
$$\begin{pmatrix} \tilde r^{(1)}_{1:\tau} \\ \tilde r^{(2)}_{1:\tau} \\ \vdots \\ \tilde r^{(R)}_{1:\tau} \end{pmatrix}. \tag{3.39}$$

These cumulative returns form our empirical distribution and historical risk estimators can be obtained from it. Note that these return scenarios are for a single risk factor $i$ only. From now on we refer to these returns as $\tilde r_i^{(s)}$ for $s = 1, \ldots, R$ and $i = 1, \ldots, m$.

For a one-day ($\tau = 1$) ES or VaR, the above procedure simplifies to just selecting $R$ residuals from the standardised residuals, multiplying them by today's conditional volatility forecast and adding the mean. This gives us an empirical distribution of the returns of the risk factor. For a portfolio VaR or ES, we run the FHS procedure for multiple risk factors $i = 1, \ldots, m$. For each risk factor a GARCH process is estimated. The assumption of i.i.d. residuals makes the multivariate case a simple extension of the univariate case. It is important that the random drawings are taken from the same time periods for the different risk factors. In this way, FHS accounts for correlation between risk factors implicitly and therefore there is no need to estimate the correlations between risk factors through a multivariate model. However, time-varying correlations are not modelled in this way. This could be done with a Dynamic Conditional Correlation (DCC) model, for example. In this research, we restrict ourselves to constant correlations between risk factors over time and therefore univariate GARCH modelling with residual drawings from the same periods is sufficient.
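To make the simulation loop concrete, the sketch below implements the univariate FHS procedure in Python for a single risk factor, returning $R$ simulated cumulative $\tau$-day returns. It is only a minimal illustration: the function name, the crude start-up values of the filtering recursion and the fixed random seed are choices made for this example, and the parameter vector `params` is assumed to come from the maximum likelihood estimation described above.

```python
import numpy as np

def fhs_cumulative_returns(returns, params, R=10000, tau=10, seed=0):
    """Filtered historical simulation for one risk factor under an
    AR(1)-GARCH(1,1) model: returns R simulated cumulative tau-day returns.
    `params` = (phi0, phi1, alpha0, alpha1, beta1) is assumed to come from
    a maximum likelihood fit; the start-up values below are illustrative."""
    phi0, phi1, alpha0, alpha1, beta1 = params
    r = np.asarray(returns, dtype=float)
    T = len(r)
    rng = np.random.default_rng(seed)

    # 1) In-sample filtering: conditional means/variances and residuals (eq. 3.36).
    mu = np.empty(T)
    h = np.empty(T)
    mu[0], h[0] = r.mean(), r.var()          # crude initialisation of the recursions
    for t in range(1, T):
        mu[t] = phi0 + phi1 * r[t - 1]
        a_prev = r[t - 1] - mu[t - 1]
        h[t] = alpha0 + alpha1 * a_prev**2 + beta1 * h[t - 1]
    eps_hat = (r - mu) / np.sqrt(h)          # filtered (standardised) residuals

    # 2) Simulation: bootstrap tau residuals per trial and roll the model forward,
    #    starting from the variance and innovation of the last observation.
    cum = np.empty(R)
    for s in range(R):
        eps_tilde = rng.choice(eps_hat, size=tau, replace=True)
        h_prev, a_prev, r_prev = h[-1], r[-1] - mu[-1], r[-1]
        total = 0.0
        for t in range(tau):
            h_t = alpha0 + alpha1 * a_prev**2 + beta1 * h_prev   # GARCH recursion
            mu_t = phi0 + phi1 * r_prev                          # AR(1) mean
            a_t = np.sqrt(h_t) * eps_tilde[t]
            r_t = mu_t + a_t
            total += r_t                                         # builds eq. (3.38)
            h_prev, a_prev, r_prev = h_t, a_t, r_t
        cum[s] = total
    return cum
```

For a portfolio, the same loop would be run per risk factor, with the bootstrap drawing common time indices rather than residual values, so that the residuals of all risk factors are taken from the same dates and the cross-sectional dependence is preserved implicitly.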

Performing FHS for each risk factor gives us a simulated $\tau$-day return distribution for each risk factor $i = 1, \ldots, m$. In this way, empirical distributions of the risk factor returns at a certain horizon are generated:
$$\begin{pmatrix} \tilde r_1^{(1)} & \tilde r_2^{(1)} & \cdots & \tilde r_m^{(1)} \\ \tilde r_1^{(2)} & \tilde r_2^{(2)} & \cdots & \tilde r_m^{(2)} \\ \vdots & \vdots & & \vdots \\ \tilde r_1^{(R)} & \tilde r_2^{(R)} & \cdots & \tilde r_m^{(R)} \end{pmatrix}. \tag{3.40}$$

We have now obtained empirical return distributions for risk factors $i = 1, \ldots, m$, which are also referred to as shocks of the risk factors. We first have to convert them into asset losses so that we can calculate VaR and ES over the P&L distribution. Some assets can depend on multiple risk factor returns. The conversion from risk factor return to the price of an asset depends on the type of asset and the method chosen for the calculation of the return. In the next section we explain which assets we use and how we re-price them from the simulated risk factor returns.

Subtracting today's price from the simulated future price of an asset and multiplying by $-1$ gives us an empirical loss distribution for each asset. From this, and using the portfolio weights, we can calculate an empirical loss distribution for the whole portfolio.
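As a rough sketch of this re-pricing step, the snippet below maps an $R \times m$ matrix of simulated risk factor returns into a vector of portfolio losses. It assumes, purely for illustration, that every asset depends on a single risk factor, that the simulated returns are log returns, and that positions are expressed as holdings per asset; the actual re-pricing used in this thesis depends on the asset type and is described in the next section.

```python
import numpy as np

def portfolio_losses(sim_factor_returns, prices_today, holdings):
    """Convert simulated risk factor log returns (R x m) into R portfolio losses,
    assuming each asset is re-priced from one factor via P_new = P_today * exp(r).
    This linear one-factor-per-asset set-up is an illustrative simplification."""
    sim = np.asarray(sim_factor_returns, dtype=float)   # shape (R, m)
    p0 = np.asarray(prices_today, dtype=float)          # today's prices, shape (m,)
    q = np.asarray(holdings, dtype=float)               # units held per asset, shape (m,)
    sim_prices = p0 * np.exp(sim)                       # simulated future prices
    pnl = (sim_prices - p0) @ q                         # profit and loss per scenario
    return -pnl                                         # loss = -(P&L)

# Toy usage: 3 assets and R = 5 scenarios of simulated factor returns.
rng = np.random.default_rng(2)
print(portfolio_losses(rng.normal(0.0, 0.01, size=(5, 3)),
                       prices_today=[100.0, 50.0, 10.0],
                       holdings=[1.0, 2.0, 5.0]))
```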

Using this empirical P&L distribution of the portfolio, VaR and ES can be calculated. Cont et al. (2010) define the historical risk estimator as the risk measure applied to the empirical loss distribution:
$$\hat\rho(L) = \rho(F_L^{\mathrm{emp}}), \tag{3.41}$$
with $L$ the loss of a portfolio over a given time horizon.

Let $\tilde L^{(s)}$ for $s = 1, \ldots, R$ denote the empirical loss distribution of the portfolio over the time horizon $1, \ldots, \tau$. The portfolio losses first have to be sorted in descending order such that we get an ordered sample $\tilde L^{([1])} \geq \tilde L^{([2])} \geq \ldots \geq \tilde L^{([R])}$.

The historical VaR is simply the upper $\alpha$ quantile of the empirical loss distribution. For a discrete distribution, this is the $([\alpha R]+1)$-th value in the sorted loss sample:
$$\mathrm{VaR}_{1-\alpha} = \tilde L^{([\alpha R]+1)}. \tag{3.42}$$

The historical ES at confidence level $1-\alpha$ is obtained by averaging the $\alpha R$ highest losses:
$$\mathrm{ES}_{1-\alpha} = \frac{1}{\alpha R} \sum_{k=1}^{\alpha R} \tilde L^{([k])}. \tag{3.43}$$

When $\alpha R$ is not an integer, the definition becomes
$$\mathrm{ES}_{1-\alpha} = \frac{1}{\alpha R} \left( \sum_{k=1}^{[\alpha R]} \tilde L^{([k])} + \tilde L^{([\alpha R]+1)} \,(\alpha R - [\alpha R]) \right), \tag{3.44}$$
(Cont et al., 2010), where $[k]$ denotes the integer part of $k$, $[k] = \max\{n \in \mathbb{N} \mid n \leq k\}$. The term $\tilde L^{([\alpha R]+1)}(\alpha R - [\alpha R])$ represents a fraction of the $([\alpha R]+1)$-th ordered loss scenario. For large $R$, this term becomes negligible. It is important that $R$ is chosen large enough to ensure that the ES is based on a sufficient number of data points.
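A direct Python implementation of equations (3.42)-(3.44) is sketched below for a vector of simulated (or historical) portfolio losses. The standard normal toy data and the choice $\alpha = 0.025$ are only used to check the routine against known theoretical values and are not part of the thesis set-up.

```python
import numpy as np

def historical_var_es(losses, alpha=0.025):
    """Empirical VaR_{1-alpha} and ES_{1-alpha} from a vector of R scenario losses,
    following equations (3.42)-(3.44)."""
    L = np.sort(np.asarray(losses, dtype=float))[::-1]   # losses in descending order
    R = len(L)
    k = int(np.floor(alpha * R))                         # [alpha * R]
    var = L[k]                                           # ([alpha*R] + 1)-th largest loss
    # ES: average of the largest alpha*R losses, with a fractional correction
    # when alpha*R is not an integer.
    es = (L[:k].sum() + L[k] * (alpha * R - k)) / (alpha * R)
    return var, es

# Sanity check with R = 100000 standard normal "losses": for alpha = 0.025 the
# estimates should be close to the theoretical values of about 1.96 (VaR) and 2.34 (ES).
rng = np.random.default_rng(0)
print(historical_var_es(rng.standard_normal(100_000), alpha=0.025))
```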

3.4 Robustness

According to Hampel et al. (2011), robust statistics deals with deviations from idealised assumptions in statistics. A robust method is a method that is relatively insensitive to outliers and to failures of assumptions. The outlier problem is well known in statistics and can have consequences for a statistical procedure, as outliers are by definition far away from the bulk of the dataset. Another source of deviations is that assumptions about underlying distributions are only theoretical. Under fully parametric models, for example, where distributions and parameters are fully specified, procedures might appear to be valid, but this tells us nothing about the performance of the estimators when the assumptions do not hold exactly. Sometimes the results change significantly after only a small deviation from a model assumption.

A risk measure is said to be robust if it is not sensitive to outliers or to modelling assumptions, that is, the measure still performs well even if assumptions are somewhat violated. Robust statistics tells us something about the reliability of methods, the relevance of results and the effect of errors on the results.


3.4.1 Quantitative robustness

A practical notion is quantitative robustness, which quantifies the degree of robustness of a statistic. Quantitative robustness is measured by the influence function, which describes the effect of an additional observation in the data on a particular statistic. The effect of adding an observation with value $z$ to a sample on a statistic $T$ is measured by the influence function
$$IC(z, F_L, T) = \lim_{\epsilon \to 0} \frac{T(\epsilon \delta_z + (1-\epsilon)F_L) - T(F_L)}{\epsilon}, \tag{3.45}$$
for any constant $z$ such that the limit exists. Here $\delta_z$ represents the point mass 1 at $z$, the observation that is added to the dataset. The influence function or influence curve was introduced by Hampel et al. (2011) and is also referred to as a sensitivity function.

Cont et al. (2010) provide influence functions for the historical VaR and the historical ES specifically; they can be obtained by simple calculus. The influence function for the historical VaR is given by
$$IC(z) = \begin{cases} \dfrac{1-\alpha}{f(q_\alpha(F_L))} & \text{if } z < q_\alpha(F_L), \\[4pt] 0 & \text{if } z = q_\alpha(F_L), \\[4pt] -\dfrac{\alpha}{f(q_\alpha(F_L))} & \text{if } z > q_\alpha(F_L). \end{cases} \tag{3.46}$$

Similarly, the influence function for the historical ES is given by
$$IC(z) = \begin{cases} -\dfrac{z}{\alpha} + \dfrac{1-\alpha}{\alpha}\, q_\alpha(F_L) - ES_\alpha(F_L) & \text{if } z \leq q_\alpha(F_L), \\[4pt] -q_\alpha(F_L) - ES_\alpha(F_L) & \text{if } z \geq q_\alpha(F_L), \end{cases} \tag{3.47}$$

see Cont et al. (2010) for derivations and proofs. The influence function of the historical ES is linear in $z$, meaning that it is unbounded: the sensitivity can become very large as $z$ becomes extreme. If we compare this to the historical VaR, whose influence function is bounded and does not grow with $z$, we can conclude from the influence functions, and thus from a theoretical point of view, that the historical ES is less robust than the historical VaR. Sensitivity analysis is also studied by Gourieroux and Lu (2006) and Heyde et al. (2007). Heyde et al. (2007) present a theorem showing that coherent risk measures are in general not robust. Gourieroux and Lu (2006) look into the sensitivities of VaR, ES and other risk measures with respect to the confidence parameter $\alpha$. They state that knowledge about the sensitivity of VaR and ES with respect to small changes in the confidence parameter is useful for determining a risk management strategy. This sensitivity is measured by the partial derivative of VaR and ES with respect to the confidence parameter. Their conclusions about whether VaR or ES is more robust against this parameter change are not clear-cut. Note that the influence function measures robustness in terms of sensitivity to outliers, rather than to modelling assumptions. Later, we will extend the definition of the influence function in such a way that we can use it to measure sensitivity to slight parameter modifications in our model.

3.4.2 Estimation error

In Yamai and Yoshiba (2002) the estimation error, that is, the sampling variability due to a limited amount of data, of both VaR and ES is investigated. Sampling variability occurs in FHS since the VaR and ES outcomes depend on random drawings from the historical dataset. The returns observed in the historical data period are themselves a random sample, assumed to represent an unknown population. Estimation errors are related to robustness in the sense that high estimation errors are a sign of a less robust measure. In their paper, Yamai and Yoshiba (2002) run Monte Carlo simulations that show that the estimation error of ES is comparable to that of VaR. However, when the underlying distribution is fat-tailed, the estimation error of ES is higher than that of VaR; as the sample size increases, the estimation error of ES reduces again. They simulated random variables from distributions with different tail parameters, which measure the heaviness of the tail.

Acerbi (2004) looked into the estimation error of ES as well. He indicates that it is not necessarily true that VaR has smaller estimation errors. One might expect VaR to have lower estimation errors since it neglects tail events and is therefore less sensitive to extreme events than ES, but according to him this reasoning is not logical from a risk point of view: it in fact amounts to claiming that it is better to ignore the facts completely than to be short-sighted. He analyses estimation errors of VaR and ES for several distributions, varying the confidence parameter $\alpha$. In an example where a log-normal distribution is assumed, he finds that VaR is not necessarily more stable for small values of $\alpha$ and is less stable for higher values of $\alpha$. However, as tails become heavier, ES declines in precision compared to VaR.

Closed-form expressions for the estimation errors of VaR and ES can be obtained analytically (Yamai and Yoshiba, 2002; Acerbi, 2004). In deriving these formulas, the authors make use of influence functions and asymptotic properties of the estimators.

Giannopoulos and Tunaru (2005) state that ES estimators are more uncertain than VaR estimators. In their analysis, they compare the $\mathrm{VaR}_{0.99}$ and the $\mathrm{ES}_{0.99}$. Following the approach of Acerbi (2004), they provide a closed-form formula for the standard error of the ES estimate under FHS. This analytical expression allows them to calculate the error in their model and can be used as a tool for monitoring the variability of ES under FHS.

3.4.3 Robustness under FHS

Since we focus on FHS, we have to determine how the robustness of ES and VaR can be measured under FHS and how it can be linked to the theory of influence functions and estimation errors. Sensitivity towards modifications in certain parameters can be written in terms of influence functions. We define robustness under FHS as the sensitivity of market risk measures to certain model choices or changes in assumptions within the FHS framework. In this we deviate from earlier investigations into the robustness of VaR and ES, which were based on the addition of extreme observations to the dataset, the original definition of quantitative robustness. We, however, measure the influence function with respect to certain parameter changes in the model.

The robustness of ES and VaR can be measured by varying the conditional volatility model assumed in the FHS framework. We compare market risk outcomes for a few volatility models that are most suitable for our portfolio assets. For example, we can compare the standard GARCH model with a GJR-GARCH model that takes the leverage effect into account, and in this way determine how the assumption of an asymmetric conditional volatility model affects ES and VaR outcomes. The degree to which leverage effects are taken into account is determined by the parameter $\gamma$ of the GJR-GARCH model. Varying $\gamma$ from 0 (such that the model coincides with a standard GARCH model) to higher values gives us insight into the robustness of ES and VaR.
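As a rough illustration of such a sensitivity check, the sketch below recomputes a one-day FHS VaR and ES while varying $\gamma$ in a GJR-GARCH(1,1) variance forecast, using a common parameterisation of that model. All numerical inputs, including the parameter values, the last innovation and variance, and the standard normal stand-in for the filtered residuals, are hypothetical; in the empirical chapters the fitted models and the actual filtered residuals are used instead.

```python
import numpy as np

def one_day_fhs_risk(eps_hat, mu_f, a_last, h_last,
                     alpha0, alpha1, beta1, gamma, alpha=0.025):
    """One-day FHS VaR and ES (as losses) under a GJR-GARCH(1,1) variance
    forecast; gamma = 0 recovers the plain GARCH(1,1) forecast."""
    # Common GJR-GARCH(1,1) recursion: the gamma term adds weight when the
    # last innovation was negative (leverage effect).
    h_next = alpha0 + (alpha1 + gamma * (a_last < 0)) * a_last**2 + beta1 * h_last
    sim_returns = mu_f + np.sqrt(h_next) * eps_hat     # re-scale the filtered residuals
    losses = np.sort(-sim_returns)[::-1]               # losses, largest first
    R = len(losses)
    k = int(np.floor(alpha * R))
    var = losses[k]
    es = (losses[:k].sum() + losses[k] * (alpha * R - k)) / (alpha * R)
    return var, es

# Hypothetical fitted values: compare gamma = 0 (GARCH) with positive gamma.
rng = np.random.default_rng(1)
eps_hat = rng.standard_normal(2000)                    # stand-in for filtered residuals
base = dict(mu_f=0.0, a_last=-0.02, h_last=2e-4, alpha0=1e-6, alpha1=0.08, beta1=0.90)
for gamma in (0.0, 0.05, 0.10):
    var, es = one_day_fhs_risk(eps_hat, gamma=gamma, **base)
    print(f"gamma = {gamma:.2f}: VaR = {var:.4f}, ES = {es:.4f}")
```

The change in the VaR and ES estimates per unit change in $\gamma$ then plays the role of the influence function with respect to this parameter modification.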
