
University of Amsterdam

MASTER’S THESIS

What is the effect of liquidity risk and

intraday trading on capital requirements

under Basel III?

Author: Matěj Čonka (11797606)

Supervisor: R.C. Sperna Weiland MSc.

Academic Year: 2017/2018


Statement of Originality

This document is written by Matěj Čonka, who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Acknowledgments

First, I would like to express my gratitude to R.C. Sperna Weiland MSc. for guidance during the entire process of completion of this thesis.

Second, this thesis was completed during a thesis internship at Optiver Europe. I would therefore like to thank all my colleagues, especially David Kuhnl and Dries Maathuis, for their support and valuable insights.


Abstract

This thesis investigates the presence of liquidity risk premiums in risk measures such as Value at Risk and Expected Shortfall, at both daily and intradaily frequencies, and their effect on the risk capital banks need to hold under the Basel III regulatory framework. We employ a parametric approach and GARCH-type modelling to calculate the risk measures, using data for three stocks included in the SX5E Index: Royal Dutch, Adidas and Siemens. The results show the presence of liquidity risk premiums, in some cases however negative ones. These premiums increase in magnitude when using the intraday dataset. Likewise, the intraday frequency alone increases the risk capital. Out of the three types of models considered, GARCH, GJR-GARCH and EGARCH, the latter seems to perform best in terms of backtesting, model estimation and the calculation of risk capital. Finally, when assuming the Student-t distribution, 97.5% Expected Shortfall gives considerably different capital requirements than 99% Value at Risk.


Contents

List of Tables

List of Figures

1 Introduction

2 Theoretical background and literature review

2.1 Liquidity risk

2.1.1 Market liquidity risk

2.1.2 Risk management and liquidity

2.2 Basel accords

2.3 Risk management measures

2.3.1 Value at risk

2.3.2 Expected shortfall

3 Data and Methodology

3.1 Return creation

3.2 Data description

3.3 Seasonality factor

3.4 Volatility models

3.4.1 GARCH

3.4.2 GJR-GARCH

3.4.3 EGARCH

3.5 Model assessment

3.6 Liquidity premium

3.7 Backtesting and capital requirements

4 Analysis

4.1 Model estimation results

4.2 Backtesting results

4.3 Liquidity premium results

4.4 Risk capital results

4.5 Discussion

5 Conclusion

A Title of Appendix One

A.0.1 Jarque-Bera Test

A.0.2 Ljung-Box Test

A.0.3 Number of Lags

A.0.4 Number of degrees of freedom

A.0.5 Models' estimation results

A.0.6 Liquidity Premiums

A.0.7 Capital Requirements - Robustness

A.0.8 Graphs


List of Tables

3.1 Summary statistics

3.2 Traffic Light backtesting

4.1 EGARCH Results - Daily Sample

4.2 Number of Breaches

4.3 Liquidity Premiums

4.4 Capital Requirements

A.1 Number of lags for volatility models as a result of AIC

A.2 Number of degrees of freedom

A.3 GARCH Results - Intraday Sample

A.4 EGARCH Results - Intraday Sample

A.5 GJR-GARCH Results - Intraday Sample

A.6 Capital Requirements - Intradaily

A.7 Liquidity Premiums - Daily Results from Intraday Frequencies

A.8 Capital Requirements - Average of 20 Days for Daily Frequencies

A.9 Capital Requirements - Average of 20 Days for Intradaily Frequencies


List of Figures

2.1 Exogenous and endogenous liquidity

3.1 Price Evolution

3.2 Seasonality - Liquidity-adjusted Returns

3.3 Histograms - Deseasonalized Returns

4.1 LVaR from Historical Simulation

4.2 10-minute-ahead LIVaR and IVaR - EGARCH

4.3 1-day-ahead LIVaR and LVaR - EGARCH

A.1 Seasonality - Frictionless Returns

A.2 Histograms - Deseasonalized Returns

A.3 Histograms - Seasonalized Returns

A.4 Histograms - Daily Returns

A.5 1-day-ahead LIVaR and LVaR - GARCH

A.6 1-day-ahead LIVaR and LVaR - GJR-GARCH

A.7 1-day-ahead LIVaR, IVaR, LVaR and VaR - GARCH

A.8 1-day-ahead LIVaR, IVaR, LVaR and VaR - EGARCH

A.9 1-day-ahead LIVaR, IVaR, LVaR and VaR - GJR-GARCH

A.10 1-day-ahead LIES, IES, LES and ES - GARCH

A.11 1-day-ahead LIES, IES, LES and ES - EGARCH

A.12 1-day-ahead LIES, IES, LES and ES - GJR-GARCH

A.13 VaR from Historical Simulation

A.14 10-minute-ahead LIVaR and IVaR - GARCH


Chapter 1

Introduction

“Market prices represent achievable transaction prices.”

- Philippe Jorion

Value at Risk (VaR) is a longstanding, widely used risk measure in the financial sector, employed by trading companies, banks, institutional investors and regulators. In the light of the recent financial crisis, several of its weaknesses have been highlighted, such as its inability to capture tail risk and the magnitude of losses. Therefore, there is a shift towards the Expected Shortfall (ES) measure, which is present in the newest version of the Basel III regulatory framework. These measures are calculated using market prices which, in accordance with the theory, should represent actual transaction prices. That is to say, market conditions, volume availability and the effect of portfolio liquidation ought to be reflected in market prices. With such effects, market liquidity risk emerges. For the risk measures to be correct, prices should be adjusted for liquidity risk. Albeit VaR and ES are static measures for a fixed portfolio, liquidity risk is involved even for yet unrealized transactions. The adjustment is even more pronounced during a crisis, when liquidity dries up in the markets, and the presence of a liquidity shortage may be the precursor of a consecutive crisis. In such a scenario, a liquidity-adjusted risk measure may more accurately forecast the following crisis. The indisputable connection between market risk and liquidity is demonstrated by the case of the Long Term Capital Management hedge fund, showing the effect of liquidation risk on prices. We argue that the inclusion of the liquidity dimension creates more precise risk measures not only for market participants but also for market regulators. That is, in the end, also the conclusion of the Basel Committee on Banking Supervision (Giot and Gramming, 2002; Jorion, 2000; Borio, 2004; Dionne et al., 2015; Basel Committee on Banking Supervision, 2011).

In 2010, the Basel Committee on Banking Supervision introduced the so-called Basel III, a banking regulatory framework which has its roots in Basel I, introduced in 1988. This regulation, among other things, sets limits for financial institutions in terms of capital requirements and prescribes methods for calculating risk capital for market risk using specific risk measures. Even though the Basel Committee on Banking Supervision acknowledges market liquidity risk (2011), it remains unincorporated directly into Basel III-based risk measure requirements in the form of liquidity-adjusted returns; currently, the regulation deals only with differing liquidity horizons. As mentioned, recent studies stress the importance of market liquidity risk as a part of market risk. That motivates us to explore the effect of market liquidity risk on the capital requirements of banks, with a focus on the endogenous part of liquidity risk. Following previous studies, we believe liquidity-adjusted returns are the most appropriate inputs for risk measures such as Value at Risk and Expected Shortfall.

Furthermore, apart from liquidity risk, we are interested in the effect of intraday trading on the capital requirements of banks. First, even though banks may not participate directly in intraday or high-frequency trading, they use intraday VaR for internal risk control and portfolio management (Gourieroux and Jasiak, 2010). Second, if banks opt to utilize more sophisticated models for computing VaR or ES than historical simulation, for instance GARCH-based models, such a model requires around 4 years of data in order to work sufficiently (when working with daily frequencies). This period diminishes substantially when using intraday frequencies. Third, most risk management models presume conditional normality of returns, which is usually unsatisfied; intraday returns tend to fit the normal distribution better (Beltratti and Morana, 1999). If not specified otherwise, GARCH-based models allow for other distributional specifications. Fourth, there may be a significant difference in the effect of market liquidity risk on the risk measures when using either daily or high-frequency data.

The main objective of this thesis is to build upon existing literature concerning intraday trading and the liquidity adjustment of risk measures, and to explore the effect of those two concepts on banks' capital requirements under Basel III. By employing GARCH-based models for estimating future volatility and risk measures such as Value at Risk and Expected Shortfall, we answer the following set of questions: Is a liquidity premium present for the liquidity-adjusted risk measures? Is liquidity risk more pronounced for intraday trading? Is there a difference between Value at Risk and Expected Shortfall when computing the risk capital under the Basel Accords? Which models work best for estimating such risk measures? Does intraday trading itself affect the risk capital? Does the underlying data follow intraday seasonal patterns? And is the assumption of normally distributed returns satisfied? Although there exist articles answering some of these questions, we provide a systematic and comprehensive overview. To the best of our knowledge, there has not yet been a study comparing liquidity-adjusted risk measures for both high-frequency and daily frequencies and linking them to the Basel regulatory framework. For this purpose, we collect two sets of data, one with 10-minute intervals and the second with daily intervals, containing 10 levels of prices and volumes available in the limit order book (LOB). Each data sample is collected for three stocks included in the SX5E Index, Royal Dutch, Adidas and Siemens, which were selected mainly based on data availability.

Results obtained from the GARCH-based modelling verify the presence of the liquidity premium for liquidity-adjusted VaR and ES, and show that the magnitude of liquidity risk is amplified for the high-frequency data. Surprisingly, some of the liquidity premiums are negative. The EGARCH model seems to work better than GARCH and GJR-GARCH for the underlying data set. When assuming the Student-t distribution, the risk capital is considerably different when computed from 97.5% ES than from 99% VaR. Extensive inspection and discussion of these and additional results follow throughout the thesis.

This thesis is structured in the following way: Chapter 2 describes the theoretical background of liquidity risk, risk capital computation, Value at Risk and Expected Shortfall, and summarizes existing research in the field. In Chapter 3, we describe in detail the data sample used and elaborate on the process of incorporating liquidity into the risk measures. We continue with an analysis of the seasonality pattern in our data and clarify the volatility models employed. In the context of these models, we introduce the backtesting scheme. Chapter 4 displays and interprets the results. Finally, the Conclusion briefly summarizes the outcomes of the thesis.

Chapter 2

Theoretical background and literature review

“The more you warn your colleagues about the tail risks - the rare but devastating events that can bring the bank down - the more they roll their eyes, give a yawn and change the subject.”

- Leonard Matz

This chapter discusses the theoretical background and reviews the literature on selected key topics. First, we present definitions of liquidity risk and distinguish between exogenous and endogenous liquidity; previous research concerning the incorporation of liquidity risk into risk measures is presented subsequently. Next, the Basel regulatory framework is introduced. In the third part of this chapter, we define the Value at Risk and Expected Shortfall measures and elaborate on their usage in high-frequency trading.

2.1 Liquidity risk

Liquidity is defined as the ease with which an asset can be traded (Amihud et al., 2005). In general, we divide liquidity risk into trading liquidity risk and funding liquidity risk. The latter relates to the financial institution's balance sheet risk,


i.e., for instance, the ability of the institution to meet its liabilities (Marrison, 2002). In this study, we focus on the former, market/trading liquidity risk, and its relation to Value at Risk, Expected Shortfall and the Basel regulatory framework.

2.1.1 Market liquidity risk

The concept of market liquidity risk is a longstanding topic in the financial literature. Stock prices can be affected by liquidity even when the fundamentals of the companies stay unchanged (liquidity premium), and high liquidity is necessary for market efficiency (Amihud et al., 1997). When studying the liquidity premium, one may examine the relative liquidity premium, which compares the difference in prices of two identical stocks except that one stock is more liquid than the other (Hibbert et al., 2009). Stange and Kaserer (2009) define market liquidity cost as "the cost of trading an asset relative to fair value." The sources of liquidity cost are order-processing cost; inventory risk, associated with the need to sell inventory coupled with a lack of demand to buy it; adverse selection (private information), i.e. liquidity cost arising from the information disadvantage of uninformed traders; and search friction, which is more common on over-the-counter markets. One of the first to define market liquidity risk is Kyle (1985), dividing it into three forms: (1) tightness, (2) depth and (3) resiliency.

• Tightness relates to the transaction cost excluding operational cost. Such cost arises when trading the stock through the bid and ask quoted on the market. These quotes differ from the fair average market price, leading to a transaction cost for the investor. Tightness is therefore measured as the bid-ask spread.

• Depth refers to the transaction size which is necessary to change the price of an asset.


• Resiliency indicates the speed with which the asset price rebounds to its equilibrium after a market shock appears.
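Tightness, for example, can be computed directly from quoted prices. A minimal sketch with hypothetical quotes (the function names and numbers are ours, not from the thesis's dataset):

```python
def quoted_spread(bid: float, ask: float) -> float:
    """Absolute bid-ask spread: Kyle's tightness measure."""
    return ask - bid

def relative_spread(bid: float, ask: float) -> float:
    """Spread relative to the mid price, comparable across stocks."""
    mid = (bid + ask) / 2.0
    return (ask - bid) / mid

# Hypothetical best quotes for one stock
bid, ask = 99.95, 100.05
print(round(quoted_spread(bid, ask), 4))    # absolute spread, ~0.10
print(round(relative_spread(bid, ask), 6))  # ~0.001, i.e. 10 basis points
```

The relative version is the one usually used in cross-sectional comparisons, since an absolute spread of 0.10 means very different liquidity for a 10-euro stock than for a 100-euro stock.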

A different distinction of market liquidity risk is offered by Buhl (2004), with three dimensions similar to Kyle (1985): the volume dimension, price dimension and time dimension. However, we stick to the definition of Bangia et al. (1990a), who distinguish between exogenous and endogenous liquidity risk.

Figure 2.1: Exogenous and endogenous liquidity

Source: Basel Committee on Banking Supervision, 2011.

Exogenous liquidity risk is essentially the same as the tightness mentioned above. It refers to the average transaction cost set by the market, either by the market makers on a quote-driven market or by the investors themselves on a limit order-driven market. Bangia et al. (1990a) note that such liquidity fluctuation is beyond the control of the individual trader. In contrast, endogenous liquidity is affected by the actions of individuals and is related to the size of the position: higher endogenous liquidity risk arises with a larger position size. More precisely, endogenous liquidity appears when the size of the order is larger than the quote depth/size associated with the best bid or ask. Even though the Basel Committee on Banking Supervision (2016) incorporates a liquidity horizon into the risk management measures under Basel III, LOB-related liquidity risk remains neglected. In the next section, we discuss possibilities for incorporating liquidity risk into risk management frameworks such as VaR and ES.
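The endogenous component can be made concrete by walking through the LOB price levels until an order is filled: the volume-weighted execution price deviates from the best quote exactly when the order exceeds the depth at that quote. A minimal sketch under assumed, hypothetical book data (the thesis's sample has 10 levels; the function names are ours):

```python
def execution_price(levels, order_size):
    """Volume-weighted average price from walking the ask side of a
    limit order book. `levels` is a list of (price, volume) pairs
    sorted from best (lowest ask) to worst."""
    remaining = order_size
    cost = 0.0
    for price, volume in levels:
        take = min(remaining, volume)
        cost += take * price
        remaining -= take
        if remaining == 0:
            return cost / order_size
    raise ValueError("order size exceeds displayed depth")

# Hypothetical ask side: best ask 100.00 with 500 shares, then deeper levels
asks = [(100.00, 500), (100.05, 700), (100.10, 1000)]

best_ask = asks[0][0]
vwap = execution_price(asks, 1000)   # fills 500 @ 100.00 and 500 @ 100.05
endogenous_cost = vwap - best_ask    # price impact beyond the best quote
print(round(vwap, 4), round(endogenous_cost, 4))
```

For a 400-share order the execution price equals the best ask and the endogenous cost is zero, which is exactly the boundary Bangia et al.'s distinction draws.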

2.1.2 Risk management and liquidity

Throughout this study, we incorporate liquidity risk into market risk measures. Exogenous liquidity risk, as explained in the previous section, is in theory easy to incorporate into such risk measures; for instance, VaR can simply be calculated from the bid-ask spread variation. On the contrary, endogenous liquidity risk is generally believed to be harder to account for, and yet it may have a much larger effect on the price of the asset or portfolio (Basel Committee on Banking Supervision, 2013). Rogers and Singh (2005) describe the methodology authors generally use: an optimal liquidation strategy, from which the expected liquidation price is deduced. Observing and examining endogenous liquidity is however problematic, as researchers frequently lack data from the different levels of the order book. That is why proxies of market liquidity are widely used.

Wu (2009) scales the total value of the market by Amihud's (2002) liquidity measure. He finds that liquidity risk accounts for more than 22% of the overall risk for the most illiquid portfolio. Bervas (2006) furthermore suggests using another measure based on volume and returns, Kyle's lambda. The liquidity discount is examined by Jarrow and Subramanian (2001), who modify the variance and mean used for VaR computation so that they incorporate these discounts and the liquidation time. Hasbrouck and Seppi (2001) recommend using the intraday quote slope as a market liquidity proxy.

Among the researchers with sufficient data from the LOB, Berkowitz's (2000) study is one of the pioneering works that include liquidity risk associated with position sizes beyond the best bid/ask. He investigates the RiskMetrics as well as the EGARCH model, both liquidity-adjusted and unadjusted, for the period during and after the crisis, finding that the trading volume adjustment improves the VaR model, particularly in the crisis period.

Stange and Kaserer (2009) integrate liquidity risk into daily VaR by creating a weighted-spread liquidity measure, which is subsequently used to calculate actual (liquidity-adjusted) returns, as opposed to frictionless returns calculated from the best bid/ask. They discover a 25% increase in VaR when using actual returns, even for liquid stocks listed on the DAX Index, and subsequently show that simply adding liquidity risk to the VaR overestimates the risk measure by 100%.

A similar approach is followed by Giot and Gramming (2002) in their study of liquidity risk on the Xetra automated auction market. Equivalently to Stange and Kaserer (2009), an underestimation of frictionless VaR is found; interestingly, when increasing the time horizon to the one specified by the Basel Accord, the importance of liquidity risk curtails. Among the most recent studies, Dionne et al. (2015) examine liquidity-adjusted intraday VaR (LIVaR) at tick-by-tick frequencies of stock returns. A Log-ACD-VARMA-MGARCH model is used for the multivariate estimation of durations, frictionless returns and actual returns, and Monte Carlo simulation for multiple-step-ahead forecasts. Contrary to the previous studies, Dionne et al. (2015) do not model the returns themselves but their changes. According to their results, liquidity risk may account for more than 32% of overall risk.

Lastly, a dynamic conditional correlation multivariate GARCH model is employed by Qi and Ng (2009) to jointly model bid and ask prices and different trading volume levels on the LOB; the liquidity risk premium found is however negligible.

2.2 Basel accords

"To enhance understanding of key supervisory issues and improve the quality of banking supervision worldwide" is the officially stated goal of the Basel Committee on Banking Supervision (BCBS) (originally the Committee on Banking Regulations and Supervisory Practices), which is part of the Bank for International Settlements (BIS). The committee was set up in 1974 by ten countries: Germany, the United


Kingdom, Japan, Belgium, France, Canada, Italy, Sweden, the Netherlands and the United States. Several reasons led to the committee's creation: the collapse of the Bretton Woods system, the inflation and oil price crises, and the high indebtedness of sovereigns. Currently, the BCBS has 28 members. The BCBS creates rules for banks and monitors their implementation; however, the rules are not directly legally binding on the members, who need to implement them locally. The first rule document created by the BCBS was the Concordat in 1975, setting responsibilities for banks' subsidiaries. The most pronounced regulations are Basel I-III.

Basel I was introduced in 1988, setting the three main pillars of the framework: minimum capital requirements for banks, supervisory review and market discipline. This regulatory regime mainly focused on credit risk and its connection to Tier 1 and Tier 2 capital. Banks' assets were categorized into five categories based on their credit riskiness. A ratio of total capital to risk-weighted assets was then created, which needed to be at least 8% (Basel Committee on Banking Supervision, 1988). Basel I was amended several times so that it subsequently included market risk and Tier 3 regulation. Introduced in 2004, Basel II built upon Basel I and assimilated operational risk and market risk into the computation of capital requirements; the regime took full effect in 2008. Capital requirements under Basel II are calculated as follows:

Capital requirements = 0.08 * \sum_i w_i A_i (credit risk) + reserve from operational risk + reserve from market risk    (2.1)
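With hypothetical numbers, Equation 2.1 reduces to simple arithmetic; the risk weights, asset values and reserves below are illustrative assumptions, not figures from the thesis:

```python
# Hypothetical credit exposures: (risk weight w_i, asset value A_i)
credit_positions = [(0.0, 1000.0),   # e.g. sovereign debt, 0% weight
                    (0.5, 400.0),    # e.g. residential mortgages, 50% weight
                    (1.0, 600.0)]    # e.g. corporate loans, 100% weight

risk_weighted_assets = sum(w * a for w, a in credit_positions)  # 0 + 200 + 600

operational_reserve = 30.0  # assumed operational-risk charge
market_reserve = 45.0       # assumed market-risk charge (the focus of this thesis)

capital_requirement = (0.08 * risk_weighted_assets
                       + operational_reserve + market_reserve)
print(capital_requirement)  # 0.08 * 800 + 30 + 45
```

The 8% factor applies only to the credit-risk term; the operational and market reserves enter additively, which is why a change in the market-risk measure (VaR versus ES) feeds one-for-one into the total requirement.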

In this thesis, we focus on the third component of the capital requirements, i.e. market risk. More specifically, our spotlight is on equity positions, not on interest rate or commodities risk. A bank may opt to adopt either the prescribed standardized approach or an internal approach. When banks choose internal methods for the calculation of capital requirements, the methodology used is based on daily VaR (Basel Committee on Banking Supervision, 2008). At the beginning of every trading day, banks need to report their VaR forecast and hold the calculated required capital.

A negative aspect of this Accord is its pro-cyclicality: banks have to keep more capital during an economic crisis, leading to less lending and making the crisis more severe (Teply, 2010). In 2010, the BCBS came up with the third Basel Accord, which specifies a leverage ratio and is concerned with liquidity risk and stress testing. Banks are required to maintain a Liquidity Coverage Ratio and a Net Stable Funding Ratio. While the BCBS is making steps towards incorporating liquidity risk, the current capital requirement system still lacks precision on this topic.

Like previous versions of the Basel Accord, Basel III was amended several times and should be implemented in 2019. At early stages of the proposed regulation, the internal bank method for calculating capital requirements from market risk was VaR with a 99% confidence level. In recent years, there has been a shift to another risk measure, Expected Shortfall with a 97.5% confidence level (Basel Committee on Banking Supervision, 2016). In the following two sections, we discuss VaR and ES theory and related work, as we use both methods in our analysis. Concrete models and methods for the calculation of capital requirements are described in the methodology part.

2.3 Risk management measures

2.3.1 Value at risk

Value at Risk is a ubiquitous risk measure used by banks and other financial institutions, traders, brokers and regulators. For instance, traders can set their intraday trading limits based on VaR. It focuses on losses and profits for different kinds of activities in the company and aggregates their risk. Given a confidence level and time horizon, VaR is usually defined as the loss in portfolio or asset value that is not exceeded with p% certainty over a specific time period. As our models are based on logarithmic returns, our definition of


VaR is no different:

Pr(-R_{PF} > VaR) = p    (2.2)

Equation 2.2 says that we obtain a return worse than the VaR with probability p. When we assume normally distributed returns, it can be mathematically transformed to:

VaR^p_{t+1} = -\sigma_{PF,t+1} \Phi^{-1}_p    (2.3)

where \sigma_{PF,t+1} is the forecasted standard deviation and \Phi^{-1}_p the inverse of the standard normal cumulative distribution function. When we consider the Student-t distribution instead of the standard normal, VaR is given as:

VaR^p_{t+1} = -\sqrt{\nu^{-1}(\nu - 2)} \, t^{-1}_\nu(p) \, \sigma_{PF,t+1}    (2.4)

where \nu is the number of degrees of freedom and t^{-1}_\nu the inverse of the Student-t cumulative distribution function. Essentially, this means we need to forecast the volatility for the next period, and then we can calculate the VaR for that period. There exist three major methods of calculating VaR. The first is historical simulation. The clear merit of this method is its simplicity, as it only takes a percentile of past returns; furthermore, it does not require any parametric model estimation (no bias problems). The downside is that it is not built upon any dynamic model: historical simulation assumes that history repeats itself and therefore does not account for financial crises or trend changes in the markets. As this is still a widely used methodology among banks, we compute a simple historical-simulation VaR to have a measure comparable to our main method. The second way to estimate VaR is through Monte Carlo simulation. It uses information from the history of returns and performs randomly generated simulations from which the volatility is estimated. The idiosyncratic judgment of the researcher is both an advantage and a disadvantage of this method; a further drawback is its high computational requirements. The third method, which is also employed in this thesis, is parametric VaR. The parametric approach estimates the stochastic volatility of returns, for example by using a Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model. The upside is the simplicity of estimating the volatility of individual assets. The caveat is the underlying assumption of normality of returns, which is very often violated, and the complexity of the model's calculation for a portfolio containing a large number of assets, i.e. multivariate estimation.
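The first and third methods can be contrasted in a few lines. A sketch under stated assumptions: a toy return series, a simple empirical-quantile convention for historical simulation, and the Gaussian parametric VaR of Equation 2.3 with the volatility forecast replaced by the sample standard deviation (in the thesis it would come from a GARCH-type model):

```python
import math
from statistics import NormalDist, pstdev

def historical_var(returns, p=0.01):
    """Historical simulation: the empirical p-quantile of past returns, sign-flipped.
    One simple convention; implementations differ in how they interpolate."""
    ordered = sorted(returns)
    idx = max(0, math.ceil(p * len(ordered)) - 1)
    return -ordered[idx]

def parametric_var(sigma, p=0.01):
    """Gaussian parametric VaR (Eq. 2.3): -sigma * Phi^{-1}(p)."""
    return -sigma * NormalDist().inv_cdf(p)

# Toy daily log returns (hypothetical, far too short for real use)
returns = [0.001, -0.004, 0.002, -0.012, 0.005,
           -0.007, 0.003, -0.002, 0.009, -0.015]

sigma = pstdev(returns)  # stand-in for a one-step-ahead GARCH volatility forecast
print(historical_var(returns, p=0.10))   # worst decile of the toy sample
print(parametric_var(sigma, p=0.01))     # model-based 99% VaR
```

The contrast in data needs is visible even here: the historical quantile at p = 0.01 is meaningless with 10 observations, while the parametric number only requires a volatility estimate, which is precisely the argument for GARCH-based modelling at intraday frequencies.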

Value at Risk in general has several weaknesses. Dependence on the assumption of a Gaussian distribution of returns is one of the most pronounced: departing from the theory, it is a broad belief that returns do not follow normality. The next shortcoming arises from the arbitrarily selected quantile bound and rigid time period; even with a correctly specified model, these two arbitrary specifications may bias the VaR. Besides these, the measure may overstate or understate the risk when aggregating to the portfolio level. In addition, VaR fails in times of economic distress due to the presence of tail risk: VaR is by definition concerned only with the number of losses that exceed the VaR, not with the magnitude of such losses (Chen, 2014). This last obstacle is overcome by another risk measure, expected shortfall, explained in the following section.

In the past decades, there has been substantial development in high-frequency trading, leading to the creation of risk measures for intraday trading. Giot (2002) is one of the first to examine intraday VaR (IVaR), using GARCH, Student-GARCH, RiskMetrics and duration models (for unequally spaced data). He moreover points out an important feature of intraday returns: their seasonality. Giot and Laurent (2001) utilize intraday trading information to create a daily realized variance. The VaR based on the realized variance is then compared to the VaR based on normal daily returns; no significant difference is found. Beltratti and Morana (1999) contrast VaR computation with high-frequency and daily data for the Deutsche mark/US dollar exchange rate. GARCH and FIGARCH models are employed to estimate multi-period volatility, concluding that intraday data are useful for more frequent volatility assessment but not very useful for improvement in measuring long-run volatility. Moreover, they point out that filtering out the seasonality in intraday returns, by either a stochastic or a deterministic approach, improves the estimation of the models.
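The realized-variance construction used by Giot and Laurent (2001) is straightforward to sketch: the daily realized variance is the sum of squared intraday log returns. The values below are illustrative, not from the thesis's sample:

```python
import math

def realized_variance(intraday_returns):
    """Daily realized variance: sum of squared intraday (e.g. 10-minute) log returns."""
    return sum(r * r for r in intraday_returns)

# Hypothetical 10-minute log returns for one trading day
day_returns = [0.0012, -0.0008, 0.0004, -0.0015, 0.0009, -0.0003]

rv = realized_variance(day_returns)
realized_vol = math.sqrt(rv)  # daily volatility estimate, usable where a GARCH forecast would go
print(rv, realized_vol)
```

A full trading day at 10-minute spacing yields around 50 such returns, so the estimator is far less noisy than a single squared daily return, which is the appeal of intraday data for volatility measurement.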

Other papers follow the one by Beltratti and Morana (1999). The use of tick-by-tick data for the analysis of market risk is investigated by Dionne et al. (2009), exploiting a UHF-GARCH model and Monte Carlo simulation to specify the joint process of durations and intraday returns for several stocks traded on the Toronto Stock Exchange. They conclude that their model works well out-of-sample for most time horizons. A similar study is written by Liu and Tse (2013), who however use an AACD model for duration modelling; they conclude that their model outperforms the one used by Dionne et al. (2009) based on the backtesting procedure. A different approach is followed by Barunik and Zikes (2014), modelling only specific quantiles of returns and utilizing realized volatility; such a method does not require the assumption of normality of returns.

Following the literature, VaR stands for Value at Risk based on daily returns, LVaR for Value at Risk based on liquidity-adjusted daily returns, IVaR for Value at Risk based on intradaily returns, and LIVaR for Value at Risk based on liquidity-adjusted intradaily returns. The same definitions apply analogously to ES, LES, IES and LIES.

2.3.2 Expected shortfall

In the previous section, we discussed the fact that VaR is concerned only about the probability and not about the magnitude of the losses. The magnitude of the loss is nonetheless important from the risk manager stand of point. The extreme loss may cause the distress of the company be more likely (Marrison, 2002). In 2013, BCBS issued a document with the proposal to replace the current risk management technique (VaR) for Expected Shortfall. The conse-quence is the inclusion of ES into Basell III, after its implementation in 2019. Note that in this thesis we are not interested in exploiting the stress version


of ES and the liquidity horizon-adjusted ES. As Chang (2015) illustrates, ES is a consistent risk management measure which is already implemented in the insurance sector. We define ES, sometimes referred to as Tail VaR, as:

ES^p_{t+1} = −E_t( R_{PF,t+1} | R_{PF,t+1} < VaR^p_{t+1} )   (2.5)

Assuming normality of returns, a mathematical transformation leads to:

ES^p_{t+1} = −σ_{PF,t+1} · φ(Φ⁻¹_p) / p   (2.6)

where φ is the standard normal density function and Φ the standard normal cumulative distribution function. For the Student-t distribution, the expected shortfall is given by:

ES^p_{t+1} = −σ_{PF,t+1} · [ f_ν(t⁻¹_ν(q)) / (1 − q) ] · [ (ν − 2 + (t⁻¹_ν(q))²) / (1 − ν) ]   (2.7)

where t⁻¹_ν is the Student-t quantile function (the inverse cumulative distribution function) and f_ν is the Student-t density function. Since ES is essentially a transformation of VaR, it retains its simplicity, though that holds only when utilizing relatively simple distributions of returns (Gaussian, Student-t). Following Artzner et al. (1997), ES is superior to VaR because it does not disregard losses "beyond the percentile" and is subadditive. Despite these benefits, ES cannot be backtested directly against historical values and is referred to as not elicitable (Chen, 2014).
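The Gaussian case of equation 2.6 can be sketched in a few lines; the volatility value and the convention of reporting VaR and ES as positive loss magnitudes are assumptions made purely for illustration:

```python
from statistics import NormalDist

def gaussian_var_es(sigma, p):
    """1-step-ahead VaR and ES (eq. 2.6) for a zero-mean normal return
    with volatility sigma and tail probability p, reported as positive
    loss magnitudes (a sign convention chosen for this sketch)."""
    z = NormalDist().inv_cdf(p)            # Phi^{-1}(p), negative in the left tail
    var = -sigma * z                       # Gaussian VaR
    es = sigma * NormalDist().pdf(z) / p   # ES = sigma * phi(Phi^{-1}(p)) / p
    return var, es

# Under normality, 97.5% ES roughly coincides with 99% VaR (Danielsson, 2013)
var99, _ = gaussian_var_es(sigma=1.0, p=0.01)
_, es975 = gaussian_var_es(sigma=1.0, p=0.025)
```

Evaluating the two numbers above shows they differ by roughly half a percent of volatility, which is the normality result referred to below.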

For the estimation of ES, we employ the same volatility models as for VaR. We are particularly interested in how the change to the ES framework alters the amount of capital requirements for banks, together with the effect of liquidity risk. Danielsson (2013) provides a partial answer to this question. He studies the quantitative impact of the move from 99% VaR to 97.5% ES. The results show that under the normality assumption the two are essentially the same, while under the Student-t distribution ES is greater. Chang et al. (2015) compare VaR and ES using a stochastic dominance framework and do not find support for rejecting the null hypothesis of stochastic dominance of ES.


Chapter 3

Data and Methodology

“VaR is only as good as its backtest. When someone shows me a VaR number, I don’t ask how it is computed, I ask to see the backtest.”

- Aaron Brown

The following sections explain in detail the sample data set used and the process of estimating the risk measures. We start by clarifying how the returns are created, how we incorporate the liquidity dimension into them, and by describing the data. We proceed by presenting our method for accounting for seasonality in the returns. Next, the volatility models and the liquidity premium are defined. Last, the backtesting procedures and the computation of capital requirements are introduced.

3.1 Return creation

In this section, we describe the process of creating the stock returns which are the necessary inputs for our volatility modelling and for the risk measures. First, we introduce the frictionless returns, which are the ordinary returns computed from the time development of the best bids available at a specific point in time. We examine only the bid side of the LOB, where we can observe the possible ex-ante liquidation of a bank's holding positions. Therefore, our frictionless return


is defined as the logarithmic ratio of the best bid at time t and the best bid at time t − 1 (the (1) stands for the level of the LOB closest to the midpoint):

R^f_{i,t} = ln( b_t(1) / b_{t−1}(1) )   (3.1)

For that, we construct an equally spaced price-point process with a 10-minute grid for intraday data and a 1-day grid for daily data. This return series already incorporates exogenous liquidity risk, as it assimilates the transaction costs arising from not trading at the midpoint value. To embody endogenous liquidity risk, the liquidity-adjusted (actual) returns are created, i.e. returns created from the volume-weighted average bids:

R^a_{i,t} = ln( b_t(v) / b_{t−1}(1) )   (3.2)

The volume-weighted bid is constructed by averaging the products of bids and corresponding volumes by the cumulative volume across all levels of the LOB:

b_t(v) = ( Σ_i b_{i,t} v_{i,t} ) / v   (3.3)

where the cumulative volume v is the pre-defined threshold of trading volume at the specific time and i is the i-th level of the LOB. We set v to the minimum cumulative trading volume across all levels observed in the underlying dataset, which ensures that there is always enough volume available to calculate the weighted bid price. As Dionne et al. (2015) and Qi and Ng (2009) explain, we may refer to this return as an ex-ante liquidity measure because it is the return after liquidating v shares.
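A minimal sketch of equations 3.1-3.3, walking the bid side of the book until the threshold volume v is absorbed; the order-book snapshot and the threshold are made-up numbers, and treating v as the depth to liquidate is our reading of the text:

```python
from math import log

def volume_weighted_bid(levels, v):
    """Volume-weighted bid b_t(v) in the spirit of eq. 3.3: walk the bid
    side (best bid first) until v shares are absorbed and average the
    executed prices by volume."""
    filled, value = 0.0, 0.0
    for price, volume in levels:
        take = min(volume, v - filled)   # stop once v shares are absorbed
        value += price * take
        filled += take
        if filled >= v:
            break
    return value / filled

book_t = [(100.0, 500), (99.9, 800), (99.8, 1200)]   # hypothetical LOB bid side
best_bid_prev = 100.1

frictionless = log(book_t[0][0] / best_bid_prev)                             # eq. 3.1
liquidity_adjusted = log(volume_weighted_bid(book_t, 1500) / best_bid_prev)  # eq. 3.2
```

The liquidity-adjusted return is lower than the frictionless one whenever liquidating v shares walks down the book.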

3.2 Data description

For this study, data are collected through the trading house Optiver and its continuous electronic trading system. Such a system operates on a tick-by-tick basis and keeps track of orders submitted to the order book. In this kind of


Figure 3.1: Price Evolution

This figure depicts the evolution of best bid prices for the Royal Dutch, Adidas and Siemens stocks over the period from 4.1.2016 to 6.3.2018. Prices are in euros. Panels: (a) Royal Dutch, (b) Adidas, (c) Siemens.

market, liquidity is not solely created by market makers quoting prices, but by all market participants setting their price limits. We collect historical bid prices and volumes from the order book for three stocks listed on the SX5E Index, namely Adidas (ADS), Siemens (SIE) and Royal Dutch (RD). The sample period ranges from 4.1.2016 to 6.3.2018, which is the most recent data available while ensuring an adequate amount of data for our research. We consider only the phase of the day in which trading is visible to all market participants, i.e. from the opening of the market at 9 am to its closing at 5:30 pm (Central European Time). In fact, we do not use the first opening price at 9 am but start at 9:10 am, in order to diminish the effect of the call auction and focus only on continuous trading, and we discard days on which trading activity is present during only part of the day. Our database

contains 10 levels of the limit order book for every time point, from which we can evaluate liquidity risk. To make our methodology for studying liquidity risk plausible, we discard observations with very low cumulative volumes across all levels of the LOB at a single point in time. After this procedure, the lowest cumulative volumes in our datasets are 13,442 for Royal Dutch, 5,371.5 for Siemens and 2,145 for Adidas.

In order to compare VaR and IVaR (ES and IES), we transform our data into one dataset at daily frequency with closing prices and volumes and one dataset at an intraday, 10-minute equally spaced frequency of prices and volumes, chosen in accordance with the standards of academic, high-frequency-trading-oriented research, giving us 51 intraday returns per day. That means we create a 10-minute grid throughout the day and take the prices closest to this grid, which ensures that our testing is in line with econometric theory. The first intraday return of a day is calculated using the first price at 9:10 am and the last price at 5:30 pm of the previous day.

Summary statistics for the three stocks mentioned above are depicted in Table 3.1, containing statistics for intraday data, intraday deseasonalized data (defined below) and daily data. Looking at the intraday data, Siemens has the highest average cumulative volume across all levels of the LOB, followed closely by Royal Dutch. To assess which stock is the most liquid, we calculate the Amihud liquidity measure (Amihud, 2002) for the intraday data. Using both liquidity-adjusted and frictionless returns, the Amihud liquidity is lowest for Royal Dutch, implying it is the most liquid, whereas Adidas appears to be the most illiquid. In terms of the 10-minute returns, the best performing stock is Adidas (0.002475% liquidity-adjusted), with roughly three times the intraday return of Royal Dutch (0.000793% liquidity-adjusted) and almost four times that of Siemens (0.000664% liquidity-adjusted). Lastly, Siemens has the lowest standard deviation; Royal Dutch and Adidas are roughly the same.


Table 3.1: Summary statistics

This table provides summary statistics of the cumulative volumes and returns for Royal Dutch, Adidas and Siemens stocks for the period from 4.1.2016 to 6.3.2018. Panel A is for intraday data and Panel B is for daily data. The cumulative volume is calculated by summing volumes across all 10 levels of the LOB. Liquidity-adjusted returns and frictionless returns are logarithmic returns where the liquidity-adjusted ones are calculated as ratio of volume-weighted average bids and frictionless ones as ratio of best bid at time t and t-1. Deseasonalized returns are computed by scaling the seasonalized returns by the deterministic factor.

Panel A. Intraday data
Columns: Cumulative volume | Liquidity-adjusted returns | Frictionless returns | Deseasonalized liquidity-adjusted returns | Deseasonalized frictionless returns

Royal Dutch
Mean           26770     0.001%    0.001%      0.006%       0.115%
Standard d.     5975     0.230%    0.225%    100.002%     100.002%
Min            13442    -7.731%   -7.742%  -1632.614%   -1643.913%
25%            23417    -0.081%   -0.082%    -46.066%     -47.760%
75%            28503     0.080%    0.081%     45.714%      47.282%
Max           103962     5.303%    5.336%   1081.455%    1073.791%

Adidas
Mean           13234     0.002%    0.002%      0.606%       0.664%
Standard d.     5674     0.226%    0.225%     99.998%     100.016%
Min             2145   -10.720%  -10.676%  -1565.479%   -1567.440%
25%             9542    -0.084%   -0.084%    -50.681%     -51.066%
75%            16503     0.087%    0.086%     51.053%      51.475%
Max            51276     7.641%    7.683%   2120.787%    2121.781%

Siemens
Mean           30929     0.001%    0.001%     -0.164%       0.086%
Standard d.    15529     0.191%    0.191%    100.013%      99.997%
Min             5373    -8.783%   -8.761%  -2042.007%   -2036.832%
25%            15978    -0.076%   -0.081%    -51.663%     -51.694%
75%            42334     0.078%    0.082%     53.065%      53.824%
Max           169560     5.744%    5.765%   1926.518%    1924.950%

Panel B. Daily data
Columns: Cumulative volume | Liquidity-adjusted returns | Frictionless returns

Royal Dutch
Mean           41667     0.04%     0.04%
Standard d.    15440     1.52%     1.51%
Min            13770    -7.22%    -7.33%
25%            26093    -0.73%    -0.78%
75%            53707     0.76%     0.67%
Max           103962     6.59%     6.45%

Adidas
Mean            8427     0.07%     0.12%
Standard d.     4572     1.52%     1.52%
Min             2153    -6.29%    -6.23%
25%             5366    -0.80%    -0.73%
75%            10193     0.82%     0.88%
Max            33681     8.77%     8.87%

Siemens
Mean           30684     0.04%     0.00%
Standard d.    15053     1.34%     1.34%
Min             5841    -7.59%    -7.60%
25%            15730    -0.69%    -0.72%
75%            43047     0.73%     0.70%
Max            79131     6.77%     6.74%


3.3 Seasonality factor

It is widely recognized and documented that trading activity during the day follows an explicit, persistent pattern. This arises from several features of exchange markets and their organization; for instance, trading activity differs during lunchtime or around the opening and closing hours. Giot (2002) and other authors advise removing seasonality before performing any modelling. First, the so-called "open auction effect" may be present in our data set; therefore, the first observation of each day is discarded. We choose to employ the deterministic seasonality factor for deseasonalizing the returns, as proposed by Engle and Russell (1998). The deterministic factor is calculated by averaging squared returns for a specific time of day across all days in the dataset. As we have 10-minute intervals, we obtain 51 deterministic factors, which are subsequently used as scaling factors for the returns as in equation 3.4:

R_{i,deseason} = R_{i,t} / √( E(R_i²) )   (3.4)
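The deterministic factor of equation 3.4 can be sketched as follows; the two-day, two-slot toy series is made up (in our application there are 51 slots per day):

```python
from math import sqrt

def deterministic_factors(returns_by_day):
    """One scaling factor per intraday slot: the square root of the
    average squared return at that slot across all days (eq. 3.4)."""
    n_slots = len(returns_by_day[0])
    n_days = len(returns_by_day)
    return [sqrt(sum(day[s] ** 2 for day in returns_by_day) / n_days)
            for s in range(n_slots)]

def deseasonalize(returns_by_day, factors):
    """Scale every return by the factor of its intraday slot."""
    return [[r / f for r, f in zip(day, factors)] for day in returns_by_day]

days = [[0.010, -0.020], [0.030, 0.020]]   # toy data: 2 days x 2 slots
factors = deterministic_factors(days)
clean = deseasonalize(days, factors)
```

After scaling, each slot has unit sample variance, which is why the deseasonalized standard deviations in Table 3.1 hover around 100%.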

Figure 3.2 shows liquidity-adjusted returns for specific times of day averaged across all days, and also the deterministic factors for specific times of day, which in a way represent the average volatility of the returns. In terms of the average returns, there is no clear pattern visible from the graphs for the liquidity-adjusted series (the pattern is even less visible for the frictionless returns; Appendix A.0.8, Figure A.1).

It seems that after 1 pm stocks suffer a plunge, followed by a surge. For the frictionless series, any pattern is even less visible. What the two series have in common is a negative average return following the opening of the day. Perhaps more interesting is the intraday pattern of the deterministic factor. Following the majority of previous studies, we would expect the volatility to be U-shaped across the day, meaning that volatility is higher at the beginning and end of trading. Some, however, recorded the volatility to be L-shaped (Eaves and


Figure 3.2: Seasonality - Liquidity-adjusted Returns

This figure depicts averaged liquidity-adjusted returns (left) and deterministic factors (right) for specific times of the day, for intraday data from 4.1.2016 to 6.3.2018. Liquidity-adjusted returns are logarithmic returns calculated as a ratio of volume-weighted average bids. The deterministic factor is calculated by averaging squared returns for a specific time of day across all days in the data set. The first row is for the Royal Dutch stock, the second for the Adidas stock and the third for the Siemens stock.


Williams, 2010, Tian and Guo, 2007). That is what we partially observe for both the frictionless and liquidity-adjusted series and for all three stocks, with realized volatility significantly higher after the opening of the exchanges. Overall, however, based on the data at hand, we do not observe a clear seasonal pattern.

3.4 Volatility models

Having settled the assimilation of liquidity into the returns and the problem of seasonality in the returns, we now present the parametric volatility models used throughout our study. We employ the GARCH, EGARCH and GJR-GARCH models due to their wide usage in the quantitative finance literature. The models are estimated under both Gaussian and Student-t distributional specifications. The output of these models at time t is the conditional variance forecast for time t + 1, from which the forecast of the risk measures may be estimated and compared to the realized return at time t + 1.

3.4.1 GARCH

The first model used is the simplest autoregressive conditional heteroscedasticity model, GARCH, used to model dynamic variance and introduced by Bollerslev (1986). This model requires nonlinear estimation of parameters and assumes that the variance at time t + 1 is a weighted average of the variance at time t, the squared return at time t and the long-run variance, so that both moving-average and autoregressive components are present in the variance. The GARCH model therefore assumes that the variance in the future reverts to the average variance. The GARCH(p,q) is specified as:

R_{i,t+1} = σ_{t+1} ε_{t+1}
σ²_{t+1} = ω + Σ_{i=1}^{p} α_i R²_{t+1−i} + Σ_{j=1}^{q} β_j σ²_{t+1−j}   (3.5)


where α and β are constant parameters satisfying α₁ + ... + α_p + β₁ + ... + β_q < 1, and ω > 0, α ≥ 0 and β ≥ 0 need to hold for the variance to be positive. The sum α₁ + ... + α_p + β₁ + ... + β_q represents the persistence of the model; hence, if the sum is close to 1, the model is said to be very persistent. Here q is the number of autoregressive lags and p is the number of lags in the moving average of the returns. Due to the nonlinearity of the model, maximum likelihood estimation is used to obtain ω, α and β. This method chooses the parameters such that the following function is maximized:

max −(1/2) Σ_{t=1}^{T} [ ln(σ²_t) + R²_t / σ²_t ]   (3.6)
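A bare-bones sketch of the GARCH(1,1) recursion and the objective of equation 3.6; the parameter values are illustrative only, since in the thesis they are obtained by maximum likelihood estimation:

```python
from math import log

def garch11_variance_path(returns, omega, alpha, beta):
    """Conditional variance recursion of eq. 3.5 for p = q = 1,
    initialized at the sample variance of the returns."""
    var = sum(r * r for r in returns) / len(returns)
    path = [var]
    for r in returns[:-1]:
        var = omega + alpha * r * r + beta * var
        path.append(var)
    return path

def log_likelihood(returns, path):
    """The objective of eq. 3.6 (Gaussian quasi-likelihood, constants dropped)."""
    return -0.5 * sum(log(v) + r * r / v for r, v in zip(returns, path))

rets = [0.010, -0.020, 0.015, -0.010, 0.005]
sigma2 = garch11_variance_path(rets, omega=1e-6, alpha=0.05, beta=0.90)
ll = log_likelihood(rets, sigma2)
```

In practice the maximization over (omega, alpha, beta) would be handed to a numerical optimizer; alpha + beta = 0.95 here keeps the process stationary and fairly persistent.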

3.4.2 GJR-GARCH

Over the past years, the GARCH family has been developed into more complex models. A vast body of literature works with the GJR-GARCH, which exploits the so-called leverage effect, arising when a negative shock inflates the variance more than a positive shock of the same magnitude. Mathematically speaking, if the positive impact is α and the negative impact is (α + γ), then the leverage effect is present if γ is positive. The leverage effect is embodied in the model as a dummy variable, taking value 1 when the return is negative and value 0 when the return is nonnegative:

I_t = 1 if R_t < 0,  0 if R_t ≥ 0   (3.7)

The GJR-GARCH is then defined as:

R_{i,t+1} = σ_{t+1} ε_{t+1}
σ²_{t+1} = ω + Σ_{i=1}^{p} α_i R²_{t+1−i} + Σ_{i=1}^{p} γ_i I_{t+1−i} R²_{t+1−i} + Σ_{j=1}^{q} β_j σ²_{t+1−j}   (3.8)


Here p stands not only for the lags in the moving average but also for the lags in the leverage components. Due to the restrictions ω > 0, α ≥ 0, γ ≥ 0 and β ≥ 0, the forecasted conditional volatility is guaranteed to be positive.
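The leverage term can be sketched by extending the same recursion with the indicator of equation 3.7; the parameter values are again illustrative, with gamma > 0 producing the leverage effect:

```python
def gjr11_variance_path(returns, omega, alpha, gamma, beta):
    """GJR-GARCH(1,1) variance recursion of eq. 3.8: a negative lagged
    return loads with alpha + gamma instead of alpha."""
    var = sum(r * r for r in returns) / len(returns)
    path = [var]
    for r in returns[:-1]:
        leverage = gamma if r < 0 else 0.0   # indicator I_t of eq. 3.7
        var = omega + (alpha + leverage) * r * r + beta * var
        path.append(var)
    return path

rets = [0.010, -0.020, 0.015, -0.010, 0.005]
with_leverage = gjr11_variance_path(rets, 1e-6, 0.03, 0.05, 0.90)
no_leverage = gjr11_variance_path(rets, 1e-6, 0.03, 0.0, 0.90)
```

Comparing the two paths after the negative return shows the extra variance contributed by the leverage term.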

3.4.3 EGARCH

An alternative to the GJR-GARCH is EGARCH, or exponential GARCH, proposed by Nelson (1991), capturing both the sign and the size effects of the returns. It furthermore ensures the conditional variance is positive and integrates the leverage effect as well. The equation for the model is given as:

R_{i,t+1} = σ_{t+1} ε_{t+1}
ln(σ²_{t+1}) = ω + Σ_{i=1}^{p} α_i ( φ R_{t+1−i} + γ [ |R_{t+1−i}| − E|R_{t+1−i}| ] ) + Σ_{j=1}^{q} β_j ln(σ²_{t+1−j})   (3.9)

Following Shephard (1996), the EGARCH model is consistent if β₁ + ... + β_q < 1.
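A sketch of the EGARCH recursion in its common standardized-shock form, equivalent in spirit to equation 3.9 (z_t = R_t/σ_t, with E|z| = sqrt(2/π) under normality; the parameters are illustrative, not estimated values):

```python
from math import exp, log, pi, sqrt

def egarch11_variance_path(returns, omega, phi, gamma, beta):
    """EGARCH(1,1) log-variance recursion: the sign term phi*z captures
    asymmetry, gamma*(|z| - E|z|) the size effect; exp() guarantees a
    positive variance without parameter restrictions."""
    e_abs = sqrt(2.0 / pi)                 # E|z| for a standard normal z
    var = sum(r * r for r in returns) / len(returns)
    path = [var]
    for r in returns[:-1]:
        z = r / sqrt(var)
        var = exp(omega + phi * z + gamma * (abs(z) - e_abs) + beta * log(var))
        path.append(var)
    return path

rets = [0.010, -0.020, 0.015, -0.010, 0.005]
path = egarch11_variance_path(rets, omega=-0.2, phi=-0.05, gamma=0.10, beta=0.97)
```

With phi < 0, a negative shock raises the log variance more than a positive one of the same size, which is the asymmetry the significant coefficients in Table 4.1 point to.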

3.5 Model assessment

To assess the relative performance of the GARCH models, we utilize Akaike's (1973) Information Criterion (AIC). In his study, Akaike (1973) shows that the relationship between Fisher's (1922) maximum likelihood and the expected Kullback-Leibler (1951) information is applicable for such an assessment.

AIC = −2 ln L(θ̂ | y) + 2k   (3.10)

Here L is the maximized likelihood function and k is the number of estimated parameters; the goodness of fit is thus offset by the number of additional parameters estimated. The AIC is calculated for every model with a different (p,q) [or (p,p,q)] specification, and the one with the lowest value is selected as "the best" model relative to the others. One should remember that the AIC does not identify the best-performing model in general but only


Figure 3.3: Histograms - Deseasonalized Returns

This figure depicts a set of histograms of deseasonalized liquidity-adjusted returns for the period from 4.1.2016 to 6.3.2018 at intradaily frequencies. Liquidity-adjusted returns are logarithmic returns calculated as a ratio of volume-weighted average bids. The first histogram is for the Royal Dutch stock, the second for the Adidas stock and the third for the Siemens stock.


for a specific sample of data ("sampling variability"). This procedure is followed for both the intraday and the daily data.

Histograms and the Jarque-Bera (1987) test are employed to assess the normality of returns. Furthermore, the estimated coefficients of the GARCH models have to be significant in order for the model to be trustworthy. Moreover, we assess the validity of the models by checking their ability to sufficiently capture the data dynamics, performing the Ljung-Box (1978) test on the squared standardized residuals. Apart from these, the power of the models is questioned by the backtesting procedure.

The histograms for the liquidity-adjusted returns are shown in Figure 3.3


and the histograms for the other return series are depicted in Appendix A.0.8, from which we get an initial impression of the normality of returns. Whereas both the seasonalized and deseasonalized liquidity-adjusted returns for all three stocks seem to be bell-shaped, the frictionless returns display non-normal patterns. Nevertheless, the p-value of 0.00 from the Jarque-Bera normality test for every return series rejects normality even for the liquidity-adjusted returns.

3.6 Liquidity premium

The amount of liquidity risk is calculated by comparing LIVaR with IVaR (LIES with IES) for intraday data and LVaR with VaR (LES with ES) for daily data, i.e. we create a liquidity risk premium as proposed by Giot and Gramming (2002):

λ = (X − Y) / X   (3.11)

where X is the 1-day-ahead or 10-minute-ahead LIVaR, LVaR, LIES or LES and Y is the corresponding non-liquidity-adjusted counterpart, i.e. IVaR, VaR, IES or ES. Although the theory of market microstructure would advise comparing LIVaR/LVaR (LIES/LES) to the frictionless VaR (ES) computed from the midpoint (the theoretical fair price), we believe it is better to compare it to the frictionless VaR (ES) computed from the best bid, as the midpoint is unobservable for banks and other financial market participants. We compute the premium for every time point in our data set and then take the average over the studied period. An additional way to observe the liquidity effect is to look at the difference in capital charges, introduced in the following section.
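Equation 3.11, averaged over the forecast period, amounts to the following; the two short series are made-up LVaR/VaR forecasts used only for illustration:

```python
def average_liquidity_premium(adjusted, frictionless):
    """Mean of lambda = (X - Y) / X across time points (eq. 3.11),
    where X is the liquidity-adjusted risk measure and Y its
    frictionless counterpart."""
    premiums = [(x - y) / x for x, y in zip(adjusted, frictionless)]
    return sum(premiums) / len(premiums)

lvar = [0.034, 0.036, 0.033]   # hypothetical 1-day-ahead LVaR forecasts
var_ = [0.032, 0.035, 0.033]   # matching non-adjusted VaR forecasts
premium = average_liquidity_premium(lvar, var_)
```

A positive premium means the liquidity-adjusted measure is the larger of the two; the premium can also come out negative, as some of our results in Chapter 4 show.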

3.7 Backtesting and capital requirements

Model validation and the ability to forecast future risks (how well the possible losses are captured at a given confidence level) are questioned through the backtesting procedure. Under Basel III, banks are required to perform


backtesting every day. In this sense, the mathematical property called elicitability is of great importance. A function is elicitable if it can be defined as the minimizer of some scoring function S(x, y):

ψ = argmin_x E[S(x, y)]   (3.12)

Practically, it means that we can compare the forecasted values with the realized ones. VaR is by its definition elicitable; ES, however, is not (Wimmerstedt, 2015). Therefore, the ES forecast is not directly comparable to the earned returns; below, we explain how we deal with this problem. All versions of the Basel Accords, the last one being no exception, advise utilizing the so-called "traffic light approach". It is a simple method of unconditional coverage backtesting which counts the number of VaR breaches, i.e. scenarios in which the return is lower than the risk measure:

X^{(i)}_{VaR}(α) = 1 if R_t ≤ VaR_t(α),  0 if R_t > VaR_t(α)   (3.13)

The breaches therefore follow a Bernoulli distribution with probability α, which implies that the sum of the breaches over a period T follows a binomial distribution. For daily data with a 250-day backtesting period and 99% VaR, the expected number of breaches is 2.5. Furthermore, since a breach is a binomial variable, we may calculate the cumulative probability of a specific number of breaches per period T:

M(k; n, p) = P(X ≤ k) = Σ_{i=0}^{k} C(n, i) p^i (1 − p)^{n−i}   (3.14)

Such cumulative probabilities are important for the traffic light approach. Basel III defines three zones based on the cumulative probabilities and the number of breaches (see Table 3.2), in order to counterbalance statistical type 1 and type 2 errors. The model is said to be accurate (to produce 99% coverage)


if the number of breaches ranges from 0 to 4, i.e. falls into the green zone. The yellow zone covers backtests showing 5 to 9 breaches; both accuracy and inaccuracy of the model are plausible. The model is inaccurate if the number of breaches is at least 10 (red zone).

Table 3.2: Traffic Light backtesting

This table displays three evaluation zones for the traffic light backtesting procedure. The cumulative probability is calculated from the cumulative distribution function. Multiplier is set by Basel Committee based on the cumulative probability and is used for capital requirements computation.

Zone          Number of Breaches   Multiplier   Cumulative Probability
Green zone     0                   1.5           8.11%
               1                   1.5          28.58%
               2                   1.5          54.32%
               3                   1.5          75.81%
               4                   1.5          89.22%
Yellow zone    5                   1.7          95.88%
               6                   1.76         98.63%
               7                   1.83         99.60%
               8                   1.88         99.89%
               9                   1.92         99.97%
Red zone      10                   2            99.99%

Source: Basel Committee on Banking Supervision, 2016.
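The zone boundaries of Table 3.2 follow directly from the binomial CDF of equation 3.14 with n = 250 and p = 1 − 0.99 = 0.01; a quick check:

```python
from math import comb

def breach_cdf(k, n=250, p=0.01):
    """P(X <= k) for the number of VaR breaches in n days (eq. 3.14)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def traffic_light_zone(breaches):
    """Zone cut-offs as in Table 3.2 (green <= 4, yellow 5-9, red >= 10)."""
    if breaches <= 4:
        return "green"
    return "yellow" if breaches <= 9 else "red"

print(round(breach_cdf(0), 4), round(breach_cdf(4), 4))  # 0.0811 0.8922
```

The printed values reproduce the 8.11% and 89.22% entries of the green zone in Table 3.2.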

The backtesting is easily applicable to VaR due to its elicitability. Since ES is not elicitable, Basel III defines the backtesting of ES based on the quantile approximation of Emmer, Kratz and Tasche (2013), which states that ES can be approximated by VaR at several levels. Specifically, banks are required to perform backtesting of 99% and 97.5% VaR. If the number of breaches for the 1-day-ahead 99% VaR is at least 12, or for the 1-day-ahead 97.5% VaR at least 30, the bank cannot utilize the internal method for calculating capital requirements. If the backtesting passes, the multiplier is established from the number of breaches of the 1-day-ahead 99% VaR.

The results of the traffic light test are important for banks because there is a direct link between the number of breaches and the capital requirements. This link takes the form of a multiplier k, entering the formula for capital


requirements.

C_a = max( |risk measure_{t−1}| ; k · |risk measure_avg| ) · volume · stock price   (3.15)

The capital requirements are calculated as the maximum of the absolute value of the risk measure forecasted on the previous trading day and the absolute value of the average of the risk-measure forecasts over the past 60 trading days (this average being multiplied by the multiplier); this maximum is then multiplied by the position volume and the current price of the stock. The risk measure entering the formula is either VaR or ES, or their liquidity adjustments.
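Equation 3.15 in code, with hypothetical inputs (a 60-day history of risk-measure forecasts and a green-zone multiplier of 1.5):

```python
def capital_requirement(rm_yesterday, rm_history, multiplier, volume, price):
    """Risk capital of eq. 3.15: the larger of yesterday's risk measure
    and the multiplier times the 60-day average, scaled by the position."""
    avg = sum(abs(x) for x in rm_history) / len(rm_history)
    return max(abs(rm_yesterday), multiplier * avg) * volume * price

history = [-0.030] * 60        # made-up 10-day-ahead VaR forecasts
charge = capital_requirement(rm_yesterday=-0.032, rm_history=history,
                             multiplier=1.5, volume=1000, price=25.0)
```

With these numbers the multiplier term binds (1.5 x 3% > 3.2%), so a worse backtest translates, via a higher multiplier, directly into more capital.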

For the purpose of the backtesting and the estimation of capital requirements, an out-of-sample window is created, ranging from 9.3.2017 to 6.3.2018 and therefore containing 250 trading days. The precise process is as follows:

1. The model is estimated using data from the estimation window, ranging from 4.1.2016 to 8.3.2017, and generates the forecast of the conditional volatility for the next period (either the next day or the next 10 minutes), from which the risk measure is calculated.

2. We compare the forecast of the risk measure with the first realized return from the out-of-sample window.

3. Anchored forecasting is used: the estimation window is extended by one observation and the out-of-sample window shrinks by the same amount.

4. The same process is repeated until we run out of data.
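The four steps can be sketched as follows; the historical-simulation quantile stands in for the GARCH-based forecast of the thesis, and the alternating toy series is made up:

```python
def historical_var(returns, alpha=0.01):
    """Empirical lower quantile as a stand-in 1-step-ahead VaR forecast."""
    ordered = sorted(returns)
    return ordered[int(alpha * len(ordered))]

def anchored_backtest(returns, estimation_size, alpha=0.01):
    """Expanding-window (anchored) backtest: forecast from all data up
    to t, compare with the realized return at t, count breaches."""
    breaches = 0
    for t in range(estimation_size, len(returns)):
        forecast = historical_var(returns[:t], alpha)   # steps 1 and 3
        if returns[t] <= forecast:                      # step 2: a breach
            breaches += 1
    return breaches

toy = [0.001 * (-1) ** i for i in range(300)]   # alternating toy returns
n_breaches = anchored_backtest(toy, estimation_size=250)
```

Each pass through the loop re-estimates on one more observation, exactly the anchored scheme of step 3; in the thesis the forecast comes from the re-estimated GARCH-type models instead.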

The out-of-sample subset is subsequently extended to 270 trading days for the robustness checks. As a result, we perform three sets of backtests: one for the daily frequencies and another for the intraday frequencies, each compared to the respective returns. The third backtest is executed as follows: at the end of each day, we take the 1-day-ahead forecasts of the risk measures from the intraday data sample (51-step-ahead intraday risk measures) and scale them


to the daily frequency by the square root of 51. The scaled risk measures are then compared to the daily returns and used for the computation of capital requirements. A bootstrapping method is employed for the 51-step-ahead forecasting. For the daily backtesting purposes, the 1-day-ahead VaR is used, whereas for capital requirements, 10-day-ahead VaR and ES are used, created by scaling the 1-day-ahead measures by the square root of 10. Note that we acknowledge the statistical incorrectness of such an approach; however, we want to stick to the Basel III framework (Basel Committee on Banking Supervision, 2016).


Chapter 4

Analysis

“However beautiful the strategy, you should occasionally look at the results.”

- Winston Churchill

In this chapter, we present our results, starting with the model estimation results, continuing with the results for the backtesting and the liquidity premiums, and finishing with the capital requirements. First, however, we clarify the methods adopted after receiving the outcomes and performing the necessary tests. We report only the results from models with the Student-t specification, because the coefficients of the majority of models are insignificant under the normality specification. This conclusion is supported by the rejection of normality in the previous chapter using the Jarque-Bera test and the visualization of returns by histograms. Further, we compared the Ljung-Box test for models using both deseasonalized and seasonalized returns as input. For both the daily and intradaily data samples, there is a clear conclusion: the models for deseasonalized returns do not pass the test and the models for seasonalized returns do. As a result of the aforementioned outcomes, and in contrast to previous studies, we proceed without employing the deseasonalized returns, which is moreover supported by our analysis of the seasonal patterns, where we found a scarcity of visible patterns. Thus, only the seasonalized returns are used as input for the volatility forecast models. The


additional selection is based on Akaike's Information Criterion; the numbers of lags selected for each model are shown in Table A.1.

Lastly, the significance of the coefficients is investigated. For the daily data, GARCH and GJR-GARCH show insignificant coefficients, mainly α; accordingly, we report only the EGARCH results for the daily data. For the high-frequency data and the capital requirements, note that we also show EGARCH-based results, as it appears to be the best model, both from the regulatory point of view, as its number of breaches is the lowest, and from the bank's point of view, as its amount of risk capital is likewise the lowest. We also report the numbers of degrees of freedom of the Student-t distribution for every model, obtained by fitting the return series with the Student-t probability density function. The degrees of freedom are necessary for the calculation of the risk measures and are presented in Table A.2.

4.1 Model estimation results

Model estimation results and model validity for both the daily and the 10-minute-spaced data are presented in this section. As stated earlier, for the daily VaRs we employ the EGARCH volatility model. Table 4.1 displays the results of the estimation process for the overall data sample period. We find that all EGARCH models are covariance stationary, as β₁ + ... + β_q < 1 is fulfilled. For every EGARCH model, all the coefficients are significant at the 1% significance level. Since the α are significant, EGARCH displays the asymmetry in every scenario.

Tables A.3, A.4 and A.5 present the estimated coefficients and standard errors for the GARCH, EGARCH and GJR-GARCH models, respectively, estimated for the 10-minute-spaced data and the overall data sample period. Most of the coefficients are significant, the majority at the 1% significance level. The model restrictions, i.e. β₁ + ... + β_q < 1 for EGARCH and α₁ + ... + α_p + β₁ + ... + β_q < 1 for GARCH, are satisfied for all specifications. Except for the Royal Dutch liquidity-adjusted GARCH specification, all GARCH-type models are

relatively persistent, as α₁ + ... + α_p + β₁ + ... + β_q > 0.9. The significance of α

for all EGARCH models proves the asymmetry effect, and the positive, significant γ for all GJR-GARCH models confirms the leverage effect. Together with the positive outcomes from the Ljung-Box test, we are inclined to conclude that the models perform relatively well at both frequencies.

Table 4.1: EGARCH Results - Daily Sample

This table displays the estimated coefficients and their standard errors from the EGARCH model for the Royal Dutch, Adidas and Siemens stocks and for the daily sample data from 4.1.2016 to 6.3.2018. Liquidity-adjusted stands for the logarithmic returns calculated as a ratio of volume-weighted average bids and frictionless for the logarithmic ratio of the best bid at time t and t−1. Note: * means p-value < 0.1, ** means p-value < 0.05, *** means p-value < 0.01.

Coefficient   Estimate      Standard Error

Royal Dutch - Liquidity-adjusted
ω            -0.0030***     0.0000008
α1           -0.0458***     0.00069
β1            0.9951***     0.00000015

Royal Dutch - Frictionless
ω             0.0263***     0.00001
α1            0.0169***     0.0036
α2           -0.1325***     0.0035
β1            0.4281***     0.0000004
β2            0.0000***     0.0000007
β3            0.0130***     0.0000007
β4            0.5500***     0.0000004

Adidas - Liquidity-adjusted
ω            -0.0045***     0.0000025
α1           -0.0597***     0.0002
β1            0.9983***     0.00001

Adidas - Frictionless
ω             0.0671***     0.00007
α1            0.1563***     0.0077
α2           -0.2420***     0.0114
β1            0.3562***     0.000000002
β2            0.6299***     0.0000003

Siemens - Liquidity-adjusted
ω            -0.0497***     0.00002
α1           -0.1273***     0.009
α2            0.0391***     0.0108
β1            0.0504***     0.00000007
β2            0.9397***     0.00000016

Siemens - Frictionless
ω             0.0390***     0.00024
α1           -0.0864***     0.0075
β1            0.00854***    0.0000001
β2            0.9397***     0.0000005

4.2 Backtesting results

The performance of the models is further assessed by evaluating the forecasting ability of the risk measures. We start with the historical simulation method for computing VaR so that we have an initial benchmark to compare our


Figure 4.1: LVaR from Historical Simulation

This figure displays the liquidity-adjusted returns (blue line) and the related 1-day-ahead 99% LVaR (red dashed line) from the historical simulation. Liquidity-adjusted returns are logarithmic returns calculated as a ratio of volume-weighted average bids. The period is from 9.3.2017 to 6.3.2018. Subfigure (a) is for the Royal Dutch stock, subfigure (b) for the Adidas stock, and subfigure (c) for the Siemens stock.

(a) Royal Dutch   (b) Adidas   (c) Siemens

volatility models against, as many banks still utilize this method. We plot the 1-day-ahead 99% LVaR from the historical simulation together with the liquidity-adjusted returns in Figure 4.1 (the 1-day-ahead 99% VaRs are plotted in Appendix A.0.8, Figure A.13). For all three stocks, the LVaR is relatively stable over time and does not adjust much to changes in the returns. For Siemens, the traffic light backtesting detects 9 breaches in the past 250 days, both for LVaR and VaR. This number shows the inaccuracy of the historical simulation method, even though the Adidas and Royal Dutch stocks ended up in the green zone.
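Historical simulation amounts to taking an empirical tail quantile of a rolling window of past returns. A minimal sketch (the window contents, the window length and the quantile indexing convention are our assumptions, not the thesis code):

```python
# Hedged sketch: 1-day-ahead 99% VaR via historical simulation.
import math

def historical_var(returns, alpha=0.99):
    """Empirical (1 - alpha) quantile of the return window, reported as a
    positive loss threshold: a return below -VaR counts as a breach."""
    ordered = sorted(returns)                      # ascending: worst losses first
    k = max(0, math.ceil((1 - alpha) * len(ordered)) - 1)
    return -ordered[k]

# 300 synthetic returns standing in for a rolling window of daily observations.
window = [0.01, -0.02, 0.005, -0.035, 0.012, -0.01] * 50
print(historical_var(window, alpha=0.99))   # sign-flipped tail quantile
```

Because the quantile only moves when an extreme observation enters or leaves the window, the resulting VaR line is flat for long stretches, which is exactly the sluggish behaviour visible in Figure 4.1.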

Table 4.2 presents the number of breaches of the 1-day-ahead 99% LVaR and VaR over the 250-day period in which the risk measures are calculated. Clearly, the EGARCH method improves the VaR and ES forecasting ability, as the number of breaches decreases significantly for both the Siemens and Adidas stocks and for both the liquidity-adjusted and frictionless return series; and although violations appear more frequently for Royal Dutch, it still lies at the lower edge of the yellow zone. The results imply that only for EGARCH-LVaR and for Royal Dutch does the capital requirements' multiplier increase, to 1.7. When evaluating the backtesting for LES and ES, the number of violations for the 1-day-ahead 99% LVaR and VaR is under 12 and for the 1-day-ahead 97.5% LVaR and VaR is under 30, showing that the banks could continue to follow the internal method for the calculation of capital requirements.
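The mapping from breach counts to the capital multiplier can be sketched as follows. The base multiplier of 1.5 and the jump to 1.7 at 5 exceptions match the text; the full add-on schedule is taken from the FRTB market risk standard and should be treated as an assumption to be checked against the regulation:

```python
# Hedged sketch: Basel traffic-light zones and an FRTB-style multiplier,
# for 99% VaR exceptions observed over 250 trading days.

ADD_ONS = {5: 0.20, 6: 0.26, 7: 0.33, 8: 0.38, 9: 0.42}   # >= 10 -> 0.50

def traffic_light_zone(breaches):
    if breaches <= 4:
        return "green"
    return "yellow" if breaches <= 9 else "red"

def capital_multiplier(breaches, base=1.5):
    """Base multiplier plus the exception-count add-on."""
    if breaches <= 4:
        return base
    return base + ADD_ONS.get(breaches, 0.50)

print(traffic_light_zone(5), capital_multiplier(5))   # yellow, 1.7
print(traffic_light_zone(2), capital_multiplier(2))   # green, 1.5
```

Under this mapping, the 5 EGARCH-LVaR breaches for Royal Dutch push the multiplier to 1.7, while every other specification in Panel A and B stays at the base of 1.5.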

Table 4.2: Number of Breaches

The table shows the results from the traffic light backtesting for both the daily risk measures (Panel A) and the scaled intradaily risk measures (Panel B), for the period ranging from 9.3.2017 to 6.3.2018. We present the number of breaches, i.e. scenarios where the return is smaller than the 99% VaR. The yellow box stands for the yellow zone as defined by the Basel Committee on Banking Supervision and the red box for the red zone. Liquidity-adjusted stands for the logarithmic returns calculated as a ratio of volume-weighted average bids, and frictionless for the logarithmic ratio of the best bid at time t and t-1.

                            Royal Dutch    Adidas    Siemens
Panel A.
Liquidity-adjusted
  EGARCH                         5            1          4
  Historical Simulation          2            4          9
Frictionless
  EGARCH                         3            1          4
  Historical Simulation          2            4          9
Panel B.
Liquidity-adjusted
  GARCH                          3            4          3
  EGARCH                         1            4          2
  GJR-GARCH                      2            4          2
Frictionless
  GARCH                          4            4          3
  EGARCH                         3            3          3
  GJR-GARCH                      3            4          3

The good performance of the models is likewise confirmed for the 10-minute spaced data. If we take the ratio of breaches to the overall number of returns in the out-of-sample period, it is always under 4, signalling the accuracy of our models at the 10-minute frequency (Figure 4.2).


Figure 4.2: 10-minute-ahead LIVaR and IVaR - EGARCH

This figure displays the 10-minute liquidity-adjusted returns (blue line) and the related 10-minute-ahead 99% LIVaR (green dashed line) and 10-minute-ahead 99% IVaR (red dashed line) calculated from EGARCH. Liquidity-adjusted returns are logarithmic returns calculated as a ratio of volume-weighted average bids. The period is randomly selected so as to contain 250 observations from the period ranging from 9.3.2017 to 6.3.2018. Subfigure (a) is for the Royal Dutch stock, subfigure (b) for the Adidas stock, and subfigure (c) for the Siemens stock.

(a) Royal Dutch   (b) Adidas   (c) Siemens

Last, we present the backtesting analysis on a daily basis, using the scaled information from the 10-minute frequencies. The traffic light backtesting shows good performance of our method, as none of the models for any stock produces more than 4 violations of the 1-day-ahead 99% LIVaR and IVaR, nor are the models overly conservative, implying that the bank does not need to hold a superfluous buffer of risk capital. As a consequence, the capital requirement's multiplier remains 1.5. Similar to the simple daily backtesting, the numbers of violations for the 1-day-ahead 99% LIVaR and IVaR and for the 1-day-ahead 97.5% LIVaR and IVaR are under 12 and 30, respectively.
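A common way to map an intraday risk measure to a daily horizon is square-root-of-time scaling, applied here as a sketch. The number of 10-minute intervals per trading day (51, for an 8.5-hour session) is our assumption, and the i.i.d. premise behind the rule is a simplification:

```python
# Hedged sketch: scaling a 10-minute VaR estimate to a 1-day horizon under
# the square-root-of-time rule (assumes i.i.d. returns across intervals).
import math

def scale_to_daily(var_10min, intervals_per_day=51):
    """Multiply the 10-minute VaR by sqrt(number of intervals per day)."""
    return var_10min * math.sqrt(intervals_per_day)

print(scale_to_daily(0.004))   # roughly 0.0286 for a 0.4% 10-minute VaR
```

GARCH-type volatility is not i.i.d., so this scaling is only an approximation; that caveat is one reason the scaled intraday measures in Panel B can behave differently from the directly estimated daily ones.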


4.3 Liquidity premium results

The most interesting outcomes concern the average liquidity premiums over the same out-of-sample period for which the backtesting was performed. We start with the outcomes for the daily frequencies, where the liquidity premiums are relatively small in magnitude. Contrary to our expectations and to the results of previous studies, some of the premiums are negative, meaning that on average the liquidity-adjusted risk measure is smaller in absolute value than the frictionless risk measure. This may imply that in some cases the liquidity adjustment actually smoothens the returns and lowers the volatility. Unfortunately, there is no clear pattern in the differences in the liquidity premiums between Value at Risk and Expected Shortfall. Whereas for Adidas the liquidity-adjusted VaR and ES are smaller than their frictionless counterparts by the same magnitude, for Siemens the percentage difference is twice as large for Expected Shortfall. Meanwhile, the Royal Dutch liquidity premiums differ not only in magnitude but also in sign.

In terms of the liquidity premiums, it is apparent that the liquidity effect is larger for the intraday data. This observation is in line with previous studies. However, the average liquidity premium for Adidas and for Value at Risk remains negative; in fact, it has the largest magnitude (above 9.5% in absolute value). A statistic worth mentioning is that the LIVaR-IVaR liquidity premium is now positive, opposite to the daily sample. It is evident that the larger magnitude of the liquidity premiums is transferred from the 10-minute intervals to the daily ones when taking the last observation of the day and scaling it to the daily frequency. Table A.7 displays approximately the same liquidity premiums as Table 4.3.
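The liquidity premium discussed above can be written as the percentage difference between the liquidity-adjusted and frictionless risk measures. A minimal sketch, with hypothetical LVaR/VaR values (the function name and sign convention are ours):

```python
# Hedged sketch: liquidity premium as the percentage difference between a
# liquidity-adjusted and a frictionless risk measure. A negative premium
# means the liquidity-adjusted measure is smaller in absolute value.

def liquidity_premium(adjusted, frictionless):
    """Percentage premium of |adjusted| over |frictionless|."""
    return (abs(adjusted) - abs(frictionless)) / abs(frictionless) * 100.0

# Hypothetical 99% LVaR vs. VaR, with losses expressed as positive numbers:
print(liquidity_premium(0.0315, 0.0300))   # ~ +5%: liquidity adds to risk
print(liquidity_premium(0.0285, 0.0300))   # ~ -5%: adjustment smooths returns
```

Averaging this quantity over the out-of-sample days gives the kind of average premium reported in Table 4.3, where the sign can flip between stocks and between the daily and intraday samples.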
