
Nijmegen School of Management

Master Financial Economics

Academic year 2018-2019 July 2019

Master's Thesis

Modeling and Backtesting of Liquidity-Adjusted

Value at Risk- A Quantile Regression Approach

Cem Cicekci (s1029769)


Abstract

Value-at-Risk (VaR) has received far-reaching attention in internal risk management practice as a measure of market risk in the capital markets environment as well as in the credit business. Different methods are used in the literature to determine VaR, of which "Historical Simulation" and the "Variance-Covariance Approach" are the most commonly used (Hong et al., 2014). This thesis builds on recent research testing the accuracy of Quantile Regressions for estimating VaR. Via Quantile Regressions the conditional quantile in question can be modeled directly, without making any assumptions about the distribution of the return series (Hagoum et al., 2014). We analyze three different Quantile Regression models: HAR-QREG, Symmetric CaViaR-QREG and Asymmetric CaViaR-QREG. The HAR-QREG model isolates the effect of short-, mid- and long-term volatility in order to assess market risk. Symmetric CaViaR-QREG and Asymmetric CaViaR-QREG include an autoregressive term to capture volatility clusters also in the tails (Rubia & Sanchis-Marco, 2013). We modify the models by incorporating liquidity measures to estimate the impact of liquidity on market risk. The results provide evidence for Quantile Regressions as a proper tool to forecast VaR. Our findings also support that liquidity costs drive market risk.


Table of Contents

Abstract
Table of Contents
List of Figures
List of Tables
1. Introduction
2. Theoretical Background
2.1 Value at Risk Concepts
2.2 Market Liquidity Risk
3. Related Work
4. Data and Descriptive Statistics
5. Methodology
6. Backtesting
7. Results
8. Conclusion
References
Appendix
Appendix A - Descriptive statistics for currency and equity portfolio
Appendix B - Estimation Results HAR-QREG
Appendix C - Estimation Results Sym. CaViaR-QREG
Appendix D - Estimation Results Asym. CaViaR-QREG
Appendix E - Backtesting Results for Long Position

List of Figures

Figure 1: VaR 95% and 99% confidence levels with normally distributed returns
Figure 2: Price Impact increases with order size
Figure 3: Spread of Currencies
Figure 4: Traded Volume Currencies
Figure 5: Spread Stocks
Figure 6: Traded Volume Stocks
Figure 7: Return Series Stock Portfolio vs. Currency Portfolio from 04.01.2000-23.04.2019
Figure 8: Comparison of actual returns and VaR forecast at 1% VaR (equities)
Figure 9: Actual returns in comparison to VaR forecast at 1% VaR (currencies)
Figure 10: Actual Returns and VaR forecast at 99% (equities)

List of Tables

Table 1: Descriptive Statistics Stocks
Table 2: Descriptive Statistics Currencies
Table 3: Basel Traffic Light Approach

1. Introduction

Background

Financial institutions are exposed to significant financial risk through changes in market prices. For this reason, it has become increasingly important to predict the resulting potential consequences as accurately as possible (Dowd, 1998).

In order to quantify such risks precisely, Value at Risk (VaR) emerged in the 1990s as a risk indicator. The VaR concept is commonly used in the internal risk management models of banks, corporate treasurers and other financial intermediaries. It is used to capture, manage and control market and credit risks (Damodaran, 2007; Alexander, 2008). For a given equity base, VaR estimates a maximum loss of the bank in terms of market value reductions of bank assets. If the equity base at least matches the estimated VaR, the stakeholder claims are secured with a certain probability (Jorion, 2009). Hence, banks use this model to set their capital requirements. Fund managers and corporate treasurers use VaR to quantify their portfolio risk. There are several drivers of market value changes. Besides equity price risk, interest rate risk and foreign exchange risk, liquidity risk is the main driver of price changes in capital markets (Alexander, 2008). The liquidity and marketability of various financial products have steadily increased in recent decades. An expansion of financial innovations allowed banks to operate more flexibly and transformed financial markets from slow floor trading to faster, more efficient and liquid computer trading. Electronic trading tools are quite advanced, but there are still situations in which traders cannot easily and conveniently close their positions at a fair market price. This was demonstrated, for example, in 1998, when the collapse of the Long-Term Capital Management (LTCM) hedge fund resulted in the drying up of some local bond markets (Jorion, 2009).

Also, in 2007/2008, during the so-called "Credit Crisis", the impact of market illiquidity even on top-rated asset-backed securities was significant. Due to these crises, market liquidity will increasingly be in the focus of regulators in the future. Initial approaches to this development can be found, for example, in the concept paper "Basel III" of the Basel Committee on Banking Supervision (Hong et al., 2014). Despite these regulatory approaches, market liquidity risk is not yet given sufficient attention, in particular in risk management.

Objective

The aim of this thesis is twofold. First, we try to find a suitable method that captures the characteristics of historical returns in terms of leptokurtosis, volatility clustering and the leverage effect in order to forecast Value at Risk. A distribution with fatter tails and a higher peak than the normal distribution is called leptokurtic. The volatility of financial time series is usually not constant; one can observe clusters of volatility, where periods of large movements are followed by further periods of large movements (Alexander, 2008). The leverage effect refers to the tendency of volatility to increase more following a price drop than following a price rise of the same extent (Rubia & Sanchis-Marco, 2013). With the approach of Quantile Regression, we build on recent research and analyze whether different Quantile Regression models, including liquidity proxies to account for liquidity risk, are able to capture the mentioned features of empirical returns properly (Rubia & Sanchis-Marco, 2013; Hagoum et al., 2014; Buczyński & Chlebus, 2017). Second, we test the proposed models' accuracy and compare the results.

Structure

The thesis is structured as follows. In section 2, we give an overview of the theoretical background of the Value at Risk concept and then introduce methods to estimate Value at Risk. In addition, we explain the theory of liquidity in the capital markets environment and how to quantify liquidity risk. The subsequent section 3 provides an overview of work related to our research problem. Section 4 analyses the features of the chosen data. In section 5, we introduce the methodology to forecast VaR and the performance criteria to evaluate the models in question. Section 6 describes the backtesting procedures and section 7 presents the results. Section 8 concludes our study and suggests an outlook on further research on this topic.

2. Theoretical Background

2.1 Value at Risk concepts

Value-at-Risk is a downside risk measure. It gives an indication of the maximum loss of an asset position or a portfolio which is not exceeded with a given probability over a given period of time. Since the future market value of the portfolio at time t + h is unknown at t, an assumption about the probability distribution of expected future market values must be made. The expected risk exposure is then determined as a quantile based on a one-tailed confidence interval of this probability distribution (Alexander, 2008).

The value change K of a portfolio at the end of an observation period can be regarded as a random variable, and the VaR can be formally determined as a function of a confidence level 1 − α as follows:
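A standard formalization of this definition (assuming a continuous distribution of the value change K) is:

$P\left( K \leq -VaR_{1-\alpha} \right) = \alpha$ .  (1)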


The introduction of the Basel capital adequacy criteria has led to increased use of the VaR in practice, where it has become the industry standard. In practice, the VaR is generally determined in accordance with the Solvency Regulation (Solv. V) §315 for a holding period of ten days, on the basis of a one-year observation period and a probability level of 99% (Basel Committee on Banking Supervision, 1996).

For the calculation of Value at Risk, certain conditions must be met:

• the risks must be broken down into individual categories and described with a suitable distribution function
• the correlation between the risks and the assets should be known or estimated
• the characteristics of the risks must be reasonably stable and predictable over time (extreme scenarios are not considered)

The VaR can be determined at stock level as well as at portfolio level. For the valuation of j securities, j + j·(j−1)/2 parameters (j variances and j(j−1)/2 covariances) have to be calculated at any time in order to determine the variance-covariance matrix (Alexander, 2008). The model of RiskMetrics (1994) therefore computes the market value changes of a security via a set of particular risk factors.

For the evaluation, a so-called factor mapping is performed, which means that each instrument to be evaluated is assigned to one or more adequate risk factors. This may considerably reduce the effort, since now only the variance-covariance matrix of the risk factors and the sensitivities of the instruments to the mapped factors have to be determined (RiskMetrics, 1994). The available methods for determining the VaR can be subdivided into parametric and non-parametric ones. Non-parametric models are based on simulation approaches, in particular the historical simulation and the Monte Carlo simulation. From the group of parametric models, the variance-covariance approach and the Quantile Regression approach should be mentioned. All these approaches have in common that the prices of securities are determined depending on the α-quantile of the distribution of the market risk factors in order to estimate VaR (Alexander, 2008). After introducing the basics of Value at Risk, we will give an overview of the most common methodologies to forecast VaR.

Variance-Covariance method:

Harry Markowitz developed the first mathematical approach to calculating the VaR in his research on portfolio theory (Dowd, 1998). Various approaches have emerged since this initial development. At that time the difficulty was to calculate the variance of many assets and to account for their correlations. In 1995, J.P. Morgan released a database including all variances and covariances across all assets they used for their own risk management models. This access to data has led to the broad use of the Variance-Covariance method to calculate VaR (Damodaran, 2007).

The Variance-Covariance approach is a parametric-analytical method. It is based on the assumption that the portfolio is composed of linear risk factors, each with normally distributed returns. The weights of each risk factor within this linear approach are called delta positions. In addition to the delta positions, combined to form a vector $\vec{\delta}$, the covariance matrix $\Sigma$ of the risk factors is also needed in order to calculate the portfolio variance $\sigma_P^2$ directly as a vector product:

$\sigma_P^2 = \vec{\delta}^{\,\prime}\, \Sigma\, \vec{\delta}$ .  (2)

The normal distribution hypothesis is assumed for the individual risk factors as well as for the portfolio as a whole because of the linearity assumption. Thus the corresponding standard deviation $\sigma_P$ has to be multiplied by the standard normal quantile z(95%) = 1.65 or z(99%) = 2.33, depending on whether a 95% or 99% confidence level is required (Figure 1), in order to finally obtain the Value at Risk of the portfolio (Alexander, 2008).

Figure 1: VaR 95% and 99% confidence levels with normally distributed returns (Source: Alexander, 2008)

According to Dowd (1998), the downside of this approach is the non-stationarity of the variances and covariances, as they change over time while all data is based on historical prices. This can lead to an underestimation of the VaR when the assumption of normally distributed returns is not satisfied.
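To illustrate how equation (2) and the quantile scaling translate into a calculation, the following sketch computes a parametric portfolio VaR in Python. It is a minimal illustration under the assumptions above (normally distributed, linear risk factors); the sample data, position sizes and function names are hypothetical and not taken from the thesis.

```python
import numpy as np
from scipy.stats import norm

def variance_covariance_var(delta, cov, alpha=0.99):
    """Parametric VaR of a linear portfolio: sigma_P^2 = delta' Sigma delta (equation 2)."""
    delta = np.asarray(delta, dtype=float)
    port_var = delta @ cov @ delta          # portfolio variance
    port_sigma = np.sqrt(port_var)          # portfolio standard deviation
    z = norm.ppf(alpha)                     # 1.65 at 95%, 2.33 at 99%
    return z * port_sigma                   # VaR in the same units as the delta positions

# Hypothetical example: three risk factors with a simulated daily return history
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.02, size=(1000, 3))    # placeholder for historical factor returns
cov = np.cov(returns, rowvar=False)                # covariance matrix of the risk factors
delta = np.array([1_000_000, 500_000, 250_000])    # delta positions in monetary units
print(variance_covariance_var(delta, cov, alpha=0.99))
```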


Historical Simulation:

The historical simulation is a very simple approach, since no explicit assumptions about the distribution of the risk factors have to be made. The only prerequisite for a valid simulation is that future changes in the risk factors must be within the scope of those that occurred during the observation period (Damodaran, 2007). To compute VaR, first the historical returns of the portfolio over a selected period are captured in the required frequency and the relative changes between two observation points are determined. Based on the current value of the portfolio, scenarios for future development can now be designed taking the same probability distribution as for the recorded returns. The simulated returns of the portfolio are used to design scenarios for possible performance over the holding period, using the valuation formula of the assets as a function of the risk factors in terms of volatility (Dowd, 1998). Subtracting the current portfolio value from the designed scenarios yields the possible value changes of the portfolio. The negative α-quantile of the determined portfolio value changes gives us the VaR based on the historical simulation. Manganelli and Engle (2001) formulate three central criticisms of this approach. Despite the fact that no explicit distribution assumption is made, there is a strong implicit distributional assumption: to yield valid results, it is necessary that the returns of a risk factor follow the same distribution throughout the defined time window. Due to the rolling time window, this assumption must therefore apply to the entire time series. Furthermore, the estimator for the empirical α-quantile is only valid if the observation period runs to infinity. The third problem concerns the length of the observation period directly. In practice, when analyzing financial market data, so-called "volatility clusters" can be identified, which means that the variance is not constant but that there are periods of particularly high volatility and periods of relatively low volatility. To take this effect into account, the time window used should not be too large. On the other hand, a minimum size of the observation period is required to have enough data points for the quantile estimation.
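A minimal sketch of the mechanics described above, assuming the portfolio's historical returns are already available; names and sample data are hypothetical:

```python
import numpy as np

def historical_simulation_var(returns, portfolio_value, alpha=0.99):
    """VaR from the empirical quantile of historical portfolio returns."""
    returns = np.asarray(returns, dtype=float)
    # The (1 - alpha)-quantile of returns is the loss threshold exceeded with probability 1 - alpha
    q = np.quantile(returns, 1.0 - alpha)
    return -q * portfolio_value

rng = np.random.default_rng(1)
hist_returns = rng.standard_t(df=4, size=1250) * 0.01    # placeholder return history
print(historical_simulation_var(hist_returns, portfolio_value=1_000_000, alpha=0.99))
```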

Monte-Carlo-Simulation:

The Monte Carlo simulation also determines portfolio or security prices depending on market risk factors. However, explicit assumptions about the distribution can be made without matching the normal distribution or the historical distribution. Following this, hypothetical scenarios are generated with the aid of parameterized random number generators; several thousand simulation runs can be made. The further procedure corresponds to that of the historical simulation: the α-quantile is determined from the generated values and the VaR of the portfolios or securities is determined as a function of the risk factors (Dowd, 1998). The great advantage of the Monte Carlo simulation is above all its flexibility. In addition to changes in distribution assumptions, it is also possible to adequately depict non-linear risks such as those found in option-type financial instruments. Further, large outliers can be considered, as well as the effect of volatility clustering (Manganelli and Engle, 2001). The biggest disadvantage of the Monte Carlo simulation is the computational effort that has to be expended. Especially for large portfolios, which primarily involve linear risks, the Variance-Covariance approach is therefore preferable in certain circumstances (Damodaran, 2007).
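As an illustration of the flexibility in the distributional assumption, the sketch below simulates returns from a fitted Student-t distribution and reads the VaR off the simulated quantile; the choice of distribution and the parameter names are illustrative assumptions, not the thesis's specification.

```python
import numpy as np
from scipy.stats import t

def monte_carlo_var(returns, portfolio_value, alpha=0.99, n_sims=100_000, seed=2):
    """VaR from the quantile of simulated returns under a fitted Student-t distribution."""
    df, loc, scale = t.fit(returns)                       # parametric distribution assumption
    sims = t.rvs(df, loc=loc, scale=scale, size=n_sims, random_state=seed)
    return -np.quantile(sims, 1.0 - alpha) * portfolio_value
```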

Garch (1,1):

The next approach we want to introduce is the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model, initially developed by Bollerslev (1986). The advantage of the GARCH model is that it captures volatility clustering and serial correlation. Hence, the variance of the returns is conditional on previous values of the returns. The conditional variance is defined as (Bollerslev, 1986):

$\sigma_{t+1}^2 = \omega + \sum_{i=0}^{q-1} \alpha_i\, r_{t-i}^2 + \sum_{j=0}^{p-1} \beta_j\, \sigma_{t-j}^2$  (3)

with

$\omega > 0$, $\alpha_i \geq 0$ and $\beta_j \geq 0$, and $\alpha_1 + \beta_1 < 1$, ensuring a positive conditional variance. The GARCH(1,1) is then defined as (Alexander, 2008):

$\sigma_t^2 = \omega + \alpha_1\, r_{t-1}^2 + \beta_1\, \sigma_{t-1}^2$  (4)

with $\sigma_{t-1}^2$ as the lagged conditional variance and $r_{t-1}^2$ the lagged squared return.

In this model $\alpha_1$ indicates how fast the variance reacts to market shocks. A large $\alpha_1$ expresses that shocks are immediately reflected in the variance forecast for time t, whereas low values predict a stable variance pattern. $\beta_1$ is a weight that carries the variance of the previous period into the forecast.

The parameters ω, α and β are estimated by using Maximum Likelihood Estimation (MLE), defined by Alexander (2008) as:

$\max\; \ell(\sigma_t) = \sum_{t=1}^{T} \left[ \ln\!\left( \frac{1}{\sqrt{2\pi\sigma_t^2}} \right) - \frac{r_t^2}{2\sigma_t^2} \right]$ .  (5)

Finally, the Value at Risk can be computed as:

$VaR_{\alpha,t} = \Phi^{-1}(\alpha)\, \hat{\sigma}_t$ .  (6)
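A compact sketch of how equations (4)-(6) can be turned into a one-step-ahead VaR forecast, using a Gaussian likelihood as in equation (5); the initialisation, starting values and function names are assumptions made for illustration only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def garch11_variance(params, r):
    """Conditional variance recursion of equation (4)."""
    omega, alpha, beta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = np.var(r)                                # initialise with the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def garch11_neg_loglik(params, r):
    """Negative Gaussian log-likelihood, equation (5) up to an additive constant."""
    sigma2 = garch11_variance(params, r)
    return 0.5 * np.sum(np.log(sigma2) + r ** 2 / sigma2)

def garch11_var_forecast(r, quantile=0.01):
    """One-step-ahead quantile forecast as in equation (6): Phi^{-1}(q) * sigma_{t+1}."""
    r = np.asarray(r, dtype=float)
    res = minimize(garch11_neg_loglik, x0=[1e-6, 0.05, 0.90], args=(r,),
                   bounds=[(1e-12, None), (0.0, 1.0), (0.0, 1.0)], method="L-BFGS-B")
    omega, alpha, beta = res.x
    sigma2 = garch11_variance(res.x, r)
    sigma2_next = omega + alpha * r[-1] ** 2 + beta * sigma2[-1]
    return norm.ppf(quantile) * np.sqrt(sigma2_next)     # negative for lower-tail quantiles
```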

Quantile Regression:

Quantile Regressions were first developed by Koenker and Bassett (1978) in order to model the conditional quantile of a dependent variable given the independent variables. QR is a suitable approach for estimating VaR, since VaR is the explicit quantile of future returns depending on current information. The main advantage of Quantile Regression is that it does not need any assumption about the underlying distribution of the time series and is also suitable for skewed distributions. It models the required quantile directly instead of modeling the whole distribution. Further, in the case of changing extreme values, the quantile regression coefficients and standard errors do not change (C.F. Lee and J.C. Lee, 2015).

The basic linear Quantile Regression model is:

$r_t^q = \alpha^q + \beta^q\, \sigma_{t-1} + \varepsilon_t^q$  (7)

with undefined distribution of the error terms. The conditional q-quantile is determined by the following minimization problem:

$\min_{\alpha,\beta} \sum_{t=1}^{T} \left( q - \mathbf{1}_{\{r_t \leq \alpha + \beta\sigma_{t-1}\}} \right) \left( r_t - [\alpha + \beta\sigma_{t-1}] \right)$  (8)

with

$\mathbf{1}_{\{r_t \leq \alpha + \beta\sigma_{t-1}\}} = \begin{cases} 1 & \text{if } r_t \leq \alpha + \beta\sigma_{t-1} \\ 0 & \text{otherwise} \end{cases}$ .  (9)

The purpose of this regression is to find a vector of parameters β which ensures that the q-quantile of $\varepsilon_t^q$ is as close to 0 as possible. The best-known quantile regression is the one associated with the ½-quantile. In this case, β is optimized such that the median of the residuals equals zero (Koenker and Bassett, 1978).
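The minimization in (8) is readily available in standard software; the sketch below fits the linear specification (7) with statsmodels' quantile regression on simulated data. The data-generating process and variable names are hypothetical and only serve to show the mechanics.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: next-day returns regressed on a lagged volatility proxy, as in equation (7)
rng = np.random.default_rng(3)
sigma_lag = np.abs(rng.normal(0.01, 0.005, size=2000))   # lagged volatility regressor
r_next = rng.normal(0.0, 1.0, size=2000) * sigma_lag     # returns with time-varying scale

X = sm.add_constant(sigma_lag)                           # design matrix [constant, sigma_{t-1}]
fit_5pct = sm.QuantReg(r_next, X).fit(q=0.05)            # solves the check-loss problem (8)
print(fit_5pct.params)   # conditional 5%-quantile: alpha^q + beta^q * sigma_{t-1}
```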

After discussing the respective methods of VaR forecasting and their shortcomings, let us focus on the general criticism of the VaR as a risk indicator. The estimation of the risk is relatively abstract: the Value at Risk does not indicate the amount of loss in case the loss −ΔV exceeds the barrier (Damodaran, 2007). At this point, it should be mentioned that the Expected Shortfall is an alternative measure of risk that maps extreme values. The Expected Shortfall is defined as the expected loss that occurs when the loss exceeds the barrier VaR. Another downside of VaR is that it is not sub-additive. This means that the VaR of two individual securities or portfolios cannot simply be added to obtain a common VaR. Basically, interdependencies in the form of correlation effects can have an impact on VaR. Depending on the number and type of securities to be evaluated, this can cause considerable expense (Dowd, 1998). Further, the VaR gives no indication of the intermittency, meaning we cannot estimate how frequently the barrier might be exceeded (Damodaran, 2007).

However, the benefits of Value at Risk lie in the simple determination and consistent interpretation of risk for different positions and securities, which makes it a popular measure in practice. It also accounts for the correlation of different risk factors. The risk is compressed into a single number, and the required capital reserve for banks can easily be derived from this measure (Alexander, 2008).

2.2 Market liquidity risk

Market risk is defined as the risk of movements in the level of asset prices. There are four main drivers of price changes, namely volatility, foreign exchange rate risk, interest rate risk and liquidity risk, which can influence the present value of securities (Dowd, 1998). The idea of VaR was initially developed in order to measure market risk. However, the VaR methodology can also be applied to estimate other types of risk, e.g. credit risk and liquidity risk. Except for the Quantile Regression approach, the other methods do not allow additional explanatory variables to be added to the estimation of VaR; hence Quantile Regressions are a suitable method to account for more types of risk (C.F. Lee and J.C. Lee, 2015).

This thesis will focus on the incorporation of liquidity risk into the VaR estimation. The next section provides a framework for the concept of liquidity and liquidity risk in the capital markets environment.

Definition of liquidity

The objective of this section is to clarify the concepts of liquidity and point out the items relevant to this thesis. In order to discuss the integration of liquidity risk into a market risk model, some concepts for measuring liquidity will be introduced in advance. This is necessary because liquidity, unlike e.g. the market price of a security, is not directly observable.

The idea of liquidity corresponds to three different settings. First, liquidity is related to the solvency of the firm. The liquidity of a company is sufficient if revenue and available cash holdings are sufficient at any time to cover the current and planned expenditures (Bervas, 2006). Hence, the net liquidity of assets and liabilities of corporates is a major component in maintaining operational activities. Second, we distinguish between funding liquidity, which considers the liability side, and market liquidity, which is based on assets. From an investor's perspective, market liquidity describes the ability to trade an asset. The third dimension of liquidity considers the liquidity of the whole economy in monetary terms and will not be considered further in this work (Amihud, 2002).

In the further course of this work we will focus on market liquidity in order to incorporate liquidity risk in the VaR framework.

Kyle (1985) characterizes four dimensions of market liquidity: width, depth, resiliency and speed:

• A market is wide when large-scale orders can be traded at low transaction costs.
• A market has depth when there are many unexecuted buy and sell orders close to the current price. In this case, the bid-ask spread is small, and the transaction costs are low.
• Resiliency corresponds to how fast a price returns to its equilibrium.
• Speed refers to the number of transactions needed to trade a particular item at pre-defined transaction costs, or to the time required to execute those transactions. The liquidity of a market therefore depends negatively on the time needed to trade a position at a pre-defined transaction cost rate.

Amihud and Mendelson (2006) also consider the cost of liquidity and hence define market liquidity as the cost of trading an asset relative to its fair price. The fair price is basically defined as the mid-price of the bid-ask spread. Liquidity cost is quantified as follows:

$L_t(x) := T(x) + PI_t(x) + D_t(x)$ .  (10)

$T(x)$ are the direct trading costs for a position x measured in monetary units, $PI_t(x)$ denotes the price impact and $D_t(x)$ the delay costs. Direct trading costs are defined as transaction taxes, brokerage commissions and exchange fees.


Figure 2: Price Impact increases with order size (Source: Stange & Kaserer, 2008)

The price impact is illustrated as the difference between the transaction price, which depends on the order size x, and the mid-price, resulting from imperfectly elastic demand and supply curves of a security at a particular point in time. The price impact increases with increasing x. Figure 2 illustrates the dynamics of the price impact. Delay costs occur when an order cannot be executed immediately. Thus, a counterparty cannot be matched to the initiated order and the investor bears the price risk. The literature distinguishes delay costs due to forced delay and deliberate delay in the execution of orders (Amihud and Mendelson, 2006; Stange and Kaserer, 2008). A forced delay occurs when the market is not liquid enough to cover the whole order size, whereas a deliberate delay arises when the investor divides a large transaction into small tranches even though the whole transaction could be executed immediately. The idea behind this strategy is to avoid large price movements, given the potential trade-off between price impact and delay costs.

Hence, the liquidation period is extended by dividing the position $x = x_1 + x_2 + \dots + x_n$ into n single orders at times $t_1, t_2, \dots, t_n$. This leads to a reduced price impact, which usually rises with the order size. This strategy is useful as long as the additional delay costs are lower than the price impact (Almgren, 2003). In order to show that the price impact is the biggest driver of liquidity costs, the next section quantifies the price impact in practice.


Indirect liquidity costs

Indirect liquidity measures are derived indirectly from market data (Stange and Kaserer, 2008). The most widespread measures are the proportion of zero-trading days, the turnover rate, the traded volume, the 'Illiquid-Ratio' and the 'Liquidity Index' (LIX).

Proportion of zero-trading days:

In comparison to the other measures, a very simple approach is proposed by Damodaran (2007), based on trading days without any transaction activity. According to Damodaran (2007), there is a relationship between liquidity costs and the number of days with zero returns. The idea is that an asset with higher liquidity costs shows price changes less frequently, since the high liquidity costs act as a barrier to transactions and hence cause more days with zero returns. This threshold will only be overcome by investors when they receive a valuable information signal about the price movement. This approach requires only a time series of returns and is hence beneficial when volume data or high-frequency data is not available. The proportion of zero-trading days is defined as:

$Zero = \frac{\text{number of days with zero returns in a month}}{\text{number of trading days in the month}}$ .  (13)

Turnover rate:

Since the traded volume does not account for the number of outstanding shares, it is not a proper measure for comparisons across assets and markets. Amihud and Mendelson (2006) propose the turnover rate as a liquidity proxy. It is computed by relating the traded volume V to the market value of the outstanding securities. The market value is simply calculated as the product of the number of outstanding shares and the average transaction price of the security:

$T_n = \frac{V}{MV}$ .  (12)

Traded volume:

The traded volume measures the transaction volume of a single asset or an entire market for a certain time period. It serves as a proxy to determine a price-quantity function in order to measure the price impact $PI_t(x)$ (Almgren, 2003). It is computed as the sum of the transaction prices $P_k$ multiplied by the traded quantities $N_k$ over all transactions k in the period, i.e. $V = \sum_k P_k\, N_k$.

Return-to-volume measure- “Amihud Illiquid-Ratio”:

Amihud (2002) used the US stock market to examine whether illiquid securities promised a higher return than liquid assets. He was aware that there were good and accurate measures of liquidity, such as the quoted or effective bid-ask spread, to quantify the degree of liquidity of individual stocks. He also knew that these measures represented adverse price movements and their probability of occurrence well, but all those measurement systems required assumptions about the market structure and sometimes very detailed market data. This market data was not available for his long-term study. For this reason, he developed the so-called Illiquidity-Ratio. The Illiquidity-Ratio of a stock i is defined as the average of the absolute daily log-returns $|r_{i,t}|$ in relation to the daily transaction volume $VOL_{i,t}$:

$ILLIQ_i = \frac{1}{T} \sum_{t=1}^{T} \frac{|r_{i,t}|}{VOL_{i,t}}$ .  (14)

A high 'ILLIQ-Ratio' indicates low liquidity, as a low trading volume is associated with a relatively high price movement. In his study, Amihud (2002) came to the conclusion that high illiquidity can, in retrospect, be expected to yield a high return. Accordingly, the illiquidity and the return of a security correlate positively, and a liquid stock is expected to generate a lower return than an illiquid one. Illiquidity is therefore a risk component for which the investor demands and receives compensation. However, the relationship between return and illiquidity is by no means constant and varies with time. Amihud (2002) also notes that market liquidity risk for small company stocks (in terms of small market capitalization) is higher than for large capitalization stocks. The advantages of his indicator are undoubtedly its ease of use and the very small data requirement. However, the small data basis is also one of its biggest drawbacks, as it does not cover all aspects of market liquidity risk. It also lacks interpretability and comparability across asset classes. Moreover, the 'ILLIQ-Ratio' only captures the exogenous component of market liquidity risk.
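A direct translation of equation (14) into code, assuming daily log-returns and traded volumes are available as pandas series (the function name is illustrative):

```python
import numpy as np
import pandas as pd

def amihud_illiq(log_returns: pd.Series, traded_volume: pd.Series) -> float:
    """Amihud (2002) illiquidity ratio: mean of |r_t| / VOL_t, cf. equation (14)."""
    ratio = log_returns.abs() / traded_volume
    # Days with zero volume produce infinities and are excluded from the average
    return ratio.replace([np.inf, -np.inf], np.nan).dropna().mean()
```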

‘Liquidity Index’ (LIX):

Danyliv et al. (2014) introduced a new measure the ‘Liquidity Index’ defined as:

$LIX_t = \log_{10}\!\left( \frac{Volume_t \cdot P_{mid,t}}{P_{high,t} - P_{low,t}} \right)$  (15)

where

$P_{high,t}$ refers to the highest price of an asset at time t,

$P_{low,t}$ refers to the lowest price of an asset at time t,

$P_{mid,t}$ refers to the mid-price of an asset at time t, calculated as the average of $P_{high,t}$ and $P_{low,t}$.

The idea behind LIX is to quantify the amount of trading volume needed to move the price of an asset by one monetary unit. Using the $\log_{10}$ function, the LIX is roughly scaled from 5 (very illiquid) to 10 (very liquid). Basically, from a €-perspective, the amount of capital needed to create a fluctuation of €1 is estimated by $10^{LIX_t}$. The advantage of LIX is that it also captures intraday movements, and currency values are excluded from the calculation, making LIX comparable across all international markets. The relevant information for calculating LIX can be accessed easily from free databases.
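Equation (15) translates one-to-one into code; the sketch below assumes daily volume, high and low price series and uses an illustrative function name:

```python
import numpy as np
import pandas as pd

def liquidity_index(volume: pd.Series, p_high: pd.Series, p_low: pd.Series) -> pd.Series:
    """Danyliv et al. (2014) Liquidity Index, cf. equation (15)."""
    p_mid = (p_high + p_low) / 2.0                 # mid-price as the average of high and low
    return np.log10(volume * p_mid / (p_high - p_low))
```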

Direct liquidity costs

Fundamentally, the costs of the price impact can also be determined directly from market data. One of the most widespread approaches is to measure the price impact costs using a price-volume function derived from transaction data. There are three measures for examining the direct costs: the bid-ask spread, which does not account for different order sizes; the weighted spread, which considers the rising cost of liquidity resulting from an increasing order size; and the 'Roll estimator'.

The bid-ask spread:

The bid-ask spread is a widely established measure of the price impact which an investor encounters while trading an asset. It captures the cost of a so-called 'roundtrip transaction', either buy and sell or sell and buy (Roy, 2004). A perfectly liquid market would exhibit the same bid and ask price without any spread. Hence, one can conclude: the bigger the bid-ask spread, the more illiquid the market. The market maker quotes the bid-ask spread $s_t$ for a certain quantity of orders. This quantity is defined as the 'spread depth' (Giot and Grammig, 2005). However, note that the bid-ask spread and the spread depth vary over time. The bid-ask spread is calculated as follows:

$s_t = \frac{P_t^{ask} - P_t^{bid}}{P_t^{mid}}$  (16)

where

$P_t^{ask}$ refers to the ask price,

$P_t^{bid}$ refers to the bid price,

$P_t^{mid}$ denotes the mid-price.

The market microstructure theory characterizes three main drivers of the bid-ask spread (Damodaran, 2007):

• Costs of order processing: The market maker charges fees for all administrative work associated with an order. These costs decline with increasing trading volume.
• Costs of asymmetric information: Market makers protect themselves by increasing spreads against informed investors, who have an informational advantage over market makers.
• Costs of inventory carrying: Since the market maker manages open positions, he bears additional costs. These costs compensate the market maker for facing risks due to price volatility, interest rate changes and fluctuating trading volume.

The main advantage of the bid-ask spread as a direct liquidity cost measure is the availability of the quotes among all asset classes. However, the downside of this measure is that the bid-ask spread is just quoted for a fixed number of orders. This does not capture increasing transaction volumes with varying spread depth (Roy, 2004).

The weighted spread:

Another suitable measure which overcomes the drawbacks of the bid-ask spread is the weighted spread. The most common measure for the weighted spread was developed by 'Deutsche Börse': the so-called 'Xetra Liquidity Measure' (XLM). Its big advantage is its easy implementation, since the required data is based purely on trading history. The calculation of the XLM also includes iceberg orders² with their full volume; hidden orders are taken into account in addition to the visible orders (Gomber et al., 2004). The XLM is thus able to cover the liquidity dimensions of depth, width and speed at the same time. The XLM also takes the order size x into account, which makes it a good complementary measure to the spread. With regard to large institutional orders the spread is not a proper measure of liquidity costs. As indicated earlier, the bid-ask spread is only valid for a limited quantity and thus only suitable for retail orders. And since very large orders exceed the spread depth and are executed against multiple order limits in the order book, the average transaction price declines (Giot and Grammig, 2005).

² In the case of an iceberg order only a small part of the total order volume can be seen in the open order book. Iceberg orders are primarily used by institutional investors to disguise the total volume of the order, thus reducing the risk of large price movements in the opposite direction. After the visible part is executed, a hidden order of the same size replaces it and becomes visible on the screen. This procedure is repeated until the whole transaction is executed. Source: https://www.investopedia.com/terms/i/icebergorder.asp

Algebraically, XLM(x) is the volume-weighted average spread for a particular order volume x in monetary units, with $x = P_t^{mid} \times q$, where q is the number of assets. In order to calculate the XLM, first we need to determine the average bid price:

$P_t^{bid}(q) = \frac{\sum_k P_{k,t}^{bid}\, q_{k,t}}{q}$ .  (17)

Every order is executed at k different bid prices $P_{k,t}^{bid}$ with the corresponding volumes $q_{k,t}$ at time t in the order book, sorted by price priority (Giot and Grammig, 2005). The average ask price is calculated analogously.

The next step is to calculate XLM(x) as the average volume-weighted spread relative to the unit mid-price, expressed in basis points:

$XLM(x) = \frac{P_t^{ask}(q) - P_t^{bid}(q)}{P_t^{mid}} \times 100$ .  (18)

Formula (18) is considered as the relative discount of a round trip with order volume $x = P_t^{mid} \times q$.

For our purpose, we determine the cost of liquidity as:

$COL_t \text{ in } \% = 0.5 \times \left[ \frac{P_{ask,t} - P_{bid,t}}{P_{mid,t}} \right]$ .  (19)

Since we only consider the liquidity cost of selling or buying an asset, respectively, the COL is just half of the spread. However, Stange and Kaserer (2008) assume a symmetrical order book, thus the liquidity cost of a single transaction position can be calculated as:

$L_t(x) = \frac{1}{2}\, XLM(x)$ .  (20)

Hence, the absolute liquidity costs for an order with volume x are calculated as:

$L_t^{abs}(x) = \frac{1}{2}\, XLM(x) \cdot x$ .  (21)

Referring back to Figure 2, the shaded area illustrates the liquidity costs between the bid curve and the ask curve depending on the volume x (Stange and Kaserer, 2008). The weighted spread captures the liquidity costs before a trade is executed. This makes it an advantageous measure for calculating the liquidity costs beyond the spread depth for any order size. However, note that the data needed to compute the weighted spread is not always available.

Roll estimator:

Roll (1984) developed another estimator in which the spread is determined from the serial covariance of transaction price changes. Thus, transaction costs are inferred from the serial covariance of daily stock returns:

$RS = \begin{cases} 2\sqrt{-Cov(\Delta P_t, \Delta P_{t-1})} & \text{if } Cov(\Delta P_t, \Delta P_{t-1}) < 0 \\ 0 & \text{otherwise} \end{cases}$ .  (22)

The big drawback of the Roll measure is that asymmetric information cannot be considered. According to Huang and Stoll (1997), short-term returns can be influenced by various other factors that are not taken into account by the Roll measure.
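Equation (22) can be computed from a daily price series as follows; the function name is illustrative:

```python
import numpy as np

def roll_estimator(prices) -> float:
    """Roll (1984) effective spread from the serial covariance of price changes, cf. equation (22)."""
    dp = np.diff(np.asarray(prices, dtype=float))     # price changes Delta P_t
    cov = np.cov(dp[1:], dp[:-1])[0, 1]               # Cov(Delta P_t, Delta P_{t-1})
    return 2.0 * np.sqrt(-cov) if cov < 0 else 0.0
```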

Based on the liquidity cost framework introduced above, the following section focuses on the concept of liquidity risk.

Liquidity risk

According to Hong et al. (2014), market liquidity risk occurs when a trader is not able to offset or eliminate a position without causing significant price movements because of imperfect market depth. This risk also arises for long positions. Liquidity risk is caused by the unknown components of liquidity costs, especially with respect to the price impact $PI_t(x)$ and the delay costs $D_t(x)$.

In terms of liquidity risk, the literature distinguishes between non-strategic and strategic transactions (Bangia et al., 1999; Amihud, 2002; Chriss and Almgren, 1998). As already specified earlier, a strategic transaction divides a large order into equal smaller sub-orders. With regard to a non-strategic transaction, liquidity risk $\tilde{L}_t(x)$ at a discrete time $t_0 < t_i$ emerges solely due to the uncertainty of the price impact. The absolute liquidity cost is calculated as the uncertain difference between the realized portfolio value $RV_t = q \times P_t^{trans}(q)$ and the fair value $q \times P_t^{mid}$:

$\tilde{L}_t(x) = q \times P_t^{trans}(q) \times e^{-r(t_i - t_0)} - q \times P_t^{mid}$ ,  (23)

where $P_t^{trans}$ refers to the realized price for a unit of an asset and q denotes the total number of assets.³

Hence, we can conclude that liquidity risk is determined solely by the price-quantity function, unknown at $t_0 < t_i$, and is affected by the time-varying supply and demand for the considered asset.

³ Note: the relationship between the order size x expressed in monetary units and the size of a position denoted in the number of assets q is $x = q \times P_t^{mid}$.

In contrast, in the case of a strategic transaction, liquidity risk in $t_0 < t_i$ for a position $x = x_1 + x_2 + \dots + x_n$ traded at $t_1, t_2, \dots, t_n$ emerges due to the uncertain price impact at the respective transaction dates and the additional price risk during the execution delay. The absolute liquidity risk $\tilde{L}_t(x_1 + x_2 + \dots + x_n)$ results from the unknown difference between the present value of the realized portfolio value $RV_t = \sum_{i=1}^{n} q_i \times P_t^{trans}(q_i)$ and the fair value $q \times P_t^{mid}$ of the trading position x:

$\tilde{L}_t(x_1 + x_2 + \dots + x_n) = \sum_{i=1}^{n} q_i \times P_t^{trans}(q_i) \times e^{-r(t_i - t_0)} - q \times P_t^{mid}$ .  (24)

Market risk is incorporated in formula (23) as well as in formula (24). Concretely, in $t_0 < t_i$ the uncertainty of $P_t^{mid}$ at time t contains the market risk. Hence, only the difference between transaction prices and mid-prices reflects liquidity risk. More precisely, receiving a lower amount than the market value of an asset is due to liquidity risk.

3. Related work

A broad range of literature on Value-at Risk has arisen since the 90s. This section summarizes the most relevant research related to this thesis. Since we already introduced some approaches for general VaR-measures in chapter two, this section will focus on incorporating the risk component induced by liquidity into VaR.

Bangia et al. (1999) suggest a method to integrate exogenous liquidity risk into the VaR. Exogenous liquidity risk is defined as the risk associated with fluctuations in the bid-ask spread. Exogenous liquidity is driven by factors beyond investor behavior, whereas endogenous liquidity is driven by e.g. the liquidation of large positions. This is one of the first models to explicitly incorporate liquidity risk into VaR. It can be determined relatively easily from available historical bid-ask spreads. Bangia et al. (1999) assume that market and liquidity risk are perfectly correlated, i.e. in phases of extremely negative market price movements, extremely large bid-ask spreads are also observed. However, endogenous liquidity risk in terms of order-induced price effects is not considered in this model, as the bid-ask spread is unable to capture it. Ernst et al. (2008) extend the model of Bangia et al. (1999), no longer relying on the empirical distribution of the bid-ask spread to estimate the LVaR; instead they assume a normal distribution for returns and bid-ask spreads. In order to capture the higher moments of the distributions, they resort to the 'Cornish Fisher expansion'⁴. Ernst et al. (2008) find that their method is superior to the standard model by Bangia et al. (1999). The model estimates a parametric VaR, adding the mean-variance-estimated spread to the price risk of an asset. Another advantage of this approach is the additivity of price risk and liquidity risk, which facilitates implementation in practice. Due to the availability of quoted spreads this model is easily implementable in practice and, so far, the most cited approach. There is no need to modify existing programs for computing VaR; just the calculated cost of liquidity has to be included in existing VaR models. However, Loebnitz (2006) criticizes a structural inconsistency: the spread adjustment is made on the original position's value rather than on the forecast. He reformulated a model considering this issue. Nonetheless, Stange & Kaserer (2008) emphasize that perfectly correlated price and spread variations are not in accordance with reality. They find that for large order sizes the assumption of perfect correlation leads to an overestimation.

To overcome this weakness, Stange & Kaserer (2008) and Giot & Grammig (2005) propose a model based on net returns adjusted for liquidity. The order-size-dependent weighted spread is used as the liquidity measure. They apply the 'Xetra Liquidity Measure' to calculate the ex-ante expected net return of an asset by adjusting the expected gross return for liquidity costs. In comparison to the approach of Bangia et al. (1999), this model is superior in terms of taking the impact of order size into consideration and including the dynamics of the net return distribution. Since it combines the mid-price return dynamics and the liquidity cost dynamics, these two factors are not necessarily perfectly correlated with each other. Francois-Heude and van Wyndale (2001) extend the model of Bangia et al. (1999), using order book data from the Paris Stock Exchange to determine the bid-ask spread as a function of volume. Unlike Bangia et al. (1999), they overcome the problematic assumption of a perfect correlation between market and liquidity risks by integrating both into one indicator. The dynamics of liquidity are taken into account by a correction term which compares the current volume-based bid-ask spread with its average. If the current spread is lower, the liquidity risk and thus also the LVaR decrease. If the spread is greater than the average, this indicates a narrower market width and results in an increase of the LVaR. This procedure has the advantages that, on the one hand, explicit assumptions regarding the spread distribution can be neglected and, on the other hand, that empirical quantiles do not have to be determined. In addition, order book data is used to estimate the order-induced price impact instead of parametric methods, thereby mapping actual trading opportunities.

The study of Roy (2004) focuses on price changes caused by transaction volumes. He finds that in the case where the price is affected by liquidity risk, the general VaR model is incomplete, since the time interval for its calculation does not allow for an orderly liquidation; thus an adjustment of the time horizon is required.

Many other studies suggest optimal liquidation approaches. In the study of Lawrence & Robinson (1995), the researchers match the VaR time horizon with the estimated holding period of the investor's asset. They state that the VaR is underestimated more, the shorter the holding period. They propose an approach that derives the optimal execution strategy by incorporating market risk in a mean-standard deviation model. The model by Hisata & Yamai (2000) considers the investor's behavior and the resulting market impact on prices by adjusting the VaR to the size of the investor's position. Basically, they elaborate the best execution strategy based on the varying level of market liquidity. The approach of Almgren & Chriss (1998) minimizes volatility risk and transaction costs in order to account for portfolio liquidation. They construct an optimal execution strategy by deriving a mean-variance approach.

In their paper, Hisata & Yamai (2000) transform the time interval of security sales into an endogenous variable. Their approach considers market risk by adjusting the VaR with regard to the level of market liquidity and the magnitude of investors' positions. Jarrow and Protter (2005) estimate a linear liquidity supply function to determine the order-induced price impact associated with endogenous liquidity risk. They assume that the estimated rise in the liquidity supply curve in times of crisis is particularly high and can serve as an upper estimate for the maximum expected liquidity costs. They distinguish between stable market phases and market crises. To determine the maximum expected liquidity costs, they only use the part of the time series that can be assigned to a market crisis. The exclusive consideration of market crises seems less suitable for risk assessment in normal market phases. In addition, a VaR statement based on a certain confidence level is not possible.


4. Data and descriptive statistics

This chapter describes the data used in our analysis. The data for this thesis consists of two portfolios. The first portfolio contains three volatile, equally weighted currencies (Brazilian Real, Russian Ruble and British Pound). The second portfolio is an equally weighted equity portfolio consisting of stocks with low market capitalization, in order to capture liquidity effects, covering the main sectors financial services, retail and technology. The stocks in the portfolio are Banco Brazil (BBAS3-BR; sector: financial services, Brazil), Kingdee International Software Group Co., Ltd. (268-HK; sector: technology, Hong Kong) and Migros Ticaret A.S. (MGROS-TR; sector: retail, Turkey). Forecasting VaR at portfolio level is a common procedure in practice and allows us to eliminate idiosyncratic noise. Logarithmic returns based on daily closing prices from 04.01.2000-23.04.2019 are divided into three sub-samples (2000-2006, 2007-2013 and 2014-2019) in order to capture different stress scenarios, e.g. the dotcom bubble in 2000, the financial crisis in 2008, the Brazilian economic crisis in 2014 and BREXIT. For the currency portfolio we use futures prices, since trading volume is not available for spot prices; the US Dollar serves as the reference currency. The data is retrieved from Thomson Reuters Eikon and FactSet.

Table 1 summarizes the main statistics for the daily returns of the selected stocks. We can see that the means of the returns are close to 0, which is not surprising for daily returns, and differ only slightly across stocks. The stocks exhibit a standard deviation slightly above 3%, with Kingdee showing the highest volatility.

Table 1: Descriptive statistics stocks from 04.01.2000-23.04.2019

                           Banco Brazil    Kingdee       Migros
count                      4846            4846          4846
mean                       0.04024%        0.0872%       -0.0197%
minimum                    -28.7343%       -25.4642%     -25.1457%
maximum                    21.4088%        28.3294%      27.6929%
standard deviation         3.1762%         3.4824%       3.0957%
kurtosis                   8.3240          8.780024      10.8247
skewness                   -0.1134         0.4707906     0.09476
Shapiro-Francia            125.3220        204.65        220.3740
Shapiro-Francia p-value    0.00001         0.00001       0.00001
Ljung-Box Q-statistic      54.32310        44.35230      63.42190
Ljung-Box p-value          0.06490         0.29320       0.01060
ADF (2 lags, trend)        -26.4920        -23.2350      -24.5450

A normal distribution has a symmetrical shape around its mean and hence implies a skewness of 0. We can see that the skewness of all stocks is close to 0. Banco Brazil is marginally left-skewed; in contrast, Kingdee and Migros are slightly right-skewed. That means that Banco Brazil has a somewhat longer left tail, with a greater probability of negative returns in comparison to perfectly normally distributed data. The kurtosis indicates how concentrated the returns are around their mean. Banco Brazil and Kingdee display a kurtosis above 8 and Migros shows a kurtosis above 10; normally distributed data has a kurtosis of 3. Hence, we can conclude that the stocks exhibit leptokurtic behaviour. With regard to the Shapiro-Francia test, we can reject the hypothesis of normality for all stocks, since all p-values are approximately 0. The Ljung-Box test is a statistical measure to identify whether our return series is independently distributed; the null hypothesis is that there is no serial correlation. For Migros we can reject the null hypothesis of an independent return series, whereas Banco Brazil and Kingdee show no evidence of autocorrelation.

Financial time series are usually not stationary, i.e. mean, volatility, autocorrelation etc. are not constant over time. By using log-returns, we can maintain the assumption that the return series are approximately stationary, since the Augmented Dickey-Fuller test statistics exceed (in absolute value) the critical value of -3.43, which allows us to exclude a random walk process.
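The statistics and tests reported in Tables 1 and 2 can be reproduced along the following lines. This is a sketch with assumed test settings (20 Ljung-Box lags; ADF with two lags and trend as stated in the tables) and it omits the Shapiro-Francia statistic, which is not part of the standard scipy/statsmodels toolkits:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.stattools import adfuller

def describe_returns(log_returns: pd.Series) -> dict:
    """Summary statistics and tests of the kind reported in Tables 1 and 2."""
    r = log_returns.dropna()
    lb = acorr_ljungbox(r, lags=[20], return_df=True)             # Ljung-Box Q-test (lag choice assumed)
    adf_stat, adf_p, *_ = adfuller(r, maxlag=2, regression="ct")  # ADF with 2 lags and trend
    return {
        "count": len(r),
        "mean": r.mean(),
        "std": r.std(),
        "skewness": stats.skew(r),
        "kurtosis": stats.kurtosis(r, fisher=False),              # raw kurtosis (normal = 3)
        "ljung_box_p": float(lb["lb_pvalue"].iloc[0]),
        "adf_stat": adf_stat,
        "adf_p": adf_p,
    }
```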

Table 2: Descriptive statistics currencies from 04.01.2000-23.04.2019

                           BRL             RUB           Pound
count                      4855            4855          4855
mean                       -0.0155%        -0.01425%     -0.0071%
minimum                    -8.9255%        -20.4953%     -6.2215%
maximum                    13.8806%        8.5548%       3.9293%
standard deviation         1.1066%         0.9025%       0.5928%
kurtosis                   13.57076        72.59         7.9099
skewness                   -0.0970         -2.8510       -0.3493
Shapiro-Francia            235.4700        743.92        94.81
Shapiro-Francia p-value    0.00001         0.00001       0.00001
Ljung-Box Q-statistic      73.1896         142.38        75.94
Ljung-Box p-value          0.0011          0.0000        0.0005

Table 2 summarizes all relevant statistics for the selected currencies of our second portfolio. As in the case of the equity portfolio, the means of the currency portfolio's daily returns are close to 0, although slightly negative. The Brazilian Real exhibits the highest volatility with 1.1066%, whereas the Pound shows the lowest volatility with 0.5928% among the three currencies. We see that all currencies are left-skewed; the Russian Ruble in particular is strongly negatively skewed. Considering the kurtosis, the Ruble also stands out with a high value of 72.59. The Shapiro-Francia test confirms that none of the currencies are normally distributed.

The low Ljung-Box p-values indicate strong evidence of autocorrelation for all currencies. Similar to the stock time series, we apply log-returns in order to smooth the return series. The Augmented Dickey-Fuller test statistics provide evidence for approximately stationary time series.

Figure 3: Spreads of Currencies

With regard to liquidity, figure 3 displays the respective spreads of the currencies. We see a large variation of the bid-ask spread for the Brazilian Real, especially between 2010 and 2015. The Ruble and the Pound exhibit a stable spread throughout the observation period.

Figure 4 illustrates the traded volume for all currencies. The volume of the Pound increases until 2005. During the financial crisis in 2008, we see several drops in the traded volume. The volume remains roughly constant afterwards, although a slight increase is observed after the BREXIT vote. In the case of the Brazilian Real, no large deviations are observed; also during the economic crisis period (from 2014 onwards) we cannot identify any anomalies. The traded volume of the Ruble varies slightly over time; especially between 2009 and 2011 the traded volume decreases.

Figure 4:Traded Volume Currencies

The spreads of the stocks in our portfolio are displayed in figure 5. Notably, the spreads of Banco Brazil and Migros vary largely over time. For Migros, we observe the highest peak during 2010. In the case of Banco Brazil, the highest spread is observed during 2014. Kingdee exhibits a constant spread until 2010 and varies slightly afterwards; an increase is observed in 2018.

Figure 6 displays the traded volume for all stocks. Migros shows a constant traded volume during the observation period. The volume of Banco Brazil varies from 2010 onwards, and we identify large variations between 2014 and 2019. Kingdee exhibits the largest variation among all stocks; especially between 2014 and 2019, its volume varies significantly.

Figure 6: Traded Volume Stocks

Figure 7 illustrates the return series of the two constructed portfolios under investigation. At first glance, we observe that the equity portfolio is significantly more volatile than the currency portfolio.

Especially between 2000 and 2010, the stocks exhibit an extremely volatile pattern. Both portfolios exhibit a negative trend after the burst of the dotcom bubble in 2001. At the end of 2004, we observe a steep drop of the equity portfolio, mainly driven by Kingdee and Banco Brazil. After the financial crisis in 2008, there is a strong recovery effect in the stock portfolio, followed by the highest observed return in the considered time interval; in contrast, the currency portfolio recovered only slightly towards its level before the drop. Between 2013 and 2014 we see a relatively stable period for both portfolios, followed by multiple price drops starting in 2016. The stock portfolio is primarily affected by the aggravation of the Brazilian economic crisis, and the currency portfolio mainly by the depreciation of the Brazilian Real, the Ruble and the Pound due to the recession in Russia and the BREXIT vote.

5. Methodology

After analyzing the relevant data, we introduce the methodological approach of our study. In our analysis we use modified Quantile Regression models: HAR-QREG, following the approach of Hagoum et al. (2014), and CAViaR-QREG, as proposed by Rubia & Sanchis-Marco (2013). The HAR-QREG model is a specific version of the standard QR that measures the respective volatility of daily, weekly and monthly log-returns and predicts the quantiles in question directly:

$VaR_t^q = \beta_0^q + \beta_1^q\, \sigma_{day,t-1} + \beta_2^q\, \sigma_{week,t-1} + \beta_3^q\, \sigma_{month,t-1} + \beta_4^q\, LIX_{t-1} + \beta_5^q\, COL_{t-1} + \varepsilon_t^q$ ,  (25)

where

$VaR_t^q$ is the conditional quantile of the day-ahead return,
$\beta_0^q$ estimates the constant, $\beta_1^q$, $\beta_2^q$, $\beta_3^q$ estimate the historical volatility for daily, weekly and monthly returns respectively,
$\beta_4^q$, $\beta_5^q$ estimate the lagged liquidity proxies LIX and COL respectively,
q defines the conditional quantile,
$\varepsilon_t^q$ denotes the error term.

Due to the fact that the volatility of returns in time series may exhibit clusters in terms of autocorrelation, we can expect the VaR to show the same behavior. Using an autoregressive specification, we can formalize this characteristic (Engle and Manganelli, 2004). The idea of CAViaR is to treat the conditional quantile in question like a latent autoregressive process, possibly depending on a number of lagged covariates. This specification offers a large degree of flexibility. We consider two time-varying specifications. The symmetric CAViaR-model is defined as:

$VaR_t^q = \beta_0^q + \beta_1^q\, VaR_{t-1} + \beta_2^q\, |r_{t-1}| + \beta_3^q\, LIX_{t-1} + \beta_4^q\, COL_{t-1} + \varepsilon_t^q$ ,  (26)

where

$VaR_t^q$ is the conditional quantile of the day-ahead return,
$\beta_0^q$ estimates the constant, $\beta_1^q$ the coefficient on the lagged conditional quantile of the day-ahead return,
$\beta_2^q$ denotes the estimator of the lagged absolute returns,
$\beta_3^q$, $\beta_4^q$ estimate the lagged liquidity proxies LIX and COL respectively,
q defines the conditional quantile,
$\varepsilon_t^q$ denotes the error term,

and the asymmetric CaViaR-model is:

$VaR_t^q = \beta_0^q + \beta_1^q\, VaR_{t-1} + \beta_2^q\, \max(r_{t-1}, 0) + \beta_3^q\, |\min(r_{t-1}, 0)| + \beta_4^q\, LIX_{t-1} + \beta_5^q\, COL_{t-1} + \varepsilon_t^q$ ,  (27)

where

$VaR_t^q$ is the conditional quantile of the day-ahead return,
$\beta_0^q$ estimates the constant, $\beta_1^q$ the coefficient on the lagged conditional quantile of the day-ahead return,
$\beta_2^q$ denotes the estimator of the lagged absolute positive returns and $\beta_3^q$ is the estimator for the lagged absolute negative returns,
$\beta_4^q$, $\beta_5^q$ estimate the lagged liquidity proxies LIX and COL respectively,
q defines the conditional quantile,
$\varepsilon_t^q$ denotes the error term.

The advantage of the asymmetric slope specification is that it accounts for leverage effects: since changes in the conditional quantiles of returns are associated with high volatility, the VaR prediction responds asymmetrically to positive and negative returns (Kuester et al., 2006).

The model can be reparametrized linearly and is a common approach for modelling conditional quantiles directly. We modify the models introduced above by including liquidity measures in order to gauge the effects of liquidity risk on forecasting VaR. The selected liquidity proxies are the Cost of Liquidity (COL) and the Liquidity Index (LIX), as introduced in chapter two. The motivation for selecting LIX and COL as liquidity proxies is to include two measures containing the main drivers of the liquidity premium: the traded volume and the bid-ask spread. The quantiles in question are q = 1%, q = 5%, q = 95% and q = 99%.
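As an illustration of how a model such as (25) can be estimated in practice, the sketch below fits a quantile regression with HAR-style volatility regressors and the two liquidity proxies using statsmodels. The volatility proxies (absolute return, 5-day and 22-day rolling standard deviations) and the function name are assumptions made for illustration, not the thesis's exact construction; the CAViaR-QREG specifications (26)-(27) additionally contain the lagged VaR and therefore require a numerical minimization of the check loss as in Engle and Manganelli (2004).

```python
import pandas as pd
import statsmodels.api as sm

def har_qreg_var(returns: pd.Series, lix: pd.Series, col: pd.Series, q: float = 0.01):
    """Quantile regression of returns on lagged volatility and liquidity proxies,
    in the spirit of equation (25)."""
    df = pd.DataFrame({"r": returns, "LIX": lix, "COL": col}).dropna()
    regressors = pd.DataFrame({
        "sigma_day": df["r"].abs(),                # daily volatility proxy (assumption)
        "sigma_week": df["r"].rolling(5).std(),    # weekly volatility proxy (assumption)
        "sigma_month": df["r"].rolling(22).std(),  # monthly volatility proxy (assumption)
        "LIX": df["LIX"],
        "COL": df["COL"],
    }).shift(1)                                    # all regressors enter with one lag
    data = pd.concat([df["r"], regressors], axis=1).dropna()
    fit = sm.QuantReg(data["r"], sm.add_constant(data.drop(columns="r"))).fit(q=q)
    return fit   # fit.fittedvalues are the in-sample conditional q-quantile (VaR) estimates
```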


6. Backtesting

In order to assess the models' accuracy, we perform backtests. First, we introduce the regulatory framework; then we give an overview of the statistical tests used to validate the models.

The regulatory framework requires a validation of VaR models by comparing the last 250 daily 99% VaR forecasts with the corresponding daily realized returns. A VaR model classified as accurate is rewarded with a low scaling factor. The scaling factor $S_t$ is used to determine the market risk capital requirement $MCR_t$:

\[ MCR_t = \max \left( VaR_t(0.01),\; S_t \cdot \frac{1}{60} \sum_{i=0}^{59} VaR_{t-i}(0.01) \right) + c . \qquad (28) \]

The scaling factor $S_t$ multiplied by the average VaR over the last 60 trading days, plus an additional amount of equity capital $c$ determined by the portfolio's underlying risk, yields the required capital reserve (Hong et al., 2014).

$S_t$ is defined as:

\[ S_t = \begin{cases} 3 & \text{if } x \le 4 \quad \text{(green zone)} \\ 3 + 0.2(x-4) & \text{if } 5 \le x \le 9 \quad \text{(yellow zone)} \\ 4 & \text{if } x \ge 10 \quad \text{(red zone)} , \end{cases} \qquad (29) \]

where $x$ is the number of violations. A violation or exceedance occurs when the realized loss is higher than the VaR forecast. Table 3 illustrates the classification of models. Accurate models are assigned to the green zone, whereas inaccurate models are categorized in the yellow or red zone. Note, however, that the yellow zone does not always imply an inaccurate model.

Table 3 Basel Traffic Light Approach: If a model is classified in the green zone, the probability of a type 2 error (a failed model being accepted) is very low. A type 1 error occurs when an accurate model is classified into the yellow or even red zone; its probability is 4.12%. Source: Jorion (2007)
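
The mapping from the number of exceedances to the scaling factor and traffic-light zone in eq. (29), and the resulting capital requirement in eq. (28), can be sketched as follows. The function and variable names are illustrative; VaR figures are assumed to be expressed as positive loss amounts.

```python
import numpy as np

def scaling_factor(violations: int):
    """Basel traffic light: map 99% VaR exceedances over 250 days to (S_t, zone), eq. (29)."""
    if violations <= 4:
        return 3.0, "green"
    if violations <= 9:
        return 3.0 + 0.2 * (violations - 4), "yellow"
    return 4.0, "red"

def capital_requirement(var_series, s_t: float, c: float = 0.0) -> float:
    """MCR_t = max(VaR_t(0.01), S_t * average of the last 60 daily VaR forecasts) + c, eq. (28)."""
    var_series = np.asarray(var_series)
    return max(var_series[-1], s_t * var_series[-60:].mean()) + c

# Example: count exceedances of the 1% VaR over the last 250 days (loss exceeds the forecast)
# exceedances = int((returns[-250:] < -var_1pct[-250:]).sum())
# s_t, zone = scaling_factor(exceedances)
# mcr = capital_requirement(var_1pct, s_t)
```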


The yellow zone is more likely to contain inaccurate models than accurate ones, but accurate models may still be classified in the yellow zone, e.g. when markets move in a fashion unanticipated by the model.

If the bank is able to prove that the model is fundamentally reliable, the supervisor may accept the model. Hence, the Basel Committee (1996) distinguishes between four categories of backtesting failures:

• Basic integrity of the model: The model is unable to capture the risk of the respective assets in the portfolio, or volatilities and correlations are computed incorrectly.

• Model's accuracy could be improved: There is a lack of precision in forecasting VaR.

• Bad luck or markets moved in a fashion unanticipated by the model: E.g. events with low probability but high market impact occurred, or volatilities and correlations were over- or under-predicted due to unexpected market movements.

• Intra-day trading: A change in positions occurred after estimating VaR.

Models classified into the red zone are automatically rejected. The probability that an accurate model will be rejected is very low, at 0.01%.

The main shortcoming of this approach is that it does not consider the independence of exceptions. Hence, a more advanced backtesting procedure should be applied.

To determine the reliability of our models, we test their correctness over a historical period. For a model to be classified as precise, the number of violations divided by the total number of observations should be very close to the selected confidence level. For example, if we choose a 99% confidence level for VaR, an exceedance should be observed once in 100 observations. We implement the Kupiec (1995) and Christoffersen (1998) tests for backtesting our models. The Kupiec test performs an unconditional coverage test in order to determine whether the frequency of exceedances is consistent with the specified confidence level. The null hypothesis $H_0: p = \hat{p} = \frac{x}{n}$ is defined as the model being 'correct'. The only information required for this test is the total number of observed returns $n$, the number of violations $x$ and the confidence level (Kupiec, 1995). The objective of this test is to identify whether the observed failure rate $\hat{p}$ is significantly different from $p$. Kupiec (1995) suggests conducting the test as a likelihood-ratio (LR) test6:

6 The likelihood-ratio test computes the ratio between the maximum probabilities of an outcome under two alternative hypotheses. The numerator is defined as the maximum probability of the observed result under $H_0$ and the denominator as the maximum probability of the observed result under the alternative hypothesis.


\[ LR_{uc} = -2 \ln \left[ \frac{(1-p)^{n-x}\, p^{x}}{\left(1-\frac{x}{n}\right)^{n-x} \left(\frac{x}{n}\right)^{x}} \right] \sim \chi^2_{1} . \qquad (30) \]
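
As a brief illustration, the $LR_{uc}$ statistic of eq. (30) can be computed as follows; this is a sketch using scipy for the chi-square p-value, and the function and argument names are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_lr(n: int, x: int, p: float):
    """Kupiec (1995) unconditional coverage test, eq. (30): n observations,
    x violations, target coverage rate p (e.g. 0.01 for a 99% VaR)."""
    phat = x / n
    log_h0 = (n - x) * np.log(1 - p) + x * np.log(p)
    log_h1 = (n - x) * np.log(1 - phat) + x * np.log(phat) if 0 < x < n else 0.0
    lr_uc = -2 * (log_h0 - log_h1)
    return lr_uc, 1 - chi2.cdf(lr_uc, df=1)

# Example: 5 exceedances of the 1% VaR in 250 days
# lr, pval = kupiec_lr(n=250, x=5, p=0.01)   # reject H0 if pval is below the chosen level
```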

There are two drawbacks that have to be taken into account when applying this test. First, the test is not reliable for sample sizes covering less than one year. Second, the test only considers the frequency of losses, not their timing (Dowd, 2006). Since a reliable VaR model requires independent exceptions in addition to a number of exceptions corresponding to the selected confidence level, we apply a second test, the Christoffersen test.

The Christoffersen (1998) test measures whether the probability of observing an exception on a particular day depends on whether an exception occurred a day before. In contrast to the unconditional probability of observing an exception, Christoffersen's test determines the dependency between consecutive days only. The test is based on the same log-likelihood testing framework as in the Kupiec test.

First, an indicator variable $I_t$ is defined, taking the value 1 when the estimated VaR at time t is exceeded and 0 otherwise. Then $n_{ij}$ is the number of days in state j conditional on state i having occurred the day before (1 denoting an exception and 0 a non-exception), and $\pi_i$ is the probability of an exception conditional on state i the previous day.

Finally, we can construct a 2 x 2 contingency table to illustrate the possible outcomes:

Table 4: Contingency table

              I_{t-1} = 0      I_{t-1} = 1      Total
I_t = 0       n_00             n_10             n_00 + n_10
I_t = 1       n_01             n_11             n_01 + n_11
Total         n_00 + n_01      n_10 + n_11

\[ \pi_0 = \frac{n_{01}}{n_{00}+n_{01}}, \qquad \pi_1 = \frac{n_{11}}{n_{10}+n_{11}}, \qquad \pi = \frac{n_{01}+n_{11}}{n_{00}+n_{01}+n_{10}+n_{11}} . \qquad (31) \]

The null hypothesis is $\pi_0 = \pi_1$, since an exceedance at t should not be linked to whether or not an exception occurred at t-1. Hence, the test statistic is:

\[ LR_{ind} = -2 \ln \left[ \frac{(1-\pi)^{n_{00}+n_{10}}\, \pi^{\,n_{01}+n_{11}}}{(1-\pi_0)^{n_{00}}\, \pi_0^{\,n_{01}}\, (1-\pi_1)^{n_{10}}\, \pi_1^{\,n_{11}}} \right] \sim \chi^2_{1} . \qquad (32) \]


Combining this test with the Kupiec test, we obtain a conditional coverage test which examines the correct failure rate and independence of exceptions:

\[ LR_{cc} = LR_{uc} + LR_{ind} \sim \chi^2_{2} . \qquad (33) \]

The model is rejected if the value of the $LR_{cc}$ statistic is higher than the critical value of the $\chi^2$ distribution.
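
For completeness, a minimal sketch of the independence test of eq. (32) and the joint conditional coverage test of eq. (33), given a 0/1 series of VaR violations; the helper function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def christoffersen_lr(violations):
    """Christoffersen (1998) independence test, eq. (32), from the 2x2 table of
    consecutive-day violation indicators (1 = exception, 0 = no exception)."""
    i = np.asarray(violations, dtype=int)
    prev, curr = i[:-1], i[1:]
    n00 = int(((prev == 0) & (curr == 0)).sum())
    n01 = int(((prev == 0) & (curr == 1)).sum())
    n10 = int(((prev == 1) & (curr == 0)).sum())
    n11 = int(((prev == 1) & (curr == 1)).sum())
    pi0 = n01 / (n00 + n01)
    pi1 = n11 / (n10 + n11) if (n10 + n11) > 0 else 0.0
    pi = (n01 + n11) / (n00 + n01 + n10 + n11)

    def ll(p, a, b):  # a*ln(1-p) + b*ln(p), with the convention 0*ln(0) = 0
        return (a * np.log(1 - p) if a > 0 else 0.0) + (b * np.log(p) if b > 0 else 0.0)

    lr_ind = -2 * (ll(pi, n00 + n10, n01 + n11) - ll(pi0, n00, n01) - ll(pi1, n10, n11))
    return lr_ind, 1 - chi2.cdf(lr_ind, df=1)

# Joint conditional coverage test, eq. (33):
# lr_cc = lr_uc + lr_ind
# p_cc = 1 - chi2.cdf(lr_cc, df=2)
```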


7. Results

In this section we analyze the results of the VaR estimation for the HAR-QREG, Sym. CaViaR-QREG and Asym. CaViaR-QREG models. We forecast one-day-ahead VaR for our portfolios and compare the results with the empirical returns for the introduced models at the 1%, 5%, 95% and 99% levels. We consider three samples covering the periods 2000-2006, 2007-2013 and 2014-2019. Stata 14.1 is used to estimate the quantile regression models. The model validation is done in Excel, implementing the Kupiec and Christoffersen tests. In Appendices B, C and D we summarize the estimation results to evaluate the variables and to address the statistical significance for all models respectively. Appendices E and F report the backtesting results for long and short positions respectively, to demonstrate the models' accuracy. We assign a "pass" to a model when it passes the Kupiec and Christoffersen tests as well as the joint test simultaneously.

HAR-QREG

Appendix B displays all estimation results of the HAR-QREG model for the equity as well as the currency portfolio. According to Hagoum et al. (2014), traders are concerned with long-term volatility. Hence, we expect the long-term component to have the biggest impact on the VaR forecast. With regard to the equity portfolio, we observe that short-term volatility largely has the biggest impact on the conditional quantile throughout the considered periods and VaR levels, followed by the long-term component. The weekly volatility has the weakest effect. However, the short-term component appears insignificant across all periods and levels except for the 1% VaR level during 2000-2006 and the 95% VaR level during 2014-2019. The other two volatility components are also insignificant in most cases. Regarding the left tail of the distribution (VaR 1% and VaR 5%), the impact of the coefficients is to some extent greater during 2000-2006 than in the other periods. This results from the price drop after the burst of the dotcom bubble and the Brazilian economic crisis.

Evaluating the liquidity proxies, we can conclude that COL has a strongly significant effect in all cases, whereas we cannot find conclusive evidence for LIX as a good explanatory variable across all VaR levels and considered periods. The large estimates for COL result from COL being defined as a very small number.

With regard to the currency portfolio, a different pattern compared to the equity portfolio is identified. The mid-term and long-term volatility mostly have a greater impact on the VaR forecast than the short-term component. Except during the period 2000-2006 at VaR 95%, the estimated parameters for weekly and monthly volatility are highly significant. Hence, in that case we can
