
VaR and implied correlation

Sean Tervooren (6170242)
27 June 2014
dr. S.A. Broda


Contents

Abstract
1. Introduction
2. The Model
   2.1 RiskMetrics
   2.2 The alternative model
   2.3 Value-at-Risk
   2.4 Backtesting
3. The Data
   3.1 Implied correlation
   3.2 Dow 30
   3.3 Data analysis
4. Results
   4.1 RiskMetrics
   4.2 The alternative model
   4.3 Backtesting
5. Conclusion


Abstract

This paper compares two Value-at-Risk (VaR) methodologies: the RiskMetrics benchmark model and an alternative model that incorporates implied correlation in the estimation process. The performance of the models is evaluated using likelihood ratio tests for unconditional coverage, independence and conditional coverage, as well as a duration-based test. The results are illustrated using data on the DOW industrial index. It is widely accepted that implied volatilities contain information that is not available in models based on time series data; one of the reasons is that implied volatility is the market's view of future volatility and therefore contains investor-specific information. This paper concludes that incorporating the implied correlation index in the volatility estimation model enhances the VaR forecasts relative to the RiskMetrics EWMA model. This holds for all models that incorporate the implied correlation index: they perform better than the RiskMetrics model in all likelihood ratio tests and, unlike the RiskMetrics model, show independence in the violation durations. This indicates that adding information through the implied correlation index improves the forecasting capability of the VaR measure.


1. Introduction

Investors are always searching for useful information to make better and more profitable decisions. To sustain profitability, investors should diversify their portfolios to reduce risk. An important piece of information for reducing risk is the volatility of a stock's returns. Index volatility is driven by two factors: the individual volatilities of the index components and the correlation between the returns of those components.

The first factor, as reflected in option prices, is referred to as implied volatility. The implied volatility of a single stock reflects the market's assessment of the future volatility of the returns of that particular stock. The second factor is the implied correlation: the market's view of the correlation between the stocks in an index, which exchanges such as the Chicago Board Options Exchange (CBOE) estimate. Intuitively, one would expect that if the implied volatility of an index option rises, there will be a corresponding change in the implied volatilities of the index components. But there are cases where the implied volatility of an index option changes without any corresponding movement in the implied volatilities of options on those components. The explanation is that the market's view of correlation has changed.

The CBOE measures the relationship between the implied volatility of options on an index and the implied volatility of a weighted portfolio of options on the components of that index; together these yield a measure of the market's expectation of the future correlation of the index components. This is the implied correlation, which the CBOE measures over a two-year period using two separate indexes with two different maturities, one year apart. Both indexes measure the implied correlation between components of the S&P 500, implied through the prices of SPX options (the tracking basket is a subset of the fifty most valuable components of the S&P 500 as measured by market capitalization) that mature in December, and the prices of single-stock options on the fifty largest components in the SPX, which are LEAP options (options with a maturity of more than one year) that expire in January of the following year.

Options are considered forward-looking indicators and are used as informative predictors of future asset price behavior. Implied volatility may contain information that is not captured by the underlying time series but is useful for predicting future volatility. Foreign currency options in particular contain a great deal of information, which can potentially be used to predict not only volatilities but also correlations. Currency assets can be traded directly, and therefore implied correlations can be calculated from the implied volatilities of the options. Campa (1998)


states that the forecasting ability of implied volatilities for individual stocks exceeds that of historical volatilities. This is not the case for S&P 100 index options.

Campa (1998) concludes that implied correlation improves the EWMA method, while the converse is never true: the RiskMetrics method is not capable of consistently improving the implied correlation forecast. This supports the view that options are forward-looking instruments that provide information not included in time series data. Therefore implied correlation can improve a forecast that is based solely on time series data. This is especially relevant for portfolio risk management, because correlation affects the portfolio variance, and an improved correlation forecast means an improved risk assessment of the portfolio.

Walter (2000), however, states that GARCH-based forecasts, but no other forecasts, consistently improve on the implied correlations. This suggests that implied correlations do not incorporate all information in the price history, or that their calculation is based on an incorrectly specified option pricing model. It also indicates that the historical information that is useful in forecasting correlation is most effectively contained in a GARCH-based correlation forecast. The variance-covariance matrix forecast can be improved by using implied correlation in two ways. First, an investor could construct an implied variance-covariance matrix using correlations and volatilities extracted from option prices. This is beneficial if the implied volatilities and correlations have superior predictive power over forecast methods that use time series data. Second, an investor could generate variance estimates from a time series and combine them with the implied correlations to construct a variance-covariance matrix. This method works best if the implied correlations have superior predictive power over a time series forecast but the implied volatilities do not.

Walter (2000) concludes, first, that implied correlation contains information that is not contained in a time series forecast. Secondly, implied correlation does not contain all the information in the price history that is useful for forecasting. Thirdly, the information that is not captured by implied correlation is most effectively summarized by GARCH-based correlation forecasts. Fourth, a GARCH-based forecast that combines market information with time series information may provide improved correlation forecasts.

The paper of Canina (1993) gives two ways to calculate volatility. One is to use historical volatility and assume that the recently realized level of volatility will continue in the future. The other is to use current option prices to estimate future volatility: given the observable parameters and an option valuation formula, there is a one-to-one correspondence between option prices and the volatility input. Canina (1993) finds that implied volatility contributes a statistically significant amount


of information about volatility over the short-term forecasting horizon covered by other models, but she also finds that implied volatility does not contain all the information that other models are able to extract from historical prices.

It is a widely accepted view that an option's implied volatility is a good estimate of the market's expectation of the asset's future volatility, but Canina (1993) finds evidence that does not support that view. An explanation is that, besides investors' volatility forecasts, an option price also reflects the net effect of the many factors that influence option supply and demand but are not incorporated in option pricing models. These include investors' liquidity considerations, investors' tastes for particular payoff patterns, and so on. Another conclusion is that implied volatility and historical volatility fail the rationality test. The consequence is that implied volatility should be considered an element of the information set from which the conditional expectation is derived, and not the conditional expectation itself.

A growing trend in the applied finance literature advocates the use of implied volatility as the best estimate of future volatility. According to Canina (1993), implied volatility is a good but biased forecast of future volatility: she shows that there is almost no correlation between implied volatility and future realized volatility. This finding may, however, be influenced by the inclusion of the October 1987 market crash. According to Giot (2003) this may be the reason why implied volatility was found inefficient and biased and compared so poorly to volatility forecasts based on historical data.

Giot (2003) concludes that implied volatility has a high information content and provides a meaningful input into risk measures of the VaR type. He shows that GARCH-type forecasts contain little information that is not already contained in implied volatility. It becomes clear that implied volatility provides accurate and meaningful information on future volatility. While sophisticated econometric models that deal with intraday data supply noteworthy and valuable information, they are complicated to estimate and use. Giot (2003) states that volatility forecasts based on implied volatility indexes computed by exchanges are an interesting alternative and supply meaningful market risk information without the need for complex econometric models.

In turbulent market conditions, real-time valuation of a portfolio is essential, and this requires accurate correlation estimates. Martens (2001) notes that the correlation between the US and the rest of the world has increased over time, and that in stock market crises there is an especially large increase in correlation. The fact that different markets have different trading hours complicates the capture of the correlation dynamics. To mitigate this non-synchronicity, researchers use weekly data instead of daily data; the rationale is that the difference in trading times is then spread out over a longer period.


To incorporate the non-synchronicity into the RiskMetrics EWMA model, Martens (2001) adds the covariances between the current returns and the returns at time t-1:

$$\sigma_{12,t} = \lambda\,\sigma_{12,t-1} + (1-\lambda)\left(r_{1,t}\,r_{2,t} + r_{1,t}\,r_{2,t-1} + r_{1,t-1}\,r_{2,t}\right)$$

The second and third terms on the right-hand side are only needed in the case of non-synchronous trading days. They are included because information that becomes available after the first market closes may still affect the second market, which is still open, while its effect on the first market only appears the next day. The two terms therefore measure the covariance between the returns of the two markets at times t and t-1.

Modeling with low-frequency data mitigates the problem of non-synchronicity, but it fails to capture the short-term correlation dynamics and decreases the model's efficiency. The model confirms the asymmetric effect on conditional variances in international stock markets: a large negative return leads to a larger increase in conditional variance than a large positive return. This effect is the same for portfolio returns of large and small stocks.

Value-at-Risk is a measure of the market risk in a portfolio; it quantifies the exposure of a portfolio to market fluctuations. There are several ways to select a VaR model. Sarma (2001) selects models according to two criteria. First, a model is selected on its statistical accuracy: the 99% VaR threshold should not be violated more than 1% of the time, and this should also hold at time t+1 conditional on all information up to time t. This implies that the VaR is low in periods of low volatility and high in periods of high volatility, so the events where the loss exceeds the VaR forecast are spread out over time and not clustered. The first stage in selecting a model therefore tests VaR measures for their conditional coverage. Christoffersen (2004) uses the same selection criteria.

The second stage of model selection depends on the utility function of the risk manager: the VaR model that maximizes utility is the most attractive. Sarma (2001) considers two loss functions, the regulatory loss function and the firm loss function. The regulatory loss function expresses the goals of a financial regulator, while the firm loss function measures the opportunity cost of capital faced by a firm.

An important contribution of the RiskMetrics methodology is the Value-at-Risk concept, which summarizes the distribution in a single number that investors find useful for measuring market risk. The VaR is essentially a p-percent quantile of the conditional distribution of the portfolio return. The VaR measure requires only a few parameters and works very well in common cases.


Christoffersen (2001) states that GARCH, RiskMetrics and other conditional volatility models are all based on the past returns of the asset itself. To incorporate the market's belief about future returns, an investor might add implied volatilities to the model. These can be backed out of put and call option prices using an option pricing model such as the Black-Scholes model.

Usually the implicit assumption is made that the conditional return distribution belongs to a location-scale family, which implies that the conditional quantile is a linear function of volatility. The coefficients of that function are determined by the distribution of the standardized returns. One can therefore think of the VaR measure as the outcome of a quantile regression:

$$\mathrm{VaR}_t(p) = F^{-1}(p \mid \mathcal{F}_{t-1}) = a_p + b_p\,\sigma_t$$

The parameters $a_p$ and $b_p$ vary with the chosen quantile probability p.

The moment condition states that no information available to the risk manager at time t-1 should help predict whether the return at time t will exceed the VaR measure predicted at time t-1. This is called the efficient VaR condition. It is of clear interest to a risk manager to test the appropriateness of an individual VaR measure.

The CBOE makes implied correlations available to investors by supplying two indexes with two different maturities. Because implied correlation is one of the factors that drive the volatility of a portfolio, it is an important factor to include in the volatility estimation. This paper compares such a volatility estimate against an estimate that excludes the implied correlation. The estimate of the risk of loss of a portfolio might also improve when using the new volatility estimate. This research therefore investigates whether including implied correlation in the volatility estimation improves the VaR forecast.

First the implied correlations must be acquired, and their estimation method will be reviewed. The data and the estimation method are available on the CBOE website; only daily data is freely accessible, starting in January 2007. The next step is to estimate the volatility of the individual stocks, which are obtained from the DOW industrial index. The volatility will be estimated with and without the implied correlation as an exogenous variable in a RiskMetrics EWMA model. The volatility estimates are then used to forecast the VaR of a portfolio, i.e. the risk of loss of a given portfolio for a given probability and horizon. The performance of each VaR estimate is tested using likelihood ratio tests for serial independence and for conditional and unconditional coverage. Based on the comparison of the estimates, a conclusion can be drawn on whether adding implied correlation to the volatility equation improves the VaR forecast.


The next section explains the theory and the models used in this research. The third section discusses the data, and the fourth section the results. The final section consists of a conclusion and suggestions for future research.

2. The Model

This paper studies the effect of implied correlation on the volatility of a portfolio of stocks and the effect of the resulting volatility estimate on the Value-at-Risk (VaR) forecast, in comparison to the RiskMetrics method. The VaR is the risk of loss of a given portfolio for a given probability and horizon. To compute the VaR, the volatility must be estimated. This paper uses backtesting to measure the performance of the VaR estimates.

2.1 RiskMetrics

This paper uses the RiskMetrics EWMA model to estimate this volatility. The RiskMetrics methodology was launched by JP Morgan in 1992. There are two kinds of volatility estimates: the first uses historical data and the second uses implied volatility, which is a market assessment of volatility. The RiskMetrics EWMA model is a volatility estimate that uses historical data and a recursive formula. Researchers always want more data, but the larger the data collection, the more diluted it becomes: the data become less relevant as the distance between the current time and the time the data were collected increases. The EWMA model takes this into account by assigning exponential weights to the observations, so that the weight assigned to very distant data decreases. This means that the RiskMetrics methodology can cope with very large data sets because it assigns more weight to recent observations. Because this research uses a fairly large dataset, the RiskMetrics model is a good fit.

2.2 The alternative model

This research uses the volatility estimates to construct the portfolio variance, which is needed to estimate the VaR measure. This means that the covariances between the constituents of the portfolio are also needed, so the multivariate model is used. The RiskMetrics model takes the following shape:


$$\Sigma_t = \lambda\,\Sigma_{t-1} + (1-\lambda)\,r_{t-1}\,r_{t-1}'$$

The RiskMetrics approach suggests λ = 0.94 for daily data. To start up the recursion, the covariances at time one are used. The RiskMetrics model serves as the benchmark: the performance of all other models is compared to the RiskMetrics methodology.
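As an illustration, a minimal MATLAB sketch of this recursion could look as follows; the variable names and the choice to start the recursion from the outer product of the first return vector are illustrative assumptions rather than the exact implementation used in this paper.

    % Minimal sketch of the multivariate RiskMetrics EWMA recursion.
    % 'returns' is a T-by-n matrix of daily returns; the names and the
    % start-up choice are illustrative assumptions.
    lambda = 0.94;                                   % RiskMetrics decay factor for daily data
    [T, n] = size(returns);
    Sigma  = zeros(n, n, T);                         % one covariance matrix per day
    Sigma(:,:,1) = returns(1,:)' * returns(1,:);     % start-up value (assumption)
    for t = 2:T
        Sigma(:,:,t) = lambda * Sigma(:,:,t-1) ...
                     + (1 - lambda) * (returns(t-1,:)' * returns(t-1,:));
    end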

The articles of Campa (1998) and Walter (2000) state that implied correlations contain valuable information that is not incorporated in models based on time series data. One of the reasons is that implied correlation reflects the market's view of future correlation, which means that preferences, strategies and knowledge of investors are taken into account that are not considered in models based solely on time series data. Incorporating implied correlation into the time-series-based RiskMetrics method should therefore enhance the forecasting capability of the model. To incorporate the effect of implied correlation on the volatility, the following equation is estimated:

$$\sigma_{ij,t}^{\text{alt}} = \alpha\,\sigma_{ij,t}^{\text{RM}} + (1-\alpha)\,\rho_t^{\text{IC}}\,\sigma_{i,t}^{\text{RM}}\,\sigma_{j,t}^{\text{RM}}, \qquad i \neq j$$

where $\rho_t^{\text{IC}}$ is the implied correlation and $\sigma_{i,t}^{\text{RM}}$ are the volatilities estimated by the RiskMetrics methodology. The RiskMetrics volatilities are used because the implied volatilities of the individual components could not be acquired; according to Walter (2000), using the RiskMetrics volatilities should still give reliable results. This equation yields an improved covariance forecast because it combines the implied correlation with the RiskMetrics estimates. By varying alpha, the balance between the two types of covariance estimates can be changed. With these new covariances the new VaR measures can be estimated.
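Under this reading of the model, where the off-diagonal elements blend the RiskMetrics covariances with covariances built from the implied correlation and the RiskMetrics volatilities, a hedged MATLAB sketch of the covariance construction for a single day might be:

    % Sketch of the alternative covariance matrix for one day (assumed reading
    % of the model above). SigmaRM is that day's RiskMetrics covariance matrix,
    % rhoIC the implied correlation level, alpha the mixing weight.
    volRM   = sqrt(diag(SigmaRM));                       % RiskMetrics volatilities
    SigmaIC = rhoIC * (volRM * volRM');                  % implied-correlation covariances
    n = size(SigmaRM, 1);
    SigmaIC(1:n+1:end) = diag(SigmaRM);                  % keep the RiskMetrics variances on the diagonal
    SigmaAlt = alpha * SigmaRM + (1 - alpha) * SigmaIC;  % blended covariance matrix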

2.3 Value-at-Risk

VaR is defined for a long position in an asset over horizon $l$, with probability $p$ ($0 < p < 1$), as

$$p = \Pr\!\left[\Delta V(l) \le -\mathrm{VaR}\right] = F_l\!\left(-\mathrm{VaR}\right),$$

where $V_t$ is the value of the asset at time $t$, $\Delta V(l) = V_{t+l} - V_t$, and $F_l$ is the cumulative distribution of $\Delta V(l)$. The VaR is measured in monetary units and is the p-quantile of this cumulative distribution. For the calculation of the quantile, the standardized residuals need to be tested for normality. If normality is rejected, the t-distribution may be a good alternative because it has heavy tails. If the distribution is obviously skewed, an


asymmetric distribution can be chosen. When the parameters of the distribution are known or estimated, the expected portfolio loss can be estimated as follows:

$$\mathrm{VaR}(l, p) = \mu_l + \sigma_l\,F^{-1}(p),$$

where $l$ is the horizon and $p$ the chosen probability. This research estimates the VaR in the following manner:

$$\mathrm{VaR}_t(p) = \mu_p + \sigma_{p,t}\,\mathrm{icdf}(p)$$

In this equation $\mu_p$ is the portfolio mean, $\sigma_{p,t}$ the portfolio volatility, and icdf the inverse cumulative distribution of the standardized residuals found earlier.
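A small MATLAB sketch of this VaR calculation, assuming a fitted distribution object for the standardized residuals and a portfolio variance series (the variable names are illustrative), could be:

    % Sketch: 99% one-day VaR from the portfolio variance and the fitted
    % distribution of the standardized residuals. 'pd' is a fitted distribution
    % object, 'sigma2p' the portfolio variance series and 'muP' the portfolio
    % mean; the names are illustrative.
    p   = 0.01;
    q   = icdf(pd, p);                 % p-quantile of the standardized residuals
    VaR = muP + sqrt(sigma2p) .* q;    % p-quantile of the portfolio return distribution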

2.4 Backtesting

To measure the performance of the VaR estimates, this research uses the backtesting framework of Christoffersen (2004). The event where an ex post portfolio loss exceeds the ex ante VaR measure is called a violation. Of great importance in backtesting are clustered violations, i.e. violations that occur in rapid succession. Large losses that occur in rapid succession are more likely to lead to catastrophic events such as bankruptcy.

The hit sequence of violations is defined by the following indicator function:

$$I_t = \begin{cases} 1, & \text{if } r_{p,t} < \mathrm{VaR}_t(p) \\ 0, & \text{otherwise.} \end{cases}$$

The theory behind the VaR measure is that $I_t$ is i.i.d. Bernoulli($p$) distributed. The unconditional coverage (uc) test tests this against the alternative that $I_t$ is i.i.d. Bernoulli($\pi$) distributed, which gives the null hypothesis $H_0: \pi = p$; it tests whether on average the coverage is correct. This test implicitly assumes that the hits are independent. Independence can be tested by assuming an alternative in which the hit sequence follows a first-order Markov chain with switching probability matrix

$$\Pi_1 = \begin{pmatrix} 1-\pi_{01} & \pi_{01} \\ 1-\pi_{11} & \pi_{11} \end{pmatrix},$$

where $\pi_{ij}$ is the probability of an $i$ on day $t-1$ being followed by a $j$ on day $t$. The null hypothesis of the test of independence (ind) is then $H_0: \pi_{01} = \pi_{11}$. These two tests can be combined into a test for conditional coverage (cc).


The idea behind the Markov alternative is that clustered violations signal risk model misspecification. Violation clustering is important because it implies repeated capital losses to the institution, which together could result in bankruptcy.

The likelihood function for a sample of $T$ observations of a Bernoulli variable $I_t$ with a known violation probability $p$ is written as:

$$L(p) = (1-p)^{T_0}\,p^{T_1},$$

where $T_1$ is the number of violations in the sample and $T_0 = T - T_1$ the number of non-violations. The likelihood function for a Bernoulli distribution with an unknown probability $\pi$ is:

$$L(\pi) = (1-\pi)^{T_0}\,\pi^{T_1}.$$

The MLE of $\pi$ is:

$$\hat{\pi} = \frac{T_1}{T_0 + T_1},$$

and therefore the likelihood ratio test for unconditional coverage can be written as:

$$LR_{uc} = -2\,\ln\!\left[\frac{L(p)}{L(\hat{\pi})}\right] \sim \chi^2_1.$$

For the independence test, the likelihood under the alternative hypothesis is:

$$L(\Pi_1) = (1-\pi_{01})^{T_{00}}\,\pi_{01}^{T_{01}}\,(1-\pi_{11})^{T_{10}}\,\pi_{11}^{T_{11}},$$

where $T_{ij}$ denotes the number of observations with a $j$ following an $i$. The MLE estimates are:

$$\hat{\pi}_{01} = \frac{T_{01}}{T_{00}+T_{01}}, \qquad \hat{\pi}_{11} = \frac{T_{11}}{T_{10}+T_{11}}.$$

This gives the following likelihood ratios for the independence test and the conditional coverage test:

$$LR_{ind} = -2\,\ln\!\left[\frac{L(\hat{\pi})}{L(\hat{\Pi}_1)}\right] \sim \chi^2_1,$$

$$LR_{cc} = -2\,\ln\!\left[\frac{L(p)}{L(\hat{\Pi}_1)}\right] \sim \chi^2_2.$$

The tests are asymptotically chi-squared distributed with the degrees of freedom given above. These likelihood ratio tests make it possible to compare the performance of the new VaR estimates with the benchmark RiskMetrics VaR measure.
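A hedged MATLAB sketch of these three tests, computed directly from a 0/1 hit sequence, might look as follows; the variable names are illustrative and the conditional coverage statistic is formed as the sum of the other two, as is standard.

    % Sketch of the unconditional coverage, independence and conditional coverage
    % LR tests from a 0/1 hit sequence, e.g. hits = rp < VaR (names illustrative).
    p   = 0.01;
    T1  = sum(hits);               T0  = numel(hits) - T1;
    T00 = sum(hits(1:end-1) == 0 & hits(2:end) == 0);
    T01 = sum(hits(1:end-1) == 0 & hits(2:end) == 1);
    T10 = sum(hits(1:end-1) == 1 & hits(2:end) == 0);
    T11 = sum(hits(1:end-1) == 1 & hits(2:end) == 1);
    pihat  = T1 / (T0 + T1);
    pihat2 = (T01 + T11) / (T00 + T01 + T10 + T11);     % hit probability over transitions
    pi01   = T01 / max(T00 + T01, 1);
    pi11   = T11 / max(T10 + T11, 1);
    xlogy = @(x, y) x .* log(max(y, realmin));          % treats 0*log(0) as 0
    bernL = @(q, n0, n1) xlogy(n0, 1 - q) + xlogy(n1, q);
    LRuc  = -2 * (bernL(p, T0, T1) - bernL(pihat, T0, T1));
    logL1 = bernL(pi01, T00, T01) + bernL(pi11, T10, T11);   % Markov alternative
    LRind = -2 * (bernL(pihat2, T00 + T10, T01 + T11) - logL1);
    LRcc  = LRuc + LRind;
    pvals = 1 - chi2cdf([LRuc LRind LRcc], [1 1 2]);    % asymptotic p-values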


It is also important to test the durations between violations. The intuition behind a duration-based test is that clustered violations result in an excessive number of relatively short and relatively long no-hit durations, corresponding to market turbulence and market calm. The duration (in days) between two VaR violations is defined as follows:

$$D_i = t_i - t_{i-1},$$

where $t_i$ denotes the day of violation $i$.

Under the null hypothesis that the risk model is correctly specified, the hit durations should have no memory and a mean of $1/p$ days. The only memory-free continuous distribution is the exponential distribution, which means that under the null hypothesis the durations are exponentially distributed:

$$f_{\exp}(D; p) = p\,e^{-pD}.$$

In order to establish a statistical test for independence, an alternative that allows for dependence must be specified. A convenient choice is the Weibull distribution:

$$f_W(D; a, b) = a^b\, b\, D^{\,b-1}\, e^{-(aD)^b}.$$

The Weibull distribution has the advantage that it reduces to the exponential distribution when b = 1. For b < 1 the Weibull distribution has a decreasing hazard rate, which leads to excessive short and long no-hit durations; this would be evidence of misspecification of the volatility dynamics in the risk model.

Because of the bankruptcy threat from VaR violation clustering, the null hypothesis of independence is of particular interest. The null hypothesis is therefore:

$$H_0: b = 1.$$

The null hypothesis is tested with a likelihood ratio test. To use the durations in the likelihood function, it is important to indicate whether a duration is censored. The series $C_i$ is created, which is zero if the duration is uncensored and one if the duration is censored. For example, if the hit sequence starts with a zero, then $D_1$ is the number of days until the first violation and $C_1 = 1$, because the observed duration is left-censored. If the hit sequence starts with a one, $D_1$ is the number of days until the second hit and $C_1 = 0$.

The procedure is similar for the last duration: if the last observation in the hit sequence is a zero, then the last duration, $D_{N(T)}$, is the number of days since the last one in the hit sequence and $C_{N(T)} = 1$, because that duration is right-censored.


The contribution of an uncensored duration to the likelihood function is its probability density $f(D_i)$; for a censored duration the contribution is its survival function $S(D_i) = 1 - F(D_i)$. Combining the uncensored and censored durations leads to the following log-likelihood function:

$$\ln L(D;\Theta) = C_1 \ln S(D_1) + (1-C_1)\ln f(D_1) + \sum_{i=2}^{N-1} \ln f(D_i) + C_N \ln S(D_N) + (1-C_N)\ln f(D_N).$$

Once the likelihood functions are computed, the likelihood ratio test is calculated in a straightforward fashion.
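A possible MATLAB sketch of this duration test, under the simplifying assumption that the hit sequence begins and ends with a zero (so that the first and last durations are censored), is:

    % Sketch of the Weibull duration test, assuming the hit sequence starts and
    % ends with a zero so the first and last durations are censored; names are
    % illustrative.
    days = find(hits == 1);                                  % violation days
    D    = [days(1); diff(days(:)); numel(hits) - days(end)];% durations in days
    C    = zeros(size(D)); C(1) = 1; C(end) = 1;             % censoring indicators
    % Weibull log-likelihood with censoring: density for uncensored durations,
    % survival function exp(-(aD)^b) for censored ones.
    loglik = @(a, b) sum((1 - C) .* (b*log(a) + log(b) + (b - 1)*log(D) - (a*D).^b) ...
                         - C .* (a*D).^b);
    thetaW = fminsearch(@(t) -loglik(exp(t(1)), exp(t(2))), [log(1/mean(D)); 0]);
    aW = exp(thetaW(1));  bW = exp(thetaW(2));               % unrestricted Weibull MLE
    thetaE = fminsearch(@(t) -loglik(exp(t), 1), log(1/mean(D)));
    LRdur  = 2 * (loglik(aW, bW) - loglik(exp(thetaE), 1));  % b = 1 under the null
    pval   = 1 - chi2cdf(LRdur, 1);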

All the necessary calculations are made in MATLAB version R2013a.

3. The Data

3.1 Implied correlation

The implied correlation data can be found on the CBOE website. The data consist of two indexes, each with a different maturity. The difference comes from the fact that the two indexes use LEAP options and SPX options that expire at different moments in time.

First the CBOE creates a tracking basket of options on the fifty most valuable components of the S&P 500. The components are ranked each month by market capitalization, which is the stock price multiplied by the number of shares outstanding. There is a reserve bench of the five most valuable components not in the tracking basket, so that if a component leaves the index for any reason there is a reserve to fill the gap. Secondly, the volatilities of the individual options in the SPX tracking basket are estimated with the Barone-Adesi Whaley option valuation model. The implied volatilities of each put/call pair are then combined through a linear weighting to find the single at-the-money implied volatility for each stock; a stock option is at-the-money when the strike price is equal to the current stock price. To estimate the volatility of the SPX index the same method is used, except that the Black option valuation model for stock indexes is applied to the put/call options. The third step is to estimate the market capitalization of each individual stock and its capitalization weight. The formula for the capitalization weight is:


$$w_i = \frac{P_i\,N_i}{\sum_{j} P_j\,N_j},$$

where $P_i$ is the price of component $i$ and $N_i$ are the float-adjusted shares outstanding of component $i$. The last step is to estimate the implied correlation, which has the following formula:

$$\rho^{\text{IC}} = \frac{\sigma_{\text{index}}^2 - \sum_{i} w_i^2\,\sigma_i^2}{2\sum_{i}\sum_{j>i} w_i\,w_j\,\sigma_i\,\sigma_j}.$$
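A minimal MATLAB sketch of this calculation, with illustrative variable names for the component prices, shares, implied volatilities and the index implied volatility, could be:

    % Sketch of the implied correlation calculation described above. 'P' and 'N'
    % are column vectors of component prices and float-adjusted shares, 'sigma'
    % the component implied volatilities and 'sigmaIndex' the index implied
    % volatility; all names are illustrative.
    w     = (P .* N) / sum(P .* N);                % capitalization weights
    ws    = w .* sigma;
    num   = sigmaIndex^2 - sum(w.^2 .* sigma.^2);
    outer = ws * ws';                              % all pairwise products w_i sigma_i w_j sigma_j
    den   = sum(outer(:)) - sum(diag(outer));      % equals 2 * sum over i < j
    rhoImplied = num / den;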

The CBOE started calculating the CBOE S&P 500 Implied Correlation Index on the third of January 2007. It is interesting to observe that, similar to the VIX, the implied correlation tends to increase when the S&P 500 decreases. This relationship suggests that the diversification benefits of investing in a broad-based equity index are limited.

3.2 Dow 30

This research uses the DOW industrial index (DOW 30) and its constituents. The DOW industrial index consists of thirty stocks that are traded on the New York Stock Exchange and the NASDAQ, and it is a price-weighted index. The data have been obtained from DataStream via the UvA library and have been sampled over the period 2007-2014. This makes the data sample consistent with the sample of the implied correlation, which starts in 2007, and generates the largest possible dataset.

3.3 Data analysis

This research constructs a portfolio of twenty-nine stocks that are traded on the DOW 30. The constituent VISA is omitted because VISA was not traded until the nineteenth of March 2008. Including this constituent in the portfolio causes some computational problems in MATLAB and also changes the dynamics of the portfolio slightly after the IPO of VISA. To keep the research as reliable as possible, the choice was made to omit VISA. The mix of all the remaining constituents of the DOW 30 produces a portfolio with a wide variety of stocks that are active in a broad range of industries, including pharmaceuticals, aircraft manufacturers and financial institutions. This gives a good overall view of the performance of the economy. The portfolio returns are calculated with the following equation:


$$r_{p,t} = w'\,r_t,$$

in which $w$ is the weight vector and $r_t$ the return vector of the constituents.
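A small MATLAB sketch of this calculation is given below; the equal-weighting scheme is an assumption for illustration, as the weights are not stated explicitly here.

    % Sketch: portfolio returns as the weighted sum of constituent returns.
    % R is a T-by-29 matrix of constituent returns; the equal-weighting scheme
    % is an assumption for illustration.
    nStocks = size(R, 2);
    w  = ones(nStocks, 1) / nStocks;   % equal weights (assumption)
    rp = R * w;                        % T-by-1 vector of portfolio returns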

Figure 1-Portfolio return

Figure 1 shows the return of the portfolio. Between 2008 and 2010 there is a very noticeable period of volatility clustering, which is not surprising given that the credit crisis unfolded in the same period. The Eurozone crisis is also clearly visible in the graph around 2012.

It is important to analyze the portfolio data before the actual research. First of all it is interesting to estimate the mean and variance of the returns.

Table 1 - Descriptive statistics

Mean        0.00019782
Variance    0.00018555

Table 1 shows that the mean is slightly positive and the variance of the portfolio is very small. This supports the theoretical assumption that the mean of stock returns is usually very close to zero. The variance in Table 1 is also used to start up the recursion of the RiskMetrics method. It is also interesting to look at the autocorrelation and partial autocorrelation functions, since the correlations between successive values of a time series are of key interest in forecasting its future movements.


Figure 2-Sample Autocorrelation function

The sample autocorrelation function shows that the process cuts off after two lags and is no longer statistically different from zero. The eighth lag is again statistically different from zero, but such fluctuations are common in real time series data, so this is ignored. Theory dictates that the autocorrelation function of an autoregressive process declines geometrically. From this graph it is possible to conclude that the time series follows an MA(2) process.

Figure 3-Sample Partial Autocorrelation Function

The sample partial autocorrelation function also cuts off after two lags. Again there are some later lags that are statistically different from zero, but these are disregarded for the same reason as with the autocorrelation function. This means that the time series data follows an AR(2) process.

The returns are also tested for normality using the Jarque-Bera test. The null hypothesis of this test is that the returns are normally distributed, and it is rejected. As is common for stock returns, this research uses the t-distribution when normality is rejected.


Figure 4-QQ plot Returns

The QQ plot in Figure 4 confirms the outcome of the Jarque-Bera test: the plotted points do not lie on the diagonal line that represents the normal distribution. The returns also seem slightly skewed to the left, with a skewness of -0.0148, so they are still fairly symmetric.

To test the returns for a unit root, the augmented Dickey-Fuller test is used. The null hypothesis is that the time series contains a unit root and is therefore not stationary; for a time series to be stationary, the roots of the characteristic equation should lie outside the unit circle. The test rejects the null hypothesis, which means that the time series is stationary, i.e. its joint probability distribution does not change when shifted in time.
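The diagnostics described in this section could be reproduced with a few toolbox calls along the following lines (a sketch with illustrative names):

    % Sketch of the diagnostic checks described above, using Statistics and
    % Econometrics Toolbox functions ('rp' is the portfolio return series).
    hJB  = jbtest(rp);        % Jarque-Bera test; h = 1 means normality is rejected
    hADF = adftest(rp);       % augmented Dickey-Fuller test; h = 1 means the unit root is rejected
    autocorr(rp, 20);         % sample autocorrelation function, 20 lags
    parcorr(rp, 20);          % sample partial autocorrelation function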

4. Results

The purpose of this section is to calculate the VaR estimates and compare the alternative models with the RiskMetrics EWMA model. The calculation of the VaR estimates is described first; afterwards the estimates are compared on their forecasting performance using likelihood ratio tests for serial independence and for conditional and unconditional coverage.

4.1 RiskMetrics

To estimate the volatility, this research starts with the benchmark RiskMetrics model. The multivariate setting of the RiskMetrics methodology is described by the following equation (as in Section 2.2):

$$\Sigma_t = \lambda\,\Sigma_{t-1} + (1-\lambda)\,r_{t-1}\,r_{t-1}'.$$


This means that the further back in time an observation lies, the less weight it receives and the less influence it has on the volatility at time t. This method gives an n x n covariance matrix, where n is the number of constituents in the portfolio, for every time t. Using these matrices, the portfolio variance can be calculated with the following formula:

$$\sigma_{p,t}^2 = w'\,\Sigma_t\,w = \sum_{i}\sum_{j} w_i\,w_j\,\sigma_{ij,t},$$

where $w_i$ is the weight of constituent $i$.
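A short MATLAB sketch of this step, reusing the covariance array and weight vector from the earlier sketches (names illustrative), might be:

    % Sketch: the daily portfolio variance from the EWMA covariance matrices
    % (Sigma is the n-by-n-by-T array from the recursion above, w the weight vector).
    T = size(Sigma, 3);
    sigma2p = zeros(T, 1);
    for t = 1:T
        sigma2p(t) = w' * Sigma(:,:,t) * w;   % equals sum_i sum_j w_i w_j sigma_ij,t
    end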

Figure 5-Portfolio variance RiskMetrics

Figure 5 shows the portfolio variance estimated with the RiskMetrics method. Between 2008 and 2010 there are some noticeable volatility spikes, which can be explained by the credit crisis that unfolded in the same period. This period saw large fluctuations in the returns on the DOW 30, which explain the volatility spikes in the graph.

The portfolio variance is an important building block in the construction of the VaR estimate. The VaR is estimated by choosing a probability p and using the following equation:

$$\mathrm{VaR}_t(p) = \mu + \sigma_{p,t}\,\mathrm{icdf}(p),$$

in which µ is the expected value of the portfolio return and icdf(p) is the inverse cumulative distribution evaluated at probability p. The distribution is that of the standardized residuals, so these have to be calculated first. They are estimated with the following formula:


$$z_t = \frac{r_{p,t} - \mu}{\sqrt{\sigma_{p,t}^2}}.$$

First the standardized residuals are tested for normality, since the normal distribution is the most convenient distribution for calculation purposes. Normality is tested with the Jarque-Bera test, and the null hypothesis of normally distributed residuals is rejected. Because the tails of stock return distributions are usually heavy, this research fits a t-distribution to the residuals, in fact a location-scale t-distribution. To find the correct distribution parameters for the residuals, the fitdist function in MATLAB is used. This function fits a specified distribution to the residuals and estimates the corresponding parameters. This yields the following parameters for the distribution:

Table 2 - Parameters of the t location-scale distribution

mu      0.0476
sigma   0.8007
dof     4.5235

It is very noticeable that the estimated parameter value for mu is rather high.
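A minimal sketch of this fitting step in MATLAB, with an illustrative name for the residual series, could be:

    % Sketch: fitting the location-scale t-distribution to the standardized
    % residuals with MATLAB's fitdist ('z' is the residual series; name illustrative).
    pd = fitdist(z, 'tLocationScale');
    fprintf('mu = %.4f, sigma = %.4f, dof = %.4f\n', pd.mu, pd.sigma, pd.nu);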

Figure 6-Residuals RiskMetrics

Figure 6 shows the histogram of the residuals with the location-scale t-distribution with the estimated parameters plotted on top. From this figure it becomes clear that the left tail of the residuals is a bit longer, and therefore heavier, than the right tail. This confirms the theory that


returns are asymmetric to the left. This is due to the fact that stocks react more violently in periods of negative returns than they do in periods of positive returns.

Figure 7-VaR RiskMetrics

Figure 7 shows the VaR estimate with the volatilities calculated according to the RiskMetrics EWMA model. For this estimate the probability p is 0.01, so this is a 99 percent VaR. This VaR estimate serves as the benchmark: the forecasting performance of the VaR estimates calculated with the alternative method will be compared to the performance of this estimate.

4.2 The alternative model

The second step is to calculate the VaR estimate using the alternative model, which is based on the following equation:

$$\sigma_{ij,t}^{\text{alt}} = \alpha\,\sigma_{ij,t}^{\text{RM}} + (1-\alpha)\,\rho_t^{\text{IC}}\,\sigma_{i,t}^{\text{RM}}\,\sigma_{j,t}^{\text{RM}}, \qquad i \neq j.$$

This research varies alpha to obtain VaR estimates with different characteristics: the higher the alpha, the more the VaR estimate shares the characteristics of the RiskMetrics methodology.

There are two implied correlation measures, because the CBOE estimates two indexes with two different maturities: one maturing in the current year and one maturing in the following year. The maturity dates are dictated by the options used in each measure. The VaR measures under the alternative model will be estimated


with both implied correlation indexes, and the results will be compared to see whether the choice of index influences them.

Table 3 - Parameters of the t location-scale distribution using IC1

          α=0        α=0.2      α=0.5      α=0.8
mu        0.00445    0.00492    0.00615    0.00954
sigma     0.07041    0.07917    0.10043    0.15773
dof       4.07547    4.21151    4.33865    4.47096

Table 3 shows the parameters of the location-scale t-distribution for the alternative model for the specified values of alpha, using the first implied correlation index. The noticeable difference with the RiskMetrics parameter values is that the mu and sigma of the alternative model are much smaller, which means that the residuals are more centered around the mean.

Figure 8-VaR alternative method

This figure shows all the VaR measures from the alternative model using the first implied correlation index. The bottom VaR estimate is the one with an alpha of zero, which means that the covariances are obtained solely from the implied correlation index. It is important to note that this method still uses the variances calculated with the RiskMetrics method, so although alpha is zero, the RiskMetrics method remains an important part of the forecasting process.


Because there are two implied correlation indexes, the choice between them could influence the VaR forecast. Therefore the distributions, parameters and VaR estimates are also calculated with the second implied correlation index.

Table 4 - Parameters of the t location-scale distribution using IC2

          α=0        α=0.2      α=0.5      α=0.8
mu        0.00447    0.00494    0.00618    0.00958
sigma     0.07044    0.07920    0.10046    0.15777
dof       4.04670    4.17961    4.30354    4.43198

Table 4 shows the parameters using the second implied correlation index. They are very similar to the parameter values in Table 3, which were estimated with the first correlation index. There is therefore no reason to suspect that the choice of implied correlation index will significantly alter the distribution of the residuals of the VaR estimate.


Figure 9 shows the difference between the VaR measures obtained with the two implied correlation indexes. The alpha in this VaR estimate is zero, because that appears to give the best forecast. There is a large spike around 2009, which indicates that using the second implied correlation index gives a VaR estimate that is about 3.5 percent lower. These spikes occur in a period of high volatility and at the moment the first implied correlation index approaches its maturity date, so an explanation could be that an option becomes more volatile as it approaches maturity. This means that the choice of implied correlation index can influence the forecasting capability of the VaR measure.

4.3 Backtesting

To compare the forecasting capabilities of the VaR measures, this research uses the backtesting framework of Christoffersen (2004). The likelihood ratio tests use the number of violations, where a violation is the event that an ex post portfolio loss exceeds the ex ante VaR measure.

Figure 10- VaR violations

The left-hand graph in Figure 10 shows the violations for the alternative method with an alpha of zero; the right-hand graph shows the violations of the RiskMetrics model. Every downward spike is a violation. The RiskMetrics method has more violations than the alternative model, and its violations are spread out over the estimation period, whereas the violations of the alternative model occur more frequently at the beginning of the estimation period and are scarce at the end. This could mean that the alternative model works better than the RiskMetrics model in periods of economic stability, while the models perform more or less the same in periods of economic distress.


Table 5 - Number of violations

Model       T0     T1    T01   T11
α=0 IC2     1883   20    20    0
α=0         1883   20    20    0
α=0.2       1883   20    20    0
α=0.5       1882   21    21    0
α=0.8       1881   22    22    0
RM          1876   27    27    0

Table 5 shows the number of violations of each model; $T_{ij}$ denotes the number of observations with a $j$ following an $i$. It shows that decreasing alpha lowers the number of violations, which suggests that the lower the alpha, the better the VaR forecast. It is remarkable that none of the specified models have clustered violations. The choice between the two implied correlation indexes does not influence the number of violations or the number of clustered violations, so for all subsequent tests this research disregards the VaR estimate based on the second implied correlation index.

To measure the performance of the VaR forecasts, this research uses likelihood ratio tests for unconditional coverage, serial independence and conditional coverage.

Table 6 - Results of the LR tests

Model    LRuc     LRind    LRcc
α=0      0.0492   0.4226   0.4717
α=0.2    0.0492   0.4226   0.4717
α=0.5    0.1993   0.4661   0.6654
α=0.8    0.4458   0.5116   0.9575
RM       2.9481   0.7716   3.7557

Table 6 shows the results of the three likelihood ratio tests for the alternative models and the benchmark RiskMetrics model. The tests for unconditional coverage and serial independence are chi-squared distributed with one degree of freedom; the 95% quantile of this distribution is 3.8415. This means that for none of the models the null hypothesis that the violation probability equals the specified probability, $\pi = p$, is rejected. There is no statistical evidence that the violation probability deviates from the probability specified in the VaR measure, in this case that of


the 99% VaR. The null hypothesis that the violations are serially independent is also not rejected, which means there is no statistical evidence that the probability of a violation depends on previous values; this is equivalent to $\pi_{01} = \pi_{11}$. The likelihood ratio test for conditional coverage is

chi-squared distributed with two degrees of freedom; the 95% quantile of this distribution is 5.9915. This means that for all models the null hypothesis of correct conditional coverage is not rejected. Since the models already showed no serial dependence, this result is not surprising.

The results of the likelihood ratio tests show that decreasing alpha, i.e. reducing the influence of the RiskMetrics method, lowers the test statistics, which means that the probability of rejecting the null hypothesis becomes smaller. It is therefore possible to conclude that the forecasting capability improves as alpha becomes smaller, and the best forecasting model is the one with alpha equal to zero. It is important to note that the null hypothesis is not rejected for any of the models in any of the tests, so all models perform well according to these criteria; the model with alpha equal to zero simply outperforms the rest within a group of models that already perform well.

The same conclusion can be drawn from the number of violations. As alpha goes towards zero, the number of violations becomes smaller; the model with the most violations is the RiskMetrics model and the model with the fewest violations is again the one with alpha equal to zero. Note that for all models, according to the likelihood ratio tests for unconditional and conditional coverage, the number of violations gives no reason to suspect misspecification.

The distribution of the violations over time is equally important: clustered violations result in an excessive number of relatively short and relatively long no-hit durations. To test the distribution of the hit sequence, this research uses a likelihood ratio test for independence of the durations. The null hypothesis is that the durations are exponentially distributed, which is a memory-free distribution.


Table 9 - MLE parameters of the Weibull distribution

Model    a        b
α=0      68.190   0.663
RM       66.081   0.941

Table 9 shows the estimated parameters of the Weibull distribution. The duration test is performed on the RiskMetrics model and on the alternative model with alpha equal to zero, because the RiskMetrics model is the benchmark and this alternative model performed best in the previous tests for independence and conditional coverage.

Table 10 - Weibull duration test

Model    LRind
α=0      3.4216
RM       6.7245

Table 10 shows the results of the Weibull duration test. The likelihood ratio test is chi-squared distributed with one degree of freedom, because there is only one free parameter in the test; the 95% quantile of this distribution is 3.8415. The null hypothesis is therefore not rejected for the alternative model, which means that its violation durations are independently distributed. For the RiskMetrics model the null hypothesis is rejected, which means there is duration dependence; this could result in clustered violations and points to misspecification of the risk model.

The likelihood ratio value of the alternative model is only just within the range where the null hypothesis is not rejected. Still, its durations are, in contrast to those of the RiskMetrics model, independently distributed, and the probability of clustered violations is much smaller. The alternative model should have no higher-order dependence in the hit sequence, which makes it the better and more reliable risk model.


5. Conclusion

Previous research shows that implied correlation contains information that is not present in models based on time series data. Incorporating implied correlation in the estimation process of a VaR measure should therefore enhance its forecasting capability. The results of this research confirm this hypothesis.

The results show that the number of violations drops when the implied correlation index is added to the volatility estimation equation. The drop in violations is largest when alpha is zero, which means that the covariances depend entirely on the implied correlation. An important remark is that the RiskMetrics volatilities still play a large part in this estimation method, because they are used to construct the implied variance-covariance matrix. The likelihood ratio tests also show that all models perform well; interestingly, the lower the value of alpha, the better the models perform in these tests, and the model with alpha equal to zero again performs best. The durations of the alternative model are independently distributed, while the RiskMetrics model shows dependence in the durations. This means that the probability of clustered violations is much lower in the alternative model, a clear sign that it outperforms the RiskMetrics model.

The results show that including implied correlation in the volatility estimation process enhances the VaR forecast. The benchmark RiskMetrics model is outperformed in every test and in the number of threshold violations by every alternative model, regardless of the value of alpha. The RiskMetrics model also shows dependence in the durations where the alternative model does not. This is clear evidence that implied correlation supplies extra information and therefore improves the VaR forecasts.

There are, however, a few remarks and suggestions for further research. First, this research uses the variances estimated with the RiskMetrics methodology to calculate the implied variance-covariance matrix. Ideally one would use the implied volatilities that the CBOE also uses to construct the implied correlation index, but these were not easily obtainable from the CBOE website. Using them could lead to different results and therefore different conclusions. Secondly, standardized returns are usually skewed to the left; although it is very common to use the t-distribution for stock returns, it is still a symmetric distribution, and using an asymmetric distribution might improve the VaR estimates even further.


6. Bibliography

Campa, José Manuel, and P. H. Chang. "The forecasting ability of correlations implied in foreign exchange options." Journal of International Money and Finance 17.6 (1998): 855-880.

Canina, Linda, and Stephen Figlewski. "The informational content of implied volatility." Review of Financial Studies 6.3 (1993): 659-681.

Christoffersen, Peter, and Denis Pelletier. "Backtesting value-at-risk: A duration-based approach." Journal of Financial Econometrics 2.1 (2004): 84-108.

Christoffersen, Peter, Jinyong Hahn, and Atsushi Inoue. "Testing and comparing value-at-risk measures." Journal of Empirical Finance 8.3 (2001): 325-342.

Giot, Pierre. The information content of implied volatility indexes for forecasting volatility and market risk. Université catholique de Louvain, 2003.

Kearney, Colm, and Valerio Potì. "Correlation dynamics in European equity markets." Research in International Business and Finance 20.3 (2006): 305-321.

Martens, Martin, and Ser-Huang Poon. "Returns synchronization and daily correlation dynamics between international stock markets." Journal of Banking & Finance 25.10 (2001): 1805-1827.

Sarma, Mandira, Susan Thomas, and Ajay Shah. "Selection of Value-at-Risk models." Journal of Forecasting 22.4 (2003): 337-358.

Walter, Christian A., and Jose A. Lopez. "Is implied correlation worth calculating? Evidence from foreign exchange options." The Journal of Derivatives 7.3 (2000): 65-81.
