Do events of high volatility lower the predicting power of implied volatility? : the case of national election of the U.S.

Academic year: 2021


Do events of high volatility lower the predicting power of implied volatility?

The case of national election of the U.S.

UNIVERSITY OF AMSTERDAM

AMSTERDAM BUSINESS SCHOOL

MSc FIN

Author:

K. PAN

Student number:

11622385

Thesis supervisor: Dr. J. Lemmen

Finish date:

July 2018


PREFACE AND ACKNOWLEDGEMENTS

I want to thank my supervisor, Dr. J. Lemmen, from the Faculty of Economics and Business at the University of Amsterdam. He is always very kind and accessible and always cares about the progress of his students. He consistently allowed this thesis to be my own work, but steered me in the right direction. I would like to thank all the practitioners and econometricians for providing such extensive prior research, so that I could stand on the shoulders of giants. I want to thank all the professors who powered my master's study for guiding me to be not only a practitioner but also a professional with an academic mindset. I am very glad that I learnt a lot about derivatives during the composition of my thesis, especially in the field of implied volatility.

Statement of Originality

This document is written by Student Keji Pan who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


ABSTRACT

This paper examines the hypothesis that the predictive power of implied volatility is lower during events that induce high uncertainty. Taking the national elections of the United States as the proxy for uncertainty and 21 actively traded indexes as research targets, this paper conducts a quasi-event study and tests the hypothesis both visually and statistically. The results show that the predictive power of implied volatility is indeed harmed by events of high uncertainty, given a zero volatility risk premium. The paper also discusses some potential errors in the research procedure and possible approaches to address them.

Keywords:


TABLE OF CONTENTS

PREFACE AND ACKNOWLEDGEMENTS

ABSTRACT

TABLE OF CONTENTS

LIST OF TABLES

LIST OF FIGURES

CHAPTER 1 Introduction

1.1 Relevance and Literature

1.2 Hypothesis and Methodology

CHAPTER 2 Literature Review and Theoretical Models

2.1 Literature Review

2.2 Event Study

2.2.1 Assumptions in Event Study

2.2.2 Difference Between Event Study and The Methodology in This Paper

2.2.3 Potential Problems

CHAPTER 3 Data and Methodology

3.1 Data

3.2 Methodology

CHAPTER 4 Results and Interpretation

CHAPTER 5 Alternatives

5.1 Potential errors in the research process and the alternative approach

5.2 Results and interpretation of the alternative approach

CHAPTER 6 Conclusion

REFERENCES


LIST OF TABLES

Table 1 Indexes included in the dataset

Table 2 Descriptive statistics of the volatility involved

Table 3 Descriptive statistics of predictive error

Table 4 Popular votes situation of each national election and derived corresponding uncertainty proxy

Table 5 Abnormal relative predictive error averaged across 21 selected indexes

Table 6 Regression model testing abnormal predictive error


LIST OF FIGURES

Figure 1 Implied volatility and realized volatility in the full-time period

Figure 2 Illustration of estimation window and event window selection

Figure 3 Abnormal relative predictive error averaged across 21 selected indexes

Figure 4 Abnormal relative predictive error averaged across 21 selected indexes (alternative approach)


CHAPTER 1 Introduction

1.1 Relevance and Literature

Volatility, a measure of possible future price changes of assets, has long been one of the most important research topics in finance and is crucial to the decision making of investors, hedgers, speculators, and regulators. For a long time, scholars and practitioners have tried to forecast the conditional volatility of asset returns using advanced models based on historical volatilities, such as the GARCH model and its extensions. However, despite the effort put into building sophisticated models, significantly relevant factors or information may be overlooked. Fortunately, predicting future volatility using the sentiment of the broad set of participants in an increasingly efficient financial market provides a sensible alternative. For example, the Black-Scholes-Merton (BSM) model allows us to invert a market-based volatility forecast, called implied volatility, from the market prices of options. Model-free (non-parametric) implied volatility and entropy-based implied volatility are other ways to forecast future volatility from option prices. To be comparable with the majority of research in this area, I use the BSM-inverted implied volatility in this paper.

Are the markets good at pricing the risks they understand? In other words, is implied volatility a sufficiently good estimator of future volatility? An extensive literature has examined the predictive power of implied volatility by comparing it with ex-post realized volatility. However, the conclusions vary. Several studies have proposed methodologies to adjust the raw implied volatility derived from the above approaches so that the adjusted implied volatility better forecasts future realized volatility.

Although some studies took into consideration the effect of extraordinary events, such as the dot-com bubble and the financial crisis, by separating time frames when testing the predictive power of implied volatility, most of the literature in this area compares implied volatility and realized volatility across the whole timeframe of the dataset. These studies adopt the same implicit assumption that, on average, the predictive error is relatively stable, because otherwise results deduced by taking a long period as a whole would merely be a smoothed average over some discontinuous spikes of predictive errors. Few studies specifically test the effect of events with high uncertainty on the predictive power of implied volatility. This paper uses the U.S. national election as a proxy for high uncertainty to assess the effect of such events on the predictive power of implied volatility.

1.2 Hypothesis and Methodology

The testable hypothesis I propose in this paper is that the predictive power of implied volatility is lower during events inducing high uncertainty than during normal times. That is, the market performs worse at pricing the risks it understands when uncertainty is high.


The methodology I employ is similar to that of an event study. To describe it, it is convenient to first define the variables. The difference between implied volatility and realized volatility is defined as the absolute predictive error of implied volatility. To make the predictive error comparable across indexes and time periods, the absolute predictive error is divided by the realized volatility to generate the relative predictive error. This relative predictive error takes the role that the daily return plays in a standard event study. It is important to note that the difference between implied volatility and realized volatility is composed of two parts, namely the volatility risk premium and the strict predictive error. To investigate the variation of predictive power, I conduct an alternative approach to mitigate the influence of the volatility risk premium, which I illustrate in the methodology section.

The quasi-event study approach is constructed in the following steps. First, I identify a series of events that potentially lead to high uncertainty, in this case the national elections of the United States. Around the date of each event, I construct an event window, as in a standard event study. Near the event window, I construct two estimation windows, one located before the event window and the other after it. Second, within the estimation windows, I calculate the unweighted average of the predictive error as the so-called normal predictive error, that is, the expected predictive error within the event window absent any event inducing high uncertainty. Hereby, I assume that the expected predictive error would be the same in the event window if there were no event, so that this expectation can serve as a forecast of the predictive error in the event window. Finally, the difference between the forecasted predictive error and the actually observed predictive error is defined as the abnormal predictive error, the variable I test for being significantly different from zero in the event window.
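The steps above can be sketched in code as follows. This is a minimal illustration with hypothetical volatility series; the function and variable names are my own and not from the thesis.

```python
from statistics import mean

def relative_predictive_error(iv, rv):
    """Relative predictive error: (implied - realized) / realized."""
    return (iv - rv) / rv

def abnormal_predictive_error(iv, rv, estimation_idx, event_idx):
    """Abnormal error = observed error in the event window minus the
    normal (average) error over the estimation windows."""
    errors = [relative_predictive_error(i, r) for i, r in zip(iv, rv)]
    normal = mean(errors[t] for t in estimation_idx)  # forecast of the error
    return [errors[t] - normal for t in event_idx]

# Hypothetical annualized volatilities around an election date (t = 5, 6)
iv = [0.21, 0.20, 0.22, 0.21, 0.20, 0.30, 0.29, 0.21, 0.20, 0.22]
rv = [0.20, 0.19, 0.20, 0.20, 0.19, 0.22, 0.22, 0.20, 0.19, 0.20]

pre = [0, 1, 2, 3, 4]   # estimation window before the event window
post = [7, 8, 9]        # estimation window after the event window
abn = abnormal_predictive_error(iv, rv, pre + post, [5, 6])
print([round(a, 3) for a in abn])
```

With these made-up numbers, the abnormal error in the event window is clearly positive, which is the pattern the hypothesis predicts.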


CHAPTER 2 Literature Review and Theoretical Models

2.1 Literature Review

Implied volatility has been extensively researched in the past decades. Implied volatilities derived from various approaches include model-free implied volatility, entropy-based implied volatility, option-based implied volatility, and so on. Among them, the option-based volatility derived by inverting the Black-Scholes-Merton option pricing model is the most popular. Hull and White (1987) studied option pricing based on the stochastic volatility of assets. They indicated that, for near-expiration and at-the-money options, the implied volatility should be an unbiased forecast of the average volatility over the remaining life of the option, and that its forecast error should be orthogonal to the information available at the estimation time. This conclusion is based on the assumptions that the asset price and the volatility are uncorrelated and that the volatility risk premium is zero.

Early research showed that implied volatility is either a biased predictor or an inefficient one, or even a both biased and inefficient estimate of ex-post realized volatility. Some of these early studies agreed on the conclusion that implied volatility was a way of quoting option prices rather than a predictor of future volatility. The first empirical studies of the predictive power of implied volatility, conducted by Latané and Rendleman (1976), Chiras and Manaster (1978), and Beckers (1981), found that implied volatility indeed contained relevant information regarding future volatility, although they relied on very small datasets and focused on the cross-sectional relationship of a certain group of stocks. Canina and Figlewski (1993) investigated the relationship between the implied volatility and realized volatility of the S&P 100 index. They also compared the information content embedded in implied volatility with that embedded in historical volatility. The timeframe of their dataset is 1983 to 1987, and they excluded options far in or out of the money. Their methodology was to regress realized volatility on implied volatility, asserting that the constant and the coefficient on implied volatility should be 0 and 1, respectively, if implied volatility is a good estimator. Moreover, when volatility estimates based on historical data are added to the regression, their coefficient should be 0 if implied volatility contains all the information content that historical volatility has. They concluded that implied volatility is a poor estimator of subsequent realized volatility, both in aggregate and in subsamples separated by maturity and strike price. They also argued that their procedure, which uses overlapping time-series data and then corrects for the potential problems caused by overlapping data, is more accurate than using sparse non-overlapping data.
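The unbiasedness regression used in this strand of the literature can be sketched as follows. This is a stdlib-only illustration with made-up data; in the cited studies the inputs are observed implied and realized volatility series.

```python
def ols(x, y):
    """Simple OLS of y on x, returning (intercept, slope).
    Unbiasedness test: intercept ~ 0 and slope ~ 1 if x (implied
    volatility) is an unbiased forecast of y (realized volatility)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return my - slope * mx, slope

# Made-up example where realized volatility tracks implied volatility closely
iv = [0.15, 0.18, 0.20, 0.25, 0.30, 0.22, 0.17]
rv = [0.14, 0.19, 0.21, 0.24, 0.31, 0.21, 0.18]
alpha, beta = ols(iv, rv)
```

In an actual study one would also test the significance of the deviations of `alpha` from 0 and `beta` from 1; this sketch only shows the point estimates.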

Subsequent studies paid more attention to the quality of their samples, trying to mitigate problems of various origins, such as residual autocorrelation caused by overlapping time-series samples and extreme variability caused by high sampling frequency. As a result, more and more studies concluded that implied volatility contains the information content of historical volatility


and typically outperforms time-series models. However, most studies also concluded that implied volatility is an upward-biased predictor of future volatility, though market efficiency is evolving. Fleming (1998) researched the same topic using the S&P 100 index and the corresponding option data provided by the Chicago Board Options Exchange. The time frame of the data sample is from 1985 to 1992, and he excluded the 1987 equity market crash from the sample. He used the method of moments of Hansen (1982) to test whether the constant and coefficient were 0 and 1, respectively, and tested for spurious regression. His results indicated that implied volatility is an upward-biased forecast but contains relevant information regarding future volatility. Manfredo et al. (2002) obtained one-week averaged implied volatility of live cattle futures prices derived from the Black (1976) option pricing model using options on live cattle futures contracts. They used non-overlapping time-series data from 1986 to 1999, and the options are all nearby, at-the-money contracts. They tested the information content of implied volatility from the practical perspective of agricultural risk managers by proposing the hypothesis that the difference between implied volatility and realized volatility is zero. They found that implied volatility is a biased and inefficient forecast of volatility, but also that it has improved as a forecast over time and includes the information content embedded in time-series models such as GARCH. Jorion (1995) examined the information content and predictive power of implied volatility in the foreign currency market, sampling the German mark, Japanese yen, and Swiss franc over different time periods according to the availability and activeness of their trading.
Again, the methodology was to regress realized volatility on implied volatility, and he found that implied volatility is an upward-biased estimate of ex-post realized volatility, though it outperforms time-series models. He attributed the difference between his result and studies finding implied volatility neither unbiased nor efficient to two reasons: first, index options are priced differently from foreign exchange futures options because of higher trading costs; second, S&P 100 option implied volatility has a greater errors-in-variables problem. Nevertheless, he indicated that implied volatility was too variable relative to future volatility and thus had limited value for volatility forecasting.

Later studies employed more elaborate methodologies and more advanced econometric techniques to mitigate some of the potential errors in previous work, such as measurement errors. Some of them overturned the earliest results criticising the poor performance of implied volatility. Christensen and Prabhala (1998) used a longer, non-overlapping, lower-frequency time series to overturn the earlier conclusion that implied volatility is a biased estimator of future volatility. Their methodology involves an ARIMA(p,d,q) model, instrumental variables, and conventional regressions of realized volatility on implied volatility and on time-series (GARCH) volatility estimates based on historical data. They indicated that the reason implied volatility appeared more biased in previous research was the regime shift around the 1987 market crash. Building on the results of Jorion (1995) and others, Szakmary et al. (2003) extended the analysis to a very broad array of contracts and exchanges, using data from 35 futures options markets on eight separate exchanges. Their procedure involves an augmented Dickey-Fuller test of the stationarity of


both volatilities, followed by three conventional regressions similar to those in other studies: regressions of realized volatility on implied volatility, on volatility modelled from historical data, and on both variables, respectively. The results revealed that implied volatility is a good estimator of future realized volatility, at least as good as models based on historical volatility, whether a simple moving average or a GARCH model.

From the perspective of practitioners, researchers have proposed various methodologies to improve the predictive power of implied volatility. Bharadia et al. (1996) proposed a quadratic method for calculating implied volatility under the Garman–Kohlhagen model, which improved efficiency. The technique reduces the problem to a simple quadratic equation and gives relatively accurate implied volatilities for close-to-the-money options; it is also similar to the updated method used to calculate the VIX index. Due to finite quote precision, bid-ask spreads, non-synchronous observations, and other measurement errors, the technique of inverting implied volatility from market data using the Black-Scholes model is imperfect; it is more appropriate to say that we estimate implied volatility rather than calculate it. Hentschel (2003) showed that, at the ninety-five percent confidence level, implied volatility estimates can be biased by minus to plus six percentage points, because the lower no-arbitrage bounds on option prices systematically eliminate low implied volatilities. He discussed each of the potential problems listed above and proposed feasible GLS estimators that theoretically reduce the noise and bias in implied volatility estimates. Martens et al. (2004) tested the predictive power of implied volatility in three separate asset classes, equity, foreign exchange, and commodities, and showed that implied volatility is a good estimator and that both the measurement and the forecast of financial volatility are improved using high-frequency data and long-memory models. A key difficulty in investigating the impact of events inducing high uncertainty is that those events can depend on other factors such as macroeconomic uncertainties. Kelly et al. (2016) isolated a set of political uncertainties, namely national elections and global summits, and empirically showed that these uncertainties are incorporated into option prices. They used the model proposed by Pastor and Veronesi (2013), which specifies that the government decides which policy to adopt while investors are uncertain about the future policy choice. They adopted this model directly when analysing the impact of global summits, and reinterpreted it for elections: voters decide the presidential candidate and investors are uncertain about the result. In this paper, I adopt the second interpretation directly.

Recall that the research of Hull and White (1987) was based on the strong assumption that the volatility risk premium is zero, or in other words, that market participants are indifferent to the variance risk of assets; thus, they assumed that the market did not price this risk into options. After many years of debate in this area, scholars eventually turned their attention to this assumption. Recent researchers found that the persistent difference between implied volatility and realized volatility does not consist solely of predictive error. Prokopczuk and Wese Simen (2013) are among the first who studied the role of the volatility premium for the forecasting power of implied


volatility. They indicated that the persistent difference between implied volatility and ex-post realized volatility comprises not only the predictive error of implied volatility but also a volatility risk premium required by the option seller. They proposed a non-parametric and parsimonious approach to adjust the model-free implied volatility for the volatility premium, and provided compelling evidence, using more than 20 years of option and futures data, that this approach improves the predictive power of implied volatility. Carr and Wu (2016) developed a new option pricing framework that starts from the near-term dynamics of the implied volatility surface and derives no-arbitrage constraints on its shape. They showed that, similar to implied volatility, realized volatility can also be constructed across different option contracts. This framework allows them to extract the volatility premium from the volatility surface explicitly and thus provides an approach to improve the predictive power of implied volatility. Lamoureux (1993) investigated the behaviour of measured volatility derived from option prices and the underlying asset prices, and provided evidence inconsistent with the orthogonality assumption that the volatility premium is not priced into option prices. The results showed that the volatility risk premium is time-varying and is a decreasing function of the level of the variance of the underlying asset prices. This research provided a new equilibrium for option pricing and, in turn, a more sensible way to forecast an underlying asset's future volatility using its implied volatility. Ge (2016) also looked into the volatility risk premium, or insurance risk premium, from the perspective of a practitioner. He claimed that the volatility risk premium originates from a combination of behavioural biases, economic factors, and structural constraints. He found that the volatility risk premium is most prominent in broad equity market indexes, and examined and compared strategies used in the industry to profit from it with three types of derivatives: equity index options, variance swaps, and VIX futures.

Most of the literature focuses on whether implied volatility contains good predictive information, not on comparing predictive power across different periods, although some studies tested predictive power in subperiods of their samples to mitigate the influence of market crashes. Assuming that implied volatility derived from the BSM model has relatively good information content and predictive power, this paper focuses on examining whether the predictive power of implied volatility during high-volatility events differs from that during regular times.

2.2 Event Study

An event study is an important tool in empirical finance: a statistical method to assess the impact of an economic event on critical attributes of securities, such as the value of a firm or the volatility of returns. Typically, market-efficiency-driven event studies analyse average abnormal returns, while information-content-driven event studies analyse the volatility of returns and trading volume.


2.2.1 Assumptions in Event Study

The typical event study takes asset returns as the proxy; thus, it is easiest to elaborate the assumptions of event studies using the example of a return event study. Every event study represents a joint test of two hypotheses: the model of expected returns used and the underlying finance-theory assumptions. Researchers need to be particularly aware of the latter, as their research contexts may not fulfil these assumptions.

The event study methodology assumes that capital markets accurately reflect the economic implications the analysed event has for the asset in question. In other words, market efficiency is a premise of event studies. These assumptions were nicely elaborated by Brown and Warner (1980). Repeatedly positive or negative abnormal metrics after the occurrence of such events, whether in returns, volatility, or trading volume, would contradict the efficient market hypothesis that security prices adjust swiftly to reflect the arrival of new information. Besides, because such events are typically unexpected, the scale of abnormal performance during the event window can gauge the influence of that type of event on the value of the stakeholder. As long as there is such abnormal performance, there is evidence that the market is to some extent efficient. Nonetheless, investors can earn such abnormal profits only if they predict such events with certainty. The application of the event study therefore relies on the analyst's investigation of the depth of the capital market where the research targets are located and of the extent to which the benchmark is correlated with the research target.

2.2.2 Difference Between Event Study and The Methodology in This Paper

To date, there are various categories of event study methodologies, including return event studies, volatility event studies, trading volume event studies, reverse event studies, etc. A brief introduction to these methodologies is helpful in explaining the one I employ in this paper. A volatility event study takes the square of the abnormal return used in the typical return event study; because the expected abnormal return is zero, the squared return is mathematically equivalent to the return volatility. In contrast to inferences about information content based on abnormal returns, where researchers must specify an expectation model that determines the unexpected component of returns in the event window, a volatility event study has the advantage of not using the sign of the abnormal return and thus avoids the potential error caused by an unfavourable expectation model. As price changes represent the aggregate consensus evaluation of new information, increased trading volume gauges the lack of consensus in the market, in other words, the extent to which market participants disagree about the meaning of new information. Besides, a volume event study can also address abnormal trading volume due to clientele adjustments related to risk and tax, information asymmetry, liquidity considerations, and market microstructure.

Different from the event studies described above, the methodology used in this thesis focuses on the events' impact on the predictive power of implied volatility derived from options, instead of on returns, volatility, or trading volume. The difference between implied volatility and the contemporaneous realized volatility is the proxy in this paper. Given that implied volatility is typically higher than ex-post realized volatility, the analysis of the predictive error is less likely to be affected by its sign.

2.2.3 Potential Problems

Firstly, there has been no prior research on exactly this proxy, the predictive error. Although I firmly believe in the rationale of the methodology employed, the results may be less convincing given the lack of substantial theoretical literature proving the viability of this proxy. Secondly, the option markets may not be as efficient as the markets of their underlying assets, even in the US. If the market prices of options do not fully represent the best judgement of investors' sentiments, the implied volatility is not a good proxy for the market estimate, and the results of this research would be flawed. Thirdly, the BSM model assumes that the price of the underlying asset follows a lognormal diffusion process with constant instantaneous mean and volatility. However, common sense and various studies tell us that asset volatilities are uncertain and typically time-varying. This fundamental inconsistency embedded in the model used to derive implied volatility can bias the result. Fourthly, the possibility that market participants do not use the pricing model used in this paper can induce an error similar to the one caused by market inefficiency: the derived implied volatility does not necessarily represent the implied volatility perceived by the markets. Furthermore, the assumption of zero correlation between asset price and volatility in the BSM model is unrealistic because it contradicts the widely accepted leverage effect observed in the stock market. Last but not least, the assumption of a zero volatility risk premium is unrealistic because the risk premium is an essential source of income for option sellers. I use an alternative approach to mitigate the influence of the volatility risk premium at the end of this paper.


CHAPTER 3 Data and Methodology

3.1 Data

This research is based on option-related data of the United States because the US option markets are among those with the longest history and are widely acknowledged to be among the most efficient markets in the world. The efficiency of the option markets is critical for the robustness of this research because the implied volatility represents the sentiment of market participants. Only if the option markets are highly liquid and the participants sufficiently sophisticated is the implied volatility reliable as an estimator of future volatility.

The selection criteria for these indexes include annual trading volume, representativeness of the overall equity or fixed income market, and correlation with political uncertainties. As a result, the sample encompasses not only the most heavily traded equity indexes such as the S&P, Russell, and NASDAQ families, but also indexes representing specific sectors or industries, such as the NASDAQ Banking Index, the Morgan Stanley High Tech Index, and the Treasury Yield Index. The reasons I use various index options whose underlying indexes are correlated, instead of just one index per asset category, are: (1) the convenience of building the statistical test; (2) option markets can be less correlated than their underlyings and thus represent distinct target proxies.

Table 1

Indexes included in the dataset

This table reports the 21 indexes included in the research dataset. The first column is the name of the index, the second is the symbol traders typically use to refer to it, and the third is the unique serial number provided by OptionMetrics to identify it. The fourth and fifth columns give the dates when the index option started and stopped trading; an ending date of 29 December 2017 reflects the availability of the dataset and means the index option was still traded on the market. The final column indicates the exercise type of each index option.

NAME TICKER SECID START DATE END DATE Exercise Type

1 NYSE Arca Major Market Index XMI 101499 04jan1996 20nov2008 European
2 NYSE Arca Institutional Index XII 101485 04jan1996 21nov2001 European
3 NYSE Composite Index (Old) NYZ 107880 04jan1996 20feb2003 European
4 CBOE Mini-NDX Index MNX 102491 14aug2000 29dec2017 European
5 CBOE Treasury Yield Option TYX 102495 04jan1996 03nov2010 European
6 S&P 100 Index OEX 109764 04jan1996 29dec2017 American
7 S&P 500 Index SPX 108105 04jan1996 29dec2017 European
8 S&P Midcap 400 Index MID 101507 04jan1996 22may2012 European
9 S&P Smallcap 600 Index SML 102442 04jan1996 16feb2012 European
10 S&P 100 Options with European-style exercise XEO 112878 25jul2001 29dec2017 European
11 Mini SPX Index XSPAM 125063 25oct2005 20nov2014 European
12 S&P 500 Index Options - PM-Settled SPXPM 150513 04oct2011 48apr2017 European
13 Russell 2000 Index RUT 102434 04jan1996 29dec2017 European
14 Russell 1000 Index RUI 100219 06nov2003 29dec2017 American
15 Russell 1000 Value Index RLV 100221 05nov2003 29dec2017 European
16 Russell 1000 Growth Index RLG 100222 05nov2003 29dec2017 European
17 NASDAQ 100 Index NDX 102480 04jan1996 29dec2017 European
18 Morgan Stanley High Technology Index MSH 101490 04jan1996 18aug2011 European
19 Dow Jones Industrial Average DJX 102456 06oct1997 29dec2017 European
20 PSE Wilshire Smallcap Index WSX 108656 04jan1996 23dec1999 European


Theoretically, the implied volatility can be derived from the Black-Scholes-Merton model using the observable option premiums, strike prices, dividend rates, remaining days to maturity, and the risk-free rates over the remaining life of the options. Starting from the pricing model:

C = S · e^(−qT) · N(d1)  −  K · e^(−rT) · N(d2)
P = K · e^(−rT) · N(−d2)  −  S · e^(−qT) · N(−d1)

where

d1 = [ln(S/K) + (r − q + σ²/2) · T] / (σ · √T)
d2 = d1 − σ · √T

C is the price of a call option, P is the price of a put option, S is the current underlying security price, K is the strike price of the option, T is the time in years remaining to option expiration, r is the continuously compounded interest rate, q is the continuously compounded annualized dividend yield, and σ is the implied volatility. The risk-free interest rate is acquired from a compilation of continuously compounded interest rates of zero-coupon bonds of various maturities, referred to as the zero curve. The zero-curve interest rate used in the data sample is obtained from ICE IBA LIBOR rates and settlement prices of CME Eurodollar futures. The desired risk-free rate input for any given option is the zero-coupon rate with a maturity close to the option's time to expiration, obtained by linearly interpolating between the two closest zero-coupon rates on the zero curve. For dividend-paying indices, OptionMetrics assumes that the security pays dividends continuously, according to a continuously compounded dividend yield. A put-call parity relationship is assumed, and the implied index dividend is calculated from the proprietary linear regression model:

C − P = b0 + b1 · S + b2 · S · T + b3 · K + b4 · K · T

The C and P in the regression model represent the bid price of the call option and the offer price of the put option. This regression is conducted on three months of options data across all strike prices and times to expiration, excluding options expiring within 15 days. Given put-call parity, −b̂2 is approximately equal to the dividend rate of the underlying index. Finally, to calculate implied volatilities and associated option sensitivities of European options, the theoretical option price is set equal to the midpoint of the best closing bid price and best closing offer price for the option. The Black-Scholes formula is then inverted using a numerical search technique to calculate the implied volatility for the option.
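As a sketch of this inversion (not the proprietary OptionMetrics implementation), the following snippet prices a European call under Black-Scholes-Merton and recovers σ by bisection on an observed mid price, exploiting the fact that the call price is monotonically increasing in volatility. All parameter values are illustrative.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(S, K, T, r, q, sigma):
    # Black-Scholes-Merton price of a European call with dividend yield q
    d1 = (log(S / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * exp(-q * T) * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, q, lo=1e-4, hi=5.0, tol=1e-8):
    # bisection search: the call price is monotone increasing in sigma
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bsm_call(S, K, T, r, q, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# round-trip check with an illustrative option priced at 20% volatility
p = bsm_call(S=100, K=105, T=0.25, r=0.02, q=0.01, sigma=0.20)
iv = implied_vol(p, S=100, K=105, T=0.25, r=0.02, q=0.01)
```

In practice a faster Newton-type search on vega is common; bisection is used here because it is robust for any monotone price function.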

However, because an option's remaining life shrinks as time passes, comparing the predictive error of implied volatility derived from options with 30 days to maturity with that derived from options with 10 days to maturity may be inconsistent. OptionMetrics provides a sensible way of calculating standardized option prices and implied volatilities using linear interpolation from the volatility surface. Briefly, the procedure is as follows. First, organize the data by the log of days to expiration and by "call-equivalent delta", that is, the delta for a call and one plus the delta for a put. A kernel smoother is then used to generate a smoothed volatility value at each of the specified interpolation grid points. At each grid point j on the volatility surface, the smoothed volatility σ̂_j is calculated as a weighted sum of option implied volatilities:

σ̂_j = [ Σ_i V_i · σ_i · φ(x_ij, y_ij, z_ij) ] / [ Σ_i V_i · φ(x_ij, y_ij, z_ij) ]

where i is indexed over all the options for that day, V_i is the vega of the option, σ_i is the implied volatility, and φ(·) is the kernel function:

φ(x, y, z) = (1/√(2π)) · exp(−[x²/(2h1) + y²/(2h2) + z²/(2h3)])

The parameters to the kernel function, x_ij, y_ij, and z_ij, are measures of the "distance" between the option and the target grid point:

x_ij = ln(T_i / T_j)
y_ij = Δ_i − Δ_j
z_ij = I{CP_i ≠ CP_j}

where T_i (T_j) is the number of days to expiration of the option (grid point); Δ_i (Δ_j) is the "call-equivalent delta" of the option (grid point); CP_i (CP_j) is the call/put identifier of the option (grid point); and I{·} is an indicator function, which equals 0 if the call/put identifiers are equal and 1 if they are different. The kernel bandwidth parameters were chosen empirically and are set to h1 = 0.05, h2 = 0.005, and h3 = 0.001. Next, the forward price of the underlying security is calculated using the zero curve and the projected distributions, and the volatility surface points are linearly interpolated to the forward price and the target expiration to generate the at-the-money-forward implied volatility. With this method from OptionMetrics, I can then compare the daily calculated realized volatility with the daily calculated implied volatility of the same period. The realized volatility is calculated as the annualized standard deviation of the daily returns within a period:

Realized Volatility = √( (252/n) · Σ_{t=1}^{n} (R_t − R̄)² )
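A minimal numpy sketch of the kernel-weighted smoothing and the realized-volatility formula above; the bandwidths follow the text, while the option data (maturities, deltas, vegas, implied volatilities) are made up for illustration.

```python
import numpy as np

H1, H2, H3 = 0.05, 0.005, 0.001  # kernel bandwidths from the text

def kernel(x, y, z):
    # Gaussian-type kernel phi(x, y, z) used on the volatility surface
    return np.exp(-(x**2 / (2 * H1) + y**2 / (2 * H2) + z**2 / (2 * H3))) / np.sqrt(2 * np.pi)

def smoothed_vol(T_j, delta_j, cp_j, T, delta, cp, vega, iv):
    # vega-weighted kernel average of observed implied vols at grid point j
    x = np.log(T / T_j)              # maturity distance
    y = delta - delta_j              # call-equivalent-delta distance
    z = (cp != cp_j).astype(float)   # 1 if the call/put identifiers differ
    w = vega * kernel(x, y, z)
    return float(np.sum(w * iv) / np.sum(w))

def realized_vol(returns):
    # annualized standard deviation of daily returns
    r = np.asarray(returns, dtype=float)
    return float(np.sqrt(252.0 / len(r) * np.sum((r - r.mean()) ** 2)))

# toy example: three options around a 30-day, 0.5-delta call grid point
iv_hat = smoothed_vol(30.0, 0.5, "C",
                      T=np.array([25.0, 30.0, 40.0]),
                      delta=np.array([0.48, 0.50, 0.55]),
                      cp=np.array(["C", "C", "P"]),
                      vega=np.array([10.0, 12.0, 9.0]),
                      iv=np.array([0.21, 0.20, 0.23]))
```

Note how the tiny bandwidth h3 effectively excludes options of the opposite type: the put in the toy data receives a near-zero weight, so the smoothed value stays between the two call volatilities.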

The following table reports descriptive statistics of the implied and realized volatility I use. I also calculated the logarithm of both volatilities to see whether their distributions are closer to normal than those of the original series. As we can see, the mean of implied volatility is greater than the mean of realized volatility in both the original and the log form, indicating that implied volatility overestimates future volatility on average. Although the variances of the logarithmic series are higher than those of the original ones, their skewness and kurtosis are significantly lower. The table shows that the distribution of log implied volatility is closer to the normal distribution. Because the typical event study uses returns, whose distribution is close to normal, the log form of implied volatility is potentially more in line with the assumptions of a typical event study.

Table 2

Descriptive statistics of the volatility involved

The first two columns are the implied volatility and realized volatility calculated using the methodology provided by OptionMetrics for the full-time period. The latter two columns are the same dataset in logarithm form.

To make the data more visible, I produced a figure giving an overview of the predictive power of implied volatility over realized volatility across the whole period from 1996 to 2017. The graph shows that the performance of implied volatility in predicting realized volatility varies over the full period. Specifically, the performance is better at times of volatility hikes and worse in normal times. One explanation is the volatility risk premium embedded in implied volatility, as investigated by Lamoureux and Lastrapes (1993) and Ge (2016).

Figure 1

Implied volatility and realized volatility in the full-time period

The horizontal axis is time, and the vertical axis is annualized volatility in decimals. The red line is the implied volatility over this period, averaged across all 21 indexes selected above; the blue line is the corresponding ex-post realized volatility. In line with the descriptive statistics above, we can see that, in most cases, the implied volatility is upward biased relative to the realized volatility. Noticeably, during volatility spikes, the realized volatility is closer to the implied volatility and at times even higher.

Descriptive Statistics: Full Period — 01/01/1996 to 31/12/2017

Statistics     Implied Volatility   Realized Volatility   Log Implied Volatility   Log Realized Volatility
Mean           0.21                 0.19                  -1.64                    -1.81
100*Variance   1.14                 2.28                  18.00                    29.12
Skewness       2.50                 7.64                  0.51                     0.60


The following table reports descriptive statistics of the variables I use in the analogous event study. The absolute predictive error provides an overview of the magnitude of the predictive error. Because the data are in decimal form, the mean of 0.02 represents a 2% difference between the implied and realized volatility. It is worth noticing that the kurtosis is exceptionally high, meaning that the distribution of the absolute predictive error has a very heavy tail. Combined with the significant negative skewness, we can conclude that the absolute predictive error has substantial negative outliers, mostly during volatility spikes. The mean of the relative predictive error is 0.222, meaning that the implied volatility is 22.2% higher than the ex-post realized volatility on average. The logarithm of the relative predictive error is consistent with the original form, but the implied volatility premium is 16.6%, slightly less than in the original form. Similarly, the variance of the relative predictive error in original form is larger than in logarithm form, indicating that the logarithm has a smoothing effect. The kurtosis of both relative predictive errors is significantly smaller than that of the absolute error. Combined with the corresponding skewness, this shows that both relative predictive metrics are closer to the normal distribution and thus more in line with the implicit assumptions of the typical event study. Thus, I use the relative predictive error and its logarithm form in the subsequent study.

Table 3

Descriptive statistics of predictive error

This table summarizes the target variables used in this paper. The absolute predictive error is the difference between the implied volatility and the realized volatility. The relative predictive error is the absolute predictive error divided by the ex-post realized volatility, and the final column is the logarithm form of the relative predictive error, used for validity checks.

Relative Predictive Error = (Implied Volatility − Realized Volatility) / Realized Volatility


3.2 Methodology

The event window used in this research runs from 20 days before to 20 days after the national election date. Including the election date, there are 41 trading days in the event window. The estimation window is composed of two periods: a pre-election window from 70 to 40 days before the national election date and a post-election window from 40 to 70 days after it. A brief illustration is given below.

Following the methodology stated above, I calculate the relative predictive error in the estimation window for each index and take its average as the normal predictive error for that period. The difference between the actual relative predictive error and the normal relative predictive error is the abnormal predictive error.

Figure 2

Illustration of estimation window and event window selection

The estimation window is composed of two sub-windows, one before and one after the national election date. After producing the relative predictive error from the estimation window as stated above, I average all the data in the estimation window for each index to obtain the so-called normal relative predictive error. Comparing the relative predictive error in the event window with this normal predictive error, the difference is the abnormal predictive error.
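The windowing procedure can be sketched as follows; day indices are trading-day offsets relative to the election date, and the error series is illustrative.

```python
import numpy as np

def abnormal_errors(rel_pe, days):
    # rel_pe: relative predictive error per day; days: offsets from the election date
    days = np.asarray(days)
    rel_pe = np.asarray(rel_pe, dtype=float)
    est = ((days >= -70) & (days <= -40)) | ((days >= 40) & (days <= 70))  # estimation windows
    event = (days >= -20) & (days <= 20)                                   # 41-day event window
    normal = rel_pe[est].mean()        # "normal" predictive error for this index
    return rel_pe[event] - normal      # abnormal predictive error in the event window

days = np.arange(-70, 71)
rel_pe = np.full(days.shape, 0.20)   # flat 20% premium everywhere...
rel_pe[days == 0] = 0.50             # ...except a spike on the election day itself
ape = abnormal_errors(rel_pe, days)
```

With this toy input the normal error is exactly 0.20, so the abnormal error is zero on every event day except the election day, where it is 0.30.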

Also, for statistical tests, I plan to use the following model:

Abnormal Relative Predictive Error = β−20·D−20 + β−19·D−19 + ⋯ + β0·D0 + ⋯ + β19·D19 + β20·D20

Descriptive Statistics: Full Period — 01/01/1996 to 31/12/2017

Statistics     Absolute Predictive Error   Relative Predictive Error   Log Relative Predictive Error
Mean           0.020                       0.222                       0.166
100*Variance   0.935                       11.943                      7.045
Skewness       -22.387                     2.371                       -0.467
Kurtosis       756.867                     25.290                      12.822


where D stands for a dummy variable for each day relative to the national election date; for example, D−20 stands for 20 days before the election date and D0 stands for the election date itself. To make the results more statistically convincing, I take into consideration the distribution of the popular vote, the time fixed effect, and the index fixed effect. Moreover, because there are correlations between the different indexes, it is necessary to adjust the standard deviation of each coefficient. The procedure follows the technique provided by de Jong and de Goeij (2011).
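Leaving out the fixed effects and the correlation adjustment, the dummy regression can be sketched with a plain least-squares fit. Because the 41 day dummies are mutually exclusive and exhaustive within the event window, each coefficient is simply the average abnormal error on that day; the data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.tile(np.arange(-20, 21), 21)   # 41 event days for each of 21 indexes
# simulated abnormal errors: positive before the election, negative after
ape = 0.1 * (days < 0) - 0.1 * (days > 0) + rng.normal(0.0, 0.01, days.size)

# design matrix: one dummy column per event day (no intercept)
X = (days[:, None] == np.arange(-20, 21)[None, :]).astype(float)
beta, *_ = np.linalg.lstsq(X, ape, rcond=None)

# with exclusive dummies, beta[k] equals the mean abnormal error on day k - 20
assert np.allclose(beta, [ape[days == d].mean() for d in range(-20, 21)])
```

In the paper's actual specification the fixed effects add further columns to X and the standard errors are adjusted for cross-index correlation following de Jong and de Goeij (2011); this sketch only illustrates the dummy structure.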

Based on federal government statistical data, I summarize the vote tally of each national election as a metric for evaluating its uncertainty. Two metrics are commonly used to assess the distribution of votes: electoral votes and popular votes. To mitigate the influence of vote-counting rules and better reflect the uncertainty perceived by the market, I take the popular vote as the proxy.

Table 4

Popular votes situation of each national election and derived corresponding uncertainty proxy

The popular vote data are from the official site of the federal government. I list the three candidates who obtained the most votes. Squaring each percentage share and summing them up produces a concentration indicator of the election (similar to the Herfindahl-Hirschman index). Because the degree of concentration is an inverse indicator of the uncertainty of the election, I then multiply the reciprocal of the concentration indicator by 1,000,000 and subtract 200 to produce the uncertainty indicator.

Concentration = (100 · percentage1)² + (100 · percentage2)² + (100 · percentage3)²

Uncertainty = 1,000,000 / Concentration − 200

Election        1996                    2000                    2004
                Popular votes  Share    Popular votes  Share    Popular votes  Share
1st Candidate   47,402,357     49.24%   50,456,062     48.36%   62,040,610     50.73%
2nd Candidate   39,198,755     40.71%   50,996,582     48.88%   59,028,444     48.27%
3rd Candidate   8,085,402      8.40%    2,882,955      2.76%    465,650        0.38%
Concentration   4152.44                 4735.27                 4903.67
Uncertainty     40.82                   11.18                   3.93

Election        2008                    2012                    2016
                Popular votes  Share    Popular votes  Share    Popular votes  Share
1st Candidate   69,498,516     52.93%   65,915,795     51.06%   65,853,514     48.18%
2nd Candidate   59,948,323     45.65%   60,933,504     47.20%   62,984,828     46.09%
3rd Candidate   739,034        0.56%    1,275,971      0.99%    4,489,341      3.28%
Concentration   4885.82                 4835.94                 4456.36
Uncertainty     4.67                    6.78                    24.40
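The concentration and uncertainty indicators can be reproduced directly from the vote shares; the 1996 values below match the table.

```python
def concentration(shares):
    # Herfindahl-style sum of squared percentage shares (shares given in decimals)
    return sum((100.0 * s) ** 2 for s in shares)

def uncertainty(shares):
    # reciprocal concentration, rescaled as in the text
    return 1_000_000.0 / concentration(shares) - 200.0

# 1996 election: top-three popular-vote shares from the table
shares_1996 = [0.4924, 0.4071, 0.0840]
c = concentration(shares_1996)   # about 4152.44
u = uncertainty(shares_1996)     # about 40.82
```

A more concentrated (lopsided) election yields a higher concentration value and hence a lower uncertainty indicator, which is why 2004 scores lowest and 1996, with a strong third candidate, scores highest.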

I assign the uncertainty indicator of each election to the abnormal predictive error data to control for the uncertainty in the regression model stated above.


CHAPTER 4 Results and Interpretation

According to the construction of the estimation and event windows stated above, I conducted the standard event study procedure and produced the following results. Assume that the abnormal predictive errors are independently and identically distributed with mean zero and variance σ². I employed a t-test to assess the significance of the abnormal predictive error. The variance of a single abnormal predictive error is unknown and can be estimated as:

s_t = √( (1/(N−1)) · Σ_{i=1}^{N} (APE_it − AAPE_t)² )

The test statistic for the average abnormal predictive error is then:

TS = √N · AAPE_t / s_t  ~  t(N−1)

which follows a t-distribution with 20 degrees of freedom. Figure 3 shows the abnormal predictive error averaged across all the selected indexes, and Table 5 reports the corresponding numerical data. As we can see, there are statistically significant positive abnormal predictive errors before the election date, which means that the implied volatility is abnormally higher than the realized volatility (given that the normal predictive error is typically positive). There are also statistically significant negative abnormal predictive errors after the election date, though with a smaller magnitude than their pre-election counterparts. Noticeably, the abnormal predictive errors drop near the event date. As the appendix shows for each specific election, this phenomenon is especially pronounced during the 2016 election, which was controversial and induced very high uncertainty. It means the predictive error is smaller than the normal level after the election date. We can interpret this as: (1) the market is particularly bad at pricing, or in other words, predicting future volatility before the occurrence of events inducing high uncertainty; (2) the market is better at predicting future volatility after such events, compared to normal times. The latter phenomenon is probably a result of the market's learning process: once the uncertainty is resolved, the market understands the impact of the event and can better predict post-event volatility. This result is consistent with the hypothesis of this paper that events inducing high uncertainty lower the predictive power of implied volatility. Moreover, an unexpected finding is the potential evolution of the market's ability to predict future volatility.
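The cross-sectional t-test above can be sketched as follows, with simulated abnormal predictive errors for N = 21 indexes on a single event day.

```python
import numpy as np

def aape_tstat(ape):
    # ape: abnormal predictive errors across N indexes on one event day
    ape = np.asarray(ape, dtype=float)
    n = ape.size
    aape = ape.mean()                                  # average abnormal predictive error
    s = np.sqrt(np.sum((ape - aape) ** 2) / (n - 1))   # cross-sectional std. dev.
    return aape, np.sqrt(n) * aape / s                 # TS follows t with n - 1 df

rng = np.random.default_rng(1)
ape = 0.10 + rng.normal(0.0, 0.05, 21)   # true mean 10%, noise across indexes
aape, ts = aape_tstat(ape)
```

With 21 indexes the statistic is compared against a t-distribution with 20 degrees of freedom, so values beyond roughly ±2.85 are significant at the 99% level.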

Figure 3

Abnormal relative predictive error averaged across 21 selected indexes

The horizontal axis is the number of days away from the national election date; a negative sign represents days before the election and a positive sign days after. The vertical axis is the abnormal relative predictive error in decimals; a relative predictive error of 0.1 means the implied volatility is 10% higher than the realized volatility. The red line is the abnormal relative predictive error in original form and the blue line is its logarithm form. Data points marked with a triangle "▲" are statistically significant at the 99% level.


Table 5

Abnormal relative predictive error averaged across 21 selected indexes

“AAPE” and “ln AAPE” represent average abnormal predictive error and its corresponding logarithm form. “s” and “ln s” stand for the standard deviation of “AAPE” and “ln AAPE”, respectively. “t” and “ln t” mean the t statistics for “AAPE” and “ln AAPE”. The data points with underlines are statistically significant.

The following table shows the regression result of the model mentioned in Chapter 3.

Abnormal Relative Predictive Error = β−20·D−20 + β−19·D−19 + ⋯ + β0·D0 + ⋯ + β19·D19 + β20·D20

Relative Predictive Error = β−20·D−20 + β−19·D−19 + ⋯ + β0·D0 + ⋯ + β19·D19 + β20·D20


The cross-sectional standard deviations are constant across days: s = 10.68% and ln s = 9.12%.

Days   AAPE      ln AAPE    t       ln t
-20    7.52%     5.62%      3.22    3.08
-19    13.30%    10.55%     5.71    5.78
-18    17.21%    13.59%     7.38    7.45
-17    9.95%     8.75%      4.27    4.80
-16    9.28%     5.61%      3.98    3.07
-15    10.68%    6.31%      4.58    3.46
-14    13.88%    9.05%      5.96    4.96
-13    6.75%     2.99%      2.90    1.64
-12    4.31%     1.37%      1.85    0.75
-11    2.19%     -1.73%     0.94    -0.95
-10    1.24%     -1.73%     0.53    -0.95
-9     6.61%     3.82%      2.83    2.09
-8     10.85%    6.70%      4.66    3.67
-7     15.70%    10.37%     6.74    5.69
-6     23.37%    14.71%     10.03   8.07
-5     17.05%    7.58%      7.32    4.15
-4     16.85%    8.13%      7.23    4.46
-3     14.30%    5.14%      6.13    2.82
-2     17.93%    5.65%      7.69    3.10
-1     4.51%     -1.05%     1.94    -0.58
0      2.30%     -3.37%     0.99    -1.85
1      -10.92%   -11.15%    -4.69   -6.11
2      -9.43%    -9.05%     -4.04   -4.96
3      -11.19%   -11.04%    -4.80   -6.05
4      -6.51%    -6.18%     -2.79   -3.39
5      -8.17%    -7.21%     -3.51   -3.95
6      -6.43%    -4.58%     -2.76   -2.51
7      -11.51%   -9.63%     -4.94   -5.28
8      -9.24%    -6.95%     -3.97   -3.81
9      -8.48%    -6.66%     -3.64   -3.65
10     -7.61%    -5.58%     -3.27   -3.06
11     -5.07%    -3.88%     -2.18   -2.13
12     -3.42%    -3.10%     -1.47   -1.70
13     -8.80%    -7.07%     -3.78   -3.88
14     -8.07%    -7.53%     -3.46   -4.13
15     -5.71%    -6.39%     -2.45   -3.50
16     -2.24%    -4.62%     -0.96   -2.53
17     3.44%     0.03%      1.48    0.02
18     6.02%     1.80%      2.58    0.99
19     0.57%     -3.06%     0.25    -1.68
20     -1.64%    -4.90%     -0.71   -2.69


This result is largely consistent with the results shown in Figure 3 and Table 5, except that the data points on days far before the election date are no longer statistically significant. This is an improvement because the effect of the event is more pronounced after accounting for the fixed effects of year and index and for uncertainty. Still, similar to the analysis of Figure 3 and Table 5, these results support the hypothesis that the predictive power of implied volatility is lower during events of high uncertainty.

Table 6

Regression model testing abnormal predictive error

This table summarizes the regression results of the model described at the end of Chapter 3, which takes into consideration the year fixed effect, the index fixed effect, and the effect of uncertainty proxied by the distribution of the popular vote.

Days    Abnormal Predictive Error   Relative Predictive Error
D−20    0.0189                      -0.0259
D−19    0.0767**                    0.0319
D−18    0.116***                    0.071
D−17    0.0433                      -0.00156
D−16    0.0365                      -0.00831
D−15    0.0506                      0.00576
D−14    0.0825**                    0.0377
D−13    0.0113                      -0.0336
D−12    -0.0131                     -0.058
D−11    -0.0343                     -0.0792*
D−10    -0.0438                     -0.0887**
D−9     0.00979                     -0.035
D−8     0.0522                      0.00742
D−7     0.101***                    0.0559
D−6     0.177***                    0.133***
D−5     0.114***                    0.0694
D−4     0.112***                    0.0674
D−3     0.0867**                    0.0419
D−2     0.123***                    0.0782*
D−1     -0.0112                     -0.056
D0      -0.0333                     -0.0781*
D1      -0.165***                   -0.210***
D2      -0.151***                   -0.195***
D3      -0.168***                   -0.213***
D4      -0.121***                   -0.166***
D5      -0.138***                   -0.183***
D6      -0.121***                   -0.165***
D7      -0.171***                   -0.216***
D8      -0.149***                   -0.194***
D9      -0.141***                   -0.186***
D10     -0.132***                   -0.177***
D11     -0.107***                   -0.152***
D12     -0.0905**                   -0.135***
D13     -0.148***                   -0.192***
D14     -0.141***                   -0.185***
D15     -0.117***                   -0.161***
D16     -0.0827**                   -0.127***
D17     -0.0258                     -0.07
D18     -0.0127                     -0.0441
D19     -0.0545                     -0.0986**
D20     -0.0767**                   -0.121***

Observations    3,272    7,947
R-squared       0.535    0.427
*** p<0.01, ** p<0.05, * p<0.1

CHAPTER 5 Alternatives

5.1 Potential errors in the research process and the alternative approach

Plenty of research has investigated the predictive power of implied volatility, typically taking the difference between implied volatility and realized volatility as the predictive error. Theoretically, the average predictive error should be close to zero. However, given that the average predictive error of implied volatility is non-zero in practice, it is important to identify the components of the absolute predictive error. Imagine a situation in which the realized volatility is, on average, higher than the implied volatility. Option sellers, who are effectively short volatility, will on average incur losses; ideally, there would then be no option sellers in the market. According to the BSM model, the implied volatility is obtained in a risk-neutral setting. Nonetheless, the realized volatility is calculated from real trading data, which may contain a volatility risk premium. Therefore, directly contrasting the implied volatility with the corresponding realized volatility requires the prerequisite that the market does not price volatility risk, or in other words, is indifferent to it. However, the studies of Carr and Wu (2009) and Prokopczuk and Wese Simen (2012) showed that this premise does not hold. They found compelling evidence that a time-varying volatility risk premium creates a difference between implied volatility and realized volatility. Therefore, the difference between implied volatility and realized volatility is composed of the volatility risk premium and the strict predictive error of the implied volatility. Given this, it is possible to construct an alternative approach that mitigates the influence of the volatility premium when assessing the predictive error.
Prokopczuk and Wese Simen (2013) studied the role of the volatility risk premium, proposed a non-parametric and parsimonious approach to adjust the non-parametric implied volatility, and pointed out that the volatility risk premium tends to be relatively constant over short periods of time. Thus, if one regresses the ex-post realized volatility on the implied volatility, the result sheds some light on the volatility risk premium.

Realized Volatility = α + β ∙ Implied Volatility + ε

The above regression is equivalent to a model that adjusts for the volatility risk premium embedded in the difference between implied and realized volatility using the coefficient β and the constant α. Using this regression in the estimation window, researchers can estimate the coefficients for the period. Then, the estimated model is implemented in the event window: the implied volatility in the event window is taken as the input to derive the predicted ex-post volatility. The predicted ex-post volatility thus includes not only the normal predictive error but also the relatively stable volatility risk premium of the period. The difference between the predicted ex-post volatility and the actual volatility is the abnormal predictive error. This procedure accounts for the relatively stable volatility premium over the short period. Finally, implementing the standard event study procedure as in this paper yields a more precise result.
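A sketch of this premium-adjusted procedure with simulated volatilities: fit realized volatility on implied volatility in the estimation window, then treat the deviation of the actual realized volatility from the model's prediction in the event window as the abnormal predictive error.

```python
import numpy as np

rng = np.random.default_rng(2)

# simulated estimation window: realized vol is a noisy linear function of implied vol
iv_est = rng.uniform(0.10, 0.40, 60)
rv_est = 0.02 + 0.8 * iv_est + rng.normal(0.0, 0.01, 60)

# estimate alpha and beta in:  Realized Vol = alpha + beta * Implied Vol + eps
beta, alpha = np.polyfit(iv_est, rv_est, 1)

# event window: the predicted ex-post volatility already embeds the stable premium
iv_event = np.array([0.25, 0.30, 0.35])
rv_event = np.array([0.23, 0.26, 0.33])
predicted = alpha + beta * iv_event
abnormal = predicted - rv_event   # abnormal predictive error after premium adjustment
```

Because α and β absorb the roughly constant premium over the estimation window, any remaining gap in the event window is attributable to genuine prediction failure rather than to the premium itself.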

There are also potential errors in the process of statistics construction for reasons such as the abnormal predictive error not necessarily following the same distribution, correlation among indexes, etc.

5.2 Results and interpretation of the alternative approach

Using the alternative approach stated above, I produced the following descriptive statistics of the predictive error of implied volatility, that is, the predictive error adjusted for the volatility risk premium. As we can see, the relative predictive error now has a distribution very close to normal. This near-ideal distribution can potentially yield a more sensible understanding.

Table 7

Descriptive statistics of predictive errors calculated under the alternative approach

This is the descriptive statistics derived from the alternative approach, which adjusted for the volatility risk premium.

Absolute Predictive Error = Model Predicted Volatility − Actual Realized Volatility
Relative Predictive Error = Absolute Predictive Error / Actual Realized Volatility
Log Relative Predictive Error = ln(Model Predicted Volatility) − ln(Actual Realized Volatility)

The following graph is produced with the same procedure as in Chapter 4. Although the shapes of the abnormal predictive error curves are similar to those in Chapter 4, the mass of significant data points disappears in this graph. This indicates that the predictive power of implied volatility is not significantly influenced by the occurrence of events that induce high uncertainty. Due to the quasi-normal distribution of this measure of predictive error, the statistical test result is even more compelling than that in Chapter 4. The reason we draw a conclusion that contradicts the one in Chapter 4 is the consideration of the volatility risk premium. If it is true that the volatility risk premium constitutes an important part of the difference between implied and realized volatility, we can conclude that the predictive errors measured in Chapters 3 and 4 largely represent the required premium of the

Descriptive Statistics: Full Period (Event Window) — 01/01/1996 to 31/12/2017

Statistics     Absolute Predictive Error   Relative Predictive Error   Log Relative Predictive Error
Mean           -0.033                      0.029                       -0.037
100*Variance   1.798                       13.150                      14.610
Skewness       -1.745                      0.639                       -1.159
Kurtosis       7.502                       3.401                       9.079


option seller. In other words, the alleged predictive error in most of the existing research is a reasonable charge for extra risk, rather than an indicator of poor ability to predict future volatility.

Figure 4

Abnormal relative predictive error averaged across 21 selected indexes (Alternative Approach)

The horizontal axis is the number of days away from the national election date; a negative sign represents days before the election and a positive sign days after. The vertical axis is the abnormal relative predictive error in decimals; a relative predictive error of 0.1 means the implied volatility is 10% higher than the realized volatility. The red line is the abnormal relative predictive error in original form and the blue line is its logarithm form. Data points marked with a triangle "▲" are statistically significant at the 99% level.



CHAPTER 6 Conclusion

This paper investigated the predictive power of implied volatility. Most previous work focused on its overall predictive power, with conclusions varying from alleging that implied volatility is a poor estimator to finding that it outperforms other time-series models in predicting future volatility. The hypothesis tested here is that the predictive power of implied volatility is poorer during the occurrence of events that induce high uncertainty.

The methodology employed is similar to an event study. I identified two estimation windows near the event window and estimated the expected predictive error of implied volatility. The difference between the actual predictive error and the expected predictive error in the event window is denoted the abnormal predictive error. I produced graphs to visually assess the impact of such events on the predictive power of implied volatility, and then tested the abnormal predictive error statistically with two approaches. Although only the second approach takes the various fixed effects into consideration, the conclusions yielded by both approaches are consistent: the predictive power of implied volatility is harmed by events with high uncertainty. In other words, the market is bad at pricing the risk it perceives. Moreover, a potential problem embedded in the proxy used as the predictive error is the volatility risk premium. I provide an alternative approach that adjusts for the volatility risk premium, and it yields a conclusion that contradicts the previous one: the predictive power of implied volatility is not significantly affected by events inducing high uncertainty.

Finally, I conclude that the test result of the hypothesis depends on the existence of the volatility risk premium. The predictive power of implied volatility is significantly affected by such events if there is no volatility risk premium, but not once the volatility risk premium is accounted for.


REFERENCES

Szakmary, A., Ors, E., Kim, J. K., & Davidson, W. N. (2003). The predictive power of implied volatility: Evidence from 35 futures markets. Journal of Banking & Finance, 27(11), 2151-2175.

Christensen, B. J., & Prabhala, N. R. (1998). The relation between implied and realized volatility. Journal of Financial Economics, 50(2), 125-150.

Canina, L., & Figlewski, S. (1993). The Informational Content of Implied Volatility. The Review of Financial Studies, 6(3), 659-681.

Carr, P., and Wu, L. (2016), Analyzing volatility risk and risk premium in option contracts: A new theory. Journal of Financial Economics, 120(1), 1-20.

Federal Election Commission – United States of America. Retrieved from https://transition.fec.gov/pubrec/electionresults.shtml

Ge, W. (2016). A survey of three derivative-based methods to harvest the volatility premium in equity markets. The Journal of Investing, 25(3), 48-58.

Goodell, J. W., & Vähämaa, S. (2013). US presidential elections and implied volatility: The role of political uncertainty. Journal of Banking & Finance, 37(3), 1108-1117.

Hull, J. (2018). Options, Futures, and Other Derivatives (9th ed., Global ed.).

Hentschel, L. (2003). Errors in implied volatility estimation. Journal of Financial and Quantitative Analysis, 38(4), 779-810.

Fleming, J. (1998). The quality of market volatility forecasts implied by S&P 100 index option prices. Journal of Empirical Finance, 5(4), 317-345.

Jong, F., & Goeij, P. (2011). Event studies methodology. Retrieved from https://www.coursehero.com/tutors-problems/Managerial-Accounting/10105220-hi-i-need-a-summary-for-this-article-in-two-three-pages-please-/

Jorion, P. (1995). Predicting Volatility in the Foreign Exchange Market. Journal of Finance, 50(2), 507-528.

Kelly, B., Pástor, Ľ., & Veronesi, P. (2016). The price of political uncertainty: Theory and evidence from the option market. Journal of Finance, 71(5), 2417-2480.

Lamoureux, C. G., & Lastrapes, W. D. (1993). Forecasting stock-return variance: Toward an understanding of stochastic implied volatilities. The Review of Financial Studies, 6(2), 293-326.

Bharadia, M. A. J., Christofides, N., & Salkin, G. (1996). A quadratic method for the calculation of implied volatility using the Garman-Kohlhagen model. Financial Analysts Journal, 52(2), 61-64.

Manfredo, M. R., & Sanders, D. R. (2002). The information content of implied volatility from options on agricultural futures contracts. Proceedings of the NCR-134 Conference on Applied Commodity Price Analysis, Forecasting, and Market Risk Management.

Martens, M., & Zein, J. (2004). Predicting financial volatility: High-frequency time-series forecasts vis-à-vis implied volatility. Journal of Futures Markets, 24(11), 1005-1028.


Prokopczuk, M., & Wese Simen, C. (2014). The importance of the volatility risk premium for volatility forecasting. Journal of Banking & Finance, 40, 303-320.

Yadav, P. K. (1992). Event studies based on volatility of returns and trading volume: A review. The British Accounting Review, 24(2), 157-184.


APPENDIX A Graphs of the reaction during each election

