
Pricing Options Using Stochastic Volatility

Models: a Multi-Factor Extension

Nick Timmer

A thesis presented for the degree of

Master of Science

Faculty of Economics and Business

Rijksuniversiteit Groningen


Master’s Thesis Econometrics, Operations Research and Actuarial Studies.

Supervisor: D. Vullings, MSc


Pricing Options Using Stochastic Volatility

Models: a Multi-Factor Extension

Nick Timmer

Tuesday 3rd March, 2020

Abstract

Table of Contents

List of Tables
List of Figures
1 Introduction
2 Literature Review
3 Data Description
  3.1 Principal Component Analysis
4 Model
  4.1 Return Dynamics
  4.2 Option Valuation
5 Estimation Methodology
  5.1 Metropolis Algorithm
6 Results
  6.1 Parameter Estimation
  6.2 Out-of-Sample Results
  6.3 Trading Strategies
7 Conclusion
Bibliography
Appendix A Algebra

List of Tables

3.1 Descriptive Statistics Call Options on Coca-Cola Co.
3.2 Principal Component Analysis of Implied Volatility
6.1 Estimated Parameters for the One- and Two-Factor Models
6.2 Absolute Mean Error for Each Model Separated by Day
6.3 Mean Pricing Errors Separated by Category

List of Figures

5.1 Example of the Metropolis Algorithm
6.1 Pricing Errors for All Maturities and Moneyness Categories Combined

1 Introduction

Many researchers have tried to derive the fair price of options by imposing structure on the dynamics of the underlying stock. The Black-Scholes option pricing model is arguably the best-known model in the field of quantitative finance. Given the current price of the underlying, as well as the maturity and the strike price of the option, the Black-Scholes model yields the price of the option today. The model is based on the assumption of a random walk: the best predictor of the future (log) price is the current price plus a drift term. As the time increments of the random walk go to zero, and the changes of the process are scaled accordingly, the random walk converges to a Brownian motion. In the Black-Scholes model, the change in the return of the stock is described by a drift term and a Brownian motion term scaled by a volatility parameter. The former can be interpreted as the expected increase, while the latter can be seen as a perturbation from this expectation. It is this scaling volatility parameter that has received a lot of attention. Inadequacies in the modelling of stock returns and, in particular, the volatility of stock returns, led to the downfall of Scholes' and Merton's hedge fund, Long-Term Capital Management (LTCM). Scholes and Merton are two of the developers of the Black-Scholes model and the majority of their hedging strategies relied on it. Losses were so far-reaching that the U.S. Federal Reserve Board (FED) felt compelled to intervene (Edwards [15]).

One of the main shortcomings of the Black-Scholes model is its treatment of volatility: the volatility parameter is assumed to be constant. In reality, however, options with different strike prices trade at different implied volatilities (Rubinstein [35]). Call options with a relatively low strike price, the so-called In The Money (ITM) options¹, have a higher implied volatility than other call options. This phenomenon is known as the volatility skew. Absence of this skew in the model can lead to significant mispricing of Out Of The Money (OTM) options (Chakrabarti and Santra [10]). Furthermore, volatility is not constant across maturities. These drawbacks spurred many new models, most notably the stochastic volatility model of Heston [24] and the SABR model of Hagan et al. [23]. In these types of models, volatility, in addition to the underlying, is assumed to follow a random process. Henceforth, we will refer to the random components of the volatility of the stock price as volatility factors. In this paper we develop a model that extends the Heston model while remaining analytically tractable.
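To make the constant-volatility assumption concrete, a minimal sketch of the Black-Scholes call price follows; the same sigma is applied to every strike and maturity, which is precisely why this model cannot reproduce a skew. All parameter values are illustrative.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call on a non-dividend-paying stock."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# The same sigma is used for every strike, so no volatility skew can arise.
for K in (38.0, 40.0, 42.0):
    print(K, round(bs_call(S=40.0, K=K, T=0.25, r=0.01, sigma=0.20), 4))
```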

Jaber [26] argues that the single-factor Heston model fails to capture the explosive behaviour of the at-the-money (ATM) skew for short maturities. Modelling the short- and long-term maturities separately in a multi-factor model (see e.g. Da Fonseca et al. [13]) may help in modelling this behaviour. Moreover, Christoffersen et al. [11] argue that the single-factor model is unable to capture the independence between the level of volatility and the shape of the smirk; the single-factor model can generate either a steep or a flat curve for a given level of volatility, but not both. Christoffersen et al. [11] add that a multi-factor model allows for stochastic correlation, since the factors have distinct correlations with the underlying and the factor weights are allowed to vary over time. This allows us to capture the temporal dependencies of the volatility skew, in contrast to the single-factor model, where correlation is constant over time.

¹ An option is considered ITM when the current stock price is considerably larger than the strike price.

The main reason for using stochastic volatility models is the ability to capture this volatility skew. In single-factor models, like the Heston model, volatility is modelled as an Ornstein Uhlenbeck (OU) process; a stochastic process that exhibits mean reversion with mean zero. Stochastic volatility models with a negative correlation between the Brownian motion driving the volatility process and the Brownian motion driving the return of the underlying facilitate the modelling of the leverage effect; rising asset prices are accompanied by a drop in volatility, and vice versa (Aït-Sahalia et al. [3]). Properly modeling this leverage effect increases the (expected) probability of large losses and as a corollary the price of OTM puts (ITM calls), generating the volatility skew (Christoffersen et al. [11]).

The Heston model assumes a zero mean reversion level of the volatility process. This mean reversion level, however, is shown to be non-zero in, for example, Merville and Pieptea [29] and Stein [42]. Moreover, Schöbel and Zhu [37] show that option prices are especially sensitive to the mean reversion level of the volatility process. Stein and Stein [41] consider a similar single-factor stochastic volatility model in which the aforementioned mean reversion level is allowed to be non-zero, yet they restrict the Brownian motion driving the volatility process to be independent of the return process of the underlying. As described above, a negative correlation is needed to properly model the volatility smirk, and hence independence is too restrictive. Crucially, combining the above considerations, we follow Schöbel and Zhu [37] in merging Heston [24] and Stein and Stein [41] to model the volatility factors. That is, we model the volatility factors as mean reverting OU processes with non-zero mean that are possibly correlated with the return process of the underlying.

This paper combines the above-mentioned considerations into a multi-factor model with mean reverting volatility factors and squared volatility factors³ with non-zero mean; both the volatility and the variance of the stock exhibit mean reversion. The model will be derived using the characteristic function (CF) technique described in Scott [38], as opposed to the pure Partial Differential Equation (PDE) technique used in Heston [24]. The former technique does not require an ansatz for the form of the CF⁴, and guessing a suitable form for the CF may prove to be a difficult task. Jacquier et al. [27] show that Monte Carlo sampling is particularly effective for stochastic volatility models and hence we estimate the parameters and spot volatilities using Bayesian analysis. In this approach, we follow Eraker [16].

³ The squared volatility factors describe the dynamics of the volatility factors squared.

Using data on Coca-Cola Co. (KO)5 call options ranging from March 2014 to April 2014 we will compare the out-of-sample fit of this model with some of the benchmark models in the literature. In particular, we look at the modelling errors across multiple maturities and strike prices. We find improvements on current benchmark models in both the maturity and the moneyness6 dimension. That is, our model outperforms benchmark models when options are categorized according to their maturity or moneyness. Moreover, we analyse the performance of the model through implementing some basic trading strategies; the results confirm the findings of the error analysis.

This paper is outlined as follows. In the next section, commonly used stochastic volatility models are presented in the literature review. Afterwards, in Section 3, we present and describe the data. In addition, we perform a factor analysis to support the multi-factor model. In Section 4, we derive the above described model analytically, which is followed by an elaborate description of the estimation methodology in Section 5. Section 6 presents the results and Section 7 concludes.

⁴ In fact, the technique considered in Heston [24] does not lead to a system of ordinary differential equations which can easily be solved.

2 Literature Review

Many researchers have tried to generalize the volatility structure that is present in the original Heston model (Heston [24]). As discussed before, Jaber [26] argues that the original Heston model is inadequate in explaining the explosive behaviour of the ATM skew. To resolve this problem, multiple approaches have been considered, most notably: adding more stochastic volatility factors, the inclusion of jumps and the introduction of the rough Heston model.

As mentioned before, multi-factor models aim to capture the skew behaviour by accounting separately for long term and short term volatility. Christoffersen et al. [11] explain that, while able to capture the basics of the volatility skew, the Heston model is not able to capture its time-varying nature, urging the need for a multi-factor model. Specifically, the single-factor Heston model is not able to capture the independence between the level of volatility and the shape of the skew. They find improvements of the two-factor model over the single-factor model in both the moneyness and the maturity dimension. They argue that "...in the future multifactor models may become as widespread in the option valuation literature as they now are in the term structure literature" (Christoffersen et al. [11], p. 4). Indeed, multi-factor models have been used extensively, for example in Göncü and Ökten [22], Cortazar et al. [12] and Lorig et al. [30]. Moreover, Da Fonseca et al. [13] derive the general form of the multi-factor case but do not compare the out-of-sample fit to benchmark models in the literature.

Pan [33] and Bates [6] consider combined jump-diffusion models in which stochastic volatility models are combined with the jump models of Merton [31] and Kou [28]. Pan [33] argues that a jump component is needed to capture large daily price movements, although standalone jump models, which assume a constant intensity for the jump process, have difficulty explaining the tendency of large price movements to cluster. Their findings, however, reflect the need to also include a jump term in the volatility process in order to adequately capture the systematic variations present in the option prices. The inclusion of jumps in the volatility dynamics combines the persistence of stochastic volatility models with the ability to move away from normally distributed increments. This is implemented in, for instance, Eraker [16] and Eraker et al. [17] to create the class of Stochastic Volatility Jump Diffusion (SVJD) models. While we recognize the effectiveness of including jumps in the model, they are not included in our model since they impede the establishment of closed-form solutions. The need for simulation in the evaluation of a single option would, in combination with the simulations described in the estimation section, yield an infeasible computational challenge.

3 Data Description

Tick data on Coca-Cola Co. (KO)⁷ European call options are used for our empirical analysis. First, the usage of tick data implies that every price change on a given exchange is recorded, instead of recording the price at constant time increments. Second, the call options are European style and hence cannot be exercised before their expiration date. The data range from March 2014 to April 2014 and are taken from multiple major exchanges located in the United States. Note that, due to budget limitations, the span of this data set is rather short. However, a relatively short time interval limits the impact of news events on the price of the stock. Moreover, increasing the span of the (tick) data set would increase the computational burden significantly. The current stock price is taken as the price of the last traded stock adjusted for dividends. The discounted dividends, determined using risk-free rates, are subtracted from the price of the underlying. Risk-free rates for the different maturities are determined using inter- and extrapolation of existing T-Bill rates. Finally, to avoid potential mispricing from extrapolating data, we also use LIBOR rates to estimate the risk-free rate and verify the results.

Filtering of the data is performed in accordance with Bakshi et al. [5]. First, options with a maturity of less than 6 days are removed from the sample to exclude potential liquidity-related biases. Second, excluding options with a price below a small minimum threshold⁸ helps to alleviate the impact of the non-continuity of the quoted prices. On most exchanges, quoted prices are required to be a multiple of a given tick size. As a result, the price series are not entirely smooth. This non-smoothness can lead to estimation biases which become more severe at progressively lower prices (Ajay [1]). Lastly, prices should obey the no-arbitrage restriction

$$C(t) \geq \max\big\{0,\; S(t) - e^{-r(T-t)}K\big\},$$

where C denotes the price of the option, S the price of the underlying stock, K the strike price of the contract and r the risk-free rate. To see why this condition must hold, consider a portfolio consisting of a long call, an investment of $e^{-r(T-t)}K$ in the risk-free asset and a short position in the stock. At expiration, this portfolio has a non-negative payoff and hence its value at time t must be non-negative, which gives $C(t) \geq S(t) - e^{-r(T-t)}K$. Moreover, the payoff of a call option is always non-negative and hence $C(t) \geq 0$.
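A minimal sketch of this filtering step is given below; the data-frame column names and the minimum-price threshold are hypothetical placeholders rather than the exact values used in the thesis.

```python
import numpy as np
import pandas as pd

def filter_options(df: pd.DataFrame, min_days: int = 6, min_price: float = 0.10) -> pd.DataFrame:
    """Apply the three filters described above to a table of call-option quotes.

    Hypothetical columns: price, S (spot), K (strike), r (risk-free rate),
    maturity_days (calendar days to expiration)."""
    tau = df["maturity_days"] / 365.0
    no_arb_bound = np.maximum(0.0, df["S"] - df["K"] * np.exp(-df["r"] * tau))
    keep = (
        (df["maturity_days"] >= min_days)   # liquidity: drop very short maturities
        & (df["price"] >= min_price)        # discreteness: drop near-tick-size prices
        & (df["price"] >= no_arb_bound)     # no-arbitrage: C >= max(0, S - K e^{-r(T-t)})
    )
    return df.loc[keep]
```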

After filtering, 16,411 contracts are left, which are, as in Bakshi et al. [5], split into three Days-To-Maturity and six moneyness (S/K) categories. For each category, the number of observations, the average price of the call option and the average implied volatility are reported in Table 3.1. Observe that the majority of the sample consists of ATM options (46%). The price of the call options ranges from $0.07 for short term deep OTM options to $6.10 for long term deep ITM options. Moreover, the volatility smirk is present for each Days-To-Maturity category but is most sizeable for the short maturities.

⁷ A similar analysis is performed on Manitowoc Company Inc. (MTW) to exclude potential biases resulting from considering a single stock. For brevity, the results are omitted but available on request. The main results of this paper also apply to this stock.

⁸ This limit is obtained from Bakshi et al. [5] and scaled to compensate for the lower level of the KO stock price.

In estimating the model, we split the data into two parts. The estimating part comprises 70% of the data which amounts to 11,492 contracts. The validation part consists of 4,924 contracts. This separation is performed across time such that the relative share of each combination of moneyness and maturity is approximately equal for both sets.

A favourable property of our model is that both volatility and variance are allowed to have a non-zero mean reversion level. We will show that this feature is indeed warranted by the data. Since stock volatility is naturally a latent variable, we use implied volatility as a proxy. If one were to use S&P 500 call options, the corresponding volatility index, the VIX, could be considered as a proxy instead.

First, we take the average of the implied volatility per hour, which we denote by $v_t$. We estimate an autoregressive model of order one on these volatilities, that is,
$$v_t = \alpha + \beta v_{t-1} + \epsilon_t, \qquad t = 2, \ldots, T.$$

The intercept, 𝛼, is significantly different from zero for any meaningful significance level and hence the mean reversion level of volatility is likely to be different from zero. Next, we will provide an exploratory analysis to show that multiple volatility factors are desired.
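A minimal sketch of this AR(1) check on the hourly implied volatilities follows; the series is simulated here purely as a stand-in for the hourly averages computed from the data.

```python
import numpy as np
import statsmodels.api as sm

# Stand-in for the hourly averaged implied volatilities v_t (simulated AR(1) data).
rng = np.random.default_rng(0)
v = np.empty(200)
v[0] = 0.15
for t in range(1, 200):
    v[t] = 0.02 + 0.85 * v[t - 1] + 0.005 * rng.standard_normal()

# Regress v_t on v_{t-1} with an intercept: v_t = alpha + beta * v_{t-1} + eps_t.
fit = sm.OLS(v[1:], sm.add_constant(v[:-1])).fit()
print(fit.params)    # estimates of (alpha, beta)
print(fit.pvalues)   # a small p-value for alpha points to a non-zero mean reversion level
```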

3.1 Principal Component Analysis

To determine the number of volatility factors needed in the model, we follow Christoffersen et al. [11] and perform a principal component analysis. As argued in that paper, this approach is subject to some limitations, most notably the subjectivity of the choice of how many components to retain and the sensitivity to the scaling of the data. In our case, however, its purpose is merely to show the existence of multiple factors and to provide a starting point for our empirical analysis. As argued above, stock variance is a hidden variable and hence the analysis will be performed on implied volatilities.


Table 3.1: Descriptive Statistics Call Options on Coca-Cola Co.

The loading coefficients of the different components, as well as the cumulative percentage of variance explained by the different factors, are reported in Table 3.2.

Notice that, from the variance explained in Table 3.2, a model with two or three factors seems to be appropriate. This is in line with the existing literature, see e.g. Aït-Sahalia and Xiu [4]. Observe that the first principal component has relatively high positive loadings for the options that are deep in or out of the money, with the loadings of the former being the highest. The ATM options have a small (close to zero or negative) loading. Hence, it is expected that the first component represents the volatility smirk. Recall that Christoffersen et al. [11] argue that the shape of the volatility smirk is largely independent of the level of the volatility; a single-factor model can generate a steep smirk or a flat smirk for a given volatility level, but it is not able to generate both. Now taking a look at the second factor in Table 3.2, we observe that the loadings are largely positive for the OTM options while being mostly negative for the ITM options. Hence, the second component is expected to represent a factor that adjusts the steepness of the smirk depending on the level of the volatility. Lastly, we see that the third component has relatively large positive loadings for the long maturities while being small for the short maturities. Hence, this factor allows the smirk to be shifted up and down for different levels of maturity.
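The sketch below illustrates such a principal component analysis on a panel of implied volatilities; the panel is simulated and stands in for the hourly implied volatilities per moneyness-maturity category.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical panel: rows are hours, columns are the 18 moneyness-maturity
# categories (6 moneyness bins x 3 maturity bins) of implied volatility.
rng = np.random.default_rng(1)
iv_panel = 0.15 + 0.02 * rng.standard_normal((250, 18))

pca = PCA(n_components=3)
pca.fit(iv_panel)
print(np.cumsum(pca.explained_variance_ratio_))  # cumulative variance explained
print(pca.components_)                           # loadings per category, one row per component
```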

4 Model

As previously discussed, we will consider a multi-factor stochastic volatility model in which the factors are specified as in Schöbel and Zhu [37]. The factors themselves and the factors squared exhibit mean reversion. We start by discussing the specific return dynamics. Afterwards, we determine the closed-form solution for a European call option. In theory, however, the price of any simple claim can be derived in a similar way. Starting from a general pricing expression for a call option, the probabilities can be rewritten using the Fourier transform of the characteristic function. Note that, as is standard in the literature, our analysis relies on the assumption of no arbitrage.

4.1 Return Dynamics

We assume the dynamics of the underlying price process, henceforth S(t) or the underlying, to be known. In our case, we assume that the dynamics of the log of the price of the underlying, x(t) = ln(S(t)), are given by

$$dx(t) = \mu\,dt + \boldsymbol{v}(t)^\top d\boldsymbol{w}_x^P(t), \tag{4.1}$$

where $\boldsymbol{w}_x$ is an n-dimensional Wiener process. Note that the Wiener process here is evaluated under the P-measure, as indicated by the superscript. The drift term, μ, is independent of time and can be interpreted as the mean return of the underlying. The assumption of a constant drift term is in line with the literature (see e.g. Stein and Stein [41], Heston [24] and Schöbel and Zhu [37]). Moreover, $\boldsymbol{v}(t)$ can be interpreted as an n-dimensional vector containing the individual volatility terms.

In defining the individual volatility factors, we follow Stein and Stein [41] and consider them to follow a mean-reverting Ornstein-Uhlenbeck process. The general definition of a one-dimensional version of such a process is given by

Definition 1 Mean Reverting Ornstein-Uhlenbeck (MROU) Process

A MROU process, v(t), is a process defined by the stochastic differential equation
$$dv(t) = \delta(\theta - v(t))\,dt + \sigma\, dw_v(t),$$
where $w_v(t)$ is a Wiener process and $\delta, \theta, \sigma \in \mathbb{R}_{++}$.

Intuitively, assuming such a process for the volatility factors implies that these factors are pulled towards the equilibrium value θ, where δ can be seen as the rate of convergence and the Wiener term as the error.
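A minimal Euler-discretization sketch of such a process is shown below; the parameter values are illustrative only.

```python
import numpy as np

def simulate_mrou(v0: float, delta: float, theta: float, sigma: float,
                  T: float = 1.0, n: int = 1_000, seed: int = 0) -> np.ndarray:
    """Euler scheme for dv = delta*(theta - v) dt + sigma dW on [0, T]."""
    rng = np.random.default_rng(seed)
    dt = T / n
    v = np.empty(n + 1)
    v[0] = v0
    for i in range(n):
        v[i + 1] = v[i] + delta * (theta - v[i]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return v

path = simulate_mrou(v0=0.35, delta=2.0, theta=0.20, sigma=0.10)
print(path[-1])  # the path is dragged towards theta = 0.20 at rate delta
```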

Combining the above and changing to a risk-neutral measure yields the following model9

$$dx(t) = r\,dt + \boldsymbol{v}(t)^\top d\boldsymbol{w}_x^Q(t), \tag{4.2}$$
$$dv_i(t) = \delta_i(\theta_i - v_i(t))\,dt - \sigma_i \lambda_i(x(t), v_i(t), t)\,dt + \sigma_i\, dw_{v_i}^Q(t). \tag{4.3}$$
Note that the change of measure resulted in a change of the drift term while the volatility term remains unaltered; this is a direct consequence of the Girsanov Theorem. Moreover, observe the dependence on $\boldsymbol{\lambda} \in \mathbb{R}^n$, the market price of volatility risk. This dependence stems from the fact that the volatilities are not traded, implying an incomplete market and hence non-uniqueness of the risk-neutral measure. The price of the traded assets is still uniquely determined under any risk-neutral measure. The price of the non-traded assets, the volatilities, depends on the specified preferences, $\boldsymbol{\lambda}$, and may differ across measures. For the functional form of $\lambda_i$ we follow standard arguments (see e.g. Christoffersen et al. [11]) in assuming $\lambda_i(x(t), v_i(t), t) = \lambda_i v_i(t)$ for some constant $\lambda_i$. Lastly, r can be interpreted as the risk-free rate.

In words, we assume that the random sources driving the volatility factors are uncorrelated with each other. The i-th random source of the stock return is correlated with the random source of the i-th volatility factor but uncorrelated with the random sources driving the other factors. Formally, we can describe these interactions as follows:

$$d\boldsymbol{w}_x(t)\, d\boldsymbol{w}_x(t)^\top = \boldsymbol{I}_n\, dt, \qquad d\boldsymbol{w}_x(t)\, d\boldsymbol{w}_v(t)^\top = \operatorname{diag}(\boldsymbol{\rho})\, dt, \qquad d\boldsymbol{w}_v(t)\, d\boldsymbol{w}_v(t)^\top = \boldsymbol{I}_n\, dt,$$

where $w_{v_i}$ is the Wiener process appearing in the i-th volatility process and $\boldsymbol{\rho} \in \mathbb{R}^n$. The diag(·) operator transforms an n-dimensional vector into an n × n diagonal matrix. Note that this correlation structure is in line with Heston [24] but extends Stein and Stein [41]. Moreover, it coincides with the current literature on multi-factor extensions, see e.g. Christoffersen et al. [11].
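To illustrate these dynamics and the correlation structure, a hedged Euler simulation sketch follows; the drift of each factor uses the functional assumption λ_i(x, v_i, t) = λ_i v_i, and all parameter values are illustrative.

```python
import numpy as np

def simulate_paths(x0, v0, r, delta, theta, sigma, lam, rho,
                   T=0.5, n=5_000, seed=0):
    """Euler scheme for (4.2)-(4.3): the i-th return shock is correlated with
    the i-th volatility shock (correlation rho_i) and with nothing else."""
    rng = np.random.default_rng(seed)
    delta, theta, sigma, lam, rho = map(np.asarray, (delta, theta, sigma, lam, rho))
    v = np.asarray(v0, dtype=float).copy()
    x, dt = float(x0), T / n
    for _ in range(n):
        z_v = rng.standard_normal(v.size)
        z_x = rho * z_v + np.sqrt(1.0 - rho**2) * rng.standard_normal(v.size)
        x += r * dt + np.sqrt(dt) * float(v @ z_x)              # dx = r dt + v(t)' dw_x
        v += (delta * (theta - v) - sigma * lam * v) * dt \
             + sigma * np.sqrt(dt) * z_v                        # dv_i under the risk-neutral drift
    return x, v

x_T, v_T = simulate_paths(x0=np.log(40.0), v0=[0.20, 0.10], r=0.01,
                          delta=[2.0, 0.5], theta=[0.20, 0.10],
                          sigma=[0.30, 0.20], lam=[0.1, 0.1], rho=[-0.4, -0.5])
print(np.exp(x_T), v_T)
```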

4.2 Option Valuation

Assuming the above dynamics, we will derive the price of a European call option. If at some predefined future date T the value of the underlying is above some pre-specified level K, the contract pays out the difference; in any other case, the payoff is 0. That is, the payoff of a European call option at time T can be written as

(𝑆(𝑇) − 𝐾) · 1{𝑆(𝑇)>𝐾}.

For any other 𝑡 ∈ [0, 𝑇], in order to price the option consistently with the rest of the market, the price of the call can be expressed as

$$C(t, \boldsymbol{v}, S, T) = \mathbb{E}^Q_t\!\left[ e^{-r(T-t)} (S(T) - K)\cdot \mathbb{1}_{\{S(T)>K\}} \right] = e^{-r(T-t)}\,\mathbb{E}^Q_t\!\left[ S(T)\cdot \mathbb{1}_{\{S(T)>K\}} \right] - e^{-r(T-t)} K\, \mathbb{E}^Q_t\!\left[ \mathbb{1}_{\{S(T)>K\}} \right], \tag{4.4}$$
where the subscripts indicate that the expectation should be taken conditional on the available information at time t, that is, S(t) = S and $\boldsymbol{v}(t) = \boldsymbol{v}$. The superscript Q expresses that the expectation should be taken with respect to the risk-neutral measure. Recall that this measure exists by the assumption of no-arbitrage but depends on $\boldsymbol{\lambda}$, the market price of volatility risk, since the market is not complete. Intuitively, under the Q-measure we can price derivatives as if agents were risk-neutral and hence this pricing formula states that the value of the call option at time t is equal to the discounted expected value of the claim.

Next, we require the concept of a martingale. A martingale is a process in which the conditional expectation of a future value is equal to the present value. Martingales are closely related to risk-neutral measures and to the concept of numeraires. Under the 𝑄-measure we are pricing in terms of a bond, which in our setting, grows with a constant risk-free rate. Under this 𝑄-measure, the price process of the traded assets divided by the numeraire, the bond, is a martingale.10 In order to simplify further calculations we follow Heston [24] in changing numeraires. Specifically, for the first term in equation (4.4) we change the numeraire to the price of the underlying, 𝑆(𝑡), and change measure accordingly.

Intuitively, this implies that we are now pricing this expression in terms of the underlying. Under this new measure we then have that $C(t, \boldsymbol{v}, S, T)/S(t)$ is a martingale, that is,
$$\frac{C(t, \boldsymbol{v}, S, T)}{S(t)} = \mathbb{E}^{Q_1}_t\!\left[ \frac{C(T, \boldsymbol{v}(T), S(T), T)}{S(T)} \right].$$

Moreover, by definition of measure 𝑄,

$$C(t, \boldsymbol{v}, S, T) = e^{-r(T-t)}\,\mathbb{E}^{Q}_t\!\left[ C(T, \boldsymbol{v}(T), S(T), T) \right].$$

The Radon-Nikodym derivative, $dQ/dQ_1$, which can be seen as a ratio of 'densities' that governs the transition between the different measures, is given by
$$\frac{dQ}{dQ_1} = e^{r(T-t)}\,\frac{S(t)}{S(T)}.$$
As a result, expression (4.4) can be written as

$$\begin{aligned}
C(t, \boldsymbol{v}, S, T) &= e^{-r(T-t)}\,\mathbb{E}^{Q_1}_t\!\left[ \frac{dQ}{dQ_1}\, S(T)\cdot \mathbb{1}_{\{S(T)>K\}} \right] - e^{-r(T-t)} K\, \mathbb{E}^{Q}_t\!\left[ \mathbb{1}_{\{S(T)>K\}} \right] \\
&= S(t)\,\mathbb{E}^{Q_1}_t\!\left[ \mathbb{1}_{\{S(T)>K\}} \right] - e^{-r(T-t)} K\, \mathbb{E}^{Q}_t\!\left[ \mathbb{1}_{\{S(T)>K\}} \right] \\
&= S(t)\,\mathbb{P}^{Q_1}_t\!\left[ S(T) > K \right] - e^{-r(T-t)} K\, \mathbb{P}^{Q}_t\!\left[ S(T) > K \right]. \tag{4.5}
\end{aligned}$$

Hence, the price of the call option can be determined as the current stock price multiplied by a factor minus the discounted value of the strike price weighted by the probability of exercise. This factor can be interpreted as the difference between the current stock price and the discounted expected value of the stock. Note that from this point onward, calculations get very technical and interpretation becomes hard.11 For this reason most of the calculations are shifted to the Appendix Section A.2.

In order to get a closed-form solution for these probabilities we follow Heston [24] in deriving their characteristic functions. In order to simplify calculations, we determine the characteristic functions in terms of the log stock price, 𝑥(𝑇), which are given by

$$f_j(\phi) = \mathbb{E}^{Q_j}_t\!\left[ e^{i\phi x(T)} \right], \qquad j = 0, 1,$$

¹¹ The Fourier transform can still be given a useful interpretation in the setting of quantitative finance.


where we used 𝑄0= 𝑄. Using the previously found Radon-Nikodym derivative we can express these characteristic functions under the original martingale measure, 𝑄, as

$$f_0(\phi) = \mathbb{E}^{Q}_t\!\left[ e^{i\phi x(T)} \right], \qquad f_1(\phi) = \mathbb{E}^{Q}_t\!\left[ e^{-r(T-t)}\,\frac{S(T)}{S(t)}\, e^{i\phi x(T)} \right].$$

As is shown in the appendix, 𝑓1 can be reduced to

$$f_1(\phi) = \exp\!\left\{ i\phi\big(r(T-t) + x(t)\big) - (1+i\phi)\sum_{j=1}^{n} \frac{\rho_j}{2\sigma_j}\big(v_j^2(t) + \sigma_j^2 (T-t)\big) \right\} \mathbb{E}^Q_t\!\left[ \exp\!\left\{ \sum_{j=1}^{n} a_j \int_t^T v_j^2(s)\,ds - \sum_{j=1}^{n} b_j \int_t^T v_j(s)\,ds + \sum_{j=1}^{n} c_j v_j^2(T) \right\} \right], \tag{4.6}$$
where $a_j$, $b_j$ and $c_j$ are as defined in the appendix. Hence, we need to determine an expectation of the form, $p_1 : [0, T] \times \mathbb{R}^n \to \mathbb{R}$,

$$p_1(t, \boldsymbol{v}) := \mathbb{E}^Q_t\!\left[ \exp\!\left\{ \int_t^T \sum_{j=1}^{n} \big( a_j v_j^2(s) - b_j v_j(s) \big)\, ds \right\} \exp\!\left\{ \sum_{j=1}^{n} c_j v_j^2(T) \right\} \right], \qquad t \in [0, T]. \tag{4.7}$$
By the Feynman-Kac theorem, as is elaborated upon in the appendix, this expectation can be expressed as the solution of the following partial differential equation,

$$\frac{1}{2}\sum_{j=1}^{n} \sigma_j^2\, \frac{\partial^2 p_1}{\partial v_j^2}(t, \boldsymbol{v}) + \sum_{j=1}^{n} \big( \delta_j(\theta_j - v_j(t)) - \sigma_j \lambda_j v_j(t) \big)\, \frac{\partial p_1}{\partial v_j}(t, \boldsymbol{v}) + \frac{\partial p_1}{\partial t}(t, \boldsymbol{v}) + p_1(t, \boldsymbol{v}) \sum_{j=1}^{n} \big( a_j v_j^2(t) - b_j v_j(t) \big) = 0, \tag{4.8}$$

with boundary condition
$$p_1(T, \boldsymbol{v}) = \exp\!\left\{ \sum_{j=1}^{n} c_j v_j^2 \right\}. \tag{4.9}$$

Next, extending the hypothesized form of Heston [24], we assume that
$$p_1(t, \boldsymbol{v}) = \exp\!\left\{ \sum_{j=1}^{n} \big[ B_j(t)\, v_j^2(t) + D_j(t)\, v_j(t) \big] + C(t) \right\},$$
for some $B_j, D_j, C : [0, T] \to \mathbb{R}$, for all j. Closed-form expressions for $B_j(t; \boldsymbol{a}, \boldsymbol{b}, \boldsymbol{c})$, $D_j(t; \boldsymbol{a}, \boldsymbol{b}, \boldsymbol{c})$ and $C(t; \boldsymbol{a}, \boldsymbol{b}, \boldsymbol{c})$ are derived in the appendix. Note the dependence on the vectors $\boldsymbol{a}$, $\boldsymbol{b}$ and $\boldsymbol{c}$.

Using the results from the appendix, we conclude that,

where
$$\tilde{a}_j = \frac{1}{2}\, i\phi \left( i\phi\,(1 - \rho_j^2) + \frac{2\delta_j \rho_j}{\sigma_j} + 2\lambda_j \rho_j \right), \qquad \tilde{b}_j = \frac{i\phi\, \rho_j \delta_j \theta_j}{\sigma_j}, \qquad \tilde{c}_j = \frac{i\phi\, \rho_j}{2\sigma_j}.$$

Lastly, as is shown in Gil-Pelaez [21], the relationship between this characteristic function and the Cumulative Distribution Function (CDF) of x(T) is as follows

$$\mathbb{P}^{Q_j}_t\big(x(T) \leq z\big) = \frac{1}{2} - \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{e^{-i\phi z}\, f_j(\phi)}{i\phi}\, d\phi,$$

which, by straightforward algebra as shown in Schmelzle [36], can be reduced to
$$\frac{1}{2} - \frac{1}{\pi} \int_{0}^{\infty} \Re\!\left( \frac{e^{-i\phi z}\, f_j(\phi)}{i\phi} \right) d\phi,$$
where $\Re(\cdot)$ denotes the real part.
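The sketch below carries out this inversion numerically for a generic characteristic function; it is verified against a normal distribution, whereas in the model one would plug in the characteristic functions f_0 or f_1 derived above.

```python
import numpy as np
from scipy.integrate import quad

def cdf_from_cf(cf, z: float, upper: float = 200.0) -> float:
    """Gil-Pelaez inversion: P(X <= z) = 1/2 - (1/pi) * int_0^inf Re(e^{-i phi z} cf(phi)/(i phi)) dphi."""
    integrand = lambda phi: np.real(np.exp(-1j * phi * z) * cf(phi) / (1j * phi))
    integral, _ = quad(integrand, 1e-8, upper, limit=500)
    return 0.5 - integral / np.pi

# Check against a known case: X ~ N(0.05, 0.2^2), so P(X <= 0) = Phi(-0.25) ~ 0.4013.
mu, s = 0.05, 0.20
normal_cf = lambda phi: np.exp(1j * phi * mu - 0.5 * (s * phi) ** 2)
print(cdf_from_cf(normal_cf, z=0.0))
```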

5 Estimation Methodology

To estimate stochastic volatility models, the set of structural parameters and the spot volatilities need to be estimated jointly. This is a non-trivial task (Shephard [39]), predominantly because direct application of maximum likelihood is infeasible (Frühwirth-Schnatter and Sögner [19]). As a result, methods based on simulation are predominantly utilized. In particular, the Generalized Method of Moments is used in, for example, Renault [34], while Andersen et al. [2] employ the Efficient Method of Moments. A Kalman filtering approach is used in, for example, Meyer et al. [32]. Moreover, Christoffersen et al. [11] estimate the structural parameters and the volatilities by minimizing the sum of squares in a two-step procedure. The latter approach is not applicable to our data set since observations occur asynchronously, making the first step unreliable. In Jacquier et al. [27] it is shown that Bayesian analysis is particularly effective for stochastic volatility models.

In this paper, we are inspired by the results of Jacquier et al. [27] and apply a Bayesian analysis. We follow the analysis of Eraker [16]; distributional assumptions are made for the parameters and are updated using the likelihood of the observed data. In particular, let $\Theta := \{\boldsymbol{\alpha}, \boldsymbol{\beta}, \boldsymbol{\sigma}, \boldsymbol{\rho}\}$ denote the set of structural parameters and $\{\boldsymbol{V}_t\}$ denote the sequence of spot volatilities. Here, $\alpha_i := \delta_i \theta_i$ and $\beta_i := \delta_i + \sigma_i \lambda_i$, which are introduced because of the overparameterization of (4.3).¹² Since we are dealing with fractional times, we let $t_i$ denote the times at which an observation occurs ($i = 1, \ldots, N$). We assume that the option price at time $t_i$, $y_{t_i}$, is observed with pricing error $\varepsilon_{t_i}$,

$$y_{t_i} = f(\Theta, \boldsymbol{V}_{t_i}, X_{t_i}) + \varepsilon_{t_i}, \qquad i = 1, \ldots, N, \tag{5.1}$$

where f is the price of a call option under our model and $X_{t_i}$ collects the remaining arguments of the option price, such as the strike price. As in Eraker [16], we want to allow for autocorrelation in $\varepsilon_t$ in our prior distribution. This reflects the belief that the relatively large pricing errors of our model are clustered in time. As elaborated upon in Appendix Section A.3, we assume normality of these error terms. Since we deal with fractional times, we can use Doob's representation of an OU process [14] to arrive at

$$\varepsilon_{t_i} \mid \varepsilon_{t_{i-1}} \sim \mathcal{N}\!\left( \varepsilon_{t_{i-1}}\, e^{-\rho_\varepsilon \Delta t_i},\; \frac{s^2}{2\rho_\varepsilon}\big(1 - e^{-2\rho_\varepsilon \Delta t_i}\big) \right),$$

where $\rho_\varepsilon \in [0, 1]$ governs the autocorrelation between the error terms and $s^2 \in \mathbb{R}_{++}$ scales the variance; the unconditional error variance equals $s^2/(2\rho_\varepsilon)$. Observe that, as the time increments, $\Delta t_i$, become smaller, the mean of the conditional distribution approaches the previous error. Moreover, the variance decreases as the time increments become smaller. These observations combined imply that the next error is likely to be close to the current error if time increments are small.

¹² In equation (4.3), for each i, only the rate of convergence $\delta_i + \sigma_i \lambda_i$ can be estimated. As a result, $\delta_i$ and $\lambda_i$ cannot be identified separately.

Next, we derive the joint posterior density of the observed data, the model parameters and the spot volatilities; the density used in estimating the parameters. We use the notational convention to let 𝑝 denote the unnormalized form of a density. This implies that 𝑝 may take on different forms depending on its arguments.

Let $\Theta_1 := \{\rho_\varepsilon, s^2\}$. By Bayes' theorem, the joint density of the observed option prices, Y, the spot volatilities, V, and the parameters is given by
$$p(Y, V, \Theta, \Theta_1) \propto p(Y \mid V, \Theta, \Theta_1)\, p(V \mid \Theta, \Theta_1)\, p(\Theta, \Theta_1). \tag{5.2}$$
That is, the joint posterior density is proportional to the complete data likelihood, multiplied by the prior density of the volatilities conditional on the parameters, multiplied by the prior density of the parameters.

The complete data likelihood can be derived from the specification of the error terms,

$$p(Y \mid V, \Theta, \Theta_1) \propto \prod_{i=1}^{N} p(y_{t_i} \mid X_{t_i}, \varepsilon_{t_{i-1}}, \Theta, \Theta_1) := \prod_{i=1}^{N} \phi\!\left( y_{t_i};\; f(\Theta, \boldsymbol{V}_{t_i}, X_{t_i}) + \varepsilon_{t_{i-1}}\, e^{-\rho_\varepsilon \Delta t_i},\; \frac{s^2}{2\rho_\varepsilon}\big(1 - e^{-2\rho_\varepsilon \Delta t_i}\big) \right),$$

where 𝜙(· ; 𝜇, 𝜎2) denotes the normal density with mean 𝜇 and variance 𝜎2.

The prior density of the spot volatilities conditional on the parameters can be approximated using the Euler discretization of process (4.3),

$$v_{j,t_i} - v_{j,t_{i-1}} = (\alpha_j - \beta_j v_{j,t_{i-1}})\,\Delta t_i + \sigma_j \sqrt{\Delta t_i}\, Z_{t_i},$$

where $Z_{t_i}$ is standard normal for all i. This approximation is justifiable as long as the $\Delta t_i$ are small; Eraker et al. [17] show that the error is negligible for daily intervals. Conditioning on the previous spot volatility,

$$p(V \mid \Theta, \Theta_1) \propto \prod_{i=1}^{N} \prod_{j=1}^{m} p(v_{j,t_i} \mid v_{j,t_{i-1}}, \Theta) \approx \prod_{i=1}^{N} \prod_{j=1}^{m} \phi\!\left( v_{j,t_i};\; v_{j,t_{i-1}} + (\alpha_j - \beta_j v_{j,t_{i-1}})\,\Delta t_i,\; \sigma_j^2\, \Delta t_i \right).$$
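A small sketch of this Euler-approximated transition density, using the reparameterization α_j = δ_jθ_j and β_j = δ_j + σ_jλ_j, is given below; the argument values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def log_vol_transition(v_now: float, v_prev: float,
                       alpha: float, beta: float, sigma: float, dt: float) -> float:
    """Log of the Euler-approximated density p(v_{j,t_i} | v_{j,t_{i-1}}, Theta)."""
    mean = v_prev + (alpha - beta * v_prev) * dt
    return norm.logpdf(v_now, loc=mean, scale=sigma * np.sqrt(dt))

print(log_vol_transition(v_now=0.21, v_prev=0.20, alpha=0.10, beta=0.90, sigma=0.30, dt=1 / 252))
```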


Lastly, for the prior density of the parameters we assume that 𝑝(Θ, Θ1) = 𝑝(Θ)𝑝(Θ1), which are set to uninformative priors, that is, a density that only imposes the natural restrictions on the parameters.

Our goal is to obtain moments of the joint posterior distribution. To derive these moments analytically, we would need to determine the constants of proportionality which we did not consider up to now. These constants are, in general, very hard to determine and therefore the moments are not available analytically. To deal with this problem, we rely on Markov Chain Monte Carlo (MCMC) sampling. That is, we try to simulate draws from the joint posterior density in order to approximate the moments. The dimension of this density, however, being a multiple of the sample size, is too high to sample from directly.¹³ This problem can be dealt with by implementing a Gibbs sampler. Instead of sampling from the joint posterior directly, we can sample sequentially from the conditional posterior densities. The obtained sequence of draws can be seen as a draw from the joint posterior density. These conditional densities, however, are only determined up to a constant of proportionality, which makes direct sampling infeasible. In order to deal with this problem, we incorporate the Metropolis algorithm, which is explained in detail in Section 5.1.

The Gibbs sampler allows us to sample from the joint posterior density by sampling in blocks from the conditional posterior densities. Specifically, given appropriate starting values for ℎ = 0, for iteration ℎ = 1, . . . , 𝐻, we sequentially sample

$$\begin{aligned}
v^h_{j,t_i} &\;\text{from}\;\; p\big(v_{j,t_i} \mid v^h_{-(j,t_i)}, \Theta^{h-1}, Y\big), \qquad j = 1, \ldots, m,\; i = 1, \ldots, N, \\
\Theta^h &\;\text{from}\;\; p\big(\Theta \mid V^h, \Theta_1^{h-1}, Y\big), \\
\Theta_1^h &\;\text{from}\;\; p\big(\Theta_1 \mid V^h, \Theta^h, Y\big),
\end{aligned} \tag{5.3}$$
where $v^h_{-(j,t_i)} = \{\boldsymbol{V}^h_{t_{\tilde{i}}} : \tilde{i} < i\} \cup \{v^h_{\tilde{j},t_i} : \tilde{j} < j\} \cup \{\boldsymbol{V}^{h-1}_{t_{\tilde{i}}} : \tilde{i} > i\} \cup \{v^{h-1}_{\tilde{j},t_i} : \tilde{j} > j\}$.

Intuitively, at each iteration h, we sample the spot volatilities, $v^h_{j,t_i}$, from the conditional density with the other spot volatilities fixed at their value from the current iteration (if they appear before $v^h_{j,t_i}$ in the natural ordering) or from the previous iteration (if they appear after $v^h_{j,t_i}$ in the natural ordering).

First of all, we are not sampling directly from the joint posterior and hence the samples may not be fully representative of this distribution. However, the stationary distribution of the above sampling process is the joint posterior and hence the sample should be satisfactory as long as a large number of iterations is run. Second, notice that the early iterations are highly dependent on the starting values and are hence often discarded; the so-called burn-in period. Next, we explain the process of individual sampling in more detail.

¹³ Since we are dealing with a stochastic volatility model, we have to draw a value for each volatility factor at every observation time.

5.1 Metropolis Algorithm

We will explain the Metropolis algorithm in detail for the sampling of $v_{1,t_1}$; the sampling of the other parameters follows analogously. The usefulness of this algorithm stems from the fact that we only need to determine the distribution to be sampled from up to a constant of proportionality. Consider the problem of sampling
$$v^h_{1,t_1} \;\text{from}\;\; p\big(v_{1,t_1} \mid v^h_{-(1,t_1)}, \Theta^{h-1}, Y\big). \tag{5.4}$$

That is, we have completed the first h − 1 iterations of the sampling scheme described in equation (5.3) and are at the start of iteration h. Notice that, when we arrive at this iteration, the values $v^h_{-(1,t_1)}$, $\Theta^{h-1}$ and Y are by definition available. First, we draw a proposal from the jumping distribution,
$$\tilde{v}^h_{1,t_1} \sim \mathcal{N}\big( v^{h-1}_{1,t_1},\; \tilde{\sigma}^2 \big),$$

where $\tilde{\sigma}^2$ is a variance that has to be set manually. Hence, we propose a draw that is relatively close to the previous draw. The algorithm will converge irrespective of the choice of this variance; however, the choice may affect the speed of convergence. Moreover, the jumping distribution is set to the normal distribution but is allowed to be any symmetric distribution such that the produced chain remains reversible (for details, see e.g. Sherlock et al. [40]).

Second, we determine the acceptance probability, 𝑞. For this step, we require a density that is proportional to the density to be sampled from. That is, a density proportional to equation (5.4). Define,

𝜋(𝑣𝑗,𝑡𝑖 ; 𝑣−(𝑗,𝑡𝑖), Θ, Θ1, 𝑌) := 𝑝(𝑌 | 𝑉, Θ, Θ1)𝑝(𝑉 | Θ, Θ1)𝑝(Θ, Θ1).

The acceptance probability is then given by
$$q := \min\left\{ 1,\; \frac{\pi\big(\tilde{v}^h_{1,t_1};\, v^h_{-(1,t_1)}, \Theta^{h-1}, \Theta_1^{h-1}, Y\big)}{\pi\big(v^{h-1}_{1,t_1};\, v^h_{-(1,t_1)}, \Theta^{h-1}, \Theta_1^{h-1}, Y\big)} \right\}. \tag{5.5}$$

We accept the proposed draw, $\tilde{v}^h_{1,t_1}$, with probability q and stay at the current point $v^{h-1}_{1,t_1}$ with probability 1 − q.

Figure 5.1: Example of the Metropolis Algorithm.
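A self-contained toy sketch of this accept/reject step is given below; the target density is an arbitrary unnormalized stand-in for π rather than the actual posterior of the thesis, and all tuning values are illustrative.

```python
import numpy as np

def metropolis_chain(log_pi, x0: float, n_iter: int = 10_000,
                     step: float = 0.5, seed: int = 0) -> np.ndarray:
    """Random-walk Metropolis: propose from N(x, step^2), accept with prob min(1, pi(prop)/pi(x))."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iter)
    x = x0
    for h in range(n_iter):
        proposal = x + step * rng.standard_normal()        # symmetric jumping distribution
        log_q = min(0.0, log_pi(proposal) - log_pi(x))     # acceptance probability on the log scale
        if np.log(rng.uniform()) < log_q:
            x = proposal                                   # accept the proposal ...
        chain[h] = x                                       # ... otherwise stay at the current point
    return chain

# Stand-in target: an unnormalized N(0.2, 0.05^2) density.
draws = metropolis_chain(lambda v: -0.5 * ((v - 0.2) / 0.05) ** 2, x0=0.0)
print(draws[2_000:].mean(), draws[2_000:].std())  # after a burn-in period, roughly (0.2, 0.05)
```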

6 Results

In this section we compare the out-of-sample results of the two-factor model with several benchmark models. In particular, the single-factor model of Schöbel and Zhu [37], the Heston model and the Black-Scholes model serve as benchmarks. We start off by discussing the estimation results. Next, we compare the performance of the different models by looking at pricing errors. Here, we follow Bakshi et al. [5] in separating the data set into multiple moneyness-maturity categories. As is common in the literature (see e.g. Bakshi et al. [5], Eraker [16] and Christoffersen et al. [11]), we assess the results based on the mean pricing error. Moreover, we look at the performance of the models when they are used as the driver of a buy-and-hold strategy. In the following sections, our proposed model will be referred to as the 'Multi-Factor Zhu' model.

6.1 Parameter Estimation

As argued in Section 3, a model with two volatility factors seems appropriate. We will discuss the estimated parameters of the one-factor and two-factor variants of our model. One estimation chain per model is used to estimate the parameters. Using multiple chains could improve the robustness of the estimates; however, we refrain from doing so due to limited computational capacity. Approximately 80,000 iterations are run, of which 16,000 comprise the burn-in period.

Table 6.1: Estimated Parameters for the One- and Two-Factor Models.
The average parameter estimate is reported first, followed by a 95% confidence interval.

Parameter   One-Factor              Two-Factor (factor 1)   Two-Factor (factor 2)
α            0.08 (-0.14, 0.29)      0.47 (-0.27, 0.95)      0.49 (-0.42, 1.67)
β            0.88 (0.82, 0.93)       1.25 (0.30, 2.37)       0.50 (0.19, 1.33)
ρ           -0.23 (-0.48, -0.13)    -0.37 (-0.67, -0.21)    -0.50 (-0.74, -0.23)
σ            0.82 (0.63, 1.48)       1.90 (0.90, 2.90)       1.70 (0.90, 2.85)

6.2 Out-of-Sample Results

We first examine whether performance shortly after the end of the estimation set differs from performance later in the set. The results can be found in Table 6.2. As expected, the Black-Scholes model has the largest pricing error while our two-factor model performs best. Surprisingly, the performance of the Black-Scholes model is close to that of the Heston model. This might be explained by the observation of Jaber [26] that the Heston model has difficulties explaining the volatility skew for ATM options with a short maturity; these types of options are over-represented in our validation data set. Judging from the pricing errors, there does not seem to be a pattern over time in the mispricing of the models.

Table 6.2: Absolute Mean Error for Each Model Separated by Day.

Day   Black-Scholes   Heston   Zhu    Multi-Factor Zhu
1     0.49            0.48     0.23   0.24
2     0.53            0.53     0.33   0.19
3     0.59            0.58     0.39   0.16
4     0.70            0.70     0.46   0.22
5     0.49            0.49     0.28   0.15
6     0.60            0.59     0.31   0.14
7     0.65            0.63     0.37   0.17

[Figure: pricing errors ε_t (in USD) over the trading day (14:00-16:00) for the BS, Heston, MF Zhu and Zhu models.]

Figure 6.1: Pricing Errors for All Maturities and Moneyness Categories Combined.

Table 6.3: Mean Pricing Errors Separated by Category.

Moneyness (S/K)   Model    Days-To-Maturity               Subtotal
                           < 60     60-180    ≥ 180
< 0.975           BS       -0.12    -0.46     -0.47       -0.44
                  Heston   -0.12    -0.46     -0.46       -0.44
                  Zhu      -0.04    -0.40     -0.40       -0.31
                  MF Zhu   -0.07    -0.21     -0.21       -0.14
0.975 - 1.025     BS       -0.27    -1.03     -0.49       -0.46
                  Heston   -0.26    -1.01     -0.37       -0.44
                  Zhu      -0.14    -0.08      0.64       -0.16
                  MF Zhu   -0.09     0.05      0.39        0.00
> 1.025           BS       -0.15    -0.60     -1.67       -1.19
                  Heston   -0.15    -0.57     -1.37       -1.18
                  Zhu      -0.07    -0.38     -0.71       -0.44
                  MF Zhu   -0.08    -0.48     -0.39       -0.41
Subtotal          BS       -0.32    -0.64     -0.79
                  Heston   -0.31    -0.63     -0.78
                  Zhu      -0.19    -0.30     -0.35
                  MF Zhu   -0.05    -0.08     -0.22

6.3 Trading Strategies

We compare the different models by implementing a basic trading strategy. That is, for each of the models, let $f_m(\Theta, \boldsymbol{V}_{t_i}, X_{t_i})$ be the price resulting from using model m, where m indexes the four models considered (Black-Scholes, Heston, Zhu and Multi-Factor Zhu).

Formally, let $\{Z^m_{t_i}\}_{i=1}^{\tilde{N}}$ be a sequence of buying (selling) signals for a group of options obtained from using model m. That is, for each time $t_i$, the value is either one, negative one or zero if we buy, sell or hold, respectively. Moreover, recall that the set $X_{t_i}$ contains the option-specific properties of the option observed at time $t_i$. Now, define $t^X_i$ as the subsequence of $t_i$ containing the times at which an action corresponding to an option with characteristics X occurs. Define y and $\boldsymbol{V}$ analogously. First, we compare the prices of the model with the observed prices and set the signals accordingly,

$$Z^m_{t_i} = \begin{cases} -1, & \text{if } y_{t^{X_{t_i}}_{i-j}} > f_m\big(\Theta, \boldsymbol{V}_{t^{X_{t_i}}_{i-j}}, X_{t_i}\big), \quad j = 0, \ldots, n-1, \\ \phantom{-}1, & \text{if } y_{t^{X_{t_i}}_{i-j}} < f_m\big(\Theta, \boldsymbol{V}_{t^{X_{t_i}}_{i-j}}, X_{t_i}\big), \quad j = 0, \ldots, n-1, \\ \phantom{-}0, & \text{otherwise.} \end{cases}$$

That is, if the last n observed prices of the option class observed at time $t_i$ are all above (below) the corresponding model prices, we set the signal at time $t_i$ to sell (buy). Here we use the convention that, if the contract observed at time $t_i$ is within the first n observations of its class, the signal is set to zero.

The position taken is then closed after h seconds. Let $\tilde{t}_i$ denote the time at which we close the position taken at time $t_i$. That is,
$$\tilde{t}_i := \max\big\{ t^{X_{t_i}}_{j} : t^{X_{t_i}}_{j} \leq t_i + h \big\}.$$

Observe that we need to take the maximum since we are interested in the last observation that occurs before or at time $t_i + h$. Furthermore, this implies that, for the last occurrence of a particular option class in our data set, the position taken is closed immediately. The total profit stemming from the strategy over the considered group of options is

$$\Pi(m;\, n, h) = \sum_{i=1}^{\tilde{N}} \Big[ -Z^m_{t_i} \cdot \big( y_{t_i} - f_m(\Theta, \boldsymbol{V}_{t_i}, X_{t_i}) \big) - Z^m_{\tilde{t}_i} \cdot \big( y_{\tilde{t}_i} - f_m(\Theta, \boldsymbol{V}_{\tilde{t}_i}, X_{t_i}) \big) \Big],$$

where we ignored the discount factor, which is justified by the fact that we are using tick data.
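A hedged sketch of the signal-generation part of this strategy is given below; it operates on a single option class, the series names are hypothetical, and the position-closing and profit-aggregation steps are left out.

```python
import numpy as np
import pandas as pd

def signals(observed: pd.Series, model_price: pd.Series, n: int = 5) -> pd.Series:
    """Sell (-1) if the last n observed prices all exceed the model prices,
    buy (+1) if they are all below, hold (0) otherwise (including the first n-1 quotes)."""
    diff = np.sign(observed.to_numpy() - model_price.to_numpy())
    z = np.zeros(len(observed), dtype=int)
    for i in range(n - 1, len(observed)):
        window = diff[i - n + 1 : i + 1]
        if np.all(window > 0):
            z[i] = -1   # consistently observed above the model price: sell
        elif np.all(window < 0):
            z[i] = 1    # consistently observed below the model price: buy
    return pd.Series(z, index=observed.index)
```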

For a correctly specified model, a profit close to zero is desired. Another way to see that the best performing model does not necessarily have the highest profit is that the profit is only determined by the sign of the pricing error and not by its magnitude.

From Table 6.4, we observe that for all cases considered, zero is included in the confidence intervals. This might imply that our model, as well as the considered benchmark models, does not consistently over- or undervalue the options. Note that this does not rule out consistent over- or undervaluation completely, since the above described rewards and penalties could still cancel each other. Multiple stocks would have to be investigated to account for this; we refrain from doing so due to limited computational capacity. Moreover, we assumed that the mispricing of options in the different moneyness and maturity categories carries equal weight. In practice, this might not be desired. The main purpose of this analysis, however, is to complement the results found in the previous section; a thorough analysis through trading strategies is beyond the scope of this paper.

Table 6.4: Profit Obtained through Trading per Model.

The mean profit is reported first, followed by a 95% confidence interval.

h      n   Black-Scholes          Heston                 Zhu                    Multi-Factor Zhu
0.01   1    0.00 (-0.02, 0.01)     0.00 (-0.02, 0.01)    -0.00 (-0.02, 0.01)    -0.00 (-0.03, 0.00)
0.01   5   -0.00 (-0.01, 0.00)    -0.00 (-0.01, 0.00)    -0.00 (-0.00, 0.00)    -0.00 (-0.01, 0.00)
10     1    0.03 (-0.05, 0.25)     0.03 (-0.05, 0.25)     0.01 (-0.05, 0.24)     0.00 (-0.11, 0.15)
10     5    0.01 (-0.01, 0.04)     0.01 (-0.01, 0.04)    -0.01 (-0.01, 0.03)    -0.00 (-0.03, 0.03)

7 Conclusion

In this paper, we proposed an analytically tractable model that yields closed-form expressions for the price of European call options. The specified dynamics are simple yet robust and are therefore attractive from both a practical and a theoretical point of view. The model builds on the existing models proposed by Stein and Stein [41] and Schöbel and Zhu [37] and can be seen as an alternative to the multi-factor extension of the popular model proposed by Heston [24].

On the one hand, the advantage of our model is that the dynamics of both the volatility and the variance are kept flexible. In contrast to the Heston model, both volatility and variance exhibit mean reversion. On the other hand, the benefits of the square-root specification of Heston are obvious; squared volatilities never become negative. However, negative volatilities are, in the setting of Brownian motions, allowed and simply imply that upward movements of the random component of the volatility factors are accompanied by a downward movement of the underlying. As expected, squared volatilities do not become negative in our model either.

Moreover, the proposed model extends the single-factor model of Schöbel and Zhu [37] in the following ways. Correlation between the volatility factor and the underlying is held constant in the Zhu model. In our model, the factor weights, and thereby the correlation between the variance and the underlying, are allowed to vary over time. Moreover, a multi-factor model is, in contrast to a single-factor model, able to generate both steep and flat volatility skews for a given level of volatility. The advantages of the single-factor models are clear; fewer parameters reduce the calibration time and the likelihood of overfitting.

In an empirical analysis, we compared the fit of our proposed model with the standard benchmark in the literature, the Black-Scholes model, as well as the two models discussed above. Based on a principal component analysis, we presented the two-factor variant of our model. In both the out-of-sample analysis and a comparison through a simple trading strategy, the proposed model outperforms current models, with improvements in both the maturity and the moneyness dimension. However, while the mean pricing error was smallest for our proposed model, the analysis showed a dependence on a rich calibration set.


Bibliography

[1] Ajay, R. Dravid (1991). Effects of Bid-Ask Spreads and Price Discreteness on Stock Returns. Rodney L. White Center for Financial Research Working Papers 06-91, Wharton School Rodney L. White Center for Financial Research.

[2] Andersen, Torben G., Hyung-Jin Chung, and Bent E. Sørensen (1999). Efficient method of moments estimation of a stochastic volatility model: A monte carlo study. Journal of Econometrics 91 (1), 61 – 87.

[3] Aït-Sahalia, Yacine, Jianqing Fan, and Yingying Li (2013). The leverage effect puzzle: Disentangling sources of bias at high frequency. Journal of Financial Economics 109 (1), 224 – 249.

[4] Aït-Sahalia, Yacine and Dacheng Xiu (2019). Principal component analysis of high-frequency data. Journal of the American Statistical Association 114 (525), 287–303.

[5] Bakshi, Gurdip, Charles Cao, and Zhiwu Chen (1997). Empirical performance of alternative option pricing models. The Journal of Finance 52 (5), 2003–2049.

[6] Bates, David S (1997, 01). Post-’87 crash fears in s&p 500 futures options. Working Paper 5894, National Bureau of Economic Research.

[7] Bayer, Christian, Peter Friz, and Jim Gatheral (2016). Pricing under rough volatility. Quantitative Finance 16 (6), 887–904.

[8] Bentata, Amel (2008, 02). A note about conditional Ornstein-Uhlenbeck processes.

[9] Cerny, Ales (2006, 02). Introduction to fast Fourier transform in finance. SSRN Electronic Journal.

[10] Chakrabarti, B. B. and Arijit Santra (2017, 03). Comparison of black scholes and heston models for pricing index options. SSRN Electronic Journal.

[11] Christoffersen, Peter, Steven Heston, and Kris Jacobs (2009, 12). The shape and term structure of the index option smirk: Why multifactor stochastic volatility models work so well. Management Science 55, 1914–1932.

[12] Cortazar, Gonzalo, Matias Lopez, and Lorenzo Naranjo (2017). A multifactor stochastic volatility model of commodity prices. Energy Economics 67, 182–201.

[13] Da Fonseca, José, Martino Grasselli, and Claudio Tebaldi (2008). A multifactor volatility Heston model. Quantitative Finance 8 (6), 591–604.


[15] Edwards, Franklin R. (1999). Hedge funds and the collapse of long-term capital management. The Journal of Economic Perspectives 13 (2), 189–210.

[16] Eraker, Bjørn (2004). Do stock prices and volatility jump? reconciling evidence from spot and option prices. The Journal of Finance 59 (3), 1367–1403.

[17] Eraker, Bjørn, Michael Johannes, and Nicholas Polson (2003). The impact of jumps in volatility and returns. The Journal of Finance 58 (3), 1269–1300.

[18] Euch, Omar, Jim Gatheral, and Mathieu Rosenbaum (2018, 01). Roughening heston. SSRN Electronic Journal, 84–89.

[19] Frühwirth-Schnatter, Sylvia and Leopold Sögner (2003). Bayesian estimation of the heston stochastic volatility model. In Ulrike Leopold-Wildburger, Franz Rendl, and Gerhard Wäscher (Eds.), Operations Research Proceedings 2002, Berlin, Heidelberg, pp. 480–485. Springer Berlin Heidelberg.

[20] Gatheral, Jim, Thibault Jaisson, and Mathieu Rosenbaum (2018). Volatility is rough. Quantitative Finance 18 (6), 933–949.

[21] Gil-Pelaez, J. (1951, 12). Note on the inversion theorem. Biometrika 38 (3-4), 481–482.

[22] Göncü, Ahmet and Giray Ökten (2014). Efficient simulation of a multi-factor stochastic volatility model. Journal of Computational and Applied Mathematics 259, 329–335.

[23] Hagan, Patrick, Deep Kumar, Andrew Lesniewski, and Diana Woodward (2002, 01). Managing smile risk. Wilmott Magazine 1, 84–108.

[24] Heston, Steven L. (1993). A closed-form solution for options with stochastic volatility with applications to bond and currency options. The Review of Financial Studies 6 (2), 327–343.

[25] Hui, Eddie C.M. and Ka Kwan Kevin Chan (2019). Alternative trading strategies to beat "buy-and-hold". Physica A: Statistical Mechanics and its Applications 534, 120800.

[26] Jaber, Eduardo Abi (2019). Lifting the Heston model. Quantitative Finance 19 (12), 1995–2013.


[28] Kou, S. G. (2002). A jump-diffusion model for option pricing. Management Sci-ence 48 (8), 1086–1101.

[29] Merville, Larry J. and Dan R. Pieptea (1989). Stock-price volatility, mean-reverting diffusion, and noise. Journal of Financial Economics 24 (1), 193–214.

[30] Lorig, Matthew, Stefano Pagliarani, and Andrea Pascucci (2017). Explicit implied volatilities for multifactor local-stochastic volatility models. Mathematical Finance 27 (3), 926–960.

[31] Merton, Robert C. (1976). Option pricing when underlying stock returns are discon-tinuous. Journal of Financial Economics 3 (1), 125 – 144.

[32] Meyer, Renate, David A. Fournier, and Andreas Berg (2003). Stochastic volatility: Bayesian computation using automatic differentiation and the extended kalman filter. The Econometrics Journal 6 (2), 408–420.

[33] Pan, Jun (2002). The jump-risk premia implicit in options: evidence from an integrated time-series study. Journal of Financial Economics 63 (1), 3 – 50.

[34] Renault, Eric (2009). Moment–Based Estimation of Stochastic Volatility Models, pp. 269–311. Berlin, Heidelberg: Springer Berlin Heidelberg.

[35] Rubinstein, Mark (1985). Nonparametric tests of alternative option pricing models using all reported trades and quotes on the 30 most active CBOE option classes from August 23, 1976 through August 31, 1978. The Journal of Finance 40 (2), 455–480.

[36] Schmelzle, Martin (2010, 04). Option pricing formulae using Fourier transform: Theory and application. Working paper.

[37] Schöbel, Rainer and Jianwei Zhu (1999, 04). Stochastic Volatility With an Orn-stein–Uhlenbeck Process: An Extension. Review of Finance 3 (1), 23–46.

[38] Scott, Louis O. (1997). Pricing stock options in a jump-diffusion model with stochastic volatility and interest rates: Applications of fourier inversion methods. Mathematical Finance 7 (4), 413–426.

[39] Shephard, Neil G. (1996). Statistical aspects of arch and stochastic volatility. Mono-graphs on Statistics and Applied Probability 65, 1–68.

[40] Sherlock, Chris, Paul Fearnhead, and Gareth Roberts (2010, 11). The random walk Metropolis: Linking theory and practice through a case study. Statistical Science 25.

[41] Stein, Elias M. and Jeremy C. Stein (1991). Stock price distributions with stochastic volatility: An analytic approach. The Review of Financial Studies 4 (4), 727–752.

[42] Stein, Jeremy (1989). Overreactions in the options market. Journal of Finance XLIV (4), 1011–1023.

Appendix A: Algebra

A.1 Change of Measure

Consider the probability space (Ω, ℱ, Q), where Q is a risk-neutral measure. By standard no-arbitrage arguments similar to Heston [24] and Stein and Stein [41], there exist n-dimensional Wiener processes $\boldsymbol{w}_x^Q(t)$, defined on a filtered probability space with sub-σ-algebra $\bar{\mathcal{F}}_x$, the filtration generated by $\boldsymbol{w}_x^Q(t)$, and $\boldsymbol{w}_v^Q(t)$, defined on a filtered probability space with sub-σ-algebra $\bar{\mathcal{F}}_v$, the filtration generated by $\boldsymbol{w}_v^Q(t)$, such that

$$dx(t) = r\,dt + \boldsymbol{v}(t)^\top d\boldsymbol{w}_x^Q(t),$$
$$dv_i(t) = \big( \delta_i(\theta_i - v_i(t)) - \lambda_i(x(t), v_i(t), t)\,\sigma_i \big)\,dt + \sigma_i\, dw_{v_i}^Q(t).$$

By the Girsanov Theorem, we can choose a new probability measure ˜𝑄 on ℱ such that

$$d\boldsymbol{w}_x^{\tilde{Q}}(t) = \boldsymbol{\psi}(t)\,dt + d\boldsymbol{w}_x^Q(t), \qquad d\boldsymbol{w}_v^{\tilde{Q}}(t) = \boldsymbol{\psi}(t)\,dt + d\boldsymbol{w}_v^Q(t),$$

for some 𝑛-dimensional adapted process 𝝍. If we let 𝝍(𝑡) = 𝜆(𝑥(𝑡), 𝒗(𝑡), 𝑡) then we are only shifting the absolute price of market risk and hence the new measure ˜𝑄 is also risk-neutral. Then the dynamics under the measure ˜𝑄 are

$$dx(t) = \big( r + \boldsymbol{\lambda}(x(t), \boldsymbol{v}(t), t) \cdot \boldsymbol{v}(t) \big)\,dt + \boldsymbol{v}(t)^\top d\boldsymbol{w}_x^{\tilde{Q}}(t),$$
$$dv_i(t) = \delta_i(\theta_i - v_i(t))\,dt + \sigma_i\, dw_{v_i}^{\tilde{Q}}(t).$$

Now following Heston [24] in assuming the risk premium is proportional to the volatility, that is,𝜆𝑖(𝑥(𝑡), 𝑣𝑖(𝑡), 𝑡) = 𝜆𝑖𝑣𝑖(𝑡) with 𝜆𝑖 ∈ R, we get

$$dx(t) = \left( r + \sum_{i=1}^{n} \lambda_i v_i^2(t) \right) dt + \boldsymbol{v}(t)^\top d\boldsymbol{w}_x^{\tilde{Q}}(t).$$

Now choosing 𝝀 appropriately yields equation (1) in Schöbel and Zhu [37].

A.2 Characteristic Functions

Starting with $f_1$,
$$f_1(\phi) = \mathbb{E}^Q_t\!\left[ e^{-r(T-t)}\, \frac{S(T)}{S(t)}\, e^{i\phi x(T)} \right].$$

Using $x(T) = \ln(S(T))$, this reduces to
$$f_1(\phi) = \mathbb{E}^Q_t\!\left[ \exp\big( -r(T-t) + x(T) - x(t) + i\phi\, x(T) \big) \right].$$

Now from the dynamics of the underlying, as described in equation (4.2), we obtain,

$$x(T) = x(t) + \int_t^T r\,ds + \sum_{j=1}^{n} \int_t^T v_j(s)\, dw_{x_j}(s).$$

Substituting this in our expression for 𝑓1, we get,

$$\begin{aligned}
f_1(\phi) &= \mathbb{E}^Q_t\!\left[ \exp\!\left\{ -r(T-t) - x(t) + (1+i\phi)\left( x(t) + \int_t^T r\,ds + \sum_{j=1}^{n} \int_t^T v_j(s)\, dw_{x_j}(s) \right) \right\} \right] \\
&= \exp\!\big\{ i\phi\big( r(T-t) + x(t) \big) \big\}\; \mathbb{E}^Q_t\!\left[ \exp\!\left\{ (1+i\phi) \sum_{j=1}^{n} \int_t^T v_j(s)\, dw_{x_j}(s) \right\} \right].
\end{aligned}$$

Now note that, by standard decomposition arguments (see e.g. Schöbel and Zhu [37]), for all j, $dw_{x_j}(t) = \rho_j\, dw_{v_j}(t) + \sqrt{1 - \rho_j^2}\, d\tilde{w}_{v_j}(t)$ for some n-dimensional Wiener process $\tilde{\boldsymbol{w}}_v(t)$ such that $d\boldsymbol{w}_v(t)\, d\tilde{\boldsymbol{w}}_v(t)^\top = 0_{n,n}$ and $d\boldsymbol{w}_x(t)\, d\tilde{\boldsymbol{w}}_v(t)^\top = \operatorname{diag}\big( \sqrt{1-\rho_1^2}, \ldots, \sqrt{1-\rho_n^2} \big)\, dt$. Lastly, the components of the newly created Wiener process are uncorrelated for all $i \neq j$, that is, $d\tilde{\boldsymbol{w}}_v(t)\, d\tilde{\boldsymbol{w}}_v(t)^\top = \boldsymbol{I}_n\, dt$. Consequently, the characteristic function reduces to
$$f_1(\phi) = \exp\!\big\{ i\phi\big( r(T-t) + x(t) \big) \big\}\; \mathbb{E}^Q_t\!\left[ \exp\!\left\{ (1+i\phi) \sum_{j=1}^{n} \left( \rho_j \int_t^T v_j(s)\, dw_{v_j}(s) + \sqrt{1-\rho_j^2} \int_t^T v_j(s)\, d\tilde{w}_{v_j}(s) \right) \right\} \right].$$

Next, consider the stochastic process, 𝑦𝑗, defined by

Using Itô's lemma, we get
$$de^{y_j(t)} = \left( \frac{1}{2}(1+i\phi)^2(1-\rho_j^2)\, v_j^2(t) - \frac{\big( (1+i\phi)\sqrt{1-\rho_j^2}\, v_j(t) \big)^2}{2} \right) e^{y_j(t)}\,dt + (1+i\phi)\sqrt{1-\rho_j^2}\, v_j(t)\, e^{y_j(t)}\, d\tilde{w}_{v_j}(t),$$
such that
$$\mathbb{E}^Q_t\big[ e^{y_j(T)} \big] = e^{y_j(t)} = 1.$$

Using that 𝑑 ˜𝑤𝑣𝑗 is uncorrelated with 𝑑 ˜𝑤𝑣𝑖 for 𝑗 ≠ 𝑖 and uncorrelated with 𝑑𝑤𝑣𝑗 for all 𝑗, we can now reduce the characteristic function to

$$f_1(\phi) = \exp\!\big\{ i\phi\big( r(T-t) + x(t) \big) \big\}\; \mathbb{E}^Q_t\!\left[ \exp\!\left\{ (1+i\phi) \sum_{j=1}^{n} \rho_j \int_t^T v_j(s)\, dw_{v_j}(s) + \frac{1}{2}(1+i\phi)^2 \sum_{j=1}^{n} (1-\rho_j^2) \int_t^T v_j^2(s)\, ds \right\} \right]. \tag{A.1}$$

Substituting this in equation (A.1) and rewriting,
$$\begin{aligned}
f_1(\phi) ={}& \exp\!\big\{ i\phi\big( r(T-t) + x(t) \big) \big\}\; \mathbb{E}^Q_t\!\Bigg[ \exp\!\Bigg\{ -(1+i\phi)\sum_{j=1}^{n} \frac{\rho_j\delta_j\theta_j}{\sigma_j}\int_t^T v_j(s)\,ds \\
&\quad + \frac{1}{2}(1+i\phi)\sum_{j=1}^{n}\left( (1+i\phi)(1-\rho_j^2) + \frac{2\delta_j\rho_j}{\sigma_j} + 2\lambda_j\rho_j \right)\int_t^T v_j^2(s)\,ds \\
&\quad - (1+i\phi)\sum_{j=1}^{n}\frac{\rho_j}{2\sigma_j}\big(v_j^2(t) + \sigma_j^2(T-t)\big) + (1+i\phi)\sum_{j=1}^{n}\frac{\rho_j}{2\sigma_j}\, v_j^2(T) \Bigg\} \Bigg] \\
={}& \exp\!\left\{ i\phi\big( r(T-t) + x(t) \big) - (1+i\phi)\sum_{j=1}^{n}\frac{\rho_j}{2\sigma_j}\big(v_j^2(t) + \sigma_j^2(T-t)\big) \right\} \\
&\quad \times \mathbb{E}^Q_t\!\left[ \exp\!\left\{ \sum_{j=1}^{n} a_j \int_t^T v_j^2(s)\,ds - \sum_{j=1}^{n} b_j \int_t^T v_j(s)\,ds + \sum_{j=1}^{n} c_j\, v_j^2(T) \right\} \right],
\end{aligned}$$
with
$$a_j = \frac{1}{2}(1+i\phi)\left( (1+i\phi)(1-\rho_j^2) + \frac{2\delta_j\rho_j}{\sigma_j} + 2\lambda_j\rho_j \right), \qquad b_j = \frac{(1+i\phi)\rho_j\delta_j\theta_j}{\sigma_j}, \qquad c_j = \frac{(1+i\phi)\rho_j}{2\sigma_j};$$
this is equation (4.6).

Next, we provide the details to acquire equation (4.8). Following this derivation, we solve this PDE with boundary conditions (4.9). The Feynman-Kac theorem states,

Theorem 1 (Feynman-Kac). Consider the functions $\boldsymbol{\mu} : [0, T] \times \mathbb{R}^n \to \mathbb{R}^n$, $\boldsymbol{\sigma} : [0, T] \times \mathbb{R}^n \to \mathbb{R}^{n \times d}$, $\Phi : \mathbb{R}^n \to \mathbb{R}$ and a discount function $r : [0, T] \times \mathbb{R}^n \to \mathbb{R}$. The solution, $p : [0, T] \times \mathbb{R}^n \to \mathbb{R}$, of the partial differential equation
$$\frac{\partial p}{\partial t}(t, \boldsymbol{v}) + \sum_{i=1}^{n} \mu_i(t, \boldsymbol{v})\, \frac{\partial p}{\partial v_i}(t, \boldsymbol{v}) + \frac{1}{2} \sum_{i=1}^{n}\sum_{k=1}^{n} \big(\boldsymbol{\sigma}\boldsymbol{\sigma}^\top\big)_{ik}(t, \boldsymbol{v})\, \frac{\partial^2 p}{\partial v_i \partial v_k}(t, \boldsymbol{v}) - r(t, \boldsymbol{v})\, p(t, \boldsymbol{v}) = 0, \tag{A.2}$$
with terminal condition $p(T, \boldsymbol{v}) = \Phi(\boldsymbol{v})$, is equivalent to the solution of
$$p(t, \boldsymbol{v}) = \mathbb{E}_t\!\left[ e^{-\int_t^T r(s,\, \boldsymbol{v}_s)\, ds}\, \Phi(\boldsymbol{v}_T) \right],$$
where $\boldsymbol{v}$ satisfies
$$d\boldsymbol{v}_s = \boldsymbol{\mu}(s, \boldsymbol{v}_s)\,ds + \boldsymbol{\sigma}(s, \boldsymbol{v}_s)\,dW_s, \qquad \boldsymbol{v}_t = \boldsymbol{v}.$$

In our case, the dynamics of 𝒗 are specified in equation (4.3). The function 𝑟 and the boundary condition can be obtained from equation (4.7). Notice that, in our case, all cross-terms in the second sum of equation (A.2) are equal to zero.

In order to solve equation (4.8) we plug the hypothesized form into equation (4.8),
$$\begin{aligned}
&\frac{1}{2}\sum_{j=1}^{n}\sigma_j^2\, p_1(t,\boldsymbol{v})\Big[ 2B_j(t) + 4B_j^2(t)\,v_j^2(t) + 4B_j(t)D_j(t)\,v_j(t) + D_j^2(t) \Big] \\
&\quad + \sum_{j=1}^{n}\big( \delta_j(\theta_j - v_j(t)) - \sigma_j\lambda_j v_j(t) \big)\, p_1(t,\boldsymbol{v})\,\big( 2B_j(t)\,v_j(t) + D_j(t) \big) \\
&\quad + p_1(t,\boldsymbol{v})\left( \sum_{j=1}^{n}\left( \frac{dB_j}{dt}(t)\,v_j^2(t) + \frac{dD_j}{dt}(t)\,v_j(t) \right) + \frac{dC}{dt}(t) \right) + p_1(t,\boldsymbol{v})\sum_{j=1}^{n}\big( a_j v_j^2(t) - b_j v_j(t) \big) = 0.
\end{aligned}$$

Collecting terms then gives,

$$2\sigma_j^2 B_j^2(t) - 2B_j(t)(\delta_j + \sigma_j\lambda_j) + \frac{dB_j}{dt}(t) + a_j = 0, \qquad \forall j,\; t \in [0, T], \tag{A.4}$$
$$2\sigma_j^2 B_j(t)D_j(t) + 2\delta_j\theta_j B_j(t) - D_j(t)(\delta_j + \sigma_j\lambda_j) + \frac{dD_j}{dt}(t) - b_j = 0, \qquad \forall j,\; t \in [0, T], \tag{A.5}$$
$$\sum_{j=1}^{n}\left( \sigma_j^2 B_j(t) + \frac{1}{2}\sigma_j^2 D_j^2(t) + \delta_j\theta_j D_j(t) \right) + \frac{dC}{dt}(t) = 0, \qquad t \in [0, T], \tag{A.6}$$
and boundary conditions $B_j(T) = c_j$, $D_j(T) = 0$ and $C(T) = 0$, for all j.
