
The COGARCH Model: Stochastic Properties,

Applications and Estimation Procedures

Mark van den Hoven

December 3, 2020

Master thesis Stochastic and Financial Mathematics
Supervisor: prof. dr. Peter Spreij

Korteweg-de Vries Institute for Mathematics
Faculty of Sciences


Abstract

It is a realistic assumption in the financial industry that volatility is stochastic and has jumps. In econometrics, GARCH stochastic volatility models are commonly used to describe such markets in which volatility can change. In my thesis, a continuous-time analogue of such models, the so-called 'COGARCH' model, will be thoroughly examined. The driving component in this model is a Lévy process. We will focus on some important properties and theorems about such processes and then proceed by stating some properties of the model. Our aim here is to examine the probabilistic properties which are important building blocks for later studies in this thesis. An interesting application of the model is option pricing. We will compare call option prices and their corresponding Black-Scholes implied volatilities with the ones we obtain in the Heston model. To do so, we will use a Variance Gamma process as the driving Lévy process. Finally, we will compare two estimation procedures for estimating the parameters in the COGARCH model and investigate their effectiveness.

Title: The COGARCH Model: Stochastic Properties, Applications and Estimation Procedures
Author: Mark van den Hoven, m.vandenhoven@student.uva.nl, 10533133
Supervisor: prof. dr. Peter Spreij
Second grader: dr. Asma Khedher
End date: December 3, 2020

Korteweg-de Vries Institute for Mathematics
University of Amsterdam
Science Park 904, 1098 XH Amsterdam
http://www.kdvi.uva.nl


Contents

1 Introduction
2 Lévy processes
   2.1 Lévy processes and infinitely divisible distributions
      2.1.1 Some examples of Lévy processes
   2.2 The Lévy-Itô decomposition
   2.3 The Strong Markov Property
   2.4 The Compensation Formula
3 The COGARCH model
   3.1 The GARCH(1,1) model
   3.2 Introduction of the COGARCH model
   3.3 Properties of the model
   3.4 Second order properties
      3.4.1 The volatility process
      3.4.2 The price process
4 COGARCH in option pricing
   4.1 Variance-Gamma COGARCH
      4.1.1 Simulation studies
5 Statistical Estimation
   5.1 Moments estimation
      5.1.1 Moment estimation algorithm
      5.1.2 Checking conditions
      5.1.3 Simulation results
      5.1.4 Moment estimation on real data
   5.2 Pseudo Maximum Likelihood
      5.2.1 The algorithm
      5.2.2 Simulation results
   5.3 Comparison with moment estimation
6 Comparison with the Heston model
   6.1 Introduction of the Heston model
   6.2 Comparing dynamics
   6.3 Call option prices in the Heston model
Popular Summary

Preface

I would like to extend my gratitude to my supervisor Peter Spreij for being such a great mentor during the time spent writing this thesis. Without him this thesis would not be what it is today. His positive attitude made our collaboration a joy and kept me motivated. He was flexible and consistent with our weekly meetings, which made it much easier to obtain the right feedback at the right times.

I would also like to thank the second reader Asma Khedher for being an inspiring teacher in the course 'Interest Rate Models', which contributed a lot to my motivation for writing a thesis on stochastic volatility models. I appreciate her taking the time to read the thesis and attend the defense.

Finally, I would like to thank my family, friends and girlfriend for their great support and their contribution to my well-being during the challenge of writing a thesis on a mathematical subject.


1 Introduction

In 1982, Engle [4] introduced ARCH (AutoRegressive Conditional Heteroscedasticity) models, in which the conditional variance of the data depends on errors made in the past. This is a desirable feature, for example in weather forecasting, where it is plausible that the recent past gives information about the variance of future predictions. These models were soon generalised to GARCH (Generalised ARCH) models by Bollerslev [5]. Here, the variance of future predictions depends not only on past errors, but also on past conditional variances. Since GARCH models turned out to fit historical data of market volatilities quite well, they have become widely popular in stochastic volatility modelling. However, since high-frequency data has become more widely available, and for option pricing purposes, it is natural to consider continuous-time limits of such discrete-time volatility models. Klüppelberg, Lindner and Maller [1] introduced the continuous-time limit of the GARCH(1,1) model in such a way that the essential features of the model are kept and generalised. This continuous-time limit is referred to as the COGARCH (Continuous GARCH) model, and shall be the sole focus throughout my thesis.

My thesis is organised as follows. In Chapter 2 we start with some basic definitions and theorems on Lévy processes. These processes will be the driving factor in the model and therefore it is important to have some foundational knowledge of them. We will discuss the Lévy-Khintchine formula, the Lévy-Itô decomposition, the strong Markov property and the Compensation Formula.

We then introduce the COGARCH model in Chapter 3 and state and prove some statistical properties of the model, such as second order properties of the variance process and the price process. We also show that the variance process is a time homogeneous Markov process and we give conditions under which this process converges in distribution. In Chapter 4 we discuss an important application of the COGARCH model, namely option pricing. We introduce the financial market in which we will model the stock price process and we assume the existence of a martingale measure Q under which the discounted stock price process is a martingale. We then apply Monte Carlo simulation to obtain call option prices and we will show the corresponding implied volatility surface. We proceed in Chapter 5 by discussing two estimation procedures for estimating the parameters in the model. In the first method we estimate the parameters by the moments and correlations of the squared returns of the price process. The second method is based on a Pseudo Maximum Likelihood estimation. We will finish this chapter with a comparison of these methods.

Finally, in Chapter 6, we introduce the Heston stochastic volatility model and compare its dynamics with the COGARCH model. The affine structure of this model allows us to obtain an analytical expression for the call option prices. With the help of a Fast Fourier Transform we compute these prices and compare them to the prices obtained in Chapter 4. We also show the corresponding implied volatility surface, and compare it with the one obtained in the COGARCH model.


2 Lévy processes

The driving component in the model will be a so-called Lévy process. In this chapter we define Lévy processes and state some important theorems, such as the Lévy-Khintchine formula. We also give some examples of stochastic processes that satisfy the Lévy criteria. In the second section, the Lévy-Itô decomposition is given, which states that a Lévy process can be decomposed into three different kinds of Lévy processes. In the third section it is shown that Lévy processes exhibit the strong Markov property. Finally, we state the Compensation Formula, a result on Lévy processes that turns out to be quite useful in further calculations.

2.1 Lévy processes and infinitely divisible distributions

We start this section by defining a Lévy process.

Definition 2.1 (Lévy process). A process $X = \{X_t : t \ge 0\}$ defined on a probability space $(\Omega, \mathcal{F}, P)$ is said to be a Lévy process if it possesses the following properties:

(i) The paths of $X$ are $P$-almost surely càdlàg (i.e. they are right continuous with left limits).

(ii) $P(X_0 = 0) = 1$.

(iii) For $0 \le s \le t$, $X_t - X_s$ is equal in distribution to $X_{t-s}$.

(iv) For $0 \le s \le t$, $X_t - X_s$ is independent of $\{X_u : u \le s\}$.

Properties (iii) and (iv) show us that the process $X$ has so-called stationary and independent increments, respectively. Lévy processes are intimately related to infinitely divisible distributions. Below, we give the definition of an infinitely divisible distribution.

Definition 2.2. A real valued random variable $X$ is said to have an infinitely divisible distribution if for each $n \in \mathbb{N}$ there exists a sequence of i.i.d. random variables $X_{1,n}, \ldots, X_{n,n}$ such that
\[
X \overset{D}{=} X_{1,n} + \cdots + X_{n,n}.
\]

Suppose $X$ has characteristic exponent $\Psi(u) := -\log E(e^{iuX})$. Here we take the distinguished logarithm (see M. Finkelstein et al. [20] for a more detailed description of this logarithm), which always exists in this case. We see that $X$ has an infinitely divisible distribution if for each $n \in \mathbb{N}$ there exists a characteristic exponent of some probability distribution, $\Psi_n$ say, such that $\Psi(u) = n\Psi_n(u)$ for all $u \in \mathbb{R}$.


Theorem 2.3 (Lévy-Khintchine formula). A real valued random variable has an infinitely divisible distribution with characteristic exponent $\Psi(\theta)$ if and only if there exists a triplet $(a, \sigma, \Pi)$, where $a \in \mathbb{R}$, $\sigma \ge 0$ and $\Pi$ is a measure concentrated on $\mathbb{R} \setminus \{0\}$ satisfying $\int_{\mathbb{R}} (1 \wedge x^2)\, \Pi(dx) < \infty$, such that
\[
\Psi(\theta) = ia\theta + \tfrac{1}{2}\sigma^2\theta^2 + \int_{\mathbb{R}} \bigl(1 - e^{i\theta x} + i\theta x \mathbf{1}_{(|x|<1)}\bigr)\, \Pi(dx)
\]
for every $\theta \in \mathbb{R}$. The measure $\Pi$ is called the Lévy (characteristic) measure.

Let us now elaborate on the relation between processes with infinitely divisible distributions and Lévy processes. For a Lévy process $X$ we can write
\[
X_t = X_{t/n} + (X_{2t/n} - X_{t/n}) + \cdots + (X_t - X_{(n-1)t/n}), \quad \text{for any } n \in \mathbb{N}, \tag{2.1}
\]
and by the fact that $X$ has stationary and independent increments it follows that $X_t$ has an infinitely divisible distribution. Conversely, it can be shown that given an infinitely divisible distribution one can construct a Lévy process $(X_t)_{t\ge0}$ such that $X_1$ has that distribution. This is a result of Theorem 2.4 below. Before stating this theorem, let us define, for $t \ge 0$ and $\theta \in \mathbb{R}$, $\Psi_t(\theta) := -\log E[e^{i\theta X_t}]$. For any positive integers $m, n$ we obtain
\[
m\Psi_1(\theta) = \Psi_m(\theta) = n\Psi_{m/n}(\theta),
\]
which follows from applying (2.1) twice. From this we obtain that
\[
\Psi_t(\theta) = t\Psi_1(\theta) \tag{2.2}
\]
for all rational $t \ge 0$. Now let $t \ge 0$ and let $\{q_n : n \in \mathbb{N}\}$ be a sequence of rational numbers such that $q_n \downarrow t$. Then
\begin{align*}
\lim_{q_n \downarrow t} \Psi_{q_n}(\theta) &= \lim_{q_n \downarrow t} \left( -\log E[e^{i\theta X_{q_n}}] \right) \tag{2.3} \\
&= -\log E\Bigl[ \lim_{q_n \downarrow t} e^{i\theta X_{q_n}} \Bigr] \tag{2.4} \\
&= -\log E[e^{i\theta X_t}] \\
&= \Psi_t(\theta),
\end{align*}
where (2.3) follows from dominated convergence, using that $|e^{i\theta X_{q_n}}| \le 1$ for all $q_n \in \mathbb{Q}$, and (2.4) follows from right continuity of $X$. Hence (2.2) holds for all $t \ge 0$.

Define $\Psi(\theta) := \Psi_1(\theta)$ as the characteristic exponent of a Lévy process $X$. We can now state the following theorem.

Theorem 2.4 (Lévy-Khintchine formula for Lévy processes). Suppose that $a \in \mathbb{R}$, $\sigma \ge 0$ and $\Pi$ is a measure concentrated on $\mathbb{R} \setminus \{0\}$ such that $\int_{\mathbb{R}} (1 \wedge x^2)\, \Pi(dx) < \infty$. From this triplet define for each $\theta \in \mathbb{R}$,
\[
\Psi(\theta) = ia\theta + \tfrac{1}{2}\sigma^2\theta^2 + \int_{\mathbb{R}} \bigl( 1 - e^{i\theta x} + i\theta x \mathbf{1}_{(|x|<1)} \bigr)\, \Pi(dx). \tag{2.5}
\]
Then there exists a probability space $(\Omega, \mathcal{F}, P)$ on which a Lévy process is defined having characteristic exponent $\Psi(\theta)$.


Proving this theorem can be done by establishing the so-called Lévy-Itô decomposition, which states that we can decompose a Lévy process $X$ into the sum of independent Lévy processes $X^{(1)}$, $X^{(2)}$ and $X^{(3)}$, each with a different type of path behaviour. This decomposition will be discussed more thoroughly in Section 2.2. First we will examine some examples of Lévy processes.

2.1.1 Some examples of Lévy processes

We will finish this section with some examples of the most commonly seen Lévy processes.

Poisson processes

A Poisson process $(N_t)_{t\ge0}$ with intensity $\lambda$ is by definition a Lévy process such that for each $t > 0$, $N_t$ is equal in distribution to a Poisson random variable with parameter $\lambda t$. Some easy calculations reveal that
\[
E(e^{i\theta N_t}) = \sum_{n=0}^{\infty} e^{-\lambda t} \frac{(\lambda t)^n}{n!}\, e^{i\theta n} = e^{-\lambda t (1 - e^{i\theta})},
\]
and hence its characteristic exponent is given by $\Psi(\theta) = \lambda(1 - e^{i\theta})$ for $\theta \in \mathbb{R}$.
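As a quick numerical sanity check (a sketch, not part of the thesis argument), the relation $E(e^{i\theta N_t}) = e^{-t\Psi(\theta)}$ can be verified by comparing the empirical characteristic function of simulated Poisson values with the closed form; the parameter values below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
lam, t, theta = 2.0, 1.5, 0.7                     # arbitrary intensity, horizon, argument
samples = rng.poisson(lam * t, size=200_000)      # N_t ~ Poisson(lambda * t)

empirical = np.mean(np.exp(1j * theta * samples))  # empirical E[e^{i theta N_t}]
psi = lam * (1 - np.exp(1j * theta))               # Psi(theta) = lambda (1 - e^{i theta})
theoretical = np.exp(-t * psi)                     # E[e^{i theta N_t}] = e^{-t Psi(theta)}

assert abs(empirical - theoretical) < 1e-2
```

With 200,000 samples the Monte Carlo error is of order $10^{-3}$, well within the tolerance used above.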

Compound Poisson processes

Let $(N_t)_{t\ge0}$ be a Poisson process with intensity $\lambda > 0$ and let $\{\xi_i : i \ge 1\}$ be an i.i.d. sequence of random variables, independent of $(N_t)_{t\ge0}$. Suppose that the $\xi_i$ have common law $F$ with no atom at zero. Consider the compound Poisson process $(X_t)_{t\ge0}$ defined by
\[
X_t = \sum_{i=1}^{N_t} \xi_i, \quad t \ge 0.
\]
To show that compound Poisson processes are indeed Lévy processes, we write for $0 \le s < t < \infty$,
\[
X_t = X_s + \sum_{i=N_s+1}^{N_t} \xi_i.
\]
Now use that $N$ has stationary independent increments and that the random variables $\{\xi_i : i \ge 1\}$ are mutually independent, to conclude that $X_t$ is the sum of $X_s$ and an independent copy of $X_{t-s}$. The paths of $X$ are $P$-almost surely right continuous with left limits, by the fact that the paths of $N$ are. We conclude that compound Poisson processes are Lévy processes. By conditioning on $N_t$, we have for $\theta \in \mathbb{R}$,
\begin{align*}
E\Bigl(e^{i\theta \sum_{i=1}^{N_t} \xi_i}\Bigr) &= \sum_{n \ge 0} E\Bigl(e^{i\theta \sum_{i=1}^{n} \xi_i}\Bigr)\, e^{-\lambda t} \frac{(\lambda t)^n}{n!} \\
&= \sum_{n \ge 0} \left( \int_{\mathbb{R}} e^{i\theta x}\, F(dx) \right)^n e^{-\lambda t} \frac{(\lambda t)^n}{n!} \\
&= e^{-\lambda t \int_{\mathbb{R}} (1 - e^{i\theta x})\, F(dx)},
\end{align*}
showing that the characteristic exponent of a compound Poisson process takes the form $\Psi(\theta) = \lambda \int_{\mathbb{R}} (1 - e^{i\theta x})\, F(dx)$. This reveals that the triplet $(a, \sigma, \Pi)$ from (2.5) takes the form $a = -\lambda \int_{|x|<1} x\, F(dx)$, $\sigma = 0$ and $\Pi = \lambda F$.

Brownian motion with linear drift

Let $B = \{B_t : t \ge 0\}$ be a standard Brownian motion, which is by definition a Lévy process with almost surely continuous paths such that for each $t \ge 0$, $B_t$ is equal in distribution to a normal random variable with mean 0 and variance $t$. We define a scaled Brownian motion with linear drift as
\[
X_t := \hat\sigma B_t + \hat\mu t, \quad t \ge 0, \quad \hat\mu \in \mathbb{R},\ \hat\sigma > 0.
\]
We see that $X_t \sim N(\hat\mu t, \hat\sigma^2 t)$, so we can calculate
\[
E[e^{i\theta X_t}] = \int_{\mathbb{R}} \frac{1}{\sqrt{2\pi \hat\sigma^2 t}}\, e^{-(x - \hat\mu t)^2/(2\hat\sigma^2 t)}\, e^{i\theta x}\, dx = e^{-\theta^2 \hat\sigma^2 t/2 + i\theta \hat\mu t}
\]
by completing the square in the exponent, from which we conclude that $\Psi(\theta) = \hat\sigma^2 \theta^2/2 - i\theta \hat\mu$.

Gamma process

A Gamma process $(\Gamma_t(\alpha, \beta))_{t\ge0}$ is by definition a Lévy process such that $\Gamma_t - \Gamma_s$ is Gamma distributed with shape parameter $\alpha(t-s)$ and scale parameter $\beta$. We will not show any properties of this process, but we will use it to construct another Lévy process later in this thesis, the so-called Variance Gamma process (see Section 4.1).

2.2 The Lévy-Itô decomposition

A Lévy process can be decomposed into the sum of three independent Lévy processes, each with their own properties. This decomposition turns out to be quite useful and can help to understand the structure of general Lévy processes a lot better. It is referred to as the Lévy-Itô decomposition. Assuming the conditions in Theorem 2.3 and that $\Pi(\mathbb{R} \setminus (-1,1)) \ne 0$, we see that we can decompose any characteristic exponent $\Psi(\theta)$ belonging to an infinitely divisible distribution as
\begin{align*}
\Psi(\theta) &= \left( ia\theta + \tfrac{1}{2}\sigma^2\theta^2 \right) + \left( \Pi(\mathbb{R} \setminus (-1,1)) \int_{|x| \ge 1} (1 - e^{i\theta x})\, \frac{\Pi(dx)}{\Pi(\mathbb{R} \setminus (-1,1))} \right) + \left( \int_{0<|x|<1} (1 - e^{i\theta x} + i\theta x)\, \Pi(dx) \right) \tag{2.6} \\
&=: \Psi_1(\theta) + \Psi_2(\theta) + \Psi_3(\theta), \text{ say}, \tag{2.7}
\end{align*}
for all $\theta \in \mathbb{R}$. Note that $\Pi(\mathbb{R} \setminus (-1,1)) < \infty$ by the assumption that $\int_{\mathbb{R}} (1 \wedge x^2)\, \Pi(dx) < \infty$. Whenever $\Pi(\mathbb{R} \setminus (-1,1)) = 0$ we take $\Psi_2(\theta) = 0$ in (2.7). If we can show that for each $i = 1, 2, 3$ it holds that $\Psi_i(\theta)$ corresponds to the characteristic exponent of a different kind of Lévy process, then $\Psi(\theta)$ corresponds to the characteristic exponent of the independent sum of these Lévy processes. This sum is then again a Lévy process, which is quite easily shown. We have seen in Section 2.1 that $\Psi_1(\theta)$ corresponds to a Brownian motion with drift $-a$ and variance $\sigma^2$. Whenever $\Pi(\mathbb{R} \setminus (-1,1)) \ne 0$, $\Psi_2(\theta)$ corresponds to a compound Poisson process with rate $\Pi(\mathbb{R} \setminus (-1,1))$, where the jump sizes $\{\xi_i : i \ge 1\}$ have common law $F(dx) = \Pi(dx)/\Pi(\mathbb{R} \setminus (-1,1))$ concentrated on $\{x : |x| \ge 1\}$. If $\Pi(\mathbb{R} \setminus (-1,1)) = 0$, we take $\Psi_2(\theta) = 0$, corresponding to the zero process. Determining the Lévy process corresponding to the characteristic exponent $\Psi_3(\theta)$ turns out to be a bit more complicated, but this process is described in Theorem 2.5 below.

Theorem 2.5 (Lévy-Itô decomposition). Given any $a \in \mathbb{R}$, $\sigma \ge 0$ and measure $\Pi$ concentrated on $\mathbb{R} \setminus \{0\}$ satisfying
\[
\int_{\mathbb{R}} (1 \wedge x^2)\, \Pi(dx) < \infty,
\]
there exists a probability space on which three independent Lévy processes exist, $X^{(1)}$, $X^{(2)}$ and $X^{(3)}$, where $X^{(1)}$ is a linear Brownian motion, $X^{(2)}$ is a compound Poisson process and $X^{(3)}$ is a square integrable martingale with an almost surely countable number of jumps on each finite time interval, all of magnitude less than unity, and with characteristic exponent given by $\Psi_3(\theta)$. By taking $X = X^{(1)} + X^{(2)} + X^{(3)}$ we have found a Lévy process with characteristic exponent (2.5).

2.3 The Strong Markov Property

We call a process $X = \{X_t : t \ge 0\}$ defined on the filtered probability space $(\Omega, \mathcal{F}, \mathbb{F}, P)$ a Markov process if for each $B \in \mathcal{B}(\mathbb{R})$ and $s, t \ge 0$,
\[
P(X_{t+s} \in B \mid \mathcal{F}_t) = P(X_{t+s} \in B \mid \sigma(X_t)). \tag{2.8}
\]
All Lévy processes are Markovian since they satisfy the stronger condition that $X_{t+s} - X_t$ is independent of $\mathcal{F}_t$. In addition, we recall some basic definitions. We say that $\tau : \Omega \to [0, \infty]$ defined on the same filtered probability space $(\Omega, \mathcal{F}, \mathbb{F}, P)$ is a stopping time if
\[
\{\tau \le t\} \in \mathcal{F}_t \quad \text{for all } t > 0.
\]
The following sigma algebra is associated with such a given stopping time:
\[
\mathcal{F}_\tau := \{A \in \mathcal{F} : A \cap \{\tau \le t\} \in \mathcal{F}_t \text{ for all } t \ge 0\}.
\]
We say that a process is a strong Markov process if it still satisfies (2.8) when we replace the fixed time $t$ by any stopping time $\tau$ with respect to $\mathbb{F}$:
\[
P(X_{\tau+s} \in B \mid \mathcal{F}_\tau) = P(X_{\tau+s} \in B \mid \sigma(X_\tau)) \quad \text{on } \{\tau < \infty\}.
\]
The next theorem states that Lévy processes also exhibit the strong Markov property.

Theorem 2.6. Suppose that $\tau$ is a stopping time. Define on $\{\tau < \infty\}$ the process $\tilde{X} = (\tilde{X}_t)_{t\ge0}$, where
\[
\tilde{X}_t = X_{\tau+t} - X_\tau, \quad t \ge 0.
\]
Then on the event $\{\tau < \infty\}$ the process $\tilde{X}$ is independent of $\mathcal{F}_\tau$ and has the same law as $X$, and hence in particular is a Lévy process.

2.4 The Compensation Formula

Let $X$ be a Lévy process with Lévy measure $\Pi$. We let $N$ denote the Poisson random measure with intensity $ds \times \Pi(dx)$ describing the jumps of $X$. The following result is referred to as the Compensation Formula.

Theorem 2.7. Suppose $\phi : [0, \infty) \times \mathbb{R} \times \Omega \to [0, \infty)$ is a random time-space function such that

(i) as a trivariate function $\phi = \phi(t, x)[\omega]$ is measurable,

(ii) for each $t \ge 0$, $\phi(t, x)[\omega]$ is $\mathcal{F}_t \times \mathcal{B}(\mathbb{R})$-measurable, and

(iii) for each $x \in \mathbb{R}$, with probability one, $\{\phi(t, x)[\omega] : t \ge 0\}$ is a left continuous process.

Then for all $t \ge 0$,
\[
E\left( \int_{[0,t]} \int_{\mathbb{R}} \phi(s, x)\, N(ds \times dx) \right) = E\left( \int_{[0,t]} \int_{\mathbb{R}} \phi(s, x)\, ds\, \Pi(dx) \right),
\]
with the understanding that the right-hand side is infinite if and only if the left-hand side is.
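For a compound Poisson process the Compensation Formula can be checked by hand: taking the deterministic function $\phi(s, x) = x^2$, the left-hand side is $E[\sum_{s \le t} (\Delta L_s)^2]$ and the right-hand side is $\int_0^t \int_{\mathbb{R}} x^2\, \lambda F(dx)\, ds = \lambda t\, E[\xi^2]$. A Monte Carlo sketch (jump law $N(0,1)$ and all parameter values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(seed=3)
lam, t, n_paths = 2.0, 4.0, 100_000       # arbitrary intensity, horizon, sample size

# Per path: sum of squared jumps up to time t, i.e. sum_{i <= N_t} xi_i^2, xi ~ N(0, 1)
counts = rng.poisson(lam * t, size=n_paths)
lhs_samples = np.array([np.sum(rng.normal(size=k) ** 2) for k in counts])

lhs = lhs_samples.mean()                  # estimates E[ integral of phi dN ]
rhs = lam * t * 1.0                       # lambda * t * E[xi^2], and E[xi^2] = 1 for N(0, 1)

assert abs(lhs - rhs) < 0.1
```

Both sides equal $\lambda t = 8$ here; the Monte Carlo standard error is roughly $0.016$, so the tolerance is comfortable. This identity with $\phi(s,x) = x^2$ is exactly the computation used later for second order properties of the COGARCH volatility.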


3 The COGARCH model

We start this chapter by introducing GARCH models, which are quite popular in the financial industry, mainly in econometrics. We do this in Section 3.1. We then introduce their continuous-time analogue, the so-called COGARCH model, in Section 3.2. Finally, we state and prove some important stochastic properties of this model in Section 3.3.

3.1 The GARCH(1,1) model

The GARCH(1,1) model is the most basic GARCH model. It is a discrete time process $(Y_n)_{n\in\mathbb{N}}$, parametrised by $\beta > 0$, $\varphi \ge 0$ and $\delta \ge 0$. It satisfies
\[
Y_n = \epsilon_n \hat\sigma_n, \tag{3.1}
\]
where
\[
\hat\sigma_n^2 = \beta + \varphi Y_{n-1}^2 + \delta \hat\sigma_{n-1}^2 = \beta + (\varphi \epsilon_{n-1}^2 + \delta)\, \hat\sigma_{n-1}^2. \tag{3.2}
\]
Here, $\hat\sigma_0$ and $\epsilon_0$ are given quantities and the $(\epsilon_i)_{i\in\mathbb{N}}$ are assumed to be centered i.i.d. random variables, independent of $\hat\sigma_0$ and $\epsilon_0$. This model can be extended to a GARCH(p,q) model by adding more autoregressive terms in (3.2) above. In the next section we will introduce a continuous time limit of the GARCH(1,1) model, the so-called COGARCH(1,1) model, which we will from now on refer to as the COGARCH model.
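The recursion (3.1)-(3.2) is straightforward to simulate; the sketch below (standard normal innovations and arbitrary parameter values, not taken from the thesis) generates a GARCH(1,1) sample path.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
beta, phi, delta = 0.1, 0.2, 0.7          # arbitrary parameters with phi + delta < 1
n = 1000

eps = rng.normal(size=n)                  # centered i.i.d. innovations
sigma2 = np.empty(n)
Y = np.empty(n)
sigma2[0] = beta / (1 - phi - delta)      # start at the stationary variance level
Y[0] = eps[0] * np.sqrt(sigma2[0])

for i in range(1, n):
    # sigma_n^2 = beta + (phi * eps_{n-1}^2 + delta) * sigma_{n-1}^2
    sigma2[i] = beta + (phi * eps[i - 1] ** 2 + delta) * sigma2[i - 1]
    Y[i] = eps[i] * np.sqrt(sigma2[i])    # Y_n = eps_n * sigma_n

assert np.all(sigma2 >= beta)             # the recursion keeps sigma_n^2 >= beta
```

Note how the conditional variance is bounded below by $\beta$, a discrete-time analogue of the pathwise lower bound that will reappear for the COGARCH volatility in Proposition 3.5.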

3.2 Introduction of the COGARCH model

The COGARCH model describes two processes, the price process $(G_t)_{t\ge0}$ and the volatility process $(\sigma_t)_{t\ge0}$. The stochastic driver in the equations describing these processes is a Lévy process $(L_t)_{t\ge0}$ with characteristic triplet $(\gamma, \sigma^2, \Pi)$. The price process is given by
\[
dG_t = \sigma_{t-}\, dL_t, \quad t \ge 0, \quad G_0 = 0, \tag{3.3}
\]
where the volatility process $(\sigma_t)_{t\ge0}$ is defined by
\[
d\sigma_t^2 = (\beta - \eta \sigma_{t-}^2)\, dt + \varphi \sigma_{t-}^2\, d[L, L]_t. \tag{3.4}
\]
Here, $\beta > 0$, $\eta \ge 0$ and $\varphi \ge 0$ are the parameters of the model. The starting value $\sigma_0^2$ is assumed to be a finite random variable, independent of $(L_t)_{t\ge0}$. The process $[L, L]_t$ is the quadratic variation of $L$ and is given by
\[
[L, L]_t := \sigma^2 t + \sum_{0<s\le t} (\Delta L_s)^2 = \sigma^2 t + [L, L]_t^d,
\]
where $[L, L]_t^d$ denotes the pure jump part of $[L, L]$. To see that this model is indeed a continuous time version of the GARCH model described in Section 3.1, we note that it follows from (3.2) that
\[
\hat\sigma_n^2 - \hat\sigma_{n-1}^2 = \beta - (1 - \delta)\, \hat\sigma_{n-1}^2 + \varphi \hat\sigma_{n-1}^2 \epsilon_{n-1}^2.
\]
By taking $\eta = 1 - \delta$ in (3.4) and by letting the jumps of the Lévy process be the randomness, we see that we are in the same situation whenever we take the time increment $dt$ as one unit (or at least a fixed interval) of time. We now define the process $(X_t)_{t\ge0}$ as
\[
X_t = \eta t - \sum_{0<s\le t} \log(1 + \varphi (\Delta L_s)^2), \quad t \ge 0. \tag{3.5}
\]
It turns out that (3.4) can be solved explicitly as
\[
\sigma_t^2 = e^{-X_t} \left( \beta \int_0^t e^{X_s}\, ds + \sigma_0^2 \right), \quad t \ge 0, \tag{3.6}
\]
which we will elaborate in more detail later on.
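When the driving Lévy process is compound Poisson (no Brownian part), the dynamics (3.3)-(3.4) can be simulated exactly: between jumps (3.4) reduces to the deterministic equation $d\sigma_t^2 = (\beta - \eta\sigma_t^2)\,dt$, which has an explicit solution, while at a jump time $\sigma^2$ increases by $\varphi\sigma_{t-}^2(\Delta L_t)^2$ and $G$ by $\sigma_{t-}\Delta L_t$. The sketch below assumes such a compound Poisson driver with standard normal jumps; all parameter values are arbitrary illustrations, not values used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(seed=5)
beta, eta, phi = 0.04, 0.05, 0.03         # arbitrary COGARCH parameters
lam, T = 1.0, 100.0                       # compound Poisson driver: intensity and horizon

n_jumps = rng.poisson(lam * T)
times = np.sort(rng.uniform(0.0, T, size=n_jumps))
jumps = rng.normal(0.0, 1.0, size=n_jumps)    # Delta L at each jump time (N(0, 1) jumps)

sigma2 = beta / eta                       # start at the lower bound beta/eta
G = 0.0
t_prev = 0.0
for t, dL in zip(times, jumps):
    # between jumps: d(sigma^2) = (beta - eta sigma^2) dt, solved exactly
    sigma2 = beta / eta + (sigma2 - beta / eta) * np.exp(-eta * (t - t_prev))
    G += np.sqrt(sigma2) * dL             # dG = sigma_{t-} dL, using the pre-jump volatility
    sigma2 += phi * sigma2 * dL ** 2      # jump of sigma^2: phi * sigma_{t-}^2 * (dL)^2
    t_prev = t

assert sigma2 >= beta / eta - 1e-12       # the volatility never falls below beta/eta here
```

Because the path starts at $\beta/\eta$, relaxes toward $\beta/\eta$ from above between jumps, and only jumps upwards, the simulated volatility stays at or above $\beta/\eta$, in line with the pathwise bounds proved in Section 3.3.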

3.3 Properties of the model

In this section we state and prove some important properties of the processes $\sigma^2$, $G$ and $X$ defined above.

Theorem 3.1. $X$ is a spectrally negative Lévy process of bounded variation. Furthermore, $X$ has the characteristic triplet $(\gamma_X, \sigma_X^2, \Pi_X)$ with drift $\gamma_X = \eta$, Gaussian component $\sigma_X^2 = 0$ and Lévy measure $\Pi_X$ given by
\[
\Pi_X((-\infty, -x]) = \Pi_L\bigl(\{y \in \mathbb{R} : |y| \ge \sqrt{(e^x - 1)/\varphi}\}\bigr), \quad x > 0,
\]
and $\Pi_X([0, \infty)) = 0$.

Proof. It is easily seen that $X$ is a Lévy process with no positive jumps. The counting process
\[
N(t) = \#\{s \le t : \Delta X_s \le -x\} \tag{3.7}
\]
is a Poisson process with rate $\Pi_X((-\infty, -x])$, which is shown e.g. in Sato [6], Theorem 19.2. Thereby it follows that
\begin{align*}
\Pi_X((-\infty, -x]) &= E\bigl(\#\{s \le 1 : \Delta X_s \le -x\}\bigr) \\
&= E\bigl(\#\{s \le 1 : |\Delta L_s| \ge \sqrt{(e^x - 1)/\varphi}\}\bigr) \\
&= \Pi_L\bigl(\{y : |y| \ge \sqrt{(e^x - 1)/\varphi}\}\bigr). \tag{3.8}
\end{align*}

Recall that, given a measure space $(S, \Sigma, \mu)$, a measurable space $(S', \Sigma')$ and a measurable function $f : S \to S'$, the image measure $\mu^{\mathrm{Im}}$ of $\mu$ under the transformation $f$ is given by $\mu^{\mathrm{Im}}(B) = \mu(f^{-1}(B))$, $B \in \Sigma'$. We have thus shown in (3.8) that $\Pi_X$ and the image measure of $\Pi_L$ under the (continuous and hence measurable) transformation $T : \mathbb{R} \to (-\infty, 0]$, $x \mapsto -\log(1 + \varphi x^2)$, coincide on the $\pi$-system $\Pi_0 = \{(-\infty, -x] : x > 0\} \cup \{[x, \infty) : x > 0\}$. This implies that they are the same (on $\sigma(\Pi_0) = \mathcal{B}(\mathbb{R} \setminus \{0\})$). Using the standard approach, it can easily be shown that for the image measure $\mu^{\mathrm{Im}}$ of $\mu$ defined above it holds that $\mu^{\mathrm{Im}}(f) = \mu(f \circ T)$, and hence
\[
\int_{[-1,1]} |x|\, \Pi_X(dx) = \int_{|y| \le \sqrt{(e-1)/\varphi}} \log(1 + \varphi y^2)\, \Pi_L(dy) < \infty,
\]
since $\int_{[-1,1]} y^2\, \Pi_L(dy) < \infty$ (we can expand the log term in the equation above in terms of $y^2$ and higher order terms). By Lemma 2.12 in Kyprianou [7] it follows that $X$ is of bounded variation. It also follows from Sato [6], Theorem 19.3, that
\[
E(e^{i\theta X_t}) = \exp\left( it\theta\eta + t \int_{(-\infty,0)} (e^{i\theta x} - 1)\, \Pi_X(dx) \right), \quad \theta \in \mathbb{R}, \tag{3.9}
\]
showing that $X$ has drift $\gamma_X = \eta$ and Gaussian component $\sigma_X^2 = 0$.

We define the process $(\sigma_t^2)_{t\ge0}$ by
\[
\sigma_t^2 = e^{-X_t} \left( \beta \int_0^t e^{X_s}\, ds + \sigma_0^2 \right), \quad t \ge 0, \tag{3.10}
\]
where $\beta > 0$ and $\sigma_0^2$ is a finite random variable, independent of $(L_t)_{t\ge0}$. After the following proposition it is easily seen that (3.10) is a solution of (3.4).

Proposition 3.2. The process $(\sigma_t^2)_{t\ge0}$ satisfies the stochastic differential equation
\[
d\sigma_t^2 = \beta\, dt + \sigma_{t-}^2\, e^{X_{t-}}\, d(e^{-X_t}), \quad t > 0, \tag{3.11}
\]
and we have
\[
\sigma_t^2 = \beta t - \eta \int_0^t \sigma_{s-}^2\, ds + \varphi \sum_{0<s\le t} \sigma_{s-}^2 (\Delta L_s)^2 + \sigma_0^2. \tag{3.12}
\]

Proof. We define $K_t = -\eta t$ and $S_t = \prod_{0<s\le t} (1 + \varphi (\Delta L_s)^2)$ (if no jumps occurred before time $t$ we set $S_t = 1$). We know that $S_t$ is $P$-a.s. finite, which follows from the fact that
\[
\log(S_t) = \sum_{0<s\le t} \log(1 + \varphi (\Delta L_s)^2) \le \varphi \sum_{0<s\le t} (\Delta L_s)^2 < \infty \quad P\text{-a.s.},
\]
which in turn follows from the fact that $\int_{\mathbb{R}} (1 \wedge x^2)\, \Pi(dx) < \infty$. Furthermore we let $f(x, y) = e^x y$. Then, by Itô's lemma it follows that
\[
e^{-X_t} = f(K_t, S_t) = 1 - \eta \int_0^t e^{-X_s}\, ds + \sum_{0<s\le t} \bigl( f(K_s, S_s) - f(K_{s-}, S_{s-}) \bigr), \tag{3.13}
\]
where
\begin{align*}
f(K_t, S_t) - f(K_{t-}, S_{t-}) &= e^{-\eta t} \prod_{0<s\le t} (1 + \varphi (\Delta L_s)^2) - e^{-\eta t} \prod_{0<s<t} (1 + \varphi (\Delta L_s)^2) \\
&= e^{-\eta t} \bigl( (1 + \varphi (\Delta L_t)^2) - 1 \bigr) \prod_{0<s<t} (1 + \varphi (\Delta L_s)^2) \\
&= \varphi (\Delta L_t)^2\, e^{-X_{t-}}, \tag{3.14}
\end{align*}
so that (3.13) becomes
\[
e^{-X_t} = 1 - \eta \int_0^t e^{-X_s}\, ds + \varphi \sum_{0<s\le t} e^{-X_{s-}} (\Delta L_s)^2. \tag{3.15}
\]
It follows from integration by parts that
\[
e^{-X_t} \int_0^t e^{X_s}\, ds = \int_0^t e^{-X_{s-}}\, d\left( \int_0^s e^{X_y}\, dy \right) + \int_0^t \left( \int_0^s e^{X_y}\, dy \right) d(e^{-X_s}) + \left[ e^{-X_\cdot}, \int_0^\cdot e^{X_s}\, ds \right]_t, \tag{3.16}
\]
where the quadratic covariation term vanishes, since $\int_0^\cdot e^{X_s}\, ds$ is continuous and of bounded variation. Now (3.16) becomes
\begin{align*}
e^{-X_t} \int_0^t e^{X_s}\, ds &= \int_0^t e^{X_s - X_{s-}}\, ds + \int_0^t \left( \int_0^s e^{X_y}\, dy \right) d(e^{-X_s}) \\
&= t + \int_0^t \left( \int_0^s e^{X_y}\, dy \right) d(e^{-X_s}),
\end{align*}
since $X_s = X_{s-}$ for all but countably many $s$, and therefore
\[
d\left( e^{-X_t} \int_0^t e^{X_s}\, ds \right) = dt + \left( \int_0^t e^{X_s}\, ds \right) d(e^{-X_t}).
\]
Combining this with the definition (3.10) of $\sigma_t^2$ gives (3.11), and substituting (3.15) into (3.11) yields (3.12).


We proceed by showing that the process $(\sigma_t^2)_{t\ge0}$ is Markovian.

Theorem 3.3. The variance process $(\sigma_t^2)_{t\ge0}$ is a time homogeneous Markov process.

Proof. Let $(\mathcal{F}_t)_{t\ge0}$ be the filtration generated by $(\sigma_t^2)_{t\ge0}$. For $0 \le y < t$ we have
\begin{align*}
\sigma_t^2 &= \left( \beta \int_0^y e^{X_s}\, ds \right) e^{-X_y}\, e^{-(X_t - X_y)} + \beta \left( \int_y^t e^{X_s}\, ds \right) e^{-X_t} + \sigma_0^2\, e^{-X_t} \\
&= (\sigma_y^2 - \sigma_0^2 e^{-X_y})\, e^{-(X_t - X_y)} + \beta \left( \int_y^t e^{X_s}\, ds \right) e^{-X_t} + \sigma_0^2\, e^{-X_t} \\
&=: \sigma_y^2\, A(y, t) + B(y, t), \tag{3.17}
\end{align*}
where we define
\[
A(y, t) := e^{-(X_t - X_y)} \quad \text{and} \quad B(y, t) := \beta \left( \int_y^t e^{X_s - X_y}\, ds \right) e^{-(X_t - X_y)}.
\]
Since $X$ is a Lévy process, it follows that $A(y, t)$ and $B(y, t)$ are independent of $\mathcal{F}_y$. This shows that, conditional on $\mathcal{F}_y$, $\sigma_t^2$ depends only on $\sigma_y^2$, from which it follows that $(\sigma_t^2)_{t\ge0}$ is a Markov process.

To show that $(\sigma_t^2)_{t\ge0}$ is time homogeneous, we define $D[0, \infty)$ as the space of càdlàg functions on $[0, \infty)$ and let
\[
g_{y,t} : D[0, \infty) \to \mathbb{R}^2, \quad x \mapsto \left( e^{-(x_t - x_y)},\; \beta \int_y^t e^{-(x_t - x_s)}\, ds \right).
\]
Since $X$ is a Lévy process we have that $(X_s)_{s\ge0} \overset{D}{=} (X_{s+h} - X_h)_{s\ge0}$. It is easily seen that $g_{y,t}((X_s)_{s\ge0}) = (A(y, t), B(y, t))$ and $g_{y,t}((X_{s+h} - X_h)_{s\ge0}) = (A(y + h, t + h), B(y + h, t + h))$. Hence the joint distribution of $(A(y, t), B(y, t))$ depends only on $t - y$. By independence of $(A(y, t), B(y, t))$ and $\sigma_y^2$ it follows that the transition functions of $(\sigma_t^2)_{t\ge0}$ are time homogeneous.

The next theorem gives the condition under which $(\sigma_t^2)_{t\ge0}$ converges in distribution.

Theorem 3.4. Suppose that
\[
\int_{\mathbb{R}} \log(1 + \varphi y^2)\, \Pi(dy) < \eta. \tag{3.18}
\]
Then $(\sigma_t^2)_{t\ge0}$ converges in distribution to a finite random variable $\sigma_\infty^2$ as $t \to \infty$, satisfying
\[
\sigma_\infty^2 \overset{D}{=} \beta \int_0^\infty e^{-X_t}\, dt. \tag{3.19}
\]

Proof. Define $x^+ := \max\{0, x\}$ and $x^- := \max\{0, -x\}$. Then, as $X$ is spectrally negative, it holds that $E X_1^+ < \infty$, so that $E X_1$ is well defined. By (3.9) it holds that
\[
E X_1 = \frac{1}{i} \frac{d}{d\theta} E(e^{i\theta X_1}) \Big|_{\theta=0} = \eta + \int_{(-\infty,0)} x\, \Pi_X(dx) = \eta - \int_{\mathbb{R}} \log(1 + \varphi y^2)\, \Pi(dy),
\]
so that (3.18) implies that $E X_1 > 0$. We can conclude that $X_t / t \to E X_1$ almost surely (see Sato [6], Theorem 36.5) and hence $X_t \to \infty$ almost surely as $t \to \infty$. It follows from the Goldie-Maller theorem [19] and Erickson and Maller [18] that $\int_0^\infty e^{-X_s}\, ds$ converges almost surely to a finite random variable whenever $X_t \to \infty$ almost surely. Since
\[
e^{-X_t} \int_0^t e^{X_s}\, ds \overset{D}{=} \int_0^t e^{-X_s}\, ds, \quad t \ge 0,
\]
which follows from stationarity of the increments of $(X_t)_{t\ge0}$, it holds that $(\sigma_t^2)_{t\ge0}$ converges in distribution. If (3.18) does not hold, then $E X_1 \le 0$ and hence $X_t \to -\infty$ as $t \to \infty$ or $(X_t)_{t\ge0}$ oscillates, and it follows that $\sigma_t^2 \overset{P}{\to} \infty$.

Now we shall investigate the pathwise behaviour of $\sigma$.

Proposition 3.5. (a) The volatility $\sigma_t$ at time $t$ satisfies
\[
\sigma_t^2 \ge \frac{\beta}{\eta} \bigl(1 - e^{-t\eta}\bigr), \quad \text{for all } t \ge 0.
\]
If $\sigma_{t_0}^2 \ge \beta/\eta$ for some $t_0$, then $\sigma_t^2 \ge \beta/\eta$ for every $t \ge t_0$. If $\sigma_t^2 \overset{D}{=} \sigma_\infty^2$ is the stationary version, then
\[
\sigma_\infty^2 \ge \frac{\beta}{\eta} \quad \text{a.s.} \tag{3.20}
\]

(b) The jumps of the squared volatility process at time $t > 0$ are described by
\[
\sigma_t^2 - \sigma_{t-}^2 = \varphi \sigma_{t-}^2 (\Delta L_t)^2.
\]

(c) Let $(L_t)_{t\ge0}$ be a compound Poisson process with jump times $0 = T_0 < T_1 < \cdots$. Then the volatility satisfies, for $t \in (T_j, T_{j+1})$,
\[
\frac{d}{dt} \sigma_t^2 = \beta - \eta \sigma_t^2, \qquad \sigma_t^2 = \frac{\beta}{\eta} + \left( \sigma_{T_j+}^2 - \frac{\beta}{\eta} \right) e^{-(t - T_j)\eta}.
\]

Proof. (a) It follows from (3.5) that for $0 \le s < t$,
\[
X_s - X_t = (s - t)\eta + \sum_{s<u\le t} \log(1 + \varphi (\Delta L_u)^2) \ge (s - t)\eta, \tag{3.21}
\]
hence
\[
\sigma_t^2 = \beta \int_0^t e^{X_s - X_t}\, ds + \sigma_0^2\, e^{-X_t} \ge \beta \int_0^t e^{(s-t)\eta}\, ds = \frac{\beta}{\eta} \bigl(1 - e^{-\eta t}\bigr).
\]
Then (3.20) follows as we let $t \to \infty$. Now let $t > t_0$ and suppose that $\sigma_{t_0}^2 \ge \beta/\eta$. In (3.17) it was shown that
\[
\sigma_t^2 = e^{X_{t_0} - X_t}\, \sigma_{t_0}^2 + \beta \int_{t_0}^t e^{X_s - X_t}\, ds.
\]
It then follows from (3.21) that
\[
\sigma_t^2 \ge e^{(t_0 - t)\eta}\, \sigma_{t_0}^2 + \beta \int_{t_0}^t e^{(s - t)\eta}\, ds \ge e^{(t_0 - t)\eta}\, \frac{\beta}{\eta} + \frac{\beta}{\eta} \bigl(1 - e^{(t_0 - t)\eta}\bigr) = \frac{\beta}{\eta}.
\]

(b) This follows easily from (3.12).

(c) The first equation follows from (3.12), using that $\sigma_t^2$ has no jumps on $(T_j, T_{j+1})$. The second equality follows from solving this differential equation with starting value $\sigma_{T_j+}^2$.

Proposition 3.6. We have
\[
\varphi\, [G, G]_t = (\varphi \sigma^2 + \eta) \int_0^t \sigma_s^2\, ds + \sigma_t^2 - \sigma_0^2 - \beta t, \quad t \ge 0. \tag{3.22}
\]

Proof. It holds that
\begin{align*}
[G, G]_t &= \int_0^t \sigma_{s-}^2\, d[L, L]_s = \int_0^t \sigma_{s-}^2\, d\Bigl( s\sigma^2 + \sum_{0<u\le s} (\Delta L_u)^2 \Bigr) \\
&= \sigma^2 \int_0^t \sigma_{s-}^2\, ds + \sum_{0<u\le t} \sigma_{u-}^2 (\Delta L_u)^2.
\end{align*}
Multiplying by $\varphi$ and substituting the expression for the jump sum obtained from (3.12) yields (3.22).


3.4 Second order properties

In this section we will investigate the second order behaviour of the processes.

3.4.1 The volatility process

We shall restrict ourselves to the stationary version of the process $(\sigma_t^2)_{t\ge0}$. We start with a lemma on the exponential moments of $(X_t)_{t\ge0}$.

Lemma 3.7. Let $(X_t)_{t\ge0}$ be as in (3.5), and let $c > 0$.

(a) $E e^{-c X_t} < \infty$ for some $t > 0$ (or, equivalently, for all $t > 0$) if and only if $E|L_1|^{2c} < \infty$.

(b) When $E e^{-c X_1} < \infty$, put $\Psi(c) = \Psi_X(c) = \log E e^{-c X_1}$. Then $|\Psi(c)| < \infty$, $E e^{-c X_t} = e^{t \Psi(c)}$, and
\[
\Psi(c) = -c\eta + \int_{\mathbb{R}} \bigl( (1 + \varphi y^2)^c - 1 \bigr)\, \Pi(dy). \tag{3.23}
\]

(c) If $\Psi(c) < 0$ for some $c > 0$, then $\Psi(d) < 0$ for all $0 < d < c$.

(d) If $E|L_1|^{2c} < \infty$ and $\Psi(1) \le 0$, then (3.18) holds, and a stationary version of $(\sigma_t^2)_{t\ge0}$ exists.

Proof. (a) By Theorem 25.17 in Sato [6] the Laplace transform $E e^{-c X_t}$ is finite for some, and hence all, $t \ge 0$ if and only if
\[
\int_{|x|>1} e^{-cx}\, \Pi_X(dx) = \int_{(-\infty,-1)} e^{-cx}\, \Pi_X(dx) = \int_{\{|y| > \sqrt{(e-1)/\varphi}\}} (1 + \varphi y^2)^c\, \Pi(dy) \tag{3.24}
\]
is finite. It follows from Theorem 25.3 in Sato [6] that the latter is equivalent to $E|L_1|^{2c} < \infty$.

(b) By (3.9) it follows that
\[
|\Psi(c)| = \left| -c\eta + \int_{(-\infty,0)} (e^{-cx} - 1)\, \Pi_X(dx) \right|,
\]
where we can decompose the integral as follows:
\[
\int_{(-\infty,0)} (e^{-cx} - 1)\, \Pi_X(dx) = \int_{|x|>1} e^{-cx}\, \Pi_X(dx) - \Pi_X((-\infty, -1)) + \int_{[-1,0)} \left( -cx + \frac{c^2 x^2}{2!} - \cdots \right) \Pi_X(dx), \tag{3.25}
\]
where we expanded $e^{-cx}$ in the last part of the integral. The first integral on the right hand side of (3.25) is finite by Theorem 25.17 in Sato [6]. The last integral is finite by the fact that $X$ is of bounded variation and hence $\int_{\mathbb{R}} (1 \wedge |x|)\, \Pi_X(dx) < \infty$. The fact that $E e^{-c X_t} = e^{t\Psi(c)}$ follows from (2.2). The last equality in the statement follows from (3.8).

(c) Let $\Psi(c) < 0$. By (a) and (b) it follows that $\Psi(d)$ is well defined for all $0 < d \le c$. By (3.23) it holds that $\Psi(d) < 0$ if and only if
\[
\frac{1}{d} \int_{\mathbb{R}} \bigl( (1 + \varphi y^2)^d - 1 \bigr)\, \Pi(dy) < \eta.
\]
The function $f : x \mapsto \frac{1}{x}\bigl((1 + \varphi y^2)^x - 1\bigr)$ from $(0, \infty)$ to $\mathbb{R}$ is increasing for every fixed $y$. This follows from the following calculations. Fix $\varphi \ge 0$ and $y \in \mathbb{R}$. We have that
\[
f'(x) = -\frac{1}{x^2} \bigl( (1 + \varphi y^2)^x - 1 \bigr) + \frac{1}{x} \log(1 + \varphi y^2)(1 + \varphi y^2)^x = \frac{1}{x} \left( \log(1 + \varphi y^2)(1 + \varphi y^2)^x - \frac{1}{x} \bigl( (1 + \varphi y^2)^x - 1 \bigr) \right).
\]
To prove that $f'(x) \ge 0$ for $x > 0$, write $u := x \log(1 + \varphi y^2) \ge 0$, so that $(1 + \varphi y^2)^x = e^u$; by the inequality $u e^u \ge e^u - 1$ for $u \ge 0$ we obtain
\[
x \log(1 + \varphi y^2)(1 + \varphi y^2)^x \ge (1 + \varphi y^2)^x - 1,
\]
and hence $f'(x) \ge 0$ for $x > 0$. The result follows.

(d) Note that $\Psi(1) \le 0$ is equivalent to
\[
\int_{\mathbb{R}} \varphi y^2\, \Pi(dy) \le \eta. \tag{3.26}
\]
By the fact that $x > 1 + \log x$ for $x > 1$, it holds that $\log(1 + \varphi y^2) < \varphi y^2$ for any $y \ne 0$. This shows that (3.26) implies (3.18).
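The implication in (d) can be illustrated numerically. For a compound Poisson driver with Lévy measure $\Pi = \lambda F$ and $F = N(0,1)$, condition (3.26) reads $\lambda \varphi E[Y^2] \le \eta$ while (3.18) reads $\lambda E[\log(1 + \varphi Y^2)] < \eta$, and the former is indeed the stronger one. A Monte Carlo sketch (all parameter values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(seed=6)
lam, phi = 1.0, 0.2                       # arbitrary intensity and COGARCH parameter
Y = rng.normal(size=1_000_000)            # samples from F = N(0, 1)

lhs_326 = lam * phi * np.mean(Y ** 2)             # integral of phi*y^2 dPi, as in (3.26)
lhs_318 = lam * np.mean(np.log1p(phi * Y ** 2))   # integral of log(1+phi*y^2) dPi, as in (3.18)

# log(1 + x) < x for x > 0, so the (3.18)-integral is strictly smaller:
assert lhs_318 < lhs_326
# hence any eta satisfying (3.26), e.g. eta = lhs_326 itself, also satisfies (3.18)
eta = lhs_326
assert lhs_318 < eta
```

So whenever $\eta$ dominates the (3.26)-integral it automatically dominates the (3.18)-integral, which is exactly the pointwise inequality $\log(1+x) < x$ averaged against $\Pi$.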

Proposition 3.8. Let $\gamma > 0$, $t > 0$ and $h > 0$.

(a) The mean $E\sigma_t^2$ is finite if and only if $EL_1^2 < \infty$ and $E\sigma_0^2 < \infty$. If this is so, then
\[
E\sigma_t^2 = \frac{\beta}{-\Psi(1)} + \left( E\sigma_0^2 + \frac{\beta}{\Psi(1)} \right) e^{t\Psi(1)}, \tag{3.27}
\]
where, if $\Psi(1) = 0$, the right-hand side has to be interpreted as its limit as $\Psi(1) \to 0$, that is, $E\sigma_t^2 = \beta t + E\sigma_0^2$.

(b) The second moment $E\sigma_t^4$ is finite if and only if $EL_1^4 < \infty$ and $E\sigma_0^4 < \infty$. In that case, the following formulas hold (with a suitable interpretation as a limit if some of the denominators are zero):
\[
E\sigma_t^4 = \frac{2\beta^2}{\Psi(1)\Psi(2)} + \frac{2\beta^2}{\Psi(2) - \Psi(1)} \left( \frac{e^{t\Psi(2)}}{\Psi(2)} - \frac{e^{t\Psi(1)}}{\Psi(1)} \right) + 2\beta E\sigma_0^2 \left( \frac{e^{t\Psi(2)} - e^{t\Psi(1)}}{\Psi(2) - \Psi(1)} \right) + E\sigma_0^4\, e^{t\Psi(2)}, \tag{3.28}
\]
\[
\operatorname{cov}(\sigma_t^2, \sigma_{t+h}^2) = \operatorname{var}(\sigma_t^2)\, e^{h\Psi(1)}. \tag{3.29}
\]

Proof. (a) First we calculate $E\sigma_t^2$. It follows from (3.10) that

$$E\sigma_t^2 = \beta E\int_0^t e^{X_s - X_t}\,ds + E\sigma_0^2\,Ee^{-X_t} = \beta\int_0^t Ee^{-X_s}\,ds + E\sigma_0^2\,Ee^{-X_t}, \quad (3.30)$$

where we used Fubini's theorem, the stationarity of the increments of $X$, and the fact that $\sigma_0^2$ is independent of $X_t$. Then by Lemma 3.7 the right-hand side of (3.30) is finite if and only if $EL_1^2 < \infty$ and $E\sigma_0^2 < \infty$. Now (3.27) follows since

$$E\sigma_t^2 = \beta\int_0^t e^{s\Psi(1)}\,ds + E\sigma_0^2\,e^{t\Psi(1)}.$$

(b) Assume that $EL_1^4 < \infty$ and $E\sigma_0^4 < \infty$. Then

$$E\sigma_t^4 = \beta^2 E\Big(\int_0^t e^{X_s-X_t}\,ds\Big)^2 + 2\beta E\sigma_0^2\,E\int_0^t e^{X_s - 2X_t}\,ds + E\sigma_0^4\,Ee^{-2X_t} =: \beta^2 EI_1 + 2\beta E\sigma_0^2\,EI_2 + E\sigma_0^4\,e^{t\Psi(2)}, \quad (3.31)$$

where we define

$$I_1 := \Big(\int_0^t e^{X_s-X_t}\,ds\Big)^2 \quad\text{and}\quad I_2 := \int_0^t e^{X_s-2X_t}\,ds.$$

From stationarity of increments we obtain

$$\Big(\int_0^t e^{X_s-X_t}\,ds\Big)^2 \overset{D}{=} \Big(\int_0^t e^{-X_{t-s}}\,ds\Big)^2 = \Big(\int_0^t e^{-X_s}\,ds\Big)^2 = \int_0^t\int_0^t e^{-X_s}e^{-X_u}\,du\,ds = 2\int_0^t\int_0^s e^{-(X_s-X_u)}e^{-2X_u}\,du\,ds.$$

By independence of increments it follows that

$$EI_1 = 2\int_0^t\int_0^s \big(Ee^{-(X_s-X_u)}\big)\big(Ee^{-2X_u}\big)\,du\,ds = 2\int_0^t\int_0^s e^{(s-u)\Psi(1)}e^{u\Psi(2)}\,du\,ds = \frac{2}{\Psi(1)\Psi(2)} + \frac{2}{\Psi(2)-\Psi(1)}\Big(\frac{e^{t\Psi(2)}}{\Psi(2)} - \frac{e^{t\Psi(1)}}{\Psi(1)}\Big).$$

We see that

$$EI_2 = E\int_0^t e^{X_s-2X_t}\,ds = E\int_0^t e^{-2(X_t-X_s)}e^{-X_s}\,ds = \int_0^t e^{(t-s)\Psi(2)}e^{s\Psi(1)}\,ds = \frac{e^{t\Psi(2)} - e^{t\Psi(1)}}{\Psi(2)-\Psi(1)}.$$

From these calculations, we see that $E\sigma_t^4 < \infty$ and we obtain (3.28). Conversely, assume that $E\sigma_t^4$ is finite. Then it follows from our calculations in (3.31) that $E\sigma_0^4 < \infty$ and $Ee^{-2X_t} < \infty$, and hence by Lemma 3.7 that also $EL_1^4 < \infty$.

For the proof of (3.29), let $(\mathcal{F}_t)_{t\ge0}$ be the filtration generated by $(\sigma_t^2)_{t\ge0}$. From (3.17) and (3.27) we obtain

$$E\big(\sigma_{t+h}^2 \mid \mathcal{F}_t\big) = \sigma_t^2\,e^{h\Psi(1)} + \beta\int_0^h e^{s\Psi(1)}\,ds = \big(\sigma_t^2 - E\sigma_0^2\big)e^{h\Psi(1)} + E\sigma_h^2. \quad (3.32)$$

We see that

$$E\big(\sigma_{t+h}^2\sigma_t^2\big) = E\big(\sigma_t^2\,E(\sigma_{t+h}^2 \mid \mathcal{F}_t)\big) = E\Big(\sigma_t^2\big((\sigma_t^2 - E\sigma_0^2)e^{h\Psi(1)} + E\sigma_h^2\big)\Big) = \big(E\sigma_t^4 - E\sigma_t^2\,E\sigma_0^2\big)e^{h\Psi(1)} + E\sigma_t^2\,E\sigma_h^2. \quad (3.33)$$

After some calculations using (3.27), we conclude

$$E\sigma_t^2\,E\sigma_h^2 - E\sigma_t^2\,E\sigma_{t+h}^2 = \big(E\sigma_t^2\,E\sigma_0^2 - (E\sigma_t^2)^2\big)e^{h\Psi(1)}.$$

Now (3.29) follows immediately from (3.33).

Theorem 3.9. Let $\sigma_\infty^2$ have the stationary distribution of the volatility process. For $k \in \mathbb{N}$, the $k$-th moment of $\sigma_\infty^2$ is finite if and only if $EL_1^{2k} < \infty$ and $\Psi(k) < 0$. In this case

$$E\sigma_\infty^{2k} = k!\,\beta^k\prod_{l=1}^k\frac{1}{-\Psi(l)}. \quad (3.34)$$

Proof. First we assume that $EL_1^{2k} < \infty$ and that $\Psi(k) < 0$. It follows from (3.19), Fubini's theorem and the stationary increment property that, for $k \in \mathbb{N}$,

$$E\sigma_\infty^{2k} = \beta^k E\Big(\int_0^\infty e^{-X_t}\,dt\Big)^k = \beta^k E\int_0^\infty\cdots\int_0^\infty e^{-X_{t_1}}\cdots e^{-X_{t_k}}\,dt_k\cdots dt_1$$
$$= k!\,\beta^k E\int_0^\infty\int_0^{t_1}\cdots\int_0^{t_{k-1}} e^{-(X_{t_1}-X_{t_2})}e^{-2(X_{t_2}-X_{t_3})}\cdots e^{-(k-1)(X_{t_{k-1}}-X_{t_k})}e^{-kX_{t_k}}\,dt_k\cdots dt_1$$
$$= k!\,\beta^k\int_0^\infty\int_0^{t_1}\cdots\int_0^{t_{k-1}} e^{t_1\Psi(1)}e^{t_2(\Psi(2)-\Psi(1))}\cdots e^{t_k(\Psi(k)-\Psi(k-1))}\,dt_k\cdots dt_1 = k!\,\beta^k\prod_{l=1}^k\frac{1}{-\Psi(l)}, \quad (3.35)$$

where it should be noted that $\Psi(1), \dots, \Psi(k)$ are all defined and negative. The last equality follows from the fact that

$$\int_0^\infty\int_0^{t_1} e^{t_1\Psi(1)}e^{t_2(\Psi(2)-\Psi(1))}\,dt_2\,dt_1 = \frac{1}{\Psi(1)\Psi(2)}$$

and an induction argument. So $E\sigma_\infty^{2k} < \infty$. Conversely, assume that $j \in \{1, \dots, k\}$ is the first index for which $\Psi(j) \ge 0$ or $Ee^{-jX_1} = \infty$; then by the above calculation it follows that $E\sigma_\infty^{2j} = \infty$. But then also $E\sigma_\infty^{2k} = \infty$.

Corollary 3.10. If $(\sigma_t^2)_{t\ge0}$ is the stationary process with $\sigma_0^2 \overset{D}{=} \sigma_\infty^2$, then

$$E\sigma_\infty^2 = \frac{\beta}{-\Psi(1)}, \quad (3.36)$$
$$E\sigma_\infty^4 = \frac{2\beta^2}{\Psi(1)\Psi(2)}, \quad (3.37)$$
$$\mathrm{cov}\big(\sigma_t^2, \sigma_{t+h}^2\big) = \beta^2\Big(\frac{2}{\Psi(1)\Psi(2)} - \frac{1}{\Psi(1)^2}\Big)e^{h\Psi(1)}, \quad t, h \ge 0, \quad (3.38)$$

provided that $EL_1^{2k} < \infty$ and $\Psi(k) < 0$ for $k = 1$ in the case of (3.36) and for $k = 2$ in the case of (3.37) and (3.38).

Theorem 3.11. Let $k \in \mathbb{N}$, $\eta \ge 0$ and $\phi \ge 0$. Then the limit variable $\sigma_\infty^2$ exists and has finite $k$-th moment if and only if

$$\frac{1}{k}\int_{\mathbb{R}}\Big(\big(1+\phi y^2\big)^k - 1\Big)\,\Pi(dy) < \eta. \quad (3.39)$$

Proof. If the limit variable $\sigma_\infty^2$ exists and has finite $k$-th moment, then $EL_1^{2k} < \infty$ and $\Psi(k) < 0$ by Theorem 3.9, which is (3.39) by Lemma 3.7. Conversely, if (3.39) holds, then $EL_1^{2k} < \infty$ by Theorem 25.3 in Sato [6] and $\Psi(k) < 0$ by Lemma 3.7, so $E\sigma_\infty^{2k} < \infty$ by Theorem 3.9. But then also, by Lemma 3.7, $EL_1^2 < \infty$ and $\Psi(1) < 0$, so (3.18) holds as well and the limit variable exists.

3.4.2 The price process

In this section we investigate second order properties of the price process $(G_t)_{t\ge0}$. Recall that

$$G_t^{(r)} = G_{t+r} - G_t, \quad t \ge 0,\ r > 0,$$

corresponding to logarithmic asset returns over time periods of length $r$.

Theorem 3.12. Let the process $(G_t)_{t\ge0}$ be defined by (3.3) for the stationary volatility process $(\sigma_t)_{t\ge0}$. Suppose $(L_t)_{t\ge0}$ is a quadratic pure jump process, i.e. $\sigma = 0$. Suppose further that $EL_1^2 < \infty$, $EL_1 = 0$ and that $\Psi(1) < 0$. Then, for any $h \ge r > 0$,

$$E\big(G_t^{(r)}\big) = 0, \quad (3.40)$$
$$E\big(G_t^{(r)}\big)^2 = \frac{\beta r}{-\Psi(1)}\,EL_1^2, \quad (3.41)$$
$$\mathrm{cov}\big(G_t^{(r)}, G_{t+h}^{(r)}\big) = 0. \quad (3.42)$$

Proof. Since $(L_t)_{t\ge0}$ is a quadratic pure jump process, its quadratic variation process is given by

$$[L]_t = \sum_{0<s\le t}(\Delta L_s)^2, \quad t \ge 0.$$

Then, by properties of the stochastic integral and the compensation formula,

$$EG_r^2 = E\int_0^r\sigma_{s-}^2\,d[L]_s = E\sum_{0<s\le r}\sigma_{s-}^2(\Delta L_s)^2 = E\int_0^r\int_{\mathbb{R}}\sigma_{s-}^2 x^2\,\Pi(dx)\,ds = EL_1^2\,\frac{\beta r}{-\Psi(1)}$$

(using (3.36)). This shows (3.41). By the fact that $(\sigma_t^2)_{t\ge0}$ is the stationary process and $\sigma_0^2$ is a finite random variable independent of $(L_t)_{t\ge0}$, we obtain for all $t \ge 0$,

$$E\int_0^t\sigma_{s-}^2\,d[L]_s = E\sum_{0<s\le t}\sigma_{s-}^2(\Delta L_s)^2 = E\sigma_0^2\,E\sum_{0<s\le t}(\Delta L_s)^2 = E\sigma_0^2\,E[L]_t = E\sigma_0^2\,EL_t^2 = E\sigma_0^2\,t\,EL_1^2 < \infty.$$

It now follows from the Itô isometry for square-integrable martingales as integrators that

$$E\big[G_t^{(r)}G_{t+h}^{(r)}\big] = E\Big(\int_0^{t+h+r}\sigma_{s-}\mathbf{1}_{(t,t+r]}(s)\,dL_s\int_0^{t+h+r}\sigma_{s-}\mathbf{1}_{(t+h,t+h+r]}(s)\,dL_s\Big) = E\int_0^{t+h+r}\sigma_{s-}^2\,\mathbf{1}_{(t,t+r]}(s)\mathbf{1}_{(t+h,t+h+r]}(s)\,d[L]_s = 0$$

for $h \ge r$. By the martingale property of $(L_t)_{t\ge0}$ we obtain (3.40), and hence (3.42) also follows. That $(L_t)_{t\ge0}$ is a martingale follows from

$$E[L_t \mid \mathcal{F}_s] = E[(L_t - L_s) + L_s \mid \mathcal{F}_s] = E[L_t - L_s] + L_s = (t-s)\,EL_1 + L_s = L_s,$$

using the independent and stationary increment properties of $(L_t)_{t\ge0}$. Integrability follows from $E|L_t| \le \sqrt{EL_t^2} = \sqrt{t\,EL_1^2} < \infty$ (using that $EL_1^2 < \infty$).

Theorem 3.13. Assume the conditions in Theorem 3.12. Assume further that $EL_1^4 < \infty$ and $\Psi(2) < 0$. Then, for any $t \ge 0$ and $h \ge r > 0$,

$$\mathrm{cov}\big((G_t^{(r)})^2, (G_{t+h}^{(r)})^2\big) = \Big(\frac{e^{-r\Psi(1)} - 1}{-\Psi(1)}\Big)EL_1^2\,\mathrm{cov}\big(G_r^2, \sigma_r^2\big)\,e^{h\Psi(1)}. \quad (3.43)$$

Proof. Write $E_r$ for the conditional expectation given the $\sigma$-algebra $\mathcal{F}_r$. It follows, using the compensation formula, (3.17) and (3.32), that

$$E_r\big(G_h^{(r)}\big)^2 = E_r\Big(2\int_{h+}^{h+r}G_{s-}\,dG_s + [G]_{h+r} - [G]_h\Big) = E_r\Big(2\int_{h+}^{h+r}G_{s-}\sigma_{s-}\,dL_s\Big) + E_r\int_{h+}^{h+r}\sigma_{s-}^2\,d[L]_s$$
$$= 0 + E_r\sum_{h<s\le h+r}\big(\sigma_r^2 A_{r,s-} + B_{r,s-}\big)(\Delta L_s)^2 = E_r\int_h^{h+r}\int_{\mathbb{R}}\big(\sigma_r^2 A_{r,s-} + B_{r,s-}\big)x^2\,\Pi(dx)\,ds$$
$$= EL_1^2\int_h^{h+r}\big(\sigma_r^2\,EA_{r,s-} + EB_{r,s-}\big)\,ds = EL_1^2\int_h^{h+r}E_r\big(\sigma_{s-}^2\big)\,ds$$
$$= EL_1^2\int_h^{h+r}\big[(\sigma_r^2 - E\sigma_0^2)e^{(s-r)\Psi(1)} + E\sigma_{(s-r)-}^2\big]\,ds = \big(\sigma_r^2 - E\sigma_0^2\big)\,EL_1^2\Big(\int_0^r e^{-s\Psi(1)}\,ds\Big)e^{h\Psi(1)} + E\sigma_0^2\,EL_1^2\,r. \quad (3.44)$$

Now we condition on $\mathcal{F}_r$ to obtain

$$E\big((G_0^{(r)})^2(G_h^{(r)})^2\big) = E\big(G_r^2\,E_r(G_h^{(r)})^2\big) = EL_1^2\Big(\frac{e^{-r\Psi(1)} - 1}{-\Psi(1)}\Big)E\big(G_r^2\sigma_r^2 - G_r^2\,E\sigma_0^2\big)e^{h\Psi(1)} + E\sigma_0^2\,EL_1^2\,r\,EG_r^2.$$

Using (3.36), this reveals that

$$\mathrm{cov}\big(G_r^2, (G_h^{(r)})^2\big) = \Big(\frac{e^{-r\Psi(1)} - 1}{-\Psi(1)}\Big)EL_1^2\,\mathrm{cov}\big(G_r^2, \sigma_r^2\big)e^{h\Psi(1)} + EG_r^2\Big(\frac{\beta r}{-\Psi(1)}EL_1^2 - E\big(G_h^{(r)}\big)^2\Big).$$

Plugging (3.41) into this equation then gives us (3.43).

Theorem 3.14. Assume the conditions in Theorem 3.12. Assume further that $EL_1^8 < \infty$, $\Psi(4) < 0$, that $(L_t)_{t\ge0}$ is of finite variation and that $\int_{\mathbb{R}} x^3\,\Pi(dx) = 0$. Then the right-hand side of (3.43) is strictly positive.

Proof. We shall prove that $\mathrm{cov}(G_t^2, \sigma_t^2) > 0$. First we calculate $E[G_t^2\sigma_t^2]$. Using integration by parts,

$$G_t^2 = [G]_t + 2\int_{0+}^t G_{s-}\,dG_s = \sum_{0<s\le t}\sigma_{s-}^2(\Delta L_s)^2 + 2\int_{0+}^t G_{s-}\sigma_{s-}\,dL_s.$$

Substituting from (3.12) we obtain

$$\phi G_t^2 = \sigma_t^2 - \beta t + \eta\int_{0+}^t\sigma_{s-}^2\,ds - \sigma_0^2 + 2\phi\int_{0+}^t G_{s-}\sigma_{s-}\,dL_s,$$

meaning that

$$\phi E\big[G_t^2\sigma_t^2\big] = E\sigma_t^4 - \beta t\,E\sigma_t^2 + \eta\int_{0+}^t E\big[\sigma_t^2\sigma_{s-}^2\big]\,ds - E\big[\sigma_0^2\sigma_t^2\big] + 2\phi\,E\int_{0+}^t G_{s-}\sigma_{s-}\sigma_t^2\,dL_s. \quad (3.45)$$

First we shall show that the last term on the right-hand side of (3.45) is zero. Recall (3.17) and write

$$\int_{0+}^t G_{s-}\sigma_{s-}\sigma_t^2\,dL_s = \int_{0+}^t G_{s-}\sigma_{s-}\big(\sigma_{s-}^2 A_{s-,t} + B_{s-,t}\big)\,dL_s, \quad (3.46)$$

where $A_{s,t} = e^{X_s - X_t}$ and $B_{s,t} = \beta\int_s^t e^{X_u - X_t}\,du$. Let $I_t := \int_{0+}^t e^{X_{s-}}G_{s-}\sigma_{s-}^3\,dL_s$. We will show that $E[e^{-X_t}I_t] = 0$, meaning that the $A$-component in (3.46) has expectation 0. Integration by parts gives

$$e^{-X_t}I_t = \int_{0+}^t e^{-X_{s-}}\,dI_s + \int_{0+}^t I_{s-}\,d(e^{-X_s}) + C_t, \quad (3.47)$$

where $C_t$ is the quadratic covariation of $e^{-X}$ and $I$. We claim that $I_t$ is a square-integrable zero-mean martingale. Indeed, for all $t \ge 0$,

$$E\int_{0+}^t e^{2X_{s-}}G_{s-}^2\sigma_{s-}^6\,d[L]_s \le e^{2\eta t}\,E\int_{0+}^t G_{s-}^2\sigma_{s-}^6\,d[L]_s = e^{2\eta t}\,E\Big(\sum_{0<s\le t}\sigma_{s-}^8(\Delta L_s)^4 + 2\sum_{0<s\le t}G_{s-}\sigma_{s-}^7(\Delta L_s)^3\Big) < \infty.$$

Hence the first term on the right-hand side of (3.47) has expectation 0. By applying integration by parts to $e^{-X_t}e^{-t\Psi(1)}$ we can substitute

$$d(e^{-X_t}) = e^{t\Psi(1)}\,d\big(e^{-X_t - t\Psi(1)} - 1\big) + e^{-X_{t-}}\Psi(1)\,dt.$$

We claim that $e^{-X_t - t\Psi(1)} - 1$ is a square-integrable zero-mean martingale. Indeed, by the independent and stationary increment properties,

$$E\big[e^{-X_t} \mid \mathcal{F}_s\big] = E\big[e^{-(X_t - X_s)}e^{-X_s} \mid \mathcal{F}_s\big] = e^{-X_s}\,E\big[e^{-(X_t - X_s)}\big] = e^{-X_s}\,E\big[e^{-X_{t-s}}\big] = e^{-X_s}e^{(t-s)\Psi(1)},$$

so that $E[e^{-X_t - t\Psi(1)} - 1 \mid \mathcal{F}_s] = e^{-X_s - s\Psi(1)} - 1$. Also $E[e^{-X_t - t\Psi(1)} - 1] = 0$, and square-integrability follows from the fact that $EL_1^4 < \infty$, see Lemma 3.7. We conclude that the second term on the right-hand side of (3.47) is the sum of two integrals, the first having expectation 0 since it is an integral with respect to a square-integrable zero-mean martingale. The remaining integral is $\Psi(1)\int_0^t e^{-X_{s-}}I_s\,ds$. Since $L_t$ is pure jump,

$$\Delta C_t = \big(\Delta e^{-X_t}\big)\big(\Delta I_t\big) = \phi\,G_{t-}\sigma_{t-}^3(\Delta L_t)^3,$$

where we used the calculations in (3.14) in the last equality. Letting $M_t = \sum_{0<s\le t}(\Delta L_s)^3$, the quadratic covariation is

$$C_t = \phi\int_{0+}^t G_{s-}\sigma_{s-}^3\,dM_s.$$

If we can show that $M_t$ is a locally square-integrable martingale with mean zero, then we see that $C_t$ has expectation 0. It indeed follows by the compensation formula that

$$E\Big[\sum_{0<s\le t}(\Delta L_s)^3\,\Big|\,\mathcal{F}_u\Big] = \sum_{0<s\le u}(\Delta L_s)^3 + E\Big[\sum_{u<s\le t}(\Delta L_s)^3\,\Big|\,\mathcal{F}_u\Big] = \sum_{0<s\le u}(\Delta L_s)^3 + \int_u^t\int_{\mathbb{R}} x^3\,\Pi(dx)\,ds = \sum_{0<s\le u}(\Delta L_s)^3,$$

by our assumption that $\int_{\mathbb{R}} x^3\,\Pi(dx) = 0$. That $M_t$ has mean zero can be shown similarly. Square-integrability of $(M_t)_{t\ge0}$ follows from Theorem 25.3 in Sato [6] and the fact that $E|L_1|^6 < \infty$. Taking expectations in (3.47) thus gives $E[e^{-X_t}I_t] = \Psi(1)\int_0^t E[e^{-X_s}I_s]\,ds$ (using that $X_t$ has no fixed points of discontinuity, almost surely), implying that $E[e^{-X_t}I_t] = 0$ (noting that $x \equiv 0$ is the unique solution to the differential equation $dx_t = \Psi(1)x_t\,dt$ with $x_0 = 0$). So the $A$-component in (3.46) has expectation 0. Now we turn to the $B$-component in (3.46), which we write as

$$\beta\Big(\int_0^t e^{X_u - X_t}\,du\Big)\Big(\int_{0+}^t G_{s-}\sigma_{s-}\,dL_s\Big) - \beta\int_{0+}^t G_{s-}\sigma_{s-}\Big(\int_0^s e^{X_u - X_t}\,du\Big)\,dL_s. \quad (3.48)$$

After applying integration by parts, the first term equals

$$\beta\int_0^t\Big(\int_{0+}^s G_{u-}\sigma_{u-}\,dL_u\Big)e^{-(X_t - X_s)}\,ds + \beta\tilde C_t, \quad (3.49)$$

where $\tilde C$ is the quadratic covariation term, satisfying

$$\Delta\tilde C_t = \Delta\Big(e^{-X_t}\int_0^t e^{X_u}\,du\Big)\big(G_{t-}\sigma_{t-}\Delta L_t\big) = \phi\,e^{-X_{t-}}\Big(\int_0^t e^{X_u}\,du\Big)G_{t-}\sigma_{t-}(\Delta L_t)^3.$$

Here $\tilde C_t$ has expectation 0, again since $\int_{\mathbb{R}} x^3\,\Pi_L(dx) = 0$, so (3.49) has expectation 0; a similar argument shows that the second term in (3.48) has expectation zero as well. We conclude that the last term in (3.45) contributes 0 to the expectation. Now we turn to the other integral in (3.45). Using the fact that $\sigma_s^2$ has no fixed points of discontinuity, almost surely, together with (3.33) and the fact that $(\sigma_t^2)_{t\ge0}$ is the stationary version, we write

$$E\big[\sigma_t^2\sigma_{s-}^2\big] = E\big[\sigma_t^2\sigma_s^2\big] = \mathrm{var}\big(\sigma_0^2\big)e^{(t-s)\Psi(1)} + \big(E\sigma_0^2\big)^2.$$

Thus, from (3.45), it follows that

$$\phi E\big[G_t^2\sigma_t^2\big] = E\sigma_t^4 - \beta t\,E\sigma_t^2 + \eta\int_{0+}^t\Big(\mathrm{var}(\sigma_0^2)e^{(t-s)\Psi(1)} + (E\sigma_0^2)^2\Big)\,ds - E\big[\sigma_0^2\sigma_t^2\big] + 0$$
$$= E\sigma_t^4 - E\big[\sigma_0^2\sigma_t^2\big] - \beta t\,E\sigma_t^2 + \eta\,\mathrm{var}(\sigma_0^2)\,\frac{1 - e^{t\Psi(1)}}{-\Psi(1)} + t\eta\big(E\sigma_0^2\big)^2. \quad (3.50)$$

It follows from (3.36)–(3.38) that

$$E\sigma_t^4 - E\big[\sigma_0^2\sigma_t^2\big] = E\sigma_t^4 - \Big(\mathrm{cov}(\sigma_0^2, \sigma_t^2) + (E\sigma_0^2)^2\Big) = \mathrm{var}(\sigma_0^2)\big(1 - e^{t\Psi(1)}\big),$$

so that

$$\phi E\big[G_t^2\sigma_t^2\big] = \mathrm{var}(\sigma_0^2)\big(1 - e^{t\Psi(1)}\big) - \beta t\,E\sigma_t^2 + \eta\,\mathrm{var}(\sigma_0^2)\,\frac{1 - e^{t\Psi(1)}}{-\Psi(1)} + t\eta\big(E\sigma_0^2\big)^2. \quad (3.51)$$

Note that, by the fact that $\int_{\mathbb{R}} y^2\,\Pi(dy) = EL_1^2$, (3.23) gives us $\phi\,EL_1^2 = \Psi(1) + \eta$. Thus, from (3.41),

$$\phi\,EG_t^2\,E\sigma_0^2 = \frac{\phi\beta t\,EL_1^2\,E\sigma_0^2}{-\Psi(1)} = -\beta t\,E\sigma_0^2 + \frac{\beta t\,\eta\,E\sigma_0^2}{-\Psi(1)} = -\beta t\,E\sigma_0^2 + \eta t\big(E\sigma_0^2\big)^2$$

(using (3.36)). Subtracting this from (3.51) gives

$$\phi\,\mathrm{cov}\big(G_t^2, \sigma_t^2\big) = \mathrm{var}\big(\sigma_0^2\big)\Big(1 - e^{t\Psi(1)} + \eta\,\frac{1 - e^{t\Psi(1)}}{-\Psi(1)}\Big), \quad (3.52)$$

which is positive by the fact that $\Psi(1) < 0$.

The theorems above tell us that the returns are uncorrelated, while the squared returns are correlated; the autocorrelation function of the squared returns decreases exponentially. Furthermore, $\mathrm{Var}(G_t^{(r)})$ is linear in $r$.

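These second-order properties can be checked empirically. The sketch below simulates a discretised COGARCH path driven by a compound Poisson process with standard normal jumps — a hypothetical choice of driver and parameters, made only for illustration — and compares the lag-one sample autocorrelation of the returns with that of the squared returns.

```python
import math
import random

def sample_poisson(rng, lam):
    # Knuth's inversion-by-multiplication sampler (fine for small lam)
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_returns(n, dt, beta, eta, phi, rate, seed=0):
    """Euler discretisation of the COGARCH pair (G, sigma^2) driven by a
    compound Poisson process with N(0,1) jumps (illustrative parameters)."""
    rng = random.Random(seed)
    sigma2 = beta / (eta - phi)   # stationary mean of sigma^2 when EL_1^2 = 1
    out = []
    for _ in range(n):
        jumps = [rng.gauss(0.0, 1.0) for _ in range(sample_poisson(rng, rate * dt))]
        out.append(math.sqrt(sigma2) * sum(jumps))              # return over dt
        # sigma^2 update uses d[L,L], i.e. the sum of squared jumps
        sigma2 += (beta - eta * sigma2) * dt + phi * sigma2 * sum(j * j for j in jumps)
    return out

def acf(x, lag):
    n, m = len(x), sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag))
    return num / sum((v - m) ** 2 for v in x)

r = simulate_returns(n=5000, dt=1.0, beta=0.04, eta=0.12, phi=0.08, rate=1.0)
acf_ret = acf(r, 1)                   # close to zero
acf_sq = acf([v * v for v in r], 1)   # volatility clustering
```

With these hypothetical parameters the sample autocorrelation of the returns stays near zero, while the squared returns typically show positive, slowly decaying autocorrelation, in line with the theorems above.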

4 COGARCH in option pricing

In this chapter we consider an important application of the COGARCH model, namely option pricing. Let the financial market be defined on a probability space $(\Omega, \mathcal{F}, P, (\mathcal{F}_t)_{t\ge0})$ which is large enough to support a Lévy process $L$ satisfying $EL_1^4 < \infty$. We let $(R_t)_{t\ge0}$ be the return process, which is driven by the COGARCH process in the following way:

$$dR_t = \big(r + \lambda(\sigma_{t-})\sigma_{t-}\big)\,dt + \sigma_{t-}\,dL_t. \quad (4.1)$$

We see that the return process is the COGARCH process $G$ defined in (3.3), with an extra drift term $r + \lambda(\sigma_{t-})\sigma_{t-}$. Here, we interpret $r$ as the interest rate and $\lambda: [0,\infty) \to \mathbb{R}$ as a premium to compensate for the risk of default. This will be explained in more detail later. We now wish to define the stock price process. The process $\tilde S$ defined by

$$d\tilde S_t = \tilde S_t\,dR_t, \quad \tilde S_0 > 0, \quad (4.2)$$

becomes negative whenever $R_t$ makes a jump smaller than or equal to $-1$. If we want to define a stock price process, this process must stay positive. We thus first introduce the following stopping time:

$$\tau := \inf\{t > 0 : \Delta R_t \le -1\}$$

and can now define the stock price process as follows:

$$S_t = S_0\,\mathcal{E}(R)_t\,\mathbf{1}_{\{t < \tau\}},$$

where $\mathcal{E}(R)$ denotes the stochastic exponential of $R$. Hence, before default, the stock price process follows the same dynamics as in (4.2),

$$dS_t = S_t\,dR_t, \quad S_0 > 0;$$

however, it drops to 0 at time $t = \tau$, provided $\tau < \infty$, and stays there. To compensate for the risk of this happening, the risk premium $\lambda$ in (4.1) is introduced. Let us take a look at the effects of default on the Lévy process $L$ and the return process. Define the stopped version of $L$ as

$$L_t^\tau := \begin{cases}L_t & \text{if } t \le \tau;\\ L_\tau & \text{if } t > \tau.\end{cases}$$

Define the stopped version of $L$ with default adjustment as follows:

$$\hat L_t := L_t^\tau + \mathbf{1}_{\{t \ge \tau\}}\Big(-\Delta L_\tau - \frac{1}{\sigma_{\tau-}}\Big),$$

so $\hat L_t = L_t$ whenever $t < \tau$ and $\hat L_t = L_{\tau-} - \frac{1}{\sigma_{\tau-}}$ whenever $t \ge \tau$. Hence

$$\sigma_{\tau-}\Delta\hat L_\tau = \sigma_{\tau-}\big(\hat L_\tau - L_{\tau-}\big) = \sigma_{\tau-}\Big(-\frac{1}{\sigma_{\tau-}}\Big) = -1. \quad (4.3)$$

Now define the stopped process $\hat R$ with default adjustment as follows:

$$\hat R_t = \int_0^{t\wedge\tau}\big(r + \lambda(\sigma_{u-})\sigma_{u-}\big)\,du + \int_0^t\sigma_{u-}\,d\hat L_u, \quad (4.4)$$

so that $\hat R_t = R_t$ whenever $t < \tau$ and $\Delta\hat R_\tau = \sigma_{\tau-}\Delta\hat L_\tau = -1$, by (4.3). For $t > \tau$ the process $\hat R$ remains constant, so by construction we have that $S_t = S_0\,\mathcal{E}(\hat R)_t$.

For risk-neutral purposes, we want the discounted price process $Z$ defined as $Z_t = e^{-rt}S_t$ to be a martingale under the risk-neutral measure $Q$. Note that it holds by definition that $S_t = S_0 + \int_0^t S_{u-}\,d\hat R_u$, and hence we want $(\hat R_t - r(t\wedge\tau))_{t\ge0}$ to be a $Q$-martingale. The next theorem shows us how we must choose $\lambda$ for this to happen.

Theorem 4.1. Assume that $EL_1 = 0$ and that $EL_1^2 = 1$. Let $\hat L$ be the stopped version of $L$ with default adjustment

$$\hat L_t = L_t^\tau + \mathbf{1}_{\{t \ge \tau\}}\big(-\Delta L_\tau - 1/\sigma_{\tau-}\big), \quad t \ge 0.$$

With $\hat\lambda$ defined by

$$\hat\lambda(x) = -\int_{-\infty}^{-1/x}\Big(y + \frac{1}{x}\Big)\,\Pi(dy), \quad x > 0,$$

the compensated version $\big(\hat L_t - \int_0^{t\wedge\tau}\hat\lambda(\sigma_{u-})\,du\big)_{t\ge0}$ is a martingale.

The $P$-dynamics of $(\sigma_t)_{t\ge0}$ and $(R_t)_{t\ge0}$ are described in (3.10) and (4.1) respectively. We further assume a martingale measure $Q \sim P$ under which $\hat L$ is a Lévy process such that $EL_1 = 0$, $EL_1^2 = 1$ and $EL_1^4 < \infty$. Let $(\gamma^Q, (\sigma^Q)^2, \Pi^Q)$ be its Lévy triplet. The process $(\sigma_t)_{t\ge0}$ described in (3.10) then follows the $Q$-dynamics

$$d\sigma_t^2 = \big(\beta^Q - \eta^Q\sigma_{t-}^2\big)\,dt + \phi^Q\sigma_{t-}^2\,d[L,L]_t, \quad t > 0, \quad (4.5)$$

for some, possibly altered, parameters $\beta^Q > 0$, $\eta^Q \ge 0$, $\phi^Q \ge 0$. We let

$$\hat\lambda^Q(x) = -\int_{-\infty}^{-1/x}\Big(y + \frac{1}{x}\Big)\,\Pi^Q(dy), \quad (4.6)$$

so that, by Theorem 4.1, $\hat L_t - \int_0^{t\wedge\tau}\hat\lambda^Q(\sigma_{u-})\,du$ is a $Q$-martingale, and we set

$$\hat R_t = \int_0^{t\wedge\tau}\big(r + \hat\lambda^Q(\sigma_{u-})\sigma_{u-}\big)\,du + \int_0^t\sigma_{u-}\,d\hat L_u. \quad (4.7)$$

The stock price process is then given by $S = S_0\,\mathcal{E}(\hat R)$. Recall that a contingent claim $C$ is an $\mathcal{F}_T$-measurable random variable, whose price $V_t(C)$ at time $t$ can be determined by $V_t(C) = e^{-r(T-t)}E^Q[C \mid \mathcal{F}_t]$.

4.1 Variance-Gamma COGARCH

In this section we take $L$ as the Variance Gamma process, see also Madan and Seneta [11] and Madan, Carr and Chang [12] for a more detailed description of this process. It is defined as

$$L_t = \theta_{VG}\,\Gamma_t + \sigma_{VG}\,W_{\Gamma_t}, \quad t \ge 0, \quad (4.8)$$

where $(\Gamma_t)_{t\ge0}$ is the Gamma process with shape parameter $k = t/\nu_{VG}$ and scale parameter $\theta = \nu_{VG}$, so that $\Gamma_t$ has density $f$ satisfying

$$f(x) = \frac{1}{\Gamma(k)\theta^k}\,x^{k-1}e^{-x/\theta},$$

where $\Gamma(k)$ is the Gamma function, and $(W_t)_{t\ge0}$ is a standard Brownian motion. The process $L$ is a pure jump process with characteristic triplet $(\gamma^Q, 0, \Pi^Q)$, where the Lévy measure $\Pi^Q$ satisfies

$$\Pi^Q(dx) = \frac{\exp\big(\theta_{VG}x/\sigma_{VG}^2\big)}{|x|\,\nu_{VG}}\exp\Big(-\frac{\sqrt{2\sigma_{VG}^2/\nu_{VG} + \theta_{VG}^2}}{\sigma_{VG}^2}\,|x|\Big)\,dx, \quad (4.9)$$

and the drift $\gamma^Q$ satisfies

$$\gamma^Q = \theta_{VG} - \int_{|x|>1}x\,\Pi^Q(dx).$$

We calculate $\hat\lambda^Q(x)$ from (4.6) as follows:

$$\hat\lambda^Q(x) = \frac{1}{\nu_{VG}}\int_{-\infty}^{-1/x}\exp\Bigg(\frac{\Big(\theta_{VG} + \sqrt{2\sigma_{VG}^2/\nu_{VG} + \theta_{VG}^2}\Big)y}{\sigma_{VG}^2}\Bigg)\,dy + \frac{1}{x\nu_{VG}}\int_{-\infty}^{-1/x}\frac{1}{y}\exp\Bigg(\frac{\Big(\theta_{VG} + \sqrt{2\sigma_{VG}^2/\nu_{VG} + \theta_{VG}^2}\Big)y}{\sigma_{VG}^2}\Bigg)\,dy$$
$$= \frac{\sigma_{VG}^2\exp\Bigg(-\dfrac{\theta_{VG} + \sqrt{2\sigma_{VG}^2/\nu_{VG} + \theta_{VG}^2}}{\sigma_{VG}^2\,x}\Bigg)}{\nu_{VG}\Big(\theta_{VG} + \sqrt{2\sigma_{VG}^2/\nu_{VG} + \theta_{VG}^2}\Big)} - \frac{1}{x\nu_{VG}}\,E_1\Bigg(\frac{\theta_{VG} + \sqrt{2\sigma_{VG}^2/\nu_{VG} + \theta_{VG}^2}}{\sigma_{VG}^2\,x}\Bigg), \quad (4.10)$$

where $E_1(x) = \int_x^\infty\frac{1}{y}e^{-y}\,dy$ for $x > 0$. The latter integral will be approximated, since it has no closed-form expression.

4.1.1 Simulation studies

We will simulate the $Q$-dynamics in (4.5) and (4.7). To do this, we first need to approximate $E_1(x)$, which we do by the following approximation of Barry et al. [9]:

$$E_1(x) \approx \frac{e^{-x}}{G + (1-G)e^{-x/(1-G)}}\,\ln\Big(1 + \frac{G}{x} - \frac{1-G}{(h(x) + bx)^2}\Big),$$

where

$$h(x) := \frac{1}{1 + x\sqrt{x}} + \frac{h_\infty\,q(x)}{1 + q(x)}, \qquad q(x) := \frac{20}{47}\,x^{\sqrt{31/26}},$$
$$h_\infty := \frac{(1-G)(G^2 - 6G + 12)}{3G(2-G)^2 b}, \qquad b := \sqrt{\frac{2(1-G)}{G(2-G)}}, \qquad G := e^{-\gamma},$$

with $\gamma$ being the Euler–Mascheroni constant (which has first five decimal places 0.57721). This approximation enables us to calculate the default premium $\hat\lambda^Q(x)x$ for varying $x$, see Fig. 4.1.
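The approximation above transcribes directly into code (a sketch; the constant names mirror the formulas):

```python
import math

GAMMA = 0.5772156649015329   # Euler-Mascheroni constant
G = math.exp(-GAMMA)
B = math.sqrt(2.0 * (1.0 - G) / (G * (2.0 - G)))
H_INF = (1.0 - G) * (G * G - 6.0 * G + 12.0) / (3.0 * G * (2.0 - G) ** 2 * B)

def e1_approx(x):
    """Barry et al. approximation of E1(x) = int_x^inf exp(-y)/y dy, x > 0."""
    q = (20.0 / 47.0) * x ** math.sqrt(31.0 / 26.0)
    h = 1.0 / (1.0 + x * math.sqrt(x)) + H_INF * q / (1.0 + q)
    prefactor = math.exp(-x) / (G + (1.0 - G) * math.exp(-x / (1.0 - G)))
    return prefactor * math.log(1.0 + G / x - (1.0 - G) / (h + B * x) ** 2)
```

For example, `e1_approx(1.0)` gives roughly 0.2193, against the true value $E_1(1) \approx 0.2194$.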

Now we simulate the path of $(L_t)_{t\ge0}$ (the Variance Gamma process) on the interval $[0, T]$, divided into $N$ increments of size $\Delta = T/N$. We do this by the following iteration, for $i$ from 1 to $N$:

• Generate a gamma random variable $g(i) \sim \Gamma(\Delta/\nu_{VG}, \nu_{VG})$ and a normal random variable $n(i) \sim N(0,1)$, both independent of each other and of past variables.

• Set $\hat L_{i\Delta} = \hat L_{(i-1)\Delta} + \theta_{VG}\cdot g(i) + \sigma_{VG}\sqrt{g(i)}\cdot n(i)$, with $\hat L_0 = 0$.

Then we simulate the path of $(\sigma_t^2)_{t\ge0}$ described in (4.5) as follows. Iterate for $i$ from 1 to $N$:

• Set $\hat\sigma_{i\Delta}^2 = \hat\sigma_{(i-1)\Delta}^2 + \big(\beta^Q - \eta^Q\hat\sigma_{(i-1)\Delta}^2\big)\Delta + \phi^Q\hat\sigma_{(i-1)\Delta}^2\big(\hat L_{i\Delta} - \hat L_{(i-1)\Delta}\big)^2$,

where we set $\hat\sigma_0^2 = \beta^Q/(\eta^Q - \phi^Q)$ to ensure stationarity. We simulate the path of the return process $(R_t)_{t\ge0}$ as follows. Iterating for $i$ from 1 to $N$:

• Set $\hat R_{i\Delta} = \hat R_{(i-1)\Delta} + \big(r + \hat\lambda^Q(\hat\sigma_{(i-1)\Delta})\hat\sigma_{(i-1)\Delta}\big)\Delta + \hat\sigma_{(i-1)\Delta}\big(\hat L_{i\Delta} - \hat L_{(i-1)\Delta}\big)$, setting $\hat R_0 = 0$.

For the stock price process, we again iterate for $i$ from 1 to $N$:

• Set $\hat S_{i\Delta} = \hat S_{(i-1)\Delta} + \hat S_{(i-1)\Delta}\big(\hat R_{i\Delta} - \hat R_{(i-1)\Delta}\big)$ whenever $\hat R_{i\Delta} - \hat R_{(i-1)\Delta} > -1$, and $\hat S_{i\Delta} = 0$ otherwise.
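The iteration can be sketched as follows (a minimal implementation; the risk premium $\hat\lambda^Q$ is passed in as a function, with a zero premium as placeholder where in practice one would plug in (4.10) via the $E_1$ approximation):

```python
import math
import random

def simulate_paths(N, T, r, betaQ, etaQ, phiQ, theta_vg, sigma_vg, nu_vg,
                   S0=100.0, lam_q=lambda x: 0.0, seed=1):
    """One joint path of (L_hat, sigma^2, R_hat, S_hat) under Q, following the
    iteration above. `lam_q` is x -> lambda^Q(x); the zero default is only a
    placeholder for the closed-form premium in (4.10)."""
    rng = random.Random(seed)
    dt = T / N
    L = [0.0]
    sig2 = [betaQ / (etaQ - phiQ)]   # stationary starting value (needs etaQ > phiQ)
    R = [0.0]
    S = [S0]
    for _ in range(N):
        g = rng.gammavariate(dt / nu_vg, nu_vg)            # Gamma subordinator step
        z = rng.gauss(0.0, 1.0)
        dL = theta_vg * g + sigma_vg * math.sqrt(g) * z    # Variance Gamma increment
        s = math.sqrt(sig2[-1])
        dR = (r + lam_q(s) * s) * dt + s * dL
        sig2.append(sig2[-1] + (betaQ - etaQ * sig2[-1]) * dt
                    + phiQ * sig2[-1] * dL * dL)
        L.append(L[-1] + dL)
        R.append(R[-1] + dR)
        # stock is absorbed at 0 after a return jump <= -1 (default)
        S.append(S[-1] * (1.0 + dR) if dR > -1.0 and S[-1] > 0.0 else 0.0)
    return L, sig2, R, S
```

With the parameters used in the figures below ($\beta^Q = 0.30$, $\eta^Q = 1 + 1/\sqrt{0.03}$, $\phi^Q = 1/\sqrt{0.03}$), the starting value is $\hat\sigma_0^2 = 0.30$ since $\eta^Q - \phi^Q = 1$.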

Figure 4.1: Risk-neutral default premium $\hat\lambda^Q(x)x$ for varying volatilities. The Variance Gamma parameters are $\theta_{VG} = 0$, $\sigma_{VG} = 1$, $\nu_{VG} = 0.01$.

Figure 4.3: Sigma process, $N = 5000$, $T = 1$, $\theta_{VG} = 0$, $\sigma_{VG} = 1$, $\nu_{VG} = 0.01$, $\beta^Q = 0.30$, $\eta^Q = 1 + 1/\sqrt{0.03}$, $\phi^Q = 1/\sqrt{0.03}$.

Figure 4.4: Return process, $N = 5000$, $T = 1$, $\theta_{VG} = 0$, $\sigma_{VG} = 1$, $\nu_{VG} = 0.01$, $\beta^Q = 0.30$, $\eta^Q = 1 + 1/\sqrt{0.03}$, $\phi^Q = 1/\sqrt{0.03}$.

Figure 4.5: Stock process, $N = 5000$, $T = 1$, $\theta_{VG} = 0$, $\sigma_{VG} = 1$, $\nu_{VG} = 0.01$, $\beta^Q = 0.30$, $\eta^Q = 1 + 1/\sqrt{0.03}$, $\phi^Q = 1/\sqrt{0.03}$.

Recall that at time $t = 0$, the price of a call option with expiration date $T$ and strike price $K$ is given by $e^{-rT}E^Q[\max(S_T - K, 0)]$. Given a maturity $T$ and a strike price $K$, we can thus apply Monte Carlo simulation to obtain estimates of the corresponding call price. Each simulation described above gives us a payoff $\max(\hat S_T - K, 0)$, so if we run many simulations (say 1000) and take the discounted average of all the payoffs, we obtain an estimate of the call price. Indeed, by the law of large numbers, the discounted average of these payoffs converges to the true price as the number of simulations increases to infinity. Table 4.1 below shows the estimated call price for varying strikes and maturities.
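The Monte Carlo step itself is generic: discount the average payoff over simulated terminal prices. To keep the example self-contained and checkable, the sketch below uses a lognormal terminal price as a stand-in for the COGARCH simulator (a hypothetical substitution — the averaging step is identical, only the path simulator differs), so the estimate can be compared with the Black–Scholes closed form.

```python
import math
import random

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes call price (used here only to check the estimate)."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(simulate_ST, K, r, T, n_sims, seed=7):
    """Monte Carlo call price: discounted average of simulated payoffs."""
    rng = random.Random(seed)
    total = sum(max(simulate_ST(rng) - K, 0.0) for _ in range(n_sims))
    return math.exp(-r * T) * total / n_sims

# lognormal terminal price as a stand-in for the COGARCH path simulator
S0, K, r, sigma, T = 100.0, 100.0, 0.0, 0.2, 1.0
ST = lambda rng: S0 * math.exp((r - 0.5 * sigma ** 2) * T
                               + sigma * math.sqrt(T) * rng.gauss(0.0, 1.0))
price = mc_call(ST, K, r, T, n_sims=200_000)
```

With 200,000 draws the estimate lands within a few cents of the closed-form price, illustrating the law-of-large-numbers convergence used above.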

To illustrate, we also calculate the Black–Scholes implied volatilities for various strike prices and maturities; they are reported in Table 4.2 and shown in Fig. 4.6 below.
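Since the Black–Scholes call price is strictly increasing in the volatility, each implied volatility can be recovered by simple bisection (a sketch of that inversion):

```python
import math

def bs_call(S0, K, r, sigma, T):
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S0, K, r, T, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert the Black-Scholes formula by bisection
    (the call price is monotone increasing in sigma)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S0, K, r, mid, T) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Applying this to each entry of the call-price table produces the implied volatility surface of Fig. 4.6.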

maturity (T) \ strike (K)       65          95         125
0.6                          37.22275    12.99186    2.81844
0.8                          38.27531    14.66884    3.87381
1.0                          38.67691    16.31729    5.19689
1.2                          39.80377    17.62428    6.55090
1.4                          40.33851    19.21223    7.81366

Table 4.1: Estimated call option prices for varying maturities ($T$) and strikes ($K$), where $S_0 = 100$, $\beta^Q = 0.30$, $\eta^Q = 1 + 1/\sqrt{0.03}$, $\phi^Q = 1/\sqrt{0.03}$, $\theta_{VG} = 0$, $\sigma_{VG} = 1$, $\nu_{VG} = 0.01$.

maturity (T) \ strike (K)       65         95        125
0.6                          0.48718    0.33258    0.31711
0.8                          0.48262    0.33500    0.31082
1.0                          0.44755    0.34098    0.31571
1.2                          0.45532    0.34082    0.32146
1.4                          0.43916    0.34937    0.32518

Table 4.2: Corresponding Black–Scholes implied volatilities for varying maturities ($T$) and strikes ($K$), with the same parameters as in Table 4.1.

Figure 4.6: Implied volatility surface, $S_0 = 100$, $\beta^Q = 0.30$, $\eta^Q = 1 + 1/\sqrt{0.03}$, $\phi^Q = 1/\sqrt{0.03}$, with the same Variance Gamma parameters as in Table 4.1.

5 Statistical Estimation

In this chapter we discuss two methods to estimate the parameters $(\beta, \eta, \phi)$ in equations (3.3) and (3.4). In the first method these quantities follow from the moments and autocorrelations of the squared returns of the price process described in (3.3); here it is assumed that the data are equally spaced on the time grid. In the second method this assumption is dropped, but the estimation procedure is somewhat more involved. The goal of this chapter is to analyse both estimation procedures in detail and to end with a comparison of the two. Throughout this chapter we assume the following.

Assumptions 5.1. Let $(L_t)_{t\ge0}$ be a Lévy process with the following properties:

• $E(L_1) = 0$,

• $E(L_1^2) = 1$, and

• $L$ has no Gaussian part ($\sigma = 0$).

Note that the above properties translate to the Lévy triplet $(\gamma, \sigma^2, \Pi)$ as follows. We have $E(L_1) = 0$, and thus the drift of the Lévy process satisfies

$$\gamma - \int_{\{|x|\ge1\}} x\,\Pi(dx) = 0.$$

By the fact that $E(L_1^2) = 1$ and $\sigma = 0$ it holds that

$$\sigma^2 + \int_{\mathbb{R}} x^2\,\Pi(dx) = \int_{\mathbb{R}} x^2\,\Pi(dx) = 1.$$

5.1 Moments estimation

As said before, it is assumed that the observation times $0 = t_0 < t_1 < \cdots < t_n$ are equally spaced on the time grid. For simplicity we assume $\Delta t_i = t_i - t_{i-1} = 1$, so that $t_1 = 1, \dots, t_n = n$. We observe $G_i$, $i = 0, \dots, n$, from the process described in (3.3), which is assumed to be in its stationary regime. We will use the return data $G_i^{(1)} := G_i - G_{i-1}$ to estimate the parameters $(\beta, \eta, \phi)$. The estimation method is based on the following theorem.

Theorem 5.1. Let $(L_t)_{t\ge0}$ be a Lévy process such that Assumptions 5.1 are satisfied. Furthermore, assume that $E(L_1^4) < \infty$, $\int_{\mathbb{R}} x^3\,\Pi(dx) = 0$ and $\Psi(2) < 0$, and that $\mu, \lambda, c, q$ are constants such that $E\big((G_i^{(1)})^2\big) = \mu$, $\mathrm{Var}\big((G_i^{(1)})^2\big) = \lambda$ and $\rho(m) = \mathrm{corr}\big((G_i^{(1)})^2, (G_{i+m}^{(1)})^2\big) = c\,e^{-mq}$, $m \in \mathbb{N}$. Define

$$C_1 := \lambda - 2\mu^2 - 6\,\frac{1 - q - e^{-q}}{(1 - e^{q})(1 - e^{-q})}\,c\lambda \quad (5.1)$$

and

$$C_2 := \frac{2c\lambda q}{C_1(e^q - 1)(1 - e^{-q})}. \quad (5.2)$$

Then $C_1, C_2 > 0$ and $\beta, \eta, \phi$ are uniquely determined by $\mu, \lambda, c$ and $q$ and satisfy the following equations:

$$\beta = q\mu, \quad (5.3)$$
$$\phi = q\sqrt{1 + C_2} - q, \quad (5.4)$$
$$\eta = q\sqrt{1 + C_2}. \quad (5.5)$$

Proof. Using that $\Psi(1) < 0$ by Lemma 3.7 and setting $EL_1^2 = r = 1$ in (3.41) gives us

$$\mu = \frac{\beta}{|\Psi(1)|}. \quad (5.6)$$

Further, by (3.43) and (3.52) it holds that

$$\mathrm{cov}\big((G_t^{(r)})^2, (G_{t+h}^{(r)})^2\big) = \Big(\frac{e^{-r\Psi(1)} - 1}{-\Psi(1)}\Big)EL_1^2 \times \frac{1}{\phi}\,\mathrm{Var}\big(\sigma_0^2\big)\Big(1 - e^{r\Psi(1)} + \eta\,\frac{1 - e^{r\Psi(1)}}{-\Psi(1)}\Big)e^{h\Psi(1)} \quad (5.7)$$

and by (3.37) it holds that

$$\mathrm{Var}\big(\sigma_0^2\big) = \frac{2\beta^2}{|\Psi(1)\Psi(2)|} - \frac{\beta^2}{|\Psi(1)|^2}, \quad (5.8)$$

so that, after some calculations, the right-hand side of (5.7) becomes

$$E(L_1^2)\,\frac{\beta^2}{|\Psi(1)|^3}\Big(\frac{\eta}{\phi} + \frac{|\Psi(1)|}{\phi}\Big)\Big(\frac{2}{|\Psi(2)|} - \frac{1}{|\Psi(1)|}\Big)\big(1 - e^{-r|\Psi(1)|}\big)\big(e^{r|\Psi(1)|} - 1\big)e^{-h|\Psi(1)|}.$$

Applying Eq. (3.23) with $c = 1$ we see that

$$|\Psi(1)| = \eta - \phi\int_{\mathbb{R}} x^2\,\Pi(dx) = \eta - \phi\,E(L_1^2), \quad (5.9)$$

where we used that $L$ has no Gaussian part in the last equality. We conclude

$$\mathrm{cov}\big((G_t^{(r)})^2, (G_{t+h}^{(r)})^2\big) = E(L_1^2)\,\frac{\beta^2}{|\Psi(1)|^3}\Big(\frac{2\eta}{\phi} - E(L_1^2)\Big)\Big(\frac{2}{|\Psi(2)|} - \frac{1}{|\Psi(1)|}\Big)\big(1 - e^{-r|\Psi(1)|}\big)\big(e^{r|\Psi(1)|} - 1\big)e^{-h|\Psi(1)|}.$$

By setting $EL_1^2 = r = 1$, we now obtain

$$c = \frac{\beta^2}{\lambda|\Psi(1)|^3}\Big(\frac{2\eta}{\phi} - 1\Big)\Big(\frac{2}{|\Psi(2)|} - \frac{1}{|\Psi(1)|}\Big)\big(1 - e^{-|\Psi(1)|}\big)\big(e^{|\Psi(1)|} - 1\big) \quad (5.10)$$

and

$$q = |\Psi(1)|. \quad (5.11)$$

To get an expression for $\lambda$ we first need an expression for $E(G_r^{(1)})^4$. The product rule gives us that

$$G_t^4 = 2\int_0^t G_{s-}^2\,dG_s^2 + [G^2, G^2]_t, \quad (5.12)$$

where

$$G_t^2 = 2\int_0^t G_{s-}\,dG_s + [G, G]_t = 2\int_0^t G_{s-}\sigma_{s-}\,dL_s + \int_0^t\sigma_{s-}^2\,d[L, L]_s \quad (5.13)$$

and hence, plugging (5.13) into (5.12), we obtain

$$G_t^4 = 4\int_0^t G_{s-}^3\sigma_{s-}\,dL_s + 2\int_0^t G_{s-}^2\sigma_{s-}^2\,d[L, L]_s + 4\int_0^t G_{s-}^2\sigma_{s-}^2\,d[L, L]_s + 4\int_0^t G_{s-}\sigma_{s-}^3\,d[L, [L, L]]_s + \int_0^t\sigma_{s-}^4\,d[[L, L], [L, L]]_s. \quad (5.14)$$

As a consequence of the compensation formula and by the fact that $EL_1 = 0$, we have

$$E\Big(\int_0^t G_{s-}^3\sigma_{s-}\,dL_s\Big) = E\sum_{0<s\le t}G_{s-}^3\sigma_{s-}(\Delta L_s) = \int_{\mathbb{R}} x\,\Pi(dx)\int_0^t E\big(G_{s-}^3\sigma_{s-}\big)\,ds = 0,$$

where we also used that $L$ is pure jump. Further,

$$E\Big(\int_0^t G_{s-}\sigma_{s-}^3\,d[L, [L, L]]_s\Big) = E\sum_{0<s\le t}G_{s-}\sigma_{s-}^3(\Delta L_s)^3 = \int_{\mathbb{R}} x^3\,\Pi(dx)\int_0^t E\big(G_{s-}\sigma_{s-}^3\big)\,ds = 0$$

by the fact that $\int_{\mathbb{R}} x^3\,\Pi(dx) = 0$. Thus, taking expectations in (5.14) leaves us with

$$E(G_t^4) = 6E(L_1^2)\int_0^t E\big(G_{s-}^2\sigma_{s-}^2\big)\,ds + \int_{\mathbb{R}} x^4\,\Pi(dx)\int_0^t E\big(\sigma_{s-}^4\big)\,ds, \quad (5.15)$$

where

$$\int_{\mathbb{R}} x^4\,\Pi(dx) = \frac{\Psi(2) - 2\Psi(1)}{\phi^2}, \quad (5.16)$$

which follows from (3.23). Combining (3.12), (3.36), (3.52), (5.8) and (5.9) gives

$$E\big(G_t^2\sigma_t^2\big) = \frac{\beta^2}{\Psi(1)^2}\Big(\frac{2}{|\Psi(2)|} - \frac{1}{|\Psi(1)|}\Big)\Big(\frac{2\eta}{\phi} - E(L_1^2)\Big)\big(1 - e^{-t|\Psi(1)|}\big) + \frac{\beta^2}{\Psi(1)^2}\,E(L_1^2)\,t \quad (5.17)$$

and by plugging (3.37), (5.16) and (5.17) into (5.15) we obtain

$$E(G_r^4) = 6E(L_1^2)\,\frac{\beta^2}{\Psi(1)^2}\Big(\frac{2\eta}{\phi} - EL_1^2\Big)\Big(\frac{2}{|\Psi(2)|} - \frac{1}{|\Psi(1)|}\Big)\Big(r - \frac{1 - e^{-r|\Psi(1)|}}{|\Psi(1)|}\Big) + \frac{2\beta^2}{\phi^2}\Big(\frac{2}{|\Psi(2)|} - \frac{1}{|\Psi(1)|}\Big)r + 3\,\frac{\beta^2}{\Psi(1)^2}\big(E(L_1^2)\big)^2 r^2.$$

By setting $r = EL_1^2 = 1$, we obtain

$$\lambda = 6\,\frac{\beta^2}{|\Psi(1)|^3}\Big(\frac{2\eta}{\phi} - 1\Big)\Big(\frac{2}{|\Psi(2)|} - \frac{1}{|\Psi(1)|}\Big)\big(|\Psi(1)| - 1 + e^{-|\Psi(1)|}\big) + \frac{2\beta^2}{\phi^2}\Big(\frac{2}{|\Psi(2)|} - \frac{1}{|\Psi(1)|}\Big) + 2\,\frac{\beta^2}{\Psi(1)^2}. \quad (5.18)$$

Now (5.3) follows from (5.6) and (5.11). If we plug (5.6), (5.10) and (5.11) into (5.18), we obtain

$$\lambda = 6\,\frac{\lambda(q - 1 + e^{-q})}{(1 - e^{-q})(e^q - 1)}\,c + \frac{2\mu^2 q^2}{\phi^2}\Big(\frac{2}{|\Psi(2)|} - \frac{1}{q}\Big) + 2\mu^2$$

and hence

$$C_1 = \frac{2\mu^2 q^2}{\phi^2}\Big(\frac{2}{|\Psi(2)|} - \frac{1}{q}\Big) = \frac{2\mu^2 q^2}{\phi^2}\cdot\frac{\phi^2\int_{\mathbb{R}} x^4\,\Pi(dx)}{|\Psi(2)|\,q} > 0,$$

where we used (5.16) in the second equality. The first equality shows us that

$$\frac{C_1\phi^2}{2\mu^2 q^2} = \frac{2}{|\Psi(2)|} - \frac{1}{q},$$

which we can, together with (5.6), plug into (5.10), to obtain

$$\lambda c = \frac{(1 - e^{-q})(e^q - 1)}{q^3}\Big(\frac{2\eta}{\phi} - 1\Big)\frac{C_1\phi^2}{2}.$$

It follows that

$$qC_2 = \frac{2c\lambda q^2}{C_1(e^q - 1)(1 - e^{-q})} = \frac{\phi^2}{q}\Big(\frac{2\eta}{\phi} - 1\Big) = \Big(2 + \frac{\phi}{q}\Big)\phi > 0,$$

where we used that $q = |\Psi(1)| = \eta - \phi$ in the last equality. Solving the above equation for $\phi$ gives us (5.4), and (5.5) immediately follows from the previous equation.

We will estimate the parameters $(\beta, \eta, \phi)$ using a plug-in approach.

5.1.1 Moment estimation algorithm

Assume that the assumptions in Theorem 5.1 are satisfied. The algorithm is described in the following steps:

• Step 1: Estimate $\mu$ by the moment estimator

$$\hat\mu_n := \frac{1}{n}\sum_{i=1}^n\big(G_i^{(1)}\big)^2,$$

the empirical autocovariances by $\hat\lambda_n := (\hat\lambda_n(0), \hat\lambda_n(1), \dots, \hat\lambda_n(d))^T$, for fixed $d \ge 2$, where

$$\hat\lambda_n(m) := \frac{1}{n-m}\sum_{i=1}^{n-m}\Big(\big(G_{i+m}^{(1)}\big)^2 - \hat\mu_n\Big)\Big(\big(G_i^{(1)}\big)^2 - \hat\mu_n\Big), \quad m = 0, \dots, d,$$

and the autocorrelations by

$$\hat\rho_n := \big(\hat\lambda_n(1)/\hat\lambda_n(0), \dots, \hat\lambda_n(d)/\hat\lambda_n(0)\big)^T.$$

• Step 2: For $d \ge 2$ we define the mapping $M^{(2)}: \mathbb{R}_+^{d+2} \to \mathbb{R}$ by

$$M^{(2)}\big(\hat\rho_n, (\theta_1, \theta_2)\big) := \sum_{m=1}^d\big(\log(\hat\rho_n(m)) - \log(\theta_1) + \theta_2 m\big)^2. \quad (5.19)$$

We estimate $c$ and $q$ by minimising $M^{(2)}$ over $\theta = (\theta_1, \theta_2)$:

$$\hat\theta_n^{(2)} := \arg\min_{\theta\in\mathbb{R}_+^2} M^{(2)}\big(\hat\rho_n, \theta\big). \quad (5.20)$$

The superscript in $\hat\theta_n^{(2)}$ stands for the fact that we minimise the function $M^{(2)}$, but this estimator may be dominated by large differences. Later, we will replace $\hat\theta^{(2)}$ with an estimator which minimises the L1-norm of the differences and with a Huber estimator, to see which works best.

• Step 3: Define the mapping $I: \mathbb{R}_+^4 \to [0,\infty)^3$ by

$$I(\mu, \lambda, \theta) := \begin{cases}\big(q\mu,\ q\sqrt{1+C_2} - q,\ q\sqrt{1+C_2}\big) & \text{if } q, C_2 > 0,\\ (0, 0, 0) & \text{otherwise,}\end{cases}$$

with $C_2$ defined as in (5.2). Then we estimate $(\beta, \eta, \phi)$ by

$$\hat\Psi_n = I\big(\hat\mu_n, \hat\lambda_n, \hat\theta_n\big). \quad (5.21)$$
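The steps above can be sketched in code (a minimal version: step 2 reduces to an ordinary least-squares regression of $\log\hat\rho_n(m)$ on $m$, and step 3 applies (5.1)–(5.5); the sample moments of step 1 are assumed to have been computed already):

```python
import math

def fit_c_q(rho):
    """Step 2: minimising M^(2) is a linear regression of log rho(m) on m,
    with intercept log(c) and slope -q (only positive rho(m) are usable)."""
    ms = [m for m, r in enumerate(rho, start=1) if r > 0]
    ys = [math.log(r) for r in rho if r > 0]
    n = len(ms)
    mbar, ybar = sum(ms) / n, sum(ys) / n
    slope = sum((m - mbar) * (y - ybar) for m, y in zip(ms, ys)) \
            / sum((m - mbar) ** 2 for m in ms)
    return math.exp(ybar - slope * mbar), -slope   # (c, q)

def cogarch_params(mu, lam, c, q):
    """Step 3: map (mu, lambda, c, q) to (beta, eta, phi) via (5.1)-(5.5);
    returns (0, 0, 0) when the positivity conditions fail."""
    C1 = lam - 2.0 * mu ** 2 \
         - 6.0 * (1.0 - q - math.exp(-q)) \
           / ((1.0 - math.exp(q)) * (1.0 - math.exp(-q))) * c * lam
    if q <= 0.0 or C1 <= 0.0:
        return 0.0, 0.0, 0.0
    C2 = 2.0 * c * lam * q / (C1 * (math.exp(q) - 1.0) * (1.0 - math.exp(-q)))
    if C2 <= 0.0:
        return 0.0, 0.0, 0.0
    beta = q * mu
    eta = q * math.sqrt(1.0 + C2)
    phi = eta - q
    return beta, eta, phi
```

Note that by construction the estimates satisfy $\hat\eta - \hat\phi = \hat q$ and $\hat\beta = \hat q\hat\mu$, mirroring (5.3)–(5.5).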

In the next section we will choose our Lévy process $L$ and check whether the conditions are satisfied.

5.1.2 Checking conditions

We take $L$ as the Variance Gamma process defined as in (4.8) with parameters $\theta_{VG} = 0$, $\nu_{VG} = 1$ and $\sigma_{VG} = 1$, and call its Lévy measure $\Pi_{VG}$. Equation (4.9) shows us that the Lévy measure $\Pi_{VG}$ satisfies

$$\Pi_{VG}(dx) = \frac{1}{|x|}\exp\big(-\sqrt{2}\,|x|\big)\,dx. \quad (5.22)$$

We want to check whether the conditions in Theorem 5.1 are satisfied. It follows from the construction of the Variance Gamma process that $EL_1 = 0$. Now we check that $EL_1^2 = 1$. We note that, by the fact that $L$ has no Gaussian part, it holds that

$$EL_1^2 = \int_{\mathbb{R}} x^2\,\Pi_{VG}(dx) = \int_{[0,\infty)} x\exp\big(-\sqrt{2}\,x\big)\,dx - \int_{(-\infty,0)} x\exp\big(\sqrt{2}\,x\big)\,dx = \frac{1}{2} + \frac{1}{2} = 1.$$

By similar calculations it follows that $\int_{\mathbb{R}} x^4\,\Pi_{VG}(dx) = 3$, so that indeed $E(L_1^4) < \infty$, and also that $\int_{\mathbb{R}} x^3\,\Pi_{VG}(dx) = 0$. Furthermore, by (3.23) it holds that

$$\Psi(2) = -2\eta + \int_{\mathbb{R}}\big((1 + \phi x^2)^2 - 1\big)\,\Pi(dx) = -2\eta + \phi^2\int_{\mathbb{R}} x^4\,\Pi(dx) + 2\phi\int_{\mathbb{R}} x^2\,\Pi(dx) = -2\eta + 3\phi^2 + 2\phi,$$

so by choosing $\beta = 0.5$, $\eta = 0.92$ and $\phi = 0.68$ we have that $\Psi(2) = -0.0176 < 0$. We will now describe the path simulations.
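The moment computations above can be verified in a few lines. For this Lévy measure the even moments have the closed form $\int_{\mathbb{R}} x^k\,\Pi_{VG}(dx) = 2(k-1)!/\sqrt{2}^{\,k}$ (a sketch; `psi` is written out only for the exponents $c = 1, 2$ used in the text):

```python
import math

def vg_moment(k):
    """k-th moment of Pi_VG(dx) = |x|^{-1} exp(-sqrt(2)|x|) dx.
    Odd moments vanish by symmetry; for even k the integral equals
    2 * Integral_0^inf x^{k-1} exp(-sqrt(2) x) dx = 2 (k-1)! / sqrt(2)^k."""
    if k % 2 == 1:
        return 0.0
    return 2.0 * math.factorial(k - 1) / math.sqrt(2.0) ** k

def psi(c, eta, phi):
    """Psi(c) = -c*eta + int((1 + phi*x^2)^c - 1) Pi_VG(dx),
    expanded with the binomial theorem for c = 1, 2."""
    if c == 1:
        return -eta + phi * vg_moment(2)
    if c == 2:
        return -2.0 * eta + 2.0 * phi * vg_moment(2) + phi ** 2 * vg_moment(4)
    raise ValueError("only c = 1, 2 implemented")
```

In particular `vg_moment(2)` and `vg_moment(4)` return 1 and 3, confirming $EL_1^2 = 1$ and $\int x^4\,\Pi_{VG}(dx) = 3$, and `psi(2, eta, phi)` evaluates the expression $-2\eta + 2\phi + 3\phi^2$ above for any candidate parameter pair.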
