
MSc Stochastics and Financial Mathematics

Master Thesis

A Fourier-Cosine method for solving

Backward SDEs

Author: Caroline Kosalka

Supervisors: dr. Asma Khedher (UvA), Isabelle Liesker (RQ)

Examination date: June 4, 2018


Abstract

Backward Stochastic Differential Equations (BSDEs) are the stochastic counterpart of semi-linear parabolic partial differential equations. Since the discovery of the generalized Feynman-Kac formula, BSDEs have found a wide range of applications in economics and, more generally, in optimal control. As BSDEs can in general not be solved analytically, it is natural to ask for numerical methods approximating the unique solution of this type of equation. This thesis presents the basic theory of BSDEs whose generator satisfies a Lipschitz condition. Moreover, we introduce the system obtained by coupling a BSDE with a state process satisfying a forward SDE, the so-called Forward-Backward SDE, which satisfies the Markov property. Finally, a Fourier-cosine method for computing the solutions of BSDEs, which proves to be very efficient, is presented and a computational example is discussed.

Title: A Fourier-Cosine method for solving Backward SDEs

Author: Caroline Kosalka, carolinekosalka@student.uva.nl, 10901396
Supervisor: dr. Asma Khedher
Second Examiner: dr. Peter Spreij
Examination date: June 4, 2018

Korteweg-de Vries Institute for Mathematics University of Amsterdam

Science Park 105-107, 1098 XG Amsterdam http://kdvi.uva.nl

RiskQuest B.V.

Herengracht 495, 1017 BT Amsterdam http://www.riskquest.com


Acknowledgements

This thesis marks one of the final steps in obtaining the Master of Science degree in Stochastics and Financial Mathematics at the University of Amsterdam (UvA). The research was conducted during an internship at RiskQuest.

First of all, I would like to thank RiskQuest for this opportunity, as well as all my colleagues for their support. I would like to thank my supervisor, Asma Khedher, for her guidance, especially during the last months of my thesis. Without her support I would not have been able to get through this very challenging master thesis. Measure Theory and Stochastic Integration are among the most abstract and theoretical courses in my master's, and therefore the thesis was a big challenge for me.

I would also like to thank my friend Jenya for the late working hours in the library, as well as for motivating me to go swimming. Sometimes presence is all you need. Finally, my last thanks goes to my godmother, Justine, who supported me during my studies and who made it possible for me to study abroad for so many years. I am grateful for all the different places, experiences and people I met; it definitely made my life more diverse as well as complete.


Contents

Introduction 6

1. Backward Stochastic Differential Equations 8

1.1. Motivation and preliminaries . . . 8

1.2. Existence and Uniqueness of a solution to the Backward Stochastic Differential Equation . . . 10

1.3. Linear BSDE . . . 19

1.4. Comparison Theorem . . . 22

1.5. Applications of BSDEs in finance . . . 23

1.5.1. Stochastic Optimal Control Problem . . . 23

1.5.2. Hedging in the Black-Scholes model . . . 25

2. FBSDEs and Feynman-Kac formula 28

2.1. Forward-Backward Stochastic Differential Equations . . . 28

2.1.1. Regularity Properties of Solutions . . . 29

2.1.2. Markov Properties of Solutions . . . 30

2.2. The link with PDEs: Feynman-Kac formula . . . 30

3. Numerical Methods for BSDEs 34

3.1. Discretization of the SDE . . . 35

3.1.1. Euler-Maruyama scheme . . . 35

3.1.2. The Milstein Scheme . . . 36

3.1.3. The weak Taylor Scheme of order 2.0 . . . 36

3.2. Discretization of the BSDE . . . 39

4. The Fourier-Cosine method for Backward Stochastic Differential Equations 42

4.1. Fourier Transform & Fourier Cosine series . . . 42

4.2. Approximations of the conditional expectations . . . 46

4.2.1. COS approximation of the function $Z^\Pi_i$ . . . 49

4.2.2. COS approximation of the function $Y^\Pi_i$ . . . 49

4.2.3. Fourier-cosine coefficients . . . 50

4.2.4. Discretization of the Fourier-cosine coefficient . . . 50

4.3. BCOS summarized . . . 51

5. Numerical experiment: Hedging in the Black-Scholes model 52


Conclusion 57


Introduction

Differential equations first appeared in 1671, when Isaac Newton solved differential equations using infinite series and discussed the non-uniqueness of solutions. They are used to describe different phenomena mathematically; for instance, some can be modelled by an ordinary differential equation (ODE)
\[ \frac{dx(t)}{dt} = f(t, x), \]
which is applied in different branches of science. However, noise, a sort of randomness, cannot be omitted. A better approximation to reality is therefore
\[ \frac{dx(t)}{dt} = f(t, x) + \text{'white noise'}, \tag{0.1} \]
where 'white' means that the noise contains all frequencies, analogously to white light. This leads us to the mathematical concept of stochastic differential equations (SDEs).

In 1827, Robert Brown discovered the random movement of particles suspended in a fluid: Brownian motion. The mathematics behind Brownian motion was studied by many great mathematicians, among others Norbert Wiener, after whom the Wiener process is named. This stochastic process is denoted by $W_t$, where $t \ge 0$. The sample paths of a Brownian motion are continuous but nowhere differentiable, and even of unbounded variation, so classical calculus fails to define integration with respect to it. Nevertheless, it is the best candidate for describing white noise.

In 1940, Kiyosi Itô defined the Itô stochastic integral and proved the Itô isometry (Lemma A.7).

So the above equation (0.1) can be expressed as the following SDE:
\[ dx(t, \omega) = f(t, x)\,dt + dW_t. \]

The notion of time is of great importance for SDEs, in contrast to ODEs, where initial value problems and terminal value problems are treated in the same way.

Solving an SDE with a known terminal value amounts to steering a system towards a prescribed aim in randomly disturbed circumstances.

A typical SDE on the time interval $[0, T]$ has the form
\[ \begin{cases} dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dW_t, \\ X_0 = x. \end{cases} \]


Under certain conditions, this SDE has a unique adapted solution, which can be obtained by Picard iteration and the Banach fixed point theorem.

However, this routine fails to give adapted solutions when the SDE is specified via a terminal value $X_T = \xi$, with $\xi$ $\mathcal{F}_T$-measurable; such an equation is known as a Backward Stochastic Differential Equation (BSDE).

In 1973, BSDEs were first introduced by J.M. Bismut [2] in the case where $f$ is linear in $(Y, Z)$. He used BSDEs to study stochastic optimal control problems via the stochastic version of Pontryagin's maximum principle.

Around 1990, the theory of BSDEs became a main focus for many academic researchers and a large number of publications appeared. The most influential authors are Pardoux and Peng [19], who studied BSDEs of the form
\[ \begin{cases} -dY_t = f(t, Y_t, Z_t)\,dt - Z_t\,dW_t, \\ Y_T = \xi, \end{cases} \tag{0.2} \]
and proved the existence and uniqueness of a solution to this BSDE when the generator satisfies the Lipschitz condition. In 1992, Peng [23] introduced the comparison theorem, which provides a sufficient condition for the wealth process to be nonnegative. In [20], Pardoux and Peng showed that the solution of the BSDE in the Markovian case corresponds to a probabilistic solution of a non-linear PDE, and gave a generalization of the Feynman-Kac formula. Since the discovery of the generalized Feynman-Kac formula, applications in mathematical finance and economics have gained a lot of attention. Moreover, in [22], [25], [24], Peng established the connection between BSDEs associated with a state process satisfying a forward classical SDE, and PDEs, in the Markovian case. The underlying theory can be used for European option pricing in the constrained Markovian case.

In 1995, in [9], the theory of contingent claim valuation, in particular the Black-Scholes-Merton model for option pricing, was expressed in terms of the solution to a BSDE.

In 2000, Kohlmann and Zhou [15] developed Peng's stochastic optimal control theory and interpreted BSDEs as equivalent to stochastic control problems.

Nowadays, Backward Stochastic Differential Equations (BSDEs) remain an active field attracting the attention of many well-known researchers. It is therefore interesting to follow how the theory of BSDEs and its applications is being developed.

This thesis is organized as follows. In Chapter 1 we provide a general introduction to BSDEs and discuss uniqueness and existence results as well as the Comparison Theorem. In Chapter 2, Forward-Backward SDEs with their Markov property are introduced and the generalized Feynman-Kac theorem is proven in both directions; this gives us an important link between BSDEs and PDEs. To obtain a numerical solution to an FBSDE, we present in Chapter 3 discretization methods for FBSDEs and introduce the theta-discretization for BSDEs, which results in expressions involving conditional expectations. To compute these conditional expectations we develop, in Chapter 4, a Fourier-cosine method, called the BCOS method. Finally, in Chapter 5 a computational example is given.


1. Backward Stochastic Differential Equations

The aim of this chapter is to get familiar with the notion of Backward Stochastic Differential Equations (BSDEs). After introducing the notation and the main definitions of BSDEs, we will prove two important theorems: the existence and uniqueness of a solution for a BSDE satisfying the Lipschitz condition, and the Comparison Theorem. We finish this chapter by providing some examples. The references [9], [18] are used.

1.1. Motivation and preliminaries

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a complete probability space on which an $\mathbb{R}$-valued standard Brownian motion $W$ is defined, with its natural filtration $(\mathcal{F}_t)$. We consider a simple ordinary differential equation (ODE)
\[ dY_t = 0, \quad 0 \le t \le T, \tag{1.1} \]
where $T > 0$ is a given terminal time. For any $\xi \in \mathbb{R}$ we can impose either $Y_0 = \xi$ or $Y_T = \xi$, and the above ODE has the unique solution $Y_t = \xi$. However, if equation (1.1) is considered as a Stochastic Differential Equation (SDE), we are in a different setting. The solution of (1.1), seen in a stochastic sense, should be adapted to the filtration $\{\mathcal{F}_t\}_{t \ge 0}$. Therefore it makes a big difference whether we specify $Y_0$ or $Y_T$.

Consider the ODE (1.1) with the following terminal condition:
\[ \begin{cases} dY_t = 0, & 0 \le t \le T, \\ Y_T = \xi, \end{cases} \tag{1.2} \]
where $\xi$ is an $\mathcal{F}_T$-measurable, square-integrable random variable. Viewed as an ODE, (1.2) has the unique solution $Y_t = \xi$, which is not adapted to the filtration $\mathcal{F}_t$ unless $\xi$ is constant; hence equation (1.2), viewed as an SDE, does not have a solution in general. Note that in the case where an initial condition is given, the solution is adapted.

To deal with this terminal value problem we reformulate (1.2) in such a way that we can ensure the adaptedness of the solution to $\mathcal{F}_t$. We introduce the following conditional expectation:
\[ Y_t := \mathbb{E}[\,\xi \mid \mathcal{F}_t\,], \quad 0 \le t \le T. \tag{1.3} \]


Since $\xi$ is $\mathcal{F}_T$-measurable, we have $Y_T = \xi$. As the process $(Y_t)_{t \ge 0}$ defined by (1.3) is a square-integrable $\mathcal{F}_t$-martingale, the Martingale Representation Theorem (A.2) gives
\[ Y_t = Y_0 + \int_0^t Z_s\,dW_s, \quad 0 \le t \le T \ \text{a.s.}, \tag{1.4} \]
where $Z$ is an $\mathcal{F}_t$-adapted, square-integrable process. From (1.3) and (1.4) we get
\[ \begin{cases} dY_t = Z_t\,dW_t, & 0 \le t \le T, \\ Y_T = \xi. \end{cases} \tag{1.5} \]
In other words, we have reformulated (1.2) as (1.5), and what is of great importance is that instead of seeking a single $\mathcal{F}_t$-adapted stochastic process $Y$ as a solution to the SDE, we are looking for a pair $(Y, Z)$. Adding the extra component $Z$ to the solution makes it possible to find an $\mathcal{F}_t$-adapted solution.

We now rewrite the backward SDE (1.5) in integral form. To do this, we first evaluate (1.4) at the terminal condition $Y_T = \xi$ and solve for $Y_0$:
\[ Y_0 = Y_T - \int_0^T Z_s\,dW_s = \xi - \int_0^T Z_s\,dW_s. \tag{1.6} \]
Plugging (1.6) into (1.4) we get
\[ Y_t = \xi - \int_t^T Z_s\,dW_s, \quad 0 \le t \le T. \tag{1.7} \]
Hence we have established how Backward SDEs arise and that they can be represented by (1.5) or (1.7).
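As a minimal numerical illustration of this construction (the example itself is not from the thesis), take the terminal value $\xi = W_T^2$. Itô's formula gives the adapted solution in closed form, $Y_t = \mathbb{E}[W_T^2 \mid \mathcal{F}_t] = W_t^2 + (T - t)$ and $Z_t = 2W_t$, which we can check against a Monte Carlo estimate of the conditional expectation:

```python
import numpy as np

# Toy BSDE dY = Z dW with terminal value xi = W_T^2 (illustrative choice,
# not from the thesis). The closed-form solution is
#   Y_t = E[W_T^2 | F_t] = W_t^2 + (T - t),  Z_t = 2 W_t,
# which we compare with a Monte Carlo estimate of E[W_T^2 | W_t = w_t].
rng = np.random.default_rng(0)
T, t = 1.0, 0.4
w_t = 0.7                                   # a fixed realisation of W_t
# Conditional on F_t, W_T = w_t + sqrt(T - t) * N(0, 1)
incr = rng.standard_normal(500_000) * np.sqrt(T - t)
y_mc = np.mean((w_t + incr) ** 2)           # Monte Carlo conditional expectation
y_exact = w_t**2 + (T - t)
print(y_mc, y_exact)                        # the two values should be close
```

The key point mirrors the derivation above: the conditional expectation depends on the path only through $W_t$, which is exactly why the resulting process $Y$ is adapted.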

We introduce the following notation, which will be used throughout the thesis:

• $T > 0$, the so-called terminal time.

• $\{\mathcal{F}_t;\ t \in [0, T]\}$, the filtration generated by the Brownian motion $W$ and augmented by all the $\mathbb{P}$-null sets.

• $\mathcal{P}$, the $\sigma$-field of predictable sets on $\Omega \times [0, T]$.

• $\mathcal{B} = \mathcal{B}(\mathbb{R})$, the Borel $\sigma$-algebra; the Lebesgue measure on $\mathcal{B}$ is denoted by $\lambda$.

• $L^2_T(\mathbb{R})$, the space of all $\mathcal{F}_T$-measurable random variables $X : \Omega \to \mathbb{R}$ satisfying $\|X\|^2 = \mathbb{E}[|X|^2] < \infty$.

• $\mathcal{H}^\alpha_T(\mathbb{R})$, the space of all predictable processes $\varphi : \Omega \times [0, T] \to \mathbb{R}$ such that $\mathbb{E}\Big[ \Big( \sqrt{ \int_0^T |\varphi_t|^2\,dt } \Big)^{\alpha} \Big] < \infty$, where $\alpha > 0$.

• For fixed $\beta \in \mathbb{R}_+$ and $\varphi \in \mathcal{H}^2_T(\mathbb{R})$, $\|\varphi\|^2_\beta := \mathbb{E}\Big[ \int_0^T e^{\beta t} |\varphi_t|^2\,dt \Big]$. $\mathcal{H}^2_{T,\beta}(\mathbb{R})$ denotes the space $\mathcal{H}^2_T(\mathbb{R})$ endowed with the norm $\|\cdot\|_\beta$.


For notational simplicity we sometimes write $L^2_T(\mathbb{R}) = L^2_T$, $\mathcal{H}^\alpha_T(\mathbb{R}) = \mathcal{H}^\alpha_T$, $\mathcal{H}^2_{T,\beta}(\mathbb{R}) = \mathcal{H}^2_{T,\beta}$, and $f(\omega, t, Y_t, Z_t) = f(t, Y_t, Z_t)$.

Definition 1.1. Let $\xi : \Omega \to \mathbb{R}$ be an $\mathcal{F}_T$-measurable random variable and let $f : \Omega \times [0, T] \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ be a $\mathcal{P} \otimes \mathcal{B} \otimes \mathcal{B}$-measurable mapping. Then
\[ \begin{cases} -dY_t = f(\omega, t, Y_t, Z_t)\,dt - Z_t\,dW_t, & t \in [0, T], \\ Y_T = \xi, \end{cases} \tag{1.8} \]
or, equivalently,
\[ Y_t = \xi + \int_t^T f(\omega, s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s, \quad t \in [0, T], \tag{1.9} \]
is called a Backward Stochastic Differential Equation (BSDE) with terminal value $\xi$ and generator $f$.

Properties of different classes of BSDEs, such as linear BSDEs, Lipschitz BSDEs, Markovian BSDEs, quadratic BSDEs, etc., are properties of the generator $f$. In other words, a linear BSDE is a BSDE with a linear generator, i.e. $f(t, Y_t, Z_t) = \alpha_t + \beta_t Y_t + \gamma_t Z_t$. In this thesis we will only consider Lipschitz BSDEs, unless stated otherwise.

Definition 1.2. Suppose that $\xi \in L^2_T(\mathbb{R})$, $f(\cdot, 0, 0) \in \mathcal{H}^2_T(\mathbb{R})$, and assume $f$ is uniformly Lipschitz continuous, i.e. there exists a constant $C > 0$ such that, $d\mathbb{P} \otimes dt$-a.s.,
\[ |f(t, y_1, z_1) - f(t, y_2, z_2)| \le C\,( |y_1 - y_2| + |z_1 - z_2| ) \]
holds for all $(y_1, z_1), (y_2, z_2) \in \mathbb{R} \times \mathbb{R}$. Then the pair $(f, \xi)$ is said to be a set of standard parameters of the BSDE (1.8).

Definition 1.3. A solution to the BSDE (1.8) is a pair of processes $(Y, Z)$ such that $\{Y_t;\ t \in [0, T]\}$ is a continuous, $\mathbb{R}$-valued, $(\mathcal{F}_t)_{t \ge 0}$-adapted process and $\{Z_t;\ t \in [0, T]\}$ is an $\mathbb{R}$-valued predictable process satisfying $\int_0^T |Z_s|^2\,ds < \infty$, $\mathbb{P}$-a.s.

Note that a solution $(Y, Z) \in \mathcal{H}^2_T \times \mathcal{H}^2_T$, when it exists, is often referred to as a square-integrable solution.

1.2. Existence and Uniqueness of a solution to the Backward Stochastic Differential Equation

In this section we present in detail the existence and uniqueness of a solution to the BSDE (1.8). Following the derivations in [9], we first introduce a priori estimates that we need in order to prove the main Theorem 1.5.


Lemma 1.4. [3] Let $((f^i, \xi^i);\ i = 1, 2)$ be two sets of standard parameters of the BSDE (1.8) and $((Y^i, Z^i);\ i = 1, 2)$ the corresponding square-integrable solutions. Let $C$ be the Lipschitz constant of $f^1$ and denote
\[ \delta Y_t = Y^1_t - Y^2_t, \qquad \delta Z_t = Z^1_t - Z^2_t, \qquad \delta_2 f_t = f^1(t, Y^2_t, Z^2_t) - f^2(t, Y^2_t, Z^2_t). \]
Then for any $(\lambda, \mu, \beta)$ such that $\mu > 0$, $\lambda^2 > C$ and $\beta \ge C(2 + \lambda^2) + \mu^2$, it holds that
\[ \|\delta Y\|^2_\beta \le T \Big[ e^{\beta T}\, \mathbb{E}(|\delta Y_T|^2) + \frac{1}{\mu^2} \|\delta_2 f\|^2_\beta \Big], \tag{1.10} \]
\[ \|\delta Z\|^2_\beta \le \frac{\lambda^2}{\lambda^2 - C} \Big[ e^{\beta T}\, \mathbb{E}(|\delta Y_T|^2) + \frac{1}{\mu^2} \|\delta_2 f\|^2_\beta \Big]. \tag{1.11} \]

Proof. We divide the proof into four steps.

1. Let $(Y, Z) \in \mathcal{H}^2_T \times \mathcal{H}^2_T$ be a solution of the BSDE with standard parameters $(f, \xi)$,
\[ Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s, \quad t \in [0, T]. \tag{1.12} \]
In this step we show that $\sup_{0 \le t \le T} |Y_t| \in L^2_T$, by bounding the right-hand side of (1.12). Taking absolute values on both sides of (1.12) and then the supremum,
\[ |Y_t| \le |\xi| + \Big| \int_t^T f(s, Y_s, Z_s)\,ds \Big| + \Big| \int_t^T Z_s\,dW_s \Big| \quad \text{(triangle inequality)}, \]
\[ |Y_t| \le |\xi| + \int_t^T |f(s, Y_s, Z_s)|\,ds + \Big| \int_t^T Z_s\,dW_s \Big| \quad \text{(Jensen's inequality)}, \]
\[ \sup_{0 \le t \le T} |Y_t| \le |\xi| + \int_0^T |f(s, Y_s, Z_s)|\,ds + \sup_{0 \le t \le T} \Big| \int_t^T Z_s\,dW_s \Big|. \tag{1.13} \]
By assumption $\xi \in L^2_T$, i.e. $\mathbb{E}(|\xi|^2) < \infty$, so $|\xi| \in L^2_T$. For the second term in (1.13) we use the Lipschitz continuity of $f$:
\[ |f(s, Y_s, Z_s)| \le |f(s, Y_s, Z_s) - f(s, 0, 0)| + |f(s, 0, 0)| \le C( |Y_s| + |Z_s| ) + |f(s, 0, 0)|. \tag{$\star$} \]
Moreover,
\[ \mathbb{E}\Big( \int_0^T |f(s, Y_s, Z_s)|\,ds \Big)^2 \le T\, \mathbb{E}\Big( \int_0^T |f(s, Y_s, Z_s)|^2\,ds \Big) \quad \text{(Cauchy-Schwarz inequality)} \]
\[ \le T\, \mathbb{E}\Big( \int_0^T \big( C(|Y_s| + |Z_s|) + |f(s, 0, 0)| \big)^2\,ds \Big) \quad \text{(by ($\star$))} \]
\[ \le 4TC^2\, \mathbb{E}\Big( \int_0^T |Y_s|^2\,ds \Big) + 4TC^2\, \mathbb{E}\Big( \int_0^T |Z_s|^2\,ds \Big) + 2T\, \mathbb{E}\Big( \int_0^T |f(s, 0, 0)|^2\,ds \Big) \quad \text{(Lemma A.5)} \]
\[ < \infty. \]
Hence $\int_0^T |f(s, Y_s, Z_s)|\,ds \in L^2_T$. For the third term in (1.13), we get
\[ \mathbb{E}\Big( \sup_{0 \le t \le T} \Big| \int_t^T Z_s\,dW_s \Big|^2 \Big) = \mathbb{E}\Big( \sup_{0 \le t \le T} \Big| \int_0^T Z_s\,dW_s - \int_0^t Z_s\,dW_s \Big|^2 \Big) \]
\[ \le 2\, \mathbb{E}\Big( \Big| \int_0^T Z_s\,dW_s \Big|^2 \Big) + 2\, \mathbb{E}\Big( \sup_{0 \le t \le T} \Big| \int_0^t Z_s\,dW_s \Big|^2 \Big) \quad \text{(Lemma A.5)}. \]
By Itô's isometry (A.7) we have
\[ \mathbb{E}\Big( \Big| \int_0^T Z_s\,dW_s \Big|^2 \Big) = \mathbb{E}\Big( \int_0^T |Z_s|^2\,ds \Big), \]
and by the Burkholder-Davis-Gundy inequality (A.1) there exists a constant $K > 0$ such that
\[ \mathbb{E}\Big( \sup_{0 \le t \le T} \Big| \int_0^t Z_s\,dW_s \Big|^2 \Big) \le K\, \mathbb{E}\Big( \int_0^T |Z_s|^2\,ds \Big). \]
So for the third term in (1.13) we get
\[ \mathbb{E}\Big( \sup_{0 \le t \le T} \Big| \int_t^T Z_s\,dW_s \Big|^2 \Big) \le 2(1 + K)\, \mathbb{E}\Big( \int_0^T |Z_s|^2\,ds \Big) < \infty. \]


2. Consider the two solutions $(Y^1, Z^1)$ and $(Y^2, Z^2)$ of the BSDE (1.8) with standard parameters $(f^1, \xi^1)$ and $(f^2, \xi^2)$, respectively. By Itô's formula applied to $e^{\beta s} |\delta Y_s|^2$, one obtains
\[ d\big( e^{\beta s} |\delta Y_s|^2 \big) = \beta e^{\beta s} |\delta Y_s|^2\,ds + 2 e^{\beta s}\, \delta Y_s\, d(\delta Y_s) + e^{\beta s}\, d(\delta Y_s)\,d(\delta Y_s). \]
Integrating the above equation from $t$ to $T$, we have
\[ e^{\beta t} |\delta Y_t|^2 + \beta \int_t^T e^{\beta s} |\delta Y_s|^2\,ds + \int_t^T e^{\beta s} |\delta Z_s|^2\,ds \]
\[ = e^{\beta T} |\delta Y_T|^2 + 2 \int_t^T e^{\beta s}\, \delta Y_s \big( f^1(s, Y^1_s, Z^1_s) - f^2(s, Y^2_s, Z^2_s) \big)\,ds - 2 \int_t^T e^{\beta s}\, \delta Y_s\, \delta Z_s\,dW_s. \tag{1.14} \]

3. In this step we want to show that
\[ \mathbb{E}\big( e^{\beta t} |\delta Y_t|^2 \big) \le \mathbb{E}\big( e^{\beta T} |\delta Y_T|^2 \big) + \frac{1}{\mu^2}\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta_2 f_s|^2\,ds \Big), \quad t \in [0, T]. \]
From step 1, we know $\sup_{0 \le t \le T} |\delta Y_t| \in L^2_T$, which leads to
\[ \mathbb{E}\Big( \sqrt{ \int_0^T |e^{\beta s}\, \delta Z_s\, \delta Y_s|^2\,ds } \Big) \le \mathbb{E}\Big( \sup_{0 \le t \le T} |\delta Y_t|\, \sqrt{ \int_0^T e^{2\beta s} |\delta Z_s|^2\,ds } \Big) \]
\[ \le \sqrt{ \mathbb{E}\Big( \sup_{0 \le t \le T} |\delta Y_t|^2 \Big) }\, \sqrt{ \mathbb{E}\Big( \int_0^T e^{2\beta s} |\delta Z_s|^2\,ds \Big) } < \infty \quad \text{(Cauchy-Schwarz inequality)}. \]
We showed that $e^{\beta s}\, \delta Z_s\, \delta Y_s$ belongs to $\mathcal{H}^1_T$, which implies that the stochastic integral $\int_t^T e^{\beta s}\, \delta Y_s\, \delta Z_s\,dW_s$ is $\mathbb{P}$-integrable and has zero expectation. Moreover,
\[ \big| f^1(s, Y^1_s, Z^1_s) - f^2(s, Y^2_s, Z^2_s) \big| \le \big| f^1(s, Y^1_s, Z^1_s) - f^1(s, Y^2_s, Z^2_s) \big| + \big| f^1(s, Y^2_s, Z^2_s) - f^2(s, Y^2_s, Z^2_s) \big| \]
\[ \le C( |\delta Y_s| + |\delta Z_s| ) + |\delta_2 f_s| \]


and
\[ 2\, \delta Y_s \big( f^1(s, Y^1_s, Z^1_s) - f^2(s, Y^2_s, Z^2_s) \big) \le 2\, |\delta Y_s|\, \big| f^1(s, Y^1_s, Z^1_s) - f^2(s, Y^2_s, Z^2_s) \big| \le 2C |\delta Y_s|^2 + 2 |\delta Y_s| ( C |\delta Z_s| + |\delta_2 f_s| ), \]
where $C$ is the Lipschitz constant. Taking expectations in (1.14) we get
\[ \mathbb{E}\big( e^{\beta t} |\delta Y_t|^2 \big) + \beta\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta Y_s|^2\,ds \Big) + \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta Z_s|^2\,ds \Big) \]
\[ = \mathbb{E}\big( e^{\beta T} |\delta Y_T|^2 \big) + 2\, \mathbb{E}\Big( \int_t^T e^{\beta s}\, \delta Y_s \big( f^1(s, Y^1_s, Z^1_s) - f^2(s, Y^2_s, Z^2_s) \big)\,ds \Big). \tag{$\star\star$} \]
We bound the last term in the above equation as follows:
\[ \mathbb{E}\Big( \int_t^T 2 e^{\beta s}\, \delta Y_s \big( f^1(s, Y^1_s, Z^1_s) - f^2(s, Y^2_s, Z^2_s) \big)\,ds \Big) \le \mathbb{E}\Big( \int_t^T 2C e^{\beta s} |\delta Y_s|^2 + 2 e^{\beta s} |\delta Y_s| ( C |\delta Z_s| + |\delta_2 f_s| )\,ds \Big) \]
\[ \le \mathbb{E}\Big( \int_t^T 2C e^{\beta s} |\delta Y_s|^2 + e^{\beta s} \Big( \frac{C |\delta Z_s|^2}{\lambda^2} + \frac{|\delta_2 f_s|^2}{\mu^2} + |\delta Y_s|^2 ( \lambda^2 C + \mu^2 ) \Big)\,ds \Big) \quad \text{(Lemma A.6)} \]
\[ = \big( C(2 + \lambda^2) + \mu^2 \big)\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta Y_s|^2\,ds \Big) + \frac{C}{\lambda^2}\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta Z_s|^2\,ds \Big) + \frac{1}{\mu^2}\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta_2 f_s|^2\,ds \Big). \]
Now equation ($\star\star$) becomes
\[ \mathbb{E}\big( e^{\beta t} |\delta Y_t|^2 \big) + \beta\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta Y_s|^2\,ds \Big) + \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta Z_s|^2\,ds \Big) \]
\[ \le \mathbb{E}\big( e^{\beta T} |\delta Y_T|^2 \big) + \big( C(2 + \lambda^2) + \mu^2 \big)\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta Y_s|^2\,ds \Big) + \frac{C}{\lambda^2}\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta Z_s|^2\,ds \Big) + \frac{1}{\mu^2}\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta_2 f_s|^2\,ds \Big). \tag{1.15} \]
By rearranging the terms in (1.15) we get
\[ \mathbb{E}\big( e^{\beta t} |\delta Y_t|^2 \big) \le \mathbb{E}\big( e^{\beta T} |\delta Y_T|^2 \big) + \big( C(2 + \lambda^2) + \mu^2 - \beta \big)\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta Y_s|^2\,ds \Big) + \Big( \frac{C}{\lambda^2} - 1 \Big)\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta Z_s|^2\,ds \Big) + \frac{1}{\mu^2}\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta_2 f_s|^2\,ds \Big). \]


Since $\beta \ge C(2 + \lambda^2) + \mu^2$ and $C < \lambda^2$, the above inequality becomes
\[ \mathbb{E}\big( e^{\beta t} |\delta Y_t|^2 \big) \le \mathbb{E}\big( e^{\beta T} |\delta Y_T|^2 \big) + \frac{1}{\mu^2}\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta_2 f_s|^2\,ds \Big). \tag{1.16} \]

4. In this step we integrate (1.16) on both sides from $0$ to $T$ and use Fubini's theorem to change the order of integration:
\[ \mathbb{E}\Big( \int_0^T e^{\beta t} |\delta Y_t|^2\,dt \Big) \le T\, \mathbb{E}\big( e^{\beta T} |\delta Y_T|^2 \big) + \frac{1}{\mu^2} \int_0^T \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta_2 f_s|^2\,ds \Big)\,dt \le T \Big( e^{\beta T}\, \mathbb{E}\big( |\delta Y_T|^2 \big) + \frac{1}{\mu^2}\, \mathbb{E}\Big( \int_0^T e^{\beta s} |\delta_2 f_s|^2\,ds \Big) \Big). \]
This is equivalent to
\[ \|\delta Y\|^2_\beta \le T \Big[ e^{\beta T}\, \mathbb{E}( |\delta Y_T|^2 ) + \frac{1}{\mu^2} \|\delta_2 f\|^2_\beta \Big]. \]
By rearranging the terms in (1.15), we can also derive
\[ \Big( 1 - \frac{C}{\lambda^2} \Big)\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta Z_s|^2\,ds \Big) \le \mathbb{E}\big( e^{\beta T} |\delta Y_T|^2 \big) - \mathbb{E}\big( e^{\beta t} |\delta Y_t|^2 \big) + \big( C(2 + \lambda^2) + \mu^2 - \beta \big)\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta Y_s|^2\,ds \Big) + \frac{1}{\mu^2}\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta_2 f_s|^2\,ds \Big). \]
With $\beta \ge C(2 + \lambda^2) + \mu^2$ and $C < \lambda^2$, this becomes
\[ \Big( 1 - \frac{C}{\lambda^2} \Big)\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta Z_s|^2\,ds \Big) \le \mathbb{E}\big( e^{\beta T} |\delta Y_T|^2 \big) + \frac{1}{\mu^2}\, \mathbb{E}\Big( \int_t^T e^{\beta s} |\delta_2 f_s|^2\,ds \Big). \]
Taking $t = 0$ and multiplying both sides by $\lambda^2 / (\lambda^2 - C)$ leads to
\[ \|\delta Z\|^2_\beta \le \frac{\lambda^2}{\lambda^2 - C} \Big[ e^{\beta T}\, \mathbb{E}( |\delta Y_T|^2 ) + \frac{1}{\mu^2} \|\delta_2 f\|^2_\beta \Big]. \qquad \square \]

Theorem 1.5. [3] Given standard parameters $(f, \xi)$, there exists a unique pair $(Y, Z) \in \mathcal{H}^2_T(\mathbb{R}) \times \mathcal{H}^2_T(\mathbb{R})$ which solves the BSDE (1.8).

Proof. We will use the Banach fixed-point theorem for the mapping
\[ \Phi : \mathcal{H}^2_{T,\beta} \times \mathcal{H}^2_{T,\beta} \to \mathcal{H}^2_{T,\beta} \times \mathcal{H}^2_{T,\beta}, \qquad (U, V) \mapsto (Y, Z), \]


where $(Y, Z)$ is a solution of the BSDE (1.8) with generator evaluated at $(U, V)$:
\[ Y_t = \xi + \int_t^T f(s, U_s, V_s)\,ds - \int_t^T Z_s\,dW_s, \quad 0 \le t \le T. \tag{1.17} \]
We divide the proof into steps.

1. We first show that $\Phi$ is indeed well defined, i.e. there exists a solution $(Y, Z)$ to (1.17). The assumption that $(f, \xi)$ are standard parameters implies that $f(\cdot, U, V) \in \mathcal{H}^2_T$:
\[ \mathbb{E}\Big( \int_0^T |f(s, U_s, V_s)|^2\,ds \Big) \le 4C^2\, \mathbb{E}\Big( \int_0^T |U_s|^2\,ds \Big) + 4C^2\, \mathbb{E}\Big( \int_0^T |V_s|^2\,ds \Big) + 2\, \mathbb{E}\Big( \int_0^T |f(s, 0, 0)|^2\,ds \Big) < \infty. \]
For $t \in [0, T]$ we have
\[ \mathbb{E}\Big( \Big| \int_t^T f(s, U_s, V_s)\,ds \Big|^2 \Big) \le (T - t)\, \mathbb{E}\Big( \int_t^T |f(s, U_s, V_s)|^2\,ds \Big) \le T\, \mathbb{E}\Big( \int_0^T |f(s, U_s, V_s)|^2\,ds \Big) < \infty. \]
Hence $\int_t^T f(s, U_s, V_s)\,ds \in L^2_T$ for all $t \in [0, T]$. Consider
\[ M_t := \mathbb{E}\Big( \int_0^T f(s, U_s, V_s)\,ds + \xi \;\Big|\; \mathcal{F}_t \Big), \]
which is a square-integrable martingale. Indeed, since $\mathcal{F}_s \subset \mathcal{F}_t$ for all $s \le t$, we have, by the tower property,
\[ \mathbb{E}( M_t \mid \mathcal{F}_s ) = \mathbb{E}\Big( \mathbb{E}\Big( \int_0^T f(u, U_u, V_u)\,du + \xi \;\Big|\; \mathcal{F}_t \Big) \;\Big|\; \mathcal{F}_s \Big) = \mathbb{E}\Big( \int_0^T f(u, U_u, V_u)\,du + \xi \;\Big|\; \mathcal{F}_s \Big) = M_s. \]
By the Martingale Representation Theorem (A.2), there exists a unique process $Z \in \mathcal{H}^2_T$ such that
\[ M_t = M_0 + \int_0^t Z_s\,dW_s, \quad t \in [0, T]. \tag{1.18} \]


Next we define the adapted and continuous process $Y$ as
\[ Y_t := M_t - \int_0^t f(s, U_s, V_s)\,ds. \]
We obtain
\[ Y_t = \mathbb{E}\Big( \int_0^T f(s, U_s, V_s)\,ds + \xi \;\Big|\; \mathcal{F}_t \Big) - \int_0^t f(s, U_s, V_s)\,ds = \mathbb{E}\Big( \int_t^T f(s, U_s, V_s)\,ds + \xi \;\Big|\; \mathcal{F}_t \Big). \]
It is now easy to show that $Y \in \mathcal{H}^2_T$:
\[ \mathbb{E}\Big( \int_0^T |Y_t|^2\,dt \Big) = \int_0^T \mathbb{E}\Big( \Big| \mathbb{E}\Big( \int_t^T f(s, U_s, V_s)\,ds + \xi \;\Big|\; \mathcal{F}_t \Big) \Big|^2 \Big)\,dt \quad \text{(Fubini's theorem)} \]
\[ \le \int_0^T \mathbb{E}\Big( \mathbb{E}\Big( \Big| \int_t^T f(s, U_s, V_s)\,ds + \xi \Big|^2 \;\Big|\; \mathcal{F}_t \Big) \Big)\,dt \quad \text{(Jensen's inequality)} \]
\[ = \int_0^T \mathbb{E}\Big( \Big| \int_t^T f(s, U_s, V_s)\,ds + \xi \Big|^2 \Big)\,dt \le 2 \int_0^T \mathbb{E}\Big( \Big| \int_t^T f(s, U_s, V_s)\,ds \Big|^2 + |\xi|^2 \Big)\,dt \quad \text{(Lemma A.5)} \]
\[ < \infty. \]
In order to conclude this step, we verify that the pair $(Y, Z)$ determined above solves equation (1.17). But
\[ \xi = Y_T = M_T - \int_0^T f(s, U_s, V_s)\,ds = M_0 + \int_0^T Z_s\,dW_s - \int_0^T f(s, U_s, V_s)\,ds. \tag{1.19} \]
Using (1.18) and (1.19), we deduce
\[ Y_t = M_t - \int_0^t f(s, U_s, V_s)\,ds = M_0 + \int_0^t Z_s\,dW_s - \int_0^t f(s, U_s, V_s)\,ds \quad \text{(by (1.18))} \]


\[ = \Big( \xi - \int_0^T Z_s\,dW_s + \int_0^T f(s, U_s, V_s)\,ds \Big) + \int_0^t Z_s\,dW_s - \int_0^t f(s, U_s, V_s)\,ds \quad \text{(by (1.19))} \]
\[ = \xi + \int_t^T f(s, U_s, V_s)\,ds - \int_t^T Z_s\,dW_s, \]
and hence the claim follows.

2. Let $(U^1, V^1), (U^2, V^2)$ be two elements of $\mathcal{H}^2_{T,\beta} \times \mathcal{H}^2_{T,\beta}$, and consider their images $\Phi(U^1, V^1) = (Y^1, Z^1)$ and $\Phi(U^2, V^2) = (Y^2, Z^2)$. We use the notation
\[ \delta U = U^1 - U^2, \qquad \delta V = V^1 - V^2, \qquad \delta_2 f_s = f(s, U^1_s, V^1_s) - f(s, U^2_s, V^2_s). \]
By Lemma 1.4 applied with Lipschitz constant $C = 0$ (the generator in (1.17) does not depend on $(Y, Z)$) and $\beta = \mu^2$, we have
\[ \|\delta Y\|^2_\beta \le \frac{T}{\beta}\, \mathbb{E}\Big( \int_0^T e^{\beta s} \big| f(s, U^1_s, V^1_s) - f(s, U^2_s, V^2_s) \big|^2\,ds \Big) \le \frac{T C^2}{\beta}\, \mathbb{E}\Big( \int_0^T e^{\beta s} ( |\delta U_s| + |\delta V_s| )^2\,ds \Big) \le \frac{2 T C^2}{\beta} \big( \|\delta U\|^2_\beta + \|\delta V\|^2_\beta \big) \]
and
\[ \|\delta Z\|^2_\beta \le \frac{1}{\beta}\, \mathbb{E}\Big( \int_0^T e^{\beta s} \big| f(s, U^1_s, V^1_s) - f(s, U^2_s, V^2_s) \big|^2\,ds \Big) \le \frac{2 C^2}{\beta} \big( \|\delta U\|^2_\beta + \|\delta V\|^2_\beta \big). \]
We deduce that
\[ \|\delta Y\|^2_\beta + \|\delta Z\|^2_\beta \le \frac{2(1 + T) C^2}{\beta} \big( \|\delta U\|^2_\beta + \|\delta V\|^2_\beta \big). \tag{1.20} \]
Now let $\beta > 2(1 + T) C^2$. Then $\Phi$ is a contraction from $\mathcal{H}^2_{T,\beta} \times \mathcal{H}^2_{T,\beta}$ onto itself, so there exists a unique fixed point $(\bar{Y}, \bar{Z}) \in \mathcal{H}^2_{T,\beta} \times \mathcal{H}^2_{T,\beta}$ satisfying
\[ \bar{Y}_t = \xi + \int_t^T f(s, \bar{Y}_s, \bar{Z}_s)\,ds - \int_t^T \bar{Z}_s\,dW_s, \quad 0 \le t \le T. \]
We can choose the continuous version $Y$ defined by
\[ Y_t := \mathbb{E}\Big( \int_t^T f(s, \bar{Y}_s, \bar{Z}_s)\,ds + \xi \;\Big|\; \mathcal{F}_t \Big), \]
i.e. $(Y, \bar{Z})$ is the unique continuous solution of the BSDE. $\square$


1.3. Linear BSDE

Linear BSDEs have a very useful property: they can be solved explicitly. In financial mathematics, the solution of a linear BSDE is related to the pricing and hedging problem of a contingent claim.

Proposition 1.6. [21] Let $(\beta, \mu)$ be a bounded $(\mathbb{R}, \mathbb{R})$-valued predictable process, $\varphi$ an element of $\mathcal{H}^2_T$ and $\xi \in L^2_T$. We consider the following linear BSDE:
\[ Y_s = \xi + \int_s^T ( \varphi_u + \beta_u Y_u + \mu_u Z_u )\,du - \int_s^T Z_u\,dW_u, \quad 0 \le s \le T. \tag{1.21} \]

a) Equation (1.21) has a unique solution $(Y, Z) \in \mathcal{H}^2_T \times \mathcal{H}^2_T$, where $Y$ is given by the following explicit formula:
\[ Y_t = \mathbb{E}\Big[ \xi\, \Gamma^t_T + \int_t^T \Gamma^t_s\, \varphi_s\,ds \;\Big|\; \mathcal{F}_t \Big], \quad \mathbb{P}\text{-a.s.}, \]
where $(\Gamma^t_s)_{s \ge t}$ is the adjoint process defined by the forward linear SDE
\[ d\Gamma^t_s = \Gamma^t_s ( \beta_s\,ds + \mu_s\,dW_s ), \qquad \Gamma^t_t = 1. \tag{1.22} \]

b) If $\xi$ and $\varphi$ are nonnegative, then the process $(Y_t)_{t \le T}$ is nonnegative. If in addition $Y_0 = 0$, then for any $t \le T$, $Y_t = 0$, $\xi = 0$ and $\varphi_t = 0$, $d\mathbb{P} \otimes dt$-a.s.

Proof. The first step is to show that $(f, \xi)$ are standard parameters. $\xi \in L^2_T$ is given, and $f$ is uniformly Lipschitz by the inequality
\[ |f(t, y_1, z_1) - f(t, y_2, z_2)| = | \varphi_t + \beta_t y_1 + \mu_t z_1 - \varphi_t - \beta_t y_2 - \mu_t z_2 | = | \beta_t (y_1 - y_2) + \mu_t (z_1 - z_2) | \le C ( |y_1 - y_2| + |z_1 - z_2| ), \]
where $|\beta_t| < K_1$, $|\mu_t| < K_2$ and $C = \max(K_1, K_2)$. Since $\beta$ and $\mu$ are bounded processes, the pair $(f, \xi)$ are standard parameters, and the linear generator $f(t, y, z) = \varphi_t + \beta_t y + \mu_t z$ satisfies the Lipschitz condition; hence by Theorem 1.5 there exists a unique square-integrable solution $(Y, Z)$ of the BSDE (1.21).

As $\beta$ and $\mu$ are bounded processes, we also have the existence and uniqueness result for the SDE (1.22) (see, e.g., Section 6.2 of [28]).


Moreover, $\sup_{0 \le s \le T} |Y_s| \in L^2_T$, as in step 1 of the proof of Lemma 1.4. We now show that $\mathbb{E}\big[ \sup_{0 \le s \le T} |\Gamma^t_s|^2 \big] < \infty$:
\[ \mathbb{E}\Big( \sup_{t \le s \le T} |\Gamma^t_s|^2 \Big) = \mathbb{E}\Big( \sup_{t \le s \le T} \Big| \Gamma^t_t + \int_t^s \beta_u \Gamma^t_u\,du + \int_t^s \mu_u \Gamma^t_u\,dW_u \Big|^2 \Big) \]
\[ \le 3 + 3\, \mathbb{E}\Big( \int_t^T |\beta_u|^2 |\Gamma^t_u|^2\,du \Big) + 3 C_p\, \mathbb{E}\Big( \int_t^T |\mu_u|^2 |\Gamma^t_u|^2\,du \Big) \quad \text{(Lemma A.5, Jensen and BDG (A.1))} \]
\[ \le 3 + 3 C^2 (1 + C_p)\, \mathbb{E}\Big( \int_t^T |\Gamma^t_u|^2\,du \Big) < \infty, \]
since $\Gamma^t$ is square-integrable.

By applying Itô's formula to $\Gamma^t_s Y_s$, we get
\[ d( \Gamma^t_s Y_s ) = \Gamma^t_s\,dY_s + Y_s\,d\Gamma^t_s + d\Gamma^t_s\,dY_s \]
\[ = \Gamma^t_s ( -\varphi_s\,ds - \beta_s Y_s\,ds - \mu_s Z_s\,ds + Z_s\,dW_s ) + \Gamma^t_s Y_s ( \beta_s\,ds + \mu_s\,dW_s ) + \Gamma^t_s Z_s \mu_s\,ds \]
\[ = -\varphi_s \Gamma^t_s\,ds + \Gamma^t_s ( Z_s + Y_s \mu_s )\,dW_s, \]
which implies that the process $\big( \Gamma^t_s Y_s + \int_t^s \varphi_u \Gamma^t_u\,du \big)_{s \le T}$ is a local martingale. If the latter is a martingale, we will get
\[ \Gamma^t_t Y_t = \mathbb{E}\Big( \Gamma^t_T \xi + \int_t^T \varphi_s \Gamma^t_s\,ds \;\Big|\; \mathcal{F}_t \Big), \]
which, since $\Gamma^t_t = 1$, implies
\[ Y_t = \mathbb{E}\Big( \Gamma^t_T \xi + \int_t^T \varphi_s \Gamma^t_s\,ds \;\Big|\; \mathcal{F}_t \Big), \]


and claim a) of the proposition is proved. Thus we need to prove that $\Gamma^t_s Y_s + \int_t^s \varphi_u \Gamma^t_u\,du$ is a martingale. To do that, we prove $\mathbb{E}\big[ \sup_{s \le T} \big| \Gamma^t_s Y_s + \int_t^s \varphi_u \Gamma^t_u\,du \big| \big] < \infty$. Notice that
\[ \mathbb{E}\Big[ \sup_{s \le T} \Big| \Gamma^t_s Y_s + \int_t^s \varphi_u \Gamma^t_u\,du \Big| \Big] \le \mathbb{E}\Big[ \sup_{s \le T} |\Gamma^t_s Y_s| \Big] + \mathbb{E}\Big[ \sup_{s \le T} \int_t^s |\varphi_u \Gamma^t_u|\,du \Big]. \tag{1.23} \]
We show that both terms on the right-hand side of (1.23) are finite:
\[ \mathbb{E}\Big[ \sup_{s \le T} |\Gamma^t_s Y_s| \Big] \le \mathbb{E}\Big[ \sup_{s \le T} |\Gamma^t_s|\, \sup_{s \le T} |Y_s| \Big] \le \mathbb{E}\Big[ \sup_{s \le T} |\Gamma^t_s|^2 \Big]^{1/2}\, \mathbb{E}\Big[ \sup_{s \le T} |Y_s|^2 \Big]^{1/2} < \infty \quad \text{(H\"older's inequality)}, \]
\[ \mathbb{E}\Big[ \sup_{s \le T} \int_t^s |\varphi_u \Gamma^t_u|\,du \Big] \le \mathbb{E}\Big[ \int_t^T |\varphi_u \Gamma^t_u|\,du \Big] \le \mathbb{E}\Big[ \int_t^T |\varphi_u|^2\,du \Big]^{1/2}\, \mathbb{E}\Big[ \int_t^T |\Gamma^t_u|^2\,du \Big]^{1/2} \]
\[ \le T^{1/2}\, \mathbb{E}\Big[ \int_t^T |\varphi_u|^2\,du \Big]^{1/2}\, \mathbb{E}\Big[ \sup_{s \le T} |\Gamma^t_s|^2 \Big]^{1/2} < \infty \quad \text{(H\"older's inequality)}. \]
Hence $\big( \Gamma^t_s Y_s + \int_t^s \varphi_u \Gamma^t_u\,du \big)_{s \le T}$ is a uniformly integrable local martingale, which implies it is a martingale.

b) If $\xi$ and $\varphi$ are nonnegative, then, since $\Gamma^t_s > 0$, the explicit formula in a) shows that $Y_t \ge 0$ for all $t \le T$. Suppose in addition $Y_0 = 0$. Applying a) with $t = 0$ gives
\[ 0 = Y_0 = \mathbb{E}\Big( \Gamma^0_T \xi + \int_0^T \varphi_s \Gamma^0_s\,ds \Big), \]
so the nonnegative random variable $\Gamma^0_T \xi + \int_0^T \varphi_s \Gamma^0_s\,ds$ has zero expectation. Therefore $\xi = 0$, $\mathbb{P}$-a.s., $\varphi_s = 0$, $d\mathbb{P} \otimes dt$-a.s., and $Y = 0$ a.s. $\square$

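The explicit formula of Proposition 1.6 is easy to check numerically in a simple case. As an illustrative sketch (the parameter choices are assumptions, not from the thesis), take constant coefficients $\beta, \mu$, $\varphi = 0$ and $\xi = W_T$. Then $\Gamma^0_T = \exp\big( (\beta - \mu^2/2) T + \mu W_T \big)$ solves (1.22), and a direct Gaussian computation gives $Y_0 = \mathbb{E}[\xi\, \Gamma^0_T] = \mu T e^{\beta T}$:

```python
import numpy as np

# Monte Carlo check of the explicit formula for a linear BSDE with constant
# coefficients beta, mu, generator f = beta*y + mu*z (phi = 0) and terminal
# value xi = W_T. The adjoint process solves (1.22):
#   Gamma^0_T = exp((beta - mu^2/2) T + mu W_T),
# and E[W_T * exp(mu W_T)] = mu*T*exp(mu^2 T/2) gives Y_0 = mu*T*exp(beta*T).
# (Illustrative parameter choices, not from the thesis.)
rng = np.random.default_rng(1)
beta, mu, T = 0.1, 0.3, 1.0
w_T = rng.standard_normal(1_000_000) * np.sqrt(T)
gamma_T = np.exp((beta - 0.5 * mu**2) * T + mu * w_T)
y0_mc = np.mean(w_T * gamma_T)
y0_exact = mu * T * np.exp(beta * T)
print(y0_mc, y0_exact)
```

In financial terms, $\Gamma^0_T$ plays the role of a state-price deflator: the initial value $Y_0$ is the expectation of the deflated terminal payoff.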

1.4. Comparison Theorem

This section is devoted to the comparison theorem, which allows one to compare the solutions of two BSDEs as soon as one can compare the terminal conditions and the generators.

Theorem 1.7. [9] Let $(Y, Z)$ and $(\bar{Y}, \bar{Z})$ be the solutions of two BSDEs with associated parameters $(f, \xi)$ and $(\bar{f}, \bar{\xi})$. We suppose that
\[ \xi \le \bar{\xi}, \quad \mathbb{P}\text{-a.s.}, \qquad \text{and} \qquad f(t, \bar{Y}_t, \bar{Z}_t) - \bar{f}(t, \bar{Y}_t, \bar{Z}_t) \le 0, \quad dt \otimes d\mathbb{P}\text{-a.s.} \]
Then, $\mathbb{P}$-almost surely, $Y_t \le \bar{Y}_t$ for all $t \in [0, T]$.

Moreover, the comparison is strict: if in addition $Y_0 = \bar{Y}_0$, then $\xi = \bar{\xi}$, $f(t, \bar{Y}_t, \bar{Z}_t) = \bar{f}(t, \bar{Y}_t, \bar{Z}_t)$ and $Y_t = \bar{Y}_t$ for all $t \in [0, T]$, $\mathbb{P}$-a.s. In particular, whenever $\mathbb{P}( \xi < \bar{\xi} ) > 0$ or $f(t, \bar{Y}_t, \bar{Z}_t) - \bar{f}(t, \bar{Y}_t, \bar{Z}_t) < 0$ on a set of positive $dt \otimes d\mathbb{P}$-measure, then $Y_0 < \bar{Y}_0$.

Proof. We define
\[ \varphi_t = f(t, \bar{Y}_t, \bar{Z}_t) - \bar{f}(t, \bar{Y}_t, \bar{Z}_t), \qquad Y' = Y - \bar{Y}, \qquad Z' = Z - \bar{Z}, \qquad \xi' = \xi - \bar{\xi}. \]
For all $t \ge 0$,
\[ Y'_t = \xi' + \int_t^T \big( f(s, Y_s, Z_s) - \bar{f}(s, \bar{Y}_s, \bar{Z}_s) \big)\,ds - \int_t^T Z'_s\,dW_s. \]
We rewrite the difference of the generators as
\[ f(s, Y_s, Z_s) - \bar{f}(s, \bar{Y}_s, \bar{Z}_s) = f(s, Y_s, Z_s) - f(s, \bar{Y}_s, Z_s) + f(s, \bar{Y}_s, Z_s) - f(s, \bar{Y}_s, \bar{Z}_s) + f(s, \bar{Y}_s, \bar{Z}_s) - \bar{f}(s, \bar{Y}_s, \bar{Z}_s). \]
Now we introduce two processes $\beta$ and $\mu$ as follows:
\[ \beta_s = \begin{cases} ( Y_s - \bar{Y}_s )^{-1} \big( f(s, Y_s, Z_s) - f(s, \bar{Y}_s, Z_s) \big), & \text{if } Y_s \neq \bar{Y}_s, \\ 0, & \text{otherwise}, \end{cases} \qquad \mu_s = \begin{cases} ( Z_s - \bar{Z}_s )^{-1} \big( f(s, \bar{Y}_s, Z_s) - f(s, \bar{Y}_s, \bar{Z}_s) \big), & \text{if } Z_s \neq \bar{Z}_s, \\ 0, & \text{otherwise}. \end{cases} \]


Writing $Y'_t$ in terms of $\beta$ and $\mu$, we get
\[ Y'_t = \xi' + \int_t^T \big( \beta_s Y'_s + \mu_s Z'_s + \varphi_s \big)\,ds - \int_t^T Z'_s\,dW_s. \]
The couple $(Y'_t, Z'_t)$ is thus the solution of a linear BSDE. Note that $(\beta, \mu)$ satisfy the assumptions of Proposition 1.6: indeed, $\beta$ and $\mu$ are bounded because $f$ is Lipschitz, and the integrability condition on the process $\varphi$ can be deduced easily from the standard hypotheses and from the properties of $\bar{Y}$ and $\bar{Z}$. From Proposition 1.6 we get, for all $t \in [0, T]$,
\[ Y'_t = \mathbb{E}\Big[ \xi'\, \Gamma^t_T + \int_t^T \Gamma^t_s\, \varphi_s\,ds \;\Big|\; \mathcal{F}_t \Big], \]
where $(\Gamma^t_s)_{s \ge t}$ is the adjoint process defined by the forward linear SDE
\[ d\Gamma^t_s = \Gamma^t_s ( \beta_s\,ds + \mu_s\,dW_s ), \qquad \Gamma^t_t = 1. \]
Since $\xi' \le 0$ and $\varphi_s \le 0$, while $\Gamma^t_s > 0$, it follows that $Y'_t \le 0$, i.e. $Y_t \le \bar{Y}_t$. $\square$

1.5. Applications of BSDEs in finance

1.5.1. Stochastic Optimal Control Problem

Stochastic optimal control is used to solve optimization problems in random systems that evolve over time and whose evolution can be influenced by external forces.

Optimal control theory appeared in the 1950s, when engineers became interested in problems where the goal was to maximize returns and minimize the costs of an operation; in aerospace problems, for instance, a small improvement of the control had a large impact on reducing costs. The solution of a stochastic optimal control problem can be found by solving a system of differential equations.

In this example, based on [18], we focus on the dynamics given by the controlled process
\[ \begin{cases} dX_t = ( a X_t + b u_t )\,dt + dW_t, & t \in [0, T], \\ X_0 = x, \end{cases} \tag{1.24} \]
where $W$ is a Brownian motion and $X_t$, $t \ge 0$, is called the state process, taking values in $(S, \mathcal{B}(S))$, where $S$ is a Polish space (here, a closed and bounded subset of $\mathbb{R}$).

It is assumed that the system can be controlled, i.e. the system is equipped with controllers whose position dictates its future evolution. These controllers are characterized by points $u = (u_1, \ldots, u_n) \in \mathbb{R}^n$, the control variable. The values that the control variables can assume are restricted to a certain control region $U \subset \mathbb{R}^n$, which is bounded.


Definition 1.8. A continuous control $u_t$, defined on some time interval $0 \le t \le T$, with range in the control region $U$,
\[ u_t \in U \quad \forall t \in [0, T], \]
is said to be an admissible control.

We consider the processes $(X, u)$ to be $(\mathcal{F}_t)_{t \ge 0}$-adapted and square-integrable. It is also necessary to specify a cost function that evaluates the performance of the system quantitatively. We define the cost function as
\[ J : U[0, T] \to \mathbb{R}, \qquad J(u) = \frac{1}{2}\, \mathbb{E}\Big[ \int_0^T ( |X_t|^2 + |u_t|^2 )\,dt + |X_T|^2 \Big]. \tag{1.25} \]
The optimal control problem is to minimize the value of the cost function (1.25) subject to equation (1.24). There exists a control $u \in U$ that minimizes (1.25), and it is unique a.s.

We suppose that $u$ is the optimal control and $X$ the corresponding state process. For any admissible control $v \in U$ and $\varepsilon > 0$ we have
\[ 0 \le \frac{ J(u + \varepsilon v) - J(u) }{ \varepsilon } = \frac{ \frac{1}{2} \mathbb{E}\big[ \int_0^T ( |\bar{X}_t|^2 + |u_t + \varepsilon v_t|^2 )\,dt + |\bar{X}_T|^2 \big] - \frac{1}{2} \mathbb{E}\big[ \int_0^T ( |X_t|^2 + |u_t|^2 )\,dt + |X_T|^2 \big] }{ \varepsilon }, \tag{1.26} \]
where $\bar{X}$ has the dynamics
\[ \begin{cases} d\bar{X}_t = a \bar{X}_t\,dt + b ( u_t + \varepsilon v_t )\,dt + dW_t, & t \in [0, T], \\ \bar{X}_0 = x. \end{cases} \tag{1.27} \]
From (1.26) it follows that
\[ 0 \le \frac{1}{2}\, \mathbb{E}\Big[ \frac{ \int_0^T \big( |\bar{X}_t|^2 - |X_t|^2 + 2 u_t \varepsilon v_t + \varepsilon^2 v_t^2 \big)\,dt + |\bar{X}_T|^2 - |X_T|^2 }{ \varepsilon } \Big] \tag{1.28} \]
\[ = \frac{1}{2}\, \mathbb{E}\Big[ \int_0^T \Big( ( \bar{X}_t + X_t )\, \frac{ \bar{X}_t - X_t }{ \varepsilon } + 2 u_t v_t + \varepsilon v_t^2 \Big)\,dt + ( \bar{X}_T + X_T )\, \frac{ \bar{X}_T - X_T }{ \varepsilon } \Big]. \tag{1.29} \]
We define $\xi_t = \lim_{\varepsilon \to 0} \frac{ \bar{X}_t - X_t }{ \varepsilon }$. From (1.24) and (1.27), it follows that $\xi$ satisfies
\[ \begin{cases} d\xi_t = ( a \xi_t + b v_t )\,dt, & t \in [0, T], \\ \xi_0 = 0. \end{cases} \tag{1.30} \]


Letting $\varepsilon \to 0$, equation (1.29) becomes
\[
0 \le \mathbb{E}\left\{\int_0^T (X_t\xi_t + u_t v_t)\,dt + X_T\xi_T\right\}.
\tag{1.31}
\]

We introduce the following BSDE:
\[
\begin{cases}
dY_t = -(aY_t + X_t)\,dt + Z_t\,dW_t, & t \in [0,T],\\
Y_T = X_T.
\end{cases}
\tag{1.32}
\]
We suppose that the BSDE (1.32) admits a unique adapted solution $(Y, Z)$. By applying Itô's formula to $Y_t\xi_t$ we get
\[
\mathbb{E}(X_T\xi_T) = \mathbb{E}(Y_T\xi_T)
= \mathbb{E}\int_0^T \left\{(-aY_t - X_t)\xi_t + Y_t(a\xi_t + bv_t)\right\}dt
= \mathbb{E}\int_0^T \left\{-X_t\xi_t + bY_t v_t\right\}dt.
\]
By (1.31) we get
\[
0 \le \mathbb{E}\int_0^T (bY_t + u_t)v_t\,dt.
\]
As $v$ is arbitrary, we get the following:
\[
u_t = -bY_t \quad \text{a.s.} \quad \forall t \in [0,T].
\tag{1.33}
\]

Now, since $Y$ is part of the solution of the BSDE (1.32), it is $\mathcal{F}_t$-adapted, so $u$ is an admissible control. By substituting (1.33) in (1.24) we arrive at the following optimal system:
\[
\begin{cases}
dX_t = (aX_t - b^2 Y_t)\,dt + dW_t, & t \in [0,T],\\
dY_t = -(aY_t + X_t)\,dt + Z_t\,dW_t, & t \in [0,T],\\
X_0 = x,\\
Y_T = X_T.
\end{cases}
\]
Finally, we have a system consisting of one Forward Stochastic Differential Equation (introduced later in the thesis) and one Backward Stochastic Differential Equation; such a system is known as a Forward-Backward Stochastic Differential Equation (FBSDE). If we can solve the FBSDE and find a unique adapted solution $(X, Y, Z)$, then $u$ in (1.33) is the optimal control that solves the original problem.

1.5.2. Hedging in the Black-Scholes model

We give an example of the application of BSDEs in risk hedging. In a financial market we have the following, for $t \in [0,T]$:


1. There exists a risk-free asset, whose price is modelled by
\[
\begin{cases}
dB_t = B_t r\,dt,\\
B_0 = y \in \mathbb{R},
\end{cases}
\tag{1.34}
\]
where $r$ denotes the interest rate.

2. We model a stock, a risky asset, by the following forward SDE:
\[
\begin{cases}
dS_t = S_t(\mu\,dt + \sigma\,dW_t),\\
S_0 = x \in \mathbb{R},
\end{cases}
\tag{1.35}
\]
where $\mu \in \mathbb{R}$, $\sigma > 0$; $\mu$ is called the drift and $\sigma$ the volatility.

The solutions of equations (1.34) and (1.35) are given by
\[
B_t = y e^{rt},
\tag{1.36}
\]
\[
S_t = x \exp\left(\left(\mu - \frac12\sigma^2\right)t + \sigma W_t\right), \quad t \ge 0.
\tag{1.37}
\]
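As a quick numerical illustration of the closed-form solutions (1.36) and (1.37), the following sketch simulates both assets on a time grid. The parameter values are arbitrary choices for the example, not from the thesis.

```python
import numpy as np

def bank_account(y, r, t):
    """Closed-form solution B_t = y * exp(r t) of dB = B r dt, eq. (1.36)."""
    return y * np.exp(r * t)

def stock_path(x, mu, sigma, T, n_steps, rng):
    """Exact simulation of the geometric Brownian motion (1.37) on a grid:
    S_t = x * exp((mu - sigma^2/2) t + sigma W_t)."""
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    W = np.concatenate(([0.0], np.cumsum(dW)))  # Brownian path, W_0 = 0
    return t, x * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)

rng = np.random.default_rng(0)
t, S = stock_path(x=100.0, mu=0.05, sigma=0.2, T=1.0, n_steps=252, rng=rng)
B = bank_account(y=1.0, r=0.03, t=t)
```

Because (1.37) is exact, the path is sampled without discretization bias, unlike the Euler scheme used later for general SDEs.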

A portfolio is a couple of adapted processes $(a_t, b_t)$ that represents the amounts invested in the two assets at time $t$. The wealth process is given by
\[
Y_t = a_t S_t + b_t B_t, \quad t \in [0,T].
\tag{1.38}
\]
A main assumption is that $Y$ is self-financing, which means that the wealth process satisfies
\[
dY_t = a_t\,dS_t + b_t\,dB_t = a_t S_t(\mu\,dt + \sigma\,dW_t) + r b_t B_t\,dt.
\]
As $b_t B_t = Y_t - a_t S_t$, we have
\[
dY_t = \left(rY_t + a_t S_t(\mu - r)\right)dt + a_t S_t \sigma\,dW_t.
\]
Now, by putting $Z_t = a_t S_t \sigma$, we get
\[
dY_t = rY_t\,dt + Z_t\frac{\mu - r}{\sigma}\,dt + Z_t\,dW_t.
\]

One interesting question in the financial market is: at which price $v$ should one sell a European call option?

The answer is obtained by replicating the portfolio: the seller sells the option at the price $v$ and invests this sum in the market. The value of his portfolio is characterized by the SDE
\[
\begin{cases}
dY_t = rY_t\,dt + \frac{\mu - r}{\sigma}Z_t\,dt + Z_t\,dW_t,\\
Y_0 = v.
\end{cases}
\]


The problem is to find $v$ and $Z$ such that the solution of the previous SDE satisfies $Y_T = \xi = (S_T - K)^+$, i.e.
\[
\begin{cases}
dY_t = rY_t\,dt + \frac{\mu - r}{\sigma}Z_t\,dt + Z_t\,dW_t,\\
Y_T = \xi.
\end{cases}
\]
In this case $v$ is called the fair price of the option, and it suffices to sell the option at the price $v = Y_0$. We see that pricing the option amounts to solving a linear BSDE.
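For the call payoff $(S_T - K)^+$ this linear pricing BSDE has a well-known closed-form value, the Black-Scholes formula (a standard result, not derived in this thesis). A minimal sketch, with illustrative parameter values:

```python
import math
from statistics import NormalDist

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes price of a European call with payoff (S_T - K)^+.
    This is the fair price v = Y_0 of the linear pricing BSDE."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = NormalDist().cdf  # standard normal cdf
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

price = bs_call(S0=100.0, K=100.0, r=0.03, sigma=0.2, T=1.0)
```

Such closed-form cases are useful later as reference values when testing numerical BSDE schemes.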


2. FBSDEs and Feynman-Kac formula

In this chapter we present Forward-Backward Stochastic Differential Equations (FBSDEs). FBSDEs provide an intensively studied modelling tool for stochastic control problems as well as for financial mathematics. They appeared for the first time in Bismut's paper [2] and were then studied in a general way by Pardoux and Peng in [19].

At the end of this chapter the Feynman-Kac representation theorem is introduced, which gives us a method to connect PDEs and stochastic processes: we can solve PDEs by simulating the stochastic process.

The books [4], [20] and the article [9] are used as the main references for this chapter.

2.1. Forward-Backward Stochastic Differential Equations

We consider the solution of certain BSDEs associated with some forward classical stochastic differential equations. We are going to suppose that the randomness of the parameters $(f, \xi)$ of the BSDE comes from the state of the forward equation.

We consider the following Forward Stochastic Differential Equation:
\[
\begin{cases}
dX_s = b(s, X_s)\,ds + \sigma(s, X_s)\,dW_s, & s \in [t,T],\\
X_t = x,
\end{cases}
\tag{2.1}
\]
where the coefficients
\[
b : [0,T] \times \mathbb{R} \to \mathbb{R}, \qquad \sigma : [0,T] \times \mathbb{R} \to \mathbb{R},
\]
satisfy the following assumptions for some $C > 0$:
\[
|\sigma(t,x) - \sigma(t,x')| + |b(t,x) - b(t,x')| \le C|x - x'|,
\tag{H1}
\]
\[
|\sigma(t,x)| + |b(t,x)| \le C(1 + |x|).
\tag{H2}
\]
Under the assumptions H1 and H2, there exists a unique solution of (2.1) (see Theorem A.12), which we denote by $X_s^{t,x}$, for $s \in [0,T]$.

Now we consider the associated BSDE
\[
\begin{cases}
-dY_s = f(s, X_s^{t,x}, Y_s, Z_s)\,ds - Z_s\,dW_s, & s \in [0,T],\\
Y_T = g(X_T^{t,x}),
\end{cases}
\tag{2.2}
\]
where the coefficients
\[
f : [0,T] \times \mathbb{R} \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}, \qquad g : \mathbb{R} \to \mathbb{R},
\]
satisfy the following assumptions for some $C > 0$:
\[
|f(t,x,y,z) - f(t,x',y',z')| \le C\left(|y - y'| + |z - z'| + |x - x'|\right),
\tag{H3}
\]
\[
|g(x)| \le C(1 + |x|).
\tag{H4}
\]
From Theorem 1.5 we know that there exists a unique solution of (2.2), which we denote by $(Y_s^{t,x}, Z_s^{t,x})$, $s \in [0,T]$. Notice that we needed to introduce the new assumption H4, as the function $g$ in the BSDE (2.2) is linked to the SDE (2.1). We rewrite (2.1) and (2.2) in one system as follows:

\[
\begin{cases}
X_s^{t,x} = x + \int_t^s b(r, X_r^{t,x})\,dr + \int_t^s \sigma(r, X_r^{t,x})\,dW_r, & s \ge t,\\
Y_s^{t,x} = g(X_T^{t,x}) + \int_s^T f(r, X_r^{t,x}, Y_r^{t,x}, Z_r^{t,x})\,dr - \int_s^T Z_r^{t,x}\,dW_r, & s \ge t.
\end{cases}
\tag{2.3}
\]
The system (2.3) is a combination of a forward SDE and a BSDE, where the terminal condition of the BSDE is now allowed to depend on the process $X_s^{t,x}$. Such a type of equation is called a Forward-Backward Stochastic Differential Equation (FBSDE): $X_s^{t,x}$ represents the forward component and $(Y_s^{t,x}, Z_s^{t,x})$ the backward component.

It is important to notice that the solution of the backward component does not appear in the coefficients of the forward SDE; we call the system (2.3) a decoupled FBSDE. A more general FBSDE has the following form:
\[
\begin{cases}
X_t = X_0 + \int_0^t b(s, X_s, Y_s, Z_s)\,ds + \int_0^t \sigma(s, X_s, Y_s)\,dW_s,\\
Y_t = g(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s.
\end{cases}
\]
Here the solution of the backward component appears in the coefficients of the forward component; we call this class of FBSDEs coupled FBSDEs. This type of FBSDE is used in the application to carbon emissions markets. However, the theory of coupled FBSDEs is out of the scope of this thesis; for further reading we refer to the book [4].

2.1.1. Regularity Properties of Solutions

Proposition 2.1. [9] For each $t \in [0,T]$ and $x \in \mathbb{R}$, there exists $C \ge 0$ such that
\[
\mathbb{E}\left(\sup_{0 \le s \le T} \left|Y_s^{t,x}\right|^2\right) + \mathbb{E}\left(\int_0^T \left|Z_s^{t,x}\right|^2 ds\right) \le C(1 + |x|^2).
\]
Proposition 2.2. [9] For each $t, t' \in [0,T]$ and $x, x' \in \mathbb{R}$, if $f$ and $g$ are Lipschitz in $x$ (uniformly in $t$ concerning $f$), there exists $C \ge 0$ such that
\[
\mathbb{E}\left(\sup_{0 \le s \le T} \left|Y_s^{t,x} - Y_s^{t',x'}\right|^2\right) + \mathbb{E}\left(\int_0^T \left|Z_s^{t,x} - Z_s^{t',x'}\right|^2 ds\right) \le C(1 + |x|^2)\left(|x - x'|^2 + |t - t'|\right).
\]


2.1.2. Markov Properties of Solutions

In this subsection we show that the Markov property of the solution of an SDE translates to the solution of the BSDE. This property is mainly used to study the link to PDEs. We introduce the filtration $\mathcal{F}_s^t$, which denotes the future $\sigma$-algebra of $W$ after $t$, i.e. $\mathcal{F}_s^t = \sigma(W_u - W_t,\ t \le u \le s)$.

Proposition 2.3. [4] If $(t,x) \in [0,T] \times \mathbb{R}$, then $\left(X_s^{t,x}, Y_s^{t,x}\right)$, $t \le s \le T$, is adapted to the filtration $\mathcal{F}_s^t$. In particular, $Y_t^{t,x}$ is deterministic. We can choose a version of $Z_s^{t,x}$, $t \le s \le T$, adapted to the filtration $\mathcal{F}_s^t$.

Since $Y_t^{t,x}$ is deterministic, we can define a function
\[
u(t,x) := Y_t^{t,x}, \quad \text{for all } (t,x) \in [0,T] \times \mathbb{R}.
\]
Proposition 2.4. [4] The function $u$ satisfies, for any $(t,x)$, $(t',x')$,
\[
|u(t,x) - u(t',x')| \le C\left(|x - x'| + |t - t'|^{\frac12}(1 + |x|)\right),
\]
for some $C \ge 0$.

With the help of the function $u$ we can establish the Markov property for BSDEs.

Proposition 2.5. [4] Let $t \in [0,T]$ and let $x$ be a square-integrable random variable. Then $\mathbb{P}$-a.s.,
\[
Y_s^{t,x} = u(s, X_s^{t,x}), \quad \forall s \in [t,T].
\]
This property is called the Markov property.

2.2. The link with PDEs: Feynman-Kac formula

We introduce a semi-linear parabolic PDE, as follows:
\[
\begin{cases}
u'(t,x) + \mathcal{A}u(t,x) + f(t, x, u(t,x), Du(t,x)\sigma(t,x)) = 0, & (t,x) \in [0,T] \times \mathbb{R},\\
u(T,x) = g(x),
\end{cases}
\tag{2.4}
\]
where $\mathcal{A}$ represents the linear differential operator
\[
\mathcal{A}u(t,x) = b(t,x)Du(t,x) + \frac12\sigma^2(t,x)D^2u(t,x).
\tag{2.5}
\]
Concerning notation: when $u$ is a function of $t$ and $x$, we denote by $u'$ the partial derivative in time, by $Du$ the first-order partial derivative in space, and by $D^2u$ the second-order partial derivative in space.


The following proposition gives us a probabilistic representation formula for the solution of a semi-linear parabolic PDE. Hence, by solving the PDE (2.4) we can deduce the solution of the FBSDE (2.3).

Proposition 2.6. [9] Let $u \in C^2([0,T] \times \mathbb{R})$ and suppose that there exists a constant $C$ such that, for each $(t,x) \in [0,T] \times \mathbb{R}$, it holds that
\[
|u(t,x)| + |Du(t,x)\sigma(t,x)| \le C(1 + |x|).
\]
Moreover, let $u$ be the solution of the semi-linear parabolic PDE
\[
\begin{cases}
u'(t,x) + \mathcal{A}u(t,x) + f(t, x, u(t,x), Du(t,x)\sigma(t,x)) = 0, & (t,x) \in [0,T] \times \mathbb{R},\\
u(T,x) = g(x),
\end{cases}
\]
where $\mathcal{A}$ is defined as in (2.5). Then, for any $(t,x) \in [0,T] \times \mathbb{R}$,
\[
Y_t^{t,x} := u(t,x),
\]
where $\{(Y_s^{t,x}, Z_s^{t,x}),\ t \le s \le T\}$ is the unique solution of the BSDE in (2.3). Moreover,
\[
(Y_s^{t,x}, Z_s^{t,x}) = \left(u(s, X_s^{t,x}),\ Du(s, X_s^{t,x})\sigma(s, X_s^{t,x})\right).
\]
Proof. We apply Itô's lemma to $u(s, X_s^{t,x})$:
\[
\begin{aligned}
du(s, X_s^{t,x}) &= u'(s, X_s^{t,x})\,ds + Du(s, X_s^{t,x})\,dX_s^{t,x} + \tfrac12\sigma^2(s, X_s^{t,x})D^2u(s, X_s^{t,x})\,ds\\
&= \left(u'(s, X_s^{t,x}) + \tfrac12\sigma^2(s, X_s^{t,x})D^2u(s, X_s^{t,x}) + Du(s, X_s^{t,x})\,b(s, X_s^{t,x})\right)ds + Du(s, X_s^{t,x})\,\sigma(s, X_s^{t,x})\,dW_s\\
&= \left(u' + \mathcal{A}u\right)(s, X_s^{t,x})\,ds + Du(s, X_s^{t,x})\,\sigma(s, X_s^{t,x})\,dW_s \qquad \text{(by the definition of $\mathcal{A}$ in (2.5))}.
\end{aligned}
\]
As $u$ is a solution of the PDE (2.4), we have $u' + \mathcal{A}u = -f$, and we get
\[
du(s, X_s^{t,x}) = -f\left(s, X_s^{t,x}, u(s, X_s^{t,x}), Du\,\sigma(s, X_s^{t,x})\right)ds + Du\,\sigma(s, X_s^{t,x})\,dW_s,
\]
which proves the statement.
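Proposition 2.6 can be checked numerically in its simplest linear instance. Assuming $f = 0$, $b = 0$ and $\sigma = 1$ (so the PDE reduces to the backward heat equation $u' + \tfrac12 D^2u = 0$) and taking $g(x) = x^2$, the explicit solution is $u(t,x) = x^2 + (T - t)$, and the probabilistic representation $u(t,x) = \mathbb{E}[g(x + W_T - W_t)]$ reproduces it:

```python
import numpy as np

# Simplest linear case of the Feynman-Kac representation: f = 0, b = 0,
# sigma = 1, so X_s^{t,x} = x + (W_s - W_t) and u(t,x) = E[g(X_T^{t,x})].
# For g(x) = x^2 this gives u(t,x) = x^2 + (T - t), which solves
# u' + (1/2) D^2 u = 0 with u(T,x) = x^2.
rng = np.random.default_rng(1)
T, t, x = 1.0, 0.25, 0.5
W_increment = rng.normal(0.0, np.sqrt(T - t), size=200_000)
mc_estimate = np.mean((x + W_increment) ** 2)   # Monte Carlo for E[g(.)]
exact = x * x + (T - t)                         # closed-form PDE solution
```

The Monte Carlo estimate agrees with the PDE solution up to the usual $O(n^{-1/2})$ statistical error.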

To get the reverse direction, i.e. to study the BSDE and deduce the construction of a solution of the PDE, we need the following definition of viscosity solutions. The basic idea of a viscosity solution (see for example [11]) is to replace the differential $Du(x)$ at a point $x$ where it does not exist (for example because of a kink in $u$) with the differential $D\varphi(x)$ of a smooth function $\varphi$ touching the graph of $u$ at the point $x$, from above for the subsolution condition and from below for the supersolution condition.

Definition 2.7. Suppose $u \in C([0,T] \times \mathbb{R}, \mathbb{R})$ with $u(T,x) = g(x)$ for all $x \in \mathbb{R}$. Then $u$ is called a viscosity subsolution of the PDE (2.4) if, for each $\varphi \in C^2([0,T] \times \mathbb{R})$, we have
\[
\varphi'(t,x) + \mathcal{A}\varphi(t,x) + f(t, x, \varphi(t,x), D\varphi(t,x)\sigma(t,x)) \ge 0
\]
for all $(t,x) \in [0,T) \times \mathbb{R}$ at which $u - \varphi$ attains a local maximum.

Suppose $u \in C([0,T] \times \mathbb{R}, \mathbb{R})$ satisfies $u(T,x) = g(x)$ for all $x \in \mathbb{R}$. Then $u$ is called a viscosity supersolution of the PDE (2.4) if, for each $\varphi \in C^2([0,T] \times \mathbb{R})$, we have
\[
\varphi'(t,x) + \mathcal{A}\varphi(t,x) + f(t, x, \varphi(t,x), D\varphi(t,x)\sigma(t,x)) \le 0
\]
for all $(t,x) \in [0,T) \times \mathbb{R}$ at which $u - \varphi$ attains a local minimum.

Hence $u \in C([0,T] \times \mathbb{R}, \mathbb{R})$ is called a viscosity solution of (2.4) if it is both a viscosity subsolution and a viscosity supersolution.

For further reading on viscosity solutions, we refer to the papers [6] and [8].

Theorem 2.8. [4] The function $u(t,x) := Y_t^{t,x}$, as defined via (2.3), is continuous on $[0,T] \times \mathbb{R}$ and is a viscosity solution of the PDE (2.4).

Proof. By construction $u$ is continuous and $u(T,\cdot) = g(\cdot)$, so we only need to prove the viscosity subsolution property (the supersolution property is proved analogously). Let $\varphi \in C^2$ be such that $u - \varphi$ has at $(t_0, x_0)$, with $0 < t_0 < T$, a local maximum equal to $0$. We want to show that
\[
\varphi'(t_0, x_0) + \mathcal{A}\varphi(t_0, x_0) + f(t_0, x_0, u(t_0, x_0), D\varphi\,\sigma(t_0, x_0)) \ge 0.
\]
We proceed by contradiction, so we assume that there exists $\delta > 0$ such that
\[
\varphi'(t_0, x_0) + \mathcal{A}\varphi(t_0, x_0) + f(t_0, x_0, u(t_0, x_0), D\varphi\,\sigma(t_0, x_0)) = -\delta < 0.
\]
The function $u - \varphi$ has a local maximum at $(t_0, x_0)$ equal to $0$; this means that for all $(t,x)$ close to $(t_0, x_0)$ we have
\[
(u - \varphi)(t,x) \le (u - \varphi)(t_0, x_0) = 0,
\]
i.e. $u(t,x) \le \varphi(t,x)$.

By continuity there exists $0 < \alpha \le T - t_0$ such that for all $t_0 \le t \le t_0 + \alpha$ and $|x - x_0| \le \alpha$,
\[
u(t,x) \le \varphi(t,x),
\]
\[
\varphi'(t,x) + \mathcal{A}\varphi(t,x) + f(t, x, \varphi(t,x), D\varphi\,\sigma(t,x)) \le -\frac{\delta}{2} < 0.
\]
Define the stopping time
\[
\tau := \inf\left\{s \ge t_0 : \left|X_s^{t_0,x_0} - x_0\right| \ge \alpha\right\} \wedge (t_0 + \alpha).
\]

Since $X^{t_0,x_0}$ is a continuous process, we have $\left|X_\tau^{t_0,x_0} - x_0\right| \le \alpha$.

By Itô's formula applied to $\varphi(r, X_r^{t_0,x_0})$ between $u \wedge \tau$ and $(t_0 + \alpha) \wedge \tau = \tau$, for $t_0 \le u \le t_0 + \alpha$, we obtain
\[
\varphi\left(u \wedge \tau, X_{u \wedge \tau}^{t_0,x_0}\right)
= \varphi\left(\tau, X_\tau^{t_0,x_0}\right)
- \int_{u \wedge \tau}^{\tau} \left(\varphi' + \mathcal{A}\varphi\right)(r, X_r^{t_0,x_0})\,dr
- \int_{u \wedge \tau}^{\tau} D\varphi\,\sigma(r, X_r^{t_0,x_0})\,dW_r.
\tag{2.6}
\]
We consider two couples of processes. On the one hand, for $t_0 \le u \le t_0 + \alpha$, we define
\[
(Y_u', Z_u') := \left(\varphi\left(u \wedge \tau, X_{u \wedge \tau}^{t_0,x_0}\right),\ \mathbb{1}_{\{u \le \tau\}} D\varphi\,\sigma(u, X_u^{t_0,x_0})\right).
\]
By replacing $Y_u'$ and $Z_u'$ in (2.6), we get
\[
Y_u' = \varphi\left(\tau, X_\tau^{t_0,x_0}\right)
+ \int_u^{t_0+\alpha} \mathbb{1}_{\{r \le \tau\}}\left(-(\varphi' + \mathcal{A}\varphi)\right)(r, X_r^{t_0,x_0})\,dr
- \int_u^{t_0+\alpha} Z_r'\,dW_r, \quad t_0 \le u \le t_0 + \alpha.
\]
On the other hand, for $t_0 \le u \le t_0 + \alpha$, we define
\[
(Y_u, Z_u) := \left(Y_{u \wedge \tau}^{t_0,x_0},\ \mathbb{1}_{\{u \le \tau\}} Z_u^{t_0,x_0}\right).
\]
By replacing $Y_u$ and $Z_u$ in the backward SDE of the system (2.3), we get
\[
Y_u = Y_{t_0+\alpha}
+ \int_u^{t_0+\alpha} \mathbb{1}_{\{r \le \tau\}} f(r, X_r^{t_0,x_0}, Y_r, Z_r)\,dr
- \int_u^{t_0+\alpha} Z_r\,dW_r, \quad t_0 \le u \le t_0 + \alpha.
\]
The Markov property (Proposition 2.5) implies that $\mathbb{P}$-a.s., for all $t_0 \le r \le t_0 + \alpha$, $Y_r^{t_0,x_0} = u(r, X_r^{t_0,x_0})$, and thus $Y_{t_0+\alpha} = Y_\tau^{t_0,x_0} = u(\tau, X_\tau^{t_0,x_0})$; we get
\[
Y_u = u\left(\tau, X_\tau^{t_0,x_0}\right)
+ \int_u^{t_0+\alpha} \mathbb{1}_{\{r \le \tau\}} f\left(r, X_r^{t_0,x_0}, u(r, X_r^{t_0,x_0}), Z_r\right)dr
- \int_u^{t_0+\alpha} Z_r\,dW_r,
\]
where $t_0 \le u \le t_0 + \alpha$.

In the next step we apply the Comparison Theorem (1.7) to $(Y_u', Z_u')$ and $(Y_u, Z_u)$. From the definition of $\tau$ we get
\[
u\left(\tau, X_\tau^{t_0,x_0}\right) \le \varphi\left(\tau, X_\tau^{t_0,x_0}\right),
\]
\[
\mathbb{1}_{\{r \le \tau\}} f\left(r, X_r^{t_0,x_0}, u(r, X_r^{t_0,x_0}), D\varphi\,\sigma(r, X_r^{t_0,x_0})\right)
\le -\mathbb{1}_{\{r \le \tau\}}\left(\varphi' + \mathcal{A}\varphi\right)(r, X_r^{t_0,x_0}).
\]
Moreover, we always have, by the definition of $\tau$,
\[
-\mathbb{E}\left(\int_{t_0}^{t_0+\alpha} \mathbb{1}_{\{r \le \tau\}}\left(\varphi' + \mathcal{A}\varphi + f\right)\left(r, X_r^{t_0,x_0}, u(r, X_r^{t_0,x_0}), D\varphi\,\sigma(r, X_r^{t_0,x_0})\right)dr\right)
\ge \mathbb{E}(\tau - t_0)\frac{\delta}{2} > 0.
\]
This quantity is indeed strictly positive, as $\delta > 0$ and $\tau > t_0$, since $\left|X_{t_0}^{t_0,x_0} - x_0\right| = 0 < \alpha$. We can apply the strict version of the Comparison Theorem (1.7) to obtain $u(t_0, x_0) < \varphi(t_0, x_0)$, which contradicts $(u - \varphi)(t_0, x_0) = 0$. This completes the proof.


3. Numerical Methods for BSDEs

Typically the dynamics of stock prices and interest rates are driven by a continuous-time stochastic process. However, simulation is done in discrete time steps. Hence, the first step in any simulation scheme is to find a way to "discretize" a continuous-time process into a discrete-time process. We will discretize the forward component by the simple Euler scheme and the backward component by a theta-scheme. The use of a theta-scheme introduces two parameters $\theta_1$ and $\theta_2$, which are used to generate multiple discretizations.

It is the Feynman-Kac theorem, proven in the previous chapter, that relates the conditional expectation of the value of a contract payoff function under the risk-neutral measure to the solution of a partial differential equation.

The existing numerical methods can be classified into three major groups:
• PDE methods
• Monte Carlo simulation
• numerical integration methods (often relying on Fourier techniques)

Each of them has advantages and disadvantages, and the last group is often used for calibration purposes. We will use probabilistic numerical methods to solve BSDEs, relying on a time discretization of the stochastic process and approximations of the appearing conditional expectations.

In this chapter we will make use of the following FBSDE:
\[
\begin{cases}
X_t = X_0 + \int_0^t b(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dW_s,\\
X_0 = x_0,
\end{cases}
\tag{3.1}
\]
\[
\begin{cases}
Y_t = \xi + \int_t^T f(s, X_s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s,\\
\xi = g(X_T).
\end{cases}
\tag{3.2}
\]
Equation (3.1) represents a forward SDE and (3.2) represents a BSDE. As seen in the previous chapter, this FBSDE satisfies the Markov property.

Let $\Pi$ be a partition of time points $0 = t_0 < t_1 < t_2 < t_3 < \ldots < t_i < \ldots < t_M = T$, with fixed time steps $\Delta t := t_{i+1} - t_i$, and let $\Delta W_{t_i} := W_{t_{i+1}} - W_{t_i}$, where $W$ is a Wiener process. Notice that the increments $\Delta W_{t_i} \sim N(0, \Delta t)$, i.e. they have standard deviation $\sqrt{\Delta t}$.

For notational simplicity we write $X_i = X_{t_i}$, $Y_i = Y_{t_i}$, $Z_i = Z_{t_i}$. To determine the values $X_{i+1}^\Pi$ for $i = 0, \ldots, M-1$, we use three different Taylor schemes: Euler, Milstein and the weak Taylor scheme of order 2.0.

3.1. Discretization of the SDE

In this section, based on the paper [29], we approximate the numerical solution of the FSDE (3.1). Just as in the deterministic case, we can derive numerical schemes by looking at Taylor expansions of certain functions; here we consider the stochastic setting. First we introduce the following definitions of strong and weak convergence.

Definition 3.1. Let $X$ be the solution of the FSDE (3.1) and let $X_i^\Pi$ be the time-discretized approximation of $X$. The approximation $X_i^\Pi$ converges to the stochastic process $X$ in the strong sense with order $\alpha_1$ if there exists a constant $C \in \mathbb{R}$ such that
\[
\mathbb{E}\left[\left|X_i^\Pi - X_i\right|\right] \le C\,\Delta t^{\alpha_1}.
\]
Definition 3.2. Let $X$ be the solution of the FSDE (3.1) and let $X_i^\Pi$ be the time-discretized approximation of $X$. The approximation $X_i^\Pi$ converges to the stochastic process $X$ in the weak sense with order $\alpha_2$ if there exists a constant $C \in \mathbb{R}$ such that for every infinitely differentiable function $\varphi : \mathbb{R}^d \to \mathbb{R}$ with at most polynomially growing derivatives it holds that
\[
\left|\mathbb{E}\left[\varphi(X_i^\Pi)\right] - \mathbb{E}\left[\varphi(X_i)\right]\right| \le C\,\Delta t^{\alpha_2}.
\]

3.1.1. Euler-Maruyama scheme

Let $k : \mathbb{R} \to \mathbb{R}$ be a twice differentiable function and let $X_t$ be an Itô process with drift term $b(t, X_t) \in C^2$ and diffusion term $\sigma(t, X_t) \in C^2$.

Consider the forward SDE of (3.1),
\[
X_t = X_0 + \int_{t_0}^t b(s, X_s)\,ds + \int_{t_0}^t \sigma(s, X_s)\,dW_s, \quad \forall t \in [t_0, T].
\tag{3.3}
\]
By Itô's lemma we get
\[
k(X_t) = k(X_{t_0}) + \int_{t_0}^t \left[b(s, X_s)k'(X_s) + \frac12\sigma(s, X_s)^2 k''(X_s)\right]ds + \int_{t_0}^t \sigma(s, X_s)k'(X_s)\,dW_s.
\tag{3.4}
\]
Define
\[
L^0 k := bk' + \frac12\sigma^2 k'', \qquad L^1 k := \sigma k'.
\]
We can rewrite (3.4) as
\[
k(X_t) = k(X_{t_0}) + \int_{t_0}^t L^0 k(X_s)\,ds + \int_{t_0}^t L^1 k(X_s)\,dW_s.
\tag{3.5}
\]
Consider now the integral form of $X_t$ in (3.3) and substitute the functions $b$ and $\sigma$ for $k$ in equation (3.5); we get the so-called first-order Itô-Taylor expansion of $X_t$:
\[
\begin{aligned}
X_t &= X_{t_0} + \int_{t_0}^t \left(b(t_0, X_{t_0}) + \int_{t_0}^s L^0 b(p, X_p)\,dp + \int_{t_0}^s L^1 b(p, X_p)\,dW_p\right)ds\\
&\qquad + \int_{t_0}^t \left(\sigma(t_0, X_{t_0}) + \int_{t_0}^s L^0 \sigma(p, X_p)\,dp + \int_{t_0}^s L^1 \sigma(p, X_p)\,dW_p\right)dW_s\\
&= X_{t_0} + b(t_0, X_{t_0})\int_{t_0}^t ds + \sigma(t_0, X_{t_0})\int_{t_0}^t dW_s + R_t\\
&= X_{t_0} + b(t_0, X_{t_0})(t - t_0) + \sigma(t_0, X_{t_0})(W_t - W_{t_0}) + R_t,
\end{aligned}
\]
where $R_t$ is the remainder term consisting of double integrals. This Itô-Taylor expansion gives us the following Euler-Maruyama scheme as an approximation for the FSDE in (3.3):
\[
\begin{cases}
X_{i+1}^\Pi = X_i^\Pi + b(t_i, X_i^\Pi)\,\Delta t + \sigma(t_i, X_i^\Pi)\,\Delta W_i, & \forall i = 0, \ldots, M-1,\\
X_0^\Pi = x_0.
\end{cases}
\tag{3.6}
\]
Proposition 3.3. [30] Let $X$ be given by (3.3) and assume conditions H1 and H2 (from Chapter 2), so that there exists a unique solution. Then the Euler-Maruyama scheme (3.6) has order of strong convergence $\alpha_1 = \frac12$ and order of weak convergence $\alpha_2 = 1$.
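A minimal implementation of the Euler-Maruyama scheme (3.6), applied here to a geometric Brownian motion as an illustrative example (the coefficient values are arbitrary):

```python
import numpy as np

def euler_maruyama(b, sigma, x0, T, M, rng):
    """Euler-Maruyama scheme (3.6):
    X_{i+1} = X_i + b(t_i, X_i) dt + sigma(t_i, X_i) dW_i on a uniform grid."""
    dt = T / M
    X = np.empty(M + 1)
    X[0] = x0
    for i in range(M):
        t = i * dt
        dW = rng.normal(0.0, np.sqrt(dt))   # dW_i ~ N(0, dt)
        X[i + 1] = X[i] + b(t, X[i]) * dt + sigma(t, X[i]) * dW
    return X

# Example: dX = mu X dt + sig X dW (geometric Brownian motion).
rng = np.random.default_rng(2)
mu, sig = 0.05, 0.2
X = euler_maruyama(lambda t, x: mu * x, lambda t, x: sig * x,
                   x0=1.0, T=1.0, M=100, rng=rng)
```

The scheme only freezes the coefficients at the left endpoint of each interval, which is why the strong order is only $\tfrac12$.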

3.1.2. The Milstein Scheme

The Milstein approximation for the SDE in (3.3) has the form
\[
\begin{cases}
X_0^\Pi = x_0,\\
X_{i+1}^\Pi = X_i^\Pi + b(t_i, X_i^\Pi)\,\Delta t + \sigma(t_i, X_i^\Pi)\,\Delta W_i + \frac12\sigma(t_i, X_i^\Pi)D\sigma(t_i, X_i^\Pi)\left((\Delta W_i)^2 - \Delta t\right),
\end{cases}
\]
for $i = 0, \ldots, M-1$.

Proposition 3.4. [30] Let $X$ be given by (3.3) and assume conditions H1 and H2, so that there exists a unique solution. Then the Milstein scheme above has order of strong convergence $\alpha_1 = 1$ and order of weak convergence $\alpha_2 = 1$.
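Compared to Euler-Maruyama, the Milstein scheme only needs the additional space derivative $D\sigma$ of the diffusion coefficient. A sketch, again illustrated on geometric Brownian motion, where $D\sigma(t,x) = \sigma$ (coefficient values are arbitrary):

```python
import numpy as np

def milstein(b, sigma, dsigma, x0, T, M, rng):
    """Milstein scheme: Euler-Maruyama plus the correction term
    (1/2) sigma(t_i, X_i) Dsigma(t_i, X_i) ((dW_i)^2 - dt)."""
    dt = T / M
    X = np.empty(M + 1)
    X[0] = x0
    for i in range(M):
        t = i * dt
        dW = rng.normal(0.0, np.sqrt(dt))
        s = sigma(t, X[i])
        X[i + 1] = (X[i] + b(t, X[i]) * dt + s * dW
                    + 0.5 * s * dsigma(t, X[i]) * (dW * dW - dt))
    return X

# For dX = mu X dt + sig X dW the space derivative of the diffusion is sig.
rng = np.random.default_rng(3)
mu, sig = 0.05, 0.2
X = milstein(lambda t, x: mu * x, lambda t, x: sig * x,
             lambda t, x: sig, x0=1.0, T=1.0, M=100, rng=rng)
```

The correction term has mean zero (since $\mathbb{E}[(\Delta W_i)^2] = \Delta t$) but raises the strong order from $\tfrac12$ to $1$.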

3.1.3. The weak Taylor scheme of order 2.0

The weak Taylor scheme of order 2.0 for the SDE in (3.3), given $X_i^\Pi = x$, has the form
\[
\begin{cases}
X_0^\Pi = x_0,\\
X_{i+1}^\Pi = x + b(t_i,x)\,\Delta t + \sigma(t_i,x)\,\Delta W_i + \frac12\sigma(t_i,x)D\sigma(t_i,x)\left((\Delta W_i)^2 - \Delta t\right)\\
\qquad\quad + Db(t_i,x)\sigma(t_i,x)\,\Delta Z_{i+1} + \frac12\left(b(t_i,x)Db(t_i,x) + \frac12 D^2b(t_i,x)\sigma^2(t_i,x)\right)(\Delta t)^2\\
\qquad\quad + \left(b(t_i,x)D\sigma(t_i,x) + \frac12 D^2\sigma(t_i,x)\sigma^2(t_i,x)\right)\left(\Delta W_i\,\Delta t - \Delta Z_{i+1}\right),
\end{cases}
\]
for $i = 0, \ldots, M-1$, where $\Delta Z_{i+1}$ denotes the iterated integral $\int_{t_i}^{t_{i+1}} (W_s - W_{t_i})\,ds$.

Proposition 3.5. [30] Let $X$ be given by (3.3) and assume conditions H1 and H2. Then the weak Taylor scheme of order 2.0 has order of strong convergence $\alpha_1 = 1$ and order of weak convergence $\alpha_2 = 2$.

We observe the following:
\[
\mathbb{E}[\Delta Z_{i+1}] = 0, \quad \mathrm{Var}(\Delta Z_{i+1}) = \frac13(\Delta t)^3 \quad\text{and}\quad \mathrm{Cov}(\Delta W_i, \Delta Z_{i+1}) = \frac12(\Delta t)^2.
\tag{3.7}
\]

If we replace $\Delta Z_{i+1}$ by $\Delta Z_{i+1} = \frac12\Delta W_i\,\Delta t$, as suggested in [27], then
\[
\mathbb{E}[\Delta Z_{i+1}] = 0, \quad \mathrm{Var}(\Delta Z_{i+1}) = \frac14(\Delta t)^3 \quad\text{and}\quad \mathrm{Cov}(\Delta W_i, \Delta Z_{i+1}) = \frac12(\Delta t)^2.
\tag{3.8}
\]
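As a quick sanity check, the moments in (3.8) of the simplified increment $\Delta Z_{i+1} = \tfrac12\Delta W_i\,\Delta t$ can be confirmed by direct Monte Carlo sampling of $\Delta W_i \sim N(0, \Delta t)$:

```python
import numpy as np

# Monte Carlo check of (3.8): for dZ = (1/2) dW dt with dW ~ N(0, dt),
# E[dZ] = 0, Var(dZ) = dt^3 / 4, Cov(dW, dZ) = dt^2 / 2.
rng = np.random.default_rng(4)
dt = 0.1
dW = rng.normal(0.0, np.sqrt(dt), size=1_000_000)
dZ = 0.5 * dW * dt
mean_dZ = dZ.mean()
var_dZ = dZ.var()
# E[dW] = E[dZ] = 0, so the raw product mean estimates the covariance.
cov = np.mean(dW * dZ)
```

The same experiment with the true iterated integral would reproduce the variance $\tfrac13(\Delta t)^3$ of (3.7); only the variance changes under the replacement, while mean and covariance match exactly.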

This replacement matches the moments to first order and simplifies the scheme.

Remark 3.6. Similarly as in [29], the Taylor discretization schemes above can be written in a general form, given $X_i^\Pi = x$, as follows:
\[
\begin{cases}
X_{i+1}^\Pi = x + m(t_i,x)\,\Delta t + s(t_i,x)\,\Delta W_i + t(t_i,x)\,(\Delta W_i)^2, & \text{for } i = 0, \ldots, M-1,\\
X_0^\Pi = x_0.
\end{cases}
\tag{3.9}
\]
• For the Euler scheme, we find:
\[
m(t_i,x) = b(t_i,x), \qquad s(t_i,x) = \sigma(t_i,x), \qquad t(t_i,x) = 0.
\]
• For the Milstein scheme, we have:
\[
m(t_i,x) = b(t_i,x) - \tfrac12\sigma(t_i,x)D\sigma(t_i,x), \qquad s(t_i,x) = \sigma(t_i,x), \qquad t(t_i,x) = \tfrac12\sigma(t_i,x)D\sigma(t_i,x).
\]
• For the 2.0-weak-Taylor scheme (with the replacement $\Delta Z_{i+1} = \frac12\Delta W_i\,\Delta t$), we see that:
\[
m(t_i,x) = b(t_i,x) - \tfrac12\sigma(t_i,x)D\sigma(t_i,x) + \tfrac12\left(b(t_i,x)Db(t_i,x) + \tfrac12 D^2b(t_i,x)\sigma^2(t_i,x)\right)\Delta t,
\]
\[
s(t_i,x) = \sigma(t_i,x) + \tfrac12\left(Db(t_i,x)\sigma(t_i,x) + b(t_i,x)D\sigma(t_i,x) + \tfrac12 D^2\sigma(t_i,x)\sigma^2(t_i,x)\right)\Delta t,
\]
\[
t(t_i,x) = \tfrac12\sigma(t_i,x)D\sigma(t_i,x).
\]
For the discretization schemes above we can determine the characteristic function of $X_{i+1}^\Pi$ in (3.9), which is given in Lemma 3.7. For notational simplicity, in Lemma 3.7 we will use that the variables only depend on the space term, i.e. $m(t_i,x) = m(x)$, $s(t_i,x) = s(x)$ and $t(t_i,x) = t(x)$.


Lemma 3.7. [30] The characteristic function of $X_{i+1}^\Pi$ in (3.9), given $X_i^\Pi = x$, is given by
\[
\phi^x_{X_{i+1}^\Pi}(u) = \mathbb{E}\left[\exp(iuX_{i+1}^\Pi)\,\middle|\,X_i^\Pi = x\right]
= \exp\left(iux + ium(x)\Delta t - \frac{\frac12 u^2 s^2(x)\Delta t}{1 - 2iut(x)\Delta t}\right)\left(1 - 2iut(x)\Delta t\right)^{-\frac12}.
\]
Proof. First we show the result when $t(x) = 0$:
\[
\begin{aligned}
\phi^x_{X_{i+1}^\Pi}(u) &= \mathbb{E}\left[\exp(iuX_{i+1}^\Pi)\,\middle|\,X_i^\Pi = x\right]\\
&= \mathbb{E}\left[\exp\left(iux + ium(x)\Delta t + ius(x)\Delta W_i\right)\,\middle|\,X_i^\Pi = x\right]\\
&= \exp\left(iux + ium(x)\Delta t\right)\mathbb{E}\left[\exp(ius(x)\Delta W_i)\right],
\end{aligned}
\]
where $\Delta W_i \sim N(0, \Delta t)$, so we get
\[
\phi^x_{X_{i+1}^\Pi}(u) = \exp\left(iux + ium(x)\Delta t\right)\phi_{N(0,\Delta t)}(us(x))
= \exp\left(iux + ium(x)\Delta t - \frac12 u^2 s^2(x)\Delta t\right).
\]
For $t(x) \ne 0$ we find that
\[
\begin{aligned}
\phi^x_{X_{i+1}^\Pi}(u) &= \mathbb{E}\left[\exp\left(iux + ium(x)\Delta t + ius(x)\Delta W_i + iut(x)(\Delta W_i)^2\right)\right]\\
&= \mathbb{E}\left[\exp\left(iux + ium(x)\Delta t + iut(x)\left(\Delta W_i + \frac12\frac{s(x)}{t(x)}\right)^2 - iu\frac14\frac{s^2(x)}{t(x)}\right)\right]\\
&= \exp\left(iux + ium(x)\Delta t - iu\frac14\frac{s^2(x)}{t(x)}\right)\mathbb{E}\left[\exp\left(iut(x)\left(\Delta W_i + \frac12\frac{s(x)}{t(x)}\right)^2\right)\right],
\end{aligned}
\]
where $\Delta W_i + \frac12\frac{s(x)}{t(x)} \sim N\left(\frac12\frac{s(x)}{t(x)}, \Delta t\right)$; by (A.11) this is equivalent to
\[
\frac{1}{\Delta t}\left(\Delta W_i + \frac12\frac{s(x)}{t(x)}\right)^2 \sim \chi'^2_1\left(\frac14\frac{s^2(x)}{t^2(x)\Delta t}\right),
\]
which denotes the chi-squared distribution with one degree of freedom and noncentrality parameter $\frac14\frac{s^2(x)}{t^2(x)\Delta t}$. Hence
\[
\begin{aligned}
\phi^x_{X_{i+1}^\Pi}(u) &= \exp\left(iux + ium(x)\Delta t - iu\frac14\frac{s^2(x)}{t(x)}\right)\phi_{\chi'^2_1\left(\frac{s^2(x)}{4t^2(x)\Delta t}\right)}(ut(x)\Delta t)\\
&= \exp\left(iux + ium(x)\Delta t - iu\frac14\frac{s^2(x)}{t(x)}\right)\exp\left(\frac14\frac{s^2(x)}{t(x)}\cdot\frac{iu}{1 - 2iut(x)\Delta t}\right)\left(1 - 2iut(x)\Delta t\right)^{-\frac12}\\
&= \exp\left(iux + ium(x)\Delta t - \frac{\frac12 u^2 s^2(x)\Delta t}{1 - 2iut(x)\Delta t}\right)\left(1 - 2iut(x)\Delta t\right)^{-\frac12}.
\end{aligned}
\]
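Lemma 3.7 is easy to cross-check by Monte Carlo: simulate one step of the general scheme (3.9) and compare the empirical $\mathbb{E}[e^{iuX_{i+1}^\Pi}]$ with the closed-form expression. The coefficient values below are arbitrary test inputs; `t_coef` plays the role of the coefficient $t(x)$ (renamed to avoid clashing with time).

```python
import numpy as np

def char_fn(u, x, m, s, t, dt):
    """Characteristic function of Lemma 3.7 for the one-step scheme
    X_{i+1} = x + m dt + s dW + t dW^2, with dW ~ N(0, dt)."""
    denom = 1.0 - 2j * u * t * dt
    return np.exp(1j * u * x + 1j * u * m * dt
                  - 0.5 * u**2 * s**2 * dt / denom) / np.sqrt(denom)

# Monte Carlo cross-check with arbitrary coefficients.
rng = np.random.default_rng(5)
x, m, s, t_coef, dt, u = 0.3, 0.1, 0.4, 0.2, 0.05, 1.5
dW = rng.normal(0.0, np.sqrt(dt), size=500_000)
X_next = x + m * dt + s * dW + t_coef * dW**2
mc = np.mean(np.exp(1j * u * X_next))
exact = char_fn(u, x, m, s, t_coef, dt)
```

For $t(x) = 0$ the expression collapses to the Gaussian characteristic function, the first case in the proof.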


3.2. Discretization of the BSDE

In this section we develop a discretization scheme for the BSDE (3.2). First, the terminal condition $Y_M$ is approximated by using the Euler scheme (3.6) for $X_t$:
\[
Y_M^\Pi = g(X_M^\Pi).
\]
From the Feynman-Kac formula, i.e. Proposition 2.6, we know that the solution of the FBSDE (3.1)-(3.2) satisfies
\[
(Y_t, Z_t) = \left(u(t, X_t),\ \sigma(t, X_t)Du(t, X_t)\right).
\]
At time $T$ we know that $u(t_M, X_M) = g(X_M)$, so we have $Z_M = \sigma(t_M, X_M)Dg(X_M)$. Hence the terminal condition for $Z_t$ is obtained by substituting for $X_T$ its Euler approximation:
\[
Z_M^\Pi = \sigma(t_M, X_M^\Pi)Dg(X_M^\Pi).
\]
To develop a numerical scheme for the BSDE, we focus on the discrete version of the BSDE on the interval $[t_i, t_{i+1}]$:
\[
Y_i = Y_{i+1} + \int_{t_i}^{t_{i+1}} f(s, X_s, Y_s, Z_s)\,ds - \int_{t_i}^{t_{i+1}} Z_s\,dW_s.
\tag{3.10}
\]
With a basic Euler discretization, backward in time, the unknown value $Y_{i+1}$ is required to approximate $Y_i$. However, by taking into account the adaptedness constraints on $Y$ and $Z$ and taking the conditional expectation on both sides with respect to the filtration $\mathcal{F}_{t_i}$, it is possible to get a backward induction scheme. As $Y_i$ and $Z_i$ are adapted to $\mathcal{F}_{t_i}$, we have
\[
\mathbb{E}[Z_i \mid \mathcal{F}_{t_i}] = Z_i, \qquad \mathbb{E}[Y_i \mid \mathcal{F}_{t_i}] = Y_i.
\]
Taking conditional expectations with respect to the filtration $\mathcal{F}_{t_i}$ in (3.10), we get
\[
\begin{aligned}
Y_i &= \mathbb{E}[Y_{i+1} \mid \mathcal{F}_{t_i}] + \mathbb{E}\left[\int_{t_i}^{t_{i+1}} f(s, X_s, Y_s, Z_s)\,ds\,\middle|\,\mathcal{F}_{t_i}\right] - \mathbb{E}\left[\int_{t_i}^{t_{i+1}} Z_s\,dW_s\,\middle|\,\mathcal{F}_{t_i}\right]\\
&= \mathbb{E}[Y_{i+1} \mid \mathcal{F}_{t_i}] + \int_{t_i}^{t_{i+1}} \mathbb{E}[f(s, X_s, Y_s, Z_s) \mid \mathcal{F}_{t_i}]\,ds - \mathbb{E}\left[\int_{t_i}^{t_{i+1}} Z_s\,dW_s\,\middle|\,\mathcal{F}_{t_i}\right] \quad\text{(by Fubini's theorem)}.
\end{aligned}
\]
Since we assumed that $Z \in \mathbb{H}^2_T$, its stochastic integral is a martingale. Hence we get
\[
Y_i = \mathbb{E}[Y_{i+1} \mid \mathcal{F}_{t_i}] + \int_{t_i}^{t_{i+1}} \mathbb{E}[f(s, X_s, Y_s, Z_s) \mid \mathcal{F}_{t_i}]\,ds.
\tag{3.11}
\]

Now we will use the theta-time discretization method [13]: a convex combination of an explicit term, i.e. at time $t_{i+1}$, and an implicit term, i.e. at time $t_i$, is used to approximate the integral. Equation (3.11) becomes
\[
\begin{aligned}
Y_i &\approx \mathbb{E}[Y_{i+1} \mid \mathcal{F}_{t_i}] + \Delta t\,\theta_1\,\mathbb{E}[f(t_i, X_i, Y_i, Z_i) \mid \mathcal{F}_{t_i}] + \Delta t\,(1-\theta_1)\,\mathbb{E}[f(t_{i+1}, X_{i+1}, Y_{i+1}, Z_{i+1}) \mid \mathcal{F}_{t_i}]\\
&= \mathbb{E}[Y_{i+1} \mid \mathcal{F}_{t_i}] + \Delta t\,\theta_1 f(t_i, X_i, Y_i, Z_i) + \Delta t\,(1-\theta_1)\,\mathbb{E}[f(t_{i+1}, X_{i+1}, Y_{i+1}, Z_{i+1}) \mid \mathcal{F}_{t_i}],
\end{aligned}
\tag{3.12}
\]
with $\theta_1 \in [0,1]$, where we used that $Y_i$ and $Z_i$ are $\mathcal{F}_{t_i}$-measurable.

To get a numerical scheme for the process $Z_i$, we multiply both sides of equation (3.10) by $\Delta W_i$ and then take conditional expectations with respect to the filtration $\mathcal{F}_{t_i}$. We get the following:
\[
\begin{aligned}
0 &= \mathbb{E}[\Delta W_i Y_{i+1} \mid \mathcal{F}_{t_i}] + \mathbb{E}\left[\int_{t_i}^{t_{i+1}} \Delta W_i f(s, X_s, Y_s, Z_s)\,ds\,\middle|\,\mathcal{F}_{t_i}\right] - \mathbb{E}\left[\Delta W_i \int_{t_i}^{t_{i+1}} Z_s\,dW_s\,\middle|\,\mathcal{F}_{t_i}\right]\\
&= \mathbb{E}[\Delta W_i Y_{i+1} \mid \mathcal{F}_{t_i}] + \int_{t_i}^{t_{i+1}} \mathbb{E}\left[f(s, X_s, Y_s, Z_s)\left(W_{t_{i+1}} - W_s + W_s - W_{t_i}\right)\,\middle|\,\mathcal{F}_{t_i}\right]ds - \int_{t_i}^{t_{i+1}} \mathbb{E}[Z_s \mid \mathcal{F}_{t_i}]\,ds\\
&= \mathbb{E}[\Delta W_i Y_{i+1} \mid \mathcal{F}_{t_i}] + \int_{t_i}^{t_{i+1}} \mathbb{E}\left[f(s, X_s, Y_s, Z_s)\,\mathbb{E}\left[W_{t_{i+1}} - W_s \mid \mathcal{F}_s\right]\,\middle|\,\mathcal{F}_{t_i}\right]ds\\
&\qquad + \int_{t_i}^{t_{i+1}} \mathbb{E}\left[f(s, X_s, Y_s, Z_s)(W_s - W_{t_i})\,\middle|\,\mathcal{F}_{t_i}\right]ds - \int_{t_i}^{t_{i+1}} \mathbb{E}[Z_s \mid \mathcal{F}_{t_i}]\,ds\\
&= \mathbb{E}[\Delta W_i Y_{i+1} \mid \mathcal{F}_{t_i}] + \int_{t_i}^{t_{i+1}} \mathbb{E}\left[f(s, X_s, Y_s, Z_s)(W_s - W_{t_i})\,\middle|\,\mathcal{F}_{t_i}\right]ds - \int_{t_i}^{t_{i+1}} \mathbb{E}[Z_s \mid \mathcal{F}_{t_i}]\,ds,
\end{aligned}
\tag{3.13}
\]
where we used Fubini's theorem to switch the order of integration, the Itô isometry for the term containing the stochastic integral, and the tower property together with $\mathbb{E}[W_{t_{i+1}} - W_s \mid \mathcal{F}_s] = 0$ for the term containing $W_{t_{i+1}} - W_s$.

By a theta-approximation of both integrals in (3.13), we get
\[
0 \approx \mathbb{E}[\Delta W_i Y_{i+1} \mid \mathcal{F}_{t_i}] + \Delta t\,(1-\theta_2)\,\mathbb{E}[f(t_{i+1}, X_{i+1}, Y_{i+1}, Z_{i+1})\Delta W_i \mid \mathcal{F}_{t_i}] - \Delta t\,\theta_2 Z_i - \Delta t\,(1-\theta_2)\,\mathbb{E}[Z_{i+1} \mid \mathcal{F}_{t_i}], \quad \theta_2 \in (0,1],
\tag{3.14}
\]
where we used the adaptedness of $Y_i$ and $Z_i$ and the fact that $\Delta W_i \mid \mathcal{F}_{t_i} \sim N(0, \Delta t)$. The equations (3.12) and (3.14) lead to the following discrete-time approximation $(Y^\Pi, Z^\Pi)$ of $(Y, Z)$:
\[
\begin{cases}
Y_M^\Pi = g(X_M^\Pi),\\
Z_M^\Pi = \sigma(t_M, X_M^\Pi)Dg(X_M^\Pi),\\
Z_i^\Pi = -\theta_2^{-1}(1-\theta_2)\,\mathbb{E}[Z_{i+1}^\Pi \mid \mathcal{F}_{t_i}] + \dfrac{1}{\Delta t\,\theta_2}\mathbb{E}[Y_{i+1}^\Pi\Delta W_i \mid \mathcal{F}_{t_i}]\\
\qquad\quad + \theta_2^{-1}(1-\theta_2)\,\mathbb{E}[f(t_{i+1}, X_{i+1}^\Pi, Y_{i+1}^\Pi, Z_{i+1}^\Pi)\Delta W_i \mid \mathcal{F}_{t_i}],\\
Y_i^\Pi = \mathbb{E}[Y_{i+1}^\Pi \mid \mathcal{F}_{t_i}] + \Delta t\,\theta_1 f(t_i, X_i^\Pi, Y_i^\Pi, Z_i^\Pi) + \Delta t\,(1-\theta_1)\,\mathbb{E}[f(t_{i+1}, X_{i+1}^\Pi, Y_{i+1}^\Pi, Z_{i+1}^\Pi) \mid \mathcal{F}_{t_i}].
\end{cases}
\]
We observe that $Y_i^\Pi$ and $Z_i^\Pi$ depend on the value of $X_i^\Pi$, so when $X_i^\Pi = x$,
\[
Y_M^\Pi = g(X_M^\Pi),
\tag{3.15}
\]
\[
Z_M^\Pi = \sigma(t_M, X_M^\Pi)Dg(X_M^\Pi),
\tag{3.16}
\]
\[
\begin{aligned}
Z_i^\Pi(x) &= -\theta_2^{-1}(1-\theta_2)\,\mathbb{E}\left[Z_{i+1}^\Pi(X_{i+1}^\Pi) \mid X_i^\Pi = x\right] + \frac{1}{\Delta t\,\theta_2}\mathbb{E}\left[Y_{i+1}^\Pi(X_{i+1}^\Pi)\Delta W_i \mid X_i^\Pi = x\right]\\
&\qquad + \theta_2^{-1}(1-\theta_2)\,\mathbb{E}\left[f\left(t_{i+1}, X_{i+1}^\Pi, Y_{i+1}^\Pi(X_{i+1}^\Pi), Z_{i+1}^\Pi(X_{i+1}^\Pi)\right)\Delta W_i \mid X_i^\Pi = x\right],
\end{aligned}
\tag{3.17}
\]
\[
\begin{aligned}
Y_i^\Pi(x) &= \mathbb{E}\left[Y_{i+1}^\Pi(X_{i+1}^\Pi) \mid X_i^\Pi = x\right] + \Delta t\,\theta_1 f\left(t_i, x, Y_i^\Pi(x), Z_i^\Pi(x)\right)\\
&\qquad + \Delta t\,(1-\theta_1)\,\mathbb{E}\left[f\left(t_{i+1}, X_{i+1}^\Pi, Y_{i+1}^\Pi(X_{i+1}^\Pi), Z_{i+1}^\Pi(X_{i+1}^\Pi)\right) \mid X_i^\Pi = x\right].
\end{aligned}
\tag{3.18}
\]


4. The Fourier-Cosine method for Backward Stochastic Differential Equations

In this chapter we present an option pricing method for European options with one underlying asset, based on Fourier-cosine series expansions. This method, developed by Fang and Oosterlee [12], is called the one-dimensional COS method; when it is used for solving BSDEs it is referred to as the BCOS method. The key idea of this method is the relation between the characteristic function and the series coefficients of the Fourier-cosine expansion of the discounted expected payoff.

This method covers a range of applications with different underlying dynamics, including Lévy processes and the Heston stochastic volatility model, and various types of option contracts. In this thesis we will only deal with European options. This chapter is based on [26] and [27]. To get a better understanding, the theory of Fourier series is introduced in the first section, which is based on the book [7].

4.1. Fourier Transform & Fourier-Cosine Series

Definition 4.1. Let $p(x)$ be a piecewise continuous real function over $\mathbb{R}$ which satisfies the integrability condition
\[
\int_{-\infty}^{+\infty} |p(x)|\,dx < \infty.
\]
The Fourier transform of $p(x)$ is defined by
\[
\hat p(\omega) = \int_{-\infty}^{+\infty} e^{i\omega x}p(x)\,dx, \quad \omega \in \mathbb{R}.
\tag{4.1}
\]
The inverse Fourier transform of $\hat p$ is given by
\[
p(x) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-iyx}\hat p(y)\,dy, \quad x \in \mathbb{R}.
\tag{4.2}
\]
Definition 4.2. Let $\hat R$ be a random variable having probability density function $f(x)$, such that
\[
\int_{-\infty}^{+\infty} |f(x)|\,dx < \infty.
\]
The Fourier transform of $f(x)$ is the characteristic function, denoted by $\phi$, such that
\[
\phi(u) = \mathbb{E}\left[e^{iu\hat R}\right] = \int_{-\infty}^{+\infty} e^{iux}f(x)\,dx, \quad u \in \mathbb{R}.
\tag{4.3}
\]
The probability density function is the inverse Fourier transform of the characteristic function. The characteristic function and probability density function form a Fourier pair.
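As a numerical sanity check of the Fourier pair in (4.3): for the standard normal density the characteristic function is $\phi(u) = e^{-u^2/2}$, which simple quadrature recovers (a rectangle-rule sum suffices here because the integrand decays extremely fast):

```python
import numpy as np

# Check the Fourier pair (4.3) for the standard normal density:
# phi(u) = integral of e^{iux} f(x) dx should equal exp(-u^2 / 2).
u = 1.3
xs = np.linspace(-10.0, 10.0, 20_001)
h = xs[1] - xs[0]
f = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)
phi_num = np.sum(np.exp(1j * u * xs) * f) * h   # rectangle-rule quadrature
phi_exact = np.exp(-u**2 / 2)
```

By symmetry of the density the imaginary part of the numerical transform is (numerically) zero.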

A Fourier series is an expansion of a periodic function $f(x)$ in terms of an infinite sum of sines and cosines. A function $f(x)$ supported on the domain $x \in [-\pi, \pi]$ can be written as its Fourier series by
\[
f(x) = \frac12 A_0 + \sum_{n=1}^{\infty} A_n\cos(nx) + \sum_{n=1}^{\infty} B_n\sin(nx),
\]
\[
A_0 := \frac{1}{\pi}\int_{-\pi}^{\pi} f(y)\,dy, \qquad
A_n := \frac{1}{\pi}\int_{-\pi}^{\pi} f(y)\cos(ny)\,dy, \qquad
B_n := \frac{1}{\pi}\int_{-\pi}^{\pi} f(y)\sin(ny)\,dy.
\]
For a symmetric function, i.e. $f(x) = f(-x)$ for all $x$, the coefficients $B_n$ will be zero. The COS method is based on the Fourier-cosine series.

A function $f(x)$ supported on the domain $x \in [a,b]$ can be written as its Fourier-cosine series by
\[
f(x) = \frac12 A_0 + \sum_{k=1}^{\infty} A_k\cos\left(k\pi\frac{x-a}{b-a}\right) = \sideset{}{'}\sum_{k=0}^{\infty} A_k\cos\left(k\pi\frac{x-a}{b-a}\right),
\]
where $\sum'$ indicates that the first term in the summation is weighted by $\frac12$, and $A_k$ is defined by
\[
A_k := \frac{2}{b-a}\int_a^b f(y)\cos\left(k\pi\frac{y-a}{b-a}\right)dy.
\tag{4.4}
\]
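The expansion (4.4) can be implemented directly with numerical quadrature for the coefficients. The sketch below expands $f(y) = e^y$ on $[-1,1]$ and evaluates the truncated series; the interval, test function and truncation level $K$ are arbitrary choices for illustration:

```python
import numpy as np

def cos_coeffs(f, a, b, K):
    """Fourier-cosine coefficients A_k of (4.4) on [a, b],
    computed by trapezoid-rule quadrature."""
    ys = np.linspace(a, b, 4001)
    k = np.arange(K)[:, None]
    basis = np.cos(k * np.pi * (ys - a) / (b - a))
    w = np.full_like(ys, ys[1] - ys[0])   # trapezoid weights
    w[0] *= 0.5
    w[-1] *= 0.5
    return (2.0 / (b - a)) * (basis * f(ys) * w).sum(axis=1)

def cos_series(A, x, a, b):
    """Evaluate sum' A_k cos(k pi (x-a)/(b-a)), first term halved."""
    k = np.arange(len(A))
    terms = A * np.cos(k * np.pi * (x - a) / (b - a))
    return 0.5 * A[0] + terms[1:].sum()

a, b = -1.0, 1.0
A = cos_coeffs(lambda y: np.exp(y), a, b, K=100)
approx = cos_series(A, 0.3, a, b)
```

Since the even extension of $e^y$ has kinks at the endpoints, the coefficients here decay only like $O(1/k^2)$; for smooth densities with rapidly decaying tails, as used in the COS method, the decay is much faster.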

For functions supported on any other finite interval, say $[a,b] \subset \mathbb{R}$, the Fourier-cosine series expansion can easily be obtained via a change of variables. The coefficients $A_k$ can be approximated using the Fourier transform of $f(y)$, as we explain below. Notice that a cosine function can be written as follows:
\[
\cos\left(k\pi\frac{y-a}{b-a}\right) = \Re\left\{\exp\left(ik\pi\frac{y-a}{b-a}\right)\right\},
\]
where $\Re\{\cdot\}$ represents the real part of the argument. Putting the above equality into (4.4) gives
\[
A_k = \frac{2}{b-a}\int_a^b f(y)\,\Re\left\{\exp\left(ik\pi\frac{y-a}{b-a}\right)\right\}dy
= \frac{2}{b-a}\Re\left\{\int_a^b f(y)\exp\left(ik\pi\frac{y-a}{b-a}\right)dy\right\}.
\]
Suppose the integral over the whole real line is a good approximation of the integral over the interval. The coefficients $A_k$ can then be written as
\[
\begin{aligned}
A_k &= \frac{2}{b-a}\Re\left\{\int_a^b f(y)\exp\left(i\left(\frac{k\pi}{b-a}\right)y - i\frac{k\pi a}{b-a}\right)dy\right\}\\
&\approx \frac{2}{b-a}\Re\left\{\int_{\mathbb{R}} f(y)\exp\left(i\left(\frac{k\pi}{b-a}\right)y - i\frac{k\pi a}{b-a}\right)dy\right\}\\
&= \frac{2}{b-a}\Re\left\{\phi\left(\frac{k\pi}{b-a}\right)\exp\left(-i\frac{k\pi a}{b-a}\right)\right\}.
\end{aligned}
\]
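Combining the pieces: the sketch below recovers the standard normal density from its characteristic function $\phi(u) = e^{-u^2/2}$ via the approximated coefficients $A_k \approx \frac{2}{b-a}\Re\{\phi(\frac{k\pi}{b-a})e^{-ik\pi a/(b-a)}\}$. The truncation range $[a,b]$ and the number of terms $K$ are arbitrary choices for the example:

```python
import numpy as np

def cos_density(phi, x, a, b, K):
    """Recover a density on [a, b] from its characteristic function phi,
    using A_k ~= (2/(b-a)) Re{ phi(k pi/(b-a)) exp(-i k pi a/(b-a)) }."""
    k = np.arange(K)
    u = k * np.pi / (b - a)
    A = (2.0 / (b - a)) * np.real(phi(u) * np.exp(-1j * u * a))
    A[0] *= 0.5   # first term of the sum is weighted by 1/2
    return (A[:, None] * np.cos(np.outer(u, x - a))).sum(axis=0)

# Standard normal: phi(u) = exp(-u^2/2); [a, b] is chosen wide enough
# that the truncated-tail error is negligible.
a, b, K = -8.0, 8.0, 128
xs = np.array([0.0, 0.5, 1.0])
f_cos = cos_density(lambda u: np.exp(-u**2 / 2), xs, a, b, K)
f_exact = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)
```

Because the Gaussian characteristic function decays super-exponentially, very few terms already give near machine-precision recovery; this rapid convergence is what makes the COS method efficient.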

A method for pricing European options with numerical integration techniques is based on the risk-neutral valuation formula
\[
v(S_t, t) = e^{-r(T-t)}\,\mathbb{E}^{\mathbb{Q}}\left[v(S_T, T) \mid S_t\right] = e^{-r(T-t)}\int_{\mathbb{R}} v(S_T, T)f(S_T \mid S_t)\,dS_T,
\tag{4.5}
\]
where $v$ denotes the option value, $t$ is the initial time, $T$ is the maturity time, $\mathbb{E}^{\mathbb{Q}}[\cdot]$ is the expectation under the risk-neutral measure $\mathbb{Q}$, $S_t$ and $S_T$ are the price variables of the underlying asset at times $t$ and $T$, $f(S_T \mid S_t)$ is the density function of $S_T$ given $S_t$, and $r$ is the risk-neutral interest rate. We insert in (4.5)
\[
x := S_t, \qquad y := S_T, \qquad \Delta t := T - t,
\]
to obtain
\[
v(x, t) = e^{-r\Delta t}\,\mathbb{E}^{\mathbb{Q}}\left[v(y, T) \mid x\right] = e^{-r\Delta t}\int_{\mathbb{R}} v(y, T)f(y \mid x)\,dy.
\tag{4.6}
\]

The integral in (4.6) will be approximated with the COS method in five steps.

1. Truncate the integration range.
The density function $f(y \mid x)$ in (4.6) decays to zero very fast as $y \to \pm\infty$. Therefore, $v(x,t)$ can be approximated on some finite integration range $[a,b] \subset \mathbb{R}$:
\[
v(x,t) \approx v_1(x,t) = e^{-r\Delta t}\int_a^b v(y,T)f(y \mid x)\,dy.
\]
2. Replace the density function by its cosine expansion.
The density function is usually not known, so we replace it by its cosine expansion in $y$:
\[
f(y \mid x) = \sideset{}{'}\sum_{k=0}^{\infty} A_k(x)\cos\left(k\pi\frac{y-a}{b-a}\right),
\tag{4.7}
\]
