SAMPLING PER MODE SIMULATION FOR SWITCHING DIFFUSIONS

Pascal Lezaud (1), Jaroslav Krystul (2), François Le Gland (3)

(1) DSNA/DTI/R&D and Institut de Mathématiques de Toulouse
(2) Twente University, Enschede
(3) INRIA Rennes, Bretagne-Atlantique

8th International Conference on Rare Event Simulation, RESIM'10, Cambridge, June 21-22, 2010

PLAN

1 INTRODUCTION
  Motivation
  Switching jump diffusion
  Splitting technique
  Some issues

2 FEYNMAN-KAC FORMULATION
  Multilevel Feynman-Kac distributions
  Dynamical evolution

3 SAMPLING PER MODE ALGORITHM
  Particle Methods
  Sampling per Mode algorithm

4 ASYMPTOTIC BEHAVIOUR
  Asymptotic Behaviour
  Law of Large Numbers
  Central Limit Theorem

5 CONCLUSION

MOTIVATIONS

Many complex dynamical multi-agent systems are modelled by continuous-time strong Markov processes with a hybrid state space:
- one state component evolves in R^d,
- the other state component evolves in a discrete set,
- and each component may influence the evolution of the other component.

Our motivation is to estimate the probability that the continuous component hits a critical set.

We use a splitting technique adapted to the context of switching diffusions: the sampling per mode algorithm introduced in [Krystul, 2006].


SWITCHING JUMP DIFFUSION

Consider a strong Markov process Z = {(X_t, θ_t); t ≥ 0} with values in R^d × M, where M = {1, ..., M} is a finite set.

The continuous component is described by a d-dimensional SDE,
\[ dX_t = b(X_t, \theta_t)\, dt + \sigma(X_t, \theta_t)\, dB_t, \]
and the discrete mode by a pure jump process,
\[ P(\theta_{t+\Delta t} = j \mid \theta_t = i,\; X_t = x) = \lambda_{ij}(x)\, \Delta t + o(\Delta t), \qquad i \ne j, \]
with jump intensities depending on the continuous component.

Z_t starts at t = 0 in D_0 × M with known initial probability η_0.

Let A ⊂ R^d be a closed critical region which X_t could enter, but only with a very small probability.

If T_A denotes the hitting time of A, we would like to estimate P(T_A ≤ T), with T a deterministic time or a stopping time.
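To make the model concrete, here is a minimal simulation sketch (not from the slides): a Euler-Maruyama discretization of the continuous component, with the mode jumping at rate λ_{ij}(X_t). The functions b, sigma and lam and the step dt are illustrative placeholders.

```python
import math
import random

def simulate_switching_diffusion(x0, theta0, modes, b, sigma, lam, T, dt, rng=random):
    """One Euler-Maruyama path of (X_t, theta_t) on [0, T].

    b(x, i), sigma(x, i): drift and diffusion coefficients in mode i.
    lam(x, i, j): jump intensity from mode i to mode j (i != j), so that
    P(theta jumps from i to j during dt) ~ lam(x, i, j) * dt + o(dt).
    """
    x, theta, t = x0, theta0, 0.0
    path = [(t, x, theta)]
    while t < T:
        # discrete mode: jump from theta to j with probability lam(x, theta, j)*dt
        u, acc = rng.random(), 0.0
        for j in modes:
            if j == theta:
                continue
            acc += lam(x, theta, j) * dt
            if u < acc:
                theta = j
                break
        # continuous component: Euler-Maruyama step in the (possibly new) mode
        x += b(x, theta) * dt + sigma(x, theta) * rng.gauss(0.0, math.sqrt(dt))
        t += dt
        path.append((t, x, theta))
    return path
```

For instance, simulate_switching_diffusion(0.0, 1, [1, 2], lambda x, i: -x, lambda x, i: 0.5 * i, lambda x, i, j: 0.1, T=10.0, dt=1e-3) runs a two-mode toy example.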

SPLITTING TECHNIQUE

Identify intermediate sets that are (sequentially) visited much more often than the rare target set:

Let A = D_n ⊂ ... ⊂ D_1 ⊂ R^d, with D_0 ∩ D_1 = ∅.

[Figure: nested level sets D_1 ⊃ D_2 ⊃ ... ⊃ A in R^d, replicated for each mode in M.]

With B = A × M and B_k = D_k × M, we define, for k = 1, ..., n,
\[ T_k = \inf\{t \ge 0 : Z_t \in B_k\} = \inf\{t \ge 0 : X_t \in D_k\}, \]
which satisfy 0 = T_0 ≤ T_1 ≤ ... ≤ T_n = T_B. Then
\[ P(T_A \le T) = P(T_B \le T) = \prod_{k=1}^{n} P(T_k \le T \mid T_{k-1} \le T), \]
where the conditional probabilities are not very small.
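As an illustration, here is a minimal sketch of the resulting fixed-level splitting estimator (the standard variant, before the per-mode refinement introduced later). The helper run_until is an assumed simulator of Z between consecutive levels.

```python
import random

def splitting_estimate(initial_states, n_levels, run_until, n_per_level, rng=random):
    """Fixed-level splitting estimate of P(T_A <= T) as the product of
    the conditional probabilities P(T_k <= T | T_{k-1} <= T).

    run_until(z, k) simulates Z from the entry state z and returns
    (hit, z_hit): whether D_k was reached before time T, and the entry state.
    """
    states = list(initial_states)          # entry states at level k-1
    estimate = 1.0
    for k in range(1, n_levels + 1):
        starts = [rng.choice(states) for _ in range(n_per_level)]
        results = [run_until(z, k) for z in starts]
        survivors = [z for hit, z in results if hit]
        if not survivors:
            return 0.0                     # every particle missed level k
        estimate *= len(survivors) / n_per_level   # ~ P(T_k <= T | T_{k-1} <= T)
        states = survivors
    return estimate
```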

SOME ISSUES

The splitting technique applies, since a switching process is a strong Markov process; however, this approach fails to produce a reasonable estimate, since each resampling step tends
- to sample more particles from the modes with higher probability,
- to discard the particles in the "light" modes.

Increasing the number of particles should improve the estimate, but only at the cost of increased simulation time.

Idea: keep the number of particles in each visited mode constant at each resampling step.

MULTILEVEL FEYNMAN-KAC DISTRIBUTIONS

To capture the behaviour of Z between consecutive thresholds, we consider the random excursions Z_k of Z between T_{k-1} and T_k ∧ T,
\[ Z_k = \big((X_t, \theta_t),\; T_{k-1} \le t \le T_k \wedge T\big), \]
and we introduce the selection functions
\[ g_k(Z_k) = 1_{\{Z_{T_k \wedge T} \in B_k\}}, \qquad g_k^j(Z_k) = 1_{\{Z_{T_k \wedge T} \in B_k,\; \theta_{T_k} = j\}}. \]

MULTILEVEL FEYNMAN-KAC DISTRIBUTIONS

Clearly,
\[ 1_{\{T_k \le T\}} = g_k(Z_k), \qquad 1_{\{T_k \le T,\; \theta_{T_k} = j\}} = g_k^j(Z_k). \]

We can interpret the rare event probability in terms of the Feynman-Kac measures defined by
\[ \gamma_k(f) = E[f(Z_k)\, g_{k-1}(Z_{k-1})] = E\big[f((X_t, \theta_t),\; T_{k-1} \le t \le T_k \wedge T)\, 1_{\{T_{k-1} \le T\}}\big], \]
\[ \widehat{\gamma}_k(f) = E[f(Z_k)\, g_k(Z_k)] = E\big[f((X_t, \theta_t),\; T_{k-1} \le t \le T_k)\, 1_{\{T_k \le T\}}\big], \]
and the corresponding normalized measures defined by
\[ \eta_k(f) = \frac{\gamma_k(f)}{\gamma_k(1)} = E\big[f((X_t, \theta_t),\; T_{k-1} \le t \le T_k \wedge T) \,\big|\, T_{k-1} \le T\big], \]
\[ \widehat{\eta}_k(f) = \frac{\widehat{\gamma}_k(f)}{\widehat{\gamma}_k(1)} = E\big[f((X_t, \theta_t),\; T_{k-1} \le t \le T_k) \,\big|\, T_k \le T\big]. \]

MULTILEVEL FEYNMAN-KAC DISTRIBUTIONS

In particular, for f ≡ 1,
\[ \gamma_k(1) = P(T_{k-1} \le T), \qquad \widehat{\gamma}_k(1) = P(T_k \le T), \]
and for f = g_k or f = g_k^j, it holds that
\[ \eta_k(g_k) = P(T_k \le T \mid T_{k-1} \le T), \qquad \eta_k(g_k^j) = P(T_k \le T,\; \theta_{T_k} = j \mid T_{k-1} \le T). \]

We have the key formulas
\[ \gamma_k(f) = \eta_k(f) \prod_{p=0}^{k-1} \eta_p(g_p) \qquad \text{and} \qquad \widehat{\gamma}_k(f) = \widehat{\eta}_k(f) \prod_{p=0}^{k} \eta_p(g_p). \]
Then, we recover
\[ P(T_n \le T) = \prod_{p=0}^{n} \eta_p(g_p). \]
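For f ≡ 1 the key formulas reduce to a telescoping identity; here is a one-line check (a sketch, using the recursion γ_{k+1}(1) = γ_k(g_k) that follows from the dynamical evolution below):

```latex
% Telescoping check of P(T_n \le T) = \prod_{p=0}^{n} \eta_p(g_p):
\gamma_{k+1}(1) = \gamma_k(g_k\, M_{k+1} 1) = \gamma_k(g_k)
               = \eta_k(g_k)\, \gamma_k(1),
\qquad \gamma_0(1) = 1,
\quad \Longrightarrow \quad
P(T_n \le T) = \gamma_{n+1}(1) = \prod_{p=0}^{n} \eta_p(g_p).
```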

MULTILEVEL FEYNMAN-KAC DISTRIBUTIONS

In order to keep track of the discrete mode, we construct, for any j ∈ M,
\[ \gamma_k^j(f) = E\big[f(Z_k)\, g_{k-1}^j(Z_{k-1})\big] = E\big[f(Z_t,\; T_{k-1} \le t \le T_k \wedge T)\, 1_{\{T_{k-1} \le T,\; \theta_{T_{k-1}} = j\}}\big], \]
\[ \widehat{\gamma}_k^j(f) = E\big[f(Z_k)\, g_k^j(Z_k)\big] = E\big[f(Z_t,\; T_{k-1} \le t \le T_k)\, 1_{\{T_k \le T,\; \theta_{T_k} = j\}}\big], \]
and the normalized measures
\[ \eta_k^j(f) = \frac{\gamma_k^j(f)}{\gamma_k^j(1)} = E\big[f(Z_t,\; T_{k-1} \le t \le T_k \wedge T) \,\big|\, T_{k-1} \le T,\; \theta_{T_{k-1}} = j\big], \]
\[ \widehat{\eta}_k^j(f) = \frac{\widehat{\gamma}_k^j(f)}{\widehat{\gamma}_k^j(1)} = E\big[f(Z_t,\; T_{k-1} \le t \le T_k) \,\big|\, T_k \le T,\; \theta_{T_k} = j\big]. \]

We have the decompositions
\[ \widehat{\eta}_k = \sum_{j \in M} \omega_k^j\, \widehat{\eta}_k^j, \qquad \eta_{k+1} = \sum_{j \in M} \omega_k^j\, \eta_{k+1}^j, \qquad \text{where} \quad \omega_k^j = \widehat{\eta}_k(g_k^j) = P(\theta_{T_k} = j \mid T_k \le T). \]

DYNAMICAL EVOLUTION

Using the Markov property of Z (with Markov kernel M_k), we obtain
\[ \gamma_k(f) = \gamma_{k-1}(g_{k-1} M_k f) \qquad \text{and} \qquad \gamma_k^j(f) = \gamma_{k-1}(g_{k-1}^j M_k f), \]
and the nonlinear measure-valued transformations
\[ \widehat{\eta}_k(f) = \frac{\eta_k(f\, g_k)}{\eta_k(g_k)} =: \Psi_k(\eta_k)(f), \qquad \widehat{\eta}_k^j(f) = \frac{\eta_k(f\, g_k^j)}{\eta_k(g_k^j)} =: \Psi_k^j(\eta_k)(f), \]
hence the following two separate selection/mutation transitions
\[ \eta_k \xrightarrow{\text{selection}} \widehat{\eta}_k := \Psi_k(\eta_k) \xrightarrow{\text{mutation}} \eta_{k+1} = \widehat{\eta}_k M_{k+1}. \]

PARTICLE METHODS

Particle methods are a kind of stochastic linearisation technique for solving nonlinear equations in measure space.

Using two sequences of N particles, ξ = (ξ^1, ..., ξ^N) and ξ̂ = (ξ̂^1, ..., ξ̂^N), we approximate the two-step transition
\[ \eta_k \xrightarrow{\text{selection}} \widehat{\eta}_k := \Psi_k(\eta_k) \xrightarrow{\text{mutation}} \eta_{k+1} = \widehat{\eta}_k M_{k+1} \]
by
\[ \eta_k^N := \frac{1}{N} \sum_{i=1}^{N} \delta_{\xi_k^i} \xrightarrow{\text{selection}} \widehat{\eta}_k^N := \frac{1}{N} \sum_{i=1}^{N} \delta_{\widehat{\xi}_k^i} \xrightarrow{\text{mutation}} \eta_{k+1}^N = \frac{1}{N} \sum_{i=1}^{N} \delta_{\xi_{k+1}^i}. \]
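A minimal sketch of one such selection/mutation step on an equally weighted cloud; g (returning 0 or 1) and mutate (sampling from the kernel M_{k+1}) are illustrative assumed helpers.

```python
import random

def selection_mutation_step(particles, g, mutate, rng=random):
    """One step eta_k^N -> eta_{k+1}^N of the equally weighted particle scheme."""
    # selection: resample N particles among those with g = 1
    # (this applies Psi_k to the empirical measure eta_k^N)
    accepted = [z for z in particles if g(z) == 1]
    if not accepted:
        raise RuntimeError("particle system died: no particle reached B_k")
    selected = [rng.choice(accepted) for _ in range(len(particles))]
    # mutation: propagate every selected particle through the Markov kernel
    return [mutate(z) for z in selected]
```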

SAMPLING PER MODE ALGORITHM

The main idea of the Sampling per Mode algorithm consists in maintaining a fixed number of particles in each mode, at each resampling step.

So, instead of starting the algorithm with N particles randomly distributed, we draw a fixed number N_j of particles in each mode j, and at each resampling step the same number of particles is sampled for each visited mode.

Obviously, the total number of particles can change each time some mode is not visited, or an empty mode is visited afresh.

Let N̂_k and N_k denote the total numbers of particles ξ̂_k and ξ_k, and ω_k^{j,N} the weights associated with the modes; we have the evolution scheme
\[ \big(N_k, (\omega_{k-1}^{j,N})_{j \in J_{k-1}}, \xi_k\big) \to \big(\widehat{N}_k, (\omega_k^{j,N})_{j \in J_k}, \widehat{\xi}_k\big) \to \big(N_{k+1}, (\omega_k^{j,N})_{j \in J_k}, \xi_{k+1}\big). \]

INITIALIZATION

In each mode j, we sample N_j particles ξ_0^κ = ξ̂_0^κ = (0, X_0^{κ,j}) ∼ η_0^j.

[Figure: initial particles in D_0, spread over the modes, with the nested level sets D_1 ⊃ D_2 ⊃ A.]

Let ω_0^j = P(θ_0 = j); then η_0^N and η̂_0^N are given by
\[ \eta_0^N = \sum_{j \in M} \omega_0^j\, \eta_0^{j,N}, \qquad \widehat{\eta}_0^N = \sum_{j \in M} \omega_0^j\, \widehat{\eta}_0^{j,N}, \]
with
\[ \eta_0^{j,N} = \frac{1}{N_j} \sum_{\kappa \in J_0^j} \delta_{\xi_0^\kappa}, \qquad \widehat{\eta}_0^{j,N} = \frac{1}{N_j} \sum_{\kappa \in J_0^j} \delta_{\widehat{\xi}_0^\kappa}. \]
Here J_0^j is the set of the indices of the particles in mode j.
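A minimal sketch of this per-mode initialization, assuming helpers sample_x0(j) (drawing X_0 from η_0^j) and the dictionaries omega0 and N_mode; each particle is stored with its weight ω_0^j / N_j.

```python
def init_particles(modes, N_mode, omega0, sample_x0):
    """Per-mode initialization: N_mode[j] particles in each mode j.

    omega0[j] = P(theta_0 = j); sample_x0(j) draws X_0 from eta_0^j.
    Each particle is stored as (z, w) with z = (t, x, mode) and weight w,
    so the weighted empirical measure matches eta_0^N = sum_j omega0[j] * eta_0^{j,N}.
    """
    particles = []
    for j in modes:
        for _ in range(N_mode[j]):
            particles.append(((0.0, sample_x0(j), j), omega0[j] / N_mode[j]))
    return particles
```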

MUTATION

\[ \big(\widehat{N}_k, \omega_k^N, \widehat{\xi}_k\big) \to \big(N_{k+1}, \omega_k^N, \xi_{k+1}\big) \]

If N̂_k = 0 the particle system dies; otherwise, independently of each other, each particle ξ̂_k^κ evolves randomly according to the Markov transition M_{k+1}.

[Figure: particles propagated towards the next level set.]

Neither the total number of particles nor the weight of each particle changes (N_{k+1} = N̂_k). So
\[ \eta_{k+1}^N = \sum_{j \in J_k} \omega_k^{j,N}\, \eta_{k+1}^{j,N}, \qquad \text{with} \quad \eta_{k+1}^{j,N} = \frac{1}{N_j} \sum_{\kappa \in J_k^j} \delta_{\xi_{k+1}^\kappa}, \]
where J_k^j is the set of the labels of the particles in mode j ∈ J_k.
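In the (z, w) representation above, the mutation step is a one-liner, since weights and particle counts are unchanged; mutate_excursion is an assumed sampler of the kernel M_{k+1}.

```python
def mutate_all(particles, mutate_excursion):
    """Mutation step: weights and particle count are unchanged; each particle
    moves independently through the (assumed) Markov kernel mutate_excursion."""
    return [(mutate_excursion(z), w) for (z, w) in particles]
```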

SELECTION/RESAMPLING

\[ \big(N_{k+1}, \omega_k^N, \xi_{k+1}\big) \to \big(\widehat{N}_{k+1}, \omega_{k+1}^N, \widehat{\xi}_{k+1}\big) \]

Select only the particles ξ_{k+1}^κ having reached the desired set B_{k+1}.

[Figure: surviving particles inside the next level set.]

Let I_{k+1}^N denote the set of (indices of) good particles; if I_{k+1}^N = ∅ the algorithm is stopped. Otherwise, for each non-empty mode j, resample N_j particles according to Ψ_{k+1}^j(η_{k+1}^N), and set
\[ \widehat{\eta}_{k+1}^N = \sum_{j \in J_{k+1}} \omega_{k+1}^{j,N}\, \widehat{\eta}_{k+1}^{j,N}, \qquad \text{with} \quad \widehat{\eta}_{k+1}^{j,N} = \frac{1}{N_j} \sum_{\kappa \in \widehat{J}_{k+1}^j} \delta_{\widehat{\xi}_{k+1}^\kappa} \quad \text{and} \quad \omega_{k+1}^{j,N} = \widehat{\eta}_{k+1}^N(g_{k+1}^j) = \frac{\eta_{k+1}^N(g_{k+1}^j)}{\eta_{k+1}^N(g_{k+1})}. \]
The total number of particles is N̂_{k+1} = Σ_{j ∈ J_{k+1}} N_j.
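A minimal sketch of this selection/resampling step in the same (z, w) representation; reached, mode_of and N_mode are assumed helpers, and random.choices performs the within-mode resampling.

```python
import random
from collections import defaultdict

def per_mode_resample(particles, reached, mode_of, N_mode, rng=random):
    """Selection/resampling step of the sampling-per-mode scheme.

    particles : list of (z, w) pairs as built above.
    reached(z): True if the excursion endpoint z lies in B_{k+1}.
    mode_of(z): discrete mode at the hitting time.
    N_mode[j] : fixed number of particles resampled in each visited mode j.
    Returns (new_particles, mode_weights), or (None, None) if no particle survives.
    """
    good = [(z, w) for (z, w) in particles if reached(z)]
    if not good:
        return None, None                       # I^N_{k+1} is empty: stop
    total = sum(w for _, w in good)
    by_mode = defaultdict(list)
    for z, w in good:
        by_mode[mode_of(z)].append((z, w))
    new_particles, mode_weights = [], {}
    for j, group in by_mode.items():
        # omega^{j,N}_{k+1} = eta^N_{k+1}(g^j_{k+1}) / eta^N_{k+1}(g_{k+1})
        mode_weights[j] = sum(w for _, w in group) / total
        zs = [z for z, _ in group]
        ws = [w for _, w in group]
        for z in rng.choices(zs, weights=ws, k=N_mode[j]):   # ~ Psi^j_{k+1}
            new_particles.append((z, mode_weights[j] / N_mode[j]))
    return new_particles, mode_weights
```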

ASYMPTOTIC BEHAVIOUR

We now address the asymptotic behaviour of our estimator as N → ∞.

To obtain a law of large numbers, we followed the approach of [Del Moral, 2004], based on a martingale decomposition, and for the central limit theorem, we used a CLT for triangular arrays developed in [Le Gland & Oudjane, 2006].

Before stating the two theorems, we need some notation:
- N_inf = inf_{j ∈ M} N_j,
- we let N → ∞ in such a way that each ratio ρ_j := N_j / N is preserved,
- this implies that N_inf → ∞.

MAIN THEOREMS: LAW OF LARGE NUMBERS

THEOREM (LAW OF LARGE NUMBERS)
For any n ≥ 0 and any bounded function f, we have
\[ E\big(\gamma_n^N(f)\, 1_{\{N_n > 0\}}\big) = \gamma_n(f), \qquad \sup_{f : \|f\|_\infty \le 1} E\Big(\big[1_{\{N_n > 0\}}\, \gamma_n^N(f) - \gamma_n(f)\big]^2\Big) \le \frac{b^2(n)}{N_{\inf}}. \]

LLN FOR NORMALIZED MEASURES
For any n ≥ 0 and any bounded function f, we have
\[ \sup_{f : \|f\|_\infty \le 1} \Big| E\big[\eta_n^N(f)\, 1_{\{N_n > 0\}}\big] - \eta_n(f) \Big| \le \frac{b^2(n)}{N_{\inf}} + a(n)\, e^{-N_{\inf}/c(n)}, \]
\[ \sup_{f : \|f\|_\infty \le 1} \Big( E\,\big|1_{\{N_n > 0\}}\, \eta_n^N(f) - \eta_n(f)\big|^2 \Big)^{1/2} \le \frac{2\, b^2(n)}{|\gamma_n(1)|^2\, N_{\inf}}. \]

MAIN THEOREMS: CENTRAL LIMIT THEOREM

THEOREM (CENTRAL LIMIT THEOREM)
Let N → ∞ in such a way that ρ_j = N_j / N is preserved for all j ∈ M. Then, the random variable
\[ \sqrt{N}\, \big(1_{\{N_{n+1} > 0\}}\, \gamma_{n+1}^N(1) - P(T_n < T)\big) \]
converges in law to a Gaussian random variable with mean 0 and variance W_{n+1}, where
\[ \frac{W_{n+1}}{P^2(T_n < T)} = \sum_{q=0}^{n+1} \Omega_q \Big( \frac{1}{P_q} - 1 \Big) + \sum_{q=0}^{n+1} \frac{\Omega_q}{P_q} \Bigg[ \frac{\widehat{\eta}_q\big([\Delta_q^n \circ \pi]^2\big)}{\widehat{\eta}_q^2\big(\Delta_q^n \circ \pi\big)} - 1 \Bigg], \]
with
\[ \Omega_q = \sum_{j \in M} (\omega_{q-1}^j)^2\, \rho_j^{-1} = 1 + \chi^2(\omega_{q-1}, \rho), \qquad \Delta_q^n(t, z) = P(T_n \le T \mid T_q = t,\; Z_{T_q} = z). \]
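The factor Ω_q measures the mismatch between the mode weights and the per-mode sample fractions; a one-line check of the stated identity, assuming χ²(ω, ρ) = Σ_j (ω^j − ρ_j)²/ρ_j denotes the chi-square divergence:

```latex
% Since \sum_j \omega_{q-1}^j = \sum_j \rho_j = 1:
\sum_{j \in M} \frac{(\omega_{q-1}^j)^2}{\rho_j}
  = \sum_{j \in M} \frac{(\omega_{q-1}^j - \rho_j)^2}{\rho_j}
    + 2 \sum_{j \in M} \omega_{q-1}^j - \sum_{j \in M} \rho_j
  = 1 + \chi^2(\omega_{q-1}, \rho).
```

In particular, Ω_q = 1 exactly when ρ_j = ω_{q-1}^j for all j, which suggests matching the per-mode sample fractions to the mode weights when possible.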

CONCLUSION

The "sampling per mode" algorithm has "good" properties.

However, to reduce simulation time further, we can:
- use an importance sampling technique to make rare switches more frequent,
- or aggregate the modes in order to decrease the complexity (for large-scale distributed hybrid systems).

Thus, we need to extend the previous results.

A better understanding of the expression of W_{n+1} could guide the choice of the N_j with regard to the cost of the algorithm.

This algorithm is implemented in a software tool developed by the National Aerospace Laboratory (NLR) and used to evaluate the safety characteristics of an arbitrary (new) operational Air Traffic Management concept [Blom, 2009].

REFERENCES

Blom, H., Bakker, B., and Krystul, J. Rare event estimation for a large-scale stochastic hybrid system with air traffic application. In Rare Event Simulation using Monte Carlo Methods, pp. 194-214. Wiley, 2009.

Krystul, J. Modelling of Stochastic Hybrid Systems with Applications to Accident Risk Assessment. PhD Dissertation, University of Twente, 2006.

Del Moral, P. Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications. Springer-Verlag, 2004.

Le Gland, F. and Oudjane, N. A sequential particle algorithm that keeps the particle system alive. In Stochastic Hybrid Systems: Theory and Safety Critical Applications, pp. 351-389. Springer, 2006.

Cérou, F., Del Moral, P., Le Gland, F., and Lezaud, P. Genetic genealogical models in rare event analysis. ALEA, volume 1, paper 8, 2006.
