SAMPLING PER MODE SIMULATION FOR SWITCHING DIFFUSIONS

Pascal Lezaud (1), Jaroslav Krystul (2), François Le Gland (3)
(1) DSNA/DTI/R&D and Institut de Mathématiques de Toulouse
(2) Twente University, Enschede
(3) INRIA Rennes, Bretagne–Atlantique

8th International Conference on Rare Event Simulation, RESIM'10, Cambridge, June 21–22, 2010
PLAN

1. INTRODUCTION: Motivation, Switching jump diffusion, Splitting technique, Some issues
2. FEYNMAN-KAC FORMULATION: Multilevel Feynman-Kac distributions, Dynamical evolution
3. SAMPLING PER MODE ALGORITHM: Particle methods, Sampling per mode algorithm
4. ASYMPTOTIC BEHAVIOUR: Law of large numbers, Central limit theorem
5. CONCLUSION
MOTIVATIONS

Many complex dynamical multi-agent systems are modelled by continuous-time strong Markov processes with a hybrid state space:
- one state component evolves in $\mathbb{R}^d$,
- the other state component evolves in a discrete set,
- and each component may influence the evolution of the other.

Our motivation is to estimate the probability that the continuous component hits a critical set.

We use a splitting technique adapted to the context of switching diffusions: the sampling per mode algorithm introduced in [Krystul, 2006].
SWITCHING JUMP DIFFUSION

Strong Markov process $Z = \{(X_t, \theta_t);\ t \ge 0\}$ with values in $\mathbb{R}^d \times M$, where $M = \{1, \dots, M\}$ is a finite set;

the continuous component is described by a $d$-dimensional SDE
$$dX_t = b(X_t, \theta_t)\,dt + \sigma(X_t, \theta_t)\,dB_t,$$
and the discrete mode by a pure jump process
$$P(\theta_{t+\Delta t} = j \mid \theta_t = i,\ X_t = x) = \lambda_{ij}(x)\,\Delta t + o(\Delta t), \quad i \ne j,$$
with jump intensities depending on the continuous component.

$Z$ starts at $t = 0$ in $D_0 \times M$ with known initial probability $\eta_0$.

Let $A \subset \mathbb{R}^d$ be a closed critical region which $X_t$ could enter, but only with a very small probability.

If $T_A$ denotes the hitting time of $A$, we would like to estimate $P(T_A \le T)$, with $T$ a deterministic time or a stopping time.
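As a complement, here is one simple way to simulate such a switching diffusion, discretized with an Euler-Maruyama step. This is a minimal sketch, not part of the talk: the drift `b`, diffusion `sigma`, and rate function `lam` are user-supplied placeholders.

```python
import numpy as np

def simulate_switching_diffusion(x0, theta0, b, sigma, lam, T, dt, rng):
    """Euler-Maruyama scheme for dX_t = b(X,theta) dt + sigma(X,theta) dB_t,
    with the mode theta switching at state-dependent rates lam(x)[i, j]."""
    x, theta, t = np.asarray(x0, dtype=float), theta0, 0.0
    path = [(t, x.copy(), theta)]
    while t < T:
        # mode switch: P(theta_{t+dt} = j | theta_t = i, X_t = x) ~ lam(x)[i, j] * dt
        rates = np.asarray(lam(x)[theta], dtype=float).copy()
        rates[theta] = 0.0
        total = rates.sum()
        if total > 0 and rng.random() < total * dt:
            theta = rng.choice(len(rates), p=rates / total)
        # diffusion step in the current mode
        dB = rng.normal(scale=np.sqrt(dt), size=x.shape)
        x = x + b(x, theta) * dt + sigma(x, theta) * dB
        t += dt
        path.append((t, x.copy(), theta))
    return path

# Toy usage: a 1-d two-mode process whose noise level depends on the mode.
rng = np.random.default_rng(0)
path = simulate_switching_diffusion(
    x0=[0.0], theta0=0,
    b=lambda x, th: -x,                                 # mean-reverting drift
    sigma=lambda x, th: 0.2 if th == 0 else 1.0,        # mode-dependent noise
    lam=lambda x: np.array([[0.0, 0.5], [0.5, 0.0]]),   # constant switch rates
    T=1.0, dt=1e-3, rng=rng)
```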
SPLITTING TECHNIQUE

Identify intermediate sets that are (sequentially) visited much more often than the rare target set:

Let $A = D_n \subset \cdots \subset D_1 \subset \mathbb{R}^d$, with $D_0 \cap D_1 = \emptyset$.

[Figure: nested level sets $D_1 \supset D_2 \supset \cdots \supset A$ in $\mathbb{R}^d \times M$, with the process started in $D_0$.]

With $B = A \times M$ and $B_k = D_k \times M$, we define, for $k = 1, \dots, n$,
$$T_k = \inf\{t \ge 0 : Z_t \in B_k\} = \inf\{t \ge 0 : X_t \in D_k\},$$
which satisfy $0 = T_0 \le T_1 \le \cdots \le T_n = T_B$. Then
$$P(T_A \le T) = P(T_B \le T) = \prod_{k=1}^{n} P(T_k \le T \mid T_{k-1} \le T),$$
where the conditional probabilities are not very small.
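To make the product decomposition concrete, here is a hedged sketch of the resulting fixed-levels splitting estimator. The helpers `sample_initial` and `advance_to_level` are hypothetical stand-ins for a simulator of $Z$ (e.g. the one above), and multinomial resampling of the survivors is one common choice among several.

```python
import numpy as np

def splitting_estimate(sample_initial, advance_to_level, n_levels, N, rng):
    """Estimate P(T_A <= T) as the product of the per-level fractions
    P(T_k <= T | T_{k-1} <= T), k = 1, ..., n, over N simulated trajectories.

    advance_to_level(state, k, rng) simulates the process from `state` until
    it hits D_k or the horizon T, and returns (new_state, reached: bool).
    """
    states = [sample_initial(rng) for _ in range(N)]
    estimate = 1.0
    for k in range(1, n_levels + 1):
        outcomes = [advance_to_level(s, k, rng) for s in states]
        survivors = [s for s, reached in outcomes if reached]
        p_k = len(survivors) / N           # estimate of the k-th conditional factor
        if p_k == 0.0:
            return 0.0                     # the whole particle system died
        estimate *= p_k
        # resample the survivors back up to N trajectories for the next stage
        idx = rng.integers(0, len(survivors), size=N)
        states = [survivors[i] for i in idx]
    return estimate
```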
SOME ISSUES

The splitting technique applies, since a switching process is a strong Markov process, but this approach fails to produce a reasonable estimate, since each resampling step tends
- to sample more particles from modes with higher probability,
- to discard the particles in the "light" modes.

Increasing the number of particles should improve the estimate, but only at the cost of increased simulation time.

Idea: keep the number of particles in each visited mode constant at each resampling step.
MULTILEVEL FEYNMAN-KAC DISTRIBUTIONS

To capture the behaviour of $Z$ between consecutive thresholds, we consider the random excursions $Z_k$ of $Z$ between $T_{k-1}$ and $T_k \wedge T$,
$$Z_k = ((X_t, \theta_t),\ T_{k-1} \le t \le T_k \wedge T),$$
and we introduce the selection functions
$$g_k(Z_k) = \mathbf{1}_{\{Z_{T_k \wedge T} \in B_k\}}, \qquad g_k^j(Z_k) = \mathbf{1}_{\{Z_{T_k \wedge T} \in B_k,\ \theta_{T_k} = j\}}.$$
MULTILEVEL FEYNMAN-KAC DISTRIBUTIONS

Clearly, $\mathbf{1}_{\{T_k \le T\}} = g_k(Z_k)$ and $\mathbf{1}_{\{T_k \le T,\ \theta_{T_k} = j\}} = g_k^j(Z_k)$.

We can interpret the rare event probability in terms of the Feynman-Kac measures defined by
$$\gamma_k(f) = E[f(Z_k)\,g_{k-1}(Z_{k-1})] = E\big[f((X_t, \theta_t),\ T_{k-1} \le t \le T_k \wedge T)\,\mathbf{1}_{\{T_{k-1} \le T\}}\big],$$
$$\widehat{\gamma}_k(f) = E[f(Z_k)\,g_k(Z_k)] = E\big[f((X_t, \theta_t),\ T_{k-1} \le t \le T_k)\,\mathbf{1}_{\{T_k \le T\}}\big],$$
and the corresponding normalized measures defined by
$$\eta_k(f) = \frac{\gamma_k(f)}{\gamma_k(1)} = E\big[f((X_t, \theta_t),\ T_{k-1} \le t \le T_k \wedge T) \,\big|\, T_{k-1} \le T\big],$$
$$\widehat{\eta}_k(f) = \frac{\widehat{\gamma}_k(f)}{\widehat{\gamma}_k(1)} = E\big[f((X_t, \theta_t),\ T_{k-1} \le t \le T_k) \,\big|\, T_k \le T\big].$$
MULTILEVEL FEYNMAN-KAC DISTRIBUTIONS

In particular, for $f \equiv 1$,
$$\gamma_k(1) = P(T_{k-1} \le T), \qquad \widehat{\gamma}_k(1) = P(T_k \le T),$$
and for $f = g_k$ or $f = g_k^j$, it holds that
$$\eta_k(g_k) = P[T_k \le T \mid T_{k-1} \le T], \qquad \eta_k(g_k^j) = P[T_k \le T,\ \theta_{T_k} = j \mid T_{k-1} \le T].$$
We have the key formulas
$$\gamma_k(f) = \eta_k(f) \prod_{p=0}^{k-1} \eta_p(g_p) \qquad \text{and} \qquad \widehat{\gamma}_k(f) = \widehat{\eta}_k(f) \prod_{p=0}^{k} \eta_p(g_p).$$
Then, we recover
$$P(T_n \le T) = \prod_{p=0}^{n} \eta_p(g_p).$$
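For completeness, the product formula follows from the above definitions by a short telescoping argument, sketched here using $\gamma_k(1) = P(T_{k-1} \le T) = \widehat{\gamma}_{k-1}(1)$:
$$\widehat{\gamma}_k(1) = \gamma_k(g_k) = \eta_k(g_k)\,\gamma_k(1) = \eta_k(g_k)\,\widehat{\gamma}_{k-1}(1), \qquad \text{so by induction} \qquad P(T_n \le T) = \widehat{\gamma}_n(1) = \prod_{p=0}^{n} \eta_p(g_p).$$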
MULTILEVEL FEYNMAN-KAC DISTRIBUTIONS

In order to keep track of the discrete mode, we construct, for any $j \in M$,
$$\gamma_k^j(f) = E\big[f(Z_k)\,g_{k-1}^j(Z_{k-1})\big] = E\big[f(Z_t,\ T_{k-1} \le t \le T_k \wedge T)\,\mathbf{1}_{\{T_{k-1} \le T,\ \theta_{T_{k-1}} = j\}}\big],$$
$$\widehat{\gamma}_k^j(f) = E\big[f(Z_k)\,g_k^j(Z_k)\big] = E\big[f(Z_t,\ T_{k-1} \le t \le T_k)\,\mathbf{1}_{\{T_k \le T,\ \theta_{T_k} = j\}}\big],$$
and the normalized measures
$$\eta_k^j(f) = \frac{\gamma_k^j(f)}{\gamma_k^j(1)} = E\big[f(Z_t,\ T_{k-1} \le t \le T_k \wedge T) \,\big|\, T_{k-1} \le T,\ \theta_{T_{k-1}} = j\big],$$
$$\widehat{\eta}_k^j(f) = \frac{\widehat{\gamma}_k^j(f)}{\widehat{\gamma}_k^j(1)} = E\big[f(Z_t,\ T_{k-1} \le t \le T_k) \,\big|\, T_k \le T,\ \theta_{T_k} = j\big].$$
We have the decompositions
$$\widehat{\eta}_k = \sum_{j \in M} \omega_k^j\,\widehat{\eta}_k^j, \qquad \eta_{k+1} = \sum_{j \in M} \omega_k^j\,\eta_{k+1}^j, \qquad \text{where} \quad \omega_k^j = \widehat{\eta}_k(g_k^j) = P(\theta_{T_k} = j \mid T_k \le T).$$
DYNAMICAL EVOLUTION

Using the Markov property of $Z$ (with Markov kernel $M_k$), we obtain
$$\gamma_k(f) = \gamma_{k-1}(g_{k-1}\,M_k f) \qquad \text{and} \qquad \gamma_k^j(f) = \gamma_{k-1}(g_{k-1}^j\,M_k f),$$
and the nonlinear measure-valued transformations
$$\widehat{\eta}_k(f) = \frac{\eta_k(f g_k)}{\eta_k(g_k)} =: \Psi_k(\eta_k)(f), \qquad \widehat{\eta}_k^j(f) = \frac{\eta_k(f g_k^j)}{\eta_k(g_k^j)} =: \Psi_k^j(\eta_k)(f);$$
hence the following two separate selection/mutation transitions
$$\eta_k \xrightarrow{\ \text{selection}\ } \widehat{\eta}_k := \Psi_k(\eta_k) \xrightarrow{\ \text{mutation}\ } \eta_{k+1} = \widehat{\eta}_k M_{k+1}.$$
PARTICLE METHODS

Particle methods are a kind of stochastic linearisation technique for solving nonlinear equations in measure space.

Using two sequences of $N$ particles, $\xi = (\xi^1, \dots, \xi^N)$ and $\widehat{\xi} = (\widehat{\xi}^1, \dots, \widehat{\xi}^N)$, we approximate the two-step transitions
$$\eta_k \xrightarrow{\ \text{selection}\ } \widehat{\eta}_k := \Psi_k(\eta_k) \xrightarrow{\ \text{mutation}\ } \eta_{k+1} = \widehat{\eta}_k M_{k+1}$$
by
$$\eta_k^N := \frac{1}{N}\sum_{i=1}^N \delta_{\xi_k^i} \xrightarrow{\ \text{selection}\ } \widehat{\eta}_k^N := \frac{1}{N}\sum_{i=1}^N \delta_{\widehat{\xi}_k^i} \xrightarrow{\ \text{mutation}\ } \eta_{k+1}^N = \frac{1}{N}\sum_{i=1}^N \delta_{\xi_{k+1}^i}.$$
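In pseudocode, one selection/mutation transition of this plain (not yet per-mode) particle scheme could look as follows; this is a sketch in which `g` and `mutate` are placeholder callables for the selection function $g_k$ and the Markov kernel $M_{k+1}$.

```python
import numpy as np

def particle_step(particles, g, mutate, rng):
    """One selection/mutation transition of the particle approximation:
    eta_k^N --(selection via g_k)--> hat-eta_k^N --(mutation via M_{k+1})--> eta_{k+1}^N.

    Returns the new particle cloud and the estimate eta_k^N(g_k) of the
    k-th conditional probability; (None, 0.0) if the system dies.
    """
    N = len(particles)
    weights = np.array([g(xi) for xi in particles], dtype=float)  # 0/1 for an indicator g_k
    eta_gk = weights.mean()
    if eta_gk == 0.0:
        return None, 0.0
    # selection: resample N particles from the weighted empirical measure Psi_k(eta_k^N)
    idx = rng.choice(N, size=N, p=weights / weights.sum())
    # mutation: move each selected particle with the Markov kernel M_{k+1}
    return [mutate(particles[i], rng) for i in idx], eta_gk
```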
SAMPLING PER MODE ALGORITHM

The main idea of the sampling per mode algorithm consists in maintaining a fixed number of particles in each mode at each resampling step.

So, instead of starting the algorithm with $N$ particles randomly distributed, we draw a fixed number $N^j$ of particles in each mode $j$, and at each resampling step the same number of particles is sampled for each visited mode.

Obviously, the total number of particles can change whenever some mode ceases to be visited, or an empty mode is visited afresh.

Let $\widehat{N}_k$ and $N_k$ denote the total numbers of particles $\widehat{\xi}_k$ and $\xi_k$, and $\omega_k^{j,N}$ the weights associated with the modes; we have the evolution scheme
$$(N_k,\ (\omega_{k-1}^{j,N})_{j \in J_{k-1}},\ \xi_k) \to (\widehat{N}_k,\ (\omega_k^{j,N})_{j \in J_k},\ \widehat{\xi}_k) \to (N_{k+1},\ (\omega_k^{j,N})_{j \in J_k},\ \xi_{k+1}).$$
INITIALIZATION

In each mode $j$, we sample $N^j$ particles $\xi_0^\kappa = \widehat{\xi}_0^\kappa = (0, X_0^{\kappa,j}) \sim \eta_0^j$.

Let $\omega_0^j = P(\theta_0 = j)$; then $\eta_0^N$ and $\widehat{\eta}_0^N$ are given by
$$\eta_0^N = \sum_{j \in M} \omega_0^j\,\eta_0^{j,N}, \qquad \widehat{\eta}_0^N = \sum_{j \in M} \omega_0^j\,\widehat{\eta}_0^{j,N},$$
with
$$\eta_0^{j,N} = \frac{1}{N^j}\sum_{\kappa \in J_0^j} \delta_{\xi_0^\kappa}, \qquad \widehat{\eta}_0^{j,N} = \frac{1}{N^j}\sum_{\kappa \in J_0^j} \delta_{\widehat{\xi}_0^\kappa}.$$
Here $J_0^j$ is the set of the indices of the particles in mode $j$.

MUTATION
$(\widehat{N}_k,\ \omega_k^N,\ \widehat{\xi}_k) \to (N_{k+1},\ \omega_k^N,\ \xi_{k+1})$. If $\widehat{N}_k = 0$ the particle system dies; otherwise, independently of each other, each particle $\widehat{\xi}_k^\kappa$ evolves randomly according to the Markov transition $M_{k+1}$.

Neither the total number of particles nor the weight of each particle changes ($N_{k+1} = \widehat{N}_k$). So
$$\eta_{k+1}^N = \sum_{j \in J_k} \omega_k^{j,N}\,\eta_{k+1}^{j,N}, \qquad \text{with} \quad \eta_{k+1}^{j,N} = \frac{1}{N^j}\sum_{\kappa \in J_k^j} \delta_{\xi_{k+1}^\kappa},$$
where $J_k^j$ is the set of the labels of the particles in mode $j \in J_k$.
SELECTION/RESAMPLING

$(N_{k+1},\ \omega_k^N,\ \xi_{k+1}) \to (\widehat{N}_{k+1},\ \omega_{k+1}^N,\ \widehat{\xi}_{k+1})$. Select only the particles $\xi_{k+1}^\kappa$ having reached the desired set $B_{k+1}$.

Let $I_{k+1}^N$ denote the set of (indices of) good particles; if $I_{k+1}^N = \emptyset$ the algorithm is stopped. Otherwise, for each non-empty mode $j$, resample $N^j$ particles according to $\Psi_{k+1}^j(\eta_{k+1}^N)$, and set
$$\widehat{\eta}_{k+1}^N = \sum_{j \in J_{k+1}} \omega_{k+1}^{j,N}\,\widehat{\eta}_{k+1}^{j,N}, \qquad \text{with} \quad \widehat{\eta}_{k+1}^{j,N} = \frac{1}{N^j}\sum_{\kappa \in \widehat{J}_{k+1}^j} \delta_{\widehat{\xi}_{k+1}^\kappa}$$
and
$$\omega_{k+1}^{j,N} = \widehat{\eta}_{k+1}^N(g_{k+1}^j) = \frac{\eta_{k+1}^N(g_{k+1}^j)}{\eta_{k+1}^N(g_{k+1})}.$$
The total number of particles is $\widehat{N}_{k+1} = \sum_{j \in J_{k+1}} N^j$.
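Putting the three steps together, a hedged end-to-end sketch of the sampling per mode loop might read as follows. The interfaces `sample_init` and `excursion` are illustrative assumptions, and the bookkeeping follows the weight updates $\omega_k^{j,N} = \eta_k^N(g_k^j)/\eta_k^N(g_k)$ above.

```python
import numpy as np

def sampling_per_mode(omega0, sample_init, excursion, n_levels, Nj, rng):
    """Sketch of the sampling per mode splitting estimator of P(T_n <= T).

    omega0[j] = P(theta_0 = j); sample_init(j, rng) draws a particle in mode j;
    excursion(xi, k, rng) runs the process until it hits D_k or the horizon T
    and returns (final_state, final_mode, reached: bool). Nj[j] is the
    per-mode particle budget; all of these interfaces are placeholders.
    """
    cloud = {j: [sample_init(j, rng) for _ in range(Nj[j])] for j in omega0}
    omega = dict(omega0)
    estimate = 1.0
    for k in range(1, n_levels + 1):
        # mutation: run every excursion; group survivors by mode at hit time,
        # each carrying weight omega_{k-1}^j / N^j inherited from its start mode j
        survivors, p_k = {}, 0.0
        for j, particles in cloud.items():
            hits = [(z, m) for (z, m, ok) in (excursion(xi, k, rng) for xi in particles) if ok]
            p_k += omega[j] * len(hits) / Nj[j]     # contribution to eta_k^N(g_k)
            for z, m in hits:
                survivors.setdefault(m, []).append((z, omega[j] / Nj[j]))
        if not survivors:
            return 0.0                              # the particle system died
        estimate *= p_k
        # selection: new mode weights omega_k^{j,N}, and Nj[m] particles resampled
        # per visited mode m from Psi_k^m(eta_k^N)
        omega, cloud = {}, {}
        for m, parts in survivors.items():
            w = np.array([wt for _, wt in parts])
            omega[m] = w.sum() / p_k
            idx = rng.choice(len(parts), size=Nj[m], p=w / w.sum())
            cloud[m] = [parts[i][0] for i in idx]
    return estimate
```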
ASYMPTOTIC BEHAVIOUR

We now address the asymptotic behaviour of our estimator as $N \to \infty$.

To obtain a law of large numbers, we follow the approach of [Del Moral, 2004], based on a martingale decomposition; for the central limit theorem, we use a CLT for triangular arrays developed in [Le Gland & Oudjane, 2006].

Before stating the two theorems, we need some notation:
- $N_{\inf} = \inf_{j \in M} N^j$,
- we let $N \to \infty$ in such a way that each ratio $\rho_j := N^j / N$ is preserved,
- this implies that $N_{\inf} \to \infty$.
MAIN THEOREMS: LAW OF LARGE NUMBERS

THEOREM (LAW OF LARGE NUMBERS)
For any $n \ge 0$ and any bounded function $f$, we have
$$E\big(\gamma_n^N(f)\,\mathbf{1}_{\{N_n > 0\}}\big) = \gamma_n(f), \qquad \sup_{f : \|f\|_\infty \le 1} E\big[\mathbf{1}_{\{N_n > 0\}}\,\gamma_n^N(f) - \gamma_n(f)\big]^2 \le \frac{b^2(n)}{N_{\inf}}.$$

LLN FOR NORMALIZED MEASURES
For any $n \ge 0$ and any bounded function $f$, we have
$$\sup_{f : \|f\|_\infty \le 1} \Big| E\big[\eta_n^N(f)\,\mathbf{1}_{\{N_n > 0\}} - \eta_n(f)\big] \Big| \le \frac{b^2(n)}{N_{\inf}} + a(n)\,e^{-N_{\inf}/c(n)},$$
$$\sup_{f : \|f\|_\infty \le 1} \Big( E\big[\mathbf{1}_{\{N_n > 0\}}\,\eta_n^N(f) - \eta_n(f)\big]^2 \Big)^{1/2} \le \frac{2\,b^2(n)}{|\gamma_n(1)|^2\,N_{\inf}}.$$
MAIN THEOREMS: CENTRAL LIMIT THEOREM

THEOREM (CENTRAL LIMIT THEOREM)
Let $N \to \infty$ in such a way that $\rho_j = N^j / N$ is preserved for all $j \in M$. Then the random variable
$$\sqrt{N}\,\Big(\mathbf{1}_{\{N_{n+1} > 0\}}\,\gamma_{n+1}^N(1) - P(T_n < T)\Big)$$
converges in law to a Gaussian random variable with mean $0$ and variance $W_{n+1}$, where
$$\frac{W_{n+1}}{P^2(T_n < T)} = \sum_{q=0}^{n+1} \Omega_q \Big(\frac{1}{P_q} - 1\Big) + \sum_{q=0}^{n+1} \frac{\Omega_q}{P_q} \Bigg[\frac{\widehat{\eta}_q\big([\Delta_q^n \circ \pi]^2\big)}{\widehat{\eta}_q^2\big(\Delta_q^n \circ \pi\big)} - 1\Bigg],$$
with
$$\Omega_q = \sum_{j \in M} (\omega_{q-1}^j)^2\,\rho_j^{-1} = 1 + \chi^2(\omega_{q-1}, \rho), \qquad \Delta_q^n(t, z) = P(T_n \le T \mid T_q = t,\ Z_{T_q} = z).$$
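A side remark on the variance constant (a sketch, assuming the fractions are normalized, $\sum_{j \in M} \rho_j = 1$, and that $\chi^2(\omega, \rho) = \sum_j (\omega^j - \rho_j)^2 / \rho_j$ denotes the usual chi-square divergence): expanding the square gives
$$\Omega_q = \sum_{j \in M} \frac{(\omega_{q-1}^j)^2}{\rho_j} = \sum_{j \in M} \frac{(\omega_{q-1}^j - \rho_j)^2}{\rho_j} + 2\sum_{j \in M} \omega_{q-1}^j - \sum_{j \in M} \rho_j = 1 + \chi^2(\omega_{q-1}, \rho) \ge 1,$$
with equality if and only if $\rho_j = \omega_{q-1}^j$ for all $j$; choosing the per-mode budgets $N^j$ so that $\rho_j$ tracks the mode weights keeps this factor of the asymptotic variance small.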
CONCLUSION

The "sampling per mode" algorithm has "good" properties.

However, to reduce simulation time, we can:
- use an importance sampling technique to make rare switches more frequent,
- or aggregate the modes in order to decrease the complexity (for large-scale distributed hybrid systems).

Thus, we need to extend the previous results.

A better understanding of the expression of $W_{n+1}$ could guide the choice of the $N^j$ with regard to the cost of the algorithm.

This algorithm is implemented in software developed by the National Aerospace Laboratory (NLR) and used to evaluate the safety characteristics of an arbitrary (new) operational Air Traffic Management concept [Blom, 2009].
REFERENCES

Henk Blom, Bert Bakker, and Jaroslav Krystul. Rare event estimation for a large-scale stochastic hybrid system with air traffic application. In: Rare Event Simulation using Monte Carlo Methods, pp. 194–214. Wiley, 2009.

Jaroslav Krystul. Modelling of Stochastic Hybrid Systems with Applications to Accident Risk Assessment. PhD dissertation, University of Twente, 2006.

Pierre Del Moral. Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications. Springer-Verlag, 2004.

François Le Gland and Nadia Oudjane. A sequential particle algorithm that keeps the particle system alive. In: Stochastic Hybrid Systems: Theory and Safety Critical Applications, pp. 351–389. Springer, 2006.

Frédéric Cérou, Pierre Del Moral, François Le Gland, and Pascal Lezaud. Genetic genealogical models in rare event analysis. ALEA, vol. 1, paper 8, 2006.