
The Fokker-Planck equation and its application

to potential problems

Jim Wittebol

June 28, 2020

Bachelor thesis Mathematics and Physics & Astronomy
Supervisors: prof. dr. Sonja Cox, prof. dr. Greg Stephens

Institute of Physics

Korteweg-de Vries Institute for Mathematics
Faculty of Sciences


Abstract

In this thesis, we derive the Fokker-Planck equation and apply it to physical problems. The Fokker-Planck equation yields information about stochastic processes satisfying a certain kind of stochastic differential equation. To understand these equations, we define Brownian motion and build up the theory of stochastic integration. With this, we derive the Feynman-Kac formula, from which we deduce the Fokker-Planck equation as the dual result. We use the Fokker-Planck equation to tackle some problems with regard to potentials, like the famous Kramers' escape rate problem, but also eigenvalue problems of the Fokker-Planck equation and first passage times.

The image on the front page is taken from the recent article [1].

Title: The Fokker-Planck equation and its application to potential problems
Author: Jim Wittebol, jim.wittebol@student.uva.nl, 11810491
Supervisors: prof. dr. Sonja Cox, prof. dr. Greg Stephens
End date: June 28, 2020

Institute of Physics, University of Amsterdam
Science Park 904, 1098 XH Amsterdam
http://www.iop.uva.nl

Korteweg-de Vries Institute for Mathematics, University of Amsterdam
Science Park 904, 1098 XH Amsterdam
http://www.kdvi.uva.nl


Contents

1 Introduction
2 Brownian motion
2.1 The symmetric random walk
2.1.1 Scaling the symmetric walk
2.2 Brownian motion: Definition and comments
3 Itô-integrals and the Itô-formula
3.1 Itô-integrals
3.1.1 Simple processes
3.1.2 Itô-integral for general processes: The incorrect definition
3.1.3 Itô-isometry
3.1.4 Itô-integration: The correct definition
3.2 The Itô-Formula
4 The Feynman-Kac formula
4.1 The main theorem
4.2 The Fokker-Planck equation
5 Barrier crossing: Kramers' escape rate
5.1 Kramers' escape rate
6 Barrier crossing: Eigenvalue problems
6.1 Preliminary considerations
6.2 The metastable potential
7 First-passage time problems
7.1 Setup
7.2 Mean first passage time for the metastable potential
8 Conclusion
Bibliography


1 Introduction

In the study of the motion of a completely isolated particle in a one-dimensional potential f(x), the equation of motion is well known:

$$-\frac{\partial}{\partial x} f(x) = M\ddot{x}.$$

However, the model of the completely isolated particle is rarely accurate in real-world physics. One might, for example, want to take friction caused by air molecules into consideration, which simply yields a velocity-dependent term in the equation of motion:

$$-\frac{\partial}{\partial x} f(x) - \gamma \dot{x} = M\ddot{x}.$$

There is, however, one more effect that we can consider, namely collisions with other particles. In contrast to the friction term, this term must be modeled by a random process (that is, a Gaussian process R(t)). This yields the more complete equation of motion

$$-\frac{\partial}{\partial x} f(x) - \gamma \dot{x} + \sqrt{2\gamma k_B T}\, R(t) = M\ddot{x}.$$

This equation is the defining equation of Langevin dynamics. The problem that we will consider in this thesis is overdamped Langevin dynamics, which is the limit where the average acceleration is zero. Moving some terms around and integrating with respect to time gives us the equation that is the main point of study throughout this entire thesis:

$$dX(t) = -\frac{D}{k_B T} f'(x)\, dt + \sqrt{2D}\, dW(t), \qquad (1.1)$$

where W(t) is a Brownian motion, which will be defined in chapter 2, and D = k_B T / γ. This is the equation of motion of Brownian dynamics (this only makes sense once we know how to integrate with respect to a Brownian motion, which is the subject of chapter 3). As it turns out, studying single particles in Brownian dynamics is not a fruitful approach due to the random effects, which can cause very different trajectories for very similar initial conditions. The solution is to study ensembles of particles. Like in quantum mechanics, this turns our problem into a problem of probability densities and their behavior. There exists a partial differential equation that completely determines the motion of a probability density p(t, x) in a system with the equation of motion (1.1). This equation is the Fokker-Planck equation and is one of the main results of this thesis (however, we will mainly derive the Feynman-Kac formula, which is dual to the Fokker-Planck equation).
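As a quick numerical illustration of equation (1.1), the following sketch (not part of the original derivation) integrates the SDE with the Euler-Maruyama scheme, setting the prefactor D/(k_B T) to 1 for simplicity. The quadratic potential f(x) = x²/2 and all parameter values are hypothetical choices of ours; for that potential the stationary density is Gaussian with variance D, which the ensemble variance should reproduce.

```python
import numpy as np

# Euler-Maruyama integration of dX = -f'(X) dt + sqrt(2 D) dW, i.e.
# equation (1.1) with the prefactor D / (k_B T) set to 1 for simplicity.
# The quadratic potential f(x) = x^2 / 2 is a hypothetical test case:
# its stationary density is Gaussian with variance D.
rng = np.random.default_rng(0)
D, dt, n_steps, n_paths = 0.5, 5e-3, 4_000, 5_000

def f_prime(x):
    return x           # f(x) = x^2 / 2

x = np.zeros(n_paths)  # ensemble of particles, all starting at the origin
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    x += -f_prime(x) * dt + np.sqrt(2 * D) * dW

print(np.var(x))  # ensemble variance, close to D = 0.5
```

Individual trajectories differ wildly from run to run; only ensemble statistics such as this variance are reproducible, which is exactly the point made above.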


Afterward, we will start to use the Fokker-Planck equation to study the problem of a particle in a metastable potential well. Without the Brownian component in the equation of motion, a particle that starts at the bottom of a well will not move, but with the Brownian component involved, the particle can at least move at all. We will first solve the Kramers' escape rate problem, which asks at what rate particles leave the well. We will then look at eigenvalue problems for the Fokker-Planck equation, and we will finally derive the expected escape time for a particle in a metastable potential.


2 Brownian motion

In this chapter, we will first define the concept of Brownian motion. This is because the noise term in the physical problem that we introduced in the first chapter is modeled by a Brownian motion. To make it clearer where the idea of Brownian processes comes from, we will first briefly analyze a simpler process that is strongly related to it.

2.1 The symmetric random walk

Imagine yourself at the origin (of the real line, for simplicity). You toss a fair coin; if the outcome is heads, you take a (unit) step forward, and if the outcome is tails, you take a (unit) step backward. That is, we define random variables X_i, i ∈ ℕ, by

$$X_i = \begin{cases} 1 & \text{if the result is heads} \\ -1 & \text{if the result is tails} \end{cases}$$

and the position after k steps:

$$M_k = \sum_{i=1}^{k} X_i.$$

This stochastic process is called the symmetric random walk. We will explore some simple properties of this process.

First of all, we can define M_0 = 0, as you start at the origin. Furthermore, the random walk has independent increments. This means that if one picks integers 0 = k_0 < k_1 < … < k_n, then the random variables

$$M_{k_1} - M_{k_0},\; M_{k_2} - M_{k_1},\; \dots,\; M_{k_n} - M_{k_{n-1}}$$

are all independent. This is indeed clear because the increments depend on different coin tosses. One can also look at the expected value and variance of the increments. Note that

$$M_r - M_s = \sum_{i=s+1}^{r} X_i.$$

As the X_i have expectation zero and variance one, we have that E[M_r − M_s] = 0 and Var[M_r − M_s] = r − s.

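These increment properties are easy to check empirically. The following sketch (the sample size and the choice s = 3, r = 10 are ours, purely for illustration) estimates the mean and variance of M_r − M_s by simulating many walks.

```python
import numpy as np

# Monte Carlo check of E[M_r - M_s] = 0 and Var[M_r - M_s] = r - s
# for the symmetric random walk; s = 3 and r = 10 are arbitrary choices.
rng = np.random.default_rng(1)
s, r, n_walks = 3, 10, 200_000
steps = rng.choice([-1, 1], size=(n_walks, r))  # X_1, ..., X_r for each walk
M = np.cumsum(steps, axis=1)                    # M_1, ..., M_r
increments = M[:, r - 1] - M[:, s - 1]          # M_r - M_s
print(increments.mean(), increments.var())      # close to 0 and r - s = 7
```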

2.1.1 Scaling the symmetric walk

To go towards Brownian motion, we will speed up time and scale down the step size. To do this, define the scaled random walk

$$W^{(n)}(t) = \frac{1}{\sqrt{n}} M_{nt}.$$

This definition only makes sense if nt is an integer. If not, the value of W^{(n)}(t) is obtained by linear interpolation between values where this definition does make sense. (This is not that interesting of an issue for this project, as our only goal at the moment is to make the definition of Brownian motion understandable, so we will tacitly assume that nt is always an integer.) The prefactor 1/√n is necessary for consistency. This is because if we look at the increments of the scaled walk,

$$W^{(n)}(t) - W^{(n)}(s) = \sum_{i=ns+1}^{nt} \frac{X_i}{\sqrt{n}},$$

we see that this is the sum of nt − ns = n(t − s) random variables with expected value 0 and variance 1/n, leading us to the conclusion that the scaled increment has expected value 0 and variance t − s. One might notice that this scaling is the same as the kind of scaling one encounters in the well-known central limit theorem. This motivates the next theorem:

Theorem 2.1. Keep t ≥ 0 fixed. If we let n → ∞, the distribution of W^{(n)}(t) converges to the normal distribution with expected value zero and variance t.

The theorem follows directly from the central limit theorem. An analogous result follows for increments, saying that the distribution of W^{(n)}(t) − W^{(n)}(s) converges to a normal distribution with expected value zero and variance t − s.
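Theorem 2.1 can be illustrated numerically. The sketch below (the values of t, n, the threshold x, and the sample size are hypothetical choices) compares an empirical probability for W^{(n)}(t) with the corresponding N(0, t) probability.

```python
import math
import numpy as np

# Empirical check of Theorem 2.1: W^(n)(t) = M_{nt} / sqrt(n) is
# approximately N(0, t) for large n. We compare an empirical CDF value
# with Phi(x / sqrt(t)) = (1 + erf(x / sqrt(2 t))) / 2.
rng = np.random.default_rng(2)
t, n, n_samples = 2.0, 500, 50_000
steps = rng.integers(0, 2, size=(n_samples, int(n * t)), dtype=np.int8) * 2 - 1
W_n = steps.sum(axis=1) / math.sqrt(n)              # W^(n)(t)
x = 1.0
empirical = np.mean(W_n <= x)                       # P(W^(n)(t) <= x)
exact = 0.5 * (1 + math.erf(x / math.sqrt(2 * t)))  # P(N(0, t) <= x)
print(empirical, exact)  # the two probabilities nearly coincide
```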

2.2 Brownian motion: Definition and comments

We finally have done enough to define Brownian motion in a motivated manner. We get Brownian motion informally by letting n go to infinity in W(n)(t) (a true construction is more subtle). The full definition is as follows:

Definition 2.2. Let (Ω, F, P) be a probability space. We call a mapping W : [0, ∞) × Ω → ℝ a Brownian motion if the following four conditions hold (by convention, one often omits the ω-argument of the function):

(B1.) W(0) = 0 for all ω ∈ Ω.

(B2.) W(t) has independent increments; that is, for all 0 = t_0 < t_1 < … < t_n the random variables

$$W(t_1) - W(t_0),\; W(t_2) - W(t_1),\; \dots,\; W(t_n) - W(t_{n-1})$$

are independent.

(B3.) For all 0 ≤ s < r < ∞ we have that W(r) − W(s) ∼ N(0, r − s).

(B4.) t ↦ W(t) is continuous for all ω ∈ Ω.

We will make some closing remarks on this definition.

First of all, Brownian motion and scaled symmetric walks for large n are similar, but they still differ. Indeed, scaled symmetric walks are piecewise linear, and Brownian motion is not. Furthermore, Brownian motion has increments that are normally distributed, while scaled symmetric walks have increments that are only close to normally distributed. Of course, in the limit these differences become less noticeable, but we have not given a proof that a model for Brownian motion even exists. Such models do exist; we will not prove this fact, as it is non-trivial and of limited interest to this particular project. Instead, we leave it at the plausibility argument that we have given in the first two sections. For a proof of the existence of Brownian motion, see chapter I.6 of [2].
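The defining properties also give a direct way to sample a Brownian path on a grid: draw the increments as independent normals (B3) and take cumulative sums starting from zero (B1). The sketch below (grid and sample sizes are arbitrary choices of ours) does this and checks B2 and B3 on two disjoint increments.

```python
import numpy as np

# Sample Brownian paths on a grid of step dt and sanity-check the
# definition: disjoint increments should be uncorrelated (B2) and
# W(1.0) - W(0.5) should have variance 0.5 (B3).
rng = np.random.default_rng(3)
n_paths, n_steps, dt = 50_000, 100, 0.01
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))  # N(0, dt) increments
W = np.cumsum(dW, axis=1)                                   # paths with W(0) = 0
inc1 = W[:, 49]             # W(0.5) - W(0)
inc2 = W[:, 99] - W[:, 49]  # W(1.0) - W(0.5)
print(np.corrcoef(inc1, inc2)[0, 1])  # close to 0, as B2 demands
print(inc2.var())                     # close to 0.5, as B3 demands
```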

We will need one further notion for upcoming chapters. The notion formalizes the idea of the information that one has at a given time.

Definition 2.3. Let (Ω, F, P) be a probability space with a Brownian motion W(t). A filtration for the Brownian motion is a collection of σ-algebras (F_t)_{t≥0} satisfying the following three conditions:

(F1.) For 0 ≤ s < t, we have that F_s ⊆ F_t.

(F2.) For each t ≥ 0, W(t) is F_t-measurable.

(F3.) For 0 ≤ t < u, W(u) − W(t) is independent of F_t. That is, for all F ∈ F_t:

$$P(\{W(u) - W(t) \le x\} \cap F) = P(W(u) - W(t) \le x)\, P(F).$$

A simple example of a filtration is the information one gains by looking at the Brownian motion itself. Here, we mean the filtration

$$\mathcal{F}_t = \sigma(\{\{W(s) \le x\} : x \in \mathbb{R},\; s \le t\}).$$

The first two conditions are obvious, and the third condition follows from the definition of Brownian motion, even though giving a rigorous proof that this is indeed implied by the definition is non-trivial. There can also be filtrations that consist of information gleaned from other processes. It is then very important that one does not get clues about the future of the Brownian motion; that is, F3 must still hold.


3 Itô-integrals and the Itô-formula

In this chapter, we will develop stochastic calculus. This is necessary for our goal of solving the stochastic differential equation mentioned in the introduction. We will first develop a way to integrate certain stochastic processes with respect to Brownian motion.

3.1 Itô-integrals

Let T > 0. We want to develop a way to make sense of the following expression:

$$\int_0^T X(t)\, dW(t).$$

Here, W is a Brownian motion (together with a filtration F_t) and X is an adapted stochastic process. This simply means that the information F_t is enough to evaluate X(t) with certainty as well, for all t ≥ 0; more formally, X(t) is F_t-measurable. To define this integral, we will take an approach that resembles the definition of the Lebesgue integral known to anyone who has studied measure theory, but is yet quite different:

• Define the integral for simple processes.
• Prove that the integral for simple processes satisfies the Itô-isometry.
• Generalize by taking the right type of limits.

We will of course elaborate on how this works in the stochastic case.

3.1.1 Simple processes

We will start off with a definition that should not be too surprising:

Definition 3.1. A stochastic process S : [0, ∞) × Ω → ℝ is called a simple process if there exist 0 ≤ t_0 < t_1 < … < t_n and random variables ξ_0, ξ_1, …, ξ_{n−1} such that

$$S(t) = \sum_{k=1}^{n} \xi_{k-1} \mathbf{1}_{[t_{k-1}, t_k)}(t).$$

We now define the integral

$$I(t) = \int_0^t S(u)\, dW(u)$$

for simple processes to be

$$I(t) = \sum_{i=0}^{k-1} S(t_i)\,[W(t_{i+1}) - W(t_i)] + S(t_k)\,[W(t) - W(t_k)],$$

where k is such that t_k ≤ t < t_{k+1}.

Note that the integral for a particular t is a random variable. We can now easily verify some properties that this integral has (for simple processes) and should have if it is to behave somewhat like an ordinary integral:

• The stochastic integral is linear.
• We have $\int_0^t dW(u) = W(t)$.
• t ↦ I(t) is continuous.
• The integral I(t) is F_t-measurable.
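The left-endpoint evaluation S(t_i) in this definition is the characteristic feature of the Itô-integral, and we can already compute with it numerically. Anticipating the Itô-formula of section 3.2, which gives ∫₀ᵀ W dW = (W(T)² − T)/2, the following sketch (grid and sample sizes are arbitrary choices) forms the Itô sum for the simple process S(t) = W(t_i) on [t_i, t_{i+1}) and compares it with that closed form.

```python
import numpy as np

# Itô sum with left-endpoint evaluation for S(t) = W(t_i) on [t_i, t_{i+1}),
# compared against (W(T)^2 - T) / 2, the value the Itô-formula predicts
# for int_0^T W dW.
rng = np.random.default_rng(4)
n_paths, n_steps, T = 10_000, 500, 1.0
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])  # left endpoints W(t_i)
I = np.sum(W_left * dW, axis=1)          # Itô sum
closed_form = (W[:, -1] ** 2 - T) / 2
print(np.mean(np.abs(I - closed_form)))  # small, shrinking as the mesh refines
```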

3.1.2 Itô-integral for general processes: The incorrect definition

A naive yet logical way to continue would be to simply define the general integral by taking the pointwise limit of integrals of the above form. This will fail, and we will briefly discuss the reason for this failure. It turns out that one can show that the pointwise limit exists for all continuous processes only if Brownian motion paths had bounded variation, that is, the quantity

$$LV(W) = \sup_{\Pi} \sum_k |W(t_k) - W(t_{k-1})|,$$

where the supremum is taken over all partitions Π of [0, T], should be bounded for every ω ∈ Ω (this is a non-trivial consequence of the famous uniform boundedness theorem from functional analysis). However, it can be shown that

$$P(LV(W) < \infty) = 0,$$

which could therefore be seen as the exact opposite of what one would need. Therefore, this notion of integration would not even allow us to define the integral for all continuous processes, which makes it incredibly impractical. There is a solution to this issue, but it turns out that this will remove the possibility of pathwise computation almost entirely. We will say more about this after we have correctly defined the integral.


3.1.3 Itô-isometry

We first need an important theorem about the Itô-integral for simple processes. We first state the result:

Theorem 3.2 (Itô-isometry). For simple processes S(t) adapted to W(t) as defined above, we have

$$E[I^2(t)] := E\left[\left(\int_0^t S(u)\, dW(u)\right)^2\right] = E\left[\int_0^t S(u)^2\, du\right]$$

for every 0 ≤ t ≤ T.

The name Itô-isometry suggests that there is some sort of norm-preserving property. This is, in fact, the case, when we look at the integral as a map from the simple processes S([0, T] × Ω) to the space of square-integrable random variables L²(Ω), with inner products

$$(X, Y)_{L^2_{ad}([0,T] \times \Omega)} = E\left[\int_0^T X(t) Y(t)\, dt\right], \qquad (X, Y)_{L^2(\Omega)} = E[XY].$$

Then, the theorem simply says that the Itô-integral is norm-preserving. We will now give a full proof of this result.

Proof. First of all, let us introduce some notation. Otherwise using notation as in the definition of the integral for simple processes, set D_i = W(t_{i+1}) − W(t_i) for i ≤ k − 1 and D_k = W(t) − W(t_k). Therefore, we have that I(t) = \sum_{i=0}^{k} S(t_i) D_i, and by simple algebra:

$$I^2(t) = \sum_{i=0}^{k} S^2(t_i) D_i^2 + 2 \sum_{0 \le i < j \le k} S(t_i) S(t_j) D_i D_j.$$

This might seem quite ugly, but we still need to take the expectation, and we will show that the cross terms have zero expectation. Note that the random variable S(t_i) S(t_j) D_i is F_{t_j}-measurable, while the random variable D_j is independent of F_{t_j}. Thus S(t_i) S(t_j) D_i and D_j are independent. Furthermore, increments of Brownian motion have expectation zero, from which it follows that

$$E[S(t_i) S(t_j) D_i D_j] = E[S(t_i) S(t_j) D_i]\, E[D_j] = 0.$$

We therefore only need to concern ourselves with the expectation of the square terms. Starting similarly as with the cross terms, we can see that S^2(t_i) is F_{t_i}-measurable, while D_i^2 is independent of F_{t_i}. Furthermore, we know that E[D_i^2] = Var[D_i] = t_{i+1} − t_i if i ≤ k − 1, and E[D_k^2] = t − t_k. Thus,

$$E[I^2(t)] = \sum_{i=0}^{k} E[S^2(t_i) D_i^2] = \sum_{i=0}^{k} E[S^2(t_i)]\, E[D_i^2] = \sum_{i=0}^{k-1} E[S^2(t_i)(t_{i+1} - t_i)] + E[S^2(t_k)(t - t_k)].$$

What follows is the main observation in the proof. We know that S(t_i) is constant on (t_i, t_{i+1}) and thus S^2(t_i)(t_{i+1} - t_i) = \int_{t_i}^{t_{i+1}} S^2(u)\, du for i ≤ k − 1. For i = k a similar observation holds, and we finally find

$$E[I^2(t)] = \sum_{i=0}^{k-1} E[S^2(t_i)(t_{i+1} - t_i)] + E[S^2(t_k)(t - t_k)] = E\left[\sum_{i=0}^{k-1} \int_{t_i}^{t_{i+1}} S^2(u)\, du + \int_{t_k}^{t} S^2(u)\, du\right] = E\left[\int_0^t S^2(u)\, du\right],$$

which is precisely what was claimed.
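The isometry can also be checked by Monte Carlo. For the process S(t) = W(t_i) on a grid (a simple process), E[I²(T)] should match E[∫₀ᵀ S(u)² du] ≈ ∫₀ᵀ u du = T²/2; the sketch below uses arbitrary grid and sample sizes of our own choosing.

```python
import numpy as np

# Monte Carlo check of the Itô-isometry for S(t) = W(t_i) on a grid:
# both E[I(T)^2] and E[int_0^T S(u)^2 du] should be close to T^2 / 2.
rng = np.random.default_rng(5)
n_paths, n_steps, T = 20_000, 500, 1.0
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])
I = np.sum(W_left * dW, axis=1)                  # I(T)
lhs = np.mean(I ** 2)                            # E[I(T)^2]
rhs = np.mean(np.sum(W_left ** 2, axis=1) * dt)  # E[int_0^T S(u)^2 du]
print(lhs, rhs)  # both close to T^2 / 2 = 0.5
```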

3.1.4 Itô-integration: The correct definition

We will finally define the Itô-integral $\int_0^T X(t)\, dW(t)$ for more general X(t). To be exact, we allow X(t) to be piecewise continuous for all ω ∈ Ω, and we still need that X(t) is F_t-measurable, that is, X(t) is adapted to the filtration of the Brownian motion W(t). Finally, we ask that X(t) is square-integrable, that is:

$$E\left[\int_0^T X^2(t)\, dt\right] < \infty.$$

The definition goes as follows. It turns out that for this class of processes, one can choose a sequence of simple processes (S_n)_{n∈ℕ} so that $\lim_{n \to \infty} E\left[\int_0^T |S_n(t) - X(t)|^2\, dt\right] = 0$ (see lemma 2.4 of [3]). Now, we define

$$\int_0^t X(u)\, dW(u) = \lim_{n \to \infty} \int_0^t S_n(u)\, dW(u)$$

for 0 ≤ t ≤ T. Multiple comments are in order.

First of all, we noted that this does not work if we try to formulate a pathwise definition of the integral. In that case, why does this work? In fact, the sequence

$$I_n(t) = \int_0^t S_n(u)\, dW(u)$$

is Cauchy in L²(Ω). This holds because of the Itô-isometry, which says that

$$E[(I_n(t) - I_m(t))^2] = E\left[\int_0^t |S_n(u) - S_m(u)|^2\, du\right],$$

and the right hand side tends to zero because of how we chose the sequence of simple processes. Note that $\lim_{n \to \infty} E\left[\int_0^T |S_n(t) - X(t)|^2\, dt\right] = 0$ does NOT imply $\lim_{n \to \infty} S_n(\omega) = X(\omega)$, stressing once again that the evaluation of the integral cannot be done pathwise. The general integral inherits properties from the integral of simple processes. Examples of these properties are (please refer to chapter 3.2B of [3] for further elaboration):

• Continuity (the paths of I(t) are continuous as a function of the upper limit of integration t)
• Adaptivity (for each t, I(t) is F_t-measurable)
• Linearity
• Itô-isometry

3.2 The Itô-Formula

In this section, we will state a theorem that can be seen as the fundamental theorem of calculus for stochastic integrals.

Theorem 3.3. Let f(t, x) be a function with partial derivatives f_t(t, x), f_x(t, x), f_{xx}(t, x) defined and continuous, and let W(t) be a Brownian motion. Then for every T ≥ 0:

$$f(T, W(T)) = f(0, W(0)) + \int_0^T f_t(t, W(t))\, dt + \int_0^T f_x(t, W(t))\, dW(t) + \frac{1}{2} \int_0^T f_{xx}(t, W(t))\, dt.$$

To retrieve the analogous result for one-variable functions, just remove the term with the time derivative:

$$f(W(T)) = f(W(0)) + \int_0^T f'(W(t))\, dW(t) + \frac{1}{2} \int_0^T f''(W(t))\, dt.$$

One would probably spot a clear difference between this result and the fundamental theorem: the addition of the last term. Some other things might look strange as well. We will briefly sketch the logic behind these peculiarities; the full proof is too difficult to discuss here. A complete proof is given in the first chapter of [4].

We know by Taylor's theorem that

$$f(W(t_{i+1})) - f(W(t_i)) = f'(W(t_i))(W(t_{i+1}) - W(t_i)) + \text{error},$$

and of course by extension

$$f(W(t_{i+1})) - f(W(t_i)) = f'(W(t_i))(W(t_{i+1}) - W(t_i)) + \frac{1}{2} f''(W(t_i))(W(t_{i+1}) - W(t_i))^2 + \text{error}.$$

The error goes to zero when one lets the mesh of the partition (that is, the maximum of the lengths of the intervals in the partition) go to zero. However, as we decrease the mesh, we must sum over more terms in the calculation of the integral (as in section 3.1.1, for example). It turns out that the sum of the first-order errors does NOT go to zero when the mesh goes to zero, because Brownian motion is not "nice enough" (formally represented by the fact that Brownian motion has nonzero "quadratic variation"), but the sum of the smaller, second-order errors does. This is the reason for the inclusion of the second-order term. An observant reader might note some other oddities in the formula. For example, the second-order term is

$$\frac{1}{2} \int_0^T f''(W(t))\, dt,$$

while one would expect dW(t)dW(t) when looking at the Taylor expansion. Furthermore, while it might be believable that derivatives of order 3 or higher do not enter the formula, why do the derivatives f_{tt} and f_{tx} not matter? It turns out that, informally, it does not matter whether one writes dW(t)dW(t) or dt. An informal reason for this is that

$$E\left[\sum_{t_i < t} (W(t_{i+1}) - W(t_i))^2\right] = \sum_{t_i < t} (t_{i+1} - t_i) \to t$$

as the mesh goes to zero, by basic properties of Brownian motion. Furthermore, it turns out that the sums associated with the other derivatives already go to zero when the mesh goes to zero. That is,

$$\left|\sum_i f_{tx}(t_i, W(t_i))(t_{i+1} - t_i)(W(t_{i+1}) - W(t_i))\right| \to 0$$

and

$$\left|\sum_i f_{tt}(t_i, W(t_i))(t_{i+1} - t_i)^2\right| \to 0$$

as the mesh goes to zero. For more elaboration on the details of this discussion and a good sketch of a proof of the Itô-formula, see section 4.2 of [5].
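The nonzero quadratic variation invoked above is easy to observe numerically. The sketch below (the mesh sizes are arbitrary choices) shows that the quadratic variation Σ(ΔW)² concentrates on t, while the linear variation Σ|ΔW| diverges as the mesh shrinks, exactly the behaviour behind the failure of the pathwise definition in section 3.1.2.

```python
import numpy as np

# Quadratic variation sum (dW)^2 concentrates on t as the mesh t/n shrinks,
# while the linear variation sum |dW| grows without bound (~ sqrt(2 n / pi)).
rng = np.random.default_rng(6)
t = 1.0
for n in (100, 10_000, 1_000_000):
    dW = rng.normal(0.0, np.sqrt(t / n), size=n)   # increments on a mesh of size t/n
    print(n, np.sum(dW ** 2), np.sum(np.abs(dW)))  # quadratic vs linear variation
```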


4 The Feynman-Kac formula

4.1 The main theorem

In this chapter, we will find a relation between partial differential equations and stochastic differential equations. For our purposes, a stochastic differential equation can be written as

$$dX(u) = \mu(u, X(u))\, du + \sigma(u, X(u))\, dW(u), \qquad X(t_0) = x, \qquad (4.1)$$

where μ and σ are given functions. An adapted process X(t) is a solution of equation (4.1) if

$$X(t) = x + \int_{t_0}^{t} \mu(u, X(u))\, du + \int_{t_0}^{t} \sigma(u, X(u))\, dW(u).$$

This formulation makes sense if the first integral is well-defined as an ordinary integral and the second as an Itô-integral. Note that it is sensible to assume that X(t) is adapted, as X(t) only depends on the Brownian motion up to time t. With this, we are ready to formulate the main result of this chapter:

Theorem 4.1. Let t_0 ∈ [0, ∞) and x_0 ∈ ℝ. Assume X is a solution of the stochastic differential equation

$$dX(u) = \mu(u, X(u))\, du + \sigma(u, X(u))\, dW(u), \qquad X(t_0) = x_0.$$

Let h(y) be a Borel-measurable function, and fix a T > 0. Then, if there exists a g(t, x) such that g_t, g_x and g_{xx} exist, satisfying the partial differential equation

$$g_t(t, x) + \mu(t, x) g_x(t, x) + \frac{1}{2} \sigma^2(t, x) g_{xx}(t, x) = 0$$

with terminal condition g(T, x) = h(x) for all x ∈ ℝ, then

$$g(t_0, x_0) = E[h(X(T))].$$

Proof. By the Itô-formula, we have

$$dg(t, X(t)) = g_t\, dt + g_x\, dX + \frac{1}{2} \sigma^2 g_{xx}\, dt.$$

We combine this with the stochastic differential equation that X satisfies and get

$$dg(t, X(t)) = \left(g_t + \mu g_x + \frac{1}{2} \sigma^2 g_{xx}\right) dt + \sigma g_x\, dW.$$

However, we know that g_t + μ g_x + ½σ² g_{xx} = 0, so that dg(t, X(t)) = σ g_x dW. Hence, we can write

$$g(T, X(T)) - g(t_0, x_0) = \int_{t_0}^{T} \sigma g_x(t, X(t))\, dW(t).$$

Now using the terminal condition and taking expectations, we get

$$E[h(X(T))] - g(t_0, x_0) = E\left[\int_{t_0}^{T} \sigma g_x(t, X(t))\, dW(t)\right],$$

and therefore the proof reduces to the following lemma:

Lemma 4.2. Suppose A is an Itô-integrable process defined on [t_0, T). Then

$$E\left[\int_{t_0}^{T} A(t)\, dW(t)\right] = 0.$$

Proof. First, we prove this for simple processes. Let S(t) be a simple process on [t_0, T),

$$S(t) = \sum_{k=1}^{n} \xi_{k-1} \mathbf{1}_{[t_{k-1}, t_k)}(t),$$

with ξ_{k−1} being F_{t_{k−1}}-measurable and E[ξ_{k−1}²] < ∞. Then we simply get

$$E\left[\int_{t_0}^{T} S(t)\, dW(t)\right] = \sum_{k=1}^{n} E[\xi_{k-1}(W(t_k) - W(t_{k-1}))].$$

We now need some basic properties of conditional expectations. For a reference, see theorem 8.12 of [6]:

(CE1) E[E[ξ | G]] = E[ξ] when ξ is a random variable and G is a σ-algebra.
(CE2) If ξ is G-measurable, then E[ξη | G] = ξ E[η | G].
(CE3) If ξ is independent of G, then E[ξ | G] = E[ξ].

Let us now work on the expression E[ξ_{k−1}(W(t_k) − W(t_{k−1}))]. By applying the properties in succession:

$$E[\xi_{k-1}(W(t_k) - W(t_{k-1}))] = E\big[E[\xi_{k-1}(W(t_k) - W(t_{k-1})) \mid \mathcal{F}_{t_{k-1}}]\big] = E\big[\xi_{k-1}\, E[W(t_k) - W(t_{k-1}) \mid \mathcal{F}_{t_{k-1}}]\big] = E\big[\xi_{k-1}\, E[W(t_k) - W(t_{k-1})]\big] = 0,$$

since increments of Brownian motion have expectation zero. This proves the lemma for simple processes.


Now suppose we have a general adapted process A. Take a sequence of simple processes (S_n)_{n∈ℕ} such that

$$\lim_{n \to \infty} E\left[\int_{t_0}^{T} |A(t) - S_n(t)|^2\, dt\right] = 0.$$

Now notice the following:

$$\left|E\int_{t_0}^{T} A(t)\, dW(t)\right| = \left|E\int_{t_0}^{T} A(t)\, dW(t) - E\int_{t_0}^{T} S_n(t)\, dW(t)\right| = \left|E\int_{t_0}^{T} (A(t) - S_n(t))\, dW(t)\right| \le \left(E\left|\int_{t_0}^{T} (A(t) - S_n(t))\, dW(t)\right|^2\right)^{1/2} \xrightarrow{\; n \to \infty \;} 0.$$

In the first step we use that the expectation is zero for simple processes; the second step uses linearity of the Itô-integral; the inequality follows from Hölder's inequality applied to x²; and the limit follows from the Itô-isometry. This proves the lemma for general adapted processes.

Using this lemma, we are done by setting A(t) = σ g_x(t, X(t)).
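Theorem 4.1 can be sanity-checked by Monte Carlo in a case where the PDE is solvable by hand. For the simplest SDE dX = dW (μ = 0, σ = 1) and h(x) = x², the solution of g_t + ½g_xx = 0 with g(T, x) = x² is g(t, x) = x² + (T − t); the sketch below (all parameter values are hypothetical choices) compares g(t₀, x₀) with a sampled estimate of E[h(X(T))].

```python
import numpy as np

# Feynman-Kac check for dX = dW with h(x) = x^2: Theorem 4.1 predicts
# E[h(X(T))] = g(t0, x0) = x0^2 + (T - t0).
rng = np.random.default_rng(7)
t0, T, x0, n_paths = 0.25, 1.0, 1.5, 1_000_000
# For dX = dW, X(T) = x0 + (W(T) - W(t0)) ~ N(x0, T - t0): no discretisation needed.
X_T = x0 + rng.normal(0.0, np.sqrt(T - t0), size=n_paths)
print(np.mean(X_T ** 2))   # Monte Carlo estimate of E[h(X(T))]
print(x0 ** 2 + (T - t0))  # g(t0, x0) = 3.0
```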

4.2 The Fokker-Planck equation

In a physical setting, one is indeed often interested in processes satisfying an equation of the following form:

$$dX(t) = \mu(X(t))\, dt + \sigma(X(t))\, dW(t), \qquad t \ge 0. \qquad (4.2)$$

We have seen that the Feynman-Kac formula yields that if there exists a g(t, x) such that

$$g_t(t, x) + \mu(x) g_x(t, x) + \frac{1}{2} \sigma^2(x) g_{xx}(t, x) = 0, \qquad (4.3)$$

with g(T, x) = h(x), then we find that g(t, x) = E[h(X(T)) | X(t) = x]. In the literature, this equation is also referred to as the Kolmogorov backward equation, as we go back from a terminal condition. This equation has a dual, called the Kolmogorov forward equation or the Fokker-Planck equation: the probability density p(t, x) (with p(0, x) given) of a random variable X(t) satisfying (4.2) satisfies the PDE

$$\frac{\partial}{\partial t} p(t, x) + \frac{\partial}{\partial x}\left[\mu(x)\, p(t, x)\right] - \frac{1}{2} \frac{\partial^2}{\partial x^2}\left[\sigma^2(x)\, p(t, x)\right] = 0. \qquad (4.4)$$

This is the form of the Fokker-Planck equation that is more commonly used in physical applications. A more precise statement is that the operators L and L′ given by

$$L[f] = \mu(x) f_x + \frac{1}{2} \sigma^2(x) f_{xx}, \qquad L'[g] = -[\mu(x) g]_x + \frac{1}{2} [\sigma^2(x) g]_{xx},$$

which act on a suitable subspace of L²(ℝ) (such that the derivatives exist and the resulting functions are also in L²(ℝ)), are adjoint operators.
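This adjointness can be verified numerically for particular coefficients. The sketch below uses μ(x) = −x and σ = 1 (hypothetical choices of ours) and compares ∫ L[f] g dx with ∫ f L′[g] dx for two rapidly decaying test functions, with derivatives taken by finite differences.

```python
import numpy as np

# Numerical check that L and L' are adjoint: int L[f] g dx = int f L'[g] dx
# for test functions that vanish at the boundary. Coefficients mu(x) = -x
# and sigma = 1 are hypothetical choices.
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
mu = -x                     # drift mu(x) = -x
sig2 = np.ones_like(x)      # sigma(x)^2 = 1
f = np.exp(-x ** 2)         # test functions decaying fast at the boundary
g = np.exp(-x ** 2 / 2)

Lf = mu * np.gradient(f, dx) + 0.5 * sig2 * np.gradient(np.gradient(f, dx), dx)
Lpg = -np.gradient(mu * g, dx) + 0.5 * np.gradient(np.gradient(sig2 * g, dx), dx)
lhs = np.sum(Lf * g) * dx   # int L[f] g dx
rhs = np.sum(f * Lpg) * dx  # int f L'[g] dx
print(lhs, rhs)  # the two values agree (both about 0.48 for these choices)
```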


5 Barrier crossing: Kramers' escape rate

In this chapter, we will look at a physical problem. We ended the previous chapter by rewriting the Feynman-Kac formula to its dual form, which is more readily applicable to physical problems. We will now study how to use this equation in the problem of barrier crossing, deriving an expression for the escape rate of particles in a potential well. We will compare this expression, which might seem quite incomprehensible at first, with another physical law, hoping to understand the result better. This chapter, and also chapters 6 and 7, are heavily based on [7].

5.1 Kramer’s escape rate

We will now introduce the problem of barrier crossing. We are given a potential with a barrier, see for example figure 5.1. The potential contains Brownian particles, which are just particle whose diffusion process is modelled by Brownian dynamics, that is, their equation of motion is dX(t) = − D kBT f0(x)dt + √ 2DdW (t). (5.1) as shown in the introductory chapter. However, one can show that we can assume without loss of generality

dX(t) = −f0(x)dt +√2DdW (t). (5.2) The relevant Fokker-Planck equation in this case is given by

∂p(t, x) ∂t = [ ∂ ∂xf 0 (x) + D ∂ ∂x2]p(t, x) = − ∂ ∂xS(t, x) (5.3) where f (x) denotes the potential, and D is the diffusion coefficient of the problem (When we said that (5.4) does not lose any generality, we meant that the Fokker-Planck equation corresponding to (5.3) can be transformed to (5.5) by a change of variables, see section 5.1 of [7]. There it is shown that you can transform the original Fokker-Planck equation to this form by a change of variables x0 = y(x), where now the second coefficient is an arbitrary constant and no longer x-dependent. Therefore, we can simply assume the diffusion coefficient to be a constant throughout the rest of this thesis).

Our goal is to get an expression for the escape rate for particles sitting near the bottom of the well. As we have said before, we can restrict ourselves to a constant diffusion, otherwise one could just transform the problem so that it is constant. We make one


further assumption, namely that Δf/D is large. If we do not make this assumption, we cannot get an analytic expression for the escape rate in general. With this assumption, the following simplification can be made: as D is very small relative to Δf, there must be almost no probability current around x_max, the x-coordinate of the potential peak. Furthermore, the probability density p(t, x) does not change much in time. Thus, the probability current S is approximately x-independent.

Therefore, using the Fokker-Planck equation with ∂S(t, x)/∂x = 0, we get

$$-f'(x)\, p(t, x) = D \frac{\partial}{\partial x} p(t, x).$$

By writing $-f'(x)\, p(t, x) = -\frac{f'(x)}{D}\, D p(t, x)$, we can treat this equation as a differential equation in D p(t, x), with solution

$$p(t, x) = \frac{N}{D}\, e^{\int -\frac{f'(x)}{D}\, dx} = \frac{N}{D}\, e^{-\frac{f(x)}{D}},$$

where N is simply the integration constant such that p(t, x) is a probability density. We can thus conclude that around x_min, still assuming a large barrier, the density is approximately the stationary density:

$$p(t, x) = \frac{N}{D}\, e^{-\frac{f(x) - f(x_{min}) + f(x_{min})}{D}} = p(t, x_{min})\, e^{-\frac{f(x) - f(x_{min})}{D}}. \qquad (5.4)$$

We can thus calculate the probability that a particle is somewhat near x_min. (More formally, we will find the probability that a particle is between two x-coordinates x_1 and x_2 that are close to x_min. As the probability density becomes small away from x_min when the barrier is large, it is not really necessary to specify the values of x_1 and x_2.) To do this, we simply integrate the probability density (5.4) from x_1 to x_2:

$$q := P(\text{particle near minimum}) = p(t, x_{min})\, e^{\frac{f(x_{min})}{D}} \int_{x_1}^{x_2} e^{-\frac{f(x)}{D}}\, dx.$$

We will now rewrite S to look more like the expressions we have found. Some algebraic manipulation starting from the definition of the probability current gives us:

$$S = -f'(x)\, p(t, x) - D \frac{\partial}{\partial x} p(t, x) = -D e^{-\frac{f(x)}{D}} \left[\frac{f'(x)}{D} e^{\frac{f(x)}{D}} p(t, x) + e^{\frac{f(x)}{D}} \frac{\partial}{\partial x} p(t, x)\right] = -D e^{-\frac{f(x)}{D}} \frac{\partial}{\partial x}\left[e^{\frac{f(x)}{D}} p(t, x)\right].$$

This expression for S, which is true in general, contains the exponential term that we have seen in the expression of the stationary distribution, and is better suited for the upcoming derivation. Integrating this result from x_min to a certain A (keeping in mind that S does not depend on x), we get

$$S \int_{x_{min}}^{A} e^{\frac{f(x)}{D}}\, dx = D\left[e^{\frac{f(x_{min})}{D}} p(t, x_{min}) - e^{\frac{f(A)}{D}} p(t, A)\right].$$

If A is suitably chosen so that the probability density around A is extremely small, we can reduce this to a new, approximate expression for S by simply setting p(t, A) = 0:

$$S = \frac{D\, e^{\frac{f(x_{min})}{D}}\, p(t, x_{min})}{\int_{x_{min}}^{A} e^{\frac{f(x)}{D}}\, dx}.$$

We now have all the ingredients to calculate the escape rate r, but what exactly is the escape rate anyway? The physical definition of the escape rate is very simple:

$$r q = S,$$

where S is the probability current and q still denotes the probability that a particle is around x_min. We have found expressions for both q and S, and substitution gives

$$\frac{1}{r} := \frac{q}{S} = \frac{1}{D} \int_{x_1}^{x_2} e^{-\frac{f(x)}{D}}\, dx \int_{x_{min}}^{A} e^{\frac{f(x)}{D}}\, dx.$$

This is still rather unwieldy, so we will simplify further. Even though the integrands in this expression are similar, they have a key difference: the first integrand gets its main contribution around x_min, as f(x) is minimal there, while the second gets its main contribution around x_max, as f(x) is maximal there.

We can expand f(x) around these points. Using that f'(x_min) = f'(x_max) = 0, the relevant Taylor expansions are

$$f(x) \approx f(x_{min}) + \frac{1}{2} f''(x_{min})(x - x_{min})^2, \qquad f(x) \approx f(x_{max}) - \frac{1}{2} |f''(x_{max})|(x - x_{max})^2$$

(note the absolute value in the second expression). Substituting and letting the integration boundaries go off to infinity gives

$$\frac{1}{r} \approx \frac{1}{D}\, e^{\frac{f(x_{max}) - f(x_{min})}{D}} \int_{-\infty}^{\infty} e^{-\frac{f''(x_{min})(x - x_{min})^2}{2D}}\, dx \int_{-\infty}^{\infty} e^{-\frac{|f''(x_{max})|(x - x_{max})^2}{2D}}\, dx.$$

If we now recall that $\int_{-\infty}^{\infty} e^{-a x^2}\, dx = \sqrt{\pi/a}$, we get

$$\frac{1}{r} \approx \frac{1}{D}\, e^{\frac{f(x_{max}) - f(x_{min})}{D}} \sqrt{\frac{2 D \pi}{f''(x_{min})}} \sqrt{\frac{2 D \pi}{|f''(x_{max})|}}$$


Figure 5.1: A generic potential (taken from [7], p. 123).

or

$$r_K \approx \frac{1}{2\pi} \sqrt{f''(x_{min})\, |f''(x_{max})|}\; e^{-\frac{f(x_{max}) - f(x_{min})}{D}}. \qquad (5.5)$$

This expression is called the Kramers escape rate (hence the subscript). This expression is only valid if f(x_max) − f(x_min) ≫ D, and could be improved upon by Taylor-expanding up to a higher order. This, however, complicates the exponential integrals and will therefore not be done in this thesis. Note that the escape rate increases as D increases, at least as long as the initial assumption that Δf/D ≫ 1 holds. There is an optimal regime where the escape rate is maximal.

We can gain physical intuition by comparing our expression for the escape rate with the Arrhenius law, an expression for chemical reaction rates. The Arrhenius law reads as follows:
$$r = A e^{-E_{\text{act}}/(k_B T)}$$
where $E_{\text{act}}$ is the activation energy and $T$ is the temperature. By focusing on the exponential factor in the formula, we can say that in our case the activation energy is given by $f(x_{\max}) - f(x_{\min})$, the height of the potential barrier, while $k_B T = D$: conceptually, the Fokker-Planck equation after the change of coordinates is the same as if we had originally had $D$ as the diffusion coefficient and $\gamma = 1$. This comparison makes it clearer which kinds of energies we are comparing when we say that $\Delta f / D$ should be large: the activation energy must be significantly larger than the thermal energy. This observation carries over directly to physical models of this potential problem. For example, if one models a macromolecule that can be in two states as a two-welled potential, one well for each state, the height of the barrier between the wells should be the activation energy of the conformational change.
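The quality of the saddle-point approximation behind (5.5) can be probed by evaluating the two integrals in the exact expression for $1/r$ numerically. The quartic potential $f(x) = x^2/2 - x^4/4$ and all parameter values in the following sketch are illustrative assumptions, not taken from this thesis:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative metastable potential: minimum at x_min = 0, barrier top at x_max = 1,
# so f''(x_min) = 1, |f''(x_max)| = 2 and the barrier height is 1/4.
D = 0.02                                   # diffusion coefficient, small vs. the barrier
f = lambda x: x**2 / 2 - x**4 / 4
barrier = f(1.0) - f(0.0)

# The two integrals in 1/r = (1/D) * (well integral) * (barrier integral),
# evaluated before any Taylor expansion.
well, _ = quad(lambda x: np.exp(-f(x) / D), -0.8, 0.8)   # peaked around x_min
hill, _ = quad(lambda x: np.exp(f(x) / D), 0.0, 1.4)     # peaked around x_max
inv_r_exact = well * hill / D

# Kramers escape rate (5.5) for the same potential.
r_K = np.sqrt(1.0 * 2.0) / (2 * np.pi) * np.exp(-barrier / D)

ratio = inv_r_exact * r_K                  # (1/r_exact) / (1/r_K)
print(ratio)
```

With barrier/$D = 12.5$ the ratio should come out close to 1, illustrating that the terms neglected in the Taylor expansion only contribute corrections of order $D$.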


6 Barrier crossing: Eigenvalue problems

In this chapter, we will look at the eigenvalue problem for the Fokker-Planck operator:
$$L_{FP}[\varphi(x)] := \left[\frac{\partial}{\partial x} f'(x) + D \frac{\partial^2}{\partial x^2}\right] \varphi(x) = \lambda \varphi(x)$$

We introduce $\varphi(x)$ to denote eigenfunctions, in contrast to $p(x)$, which denotes an arbitrary probability density. Our immediate goal is to find a way to approximate the lowest eigenvalue for the metastable and the bistable potential, and to see whether it agrees with physical intuition.

6.1 Preliminary considerations

The strategy for approximating the first non-zero eigenvalue is similar for the metastable and the bistable potentials. We have to be careful about what we mean by the first non-zero eigenvalue, but we have the following result:

Theorem 6.1. If $\varphi(x)$ is a solution to the eigenvalue problem $\left[\frac{\partial}{\partial x} f'(x) + D \frac{\partial^2}{\partial x^2}\right]\varphi(x) = \lambda \varphi(x)$, then $\lambda$ is non-positive.

We will just give a sketch of the proof, without going into too much detail about some of the more unwieldy parts. (In [7], an additional change is made to the definition of the eigenvalue problem so that the eigenvalues are non-negative. Even though this has its benefits, we do not do this here for simplicity.)

Proof. First, a very straightforward calculation shows the following alternative formula for the Fokker-Planck operator in our problem:
$$L_{FP} = \frac{\partial}{\partial x}\, D e^{-f(x)/D}\, \frac{\partial}{\partial x}\, e^{f(x)/D} \tag{6.1}$$

Using this alternative form, one can check that $L := e^{f(x)/(2D)}\, L_{FP}\, e^{-f(x)/(2D)}$ is a Hermitian operator (assuming natural boundary conditions such that the boundary terms in partial integration vanish). It is now clear that if $\varphi(x)$ is an eigenfunction of $L_{FP}$ with eigenvalue $\lambda$, then $\psi = e^{f(x)/(2D)} \varphi(x)$ is an eigenfunction of $L$

with eigenvalue $\lambda$. One can easily see the following (for normalized $\psi$):
$$\int \varphi\, e^{f(x)/D}\, L_{FP}\varphi\, dx = \int \psi L \psi\, dx = \lambda$$
However, we can calculate the left-hand side in another manner, using (6.1) once more:
$$\int \varphi\, e^{f(x)/D}\, L_{FP}\varphi\, dx = \int \varphi\, e^{f(x)/D}\, \frac{\partial}{\partial x} D e^{-f(x)/D} \frac{\partial}{\partial x} e^{f(x)/D} \varphi\, dx = -\int \left(\frac{\partial}{\partial x}\, e^{f(x)/D}\varphi\right)^2 D e^{-f(x)/D}\, dx,$$
where the last step is partial integration. The final expression is minus the integral of a non-negative function, so it is non-positive. But this quantity also equals $\lambda$, so every eigenvalue is non-positive.

One can actually see from the argument above that $\lambda$ is zero only for $\varphi_0(x) = e^{-f(x)/D}$.
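The operator identity (6.1), on which the proof rests, can also be verified symbolically. The following sketch checks it with sympy for a generic potential $f$ and test function $\varphi$:

```python
import sympy as sp

x = sp.symbols('x')
D = sp.symbols('D', positive=True)
f = sp.Function('f')(x)      # generic potential
phi = sp.Function('phi')(x)  # generic test function

# L_FP phi as defined in this chapter: d/dx (f'(x) phi) + D phi''
lhs = sp.diff(sp.diff(f, x) * phi, x) + D * sp.diff(phi, x, 2)

# Factorized form (6.1): d/dx [ D e^{-f/D} d/dx ( e^{f/D} phi ) ]
rhs = sp.diff(D * sp.exp(-f / D) * sp.diff(sp.exp(f / D) * phi, x), x)

print(sp.simplify(lhs - rhs))  # 0: the two forms agree identically
```

The difference simplifies to zero because the exponential factors cancel after carrying out the product rule, exactly as in the straightforward calculation mentioned in the proof.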

Keeping this theorem in mind, when we speak about the first eigenvalue we will mean the non-zero eigenvalue with the smallest absolute value.

6.2 The metastable potential

We will now try to find the first eigenvalue for the symmetric metastable potential, as seen in figure 6.1. Metastability is very interesting in many ways. For example, the theory of nuclear isomers revolves around metastable states different from the ground state of a nucleus (such that some protons or neutrons are in a higher energy state). These states can persist for long periods of time before eventually dropping back to the ground state, even though they are energetically unfavourable. In the next chapter, we will discuss the first-passage time of the simple metastable potential, giving an indication of how long it would take to get back into the ground state. Of course, the symmetric metastable potential is probably not the right model for these problems, but its simplicity makes it a good object of study. In this chapter, we will analyse the eigenvalues of the Fokker-Planck operator, which are always of interest when considering partial differential equations.

Using (6.1), the current problem is solving
$$D\, \frac{\partial}{\partial x}\, e^{-f(x)/D}\, \frac{\partial}{\partial x}\, e^{f(x)/D} \varphi_1 = \lambda_1 \varphi_1 \tag{6.2}$$
This alternative formulation of the eigenvalue problem has the added benefit that it can be transformed into an integral equation with relative ease. Integrating once with respect to $x$, we get
$$D\, \frac{\partial}{\partial x}\, e^{f(x)/D} \varphi_1(x) = \lambda_1\, e^{f(x)/D} \int_0^x \varphi_1(z)\, dz.$$


[Figure 6.1: The symmetric metastable potential (taken from [7], p. 125)]

Here, we need that $f'(0) = 0$ and that $\varphi_1'(0) = 0$. The first equality is clearly true. For physical reasons, the lowest eigenstate of the metastable potential is symmetric, implying the second equality. Now, we integrate another time and get

$$\varphi_1(x) = e^{-f(x)/D} \left( \frac{\lambda_1}{D} \int_0^x e^{f(y)/D} \left[ \int_0^y \varphi_1(z)\, dz \right] dy - e^{f(0)/D} \varphi_1(0) \right) \tag{6.3}$$
Given also the boundary condition $\varphi_1(A) = 0$, we can approximate the first eigenvalue by the following iteration (still assuming a small diffusion coefficient):

• As a first approximation, we let $\lambda_1^{(0)} = 0$ and (by (6.3)) $\varphi_1^{(0)}(x) = -e^{-f(x)/D}\, e^{f(0)/D} \varphi_1(0)$.

• Inserting this approximation into (6.3) yields
$$\varphi_1^{(1)}(x) = -e^{-f(x)/D}\, e^{f(0)/D} \varphi_1(0) \left( \frac{\lambda_1}{D} \int_0^x e^{f(y)/D} \left[ \int_0^y e^{-f(z)/D}\, dz \right] dy + 1 \right) \tag{6.4}$$
As $\varphi_1(A) = 0$, this yields the following value for the eigenvalue:
$$\lambda_1^{(1)} = -\frac{D}{\int_0^A e^{f(y)/D} \left[ \int_0^y e^{-f(z)/D}\, dz \right] dy} \tag{6.5}$$

To evaluate this expression analytically, it once again pays off to realize that $f(y) - f(z)$ is greatest for $y = x_{\max}$ and $z = x_{\min} = 0$. Thus, just like in the last chapter, one simply expands $f(y)$ around $x_{\max}$ and $f(z)$ around $0$, and then evaluates the integral analytically after letting the integration bounds for $z$ run from $0$ to $\infty$, while the integration bounds for $y$ run from $-\infty$ to $\infty$. As a result we get the same expression as in the last chapter (just twice as big, as we only integrate $z$ from $0$ to infinity) with an extra minus sign:

$$\lambda_1^{(1)} \approx -2 r_K. \tag{6.6}$$

This reflects the fact that the metastable potential has two walls instead of one. If one were interested in continuing this process, one would just need to substitute the $\lambda_1$ given by (6.6) into (6.4), yielding a better approximation for the eigenfunction.


One could then find the new eigenvalue by using boundary conditions once more. The solutions do not get any more elegant, so this will not be done here.

The bistable potential will not get a full treatment here, as it is very similar to the metastable potential. The resulting approximation for λ1 turns out to be the same.


7 First-passage time problems

As a third example of a physical use of the Fokker-Planck equation, we will consider the problem of calculating the first-passage time for a metastable potential. Formally, we define the first passage time to be the time at which a particle reaches a boundary. As we said before, this is often also the time at which the particle escapes a well. Of course, we are searching for the expected first passage time, as the equation of motion of our particles has a random component. As it turns out, solving for the expected value (or even higher moments) of the first passage time is particularly nice, as this yields a differential equation independent of $t$.

7.1 Setup

We are interested in the probability density $P(x, t \mid x_0, 0)$: the probability density of a particle being at position $x$ at time $t$, given that it is at position $x_0$ at time $0$. Suppose the boundaries are at $x_1$ and $x_2$, with $x_1 < x_2$, and that the boundaries are absorbing. Then we do not care about the probability density for $x \geq x_2$ or $x \leq x_1$. To express this mathematically, we simply say that $P(x, t \mid x_0, 0) = 0$ for these values of $x$. Now, using that the probability density satisfies the Fokker-Planck equation, we get

$$\dot{P} = L_{FP}(x) P, \qquad P(x, 0 \mid x_0, 0) = \delta(x - x_0) \tag{7.1}$$
which is valid for $x_1 < x < x_2$. Together with the boundary conditions $P(x_i, t \mid x_0, 0) = 0$ for $i = 1, 2$, we have fully specified the relevant system of equations for $P(x, t \mid x_0, 0)$. We will make some extra definitions. We define $W(x_0, t)$ as the probability that the boundaries are not reached before time $t$, given that the particle starts at position $x = x_0$ at time $0$. That is,

$$W(x_0, t) = \int_{x_1}^{x_2} P(x, t \mid x_0, 0)\, dx.$$
Viewing the first passage time as a random variable, we can compute its density function. We must be careful, however, as $W$ is decreasing in time, contrary to an ordinary distribution function. Therefore, the density function in this case is
$$w(x_0, T) = -\frac{dW}{dT} = -\int_{x_1}^{x_2} \dot{P}(x, T \mid x_0, 0)\, dx$$


We are mainly interested in the expected value of the first passage time, but in general we can write
$$\int_0^\infty T^n w(x_0, T)\, dT = \int_{x_1}^{x_2} f_n(x, x_0)\, dx, \qquad \text{where } f_n(x, x_0) = -\int_0^\infty T^n \dot{P}(x, T \mid x_0, 0)\, dT.$$
The expected value is given by setting $n = 1$ in the integrals. By looking at the problem through the moments of the first passage time, we have removed the time component of the problem.

We can solve for $f_n(x, x_0)$ through a system of equations. By manipulating the expression for $f_n$ for $n \geq 1$ through partial integration, we get
$$f_n(x, x_0) = \left[ -T^n P(x, T \mid x_0, 0) \right]_{T=0}^{T=\infty} + n \int_0^\infty T^{n-1} P(x, T \mid x_0, 0)\, dT.$$
But $P(x, \infty \mid x_0, 0) = 0$, so we get
$$f_n(x, x_0) = n \int_0^\infty T^{n-1} P(x, T \mid x_0, 0)\, dT \tag{7.2}$$
Using that $L_{FP} P = \dot{P}$ and the definition of $f_n$, we get the following result by applying $L_{FP}$ to (7.2):
$$L_{FP}(x) f_n(x, x_0) = -n f_{n-1}(x, x_0) \tag{7.3}$$

This, together with
$$f_0(x, x_0) = -\int_0^\infty \dot{P}(x, T \mid x_0, 0)\, dT = P(x, 0 \mid x_0, 0) = \delta(x - x_0), \tag{7.4}$$
yields an infinite set of equations.

7.2 Mean first passage time for the metastable potential

Let us study the metastable potential once more. We will find the mean first passage time $T$ for a particle starting at $0$ to leave the domain $|x| < A$. Here, $A$ is once again the absorbing wall (it therefore has the exact same meaning as in the previous chapter, where we derived the first eigenvalue). As we have seen, we have the following equations:
$$E[T \mid x(0) = x_0] = \int_{-A}^{A} f_1(x, x_0)\, dx \tag{7.5}$$
$$L_{FP}(x) f_1(x, x_0) = -\delta(x - x_0) \tag{7.6}$$
$$f_1(A, x_0) = f_1(-A, x_0) = 0 \tag{7.7}$$

As the metastable potential is a symmetric potential, and the diffusion coefficient $D$ is constant throughout this thesis, we can see that both $f_1(x, 0)$ and $T(x_0)$ are symmetric functions. In particular, the first derivative of $f_1(x, 0)$ is zero at $x = 0$.


As it turns out, we can easily solve for $f_1(x, 0)$ in this case. To do this, we recall formula (6.1) to turn the problem into solving the equation
$$\frac{\partial}{\partial x}\, D e^{-f(x)/D}\, \frac{\partial}{\partial x}\, e^{f(x)/D} f_1(x, 0) = -\delta(x) \tag{7.8}$$
where $f$ denotes the potential function. Integrating a few times gets us
$$f_1(x, 0) = D^{-1} e^{-f(x)/D} \int_x^A e^{f(y)/D} \left[ \int_0^y \delta(z)\, dz \right] dy.$$

We want a nicer answer for the mean first passage time, but this means that we have to make an approximation. To get further, we replace the $\delta$-function in the final integral by a very sharp symmetric function of finite height. With this simplification, $\int_0^y \delta(z)\, dz = \frac{1}{2}$ holds by symmetry. Now, let us use (7.5) to get

$$E[T \mid x(0) = 0] = \int_{-A}^{A} f_1(x, 0)\, dx = 2 \int_0^A f_1(x, 0)\, dx = D^{-1} \int_0^A e^{-f(x)/D} \left[ \int_x^A e^{f(y)/D}\, dy \right] dx,$$
where we used the symmetry of $f_1$ once more. This might not seem like much of an

improvement, but comparing this expression with equation (6.5) (after swapping the order of integration) immediately yields
$$E[T \mid x(0) = 0] = -\frac{1}{\lambda_1^{(1)}} = \frac{1}{2 r_K}, \tag{7.9}$$

which finally finishes the derivation of the mean first passage time for this potential. Note in particular that, because the expected time to leave the well is not infinite, the particle will almost certainly get out of the potential well. This is in sharp contrast with the more naive model without the random component, where a particle at the bottom of the well never moves at all.
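The result (7.9) can also be compared against a direct simulation of the underlying Langevin dynamics, using an Euler-Maruyama discretization of $dX = -f'(X)\,dt + \sqrt{2D}\,dW$. The potential $f(x) = x^2/2 - x^4/4$ and every parameter value below are illustrative assumptions for the experiment, not taken from this thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
D, A, dt, n_traj = 0.06, 1.4, 2e-3, 400

x = np.zeros(n_traj)                  # all particles start at the well bottom x = 0
t_esc = np.zeros(n_traj)              # recorded first passage times
alive = np.ones(n_traj, dtype=bool)
step = 0
while alive.any() and step < 2_000_000:
    xa = x[alive]
    # Euler-Maruyama step: drift -f'(x) = -(x - x^3), noise sqrt(2 D dt) N(0,1)
    x[alive] = xa - (xa - xa**3) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(xa.size)
    step += 1
    escaped = alive & (np.abs(x) >= A)
    t_esc[escaped] = step * dt
    alive &= ~escaped

r_K = np.sqrt(2.0) / (2 * np.pi) * np.exp(-0.25 / D)
print(t_esc.mean() * 2 * r_K)         # of order 1: mean escape time ≈ 1/(2 r_K)
```

Every simulated trajectory escapes in finite time, matching the observation above; the sample mean agrees with $1/(2 r_K)$ up to $O(D)$ corrections and statistical error.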


8 Conclusion

As said in the introduction, the main goal of this thesis was to derive the Fokker-Planck equation and to use it to derive physical results. We have mostly succeeded in the first part. The actual Fokker-Planck equation says that the probability density of a process satisfying
$$dX_t = \mu(X_t, t)\, dt + \sigma(X_t, t)\, dW_t$$
satisfies the partial differential equation
$$\frac{\partial}{\partial t} p(t, x) = -\frac{\partial}{\partial x}\left[\mu(x, t)\, p(t, x)\right] + \frac{\partial^2}{\partial x^2}\left[\frac{\sigma^2(x, t)}{2}\, p(t, x)\right].$$

The only difference between this equation and what we have done is that we do not allow for explicitly time-dependent coefficients $\mu$ and $\sigma$. This is not done here because the derivation of the dual Feynman-Kac formula in that case involves harder mathematics, like the general definition of conditional expectations. This is still part of the current proof, but to a lesser extent. Furthermore, this version was not needed for the physical problems surrounding metastable potentials that we studied. We have therefore decided to derive only the relevant version of the Fokker-Planck equation. Because of this, the derivation, given the Itô formula, is relatively elementary compared to the derivation of the full formula. We also have not derived the Itô formula, as its proof is far too involved. Instead we focused on the definition and construction of the Itô integral. We have tried to at least give a sketch of an argument for why the Itô formula is believable; the interested reader can follow the reference [5] for a more explicit sketch, and the source given in the relevant chapter for a full proof. Please note that the Feynman-Kac result is also interesting in its own right, which is why we used this route to find the Fokker-Planck equation instead of a more direct approach.

Our second goal was to apply the Fokker-Planck equation in physical situations. For this goal, we concentrated on the problem of a particle in a metastable well, even though we derived Kramers' escape rate for a more general potential. You might have noticed that the final expression of every chapter about these potential problems involves Kramers' escape rate. Indeed, this is the case in many similar questions about these kinds of potentials, and it shows how fundamental this expression is in this area of physics. The derivation of this expression through the Fokker-Planck equation can therefore be seen as the main physical result of this thesis. However, the implications of the Fokker-Planck equation in physics are far wider than these potential problems alone, and studying them would have been the logical next step if one had the time and space to do so. For a good reference, one can read [7], which has been the main source for these final few chapters


but also contains many of these other relevant problems, as well as a more direct derivation of the Fokker-Planck equation.

Finally, I want to explicitly mention some applications of the study of metastable potentials with Brownian noise, in case one is not yet convinced of the physical applications. One has already been mentioned, namely conformational changes of macromolecules. Spontaneous protein activation or protein folding is in a similar vein and is also a good example. But not all applications have to be biological or chemical, as the concept of white noise is very important in dynamical systems with feedback, which has its applications in electrical circuits or even in neuroscience (white noise is the $R(t)$-term in the very beginning of the introduction). Brownian motion itself has interested well-known physicists like Einstein. To learn more about the history of the development of theories around this phenomenon, which we hope to have demonstrated to be very interesting, including the birth of the Langevin equation, please refer to the recent preprint [8].


Bibliography

[1] Benjamin Schüller, Alex Meistrenko, Hendrik van Hees, Zhe Xu, and Carsten Greiner. Kramers' escape rate problem within a non-Markovian description. Annals of Physics, 412:168045, 2020.

[2] L. C. G. Rogers and David Williams. Diffusions, Markov Processes and Martingales, Volume 1: Foundations. Chichester: Wiley, 1994.

[3] Ioannis Karatzas and Steven Shreve. Brownian Motion and Stochastic Calculus, volume 113. Springer Science & Business Media, 1991.

[4] L. C. G. Rogers and David Williams. Diffusions, Markov Processes and Martingales, Volume 2: Itô Calculus. Cambridge University Press, 2000.

[5] Steven E. Shreve. Stochastic Calculus for Finance II: Continuous-Time Models, volume 11. Springer Science & Business Media, 2004.

[6] Peter Spreij. https://staff.fnwi.uva.nl/s.g.cox/mtp_2016.pdf, 2016.

[7] Hannes Risken. The Fokker-Planck Equation. Springer, 1996.

[8] Arthur Genthon. The concept of velocity in the history of Brownian motion: from physics to mathematics and vice versa. arXiv preprint arXiv:2006.05399, 2020.


Popular summary

Every first course in classical mechanics teaches Newton's second law: $F = ma$, the net force on a particle is its mass times its acceleration. Later, one learns that in a problem with a potential this net force is minus the derivative of the potential with respect to position: $F = -\frac{\partial}{\partial x} V(x)$. This equation is valid for an isolated particle in vacuum. Sadly (or luckily, depending on your interests), almost no system in actual nature is an isolated particle in vacuum. Models that take this into consideration are far more difficult, and there are many things to consider. In this thesis, we consider a version of the model of Langevin dynamics, where one considers drag forces (think air resistance) but also, more interestingly, perturbations like high-velocity collisions with other particles. In contrast to drag forces (which just depend on the particle velocity), these perturbations are random!

In this thesis, we study the effects of these random perturbations on the resulting physics. For this, we first need a way to deal with our new equation of motion. Every physics student is probably reasonably proficient in solving differential equations, or at least in deriving properties of their solutions, but our equation of motion contains a random component, which is not seen in a standard undergraduate physics curriculum. Due to the random component, solving for individual particles is not the way to go. However, our main mathematical result shows that we can solve a non-random partial differential equation for probability densities to know how ensembles of particles evolve in this system! This equation is known as the Fokker-Planck equation and has many uses in this area of physics.

We only use this information to study the problem of particles in (metastable) potential wells. This is a problem that shows why Langevin dynamics is more interesting than "normal" Newtonian dynamics. A particle at the bottom of a well cannot move in Newtonian dynamics, while in Langevin dynamics the random forces can push the particle out of the well. Indeed, one of the results in this thesis is that such a particle has a finite expected escape time (or first-passage time past a point outside the well), and thus must eventually escape with probability one. Further relevant questions are addressed, like the rate at which particles escape a well, and the eigenfunctions and eigenvalues of the Fokker-Planck equation. The rate at which particles escape a well is called Kramers' escape rate and turns out to be essential to all the other problems studied here.
