
Various problems concerning random walks

Farrokh Labib

July 16, 2015

Bachelor thesis

Supervisors: prof. dr. Sindo Nunez Queija and dr. Theo Nieuwenhuizen

Korteweg-de Vries Instituut voor Wiskunde


Abstract

We will look at a construction of a random walk on R² different from the one defined on a lattice. Then we will look at the construction of Brownian motion as the rescaling limit of this random walk. As original work, we will find the distribution of the supremum of Brownian motion with drift using queueing theory. Then we will look at the distribution of the range of Brownian motion, whose exact density function was found by W. Feller [1]. Finally, we will find an approximation for the probability that a symmetric random walk visits s distinct sites in t steps, using a paper of Th. M. Nieuwenhuizen [2].

Title: Various problems concerning random walks

Author: Farrokh Labib, Farrokh.Labib@student.uva.nl, 10329943
Supervisors: prof. dr. Sindo Nunez Queija and dr. Theo Nieuwenhuizen
Date: July 16, 2015

Korteweg-de Vries Instituut voor Wiskunde
Universiteit van Amsterdam

Science Park 904, 1098 XH Amsterdam
http://www.science.uva.nl/math


Contents

1. Introduction
2. Brownian motion
   2.1. A construction of Brownian motion in R²
   2.2. The supremum of Brownian motion with drift
        2.2.1. Results in queueing theory
        2.2.2. Doing the calculation
   2.3. Visiting a set of measure r in a time t by Brownian motion on R
3. Visiting s distinct sites in t steps
   3.1. Main idea
   3.2. Master equation for the diffusion of a random walker in 3D with traps
   3.3. A Green's function
   3.4. Representation of G(E) with Gaussian integrals and averaging over configurations of random traps
   3.5. Improving the approximation
   3.6. Survival probability and p(s, t)
4. Conclusion
Bibliography
A. Gaussian integrals
B. Popular summary


1. Introduction

A random walk in 1D is a random process where a walker tosses a coin to decide whether to go one step to the left or one step to the right; here we use Z as the lattice. One can generalize this process to arbitrary dimension on Z^d. Even though this model is simple to formulate, random walks are important mathematical models which are widely used in physics, chemistry and biology. For example, random walks are extensively used to model polymer configurations. A polymer is a substance made up of chains of repeating molecules. The chains are flexible, so every molecule can orient at a random angle in the plane, which makes simulating polymers with random walks a common practice. For an application in physics, one can think of the movement of particles in a gas. Since these particles are small, one is motivated to define a rescaled random walk and look at the limit process of this rescaling. This limit process is known in the literature as Brownian motion or a diffusion process. For more applications I refer to [3].

The subject of random walks is very rich: we can easily add new ingredients to the model and still obtain very interesting processes. One of these ingredients is the concept of traps. We randomly distribute traps on our lattice and look at how long the walker survives the random walk without getting trapped. An example where this variation of the simple random walk pops up naturally is photosynthesis. During this process a photon is absorbed by a chlorophyll molecule, creating an exciton. This exciton moves by a random walk to neighbouring chlorophyll molecules until it reaches a trap; there it triggers the production of an oxygen molecule and the exciton disappears. More examples can be found in [3]. Such applications in various fields of science make random walks an interesting subject that is worth researching.

We will see in this thesis that queueing theory can be used to calculate quantities concerning Brownian motion. In queueing theory we analyze certain models of queues and try to calculate important quantities for them, for example the mean waiting time of a customer. One finds applications of queueing theory everywhere, because queues are everywhere. Formally, a queue consists of a flow of customers and one or more servers that have to serve those customers.


For example, we can assume that the customers arrive according to a Poisson process of rate λ > 0 and that they all bring a fixed amount of work, say D. This is the M/D/1 queue, about which a lot is known. In this thesis we will use this specific queue to find the distribution of the supremum of Brownian motion with drift.

In this thesis we will explore some problems on random walks. In the first chapter we construct a random walk on the plane different from the one defined on a lattice: the walker chooses an angle α uniformly in [0, 2π) and makes a step of unit length in that direction, and then repeats this process. After this construction we will rescale the process to obtain Brownian motion, a continuous version of a random walk on R².

Further in the same chapter, we will find the distribution function of the supremum of Brownian motion with drift; we assume this process has started at t = −∞ so that the process is in equilibrium. This calculation will be done using tools of queueing theory, specifically the theory of the M/D/1 queue. After this we will consider the following problem: what is the probability distribution of the range of Brownian motion? The range of Brownian motion is defined as R(T) = M(T) + |m(T)|, where M(T) := sup_{0≤s≤T} B(s) and m(T) := inf_{0≤s≤T} B(s). We will use the paper of W. Feller [1], where he answers exactly this question.

Then in the following chapter, we will look at a method for finding an approximation of the probability of visiting s distinct sites in t steps for a symmetric random walk on a cubic lattice, following the paper written by Th. M. Nieuwenhuizen [2]. A short description of the method is as follows. We consider random walks with traps, meaning we distribute traps on the lattice with a concentration 0 < c < 1. The walker can get trapped with rate V_r, where r stands for the position of the random trap. Then the probability that the walker survives t steps can be written as a sum over all random walks w_t of t steps,

\[
\Psi(t) = z^{-t} \sum_{w_t} (1-c)^{s(w_t)},
\]

where s(w_t) is the number of distinct sites visited by the random walk w_t. This sum can be rewritten as

\[
\Psi(t) = \sum_{s=0}^{t} p(s,t)\,(1-c)^s,
\]

where the coefficients p(s, t) are the probabilities that the random walk w_t visits s distinct sites. Thus we need to find this survival probability, which will be done using methods from field theory.


2. Brownian motion

2.1. A construction of Brownian motion in R²

Inspired by the random movement of particles in the physical world, we construct the following random walk on R²: a particle starts at the center of a circle of radius 1 and moves to a point on the circle by choosing an angle uniformly in [0, 2π); this is repeated at every step.
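This construction is easy to simulate. Below is a minimal sketch (NumPy assumed; the function name and step counts are my own illustration, not part of the thesis): it generates a path of unit steps in uniformly random directions and prints the rescaled endpoint X^c_1 = c^{-1/2} X_{[c]} for growing c, which by the results of this section should behave like a sample from N(0, ½I).

```python
import numpy as np

rng = np.random.default_rng(0)

def walk_r2(n_steps):
    """Random walk on R^2: each step has unit length in a uniform direction."""
    angles = rng.uniform(0.0, 2.0 * np.pi, size=n_steps)
    steps = np.column_stack((np.cos(angles), np.sin(angles)))
    return np.vstack(([0.0, 0.0], np.cumsum(steps, axis=0)))

# Rescaled endpoint at time t = 1: X^c_1 = X_{[c]} / sqrt(c).
for c in (10, 100, 10_000):
    print(c, walk_r2(c)[-1] / np.sqrt(c))
```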

Formally, a single step is the 2-dimensional random vector

\[
Y = \begin{pmatrix} \cos\alpha \\ \sin\alpha \end{pmatrix}, \tag{2.1}
\]

where α is uniformly distributed in the interval [0, 2π). The covariance matrix is

\[
\Sigma = E(YY^T) = \begin{pmatrix} E(\cos^2\alpha) & E(\cos\alpha\sin\alpha) \\ E(\cos\alpha\sin\alpha) & E(\sin^2\alpha) \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \tag{2.2}
\]

So we have a random process (X_n)_{n∈N} ⊂ R² with X_{n+1} = X_n + Y_n, where Y_n ∼ Y and X_0 = 0 with probability 1. Here ∼ denotes equality of distributions. Since it is very intuitive that infinitesimally small particles can take infinitesimally small steps, and thus arbitrarily many steps, I will now rescale the process. Define the rescaled process as

\[
X^c_t = \frac{1}{\sqrt c}\, X_{[ct]}, \tag{2.3}
\]

where now t ∈ [0, ∞) and [x] denotes the floor function. Later it will become clear why space is rescaled with a square root while time is rescaled linearly in c. It is now time to find the distribution of the increments of the limit process, i.e. the distribution of

\[
\lim_{c\to\infty}\left(X^c_{t_2} - X^c_{t_1}\right), \tag{2.4}
\]

where 0 ≤ t₁ < t₂ are two times. For this we need a powerful tool from probability theory: the multi-dimensional version of the central limit theorem. We state it here without proof [4].


Theorem 1 (Central Limit Theorem, CLT). Let (X_i)_{i∈N} be a sequence of n-dimensional i.i.d. random vectors with mean vector μ and covariance matrix Σ, and let X̄_k = k^{-1}(X_1 + ⋯ + X_k) denote the sample mean. Then as k → ∞,

\[
\sqrt{k}\,\big(\bar{X}_k - \mu\big) \to N(0, \Sigma) \tag{2.5}
\]

in distribution.

Theorem 2. The distribution of the increments of the rescaled random walk constructed above converges to N(0, ½(t₂ − t₁)I), i.e.

\[
X^c_{t_2} - X^c_{t_1} \;\to\; N\!\left(0, \tfrac{1}{2}(t_2 - t_1)\, I\right) \quad \text{as } c \to \infty \tag{2.6}
\]

in distribution.

Proof. We begin by calculating the increments for two given times 0 ≤ t₁ < t₂. First define N(c) := [ct₂] − [ct₁]. Then, using the notation introduced earlier,

\[
X^c_{t_2} - X^c_{t_1} = \frac{1}{\sqrt c}\left(X_{[ct_2]} - X_{[ct_1]}\right)
= \frac{1}{\sqrt c}\sum_{n=[ct_1]+1}^{[ct_2]} Y_n
\sim \frac{1}{\sqrt c}\left(Y_1 + Y_2 + \cdots + Y_{N(c)}\right)
= \left(\frac{\sqrt{N(c)}}{\sqrt c}\right)\left(\frac{1}{\sqrt{N(c)}}\big(Y_1 + \cdots + Y_{N(c)}\big)\right).
\]

For the second factor in the product we can use the CLT, so that N(c)^{-1/2}(Y₁ + ⋯ + Y_{N(c)}) → N(0, ½I) as c → ∞ in distribution, and by elementary analysis c^{-1}N(c) → t₂ − t₁ as c → ∞. The result follows by theorem 1.

Note that it was important that space is rescaled with the square root of the time scaling: in the last line of the calculation we see the factor √(N(c))/√c, which converges to √(t₂ − t₁), so the limiting variance picks up a factor t₂ − t₁. If we treated time and space equally, this limit would be zero, and we would only obtain the process X(t) = 0 in the limit.

Attentive readers will have noticed that the proof did not use any special property of this specific random walk; only its covariance matrix was needed. This means we can mimic this proof for arbitrary discrete random walks in arbitrary dimension. In particular we can do it in dimension 1, i.e. for random walks on Z. The limit of the rescaled process in dimension 1 is called Brownian motion in the mathematical literature.

Definition 1 (Brownian motion in R). We call B = {B(t) : t ∈ [0, ∞)} a standard Brownian motion in R if the following set of assumptions holds.


(1) B(0) = 0 with probability 1.

(2) B has stationary increments, i.e. for all s, t ∈ [0, ∞) with s < t the distribution of B(t) − B(s) is the same as that of B(t − s).

(3) B has independent increments, i.e. for all times 0 < t₁ < t₂ < ⋯ < t_n the random variables B(t₁), B(t₂) − B(t₁), ..., B(t_n) − B(t_{n−1}) are independent.

(4) B(t) is normally distributed with mean 0 and variance t.

(5) t ↦ B(t) is continuous with probability 1.

In this thesis we will not go through the proof of the existence of Brownian motion (Wiener's theorem [4]). There is also a notion of Brownian motion with drift.

Definition 2 (Brownian motion with drift). Let B = {B(t) : t ∈ [0, ∞)} be standard Brownian motion. Then X = {X(t) = μt + σB(t) : t ∈ [0, ∞)} is called Brownian motion with drift μ (and variance parameter σ²).


2.2. The supremum of Brownian motion with drift

In this section we will find the distribution of the supremum of Brownian motion with drift in R. This means finding the distribution of sup_{s≤t}(B(t) − B(s) − C(t − s)), where C > 0. We assume the process was started at t = −∞, so that this supremum is independent of t. We will use queueing theory to find the result, and we first go through some results in queueing theory which will be needed later on.

2.2.1. Results in queueing theory

We consider a Poisson process of intensity λ > 0, which models customers arriving at a queue. Let (χ_i)_{i∈N} be a sequence of i.i.d. random variables; χ_i models the work the i-th customer brings. At the end of the queue there is a service node.

We will now define the work of this process when there is only one server.

Definition 3. Let N(t) be a Poisson process of intensity λ > 0 and let (χ_i)_{i∈N} be a sequence of non-negative i.i.d. random variables. Then the work is defined as

\[
\eta(t) = \sum_{i=1}^{N(t)} \chi_i - \int_0^t L(\eta(u))\,du, \tag{2.7}
\]

where L(x) = 1 if x > 0 and L(x) = 0 otherwise.

(In a plot of η, time runs along the horizontal axis and the amount of work along the vertical axis: the work jumps up at every arrival and drains linearly in between.)
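A small discretized simulation of this work process may help fix ideas. It is a sketch under my own parameter choices (arrival rate lam and a deterministic amount of work per customer), not code from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def workload(lam, work, t_end, dt=1e-3):
    """Work eta(t) of Definition 3 with deterministic chi_i = work:
    jumps by `work` at Poisson(lam) arrivals, drains at rate 1 while positive."""
    n = int(t_end / dt)
    eta = np.zeros(n)
    for i in range(1, n):
        arrivals = rng.poisson(lam * dt)  # almost always 0 or 1 for small dt
        eta[i] = max(eta[i - 1] + arrivals * work - dt, 0.0)
    return eta

eta = workload(lam=2.0, work=0.4, t_end=50.0)  # rho = lam * work = 0.8 < 1
print(eta.max(), eta.mean())
```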


We will now state an important lemma which is crucial in our calculation of the supremum of Brownian motion: Reich's lemma [5].

Lemma 2.1 (Reich).

\[
\eta(t) = \sup_{x>0}\left(\sum_{i=N(t-x)}^{N(t)} \chi_i - x\right) \tag{2.8}
\]

Proof. Let y = max{u : u ≤ t, η(u) = 0}. Then it is clear that

\[
\eta(t) = \sum_{i=N(y)}^{N(t)} \chi_i - (t - y). \tag{2.9}
\]

On the other hand we have, for every x > 0,

\[
\eta(t) = \eta(t-x) + \sum_{i=N(t-x)}^{N(t)} \chi_i - \int_{t-x}^{t} L(\eta(u))\,du \;\geq\; \sum_{i=N(t-x)}^{N(t)} \chi_i - x.
\]

So η(t) is an upper bound for the bracketed expression for every x > 0, and by (2.9) the bound is attained at x = t − y; together this gives the claimed supremum.

The following lemma about rescaling a Poisson process is also needed.

Lemma 2.2. Let N(t) be a Poisson process of rate m > 0 and let α ∈ N. Then as α → ∞ we have

\[
\frac{N(\alpha t) - \alpha m t}{\sqrt{\alpha m}} \;\to\; B(t) \tag{2.10}
\]

in distribution, where B(t) ∼ N(0, t).

Proof. First we note that if X has a Poisson distribution with parameter λ₁ and Y has a Poisson distribution with parameter λ₂, then X + Y has a Poisson distribution with parameter λ₁ + λ₂, and that N(t) has a Poisson distribution with parameter mt. So if we have a sequence X₁, X₂, ..., X_α which all have a Poisson distribution with parameter mt, then Σ_{i=1}^{α} X_i has a Poisson distribution with parameter αmt, and thus has the same distribution as N(αt). The expected value of X_i is mt, and the variance of X_i is also mt, so Σ_{i=1}^{α} X_i has expected value αmt and variance αmt. A direct application of the CLT then yields, as α → ∞,

\[
\frac{N(\alpha t) - \alpha m t}{\sqrt{\alpha m}} \;\to\; B(t). \tag{2.11}
\]

2.2.2. Doing the calculation

We want to use the above lemmas as follows. We want to calculate the distribution of the random variable

\[
\eta(t) = \sup_{s\le t}\,\big[B(t) - B(s) - C(t-s)\big], \tag{2.12}
\]

where C > 0 is the drift of the Brownian motion. As an approximation, define

\[
\eta_\alpha(t) = \sup_{s\le t}\left[\frac{N(\alpha t) - \alpha m t}{\sqrt{\alpha m}} - \frac{N(\alpha s) - \alpha m s}{\sqrt{\alpha m}} - C(t-s)\right]. \tag{2.13}
\]

We will use lemma 2.2 by looking at

\[
\eta(t) = \lim_{\alpha\to\infty}\eta_\alpha(t) = \lim_{\alpha\to\infty}\sup_{s\le t}\left[\frac{N(\alpha t) - \alpha m t}{\sqrt{\alpha m}} - \frac{N(\alpha s) - \alpha m s}{\sqrt{\alpha m}} - C(t-s)\right] \tag{2.14}
\]
\[
= \lim_{\alpha\to\infty}\sup_{s\le t}\left[\frac{N(\alpha t) - N(\alpha s)}{\sqrt{\alpha m}} - \big(C + \sqrt{\alpha m}\big)(t-s)\right], \tag{2.15}
\]

assuming we can interchange the limit and the supremum. So the idea is to approximate Brownian motion through Poisson processes. We can write the expression between the brackets as

\[
\eta_\alpha(t) = \sup_{s\le t}\left\{\left(\sum_{i=N(\alpha s)}^{N(\alpha t)} \frac{1}{\sqrt{\alpha m}}\right) - \big(C + \sqrt{\alpha m}\big)(t-s)\right\}. \tag{2.16}
\]

We can now interpret this as the work of an M/D/1 queue by using Reich's lemma: the Poisson process which models the incoming customers has rate αm, and each customer brings a deterministic amount of work 1/√(αm). There is only one problem: the work in (2.16) does not decrease linearly in time at unit rate, while for the work of an M/D/1 queue it must. We therefore divide by a factor C + √(αm), so that

\[
\frac{\eta_\alpha(t)}{C + \sqrt{\alpha m}} = \sup_{s\le t}\left\{\left(\sum_{i=N'(s)}^{N'(t)} \frac{1}{(C+\sqrt{\alpha m})\sqrt{\alpha m}}\right) - (t-s)\right\}, \tag{2.17}
\]

with N'(t) := N(αt), is the work of an M/D/1 queue where customers arrive according to a Poisson process of rate αm and each customer brings work 1/(√(αm)(C + √(αm))). The waiting-time distribution of an M/D/1 queue is known, due to A.K. Erlang [6].

Theorem 3 (Erlang). Consider an M/D/1 queue with deterministic service time D, where customers arrive according to a Poisson process of rate λ. Then the cumulative distribution function of the waiting time is given by

\[
P(W \le x) = (1-\rho)\sum_{k=0}^{n} \frac{\big(-\lambda(x-kD)\big)^k}{k!}\, e^{\lambda(x-kD)}, \tag{2.18}
\]

where n is such that nD < x ≤ (n+1)D and ρ = λD < 1.

Now let us go back to our problem. Using equation 2.17 and Reich's lemma, we see that η_α(t)/(C + √(αm)) is the waiting time, or work, of an M/D/1 queue with a Poisson process of rate λ = αm and deterministic service time D = 1/(√(αm)(C + √(αm))). There is a clear analogy with the construction of Brownian motion in the previous section. In the construction of Brownian motion we start with a random walk and rescale the process: the number of steps of the walker goes to infinity while the length of the steps goes to 0 in a well-defined way. In the case of our 'rescaled queue' we let the intensity of the Poisson process go to infinity, λ = αm → ∞ as α → ∞, while the deterministic amount of work D = 1/(√(αm)(C + √(αm))) per customer goes to 0. This shows the analogy between both constructions.

We now apply theorem 3 to the M/D/1 queue defined right after 2.17:

\[
P\!\left(\frac{\eta_\alpha(t)}{C+\sqrt{\alpha m}} \le x\right) = \left(1 - \frac{\alpha m}{(C+\sqrt{\alpha m})\sqrt{\alpha m}}\right)\sum_{k=0}^{n} \frac{\left(-\alpha m\left[x - \frac{k}{(C+\sqrt{\alpha m})\sqrt{\alpha m}}\right]\right)^k}{k!}\,\exp\!\left(\alpha m\left[x - \frac{k}{(C+\sqrt{\alpha m})\sqrt{\alpha m}}\right]\right).
\]

So if we want to know P(η_α(t) ≤ x), we find

\[
P(\eta_\alpha(t) \le x) = P\!\left(\frac{\eta_\alpha(t)}{C+\sqrt{\alpha m}} \le \frac{x}{C+\sqrt{\alpha m}}\right)
= \frac{C}{C+\sqrt{\alpha m}}\,\exp\!\left(\frac{\alpha m x}{C+\sqrt{\alpha m}}\right)\sum_{k=0}^{n} \frac{(-\alpha m x + \sqrt{\alpha m}\,k)^k}{(C+\sqrt{\alpha m})^k\, k!}\,\exp\!\left(-\frac{\sqrt{\alpha m}\,k}{C+\sqrt{\alpha m}}\right),
\]

where now n is such that nD < x/(C + √(αm)) ≤ (n+1)D.


The only thing left is to show that we can move the limit α → ∞ inside the argument of the distribution function and that we can interchange limit and supremum in 2.14. Before stating the result, we need to look at the limit of the distribution function as α → ∞. It is not at all clear from an analytical point of view why this should converge. Since we have derived this formula using queueing theory, we will not prove its convergence analytically, but only look at a numerical calculation, which is shown in the figure below.

Figure 2.1.: Cumulative distribution function for α = 1000, C = 2 and m = 1, with the probability on the vertical axis and the waiting time on the horizontal axis. Notice that the errors in the numerical calculation become too large at approximately x = 1; the value x = 1 is not special though.
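A numerical evaluation such as the one behind Figure 2.1 can be sketched as follows (plain Python; the function and variable names are mine). The alternating terms grow exponentially large before cancelling, which is the source of the numerical errors mentioned in the caption:

```python
import math

def sup_cdf(x, alpha=1000, C=2.0, m=1.0):
    """P(eta_alpha <= x) via Erlang's M/D/1 waiting-time formula with
    lambda = alpha*m and D = 1 / (sqrt(alpha*m) * (C + sqrt(alpha*m)))."""
    sq = math.sqrt(alpha * m)
    lam, D = alpha * m, 1.0 / (sq * (C + sq))
    xw = x / (C + sq)               # rescaled (waiting-time) argument
    n = math.ceil(xw / D) - 1       # n with n*D < xw <= (n+1)*D
    total = 0.0
    for k in range(n + 1):
        u = lam * (xw - k * D)      # alternating, exponentially large terms
        total += (-u) ** k / math.factorial(k) * math.exp(u)
    return (1.0 - lam * D) * total

for x in (0.1, 0.3, 0.5, 0.8):
    print(x, sup_cdf(x))
```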

We will conclude this section with the main result under some assumptions. It is stated as a theorem even though there is an assumption which is not proved in this thesis.

Theorem 4. Consider Brownian motion B(t) − Ct with drift parameter C > 0. Assume the Brownian motion has started at t = −∞, so that the process is in equilibrium. Then the distribution function of η = sup_{s≤t}(B(t) − B(s) − C(t − s)) is given by

\[
P(\eta \le x) = \lim_{\alpha\to\infty} \frac{C}{C+\sqrt{\alpha m}}\,\exp\!\left(\frac{\alpha m x}{C+\sqrt{\alpha m}}\right)\sum_{k=0}^{n} \frac{(-\alpha m x + \sqrt{\alpha m}\,k)^k}{(C+\sqrt{\alpha m})^k\, k!}\,\exp\!\left(-\frac{\sqrt{\alpha m}\,k}{C+\sqrt{\alpha m}}\right), \tag{2.19}
\]

with n as above, assuming we can interchange the limit and supremum in 2.14. Notice that the result is independent of m.

2.3. Visiting a set of measure r in a time t by Brownian motion on R

In this section we will find the probability density function of the measure of the set that standard Brownian motion on R visits in a time t. We will use the paper written by W. Feller [1] to find this density function.

Let M(T) := sup_{0≤s≤T} B(s) and m(T) := inf_{0≤s≤T} B(s), and let R(T) := M(T) + |m(T)|. We will refer to R(T) as the range of the Brownian motion; what we want to find is its probability density function. First we look at F(T, u, v), which we define to be the probability of the event {M(T) ≤ v, m(T) ≥ −u} for u, v > 0. The corresponding density function is f(T, u, v) = F_{uv}(T, u, v). We define δ(T, r) to be the density function of the range R(T). Then we have, by the definitions of the above density functions,

\[
\delta(T, r) = \int_0^r f(T, u, r-u)\,du, \tag{2.20}
\]

since f(T, u, r − u) is the joint density of the extremes evaluated where the range equals exactly r; integrating over all splittings 0 ≤ u ≤ r collects every way in which the range can equal r, giving the desired density function. Now let w(t, x, u, v) be the density function of the event {B(t) = x, M(t) ≤ v, m(t) ≥ −u}. Then we have of course

\[
F(T, u, v) = \int_{-u}^{v} w(T, x, u, v)\,dx. \tag{2.21}
\]

So we need to find w(t, x, u, v) if we want to find δ(T, r), which is what we will do now. This density function satisfies the diffusion equation w_t = ½ w_xx for −u < x < v, with initial condition w(0, x, u, v) = δ(x), the delta function (not to be confused with δ(T, r)), and boundary conditions w = 0 if x = −u or x = v. The reason why w(t, x, u, v) satisfies this partial differential equation can be found in chapter 2 of [7]. Let us start with the Ansatz

\[
h(t, x) = \frac{1}{\sqrt{2\pi t}}\, e^{-x^2/2t} = t^{-1/2}\,\varphi\!\left(x/t^{1/2}\right),
\]

where φ(x) is the standard normal density function. First differentiate with respect to time:

\[
\frac{\partial h(t,x)}{\partial t} = \frac{e^{-x^2/2t}\,(x^2 - t)}{2\sqrt{2\pi}\, t^{5/2}}. \tag{2.22}
\]

Now differentiate twice with respect to x:

\[
\frac{\partial^2 h(t,x)}{\partial x^2} = \frac{e^{-x^2/2t}\,(x^2 - t)}{\sqrt{2\pi}\, t^{5/2}}, \tag{2.23}
\]

so h indeed solves h_t = ½ h_xx. By one of the basic constructions of the delta function we have lim_{t↓0} h(t, x) = δ(x), so h(t, x) satisfies the initial condition. Now we look at the boundary conditions. We see that h(t, −u) = (2πt)^{-1/2} e^{−u²/2t} and h(t, v) = (2πt)^{-1/2} e^{−v²/2t}, hence h(t, x) does not fulfill the boundary conditions. We therefore apply the "method of images": we add Gaussians ±h(t, x − a), centered at points a outside our domain [−u, v], such that the total sum fulfills the boundary conditions. We will see that finitely many such Gaussians do not suffice; we need infinitely many of them, so that w(t, x, u, v) is an infinite sum of Gaussians. For example, we add −h(t, x − 2v) so that it cancels h(t, x) at x = v, and −h(t, x + 2u) to cancel h(t, x) at x = −u. These two functions themselves contribute at x = −u and x = v, so we again add Gaussians to cancel those contributions: to get rid of the contribution of −h(t, x − 2v) we add h(t, x + 2u + 2v), and to get rid of that of −h(t, x + 2u) we add h(t, x − 2v − 2u). Continuing this process inductively, we obtain

\[
w(t,x,u,v) = \sum_{k=-\infty}^{\infty} t^{-1/2}\varphi\!\left(\frac{2ku + 2kv - x}{t^{1/2}}\right) - \sum_{k=-\infty}^{\infty} t^{-1/2}\varphi\!\left(\frac{2ku + 2(k-1)v + x}{t^{1/2}}\right). \tag{2.24}
\]

Now we can find F(T, u, v) by integrating w with respect to x from −u up to v. For k > 0 we write the integral of a single term in the first sum as

\[
\frac{1}{\sqrt T}\int_{-u}^{v} \varphi\!\left(\frac{2ku+2kv-x}{\sqrt T}\right)dx
= \frac{1}{\sqrt T}\int_{2ku+(2k-1)v}^{(2k+1)u+2kv} \varphi\!\left(\frac{y}{\sqrt T}\right)dy
= \int_0^{\frac{(2k+1)u+2kv}{\sqrt T}} \varphi(z)\,dz - \int_0^{\frac{2ku+(2k-1)v}{\sqrt T}} \varphi(z)\,dz.
\]

Now we can differentiate this easily with respect to u or v. Differentiating with respect to v gives

\[
\frac{2k}{\sqrt T}\,\varphi\!\left(\frac{(2k+1)u+2kv}{\sqrt T}\right) - \frac{2k-1}{\sqrt T}\,\varphi\!\left(\frac{2ku+(2k-1)v}{\sqrt T}\right),
\]

and then differentiating with respect to u gives

\[
\frac{2k(2k+1)}{T}\,\varphi'\!\left(\frac{(2k+1)u+2kv}{\sqrt T}\right) - \frac{(2k-1)2k}{T}\,\varphi'\!\left(\frac{2ku+(2k-1)v}{\sqrt T}\right).
\]

Now set v = r − u and integrate over u as indicated in 2.20:

\[
\frac{2k(2k+1)}{T}\int_0^r \varphi'\!\left(\frac{u+2kr}{\sqrt T}\right)du - \frac{(2k-1)2k}{T}\int_0^r \varphi'\!\left(\frac{u+(2k-1)r}{\sqrt T}\right)du
\]
\[
= \frac{2k(2k+1)}{\sqrt T}\left[\varphi\!\left(\frac{(2k+1)r}{\sqrt T}\right) - \varphi\!\left(\frac{2kr}{\sqrt T}\right)\right] - \frac{2k(2k-1)}{\sqrt T}\left[\varphi\!\left(\frac{2kr}{\sqrt T}\right) - \varphi\!\left(\frac{(2k-1)r}{\sqrt T}\right)\right].
\]

For k < 0 the calculation yields the same result. For the terms in the second sum, the result is

\[
\frac{2n(2n-1)}{\sqrt T}\left[\varphi\!\left(\frac{2nr}{\sqrt T}\right) - \varphi\!\left(\frac{(2n-1)r}{\sqrt T}\right)\right] - \frac{(2n-1)(2n-2)}{\sqrt T}\left[\varphi\!\left(\frac{(2n-1)r}{\sqrt T}\right) - \varphi\!\left(\frac{2(n-1)r}{\sqrt T}\right)\right].
\]

If we put all terms together, we obtain

\[
\delta(T, r) = \frac{8}{\sqrt T}\sum_{k=1}^{\infty}\left[(2k-1)^2\,\varphi\!\left(\frac{(2k-1)r}{\sqrt T}\right) - (2k)^2\,\varphi\!\left(\frac{2kr}{\sqrt T}\right)\right] \tag{2.25}
\]
\[
= \frac{8}{\sqrt T}\sum_{k=1}^{\infty} (-1)^{k-1} k^2\, \varphi\!\left(\frac{kr}{\sqrt T}\right). \tag{2.26}
\]
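As a sanity check on (2.26), one can compare the series against a Monte Carlo estimate of the range of a discretized Brownian motion. The sketch below (NumPy assumed; truncation length, bin layout and sample sizes are my own choices) should show rough agreement between the two columns:

```python
import numpy as np

rng = np.random.default_rng(3)

def phi(x):
    return np.exp(-x * x / 2.0) / np.sqrt(2.0 * np.pi)

def range_density(r, T, n_terms=100):
    """Feller's series (2.26): (8/sqrt(T)) * sum (-1)^(k-1) k^2 phi(k r / sqrt(T))."""
    k = np.arange(1, n_terms + 1)
    return 8.0 / np.sqrt(T) * np.sum((-1.0) ** (k - 1) * k**2 * phi(k * r / np.sqrt(T)))

# Monte Carlo ranges of discretized standard Brownian motion on [0, T]
T, n_steps, n_paths = 1.0, 4_000, 20_000
paths = np.cumsum(rng.normal(0.0, np.sqrt(T / n_steps), size=(n_paths, n_steps)), axis=1)
ranges = np.maximum(paths.max(axis=1), 0.0) - np.minimum(paths.min(axis=1), 0.0)

hist, edges = np.histogram(ranges, bins=30, range=(0.0, 4.0), density=True)
mids = 0.5 * (edges[1:] + edges[:-1])
for r, emp in zip(mids[3::6], hist[3::6]):
    print(f"r={r:.2f}  monte_carlo={emp:.3f}  series={range_density(r, T):.3f}")
```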


3. Visiting s distinct sites in t steps

3.1. Main idea

In this chapter we will try to calculate the probability that a random walk visits s distinct sites in t steps. To this end, consider a symmetric random walk in 3D with traps distributed uniformly with concentration 0 < c < 1, i.e. each site r independently carries a trap with probability c, and a trap at site r captures the walker with rate V_r. For answering the main question we consider perfect traps, so V_r = ∞.

We can write the probability of surviving t steps as a sum over all random walks of t steps,

\[
\Psi(t) = z^{-t}\sum_{w_t} (1-c)^{s(w_t)}, \tag{3.1}
\]

where z is the number of nearest neighbours (thus 6 in three dimensions), w_t is a random walk of t steps and s(w_t) is the number of distinct sites visited.

Of course there are many random walks which visit the same number of distinct sites, so we can also write the sum as

\[
\Psi(t) = \sum_{s=0}^{t} p(s,t)\,(1-c)^s, \tag{3.2}
\]

where p(s, t) is the probability that the random walker visits s distinct sites in t steps. Thus if we know Ψ(t), we can answer the main question.

We will not be able to find this exactly. Instead, we will derive a master equation of the form

\[
\dot{p} = M p. \tag{3.3}
\]

We use the Ansatz p(t) = e^{−Et} p̃, where p̃ is independent of time. Then the problem reduces to solving the equation

\[
M\tilde{p} = -E\tilde{p}. \tag{3.4}
\]

This will give us eigenvalues E_k and eigenvectors p̃^{(k)} such that the following relations hold:

\[
\sum_k \tilde{p}^{(k)}_i \tilde{p}^{(k)}_j = \delta_{ij} \quad\text{and}\quad \sum_k \tilde{p}^{(i)}_k \tilde{p}^{(j)}_k = \delta_{ij}. \tag{3.5}
\]

Since the eigenvectors form a basis, we can write

\[
p_i(t) = \sum_k c_k\, e^{-E_k t}\, \tilde{p}^{(k)}_i. \tag{3.6}
\]

Note that p_i(0) = Σ_k p̃_i^{(k)} c_k, so Σ_i p̃_i^{(l)} p_i(0) = c_l by the second relation of equation 3.5. The initial condition is p_i(0) = δ_{i j₀}, so c_l = p̃^{(l)}_{j₀}, and we see that

\[
p(i, t; j_0, 0) = \sum_k e^{-E_k t}\, \tilde{p}^{(k)}_i\, \tilde{p}^{(k)}_{j_0} \tag{3.7}
\]

is the probability to be at i after a time t when starting at j₀. Now we can write down the mean return probability:

\[
R(t) = \frac{1}{N}\sum_i p(i,t;i,0) = \frac{1}{N}\sum_i\sum_k e^{-E_k t}\,\tilde{p}^{(k)}_i \tilde{p}^{(k)}_i = \frac{1}{N}\sum_k e^{-E_k t}.
\]

If we now define the density of states ρ(E) = (1/N) Σ_k δ(E − E_k), then

\[
R(t) = \int \rho(E)\, e^{-Et}\, dE. \tag{3.8}
\]

Since Ψ(t) ∼ R(t), the problem is to find out what ρ(E) is.

3.2. Master equation for the diffusion of a random walker in 3D with traps

To derive a master equation, we need the following property of a Poisson process [4].

Theorem 5. Let (X_t)_{t≥0} be a Poisson process of rate 0 < λ < ∞. Then (X_t)_{t≥0} has independent increments, and as h ↓ 0, uniformly in t,

\[
P(X_{t+h} - X_t = 0) = 1 - \lambda h + o(h), \qquad P(X_{t+h} - X_t = 1) = \lambda h + o(h). \tag{3.9}
\]

We want to know what p_r(t + dt) is. A particle can hop from a nearest neighbour to site r during the interval dt; using the above theorem, the probability of this is p_{r+b}(t)·(1/z)dt + o(dt), where 1/z is the rate of jumping to a given nearest neighbour and |b| = 1. Another contribution is that the particle is at site r at time t and nothing happens during dt; here we must also take into account that the particle can get trapped. The rate at which "something happens", i.e. the particle hops or gets trapped and disappears, is again Poisson, with rate the sum of all the individual rates, so this contribution is exactly p_r(t)[1 − (1 + V_r)dt + o(dt)]. Taking all the contributions together, we get the relation

\[
p_r(t+dt) = \sum_{b\,:\,|b|=1}\left[p_{r+b}(t)\,\frac{1}{z}\,dt + o(dt)\right] + p_r(t)\left[1 - (1+V_r)\,dt + o(dt)\right]. \tag{3.10}
\]

This can be written as

\[
\frac{p_r(t+dt) - p_r(t)}{dt} = \frac{\sum_{b:|b|=1}\left[p_{r+b}(t)\frac{1}{z}\,dt + o(dt)\right] + p_r(t)\left[-(1+V_r)\,dt + o(dt)\right]}{dt}; \tag{3.11}
\]

taking the limit dt → 0 and using lim_{dt→0} o(dt)/dt = 0, we obtain the master equation

\[
\dot{p}_r(t) = \sum_{b\,:\,|b|=1} \frac{1}{z}\, p_{r+b}(t) - p_r(t)(1+V_r) =: \sum_b w(b)\, p_{r+b}(t) - p_r(t)\, V_r, \tag{3.12}
\]

where we defined w(b) := (z^{-1}δ_{|b|,1} − δ_{b,0})/τ₀ and τ₀ is a unit of time. This is of course an equation which we can put in the form

\[
\dot{p} = M p. \tag{3.13}
\]
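For a finite lattice the matrix M can be written down explicitly and (3.13) solved numerically. The sketch below (NumPy/SciPy assumed; lattice size, trap rate and concentration are my own choices) builds M for a small periodic cubic lattice with randomly placed traps and computes the survival probability Ψ(t) = Σ_r p_r(t):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)

L, c, V = 6, 0.1, 50.0          # lattice side, trap concentration, trap rate
N, z = L**3, 6

def idx(x, y, w):               # site (x, y, w) -> row index, periodic boundaries
    return (x % L) * L * L + (y % L) * L + (w % L)

traps = rng.random(N) < c
M = np.zeros((N, N))
for x in range(L):
    for y in range(L):
        for w in range(L):
            i = idx(x, y, w)
            M[i, i] = -(1.0 + V * traps[i])   # leave at rate 1, absorbed at rate V_r
            for dx, dy, dw in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                M[i, idx(x + dx, y + dy, w + dw)] += 1.0 / z   # inflow at rate 1/z

p0 = np.full(N, 1.0 / N)        # start at a uniformly chosen site
for t in (0.5, 2.0, 8.0):
    print(t, (expm(M * t) @ p0).sum())   # survival probability Psi(t)
```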

3.3. A Green’s function

When we consider the equation ṗ = Mp with the Ansatz p = e^{−Et} p̃, we obtain the equation Mp̃ = −Ep̃, or written differently, (−E − M)p̃ = 0. The idea is now to consider the matrix equation

\[
(-E - M)\,\hat{G}(E) = I, \tag{3.14}
\]

where I is the identity matrix; thus Ĝ(E) = (−E − M)^{-1}. Taking the trace of Ĝ(E) in an eigenbasis of M (the trace is invariant under basis transformations), we obtain

\[
G(E) := \frac{1}{N}\,\mathrm{tr}\,\hat{G}(E) = \frac{1}{N}\sum_i \frac{1}{-E + E_i}, \tag{3.15}
\]

where −E_i are the eigenvalues of M. We will refer to G(E) as our Green's function.

Now we look at the expression

\[
\hat{G}(E - i\varepsilon)_{ii} = \frac{1}{-E + E_i + i\varepsilon}
= \frac{-E + E_i - i\varepsilon}{(-E + E_i)^2 + \varepsilon^2}
= \frac{-E + E_i}{(-E + E_i)^2 + \varepsilon^2} - \frac{i\varepsilon}{(-E + E_i)^2 + \varepsilon^2}.
\]

Since

\[
\lim_{n\to\infty} \frac{n}{\pi(1 + n^2 x^2)} = \delta(x), \tag{3.16}
\]

we see that

\[
\lim_{\varepsilon\downarrow 0} \mathrm{Im}\big(\hat{G}(E - i\varepsilon)_{ii}\big) = -\pi\,\delta(E - E_i). \tag{3.17}
\]

So we see that

\[
\mathrm{Im}\big(G(E - 0\cdot i)\big) := \lim_{\varepsilon\downarrow 0} \mathrm{Im}\big(G(E - i\varepsilon)\big) = -\frac{\pi}{N}\sum_i \delta(E - E_i). \tag{3.18}
\]

Thus we conclude

\[
\rho(E) = -\pi^{-1}\,\mathrm{Im}\big(G(E - 0\cdot i)\big). \tag{3.19}
\]

This is the reason for considering the equation (−E − M)Ĝ(E) = I: it gives a way to find the density of states through the trace of Ĝ(E).

3.4. Representation of G(E) with Gaussian integrals and averaging over configurations of random traps

Using A.7 we see that

\[
(\hat{G}(E))_{ii} = \frac{1}{Z(0)}\int x_i^2\, \exp\!\left(-\tfrac{1}{2}x^T(-E-M)x\right) dx_1\ldots dx_N. \tag{3.20}
\]

If we perform the change of variables √2 φ_i = x_i, the integral becomes

\[
(\hat{G}(E))_{ii} = \frac{2^{N/2+1}}{Z(0)}\int \varphi_i^2\, \exp\!\left(-\varphi^T(-E-M)\varphi\right) d\varphi_1\ldots d\varphi_N. \tag{3.21}
\]

Doing the same for the normalization integral gives Z(0) = 2^{N/2} Z with

\[
Z := \int \exp\!\left(-\varphi^T(-E-M)\varphi\right) d\varphi_1\ldots d\varphi_N,
\]

and we obtain for the Green's function

\[
G(E) = \frac{2}{NZ}\int D\varphi \sum_{i=1}^N \varphi_i^2\, e^{-A}, \tag{3.22}
\]

where Dφ = ∏_{i=1}^N dφ_i and

\[
A := \varphi^T(-E-M)\varphi \tag{3.23}
\]
\[
\phantom{A:}= -\sum_{r,b} \varphi_r\, w(b)\, \varphi_{r+b} + \sum_r (-E + V_r)\,\varphi_r^2. \tag{3.24}
\]


We will refer to A as the action. Now we want to average 3.22 over configurations of random traps; we denote this average by ⟨G(E)⟩. At first sight this is very hard, since the integrand contains a factor Z^{−1}, and taking the average of this random variable is complicated. The way to proceed is to write Z^{−1} in another way, namely

\[
\frac{1}{Z} = \lim_{n\to 0} Z^{n-1}. \tag{3.25}
\]

This method is called the replica method. We are going to calculate the average over random traps of

\[
G_n(E) = \frac{2}{N}\, Z^{n-1} \int D\varphi \sum_{i=1}^N \varphi_i^2\, e^{-A} \tag{3.26}
\]

for n = 1, 2, 3, ... and then take the limit n → 0. So

\[
G_n(E) = \frac{2}{N}\left(\int D\varphi\, e^{-A}\right)^{n-1}\int D\varphi \sum_{i=1}^N \varphi_i^2\, e^{-A}
= \frac{2}{N}\int \prod_{\alpha=1}^{n-1} D\varphi^\alpha\, e^{-A_\alpha} \int D\varphi \sum_{i=1}^N \varphi_i^2\, e^{-A}
= \frac{2}{Nn}\int \prod_{\alpha=1}^{n} D\varphi^\alpha\, e^{-A_\alpha} \sum_{i,\alpha} (\varphi_i^\alpha)^2
= \frac{2}{Nn}\int D\varphi \sum_{i=1}^N \varphi_i^2\, e^{-A_n},
\]

where now the measure is Dφ = ∏_{i,α} dφ_i^α, φ_i = (φ_i^1, φ_i^2, ..., φ_i^n), φ_i² = Σ_α (φ_i^α)², and

\[
A_n = \sum_{\alpha=1}^n A_\alpha = \sum_{\alpha=1}^n \left[-\sum_{r,b} \varphi_r^\alpha\, w(b)\,\varphi_{r+b}^\alpha + \sum_r (-E+V_r)(\varphi_r^\alpha)^2\right] =: A_n^0 + \sum_r V_r \sum_\alpha (\varphi_r^\alpha)^2.
\]

We had a field φ_r at each lattice site r, but now we have n such fields; hence the name replica method. We have singled out A_n^0 because it is deterministic; V_r is our random variable, equal to 0 with probability 1 − c and to V with probability c — this is how we distribute traps on the lattice. Then

\[
\langle G_n(E)\rangle = \frac{2}{nN}\int D\varphi \sum_{i=1}^N \varphi_i^2\, e^{-A_n^0}\,\Big\langle e^{-\sum_r V_r \sum_\alpha (\varphi_r^\alpha)^2}\Big\rangle. \tag{3.27}
\]

For each r we have, for c ≪ 1,

\[
\big\langle e^{-V_r \sum_\alpha (\varphi_r^\alpha)^2}\big\rangle = 1 - c + c\, e^{-V\sum_\alpha(\varphi_r^\alpha)^2}
= \exp\!\Big[\log\!\Big(1 - c + c\, e^{-V\sum_\alpha(\varphi_r^\alpha)^2}\Big)\Big]
\approx \exp\!\Big[-c + c\, e^{-V\sum_\alpha(\varphi_r^\alpha)^2}\Big].
\]


So we have

\[
\langle G_n(E)\rangle = \frac{2}{nN}\int D\varphi \sum_{i=1}^N \varphi_i^2\, e^{-A_n^0 - \sum_r \left(c - c\exp(-V\sum_\alpha(\varphi_r^\alpha)^2)\right)}. \tag{3.28}
\]

Now we pass to a continuum description: we assume that the field φ_r is defined on the whole of R³ instead of only on the discrete lattice, so the action becomes an integral instead of a sum. In the action 3.23 we had a term φ_{r+b}; in the continuum we can Taylor expand this, assuming φ_r =: φ(r) becomes a smooth function. So

\[
A = -\sum_{r,b} \varphi(r)\, w(b)\, \varphi(r+b) + \sum_r (-E+V_r)\varphi(r)^2
\approx -\sum_{r,b} \varphi(r)\, w(b)\left[\varphi(r) + b\cdot\nabla\varphi(r) + \tfrac{1}{2}\,b^T H_\varphi(r)\, b\right] + \sum_r (-E+V_r)\varphi(r)^2
\]
\[
= \sum_r \left[-\frac{1}{z\tau_0}\,\varphi(r)\nabla^2\varphi(r) + (-E+V_r)\varphi(r)^2\right]
\approx \int d^3r \left[-\frac{1}{z\tau_0}\,\varphi(r)\nabla^2\varphi(r) - E\varphi(r)^2 + V_r\varphi(r)^2\right]
= \int d^3r \left[\frac{1}{z\tau_0}\,(\nabla\varphi(r))^2 - E\varphi(r)^2 + V_r\varphi(r)^2\right],
\]

where H_φ(r) is the Hessian matrix of φ(r); the zeroth- and first-order terms of the expansion drop out because Σ_b w(b) = 0 and Σ_b w(b) b = 0, while in the last step integration by parts was used. Using the above calculation, the averaged action becomes

\[
\langle A_n\rangle = \int d^3r \left[\sum_\alpha \frac{1}{z\tau_0}\,(\nabla\varphi^\alpha(r))^2 - \sum_\alpha E\,(\varphi^\alpha(r))^2 + c - c\, e^{-V\sum_\alpha(\varphi^\alpha(r))^2}\right]. \tag{3.29}
\]

To capture the 'bulk' result, we will use the saddle point method: we take the functional derivative of ⟨A_n⟩ with respect to φ(r) and set it equal to 0. If we use the Ansatz φ(r) = φ(rW) V^{-1/2} n, where n is a unit vector in n-dimensional Euclidean space, W = (cτ₀Vz)^{1/2} and τ₀ is a unit of time, and write the integral for the averaged action in a suitable basis (n should be at least a basis vector), then

\[
A_s := \langle A_n\rangle = \int d^3r\left[\frac{1}{z\tau_0}\,\frac{(\nabla\varphi(rW))^2}{V} - \frac{E}{V}\,\varphi(rW)^2 + c - c\,e^{-\varphi(rW)^2}\right]
= c\,W^{-3}\int d^3r'\left[(\nabla\varphi(r'))^2 - \frac{E}{Vc}\,\varphi(r')^2 + 1 - e^{-\varphi(r')^2}\right],
\]

where in the second equality a c was taken out of the integral and we also did a substi-tution r0 = rW . So we must find the functional derivative of the functional

J (k2, φ) = Z

d3rh(∇φ)2− k2φ2+ 1 − e−φ2i

(24)

Sometimes we will also write J (k2), omitting the dependence on φ. Consider a variation of φ, call this δφ. Then we need to look at J (k2, φ + δφ) − J (k2, φ). The functional

derivative is then by definition the expression linear in δφ inside the integral δJ = J (k2, φ + δφ) − J (k2, φ) =: Z d3r δJ δφ  δφ, (3.31)

where δJ/δφ is notation for the functional derivative. So calculation of this derivative yields δJ = Z d3rh(∇φ + ∇δφ)2− (∇φ)2− k2(φ + δφ)2+ k2φ2− e−(φ+δφ)2 + e−φ2i = Z d3rh2∇φ · ∇δφ + (∇δφ)2− k2(2φδφ + δφ2) − e−φ2e−2φδφ−δφ2 − 1i ≈ Z d3r h −2δφ∇2φ − 2k2φδφ − e−φ2(1 − 2φδφ − 1) i = Z d3rh−2∇2φ − 2k2φ + 2e−φ2 φiδφ,

where in the third line integration by parts was used and all terms which are not linear in δφ are thrown away. So we find that the functional derivative equals

δJ δφ = −2∇ 2 φ − 2k2φ + 2e−φ2φ = −2φ00− 4φ 0 r − 2k 2 φ + 2e−φ2, (3.32)

where the Ansatz φ(r) = φ(rW )V0−1/2n is used which is spherically symmetric and so ∇2φ = 1

r2

∂ ∂r r

2 ∂φ

∂r. In the saddle point method we need to solve δJ/δφ = 0, this leads

to the equation −φ00 2φ 0 r = (k 2− e−φ2 )φ. (3.33)

If we neglect the e^{−φ²} term, this becomes the spherical Bessel differential equation, which is given by

\[
r^2\frac{d^2R}{dr^2} + 2r\frac{dR}{dr} + \left(k^2r^2 - n(n+1)\right)R = 0. \tag{3.34}
\]

For n = 0, equations 3.33 and 3.34 are the same if we neglect e^{−φ²}, which we may do where φ takes large values, i.e. near the starting point of the random walk. For n = 0 the solutions of 3.34 are of the form A sin(kr)/r, so we take φ(r) = sin(kr)/r. Since φ has a zero at r = π/k, beyond which e^{−φ²} is no longer negligible, we cannot expect a correct answer if we integrate over all of R³; instead we integrate from r = 0 to r = π/k. So with k = √(E/cV) we have, for small k²,

\[
J(k^2) \approx \int_0^{\pi/k}\!\!\int_0^{2\pi}\!\!\int_0^{\pi} d\theta\, d\varphi\, dr\, r^2\sin\theta\left[\left(\nabla\frac{\sin kr}{r}\right)^2 - k^2\,\frac{\sin^2 kr}{r^2} + 1\right] \tag{3.35}
\]
\[
= 4\pi\int_0^{\pi/k} dr\, r^2\left[\left(\frac{k\cos kr}{r} - \frac{\sin kr}{r^2}\right)^2 - k^2\,\frac{\sin^2 kr}{r^2} + 1\right] \tag{3.36}
\]
\[
= \frac{4\pi^4}{3k^3} + 4\pi\int_0^{\pi/k} dr\left[k^2\cos^2 kr - k^2\sin^2 kr - \frac{2k\sin kr\cos kr}{r} + \frac{\sin^2 kr}{r^2}\right] \tag{3.37}
\]
\[
= \frac{4\pi^4}{3k^3}, \tag{3.38}
\]

where we have used that

\[
\int_0^{\pi/k} dr\,(\cos^2 kr - \sin^2 kr) = 0
\quad\text{and also}\quad
\int_0^{\pi/k} dr\,\frac{2k\sin kr\cos kr}{r} = \int_0^{\pi/k} dr\,\frac{\sin^2 kr}{r^2},
\]

which can be seen by integration by parts. So we find that

\[
A_s \approx c\,W^{-3} J(k^2) = \frac{4}{3}\,c\,\pi^4\, (z\tau_0 E)^{-3/2}.
\]

In the saddle point approximation ⟨G(E)⟩ ∼ e^{−A_s}, so with the density of states ρ(E) = −π^{-1} Im(G(E − 0·i)) we find

\[
\rho(E) \sim e^{-\frac{4}{3}c\pi^4 (z\tau_0 E)^{-3/2}}. \tag{3.39}
\]

3.5. Improving the approximation

Now we want to improve the approximation for the density of states. We will do this by renormalizing the action. First we consider equation 3.27 without the approximation made for ⟨e^{−V_rΣ_α(φ_r^α)²}⟩; using the exact expression, we obtain for one term of 3.27

\[
\langle G_n(E)\rangle_{ii} = \frac{2}{nN}\int D\varphi\, \varphi_i^2\, e^{-A^0}\prod_r\left(1 - c + c\,e^{-V\sum_\alpha(\varphi_r^\alpha)^2}\right), \tag{3.40}
\]

where φ_i² = Σ_α(φ_i^α)² and A⁰ = −Σ_{α,r,b} φ_r^α w(b) φ_{r+b}^α − Σ_{α,r} E(φ_r^α)². We write the terms in the product again as an exponential and do some Taylor expansions:

\[
\langle G_n(E)\rangle_{ii} = \frac{2}{nN}\int D\varphi\,\varphi_i^2\, e^{-A^0}\prod_r \exp\!\left[\log\!\left(1 - c + c\,e^{-V\sum_\alpha(\varphi_r^\alpha)^2}\right)\right]
\]
\[
= \frac{2}{nN}\int D\varphi\,\varphi_i^2\, e^{-A^0}\exp\!\left[\sum_r \log\!\left(1 + c\sum_{m=1}^{\infty}\frac{\left(-V\sum_\alpha(\varphi_r^\alpha)^2\right)^m}{m!}\right)\right]
\]
\[
\approx \frac{2}{nN}\int D\varphi\,\varphi_i^2\, e^{-A^0}\exp\!\left[c\sum_r\sum_{m=1}^{\infty}\frac{\left(-V\sum_\alpha(\varphi_r^\alpha)^2\right)^m}{m!}\right]
\approx \frac{2}{nN}\int D\varphi\,\varphi_i^2\, e^{-A^1}\left[1 + \frac{cV^2}{2}\sum_r\left(\sum_\alpha(\varphi_r^\alpha)^2\right)^2 - \ldots\right],
\]

where now A¹ = −Σ_{α,r,b} φ_r^α w(b) φ_{r+b}^α + Σ_{α,r}(cV − E)(φ_r^α)²: the m = 1 term of the expansion has been absorbed into the action. Now we will use the notation

\[
\langle \varphi_1\varphi_2\cdots\varphi_m\rangle = \frac{2}{Z}\int D\varphi\; \varphi_1\varphi_2\cdots\varphi_m\, e^{-A^1}, \tag{3.41}
\]

where Z = ∫Dφ e^{−A¹}. Then by appendix A,

\[
\langle \varphi_1\varphi_2\cdots\varphi_m\rangle = \langle \varphi_1\varphi_2\rangle\cdots\langle \varphi_{m-1}\varphi_m\rangle + \text{all other pairings}. \tag{3.42}
\]

Thus it is only needed to know what ⟨φ_iφ_j⟩ is. We find it by Fourier transforming the linear map M appearing in A¹ = Σ_{α,r} φ_r^α (Mφ^α)_r, i.e. (Mφ^α)_r = −Σ_b w(b) φ_{r+b}^α + (cV − E) φ_r^α. So

\[
\sum_r (M\varphi^\alpha)_r\, e^{-ik\cdot r} = -\sum_r\sum_b w(b)\,\varphi^\alpha_{r+b}\, e^{-ik\cdot r} + \sum_r (cV-E)\,\varphi^\alpha_r\, e^{-ik\cdot r}
= -\sum_b w(b)\, e^{ik\cdot b}\,\hat\varphi^\alpha_k + (cV-E)\,\hat\varphi^\alpha_k
= \left(-\hat w(k) + cV - E\right)\hat\varphi^\alpha_k,
\]

where the hats denote Fourier transforms, with ŵ(k) = Σ_b w(b) e^{ik·b}. The inverse of this linear map is trivially obtained: we just multiply by (−ŵ(k) + cV − E)^{-1}. Transforming back to the original variable r, we obtain for the matrix elements of M^{-1}

\[
(M^{-1})_{r,r'} = \frac{1}{(2\pi)^3}\int d^3k\; \frac{e^{ik\cdot(r-r')}}{-\hat w(k) + cV - E}, \tag{3.43}
\]

where the integral runs over the first Brillouin zone.

In particular we will write τ₀^{-1}(M^{-1})_{r,r} = g. First look at m = 2. Expanding (Σ_β(φ_r^β)²)², we obtain for the first term after the zeroth-order term

\[
\frac{cV^2}{2nN}\sum_r\sum_\alpha\int D\varphi\,(\varphi_i^\alpha)^2\Big(\sum_\beta(\varphi_r^\beta)^2\Big)^2
= \frac{ZcV^2}{2nN}\sum_r\sum_{\alpha,\beta,\gamma}\Big[\,8\,\langle\varphi_i^\alpha\varphi_r^\beta\rangle\langle\varphi_i^\alpha\varphi_r^\gamma\rangle\langle\varphi_r^\beta\varphi_r^\gamma\rangle + (\ldots)\Big]
= \frac{ZcV^2}{nN}\sum_r\sum_\alpha (M^{-1})^2_{i,r}\,\tau_0 g + (\ldots),
\]

where (...) denotes all terms which are O(n²) and therefore do not survive the limit n → 0. For the term of general order m in the expansion we similarly have

\[
\frac{2c(-V)^m}{nN\,m!}\int D\varphi\,\varphi_i^2\sum_r\Big(\sum_\beta(\varphi_r^\beta)^2\Big)^m
= \frac{2c(-V)^m}{nN\,m!}\sum_r\int D\varphi\,\varphi_i^2\sum_{k_1+\cdots+k_n=m}\frac{m!}{k_1!\cdots k_n!}\prod_{1\le\beta\le n}(\varphi_r^\beta)^{2k_\beta}
= \frac{Zc(-V)^m}{nN}\sum_r\sum_\alpha (M^{-1})^2_{i,r}\,(\tau_0 g)^{m-1} + (\ldots).
\]

If we sum this over all i and m and take the limit n → 0, we obtain the self-energy

\[
\Sigma = \frac{c\,g\,V^2\tau_0}{1 + V\tau_0 g}. \tag{3.44}
\]

This Σ leads to a mass renormalization. Namely, if we sum the geometric series

\[
g_R = \frac{1}{-\tau_0\hat w(k) + \tau_0(cV-E)} + \frac{1}{-\tau_0\hat w(k) + \tau_0(cV-E)}\;\Sigma\;\frac{1}{-\tau_0\hat w(k) + \tau_0(cV-E)} + \ldots
= \frac{1}{-\tau_0\hat w(k) + \tau_0(cV-E) - \dfrac{\tau_0 cV^2 g}{1+\tau_0 Vg}}
= \frac{1}{-\tau_0\hat w(k) + \tau_0(cV_R - E)},
\]

we obtain a renormalized action A in which V is replaced by V_R = V/(1 + τ₀Vg). Our renormalized action again has a saddle point given by 3.30, but with V replaced by V_R. So the renormalized action is A_s = c(cτ₀V_R z)^{-3/2} J(k²) with k² = E/(cV_R), and this results in the density of states

\[
\rho(E) \sim \exp\left(-c\,(c\tau_0 V_R z)^{-3/2}\, J(k^2)\right), \tag{3.45}
\]

where J(k²) is defined by 3.30 together with the equation of motion 3.33.

3.6. Survival probability and p(s, t)

Now that we have found the density of states, we can look at R(t) = ∫_{−∞}^{∞} dE ρ(E) e^{−Et}. Since Ψ(t) ∼ R(t), we have Ψ(t) ∼ ∫_{−∞}^{∞} dE ρ(E) e^{−Et}. This integral can be evaluated using the saddle point method; it is directly seen that

\[
\Psi(t) \sim \int_{-\infty}^{\infty} dE\, \rho(E)\, e^{-Et} \sim \exp\left(-\min_{k^2}\left\{c\,(c\tau_0 V_R z)^{-3/2}\, J(k^2) + Et\right\}\right). \tag{3.46}
\]

This can be written more elegantly if we define τ = tV_R(cτ₀V_R z)^{3/2} and F(τ) = min_{k²}{k²τ + J(k²)}. Then

\[
\Psi(t) \sim \exp\left(-c^{-1/2}(\tau_0 V_R z)^{-3/2}\, F(\tau)\right). \tag{3.47}
\]

Earlier we found the relation Ψ(t) = Σ_s p(s,t)(1−c)^s = Σ_s p(s,t) e^{−λs} with c = 1 − e^{−λ}. For very large t one can approximate this by Ψ(t) = ∫₀^∞ ds p(s,t) e^{−λs}, which is a Laplace transform of p(s,t). Inverting this Laplace transform yields p(s,t); using the saddle point method once more, this gives

\[
p(s,t) \sim \exp\left(\max_\lambda\left\{\lambda s - c^{-1/2}(\tau_0 V_R z)^{-3/2}\, F(\tau)\right\}\right). \tag{3.48}
\]

To find this maximum, we first note that c is very small, so c = 1 − e^{−λ} ≈ λ. Differentiating with respect to λ gives

\[
0 = \frac{d}{d\lambda}\left[\lambda s - \lambda^{-1/2}(\tau_0 V_R z)^{-3/2} F(\tau)\right]
= s + \frac{F(\tau)}{2\lambda^{3/2}(\tau_0 V_R z)^{3/2}} - \frac{k^2}{\lambda^{1/2}(\tau_0 V_R z)^{3/2}}\frac{d\tau}{d\lambda}
= s + \frac{F(\tau)}{2\lambda^{3/2}(\tau_0 V_R z)^{3/2}} - \frac{3k^2\tau}{2\lambda^{3/2}(\tau_0 V_R z)^{3/2}}
= s + \frac{1}{\lambda^{3/2}(\tau_0 V_R z)^{3/2}}\left(\frac{1}{2}J(k^2) - k^2\tau\right).
\]

Solving for λ^{1/2} we obtain

\[
\lambda^{1/2} = \left[\frac{1}{s\,(\tau_0 V_R z)^{3/2}}\left(k^2\tau - \frac{1}{2}J(k^2)\right)\right]^{1/3}
= \left[\frac{1}{s\,(\tau_0 V_R z)^{3/2}}\left(-J'(k^2)\,k^2 - \frac{1}{2}J(k^2)\right)\right]^{1/3},
\]

where in the second equality we used that F(τ) = min_{k²}{k²τ + J(k²)}, so τ = −J'(k²). Now if we fix s/t such that

\[
\frac{s}{t} = V_R\left(k^2 + \frac{J(k^2)}{2J'(k^2)}\right), \tag{3.49}
\]

then

\[
\lambda^{1/2} = \left[-\frac{J'(k^2)}{t\,V_R\,(\tau_0 V_R z)^{3/2}}\right]^{1/3}. \tag{3.50}
\]

Now that we know where the maximum occurs, we can calculate its value. Using 3.50 and setting τ₀ = 1 we obtain

\[
\lambda s - \frac{F(\tau)}{\lambda^{1/2}(V_R z)^{3/2}}
= \frac{1}{\lambda^{1/2}(V_R z)^{3/2}}\left[\lambda^{3/2}(V_R z)^{3/2}\, s - F(\tau)\right]
= \frac{1}{\lambda^{1/2}(V_R z)^{3/2}}\left[k^2\tau - \frac{1}{2}J(k^2) - F(\tau)\right]
= -\frac{3J(k^2)}{2\lambda^{1/2}(V_R z)^{3/2}}
= -\frac{3J(k^2)}{2(V_R z)^{3/2}}\left[-\frac{t\,V_R\,(V_R z)^{3/2}}{J'(k^2)}\right]^{1/3}
= -\frac{3J(k^2)}{2z}\left[-\frac{t}{V_R^2\, J'(k^2)}\right]^{1/3}.
\]

So the result for p(s,t) is

\[
p(s,t) \sim \exp\left(-\frac{3J(k^2)}{2z}\left[-\frac{t}{V_R^2\, J'(k^2)}\right]^{1/3}\right). \tag{3.51}
\]

For very compact walks we have s/t = 2V_R k²/3 ≪ 1, because for very small k² we can use 3.38 to see that

\[
\frac{s}{t} = V_R\left(k^2 + \frac{J(k^2)}{2J'(k^2)}\right) = V_R\left(k^2 - \frac{4\pi^4/(3k^3)}{2\cdot 2\pi^4/k^5}\right) = V_R\left(k^2 - \frac{k^2}{3}\right) = \frac{2V_R k^2}{3}. \tag{3.52}
\]

Then

\[
-\frac{3J(k^2)}{2z}\left[-\frac{t}{V_R^2 J'(k^2)}\right]^{1/3}
= -\frac{3}{2z}\left[-\frac{t\, J(k^2)^3}{V_R^2\, J'(k^2)}\right]^{1/3}
= -\frac{3}{2z}\left[\frac{t}{V_R^2}\,\frac{\left(\frac{4\pi^4}{3k^3}\right)^3}{\frac{2\pi^4}{k^5}}\right]^{1/3}
= -\frac{1}{z}\left[\frac{t}{V_R^2}\,\frac{(2\pi^4)^2}{k^4}\right]^{1/3}
= -\frac{1}{z}\left[\frac{t^{1/2}}{V_R}\,\frac{2\pi^4}{k^2}\right]^{2/3}
= -\frac{1}{z}\left[t^{3/2}\,\frac{4\pi^4}{3s}\right]^{2/3}
= -\frac{t}{z}\left(\frac{4\pi^4}{3s}\right)^{2/3}.
\]

So for extremely compact walks the result reads

\[
p(s,t) \sim \exp\left(-\frac{t}{z}\left(\frac{4\pi^4}{3s}\right)^{2/3}\right). \tag{3.53}
\]
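The small-k² reduction in (3.52) amounts to a one-line symbolic check (SymPy assumed; symbol names are mine):

```python
import sympy as sp

k2, VR = sp.symbols("k2 V_R", positive=True)
J = sp.Rational(4, 3) * sp.pi**4 * k2**sp.Rational(-3, 2)   # J(k^2) = 4*pi^4/(3*k^3)
Jp = sp.diff(J, k2)
ratio = sp.simplify(VR * (k2 + J / (2 * Jp)))               # s/t of (3.49)
print(ratio)   # -> 2*V_R*k2/3, i.e. s/t = 2 V_R k^2 / 3 as in (3.52)
```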


4. Conclusion

In this thesis, we looked at various subjects concerning random walks. We first constructed a random walk on R² with the intention of rescaling it; by rescaling this random walk, we obtained Brownian motion. Then we looked at Brownian motion with drift on R. As original work, we found the distribution function of the supremum of Brownian motion with drift using tools from queueing theory, in particular the M/D/1 queue; a lot is known about this model, which enabled us to find the distribution function of the supremum of Brownian motion with drift in R. The result, under some assumptions, reads

\[
P(\eta \le x) = \lim_{\alpha\to\infty} \frac{C}{C+\sqrt{\alpha m}}\,\exp\!\left(\frac{\alpha m x}{C+\sqrt{\alpha m}}\right)\sum_{k=0}^{n} \frac{(-\alpha m x + \sqrt{\alpha m}\,k)^k}{(C+\sqrt{\alpha m})^k\, k!}\,\exp\!\left(-\frac{\sqrt{\alpha m}\,k}{C+\sqrt{\alpha m}}\right), \tag{4.1}
\]

with n as in theorem 4. If more time were available, we could try to prove the assumptions under which this result holds; the assumptions that could be proven in further research are found in theorem 4 of chapter 2. After this we looked at the range of Brownian motion in R. Using an article by W. Feller [1], we found the probability density function of the range, denoted δ(T, r) and given by

\[
\delta(T, r) = \frac{8}{\sqrt T}\sum_{k=1}^{\infty} (-1)^{k-1} k^2\,\varphi\!\left(\frac{kr}{\sqrt T}\right). \tag{4.2}
\]

Then we looked at the probability that a random walker in 3D visits s distinct sites in t steps. We treated this problem using field theory, following a paper of Th. M. Nieuwenhuizen [2]. The result for extremely compact walks, i.e. walks with s/t ≪ 1, reads

\[
p(s,t) \sim \exp\left(-\frac{t}{z}\left(\frac{4\pi^4}{3s}\right)^{2/3}\right). \tag{4.3}
\]


Bibliography

[1] Feller, William. "The asymptotic distribution of the range of sums of independent random variables." The Annals of Mathematical Statistics (1951): 427-432.

[2] Nieuwenhuizen, Th. M. "Trapping and Lifshitz tails in random media, self-attracting polymers, and the number of distinct sites visited: a renormalized instanton approach in three dimensions." Physical Review Letters 62.4 (1989): 357.

[3] Weiss, George H. "Random Walks and Their Applications: Widely used as mathematical models, random walks play an important role in several areas of physics, chemistry, and biology." American Scientist (1983): 65-71.

[4] Norris, James R. Markov Chains. Cambridge University Press, 1998.

[5] Reich, Edgar. "On the Integrodifferential Equation of Takacs. I." The Annals of Mathematical Statistics (1958): 563-570.

[6] Erlang, A.K. "The theory of probabilities and telephone conversations" and "Telephone waiting times", first published in Nyt Tidsskrift for Matematik B 20 (1909) 33 and Matematisk Tidsskrift B 31 (1920) 25; English translations in E. Brockmeyer et al., The Life and Works of A.K. Erlang, The Copenhagen Telephone Company, Copenhagen, 1948.

[7] Lawler, Gregory F. Random Walk and the Heat Equation. Vol. 55. American Mathematical Society, 2010.


A. Gaussian integrals

Consider the Gaussian integral

\[
\int_{-\infty}^{\infty} e^{-\frac{1}{2}ax^2}\, dx = \sqrt{\frac{2\pi}{a}}. \tag{A.1}
\]

We can add a linear term in the exponent,

\[
I = \int_{-\infty}^{\infty} e^{-\frac{1}{2}ax^2 + Jx}\, dx, \tag{A.2}
\]

where J is a constant. We can compute this in the following way. First we complete the square in the exponent:

\[
-\frac{1}{2}ax^2 + Jx = -\frac{a}{2}\left(x - \frac{J}{a}\right)^2 + \frac{J^2}{2a}. \tag{A.3}
\]

Now we can make use of the first Gaussian integral to conclude

\[
I = \sqrt{\frac{2\pi}{a}}\; e^{J^2/2a}. \tag{A.4}
\]

We can define a Gaussian integral in more than one dimension. Let M be an n × n symmetric, positive definite matrix (so that the integral below converges). There exists an orthogonal matrix O such that M̃ = O^{-1}MO is diagonal with eigenvalues m_i. Let J be an n-dimensional vector. The integral we will consider is

\[
Z(J) = \int \exp\!\left(-\frac{1}{2}x^T M x + J^T x\right) dx_1\, dx_2\ldots dx_n, \tag{A.5}
\]

where we integrate over the whole of R^n. Write x = Ox̃ and J = OJ̃. To change variables to the eigenbasis, we have to multiply inside the integral by the determinant of the Jacobian, but since O is orthogonal we have |O| = 1. So the integral becomes

\[
Z(J) = \int \exp\!\left(-\frac{1}{2}\tilde x^T O^{-1}MO\,\tilde x + \tilde J^T\tilde x\right) d\tilde x_1\ldots d\tilde x_n
= \int \exp\!\left(-\frac{1}{2}\tilde x^T \tilde M \tilde x + \tilde J^T\tilde x\right) d\tilde x_1\ldots d\tilde x_n
= \prod_{i=1}^{n}\int \exp\!\left(-\frac{1}{2}m_i\tilde x_i^2 + \tilde J_i\tilde x_i\right) d\tilde x_i
\]
\[
= \prod_{i=1}^{n}\sqrt{\frac{2\pi}{m_i}}\,\exp\!\left[\frac{\tilde J_i^2}{2m_i}\right]
= \frac{(2\pi)^{n/2}}{\sqrt{\det M}}\,\exp\!\left(\frac{1}{2}\tilde J^T\tilde M^{-1}\tilde J\right)
= \frac{(2\pi)^{n/2}}{\sqrt{\det M}}\,\exp\!\left(\frac{1}{2}J^T M^{-1} J\right).
\]

We see then that Z(0) = (2π)^{n/2} det(M)^{-1/2}. If we differentiate Z(J) with respect to J_i and J_j and evaluate at J = 0, we obtain two expressions which must be equal; the notation for this expression is ⟨q_iq_j⟩:

\[
\langle q_i q_j\rangle = \frac{1}{Z(0)}\frac{\partial}{\partial J_i}\frac{\partial}{\partial J_j}\, Z(J)\Big|_{J=0}
= \frac{1}{Z(0)}\frac{\partial}{\partial J_i}\left(\sum_{k=1}^n (M^{-1})_{jk} J_k\, Z(J)\right)\Big|_{J=0}
= (M^{-1})_{ji} = (M^{-1})_{ij}.
\]

On the other hand we have

\[
\frac{1}{Z(0)}\frac{\partial}{\partial J_i}\frac{\partial}{\partial J_j}\, Z(J)\Big|_{J=0} = \frac{1}{Z(0)}\int x_i x_j \exp\!\left(-\frac{1}{2}x^T M x\right) dx_1\, dx_2\ldots dx_n. \tag{A.6}
\]

So we conclude that

\[
(M^{-1})_{ij} = \frac{1}{Z(0)}\int x_i x_j \exp\!\left(-\frac{1}{2}x^T M x\right) dx_1\, dx_2\ldots dx_n. \tag{A.7}
\]

Repeating the differentiation, one also sees that

\[
\langle q_1 q_2\cdots q_m\rangle = \langle q_1 q_2\rangle\cdots\langle q_{m-1}q_m\rangle + \text{all other pairings}. \tag{A.8}
\]
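Both (A.7) and the pairing identity (A.8) can be checked by sampling, since exp(−½xᵀMx) is, up to normalization, the density of a N(0, M⁻¹) vector. A small sketch (NumPy assumed; the matrix and sample size are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(6)

A = rng.normal(size=(4, 4))
M = A @ A.T + 4.0 * np.eye(4)       # random symmetric positive definite M
Minv = np.linalg.inv(M)

x = rng.multivariate_normal(np.zeros(4), Minv, size=1_000_000)  # density ~ exp(-x^T M x / 2)

# (A.7): <x_i x_j> = (M^{-1})_{ij}
print((x[:, 0] * x[:, 1]).mean(), Minv[0, 1])

# (A.8): <x_0 x_1 x_2 x_3> = sum over the three pairings
emp = (x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3]).mean()
wick = Minv[0, 1] * Minv[2, 3] + Minv[0, 2] * Minv[1, 3] + Minv[0, 3] * Minv[1, 2]
print(emp, wick)
```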


B. Popular summary

In this thesis, random walks are the central subject. What are random walks? Let us look at a random walk in one dimension. Toss a coin and see whether it comes up heads or tails: if heads, take a step to the right; if tails, take a step to the left. Repeat this after every step. This is a random walk on the integers Z. Mathematically, we say that we have a sequence of random variables (X_n)_{n∈N} with X_{n+1} = X_n + Y, where Y is the random variable that tells you how to step; in our example Y equals −1 (a step to the left) with probability 1/2 and 1 (a step to the right) with probability 1/2. Random walks can also be carried out in higher dimensions; in dimension 2, for instance, the walker can walk on a square lattice. It is also possible to define a random walk differently: the walker can move in a completely arbitrary direction after every step.

The idea is now to rescale these random walks. That is, we increase the number of steps the walker takes while at the same time shrinking the sizes of the steps. In the limit where the step size goes to 0 and the number of steps to infinity, we obtain Brownian motion. Denote it by B(t), where t represents time and B(t) the position of the motion after a time t in dimension 1. We can then look at Brownian motion with negative drift in one dimension, i.e. at B(t) − Ct with C > 0. In this thesis we study the maximal difference between two points in time, denoted by sup_{s≤t}(B(t) − B(s) − C(t − s)). Its probability distribution can be found with queueing theory. This theory studies models that try to describe real-world queues as well as possible. One example is the M/D/1 queue, which we use for the calculation of the distribution just mentioned. In this model, customers arrive according to a Poisson process and are served in a deterministic time: for example, we arrive at the checkout in the supermarket, and the cashier always takes exactly one minute to help a customer. One can also consider more general models, but they are not treated in this thesis.

Concerning random walks, we can also ask what the probability is that the walker visits s distinct sites in t time steps. In this thesis we consider this problem in three dimensions on a cubic lattice. We answer this question by answering a quite different one: consider again a random walk on a cubic lattice in three dimensions, and look at the probability of surviving t steps when traps have been distributed at random places on the lattice. Remarkably enough, this helps us find the probability that a walker visits s distinct lattice sites in t time steps.
