

CORRELATION STRUCTURE OF INTERMITTENCY

IN THE PARABOLIC ANDERSON MODEL

J. Gärtner
Fachbereich Mathematik
Technische Universität Berlin
Straße des 17. Juni 136
D-10623 Berlin, Germany.

F. den Hollander
Mathematisch Instituut
Universiteit Nijmegen
Toernooiveld 1
NL-6525 ED Nijmegen, The Netherlands.

Abstract

Consider the Cauchy problem $\partial u(x,t)/\partial t = Hu(x,t)$ ($x \in \mathbb{Z}^d$, $t \ge 0$) with initial condition $u(x,0) \equiv 1$ and with $H$ the Anderson Hamiltonian $H = \kappa\Delta + \xi$. Here $\Delta$ is the discrete Laplacian, $\kappa \in (0,\infty)$ is a diffusion constant, and $\xi = \{\xi(x) : x \in \mathbb{Z}^d\}$ is an i.i.d. random field taking values in $\mathbb{R}$. Gärtner and Molchanov (1990) have shown that if the law of $\xi(0)$ is nondegenerate, then the solution $u$ is asymptotically intermittent. This means that $\lim_{t\to\infty} \langle u^2(0,t)\rangle/\langle u(0,t)\rangle^2 = \infty$, where $\langle\cdot\rangle$ denotes expectation w.r.t. $\xi$, and similarly for the higher moments. Qualitatively their result says that, as $t$ increases, the random field $\{u(x,t) : x \in \mathbb{Z}^d\}$ develops sparsely distributed high peaks, which give the dominant contribution to the moments as they become sparser and higher.

In the present paper we study the structure of the intermittent peaks for the special case where the law of $\xi(0)$ is (in the vicinity of) the double exponential $\mathrm{Prob}(\xi(0) > s) = \exp[-e^{s/\rho}]$ ($s \in \mathbb{R}$). Here $\rho \in (0,\infty)$ is a parameter that can be thought of as measuring the degree of disorder in the $\xi$-field. Our main result is that, for fixed $x,y \in \mathbb{Z}^d$ and $t \to \infty$, the correlation coefficient of $u(x,t)$ and $u(y,t)$ converges to $\|w_\varrho\|_{\ell^2}^{-2} \sum_{z\in\mathbb{Z}^d} w_\varrho(x+z)w_\varrho(y+z)$. In this expression, $\varrho = \rho/\kappa$, while $w_\varrho : \mathbb{Z}^d \to \mathbb{R}^+$ is given by $w_\varrho = (v_\varrho)^{\otimes d}$ with $v_\varrho : \mathbb{Z} \to \mathbb{R}^+$ the unique centered ground state of the 1-dimensional nonlinear equation $\Delta v + 2\varrho\, v\log v = 0$ (ground state means the solution in $\ell^2(\mathbb{Z})$ with minimal $\ell^2$-norm). Qualitatively our result says that the high peaks of $u$ have a shape that is a multiple of $w_\varrho$ relative to the center of the peak.

It will turn out that if the right tail of the law of $\xi(0)$ is thicker (or thinner) than the double exponential, then the correlation coefficient of $u(x,t)$ and $u(y,t)$ converges to $\delta_{xy}$ (resp. the constant function 1). Thus, the double exponential family is the critical class exhibiting a nondegenerate correlation structure.

1991 Mathematics Subject Classification: 60H25, 82C44 (primary), 60F10, 60J15, 60J55 (secondary). Key words: random media, intermittency, large deviations, variational problem, nonlinear difference equation.

Running title: The parabolic Anderson model. Date: February 20, 1997.


Contents

0 Introduction
  0.1 The parabolic Anderson model
  0.2 Intermittency
  0.3 Correlation structure: (★) and Theorems 1–2
  0.4 A variational problem: (★★) and Proposition 3
  0.5 Asymptotics of the 1-st and 2-nd moments: Theorem 3
  0.6 Discussion
  0.7 Numerical study of (★)
  0.8 Related work
1 Heuristic explanation of Theorem 3
  1.1 Expansion for the 1-st moment
  1.2 Expansion for the 2-nd moment
2 Main propositions
  2.1 Clumping of the local times: Proposition 4
  2.2 Centering and truncation of the local times: Proposition 5
  2.3 Two time scales: Proposition 6
  2.4 Transformation of the random walk: Proposition 7
  2.5 Separation of the time scales: Proposition 8
  2.6 Loss of memory: Proposition 9
  2.7 Completion of the proof of Theorem 3
3 Proof of Propositions 4–6
  3.1 Proof of Proposition 4
  3.2 Proof of Proposition 5
  3.3 Proof of Proposition 6
4 Proof of Propositions 7–9
  4.1 Proof of Proposition 7
  4.2 Proof of Proposition 8
  4.3 Proof of Proposition 9
5 Functional analysis
  5.1 Proof of Proposition 3
    5.1.1 Analysis of (★★)
    5.1.2 The link between (★★) and (★)
  5.2 Proof of Theorem 2
    5.2.1 Parts (1–2) and (3)(i–ii)
    5.2.2 Parts (4) and (5)
    5.2.3 Part (3)(iii)
    5.2.4 Parts (6) and (7)
  5.3 Finite approximation of (★★)

0 Introduction

0.1 The parabolic Anderson model

Consider the Cauchy problem

  $\partial u(x,t)/\partial t = Hu(x,t)$  ($x \in \mathbb{Z}^d$, $t \ge 0$),
  $u(x,0) \equiv 1$,   (0.1)

with $H$ the Anderson Hamiltonian

  $H = \kappa\Delta + \xi$.   (0.2)

Here $\Delta$ is the discrete Laplacian, $\kappa \in (0,\infty)$ is a diffusion constant, and

  $\xi = \{\xi(x) : x \in \mathbb{Z}^d\}$   (0.3)

is an i.i.d. random field taking values in $\mathbb{R}$. As an operator, $H$ only acts on the spatial variable:

  $(\Delta u)(x,t) = \sum_{y : |y-x|=1} [u(y,t) - u(x,t)]$,
  $(\xi u)(x,t) = \xi(x)u(x,t)$.   (0.4)

Note that $H$ has two competing parts:

(1) a diffusive part $\kappa\Delta$, which tends to make $u$ spatially flat;

(2) a multiplicative part $\xi$, which tends to make $u$ spatially irregular.

($H$ is the so-called `tight-binding Hamiltonian with diagonal disorder' considered in Anderson (1958).)

Depending on $\kappa$ and on the marginal law of $\xi$, the equation in (0.1) can be used to model various physical and chemical phenomena. For instance, $t \to \{u(x,t) : x \in \mathbb{Z}^d\}$ may describe the evolution of the density field of a chemical component in a catalytic reaction (Zel'dovich (1984)) or the average occupation field in a system of particles that branch and migrate (Dawson and Ivanoff (1978)). In these examples the role of $\xi$ is to act as a spatially inhomogeneous local rate of catalysis resp. branching. Other applications are: the Fisher-Eigen equation in Darwinian evolution (Ebeling et al. (1984)) and Burgers' equation with a random force in hydrodynamics (Carmona and Molchanov (1994)).

The following result gives a sufficient condition on $\xi$ to ensure that (0.1) is actually applicable to such concrete situations. Let $\langle\cdot\rangle$ denote expectation w.r.t. the $\xi$-field. Let $Z = \{Z(t) : t \ge 0\}$ denote simple random walk on $\mathbb{Z}^d$ jumping at rate $2d\kappa$ (i.e., the Markov process with generator $\kappa\Delta$). Write $P_x$, $E_x$ to denote probability and expectation on path space given $Z(0) = x$.


Proposition 1

(Gärtner and Molchanov (1990)) If

  $\langle \xi^+(0)\,[\log \xi^+(0)]^d \rangle < \infty$ with $\xi^+(0) = \xi(0) \vee e$,   (0.5)

then (0.1) has a unique nonnegative solution $\xi$-a.s. This solution admits the Feynman-Kac representation

  $u(x,t) = E_x\left[\exp\int_0^t \xi(Z(s))\,ds\right]$.   (0.6)

Moreover, for all $t \ge 0$ the random field $\{u(x,t) : x \in \mathbb{Z}^d\}$ is stationary and ergodic under translations.

The proof of Proposition 1, which is based on ideas from percolation, shows that in dimension $d \ge 2$ condition (0.5) is in fact necessary: if (0.5) fails then $\xi$-a.s. there is no nonnegative solution to (0.1).
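The Feynman-Kac formula (0.6) can be checked numerically on a small system. The sketch below is an illustration only: the 3-site ring (a periodic stand-in for $\mathbb{Z}$), $\kappa = 1$, $t = 1$ and the field values are assumptions chosen for the demo. It compares a Monte Carlo average of $\exp\int_0^t \xi(Z(s))\,ds$ over walk paths with a direct Euler integration of the equation (0.1).

```python
import math, random

random.seed(7)

# Demo parameters (assumptions, not from the paper): a 3-site ring, d = 1,
# kappa = 1, t = 1, and one fixed realization xi of the field.
kappa, t = 1.0, 1.0
xi = [0.2, -0.3, 0.1]
n = len(xi)

def u_ode(x, steps=100_000):
    """Integrate du/ds = kappa*Delta*u + xi*u, u(.,0) = 1, by explicit Euler."""
    u = [1.0] * n
    dt = t / steps
    for _ in range(steps):
        u = [u[y] + dt * (kappa * (u[(y - 1) % n] + u[(y + 1) % n] - 2.0 * u[y])
                          + xi[y] * u[y])
             for y in range(n)]
    return u[x]

def u_mc(x, samples=20_000):
    """Estimate E_x exp(int_0^t xi(Z(s)) ds) for the rate-2*kappa ring walk."""
    total = 0.0
    for _ in range(samples):
        pos, s, integral = x, 0.0, 0.0
        while s < t:
            hold = min(random.expovariate(2.0 * kappa), t - s)
            integral += hold * xi[pos]       # time spent at pos weighted by xi
            s += hold
            pos = (pos + random.choice((-1, 1))) % n
        total += math.exp(integral)
    return total / samples

print(u_ode(0), u_mc(0))   # the two estimates of u(0,1) should agree closely
```

The agreement of the two numbers is the content of (0.6) for this toy system; the Monte Carlo error scales like the reciprocal square root of the sample size.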

0.2 Intermittency

A discussion of some mathematical problems related to (0.1) can be found in the recent memoir by Carmona and Molchanov (1994). In the present paper we shall be concerned with one particular aspect of (0.1), namely the occurrence of intermittency.

We shall henceforth assume that the cumulant generating function of the $\xi$-field is finite on the positive half axis:

  $H(t) = \log\langle e^{t\xi(0)}\rangle < \infty$ for all $t \ge 0$.   (0.7)

It is easily seen from the representation in (0.6) that assumption (0.7) is equivalent to all moments and correlations of the $u$-field being finite for all times (see also Lemmas 1 and 2 in Section 1).

Definition

Let

  $\Lambda_k(t) = \log\langle u^k(0,t)\rangle$  ($k = 1,2,\ldots$).   (0.8)

The system (0.1) is said to be intermittent if¹

  $\lim_{t\to\infty}\left[\frac{\Lambda_l(t)}{l} - \frac{\Lambda_k(t)}{k}\right] = \infty$ for all $l > k \ge 1$.   (0.9)

Qualitatively, (0.9) means that the $u$-field develops sparsely distributed high peaks as $t$ increases. These peaks give the dominant contribution to the moments as they become sparser and higher. Thus the landscape formed by $u$ is so irregular that the a.s. growth at a fixed site differs from the average growth in a large box.

¹It is easily checked that (0.9) holds for all $l > k \ge 1$ iff it holds for $k = 1$, $l = 2$ (Gärtner and Molchanov (1990), Section 1.1).


As is evident from (0.1–0.2), peaks tend to grow in the vicinity of where the $\xi$-field is large (at a rate proportional to the field), but tend to be flattened out by the diffusion. By analogy with the theory of Anderson localization (see e.g. Fröhlich et al. (1985)), one may expect to find from a spectral analysis of the operator in (0.2) that the effect of the randomness in the $\xi$-field qualitatively dominates the effect of the diffusion term $\kappa\Delta$. This is indeed the case, as expressed by the following result.

Proposition 2

(Gärtner and Molchanov (1990)) If

  $\xi(0) \ne$ constant,   (0.10)

then (0.1) is intermittent.

0.3 Correlation structure: (★) and Theorems 1–2

Our goal in this paper is to show that there is a qualitative change in the structure of the intermittent peaks when the law of $\xi(0)$ is (in the vicinity of) the double exponential

  $\mathrm{Prob}(\xi(0) > s) = \exp[-e^{s/\rho}]$  ($s \in \mathbb{R}$).   (0.11)

Here $\rho \in (0,\infty)$ is a parameter that can be thought of as measuring the degree of disorder in the $\xi$-field, because the density associated with (0.11) rapidly drops to zero outside the interval $[-\rho,\rho]$. Our main result, Theorem 1 below, gives the correlation coefficient of $u(x,t)$ and $u(y,t)$ for $x,y \in \mathbb{Z}^d$ fixed and $t \to \infty$. We shall see that what this result says is that the intermittent peaks have a particular asymptotic shape that depends on the ratio $\varrho = \rho/\kappa$ (see Section 0.6).
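The law (0.11) is easy to sample by inversion: if $Y$ is a standard exponential random variable, then $\rho\log Y$ has exactly the upper tail $\exp[-e^{s/\rho}]$. A minimal sketch (the value $\rho = 2$ and the sample size are arbitrary choices for the demo):

```python
import math, random

random.seed(1)

# Inverse-CDF sampler for (0.11): if Y ~ Exp(1), then xi = rho*log(Y)
# satisfies Prob(xi > s) = Prob(Y > e^{s/rho}) = exp(-e^{s/rho}).
def sample_xi(rho):
    return rho * math.log(random.expovariate(1.0))

rho = 2.0
draws = [sample_xi(rho) for _ in range(200_000)]

# Empirical check of the tail at s = 0: Prob(xi > 0) = exp(-1).
frac_positive = sum(1 for x in draws if x > 0.0) / len(draws)
print(frac_positive, math.exp(-1.0))
```

The empirical tail frequency at $s = 0$ should be close to $1/e \approx 0.368$, and the sample mean close to $-\rho\gamma$ with $\gamma$ Euler's constant (the mean of $\log Y$ for $Y \sim \mathrm{Exp}(1)$ is $-\gamma$).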

To formulate Theorem 1 we introduce the following 1-dimensional nonlinear difference equation:

  (★)  $\Delta v + 2\varrho\, v\log v = 0$,  $v : \mathbb{Z} \to \mathbb{R}^+ = (0,\infty)$,  $\varrho = \rho/\kappa$.

We shall be interested in the ground states of (★), i.e., the solutions in $\ell^2(\mathbb{Z})$ with minimal $\ell^2$-norm.

Theorem 1

Fix $\rho, \kappa \in (0,\infty)$ and put $\varrho = \rho/\kappa$. Suppose that the law of $\xi(0)$ is given by (0.11). If there exists a $v_\varrho : \mathbb{Z} \to \mathbb{R}^+$ such that

A1. $v_\varrho$ is a ground state of (★),

A2. all other ground states are translations of $v_\varrho$,

then for $x,y \in \mathbb{Z}^d$ fixed

  $\lim_{t\to\infty} \frac{\langle u(x,t)u(y,t)\rangle}{\langle u^2(0,t)\rangle} = \|w_\varrho\|_{\ell^2}^{-2} \sum_{z\in\mathbb{Z}^d} w_\varrho(x+z)w_\varrho(y+z)$,   (0.12)

where

  $w_\varrho = (v_\varrho)^{\otimes d}$.   (0.13)


Theorem 1, which will follow from Theorem 3 in Section 0.5, gives us a precise description of the correlation structure of the intermittent peaks provided assumptions A1–A2 are met. However, the verification of these assumptions is a nontrivial problem, due to the discrete nature of (★). As a partial result we can offer the following theorem, which will be proved in Section 5.

Theorem 2

Let $V_\varrho = \{v : \mathbb{Z} \to \mathbb{R}^+ : v$ is a ground state of (★)$\}$.

I. For all $\varrho \in (0,\infty)$:

(1) A1 holds, i.e., $V_\varrho \ne \emptyset$.

(2) $V_\varrho$ is compact in the $\ell^2$-metric modulo shifts.²

(3) For every centered $v \in V_\varrho$:³

 (i) either $v(x) < v(0)$ for all $x \ne 0$ (single-point maximum) or $v(x) < v(0) = v(1)$ for all $x \ne 0,1$ (double-point maximum);

 (ii) $v$ is strictly unimodal, i.e., strictly monotone left and right of its maximum;

 (iii) $v(x+1)/v(x) \sim 1/(2\varrho\, x\log x)$ ($x \to \infty$), and similarly for $x \to -\infty$.

II. For $\varrho$ sufficiently large:

(4) A2 holds, i.e., $V_\varrho$ is a singleton modulo shifts.

(5) The centered $v$ has a single-point maximum and is symmetric.

III. For any centered family $(v_\varrho)_{\varrho\in(0,\infty)}$ with $v_\varrho \in V_\varrho$:

(6) $\lim_{\varrho\to\infty} v_\varrho = 0$ pointwise.

(7) $\lim_{\varrho\to 0} v_\varrho(\lfloor x/\sqrt{\varrho}\rfloor) = \exp[\frac{1}{2}(1-x^2)]$ in $L^2(\mathbb{R})$ and uniformly on compacts in $\mathbb{R}$ (where $\lfloor\cdot\rfloor$ denotes the integer part).

Our estimates in Section 5 show that Theorem 2II holds when $\varrho \ge 2/\log(1+e^{-2})$. Possibly it holds for all $\varrho > 0$, but we are unable to prove this. See Section 0.7 for a description of numerical work.⁴

Note that Theorem 2I(3)(iii) implies

  $v(x) = \exp[-(1+o(1))\,|x|\log|x|]$  ($|x| \to \infty$).   (0.14)

²For $v \in \ell^2(\mathbb{Z})$, let $[v] = \{v(\cdot+x) : x \in \mathbb{Z}\}$ be the equivalence class given by the translations of $v$. For $V \subset \ell^2(\mathbb{Z})$, let $[V] = \{[v] : v \in V\}$ be the set of equivalence classes of $V$. We equip $[\ell^2(\mathbb{Z})]$ with the metric $\|[u]-[v]\|_{\ell^2} = \inf_{x\in\mathbb{Z}} \|u(\cdot)-v(\cdot+x)\|_{\ell^2}$. The statement in Theorem 2I(2) means that $[V_\varrho]$ is compact in the topology induced by this metric.

³We call $v \in \ell^2(\mathbb{Z})$ centered if $v(0) = \max_x v(x)$ and $v(x) < v(0)$ for $x < 0$.

⁴The continuous version of (★) is trivial. In fact, $v'' + 2\varrho\, v\log v = 0$ for $v : \mathbb{R} \to \mathbb{R}^+$ has only one solution in $L^2(\mathbb{R})$ (modulo translations), namely $v_\varrho(x) = \exp[\frac{1}{2}(1-\varrho x^2)]$. Indeed, multiply by $v'$ to see that any solution satisfies $\frac{1}{2}(v')^2 + \varrho v^2(\log v - \frac{1}{2}) \equiv A$ ($A \in \mathbb{R}$). If $v \in L^2(\mathbb{R})$, then necessarily $A = 0$ (compatible with $v(x)v'(x) \to 0$ as $|x| \to \infty$). Substitute $v = \exp(f)$ to get $\frac{1}{2}(f')^2 + \varrho(f - \frac{1}{2}) = 0$. The (twice differentiable) solutions of this equation are $f(x) = \frac{1}{2}[1-\varrho(x-c)^2]$ ($c \in \mathbb{R}$).


So, in particular, $w_\varrho$ defined in (0.13) is an element of $\ell^1(\mathbb{Z}^d) \cap \ell^2(\mathbb{Z}^d)$.

Remarks

(A) The proof in Sections 2–4 will show that we do not require the law of $\xi(0)$ to be given precisely by (0.11). What we actually need is that $H(t)$ defined in (0.7) has the following asymptotic property:

  $\lim_{t\to\infty} t H''(t) = \rho$ for some $\rho \in (0,\infty)$.   (0.15)

The parameter $\rho$ in (0.15) takes over the role of $\rho$ in (0.11). For the double exponential in (0.11) we have $H(t) = \log\Gamma(\rho t + 1)$, which indeed satisfies (0.15).

(B) The proof in Sections 2–4 will also show that if $\lim_{t\to\infty} tH''(t) = 0$ or $\infty$, then the l.h.s. of (0.12) is the constant function 1 resp. $\delta_{xy}$ (compatible with Theorem 2III). Thus, the distributions characterized by (0.15) form the critical class with an interesting correlation structure.
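The claim in Remark A is easy to check numerically: with $H(t) = \log\Gamma(\rho t+1)$ the quantity $tH''(t)$ should approach $\rho$. The sketch below approximates $H''$ by a central second difference ($\rho = 2$ is an arbitrary test value):

```python
import math

# Numerical check of (0.15) for the double exponential, where
# H(t) = log Gamma(rho*t + 1); math.lgamma computes log Gamma.
rho = 2.0
H = lambda t: math.lgamma(rho * t + 1.0)

def t_times_H2(t, h=1.0):
    """t times a central second difference approximating H''(t)."""
    return t * (H(t + h) - 2.0 * H(t) + H(t - h)) / (h * h)

for t in (1e2, 1e3, 1e4):
    print(t, t_times_H2(t))   # approaches rho = 2 as t grows
```

The printed values converge to $\rho$, consistent with $H''(t) = \rho^2\psi'(\rho t+1) \sim \rho/t$ where $\psi'$ is the trigamma function.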

0.4 A variational problem: (★★) and Proposition 3

In view of (0.6), it is no surprise that the proof of Theorem 1 uses large deviation theory and that the nonlinear equation (★) comes from an associated variational problem. We shall formulate this variational problem here. In Section 0.5 it will reappear in Theorem 3, which describes the asymptotic behavior of the 1-st and 2-nd moments of the field $\{u(x,t) : x \in \mathbb{Z}^d\}$ and which is a refinement of Theorem 1.

Let $\mathcal{P}_d = \mathcal{P}(\mathbb{Z}^d)$ denote the set of probability measures on $\mathbb{Z}^d$. On $\mathcal{P}_d$ define the functionals

  $I_d(p) = \sum_{\{x,y\} : |x-y|=1} \left(\sqrt{p(x)} - \sqrt{p(y)}\right)^2$,   (0.16)

  $J_d(p) = -\sum_x p(x)\log p(x)$.   (0.17)

Define

  (★★)  $\chi(\varrho) = \frac{1}{2d}\,\inf_{p\in\mathcal{P}_d} \{I_d(p) + \varrho J_d(p)\}$.

We have $0 \le \chi(\varrho) \le 1$ (because $I_d, J_d \ge 0$ resp. $I_d(\delta_0) = 2d$, $J_d(\delta_0) = 0$). Moreover, $\varrho \to \chi(\varrho)$ is nondecreasing and concave with limits $\lim_{\varrho\to 0}\chi(\varrho) = 0$ resp. $\lim_{\varrho\to\infty}\chi(\varrho) = 1$.
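To make the functionals (0.16)–(0.17) concrete, here is a minimal $d = 1$ evaluation. The test measures (a uniform block and a point mass) are illustrative choices; the point mass reproduces the values $I_1(\delta_0) = 2$, $J_1(\delta_0) = 0$ used in the bound above.

```python
import math

# Measures are dicts x -> p(x) with finite support on the integers.
def I1(p):
    """I_1(p): sum over nearest-neighbour edges {x,x+1} of
    (sqrt p(x) - sqrt p(x+1))^2, including the boundary edges to 0 mass."""
    xs = sorted(p)
    return sum((math.sqrt(p.get(x, 0.0)) - math.sqrt(p.get(x + 1, 0.0))) ** 2
               for x in range(xs[0] - 1, xs[-1] + 1))

def J1(p):
    """J_1(p) = -sum_x p(x) log p(x) (the entropy of p)."""
    return -sum(w * math.log(w) for w in p.values() if w > 0.0)

n = 8
p_unif = {x: 1.0 / n for x in range(n)}
print(I1(p_unif), J1(p_unif))      # 2/n and log n: spreading out is cheap for I_1
print(I1({0: 1.0}), J1({0: 1.0}))  # 2 and 0: concentrating is cheap for J_1
```

The two extremes illustrate the trade-off in (★★): $I_d$ penalizes concentration, $\varrho J_d$ penalizes spreading out, and the minimizer balances the two.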

The following proposition will be proved in Section 5.1 and provides the link between (★★) and (★).

Proposition 3

For all $\varrho \in (0,\infty)$:

(1) (★★) has a minimizer.

(2) $p$ is a minimizer of (★★) iff $p = \otimes_{i=1}^d (v_i^2/\|v_i\|_{\ell^2}^2)$ with $v_i$ any ground state of (★).

(3) $\chi(\varrho) = \varrho\log\|v\|_{\ell^2}$ with $v$ any ground state of (★).


Note that $\chi(\varrho)$ does not depend on the dimension $d$. Theorem 2III(7) and Proposition 3(3) imply that $\chi(\varrho) = \frac{\varrho}{4}[\log(1/\varrho) + \log(\pi e^2) + o(1)]$ ($\varrho \to 0$). Thus $\chi$ has infinite slope at $\varrho = 0$.

0.5 Asymptotics of the 1-st and 2-nd moments: Theorem 3

The $\chi$-function appears in the following asymptotic expansions. Recall the definition of $H$ in (0.7) and of $w_\varrho$ in (0.13).

Theorem 3

Fix $\rho, \kappa \in (0,\infty)$ and put $\varrho = \rho/\kappa$. Suppose that the law of $\xi(0)$ satisfies (0.15) and suppose that A1–A2 in Theorem 1 hold. Then for $x,y \in \mathbb{Z}^d$ fixed and $t \to \infty$

  $\langle u(x,t)\rangle = \left\{\sum_{z\in\mathbb{Z}^d} w_\varrho(x+z)\right\} \exp\left[H(t) - 2d\kappa\chi(\varrho)t + C_1(\kappa,\rho;t) + o(1)\right]$,   (0.18)

  $\langle u(x,t)u(y,t)\rangle = \left\{\sum_{z\in\mathbb{Z}^d} w_\varrho(x+z)w_\varrho(y+z)\right\} \exp\left[H(2t) - 4d\kappa\chi(\varrho)t + C_2(\kappa,\rho;t) + o(1)\right]$,   (0.19)

where $C_1(\kappa,\rho;t)$, $C_2(\kappa,\rho;t)$ are functions of order $o(t)$ that are independent (!) of $x,y$.

Theorem 3, which will be proved in Sections 2–4, obviously implies Theorem 1. It is crucial that the expansions in (0.18–0.19) are independent of $x,y$ up to the error term $o(1)$. The dependence on $x,y$ sits solely in the prefactors. We shall see in Section 2 that the functions $C_1, C_2$ are in fact very sensitive to the precise form of the function $H$, but that the prefactors only depend on the asymptotic behavior of $H$ assumed in (0.15). It is beyond the scope of the present paper to identify $C_1, C_2$.

0.6 Discussion

The double exponential is nondegenerate and so, according to Proposition 2, the $u$-field is intermittent. This means that the $k$-th moment is controlled by a different class of peaks for each $k$. Moreover, as $k$ increases the peaks in the `$k$-class' become sparser but higher (recall (0.8–0.9)).

For $t$ large but fixed, the ergodic theorem tells us that the ratio of 2-nd moments appearing in the l.h.s. of (0.12) essentially counts how often two peaks in the class $k = 2$ are seen at a relative distance $y-x$ resp. 0 in a large box. In other words, if we think of the peaks as located on random islands, then the ratio essentially counts the pairs of sites in a large box that are at distance $y-x$ resp. 0 and both belong to an island. It is in this sense that the correlation structure established in Theorem 1 is related to the typical size of the islands.


Peaks grow in the vicinity of where the $\xi$-field is large, but are not fully localized on the local maxima of $\xi$ because the diffusion term $\kappa\Delta$ has a tendency to spread them out. Now, the double exponential defined in (0.11) makes a sharp drop beyond the value $\rho$. Therefore, the larger $\rho$ the larger the local maxima of $\xi$ and hence the more localized the peaks. On the other hand, the larger $\kappa$ the faster the diffusion and hence the less localized the peaks. Theorem 1 shows that, apparently, it is the parameter $\varrho = \rho/\kappa$ that controls the size of the islands. More specifically, if $c_\varrho(x,y)$ denotes the r.h.s. of (0.12), then we see from Theorem 2III that

  $\lim_{\varrho\to\infty} c_\varrho(x,y) = \delta_{xy}$  ($x,y \in \mathbb{Z}^d$),
  $\lim_{\varrho\to 0} c_\varrho(\lfloor x/\sqrt{\varrho}\rfloor, \lfloor y/\sqrt{\varrho}\rfloor) = e^{-\frac{1}{4}|x-y|^2}$  ($x,y \in \mathbb{R}^d$).   (0.20)

The second statement says that an island in the class $k = 2$ has widths in the $d$ lattice directions that are of order $1/\sqrt{\varrho}$ for small $\varrho$. In other words, the long-time correlation length of the $u$-field is of order $1/\sqrt{\varrho}$ for small $\varrho$.

The result in Theorem 3 should be interpreted as follows. Let the highest peaks in the islands corresponding to the classes $k = 1,2$ have heights $h_1(t), h_2(t)$ and densities $d_1(t), d_2(t)$. If $x_1(t), x_2(t)$ denote the centers of some randomly chosen peaks, then (0.18–0.19) tell us that

  $k = 1$:  $u(x_1(t)+x, t) \approx \frac{w_\varrho(x)}{w_\varrho(0)}\,h_1(t)$,   (0.21)
  $k = 2$:  $u(x_2(t)+x, t) \approx \frac{w_\varrho(x)}{w_\varrho(0)}\,h_2(t)$,   (0.22)

and

  $d_1(t)h_1(t) = w_\varrho(0)\exp\left[H(t) - 2d\kappa\chi(\varrho)t + C_1(\kappa,\rho;t) + o(1)\right]$,   (0.23)
  $d_2(t)h_2^2(t) = w_\varrho^2(0)\exp\left[H(2t) - 4d\kappa\chi(\varrho)t + C_2(\kappa,\rho;t) + o(1)\right]$.   (0.24)

In other words, modulo an unknown height and an unknown density, the peaks have a non-random shape that is given by $w_\varrho$ for both classes. (The same result holds for the classes $k \ge 3$, but these will not be considered in the present paper.)

Thus, the results in Theorems 1–3 give us a picture of the correlation structure of the $u$-field that is much more detailed than the notion of intermittency. Indeed, while intermittency tells us that the peaks occur on sparse islands, our result tells us that the peaks:

(1) contract to single points when $\varrho = \infty$;

(2) grow unboundedly when $\varrho = 0$;

(3) develop an interesting finite structure when $\varrho \in (0,\infty)$.


0.7 Numerical study of (★)

For each $\varrho \in (0,\infty)$ there are two centered symmetric solutions of (★), one with a single-point maximum and one with a double-point maximum. Let $v^{(1)}$ and $v^{(2)}$ denote these solutions, respectively. Then

  $v^{(1)}(0) > v^{(1)}(1) > v^{(1)}(2) > \cdots$,  $v^{(1)}(-x) = v^{(1)}(x)$  ($x \in \mathbb{Z}$),
  $v^{(2)}(0) = v^{(2)}(1) > v^{(2)}(2) > \cdots$,  $v^{(2)}(-x) = v^{(2)}(x+1)$  ($x \in \mathbb{Z}$).   (0.25)

Now, we may ask which of these two solutions has the smaller $\ell^2$-norm and whether there exist values of the parameter $\varrho$ for which the norms coincide. We have done high precision computations with the package Mathematica. These strongly indicate that always $\|v^{(2)}\|_{\ell^2} > \|v^{(1)}\|_{\ell^2}$, although for small values of $\varrho$ the difference $\delta_2 = \|v^{(2)}\|_{\ell^2}^2 - \|v^{(1)}\|_{\ell^2}^2$ is extremely small:

  $\varrho$                   2          1          0.5        0.25       0.1        0.05
  $\|v^{(1)}\|_{\ell^2}^2$    2.49       4.38       6.58       9.48       15.1       21.5
  $\delta_2$                  6.81e-01   9.58e-02   1.23e-04   6.75e-11   2.47e-30   3.69e-63

If there were no other candidates for the centered solution of (★) with minimal $\ell^2$-norm (which we do not know!), then these numerics would lead us to the conclusion that for all $\varrho \in (0,\infty)$ the minimal $\ell^2$-solution of (★) is uniquely given by $v^{(1)}$ modulo shifts (i.e., Theorem 2II would hold for all $\varrho \in (0,\infty)$). Therefore, theoretically, the high peaks of the $u$-field contributing to the moments have a unique shape determined by $v^{(1)}$, as explained in Section 0.6. However, practically, for small $\varrho$ also the peaks with shape $v^{(2)}$ have to be taken into account, unless the time is extremely large.

Let us briefly explain our numerical algorithm, which is based on the following observation. The symmetric solutions of (★) corresponding to an initial datum $v(0)$ are: (i) not strictly decreasing when $v(0)$ is small, (ii) not everywhere strictly positive when $v(0)$ is large. The algorithm varies $v(0)$ until both of these failures are removed (as is required by Theorem 2I(3)(i–ii)). Given an initial datum $v(0)$, we compute $v(1),\ldots,v(N)$ (with $N$ ranging from 25 to 75 depending on $\varrho$) by the following rules:

  $v(1) := v(0)[1 - \varrho\log v(0)]$ for the single-point maximum,
  $v(1) := v(0)$ for the double-point maximum,
  $v(n+1) := v(n)[2 - 2\varrho\log v(n)] - v(n-1)$ if $v(n) > 0$,
  $v(n+1) := v(n)$ if $v(n) \le 0$,

for $n = 1,\ldots,N-1$. The correct initial datum $v(0)$ is then computed by using the following interval approximation. We start with the interval $[a_0,b_0] := [1,2]$ and take $v(0) := (a_0+b_0)/2$. Then we compute $v(1),\ldots,v(N)$ in accordance with the above rules. If this sequence of numbers is not strictly decreasing or if $v(N) > 0$, then we put $a_1 := (a_0+b_0)/2$ and $b_1 := b_0$. Otherwise we put $a_1 := a_0$ and $b_1 := (a_0+b_0)/2$. We then take $v(0) := (a_1+b_1)/2$, etc. This process is iterated $m$ times until $b_m - a_m$ becomes less than $10^{-100}$.
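The shooting-plus-bisection scheme just described can be sketched in a few lines. The version below is an illustrative reimplementation, not the authors' Mathematica code: it treats only the single-point-maximum solution $v^{(1)}$ and stops at double precision rather than $10^{-100}$. For $\varrho = 2$ and $\varrho = 1$ the resulting $\ell^2$-norms can be compared with the table in this section (2.49 resp. 4.38).

```python
import math

def shoot(v0, rho, N=60):
    """Run the recursion for a single-point-maximum symmetric solution:
    v(1) = v(0)[1 - rho*log v(0)],  v(n+1) = v(n)[2 - 2*rho*log v(n)] - v(n-1).
    Returns (ok, seq): ok is True iff the trajectory decreases strictly until
    it first drops to <= 0 within N steps (i.e. v(0) was not too small)."""
    seq = [v0, v0 * (1.0 - rho * math.log(v0))]
    while len(seq) <= N:
        prev, cur = seq[-2], seq[-1]
        if cur <= 0.0:
            return True, seq
        if cur >= prev:
            return False, seq        # turned around: v(0) too small
        seq.append(cur * (2.0 - 2.0 * rho * math.log(cur)) - prev)
    return False, seq                # still positive after N steps

def ground_state(rho, iters=80):
    """Bisect the initial datum v(0) in [1,2] as described in the text."""
    a, b = 1.0, 2.0
    for _ in range(iters):
        m = 0.5 * (a + b)
        if m == a or m == b:         # double precision exhausted
            break
        if shoot(m, rho)[0]:
            b = m                    # overshoot: decrease v(0)
        else:
            a = m                    # undershoot: increase v(0)
    _, seq = shoot(b, rho)           # b always lies on the "ok" side
    return [x for x in seq if x > 0.0]

for rho in (2.0, 1.0):
    v = ground_state(rho)
    norm2 = v[0] ** 2 + 2.0 * sum(x * x for x in v[1:])  # symmetric extension
    print(rho, round(v[0], 4), round(norm2, 2))
```

Because the decay (0.14) is faster than exponential, the trajectory drops below machine precision after a handful of steps, so truncating the positive part loses a negligible amount of $\ell^2$-mass.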


0.8 Related work

As a further reference to intermittency we mention the following papers. Antal (1995) studies the survival of simple random walk on $\mathbb{Z}^d$ in a random field of traps with density $c \in (0,1)$. This model is equivalent to (0.1) when $\xi(0)$ takes the values $-\infty$ and 0 with probability $c$ resp. $1-c$ (as can be seen from (0.6)). His analysis shows that at time $t$ the `islands' have a size of order $t^{1/(d+2)}$. Greven and den Hollander (1992) and Sznitman (1994) study models related to (0.1) when a drift is added to the diffusive part $\kappa\Delta$ and the $\xi$-field is bounded. It turns out that in this situation there is a critical value for the drift, below which the a.s. exponential growth rate and the box-averaged exponential growth rate are the same but above which they are not. This fact indicates that for a bounded $\xi$-field the occurrence of intermittency depends on the strength of the drift.

Finally, Bolthausen and Schmock (preprint 1994) study simple random walk on $\mathbb{Z}^d$ with a self-attractive interaction inversely proportional to time, which technically leads to similar questions. They show that this process is localized and has a limit law that can be identified in terms of a variational problem and an associated nonlinear difference equation similar in nature to our (★★) and (★). We have picked up several ideas from their paper, although the functionals arising in our context require a modified approach.

The outline of the rest of this paper is as follows. In Section 1 we give a heuristic explanation of Theorem 3. In Section 2 we formulate the main steps in the proof of Theorem 3 by listing six key propositions. These propositions are proved in Sections 3–4. In Section 5 we prove Theorem 2 and Proposition 3. Theorem 1 is implied by Theorem 3, as was pointed out above.

1 Heuristic explanation of Theorem 3

In this section we explain where (0.18–0.19) come from. We give a heuristic argument showing how the quantity $\chi(\varrho)$ arises from large deviations of local times associated with our simple random walk $Z = \{Z(t) : t \ge 0\}$, and how the higher order terms in the expansions require an analysis of the corrections to large deviations.

1.1 Expansion for the 1-st moment

Return to the Feynman-Kac representation (0.6). Define the local times

  $\ell_t(z) = \int_0^t 1_{\{Z(s)=z\}}\,ds$  ($z \in \mathbb{Z}^d$, $t \ge 0$).   (1.1)
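By construction the local times sum to the elapsed time, $\sum_z \ell_t(z) = t$, a fact used repeatedly below. A direct simulation ($d = 1$, jump rate 2, parameters chosen only for the demo) confirms it:

```python
import random

random.seed(3)

# Local times (1.1) of a continuous-time simple random walk on Z.
def local_times(t, rate=2.0):
    ell, pos, s = {}, 0, 0.0
    while s < t:
        hold = min(random.expovariate(rate), t - s)   # time spent at pos
        ell[pos] = ell.get(pos, 0.0) + hold
        s += hold
        pos += random.choice((-1, 1))
    return ell

ell = local_times(10.0)
print(sum(ell.values()))   # = 10.0 up to rounding
```

Dividing each entry by $t$ turns the local times into the occupation time measure $L_t$ introduced in (1.6) below, which is a probability measure on the lattice.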

Lemma 1

For all $x \in \mathbb{Z}^d$ and $t \ge 0$

  $\langle u(x,t)\rangle = E_x\left[\exp\sum_{z\in\mathbb{Z}^d} H(\ell_t(z))\right]$.   (1.2)

Proof.

Use (1.1) to rewrite (0.6) as $u(x,t) = E_x(\exp[\sum_z \xi(z)\ell_t(z)])$. Take the expectation over $\xi$, use Fubini's theorem, and use (0.7) in combination with the i.i.d. property of $\xi$. $\Box$

Since $\sum_z \ell_t(z) = t$, the exponent in (1.2) may be rewritten as

  $\sum_z H(\ell_t(z)) = H(t) + t\sum_z \frac{1}{t}\left[H\!\left(\frac{\ell_t(z)}{t}\,t\right) - \frac{\ell_t(z)}{t}H(t)\right]$.   (1.3)

Now, $H$ has the following scaling property (which is implied by (0.15)):

  $\lim_{t\to\infty} \frac{1}{t}\left[H(ct) - cH(t)\right] = \rho\, c\log c$ uniformly in $c \in [0,1]$.   (1.4)

It therefore seems plausible from (1.3) that as $t \to \infty$

  $\sum_z H(\ell_t(z)) = H(t) + \rho t \sum_z \frac{\ell_t(z)}{t}\log\!\left(\frac{\ell_t(z)}{t}\right) + o(t)$.   (1.5)

Let $L_t$ denote the occupation time measure associated with $Z$, i.e.,

  $L_t(\cdot) = \frac{\ell_t(\cdot)}{t}$.   (1.6)

Then, recalling the definition of the functional $J_d$ in (0.17), we see that the sum in the r.h.s. of (1.5) equals $-J_d(L_t)$. Substituting (1.5) into (1.2) we thus get

  $\langle u(x,t)\rangle = E_x\left[\exp\left[H(t) - \rho t J_d(L_t) + o(t)\right]\right]$.   (1.7)

Next, according to the Donsker-Varadhan large deviation theory, $L_t$ satisfies the weak large deviation principle on $\mathcal{P}_d$ with rate function $\kappa I_d$, where $I_d$ is the functional in (0.16) (Deuschel and Stroock (1989), Theorem 3.2.17). Thus it seems plausible from (1.7) that as $t \to \infty$

  $\langle u(x,t)\rangle = \exp\left[H(t) - t\inf_{p\in\mathcal{P}_d}\{\kappa I_d(p) + \rho J_d(p)\} + o(t)\right]$.   (1.8)

The infimum in the exponent is precisely $2d\kappa\chi(\rho/\kappa)$, with $\chi$ defined in (★★). So this explains the first two terms of the expansion in (0.18).
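The scaling property (1.4) can also be checked numerically in the double exponential case $H(t) = \log\Gamma(\rho t+1)$ ($\rho = 1.5$ is an arbitrary test value):

```python
import math

# Check of (1.4): (1/t)[H(ct) - c*H(t)] -> rho*c*log(c) as t -> infinity,
# for H(t) = log Gamma(rho*t + 1) (math.lgamma computes log Gamma).
rho = 1.5
H = lambda t: math.lgamma(rho * t + 1.0)

t = 1e5
for c in (0.1, 0.5, 0.9):
    print(c, (H(c * t) - c * H(t)) / t, rho * c * math.log(c))
```

The two printed columns agree to a few decimal places, the discrepancy being of order $\log t / t$ by Stirling's formula.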

A rigorous proof of (1.8) is given in Gärtner and Molchanov (preprint 1996). The proof uses a standard compactification method:

(i) Pick a large box $T_N = (-N,N]^d \cap \mathbb{Z}^d$.

(ii) Get an upper bound on $\langle u(x,t)\rangle$ by wrapping the random walk around $T_N$, i.e., define $\ell^N_t(z) = \sum_{z'\in 2N\mathbb{Z}^d} \ell_t(z+z')$ ($z \in T_N$) and use that $\sum_{z\in\mathbb{Z}^d} H(\ell_t(z)) \le \sum_{z\in T_N} H(\ell^N_t(z))$ (because $H(0) = 0$ and $t \to H(t)$ is convex).

(iii) Get a lower bound on $\langle u(x,t)\rangle$ by killing the random walk at the boundary of $T_N$, i.e., add the indicator of the event that $\ell_t(z) = 0$ for all $z \in (T_N^c \cup \partial T_N)$.

(iv) Use the full large deviation principle for $L^N_t(\cdot) = \ell^N_t(\cdot)/t$ on $T_N$. This leads to an expansion as in (1.8), but with an $N$-dependent upper resp. lower variational problem. In these variational problems the same functionals as in (0.16–0.17) appear, but now defined for $p \in \mathcal{P}(T_N)$ with periodic resp. Dirichlet boundary condition.

(v) Let $N \to \infty$ and show that both variational problems converge to (★★).

To get the full expansion in (0.18) we need to go one step further and show that the term $\exp[o(t)]$ in (1.8) is actually $\{\sum_z w_\varrho(x+z)\}\exp[C_1(\kappa,\rho;t) + o(1)]$. To achieve this we must analyze the corrections to the large deviation behavior of $L_t$. This will be done in Sections 2–4 and amounts to studying the local times of a transformed random walk, chosen in such a way that its occupation time measure performs random fluctuations around the minimizer $w_\varrho^2/\|w_\varrho\|_{\ell^2}^2$ of our variational problem (★★) (modulo shifts). More precisely, we consider the random walk

  $\bar Z = \{\bar Z(s) : s \ge 0\}$   (1.9)

whose generator $\bar G$ is

  $(\bar G f)(x) = \kappa \sum_{y : |y-x|=1} \frac{w_\varrho(y)}{w_\varrho(x)}\left[f(y) - f(x)\right]$,   (1.10)

considered as a self-adjoint operator on $\ell^2(\mathbb{Z}^d, w_\varrho^2/\|w_\varrho\|_{\ell^2}^2)$. The crucial point is that the invariant probability measure of $\bar Z$ is precisely $w_\varrho^2/\|w_\varrho\|_{\ell^2}^2$. The absolutely continuous transformation from $Z$ to $\bar Z$ gives rise to the prefactor in (0.18) and to the first two terms in the expansion. The higher order terms in the expansion are therefore determined by the fluctuations of $L_t$ under the law of $\bar Z$. The details are worked out in Sections 2–4.

Note that $\bar Z$ has a drift towards 0 that increases rapidly with the distance to 0 (see (0.13) and Theorem 2I(3)(iii)). Thus it has strong ergodic properties.
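The relation between the generator (1.10) and its invariant measure can be illustrated by simulation. The sketch below uses $d = 1$, $\kappa = 1$ and the stand-in weight $w(x) = e^{-|x|}$, an assumption for the demo only (the true $w_\varrho$ decays even faster, like $\exp[-|x|\log|x|]$). The jump rate from $x$ to a neighbour $y$ is $w(y)/w(x)$, so detailed balance holds with $\pi \propto w^2$, and the fraction of time the walk spends at 0 should approach $\pi(0) = 1/\sum_x e^{-2|x|} = \tanh(1)$.

```python
import math, random

random.seed(11)

# Stand-in weight for the demo (NOT the ground-state product w_rho):
w = lambda x: math.exp(-abs(x))

def occupation_of_zero(T=20_000.0):
    """Simulate the transformed walk and return its empirical occupation of 0."""
    pos, s, time_at_0 = 0, 0.0, 0.0
    while s < T:
        r_left = w(pos - 1) / w(pos)     # rate of jumping to pos - 1
        r_right = w(pos + 1) / w(pos)    # rate of jumping to pos + 1
        hold = min(random.expovariate(r_left + r_right), T - s)
        if pos == 0:
            time_at_0 += hold
        s += hold
        pos += 1 if random.random() < r_right / (r_left + r_right) else -1
    return time_at_0 / T

print(occupation_of_zero(), math.tanh(1.0))   # the two values should be close
```

The strong inward drift visible in the rates (the rate toward 0 exceeds the rate away from 0 by a factor $e^2$) is the mechanism behind the "strong ergodic properties" noted above.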

1.2 Expansion for the 2-nd moment

The heuristic explanation of (0.19) is in the same spirit. This time the starting point is the following analogue of Lemma 1.

Lemma 2

For all $x,y \in \mathbb{Z}^d$ and $t \ge 0$

  $\langle u(x,t)u(y,t)\rangle = E_{x,y}\left[\exp\sum_{z\in\mathbb{Z}^d} H(\hat\ell_t(z))\right]$,   (1.11)

where $E_{x,y} = E_x \otimes E_y$ and

  $\hat\ell_t(\cdot) = \ell^1_t(\cdot) + \ell^2_t(\cdot)$   (1.12)

is the sum of the local times of two independent copies of $Z$ starting at $x$ resp. $y$.


Proof.

Same as for Lemma 1. Use (0.6). $\Box$

An argument similar to (1.3–1.8) produces the first two terms of the expansion in (0.19). Namely, the analogue of (1.8) reads

  $\langle u(x,t)u(y,t)\rangle = \exp\left[H(2t) - 2t\inf_{p_1,p_2\in\mathcal{P}_d}\left\{\kappa\,\tfrac{1}{2}\left[I_d(p_1) + I_d(p_2)\right] + \rho J_d\!\left(\tfrac{1}{2}(p_1+p_2)\right)\right\} + o(t)\right]$.   (1.13)

Because $p \to J_d(p)$ is strictly concave, the infimum reduces to $p_1 = p_2 = p$ with $p \in \mathcal{P}_d$, which again equals $2d\kappa\chi(\rho/\kappa)$ (see Gärtner and Molchanov (preprint 1996) for a rigorous proof). To get the full expansion will amount to studying the occupation time measure

  $\hat L_t(\cdot) = \frac{1}{2t}\hat\ell_t(\cdot)$   (1.14)

associated with two independent copies of the transformed random walk $\bar Z$ defined in (1.9–1.10). The details are worked out in Sections 2–4. Again, the prefactor and the first two terms in (0.19) arise through the absolutely continuous transformation from $Z$ to $\bar Z$, the higher order terms through the fluctuations of $\hat L_t$ under the law of the two copies of $\bar Z$.

2 Main propositions

In this section we outline the main steps in the proof of (0.19) in Theorem 3. These steps are formulated as Propositions 4–9 in Sections 2.1–2.6 below. The proof of these propositions will be given in Sections 3–4, the proof of (0.19) subject to these propositions in Section 2.7. It will become clear from the whole construction that (0.18) in Theorem 3 holds too, namely, via a straightforward simplification of the arguments given below to one instead of two random walks (compare Lemmas 1 and 2).

Our starting point is Lemma 2, which gives us a representation for $\langle u(x,t)u(y,t)\rangle$ in terms of $H$, the cumulant generating function of the $\xi$-field, and $\hat\ell_t = \ell^1_t + \ell^2_t$, the sum of the local time functions of two independent simple random walks with step rate $2d\kappa$. Throughout the sequel it will be assumed that $H$ satisfies the condition in (0.15). For ease of notation we shall abbreviate

  $\sum_{z\in\mathbb{Z}^d} H(\hat\ell_t(z)) = H\circ\hat\ell_t$.   (2.1)

Throughout Sections 2–4, assumptions A1–A2 in Theorem 1 are in force.

2.1 Clumping of the local times: Proposition 4

Proposition 4 below states that the asymptotic behavior of the 2-nd moments is controlled by paths whose occupation time measure $\hat L_t = \hat\ell_t/2t$ is close to a minimizer of (★★). This property will allow us in Section 2.2 to truncate $\mathbb{Z}^d$.


Let $\mathcal{M}$ denote the class of minimizers of (★★). By assumptions A1–A2 in Theorem 1 in combination with Proposition 3(2), $\mathcal{M}$ is a singleton modulo shifts.

For $\delta > 0$, define

  $U_\delta = \{\mu \in \mathcal{P}(\mathbb{Z}^d) : \|\mu-\nu\|_{\ell^1} < \delta$ for some $\nu \in \mathcal{M}\}$.   (2.2)

Proposition 4

Fix $x,y \in \mathbb{Z}^d$. For every $\delta > 0$ there exists an $\alpha > 0$ such that

  $E_{x,y}\left[\exp[H\circ\hat\ell_t]\,1\!\left\{\tfrac{1}{2t}\hat\ell_t \in U_\delta\right\}\right] \ge (1-e^{-\alpha t})\,E_{x,y}\left[\exp[H\circ\hat\ell_t]\right]$   (2.3)

for all $t \ge 0$.

The proof of Proposition 4 is in Section 3.1 and is difficult for the following reason. From the full large deviation principle on the box $T_N = (-N,N]^d \cap \mathbb{Z}^d$ we know that for large $t$ the periodized occupation time measure, defined by $\hat L^N_t(z) = \sum_{z'\in 2N\mathbb{Z}^d} \hat L_t(z+z')$ ($z \in T_N$), is close to a minimizer of the periodized variational problem (see Section 1.1). However, this does not imply that $\hat L_t$ is close to a minimizer of (★★). Essentially, what we must show is that the main contribution comes from paths whose local times are concentrated in one large box and not in two or more boxes separated by some distance. Namely, this precisely guarantees that $\hat L_t$ is close to $\hat L^N_t$ modulo a shift. We can then use the full large deviation principle on $T_N$, and Proposition 4 will follow by showing that the minimizers of the periodized variational problem are close to the minimizers of (★★) when $N$ is large.

2.2 Centering and truncation of the local times: Proposition 5

For $\delta > 0$ and $z \in \mathbb{Z}^d$, define (see footnote 3)

  $U_\delta(z) = \{\mu \in \mathcal{P}(\mathbb{Z}^d) : \|\mu-\nu\|_{\ell^1} < \delta$ for some $\nu \in \mathcal{M}$ centered at $z\}$.   (2.4)

By Theorems 2I(2) and 2I(3)(i), the $U_\delta(z)$'s for different $z$'s are disjoint when $\delta$ is small enough. Write out

  $E_{x,y}\left[\exp[H\circ\hat\ell_t]\,1\{\tfrac{1}{2t}\hat\ell_t \in U_\delta\}\right] = \sum_{z\in\mathbb{Z}^d} E_{x,y}\left[\exp[H\circ\hat\ell_t]\,1\{\tfrac{1}{2t}\hat\ell_t \in U_\delta(z)\}\right] = \sum_{z\in\mathbb{Z}^d} E_{x-z,y-z}\left[\exp[H\circ\hat\ell_t]\,1\{\tfrac{1}{2t}\hat\ell_t \in U_\delta(0)\}\right]$.   (2.5)

Proposition 5 below is an estimate on the $x,y$-dependence of the summand in the r.h.s. of (2.5). This estimate implies that the summation over $z$ and the limit $t \to \infty$ may be interchanged. This will allow us in Sections 2.3–2.7 to first compute the asymptotics of the summand for fixed $x' = x-z$, $y' = y-z$ and $t \to \infty$ and afterwards carry out the summation over $z$.


Proposition 5
There exist $A,\gamma>0$ and $t_0,\delta_0,R_0>0$ such that

\[ E_{x,y}\Big( \exp[H\diamond\hat\ell_t]\,1\big\{\tfrac1{2t}\hat\ell_t\in U_\delta(0)\big\} \Big) \le A\,e^{-\gamma(|x|+|y|)}\,E_{0,0}\Big( \exp[H\diamond\hat\ell_t] \Big) \tag{2.6} \]

for all $t\ge t_0$, all $0<\delta\le\delta_0$ and all $x,y\notin T_{R_0}$ (with $|x|$ the lattice norm of $x$).

The idea behind this estimate is that when the two random walks are forced to build up their local times in the neighborhood of the origin, then this is harder to do when they start far away from the origin than when they start at the origin.

The prefactor in the r.h.s. of (2.6) is summable over $x,y\notin T_{R_0}$, showing that the remote terms in the r.h.s. of (2.5) are negligible uniformly in $t$.

Let $v\colon\mathbb{Z}\to\mathbb{R}_+$ be the unique centered ground state of ( ). Let $w\colon\mathbb{Z}^d\to\mathbb{R}_+$ be the product function $w = v^{\otimes d}$ in (0.13) and define $\bar p = w^2/\|w\|_{\ell^2}^2$. Then, by assumptions A1–A2 in Theorem 1 in combination with Proposition 3(2), $\bar p\in\mathcal P(\mathbb{Z}^d)$ is the unique centered minimizer of ( ). Henceforth, instead of $U_\delta(0)$ we shall write $U_\delta(\bar p)$, the $\delta$-neighborhood of $\bar p$. In Sections 2.5–2.6 we shall be able to use Propositions 4 and 5 to expand $H\diamond\hat\ell_t$ around $H\diamond(2t\bar p)$. But before that we need some preparations.

2.3 Two time scales: Proposition 6

In order to do the expansion we shall need an estimate in the spirit of Proposition 5, but with two times $0\le t\le T$. For $R>0$ define

\[ \hat\tau_R = \inf\{s\ge 0: Z^1(s)\notin T_R \text{ or } Z^2(s)\notin T_R\}. \tag{2.7} \]

Proposition 6
Fix $x,y\in\mathbb{Z}^d$. There exist $A,\gamma>0$ and $T_0,\theta_0,\delta_0,R_0>0$ such that

\[ E_{x,y}\Big( \exp[H\diamond\hat\ell_T]\,1\big\{\tfrac1{2T}\hat\ell_T\in U_\delta(\bar p)\big\}\,1\{\hat\tau_R\le t\} \Big) \le A\,t\,R^{d-1}\,e^{-\gamma R}\,E_{x,y}\Big( \exp[H\diamond\hat\ell_T] \Big) \tag{2.8} \]

for all $T\ge T_0$, all $t\ge 0$ with $t/T\le\theta_0$, all $0<\delta\le\delta_0$ and all $R\ge R_0$.

Note that $T$ takes over the role that $t$ was playing in the previous propositions, and that $t$ is now used as an auxiliary time. We shall henceforth stick to this notation.

Proposition 6 states that the main contribution comes from paths that do not move out of a large box before time $t$, uniformly in the length $T$ of the path. Incidentally, the restrictions on $t,x,y$ in Proposition 5 resp. $T,t,R$ in Proposition 6 are partly an artefact of our proofs in Sections 3.2–3.3. However, these restrictions will not bother us in what follows.


2.4 Transformation of the random walk: Proposition 7

In order to exploit Propositions 4–6 we shall make an absolutely continuous transformation from our reference random walk with generator $\Delta$ to a new random walk whose generator $G$ is chosen as in (1.10). The point is that $G$ has precisely $\bar p = w^2/\|w\|_{\ell^2}^2$ as its unique invariant probability measure (see Section 4.1). Thus, under the law of the random walk driven by $G$ and for large $T$, we have that $L^i_T = \ell^i_T/T$ ($i=1,2$) are close to $\bar p$ with probability close to 1, and hence so is $\hat L_T = \hat\ell_T/2T = (L^1_T+L^2_T)/2$. Write $\bar P_{x,y} = \bar P_x\otimes\bar P_y$ and $\bar E_{x,y} = \bar E_x\otimes\bar E_y$ to denote the joint probability and expectation for two independent random walks driven by $G$ and starting at $x$ resp. $y$.

Proposition 7
For all $0\le t\le T$, all $R>0$ and all $x,y\in\mathbb{Z}^d$

\[ \begin{aligned} &E_{x,y}\Big( \exp[H\diamond\hat\ell_T]\,1\big\{\tfrac1{2T}\hat\ell_T\in U_\delta(\bar p)\big\}\,1\{\hat\tau_R>t\} \Big) \\ &\quad = \sqrt{\bar p(x)\,\bar p(y)}\;\exp\big[H(2T) - \chi(\rho)\,4dT\big]\;\bar E_{x,y}\Big( \exp[F_T(\hat L_T)]\,\frac{1}{\sqrt{\bar p(Z^1(T))\,\bar p(Z^2(T))}}\,1\{\hat L_T\in U_\delta(\bar p)\}\,1\{\hat\tau_R>t\} \Big), \end{aligned} \tag{2.9} \]

where $\hat\tau_R$ is defined in (2.7) and

\[ F_T(\hat L_T) = \sum_z\Big[ H\big(2T\hat L_T(z)\big) - \hat L_T(z)\,H(2T) - 2\rho T\,\hat L_T(z)\log\bar p(z) \Big]. \tag{2.10} \]

The proof of Proposition 7 is in Section 4.1. Think of $F_T$ as a fluctuation functional: $F_T(\bar p) = o(T)$ as $T\to\infty$ because of (1.4), so in the r.h.s. of (2.9) the contribution of the expectation is of higher order than the prefactor. The point of Proposition 7 is that the prefactor has precisely the form we are looking for in (0.19). To complete the proof of (0.19), we must show that as $T\to\infty$ the expectation in (2.9) becomes independent of $x,y$ up to and including order 1. This will be described in Sections 2.5–2.6.

2.5 Separation of the time scales: Proposition 8

Pick $0\le t\le T$ and split the occupation time measure as

\[ \hat L_T = \frac tT\,\hat L_t + \frac{T-t}{T}\,\hat L_{t,T}, \tag{2.11} \]

where $\hat L_{t,T}$ is the occupation time measure over the time interval $[t,T)$. Later we shall let $T\to\infty$ followed by $t\to\infty$. The first limit will allow us to get $\hat L_{t,T}$ close to $\bar p$, the second limit will allow us to get rid of the $x,y$-dependence.
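The split (2.11) is just the statement that the occupation measure over $[0,T)$ is a convex combination of the occupation measures over $[0,t)$ and $[t,T)$, with weights $t/T$ and $(T-t)/T$. A toy numerical check of this identity (trajectory and helper function are illustrative, not from the paper):

```python
def occ(traj, a, b):
    """Normalized occupation measure over [a, b) of a piecewise-constant
    trajectory, given as (jump_time, site) pairs (walk sits at `site`
    from that jump time until the next one)."""
    out = {}
    for (s0, site), (s1, _) in zip(traj, traj[1:] + [(b, None)]):
        lo, hi = max(s0, a), min(s1, b)
        if hi > lo:
            out[site] = out.get(site, 0.0) + (hi - lo) / (b - a)
    return out

traj = [(0.0, 0), (0.7, 1), (1.5, 0), (2.2, -1), (3.1, 0)]  # toy path on Z
t, T = 2.0, 4.0
full = occ(traj, 0.0, T)
early, late = occ(traj, 0.0, t), occ(traj, t, T)
for z in set(full) | set(early) | set(late):
    lhs = full.get(z, 0.0)
    rhs = (t / T) * early.get(z, 0.0) + ((T - t) / T) * late.get(z, 0.0)
    assert abs(lhs - rhs) < 1e-12
print("split identity (2.11) verified")
```

The assertion holds exactly (up to rounding), because the unnormalized local times over $[0,t)$ and $[t,T)$ simply add up to those over $[0,T)$.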

Proposition 8 below separates the contributions from $\hat L_t$ and $\hat L_{t,T}$. We expand $F_T(\hat L_T)$ in (2.12). Here, $\langle\cdot,\cdot\rangle$ is the standard inner product and $DF_T$ is the Fréchet derivative of $F_T$ given by (see (2.10))

\[ [DF_T(\mu)](z) = 2T\,H'\big(2T\mu(z)\big) - H(2T) - 2\rho T\log\bar p(z). \tag{2.13} \]

Using the identity $\sum_z \hat L_t(z) = 1$, we may write

\[ \Big\langle \tfrac tT\,\hat L_t,\; DF_T(\mu) \Big\rangle = 2t\Big( H'(2T) - \tfrac1{2T}H(2T) + \big\langle \hat L_t,\; V_T\circ\mu + \rho\log(\mu/\bar p) \big\rangle \Big), \tag{2.14} \]

with $V_T\colon\mathbb{R}_+\to\mathbb{R}$ the potential

\[ V_T(\eta) = H'(2T\eta) - H'(2T) - \rho\log\eta = \int_{2T\eta}^{2T} \frac{du}{u}\,\big[\rho - uH''(u)\big], \tag{2.15} \]

and $V_T\circ\mu$ the composition of $V_T$ with $\mu$. (The reason for splitting terms as in (2.14) is that $V_T$ is small for large $T$; see (0.15).) Together with the trivial inclusions

\[ \{\hat L_{t,T}\in U_{\delta_1(\theta)}(\bar p)\} \subseteq \{\hat L_T\in U_\delta(\bar p)\} \subseteq \{\hat L_{t,T}\in U_{\delta_2(\theta)}(\bar p)\} \quad\text{for } \delta_1 = \tfrac{\delta-2\theta}{1-\theta},\ \delta_2 = \tfrac{\delta+2\theta}{1-\theta},\ 0\le t\le\theta T, \tag{2.16} \]

valid when $0<\theta<\tfrac\delta2$, we obtain the following lower resp. upper bound for the expectation in the r.h.s. of (2.9).

Proposition 8
Fix $0<\theta<\tfrac\delta2$. Let $0\le t\le\theta T$ and $\delta_i(\theta)$ ($i=1,2$) be as in (2.16). Then for all $R>0$ and all $x,y\in\mathbb{Z}^d$

\[ \begin{aligned} &\bar E_{x,y}\Big( \exp[F_T(\hat L_T)]\,\frac{1}{\sqrt{\bar p(Z^1(T))\,\bar p(Z^2(T))}}\,1\{\hat L_T\in U_\delta(\bar p)\}\,1\{\hat\tau_R>t\} \Big) \\ &\quad \underset{(i=1)}{\ge}\ \underset{(i=2)}{\le}\ \sum_{\tilde x,\tilde y\in T_R} \bar P_t(x,\tilde x)\,\bar P_t(y,\tilde y)\;\bar E_{\tilde x,\tilde y}\Big( R_\delta\big(x,y,\tilde x,\tilde y,\hat L_{T-t},t,T\big)\,\Phi\big(Z^1(T-t),Z^2(T-t),\hat L_{T-t},t,T\big)\,1\{\hat L_{T-t}\in U_{\delta_i(\theta)}(\bar p)\} \Big). \end{aligned} \tag{2.17} \]

Here $\bar P_t(\cdot,\cdot)$ is the transition kernel of the random walk driven by $G$ in (1.10), while $R_\delta$ and $\Phi$ are the functions given by (2.18), for $0\le t\le T$, $\mu\in\mathcal P(\mathbb{Z}^d)$ and $x,y,\tilde x,\tilde y,\hat x,\hat y\in\mathbb{Z}^d$.

The proof of Proposition 8 is in Section 4.2. The point of Proposition 8 is that the $x,y$-dependence sits entirely in the first three factors of the summand in (2.17).

2.6 Loss of memory: Proposition 9

Our last proposition shows that the first three factors of the summand in (2.17) become independent of $x,y$ as $T\to\infty$, and hence so does the expectation in the l.h.s. of (2.17). The reason for this is that the transformed random walk has a drift towards 0 that increases rapidly with the distance to 0, so it has strong ergodic properties.

Proposition 9
(1) For all $t\ge 0$, all $R>0$, all $0<\delta<\lambda_R = \inf_{z\in T_R}\bar p(z)$ and all $x,y\in\mathbb{Z}^d$

\[ \begin{aligned} \liminf_{T\to\infty}\ \inf_{\tilde x,\tilde y\in T_R}\ \inf_{\mu\in U_\delta(\bar p)} R_\delta(x,y,\tilde x,\tilde y,\mu,t,T) &\ge \Big(1-\tfrac{\delta}{\lambda_R}\Big)^{2t}\ \inf_{\tilde x,\tilde y\in T_R} \bar P_{x,y}\big( \mathrm{supp}(\hat L_t)\subseteq T_R \,\big|\, Z^1(t)=\tilde x,\, Z^2(t)=\tilde y \big), \\ \limsup_{T\to\infty}\ \sup_{\tilde x,\tilde y\in T_R}\ \sup_{\mu\in U_\delta(\bar p)} R_\delta(x,y,\tilde x,\tilde y,\mu,t,T) &\le \Big(1+\tfrac{\delta}{\lambda_R}\Big)^{2t}. \end{aligned} \tag{2.19} \]

(2) For all $x\in\mathbb{Z}^d$

\[ \lim_{t\to\infty}\ \sup_{|\tilde x| = o(t/\log t)} \Big| \frac{\bar P_t(x,\tilde x)}{\bar P_t(0,\tilde x)} - 1 \Big| = 0. \tag{2.20} \]

(3) For all $x,y\in\mathbb{Z}^d$

\[ \lim_{t\to\infty}\ \inf_{\substack{\sqrt{t/\log\log t}\,=\,o(R) \\ R\,=\,o(t/\log t)}}\ \inf_{\tilde x,\tilde y\in T_R} \bar P_{x,y}\big( \mathrm{supp}(\hat L_t)\subseteq T_R \,\big|\, Z^1(t)=\tilde x,\, Z^2(t)=\tilde y \big) = 1. \tag{2.21} \]

The proof of Proposition 9 is in Section 4.3. We have now completed our list of key propositions.

2.7 Completion of the proof of Theorem 3

Let us finally collect Propositions 4–9 and explain why they prove Theorem 3. For this we take limits in the following order:

\[ T\to\infty,\quad \delta\to 0,\quad \theta\to 0,\quad R=\sqrt t,\quad t\to\infty. \tag{2.22} \]


The summation in (2.5) is restricted to the box $T_N$ and the limit $N\to\infty$ is taken last. The proof comes in 4 steps.

1. Propositions 4–6 and (2.5) can be summarized as follows (the lower indices indicate the choice of the variables):

\[ \begin{aligned} E_{x,y}\big(\exp[H\diamond\hat\ell_T]\big) &= (1 + a_{x,y,\delta,T})\,\{\mathrm{l.h.s.}(2.3)\}_{x,y,\delta,T}, \\ \{\mathrm{l.h.s.}(2.3)\}_{x,y,\delta,T} &= \sum_{z\in T_N}\{\mathrm{l.h.s.}(2.6)\}_{x-z,y-z,\delta,T} + b_{N,x,y,\delta,T}\,E_{0,0}\big(\exp[H\diamond\hat\ell_T]\big), \\ \{\mathrm{l.h.s.}(2.6)\}_{x-z,y-z,\delta,T} &= \{\mathrm{l.h.s.}(2.9)\}_{x-z,y-z,\delta,T,R,t} + c_{x-z,y-z,\delta,T,R,t}\,E_{x-z,y-z}\big(\exp[H\diamond\hat\ell_T]\big), \end{aligned} \tag{2.23} \]

with

\[ \begin{aligned} &\lim_{T\to\infty} a_{x,y,\delta,T} = 0 \quad\text{for all } \delta>0 \text{ and all } x,y\in\mathbb{Z}^d, \\ &\lim_{N\to\infty} b_{N,x,y,\delta,T} = 0 \quad\text{uniformly in } T\ge t\ge 0 \text{ and } 0<\delta\le\delta_0, \text{ for all } x,y\in\mathbb{Z}^d, \\ &\lim_{t\to\infty}\lim_{T\to\infty} c_{x,y,\delta,T,R=\sqrt t,t} = 0 \quad\text{uniformly in } 0<\delta\le\delta_0, \text{ for all } x,y\in\mathbb{Z}^d. \end{aligned} \tag{2.24} \]

2. Propositions 7–9 can be summarized as follows:

\[ \begin{aligned} \{\mathrm{l.h.s.}(2.9)\}_{x-z,y-z,\delta,T,R,t} &= \sqrt{\bar p(x-z)\,\bar p(y-z)}\;\exp\big[H(2T) - \chi(\rho)\,4dT\big]\;\{\mathrm{l.h.s.}(2.17)\}_{x-z,y-z,\delta,T,R,t}, \\ \{\mathrm{l.h.s.}(2.17)\}_{x-z,y-z,\delta,T,R,t} &\ \underset{(i=1)}{\ge}\ \underset{(i=2)}{\le}\ \{\mathrm{r.h.s.}(2.17)\}_{x-z,y-z,\delta,T,\delta_i(\theta),R,t}, \\ \{\mathrm{r.h.s.}(2.17)\}_{x-z,y-z,\delta,T,\delta_i(\theta),R,t} &= (1 + d_{x-z,y-z,\delta,\theta,T,R,t})\,\{\mathrm{r.h.s.}(2.17)\}_{0,0,\delta,T,\delta_i(\theta),R,t}, \end{aligned} \tag{2.25} \]

with

\[ \lim_{t\to\infty}\lim_{\theta\to 0}\lim_{\delta\to 0}\lim_{T\to\infty} d_{x,y,\delta,\theta,T,R=\sqrt t,t} = 0 \quad\text{for all } x,y\in\mathbb{Z}^d. \tag{2.26} \]

3. Now first pick $x=y=0$. Then (2.23–2.24) and (2.25–2.26), together with the identity $E_{-z,-z}(\exp[H\diamond\hat\ell_T]) = E_{0,0}(\exp[H\diamond\hat\ell_T])$ ($z\in\mathbb{Z}^d$) and the fact that $\lim_{\theta\to 0}\delta_i(\theta) = \delta$ ($i=1,2$), yield a closed set of equations for $E_{0,0}(\exp[H\diamond\hat\ell_T])$ from which the expansion in (0.19) for $x=y=0$ easily follows.


4. Finally, pick $x,y$ arbitrary. Then (2.23–2.24) and (2.25–2.26), together with the identity $E_{x-z,y-z}(\exp[H\diamond\hat\ell_T]) = E_{x,y}(\exp[H\diamond\hat\ell_T])$ ($z\in\mathbb{Z}^d$) and the result in step 3, yield the expansion in (0.19).

Note that the precise form of the higher-order term $C_2(T) = o(T)$ in the exponent in (0.19) does not come out of the analysis. Clearly, it is sensitive to the precise form of $H$ beyond the asymptotics assumed in (0.15) (and remains hidden in the last factor in the r.h.s. of (2.25) after the limits in (2.22) are taken).

3 Proof of Propositions 4–6

3.1 Proof of Proposition 4

The difficulty behind the proof was explained in Section 2.1, as was the route to be followed. We shall use several ideas from Bolthausen and Schmock (preprint 1994), where a similar problem is handled.

A key role will be played by the variational problem ( ) and its restriction to $T_N = (-N,N]^d$ with periodic boundary conditions (see Sections 0.4 and 5.3). Let $\mathcal M$ resp. $\mathcal M^N$ denote the sets of minimizers of these variational problems. For $\delta>0$, define

\[ \begin{aligned} U_\delta &= \big\{\mu\in\mathcal P(\mathbb{Z}^d): \|\mu-\nu\|_{\ell^1} < \delta \text{ for some } \nu\in\mathcal M\big\}, \\ U^N_\delta &= \big\{\mu\in\mathcal P(T_N): \|\mu-\nu\|_{\ell^1} < \delta \text{ for some } \nu\in\mathcal M^N\big\} \end{aligned} \tag{3.1} \]

(see also (2.2)). We shall abbreviate

\[ \hat P_{x,y,t}(\,\cdot\,) = \frac{E_{x,y}\big(1\{\cdot\}\exp[H\diamond\hat\ell_t]\big)}{E_{x,y}\big(\exp[H\diamond\hat\ell_t]\big)} \tag{3.2} \]

and

\[ \hat L_t(B) = \sum_{z\in B}\hat L_t(z)\ \ (B\subseteq\mathbb{Z}^d), \qquad \hat L^N_t(B) = \sum_{z\in B}\hat L^N_t(z)\ \ (B\subseteq T_N), \tag{3.3} \]

where $\hat L_t = \hat\ell_t/2t$ is the occupation time measure of the two random walks and $\hat L^N_t$ is its periodized version. The goal of this section is to prove that

\[ \limsup_{t\to\infty}\frac1t\log \hat P_{x,y,t}\big(\hat L_t\notin U_\delta\big) < 0 \quad\text{for all } \delta>0 \text{ and } x,y\in\mathbb{Z}^d. \tag{3.4} \]

This implies Proposition 4.

For ease of notation we shall drop the hats. Keep in mind though that $P_{x,y,t}$ and $L_t, L^N_t$ refer to two random walks. We now start the proof of (3.4).


Proof.
Fix $\delta>0$ and $x,y\in\mathbb{Z}^d$. Throughout the proof, $N$ is so large that $x,y\in T_N$. Define the event

\[ A^N_t = \bigcap_{z\in\mathbb{Z}^d}\big\{ L_t(T_N+z) \le 1-\tfrac\delta4 \big\}, \tag{3.5} \]

i.e., no translate of $T_N$ contains more than mass $1-\tfrac\delta4$. We may then split

\[ \begin{aligned} P_{x,y,t}\big(L_t\notin U_\delta\big) &\le P_{x,y,t}\big(L_t\notin U_\delta,\ L^N_t\in U^N_{\delta/32d}\big) + P_{x,y,t}\big(L^N_t\notin U^N_{\delta/32d}\big) \\ &\le P_{x,y,t}\big(L_t\notin U_\delta,\ [A^N_t]^c\big) + P_{x,y,t}\big(L^N_t\in U^N_{\delta/32d},\ A^N_t\big) + P_{x,y,t}\big(L^N_t\notin U^N_{\delta/32d}\big). \end{aligned} \tag{3.6} \]

In what follows we shall show that all three terms are exponentially small, which will prove (3.4). The proof comes in 7 steps.

1. Third term: By the full large deviation principle on $T_N$, there exists an $N_0\ge 1$ (depending on $\delta$) such that

\[ \limsup_{t\to\infty}\frac1t\log P_{x,y,t}\big(L^N_t\notin U^N_{\delta/32d}\big) < 0 \quad\text{for } N\ge N_0. \tag{3.7} \]

Indeed, because of (3.2), this is a statement about a quotient of two terms, which behave resp. as

\[ \exp\big[H(2t) - \chi^N_{\delta/32d}(\rho)\,4dt + o(t)\big], \tag{3.8} \]
\[ \exp\big[H(2t) - \chi(\rho)\,4dt + o(t)\big]. \tag{3.9} \]

Here $\chi(\rho)$ is given by ( ), while

\[ \chi^N_\delta(\rho) = \frac{1}{2d}\min_{p\notin U^N_\delta} F_d(p) \qquad (F_d = I_d + \rho J_d) \tag{3.10} \]

(compare with (1.11–1.13) in Section 1.2). By Lemmas 16(f–g) in Section 5.3, we have $\chi^N_\delta(\rho) > \chi(\rho)$ for all $\delta>0$ and $N$ sufficiently large (depending on $\delta$). Together with (3.8–3.9) this implies (3.7).

2. First term: Note that

\[ [A^N_t]^c = \bigcup_{z\in\mathbb{Z}^d}\big\{ L_t(T_N+z) > 1-\tfrac\delta4 \big\} \subseteq \bigcup_{z\in\mathbb{Z}^d}\big\{ \|L_t(\cdot+z) - L^N_t(\cdot)\|_{\ell^1} < \tfrac\delta2 \big\} \tag{3.11} \]

(where elements of $\mathcal P(T_N)$ are viewed as elements of $\mathcal P(\mathbb{Z}^d)$ via the canonical embedding).


Next, by Lemma 16(c) in Section 5.3 there exists an $N_1\ge 1$ (depending on $\delta$) such that

\[ U^N_{\delta/32d} \subseteq U_{\delta/2} \quad\text{for all } N\ge N_1, \tag{3.13} \]

and hence

\[ \{L^N_t\notin U_{\delta/2}\} \subseteq \{L^N_t\notin U^N_{\delta/32d}\}. \tag{3.14} \]

So, combining (3.7), (3.12) and (3.14) we get

\[ \limsup_{t\to\infty}\frac1t\log P_{x,y,t}\big(L_t\notin U_\delta,\ [A^N_t]^c\big) < 0 \quad\text{for all } N\ge N_0\vee N_1. \tag{3.15} \]

3. Second term: We first write

\[ \begin{aligned} P_{x,y,t}\big(L^N_t\in U^N_{\delta/32d},\ A^N_t\big) &\le \sum_{z\in T_N} P_{x,y,t}\big(L^N_t\in U^N_{\delta/32d}(z),\ A^N_t\big) = \sum_{z\in T_N} P_{x-z,y-z,t}\big(L^N_t\in U^N_{\delta/32d}(0),\ A^N_t\big) \\ &\le |T_N|\max_{\substack{u,v\in T_{2N}\\ u-v=x-y}} P_{u,v,t}\big(L^N_t\in U^N_{\delta/32d}(0),\ A^N_t\big), \end{aligned} \tag{3.16} \]

where $U^N_\delta(z)$ denotes the $\delta$-neighborhood of the elements in $\mathcal M^N$ that are centered at $z$ (recall (3.1)). In the second line of (3.16) we have used that $A^N_t$ is shift-invariant (recall (3.5)) and in the third line that $x,y\in T_N$. Next, put $N = 5M$ and define

\[ B^{5M}_t = \bigcap_{z\in\mathbb{Z}^d}\big\{ L_t(T_{5M}+10Mz) \le 1-\tfrac\delta4 \big\} \supseteq A^{5M}_t. \tag{3.17} \]

The proof of (3.4) will be complete once we show that

\[ \limsup_{t\to\infty}\frac1t\log\Big[ \max_{\substack{u,v\in T_{10M}\\ u-v=x-y}} P_{u,v,t}\big( L^{5M}_t\in U^{5M}_{\delta/32d}(0),\ B^{5M}_t \big) \Big] < 0 \quad\text{for some } M\ge 1. \tag{3.18} \]

This will be done in steps 4–7 below.

4. We begin with a combinatorial lemma. Define the halfspaces

\[ h^{i+}_k = \{z\in\mathbb{Z}^d: z_i > (5+10k)M\}, \qquad h^{i-}_k = \{z\in\mathbb{Z}^d: z_i \le (5+10k)M\} \qquad (k\in\mathbb{Z},\ i=1,\dots,d). \tag{3.19} \]

Lemma 3
$B^{5M}_t \subseteq \bigcup_{k\in\mathbb{Z}}\bigcup_{i=1}^d \big\{ L_t(h^{i+}_k)\ge\tfrac{\delta}{8d},\ L_t(h^{i-}_k)\ge\tfrac{\delta}{8d} \big\}$.

Proof.
Put $\eta = \delta/8d$. We prove the inverse inclusion for the complements. Suppose that there is no $(k,i)$ such that $L_t(h^{i+}_k)\ge\eta$ and $L_t(h^{i-}_k)\ge\eta$. For every $i$ there exists a smallest $k(i)$ such that $L_t(h^{i-}_{k(i)})\ge\eta$, so that $L_t(h^{i+}_{k(i)-1}) > 1-\eta$. By our supposition it must then be that $L_t(h^{i+}_{k(i)}) < \eta$, and hence

\[ L_t\big(h^{i+}_{k(i)-1}\cap h^{i-}_{k(i)}\big) > 1-2\eta. \tag{3.21} \]

Since $\bigcap_{i=1}^d\big[h^{i+}_{z_i-1}\cap h^{i-}_{z_i}\big] = T_{5M}+10Mz$, it follows that

\[ L_t\big(T_{5M}+10Mz\big) > 1-2d\eta \quad\text{for } z=(k(1),\dots,k(d)). \tag{3.22} \]

Since $2d\eta = \delta/4$, this contradicts $B^{5M}_t$ (recall (3.17)). $\Box$

5. Next, the random walks $Z^1, Z^2$ whose local times we are monitoring cannot move far away in time $t$, namely

\[ \lim_{t\to\infty}\frac{1}{t^2}\log\Big[ \max_{\substack{u,v\in T_{10M}\\ u-v=x-y}} P_{u,v,t}\big( Z^i(s)\notin T_{\lfloor t^2\rfloor} \text{ for some } 0\le s\le t \big) \Big] < 0 \quad (i=1,2). \tag{3.23} \]

Indeed, since $H\diamond\hat\ell_t \le H(2t) = O(t\log t) = o(t^2)$, it suffices to prove the claim under the free random walk measure, i.e., without the exponential weight factor in (3.2). But the latter follows from a rough large deviation estimate, because the jump times of the random walk are i.i.d. exponentially distributed with finite mean. The details are omitted. Combining (3.23) with Lemma 3, we see that in order to prove (3.18) it suffices to show that

\[ \limsup_{t\to\infty}\frac1t\log\Big[ \max_{\substack{u,v\in T_{10M}\\ u-v=x-y}} \sum_{|k|\le\lfloor t^2\rfloor/10M}\ \sum_{i=1}^d P_{u,v,t}\Big( L^{5M}_t\in U^{5M}_{\delta/32d}(0),\ L_t(h^{i+}_k)\ge\tfrac{\delta}{8d},\ L_t(h^{i-}_k)\ge\tfrac{\delta}{8d} \Big) \Big] < 0, \tag{3.24} \]

which in turn is implied by

\[ \limsup_{t\to\infty}\frac1t\log\Big[ \sup_{\substack{u,v\in\mathbb{Z}^d\\ u-v=x-y}} P_{u,v,t}\Big( L^{5M}_t\in U^{5M}_{\delta/32d}(0),\ L_t(h^+)\ge\tfrac{\delta}{8d},\ L_t(h^-)\ge\tfrac{\delta}{8d} \Big) \Big] < 0, \tag{3.25} \]

with $h^+ = h^{1+}_0$, $h^- = h^{1-}_0$. To go from (3.24) to (3.25) we have used that we may pick $k=0$, $i=1$ because of the shift-invariance and isotropy of the random walk and the shift-invariance of $H\diamond\hat\ell_t$; the polynomial factor coming from counting the sum over $k,i$ is harmless.

6. Now, by Lemma 16(b) in Section 5.3 there exists an $M_0\ge 1$ (depending on $\delta$) such that (3.26) holds for all $M\ge M_0$.

Hence to prove (3.25) it suffices to show that

\[ \limsup_{t\to\infty}\frac1t\log\Big[ \sup_{\substack{u,v\in\mathbb{Z}^d\\ u-v=x-y}} P_{u,v,t}\Big( L_t(h^+)\ge\tfrac{\delta}{8d},\ L_t(h^- - 4Me_1)\ge\tfrac{\delta}{16d},\ L^{5M}_t(\mathrm{int}\,T_M)\ge 1-\tfrac{\delta}{16d} \Big) \Big] < 0 \tag{3.27} \]

($e_1 = (1,0,\dots,0)$). Indeed, by periodization with period $5M$ the slab between $h^+$ and $h^- - 4Me_1$ is mapped entirely inside $T_{5M}\setminus T_M$. On the event in the r.h.s. of (3.26) this slab therefore carries mass at most $\tfrac{\delta}{16d}$. Consequently, on the event $\{L_t(h^-)\ge\tfrac{\delta}{8d}\}$ the halfspace $h^- - 4Me_1$ carries mass at least $\tfrac{\delta}{16d}$. What (3.27) says is that it is exponentially

7. To prove (3.27) we shall do a reflection of the random walks w.r.t. the grid of size $2M$. The object of this argument (see below) is to transfer the problem to the finite box $T_{5M}$. Define

\[ \begin{aligned} g_M &= \bigcup_{k\in\mathbb{Z}}\bigcup_{i=1}^d \{z\in\mathbb{Z}^d: z_i = (2k+1)M\}, \\ \sharp^i_t(g_M) &= \frac1t\big|\{0\le s\le t: Z^i(s)\in g_M,\ Z^i(s-)\notin g_M\}\big| \quad (i=1,2), \\ \sharp_t(g_M) &= \sharp^1_t(g_M) + \sharp^2_t(g_M), \end{aligned} \tag{3.28} \]

i.e., $t\,\sharp_t(g_M)$ counts the number of times the random walks hit $g_M$ during the time interval $[0,t]$. We may then bound the probability in (3.27) by the sum of two parts, namely for any $\beta>0$

\[ \begin{aligned} &(1)\quad P_{u,v,t}\big( \sharp_t(g_M) > \beta,\ L_t(g_M) \le \tfrac{\delta}{16d} \big), \\ &(2)\quad P_{u,v,t}\big( \sharp_t(g_M) \le \beta,\ L_t(h^+) \ge \tfrac{\delta}{8d},\ L_t(h^- - 4Me_1) \ge \tfrac{\delta}{16d} \big), \end{aligned} \tag{3.29} \]

where we use that $\{L^{5M}_t(\mathrm{int}\,T_M)\ge 1-\tfrac{\delta}{16d}\} \subseteq \{L_t(g_M)\le\tfrac{\delta}{16d}\}$ because by periodization with period $5M$ the grid $g_M$ is mapped entirely outside $\mathrm{int}\,T_M$. Thus (3.27) will follow once we have proved Lemmas 4–5 below.

Lemma 4
There exists a $C_1>0$ such that for all $\beta$ with $\delta/\beta < C_1$ and all $M\ge 1$

\[ \limsup_{t\to\infty}\frac1t\log\Big[ \sup_{\substack{u,v\in\mathbb{Z}^d\\ u-v=x-y}} (3.29)(1) \Big] < 0. \tag{3.30} \]


Therefore, similarly as in (3.7) and (3.8–3.9), the r.h.s. of (3.31) is a quotient of two terms. The denominator is the same as (3.9). Because $H\diamond\hat\ell_t \le H(2t)$, the numerator can be bounded above by

\[ \exp[H(2t)]\,\max_{z\in T_M} P_{x-z,y-z}\big( \sharp_t(\partial T_M) > \beta,\ L^M_t(\partial T_M) \le \tfrac{\delta}{16d} \big), \tag{3.32} \]

where in the r.h.s. of (3.32) appears the free random walk measure. Now, the latter probability equals

\[ \exp\big[-\zeta_M(\delta,\beta)\,4dt + o(t)\big], \tag{3.33} \]

where $\zeta_M(\delta,\beta)$ can be made arbitrarily large by picking $\delta/\beta$ sufficiently small, uniformly in $M$. The reason is that it is unlikely for the random walks to spend a local time on $\partial T_M$ that is much smaller than $1/2d$ times the number of times they hit $\partial T_M$. The details are omitted. Pick $\delta/\beta$ so small that $\zeta_M(\delta,\beta) > \chi(\rho)$ to get the claim. $\Box$

Lemma 5
There exists a $C_2>0$ such that for all $\beta < C_2\,\delta\log(1/\delta)$ and all $M$ sufficiently large (depending on $\delta$)

\[ \limsup_{t\to\infty}\frac1t\log\Big[ \sup_{\substack{u,v\in\mathbb{Z}^d\\ u-v=x-y}} (3.29)(2) \Big] < 0. \tag{3.34} \]

Proof.
The proof comes in 2 steps.

1. Consider the paths of the random walks up to time $t$. We can fold these paths inside $T_{5M}$ by doing a number of reflections w.r.t. the hypersurfaces of dimension $d-1$ that lie on the grid $g_M$, starting from the outside and working our way inwards to $T_{5M}$. With each reflection $H\diamond\hat\ell_t$ increases, because $H$ is convex and because the local times of the paths are stacked on top of each other. Each piece of the paths that is thus folded adds a factor 2 to the counting. Hence we have

\[ \sup_{\substack{u,v\in\mathbb{Z}^d\\ u-v=x-y}} (3.29)(2) \le 2^{\beta t}\,\max_{z\in T_{5M}} P_{x-z,y-z,t}\big( L_t(T_M+4Me_1)\ge\tfrac{\delta}{8d},\ L_t(T_M)\ge\tfrac{\delta}{16d},\ L_t(T_{5M})=1 \big). \tag{3.35} \]

Indeed, we can fold all the local time in $h^+$ into the box $T_M+4Me_1$, all the local time in $h^- - 4Me_1$ into the box $T_M$, and all the remaining local time into the box $T_M+2Me_1$.

2. We now have an event inside the finite box $T_{5M}$ where substantial local times are carried by two subboxes separated by a third box. The probability in (3.35) is the quotient of two terms, which behave resp. as (compare with (3.8–3.9))

\[ \exp\big[H(2t) - \tilde\zeta_M(\delta)\,4dt + o(t)\big], \qquad \exp\big[H(2t) - \chi(\rho)\,4dt + o(t)\big], \tag{3.36} \]


where

\[ \tilde\zeta_M(\delta) = \min_{p\in\mathcal C(M,\delta)} F_d(p), \tag{3.37} \]

with $\mathcal C(M,\delta)$ the set fitting the event in (3.35). Now, Lemma 16(h) in Section 5.3 shows that $\tilde\zeta_M(\delta) - \chi(\rho) > C_2\,\delta\log(1/\delta)$ for some $C_2>0$ and $M$ sufficiently large (depending on $\delta$). Thus it suffices to pick $\beta$ smaller than this difference, and the claim follows from (3.35). $\Box$

By combining Lemmas 4–5, picking $\delta$ so small that $\delta/C_1 < C_2\,\delta\log(1/\delta)$, and picking $\beta$ somewhere in the middle, we get (3.27). This completes the proof of Proposition 4. $\Box$

3.2 Proof of Proposition 5

For $s\ge 0$ and $\Lambda\subseteq\mathbb{Z}^d$, let $\mathcal P_s(\Lambda)$ denote the set of all measures concentrated on $\Lambda$ with total mass $s$. For an arbitrary measure $\mu$ on $\mathbb{Z}^d$, we write the abbreviation

\[ H\diamond\mu = \sum_{z\in\mathbb{Z}^d} H\big(\mu(z)\big). \tag{3.38} \]

We recall that (0.15) implies

\[ \lim_{t\to\infty}\big[H'(\eta t) - H'(\nu t)\big] = \rho\log\frac{\eta}{\nu} \quad\text{for all } \eta>\nu>0. \tag{3.39} \]

The following lemma, which is an estimate for one random walk, is the key to Proposition 5.

Lemma 6
Fix $\gamma>0$ arbitrarily and let $1>\eta>\nu>0$ be such that

\[ \rho\log\frac{\eta}{\nu} > 4d\,e^{\gamma}. \tag{3.40} \]

Let $\Lambda$ be a finite connected subset of $\mathbb{Z}^d$ containing 0. Define

\[ A = A(\Lambda,\eta,\nu) = \Big\{\mu\in\mathcal P_1(\mathbb{Z}^d): \mu(0) \ge \min_{z\in\Lambda}\mu(z) \ge \eta > \nu \ge \max_{z\in\Lambda^c}\mu(z)\Big\}. \tag{3.41} \]

(a) There exist $A>0$ and $T_0,R_0>0$ such that

\[ E_x\Big( e^{H\diamond\ell_T}\,1\big\{\tfrac1T\ell_T\in A\big\} \Big) \le A\,e^{-\gamma|x|}\,E_0\Big( e^{H\diamond\ell_T}\,1\big\{\tfrac1T\ell_T\in A\big\} \Big) \tag{3.42} \]

for all $T\ge T_0$ and all $x\notin T_{R_0}$.

(b) Let $\tau = \inf\{s\ge 0: Z(s)\in\Lambda\}$ denote the first hitting time of $\Lambda$. Then there exist $A>0$ and $T_0,R_0>0$ such that (3.43) holds


for all $T\ge T_0$, all $0\le t\le T$, all $x\notin T_{R_0}$, all $\mu\in\mathcal P_{T-t}(\mathbb{Z}^d)$, and all measurable functions $f\colon\mathbb{Z}^d\times\mathcal P_1(\mathbb{Z}^d)\to\mathbb{R}_+$ satisfying

\[ f(z,p) \le f(z,q) \quad\text{whenever } p\le q \text{ on } \Lambda \text{ and } p\ge q \text{ on } \Lambda^c. \tag{3.44} \]

Before presenting the proof of Lemma 6, let us give a heuristic explanation of (3.42). Let $Z$ be our random walk, starting at $x\notin\Lambda$ and hitting $\Lambda$ for the first time at time $\tau$. The basic idea is to replace $(Z(s): s\in[0,\tau])$ by a path that starts at 0, stays at 0 during the time interval $[0,\tau/2]$ and moves to $Z(\tau)$ during the time interval $(\tau/2,\tau]$ without leaving $\Lambda$. In this way we switch from paths starting at $x$ to paths starting at 0. In terms of local times this switch means that mass $\tau/2$ is moved from $\Lambda^c$ to 0 and another mass $\tau/2$ from $\Lambda^c$ to $\Lambda$. This move can only help the event $\{\ell_T/T\in A\}$ to occur. Moreover, we shall see that $H\diamond\ell_T$ increases by at least $2d\tau e^{\gamma}$ because of (3.39–3.40). Hence we gain a factor $\exp[2d\tau e^{\gamma}]$ under the expectation. However, it will turn out that by the restriction to the new class of paths we lose a factor $C_1\exp[2d\tau]$. Altogether, we therefore gain a factor $\exp[2d\tau(e^{\gamma}-1)]/C_1$. But we shall see that

\[ C_1\,E_x\big( \exp[-2d\tau(e^{\gamma}-1)] \big) \le C_1C_2\,e^{-\gamma|x|}, \tag{3.45} \]

which yields the desired prefactor in the r.h.s. of (3.42). The argument for (3.43) is essentially the same.

Proof.
The proof of assertion (a) comes in 7 steps.

1. Choose $T_0$ so large that

\[ H'(\eta T) - H'(\nu T) \ge 4d\,e^{\gamma} \quad\text{for } T\ge T_0. \tag{3.46} \]

This is possible because of (3.39–3.40). Throughout the proof, $T\ge T_0$ and $x\in\mathbb{Z}^d$ are fixed arbitrarily.

2. The monotonicity of $t\mapsto H'(t)$ obviously implies the following two inequalities:

\[ \big[H(a+\lambda)+H(b)\big] - \big[H(a)+H(b+\lambda)\big] \begin{cases} \ge 0 & \text{for } \lambda\ge 0,\ a\ge b, \\ \ge \lambda\big[H'(a)-H'(b+\lambda)\big] & \text{for } \lambda\ge 0,\ a\ge b+\lambda. \end{cases} \tag{3.47} \]

Using these inequalities we next prove the following statement:


Indeed, it follows from (3.50) and the definition of $A$ in (3.41) that

\[ \max_{z\in\Lambda^c}(\mu_1+\mu_2+\mu)(z) \le \min_{z\in\Lambda}(\mu_1+\mu_2+\mu)(z). \tag{3.51} \]

Hence, moving mass distribution $\mu_2$ from $\Lambda^c$ into $\Lambda$ and distributing it according to $\mu_3$, we can use the first part of (3.47) to estimate

\[ H\diamond(\mu_1+\mu_2+\mu) \le H\diamond(\mu_1+\mu_3+\mu). \tag{3.52} \]

Moreover, after the move we obviously have

\[ \tfrac1T(\mu_1+\mu_3+\mu) \in A, \tag{3.53} \]

so

\[ \mu_1(0)+\mu_3(0)+\mu(0) \ge \eta T, \qquad \max_{z\in\Lambda^c}(\mu_1+\mu_3+\mu)(z) < \nu T. \tag{3.54} \]

Therefore, now using the second part of (3.47), (3.54) and the monotonicity of $t\mapsto H'(t)$, we may move mass distribution $\mu_1$ from $\Lambda^c$ onto 0, to obtain

\[ H\diamond(\mu_1+\mu_3+\mu) \le H\diamond\big(\tfrac s2\delta_0+\mu_3+\mu\big) - \tfrac s2\big[H'(\eta T)-H'(\nu T)\big]. \tag{3.55} \]

Note that also after the last move

\[ \tfrac1T\big(\tfrac s2\delta_0+\mu_3+\mu\big) \in A. \tag{3.56} \]

Combining (3.46), (3.52) and (3.55), we arrive at (3.48).

Combining (3.46), (3.52) and (3.55), we arrive at (3.48). 3. We next use (3.48{3.50) to move local times. Let

 = inffu0:Z(u)2 g (3.57)

be the rst hitting time of . Clearly,`T=T 2AimpliesT because  > 0. To estimate

the expectation in the l.h.s. of (3.42) we proceed as follows. Applying the strong Markov property at time, we have

Ex  eH` T1  1 T `T 2A  =Ex  (Z()`0=2`=2)1 f Tg   (3.58)

where `ab denotes the local time over the time interval ab], and we dene


Since $\ell_{s,T}\in\mathcal P_{T-s}(\mathbb{Z}^d)$, we may now recall (3.48–3.50) and (3.56) (for $\mu = \ell_{T-s}$) to estimate

\[ \Phi(s,y,\mu_1,\mu_2) \le \exp\Big[-4d\,e^{\gamma}\,\tfrac s2\Big]\,\Psi(s,y,\mu_3) \quad\text{for all } \mu_3\in\mathcal P_{s/2}(\Lambda), \tag{3.61} \]

where we define

\[ \Psi(s,y,\mu_3) = E_y\Big( e^{H\diamond(\frac s2\delta_0+\mu_3+\ell_{T-s})}\,1\big\{\tfrac1T\big(\tfrac s2\delta_0+\mu_3+\ell_{T-s}\big)\in A\big\} \Big). \tag{3.62} \]

Combining (3.58–3.62) we arrive at the bound

\[ E_x\Big( e^{H\diamond\ell_T}\,1\big\{\tfrac1T\ell_T\in A\big\} \Big) \le E_x\Big( \exp\Big[-4d\,e^{\gamma}\,\tfrac\tau2\Big]\,\min_{\mu_3\in\mathcal P_{\tau/2}(\Lambda)}\Psi\big(\tau,Z(\tau),\mu_3\big)\,1\{\tau\le T\} \Big). \tag{3.63} \]

4. The l.h.s. of (3.63) equals the l.h.s. of (3.42). We next derive a lower bound for the r.h.s. of (3.42) that will be combined with (3.63) to yield (3.42). Let

\[ \sigma = \inf\{u\ge 0: Z(u)\ne 0\} \tag{3.64} \]

be the first exit time from 0. For $y\in\Lambda$, define the set of paths

\[ B_{s/2,y} = \Big\{ Z(\cdot): Z(0)=0,\ Z\big(\tfrac s2\big)=y,\ Z(u)\in\Lambda \text{ for } u\in\big[0,\tfrac s2\big] \Big\}. \tag{3.65} \]

Fix $0\le s\le T$ and $y\in\Lambda$ arbitrarily. We may then apply the Markov property at time $s$ to write

\[ E_0\Big( e^{H\diamond\ell_T}\,1\big\{\tfrac1T\ell_T\in A\big\} \Big) \ge E_0\Big( 1\big\{\sigma>\tfrac s2,\ Z(\tfrac s2+\cdot)\in B_{s/2,y}\big\}\,e^{H\diamond\ell_T}\,1\big\{\tfrac1T\ell_T\in A\big\} \Big) = E_0\Big( 1\big\{\sigma>\tfrac s2,\ Z(\tfrac s2+\cdot)\in B_{s/2,y}\big\}\,\Psi\big(s,y,\ell_{s/2,s}\big) \Big). \tag{3.66} \]

Here we have used that $\ell_{0,s/2} = \tfrac s2\delta_0$ on the event $\{\sigma>\tfrac s2\}$ and $\ell_{s/2,s}\in\mathcal P_{s/2}(\Lambda)$ on the event $\{Z(\tfrac s2+\cdot)\in B_{s/2,y}\}$ (recall (3.62)). Since $P_0(\sigma>\tfrac s2) = \exp[-ds]$, we thus find that

\[ E_0\Big( e^{H\diamond\ell_T}\,1\big\{\tfrac1T\ell_T\in A\big\} \Big) \ge \exp[-ds]\,P_0(B_{s/2,y})\,\min_{\mu\in\mathcal P_{s/2}(\Lambda)}\Psi(s,y,\mu) \tag{3.67} \]

for all $0\le s\le T$ and $y\in\Lambda$. Combining (3.63) and (3.67) we arrive at


Thus, to complete the proof of (3.42) we must show that $K(x)\le A\exp(-\gamma|x|)$ for $x\notin T_{R_0}$, for some $A,R_0>0$.

5. We next estimate $\min_{y\in\partial\Lambda} P_0(B_{s/2,y})$ from below. Let $\varsigma_1,\varsigma_2,\dots$ be the jump times of the random walk: i.i.d. exponentially distributed with mean $1/2d$. Fix $y\in\partial\Lambda$ and let $D=D_y$ be the length of the shortest path from 0 to $y$ inside $\Lambda$. Obviously,

\[ P_0(B_{s,y}) \ge \frac{1}{(2d)^D}\,P\big(\varsigma_1+\dots+\varsigma_D \le s < \varsigma_1+\dots+\varsigma_D+\varsigma_{D+1}\big) = \frac{1}{(2d)^D}\,\frac{(2ds)^D}{D!}\,\exp[-2ds]. \tag{3.70} \]

From (3.70) it follows that there exists a $C_1>0$ such that

\[ \min_{y\in\partial\Lambda}\big[P_0(B_{s,y})\big]^{-1} \le C_1\exp[2ds]\,\big\{1+(2s)^{-D_0}\big\} \quad (s\ge 0), \tag{3.71} \]

where $D_0 = \sup_{y\in\partial\Lambda} D_y$. Substitution into (3.69) gives

\[ K(x) \le C_1\,E_x\Big( \big\{1+\tau^{-D_0}\big\}\exp[-2d\tau(e^{\gamma}-1)] \Big). \tag{3.72} \]

We shall estimate the two terms in (3.72) separately.
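The lower bound (3.70) is elementary: forcing the walk to follow the $D$ jumps of a fixed shortest path costs a direction factor $(2d)^{-D}$, while the event that exactly $D$ jumps occur by time $s$ has the Poisson$(2ds)$ probability $\frac{(2ds)^D}{D!}e^{-2ds}$. A Monte Carlo sanity check of this computation in $d=1$ (illustrative code, not from the paper; here the event "$D$ jumps, all to the right" is simulated exactly, so the empirical frequency should match the bound's value):

```python
import math
import random

def p_path_monte_carlo(D, s, d=1, n=200_000, seed=1):
    """Estimate P(the rate-2d walk makes exactly D jumps by time s,
    all of them +1), i.e. the event bounded from below in (3.70)."""
    rng = random.Random(seed)
    hit = 0
    for _ in range(n):
        pos, time, jumps = 0, 0.0, 0
        while True:
            time += rng.expovariate(2 * d)   # exponential holding times
            if time > s:
                break
            pos += rng.choice((-1, 1))       # uniform direction choice
            jumps += 1
        if pos == D and jumps == D:          # exactly D jumps, all rightward
            hit += 1
    return hit / n

D, s, d = 3, 1.0, 1
value = (1 / (2 * d) ** D) * ((2 * d * s) ** D / math.factorial(D)) * math.exp(-2 * d * s)
est = p_path_monte_carlo(D, s, d)
print(f"closed form = {value:.5f}, monte carlo = {est:.5f}")
```

The two numbers agree up to Monte Carlo error, confirming the direction-factor-times-Poisson structure of (3.70).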

We shall estimate the two terms in (3.72) separately.

6. Second term: To reach fromx, the random walk Z has to makeat least D00 = dist(x )

jumps. Hence 1+ +D 00. Since 2d( 1+ +D

00) has a Gamma distribution with

parameter D00, we can estimate for D00 > D0

Ex  ;D 0 exp;2d(e ;1)]  (2d)D 0 1 (D 00 ;1)! R 1 0 u D00 ;1;D 0 exp;e u]du = (2d)D0 (D 00 ;1;D 0 )! (D 00 ;1)! exp ;(D 00 ;D 0)] C 2exp ;(D 00 ;D 0)] (3.73) for someC2< 1. Clearly,D 00 jxj;C 3 for someC3 < 1.

7. First term: The same estimate withD0 replaced by 0. Combine steps 6 and 7 to get the

bound on K(x) claimed below (3.69). This completes the proof of assertion (a).
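Step 6 rests on the Gamma integral $\int_0^\infty u^{k-1-m}\,e^{-e^{\gamma}u}\,du = (k-1-m)!\,e^{-\gamma(k-m)}$ for integers $k>m\ge 0$ (with $k=D''$, $m=D_0$), which is where the exponential decay $e^{-\gamma(D''-D_0)}$ in (3.73) comes from. A numerical check of this identity (illustrative; parameter values are ours):

```python
import math

def gamma_integral(k, m, g, n=1_000_000, upper=40.0):
    """Midpoint-rule approximation of the integral of
    u^(k-1-m) * exp(-e^g * u) over [0, upper]."""
    h = upper / n
    total = 0.0
    lam = math.exp(g)
    for i in range(1, n + 1):
        u = (i - 0.5) * h
        total += u ** (k - 1 - m) * math.exp(-lam * u)
    return total * h

k, m, g = 6, 2, 0.5
closed_form = math.factorial(k - 1 - m) * math.exp(-g * (k - m))
numeric = gamma_integral(k, m, g)
print(abs(numeric - closed_form) / closed_form < 1e-4)
```

This is just $\int_0^\infty u^{a-1}e^{-\lambda u}\,du = \Gamma(a)/\lambda^a$ with $a = k-m$ and $\lambda = e^{\gamma}$, so $\lambda^{-a} = e^{-\gamma(k-m)}$.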

The proof of assertion (b) goes along the same lines. All we have to do is replace $\ell_{T-s}$ by $\ell_{t-s}+\mu\in\mathcal P_{T-s}(\mathbb{Z}^d)$, with $\mu\in\mathcal P_{T-t}(\mathbb{Z}^d)$ the extra measure in (3.43). Since $\tfrac1T(\ell_t+\mu)\in A$ does not automatically imply $\tau\le t$, we need to include the indicator of the latter in the l.h.s. of (3.43). The property of the function $f$ stated in (3.44) ensures that $f\big(Z(t),\tfrac1t\ell_t\big)$ can only increase when the path $(Z(s): s\in[0,\tau])$ is redistributed inside $\Lambda$. $\Box$


Lemma 7
Let the assumptions of Lemma 6 hold. Let $\tau^1,\tau^2$ denote the first hitting times of $\Lambda$ by the two walks. Then there exist $A>0$ and $T_0,R_0>0$ such that

\[ \begin{aligned} &E_{x,y}\Big( e^{H\diamond(\ell^1_T+\ell^2_T)}\,1\big\{\tfrac1{2T}(\ell^1_T+\ell^2_T)\in A\big\}\,1\{\tau^1\le T\}\,1\{\tau^2\le T\} \Big) \\ &\quad \le A^2\,e^{-\gamma(|x|+|y|)}\,E_{0,0}\Big( e^{H\diamond(\ell^1_T+\ell^2_T)}\,1\big\{\tfrac1{2T}(\ell^1_T+\ell^2_T)\in A\big\} \Big) \end{aligned} \tag{3.74} \]

for all $T\ge T_0$ and all $x,y\notin T_{R_0}$.

Proof.
This is an easy consequence of (3.43). Namely, first condition on $Z^2(\cdot)$, take the expectation over $Z^1(\cdot)$ by applying (3.43) with $\ell_t = \ell^1_t$ and $\mu = \ell^2_T$, and then take the expectation over $Z^2(\cdot)$. After that, interchange the order of the expectations (Fubini) and apply (3.43) with $\ell_t = \ell^2_t$ and $\mu = \ell^1_T$. Recall that $E_{x,y} = E_x\otimes E_y$. $\Box$

We can now formulate the tightness result that implies Proposition 5. For $\bar\mu\in\mathcal P_1(\mathbb{Z}^d)$, let

\[ U_\delta(\bar\mu) = \big\{\mu\in\mathcal P_1(\mathbb{Z}^d): \|\mu-\bar\mu\|_{\ell^1} < \delta\big\} \tag{3.75} \]

be the $\delta$-neighborhood of $\bar\mu$ in the $\ell^1$-metric.

Lemma 8
Let $\bar\mu\in\mathcal P_1(\mathbb{Z}^d)$ be such that

(i) $\bar\mu(0) = \max_{z\in\mathbb{Z}^d}\bar\mu(z)$,
(ii) $\Lambda_\eta = \{z\in\mathbb{Z}^d: \bar\mu(z)\ge\eta\}$ is connected for all $\eta$ sufficiently small. (3.76)

Fix $\gamma>0$ arbitrarily. Then there exist $A>0$ and $\delta_0,T_0,R_0>0$ (depending on $\bar\mu$) such that

\[ E_{x,y}\Big( e^{H\diamond(\ell^1_T+\ell^2_T)}\,1\big\{\tfrac1{2T}(\ell^1_T+\ell^2_T)\in U_\delta(\bar\mu)\big\} \Big) \le A^2\,e^{-\gamma(|x|+|y|)}\,E_{0,0}\Big( e^{H\diamond(\ell^1_T+\ell^2_T)} \Big) \tag{3.77} \]

for all $0<\delta\le\delta_0$, all $T\ge T_0$ and all $x,y\notin T_{R_0}$.

Proof.
Choose $\eta_0>0$ so small that $\bar\mu(\Lambda_{\eta_0}) > \tfrac12$ and

(i′) $\rho\log\big(\bar\mu(0)/\eta_0\big) > 4d\,e^{\gamma}$,
(ii′) $\Lambda_\eta$ is connected and contains 0 for all $0<\eta\le\eta_0$. (3.78)

Next choose $1>\eta>\nu>0$ such that assumption (3.40) of Lemma 6 is satisfied and

\[ \bar\mu(0) > \min_{z\in\Lambda}\bar\mu(z) > \eta > \nu > \max_{z\in\Lambda^c}\bar\mu(z). \tag{3.79} \]

(Because of (3.76)(i–ii), the latter can be done by picking $\eta<\bar\mu(0)$ close to $\bar\mu(0)$ and $\nu<\eta_0$.)


Hence $U_\delta(\bar\mu)\subseteq A$ for $0<\delta\le\delta_0$, where $A = A(\Lambda,\eta,\nu)$ with $\Lambda = \Lambda_{\eta_0}$ the set defined in Lemma 6. Moreover, $\tfrac1{2T}(\ell^1_T+\ell^2_T)\in U_\delta(\bar\mu)$ implies $\tfrac1{2T}\big(\ell^1_T(\Lambda)+\ell^2_T(\Lambda)\big) > \tfrac12$, which in turn implies $\ell^1_T(\Lambda)>0$ and $\ell^2_T(\Lambda)>0$, hence $\tau^1\le T$ and $\tau^2\le T$. We may therefore apply Lemma 7 (compare (3.41) with (3.80)) to obtain (3.77). $\Box$

The proof of Proposition 5 is now complete. Indeed, we know from Theorem 2I(3)(ii) that the minimizer of ( ) centered at 0 is unimodal in all directions, which guarantees that conditions (3.76)(i–ii) in Lemma 8 are fulfilled for $\bar\mu = \bar p = w^2/\|w\|_{\ell^2}^2$ (recall Section 0.5).

3.3 Proof of Proposition 6

The proof uses ideas from Section 3.2. The following lemma is an estimate for one random walk. Define

\[ \sigma_R = \inf\{s\ge 0: Z(s)\notin T_R\}. \tag{3.81} \]

Let $\partial^+T_R$ denote the exterior boundary of $T_R$.

Lemma 9
Fix $x\in\mathbb{Z}^d$. Let the assumptions of Lemma 6 hold with $x\in\Lambda$. Let $\tau_R$ denote the first hitting time of $\Lambda$ after $\sigma_R$. Then there exist $A>0$ and $T_0,R_0,\theta_0>0$ such that

\[ E_x\Big( e^{H\diamond\ell_T}\,1\big\{\tfrac1T\ell_T\in A\big\}\,1\{\sigma_R\le t\} \Big) \le A^2\,e^{-2\gamma R}\,|\partial^+T_R|\,t\;E_x\Big( e^{H\diamond\ell_T}\,1\big\{\tfrac1T\ell_T\in A\big\} \Big) \tag{3.82} \]

and

\[ E_x\Big( e^{H\diamond(\ell_T+\mu)}\,1\big\{\tfrac1{2T}(\ell_T+\mu)\in A\big\}\,1\{\sigma_R\le t\}\,1\{\tau_R\le T\} \Big) \le A^2\,e^{-2\gamma R}\,|\partial^+T_R|\,t\;E_x\Big( e^{H\diamond(\ell_T+\mu)}\,1\big\{\tfrac1{2T}(\ell_T+\mu)\in A\big\} \Big) \tag{3.83} \]

for all $t\ge 0$, all $R\ge R_0$, all $T\ge t\vee T_0$ with $t/T\le\theta_0$ and all $\mu\in\mathcal P_T(\mathbb{Z}^d)$.

Proof.
Throughout the proof we pick $R$ so large that $\Lambda\subseteq T_R$ and $x\in T_R$. We also pick $\theta_0 = \eta$ and $t/T\le\theta_0$. If $\ell_T/T\in A$ and $\sigma_R\le t$, then the latter guarantees that the random walk must hit 0 in the time interval $(\sigma_R,T]$ (recall (3.41)). We choose $T_0$ to be the same as in Lemma 6. The proof of (3.82) comes in 8 steps.

1. First we use the strong Markov property at time $\sigma_R$ to write (3.84),


where we define

\[ \Phi(s,z,\mu) = E_z\Big( e^{H\diamond(\mu+\ell_{T-s})}\,1\big\{\tfrac1T(\mu+\ell_{T-s})\in A\big\} \Big) \tag{3.85} \]

for $0\le s\le t$, $z\in\partial^+T_R$ and $\mu\in\mathcal P_s(T_R)$. Our choice of $\theta_0$ guarantees that $\tfrac1T(\mu+\ell_{T-s})\in A$ implies $\tau\le T-s$ for $s\in[0,t]$, where $\tau$ again denotes the first hitting time of $\Lambda$.

2. By assertion (b) in Lemma 6 with $f\equiv 1$ we know that

\[ \Phi(s,z,\mu) \le A\,e^{-\gamma|z|}\,\Phi(s,0,\mu) \quad\text{for all } 0\le s\le t \text{ and } \mu\in\mathcal P_s(T_R). \tag{3.86} \]

Combining this with (3.84) we have

\[ \mathrm{l.h.s.}(3.84) \le A\,e^{-\gamma R}\sum_{z\in\partial^+T_R}\int_0^t P_x\big(\sigma_R\in ds,\ Z(s)=z\big)\,E_x\Big( \Phi(s,0,\ell_s)\,\Big|\,\sigma_R=s,\ Z(s)=z \Big). \tag{3.87} \]

3. Now apply Fubini to write

\[ E_x\Big( \Phi(s,0,\ell_s)\,\Big|\,\sigma_R=s,\ Z(s)=z \Big) = E_0\Big( \Psi(s,x,z,\ell_{T-s}) \Big), \tag{3.88} \]

where we define

\[ \Psi(s,x,z,\mu) = E_x\Big( e^{H\diamond(\mu+\ell_s)}\,1\big\{\tfrac1T(\mu+\ell_s)\in A\big\}\,\Big|\,\sigma_R=s,\ Z(s)=z \Big) \tag{3.89} \]

for $0\le s\le t$, $z\in\partial^+T_R$ and $\mu\in\mathcal P_{T-s}(\mathbb{Z}^d)$.

4. Next, do a time reversal on the random walk over the time interval $[0,s]$. Let $z^-$ be the unique site in $T_R$ that neighbors $z\in\partial^+T_R$. Then

\[ \Psi(s,x,z,\mu) = \frac{\tfrac1{2d}\,E_{z^-}\Big( e^{H\diamond(\mu+\ell_s)}\,1\big\{\tfrac1T(\mu+\ell_s)\in A\big\};\ \sigma_R>s,\ Z(s)=x,\ Z(s+)\ne x \Big)}{P_x\big(\sigma_R\in ds,\ Z(s)=z\big)}, \qquad P_x\big(\sigma_R\in ds,\ Z(s)=z\big) = \tfrac1{2d}\,P_{z^-}\big(\sigma_R>s,\ Z(s)=x\big)\,2d\,ds. \tag{3.90} \]

Here the jump away from $z^-$ to $z$ at time $s$ is replaced by a jump away from $x$ at time $s$ in the time-reversed random walk. The factor $2d$ counts the number of ways this last jump can occur. The local times are invariant under the time reversal.


5. Substituting (3.88–3.90) into (3.87) yields (3.91).

6. Again apply Fubini. After that we can write

\[ \mathrm{r.h.s.}(3.91) = A\,e^{-\gamma R}\sum_{z\in\partial^+T_R}\int_0^t ds\; E_0\Big( \Xi\big(s,x,z^-,\ell_{T-s}\big) \Big), \tag{3.92} \]

where we define

\[ \Xi(s,x,z^-,\mu) = E_{z^-}\Big( e^{H\diamond(\mu+\ell_s)}\,1\big\{\tfrac1T(\mu+\ell_s)\in A\big\}\,1\{\sigma_R>s,\ Z(s)=x\} \Big). \tag{3.93} \]

7. Next, $Z(s)=x$ implies $\tau\le s$ because $x\in\Lambda$. We may therefore apply assertion (b) in Lemma 6 with $f(z,p) = \delta_x(z)\,1\{p(T_R)=1\}$, to obtain

\[ \Xi(s,x,z^-,\mu) \le A\,e^{-\gamma R}\,\Xi(s,x,0,\mu). \tag{3.94} \]

Combining (3.91–3.94) we arrive at

\[ \mathrm{l.h.s.}(3.84) \le A^2\,e^{-2\gamma R}\sum_{z\in\partial^+T_R}\int_0^t ds\; E_0\Big( \Xi\big(s,x,0,\ell_{T-s}\big) \Big). \tag{3.95} \]

However, using the strong Markov property at time $s$ and doing once more a time reversal of the random walk over the time interval $[0,s]$, we may write

\[ E_0\Big( \Xi\big(s,x,0,\ell_{T-s}\big) \Big) = E_x\Big( e^{H\diamond\ell_T}\,1\big\{\tfrac1T\ell_T\in A\big\}\,1\{\sigma_R>s,\ Z(s)=0\} \Big). \tag{3.96} \]

8. Finally, drop the last indicator to get

\[ \mathrm{l.h.s.}(3.84) \le A^2\,e^{-2\gamma R}\,|\partial^+T_R|\,t\;E_x\Big( e^{H\diamond\ell_T}\,1\big\{\tfrac1T\ell_T\in A\big\} \Big). \tag{3.97} \]

This completes the proof of (3.82).

The proof of (3.83) goes along the same lines (compare with the proof of assertion (b) in Lemma 6). $\Box$

The analogue of Lemma 9 for two random walks is similar. Namely, using (3.83) we get the estimate

\[ \begin{aligned} &E_{x,y}\Big( e^{H\diamond(\ell^1_T+\ell^2_T)}\,1\big\{\tfrac1{2T}(\ell^1_T+\ell^2_T)\in A\big\}\,\big[ 1\{\sigma^1_R\le t\}1\{\tau^1_R\le T\} + 1\{\sigma^2_R\le t\}1\{\tau^2_R\le T\} \big] \Big) \\ &\quad \le 2A^2\,e^{-2\gamma R}\,|\partial^+T_R|\,t\;E_{x,y}\Big( e^{H\diamond(\ell^1_T+\ell^2_T)}\,1\big\{\tfrac1{2T}(\ell^1_T+\ell^2_T)\in A\big\} \Big) \end{aligned} \tag{3.98} \]

(compare with the proof of Lemma 7).


For the final step in the proof of Proposition 6, we recall that $U_\delta(\bar p)\subseteq A$ for $0<\delta\le\delta_0$ (see the proof of Lemma 8) and that $\hat\tau_R = \min\{\sigma^1_R,\sigma^2_R\}$ is the stopping time defined in (2.7). We choose $\Lambda$ so large that

\[ \bar p(\Lambda) > \tfrac12(1+\theta_0) \tag{3.99} \]

(recall that $\theta_0 = \eta < 1$). Then the same inequality holds for all measures in $U_\delta(\bar p)$, provided $\delta\le\delta_0$ and $\delta_0$ is sufficiently small. But now we note that

\[ \tfrac1{2T}(\ell^1_T+\ell^2_T)\in U_\delta(\bar p),\quad \tfrac tT\le\theta_0,\quad \sigma^i_R\le t \ \Longrightarrow\ \tau^i_R\le T \quad (i=1,2). \tag{3.100} \]

Hence we can apply (3.83) and get the claim in Proposition 6.

4 Proof of Propositions 7–9

4.1 Proof of Proposition 7

Let $\bar u^2 = \bar p = w^2/\|w\|_{\ell^2}^2$, i.e., $\bar u = (v/\|v\|_{\ell^2})^{\otimes d}$, with $\bar p$ the unique centered minimizer of ( ) in Section 0.4. To ease the notation we shall write $u$ instead of $\bar u$.

Lemma 10
The semigroup $S = (S(t): t\ge 0)$ associated with the generator $G$ in (1.10) is given by

\[ (S(t)f)(x) = \frac{1}{u(x)}\,E_x\Big( \exp\Big[-\int_0^t ds\,\frac{\Delta u}{u}\big(Z(s)\big)\Big]\,u\big(Z(t)\big)\,f\big(Z(t)\big) \Big) \tag{4.1} \]

and is a strongly continuous contraction semigroup on $\ell^2(\mathbb{Z}^d, u^2)$.

Proof.

Elementary. The r.h.s. of (4.1) is well defined because $u$ is strictly positive everywhere (see Lemma 13 in Section 5.1) and $(\Delta u)/u$ is bounded from below (see (4.6) below). The semigroup $\bar S=(\bar S(t)\colon t\ge 0)$ associated with $\kappa\Delta$ (the generator of our reference random walk) is given by $(\bar S(t)f)(x)=E_x(f(Z(t)))$ and is a strongly continuous contraction semigroup on $\ell^2(\mathbb{Z}^d)$. We compute with the help of (4.1)
$$\begin{aligned}
(Gf)(x) &= \lim_{t\downarrow 0}\frac1t\big[S(t)f-f\big](x)\\
&= \frac{1}{u(x)}\Big(-\kappa(\Delta u)(x)\,f(x) + \lim_{t\downarrow 0}\frac1t\big[\bar S(t)[uf]-[uf]\big](x)\Big)\\
&= \frac{1}{u(x)}\Big(-\kappa(\Delta u)(x)\,f(x) + \kappa(\Delta[uf])(x)\Big)\\
&= \frac{\kappa}{u(x)}\sum_{y\colon|y-x|=1} u(y)\,\big[f(y)-f(x)\big].
\end{aligned}\tag{4.2}$$
Indeed, this coincides with (1.10). Next, the semigroup property $S(s+t)=S(s)S(t)$ follows from (4.1) by using the Markov property of the reference random walk at time $s$. The strong continuity of $S$ follows from the strong continuity of $\bar S$ and the boundedness of the exponential in (4.1). The contraction property of $S$ follows from the inequality
$$\langle f, Gf\rangle_{\ell^2(\mathbb{Z}^d,u^2)} = -\kappa \sum_{\{x,y\}\colon|x-y|=1} u(x)\,u(y)\,\big[f(x)-f(y)\big]^2 \le 0. \tag{4.3}$$
□
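The identity (4.3) can be checked by a short summation-by-parts computation (a sketch, using only the expression for $(Gf)(x)$ derived in the proof above):

```latex
\begin{aligned}
\langle f,Gf\rangle_{\ell^2(\mathbb{Z}^d,u^2)}
&= \sum_{x} u^2(x)\,f(x)\,\frac{\kappa}{u(x)}\sum_{y\colon|y-x|=1} u(y)\,[f(y)-f(x)] \\
&= \kappa \sum_{x}\sum_{y\colon|y-x|=1} u(x)\,u(y)\,f(x)\,[f(y)-f(x)] \\
&= -\frac{\kappa}{2}\sum_{x}\sum_{y\colon|y-x|=1} u(x)\,u(y)\,[f(y)-f(x)]^2 ,
\end{aligned}
```

where the last line follows by averaging the second line with the same expression after swapping $x$ and $y$; summing over unordered pairs $\{x,y\}$ then absorbs the factor $\tfrac12$.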

The above representation leads us to the following.

Lemma 11

Let $P_{x,y}=P_x\otimes P_y$ denote the joint law of the two independent reference random walks $Z^1,Z^2$ starting at $x$ resp. $y$, and let $P^u_{x,y}=P^u_x\otimes P^u_y$ denote the joint law of two independent random walks driven by $G$. Then for any $T\ge 0$
$$\frac{dP^u_{x,y}}{dP_{x,y}}\Big((Z^1(s),Z^2(s))_{s\in[0,T]}\Big) = \frac{u(Z^1(T))\,u(Z^2(T))}{u(x)\,u(y)}\,\exp\Big(-\kappa\int_0^T ds\,\Big[\frac{\Delta u}{u}(Z^1(s)) + \frac{\Delta u}{u}(Z^2(s))\Big]\Big). \tag{4.4}$$

Proof.

Immediate from (4.1). □
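In more detail (a sketch of the "immediate" step): (4.1) identifies, for a single walk, the density of the $G$-walk w.r.t. the reference walk on the time interval $[0,T]$. Writing $E^u_x$ for expectation w.r.t. the walk driven by $G$, we have for bounded cylinder functionals $F$ of the path

```latex
E^{u}_x\Big[F\big((Z(s))_{s\in[0,T]}\big)\Big]
= E_x\Big[F\big((Z(s))_{s\in[0,T]}\big)\,
\frac{u(Z(T))}{u(x)}\,
\exp\Big(-\kappa\int_0^T ds\,\frac{\Delta u}{u}(Z(s))\Big)\Big],
```

first by repeated application of (4.1) combined with the Markov property, and then for general bounded $F$ by a monotone class argument. Taking the product of two such factors for the independent walks $Z^1$ and $Z^2$ yields (4.4).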

Using Lemma 11 we can now perform the absolutely continuous transformation in the expectation appearing in the l.h.s. of (2.9) in Proposition 7. Indeed, recalling that $\ell^i_T(x) = \int_0^T ds\,1\{Z^i(s)=x\}$ ($i=1,2$), we obtain
$$\begin{aligned}
&E_{x,y}\Big(\exp[H(\hat\ell_T)]\,1\Big\{\tfrac{1}{2T}\hat\ell_T \in U_\delta(p)\Big\}\,1\{\hat\tau_R > t\}\Big)\\
&\quad = u(x)\,u(y)\,E^u_{x,y}\Big(\exp[H(\hat\ell_T)]\,\exp\Big[\sum_z \hat\ell_T(z)\,\kappa\,\frac{\Delta u}{u}(z)\Big]\,\frac{1}{u(Z^1(T))\,u(Z^2(T))}\,1\Big\{\tfrac{1}{2T}\hat\ell_T \in U_\delta(p)\Big\}\,1\{\hat\tau_R > t\}\Big),
\end{aligned}\tag{4.5}$$
where $E^u_{x,y}$ denotes expectation w.r.t. the law of the two walks driven by $G$. To complete the proof of Proposition 7, we simply note that

$$\frac{\Delta u}{u}(z) = -2\varrho\,\log u(z) - 2d\varrho\,\log\|v_\varrho\|_{\ell^2} \tag{4.6}$$
as follows from the ground-state equation in Section 0.3 and Proposition 3 via the relation $u = (v_\varrho/\|v_\varrho\|_{\ell^2})^{\otimes d}$.
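For the reader's convenience, here is the computation behind (4.6) (a sketch, using the one-dimensional ground-state equation $\Delta v_\varrho + 2\varrho\,v_\varrho\log v_\varrho = 0$ from Section 0.3): since the $d$-dimensional discrete Laplacian acts coordinatewise on the product function $u=(v_\varrho/\|v_\varrho\|_{\ell^2})^{\otimes d}$, we have for $z=(z_1,\dots,z_d)$

```latex
\frac{\Delta u}{u}(z)
= \sum_{i=1}^{d} \frac{(\Delta v_\varrho)(z_i)}{v_\varrho(z_i)}
= -2\varrho \sum_{i=1}^{d} \log v_\varrho(z_i)
= -2\varrho\,\Big[\log u(z) + d\,\log\|v_\varrho\|_{\ell^2}\Big],
```

where the last equality uses $\log u(z)=\sum_{i=1}^d \log v_\varrho(z_i)-d\log\|v_\varrho\|_{\ell^2}$.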

After substituting (4.6) into the r.h.s. of (4.5) and using the relations $u^2 = p$, $\varrho = \theta/\kappa$, $\hat L_T = \hat\ell_T/2T$ and $\sum_z \hat L_T(z) = 1$, we obtain the r.h.s. of (2.9).
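The substitution can be made explicit (a sketch; we use (4.6) in the form $\kappa\frac{\Delta u}{u}(z) = -2\theta\log u(z) - 2d\theta\log\|v_\varrho\|_{\ell^2}$, together with $\theta=\kappa\varrho$, $\hat\ell_T=2T\hat L_T$, $\sum_z\hat\ell_T(z)=2T$ and $2\log u=\log p$):

```latex
\exp\Big[\sum_z \hat\ell_T(z)\,\kappa\,\frac{\Delta u}{u}(z)\Big]
= \exp\Big[-2\theta \sum_z \hat\ell_T(z)\log u(z) - 4dT\theta\,\log\|v_\varrho\|_{\ell^2}\Big]
= \exp\Big[-2\theta T \sum_z \hat L_T(z)\log p(z) - 4dT\theta\,\log\|v_\varrho\|_{\ell^2}\Big].
```

This is the form in which the local times enter the variational expression on the r.h.s. of (2.9).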

We conclude this section with the following observation.

Lemma 12

The random walk driven by $G$ is ergodic with $u^2$ as the reversible equilibrium.
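Lemma 12 can be verified directly (a sketch): by the form of the generator $G$ computed in the proof of Lemma 10, the walk jumps from $x$ to a neighbouring site $y$ at rate $\kappa\,u(y)/u(x)$, so detailed balance w.r.t. $u^2$ holds:

```latex
u^2(x)\cdot \kappa\,\frac{u(y)}{u(x)}
= \kappa\,u(x)\,u(y)
= u^2(y)\cdot \kappa\,\frac{u(x)}{u(y)},
\qquad |x-y|=1 .
```

Since $u$ is strictly positive everywhere, the walk is irreducible, and since $\sum_z u^2(z)=\sum_z p(z)=1$, the measure $u^2$ is a reversible probability measure; ergodicity follows.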
