Fluctuations in the Hopfield model at the critical temperature


Citation for published version (APA):

Gentz, B., & Löwe, M. (1998). Fluctuations in the Hopfield model at the critical temperature. (Report Eurandom; Vol. 98003). Technische Universiteit Eindhoven.

Document status and date: Published: 01/01/1998
Document Version: Publisher's PDF, also known as Version of Record (includes final page, issue and volume numbers)



Report 98-003

Fluctuations in the Hopfield Model at the Critical Temperature

B. Gentz and M. Löwe

FLUCTUATIONS IN THE HOPFIELD MODEL AT THE CRITICAL TEMPERATURE

BARBARA GENTZ AND MATTHIAS LÖWE

ABSTRACT. We investigate the fluctuations of the order parameter in the Hopfield model of spin glasses and neural networks at the critical temperature $1/\beta_c = 1$. The number of patterns $M(N)$ is allowed to grow with the number $N$ of spins, but the growth rate is subject to the constraint $M(N)^{15}/N \to 0$. As the system size $N$ increases, on a set of large probability the distribution of the appropriately scaled order parameter under the Gibbs measure comes arbitrarily close (in a metric which generates the weak topology) to a non-Gaussian measure which depends on the realization of the random patterns. This random measure is given explicitly by its (random) density.

1. INTRODUCTION

In 1977, Pastur and Figotin introduced and discussed a disordered version of the Curie-Weiss model of ferromagnets (see [29], [30]). Later their model became popular under the name Hopfield model because of its impact on the theory of neural networks, achieved by its rediscovery and reinterpretation by Hopfield [21]. This versatility of the Hopfield model, namely that it can be regarded as a very simple model of the brain on the one hand, and as a so-called spin glass (i.e., a disordered spin system) on the other, has been the driving force for its popularity and for the efforts which have been undertaken to obtain a better understanding of the model. The neural network point of view has been taken in the original paper by Hopfield [21], for instance, as well as in the papers [27], [28], [23], [25], [26], and many others, while in the seminal paper [29], as well as in [7], [8], [9], [3], [16], [17], [4], [5], and [31], the statistical mechanics and thus the spin-glass aspect of the model have been in the centre of interest. Of course, it would be very difficult to give a complete list of all important papers in this area. For an overview of recent results on the Hopfield model and related models, and of results which deeply influenced our understanding of the model and even were able to justify some of the physicists' predictions (see, e.g., [1]), we refer the reader to [31] and [11] and, in particular, to [6] therein.

To be more specific, let us now define the Hopfield model. First of all we choose two numbers $N, M \in \mathbb{N}$, which will denote the number of spins or "neurons" and the number of so-called patterns, respectively. In contrast to a previous paper [20], we shall now treat the case where $M = M(N)$ may depend on $N$. Henceforth, we shall write $M$ and thus drop its dependency on $N$ whenever there is no danger of confusion, and we shall refer explicitly to this dependency only when necessary. The

Date: December 11, 1998.

1991 Mathematics Subject Classification. 60F05, 60K35 (primary), 82C32 (secondary).

Key words and phrases. Hopfield model, spin glasses, neural networks, random disorder, limit theorems, non-Gaussian fluctuations, critical temperature.

random function

$$H_N(\sigma) = -\frac{1}{2N}\sum_{\mu=1}^{M}\sum_{i,j=1}^{N}\xi_i^\mu \xi_j^\mu \sigma_i\sigma_j, \qquad \sigma \in \{-1,+1\}^N,    (1.1)$$

denotes the Hamiltonian of the Hopfield model, which is a function of the spin configuration $\sigma \in \{-1,+1\}^N$. The strength of the pair interaction is random, as the variables $\xi_i^\mu \in \{-1,+1\}$, with $\xi_i^\mu$ denoting the $i$th component of the $\mu$th pattern, are random. In this paper we shall assume that the $\xi_i^\mu$ are i.i.d. unbiased random variables, i.e., that at given system size $N$, the family of random variables

$$\{\xi_i^\mu : i \in \{1,\ldots,N\},\ \mu \in \{1,\ldots,M(N)\}\}$$

is independent with

$$\mathbb{P}(\xi_i^\mu = +1) = \mathbb{P}(\xi_i^\mu = -1) = \tfrac{1}{2}$$

for all $i$ and $\mu$. Expectations with respect to $\mathbb{P}$ will be denoted by $\mathbb{E}$. Whenever convenient, we shall write $\xi$ for the $(N \times M)$-matrix consisting of the $(\xi_i^\mu)_{i,\mu}$, while $\xi_i = (\xi_i^1,\ldots,\xi_i^M)$ and $\xi^\mu = (\xi_1^\mu,\ldots,\xi_N^\mu)$, respectively, stand for the $i$th row and the $\mu$th column of this matrix.

The spin variables are assumed to be independent with an unbiased a priori distribution $P$, i.e.,

$$P(\sigma_i = +1) = P(\sigma_i = -1) = \tfrac{1}{2}$$

for all $i \in \mathbb{N}$. In addition, we shall assume throughout this paper that the family $\{\xi_i^\mu : i \in \{1,\ldots,N\},\ \mu \in \{1,\ldots,M\}\}$ is independent of the family of the spin variables $\{\sigma_i : i \in \{1,\ldots,N\}\}$.

The Hopfield model at temperature $1/\beta \in (0,\infty)$ may now be identified with the Gibbs measure with respect to the Hamiltonian (1.1), i.e.,

$$\varrho_{N,\beta}(\sigma) = 2^{-N}\exp\{-\beta H_N(\sigma)\}/Z_{N,\beta}, \qquad \sigma \in \{-1,+1\}^N,    (1.2)$$

where the so-called partition function

$$Z_{N,\beta} = \frac{1}{2^N}\sum_{\sigma \in \{-1,+1\}^N} \exp\{-\beta H_N(\sigma)\}    (1.3)$$

is the normalization which makes $\varrho_{N,\beta}$ a probability measure.
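For a small system these objects can be enumerated exactly. The following sketch (sizes, seed, and variable names are our own illustrative choices, not from the paper) builds the Hamiltonian and the Gibbs measure by brute force:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, beta = 8, 2, 1.0                      # illustrative toy sizes (our choice)

# i.i.d. unbiased patterns xi[i, mu] in {-1, +1}
xi = rng.choice([-1, 1], size=(N, M))

def hamiltonian(sigma):
    # H_N(sigma) = -(1/(2N)) sum_mu (sum_i xi_i^mu sigma_i)^2,
    # which equals the double sum over i and j in (1.1)
    return -np.sum((xi.T @ sigma) ** 2) / (2 * N)

# enumerate all 2^N spin configurations
configs = np.array([[1 if (k >> i) & 1 else -1 for i in range(N)]
                    for k in range(2 ** N)])
weights = np.exp([-beta * hamiltonian(s) for s in configs])
Z = weights.sum() / 2 ** N                  # partition function, cf. (1.3)
gibbs = weights / 2 ** N / Z                # Gibbs probabilities, cf. (1.2)
```

Since $H_N$ is invariant under a global spin flip, the resulting probabilities are symmetric under $\sigma \mapsto -\sigma$.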

In order to understand the introduction of the order parameter in the Hopfield model, note that the Hamiltonian (1.1) may be rewritten in the following convenient form as a quadratic functional of the so-called overlap $m_N$:

$$H_N(\sigma) = -\frac{N}{2}\,\|m_N(\sigma)\|_2^2,    (1.4)$$

where

$$m_N(\sigma) = (m_N^\mu(\sigma))_{\mu=1,\ldots,M} \quad\text{with}\quad m_N^\mu(\sigma) = \frac{1}{N}\sum_{i=1}^{N}\xi_i^\mu\sigma_i.    (1.5)$$

Here and below, $\|\cdot\|_2$ denotes the Euclidean norm in $\mathbb{R}^M$. The $\mu$th component $m_N^\mu$ of the overlap $m_N$ compares the spin configuration to the $\mu$th pattern $\xi^\mu$ in such a way that a large absolute value of $m_N^\mu(\sigma)$ means that the spin configuration $\sigma$ largely agrees with $\xi^\mu$ (or its negative). These configurations are of low energy according to (1.4). Therefore, the overlap is an important quantity for the investigation of the

Hopfield model, a so-called order parameter. Its distribution under $\varrho_{N,\beta}$ has been of major interest in the study of the model and will also be central in this paper. In [7], Bovier, Gayrard, and Picco established a law of large numbers for the distribution of the overlap under the Gibbs measure $\varrho_{N,\beta}$ which holds for $\mathbb{P}$-almost all realizations of the random patterns $\xi$. They showed that, whenever $M(N)/N \to 0$, for $\mathbb{P}$-almost all $\xi$, the distribution of the overlap $m_N$ under the Gibbs measure with external magnetic field of strength $h \neq 0$ in the direction of the first unit vector $e_1$ of the canonical basis in $\mathbb{R}^M$ converges weakly towards the Dirac measure $\delta_{\pm z(\beta)e_1}$ concentrated in $\pm z(\beta)e_1$ as first the system size $N \to \infty$ and then the strength $h \to 0\pm$. Here $z(\beta)$ denotes the largest root $z \in [0,1)$ of the Curie-Weiss equation

$$z = \tanh(\beta z).$$

Note that $z(\beta) = 0$ for $\beta \le \beta_c = 1$, so that $\delta_0$ is the unique limiting measure in the high-temperature region $\beta \le \beta_c = 1$, whereas $z(\beta) > 0$ for $\beta > \beta_c$, so that in this regime there is no unique limiting point.
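The Curie-Weiss equation is easy to solve numerically. The sketch below (a minimal illustration, not from the paper) finds the largest root $z(\beta)$ by bisection and confirms the phase transition at $\beta_c = 1$:

```python
import math

def z_of_beta(beta, tol=1e-12):
    """Largest root z in [0, 1) of z = tanh(beta * z)."""
    g = lambda z: math.tanh(beta * z) - z
    # For beta <= 1, tanh(beta*z) < z for all z > 0, so the largest root is 0.
    if g(1e-8) <= 0:
        return 0.0
    lo, hi = 1e-8, 1.0           # g > 0 at lo, g < 0 at hi: bisect
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return lo
```

For $\beta = 2$ this gives $z(2) \approx 0.9575$, while any $\beta \le 1$ returns $0$.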

Note that this result strongly resembles the law of large numbers for the mean magnetization in the Curie-Weiss model, see [14, Theorem IV.4.1(a)], for example. As already explained at the beginning, this is, of course, not accidental, as the Hopfield model can be considered as a disordered version of the Curie-Weiss model and, indeed, for $M = 1$ the Hopfield model and the Curie-Weiss model agree up to a simple "gauge transformation" (i.e., replacing $\sigma_i$ by $\sigma_i\xi_i^1$).

On the scale of fluctuations, when analyzing the distribution of $\sqrt{N}(m_N - z(\beta)e_1)$, the character of the disorder becomes visible. Indeed, for $M/N \to 0$ and $(\beta, h) \neq (1, 0)$, the overlap satisfies $\mathbb{P}$-almost surely a central limit theorem with the covariance matrix which could be expected from the analogy with the Curie-Weiss model, and with a centring which, in the cases $\beta > 1$ or $h \neq 0$, differs from the naively expected one by a $\xi$-dependent adjustment; see [16], [17], [19] and Bovier and Gayrard [4].

As shown in a previous paper [20], the influence of the disorder is even stronger when investigating the fluctuations of the overlap at the critical temperature $1/\beta = 1/\beta_c = 1$, even when $M(N)$ remains bounded. Recall that in the Curie-Weiss model the criticality at temperature $1/\beta = 1$ can also be seen as the breakdown of the central limit theorem. As a matter of fact, at the critical temperature the magnetization in the Curie-Weiss model, scaled by a factor $N^{1/4}$, converges weakly towards a random variable given by its density with respect to Lebesgue measure, which is proportional to $\exp(-x^4/12)$; cf. [14, Theorem V.9.5]. In [20] we showed that in the Hopfield model with finitely many patterns (i.e., with $M$ not depending on $N$) the distribution of the overlap, scaled by the same factor $N^{1/4}$ and regarded as a random variable $Q_N$ taking values in the Polish space $\mathcal{M}_1(\mathbb{R}^M)$ of probability measures on $\mathbb{R}^M$, converges weakly (with respect to $\mathbb{P}$) to a limiting random measure $Q_M$. This limiting random measure $Q_M$ is given by its (random) density with respect to the $M$-dimensional Lebesgue measure, which is proportional to

$$\exp\Big(-\frac{1}{12}\sum_{\mu=1}^{M} x_\mu^4 - \frac{1}{2}\sum_{1\le\mu<\nu\le M} x_\mu^2 x_\nu^2 + \sum_{1\le\mu<\nu\le M} \eta_{\mu,\nu}\, x_\mu x_\nu\Big),    (1.6)$$

where $\eta$ is an $M(M-1)/2$-dimensional Gaussian random variable with mean zero and the covariance matrix being the identity matrix, namely, $\Sigma = (\Sigma_{(\mu,\nu),(\mu',\nu')})$ with

$$\Sigma_{(\mu,\nu),(\mu',\nu')} = \begin{cases} 1, & \text{if } (\mu,\nu) = (\mu',\nu'), \\ 0, & \text{otherwise}, \end{cases}$$

for $1 \le \mu < \nu \le M$ and $1 \le \mu' < \nu' \le M$.
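For $M = 2$ the limiting density involves a single Gaussian $\eta_{1,2}$ and can be normalized on a grid. In the sketch below, the coefficient $1/2$ on the cross term $x_1^2 x_2^2$ follows our reading of the garbled display (1.6); all sizes and the seed are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
eta = rng.standard_normal()                 # the single eta_{1,2} for M = 2

def log_density(x1, x2):
    # unnormalized log-density of the limiting random measure, cf. (1.6)
    return (-(x1 ** 4 + x2 ** 4) / 12.0
            - 0.5 * x1 ** 2 * x2 ** 2
            + eta * x1 * x2)

xs = np.linspace(-6.0, 6.0, 601)
X1, X2 = np.meshgrid(xs, xs)
dens = np.exp(log_density(X1, X2))
Z = dens.sum() * (xs[1] - xs[0]) ** 2       # grid normalization constant
p = dens / Z                                # a valid (random) probability density
```

The quartic terms make the density integrable for every realization of $\eta$, and the density is symmetric under $x \mapsto -x$.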

This shows that even for finite $M$ at the critical temperature $1/\beta = 1$, the fluctuations of the overlap depend strongly on the random disorder, as even the distribution of the limiting fluctuations is random. Even to formulate the corresponding result for the case where the number of patterns $M(N)$ is actually growing with $N$ seemed to be difficult since, on the one hand, we do not have an "infinite-dimensional Lebesgue measure" as reference measure and, on the other hand, we cannot work with finite-dimensional projections (as in the central limit theorem) either, since the "mixed terms" $\sum_{1\le\mu<\nu\le M} \eta_{\mu,\nu} x_\mu x_\nu$ tend to "glue" together the coordinates.

In this paper we circumvent these difficulties by not stating a limit theorem but by showing instead that the distance between the distribution $Q_N$ of the scaled overlap and the random measure $Q_M$ becomes small with high probability for large $N$. More precisely, we shall show, under the constraint $M^{15}/N \to 0$ on the growth rate of $M(N)$, that for each large enough $N$ there exists a set of $\xi$'s of probability larger than $1 - \exp\{-M/L\}$ (with some constant $L > 0$) on which the distance between $Q_N$ and $Q_M$ is smaller than $\varepsilon_N \searrow 0$.

This paper has three more sections. Section 2 contains the explicit statement of the result concerning the non-Gaussian fluctuations of the overlap at $\beta = 1$ for the Hopfield model with a growing number of patterns. Section 3 is devoted to one of our basic tools, a multidimensional version of a strong approximation result of Komlós, Major and Tusnády [22], which allows us to control the difference between a sum of i.i.d. random variables and a sum of i.i.d. Gaussian random variables with the same covariance matrix. These results go back to Zaitsev [32], [33], Einmahl [12] and Einmahl and Mason [13]. They also proved useful in [10]. Section 4, finally, is devoted to the proof, which is based on the Hubbard-Stratonovich transform of the measures of interest, together with a Taylor expansion of the resulting density, a saddle point approximation, as well as the strong Gaussian approximation mentioned before.

Acknowledgement. We are grateful to Anton Bovier for bringing the strong Gaussian approximation to our attention and, in particular, for sharing the results of [10] with us prior to publication. We benefited from interesting discussions with him. The results presented here were obtained while the second author was visiting the WIAS. He thanks the WIAS for its hospitality.

2. STATEMENT OF RESULTS

This section contains the mathematically precise statement of the result announced in the introduction. We shall state the theorem only for the case of $\beta = \beta_c = 1$ being fixed. In [20], where we considered $M$ independent of $N$ only, we also treated the case of a variable temperature $\beta_N$ converging to $\beta_c = 1$ as $N \to \infty$. It turned out that for $\beta_N$ converging to $\beta_c$ faster than $1/\sqrt{N}$ (recall that $M$ was chosen as a constant), the limiting distribution is the same, while for $\beta_N$ converging to $\beta_c$ slower than $1/\sqrt{N}$, we have a central-limit-theorem type result, and at "the borderline", i.e., when $\beta_N - \beta_c$ is of the same order as $1/\sqrt{N}$, one can see the influence of both possible limiting distributions.

In the present setting, we consider such an extension of our results to variable $\beta_N$ a basically technical exercise. Therefore, we shall concentrate on the most interesting case, which allows us to present streamlined proofs.

In general, we shall assume that the pattern matrix $\xi$ lives on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ that is rich enough to allow the strong-approximation results stated in Section 3. The pattern matrix has to be viewed as a random variable on $(\Omega, \mathcal{F}, \mathbb{P})$, but with slight abuse of notation, we shall formulate exceptional sets as sets of $\xi$-variables by writing $\{\xi : F(\xi) \in A\}$, which is to be understood in the natural way as $\{\omega \in \Omega : F(\xi(\omega)) \in A\}$.

Let

$$Q_N = \varrho_{N,1}\circ(N^{1/4} m_N)^{-1}    (2.1)$$

denote the distribution of the scaled overlap under the Gibbs measure $\varrho_{N,1}$. By $d$ we denote the metric

$$d(P_1, P_2) = \sup\Big\{\Big|\int f\,dP_1 - \int f\,dP_2\Big| : f \in \mathcal{G}\Big\}    (2.2)$$

with

$$\mathcal{G} = \Big\{f : \mathbb{R}^M \to \mathbb{R} : \sup_{x,y\in\mathbb{R}^M}|f(x) - f(y)| \le 1 \ \text{and}\ |f(x) - f(y)| \le \|x - y\|_2 \ \text{for all } x, y \in \mathbb{R}^M\Big\}    (2.3)$$

on the set $\mathcal{M}_1(\mathbb{R}^M)$ of all probability measures on $\mathbb{R}^M$. According to [2, Corollary 2.8] this metric generates the weak topology on $\mathcal{M}_1(\mathbb{R}^M)$. The result we are going to prove is the following.

Theorem 2.1. Let $\beta = \beta_c = 1$. Assume that $M(N)^{15}/N \to 0$. Then there exist a constant $L > 0$, a set $\Omega(N) \subset \Omega$ with probability

$$\mathbb{P}(\Omega(N)) \ge 1 - e^{-M(N)/L},    (2.4)$$

an $\bar{N} \in \mathbb{N}$ and a sequence $(\varepsilon_N)_{N\in\mathbb{N}}$, satisfying $\varepsilon_N \searrow 0$ as $N \to \infty$, such that for every $N \ge \bar{N}$, there exists a set $(\eta_{\mu,\nu})_{1\le\mu<\nu\le M}$ of $M(M-1)/2$ independent standard Gaussian random variables such that the random measure $Q_M$, which is given by its (random) density

$$x \mapsto \exp\{\Psi_M(x)\} \Big/ \int_{\mathbb{R}^M} \exp\{\Psi_M(x)\}\,dx    (2.5)$$

with

$$\Psi_M(x) = -\frac{1}{12}\sum_{\mu=1}^{M} x_\mu^4 - \frac{1}{2}\sum_{1\le\mu<\nu\le M} x_\mu^2 x_\nu^2 + \sum_{1\le\mu<\nu\le M} \eta_{\mu,\nu}\, x_\mu x_\nu,    (2.6)$$

satisfies

$$d(Q_N, Q_M) \le \varepsilon_N    (2.7)$$

for all $\xi \in \Omega(N)$.

Remarks 2.2. 1. Note that the scaling factor $N^{1/4}$ for the overlap vector is the same as the one for the mean magnetization in the Curie-Weiss model at the critical temperature, see [14, Theorem V.9.5]. Similarly to that case (and, of course, similarly to the Hopfield model with a finite number of patterns), the distribution of the overlap is close to a non-Gaussian distribution.

2. Our condition $M(N)^{15}/N \to 0$ on the growth rate of $M$ is, of course, embarrassing. It is due to the simultaneous strong Gaussian approximation of $M(M-1)/2$ variables. Any proof using the strong Gaussian approximation as provided in [32] seems to produce conditions which are far off any reasonable condition on the growth rate.

3. In fact, we are going to show that, under the conditions of the theorem,

$$\Big|\int_{\mathbb{R}^M} f(x)\,Q_N(dx) - \int_{\mathbb{R}^M} f(x)\,Q_M(dx)\Big| \le \varepsilon_N\,(K_f + \|f\|_\infty)    (2.8)$$

holds for all $\xi \in \Omega(N)$ and all $f \in \mathrm{BL}(\mathbb{R}^M,\mathbb{R})$, where $\mathrm{BL}(\mathbb{R}^M,\mathbb{R})$ denotes the set of all bounded, Lipschitz continuous functions from $\mathbb{R}^M$ to $\mathbb{R}$, $K_f$ denotes the Lipschitz constant of $f$ and $\|f\|_\infty = \sup_{x\in\mathbb{R}^M}|f(x)|$. This implies the theorem by (4.2) below.

3. STRONG GAUSSIAN APPROXIMATION

In this section we are going to collect some facts about the so-called strong Gaussian approximation and apply them to the situation of our interest. The problem of the Gaussian approximation is quickly stated. Given a sequence $(X_i)_{i\in\mathbb{N}}$ of i.i.d. random vectors in $\mathbb{R}^d$, we know that $\sum_{i=1}^{n} X_i$, scaled appropriately, converges in distribution to a Gaussian random vector $Y$. This vector can obviously be decomposed again into a sum of "small" Gaussians. The question is now whether we can also find Gaussian vectors $Y_i$ such that the difference

$$\Delta(X, Y, n) = \max_{1\le k\le n}\Big\|\sum_{i=1}^{k} X_i - \sum_{i=1}^{k} Y_i\Big\|_2    (3.1)$$

becomes small in a suitable sense.

This problem was first stated and treated in a one-dimensional setting by Komlós, Major and Tusnády in [22]. The $d$-dimensional extension is due to Zaitsev [32] and Einmahl [12]. For a thorough treatment of the problem, we refer the reader to [33]. The form of the strong approximation we recall below proved useful in [10] and goes back to Einmahl and Mason [13].

Let $P_1$ and $P_2$ be two probability measures on $\mathbb{R}^d$ (endowed with the Borel $\sigma$-field), and for $\delta > 0$ let

$$\lambda(P_1, P_2, \delta) = \sup\{P_1(A) - P_2(A^\delta),\ P_2(A) - P_1(A^\delta) : A \subset \mathbb{R}^d \text{ closed}\}.    (3.2)$$

Here

$$A^\delta = \{x \in \mathbb{R}^d : \exists y \in A \text{ such that } \|x - y\|_2 \le \delta\}    (3.3)$$

is the closed $\delta$-neighborhood of the set $A$.

Furthermore, let $X_1, \ldots, X_n$ be $n \in \mathbb{N}$ independent random vectors in $\mathbb{R}^d$ with $\mathbb{E}X_1 = 0$ and finite variance which satisfy the Bernstein-type condition

$$\big|\mathbb{E}\langle s, X_j\rangle^2\langle t, X_j\rangle^{m-2}\big| \le \frac{m!}{2}\,\tau^{m-2}\,\|t\|_2^{m-2}\,\mathbb{E}\langle s, X_j\rangle^2    (3.4)$$

with some $\tau$, for all $j$, all $m \ge 3$ and all $s, t \in \mathbb{R}^d$.

Under the condition (3.4), Zaitsev proved in [32, Theorem 1.1] the following bound on $\lambda(P_{1,n}, P_{2,n}, \delta)$, where $P_{1,n}$ is the distribution of $X_1 + \cdots + X_n$ and $P_{2,n}$ is the $d$-dimensional normal distribution with mean zero and covariance matrix $\mathrm{cov}(X_1) + \cdots + \mathrm{cov}(X_n)$ (see also [13]).

Fact 3.1. For all $n \ge 1$ and all $\delta \ge 0$,

$$\lambda(P_{1,n}, P_{2,n}, \delta) \le C_{1,d}\exp\{-\delta/(C_{2,d}\,\tau)\}    (3.5)$$

with $C_{1,d} = c_1 d^{5/2}$ and $C_{2,d} = c_2 d^{5/2}$ for numerical constants $c_1, c_2 > 0$.

As in [13], the following fact follows from Fact 3.1.

Fact 3.2. Let $X_1, \ldots, X_n$ be independent mean zero random vectors satisfying the Bernstein-type condition (3.4). If the underlying probability space is rich enough, then, for each $\delta \ge 0$, there exist independent Gaussian random vectors $Y_1, \ldots, Y_n$ with mean zero and $\mathrm{cov}(Y_i) = \mathrm{cov}(X_i)$ for all $i \in \{1, \ldots, n\}$, such that

$$\mathbb{P}\{\Delta(X, Y, n) \ge \delta\} \le C_{1,d}\exp\{-\delta/(C_{2,d}\,\tau)\},    (3.6)$$

where the constants $C_{1,d}$, $C_{2,d}$ are the same as in Fact 3.1.

Corollary 3.3. In the situation of Fact 3.2, for each $\delta \ge 0$, there exists a mean zero Gaussian random vector $Y$ with covariance matrix $\mathrm{cov}(Y) = \sum_{i=1}^{n}\mathrm{cov}(X_i)$ such that

$$\mathbb{P}\Big\{\Big\|\sum_{i=1}^{n} X_i - Y\Big\|_2 \ge \delta\Big\} \le C_{1,d}\exp\{-\delta/(C_{2,d}\,\tau)\}    (3.7)$$

with the same constants $C_{1,d}$, $C_{2,d}$.

In our situation we want to apply Fact 3.2 and, in particular, Corollary 3.3 to the $M(M-1)/2$-dimensional vectors that contain the information on the mutual overlaps of the patterns in the $i$th component. More precisely, we will choose $d = M(M-1)/2$, $n = N$, and

$$X_i = (\xi_i^\mu \xi_i^\nu)_{1\le\mu<\nu\le M}$$

in order to replace $\frac{1}{\sqrt{N}}\sum_{i=1}^{N} X_i$ by a Gaussian random vector $\eta = (\eta_{\mu,\nu})_{1\le\mu<\nu\le M}$. Observe that due to the independence of the $\xi_i^\mu$, we obtain $\mathrm{cov}(X_i) = \mathrm{Id}$ for each $i$, and hence $\eta$ will also have identity covariance matrix. (By a slight abuse of notation, we denote the identity matrix by $\mathrm{Id}$ whatever the dimension of the underlying space $\mathbb{R}^d$ is.) In order to apply Corollary 3.3, we have to check the Bernstein-type condition (3.4). This is done in the following lemma.

Lemma 3.4. In the above setting, $X_1, \ldots, X_n$ fulfill the Bernstein-type condition (3.4) with $\tau = M$.

Proof. The coordinates of $X_i$ are $\pm 1$-valued, so $|\langle t, X_i\rangle| \le \|t\|_2\,\|X_i\|_2$ with $\|X_i\|_2 = \sqrt{M(M-1)/2} \le M$. Thus, for any choice of $s, t \in \mathbb{R}^d$ and all $m \ge 3$,

$$\big|\mathbb{E}\langle s, X_i\rangle^2\langle t, X_i\rangle^{m-2}\big| \le \tau^{m-2}\,\|t\|_2^{m-2}\,\mathbb{E}\langle s, X_i\rangle^2 \le \frac{1}{2}\,m!\,\tau^{m-2}\,\|t\|_2^{m-2}\,\mathbb{E}\langle s, X_i\rangle^2,$$

where we have already chosen $\tau = M$. □

Now we are ready to deduce the desired approximation.

Corollary 3.5. If $(\Omega, \mathcal{F}, \mathbb{P})$ is rich enough, then for each $N$ and $\delta \ge 0$, there exist a mean zero Gaussian random variable $\eta$ with covariance matrix $\mathrm{Id}$ and numerical constants $c_1, c_2 > 0$, such that

$$\mathbb{P}\Big\{\Big\|\frac{1}{\sqrt{N}}\sum_{i=1}^{N} X_i - \eta\Big\|_2 \ge \delta\Big\} \le c_1\Big(\frac{M(M-1)}{2}\Big)^{5/2}\exp\Big\{-\frac{\sqrt{N}\,\delta}{c_2\,(M(M-1)/2)^{5/2}\,M}\Big\}.    (3.8)$$

Proof. Apply Lemma 3.4 and Corollary 3.3 with $\tau = M$. □

Remark 3.6. Observe that $\delta$ in (3.8) may, and in our applications indeed will, depend on $N$ and $M$.
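The vectors $X_i$ can be simulated directly. The following Monte Carlo sketch (toy sizes and seed are our choice) confirms that $\frac{1}{\sqrt{N}}\sum_{i=1}^N X_i$ has mean zero and approximately identity covariance, as claimed above:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 200, 4
d = M * (M - 1) // 2                        # dimension of the X_i
iu = np.triu_indices(M, k=1)                # index pairs (mu, nu) with mu < nu

reps = 2000
samples = np.empty((reps, d))
for r in range(reps):
    xi = rng.choice([-1, 1], size=(N, M))
    # X_i = (xi_i^mu * xi_i^nu)_{mu < nu}; scaled sum as in Corollary 3.5
    prods = (xi[:, :, None] * xi[:, None, :])[:, iu[0], iu[1]]
    samples[r] = prods.sum(axis=0) / np.sqrt(N)

emp_cov = np.cov(samples.T)                 # should be close to the identity
```

Distinct coordinate pairs share no common factor in expectation, which is why the empirical covariance is close to $\mathrm{Id}$.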

4. PROOFS

To prove Theorem 2.1, we need to show that for large system size $N$ the distribution $Q_N$ of the scaled overlap under the Gibbs measure $\varrho_{N,1}$ is close to the random measure $Q_M$ with respect to the metric $d$ on a set of large $\mathbb{P}$-measure. First we show that $Q_N$ and its smoothed version obtained by a Hubbard-Stratonovich transform are close, so that we may investigate the Hubbard-Stratonovich transform instead of the measure itself. We recall the Hubbard-Stratonovich transform of $Q_N$ from [20]. The core of the proof is the investigation of the density of this Hubbard-Stratonovich transform by an adaptation of Laplace's method.

Notation 4.1. We denote by $\mu * \nu$ the convolution of two measures $\mu$ and $\nu$.

Lemma 4.2. For all $M \ge 8$, all $f \in \mathrm{BL}(\mathbb{R}^M,\mathbb{R})$ and all probability measures $Q$ on $\mathbb{R}^M$,

$$\Big|\int f\,d\big(Q * \mathcal{N}(0, N^{-1/2}\,\mathrm{Id})\big) - \int f\,dQ\Big| \le 2\sqrt{2}\,K_f\sqrt{M/\sqrt{N}} + \|f\|_\infty\, e^{-M},    (4.1)$$

where $K_f$ denotes again the Lipschitz constant of $f$ and $\|f\|_\infty = \sup_{x\in\mathbb{R}^M}|f(x)|$ as before.

Now, with $\mathcal{G}_0 = \mathcal{G} \cap \{f : f(0) = 0\}$ we have

$$d(P_1, P_2) = \sup\Big\{\Big|\int f\,dP_1 - \int f\,dP_2\Big| : f \in \mathcal{G}_0\Big\}    (4.2)$$

and

$$\mathcal{G}_0 \subset \mathrm{BL}(\mathbb{R}^M,\mathbb{R}) \quad\text{with } K_f \le 1 \text{ and } \|f\|_\infty \le 1 \text{ for every } f \in \mathcal{G}_0.    (4.3)$$

Therefore, the following corollary is an immediate consequence of the preceding lemma.

Corollary 4.3. For all $M \ge 8$ and all probability measures $Q$ on $\mathbb{R}^M$,

$$d\big(Q * \mathcal{N}(0, N^{-1/2}\,\mathrm{Id}),\ Q\big) \le 2\sqrt{2}\,\sqrt{M/\sqrt{N}} + e^{-M}.    (4.4)$$

Proof of Lemma 4.2. Let $f \in \mathrm{BL}(\mathbb{R}^M,\mathbb{R})$ and let $Q$ be an arbitrary probability measure on $\mathbb{R}^M$. Then, for $\delta > 0$,

$$\Big|\int f\,d\big(Q * \mathcal{N}(0, N^{-1/2}\,\mathrm{Id})\big) - \int f\,dQ\Big| \le \iint \mathbb{1}_{B(0,\delta)}(x)\,|f(x+y) - f(y)|\;Q(dy)\,\mathcal{N}(0, N^{-1/2}\,\mathrm{Id})(dx)$$
$$\qquad + 2\|f\|_\infty\Big(\frac{\sqrt{N}}{2\pi}\Big)^{M/2}\int \mathbb{1}_{B(0,\delta)^c}(x)\exp\Big\{-\frac{\sqrt{N}}{2}\|x\|_2^2\Big\}\,dx \le K_f\,\delta + 2\|f\|_\infty\,\gamma_M\big(B(0, \delta N^{1/4})^c\big),    (4.5)$$

where $\gamma_M$ denotes the $M$-dimensional Gaussian measure with mean zero and the covariance matrix being the identity matrix. The radius $\tau_M$ satisfying $\gamma_M(B(0, \tau_M)) = 1/2$ is bounded by $\sqrt{2M}$ for $M \ge 8$, cf. [18, Equation (4.4)]. Choosing $\delta = 2\sqrt{2M}/N^{1/4}$,

$$\gamma_M\big(B(0, \delta N^{1/4})^c\big) \le \frac{1}{2}\exp\Big\{-\frac{1}{2}\big[N^{1/4}\delta - \tau_M\big]^2\Big\} \le \frac{1}{2}\,e^{-M}    (4.6)$$

follows by [24, Theorem 1.2]. This concludes the proof. □
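The mechanism behind Lemma 4.2 is that convolving with a narrow Gaussian moves the integral of a Lipschitz function by at most the typical noise magnitude. A crude Monte Carlo illustration (toy sizes, a stand-in measure $Q$, and the simplified bound $K_f\sqrt{\sigma^2 M}$ are our own choices, not the lemma's sharper estimate):

```python
import numpy as np

rng = np.random.default_rng(3)
M, Nsize = 3, 100                            # toy dimensions (our choice)
sigma2 = Nsize ** -0.5                       # covariance N^{-1/2} Id as in the lemma

def f(x):                                    # bounded (by 1), 1-Lipschitz test function
    return np.minimum(np.linalg.norm(x, axis=-1), 1.0)

pts = rng.standard_normal((20000, M))        # atoms of a stand-in measure Q
noise = rng.standard_normal((20000, M)) * np.sqrt(sigma2)

lhs = abs(f(pts + noise).mean() - f(pts).mean())
crude_bound = np.sqrt(sigma2 * M)            # K_f * E||noise||_2 <= K_f * sqrt(sigma^2 M)
```

Since $|f(x+z) - f(x)| \le \|z\|_2$ pointwise, the paired Monte Carlo difference can never exceed the crude bound.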

The Hubbard-Stratonovich transform of the distribution of the scaled overlap is given by its density with respect to Lebesgue measure.

Lemma 4.4. Let $0 < \beta < \infty$ and $a > 0$. Then the convolution

$$\chi_{N,\beta,a} = Q_N * \mathcal{N}\Big(0, \frac{a}{\beta N}\,\mathrm{Id}\Big)    (4.7)$$

of $Q_N = \varrho_{N,\beta}\circ(\sqrt{a}\,m_N)^{-1}$ with the $M$-dimensional Gaussian distribution with mean zero and covariance matrix $\frac{a}{\beta N}\mathrm{Id}$ is the random measure on $\mathbb{R}^M$ which is given by the (random) density

$$f_{N,\beta,a}(x) = \frac{\exp\{-N\beta\,\Phi_{N,\beta}(x/\sqrt{a})\}}{\int_{\mathbb{R}^M}\exp\{-N\beta\,\Phi_{N,\beta}(x/\sqrt{a})\}\,dx}, \qquad x \in \mathbb{R}^M,    (4.8)$$

with respect to the $M$-dimensional Lebesgue measure, where

$$\Phi_{N,\beta}(x) = \frac{1}{2}\|x\|_2^2 - \frac{1}{\beta N}\sum_{i=1}^{N}\log\cosh\big(\beta\langle x, \xi_i\rangle\big)    (4.9)$$

depends on the random patterns. Here $\langle\cdot,\cdot\rangle$ stands for the inner product in $\mathbb{R}^M$.

We omit the proof, as it follows by a straightforward calculation similar to the ones given in [7, Lemma 2.2] or [15, Lemma 3.3].
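For tiny $N$ and $M$ the identity of Lemma 4.4 can be verified by brute force: the convolution is an explicit mixture of $2^N$ Gaussians, and its density ratios must agree with the formula (4.8)-(4.9). The covariance $\frac{a}{\beta N}\mathrm{Id}$ is our reconstruction of the garbled statement; the check below confirms it is self-consistent:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
N, M, beta = 5, 2, 1.0
a = np.sqrt(N)                          # the choice a = sqrt(N) used later in the paper
xi = rng.choice([-1, 1], size=(N, M))

def overlap(sigma):
    return xi.T @ sigma / N             # m_N(sigma), cf. (1.5)

def mixture_density(x):
    # Q_N * N(0, a/(beta N) Id): Gaussians centred at sqrt(a) * m_N(sigma),
    # weighted by exp{-beta H} = exp{(beta N / 2) ||m||^2}; unnormalized
    var = a / (beta * N)
    total = 0.0
    for bits in product([-1, 1], repeat=N):
        m = overlap(np.array(bits))
        w = np.exp(beta * N * (m @ m) / 2)
        diff = x - np.sqrt(a) * m
        total += w * np.exp(-(diff @ diff) / (2 * var))
    return total

def formula_density(x):
    # exp{-N beta Phi_{N,beta}(x/sqrt(a))}, cf. (4.8)-(4.9); unnormalized
    y = x / np.sqrt(a)
    phi = (y @ y) / 2 - np.sum(np.log(np.cosh(beta * (xi @ y)))) / (beta * N)
    return np.exp(-N * beta * phi)

x1, x2 = np.array([0.3, -0.2]), np.array([1.1, 0.7])
r_mix = mixture_density(x1) / mixture_density(x2)
r_form = formula_density(x1) / formula_density(x2)
```

Both functions are unnormalized versions of the same density, so their ratios at any two points agree.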

Before turning to the proof of Theorem 2.1, we gather some estimates which will prove useful in the sequel. The first of these estimates is a bound on the operator norm of the random matrix arising from the patterns.

Lemma 4.5 ([6, Theorem 4.1]). There exist a constant $K > 0$ and an $N_1 \in \mathbb{N}$ such that

$$\mathbb{P}\Big\{\Big|\,\big\|\tfrac{1}{N}\xi^T\xi\big\|_{\mathrm{op}} - \big(1 + \sqrt{M/N}\big)^2\Big| \ge \sqrt{M/N}\Big\} \le e^{-M/K}    (4.10)$$

for all $N \ge N_1$.

For later use, we write $\alpha = M/N$ and define

$$\Omega_1(N) = \Big\{\xi : \Big|\,\big\|\tfrac{1}{N}\xi^T\xi\big\|_{\mathrm{op}} - \big(1 + \sqrt{\alpha}\big)^2\Big| < \sqrt{\alpha}\Big\}.    (4.11)$$

In particular, we know that for $N \ge N_1$, $\xi \in \Omega_1(N)$ and all $x, y \in \mathbb{R}^M$,

$$\Big|\frac{1}{N}\sum_{i=1}^{N}\langle x, \xi_i\rangle\langle y, \xi_i\rangle - \langle x, y\rangle\Big| \le 4\sqrt{\alpha}\,\|x\|_2\,\|y\|_2.    (4.12)$$
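Lemma 4.5 is a random-matrix statement: the operator norm of $\frac{1}{N}\xi^T\xi$ concentrates near the Marchenko-Pastur edge $(1+\sqrt{\alpha})^2$. A quick simulation (sizes and seed are our choice) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 4000, 40
alpha = M / N
xi = rng.choice([-1, 1], size=(N, M))

# largest eigenvalue of the M x M matrix (1/N) xi^T xi
op_norm = np.linalg.norm(xi.T @ xi / N, ord=2)
edge = (1 + np.sqrt(alpha)) ** 2           # Marchenko-Pastur upper edge
```

For these sizes the deviation from the edge is far smaller than the $\sqrt{\alpha}$ window appearing in $\Omega_1(N)$.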

We also need the following estimates to treat terms which involve products of components $\xi_i^\mu$ for four or six different values of $\mu$. These are provided by the following lemma. For $\delta > 0$ let

$$\Omega_2(N,\delta)^c = \bigcup_{\mu_1,\ldots,\mu_4}\Big\{\Big|\frac{1}{N}\sum_{i=1}^{N}\xi_i^{\mu_1}\xi_i^{\mu_2}\xi_i^{\mu_3}\xi_i^{\mu_4}\Big| > \delta\sqrt{\alpha}\Big\} \cup \bigcup_{\mu_1,\ldots,\mu_6}\Big\{\Big|\frac{1}{N}\sum_{i=1}^{N}\xi_i^{\mu_1}\xi_i^{\mu_2}\xi_i^{\mu_3}\xi_i^{\mu_4}\xi_i^{\mu_5}\xi_i^{\mu_6}\Big| > \delta\sqrt{\alpha}\Big\},    (4.13)$$

where each of the unions is taken over all sets of pairwise different indices in $\{1,\ldots,M\}$.

Lemma 4.6. For every $\delta > 0$, there exists an $N_2(\delta)$ such that for all $N \ge N_2(\delta)$,

$$\mathbb{P}\{\Omega_2(N,\delta)^c\} \le \exp\{-\delta^2 M/4\}.    (4.14)$$

Proof. Let

$$B_{N,\delta}(\mu_1,\ldots,\mu_4) = \Big\{\Big|\frac{1}{N}\sum_{i=1}^{N}\xi_i^{\mu_1}\xi_i^{\mu_2}\xi_i^{\mu_3}\xi_i^{\mu_4}\Big| > \delta\sqrt{\alpha}\Big\}    (4.15)$$

and

$$C_{N,\delta}(\mu_1,\ldots,\mu_6) = \Big\{\Big|\frac{1}{N}\sum_{i=1}^{N}\xi_i^{\mu_1}\xi_i^{\mu_2}\xi_i^{\mu_3}\xi_i^{\mu_4}\xi_i^{\mu_5}\xi_i^{\mu_6}\Big| > \delta\sqrt{\alpha}\Big\}.    (4.16)$$

For pairwise different indices $\mu_1,\ldots,\mu_6 \in \{1,\ldots,M\}$, the summands are again i.i.d. unbiased $\pm 1$-valued random variables, so Chebychev's exponential inequality with $t = \delta\sqrt{\alpha}$ implies

$$\mathbb{P}(B_{N,\delta}(\mu_1,\ldots,\mu_4)) \le \exp\{-t\,\delta\sqrt{\alpha}\,N\}\exp\{N t^2/2\} = \exp\{-\delta^2 M/2\}$$

and, similarly, $\mathbb{P}(C_{N,\delta}(\mu_1,\ldots,\mu_6)) \le \exp\{-\delta^2 M/2\}$. Therefore,

$$\mathbb{P}(\Omega_2(N,\delta)^c) \le \Big(\frac{1}{4!}\,M(M-1)(M-2)(M-3) + \frac{1}{6!}\,M(M-1)\cdots(M-5)\Big)\exp\{-\delta^2 M/2\}.    (4.17)$$

Choosing $M$ large enough to absorb the polynomial factor concludes the proof. □
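The exponential Chebychev step in the proof is just Hoeffding's bound for a Rademacher average, since a product of pattern entries with pairwise distinct superscripts is again an unbiased $\pm 1$ variable. A simulation (sizes and seed are our choice) confirms the tail bound:

```python
import numpy as np

rng = np.random.default_rng(6)
N, M, delta = 400, 16, 1.0
alpha = M / N
trials = 20000

# products xi^{mu_1}...xi^{mu_4} over distinct mu's are i.i.d. Rademacher in i,
# so we may simulate the normalized sums directly
Z = rng.choice([-1, 1], size=(trials, N))
emp = (np.abs(Z.mean(axis=1)) > delta * np.sqrt(alpha)).mean()
bound = 2 * np.exp(-delta ** 2 * M / 2)     # two-sided Hoeffding bound
```

Here $N\,(\delta\sqrt{\alpha})^2 = \delta^2 M$, which is exactly the exponent in the proof.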

The next lemma provides a bound similar to (4.12) for terms involving the Gaussian $\eta$ instead of $N^{-1/2}\xi^T\xi$. Let

$$\Omega_3(N, R, \kappa) = \Big\{\xi : \Big|\sum_{\mu<\nu}\eta_{\mu,\nu}(\xi)\,x_\mu x_\nu\Big| < \kappa R^2\sqrt{M}\,\|x\|_2^2 \quad \forall x \in \mathbb{R}^M\Big\}.    (4.18)$$

Lemma 4.7. $\mathbb{P}\{\Omega_3(N, R, \kappa)^c\} \le 5^{2M}\exp\{-\kappa^2 R^4 M/16\}$.

Proof. Let $x, y \in \mathbb{R}^M$. First note that $\sum_{\mu<\nu}\eta_{\mu,\nu}x_\mu y_\nu$ can be viewed as the scalar product of $\eta$ and the vector $(x_\mu y_\nu)_{\mu<\nu}$, and that $\|(x_\mu y_\nu)_{\mu<\nu}\|_2 \le 2^{-1/2}\|x\|_2\|y\|_2$. By Chebychev's inequality,

$$\mathbb{P}\Big\{\sum_{\mu<\nu}\eta_{\mu,\nu}x_\mu y_\nu \ge \kappa'\Big\} \le \exp\{-t\kappa'\}\exp\Big\{\frac{t^2}{2}\,\|(x_\mu y_\nu)_{\mu<\nu}\|_2^2\Big\} \le \exp\{-t\kappa'\}\exp\Big\{\frac{t^2}{4}\,\|x\|_2^2\|y\|_2^2\Big\}    (4.19)$$

for $t > 0$. Choosing $t = 2\kappa'/(\|x\|_2^2\|y\|_2^2)$,

$$\mathbb{P}\Big\{\sum_{\mu<\nu}\eta_{\mu,\nu}x_\mu y_\nu \ge \kappa'\Big\} \le \exp\Big\{-\frac{\kappa'^2}{\|x\|_2^2\|y\|_2^2}\Big\}    (4.20)$$

follows. To obtain a uniform bound, note that

$$\mathbb{P}\Big\{\exists x \in \mathbb{R}^M : \sum_{\mu<\nu}\eta_{\mu,\nu}x_\mu x_\nu \ge \kappa'\|x\|_2^2\Big\} = \mathbb{P}\Big\{\exists x \in B(0,1) : \sum_{\mu<\nu}\eta_{\mu,\nu}x_\mu x_\nu \ge \kappa'\Big\} \le \mathbb{P}\Big\{\exists x, y \in B(0,1) : \sum_{\mu<\nu}\eta_{\mu,\nu}x_\mu y_\nu \ge \kappa'\Big\}.$$

$B(0,1)$ being a (bounded) convex, balanced set in $\mathbb{R}^M$, there exists a subset $D \subset B(0,2)$ such that $B(0,1)$ is contained in the convex hull of $D$ and $D$ has at most $5^M$ elements (see for example [31, Lemma 10.2 in the Appendix]). Now, by our previous bound and the definition of the set $D$,

$$\mathbb{P}\Big\{\exists x \in \mathbb{R}^M : \sum_{\mu<\nu}\eta_{\mu,\nu}x_\mu x_\nu \ge \kappa'\|x\|_2^2\Big\} \le \mathbb{P}\Big\{\exists x, y \in D : \sum_{\mu<\nu}\eta_{\mu,\nu}x_\mu y_\nu \ge \kappa'\Big\} \le 5^{2M}\sup_{x,y\in D}\mathbb{P}\Big\{\sum_{\mu<\nu}\eta_{\mu,\nu}x_\mu y_\nu \ge \kappa'\Big\}$$
$$\le 5^{2M}\sup_{x,y\in D}\exp\Big\{-\frac{\kappa'^2}{\|x\|_2^2\|y\|_2^2}\Big\} \le 5^{2M}\exp\Big\{-\frac{\kappa'^2}{16}\Big\}.    (4.21)$$

Choosing $\kappa' = \kappa R^2\sqrt{M}$ with $\kappa > 0$ concludes the proof. □
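The supremum controlled by $\Omega_3(N, R, \kappa)$ is the top eigenvalue of the symmetric matrix with entries $\eta_{\mu,\nu}/2$ above the diagonal, which by Wigner scaling grows like $\sqrt{M}$, in line with the $\kappa R^2\sqrt{M}$ threshold. A quick check ($M$ and seed are our choice):

```python
import numpy as np

rng = np.random.default_rng(7)
M = 300
eta = np.triu(rng.standard_normal((M, M)), k=1)
A = (eta + eta.T) / 2.0                     # A[mu,nu] = eta_{mu,nu}/2, zero diagonal

# sup_x sum_{mu<nu} eta_{mu,nu} x_mu x_nu / ||x||_2^2 = lambda_max(A)
lam_max = np.linalg.eigvalsh(A).max()
```

For a Wigner matrix with off-diagonal variance $1/4$ the top eigenvalue is close to $\sqrt{M} \approx 17.3$ here, well below, say, $2\sqrt{M}$.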

With these preparations we are able to prove Theorem 2.1.

Proof of Theorem 2.1. By (4.2), Theorem 2.1 follows once we have shown that, under the conditions of the theorem,

$$\Big|\int_{\mathbb{R}^M} f(x)\,Q_N(dx) - \int_{\mathbb{R}^M} f(x)\,Q_M(dx)\Big| \le \varepsilon_N\,(K_f + \|f\|_\infty)    (4.22)$$

holds for all $\xi \in \Omega(N)$ and all $f \in \mathrm{BL}(\mathbb{R}^M,\mathbb{R})$. By Lemma 4.2, we may replace $Q_N$ by its Hubbard-Stratonovich transform.

So let $f \in \mathrm{BL}(\mathbb{R}^M,\mathbb{R})$. We need to investigate

$$\frac{\int f(x)\exp\{-N\Phi(x/N^{1/4})\}\,dx}{\int \exp\{-N\Phi(x/N^{1/4})\}\,dx},    (4.23)$$

where

$$\Phi(x) = \Phi_{N,1}(x) = \frac{1}{2}\|x\|_2^2 - \frac{1}{N}\sum_{i=1}^{N}\log\cosh\langle x, \xi_i\rangle.    (4.24)$$

Consider the numerator first, as the denominator is a special case of the numerator. The main contribution to the integral arises from the inner region $B(0, RM^{1/4})$, and we shall choose a suitable $R > 0$ later on. In the inner region as well as in the intermediate region $B(0, rN^{1/4}) \setminus B(0, RM^{1/4})$, with $r > 0$ to be chosen later, we investigate the behaviour of the integral in the numerator with the help of a Taylor expansion of $\Phi$. The outer region $B(0, rN^{1/4})^c$ is treated separately.

Taylor expansion. Calculating the Taylor expansion of $\Phi$ around zero, we see that there exists a $\theta \in (0,1)$ such that

$$\Phi(x) = \frac{1}{2}\|x\|_2^2 - \frac{1}{N}\sum_{i=1}^{N}\Big[\frac{1}{2}\langle x, \xi_i\rangle^2 - \frac{1}{12}\langle x, \xi_i\rangle^4\Big] + R_N(x, \theta),    (4.25)$$

with

$$R_N(x, \theta) = -\frac{1}{15 N}\sum_{i=1}^{N} h\big(\theta\langle x, \xi_i\rangle\big)\,\langle x, \xi_i\rangle^5,$$

where

$$h(t) = \frac{\tanh(t)}{\cosh^4(t)}\,\big[2 - \sinh^2(t)\big], \qquad t \in \mathbb{R}.$$
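The two elementary ingredients of the remainder estimate are easy to check numerically: the fourth-order Taylor polynomial of $\log\cosh$ has an $O(t^6)$ error, and $|h(t)| \le 2|t|$, with $h$ as in our reconstruction of the garbled display:

```python
import numpy as np

ts = np.linspace(-1.0, 1.0, 2001)
taylor = ts ** 2 / 2 - ts ** 4 / 12
remainder = np.log(np.cosh(ts)) - taylor

# log cosh t = t^2/2 - t^4/12 + t^6/45 - ..., so the remainder is O(t^6)
sixth_order_envelope = np.abs(ts) ** 6 / 20

# h(t) = tanh(t) [2 - sinh(t)^2] / cosh(t)^4 (reconstructed form)
h = np.tanh(ts) * (2 - np.sinh(ts) ** 2) / np.cosh(ts) ** 4
```

Since $|\tanh t| \le |t|$ and $|2 - \sinh^2 t|/\cosh^4 t \le 2$, the linear bound on $h$ follows analytically as well.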

Regrouping the terms of the Taylor expansion of $\Phi$, we find that

$$-N\Phi(x/N^{1/4}) = -\frac{1}{12}\|x\|_4^4 - \frac{1}{4}\sum^*_{\mu_1,\mu_2} x_{\mu_1}^2 x_{\mu_2}^2 + \frac{1}{2}\sum^*_{\mu_1,\mu_2} x_{\mu_1}x_{\mu_2}\,\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\xi_i^{\mu_1}\xi_i^{\mu_2}    (4.26)$$
$$\qquad - \frac{1}{3}\sum^*_{\mu_1,\mu_2} x_{\mu_1}^3 x_{\mu_2}\,\frac{1}{N}\sum_{i=1}^{N}\xi_i^{\mu_1}\xi_i^{\mu_2}    (4.27)$$
$$\qquad - \frac{1}{2}\sum^*_{\mu_1,\mu_2,\mu_3} x_{\mu_1}^2 x_{\mu_2}x_{\mu_3}\,\frac{1}{N}\sum_{i=1}^{N}\xi_i^{\mu_2}\xi_i^{\mu_3}    (4.28)$$
$$\qquad - \frac{1}{12}\sum^*_{\mu_1,\mu_2,\mu_3,\mu_4} x_{\mu_1}x_{\mu_2}x_{\mu_3}x_{\mu_4}\,\frac{1}{N}\sum_{i=1}^{N}\xi_i^{\mu_1}\xi_i^{\mu_2}\xi_i^{\mu_3}\xi_i^{\mu_4} + O\big(N\,|R_N(x/N^{1/4}, \theta)|\big),    (4.29)$$

where $\|x\|_4^4 = \sum_{\mu=1}^{M} x_\mu^4$. Here and in the sequel, we use the notation $\sum^*_{\mu_1,\ldots,\mu_k}$ for summation over all $k$-tuples $(\mu_1,\ldots,\mu_k) \in \{1,\ldots,M\}^k$ with pairwise distinct components.

Let us consider the different $\xi$-dependent terms. By the strong Gaussian approximation, Corollary 3.5, there exist a constant $N_0 \in \mathbb{N}$ and an $M(M-1)/2$-dimensional Gaussian vector $\eta$ with mean zero and covariance matrix being the identity matrix such that the set

$$\Omega_0(N, \delta_N) = \Big\{\xi : \Big\|\Big(\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\xi_i^{\mu}\xi_i^{\nu}\Big)_{\mu<\nu} - \eta\Big\|_2 < \delta_N\Big\}    (4.30)$$

with $\delta_N = K M^7/\sqrt{N}$ for some $K > 0$ satisfies

$$\mathbb{P}(\Omega_0(N, \delta_N)) \ge 1 - e^{-M}    (4.31)$$

for all $N \ge N_0$, and

$$\Big|\frac{1}{2}\sum^*_{\mu_1,\mu_2} x_{\mu_1}x_{\mu_2}\,\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\xi_i^{\mu_1}\xi_i^{\mu_2} - \sum_{\mu_1<\mu_2}\eta_{\mu_1,\mu_2}x_{\mu_1}x_{\mu_2}\Big| \le \delta_N\,\big\|(x_{\mu_1}x_{\mu_2})_{\mu_1<\mu_2}\big\|_2 \le \frac{\delta_N}{\sqrt{2}}\,\|x\|_2^2    (4.32)$$

for all $\xi \in \Omega_0(N, \delta_N)$.

The other $\xi$-dependent terms become small due to the law of large numbers. For $N \ge N_1$ and $\xi \in \Omega_1(N)$, the bound (4.12) on the random matrix yields

$$\Big|\frac{1}{3}\sum^*_{\mu_1,\mu_2} x_{\mu_1}^3 x_{\mu_2}\,\frac{1}{N}\sum_{i=1}^{N}\xi_i^{\mu_1}\xi_i^{\mu_2}\Big| \le \frac{4}{3}\sqrt{\alpha}\,\|x\|_2^4    (4.33)$$

as well as

$$\Big|\frac{1}{2}\sum^*_{\mu_1,\mu_2,\mu_3} x_{\mu_1}^2 x_{\mu_2}x_{\mu_3}\,\frac{1}{N}\sum_{i=1}^{N}\xi_i^{\mu_2}\xi_i^{\mu_3}\Big| \le 2\sqrt{\alpha}\,\|x\|_2^4.    (4.34)$$

Furthermore, for $N \ge N_2(\delta)$ and $\xi \in \Omega_2(N,\delta)$, by the definition of $\Omega_2(N,\delta)$,

$$\Big|\frac{1}{12}\sum^*_{\mu_1,\mu_2,\mu_3,\mu_4} x_{\mu_1}x_{\mu_2}x_{\mu_3}x_{\mu_4}\,\frac{1}{N}\sum_{i=1}^{N}\xi_i^{\mu_1}\xi_i^{\mu_2}\xi_i^{\mu_3}\xi_i^{\mu_4}\Big| \le \frac{\delta\sqrt{\alpha}}{12}\sum^*_{\mu_1,\ldots,\mu_4}|x_{\mu_1}x_{\mu_2}x_{\mu_3}x_{\mu_4}| \le \frac{\delta\sqrt{\alpha}\,M^2}{12}\,\|x\|_2^4 = \frac{\delta}{12}\Big(\frac{M^5}{N}\Big)^{1/2}\|x\|_2^4.    (4.35)$$

It remains to consider the remainder of the Taylor expansion. Now, $|h(t)| \le 2|t|$ and $0 < \theta < 1$ together with Schwarz' inequality imply that

$$|R_N(y,\xi)| \le \frac{2}{15 N}\sum_{i=1}^{N}\langle y, \xi_i\rangle^6 \le \frac{2}{15}\sum_{\mu_1,\ldots,\mu_6}|y_{\mu_1}\cdots y_{\mu_6}|\,\Big|\frac{1}{N}\sum_{i=1}^{N}\xi_i^{\mu_1}\cdots\xi_i^{\mu_6}\Big|.    (4.36)$$

The right-hand side is bounded above by a combinatorial factor times the sum of terms similar to the ones treated above (with two, four or six different $\xi_i^\mu$'s) plus the term arising from $\mu_1 = \cdots = \mu_6$. This yields

$$|R_N(y,\xi)| \le c\,\Big[\sqrt{\alpha}\,\|y\|_2^6 + \delta\Big(\frac{M^5}{N}\Big)^{1/2}\|y\|_2^6 + \delta\Big(\frac{M^7}{N}\Big)^{1/2}\|y\|_2^6 + \|y\|_2^6\Big]    (4.37)$$

for $N \ge \max\{N_1, N_2(\delta)\}$ and $\xi \in \Omega_1(N) \cap \Omega_2(N,\delta)$, so that

$$N\,|R_N(x/N^{1/4}, \theta)| \le \frac{c}{\sqrt{N}}\,\Big[\sqrt{\alpha}\,\|x\|_2^6 + 2\delta\Big(\frac{M^7}{N}\Big)^{1/2}\|x\|_2^6 + \|x\|_2^6\Big].    (4.38)$$

From now on, we shall always assume that $N \ge \max\{N_0, N_1, N_2(\delta)\}$ and that $\xi \in \Omega_0(N, \delta_N) \cap \Omega_1(N) \cap \Omega_2(N, \delta)$. Then $-N\Phi(x/N^{1/4})$ differs from

$$\Psi(x) = -\frac{1}{12}\|x\|_4^4 - \frac{1}{2}\sum_{\mu<\nu} x_\mu^2 x_\nu^2 + \sum_{\mu<\nu}\eta_{\mu,\nu}\,x_\mu x_\nu    (4.39)$$

by at most a constant times

$$g_N(x) = \delta_N\,\|x\|_2^2 + \Big[\sqrt{\alpha} + \delta\Big(\frac{M^5}{N}\Big)^{1/2}\Big]\|x\|_2^4 + \Big[\sqrt{\alpha} + \delta\Big(\frac{M^7}{N}\Big)^{1/2}\Big]\frac{\|x\|_2^6}{\sqrt{N}} + \frac{\|x\|_2^6}{\sqrt{N}}.    (4.40)$$

The inner region. For $\|x\|_2 \le RM^{1/4}$, the main contribution to $g_N(x)$ arises from the first summand. Therefore, we shall use the estimate

$$g_N(x) \le h_N(\delta, R) = \Big(\frac{M^{15}}{N}\Big)^{1/2}(K + \delta)\,R^6 \to 0,    (4.41)$$

provided $M^{15}/N \to 0$. (Recall that $\delta_N = K M^7/\sqrt{N}$.) Therefore, the estimate for the inner region is immediate: for $f \in \mathrm{BL}(\mathbb{R}^M,\mathbb{R})$,

$$\int_{B(0,RM^{1/4})} f(x)\exp\{-N\Phi(x/N^{1/4})\}\,dx = \exp\{O(h_N(\delta,R))\}\int_{B(0,RM^{1/4})} f(x)\exp\{\Psi(x)\}\,dx.    (4.42)$$

The intermediate region. For RM^{1/4} ≤ ‖x‖₂ ≤ rN^{1/4},

\[
\frac{\|x\|_2^6}{\sqrt N} \le r^2\|x\|_2^4
\quad\text{and}\quad
\delta_M\|x\|_2^2 \le \delta_M\|x\|_2^4,
\tag{4.43}
\]

which implies that there exists an N₃(δ, r) ∈ ℕ such that

\[
g_N(x) \le \delta_M\|x\|_2^4 + 2r^2\|x\|_2^4
\tag{4.44}
\]

for all N ≥ N₃(δ, r), provided M⁷/N → 0.

Assuming N ≥ max{N₀, N₁, N₂(δ), N₃(δ, r)} and ξ ∈ Ω₀(N, δ_N) ∩ Ω₁(N) ∩ Ω₂(N, δ) ∩ Ω₃(N, R, κ) from now on, our previous estimates together with the definition of Ω₃(N, R, κ) yield

\[
-N\Phi(x/N^{1/4})
\le \Psi(x) + O(g_N(x))
\le -\frac{1}{12}\|x\|_4^4 - \frac{1}{2}\sum_{\mu<\nu}x_\mu^2 x_\nu^2
+ \sum_{\mu<\nu}\eta_{\mu,\nu}\,x_\mu x_\nu
+ O\bigl(\delta_M\|x\|_2^4 + 2r^2\|x\|_2^4\bigr)
\tag{4.45}
\]
\[
\le -\frac{1}{12}\|x\|_4^4 - \frac{1}{12}\bigl[\|x\|_2^4 - \|x\|_4^4\bigr]
+ \kappa R^2\sqrt{M}\,\|x\|_2^2
+ O\bigl(\delta_M\|x\|_2^4 + 2r^2\|x\|_2^4\bigr).
\]

For ‖x‖₂ ≥ RM^{1/4}, the bound ‖x‖₂⁴ ≥ R²√M‖x‖₂² is trivial. By choosing r and 0 < κ ≤ 1/48 small enough, we see that there exists an N₄(R, κ) ∈ ℕ such that δ_M becomes so small that

\[
-N\Phi(x/N^{1/4}) \le -\frac{R^2}{24}\sqrt{M}\,\|x\|_2^2
\tag{4.46}
\]

holds for all N ≥ N₄(R, κ) and all x from the intermediate region. Therefore, for all

f ∈ BL(ℝ^M, ℝ) and N and ξ chosen as before,

\[
\Bigl|\int_{\{RM^{1/4}\le\|x\|_2\le rN^{1/4}\}} f(x)\exp\{-N\Phi(x/N^{1/4})\}\,dx\Bigr|
\le \|f\|_\infty \int_{\{\|x\|_2\ge RM^{1/4}\}}\exp\Bigl\{-\frac{R^2}{24}\sqrt{M}\,\|x\|_2^2\Bigr\}\,dx
\]
\[
\le \|f\|_\infty \exp\{-R^4 M/48\}\int_{\mathbb{R}^M}\exp\Bigl\{-\frac{R^2}{48}\sqrt{M}\,\|x\|_2^2\Bigr\}\,dx
= \|f\|_\infty \exp\{-R^4 M/48\}\Bigl(\frac{48\pi}{R^2\sqrt{M}}\Bigr)^{M/2}.
\tag{4.47}
\]

This bound will allow us to deduce that the integral over the intermediate region is negligible.

The outer region. The investigation of the outer region consists of two parts. First, we show that there exists an r₀ > 0 such that the integral over B(0, r₀N^{1/4})^c is negligible, and then, in a second step, we show that this r₀ can be replaced by an arbitrarily small r > 0.

For convenience, we denote by f_CW(β) the free energy in the Curie-Weiss model at temperature 1/β, i.e.,

\[
f_{CW}(\beta) = -\frac{\beta}{2}z(\beta)^2 + \log\cosh\bigl(\beta z(\beta)\bigr).
\tag{4.48}
\]

Then,

\[
\log\cosh x \le \frac{\beta}{4}x^2 + \max_{t\in\mathbb{R}}\Bigl\{-\frac{\beta}{4}t^2 + \log\cosh t\Bigr\}
= \frac{\beta}{4}x^2 + f_{CW}(2/\beta),
\tag{4.49}
\]

which implies in particular that

\[
-N\Phi(x/N^{1/4}) = -\frac{\sqrt N}{2}\|x\|_2^2 + \sum_{i=1}^{N}\log\cosh\bigl((x/N^{1/4},\xi_i)\bigr)
\le -\frac{\sqrt N}{2}\|x\|_2^2 + \frac{1}{4\sqrt N}\sum_{i=1}^{N}(x,\xi_i)^2 + N f_{CW}(2).
\tag{4.50}
\]
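At β = 1 the bound (4.49) reads log cosh x ≤ x²/4 + f_CW(2), where f_CW(2) is the maximum appearing on the right-hand side. This can be verified numerically; the sketch below is our illustration (grids are ad-hoc choices, and the maximization is a crude grid search, not from the paper):

```python
import math

# Check of  log cosh x <= x^2/4 + max_t { -t^2/4 + log cosh t }  at beta = 1;
# the maximum on the right is f_CW(2) in the notation of (4.48)-(4.50).
def logcosh(t: float) -> float:
    return math.log(math.cosh(t))

# crude grid maximization of -t^2/4 + log cosh t over [-5, 5]
f_cw_2 = max(-t * t / 4 + logcosh(t) for t in (i * 0.001 for i in range(-5000, 5001)))

for k in range(-1000, 1001):
    x = 0.01 * k  # x in [-10, 10]
    assert logcosh(x) <= x * x / 4 + f_cw_2 + 1e-12

print(round(f_cw_2, 3))  # about 0.327
```

The inequality is immediate by definition of the maximum (take t = x), which is exactly the mechanism used in (4.49); the numerical value of the constant is incidental.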

Estimating the sum with the help of the bound (4.12) on the random matrix ξᵀξ/N, we see that there exist r₀ > 0 and N₅ ≥ N₁ such that

\[
-N\Phi(x/N^{1/4}) \le -\frac{\sqrt N}{6}\|x\|_2^2
\tag{4.51}
\]

holds for all x satisfying ‖x‖₂ ≥ r₀N^{1/4}, all N ≥ N₅ and all ξ ∈ Ω₁(N).

Let now rN^{1/4} ≤ ‖x‖₂ ≤ r₀N^{1/4} with an arbitrary r ∈ (0, r₀). First note that

\[
\Phi(x/N^{1/4}) \ge \mathbb{E}\Bigl\{\frac12\bigl(x/N^{1/4},\xi_1\bigr)^2 - \log\cosh\bigl(x/N^{1/4},\xi_1\bigr)\Bigr\}
- \sup_{\|y\|_2\le r_0}\Bigl|\frac1N\sum_{i=1}^{N}\log\cosh(y,\xi_i) - \mathbb{E}\log\cosh(y,\xi_1)\Bigr|.
\tag{4.52}
\]


The first summand on the right-hand side is bounded below by

\[
c_{r,r_0} = \inf_{y\colon r\le\|y\|_2\le r_0}\mathbb{E}\,\phi\bigl((y,\xi_1)\bigr),
\tag{4.53}
\]

where

\[
\phi(t) = t^2/2 - \log\cosh t, \qquad t\in\mathbb{R},
\tag{4.54}
\]

attains its unique minimum at t = 0. The fact that (y, ξ₁) is a (finite) Rademacher average (see [24, Chapter 4], for instance) implies that

\[
\mathbb{P}\bigl(|(y,\xi_1)| \ge \tfrac12\|y\|_2\bigr) > 1/3
\tag{4.55}
\]

(cf. [17, Lemma 4.3]), so that

\[
c_{r,r_0} = \inf_{y\colon r\le\|y\|_2\le r_0}\mathbb{E}\,\phi\bigl((y,\xi_1)\bigr) > 0,
\tag{4.56}
\]

because there is a set of positive ℙ-measure on which φ is bounded away from its unique minimum at zero.
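The two probabilistic facts used here, the anti-concentration bound behind (4.55) and the resulting positivity behind (4.56), can be checked by exact enumeration in small dimension. The sketch below is our illustration only: the test vectors y are arbitrary choices, and a finite enumeration is of course a sanity check rather than a proof of the uniform statement.

```python
import itertools
import math

# Exact enumeration over all sign vectors xi in {-1,+1}^n for a few y:
# check  P(|(y, xi)| >= ||y||_2/2) > 1/3  and  E phi((y, xi)) > 0.
def phi(t: float) -> float:
    """phi(t) = t^2/2 - log cosh t, nonnegative, = 0 exactly at t = 0."""
    return t * t / 2 - math.log(math.cosh(t))

for y in ((1.0,), (1.0, 1.0), (3.0, 1.0, 1.0, 1.0), (0.2, 0.5, 0.9, 1.3, 0.4)):
    norm = math.sqrt(sum(c * c for c in y))
    total = 2 ** len(y)
    hits = 0
    expectation = 0.0
    for signs in itertools.product((-1, 1), repeat=len(y)):
        s = sum(c * e for c, e in zip(y, signs))
        if abs(s) >= norm / 2:
            hits += 1
        expectation += phi(s) / total
    assert hits / total > 1 / 3, y
    assert expectation > 0.0, y
```

The positivity of the expectation is exactly the mechanism of (4.56): with probability more than 1/3 the average (y, ξ) stays away from 0, where φ is bounded away from its minimum.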

The second summand on the right-hand side of (4.52) becomes small due to so-called self-averaging. Inspection of the proof of [17, Lemma 4.2] shows that not only

\[
\lim_{N\to\infty}\sup_{\|x\|_2\le r_0}\Bigl|\frac1N\sum_{i=1}^{N}f\bigl((x,\xi_i)\bigr) - \mathbb{E}f\bigl((x,\xi_1)\bigr)\Bigr| = 0
\tag{4.57}
\]

holds ℙ-almost surely for Lipschitz continuous f, but we also obtained bounds valid for large but fixed N:

Lemma 4.8 ([17, Lemma 4.2]). There exist a constant c > 0 and an N₆ ≥ N₁ such that for all ε > 0 and all N ≥ max{N₆, 2/ε²},

\[
\mathbb{P}\Bigl\{\sup_{\|y\|_2\le r_0}\Bigl|\frac1N\sum_{i=1}^{N}\log\cosh(y,\xi_i) - \mathbb{E}\log\cosh(y,\xi_1)\Bigr| \ge (3+2r_0)\,\varepsilon\Bigr\}
\le 2\exp\bigl\{M\bigl(\log(r_0/\varepsilon) + c\bigr)\bigr\}\exp\{-N\varepsilon^2/8\} + \mathbb{P}\bigl(\Omega_1(N)^c\bigr).
\]

With

\[
\varepsilon = \frac{c_{r,r_0}}{2(3+2r_0)}
\]

and

\[
\Omega_4(N, r, r_0) = \Bigl\{\xi\colon \sup_{\|y\|_2\le r_0}\Bigl|\frac1N\sum_{i=1}^{N}\log\cosh(y,\xi_i) - \mathbb{E}\log\cosh(y,\xi_1)\Bigr| \le \frac{c_{r,r_0}}{2}\Bigr\}
\tag{4.58}
\]

we obtain the following corollary.

Corollary 4.9. There exist a constant K(r, r₀) > 0 and an N₇(r, r₀) ∈ ℕ such that for all N ≥ N₇(r, r₀),

\[
\mathbb{P}\bigl(\Omega_4(N, r, r_0)^c\bigr) \le \exp\{-K(r, r_0)\,N\} + \mathbb{P}\bigl(\Omega_1(N)^c\bigr).
\]

Now, by our estimates on the two summands on the right-hand side of (4.52), we find

\[
-N\Phi(x/N^{1/4}) \le -N c_{r,r_0}/2
\tag{4.59}
\]

for all x such that rN^{1/4} ≤ ‖x‖₂ ≤ r₀N^{1/4}.

Gathering our estimates on the outer region yields

\[
\Bigl|\int_{\{\|x\|_2\ge rN^{1/4}\}} f(x)\exp\{-N\Phi(x/N^{1/4})\}\,dx\Bigr|
\le \|f\|_\infty\int_{\{\|x\|_2\ge r_0N^{1/4}\}}\exp\Bigl\{-\frac{\sqrt N}{6}\|x\|_2^2\Bigr\}\,dx
+ \|f\|_\infty\int_{\{rN^{1/4}\le\|x\|_2\le r_0N^{1/4}\}}\exp\{-N c_{r,r_0}/2\}\,dx
\]
\[
\le \|f\|_\infty\bigl[\exp\{-N r_0^2/12\} + \exp\{-N c_{r,r_0}/4\}\bigr]
\tag{4.60}
\]

for all N ≥ N₈(r, r₀) for some N₈(r, r₀) ∈ ℕ.
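The first bound in (4.60) rests on an elementary splitting of the exponent, which we spell out for the reader (our gloss, with the constant taken from (4.51)): for ‖x‖₂ ≥ r₀N^{1/4},

\[
\frac{\sqrt N}{6}\|x\|_2^2 = \frac{\sqrt N}{12}\|x\|_2^2 + \frac{\sqrt N}{12}\|x\|_2^2
\ge \frac{N r_0^2}{12} + \frac{\sqrt N}{12}\|x\|_2^2,
\]

so that

\[
\int_{\{\|x\|_2\ge r_0N^{1/4}\}}\exp\Bigl\{-\frac{\sqrt N}{6}\|x\|_2^2\Bigr\}\,dx
\le \exp\Bigl\{-\frac{N r_0^2}{12}\Bigr\}\Bigl(\frac{12\pi}{\sqrt N}\Bigr)^{M/2}
\le \exp\Bigl\{-\frac{N r_0^2}{12}\Bigr\}
\]

as soon as √N ≥ 12π, since the remaining Gaussian integral over ℝ^M then has value at most 1.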

Completing the proof. From now on we shall always assume that

\[
\xi \in \Omega(N) = \Omega(N, R, r, r_0, \delta, \kappa)
= \Omega_0(N, \delta_N)\cap\Omega_1(N)\cap\Omega_2(N, \delta)\cap\Omega_3(N, R, \kappa)\cap\Omega_4(N, r, r_0)
\tag{4.61}
\]

and that N is sufficiently large. Note that there exists a constant L > 0 such that

\[
\mathbb{P}\bigl(\Omega(N)^c\bigr) \le \exp\{-M/L\},
\tag{4.63}
\]

provided R is chosen large compared to κ and M is large enough, cf. Lemma 4.7. Naturally, L depends on our choice of R, r, r₀, δ and κ.

Let f ∈ BL(ℝ^M, ℝ) be arbitrary. We have already shown that

\[
\int f(x)\exp\{-N\Phi(x/N^{1/4})\}\,dx
= \exp\{O(h_N(\delta, R))\}\int_{B(0,RM^{1/4})} f(x)\exp\{\Psi(x)\}\,dx
\]
\[
+ O\Bigl(\|f\|_\infty\exp\{-R^4 M/48\}\Bigl(\frac{48\pi}{R^2\sqrt M}\Bigr)^{M/2}\Bigr)
+ O\Bigl(\|f\|_\infty\bigl[\exp\{-N r_0^2/12\} + \exp\{-N c_{r,r_0}/4\}\bigr]\Bigr)
\tag{4.64}
\]

with h_N(δ, R) → 0. Next, we want to replace the integral

\[
\int_{B(0,RM^{1/4})} f(x)\exp\{\Psi(x)\}\,dx
\tag{4.65}
\]

by the integral over ℝ^M. First note that (4.45) already provides an upper bound on Ψ(x), valid for all x satisfying ‖x‖₂ ≥ RM^{1/4}:

\[
\Psi(x) \le -\frac{R^2}{24}\sqrt M\,\|x\|_2^2.
\tag{4.66}
\]

As an immediate consequence,

\[
\Bigl|\int_{\{\|x\|_2\ge RM^{1/4}\}} f(x)\exp\{\Psi(x)\}\,dx\Bigr|
\le \|f\|_\infty\int_{\{\|x\|_2\ge RM^{1/4}\}}\exp\Bigl\{-\frac{R^2}{24}\sqrt M\,\|x\|_2^2\Bigr\}\,dx
\le \|f\|_\infty\exp\{-R^4 M/48\}\Bigl(\frac{48\pi}{R^2\sqrt M}\Bigr)^{M/2},
\tag{4.67}
\]

which implies by (4.64) that

\[
\int f(x)\exp\{-N\Phi(x/N^{1/4})\}\,dx
= \exp\{O(h_N(\delta, R))\}\int_{\mathbb{R}^M} f(x)\exp\{\Psi(x)\}\,dx
\]
\[
+ O\Bigl(\|f\|_\infty\exp\{-R^4 M/48\}\Bigl(\frac{48\pi}{R^2\sqrt M}\Bigr)^{M/2}\Bigr)
+ O\Bigl(\|f\|_\infty\bigl[\exp\{-N r_0^2/12\} + \exp\{-N c_{r,r_0}/4\}\bigr]\Bigr).
\tag{4.68}
\]

In order to compare

\[
\frac{\int_{\mathbb{R}^M} f(x)\exp\{-N\Phi(x/N^{1/4})\}\,dx}{\int_{\mathbb{R}^M}\exp\{-N\Phi(x/N^{1/4})\}\,dx}
\qquad\text{to}\qquad
\frac{\int_{\mathbb{R}^M} f(x)\exp\{\Psi(x)\}\,dx}{\int_{\mathbb{R}^M}\exp\{\Psi(x)\}\,dx},
\]

we need a lower bound on ∫_{ℝ^M} exp{Ψ(x)} dx.

To obtain a lower bound on Ψ first, we proceed as in (4.45):

\[
\Psi(x) \ge -\frac{1}{12}\|x\|_4^4 - \frac14\bigl[\|x\|_2^4 - \|x\|_4^4\bigr] - \kappa R^2\sqrt M\,\|x\|_2^2
\ge -\frac14\|x\|_2^4 - \kappa R^2\sqrt M\,\|x\|_2^2.
\tag{4.69}
\]

For ‖x‖₂ ≤ RM^{1/4},

\[
\Psi(x) \ge -\frac{R^2}{3}\sqrt M\,\|x\|_2^2
\]

follows. (Recall that κ ≤ 1/48.) Now,

\[
\int_{\mathbb{R}^M}\exp\{\Psi(x)\}\,dx
\ge \int_{B(0,RM^{1/4})}\exp\Bigl\{-\frac{R^2}{3}\sqrt M\,\|x\|_2^2\Bigr\}\,dx
\ge \frac12\Bigl(\frac{3\pi}{R^2\sqrt M}\Bigr)^{M/2}
\tag{4.70}
\]

for M large enough, i.e., N ≥ N₉(R) for some N₉(R) ∈ ℕ.
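The factor 1/2 in the last step of (4.70) can be obtained by a standard Gaussian computation; the Chebyshev-type step below is our gloss. With a = (R²/3)√M, the integral over the ball equals (π/a)^{M/2} times the probability that a centred Gaussian vector X with covariance (2a)⁻¹ Id lies in B(0, RM^{1/4}). Since 𝔼‖X‖₂² = M/(2a) = 3√M/(2R²) while the squared radius of the ball is R²√M, Markov's inequality gives

\[
\mathbb{P}\bigl(\|X\|_2 > RM^{1/4}\bigr) \le \frac{\mathbb{E}\|X\|_2^2}{R^2\sqrt M} = \frac{3}{2R^4} \le \frac12
\qquad\text{for } R^4 \ge 3,
\]

so the ball carries at least half of the total Gaussian mass (3π/(R²√M))^{M/2}.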

With these preparations, it is easy to see that

\[
\Bigl|\frac{\int_{\mathbb{R}^M} f(x)\exp\{-N\Phi(x/N^{1/4})\}\,dx}{\int_{\mathbb{R}^M}\exp\{-N\Phi(x/N^{1/4})\}\,dx}
- \frac{\int_{\mathbb{R}^M} f(x)\exp\{\Psi(x)\}\,dx}{\int_{\mathbb{R}^M}\exp\{\Psi(x)\}\,dx}\Bigr|
\le \|f\|_\infty\,\frac{\mathcal O}{\int_{\mathbb{R}^M}\exp\{\Psi(x)\}\,dx} + \mathcal O,
\tag{4.71}
\]

where we use 𝒪 as an abbreviation for

\[
O\Bigl(h_N(\delta, R)\int_{\mathbb{R}^M}\exp\{\Psi(x)\}\,dx\Bigr)
+ O\Bigl(\exp\{-R^4 M/48\}\Bigl(\frac{48\pi}{R^2\sqrt M}\Bigr)^{M/2}\Bigr)
+ O\bigl(\exp\{-N r_0^2/12\} + \exp\{-N c_{r,r_0}/4\}\bigr).
\tag{4.73}
\]

By our lower bound on ∫_{ℝ^M} exp{Ψ(x)} dx, we see that R can be chosen so large that there exist a constant K > 0 and an N₁₀(R, r, r₀, δ, κ) ∈ ℕ such that

\[
\Bigl|\frac{\int_{\mathbb{R}^M} f(x)\exp\{-N\Phi(x/N^{1/4})\}\,dx}{\int_{\mathbb{R}^M}\exp\{-N\Phi(x/N^{1/4})\}\,dx}
- \frac{\int_{\mathbb{R}^M} f(x)\exp\{\Psi(x)\}\,dx}{\int_{\mathbb{R}^M}\exp\{\Psi(x)\}\,dx}\Bigr|
\le \|f\|_\infty\bigl[O(h_N(\delta, R)) + O(\exp\{-R^4 M/K\})\bigr]
\]

for all N ≥ N₁₀(R, r, r₀, δ, κ). Now the theorem follows from Lemma 4.2 and Lemma 4.4 with Ω(N) as defined in the beginning of this subsection and

\[
N \ge N(R, r, r_0, \delta, \kappa)
= \max\{N_0, N_1, N_2(\delta), N_3(\delta, r), N_4(R, \kappa), N_5, N_6, N_7(r, r_0), N_8(r, r_0), N_9(R), N_{10}(R, r, r_0, \delta, \kappa)\}.
\tag{4.74}
\]

□
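The overall structure of the argument, replacing exp{-NΦ(x/N^{1/4})} by exp{Ψ(x)} with a quartic Ψ, can be illustrated by a one-dimensional toy computation. This is our illustration only, not taken from the paper: the model potential φ(t) = t⁴/12 mimics the quartic behaviour at the critical temperature, and the test function f is an arbitrary bounded Lipschitz choice.

```python
import math

# Toy version of the Laplace-method ratio: for the pure quartic potential
# phi(t) = t^4/12, the critical rescaling removes N exactly, since
# N * phi(x / N^{1/4}) = x^4/12.  Hence the normalized integral ratio is
# independent of N, mirroring the N^{1/4} scaling of the fluctuations.
def ratio(n: float, half_width: float = 8.0, steps: int = 4000) -> float:
    """Riemann-sum approximation of ∫ f e^{-N phi(x/N^{1/4})} / ∫ e^{-N phi(x/N^{1/4})}."""
    f = lambda x: 1.0 / (1.0 + x * x)  # arbitrary bounded Lipschitz test function
    h = 2 * half_width / steps
    num = den = 0.0
    for i in range(steps):
        x = -half_width + (i + 0.5) * h
        w = math.exp(-n * (x / n ** 0.25) ** 4 / 12)
        num += f(x) * w
        den += w
    return num / den

# The N-dependence cancels identically for the pure quartic.
assert abs(ratio(10) - ratio(10_000)) < 1e-12
```

In the paper the cancellation is only approximate, with errors controlled by g_N, h_N and the outer-region bounds; the toy shows the exact mechanism those estimates approximate.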

REFERENCES

1. D. J. Amit, H. Gutfreund, and H. Sompolinsky, Statistical mechanics of neural networks near saturation, Ann. Phys. 173 (1987), 30-67.
2. R. N. Bhattacharya and R. Ranga Rao, Normal approximation and asymptotic expansion, Wiley, New York, 1976.
3. A. Bovier and V. Gayrard, An almost sure large deviation principle for the Hopfield model, Ann. Probab. 24 (1996), 1444-1475.
4. ___, An almost sure central limit theorem for the Hopfield model, Markov Processes Relat. Fields 3 (1997), 151-173.
5. ___, The retrieval phase of the Hopfield model: A rigorous analysis of the overlap distribution, Probab. Theory Related Fields 107 (1997), 61-98.
6. ___, Hopfield models as generalized random mean field models, Mathematical Aspects of Spin Glasses and Neural Networks (A. Bovier and P. Picco, eds.), Progress in Probability, Birkhäuser, Boston, 1998, pp. 3-89.
7. A. Bovier, V. Gayrard, and P. Picco, Gibbs states of the Hopfield model in the regime of perfect memory, Probab. Theory Related Fields 100 (1994), 329-363.
8. ___, Gibbs states of the Hopfield model with extensively many patterns, J. Statist. Phys. 79 (1995), 395-414.
9. ___, Large deviation principles for the Hopfield model and the Kac-Hopfield model, Probab. Theory Related Fields 101 (1995), 511-546.
10. A. Bovier and D. Mason, Extreme value behaviour in the Hopfield model, preprint, preliminary version, 1998.
11. A. Bovier and P. Picco (eds.), Mathematical aspects of spin glasses and neural networks, Progress in Probability, Birkhäuser, Boston, 1998.
12. U. Einmahl, Extensions of a result by Komlós, Major, and Tusnády to the multidimensional case, J. Multivariate Anal. 28 (1989), 20-68.
13. U. Einmahl and D. Mason, Gaussian approximation of local empirical processes indexed by functions, Probab. Theory Related Fields 107 (1997), 283-311.
14. R. S. Ellis, Entropy, large deviations, and statistical mechanics, Grundlehren der mathematischen Wissenschaften, vol. 271, Springer, New York, 1985.
15. R. S. Ellis and C. M. Newman, Limit theorems for sums of dependent random variables occurring in statistical mechanics, Z. Wahrscheinlichkeitstheorie verw. Gebiete 44 (1978), 117-139.
16. B. Gentz, An almost sure central limit theorem for the overlap parameters in the Hopfield model, Stochastic Process. Appl. 62 (1996), 243-262.
17. ___, A central limit theorem for the overlap in the Hopfield model, Ann. Probab. 24 (1996), 1809-1841.
18. ___, A central limit theorem for the overlap in the Hopfield model, Ph.D. thesis, Universität Zürich, Switzerland, 1996.
19. ___, On the central limit theorem for the overlap in the Hopfield model, Mathematical Aspects of Spin Glasses and Neural Networks (A. Bovier and P. Picco, eds.), Progress in Probability, Birkhäuser, Boston, 1998, pp. 115-149.
20. B. Gentz and M. Löwe, The fluctuations of the overlap in the Hopfield model with finitely many patterns at the critical temperature, preprint, submitted, 1998.
21. J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. U.S.A. 79 (1982), 2554-2558.
22. J. Komlós, P. Major, and G. Tusnády, An approximation of partial sums of independent RV's and the sample DF. I, Z. Wahrscheinlichkeitstheorie verw. Gebiete 32 (1975), 111-131.
23. J. Komlós and R. Paturi, Convergence results in an associative memory model, Neural Networks 1 (1988), 239-250.
24. M. Ledoux and M. Talagrand, Probability in Banach spaces, Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer, Berlin, 1991.
25. D. Loukianova, Lower bounds on the restitution error in the Hopfield model, Probab. Theory Related Fields 107 (1997), 161-176.
26. M. Löwe, On the storage capacity of Hopfield models with weakly correlated patterns, to appear in Ann. Appl. Probab.
27. R. J. McEliece, E. C. Posner, E. R. Rodemich, and S. S. Venkatesh, The capacity of the Hopfield associative memory, IEEE Trans. Inform. Theory 33 (1987), 461-482.
28. C. M. Newman, Memory capacity in neural network models: Rigorous lower bounds, Neural Networks 1 (1988), 223-238.
29. L. A. Pastur and A. L. Figotin, Exactly soluble model of a spin glass, Sov. J. Low Temp. Phys. 3 (1977), no. 6, 378-383.
30. ___, On the theory of disordered spin systems, Theor. Math. Phys. 35 (1977), 403-414.
31. M. Talagrand, Rigorous results for the Hopfield model with many patterns, Probab. Theory Related Fields 110 (1998), 177-276.
32. A. Yu. Zaitsev, On the Gaussian approximation of convolutions under multidimensional analogues of S. N. Bernstein's inequality conditions, Probab. Theory Related Fields 74 (1987), 535-566.
33. ___, Multidimensional version of the results of Komlós, Major, and Tusnády for vectors with finite exponential moments, Tech. Report 95-055, SFB 343, Bielefeld, 1995.

(Barbara Gentz) WEIERSTRASS-INSTITUT FÜR ANGEWANDTE ANALYSIS UND STOCHASTIK, MOHRENSTR. 39, D-10117 BERLIN, GERMANY
E-mail address: gentz@wias-berlin.de

(Matthias Löwe) EURANDOM, P.O. Box 513, NL-5600 MB EINDHOVEN, THE NETHERLANDS
E-mail address: lowe@eurandom.tue.nl
