(1)

Adaptive random search

Citation for published version (APA):

Kregting, J., & White, R. C. (1971). Adaptive random search. (EUT report. E, Fac. of Electrical Engineering; Vol. 71-E-24). Technische Hogeschool Eindhoven.

Document status and date: Published: 01/01/1971

Document Version:

Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers)

Please check the document version of this publication:

• A submitted manuscript is the version of the article upon submission and before peer review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI link to the publisher's website.

• The final author version and the galley proof are versions of the publication after peer review.

• The final published version features the final layout of the paper including the volume, issue and page numbers.

Link to publication

General rights

Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.

• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.

• You may not further distribute the material or use it for any profit-making activity or commercial gain.

• You may freely distribute the URL identifying the publication in the public portal.

If the publication is distributed under the terms of Article 25fa of the Dutch Copyright Act, indicated by the "Taverne" license above, please follow the link below for the End User Agreement:

www.tue.nl/taverne

Take down policy

If you believe that this document breaches copyright please contact us at:

openaccess@tue.nl


Group Measurement and Control
Department of Electrical Engineering
Eindhoven University of Technology
Eindhoven, Netherlands

"ADAPTIVE RANDOM SEARCH"

by

J. Kregting and R.C. White, Jr.

TH-Report 71-E-24
October 1971


Abstract

ADAPTIVE RANDOM SEARCH

J. Kregting and R.C. White, Jr.
Department of Electrical Engineering
Technological University
Eindhoven, Netherlands

Random search methods for multivariable function minimization are considered. Beginning with a theoretical method having an optimum step size [2], directional information is added, and the resulting improvement in performance is calculated for the function Q(x) = Σ_{i=1}^n x_i². Practical algorithms, with and without adaptation of step size and search direction, are tested for several functions. For each algorithm the number of function evaluations required for minimization increases linearly with the number of variables. Theoretical and numerical results indicate that directional adaptation improves performance significantly.

Contents

1. Introduction
2. Theoretical random search methods
   2.1. Optimum directional random search (ODRS)
   2.2. Optimum half-space random search (OHSRS)
   2.3. Comparison of OSSRS, ODRS and OHSRS
3. Adaptive directional random search (ADRS)
4. Numerical results
5. Conclusions
References
Appendix A
Appendix B


1. Introduction

A topic of much current interest is the development of efficient computational methods for solving the following problem: find the values of a set of parameters x = (x₁, x₂, ..., x_n)^T which minimize a (real-valued) objective function Q(x).

"Random search" methods for function minimization have been studied and used by several researchers [1]. Like other direct search methods, random search algorithms have the advantage of requiring neither measurements of the gradient of Q(x) nor one-dimensional minimizations of Q(x), operations which can be numerically troublesome and/or costly in computational effort. Schumer and Steiglitz [2] have analyzed a random search method with optimum step size and have found that, under certain conditions, the number of evaluations of Q(x) required for minimization is a linear function of the number of parameters.

In the present paper we postulate and analyze the performance of two theoretical random search methods which have both optimum step size and also information concerning the direction in which Q(x) decreases. Our method of analysis is similar to that of Schumer and Steiglitz. A practical algorithm, which attempts to realize the theoretical ones by adapting both the step size and the search direction to the function being minimized, is developed from a method proposed by Matyas [3]. Numerical results are given for this algorithm as applied to several test functions.


2. Theoretical random search methods

For the development of the theoretical random search methods, we assume a quality function of the form

    Q(x) = Σ_{i=1}^n x_i² = x^T x = p²

where p = |x|. Figure 1 shows a 2-dimensional view of the parameter space, where the search is currently located at point A. A typical random step Δx of length s forms an angle θ with the negative-gradient direction AO. The random search method examines Q(x + Δx), and if Q(x + Δx) < Q(x), the search moves to the point x + Δx. Otherwise the search remains at x. The (k+1)-th step is described by

    x(k+1) = x(k) + δ(k) Δx(k)                                   (1)

where

    δ(k) = 1  if Q(x + Δx) < Q(x),
    δ(k) = 0  if Q(x + Δx) ≥ Q(x).
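As a concrete illustration of the step rule (1), here is a minimal Python sketch (ours, not from the report): steps of fixed length s with directions uniform on the hypersphere, accepted only when they reduce Q. The function name and parameters are illustrative.

```python
import numpy as np

def fixed_step_random_search(Q, x0, s, n_steps, seed=0):
    """Step rule (1): move to x + dx only if Q improves, else stay."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    q = Q(x)
    for _ in range(n_steps):
        dx = rng.standard_normal(x.size)
        dx *= s / np.linalg.norm(dx)      # random direction, |dx| = s
        q_trial = Q(x + dx)
        if q_trial < q:                   # delta(k) = 1: success
            x, q = x + dx, q_trial        # delta(k) = 0 otherwise: x unchanged
    return x, q

# e.g. Q(x) = x^T x in n = 10 dimensions:
x_best, q_best = fixed_step_random_search(lambda v: v @ v, np.ones(10),
                                          s=0.3, n_steps=5000)
```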

We consider two methods for choosing Δx, each method assuming a certain amount of information concerning the direction in which Q(x) decreases.

2.1. Optimum directional random search (ODRS)

It is first assumed that the search method has a large amount of directional information: it can control the distribution of the random steps such that each one is successful. Furthermore, it chooses the step length s = |Δx| so as to maximize the average progress toward the optimum. (The "optimum step size random search" (OSSRS) of Schumer & Steiglitz assumes no directional information, but does optimize the step size.) Thus, for "optimum directional random search" (ODRS) we consider the tip of the random vector Δx to be uniformly distributed over the spherical surface of an n-dimensional hypercone with vertex at x, an axis of length s lying along the negative-gradient direction AO, and an aperture angle θ₀ = arccos(s/2p). The probability density function of θ is (see Appendix A)

    f_θ(θ) = sin^{n−2}θ / ∫₀^{θ₀} sin^{n−2}θ dθ,   0 ≤ θ ≤ θ₀    (2)

where

    θ₀ = arccos(s/2p).                                           (3)

The normalized expected improvement in Q(x) per step is defined as

    I = E{−ΔQ} / Q(x)                                            (4)

where E{·} denotes the expectation operation and, with reference to (1), ΔQ = Q(x(k+1)) − Q(x(k)). ΔQ is negative for successful steps and zero for a failure. From Fig. 1, and using the law of cosines p′² = p² + s² − 2ps cos θ, note that

    −ΔQ = p² − p′² = 2sp cos θ − s²   if θ < θ₀,
    −ΔQ = 0                           if θ ≥ θ₀.                 (5)

With ODRS all steps are successful, and

    E{−ΔQ} = ∫₀^{θ₀} (2sp cos θ − s²) f_θ(θ) dθ.                 (6)

Substituting (2) and (6) in (4), we have

    I_θ(η, n) = 2η [∫₀^{θ₀} cos θ sin^{n−2}θ dθ / ∫₀^{θ₀} sin^{n−2}θ dθ] − η²    (7)

where η = s/p is the relative step length. In order to obtain the step length for a maximum normalized expected improvement per step, I_θ(η, n) is differentiated with respect to η, and the result is set equal to zero. This yields an equation in η₀, the optimum relative step length:

    dI_θ(η, n)/dη = 0   at   η = η₀                              (8)

in which the integral B(θ₀) = ∫₀^{θ₀} sin^{n−2}θ dθ appears. Under the assumptions that n is large and that η₀ is small for large n, B(θ₀) may be approximated by (see Appendix II of [2])

    B(θ₀) ≈ ∫_{η₀/2}^{π/2} cos^{n−2}θ dθ
          ≈ √(π/(2n)) − η₀/2 + [(n−2)/48] η₀³ − [(n−2)(n−3)/1280] η₀⁵ + ⋯    (9)

Substituting (9) in (8) and writing the binomial series for the terms in (8), we obtain an equation of the following form for η₀:

    a₁ + a₂ (√n η₀) + a₃ (√n η₀)² + ⋯ ≈ 0                        (10)

where the a_m's are constants. This equation provides the result that η₀ = a/√n, with a a constant to be determined. Direct solution of (10) was abandoned in favor of numerically maximizing I_θ(η₀, n) = I_θ(a/√n, n) with respect to a (see Appendix B). The results are:

    η₀ ≈ 2.290/√n                                                (11)

    I_θ(η₀, n) = k_θ/n ≈ 2.159/n                                 (12)

Analogous results for OSSRS (where for notational purposes we denote by φ the angle between Δx and the negative gradient) are [2]:

    η₀ ≈ 1.225/√n   (OSSRS)                                      (13)

    I_φ(η₀, n) = k_φ/n ≈ 0.406/n   (OSSRS)                       (14)

It is noteworthy that, while the average improvement is greater for ODRS than for OSSRS, the assumption of directional information in ODRS has not changed the nature of the relationship between the average improvement and the number of parameters; it remains inversely proportional to n.
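As a concrete reading of these results (our arithmetic, not from the report): for n = 20, (12) gives I_θ(η₀, n) ≈ 2.159/20 ≈ 0.11, an expected reduction of Q of about 11% per step, while (14) gives only 0.406/20 ≈ 0.02 for OSSRS. The ratio 2.159/0.406 ≈ 5.3 is independent of n, and it is this factor that reappears in the evaluation counts of (20).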

2.2. Optimum half-space random search (OHSRS)

In ODRS a great amount of directional information is assumed to be known. In an effort to have a theoretical method which is slightly closer to a realizable algorithm, we postulate a method in which all random steps lie in the half-space defined by the hyperplane tangent to Q(x) at x. The tip of Δx is assumed to be uniformly distributed over the spherical surface of an n-dimensional half-hypersphere, which has radius s, center at x, and a flat side defined by the aforementioned tangent plane.

For "optimum half-space random search" (OHSRS) the angle between Δx and the negative gradient is denoted by ψ, 0 ≤ ψ ≤ π/2. The probability density function of ψ is (see Appendix A)

    f_ψ(ψ) = sin^{n−2}ψ / ∫₀^{π/2} sin^{n−2}ψ dψ,   0 ≤ ψ ≤ π/2.   (15)

For OHSRS the normalized expected improvement in Q(x), denoted by I_ψ(η, n), is defined as in (4) and (5). We have

    E{−ΔQ} = ∫₀^{π/2} (−ΔQ) f_ψ(ψ) dψ.                           (16)

Substitution of (15) and (16) in (4) yields

    I_ψ(η, n) = ∫₀^{ψ₀} (2η cos ψ − η²) sin^{n−2}ψ dψ / ∫₀^{π/2} sin^{n−2}ψ dψ = 2 I_φ(η, n)    (17)

where ψ₀ = arccos(η/2) as in (3), and I_φ(η, n) is the average improvement obtained for OSSRS (see Eq. (7) of [2]). Thus η₀ is the same for OHSRS and OSSRS, and for OHSRS

    I_ψ(η₀, n) = 2 I_φ(η₀, n) = k_ψ/n ≈ 0.812/n.                 (18)

Again, the expected improvement per step is inversely proportional to the number of parameters.

2.3. Comparison of OSSRS, ODRS and OHSRS

We now compare the number of evaluations of the quality function required by the three theoretical methods for minimization of Q(x) = p². Let Q₀ be the value of Q(x) at the beginning of the search, and let the search be ended when an x is found such that Q(x) ≤ Q_f. Then, under the assumptions that n is large and that the improvements at each step are independent, Q decreases on the average by the factor (1 − k/n) per step, and the number of steps M (and therefore the number of function evaluations) required to minimize Q(x) is [2]

    M ≈ (1/k) ln(Q₀/Q_f) n                                       (19)

where

    1/k_φ ≈ 2.463   for OSSRS
    1/k_ψ ≈ 1.231   for OHSRS                                    (20)
    1/k_θ ≈ 0.463   for ODRS

From (20) we see that OHSRS requires half as many function evaluations as OSSRS, and ODRS requires roughly one-fifth as many as OSSRS.
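As an illustration (our arithmetic): for Q(x) = x^T x with n = 20, Q₀ = Q(1, ..., 1) = 20 and Q_f = 10⁻⁸ (the setting used in Section 4), ln(Q₀/Q_f) ≈ 21.4, so (19) predicts roughly 2.463 · 21.4 · 20 ≈ 1050 evaluations for OSSRS, about 530 for OHSRS, and about 200 for ODRS. The practical counterparts in Fig. 3 require more (e.g. about 57n ≈ 1140 for ADRS).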


3. Adaptive directional random search (ADRS)

Matyas [3] has proposed a practical random search algorithm with adaptive direction and step size, which is used here in an attempt to realize (to a limited degree) the properties of ODRS or OHSRS. The algorithm, adaptive directional random search (ADRS), is described by (1), where Δx(k), k = 1, 2, ..., is determined by the following expressions ((21)-(27)):

    Δx(k) = d(k) + b(k) ξ(k)                                     (21)

    d(k) = c₀ d(k−1) + c₁ Δx(k−1)                                (22)

with Δx(0) = 0 and d(0) = 0, and where for δ(k−1) = 1 (last step a success)

    c₀ = c_{0s},   c₁ = c_{1s}                                   (23)

    0 < c_{0s} < 1;   c_{1s} > 0;   c_{0s} + c_{1s} > 1          (24)

and for δ(k−1) = 0 (last step a failure)

    c₀ = c_{0f},   c₁ = c_{1f}                                   (25)

    0 < c_{0f} < 1;   c_{1f} < 0;   |c_{0f} + c_{1f}| < 1        (26)

Finally, |d(k)| is limited by

    |d(k)| ≤ D b(k).                                             (27)

Should (22) result in a violation of (27), d(k) is normalized so that |d(k)| = D b(k). ξ(k) is a random vector generated by selecting each component from a distribution uniform on [−1, 1] and normalizing the resulting vector such that |ξ(k)| = 1.

b(k) and D are scalars. d(k), the mean value of Δx(k), is weighted positively by d(k−1) and is weighted in the direction of the last step following a success, or in the opposite direction following a failure. When successes occur in the same general direction, inequalities (24) tend to bias Δx in this direction and to increase |Δx|. Following failures, inequalities (26) tend to remove the directional bias and to decrease |Δx|. The inequality (27) prevents Δx from being "overdetermined" by d. As values for c_{0s}, c_{1s}, c_{0f}, c_{1f}, b(k) and D are not provided in [3], they were chosen on the basis of numerical experiments with Q(x) = x^T x. The values selected are

    c_{0s} = 0.75,   c_{1s} = 1.25,   c_{0f} = 0.75,   c_{1f} = −0.75,   D = 3.    (28)


b(k) is given an initial value and is multiplied by a factor of 1/10 following 20 consecutive unsuccessful steps. The complete algorithm is shown in Fig. 2. For the purpose of comparing results, we have chosen the basic structure of the flow diagram to be the same as that for the "adaptive step size random search" (ASSRS) of [2].

One more algorithm, which we call "ordinary random search" (ORS), is constructed according to (1) and (21) with the restriction that d(k) = 0 for all k. It is also described by the flow diagram of Fig. 2. ORS does not adapt the search direction, and adapts the step size only in the sense that b(k) is reduced following 20 consecutive failures. It is used as a basis for judging the effectiveness of the adaptive algorithms.
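Collecting (1) and (21)-(27) with the constants (28) and the step-size reduction of Fig. 2 gives the following Python sketch. The names and structure are ours, and the periodic comparison of a step Δx with a step 10(Δx) in Fig. 2 is omitted, so evaluation counts will differ somewhat from the reported ones.

```python
import numpy as np

C0S, C1S = 0.75, 1.25    # success weights, eqs. (23)-(24), values from (28)
C0F, C1F = 0.75, -0.75   # failure weights, eqs. (25)-(26), values from (28)
D = 3.0                  # bound on |d| relative to b, eq. (27)

def adrs(Q, x0, b=0.1, q_min=1e-8, max_evals=200_000,
         adapt_direction=True, rng=None):
    """Sketch of ADRS per eqs. (1), (21)-(27); ORS when adapt_direction=False."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    n = x.size
    d = np.zeros(n)                  # d(0) = 0
    dx = np.zeros(n)                 # Delta x(0) = 0
    success = False
    fails = 0
    q = Q(x)
    evals = 1
    while q > q_min and evals < max_evals:
        if adapt_direction:
            c0, c1 = (C0S, C1S) if success else (C0F, C1F)   # eqs. (23), (25)
            d = c0 * d + c1 * dx                             # eq. (22)
            nd = np.linalg.norm(d)
            if nd > D * b:                                   # eq. (27)
                d *= D * b / nd
        xi = rng.standard_normal(n)
        xi /= np.linalg.norm(xi)                             # |xi(k)| = 1
        dx = d + b * xi                                      # eq. (21)
        q_trial = Q(x + dx)
        evals += 1
        success = q_trial < q                                # delta(k) of eq. (1)
        if success:
            x, q, fails = x + dx, q_trial, 0
        else:
            fails += 1
            if fails == 20:       # Fig. 2: b reduced after 20 consecutive failures
                b /= 10.0
                fails = 0
    return x, q, evals
```

Calling adrs(Q, x0, adapt_direction=False) gives ORS: d(k) stays 0 and only the step-size reduction after 20 consecutive failures remains.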


4. Numerical results

The algorithms ADRS and ORS were tested for several functions. Comparisons are made with results reported for ASSRS [2].

For Q(x) = x^T x, 10 independent optimizations were performed for n = 5, 10, 15 and 20. The initial step size is b(0) = 0.1, the starting point is x(0) = (1, 1, ..., 1), and the search is stopped when Q(x) < Q_min = 10⁻⁸. The results for ADRS, ORS and ASSRS are shown in Fig. 3. The numbers of function evaluations required by ORS and ADRS are approximately linear in n. This result for ORS is especially noteworthy, because this linear relationship has been predicted theoretically only for an algorithm with optimum step size. The directional adaptation of ADRS results in an improvement over ORS, and ORS is slightly more efficient than ASSRS. Apparently the adaptation of step size in ASSRS is costly in terms of function evaluations. From the experimental results it is possible to calculate approximate values of 1/k in (19):

            ADRS    ORS     ASSRS
    1/k     2.63    3.60    3.67

These values, which are proportional to the number of function evaluations required to minimize Q(x) = x^T x, may be compared with the values for the theoretical algorithms (20). The practical methods fall well short of the theoretical limits.

For the function

    Q(x) = Σ_{i=1}^n a_i x_i²                                    (29)

where the a_i's are random numbers chosen from a distribution uniform on [0.1, 1], 10 independent trials were performed for n = 5, 10, 15 and 20, with b(0) = 0.1, x(0) = (1, 1, ..., 1) and Q_min = Q(1, 1, ..., 1)/1000. The results, shown in Fig. 4, are qualitatively the same as for Q(x) = x^T x, although the relative improvement of ADRS over ORS and ASSRS is greater here than for Q(x) = x^T x. This might be expected, because the directional adaptation of ADRS should be helpful in moving along the valleys of (29).

Similar results were obtained for Q(x) = Σ_{i=1}^n x_i⁴ with 10 independent trials, b(0) = 0.1, x(0) = (1, 1, ..., 1), and Q_min = 0.5 × 10⁻⁸. The numbers of function evaluations are approximately described by 53n for ASSRS [2], 51n for ORS, and 32n for ADRS.
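A driver in the spirit of these experiments (10 independent trials per dimension, b(0) = 0.1, x(0) = (1, ..., 1)), reusing the adrs sketch from Section 3; the resulting counts will only roughly track the reported slopes:

```python
for n in (5, 10, 15, 20):
    for name, adapt in (("ADRS", True), ("ORS", False)):
        evals = [adrs(lambda v: np.sum(v ** 4), np.ones(n), b=0.1,
                      q_min=0.5e-8, adapt_direction=adapt,
                      rng=np.random.default_rng(trial))[2]
                 for trial in range(10)]           # 10 independent trials
        print(f"n={n:2d}  {name}: {np.mean(evals):7.0f} evaluations")
```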


ADRS was tested for Rosenbrock's function,

    Q(x) = 100 (x₂ − x₁²)² + (1 − x₁)²                           (30)

with b(0) = 0.1, x(0) = (−1.2, 1), and Q_min = 10⁻³. Convergence was very slow. Multiplying the step size by a factor of 1/10 following 20 consecutive failures (Fig. 2) soon resulted in a very small step size. For a reducing factor of only 1/2 and with D = 6, 399 evaluations (average of four trials) were required for minimization. This is inferior to results for other "direct search" methods, such as pattern search, the simplex method, and Rosenbrock's method [5].


5. Conclusions

Two theoretical random search methods, optimum directional random search (ODRS) and optimum half-space random search (OHSRS), have been analyzed in an attempt to evaluate the improvement in performance resulting from the inclusion of directional information in a random search algorithm. Although the improvement is significant in comparison with a similar method without directional information (OSSRS [2]), the number of function evaluations required by ODRS and OHSRS to minimize Q(x) = Σ_{i=1}^n x_i² remains a linear function of n.

The performances of three practical algorithms have been compared: adaptive directional random search (ADRS), based on [3], which attempts to adapt both step size and search direction to the function being minimized; adaptive step size random search (ASSRS [2]), which adapts only step size; and ordinary random search (ORS), which only reduces its step size following some number of consecutive failures. For the functions Q(x) = Σ_{i=1}^n x_i², Q(x) = Σ_{i=1}^n a_i x_i², and Q(x) = Σ_{i=1}^n x_i⁴, the number of function evaluations required for minimization is approximately linear in n for all three algorithms. ADRS requires about 0.74 times as many evaluations as ORS for Q(x) = x^T x, and about 0.6 times as many as ORS for the other two functions. ORS performs slightly better than ASSRS for each function. These results suggest that spending an extra function evaluation at each step in order to adapt the step size (as in ASSRS) is not profitable. The strategy of ADRS, which attempts to adapt step size and search direction only with information collected from past steps, is more successful. This follows the basic strategy of direct search algorithms: use every function evaluation to try to step immediately toward the minimum, and collect information only from such steps.

The performance of ADRS on Rosenbrock's function indicates that the algorithm is still inferior to other direct search methods for functions with narrow, curving valleys. Random search algorithms appear to be most useful for rather "smooth" functions of many variables.


References

[1] White, R.C., Jr.: "A survey of random methods for parameter optimization", TH-report 70-E-16, Technological University Eindhoven, The Netherlands, February 1971.

[2] Schumer, M.A. and K. Steiglitz: "Adaptive step size random search", IEEE Trans. Automatic Control, AC-13, no. 3, 270-276, June 1968.

[3] Matyas, J.: "Random optimization", Automation and Remote Control, vol. 26, no. 2, 244-250, 1965.

[4] Mutseniyeks, V.A. and L.A. Rastrigin: "Extremal control of continuous multi-parameter systems by the method of random search", Engineering Cybernetics, vol. 2, 82-90, Jan./Feb. 1964.

[5] Kowalik, J. and M.R. Osborne: Methods for Unconstrained Optimization Problems, American Elsevier, New York, 1968.


APPENDIX A.

1. Probability density of θ

The distribution of Δx is given in Section 2.1. Then the distribution function F_θ(θ) of θ is

    F_θ(θ) = S(θ)/S(θ₀)                                          (31)

where S(θ) is the area of the spherical surface of an n-dimensional cone with axis of length s and aperture angle θ. The probability density of θ is

    f_θ(θ) = dF_θ(θ)/dθ = [1/S(θ₀)] dS(θ)/dθ.                    (32)

From [4] we have

    dS(θ)/dθ = a_{n−1} (n−1) s^{n−1} sin^{n−2}θ                  (33)

where

    a_n = π^{n/2} / Γ(n/2 + 1).                                  (34)

Integrating (33) to obtain S(θ₀) and substituting in (32), we obtain

    f_θ(θ) = sin^{n−2}θ / ∫₀^{θ₀} sin^{n−2}θ dθ,   0 ≤ θ ≤ θ₀.   (35)

2. Probability density of ψ

The distribution of Δx is given in Section 2.2. Then the distribution function F_ψ(ψ) of ψ is

    F_ψ(ψ) = S(ψ) / [½ S(π)]                                     (36)

where S(ψ) is defined analogously to S(θ) above, and S(π) is the surface area of an n-dimensional hypersphere. Again using (32), we obtain

    f_ψ(ψ) = sin^{n−2}ψ / ∫₀^{π/2} sin^{n−2}ψ dψ,   0 ≤ ψ ≤ π/2.   (37)
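As an independent check of (37) (ours, not in the report), one can sample directions uniformly on the half-hypersphere and compare the empirical density of ψ with sin^{n−2}ψ, normalized:

```python
import numpy as np

n, N = 10, 200_000
rng = np.random.default_rng(1)
v = rng.standard_normal((N, n))
v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform directions on the sphere
v[:, 0] = np.abs(v[:, 0])     # fold onto the half-sphere; e1 stands in for -grad Q
psi = np.arccos(v[:, 0])      # angle to the axis
hist, edges = np.histogram(psi, bins=50, range=(0.0, np.pi / 2), density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
th = np.linspace(0.0, np.pi / 2, 20001)
Z = np.sum(np.sin(th) ** (n - 2)) * (th[1] - th[0])   # normalization in (37)
f = np.sin(mid) ** (n - 2) / Z
print(np.max(np.abs(hist - f)))   # small (sampling noise only)
```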


APPENDIX B. Maximization of I_θ(a/√n, n) with respect to a.

In the numerator of the first term of (7) we have

    ∫₀^{θ₀} cos θ sin^{n−2}θ dθ = [1/(n−1)] sin^{n−1}θ₀ = [1/(n−1)] (1 − η₀²/4)^{(n−1)/2}
                                = [1/(n−1)] {1 − a²/8 + a⁴/128 − a⁶/2560 + ⋯}    (38)

where we have expanded the binomial series with η₀ = a/√n. The denominator of the first term of (7) is approximated by (9). Substitution of (9) and (38) in (7) yields for large n

    I_θ(a/√n, n) ≈ (2a/n) · [1 − a²/8 + a⁴/128 − a⁶/1536 + ⋯] / [√(π/2) − a/2 + a³/48 − a⁵/1280 + ⋯] − a²/n.   (39)

The sums in the numerator and denominator of (39) can be shown to be absolutely convergent for any a.

The maximum value of the right side of (39) was found numerically to be attained for a ≈ 2.290. This yields

    η₀ ≈ 2.290/√n.                                               (40)
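Both ingredients of the appendix can be checked numerically (our check, with n = 100 and η₀ from (11)): the closed form behind (38) is an exact antiderivative, and the truncated series (9) for the denominator is accurate to a few percent:

```python
import numpy as np

n = 100
eta0 = 2.290 / np.sqrt(n)          # optimum relative step length, eq. (11)
theta0 = np.arccos(eta0 / 2)

# Numerator of (7): closed form used in (38) vs. direct quadrature.
th = np.linspace(0.0, theta0, 200_001)
dth = th[1] - th[0]
num_quad = np.sum(np.cos(th) * np.sin(th) ** (n - 2)) * dth
num_closed = (1.0 - eta0**2 / 4) ** ((n - 1) / 2) / (n - 1)

# Denominator B(theta0): quadrature vs. series approximation (9).
B_quad = np.sum(np.sin(th) ** (n - 2)) * dth
B_series = (np.sqrt(np.pi / (2 * n)) - eta0 / 2
            + (n - 2) * eta0**3 / 48
            - (n - 2) * (n - 3) * eta0**5 / 1280)

print(num_quad, num_closed)   # agree closely (exact antiderivative)
print(B_quad, B_series)       # ~0.032 vs ~0.031: truncated series is close
```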


[Figure: two concentric contours Q = p² and Q = p′² about the optimum O, with the current point A at distance p and a random step Δx of length s making an angle θ with AO.]

Fig. 1. A 2-dimensional view of the parameter space.


[Flow diagram, in outline:
  start: initialize x(0), b(0) and d(0); compute Q(x(0));
  (i) increment I1; take a step Δx (for ADRS use equations (1) and (21)-(27); for ORS use equations (1) and (21) with d = 0); determine success or failure and update x and d;
  (ii) on a failure, increment I2; when I2 reaches 20, set b := b/10 and reset I2 := 0;
  (iii) if Q(x) < Q_min, stop;
  (iv) if I1 is a multiple of 50, compare a step Δx with a step 10(Δx);
  (v) return to (i).]

Fig. 2. Flow diagram for ADRS and ORS.


[Figure: N versus n, with fitted slopes ASSRS: 80n and ADRS: 57n.]

Fig. 3. Average number of function evaluations N vs. dimension n required by ASSRS, ORS and ADRS for Q(x) = x^T x.


[Figure: N versus n; legible fitted slopes: ASSRS ≈ 33n, ORS: 31n.]

Fig. 4. Average number of function evaluations N vs. dimension n required by ASSRS, ORS and ADRS for Q(x) = Σ_{i=1}^n a_i x_i².


