
A principle for generating optimization procedures for

discounted Markov decision processes

Citation for published version (APA):

Wessels, J., & van Nunen, J. A. E. E. (1974). A principle for generating optimization procedures for discounted Markov decision processes. (Memorandum COSOR; Vol. 7411). Technische Hogeschool Eindhoven.

Document status and date: Published: 01/01/1974

Document Version:

Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers)



Department of Mathematics

STATISTICS AND OPERATIONS RESEARCH GROUP

memorandum COSOR 74-11

A principle for generating optimization procedures for discounted Markov decision processes

by

J. Wessels and

J.A.E.E. van Nunen


A principle for generating optimization procedures for discounted Markov decision processes

by

J. Wessels and

J.A.E.E. van Nunen

§ 0. Introduction

In this paper we will show how all existing optimization procedures (and a number of new ones) for discounted Markov decision processes may be derived from one point of view.

So we consider a finite-state, discrete-time Markov system which is controlled by a decision maker. After each transition the system may be identified as being in one of N possible states. Let $S := \{1,2,\ldots,N\}$ be the set of states. Transitions occur at discrete points in time $n = 0,1,2,\ldots$. After observing state i at time n the decision maker selects an action k from a nonempty finite set $K(i)$. Now $p^k_{ij}\ (\ge 0)$ is the probability of a transition to state $j \in S$ if the system's actual state is $i \in S$ and decision $k \in K(i)$ has been selected. An expected reward $r^k(i)$ is earned immediately, while future income is discounted by a constant factor $\beta$, $0 < \beta < 1$.

The problem is to choose a policy which maximizes the total expected discounted reward over an infinite time horizon.
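To fix the notation in a computational setting, the data of such a decision problem can be laid out as follows. This is a minimal, hypothetical sketch (the states, actions, probabilities and rewards are illustrative only and do not come from the paper); Python is used here and in the later sketches.

```python
# Hypothetical data of a discounted Markov decision problem with N = 3 states.
N = 3
S = range(N)                      # state space {0, 1, 2} (the paper uses {1, ..., N})

# K[i]: the nonempty finite set of actions available in state i.
K = {0: [0, 1], 1: [0], 2: [0, 1]}

# p[i][k][j]: probability of a transition to state j when action k is chosen
# in state i; every row sums to one.
p = {
    0: {0: [0.2, 0.5, 0.3], 1: [0.0, 0.6, 0.4]},
    1: {0: [0.1, 0.8, 0.1]},
    2: {0: [0.7, 0.2, 0.1], 1: [0.3, 0.3, 0.4]},
}

# r[i][k]: expected immediate reward for choosing action k in state i (all >= 0).
r = {
    0: {0: 1.0, 1: 2.5},
    1: {0: 0.5},
    2: {0: 3.0, 1: 1.0},
}

beta = 0.9                        # discount factor, 0 < beta < 1
```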

In the literature a great number of optimization procedures for solving this kind of problem has been presented. Each procedure requires its own proof of convergence and possesses its own properties. We divide the proposed procedures into two classes:

policy improvement procedures;

policy improvement-value determination procedures.

In procedures of the second class some extra work is done in each iteration step in order to estimate or compute the values for the current policy ([3], [4], [5], [6], [7], [8]). Procedures of the first class have been presented in [1], [2], [3], [7] and [11].

In § 1 we will use (as in [12]) the concept of stopping times for the generation of policy improvement procedures.

In § 2 we will show that any policy improvement procedure may be used to generate a whole set of policy improvement-value determination procedures (including a Howard-like one).

In § 3 we will present upper and lower bounds for the values corresponding to the policies which appear during the iteration process. This has been done already for specific procedures in [1], [3], [7], [9]; we will present a general approach.

Finally some extensions to more general problems will be indicated.

§ 1. Policy improvement procedures

For the Markov decision process as described in the introduction the set of allowed paths until time n is $S^{n+1}$. So $S^\infty := S \times S \times \cdots$ is the set of all allowed paths.

Definition 1.1.
a) The function $\tau$ on $S^\infty$ with nonnegative integer values is called a stopping time, if and only if its inverse satisfies $\tau^{-1}(n) = B \times S^\infty$ with $B \subset S^{n+1}$;
b) a nonempty subset A of $\bigcup_{k=0}^{\infty} S^k$ is called a go ahead set, if and only if $(\alpha,\beta) \in A \Rightarrow \alpha \in A$ for all $(\alpha,\beta) \in \bigcup_{k=0}^{\infty} S^k$.

($S^0$ consists only of the null-tuple, which concatenates with any $\alpha$ to give $\alpha$; our definition implies that any go ahead set contains this null-tuple.)

Notations.
- $A_n := \bigcup_{k=0}^{n} S^k$ $(0 \le n \le \infty)$;
- the i-th component of $\alpha \in S^n$ $(n \ge 1)$ is denoted by $[\alpha]_{i-1}$;
- if $\alpha \in S^n$ $(n \ge 0)$, $k_\alpha$ is defined to be n; hence $\alpha \in S^n$ $(n \ge 1)$ may be written as $([\alpha]_0, [\alpha]_1, \ldots, [\alpha]_{k_\alpha - 1})$;
- hence $k_\gamma = k_\alpha + k_\beta$ if $\gamma = (\alpha,\beta)$;
- $A(i) := \{\alpha \in A \mid [\alpha]_0 = i \text{ if } k_\alpha \ge 1\}$.

There is a one-to-one correspondence between stopping times and go ahead sets:

$$A = \bigcup_{n=0}^{\infty} \{\alpha \in S^n \mid \tau(\alpha,\beta) \ge n \text{ for all } \beta \in S^\infty\}\,,$$

$$\alpha \in A,\ \xi \in S,\ (\alpha,\xi) \notin A \;\Rightarrow\; \tau(\alpha,\xi,\beta) = k_\alpha \text{ for all } \beta \in S^\infty\,.$$

Definition 1.2. A stopping time $\tau$ (or its go ahead set A) is said to be nonzero, if and only if $\tau(\alpha) \ge 1$ for all $\alpha \in S^\infty$ (or equivalently $S \subset A$).

The only nonzero stopping time which is an entry time (memoryless) is $\tau \equiv \infty$ ($A = A_\infty$).

Examples of nonzero stopping times.
1.1. $A_n$ $(1 \le n \le \infty)$, $(\tau \equiv n)$;
1.2. $\tilde{A}$ defined by $\tilde{A}(i) := S^0 \cup \{(i)\} \cup \{(i,\alpha) \mid \alpha \in \bigcup_{j=1}^{i-1} \tilde{A}(j)\}$;
1.3. $A_R$ defined by $A_R(i) := \bigcup_{n=0}^{\infty} \{\alpha \in S^n \mid [\alpha]_j = i,\ j = 0,1,2,\ldots,n-1,\ \text{if } n \ge 1\}$;
1.4. $A_E := \bigcup_{n=2}^{\infty} \{\alpha \in S^n \mid [\alpha]_j \in E,\ j = 1,2,\ldots,n-1\} \cup S \cup S^0$, with E a subset of S.

Definition 1.3.
- A decision rule D is a function ascribing to each $\alpha \in \bigcup_{k=1}^{\infty} S^k$ an element $D(\alpha)$ of $K([\alpha]_{k_\alpha - 1})$;
- the decision rule D is said to be memoryless (stationary Markov) if $D(\alpha) = D([\alpha]_{k_\alpha - 1})$ for each $\alpha \in \bigcup_{k=1}^{\infty} S^k$;
- the set of decision rules is denoted by $\mathcal{V}$; the set of memoryless decision rules by M.

Let a decision rule $D \in \mathcal{V}$ be given. This decision rule determines a stochastic process $\{X_n \mid n = 0,1,\ldots\}$ on S.

As in [12] we now introduce the operator $L^D_\tau$, where $\tau$ and D are given.

Definition 1.4. Let $D \in \mathcal{V}$, let $\tau$ be a stopping time and A its corresponding go ahead set. The operator $L^D_\tau$ (or $L^D_A$) on $\mathbb{R}^N$ is defined by

$$(L^D_\tau v)(i) := E_D\Big(\sum_{k=0}^{\tau-1} \beta^k\, r^{D(x_0,\ldots,x_k)}(x_k) + \beta^\tau v(x_\tau)\ \Big|\ x_0 = i\Big)$$

(where $E_D$ denotes the expectation given that decision rule D is used), or equivalently:

$$(L^D_A v)(i) = \sum_{\substack{\alpha \in A(i)\\ k_\alpha \ge 1}} W_D(\alpha \mid i)\,\beta^{k_\alpha - 1}\, r^{D(\alpha)}([\alpha]_{k_\alpha - 1}) \;+\; \sum_{\alpha \in A(i)} \sum_{\substack{\ell \in S\\ (\alpha,\ell)\notin A(i)}} W_D(\alpha,\ell \mid i)\,\beta^{k_\alpha}\, v(\ell)\,.$$

Here $W_D(\alpha \mid i)$ is the probability of path $\alpha$ given that $X_0 = i$ and decision rule D is used.

Lemma 1.1. Let $\tau$ be an arbitrary stopping time. For any $v \in \mathbb{R}^N$ there exists a decision rule $D_0$ such that

$$L^{D_0}_\tau v \ge L^D_\tau v$$

componentwise for all $D \in \mathcal{V}$. For a proof see [12].

Notation. The vector $L^{D_0}_\tau v$ will be denoted by $\max_D L^D_\tau v$, $U_\tau v$, $\max_D L^D_A v$, or $U_A v$.

The operators $U_\tau$ serve, for some specific choices of $\tau$, to construct optimization procedures which actually aim at finding $U_{A_\infty} 0$ (sometimes denoted by $U_\infty 0$; here $0$ denotes the null-vector in $\mathbb{R}^N$). The i-th component of $U_\infty 0$ gives the total expected discounted reward over an infinite time horizon when the initial state is i and an optimal decision rule is used.
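For the simplest nonzero stopping time, $\tau \equiv 1$ (go ahead set $A_1$), $U_\tau$ reduces to the familiar one-step dynamic programming operator $(U_{A_1} v)(i) = \max_{k \in K(i)}\{r^k(i) + \beta \sum_j p^k_{ij} v(j)\}$. A minimal sketch, using the hypothetical problem data introduced in § 0:

```python
def U1(v, p, r, K, beta):
    """One application of U_{A_1}: for every state i return
    max over k in K(i) of  r^k(i) + beta * sum_j p^k_ij * v(j),
    together with a maximizing memoryless decision rule."""
    N = len(v)
    w = [0.0] * N
    rule = [None] * N
    for i in range(N):
        best_val, best_k = None, None
        for k in K[i]:
            q = r[i][k] + beta * sum(p[i][k][j] * v[j] for j in range(N))
            if best_val is None or q > best_val:
                best_val, best_k = q, k
        w[i], rule[i] = best_val, best_k
    return w, rule

# e.g.  w, D = U1([0.0] * N, p, r, K, beta)   # one step from the null-vector
```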

From a computational point of view it is desirable to maximize only over the memoryless decision rules when $U_\tau v$ is computed. This is allowed when the stopping time is transition memoryless (see [12]):

Definition 1.5. A stopping time $\tau$ (and its corresponding go ahead set A) is said to be transition memoryless, if and only if there exist a subset $T_1$ of $S^2$ and a subset $S_0$ of S such that $\alpha \in A$ if and only if $[\alpha]_0 \notin S_0$ and $([\alpha]_{n-1},[\alpha]_n) \in T_1$ for $n = 1,\ldots,k_\alpha - 1$.

Lemma 1.2. If $\tau$ is transition memoryless, then for all $v \in \mathbb{R}^N$

$$U_\tau v = \max_{D \in M} L^D_\tau v\,.$$

For a proof see [12].

Theorem 1.1.
a) The operators $L^D_\tau$ and $U_\tau$ are monotone, i.e. if $v \ge w$ (componentwise), then $L^D_\tau v \ge L^D_\tau w$ and $U_\tau v \ge U_\tau w$.
b) The operators $L^D_\tau$ and $U_\tau$ are strictly contracting (with respect to the supnorm in $\mathbb{R}^N$: $\|v\|_\infty = \max_i |v(i)|$) if and only if $\tau$ is nonzero; the corresponding contraction radii $\rho^D_\tau$ and $\nu_\tau$ are equal to

$$\rho^D_\tau := \max_i E_D(\beta^\tau \mid x_0 = i)\,, \qquad \nu_\tau := \max_D \rho^D_\tau\,.$$

c) If D is memoryless, then for any nonzero $\tau$ the fixed point of $L^D_\tau$ equals $L^D_{A_\infty} 0$.
d) For all nonzero $\tau$ the operators $U_\tau$ possess the fixed point $U_{A_\infty} 0$ ($= U_\infty 0$).

The stopping times used in the examples of this section are all nonzero and transition memoryless (hence $S_0 = \emptyset$).
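To illustrate how the choice of the go ahead set changes the resulting operator, the sketch below implements $U_{\tilde{A}}$ under the reading of example 1.2 given above (go ahead only while the state index strictly decreases); this is an interpretation of that example, not a formula taken from the paper. States are updated in increasing order with already-updated values for lower-indexed states, i.e. a Gauss-Seidel sweep:

```python
def U_gauss_seidel(v, p, r, K, beta):
    """Sketch of U_A for the go ahead set of example 1.2, read as: go ahead
    only while the state index strictly decreases.  Because all paths from
    state i that are still going ahead visit only states j < i, the value
    (U_A v)(j) for j < i is already available when state i is treated."""
    N = len(v)
    w = list(v)                        # w[j] holds (U_A v)(j) for j < i
    for i in range(N):
        best = None
        for k in K[i]:
            q = r[i][k] + beta * sum(
                p[i][k][j] * (w[j] if j < i else v[j]) for j in range(N))
            if best is None or q > best:
                best = q
        w[i] = best
    return w
```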

Lemma 1.3. Let $\tau$ be transition memoryless; suppose $r^k(i) \ge 0$ for all $i \in S$ and all $k \in K(i)$. Then the sequence

$$v^\tau_0 := 0\,; \qquad v^\tau_n := U_\tau v^\tau_{n-1} = (U_\tau)^n\, 0 \qquad (n = 1,2,\ldots)$$

is nondecreasing and converges to $U_\infty 0$, i.e.

$$v^\tau_{n-1} \le v^\tau_n\,, \qquad \lim_{n\to\infty} v^\tau_n = U_\infty 0\,.$$

Here $D_n$ is the memoryless decision rule found by applying $U_\tau$ on $v^\tau_{n-1}$.

The proof follows in a direct way from Theorem 1.1 and Lemma 1.2.
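For $\tau \equiv 1$ the scheme of Lemma 1.3 is ordinary successive approximation. A minimal sketch (it reuses the function U1 from the sketch in § 1; the supnorm stopping test is an implementation choice, not part of the lemma):

```python
def successive_approximation(p, r, K, beta, tol=1e-8, max_iter=100000):
    """Lemma 1.3 with tau = 1:  v_0 = 0,  v_n = U1(v_{n-1}).
    With r^k(i) >= 0 the iterates are componentwise nondecreasing and
    converge to U_infinity 0; the greedy rule of the last step is returned."""
    N = len(K)
    v = [0.0] * N
    rule = None
    for _ in range(max_iter):
        w, rule = U1(v, p, r, K, beta)
        if max(abs(w[i] - v[i]) for i in range(N)) < tol:
            return w, rule
        v = w
    return v, rule
```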

Remark. The restriction $r^k(i) \ge 0$, which is permitted without loss of generality, is made in order to enable us to start each algorithm with the same starting vector $v^\tau_0 = 0$. Without this restriction it is sufficient for the preservation of the monotonicity of the sequence $v^\tau_n$ that $v^\tau_0$ satisfies $U_\tau v^\tau_0 \ge v^\tau_0$.

Examples.
1.1. $\nu_{A_n} = \beta^n$ $(1 \le n \le \infty)$;
1.2. $\nu_{A_R} = \dfrac{\beta(1-p)}{1-\beta p}$, with $p := \min\limits_{i,D(i)} p^{D(i)}_{ii}$.

§ 2. Policy improvement-value determination procedures

Now, for each stopping time $\tau$ which is nonzero and transition memoryless, we introduce a class of value oriented extensions of the operator $U_\tau$.

Definition 2.1. For $\tau$ transition memoryless, $\lambda \in \mathbb{N}$, $v \in \mathbb{R}^N$ we define the operator

$$U^{(\lambda)}_\tau v := (L^{D_v}_\tau)^\lambda v\,,$$

where $D_v$ is the memoryless strategy which is found by applying $U_\tau$ on v.

Now $U^{(\lambda)}_\tau$ is neither necessarily strictly contracting nor necessarily monotone.

Theorem 2.1. Suppose $r^k(i) \ge 0$ for all $i \in S$ and $k \in K(i)$, let $\tau$ be transition memoryless and $\lambda \in \mathbb{N}$; then the sequence

$$v^{\lambda\tau}_0 := 0\,; \qquad v^{\lambda\tau}_n := U^{(\lambda)}_\tau v^{\lambda\tau}_{n-1} \qquad (n = 1,2,\ldots)$$

is nondecreasing and converges to $U_{A_\infty} 0$. Furthermore

$$v^{\lambda\tau}_{n-1} \le v^{\lambda\tau}_n \le L^{D_n}_{A_\infty} 0\,,$$

where $D_n$ is the memoryless strategy found by applying $U_\tau$ on $v^{\lambda\tau}_{n-1}$.

Proof. Since $r^k(i) \ge 0$ (see the remark at the end of section 1) we have, because of the monotonicity of $L^{D_1}_\tau$,

$$v^{\lambda\tau}_1 = U^{(\lambda)}_\tau 0 = (L^{D_1}_\tau)^\lambda 0 \ge (L^{D_1}_\tau)^{\lambda-1} 0 \ge \ldots \ge L^{D_1}_\tau 0 = U_\tau 0 \ge 0 = v^{\lambda\tau}_0\,.$$

The proof proceeds further in an inductive way, using the fact that $U_\tau$ and $L^D_\tau$ are monotone contractions and the fact that $v^{\lambda\tau}_n$ converges monotonically from below to $U_{A_\infty} 0$.

Assertion. Actually $U^{(\lambda)}_\tau v$ is a better estimate for $L^{D_v}_{A_\infty} 0$ than $U_\tau v$, where $D_v$ is the strategy that is found by applying $U_\tau$ on v.

For $\tau \equiv 1$ this assertion is illustrated in [7]. In general the statement follows from the following considerations. Let $\tau$ be transition memoryless, let v and w be given such that $w \ge v$ and $w := U_\tau v = L^{D_v}_\tau v$. Now from the previous section we know that

$$L^{D_v}_{A_\infty} 0 = \lim_{n\to\infty} (L^{D_v}_\tau)^n v = \lim_{n\to\infty}\Big[\, w + \sum_{k=1}^{n-1} \big\{(L^{D_v}_\tau)^k w - (L^{D_v}_\tau)^k v\big\}\Big]\,.$$

Since $w \ge v$ and the contraction property of $L^{D_v}_\tau$ imply

$$0 \le (L^{D_v}_\tau)^k w - (L^{D_v}_\tau)^k v \le (\rho^{D_v}_\tau)^k\, \|w - v\|_\infty \cdot e\,,$$

and since $U^{(\lambda)}_\tau v = w + \sum_{k=1}^{\lambda-1}\{(L^{D_v}_\tau)^k w - (L^{D_v}_\tau)^k v\}$, the statement will be clear.

Remark. If $\tau$ is nonzero and $\lambda = \infty$, then the algorithm of Theorem 2.1 is clearly of the policy iteration type: in each step the values of the current policy are computed exactly. The choice of $\tau$ only influences the way of looking for possible improvements: if $\tau \equiv 1$, the method equals Howard's policy iteration algorithm [4], [11]. If $\tau$ is replaced by the stopping time induced by the go ahead set $\tilde{A}$ of example 1.2, we get Hastings' modified policy iteration algorithm [8]. A great number of other choices is possible, e.g. $\tau$ as induced by $A_R$.

Now, regardless of the restriction $r^k(i) \ge 0$, each iteration step brings a strict improvement in the values $v^{\infty\tau}_n$, until the optimum is reached, which occurs after a finite number of steps (since only finitely many memoryless strategies are available).

§ 3. Upper and Lower bounds

If the theory developed in the previous sections is used for generating successive approximation algorithms, it will be necessary to construct upper and lower bounds for the optimal return $U_\infty 0$ and for the return $L^{D_n}_{A_\infty} 0$ of the strategy $D_n$ occurring in the n-th iteration step.

Furthermore, upper and lower bounds enable us to incorporate a test for the suboptimality of policies; see for instance [13], [14], [15]. Such a test may be based on the following idea:

Lemma 3.1. Let an upper bound $\bar{x}$ and a lower bound $\underline{x}$ for the optimal return $U_\infty 0$ be given, i.e. $\underline{x} \le U_\infty 0 \le \bar{x}$; then decision rule $D_0$ is not optimal if

$$L^{D_0}_\tau \bar{x} < U_\tau \underline{x}$$

(where $v < w$ means $v(i) \le w(i)$ and for at least one component $v(i) < w(i)$).

Proof. $U_\infty 0 = U_\tau(U_\infty 0) \ge U_\tau \underline{x} > L^{D_0}_\tau \bar{x} \ge L^{D_0}_\tau(U_\infty 0)$, where the monotonicity of $U_\tau$ and $L^{D_0}_\tau$ is used.
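In the spirit of Lemma 3.1 (and of MacQueen's test [13]), the following sketch eliminates, for $\tau \equiv 1$, individual actions that cannot be used by any optimal memoryless rule. The action-by-action form is an adaptation for illustration, not the lemma's literal statement, and it reuses U1 from § 1:

```python
def eliminate_suboptimal_actions(x_lo, x_hi, p, r, K, beta):
    """Given componentwise bounds  x_lo <= U_infinity 0 <= x_hi,  action k is
    suboptimal in state i whenever
        r^k(i) + beta * sum_j p^k_ij * x_hi[j]  <  (U1 x_lo)(i),
    since the left-hand side is an upper bound for the value of choosing k in
    state i (and continuing optimally), while the right-hand side is a lower
    bound for the optimal value in state i."""
    N = len(x_lo)
    u_lo, _ = U1(x_lo, p, r, K, beta)
    suboptimal = {i: [] for i in range(N)}
    for i in range(N):
        for k in K[i]:
            q_hi = r[i][k] + beta * sum(p[i][k][j] * x_hi[j] for j in range(N))
            if q_hi < u_lo[i]:
                suboptimal[i].append(k)
    return suboptimal
```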

Let us now return to the upper and lower bounds.

Lemma 3.2. For $\tau$ transition memoryless, the sequence

$$\bar{v}^\tau_n := v^\tau_n + \frac{\nu_\tau}{1 - \nu_\tau}\,\max_{i \in S}\big(v^\tau_n(i) - v^\tau_{n-1}(i)\big)\cdot e$$

yields a sequence of nonincreasing upper bounds for $U_\infty 0$, and $\lim_{n\to\infty} \bar{v}^\tau_n = U_\infty 0$. Here $e \in \mathbb{R}^N$ with $e(i) = 1$ for $i \in \{1,2,\ldots,N\}$, and $\nu_\tau$ is the contraction radius of $U_\tau$.

Proof. $U_\infty 0 = \lim_{t\to\infty} (L^{D^*}_\tau)^t v^\tau_{n-1}$, where $D^*$ is an optimal decision rule. However,

$$(L^{D^*}_\tau)^t v^\tau_{n-1} = L^{D^*}_\tau v^\tau_{n-1} + \sum_{k=1}^{t-1} \big\{(L^{D^*}_\tau)^{k+1} v^\tau_{n-1} - (L^{D^*}_\tau)^k v^\tau_{n-1}\big\} \le v^\tau_n + \sum_{k=1}^{t-1} (\nu_\tau)^k\, \max_{i\in S}\big(v^\tau_n(i) - v^\tau_{n-1}(i)\big)\cdot e\,,$$

since $L^{D^*}_\tau v^\tau_{n-1} \le U_\tau v^\tau_{n-1} = v^\tau_n$ and since $L^{D^*}_\tau$ is a contraction with radius $\rho^{D^*}_\tau \le \nu_\tau$. Letting $t \to \infty$ yields the upper bound.

Lemma 3.3. For $\tau$ transition memoryless, the sequence $\{\underline{v}^\tau_n\}$ defined by

$$\underline{v}^\tau_n := \max\Big\{\, v^\tau_n + \frac{\eta^{D_n}_\tau}{1 - \eta^{D_n}_\tau}\,\min_{i \in S}\big(v^\tau_n(i) - v^\tau_{n-1}(i)\big)\cdot e\,,\ \underline{v}^\tau_{n-1} \Big\}\,,$$

where $\eta^{D_n}_\tau := \min_{i\in S} E_{D_n}(\beta^\tau \mid x_0 = i)$, yields a nondecreasing sequence of lower bounds for $L^{D_n}_{A_\infty} 0$ and thus for $U_\infty 0$. Furthermore $\lim_{n\to\infty} \underline{v}^\tau_n = U_\infty 0$.

Lemma 3.4. For $\tau$ transition memoryless and $\lambda \in \mathbb{N}$, the sequence $\{\bar{v}^{\lambda\tau}_n\}$ defined by

$$\bar{v}^{\lambda\tau}_1 := v^{\lambda\tau}_0 + \frac{1}{1-\nu_\tau}\,\max_{i\in S}\big((U_\tau v^{\lambda\tau}_0)(i) - v^{\lambda\tau}_0(i)\big)\cdot e\,,$$

$$\bar{v}^{\lambda\tau}_n := \min\Big\{\,\bar{v}^{\lambda\tau}_{n-1}\,,\ v^{\lambda\tau}_{n-1} + \frac{1}{1-\nu_\tau}\,\max_{i\in S}\big((U_\tau v^{\lambda\tau}_{n-1})(i) - v^{\lambda\tau}_{n-1}(i)\big)\cdot e \Big\}\,, \qquad n > 1\,,$$

yields a nonincreasing sequence of upper bounds for $U_\infty 0$, with $\lim_{n\to\infty} \bar{v}^{\lambda\tau}_n = U_\infty 0$.

Lemma 3.5. For $\tau$ transition memoryless and $\lambda \in \mathbb{N}$, the sequence $\{\underline{v}^{\lambda\tau}_n\}$ defined by

$$\underline{v}^{\lambda\tau}_1 := v^{\lambda\tau}_0 + \frac{1}{1-\eta^{D_1}_\tau}\,\min_{i\in S}\big((U_\tau v^{\lambda\tau}_0)(i) - v^{\lambda\tau}_0(i)\big)\cdot e\,,$$

$$\underline{v}^{\lambda\tau}_n := \max\Big\{\,\underline{v}^{\lambda\tau}_{n-1}\,,\ v^{\lambda\tau}_{n-1} + \frac{1}{1-\eta^{D_n}_\tau}\,\min_{i\in S}\big((U_\tau v^{\lambda\tau}_{n-1})(i) - v^{\lambda\tau}_{n-1}(i)\big)\cdot e \Big\}\,, \qquad n > 1\,,$$

yields a nondecreasing sequence of lower bounds for $L^{D_n}_{A_\infty} 0$ and thus for $U_\infty 0$; again we have $\lim_{n\to\infty} \underline{v}^{\lambda\tau}_n = U_\infty 0$.

The proofs of the last three lemmas proceed in a similar way as the proof of Lemma 3.2. For special stopping times see also [3].

Examples.
3.1. For $\tau \equiv k$: $\nu_\tau = \beta^k$ and $\eta^{D_n}_\tau = \beta^k$, independent of $D_n$.
3.2. If $\tau$ corresponds with $\tilde{A}$: $\nu_\tau = \beta$ and $\eta^{D_n}_\tau = \beta^N$, again independent of $D_n$.
3.3. If $\tau$ corresponds with $A_R$:

$$\nu_\tau = \max_{i,k}\ \frac{\beta\,(1 - p^k_{ii})}{1 - \beta\, p^k_{ii}}\,, \qquad \eta^{D_n}_\tau := \min_{i\in S}\ \frac{\beta\,(1 - p^{D_n(i)}_{ii})}{1 - \beta\, p^{D_n(i)}_{ii}}\,.$$

See also [3].
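For $\tau \equiv 1$ (example 3.1 with k = 1, so $\nu_\tau = \eta^{D_n}_\tau = \beta$) the bounds of Lemmas 3.2 and 3.3 take the familiar MacQueen form. A minimal sketch of one bounding step (the general lemmas replace $\beta/(1-\beta)$ by the corresponding ratios in $\nu_\tau$ and $\eta^{D_n}_\tau$):

```python
def macqueen_bounds(v_prev, v_next, beta):
    """Bounds of Lemmas 3.2 and 3.3 specialised to tau = 1: after one step
    v_next = U1(v_prev) of a nondecreasing iteration, componentwise
        lower <= U_infinity 0 <= upper."""
    N = len(v_prev)
    diff = [v_next[i] - v_prev[i] for i in range(N)]
    up = beta / (1.0 - beta) * max(diff)
    lo = beta / (1.0 - beta) * min(diff)
    upper = [v_next[i] + up for i in range(N)]
    lower = [v_next[i] + lo for i in range(N)]
    return lower, upper
```

Combined with the iteration of Lemma 1.3 these bounds give a stopping criterion: terminate as soon as the difference between upper and lower bound is uniformly small.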

§ 4. Extensions and remarks.


The ideas which have been presented in the previous sections may also be used in the case of a semi-Markov decision process (e.g. [5], [6]).

In this paper we only considered pure stopping times. We avoided the use of mixed stopping times in order to keep a clearer view of the basic ideas. However, the introduction of mixing for stopping times produces many more algorithms, and even two already published ones: viz. the policy improvement algorithm of Reetz [2] and a linear programming algorithm (e.g. [5], [6]) with a random choice of the new basic variable from the relevant ones.

In section 2 we introduced policy improvement-value determination procedures characterized by a stopping time $\tau$ and a natural number $\lambda$. For the proofs it is not essential that $\lambda$ is fixed for all iteration steps: the value of $\lambda$ may depend on the number of the iteration and even on specific aspects of the actual iteration process, see also [3].

For numerical experience with a number of the methods treated in this paper we refer to [7].


References

[1] J. MacQueen, A modified dynamic programming method for Markovian decision problems, J. Math. Anal. Appl. 14 (1966) 38-43.

[2] D. Reetz, Solution of a Markovian decision problem by successive overrelaxation, Z. f. Oper. Res. 17 (1973) 29-32.

[3] J.A.E.E. van Nunen, Improved successive approximation methods for discounted Markov decision processes, in these Proceedings.

[4] R.A. Howard, Dynamic programming and Markov processes, MIT Press, Cambridge, 1960.

[5] G.T. de Ghellinck, G.D. Eppen, Linear programming solutions for separable Markovian decision problems, Man. Sci. 13 (1967) 371-394.

[6] J. Wessels, J.A.E.E. van Nunen, Discounted semi-Markov decision processes: linear programming and policy iteration, to appear in Statistica Neerlandica 29 (1975) nr. 1.

[7] J.A.E.E. van Nunen, A set of successive approximation methods for discounted Markovian decision problems, submitted to Z. f. Oper. Res.

[8] N. Hastings, Some notes on dynamic programming and replacement, Oper. Res. Q. 19 (1968) 453-464.

[9] H. Schellhaas, Zur Extrapolation in Markoffschen Entscheidungsmodellen mit Diskontierung, Preprint nr. 84 (1973), Technische Hochschule Darmstadt.

[10] E.V. Denardo, Contraction mappings in the theory underlying dynamic programming, SIAM Review 9 (1967) 165-177.

[11] H. Mine, S. Osaki, Markovian decision processes, New York, 1970.

[12] J. Wessels, Stopping times and Markov programming, submitted to Proceedings of the 1974 EMS meeting and 7th Prague Conference on Information Theory, Statistical Decision Functions, and Random Processes.

[13] J. MacQueen, A test for suboptimal actions in Markovian decision problems, Oper. Res. 15 (1967) 559-561.

[14] E.L. Porteus, Some bounds for discounted sequential decision processes, Man. Sci. 18 (1971) 7-11.

[15] R.C. Grinold, Elimination of suboptimal actions in Markov decision problems, Oper. Res. 21 (1973) 848-851.
