The solution of an undiscounted completely ergodic Markov decision process by successive approximation

Citation for published version (APA):

Wal, van der, J. (1974). The solution of an undiscounted completely ergodic Markov decision process by successive approximation. (Memorandum COSOR; Vol. 7405). Technische Hogeschool Eindhoven.

Document status and date: Published: 01/01/1974

Document version: Publisher's PDF, also known as Version of Record (includes final page, issue and volume numbers).



EINDHOVEN UNIVERSITY OF TECHNOLOGY Department of Mathematics

STATISTICS AND OPERATIONS RESEARCH GROUP

Memorandum COSOR 74-05

The solution of an undiscounted completely ergodic Markov decision process by successive approximation

by

J. van der Wal


Abstract

In this paper we consider a completely ergodic Markov decision process with finite state and decision spaces, using the average return per unit time criterion. An algorithm is derived which approximates the optimal solution. It will be shown that this algorithm is finite and supplies upper and lower bounds for the maximal average return and a near optimal policy with average return between these bounds.

1. Introduction and notations

We will consider a system which at any time t = 1,2,... is in one of the states 1,2,...,N. In each state i there is a finite set $K_i$ of actions which may be chosen. If in state i action $u_i \in K_i$ is selected we receive the expected immediate return $q(u_i)$. For each $j \in S := \{1,2,\ldots,N\}$, $[p(u_i)]_j$ is the probability of making a transition to state j if i is the current state and action $u_i$ has been chosen. With $p(u_i)$ we denote the row-vector $([p(u_i)]_1,\ldots,[p(u_i)]_N)$. A vector $u \in K := K_1 \times \cdots \times K_N$ will be called a policy. A policy prescribes for each state which action will have to be selected. If $u = (u_1,\ldots,u_N)$ then $q(u)$ denotes the column-vector $(q(u_1),\ldots,q(u_N))^T$ and $P(u)$ is the transition probability matrix with $[P(u)]_{ij} = [p(u_i)]_j$.

We assume that for each $u \in K$, $P(u)$ is completely ergodic (i.e. the Markov chain associated with u has a single aperiodic recurrent class and no transient states).

Moreover $g(u)$ and $v(u)$ will be the gain (average return per unit time) and the vector of relative values (with N-th component zero) belonging to policy u (see R.A. HOWARD [2]). If $p \in \mathbb{R}^N$ we will write:

$[p]_j$ for the j-th component of p,

$\bar{p}$, $\underline{p}$ for the largest respectively smallest component of p, and

$\Delta p$ for the difference $\bar{p} - \underline{p}$.
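To make the notation concrete, the following is a minimal sketch in Python with NumPy (the memorandum itself contains no code) of a hypothetical two-state example with two actions per state; the numbers in q and p are illustrative only. For a fixed policy u it builds q(u) and P(u) and computes the gain g(u) and the relative values v(u) (with N-th component zero) by solving the value-determination system stated in STEP 1 of the policy iteration algorithm in section 2.

    import numpy as np

    # Hypothetical 2-state example (N = 2); K_1 and K_2 each contain two actions.
    # q[i][a]: expected immediate return of action a in state i (0-indexed states)
    # p[i][a]: row vector of transition probabilities out of state i under action a
    q = {0: [1.0, 2.0], 1: [0.5, 3.0]}
    p = {0: [np.array([0.9, 0.1]), np.array([0.2, 0.8])],
         1: [np.array([0.6, 0.4]), np.array([0.1, 0.9])]}

    def q_vec(u):
        # Column vector q(u) for a policy u = (u_1, ..., u_N)
        return np.array([q[i][u[i]] for i in range(len(u))])

    def P_mat(u):
        # Transition matrix P(u) with [P(u)]_ij = [p(u_i)]_j
        return np.vstack([p[i][u[i]] for i in range(len(u))])

    def gain_and_relative_values(u):
        # Solve g*e + v = q(u) + P(u)v with [v]_N = 0 (value determination)
        N = len(u)
        P, qu = P_mat(u), q_vec(u)
        A = np.zeros((N + 1, N + 1))   # unknowns: (g, v_1, ..., v_N)
        A[:N, 0] = 1.0                 # coefficient of g
        A[:N, 1:] = np.eye(N) - P      # coefficients of v
        A[N, N] = 1.0                  # normalization [v]_N = 0
        x = np.linalg.solve(A, np.append(qu, 0.0))
        return x[0], x[1:]             # g(u), v(u)

    u = (0, 1)                         # a policy: first action in state 1, second in state 2
    g, v = gain_and_relative_values(u)
    print(g, v)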

In section 2 we will derive our algorithm from the "policy iteration" algorithm of R.A. HOWARD. We will prove that our algorithm produces upper and lower bounds for the maximal gain and near optimal policies.

In section 3 we demonstrate that it might be possible to prove the same for the ergodic case (i.e. the Markov chain associated with u has a single aperiodic recurrent class and might have one or more transient states).

(Throughout, $e$ denotes the column vector $(1,\ldots,1)^T$.)

2. The algorithm

Since our algorithm has been derived from the "Policy Iteration Algorithm" of R.A. HOWARD [2] we rewrite his algorithm below in our notation:

Policy Iteration Algorithm

STEP 0  Select an initial policy $u = (u_1,\ldots,u_N)$.

STEP 1  (Value-Determination Operation)

Solve the system

$g \cdot e + v = q(u) + P(u)v$,  $[v]_N = 0$.

STEP 2  (Policy-Improvement Routine)

Find for all $i \in S$ an action $w_i \in K_i$ which maximizes $q(w_i) + p(w_i)v$.

If for some $i \in S$, $q(w_i) + p(w_i)v \neq q(u_i) + p(u_i)v$, then put $u_i := w_i$ for all $i \in S$ and go to STEP 1.

STOP  The policy u is optimal and g is the maximal average return per unit time.
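As a reading aid, here is a minimal Python/NumPy sketch of the Policy Iteration Algorithm as restated above; the data layout of q and p follows the earlier sketch and is an assumption of the example, not part of the memorandum.

    import numpy as np

    def policy_iteration(q, p, N):
        # q[i][a]: immediate return, p[i][a]: transition row of action a in state i
        u = [0] * N                                    # STEP 0: initial policy
        while True:
            # STEP 1 (Value-Determination): solve g*e + v = q(u) + P(u)v, [v]_N = 0
            P = np.vstack([p[i][u[i]] for i in range(N)])
            qu = np.array([q[i][u[i]] for i in range(N)])
            A = np.zeros((N + 1, N + 1))
            A[:N, 0] = 1.0
            A[:N, 1:] = np.eye(N) - P
            A[N, N] = 1.0
            x = np.linalg.solve(A, np.append(qu, 0.0))
            g, v = x[0], x[1:]
            # STEP 2 (Policy-Improvement): maximize q(w_i) + p(w_i)v in every state
            w = [max(range(len(q[i])), key=lambda a: q[i][a] + p[i][a] @ v)
                 for i in range(N)]
            improved = any(q[i][w[i]] + p[i][w[i]] @ v
                           > q[i][u[i]] + p[i][u[i]] @ v + 1e-12   # float tolerance
                           for i in range(N))
            if not improved:
                return u, g, v        # STOP: u is optimal, g is the maximal gain
            u = w                     # and go to STEP 1

With the hypothetical data of the previous sketch, policy_iteration(q, p, 2) returns an optimal policy together with its gain and relative values.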

We will change this algorithm in the following way.

Instead of solving the system in STEP 1 we will approximate the values of v and g. A.R. ODONI [4] computes a new value of v after each execution of the "Policy-Improvement Routine", but he does not try to improve this approximation. We, however, improve the approximation of v until we know the gain fairly accurately before running through the Policy-Improvement Routine once more.

A similar procedure has been suggested by SPREMANN and GESSNER [5]. Their algorithm, however, does not produce upper and lower bounds. These authors suggest another modification, which we use as well: during the first iterations we will not look for a better action in a state whose (approximated) limiting probability is small, i.e. smaller than $\delta$. As suggested in [5] we take for $\delta$ the sequence $1/2, 1/N, 0, 0, \ldots$. Applying these modifications we obtain the following algorithm:


STEP 0  Select an initial policy u, select $\alpha > 0$ and a monotone non-increasing sequence $\varepsilon_0, \varepsilon_1, \ldots$ with $\varepsilon_j > 0$ for all j and $\lim_{j\to\infty} \varepsilon_j = 0$. For $i \in S$: $[\pi]_i := 1/N$, $[v]_i := 0$; $\delta := 1/2$; $j := 0$; $\mathrm{eps} := \varepsilon_0$.

STEP 1  $\pi := P^T(u)\pi$; $\pi := P^T(u)\pi$.

STEP 2  While $\Delta(q(u) + P(u)v - v) > \mathrm{eps}$ do

$v := q(u) + P(u)v - [q(u) + P(u)v]_N \cdot e$.

STEP 3  Find for all $i \in S$ for which $[\pi]_i \geq \delta$ an action $w_i \in K_i$ which maximizes $q(w_i) + p(w_i)v$.

If $\delta = 0$ and for all $i \in S$

$q(w_i) + p(w_i)v < q(u_i) + p(u_i)v + \alpha$,

go to STOP; else: for all $i \in S$ with $[\pi]_i \geq \delta$ put $u_i := w_i$; $j := j+1$; $\mathrm{eps} := \varepsilon_j$.

STEP 4  If $\delta = 1/2$ and $N > 2$ then $\delta := 1/N$ and go to STEP 1, else $\delta := 0$ and go to STEP 2.

STOP  u is near optimal. Let $u^*$ be optimal; then we have:

(i)  $g(u^*) \leq g(u) + \alpha + \mathrm{eps}$,

(ii)  $\underline{(q(u) + P(u)v - v)} \leq g(u) \leq \underline{(q(u) + P(u)v - v)} + \mathrm{eps}$,

(iii)  $\underline{(q(u) + P(u)v - v)} \leq g(u^*) \leq \underline{(q(u) + P(u)v - v)} + 2\,\mathrm{eps} + \alpha$.

Remark. The introduction of an $\alpha > 0$ is necessary to prevent cycling if there exists more than one optimal policy.
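The following Python/NumPy sketch shows one possible way to implement STEPs 0-4 above; it is not code from the memorandum, and the data layout of q and p, the default choice of $\alpha$ and of the sequence $\varepsilon_0, \varepsilon_1, \ldots$ are assumptions of the example. At STOP it returns the bounds stated above, with g_low the smallest component of $q(u) + P(u)v - v$.

    import numpy as np

    def successive_approximation(q, p, N, alpha=1e-3, eps_seq=lambda j: 0.5 ** j):
        # q[i][a]: immediate return, p[i][a]: transition row of action a in state i.
        # Assumes every P(u) is completely ergodic, as in the memorandum.
        u = [0] * N                         # STEP 0
        pi = np.full(N, 1.0 / N)            # approximate limiting probabilities
        v = np.zeros(N)
        delta, j, eps = 0.5, 0, eps_seq(0)
        while True:
            P = np.vstack([p[i][u[i]] for i in range(N)])
            qu = np.array([q[i][u[i]] for i in range(N)])
            if delta > 0:                   # STEP 1 (not needed once delta = 0)
                pi = P.T @ pi
                pi = P.T @ pi
            # STEP 2: improve v until the gain is known within eps
            while np.ptp(qu + P @ v - v) > eps:      # ptp = largest minus smallest
                w_ = qu + P @ v
                v = w_ - w_[N - 1]                   # subtract [q(u)+P(u)v]_N * e
            # STEP 3: look for better actions where [pi]_i >= delta
            w = list(u)
            for i in range(N):
                if pi[i] >= delta:
                    w[i] = max(range(len(q[i])), key=lambda a: q[i][a] + p[i][a] @ v)
            if delta == 0 and all(q[i][w[i]] + p[i][w[i]] @ v
                                  < q[i][u[i]] + p[i][u[i]] @ v + alpha
                                  for i in range(N)):
                break                                # go to STOP
            u = [w[i] if pi[i] >= delta else u[i] for i in range(N)]
            j += 1
            eps = eps_seq(j)
            # STEP 4: the sequence of delta's is 1/2, 1/N, 0, 0, ...
            delta = 1.0 / N if (delta == 0.5 and N > 2) else 0.0
        # STOP: bounds (i)-(iii), with g_low the smallest component of q(u)+P(u)v-v
        g_low = float(np.min(qu + P @ v - v))
        return u, (g_low, g_low + eps), (g_low, g_low + 2 * eps + alpha)

Once $\delta = 0$ the vector $\pi$ is no longer needed, which is why STEP 1 is skipped from then on; this mirrors the "go to STEP 2" branch of STEP 4.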

To prove the finiteness of our algorithm and the correctness of the estimations at STOP, we will first show that the number of successive iterations within STEP 2 is finite and that the value of v in STEP 2 converges to the vector of relative values for the actual policy.

Suppose we arrive at STEP 2 with a policy u and an initial approximation $v_0(u)$ of $v(u)$, and define


(1)  $v_i(u) = q(u) + P(u)v_{i-1}(u) - [q(u) + P(u)v_{i-1}(u)]_N \cdot e$,  (i = 1,2,...)

(2)  $g_i(u) = q(u) + P(u)v_{i-1}(u) - v_{i-1}(u)$,  (i = 1,2,...)

Obviously $v_i(u)$ is the approximation of $v(u)$ that would be found after improving the approximation $v_0(u)$ i times within STEP 2.

The test for the transition from STEP 2 to STEP 3 is the examination whether or not

(3)  $\Delta g_i(u) \leq \mathrm{eps}$

holds.

Substitution of (1) in (2) yields $g_{i+1}(u) = P(u)g_i(u)$, i = 1,2,.... Hence

(4)  $g_{t+1}(u) = P^t(u)\,g_1(u)$,  t = 0,1,....

Since $P^\infty(u) := \lim_{r\to\infty} P^r(u)$ exists and has identical rows, there exists a number $g^*(u)$ so that

(5)  $\lim_{i\to\infty} g_i(u) = g^*(u) \cdot e$.

For any policy $u \in K$ there exist b and $\rho$ ($0 \leq \rho < 1$) such that (see [1])

(6)  $\bigl|[P^r(u) - P^\infty(u)]_{jk}\bigr| \leq b\rho^r$  for all $j,k \in S$ and r = 0,1,....

Hence we have for all $j,k \in S$ and $x \in \mathbb{R}^N$

(7)  $\bigl|[P^r(u)x]_j - [P^r(u)x]_k\bigr| \leq 2bN\rho^r\,\Delta x$.
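For completeness, a bound of the form (7) can be obtained from (6) as follows, using that the rows of $P^r(u)$ sum to one and that $P^\infty(u)$ has identical rows (a short sketch; the intermediate steps are not spelled out in the memorandum):

\[
\begin{aligned}
[P^r(u)x]_j - [P^r(u)x]_k
  &= \sum_{m\in S}\bigl([P^r(u)]_{jm}-[P^r(u)]_{km}\bigr)x_m
   = \sum_{m\in S}\bigl([P^r(u)]_{jm}-[P^r(u)]_{km}\bigr)(x_m-\underline{x})\\
  &\le \sum_{m\in S}\Bigl(\bigl|[P^r(u)-P^{\infty}(u)]_{jm}\bigr|
       +\bigl|[P^r(u)-P^{\infty}(u)]_{km}\bigr|\Bigr)\Delta x
   \le 2bN\rho^{\,r}\,\Delta x .
\end{aligned}
\]

Interchanging j and k gives the absolute-value bound of (7).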

Now we can formulate:

Lemma 1. For any $u \in K$ and for any initial approximation $v_0(u)$ of $v(u)$, STEP 2 is finite.

Proof.

Repeated application of (1) yields

(8)  $v_i(u) = \{I + P(u) + \cdots + P^{i-1}(u)\}q(u) + P^i(u)v_0(u) - \bigl[\{I + P(u) + \cdots + P^{i-1}(u)\}q(u) + P^i(u)v_0(u)\bigr]_N \cdot e$.

By arranging the terms in (8) in pairs, $q(u)$ and $[q(u)]_N \cdot e$ and so on, and using (7) $\ell+1$ times we get (since $[v_{i+\ell}(u) - v_i(u)]_N = 0$):

(9)  $\bigl|[v_{i+\ell}(u) - v_i(u)]_j\bigr| \leq 2bN\bigl\{\Delta q(u)\,(\rho^i + \rho^{i+1} + \cdots + \rho^{i+\ell-1}) + \Delta v_0(u)\,(\rho^i + \rho^{i+\ell})\bigr\}$.

Now (9) implies that the sequence $v_0(u), v_1(u), \ldots$ converges.

Let $v^*(u) := \lim_{\ell\to\infty} v_\ell(u)$. Then we can formulate

Lemma 2. The limits $v^*(u)$ and $g^*(u)$ are just the vector of relative values $v(u)$ and the gain $g(u)$ belonging to u.

Proof. $v^*(u)$, $g^*(u)$ and $v(u)$, $g(u)$ both solve the system

$g \cdot e + v = q(u) + P(u)v$,  $[v]_N = 0$,

which possesses a unique solution.
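Lemmas 1 and 2 can be checked numerically on a small hypothetical chain (illustrative numbers, not from the memorandum): the STEP 2 iteration (1) converges, and its limit, together with $g_i(u)$ from (2), reproduces the solution of the value-determination system.

    import numpy as np

    # Data of one fixed (hypothetical) policy u: P(u) and q(u).
    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    q = np.array([1.0, 3.0])
    N = 2

    # Exact solution of g*e + v = q + P v, [v]_N = 0.
    A = np.zeros((N + 1, N + 1))
    A[:N, 0] = 1.0
    A[:N, 1:] = np.eye(N) - P
    A[N, N] = 1.0
    x = np.linalg.solve(A, np.append(q, 0.0))
    g_exact, v_exact = x[0], x[1:]

    # Iteration (1)-(2): v_i = q + P v_{i-1} - [q + P v_{i-1}]_N * e.
    v = np.zeros(N)
    for i in range(200):
        w = q + P @ v
        g_i = w - v                # g_i(u) of (2); its spread Delta g_i(u) tends to 0
        v = w - w[N - 1]
    print(np.allclose(v, v_exact), np.allclose(g_i, g_exact))   # True True (Lemma 2)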

Let now $u^0, u^1, \ldots$ be the succession of policies determined by our algorithm, with $u^0$ the selected initial policy, and let the approximation in STEP 2 of $v(u^k)$, with initial value $v_0(u^k)$, require $n_k$ iterations. Now define

(10)  $v_0(u^0) = 0$,  $v_0(u^k) = v_{n_{k-1}}(u^{k-1})$, k = 1,2,...,

and $v_i(u^k)$, $g_i(u^k)$, i = 1,2,...; k = 0,1,... according to (1) and (2).


If the algorithm did not terminate after completing STEP 3, while we already have $\delta = 0$, then a policy $u^k$ has just been improved to a policy $u^{k+1}$. We have

$q(u^{k+1}) + P(u^{k+1})v_0(u^{k+1}) = q(u^k) + P(u^k)v_0(u^{k+1}) + d_{k+1}$,

where $d_{k+1} \geq 0$ and at least one component of $d_{k+1}$ is at least $\alpha$. Hence

$g_1(u^{k+1}) = g_{n_k+1}(u^k) + d_{k+1}$,

which implies, by (4) and (5),

$g(u^{k+1}) \cdot e = P^\infty(u^{k+1})\bigl(g_{n_k+1}(u^k) + d_{k+1}\bigr)$.

Let $\beta$ be the smallest element of all $P^\infty(u)$, $u \in K$. Since all $P(u)$ are completely ergodic we have $\beta > 0$. Defining

$\gamma := \alpha\beta$

we have (note that $\underline{g}_i(u) \leq g(u) \leq \bar{g}_i(u)$ for every i, since $g(u) \cdot e = P^\infty(u)g_i(u)$ and $P^\infty(u)$ is a stochastic matrix, and that $\Delta g_{n_k+1}(u^k) \leq \varepsilon_k$ at the end of STEP 2)

$g(u^{k+1}) \geq \underline{g}_{n_k+1}(u^k) + \gamma \geq g(u^k) + \gamma - \varepsilon_k$.

Now we have

Lemma 3. If k is sufficiently large (so that $\varepsilon_k < \gamma$) then a once improved policy $u^k$ cannot be found again.

Lemma 4. If for each $u \in K$ the Markov chain with matrix $P(u)$ is completely ergodic then the algorithm is finite.

Proof. From Lemma 1, Lemma 3 and the existence of only a finite number of policies.

If $u^*$ is the optimal policy and the algorithm terminates with a policy $u^k$ then we have

(11)  $q(u^*) + P(u^*)v_{n_k}(u^k) \leq q(u^k) + P(u^k)v_{n_k}(u^k) + \alpha \cdot e$

and

(12)  $g(u^*) \cdot e = P^\infty(u^*)\bigl(q(u^*) + P(u^*)v_{n_k}(u^k) - v_{n_k}(u^k)\bigr) \leq P^\infty(u^*)\bigl(g_{n_k+1}(u^k) + \alpha \cdot e\bigr)$.

So we have

Lemma 5. If the algorithm terminates with a policy $u^k$, while $u^*$ is an optimal policy, then

(i)  $g(u^*) - g(u^k) \leq \alpha + \varepsilon_k$,

(ii)  $\underline{g}_{n_k+1}(u^k) \leq g(u^k) \leq \underline{g}_{n_k+1}(u^k) + \varepsilon_k$.

Proof. From (11) and (12) together with $\underline{g}_{n_k+1}(u^k) \leq g(u^k) \leq \bar{g}_{n_k+1}(u^k) \leq \underline{g}_{n_k+1}(u^k) + \varepsilon_k$.

Theorem 1. If for all $u \in K$ the Markov chain with matrix $P(u)$ is completely ergodic then the algorithm is finite. For the final approximation $u^k$ of $u^*$ the following estimates hold:

(i)  $g(u^*) - g(u^k) \leq \alpha + \varepsilon_k$,

(ii)  $\underline{g}_{n_k+1}(u^k) \leq g(u^k) \leq \underline{g}_{n_k+1}(u^k) + \varepsilon_k$,

(iii)  $\underline{g}_{n_k+1}(u^k) \leq g(u^*) \leq \underline{g}_{n_k+1}(u^k) + 2\varepsilon_k + \alpha$.

Proof. The finiteness follows by Lemma 4; (i) and (ii) follow by Lemma 5; (i) and (ii) imply (iii).

Remark 2. It is possible to prevent termination of the algorithm while $\varepsilon_k$ is still large, e.g. $\varepsilon_k$ ...

3. The ergodic case

In the foregoing we proved our algorithm to be finite for completely ergodic decision processes. We believe that the algorithm is also finite if for each policy u the transition probability matrix $P(u)$ is ergodic (which means that for each policy u the set of states is divided into a set of transient states and one aperiodic recurrent class). It might however be necessary to modify STEP 3 of the algorithm, i.e. to put

"if $[\pi]_i \geq \delta$ then: if $q(w_i) + p(w_i)v \geq q(u_i) + p(u_i)v + \alpha$, put $u_i := w_i$"

instead of

"if $[\pi]_i \geq \delta$ then $u_i := w_i$".

This modification enabled us to prove finiteness in the case that for each policy the recurrent class consists of the same N-1 states.
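In code, the proposed modification only adds the $\alpha$-threshold to the acceptance test of STEP 3; a minimal sketch, with variable names taken from the earlier sketches (illustrative, not from the memorandum):

    def accept_improvement(i, u, w, v, pi, q, p, delta, alpha):
        # Modified STEP 3 acceptance for the ergodic case: accept w[i] only if it
        # improves the test quantity by at least alpha (p[i][a] are NumPy rows).
        if pi[i] >= delta:
            if q[i][w[i]] + p[i][w[i]] @ v >= q[i][u[i]] + p[i][u[i]] @ v + alpha:
                u[i] = w[i]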

Lemma 6. If $P(u)$ is ergodic then the system

$g \cdot e + v = P(u)v + q(u)$,  $[v]_N = 0$

possesses a unique solution (in g and v).

Proof. The rank of $I - P(u)$ is N-1 (see [3]), and if v, g solve $(I - P(u))v = q(u) - g \cdot e$ then so do $v + a \cdot e$, g. Hence the normalization $[v]_N = 0$ makes the system of full rank, so the solution is unique.

Let now for all $u \in K$ $P(u)$ be ergodic and the recurrent class consist of the same N-1 states; then we have the following lemmas.

Lemma 7. If k is sufficiently large and the policy $u^k$ is improved in one of the recurrent states then the algorithm will not generate $u^k$ once more.

Proof. It is obvious that the modification of STEP 3 does not influence any of the proofs in the preceding section. Now let j be the transient state and $S^* := S \setminus \{j\}$. Analogously to section 2 we have

$g(u^{k+1}) \geq \min_{i \in S^*} [g_{n_k+1}(u^k)]_i + \gamma' \geq g(u^k) + \gamma' - \varepsilon_k$,

with $\gamma' := \alpha\beta'$, where $\beta'$ is the smallest element of all $P^\infty(u)$, $u \in K$, not belonging to the j-th row or column. Again we have $\gamma' > 0$, so if k is sufficiently large we have $g(u^p) > g(u^k)$ for all p > k.

Lemma 8. A policy $u \in K$ can be improved but a finite number of times in succession in the transient state only.

Proof. Let state j ($j \neq N$) be the only transient state. From (1) and (2) we have

$g_{i+1}(u^k) = q(u^k) + P(u^k)v_i(u^k) - v_i(u^k)$

and therefore

$g_{i+1}(u^k) - [g_{i+1}(u^k)]_N \cdot e = v_{i+1}(u^k) - v_i(u^k)$.

If now $\alpha > \varepsilon_k$ and $u^k$ is improved in the transient state only, then $[g_1(u^{k+1})]_j \geq [g_{n_k+1}(u^k)]_j + \alpha$ while $[g_1(u^{k+1})]_i = [g_{n_k+1}(u^k)]_i$ for $i \neq j$. While $\Delta g_i(u^{k+1}) > \varepsilon_k$ we then have

$[v_i(u^{k+1})]_j - [v_{i-1}(u^{k+1})]_j > 0$.

The approximation of $g(u^{k+1})$ and $v(u^{k+1})$ proceeds until $\Delta g_i(u^{k+1}) \leq \varepsilon_{k+1}$. From Lemma 1 we have $\Delta g_{t+i}(u) \leq 2bN\rho^i\,\Delta g_t(u)$, so for k sufficiently large, say $k \geq k_0$, these iterations can decrease $[v_i(u^{k+1})]_j$ only by an amount small compared with $\alpha$, and each improvement in the transient state only therefore results in a net increase of $[v(u^k)]_j$ that is bounded away from zero. Since $v(u)$ is uniformly bounded for $u \in K$, a policy $u^k$, $k \geq k_0$, can be improved but a finite number of times in the transient state only.

If N is the transient state and $k \geq k_0$, then each improvement in state N only results in a decrease, bounded away from zero, of the components $[v(u^k)]_j$, $j \in S \setminus \{N\}$, and the same argument applies.

From Lemmas 6, 7 and 8 we now conclude:

Theorem 2. If for each policy u the Markov chain with matrix $P(u)$ is ergodic and the ergodic class consists of the same N-1 states then the modified algorithm is finite.

References

[1] J.L. Doob, Stochastic Processes. John Wiley & Sons, New York, 1953, p. 173.

[2] R.A. Howard, Dynamic Programming and Markov Processes. M.I.T. Press, Cambridge, 5th printing, 1969, pp. 32-43.

[3] H. Mine and S. Osaki, Markovian Decision Processes. American Elsevier, New York, 1970, pp. 25-26.

[4] A.R. Odoni, On finding the gain for Markov decision processes. Operations Research 17 (1969), pp. 857-860.

[5] K. Spremann and P. Gessner, Bewertete Markovprozesse im stationären Zustand. Ein neuer Algorithmus mit Beispiel. Discussion paper nr. 18, Institut für Wirtschaftstheorie und Operations Research der Universität Karlsruhe, July 1973.
