
A policy improvement-value approximation algorithm for the ergodic average reward Markov decision process

Citation for published version (APA):

Wal, van der, J. (1978). A policy improvement-value approximation algorithm for the ergodic average reward Markov decision process. (Memorandum COSOR; Vol. 7827). Technische Hogeschool Eindhoven.

Document status and date: Published: 01/01/1978

Document Version:

Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers)

Please check the document version of this publication:

• A submitted manuscript is the version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website.

• The final author version and the galley proof are versions of the publication after peer review.

• The final published version features the final layout of the paper including the volume, issue and page numbers.

Link to publication

General rights

Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.

• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain.

• You may freely distribute the URL identifying the publication in the public portal.

If the publication is distributed under the terms of Article 25fa of the Dutch Copyright Act, indicated by the "Taverne" license above, please follow the link below for the End User Agreement: www.tue.nl/taverne

Take down policy

If you believe that this document breaches copyright please contact us at openaccess@tue.nl, providing details, and we will investigate your claim.


Department of Mathematics

PROBABILITY THEORY, STATISTICS AND OPERATIONS RESEARCH GROUP

Memorandum COSOR 78-27

A policy improvement-value approximation algorithm for the ergodic average reward Markov decision process

by

J. van der Wal

Eindhoven, December 1978


A policy improvement-value approximation algorithm for the ergodic average reward Markov decision process

by

J. van der Wal

Abstract. This paper presents a policy improvement-value approximation algorithm for the average reward Markov decision process when all transition matrices are unichained. In contrast with Howard's algorithm we do not solve for the exact gain and relative value vector but only approximate them. It is shown that the value approximation algorithm produces a nearly optimal strategy. This paper extends the results of a previous paper in which transient states were not allowed. Also the algorithm is slightly different.

1. Introduction and notations

In this paper we consider the average reward Markov decision process (MDP) with finite state and action spaces. In [4] we introduced a value approximation algorithm for the case that all chains are completely ergodic (i.e. only one recurrent subchain and no transient states). This value approximation algorithm is a variant of Howard's policy iteration algorithm [2]. The difference is that we do not solve for the exact gain and relative value vector of the actual policy but only approximate them. For our convergence proof, however, these approximations have to be sufficiently accurate. Here we extend these results to the case that all possible transition matrices are ergodic, i.e. have only one recurrent subchain but there may be transient states. Also the convergence proof we give here is more transparent.

So we are concerned with an MDP with finite state space $S := \{1,2,\ldots,N\}$ and finite action space $A$. If in state $i \in S$ action $a \in A$ is taken, then the immediate reward is $r(i,a)$ and the process moves to state $j$ with probability $p_{ij}^a$.

A policy or stationary strategy is a map $f : S \to A$. With a policy $f$ we associate the immediate reward vector $r_f$ and the transition matrix $P_f$:

$$r_f(i) := r(i,f(i)), \qquad i \in S,$$
$$P_f(i,j) := p_{ij}^{f(i)}, \qquad i,j \in S.$$

We will assume that all matrices $P_f$ are aperiodic (this gives no loss of generality, as one may use Schweitzer's data transformation $\tilde r_f := r_f$, $\tilde P_f := \alpha I + (1 - \alpha)P_f$, $0 < \alpha < 1$, to transform any MDP into an equivalent aperiodic one, see [3]).
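As an illustration (not part of the memorandum; the function name and array conventions are mine), a minimal Python sketch of this data transformation, assuming the transition matrix and reward vector of a policy are given as NumPy arrays:

```python
import numpy as np

def schweitzer_transform(P_f, r_f, alpha=0.5):
    """Schweitzer's data transformation: P~_f = alpha*I + (1-alpha)*P_f, r~_f = r_f.

    The transformed transition matrix is aperiodic, and every policy keeps the
    same gain, so an epsilon-optimal policy for the transformed MDP is also
    epsilon-optimal for the original one.
    """
    N = P_f.shape[0]
    P_tilde = alpha * np.eye(N) + (1.0 - alpha) * P_f
    return P_tilde, r_f.copy()
```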

Let $g_f$ be the gain and $v_f$ be the relative value vector of a policy $f$. Then, at least in the unichain case we consider here, $g_f, v_f$ uniquely solve

(1.1) $\quad g_f e + v_f = r_f + P_f v_f$,

(1.2) $\quad P_f^* v_f = 0$,

where $P_f^*$ is defined by

$$P_f^* := \lim_{T \to \infty} T^{-1} \sum_{n=0}^{T-1} P_f^n \qquad (e := (1,\ldots,1)^T).$$

From the aperiodicity assumption we also have

(1.3) $\quad P_f^* = \lim_{n \to \infty} P_f^n.$
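For comparison, a minimal Python sketch (not from the memorandum; names and array conventions are mine) of the exact solution of (1.1)-(1.2) for a single unichain policy. Howard's policy iteration performs this exact evaluation in every iteration; the value approximation step introduced below avoids it:

```python
import numpy as np

def evaluate_policy(P_f, r_f):
    """Solve (1.1)-(1.2) exactly for a unichain, aperiodic policy f.

    Returns the gain g_f and the relative value vector v_f with P_f^* v_f = 0.
    """
    N = P_f.shape[0]
    # Stationary distribution pi of P_f: pi P_f = pi, pi e = 1.
    A = np.vstack([P_f.T - np.eye(N), np.ones((1, N))])
    b = np.concatenate([np.zeros(N), [1.0]])
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    g = float(pi @ r_f)                     # in the unichain case P_f^* r_f = (pi r_f) e
    # (1.1): (I - P_f) v = r_f - g e, together with (1.2): pi v = 0.
    B = np.vstack([np.eye(N) - P_f, pi[None, :]])
    c = np.concatenate([r_f - g, [0.0]])
    v = np.linalg.lstsq(B, c, rcond=None)[0]
    return g, v
```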

Further we have

$$P_f^* P_f = P_f P_f^* = P_f^* P_f^* = P_f^*, \qquad g_f e = P_f^* r_f.$$

Let $g^*$ be the gain of the MDP, $g^* := \max_f g_f$. We are interested in finding an $\varepsilon$-optimal policy, i.e. a policy $f$ satisfying

$$g_f \geq g^* - \varepsilon.$$

In order to determine such an $\varepsilon$-optimal policy we propose the following approximated version of Howard's policy iteration algorithm (which we already presented in [4] in a slightly different form).

Policy improvement-value approximation algorithm

The main iteration step reads as follows. (The constants $\alpha$ and $\varepsilon$ occurring in this iteration step are chosen beforehand.)

Value approximation. Let $f$ be the current policy, then determine $v_t := r_f + P_f v_{t-1}$, $t = 1,2,\ldots$, until

(1.4) $\quad \mathrm{sp}(v_t - v_{t-1}) \leq \varepsilon$,

where $v_0$ is obtained in the previous iteration step and $\mathrm{sp}(v)$ denotes the span of a vector $v$:

$$\mathrm{sp}(v) := \max_i v(i) - \min_i v(i).$$

Policy improvement. Let $n$ be the first index for which (1.4) holds. Then define $\gamma$ by

(1.5) $\quad \gamma := \max_h (r_h + P_h v_n) - (r_f + P_f v_n),$

where the maximum is taken componentwise over all policies $h$. Clearly $\gamma \geq 0$. Now we distinguish two cases.

(i) $\gamma \leq \alpha e$. Then $f$ is nearly optimal:

(1.6) $\quad g_f \geq g^* - \alpha - \varepsilon$

(as we will prove later on).

(ii) $\gamma(i) > \alpha$ for some $i \in S$. Then replace $f$ by a policy $h$ with $h(i) = f(i)$ if $\gamma(i) \leq \alpha$, and $h(i)$ an action attaining the componentwise maximum in (1.5) if $\gamma(i) > \alpha$.

Note that for the policy $h$ determined according to (ii) we now have

(1.7) $\quad r_h + P_h v_n = r_f + P_f v_n + \zeta$

with $\zeta(i) = 0$ if $\gamma(i) \leq \alpha$ and $\zeta(i) = \gamma(i)$ if $\gamma(i) > \alpha$.
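To make the iteration step concrete, here is a minimal Python sketch (not from the memorandum; the array conventions r[i, a] and P[a, i, j] and all names are mine). It performs one value approximation pass followed by the improvement test with threshold alpha:

```python
import numpy as np

def span(v):
    """sp(v) = max_i v(i) - min_i v(i)."""
    return float(np.max(v) - np.min(v))

def iteration_step(P, r, f, v0, alpha, eps):
    """One step of the policy improvement-value approximation algorithm.

    P[a, i, j] : transition probabilities, r[i, a] : immediate rewards,
    f          : current policy (one action index per state),
    v0         : the vector v_0 obtained in the previous iteration step.
    Returns (new policy, v_n); the new policy is None in case (i).
    """
    N = r.shape[0]
    P_f = P[f, np.arange(N), :]              # row i equals p^{f(i)}_{i.}
    r_f = r[np.arange(N), f]
    # Value approximation: v_t = r_f + P_f v_{t-1} until sp(v_t - v_{t-1}) <= eps.
    v_prev, v = v0, r_f + P_f @ v0
    while span(v - v_prev) > eps:
        v_prev, v = v, r_f + P_f @ v
    # Policy improvement: gamma = max_a (r + P v_n) - (r_f + P_f v_n), componentwise.
    Q = r + np.einsum('aij,j->ia', P, v)     # Q[i, a] = r(i,a) + sum_j p^a_{ij} v_n(j)
    gamma = Q.max(axis=1) - (r_f + P_f @ v)
    if np.all(gamma <= alpha):
        return None, v                       # case (i): f is nearly optimal, cf. (1.6)
    h = np.where(gamma > alpha, Q.argmax(axis=1), f)   # case (ii)
    return h, v
```

In case (ii) the returned vector $v_n$ is used as $v_0$ for the next iteration step, as the algorithm prescribes.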

In the remainder we prove that this algorithm converges. In section 2 we derive some preliminary results. In section 3 we prove the correctness of formula (1.6). In the sections 4 and 5 we prove that if $\varepsilon$ is sufficiently small compared to $\alpha$ then a replacement of a policy $f$ by a policy $h$ is indeed an improvement. In order to prove this we will distinguish two cases.

A: $\zeta(i) > \alpha$ for some state $i$ which is recurrent under $P_h$. Then, if $\alpha/\varepsilon$ is sufficiently large, $g_h > g_f$, which will be proved in section 4.

B: $\zeta(i) = 0$ (hence $h(i) = f(i)$) in all states which are recurrent under $P_h$. Then $g_h = g_f$, $v_h \geq v_f$, and if $\zeta(i) > \alpha$ then $v_h(i) > v_f(i)$, provided that $\alpha/\varepsilon$ is sufficiently large. This we prove in section 5.

In section 6 we collect some concluding remarks.

2. Some preliminary results

In this section we collect some preliminary results needed for the proofs in the following sections.

First we give a stronger result than the mere convergence of $P_f^n$ to $P_f^*$ formulated in (1.3).

Lemma 2.1. There exist constants $b$ and $0 \leq \rho < 1$ such that

$$|P_f^n(i,j) - P_f^*(i,j)| \leq b \rho^n$$

for all $i, j \in S$, all policies $f$ and all $n \in \mathbb{N}$.

A proof of this basic result can be found for example in Doob [1]. For notational convenience we use the operator $L_f$ on $\mathbb{R}^N$, defined by

$$L_f v := r_f + P_f v.$$

In the following lemma we give some results concerning the behaviour of the approximations $v_t$ in the value approximation step of the algorithm.

Lemma 2.2. For any $v \in \mathbb{R}^N$

(i) $\quad P_f^*(L_f v - v) = g_f e$;

(ii) $\quad \min_i (L_f v - v)(i) \leq g_f \leq \max_i (L_f v - v)(i)$;

(iii) $\quad L_f^{n+1} v - L_f^n v = P_f^n (L_f v - v)$;

(iv) $\quad \min_i (L_f v - v)(i) \leq \min_i (L_f^{n+1} v - L_f^n v)(i) \leq g_f \leq \max_i (L_f^{n+1} v - L_f^n v)(i) \leq \max_i (L_f v - v)(i)$.

Proof.

(i) Immediate from $P_f^* P_f = P_f^*$ and $g_f e = P_f^* r_f$.

(ii) Immediate from (i), as $L_f v - v$ must have components at least equal to $g_f$ as well as components at most equal to $g_f$.

(iii) $L_f^{n+1} v - L_f^n v = L_f L_f^n v - L_f L_f^{n-1} v = P_f(L_f^n v - L_f^{n-1} v) = \cdots = P_f^n(L_f v - v)$.

(iv) From (ii) and (iii). □

From these two lemmas it is clear that $L_f^{n+1} v - L_f^n v$ converges to $g_f e$ exponentially fast. So we see that the differences $v_t - v_{t-1}$ in the value approximation step converge to $g_f e$ exponentially fast. Hence this step (provided that $\varepsilon > 0$) terminates.
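As an aside (not in the memorandum), lemma 2.2(ii) yields computable lower and upper bounds on $g_f$ from any iterate of the value approximation step; a sketch under the same array conventions as the previous code fragment:

```python
def gain_bounds(P_f, r_f, v):
    """Bounds on the gain g_f from one application of L_f (lemma 2.2(ii)).

    min_i (L_f v - v)(i) <= g_f <= max_i (L_f v - v)(i); once sp(L_f v - v)
    is at most eps, these bounds pin g_f down to within eps.
    """
    d = r_f + P_f @ v - v
    return float(d.min()), float(d.max())
```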

In the policy improvement step the most important role, however, is played by the term $v_n = L_f^n v_0$. In Howard's algorithm we there have $v_f$. But this difference is not so large, as we have

Lemma 2.3. For any $v \in \mathbb{R}^N$

$$L_f^n v - n g_f e \to v_f + P_f^* v \qquad \text{as } n \to \infty,$$

so the difference between $L_f^n v$ and $v_f$ becomes constant.

Proof. Clearly $L_f^n v - L_f^n w = P_f^n(v - w)$, and as $v_f$ satisfies (1.1), $L_f^n v_f = v_f + n g_f e$. So

$$L_f^n v - n g_f e = v_f + P_f^n(v - v_f) \to v_f + P_f^*(v - v_f) = v_f + P_f^* v \qquad \text{as } n \to \infty,$$

where we used that $v_f$ satisfies (1.2). □

And as we see, adding a constant to the vector $v_n$ in (1.5) does not change $\gamma$.

3. The correctness of formula (1.6)

In this section we prove that if the algorithm terminates, then the policy we find is $(\alpha + \varepsilon)$-optimal.

Lemma 3.1. If $\mathrm{sp}(L_f v - v) \leq \varepsilon$ and $\max_h L_h v - L_f v \leq \alpha e$, then $g_f \geq g^* - \alpha - \varepsilon$.

Proof. From $\mathrm{sp}(L_f v - v) \leq \varepsilon$ and lemma 2.2(ii) we have

$$L_f v - v \leq (g_f + \varepsilon) e.$$

So, for every policy $h$,

$$L_h v - v \leq L_f v - v + \alpha e \leq (g_f + \varepsilon + \alpha) e.$$

Premultiplying this with $P_h^*$ and using lemma 2.2(i) we get

$$g_h e \leq (g_f + \varepsilon + \alpha) e. \qquad \square$$

From this we get

Corollary 3.1. If $f$ is the current policy in the algorithm, $\mathrm{sp}(v_n - v_{n-1}) \leq \varepsilon$ and $\max_h L_h v_n - L_f v_n \leq \alpha e$, then $g_f \geq g^* - \alpha - \varepsilon$, i.e. $f$ satisfies (1.6).

Proof. Almost immediate from lemma 3.1. We only have to observe that $\mathrm{sp}(v_n - v_{n-1}) \leq \varepsilon$ implies $\mathrm{sp}(L_f v_n - v_n) = \mathrm{sp}(L_f v_n - L_f v_{n-1}) \leq \varepsilon$. □

4. Case A: improvement in a recurrent state

In this section we show that, if $h$ is an improvement of $f$ according to case A, then $h$ has a higher gain than $f$, provided that $\varepsilon$ is sufficiently small compared to $\alpha$.

First define $\theta$ by

(4.1) $\quad \theta := \min_f \min \{\, P_f^*(i,j) \mid i \in S,\ j \text{ recurrent under } P_f \,\}.$

Clearly $\theta > 0$. And we get

Theorem 4.1. If $f$ is replaced by $h$ and $\zeta(i) > \alpha$ for some $i$ which is recurrent under $h$, and if $\varepsilon/\alpha \leq \theta$, then $g_h > g_f$.

Proof. Using $\mathrm{sp}(L_f v_n - v_n) \leq \varepsilon$ and lemma 2.2(ii),

$$L_h v_n - v_n = L_f v_n - v_n + \zeta \geq (g_f - \varepsilon)e + \zeta.$$

Multiplying both sides with $P_h^*$ we get

$$g_h e \geq (g_f - \varepsilon)e + P_h^* \zeta.$$

Now $\zeta(i) > \alpha$ for some recurrent state under $h$, so $P_h^* \zeta > \alpha\theta e$. Hence, if $\alpha\theta \geq \varepsilon$, we have $g_h > g_f$. □

5. Case B: improvement in transient states only

Case B is the more difficult one. If $f$ and $h$ are equal on the set $\mathrm{Rec}(h)$ of recurrent states under $P_h$, then we must have $g_f = g_h$. So, as in the standard policy iteration algorithm, we have to investigate the relative value vectors.

We will show in this section that if $\varepsilon/\alpha$ is sufficiently small then a 'case B type' improvement gives a policy $h$ with a higher relative value vector, $v_h > v_f$, where we write $v > w$ if $v \geq w$ and $v \neq w$. First we will derive some lemmas.

Lemma 5.1. If $L_f v = v + g_f e + \delta$, then

(5.1) $\quad L_f^n v = v + n g_f e + \sum_{t=0}^{n-1} (P_f^t - P_f^*)\,\delta.$

Proof. We can rewrite $L_f^n v$ in the form

$$L_f^n v = v + \sum_{t=1}^{n} (L_f^t v - L_f^{t-1} v),$$

or, with lemma 2.2(iii),

$$L_f^n v = v + \sum_{t=1}^{n} P_f^{t-1}(L_f v - v) = v + \sum_{t=1}^{n} P_f^{t-1}(g_f e + \delta) = v + n g_f e + \sum_{t=0}^{n-1} P_f^t \delta.$$

Also, by lemma 2.2(i), we have $P_f^* \delta = 0$. Therefore

$$L_f^n v = v + n g_f e + \sum_{t=0}^{n-1} (P_f^t - P_f^*)\,\delta. \qquad \square$$

And from this lemma we get

Lemma 5.2. If $L_f v = v + g_f e + \delta$ and $L_h v = L_f v + \zeta$ while $h(i) = f(i)$ for all $i \in \mathrm{Rec}(h)$, then

(5.2) $\quad L_h^n v - L_f^n v = \sum_{t=0}^{n-1} (P_h^t - P_h^*)\,\delta - \sum_{t=0}^{n-1} (P_f^t - P_f^*)\,\delta + \sum_{t=0}^{n-1} P_h^t \zeta.$

Proof. We have

$$L_h v = v + g_f e + \delta + \zeta,$$

and since $f = h$ on $\mathrm{Rec}(h)$ also $P_f^* = P_h^*$ and hence $P_h^* \delta = 0$. The approach of the proof of lemma 5.1 now yields

$$L_h^n v = v + n g_f e + \sum_{t=0}^{n-1} P_h^t (\delta + \zeta) = v + n g_f e + \sum_{t=0}^{n-1} (P_h^t - P_h^*)\,\delta + \sum_{t=0}^{n-1} P_h^t \zeta.$$

If we subtract equation (5.1) from this result we get (5.2), which completes the proof. □

The reason why we are interested in the difference $L_h^n v - L_f^n v$ becomes clear from the following lemma.

Lemma 5.3. Let $f$ and $h$ be two policies with $f = h$ on $\mathrm{Rec}(h)$. Then $g_f = g_h$ and we have for any $v \in \mathbb{R}^N$

$$L_h^n v - L_f^n v \to v_h - v_f \qquad (n \to \infty).$$

Proof. From $f = h$ on $\mathrm{Rec}(h)$ we have $g_f = g_h$ and $P_f^* = P_h^*$. So we get with lemma 2.3

$$L_h^n v - L_f^n v = (L_h^n v - n g_h e) - (L_f^n v - n g_f e) \to (v_h + P_h^* v) - (v_f + P_f^* v) = v_h - v_f \qquad (n \to \infty). \qquad \square$$

From the lemmas 5.2 and 5.3 we get the following important corollary.

Corollary 5.1. Suppose $\mathrm{sp}(L_f v - v) \leq \varepsilon$ and $h$ is a policy with $h = f$ on $\mathrm{Rec}(h)$ and $L_h v = L_f v + \zeta$ for some $\zeta \geq 0$. If for component $i$ we have $\zeta(i) > 2Nb\varepsilon/(1 - \rho)$, then $v_h(i) > v_f(i)$.

Proof. From $\mathrm{sp}(L_f v - v) \leq \varepsilon$ we have the existence of a vector $\delta$, with $|\delta| \leq \varepsilon e$, such that $L_f v = v + g_f e + \delta$. Now consider (5.2). By lemma 2.1 we have

$$\Big|\sum_{t=0}^{n-1} (P_h^t - P_h^*)\,\delta\Big| \leq \sum_{t=0}^{n-1} |P_h^t - P_h^*|\,|\delta| \leq Nb\varepsilon\,\frac{1 - \rho^n}{1 - \rho}\,e \leq \frac{Nb\varepsilon}{1 - \rho}\,e,$$

and similarly with $h$ replaced by $f$. So we get for all $n$

$$L_h^n v - L_f^n v \geq -\frac{2Nb\varepsilon}{1 - \rho}\,e + \sum_{t=0}^{n-1} P_h^t \zeta \geq -\frac{2Nb\varepsilon}{1 - \rho}\,e + \zeta.$$

So, if for component $i$ we have $\zeta(i) > 2Nb\varepsilon/(1 - \rho)$, then

$$\lim_{n \to \infty} (L_h^n v - L_f^n v)(i) > 0,$$

and hence, by lemma 5.3, $v_h(i) > v_f(i)$. □

We now have that if $h$ is a case B type improvement of $f$ then $g_f = g_h$ and, as $f$ and $h$ are equal on $\mathrm{Rec}(h)$, also $v_f = v_h$ on $\mathrm{Rec}(h)$. From corollary 5.1 we know that if $h(i) \neq f(i)$ (then also $\zeta(i) > \alpha$) and if $\alpha/\varepsilon$ is sufficiently large, then $v_h(i) > v_f(i)$. What remains to be shown (in order to establish $v_h > v_f$) is that also $v_h(i) \geq v_f(i)$ in all transient states under $P_h$ in which $h(i) = f(i)$.

Theorem 5.1. If $\mathrm{sp}(L_f v - v) \leq \varepsilon$ and $h$ is a policy with $h = f$ on $\mathrm{Rec}(h)$ and $L_h v = L_f v + \zeta$ with $\zeta \geq 0$, such that for at least one $i$ we have $\zeta(i) > \alpha$ and for all $i \in S$ either $h(i) = f(i)$ or $\zeta(i) > \alpha$, with $\alpha > 2Nb\varepsilon/(1 - \rho)$, then $v_h > v_f$.

Proof. We already have $v_h = v_f$ on $\mathrm{Rec}(h)$ and, by corollary 5.1, $v_h(i) > v_f(i)$ if $h(i) \neq f(i)$. Let $I$ be the set of states where $h(i) = f(i)$. For $i \in I$ the evaluation equations (1.1) for $f$ and for $h$ read

$$g_f + v_f(i) = r(i,f(i)) + \sum_j p_{ij}^{f(i)} v_f(j), \qquad g_h + v_h(i) = r(i,h(i)) + \sum_j p_{ij}^{h(i)} v_h(j).$$

Subtracting the first of these two equations from the second we get for a state $i \in I$ (as $r_f(i) = r_h(i)$, $p_{ij}^{f(i)} = p_{ij}^{h(i)}$ and $g_f = g_h$)

(5.3) $\quad (v_h - v_f)(i) = \sum_j p_{ij}^{h(i)} (v_h - v_f)(j).$

Suppose that $\min_{i \in I}(v_h - v_f)(i) < 0$ and define

$$J := \{\, j \in I \mid (v_h - v_f)(j) = \min_{i \in I} (v_h - v_f)(i) \,\}.$$

On $S \setminus I$ we have $v_h \geq v_f$, hence we see from (5.3) that the set $J$ must be closed under $P_h$. But this contradicts the fact that $v_h = v_f$ on $\mathrm{Rec}(h)$. Hence $v_h \geq v_f$ on $I$, thus on $S$. And as there is at least one $i \in S$ for which $v_h(i) > v_f(i)$, we get $v_h > v_f$. □

6. Concluding remarks

As we have seen in the preceding sections the iteration step formulated in section 1 yields an improved policy: either one with a higher gain (section 4) or, with equal gains, one with a higher relative value vector (section 5), provided of course that $\varepsilon$ is sufficiently small compared to $\alpha$. There are two ways in which we can try to construct a convergent algorithm from this iteration step.

(i) Select $\varepsilon > 0$ sufficiently small. In order to do this we would have to know the constants $b$, $\rho$ and $\theta$ appearing in lemma 2.1 and formula (4.1), which makes this approach practically infeasible (we would need $\varepsilon \leq \min(\alpha\theta,\ \alpha(1 - \rho)/2Nb)$).

(ii) Select a sequence $\varepsilon_1, \varepsilon_2, \ldots$ with $\varepsilon_n > 0$ for all $n$ and $\varepsilon_n \to 0$, and use $\varepsilon_n$ in the $n$-th iteration step. Then if $n$ is sufficiently large, $\alpha/\varepsilon_n$ is large enough to assure that each policy replacement is indeed an improvement; a sketch of this variant is given below.
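A sketch of option (ii), reusing the hypothetical iteration_step from the sketch in section 1 (again not part of the memorandum; the schedule eps_n = 1/n is just one admissible choice):

```python
import numpy as np

def solve(P, r, alpha, n_max=1000):
    """Run the iteration step with a decreasing sequence eps_n -> 0 (option (ii))."""
    N = r.shape[0]
    f = np.zeros(N, dtype=int)        # arbitrary initial policy
    v = np.zeros(N)                   # arbitrary initial value approximation
    for n in range(1, n_max + 1):
        eps_n = 1.0 / n               # any positive sequence with eps_n -> 0 will do
        h, v = iteration_step(P, r, f, v, alpha, eps_n)
        if h is None:                 # case (i): f is (alpha + eps_n)-optimal
            return f, v
        f = h
    return f, v
```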

Another remark we have to make concerns the following. In the policy improvement step we demand that actions are replaced only if the effect of this improvement is more than $\alpha$. In the case that there are no transient states we do not need this restriction: then we only have improvements of type A, for which the proof of section 4 goes through. This is the case we treated in [4]. In the case of transient states, however, we see that small improvements in recurrent states may, because of the inaccuracy of the value approximation step, yield a policy with the same gain. But in that case we may no longer have $P_f^* = P_h^*$, so the essential lemma 5.3 may fail to hold.

7. References

[1] Doob, J.L., Stochastic Processes, Wiley, New York, 1953.

[2] Howard, R.A., Dynamic programming and Markov processes, MIT Press, Cambridge (Mass.), 1960.

[3] Schweitzer, P.J., Iterative solution of the functional equations of undiscounted Markov renewal programming, J. Math. Anal. Appl. 34 (1971), 495-501.

[4] Van der Wal, J., A successive approximation algorithm for an undiscounted Markov decision process, Computing 17 (1976), 157-162.
