Discounted semi-Markov decision processes: linear programming and policy iteration

Citation for published version (APA):

Wessels, J., & van Nunen, J. A. E. E. (1974). Discounted semi-Markov decision processes : linear programming and policy iteration. (Memorandum COSOR; Vol. 7401). Technische Hogeschool Eindhoven.

Document status and date: Published: 01/01/1974

Document Version:

Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers)

Please check the document version of this publication:

• A submitted manuscript is the version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website.

• The final author version and the galley proof are versions of the publication after peer review.

• The final published version features the final layout of the paper including the volume, issue and page numbers.

Link to publication

General rights

Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.

• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.

• You may not further distribute the material or use it for any profit-making activity or commercial gain.

• You may freely distribute the URL identifying the publication in the public portal.

If the publication is distributed under the terms of Article 25fa of the Dutch Copyright Act, indicated by the “Taverne” license above, please follow the link below for the End User Agreement:

www.tue.nl/taverne

Take down policy

If you believe that this document breaches copyright please contact us at openaccess@tue.nl, providing details, and we will investigate your claim.


EINDHOVEN UNIVERSITY OF TECHNOLOGY

Department of Mathematics

STATISTICS AND OPERATIONS RESEARCH GROUP

Memorandum COSOR 74-01

Discounted semi-Markov decision processes: linear programming and policy iteration

by

J. Wessels and

J.A.E.E. van Nunen

Abstract.

For semi-Markov decision processes with discounted rewards we derive the well-known results regarding the structure of optimal strategies (nonrandomized, stationary Markov strategies) and the standard algorithms (linear programming, policy iteration). Our analysis is completely based on a primal linear programming formulation of the problem.

§ 1. Introduction.

In this memorandum discounted semi-Markov problems, as discussed e.g. by Jewell [6] and by De Ghellinck and Eppen [3], will be treated.

We consider (semi-)Markov decision processes with a finite set of states, $S := \{1,2,\ldots,N\}$. For any $i \in S$ a finite set $K(i)$ of allowed decisions is available. If $k \in K(i)$ has been chosen, the probability of finding the system in state $j$ at the next decision point is equal to $p_{ij}^{k}$ ($p_{ij}^{k} \geq 0$, $\sum_{j=1}^{N} p_{ij}^{k} = 1$). If this occurs, the interdecision time has a probability distribution function $F_{ij}^{k}$ with $F_{ij}^{k}(t) = 0$ for $t \leq 0$; the time $t = 0$ is a decision moment. At each decision moment an (expected) reward $r^{k}(i)$ is earned. The Markov model appears when $F_{ij}^{k}$ has the form

$$F_{ij}^{k}(t) = \begin{cases} 0 & \text{for } t < 1, \\ 1 & \text{for } t \geq 1. \end{cases}$$

The goal is to maximize the total discounted expected reward over an infinite time horizon.

For these problems it is possible to give a linear programming formulation (see e.g. d'Epenoux [2], De Ghellinck and Eppen [3], Howard [5]).

Howard's policy iteration algorithm (see e.g. [4], [5]) can also be used to find optimal solutions for this type of problem. The relationship between linear programming and policy iteration is well known. Mine and Osaki [7] discussed this relation for (semi-)Markov decision processes with and without discounting. Here we derive the known results concerning the structure of optimal strategies (nonrandomized Markov or memoryless strategies) and the relation between the standard algorithms (linear programming and policy iteration) completely from a primal linear programming formulation of the problem.

§ 2. Semi-Markov decision processes with discounting.

Let the initial state or initial distribution $\{\pi_j(0)\}$ ($\pi_j(0) \geq 0$, $\sum_{j \in S} \pi_j(0) = 1$) be given. Then an arbitrary decision rule determines the stochastic process. This decision rule may be randomized and non-Markov, hence basing decisions on the complete past of the process. For a given decision rule let $\pi_i^{k}(n,t)$, for the $n$-th decision point, be the joint probability that state $i$ is observed, that decision $k$ is made, and that this $n$-th decision point occurs not later than time $t$.

For $t \geq 0$, $j \in S$, the $\pi_j^{k}(n,t)$ will satisfy the following recurrence relation:

$$\sum_{k \in K(j)} \pi_j^{k}(n,t) = \begin{cases} \pi_j(0), & \text{if } n = 0, \\[2mm] \displaystyle\sum_{i \in S} \sum_{\ell \in K(i)} p_{ij}^{\ell} \int_0^t \pi_i^{\ell}(n-1,\, t-\tau)\, dF_{ij}^{\ell}(\tau), & n = 1,2,\ldots. \end{cases} \tag{1}$$

We assume that $F_{ij}^{k}(0) < 1$ for all $i,j \in S$ and for all $k \in K(i)$. This assumption guarantees that the expected number of transitions in a finite interval $(0,T)$ is finite. Now we can state the following lemma.

Lemma. For any decision rule the total expected discounted reward (using a discount rate $\beta > 0$)

$$\sum_{n=0}^{\infty} \sum_{j \in S} \sum_{k \in K(j)} r^{k}(j) \int_0^{\infty} e^{-\beta t}\, d\pi_j^{k}(n,t) \tag{2}$$

converges absolutely, and the sum is uniformly bounded by $\pm \dfrac{r^*}{1-\delta}$, with

$$r^* := \max_{j,k}\, |r^{k}(j)|, \qquad \delta := \max_{i,j,k} \int_0^{\infty} e^{-\beta t}\, dF_{ij}^{k}(t).$$

Proof.

$$\sum_{n=0}^{\infty} \sum_{j \in S} \sum_{k \in K(j)} |r^{k}(j)| \int_0^{\infty} e^{-\beta t}\, d\pi_j^{k}(n,t) \;\leq\; r^* \sum_{n=0}^{\infty} \Bigl\{ \sum_{j \in S} \sum_{k \in K(j)} \int_0^{\infty} e^{-\beta t}\, d\pi_j^{k}(n,t) \Bigr\}.$$

Consider, for $n \geq 1$,

$$\sum_{j \in S} \sum_{k \in K(j)} \int_0^{\infty} e^{-\beta t}\, d\pi_j^{k}(n,t) \;=\; \sum_{j \in S} \int_0^{\infty} e^{-\beta t}\, d\Bigl( \sum_{k \in K(j)} \pi_j^{k}(n,t) \Bigr)$$
$$=\; \sum_{j \in S} \int_0^{\infty} e^{-\beta t}\, d\Bigl( \sum_{i \in S} \sum_{\ell \in K(i)} p_{ij}^{\ell} \int_0^t \pi_i^{\ell}(n-1,\, t-\tau)\, dF_{ij}^{\ell}(\tau) \Bigr)$$
$$=\; \sum_{j \in S} \sum_{i \in S} \sum_{\ell \in K(i)} p_{ij}^{\ell} \int_0^{\infty} e^{-\beta t}\, dF_{ij}^{\ell}(t) \int_0^{\infty} e^{-\beta t}\, d\pi_i^{\ell}(n-1,t)$$
$$\leq\; \delta \sum_{i \in S} \sum_{\ell \in K(i)} \int_0^{\infty} e^{-\beta t}\, d\pi_i^{\ell}(n-1,t),$$

where $0 \leq \delta < 1$ as a consequence of the assumption $F_{ij}^{k}(0) < 1$, $i,j \in S$, $k \in K(i)$. Iterating this inequality gives $\sum_{j \in S} \sum_{k \in K(j)} \int_0^{\infty} e^{-\beta t}\, d\pi_j^{k}(n,t) \leq \delta^{n}$, so

$$\sum_{n=0}^{\infty} \Bigl\{ \sum_{j \in S} \sum_{k \in K(j)} |r^{k}(j)| \int_0^{\infty} e^{-\beta t}\, d\pi_j^{k}(n,t) \Bigr\} \;\leq\; \frac{r^*}{1-\delta}. \qquad \Box$$

The problem is to determine a decision rule for which (2) is maximal. As a consequence of the absolute convergence of (2), it is also possible to write (2) as

$$\sum_{j \in S} \sum_{k \in K(j)} r^{k}(j) \sum_{n=0}^{\infty} \int_0^{\infty} e^{-\beta t}\, d\pi_j^{k}(n,t). \tag{3}$$

§ 3. Linear programs and the structure of optimal strategies.

(3) depends on the decision rule only through the transforms

$$\int_0^{\infty} e^{-\beta t}\, d\pi_j^{k}(n,t).$$

Hence, with

$$\pi_j^{k}(n) := \int_0^{\infty} e^{-\beta t}\, d\pi_j^{k}(n,t), \tag{4}$$

the problem can also be formulated as: determine the decision rule for which

$$\sum_{j \in S} \sum_{k \in K(j)} r^{k}(j) \sum_{n=0}^{\infty} \pi_j^{k}(n) \tag{5}$$

is maximal.

As a consequence of (1) the $\pi_j^{k}(n)$ will satisfy the recurrence relation

$$\sum_{k \in K(j)} \pi_j^{k}(n) = \begin{cases} \pi_j(0), & n = 0, \\[2mm] \displaystyle\sum_{i \in S} \sum_{\ell \in K(i)} p_{ij}^{\ell}\, \eta_{ij}^{\ell}\, \pi_i^{\ell}(n-1), & n = 1,2,\ldots, \end{cases} \tag{6}$$

where

$$\eta_{ij}^{\ell} := \int_0^{\infty} e^{-\beta t}\, dF_{ij}^{\ell}(t) \qquad (\text{hence } 0 \leq \eta_{ij}^{\ell} < 1).$$
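As an illustration (not worked out in the memorandum): if the interdecision time distribution $F_{ij}^{\ell}$ is exponential with rate $\lambda_{ij}^{\ell} > 0$, the transform evaluates to

$$\eta_{ij}^{\ell} = \int_0^{\infty} e^{-\beta t}\, \lambda_{ij}^{\ell}\, e^{-\lambda_{ij}^{\ell} t}\, dt = \frac{\lambda_{ij}^{\ell}}{\lambda_{ij}^{\ell} + \beta} < 1,$$

so $\eta_{ij}^{\ell}$ plays the role of a state- and decision-dependent discount factor.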

Lemma. Every nonnegative solution $\{\pi_j^{k}(n)\}$ of (6) may be considered as the transforms (4) of the $\pi_j^{k}(n,t)$ corresponding with a Markov decision rule.

Proof. A Markov decision rule which satisfies the requirements is constructed in the following way: if at the $n$-th decision point state $j$ has been observed, select decision $k \in K(j)$ with probability

$$d_j^{k}(n) := \frac{\pi_j^{k}(n)}{\displaystyle\sum_{k \in K(j)} \pi_j^{k}(n)}.$$

It is easy to verify that the $\{\pi_j^{k}(n,t)\}$ related with this decision rule have transforms $\{\pi_j^{k}(n)\}$. $\Box$

As a consequence of this lemma it is permitted to consider only Markov strategies. Furthermore, the lemma legitimates formulating the problem as follows: maximize (5), subject to (6) and the nonnegativity constraints $\pi_j^{k}(n) \geq 0$. This problem is a linear programming problem with an infinite number of constraints and variables.

As a second step we define

$$x_j^{k} := \sum_{n=0}^{\infty} \pi_j^{k}(n).$$

Summing (6) over $n$ and substituting in (5) then leads to the following linear programming problem I:

$$\max \sum_{j \in S} \sum_{k \in K(j)} r^{k}(j)\, x_j^{k}$$

subject to

$$\sum_{k \in K(j)} x_j^{k} = \pi_j(0) + \sum_{i \in S} \sum_{\ell \in K(i)} q_{ij}^{\ell}\, x_i^{\ell}, \qquad j \in S,$$
$$x_j^{k} \geq 0, \qquad j \in S,\ k \in K(j),$$

where $q_{ij}^{\ell} := p_{ij}^{\ell}\, \eta_{ij}^{\ell}$.

Now, problem I is a standard linear programming problem.
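For concreteness, here is a minimal sketch of how problem I can be handed to an off-the-shelf LP solver. The state space, decision sets, transition data and rewards below are made-up illustration values (they do not come from the memorandum), and scipy.optimize.linprog is used as the solver.

```python
# Sketch: solving the primal linear programming problem I for a made-up instance.
import numpy as np
from scipy.optimize import linprog

S = [0, 1]                                  # states (the paper uses S = {1,...,N})
K = {0: [0, 1], 1: [0]}                     # allowed decisions K(i)
q = {0: {0: [0.5, 0.4], 1: [0.1, 0.8]},     # q[i][k][j] = p_ij^k * eta_ij^k
     1: {0: [0.6, 0.3]}}
r = {0: {0: 1.0, 1: 2.0}, 1: {0: 0.5}}      # rewards r^k(i)
pi0 = [0.5, 0.5]                            # initial distribution pi_j(0) > 0

var = [(j, k) for j in S for k in K[j]]     # one LP variable x_j^k per pair (j, k)
idx = {v: n for n, v in enumerate(var)}

c = np.array([-r[j][k] for (j, k) in var])  # maximize sum r^k(j) x_j^k

# constraints: sum_k x_j^k - sum_{i,l} q_ij^l x_i^l = pi_j(0), one row per state j
A_eq = np.zeros((len(S), len(var)))
for j in S:
    for k in K[j]:
        A_eq[j, idx[(j, k)]] += 1.0
    for i in S:
        for l in K[i]:
            A_eq[j, idx[(i, l)]] -= q[i][l][j]

res = linprog(c, A_eq=A_eq, b_eq=np.array(pi0), bounds=[(0, None)] * len(var))

# in an optimal basic solution exactly one x_j^k is positive for each state j,
# and that k is the optimal decision k(j)
for j in S:
    k_opt = max(K[j], key=lambda k: res.x[idx[(j, k)]])
    print(f"state {j}: optimal decision {k_opt}")
```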

Lemma. If $\pi_j(0) > 0$, $j \in S$, then there exists a one-to-one correspondence between basic feasible solutions of I and nonrandomized stationary Markov strategies (see [3], [7]).

To prove this lemma we remark that $\pi_j(0) > 0$ implies $\sum_{k \in K(j)} x_j^{k} > 0$. Hence for any basic feasible solution there is for each $j \in S$ exactly one $k(j) \in K(j)$ with $x_j^{k(j)} > 0$ and $x_j^{k} = 0$ for $k \neq k(j)$. Conversely, given a nonrandomized stationary Markov strategy denoted by $f := (k(1),k(2),\ldots,k(N))$, the system of equations

$$x_j^{k(j)} - \sum_{i \in S} q_{ij}^{k(i)}\, x_i^{k(i)} = \pi_j(0), \qquad j \in S,$$

has a unique solution $\{x_j^{k(j)}\}$ with $x_j^{k(j)} > 0$. This follows from the system's diagonal dominance ($0 \leq \eta_{ij}^{\ell} < 1$) or from the fact that $Q^n \to 0$ as $n \to \infty$, where $Q$ is an $N \times N$ matrix with elements $q_{ij}^{k(i)}$. $\Box$

Furthermore, it is permitted to take $\pi_j(0) = \frac{1}{N}$ in the linear programming problem I, since an optimal solution of the linear programming problem I remains optimal if the $\pi_j(0)$ in the right-hand side are changed (see Gass [8]).

Theorem. In order to find an optimal decision rule it is permitted to restrict attention to nonrandomized stationary Markov strategies. This optimal decision rule can be found by solving the linear programming problem I, with arbitrary $\{\pi_j(0)\}$, $\pi_j(0) > 0$.

This theorem follows from the fact that for any decision rule the transforms of the corresponding $\pi_j^{k}(n,t)$ yield a feasible solution of I, while conversely an optimal basic solution of I corresponds to a nonrandomized stationary Markov strategy. The fact that a restriction to nonrandomized stationary Markov strategies is permitted is also proved in another way by Denardo [1].

Remark. The total expected discounted reward, if the process starts in state $i$ and the optimal strategy $f^*$ is used ($v_i(f^*)$), can be found by solving the dual problem.
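The memorandum does not write the dual out; assuming the standard linear programming duality for problem I (one free dual variable $v_j$ per equality constraint), it reads

$$\min \sum_{j \in S} \pi_j(0)\, v_j \qquad \text{subject to} \qquad v_j - \sum_{i \in S} q_{ji}^{k}\, v_i \geq r^{k}(j), \qquad j \in S,\ k \in K(j),$$

which is precisely the system of dynamic programming inequalities used in § 4 below.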

§ 4. Policy iteration.

It is easy to find a basic feasible solution: select for each $j \in S$ one $k(j) \in K(j)$; then the $\{x_j^{k(j)}\}_{j \in S}$ form a basic solution and the corresponding nonrandomized stationary Markov strategy is $f = (k(1),k(2),\ldots,k(N))$.

Whether this basis yields an optimal solution or not may be checked by constructing the price vector, as usual in linear programming (see Gass [8]).

Thus the problem is to find a linear combination of the $N$ equality constraints of I and the reward equation, such that the elements corresponding to the $x_j^{k(j)}$ equal 0, i.e. look for $v_j(f)$, $j \in S$, with

$$v_j(f) - \sum_{i \in S} q_{ji}^{k(j)}\, v_i(f) - r^{k(j)}(j) = 0, \qquad j \in S. \tag{8}$$

From (8) we see that $v_j(f)$ is the total expected discounted reward if the system's initial state is $j$ and strategy $f$ ($j \to k(j)$) is followed (dynamic programming equations).
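As a minimal sketch (with made-up numbers, not data from the memorandum), equations (8) can be solved for a fixed strategy $f$ as one linear system in the values $v_j(f)$:

```python
# Sketch: policy evaluation, i.e. solving (8) in vector form v(f) = r(f) + Q(f) v(f)
# for a fixed strategy f; Q_f and r_f are made-up illustration values.
import numpy as np

Q_f = np.array([[0.5, 0.4],
                [0.6, 0.3]])   # Q(f): entry (j, i) is q_ji^{k(j)}
r_f = np.array([1.0, 0.5])     # r(f): entry j is r^{k(j)}(j)

# (I - Q(f)) is invertible since Q(f)^n -> 0 (all eta < 1)
v_f = np.linalg.solve(np.eye(len(r_f)) - Q_f, r_f)
print(v_f)                     # total expected discounted rewards v_j(f)
```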

For the other elements of the reward equation we have

$$v_j(f) - \sum_{i \in S} q_{ji}^{k}\, v_i(f) - r^{k}(j), \qquad j \in S,\ k \in K(j).$$

If for all $j \in S$ and $k \in K(j)$

$$v_j(f) - \sum_{i \in S} q_{ji}^{k}\, v_i(f) - r^{k}(j) \geq 0,$$

then strategy $f$ ($j \to k(j)$) is optimal, with corresponding objective value

$$W := \sum_{j \in S} \pi_j(0)\, v_j(f).$$

If for some $k \in K(j)$ and $j \in S$

$$v_j(f) - \sum_{i \in S} q_{ji}^{k}\, v_i(f) - r^{k}(j) < 0, \tag{9}$$

then a better solution is possible by selecting for each $j \in S$ one $k(j) \in K(j)$ for which (9) holds. When for some $j \in S$ such a choice is not available, the old $k(j) \in K(j)$ for which (8) holds is selected again. This yields a new and better basic feasible solution.

That the new basic feasible solution, and so the corresponding strategy $g \in K := K(1) \times K(2) \times \cdots \times K(N)$, is better than the old one $f \in K$ can be shown as follows. Let

$$v(f) = r(f) + Q(f)\, v(f),$$
$$v(g) = r(g) + Q(g)\, v(g),$$

and

$$\gamma(g,f) = r(g) + Q(g)\, v(f) - r(f) - Q(f)\, v(f),$$

where for simplicity a vector notation is used. Now from (8) and (9) it will be clear that $\gamma(g,f) \geq 0$ and $\gamma(g,f) \neq 0$, and

$$\Delta v := v(g) - v(f) = \gamma(g,f) + Q(g)\, v(g) - Q(g)\, v(f) = \gamma(g,f) + Q(g)\, \Delta v.$$

So

$$\Delta v = (I - Q(g))^{-1}\, \gamma(g,f) \geq 0 \quad \text{and} \quad \neq 0,$$

since $(I - Q(g))^{-1} = \sum_{n=0}^{\infty} Q(g)^{n}$ is nonnegative.

This implies that, when this improvement procedure is applied, an old strategy will never appear again as long as real improvement is possible. This leads to the following algorithm:

i) select a nonrandomized strategy;
ii) solve for this strategy the set of equations (8);
iii) if possible, select a better strategy based on (9) and return to ii); if such a strategy is not available, stop: the current strategy is optimal.
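A minimal sketch of the algorithm i)-iii) follows, again on made-up data (the dictionaries q and r are illustrative and do not come from the memorandum):

```python
# Sketch of the policy iteration algorithm i)-iii) on made-up data.
import numpy as np

S = [0, 1]
K = {0: [0, 1], 1: [0]}
q = {0: {0: [0.5, 0.4], 1: [0.1, 0.8]},   # q[i][k][j] = p_ij^k * eta_ij^k
     1: {0: [0.6, 0.3]}}
r = {0: {0: 1.0, 1: 2.0}, 1: {0: 0.5}}    # rewards r^k(i)

def evaluate(f):
    """ii) solve equations (8) for the strategy f, i.e. (I - Q(f)) v = r(f)."""
    Q_f = np.array([[q[j][f[j]][i] for i in S] for j in S])
    r_f = np.array([r[j][f[j]] for j in S])
    return np.linalg.solve(np.eye(len(S)) - Q_f, r_f)

f = {j: K[j][0] for j in S}               # i) select a nonrandomized strategy
while True:
    v = evaluate(f)
    g, improved = dict(f), False
    for j in S:                           # iii) improvement test based on (9)
        for k in K[j]:
            if v[j] - sum(q[j][k][i] * v[i] for i in S) - r[j][k] < -1e-12:
                g[j], improved = k, True
    if not improved:
        break                             # no k satisfies (9): f is optimal
    f = g

print("optimal strategy:", f)
print("values v_j(f*):", v)
```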

Consequently such an algorithm will converge, in a finite number of iterations, to an optimal strategy $f^*$ with objective value

$$W := \sum_{j \in S} \pi_j(0)\, v_j(f^*).$$

This means that Howard's policy iteration method, which can also be derived via dynamic programming (see [6], [4]), follows straightforwardly from the primal linear programming formulation, using the possibility of changing more than one basic variable in each iteration step (see also [7]).

References.

[1] E.V. Denardo, "Contraction mappings in the theory underlying dynamic programming", SIAM Review 9 (1967), 165-177.

[2] F. d'Epenoux, "Sur un problème de production et de stockage dans l'aléatoire", Rev. Franç. Rech. Opérationnelle 14 (1960), 3-16.

[3] G.T. de Ghellinck and G.D. Eppen, "Linear programming solutions for separable Markovian decision problems", Management Sci. 13 (1967), 371-394.

[4] R.A. Howard, "Dynamic programming and Markov processes", M.I.T. Press, Cambridge, Massachusetts (1960).

[5] R.A. Howard, "Dynamic probabilistic systems", Volume II, John Wiley & Sons, Inc., New York (1971).

[6] W.S. Jewell, "Markov-renewal programming: I, Formulation, finite return models", Operations Res. 11 (1963), 938-948.

[7] H. Mine and S. Osaki, "Markovian decision processes", American Elsevier, New York (1970).

[8] S.I. Gass, "Linear programming: methods and applications", McGraw-Hill Book Company, New York (1958).
