Adaptive controllers for partially observable unknown Markov chains


Citation for published version (APA):

Reyman, G. (1987). Adaptive controllers for partially observable unknown Markov chains. (Memorandum COSOR; Vol. 8719). Technische Universiteit Eindhoven.

Document status and date: Published: 01/01/1987



EINDHOVEN UNIVERSITY OF TECHNOLOGY

Faculty of Mathematics and Computing Science

Memorandum COSOR 87-19

Adaptive controllers for partially observable unknown Markov chains

by

G. Reyman

Eindhoven, Netherlands


Abstract

Adaptive controllers for partially observable unknown Markov chains

Grzegorz Reyman

Eindhoven University of Technology
Department of Mathematics and Computing Science, Postbus 513, 5600 MB Eindhoven, The Netherlands
tel. (040) 472932

A heuristic adaptive controller for Markov chains is proposed for the case of both unknown transition probabilities and partially observable states. The adaptive controller consists of a state recognizer, an estimator of the unknown parameters, and a simple decision-table controller. Simulation shows that, except in cases of extremely wrong initial estimates, the adaptive algorithm gives very good results.

1. Introduction

We consider the problem of controlling finite-state, finite-control Markov chains. In practice Markov chains are unknown and, moreover, the current state may be only partially observed. This leads to the problem of an adaptive controller, in which recursive estimation is required both of the Markov chain parameters and of the parameters of the probability density functions forming a model of the measurement device. Up till now two subproblems of the above problem have been successfully solved, under additional assumptions of different types. Results on adaptive controllers for unknown Markov chains may be traced in [1-5]; finally, [6] extends [5] and gives an optimal adaptive controller without a restriction to a limited set of models. Another problem is to find a controller for partially observable Markov chains; here the model is perfectly known, but measurements of the current state are incomplete, given by some known probability distribution. A solution to this problem may be found in [7] for a finite time horizon and in [8], [9] for an infinite time horizon.

To the knowledge of the author there is no work on adaptive controllers for Markov chains in the case of both lack of model knowledge and uncertain state measurements. However, analogous problems of designing an adaptive controller for stochastic systems given by a state-space representation with a white process noise (or a noise driven by a white Gaussian process with zero mean) have been successfully solved [10]. In this paper we present a concept similar to that of [10, Section 9.2]: a general adaptive state estimator consisting of sequential parameter estimation and state reconstruction. Here the theory of pattern recognition is used to "reconstruct" the current state. Although evaluation of the above concept is the most important contribution of this paper, an optimal Bayesian approach to the described problem is also briefly presented. The results of the latter approach are nice and compact; however, a computer implementation for a reasonably large problem is impossible because a large number of integrations over continuous multidimensional sets is unavoidable.

The proposed heuristic adaptive controller, based on initial parameter estimates, consists of the following:

- a state recognizer,
- a Markov chain parameter estimator,
- a probability density parameter estimator,
- a controller, provided the current state is known.

This paper is organized as follows. First the system is described. In Section 3 the Bayesian solution follows, assuming knowledge of a priori probability distributions. A concept of decomposition of the problem into state recognition and control, provided the current state is known, is given in Section 4, together with a brief discussion of the resultant decomposed algorithm. In Section 5 the heuristic adaptive algorithm is introduced, and in Section 6 simulation results for a simple two-state, two-control, one-measurement problem are presented and discussed.

2. Description of the system

Let n = 0, 1, ... denote the current time moment of the controlled Markov chain. At each time, characteristic features of the current state are measured. On the basis of the current measurement x_n an action should be chosen and executed. Let us use the following notation:

S = {1, 2, ..., M} - the finite state space,
K = {1, 2, ..., r} - the finite set of actions,
x_n ∈ X, where X is a closed bounded set in the Euclidean space R^s.

A bold letter b_n will denote a random variable taking at time n its realization b_n from an appropriate set B, and b^n = (b_0, ..., b_n), b^n ∈ B^{n+1}, the (n+1)-fold product of B.

The behavior of the system is governed by the set of transition probabilities

P(j_{n+1} = j | j_n = i, k_n = k) = p_{ij}^k(φ)   (1)

where j_n is the state of the system and k_n is the action executed. φ ∈ Φ is the unknown true parameter, and Φ is a closed bounded set in the Euclidean space R^t, t ≤ M(M-1)r.


The above inequality is due to forbidden transitions, i.e., transitions that form the following set:

F = {(i, j, k) ∈ S × S × K : p_{ij}^k(φ) = 0}

We assume that the set F is known.

The initial probabilities of the state j_0 at the time n = 0 are

p_j = P(j_0 = j),  j = 1, ..., M   (2)

The measurement x_n of the state j_n is given by the vector of conditional probability densities

f(x_n | j, α),  j = 1, ..., M   (3)

where f(x_n | j, α) is a probability density of x_n given j_n = j. α ∈ A is the unknown true parameter, where A is a closed bounded set in a Euclidean space.

The usual assumption is made on the unknown true system.

Assumption 1 [6]: For every feedback control law (4), the closed-loop system governed by the transition probabilities (1)-(2) is irreducible, i.e., it has a single ergodic class, which is the whole state space S.

The assumption ensures that all the states j_n ∈ S may be "excited" in a finite time horizon by any control law given by (4).

The goal of this paper is to design an adaptive controller which minimizes the following finite horizon performance index

Q_N = E Σ_{n=0}^{N-1} c_n(j_n, k_n)   (5)

where c_n(j, k) is the local cost incurred by execution of the action k when the current state at the time n is j.

3. Bayesian approach

Let us assume a priori knowledge of the initial joint probability distribution

P(x_0 = x, j_0 = j)   (6)

As we know the initial probabilities (2), then by the Bayes rule we also know

f(x_0 = x | j_0 = j) = f_0(x | j)   (7)

Define

q_{n,j} := P(j_n = j | x^n = x^n, k^{n-1} = k^{n-1}, α, φ)   (8)

Using the Bayes rule we have

q_{n,j} = f(x_n | j, α) Σ_{i=1}^M p_{ij}^{k_{n-1}}(φ) q_{n-1,i} / [ Σ_{j=1}^M f(x_n | j, α) Σ_{i=1}^M p_{ij}^{k_{n-1}}(φ) q_{n-1,i} ]   (9a)

with

q_{0,j} = f_0(x_0 | j) p_j / Σ_{j=1}^M f_0(x_0 | j) p_j   (9b)

where

b_n := p(φ, α | x^n = x^n, k^{n-1} = k^{n-1})   (10)

is the a posteriori joint probability of the unknown parameters. Using the Bayes rule we obtain

b_n = [ Σ_{j=1}^M f(x_n | j, α) Σ_{i=1}^M p_{ij}^{k_{n-1}}(φ) q_{n-1,i} ] b_{n-1}   (11)

with b_0 = 1.

Define the cost V_0(x^N, k^{N-1}) := 0. Then the minimal expected cost for the last stage is given by

V_1(x^{N-1}, k^{N-2}) = min_{k_{N-1} ∈ K} E[ c_{N-1}(j_{N-1}, k_{N-1}) | x^{N-1} = x^{N-1}, k^{N-1} = k^{N-1} ]

= min_{k ∈ K} [ Σ_{j=1}^M c_{N-1}(j, k) ∫_A ∫_Φ p(φ, α, j | x^{N-1} = x^{N-1}, k^{N-1} = k^{N-1}) dφ dα ]

= min_{k ∈ K} [ Σ_{j=1}^M c_{N-1}(j, k) ∫_A ∫_Φ q_{N-1,j} b_{N-1} dφ dα ]   (12)

where

a_{n-1}^k = (a_{n-1,1}^k, ..., a_{n-1,M}^k)   (13)

and

w_{n,j} = q_{n,j} b_n   (14)

For N - n stages ahead we have analogously

V_{N-n}(x^n, k^{n-1}) = min_{k ∈ K} [ Σ_{j=1}^M c_n(j, k) ∫_A ∫_Φ w_{n,j} dφ dα + ∫_X ∫_A ∫_Φ Σ_{j=1}^M ã_{n+1,j} w_{n+1,j} p(x_{n+1} = x, j_{n+1} = j, φ, α | x^n = x^n, k^{n-1} = k^{n-1}) dφ dα dx ]

= min_{k ∈ K} [ Σ_{i=1}^M ∫_A ∫_Φ a_{n,i}^k w_{n,i} dφ dα ]   (15)

where

a_{n,i}^k = c_n(i, k) + Σ_{j=1}^M [ ∫_X ã_{n+1,j} f(x | j, α) dx ] p_{ij}^k(φ),  i = 1, ..., M   (16)

and ã_{n+1,j} is the optimal value of a_{n+1,j}^k obtained by optimization at the previous stage of the dynamic programming.

The formulas (9)-(16) are nice and compact. However, due to the integrations in (15) and (16) the overall computation time is very large; therefore this approach is not practical for reasonably large problems.

4. Decomposed approach

The great computational complexity of the optimal control algorithm in the Bayesian approach leads to the following concept. Let us decompose the control algorithm into two steps. First the current state is recognized, and then, taking the recognized state as the true one, a simple control algorithm is applied. The recognition algorithm has the following form

î_n = Ψ_n(x^n, k^{n-1})   (17)

where î_n is the recognized state.

Minimizing the risk

R_N = E Σ_{n=0}^N L(î_n, j_n),   (18a)

L(i, j) = 0 if i = j, 1 if i ≠ j,   (18b)

where for n = 0 we have î_0 = Ψ_0(x_0), the following recursive contextual recognition algorithm (CRA) is obtained [11]:

î_n = arg min_l [ d_n(l, x^n, k^{n-1})^{-1} ]   (19)

where

d_n(l, x^n, k^{n-1}) = f(x_n | l, α_n) Σ_{j=1}^M p_{jl}^{k_{n-1}}(φ_{n-1}) d_{n-1}(j, x^{n-1}, k^{n-2})   (20a)

with

d_0(l, x_0) = f_0(x_0 | l) p_l   (20b)

Assumption 2: j_n = î_n.
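One CRA step (19)-(20a) can be sketched as follows; the names P_est[k][i, j] (current transition estimates) and f_est (current density estimates) are assumptions for illustration:

```python
import numpy as np

def cra_step(d_prev, x, k_prev, P_est, f_est):
    """(20a): d_n(l) = f(x_n | l, alpha_n) sum_j p_jl^{k_{n-1}}(phi_{n-1}) d_{n-1}(j)."""
    d = np.array([f_est(x, l) for l in range(len(d_prev))]) * (P_est[k_prev].T @ d_prev)
    d /= d.sum()                 # rescaling for numerical stability; score ratios unchanged
    i_hat = int(np.argmax(d))    # (19): minimizing d^{-1} is maximizing d
    return i_hat, d
```

The recursion reuses the filtering structure of (9a), but with the current parameter estimates plugged in and a hard decision taken at every step.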

Assumption 2 yields a simple control algorithm minimizing (5), given a new estimate φ_n at the time n (21). The decision table H_n may be easily obtained by solving the dynamic programming problem (22a)-(22b). In the above equations α_n and φ_n are the current estimates of the true parameters α and φ.
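For a known finite chain, decision tables such as H_n follow from standard backward induction; a minimal sketch (in the adaptive loop the current estimates φ_n would simply replace the true transition probabilities; the array layout is an assumption):

```python
import numpy as np

def decision_tables(P, cost, N):
    """P[k][i, j] = p_ij^k, cost[i, k] = c(i, k); returns (H, V) with
    H[n][i] = optimal action in state i at stage n, for the horizon N."""
    M, r = cost.shape
    V = np.zeros(M)                  # value-to-go after the last stage
    H = []
    for _ in range(N):
        # Q[i, k] = c(i, k) + sum_j p_ij^k V(j), one backward step
        Q = cost + np.stack([P[k] @ V for k in range(r)], axis=1)
        H.append(Q.argmin(axis=1))   # decision table for this stage
        V = Q.min(axis=1)
    H.reverse()                      # order tables from stage 0 to N-1
    return H, V
```

Recomputing the tables whenever the estimate φ_n changes is what keeps the decomposed approach computationally simple compared with (12)-(16).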

In [11] the superposition (23) was compared with the optimal algorithm (4) for the case of known parameters α and φ. A condition was stated under which the superposition is equivalent to the optimal algorithm. Experimental results show that with respect to (5) the superposition is worse than the optimal algorithm by less than 4%. The superposition (23) was also investigated in the case of a lack of a priori information about the distributions of α and φ. Then, however, so-called learning sequences of measurements, states and actions, obtained via off-line experiments, were available. An estimator of the parameter of the Markov chain as well as an estimator of the probability densities (nonparametric case) were given in [12]. It was shown in [13] that the obtained so-called learning algorithms, based on the estimates in place of the true values, converge to the appropriate algorithms when the length of the learning sequences tends to infinity.

The practical shortcoming of the above approach is that usually the learning sequences are not available at the beginning of the control horizon, but are obtained during the control process. Namely, the off-line estimation and on-line control using once-estimated parameters should be replaced by an on-line estimation and control, i.e., an adaptive control.

It should be pointed out, however, that the decomposed-approach algorithms are ready for computer implementation and computationally very simple. The computational complexity may be decreased further in some cases by the use of state aggregation and/or action reduction. Such cases occur when the number of states is not equal to the number of actions, i.e., when M ≠ r. It was shown in [14] that for the decomposed approach the aggregation of states and/or reduction of redundant operations may be performed without loss of optimality. After the aggregation and/or reduction operations the number of states is equal to the number of actions.

5. The heuristic adaptive control algorithm

Motivated by the results presented in Section 4, we propose a heuristic adaptive control algorithm (HACA1) based on CRA. The block scheme of the system with the adaptive controller is presented in Figure 1. The state recognizer and controller presented in Figure 1 use the most recently available estimates of the unknown parameters to recognize the current state and to determine the action to be executed. Simple maximum likelihood estimators (MLE) are proposed both for the unknown density parameters and for the Markov chain parameters. Let us present the MLE for the transition matrix parameters. The MLE results in the following simple recursive algorithm

m_n(i, j, k) = m_{n-1}(i, j, k) + 1(j_{n-1} = i, j_n = j, k_{n-1} = k)  for (i, j, k) ∈ F^c   (24)

m_n(i, k) = m_{n-1}(i, k) + 1(j_{n-1} = i, k_{n-1} = k)  for (i, k) ∈ S × K   (25)

p_{ij,n}^k = m_n(i, j, k) / m_n(i, k)   (26)

with initial values for m_0(i, j, k) and m_0(i, k) such that their ratio is equal to the known approximate value of p_{ij}^k(φ) [6].

Since the samples used are dependent, being produced by the closed-loop system in Figure 1, the MLE of the unknown parameters may not be consistent. However, we do not discuss the convergence of HACA1, because the sum in (5) is not well defined for N → ∞. The behavior of HACA1 will be discussed by means of a computer simulation for a simple example.

6. Computer simulation results

Let us simulate the closed-loop system in Figure 1 so as to observe the behavior of its components, and specifically the estimators of φ and α. A simple example was chosen to exhibit the properties of the system. Consider a maintenance problem for a certain machine which can be in one of two states: j = 1 (good state) and j = 2 (break-down state). The two possible actions are: k = 1 (continue machining) and k = 2 (repair). The appropriate transition probabilities are indicated on the system graph (Figure 2), and the initial probabilities are p_1 = 1, p_2 = 0.

At each time an inspection is made, resulting in a measurement of one characteristic feature x of the machine state. It is assumed that the appropriate probability distributions are normal, N(α(1), 1) when the good state occurs and N(α(2), 1) when the break-down state occurs. The true values of the above parameters are: φ(1) = 0.8, φ(2) = 0.9, α(1) = 0.0, α(2) = 2.0. The costs are the following: c(1,1) = 1, c(1,2) = 5, c(2,1) = 5, c(2,2) = 4.
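A rough sketch of such a closed-loop run; the placement of φ(1) and φ(2) inside the transition matrices is an assumption (the system graph of Figure 2 is not reproduced here), and a nearest-mean rule with the true densities stands in for the full recognizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed transition matrices for the maintenance example (states 0 = good,
# 1 = broken; actions 0 = continue, 1 = repair)
P = np.array([[[0.8, 0.2],     # continue: good -> good w.p. phi(1) = 0.8
               [0.0, 1.0]],    # continue: broken stays broken
              [[1.0, 0.0],     # repair: good stays good
               [0.9, 0.1]]])   # repair: broken -> good w.p. phi(2) = 0.9
alpha = np.array([0.0, 2.0])   # measurement means alpha(1), alpha(2), unit variance
cost = np.array([[1.0, 5.0],   # c(1,1), c(1,2)
                 [5.0, 4.0]])  # c(2,1), c(2,2)

def simulate(N):
    """Run the closed loop for N steps; return the average cost per step."""
    j, total = 0, 0.0            # initial state: p_1 = 1
    for _ in range(N):
        x = rng.normal(alpha[j], 1.0)
        # nearest-mean recognition with the true densities
        i_hat = int(abs(x - alpha[1]) < abs(x - alpha[0]))
        k = i_hat                # continue when recognized good, repair otherwise
        total += cost[j, k]
        j = int(rng.choice(2, p=P[k, j]))
    return total / N
```

The average per-step cost of such a run can only be compared loosely with Q_2000 / 2000 in Table 1 and Table 2, given the simplifications named above.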

The results of one sample simulation are presented in Figure 3 - Figure 6. In Figure 3 and Figure 4 the estimates α̂_n(1) and α̂_n(2) of the mean values, and in Figure 5 and Figure 6 the estimates φ̂_n(1) and φ̂_n(2) of the unknown Markov chain, are presented for the time horizon N = 3000. The following initial estimates have been used: α̂_0(1) = 1.0, α̂_0(2) = 1.5, φ̂_0(1) = 30/50, φ̂_0(2) = 30/50.

The analysis of the simulation results shows that the convergence rate of HACA1 is very slow. Because the rate of wrongly recognized states is e = 0.13 (e = number of wrongly recognized states / total number of recognized states), the Markov chain estimates do not converge to the true values and high-frequency oscillations occur (especially for φ̂_n(1)). Another (direct) recognition algorithm (DRA), successfully applied for adaptation in [16] and [17], has also been examined.

Replacing CRA by DRA in the adaptive scheme presented in Figure 1 gives a new heuristic adaptive control algorithm, HACA2, which yields, however, twice as great a rate of wrongly recognized states e, and correspondingly worse estimates.

The two adaptive control algorithms HACA1 and HACA2 were compared, and the results are displayed in Table 1 and Table 2. The comparison was done with respect to the final estimates at the end of the N = 2000 time horizon, the rate of wrongly recognized states e, and the average cost Q_2000. For better orientation we state here the results for the case when the true parameters are known (no adaptation; (17) and (21) used): for the time horizon N = 2000, e' = 0.123 and Q'_2000 = 3906. Table 1 and Table 2 show that HACA1 is two times better than HACA2 with respect to the ratio e. This obviously influences the estimation of the unknown parameters. In the case of a perfect initial density parameter estimate, HACA2 always gives the same results. This is due to the smallest possible recognition error, resulting in a correspondingly high convergence rate of the Markov chain parameters. Obviously, in this case, for extreme values of the initial estimates, i.e., φ̂_0(1) = φ̂_0(2) = 0, HACA2 is better than HACA1. Notice the robustness of the control algorithm (22) with respect to the initial estimates φ̂_0, which yields the same value of Q_2000; this follows from the well-chosen cost structure of the considered example.

For very wrong initial estimates the estimated parameters converge to completely wrong values. Both HACA1 and HACA2 are much more sensitive to the initial estimate of the probability density function parameter than to the initial estimate of the Markov chain parameter. Comparing the entries of Table 1 and Table 2 with e' and Q'_2000, we see that except in cases of extremely wrong initial estimates HACA1 gives very good results.

7. Conclusions

In this paper a realistic problem, confronting difficulties in the exact model description as well as in the exact state measurement, is stated. For the stated problem the Bayes approach leads to a nice recursive procedure. This procedure is, however, computationally complex due to the calculation of integrals. The heuristic adaptive control algorithm HACA1 is proposed to avoid this difficulty. It applies the idea of a decomposed algorithm consisting of the contextual state recognition algorithm CRA and a simple control decision table obtained by assuming the recognized state is the true one. The proposed algorithm was investigated by means of a simple numerical example. The results show convergence of the unknown parameter estimates close to the true values. In cases of extremely wrong initial estimates the algorithm HACA1 does not assure proper convergence.

References

[1] P. Mandl, Estimation and control in Markov chains, Adv. Appl. Prob., Vol. 6, pp. 40-60, 1974.

[2] V. Borkar, P. Varaiya, Adaptive control of Markov chains, I: Finite parameter set, IEEE Trans. Automatic Control, Vol. AC-24, pp. 953-958, 1979.

[3] B. Doshi, S. Shreve, Strong consistency of a modified maximum likelihood estimator for controlled Markov chains, J. Appl. Prob., Vol. 17, pp. 726-734, 1980.

[4] Y.M. El-Fattah, Recursive algorithms for adaptive control of finite Markov chains, IEEE Trans. Systems Man Cybern., Vol. SMC-11, pp. 135-144, 1981.

[5] P.R. Kumar, A. Becker, A new family of optimal adaptive controllers for Markov chains, IEEE Trans. Automatic Control, Vol. AC-27, pp. 137-145, 1982.

[6] P.R. Kumar, W. Lin, Optimal adaptive controllers for unknown Markov chains, IEEE Trans. Automatic Control, Vol. AC-27, pp. 765-774, 1982.

[7] R.D. Smallwood, E.J. Sondik, Optimal control of partially observable processes over the finite horizon, Operations Research, Vol. 21, pp. 1071-1088, 1973.

[8] Ch.C. White, A Markov quality control process subject to partial observations, Management Science, Vol. 23, pp. 843-852, 1977.

[9] E.J. Sondik, The optimal control of partially observable Markov processes over the infinite horizon: Discounted costs, Operations Research, Vol. 26, pp. 282-304, 1978.

[10] G.C. Goodwin, K.S. Sin, Adaptive Filtering, Prediction, and Control, Prentice-Hall, Englewood Cliffs, New Jersey, 1984.

[11] G. Reyman, Application of the Pattern Recognition Theory to an optimal control in discrete stochastic systems, Postepy Cybernetyki, Vol. 8, pp. 103-116, 1985 (in Polish).

[12] G. Reyman, Operations choice for the robot in assembly systems, Systems Science, Vol. 12, pp. 75-87, 1986.

[13] G. Reyman, Optimal choice of operations in a robotic assembly system, Ph.D. Thesis, Technical University of Warsaw, Electrical Department, Warsaw, 1985 (in Polish).

[14] G. Reyman, State recognition from the control point of view in robotic assembly systems, Proc. Intelligent Autonomous Systems Conf., Amsterdam 1986, North-Holland, pp. 515-521, 1987.

[15] C.R. Rao, Linear Statistical Inference and Its Applications, Wiley, New York, 1965.

[16] G.R. Dattatreya, V.V.S. Sarma, An adaptive scheme for learning the probability threshold in pattern recognition, IEEE Trans. Systems, Man, Cybern., Vol. SMC-12, pp. 927-934, 1982.

[17] G.R. Dattatreya, L.N. Kanal, Adaptive pattern recognition with random costs and its application to decision trees, IEEE Trans. Systems, Man, Cybern., Vol. SMC-16, pp. 208-218, 1986.

Table 1

α̂_0(1)  α̂_0(2)           α̂_2000(1)  α̂_2000(2)  φ̂_2000(1)  φ̂_2000(2)    e      Q_2000
 0.0     2.0     HACA1:    0.014      2.2        0.759      0.958      0.128    3945
                 HACA2:   -0.         1.795      0.645      0.768      0.204    4469
 0.0     1.0     HACA1:   -0.13       1.654      0.681      0.97       0.138    4006
                 HACA2:   -0.494      1.36       0.565      0.699      0.288    5035
 1.0     1.0     HACA1:   -0.071      1.706      0.704      0.968      0.13
                 HACA2:    1.717     -0.211      0.688      0.273      0.8
 2.25    2.0     HACA1:    0.124      2.261      0.78       0.964      0.133    3970
                 HACA2:    2.059     -0.163      0.585      0.210      0.834    8853
-3.0     2.0     HACA1:   -0.692      1.419      0.481      0.803      0.289    5058
                 HACA2:   -1.637      0.714      0.329      0.351      0.639    7517
-3.0     1.0     HACA1:   -1.549      0.631      0.275      0.343      0.671    7719
                 HACA2:   -2.17       0.407      0.295      0.213      0.793
 5.0    -3.0     HACA1:    2.03      -3.0        0.99       0.9        0.999    9988
                 HACA2:    2.02      -2.878      0.982      0.831      0.975    9816

Table 2

φ̂_0(1)  φ̂_0(2)           α̂_2000(1)  α̂_2000(2)  φ̂_2000(1)  φ̂_2000(2)    e      Q_2000
 0.8     0.9     HACA1:    0.02       2.212      0.753      0.993      0.129    3953
                 HACA2:   -0.351      1.759      0.635      0.75       0.204    4469
 0.7     0.7     HACA1:   -0.001      2.196      0.754      0.961      0.125    3921
                 HACA2:   -0.351      1.759      0.634      0.747      0.204    4469
 0.6     0.6     HACA1:   -0.024      2.153      0.747      0.962      0.124    3900
                 HACA2:   -0.351      1.795      0.633      0.746      0.204    4469
 0.5     0.5     HACA1:   -0.042      2.132      0.736      0.959      0.125    3911
                 HACA2:   -0.351      1.795      0.633      0.746      0.204    4469
 0.2     0.3     HACA1:   -0.219      1.833      0.643      0.945      0.15     4019
                 HACA2:   -0.351      1.795      0.63       0.741      0.204    4469
 0.1     0.2     HACA1:   -0.239      1.809      0.636      0.922      0.157    4129
                 HACA2:   -0.351      1.795      0.633      0.746      0.204    4469
 0.0     0.0     HACA1:   -0.074      0.029      0.0        0.0        0.999    9996
                 HACA2:   -0.351      1.795      0.629      0.737      0.204    4469

Initial values α̂_0(1) and α̂_0(2) are the true values, namely α̂_0(1) = 0.0 and α̂_0(2) = 2.0; m_0(i,k) = 10, i,k = 1,2.

Figure 1. Block scheme of the system with the adaptive controller: the stochastic system (1)-(2) and the state measurement device (3) in closed loop with the adaptive controller, which consists of the φ estimator (MLE) (26), the α estimator (MLE), the state recognizer (17)-(20) and the controller (21)-(22).

Figure 2. System graph of the maintenance example, with the transition probabilities indicated on the arcs: solid arcs k = 1 (continue machining), dashed arcs k = 2 (repair).

Figure 3 and Figure 4. Estimates α̂_n(1) and α̂_n(2) of the mean values, for the time horizon N = 3000.

Figure 5 and Figure 6. Estimates φ̂_n(1) and φ̂_n(2) of the unknown Markov chain parameters, for the time horizon N = 3000.
