

Maurer Computers with Single-Thread Control

J.A. Bergstra (1,2) and C.A. Middelburg (3)

(1) Programming Research Group, University of Amsterdam, P.O. Box 41882, 1009 DB Amsterdam, the Netherlands, janb@science.uva.nl
(2) Department of Philosophy, Utrecht University, P.O. Box 80126, 3508 TC Utrecht, the Netherlands, janb@phil.uu.nl
(3) Computing Science Department, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, the Netherlands, keesm@win.tue.nl

Abstract. We present the development of a theory of stored threads and their execution. The work builds upon Maurer’s theory of computer instructions and the thread algebra of Bergstra et al. The theory being developed is primarily relevant to the design of new processor architectures. We also relate Maurer’s model for computers with Turing machines, and stored threads with programs as considered in the program algebra of Bergstra et al.

Keywords: Maurer computers, thread algebra, instructions, stored threads, control threads, Turing machines, program algebra.

1998 CR Categories: C.1.1, F.1.1, F.1.2, F.3.2.

1 Introduction

In [11], a paper from almost 40 years ago, Maurer proposes a model for computers that is quite different from the well-known models such as pushdown automata and Turing machines (see e.g. [7]). The strength of Maurer’s model is that it is close to real computers. Computer instructions play a prominent part in his model. We use the phrase Maurer computer for what is a computer according to Maurer’s definition of a computer.

We plan to develop an approach to design processor architectures based on Maurer computers. As a first step, we develop in the current paper a theory of stored threads and their execution. We show that a single thread can control the execution on a Maurer computer of an arbitrary finite-state thread stored in the memory of the Maurer computer, provided that the basic actions of the stored thread correspond to instructions of the Maurer computer. We demonstrate that finite-state threads of arbitrary size can be dealt with if the Maurer computer on which the execution takes place leaves the fetching of the basic actions to another Maurer computer of which the memory size is sufficient for the thread concerned.


To describe threads, we use BTA (Basic Thread Algebra), introduced in [4] under the name BPPA (Basic Polarized Process Algebra). BTA is a form of process algebra which is tailored to the description of the behaviour of deterministic sequential programs under execution. Using some kind of strategic interleaving, several single-thread controlled Maurer computers can be put in parallel, which is relevant to the design of new processor architectures. Several kinds of strategic interleaving have been elaborated in earlier work, see e.g. [5]. In this paper, a simple kind of strategic interleaving, called cyclic interleaving, is used to demonstrate that finite-state threads of arbitrary size can be dealt with by setting the fetching of basic actions apart.

A straightforward way to simulate a Turing machine by a Maurer computer clarifies that the instruction implicitly executed by a Turing machine on a test or write step must be capable of reading or overwriting the contents of any cell from the infinite number of cells on the tape of the Turing machine – the cell whose contents are actually read or overwritten depends on the head position. In Maurer’s terminology, a test instruction has an infinite input region and a write instruction has an infinite output region. Real computers do not have such instructions. Using Maurer computers, we show that the instructions concerned can be replaced by instructions with a finite input region and a finite output region if we allow Turing machines with an infinite set of states.

We also demonstrate that there is a close connection between stored threads and PGLD programs. PGLD is one of the simpler programming languages based on PGA (ProGram Algebra) introduced in [4]. From each stored thread, a PGLD program can be extracted of which the behaviour is the thread represented by the stored thread. The PGLD program extracted from a stored thread shows hardly any difference with the stored thread. However, PGLD permits a more efficient representation of threads than the one obtained in this indirect way.

The structure of this paper is as follows. First of all, we review Maurer computers (Section 2) and Basic Thread Algebra (Section 3). Following this, we introduce an operator which allows for threads to transform the states of a Maurer computer by means of its instructions (Section 4). After that, we enhance Maurer computers step by step till we can show that a single thread can control the execution of any stored finite-state thread (Sections 5–8). We also show that such control can be accomplished with a single control instruction (Section 9). Next, we introduce parallel composition of Maurer computers and cyclic interleaving of threads (Section 10) and demonstrate that finite-state threads of arbitrary size can be dealt with by setting the fetching of basic actions apart (Section 11). Then, we clarify some interesting points about Turing machines using Maurer computers (Section 12), and relate PGLD programs to stored threads (Section 13). Finally, we make some concluding remarks (Section 14).

2 Maurer Computers

In this section, we briefly review Maurer computers, i.e. computers as defined by Maurer in [11]. The proofs of the presented theorems can be found in [10].


A Maurer computer C consists of the following components:

– a set M;
– a set B with card(B) ≥ 2;
– a set S of functions S : M → B;
– a set I of functions I : S → S;

and satisfies the following conditions:

– if S1, S2 ∈ S, M0 ⊆ M and S3 : M → B is such that S3(x) = S1(x) if x ∈ M0 and S3(x) = S2(x) if x ∉ M0, then S3 ∈ S;
– if S1, S2 ∈ S, then the set {x ∈ M | S1(x) ≠ S2(x)} is finite.

M is called the memory, B is called the base set, the members of S are called the states, and the members of I are called the instructions. It is obvious that the first condition is satisfied if C is complete, i.e. if S is the set of all functions S : M → B, and that the second condition is satisfied if C is finite, i.e. if M and B are finite sets.
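For readers who want a concrete rendering of this definition, the following Python sketch models a finite, complete Maurer computer; it is only an illustration (the class and function names are invented here, not taken from the paper), and finiteness makes both conditions above hold trivially.

```python
from itertools import product

class MaurerComputer:
    """A finite, complete Maurer computer (M, B, S, I).

    Completeness (S contains *all* functions M -> B) gives the first condition,
    and finiteness of M and B gives the second one.
    """
    def __init__(self, memory, base, instructions):
        assert len(base) >= 2                       # card(B) >= 2
        self.M = list(memory)                       # memory
        self.B = list(base)                         # base set
        # states: every function M -> B, represented as dicts
        self.S = [dict(zip(self.M, vals))
                  for vals in product(self.B, repeat=len(self.M))]
        self.I = list(instructions)                 # instructions: state -> state

# Example: memory {x, y}, base {0, 1}, one instruction that copies x into y.
def copy_x_to_y(state):
    new_state = dict(state)
    new_state['y'] = state['x']
    return new_state

C = MaurerComputer(memory=['x', 'y'], base=[0, 1], instructions=[copy_x_to_y])
print(len(C.S))   # 4: all functions {x, y} -> {0, 1}
```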

The following proposition gives an interesting characterization of the set of states of a Maurer computer.

Proposition 1 (Characterization of the set of states). Let (M, B, S, I) be a Maurer computer, let S0 ∈ S, and let Bx = {b ∈ B | ∃S ∈ S • S(x) = b} for all x ∈ M. Then S is the set of all functions S : M → B such that S(x) ∈ Bx for x ∈ M and {x ∈ M | S0(x) ≠ S(x)} is finite.

Let (M, B, S, I) be a Maurer computer, and let I : S → S. Then the input region of I, written IR(I), and the output region of I, written OR(I), are the subsets of M defined as follows:

IR(I) = {x ∈ M | ∃S1, S2 ∈ S, y ∈ OR(I) • ∀z ∈ M \ {x} • S1(z) = S2(z) ∧ I(S1)(y) ≠ I(S2)(y)} ,
OR(I) = {x ∈ M | ∃S ∈ S • S(x) ≠ I(S)(x)} .

OR(I) is the set of all memory elements that are possibly affected by I; and IR(I) is the set of all memory elements that possibly affect elements of OR(I) under I. We have the following theorem about the relation between the input region and the output region of an instruction.

Theorem 1 (Input and output regions of instructions). Let (M, B, S, I) be a Maurer computer, let S1, S2 ∈ S, and let I ∈ I. Then S1 ↾ IR(I) = S2 ↾ IR(I) implies I(S1) ↾ OR(I) = I(S2) ↾ OR(I).

Both conditions in the definition of Maurer computers are necessary for Theorem 1 to hold.
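Continuing the sketch above, the input and output regions of an instruction of a finite Maurer computer can be computed by brute force directly from the definitions; this makes it easy to check Theorem 1 on small examples. The function names below are, again, only illustrative.

```python
def output_region(C, I):
    """OR(I): the memory elements x with S(x) != I(S)(x) for some state S."""
    return {x for x in C.M if any(S[x] != I(S)[x] for S in C.S)}

def input_region(C, I):
    """IR(I): the elements x such that changing only x can change I(S) on OR(I)."""
    OR = output_region(C, I)
    IR = set()
    for x in C.M:
        for S1 in C.S:
            for S2 in C.S:
                if all(S1[z] == S2[z] for z in C.M if z != x):
                    if any(I(S1)[y] != I(S2)[y] for y in OR):
                        IR.add(x)
    return IR

print(output_region(C, copy_x_to_y))   # {'y'}
print(input_region(C, copy_x_to_y))    # {'x'}
```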

We have the following theorem about the input region and the output region of the composition of two instructions.


Theorem 2 (Composition of instructions). Let (M, B, S, I) be a Maurer computer, let I1, I2 ∈ I, and let J : S → S be defined by J(S) = I2(I1(S)). Then IR(J) ⊆ IR(I1) ∪ IR(I2) and OR(I1) \ OR(I2) ⊆ OR(J) ⊆ OR(I1) ∪ OR(I2). If OR(I1) ∩ IR(I2) = ∅, then IR(I2) ⊆ IR(J) and OR(J) = OR(I1) ∪ OR(I2). Moreover, if OR(J) = OR(I1) ∪ OR(I2) and OR(I1) ∩ OR(I2) = ∅, then also IR(I1) ⊆ IR(J). If OR(I1) ∩ IR(I2) = ∅, IR(I1) ∩ OR(I2) = ∅ and OR(I1) ∩ OR(I2) = ∅, then J = J′ where J′ : S → S is defined by J′(S) = I1(I2(S)).

We have the following theorem about the decomposition of an instruction.

Theorem 3 (Decomposition of instructions). Let (M, B, S, I) be a Maurer computer, let I ∈ I, and let x ∈ OR(I) \ IR(I). Then there exist J1, J2 : S → S with J2(J1(S)) = I(S) such that IR(J1) ⊆ IR(I), IR(J2) ⊆ IR(I), OR(J1) = {x} and OR(J2) = OR(I) \ {x}.

Let C = (M, B, S, I) be a Maurer computer. Then the unit component of C is the set {x ∈ M | ∃b ∈ B•∀S ∈ S•S(x) = b}.

We have the following theorem about the existence of instructions for arbitrary input and output regions.

Theorem 4 (Existence of instructions (1)). Let (M, B, S, I) be a Maurer computer, let Z be its unit component, and let P, Q ⊆ M. Then there exists a function I : S → S with IR(I) = P and OR(I) = Q iff P ∩ Z = ∅, Q ∩ Z = ∅, and P ≠ ∅ ⇒ Q ≠ ∅.

Let (M, B, S, I) be a Maurer computer, let I ∈ I, let M′ ⊆ OR(I), and let M″ ⊆ IR(I). Then the region affecting M′ under I, written RA(M′, I), and the region affected by M″ under I, written AR(M″, I), are the subsets of M defined as follows:

RA(M′, I) = {x ∈ IR(I) | AR({x}, I) ∩ M′ ≠ ∅} ,
AR(M″, I) = {x ∈ OR(I) | ∃S1, S2 ∈ S • ∀z ∈ IR(I) \ M″ • S1(z) = S2(z) ∧ I(S1)(x) ≠ I(S2)(x)} .

AR(M″, I) is the set of all elements of OR(I) that are possibly affected by the elements of M″ under I; and RA(M′, I) is the set of all elements of IR(I) that possibly affect elements of M′ under I. We have the following theorem about the existence of instructions for arbitrary input, output and affected regions.

Theorem 5 (Existence of instructions (2)). Let (M, B, S, I) be a Maurer computer with countable M, let Z be its unit component, let P, Q ⊆ M with P ∩ Z = ∅, Q ∩ Z = ∅, and P ≠ ∅ ⇒ Q ≠ ∅, and let Qx ⊆ Q with Qx ≠ ∅ for each x ∈ P. Moreover, assume that the following two conditions are satisfied:

– there exist only finitely many x ∈ M such that x ∈ Qx, y ∉ Qy for all y ∈ M \ {x}, and card({b ∈ B | ∃S ∈ S • S(x) = b}) = 2;
– for all infinite Q′ ⊆ ⋃x∈P Qx, the set {x ∈ P | Qx ∩ Q′ ≠ ∅} is either infinite or contains an element y for which the set {b ∈ B | ∃S ∈ S • S(y) = b} is infinite.

Then there exists a function I : S → S with IR(I) = P, OR(I) = Q and AR({x}, I) = Qx for each x ∈ P.

Both conditions in Theorem 5 are satisfied if ⋃x∈P Qx is a finite set.

3 Basic Thread Algebra

In this section, we review BTA (Basic Thread Algebra), a form of process algebra which is tailored to the description of the behaviour of deterministic sequential programs under execution. The behaviours concerned are called threads.

In BTA, it is assumed that there is a fixed but arbitrary set of basic actions A. BTA has the following constants and operators:

– the deadlock constant D;
– the termination constant S;
– for each a ∈ A, a binary postconditional composition operator E a D .

We use infix notation for postconditional composition. We introduce action prefixing as an abbreviation: a ◦ p, where p is a term of BTA, abbreviates p E a D p. The intuition is that each basic action performed by a thread is taken as a command to be processed by the execution environment of the thread. The processing of a command may involve a change of state of the execution environment. At completion of the processing of the command, the execution environment produces a reply value. This reply is either T or F and is returned to the thread concerned. Let p and q be closed terms of BTA. Then p E a D q will proceed as p if the processing of a leads to the reply T (called a positive reply), and it will proceed as q if the processing of a leads to the reply F (called a negative reply).

A recursive specification over BTA is a set of equations E = {X = tX | X ∈ V }, where V is a set of variables and each tX is a term of BTA that only contains variables from V. Let t be a term of BTA containing a variable X. Then an occurrence of X in t is guarded if t has a subterm of the form t′ E a D t″ containing this occurrence of X. A recursive specification over BTA is guarded if all occurrences of variables in the right-hand sides of its equations are guarded or it can be rewritten to such a recursive specification using the equations of the recursive specification. In the projective limit model of BTA, which is presented in [2, 4], guarded recursive specifications have unique solutions. A thread that is the solution of a finite guarded recursive specification over BTA is called a finite-state thread. BTA with recursion has, in addition to the constants and operators of BTA, for each guarded recursive specification E over BTA and each variable X that occurs as the left-hand side of an equation in E, a constant standing for the unique solution of E for X, which is denoted by ⟨X|E⟩. We often write X for ⟨X|E⟩ if E is clear from the context. It should be borne in mind that, in such cases, we use X as a constant.
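The constants and operators of BTA, together with the constants ⟨X|E⟩ for guarded recursion, can be rendered as a small datatype. The Python sketch below is only an illustration of the term structure used in later sketches; all class names are invented.

```python
from dataclasses import dataclass

class Thread:                 # base class for BTA terms
    pass

@dataclass
class S_(Thread):             # the termination constant S
    pass

@dataclass
class D_(Thread):             # the deadlock constant D
    pass

@dataclass
class PostCond(Thread):       # p <| a |> q: perform a, then p on reply T, q on reply F
    p: Thread
    a: str
    q: Thread

@dataclass
class Var(Thread):            # a variable occurring in right-hand sides of E
    X: str

@dataclass
class Rec(Thread):            # <X|E>: the X-component of the solution of E
    X: str
    E: dict                   # maps variable names to right-hand-side terms

def prefix(a: str, p: Thread) -> Thread:
    """Action prefixing: a o p abbreviates p <| a |> p."""
    return PostCond(p, a, p)
```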

The projective limit characterization of process equivalence on threads is based on the notion of a finite approximation of depth n. When for all n these approximations are identical for two given threads, both threads are considered identical. Following [2, 4], approximation of depth n is phrased in terms of a unary projection operator πn( ). The projection operators are defined inductively by means of the axioms in Table 1. In this table, and all subsequent tables with axioms in which a occurs, a stands for an arbitrary basic action from A.

Table 1. Axioms for projection

π0(x) = D                                    P0
πn+1(S) = S                                  P1
πn+1(D) = D                                  P2
πn+1(x E a D y) = πn(x) E a D πn(y)          P3
(⋀n≥0 πn(x) = πn(y)) ⇒ x = y                 AIP
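On this term representation, the projection operators of Table 1 can be computed directly; the sketch below unfolds a recursion constant when it meets one (so it assumes a guarded specification) and is meant only to illustrate axioms P0–P3.

```python
def unfold(t: Thread, E: dict) -> Thread:
    """Replace every variable X occurring in t by the constant <X|E>."""
    if isinstance(t, Var):
        return Rec(t.X, E)
    if isinstance(t, PostCond):
        return PostCond(unfold(t.p, E), t.a, unfold(t.q, E))
    return t

def proj(n: int, t: Thread) -> Thread:
    """pi_n(t) according to P0-P3; guarded recursion constants are unfolded first."""
    if isinstance(t, Rec):                 # <X|E> behaves as <t_X|E>
        return proj(n, unfold(t.E[t.X], t.E))
    if n == 0:
        return D_()                        # P0
    if isinstance(t, (S_, D_)):
        return t                           # P1, P2
    if isinstance(t, PostCond):            # P3
        return PostCond(proj(n - 1, t.p), t.a, proj(n - 1, t.q))
    raise ValueError("not a closed BTA term")

# pi_2 of the thread X with X = a o X
X = Rec("X", {"X": prefix("a", Var("X"))})
print(proj(2, X))
```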

In the structural operational semantics, we represent an execution environment by a function ρ : A+ → {T, F}. We write E for the set of all those functions and B for the set {T, F}. Given an execution environment ρ and a basic action a, the derived execution environment ∂/∂a ρ is defined by (∂/∂a ρ)(α) = ρ(⟨a⟩ y α).⁴

The chosen representation of execution environments is based on the assumption that it depends at any stage only on their history, i.e. the sequence of basic actions processed before, whether the reply is positive or negative. This is a realistic assumption for deterministic execution environments.

The following transition relations on closed terms are used in the structural operational semantics of BTA:

– a binary relation ⟨_, ρ⟩ −a→ ⟨_, ρ′⟩ for each a ∈ A and ρ, ρ′ ∈ E;
– unary relations ⟨_, ρ⟩↓ and ⟨_, ρ⟩↑ for each ρ ∈ E.

These transition relations can be explained as follows:

– ⟨p, ρ⟩ −a→ ⟨p′, ρ′⟩: in execution environment ρ, thread p is capable of first performing basic action a and then proceeding as thread p′ in execution environment ρ′;
– ⟨p, ρ⟩↓: in execution environment ρ, thread p is capable of terminating successfully;
– ⟨p, ρ⟩↑: in execution environment ρ, thread p is neither capable of performing a basic action nor capable of terminating successfully.

The structural operational semantics of BTA with recursion and projection is described by the transition rules given in Table 2. In this table, and all subsequent tables with transition rules in which a occurs, a stands for an arbitrary basic action from A. We write ⟨t|E⟩ for t with, for all X that occur on the left-hand side of an equation in E, all occurrences of X in t replaced by ⟨X|E⟩.

⁴ We write ⟨ ⟩ for the empty sequence, ⟨d⟩ for the sequence having d as sole element, and α y β for the concatenation of sequences α and β. We assume that the identities


Table 2. Transition rules for BTA with recursion and projection

⟨S, ρ⟩↓          ⟨D, ρ⟩↑

⟨x E a D y, ρ⟩ −a→ ⟨x, ∂/∂a ρ⟩   if ρ(⟨a⟩) = T
⟨x E a D y, ρ⟩ −a→ ⟨y, ∂/∂a ρ⟩   if ρ(⟨a⟩) = F

⟨⟨t|E⟩, ρ⟩ −a→ ⟨x′, ρ′⟩
───────────────────────   (X = t ∈ E)
⟨⟨X|E⟩, ρ⟩ −a→ ⟨x′, ρ′⟩

⟨⟨t|E⟩, ρ⟩↓
───────────   (X = t ∈ E)
⟨⟨X|E⟩, ρ⟩↓

⟨⟨t|E⟩, ρ⟩↑
───────────   (X = t ∈ E)
⟨⟨X|E⟩, ρ⟩↑

⟨x, ρ⟩ −a→ ⟨x′, ρ′⟩
─────────────────────────────
⟨πn+1(x), ρ⟩ −a→ ⟨πn(x′), ρ′⟩

⟨x, ρ⟩↓
─────────────
⟨πn+1(x), ρ⟩↓

⟨x, ρ⟩↑
─────────────
⟨πn+1(x), ρ⟩↑

⟨π0(x), ρ⟩↑

Bisimulation equivalence is defined as follows. A bisimulation is a symmetric binary relation B on closed terms such that for all closed terms p and q:

– if B(p, q) and ⟨p, ρ⟩ −a→ ⟨p′, ρ′⟩, then there is a q′ such that ⟨q, ρ⟩ −a→ ⟨q′, ρ′⟩ and B(p′, q′);
– if B(p, q) and ⟨p, ρ⟩↓, then ⟨q, ρ⟩↓;
– if B(p, q) and ⟨p, ρ⟩↑, then ⟨q, ρ⟩↑.

Two closed terms p and q are bisimulation equivalent, written p ↔ q, if there exists a bisimulation B such that B(p, q).

Bisimulation equivalence is a congruence with respect to all operators involved. This follows from the fact that the transition rules from Table 2 constitute a transition system specification in path format (see e.g. [1]).

Henceforth, we write Tfinrec for the set of all terms of BTA with recursion in which no constants ⟨X|E⟩ for infinite E occur, and Tfinrec for the set of all closed terms of BTA with recursion in which no constants ⟨X|E⟩ for infinite E occur. Moreover, we write Tfinrec(A), where A ⊆ A, for the set of all closed terms from Tfinrec that only contain basic actions from A.

4 Applying Threads to Maurer Machines

In this section, we introduce Maurer machines and add for each Maurer machine H a binary apply operator •H to BTA.

A Maurer machine is a tuple H = (M, B, S, I, A, [[ ]]), where (M, B, S, I) is a Maurer computer and:

– A ⊆ A;
– [[ ]] : A → (I × M).

The elements of A are called the basic actions of H, and [[ ]] is called the basic action interpretation function of H.

The apply operators associated with Maurer machines are related to the apply operators introduced in [6]. They allow for threads to transform states of the associated Maurer machine by means of its instructions. Such state transformations produce either a state of the associated Maurer machine or the undefined state ↑. It is assumed that ↑ is not a state of any Maurer machine. We extend function restriction to ↑ by stipulating that ↑ ↾ M = ↑ for any set M. The first operand of the apply operator •H associated with Maurer machine H = (M, B, S, I, A, [[ ]]) must be a term from Tfinrec(A) and its second argument must be a state from S ∪ {↑}.

Table 3. Defining equations for apply operator

S •H S = S
D •H S = ↑
x •H ↑ = ↑
(x E a D y) •H S = x •H Ia(S)   if Ia(S)(ma) = T
(x E a D y) •H S = y •H Ia(S)   if Ia(S)(ma) = F

Table 4. Rule for divergence

∀n ∈ N • πn(x) •H S = ↑
────────────────────────
x •H S = ↑

Let H = (M, B, S, I, A, [[ ]]) be a Maurer machine, let p ∈ Tfinrec(A), and let S ∈ S. Then p •H S is the state from S that results if all basic actions performed by thread p are processed by the Maurer machine H from initial state S. Moreover, let (Ia, ma) = [[a]] for all a ∈ A. Then the processing of a basic action a by H amounts to a state change according to the instruction Ia. In the resulting state, the reply produced by H is contained in memory element ma. If p is S, then there will be no state change. If p is D, then the result is ↑.

Let H = (M, B, S, I, A, [[ ]]) be a Maurer machine, and let (Ia, ma) = [[a]] for all a ∈ A. Then the apply operator •H is defined by the equations given in Table 3 (for a ∈ A and S ∈ S) and the rule given in Table 4 (for S ∈ S). We say that p •H S is convergent if ∃n ∈ N • πn(p) •H S ≠ ↑. If p •H S is convergent, then the number of computation steps of p •H S, written |p •H S|, is the smallest n ∈ N such that πn(p) •H S ≠ ↑. If p •H S is not convergent, then |p •H S| is undefined. We say that p •H S is divergent if p •H S is not convergent. Notice that the rule from Table 4 can be read as follows: if x •H S is divergent, then it equals ↑.

We introduce some auxiliary notions, which are useful in proofs. Let H = (M, B, S, I, A, [[ ]]) be a Maurer machine, and let (Ia, ma) = [[a]] for all a ∈ A. Then the step relation ⊢H ⊆ (Tfinrec(A) × S) × (Tfinrec(A) × S) is inductively defined as follows:

– if Ia(S)(ma) = T and p = p′ E a D p″, then (p, S) ⊢H (p′, Ia(S));
– if Ia(S)(ma) = F and p = p′ E a D p″, then (p, S) ⊢H (p″, Ia(S)).

If (p, S) ⊢H (p′, S′), then p •H S = p′ •H S′. Moreover, let p ∈ Tfinrec(A), and let S ∈ S. Then the full path of p •H S is the unique full path in ⊢H from (p, S). A full path in ⊢H is one of the following:

– a finite path ⟨(p0, S0), . . . , (pn, Sn)⟩ in ⊢H such that there exists no (pn+1, Sn+1) ∈ Tfinrec(A) × S with (pn, Sn) ⊢H (pn+1, Sn+1);
– an infinite path ⟨(p0, S0), (p1, S1), . . .⟩ in ⊢H.

If p •H S is convergent, then its full path is a path of length |p •H S| from (p, S) to (S, S′), where S′ = p •H S. Such a full path is also called a computation.
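Operationally, the apply operator just runs the step relation: fetch the first basic action of the thread, apply the associated instruction, and branch on the reply found in ma. The sketch below (an illustration on the term representation from Section 3, not the paper's formal definition) computes p •H S together with the number of computation steps; a Maurer machine is passed as a dictionary mapping each basic action a to the pair (Ia, ma), and a step bound stands in for a genuine divergence test.

```python
def apply(p, machine, S, max_steps=10_000):
    """Compute p .H S and |p .H S| for a machine given as {a: (I_a, m_a)}.

    Returns (final_state, steps), or (None, None) for the undefined state
    (divergence).  Replies are read as Python booleans; guarded recursion only.
    """
    steps = 0
    while steps <= max_steps:
        if isinstance(p, Rec):                  # <X|E> behaves as <t_X|E>
            p = unfold(p.E[p.X], p.E)
        elif isinstance(p, S_):
            return S, steps                     # S .H S = S
        elif isinstance(p, D_):
            return None, None                   # D .H S = undefined
        else:                                   # postconditional: one computation step
            I_a, m_a = machine[p.a]
            S = I_a(S)
            p = p.p if S[m_a] else p.q          # branch on the reply stored in m_a
            steps += 1
    return None, None

# Example: one memory element 'rr' and one action 'flip' that negates it.
def flip(S):
    return {**S, 'rr': not S['rr']}

H = {'flip': (flip, 'rr')}
p = PostCond(S_(), 'flip', D_())                # do flip; stop on T, deadlock on F
print(apply(p, H, {'rr': False}))               # ({'rr': True}, 1)
```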

5 Executing Stored Basic Actions

In this section, we enhance Maurer machines such that processing of a basic action performed by a thread amounts to first storing it in a special memory element and then executing the instruction associated with the basic action stored in that special memory element. That is, we extend the memory with a basic action register (bar) and a reply register (rr), and the instruction set with store instructions for each action a of the original Maurer machine (Istore:a) and an execute stored basic action instruction (Iexsba). Moreover, we replace the basic actions of the original Maurer machine by basic actions with which the extra instructions are associated. The resulting Maurer machines are called SBA-enhancements. SBA stands for stored basic action.

Let A ⊂ A be such that for all a ∈ A, store:a ∉ A. Then it is assumed that store:a ∈ A for all a ∈ A. Moreover, it is assumed that exsba ∈ A.

Let H = (M, B, S, I, A, [[ ]]) be a Maurer machine with bar, rr ∉ M, Istore:a ∉ I for all a ∈ A, Iexsba ∉ I, store:a ∉ A for all a ∈ A and exsba ∉ A, and let (Ia, ma) = [[a]] for all a ∈ A. Then the SBA-enhancement of H is the Maurer machine (M′, B′, S′, I′, A′, [[ ]]′) such that

M′ = M ∪ {bar, rr} ,
B′ = B ∪ A ∪ B ,
S′ = {S′ : M′ → B′ | S′ ↾ M ∈ S ∧ S′(bar) ∈ A ∧ S′(rr) ∈ B} ,
I′ = {I′ : S′ → S′ | ∃I ∈ I • ∀S′ ∈ S′ • I′(S′) ↾ M = I(S′ ↾ M) ∧ I′(S′) ↾ (M′ \ M) = S′ ↾ (M′ \ M)} ∪ {Istore:a | a ∈ A} ∪ {Iexsba} ,
A′ = {store:a | a ∈ A} ∪ {exsba} ,
[[a]]′ = (Ia, rr) for all a ∈ A′ .

Here, for each a ∈ A, Istore:a is the unique function from S′ to S′ such that for all S′ ∈ S′:

Istore:a(S′) ↾ M = S′ ↾ M ,
Istore:a(S′)(bar) = a ,

and Iexsba is the unique function from S′ to S′ such that for all S′ ∈ S′:

Iexsba(S′) ↾ M = IS′(bar)(S′ ↾ M) ,
Iexsba(S′)(bar) = S′(bar) ,
Iexsba(S′)(rr) = IS′(bar)(S′ ↾ M)(mS′(bar)) .

Because the memory is extended with only finitely many memory elements, it is easy to check, using Proposition 1, that an SBA-enhancement of a Maurer machine is a Maurer machine indeed. The same remark applies to all subsequent enhancements as well.

We define inductively a transformation function φ on Tfinrec:

φ(X) = X ,
φ(S) = S ,
φ(D) = D ,
φ(t1 E a D t2) = store:a ◦ (φ(t1) E exsba D φ(t2)) ,
φ(⟨X0|{X0 = t0, . . . , Xn = tn}⟩) = ⟨X0|{X0 = φ(t0), . . . , Xn = φ(tn)}⟩ .
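On the same term representation, φ is a one-line recursion per case; the sketch below mirrors the defining equations (store:a and exsba are encoded as strings) and is only an illustration.

```python
def phi(t: Thread) -> Thread:
    """Replace each basic action a by store:a followed by exsba (the map phi)."""
    if isinstance(t, (S_, D_, Var)):
        return t
    if isinstance(t, PostCond):
        # phi(t1 <| a |> t2) = store:a o (phi(t1) <| exsba |> phi(t2))
        return prefix('store:' + t.a, PostCond(phi(t.p), 'exsba', phi(t.q)))
    if isinstance(t, Rec):
        # phi(<X0|{X0=t0,...}>) = <X0|{X0=phi(t0),...}>
        return Rec(t.X, {X: phi(tX) for X, tX in t.E.items()})
    raise ValueError("not a BTA term")
```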

Applying thread p to a state of Maurer machine H has the same effect as applying the transformation of p to the corresponding state of the SBA-enhancement of H.

Theorem 6 (SBA-enhancement). Let H′ = (M′, B′, S′, I′, A′, [[ ]]′) be the SBA-enhancement of H = (M, B, S, I, A, [[ ]]), let S′0 ∈ S′, and let p ∈ Tfinrec(A). Then p •H (S′0 ↾ M) = (φ(p) •H′ S′0) ↾ M.

Proof. Let (Ia, ma) = [[a]] for all a ∈ A, and let (Ia, rr) = [[a]]′ for all a ∈ A′. It is easy to see that for all a ∈ A and S′ ∈ S′:

Ia(S′ ↾ M) = Iexsba(Istore:a(S′)) ↾ M ,   (1)
Ia(S′ ↾ M)(ma) = Iexsba(Istore:a(S′))(rr) .   (2)

In the case where p •H (S′0 ↾ M) is convergent, it is easy to prove the theorem by induction on |p •H (S′0 ↾ M)|, using equations (1) and (2). In the case where p •H (S′0 ↾ M) is not convergent, the theorem follows immediately from the claim that πn(p) •H (S′0 ↾ M) = (π2n(φ(p)) •H′ S′0) ↾ M for all n ∈ N. This claim is easily proved by induction on n, using equations (1) and (2). ⊓⊔

On the SBA-enhancement of a Maurer machine H, processing of a basic action performed by a thread p amounts to first storing it in the special memory element bar and then executing the instruction associated with the basic action stored in bar. For storing basic actions in bar and executing basic actions stored in bar, the special basic actions store:a and exsba are introduced. Thus, processing can be brought under control of a variant of the thread p, viz. the thread obtained by applying the transformation φ to p.

In subsequent sections, we will introduce several other kinds of enhancement of Maurer machines based on the idea that processing of a basic action performed by a thread p amounts to first storing it in a special memory element and then executing the instruction associated with the basic action stored in that special memory element. However, for each of those other kinds, processing can be brought under control of a single special thread. This is in most cases accomplished by storing a representation of the thread p in a part of the memory of the enhanced Maurer machine.

6 Representation of Threads

In this section, we make precise how to represent threads in the memory of a Maurer machine.

It is assumed that a fixed but arbitrary finite set Mt and a fixed but arbitrary bijection mt : [0, card(Mt) − 1] → Mt have been given. Mt is called the thread memory. We write size(Mt) for card(Mt). Let n, n′ ∈ [0, size(Mt) − 1] be such that n ≤ n′. Then, we write Mt[n] for mt(n), and Mt[n, n′] for {mt(k) | n ≤ k ≤ n′}. The thread memory is a memory of which the elements can be addressed by means of elements of [0, size(Mt) − 1]. We write At for [0, size(Mt) − 1].

The thread memory elements are meant for containing the representations of nodes that form part of a simple graph representation of a thread. Here, the representation of a node is either S, D or a triple consisting of a basic action and two natural numbers addressing thread memory elements containing representations of other nodes.

Let n, n′ ∈ At be such that n ≤ n′. Then, we write Bt[n, n′] for {S, D} ∪ ([n, n′] × A × [n, n′]). We write Bt for Bt[0, size(Mt) − 1]. Bt is called the thread memory base set. We write St for the set of all functions St : Mt → Bt.

Let p ∈ Tfinrec. Then the nodes of the graph representation of p, written Nodes(p), is the smallest subset of Tfinrec such that:

– p ∈ Nodes(p);
– if p′ E a D q′ ∈ Nodes(p), then p′, q′ ∈ Nodes(p);
– if ⟨X0|{X0 = t0, . . . , Xn = tn}⟩ ∈ Nodes(p) and ⟨t0|{X0 = t0, . . . , Xn = tn}⟩ ≡ p′ E a D q′, then p′, q′ ∈ Nodes(p).

We write size(p) for card(Nodes(p)). It is assumed that for all p ∈ Tfinrec, a fixed but arbitrary bijection nodep : [0, size(p) − 1] → Nodes(p) with nodep(0) = p has been given.

Let p ∈ Tfinrec be such that size(p) ≤ size(Mt). Then the stored graph representation of p, written st(p), is the unique st : Mt[0, size(p) − 1] → Bt[0, size(p) − 1] such that for all n ∈ [0, size(p) − 1], st(Mt[n]) = nreprp(nodep(n)), where nreprp : Nodes(p) → Bt[0, size(p) − 1] is defined as follows:

nreprp(S) = S ,
nreprp(D) = D ,
nreprp(p′ E a D q′) = (nodep−1(p′), a, nodep−1(q′)) ,
nreprp(⟨X0|{X0 = t0, . . . , Xn = tn}⟩) = (nodep−1(p′), a, nodep−1(q′)) if ⟨t0|{X0 = t0, . . . , Xn = tn}⟩ ≡ p′ E a D q′ .

We call st(p) a stored thread.

Notice that st(p) is not defined for p with size(p) > size(Mt). The size of the thread memory restricts the threads that can be stored.
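The stored graph representation can be computed mechanically from a finite-state thread: number the nodes of its graph representation, starting from the thread itself as node 0, and record for each node either S, D or a triple (i, a, j). The sketch below does this on the term representation used earlier; it assumes a guarded specification and identifies nodes by syntactic equality, and its names are not the paper's.

```python
def stored_thread(p: Thread):
    """Return (nodes, st): a node numbering of p and the map st(p) as a dict."""
    nodes = [p]                              # node 0 is p itself, as node_p requires
    i = 0
    while i < len(nodes):
        t = nodes[i]
        if isinstance(t, Rec):               # <X0|E> is unfolded to <t_X0|E>
            t = unfold(t.E[t.X], t.E)
        if isinstance(t, PostCond):
            for succ in (t.p, t.q):
                if succ not in nodes:        # syntactic equality identifies nodes
                    nodes.append(succ)
        i += 1

    st = {}
    for n, t in enumerate(nodes):
        u = unfold(t.E[t.X], t.E) if isinstance(t, Rec) else t
        if isinstance(u, S_):
            st[n] = 'S'
        elif isinstance(u, D_):
            st[n] = 'D'
        else:                                # u = p' <| a |> q'
            st[n] = (nodes.index(u.p), u.a, nodes.index(u.q))
    return nodes, st

# The thread X with X = a o X is stored as the single triple (0, 'a', 0).
print(stored_thread(Rec("X", {"X": prefix("a", Var("X"))}))[1])   # {0: (0, 'a', 0)}
```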

7 No Stored Threads, but a Single Control Thread

Before we make use of stored threads to accomplish that a single control thread is sufficient, we demonstrate in this section that this can also be accomplished without stored threads at the cost of flexibility.

We enhance Maurer machines by extending the memory with a node register (nr), a basic action register (bar) and a reply register (rr), and the instruction set with a halt instruction (Ihalt), two fetch instructions (Ifetch:T, Ifetch:F) and an execute stored basic action instruction (Iexsba). Moreover, we replace the basic actions of the original Maurer machine by basic actions with which the extra instructions are associated. The resulting Maurer machines are called SBA′-enhancements.

In the definition of an SBA′-enhancement of a Maurer machine given below, nreprp(n), where n ∈ [0, size(p) − 1], abbreviates nreprp(nodep(n)).

It is assumed that halt ∈ A, that fetch:r ∈ A for all r ∈ B, and that exsba ∈ A.

Let H = (M, B, S, I, A, [[ ]]) be a Maurer machine with nr, bar, rr ∉ M, Ihalt ∉ I, Ifetch:r ∉ I for all r ∈ B, Iexsba ∉ I, halt ∉ A, fetch:r ∉ A for all r ∈ B and exsba ∉ A, and let (Ia, ma) = [[a]] for all a ∈ A. Let also p ∈ Tfinrec(A). Then the SBA′-enhancement of H for p is the Maurer machine H′ = (M′, B′, S′, I′, A′, [[ ]]′) such that

M′ = M ∪ {nr, bar, rr} ,
B′ = B ∪ [−1, size(p) − 1] ∪ A ∪ B ,
S′ = {S′ : M′ → B′ | S′ ↾ M ∈ S ∧ S′(nr) ∈ [−1, size(p) − 1] ∧ S′(bar) ∈ A ∧ S′(rr) ∈ B} ,
I′ = {I′ : S′ → S′ | ∃I ∈ I • ∀S′ ∈ S′ • I′(S′) ↾ M = I(S′ ↾ M) ∧ I′(S′) ↾ (M′ \ M) = S′ ↾ (M′ \ M)} ∪ {Ihalt} ∪ {Ifetch:r | r ∈ B} ∪ {Iexsba} ,
A′ = {halt} ∪ {fetch:r | r ∈ B} ∪ {exsba} ,
[[a]]′ = (Ia, rr) for all a ∈ A′ .

Here, Ihalt is the unique function from S′ to S′ such that for all S′ ∈ S′:

Ihalt(S′) ↾ M = S′ ↾ M ,
Ihalt(S′)(nr) = S′(nr) ,
Ihalt(S′)(bar) = S′(bar) ,
Ihalt(S′)(rr) = T if nodep(S′(nr)) = S ,
Ihalt(S′)(rr) = F if nodep(S′(nr)) ≠ S ;

for each r ∈ B, Ifetch:r is the unique function from S′ to S′ such that for all S′ ∈ S′:

Ifetch:r(S′) ↾ M = S′ ↾ M ,
Ifetch:r(S′)(nr) = nnn(S′, r) ,
Ifetch:r(S′)(bar) = π2(nreprp(nnn(S′, r))) if nodep(nnn(S′, r)) ∉ {S, D} ,
Ifetch:r(S′)(bar) = S′(bar) if nodep(nnn(S′, r)) ∈ {S, D} ,
Ifetch:r(S′)(rr) = T if nodep(nnn(S′, r)) ∉ {S, D} ,
Ifetch:r(S′)(rr) = F if nodep(nnn(S′, r)) ∈ {S, D} ,

where nnn : S′ × B → [0, size(p) − 1] is defined as follows:

nnn(S′, T) = π1(nreprp(S′(nr))) if S′(nr) ≠ −1 ∧ nodep(S′(nr)) ∉ {S, D} ,
nnn(S′, F) = π3(nreprp(S′(nr))) if S′(nr) ≠ −1 ∧ nodep(S′(nr)) ∉ {S, D} ,
nnn(S′, r) = S′(nr) if S′(nr) ≠ −1 ∧ nodep(S′(nr)) ∈ {S, D} ,
nnn(S′, r) = 0 if S′(nr) = −1 ;

and Iexsba is the unique function from S′ to S′ such that for all S′ ∈ S′:

Iexsba(S′) ↾ M = IS′(bar)(S′ ↾ M) ,
Iexsba(S′)(nr) = S′(nr) ,
Iexsba(S′)(bar) = S′(bar) ,
Iexsba(S′)(rr) = IS′(bar)(S′ ↾ M)(mS′(bar)) .

The node register nr is meant for containing the number that corresponds to the node of the graph representation of p from which most recently a basic action has been fetched. That node, together with the reply produced at completion of the execution of the basic action concerned, determines the node from which next time a basic action must be fetched. To indicate that no basic action has been fetched yet, nr must initially contain −1. The number corresponding to the node from which the first time a basic action must be fetched, i.e. the root, is 0. Consider the guarded recursive specification over BTA that consists of the following equations:

CT = (CT E exsba D CT′) E fetch:T D (S E halt D D) ,
CT′ = (CT E exsba D CT′) E fetch:F D (S E halt D D) .

Applying thread p to a state of Maurer machine H has the same effect as applying control thread CT to the corresponding state of the SBA0-enhancement of H for p.
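The behaviour of the control thread CT (and its companion CT′) can be read as a simple loop: try to fetch, then either execute the fetched basic action or halt. The sketch below simulates that loop against any callback do(action) that processes one of fetch:T, fetch:F, exsba and halt and returns the reply, the way the basic action interpretation of an SBA′-enhancement would; it is an informal illustration, not the recursive specification itself.

```python
def control_loop(do):
    """Run the control threads CT / CT' as a loop over the special actions.

    `do(action)` processes fetch:T, fetch:F, exsba or halt and returns the reply
    (True/False).  The flag r distinguishes CT (r True) from CT' (r False).
    Returns True for successful termination (S) and False for deadlock (D).
    """
    r = True                                    # start as CT
    while True:
        if do('fetch:T' if r else 'fetch:F'):   # a basic action was fetched
            r = do('exsba')                     # execute it; the reply selects CT or CT'
        else:                                   # the fetched node was S or D
            return do('halt')                   # halt replies T for S, F for D
```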

Theorem 7 (SBA′-enhancement). Let H′ = (M′, B′, S′, I′, A′, [[ ]]′) be the SBA′-enhancement of H = (M, B, S, I, A, [[ ]]) for p ∈ Tfinrec(A), and let S′0 ∈ S′ be such that S′0(nr) = −1. Then p •H (S′0 ↾ M) = (CT •H′ S′0) ↾ M.

Proof. Let (Ia, ma) = [[a]] for all a ∈ A, and let (Ia, rr) = [[a]]′ for all a ∈ A′. Then it is easy to see that for all S′ ∈ S′ with nodep(nnn(S′, S′(rr))) ∉ {S, D}:

Ia(S′ ↾ M) = Iexsba(Ifetch:r(S′)) ↾ M ,   (3)
Ia(S′ ↾ M)(ma) = Iexsba(Ifetch:r(S′))(rr) ,   (4)

where a = π2(nreprp(nodep(nnn(S′, S′(rr))))) and r = S′(rr).

Let (pn, Sn) be the (n+1)-th element in the full path of p •H (S′0 ↾ M), and let (p′n, S′n) be the (n+1)-th element in the full path of CT •H′ S′0. Then it is easy to prove by induction on n that

p′2n+2 = CT  if S′2n+1(rr) = T ∧ S′2n+2(rr) = T ,
p′2n+2 = CT′ if S′2n+1(rr) = T ∧ S′2n+2(rr) = F ,
p′2n+2 = S   if S′2n+1(rr) = F ∧ S′2n+2(rr) = T ,
p′2n+2 = D   if S′2n+1(rr) = F ∧ S′2n+2(rr) = F   (5)

(for 2n + 2 < |CT •H′ S′0| if CT •H′ S′0 is convergent). Moreover, using (3), (4) and (5), it is straightforward to prove by induction on n that:

– pn+1 is represented by the part of the graph representation of p of which the root is nodep(nnn(S′2n+2, S′2n+2(rr)));
– Sn+1 = S′2n+2 ↾ M

(for n + 1 < |p •H (S′0 ↾ M)| if p •H (S′0 ↾ M) is convergent). From this, the theorem follows immediately. ⊓⊔

The SBA′-enhancements of a Maurer machine for different threads have different fetch instructions. That is why SBA′-enhancements are inflexible from a practical point of view: it is virtually impossible to change an instruction available on a real machine. On the other hand, it is easy to change the stored thread present in the memory of a real machine.

8 Fetching Basic Actions from a Stored Thread

In this section, we make use of stored threads to accomplish that a single control thread is sufficient.

We enhance Maurer machines by extending the memory with a thread memory (Mt), a thread location register (tlr), a basic action register (bar) and a reply register (rr), and the instruction set with a halt instruction (Ihalt), two fetch instructions (Ifetch:T, Ifetch:F) and an execute stored basic action instruction (Iexsba). Moreover, we replace the basic actions of the original Maurer machine by basic actions with which the extra instructions are associated. The resulting Maurer machines are called ST-4I-enhancements. ST stands for stored thread and 4I indicates that there are four control instructions available.

Let H = (M, B, S, I, A, [[ ]]) be a Maurer machine with Mt ⊈ M, tlr, bar, rr ∉ M, Ihalt ∉ I, Ifetch:r ∉ I for all r ∈ B, Iexsba ∉ I, halt ∉ A, fetch:r ∉ A for all r ∈ B and exsba ∉ A, and let (Ia, ma) = [[a]] for all a ∈ A. Then the ST-4I-enhancement of H is the Maurer machine H′ = (M′, B′, S′, I′, A′, [[ ]]′) such that

M′ = M ∪ Mt ∪ {tlr, bar, rr} ,
B′ = B ∪ Bt ∪ At ∪ {−1} ∪ A ∪ B ,
S′ = {S′ : M′ → B′ | S′ ↾ M ∈ S ∧ S′ ↾ Mt ∈ St ∧ S′(tlr) ∈ At ∪ {−1} ∧ S′(bar) ∈ A ∧ S′(rr) ∈ B} ,
I′ = {I′ : S′ → S′ | ∃I ∈ I • ∀S′ ∈ S′ • I′(S′) ↾ M = I(S′ ↾ M) ∧ I′(S′) ↾ (M′ \ M) = S′ ↾ (M′ \ M)} ∪ {Ihalt} ∪ {Ifetch:r | r ∈ B} ∪ {Iexsba} ,
A′ = {halt} ∪ {fetch:r | r ∈ B} ∪ {exsba} ,
[[a]]′ = (Ia, rr) for all a ∈ A′ .

Here, Ihalt is the unique function from S′ to S′ such that for all S′ ∈ S′:

Ihalt(S′) ↾ M = S′ ↾ M ,
Ihalt(S′) ↾ Mt = S′ ↾ Mt ,
Ihalt(S′)(tlr) = S′(tlr) ,
Ihalt(S′)(bar) = S′(bar) ,
Ihalt(S′)(rr) = T if S′(tlr) = S ,
Ihalt(S′)(rr) = F if S′(tlr) ≠ S ;

for each r ∈ B, Ifetch:r is the unique function from S′ to S′ such that for all S′ ∈ S′:

Ifetch:r(S′) ↾ M = S′ ↾ M ,
Ifetch:r(S′) ↾ Mt = S′ ↾ Mt ,
Ifetch:r(S′)(tlr) = ntla(S′, r) ,
Ifetch:r(S′)(bar) = π2(S′(Mt[ntla(S′, r)])) if S′(Mt[ntla(S′, r)]) ∉ {S, D} ,
Ifetch:r(S′)(bar) = S′(bar) if S′(Mt[ntla(S′, r)]) ∈ {S, D} ,
Ifetch:r(S′)(rr) = T if S′(Mt[ntla(S′, r)]) ∉ {S, D} ,
Ifetch:r(S′)(rr) = F if S′(Mt[ntla(S′, r)]) ∈ {S, D} ,

where ntla : S′ × B → At is defined as follows:

ntla(S′, T) = π1(S′(Mt[S′(tlr)])) if S′(tlr) ∈ At ∧ S′(Mt[S′(tlr)]) ∉ {S, D} ,
ntla(S′, F) = π3(S′(Mt[S′(tlr)])) if S′(tlr) ∈ At ∧ S′(Mt[S′(tlr)]) ∉ {S, D} ,
ntla(S′, r) = S′(tlr) if S′(tlr) ∈ At ∧ S′(Mt[S′(tlr)]) ∈ {S, D} ,
ntla(S′, r) = 0 if S′(tlr) = −1 ;

and Iexsba is the unique function from S′ to S′ such that for all S′ ∈ S′:

Iexsba(S′) ↾ M = IS′(bar)(S′ ↾ M) ,
Iexsba(S′) ↾ Mt = S′ ↾ Mt ,
Iexsba(S′)(tlr) = S′(tlr) ,
Iexsba(S′)(bar) = S′(bar) ,
Iexsba(S′)(rr) = IS′(bar)(S′ ↾ M)(mS′(bar)) .

The thread location register tlr is meant for containing the address of the thread memory element from which most recently a basic action has been fetched. The contents of that thread memory element, together with the reply produced at completion of the execution of the basic action concerned, determines the thread memory element from which next time a basic action must be fetched. To indicate that no basic action has been fetched yet, tlr must initially contain −1. The thread memory element from which the first time a basic action must be fetched is the one at address 0.

Consider again the guarded recursive specification over BTA that consists of the following equations:

CT = (CT E exsba D CT′) E fetch:T D (S E halt D D) ,
CT′ = (CT E exsba D CT′) E fetch:F D (S E halt D D) .

Applying thread p to a state of Maurer machine H has the same effect as applying control thread CT to the corresponding state of the ST-4I-enhancement of H in which the thread memory contains the stored graph representation of p.

Theorem 8 (ST-4I-enhancement). Let H′ = (M′, B′, S′, I′, A′, [[ ]]′) be the ST-4I-enhancement of H = (M, B, S, I, A, [[ ]]), let p ∈ Tfinrec(A) be such that size(p) ≤ size(Mt), and let S′0 ∈ S′ be such that S′0 ↾ Mt[0, size(p) − 1] = st(p) and S′0(tlr) = −1. Then p •H (S′0 ↾ M) = (CT •H′ S′0) ↾ M.

Proof. Let (Ia, ma) = [[a]] for all a ∈ A, and let (Ia, rr) = [[a]]′ for all a ∈ A′. Then it is easy to see that for all S′ ∈ S′ with S′(Mt[ntla(S′, S′(rr))]) ∉ {S, D}:

Ia(S′ ↾ M) = Iexsba(Ifetch:r(S′)) ↾ M ,   (6)
Ia(S′ ↾ M)(ma) = Iexsba(Ifetch:r(S′))(rr) ,   (7)

where a = π2(S′(Mt[ntla(S′, S′(rr))])) and r = S′(rr).

Let (pn, Sn) be the (n+1)-th element in the full path of p •H (S′0 ↾ M), and let (p′n, S′n) be the (n+1)-th element in the full path of CT •H′ S′0. Then it is easy to prove by induction on n that

p′2n+2 = CT  if S′2n+1(rr) = T ∧ S′2n+2(rr) = T ,
p′2n+2 = CT′ if S′2n+1(rr) = T ∧ S′2n+2(rr) = F ,
p′2n+2 = S   if S′2n+1(rr) = F ∧ S′2n+2(rr) = T ,
p′2n+2 = D   if S′2n+1(rr) = F ∧ S′2n+2(rr) = F   (8)

(for 2n + 2 < |CT •H′ S′0| if CT •H′ S′0 is convergent). Moreover, using (6), (7) and (8), it is straightforward to prove by induction on n that:

– pn+1 is represented by the part of st(p) to which ntla(S′2n+2, S′2n+2(rr)) points;
– Sn+1 = S′2n+2 ↾ M

(for n + 1 < |p •H (S′0 ↾ M)| if p •H (S′0 ↾ M) is convergent). From this, the theorem follows immediately. ⊓⊔

Notice that the proof of Theorem 7 and the proof of Theorem 8 follow similar lines.

The size of a stored thread may exceed the size of the thread memory of an ST-4I-enhancement. In other words, an ST-4I-enhancement cannot handle finite-state threads of arbitrary size. Section 11 shows how to get around this limitation.

9 A Universal Control Instruction

On the ST-4I-enhancements of Maurer machines, four instructions are available for controlling the processing of basic actions. In this section, we enhance Maurer machines such that a single universal control instruction becomes available. That is, we extend the memory with a thread memory (Mt), a thread location register (tlr), a basic action register (bar), a reply register (rr) and a fetch mode register (fmr), and the instruction set with a step instruction (Istep). Moreover, we replace the basic actions of the original Maurer machine by a basic action with which the extra instruction is associated. The resulting Maurer machines are called ST-1I-enhancements. ST stands again for stored thread and 1I indicates that there is one control instruction available.

It is assumed that step ∈ A.

Let H = (M, B, S, I, A, [[ ]]) be a Maurer machine with Mt ⊈ M, tlr, bar, rr, fmr ∉ M, Istep ∉ I and step ∉ A, and let (Ia, ma) = [[a]] for all a ∈ A. Then the ST-1I-enhancement of H is the Maurer machine H′ = (M′, B′, S′, I′, A′, [[ ]]′) such that

M′ = M ∪ Mt ∪ {tlr, bar, rr, fmr} ,
B′ = B ∪ Bt ∪ At ∪ {−1} ∪ A ∪ B ,
S′ = {S′ : M′ → B′ | S′ ↾ M ∈ S ∧ S′ ↾ Mt ∈ St ∧ S′(tlr) ∈ At ∪ {−1} ∧ S′(bar) ∈ A ∧ S′(rr) ∈ B ∧ S′(fmr) ∈ B} ,
I′ = {I′ : S′ → S′ | ∃I ∈ I • ∀S′ ∈ S′ • I′(S′) ↾ M = I(S′ ↾ M) ∧ I′(S′) ↾ (M′ \ M) = S′ ↾ (M′ \ M)} ∪ {Istep} ,
A′ = {step} ,
[[step]]′ = (Istep, rr) .


Here, Istep is the unique function from S′ to S′ such that for all S′ ∈ S′:

Istep(S′) ↾ M″ = Ifetch:r(S′ ↾ M″) if S′(fmr) = T ∧ S′(rr) = r ,
Istep(S′) ↾ M″ = Iexsba(S′ ↾ M″) if S′(fmr) = F ∧ S′(rr) = T ,
Istep(S′) ↾ M″ = Ihalt(S′ ↾ M″) if S′(fmr) = F ∧ S′(rr) = F ,
Istep(S′)(fmr) = F if S′(fmr) = T ,
Istep(S′)(fmr) = T if S′(fmr) = F ,

where M″ = M ∪ Mt ∪ {tlr, bar, rr} and Ifetch:r, Iexsba and Ihalt are defined as in the definition of the ST-4I-enhancement.

Consecutive executions of the instruction step alternate between a fetch mode and an execute mode. The fetch mode register fmr is meant for containing a flag that indicates whether the next time step is executed the mode is fetch mode. The contents of that register, together with the contents of the reply register, determines whether the next time step is executed actually halt, fetch:T, fetch:F or exsba is executed.
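One way to picture Istep is as a dispatcher over the ST-4I control instructions that toggles between fetch mode and execute mode. The sketch below builds such a dispatcher from given implementations of the four instructions; the helper names and the dictionary-based state are illustrative only.

```python
def make_step(I_fetch_T, I_fetch_F, I_exsba, I_halt):
    """Build an I_step-like dispatcher from the four ST-4I control instructions.

    The state is a dict containing at least 'fmr' (fetch mode) and 'rr' (reply).
    """
    def I_step(S):
        if S['fmr']:                                    # fetch mode: fetch:rr
            S2 = (I_fetch_T if S['rr'] else I_fetch_F)(S)
        elif S['rr']:                                   # execute mode, action fetched
            S2 = I_exsba(S)
        else:                                           # execute mode, node was S or D
            S2 = I_halt(S)
        return {**S2, 'fmr': not S['fmr']}              # toggle the mode
    return I_step
```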

Consider the guarded recursive specification over BTA that consists of the following equations:

CT″ = (step ◦ CT″) E step D (S E step D D) .

Notice that odd steps are actually fetch steps, which may fail because of termination or deadlock of the controlled thread. Applying thread p to a state of Maurer machine H has the same effect as applying control thread CT″ to the corresponding state of the ST-1I-enhancement of H in which the thread memory contains the stored graph representation of p.

Theorem 9 (ST-1I-enhancement). Let H′ = (M′, B′, S′, I′, A′, [[ ]]′) be the ST-1I-enhancement of H = (M, B, S, I, A, [[ ]]), let p ∈ Tfinrec(A) be such that size(p) ≤ size(Mt), and let S′0 ∈ S′ be such that S′0 ↾ Mt[0, size(p) − 1] = st(p), S′0(tlr) = −1 and S′0(fmr) = T. Then p •H (S′0 ↾ M) = (CT″ •H′ S′0) ↾ M.

Proof. The proof follows the same line as the proof of Theorem 8. In the proof, the equations corresponding to equations (6) and (7) hold only for states S′ with S′(fmr) = T. This does not stand in the way of following the same line, because this extra condition is satisfied by all states S′ that have to be related to the state component of an element in the full path of p •H (S′0 ↾ M). ⊓⊔

10 Parallel Maurer Machines and Interleaving of Threads

In Section 11, we will demonstrate that a Maurer machine with a fixed finite memory can deal with any finite-state thread, provided that it is put in parallel with a Maurer machine of a suitable kind that can hold the thread concerned. In this section, we introduce the parallel composition of Maurer machines. Moreover, because the control threads of the Maurer machines have to be interleaved if they are put in parallel, we add an operator for that purpose to BTA.


Table 5. Axioms for cyclic interleaving

∥(⟨ ⟩) = S                                              CSI1
∥(⟨S⟩ y α) = ∥(α)                                       CSI2
∥(⟨D⟩ y α) = SD(∥(α))                                   CSI3
∥(⟨x E a D y⟩ y α) = ∥(α y ⟨x⟩) E a D ∥(α y ⟨y⟩)        CSI4

Table 6. Axioms for deadlock at termination

SD(S) = D                              S2D1
SD(D) = D                              S2D2
SD(x E a D y) = SD(x) E a D SD(y)      S2D3

Let Hi = (Mi, Bi, Si, Ii, Ai, [[ ]]i), for i = 1, 2, be Maurer machines with, for all x ∈ M1 ∩ M2, either ∀I1 ∈ I1 • x ∉ OR(I1) or ∀I2 ∈ I2 • x ∉ OR(I2), and A1 ∩ A2 = ∅. Then the parallel composition of H1 and H2, written H1 ∥ H2, is the unique Maurer machine (M, B, S, I, A, [[ ]]) such that

M = M1 ∪ M2 ,
B = B1 ∪ B2 ,
S = {S : M → B | S ↾ M1 ∈ S1 ∧ S ↾ M2 ∈ S2} ,
I = I1 ∪ I2 ,
A = A1 ∪ A2 ,
[[a]] = [[a]]1 if a ∈ A1 ,
[[a]] = [[a]]2 if a ∈ A2 .

Notice that the parallel composition of two Maurer machines is defined only if each common memory element is read-only for at least one of the Maurer machines. It is usual that the common memory elements do duty for communication between the parallel Maurer machines.

A thread vector is a sequence of threads. Strategic interleaving operators turn a thread vector of arbitrary length into a single thread. Several kinds of strategic interleaving have been elaborated in earlier work, see e.g. [5]. In this section, we only cover the simplest interleaving strategy, namely cyclic interleaving. The strategic interleaving operator for cyclic interleaving is denoted by ∥( ).

The axioms for cyclic interleaving are given in Table 5. In CSI3, the auxiliary deadlock at termination operator SD( ) is used. It turns termination into deadlock. Its axioms appear in Table 6.
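Axioms CSI1–CSI4 and S2D1–S2D3 describe a rotating queue of threads: the thread at the head performs one basic action and is moved to the back. For finite, recursion-free threads this reading can be executed literally; the sketch below does so on the term representation from Section 3 and is only an illustration.

```python
def sd(t: Thread) -> Thread:
    """Deadlock at termination (S2D1-S2D3), for finite recursion-free threads."""
    if isinstance(t, (S_, D_)):
        return D_()
    return PostCond(sd(t.p), t.a, sd(t.q))

def cyclic(alpha: list) -> Thread:
    """Cyclic interleaving ||(alpha) by CSI1-CSI4, for finite recursion-free threads."""
    if not alpha:
        return S_()                                          # CSI1
    head, tail = alpha[0], alpha[1:]
    if isinstance(head, S_):
        return cyclic(tail)                                  # CSI2
    if isinstance(head, D_):
        return sd(cyclic(tail))                              # CSI3
    # CSI4: the head thread does one step and its continuations go to the back
    return PostCond(cyclic(tail + [head.p]), head.a, cyclic(tail + [head.q]))

# ||(<a o S> <b o S>) performs a, then b, then terminates.
print(cyclic([prefix('a', S_()), prefix('b', S_())]))
```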

The structural operational semantics of BTA extended with recursion, projection and cyclic interleaving is described by the transition rules given in Tables 2 and 7. Here ⟨x, ρ⟩ ↛ stands for the set of all conditions ¬(⟨x, ρ⟩ −a→ ⟨p′, ρ′⟩) where p′ is a closed term of this extension, ρ′ ∈ E, a ∈ A.

Table 7. Transition rules for cyclic interleaving and deadlock at termination

⟨x1, ρ⟩↓, . . . , ⟨xk, ρ⟩↓, ⟨xk+1, ρ⟩ −a→ ⟨x′k+1, ρ′⟩
────────────────────────────────────────────────────────   (k ≥ 0)
⟨∥(⟨x1⟩ y . . . y ⟨xk+1⟩ y α), ρ⟩ −a→ ⟨∥(α y ⟨x′k+1⟩), ρ′⟩

⟨x1, ρ⟩ ↛, . . . , ⟨xk, ρ⟩ ↛, ⟨xl, ρ⟩↑, ⟨xk+1, ρ⟩ −a→ ⟨x′k+1, ρ′⟩
────────────────────────────────────────────────────────────────   (k ≥ l > 0)
⟨∥(⟨x1⟩ y . . . y ⟨xk+1⟩ y α), ρ⟩ −a→ ⟨∥(α y ⟨D⟩ y ⟨x′k+1⟩), ρ′⟩

⟨x1, ρ⟩↓, . . . , ⟨xk, ρ⟩↓
───────────────────────────
⟨∥(⟨x1⟩ y . . . y ⟨xk⟩), ρ⟩↓

⟨x1, ρ⟩ ↛, . . . , ⟨xk, ρ⟩ ↛, ⟨xl, ρ⟩↑
──────────────────────────────────────   (k ≥ l > 0)
⟨∥(⟨x1⟩ y . . . y ⟨xk⟩), ρ⟩↑

⟨x, ρ⟩ −a→ ⟨x′, ρ′⟩
────────────────────────────
⟨SD(x), ρ⟩ −a→ ⟨SD(x′), ρ′⟩

⟨x, ρ⟩↓
──────────
⟨SD(x), ρ⟩↑

⟨x, ρ⟩↑
──────────
⟨SD(x), ρ⟩↑

Bisimulation equivalence is also a congruence with respect to the cyclic interleaving operator and the deadlock at termination operator. This follows immediately from the fact that the transition rules from Tables 2 and 7 constitute a complete transition system specification in relaxed panth format (see e.g. [12]).

11 Dealing with Finite-State Threads of Arbitrary Size

In this section, we demonstrate that we can deal with finite-state threads of arbitrary size by means of an enhanced Maurer machine that does the execution of stored basic actions, but leaves the fetching of those basic actions to a remote Maurer machine of which the memory size is sufficient for the thread concerned.

We enhance Maurer machines by extending the memory with a basic action register (bar), a reply register (rr), a remote reply register (rrr) and a stop mode register (smr), and the instruction set with a halt instruction (Ihalt) and an execute stored basic action instruction (Iexsba). Moreover, we replace the basic actions of the original Maurer machine by basic actions with which the extra instructions are associated. The resulting Maurer machines are called RST-enhancements. RST stands for remote stored thread.

We also introduce a Maurer machine with a memory consisting of a thread memory (Mt), a thread location register (tlr), a basic action register (bar), a reply register (rr), a remote reply register (rrr), and a stop mode register (smr), and an instruction set consisting of a fetch instruction (Ifetch). Moreover, this Maurer machine has a basic action with which the fetch instruction is associated. The resulting Maurer machine is called the remote machine for stored threads.

Let H = (M, B, S, I, A, [[ ]]) be a Maurer machine with Mt ⊈ M, tlr, bar, rr, rrr, smr ∉ M, Ihalt ∉ I, Ifetch:r ∉ I for all r ∈ B, Iexsba ∉ I, halt ∉ A, fetch:r ∉ A for all r ∈ B and exsba ∉ A, and let (Ia, ma) = [[a]] for all a ∈ A. Then the RST-enhancement of H is the Maurer machine H′ = (M′, B′, S′, I′, A′, [[ ]]′) such that

M′ = M ∪ {bar, rr, rrr, smr} ,
B′ = B ∪ A ∪ B ,
S′ = {S′ : M′ → B′ | S′ ↾ M ∈ S ∧ S′(bar) ∈ A ∧ S′(rr) ∈ B ∧ S′(rrr) ∈ B ∧ S′(smr) ∈ B} ,
I′ = {I′ : S′ → S′ | ∃I ∈ I • ∀S′ ∈ S′ • I′(S′) ↾ M = I(S′ ↾ M) ∧ I′(S′) ↾ (M′ \ M) = S′ ↾ (M′ \ M)} ∪ {Ihalt, Iexsba} ,
A′ = {halt, exsba} ,
[[halt]]′ = (Ihalt, rr) ,
[[exsba]]′ = (Iexsba, rrr) .

Here, Ihalt is the unique function from S′ to S′ such that for all S′ ∈ S′:

Ihalt(S′) ↾ M = S′ ↾ M ,
Ihalt(S′)(bar) = S′(bar) ,
Ihalt(S′)(rr) = S′(smr) ,
Ihalt(S′)(rrr) = S′(rrr) ,
Ihalt(S′)(smr) = S′(smr) ;

and Iexsba is the unique function from S′ to S′ such that for all S′ ∈ S′:

Iexsba(S′) ↾ M = IS′(bar)(S′ ↾ M) if S′(rrr) = T ,
Iexsba(S′) ↾ M = S′ ↾ M if S′(rrr) = F ,
Iexsba(S′)(bar) = S′(bar) ,
Iexsba(S′)(rr) = IS′(bar)(S′ ↾ M)(mS′(bar)) if S′(rrr) = T ,
Iexsba(S′)(rr) = S′(rr) if S′(rrr) = F ,
Iexsba(S′)(rrr) = S′(rrr) ,
Iexsba(S′)(smr) = S′(smr) .

Moreover, the remote machine for stored threads is the Maurer machine H″ = (M″, B″, S″, I″, A″, [[ ]]″) such that

M″ = Mt ∪ {tlr, bar, rr, rrr, smr} ,
B″ = Bt ∪ At ∪ {−1} ∪ A ∪ B ,
S″ = {S″ : M″ → B″ | S″ ↾ Mt ∈ St ∧ S″(tlr) ∈ At ∪ {−1} ∧ S″(bar) ∈ A ∧ S″(rr) ∈ B ∧ S″(rrr) ∈ B ∧ S″(smr) ∈ B} ,
I″ = {Ifetch} ,
A″ = {fetch} ,
[[fetch]]″ = (Ifetch, rrr) .

Here, Ifetch is the unique function from S″ to S″ such that for all S″ ∈ S″:

Ifetch(S″) ↾ Mt = S″ ↾ Mt ,
Ifetch(S″)(tlr) = ntla(S″, r) ,
Ifetch(S″)(bar) = π2(S″(Mt[ntla(S″, r)])) if S″(Mt[ntla(S″, r)]) ∉ {S, D} ,
Ifetch(S″)(bar) = S″(bar) if S″(Mt[ntla(S″, r)]) ∈ {S, D} ,
Ifetch(S″)(rr) = S″(rr) ,
Ifetch(S″)(rrr) = T if S″(Mt[ntla(S″, r)]) ∉ {S, D} ,
Ifetch(S″)(rrr) = F if S″(Mt[ntla(S″, r)]) ∈ {S, D} ,
Ifetch(S″)(smr) = T if S″(Mt[ntla(S″, r)]) = S ,
Ifetch(S″)(smr) = F if S″(Mt[ntla(S″, r)]) ≠ S ,

where r = S″(rr), and where ntla : S″ × B → At is defined as in the definition of an ST-4I-enhancement.

The common memory elements of the RST-enhancement H′ of a Maurer machine and the remote machine H″ for a stored thread are bar, rr, rrr, smr. The memory elements bar, rrr, smr are not changed by any instructions of H′ and the memory element rr is not changed by any instructions of H″. So, the parallel composition H′ ∥ H″ is defined (cf. Section 10). The fetch, execute and halt instructions found here are similar to the ones of an ST-4I-enhancement. The instruction fetch has the same effect as either fetch:T or fetch:F depending on the contents of rr. The instruction exsba has no effect if rrr contains F.

Consider the guarded recursive specifications over BTA that consist of the following equations:

CT′ = CT′ E exsba D (S E halt D D) ,
CT″ = CT″ E fetch D S .

CT′ and CT″ are control threads for RST-enhancements of Maurer machines and remote machines for stored threads, respectively. Applying thread p to a state of Maurer machine H has the same effect as applying the cyclic interleaving of control threads CT′ and CT″, starting with CT″, to the corresponding state of the parallel composition of the RST-enhancement of H and the remote machine for stored threads in which the thread memory contains the stored graph representation of p.

Theorem 10 (RST-enhancement). Let H′ = (M′, B′, S′, I′, A′, [[ ]]′) be the RST-enhancement of H = (M, B, S, I, A, [[ ]]), let H″ be the remote machine for stored threads, let p ∈ Tfinrec(A) be such that size(p) ≤ size(Mt), let S∗ be the set of states of H′ ∥ H″, and let S∗0 ∈ S∗ be such that S∗0 ↾ Mt[0, size(p) − 1] = st(p), S∗0(tlr) = −1, S∗0(rr) = T. Then p •H (S∗0 ↾ M) = (∥(⟨CT″⟩ y ⟨CT′⟩) •H′∥H″ S∗0) ↾ M.

Proof. Firstly, ∥(⟨CT″⟩ y ⟨CT′⟩) is the solution of the guarded recursive specification over BTA that consists of the following equation:

CT∗ = (CT∗ E exsba D CT∗∗) E fetch D CT′ ,

where CT∗∗ abbreviates ∥(⟨CT″⟩ y ⟨S E halt D D⟩).

Secondly, H′ ∥ H″ is the Maurer machine H∗ = (M∗, B∗, S∗, I∗, A∗, [[ ]]∗) such that

M∗ = M ∪ Mt ∪ {tlr, bar, rr, rrr, smr} ,
B∗ = B ∪ Bt ∪ At ∪ {−1} ∪ A ∪ B ,
S∗ = {S∗ : M∗ → B∗ | S∗ ↾ M ∈ S ∧ S∗ ↾ Mt ∈ St ∧ S∗(tlr) ∈ At ∪ {−1} ∧ S∗(bar) ∈ A ∧ S∗(rr) ∈ B ∧ S∗(rrr) ∈ B ∧ S∗(smr) ∈ B} ,
I∗ = {I∗ : S∗ → S∗ | ∃I ∈ I • ∀S∗ ∈ S∗ • I∗(S∗) ↾ M = I(S∗ ↾ M) ∧ I∗(S∗) ↾ (M∗ \ M) = S∗ ↾ (M∗ \ M)} ∪ {Ihalt, Ifetch, Iexsba} ,
A∗ = {halt, fetch, exsba} ,
[[halt]]∗ = (Ihalt, rr) ,
[[fetch]]∗ = (Ifetch, rrr) ,
[[exsba]]∗ = (Iexsba, rrr) .

Here, Ihalt, Ifetch and Iexsba are the extensions of the instructions Ihalt, Ifetch and Iexsba of H′ and H″ to S∗ such that Ihalt(S∗) ↾ (M∗ \ M′) = S∗ ↾ (M∗ \ M′), Ifetch(S∗) ↾ (M∗ \ M″) = S∗ ↾ (M∗ \ M″) and Iexsba(S∗) ↾ (M∗ \ M′) = S∗ ↾ (M∗ \ M′).

The remainder of the proof follows the same line as the proof of Theorem 8. ⊓⊔

Variations of the way to deal with arbitrary finite-state threads presented above are possible. For example, the fetch and execute instructions could have been kept essentially the same as the ones of an ST-4I-enhancement. In that case, test instructions would have been needed to check the most recently produced reply of the other Maurer machine. Moreover, a cyclic interleaving strategy would have been needed that gives each control thread two consecutive turns.

12 Maurer Machines and Turing Machines

Turing machines were proposed almost 70 years ago by Turing in [13]. In this section, we relate Maurer machines and Turing machines. First of all, we show how Turing machines can be simulated by means of Maurer machines.

Assume that a fixed but arbitrary countably infinite set Mtape and a fixed but arbitrary bijection mtape : N → Mtape have been given. Mtape is called the tape. Let n ∈ N. Then we write Mtape[n] for mtape(n).

The tape is an infinite memory of which the elements can be addressed by means of elements of N. We write Atape for N. The elements of the tape contain 0, 1 or ⊔ (blank). We write Btape for the set {0, 1, ⊔}, and we write Stape for the set of all functions Stape : Mtape → Btape for which there exists an n ∈ Atape such that for all m ∈ Atape with m ≥ n, Stape(Mtape[m]) = ⊔.

It is assumed that test:s, write:s ∈ A for all s ∈ Btape, and that mover, movel ∈ A.

TM is the Maurer machine (M, B, S, I, A, [[ ]]) such that

M = Mtape ∪ {head, rr} ,
B = Btape ∪ Atape ∪ B ,
S = {S : M → B | S ↾ Mtape ∈ Stape ∧ S(head) ∈ Atape ∧ S(rr) ∈ B} ,
I = {Itest:s, Iwrite:s | s ∈ Btape} ∪ {Imover, Imovel} ,
A = {test:s, write:s | s ∈ Btape} ∪ {mover, movel} ,
[[a]] = (Ia, rr) for all a ∈ A .

Here, for each s ∈ Btape, Itest:s is the unique function from S to S such that for

all S ∈ S:

Itest:s(S)  Mtape= S  Mtape,

Itest:s(S)(head) = S(head) ,

Itest:s(S)(rr) = T if S(Mtape[S(head)]) = s ,

Itest:s(S)(rr) = F if S(Mtape[S(head)]) ≠ s ;

for each s ∈ Btape, Iwrite:s is the unique function from S to S such that for all

S ∈ S and n ∈ Atape:

Iwrite:s(S)(Mtape[S(head)]) = s ,

Iwrite:s(S)(Mtape[n]) = S(Mtape[n]) if S(head) ≠ n ,

Iwrite:s(S)(head) = S(head) ,

Iwrite:s(S)(rr) = T ;

Imover is the unique function from S to S such that for all S ∈ S:

Imover(S)  Mtape= S  Mtape,

Imover(S)(head) = S(head) + 1 ,

Imover(S)(rr) = T ;

and, Imovel is the unique function from S to S such that for all S ∈ S:

Imovel(S)  Mtape= S  Mtape ,

Imovel(S)(head) = S(head) − 1 if S(head) > 0 ,

Imovel(S)(head) = 0 if S(head) = 0 ,

Imovel(S)(rr) = T if S(head) > 0 ,

Imovel(S)(rr) = F if S(head) = 0 .

We write STM and ATM for the set of states of TM and the set of basic actions of TM, respectively.
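The instructions of TM can be rendered concretely by representing a tape state as a finite dictionary with default blanks, together with a head position and a reply register. The sketch below follows the definitions above (with Python booleans for T and F and a stand-in character for the blank symbol); it is an illustration, not the paper's construction.

```python
BLANK = ' '   # stand-in for the blank tape symbol

def initial_state(word):
    """A TM state: finitely many non-blank cells, head at 0, reply register True."""
    return {'tape': {i: s for i, s in enumerate(word)}, 'head': 0, 'rr': True}

def cell(S, n):
    return S['tape'].get(n, BLANK)

def test(s):                                    # I_test:s
    def I(S):
        return {**S, 'rr': cell(S, S['head']) == s}
    return I

def write(s):                                   # I_write:s
    def I(S):
        return {**S, 'tape': {**S['tape'], S['head']: s}, 'rr': True}
    return I

def mover(S):                                   # I_mover: move right, always possible
    return {**S, 'head': S['head'] + 1, 'rr': True}

def movel(S):                                   # I_movel: reply F at the left boundary
    at_boundary = S['head'] == 0
    return {**S, 'head': max(S['head'] - 1, 0), 'rr': not at_boundary}

TM_actions = {**{'test:' + s: test(s) for s in ('0', '1', BLANK)},
              **{'write:' + s: write(s) for s in ('0', '1', BLANK)},
              'mover': mover, 'movel': movel}
```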

A Turing thread is a constant ⟨X0|{X0 = t0, . . . , Xn = tn}⟩ ∈ Tfinrec, where t0, . . . , tn are terms of the form t E test:0 D (t′ E test:1 D t″) with t, t′ and t″ of the form write:s ◦ Xi, mover ◦ Xi, Xi E movel D D, S or D.

Clearly, for each Turing machine, there is a Turing thread p ∈ Tfinrec(ATM) such that for all S ∈ STM, p •TM S can simulate the computations of that Turing machine from the initial tape contents S ↾ Mtape and initial head position S(head).⁵

Looking at the instructions used in the simulation of Turing machines by means of a Maurer machine, we observe that the test instructions Itest:s have an infinite input region and a finite output region and that the write instructions Iwrite:s have a finite input region and an infinite output region. It is not difficult to see that these infinite regions are essential for the simulation of many Turing machines. However, if we expand Turing threads to threads definable by infinite recursive specifications, we can simulate all Turing machines using test and write instructions with a finite input region and a finite output region.

In order to simulate Turing machines using test and write instructions with a finite input region and a finite output region, we have to adapt the Maurer machine TM. Moreover, for each Turing machine, we have to adapt the corresponding Turing thread. It happens that the Turing threads can be adapted in a uniform way.

It is assumed that test:s:n, write:s:n ∈ A for all s ∈ Btape and n ∈ Atape.

TM′ is the Maurer machine (M′, B′, S′, I′, A′, [[ ]]′) such that

M′ = Mtape ∪ {rr} ,
B′ = Btape ∪ {T, F} ,
S′ = {S′ : M′ → B′ | S′ ↾ Mtape ∈ Stape ∧ S′(rr) ∈ {T, F}} ,
I′ = {Itest:s:n, Iwrite:s:n | s ∈ Btape ∧ n ∈ Atape} ,
A′ = {test:s:n, write:s:n | s ∈ Btape ∧ n ∈ Atape} ,
[[a]]′ = (Ia, rr) for all a ∈ A′ .

Here, for each s ∈ Btape and n ∈ Atape, Itest:s:n is the unique function from S′ to S′ such that for all S′ ∈ S′:

Itest:s:n(S′) ↾ Mtape = S′ ↾ Mtape ,
Itest:s:n(S′)(rr) = T if S′(Mtape[n]) = s ,
Itest:s:n(S′)(rr) = F if S′(Mtape[n]) ≠ s ;

for each s ∈ Btape and n ∈ Atape, Iwrite:s:n is the unique function from S′ to S′ such that for all S′ ∈ S′ and m ∈ Atape:

Iwrite:s:n(S′)(Mtape[n]) = s ,
Iwrite:s:n(S′)(Mtape[m]) = S′(Mtape[m]) if n ≠ m ,
Iwrite:s:n(S′)(rr) = T .

⁵ We consider Turing machines with a one-way infinite tape, which are as powerful as Turing machines with a two-way infinite tape. We interpret the usual remark "no left move is permitted when the read-write head is at the [left] boundary" (see e.g. [9]) as "a left move leads to inaction when the read-write head is at the left boundary".


We write STM′ and ATM′ for the set of states of TM′ and the set of basic actions of TM′, respectively.
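The instructions of TM′ can be modelled in the same style as those of TM; since the head position is part of the instruction name, a state needs no head component, and each instruction touches only one tape cell and rr. Again, this is merely an illustrative sketch with names of our own choosing.

BLANK = ' '   # stands in for the blank tape symbol

def test_at(s, n):
    """I_test:s:n -- compare the content of cell n with s; the reply goes to rr."""
    def instr(tape, rr):
        return tape, tape.get(n, BLANK) == s
    return instr

def write_at(s, n):
    """I_write:s:n -- write s into cell n; rr becomes T."""
    def instr(tape, rr):
        new_tape = dict(tape)
        new_tape[n] = s
        return new_tape, True
    return instr

# Each instruction reads and writes only cell n and rr: a finite input region
# and a finite output region, in contrast with I_test:s and I_write:s.
tape, rr = write_at('1', 3)({}, True)
tape, rr = test_at('1', 3)(tape, rr)   # rr is now True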

Let ⟨X0|E⟩, where E = {X0 = t0, . . . , Xn = tn}, be a Turing thread, let i ∈ {0, . . . , n}, and let k ∈ N. Moreover, let T0 be the set of all subterms of a term t ∈ Tfinrec for which t0 = t is derivable from E. Then the relation HPXi ⊆ T0 × (N ∪ {−1}) is inductively defined as follows:

– if Xi ∈ T0, then HPXi(Xi, 0);
– if HPXi(t, l), t ⊴ test:s ⊵ t′ ∈ T0 and l ≥ 0, then HPXi(t ⊴ test:s ⊵ t′, l);
– if HPXi(t, l), t′ ⊴ test:s ⊵ t ∈ T0 and l ≥ 0, then HPXi(t′ ⊴ test:s ⊵ t, l);
– if HPXi(t, l), write:s ◦ t ∈ T0 and l ≥ 0, then HPXi(write:s ◦ t, l);
– if HPXi(t, l), mover ◦ t ∈ T0 and l ≥ 0, then HPXi(mover ◦ t, l + 1);
– if HPXi(t, l), t ⊴ movel ⊵ D ∈ T0 and l ≥ 0, then HPXi(t ⊴ movel ⊵ D, l − 1).

We write X0 −k→E Xi to indicate that there exists a t ∈ Tfinrec such that t0 = t is derivable from E and HPXi(t, k) holds. The recursive specification ψ(E) is inductively defined as follows:

– if X0 −k→E Xi, then Xi,k = ti,k ∈ ψ(E),

where ti,k is obtained from ti by applying the following replacement rules:

• test:s is replaced by test:s:k;
• write:s ◦ Xj is replaced by write:s:k ◦ Xj,k;
• mover ◦ Xj is replaced by Xj,l, where l = k + 1;
• if k ≠ 0, then Xj ⊴ movel ⊵ D is replaced by Xj,l, where l = k − 1;
• if k = 0, then Xj ⊴ movel ⊵ D is replaced by D.

We write ψ(⟨X0|E⟩) for ⟨X0,0|ψ(E)⟩.
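The construction of ψ(E) amounts to a reachability computation over pairs of a variable index and a head position. The following sketch, our own rendering rather than part of the formal development, represents right-hand sides as nested tuples and enumerates the equations of ψ(E) only up to a bound on the head position, since ψ(E) may be infinite.

# Right-hand sides are nested tuples: ('test', s, t, u) for t <| test:s |> u,
# ('write', s, j) for write:s o Xj, ('mover', j) for mover o Xj,
# ('movel', j) for Xj <| movel |> D, and the constants 'S' and 'D'.

def translate(t, k, reached):
    """Apply the replacement rules at head position k; collect the pairs (j, l)
    of variables reached together with their head positions."""
    if t in ('S', 'D'):
        return t
    tag = t[0]
    if tag == 'test':
        _, s, left, right = t
        return ('test:%s:%d' % (s, k),
                translate(left, k, reached), translate(right, k, reached))
    if tag == 'write':
        _, s, j = t
        reached.add((j, k))
        return ('write:%s:%d' % (s, k), ('X', j, k))
    if tag == 'mover':
        reached.add((t[1], k + 1))
        return ('X', t[1], k + 1)
    if tag == 'movel':
        if k == 0:
            return 'D'
        reached.add((t[1], k - 1))
        return ('X', t[1], k - 1)
    raise ValueError(t)

def psi(E, max_k):
    """E maps variable indices to right-hand sides; the result maps pairs (i, k)
    to the right-hand side of X_{i,k}, for head positions up to max_k."""
    result, todo, seen = {}, [(0, 0)], {(0, 0)}
    while todo:
        i, k = todo.pop()
        if k > max_k:
            continue
        reached = set()
        result[(i, k)] = translate(E[i], k, reached)
        for pair in reached - seen:
            seen.add(pair)
            todo.append(pair)
    return result

Applied to the example thread given below, this sketch produces an equation for X0,k and X1,k for every k up to the bound, in line with the observation made there that ψ(E) can be countably infinite.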

The variables of a Turing thread p correspond to the states of a Turing machine. If the head position is made part of the instructions, a different copy of a state is needed for each head position that may occur when the Turing machine enters that state. The variables of ψ(p) correspond to those new states. Consequently, applying Turing thread p to a state of Maurer machine TM has the same effect as applying ψ(p) to the corresponding state of Maurer machine TM′.

Theorem 11 (Role of head position in Turing machines). Let p be a Turing thread, and let S0 ∈ STM be such that S0(head) = 0. Then (p •TM S0) ↾ (Mtape ∪ {rr}) = ψ(p) •TM′ (S0 ↾ (Mtape ∪ {rr})).

Proof. It is easy to see that for all S ∈ STM, with n = S(head):

Itest:s(S) ↾ (Mtape ∪ {rr}) = Itest:s:n(S ↾ (Mtape ∪ {rr})) ,
Iwrite:s(S) ↾ (Mtape ∪ {rr}) = Iwrite:s:n(S ↾ (Mtape ∪ {rr})) ,
Imover(S) ↾ Mtape = S ↾ Mtape ,
Imovel(S) ↾ Mtape = S ↾ Mtape .

Let (pn, Sn) be the (n+1)-th element in the full path of p •TM S0 of which the first component equals S, D or q ⊴ test:0 ⊵ r for some q, r ∈ Tfinrec, and let (p′n, S′n) be the (n+1)-th element in the full path of ψ(p) •TM′ (S0 ↾ (Mtape ∪ {rr})) of which the first component equals S, D or q′ ⊴ test:0:k ⊵ r′ for some k ∈ Atape and q′, r′ ∈ Tfinrec. Then, using the above equations, it is straightforward to prove by induction on n that:

– pn+1 = q ⊴ test:0 ⊵ r and Sn+1(head) = k iff p′n+1 = q′ ⊴ test:0:k ⊵ r′, with q′ and r′ obtained from q and r by applying the replacement rules given in the definition of ψ above;
– Sn+1 ↾ (Mtape ∪ {rr}) = S′n+1

(for n + 1 < |p •TM S0| if p •TM S0 is convergent). From this, the theorem follows immediately. ⊓⊔

Consider the Turing thread ⟨X0|E⟩, where

E = {X0 = write:1 ◦ X1 ⊴ test:0 ⊵ (write:0 ◦ X1 ⊴ test:1 ⊵ S),
     X1 = mover ◦ X0 ⊴ test:0 ⊵ (mover ◦ X0 ⊴ test:1 ⊵ mover ◦ X0)} .

Clearly, X0 −k→E X0 for each k ∈ N. Consequently, in this case ψ(E) is a countably infinite set. We conclude the following concerning Turing machines:

– the instruction implicitly executed by a Turing machine on a test step has an infinite input region and a finite output region;
– the instruction implicitly executed by a Turing machine on a write step has a finite input region and an infinite output region;
– these instructions can be replaced by instructions with a finite input region and a finite output region if we allow Turing machines with an infinite set of states.

13 Stored Threads and Programs

In this section, we discuss the connection between stored threads and programs. First, we review the program notation PGLD [4], which is close to existing assembly languages.

In PGLD, it is assumed that there is a fixed but arbitrary set of basic instructions I. PGLD has the following primitive instructions:

– for each a ∈ I, a positive test instruction +a;
– for each a ∈ I, a negative test instruction −a;
– for each a ∈ I, a void basic instruction a;
– for each k ∈ N, an absolute jump instruction ##k.

PGLD programs have the form u1 ; . . . ; un, where u1, . . . , un are primitive instructions of PGLD.

The intuition is that the execution of a basic instruction a may modify a state and produces a Boolean value at its completion. In the case of a positive test instruction +a, basic instruction a is executed; if T is produced, execution proceeds with the next primitive instruction, and otherwise the next primitive instruction is skipped and execution proceeds with the primitive instruction following the skipped one. If T is produced and there is not at least one subsequent primitive instruction, or if F is produced and there are not at least two subsequent primitive instructions, termination occurs. In the case of a negative test instruction −a, the role of the produced Boolean value is reversed. In the case of a void basic instruction a, the produced Boolean value is disregarded: execution always proceeds with the next primitive instruction (if present). The effect of an absolute jump instruction ##k is that execution proceeds with the k-th instruction of the program concerned. If ##k is itself the k-th instruction, then inaction (deadlock) occurs. If k equals 0 or k is greater than the length of the program, termination occurs.
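The informal description of PGLD execution can be made precise by a small interpreter. The sketch below is our own and is not taken from [4]; it models a basic instruction as a function from a state to a pair of a new state and a Boolean reply, and follows the termination and deadlock conventions described above, with a crude step bound standing in for divergence on jump loops.

def run_pgld(program, basics, state, max_steps=10000):
    """program is a list of primitive instructions: ('plus', a), ('minus', a),
    ('void', a) or ('jump', k); basics maps a to a function state -> (state, reply).
    Returns the final state on termination, or None on deadlock."""
    pc = 1                                    # instruction counter, 1-based
    for _ in range(max_steps):
        if pc < 1 or pc > len(program):
            return state                      # termination
        kind, arg = program[pc - 1]
        if kind == 'jump':                    # ##k
            if arg == pc:
                return None                   # ##k at position k: deadlock
            if arg == 0 or arg > len(program):
                return state                  # termination
            pc = arg
        else:                                 # +a, -a or void a
            state, reply = basics[arg](state)
            if kind == 'void':
                pc += 1
            elif (kind == 'plus') == reply:   # +a with reply T, or -a with reply F
                pc += 1
            else:
                pc += 2                       # skip the next instruction
    raise RuntimeError('step bound exceeded')

# Example: repeatedly increment a counter while it is below 3.
basics = {'below3': lambda s: (s, s < 3), 'inc': lambda s: (s + 1, True)}
prog = [('plus', 'below3'), ('jump', 4), ('jump', 0), ('void', 'inc'), ('jump', 1)]
print(run_pgld(prog, basics, 0))   # prints 3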

We write PPGLD for the set of all PGLD programs.

The behaviour of a PGLD program, as defined in [4], is a thread which is definable by a finite guarded recursive specification over BTA. It is easy to see that each finite guarded recursive specification over BTA can be translated to a PGLD program of which the behaviour is the solution of the finite guarded recursive specification concerned (cf. Section 5 of [3]).

Next, we consider the stored threads from Section 6 again. We write ST for {st(p) | p ∈ Tfinrec ∧ size(p) ≤ size(Mt)}. We define a translation function pgld : ST → PPGLD for stored threads. For all T ∈ ST, pgld(T) = pgld′(T, 0), where pgld′ : ST × N → PPGLD is recursively defined as follows:

pgld′(T, n) = pgld″(T, n) if T(Mt[n + 1]) ∉ dom(T) ,
pgld′(T, n) = pgld″(T, n) ; pgld′(T, n + 1) if T(Mt[n + 1]) ∈ dom(T) ,

where pgld″ : ST × N → PPGLD is defined as follows:

pgld″(T, n) = +a ; ##(3n′+1) ; ##(3n″+1) if T(Mt[n]) = (n′, a, n″) ,
pgld″(T, n) = ##0 ; ##0 ; ##0 if T(Mt[n]) = S ,
pgld″(T, n) = ##(3n+1) ; ##(3n+2) ; ##(3n+3) if T(Mt[n]) = D .

The function pgld transforms addresses of thread memory elements containing representations of nodes to absolute jump instructions, taking the line that each representation of a node is mapped to three primitive instructions. For that reason, S and D are also mapped to three primitive instructions.
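Written out as code, the translation looks as follows. The sketch abstracts a stored thread to a mapping from the addresses 0, 1, 2, . . . to node representations, a triple (n′, a, n″) for a test node and the constants S and D, and produces the PGLD program as a semicolon-separated list of primitive instructions; this representation is ours, not the one of Section 6.

def pgld(stored):
    """stored maps addresses 0..m to node representations: a triple (n1, a, n2),
    'S' for termination, or 'D' for inaction."""
    program = []
    for n in range(len(stored)):              # addresses 0, 1, ..., m
        node = stored[n]
        if node == 'S':
            program += ['##0', '##0', '##0']                          # terminate
        elif node == 'D':
            program += ['##%d' % (3 * n + 1), '##%d' % (3 * n + 2),
                        '##%d' % (3 * n + 3)]                         # each jumps to itself: deadlock
        else:
            n1, a, n2 = node
            program += ['+%s' % a, '##%d' % (3 * n1 + 1), '##%d' % (3 * n2 + 1)]
    return '; '.join(program)

# Example: address 0 tests a and, on either reply, continues at address 1,
# which terminates.
print(pgld({0: (1, 'a', 1), 1: 'S'}))
# +a; ##4; ##4; ##0; ##0; ##0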

It can be shown that, for all p ∈ Tfinrec with size(p) ≤ size(Mt), the behaviour of pgld(st(p)) is the thread described by p. The function pgld shows that there is hardly a difference between the stored thread st(p) and the PGLD program pgld(st(p)) extracted from it: st(p) can also be viewed as a stored representation of pgld(st(p)) with three primitive instructions to a memory element. However, it is likely that pgld(st(p)) contains needless jump instructions. For example, what can be achieved by a positive test instruction +a followed by two identical jump instructions can also be achieved by a void basic instruction a. In other words, PGLD permits a more efficient representation of threads than the one obtained by way of st and pgld.
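Such needless jump instructions can be removed by a simple peephole pass over the generated program. The sketch below covers only the pattern just mentioned and is our own suggestion rather than anything defined here; it deliberately keeps the program length unchanged, so that absolute jump targets elsewhere in the program remain valid.

def simplify(program):
    """program is a list of primitive-instruction strings, as produced by pgld."""
    out = list(program)
    for i in range(len(out) - 2):
        if (out[i].startswith('+') and out[i + 1].startswith('##')
                and out[i + 1] == out[i + 2]):
            out[i] = out[i][1:]   # the test result is irrelevant: make it a void instruction
    return out

print(simplify(['+a', '##4', '##4', '##0', '##0', '##0']))
# ['a', '##4', '##4', '##0', '##0', '##0']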
