
Separating computation and coordination in the design of parallel and distributed programs

Chaudron, M.R.V.

Citation

Chaudron, M. R. V. (1998, May 28). Separating computation and coordination in the design of parallel and distributed programs. ASCI dissertation series. Retrieved from https://hdl.handle.net/1887/26994

Version: Corrected Publisher’s Version

License: Licence agreement concerning inclusion of doctoral thesis in the Institutional Repository of the University of Leiden
Downloaded from: https://hdl.handle.net/1887/26994


Cover Page

The handle http://hdl.handle.net/1887/26994 holds various files of this Leiden University dissertation.

Author: Chaudron, Michel

Title: Separating computation and coordination in the design of parallel and distributed programs

1 Introduction

Computers are so widely used throughout society that they affect our lives 24 hours a day. They are the culmination of a long tradition of making tools to aid us in our daily lives. The successful application of computers derives from two of their abilities:

• performing computations at phenomenal speed (currently many millions per second), and

• storing tremendous amounts of data (currently in the order of one terabyte).

Both of these abilities can be further increased by coupling computers. Two computers can, in principle, compute twice as fast and store twice as much as a single computer.

Although magical qualities are sometimes attributed to computers, they are but machines whose task it is to mechanically carry out our instructions. They are, however, very complex machines.

Software is as complex a construction as hardware. Several layers of software are used to steer the operation of the hardware. Each layer of software provides (suitable) abstractions for the layer one level up in the hierarchy, thereby hiding more and more machine-specific aspects. The topmost layer consists of an application which provides the functionality presented to the users of a system. Application programs that consist of millions of lines of code are no exception.

The building of such programs is a formidable task which, unfortunately, is very error-prone. It would be desirable for software engineers to be able to employ methods that help in making error-free software products. This raises questions regarding the principles on which such methods could be based.

For traditional engineering products, it is common practice to subject products to extensive testing. Mechanical constructions may be tested under extreme circumstances, such as extreme temperature or force. Conclusions about behaviour under intermediate circumstances are then drawn from assumptions of continuity.

However, this assumption of continuity does not carry over to computer programs. Due to their discrete nature, a difference of a single bit or instruction may cause a program to fail or deviate from its intended behaviour in a completely unexpected way. Hence, to ascertain the correctness of a system, every individual setting of bits and every possible sequence of instructions would need to be verified. The number of possible settings and sequences is of such cosmic proportions (a state of just one kilobyte of memory already admits 2^8192, roughly 10^2466, distinct bit patterns) that even computers themselves cannot help us in checking all possibilities.

Hence, the method of testing cannot be used to guarantee the correct operation of computer systems. The only way to assert anything about the correctness of a design is by rigorously specifying the system requirements such that mathematical methods of reasoning can be applied.

The question of whether the initial specification of a system actually meets the requirements is impossible to answer, since there is inherent ambiguity in the formalization of an informal problem statement. However, a formal specification already provides an advantage in that it can be checked for internal consistency. Furthermore, we may subsequently appeal to mathematical methods to assist us in transforming an initial specification into increasingly detailed forms, up to the stage where it becomes computer-processable.

Rather than going through the trouble of instructing computers in every detail, we can also try to raise the level of abstraction at which computers can execute our instructions. Programming languages and tools should support the programming activity by helping us focus on the relevant aspects at different stages of the design process, and encourage abstraction from details that should be addressed at a different stage or that can be resolved automatically. One successful step in this direction has been the use of compilers to translate (so-called) high-level languages into machine-oriented instructions.

With the advent of parallel and distributed computer systems, the complexity of designing programs increased even further, because matters of concurrency and distribution have to be resolved. In order to deal with these matters, programming languages were extended with new primitives tailored for explicitly defining communication and distribution aspects. This increased the complexity of the programming languages and of the programming activity, because it encouraged the programmer to think about functional and operational issues at the same time.


… from operational issues. Subsequently, a program may be further refined, with the aim of improving its efficiency while leaving its functionality unaffected.

Up until now, formal methods could be used in support of this approach to program design, but no formal methods have been put forward that encourage (or enforce) this separation of concerns. The research described in this thesis aims to fill this gap.

In this thesis, we propose a collection of formal techniques that constitute a methodology for the design of parallel and distributed programs which addresses the correctness and complexity aspects in separate phases. The method proceeds along the following phases. For each phase, we present formalisms for specifying and reasoning about the aspects that belong to that phase, and we encourage abstraction from the aspects that belong to the complementary phase.

Firstly, we concentrate on specifying the functionality (which defines “what” should be computed). This specification determines the correctness of the program and is called the “computation component”. In support of this phase we present a programming model which allows the description of the basic computations that constitute a solution method, but abstracts from an underlying execution mechanism and thereby avoids imposing premature constraints on the order of computation. This programming model is a variant of Gamma [12, 13] and the Transaction-based programming model [79]. However, in order to incorporate it in our formal methodology, we provide it with an alternative formal semantics.
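
To give a flavour of this style of programming, the sketch below is a toy multiset-rewriting interpreter in Python. The names (Rule, gamma_run) and the encoding of a rule as a condition/action pair over pairs of elements are illustrative assumptions, not the formal model presented in Chapter 2; the point is merely that the result does not depend on the (arbitrary) order in which enabled rules are applied.

import random

class Rule:
    # An illustrative rewrite rule: a reaction condition and an action on a pair of elements.
    def __init__(self, condition, action):
        self.condition = condition
        self.action = action

def gamma_run(multiset, rules, rng=random.Random(0)):
    # Apply enabled rules to arbitrarily chosen pairs of elements until none applies.
    data = list(multiset)
    while True:
        enabled = [(rule, i, j)
                   for rule in rules
                   for i in range(len(data))
                   for j in range(len(data))
                   if i != j and rule.condition(data[i], data[j])]
        if not enabled:
            return data                    # stable: no rule is enabled any more
        rule, i, j = rng.choice(enabled)   # the order of application is left open
        produced = list(rule.action(data[i], data[j]))
        data = [x for k, x in enumerate(data) if k not in (i, j)] + produced

# Example: the maximum of a multiset, using the single rule  x, y -> x  if x >= y.
max_rule = Rule(condition=lambda x, y: x >= y, action=lambda x, y: (x,))
print(gamma_run([3, 1, 4, 1, 5], [max_rule]))    # prints [5]

Whatever pair the interpreter happens to pick at each step, the multiset stabilises at the same answer; fixing a particular order of application is deferred to the coordination phase described next.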

Secondly, we concentrate on specifying “how” a program should operate in order to compute what its functional component promises. This specification is called the “coordination component” and it complements the computation component by defining the operational aspects of a system. For instance, depending on the target architecture, the coordination component may prescribe a sequential or parallel strategy for realising the computation component.

In support of this second phase we develop a coordination language which is aimed at describing the behaviour of a system in terms of the basic computations of the solution method. We formally define the syntax and semantics of this language such that it can be integrated in our formal methodology.
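
As a rough illustration of this division of labour, and again assuming the toy Python encoding above rather than the coordination language defined in Chapter 3, the two schedules below realise the same basic computation step in different ways: one as a sequential chain, the other as a balanced reduction whose rounds consist of independent applications that could run concurrently. Only the coordination differs; the computed result is the same.

def combine(x, y):
    # The basic computation step of the toy example: keep the larger value.
    return x if x >= y else y

def sequential_schedule(data):
    # Coordinate the step as one left-to-right chain of dependent applications.
    acc = data[0]
    for x in data[1:]:
        acc = combine(acc, x)
    return acc

def parallel_schedule(data):
    # Coordinate the step as rounds over disjoint pairs (a balanced reduction tree);
    # the applications within a round are independent of one another.
    while len(data) > 1:
        paired = [combine(data[i], data[i + 1]) for i in range(0, len(data) - 1, 2)]
        if len(data) % 2 == 1:
            paired.append(data[-1])        # an odd element carries over to the next round
        data = paired
    return data[0]

values = [3, 1, 4, 1, 5, 9, 2, 6]
assert sequential_schedule(values) == parallel_schedule(values) == 9

The choice between the two strategies is exactly the kind of decision that, in the methodology proposed here, is expressed in the coordination component, separately from the computation component.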

To ensure the correctness of coordination components, we construct a formal method for their development by stepwise refinement.
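
The intuition behind such a refinement step can be sketched as behaviour inclusion. In the toy Python setting used above, observing only final multisets (an assumption made for this sketch, not the refinement theory developed in Chapters 4 and 5), one can check on a small input that every outcome of a refined, deterministic schedule is already an outcome permitted when the rule may be applied in any order:

def apply_rule(state, i, j):
    # One application of the illustrative rule  x, y -> x  if x >= y  (state is a tuple).
    kept = tuple(v for k, v in enumerate(state) if k not in (i, j))
    return kept + (state[i],)

def reachable_outcomes(state):
    # All stable multisets reachable by applying the rule in any order.
    enabled = [(i, j) for i in range(len(state)) for j in range(len(state))
               if i != j and state[i] >= state[j]]
    if not enabled:
        return {tuple(sorted(state))}
    outcomes = set()
    for i, j in enabled:
        outcomes |= reachable_outcomes(apply_rule(state, i, j))
    return outcomes

def refined_schedule(state):
    # A deterministic (refined) strategy: always combine the first two elements.
    while len(state) > 1:
        state = apply_rule(state, 0, 1) if state[0] >= state[1] else apply_rule(state, 1, 0)
    return tuple(sorted(state))

start = (3, 1, 4, 1)
assert refined_schedule(start) in reachable_outcomes(start)   # behaviour inclusion holds

The refinement theory of this thesis establishes such inclusions by proof rather than exhaustive enumeration, and over behaviours rather than only final states, but the obligation it discharges has this shape: the refined coordination component may restrict, but not extend, what the more abstract one allows.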


Structure of this thesis

The remainder of this thesis is organized as follows. Chapters 2 and 3 introduce the specification formalisms that are used in this thesis. In Chapter 2 we present the computation language. We show that it facilitates the description of specifications that are not partial to a particular mode of execution. Furthermore, we present a semantics and a logic for reasoning about the correctness of programs. In Chapter 3 we present the coordination language. We define its semantics and show how it connects to the computation language.

In Chapters 4 and 5 we develop a theory of refinement. This theory provides a number of proof techniques that enable us to incrementally refine the behavioural aspects of a program. These chapters form the most theoretical part of this thesis. It should be possible to get an understanding of the methods derived in these chapters without going through all the proofs.
