
Concurrency in Java

Allard Naber (1462504) A.L.Naber@student.rug.nl

Supervisors:

prof. dr. W.H. Hesselink prof. dr. M. Aiello

September 30, 2008


Contents

1 Introduction
2 Goals
3 The Concurrency module
   3.1 Concurrency at the University of Groningen
   3.2 Concurrency at other (international) universities
4 Threads
   4.1 What are threads?
   4.2 Creating threads in Java
5 Applications of concurrency
   5.1 Theoretical approach
      5.1.1 Unlucky execution order
      5.1.2 Reordering of instructions
      5.1.3 Theoretical result
   5.2 Practical approach
6 Algorithms
   6.1 Peterson’s mutual exclusion algorithm
   6.2 Barrier synchronization
   6.3 Monitors
      6.3.1 Synchronized methods: synchronized
      6.3.2 Synchronized operations: synchronized block
      6.3.3 Waiting and waking up: wait/notify
   6.4 Semaphores
   6.5 Deceptive properties
      6.5.1 Spurious wakeups
      6.5.2 volatile variables
      6.5.3 Semaphore fairness settings
7 Practical assignments
   7.1 Mutual exclusion between multiple threads
      7.1.1 Solution
   7.2 Car Crossing
      7.2.1 Solution
   7.3 Sliding Window protocol
      7.3.1 The protocol
      7.3.2 Implementation
8 Further work



1 Introduction

The term “concurrency” literally means ‘simultaneous occurrence’, but it also connotes ‘cooperation’, ‘competition’ and ‘coincidence’. These meanings are very different, and that contrast is exactly what concurrency is about and where its difficulties lie.

Concurrency deals with the problems introduced by using multiple threads within an application, as well as with synchronization difficulties originating from the use of multiple computer systems.

Generally an application has a single path of execution: a linear path of sequentially executed instructions. This means that such an application can only do one thing at a time. Nowadays an application often needs to perform multiple tasks at once, which is impossible with a single path of execution. Therefore threads, as they are called in Java, were introduced.

Threads make it possible to run multiple procedures at a time, within a single application. Now an application has multiple paths of execution and it can perform multiple tasks at the same time.

In the last few years, more and more systems have been designed to use multiple computers to perform their tasks. Because these systems must cooperate to reach their common goal, a lot of work has been done on ways to synchronize them.

On the level of a single system, having multiple paths of execution inside an application, possibly working on shared variables, may introduce problems. A thread does not know what the other threads are doing and might manipulate some of the global variables. If another thread did not expect this change and assumes a certain value for the variable, it may produce unexpected results.

A solution to the above problem is to run the part of the thread that uses a shared variable under mutual exclusion, so that no other thread can access the variable at that time. This is a form of competition: threads compete with each other to gain access to the variable.

On the other hand, threads may sometimes need to cooperate. A classical example is the Producer – Consumer problem. A producer creates items which are then consumed by a consumer. To fulfill their respective tasks, both participants need to communicate with each other. If the producer’s storage buffer is full, for example because the consumer is slower than the producer, the producer should wait until space is available again. When the consumer takes an item, new space is created, and the producer should get a notification from the consumer so it can create new items.

To realize synchronization between different systems, comparable techniques can be used, but on a higher level. This research project was primarily aimed at ways to synchronize threads within a single system, but synchronization between systems is also considered.

One of the most famous solutions to concurrency problems is the semaphore, introduced in 1963 by the Dutch computer scientist Edsger W. Dijkstra. This solution is still used a lot, for example in the implementation of the Java monitor which will be explained in section 6.3.


2 Goals

The goal of this project is to renew the Concurrency module taught at the University of Groningen. Until now, the module has used the SR language (Synchronizing Resources) [1]. In the renewed module, Java might be used, because it is increasingly becoming the standard within the Computing Science department in Groningen.

In addition to the existing algorithms from the syllabus [3], which will be ported to Java as far as possible, Java’s own concurrency utilities will be described. If possible, these utilities will also be used in new practical assignments.

Finally, the relation with other modules, such as Netcomputing and Software Engineering, will be analyzed.


3 The Concurrency module

3.1 Concurrency at the University of Groningen

The Course Catalog of the University of Groningen describes the Concurrency module as follows:

The central aim of this module is to teach students the understanding and skills needed for advanced forms of thread programming, for distributed calculations, for communication between processes and processors, and for intervention in the operating system. This primarily concerns cooperation of sequential processes with or without shared memory. This can concern a large number of computers in a worldwide network, or a limited number of processors in one computer, or a number of different processes which are apparently executed simultaneously on one processor. The purpose is always to design programs that can ensure an accurate coordinated implementation of a certain task by the processes. Problems include: mutual exclusion; indivisible actions; deadlock; fairness; distributed systems. Usable mechanisms such as fixed signals (semaphore); condition variables; communication through messages, asynchrony and synchrony; remote procedure call; rendezvous. The module will be taught by way of the Concurrent Programming material. This module follows on from Program Correctness, complements Parallel Computing (APPHPS) and Operating Systems and is a preparation for the Master’s modules Automated Reasoning and Distributed Systems. Information about the practical: There will be three programming assignments which will be completed in pairs.

3.2 Concurrency at other (international) universities

The exact contents of concurrency modules at other universities are hard to find on the Internet, because access to these sources is increasingly restricted to instructors and students of the university in question. Despite that, some rough outlines of these modules were available, giving a general idea of their contents. The information that was gathered is listed in table 1. The table shows that the other universities take a more theoretical approach to concurrency problems, while the RUG takes a more practical one. To keep this research within bounds, the concurrency modules of other universities will not be examined in depth.

The terms used in table 1 are explained below.

Communicating Sequential Processes (CSP) – A formal language to describe patterns of interaction within concurrent systems. It is also used within companies to specify and verify real-life concurrent processes. (C.A.R. Hoare)

Calculus of Communicating Systems (CCS) – A process algebra which models indivisible communication between two participants. (Robin Milner)

Algebra of Communicating Processes (ACPτ) – An algebraic approach to reasoning about concurrent systems. Originally it was developed to examine solutions of “unguarded recursive equations”. It is even more algebraic than CSP and CCS. (Jan Bergstra, Jan Willem Klop)

Table 1: Subjects within Concurrency modules of other universities.

Leiden     – Very theoretical; only Petri nets and graph theory.
Cambridge  – Petri nets and CCS
UvA        – Process algebra
Oxford     – OCCAM, based on CSP
Stanford   – Process algebra (CCS, CSP, ACP)
Eindhoven  – Process algebra (CCS, CSP, ACP)
New York   – Automata-based (CCS, CSP, ACP) + action-based (trace theory)

Concurrent programming languages currently in use include OCCAM, CML, Facile, Pict, Linda and NESL.


4 Threads

To understand the problems and solutions in the next chapters, it is essential to have a good understanding of what threads are and what they do. In this chapter, the most important properties of threads are described.

Additionally, an alternative notation to denote threads is introduced to make sure the focus is on the behaviour of threads, not on their creation.

4.1 What are threads?

Most Java programs have only a single path of execution, consisting of sequentially executed instructions. This means the application can only do one ‘thing’ at a time. For many applications, this limitation is not a problem.

At some point, however, the need to execute multiple paths of execution at a time may arise. Consider a chat client, for example, with which one can stay in contact with several other people. While the local user is typing a message to one contact, another contact may send a message to the user. That message must show up immediately; it is not acceptable for it to arrive only after the local user has finished typing his own message.

Without threads, this is not possible: typing a message is one of the items in the single execution path of the main program, and at that moment the program cannot execute any other instruction. Threads make it possible to have multiple paths of execution, comparable to a computer on which multiple processes run at a time. For example, one thread reads the user’s message while a second thread is busy receiving messages from the contacts.

4.2 Creating threads in Java

In Java there are two ways to specify a thread – for example MyThread – both indicating that a class is to be used as a thread. The first way is to let the class extend the Thread class. This approach limits the programmer considerably, as a class can extend only one other class. Therefore a second approach exists: a class that should become a thread may instead implement the Runnable interface. Both methods require the class to provide an implementation of the run method, which is the body of the thread.

Both approaches look very similar. An example of a thread implementation is shown in listing 1.

public class MainApplication {
    public static void main(String[] args) {
        MyThread t = new MyThread();
        t.start();
    }
}

public class MyThread extends Thread {
// or
public class MyThread implements Runnable {
    public void run() {
        // body of the thread
    }
}

Listing 1: Creating and starting a thread in Java.

In this report the construction of threads is not of importance, so a shorter notation is used. Listing 2 is equivalent to listing 1, using the alternative notation. The parameters a and b specified in the first line become member variables of the thread object.

public class MainApplication {
    public static void main(String[] args) {
        MyThread t = new MyThread(a, b);
        t.start();
    }

    thread MyThread(int a, String b) {
        // body of the run method of the thread,
        // with instance variables a, b.
    }
}

Listing 2: Shorter notation for specifying threads in Java.
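For reference, the shorthand of listing 2 expands to ordinary Java roughly as follows. This is a sketch: the field names and the example values passed in main are choices made here, not taken from the text.

```java
// Expansion of the shorthand: constructor parameters become
// instance variables that the run() body can use.
public class MyThread extends Thread {
    private final int a;
    private final String b;

    public MyThread(int a, String b) {
        this.a = a;
        this.b = b;
    }

    @Override
    public void run() {
        // body of the thread, with instance variables a, b
        System.out.println(b + ": " + a);
    }

    public static void main(String[] args) throws InterruptedException {
        MyThread t = new MyThread(42, "demo");
        t.start();
        t.join();
    }
}
```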

If a thread calls the sleep method – which will be used a lot for testing purposes – it must catch an InterruptedException. In this report, this catching has been left out to improve readability.
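In real code the elided catch looks like this (a minimal sketch; the class name is chosen here):

```java
public class SleepDemo extends Thread {
    @Override
    public void run() {
        try {
            Thread.sleep(100); // simulate some work
        } catch (InterruptedException e) {
            // Restore the interrupt status so callers can still observe it.
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SleepDemo t = new SleepDemo();
        t.start();
        t.join();
    }
}
```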


5 Applications of concurrency

Before looking at the set of problems and their solutions in the following chapters, it is good to know what kinds of problems may appear in concurrency. Therefore a very illustrative example is given, which clearly shows how hard it is to draw strict conclusions from research in concurrency.

Hendrik Wietze de Haan has proposed the following example to figure out how Java threads behave. His suggestion was to create three functions, modifying or testing two global variables, and three threads continuously executing these functions:

int a = 0, b = 0;

x() {                      // not synchronized
    a = 1;
    b = 1;
}

synchronized y() {
    b = 0;
    a = 0;
}

synchronized z() {
    if (b == 1 && a == 0) {
        kill;
    }
}

ThreadA.run() { while (true) { x(); } }
ThreadB.run() { while (true) { y(); } }
ThreadC.run() { while (true) { z(); } }

According to Hendrik Wietze’s idea, the above could run indefinitely.
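A bounded, runnable Java version of this experiment might look as follows. This is a sketch with two deviations from the original: the threads stop after a fixed time instead of running until killed, and `kill` is replaced by setting a flag. The shared variables a and b are deliberately left non-volatile, as in the original; whether the flag is ever set depends on scheduling.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class KillExperiment {
    static int a = 0, b = 0;                 // shared, intentionally not volatile
    static final AtomicBoolean killed = new AtomicBoolean(false);
    static volatile boolean running = true;  // sketch addition: bounded run time

    static void x() { a = 1; b = 1; }
    static synchronized void y() { b = 0; a = 0; }
    static synchronized void z() {
        if (b == 1 && a == 0) killed.set(true); // 'kill' in the original
    }

    public static void main(String[] args) throws InterruptedException {
        Thread ta = new Thread(() -> { while (running) x(); });
        Thread tb = new Thread(() -> { while (running) y(); });
        Thread tc = new Thread(() -> { while (running) z(); });
        ta.start(); tb.start(); tc.start();
        Thread.sleep(200);                   // original: at most ten seconds
        running = false;
        ta.join(); tb.join(); tc.join();
        System.out.println("killed: " + killed.get());
    }
}
```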

5.1 Theoretical approach

Looking at the code, two ways for the process to kill itself can be found. The first is quite straightforward and is quite likely to occur occasionally. The second one is tricky, and it is uncertain whether it can happen at all; this depends on if and how the Java compiler or the Java Virtual Machine optimizes the code.

5.1.1 Unlucky execution order

x(): a = 1
y(): b = 0
y(): a = 0
x(): b = 1
z()

(Notice that the synchronized keywords only prevent y() and z() from being executed concurrently; x() can be started at any moment, also while y() or z() is running.) If the above is the actual execution order of the threads, z() will find the values b = 1 and a = 0 and thus kill the process.


5.1.2 Reordering of instructions

Due to compiler optimizations it might happen that two instructions that are not clearly related to each other are swapped. If, for example, the two lines in x() are swapped, the value of b is set to 1 first. If z() starts at that moment, it finds a state for which it kills the process. As mentioned above, it is unknown whether the Java compiler or the Java Virtual Machine reorders instructions. Reverse engineering a class file did not show any reordering, so it is quite likely that the instructions for this process were not reordered by the compiler; but maybe the Virtual Machine did.

5.1.3 Theoretical result

Following the above theory, the problem proposed in the introduction is not really interesting. If the process kills itself, that is expected behaviour; if it does not, we only know that we did not have really bad luck. More interesting is the practical approach, which has surprising results.

5.2 Practical approach

Knowing that it is possible for the process to kill itself, it is interesting to test how often this occurs. The functions and threads are implemented in Java; the threads are started alphabetically, starting with ThreadA.

In the tests, the process was started 60 times. Every execution lasted at most ten seconds; after that it was killed manually. The process killed itself 12 times, finding b = 1 and a = 0. The following table shows how many loops ThreadA and ThreadB had executed at the moment the process was killed.

#    ThreadA        ThreadB
1    35,283,629     896,086
2    164,605,269    26,961,398
3    34,187,548     980,274
4    33,970,517     567,826
5    33,423,212     566,888
6    58,560,516     6,428,767
7    31,690,133     546,040
8    33,347,996     973,243
9    84,319,833     11,198,704
10   34,186,620     886,627
11   241,017,691    1,855,335
12   33,497,519     888,327

As one can see, the number of loops ThreadB executed before the kill is always lower than the number ThreadA executed. This can be explained by the fact that ThreadA is started first and can make many loops before ThreadB is started. It therefore looked like a good experiment to start ThreadB first, to see if the behaviour would change. Of course, the expectation is that the ratio of kills to total runs stays close to the 12:60 shown above.

In the new situation the process was again started 60 times. It is quite stunning to see that the process now killed itself only once. For consistency, the numbers of loops in both threads are again shown in a table:

# ThreadA ThreadB

1 106,407,329 12,606,155


Despite the different starting order, ThreadA had executed more loops than ThreadB at the moment of the kill. This probably shows that synchronized methods really take longer to start executing, caused by the lock they need to acquire. However, it does not explain why the process killed itself only once here, while it killed itself 12 times when ThreadA was started first.


6 Algorithms

To solve the problems mentioned in the introduction, a wide set of algorithms has been developed over the years. The current syllabus [3] lists most of these algorithms, accompanied by a proof of correctness of the algorithms. This chapter will describe equivalent algorithms in Java.

If the new course uses Java, the following algorithms can still be used; only the code examples in the syllabus, which are now written in SR, need to be replaced with the suggestions below.

6.1 Peterson’s mutual exclusion algorithm

Peterson's algorithm has quite a simple goal: it prevents two threads from executing their critical section (CS) at the same time. The critical section is delimited by a so-called entry point and an exit point. The algorithm should always allow a thread to execute the so-called non-critical section (NCS). The syllabus explains that Peterson’s algorithm is set up using shared variables to register which thread is allowed to execute its critical section. In SR this is implemented by defining a global variable outside any process – a thread, in Java terms. Because Java lacks global variables, this is done by defining a variable in the main thread; every child thread refers to that variable, so it acts as if it were global.

If a thread wants to enter its critical section while another thread is already running its critical section, the former needs to wait. This is done by running an empty loop until the thread is allowed to continue. In the SR pseudo code this is denoted with an await statement, which is actually a loop that continuously checks whether the specified condition holds. This method of waiting is called busy waiting.

The first attempt to get to the correct algorithm keeps track of any active process and as soon as one process becomes inactive, the other starts to run.

First attempt: SR implementation

The first attempt is specified in SR as seen in listing 3.

 

var active[0:1] : bool := ([2] false)

process ThreadMX(self := 0 to 1)
  do true ->
    NCS
    active[self] := true
    await not active[1 - self]
    CS
    active[self] := false
  od
end ThreadMX

Listing 3: Peterson’s algorithm in SR.


First attempt: Java implementation

This algorithm converted into Java is shown in listing 4. Note that in the conversion the condition of the await statement has been negated; this is just a syntactical change.

public class Peterson {
    public boolean[] active = {false, false};

    public static void main(String[] args) {
        Peterson p = new Peterson();
    }

    public Peterson() {
        new ThreadMX(0).start();
        new ThreadMX(1).start();
    }

    thread ThreadMX(int self) {
        while (true) {
            NCS
            active[self] = true;
            while (active[1 - self]);
            CS
            active[self] = false;
        }
    }
}

Listing 4: Peterson’s algorithm translated into Java.

As explained in the syllabus, this algorithm is based on a good idea, but this implementation can lead to deadlock. If both threads are in their non-critical section (NCS) and they concurrently set their own active variable to true, they both start waiting until the other becomes inactive. Clearly this never happens, and the application reaches deadlock.

To repair the deadlock, Peterson added a shared variable last, indicating which thread most recently executed its entry protocol. The new, correct version of Peterson's algorithm is shown in SR in listing 5 and in Java in listing 6.

Correct version: SR implementation

 

var active[0:1] : bool := ([2] false)
var last : int

process ThreadMX(self := 0 to 1)
  do true ->
    NCS
    active[self] := true
    last := self
    await (last = 1 - self or not active[1 - self])
    CS
    active[self] := false
  od
end ThreadMX

Listing 5: Peterson’s algorithm in SR.

Correct version: Java implementation

public class Peterson {
    public boolean[] active = {false, false};
    public int last;

    public static void main(String[] args) {
        Peterson p = new Peterson();
    }

    public Peterson() {
        new ThreadMX(0).start();
        new ThreadMX(1).start();
    }

    thread ThreadMX(int self) {
        while (true) {
            NCS
            active[self] = true;
            last = self;
            while (last == self && active[1 - self]);
            CS
            active[self] = false;
        }
    }
}

Listing 6: Peterson’s algorithm translated into Java.
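Listing 6 uses the shorthand notation and, written as plain Java, would also suffer from the visibility problems discussed in section 6.5.2. The following compilable sketch side-steps that by storing the shared variables in atomic classes (an implementation choice of this sketch, not of the syllabus); the critical section is replaced by a check that at most one thread is inside it at a time.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicIntegerArray;

public class PetersonDemo {
    // Atomic classes give per-element volatile semantics, which a
    // plain boolean[] would not provide (cf. section 6.5.2).
    static final AtomicIntegerArray active = new AtomicIntegerArray(2); // 0 = false, 1 = true
    static final AtomicInteger last = new AtomicInteger();
    static final AtomicInteger inCS = new AtomicInteger();  // threads currently in their CS
    static volatile boolean violated = false;

    static void threadMX(int self, int rounds) {
        for (int i = 0; i < rounds; i++) {
            // NCS would be here
            active.set(self, 1);
            last.set(self);
            while (last.get() == self && active.get(1 - self) == 1) ; // busy wait
            // CS: check that we are alone
            if (inCS.incrementAndGet() != 1) violated = true;
            inCS.decrementAndGet();
            active.set(self, 0);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t0 = new Thread(() -> threadMX(0, 100_000));
        Thread t1 = new Thread(() -> threadMX(1, 100_000));
        t0.start(); t1.start();
        t0.join(); t1.join();
        System.out.println("mutual exclusion violated: " + violated);
    }
}
```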

6.2 Barrier synchronization

With tasks distributed over several threads, it is often the case that at the end of a loop iteration the threads need to wait until every thread has finished its iteration. Barrier synchronization ensures that after a thread executes its terminating non-critical section (TNS), it waits for the other threads to arrive at the barrier before continuing.

First a very simple solution is used. Every thread counts the loops it executes: after executing its TNS, every thread increases its own loop counter. Then it inspects the counters of all other threads and waits until each other counter is unequal to the old value of its own counter.


Simple solution: SR implementation

In SR the barrier is defined as shown in listing 7. This is only the part where the processes wait for each other, so the contents of this listing are added to a thread’s body.

 

cnt.self++
fa i := 0 to N-1 st i != self ->
  await (cnt.i != cnt.self - 1)
af

Listing 7: Barrier in SR.

Simple solution: Java implementation

The Java equivalent is shown in listing 8.

parent.cnt[self]++;
for (int th = 0; th < parent.N; th++) {
    if (th == self) continue;
    while (parent.cnt[th] == parent.cnt[self] - 1);
}

Listing 8: Barrier in Java.

As mentioned before, the previous solution is a very simple one. If all threads arrive at the barrier at the same time, they all start inspecting the first thread, then they all inspect the second, and so on. This is not very efficient, as all threads need to access the same memory locations. To prevent this, every thread gets its own set containing all other threads; at the barrier it inspects the other threads in a randomized order. If all threads arrive at the barrier at the same time, ideally every thread waits for a different thread, so no memory contention occurs.

More advanced barrier: (pseudo-)SR implementation

The syllabus defines this addition to the algorithm in pseudo-SR, as shown in listing 9.

 

cnt++
var set := {0..N-1} \ {self}
do is_not_empty(set) ->
  extract some i from set
  await (cnt.i != cnt.self - 1)
od

Listing 9: More advanced barrier in SR.

More advanced barrier: Java implementation

The Java algorithm is shown in listing 10.

parent.cnt[self]++;
for (int th = 0; th < parent.N; th++) {
    set[th] = th;
}
setcount = parent.N;
set[self] = set[setcount - 1];
setcount--;

while (setcount > 0) {
    r = (int) (Math.random() * setcount);
    while (parent.cnt[set[r]] == parent.cnt[self] - 1);
    set[r] = set[setcount - 1];
    setcount--;
}

Listing 10: More advanced barrier in Java.

If an application using this algorithm runs for a while, the cnt variable may overflow: after Integer.MAX_VALUE increments, the value cannot be increased any more. To solve this, the algorithm could use the value of the counter modulo a certain integer. The exact value does not matter, as long as each thread can still identify whether another thread has reached the barrier or not.
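Java’s own concurrency utilities, which the renewed module intends to cover, already provide a reusable barrier without busy waiting or counter overflow: java.util.concurrent.CyclicBarrier. A minimal sketch (the class name, thread count and round count are choices made here):

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

public class BarrierDemo {
    static final int N = 4, ROUNDS = 3;
    static final AtomicInteger arrived = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        // The optional barrier action runs once per round, after all
        // N threads have arrived and before any of them continues.
        CyclicBarrier barrier = new CyclicBarrier(N,
                () -> System.out.println("all " + N + " threads at the barrier"));
        Thread[] ts = new Thread[N];
        for (int i = 0; i < N; i++) {
            ts[i] = new Thread(() -> {
                try {
                    for (int r = 0; r < ROUNDS; r++) {
                        arrived.incrementAndGet(); // TNS would be here
                        barrier.await();           // blocks without busy waiting
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println("total arrivals: " + arrived.get());
    }
}
```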

6.3 Monitors

The solutions to the previous problems, mutual exclusion and barriers, are proven correct in the syllabus [3] and are not too hard to implement. However, they are very inefficient due to the busy-waiting principle: a waiting thread consumes all CPU time it can get, while that time could have been used better by another application or thread. With Java’s so-called monitor, a much more efficient solution is available.

As monitors are a conceptual notion within Java – there is no class called Monitor – it is good to start with a detailed explanation of what a monitor is and how it works. Figure 1 provides a schematic view of a monitor [4].

A monitor can be compared with a building that threads enter and exit. When a thread enters the building, it finds itself in the entrance hall (marked 1). To execute a synchronized function it must proceed to the owner’s room, where only one thread at a time may reside, so it may have to wait until the room is empty (marked 2). If a thread needs to interact with hardware or otherwise needs to be suspended, it travels on to the waiting room (marked 3), where it sits until it is activated again (marked 4). After reactivation, the thread waits until it is allowed to re-enter the owner’s room. As soon as the thread has finished executing the synchronized function, it leaves the building (marked 5).

If a method has to run under mutual exclusion, the situation is comparable to all threads trying to enter the owner’s room, competing with each other. A better Java solution for mutual exclusion is thus available by using the monitor.


[Figure: schematic view of a monitor, showing the entry set, the owner’s room and the wait set, with numbered transitions: 1 (entry), 2 (acquire), 3 (release and wait), 4 (wake up) and 5 (release and exit); waiting threads sit in the wait set, the active thread is the owner.]

Figure 1: A Java monitor.

6.3.1 Synchronized methods: synchronized

Java has so-called method modifiers to define certain properties of a method. These include the well-known public, protected, private, abstract, static and final. To realize mutual exclusion between methods, there is a modifier called synchronized [2].

Before a synchronized method starts its execution, it first needs to acquire a lock, symbolized in figure 1 by the transition marked 2. The lock is defined on the containing object (this), or, if the method is static, on the containing class. Once the lock is acquired, the thread starts executing, keeping the lock. At the end of the method the lock is released (marked 5) and another waiting thread may start executing. Any Java implementation must ensure that changes to the state of the object are visible to subsequent method calls.

Listing 11 shows the usage of the synchronized keyword.

thread C {
    while (true) {
        NCS
        helpMethod();
    }
}

static synchronized void helpMethod() {
    CS
}

Listing 11: Using a synchronized method.

Note that ensuring that a method runs under mutual exclusion does not protect the variables used inside the method: any other, non-synchronized method can still change the values of critical variables.
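The effect of the modifier can be made visible with a shared counter. Without synchronized, the increments of the two threads would interleave and updates would be lost; with it, the result is exact. (A sketch; the class name Counter is chosen here.)

```java
public class Counter {
    private int value = 0;

    // value++ is not atomic: it reads, adds, and writes back.
    // The lock on 'this' makes the three steps indivisible.
    public synchronized void increment() { value++; }

    public synchronized int get() { return value; }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) c.increment();
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // without synchronized, usually less than 2000000
    }
}
```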

6.3.2 Synchronized operations: synchronized block

If only a part of a method needs to run under mutual exclusion, it is possible to define that part as a block between curly brackets [2]. In that case the programmer must explicitly state the object on which the lock should be acquired.

public class Application {
    public static void main(String[] args) {
        Application app = new Application();
    }

    public Application() {
        for (int i = 0; i < 10; i++) {
            new ThreadMX(this).start();
        }
    }

    thread ThreadMX(Application parent) {
        while (true) {
            NCS
            synchronized (parent) {
                CS
            }
        }
    }
}

Listing 12: Using a synchronized block.

In the above example, every thread acquires a lock on the parent object before executing its critical section, while the non-critical sections are executed concurrently.

6.3.3 Waiting and waking up: wait/notify

Besides competing, threads can also cooperate. An example from the syllabus, the Producer – Consumer problem, is shown below. In this case one thread, the Consumer, needs to wait for an action of another thread, the Producer, if the production buffer is empty. After the Producer has filled the buffer, it notifies the Consumer, which can then start trying to enter the owner’s room again, because the buffer now contains useful objects.

class Application {
    int N = 20;                // size of the buffer
    int count = 0;             // items currently in the buffer
    Vector<String> buffer = new Vector<String>();

    public static void main(String[] args) {
        Application a = new Application();
    }

    public Application() {
        new Thread(new Consumer()).start();
        new Thread(new Producer()).start();
    }

    thread Producer {
        while (true) {
            synchronized (buffer) {
                while (count >= N) {
                    // buffer full
                    buffer.wait();
                }

                buffer.add(produce());
                if (count++ == 0) {
                    // notify consumer if the buffer was empty
                    buffer.notify();
                }
            }
        }
    }

    thread Consumer {
        while (true) {
            synchronized (buffer) {
                while (count == 0) {
                    // buffer empty
                    buffer.wait();
                }

                consume(buffer.remove(0));
                if (count-- == N) {
                    // notify producer if the buffer was full
                    buffer.notify();
                }
            }
        }
    }
}

Listing 13: A Java implementation of the Producer – Consumer problem.

The above example can easily be extended to a situation with multiple consumers, multiple producers, or both; in those cases, multiple threads are initialized in the Application constructor.
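Written out as compilable Java, with the thread bodies as lambdas and a bounded number of items so the program terminates (the bound, the item strings and the consumed counter are additions of this sketch), listing 13 becomes:

```java
import java.util.Vector;

public class ProducerConsumer {
    static final int N = 20;        // size of the buffer
    static final int ITEMS = 1000;  // sketch addition: stop after this many items
    static int count = 0;           // items currently in the buffer
    static int consumed = 0;        // sketch addition: items taken out
    static final Vector<String> buffer = new Vector<String>();

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < ITEMS; i++) {
                    synchronized (buffer) {
                        while (count >= N) buffer.wait();  // buffer full
                        buffer.add("item-" + i);
                        if (count++ == 0) buffer.notify(); // buffer was empty
                    }
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < ITEMS; i++) {
                    synchronized (buffer) {
                        while (count == 0) buffer.wait();  // buffer empty
                        buffer.remove(0);
                        consumed++;
                        if (count-- == N) buffer.notify(); // buffer was full
                    }
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        consumer.start();
        producer.start();
        producer.join();
        consumer.join();
        System.out.println("consumed " + consumed + " items");
    }
}
```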

6.4 Semaphores

One of the oldest synchronization primitives is the semaphore. Semaphores were already used to secure railway crossings when the Dutch computer scientist Edsger W. Dijkstra introduced the same principle into computing science in 1963. A semaphore is essentially an integer with an initial value n ≥ 0.

Threads can use the semaphore through two methods. The P-method ‘parks’ the semaphore, decreasing its value by one; if the new value would become negative, the semaphore cannot be decreased and the thread waits until it becomes available. The second method is the V-method, which ‘frees’ the semaphore by increasing its value by one.

Java comes with an extension of semaphores, allowing an initial value n < 0. In this case the semaphore first needs to be released a number of times, until n ≥ 0, before a permit can be acquired.
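These operations can be tried out directly with java.util.concurrent.Semaphore. The small demonstration below (the class name SemaphoreDemo is only illustrative) shows both an ordinary semaphore and one with a negative initial value:

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) throws InterruptedException {
        // A semaphore with one permit: P is called acquire, V is called release.
        Semaphore s = new Semaphore(1);
        s.acquire();                        // P: takes the only permit
        System.out.println(s.tryAcquire()); // false: no permit left, does not block
        s.release();                        // V: returns the permit

        // Java also accepts a negative initial value: this semaphore must be
        // released twice before the first acquire can succeed.
        Semaphore neg = new Semaphore(-1);
        System.out.println(neg.tryAcquire()); // false
        neg.release();
        neg.release();
        System.out.println(neg.tryAcquire()); // true
    }
}
```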

Mutual exclusion with a binary semaphore

A binary semaphore, i.e. a semaphore with initial value n = 1, provides the same mutual exclusion as Peterson's algorithm in section 6.1: P marks the beginning of the critical section and V marks its end.

Listing 14 shows how the semaphore is used in SR.

 

sem s := 1
process SemMutex(self := 0 to N - 1)
  do true ->
    NCS
    P(s)
    CS
    V(s)
  od
end SemMutex

 

Listing 14: Mutual exclusion with a binary semaphore in SR.

Java also contains an implementation of semaphores, so the above code can be translated into Java. The P and V methods are called acquire and release respectively.

Semaphore s = new Semaphore(1);

public void run() {
    while (true) {
        NCS
        s.acquire();
        CS
        s.release();
    }
}

Listing 15: Mutual exclusion with a binary semaphore in Java.

Barrier with a split binary semaphore

The solutions to the Barrier problem in section 6.2 were not very satisfying: they were straightforward implementations, tuned only by randomness. This section therefore proposes a better solution.

Initially the barrier is closed, and after a while all processes arrive at it. The arrival of each process is registered by incrementing the variable atBar. The last process to arrive must open the barrier, so that all other processes can continue. Before the barrier is closed again, the processes must be prevented from reaching it a second time; to ensure this, a boolean variable open is introduced. Listing 16 shows the solution in SR, where the angle brackets indicate atomicity.


 

Barrier:
  < await not open then
      atBar++; cnt++
      if atBar = N ->
        open := true
      fi >
  < await open then
      atBar--
      if atBar = 0 ->
        open := false
      fi >
end Barrier

 

Listing 16: The idea behind the barrier based on binary semaphores in SR.

To actually implement this in SR, two semaphores mut and wait are used: mut indicates whether a process is currently inside the mutually exclusive section, and wait lets processes wait until the barrier is closed again, so the next round can start.

 

sem mut = 1
sem wait = 0
do true ->
  TNS
  P(mut)
  cnt++; atBar++
  if atBar < N ->
    V(mut)
  [] else ->
    V(wait)
  fi
  P(wait)
  atBar--
  if atBar > 0 ->
    V(wait)
  [] else ->
    V(mut)
  fi
od

 

Listing 17: Barrier based on binary semaphores in SR.

In Java, it is implemented as shown in listing 18.

Semaphore parent.mut = new Semaphore(1);
Semaphore parent.wait = new Semaphore(0);

while (true) {
    TNS

    parent.mut.acquire();
    cnt[self]++;
    parent.atBar++;
    if (parent.atBar < parent.N) {
        parent.mut.release();
    }
    else {
        parent.wait.release();
    }
    parent.wait.acquire();
    parent.atBar--;
    if (parent.atBar > 0) {
        parent.wait.release();
    }
    else {
        parent.mut.release();
    }
}

Listing 18: Barrier based on binary semaphores in Java.

6.5 Deceptive properties

The implementation of threads in Java, as in other programming languages, has some deceptive properties that can cause unexpected behaviour in a final product.

6.5.1 Spurious wakeups

Due to the way computer hardware and operating systems are implemented, a call to the wait function, as explained in section 6.3.3, may return even though no notify has been called. Similarly, when a programmer uses condition variables, which are not explained in this report, the call may return although the associated condition is not satisfied.

When using such calls, a programmer must keep in mind that the thread can return from them unexpectedly, so waiting for a certain condition should always be done in a loop; recall listing 13.
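The safe pattern can be sketched as follows (the Gate class is only illustrative): even if wait() returns spuriously, the while loop re-checks the condition before the thread continues.

```java
// A gate that threads may pass only after it has been opened.
class Gate {
    private boolean open = false;

    synchronized void passWhenOpen() throws InterruptedException {
        // An 'if' here would be wrong: wait() may return spuriously,
        // so the condition is re-checked every time.
        while (!open) {
            wait();
        }
    }

    synchronized void open() {
        open = true;
        notifyAll();
    }
}
```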

6.5.2 volatile variables

Variables are non-volatile by default. This means that, for better performance, each thread may work with a locally cached copy of a shared variable. The cached value can however be outdated, in which case the thread uses a wrong value and the application as a whole may give unpredictable results. By declaring a variable volatile, Java takes care of synchronizing the cached value with the shared variable every time the variable is used.

When a synchronized method is called, the thread's cached copies of shared variables are synchronized with main memory before the body of the method is executed; after the execution they are synchronized with main memory again.

Because of differences in implementation and the difficulty of testing Java's caching behaviour, it is hard to tell exactly when volatile variables are needed. During testing, no differences between volatile and non-volatile fields have been observed, but differences may become evident in large, memory-intensive applications.
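A case where volatile is clearly needed is a stop flag that one thread writes and another reads in a busy loop; a minimal sketch (the Worker class is only illustrative):

```java
// Without 'volatile', the worker thread might keep reading a stale cached
// value of 'running' and never terminate; 'volatile' forces every read and
// write to go through shared memory.
class Worker implements Runnable {
    volatile boolean running = true;

    public void run() {
        while (running) {
            // perform one unit of work per iteration
        }
    }

    void stopWorker() {
        running = false; // becomes visible to the worker thread
    }
}
```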

6.5.3 Semaphores fairness settings

By default, Java's semaphores are unfair: the last thread that tried to acquire a permit may actually obtain it first, which can even mean that some threads never acquire a permit at all. If this is not the behaviour an application needs, the fairness flag can be set to true when creating the semaphore.
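Fairness is chosen in the constructor and can be inspected afterwards (the class name FairnessDemo is only illustrative):

```java
import java.util.concurrent.Semaphore;

public class FairnessDemo {
    public static void main(String[] args) {
        Semaphore unfair = new Semaphore(1);     // default: no fairness guarantee
        Semaphore fair = new Semaphore(1, true); // waiters acquire in FIFO order
        System.out.println(unfair.isFair());     // false
        System.out.println(fair.isFair());       // true
    }
}
```

A fair semaphore prevents starvation at the cost of some throughput, since every acquire must respect the queue of earlier waiters.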


7 Practical assignments

7.1 Mutual exclusion between multiple threads

Peterson's algorithm, as described in section 6.1 on page 14, realizes mutual exclusion between exactly two threads. This first practical assignment already exists in the course and asks the students to extend the algorithm into a solution that realizes mutual exclusion between N = 2^K threads. The directions introduce the idea of organizing the threads in a tree, as shown in figure 2, where the threads are labelled ‘Proc’, the corresponding term in SR.

The tree structure itself is essential to the solution. When a thread is about to enter its critical section, it requests access at its parent. In figure 2, for example, thread F requests access at node 5 and thereby competes with thread E; node 5 decides which of the two may continue. In the example, thread F has ‘won’ the competition, as shown by the thick line. Before thread F can actually start its critical section, node 5 must itself obtain permission to continue. It requests this at its parent, node 2, in competition with node 6. This competition travels up the tree, so that in the end only one thread has permission to execute its critical section, indicated by an uninterrupted line from node 0 down to the thread. In the example this is thread F.

[Figure: a binary tree with inner nodes 0–6 and threads Proc A–H at leaves 7–14.]

Figure 2: Organization of threads in a tree, where thread F can execute its critical section.

7.1.1 Solution

The threads are organized in a tree, but in the solution the tree is represented by two linear arrays: one holding Peterson's active flags and one holding the local last variables. The tree is mapped onto an array as shown in figure 3.

The array active contains a boolean field for every thread and inner node; this field indicates whether the thread (or, for a node, one of its descendant threads) is trying to execute its critical section. The last array records for every inner node which of its children was the last to request access.


For every inner node there are now two active fields (one per child) and one last field available, exactly what is needed to execute Peterson's algorithm.
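The index arithmetic behind this encoding can be sketched as follows (the helper class TreeIndex is only illustrative): with p = 2^k leaves there are n = 2p - 1 nodes, node i has parent (i - 1) / 2, and thread j occupies leaf n - p + j.

```java
// Index arithmetic for a complete binary tree stored in one array.
class TreeIndex {
    static int parent(int node)     { return (node - 1) / 2; }
    static int leftChild(int node)  { return 2 * node + 1; }
    static int rightChild(int node) { return 2 * node + 2; }

    // Leaf of thread j when there are p = 2^k threads (n = 2p - 1 nodes).
    static int leaf(int p, int j)   { return (2 * p - 1) - p + j; }
}
```

For the tree of figure 2 (p = 8), thread F (j = 5) occupies leaf 12, whose chain of parents is 5, 2, 0, matching the figure.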

[Figure: the tree of figure 2 with the active fields of nodes 0–14 stored in one array and the last fields of inner nodes 0–6 in another.]

Figure 3: Transformation of a tree into an array.

import java.util.Random;

public class Peterson
{
    public int k = 3;                    // Depth of the tree
    public int p = (int) Math.pow(2, k); // Number of processes
    public int n = 2 * p - 1;            // Number of nodes

    // Number of characters for each process
    public final int times = 20;

    // Counter of currently running critical processes
    int incrit = 0;

    boolean[] active = new boolean[n];
    int[] last = new int[n - p];
    ThreadMX[] threads = new ThreadMX[p];

    public static void main(String[] args)
    {
        Peterson pet = new Peterson();
    }

    public Peterson() {
        for (int i = 0; i < n; i++) {
            active[i] = false;
        }
        for (int j = 0; j < p; j++) {
            threads[j] = new ThreadMX(j);
            new Thread(threads[j]).start();
        }
    }

    // Activate a node
    void activate(int node) {
        if (node != 0) {
            int parent = (node - 1) / 2;

            // Left or right child (0 or 1)
            int self = 1 - node % 2;
            int other = (self == 0 ? node + 1 : node - 1);

            active[node] = true;
            last[parent] = self;

            while ((last[parent] != 1 - self) && (active[other]));
            activate(parent);
        }
    }

    // Deactivate a node
    void deactivate(int node) {
        if (node != 0) {
            int parent = (node - 1) / 2;
            deactivate(parent);
            active[node] = false;
        }
    }

    // Critical section
    void crit(int self, int sleeptime) throws InterruptedException {
        incrit++;
        // Sleep to get more realistic results
        Thread.sleep((long) sleeptime);
        System.out.print(self + " ");
        if (incrit != 1) {
            System.out.println("Violation!");
        }
        incrit--;
    }

    class ThreadMX implements Runnable
    {
        int leaf;
        int time;
        int self;
        Random random;

        public ThreadMX(int self) {
            this.self = self;
            this.leaf = n - p + self;
            random = new Random();
        }

        public void run() {
            try {
                for (time = 1; time <= times; time++) {
                    // Non-critical section
                    Thread.sleep((long) random.nextInt(10));

                    // Entry
                    activate(leaf);

                    // Critical section
                    crit(self, random.nextInt(100));

                    // Exit
                    deactivate(leaf);
                }
            } catch (InterruptedException e) {
                // Terminate the thread when interrupted
            }
        }
    } // ThreadMX
} // Peterson

Listing 19: Java solution for practical assignment 1.

7.2 Car Crossing

Assume that there is a crossing with K different directions. The crossing is modelled by threads, each representing a car, that continuously execute the following loop:

while (true) {
    entry();
    cross();
    depart();
}

Listing 20: General behaviour of a car thread.

Each car has its own direction dir. To prevent accidents, only cars travelling in the same direction may cross at the same time. A shared variable current is introduced, which records the direction that is currently allowed to pass.

There are extra requirements: every car that wants to cross must eventually be able to do so (the solution must be fair), and the solution must keep working correctly when a new car arrives at an empty crossing.


7.2.1 Solution

The proposed solution works with two virtual waiting queues per direction. They are called ‘virtual’ because they are not actually implemented as queues: each thread waits for a certain condition, which symbolizes that it is in a certain queue. The term ‘queue’ is only used to explain how the solution works.

Figure 4 shows the global flow for one thread, i.e. one car. All threads together form the complete system.

[Figure: flow chart of one car. Arrive; if the car's direction is activated, join the 2nd queue, otherwise the 1st queue; set the turn ID and increment the number of waiting cars; wait until the car's direction is activated within the same turn; cross; decrement the number of allowed cars; the last allowed car switches direction and increments the turn ID; leave.]

Figure 4: Flow of the Crossing solution.

As one can see, a car arriving at the crossing can do two things. It ends up in a so-called ‘2nd queue’ if its direction is currently activated, meaning its traffic light is green. This ensures that every car gets a chance to cross: only cars that were already waiting for the light to turn green may pass. Cars move from the ‘2nd queue’ to the ‘1st queue’ when the turn changes, i.e. when their light turns red. A car that arrives at a red light joins the ‘1st queue’ immediately.

When a turn changes, the allowed direction changes with it, and all cars waiting in the newly chosen direction start crossing. The last car from the ‘1st queue’ of the newly activated direction switches the turn again.

To keep the system running when some directions are empty, extra checks ensure that only a direction with waiting cars is chosen; and if the whole crossing is empty, an arriving car does not end up in a queue but may cross immediately.
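The queues themselves can be built from semaphores used as gates: a thread passes a gate by acquiring and immediately releasing the semaphore, so a gate with one permit lets all waiters through one by one, while a gate with zero permits blocks them all. A minimal sketch of this pattern (the Turnstile class is only illustrative):

```java
import java.util.concurrent.Semaphore;

// A 'turnstile': an open gate (1 permit) lets every thread pass in turn,
// a closed gate (0 permits) blocks all of them.
class Turnstile {
    private final Semaphore gate;

    Turnstile(boolean open) { gate = new Semaphore(open ? 1 : 0, true); }

    void pass() throws InterruptedException {
        gate.acquire();  // blocks while the gate is closed
        gate.release();  // immediately let the next waiter through
    }

    boolean tryPass() {
        if (gate.tryAcquire()) { gate.release(); return true; }
        return false;
    }
}
```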

The source code for the proposed solution is shown in listing 21 on page 31.

import java.util.concurrent.Semaphore;

class Crossing {

    // Total number of directions and processes (cars)
    public final int directions = 4;
    public int p = 100;
    public CarThread[] threads = new CarThread[p];

    public int dir = 0;
    // Cars waiting at the traffic light (2nd queue not included)
    public int[] waiting = new int[directions];
    // Total waiting cars at the crossing (excluding 2nd queues)
    public int totalwaiting = 0;

    // Semaphores to realize a queue
    public Semaphore[][] queues = new Semaphore[directions][2];

    public static void main(String[] args) throws InterruptedException {
        Crossing c = new Crossing();
    }

    public Crossing() throws InterruptedException {
        for (int i = 0; i < directions; i++) {
            waiting[i] = 0;

            // Initially block each direction.
            queues[i][0] = new Semaphore(0, true);

            // Open entrance to the first (or front) queue.
            queues[i][1] = new Semaphore(1, true);
        }

        // Open direction 'dir' to start with.
        queues[dir][1].acquire();
        queues[dir][0].release();

        // Start your engines
        for (int j = 0; j < p; j++) {
            threads[j] = new CarThread(j);
            new Thread(threads[j]).start();
        }
    }

    void entry(int d, int self) throws InterruptedException {
        if (totalwaiting > 0) {
            // in 2nd queue
            queues[d][1].acquire();
            queues[d][1].release();
        }

        // in 1st queue

        // Check if the crossing was empty; then choose a new direction
        // to prevent deadlock.
        synchronized (this) {
            totalwaiting++;
            waiting[d]++;

            if (totalwaiting == 1 && dir != d) {
                nextDirection();
            }
        }

        // Leave the 1st queue when our direction is enabled.
        queues[d][0].acquire();
        queues[d][0].release();
    }

    // Choose the next non-empty direction.
    void nextDirection() throws InterruptedException {
        // First deactivate the currently active direction.
        queues[dir][0].acquire();
        queues[dir][1].release();

        // Find a non-empty direction (or do nothing when the crossing is empty).
        while (waiting[dir] == 0 && totalwaiting > 0) {
            dir = (dir + 1) % directions;
        }

        // Activate the newly selected direction.
        queues[dir][1].acquire();
        queues[dir][0].release();
    }

    void depart() throws InterruptedException {
        synchronized (this) {
            totalwaiting--;
            waiting[dir]--;

            // If we were the last car to cross, change direction.
            if (waiting[dir] == 0) {
                nextDirection();
            }
        }
    }

    class CarThread implements Runnable {
        int d, self;

        public CarThread(int self) {
            this.self = self;
        }

        public void run() {
            try {
                for (int i = 0; i < 20; i++) {
                    d = (int) (Math.random() * directions);
                    entry(d, self);

                    // Cross critical section here.
                    if (dir != d) {
                        System.out.println("ACCIDENT!");
                        System.exit(0);
                    }
                    System.out.println("Process " + self + "; direction " + d);

                    depart();
                }
            } catch (InterruptedException e) {
                // Terminate the thread when interrupted
            }
        } // run
    } // CarThread
} // Crossing

Listing 21: Solution for the Car Crossing problem.

7.3 Sliding Window protocol

A very important aspect of computer science is fault tolerance: a small error must not crash a whole system. For example, if a user sends data over an unreliable connection, he still expects the receiver to obtain the original data, not an altered version of it.


This practical assignment uses an imaginary medium that can lose, duplicate and reorder the messages it transports. To provide the user with a reliable connection on top of it, the Sliding Window protocol has been developed. The student is asked to create a simple chat client that communicates over this imaginary medium; messages are sent character by character.

7.3.1 The protocol

The idea behind the protocol is quite simple. Items within a window of size R are sent to the receiver, which sends an acknowledgement for each item it receives. Until the sender receives such an acknowledgement, it keeps resending the items within the window. When an acknowledgement is received, the window slides to the position after the acknowledged item.
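The sender-side bookkeeping of this idea can be sketched as follows (the SenderWindow class and its method names are only illustrative, not part of the assignment):

```java
// Bookkeeping for the sender's window: indices [base, base + R) may be sent,
// and an acknowledgement slides the window past the acknowledged item.
class SenderWindow {
    private final int R;  // window size
    private int base = 0; // first item not yet acknowledged

    SenderWindow(int R) { this.R = R; }

    boolean maySend(int index) {
        return base <= index && index < base + R;
    }

    void acknowledge(int index) {
        // Slide the window to the position after the acknowledged item.
        if (index >= base) base = index + 1;
    }

    int base() { return base; }
}
```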

The properly functioning case. When the medium is reliable and no messages are lost, the Sliding Window protocol works in a quite straightforward way, as shown in figures 5 and 6. In this case, the protocol behaves exactly like the idea described above.

[Figure: items 0–14 with the window over the first few items.]

Figure 5: Send items within window.

[Figure: acknowledgements 0–3 returned for the received items.]

Figure 6: Send acknowledgements for received items.


Losing items. Because we deal with an unreliable medium, an item sent by the sender may get lost somewhere, as shown in figure 7. The receiver then sends acknowledgements only for the items received before the first gap occurs. In figure 8, item 7 has been received, but acknowledgements are only sent for the items up to and including 5.

[Figure: items being sent; one item is lost in transit.]

Figure 7: Losing an item while sending.

[Figure: items 0–5 and 7 received; acknowledgements sent only for 0–5.]

Figure 8: Send acknowledgements for all received consecutive items.


Losing acknowledgements. Acknowledgements can also get lost while travelling over the medium, as shown in figure 10. For the sender this looks the same as a lost item, so the window only slides up to the item whose acknowledgement was lost; that item is then sent again.

[Figure: items within the window being sent normally.]

Figure 9: Send items normally.

[Figure: acknowledgements 0–9 returned; one acknowledgement is lost in transit.]

Figure 10: Losing an acknowledgement while sending.

Reaching the end of the message. When the end of the message is reached, there are not enough items left to fill a window, so the window shrinks. The receiver's window shrinks first, because it shrinks immediately after receiving items; see figure 12.
