
Bachelor Informatica

Exploiting human mobility with the CouchDB replication protocol

Bjorn Hato

June 23, 2020

Supervisor(s): Sander van Splunter, Rowan de Graaf

Informatica, Universiteit van Amsterdam


Abstract

The use of mobile internet has been growing significantly worldwide, which is causing more and more connection-seeking devices to be added to society. Consequently, at large-scale events, the frequency spectrum can get overcrowded and a large amount of interference occurs. It becomes impractical to set up traditional wireless networks due to the interference. Therefore, an alternative method is needed.

This study attempts to provide such an alternative method by studying a specific use case: the inventory management system of a festival. In this thesis it is proposed to exploit human mobility to mitigate the connection gap between nodes. The implementation of the proposed networking method is based on the CouchDB replication protocol. Since this study is considered to be the first of its kind, the goal is to gain more insights and to show the feasibility of the implementation.

To demonstrate the capabilities of the network, several experimental test cases are executed. These test cases show that the implementation meets the set requirements and behaves as expected. Therefore, it is a feasible solution for use in practice. As expected, data could be shared, edited and deleted successfully. The implementation is also considered to be easy to use, easy to deploy and fiscally feasible. However, the implementation also presented (possible) shortcomings and performance bottlenecks. Therefore, further development is recommended.


Contents

1 Introduction
1.1 Context Of The Study
1.2 Research Question
1.3 Summary
2 Related Work
2.1 Data Offloading
2.2 Protocol Adaptions
2.3 Ad-Hoc Networking
2.4 Visible Light Communication
3 Use Case
3.1 Current Inventory Management
3.2 Design
4 Implementation
4.1 Choosing the Database
4.2 Tools
4.2.1 CouchDB
4.2.2 PouchDB
4.2.3 Ionic Framework
4.2.4 Raspberry Pi 3 Model B
4.2.5 Smartphone Devices
4.3 System architecture overview
5 Experiments
5.1 Setting Up The Topology
5.2 Test Case A: Basic Addition
5.3 Test Case B: Complex Addition
5.4 Test Case C: Update Conflict
5.5 Test Case D: Complex Update Conflict
5.6 Test Case E: Deletion
5.7 Test Case F: Complex Deletion
6 Results
7 Discussion
8 Conclusion


CHAPTER 1

Introduction

1.1 Context Of The Study

The use of mobile internet has been growing significantly worldwide. In 2019, the world had 3.2 billion smartphone users and it is estimated that this number will grow to 3.8 billion by 2021 (Holst, 2019). Also, the need for internet connectivity is driving up the number of WiFi networks. For public WiFi networks alone, forecasts suggest that there will be 362 million hotspots available around the world in 2019, a quadrupling since 2016 (S. Liu, 2019). Furthermore, the predictions indicate that growth will continue into the early 2020s. Finally, the IoT is another fast-growing field in technology. Estimations show that 75.44 billion devices will be connected via the internet by 2025 (Department, 2019). An effect of these growing technologies is that more and more connection-seeking devices are added to society.

There are situations where the aforementioned increase of wireless devices can result in an overcrowded frequency spectrum, causing degradation of a (cellular) network's performance. This is especially the case for large-scale events, where a lot of people are gathered in a relatively small area. Most of these people carry a smartphone with wireless functionalities. There are only a few hundred MHz available for mobile phone use (Rogowsky, 2012). Thus, the large number of visitors is often too much for the cell tower(s) covering that area. This leads to an increase in connection timeouts and failures. A study examining cellular network performance during crowded events found that pre-connection failures increase by 100-5000 times compared to the average on regular days (Shafiq, Ji, Liu, Pang, & Wang, 2012).

Furthermore, large-scale events intensify a visitor's usage of file sharing and/or social media applications, which are data-heavy (Arbel, 2016; Shafiq et al., 2012; Erman & Ramakrishnan, 2013). The study on cellular network performance states that "up-link traffic volume increases by 4-8 times, and both downlink traffic volume and the number of users increase by 3 times, during the crowded events as compared to their average on routine days". In some cases, cellular service is simply not available due to the location of the event.

For many of these large-scale events it is important to have a reliable network connection for communication and data-sharing purposes. As mentioned above, cellular networks often fail in these settings. As will be discussed, setting up local area networks using WiFi technology also proves to be challenging in overcrowded frequency spectrum environments. With the right financial resources these challenges can be overcome. Although it is financially demanding, deploying a large number of access points can assist in meeting throughput requirements for the network at a large-scale event. For example, about 1800 access points were used in the Mercedes-Benz Stadium during Super Bowl 2019. The access points facilitated 24.05 TB of traffic for 48,845 unique users on the stadium's network (Kapustka, 2019). Most of these access points were placed underneath the stadium seats, as can be seen in Figure 1.1.


However, typically this type of solution can't be realised due to a lack of proper financial resources. Another reason can be found in the fact that deploying many access points is not feasible in some cases. For example, in natural disaster areas, communication and electric infrastructures are often destroyed, making it hard to set up such a network. Also, for some cases such as festivals, it is preferable to avoid cluttering the environment to preserve a certain appearance.

Figure 1.1: A WiFi access point that was placed underneath a seat in the Mercedes-Benz Stadium to allow better throughput performance during Super Bowl 53. Credit: Paul Kapustka, mobilesportsreport.com

1.2 Research Question

Not all event organizers are able to deploy a network for various reasons. However, in many cases a local network is needed to share data among nodes. Considering that cellular and WiFi networks are unreliable, this study takes a worst case approach in finding a solution by assuming that wireless technology can only be used for short range connections. Thus, it can’t be used to link nodes within a network. An alternative method is required. This study attempts to provide such an alternative method by considering human mobility to mitigate the connection gap between nodes. Therefore, the research question is: “How can human mobility be exploited for data dissemination in overcrowded frequency spectrum environments?”.

Little is known about using such an alternative networking method in overcrowded frequency spectrum environments. A thorough search of the related literature yielded only a few related articles. To gain more insights related to the research question, a specific case is investigated: a large indoor music festival. At this festival, the inventory is tracked by hand. It is preferable that this process is automated. Therefore, a network needs to be deployed to be able to disseminate data across the festival terrain. However, it is assumed that deploying a traditional network is not possible at this festival. Therefore, an alternative solution which exploits human mobility is proposed in this paper. To demonstrate the feasibility of the proposed solution, several test cases using the proposed solution are executed.

1.3 Summary

Summarizing, the continuing growth in the number of wireless devices is causing a more crowded frequency spectrum. This is especially the case for large-scale events. This thesis studies a specific use case in an attempt to find an alternative solution for the challenges found in deploying a network in overcrowded frequency spectrum environments. The case being studied is situated at a large music festival and, since it is assumed that this paper is the first of its kind, the objective is to gain insights into the use of human mobility for data dissemination in practice. Furthermore, by running several test cases it is shown that the proposed solution meets the requirements for this use case and that the proposed solution is feasible for use in practice.


CHAPTER 2

Related Work

Studies analyzing network performance degradation in overcrowded frequency spectrum environments exist, including (Shafiq et al., 2012; Sommers & Barford, 2012; Erman & Ramakrishnan, 2013; Marques-Neto et al., 2018). Most, if not all, studies conclude that the cause of the degradation is found in the fact that the number of wireless devices is increasing. Different approaches to resolve and/or mitigate the congestion problem are proposed.

2.1 Data Offloading

Several studies propose network infrastructure adaptions, which enable cellular traffic offloading to counter the cellular network connection problem. (Li, Shi, Chen, & Zhao, 2015) attempts to reduce the congestion of the frequency spectrum by implementing opportunistic communication. Traditional infrastructures provide a connection through a direct device-to-cell-tower link. To save bandwidth of the cellular network infrastructure, (Li et al., 2015) targets only a small fraction of devices using the NodeRank algorithm. NodeRank ranks the importance of a node and is based on the regularity of human mobility. These targeted devices then propagate data further to other subscribed users when those users are in range. A similar strategy is found in (Dimatteo, Hui, Han, & Li, 2011), where data offloading is done using multiple public WiFi access points city-wide.

(Wudneh, 2018) managed to reduce the number of rejected users from 45.9% to 2.3% for voice users and 8.2% for data users. This was done by offloading 42% of the 3G network to WiFi. (Tan & Zeydan, 2018) proposes a framework that combines serving users through a cellular or WiFi infrastructure with device-to-device (D2D) communication to exploit user proximity in crowded environments. "The simulation results depict that up to 168% and 200% increase in user satisfaction and throughput can be achieved under high network load scenarios at optimal D2D density." Another study addressing D2D communication for traffic offloading in dense environments is (Abbas, Dawy, Hajj, Sharafeddine, & Filali, 2017). Significant gains are presented for various scenarios in a stadium setting.

The studies mentioned above differ from this study in the fact that in this study it is assumed that the possibility to establish a traditional network connection between nodes does not exist. Furthermore, (Li et al., 2015) uses the regularity of human mobility, not human mobility itself.

2.2 Protocol Adaptions

Other studies attempt to overcome the signal interference by optimizing routing protocols. (Xu, Shi, Luo, Zhao, & Shu, 2011) shows performance gains for ZigBee signals, which suffer interference from WiFi signals, when using a multi-channel approach. The use of a mechanism called Cooperative Busy Tone (CBT) produces similar results for (Zhang & Shin, 2011). (Zhou, Stankovic, & Son, 2006) finds that using spread spectrum techniques as a solution for the crowded spectrum is "far from enough."

(Lim & Kai, 1999) proposes to use awareness of the network environment by detecting disconnection, high error rates and collisions. Upon detection, the media access control layer adapts accordingly to reduce performance degradation. Results from simulations show that the adaptive versions of existing protocols show significant improvements over the basic versions. (Ahmad, Kumar, & Shekhar, 2012) states that popular real-time applications such as live television, video conferencing, online gaming, etc. are not subject to congestion control, which may cause problems for the stability of the internet as the popularity of those applications continues to grow. A model where each requesting client gets an individual iterative server is proposed.

(M. Liu & Wu, 2008) studies efficient spectrum sharing based on the theory of congestion games. In these games, selfish behaviours result in a socially desirable outcome. By reformulating existing protocols as congestion games, practical protocols for spectrum sharing between multiple access points are constructed. The studies in this section show that adapting protocols can yield positive results. However, if a network is unable to establish a connection between nodes, as is assumed for this thesis, using adaptive protocols will not have an impact on the network's performance.

2.3 Ad-Hoc Networking

Other studies investigate how and if the network connection problem can be overcome by deploying delay-tolerant networks for communication. (Vukadinovic, Dreier, & Mangold, 2014) analyzes human mobility in entertainment parks to understand network requirements for opportunistically communicating ad-hoc networks. However, successfully deploying such ad-hoc networks at a large-scale event in general is not feasible due to the large amount of interference, according to (Kerssens, 2019).

LifeNet (LifeNet, 2011) is an open-source communication solution designed for post-disaster situations, in which the existing communication infrastructure is failing. Since it is more important to be able to communicate during post-disaster situations, LifeNet favors reliability of connectivity, robustness to failures and ease of deployment over bandwidth requirements. This is achieved by implementing a 'flexible routing' protocol and using an ad-hoc networking approach. Although LifeNet aims to handle similar situations, using an ad-hoc networking approach for this study would not be feasible since the number of mobile nodes is low.

2.4 Visible Light Communication

Experimental results show that using a hybrid WiFi/VLC (Visible Light Communication) network outperforms a regular WiFi network for crowded environments in terms of throughput (Shao et al., 2014, 2015; Nikam, Shinde, & Joshi, 2016). However, VLC highly depends on the accurate alignment between VLC transceivers. Therefore, it can be concluded that deploying such a network at a festival terrain would be too impractical. Furthermore, mobility would then be limited.


CHAPTER 3

Use Case

R. Yin defines a case study as "an empirical enquiry that investigates a contemporary phenomenon within its real-life context, especially when the boundaries between phenomenon and context are not clearly evident" (Yin, 2008). (Runeson, Host, Rainer, & Regnell, 2012) states that case studies offer an approach that does not require strictly controlled experiments and provide a deeper understanding of the phenomena that are being studied.

This thesis investigates a specific use case (the inventory management system at a music festival) and is not a case study as defined above. Although it is not studied within its real-life context (a festival), studying this specific use case can provide an answer to the research question. Hence, this paper makes contributions to the body of knowledge on the subject.

3.1 Current Inventory Management

An interview with festival producer R. de Graaf was conducted to gain more insight into the current inventory management method and to assess what requirements an automated system should have. A festival production can be broken down into several branches: for example, the security and safety of the festival, booking and handling artists that will perform at the festival, and working with a catering/hospitality company who will supply all beverages for the festival. This thesis is focused on the latter.

Delivery by the supplier is done by filling unique cooling containers for each bar with the right composition of beverages. How a cooling container is filled is based on an estimation of what that particular bar will need. A bar's need is predicted by considering the type of crowd that is expected to visit the festival, the artists that will be performing on stages around the bar and the placement of the bar with regard to the festival's terrain.

So, tracking transactions can improve these predictions. Consequently, better predictions translate to a more accurate order to the supplier, which provides financial benefits. However, the current method for inventory tracking is to do it by hand. At the beginning of a festival a total count is done and when the festival is finished, another count is done. Therefore, mutations that occur during the festival are not registered, unless an electronic pay system is used. Nonetheless, electronic pay systems only cover mutations that are caused by paying visitors at the bars.

During a festival it is possible that a certain bar runs low on a particular beverage. This is then solved by restocking. Restocking can be done by sending more products from the production office and/or by migrating products from another bar to the endangered bar. In practice, restocking by shifting products between bars is done more often due to time efficiency. The more this occurs, the more inaccurate the intelligence on the overall inventory state becomes. This stems from the fact that the mutations between bars for restocking purposes are generally not registered and, if they are, it is done by writing them down by hand. Normally, these notes are then processed after the festival.

3.2 Design

A system that can track the state of the inventory without requiring a traditional network is needed. A list of requirements for this system was extracted from the interview:

• Data can be shared, edited and deleted.
• The presented solution is easy to use.
• The presented solution is easy to deploy.
• The presented solution is fiscally feasible.

The production office (PO) is the only node for which it is crucial to have data of all the bars. Therefore, the design attempts to maximize the probability that a stored transaction reaches the PO for processing. Furthermore, using a new system in practice can cause an increase in the workload of the employees. Because the working conditions at large festivals are already hectic, the design also aims to minimize the additional workload. Finally, this design focuses on transferring data. Therefore, the processing of the data to predict and anticipate the state of the inventory is not taken into account in this design, as it is outside of the scope of this thesis.

The production at festivals is often done by an external company. This company gives its employees several roles during an event. One of the roles an employee could be assigned is the 'runner' role. Runners are responsible for the following: supplying the employees that are preparing the beverages that will be sold, transporting garbage and packages to the backside of the bar, keeping the cooling container and the bar's backstage area organized, and tracking the status of the inventory and restocking it when necessary.

In contrast to other employees, these tasks force runners to spend a considerable amount of time migrating across the festival site. Therefore, they are a viable option to be used as data carriers. It is assumed that all runners have a smartphone or are provided with one. To minimize the additional workload for runners, this system will automatically dump and fetch data using these smartphones.

Every bar has a computer (node) that collects the data of that particular bar and saves it in a database. This computer can also make itself connectable to other devices. When a runner is in range of a node, the runner’s smartphone automatically synchronizes with the node. In this transaction of data, the smartphone can both transmit and receive data from the node. This allows data to travel from one node to another. Hence, the system uses one database that is distributed over multiple nodes, making it a distributed system.

For example, a runner starts working and has no data at that point. The runner is ordered to go to bar 1 and evaluate if the cooling container is working properly. As a result, the runner's device is now in range of node 1. The device and node sync their data, which means that both the node and the device have the same data. After finishing the given tasks at bar 1, the runner proceeds towards bar 2. Because the runner is in range, the device and node 2 now sync as well. The device receives data from node 2 but also sends the data it already has, which is the data from node 1. Hence, node 2 now holds the data of both node 1 and node 2.

Two-way synchronization causes more data to be transmitted, which means that synchronizing costs more time. This time might not always be available due to the hectic working conditions that the runners face. However, synchronizing in both directions increases the probability of data reaching the PO. Continuing the previous example, if a second runner has no data and then synchronizes with node 2, this runner gains the data of node 1 and node 2 in one synchronization. Instead of relying on one runner, the PO now has two runners who can deliver the data of nodes 1 and 2, thus increasing the probability.

All nodes in this design are able to operate individually. Therefore, if a node were to crash, it would not impact other nodes directly. In the worst case, only the data of the crashing node is lost. However, this data could be restored to some extent. Depending on how recently the node synchronized its data with a runner, the data up to that point in time could be restored. If the node at the production office were to crash, it would be more problematic, since the data at that node is used for further processing. Nonetheless, by re-synchronizing with runners, the data could be restored.

The performance depends on many factors which can be divided into two types: circumstantial factors and technical factors. Circumstantial factors are hard to measure and/or control. Examples are the weather and the work ethic of employees. Furthermore, there are many circumstantial factors that could be adjusted, but are considered as a given, for instance the number of employees, the size of the festival and the number of bars at the festival. Technical factors are based on the equipment that is used. Processing speed of the smartphones, connection speed, the time it takes to connect, battery life of the smartphones and memory capabilities are examples of these technical factors.

Many wireless connection technologies exist. This design focuses on the use of WiFi technology. Since it is a widely used technology, it is assumed that the use of WiFi will significantly contribute towards designing an easy-to-use and easy-to-implement system. However, WiFi technology faces limitations in overcrowded frequency spectrum environments. WiFi performance is generally superior to cellular network performance (Sommers & Barford, 2012). However, a study on WiFi performance in large-scale and dense environments shows that WiFi performance is also affected negatively by the large number of wireless devices (Bosch, Wyffels, Braem, & Latré, 2017). The study investigates a festival with 80,000 visitors over an area of 0.3 square kilometers. The festival's network "is an IEEE 802.11n-based wireless mesh consisting of 37 devices in 15 nodes working as a network backhaul."

Two main factors that cause the degradation are given. First, active and inactive wireless devices interfere by overcrowding the frequency spectrum. Other radio frequency equipment can also cause interference. Therefore, the more of these devices, the higher the interference. Other studies also back this conclusion (Gummadi, Wetherall, Greenstein, & Seshan, 2007; Bosch, Latré, & Blondia, 2018; Sagari et al., 2013; Michaloliakos, Rogalin, Zhang, Psounis, & Caire, 2016). Second, the network's topology should be carefully designed. For example, if nodes are placed on the ground, signals trying to reach a node could have more difficulty reaching it due to interference from people and user devices operating between the nodes. If nodes are placed at higher points, that interference could be less. Also, placing a node at the edge of a festival's site, close to a stage or indoors affects the node's performance differently. Therefore, to be able to establish a functioning WLAN network, the study concludes that a well-planned topology is essential for large-scale events.

The results show that nodes at the edge of the festival site have an average signal strength of -66 dBm. For nodes closer to stages, but which still have sufficient line of sight, the average signal strength is -73 dBm. Indoor nodes have an average signal strength of -78 dBm. This means that, based on signal strength charts, nodes at the edge have a good and reliable average signal strength (What is a RSSI?, n.d.; Tumuso & Newth, 2018; Understanding RSSI, n.d.). Nodes closer to the stages have an okay average signal strength that can be used for operations such as browsing and e-mailing. Indoor nodes have a bad signal strength which will probably not suffice for most services other than connecting to the network.


This suggests that network designers for festivals should try to keep as many nodes as possible at the edges of the festival site to have the highest possibility of a good connection. Furthermore, the writers of (Bosch et al., 2017) conclude that a large number of packets need to be retransmitted due to packet losses and that the interference is not continuous. Moreover, results show that the average latency for one hop is 2.5 seconds, "making bidirectional communication impossible". The average for two hops is 6 seconds. For hop counts of three and more the latency "increases up to 10 seconds and more. On the other hand, the standard deviation is extremely large as well. For one hop, around 50% of the pings were below 100ms, which is still acceptable for most applications. This percentage decreased down to 10% for more than two hops and was for two hops around 30%".

In addition, the interference also affects the loss rate. For nodes one hop away the loss rate is above 50%, around 80% for nodes two hops away, close to 90% for three hops and even more for nodes further away. Besides the interference, the reason given for this is that the CSMA/CA scheme used to control channel access uses a backoff timer that increases the window every time interference is detected. Thus, the latency can then increase relatively fast. On this point, (Bosch et al., 2017) concludes that this MAC layer scheme should be improved for dense frequency spectrum scenarios, which is also suggested by (Abinader et al., 2014).

The aforementioned studies report on difficulties using WiFi in dense frequency spectrum environments. However, in those studies, the focus lies on creating a network in which nodes are directly connected through WiFi. The main problem for those networks is that the distance between the nodes is too large. In this design, this problem is eradicated by using runners. Therefore, it is assumed that using WiFi to connect smartphones to the nodes should not cause similar connection challenges. Furthermore, the topology of the network in this design will impact the performance of the network differently. When trying to connect nodes directly via WiFi as is done in (Bosch et al., 2017), node placement is important to have as strong a connection as possible. With this design, nodes should be placed at strategic positions so that the probability of synchronization is as large as possible.


CHAPTER 4

Implementation

In the previous chapter, the overall system design strategy was covered. In this chapter the implementation of that design is discussed. The implementation is based on the use of CouchDB and its replication protocol. The first section describes the process of choosing a database. Thereafter, CouchDB and the other tools that are used are discussed. Finally, the system architecture overview is given.

4.1 Choosing the Database

A vast number of databases exist, each offering different solutions for different contexts. Therefore, to assess which database best fits this implementation, the paper by (Lourenço, Cabral, Carreiro, Vieira, & Bernardino, 2015) is used. Lourenço et al. use the CAP theorem, which was first presented by (Brewer, 2000) and later proven by (Gilbert & Lynch, 2002), to analyze databases. The CAP theorem implies that a distributed system must choose two of the following three properties: consistency, availability and partition tolerance. Their relationship is illustrated in Figure 4.1.

Consistency is often described as the condition where all nodes see the same data at the same time. Consequently, nodes will have to be updated when data changes before it can be read. Such an event will cause nodes to be unavailable. Availability refers to the property that a distributed system remains operational the entire time. Every request is answered regardless of the state of the node, meaning that the answer is allowed to be out of date. Finally, partition tolerance is related to the ability of a system to keep operating even when one or more nodes fail, because it is possible to split the database over multiple servers.

However, according to (Kleppmann, 2015) many definitions of the CAP theorem exist due to its abstract nature. A variety of interpretations can be made, which can lead to inconsistencies and misunderstandings. Therefore, when using different interpretations of the definitions in the CAP theorem, one could conclude that a different combination of the terms - availability, consistency and partition tolerance - should be used to classify the system described in this thesis. Furthermore, (Kleppmann, 2015) recommends avoiding the CAP theorem to justify design decisions and proposes an alternative framework for reasoning.

To prevent such a misunderstanding, the three definitions are put into the context of this thesis:

• Consistency: this would mean that every node has the same data, at all times.

• Availability: if a runner is in range of a node, the device of the runner will always be able to connect to that node for synchronization purposes. The state of that node’s database does not matter as long as the node is always available for synchronization.


Figure 4.1: The relationship between the three terms that are used in the CAP theorem. Some database brand examples are added to illustrate the variation they implement. This Figure is taken from (Lourenço et al., 2015).

• Partition-Tolerance: it is not necessary for the nodes to be connected with each other. All nodes and runners can ’live’ on their own.

With respect to this implementation, consistency is not a priority. Every node in the system generates unique data. The aim is to deliver this data to the PO as fast as possible. To increase the probability of reaching that goal, data-sharing amongst nodes is possible. However, it is not a necessity for nodes to share their data with other nodes. Furthermore, prioritizing consistency would be impractical considering the fact that the network depends on runners for data-sharing. Since data is assumed to be generated continuously, the nodes would fail to be available. When a runner is in range of a node, the runner should always be able to synchronize with that node. Therefore, availability is a priority. Besides the data objects being generated uniquely at a node, they are also expected to be edited rarely. Thus, the objects that are being transferred from a node to the runner's device will be the most recent version in most instances. It might occur that a node crashes during a festival. In these situations the other nodes should remain operational. The possibility to split the database over several nodes is essential. Therefore, partition tolerance is another priority.

It can be concluded that an 'AP' system is needed. (Lourenço et al., 2015) categorizes Apache CouchDB as an 'AP' database and states that "Apache CouchDB uses a shared-nothing clustering approach, allowing all replica nodes to continue working even if they are disconnected, thus being a good candidate for systems where high availability is needed." Thus, the database used in this implementation is CouchDB. However, it is worth noting that other databases could also be found to be suitable for this project.

4.2 Tools

In this section the following tools are discussed: CouchDB in section 4.2.1, PouchDB in section 4.2.2, the Ionic Framework in section 4.2.3, the Raspberry Pi 3 Model B in section 4.2.4 and the used smartphone devices in section 4.2.5.

4.2.1 CouchDB

CouchDB implements a replication protocol which allows multiple databases to synchronize over HTTP. This makes it possible to spread data across several nodes. When two CouchDB databases synchronize, by default, only the changes (updates, additions and deletions) are sent so that all documents on the source database are also in the destination database and vice versa. Conflicts can occur when multiple users attempt to write to (a copy of) the same document and then attempt to synchronize. These conflicts are resolved in a non-destructive manner. For every modification made to a document a new revision (branch) is created while saving the older revision in the document's history tree. During synchronization the revisions on each side are then merged, creating the same updated document on each side.

CouchDB documentation uses a specific use case to demonstrate the workings of the protocol. A small library for the conversion of shared Songbird playlists to JSON objects is considered. By using the JSON objects, the playlist can be backed up. An initial backup is made from the desktop to CouchDB, and CouchDB replies with the first revision number of the playlist. Next, when a change is made to the playlist, a request is sent to CouchDB along with the latest revision number. CouchDB then checks if that revision number matches the latest revision number in the database. Since the revision number updates on each modification, CouchDB knows the user has the latest version of the playlist document if the match is successful. Then, CouchDB applies the updates and replies with the new revision number.

Now consider a new laptop. This laptop first needs to 'restore from backup' to be in sync. By doing so, the laptop now also has the latest revision number. Next, the laptop makes changes to the playlist. The changes are sent to the database and the laptop receives the new revision number. The desktop, which still has the older revision, now also tries to make changes. Since the desktop has the wrong revision number, CouchDB returns a conflict error. Hence, in order for the desktop to make changes, it has to refresh first. This is illustrated in Figure 4.2.

Figure 4.2: This Figure showcases an example of the workings of the CouchDB Replication Protocol. This image was taken from the CouchDB documentation.
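To make the revision mechanism tangible, the sketch below replays this scenario against CouchDB's document HTTP API: a document is created, updated with its current revision, and then updated once more with a stale revision, which CouchDB rejects with a 409 Conflict. The host, database name, document ID and document fields are illustrative placeholders, and authentication is omitted.

```typescript
// Sketch of the revision workflow from the example above, using CouchDB's HTTP API.
// The base URL, database name and document ID are placeholders; authentication is omitted.
const BASE = 'http://localhost:5984/playlists';

async function putDoc(id: string, doc: object): Promise<{ ok: boolean; rev: string }> {
  const res = await fetch(`${BASE}/${id}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(doc),
  });
  if (res.status === 409) throw new Error('conflict: the supplied revision is stale');
  return res.json();
}

async function demo(): Promise<void> {
  // Initial backup: CouchDB answers with revision 1-....
  const first = await putDoc('my-playlist', { songs: ['a', 'b'] });

  // Update that includes the current revision: accepted, revision 2-... is returned.
  const second = await putDoc('my-playlist', { songs: ['a', 'b', 'c'], _rev: first.rev });
  console.log('latest rev:', second.rev);

  // Update based on the old revision: CouchDB rejects it with 409 Conflict,
  // which is the situation the out-of-date desktop runs into.
  await putDoc('my-playlist', { songs: ['a', 'x'], _rev: first.rev }).catch(console.error);
}

demo();
```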

Continuing, consider the situation where the laptop and desktop each have their own local CouchDB instance. By keeping these two databases synchronized, the laptop and desktop each stay up to date. The laptop makes more changes, but does not synchronize with the other database. Next, the desktop computer also makes changes and tries to synchronize to the laptop database. CouchDB detects that the changes that were made on the desktop were based on out-of-date documents, since the laptop had already made changes before. This is a conflict. This conflict is handled by merging both versions of the documents, as Figure 4.3 illustrates. However, only one revision is chosen as the winner and sent to the view. The winning revision is the revision with the longest revision history. "If they are the same, the rev values are compared in ASCII sort order, and the highest wins" (Slater, Lehnardt, & Anderson, n.d.). In this study it is assumed that this description translates to the newest revision being chosen as the winner.

Figure 4.3: This Figure showcases how the CouchDB Replication Protocol handles conflicting documents. This image was taken from the CouchDB documentation.

Hence, the replication protocol enables CouchDB to achieve eventual consistency in the sense that conflicts are resolved at a later stage. Because these conflicts are expected to occur rarely in this specific case, the replication protocol is discussed in a relatively simplistic manner. Section 1.3.7 of the CouchDB documentation shows a thorough example of the creation and usage of revisions.

During synchronization, CouchDB compares the two databases to determine which documents differ. Then, only the changed documents are sent or received, one by one. Hence, the need to synchronize all documents in the databases is avoided. If the connection is lost during synchronization, CouchDB will continue where it left off when the connection is regained. Thus, when a runner moves out of range, transferred data will remain successfully transferred even though the full synchronization was not completed. After replication, each database is able to work independently (and offline).
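For completeness, this replication behaviour can also be triggered directly between two CouchDB databases through the _replicate endpoint. The sketch below is not part of the implementation described in this thesis (data travels between bars via the runners' phones), but it shows the same protocol invoked server-to-server; the addresses are placeholders and credentials are omitted.

```typescript
// Minimal sketch: ask a CouchDB instance to replicate one database into another.
// Host names and database names are placeholders; authentication is omitted.
async function replicate(source: string, target: string): Promise<void> {
  const res = await fetch('http://localhost:5984/_replicate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      source,                // e.g. the 'eventory' database on one node
      target,                // e.g. the 'eventory' database on another node
      create_target: false,  // both databases are assumed to exist already
    }),
  });
  console.log('replication result:', await res.json());
}

replicate('http://192.168.4.10:5984/eventory', 'http://192.168.4.11:5984/eventory');
```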

Furthermore, the CouchDB documentation states that it is an open-source database that uses JSON documents to store data. This data can be queried via HTTP using JavaScript. Documents can each have their own structure, making CouchDB very flexible. CouchDB is designed to cope with fluctuating traffic. Hence, when many concurrent requests are present, CouchDB absorbs the impact by trading off performance. Even though latency is increased in such situations, it is guaranteed by CouchDB that all requests are handled. In the context of this thesis, this means that when many products are sold at the bar(s) concurrently, CouchDB will be able to handle this. Reliability is preferred over speed in this scenario.

From the CouchDB docs: "A distributed system is a system that operates robustly over a wide network. A particular feature of network computing is that network links can potentially disappear, and there are plenty of strategies for managing this type of network segmentation. CouchDB differs from others by accepting eventual consistency, as opposed to putting absolute consistency ahead of raw availability, like RDBMS or Paxos."

Moreover, the CouchDB documentation declares that during replication a node can be in healthy and unhealthy states. The unhealthy states are the error state, the failed state and the crashing state. The failed state is entered when a replication document could not be processed because it is malformed. To get out of this state, user intervention is needed. Since documents are generated automatically in this project, it is expected that such documents will not present themselves in practice, meaning that nodes will remain available. The error state and crashing state are considered temporary as they are handled automatically by CouchDB when entered. Therefore, it is assumed that they do not pose a threat. Finally, CouchDB features all ACID properties (Haerder & Reuter, 1983). As a result, in combination with the nature of the design discussed in the previous chapter, the assumption is made that CouchDB is a robust enough choice for this implementation.

4.2.2 PouchDB

PouchDB is a JavaScript implementation of CouchDB that can run in-browser. Applications that use PouchDB can store data locally. Since PouchDB uses the same replication protocol, PouchDB databases are able to synchronize with CouchDB databases. If an application that implements PouchDB fails to establish a connection with the internet, the data will be stored locally. PouchDB will then take care of synchronizing with the remote CouchDB database when a connection is present. Since PouchDB practically functions as a buffer between the CouchDB database and the user, PouchDB can provide the ability to keep using an application offline. Besides in-browser, PouchDB can also be used when building native applications.
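A minimal sketch of this pattern is shown below: a local PouchDB database on the runner's phone kept in a live, two-way sync with the CouchDB database on a node. The remote address is a placeholder, and the event handlers only log what happens.

```typescript
import PouchDB from 'pouchdb';

// Local database on the runner's smartphone; data is stored on the device itself.
const local = new PouchDB('eventory');

// Remote CouchDB database on a bar node; the address is a placeholder.
const remote = new PouchDB('http://192.168.4.1:5984/eventory');

// Two-way, live synchronization: changes flow in both directions while the connection
// holds, and 'retry' resumes the sync after a dropped connection.
local.sync(remote, { live: true, retry: true })
  .on('change', info => console.log('replicated batch, direction:', info.direction))
  .on('paused', () => console.log('in sync or connection lost'))
  .on('error', err => console.error('sync error:', err));
```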

4.2.3 Ionic Framework

The Ionic Framework is a toolkit for building mobile and desktop apps in an effective manner using HTML, CSS and JavaScript. The framework provides stylized UI components that can be used as customizable building blocks. Furthermore, Ionic can be used to build applications for platforms such as native iOS, Android and Windows with one code base.

4.2.4 Raspberry Pi 3 Model B

Figure 4.4: An image of the Raspberry Pi 3 Model B

The Raspberry Pi, shown in Figure 4.4, is an affordable ($35) credit-card sized computer that can be used for implementing a wide range of system designs. It was designed to be used for educational purposes in software engineering. Out of the box, the Raspberry Pi is a single board computer without peripherals. However, the board contains several I/O ports, such as USB, HDMI and Ethernet, which make it possible to use it as a desktop computer. According to (Barnes, n.d.) the Raspberry Pi 3 Model B has:

• the Broadcom BCM2837 as its SoC
• a quad-core ARM Cortex-A53, 1.2GHz CPU
• a Broadcom VideoCore IV GPU
• 1GB LPDDR2 (900 MHz) of RAM
• networking capabilities with 10/100 Ethernet and 2.4GHz 802.11n wireless
• Bluetooth 4.1 Classic, Bluetooth Low Energy
• a microSD port to load an operating system and use the SD card as a hard drive
• a populated 40-pin header
• the following ports to provide I/O capabilities: HDMI, 3.5mm analogue audio-video jack, 4x USB 2.0, Ethernet, Camera Serial Interface (CSI), Display Serial Interface (DSI)

4.2.5 Smartphone Devices

Since the Ionic Framework is used to create the application, smartphones with an iOS, Android or Windows operating system can be used for this system. For this specific instance of the implementation a Samsung Galaxy S7 and S8 are used. Both devices have an Android operating system. According to tweakers.net the S7 has:

• 4 GB of RAM
• an 8-core Samsung Exynos 8 Octa (8890) 2.3 GHz CPU
• Android 8 as the operating system
• support for 802.11a, 802.11ac (Wi-Fi 5), 802.11b, 802.11g, 802.11n (Wi-Fi 4)
• Bluetooth 4.2
• a 3,000 mAh Li-Ion battery
• and more.

Furthermore, the S8 has:

• 4 GB of RAM
• an 8-core Samsung Exynos 8 Octa (8895) 2.3 GHz CPU
• Android 8 as the operating system
• support for 802.11a, 802.11ac (Wi-Fi 5), 802.11b, 802.11g, 802.11n (Wi-Fi 4)
• Bluetooth 5.0
• a 3,000 mAh Li-Ion battery
• and more.


4.3 System architecture overview

The system's architecture is mainly based on the synchronizing capabilities of CouchDB. There are two parties in the system: smartphones that are carried by runners and the bar-data handling computers. The smartphones use an application that is created with Ionic. By adopting PouchDB, the application is enabled to store data locally and dump the data when a connection is established. Furthermore, the smartphone application also shows the data that is stored in the PouchDB database in a view. A runner then has the option to verify whether data has been or is being shared.

For every bar in the system a Raspberry Pi 3 is used as the computer. The Raspberry Pis run a Linux operating system. Every Raspberry Pi has a local CouchDB database named 'eventory'. By naming the database 'eventory' on all Raspberry Pis, a distributed database is created. Furthermore, all Raspberry Pis act as WiFi access points by broadcasting a WiFi (IEEE 802.11b, 2.4 GHz) signal. The SSID of these access points is also 'eventory'. Using the same SSID for all Raspberry Pis enables the smartphones to connect to an 'eventory' access point automatically.
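As a small illustration of the node-side setup, the sketch below creates the 'eventory' database on a Raspberry Pi through CouchDB's HTTP API. The node address is a placeholder and authentication is omitted; an actual deployment would supply admin credentials.

```typescript
// Create the 'eventory' database on a node (a 412 response means it already exists).
// The node address is a placeholder and authentication is omitted for brevity.
async function createEventoryDb(nodeUrl: string): Promise<void> {
  const res = await fetch(`${nodeUrl}/eventory`, { method: 'PUT' });
  if (res.ok) {
    console.log('database created');
  } else if (res.status === 412) {
    console.log('database already exists');
  } else {
    console.error('unexpected response:', res.status);
  }
}

createEventoryDb('http://192.168.4.1:5984');
```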

A flowchart for this system is shown in Figure 4.5. The smartphone scans for WiFi networks that are in range at a given moment in time. If an 'eventory' SSID is found, the smartphone will automatically attempt to connect to the network. If the smartphone fails to connect, but is still in range of the hotspot, another attempt to connect will be made. If the smartphone succeeds in establishing a connection with the 'eventory' hotspot, the replication protocol is initialized [1].

When the PouchDB database and the CouchDB database connect to each other the contents are compared. All changes are then added to a batch as change-objects and handled sequentially.

The type of the change-object determines how the change is handled. If the change-object is of type delete, two things happen. First, the document is removed from the array that is used to show the documents in the view. Second, the 'deleted' field of the object is set to true. To update a document in the case the change-object is of type update, the revision field value and ID of the document are used to locate the right document in the database. Then, the document is updated and a new revision is made. Finally, the document in the array is also updated. If the change-object is of type add, this means that a new document was found for one of the two databases. Consequently, this new document is added to the respective database. The view of the smartphone application is also refreshed to reflect this change.
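A compressed sketch of this change handling, built on PouchDB's sync 'change' event, is given below. The in-memory array backing the view and the way documents are matched by ID are simplifying assumptions made for illustration; the actual application may organize this differently.

```typescript
import PouchDB from 'pouchdb';

interface ProductDoc { _id: string; _rev?: string; _deleted?: boolean; title?: string }

const local = new PouchDB<ProductDoc>('eventory');
const remote = new PouchDB<ProductDoc>('http://192.168.4.1:5984/eventory');

// Array backing the view in the app; an assumption for this sketch.
const viewDocs: ProductDoc[] = [];

local.sync(remote, { live: true, retry: true }).on('change', info => {
  // Each replicated batch contains the documents that changed.
  for (const doc of info.change.docs) {
    const idx = viewDocs.findIndex(d => d._id === doc._id);
    if (doc._deleted) {
      // Deletion: drop the document from the view; the tombstone stays in the database.
      if (idx !== -1) viewDocs.splice(idx, 1);
    } else if (idx !== -1) {
      // Update: a newer revision of a document that is already shown.
      viewDocs[idx] = doc;
    } else {
      // Addition: a document this side had not seen before.
      viewDocs.push(doc);
    }
  }
});
```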

After handling one particular change, it is checked whether another change is present in the batch. If a change is present, it will be handled as described above. If it is not present, the application will enter a listening state. The application will remain in this state until a new change is found or the connection is lost. If the connection is lost, the application will enter the scanning state.

[1] An in-depth description of the replication protocol can be found at


CHAPTER 5

Experiments

5.1 Setting Up The Topology

Since it is not possible to run the tests during a festival, the tests are done at the Science Park complex in buildings A, B, C and D. Since it is assumed that establishing a connection between runners and nodes at a festival is possible, running the tests in the Science Park buildings should produce trustworthy results.

The topology consists of 5 nodes. Nodes 1 through 4 each represent a bar. Node 5 represents the production office. Since there is no dataset available to be used for testing, databases 1 through 4 are filled by means of a Python script. This Python script inserts a new document into the database every minute. Along with the default CouchDB fields, every document is also given a creation timestamp and a title. The title contains the document number and the number of the bar where it was created. For example, if a document is created at bar 3 in the fifth minute, the title is "product 4 van bar 3", since the count starts at 0.
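A sketch of this generator logic is shown below. The experiments used a Python script; the version here is written in TypeScript to match the other sketches, the node address is a placeholder, and the exact field names are assumptions.

```typescript
// Sketch of the data generator that fills a bar's database: one document per minute.
// The node address is a placeholder; the real experiments used a Python script.
const BAR = 3;
const NODE_URL = 'http://192.168.4.1:5984/eventory';
let docNumber = 0;

async function insertDocument(): Promise<void> {
  const doc = {
    title: `product ${docNumber} van bar ${BAR}`, // document number and bar number, count starts at 0
    created_at: new Date().toISOString(),         // creation timestamp
  };
  const res = await fetch(NODE_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(doc),
  });
  console.log('inserted document', docNumber, res.status);
  docNumber += 1;
}

// Insert one document immediately and then every 60 seconds.
insertDocument();
setInterval(insertDocument, 60_000);
```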

At a festival, beverages are sold continuously at all bars, which causes the databases to be filled at a steady rate. Therefore, the Python scripts also run for the entire duration of the experiments. To simulate runner migration over the festival terrain, several predefined scenarios are discussed in the following sections. Since the rate of data generation is known in this case, it is possible to anticipate which documents will be transferred at a moment in time using the title and timestamp of the documents. Before experimenting, all nodes are initialized and the time of initialization is registered. The number of documents that have been generated at a given moment in time can then be calculated.

In practice it may occur that a transaction is registered incorrectly. For example, a wrong number of consumptions could be entered by the bar employee or a complete order is cancelled. In these situations the bar employee would use the cash register software to edit or remove the appropriate transactions. In this experimental setting this is done by means of a laptop. By connecting to the network of a bar, the laptop is able to alter the database if needed.

Node startup times:

• Node 1: 19:55
• Node 2: 20:06
• Node 3: 20:11
• Node 4: 20:15


5.2 Test Case A: Basic Addition

Scenario

In this scenario one runner and one bar are used. Runner A (Samsung Galaxy S8) is sent to bar 1. Runner A reaches the area of bar 1 and performs the tasks that were given. While doing so, runner A is in range of node 1, which causes a synchronization at 19:57. After completion, runner A moves back to the production office and synchronizes with the main computer at 20:02.

Expected Behaviour

When runner A arrives at bar 1 it is expected that the smartphone will connect to node 1, because it is in range. When the connection is established, the database on the device (PouchDB) and the database on the node (CouchDB) will compare their content. It is then determined that there are differences between the two databases, which initiates the synchronization procedure. After synchronization, the two databases contain the same data: 2 documents of node 1. Runner A then returns to the production office, which initiates another synchronization. Thereafter, the main computer is also in possession of the 2 documents and can operate on them accordingly. The expected database states after this test case are shown in table 5.1.

Time  | Action | Prod. Office | Runner A  | Runner B | Node 1    | Node 2 | Node 3 | Node 4
19:57 | A ↔ 1  | -            | N1: 0 - 1 | -        | N1: 0 - 1 | -      | -      | -
20:02 | A ↔ PO | N1: 0 - 1    | N1: 0 - 1 | -        | N1: 0 - 6 | -      | -      | -

Table 5.1: The expected state of all databases after test case A.

5.3 Test Case B: Complex Addition

Scenario

This scenario considers both runners and all nodes. Runner A is sent to bar 1 again and therefore synchronizes with node 1 at 20:04. Then, runner A migrates towards nodes 2 and 3, where synchronizations take place at 20:07 and 20:12 respectively. Runner B (Samsung Galaxy S7) is sent to node 4 and synchronizes at 20:18. Then, runner B also synchronizes with node 3 at 20:21. Finally, runner B goes back to the production office and syncs the data at 20:25.

Expected Behaviour

Runner A first synchronizes with node 1 at 20:04, which means that runner A should have all documents of node 1 at that moment in time. As runner A then synchronizes with node 2, node 2 should receive the data of node 1 since it is stored on the smartphone, and runner A should gain the data of node 2. This 'data-forwarding' should also occur at node 3, which causes node 3 to have the data of nodes 1 and 2.

Runner B first synchronizes with node 4 and proceeds to synchronize with node 3. Consequently, runner B should then have data of all nodes. Therefore, when runner B then synchronizes with the production office, data of all nodes should be transferred over. The expected database states after this test case are shown in table 5.2.

Time  | Action | Prod. Office | Runner A | Runner B | Node 1 | Node 2 | Node 3 | Node 4
20:02 | A ↔ PO | N1: 0 - 1 | N1: 0 - 1 | - | N1: 0 - 6 | - | - | -
20:04 | A ↔ 1  | N1: 0 - 1 | N1: 0 - 8 | - | N1: 0 - 8 | - | - | -
20:07 | A ↔ 2  | N1: 0 - 1 | N1: 0 - 8, N2: 0 | - | N1: 0 - 11 | N1: 0 - 8, N2: 0 | - | -
20:12 | A ↔ 3  | N1: 0 - 1 | N1: 0 - 8, N2: 0, N3: 0 | - | N1: 0 - 16 | N1: 0 - 8, N2: 0 - 5 | N1: 0 - 8, N2: 0, N3: 0 | -
20:18 | B ↔ 4  | N1: 0 - 1 | N1: 0 - 8, N2: 0, N3: 0 | N4: 0 - 3 | N1: 0 - 22 | N1: 0 - 8, N2: 0 - 11 | N1: 0 - 8, N2: 0, N3: 0 - 6 | N4: 0 - 3
20:21 | B ↔ 3  | N1: 0 - 1 | N1: 0 - 8, N2: 0, N3: 0 | N1: 0 - 8, N2: 0, N3: 0 - 9, N4: 0 - 3 | N1: 0 - 25 | N1: 0 - 8, N2: 0 - 14 | N1: 0 - 8, N2: 0, N3: 0 - 9, N4: 0 - 3 | N4: 0 - 6
20:25 | B ↔ PO | N1: 0 - 8, N2: 0, N3: 0 - 9, N4: 0 - 3 | N1: 0 - 8, N2: 0, N3: 0 | N1: 0 - 8, N2: 0, N3: 0 - 9, N4: 0 - 3 | N1: 0 - 29 | N1: 0 - 8, N2: 0 - 18 | N1: 0 - 8, N2: 0, N3: 0 - 13, N4: 0 - 3 | N4: 0 - 10

Table 5.2: The expected state of all databases after test case B.

5.4 Test Case C: Update Conflict

Scenario

This scenario showcases the implementation's ability to handle situations where different versions of the same document are synchronized. Runner A moves towards node 2 and synchronizes at 20:28. Then, runner A proceeds towards node 3 and synchronizes at 20:32. Next, the first document that was created at node 2 is updated: the document's 'title' field is changed from "product 0 van node 2" to "product 0 van node 2 [updated]" at 20:37. Runner A synchronizes with node 2 once more at 20:42. Finally, runner A synchronizes with the production office at 20:45.

Expected Behaviour

The first two synchronizations should behave as described earlier in previous test cases. When node 2’s first document is updated, it should get a new revision. This means that at that moment in time there should be two versions of the document in the network. When runner A then synchronizes with node 2, the replication protocol should favor the newest revision. Consequently, the older document version should then be updated. By finally synchronizing with the production office, the production team should become aware of the update and process it accordingly. The expected database states after this test case are shown in table 5.3.

Time  | Action     | Prod. Office | Runner A | Runner B | Node 1 | Node 2 | Node 3 | Node 4
20:25 | B ↔ PO     | N1: 0 - 8, N2: 0, N3: 0 - 9, N4: 0 - 3 | N1: 0 - 8, N2: 0, N3: 0 | N1: 0 - 8, N2: 0, N3: 0 - 9, N4: 0 - 3 | N1: 0 - 29 | N1: 0 - 8, N2: 0 - 18 | N1: 0 - 8, N2: 0, N3: 0 - 13, N4: 0 - 3 | N4: 0 - 10
20:28 | A ↔ 2      | N1: 0 - 8, N2: 0, N3: 0 - 9, N4: 0 - 3 | N1: 0 - 8, N2: 0 - 21, N3: 0 | N1: 0 - 8, N2: 0, N3: 0 - 9, N4: 0 - 3 | N1: 0 - 32 | N1: 0 - 8, N2: 0 - 21, N3: 0 | N1: 0 - 8, N2: 0, N3: 0 - 16, N4: 0 - 3 | N4: 0 - 13
20:32 | A ↔ 3      | N1: 0 - 8, N2: 0, N3: 0 - 9, N4: 0 - 3 | N1: 0 - 8, N2: 0 - 21, N3: 0 - 20, N4: 0 - 3 | N1: 0 - 8, N2: 0, N3: 0 - 9, N4: 0 - 3 | N1: 0 - 36 | N1: 0 - 8, N2: 0 - 25, N3: 0 | N1: 0 - 8, N2: 0 - 21, N3: 0 - 20, N4: 0 - 3 | N4: 0 - 17
20:37 | Update @ 2 | N1: 0 - 8, N2: 0, N3: 0 - 9, N4: 0 - 3 | N1: 0 - 8, N2: 0 - 21, N3: 0 - 20, N4: 0 - 3 | N1: 0 - 8, N2: 0, N3: 0 - 9, N4: 0 - 3 | N1: 0 - 41 | N1: 0 - 8, N2: 0 [U] - 30, N3: 0 | N1: 0 - 8, N2: 0 - 21, N3: 0 - 25, N4: 0 - 3 | N4: 0 - 22
20:42 | A ↔ 2      | N1: 0 - 8, N2: 0, N3: 0 - 9, N4: 0 - 3 | N1: 0 - 8, N2: 0 [U] - 35, N3: 0 - 20, N4: 0 - 3 | N1: 0 - 8, N2: 0, N3: 0 - 9, N4: 0 - 3 | N1: 0 - 46 | N1: 0 - 8, N2: 0 [U] - 35, N3: 0 - 20, N4: 0 - 3 | N1: 0 - 8, N2: 0 - 21, N3: 0 - 30, N4: 0 - 3 | N4: 0 - 27
20:45 | A ↔ PO     | N1: 0 - 8, N2: 0 [U] - 35, N3: 0 - 20, N4: 0 - 3 | N1: 0 - 8, N2: 0 [U] - 35, N3: 0 - 20, N4: 0 - 3 | N1: 0 - 8, N2: 0, N3: 0 - 9, N4: 0 - 3 | N1: 0 - 49 | N1: 0 - 8, N2: 0 [U] - 38, N3: 0 - 20, N4: 0 - 3 | N1: 0 - 8, N2: 0 - 21, N3: 0 - 33, N4: 0 - 3 | N4: 0 - 30

Table 5.3: The expected state of all databases after test case C.

5.5 Test Case D: Complex Update Conflict

Scenario

This scenario expands on the previous test case by using two runners that have a different version of the same document. First, runner A synchronizes with node 1 at 20:47. Then, runner A synchronizes with node 2 at 20:50. Runner B syncs with node 1 at 20:52 and later also syncs with node 3 at 20:55. At 20:59, the first created document of node 1 is updated. As before, the document's 'title' field is changed from "product 0 van node 1" to "product 0 van node 1 [updated]". Next, runner A syncs with node 1 at 21:01. Then, the first created document of node 1 is updated once more at 21:02. The 'title' field is now changed to "product 0 van node 1 [updated again]". Runner B syncs with node 1 at 21:03 and with the production office at 21:07. Finally, runner A syncs with the production office at 21:08.

Expected Behaviour

By synchronizing with nodes 1 and 2, runner A should gain data from those nodes at that point, while also dumping data that is already present in the smartphone's PouchDB instance. The same goes for runner B with respect to nodes 1 and 3. The updates should cause both runners to have a different version of the document. Since runner B has the newest version and synchronizes first with the production office, the synchronization with runner A at a later point in time should not affect the document. Only the [updated] document in runner A's database should be updated to [updated again]. The expected database states after this test case are shown in table 5.4.

5.6

Test Case E: Deletion

Scenario

This scenario demonstrates that the implementation is able to handle deletions. Runner A syncs with node 4 at 21:11 and then syncs again at the production office at 21:14. The, the first 5 documents that were created at node 4 are deleted at 21:18. Runner A returns to node 4 and syncs again at 21:19. Finally, runner A syncs with the production office once more at 21:26. Expected Behaviour

The first two synchronizations should cause data to be transferred between runner A, node 4 and the production office by simple “addition”. When the first 5 documents of node 4 are deleted, they should be flagged by setting their ‘deleted’ field to ‘true’. Consequently, they are removed from the view, but are still present in the tree structure. When runner A then synchronizes again, the replication protocol detects the changes and updates the documents, which are later also synchronized to the production office. The expected database states after this test case are shown in table 5.5.
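
A deletion of this kind could be performed with PouchDB as in the sketch below. The loop and the document ids are assumptions made for the example; remove() stores a tombstone revision for each document, so the deletion travels through the network exactly like an addition or an update.

import PouchDB from 'pouchdb';

const node4 = new PouchDB<{ title: string }>('inventory');

async function deleteFirstFive(): Promise<void> {
  for (let i = 0; i < 5; i++) {
    // Hypothetical id scheme for the first five documents created at node 4.
    const doc = await node4.get(`product-${i}-node-4`);
    // remove() writes a deleted revision (tombstone): the document disappears
    // from normal queries but stays in the revision tree and is replicated.
    await node4.remove(doc);
  }
}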


Time Action Prod. Office Runner A Runner B Node 1 Node 2 Node 3 Node 4
20:45 A ↔ P O N1: 0 - 8 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 - 8 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 - 8 N2: 0 N3: 0 - 9 N4: 0 - 3 N1: 0 - 49 N1: 0 - 8 N2: 0 [U] - 38 N3: 0 - 20 N4: 0 - 3 N1: 0 - 8 N2: 0 - 21 N3: 0 - 33 N4: 0 - 3 N4: 0 - 30
20:47 A ↔ 1 N1: 0 - 8 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 - 8 N2: 0 N3: 0 - 9 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 - 8 N2: 0 [U] - 40 N3: 0 - 20 N4: 0 - 3 N1: 0 - 8 N2: 0 - 21 N3: 0 - 35 N4: 0 - 3 N4: 0 - 32
20:50 A ↔ 2 N1: 0 - 8 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 43 N3: 0 - 20 N4: 0 - 3 N1: 0 - 8 N2: 0 N3: 0 - 9 N4: 0 - 3 N1: 0 - 54 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 43 N3: 0 - 20 N4: 0 - 3 N1: 0 - 8 N2: 0 - 21 N3: 0 - 38 N4: 0 - 3 N4: 0 - 35
20:52 B ↔ 1 N1: 0 - 8 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 43 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 45 N3: 0 - 20 N4: 0 - 3 N1: 0 - 8 N2: 0 - 21 N3: 0 - 40 N4: 0 - 3 N4: 0 - 37
20:55 B ↔ 3 N1: 0 - 8 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 43 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 43 N4: 0 - 3 N1: 0 - 59 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 48 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 43 N4: 0 - 3 N4: 0 - 40
20:59 Update @ 1 N1: 0 - 8 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 43 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 43 N4: 0 - 3 N1: 0 [U] - 63 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 52 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 47 N4: 0 - 3 N4: 0 - 44
21:01 A ↔ 1 N1: 0 - 8 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 [U] - 65 N2: 0 [U] - 43 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 43 N4: 0 - 3 N1: 0 [U] - 65 N2: 0 [U] - 43 N3: 0 - 20 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 54 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 49 N4: 0 - 3 N4: 0 - 46
21:02 Update @ 1 N1: 0 - 8 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 [U] - 65 N2: 0 [U] - 43 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 66 N2: 0 [U] - 43 N3: 0 - 20 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 55 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 50 N4: 0 - 3 N4: 0 - 47
21:03 B ↔ 1 N1: 0 - 8 N2: 0 [U] - 35 N3: 0 - 20 N4: 0 - 3 N1: 0 [U] - 65 N2: 0 [U] - 43 N3: 0 - 20 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 56 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 51 N4: 0 - 3 N4: 0 - 48
21:07 B ↔ P O N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [U] - 65 N2: 0 [U] - 43 N3: 0 - 20 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 71 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 60 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 55 N4: 0 - 3 N4: 0 - 52
21:08 A ↔ P O N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 72 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 61 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 56 N4: 0 - 3 N4: 0 - 53

Table 5.4: The expected state of all databases after test case D.


Time Action Prod. Office Runner A Runner B Node 1 Node 2 Node 3 Node 4
21:08 A ↔ P O N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 72 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 61 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 56 N4: 0 - 3 N4: 0 - 53
21:11 A ↔ 4 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 56 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 75 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 64 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 59 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 56
21:14 A ↔ P O N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 56 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 56 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 78 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 67 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 62 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 59
21:18 Delete @ 4 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 56 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 56 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 82 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 71 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 66 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 63
21:19 A ↔ 4 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 56 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 83 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 72 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 67 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 64
21:26 A ↔ P O N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 90 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 79 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 74 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 71

Table 5.5: The expected state of all databases after test case E.

5.7 Test Case F: Complex Deletion

Scenario

This scenario demonstrates that deletion with multiple runners in the field is also handled. Runner A syncs with node 3 at 21:31. Then, the first 5 documents of node 3 are deleted at 21:32. At 21:35 runner B migrates towards node 3 and syncs. Finally, runner B syncs at the production office at 21:40 and runner A does so at 21:42.

Expected Behaviour

First, data should be shared between runner A and node 3 as described earlier. As was the case in test case E, the ’deleted’ field of the first 5 documents should be set to ’true’. After the deletion, runner B synchronizes with node 3 and later with the production office. Since runner B is the first to synchronize with the production office, the documents that runner A dumps later should not influence the view; only the documents in runner A’s database should be updated. The expected database states after this test case are shown in table 5.6.
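
One way to see why the stale copies cannot resurface is that listing calls such as allDocs() skip tombstoned documents, while the tombstones win against the older revisions that runner A replicates later. The minimal sketch below (database name assumed for the example) shows how the visible titles could be listed.

import PouchDB from 'pouchdb';

const db = new PouchDB<{ title: string }>('inventory');

async function visibleTitles(): Promise<(string | undefined)[]> {
  // allDocs() does not return deleted documents, so the view stays free of the
  // five removed products even after runner A dumps its pre-deletion revisions.
  const result = await db.allDocs({ include_docs: true });
  return result.rows.map(row => row.doc?.title);
}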


Time Action Prod. Office Runner A Runner B Node 1 Node 2 Node 3 Node 4
21:26 A ↔ P O N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 90 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 79 N3: 0 - 20 N4: 0 - 3 N1: 0 - 56 N2: 0 [U] - 35 N3: 0 - 74 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 71
21:31 A ↔ 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 79 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 95 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 84 N3: 0 - 20 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 79 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 76
21:32 Delete @ 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 79 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 [UA] - 96 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 85 N3: 0 - 20 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 5 - 80 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 77
21:35 B ↔ 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 79 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 5 - 83 N4: 5 - 64 N1: 0 [UA] - 99 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 88 N3: 0 - 20 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 5 - 83 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 80
21:40 B ↔ P O N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 5 - 83 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 79 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 5 - 83 N4: 5 - 64 N1: 0 [UA] - 104 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 93 N3: 0 - 20 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 5 - 88 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 85
21:42 A ↔ P O N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 5 - 83 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 5 - 83 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 5 - 83 N4: 5 - 64 N1: 0 [UA] - 102 N2: 0 [U] - 43 N3: 0 - 43 N4: 0 - 3 N1: 0 - 51 N2: 0 [U] - 95 N3: 0 - 20 N4: 0 - 3 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 5 - 90 N4: 5 - 64 N1: 0 [UA] - 67 N2: 0 [U] - 43 N3: 0 - 43 N4: 5 - 87

Table 5.6: The expected state of all databases after test case F.


CHAPTER 6

Results

In this chapter the results of the experiments, namely the state of all databases after test case F, are presented. To maintain a clear overview, the results are given in table format. A table for a database lists all documents in that database; however, only the ’title’ fields of the documents are shown, as they are the only significant ones. Furthermore, the tables are not sorted and therefore show the exact order that is present in the original database itself. The tables are to be read column by column, from left to right, to recover the order of documents found in the original PouchDB/CouchDB databases.

The database of node 1 contains the following documents: documents 0 through 43 that were created at node 3, documents 0 through 2 that were created at node 4, documents 0 through 102 that were created at node 1 itself, where document 0 has the added “[updated again]” tag, and, finally, node 2 documents 0 through 43, including a “[updated]” tag for document 0. Table 6.1 shows the resulting state of node 1’s database.

The database of node 2 contains documents 0 through 20 for node 3, documents 0 through 2 for node 4, documents 0 through 52 for node 1 and documents 0 through 95 for documents created at node 2 itself. Document 0 of node 2 has the “[updated]” tag. Table 6.2 shows the resulting state of node 2’s database.

For the database of node 3 the following documents are present: documents 5 through 90 originating from node 3 itself, documents 5 through 63 for documents originating from node 4, documents 0 through 68 for documents originating from node 1 and documents 0 through 43 for documents originating from node 2. Document 0 from node 1 has the “[updated again]” tag and document 0 of node 2 has the “[updated]” tag. Table 6.3 shows the resulting state of node 3’s database.

The database of node 4 includes documents 0 through 43 from node 3, documents 5 through 86 from node 4, documents 0 through 68 from node 1 and documents 0 through 43 from node 2. Document 0 from node 1 has the “[updated again]” tag and document 0 of node 2 has the “[updated]” tag. Table 6.4 shows the resulting state of node 4’s database.

For runner A, runner B and the production office, the databases are identical. Namely: documents 5 through 83 for node 3 made documents, documents 5 through 63 for node 4 made documents, documents 0 through 68 for node 1 made documents and documents 0 through 43 for node 2 made documents. Document 0 from node 1 has the “[updated again]” tag and document 0 of node 2 has the “[updated]” tag. Tables 6.5, 6.6 and 6.7 show the resulting states of the production office, runner A and runner B respectively.


product 0 van bar 3 product 1 van bar 4 product 43 van bar 1 product 88 van bar 1 product 21 van bar 2 product 1 van bar 3 product 2 van bar 4 product 44 van bar 1 product 89 van bar 1 product 22 van bar 2 product 2 van bar 3 product 0 van bar 1 [updated again] product 45 van bar 1 product 90 van bar 1 product 23 van bar 2 product 3 van bar 3 product 1 van bar 1 product 46 van bar 1 product 91 van bar 1 product 24 van bar 2 product 4 van bar 3 product 2 van bar 1 product 47 van bar 1 product 92 van bar 1 product 25 van bar 2 product 5 van bar 3 product 3 van bar 1 product 48 van bar 1 product 93 van bar 1 product 26 van bar 2 product 6 van bar 3 product 4 van bar 1 product 49 van bar 1 product 94 van bar 1 product 27 van bar 2 product 7 van bar 3 product 5 van bar 1 product 50 van bar 1 product 95 van bar 1 product 28 van bar 2 product 8 van bar 3 product 6 van bar 1 product 51 van bar 1 product 96 van bar 1 product 29 van bar 2 product 9 van bar 3 product 7 van bar 1 product 52 van bar 1 product 97 van bar 1 product 30 van bar 2 product 10 van bar 3 product 8 van bar 1 product 53 van bar 1 product 98 van bar 1 product 31 van bar 2 product 11 van bar 3 product 9 van bar 1 product 54 van bar 1 product 99 van bar 1 product 32 van bar 2 product 12 van bar 3 product 10 van bar 1 product 55 van bar 1 product 100 van bar 1 product 33 van bar 2 product 13 van bar 3 product 11 van bar 1 product 56 van bar 1 product 101 van bar 1 product 34 van bar 2 product 14 van bar 3 product 12 van bar 1 product 57 van bar 1 product 102 van bar 1 product 35 van bar 2 product 15 van bar 3 product 13 van bar 1 product 58 van bar 1 product 103 van bar 1 product 36 van bar 2 product 16 van bar 3 product 14 van bar 1 product 59 van bar 1 product 104 van bar 1 product 37 van bar 2 product 17 van bar 3 product 15 van bar 1 product 60 van bar 1 product 105 van bar 1 product 38 van bar 2 product 18 van bar 3 product 16 van bar 1 product 61 van bar 1 product 106 van bar 1 product 39 van bar 2 product 19 van bar 3 product 17 van bar 1 product 62 van bar 1 product 107 van bar 1 product 40 van bar 2 product 20 van bar 3 product 18 van bar 1 product 63 van bar 1 product 108 van bar 1 product 41 van bar 2 product 21 van bar 3 product 19 van bar 1 product 64 van bar 1 product 109 van bar 1 product 42 van bar 2 product 22 van bar 3 product 20 van bar 1 product 65 van bar 1 product 110 van bar 1 product 43 van bar 2 product 23 van bar 3 product 21 van bar 1 product 66 van bar 1 product 111 van bar 1

product 24 van bar 3 product 22 van bar 1 product 67 van bar 1 product 0 van bar 2 [updated] product 25 van bar 3 product 23 van bar 1 product 68 van bar 1 product 1 van bar 2 product 26 van bar 3 product 24 van bar 1 product 69 van bar 1 product 2 van bar 2 product 27 van bar 3 product 25 van bar 1 product 70 van bar 1 product 3 van bar 2 product 28 van bar 3 product 26 van bar 1 product 71 van bar 1 product 4 van bar 2 product 29 van bar 3 product 27 van bar 1 product 72 van bar 1 product 5 van bar 2 product 30 van bar 3 product 28 van bar 1 product 73 van bar 1 product 6 van bar 2 product 31 van bar 3 product 29 van bar 1 product 74 van bar 1 product 7 van bar 2 product 32 van bar 3 product 30 van bar 1 product 75 van bar 1 product 8 van bar 2 product 33 van bar 3 product 31 van bar 1 product 76 van bar 1 product 9 van bar 2 product 34 van bar 3 product 32 van bar 1 product 77 van bar 1 product 10 van bar 2 product 35 van bar 3 product 33 van bar 1 product 78 van bar 1 product 11 van bar 2 product 36 van bar 3 product 34 van bar 1 product 79 van bar 1 product 12 van bar 2 product 37 van bar 3 product 35 van bar 1 product 80 van bar 1 product 13 van bar 2 product 38 van bar 3 product 36 van bar 1 product 81 van bar 1 product 14 van bar 2 product 39 van bar 3 product 37 van bar 1 product 82 van bar 1 product 15 van bar 2 product 40 van bar 3 product 38 van bar 1 product 83 van bar 1 product 16 van bar 2 product 41 van bar 3 product 39 van bar 1 product 84 van bar 1 product 17 van bar 2 product 42 van bar 3 product 40 van bar 1 product 85 van bar 1 product 18 van bar 2 product 43 van bar 3 product 41 van bar 1 product 86 van bar 1 product 19 van bar 2 product 0 van bar 4 product 42 van bar 1 product 87 van bar 1 product 20 van bar 2
