
Soft Real-Time Performance in Multihop Wireless Sensor Networks


Bachelor Informatica

Soft Real-Time Performance in Multihop Wireless Sensor Networks

Jesse van den Ende

June 16, 2019

Supervisor(s): Taco Walstra

Informatica, Universiteit van Amsterdam


Abstract

Wireless sensor networks consist of many nodes that cooperate to measure their environment. Advances in sensor technology have made these sensors low-power and low-cost. Wireless sensor networks can be employed in scenarios such as forest fire detection, military tracking or environmental monitoring. In this field, routing protocols have been proposed for soft real-time performance. SPEED is a stateless routing protocol in which forwarding decisions are made with information present in the nodes themselves. SPEED achieves a desired delivery speed across the network and thereby supports soft real-time deadlines. The protocol uses different mechanisms to handle congested areas and void areas and to estimate single hop delay. The protocol is implemented in network simulator 3, where assumptions are made about the network and the mechanisms used by the protocol. The performance achieved in this setup falls short of the performance reported for SPEED.


Contents

1 Introduction
1.1 Modern wireless sensor network protocols
1.1.1 Real-time
1.1.2 Clustering
1.2 Research question
1.3 Layout
2 Theoretical background
2.1 Real-Time
2.1.1 SPEED
2.2 Communication
2.3 Clustering
3 Methodology
3.1 Network simulator 3
3.1.1 Communication
3.2 Implementation
3.2.1 User Datagram Protocol
3.2.2 Acknowledgements
3.2.3 Packet format
3.2.4 Time granularity
3.2.5 Buffers
3.2.6 Neighbours
3.2.7 Forwarding
3.2.8 Backpressuring
4 Experiments
4.1 Setup
4.1.1 Hardware
4.1.2 Simulator
5 Results and discussion
5.1 Uniform setting
5.2 Non-uniform setting
6 Conclusions
6.1 Future Work


CHAPTER 1

Introduction

Wireless sensor networks have been a field of interest for several years. This is a result of advances in sensor technology, low-power electronics and low-power radio frequency design, which have enabled the development of small, inexpensive and low-power sensors that can form a wireless network. In a network these sensors are also called nodes. These sensors can measure environmental parameters such as temperature, light, air quality, humidity and pressure [3]. Due to their range of capabilities, they can be used in different scenarios such as the military field, environmental monitoring, logistics or robotics. In the military field wireless sensor networks can perform enemy tracking, battlefield surveillance and target classification. A wireless sensor network could also be used in outdoor environments to detect forest fires.

The use case investigated in this work is the monitoring of indoor environments, specifically for fire detection. Smoke and fire detection in buildings is common nowadays, and light signals are present to indicate emergency exits. These two systems could work together through a wireless sensor network to lead residents to safety. The sensors have limited capabilities in terms of computation, energy and memory. Sensors are typically non-rechargeable, which makes it necessary for them to be energy efficient. The most critical part of the energy consumption is the wireless data transmission between sensors [12]. Communicating sensors can exchange data in different ways [16]. A sensor can communicate directly with any other node, which is single hop communication. This form of communication is direct, but it can be demanding on energy resources, as a sensor may need to increase its transmission power to reach the receiving sensor. This leads to another form of communication, in which a node uses adjacent or neighbouring nodes to send data. The node first sends data to an adjacent node, which in turn sends it to an adjacent node until the packet reaches its destination. This form of communication is multi-hop communication.

Nodes can also form clusters in which one node is the cluster head. The cluster head performs various tasks, such as coordinating the nodes around it and aggregating their data. Cluster head nodes also provide the communication with other clusters. The use of clusters in wireless sensor networks can achieve high energy efficiency and network scalability [12]. The role of cluster head can be rotated, which balances the energy consumption of the sensors.

Depending on the use case of a wireless sensor network, real-time deadlines may be necessary for the application. A use case for forest fire detection has deadlines, for example. The deadlines used in this case are soft, meaning that the usefulness of the information decreases when a deadline is missed but is not zero, whereas a missed hard deadline means system failure. A forest fire detection network is an emergency application which can be approached with soft deadlines [6]. Soft deadlines are more feasible in sensor networks than hard deadlines, because of the limitations in terms of computation, network lifetime and memory [13]. The link quality between nodes and varying network performance also need to be considered for real-time performance. This bottleneck impacts the soft real-time performance in the network. A Stateless Protocol for Real-Time Communication in Sensor Networks (SPEED) is a protocol that considers soft real-time performance in wireless sensor networks [6].

The SPEED protocol will be investigated in this work. There are several SPEED variants, such as Energy-Efficient SPEED, Multipath Multi-SPEED and Fault Tolerant SPEED, which build on SPEED by extending the protocol. SPEED provides a desired delivery speed in the network. It is stateless because the nodes only use local information about the neighbouring nodes around them. The control overhead used by the protocol is minimal, which benefits the energy consumption of nodes. The delivery speed is maintained by feedback control and non-deterministic forwarding. The EE-SPEED protocol takes the residual energy of nodes into consideration when making forwarding decisions [9]. The MM-SPEED protocol guarantees probabilistic quality of service support by providing multiple packet delivery speeds [5]. The FT-SPEED protocol handles void areas, which occur due to node failure, with a void announcing scheme [18].

The performance of real-time protocols is measured by the number of deadline misses that occur. A deadline is not met if the packet is delivered after its deadline. The end-to-end delay in the network is also a measure of performance: if the average end-to-end delay is low, the number of deadline misses in the network is low.

1.1 Modern wireless sensor network protocols

Many protocols have been designed for real-time communication in wireless sensor networks, such as SPEED, which has several derivatives, An Adaptive Real-Time Routing Scheme (ARP) [14], the On-demand Multi-hop Lookahead based Real-time Routing Protocol (OMLRP) [8] and Energy Aware Routing for Real-Time and Reliable Communication (EARQ) [7].

Clustering in wireless sensor networks is a form of communication that has been investigated broadly. This approach uses clusters to lower the energy consumption of nodes. Two protocols designed for this approach are Energy-Efficient Unequal Clustering (EEUC) [12] and the hybrid energy-efficient distributed clustering approach (HEED) [4].

1.1.1 Real-time

Critical data reliable routing (CDRR) is a protocol introduced for industrial wireless sensor networks [10]. It considers the scalability problems that MANET routing protocols have; these protocols do not consider node constraints in their design. CDRR combines reliability and real-time aspects in wireless sensor network protocols. The adaptive real-time routing scheme (ARP) provides real-time data transmission and dynamically adapts to different real-time demands of applications [14]. It also takes a trade-off between energy consumption and real-time transmission into consideration. ARP dynamically changes a packet's transmission speed and priority requirements during end-to-end transmission.

The On-demand Multi-hop Lookahead based Real-time routing protocol (OMLRP) achieves a low deadline miss ratio through multi-hop lookahead [8]. It gathers information about multi-hop neighbours around a data forwarding path from source to destination by relying on one-hop lookahead. Multi-hop lookahead is used when a packet has a real-time deadline that has to be met. EARQ considers the same problems of wireless sensor networks as CDRR. It provides real-time, reliable delivery of a packet while considering energy [7]. The protocol selects a path for a packet that is low on energy consumption and delay and that provides high reliability. It achieves this by estimating the energy cost, delay and reliability of the path from source to destination. It can also select a path that is non-optimal in terms of speed but still meets a packet's deadline.

1.1.2 Clustering

EEUC is an energy-efficient unequal clustering mechanism for wireless sensor networks [12]. This mechanism has been proposed to extend the lifetime of wireless sensor networks by resizing clusters. Clusters closer to the base station tend to die faster, due to the many-to-one traffic pattern these clusters suffer from. In EEUC, the clusters closer to the base station are smaller than those further away. This results in less energy consumption during intra-cluster communication and leaves more energy for inter-cluster relay traffic. Cluster heads are chosen through an election mechanism. The election happens within a range that is proportional to the distance to the base station. The criterion used for the election of cluster heads is also based on the amount of residual energy present in the nodes.

HEED is a hybrid energy-efficient distributed clustering approach for ad hoc sensor networks [4]. The approach is hybrid because it considers both energy and communication cost. It is distributed in the sense of energy consumption and the production of well-distributed cluster heads. It attempts to prolong the lifetime of the network by assigning cluster heads and rotating this role between different nodes. Cluster heads have the task of data aggregation and coordination among the nodes in their cluster and of communication with other cluster heads or observers. Their job is energy demanding, which results in faster energy depletion than for non-cluster-head nodes. The rotation of the cluster head role ensures load balancing. The residual amount of energy present in nodes is a criterion that HEED uses to select cluster heads: nodes with more energy have a higher probability of becoming a cluster head. When this first criterion ties between nodes, the intra-cluster communication cost is used as a second criterion. This can be done such that the load is balanced between nodes, i.e. the cluster head node with the lowest degree is chosen, or dense clusters can be created by choosing the cluster head node with the highest degree.

1.2 Research question

In this work the SPEED protocol is investigated with respect to real-time performance. The research question of this work is: What is the real-time performance in wireless sensor networks using the SPEED protocol?

1.3 Layout

This work is set up as follows. First, a theoretical foundation is given, so that the terminology and concepts regarding wireless sensor networks are clear; the SPEED protocol is also elaborated on in this chapter. Subsequently, the implementation is explained and several assumptions are stated; there are differences between the theoretical description of SPEED and the implementation, so these details are required. Thereafter the experimental setup that produces the results is explained, followed by the results themselves. Lastly, a conclusion is given that answers the research question.


CHAPTER 2

Theoretical background

Research in wireless sensor networks has investigated various perspectives to improve their performance, among others real-time performance, energy consumption efficiency and network coverage.

A sensor network covers a sensing field where measurements are taken. The sensor nodes are distributed on the field to measure the environment around them and collect data accordingly. The data is then sent to the end user [2].

Figure 2.1: Sensing field of nodes sending data to a sink.

As a result of differing distances between nodes, the communication delay is not constant between all nodes, as opposed to the delay in a wired network. This makes real-time communication challenging in wireless sensor networks.

2.1 Real-Time

Real-time guarantees fall into one of two categories: hard real-time or soft real-time. A deadline miss in a hard real-time setting means the system fails. A deadline miss in a soft real-time setting does not make the system fail, which means lateness is tolerable and the deadline guarantees are probabilistic. The end-to-end delay provided by the network is therefore either probabilistic or deterministic. Meeting deadlines costs energy, so a balance has to be found between energy consumption and real-time performance. The design of real-time protocols should consider additional constraints of wireless sensor networks, including resource limitations, link reliability between nodes and varying network performance. These factors are caused by the low reliability of nodes and the possibility of dynamic network topologies [qos]. When energy consumption has a higher priority in the protocol design than real-time performance, the design choices in the scope of real-time will be more strict. When real-time delivery must be met with tolerable lateness, some extra energy may be used.

2.1.1 SPEED

SPEED is a stateless protocol for real-time communication in sensor networks [6]. The protocol achieves a desired delivery speed ($S_{setpoint}$) across the sensor network and thereby supports soft real-time deadlines. The protocol is stateless since the nodes in the network only make use of local information, consisting of direct neighbour information, so no global state is needed. Due to this stateless approach, the memory usage in the nodes is small. The end-to-end delay that is guaranteed is proportional to the distance between the two endpoints, due to the wireless nature of the network. SPEED supports delivery to a destination area, which is defined by a sphere. There are three types of delivery: unicast, anycast and multicast. In unicast delivery, a specified node in the destination area receives the packet. In anycast delivery, any one node in the destination area receives the packet. In multicast delivery, all nodes in the destination area receive the packet. The components in the following flow chart are elaborated on in the following sections.

Figure 2.2: Flow chart of the SPEED protocol (SNGF, NFL, LMP, FS, Delivery).

Packet format

The following packet format is used by SPEED (a sketch of these fields as a data structure follows the list):

1. Packet type: determines the type of communication to be used.
2. Global id: used in unicast communication to determine the end node.
3. Destination area: the destination, described by a sphere.
4. TTL (time to live): the hop limit to be used in the last mile process.
5. Payload: the data that needs to be delivered.
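
As an illustration, these fields could be represented by a data structure such as the following C++ sketch. The field names and widths are assumptions made here; SPEED does not prescribe an encoding.

#include <cstdint>
#include <vector>

// Hypothetical encoding of the SPEED header fields listed above.
enum class PacketType : uint8_t { Unicast, Anycast, Multicast };

struct DestinationArea {
    double x, y, z;   // centre of the destination sphere
    double radius;    // radius of the destination sphere
};

struct SpeedHeader {
    PacketType type;              // 1. type of communication to be used
    uint32_t globalId;            // 2. end node id, used for unicast delivery
    DestinationArea area;         // 3. destination described by a sphere
    uint8_t ttl;                  // 4. hop limit used in the last mile process
    std::vector<uint8_t> payload; // 5. data that needs to be delivered
};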

Neighbour beaconing

SPEED makes use of direct neighbour information, resulting in low memory requirements. This information is gathered by periodic beaconing, in which location information is shared between neighbouring nodes. The rate at which the beaconing is done is low, since the nodes are stationary. Two additional on-demand beacons are used for estimating the delay to neighbours and for starting backpressure from a node; these beacons are elaborated on later. Based on the neighbour beacons, each node stores a neighbour table with the delay, the position and the expire time of each neighbour. If a neighbour is out of energy, the periodic beacon will not extend the expire time of that neighbour.


Delay

Delay is estimated at the sender side of the communication. The single hop delay between nodes is used to approximate the load of a node; data packets that pass a node measure this load. The sender node timestamps a packet when it enters the network output queue. When the sender receives an acknowledgement, the round trip single hop delay is calculated. The receiver timestamps the acknowledgement with the processing time of the acknowledgement, and at reception of the acknowledgement the sender subtracts this processing time from the round trip delay. The delay estimate is then updated using previous delay measurements by the exponentially weighted moving average (EWMA), according to the following formula:

$$
S_t = \begin{cases} Y_1, & \text{if } t = 1 \\ \alpha \cdot Y_t + (1 - \alpha) \cdot S_{t-1}, & \text{if } t > 1 \end{cases} \qquad (2.1)
$$

where $Y_t$ is the delay value at index $t$ and $S_t$ is the EWMA at index $t$.
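
A minimal C++ sketch of this estimator, using the α value of 0.7 that is used later in the implementation (section 3.2.2); the class and method names are illustrative.

// Exponentially weighted moving average of the single hop delay (formula 2.1).
class DelayEstimator {
public:
    explicit DelayEstimator(double alpha = 0.7) : m_alpha(alpha) {}

    // Feed a new delay measurement Y_t and return the updated estimate S_t.
    double Update(double measuredDelay) {
        if (!m_initialized) {            // t = 1: S_1 = Y_1
            m_estimate = measuredDelay;
            m_initialized = true;
        } else {                         // t > 1: S_t = a*Y_t + (1-a)*S_{t-1}
            m_estimate = m_alpha * measuredDelay + (1.0 - m_alpha) * m_estimate;
        }
        return m_estimate;
    }

    double Get() const { return m_estimate; }

private:
    double m_alpha;
    double m_estimate = 0.0;
    bool m_initialized = false;
};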

Routing

Routing is done by stateless non-deterministic geographic forwarding (SNGF). Each node stores a neighbour set of the nodes that are within its transmission range. This set contains a forwarding candidate subset of nodes that are closer to the destination than the current node:

$$
FS_i(\mathrm{Destination}) = \{\, \mathrm{node} \in NS_i \mid L - L_{next} > 0 \,\} \qquad (2.2)
$$

where $L$ is the distance from node $i$ to the destination and $L_{next}$ is the distance from its neighbour to the destination.

The relay speed to a neighbouring node $j$ is calculated by:

$$
Speed_i^j(\mathrm{Destination}) = \frac{L - L_{next}}{HopDelay_i^j} \qquad (2.3)
$$

The decision of which node a packet is forwarded to is made according to the following rules:

1. Packets are only forwarded to nodes inside the set FS. If this set is empty, the packet is dropped.

2. FS is divided into a group of nodes that can match a set speed ($S_{setpoint}$) and a group that cannot.

3. The forwarding node is chosen from the group that can meet the required speed. Nodes with higher delivery speeds have a higher probability of being chosen; the probability distribution used is an exponential probability distribution.

4. When the first group is empty, a relay ratio is calculated by the Neighbourhood Feedback Loop (NFL). A packet is then dropped if a random value in [0, 1] is higher than this ratio.

Due to the non-deterministic forwarding, the routing balances the load between nodes, which spreads the energy consumption across the network. A sketch of this forwarding decision is given below.
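
The C++ sketch below illustrates these four rules. The neighbour record, the exponential weighting of the relay speeds and the random-number source are assumptions made for the illustration, not the exact mechanism of the SPEED paper.

#include <cmath>
#include <optional>
#include <random>
#include <vector>

struct Neighbour {
    int id;
    double distToDest;   // distance from this neighbour to the destination
    double hopDelay;     // EWMA single hop delay to this neighbour
    double missRatio;    // e_i, used by the neighbourhood feedback loop
};

// Sketch of SNGF: pick a next hop for a packet, or return nothing to drop it.
// The exponential weighting (lambda) and the gain K are tunable assumptions.
std::optional<int> ChooseNextHop(double myDistToDest,
                                 const std::vector<Neighbour>& neighbours,
                                 double setpoint, double lambda, double k,
                                 std::mt19937& rng) {
    // Rule 1: FS = neighbours that make progress towards the destination.
    std::vector<Neighbour> fs;
    for (const auto& n : neighbours)
        if (myDistToDest - n.distToDest > 0.0) fs.push_back(n);
    if (fs.empty()) return std::nullopt;   // no progress possible: drop

    // Rule 2: group of nodes whose relay speed meets the setpoint.
    std::vector<Neighbour> fast;
    std::vector<double> weights;
    for (const auto& n : fs) {
        double speed = (myDistToDest - n.distToDest) / n.hopDelay;  // formula 2.3
        if (speed >= setpoint) {
            fast.push_back(n);
            weights.push_back(std::exp(lambda * speed));  // higher speed, higher weight
        }
    }

    // Rule 3: pick from the fast group with exponentially weighted probability.
    if (!fast.empty()) {
        std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());
        return fast[pick(rng)].id;
    }

    // Rule 4: neighbourhood feedback loop, relay ratio u (formula 2.4).
    double missSum = 0.0;
    bool anyZero = false;
    for (const auto& n : fs) {
        missSum += n.missRatio;
        if (n.missRatio == 0.0) anyZero = true;
    }
    double u = anyZero ? 1.0 : 1.0 - k * missSum / fs.size();
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    if (unif(rng) > u) return std::nullopt;  // drop probabilistically
    // otherwise relay to a random member of FS
    std::uniform_int_distribution<std::size_t> any(0, fs.size() - 1);
    return fs[any(rng)].id;
}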

Neighbourhood Feedback Loop

The neighbourhood feedback loop maintains the desired single hop relay speed between nodes. This is done by calculating the relay ratio of the possible forwarding nodes. A packet's deadline is considered to be missed when the packet is forwarded to a node whose relay speed is lower than $S_{setpoint}$; the fraction of such misses is that neighbour's miss ratio. The relay ratio is calculated by the following formula:

$$
u = \begin{cases} 1 - K \cdot \frac{\sum_i e_i}{N}, & \text{if } \forall\, e_i > 0 \\ 1, & \text{if } \exists\, e_i = 0 \end{cases} \qquad (2.4)
$$


where $e_i$ is the miss ratio of neighbour $i$, $N$ is the size of FS, $u$ is the relay ratio and $K$ is the proportional gain.

Back-Pressure Rerouting

Back-pressure rerouting is used when part of the network is congested, or an area of nodes is not able to communicate. The latter is known as a void area in the network. A back-pressure beacon is sent back to the sender of a packet when a node cannot forward the packet.

Figure 2.3: Backpressure rerouting around a congested area.

Back-pressure rerouting also balances the traffic load by forwarding packets non-deterministically through multiple concurrent routes. The load of a node is measured by its single hop delay. When congestion occurs in the network, the backpressure mechanism is invoked to find a route around the congested area. The mechanism sends a beacon back to the neighbours of the congested node; this beacon carries the average delay of the congested node, which is high due to the congestion. The neighbours receiving this beacon lower the probability of forwarding to this node as a result.

Last Mile Processing

Last mile processing is used when the packet is in the destination area. This process is used instead of SNGF. According to the packet type, the last mile process forwards the packet to all nodes in the destination area, one node in the destination area or one specific node.

2.2 Communication

Communication in a wireless sensor network covers variable distances between nodes. Nodes may need to adjust their transmission range in order to communicate with other nodes located at different distances relative to the sending node. When a node has to communicate with a distant node, the transmission power has to be high to accomplish successful communication, and the energy dissipation in this node suffers from it. It is therefore desirable that nodes do not need to communicate over long distances. The network can, for this reason, be partitioned into several clusters. The nodes in each cluster gather environmental information and send it to the cluster head. Each cluster head sends its information via other cluster heads to the base station. This can be done via multihop intracluster communication or intercluster communication; intracluster communication covers shorter distances. Cluster heads can be elected by the amount of residual energy that is present. The cluster head role causes higher energy dissipation, therefore cluster heads need to have more energy than regular nodes. A cluster head that fails due to an energy deficit results in the election of a new cluster head, or in the distribution of its regular nodes over adjacent clusters [17].


2.3 Clustering

Energy consumption is a factor in wireless sensor networks that must be kept as low as possible to extend the lifetime of the network. Higher energy efficiency can be achieved through clustering, which is an example of hierarchical routing in wireless sensor networks [1]. Forwarding a packet through the next node on a route reduces the energy of that node. When a route is used relatively often in comparison with other routes, the nodes on this route will have their energy depleted faster than other nodes. Routes can be chosen dynamically in order to prevent early energy depletion of nodes: nodes with high residual energy can be chosen more often on a route to enable balanced energy dissipation across the network. For this reason the optimization of routing influences the network lifetime. The network lifetime can also be extended by optimizing data aggregation. When multihop communication is used between nodes, data may be sent by redundant nodes, for instance when nodes located near each other measure the same metric. This results in unnecessary energy dissipation, which could be prevented by letting one of these nodes enter a standby state.

The presence of a sink node or base station results in congestion in the nodes in its proximity. These nodes are on many routes to the base station, making them use more energy than nodes located further away. This uneven energy dissipation results in a shorter network lifetime. However, the positions of several nodes may change through controlled mobility. In this scenario mobile nodes have the ability to change their position, and they can do so cooperatively, which results in a more balanced energy dissipation of the nodes.


CHAPTER 3

Methodology

3.1 Network simulator 3

Network simulator 3 (ns3, https://www.nsnam.org/) is a discrete event network simulator that is used for teaching and research. A discrete event simulation schedules events, or a series of events, that happen at set moments during the simulation. The moment an event is finished, the simulation jumps to the starting time of the next event and executes it immediately.

Ns3 is the successor of the ns2 tool. Ns2 was the de facto standard for academic research [15], and many papers obtained their results through it. Because of this extensive use, hundreds of new models were developed and added to the codebase. Ns3 was nevertheless developed to replace ns2: the authors' experience with using and maintaining networking tools led to a view of how networking stacks should be modelled that suited networking research. A fundamental goal on which ns3 tries to improve over ns2 is the realism of the modelling; the implementation of the models is closer to the corresponding real software implementations. The authors argue that the use of high-level languages to create such tools can cause simulation results to diverge from results on real hardware. Ns3 is written in C++, which eases the reuse of existing C code. Due to the use of C++, performance and ease of debugging are improved over ns2, whose combination of Tcl and C++ was also less familiar to students. A Python-based scripting API is available, which enables ns3 to be integrated with other Python-based environments and models. Therefore both Python and C++ can be used to program in ns3.

3.1.1 Communication

Media Access Control layer

The Media Access Control (MAC) layer protocol used by the SPEED protocol itself is a variant of the Distributed Coordination Function (DCF). The authors state that a simplified version is used, but not how it is simplified. The MAC layer used in the implementation of this work is DCF, since it is implemented directly in ns3. The MAC model used is the ad hoc wifi MAC model. This model was chosen because the nodes in the sensor network need the ability to work independently, which would not be possible with an access point model. The ad hoc model does not use beacon generation, probing or association. This benefits the sensors' energy budget, since no additional packets are transmitted at the MAC layer.


Physical layer

The physical layer implemented in ns3 is based on Yet Another Network Simulator (YANS) [11]. Ns3 supports two physical layer models, and the YANS model provides the necessities for this work; it does not support frequency-level decomposition of signals, but this feature can be omitted for this implementation. The physical layer models the reception of packets and the energy consumption that this requires. Whether a packet is received successfully is determined probabilistically, with a probability that depends on factors such as the modulation, the signal to noise ratio for the packet and the state of the physical layer; the physical layer can be in a transmission or sleeping state, in which a packet cannot be received. Ns3 uses a bookkeeping object to track the factors that influence the reception of each packet. The physical layer also makes use of error models to provide the error probability for the modulation and standard that are used.
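
For reference, the corresponding ns3 setup of the ad hoc MAC on top of the YANS physical layer could look roughly like the sketch below (ns3.29-era helpers); the attribute values used in this work are not stated, so none are set here.

#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/wifi-module.h"

using namespace ns3;

// Sketch: ad hoc MAC (no beacons, probing or association) over the YANS PHY.
NetDeviceContainer InstallAdhocWifi(NodeContainer nodes) {
    YansWifiChannelHelper channel = YansWifiChannelHelper::Default();
    YansWifiPhyHelper phy;
    phy.SetChannel(channel.Create());

    WifiMacHelper mac;
    mac.SetType("ns3::AdhocWifiMac");

    WifiHelper wifi;
    return wifi.Install(phy, mac, nodes);
}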

3.2 Implementation

3.2.1 User Datagram Protocol

The User Datagram Protocol (UDP) allows computers to send datagrams to each other. As opposed to TCP, no handshake is required to set up a connection between two endpoints. This is favourable in a wireless sensor network, since any overhead that can be omitted is beneficial: it reduces the amount of energy required to exchange messages and reduces the probability of collisions on the wireless channel. Another advantage of UDP is its smaller header size compared with TCP. A disadvantage of UDP is that routing decisions have to be performed at the application level.
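
A minimal sketch of how a node could create and use a UDP socket in ns3 is given below; the port number and callback names are illustrative.

#include "ns3/core-module.h"
#include "ns3/internet-module.h"
#include "ns3/network-module.h"

using namespace ns3;

// Called whenever a datagram arrives on the socket.
static void OnReceive(Ptr<Socket> socket) {
    Address from;
    Ptr<Packet> packet = socket->RecvFrom(from);
    // forwarding decisions on the received packet happen here, at the application level
}

// Create a UDP socket on a node and listen on the given port.
static Ptr<Socket> SetupUdp(Ptr<Node> node, uint16_t port) {
    Ptr<Socket> socket = Socket::CreateSocket(node, UdpSocketFactory::GetTypeId());
    socket->Bind(InetSocketAddress(Ipv4Address::GetAny(), port));
    socket->SetRecvCallback(MakeCallback(&OnReceive));
    return socket;
}

// Sending a 20-byte zero payload to a neighbour would then look like:
//   socket->SendTo(Create<Packet>(20), 0, InetSocketAddress(neighbourAddress, port));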

3.2.2 Acknowledgements

The SPEED protocol introduces an acknowledgement mechanism to provide the nodes with delay estimates of their neighbours. Unlike TCP, UDP does not provide such a mechanism, and the SPEED paper gives no insight into how it works. Since the mechanism is necessary to provide the nodes with delay estimates of their neighbours, an acknowledgement mechanism is implemented in this work.

Acknowledgements are sent each time a packet is received from a neighbour. All packet types are acknowledged, except acknowledgements themselves, which would otherwise lead to an endless stream of acknowledgements. A non-acknowledgement packet that is sent to a neighbour carries a timestamp of when the packet left the sending node. The receiver calculates the single trip time of the packet by subtracting this timestamp from the current time and puts this time in the tag of the acknowledgement packet. When a node sends an acknowledgement back to the original sender, it also timestamps the acknowledgement packet. When the acknowledgement is received at the sender, the round trip time can be calculated from the single trip time present in the tag and from the timestamp that the receiver put in the tag. The round trip time is then stored at the sender and used to estimate the delay as in SPEED; the EWMA formula uses an α value of 0.7. The different types of packets and the timestamps stored in the packet tag are elaborated on in the next sections.
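
The bookkeeping described above can be summarised as in the following sketch, under the assumption that the tag carries the forward trip time and the time at which the receiver sent the acknowledgement; clocks are implicitly synchronized, which holds inside a single ns3 simulation.

#include "ns3/core-module.h"

using namespace ns3;

// At the receiver, when the data packet arrives (sendTime taken from its tag):
Time ForwardTripTime(Time sendTime) {
    return Simulator::Now() - sendTime;   // stored in the acknowledgement tag
}

// At the sender, when the acknowledgement arrives: forward trip reported by
// the receiver plus the return trip of the acknowledgement itself.
Time RoundTripDelay(Time forwardTrip, Time ackSendTimeAtReceiver) {
    return forwardTrip + (Simulator::Now() - ackSendTimeAtReceiver);
}

// The resulting round trip delay feeds the EWMA estimator with alpha = 0.7.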

Several circumstances can result in a node trying to resend a packet to its neighbour. The main reason is that the acknowledgement of a packet is never received by the sender. Another reason could be that the receiver never received the message, in which case the sender does not receive an acknowledgement either. In these cases, the sender node retries to deliver the message to the desired neighbour until it receives an acknowledgement. Suppose the following scenario: node A wants to send a message to node B, which relays it to node C. If B has successfully relayed the packet to node C, and thus has received an acknowledgement from C, but A has not received an acknowledgement from B, then A resends the packet until it receives one (see figure 3.1). For this case the nodes keep track of the messages they have already forwarded successfully, so that a retransmission is not forwarded again but only acknowledged. In this scenario node B has already forwarded the message to C, so it does not forward the retransmitted packet from A but only sends an acknowledgement to A. If node B had not yet forwarded the packet to node C, it would try to do so and also send an acknowledgement back to node A.

Figure 3.1: Late acknowledgement case.

3.2.3 Packet format

Ns3 makes use of tags that can be attached to packets. These tags can be built-in or custom; in this implementation a custom tag is used. The following fields are put in the packet tag:

1. Type: used to determine the type of a packet (forwarding, acknowledgement, unicast, anycast, multicast, backpressure).
2. Global id: used for the unicast mechanism.
3. Destination coordinates: used to determine the relay speed to the destination.
4. Destination radius: defines the destination area.
5. Time to live: the number of hops this packet has left in the last mile process.
6. Sender id: the latest sender of the packet.
7. Single trip timespan: the time the packet spent on the first trip to the receiver.
8. Start time: the time the packet was generated.

The packets also contain payload data. The size of this payload is 20 bytes, all set to zero in this implementation; they could be used for application purposes, but handling and processing of this payload is not done in this work. A reduced sketch of the custom tag is given below.
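
A reduced sketch of such a custom tag, following the ns3 Tag interface; only three of the fields above are shown, the class name is an assumption, and the remaining fields would be serialized in the same way.

#include <ostream>
#include "ns3/core-module.h"
#include "ns3/network-module.h"

using namespace ns3;

// Reduced sketch of the custom packet tag: type, sender id and start time only.
class SpeedTag : public Tag {
public:
    static TypeId GetTypeId() {
        static TypeId tid = TypeId("SpeedTag").SetParent<Tag>().AddConstructor<SpeedTag>();
        return tid;
    }
    TypeId GetInstanceTypeId() const override { return GetTypeId(); }
    uint32_t GetSerializedSize() const override { return 1 + 4 + 8; }
    void Serialize(TagBuffer buf) const override {
        buf.WriteU8(m_type);
        buf.WriteU32(m_senderId);
        buf.WriteU64(m_startTime.GetNanoSeconds());
    }
    void Deserialize(TagBuffer buf) override {
        m_type = buf.ReadU8();
        m_senderId = buf.ReadU32();
        m_startTime = NanoSeconds(buf.ReadU64());
    }
    void Print(std::ostream& os) const override {
        os << "type=" << unsigned(m_type) << " sender=" << m_senderId;
    }

    uint8_t m_type{0};       // forwarding, ack, unicast, anycast, multicast, backpressure
    uint32_t m_senderId{0};  // latest sender of the packet
    Time m_startTime;        // time the packet was generated
};

// Attaching and reading the tag on a packet:
//   packet->AddPacketTag(tag);    SpeedTag t; packet->PeekPacketTag(t);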

3.2.4 Time granularity

Events in ns3 are scheduled on a time basis; the discrete times at which events happen are decided in advance. In the implementation a granularity is used for the discreteness of the scheduling of events, which can be seen as a timestep. The main recurring event is the processing of the buffers, which includes the sending and receiving of packets when packets are queued. This is done frequently (every 100 µs) to ensure that a node is not idly holding a packet for an unnecessary amount of time. If this granularity were, for example, in the order of milliseconds, the processing of the buffers would impact the performance of the protocol: nodes would keep packets longer in their queues, which directly impacts the end-to-end deadline of packets.
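
A minimal sketch of this scheduling with the ns3 Simulator is shown below; ProcessBuffers is a hypothetical function standing in for the buffer-processing routine of this implementation.

#include "ns3/core-module.h"

using namespace ns3;

// Hypothetical buffer-processing event, rescheduled every 100 microseconds so
// queued packets are not held longer than one timestep.
void ProcessBuffers() {
    // ... send and receive queued packets here ...
    Simulator::Schedule(MicroSeconds(100), &ProcessBuffers);
}

int main() {
    Simulator::Schedule(MicroSeconds(100), &ProcessBuffers);  // first timestep
    Simulator::Stop(Seconds(20));
    Simulator::Run();
    Simulator::Destroy();
    return 0;
}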

Another factor that depends on time is a waiting period. Nodes wait a random amount of time before forwarding a packet to a neighbour. This waiting period ranges from 20 to 40 milliseconds and is based on the timestamp present in the packet; a packet is only forwarded once it meets this constraint. The result of this waiting period is that nodes have the opportunity to send an acknowledgement back to the sender before the sender retransmits the packet. The waiting period reduces collisions between packets as well.


3.2.5 Buffers

The SPEED protocol does not specify how buffers are used. However, packets must be stored in order for nodes to forward them to their neighbours. Each node therefore has a buffer attached to it, which is used to store packets that need to be forwarded; acknowledgements are not stored in the buffer. The buffer can contain up to one hundred packets. A first-in-first-out (FIFO) mechanism determines the order in which the packets are sent. A packet that has been sent to a neighbour is added to the back of the buffer, which keeps packets progressing through the network. When the buffer is full, a new packet cannot be added and is dropped instead.

When an acknowledgement is received, the corresponding packet of this acknowledgement is removed from the buffer. The id of the packet is then stored in a register that keeps track of already acknowledged packets.

A disadvantage of using a FIFO buffer is that an entry at a certain index cannot be removed directly; the entire buffer must be traversed each time a packet needs to be removed from it. A sketch of this buffer is given below.
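
A sketch of such a buffer; the types and method names are illustrative, and the congestion check of section 3.2.8 is included for completeness.

#include <cstddef>
#include <cstdint>
#include <deque>
#include <unordered_set>

// Per-node packet buffer: FIFO, capped at 100 entries, with linear removal
// when an acknowledgement arrives.
struct BufferedPacket { uint64_t id; /* plus tag and payload */ };

class PacketBuffer {
public:
    bool Enqueue(const BufferedPacket& p) {
        if (m_queue.size() >= kCapacity) return false;  // buffer full: drop
        m_queue.push_back(p);
        return true;
    }

    // A packet that has just been sent to a neighbour moves to the back of the
    // queue until its acknowledgement arrives.
    void Requeue(const BufferedPacket& p) { m_queue.push_back(p); }

    // Remove the packet belonging to a received acknowledgement. The whole
    // queue is scanned, which is the FIFO disadvantage noted above.
    void Acknowledge(uint64_t id) {
        for (auto it = m_queue.begin(); it != m_queue.end(); ++it) {
            if (it->id == id) { m_queue.erase(it); break; }
        }
        m_acked.insert(id);   // register of already acknowledged packets
    }

    bool Congested() const { return m_queue.size() > 95; }  // backpressure trigger

private:
    static constexpr std::size_t kCapacity = 100;
    std::deque<BufferedPacket> m_queue;
    std::unordered_set<uint64_t> m_acked;
};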

3.2.6 Neighbours

In SPEED, the discovery of neighbouring nodes involves neighbour beaconing, which is done to learn the location of neighbouring nodes and to update their current status. In this work, however, it is assumed that nodes know their own location and the locations of all other nodes beforehand. Nodes can then find their neighbours by calculating the distance between themselves and the other nodes: if the distance is smaller than some threshold (the transmission range), the node is a neighbour. This threshold is set to 40. The threshold is introduced to prevent nodes from being able to communicate with every other node, since this would otherwise be possible in the setting used in ns3. Another assumption that is made is that the IP addresses of the other nodes are also known.

A neighbour table object is used to track the neighbours of a node; it contains the neighbour objects belonging to this node. The neighbour table is stored in a speed object that is attached to every node. The speed object also contains the desired $S_{setpoint}$. A sketch of the distance-based neighbour discovery is given below.
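
A sketch of this distance-based neighbour discovery, assuming a flat field and the threshold of 40; the Position type and function name are illustrative.

#include <cmath>
#include <vector>

struct Position { double x, y; };

// Every node knows all positions beforehand; a node within the transmission
// range is a neighbour.
std::vector<int> FindNeighbours(int self, const std::vector<Position>& positions,
                                double range = 40.0) {
    std::vector<int> neighbours;
    for (std::size_t i = 0; i < positions.size(); ++i) {
        if (static_cast<int>(i) == self) continue;
        double dx = positions[i].x - positions[self].x;
        double dy = positions[i].y - positions[self].y;
        if (std::hypot(dx, dy) <= range)   // within the transmission threshold
            neighbours.push_back(static_cast<int>(i));
    }
    return neighbours;
}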

3.2.7 Forwarding

Nodes make forwarding decisions based on multiple factors. These factors include: relay speed, estimated delay and packet type.

A node that is not located in the destination area forwards the packet by determining its forwarding set of neighbours. This set includes the nodes that are at least a distance of 10 closer to the destination and have a relay speed of at least $S_{setpoint}$. The value of $S_{setpoint}$ is 20, as there are 8-9 hops on average between the base station and the node [6] and the desired end-to-end delay is 200 ms. The relay speed of the nodes in this set is then calculated as in SPEED. Nodes with a higher relay speed have a higher probability of being chosen as the forwarding node; this probability is given by an exponential distribution with a lambda value of 3.5. The value of lambda determines the curve of the distribution: a higher lambda results in a higher probability of choosing a node with a higher relay speed. If the forwarding set does not contain any neighbours, the minimal distance of 10 is ignored. If the forwarding set is still empty, the relay ratio is calculated according to the formula stated in SPEED (formula 2.4). The factor K in this formula is 1; K is a coefficient that determines the relay ratio, and if K is high the relay ratio will be lower, dropping packets sooner. A random number between 0 and 1 is then compared with the relay ratio: if the random number is larger than the relay ratio, the packet is dropped; otherwise, the packet is forwarded to a random node that is closer to the destination.

If a node located in the destination area receives a packet that must be forwarded to the destination, the node treats this packet as a unicast packet, because the data sink is the node in the network to which the packets are transferred. This node then sends the packet to the unicast destination node if that node is present in its neighbour table; otherwise it forwards the packet as a normal forwarding packet.

3.2.8 Backpressuring

Backpressure beaconing is used in case of congestion and void areas in the network. The moment a node is congested, meaning that its buffer is full, a beacon is sent back to the sender. The nodes check the current size of their buffer every time the buffers are processed; if a node has more than 95 packets in its buffer, the buffer is considered congested and the node issues a backpressure beacon. This beacon is a packet that carries, in its tag, the average delay of the forwarding nodes of the congested node. There can be cases where the congested node has not yet communicated with all forwarding neighbours, which results in an average delay that is relatively low because it is mostly based on the initial delay value; neighbours of the congested node would then still forward packets to it. Therefore a threshold backpressure delay with a value of 30 is used: if the average delay is lower than this value, the threshold is added to the delay. The congested node sends this beacon to its non-forwarding neighbours, that is, the neighbours that have this congested node in their forwarding set. In this case, a random waiting period before sending out the beacon is again used, for the same purpose as before. The neighbours then have a high delay value for this node in their neighbour table, which results in a lower probability of forwarding packets to it.
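
A sketch of the delay value advertised in such a beacon, following the thresholding described above; the function shape is an assumption.

#include <vector>

// A node whose buffer holds more than 95 packets advertises its average delay,
// padded with the threshold of 30 if the average is still low (for example
// because it is mostly based on initial delay values).
double BackpressureDelay(const std::vector<double>& forwardingDelays,
                         double threshold = 30.0) {
    double sum = 0.0;
    for (double d : forwardingDelays) sum += d;
    double avg = forwardingDelays.empty() ? 0.0 : sum / forwardingDelays.size();
    return (avg < threshold) ? avg + threshold : avg;
}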

Nodes that drop a packet because their forwarding neighbours do not provide the desired delivery speed also issue a backpressure beacon. To prevent these nodes from never relaying packets to those forwarding nodes again, a periodic packet is sent to the nodes whose delay is higher than $S_{setpoint}$. This enables those nodes to update their delay and to be forwarded packets again if this delay has improved.


CHAPTER 4

Experiments

4.1 Setup

4.1.1 Hardware

The experiments are conducted on an Intel Core i5-6200U with 4 GB of RAM. The network simulator 3 version used is 3.29.

4.1.2 Simulator

In one experiment, the nodes in the network are uniformly distributed in a grid on a field of 200 by 200. The first eight columns consist of 10 nodes each, the second to last column consists of 9 nodes and the last column consists of a sink node. A total of 100 nodes is used in the network. The sink node is located at (200, 100). The horizontal distance between the columns is 20 and the vertical distance between nodes is also 20.
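
A sketch of how such a stationary placement could be expressed with the ns3 mobility helpers; the loop bounds are illustrative and do not reproduce the exact column layout described above.

#include "ns3/core-module.h"
#include "ns3/mobility-module.h"
#include "ns3/network-module.h"

using namespace ns3;

// Place stationary nodes on a 20-spaced grid on a 200 x 200 field, with the
// sink at (200, 100).
void PlaceUniformGrid(NodeContainer nodes) {
    Ptr<ListPositionAllocator> positions = CreateObject<ListPositionAllocator>();
    for (double x = 20.0; x <= 180.0; x += 20.0)        // regular columns
        for (double y = 10.0; y <= 190.0; y += 20.0)
            positions->Add(Vector(x, y, 0.0));
    positions->Add(Vector(200.0, 100.0, 0.0));          // sink node

    MobilityHelper mobility;
    mobility.SetPositionAllocator(positions);
    mobility.SetMobilityModel("ns3::ConstantPositionMobilityModel");
    mobility.Install(nodes);
}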


A second setting is used in which the nodes are distributed randomly on a field of 200 by 200. The nodes are at least a distance of 10 away from each other, which ensures that nodes do not share the same position. The location of the sink node is (200, 100).

Figure 4.2: Setting of randomly distributed nodes.

In an experiment, 6 nodes are randomly drawn from the 10 leftmost nodes in the field. These 6 nodes generate data every second. After 6 seconds, node 50 also starts generating packets to create a step change in the system; it generates extra data for 4 seconds. The rate at which it generates packets varies: it generates a packet every 1000, 100, 50, 33, 25, 20, 16, 14, 13, 11 and 10 milliseconds, which corresponds to roughly 1, 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100 packets per second. This follows SPEED, which also uses the number of packets per second to increase the congestion in this node. The rate at which the packets are generated is called the datarate. The experiment is repeated 8 times per congestion level, each run with a different random seed. After the seed is initialized, 6 random nodes from the leftmost column are chosen to generate data.

The miss ratio is calculated by measuring the time between the generation and the reception of a packet; this timespan is the end-to-end delay. If it is larger than 200 ms, the packet is considered to have missed its deadline. The average queue size is measured every 200 ms and records the average size of all the packet queues of the nodes.
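
A sketch of this metric, assuming a record of generation and reception times per packet.

#include <vector>

// A packet misses its deadline when the end-to-end delay exceeds 200 ms.
struct Delivery { double generatedAt; double receivedAt; };  // seconds

double MissRatio(const std::vector<Delivery>& deliveries, double deadline = 0.2) {
    if (deliveries.empty()) return 0.0;
    std::size_t misses = 0;
    for (const auto& d : deliveries) {
        double endToEnd = d.receivedAt - d.generatedAt;   // end-to-end delay
        if (endToEnd > deadline) ++misses;
    }
    return static_cast<double>(misses) / deliveries.size();
}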


CHAPTER 5

Results and discussion

5.1 Uniform setting

Figure 5.1: Average end-to-end delay with different congestion in a uniform sensing field.

The average end-to-end delay is expected to increase as the datarate of the congesting node increases. The reason for this expectation is that packets are rerouted around the congested area, which results in a longer path to the destination and therefore in a higher end-to-end delay. Another reason the end-to-end delay increases is that, as the congestion increases, more packets have to be transferred in the network. The wireless medium is then used more frequently, which increases the number of collisions between packets; packets are successfully transferred between nodes less often, which increases the end-to-end delay. As can be seen in figure 5.1, the end-to-end delay increases after 20 packets per second and decreases from 1 to 20 packets per second. The decrease in end-to-end delay can be explained by the setting used: the congestion created at these datarates is not sufficient to cause extra packet delay. The distance from the congesting node to the sink is smaller than that of the flow from the leftmost nodes to the sink, so the packets created by the congesting node are transferred to the sink faster and lower the average end-to-end delay. The number of packets created in the congesting node is also large enough to impact the delay, which is not the case at datarates of 1 and 10 packets per second. In addition, only 8 simulations were conducted per datarate, which could be too few to represent the average end-to-end delay. The increase after 40 packets per second is due to the congestion building up in the network and the inability of the network to handle it, as the end-to-end delay increases significantly. The delay stops increasing after a datarate of 90 packets per second, which is due to the setting of the network; as can be seen in figure 5.5, the delay does still increase in the non-uniform setting.

Figure 5.2: Average miss ratio with different congestion rates in a uniform sensing field.

The miss ratio increases as the datarate increases; the increase spans about 40 percentage points, as shown in figure 5.2, starting from a value of around 60 percent. The high miss ratio is the result of packet collisions. The nodes wait a random amount of time before resending a packet; this period is specific to every node, so nodes can still send packets simultaneously, which results in collisions. Packets are then not delivered in time, resulting in a higher miss ratio. Figure 5.4 supports this: the average queue size increases as the simulation progresses, specifically after the congestion starts. This indicates either that the nodes hold on to packets because they cannot forward them, or that nodes can forward packets but do not receive an acknowledgement for them. Both are the result of packets not being received by nodes, which is why the miss ratio increases.


Figure 5.3: Average queue size with different congestion rates in a uniform sensing field.


5.2 Non-uniform setting

Figure 5.5: Average end-to-end delay with different congestion in a non-uniform sensing field.

In the non-uniform setting, the end-to-end delay is expected to increase as the datarate increases, as in figure 5.1. The nodes are now distributed randomly, but this does not prevent nodes from becoming congested, and the effect of the medium being used more frequently, which increases the delay, still holds as well. A similar drop in delay is seen in this setting, with the same cause as in the uniform setting. From a datarate of 80 onwards the delay values are higher than in the uniform setting. Figure 5.7 shows a strong increase in average queue size as well; the increase in delay is therefore the result of bottlenecks in the network, where nodes cannot forward their packets. These bottlenecks result from the aforementioned causes, such as the frequency of medium usage and the waiting period.


Figure 5.6: Average miss ratio with different congestion rates in a non-uniform sensing field.

The miss ratio is higher at datarates of 20 and 40 than in figure 5.2; this difference is the result of the non-uniform setting. The decrease in miss ratio between datarates of 1 and 30 is due to the congesting node creating packets, as is the case in the uniform setting. After a datarate of 50 packets per second, the miss ratio stops increasing significantly. This is also the case in figure 5.2, so it is not the result of the setting being used. This stagnation occurs because the minimum number of packets that still meet their deadline has been reached: the percentage of missed packets is already high, so it does not grow significantly further.


Figure 5.8: Average queue size at one congestion rate (60 packets per second) in a non-uniform sensing field.

Overall, the results are not in line with the results found in [6]. The end-to-end delay does increase in their findings, but the amount of increase is lower than in the results found with the current implementation. The end-to-end delay also starts at a higher value in this work, and the datarate at which the increase happens is different as well.

These differences are the result of the different implementations. The SPEED protocol, for example, uses a simplified variant of the DCF algorithm; the way in which the algorithm is simplified is not specified, so an accurate implementation could not be made in this work. Another factor used in the protocol is the acknowledgement mechanism, which is not provided natively by UDP; this results in assumptions about this mechanism in this implementation of SPEED. In addition, the backpressure mechanism is not elaborated on sufficiently: it is explained through examples, but the exact conditions under which backpressuring is activated are not stated.


CHAPTER 6

Conclusions

The research question proposed in this work is: What is the real-time performance in wireless sensor networks using the SPEED protocol? The ns3 simulator was used to implement the SPEED protocol. This implementation was based on several assumptions to make up for unspecified details of SPEED. The results in this work show that the performance is not high. The miss ratio starts at 55 percent, which is low performance in a real-time scenario. The end-to-end delay also shows low performance: the delay increases significantly as the datarate reaches its highest values. These results show that the current implementation does not achieve the performance of the SPEED protocol as proposed in [6].

6.1 Future Work

The performance of the current implementation can be improved in future research. Several strategies can be investigated, such as the use of another simulator that more closely resembles the implementation used for SPEED, or the use of another acknowledgement scheme to ensure that this mechanism is not the bottleneck in the current implementation. This work can also be extended by looking at different versions of SPEED: EE-SPEED could, for example, be implemented to investigate differences in energy consumption and performance trade-offs. The impact of the backpressure mechanism could be examined by removing it from the protocol. A clustering protocol that is focused on energy conservation, such as EEUC, could also be implemented alongside SPEED to reduce SPEED's energy consumption.


Bibliography

[1] Kemal Akkaya and Mohamed Younis. “A survey on routing protocols for wireless sensor networks”. In: Ad hoc networks 3.3 (2005), pp. 325–349.

[2] Ian F Akyildiz et al. “A survey on sensor networks”. In: IEEE Communications magazine 40.8 (2002), pp. 102–114.

[3] Th Arampatzis, John Lygeros, and Stamatis Manesis. “A survey of applications of wireless sensors and wireless sensor networks”. In: Proceedings of the 2005 IEEE International Symposium on, Mediterrean Conference on Control and Automation Intelligent Control, 2005. IEEE. 2005, pp. 719–724.

[4] Ping Ding, JoAnne Holliday, and Aslihan Celik. “Distributed energy-efficient hierarchical clustering for wireless sensor networks”. In: International conference on distributed computing in sensor systems. Springer. 2005, pp. 322–339.

[5] Emad Felemban, Chang-Gun Lee, and Eylem Ekici. “MMSPEED: multipath Multi-SPEED protocol for QoS guarantee of reliability and timeliness in wireless sensor networks”. In: IEEE transactions on mobile computing 5.6 (2006), pp. 738–754.

[6] Tian He et al. SPEED: A stateless protocol for real-time communication in sensor networks. Tech. rep. Virginia Univ Charlottesville Dept of Computer Science, 2003.

[7] Junyoung Heo, Jiman Hong, and Yookun Cho. “EARQ: Energy aware routing for real-time and reliable communication in wireless industrial sensor networks”. In: IEEE Transactions on Industrial Informatics 5.1 (2009), pp. 3–11.

[8] Juhyun Jung et al. “OMLRP: Multi-hop information based real-time routing protocol in wireless sensor networks”. In: 2010 IEEE Wireless Communication and Networking Conference. IEEE. 2010, pp. 1–6.

[9] Mohammad Sadegh Kordafshari et al. “Energy-efficient speed routing protocol for wireless sensor networks”. In: 2009 Fifth Advanced International Conference on Telecommunications. IEEE. 2009, pp. 267–271.

[10] Manish Kumar, Rajeev Tripathi, and Sudarshan Tiwari. “Critical data real-time routing in industrial wireless sensor networks”. In: IET Wireless Sensor Systems 6.4 (2016), pp. 144–150.

[11] Mathieu Lacage and Thomas R Henderson. “Yet another network simulator”. In: Proceeding from the 2006 workshop on ns-2: the IP network simulator. ACM. 2006, p. 12.

[12] Chengfa Li et al. “An energy-efficient unequal clustering mechanism for wireless sensor networks”. In: IEEE International Conference on Mobile Adhoc and Sensor Systems Conference, 2005. IEEE. 2005, 8–pp.

[13] Yanjun Li et al. “Real-time QoS support in wireless sensor networks: a survey”. In: IFAC Proceedings Volumes 40.22 (2007), pp. 373–380.

[14] Han Peng et al. “An adaptive real-time routing scheme for wireless sensor networks”. In: 21st International Conference on Advanced Information Networking and Applications Workshops (AINAW’07). Vol. 2. IEEE. 2007, pp. 918–922.


[15] George F Riley and Thomas R Henderson. “The ns-3 network simulator”. In: Modeling and tools for network simulation. Springer, 2010, pp. 15–34.

[16] Kay Romer and Friedemann Mattern. “The design space of wireless sensor networks”. In: IEEE wireless communications 11.6 (2004), pp. 54–61.

[17] Halil Yetgin et al. “A survey of network lifetime maximization techniques in wireless sensor networks”. In: IEEE Communications Surveys & Tutorials 19.2 (2017), pp. 828–854.

[18] Lei Zhao et al. “FT-SPEED: A fault-tolerant, real-time routing protocol for wireless sensor networks”. In: 2007 International Conference on Wireless Communications, Networking and Mobile Computing. IEEE. 2007, pp. 2531–2534.
