
Sound-based Relative Position Detection in Drone Swarms

Academic year: 2021


Bachelor Informatica

Sound-based Relative Position Detection in Drone Swarms

Tom Kersten

June 21, 2019

Supervisor(s): A. (Toto) van Inge

Informatica, Universiteit van Amsterdam


Abstract

This thesis investigates the viability of a sound-based relative position detection system for drone swarms. To this end, it seeks to answer four questions. The first is: "How can distances between drones be measured using sound?" Prior research puts forward a sound-based ranging method that uses the time difference between the recorded sound signals of two devices for range measurements. However, its viability in a drone swarm, with increased background noise due to the drones, has to be investigated. The second question is: "How can the relative positions of all drones in the swarm be determined using only distance measurements?" This question is approached by comparing conventional trilateration-based techniques to a Multidimensional Scaling approach. Following this, the question "How can missing information in the drone swarm be mitigated?" is answered by introducing a location prediction algorithm that estimates missing data. Finally, the question "How does one measure the viability of a positioning system?" is answered by considering the common requirements for a relative positioning system in a drone swarm. The answers to these research questions are used to establish the viability of the proposed sound-based relative positioning system.


Contents

1 Introduction

2 Related Work
  2.1 Alternative Sound-Based Relative Positioning System
  2.2 Non-Sound-Based Relative Positioning Systems

3 Sound-Based Ranging using BeepBeep
  3.1 One-to-one Ranging
  3.2 Multiple BeepBeep Ranging Scheme

4 Relative Position Determination
  4.1 Trilateration
  4.2 Multidimensional Scaling
    4.2.1 Metric MDS
  4.3 Trilateration versus MDS

5 Design and Implementation
  5.1 Range Measurements
  5.2 Constellation Construction
    5.2.1 Location Prediction

6 Experiments
  6.1 Sound Detection Experiment
  6.2 Positioning System Experiments
    6.2.1 Environment
    6.2.2 Experimental Setup
    6.2.3 Drone Velocity
    6.2.4 Number of Drones
    6.2.5 Location Prediction
    6.2.6 Constellation Update Time

7 Discussion
  7.1 Results
    7.1.1 Findings
    7.1.2 Considerations
  7.2 Conclusion
  7.3 Future Research

Bibliography

Appendix
  A PYNQ Python Interface


CHAPTER 1

Introduction

In recent years, many potential applications of drones have been investigated, such as detecting forest fires [1], aerial surveillance [2], aerial transport [3], construction [4], precision agriculture [5] and military usage [6]. These are only a few examples of the wide range of possible applications for drones and drone systems. Rapid progress has been made towards the design of multi-drone systems [7], paving the way for more applications.

All drone systems require location detection services to facilitate, among others, collision avoidance protocols [8] and formation control [9]. To this end, they may either use an internal or external positioning system for relative positioning.

Relative positioning concerns the acquisition by individual drones of the locations of the other drones in the system. Due to weight considerations on smaller drones, lightweight equipment is desired. As such, most drones use some form of an external positioning system, either beacon-based (GPS) or non-beacon-based (motion tracking cameras), to detect their location. In combination with a communication network, they can share their positions with other drones. However, this makes the drones highly dependent on these external systems. For example, GPS is not accurate in indoor or cluttered environments [10], and motion tracking cameras are impractical for use in unprepared or large environments.

An alternative approach to relative positioning systems is the usage of internal systems. This approach has been applied in 2D environments using, among others, infrared sensors [11], laser range finders or cameras [12]. 3D solutions such as 3D range finders exist [13], but these are often expensive and bulky and thus not suitable for small aerial drones. Miniature 3D range finders have been used to detect large objects in front of drones [14], but these are not suitable for the detection of other drones due to their small size and changing positions. Other possibilities include radar-based systems such as the Traffic Collision Avoidance System (TCAS), which is used by commercial aircraft for collision avoidance. This system, however, is not suitable for small drones due to its size and cost. Optical measurements might also be used for relative positioning [15]. However, this approach has several drawbacks: a limited field of view, the need for visual contrast between the drone and the background, and failure when drones are stationary [15]. These factors make this approach less suitable.

One other approach is sound-based relative positioning. Sound is commonly used in nature for positioning, for example by birds [16], bats [17] and insects [18]. Using sound as a relative positioning tool could have several advantages. First of all, it is independent of illumination and visibility conditions. Furthermore, it is not significantly affected by occlusion by other drones or by obstacles such as foliage in an outdoor environment. Microphones and speakers are also lightweight and thus do not meaningfully affect the payload of the drone, when considering drones weighing 500 grams to a kilogram. Finally, the sound could potentially be utilised to identify drones in flight by their corresponding sound profile [19]. It should be noted, however, that sound has its distinct disadvantages, such as background noise and sound reflection. Background noise has the potential to make detection of the desired sound difficult, if not impossible, in some situations [19]. Furthermore, reflection might cause false or inaccurate readings. These disadvantages have to be taken into account and analysed for any potential sound-based relative positioning system.


The BeepBeep ranging system [20] is one such sound-based positioning system. BeepBeep provides a sound-based ranging technique to measure distances between devices using only a simple speaker and microphone system, whereby it circumvents some of the problems related to non-real-time operating systems and clock synchronisation. Using this method, ranging results accurate to within a few centimetres can be produced. These distances are then utilised to determine the relative location of several devices using beacon-based trilateration [20]. This sound-based system could potentially be adapted for multi-drone systems [21].

This thesis investigates drone swarms. Drone swarms are a particular type of multi-drone system. They are characterised by emergent collective behaviour from the interactions between the drones in the swarm and their environment.

This thesis investigates the viability of sound-based relative position detection in drone swarms for indoor usage. The relative position information could then be used for collision avoidance [8], formation control [9] and cooperative positioning between the drones [22]. To this end, the BeepBeep ranging system is used as a foundation, with some small adaptations as outlined in [21]. The viability of the detection system relies on the accuracy of the range measurements and on the speed with which they can be obtained.

This thesis ventures to answer the following questions to determine whether the sound-based relative positioning system is viable for drone swarms in an indoor environment:

(RQ1) How can distances between drones be measured using sound?

As stated before, the positioning system is based on sound. In particular, the distances between drones in the swarm are measured using the BeepBeep protocol [20]. However, the exact implementation has to be explored, as well as the accuracy of the system. Furthermore, based on experimental results, the viability of measuring distances between drones using sound has to be determined.

(RQ2) How can the relative positions of all drones in the swarm be determined using only distance measurements?

Using the distances obtained from RQ1, the relative positions of the drones in the swarm have to be determined. This indicates that each drone should be able to construct an overview of the swarm based on relative distances between the drones. This drone constellation could be created using different methods, as outlined in [21]. One such method is to be chosen and implemented. Furthermore, the accuracy of such a system needs to be investigated as well as the factors that influence said accuracy.

(RQ3) How can missing information in the drone swarm be mitigated?

It is possible that drones move too far apart for accurate range measurements to be taken. In these cases, the constellation construction process has to work with incomplete information. It would be advantageous to be able to estimate missing data and predict future locations of drones in these cases. A good approach for this has to be investigated.

(RQ4) How does one measure the viability of a positioning system?

The viability of a positioning system depends on the use case of the end user. As such, more general metrics are needed to express the viability in different use cases.

This thesis first discusses related work in chapter 2. Then, it discusses the sound-based range measurement system in chapter 3. Using the distances obtained this way, chapter 4 outlines the sound-based relative positioning system. Following this, the implementation of this system is discussed in chapter 5. Experiments based on the implementation are then outlined in chapter 6. Finally, the findings of this work are evaluated in chapter 7.


CHAPTER 2

Related Work

Much research has been done on the subject of drone swarms, ranging from small-scale aerial UAVs [22] to large-scale construction projects utilising quadrotors [4]. This thesis focuses on relative positioning systems for drones using sound. It first reviews existing sound-based techniques that could have served as alternatives to BeepBeep [20]. It then reviews non-sound-based relative positioning systems that might have been used.

2.1

Alternative Sound-Based Relative Positioning System

An alternative relative positioning system makes use of directional sound [19]. The article discusses two methods for relative position estimation based on sound. The first system they introduce is a passive audio system which listens to the engine noise of other drones in the swarm. In it, they equip their drones with a microphone array to detect the direction of the engine noises produced by the other drones. Once they capture sound, they perform coherence measurements to determine the presence of the desired engine noise. When this is measured, the direction relative to the drone is calculated. This relative direction is also called the bearing. Should there be more drones within hearing range of the microphone, then the bearing will be of the source that is making the most sound. A pruning unit is introduced to also search for the bearings of other drones whose sound may be masked. They note that the method has a significant disadvantage: the drone taking the measurements needs to produce as little sound as possible, as it otherwise interferes with the detection. This means that measurements are best performed while the drone is not using its rotor, which means in-flight detection might prove troublesome.

Their second method uses active audio signals they call "chirps". This method has each drone produce distinct chirps, both as a method of identification and as a way to perform location detection for multiple drones at a time. They produce the sound using a piezoelectric transducer, which converts electrical signals into mechanical vibrations, thereby producing sound. They then continuously perform cross-correlation checks for the existence of chirps in the received sound. If a chirp is detected, it is isolated, and a bearing measurement is taken to determine the bearing of the sound and thus identify which drone produced it. This system is most similar to BeepBeep. However, this method still requires a microphone array to be set up, while BeepBeep can be performed using only a single microphone.

2.2

Non-Sound-Based Relative Positioning Systems

Many relative positioning systems have been proposed and used for groups of drones. One such system used a vision-based approach [15]. They equipped their drones with cameras and used the captured images to locate drones. While they were able to locate other drones in the images, the limited field of view hampers its viability for sensing the locations of all drones. The method seems to be mainly used as a collision avoidance measure, for which it might be adequate were it not for the fact that it fails when drones are stationary.

Another system that has been shown to work, but only in indoor environments, uses infrared sensors [23]. The system used a ring containing multiple infrared sensors to measure distance and direction between drones in the swarm. However, the sensor array itself used 10 watts and weighed approximately 400 g. As such, it is not suitable for smaller aerial drones, and larger drones will have their flight efficiency reduced as a result of the added weight of the array. Moreover, the system is optimised for indoor usage, as it becomes inaccurate at longer ranges. Due to these shortcomings, a sound-based approach was used for this thesis.


CHAPTER 3

Sound-Based Ranging using BeepBeep

This chapter explores the sound-based range measurement system BeepBeep [20] for acquiring range measurements between drones in a swarm, providing a basis for RQ1. To this end, this chapter outlines each step necessary to build such a system. Firstly, a ranging method between two devices is discussed in section 3.1. Then, a method for simultaneous range determination is outlined in section 3.2.

3.1

One-to-one Ranging

BeepBeep is a ’time-of-arrival’ (TOA) based system, using sound detection to estimate distances between devices running the protocol. TOA systems work as follows: given a pair of devices, the system estimates the distance d between the sender and receiver of a signal to be the product of the time-of-flight ∆t, which is the time it takes a signal to reach the receiver, and the propagation speed c of the signal, which is most often known a priori.

d = c · ∆t (3.1)

In most cases, the sender of the signal takes a timestamp the moment they send the signal. The receiver, in turn, takes a timestamp the moment they receive the signal. The timestamps can then be compared, resulting in the time-of-flight ∆t. As the propagation speed c is known, the distance d can thus be calculated.

This approach, however, poses numerous challenges, three of which are addressed by BeepBeep [20]. The first is clock synchronisation: in these kinds of systems, it is essential for the clocks of both the sender and receiver to be synchronised, as otherwise the possible clock skew could induce errors in the time-of-flight information. The second is that a misalignment might occur between the sending of the signal and the timestamp associated with it. This issue mostly occurs in non-real-time systems, as there is often a small delay between the issuing of a command and its execution. As such, time passes between the issuing of the send command and the taking of the timestamp. The third issue is similar in that it revolves around the receiving timestamp: here too, there is a difference in time between the receiving of the signal and the taking of the timestamp.

These factors combined make accurate ranging using TOA more difficult, as even small errors can severely impact the ranging results. In [20], they found that these factors could amount to several milliseconds and thus to an inaccuracy of several centimetres to metres. As such, BeepBeep has taken care to address these issues.


BeepBeep’s ranging method works as follows. Take two devices, A and B. Assume that both devices are currently recording. Device A emits a sound signal and records it. Device B, upon detecting the emitted signal from A, records it as well and responds by emitting a sound signal itself. Following this, both A and B record the second sound signal. Both devices calculate the elapsed time between two time-of-arrivals (ETOA) using the obtained recordings. The ETOA is the measured time difference between the self-recorded signal that a device sends and the received signal from the other device based on the number of samples between the signals. Once both ETOAs have been calculated, both devices exchange their ETOAs. Based on this exchange, both devices can compute the distance between both devices A and B.

Figure 3.1: Illustration of event sequences in the BeepBeep ranging procedure, as taken from the BeepBeep paper [20]. No timestamping is required for the process.

Figure 3.1 showcases the BeepBeep ranging procedure. The upper (M_A) and lower (M_B) lines represent the local time of devices A and B respectively. Firstly, at time t*_A0, device A issues a play-sound command, which is executed at time t_A0. At time t_A1, A receives its own sound signal, and at time t*_A1, A registers the sound. At t_B1, device B receives the sound, and at t*_B1 it is registered. Following the registering of the sound, B issues its own play-sound command at time t*_B2, which is executed at time t_B2. Finally, both A and B receive the second sound at times t_A3 and t_B3 respectively and register it at t*_A3 and t*_B3.

Several distances can be calculated using this information. Here, d_x,y denotes the distance between device x's speaker and device y's microphone, and c denotes the propagation speed of the sound signal:

d_A,A = c · (t_A1 − t_A0) (3.2)

d_A,B = c · (t_B1 − t_A0) (3.3)

d_B,A = c · (t_A3 − t_B2) (3.4)

d_B,B = c · (t_B3 − t_B2) (3.5)


Using the distance equations, the distance D between devices A and B can be approximated as:

D = 1/2 · (d_A,B + d_B,A)
  = c/2 · ((t_B1 − t_A0) + (t_A3 − t_B2))
  = c/2 · ((t_A3 − t_A1) − (t_B3 − t_B1) + (t_A1 − t_A0) + (t_B3 − t_B2))
  = c/2 · ((t_A3 − t_A1) − (t_B3 − t_B1)) + 1/2 · (d_A,A + d_B,B) (3.6)

As shown in equation 3.6, the calculated distance is solely based on the local times of devices A and B and the distances between the microphone and speaker on both devices. The distance between the microphone and speaker on each device can be measured beforehand, which leaves only the need to calculate the time between signals for each device, which is done locally. This means that the issue of clock skew between devices A and B does not interfere with the ranging procedure. Moreover, due to the self-recording done on both devices, all time measurements are based on the arrival of sound at the speaker. As such, the delay of issuing and executing a command becomes non-existent for BeepBeep ranging. This only leaves the difference in time between the receiving of the signal and the processing of a timestamp upon arrival. This can, however, be solved by looking at the sound recordings of the procedure and identifying the arrival of both signals there, thereby circumventing the third issue.
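As a sketch of this computation (not code from the thesis; the function name and the assumed speed of sound of 343 m/s are illustrative), equation (3.6) reduces to a one-line calculation once each device has measured its ETOA from its own recording:

```python
# Speed of sound in air at room temperature (m/s); an assumed constant.
C = 343.0

def beepbeep_distance(etoa_a, etoa_b, d_aa, d_bb):
    """Estimate the distance between devices A and B via eq. (3.6).

    etoa_a: elapsed time (s) on A between its own beep and B's beep,
            i.e. t_A3 - t_A1, read off A's recording.
    etoa_b: elapsed time (s) on B between A's beep and its own beep,
            i.e. t_B3 - t_B1, read off B's recording.
    d_aa, d_bb: speaker-to-microphone distance (m) on each device.
    """
    return C / 2.0 * (etoa_a - etoa_b) + 0.5 * (d_aa + d_bb)
```

Note that only locally measured time intervals and the fixed speaker-to-microphone offsets enter the formula, which is exactly why clock skew between the devices drops out.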

3.2

Multiple BeepBeep Ranging Scheme

The BeepBeep ranging scheme described earlier is only able to measure the distance between two devices at a time, for which both devices need to participate actively. As such, should a device need to know the distance to N other devices, it would need to perform the ranging procedure N times, needing 2N beeps to measure the distance to each target device sequentially. In addition to this, the different beeps need to be spaced out in time in such a way that they cannot be confused during the procedure. This can be expressed as

t_p = N · (t_b + t_s + t_b + t_s) − t_s
    = N · (2t_b + 2t_s) − t_s
    = 2N · (t_b + t_s) − t_s (3.7)

where t_p is the total time needed for the protocol, t_b is the duration of a beep and t_s is the spacing in time between beeps. This means that this simple approach does not scale well, given the ever-increasing time needed to perform ranging measurements between all devices. Because of this, a Multiple BeepBeep Ranging (MBR) scheme was devised.

The Multiple BeepBeep Ranging (MBR) scheme is a method for fast measurement of distances from one to N other devices. It produces this speedup by simultaneously measuring all distances between the target device and the N other devices. For this procedure, only two beeps need to be produced between the target device and one of the other devices to produce accurate range measurements between the target device and all other devices.

Figure 3.2 shows an overview of the MBR procedure. Here, device M needs range measurements from itself to devices A, B and C. A firstly produces a beep, after which M responds with a beep of its own. In the meantime, devices B and C are also listening, in addition to A and M. Because of this, they too can measure their distance to M based on the recorded sounds and time intervals. Figure 3.3 shows the event sequence of the procedure.



Figure 3.2: Schematic overview of the MBR ranging scheme. The white circles represent anchor nodes A, B and C. The black circle is a node whose distance to the anchor nodes needs to be determined. The solid lines represent the known distances between the anchor nodes. The dashed line represents the measured distance through the BeepBeep protocol. The dotted lines represent the inferred distances based on the MBR ranging scheme.

Figure 3.3: Image showcasing the timing of events for the MBR procedure between devices A, B and M. Taken from the BeepBeep paper [20].

Because of the simultaneous recording of device B, it is also able to calculate its distance to M. To this end, the following properties are listed, where c is the propagation speed of the sound signal.

T_B1 = T_0 + D_AB/c (3.8)

T_B2 = T_1 + D_MB/c (3.9)

T_M1 = T_0 + D_AM/c (3.10)

T_M2 = T_1 + D_MM/c (3.11)

The distance D_MB can be determined using these properties:

T_1 − T_0 = T_B2 − T_B1 + D_AB/c − D_MB/c (3.12)
T_1 − T_0 = T_M2 − T_M1 + D_AM/c − D_MM/c (3.13)
T_B2 − T_B1 + D_AB/c − D_MB/c = T_M2 − T_M1 + D_AM/c − D_MM/c (3.14)
D_AB/c − D_MB/c − D_AM/c + D_MM/c = T_M2 − T_M1 + T_B1 − T_B2 (3.15)
D_AB − D_MB − D_AM + D_MM = c · (T_M2 − T_M1) + c · (T_B1 − T_B2) (3.16)
D_MB = D_AB − D_AM + D_MM − c · (T_M2 − T_M1) − c · (T_B1 − T_B2) (3.17)


As can be observed, the calculation of D_MB requires the distance between A and B to be known. This could have been obtained by performing the BeepBeep protocol in the past. Furthermore, D_AM and D_MM are already known from the standard BeepBeep protocol. Because of this, the distance D_MB can be estimated based on the two beeps produced by A and M. To generalise, any device X that has been listening and knows its distance to A can determine its distance to M by:

D_MX = D_AX − D_AM + D_MM − c · (T_M2 − T_M1) − c · (T_X1 − T_X2) (3.18)

where T_X1 and T_X2 are the arrival times of A's and M's beeps in X's recording.
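The inference in equation (3.17) can be sketched as follows (an illustrative function, not the thesis implementation; the argument names and test geometry are assumptions):

```python
def mbr_distance(d_ab, d_am, d_mm, t_m1, t_m2, t_b1, t_b2, c=343.0):
    """Distance from a passive listener B to target M, inferred from the
    two beeps of a single A-M BeepBeep exchange (eq. 3.17).

    d_ab: previously known distance between anchor A and listener B (m).
    d_am: A-M distance from the standard BeepBeep exchange (m).
    d_mm: M's own speaker-to-microphone distance (m).
    t_m1, t_m2: arrival times of A's and M's beeps in M's recording (s).
    t_b1, t_b2: arrival times of the same beeps in B's recording (s).
    """
    return d_ab - d_am + d_mm - c * (t_m2 - t_m1) - c * (t_b1 - t_b2)
```

Only the two time intervals are read from each device's local recording, so, as with one-to-one BeepBeep, no clock synchronisation between the listeners is needed.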


CHAPTER 4

Relative Position Determination

This chapter gives an overview of two possible approaches to determining the relative locations of all drones in the swarm using distance measurements, thereby providing a foundation for RQ2. Firstly, section 4.1 discusses trilateration, as this is the most common approach to determining locations based on distances. This method is discussed to provide a foundation for the Multidimensional Scaling approach in section 4.2. Once both methods have been outlined, they are compared and contrasted in section 4.3.

4.1

Trilateration

Trilateration is the process of determining the position of an object based on the distances between the object and multiple reference points at known positions, which will henceforth be referred to as anchor points [24]. The concept can be visualised as finding the intersection between circles around the different anchor points, where the radius of the circle is the measured distance between the anchor point and the object in question. In a 3D environment, it can be visualised using spheres.

Figure 4.1 showcases a 2D example of trilateration. It shows four devices: the three anchor points A, B and C and a device M. To position M, distances are measured between A and M (D_AM), B and M (D_BM) and C and M (D_CM). Based on these distances, each anchor point is able to draw a virtual circle around itself. The point where the circles overlap is the location of M, given that all range measurements are correct.


Figure 4.1: This figure illustrates trilateration in 2D given three anchor points A, B and C and one object to be positioned, M.


In real-world applications, ranging measurements may contain errors. In these cases, an additional radius error may be added to the circle, creating an annulus, or to the sphere, creating a spherical shell. The overlap of these areas or volumes then holds the desired location. If the overlap is small enough, within an acceptable error range, then no further calculations are needed. Should this not be the case, then trilateration can be expressed as a local optimisation problem to find the most fitting location, here shown for three-dimensional space:

S = Σ_{i=1..N} (√((x − x_i)² + (y − y_i)² + (z − z_i)²) − D_i)² (4.1)

where S is the cost function, (x, y, z) the estimated location of the object, N the number of anchors, (x_i, y_i, z_i) the coordinates of anchor point i and D_i the measured distance between the object and anchor point i.

Gradient descent could be used to estimate the position (x, y, z) that minimises the cost function. The downside to this approach, however, is that initial values for the coordinates are needed for fast convergence. This problem can be somewhat alleviated by using previous information on the position of an object to make the function converge faster. This does not, however, solve the problem of finding the position initially.
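A minimal sketch of this idea (not the thesis implementation; the function name, learning rate and iteration count are assumptions) applies plain gradient descent to the cost S of equation (4.1):

```python
import numpy as np

def trilaterate_gd(anchors, dists, p0, lr=0.05, iters=5000):
    """Minimise S of eq. (4.1) with plain gradient descent.

    anchors: (N, 3) array of anchor coordinates.
    dists:   (N,) measured distances D_i to each anchor.
    p0:      (3,) initial position estimate (needed for convergence,
             e.g. the previously known position).
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    p = np.asarray(p0, dtype=float).copy()
    for _ in range(iters):
        diff = p - anchors                 # (N, 3) vectors to anchors
        r = np.linalg.norm(diff, axis=1)   # current distances
        # dS/dp = sum_i 2 (r_i - D_i) (p - a_i) / r_i
        grad = (2.0 * (r - dists) / r) @ diff
        p = p - lr * grad
    return p
```

As the surrounding text notes, convergence speed depends heavily on how close `p0` is to the true position; a fixed step size is used here only for simplicity.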

In [20], a new trilateration algorithm is proposed based on a tweak of the optimisation criterion (4.1):

S = Σ_{i=1..N} ((x − x_i)² + (y − y_i)² + (z − z_i)² − D_i²)² (4.2)

With this formulation (4.2), the algorithm becomes based on linear matrix operations and can thus be expressed as the least-squares solution of the following system:

S = ‖βχ − ∇‖ (4.3)

where:

χ = (x² + y² + z², x, y, z)^T

β =
| 1  −2x_1  −2y_1  −2z_1 |
| ⋮    ⋮      ⋮      ⋮   |
| 1  −2x_n  −2y_n  −2z_n |

∇ =
| D_1² − x_1² − y_1² − z_1² |
| ⋮ |
| D_n² − x_n² − y_n² − z_n² |

The least-squares solution then becomes:

χ = (β^T β)^{−1} β^T ∇ (4.4)

Thereby, by solving the above equation, we find χ, which contains the location of the object. Important to note here is that, given static anchors, β does not change and can thus be precomputed, as can parts of ∇. However, as can be observed in (4.4), β needs to be of full column rank, and as such, a minimum of four anchor nodes is necessary to use this fast computation method.
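The closed-form solve can be sketched as follows (an illustrative implementation of the linearised criterion, with assumed function name; a least-squares solver stands in for the explicit normal-equation formula of (4.4)):

```python
import numpy as np

def trilaterate_linear(anchors, dists):
    """Closed-form trilateration via the linearised criterion (4.2)-(4.4).

    Solves beta @ chi ~= grad in the least-squares sense, where
    chi = (x^2 + y^2 + z^2, x, y, z)^T. Requires at least four anchors
    in general position so that beta has full column rank.
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    # Row i: [1, -2 x_i, -2 y_i, -2 z_i]
    beta = np.hstack([np.ones((len(anchors), 1)), -2.0 * anchors])
    # Row i of the right-hand side: D_i^2 - x_i^2 - y_i^2 - z_i^2
    rhs = dists**2 - np.sum(anchors**2, axis=1)
    chi, *_ = np.linalg.lstsq(beta, rhs, rcond=None)
    return chi[1:]  # the estimated (x, y, z)
```

For static anchors, `beta` (and the anchor-dependent part of `rhs`) could indeed be precomputed, leaving only a small matrix-vector computation per update.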


4.2

Multidimensional Scaling

Multidimensional Scaling (MDS) [25] is a widely used technique for visualising data in n-dimensional space. It takes as input a dissimilarity matrix which expresses the similarity between pairs in a set and outputs a set of coordinates such that the distance between each pair is proportional to the similarity between them. This makes MDS a global optimisation algorithm. It has been used to plot higher dimensional data sets for a long time, but it has also seen use as a localisation algorithm [21, 26, 27].

Multidimensional Scaling has several different variations: Classic MDS, Metric MDS and Non-Metric MDS. Depending on the circumstances, one of these methods is appropriate. Metric MDS is a superset of Classic MDS, adding a variety of loss functions and input matrices with, for example, weights. Non-Metric MDS no longer uses exact distances between objects, but instead uses a rank-based approach, looking only at the order of the distance between objects, rather than the actual distances. As the distances between the drones in the swarm can be measured, but full ranging information is not always available, a Metric MDS approach is investigated, as it utilises the measured distances and can use weights to mitigate the effect of missing data.

4.2.1

Metric MDS

Metric MDS tries to minimise a cost function given a network consisting of N nodes in n-dimensional space, whose coordinates are unknown, to find a set of coordinates X that most closely matches the real-world locations of the nodes relative to one another. The cost function in question is:

min_X S(X) = min_X Σ_{i<j≤N} w_ij (d̂_ij − d_ij(X))² (4.5)

where w_ij is a weight expressing the confidence in the measurement d̂_ij and d_ij(X) is the Euclidean distance between nodes x_i and x_j as determined by the algorithm.

The stress function S can be minimised in different ways, such as gradient descent. One such approach is called "Scaling by MAjorizing a COmplicated Function" (SMACOF) [28], which has been shown to perform significantly better than other approaches in both computation time and accuracy [28]. SMACOF works by iteratively minimising a simple convex function which approximates the stress function S, thereby reducing the computational complexity. The convex function T(X, Z) ≥ S(X) bounds S from above and touches the surface of S at point Z, thus T(Z, Z) = S(Z).

The function T(X, Z) can be derived by first expanding S(X). The following derivation has been adapted from [28] and is shown here only to prove the validity of the previous statements:

S(X) = Σ_{i<j≤N} w_ij (d̂_ij − d_ij(X))²
     = Σ_{i<j} w_ij d̂_ij² + Σ_{i<j} w_ij d_ij²(X) − 2 Σ_{i<j} w_ij d̂_ij d_ij(X).

The first term is constant, and the second term is quadratic in X and therefore easily solved. The third term is bounded by the Cauchy-Schwarz inequality using the fact that:

d_ij(X) = ‖x_i − x_j‖ = (‖x_i − x_j‖ · ‖z_i − z_j‖) / ‖z_i − z_j‖ ≥ (x_i − x_j)^T (z_i − z_j) / ‖z_i − z_j‖ (4.6)

where Z = (z_1, ..., z_N) is a second configuration of the N nodes. This means that the third term is bounded as:

Σ_{i<j} w_ij d̂_ij d_ij(X) ≥ Σ_{i<j} w_ij d̂_ij (x_i − x_j)^T (z_i − z_j) / ‖z_i − z_j‖ (4.7)


We can thus rewrite S(X) ≤ T(X, Z) as:

S(X) ≤ T(X, Z) = Σ_{i<j} w_ij d̂_ij² + Σ_{i<j} w_ij d_ij²(X) − 2 Σ_{i<j} w_ij d̂_ij (x_i − x_j)^T (z_i − z_j) / ‖z_i − z_j‖ (4.8)

This means that T (X, Z) can be written in matrix form:

T(X, Z) = C + tr(X^T V X) − 2 tr(X^T B(Z) Z) (4.9)

where the elements of V and B(Z) are defined as follows:

v_ij = −w_ij if i ≠ j,
v_ii = Σ_{k≠i} w_ik

b_ij = −w_ij · d̂_ij / d_ij(Z) if i ≠ j (and 0 if d_ij(Z) = 0),
b_ii = −Σ_{k≠i} b_ik

All of this leads to a less computationally intensive calculation than minimising the original function directly:

X = argmin_X T(X, Z) = V^{−1} B(Z) Z (4.10)

The SMACOF algorithm can then be summarised as follows: initialise starting positions X(0). Then, calculate the new set of positions as determined by min_X T(X, Z). After this, compare the stress score S(X) between the previous positions and the new positions. If the difference is below some threshold, the algorithm returns the new set of coordinates; otherwise, the calculation is repeated until it is.
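The iterative scheme above can be sketched as follows. This is a minimal, unweighted variant (all w_ij = 1), not the thesis code: the function name, stopping rule and iteration count are illustrative, and since V = N·I − 11^T is singular with unit weights, the Moore-Penrose pseudoinverse stands in for V^{−1} in the update of equation (4.10):

```python
import numpy as np

def smacof(d_hat, dim=2, x0=None, iters=500, tol=1e-12, seed=0):
    """Minimal unweighted SMACOF sketch, iterating the update (4.10).

    d_hat: (N, N) symmetric matrix of measured distances.
    Returns coordinates, determined up to rotation/translation/reflection.
    """
    n = d_hat.shape[0]
    x = (np.random.default_rng(seed).normal(size=(n, dim))
         if x0 is None else np.array(x0, dtype=float))
    # With unit weights V = n*I - 11^T is singular; use its pseudoinverse.
    v_pinv = np.linalg.pinv(n * np.eye(n) - np.ones((n, n)))
    prev = np.inf
    for _ in range(iters):
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        with np.errstate(divide="ignore", invalid="ignore"):
            b = np.where(d > 0, -d_hat / d, 0.0)   # off-diagonal b_ij
        np.fill_diagonal(b, 0.0)
        np.fill_diagonal(b, -b.sum(axis=1))        # b_ii = -sum_k b_ik
        x = v_pinv @ b @ x                         # Guttman transform
        stress = np.sum((d_hat - d)[np.triu_indices(n, 1)] ** 2)
        if prev - stress < tol:                    # stress stopped decreasing
            break
        prev = stress
    return x
```

The stress is guaranteed to be non-increasing under this update, which is why comparing successive stress values is a valid stopping criterion; a weighted version would build V and B(Z) from the confidence weights w_ij instead.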

4.3

Trilateration versus MDS

Both trilateration and Multidimensional Scaling have their advantages and disadvantages. Tri-lateration can be computed using a fast computation method, as explained in section 4.1. It is dependant on solving a least-squared equation. On the other hand, MDS might, in some cases, require multiple runs using differing starting values to find a good solution that has a low stress value. Furthermore, the algorithm tries to find a solution that converges, which might take more time. However, it has been shown that MDS performs better both when it comes to accuracy and robustness under missing values than trilateration [21].

This thesis has chosen to focus on MDS due to its considerably better performance than trilateration. The calculation time of the MDS algorithm is evaluated in chapter 6 to test its viability on limited hardware.


CHAPTER 5

Design and Implementation

This chapter gives an overview of the design and implementation of the sound-based relative positioning system. Section 5.1 outlines the method by which range measurements are taken, thereby answering the implementation part of RQ1 (how can distances between drones be measured using sound?). Section 5.2 shows how those measurements are used to create a drone constellation, answering the implementation part of RQ2 (how can the relative positions of all drones in the swarm be determined using only distance measurements?).

5.1 Range Measurements

The sound-based BeepBeep protocol was chosen to achieve high-precision range measurements. As described in chapter 3, BeepBeep supports both one-to-one ranging and simultaneous multi-node ranging. For both methods, the accurate detection of the sound signal is critical for the accuracy of the protocol as a whole. Two factors make the detection of the sound signal more difficult. First of all, ambient noise and distance play a large part in the deterioration of the signal, especially since each drone produces noise by itself. Secondly, the timing of the sound signal is important, as multiple sound signals may be mistaken for one another.

A solution for the second issue is proposed in [21]. This approach assigns a specific sound frequency to each drone, which the drones use to signal one another. This allows each drone to be identified based on its sound and allows the emission of multiple sound signals at a time. However, this approach also makes the detection of the sound more difficult, as the full frequency spectrum now needs to be analysed. Moreover, the hardware requirements increase: the sound emitter needs to be able to produce specific sound frequencies, and the microphone needs to be able to record and differentiate the different signals with high accuracy, as signals might otherwise be mixed up. Due to these sound detection issues, this thesis has chosen to focus on the construction of the drone constellation and to simulate the range measurements.

The simulation of the range measurements works as follows. First, each drone is assigned a random position. Each drone then starts requesting measurements from all other drones by broadcasting its position. Upon receiving a request, a drone produces a measurement by calculating the Euclidean distance between itself and the received position. This measurement is then broadcast to all drones. In the meantime, each drone updates its simulated position based on a movement parameter.

The measurements taken by the system have some additions to better approximate the behaviour of the sound-based system. When a measurement is taken, a small error in the interval (−Er, Er) is added to approximate the ranging error as it occurs in BeepBeep [20]. The reported ranging error is approximately 2 cm with a deviation of 2 cm; as such, a ranging error of Er = 4 cm has been chosen. The sound-based system is further simulated by introducing a failure chance for the procedure: when a measurement is taken, there is a failure chance based on the simulated distance between the drones. The failure chance used here is based on the results from the experiments in section 6.1, where it was found that after approximately 15 metres, not all range measurements succeeded. The following formula was therefore used to approximate the chance of a successful range measurement:

P(x) = \begin{cases} 1 & \text{if } x < 15, \\ -\frac{(x-5)(x-25)}{100} & \text{if } 15 \leq x \leq 25, \\ 0 & \text{if } x > 25. \end{cases} \qquad (5.1)

where P(x) is the chance to detect sound at a distance of x metres. The following graph visualises the effect of the equation by showing the probability of detecting sound at different ranges:

[Figure 5.1 omitted: plot of the successful sound detection probability against distance (m).]

Figure 5.1: This figure depicts the probability of detecting sound at different ranges.

As can be seen in figure 5.1, in the range of 15 to 25 metres there is an ever-decreasing likelihood of detecting sound signals. This means that, once the distance between drones grows to this extent, distance measurements may fail. However, as the range measurement protocol is executed frequently, missing some signals is tolerable.
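As a concrete sketch of how equation 5.1 and the ranging error enter the simulation, the following Python fragment models a single simulated measurement. The function names are illustrative, not taken from the thesis code.

```python
import random

def detection_probability(x):
    """P(x) from equation 5.1: chance that a signal at distance x
    (metres) is detected."""
    if x < 15:
        return 1.0
    if x <= 25:
        return -(x - 5) * (x - 25) / 100
    return 0.0

def simulated_measurement(true_distance, ranging_error=0.04):
    """One simulated BeepBeep measurement: the true distance plus a
    uniform error in (-Er, Er), or None when detection fails."""
    if random.random() >= detection_probability(true_distance):
        return None  # the sound signal was not detected
    return true_distance + random.uniform(-ranging_error, ranging_error)
```

A failed measurement (None) corresponds to a distance of 0 being reported to the constellation construction process described below.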

5.2 Constellation Construction

The constellation construction process uses the previously detailed measurements to create a representation of the swarm relative to the executing drone. Multidimensional Scaling is used to achieve this, as explained in section 4.2. The particular MDS implementation that is used is provided by scikit-learn (sklearn). It has both a metric and a nonmetric implementation, of which the metric version is used. The implementation does not support weights, but it ignores supplied distances of 0, which is similar to assigning those measurements a weight of zero.

The sklearn implementation runs the SMACOF algorithm a variable number of times, each time with different initialisation values, in this case coordinates. The stress of each result is then compared, and the result with the lowest stress value is returned. The stress is calculated by comparing the distances generated by the algorithm to the provided distances; lower stress signifies a higher similarity.

The implementation allows the user to provide the initialisation values. This is useful, as previous measurements can then be used for potentially quicker convergence of the algorithm. Should all drones be stationary, then supplying the previous results to the algorithm should result in fast convergence. As long as the algorithm is executed frequently, drones should not have moved far from their previous positions, depending on their speed. As such, the previous results give a good indication of the new constellation and should thus reduce the time to convergence and provide better results.
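The described use of the sklearn implementation could look roughly as follows. This is a hedged sketch rather than the exact thesis code: the function name and the choice of n_init=4 are illustrative, and a previous constellation, when available, is passed via the init parameter of fit_transform to warm-start SMACOF.

```python
import numpy as np
from sklearn.manifold import MDS

def build_constellation(distances, previous=None):
    """Embed an (N, N) matrix of measured pairwise distances into 3D
    coordinates relative to an arbitrary origin. `previous` may hold
    previously computed coordinates to warm-start the SMACOF runs."""
    mds = MDS(n_components=3, metric=True, dissimilarity="precomputed",
              n_init=1 if previous is not None else 4)
    return mds.fit_transform(distances, init=previous)
```

Because the embedding is only defined up to rotation, translation and reflection, the returned coordinates are relative, which is exactly what the constellation requires.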



5.2.1 Location Prediction

A method to improve the results from MDS is the prediction of future drone locations. Using the previous estimates of the drone locations as generated by MDS, the new locations of the drones can be approximated, as long as the drones do not drastically change direction or speed. The new locations are estimated from the previous n estimated coordinate matrices, where more weight is placed on the most recent results. Subtracting the coordinate matrices of two consecutive results yields an estimate of the displacement of each drone; comparing the previous n results in this way yields n − 1 displacements. The weighted average of these displacements is then calculated, where more recent displacements weigh more heavily than older ones. The process is outlined in the following pseudocode:

Algorithm 1: Location Prediction
  input : n matrices Mc[0], ..., Mc[n − 1] of coordinates, oldest first
  output: matrix Me with the expected locations
1 Me ← 0; factor ← 0;
2 for i ← 1 to n − 1 do
3     Me ← Me + i · (Mc[i] − Mc[i − 1]);
4     factor ← factor + i;
5 end
6 Me ← Mc[n − 1] + Me / factor;

This method could mitigate missing range measurements in the drone swarm, as it can predict where the drones will be even when data is missing, assuming no sudden changes in direction. This method could thus satisfy RQ3 (how can missing information in the drone swarm be mitigated?). The influence of this prediction is investigated in section 6.2.5.
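A minimal NumPy version of algorithm 1 could look as follows; it assumes the constellations are given oldest first and returns the latest constellation shifted by the weighted average displacement.

```python
import numpy as np

def predict_locations(constellations):
    """Predict the next (N, dim) coordinate matrix from a list of
    previous constellations, oldest first. Recent displacements weigh
    more heavily than older ones."""
    n = len(constellations)
    if n < 2:
        return constellations[-1]          # nothing to extrapolate from
    delta = np.zeros_like(constellations[0], dtype=float)
    factor = 0
    for i in range(1, n):                  # the n - 1 successive displacements
        delta += i * (constellations[i] - constellations[i - 1])
        factor += i
    return constellations[-1] + delta / factor
```

For drones moving in a straight line at constant speed, the prediction is exact; abrupt turns degrade it, as noted above.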


CHAPTER 6

Experiments

Numerous experiments have been conducted to investigate the viability of the described sound-based relative positioning system. First, the limitations of the employed hardware were investigated in section 6.1, specifically when it comes to the detection of sound. Section 6.2 then evaluates the positioning system itself in a simulated environment.

6.1 Sound Detection Experiment

This experiment has been conducted to measure the ability of a microphone to detect a sound signal. The microphone in question is the integrated microphone of the PYNQ-Z1 board. The experiment aims to find a distance threshold past which sound detection becomes unreliable. The following setup was used to record the signals (figure 6.1):

[Figure 6.1 omitted: schematic of the recording setup with microphone R, noise speaker N and reference speaker B.]

Figure 6.1: This schematic depicts the sound recording setup. The square indicates the PYNQ-Z1 with the integrated microphone R. N indicates a speaker resting on the board, which produced the constant background noise of flying drones. Finally, B indicates a speaker that produced a constant 9000 Hz sound signal, which is used as a reference signal. The distance between N and R is 5 cm; the distance between B and R is 10 cm.

The PYNQ-Z1 needs to be able to record the sound signal even under noisy conditions such as in a drone swarm. As such, drone noise was produced during all recordings. Furthermore, a second sound signal was produced to measure the relative intensity of the emitted sound signal. The second signal was produced at 9000Hz. The recordings were done at different distances, as shown by the following schematic (figure 6.2):

1 https://reference.digilentinc.com/reference/programmable-logic/pynq-z1/start


[Figure 6.2 omitted: schematic of the measurement area (dimension labels 2.16 m and 1.65 m) with speaker S and measurement locations R2.5 up to R30.]

Figure 6.2: This schematic depicts the area in which the sound has been measured. The white circle S indicates the speaker from which the sound was emitted. The black circles Rx indicate locations where the sound has been measured, where x is the distance in metres between that location and the speaker. The ceiling was 2.5 m high and was made of pipes, which both produced some measure of background noise and reduced echoes.

As can be seen in figure 6.2, sound interference is a possibility due to both the indoor environment and the shape of the area. However, the effects of sound interference in this sort of environment are outside the scope of this project. Because of this, sound interference is only offered as an explanation for observed results in the subsequent experiments rather than investigated further.

The experiment was conducted as follows: the recording device was placed at one of the distances depicted in figure 6.2. Then, a constant sound signal was emitted from S at 5000 Hz. Simultaneously, drone sounds were played, as well as a signal at 9000 Hz, as depicted in figure 6.1. Finally, five separate recordings of two seconds each were made by the microphone. A signal spectrum was generated for each recording. Using visual inspection of the signal spectrum, it was determined whether the sound signal at 5000 Hz was present. If it was, the relative intensity as compared to the 9000 Hz signal was calculated; otherwise, it was deemed a failed measurement and an intensity of 0 was assigned. The results are shown in figures 6.3 and 6.4.
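The visual inspection step can be approximated in code. The following sketch is illustrative (the sample rate and detection threshold are assumptions, not the procedure actually used): it computes the magnitude spectrum of a recording, declares the 5000 Hz signal absent when it does not rise well above the background level, and otherwise reports its intensity relative to the 9000 Hz reference.

```python
import numpy as np

def tone_intensity(spectrum, freqs, freq, bandwidth=50.0):
    """Peak spectral magnitude within `bandwidth` Hz of `freq`."""
    band = (freqs > freq - bandwidth) & (freqs < freq + bandwidth)
    return spectrum[band].max()

def relative_intensity(signal, fs, target=5000.0, reference=9000.0,
                       threshold=5.0):
    """Intensity of the target tone relative to the reference tone, or
    0.0 (a failed measurement) when the target does not rise `threshold`
    times above the median spectral level (the background noise)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    peak = tone_intensity(spectrum, freqs, target)
    if peak < threshold * np.median(spectrum):
        return 0.0  # signal lost in the background noise
    return peak / tone_intensity(spectrum, freqs, reference)
```

The median of the magnitude spectrum serves as a crude noise-floor estimate here; the actual pass/fail judgement in the experiment was made by eye.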

[Figure 6.3 omitted: relative sound intensity plotted against distance to the sound source (m).]

Figure 6.3: This image depicts the relative sound intensity of the 5000 Hz signal as compared to the 9000 Hz signal.

Figure 6.3 shows that the intensity of the sound diminishes over greater distances. However, at a distance of 5 metres there is a dip in intensity. It is suspected that this is due to environmental factors, which could cause destructive interference, as the experiment was conducted in an enclosed space. Furthermore, at distances of 22.5 and 27.5 metres there is a slight rise in intensity. These points could be explained by constructive interference of the sound signal, as the hallway in which this was tested was not entirely smooth. Finally, at distances of 25 and 30 metres, no sound signal was detected that rose above the background noise.

[Figure 6.4 omitted: fraction of successful signal detections plotted against distance to the sound source (m).]

Figure 6.4: This image depicts the fraction of successful sound signal detections at different distances.

Figure 6.4 shows that more sound detection failures occur at greater distances. Figure 6.3 corroborates this, as it shows lower intensity values for those points where more failures occur. After 15 metres, failures start to occur. However, there were no failures at 22.5 metres, which again is most likely due to constructive interference.

Based on these findings, there is a chance that the BeepBeep protocol fails after 15 metres, although up to 22.5 metres there is still a good chance for the protocol to succeed. These findings were used to construct equation 5.1, which simulates the chance of detecting a sound signal. Furthermore, a maximum reliable detection range of 15 metres bodes well for RQ1 (how can distances between drones be measured using sound?).

6.2 Positioning System Experiments

Several experiments have been conducted in the simulated environment described in chapter 5. This section first outlines the simulated environment in section 6.2.1 and the experimental setup in section 6.2.2. Then, the actual experiments are discussed. Section 6.2.3 investigates the impact of velocity on the average position error. Section 6.2.4 experiments with the number of drones in the swarm and how it relates to the positioning error. Following this, the location prediction algorithm is tested in section 6.2.5. Finally, the update time of the constellation construction process is investigated in section 6.2.6.

6.2.1 Environment

Each of the experiments with the drones was conducted in a simulated environment. This means that none of the drones flew; instead, separate PYNQ-Z1 boards were used as if they were mounted on drones. A network has been set up between these boards to allow for communication [29], so network delays are a factor in all of the following experiments.

The simulated environment itself works as follows: each board, hereafter referred to as a drone, that participates in the experiment runs an instance of the environment. This instance includes a network connection to all other participating drones. During the initialisation process, the virtual room is decided upon, both in shape and size. During startup, each drone assigns itself a starting location within the virtual room. The drone does not use this knowledge for any calculations pertaining to the relative positioning system; the location only exists to allow for meaningful range measurements to be conducted. Once each drone has a position, it is assigned a direction and velocity. While flying, each drone is assigned a new direction approximately every two seconds, causing it to turn slowly towards the newly assigned direction. Each drone regularly updates its virtual location based on its direction and velocity. Should it encounter the edge of the virtual room, it is directed back into the room by reversing its direction to move away from the edge.
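The movement update described above can be sketched as follows, assuming a spherical room centred on the origin (the exact geometry handling in the thesis code may differ).

```python
import numpy as np

def step(position, direction, velocity, dt, radius):
    """Advance one virtual drone by dt seconds inside a spherical room of
    the given radius; at the edge, the direction is reversed so the drone
    moves back into the room."""
    new_pos = position + direction * velocity * dt
    if np.linalg.norm(new_pos) > radius:   # would leave the virtual room
        direction = -direction
        new_pos = position + direction * velocity * dt
    return new_pos, direction
```

Each drone runs this update on its own board; only the resulting simulated distances, not the positions themselves, feed into the positioning system.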

The following schematic shows a possible starting state for a spherical room with eight participating drones:

Figure 6.5: This figure depicts a common setup for the simulation. Each drone is assigned a random position within certain bounds, in this case a spherical room. Then, each drone is given a random velocity, within certain bounds, and a direction in which it will move.

6.2.2 Experimental Setup

An experimental setup had to be made to conduct experiments using the simulated environment. As stated previously, PYNQ-Z1 boards were used to simulate the drones. These devices were used because they could theoretically be mounted on a drone and used as a control unit. All boards were connected to a switch using Ethernet Cat 5e cables. In total, two switches were used, themselves connected by such a cable, so that each board could reach every other board through the switches. Each board was running the latest image supplied by Xilinx at the time of writing, PYNQ v2.4.

5 https://reference.digilentinc.com/reference/programmable-logic/pynq-z1/start
6 https://github.com/Xilinx/PYNQ/releases/tag/v2.4


Figure 6.6: This image depicts a bird’s eye view of the experimental setup using PYNQ-Z1 boards that was used for the experiments conducted in this thesis.

6.2.3 Drone Velocity

Multiple factors might impact the average distance error. One such factor is the velocity by which drones move. As their velocity increases, so do the distances they cover between updates of their relative position constellations. As previous constellations can be used by the MDS algorithm to converge more quickly on a new constellation, it is expected that greater changes might cause greater errors. Because of this, some experiments have been conducted to measure this change.

Three experiments have been conducted under different circumstances. In each experiment, the velocity was slowly raised over time from 0 m/s to 10 m/s. The experiments were done in three rooms of different sizes. Each test has been run seven times using different initialisations. The results below are averaged over all eight drones that were used in the experiments.

[Figure 6.7 omitted: average distance error (m) plotted against velocity (m/s) in a 7.5 m spherical space.]

Figure 6.7: This figure shows how the average distance error between any two drones, based on their calculated relative positions, changes as the speed of the drones increases. The experiment was conducted in a virtual spherical room with a radius of 7.5 metres.

As can be observed in figure 6.7, the average distance error grows as the velocity increases. The initial spike in the average distance error can be explained as a result of each drone initialising its constellation construction process. Within the 7.5 m spherical room, the maximum distance between two drones is 15 metres; as such, all range measurements always succeed. It therefore seems likely that the increased mobility of the drones negatively impacts the constellation construction process. However, on average the distance error remains relatively low, staying under 1 metre.

[Figure 6.8 omitted: average distance error (m) against velocity (m/s), and a missing-value histogram against velocity.]

(a) This figure shows how the average distance error between any two drones, based on their calculated relative positions, changes as the speed of the drones increases. The experiment was conducted in a virtual spherical room with a radius of 15 metres.

(b) This histogram shows the occurrences of the number of missing distance values as the velocity increases. A missing distance value represents a failed BeepBeep protocol ranging attempt. The data is taken over different runs, and all missing values are added over these runs and displayed here.

Figure 6.8: The figures depict the impact of the velocity of drones on the average distance error in a 15 m spherical room.

The average distance error in figure 6.8a is higher than that in figure 6.7. It can also be observed that the average distance error does not increase as speed increases; it even seems to decrease slightly, but this could be due to the random behaviour of the drones. In the interval of 0 to 2 m/s, a significant rise in the average distance error can be observed, which coincides with an increase in the number of missing values depicted in figure 6.8b.

Comparing figure 6.7 to the figures in 6.8, it seems that the velocity has a small impact on the average distance error. The number of missing distance values has a greater impact than the velocity, as figure 6.8a shows, on average, a higher average distance error than figure 6.7.

6.2.4 Number of Drones

Another factor that might influence the accuracy of the relative positioning system is the number of drones used. As more drones join the swarm, more measurement data becomes available, and thus more accurate positioning becomes possible. However, with more drones joining the swarm, it might also be the case that some drones can no longer execute the BeepBeep protocol due to their distance to other drones. It can be expected that, as long as the drones stay together, more drones produce more accurate results than fewer drones. Should they not behave in that way, however, the average distance error should increase faster than that of smaller groups.

Five experiments were conducted to investigate the impact of varying the number of drones, ranging from four to eight drones per experiment. The drones are each assigned an average speed of 2 m/s and move randomly through the virtual space, changing direction every so often. The virtual space is spherical, starting with a radius of five metres; as time goes on, the radius of the room increases. The drones also make use of location prediction, using the previous two constructed constellations to predict the next one. Each of the following tests shows the data averaged over all participating drones in the swarm as well as over seven different runs.


[Figure 6.9 omitted: average distance error (m) and missing-value histogram plotted against room range (m), four drones.]

(a) This graph shows the average distance error between two drones as the range, and thus the size, of the spherical room increases.

(b) This histogram shows the occurrences of the number of missing distance values as the size of the spherical room increases. A missing distance value represents a failed BeepBeep protocol ranging attempt. The data is taken over different runs, and all missing values are added over these runs and displayed here.

Figure 6.9: The above figures show the results for the experiment on the average distance error of the drone constellation using four drones.

Figure 6.9a shows the rise of the average distance error as the possible space that the four drones can move in increases. This coincides with the accompanying figure 6.9b showing the missing distance measurements as the space grows, which shows that the average distance error increases when measurements are lost due to too large distances between drones. This also explains why the rise starts at around a room range of 10 m: at this point the diameter of the spherical room becomes 20 m, which is around the point at which, on average, half of all range measurements fail in the simulation. This does not, however, explain the relatively high average distance error at the start. That can be explained as the constellation construction procedure starting up and beginning to converge on a constellation, thereby producing relatively high error values at the start. Figure 6.9b also shows that initially some values are missing, most likely because the constellations are already being built before all range measurements have taken place.


[Figure 6.10 omitted: average distance error (m) and missing-value histogram plotted against room range (m), five drones.]

(a) This graph shows the average distance error between two drones as the range, and thus the size, of the spherical room increases.

(b) This histogram shows the occurrences of the number of missing distance values as the size of the spherical room increases. A missing distance value represents a failed BeepBeep protocol ranging attempt. The data is taken over different runs, and all missing values are added over these runs and displayed here.

Figure 6.10: The above figures show the results for the experiment on the average distance error of the drone constellation using five drones.

Figure 6.10a is similar to figure 6.9a. Both initially have high average distance errors, which then decrease before rising again at around 10 metres. However, the incline is steeper than that of the previous graphs, most likely due to more missing values, as can be seen in figure 6.10b. Figure 6.10a does show a large decrease in the average distance error at around 27 metres, coinciding with fewer missing values in figure 6.10b. This is most likely due to the random nature of the experiment, as this could be a point at which most of the drones coincidentally met.

[Figure 6.11 omitted: average distance error (m) and missing-value histogram plotted against room range (m), six drones.]

(a) This graph shows the average distance error between two drones as the size of the spherical room increases.

(b) This histogram shows the occurrences of the number of missing distance values as the size of the spherical room increases. A missing distance value represents a failed BeepBeep protocol ranging attempt. The data is taken over different runs, and all missing values are added over these runs and displayed here.

Figure 6.11: The above figures show the results for the experiment on the average distance error of the drone constellation using six drones.

Figure 6.11a follows the graph in figure 6.10a, showing similar increases in the average distance error. However, the incline seems steeper for six drones as compared to five. This most likely follows from the increased number of missing values when comparing figure 6.10b and figure 6.11b, as with more drones there is a higher potential for any one drone to be out of range of another. Furthermore, figure 6.11a does not show the same valley in the average distance error at around 27 metres as figure 6.10a does, which would indicate that the previous valley was most likely due to the randomness of the behaviour.

[Figure 6.12 omitted: average distance error (m) and missing-value histogram plotted against room range (m), seven drones.]

(a) This graph shows the average distance error between two drones as the size of the spherical room increases.

(b) This histogram shows the occurrences of the number of missing distance values as the size of the spherical room increases. A missing distance value represents a failed BeepBeep protocol ranging attempt. The data is taken over different runs, and all missing values are added over these runs and displayed here.

Figure 6.12: The above figures show the results for the experiment on the average distance error of the drone constellation using seven drones.

Figure 6.12a is similar to the previous graphs in that it shows the same increases in the average distance error. However, the incline is even greater, reaching higher average distance error values than any of the graphs before. Figure 6.12b is also similar, only showing more missing values than the previous graphs.

[Figure 6.13 omitted: average distance error (m) and missing-value histogram plotted against room range (m), eight drones.]

(a) This graph shows the average distance error between two drones as the size of the spherical room increases.

(b) This histogram shows the occurrences of the number of missing distance values as the size of the spherical room increases. A missing distance value represents a failed BeepBeep protocol ranging attempt. The data is taken over different runs, and all missing values are added over these runs and displayed here.

Figure 6.13: The above figures show the results for the experiment on the average distance error of the drone constellation using eight drones.

The incline of the average distance error in figure 6.13a is even steeper than with seven drones. Figure 6.13b shows more occurrences of missing values than the previous graphs.

Comparing figures 6.9, 6.10, 6.11, 6.12 and 6.13, it becomes apparent that there is a steady increase in the inaccuracy as the volume of the room increases. Moreover, as more drones fly in a swarm, there is a higher potential for missing values and thus an increase in the average distance error of the created constellations. It should be noted that for these experiments, all drones flew in a random pattern and did not follow some protocol, as could be expected in an actual swarm. This meant that the drones did not stay close together, causing more distance values to be missing, which in turn increased the average distance error. This shows that for larger swarms it becomes important for drones to stay close together, as otherwise the accuracy of the swarm constellation as a whole is impacted.

6.2.5 Location Prediction

These experiments were conducted to investigate the impact of location prediction, as outlined in section 5.2.1. The location prediction algorithm tries to predict the location of all drones in the new constellation based on their positions in the previously constructed constellations. This prediction can then be used to initialise the MDS algorithm. If the prediction is accurate, then MDS needs less time to converge, and the final result should have a lower average distance error. However, should drones radically alter their movement pattern, then a prediction might instead lead to greater convergence time and worse results.

Several variations of location prediction have been tested, each using a different number of constellations, to investigate the impact of location prediction. 'Using' a constellation means, in this context, taking that constellation into account when predicting the next locations of the drones. Using no prediction has also been tested as a baseline measurement.

The following tests were conducted in a spherical room with a radius of five metres, which slowly increases over time. All drones fly at a constant speed between two and three m/s, and they change their direction over time. Furthermore, eight drones were used in the following tests.

[Figure 6.14 omitted: normalized average distance error against room range (m), prediction using two constellations versus no prediction.]

Figure 6.14: This figure depicts the impact that using location prediction has on the average distance error. The impact of location prediction using two constellations is depicted relative to using no prediction.

Figure 6.14 shows the average distance error of using two constellations for prediction relative to using no prediction. It can be observed that using two constellations for prediction caused two spikes in the average distance error. The first spike can be attributed to the randomness observed at the start of the constellation construction process. The second spike happens just before the 10 m range mark; around this range, distance values may start to go missing, so this spike can most likely be attributed to a missing distance value.

Using two constellations for prediction seems to perform well overall, but not significantly better than using no prediction. The average distance error fluctuates around the baseline of using no prediction. It cannot be conclusively said that the location prediction had a significant impact on the accuracy of the constellation.

[Figure 6.15 omitted: normalized average distance error against room range (m), prediction using three constellations versus no prediction.]

Figure 6.15: This figure depicts the impact that using location prediction has on the average distance error. The impact of location prediction using three constellations is depicted relative to using no prediction.

Figure 6.15 shows more fluctuating results than figure 6.14. Two spikes occur around the 10m range once again, most likely for the same reasons as before. Overall, using three constellations for location prediction seems to perform slightly worse than using no prediction: until the 15m range, its average distance error is somewhat higher, after which the accuracy fluctuates around the no-prediction baseline. While not conclusive, it appears that using three constellations for location prediction performs worse than using no prediction.


[Plot: normalized average distance error vs. room range (m), for prediction using 4 constellations and no prediction]

Figure 6.16: This figure depicts the impact that using location prediction has on the average distance error. The impact of location prediction using four constellations is depicted relative to using no prediction.

Figure 6.16 looks similar to figure 6.15, with spikes in roughly the same areas, but shows somewhat better results. From around the 13m range onward, location prediction using four constellations performs slightly better than using no prediction. At the lower ranges, especially before the 10m mark, it performs somewhat worse, apart from the valley in the average distance error at the very start, which is once again most likely caused by random behaviour. Overall, using four constellations for location prediction performs better at the higher ranges, when more values are missing, but not at the lower ranges.

[Plot: normalized average distance error vs. room range (m), for prediction using 5 constellations and no prediction]

Figure 6.17: This figure depicts the impact that using location prediction has on the average distance error. The impact of location prediction using five constellations is depicted relative to using no prediction.

Figure 6.17 once again shows spikes in the average distance error before the 10m range. Overall, using five constellations for location prediction seems to do worse than using no prediction until the 13m range, at which point it starts to perform at a similar level to using no prediction.

Comparing all results, using more constellations for predicting the next location of the drones does not decrease the average distance error, and even increases it at smaller room ranges. Location prediction only starts to perform on par with using no prediction once a room range of around 13 metres has been reached. The lower accuracy at the smaller ranges can be explained by the relatively high speed of the drones compared to the room size at that point. As the drones were flying at between 2 and 3 m/s in a room with an initial radius of five metres, they quickly hit one of the edges. When they did, they were suddenly redirected away from the edge. Location prediction then tends to place the drone beyond the edge of the room, whereas using no prediction still localises the drone inside it, so no prediction can perform better in these cases. Once the room starts to grow in size, this problem appears less frequently, and location prediction at times even slightly outperforms using no prediction.
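This overshoot can be made concrete with a small worked example (the numbers are hypothetical): a drone approaching a wall bounces back, while linear extrapolation from its last two observed positions places it beyond the wall.

```python
# Last two observed x-positions (m) of a drone heading for a wall at x = 5 m:
last_two = [3.0, 4.0]

actual_next = 3.0                          # the drone was redirected and bounced back
predicted = 2 * last_two[1] - last_two[0]  # linear extrapolation: 5.0 m, outside the room
no_prediction = last_two[1]                # 4.0 m, still inside the room

# The extrapolation error is twice that of simply reusing the last position:
print(abs(predicted - actual_next), abs(no_prediction - actual_next))  # 2.0 1.0
```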

More research into location prediction for MDS localisation seems warranted before conclusions can be drawn. As it stands, the current data indicates that using more constellations adversely impacts the accuracy of the algorithm at shorter ranges, when all information is available and sudden changes in direction occur. However, a more sophisticated algorithm than the one proposed in section 5.2 could provide better results. Moreover, when drones fly in formation, more is known about their future locations; when movement is not random, location prediction could provide better results.

6.2.6 Constellation Update Time

A final important measurement when considering this process is the time it takes to update the constellation. This is an important metric, as more frequent updates mean more up-to-date information, which allows for better decision making when it comes to, for example, collision avoidance. As such, the time between updates has been recorded during several of these experiments; the findings are displayed in figure 6.18.
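One way such update times could have been sampled is with a simple wall-clock wrapper around the construction call. A minimal sketch, in which `construct_constellation` is a hypothetical stand-in for the real construction routine:

```python
import time

def timed_update(construct, *args):
    """Run one constellation update and return (result, elapsed seconds)."""
    t0 = time.perf_counter()
    result = construct(*args)
    return result, time.perf_counter() - t0

# Hypothetical stand-in for the real construction routine:
def construct_constellation(n_drones):
    return [(i, i, i) for i in range(n_drones)]

constellation, elapsed = timed_update(construct_constellation, 8)
print(f"update took {elapsed:.6f} s")
```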

[Scatter plot: time between updates (s) vs. number of drones (4 to 8), individual time measurements with the average time overlaid]

Figure 6.18: This figure depicts a scatter plot of time measurements from the constellation construction process. Each black dot represents an individual time measurement; the red line represents the average update time for each number of drones. On average, 13,000 measurements were performed per number of drones.


Figure 6.18 shows that the constellation construction time increases linearly with the number of drones taken into account during construction. The red line is the result of fitting a linear function to the data by minimising the squared error:

E = \sum_{i=0}^{n} \lVert f(x_i) - y_i \rVert^2    (6.1)

where E is the error, n is the number of measurements, f is the linear function f(x) = ax + b, and x_i is the number of drones corresponding to the measured time y_i. The resulting fit has a gradient of 0.055, which means that the time between updates can be expressed as:

t_u = 0.055 \cdot N    (6.2)

where N is the number of drones. However, several outliers can be observed, with some updates taking upwards of 2.5 and even 3.0 seconds. In these cases, either the CPU was busy at that moment, or the constellation construction process could not converge. Either way, such high update times are relatively infrequent, as shown in figure 6.19.
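The fit of equation (6.1) is an ordinary least-squares line and can be reproduced with `numpy.polyfit`. A minimal sketch, using synthetic timing samples generated around the reported gradient of 0.055 rather than the thesis measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
drones = np.repeat(np.arange(4, 9), 100).astype(float)       # x_i: swarm sizes 4..8
times = 0.055 * drones + rng.normal(0.0, 0.02, drones.size)  # y_i: synthetic update times

# A degree-1 polyfit minimises sum_i (a * x_i + b - y_i)^2, i.e. equation (6.1).
a, b = np.polyfit(drones, times, 1)
print(f"gradient a = {a:.3f}, intercept b = {b:.3f}")  # a close to 0.055
```

Note that a degree-1 fit also yields an intercept b; equation (6.2) keeps only the gradient.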

[Density plot: distribution of update times (s) for 4 to 8 drones]

Figure 6.19: This figure depicts the distribution of the update times for the constellation construction for differing numbers of drones.

Figure 6.19 depicts the distribution of the update times. Several peaks can be observed in the density; they differ per number of drones but generally lie in the same areas. The highest peak lies in the 0.10s to 0.25s range, with the four-drone constellations peaking at the lower end and the eight-drone constellations at the higher end of this range. The second highest peak lies at around the 0.60s mark, where
