
Bachelor Informatica

Sound based 3D localisation of mobile nodes in a wireless network for node tracking

Gawan Dekker

June 8, 2018

Supervisor(s): drs. A. (Toto) van Inge

Informatica, Universiteit van Amsterdam


Abstract

In this research we propose a method for determining the relative positions in 3D space of nodes in a network, using sound at different frequencies. We have devised a method in which each node sends a signal to every other node, and nodes can identify which node sent which signal. By determining the start of each signal and exchanging this information, two nodes can calculate the distance between them. When enough distances between pairs of nodes have been calculated, each node can estimate a 3D model of the node constellation.

The accuracy of the method is tested. The method appears suitable for determining relative node locations. However, due to inaccuracies in identifying the start of the sent signals, the end result is currently inaccurate. When the starts of the signals can be determined reliably, the method is likely to give good results.


Contents

1 Introduction
  1.1 Research goal

2 Theoretical background
  2.1 Speed of sound
  2.2 Windowed fast Fourier transform
  2.3 Trilateration
    2.3.1 Ambiguity in trilateration results
  2.4 QR decomposition using Householder reflections

3 Previous research
  3.1 Distance calculation between nodes
  3.2 3D node constellation
    3.2.1 A linear approach to trilateration
  3.3 Frequency ranges of devices
  3.4 Bluetooth framework

4 Implementation
  4.1 Determining distances between nodes
    4.1.1 Id and distance protocols
    4.1.2 Signal recognition
  4.2 Creating a 3D model

5 Experiments
  5.1 Distance calculation accuracy
  5.2 3D model accuracy

6 Discussion
  6.1 Distance calculation accuracy
    6.1.1 Comparison with previous paper
  6.2 3D model accuracy

7 Conclusion
  7.1 Future Research


CHAPTER 1

Introduction

When working with mobile nodes exchanging information in a wireless network, it can occur that these nodes need to be aware of the relative positions of their peers. We research an algorithm for finding these relative positions using sound at different frequencies. We use sound as our primary way of determining these locations, since it places very few restrictions on the hardware and software used for a node. Moreover, by using sound we expect to keep a relatively low error compared to some other methods of 3D localisation. Frequencies are used to differentiate between nodes, and for the potential benefit of being able to run multiple localisation processes at once.

One method that is often used, but which gives a relatively high error, is using wifi signal strength to determine node positions [1], [2]. Because wifi signal strength is subject to a lot of fluctuation, measurements of distances between nodes and wifi beacons are often imprecise, which makes determining a node's position difficult. Other weaknesses of such an approach are that the nodes themselves depend on a number of wifi beacons with known positions. All measurements used for determining locations are taken between a node and a beacon; a node does not measure anything between itself and another node. Moreover, distances between a node and a beacon cannot be inferred without first measuring the beacon's signal strength at multiple places in the area in which nodes are to be located. This means localisation is limited to an environment that has already been set up to support it.

There are numerous possible applications for 3D node localisation. For instance, one could set up a swarm of nodes [3] and want the nodes in the swarm to be aware of the locations of their surrounding neighbours in order to navigate around them. Alternatively, one could create a three dimensional sensory network of independent nodes [4] and need to locate the relative positions of all sensors to make sense of the data they gather.

1.1

Research goal

The goal of this paper is to research a method of localising mobile nodes in a wireless network in three dimensional space using different sound frequencies, and to determine the accuracy of this method. Our method first measures the distances between a sufficient number of pairs of nodes in a network and then uses these measurements to determine the relative node positions in 3D space.


CHAPTER 2

Theoretical background

2.1

Speed of sound

Calculating the distance between nodes using sound is done by simulating echo location. This means we make use of the following formula to calculate distances:

s = v_s \cdot t    (2.1)

where s is the distance the sound has traveled (m), v_s is the speed of sound (m/s) and t is the travel time (s). The speed of sound is, however, affected by the temperature, pressure and humidity of the air the sound travels through. That being said, since the distances between two nodes will be small (roughly a couple of meters) compared to the speed at which the sound travels, these differences in the value of v_s are negligible. Moreover, we can assume that, in practice, pockets of air close to each other have similar temperature, pressure and humidity, so the speed of sound in those pockets stays roughly constant. This means that taking a slightly inaccurate value of v_s will only increase or decrease the measured distances by a constant factor, at which point the proportions of the generated 3D model will still be accurate.

Going forward, we will assume that measurements will be taken at room temperature with average air pressure and humidity, giving a speed of sound of 343 m/s. It is worth noting, however, that if the above named variables that influence the speed of sound differ a lot in the general area in which the nodes are deployed, then the end model will become deformed. In that case, in some areas the nodes will obtain distances smaller than those in reality, while in other areas the obtained distances will be larger. This will result in a model with inaccurate proportions since not all distances are affected by the same constant (figure 2.1).
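As a brief illustration of the scales involved (assuming the 343 m/s value above and a typical recording sample rate of 44.1 kHz, which is not fixed by our method): a one-way travel time of 5.8 ms corresponds to

s = 343 \cdot 0.0058 \approx 2.0 m,

while a timing error of a single sample already corresponds to roughly 343/44100 \approx 8 mm of distance.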

Figure 2.1: Example of the influence of differing temperature on the created model. The left side shows the real-world constellation, while the right side shows a possible created model. The gradation shows temperature, where warmer colors indicate higher temperatures. Where temperatures are higher, sound travels faster and measured distances are shorter, and vice versa. This causes incorrect proportions in the model.


2.2

Windowed fast Fourier transform

We will be working with frequencies in our implementation. To recognise these in our recordings, we have to apply an algorithm that extracts the frequencies from a piece of the recording. A fast Fourier transform (FFT) is suitable for this, except that the signal, when an FFT is applied, loses its time dimension. To counteract this, a windowed FFT can be used [5].

To apply a windowed FFT, one first applies a window function to the signal to select only a piece of it, and then applies an FFT to this altered signal. This way only the frequencies of a specific piece of the original signal, the piece selected by the window, are obtained. The window is then shifted over the signal and an FFT is calculated repeatedly. This way one gets a view of the frequencies that occur in the signal over time.
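As a concrete illustration, the sketch below computes a simple windowed FFT with numpy. The window length, hop size and Hamming window are illustrative choices and not the exact parameters of our implementation.

```python
import numpy as np

def windowed_fft(signal, window_size=1024, hop=256):
    """Return a spectrogram: one FFT magnitude spectrum per window position."""
    window = np.hamming(window_size)            # taper to reduce spectral leakage
    frames = []
    for start in range(0, len(signal) - window_size + 1, hop):
        frame = signal[start:start + window_size] * window
        frames.append(np.abs(np.fft.rfft(frame)))   # magnitudes of positive frequencies
    return np.array(frames)                      # shape: (num_windows, window_size // 2 + 1)

# Example: a 3150 Hz tone that starts halfway through a 44.1 kHz recording.
fs = 44100
t = np.arange(fs) / fs
signal = np.concatenate([np.zeros(fs // 2), np.sin(2 * np.pi * 3150 * t[: fs // 2])])
spec = windowed_fft(signal)
bin_3150 = int(round(3150 * 1024 / fs))          # FFT bin closest to 3150 Hz
print(spec[:, bin_3150])                         # energy jumps once the tone begins
```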

2.3

Trilateration

To calculate the coordinates of a point in 3D, based on the distance of that point to other known points, trilateration can be used. That is to say, one can solve the set of equations:

R_i = \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2}

where R_i is the distance from the unknown point to the i-th known point, x, y and z are the unknown coordinates of the new point and x_i, y_i and z_i are the coordinates of the i-th known point. Here i takes values in the range 1 to N, where N ≥ 3 since this is a point in 3D [6], [7]. When working with measured data, however, there will always be errors in the distances. Therefore, treating the measurements as exact and using the above described method to calculate node locations gives very imprecise results. Not only are the initial measurements imprecise, but during the calculation of multiple node locations the errors in previously determined node locations also add up, giving greater errors in later calculated locations. Furthermore, when using more than two nodes to determine the location of a new node, rather than finding precise solutions, fields of possible solutions are found in which the new location lies. More calculations need to be done to reduce such a field back to one location.

Figure 2.2 shows an example of the influence of errors in the distance measurements on the determined locations in 2D. In it node A is used as a starting point. From it, node B is determined using a distance measurement with some error. Using A and B and their distance measurements to C, one of the two possible locations of C is determined. Then, using A, B and C and their distance measurements to D, a field is found in which the location of D is to be found.

In chapter 3 we discuss some of the methods used to deal with the previously described problems.

2.3.1

Ambiguity in trilateration results

A different problem that occurs when using trilateration is that it gives two solutions to the system of equations [2], [8]. In an N dimensional space, using trilateration with N points to locate an (N + 1)-th point will result in two possible locations of the point. For only N + 1 points this is not necessarily a problem, since the incorrect result is simply the mirrored version of the correct one. However, when creating a system of more than N + 1 points, all points have to be consistent relative to each other.

Solving this problem can be done as follows. For any point beyond the (N + 1)-th point, one first uses trilateration to determine the two possible locations. One then calculates, for each candidate, the differences between the original distance measurements and the distances from that candidate to every already calculated location. The candidate for which the sum of these differences is smallest is closest to the desired result and should be taken as the final location.
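A minimal sketch of this consistency check (the two candidate points are assumed to come from a separate trilateration step; names and array shapes are illustrative):

```python
import numpy as np

def pick_consistent_location(candidates, known_points, measured_distances):
    """Choose the trilateration candidate that best matches all earlier measurements.

    candidates         -- the two candidate coordinates returned by trilateration
    known_points       -- coordinates of already-located nodes, shape (N, dim)
    measured_distances -- measured distance from the new node to each known node
    """
    best, best_error = None, np.inf
    for candidate in candidates:
        computed = np.linalg.norm(known_points - candidate, axis=1)
        error = np.sum(np.abs(computed - measured_distances))
        if error < best_error:
            best, best_error = candidate, error
    return best

# Example: the mirrored candidate is rejected because it disagrees with the fourth point.
known = np.array([[0., 0., 0.], [2., 0., 0.], [0., 2., 0.], [0., 0., 2.]])
measured = np.linalg.norm(known - np.array([1.0, 1.0, 1.0]), axis=1)
candidates = [np.array([1.0, 1.0, 1.0]), np.array([1.0, 1.0, -1.0])]
print(pick_consistent_location(candidates, known, measured))   # [1. 1. 1.]
```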


Figure 2.2: Example of influence of error in simple trilateration in 2D. A is taken as starting point, B is determined with error (grey). Then A and determined B are used to locate C (blue). Afterwards A and determined B and C are used to determine a field in which D can be located (red).

2.4

QR decomposition using Householder reflections

When applying trilateration we will be doing calculations with matrices. We will be working with the QR decomposition of a matrix. This means that we want to find Q and R for a matrix A such that:

A = QR (2.2)

The QR decomposition of a matrix can be calculated in a myriad of ways. We will be using Householder reflections to calculate it, since this method is relatively straightforward, is not prone to numerical errors and does not suffer from limitations on the shape of the matrix. Using this method, the QR decomposition of an n by m matrix A can be acquired as follows:

For i ranging from n down to 2 and j ranging from 1 to m, entry a_{(i-1)j} in A is used to eliminate entry a_{ij}. First the n by n matrix G is calculated. G is an identity matrix except for the entries

\begin{pmatrix} g_{(i-1)(i-1)} & g_{(i-1)i} \\ g_{i(i-1)} & g_{ii} \end{pmatrix} = \begin{pmatrix} \cos\theta_{i-1,i} & \sin\theta_{i-1,i} \\ -\sin\theta_{i-1,i} & \cos\theta_{i-1,i} \end{pmatrix}

where

\cos\theta_{i-1,i} = a_{(i-1)j} / \sqrt{a_{ij}^2 + a_{(i-1)j}^2}

\sin\theta_{i-1,i} = a_{ij} / \sqrt{a_{ij}^2 + a_{(i-1)j}^2}

At each step a new A is created by calculating A = GA. After all steps, you end up with a matrix in which all values below the diagonal are zero. This matrix is R. Matrix Q can then be calculated by taking the product of all G^T in reverse order.
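A compact sketch of this elimination scheme, written as 2x2 plane rotations that each zero one entry below the diagonal (an illustration in Python, not our Java implementation):

```python
import numpy as np

def rotation_qr(A):
    """QR decomposition by plane rotations: each rotation zeroes one sub-diagonal entry."""
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    R = A.copy()
    Q = np.eye(n)
    for j in range(m):                       # work column by column
        for i in range(n - 1, j, -1):        # eliminate entries below the diagonal, bottom up
            denom = np.hypot(R[i - 1, j], R[i, j])
            if denom == 0.0:
                continue
            c, s = R[i - 1, j] / denom, R[i, j] / denom
            G = np.eye(n)
            G[i - 1, i - 1], G[i - 1, i] = c, s
            G[i, i - 1], G[i, i] = -s, c
            R = G @ R                        # apply the rotation to A
            Q = Q @ G.T                      # accumulate Q as the product of the G^T
    return Q, R

A = np.random.rand(4, 3)
Q, R = rotation_qr(A)
print(np.allclose(Q @ R, A), np.allclose(np.tril(R, -1), 0))   # True True
```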


CHAPTER 3

Previous research

3.1

Distance calculation between nodes

In [2], R. Mautz and W. Y. Ochieng describe their method of locating nodes in 3D. They make use of a node system that is split into two sets. The nodes in the first set serve as beacons and send sound signals to the nodes in the second set. The nodes in this second set listen for these signals; when a signal is perceived, they determine the distance between themselves and the beacon node the signal came from, making use of extra information about when the signal was sent, provided by the beacon node over a wireless connection.

In [9], C. Peng et al. discuss their method of determining the distance between two devices using sound. They claim to acquire a precision of 1 to 5 cm when the devices are located 1 to 4 meters apart. They have their nodes each send a sound signal to the other and record the entire interaction. Then each device locates its own signal and the other device’s signal in the recording. The distance between the two devices A and B is then calculated using those two offsets and the following equation:

D = \frac{c}{2}\left(\frac{n_A}{f_{s_A}} - \frac{n_B}{f_{s_B}}\right) + K    (3.1)

where c is the speed of sound, f_{s_X} is the sample frequency of device X, n_X is the difference between the start sample indices of the two signals in device X's recording (the index of the device's own signal is always subtracted from that of the other device's signal) and K is a constant signifying the sum of the distances between the microphone and speaker of each device.

The value of K is dependent on the two devices A and B and should be determined in advance of the exchange of signals. That being said, it is not a given that measuring the physical distance between microphone and speaker suffices, since there is a chance that the sound from the speaker will travel through the device itself rather than the air surrounding it, which would influence the value of K that should be applied in practice. However, during our research we have not been able to achieve the necessary consistency in our measurements to determine to what extent this traveling of the sound through the device itself influences the value of K.

C. Peng et al. claim that the precision of their model is mainly achieved due to the fact that the nodes also look for their own signals and because the distance is calculated using the difference in recording samples. This way they avoid relying on the internal timers of the devices and are not subject to any imprecision caused by inconsistencies between those timers. Because of its precision and the fact that no external hardware is necessary for this method, we decided to use it in our research.

3.2

3D node constellation

To solve such a system of equations while having to deal with measurement errors, a search-based algorithm is often useful. One recursively calculates new results while trying to minimise the total error in the system. In the case of distance measurements, one would reduce the difference between the measured distances and the distances between the node locations calculated at each iteration of the algorithm [10].

3.2.1

A linear approach to trilateration

In [7], Yu Zhou describes a trilateration method that differs from search-based methods. It solves the trilateration problem using only standard linear algebra techniques. The method addresses issues such as needing to make the same assumptions about the initial node positions on every device, and the fact that search-based methods generally have an expensive execution time. This makes it very suitable for our situation, and we therefore apply it in our project.

The method works as follows [7]:

Given a list of coordinates p of N known points, and a list r of distances from those points to the point whose coordinates are to be estimated, the coordinates of the unknown point can be calculated by solving:

\frac{\partial S(p_0)}{\partial p_0} = a + Bp_0 + (2p_0p_0^T + (p_0^Tp_0)I)c - p_0p_0^Tp_0 = 0    (3.2)

where

a = \frac{1}{N}\sum_{i=1}^{N}(p_ip_i^Tp_i - r_i^2p_i)    (3.3)

B = \frac{1}{N}\sum_{i=1}^{N}(-2p_ip_i^T - (p_i^Tp_i)I + r_i^2I)    (3.4)

c = \frac{1}{N}\sum_{i=1}^{N}p_i    (3.5)

According to Zhou, this can be done by introducing:

p_0 = q + c    (3.6)

which gives

(a + Bc + 2cc^Tc) + (Bq + (2cc^T + (c^Tc)I)q) - qq^Tq = 0    (3.7)

or

f + Hq = 0    (3.8)

where

f = a + Bc + 2cc^Tc    (3.9)

H = [h_1, ..., h_n] = -\frac{2}{N}\sum_{i=1}^{N}p_ip_i^T + 2cc^T    (3.10)

Introducing

f' = [f_1 - f_n, ..., f_{n-1} - f_n]^T    (3.11)

H' = [h_1 - h_n, ..., h_{n-1} - h_n]^T    (3.12)

and using the QR decomposition of H' together with equation 3.8 gives:

Q^Tf' + Uq = 0    (3.13)

Figure 3.1: Figure from [11] showing the frequency response of multiple devices.

When taking v = Q^Tf', q can be obtained by solving equation 3.13. In two dimensions this can be done by solving:

v_1 + u_{11}q_1 + u_{12}q_2 = 0    (3.16)

v_2 + u_{22}q_2 = 0    (3.17)

and in three dimensions by solving:

v_1 + u_{11}q_1 + u_{12}q_2 + u_{13}q_3 = 0    (3.18)

v_2 + u_{22}q_2 + u_{23}q_3 = 0    (3.19)

q_1^2 + q_2^2 + q_3^2 = q^Tq    (3.20)

Once these equations have been solved and q has been found, q and c can be substituted in equation 3.6 to calculate the estimation of the coordinates of the unknown point.
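As a point of reference, the sketch below shows a simpler linear least-squares trilateration, obtained by subtracting the last distance equation from the others. It is not Zhou's exact formulation (which also handles the quadratic term explicitly), but it illustrates how the trilateration problem reduces to a linear system:

```python
import numpy as np

def linear_trilateration(points, distances):
    """Estimate an unknown position from known points and measured distances.

    Subtracting the last distance equation from the others removes the quadratic
    term in the unknown position and leaves an ordinary least-squares problem.
    """
    p = np.asarray(points, dtype=float)       # shape (N, dim), N >= dim + 1
    r = np.asarray(distances, dtype=float)
    A = 2.0 * (p[:-1] - p[-1])
    b = (r[-1] ** 2 - r[:-1] ** 2
         + np.sum(p[:-1] ** 2, axis=1) - np.sum(p[-1] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Example: recover the point (1, 2, 0.5) from four known points.
known = np.array([[0, 0, 0], [3, 0, 0], [0, 3, 0], [0, 0, 3]], dtype=float)
target = np.array([1.0, 2.0, 0.5])
dists = np.linalg.norm(known - target, axis=1)
print(linear_trilateration(known, dists))     # ~ [1. 2. 0.5]
```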

3.3

Frequency ranges of devices

The sound signals we will be sending from phone to phone will be sent at different frequencies. However, the devices we are working with have limitations on the frequencies they can receive. This is something to keep in mind, since frequencies that are not well received will show up very quietly in the phone recordings, making it difficult to spot the start of the signal or to spot the signal altogether.

We will be working with a couple of different mobile devices: the HTC Desire C and the GT-S5660. The research in [11] shows the general frequency response of the microphones in mobile devices. It shows that frequencies between 100 Hertz and roughly 10000 Hertz are received well, although it is possible that frequencies above 4000 Hertz are not received as well. We will make the assumption that our devices have a similar frequency response and will keep the frequencies used in our experiments in roughly the same ranges. This way we have reasonable certainty that our signals are perceived by the nodes in the network.

3.4

Bluetooth framework

To have nodes communicate with each other, we need a way of sending information between them. Communication can range from notifying nodes to start running a protocol to exchanging


intermediate/resulting information from a protocol. For this purpose we have been provided with a framework that handles bluetooth connections between mobile nodes [12].

The framework lets two nodes take part in a bluetooth connection where one node is the master and the other is the slave. A master node can have multiple slave nodes and slave nodes can become masters over other nodes creating a larger network. Each node has a unique bluetooth id and nodes can send serializable Java objects over the network.


CHAPTER 4

Implementation

In this section we describe the process of implementing the 3D localisation scheme. Our implementation is written in Java, using Android Studio. By using Android Studio we can run our later experiments on real-life mobile devices. That being said, earlier test versions of the software were written in Python.

4.1

Determining distances between nodes

The first step of localising the mobile nodes is determining the distances between the nodes themselves. In this section we will describe the steps we take to make every node aware of the distances to the nodes in its surroundings.

4.1.1

Id and distance protocols

Protocol for exchanging node ids/frequencies

When determining distances between nodes, we want every node to send a unique frequency, while the other nodes are aware of which node sends which frequency. To achieve this we want each node to have a unique frequency linked to it that doubles as an id, and we want all other nodes in the network to be aware of which node is linked to which frequency.

Using the framework described in section 3.4 we can set up bluetooth connections between nodes where each node has a unique bluetooth id. These bluetooth ids can be used to identify which nodes are present in the network, but do not yet provide information about the frequency the node will be using. For this purpose we devised a protocol that should be run by each new slave node that joins the network. At the end of the protocol, the new node holds a hashmap of all bluetooth-ids paired with all frequency-ids in the network.

When a node joins the network, it should start the protocol and send a message to its master signaling that it wishes to get information about the ids of the nodes in the network. The master then updates its own frequency-id in its hashmap and sends the hashmap to its new slave. The slave receives this hashmap and now knows about all of the ids in the network. It can then use the frequency-ids in the hashmap to determine a fitting new id for itself to link to its bluetooth-id.

When the slave has found a fitting id, it updates its own id in its hashmap and sends its bluetooth-id together with its newly calculated frequency-id back to its master. The master will then send these ids to all of its slaves, and potentially to its own master, with the message to propagate them over the entire network. Any node that receives the new ids will compare them with the ids in its current hashmap. If the new bluetooth-id is absent from its hashmap or has a different frequency-id paired with it, it will update its hashmap and further propagate the pair of ids over the network. This second round of propagation is done by sending the pair of ids to each slave the node is connected to. If the packet was received from one of its slaves, the node will also send the ids to its master. If the node does not have to update its hashmap, it will not send the ids to any further nodes. Figure 4.1 shows the protocol at work in a network where a fifth node joins.
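A minimal sketch of the update-and-propagate rule a node could apply when such a (bluetooth-id, frequency-id) pair arrives (the names are hypothetical; the real implementation exchanges serializable Java objects over the bluetooth framework):

```python
def handle_id_announcement(id_table, bt_id, freq_id):
    """Update the local bluetooth-id -> frequency-id table.

    Returns True when the pair was new or changed, meaning it should be
    forwarded to this node's slaves (and to its master, when the pair
    arrived from a slave); returns False when propagation should stop.
    """
    if id_table.get(bt_id) == freq_id:
        return False                     # already known: do not propagate further
    id_table[bt_id] = freq_id
    return True                          # new or changed: propagate over the network

# Example
table = {"node-1": 3150, "node-2": 5040}
print(handle_id_announcement(table, "node-3", 6930))   # True: forward to neighbours
print(handle_id_announcement(table, "node-1", 3150))   # False: stop propagating
```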

Protocol for calculating distance between nodes

The protocol we use for distance determination is heavily based on [9], discussed in section 3.1. The node starting the distance protocol will use the ids it has acquired through the id protocol to determine an order in which the nodes will send their signals, and create a schedule with time offsets at which the different nodes will send their signals. The schedule message also contains the total amount of time in which all signals are sent.

After a schedule has been made, the node will send its schedule into the network and start recording. This recording is saved to a temporary file on the device. It will also start counting down towards the moment it is expected to send its signal. The nodes that receive its message will do the same: they will start recording, count down and eventually send their own signals. Because the schedule is passed from node to node in the network, and because there might be background processes running on the devices that slow down passing on the schedule and starting the recording, there will be a slight delay between nodes when it comes to starting the recording and starting the countdown towards the point at which a signal is sent. Because of this, a minimum offset is set before which no signals can be sent. This is done to give each node enough time to start its recording. The same offset is also added to the calculated time for which each node has to record, to make sure no node stops recording before all signals are sent.
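A sketch of how such a send schedule could be built (the 2-second gap matches the experiments in chapter 5; the minimum offset value and the function name are illustrative):

```python
def build_schedule(node_ids, gap=2.0, min_offset=1.0):
    """Assign each node a send time and return the total recording duration.

    gap        -- time between consecutive signals, so they do not overlap
    min_offset -- delay before the first signal, giving every node time to
                  receive the schedule and start recording
    """
    send_times = {node: min_offset + i * gap for i, node in enumerate(node_ids)}
    total_duration = min_offset + len(node_ids) * gap + min_offset  # extra margin at the end
    return send_times, total_duration

schedule, duration = build_schedule(["node-1", "node-2", "node-3"])
print(schedule)   # {'node-1': 1.0, 'node-2': 3.0, 'node-3': 5.0}
print(duration)   # 8.0
```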

Once the time given in the message has elapsed, the nodes will stop recording and start analysing their recorded data to find the offsets at which they received their own signal and those of the other devices (as described in section 4.1.2). Each node will then send those offsets, together with its own bluetooth-id and its sample frequency, into the network, where they are passed to all other nodes.

The nodes that receive these offsets will look up the offset of their own signal and use it, together with the offset they calculated themselves, to calculate the distance between themselves and the node that sent the message. The node will also forward the message to the nodes it is connected to. This way each node can independently calculate its distance to all other nodes it receives a message from. After these distances are calculated, the results can once again be propagated through the network, so that each node can later use the calculations of every other node.

4.1.2

Signal recognition

After a device has recorded the exchanges in signals it has to locate the samples at which all of these signals start in its recording. Due to having run the id-protocol, each device is already aware which frequencies the signals are sent at and thus which frequencies it is looking for in its recording.

Failed approaches

We attempted several methods to find these starting samples. For example, we split the signal into two parts, applied an FFT to both parts and picked the earliest part that contained the frequency we searched for above a certain threshold, dependent on the average value of the FFT result. Then we recursively continued to do the same on the part that we picked. Eventually we would end up with a final sample at which the signal would have started.

Another approach we tried was to set a part of the signal to zero, starting at a step size of half the signal length, and apply an FFT. If the severity of the to-be-found frequency in the altered signal was above a value dependent on the average severity of the frequencies in the signal, we would halve the step size and set the leftmost part of the non-zero part of the signal to zero. Otherwise, we would halve the step size and restore the rightmost part of the part of the signal that was set to zero. Doing this repeatedly would eventually bring the step size to zero and give us a sample at which the signal would start.


Figure 4.1: Sequence diagram of node 4 joining the existing network and running the id-protocol. The arrows connecting the nodes at the top show the hierarchy of the network.


A third approach was to apply, for each frequency we were looking for, a bandpass filter to the original recording; this resulted in multiple filtered signals consisting of only the frequency that we were looking for (and frequencies very close to it). Then we took the absolute value of each filtered recording and applied a Gaussian filter. This resulted in a signal resembling a pulse. We then determined the start of the signal by finding the first sample that exceeded a threshold dependent on the maximum value of the created pulse signal. This method of finding the start of a signal gave far more reliable results than the previous two; however, implementing the required signal processing software in Java gave a lot of trouble, so we eventually dropped this implementation in favour of our final method.
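A rough reconstruction of that pipeline in Python, where scipy provides the signal processing that proved troublesome in Java (the cutoff, smoothing and threshold values are illustrative):

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.ndimage import gaussian_filter1d

def find_signal_start(recording, fs, freq, bandwidth=200.0, threshold=0.3):
    """Estimate the sample at which a tone of the given frequency starts."""
    # Band-pass around the target frequency, keeping only that node's signal.
    low, high = (freq - bandwidth) / (fs / 2), (freq + bandwidth) / (fs / 2)
    b, a = butter(4, [low, high], btype="band")
    filtered = filtfilt(b, a, recording)
    # Rectify and smooth: the tone burst becomes a single pulse-shaped envelope.
    envelope = gaussian_filter1d(np.abs(filtered), sigma=fs // 1000)
    # First sample where the envelope exceeds a fraction of its maximum.
    return int(np.argmax(envelope > threshold * envelope.max()))

# Example: a 3150 Hz tone starting at sample 22050 in a noisy recording.
fs = 44100
t = np.arange(fs) / fs
recording = 0.05 * np.random.randn(fs)
recording[fs // 2:] += np.sin(2 * np.pi * 3150 * t[: fs // 2])
print(find_signal_start(recording, fs, 3150))   # close to 22050
```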

Final approach

To start, we apply an altered version of the windowed FFT described in section 2.2. We shift a window over our recording and apply the FFT at every position, which gives us a spectrogram of the original recording. In the spectrogram we determine the rows that correspond to the frequencies we used for our signals, and use each row as a representation of the severity of that frequency over time. In such a row, we look for the first point at which the severity rises above a certain threshold; this roughly gives us the start of the signal. To pinpoint an exact starting point, we work back to the point where the severity of the frequency drops below a second threshold, and then use a third threshold on the derivative of the row to find the first point where the severity of the frequency is increasing.

To increase the speed at which the algorithm can be run, we do the search in several steps. We start with a somewhat large step size at which the window is slid over the recording and gradually work down towards a step size of one. At each iteration we determine the starting point of our signal and only use the part of the recording around this starting point to continue searching. Additionally, in the first step we check whether the found starting point holds a value above our threshold for at least half the duration of a signal. The first two steps on an example recording are shown in figures 4.2a and 4.2b.
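The per-row start detection can be summarised as in the sketch below; the three thresholds are illustrative fractions of the row maximum, and the coarse-to-fine refinement over step sizes is omitted:

```python
import numpy as np

def locate_start(severity, high=0.5, low=0.1, rise=0.0):
    """Find the start index of a tone in one spectrogram row.

    severity -- energy of one frequency over time (one row of the spectrogram)
    """
    severity = np.asarray(severity, dtype=float)
    peak = severity.max()
    # 1. First index where the frequency is clearly present.
    first_high = int(np.argmax(severity > high * peak))
    # 2. Walk back to where the severity drops below a lower threshold.
    idx = first_high
    while idx > 0 and severity[idx] > low * peak:
        idx -= 1
    # 3. From there, take the first index where the severity is increasing.
    derivative = np.diff(severity, prepend=severity[0])
    while idx < first_high and derivative[idx] <= rise:
        idx += 1
    return idx

row = np.concatenate([np.full(50, 0.05), np.linspace(0.05, 1.0, 20), np.full(50, 1.0)])
print(locate_start(row))    # ~50, where the ramp begins
```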

An added benefit of using steps is that small distortions in the recording have a far lower chance of influencing the end result of the algorithm. Since the initial step size is fairly large, the window size at these steps is also taken to be larger, to cover the entire step (the window size is also reduced towards a calculated minimum). Using a larger window size means that distortions of only a couple of samples contribute very little to the severity of the step they are a part of.

The minimum window size that we use is dependent on the frequencies of our signals. We want to pick a window size that is large enough to make a clear distinction between all our frequencies in the spectrogram, but small enough to keep calculations relatively efficient. We calculate our minimal window size in the following way:

WS_{min} = f_s / f_{step}    (4.1)

where f_s is the sample frequency of our recording and f_{step} is at most one third of the smallest difference between two frequencies. The window size used is always the maximum of the current step size with which we traverse the recording and WS_{min}. An issue that arises from our choice of window size is that not all frequencies can be properly represented in our FFT results. If we take a window size of WS, then the frequency axis in our spectrogram has a step size of f_s/WS, where f_s is the sample frequency used when calculating the FFT values. If the frequencies we are looking for are not a multiple of this step size, an entire signal may be spread over a large range of frequencies (figures 4.3a and 4.3b). This can cause problems when searching for other signals. Therefore, we ideally want to pick a window size that is the greatest common divisor of all of our frequencies and the used sampling rate. This assures that all used signal frequencies are a multiple of the frequency step size. On top of that we apply a Hamming window to each part of the signal over which we calculate the FFT. This further reduces the spread of one frequency over multiple frequency bins. Since, in our case, we can keep the sampling frequency equal between devices, we can choose our frequencies to comply with this rule. However, background noise can still cause the same types of problems.
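For example, with an assumed sampling rate of 44100 Hz and the frequencies 3150 Hz and 5040 Hz used in our experiments, a window size of gcd(44100, 3150, 5040) = 630 samples gives a frequency step of 70 Hz, of which both signal frequencies are exact multiples:

```python
from math import gcd

fs, f1, f2 = 44100, 3150, 5040
window = gcd(gcd(fs, f1), f2)               # 630 samples
step = fs / window                          # 70 Hz per FFT bin
print(window, step, f1 % step, f2 % step)   # 630 70.0 0.0 0.0
```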


(a) First step in the frequency search. Shows the frequency severity over time and its derivative. The green lines are the thresholds used; the black line is the index where the start of the signal is found in this step.

(b) Second step in the frequency search. Shows the frequency severity over time and its derivative. The green lines are the thresholds used; the black line is the index where the start of the signal is found in this step.

Figure 4.2: First two steps in frequency search on one recording of an exchange between two nodes.


(a) Spectrogram with properly chosen window size

(b) Spectrogram without properly chosen window size

Once the sample number at which a signal starts has been found for all signals, we can calculate the differences between the starting point of the signal coming from the device on which the current calculations are done, and the starting points of each of the other signals. We also save these offsets once they are calculated.

Following the protocol in section 4.1.1, we send our signal offsets and sample frequency into the network and receive signal offsets and sample frequencies from the other nodes. Once such offsets are received, we can calculate the distance between our current node and one of the nodes we received data from by following equation 3.1. We set our current node as node A and the node we received data from as node B and, using the offsets of the two nodes as the values for n_A and n_B and their sample frequencies as f_{s_A} and f_{s_B}, we calculate (n_A/f_{s_A} - n_B/f_{s_B}). Since the distance is always positive (we do not have a direction tied to the distances, only a magnitude), we can take the absolute value of this calculation. This makes it irrelevant which node we choose as A and which as B. The value of K in equation 3.1 (the sum of the distances between the speaker and microphone of each of the two devices) should have been determined beforehand, as explained in section 3.1.
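Put together, the per-pair distance computation amounts to the following sketch (K is set to zero, as in our experiments; the offsets in the example are hypothetical):

```python
SPEED_OF_SOUND = 343.0   # m/s, room temperature

def pair_distance(n_a, fs_a, n_b, fs_b, k=0.0):
    """Distance between nodes A and B from their sample offsets (equation 3.1).

    n_a, n_b   -- per-node difference in start samples (other signal minus own signal)
    fs_a, fs_b -- recording sample frequencies of the two nodes
    k          -- speaker/microphone distance constant (zero in our experiments)
    """
    return SPEED_OF_SOUND / 2.0 * abs(n_a / fs_a - n_b / fs_b) + k

# Example with hypothetical offsets: nodes roughly 1 m apart.
print(pair_distance(n_a=22307, fs_a=44100, n_b=22050, fs_b=44100))   # ~1.0 m
```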

4.2

Creating a 3D model

After determining the distances between the nodes, we use these to infer node locations in three dimensional space. We start in a two dimensional coordinate system and set the position of the current node, the node that is calculating a model from its perspective, as the origin of this system. Furthermore, we set the node closest to the current node at the position [d, 0]^T, where d is the distance between that node and the current node.

Now that we have set the location of two nodes, we can estimate the location of a third node by applying the algorithm from [7], as described in section 3.2.1, to the locations we currently have and solving the 2D case. This gives us an estimated location for the third node.

We now have three estimated locations in 2D. To acquire locations for the rest of the nodes, we first extend our current coordinates with a third dimension by providing each of the three estimated coordinate pairs with a z-coordinate with a value of 0. Then we can use these new 3D coordinates to estimate the locations of the fourth node using the 3D case of the algorithm from [7].

Now that we have four node locations, we can determine the location of any further nodes in the same way as the fourth node was located. However, in addition to that, we also make sure that the result of the algorithm that we use is consistent with the rest of the system. This is done by following the method described in section 2.3.1.
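A much simplified, purely geometric sketch of this construction is given below: it places node 0 at the origin, node 1 on the x-axis and node 2 in the xy-plane, and resolves the mirror ambiguity with the consistency rule from section 2.3.1. Our actual implementation uses the least-squares algorithm from [7] instead, which copes better with measurement errors; the sketch is only meant to show the order in which nodes are placed.

```python
import numpy as np

def build_model(dist):
    """Embed nodes in 3D from a pairwise distance matrix (noise-free sketch)."""
    d = np.asarray(dist, dtype=float)
    n = len(d)
    pos = np.zeros((n, 3))
    pos[1, 0] = d[0, 1]                                   # node 1 on the x-axis
    pos[2, 0] = (d[0, 1] ** 2 + d[0, 2] ** 2 - d[1, 2] ** 2) / (2 * d[0, 1])
    pos[2, 1] = np.sqrt(max(d[0, 2] ** 2 - pos[2, 0] ** 2, 0.0))   # node 2 in the xy-plane
    x1, (x2, y2) = pos[1, 0], pos[2, :2]
    for i in range(3, n):
        r0, r1, r2 = d[0, i], d[1, i], d[2, i]
        x = (r0 ** 2 - r1 ** 2 + x1 ** 2) / (2 * x1)
        y = (r0 ** 2 - r2 ** 2 + x2 ** 2 + y2 ** 2 - 2 * x2 * x) / (2 * y2)
        z = np.sqrt(max(r0 ** 2 - x ** 2 - y ** 2, 0.0))
        candidates = [np.array([x, y, z]), np.array([x, y, -z])]
        # Keep the candidate most consistent with all distances measured so far.
        errors = [sum(abs(np.linalg.norm(c - pos[j]) - d[j, i]) for j in range(i))
                  for c in candidates]
        pos[i] = candidates[int(np.argmin(errors))]
    return pos

truth = np.array([[0., 0., 0.], [2., 0., 0.], [1., 2., 0.], [1., 1., 2.], [0., 1., -1.]])
d = np.linalg.norm(truth[:, None] - truth[None, :], axis=-1)
print(np.round(build_model(d), 2))   # reproduces the original constellation
```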


CHAPTER 5

Experiments

In this section we display a number of figures with results from our experiments on the implementation of our localisation. During the experiments, the value of K in equation 3.1 is set to zero, since the errors in our results are too large for it to make a meaningful impact. When sending signals, the time offset between consecutive signals is set to 2 seconds so that the signals do not overlap.

5.1

Distance calculation accuracy

We evaluate our distance determination on three fronts:

1. Accuracy: the difference between the measured distance and the real distance.

2. Confidence: the percentage of results that achieve a given accuracy value.

3. Operational range: the maximum range at which a given confidence is attained for a given accuracy.

By using these metrics to evaluate the protocol we can later compare our results to those of [9], since it uses the same metrics.

Figure 5.1 shows accuracy and confidence results for our protocol. The left side shows the mean accuracy together with its standard deviation and the right side shows confidence with a threshold of 0.5 to 1.5 meters. Extreme values are removed when calculating the mean accuracy; only measured distances under 10 meters are considered. The results were measured in a quiet room to avoid any interference from background noise. In the experiment, two nodes exchanged tones and calculated the distance between them.

Figure 5.1: Mean accuracy with standard deviations and confidence for an accuracy of 0.5 to 1.5 meters for distance determination. Results are taken between nodes over 10 measurements. Accuracy is only taken over measured distances smaller than 10 meters.


Figure 5.2: Mean accuracy with standard deviations for distance determination at different steps of frequency. Results are taken between nodes over 10 measurements. Accuracy is only taken over measured distances smaller than 10 meters.

The frequencies used in the experiment were 3150 Hz for the first node and 5040 Hz for the second node; this gives a difference of roughly 2000 Hz to cause minimal interference between the frequencies.

Figure 5.2 shows accuracy results when using different step sizes between the frequencies used in the protocol. Extreme values are once again removed for a better view of the accuracy. The results were measured in a quiet room, with the two nodes 1 meter apart. This way the imprecisions shown in figure 5.1 have only a small influence on the measurements.

5.2

3D model accuracy

Since determining distances between nodes gives very large errors, using such data to determine the relative locations of the nodes will not give results where nodes are anywhere close to where they are expected to be. Therefore, we do not show such results here. We can, however, show an example of the localisation used on simulated data. This is done in figure 5.3. Here we created a matrix of node positions:

      0.0 0.0 0.0 1.0 0.0 0.0 0.0 2.0 0.0 0.0 0.0 −3.0 −2.0 1.0 1.5      

Then we calculated the distances between these nodes, added a random error between −0.25 and 0.25 to each of the distances and used those distances to calculate the original positions.
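A sketch of how this simulated input could be generated (the error range matches the one named above; the localisation step itself is omitted):

```python
import numpy as np

# Node constellation used in the simulated experiment (one row per node).
nodes = np.array([[ 0.0, 0.0,  0.0],
                  [ 1.0, 0.0,  0.0],
                  [ 0.0, 2.0,  0.0],
                  [ 0.0, 0.0, -3.0],
                  [-2.0, 1.0,  1.5]])

# Pairwise distances with a uniform error in [-0.25, 0.25] added to each pair.
true_dist = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
noise = np.triu(np.random.uniform(-0.25, 0.25, true_dist.shape), 1)
noisy_dist = true_dist + noise + noise.T      # symmetric, zero diagonal
# noisy_dist is then fed to the localisation to reconstruct the constellation.
```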


Figure 5.3: On the left, the original node positions; on the right, the node positions calculated from distances with an error between −0.25 and 0.25.


CHAPTER 6

Discussion

6.1

Distance calculation accuracy

Our results show that our implementation is subject to a large error. Figure 5.1 shows that distances around 1 meter are generally measured to within one meter accuracy, but a distance of 2 meters already has a high likelihood of giving an error larger than 1 meter, and beyond 2 meters such an error is almost guaranteed. Furthermore, the standard deviation of the error is relatively large at all distances, although it does increase as the distance increases.

This means that our implementation is most likely not useful in any real world application until this error is reduced. The size of the error might be explained by inaccuracies in the determined starting points of the signals. We determine the starting point of a signal at the base of its pulse since, of the methods we experimented with, this gave the best results. However, the base of the pulse is subject to quite a lot of measurement error and general noise fluctuation, which might be a cause of the errors in our measurements and in particular of the large standard deviation of the error. The almost linear increase in error as the distance increases, however, seems to be a more general problem of sound measurements becoming less consistent.

Figure 5.2 shows that the difference between frequencies has very little influence on the error of the measurements. Since the nodes send their signals in turns, this means that our implementation will usually identify the correct pulse in the recording.

6.1.1

Comparison with previous paper

Compared to the paper by C. Peng et al. [9], our implementation does not hold up. They were able to acquire an accuracy of 1 to 2 cm with a confidence of 0.5 over distances of 1 to 4 meters, and a confidence of over 0.9 for an accuracy of 5 cm, with their measurements also taken in a quiet indoor location. Our results, under similar circumstances, contain measurement errors of more than 1 meter.

6.2

3D model accuracy

We cannot say a lot about the 3D model accuracy of our implementation. Figure 5.3 shows that in our example experiment, the reconstructed constellation of nodes keeps roughly the same positions, except that the axes are set arbitrarily and are therefore not guaranteed to line up with our input data. However, since the intention is to reconstruct the relative positions of the nodes (and since real world constellations do not have a set axis anyway), the result of the example seems promising.

That being said, since we are dealing with a single example, this result is not necessarily representative of the average performance of the localisation. On top of that, it does not give information about the influence of different sizes of errors on the resulting constellation, or about whether similar results will be seen in practice.


CHAPTER 7

Conclusion

Our research points towards the possibility that, using sound signals at differing frequencies, an accurate 3D model can be created of the relative locations of a set of nodes in a network. We have been successful in implementing a method with which different nodes send sound signals of different frequencies to each other and can identify each other using those frequencies. The difference between when a node receives its own signal and when it receives that of another node is used to determine the distance between two nodes. Using information about the distances between a sufficient number of node pairs, the relative locations of the different nodes in the network are then calculated.

When determining the accuracy of this method we found that large measurement errors occur when measuring the distances between nodes. These measurement errors are most likely caused by inaccuracies that occur when determining the starting points of the received signals. This results in 3D node constellations that are largely inaccurate when compared with reality. However, when the localisation itself is tested in isolation, results show the implementation to be largely accurate. This points towards the possibility that, if the error in the distance measurements can be reduced, an accurate 3D model of relative node positions will follow.

7.1

Future Research

A possible way of building on our research would be to find better ways of identifying the start of a signal sent by one of the nodes in the network. We currently use pulses at different frequencies as our signals and locate the base of each pulse. However, a different method might give better results. In [9], for example, C. Peng et al. work with a chirp signal and locate it by matching the recorded data with the shape of the original chirp signal. Such a signal might be easier to recognise than a simple pulse. That being said, using a chirp signal results in the loss of the ability to use different frequencies to identify which node sent which signal. However, one might find a more easily identifiable signal that can be used in combination with frequency identification, keeping the ability to identify nodes by frequency.

Another alternative is to extend the identification of the start of the signals by fitting a function to the pulse signals in the recordings. For example, by fitting an exponential function to the start of the pulse, the measurement errors at the base of the signal are largely removed. This means that finding the start of the signal using such a fit should give more consistent results than using measurements that still contain a lot of fluctuation at the base of the signal.

Finally, when creating a 3D model based on the distances between the nodes, a possible extension of our research would be to apply the results of the error measurements for the determination of distances to the algorithm for node localisation. Our results show that larger distances also result in larger errors in the measured distances. Using this knowledge when calculating node locations, by letting larger distance measurements have a smaller influence on the determined position, might result in even more accurate localisation.


Bibliography

[1] A. S. Paul and E. A. Wan, “Wi-fi based indoor localization and tracking using sigma-point kalman filtering methods,” in Position, Location and Navigation Symposium, 2008 IEEE/ION, IEEE, 2008, pp. 646–659.

[2] R. Mautz and W. Y. Ochieng, “A robust indoor positioning and auto-localisation algorithm,” Positioning, vol. 1, no. 11, p. 0, 2007.

[3] S. Monica and G. Ferrari, “A swarm-based approach to real-time 3d indoor localization: experimental performance analysis,” Applied Soft Computing, vol. 43, pp. 489–497, 2016.

[4] Z. Zhang and H. Cui, “Localization in 3d sensor networks using stochastic particle swarm optimization,” Wuhan University Journal of Natural Sciences, vol. 17, no. 6, pp. 544–548, 2012.

[5] I. Daubechies, “The wavelet transform, time-frequency localization and signal analysis,” IEEE transactions on information theory, vol. 36, no. 5, pp. 961–1005, 1990.

[6] D. E. Manolakis, “Efficient solution and performance analysis of 3-d position estimation by trilateration,” IEEE Transactions on Aerospace and Electronic systems, vol. 32, no. 4, pp. 1239–1248, 1996.

[7] Y. Zhou, “An efficient least-squares trilateration algorithm for mobile robot localization,” in Intelligent Robots and Systems, 2009. IROS 2009. IEEE/RSJ International Conference on, IEEE, 2009, pp. 3474–3479.

[8] R. Mautz, W. Ochieng, G. Brodin, and A. H. Kemp, “3d wireless network localization from inconsistent distance observations,” Ad Hoc & Sensor Wireless Networks, vol. 3, no. 2-3, pp. 141–170, 2007.

[9] C. Peng, G. Shen, Y. Zhang, Y. Li, and K. Tan, “Beepbeep: a high accuracy acoustic ranging system using cots mobile devices,” in Proceedings of the 5th international conference on Embedded networked sensor systems, ACM, 2007, pp. 1–14. doi: http://dx.doi.org/10.1145/1322263.1322265.

[10] D. Manjarres, J. Del Ser, S. Gil-Lopez, M. Vecchio, I. Landa-Torres, and R. Lopez-Valcarce, “On the application of a hybrid harmony search algorithm to node localization in anchor-based wireless sensor networks,” in Intelligent Systems Design and Applications (ISDA), 2011 11th International Conference on, IEEE, 2011, pp. 1014–1019.

[11] R. Brown and L. Evans, “Acoustics and the smartphone,” in Proceedings of Acoustics, 2011.

[12] Net-centric computing: btconnect (framework for bluetooth connections) android studio project, https://staff.fnwi.uva.nl/e.h.steffens/?page_id=12, Accessed: 10/04/2018.
