
Bachelor Informatica

Mobile image stitching using

ad-hoc networks on Android

Robin de Vries

June 20, 2014

Supervisor(s): dhr. drs. A. van Inge

Informatica

Universiteit van Amsterdam


Abstract

Over the past few decades mobile devices have become enormously more powerful, which makes them an increasingly interesting platform to perform calculations on. In this research a rather complex algorithm has been selected for such experimentation: image stitching. It has been investigated how mobile devices can gain a performance boost by distributing this algorithm among separate devices by means of an ad-hoc network. An implementation in the form of an Android application is described, which, by using Wi-Fi Direct and OpenCV, provides a sustainable environment for the experiments. Using this implementation, it can be verified that distributing a part of the algorithm can speed up the stitching process. In addition, a different approach has been attempted as well: by setting up the devices in a tree-based topology, a performance boost might be gained from the benefit of having a naturally distributed dataset. Using these experiments, it can be shown that it is possible to gain a performance boost by using an ad-hoc network to distribute the complex image stitching algorithm over mobile devices.


Contents

1 Introduction
2 Motivation
3 Related work
4 Implementation
  4.1 Used techniques and tools
  4.2 The Android application
  4.3 Measuring performance
  4.4 Hardware
5 Experiments
  5.1 Verifying that the implementation works
  5.2 Splitting the stitching pipeline
  5.3 Distributing with a tree-based topology
6 Conclusions
  6.1 Recommendations for future work

Bibliography

Appendices
A Reference images
B Benchmarks
  B.1 SURF benchmarks
  B.2 Transmission benchmarks
  B.3 Stitching benchmarks
C SURF benchmark code


CHAPTER 1

Introduction

Over the past few decades mobile devices have become enormously more powerful. In fact, the latest mobile phones are almost as powerful as computers were a few years ago [5]: the gap between full-size computers and mobile devices is shrinking. As with full-size computers, there are heavy computing tasks that might be calculated distributively to gain a performance boost. When doing those calculations on a mobile platform, a peer-to-peer network can be a flexible solution for communication between the devices, because the topology within such a network is not fixed, and therefore more dynamic than in traditional networks. The fact that mobile devices have become more powerful should spark interest in investigating whether a certain performance boost can be achieved by having mobile devices, co-operating with each other by means of an ad-hoc network, apply a complex algorithm to a naturally distributed dataset.

These are the accompanying research questions:

• whether an ad-hoc network can be set up in such a fashion that it provides a sustainable environment to carry out the experiment;

• whether a performance gain can be achieved by distributing parts of the complex algorithm across multiple nodes as opposed to a single-node configuration;

• and whether a performance gain can be achieved by distributing the full algorithm across the multiple nodes, rather than distributing parts of the algorithm.

For this research the complex algorithm of choice is image stitching, in which images from a pool that share at least some overlapping part are stitched together. This algorithm can be used to create panoramas from images that are taken from different angles, or that span a larger view. When these images are taken by various mobile devices, the dataset is naturally distributed. This, together with the fact that image stitching is a sufficiently complex algorithm, which will be discussed later on, should form an adequate benchmark. Furthermore, it is important to keep in mind that, as opposed to desktop computers, the data is relatively large for mobile devices. Considering that Android is widely popular amongst mobile devices, the benchmark will be implemented as an Android application.

The next chapter provides a motivation as to why image stitching is the complex algorithm of choice. The third chapter presents related work; the fourth chapter discusses the implementation, for those who are interested in the internals of the application; the penultimate chapter investigates the aforementioned questions and describes the experiments and their results; and finally the last chapter offers answers to those questions. In addition to these chapters, there is an appendix in which intermediate data can be found.


CHAPTER 2

Motivation

It is only possible to gain a performance boost from distributing an algorithm and its data when the algorithm itself can be distributed. Therefore, the algorithm must be sufficiently complex, i.e. it should be subdividable into multiple stages, allowing it to be split among separate devices, regardless of whether that is actually faster. For the potential to exist that a certain performance boost can be achieved when distributing the algorithm, its calculations should not be mutually dependent, and they should be quite intensive to compute.

Furthermore, a naturally distributed dataset is preferred, because that avoids the necessity to first distribute the data. Matrix multiplication, for instance, is also a very interesting algorithm to use in a distributed manner, but its data would first have to be scattered over the various devices. This is not the case when images are taken by several mobile devices, where, comparable to a sensor network, each device has its own sensor, namely a camera, with which it is capable of acquiring its own set of data and processing it, whilst collecting data from other devices interchangeably. What makes this even more interesting is that images themselves are quite large, and therefore take quite some effort to process.

Thus, we can conclude that the algorithm of choice should be non-trivial, and therefore worth optimising, and that it should be complex and separable, so that multiple devices can perform computations simultaneously. Image stitching definitely meets these criteria. The former is met because finding the best transformations for different images to create one single image is a non-trivial problem. Besides that, it is also relatively time-consuming, because all the images must be scanned for key points, and those points must be matched against all other images. The latter is met because it is not necessary to stitch all the images at once. Depending on how image stitching scales when applied to more and more images, a performance boost can be achieved when distributing this task.


CHAPTER 3

Related work

This chapter shortly summarises some related work and its relation to this research. Image stitching and ad-hoc networks of mobile devices are not rare research topics within the field of computer science, but the combination of the two seems to be quite exotic.

That image stitching is a complex algorithm, as mentioned earlier, is described by Brown and Lowe [8]. They present an algorithm for automatic image stitching that consists of at least eight sub-algorithms. The algorithm suggested in their article has been used to build the pipeline of the OpenCV stitcher that the implementation, described in the next chapter, uses.

Büsching, Schildt and Wolf [9] have shown how to build a computing cluster with Android devices in an, according to them, “ad-hoc” kind of way, but it is questionable whether their test environment can really be called an ad-hoc network: besides Wi-Fi, all the devices were also connected to a USB hub and controlled by a “Control PC”. This is a different approach from what will be used here, but the article shows that a network of Android devices can improve the performance of some calculations.

Xiong and Pulli [15], [16] have shown how to do image stitching on mobile devices. Later Yang et al. [17] showed how to speed up and refine the mobile stitching process by using sensors from the mobile device. Using this information could offer a gain in performance for the approach that is to be presented, but this will not be in the scope of this research.

Tron and Vidal [14] have investigated the challenges of distributing computer vision algorithms, and the challenges they have described are similar to what this research tries to resolve.


CHAPTER 4

Implementation

This project has been, as mentioned earlier, implemented as an Android application. This chapter gives a comprehensive overview of the techniques the application uses and of how it works.

4.1 Used techniques and tools

This section describes the tools used in the implementation for building an ad-hoc network and for stitching the images.

4.1.1 Ad-hoc network

A very important aspect of this project is the connection between the separate devices. The choice of which technology to use has a great impact on the performance. A logical choice would be Bluetooth, because it seems easy to set up and many devices support it, but Bluetooth has its disadvantages: with a maximum speed of one megabit per second it is slow compared to modern technologies [10]. It also seems that, to keep production costs low, the manufacturers of mobile devices have made some concessions: some devices can connect to at most three other devices, and the maximum distance between those devices cannot exceed ten meters. So using Bluetooth is not ideal, but what else could be an alternative?

The Wi-Fi specification offers, compared to Bluetooth, connections that are fast and robust, but has the problem that it uses a server-client topology with a predefined access point. Therefore it does not seem usable here, or is it? To compete with Bluetooth and to stimulate peer-to-peer applications, the Wi-Fi Alliance designed the Wi-Fi Peer-to-Peer (P2P) specification, also known less formally as Wi-Fi Direct. According to this specification, devices can connect to each other using Wi-Fi without a predefined access point. This works by designating one device as the “group owner”. The group owner creates its own software access point (“Soft AP”); the other devices can connect to this access point as if it were a normal access point. Under the hood, the verification process uses push-button or PIN-code Wi-Fi Protected Setup (WPS), which is known from the traditional Wi-Fi standard. Contrary to Bluetooth, two of the major advantages of Wi-Fi Direct are the maximum distance (up to 200 meters) and the possible bandwidth (up to 250 megabits per second) between the devices [4]. Since Android 4.0 (API 14), Wi-Fi Direct is available on mobile Android devices with the appropriate hardware on board [11]. Because the test devices run Android 4.1 and have Wi-Fi Direct capabilities, the choice was made to use Wi-Fi Direct for this project.
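To give an impression of how an application talks to this framework, the sketch below shows peer discovery and connection with Android's Wi-Fi P2P API. It is a minimal sketch of the general approach rather than this application's actual code; the BroadcastReceiver that delivers the discovered peer list and all error handling are omitted.

import android.content.Context;
import android.net.wifi.p2p.WifiP2pConfig;
import android.net.wifi.p2p.WifiP2pDevice;
import android.net.wifi.p2p.WifiP2pManager;

public class P2pSketch {
    // Discover nearby peers and connect to one of them; the framework
    // negotiates which device becomes the group owner.
    void connectTo(Context ctx, WifiP2pDevice peer) {
        final WifiP2pManager manager =
                (WifiP2pManager) ctx.getSystemService(Context.WIFI_P2P_SERVICE);
        final WifiP2pManager.Channel channel =
                manager.initialize(ctx, ctx.getMainLooper(), null);

        manager.discoverPeers(channel, new WifiP2pManager.ActionListener() {
            @Override public void onSuccess() { /* peers arrive via a broadcast */ }
            @Override public void onFailure(int reason) { /* discovery failed */ }
        });

        WifiP2pConfig config = new WifiP2pConfig();
        config.deviceAddress = peer.deviceAddress;  // MAC address of the chosen peer
        manager.connect(channel, config, null);     // no ActionListener needed here
    }
}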

4.1.2 Image stitching

In order to stitch images together, the first step is to detect certain distinctive points, known as features or key points, which are not influenced by scale operations and rotations, i.e. they are scale- and rotation-invariant. To extract a vector of features from an image, one can use an algorithm such as SIFT [13]. The resulting feature vectors can then be compared to find matching key points in both images. It is important to note that the images may have been taken from slightly different angles, and may therefore have a different perspective compared to each other. To correct this, a process called warping is applied to one of the images. This is done by computing a matrix that describes the transformation from the matching points, for which an algorithm such as RANSAC [7] is commonly used. However, in some scenarios RANSAC may not be adequate, and a much more complex warper may then be required.
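To illustrate this matching-and-warping technique, the sketch below estimates a homography with RANSAC and warps one image onto another using OpenCV's Java bindings (the 2.4-era features2d and calib3d APIs). It uses the non-patented ORB detector rather than SIFT or SURF, and it illustrates the general technique only; it is not the internal code of the OpenCV stitcher.

import java.util.ArrayList;
import java.util.List;

import org.opencv.calib3d.Calib3d;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.features2d.DMatch;           // org.opencv.core.DMatch in OpenCV 3.x
import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.FeatureDetector;
import org.opencv.features2d.KeyPoint;         // org.opencv.core.KeyPoint in OpenCV 3.x
import org.opencv.imgproc.Imgproc;

public class WarpSketch {
    // Warp img1 into the coordinate frame of img2 via matched key points.
    static Mat warpOnto(Mat img1, Mat img2) {
        FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
        DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);

        MatOfKeyPoint kp1 = new MatOfKeyPoint(), kp2 = new MatOfKeyPoint();
        Mat desc1 = new Mat(), desc2 = new Mat();
        detector.detect(img1, kp1);
        extractor.compute(img1, kp1, desc1);
        detector.detect(img2, kp2);
        extractor.compute(img2, kp2, desc2);

        // Brute-force matching of the binary ORB descriptors.
        DescriptorMatcher matcher =
                DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(desc1, desc2, matches);

        // Collect the matched point pairs.
        List<Point> pts1 = new ArrayList<Point>(), pts2 = new ArrayList<Point>();
        KeyPoint[] k1 = kp1.toArray(), k2 = kp2.toArray();
        for (DMatch m : matches.toArray()) {
            pts1.add(k1[m.queryIdx].pt);
            pts2.add(k2[m.trainIdx].pt);
        }

        // RANSAC rejects outlier matches while estimating the homography.
        Mat h = Calib3d.findHomography(
                new MatOfPoint2f(pts1.toArray(new Point[0])),
                new MatOfPoint2f(pts2.toArray(new Point[0])),
                Calib3d.RANSAC, 3.0);

        Mat warped = new Mat();
        Imgproc.warpPerspective(img1, warped, h, img2.size());
        return warped;
    }
}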

Image stitching is not only a computationally expensive and intensive task, it is also tremendously complex. There are dozens of papers on the topic of stitching images, and implementing this from scratch would take several months. Fortunately, there is an open-source toolkit that can help with many computer vision problems: OpenCV [1]. OpenCV contains an integrated stitcher that creates panoramas for the application without many configuration issues. Figure 4.1 describes how OpenCV stitches the input images into one panorama.

Figure 4.1: the OpenCV stitching pipeline [3].

The downside of OpenCV is that it is written in C++, whilst Android applications have to be written in Java. Google, however, also offers the Native Development Kit (NDK), allowing the integration of libraries written in C/C++, such as OpenCV, on the Android platform. Even though it is possible to use the OpenCV library on Android in this way, the prebuilt library is incomplete: most of the patented algorithms, on which OpenCV's image stitcher depends, have not been included within the binary. Producing a binary in which these algorithms are included would therefore require manually re-compiling OpenCV. To avoid this rather daunting task, JavaCV has been used, which provides a Java interface to, inter alia, OpenCV, including the “non-free” components such as the stitcher module.
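A call into the stitcher through JavaCV might look like the sketch below, which mirrors OpenCV's C++ Stitcher class. The exact package names differ between JavaCV and OpenCV versions (for example, imread lives in opencv_highgui in the 2.4 era and in opencv_imgcodecs later), so the names here are assumptions rather than the application's actual code.

import org.bytedeco.javacpp.opencv_core.Mat;
import org.bytedeco.javacpp.opencv_core.MatVector;
import org.bytedeco.javacpp.opencv_stitching.Stitcher;
import static org.bytedeco.javacpp.opencv_highgui.imread;   // opencv_imgcodecs in later versions
import static org.bytedeco.javacpp.opencv_highgui.imwrite;

public class StitchSketch {
    public static void main(String[] args) {
        // Load every input image into a vector.
        MatVector images = new MatVector(args.length);
        for (int i = 0; i < args.length; i++) {
            images.put(i, imread(args[i]));
        }

        // Run the full stitching pipeline of figure 4.1 in a single call.
        Mat pano = new Mat();
        Stitcher stitcher = Stitcher.createDefault(false);   // false: do not try the GPU
        int status = stitcher.stitch(images, pano);

        if (status == Stitcher.OK) {
            imwrite("panorama.jpg", pano);
        } else {
            System.err.println("Stitching failed with status " + status);
        }
    }
}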


4.2 The Android application

The techniques described in the previous section are combined in an Android application. This application forms an ad-hoc network using Wi-Fi Direct and stitches the images together with OpenCV. The application has three “stages”:

1. setting up an ad-hoc network;

2. configuring the stitcher;

3. shooting, transferring and stitching the images.

All three stages will be explained, along with a view of how the application is implemented internally.

4.2.1 Setting up an ad-hoc network

When starting the application, the first thing the user should do is create an ad-hoc network between the individual devices. By pressing the “discover” button in the top-right corner the application calls the Wi-Fi Direct framework and starts searching for other devices. When selecting the name of a device in the side bar, the application will connect to the device (see figure 4.2).

The Wi-Fi Direct framework will designate a group owner. The group owner is the only device in the ad-hoc network that is aware of all the other devices. Each device knows the IP-address of the group owner and whether it itself is the group owner or not. Ironically, a device that is not the group owner is not aware of its own IP-address. Thus, the application requires a mechanism to exchange such information. In this case each device starts a configuration server on port 1337 after the connection has been established and a group has been formed. After that, the “new device” protocol is initiated as described below:

1. the device that is not the group owner will send a “HELLO”-message to the group owner;

2. the group owner adds the device to its list of devices;

3. the group owner sends every device a “DEVICES”-message. This message contains the IP-address of the receiver, as well a list of the other IP-addresses in the network;

4. each device updates its list of devices, and the new device stores its own IP-address.

Table 4.1 gives an overview of the protocol on a byte level.

Message          ID   Payload
HELLO            0x1
DEVICES          0x2  IP-address, num of devices, IP-addresses
SOURCE           0x3  0x0 (false) or 0x1 (true)
STITCH           0x4
WAVE CORRECTION  0x5  0x0 (false) or 0x1 (true)
FEATURE FINDER   0x6  0x0 (ORB) or 0x1 (SURF)
THRESHOLD        0x7  index of the threshold

Table 4.1: the messages of the configuration protocol on a byte level.
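As a concrete example of this byte-level protocol, the sketch below serialises a DEVICES message following Table 4.1. The precise wire format, 4-byte IPv4 addresses and a one-byte device count, is an assumption based on the table, not the application's actual code.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.InetAddress;
import java.util.List;

public class ProtocolSketch {
    static byte[] encodeDevices(InetAddress receiver, List<InetAddress> others)
            throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeByte(0x2);                  // DEVICES message ID
        out.write(receiver.getAddress());    // tells the receiver its own IP-address
        out.writeByte(others.size());        // number of other devices in the group
        for (InetAddress address : others) {
            out.write(address.getAddress()); // 4 bytes per IPv4 address (assumed)
        }
        return buf.toByteArray();
    }
}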


Figure 4.2: the state of the side bar upon discovering other peers.

Figure 4.3: the configuration options after connecting.

4.2.2 Configuring the stitcher

After the devices have been connected, and after they have become aware of each other's presence, the application transforms into a stitching application. In the left side bar, as shown in figure 4.3, the user can configure the following options:

1. whether the camera is enabled;

2. the destination of the final image;

3. whether the wave correction is enabled;

4. the algorithm to be used for finding features (SURF or ORB);

5. the confidence threshold.

If the destination of the final image is the device itself, the result is not sent but drawn on an “ImageView” in the centre of the screen. Otherwise the application sends the result to the selected destination. The result consists of the stitched input images. Naturally, when only one source is present, no images are stitched.


When an image destination is selected, the ConnectionManager sends “SOURCE”-messages to all devices. When a device receives true, it starts a FileServer on port 4000 plus the last octet of the IP-address of the sending device. For example, when device 192.168.49.169 sends a SOURCE true to device 192.168.49.103, the latter starts a FileServer on port 4169. The started FileServer waits for the resulting image. The reason why every source image has its own port is to make it possible to receive multiple files simultaneously. If all the files were sent to the same socket, all the devices would have to wait for each other, which would stagnate the process.
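The port computation itself fits in one line; the method name below is hypothetical.

import java.net.InetAddress;

public class PortSketch {
    // Port on which the image of a given source device is received,
    // e.g. 192.168.49.169 -> 4000 + 169 = 4169.
    static int fileServerPort(InetAddress sender) {
        byte[] octets = sender.getAddress();
        return 4000 + (octets[3] & 0xFF);   // mask because Java bytes are signed
    }
}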

When a device receives a false “SOURCE”-message, it will close the listening socket for that particular device, if it was actually listening.

4.2.3 Starting the stitching process

When all the devices have been configured properly, the user can press the “Stitch”-button, whereupon the device on which the button was pressed sends a “STITCH”-message to all the other devices. It should not matter on which device the button is pressed. After sending or receiving a “STITCH”-message, the stitch process is started. If the camera has been enabled, a picture is taken. When all source images have been collected, two things can happen: either the source image is transferred to its destination, or the source images are stitched, whereupon the result is transferred to its destination. The stitching is done in a separate thread, which calls the OpenCV stitcher upon being started.

4.3 Measuring performance

It is essential that a precise measurement tool is available, because measuring the performance of the stitcher is an important aspect of this project. This measurement tool is implemented as “the Logger”, which can be found on the right side of the screen. The Logger shows all important events that have occurred, such as an incoming picture or the command to initiate the stitching process. The timings of different devices cannot easily be compared, because all the clocks would have to be synchronised. Nevertheless, the time between two events on the same device can be measured. A time stamp is extracted from “System.nanoTime()”, which “returns the current value of the most precise available system timer, in nanoseconds” [2], according to the Java documentation. Listing 4.1 shows an example output from the Logger.

0.000000: Started
0.712305: Picture taken, stop camera
1.537703: Incoming picture from /192.168.49.127
1.538621: Incoming picture from /192.168.49.112
1.608127: Incoming picture from /192.168.49.103
1.61634:  Incoming picture from /192.168.49.79
2.192132: Picture from /192.168.49.112 transferred
2.26637:  Picture from /192.168.49.127 transferred
2.441703: Picture from /192.168.49.103 transferred
2.454558: Picture from /192.168.49.79 transferred
2.455457: Stitcher thread started
          WaveCorrection: true  Treshold: 1.0  SURF: true
44.270797: Stitching successful

Listing 4.1: an example log file.

Looking at the example log, we can see that it took approximately 44 seconds between receiving the command to initiate the stitching process and the finalisation of the stitching process. This example used five input images, of which four external images were transferred in less than a second per picture. The thread that stitched the five pictures needed almost 42 seconds to stitch the images.
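A Logger along these lines can be sketched as follows; the class and method names are assumptions, not the application's actual code.

import java.util.Locale;

public class Logger {
    private final long start = System.nanoTime();
    private final StringBuilder events = new StringBuilder();

    // Record an event with a time stamp relative to the moment the
    // Logger was created, in seconds.
    public synchronized void log(String event) {
        double seconds = (System.nanoTime() - start) / 1e9;
        events.append(String.format(Locale.US, "%.6f: %s%n", seconds, event));
    }

    public synchronized String dump() {
        return events.toString();
    }
}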

4.4 Hardware

During the experiments the application has been run on tablets provided by the UvA: the Lenovo IdeaTab A2109A-F. This tablet has a front (1.3 megapixel) and a rear camera (3 megapixel). The processor is an NVIDIA Tegra 3 T30SL 1.2 GHz quad core [12]. After installing the latest updates, it runs Android 4.1.1. During the experiments only the 3 megapixel rear camera has been used.


CHAPTER 5

Experiments

This chapter describes the experiments that have been carried out to evaluate the research questions mentioned in the introduction. The experiments use the implementation described in the previous chapter. Each question is addressed in its own section. All the benchmarks have been done using four reference images, in order to increase the reliability of the experiments. These reference images and their specifications can be found in appendix A. Each experiment has been executed five times using these reference images, to increase the accuracy of the results.

5.1 Verifying that the implementation works

The first research question, concerning the possibility to set up a network and an environment sustainable enough to run the experiments on, can be answered by testing the implementation described in the previous chapter. With three tablets an image has been shot on each device; all the images have a resolution of 2048x1536 pixels. Two of the three images were transmitted to the third tablet, which performed the stitching, as shown in figure 5.1. Figure 5.2 shows the result of this experiment, and thereby shows that it is possible to set up a sustainable ad-hoc network.

Figure 5.1: devices I and II transmit images A and B to device III, which stitches them together with its own image C into image A+B+C.


Figure 5.2: one of the first successfully stitched images using three tablets.

5.2 Splitting the stitching pipeline

The second research question is whether it is possible to gain a performance boost by distributing parts of the complex algorithm. The most basic approach is to distribute some task that must be calculated for every image to the device that shoots the image. As the stitching pipeline shows (figure 4.1), there are not many suitable stages: from the point where features must be matched, the images have to be available on the same device, so it is not possible to distribute the algorithm beyond that stage. The first stage of the pipeline, finding the features, can be distributed, however. This is the approach used here: if transferring the features is faster than finding them, performance improves. Unfortunately, the implementation does not allow splitting the pipeline easily. The OpenCV pipeline is constructed with recursive function calls that cannot be split without modifying the C++ source code, and because the implementation uses the JavaCV library, which has been compiled externally, this cannot be done without much difficulty. Modifying OpenCV is not within the scope of this research. Therefore an empirical approach is required: it is in fact possible to execute the feature-finding algorithm locally, benchmark it, and see how much data it generates. Additionally, the transmission speed between the devices can be measured, and based on that information a valid estimation of the performance boost of calculating the features distributedly can be made.

5.2.1 Transmission speed

For estimating the transmission speed, the reference images are sent in configurations of two, three or four images simultaneously, because the implementation does not allow more than five devices. The configurations of the devices are described in appendix B in figures B.2, B.3 and B.4. To have a reference measurement, the transmission of a single image has been measured as well, as described in figure B.1.

Results

The results can be found in tables B.2 to B.5. These raw results are combined into a clearer diagram: figure 5.3. The diagram shows that when the number of simultaneously transferred images increases, the speed per image decreases. This is because the total transmission speed the network can handle is, as measured, approximately 4 megabytes per second.


Figure 5.3: the comparison of the transmission speeds (in megabytes per second): the speed per image and the total speed, for one to four simultaneously transferred images.

5.2.2 Sizes of the key point descriptors

The next step is measuring the size of the key point descriptors and how much time it takes to compute them. The measurements have been taken by another application that solely performs the resizing and the feature detection. The relevant code of this implementation can be found in listing C.2.

Results

According to the results of this experiment, a reference image contains 5608 SURF features on average. According to the five measurements, calculating the descriptors of these features took 4.03720 seconds on average. Because each feature is described by 64 floating point numbers [6] of 4 bytes each, the average size of the descriptors is approximately 1402 kilobytes:

5608 features · 64 floats/feature · 4 bytes/float = 1435648 bytes ≈ 1402 kilobytes

A Python script (listing C.1) shows that this is not unrealistic. This script calculates the features of an image and writes the descriptors to a file. The size might be reduced even further by compressing the features before transmission. Depending on how many images are transmitted simultaneously, a different performance boost can be achieved. For example, when a configuration like figure 5.1 is used, sending the descriptors of two images of 1402 kilobytes each takes approximately 0.8 seconds (over two simultaneous connections of 1843 kilobytes per second each). Because the descriptors can be calculated in parallel, a gain of approximately 4 − 0.8 = 3.2 seconds per image can be acquired.

Another example that demonstrates this performance gain with much more significance is using sixteen images on sixteen devices instead of one device:

(16 images · 1402 kilobytes per image) / (4 megabytes per second) ≈ 6 seconds of transmission

Adding the 4 seconds needed for calculating the features, this is significantly faster than the 16 · 4 = 64 seconds it would take to calculate all the features on the same device. Therefore distributing the feature-finding algorithm can be considered a profitable performance gain.
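The estimate can be verified with a few lines of arithmetic; all numbers come from the measurements above.

public class EstimateSketch {
    public static void main(String[] args) {
        int features = 5608;                               // average SURF features per image
        double kilobytes = features * 64 * 4 / 1024.0;     // 64 floats of 4 bytes each, ~1402 kB
        double totalMegabytes = 16 * kilobytes / 1024.0;   // sixteen images, ~21.9 MB
        double seconds = totalMegabytes / 4.0;             // at 4 MB/s, ~5.5 s, i.e. about 6 s
        System.out.println(kilobytes + " kB per image, " + seconds + " s to transmit");
    }
}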


5.3 Distributing with a tree-based topology

The last research question is about gaining a performance boost by using a tree-based topology. Because the dataset is naturally distributed, it might be faster to calculate local results of two neighbouring images; the hypothesis is therefore that this approach is faster on a large scale. Intuitively this makes sense, because the height of a balanced tree is O(log2(n)). On the one hand, this neglects the fact that for every pair of images that is processed, the result may be up to twice as large, which may have a negative impact on both the stitching algorithm and the transmission speed. On the other hand, processing n images centrally means that, in the worst case, the stitching algorithm has to be applied to all possible combinations. With n images there are C(n, 2) = n! / (2! · (n − 2)!) = n(n − 1)/2 such combinations of images, i.e. the worst-case performance is in the order O(n²). Figure 5.5 shows a tree-based topology with four images. To see whether this configuration is faster than a centralised stitcher, experiments have been performed with both a centralised topology and this tree-based topology.
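The pairwise reduction behind this hypothesis can be sketched as follows: neighbouring images are stitched level by level, so n images need only about log2(n) rounds instead of one long sequential job. The stitchPair() method stands in for a two-image stitch running on its own device and is hypothetical.

import java.util.ArrayList;
import java.util.List;

public class TreeStitchSketch {
    interface Image {}  // placeholder for an image type such as OpenCV's Mat

    static Image reduce(List<Image> images) {
        List<Image> level = new ArrayList<Image>(images);
        while (level.size() > 1) {
            List<Image> next = new ArrayList<Image>();
            for (int i = 0; i + 1 < level.size(); i += 2) {
                // Each pair can be stitched on a different device in parallel.
                next.add(stitchPair(level.get(i), level.get(i + 1)));
            }
            if (level.size() % 2 == 1) {
                next.add(level.get(level.size() - 1));  // odd image moves up a level
            }
            level = next;
        }
        return level.get(0);
    }

    // Hypothetical: stitches two (partial) panoramas into one.
    static Image stitchPair(Image a, Image b) {
        throw new UnsupportedOperationException("runs on a remote device");
    }
}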

Set-up

The first experiment involves a centralised topology. Because the dataset is distributed, it has to be transferred to one node, as previously shown in figure 5.1. In order to get an indication of the scalability, this experiment has been carried out for two, three and four nodes. The second experiment involves a tree-based topology. Because the implementation is limited to five devices, the configuration described in figure B.5 is used. This configuration should not differ from the original tree except that fewer pictures have to be transferred.

Results

The raw results can be found in section B.3 of the appendix. Figure 5.4 shows the scalability of the stitcher using a centralised approach. The rightmost bar shows the result of the second experiment. At this scale, it is not any faster than the centralised approach, presumably due to I/O overhead. On a much larger scale this should be less of an issue, but this could not be tested.

Figure 5.4: the average duration (in seconds) of stitching two, three and four images with the centralised topology, compared with the tree-based topology.


Figure 5.5: tree-based topology: images A, B, C and D are stitched pairwise into A+B and C+D, which are then stitched into the final image A+B+C+D. The blue devices captured the pictures and the green devices only stitched the images.

Figure 5.6: the used implementation of the tree-based topology. Devices I to IV capture images A to D; devices II and III then stitch A+B and C+D respectively; finally device V stitches the complete image A+B+C+D.


CHAPTER 6

Conclusions

With the Android application that has been implemented, it certainly is possible to build a sustainable ad-hoc network of mobile devices on top of Wi-Fi Direct. As shown in section 5.2, it is possible to gain a performance boost by distributing a part of the stitching algorithm, namely the feature detection. The other stages of the algorithm cannot easily be distributed because of their data dependencies. Gaining a performance boost using a tree-based topology could not be verified with the implementation that has been built, due to the limit of five devices. The tree-based topology still has the potential to be faster on a larger scale due to the O(log2(n)) growth of the tree, but this has to be investigated further. It has been shown that it is possible to achieve a performance boost by having mobile devices, co-operating with each other by means of an ad-hoc network, apply a complex algorithm to a naturally distributed dataset.

6.1 Recommendations for future work

In a follow-up study there is still some unknown territory left to explore. Foremost, it would be interesting to use a new or improved technology that does not have the limitation that only up to five devices can be used in the ad-hoc network. Once that problem has been dealt with, tests with larger numbers of devices can be performed to determine whether distributing the algorithm really is faster. Attempts might be made with larger tree-based topologies, in such a way that it can be confirmed whether they are actually faster or not. Additionally, it is also worth trying to modify the OpenCV code in such a way that it becomes possible to distribute more parts of the algorithm.


Bibliography

[1] OpenCV homepage. http://opencv.org/.

[2] System (Java Platform SE 6). http://docs.oracle.com/javase/6/docs/api/java/lang/System.html#nanoTime%28%29. Accessed June 3, 2014.

[3] The OpenCV Stitching Pipeline. http://docs.opencv.org/_images/StitchingPipeline.jpg.

[4] Wi-Fi Alliance. Wi-Fi Direct. http://www.wi-fi.org/discover-wi-fi/wi-fi-direct. Accessed June 4, 2014.

[5] Stuart Andrews. Rise of the mobile processors. Accessed June 14, 2014.

[6] Herbert Bay, Tinne Tuytelaars, and Luc Van Gool. Surf: Speeded up robust features. In Computer Vision–ECCV 2006, pages 404–417. Springer, 2006.

[7] Robert C Bolles and Martin A Fischler. A RANSAC-Based Approach to Model Fitting and Its Application to Finding Cylinders in Range Data. In IJCAI, volume 1981, pages 637–643, 1981.

[8] Matthew Brown and David G Lowe. Automatic panoramic image stitching using invariant features. International Journal of Computer Vision, 74(1):59–73, 2007.

[9] Felix Büsching, Sebastian Schildt, and Lars Wolf. DroidCluster: Towards Smartphone Cluster Computing–The Streets are Paved with Potential Computer Clusters. In Distributed Computing Systems Workshops (ICDCSW), 2012 32nd International Conference on, pages 114–117. IEEE, 2012.

[10] Andrew Dursch, David C Yen, and Dong-Her Shih. Bluetooth technology: an exploratory study of the analysis and implementation frameworks. Computer Standards & Interfaces, 26(4):263–277, 2004.

[11] Google. Wi-Fi Peer-to-Peer. http://developer.android.com/guide/topics/connectivity/wifip2p.html. Accessed June 4, 2014.

[12] Lenovo. A2109 Tablet. http://mobilesupport.lenovo.com/nl/nl/products/tablets/a-series/a2109-tablet?type=Tablets. Accessed June 4, 2014.

[13] David G Lowe. Object recognition from local scale-invariant features. In Computer vision, 1999. The proceedings of the seventh IEEE international conference on, volume 2, pages 1150–1157. IEEE, 1999.

[14] Roberto Tron and Rene Vidal. Distributed computer vision algorithms. Signal Processing Magazine, IEEE, 28(3):32–45, 2011.

[15] Yingen Xiong and Kari Pulli. Sequential image stitching for mobile panoramas. In In-formation, Communications and Signal Processing, 2009. ICICS 2009. 7th International Conference on, pages 1–5. IEEE, 2009.


[16] Yingen Xiong and Kari Pulli. Fast panorama stitching for high-quality panoramic images on mobile phones. Consumer Electronics, IEEE Transactions on, 56(2):298–306, 2010.

[17] Qingxuan Yang, Chengu Wang, Yuan Gao, Hang Qu, and Edward Y Chang. Inertial sensors aided image alignment and stitching for panorama on mobile phones. In Proceedings of the 1st international workshop on Mobile location-based service, pages 21–30. ACM, 2011.


APPENDIX A

Reference images

This appendix contains the four images that have been used for benchmarking, together with their properties.

Figure A.1: image A.

Figure A.2: image B.


#                   Width (px)  Height (px)  Size (bytes)  SURF features
Image A             2048        1536         869719        7724
Image B             2048        1536         805565        5057
Image C             2048        1536         807855        5587
Image D             2048        1536         746564        4064
Average                                      807426        5608
Standard deviation                           50293         1545

Table A.1: the properties of the reference images.


APPENDIX B

Benchmarks

B.1 SURF benchmarks

#                   Image A (s)  Image B (s)  Image C (s)  Image D (s)  Average (s)
1                   4.64270      3.78497      4.248567     3.41801
2                   4.62401      3.83457      4.201068     3.53505
3                   4.63309      3.88243      4.188703     3.50507
4                   4.59920      3.77744      4.194026     3.49954
5                   4.64089      3.81080      4.216498     3.50737
Average             4.62798      3.81804      4.209772     3.49301      4.03720
Standard deviation  0.01770      0.04247      0.024072     0.04412      0.03209

Table B.1: the raw benchmarks of the SURF algorithm on the mobile device using the code in listing C.2. The numbers represent the number of seconds the device needed for the calculation.


B.2 Transmission benchmarks

B.2.1 Transmission of one image

Figure B.1: the topology of transmission experiment 1: device I transfers image A to device II.

Measurement  Image  Duration (s)  Size (bytes)  Speed (Bps)  Speed (kBps)
1            A      0.33261       869719        2614831      2554
2            A      0.34847       869719        2495807      2437
3            A      0.30062       869719        2893055      2825
4            A      0.31476       869719        2763109      2698
5            A      0.32124       869719        2707389      2644
6            B      0.30856       805565        2610732      2550
7            B      0.31796       805565        2533518      2474
8            B      0.31019       805565        2597006      2536
9            B      0.30951       805565        2602694      2542
10           B      0.30340       805565        2655151      2593
11           C      0.29840       807855        2707298      2644
12           C      0.25963       807855        3111611      3039
13           C      0.29936       807855        2698625      2635
14           C      0.31693       807855        2549009      2489
15           C      0.30826       807855        2620728      2559
16           D      0.28618       746564        2608749      2548
17           D      0.28014       746564        2664958      2602
18           D      0.30699       746564        2431892      2375
19           D      0.32018       746564        2331694      2277
20           D      0.30024       746564        2486549      2428
Average                                         2634220      2572
Standard deviation                              165863       162

Table B.2: the raw results of transmission experiment 1.


B.2.2 Transmission of two images

Figure B.2: the topology of transmission experiment 2: devices I and II transfer images A and B to device III.

Measurement         Image  Duration (s)  Size (bytes)  Speed (Bps)  Speed (kBps)
1                   A      0.47990       869719        1812277      1770
2                   A      0.41161       869719        2112989      2063
3                   A      0.47375       869719        1835830      1793
4                   A      0.48347       869719        1798929      1757
5                   A      0.52519       869719        1655996      1617
1                   B      0.42790       805565        1882606      1838
2                   B      0.49146       805565        1639120      1601
3                   B      0.42012       805565        1917487      1873
4                   B      0.35477       805565        2270694      2217
5                   B      0.41319       805565        1949647      1904
Average                    0.44813                     1887557      1843
Standard deviation         0.05077                     192622       188

Table B.3: the raw results of transmission experiment 2.


B.2.3 Transmission of three images

Figure B.3: the topology of transmission experiment 3: devices I, II and III transfer images A, B and C to device IV.

Measurement         Image  Duration (s)  Size (bytes)  Speed (Bps)  Speed (kBps)
1                   A      0.62105       869719        1400392      1368
2                   A      0.79072       869719        1099910      1074
3                   A      0.66967       869719        1298734      1268
4                   A      0.65097       869719        1336038      1305
5                   A      0.51077       869719        1702767      1663
1                   B      0.63148       805565        1275670      1246
2                   B      0.66711       805565        1207548      1179
3                   B      0.62142       805565        1296323      1266
4                   B      0.62992       805565        1278847      1249
5                   B      0.73515       805565        1095791      1070
1                   C      0.62386       807855        1294940      1265
2                   C      0.74151       807855        1089471      1064
3                   C      0.59886       807855        1348981      1317
4                   C      0.69027       807855        1170355      1143
5                   C      0.63782       807855        1266586      1237
Average                    0.65470                     1277490      1248
Standard deviation         0.06688                     151660       148

Table B.4: the raw results of transmission experiment 3.


B.2.4 Transmission of four images

Figure B.4: the topology of transmission experiment 4: devices I to IV transfer images A to D to device V.

Measurement         Image  Duration (s)  Size (bytes)  Speed (Bps)  Speed (kBps)
1                   A      0.95180       869719        913759       892
2                   A      0.81724       869719        1064219      1039
3                   A      0.93633       869719        928861       907
4                   A      1.09001       869719        797901       779
5                   A      0.79370       869719        1095785      1070
1                   B      1.10358       805565        729955       713
2                   B      0.89705       805565        898019       877
3                   B      0.82959       805565        971035       948
4                   B      1.15594       805565        696892       681
5                   B      0.89563       805565        899437       878
1                   C      1.02415       807855        788807       770
2                   C      0.82443       807855        979892       957
3                   C      0.82959       807855        973796       951
4                   C      0.79213       807855        1019857      996
5                   C      0.53093       807855        1521591      1486
1                   D      0.49257       746564        1515638      1480
2                   D      0.67343       746564        1108593      1083
3                   D      0.67578       746564        1104751      1079
4                   D      0.78749       746564        948025       926
5                   D      0.66132       746564        1128901      1102
Average                    0.83813                     1004286      981
Standard deviation         0.17842                     215032       210

Table B.5: the raw results of transmission experiment 4.


B.3 Stitching benchmarks

B.3.1 Stitching two images

Measurement         Start (s)  End (s)   Duration (s)
1                   0.77200    18.41581  17.64381
2                   0.63362    18.42394  17.64381
3                   0.83908    18.54942  17.79033
4                   0.86829    18.56303  17.71033
5                   0.72841    18.34356  17.61516
Average                        18.45915  17.68069
Standard deviation             0.09410   0.07056

Table B.6: benchmark of stitching two images, using the same topology as in figure B.2.

B.3.2 Stitching three images

Measurement         Start (s)  End (s)   Duration (s)
1                   1.13687    27.80039  26.66352
2                   1.06407    27.20872  26.14465
3                   0.97936    27.42239  26.44303
4                   0.86833    27.59771  26.72938
5                   0.85479    27.16119  26.30641
Average                        27.43808  26.45740
Standard deviation             0.26752   0.24356

Table B.7: benchmark of stitching three images, using the same topology as in figure B.3.

B.3.3 Stitching four images

Measurement         Start (s)  End (s)   Duration (s)
1                   1.12557    36.41618  35.29062
2                   2.30107    37.97859  35.67752
3                   1.27514    37.46713  36.19199
4                   2.17934    37.89502  35.71569
5                   1.24968    36.68467  35.43499
Average                        37.28832  35.66216
Standard deviation             0.70738   0.34405

Table B.8: benchmark of stitching four images, using the same topology as in figure B.4.


B.3.4 Stitching using four images with a tree-based topology

Figure B.5: the three stages of the configuration of the experiment. Devices I to IV start with images A to D; devices II and III then hold the partial panoramas A+B and C+D; finally device V holds the complete panorama A+B+C+D.

Measurement         Start (s)  End (s)   Duration (s)
1                   21.24570   42.46786  21.22216
2                   20.03105   42.92930  22.89825
3                   19.74882   42.34394  22.59512
4                   19.58180   42.61881  23.03701
5                   20.35508   43.46262  23.10754
Average             20.19249   42.76451  22.57202
Standard deviation  0.65794    0.44739   0.77976

Table B.9: logs of the final destination node of the experiment.


APPENDIX C

SURF benchmark code

import sys
from math import sqrt

import cv2
import numpy as np

if len(sys.argv) != 2:
    print "Please provide an image name as 1st argument"
    exit()

# Load the image and downscale it to a work area of at most 0.6 megapixels,
# the same scale the OpenCV stitching pipeline uses.
image = cv2.imread(sys.argv[1])
height, width = image.shape[:2]  # note: shape is (rows, cols), i.e. (height, width)
work_scale = min(1.0, sqrt(0.6 * 1e6 / (width * height)))
dst = cv2.resize(image, (int(width * work_scale), int(height * work_scale)))

# Detect SURF key points, compute their descriptors and write them to a file.
surf = cv2.SURF()
keypoints, descriptors = surf.detectAndCompute(dst, None)
print descriptors.shape[:2]
descriptors.tofile("test.np", "")

Listing C.1: Python script that writes SURF descriptors to a file.

// Measure how long SURF key point detection takes on the device.
long start = System.nanoTime();
SURF surf = new SURF();

// Load the image and downscale it to a work area of at most 0.6 megapixels,
// matching the scale used by the OpenCV stitching pipeline.
Mat file = imread(filename);
Mat fileRes = new Mat();
double workScale = Math.min(1.0, Math.sqrt(0.6 * 1e6 / file.size().area()));
resize(file, fileRes, new Size(), workScale, workScale, INTER_LINEAR);

// Detect the key points and compute their descriptors.
Mat arg1 = new Mat();
Mat arg3 = new Mat();
KeyPoint kp = new KeyPoint();
surf.detectAndCompute(fileRes, arg1, kp, arg3);

Log.e(TAG, "Detected, time: " + ((System.nanoTime() - start) / 1000000000.0));

Listing C.2: the Java code (using JavaCV) that benchmarks the SURF algorithm on the device.
