Using UAVs for map creation and updating: a case study in Rwanda


M. Koeva*1, M. Muneza1,2, C. Gevaert1, M. Gerke1 and F. Nex1

Aerial or satellite images are conventionally used for geospatial data collection. However, unmanned aerial vehicles (UAVs) are emerging as a suitable technology for providing very high spatial and temporal resolution data at a low cost. This paper aims to show the potential of using UAVs for map creation and updating. The whole workflow is introduced in the paper, using a case study in Rwanda, where 954 images were collected with a DJI Phantom 2 Vision Plus quadcopter. An orthophoto covering 0.095 km² with a spatial resolution of 3.3 cm was produced and used to extract features with sub-decimetre accuracy. Quantitative and qualitative control of the UAV data products was performed, indicating that the obtained accuracies comply with international standards. Moreover, possible problems and further perspectives are also discussed. The results demonstrate that UAVs provide promising opportunities to create high-resolution and highly accurate orthophotos, thus facilitating map creation and updating.

Keywords: UAV, Mapping, Photogrammetry, Urban, Surveying, Updating

Introduction

Geospatial data play an important role in an estimated 80% of our daily decisions (Heipke et al. 2008) and in various urban planning activities. For example, in the context of the recently accepted Sustainable Development Goals, the UN emphasises the need for high-quality and usable data, as ‘data are the lifeblood of decision-making’ (IEAG 2014). Moreover, there is an initiative from the UN on Global Geospatial Information Management which aims to promote the use of geospatial information to address key global challenges. Unfortunately, as discussed by the Regional Centre for Mapping of Resources for Development in Rwanda, lack of funding is a major bottleneck in many developing countries and the required data are often unavailable or outdated (Ottichilo and Khamala 2002). To ensure the usability of spatial data as well as to provide a solid basis for informed decision-making and planning, map updating is imperative. Map updating consists of three main steps: (i) comparing a new data source against the existing basemap, (ii) identifying changes and updating the database and (iii) verifying the logical consistency of the updated version with the old version (Heipke et al. 2008). Aerial or satellite imagery obtained through remote sensing or earth observation is used as a data source for many basemap updating activities. Previous research has demonstrated the use of satellite and aerial imagery as means to extract information for

creating and updating maps (Alexandrov et al. 2004a, 2004b; Ali et al. 2012) as well as to provide input for urban models (Herold et al. 2003). Important features of the urban environment, such as roads and buildings, may then be digitised in the imagery either by experts (e.g. Ottichilo and Khamala 2002) or by a wider public in participatory mapping exercises (Mourafetis et al. 2015). Over the past two decades, considerable research has also focused on automatic feature extraction from high-resolution satellite and aerial images (Liu and Jezek 2004; Babawuro and Beiji 2012; Gruen et al. 2012; Awad 2013; Horkaew et al. 2015). However, the temporal resolution of conventional sensors is limited by the restricted availability of aircraft platforms and the orbit characteristics of satellites (Turner et al. 2012). Another disadvantage is cloud cover, which impedes image acquisition through these platforms. Such limitations restrict the use of satellites or manned aircraft for map updating purposes, as they may increase cost and production time. In order to provide the high-quality and up-to-date information required to support urban governance and informed decision-making, Van der Molen (2015) calls for land surveyors to make use of the potential of new affordable, geospatial technologies.

A suitable example of such emerging technology is Unmanned Aerial Vehicles (UAVs), which are proving to be a competitive data acquisition technique designed to operate with no human pilot on board. The term UAV is commonly used, but other terms, such as drones, Unmanned Aerial Systems (UAS), Remotely Piloted Aircraft (RPA) or Remotely Piloted Aerial Systems (RPAS), have also been frequently used in the geomatics community (Nex and Remondino 2014). UAV refers to the

1 Faculty of Geo-information Science and Earth Observation, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands

2 Rwanda Natural Resources Authority (RNRA), Kigali, Rwanda


aircraft itself which is intended to be operated without a pilot on board, whereas UAS refers to the aircraft and other components that could be required, such as navigation software and communication equipment. According to ICAO Standards1, RPAS are a subset of UAS which are specifically piloted by a ‘licensed “remote pilot” situated at a “remote pilot station” located external to the aircraft’. RPAS refers to the entire system, whereas RPA refers to the aircraft itself. The pilot’s license should address legal, operational and safety aspects. In the geomatics community, the terms UAS and RPAS are often used interchangeably, and will be considered as synonyms in the current paper.

For photogrammetric applications, the payload of the whole system is composed of a camera, a Global Navigation Satellite System (GNSS) and an Inertial Measurement Unit (Colomina and Molina 2014). The camera takes overlapping images as it flies over a study area. These images may be processed through a photogrammetric workflow to obtain a point cloud (or a Digital Surface Model), an orthophoto or a full 3D model of the scene. An on-board GNSS device allows these data products to be georeferenced. However, in the context of low-cost UAVs, the accuracy of such GNSS is often limited. Therefore, supplementary Ground Control Points (GCPs) are usually acquired in the study area, in order to maintain the accuracy of the image block orientation and derived mapping products, such as orthophotos, and to facilitate their integration with other spatial data. These GCPs have to be carefully selected and well distributed, and they should be visible in many images, as well as easily identifiable in the images after the acquisition and measurable with accurate technology, such as survey-grade GNSS. With the current study, we aim, in particular, to analyse the process of orthophoto production using UAVs for map creation and updating as a low-cost solution which can be affordable for many developing countries. Moreover, an important step is to assess the obtained accuracy quantitatively and qualitatively. To address this aim, we make use of a case study in Rwanda.

The paper structure is as follows: first, we introduce the background, the regulatory framework and the methodology for true-orthophoto creation and feature extraction. Using the case study in Rwanda, we perform a quantitative and qualitative control of the UAV data products. For this purpose, we conduct experiments with and without the use of GCPs for the qualitative analysis of the product. This in-depth analysis gives insight into the quality of the data products and thus their fitness for various applications and the importance of GCPs, and describes possible deformations in the orthophotos and how these may be avoided. Finally, we describe a feature extraction framework of digitisation rules which can be applied to ensure logical consistency for map creation and a subsequent updating procedure. This last step is very important as the considerably higher resolution of the UAV images, as opposed to the previous scale of the basemap, allows new features to be visible and existing features to be represented in different ways. The accuracy of digitisation is also assessed and described. General quantitative and qualitative control of the UAV data products and the final outputs are shown. Possible problems and further perspectives are also discussed.

Background and regulatory framework

UAVs were first created and used in military applications where flight recognition in enemy areas, unmanned inspection, surveillance, reconnaissance and mapping of enemy areas without any risk for human pilots were the primary military aims (Nex and Remondino 2014). Nowadays, UAVs are increasingly being used in civil and scientific research activities in different fields of application, for example, agriculture (Grenzdörffer and Niemeyer 2011), mapping (Nex and Remondino 2014), surveying and cadastral applications (Cunningham et al. 2011; Manyoky et al. 2011; Cramer et al. 2013; Barnes et al. 2014), archaeology and architecture (Chiabrando et al. 2011), geology (Eisenbeiss 2009), coastal management (Delacourt et al. 2009), disaster management (Choi and Lee 2011; Molina et al. 2012), damage mapping (Vetrivel et al. 2015) and cultural heritage (Remondino et al. 2011; Rinaudo et al. 2012). With this paper, we aim to investigate the suitability of UAVs for map creation and updating. Other example applications have been demonstrated for monitoring forest fires: Zarco-Tejada and Berni (2012) used a fixed-wing UAV with thermal and hyperspectral sensors. Experiments have also been reported for tree classification (Agüera et al. 2011), Normalised Difference Vegetation Index calculation (Lucieer et al. 2012) and monitoring stream temperatures (Costa et al. 2012).

Technically, UAVs can fly almost everywhere. Their flexibility is high, which allows them to change the observed location and viewing angle easily within a short time. For that reason, it is important to pay attention to the safety of the users of aerial spaces, including manned or other unmanned aircraft, and to the people and property on the ground, as well as to their impact on the environment (Watts et al. 2012). This emerges from a concern regarding how this flying system, which does not have pilots on board, can be safely deployed in public space. With the ability to carry cameras, infrared sensors and facial recognition technology, they can present a serious threat to privacy. However, in daily practice, one obstacle is a missing or unfavourable regulatory framework. The great diffusion and commercialisation of UAVs have pushed several national and international associations to analyse the operational safety of UAVs through the formulation of clear regulations. Motivations behind such regulations include the safety of people on the ground and security against the misuse of UAVs, which can result in accidents and privacy violations as they are flying in shared public space. Also, having clearly formulated regulations is important in order to provide a legislative environment which enables the formulation of proper insurance policies. However, in the past years, each country has defined a different implementation of UAV regulations and a comprehensive and common set of rules is still missing. In August 2016, the Federal Aviation Administration (FAA)2 defined a new set of rules (Washington, DC 20591). Operational limitations were defined, such as a maximal vehicle weight of less than 25 kg, visual line-of-sight operation over rural or uninhabited areas during daylight, a maximum ground speed of 100 mph, a maximum altitude of 400 feet above the ground and a minimum weather visibility of 3 miles from the control station. The rules concerning the responsibilities and certification of the pilot were set as well. The same year, the European Aviation Safety Agency (EASA) developed Advance Notice of Proposed Amendment 2015–103, categorising the vehicles into three classes: open (low risk), specific operation (medium risk) and certified (higher risk). Regulations at the European level usually consider the following three elements: (i) the security and technical specifications of the UAV (the ability to perform safe flights and the weight of the UAV), (ii) the skills of the pilot (who must be certified) and (iii) the surveyed area, which can be uncritical (without humans or important infrastructure) or critical (with humans etc.). These three elements determine where flights are allowed and under which conditions (Dolan and Ii 2013; Nex and Remondino 2014). At the time when this project was done, there were no particular restrictions or regulations for the usage of UAVs in Rwanda, the location of our case study, and flights could be performed after an application for flight permission from the Rwanda Civil Aviation Authority (RCAA) and contact with landowners.

This overview shows that, all over the world, efforts are being undertaken to find a suitable regulatory framework for the (professional) civil use of UAVs. From our point of view, it is thus worthwhile to investigate the use of UAV-based imagery for mapping purposes, even if today the conditions for their usage are not defined everywhere.

Methodology

In this section, we describe the general UAV workflow. Using the case of Rwanda, we perform quantitative and qualitative control of the final UAV data products for map updating. As block deformations can be observed owing to missing or poorly distributed GCPs and insufficient camera calibration (Gerke and Przybilla 2016), the importance of GCPs is analysed. Then, using the high-resolution output, the feature extraction process based on clear feature digitisation rules is described and its accuracy assessed. Such rules ensure the consistency of updates made by various technicians as well as the compatibility between the previous map and the updated version. A schematic overview of the UAV workflow is shown in Fig. 1.

Flight planning, image acquisition and GCP collection

For every aerial surveying project using UAVs or classical acquisition techniques, the first obligatory part is to start

1 UAV workflow


with flight planning. For mapping purposes, this part requires activities such as getting the flight permission, selecting the software and analysing the area and the required pixel size on the ground (ground sample distance, GSD) in detail, among others. Some important aspects, such as the flying height, the use of GNSS and an Inertial Navigation System (INS) on board, and the measurement of GCPs on the ground with the required technical equipment, should be considered as well. The quality of the final product that will be used for mapping – usually the orthophoto – depends greatly on the quality of the acquired images. High overlaps (such as 80%) between images are generally used to improve the quality of the final products, in particular the dense image matching results. The high overlap is also recommended in order to avoid gaps owing to platform instability induced by possible turbulence.
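The flight-planning arithmetic above can be sketched with a nominal pinhole model. The sketch below uses illustrative parameter values (not the parameters achieved in this survey; fish-eye lenses and terrain relief make the real GSD deviate from the pinhole figure):

```python
def ground_sample_distance(pixel_size_m, focal_length_m, flying_height_m):
    """GSD for a nominal pinhole camera: ground size of one pixel."""
    return pixel_size_m * flying_height_m / focal_length_m

def image_footprint(image_width_px, image_height_px, gsd_m):
    """Ground footprint (width, height) in metres covered by one image."""
    return image_width_px * gsd_m, image_height_px * gsd_m

def flight_line_spacing(footprint_width_m, side_overlap):
    """Distance between adjacent flight lines for a given side overlap."""
    return footprint_width_m * (1.0 - side_overlap)

# Illustrative values: 1.4 um pixels, 5 mm focal length, 50 m altitude.
gsd = ground_sample_distance(1.4e-6, 5e-3, 50.0)   # 0.014 m per pixel
w, h = image_footprint(4608, 3456, gsd)            # ~64.5 m x ~48.4 m
spacing = flight_line_spacing(w, 0.75)             # 75% side overlap
```

Lowering the flying height or shortening the line spacing improves GSD and overlap, respectively, at the cost of more flight lines and images per unit area.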

Image orientation

The main objective of photogrammetry is to extract 3D information from 2D images. For this purpose, the interior orientation (IO, defining the position of the projection centre of the camera with respect to the image, the principal distance or focal length and the lens distortions) and the exterior orientation (EO, defining the position of the camera projection centre and the rotation of its optical axis with respect to the mapping frame) of the images have to be computed. In traditional (manned) airborne photogrammetry, the adopted sensors are metric cameras. These cameras have stable interior orientation parameters, usually estimated by the producer through a camera calibration. The first examples of low-weight metric frame cameras for UAVs have recently been developed (Kraft et al. 2016), but in the current study we focus on low-cost solutions as they still represent the large majority of the sensors installed on UAVs. These solutions are non-metric consumer cameras and are already largely used for photogrammetric mapping projects (Barnes et al. 2014). The consequence of using cameras of this category is that the IO parameters need to be estimated from the captured data themselves (self-calibration).

To perform camera calibration and image orientation, extracted common features (tie points) visible in multiple images are needed. The selection of homologous points in the images is nowadays performed automatically using dedicated feature extraction algorithms (Barazzetti et al. 2010). The mathematical relationship between images, camera geometry and ground space is computed during block triangulation, which is commonly combined with an error minimisation strategy and is then referred to as bundle block adjustment (BBA). This includes the following tasks: (1) determine the EO information for each camera in the project as it was at the time of exposure, including the IO parameters for each device; (2) determine the ground coordinates of the tie points measured in the image overlap areas. In a global error minimisation procedure, the errors associated with the image and camera geometry and the associated image measurements are distributed. If GCPs are provided, they help to geometrically support the overall image block and at the same time define a proper datum for the object space. If GNSS/INS information on board the UAV is available, then the collected data help in the automatic tie point extraction because a prediction

of the image overlap is possible in advance. The navigation information also supports the georeferencing of the whole image block since the image projection centres are then already estimated (GNSS-assisted sensor orientation).

When these data are not available or of bad quality, indirect sensor orientation is performed by incorporating GCPs. The distribution of GCPs is of high importance not only for the image orientation but also for the prevention of block deformation effects, which may result from remaining systematic errors in the camera calibration. As concluded by Gerke and Przybilla (2016), to avoid block deformations it is suggested to plan cross-flights (different flight directions and altitudes) for some parts of the study area. This procedure is especially helpful for enhancing the results in flat terrain owing to a more reliable self-calibration.
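The quantity the BBA minimises is the reprojection error of the tie points. A minimal sketch of that residual, under a simple pinhole model with a toy nadir camera (all values illustrative; a full BBA also estimates the IO parameters and uses robust, sparse least squares over all images simultaneously):

```python
import numpy as np

def project(K, R, t, X):
    """Project 3-D points X (N x 3) into an image with intrinsics K and
    world-to-camera rotation R and translation t; returns N x 2 pixels."""
    Xc = (R @ X.T).T + t              # world frame -> camera frame
    uvw = (K @ Xc.T).T                # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]

def reprojection_rmse(K, R, t, X, observed_px):
    """RMSE between observed tie-point measurements and reprojections --
    the quantity a bundle block adjustment drives towards zero."""
    residuals = project(K, R, t, X) - observed_px
    return float(np.sqrt(np.mean(np.sum(residuals**2, axis=1))))

# Toy block: one camera 50 m above the origin, looking straight down.
K = np.array([[3500.0, 0.0, 2304.0],
              [0.0, 3500.0, 1728.0],
              [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])        # nadir view: camera z-axis points down
t = np.array([0.0, 0.0, 50.0])
X = np.array([[5.0, 3.0, 0.0], [-4.0, 6.0, 0.0], [0.0, 0.0, 0.0]])
obs = project(K, R, t, X)             # noise-free observations
assert reprojection_rmse(K, R, t, X, obs) < 1e-9
```

In a real block the adjustment perturbs R, t (and the IO) for every image until this RMSE, accumulated over thousands of tie points, is minimal; GCPs enter as extra observations that pin the block to the ground datum.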

Dense point cloud and DSM generation

A dense matching technique should be applied after accurate IO and EO parameters are available in order to represent the object space through dense point clouds. These point clouds should later on be structured, interpolated if needed, simplified and textured for photo-realistic representation and visualisation (Nex and Remondino 2014). A large number of image matching techniques have been developed and presented in the photogrammetric and computer vision communities over the last decade. These can be divided into two main classes: (i) patch-based approaches and (ii) semi-global approaches. An example of the first class is the multi-image approach presented in Furukawa and Ponce (2010), while an example of the second group is given by Hirschmuller (2008) and subsequent refinements (Rothermel et al. 2012; Wenzel et al. 2013). Patch-based approaches are very often multi-image (i.e. they use many images simultaneously to determine homologous points and their 3D position), while semi-global approaches work with stereo pairs and then merge the generated point clouds into a single dataset afterwards. Their quality is influenced by flight parameters, such as the resolution on the ground (GSD), the image overlap and the sensor quality. The presence of occlusions, shadows and regions with reduced texture might lead to gaps and increases the noise of the generated DSM, which is interpolated from the point cloud. For a more detailed description of matching algorithms, refer to Remondino et al. (2014).
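The interpolation of the dense point cloud into a DSM grid can be sketched as a simple rasterisation step (a deliberately minimal version: real pipelines filter outliers and interpolate the empty cells from neighbours):

```python
import numpy as np

def points_to_dsm(points, cell_size):
    """Rasterise a point cloud (N x 3 array of X, Y, Z) into a DSM grid by
    keeping the highest Z per cell. Empty cells stay NaN and would be
    interpolated from neighbouring cells in a full pipeline."""
    xy_min = points[:, :2].min(axis=0)
    cols = np.floor((points[:, 0] - xy_min[0]) / cell_size).astype(int)
    rows = np.floor((points[:, 1] - xy_min[1]) / cell_size).astype(int)
    dsm = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, z in zip(rows, cols, points[:, 2]):
        if np.isnan(dsm[r, c]) or z > dsm[r, c]:
            dsm[r, c] = z    # surface model: keep the highest point per cell
    return dsm, xy_min

# Three points, two of which fall into the same 1 m cell.
pts = np.array([[0.1, 0.1, 100.0], [0.2, 0.15, 102.5], [1.4, 0.3, 101.0]])
dsm, origin = points_to_dsm(pts, cell_size=1.0)
# dsm[0, 0] keeps 102.5 (the higher of the two co-located points)
```

Keeping the maximum per cell is what makes this a surface model (roofs and canopy) rather than a terrain model; a DTM would instead keep ground returns only.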

True-Orthophoto creation

The final representation to be used for mapping is created through orthorectification, which requires precise surface information to remove the projective distortions of the original images. The orthophoto represents all objects in a map-like projection. The required surface may be obtained using a Digital Terrain Model or a DSM: in the latter case, the result of the orthorectification is called a true-orthophoto. The depiction of the scene in the airborne images, combined with its accurate geometry given by the orthogonal projection, allows even users unskilled in cartography to understand, read and accurately measure objects present in the image (Biasion et al. 2004). However, in order to use this final product as a reliable source for feature extraction, the created orthophoto should be free of misprojections. Therefore, qualitative and quantitative assessments and correction of the orthophoto might be necessary. For example, building facades should not be visible in true-orthophotos, although they might be visible in the original images or conventional orthophotos (Fig. 2).
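The core of orthorectification is backward projection: each ground cell takes its height from the surface model and is projected into the oriented image to fetch a colour. A minimal sketch (pinhole model, nearest-neighbour sampling, all values illustrative; a true-orthophoto additionally needs per-image occlusion checks and mosaicking of multiple images):

```python
import numpy as np

def orthorectify(image, dsm, dsm_origin, cell, K, R, t):
    """Backward orthorectification sketch: for every DSM cell, project the
    ground point (X, Y, Z) into the oriented image and copy the nearest
    pixel. Occlusion handling is omitted for brevity."""
    h, w = dsm.shape
    ortho = np.zeros((h, w, image.shape[2]), dtype=image.dtype)
    for r in range(h):
        for c in range(w):
            ground = np.array([dsm_origin[0] + (c + 0.5) * cell,
                               dsm_origin[1] + (r + 0.5) * cell,
                               dsm[r, c]])
            cam = R @ ground + t                  # world -> camera frame
            u, v, s = K @ cam                     # homogeneous pixel coords
            col, row = int(round(u / s)), int(round(v / s))
            if 0 <= row < image.shape[0] and 0 <= col < image.shape[1]:
                ortho[r, c] = image[row, col]     # nearest-neighbour sample
    return ortho

# Toy scene: uniform image, flat DSM, nadir camera 50 m above the origin.
img = np.full((100, 100, 3), 7, dtype=np.uint8)
dsm = np.zeros((2, 2))
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])
t = np.array([0.0, 0.0, 50.0])
ortho = orthorectify(img, dsm, (0.0, 0.0), 1.0, K, R, t)
```

Errors in the DSM move the ground point and therefore fetch the wrong pixel, which is exactly the facade-visibility defect discussed above.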

Feature extraction

The final orthophoto and the DSM are very useful for manual or semi-automatic feature extraction for map creation or updating. Highly accurate photogrammetric products acquired through traditional platforms, such as satellites or aircraft, have been used for feature extraction for decades (Madzharova et al. 2008; Gruen et al. 2012). Their added value compared to conventional surveying methods in terms of time, costs and accuracy has been proved, even though there are some disadvantages due to the lack of attribute data (e.g. street names) or occluded features. The current study aims to investigate the suitability of orthophotos, specifically those produced from UAV images, for map creation and updating. To obtain high-quality vectorised maps in an ordered, clear and complete way, the feature extraction procedure should be guided by well-formulated rules. In practice, such rules and guiding explanations are combined in so-called extraction guides. Before the manual feature extraction is actually undertaken, these rules should be accepted and approved by the authorised specialists (such as the heads of departments working in municipalities). During digitisation, a unique identifier should be assigned to each extracted feature in the database, along with the attributes defined by the extraction guides.

Geometric quality assessment

The quality of the image orientation and orthophoto was analysed qualitatively and quantitatively. Possible deformations of the orthophoto include radiometric errors caused by imperfect image blending and radiometric differences between the individual UAV images. Deformation could also be visible due to imperfections in the DSM, causing faulty orthorectification of the individual images. Through visual inspection, deformations and artefacts are presented. Their respective causes are discussed, along with measures which can be taken to avoid or ameliorate the defects.

Case study in Rwanda

Existing geospatial data

During recent years, Rwanda has been growing in terms of population and economic development, which raises the need for up-to-date geospatial information to facilitate timely and efficient planning. To support this, a traditional aerial photogrammetric mission was performed over the entire country in 2008 and 2009. Elevation data were generated and orthophotos with high accuracy were produced. A basemap, covering the whole country of 26,338 km², was created through manual digitisation. It includes the following features:

- Boundaries (administrative boundaries)
- Hydrography (rivers and lakes)
- Elevation (contour lines every 25 m, DEM)
- Physical infrastructure (railways, road network, airports, powerlines, harbours, signposts)
- Social infrastructure (schools, health facilities, government offices, markets, place names)
- Thematic data (land cover, land use, built-up areas, parcels)

In 2010, the orthophoto was also used for the production of the Rwanda National Land Use Master Plan at a scale of 1:250 000 and the creation of topographic maps at a scale of 1:50 000. This represented significant progress for the country since, in most developing countries, the maps in use are from the 1970s. Unfortunately, there have been no efforts to update this information since then. Hence, this case study aims to apply a state-of-the-art acquisition technique to partially update


the derived information and thus support future development.

Study area

The area for the current study covers an informal settlement located in Nyarutarama cell, City of Kigali, Rwanda (Fig. 3). This area was selected as there have been major changes since the orthophoto was produced (2009), and it thus illustrates the importance of updating the national basemap, which forms the foundation of many policy and development decisions.

Results

Data acquisition, image processing and orthophoto creation

In May 2015, UAV flights were operated over the area using a DJI Phantom 2 Vision Plus quadcopter, the main characteristics of which are shown in Table 1. Using the Pix4DCapture app4, a flight plan was defined above the study area. The UAV flew autonomously at an altitude of 50 m above the ground, resulting in an average GSD of 3.3 cm. A common problem while using the Phantom is that there is a difference between the number of images planned and the number actually acquired. For our study, a total of 1172 geotagged nadir images were planned and only 954 were obtained. However, an average of 85% forward overlap and 75% side overlap was achieved. The duration of the flight over the study area, including take-off and landing, was approximately 2 hours, and the identification and marking of the GCPs in the images, the image orientation, the dense image matching and the orthophoto creation took about 2 days on a decent desktop computer.

In the case of these non-metric cameras, a self-calibration is mandatory, so pre-calibration is not an option; hence, in our case, the indirect georeferencing cannot be decoupled from the sensor calibration. The acquired images were processed using the Pix4D Mapper photogrammetric software. The camera installed on the Phantom 2 has a fish-eye lens and acquires images using a rolling shutter sensor. Fish-eye lenses have the advantage of acquiring images with large fields of view and high resolution, but they require dedicated camera calibration models to generate results comparable to conventional perspective cameras (Strecha et al. 2015). As already discussed, the cameras installed on a UAV are non-metric and a self-calibration in the BBA is, therefore, mandatory. The electronic shutter, also called a ‘rolling shutter’, uses a sensor that is exposed and reads the scene line by line, instead of exposing the entire image at once. Additional distortions have to be considered and modelled when the UAV is flying too fast or when there are dynamic objects to be captured (Baker et al. 2010; Grundmann et al. 2012). In the case of UAVs, the effect of rolling shutter cameras can be modelled by assuming some

3 Location of the study area and distribution of GCPs

Table 1 UAV properties

Model                   DJI Phantom 2 Vision +
Camera                  PhantomVisionFC 2000
Resolution              16 MP
Sensor width × height   6.48525 × 4.86394 mm
Image width × height    4608 × 3456 pixels
Pixel size              1.4 µm
Focal length            5 mm
Maximum flight time     25 min
Geolocation             On-board GNSS


constant motion speed, and it was demonstrated that this model can provide results comparable to those of global shutter cameras, as reported in Vautherin et al. (2016). The rolling shutter model presented in that paper is also implemented in Pix4D and was, therefore, adopted for the processing of the image block as an additional experiment.

The Phantom 2 carries a consumer-grade GNSS (L1, code signal, expected absolute position accuracy 5–10 m), so high-quality GCPs were collected in order to be able to produce accurately georeferenced products. These points should be measured on static objects which are easily identifiable in the UAV imagery. Figure 3 shows the layout of the points, which were surveyed on the ground using Real-Time Kinematic GNSS with an approximate accuracy of 2 cm. The local coordinate system, TM_Rwanda, was utilised. The points were split into two groups: 7 were used as GCPs for the image block and 6 were used as Check Points (CPs) to assess the accuracy. All points were marked in at least 14 UAV images.

After the image orientation, the dense matching process was initiated using the full resolution of the images to generate a very dense point cloud. The employed software uses a patch-based approach. This process led to the generation of millions of points that were interpolated into the DSM. Using the obtained DSM, the orthorectification process was performed in order to remove relief distortions and to produce a true-orthophoto.

Image orientation experimental results

Three experiments were conducted for the quantitative analysis. For image orientation, the first is GNSS-assisted sensor orientation, where no GCPs captured with geodetic receivers are involved. It provides an indication of the accuracy which can be achieved by this UAV in situations where no external GCPs are available. As UAVs may be utilised for rapid mapping applications in areas with hazardous or limited human accessibility, thus making

it difficult or impossible to take GCPs, knowing the accuracy of the orthophoto obtained using only the on-board GNSS will help to estimate how accurate the obtained model is. This procedure might be very useful for mapping tasks where lower accuracy is acceptable. As Fig. 4 shows, the image orientation with only geotags resulted in a low geolocation accuracy, achieving an RMSE at check points of 0.19 m in X, 0.77 m in Y and 5.67 m in Z. Although this accuracy may be adequate for some mapping applications, it is still very high compared to the spatial resolution of the orthoimagery. The low accuracy is primarily due to the insufficient accuracy of the navigation-grade GNSS unit used to record the camera position at the time of image acquisition. Regarding the low vertical accuracy, the error may also be due to the fact that the UAV used for the present study did not record the absolute altitude of the drone in the image geotags, but rather the relative height above the take-off location. Here, the absolute height was obtained by extracting the elevation of the take-off point from a national DEM (with a spatial resolution of 5 m), introducing errors in the final model. In order to mitigate the systematic shift induced by this procedure, one could simply shift the whole block according to the mean error. This, however, requires some sort of ground truth.
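The mean-error shift mentioned above amounts to subtracting the per-axis bias observed at a few ground-truth points from the whole block. A minimal sketch with synthetic coordinates (not the survey's data); note it removes only a constant offset, not the tilts or scale errors of a deformed block:

```python
import numpy as np

def debias_block(block_points, measured_cps, true_cps):
    """Shift all block coordinates by the mean residual observed at a
    handful of ground-truth points, removing the systematic offset of a
    GNSS-only (geotag-based) block."""
    mean_error = (measured_cps - true_cps).mean(axis=0)   # per-axis bias
    return block_points - mean_error

# Synthetic example: two check points with a large shared vertical offset.
measured = np.array([[100.2, 50.1, 9.3], [200.1, 80.0, 10.2]])
truth    = np.array([[100.0, 50.0, 14.0], [200.0, 80.1, 15.1]])
shifted = debias_block(measured, measured, truth)
# The residuals at the check points are now zero-mean.
```

After the shift, the RMSE at the check points reduces to the standard deviation of the residuals, which is why a bias-dominated Z error (as in experiment 1) benefits most.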

The second experiment includes high-precision GCPs in the image orientation phase, thus maximising the accuracy of the image orientation and the subsequent orthophoto production. After including the 7 GCPs in the image orientation, the accuracy improved significantly (Fig. 4). The model achieves an RMSE at the CPs of 0.13 m (3.94 GSD) in X, 0.07 m (2.12 GSD) in Y and 0.07 m (2.12 GSD) in Z, thereby meeting the requirements of the present study. The third experiment not only includes the high-precision GCPs but also introduces the rolling shutter correction. Here, we can see that this greatly increases the planimetric accuracy, reducing the RMSE at the CPs to 0.04 m (1.21 GSD) in X and 0.05 m (1.51 GSD) in Y, though it slightly increases the RMSE in Z to 0.20 m (6.06 GSD). The minimum, maximum, mean and

4 RMSE obtained with experiment 1 (GNSS-assisted sensor orientation), experiment 2 (Indirect sensor orientation including GCPs without rolling shutter correction) and experiment 2b (Indirect sensor orientation including GCPs with rolling shutter correction)


standard deviations of the CP residuals for the three experiments are also provided inTable 2.
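The RMSE figures quoted in GSD multiples follow directly from the 3.3 cm ground sampling distance; a quick check of the conversion:

```python
# Express an RMSE in multiples of the ground sampling distance (GSD).
GSD_M = 0.033  # m, the 3.3 cm GSD of the study's orthophoto

def rmse_in_gsd(rmse_m, gsd_m=GSD_M):
    """Convert an RMSE in metres to multiples of the GSD."""
    return rmse_m / gsd_m

print(round(rmse_in_gsd(0.13), 2))  # → 3.94 (X, experiment 2)
print(round(rmse_in_gsd(0.07), 2))  # → 2.12 (Y and Z, experiment 2)
print(round(rmse_in_gsd(0.04), 2))  # → 1.21 (X, experiment 2b)
```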

After the image orientation, the dense matching was performed. The image matching results of the third experiment, including GCPs and rolling shutter correction, have been published and can be freely viewed online (see note 5).

Qualitative assessment of the orthophoto

The processing steps as explained in the previous section resulted in a high-quality RGB orthophoto (Fig. 5) with a GSD of 3.3 cm and a radiometric resolution of 8 bits. A visual inspection of the image indicates that it is suitable for visual interpretation, as features are clearly visible and objects can be easily extracted.

However, some minor deformations were detected and are analysed here to illustrate the types of errors that may be present in orthophotos obtained from UAV imagery. Examples of such distortions are given in Fig. 6. Problems with facade visibility caused by DSM imperfections were observed (Fig. 6a); these can be mitigated by improving the image overlap during the UAV flights. Moving objects, such as cars, may appear in different positions due to the slight time delay between the acquisition of sequential images, so that a moving object appears multiple times in the orthophoto (Fig. 6b). Such effects can be manually removed if there is sufficient overlap between the images. For the current study, this problem was solved using the 'mosaic editor' in Pix4D Mapper, which allows the user to select which image is used to produce a section of the orthophoto. Distortions attributed to the fish-eye lens of the UAV camera were visible in areas at the border of the image block (Fig. 6c). Also, standing objects, such as light poles and pylons, may cause difficulties in the dense matching and DSM generation steps, resulting in incorrect reconstructions in the orthomosaic (Fig. 6d). Finally, the DSM interpolation method influences the orthophoto as well: a direct interpolation method may cause noise at overhanging roof edges (Fig. 6e), whereas an Inverse Distance Weighting interpolation may cause rounded roof corners (Fig. 6f). This final aspect is further addressed in the discussion section of this paper.

Table 2 The minimum, maximum, mean and standard deviations of the CP residuals for all three experimental set-ups

                        Exp 1: GNSS-assisted        Exp 2: GCPs, no rolling     Exp 2b: GCPs, rolling
                        sensor orientation          shutter correction          shutter correction
Residual error          X (m)    Y (m)    Z (m)     X (m)    Y (m)    Z (m)     X (m)    Y (m)    Z (m)
Minimum                −0.207   −1.301   −9.997    −0.187   −0.041   −0.107    −0.078   −0.077   −0.403
Maximum                 0.348    1.065   −0.629     0.089    0.157    0.079     0.024    0.060   −0.019
Mean                   −0.033   −0.315   −4.830    −0.073    0.025   −0.032    −0.024   −0.028   −0.157
Standard deviation      0.189    0.700    2.974     0.105    0.067    0.061     0.033    0.046    0.129
RMSE                    0.192    0.768    5.672     0.128    0.072    0.069     0.041    0.054    0.203

5 The orthophoto obtained from UAV imagery of the study area in Kigali

5 The point cloud visualisation is available at: https://www.itc.nl/eos_public/rwanda2015/portal.html. Please note that the compatibility has been verified with Google Chrome on a Windows platform.

Quantitative assessment of the orthophoto

The quantitative assessment of the orthophoto consists of two aspects: (i) the accuracy at the measured control points and (ii) the geometric accuracy of objects measured in the orthophoto. The same control points utilised to verify the image orientation step yield an RMSE of 4.8 cm in X and 3.7 cm in Y, corresponding to a planimetric accuracy of 6.0 cm. The result shows that the level of detail and the radiometric quality of the orthophoto are completely comparable to the quality of the input images. According to the new standard, the obtained error meets the requirements for the horizontal accuracy class of 7.5 cm (ASPRS 2015).
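The planimetric accuracy quoted above is the usual combination of the per-axis RMSEs; a minimal check (the result, ≈6.1 cm, is consistent with the 6.0 cm reported):

```python
import math

def planimetric_accuracy(rmse_x, rmse_y):
    """Combine per-axis RMSEs into a single horizontal RMSE (RMSE_r)."""
    return math.sqrt(rmse_x ** 2 + rmse_y ** 2)

# RMSE at control points reported for the orthophoto (metres):
print(round(planimetric_accuracy(0.048, 0.037), 3))  # → 0.061, i.e. ~6 cm
```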

The horizontal accuracy obtained without GCPs with the GNSS-assisted sensor orientation method was around 80 cm. Similar studies that use various UAVs to obtain orthophotos report accuracies of 1 m (8 cm GSD) using on-board GNSS (Skarlatos et al. 2013) and 90 cm (Haitao and Lei 2010), while 2–8 m for 130–900 m flying altitude was reported by Küng et al. (2012) and 2–4 m for 50–100 m flying height by Eugster and Nebiker (2007).

The second part of the quantitative assessment consisted in analysing the geometric accuracy of the produced orthophoto at the object level. A number of permanent objects were measured in the field as well as digitised over the orthophoto (see Fig. 7). Results indicate that measurements in the orthophoto replicated the field measurements to an error of less than 0.6% of the actual dimensions (Table 3).
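This object-level comparison reduces to a simple relative error; using the values from Table 3, both objects come out well under the 0.6% bound:

```python
def percent_error(l_field, l_ortho):
    """Relative error of an orthophoto measurement against the field value."""
    return abs(l_ortho - l_field) / l_field * 100.0

# Values from Table 3:
print(round(percent_error(56.200, 56.221), 3))  # → 0.037
print(round(percent_error(0.700, 0.704), 3))    # under 0.6% as reported
```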

Map creation and basemap updating

The orthophoto was then used to extract features to update the existing basemap. The original basemap was used to define the extraction guides for updating existing layers. However, the high resolution and level of detail of the UAV orthophoto make additional objects of interest visible. This provides the opportunity to create new vector datasets representing topographic features (such as drainage and narrow footpaths), potentially enabling more informed decision-making for various urban planning activities. Extraction guidelines were also developed for road centrelines, roads, built-up areas, footpaths and drainage. For each of these datasets, the extraction guide provided a definition and description of the features to be included, and mandated how these should be digitised. The guide also described the geometry type, unique code and required attributes of each feature.

6 Possible distortions in the UAV orthophoto include: a facade visibility; b moving objects; c feature distortion due to insufficient overlap and lens distortions; d distortions of high poles; e distortions due to the DSM interpolation method used, such as grainy roof edges; f rounded roof corners
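A hypothetical machine-readable rendering of such an extraction guide (the field names, codes and rules below are illustrative, not taken from the study) shows how the mandated attributes could be checked automatically during digitisation:

```python
# Illustrative extraction-guide entries; layer names, codes and attributes
# are assumptions for the sketch, not the study's actual guide.
extraction_guide = {
    "drainage": {
        "definition": "Man-made or eroded channel carrying runoff water",
        "geometry_type": "LineString",
        "unique_code": "DRN",
        "required_attributes": ["width_m", "lined"],
        "digitisation_rule": "Digitise along the channel centreline",
    },
    "building": {
        "definition": "Roofed structure visible in the orthophoto",
        "geometry_type": "Polygon",
        "unique_code": "BLD",
        "required_attributes": ["use", "roof_material"],
        "digitisation_rule": "Digitise the roof outline at the eaves",
    },
}

def validate_feature(layer, attributes):
    """Return the mandated attributes missing from a digitised feature."""
    spec = extraction_guide[layer]
    return [a for a in spec["required_attributes"] if a not in attributes]

print(validate_feature("drainage", {"width_m": 0.8}))  # → ['lined']
```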

In total, 948.7 m of roads, 16553.0 m² of buildings, 778.8 m of drainage, 1510.3 m of footpaths and 1078.0 m² of schools were digitised. After digitisation, their spatial accuracy was checked by comparing a number of digitised coordinates with those measured in the field. The features were digitised with an average error of 1.3 cm in X and 3.2 cm in Y, and a planimetric RMSE of 8.8 cm. This is acceptable for mapping according to the new combined standard (ASPRS 2015). The augmented resolution greatly enhanced the interpretability of the orthophoto for feature extraction purposes. It was observed that not only could a variety of features be identified and distinguished more easily, but the high resolution also allowed convenient extraction of new features. For example, features such as drainage lines (man-made and eroded gullies) and structures such as lamp posts and electricity poles were very clearly visible. Metadata was added to the vector layers according to ISO 19115 (2003) to document their quality and facilitate future analyses using the updated basemap. Two examples of 1:1000 scale maps were produced for demonstration purposes: a cadastral (Fig. 8) and a topographic map (Fig. 9). Furthermore, generalisation procedures were applied to rescale the features digitised over the orthophoto to the scale of the national basemap (1:50 000) and update the existing basemap (Fig. 10). A topological control and metadata creation (ISO 2003 template) were conducted before completing the updating procedure.
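The digitised lengths and areas reported above are ordinary planar computations over the feature vertices; a sketch with illustrative coordinates (not the study's basemap data):

```python
import math

def polyline_length(points):
    """Total length of a digitised polyline given (x, y) vertices in metres."""
    return sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))

def polygon_area(points):
    """Shoelace area of a digitised polygon (vertices in order, metres)."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0

# Illustrative coordinates, not taken from the study:
footpath = [(0.0, 0.0), (30.0, 40.0), (30.0, 90.0)]
building = [(0.0, 0.0), (12.0, 0.0), (12.0, 8.0), (0.0, 8.0)]
print(polyline_length(footpath))  # → 100.0
print(polygon_area(building))     # → 96.0
```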

7 Example of measured distances on the ground

8 Cadastral map, 1:1000 scale

Table 3 Comparison between physical objects measured in the field (L field) and in the orthophoto (L ortho)

Object   L field (m)   L ortho (m)   Error (m)   Error (%)
1        56.200        56.221        0.021       0.037
2         0.700         0.704        0.004       0.568


Discussion

The results indicate that with a low-cost UAV and photogrammetric techniques, it is possible to obtain high-quality products. The final orthophoto, with a positional accuracy of 6.0 cm, proves to be suitable for feature extraction (accuracy of 8.8 cm), which is acceptable for mapping according to the new combined standard (ASPRS 2015). Compared to the time and cost of traditional photogrammetric surveys, this technique represents a promising alternative. Unclear or restrictive legislation and the computer processing requirements of large projects are two of the main challenges faced by the application of UAVs for large-scale basemap updating. However, there are projects currently addressing such issues, for example by reducing the computational requirements of processing (NLeSC 2016) in MicMac, an open source photogrammetric software6 (IGN, France). Furthermore, technological developments, such as GPU processing, are continuously reducing the computational bottleneck.

The quality of UAV photogrammetric products depends on a number of factors. Visual errors in the final orthophoto were mainly due to DSM deformation, caused by a lack of sufficient overlap during image acquisition. As a result, the generated point cloud lacked the density required to perform the geometric reconstruction of certain objects. This stresses the importance of rigorous flight planning and precise image acquisition in order to guarantee the quality of the final product. Other deformations, such as the duplication of moving objects, can be removed by manually adjusting the input images selected for the creation of the orthophoto. Such manual processes, however, are also necessary for conventional manned airborne-based products.
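The overlap requirement can be checked at flight-planning time with a simple pinhole model; the sensor parameters below are illustrative assumptions, not the specification of the camera used in the study:

```python
# Flight-planning sketch: nadir images, flat ground, simple pinhole model.
def ground_footprint(flying_height_m, focal_mm, sensor_w_mm, sensor_h_mm):
    """Ground coverage (width, height) of one nadir image, in metres."""
    scale = flying_height_m / (focal_mm / 1000.0)  # image scale factor
    return sensor_w_mm / 1000.0 * scale, sensor_h_mm / 1000.0 * scale

def forward_overlap(footprint_h_m, base_m):
    """Forward overlap fraction for a given distance between exposures."""
    return max(0.0, 1.0 - base_m / footprint_h_m)

# Illustrative parameters: 50 m flying height, 5 mm focal length,
# 6.2 x 4.6 mm sensor (assumed values for the sketch).
w, h = ground_footprint(50.0, 5.0, 6.2, 4.6)
print(round(w, 1), round(h, 1))      # → 62.0 46.0 (metres on the ground)
print(forward_overlap(h, 9.2))       # overlap with 9.2 m exposure spacing
```

Insufficient overlap shrinks the number of rays per ground point, which is exactly what degraded the point-cloud density described above.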

The orthophoto quality also depends on which algorithm is used to interpolate the DSM from the point cloud. A common method is triangulation, which may result in noise around overhanging roof edges (Fig. 6e). Other interpolation methods, such as Inverse Distance Weighting, improve the visual appearance of overhanging roof edges, but cause rounded roof corners (Fig. 6f). Such observations stress that the quality of the DSM and orthophoto also depends on which software and algorithms are utilised.
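The rounding of roof corners under Inverse Distance Weighting can be seen in a toy example: IDW blends heights from both sides of a discontinuity, so a point midway between a 10 m roof and 3 m ground is interpolated to their average rather than snapping to either surface:

```python
import math

def idw_interpolate(points, x, y, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: estimate z at (x, y) from (px, py, pz) samples."""
    num = den = 0.0
    for px, py, pz in points:
        d = math.hypot(px - x, py - y)
        if d < eps:                  # query coincides with a sample point
            return pz
        w = 1.0 / d ** power
        num += w * pz
        den += w
    return num / den

# Toy DSM samples straddling a roof edge (illustrative heights in metres):
samples = [(0, 0, 10.0), (1, 0, 10.0), (0, 1, 3.0), (1, 1, 3.0)]
print(idw_interpolate(samples, 0.5, 0.5))  # → 6.5: the edge is smoothed away
```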

Regarding the georeferencing accuracy, a comparison was made between using only the on-board GNSS and including external GCPs for image orientation. The achieved results indicate that GNSS-assisted sensor orientation using the coordinates provided by the UAV resulted in low accuracy (see Fig. 4), due to the use of a consumer-grade GNSS. This demonstrates that, for maximal accuracy, there is still a dependency on high-quality GCPs. GCP quality can be influenced by the precision of the surveying equipment used, the distribution of GCPs throughout the study area and the positioning error introduced when GCPs are manually marked in the UAV images. Errors incurred in any of these elements will have an impact on the accuracy of the final product.

9 Topographic map 1:1000 scale


Conclusions and recommendations

This work demonstrates that UAVs provide promising opportunities to create a high-resolution and highly accurate orthophoto, thus facilitating map creation and updating. Through an example in Rwanda, the photogrammetric process of obtaining an orthophoto from the individual UAV images is explained. With the support of external high-precision GCPs, the orthophoto created for the case study has planimetric and vertical accuracies of less than 8 cm, thus meeting the requirements for 1:1000 scale maps. A number of factors that influence the quality of the orthophoto are highlighted, as well as possible strategies which can be adopted to mitigate these imperfections.

An important part of the study shows that, due to the high resolution of the UAV orthophoto, new features can be easily extracted and various outputs can be produced. For the basemap updating task of the case study area, clear digitisation rules for feature extraction were defined in order to ensure logical consistency. A so-called extraction guide was created to steer the feature extraction process. It consists of a clear description of each object, requirements for its attributes and specifically defined digitisation rules.

The important role of GCPs in increasing the accuracy of the obtained orthophoto is also demonstrated here. As reported, the geolocation accuracy without external GCPs is relatively low. This can be resolved through the collection of additional high-quality GCPs in the field, which requires extra time for collection and insertion in the software. Therefore, UAVs are currently more suitable for map updating projects over a limited study area and for incremental map updating. However, rapid developments in both UAV platforms, increasing the area covered per flight and improving the accuracies of the on-board GNSS (Gerke and Przybilla 2016), as well as in photogrammetric software, will likely facilitate the processing of larger projects in the foreseeable future.

Acknowledgement

The authors express their gratitude to ITC (https://www.itc.nl/), which financed the UAV image acquisition, and to the Rwanda Natural Resources Authority and the officials of the City of Kigali for their support. We also thank Pix4D for providing us with a research licence of Pix4Dmapper.

Notes on contributors

Mila Koeva is an Assistant Professor working in 3D Land Information. She holds a PhD in 3D modelling in architectural photogrammetry from the University of Architecture, Civil Engineering and Geodesy in Sofia. She also holds an MSc degree in Engineering (Geodesy) from the same institution, obtained in 2001.


After her work in the municipal company GIS-Sofia Ltd. and later in the private company Mapex JSC, combining multidisciplinary approaches for cadastre needs, she moved to the University of Twente, Faculty of Geo-Information Science and Earth Observation (ITC), where she has been teaching topics in photogrammetry and remote sensing. Her main areas of expertise include 3D modelling and visualisation, 3D cadastre, 3D land information, UAVs, digital photogrammetry, image processing, large-scale topographic and cadastral maps, GIS and the application of satellite imagery, among others. She is co-chair of the ISPRS working group WG IV/10: Advanced geospatial applications for digital cities and regions.

Muneza Jean Maurice is currently working as a photogrammetrist at the Rwanda Natural Resources Authority. He obtained his BSc degree in 2010 at the National University of Rwanda and, in 2014, his Master's degree in Environmental Studies at the Open University of Tanzania. In 2015, he obtained his second Master's degree, in Geoinformatics, at the Faculty of Geo-Information Science and Earth Observation (ITC) of the University of Twente. His main interests are photogrammetry, surveying, disaster and humanitarian relief, economic empowerment, and science and technology.

Caroline M. Gevaert is currently pursuing a PhD degree at the Faculty ITC, University of Twente, in the Netherlands. The objective of her research is to investigate how UAVs may be useful for informal settlement mapping. Her research interests span a wide range of topics, from image and point cloud feature extraction methods to advanced machine learning techniques, as well as the potential societal impact and ethics of the use of UAVs in this context. She previously received an MSc degree in Remote Sensing from the University of Valencia (Spain) and an MSc degree in Geographic Information Science from Lund University (Sweden).

Markus Gerke has been an Assistant Professor since 2007 in the Department of Earth Observation Science (EOS) at the University of Twente. He specialises in geometric and semantic information extraction from airborne image sequences, photogrammetry and 3D modelling. He obtained Master's and PhD degrees at the Leibniz University of Hannover, and contributes to projects, including projects funded nationally and by the European Commission, on 3D topography, virtual city modelling, the use of uncalibrated airborne imagery and close-range photogrammetry. He is co-chair of the ISPRS working group II/4 on '3D Scene Reconstruction and Analysis' and PI and Co-PI of two well-recognised benchmark tests in the field of photogrammetry and remote sensing.

Francesco Nex received his Master's degree in Environmental Engineering (2006) and his PhD degree (2010) from TU Turin (Italy). From 2011 to 2015, he was with the 3DOM unit of FBK (Italy), working as a Marie Curie post-doc in the CIEM project from 2011 to 2014 and then as a researcher. In 2015, Francesco moved to the University of Twente, where he is currently Assistant Professor at the Faculty of Geo-Information Science and Earth Observation (ITC), Department of EOS. His main research interests are the use of UAV platforms and oblique imagery, as well as automation in feature extraction from these data. He is currently involved in three European research projects dealing with UAV imagery. He is Chairman of the ICWG I/II (UAVs: sensors and applications) of the ISPRS and was the PI of the ISPRS scientific initiative on the integration of UAVs and oblique imagery.

References

Agüera, F., Carvajal, F., and Pérez, M., 2011. Measuring sunflower nitrogen status from an unmanned aerial vehicle-based system and an on the ground device. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXVIII-1/C2, 33–37.

Alexandrov, A., et al., 2004a. Application of QuickBird satellite imagery for updating cadastral information. The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXV Part B2, 386–391.

Alexandrov, A., et al., 2004b. Application of satellite imagery for revision of topographic map of Sofia. In: 4th International PHOTOMOD users conference (Proceedings), Minsk, pp. 3–10.

Ali, Z., Tuladhar, A., and Zevenbergen, J., 2012. An integrated approach for updating cadastral maps in Pakistan using satellite remote sensing data. International Journal of Applied Earth Observation and Geoinformation, 18, 386–398.

American Society for Photogrammetry and Remote Sensing (ASPRS), 2015. ASPRS positional accuracy standards for digital geospatial data. Photogrammetric Engineering and Remote Sensing, 81 (3), 1–26.

Awad, M.M., 2013. A morphological model for extracting road networks from high-resolution satellite images. Journal of Engineering, 2013, 1–9.

Babawuro, U. and Beiji, Z., 2012. Satellite imagery cadastral features extractions using image processing algorithms: a viable option for cadastral science. International Journal of Computer Science Issues, 9 (4(2)), 30–38.

Baker, S., et al., 2010. Removing rolling shutter wobble. In: 2010 IEEE conference on computer vision and pattern recognition (CVPR), pp. 2392–2399.

Barazzetti, L., Scaioni, M., and Remondino, F., 2010. Orientation and 3D modelling from markerless terrestrial images: combining accuracy with automation. The Photogrammetric Record, 25 (132), 356–381.

Barnes, G., et al., 2014. Drones for peace: Part 1 of 2, design and testing of a UAV-based cadastral surveying and mapping methodology in Albania. In: World Bank conference on land and poverty, Washington DC, USA, 24–27 March 2014.

Biasion, A., Dequal, S., and Lingua, A., 2004. A new procedure for the automatic production of true orthophotos. The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 35, 1682–1777.

Chiabrando, F., et al., 2011. UAV and RPV systems for photogrammetric surveys in archaeological areas: two tests in the Piedmont region (Italy). Journal of Archaeological Science, 38 (3), 697–710.

Choi, K. and Lee, I., 2011. A UAV based close-range rapid aerial monitoring system for emergency responses. ISPRS – International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXVIII-1/C22, 247–252.

Colomina, I. and Molina, P., 2014. Unmanned aerial systems for photogrammetry and remote sensing: a review. ISPRS Journal of Photogrammetry and Remote Sensing, 92, 79–97.

Costa, F.G., et al., 2012. The use of unmanned aerial vehicles and wireless sensor network in agricultural applications. In: IEEE international geoscience and remote sensing symposium (IGARSS), Munich, Germany, 22–27 July 2012.

Cramer, M., et al., 2013. On the use of RPAS in national mapping – the EuroSDR point of view. ISPRS – International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XL-1/W2, 93–99.

Cunningham, K., et al., 2011. Cadastral audit and assessments using unmanned aerial systems. In: UAV-g: conference on unmanned aerial vehicle in geomatics, Zurich, Switzerland, 14–16 September 2011.

Delacourt, C., et al., 2009. DRELIO: an unmanned helicopter for imaging coastal areas. Journal of Coastal Research, 56 (SI), 1489–1493.

Dolan, A.M. and Ii, R.M.T., 2013. Integration of drones into domestic airspace: selected legal issues. Current Politics and Economics of the United States, Canada and Mexico, 15 (1), 107–137.

Eisenbeiss, H., 2009. UAV photogrammetry. PhD Thesis. Institut für Geodesie und Photogrammetrie, ETH Zürich, Switzerland.

Eugster, H. and Nebiker, S., 2007. Geo-registration of video sequences captured from mini UAVs – approaches and accuracy assessment. In: The 5th international symposium on mobile mapping technology, Padua, Italy, 29–31 May 2007.

Furukawa, Y. and Ponce, J., 2010. Accurate, dense, and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32 (8), 1362–1376.

Gerke, M. and Przybilla, H.J., 2016. Accuracy analysis of photogrammetric UAV image blocks: influence of onboard RTK-GNSS and cross flight patterns. Photogrammetrie - Fernerkundung - Geoinformation, 2016 (1), 17–30.

Grenzdörffer, G.J. and Niemeyer, F., 2011. UAV based BRDF-measurements of agricultural surfaces with pfiffikus. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 38 (1/C22), 229–234.

Gruen, A., Baltsavias, E., and Henricsson, O., eds., 2012. Automatic extraction of man-made objects from aerial and space images (II). Boston: Birkhäuser.

Grundmann, M., et al., 2012. Calibration-free rolling shutter removal. In: 2012 IEEE international conference on computational photography (ICCP), Seattle, pp. 1–8.

Haitao, X. and Lei, T., 2010. Method for automatic georeferencing aerial remote sensing (RS) images from an unmanned aerial vehicle (UAV) platform. Biosystems Engineering, 108 (2), 104–113.

Heipke, C., Woodsford, P.A., and Gerke, M., 2008. Updating geospatial databases from images. In: Advances in photogrammetry, remote sensing and spatial information sciences: 2008 ISPRS congress book. Boca Raton: CRC Press, 355–362.

Herold, M., Goldstein, N.C., and Clarke, K.C., 2003. The spatiotemporal form of urban growth: measurement, analysis and modeling. Remote Sensing of Environment, 86 (3), 286–302.

Hirschmuller, H., 2008. Stereo processing by semiglobal matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30 (2), 328–341.

Horkaew, P., Puttinaovarat, S., and Khaimook, K., 2015. River boundary delineation from remote sensed imagery based on SVM and relaxation labeling of water index and DSM. Journal of Theoretical and Applied Information Technology, 71 (3), 376–386.

Independent Expert Advisory Group on a Data Revolution for Sustainable Development, 2014. A world that counts: mobilizing the data revolution for sustainable development. Available at: <http://www.undatarevolution.org/wp-content/uploads/2014/12/A-World-That-Counts2.pdf> [Accessed 19 April 2016].

Kraft, T., et al., 2016. Introduction of a photogrammetric camera system for RPAS with highly accurate GNSS/IMU information for standardized workflows. ISPRS – International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XL-3/W4, 71–75.

Küng, O., et al., 2012. The accuracy of automatic photogrammetric techniques on ultra-light UAV imagery. In: UAV-g: conference on unmanned aerial vehicle in geomatics, Zurich, Switzerland, 14–16 September 2011.

Liu, H. and Jezek, K.C., 2004. Automated extraction of coastline from satellite imagery by integrating Canny edge detection and locally adaptive thresholding methods. International Journal of Remote Sensing, 25 (5), 937–958.

Lucieer, A., et al., 2012. Using a micro-UAV for ultra-high resolution multi-sensor observations of Antarctic moss beds. ISPRS – International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXIX-B1, 429–433.

Madzharova, T., et al., 2008. Mapping from high resolution data in GIS SOFIA Ltd. In: ISPRS 2008: proceedings of the XXI congress: silk road for information from imagery, 3–11 July, Beijing, China, Comm. IV, WG IV/9. Beijing: ISPRS, pp. 1409–1412.

Manyoky, M., et al., 2011. Unmanned aerial vehicle in cadastral applications. In: UAV-g: conference on unmanned aerial vehicle in geomatics, Zurich, Switzerland, 14–16 September 2011.

Molina, P., et al., 2012. Drones to the rescue! Unmanned aerial search missions based on thermal imaging and reliable navigation. Inside GNSS, 7, 36–47.

Mourafetis, G., et al., 2015. Enhancing cadastral surveys by facilitating the participation of owners. Survey Review, 47 (344), 316–324.

Nex, F. and Remondino, F., 2014. UAV for 3D mapping applications: a review. Applied Geomatics, 6 (1), 1–15.

NLeSC, 2016. Improving open-source photogrammetric workflows for processing big datasets. Available at: <https://www.esciencecenter.nl/project/improving-open-source-photogrammetric-workflows-for-processing-big-datasets> [Accessed 18 April 2016].

Ottichilo, W. and Khamala, E., 2002. Map updating using high resolution satellite imagery – a case of the Kingdom of Swaziland. International Archives of Photogrammetry and Remote Sensing, 34 (6/W6), 89–92.

Remondino, F., et al., 2011. UAV photogrammetry for mapping and 3D modeling – current status and future perspectives. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXVIII-1/C22, 25–31.

Remondino, F., et al., 2014. State of the art in high density image matching. The Photogrammetric Record, 29 (146), 144–166.

Rinaudo, F., et al., 2012. Archaeological site monitoring: UAV photogrammetry can be an answer. ISPRS – International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXIX-B5, 583–588.

Rothermel, M., et al., 2012. SURE: photogrammetric surface reconstruction from imagery. In: Proceedings LC3D workshop, Berlin (Vol. 8).

Skarlatos, D., et al., 2013. Accuracy assessment of minimum control points for UAV photography and georeferencing. In: First international conference on remote sensing and geoinformation of the environment (RSCy2013), Paphos, Cyprus, 8–10 April 2013.

Strecha, C., et al., 2015. Quality assessment of 3D reconstruction using fisheye and perspective sensors. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, II-3/W4, 215.

Turner, D., Lucieer, A., and Watson, C., 2012. An automated technique for generating georectified mosaics from ultra-high resolution unmanned aerial vehicle (UAV) imagery, based on structure from motion (SfM) point clouds. Remote Sensing, 4 (5), 1392–1410.

Van der Molen, P., 2015. Rapid urbanisation and slum upgrading: what can land surveyors do? Survey Review, 47 (343), 231–240.

Vautherin, J., et al., 2016. Photogrammetric accuracy and modeling of rolling shutter cameras. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, III-3, 139–146.

Vetrivel, A., et al., 2015. Identification of damage in buildings based on gaps in 3D point clouds from very high resolution oblique airborne images. ISPRS Journal of Photogrammetry and Remote Sensing, 105, 61–78.

Watts, A.C., Ambrosia, V.G., and Hinkley, E.A., 2012. Unmanned aircraft systems in remote sensing and scientific research: classification and considerations of use. Remote Sensing, 4 (6), 1671–1692.

Wenzel, K., Rothermel, M., and Fritsch, D., 2013. SURE – The ifp software for dense image matching. Photogrammetric Week '13. Berlin: Wichmann/VDE Verlag.

Zarco-Tejada, P. and Berni, J., 2012. Vegetation monitoring using a micro-hyperspectral imaging sensor onboard an unmanned aerial vehicle (UAV). In: Proceedings of the EuroCOW 2012, European Spatial Data Research (EuroSDR), Castelldefels, Spain, 8–10 February 2012.
