The Photogrammetric Journal of Finland, Vol. 26, No. 1, 2018
Received 19.11.2018, Accepted 12.12.2018
DOI: 10.17690/018261.1

MOBILE MAPPING OF NIGHT-TIME ROAD ENVIRONMENT LIGHTING CONDITIONS

Matti T. Vaaja1, Matti Kurkela1, Mikko Maksimainen1, Juho-Pekka Virtanen1,2, Antero Kukko1,2, Ville V. Lehtola2, Juha Hyyppä1,2 and Hannu Hyyppä1,2

1Department of Built Environment, Aalto University, Finland
2The Finnish Geospatial Research Institute, National Land Survey of Finland, Finland

matti.t.vaaja@aalto.fi, matti.kurkela@aalto.fi, mikko.maksimainen@aalto.fi, juho-pekka.virtanen@aalto.fi, antero.kukko@nls.fi, ville.lehtola@nls.fi, juha.coelasr@gmail.com and hannu.hyyppa@aalto.fi

ABSTRACT

The measurement of 3D geometry for road environments is one of the main applications of mobile mapping systems (MMS). We present mobile mapping applied to a night-time road environment. We integrate the measurement of luminances into a georeferenced 3D point cloud. The luminance measurement and the 3D point cloud acquired with an MMS are used in assessing road environment lighting conditions. Luminance (cd/m2) was measured with a luminance-calibrated panoramic camera system, and the point cloud was produced by laser scanners. The relative orientation between the GNSS, IMU, camera, and laser scanner sensors was solved in order to integrate the data sets into the same coordinate system. Hence, the georeferenced luminance values are transferable into geographic information systems (GIS). The method provides promising results for future road lighting assessment. In addition, this article demonstrates the night-time mobile mapping principle applied to a road section in Helsinki, Finland. Finally, we discuss future applications of mobile-mapped luminance point clouds.

1. INTRODUCTION

The purpose of road lighting is to improve the night-time visibility and safety of a road and its immediate environment. Road surface luminance measurement can be performed using an imaging luminance photometer (Eloholma et al. 2010; Cai and Chung 2011). However, measuring large road areas with a stationary instrument is inefficient. Furthermore, it reduces the 3D road environment to a set of 2D images, which are then analysed to obtain the desired lighting measures.

Laser scanning is a technology in which laser distance measurement is applied systematically to measure the geometry of an environment or an object. The measurement data are registered as a three-dimensional point cloud. One implementation of MMS is mobile laser scanning (MLS), which is laser scanning performed from a moving platform. MLS instruments are versatile and efficient equipment for collecting 3D data (Lehtola et al. 2017). An MLS system consists of a laser scanner, a global navigation satellite system (GNSS), and an inertial measurement unit (IMU) (Kukko et al. 2012). Therefore, the MLS output is a georeferenced point cloud, as the scanning is paired with positioning techniques. An MLS is thus well suited to the measurement of road environments. Mobile laser-scanned point clouds have been used to analyse and extract road environment elements (Manandhar and Shibasaki 2002; Jaakkola et al. 2008; Lehtomäki et al. 2010; Jochem et al. 2011; Yang et al. 2012; Wu et al. 2013; Puente et al. 2013; Cabo et al. 2016; Cheng et al. 2017; Holgado-Barco et al. 2017; Kumar et al. 2017; Wang et al. 2017; Yang et al. 2017; Li et al. 2018).

Integrating image textures into a 3D point cloud improves the identification of road environment elements. In this data integration, it is essential to register both the 2D images and the 3D point cloud in a common coordinate frame. The registration can be done by identifying and using environment control features to independently resolve the sensor orientations. Alternatively, a system calibration for the integrated imaging system can be made, or the inter-data-set relative orientation can be solved, in order to successfully register the data (Rönnholm et al. 2007).

Several articles describe the fusion of a 3D point cloud with other sensor data. Ground-penetrating radar and thermographic camera data have been combined with point clouds (Alba et al. 2011; Lubowiecka et al. 2011; González-Aguilera et al. 2012; Costanzo et al. 2015; Teza et al. 2015). Furthermore, point cloud and supplementary sensor data fusion has been used to create virtual environments, georeferenced panorama images, and textured 3D models (Brenner and Haala 1998; Alamús and Kornus 2008; Zhu et al. 2009; Virtanen et al. 2015). Point cloud rendering has also been used to enhance architectural visualisations (Kanzok et al. 2015). Applying terrestrial laser scanning techniques and stationary luminance imaging, 3D point clouds have been coloured with luminance data (Vaaja et al. 2015; Tetri et al. 2017). The imaging systems on mobile platforms have been discussed by Petrie (2010) and Puente et al. (2013), among others.

However, the methodologies fusing point clouds and road surface luminances still lack mobile solutions, which would enable resource-efficient, large-scale employment of these techniques. In addition, although laser scanning is an active measurement method that enables mapping objects at night, its benefits for night-time measurement have received little attention. Figure 1 illustrates the concept, which is presented for the first time in this study.

Figure 1. The principle of modelling illumination conditions in a road environment using laser scanning and luminance imaging: (a) Road scene at night-time; (b) Mobile laser-scanned point cloud; (c) Imaging luminance photometry; (d) Analysis of road surface luminance in 3D.

The objective of this study is to develop and demonstrate mobile mapping of a night-time road environment. In particular, we focus on integrating luminance imaging into a mobile laser scanning system in order to analyse road environment lighting conditions. The benefits of a 3D luminance point cloud and mobile luminance imaging are evaluated. More specifically, we assess the concept by measuring the following road lighting metrics: average luminance, overall uniformity, and longitudinal uniformity. Furthermore, the luminance point cloud is utilized to assess the illumination obstruction caused by roadside vegetation.

2. TEST SITE, METHODS, AND DATA PREPARATIONS

2.1 Experiment Area

The mobile luminance imaging method was tested on a road section in Munkkiniemenranta, Helsinki, Finland (Figure 2). We did not carry out an official, accredited lighting measurement. Instead, the intention was to learn the capabilities and the limitations of an integrated MLS and luminance imaging system. According to the lighting design report, Munkkiniemenranta is a 6.20-metre-wide two-lane street, and the length of the road section is 800 metres (Munkkiniemenranta – Lighting Design Plan 2014).

Figure 2. An aerial image of the experiment area in Munkkiniemenranta, Helsinki. The street named Munkkiniemenranta is the one closest to the shore. The street is highlighted in magenta, and the area of analysis is highlighted with the blue and green pseudo-coloured luminance point cloud. (Helsingin karttapalvelu 2018) (Orthophotograph © Helsinki City Survey Department, 2017)

The luminaires installed in Munkkiniemenranta were AEC Illuminazione LED luminaires with a luminous flux of 8450 lm. The luminaire mounting height was 8 metres, and the distance between two adjacent luminaires was 33 metres. The Munkkiniemenranta area is a quiet suburban area of private houses and small apartment buildings. There are buildings only on the north-eastern side of Munkkiniemenranta; the south-western side is adjacent to a thin strip of park and the seashore. Along the entire north-eastern side of the street stand 33 fully grown trees in leaf. The number of trees between two adjacent luminaires varies from zero to four, and some of their branches occlude the lighting.

2.2 Mobile Laser Scanning and Luminance Imaging Devices

For mobile laser scanning and imaging, we used a Trimble MX2 mobile mapping system with a Ladybug3 panoramic camera. The system contains a Trimble AP20 GNSS-Inertial system for positioning and two SLM-250 Class 1 laser heads for point cloud acquisition (Trimble 2018). The dual-head configuration of SLM-250 laser scanners collects 72,000 points per second; the scan rate is 2 × 20 Hz (1200 rpm), and the pulse rate is 2 × 36 kHz. According to the manufacturer, the range is up to 250 m and the ranging accuracy ±1 cm at 50 m to a Kodak white card. The accuracy of the created point cloud is ±2 cm in the horizontal axes and ±5 cm in the vertical axis under optimal satellite positioning conditions. The scanner field of view is 360 degrees. Figure 3 shows the Trimble MX2 mobile mapping system installed on a vehicle.


Figure 3. Trimble MX2 mobile mapping system installed on a vehicle.

For positioning, the MX2 utilizes the Trimble AP20 GNSS-Inertial system with an Applanix IMU-42 inertial measurement unit at a 200 Hz data rate. Two GNSS antennas, the Applanix GNSS Azimuth Measurement System coupled with IMU data, are used for azimuth determination. The luminance images were captured using a Ladybug3 panoramic camera system, which covers 80% of the full spherical field of view. The Ladybug3 has six 2.0-megapixel 1/1.8” CCD sensors with 3.3 mm fixed-focus lenses at a fixed aperture of f/2.2. It enables uncompressed imaging at 6.5 frames per second, and it can be synchronized to an external trigger. The maximum gain value (18 dB) was used to minimize the exposure time, thus minimizing motion blur.

Figure 4. Mean relative luminance (8-bit value) as a function of absolute luminance (cd/m2), minimum and maximum values, and standard deviation for the Ladybug3.

The same exposure setting (50 ms) was used in the field measurements and during the luminance calibration. The Ladybug3 had been pre-calibrated by the manufacturer for vignetting, interior orientation, and the relative orientations between the cameras. An Optronic Laboratories, Inc. model 455–6–1 reference luminance source was used to produce controlled luminances for the calibration image acquisition. We acquired a set of Ladybug3 images at different reference luminance values and measured the reference luminance using a Konica Minolta CS-2000 spectroradiometer, whose luminance measurement accuracy is ±2% according to the manufacturer (Konica Minolta). Hence, knowing the measured absolute luminance values and the camera exposure settings, we determined the absolute luminances captured with the camera. The reference luminance values were adjusted to match night-time road surface lighting conditions, covering luminances from 0.06 to 8.63 cd/m2. Figure 4 illustrates the signal-to-noise behaviour of the Ladybug3 (Kurkela et al. 2017).
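As an illustration of this calibration step, the sketch below estimates the camera calibration constant K (defined by Equation 1 in Section 2.3) from paired reference readings by least squares. All numeric values, including the ISO-equivalent setting, are hypothetical placeholders rather than values from the actual calibration.

```python
import numpy as np

# Hypothetical paired observations: absolute luminances (cd/m^2) measured
# with the reference spectroradiometer, and the corresponding mean 8-bit
# relative luminance values N_d extracted from the Ladybug3 images.
L_ref = np.array([0.06, 0.23, 0.99, 2.10, 4.40, 8.63])   # placeholder values
N_d   = np.array([1.5, 5.8, 24.9, 52.7, 110.4, 216.6])   # placeholder values

f_s, t, S_iso = 2.2, 0.050, 800.0  # aperture, exposure (s), assumed ISO value

# Equation 1 rearranged: N_d = (K * t * S_iso / f_s**2) * L_s, i.e. a line
# through the origin; estimate the slope by least squares over the pairs.
slope = np.sum(N_d * L_ref) / np.sum(L_ref ** 2)
K = slope * f_s ** 2 / (t * S_iso)
print(f"estimated calibration constant K = {K:.3f}")
```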

2.3 Luminance Image Processing & Sensor Integration

To be utilized for luminance measurement, a digital camera must be calibrated in terms of the sensor’s luminance sensitivity, vignetting, and geometric distortion (Kurkela et al. 2017). The Ladybug3 captured uncompressed video, which was first converted into a sequence of linear 8-bit TIFF images for post-processing. Next, the 8-bit images were converted into luminance images in which each pixel describes the luminance in candelas per square metre (cd/m2). Figure 5 illustrates the sensor integration and data processing workflow.

Figure 5. Sensor integration and data processing workflow for creating luminance point clouds.

The relation between a digital pixel value and luminance is given by Equation 1:

K = \frac{N_d \, f_s^2}{L_s \, t \, S_{ISO}} \qquad (1)

where K is the calibration constant of the camera, N_d is the digital relative luminance value of the raw image, f_s is the aperture value, L_s is the absolute luminance, t is the exposure time in seconds, and S_{ISO} is the ISO value (Hiscocks and Eng 2018). The calibration constant K for the Ladybug3 was obtained by calibrating the camera (Kurkela et al. 2017). The digital relative luminance N_d is obtained with the IEC standard Equation 2:

N_d = 0.2126\,R + 0.7152\,G + 0.0722\,B \qquad (2)

where R, G, and B are the red, green, and blue pixel values, respectively. Applying these two equations, digital pixel values captured with the Ladybug3 can be converted into absolute luminances.
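As a concrete sketch of this conversion, the function below applies Equation 2 and then Equation 1 solved for L_s. It assumes the pixel values are already linear (non-gamma-encoded) and vignetting-corrected, as described above; the calibration constant and ISO value in the example call are placeholders.

```python
import numpy as np

def pixels_to_luminance(rgb, K, f_s, t, S_iso):
    """Convert linear 8-bit RGB pixel values to absolute luminance (cd/m^2).

    rgb: array of shape (..., 3) with linear pixel values.
    K, f_s, t, S_iso: calibration constant, aperture value, exposure time
    in seconds, and ISO value, as in Equations 1 and 2.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Equation 2: digital relative luminance from linear RGB.
    n_d = 0.2126 * r + 0.7152 * g + 0.0722 * b
    # Equation 1 solved for the absolute luminance L_s.
    return n_d * f_s ** 2 / (K * t * S_iso)

# Example: one pixel at the field settings used in the study (50 ms, f/2.2);
# K and S_iso here are placeholder values, not the actual calibration result.
L = pixels_to_luminance(np.array([30.0, 42.0, 25.0]),
                        K=12.0, f_s=2.2, t=0.050, S_iso=800.0)
```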


2.4 Point Cloud Creation and Processing

Applanix POSPac MMS software was used to create a virtual reference station (VRS) network around the measurement area. The VRS network is used for calculating the position of the measurement route (trajectory) and for mapping the mobile measurement system’s location data. Trimble Trident software was used to combine the laser scanning data with the trajectory data and the RGB images captured with the Ladybug3. In Munkkiniemenranta, we collected the data in two measurements, driving the MLS system in both directions on the right-hand side of the street. In the Trimble MX2 mobile mapping system, the Ladybug3 panoramic camera is rigidly integrated into the system. The registration of the RGB images with the laser scanning point cloud was done using an interactive orientation method based on a visual interpretation of a point cloud superimposed on the images (Barber et al. 2001; Rönnholm et al. 2003; Abdelhafiz et al. 2005; Rönnholm et al. 2009). The initial values for the projection centre of the Ladybug3 camera were determined in advance. During the interactive orientation, the operator is able to correct the exterior orientation parameters, i.e. the camera rotations and the location of the projection centre. In other words, the RGB images were oriented to the point cloud by identifying environment features in both the laser scanning data and the image, and orienting the image according to the recognized features. The orientation was solved using a daytime data set (Figure 6), after which the same setting was applied to the night-time measurements. Absolute luminance values (cd/m2) were calculated from the RGB values using Equations 1 and 2 and stored as a luminance scalar for each 3D point in the point cloud file.
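The following is a simplified sketch of how, once the exterior orientation is solved, a luminance value can be sampled for each 3D point from a panorama. It assumes an ideal equirectangular luminance image with a known rotation and projection centre; the actual system orients the multi-camera Ladybug3 imagery interactively, and no occlusion handling is shown here.

```python
import numpy as np

def sample_luminance(points_world, cam_pos, R_wc, lum_img):
    """Assign each 3D point a luminance sampled from one panorama.

    points_world: (N, 3) point coordinates in the world frame.
    cam_pos: (3,) projection centre; R_wc: (3, 3) world-to-camera rotation.
    lum_img: (H, W) equirectangular luminance image (cd/m^2 per pixel).
    """
    h, w = lum_img.shape
    p = (points_world - cam_pos) @ R_wc.T            # world -> camera frame
    az = np.arctan2(p[:, 1], p[:, 0])                # azimuth in [-pi, pi]
    el = np.arctan2(p[:, 2], np.hypot(p[:, 0], p[:, 1]))  # elevation
    u = ((az + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    v = ((np.pi / 2 - el) / np.pi * (h - 1)).astype(int)
    return lum_img[v, u]                              # one value per point
```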

Figure 6. Orienting the image with the 3D point cloud: (a) the intensity-coloured point cloud, (b) the combination overlay, and (c) the RGB image of the Ladybug3 system.

2.5 Analysis of Road Surface Luminance

In this study, the lighting measures assessed are the average road surface luminance 𝐿, the overall uniformity Uo, and the longitudinal uniformity Ul. A road lighting design guide defines the average road surface luminance (𝐿) as follows: “The average luminance shall be calculated as the arithmetic mean of the luminances at the grid points in the field of calculation” (European Committee for Standardization 2015). Moreover, the overall uniformity is the ratio of the lowest luminance at any grid point to the average luminance, and the longitudinal uniformity is the ratio of the lowest to the highest luminance at the grid points along the centre line of each lane. The road lighting design guidelines also define good practice for lighting measurement. However, this practice was not followed in this study, as the methodology used was fundamentally different and new. We segmented the full point cloud into smaller point clouds for analysis: the single-lane area between two adjacent luminaires (33 m × 3.1 m) and the 20 cm-wide centre strip of the single lane (Figure 7). The centre strip was needed in order to calculate the longitudinal uniformity for the lane section. A computational sketch of the three measures follows Figure 7.


Figure 7. The measurement environment between two adjacent luminaires: the single-lane area of measurement, 33 m × 3.1 m, is framed by white lines, and the 20 cm-wide centre strip of the single lane is framed by magenta lines.
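A minimal sketch of the three measures defined above, assuming the luminance samples have already been extracted at the grid points of the lane area and along the centre strip:

```python
import numpy as np

def road_lighting_metrics(lane_lum, strip_lum):
    """Road lighting measures as defined in Section 2.5.

    lane_lum:  luminances (cd/m^2) at the grid points of the lane area.
    strip_lum: luminances along the 20 cm centre strip of the lane.
    A sketch of the definitions, not an accredited measurement procedure.
    """
    L_avg = lane_lum.mean()                   # average road surface luminance
    U_o = lane_lum.min() / L_avg              # overall uniformity: min / mean
    U_l = strip_lum.min() / strip_lum.max()   # longitudinal uniformity
    return L_avg, U_o, U_l
```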

In image processing, an averaging filter or a median filter is often used for noise reduction (Brailean et al. 1995). Hence, we present an example of noise reduction filtering applied to a luminance point cloud. We executed median filtering by searching, for each point cloud element, its neighbouring elements within a given diameter in the XY plane. Next, we calculated the median value of these elements and created a new point cloud where the calculated median value was registered as the luminance scalar for each 3D point. Figure 8 illustrates the results of the median filtering. With filtering diameters of 0.1 and 0.3 m, some systematic error artefacts of the luminance measurement were still present. These artefacts were caused by inaccuracy in the Ladybug3 panoramic camera system, which was calibrated by the manufacturer. However, with a median filtering diameter of 1.0 metres, the amplitude of the artefacts decreased. Finally, the luminance scalar arrays of these filtered point clouds were used to calculate the average luminance, overall uniformity, and longitudinal uniformity.

Figure 8. A point cloud of a single lane between two adjacent luminaires: (a) without median filtering, and with (b) 0.1 m, (c) 0.3 m, and (d) 1.0 m diameter two-dimensional median filtering.
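A minimal sketch of this neighbourhood median filter, using a KD-tree over the XY coordinates; the function name and the use of SciPy are illustrative, not the software actually used in the study:

```python
import numpy as np
from scipy.spatial import cKDTree

def median_filter_luminance(xyz, lum, diameter=1.0):
    """2D median filtering of a luminance point cloud, as in Section 2.5.

    For each point, take the median luminance of all neighbours within the
    given diameter in the XY plane and store it as the new luminance scalar.
    """
    tree = cKDTree(xyz[:, :2])                    # neighbours in XY only
    radius = diameter / 2.0
    filtered = np.empty_like(lum)
    for i, nbrs in enumerate(tree.query_ball_point(xyz[:, :2], radius)):
        filtered[i] = np.median(lum[nbrs])
    return filtered
```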


2.6 The Segments of Luminance Analysis

The road section was scanned in both directions, and the analysed point cloud was the combination of these two scans. The whole combined point cloud consisted of 40 million points. We segmented and analysed eight areas of measurement, presented in Figure 9.

Figure 9. The eight areas of measurement that were analysed to test the feasibility of mobile road lighting measurement. The odd-numbered sections are on the south-western side of the street, and the even-numbered sections on the north-eastern side.

For four road sections between two consecutive luminaires, we analysed both lanes and their 20 cm-wide centre strips. Hence, we calculated the average luminance, overall uniformity, and longitudinal uniformity for eight lane areas between two adjacent road luminaires. Each area of measurement was 33 m × 3.1 m, with a point density of 1000–1500 points/m2. The luminance values were median-filtered to reduce camera sensor noise. The filtering was done over a circular road surface area with a diameter of 1.0 m around each luminance measurement point. In other words, each measurement point in the filtered point cloud has the median luminance value among its neighbours within 0.5 metres in the original cloud.

3. RESULTS

3.1 Luminance Point Cloud

In our mobile luminance imaging, the exposure time was 50 ms, the maximum measurement velocity 10 m/s, and the gain 18 dB. Panoramic images were taken at 2-metre intervals. With these settings, the measurable luminance range was 0.20–8.6 cd/m2 (Kurkela et al. 2017). According to the lighting design report (Munkkiniemenranta – Lighting Design Plan 2014), the average road surface luminance and overall uniformity of the carriageway in Munkkiniemenranta are 0.57 cd/m2 and 0.57, respectively. Hence, the minimum designed luminance value of the carriageway is 0.325 cd/m2. Thus, the designed luminances of the road environment were within the measurement range. The relative standard deviation of the measurement was 33% when the luminance was 0.23 cd/m2 and 10% when the luminance was 0.99 cd/m2. However, this noise could be filtered using a spatial median filter, as the point grid of the mobile measurement had a density of 1000–1500 points per square metre. Figure 10 shows the luminance point cloud.


Figure 10. Snapshot from a 3D point cloud into which luminance values have been mapped. The colour scale indicates the absolute luminance values.

3.2 3D Luminance Data Analysis

Table 1 shows the results of the luminance measurements in the eight areas of analysis. Among the eight areas of measurement, the average luminance 𝐿 was 0.61 cd/m2, the average overall uniformity Uo was 0.50, and the average longitudinal uniformity Ul was 0.37.

Table 1. The average luminance 𝐿, overall uniformity Uo, and longitudinal uniformity Ul for each of the eight areas of measurement. The averages over the areas are in the bottom row.

Measurement area    𝐿 (cd/m2)    Uo      Ul
1.                  0.41         0.41    0.24
2.                  0.50         0.20    0.13
3.                  0.66         0.82    0.71
4.                  0.71         0.41    0.35
5.                  0.59         0.69    0.47
6.                  0.68         0.41    0.33
7.                  0.62         0.61    0.39
8.                  0.69         0.43    0.32
Average             0.61         0.50    0.37

In luminance measurement, the measurement direction and location affect the result if the measured surface is not completely diffusely reflective. Hence, we analysed the difference in results caused by the measurement direction for two measurement areas, 7 and 8. Table 2 shows the repeatability of overlapping measurements in areas 7 and 8, measured in the two measurement directions.


Table 2. The differences between the two measurement directions in areas 7 and 8.

                      Area 7                       Area 8
                      𝐿 (cd/m2)   Uo      Ul      𝐿 (cd/m2)   Uo      Ul
Measurement 1.        0.62        0.58    0.36    0.75        0.52    0.35
Measurement 2.        0.64        0.66    0.40    0.68        0.43    0.30
Relative difference   3.8%        12.4%   12.2%   12.5%       18.6%   15.0%

An additional analysis was performed on the centre strips of measurement areas 7 and 8. We calculated the average luminance for each 20 cm × 20 cm area of the centre strip and compared the values of the two measurements taken from different directions and trajectories. Figure 11 illustrates the compared values. When comparing the two measured luminance values for each 20 cm × 20 cm area, the average relative difference was 9.2% in area 7 and 12.5% in area 8. The trajectory of measurement 1 travelled over area 8, and the trajectory of measurement 2 travelled over area 7. This can explain the differences in the measured average luminances.
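A sketch of this centre-strip comparison, assuming 20 cm binning along the strip and a symmetric relative-difference measure; the exact formula used in the study is not stated, so this is an interpretation:

```python
import numpy as np

def strip_comparison(dist1, lum1, dist2, lum2, cell=0.2):
    """Compare two runs' centre-strip luminances in 20 cm cells.

    dist*/lum*: distances along the strip (m) and luminances (cd/m^2)
    for each run. Cell averages are compared with a symmetric relative
    difference |m1 - m2| / mean(m1, m2); this formula is an assumption.
    """
    edges = np.arange(0.0, max(dist1.max(), dist2.max()) + cell, cell)
    idx1 = np.digitize(dist1, edges)
    idx2 = np.digitize(dist2, edges)
    rel_diffs = []
    for k in range(1, len(edges)):
        a, b = lum1[idx1 == k], lum2[idx2 == k]
        if a.size and b.size:
            m1, m2 = a.mean(), b.mean()
            rel_diffs.append(abs(m1 - m2) / ((m1 + m2) / 2))
    return np.mean(rel_diffs)  # average relative difference over cells
```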

Figure 11. The difference between two measurement directions and trajectories. In Measurement 1, the travelling direction was from southeast to northwest, and in Measurement 2, from northwest to southeast. The luminaires were located at the extreme distances of 0 m and 30 m. Measurement areas 7 and 8 are presented in Figure 9.

3.3 Analysis of Vegetation and Other Shadowing Objects

Of the analysed measurement areas, areas 1 and 2 have the lowest values of average luminance, overall uniformity, and longitudinal uniformity. From a conventional 2D analysis, one could only speculate about the reasons for the lighting’s under-performance. However, a 3D analysis environment enables the examiner to roll, pitch, and yaw the 3D model and to create arbitrary sub-areas of analysis. We analysed the vegetation’s shadowing effect visually. Figure 12 presents measurement areas 1 and 2, where the vegetation shadows the illumination. A traditional 2D luminance map of the road surface shows only the reduced luminance levels; in the 3D presentation, the cause of the shadowing is detectable. In this case, the vegetation reduced the average luminance to 0.16 cd/m2, compared to 0.91 cd/m2 in a similar area with no blocking vegetation. The difference is substantial: the shadowed area retains only 17% of the luminance level of a similar non-vegetated area.

Figure 12. Measurement areas 1 and 2, with (a) the 3D presentation above and (b) the conventional 2D surface below. The 3D presentation shows how the vegetation obstructs the light. The distance between two adjacent luminaires was 33 m.

4. DISCUSSION

3D measurement and modelling of street and road illumination will improve opportunities to assess road lighting, identify shadowed places, and evaluate visibility conditions, as well as to compare the capabilities of different lighting systems to illuminate a road environment. Our aim was to develop a system for night-time mobile luminance imaging and mapping of road environments. We demonstrated how the system can be applied to capturing a luminance point cloud. Furthermore, we executed the data integration and assessed the collected 3D luminance point data as a tool for lighting analysis.

The luminance images were captured from the same moving platform from which the laser scanning was performed. The headlights were not turned on during the experiment. However, as the road section was not closed during the experiment, the front position lamps were on for safety. This naturally compromised the absolute accuracy of the luminance measurement. However, our intention was to test the benefits of mobile luminance imaging, not to perform official luminance measurements.


At this initial stage, the mobile 3D luminance measurement system could not completely replace the measurement practices in use. The performance of the Ladybug3 was not fully satisfactory for night-time road surface luminance imaging in terms of signal-to-noise ratio. For luminances lower than 0.2 cd/m2, the measurement using the Ladybug3 was no longer reliable, as the data may have been truncated. However, when the reference luminance value was ≥ 1.0 cd/m2, the relative standard deviation of the Ladybug3 was smaller than 10%, which corresponds to the current state of luminance imaging (Kurkela et al. 2017). The luminance imaging device would need to perform with lower noise and over a wider luminance range than the Ladybug3. In its current state, the system gives each 3D point only one luminance value from one measurement direction. Thus, the luminance values are not truly three-dimensional; only the data point locations are.

An important phase of the luminance measurement and laser scanning integration is determining the relative orientation of the sensors. According to Rönnholm et al. (2003), the accuracy of interactive orientation is adequate if it is carefully performed. The accuracy is weakest in the direction of the viewing axis. However, this weakness can be mitigated by taking two images with perpendicular viewing directions; panoramic images are the most suitable for this, as their viewing angle is extremely wide. In another study by Rönnholm et al. (2009), the maximum shift between the laser-scanned data and panoramic close-range imaging varied between 1.3 cm and 18.5 cm, depending on the measurement environment and the parameters of the measurement system. In this study, the scale of the measurement area was similar to the measured areas in the referred accuracy assessments.

For the visualization and analysis of road surface luminances, this method is a substantial improvement over the methods currently in use. Currently, luminance analysis is done on 2D images, which lack the precision and spatial awareness of 3D analysis. The benefit of 3D analysis is best illustrated in the video provided as supplementary material with this article. Furthermore, large areas can be measured quickly with mobile measurement; this is the main benefit of mobile luminance imaging compared to stationary luminance measurements. The preparation for the measurement took one hour, and the actual measurements were done in under 10 minutes, covering both driving directions of the 800-metre road section. We evaluated the uncertainty of the luminance point cloud measurement and registration in two road surface areas. The areas were single lanes between adjacent luminaires. The measurement was performed moving in opposite directions along opposite lanes. Between the two measurements, the relative differences in average luminance values were 9.2% and 12.5% for the two assessed areas. It is important that either the luminance images are captured in a direction different from the automotive lighting of the measurement platform, or the lighting of the platform is switched off.

For now, the primary way to improve the signal-to-noise ratio in low-luminance measurement is to reduce the mobile measurement velocity. At a slower measurement speed, the exposure time can be extended and the gain value reduced. Mobile laser scanning systems and panoramic camera systems are currently improving rapidly. Mobile luminance imaging would benefit the most from new panoramic camera technology improved in terms of low-luminance (0.05–1.0 cd/m2) performance. As the sensors of panoramic camera systems improve in terms of signal-to-noise ratio, less filtering will be needed in post-processing. Alternatively, a high-end DSLR camera can be integrated into an MMS to increase reliability when measuring low luminances (Kurkela et al. 2017). Using a single high-end DSLR camera also avoids the problems of panoramic imaging, such as image stitching and the sensors not being identical in terms of luminance response.
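As a back-of-envelope illustration of this trade-off, the distance travelled during one exposure bounds the motion blur: at the field settings used here (50 ms exposure, up to 10 m/s), the platform moves 0.5 m per exposure, and halving the velocity permits doubling the exposure time for the same blur distance.

```python
# Motion-blur budget: distance travelled by the platform during one exposure.
def blur_distance(velocity_mps, exposure_s):
    return velocity_mps * exposure_s

print(blur_distance(10.0, 0.050))  # 0.5 m at the field settings in this study
print(blur_distance(5.0, 0.100))   # same 0.5 m at half speed, double exposure
```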


In the example case, the captured data contained repeating, systematic erroneous artefacts. These artefacts were filtered out by applying a median filter with a diameter of 1.0 metres. For a validated measurement system, the source of systematic error should be identified and fixed where possible, rather than filtered out in post-processing. Since the purpose of this article was to present the new concept, not to validate a measurement system, we considered the median filtering approach adequate for removing the systematic measurement error.

In the future, road conditions need to be taken into account in the validation of a 3D luminance mobile measurement system. In wet conditions, the light reflection from the road surface can be more specular than diffuse, and deciding which measurement angle to use will then be difficult. Tunnel lighting is an important special case of road lighting that needs to be considered as 3D luminance imaging is developed further.

5. CONCLUSIONS

The developed concept of 3D luminance and geometry measurement via MMS offers several benefits compared to the conventional stationary luminance imaging method. Firstly, with stationary measurement, a separate measurement and analysis are required for each road segment. With an MMS, the measurement can be carried out continuously, thus covering expansive areas. The efficiency of luminance data acquisition increases significantly with an MMS capable of luminance measurement. In this study, the measurement system was mounted on a car, but the concept is applicable to other platforms, such as UAVs and personal mapping systems. Secondly, the conventional luminance imaging method produces luminance data only, whereas an MMS with luminance measurement also records the geometry of the road surface and the surrounding environment in fine detail. This means that the data produced by a luminance-enabled MMS can be applied in a far wider array of applications in the road-inventory context than individual luminance images. These applications include inventories of lighting fixtures and other road furniture, line-of-sight analysis, road marking condition surveys, and especially the estimation of occluding vegetation. In addition, the effect of dynamic elements (e.g. growing vegetation or construction sites) on luminance can be evaluated, which allows for more resource-efficient safety management and maintenance planning. Finally, as the 3D luminance point cloud produced with an MMS is georeferenced, it can be transferred to a GIS and integrated with other urban geo-information data sets in analysis. This approach also allows the luminance information obtained from road segments to be propagated to the road models in a 3D city model, for example. Alternatively, detailed luminance maps can be produced for further analysis.

Being able to measure the night-time environment is a logical development direction for mobile mapping systems. Half of each year is twilight or darkness, yet 3D city models and mobile-mapped services mainly present daytime measurements. This article focused on lighting distribution and occlusion, but night-time mobile mapping could also have several other applications. Night-time mapped environments will help people learn the appearance of an area they enter for the first time during darkness. Furthermore, using a night-time city model, people will be able to assess the night-time safety and attractiveness of a district where they are planning to purchase a house, for example. These applications will benefit from the preparation, assessment, and considerations described in this article.


6. ACKNOWLEDGMENTS

This research project was supported by the Academy of Finland through the Centre of Excellence in Laser Scanning Research (CoE-LaSR) (No. 272195, 307362). The Strategic Research Council at the Academy of Finland is acknowledged for financial support for the project “Competence-Based Growth Through Integrated Disruptive Technologies of 3D Digitalization, Robotics, Geospatial Information and Image Processing/Computing – Point Cloud Ecosystem” (No. 293389, 314312). Further support was provided by the Finnish Funding Agency for Innovation project “VARPU” (7031/31/2016) and the Aalto Energy Efficiency Research Programme (Light Energy – Efficient and Safe Traffic Environments project).

7. REFERENCES

Abdelhafiz A., Riedel B., Niemeier W. 2005. Towards a 3D true colored space by the fusion of laser scanner point cloud and digital photos. In the Proceedings of the ISPRS Working Group V/4 Workshop 3D-ARCH, Mestre-Venice, Italy, 22–24 August 2005.

Alamús R., Kornus W. 2008. DMC geometry analysis and virtual image characterisation. The Photogrammetric Record, 23, 353–371, https://doi.org/10.1111/j.1477-9730.2008.00504.x.

Alba M., Barazzetti L., Scaioni M., Rosina E., Previtali M. 2011. Mapping infrared data on terrestrial laser scanning 3D models of buildings. Remote Sensing, 3, 1847–1870, https://doi.org/10.3390/rs3091847.

Barber D., Mills J., Bryan P. 2001. Laser Scanning and Photogrammetry: 21st Century Metrology. In Proceedings of the CIPA 2001 International Symposium, University of Potsdam, Germany, 18–21 September 2001.

Brailean J., Kleihorst R., Efstratiadis S., Katsaggelos A., Lagendijk R. 1995. Noise reduction filters for dynamic image sequences: a review. Proceedings of the IEEE, 83, 1272–1292, https://doi.org/10.1109/5.406412.

Brenner C., Haala N. 1998. Rapid acquisition of virtual reality city models from multiple data sources. International Archives of Photogrammetry and Remote Sensing, 32, 323–330.

Cabo C., Kukko A., García-Cortés S., Kaartinen H., Hyyppä J., Ordoñez C. 2016. An Algorithm for Automatic Road Asphalt Edge Delineation from Mobile Laser Scanner Data Using the Line Clouds Concept. Remote Sensing 8, 740, https://doi.org/10.3390/rs8090740.

Cai H., Chung T. 2011. Improving the quality of high dynamic range images. Lighting Research & Technology 43, 87–102, https://doi.org/10.1177/1477153510371356.

Cheng M., Zhang H., Wang C., Li J. 2017. Extraction and Classification of Road Markings Using Mobile Laser Scanning Point Clouds. IEEE J-STARS, 10, 1182–1196, https://doi.org/10.1109/JSTARS.2016.2606507.

Costanzo A., Minasi M., Casula G., Musacchio M., Buongiorno M. 2015. Combined Use of Terrestrial Laser Scanning and IR Thermography Applied to a Historical Building. Sensors 15, 194–213. https://doi.org/10.3390/s150100194.


González-Aguilera D., Rodriguez-Gonzalvez P., Armesto J., Lagüela S. 2012. Novel approach to 3D thermography and energy efficiency evaluation, Energy and Buildings 54, 436–443, https://doi.org/10.1016/j.enbuild.2012.07.023.

Eloholma M., Ketomäki J., Halonen L. 2010. Road lighting – luminance and visibility measurements. Helsinki University of Technology, Lighting Laboratory, Helsinki, Report 29.

European Committee for Standardization. 2015. Road lighting – Part 3: Calculation of performance. Brussels (Belgium): CEN/TR. EN 13201-3.

Helsingin karttapalvelu (Helsinki map service). 2018. Available online: http://kartta.hel.fi (accessed on 26th of June 2018).

Hiscocks P., Eng P. 2018. Measuring Luminance with a Digital Camera: Case History. Available online: http://www.ee.ryerson.ca:8080/~phiscock/astronomy/light-pollution/luminance-case-history.pdf (accessed on 26th of June 2018).

Holgado‐Barco A., Riveiro B., González‐Aguilera D., Arias P. 2017. Automatic Inventory of Road Cross‐Sections from Mobile Laser Scanning System. Computer‐Aided Civil and Infrastructure Engineering 32, 3–17. https://doi.org/10.1111/mice.12213.

Jaakkola A., Hyyppä J., Hyyppä H., Kukko A. 2008. Retrieval algorithms for road surface modelling using Laser-based mobile mapping. Sensors, 8, 5238–5249, https://doi.org/10.3390/s8095238.

Jochem A., Höfle B., Rutzinger M. 2011. Extraction of vertical walls from mobile laser scanning data for solar potential assessment. Remote Sensing, 3, 650–667, https://doi.org/10.3390/rs3030650.

Kanzok T., Linsen L., Rosenthal P. 2015. An interactive visualization system for huge architectural Laser Scans. In Proceedings of the 10th International Conference on Computer Graphics Theory and Applications, Berlin, Germany, 11–14 March 2015, pp. 265–273.

Kukko A., Kaartinen H., Hyyppä J., Chen Y. 2012. Multiplatform mobile laser scanning: usability and performance. Sensors, 12, 11712-11733. https://doi.org/10.3390/s120911712.

Kumar P., Lewis P., McElhinney C., Boguslawski P., McCarthy T. 2017. Snake Energy Analysis and Result Validation for a Mobile Laser Scanning Data-Based Automated Road Edge Extraction Algorithm. IEEE J-STARS 10, 763–773, https://doi.org/10.1109/JSTARS.2016.2564984.

Kurkela M., Maksimainen M., Vaaja M., Virtanen J-P., Kukko A., Hyyppä J., Hyyppä H. 2017. Camera preparation and performance for 3D luminance mapping of road environments. The Photogrammetric Journal of Finland 25, 1–23, https://doi.org/10.17690/017252.1.

Lehtola V., Kaartinen H., Nüchter A., Kaijaluoto R., Kukko A., Litkey P., Honkavaara E., Rosnell T., Vaaja M., Virtanen J-P., Kurkela M., El Issaoui A., Zhu L., Jaakkola A., Hyyppä J. 2017. Comparison of the Selected State-Of-The-Art 3D Indoor Scanning and Point Cloud Generation Methods. Remote Sensing, 9, 796, https://doi.org/10.3390/rs9080796.


Lehtomäki M., Jaakkola A., Hyyppä J., Kukko A., Kaartinen H. 2010. Detection of vertical pole-like objects in a road environment using vehicle-based laser scanning data. Remote Sensing, 2, 641–664, https://doi.org/10.3390/rs2030641.

Li F., Oude Elberink S., Vosselman G. 2018. Pole-Like Road Furniture Detection and Decomposition in Mobile Laser Scanning Data Based on Spatial Relations. Remote Sensing, 10, 531, https://doi.org/10.3390/rs10040531.

Lubowiecka I., Arias P., Riveiro B., Solla M. 2011. Multidisciplinary approach to the assessment of historic structures based on the case of a masonry bridge in Galicia (Spain). Computers & Structures, 89, 1615–1627, https://doi.org/10.1016/j.compstruc.2011.04.016.

Manandhar D., Shibasaki R. 2002. Auto-extraction of urban features from vehicle-borne laser data. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 34(4), 650–655.

Munkkiniemenranta – Lighting Design Plan. 2014. Suomen Energia-Urakointi Oy.

Petrie G. 2010. An introduction to the technology: Mobile mapping systems. GeoInformatics, 13, 32–33, 35–43.

Puente I., González-Jorge H., Martínez-Sánchez J., Arias P. 2013. Review of mobile mapping and surveying technologies. Measurement, 46, 2127–2145, https://doi.org/10.1016/j.measurement.2013.03.006.

Rönnholm P., Hyyppä H., Pöntinen P., Haggrén H., Hyyppä J. 2003. A method for interactive orientation of digital images using backprojection of 3D data. The Photogrammetric Journal of Finland, 18, 16–31.

Rönnholm P., Honkavaara E., Litkey P., Hyyppä H., Hyyppä J. 2007. Integration of laser scanning and photogrammetry. International Archives of Photogrammetry Remote Sensing and Spatial Information Sciences, 36, 355–362.

Rönnholm P., Hyyppä H., Hyyppä J., Haggrén H. 2009. Orientation of Airborne Laser Scanning Point Clouds with Multi-View, Multi-Scale Image Blocks. Sensors, 9, 6008–6027, https://doi.org/10.3390/s90806008.

Tetri E., Bozorg Chenani S., Räsänen R-S., Baumgartner H., Vaaja M., Sierla S., Tähkämö L., Virtanen J-P., Kurkela M., Ikonen E., Halonen L., Hyyppä H., Kosonen I. 2017. Tutorial: Road Lighting for Efficient and Safe Traffic Environments. LEUKOS, 13, 223–241, https://doi.org/10.1080/15502724.2017.1283233.

Teza G., Marcato G., Pasuto A., Galgaro A. 2015. Integration of laser scanning and thermal imaging in monitoring optimization and assessment of rockfall hazard: a case history in the Carnic Alps (Northeastern Italy). Natural Hazards, 76, 1535–1549, https://doi.org/10.1007/s11069-014-1545-1.

Vaaja M., Kurkela M., Virtanen J-P., Maksimainen M., Hyyppä H., Hyyppä J., Tetri E. 2015. Luminance-Corrected 3D Point Clouds for Road and Street Environments. Remote Sensing, 7, 11389–11402, https://doi.org/10.3390/rs70911389.


Virtanen J-P., Puustinen T., Pennanen K., Vaaja M., Kurkela M., Viitanen K., Rönnholm P. 2015. Customized visualizations of urban infill development scenarios for local stakeholders. Journal of Building Construction and Planning Research, 3, 68–81, http://dx.doi.org/10.4236/jbcpr.2015.32008.

Wang J., Lindenbergh R., Menenti M. 2017. SigVox – A 3D feature matching algorithm for automatic street object recognition in mobile laser scanning point clouds. ISPRS Journal of Photogrammetry and Remote Sensing, 128, 111–129, https://doi.org/10.1016/j.isprsjprs.2017.03.012.

Wu B., Yu B., Yue W., Shu S., Tan W., Hu C., Huang Y., Wu J., Liu H. 2013. A voxel-based method for automated identification and morphological parameters estimation of individual street trees from mobile laser scanning data. Remote Sensing, 5, 584–611, https://doi.org/10.3390/rs5020584.

Yang B., Wei Z., Li Q., Li J. 2012. Automated extraction of street-scene objects from mobile Lidar point clouds. International Journal of Remote Sensing, 33, 5839–5861.

Yang B., Dong Z., Liu Y., Liang F., Wang Y. 2017. Computing multiple aggregation levels and contextual features for road facilities recognition using mobile laser scanning data. ISPRS Journal of Photogrammetry and Remote Sensing, 126, 180–194, https://doi.org/10.1016/j.isprsjprs.2017.02.014.

Zhu L., Hyyppä J., Kukko A., Jaakkola A., Lehtomäki M., Kaartinen H., Chen R., Pei L., Chen Y., Hyyppä H., Rönnholm P., Haggrén H. 2009. 3D city model for mobile phone using MMS data. In Proceedings of the Urban Remote Sensing Joint Event, Shanghai, China, 20–22 May 2009.
