
Visualizing Things in Construction Photos:

Time, Spatial Coverage, and Content for Construction

Management

by

Fuqu Wu

B.Sc., University of Saskatchewan, 2006

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of

MASTER OF SCIENCE

in the Department of Computer Science

© Fuqu Wu, 2009
University of Victoria

All rights reserved. This thesis may not be reproduced in whole or in part, by photocopy or other means, without the permission of the author.


Supervisory Committee

Visualizing Things in Construction Photos: Time, Spatial

Coverage, and Content for Construction Management

by

Fuqu Wu

B.Sc., University of Saskatchewan, 2006

Supervisory Committee

Dr. Melanie Tory, (Department of Computer Science) Supervisor

Dr. Margaret-Anne D. Storey, (Department of Computer Science) Departmental Member

Dr. Sheryl Staub-French, (Department of Computer Science) Departmental Member


Supervisory Committee

Dr. Melanie Tory, (Department of Computer Science) Supervisor

Dr. Margaret-Anne D. Storey, (Department of Computer Science) Departmental Member

Dr. Sheryl Staub-French, (Department of Computer Science) Departmental Member

Abstract

PhotoScope is a novel visualization that shows the spatiotemporal coverage of photos in a construction photo collection. It extends the standard photo browsing paradigm in two main ways: visualizing the spatial coverage of photos on floor plans, and indexing photos by a combination of spatial coverage, time, and content specifications. This approach enables users to browse and search space- and time-indexed photos more effectively. We designed PhotoScope specifically to address challenges in the construction management industry, where large photo collections are amassed to document project progress. These ideas may also apply to any photo collection that is spatially constrained and must be searched using spatial, temporal, and content criteria. Design choices made when developing PhotoScope are also described.

Civil, mechanical and electrical engineers, and professionals from construction management validated the visualization mechanisms and functionalities of PhotoScope in a usability study. Empirical findings on the cognitive behaviors of participants are also discussed in this thesis.


Table of Contents

Supervisory Committee
Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgments
Dedication
Chapter 1: Introduction
  1.1 Thesis Outline
Chapter 2: Related Work
  2.1 Retrieving Photos to Support Construction Claims and Disputes
  2.2 Integrating Space and Time in Visualization Tools
  2.3 Organizing Photos and Videos Using Temporal and Spatial Cues
Chapter 3: Design of the Visualization
  3.1 Scenarios
  3.2 Visual Information Needs and Design Goals
  3.3 Case Example Used in Our Prototype
  3.4 Prototype
    3.4.1 Overview
    3.4.2 Detailed Level
    3.4.3 Photos
    3.4.4 Filters
    3.4.5 Previous Prototypes and the Major Lesson Learned
Chapter 4: Algorithms
  4.1 No Intersection
  4.2 Intersection
  4.3 Special Case
  4.4 Saturation Scheme
Chapter 5: Evaluation Method
  5.1 Participants
  5.2 Apparatus
  5.3 Tasks
  5.4 Procedure
Chapter 6: Evaluation Results
  6.1 Strategies
    Location + Activity → Time point/period
    Activity + (Time) → Location
    Time + Location → Status/Progress
  6.2 Usability
    6.2.1 Construction industry formats, filters
    6.2.2 Timeline
    6.2.3 Context Switch
  6.3 Assumptions about Cell Saturations
    6.3.1 The darker the cell, the more photos, the more activities?
    6.3.2 The darker the cell, the more photos, the more problems?
    6.3.3 Empty cells, no photos, no work?
Chapter 7: Discussion
  7.1 Bridging time, location and construction specifications
  7.2 Activity-Centered System
    7.2.1 Intelligently Identifying Construction Activities
    7.2.2 Activity Based Timeline
  7.3 Extension to Other Domains
Chapter 8: Conclusion
Bibliography
Appendix A: List of Questions from the Demographic Questionnaire
Appendix B: List of Tasks


List of Tables

Table 1. Possible target and known/referenced information and example questions in the scenario of construction claims. Asterisk * indicates information that may or may not always be known.

Table 2. Possible target and known/referenced information and example questions in the scenario of defect inspection. Asterisk * indicates information that may or may not always be known.

Table 3. Possible target and known/referenced information and example questions in the scenario of as-built story. Asterisk * indicates information that may or may not always be known.

Table 4. Task types and example tasks.

Table 5. Categories of Strategies.


List of Figures

Figure 1. Overview of the prototype’s main screen.

Figure 2. Alternate floor plans.

Figure 3. A photo example.

Figure 4. An example of the detailed second level.

Figure 5. An example of the detailed second level.

Figure 6. Photos and filters.

Figure 7. Filters and the keyword list in the Callout.

Figure 8. The option panel of the prototype evaluated in the initial pilot study.

Figure 9. An example of the ‘next’ and ‘previous’ buttons indicated in the red circle.

Figure 10. Cells in the floor plan and spatial coverage of photos for ‘No Intersection’.

Figure 11. Points sitting inside or outside of the triangle.

Figure 12. Cells in the floor plan and spatial coverage of photos for ‘Intersection’.

Figure 13. Special Case.


Acknowledgments

First, I would like to express my deepest gratitude to my supervisor, Melanie Tory, for her intelligence, support and guidance in my career as a graduate student, for her encouragement and patience when my confidence in my abilities was lacking, for her suggestions on handling seemingly impossible situations, and, most of all, for her friendship and advice on life.

Thanks to my colleagues in the VisID (Visual Interaction Design) lab for their conversations, creativity, encouragement and especially for being fun through the ups and downs. Their friendship made this experience enjoyable.

Thanks also to the members of my committee, Dr. Margaret-Anne Storey and Dr. Sheryl Staub-French, for their expertise, and for offering different perspectives. Special thanks to Sheryl who kindly contacted the potential participants and provided the space for one of our studies.

Finally, I would like to thank my family and friends for their emotional support and understanding throughout my graduate school. I am forever grateful to my parents for their love, encouragement, unwavering confidence in me, and, foremost, for giving me the opportunity to explore a new world.


For my parents –


Chapter 1: Introduction

Digital photos offer a simple and inexpensive way to document events and changes in a spatial area over time. For example, in construction management, photos are used to document the progress of activities involved in the construction of a building. In forestry, aerial and satellite photographs are commonly used to document the state of forests over time, revealing factors such as logging, reforestation, disease, and fire damage. In microscopy, images may document the state of cellular components as a cell undergoes changes such as cell division.

We focus on photo collections in construction management. Construction is a process of assembling thousands of pieces to create a complete functioning structure and involves disciplines such as architecture, civil, mechanical, and electrical engineering. Due to the complexity of connecting a large number of elements to an infrastructure, it is not surprising to see that construction claims and disputes often occur and the industry is prone to litigation. For example, imagine that a school hired a contractor to construct additional classroom space a year ago. Recently, the concrete floor in the hallways started to crack. The school owner might sue the contractor, claiming that the concrete construction was not executed appropriately. To avoid paying repair costs, the contractor would have to prove that the use and installation of concrete were appropriate. The litigation process often involves specialized and complex issues and is usually very expensive [10]. The industry has been struggling to find ways to equitably and economically resolve these issues [1].


As digital cameras become more accessible and storage space gets cheaper, digital photos have become a common way to document construction site activities. They are gathered periodically, stored in central databases and utilized for project management tasks [5]. ‘Taking photos’ is now part of the routine work of the project monitoring process, and digital photos are used for construction claims, defect inspection and record keeping. A construction photo archive ranges from a few hundred photos in a midsize project to thousands in a complex high-rise building project, and it expands quickly as the scale of the project increases. If any claims and disputes arise at a later stage, searching through the entire photo archive for useful evidence can be a challenging and tedious process.

To support managing, browsing and searching a construction photo archive more easily and effectively, we designed and evaluated PhotoScope, an interactive tool that visualizes the spatiotemporal scope of construction photo coverage. The goal of the study was two-fold: (1) to explore and understand engineers’ strategies in locating target information, and (2) to gather further usability requirements for the design of future visualizations of construction photo collections. Our visualization design and the exploratory study are centered on two research questions:

(1) What strategies do engineers use in construction management tasks when using PhotoScope? We expected that these strategies would in turn contribute to usability requirements for future systems.

(2) What are possible design guidelines and general implications for future visualization technologies to support construction management?

PhotoScope is a visualization that offers two novel ideas: (1) indexing photos by space, time, and standardized content specifications, and (2) showing the spatial coverage of photos rather than camera positions, which emphasizes photo contents rather than the mechanics of how a photo was taken. PhotoScope makes searching and browsing easier, enables users to extract more useful and reliable information, and provides an overview of progress over time. In this thesis, we discuss the prototype, the design rationale behind it, and an evaluation we conducted to improve its usability and ferret out better design ideas.

1.1 Thesis Outline

The remainder of this thesis is organized as follows.

Chapter 2 – Related Work: Reviews related topics in the literature, including work from both the Information Visualization and Civil Engineering communities.

Chapter 3 – Design of the Visualization: Describes scenarios where the visualization could be most useful, our design goals, the data we used, and the design and implementation of the prototype.

Chapter 4 – Algorithms: Explains the algorithms used to compute the back-end data for the prototype.

Chapter 5 – Evaluation: Describes the usability study, including the participants, apparatus, tasks and procedure.

Chapter 6 – Results: Presents our findings from the usability evaluation.

Chapter 7 – Discussion: Includes a discussion and explanation of the results and presents design implications and lessons learned.

Chapter 8 – Conclusion: Summarizes the research and discusses possible future work.

Much of the text in this thesis is taken from the publication: Wu, F., and Tory, M., PhotoScope: Visualizing Spatiotemporal Coverage of Photos for Construction Management. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’09), ACM. I am the primary author of this paper.


Chapter 2: Related Work

We consider three areas of related work in both the construction and visualization literatures:

(1) Retrieving photos to support construction claims and disputes. Since photos became part of construction record keeping, it has become common practice to use them as evidence in construction claims. People in construction management have devoted great effort to finding ways to efficiently retrieve target photos from a large photoset. We believe there are useful design implications from the construction management perspective that are worth studying.

(2) Integrating space and time in visualization tools. Visual representations of time and space vary from domain to domain. In construction project management, project schedules and building plans are two essential components. Understanding spatiotemporal visualization in the literature would help us make use of successful design ideas.

(3) Organizing photos in a photo browser using temporal and spatial cues. Thousands of photos are usually stored for a construction project [6]. We hope that visualization techniques created for recent photo browsers could also be useful for our domain-specific prototype.

2.1 Retrieving Photos to Support Construction Claims and Disputes

Research on managing construction photos focuses on search engines to retrieve information from construction photo databases. Most search engines integrate graphics and image processing techniques such as automatic shape identification and pattern recognition methods to help engineers retrieve target photos. Brilakis et al. have a series of works [5,6,7,8] on construction site image retrieval based on image content, materials and material cluster recognition. One of their goals was to generate methods that can index and retrieve images with minimal or no user intervention. By contrast, our goal was to create an interactive visualization tool that enables users to browse and search target photos by time and location. Nonetheless, we envision that these image processing oriented techniques could be easily embedded into PhotoScope and used to produce photo content based metadata.

PHOTO-NET II [1] links archived film clips to a project’s schedule and progress information. It stores digital images from up to four fixed-position cameras at a construction site, and uses the photos to create a film of the construction activities. However, the fixed camera perspectives limit what can be seen. We focus on photos taken by project coordinators when they walk on the site monitoring the project; photos may be taken from all locations within or around a building. These photos cannot be viewed as a movie using PHOTO-NET II. Additionally, PHOTO-NET II does not support tracking status for a construction activity at a specific location and time, nor does it link the photos to spatial locations.

2.2 Integrating Space and Time in Visualization Tools

Visualization is an external mental aid that enhances cognitive abilities [9]. Here we focus on visualizing spatiotemporal data. Shanbhag et al. [26] developed novel ways to visualize temporal changes, e.g. populations over geographical regions for efficient allocation of resources such as schools and care services. Wood et al. [31] applied visual encodings and interactions to assist exploratory visual analysis. Spatial representations such as a ‘tag cloud’ or ‘tag map’ were combined with other visual encodings to support selection of records by time and space. Although Shanbhag et al. and Wood et al. both aimed to represent spatiotemporal datasets, their visualization techniques do not provide a means to show spatial coverage of photos.

One common approach to integrate space and time is to use three dimensions (3-D). For example, GeoTime [20] represents events as points within an X, Y, Z coordinate space. The X,Y plane shows geographic space and the Z-axis represents time. We chose not to represent time in this way because the 3D space could become very cluttered with a large number of photos, and it may not be the easiest interface for browsing and searching for photos.

Methods of building a 3D virtual space from related photos were presented by Anabuki and Ishii [3] and Tanaka et al. [28]. Spatial relations were found according to the same piece of information in different photos, and all photos were connected to form a virtual space. Camera parameters such as the focal length at the time of shooting were calculated and used for exploring spatial relationships. However, regular construction photos are unlikely to have enough information to build a 3D virtual space, and 3D spaces are difficult to navigate. It would be easier for construction personnel to map photos to 2D floor plans than to a 3D space.

The approach used in our prototype is to represent time and space in separate widgets, linked together through brushing. Similar ideas have also been used elsewhere. Yuri et al. [32] visualized a sensor field overlaid on a floor plan and a timeline showing the history of sensor activations. We extended their work by showing spatial and temporal coverage of photos, which requires extra work as shown later. Similarly, a Spatiotemporal Visualization (STV) used to assist crime pattern recognition [11] included a GIS view to display incidents and a timeline to indicate crime density in the time dimension. We generalized and reused the segmented timeline to represent the ‘density’ of construction photos and also utilized the idea of indicating ‘incidents’ in the GIS view to show photo distribution on the floor plan. However, instead of using camera positions like STV, we show spatial coverage of photos on the floor plan, which provides more useful information.

2.3 Organizing Photos and Videos Using Temporal and Spatial Cues

Photo collections and videos can be structured or viewed based on time and/or location. Harada et al. [17] associated a time line with album icons to indicate the time range that each album covered. Time Quilt [18] organized photo albums into wrapped vertical columns in a temporal order. The Calendar Browser [16] also used a timeline to navigate a photo collection. Users could view images from one single time granularity, e.g. months in a year, or days in one month. We extend these time-based album ideas by enabling users to manipulate the time coverage and retrieve corresponding photos within the time range.

Map-based storyboards [25] for tour videos and commercial geo-tagged photo sites such as Flickr [15] and Picasa [24] allow users to select video shots or photos and map them into their corresponding locations in the map view. However, storyboards do not facilitate search for a specific shot, and geo-tags could be very cluttered with construction photos since the same locations might be photographed repeatedly to document the progress over time. Our application shows spatial coverage of photos instead of locations to avoid clutter and also to provide an overview showing which regions have been documented by photos.

Other ways to organize photos include PhotoSpread [19] and Photo Tourism [27]. PhotoSpread is a system for organizing and analyzing biology field photo collections categorized by time, location, and other attributes. Photos can be reorganized by drag-and-drop operations on the spreadsheet. This approach works well for categorical locations (e.g. field sites 1 and 2); however, for our application visualizing the specific spatial region within a floor plan is important. Photo Tourism is a 3D interface designed to browse many images of the same object. In construction management, activities usually rely on 2D floor plans, and objects vary over time (e.g., a wall might be framed and then have drywall installed). Photo Tourism does not address these issues. Our application allows users to select a region on the floor plan, and view photos that spatially cover the selected region.


Chapter 3: Design of the Visualization

Our main design objective was to enable construction personnel to manage, browse and search construction photos more easily. We aimed to allow users to extract more useful information out of the documents. This in turn will provide better support for construction control and management. Target users are all types of contractors who use photos for piecing together the as-built story, for record keeping or for future reference.

3.1 Scenarios

A construction management expert summarized three main scenarios where a visualization that supports managing construction photos could be of good use. These scenarios were also corroborated by discussions with a second construction management expert.

1) Construction Claim

“The biggest scenario is the construction claim. When you are trying to figure out what actually happened, you go back through the photos … [For example], someone saying it happened at a certain point, but you know that happened at a different time, or you have to prove that it happened in a certain way, [for example] you have to prove the weather was bad, or there was a certain amount of time that you cannot work effectively [so your work is delayed]”.

2) Defect Inspection

“Think of concrete as a good example. You find out that the concrete cannot handle the load it is supposed to handle, or it is already showing cracking that shouldn’t be cracking. You can go back through your photos… maybe you could see something about the way that the construction method that was executed, to figure out if there was some problem with working shift….”.

3) As-Built Story

“We all keep records on our projects in the end of the day. You sort of end up with an archive, and photos are part of the archive. It is for future learning and for your own record keeping”.

We extracted key elements tracked by engineers in each scenario to identify main information needs. Major elements include target information that needs to be verified by related construction photos and information that is often known and referenced to assist identifying the target information.

Date, location and activity were identified as key elements in all three scenarios. In the majority of construction management tasks, one or more key elements are known and are used as references to search for evidence of other elements. A summary of information that needs to be verified through construction photo searching, information that is often used to assist this process and example questions that might be asked in each scenario is shown in Table 1 to Table 3 respectively.

Construction Claim

Target information to be verified: (Exact) Date
  Information known/referenced: Location; Activity; *(Approximate) Date
  Example questions: When did activity X occur in location L? Did activity X actually occur around date D in location L?

Target information to be verified: Activity
  Information known/referenced: Location; (Approximate) Date
  Example questions: Did activity X actually occur around date D in location L? What were construction workers working on in location L around date D?

Table 1. Possible target and known/referenced information and example questions in the scenario of construction claims. Asterisk * indicates information that may or may not always be known.

In a construction claim, it is common to verify the date or the time period when certain construction activities occurred in designated locations, and whether they are consistent with the date or time period claimed on the project schedule. In some cases (when the approximate date is known), engineers might use the approximate date as a quick reference and search for the exact date. Also, frequently, certain activities need to be verified at a location around a time period. For example, one might verify whether the waterproofing system in a mechanical control room was finished before its scheduled time, so the project is not later impeded by a waterproofing failure.

Defect Inspection

Target information to be verified: Activity
  Information known/referenced: Location; *(Approximate) Date
  Example questions: What (activity) actually happened in location L around date D?

Target information to be verified: (Exact) Date
  Information known/referenced: Location; Activity
  Example questions: When exactly did activity X occur in location L?

Table 2. Possible target and known/referenced information and example questions in the scenario of defect inspection. Asterisk * indicates information that may or may not always be known.

Defect inspection mainly focuses on verifying activities that occurred in certain locations; sometimes an approximate date is known and can be referenced to assist the construction photo search. Another common task in defect inspection is to verify the exact date when a certain activity occurred, since time could be a main factor that causes defects and project delay. In that case, the location and activities are often known and the exact date needs to be verified from photos.

As-Built Story

Target information to be verified: Activity
  Information known/referenced: Location; Date (Time Period)
  Example questions: What progress was made to location L during time period D? What was the state of location L on date D?

Target information to be verified: Location
  Information known/referenced: Activity; *(Approximate) Date
  Example questions: Where was the equipment installed?

Table 3. Possible target and known/referenced information and example questions in the scenario of as-built story. Asterisk * indicates information that may or may not always be known.

The ‘as-built story’ is an archive of information collected daily about all aspects of a construction project. Construction photos are part of the archive and serve as evidence of project status. They can be used to verify the state of a certain location on date D, and also to track project progress, i.e. to identify how the state changed over a period of time. Other information such as location is also recorded and tracked in the as-built story. For example, if two identical pieces of equipment are installed in a building at two different locations, and one piece is installed first, then the archive could be a useful resource for identifying where the first piece was installed around approximate date D.

3.2 Visual Information Needs and Design Goals

From the main scenarios, we identified several visual components as the key visual information needs to support construction claims, defect inspection and record keeping.

• Building plans. Photos can be linked to building plans by locations where photos were taken. The spatial coverage of each photo can be reflected in the building plans, so regions that have been documented by photos are clearly shown.

• Project schedule. Each photo has a time stamp that associates it with the project schedule. Adjusting the time range of interest can assist users in locating target photos more easily.

• Standardized content specifications in the construction industry. Photos offer an intuitive way to document construction activities, execution methods, materials and other related information. Mapping photos to a standardized construction information hierarchy and providing an intuitive mechanism to facilitate search would be useful.

Based on the visual information needs conceptualized from the scenarios, we present our design goals:

1. Support viewing historical data in photos: Provide accurate and specific historical information. Users can track project status at any time point by viewing photo archives of construction activities and important events along the timeline.

2. Facilitate flexible navigation and exploration: Help users navigate through construction photos flexibly and effectively by providing maximum temporal and spatial context.

3. Provide a broad, overall view of the allocation of photos in building plans: Provide a big picture of the photo allocation on floor plans to help users have a better idea of what and where the work has been done.

4. Afford intuitive and effective interaction: Allow users to flexibly interact with visual components to learn the project status, progress and other information at a given time.


3.3 Case Example Used in Our Prototype

We have over 700 photos of a 7-month construction project in San Francisco. The building was extensively renovated for a pharmaceutical company. Offices and labs were located on the main floor. They also built a mechanical platform between the main floor and the roof where large mechanical and electrical systems were installed. All photos were taken by a civil engineer who regularly monitored project status and took photos at the site.

We cleaned the data by eliminating duplicate photos and pre-processed the image data by creating a spreadsheet of metadata that contained an id, timestamp, engineer’s notes, etc. for each photo. Ideally, the spatial coverage of each photo could be calculated from camera parameters such as position, orientation and focal length, similar to Anabuki and Ishii [3]. However, these camera parameters were not available in our photoset. In order to make use of this photoset, we estimated the coordinates on the floor plan where each photo was taken with the help of the civil engineer who took all the photos. We estimated each photo’s location coordinates and spatial coverage by manually mapping the photo to the floor plan based on photo contents (e.g. visible objects, layout, and construction features such as windows and equipment). Three categories of metadata were encoded for each photo: (1) notes made by the civil engineer, (2) codes from Master Format, and (3) codes from Uniformat [13]. These standard formats organize construction specifications. For example, a photo where construction workers were pouring concrete was linked to the code “concrete” and subcategories such as “Basic concrete materials and methods”, “Concrete reinforcement”, and so on. We encoded the civil engineer’s notes because they mostly use terminology common among construction people, and they may also capture information specific to the given project.
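To make the encoding concrete, the sketch below shows one hypothetical metadata record of the kind described above. The id, timestamp and notes fields come from the spreadsheet just described; the remaining field names and all values are illustrative assumptions rather than the actual spreadsheet schema, and the coverage triangle corresponds to the estimated camera position plus the two points where its sight lines end (as used by the algorithms in Chapter 4).

    # Hypothetical metadata record for one photo (illustrative only; field names
    # and values are assumptions, not the thesis's actual spreadsheet columns).
    photo_record = {
        "id": "IMG_0412",
        "timestamp": "2007-08-14 10:32",          # assumed sortable "YYYY-MM-DD HH:MM" format
        "floor": "main",
        "engineer_notes": "pouring concrete, north corridor",
        "masterformat": ["Concrete", "Basic concrete materials and methods",
                         "Concrete reinforcement"],
        "uniformat": ["Substructure"],
        "camera_position": (41.2, 17.5),           # estimated (x, y) on the floor plan
        "coverage_triangle": [(41.2, 17.5),        # camera position plus the two points
                              (55.0, 25.0),        # where its sight lines reach a wall
                              (52.0, 9.0)],
    }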

3.4 Prototype

We conducted three pilot studies to improve usability of our prototype before we reached the current stage. In this section, we describe PhotoScope in detail and also present lessons we have learned from design iterations.

3.4.1 Overview

The prototype’s main screen (Figure 1) consists of a number of components. An overview of the floor plan lies at upper left corner, and the current floor plan is in the center. In our case example, the main floor is divided into five subareas based on the layout of corridors. Generally, a floor plan could be subdivided into smaller regions according to the building layout. A divided-up floor plan allows us to implement detailed information in subareas at the second level (described in the next section). The border of the subarea is highlighted in orange both in the floor plan and the overview. The border of the room is highlighted in blue when the mouse rolls over a subarea/room and a tool tip also appears displaying the room number and the room name. Options such as tool tips and room names can be turned on and off via the option panel (item 3 in Figure 1).


Since the prototype is designed to facilitate searching for construction photos, a connection between the photos and essential building information is needed. We make this connection by providing a big picture of the spatial coverage of photos on the floor plan during a given time range. The floor plan is divided into grid cells and each cell is filled with a saturated gray color. Since saturation is known to be a good visual attribute for representing ordinal data [30], we used saturation to represent the number of photos that cover the cell (the region) on the floor plan, which we call the ‘photo capacity’. The darker the cell is, the larger the photo capacity of the cell. We originally considered displaying the camera positions as points, but realized this would not provide any information about which spatial regions were documented by photos. Displaying photo coverage is a better way to help users find photos of a specified area.

Figure 1. Overview of the prototype’s main screen.

(1) Overview of floor plan, to aid with navigation; (2) Current Floor plan; (3) Option Panel; (4) Timeline; (5) ‘All’ option.

The timeline is placed below the floor plan ((4) in Figure 1). Each month is represented by a segment in the timeline. The saturation of each segment represents the number of photos taken in that month; e.g., more photos were taken in August than in July. Two sliders sitting on the timeline can be dragged and released to narrow down the time range of interest. Sliders only stop at the edges of month segments; if the user releases a slider between two edges, it automatically jumps to the closer edge. Cell saturation in the floor plan implements the concept of dynamic queries [2] and updates correspondingly when the user changes the time range in the timeline. For example, the floor plan generally gets lighter if the time range is narrowed down from 5 months to 3 months.
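A minimal sketch of this dynamic-query behavior follows. It assumes photo records like the hypothetical one in Section 3.3 and a precomputed mapping from each photo to the set of cells it covers (the coverage tests themselves are described in Chapter 4); the function and variable names are illustrative, not the prototype's actual TCL/TK code.

    # Recompute cell photo capacities when the timeline sliders change (a sketch).
    from collections import Counter

    def cell_capacities(photos, covered_cells, start, end):
        """photos: records with a 'timestamp' in a sortable string format;
        covered_cells: dict mapping photo id -> set of cell indices it covers;
        start/end: selected time range from the timeline sliders."""
        counts = Counter()
        for p in photos:
            if start <= p["timestamp"] <= end:          # dynamic query on time
                counts.update(covered_cells[p["id"]])   # one more photo for each covered cell
        return counts                                    # cell index -> photo capacity

    def month_counts(photos):
        """Saturation data for the timeline: number of photos per month segment."""
        return Counter(p["timestamp"][:7] for p in photos)   # e.g. '2007-08'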

In the option panel ((3) in Figure 1), different floor plans can be selected. Floor plans in the case example building are shown in Figure 2. Photos were taken inside the building, on the roof and in the area around the building. The photo spatial coverage for the mechanical platform and the roof is shown in (1) and (2) in Figure 2 respectively. The ‘all’ option ((5) in Figure 1) provides an ‘X-Ray’ view to see through the building from the roof down to the main floor. Some users called it “the layered view”. The saturated cells in ‘all’ combine the saturations of all layers (the main floor, mechanical platform and roof). This technique could be more useful for high-rise buildings where each floor might have an identical floor plan. The layered view facilitates seeing where work has been done across floors.


Some photos cover regions both on the main floor and outside of the building. For example, the project coordinator could stand inside the building on the main floor and take a photo of the outside of the building through one of its entrances (see Figure 3). We have a floor plan ((4) in Figure 2) that shows the spatial coverage for the main floor and the outside of the building. Users can bring up photos for the selected time and location by pressing the camera icon.


3.4.2 Detailed Level

Users can drill down to the second level (Figure 4) by clicking in a subarea in the overview or in the floor plan. The area where the user will move to is indicated in the overview.

In the second level, each room is divided into smaller grid cells than at the first level, making detailed patterns more obvious. For example, in the ‘SHELL’ room indicated by the red circle in Figure 4, we can see that more photos were taken against the north wall than against the other three walls. This might be because more systems were installed on the north wall.

Figure 3. A photo example.


A room can be selected in the second level by a single click (see (a) in Figure 5). A region can also be selected across rooms by dragging and releasing a box (see (b) in Figure 5). The selected room or region is highlighted in yellow. If a subarea, room or region is selected, segments in the timeline update their saturation to indicate the number of photos taken for this region.

Figure 4. An example of the detailed second level.

Mouse rolls over the Compound Lab. Red circle indicates SHELL room where most photos were taken against the north wall.



3.4.3 Photos

When the photo set is brought up, it displays photos related to the floor plan or the selected region within the time range. All photos are ordered by time and the total number of photos is displayed in the window title. A large version of a photo can be brought up by double clicking any thumbnail. The camera position and perspective are represented by a red dot with two angle lines in the floor plan where the photo was taken (see the enlarged version of the camera position and perspective at bottom right corner in Figure 6). The camera position with the perspective indicates the approximate location where the photo was taken on the floor plan so that users do not have to go back to the main floor plan view to check the location.

Figure 5. An example of the detailed second level: (a) a selected room; (b) a selected region.


3.4.4 Filters

Filters are displayed with the photo set (see Figure 7) to help users narrow down photos based on photo contents. One common question is to find all photos that show the progress or status of an activity, e.g. “Can I see all the flooring photos on the main floor?” Filters accommodate this need. Users could either select the existing keywords from the keyword list (in the callout in Figure 7) or add their own keywords. The keyword list includes categories in Master Format and Uniformat [13]. Each category has one or more subcategories; selecting the main category automatically selects all the subcategories. Adding or removing keywords filters and updates the photo set in real time.

Figure 6. Photos and filters.
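The sketch below illustrates how such content filters could work, assuming each photo carries Master Format / Uniformat category labels as in the metadata example of Section 3.3 and a keyword hierarchy in which selecting a main category implies all of its subcategories. The names and sample hierarchy are illustrative assumptions, not the prototype's actual code or the real format tables.

    # A sketch of keyword filtering (illustrative; not the prototype's TCL/TK code).
    # keyword_tree maps a main category to its subcategories, mirroring the format hierarchy.
    keyword_tree = {
        "Concrete": ["Basic concrete materials and methods", "Concrete reinforcement"],
    }

    def expand_keywords(selected, tree=keyword_tree):
        """Selecting a main category automatically selects all of its subcategories."""
        expanded = set(selected)
        for kw in selected:
            expanded.update(tree.get(kw, []))
        return expanded

    def filter_photos(photos, selected_keywords):
        """Keep photos whose metadata matches any expanded keyword; updates in real time."""
        keywords = expand_keywords(selected_keywords)
        return [p for p in photos
                if keywords & set(p.get("masterformat", []) + p.get("uniformat", []))]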


3.4.5 Previous Prototypes and the Major Lesson Learned

During our pilot studies, we observed how users understood and interacted with the prototype and tried to discover and capture the usability issues. Participants in all three pilot studies were Computer Science and Engineering students recruited from the local university. New participants were recruited in each study. The same tasks (explained in section 5.3) were used in all three pilot studies.

The prototype we evaluated in our earlier pilot studies is shown in Figure 8. A major lesson we learned concerned consistency in the scope of filtering and search.


Figure 8. The option panel of the prototype evaluated in the initial pilot study.

The floor level options, the filters and the keyword search were all placed close together in the option panel next to the floor plan. Filters could be used to filter photos of the currently active floor plan or subarea (in other words, the floor plan seen by users on the left, a local scope). However, the ‘Keyword Search’ at the bottom of the panel allowed users to search the entire photo archive (global scope) based on MasterFormat and/or Uniformat. We expected the ‘Keyword Search’ to act like the “search the site” function found on most websites. However, in our pilot studies, participants were confused by the different scopes of the ‘Filters’ and the ‘Keyword Search’. Some users expected ‘Keyword Search’ to search a specific floor plan or subarea. In the post-session interviews, participants explained that it was difficult for them to track the different scopes of two functions that are visually closely located.

To provide a better context of the function scope, we first moved the ‘Keyword Search’ to the upper right corner of the window inspired by the ‘Search the Site’ function on a website. However, users in the second pilot study did not find it helpful. We then reconstructed the option panel by removing the ‘Keyword Search’ and adding ‘Create your own keyword’ in the filter. We also removed the entire filter panel from the option panel and added it next to the photoset window, so viewers could better understand the function scope. In the third pilot study, function scopes were found to be much clearer to users.

We also observed that users had to close the large version of a photo before they could click on the next thumbnail. Two buttons (see Figure 9) were added to allow users to easily go to the previous and the next photo without closing the large view.


Chapter 4: Algorithms

The floor plans are divided into a number of cells of two different sizes at two hierarchical levels. Each cell is filled with a saturated gray color. To calculate a cell’s saturation level, we identify which photos cover the cell (the region on the floor plan) and then count them. Three possible coverage cases occur and are described below.

4.1 No Intersection

Figure 10. Cells in the floor plan and spatial coverage of photos for ‘No Intersection’.

In Figure 10, the red plus sign indicates the position where the photographer stood, and the two solid lines coming from the red plus sign form the camera’s viewing angle and end when they reach an object, e.g. a wall or floor. The square is a cell in either the first or the second hierarchical level. When the triangle contains the square or the square contains the triangle, the photo capacity of the cell should be increased by one. In most cases, squares sit inside the triangle, since most photos cover a region that is larger than a cell area in the floor plan. In some cases, for example in the first hierarchy level, we have relatively large cells, and if a photo was taken close against the wall, e.g. trying to catch details of an electrical outlet, the triangle might sit in the square. The algorithm we used to identify the ‘No Intersection’ case is described as follows.


Figure 11. Points sitting inside or outside of the triangle.

A square lies inside a triangle. A square lies inside a triangle if its four vertices all lie in the triangle. Let us assume we have ∆ABC and a point P (see Figure 11). If the sum of the areas of the three triangles ∆APB, ∆APC and ∆BPC equals the area of ∆ABC, then point P is inside ∆ABC ((i) in Figure 11); if the sum of the three areas is greater than the area of ∆ABC, then point P is outside ∆ABC ((ii) in Figure 11).
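A minimal sketch of this area test is shown below (in Python for brevity; the prototype itself was written in TCL/TK, so this is an illustration rather than its actual code). A small tolerance absorbs floating-point error in the area comparison.

    def tri_area(p, q, r):
        """Absolute area of triangle p-q-r; points are (x, y) tuples."""
        return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

    def point_in_triangle(p, a, b, c, eps=1e-9):
        """P is inside triangle ABC if area(APB) + area(APC) + area(BPC) == area(ABC)."""
        whole = tri_area(a, b, c)
        parts = tri_area(a, p, b) + tri_area(a, p, c) + tri_area(b, p, c)
        return abs(parts - whole) <= eps

    def square_in_triangle(cell_vertices, a, b, c):
        """The cell lies inside the coverage triangle if all four of its vertices do."""
        return all(point_in_triangle(v, a, b, c) for v in cell_vertices)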

A triangle lies inside a square. To determine if a triangle is contained by a square, we compare the coordinates of the triangle and the square. When the minimum X and Y coordinates of the square are smaller than the minimum X and Y coordinates of the triangle, and the maximum X and Y coordinates of the square are larger than the maximum X and Y coordinates of the triangle, the triangle lies inside the square. The photo capacity of the cell increases by one.
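The corresponding check is a simple bounding-box comparison; a sketch consistent with the function above (illustrative names, not the prototype's code):

    def triangle_in_square(cell_vertices, a, b, c):
        """The coverage triangle lies inside the cell if the cell's bounding box contains it."""
        xs = [v[0] for v in cell_vertices]; ys = [v[1] for v in cell_vertices]
        tx = [p[0] for p in (a, b, c)];     ty = [p[1] for p in (a, b, c)]
        return (min(xs) <= min(tx) and max(tx) <= max(xs) and
                min(ys) <= min(ty) and max(ty) <= max(ys))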

4.2 Intersection

If any edge of the square intersects any edge of the triangle, the region of the cell is at least partially covered by this photo (Figure 12) and the photo capacity of the cell should increase by one. We determined whether any pair of triangle edges and square edges intersects using the algorithm known as ‘Sweeping’ described in [14].
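The thesis uses the sweep-line approach of [14]; since a cell has only four edges and the coverage triangle three, a simple pairwise orientation test gives the same answer and is sketched below for illustration (it detects proper crossings only and ignores touching endpoints).

    def orient(p, q, r):
        """Signed orientation of (p, q, r): > 0 counter-clockwise, < 0 clockwise, 0 collinear."""
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    def segments_cross(p1, p2, q1, q2):
        """True if segment p1-p2 properly crosses segment q1-q2."""
        d1, d2 = orient(q1, q2, p1), orient(q1, q2, p2)
        d3, d4 = orient(p1, p2, q1), orient(p1, p2, q2)
        return d1 * d2 < 0 and d3 * d4 < 0

    def cell_and_triangle_intersect(cell_edges, triangle_edges):
        """True if any cell edge crosses any edge of the photo's coverage triangle."""
        return any(segments_cross(a, b, c, d)
                   for (a, b) in cell_edges for (c, d) in triangle_edges)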

4.3 Special Case

Figure 13. Special Case.

One additional case occurs, particularly in the corner of a room. If any vertex of the cell can form a convex hull together with the vertices of triangle ∆ABC, and that vertex lies between triangle vertices A and B in the hull (e.g. the convex hull is CAvB, with v being a vertex of the cell), then the cell falls completely or partially in region S (see Figure 13). Cells in region S are covered by the photo and the photo capacity of the cell should increase by one. We determine whether four points form a convex hull using part of Graham’s scan [4].
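Below is a minimal sketch of this test, assuming C is the camera position and A, B are the far endpoints of its sight lines (an assumption about Figure 13, not stated explicitly above). The thesis uses part of Graham's scan [4]; the sketch uses a monotone-chain hull for brevity. A cell vertex v is taken to lie in region S when the hull of {A, B, C, v} is a quadrilateral in which v is adjacent to both A and B.

    def convex_hull(points):
        """Monotone-chain convex hull; returns hull vertices in counter-clockwise order."""
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        lower, upper = [], []
        for p in pts:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]

    def vertex_in_region_s(v, a, b, c):
        """True if cell vertex v falls in region S (beyond edge AB of the coverage triangle)."""
        hull = convex_hull([a, b, c, v])
        if len(hull) < 4 or v not in hull:       # v inside triangle ABC: not region S
            return False
        i = hull.index(v)
        return {hull[i - 1], hull[(i + 1) % len(hull)]} == {a, b}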

4.4 Saturation Scheme

After we compute the photo capacity of each cell, we need to decide what saturation scheme is best to use. Mehler et al. [22] found that using absolute scales makes most heat effects on a heat map unobservable, since very low values are imperceptible. Using a relative scale ensures maximum contrast between the highest and lowest heat values. We therefore used a relative scale for saturating the cells to ensure low values are perceivable. We ordered the photo capacities of cells on each floor plan separately and found the quintiles for each floor plan. We mapped each quintile to one of five gray colors in the gray scale.
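A minimal sketch of this relative saturation scheme follows, assuming the per-cell photo capacities for one floor plan have already been computed. Gray levels here run from light for the lowest quintile to dark for the highest, and cells with no photos are left unshaded; the specific gray values and names are illustrative assumptions.

    def quintile_grays(capacities, grays=(0.9, 0.7, 0.5, 0.3, 0.1)):
        """Map each cell's photo capacity to a gray level; higher capacity -> darker gray."""
        nonzero = sorted(c for c in capacities if c > 0)
        if not nonzero:
            return [None] * len(capacities)                  # nothing photographed on this floor
        # Quintile boundaries computed per floor plan (relative, not absolute, scale).
        bounds = [nonzero[min(len(nonzero) - 1, (k * len(nonzero)) // 5)] for k in (1, 2, 3, 4)]
        result = []
        for c in capacities:
            if c == 0:
                result.append(None)                          # empty cell: no saturation
            else:
                result.append(grays[sum(c > b for b in bounds)])
        return result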


Chapter 5: Evaluation Method

To ensure validity of our visualization design, we decided to conduct a laboratory study before any deployment in the industry. Deploying PhotoScope to the real world would be a useful next step to improve the usability and to further gather user requirements.

Current practice of construction photo management is ad hoc. Photos are usually organized in folders based on time in a standard file system. Therefore, we did not consider a comparison study between PhotoScope and current photo management methods to be a useful approach. The research questions we hoped to answer from the study were suited to an exploratory approach that allowed us to learn engineers’ search strategies and explore possible directions and design guidelines for future visualization technologies that could support construction management. Additionally, to the best of our knowledge, not much information is available on how similar problems or research issues have been solved in the past. We therefore conducted an exploratory study to (1) evaluate PhotoScope and (2) help define user requirements for construction photo browsers that could later improve future visualization designs.

5.1 Participants

We chose participants from various engineering disciplines involved in building design. Twelve regular participants (all male: 4 civil engineers, 4 mechanical engineers and 4 electrical engineers) and one expert (female) from the construction management area were recruited from two local universities. All the civil engineers had extensive experience in the construction industry. They had been involved in various types of construction projects in the past, and their roles varied from project managers, project engineers and project coordinators to assistant engineers and contract administrators. Most mechanical/electrical engineers were students who had not been involved in any construction projects, except one electrical engineer who had been a designer on a project. All participants had used architectural software (e.g. AutoCAD) at least a few times. Mechanical and electrical engineers had used the software from rarely (a few times) to often (weekly), and civil engineers used it from occasionally (monthly) to often (weekly). Most participants (9 out of 13) were familiar with photo browsers such as Picasa and Flickr. Each participant was offered $10 to compensate for their time. Participant ages ranged from 22 to 48 years, with a mean age of 31 years.

5.2 Apparatus

The prototype was implemented in TCL/TK with ActiveTcl version 8.4. Since the evaluation took place in two locations, the computers used were slightly different: one had an AMD Athlon dual-core processor, the other an Intel Xeon at 3.6 GHz. Both displays were 19-inch LCDs at 1280 by 1024 resolution. Participants interacted with the software using a standard mouse and keyboard.

5.3 Tasks

Fifteen tasks were created based on possible questions construction personnel might ask and were essentially derived from combining major factors of information needs extracted from three scenarios in Tables 1, 2 and 3. All the tasks are listed in Appendix B. The civil engineer who was on site full time for this project verified that these questions were reasonable and relevant. Tasks were divided into three categories:


Task type: Time point and time period
  Example tasks: Approximately when did they start the wall framing on the main floor? Approximately how long did the floor finishes take in corridor X on the main floor?

Task type: Location
  Example task: Where was equipment A installed?

Task type: Status and progress
  Example tasks: What was the status of the roof in October? What progress was made to the open office area in the first two weeks of November?

Table 4. Task types and example tasks.

The study had two sections: a warm-up section with 4 questions and a real section with 11 questions. In the warm-up section, participants were told to use specified functions of the prototype for each question. For example, we told them to select keywords from the keyword list to complete the first question. By doing this, participants could familiarize themselves with all functions in the prototype. In the real section, participants could use whichever parts of the prototype seemed most helpful to complete the tasks. We did not require participants to give strictly correct answers, because we hoped they would explore the tool and tell us their opinions of the prototype. The tasks simply provided possible directions to guide the exploration.

5.4 Procedure

Participants were first introduced to the purposes of the study – that the researchers were looking for feedback to improve the prototype. Participants then filled in a background questionnaire (see Appendix A) about demographic information and their experience with construction projects and photo browsers. They were also asked to try out the prototype while the experimenter introduced its functions. During the study session, participants were allowed to take as long as they wished to answer each question and were also allowed to write “I don’t know the answer” with an explanation of why they got stuck. They could ask questions as long as the questions were not directly relevant to the answers. This allowed us to analyze the questions and identify problems at a later stage. Instructions were clear to all participants, and they had no trouble completing the tasks. During the session, the experimenter observed from an adjacent room linked via video. All sessions were videotaped and screen captured. Following the tasks, participants took part in a semi-structured interview (see Appendix C for interview questions) allowing us to gain insight into their opinions of the tasks and the visualization.


Chapter 6: Evaluation Results

Participants took 25 to 45 minutes to complete all tasks and were able to answer all questions (except that one participant left one question blank). All the answers were reasonably correct. Therefore our analysis focuses mainly on usage strategies, usability, and possible future improvements. Videotape transcripts, field notes and interview notes were analyzed using an open coding approach described in [12] to form initial coding categories of strategies used in different types of tasks. Initial coding categories were merged into more general and broader categories in subsequent coding passes through further study of the videos. In this section, we begin by describing the categories of strategies that emerged from our analysis and then focus on evaluation outcomes and usability issues of the visualization tool from our study.

6.1 Strategies

For managing a construction photo archive, time, location, and construction specifications (activity) are the three most crucial factors. Time links to project schedules; location relates to coordinates in building plans; activities are represented as construction specifications in the standard formats. Construction personnel focus on these factors when searching for target construction photos. In most cases, they already know one or two factors and they search for photos that could indicate or verify the rest. For example, when a flooring contractor searches his archive trying to find photos that could prove his team finished the flooring by the end of July, he knows the time and the location, and he searches for photos that could verify the activities. This logic leads to the three categories of strategies illustrated in Figure 14, with more details described in Table 5. We discuss them in turn.

Task type: Time point/period
  Conditions (existing knowledge): Location and Activity
  Strategies:
    Phase I: Select the target location where the activity happened.
    Phase II: Examine photo content:
      • Scan through the photoset
      • Apply filters and then scan through
      • Apply own knowledge and then scan through the photoset
    Special case: Examine the timeline only.

Task type: Location
  Conditions (existing knowledge): Activity and maybe time
  Strategies:
    Case 1 (the general location is known):
      Phase I: Select the general location (usually an entire floor plan).
      Phase II: Examine photo content:
        • Scan through the photoset
        • Apply filters and then scan through the photoset
    Case 2 (the general location is not known):
      Phase I: Bring up the entire photoset.
      Phase II: Examine photo content by scanning and applying filters.

Task type: Status/Progress
  Conditions (existing knowledge): Time and location
  Strategies:
    Phase I: Select the desired time and location.
    Phase II: Examine photo content by scanning.

Table 5. Categories of Strategies.

Location + Activity → Time point/period

When location and activity are known, the time point or time period becomes the condition that needs to be identified from the photos. Related questions usually ask ‘when’ or ‘how long’ an activity occurred or took in a location. For this type of question, a concrete answer was usually expected, e.g. a date or a time range. Participants used relatively consistent strategies that included two phases (see Table 5): (1) selecting the target location (an entire floor or a specific region where the activity happened); (2) examining the contents of photos. Some participants scanned through thumbnails and occasionally brought up large versions of the photos that were interesting to them. A few participants scanned through all large-version photos using the ‘next’ and ‘previous’ buttons. Organizing result photos by time and providing easy navigation facilities were critical to enable this workflow. Most users also applied filters to downsize the photoset. The larger the photoset, the more likely users were to apply filters. A couple of users applied domain knowledge to help downsize the photos. For example, one engineer “guessed” the possible time range, then adjusted the timeline to reduce the number of photos. He explained later that he knew a certain activity was not supposed to occur before some prerequisite activities, and he knew from experience that the prerequisite activities happened during a certain time range. This did not happen very often in our study, probably because most of our users were not directly involved in the example project. However, we speculate that users who are involved in a project would be more likely to use their own knowledge to filter photos.

There was one special situation in this category. If the timeline itself was able to tell the time range of an activity, users did not browse photos at all or they only brought up photos to confirm the time range they learned from the timeline. For example, the mechanical platform in our case example was built from September to December, so in the timeline, segments of other months were all empty. Users assumed construction of the platform started from September. Most users did not examine photos since they just wanted to know when the project team started to build the platform. This use of the visualization alone (without viewing photos) was unexpected, and is not possible with the current file system that most construction personnel use.

Activity + (Time) → Location

When users need to identify the location where some activity happened or a piece of equipment was installed, their search criteria usually included the activity (filters) and sometimes the time. Strategies used to identify locations were highly dependent on the activity or item that was involved (see Table 5). If it was a unique large-scale piece of equipment, users would know an approximate location. In this case, they scanned through the photoset for that location and applied filters if they knew the appropriate keyword. When the general location was unknown, or there could be multiple locations, participants usually brought up the entire photoset and narrowed it down using filters. Then they scanned through result photos. In this type of task, the time was not always available as indicated by the dotted line in Figure 14. Most of our participants were not involved in the project, so they rarely had an idea when the activities took place. However, we believe that users would adjust the time range before examining photos if the time period was known.

Time + Location → Status/Progress

Inquiring about project status and progress is an extremely common task in construction management. Strategies here involved two steps (see Table 5). Participants first narrowed down photos by selecting the designated time range and location, and then examined the contents of the photos. The majority of users scanned through most photos either as thumbnails or as large versions; other users viewed a few photos at the beginning of the set, in the middle, and at the end. Since the photos were ordered by time, viewing a small number of photos at critical positions may be a shortcut to determining status/progress. This suggests the time-based order is critical. We could emphasize those flag photos by displaying large thumbnails and shrinking the size of the others. This would allow viewers to gain an overview of the status/progress with a quick glance through the large thumbnails. Which photos should be chosen as flag photos is an interesting problem for future research.
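One simple heuristic, sketched below in Python, is to sample the time-ordered result set evenly so that the first, middle, and last photos of a period are emphasized. This is only a speculative illustration of the future-work idea above, not a feature of PhotoScope.

    # Speculative sketch: pick "flag" photos by sampling a time-sorted result set evenly.
    def pick_flag_photos(photos_sorted_by_time, n_flags=3):
        """Return indices of up to n_flags photos spread evenly across the set."""
        count = len(photos_sorted_by_time)
        if count <= n_flags:
            return list(range(count))
        step = (count - 1) / (n_flags - 1)
        return [round(i * step) for i in range(n_flags)]

    # With 10 photos and 3 flags, the first, a middle, and the last photo are emphasized.
    print(pick_flag_photos(list(range(10))))  # [0, 4, 9]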

Strategies used within each category were relatively consistent, partly because the order of task steps was imposed by the prototype. The prototype provided clear hints to lead users through the workflow. With the exception of a context-switching problem discussed later, this workflow seemed natural and intuitive to participants. Among all the strategies engineers used, filters were applied most frequently. The majority of participants frequently applied filters to further narrow down the size of a photoset. The use of the floor plan and timeline depended on known information such as the locations and schedules of activities.

6.2 Usability

In general, the prototype received positive reviews from participants and was described as “an interesting tool”, “a useful tool”, “is easy to learn” and “is very helpful to help people more quickly to search out evidence”. Performance improved quickly as participants became familiar with the tool. We studied the usability of PhotoScope using both the screen captures and the substantial feedback from participants. Here we present our successful design ideas, problems, and possible usability improvements.

6.2.1 Construction Industry Formats and Filters

We discovered a tight connection between construction formats, which provide detailed divisions of construction activities, and construction photos: the contents of each photo could be linked to specifications in the formats. For example, a photo in which construction workers are pouring concrete could be categorized under ‘Basic Concrete Materials and Methods’, ‘Concrete Reinforcement’, etc. in the Master Format. We found that filters were extensively used and served as the main method of downsizing photosets. Surprisingly, some users applied filters even to a photoset with fewer than ten photos, to avoid missing relevant photos when scanning directly. For example, in a photo, a construction worker might carry equipment while walking along a hallway that is being framed. When scanning directly, users might only see the worker and the equipment and miss the status of the hallway. However, when users search for either the equipment or the wall framing using filters, the same photo would be returned in both searches. Since users always assume that thorough, complete, and correct metadata accompany each photo, encoding precise information in photo metadata should be considered a priority.

We know filters were favored by users, based on the number of times they were used across tasks. However, complete and powerful filters rely on precisely coding specifications from the formats into metadata. In our current prototype, each photo was manually coded with salient information. In the future, we envision that the image-processing techniques presented in [5,6,7,8] could be used to detect the contents of each photo and relate photos to the corresponding categories in the formats.
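To make the connection between format-based filters and photo metadata concrete, the following Python sketch, written for this discussion rather than taken from the prototype, shows one way a photo record and a combined space/time/keyword filter could be encoded; all field names and the keyword values are assumptions.

    # Illustrative sketch of space/time/keyword photo filtering; not the prototype's code.
    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Set, Tuple

    Cell = Tuple[int, int]  # a floor-plan grid cell, e.g. (row, column)

    @dataclass
    class PhotoRecord:
        path: str                 # image file location
        taken_on: date            # capture date (e.g. from EXIF)
        floor: str                # e.g. "Level 2"
        cells: Set[Cell]          # grid cells covered by the photo
        keywords: Set[str] = field(default_factory=set)  # e.g. Master Format divisions

    def filter_photos(photos: List[PhotoRecord], start: date, end: date,
                      region: Set[Cell],
                      keywords: Set[str] = frozenset()) -> List[PhotoRecord]:
        """Photos whose coverage intersects the region, whose date falls in
        [start, end], and which carry every requested keyword, ordered by time."""
        hits = [p for p in photos
                if start <= p.taken_on <= end
                and p.cells & region
                and keywords <= p.keywords]
        return sorted(hits, key=lambda p: p.taken_on)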

6.2.2 Timeline

Users’ opinions on the usability of the timeline fell into two categories. First, users interpreted the saturation of the timeline as construction progress, and the timeline was considered an overview of the schedule. One good example was finding out when the construction team started building the mechanical platform (described in the ‘Strategies’ section); the saturation of the timeline provided hints about the time range of an activity. Second, users pointed out that the timeline should have multiple granularities (e.g., so they could select a single day, or from one exact date to another). When participants were asked whether a calendar would meet their needs, one of them commented, “the timeline is a nice and simple icon and a calendar view alone would be hard to use, but to have a calendar view accompanying the time line could be useful.”
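The multi-granularity selection participants asked for could, for example, snap a dragged time range outward to whole days, weeks, or months. The Python sketch below is one possible behaviour, not how the prototype’s timeline works; the granularity names and snapping rules are our own assumptions.

    # Possible behaviour sketch: snap a selected time range outward to whole
    # days, weeks (Monday-Sunday), or calendar months.
    from datetime import date, timedelta

    def snap_range(start: date, end: date, granularity: str = "day"):
        if granularity == "week":
            start = start - timedelta(days=start.weekday())            # back to Monday
            end = end + timedelta(days=6 - end.weekday())              # forward to Sunday
        elif granularity == "month":
            start = start.replace(day=1)                               # first of month
            first_of_next = (end.replace(day=28) + timedelta(days=4)).replace(day=1)
            end = first_of_next - timedelta(days=1)                    # last of month
        return start, end  # "day": dates are already whole days

    print(snap_range(date(2008, 9, 14), date(2008, 9, 14), "month"))
    # (datetime.date(2008, 9, 1), datetime.date(2008, 9, 30))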

6.2.3 Context Switch

In the current prototype, floor plans and the timeline are displayed in the main window. Photos can be brought up along with filters in a separate window. We placed photos in a separate window to increase the screen space available for photos. However, manipulating the search widgets (time ranges and spatial regions) from the photo display window requires window switching. This context switching was described as “a disconnect” by participants. Users expected to be able to manipulate all search widgets while they examined a photoset, and did not like to “go back and forth”. From the video observations, we found that when users obtained an unexpected photoset (e.g., one that was empty or did not meet the search criteria), they usually dragged the photo window away to check whether they had selected the correct region. If all of these widgets were in the same window, this check would be simpler. One participant commented, “When you see a photoset with filters, you know filtering affects photos, but you don’t know what floor plan or what timeline you were on. You have to remember the timeline and the floor specification”. Minimizing this context switch could potentially improve performance by reducing memory load. One engineer pointed out that they would rather tolerate relatively small photos in exchange for having all search widgets available in a single window. Ideally, a larger screen would allow all widgets to be integrated without compromising photo size.

6.3 Assumptions about Cell Saturations

In our prototype, each cell in the floor plan is filled with a gray color whose saturation represents the number of photos. We expected that participants could see where most photos were taken across the building. All participants were able to understand the representation, and one civil engineer commented, “It is perfectly reasonable how you have done it to use colors to highlight the (photo) numbers”. However, one unexpected outcome was that engineers attached additional meanings to the cell saturations based on their domain knowledge. Engineers made two main types of interpretations regarding the cell saturation.
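As an illustration of the encoding, per-cell photo counts can be normalized against the busiest cell to obtain the gray level drawn in each cell. The Python sketch below reflects our own naming assumptions, not the prototype’s rendering code.

    # Sketch: normalize per-cell photo counts to gray levels (0 = white, 1 = darkest).
    def cell_gray_levels(counts_by_cell):
        """Map {cell: photo_count} to {cell: gray in [0, 1]}."""
        max_count = max(counts_by_cell.values(), default=0)
        if max_count == 0:
            return {cell: 0.0 for cell in counts_by_cell}
        return {cell: count / max_count for cell, count in counts_by_cell.items()}

    print(cell_gray_levels({(0, 0): 12, (0, 1): 3, (1, 0): 0}))
    # {(0, 0): 1.0, (0, 1): 0.25, (1, 0): 0.0}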

6.3.1 The darker the cell, the more photos, the more activities?

Several engineers assumed that more photos in an area meant there were more construction activities in that area. Photos are usually taken by project engineers or coordinators who monitor the project and keep records in case of future claims. These people are likely to take photos to record the type and quality of the activities happening at the time. This is not always the case; for example, if the project was delayed because of heavy rain, construction workers might have taken many photos to explain why they could not get to the site. Engineers agreed that more photos generally indicate more activities, but disagreed about whether the activities necessarily contributed to project progress. For record keeping and construction claims, however, all kinds of activities are equally important.


6.3.2 The darker the cell, the more photos, the more problems?

One engineer specializing in diagnosing construction performance told us that if more photos were taken in one area, there might be more problems in that area. He pointed out that photos could be taken to capture defects and then used as evidence. “From my perspective”, he said, “the purpose of taking pictures are different. You might find defects on the wall or equipment, so you take photos for claiming. In the case related to construction performance, construction deficiency, it doesn’t mean more work has been done, (if there are more photos taken in this area)”. In this case, more photos probably means more problems. Although this is the opinion of a single engineer, he represents a large group of construction personnel who work in the area of defect inspection.

How the number of photos is interpreted is influenced by the user’s domain of expertise and by the purpose for which the photos were taken. Photos may be used to reconstruct progress and responsibilities later on, or to uncover problems and diagnose construction performance; most of the time, however, photos are taken for both purposes. Another engineer suggested that we could have two sets of saturated cells with different colors to represent photos taken for different purposes. This might be cluttered if two groups of cell colors were present on the same floor plan, but it could be useful if users were allowed to switch back and forth between the two sets of cells by turning one of the two color schemes on or off; alternatively, floor plans with differently colored cells could be displayed side by side.
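As a rough sketch of that suggestion, written in Python with an assumed ‘purpose’ tag on each photo that the prototype does not currently record, the per-cell counts could be split by purpose so that either overlay, or both side by side, can be rendered.

    # Sketch: split per-cell photo counts by the purpose of each photo
    # (e.g. "progress" vs. "defect") so two colour overlays can be drawn.
    from collections import Counter

    def counts_by_purpose(photos, purpose_of):
        """Build {purpose: Counter({cell: count})}.

        `photos` yields objects with a `cells` attribute (grid cells covered by
        the photo); `purpose_of(photo)` returns a label such as "progress" or "defect".
        """
        overlays = {}
        for photo in photos:
            overlays.setdefault(purpose_of(photo), Counter()).update(photo.cells)
        return overlays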

6.3.3 Empty cells, no photos, no work?

Some participants did not assume any of the above, but they assumed “No photos, no work has been done”. Other participants used the cell colors to gain an overview of the
