
Examining the learning curve in projection based augmented reality: effects of instructions types under varying levels of task complexity


Academic year: 2021



Robert ten Have

S2962233 | UNIVERSITY OF GRONINGEN

Examining the learning curve in projection based augmented reality: effects of instructions types under varying levels of task complexity


Abstract

Objective: Augmented reality is a promising tool to assist the training of operators for assembly tasks. Through projection-based augmented reality, operators are provided with instructions projected onto their work environment. How and what instructions should be projected remains ambiguous. Therefore, the aim of this thesis is to compare two instruction types within projection-based augmented reality. The first type provided operators with AR contour instructions; the second provided AR contour instructions combined with photo instructions. The aim is to measure operator performance for both instruction types, moderated by low and high complexity tasks. Moreover, two phases of the learning curve are examined to determine where, based on the performance of an operator, task types may be best supplied with one of the two instruction types.

Method: An experimental study was carried out to compare the two instruction types. Participants were asked to build two products with the two instruction types. This was done to determine the performance of each operator. Moreover, for each performance measure the learning curve was visualized to determine where in the learning curve operators benefitted most from having a certain instruction type. Performance consisted of the number of errors a participant made, productivity and workload.

Results: The learning curve showed a difference between instruction types in the productivity of an operator. This difference was found in the first three product cycles, which were defined as the start-up phase of the learning curve. After this phase there was no significant effect of instruction type on productivity. The number of errors was observed to be higher when operators assembled solely with AR instructions during the start-up phase and, in some cases, during the steady-state phase. Frustration was found to be increased when assembling with AR instructions.


Examining the learning curve in projection based augmented reality: effects of instructions types under varying levels of task complexity

Master thesis

Date of submission: 17th of February, 2019

• Author: Robert ten Have – S2962233
• First supervisor: dr. J.A.C. Bokhorst, University of Groningen, Faculty of Economics & Business
• Second supervisor: dr. ir. D.J. van der Zee, University of Groningen, Faculty of Economics & Business
• In partnership with: TNO


Acknowledgement

This thesis is written as the final part of the master's programme in Technology and Operations Management at the University of Groningen. I want to thank the people who have provided me with support, feedback and valuable insights.

Firstly, I would like to thank both Dr. J.A.C. Bokhorst and Dr. Ir. D.J. van der Zee for their guidance. A special thanks to Dr. J.A.C. Bokhorst for his excellent supervision, guidance and support throughout the thesis. During our regular sessions I was provided with valuable insights and feedback. Without him this thesis would not have been possible.

Secondly, I would like to thank TNO for allowing me to use their laboratory and for providing me with excellent support throughout the thesis. A special mention to Tim Bosch, Ellen Wilschut and Reinier Könneman for their dedication, involvement, feedback and support throughout the process.

Thirdly, I want to express my gratitude towards all of those who have partaken in my experiments. Their enthusiastic cooperation was much appreciated.


Table of Contents

Abstract ... 1
Acknowledgement ... 3
List of figures ... 5
List of tables ... 6
List of graphs ... 6
1. Introduction ... 7
2. Literature review ... 10
2.1 Assembly ... 10
2.2 Augmented reality ... 12
2.3 Complexity in assembly ... 16
2.4 Training in assembly ... 21

2.5 Performance measures in assembly using augmented reality ... 26

2.6 Conceptual model ... 29

3. Methodology ... 30

3.1 Experimental design ... 30

3.2 Participants ... 37

3.3 Materials ... 38

3.4 Procedures & Data collection ... 39

3.5 Data analysis ... 41

3.6 Validity and reliability ... 43

4. Results ... 44

4.1 Main and interaction effects of instruction type, product cycles, complexity and task type on productivity ... 44

4.2 Main and interaction effect of instruction type and product cycles on the number of errors ... 54

4.3 Workload ... 57

5. Discussion ... 58

5.1 Discussion of results ... 58

5.2 Implications ... 61

5.3 Limitations and future research ... 62

6. Conclusion ... 64

7. References ... 65


Appendix A – Low and high complexity indicators (Falck et al., 2017) ... 71

Appendix B – NASA-TLX factors (NASA, 1986) ... 72

Appendix C – Tasks in assembly order of product A & B. ... 73

Appendix D – Complexity factors not considered ... 77

Appendix E – Complexity per task type ... 78

Appendix F – Information letter for prospective participants ... 80

Appendix G – Booklet for participants during experiments ... 81

Appendix H – Protocol for measurement during experiments... 93

Appendix I – Eye tracker data exclusion ... 96

Appendix J - Detailed assembling process of participants ... 97

Appendix K – Questionnaire data ... 98

Appendix L – Product cycle average step times per task type and instruction type ... 99

List of figures

Figure 1: Common manual assembly system configurations (Swift & Booker, 2013) ... 11

Figure 2: Reality-Virtuality continuum from Milgram, Takemura, Utsumi, & Kishino (1994) ... 12

Figure 3: Product related factors (Haagsman, 2018) ... 17

Figure 4: Process related complexities (Haagsman, 2018) ... 17

Figure 5: Personal factors (Alkan et al., 2016; Mattsson et al., 2012) ... 19

Figure 6: Handling & insertion attributes (Samy & Elmaraghy, 2010) ... 20

Figure 7: Plateau phases (Conway & Schultz, 1959) ... 22

Figure 8: KPI's from Jetter et al. (2018)... 26

Figure 9: Conceptual model ... 29


List of tables

Table 1: Counterbalancing of participants ... 31

Table 2: Complexity checklist ... 34

Table 3: Order of tasks per product ... 35

Table 4: Complexity for task type Pins ... 36

Table 5: Pegboard trial scores for both instruction types ... 37

Table 6: Within-subject factors of productivity ... 41

Table 7: Within-subject factors of the number of errors ... 42

Table 8: Main and interaction effects of instruction type, product cycles, complexity and task type on productivity ... 45

Table 9: Main effect of task type on productivity ... 46

Table 10: Interaction effect of instruction type * product cycle ... 48

Table 11: Interaction effect of instruction type * complexity ... 48

Table 12: Interaction effect of instruction type * task type ... 49

Table 13: Main and interaction effects of product cycles and instruction type on the number of errors ... 54

Table 14: Interaction effect of instruction type * product cycles on the number of errors ... 55

Table 15: Workload of 6 NASA-TLX factors ... 57

List of graphs

Graph 1: Average productivity in seconds per product cycle ... 45

Graph 2: Average productivity per instruction type per product cycle ... 47

Graph 3: Average productivity per task type and instruction type ... 50

Graph 4: Average productivity per product and complexity ... 51

Graph 5: Average productivity per product cycle per task type ... 52

Graph 6: Productivity of beams per product cycle per instruction type ... 53

Graph 7: Productivity of pins per product cycle per instruction type ... 53

Graph 8: Productivity of tubes per product cycle per instruction type ... 53

Graph 9: Productivity of cogwheel per product cycle per instruction type ... 53

Graph 10: Average number of errors per product cycle ... 55


1. Introduction

Manufacturers nowadays are met with increasingly high standards regarding quality and production times (Choi, Chan, & Yuen, 2002). To keep up with such standards, manufacturers need a highly efficient production process. A key part of a streamlined production process is having well-trained employees who are able to assemble products according to customers’ demands. This requires employees to be flexible, fast and quality oriented. In order to achieve this, the workforce has to be well informed and trained. Traditionally, training is done either by other staff or through paper instructions (Funk et al., 2015). However, this can be time consuming and costly for a firm. As the industry progresses, new promising technologies that can aid training have emerged. One of those technologies is augmented reality.

Augmented reality (AR) in its most basic form can be described as a mixture of the virtual and the real environment. For example, a projector can be used to project a virtual image onto a physical object. This virtual imagery can be used to show instructions on an assembly. Therefore, augmented reality is suitable for training purposes in assembly (Segovia, Mendoza, Mendoza, & González, 2015). Instead of having manual instructions during training, AR can be deployed to help with the training process. Boeing was the first to employ augmented reality in their assembly process for training purposes. Personnel were given instructions through AR to assemble various parts of an aircraft (Caudell & Mizell, 1992). The project was successful and other firms have followed in employing AR.

AR thus has the ability to give instructions to trainees through various methods (Funk et al., 2017). There are many ways in which these instructions can be visualized: the training process can be supplied with all forms of information (e.g. photo, video, contours). These forms of information display for trainee assistance are defined in this research as instruction types. However, it is unclear which form of instruction should be given in which situation (Elia, Gnoni, & Lanzilotto, 2016). The difficulty here is determining how information should be displayed during an assembly task.


But how information should be displayed could be influenced by how simple or hard a task might be: complexity. A definition of complexity is given by the Cambridge dictionary: “The state of having many parts and being difficult to understand or find an answer to” (Cambridge dictionary, n.d.). Thus, complexity relates to the degree of difficulty. This research hypothesizes that the relationship between instruction types and performance might partially be explained by the degree of complexity. For example, constructing a table is easier with simple text instructions than constructing an engine for a plane. The aforementioned complexity within tasks relates to the technical capability of some instruction types: for some tasks it might not be necessary to display certain instruction types.

Wilschut, Könemann, Murphy, Rhijn & Bosch (2019) state that AR needs to provide instructions based on objective measures of “performance, user experiences and skill levels of operators”. In order to base instructions on human characteristics, an AR system would ideally provide a degree of adaptability (Elia et al., 2016). A first task in reaching adaptability of such a system is to determine how operators use the information that is presented to them. By doing so, an AR system can be modified to provide the preferred type of instruction during the learning curve. Current research is focused on the learning curve of a product as a whole (e.g. Hořejší, 2015; Wilschut, Murphy, & Bosch, 2019), but little is written regarding the individual tasks that comprise a product and their impact on performance. As an operator may go through phases when learning to build a product, each task could reveal information that relates to the performance of an operator. For example, assembling a motherboard can consist of several tasks such as placing RAM or connecting wires; text instructions may be more suitable for placing RAM than for connecting wires.


This leads to the following research question: How do instruction types, under various levels of task complexity, affect performance in projection-based AR training, and what is the impact of the phase of the learning curve?

Given the research question, the contribution of this research is twofold. First, manufacturing firms that employ AR for training purposes can optimize their projection-based AR program for higher performance by applying the most effective instruction type in each phase of the learning curve. Furthermore, this research contributes to practice and theory by examining the learning curve during task assembly and the preferred instruction type based on how well a trainee performs. Second, academics can use this research in further exploring adaptability in AR training systems.


2. Literature review

In the literature review concepts that are relevant to the research question are discussed. These are: assembly, augmented reality types & technology, (task) complexity, training in augmented reality, the learning curve and performance measures in augmented reality.

First, assembly and its characteristics are discussed to determine what forms of assembly exist and what common assembly systems configurations there are.

Second, augmented reality and its technology/types are reviewed to show which forms of augmented reality exist and how those are employed. After reviewing those a motivation regarding the choice of augmented reality type in this research is given.

Third, (task) complexity in assembly is discussed to show that different assemblies have different degrees of difficulty. The identified complexities are categorized to identify what types of complexity there are and how they are present in assembly.

Fourth, training in assembly with augmented reality is discussed to show how operators train with augmented reality and to define what a learning curve is. Moreover, the instruction types used in augmented reality are identified. Last, the adaptability of those instruction types is explored.

Fifth, several performance measures are identified and discussed. A motivation is given regarding the choice of performance measures and why they are relevant to the research.

The aforementioned concepts are then placed into a conceptual model to understand how these relate to each other.

2.1 Assembly

In order to delineate the scope of the research, assembly needs to be explored. According to Nof & Chen (2003) assembly can be defined as: “Assembly is the productive function of building together certain individual parts, subassemblies and substances in a given quantity and within a given time period” (p. 307). This research focuses on industrial manual assembly, as the aim is to examine operator performance. Manual assembly is characterized by the human aspect: the human is at the center of the assembly process (Nof, Wilbert, & Warnecke, 2012). In this section the focus is on the human aspect of manual industrial assembly. Involving humans in the assembly process has several advantages: humans have the ability to learn, adapt to a certain context and can be highly flexible (Weidner, Kong, & Wulfsberg, 2013). Disadvantages are that humans have limited endurance and are characterized by a high operating cost (Weidner et al., 2013).

Manual industrial assembly is mostly employed when products are either low in volume or components are unable to be mounted through an automatic process (Nof et al., 2012). In general, the worker performs a series of actions which lead to a subassembly or an assembled product (Swift & Booker, 2013). The worker may operate alone, in a line or both with assistance of machines as shown in figure 1.

Figure 1: Common manual assembly system configurations (Swift & Booker, 2013)

Cooperative work stations

Figure 1 shows an in-line transfer system with mechanized assistance. These systems feed parts to the operator. However, for more complex tasks, which require additional assistance, a new type is identified: cooperative work stations (Krüger, Lien, & Verl, 2009). An operator may be supported by an assistive system during manual assembly. These are technical systems which receive information from their environment to offer support during a task (Hinrichsen, Riediger, & Unrau, 2016). The assistive system should support the operator with clear information such as pictorial instructions. Moreover, the system should provide operators with the correct information or assembly assistance at the right time.

Hinrichsen et al. (2016) defined two types of assistive systems for cooperative work stations: physical and informational systems. Physical systems aim to reduce the amount of physical strain on an operator and aid in assembling a task. The objective of informational systems is to guide the operator and reduce mental effort during a task. Both types of support systems are used to increase the productivity of an operator and the quality of the product (Hinrichsen et al., 2016).

An example of a cooperative work station with an assistive system is one created by Assembly Solutions GmbH (2016). They created a stationary system which provides operators with visual and textual support projected onto the operator’s workspace. Moreover, the operator receives visual cues from the system to indicate which part should be picked and where the part should be placed. The aforementioned support system is a form of AR, which is discussed in the next subsection.

2.2 Augmented reality

“Augmented reality can be defined as any system with the following three characteristics: it combines real and virtual worlds, it is interactive in real time and is registered in three dimensions” (Azuma, 1997, p. 356). Augmented reality is thus a mixture between the virtual world and the ‘real’ world. Figure 2 describes the reality-virtuality continuum. Virtual reality is the usage of a digital environment where all interactions remain in the virtual world (Graafland, Barsom, & Schijven, 2016). The two thus differ in that augmented reality uses virtual items to overlay the physical world, whereas virtual reality remains in a virtual environment. The distinction between virtual reality and augmented reality is vital to delineate the scope of this research. This research focuses solely on AR.

To further delineate, this research focuses on industrial augmented reality, which is defined as:

“applying augmented reality to support an industrial process” (Fite-Georgel, 2011, p. 2). Several applications within industrial augmented reality have been defined: Product design, manufacturing, commissioning, maintenance and decommissioning (Fite-Georgel, 2011). This research is centered around manufacturing within industrial AR.

Caudell & Mizell (1992) coined the term augmented reality in 1992. Their paper can be seen as the first application of AR in an assembly setting. Since then, much progress has been made regarding computer and manufacturing technology (Ong et al., 2008). The advances made in this field are beneficial for the development of AR. However, Ong et al. (2008) state that AR is still in its exploratory stage and improvements are expected over the years. The authors state that most issues relate to the size of head-mounted displays. Wang et al. (2016) surveyed more recent research on AR and concluded that new challenges have arisen. For example, one of the major challenges is the context-awareness of AR systems.

Context-awareness means that AR systems should ideally adapt to the operator. A context-aware system should consist of two parts. First, information regarding the operator, place, time and event should be known. Second, a context-aware system should be able to monitor the cognitive status of an operator. The two parts ensure that the relevant information is present and can be displayed based on the cognitive status of an operator (Wang et al., 2016). For example, if the system detects that an operator is not able to comprehend the information given, it adapts the degree of information to suit the operator better. Context-aware systems are discussed in detail in section 2.4.3; here context-awareness was discussed to show that it poses one of the challenges of AR.

2.2.1 Technology and types

This section outlines what an AR system consists of and what types of AR there are. Four types are discussed to show how instructions can be provided with AR. Based on these types, a motivation is given for the chosen AR type.

AR systems

At the core of each AR system are three elements: hardware, software and tracking. Hardware comprises the physical components of an AR system; software comprises the programs that make the hardware run.

The main categories of the software element are: programming languages, libraries, software development kits, game engines and 3D modelling (Palmarini, Erkoyuncu, Roy, & Torabmostaedi, 2018). These may be used in the development of software for AR applications.

The hardware element consists of devices which can be subdivided into six classes: head-mounted displays, hand-held displays, desktop PCs, projectors, haptics and sensors (Palmarini et al., 2018). Head-mounted displays are devices which are worn on the head and can display information on the display, or can reflect information through the display, allowing users to see both the virtual and physical world (Sutherland, 1968). Hand-held displays are devices such as phones or tablets. Desktop PCs are personal computers, often at a single location. Projectors are devices which can project static or dynamic visuals onto physical objects. Haptics is a technology which enables communication between human and machine, frequently by touch (Hayward, Astley, Cruz-Hernandez, Grant, & Robles-De-La-Torre, 2004). Sensors are devices which detect or react to changes in their environment (Southwest Center for Microsystems Education, 2017).

Tracking has three main techniques: sensor-based tracking, vision-based tracking and hybrid-based tracking (Stütz, Dinic, Domhardt, & Ginzinger, 2014). Sensor-based tracking uses sensors to determine what the operator is doing; it is well developed and few advances have been made in recent years (Stütz et al., 2014). Vision-based tracking uses images to determine the position relative to real-world objects (Stütz et al., 2014). Hybrid tracking incorporates several different sensing technologies to track the user or object.

Types of augmented reality

The aforementioned elements make up the core of each AR system. Within AR two types can be distinguished: marker-less and marker-based (Patkar, Singh, & Birje, 2013). Marker-based AR makes use of cameras and visual cues, whereas marker-less AR makes use of GPS and compass (Patkar et al., 2013). Based on these two types, Fleck, Hachet, & Bastien (2015) identified the most common forms of AR: projection, recognition, location and outline.

Projection-based AR

Projection-based AR uses a projector to display virtual information directly onto physical objects and (in most cases) does not require headwear or eyewear for the user (van Krevelen & Poelman, 2010).

The main advantages of projection-based AR are: no use of heavy eyewear, suitability for group work, projection directly onto the product, and the observer not having to switch focus. Disadvantages are that projection-based AR is often in a fixed position and restrained by external conditions such as the amount of light; outdoors, projection-based AR would fail to work (Lackey & Schumaker, 2016).

Recognition-based AR

The recognition type uses recognition techniques to identify real-world items. Based on these items it can provide additional information to the user (Fleck et al., 2015). For example, the MyFitnessPal application allows you to scan an item in the supermarket with a phone’s camera; based on the item, nutritional values are shown.

Location-based AR

Location-based AR uses GPS to provide the user with positional information (Fleck et al., 2015). An example is Google Maps, which allows users to navigate roads based on real-life imagery.

Outline-based AR

Outline-based AR can provide information based on the outline of an object or user. This outline can be used to superimpose information on the real world (Milgram, Takemura, Utsumi, & Kishino, 1994). An example is architects who may outline a building and superimpose it on the real world to show whether the building would suit the environment.

Marker-based projection augmented reality


2.3 Complexity in assembly

The complexity of assembly is discussed to explore what types of complexities are present in assembly. Categorizations of complexity are explored to determine where complexity is present in assembly. These categorizations have several factors which determine the amount of complexity. This is done as it is expected that complexity has a moderating effect on the relationship between instruction type and performance.

2.3.1 Qualitatively assessing complexity

Complexity does not have a universal definition; it is highly dependent on the assembly (Samy & Elmaraghy, 2010). However, Falck, Örtengren, Rosenqvist, & Söderberg (2017) note the following about complexity:

“Complexity of a production system relates to its inherently unpredictable behaviors and non-linear dynamics making it difficult to describe in clear-cut terms”. As complexity is such a general term, academics have defined it in many ways (e.g. Elmaragh, Kuzgunkaya, & Urbanic, 2003; Al-Zuheri, 2013; Zaeh, Wiesbeck, Stork, & Schubö, 2009). This research defines complexity based on Alkan, Vera, Ahmad, Ahmad, & Harrison (2016). Their categorizations were crafted from recent literature and are specifically designed for manual assembly. Assembly complexities are therefore defined in the following four categories: product related factors, process related factors, personal factors and environmental factors (Alkan et al., 2016).

Falck et al. (2017) conducted a study which interviewed 64 skilled engineers who have worked in design and manufacturing engineering. This study identified and assessed complexity within manual assembly in two categories: low complexity and high complexity. They created a checklist which lists these complexities; this checklist is shown in Appendix A. The criteria given can be used to assess the complexity of an assembly as a whole or of individual tasks (Falck et al., 2017). This research defines complexity as task complexity. Task complexity is used as different tasks may require different instructions, as stated in the introduction.
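To make the checklist idea concrete, the sketch below classifies a task as low or high complexity by counting how many criteria an assessor judges as high. The criterion names and the majority-vote threshold are hypothetical illustrations, not the actual checklist of Falck et al. (2017) reproduced in Appendix A.

```python
# Illustrative sketch only: classify a task as low or high complexity
# by counting how many checklist criteria are judged "high".
# Criterion names and the majority threshold are hypothetical.

def assess_complexity(ratings: dict) -> str:
    """ratings maps a criterion name to True if judged high complexity."""
    high_count = sum(ratings.values())
    # Simple majority rule over the rated criteria (an assumption).
    return "high" if high_count > len(ratings) / 2 else "low"

task = {
    "many individual mounts": True,
    "sequence dependent": True,
    "requires rework": False,
    "unclear instructions": True,
    "high accuracy needed": False,
}
print(assess_complexity(task))  # -> high
```

In practice the aggregation would follow the checklist itself; a simple majority over binary judgements is only one possible rule.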


2.3.1.1 Product related factors

Product assembly complexity is defined by Samy & Elmaraghy (2010) as: “the degree to which individual parts/subassemblies have physical attributes that cause difficulties during the handling and insertion processes in manual and automatic assembly”. In other words, product complexity can be found in the handling and placing of parts within an assembly setting. Figure 3 shows the identified product related factors; these relate to the product itself.

Figure 3:Product related factors (Haagsman, 2018)

2.3.1.2 Process related factors

During assembly a product will go through several stages before a finalized product is achieved. Process complexity is the act of “measuring the difficulty of describing and executing a process” (Howard, Rolland, & Qusaibaty, 2004). Figure 4 shows the process related complexities combined by Haagsman (2018). As these factors are not as straightforward as the product related factors, they are discussed in detail.

Figure 4: Process related complexities (Haagsman, 2018)

Product related factors (contents of Figure 3):
Clarity of mounting position: degree of clarity of the mounting positions of parts/subassemblies
Ease of accessibility: degree of accessibility of a product/part/subassembly for an operator
Visibility of operations: degree to which the operator can see operations
Necessity of subjective inspection: degree to which inspection by an operator is necessary
Degree of accuracy: degree of precision needed for a task
Material characteristics: the shape/form of the material


Process variation: relates to the ways in which the assembly can be accomplished. Falck, Örtengren, Rosenqvist, & Söderberg (2017) give the example of utilizing hand tools or not. For a low complexity task there should be a standardized way of performing the task, meaning the process should contain few variations in what is required for the task to be completed. For example, assembling a table should not require 15 different types of specific tools for its construction.

Number of individual mounts/details: the number of parts/components that the operator is required to mount on the assembly. For low complexity there should be few parts to mount (Falck et al., 2017).

Duration of assembly/time pressure: the amount of time it takes to assemble a task; the time available to assemble a task influences the amount of complexity (Su, Liu, & Whitney, 2010). To achieve low complexity, the assembly should be easy and time pressure must be kept to a minimum (Falck et al., 2017).

Sequence dependency: relates to the order in which the assembly task must be accomplished; for a low complexity task, the assembly should be done in one way (Falck et al., 2017). Some sequences may be more complex than others due to fixtures, tool changes or infeasibility of the task (Raghavan, Molineros, & Sharma, 1999).

Degree of rework: refers to the adjustments that are made during a task at a specific station (Falck et al., 2017). If few adjustments are necessary within a task, the task is considered low complexity.

Product variation: “If the surrounding environment varies where parts and components should be mounted or if the detail to be positioned is dependent on the surrounding components” (Falck et al., 2017, p. 4256). Fast-Berglund, Fässberg, Hellman, Davidsson, & Stahre (2013) state that mass customization leads to a higher degree of product variation.

The presence of detailed, unclear or insufficient (undocumented) work instructions: instructions for an assembly should be clear to achieve low complexity. High complexity instructions are vague or insufficient to complete the assembly (Falck et al., 2017).

2.3.1.3 Personal factors

For example, a person assembling an IKEA closet for the first time may find the task complex: he perceives the complexity of the closet as high. A person who works for IKEA and has assembled many closets may perceive the complexity of the closet as low. Thus, certain human characteristics influence the degree of perceived complexity. Alkan et al. (2016) state several personal factors, which are shown in figure 5. They state that these elements affect the perceived complexity. The factors are discussed briefly, as they cannot be incorporated into individual task complexity.

Figure 5: Personal factors (Alkan et al, 2016, Mattsson et al., 2012)

Mental & physical capacity

Mental capacity refers to the degree to which the operator has to use their mind to perform a task. For example, an operator may need to remember specific building instructions.

Physical capacity refers to the degree to which the operator has to use their body to perform a task. For example, the operator is required to carry heavy parts/subassemblies to different locations.

Training level of the operator

The training level of the operator affects how much complexity the task contains: as with the IKEA example, an operator who is already familiar with a certain product will perceive less complexity. Some assembly tasks are trivial and thus require little to no training, whereas others may require further education to perform the task.

Personality, Culture & Motivation to work

These factors are grouped together as they are difficult to quantify or measure. Personality is different for each person, so the influence of personality on complexity is hard to define. Culture matters because each culture has different norms and values, which could influence how an operator perceives a task. Motivation to work is different for each person and arguably different for each day.

2.3.1.4 Environmental factors


work station, and amount of space. These are not explored in detail; environmental factors are not considered as they relate to the overall setting and not the complexity of individual tasks.

2.3.2 Quantitatively assessing complexity

The aforementioned factors give criteria to qualitatively assess whether a task has a low or high complexity. It may also be useful to quantitatively assess complexity to determine its degree. A method for doing this was devised by Samy & Elmaraghy (2010), who defined manual assembly complexity as having two groups: handling attributes and insertion attributes. These groups have several attributes, which are shown in figure 6. From these attributes an average difficulty factor can be calculated, which may aid in quantitatively identifying the complexity of a task or product.

Figure 6: Handling & insertion attributes (Samy & Elmaraghy, 2010)
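The idea of an average difficulty factor can be sketched as follows: each rated attribute contributes a difficulty score, and the scores are averaged per task. This is a minimal illustration of the averaging step only; the attribute names and scores below are made up for illustration and are not taken from Samy & Elmaraghy (2010).

```python
# Sketch: averaging per-attribute difficulty scores for a task, loosely
# following the idea of an average difficulty factor. The attribute names
# and scores are illustrative assumptions, not values from the paper.

def average_difficulty(attribute_scores: dict[str, float]) -> float:
    """Return the mean difficulty over all rated attributes of a task."""
    return sum(attribute_scores.values()) / len(attribute_scores)

pins_low = {"alignment": 0.3, "insertion_force": 0.2, "accessibility": 0.4}
pins_high = {"alignment": 0.8, "insertion_force": 0.5, "accessibility": 0.9}

print(average_difficulty(pins_low))   # lower average suggests lower complexity
print(average_difficulty(pins_high))
```

A higher average would then indicate a more complex task, mirroring how the insertion attribute scores are compared later in this thesis.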


the research question, task complexity can be identified and assessed using the aforementioned methods.

2.4 Training in assembly

This research aims to provide the first steps towards an adaptable AR system; such systems may benefit from knowing which instruction types lead to the highest performance. When an operator is then met with a certain task, the system could provide the instruction type shown to lead to the best operator performance. Training in assembly is therefore discussed to show how training is done with AR.

Moreover, the types of learning curves are discussed to determine the best fit for this research. Last, proposed adaptability systems are explored to show what the current state of adaptable AR is.

In most assembly contexts the user is either trained by staff or provided with a manual which contains instructions for the task (Funk et al., 2015). Next to these traditional methods, AR and VR training have gained increasing attention (Boud, Haniff, Baber, & Steiner, 1999). With this type of training the user is supported by a virtual aspect. VR training lets the user interact with a virtual object; in AR this is different, as a virtual image is projected onto the real world (Boud et al., 1999).

2.4.1 Training in augmented reality

In AR training the user is guided with an overlay onto the real world which provides instructions for the task to be completed (Gavish et al., 2015). The main advantage of using AR based training is that the trainee has virtual information directly available in a physical environment (Webel et al., 2013).

Furthermore, the trainee is able to receive instant feedback through an AR system (Webel et al., 2013). AR training is beneficial as it realizes a reduction in errors and in training time (Westerfield, Mitrovic, & Billinghurst, 2015). However, the trainee should not become dependent on the AR system, because the trainee has to learn the task itself (Webel et al., 2013; Wang et al., 2016). This can be mitigated by reducing the number of instructions as the task progresses (adaptability), which ensures the trainee needs to remember how the task is assembled. Radkowski et al. (2015) confirm the former and state that AR should be adapted to the difficulty of a certain task.


AR training thus has benefits in terms of reduction of errors and time, but due to the poor adaptability and feedback of most systems these benefits are not fully realized yet.

2.4.1.1 Training in augmented reality: learning curve

The Oxford dictionary defines the learning curve as “the rate of a person’s progress in gaining experience or new skills”. This relates to assembly, as for a given task a person must learn how to assemble it. There are several varieties of the learning curve; the best known are the S-model, the DeJong model, the plateau model, the log-linear model and the Stanford-B model (Yelle, 1979). For this research the plateau model is the most suitable, as it is expected that the learning curves in the experiment resemble that of figure 7. The plateau model was first introduced by Conway & Schultz (1959); it consists of two phases: start-up and steady state. During start-up the trainee is still learning, whereas in the steady state the trainee stops learning. Applying these stages to the research question, the start-up and steady-state phases are examined to determine how the instruction types affect performance.

Figure 7: Plateau phases (Conway & Schultz, 1959)
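One simple way to separate the two phases of a plateau curve is to mark the steady state as beginning once the relative improvement between consecutive cycles drops below some threshold. The cycle times and the 5% threshold below are illustrative assumptions, not the method prescribed by Conway & Schultz (1959).

```python
# Sketch: locating the start-up/steady-state break in a plateau learning
# curve. Steady state is assumed to begin at the first cycle whose relative
# improvement over the previous cycle falls below the threshold.

def plateau_start(cycle_times, threshold=0.05):
    """Return the index of the first cycle where learning has plateaued."""
    for i in range(1, len(cycle_times)):
        improvement = (cycle_times[i - 1] - cycle_times[i]) / cycle_times[i - 1]
        if improvement < threshold:
            return i
    return len(cycle_times)  # no plateau observed within the data

times = [120, 95, 80, 72, 70, 69, 70, 68, 69, 68]  # seconds per product cycle
print(plateau_start(times))  # cycles before this index form the start-up phase
```

With the illustrative data, cycles 1 through 4 form the start-up phase and the remainder the steady state; a different threshold would shift the break point.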


compared AR training to manual training to determine whether the effectiveness of AR differed between genders. They found no effect of gender, but using AR achieved better performance over the learning curve. Similar to Peniche et al. (2012), the authors did not distinguish between phases of the learning curve. This research will differentiate between the phases of the learning curve to accurately distinguish between when an operator is still learning (start-up) and when they cease to learn (steady-state).

2.4.2 Instruction types in augmented reality training

As our manufacturing systems increase in complexity (Efthymiou, Pagoropoulos, Papakostas, Mourtzis, & Chryssolouris, 2012), the correct display of information, and the choice of what kind of information to display, are becoming important. This research defines the display of information to assist operators as instructions. Others have defined the display of information similarly (e.g. Funk et al., 2017a; Wilschut et al., 2019; Korn, Schmidt, & Hörz, 2013). Next, an overview is given of the instruction types other academics have used to assist assembly training.

Projection that creates an overlay onto a workspace is gaining traction for employee training (e.g. Gavish et al., 2015). This instruction type is referred to as in-situ instructions (Funk et al., 2017a). In-situ instructions are categorized by Blattgerste, Renner, Strenge, & Pfeiffer (2018), who differentiate AR instructions based on whether they are presented 2d in-situ, 3d in-situ, 3d-wireframe or side-by-side. 2d in-situ refers to instructions projected onto the assembly in two dimensions, whereas 3d in-situ projects instructions onto the assembly in three dimensions; both show a contour of the part to be placed on the assembly. 3d-wireframe instructions are based on a skeletal representation of an object. Side-by-side instructions are projected next to the assembly. Most research in the AR field focuses on comparing 2d in-situ instructions against text, photo or video instructions, e.g. Funk et al. (2017), who used skilled and untrained workers, both groups working with 2d in-situ instructions. They found that untrained workers were able to produce faster and with fewer errors using in-situ projection-based AR instructions, whereas skilled workers showed no significant improvement in terms of productivity and errors.


aforementioned instruction types. They found that performance was best in 2d in-situ projection-based AR. On the contrary, Blattgerste, Strenge, Renner, Pfeiffer, & Essig (2017) presented participants with four different devices: a HoloLens, smart glasses, a smartphone and paper-based instructions. The first three used in-situ instructions. Participants were asked to assemble a DUPLO product using these devices with their in-situ or text instructions. They found that paper-based instructions led to the highest performance.

Given the aforementioned, different assemblies may require different types of instructions. Moreover, the effectiveness of in-situ AR instructions may be dependent on the assembly.

Current research mostly focuses on the difference between types of hardware rather than on the difference between the instructions themselves (e.g. Blattgerste et al., 2017). Here the aim is to differentiate between instructions rather than the devices which display them. Projection-based AR can be used to project instructions onto a physical space; therefore, 2d in-situ, 3d in-situ, 3d-wireframe and side-by-side instructions are suitable for this technology.

Projection-based AR has the ability to incorporate traditional instructions and project these onto the workspace (Tang, Owen, Biocca, & Mou, 2003). Traditional instructions refer to paper-based instructions, imagery or videos (Funk, Lischke, Mayer, Shirazi, & Schmidt, 2018). Projection-based AR can provide instructions projected onto the workplace or assembly and provide additional information on the side.

Moreover, projection-based AR instructions differ from traditional instruction types as they may be interactive (Funk et al., 2018). With an interactive projection-based AR system the user is able to alter the instructions provided for assembly. Wilschut et al. (2019) used such a system: participants were able to press a virtual button projected on the workspace to continue to the next set of instructions.

2.4.3 Adaptability of instruction types in augmented reality training


devices, sensors and user interactions. Their systems use the aforementioned to gather data and send this to a server. The server analyses the data and sends back relevant instructions for the trainee based on the data received.

Zhu, Ong, & Nee (2015) detail a system which uses information instances, which are “appropriate combinations of virtual objects” (Zhu et al., 2015). They divide the information instances into static content and dynamic rendering. The former are virtual objects that delineate information; the latter aims to integrate this information with unique contexts. With such a system the user is presented with a custom AR program. Thus, there are two parts: a part which ‘feeds’ information to the system and a part which uses this information to adapt to the situation of an operator. In this case the contexts (or situations) were provided for the system; in a truly adaptable system this would not be the case.



2.5 Performance measures in assembly using augmented reality

This research aims to measure the effect of instruction types on operator performance given task complexity. Performance of an operator has many aspects. Jetter, Eimecke, & Rese (2018) derived the most frequently used performance measures by means of expert interviews and literature. In total 15 performance measures were derived; these are shown in figure 8. Studies similar to this research are discussed to determine which performance measures are suitable to use for analysis.

Figure 8: KPI's from Jetter et al. (2018)

Wilschut et al. (2019) conducted a study where AR learning was evaluated against traditional paper instructions; they utilized the following performance measures: productivity, quality, workload and learning curve. Funk et al. (2017) measured the performance of skilled and unskilled operators using in-situ instructions. They used total cycle time, number of errors and cognitive workload. Tang et al. (2003) compared several types of AR to determine performance. The following performance measures were used: time of completion, error rates and perceived mental workload. This research therefore uses similar measures. From the performance measures of Jetter et al. (2018) and the aforementioned measures, three performance measures were derived: productivity, number of errors and workload.

Figure 8 shows more performance measures than the three chosen. As the objective is to measure how an operator performs given various instruction types, ease of use, scalability and expandability, flexibility, costs, extent of possibility to collaborate, compatibility, user/customer attention, social acceptance, employee satisfaction, knowledge management, spatial representation of contextual information and sustainability are not considered, as they relate to organizations, systems or external factors. Task complexity is not considered a performance measure; rather, it is suspected that task complexity influences the performance of an operator. The identified performance measures are discussed below.

Workload

When a user assembles a task, the cognitive system is met with a mental load; this can be described as the cognitive load (Van Merriënboer, Kirschner, & Kester, 2010). It is difficult to quantify what each unique user experiences during the performance of a task. Therefore, rather than measuring cognitive load, an alternative is to measure workload.

Workload is measured as it is expected that AR instructions lead to a lower workload. Hou et al. (2013) compared AR instructions with paper instructions and found that paper instructions led to a higher perceived workload.

Bosch et al. (2017) measured the perceived workload of operators when assembling with electronic work instructions and projection-based AR. They found that AR instructions led to a reduction in workload. Tang et al. (2003) performed a similar experiment where they compared four instructional types for perceived workload. They found that AR led to a lower workload compared to the other instruction types.


Productivity

Many researchers have defined productivity and found differing results. First a general definition of productivity is given, after which several papers which have measured productivity are discussed. Productivity is the “expression of the quantitative productiveness of an economic activity” (Kuhlang, Edtmayr, & Sihn, 2011). Productivity’s main use is to provide information about the effectiveness of deployed production factors (Kuhlang et al., 2011). For example, if operator 1 produces 15 products per hour whereas operator 2 assembles 30 products per hour, operator 2 has a higher productivity than operator 1.

Tang et al. (2003) measured productivity in their experiments as the task completion time: the average time it took a participant to perform a task. Their research showed that productivity did not increase when using AR instructions over text instructions. Wilschut et al. (2019) also measured productivity as the task completion time of a participant; the authors analyzed productivity for the learning phase of an operator. They found results similar to Tang et al. (2003): instruction types did not have a significant effect on task completion times.

Bosch et al. (2017) measured productivity as the task completion time; they compared electronic work instructions to projected AR. Their results showed that using projected AR significantly improved the productivity of operators. Funk et al. (2017) conducted research with trained and untrained operators to determine whether projected instructions increase productivity compared to traditional instructions. They measured productivity as task completion times and found that only untrained workers attained an increase in productivity with projected AR instructions.

This research follows other researchers in defining productivity as task completion times. Tasks here refer to the tasks which comprise an assembly. Complexity was defined in a similar way: rather than looking at the complexity of an entire assembly, complexity is considered per task within an assembly.

Number of errors



2.6 Conceptual model

To clarify the research question, “How do instruction types, under various levels of task complexity, affect performance in projection-based AR training and what is the impact of the phase of the learning curve?”, the discussed concepts were used to create a conceptual model. The effect of instruction types on performance is measured. However, how difficult a task is may influence performance; therefore, it is a moderator. Complexity, as mentioned in section 2.3, refers to the complexity of an individual task, meaning each individual task when assembling a product has a degree of complexity. This is further defined in the methodology section. The circle indicates the stage of the learning curve, which is either start-up or steady-state (Conway & Schultz, 1959). Performance measures were derived from section 2.5: number of errors, productivity and workload. The complexity consists of criteria based on Falck et al. (2017) and Samy & Elmaraghy (2010), which are operationalized in section 3.1. Figure 9 shows the conceptual model.



3. Methodology

The methodology motivates why and how the research was conducted. First, the experimental design is discussed and motivated. Second, participants and their characteristics are discussed. Third, the tools to perform the experiment are explored. Fourth, the procedure and methods of data collection are described. Fifth, the statistical methods through which analysis was done are discussed. Sixth, the validity and reliability of the experiment are motivated.

In order to answer the research question, an experimental research design was chosen to quantitatively measure the effects of different treatments within participant groups. The experiment’s results are used to measure the performance measures as described in section 2.5. The following variables were used in the experiment. Independent variables: instruction type and stage of the learning curve. Dependent variable: performance. Moderator: task complexity.

The experiment in short: Participants are asked to assemble two products which have low and high complexity tasks. The products to be assembled by participants are supported with either AR instructions or AR and photo instructions. Participants assembled using both instruction types.

3.1 Experimental design

The experiment measured relationships between variables which were subsequently analyzed through statistical methods; therefore, the research approach is quantitative. Creswell (2005) distinguished two quantitative designs: experimental and non-experimental. These quantitative designs can be either within-subject or between-subject; the former means participants are exposed to all treatments, the latter that participants are exposed to one treatment (Charness, Gneezy, & Kuhn, 2012).

This research follows a within-subject design to reduce the required number of participants. Disadvantages of this design are that it may lead to carry-over and demand effects. Advantages are that within-subject designs allow more powerful statistical tests and in some cases may be a more suitable match from a theoretical point of view (Charness et al., 2012). The within-subject design is also referred to as a repeated measures design; the terms mean the same (Shuttleworth, 2009).
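The extra statistical power of a within-subject design comes from analyzing per-participant differences: each participant contributes a score under both instruction types, so between-person variation cancels out. The sketch below illustrates this with a paired t-statistic on made-up task times; the data and variable names are illustrative, not from the experiment.

```python
# Sketch: a paired comparison as enabled by a within-subject design.
# Each participant appears in both conditions, so the test works on the
# per-participant differences rather than on two independent groups.
import math
import statistics

ar_only      = [62, 70, 55, 64, 72, 58]  # illustrative task times, AR instructions
ar_and_photo = [58, 66, 53, 60, 69, 55]  # same participants, AR + photo instructions

diffs = [a - b for a, b in zip(ar_only, ar_and_photo)]
mean_d = statistics.mean(diffs)
se = statistics.stdev(diffs) / math.sqrt(len(diffs))
t = mean_d / se  # paired t-statistic with n - 1 = 5 degrees of freedom
print(round(t, 2))
```

Because every difference here is positive and consistent, the paired statistic is large even with only six participants, which is exactly the power advantage the design offers.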


quick as possible with the least amount of errors. Therefore, the participant knows what is expected of them.

Carry-over effects are effects that transfer from one experimental condition to another (Vogt, 2015). This experiment uses a within-subject design, meaning participants are exposed to multiple treatments; therefore, carry-over effects may be present. To mitigate these effects counterbalancing is applied: half of the participants started with AR instructions, the other half with AR and photo instructions.

Moreover, to reduce the learning effect between instruction types, two products were created for this experiment. Of the participants who started with AR, half started with Product A and half with Product B. The same is true for the AR and photo instructions.

Table 1: Counterbalancing of participants

In the following sub-sections, the independent, dependent and moderator variable(s) are discussed to show how they were operationalized for this experiment.

3.1.1 Performance measures

Productivity

This research defines productivity as the task completion time, similarly to productivity as defined in section 2.5 (e.g. Wilschut et al., 2019; Bosch et al., 2017; Tang et al., 2003). Those studies measured productivity as the total amount of time it took to complete a task, where a task was an entire product; productivity in their case measures the total time per product. This research measures task types; therefore productivity is defined as the average time it took to complete a task within a product. A task consists of picking and assembling parts. Picking of parts is not considered in productivity, as no complexity could be introduced per task type for picking parts. Therefore, productivity is measured as the time it took to place the parts per task. All individual picking and assembling times were logged. Productivity was computed as the time it took to place the parts relating to a task: the time between the confirmation that parts were picked and the confirmation that the parts were placed.
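The productivity measure described above can be sketched as a small computation over the logged timestamps: for each task, the placement time is the interval between the pick confirmation and the place confirmation. The log format and timestamps below are illustrative assumptions, not the actual format produced by the AR setup.

```python
# Sketch: computing the productivity measure (placement time per task)
# from logged pick/place confirmation times. Picking time is excluded,
# as described in the text. The log records are illustrative.

logs = [  # (task_id, picked_at_seconds, placed_at_seconds)
    ("beams_low", 10.0, 32.5),
    ("pins_low", 40.0, 55.0),
    ("tubes_high", 60.0, 98.0),
]

def placement_times(records):
    """Map each task to its placement time: place confirmation minus pick confirmation."""
    return {task: placed - picked for task, picked, placed in records}

print(placement_times(logs))
```

Averaging these per-task values over the ten cycles of a product would then yield the productivity figure used in the analysis.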

Group      Start AR    Start AR+Photo    Start Prod A    Start Prod B
Group 1       x                               x
Group 2       x                                               x
Group 3                      x                x
Group 4                      x                                x
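The counterbalancing scheme above is a full crossing of two factors: which instruction type comes first and which product comes first. A minimal sketch of assigning participants to the four groups in order of application could look as follows; the group ordering and participant identifiers are illustrative assumptions.

```python
# Sketch: 2x2 counterbalancing. The four groups are the crossing of
# (instruction type first) x (product first); participants are rotated
# over the groups in order of application, as described in the text.
from itertools import cycle, product

conditions = list(product(["AR first", "AR+Photo first"],
                          ["Product A first", "Product B first"]))

def assign(participant_ids):
    """Rotate participants over the four counterbalancing groups."""
    groups = cycle(range(len(conditions)))
    return {p: conditions[next(groups)] for p in participant_ids}

print(assign(["p1", "p2", "p3", "p4", "p5"]))
```

With 20 participants this rotation yields five participants per group, matching the balanced design in Table 1.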



Number of errors

The number of errors was determined based on a photo taken at the end of each product cycle. An error was recorded when participants incorrectly oriented, placed, inserted or omitted a part; summed, these are the number of errors a participant made during a product. Ideally, errors would have been recorded per task type; however, taking a photo after each task would distract participants during assembly. Moreover, the researcher recorded errors during assembly but was interrupted or distracted in some cases, so errors that were made might have been missed.
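The error measure is simply the sum of the four error kinds observed in the end-of-cycle photo. A minimal sketch of that tally, with illustrative inspection records (the record format is an assumption, not the actual coding sheet used):

```python
# Sketch: tallying the errors of one product cycle from the photo
# inspection. The four error kinds defined in the text are summed;
# any other annotation is ignored.
from collections import Counter

def count_errors(observed):
    """Sum orientation, placement, insertion and omission errors."""
    kinds = ("orientation", "placement", "insertion", "omission")
    tally = Counter(observed)
    return sum(tally[k] for k in kinds)

cycle_observations = ["orientation", "omission", "omission"]
print(count_errors(cycle_observations))
```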

Workload

This research uses the NASA-TLX for measuring workload, as others conducting similar experiments have utilized this measure (e.g. Wilschut et al., 2019; Funk et al., 2017; Bosch et al., 2017). The NASA-TLX is a questionnaire which measures six items: “mental demands, physical demands, temporal demands, performance, effort and frustration” (NASA, 1986). What these six items entail is shown in appendix B. Participants filled out the NASA-TLX for each instruction type. They score the six items on a scale from 0-100, where 0 means no effect and 100 an extremely high effect of the respective item. Performance differs: 0 indicates good performance and 100 bad performance according to the participant.
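When the six subscale ratings are combined into one workload score without the pairwise weighting procedure, the result is the unweighted ("raw") TLX: the mean of the six 0-100 ratings. The thesis does not state whether the weighted variant was used, so the sketch below assumes the raw variant; the example ratings are made up.

```python
# Sketch: an unweighted ("raw") NASA-TLX score as the mean of the six
# subscale ratings (0-100). Performance is already oriented so that
# 100 = bad, so no inversion is needed before averaging.

def raw_tlx(ratings: dict[str, float]) -> float:
    items = ("mental", "physical", "temporal", "performance", "effort", "frustration")
    return sum(ratings[i] for i in items) / len(items)

scores = {"mental": 60, "physical": 20, "temporal": 40,
          "performance": 30, "effort": 50, "frustration": 70}
print(raw_tlx(scores))
```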

3.1.2 Instruction types

As stated in section 2.4.2, current research is focused on comparing different types of instructions with varying hardware, for example Funk et al. (2017), who compared text instructions with AR instructions. Here the aim is to measure instruction types based on one type of hardware within AR. This stems from the need for adaptability discussed in section 2.4.3, where it is suspected that differing task types may require different instructions.

This research used interactive projection-based AR to differentiate between instruction types, given the advantages listed by Van Krevelen & Poelman (2010). Projection-based AR was used instead of other forms of AR, which were not readily available at TNO and had drawbacks in terms of user comfort, tracking and required programming. Additionally, because an eye tracker was used, head-mounted displays could not be worn.


instructions or projected photo instructions. Video instructions were deemed not feasible, as they would require the participant to watch a video whilst assembling, which would increase assembly time. Moreover, participants were supplied with a side-by-side text instruction for both instruction types. These instructions were minimal (e.g. “place white block”) and did not give information regarding where to place or how to orient the part; they are therefore not considered an instruction type.

Figure 10: Operator assembling with 2d in-situ instructions

3.1.3 Task complexity

Participants were required to build two products. Each product consisted of 9 tasks (as defined in section 2.5), which are shown in appendix C: 4 task types with high and low complexity variants, and 1 task without complexity, which provided a structure to build the assembly on. The 4 task types are: placing beams, placing pins, connecting tubes and placing cogwheels. Each task consists of a number of parts. LEGO Technic parts were used for the task type design. LEGO Technic blocks are an easy way to build a product without prior training/knowledge required from the participant. Moreover, LEGO Technic contains parts which mimic an industrial assembly setting, e.g. cogwheels.


Cogwheels are used in many assemblies such as gearboxes. Others have not considered the task types of a product, but rather the product as a whole; therefore, the tasks could not be created based on earlier AR experiments.

The complexity of these tasks was derived from an adapted checklist created by utilizing aspects of Falck et al. (2017) and expert comments from TNO. The checklist was adapted to reflect the complexity of tasks within a product and is shown in Table 2. Complexities were derived from the product and process factors, as personal factors and environmental factors do not relate to the task itself; they relate to the person or environment, which cannot be used in creating complexities for individual tasks. Factors that were not included, and the reason why, are shown in appendix D.

Moreover, the two products had the same task types, which were created equally with a similar number of parts. Differentiation between product task types was based on the location of assembly and the color and orientation of parts. Complexities therefore did not differ between the two products based on the adapted checklist. Table 3 shows the tasks in order of assembly for both products. Appendix C shows each task per product in order of assembly.

Factor                      Low complexity                   High complexity                        Adapted from (section 2.3 / TNO)
1. Amount of parts          1, 2, 3                          4 or more                              Number of individual mounts/details; duration of assembly
2. Visibility of task       Visible                          Not visible                            Visibility of operations
3. Location of piece        Easily accessible, 1 position    Difficult to access, multiple options  Clarity of mounting position; ease of accessibility
4. Orientation of montage   Top/bottom, only 1 way possible  More than one way                      Expert comment (TNO)
5. Sequence of montage      Free                             Specific                               Sequence dependency
6. Orientation of piece     Orientation doesn't matter       Four possibilities                     Material characteristics; degree of accuracy

Table 2: Complexity of task types for both products



Table 3: Order of tasks per product

To determine whether complexity differed between task types, relevant attributes derived from Samy & Elmaraghy (2010) were summed. The insertion attributes were used, as the handling attributes relate to individual parts whereas the insertion attributes can be used at the task level. The mechanical and non-mechanical fastening processes were not considered, as these are not present when assembling with LEGO Technic. Tubes may be bent; however, these were pre-bent for the participant.

The identified insertion attributes were compared between low and high complexity tasks. Table 4 shows how task complexity of task type Pins was derived.

Task      Product A           Product B
Task 1    No complexities     No complexities
Task 2    Low (Beams)         Low (Beams)
Task 3    Low (Pins)          Low (Pins)


Pins (low complexity):
Checklist: criteria 1, 3, 5 and 6 are met in full; criteria 2 and 4 are met partially.
Insertion attributes score: 3.42
Factors: 1. 2 parts; 2. visibility is clear; 3. clearly visible location of piece; 4. front and back side; 5. free sequence; 6. orientation doesn't matter.

Pins (high complexity):
Checklist: criteria 1, 3 and 4 are met in full; criterion 2 is met partially.
Insertion attributes score: 4.87
Factors: 1. 5 parts; 2. some pieces have poor visibility; 3. multiple options to place the pins; 4. front/back and between beams; 5. free sequence; 6. orientation doesn't matter.

Table 4: Complexity for task type Pins

The table first shows a picture of the task; pictures from products A and B were used. Appendix E shows all task complexities. Next, the criteria from the adapted checklist are discussed. Last, the insertion attribute scores are given. Here the difference is 1.45; a higher score suggests a greater degree of complexity. The number solely illustrates that there is a numerical difference in complexity.

3.1.4 Learning curve



3.2 Participants

The selection process for this research was done through convenience sampling, meaning participants were chosen who were easily accessible to the researchers. This was done as the experiments had to be performed within a specified timespan. In total there were 20 participants, who were divided into groups as shown in Table 1. Division of participants was done based on order of application.

Recruiting of participants was done through Facebook groups, WhatsApp groups, posters and email addresses from a previous experiment conducted by TNO. When a potential participant showed interest, a document with information regarding the experiment was sent; appendix F contains this document. All participants signed a consent form and were awarded a gift card.

All participants were students from Delft University of Technology doing a Bachelor’s, Master’s or PhD. 17 participants were right-handed, 3 left-handed; 6 participants were female and 14 male. The average age of participants was 25 (std. 2.7), and the average height was 172.8 cm (std. 13.9). Participants performed the Purdue pegboard task before assembling the products: they were asked to place pins into holes with both hands at the same time, twice, for 30 seconds. The task is used to measure hand dexterity. Table 5 shows the pegboard task scores by gender.

Table 5: Pegboard trial scores by gender

Lafayette, the manufacturer of the pegboard task, indicates that on average males score 12.59 and females 13.76. The participants scored slightly below these averages.

               Pegboard trial 1    Pegboard trial 2
Male (n=14)         11.47               12.07
Female (n=6)        12.67               12.33



3.3 Materials

In order to perform the experiment several instruments were necessary. The experiment used a workbench to assemble a product on; the workbench consists of a flat surface and several bins with parts for assembly. Instructions are projected onto the workbench with a projector. A Kinect was used to measure depth, which identified movement of the participant during the assembly of tasks. A personal computer with software from OPS Solutions was used for logging productivity data. The pegboard was used at the start of the experiment to test hand dexterity. Furthermore, a booklet (appendix G) with several questionnaires and a consent form was utilized in various stages of the experiment. This booklet consists of: informed consent, participant characteristics, NASA-TLX (pegboard, Product A and Product B), eye tracker comfort, product questions and a TAM (Technology Acceptance Model). Additionally, the researcher used a protocol for carrying out the experiments (appendix H). Furthermore, an eye tracker was used to measure which regions of interest participants gazed towards. This eye tracker was provided by TNO and is created by Pupil Labs. LEGO bricks were used for the assembly of both products. All materials except the printouts were provided by TNO. In the remainder of this thesis, all hardware and software components on or around the workbench needed to perform the experiment are defined as the AR setup.

Workings of the AR system

The projection-based AR system presented the participant with text in the top left of the workspace, AR instructions on the assembly (middle) and photo instructions to the right (if applicable). A task consisted of two steps: picking parts and assembling parts. When picking, a bin would light up containing the parts for that task; when assembling, the instructions were presented. Participants were presented with two virtual buttons, a back and a forward button, which they could cover with their hand to alternate between instructions. When the participant finished building, a webcam took a photo of the finished product.
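The pick/assemble flow described above can be sketched as a tiny state machine: each task alternates between a pick step and an assemble step, and the virtual forward button confirms the current step. The states, event names and log format are illustrative assumptions, not the behavior of the actual OPS Solutions software.

```python
# Sketch: the per-task pick -> assemble flow as a minimal state machine.
# A "forward" event confirms the current step (and would be timestamped
# in the real system); a "back" event returns to the previous step.

def run_task(task_id, events):
    """Advance through pick -> assemble on 'forward' events; return the log."""
    states = ["pick", "assemble", "done"]
    state, log = 0, []
    for event in events:
        if event == "forward" and state < 2:
            log.append((task_id, states[state], "confirmed"))
            state += 1
        elif event == "back" and state > 0:
            state -= 1
    return log

print(run_task("pins_low", ["forward", "forward"]))
```

The two logged confirmations per task are exactly the pair of timestamps from which the placement-time productivity measure is computed.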


3.4 – Procedures & Data collection

3.4.1 General procedure for each participant

The experiment took place in TNO’s lab at the RoboHouse in Delft. The experiments were conducted one-by-one. After greeting the participant and seating them at the workstation, the researcher explained what was expected of the participant. Next, the participant filled out the informed consent form and participant characteristics. The participant was then asked to wear the eye tracker, which needed calibration before usage (approx. 5 minutes). Then the pegboard task was performed to determine motor skills; additionally, the researcher used the pegboard task to confirm whether the calibration of the eye tracker was correct. The participant was asked to fill out a NASA-TLX after completing the pegboard task. After the NASA-TLX, an explanation of how the AR system works was given, and the participant started building after the explanation. Participants started building with one of the two instruction types. When the first product had been built ten times, the participant filled out another NASA-TLX, after which a short break commenced. During this break the researcher switched bins for the correct LEGO parts. The next product was built ten times, after which the participant completed the NASA-TLX again. This concluded the building part of the experiment, and the booklet with questionnaires was given. After these were all completed, data from the eye tracker, quality photos and AR setup were stored. The researcher checked that none of the questionnaires had missing values. Last, the participant was given a 25-euro bol.com voucher and was thanked for his/her participation. In total each experiment took approximately 2.5 hours. This describes in short how the experiments were conducted; in appendix D the protocol for measurement is given. Appendix G shows the list of questionnaires. Appendix J details the building process of operators.

3.4.2 Data collection

Data collection was done by conducting experiments from the 9th of December until the 20th of December. Each participant performed the experiment over approximately 2.5 hours. During the first hour, participants assembled products with one of the two instruction types; during the second hour, the other instruction type was given for assembling the products. The remaining 30 minutes were used for filling out questionnaires and calibrating the eye tracker. Timeslots for experiments were 9:00 – 11:30, 13:00 – 15:30 and 17:00 – 19:30.


3.5 Data analysis

First, productivity data were remodeled to determine whether products A and B had similar average task times. A repeated-measures ANOVA showed that the means of products A and B did not significantly differ, indicating that the products were similar in terms of average task time. However, 12 out of 19 participants indicated that assembling one product helped them assemble the other. This could indicate that a carry-over effect was still present.

Next, the data were modelled to reflect product cycles per instruction type. This dataset was used for analysis. The dataset was complete for 19 out of 20 candidates; one candidate had missing values and was therefore dropped from the analysis. Productivity and the number of errors were analyzed using a general linear model repeated-measures ANOVA with Bonferroni post-hoc testing in SPSS. The repeated-measures ANOVA was chosen as it measures mean differences over three or more time points. Post-hoc testing was done to reduce the chance of finding significance where none was present.
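The analyses were run in SPSS. To illustrate the underlying computation, a one-way repeated-measures ANOVA can be sketched in Python with NumPy; the data below are hypothetical task times, not the experimental data, and the function implements the standard within-subjects sums-of-squares decomposition:

```python
import numpy as np

def rm_anova_one_way(data):
    """One-way repeated-measures ANOVA.
    data: 2D array, rows = subjects, columns = repeated conditions.
    Returns (F, df_treatment, df_error)."""
    n, k = data.shape
    grand_mean = data.mean()
    # Variability explained by the within-subject factor (the conditions)
    ss_treat = n * np.sum((data.mean(axis=0) - grand_mean) ** 2)
    # Between-subject variability, removed from the error term
    ss_subj = k * np.sum((data.mean(axis=1) - grand_mean) ** 2)
    ss_total = np.sum((data - grand_mean) ** 2)
    ss_error = ss_total - ss_treat - ss_subj
    df_treat, df_error = k - 1, (n - 1) * (k - 1)
    F = (ss_treat / df_treat) / (ss_error / df_error)
    return F, df_treat, df_error

# Hypothetical task times (seconds): 5 participants x 4 product cycles
times = np.array([
    [60, 52, 48, 46],
    [70, 61, 55, 52],
    [55, 50, 47, 45],
    [65, 58, 51, 49],
    [62, 54, 50, 47],
])
F, df1, df2 = rm_anova_one_way(times)
print(f"F({df1}, {df2}) = {F:.2f}")
```

Because each subject provides all measurements, subject-to-subject variability is partitioned out of the error term, which is what distinguishes this test from a between-subjects ANOVA.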

Productivity

The within-subject factors for the productivity repeated-measures ANOVA are shown in Table 2. Each task type consists of 760 measurements (19 participants * 10 repetitions * 2 complexities * 2 products). In total there were 3040 data points across the task types. Task 1 was excluded from analysis.

Within-subject factors
Name              Levels
Instruction_type  2
Product_Cycles    10
Complexity        2
Task_type         4

Table 2: Within-subject factors of productivity


If the assumption of normality was violated, a non-parametric Friedman test could have been conducted. However, the Friedman test ranks each data point (medians) and is therefore not suitable for this research, as the aim is to measure mean differences in operator performance per instruction type.

Second, Mauchly’s test of sphericity was examined; if the test was significant (p < 0.05), sphericity was violated. If this was the case, the degrees of freedom were corrected using the Greenhouse-Geisser correction. The Greenhouse-Geisser correction was chosen as it is the more conservative correction and thus does not indicate significance where none is present.
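SPSS computes the Greenhouse-Geisser correction internally; as an illustration of what the correction factor represents, the epsilon estimate can be sketched in Python with NumPy (hypothetical data; epsilon multiplies the degrees of freedom of the F test):

```python
import numpy as np

def greenhouse_geisser_epsilon(data):
    """Greenhouse-Geisser epsilon for one within-subject factor.
    data: rows = subjects, columns = the k repeated measurements.
    Epsilon ranges from 1/(k-1) (maximal sphericity violation) to 1
    (sphericity holds); both F-test degrees of freedom are multiplied by it."""
    k = data.shape[1]
    S = np.cov(data, rowvar=False)           # k x k sample covariance matrix
    # Orthonormal contrasts: eigenvectors of the centering matrix
    # with eigenvalue 1 span the space orthogonal to the constant vector
    centering = np.eye(k) - np.ones((k, k)) / k
    eigvals, eigvecs = np.linalg.eigh(centering)
    C = eigvecs[:, eigvals > 0.5].T          # (k-1) x k contrast matrix
    M = C @ S @ C.T
    # For symmetric M, sum(M*M) equals trace(M @ M)
    return np.trace(M) ** 2 / ((k - 1) * np.sum(M * M))

# Hypothetical, uncorrelated equal-variance measurements: epsilon near 1
rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 5))
eps = greenhouse_geisser_epsilon(scores)
print(f"epsilon = {eps:.3f}")
```

When sphericity holds the covariance structure gives epsilon near 1 and the correction leaves the degrees of freedom almost unchanged; strong violations shrink epsilon towards 1/(k-1), making the corrected test more conservative.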

Number of errors

A general linear model repeated-measures ANOVA was used for analysis. The within-subject factors are shown in Table 3. Errors consisted of 380 data points (19 participants * 10 repetitions * 2 products).

Within-subject factors
Name               Levels
Instruction_types  2
product_cycles     10

Table 3: Within-subject factors of the number of errors

Similar to productivity, significance was accepted for p smaller than 0.05. Product cycles were split similarly to productivity to analyze the stages of the learning curve. Normality was checked, sphericity was examined and the Greenhouse-Geisser correction was applied where necessary. The errors thus follow a similar statistical analysis to productivity.
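The split of product cycles into a start-up and a steady-state phase can be complemented by fitting a classic learning-curve model. The sketch below (not part of the SPSS analysis) fits Wright's power-law model T(n) = T1 · n^(-b) to hypothetical mean task times with scipy, then compares the two phases:

```python
import numpy as np
from scipy.optimize import curve_fit

def wright_curve(n, t1, b):
    """Wright's learning-curve model: time for the n-th product cycle."""
    return t1 * n ** (-b)

# Hypothetical mean task times (seconds) for product cycles 1..10,
# generated from the model itself so the fit is exact
cycles = np.arange(1, 11)
times = 120.0 * cycles ** (-0.3)

(t1_hat, b_hat), _ = curve_fit(wright_curve, cycles, times, p0=(100.0, 0.1))
print(f"T1 = {t1_hat:.1f} s, learning exponent b = {b_hat:.2f}")

# Phases as used in the analysis: start-up (cycles 1-3) vs steady state (4-10)
startup, steady = times[:3], times[3:]
print(f"start-up mean = {startup.mean():.1f} s, "
      f"steady-state mean = {steady.mean():.1f} s")
```

The fitted exponent b summarizes how quickly an operator speeds up, while the phase split used in this thesis compares mean performance before and after the steepest part of the curve.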

Workload

Workload was tested using a paired-samples t-test. The pairs consist of the variables measured after each instruction type. For example: frustration from working with AR instructions was compared with frustration from working with AR and photo instructions.
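A paired-samples t-test of this kind can be sketched with scipy; the frustration ratings below are hypothetical values on the NASA-TLX scale, not the experimental data, with each index representing the same participant under both conditions:

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical NASA-TLX frustration ratings (0-100) for the same 8 participants
frustration_ar       = np.array([55, 60, 48, 70, 62, 58, 65, 50])  # AR only
frustration_ar_photo = np.array([40, 52, 45, 55, 50, 49, 58, 47])  # AR + photo

t_stat, p_value = ttest_rel(frustration_ar, frustration_ar_photo)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Pairing by participant removes individual differences in baseline frustration from the comparison, which is why the paired test is appropriate for this within-subjects design.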

Questionnaire data


3.6 Validity and reliability
