The influence of trust in automation on the relationship between automation reliability and human performance

Academic year: 2021


Experiment with a mirror-tracing task

S2238136 Maarten de Jong


Abstract

Nowadays, technology is used more and more, both in industry and in private life. However, not every implementation of technology leads to better performance: automation needs to be reliable to improve human performance. The literature suggests that trust in automation is a possible explanation for this relationship. When workers use automation, a lack of trust in it can decrease human performance.

The research question of this study is therefore: to what degree is the relation between reliability and individual performance explained by trust in automation? To answer this question, an experiment was conducted with an online mirror-tracing task that simulates a manual factory job. The participants are assisted by an automated guideline that allows them to improve their performance, and the reliability of this guideline is manipulated in the experiment.

The results show no significant mediating effect of trust in automation on the relation between automation reliability and human performance. The relation between automation reliability and human performance itself is significant: the automation makes the participant's task easier. The manipulation, however, turned out not to be significant. Limitations, theoretical contributions and possibilities for future research are also discussed.


1. Introduction

The level of automation in production environments has increased strongly in the past decades (Kumar & Siddharthan, 1994; Pikosz & Malmqvist, 1998; Sauer, Kao & Wastell, 2012). Technology is used to assist or even replace human workers in many production environments (Mateo, Tanco & Santos, 2014).

Where in the past only repetitive, large-scale jobs were automated, the fast development of robotics nowadays also provides opportunities to automate smaller, non-repetitive jobs (Hancock et al., 2011). Due to this fast development in production automation, the technology is becoming more accessible: production robots and robotic arms are becoming cheaper and easier to implement. Automation therefore becomes more attractive for smaller manufacturing companies.

The implementation of automation technology changes the work of individual workers, who now work together with the technology. Acceptance of automation is therefore important: without acceptance, the implementation of automation can fail (Hancock et al., 2011). Automation is more likely to be accepted when a worker perceives it as useful.


Trust in automation is necessary when new automation is implemented. If trust in automation is lacking, the implementation of new technology can fail, because workers may choose to ignore the automation (Hancock et al., 2011).

The aim of this research is to find out to what degree trust in automation explains the relation between automation reliability and performance. The research question of this paper is therefore: to what degree is the relation between reliability and individual performance explained by trust in automation?

This question is answered by conducting a confirmatory experiment consisting of a mirror-tracing task that emulates a manual task and in which human performance can be measured. Testing the theory in a different context can lead to new theoretical insights and can help improve human-machine interaction, furthering our understanding of how to improve performance in practice.

The remainder of the thesis is structured as follows. Section 2 reviews the literature on the background of the study. Section 3 explains the methodology and the stepwise approach followed in the experiment. Section 4 presents the findings of the experiment. Finally, Section 5 gives concluding remarks as well as directions for future research.

2. Background


The fast development of robotics makes it possible to automate smaller jobs and lets humans work together with a robot (Hancock et al., 2011). Implementing new production technology has changed the task of human workers: nowadays they have to check and correct the product quality of machines instead of performing the production task themselves. In more advanced collaborations between humans and machines, such as in cobotics, human trust is essential (Faulring, Colgate & Peshkin, 2005).

2.1 Reliability

Automation is designed to improve human performance: it can make a job less physically and mentally demanding and can improve quality. For example, a robotic arm simplifies the task of placing a windshield on a car. A worker still has to control the robot and apply the glue, but the automation takes over the lifting and can place the windshield more precisely.

Automation has to be reliable; unreliable automation decreases performance, as multiple studies show. Under decreasing system reliability, human performance deteriorates across a range of work environments, for instance in flight simulation (e.g. Bailey & Scerbo, 2007), in-vehicle navigation (Ma & Kaber, 2007), a waste processing facility (Wiegmann et al., 2001), and military applications (Rovira et al., 2007). Lee and See (2004) likewise state that well-implemented, reliable automation should lead to an increase in performance.


Furthermore, automation is trusted sooner when workload is high. Parasuraman et al. (1993) found no relation between reliability and performance, but this finding is outnumbered by studies that did find one.

Few studies have examined system reliability in relation to trust. Moray et al. (2000) investigated the impact of reliability with an adaptive system. The task in this study involved managing a central heating simulation, aided by a support system that helped diagnose faults and choose repair actions. The reliability of this support system varied from 70% to 100%. Operators performed better under the automated control mode than under the manual control mode when the system was totally reliable, and worse when system reliability was low.

The relation between automation reliability and performance is often discussed in the literature: automation with high reliability increases performance. Several authors state that trust in automation is a possible explanation for this relation. Therefore, trust in automation is investigated further.

2.2 Trust in automation


Hoffman, Johnson, Bradshaw and Underbrink (2013) wrote a paper about trust in automation in which they emphasize the dynamics of trust. They consider different kinds of trust and different modes of operation in which trust dynamics play a role, and state that trust is essential for workers to achieve the highest performance when working with automation. Muir (1994) built a model that defines human trust in machines and specifies how trust changes with experience on a system, providing a framework for experimental research on trust and human intervention in automated systems. Muir found that when people have high trust, fewer interventions are made and performance then equals the automation's reliability. Parasuraman, Sheridan and Wickens (2008) conclude that mental workload and trust in automation are related and are valuable for understanding and predicting human-machine performance. Dzindolet et al. (2003) conducted an experiment to explore the relation between automation reliability, trust and reliance; they conclude that trust in automation increases when humans know the algorithm behind the automation. An automation bias is also found in the literature. For example, Dijkstra, Liebrand and Timminga (1998) found that students judged advice from an automated expert system to be more rational and objective than the same advice from a human advisor. However, Lerch et al. (1997) found greater trust in automated systems than in human advisors only when the automation was an expert system and the human advisor was a novice; they did not find that participants were more likely to trust an automated expert system than a human expert advisor.


This relation is contradicted by Rovira et al. (2007), who showed an effect only on performance and use of automation, not on trust. A similar observation is made by Manzey, Reichenbach and Onnasch (2012). Multiple studies have been performed on trust in automation, but the role of trust as a mediator between automation reliability and human performance is not yet clear. Moreover, using an experiment to investigate to what degree trust in automation explains the relation between automation reliability and human performance is new in the literature.

2.3 Conceptual model

The literature review shows that there is a relation between automation reliability and human performance, and multiple authors suggest that trust in automation influences this relationship. Trust and reliability both influence reliance. Workers who rely on automation that helps improve performance will increase their own performance. When trust is lacking, however, system suggestions can be ignored and human performance is then not optimal; performance might even degrade because the worker has more information to process and evaluate.


3. Methodology

3.1 Sample description

In this study a student sample is used, because a high number of participants could be reached inexpensively and in a short time period. The participants were reached mostly by e-mail and social media. A sample of 96 respondents was taken for the experiment; 54.2% were male and 45.8% female. The age of the participants ranges from 18 to 37 years, with a mean of 23.9 years. Of the participants, 76% were Dutch, 8.3% German and the remaining 15.6% had 10 other nationalities. Most participants had completed a bachelor's degree (67%), 11% a master's degree, 18% high school and 4% another education.

3.2 Experimental design

This research and experiment are built to explore whether trust in automation has an effect on the relationship between automation reliability and performance.

Figure: Conceptual model (automation reliability → human performance, with trust in automation as mediator)

The experiment is designed between subjects; the manipulated variable is automation reliability. This means that the participants are randomly assigned to one of three groups, each with a different level of automation reliability. Human performance and trust in automation are measured per participant.

Age, gender, nationality and level of education are also measured. These variables can influence performance and must therefore be taken into account when the effect of trust in automation is investigated.

3.3 Experimental task


Figure 3.1: Example of the mirror-tracing task


3.4 Measures

The hypothesis is that trust in automation mediates the relation between automation reliability and human performance; therefore, a mediation analysis is performed. A common way to measure a mediation effect is the framework of Baron and Kenny (1986). In this research, human performance is the dependent variable. It is measured by the score per trial, which is based on the ratio of the line drawn inside versus outside the target line by the participant.
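The Baron and Kenny steps amount to a handful of least-squares regressions. The sketch below is illustrative only: it uses synthetic data in place of the experimental measurements, and the variable names and effect sizes are assumptions, not values from this study.

```python
import numpy as np

def ols_slope(x, y):
    """Slope of y on x from a least-squares fit with intercept."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def baron_kenny_paths(reliability, trust, performance):
    """Baron & Kenny path estimates:
    a  : reliability -> trust
    b  : trust -> performance, controlling for reliability
    c' : reliability -> performance, controlling for trust
    """
    a = ols_slope(reliability, trust)
    X = np.column_stack([np.ones_like(reliability), reliability, trust])
    beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
    c_prime, b = beta[1], beta[2]
    return a, b, c_prime

# Synthetic illustration: trust transmits part of reliability's effect.
rng = np.random.default_rng(0)
rel = rng.choice([0.25, 0.5, 1.0], size=500)     # manipulated reliability
trust = 2.0 * rel + rng.normal(0.0, 0.3, 500)    # a-path with noise
perf = 0.5 * rel + 0.3 * trust + rng.normal(0.0, 0.2, 500)
a, b, c_prime = baron_kenny_paths(rel, trust, perf)
```

Mediation is suggested when both a and b are significant and c' shrinks relative to the total effect; in this thesis's data the a- and b-paths are not significant.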

Automation reliability is manipulated in this experiment and is the independent variable; it differs between the three experimental conditions. In the first condition the reliability is 100%: a dotted line in the drawing canvas helps the participant make a better drawing. The dotted line consists of a fixed number of black squares, and if a participant follows it perfectly, a high score can be reached. In the second condition the automation reliability is 50%, meaning that the dotted line covers only 50% of the drawing: when the participant starts drawing, the dotted line appears, but once 50% of the drawing is covered it disappears and the participant has to finish the drawing manually by looking at the upper canvas. In the third condition the automation reliability is 25%; the dotted line covers only a small part of the drawing, so the participant has to draw most of the line by looking at the upper canvas. Figure 3.3 visualizes the length of the line covered by the dotted line at each reliability level.

Figure 3.3: Automation reliability in the mirror-tracing task
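The reliability manipulation amounts to showing only a prefix of the drawing path as a dotted guideline. A minimal sketch, assuming the path is stored as a list of coordinates; the function name and representation are hypothetical, not taken from the actual experiment software.

```python
def guideline_points(path, reliability):
    """Return the prefix of the drawing path covered by the dotted
    guideline; `reliability` is the covered fraction (1.0, 0.5, 0.25)."""
    n_covered = round(len(path) * reliability)
    return path[:n_covered]

# A toy 100-point path: with 50% reliability only the first half is shown.
path = [(x, x * x) for x in range(100)]
half = guideline_points(path, 0.5)
```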


The expected mediating variable, trust in automation, is measured using a survey before and after the experiment. The participants rate statements about their trust in automation from 1 (strongly disagree) to 7 (strongly agree). Table 3.1 shows some example statements; all statements are shown in Appendix A. The questions asked after the experiment are used in the analysis.

Table 3.1: Example statements.
I trust automated systems.
Automated systems can make better decisions than humans.
Robots can replace humans in factories.
I want to perform well in any task I do.
I am highly motivated to succeed.
I felt responsible for creating a good drawing.
The task was important.
Automated systems can help humans to improve their performance.
I trusted the black squares that it performed the right drawing.

Participants also answer questions about the confounding variables: age, gender, level of education, nationality and input device. Finally, motivation and experience are measured. These measures are used to check the randomization of the three experimental groups, to prevent one group from unknowingly consisting of participants with low motivation or high experience.

3.5 Procedure


After these statements, the participants are informed about the remaining steps of the experiment, and each participant is assigned to one of the three reliability levels. A video is shown to ensure full understanding of the experiment, after which the participant can complete a practice round. At this point the actual experiment starts and the participant has to make four drawings. Afterwards, participants rate statements about their trust in the automation, the perceived usefulness of the automation and their performance. Finally, the participant is debriefed and thanked for participating in the experiment.

3.6 Validity

For the conclusions of this research to be generalizable, the research must be valid. Multiple forms of validity need to be considered.

Internal validity ensures that the observed change in human performance is related to the change in automation reliability. In this research, a manipulation check is performed and the confounding variables are measured. Confounding variables can affect human performance, so it is important to check their effects.

Construct validity relates to the extent to which an observation measures the concept it is intended to measure (Karlsson, 2009). This is covered by the design of the experiment: the three experimental conditions are built up identically. The participants of conditions 1, 2 and 3 all follow exactly the same steps, and all conditions have the same length. The effects of confounding variables are also analyzed, to be sure that the observed measures result from the manipulation.


A small sample size makes generalization of conclusions difficult. The aim of this research was to have at least 30 participants per condition, so 90 participants in total. The sample also needs to be appropriate for the research; in this case, the random sample of people between 20 and 30 years old is appropriate. No participants had prior knowledge about the experiment or its goals.

Finally, criterion validity is addressed. All variables are reported and measured during the experiment and thus at the same time, so no time effects can occur. Also, the score per trial is calculated in the same way for each trial in each condition; no measurement mistakes or rounding errors can occur in this experiment.

4. Results

4.1 Data preparation

In total, 96 participants took part in this experiment. Four participants did not finish the experiment and two did not agree to the consent form. Of the remaining 90 participants, 8 outliers were identified: some very low scores, and participants who did not rate the statements correctly. This means the data of 82 participants is used in the remainder of this study.

4.2 Randomization check


A one-way ANOVA is performed on experience and automation reliability; this relation is not significant (F=2.424, p=.104), which means that the number of participants with experience of mirror-tracing tasks is similar in each group. A one-way ANOVA between the motivation of the participants and automation reliability is also performed (F=1.534, p=.225); the p-value of .225 means that the motivation of participants does not differ significantly between the three groups. Table 4.1 shows the means and standard deviations per group. From this analysis it can be concluded that the randomization led to three comparable experimental groups.
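A randomization check of this kind can be sketched with SciPy's one-way ANOVA. The ratings below are synthetic Likert-style placeholders, not the real data; the group sizes and means are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder motivation ratings for the three reliability groups,
# drawn from the same distribution, as successful randomization implies.
group1 = rng.normal(5.8, 1.0, 30)
group2 = rng.normal(5.8, 1.0, 30)
group3 = rng.normal(5.8, 1.0, 30)
f_stat, p_value = stats.f_oneway(group1, group2, group3)
# A non-significant result (p above .05) means no detectable difference
# in motivation between the groups.
```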

Table 4.1: Randomization check, means and standard deviations per group.

                       M     SD
Experience group 1     4.63  1.59
Motivation group 1     5.76  1.04
Experience group 2     4.24  1.75
Motivation group 2     5.83  0.98
Experience group 3     4.67  1.53
Motivation group 3     5.89  0.94

4.3 Manipulation check

After the randomization check, a manipulation check is performed, to check whether participants used the automation and found it useful. After the four trials, participants rated statements on the usefulness of the automation from 1 (strongly disagree) to 7 (strongly agree). Here, too, a one-way ANOVA is performed.

The participants experienced the dotted line as useful. There are differences in perceived usefulness of the automation between the groups, as can be seen in table 4.2: the lower the automation reliability, the lower the perceived usefulness of the automation. These differences are, however, not significant (p=.102).

Between group 1 and group 3 the difference in perceived usefulness is almost 1 point on a 1-7 scale, a decrease of almost 15%. Although the means in table 4.2 suggest that the manipulation did have an effect, the failed manipulation check lowers the value of the relations found in the remainder of this research.

Table 4.2: Perceived usefulness per group.

Perceived usefulness          M     SD
Group 1, 100% reliability     5.63  1.54
Group 2, 50% reliability      5.24  1.75
Group 3, 25% reliability      4.69  1.94

4.4 Confounding variables

In the data analysis, the confounding variables gender, education, nationality, experience, motivation, trust and input device are also analyzed, to prevent these variables from having an unknown effect on the outcome of this research. For each of these variables the correlation with human performance is calculated.

The correlations between gender and human performance (r = 0.278, p = .376) and between motivation and human performance (r = 0.318, p = .424) are positive, but not significant.


Furthermore, the correlations between education and human performance (r = 0.098, p = .288) and between nationality and human performance (r = 0.205, p = .228) are not significant.

The correlations between experience and human performance (r = -0.196, p = .437) and between input device and human performance (r = 0.106, p = .251) are also not significant.

Finally, the correlation between trust in automation and human performance is examined. Trust in automation is measured with questions before and after the mirror-tracing task; the questions and their means are shown in Appendix B. Trust in automation has no significant correlation with human performance (r=0.104, p=.398).

None of the confounding variables has a significant influence on human performance. From these findings it can be concluded that gender, education, nationality, experience, motivation, trust and input device (external mouse/touchpad) have no significant effect on performance.
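Each confounder screen above is a Pearson correlation against the performance score. A minimal SciPy sketch with synthetic data; the variable names and distributions are assumptions, not the thesis data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
performance = rng.normal(0.85, 0.03, 82)   # scores of 82 retained participants
confounders = {
    "motivation": rng.normal(5.8, 1.0, 82),
    "experience": rng.normal(4.5, 1.6, 82),
}
results = {}
for name, values in confounders.items():
    r, p = stats.pearsonr(values, performance)
    results[name] = (r, p)
# Independently drawn confounders should correlate only weakly
# with performance, mirroring the non-significant results above.
```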

4.5 Correlations


The positive relation between automation reliability and performance is also visible in figure 4.1: the average score of the participants with 100% reliability is higher than that of both other groups. Remarkably, the average score of group 2 is almost equal to that of group 3, even though the automation reliability decreases from 50% to 25%.

Another remarkable finding is the decreasing learning curve in conditions 2 and 3. Figure 4.2 displays the scores per trial. In condition 1 the scores increase with the number of trials, while in conditions 2 and 3 the best scores are achieved in the first trial. This may be explained by the fact that participants in conditions 2 and 3 lose the helping dotted line during the trial; this switch may explain the decreasing learning curve.

Figure 4.1: Average score per experiment (score vs. reliability; Experiment 1: 100%, Experiment 2: 50%, Experiment 3: 25%)

Table 4.4: Correlation values (* = 0.01 < p < 0.05).

                            (1)    (2)    (3)
(1) Automation reliability   1
(2) Human performance       .359*   1
(3) Trust in automation     .347   .423    1


4.6 Hypothesis testing

In this section, the mediating effect of trust in automation between automation reliability and performance is examined by regression. The regression of trust in automation on automation reliability is not significant (B=-.007, t=-.984, p=.351). The regression of performance on trust in automation, while controlling for automation reliability, is also not significant (B=-.038, t=-.671, p=.298). The last relation, between automation reliability and human performance, is significant (B=.104, t=.854, p=.039). This positive relation can be explained by the fact that the automation simplifies the task: human performance increases when a 100% reliable guideline is used.
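The indirect (mediated) effect implied by these regressions can also be tested directly with a Sobel test on the product of the a- and b-paths. The sketch below plugs in the coefficients reported above, recovering each standard error as B/t; this is an illustrative re-check under those assumptions, not an analysis performed in the thesis.

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel test statistic for the indirect effect a*b."""
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Path estimates from the reported regressions; standard errors are
# recovered from the reported B and t values (SE = B / t).
a, se_a = -0.007, -0.007 / -0.984
b, se_b = -0.038, -0.038 / -0.671
z = sobel_z(a, se_a, b, se_b)
# |z| stays well below 1.96, consistent with no significant
# indirect effect of reliability on performance through trust.
```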

This analysis shows that trust in automation has no (partial) mediating effect on the relation between automation reliability and performance. Therefore, it can be concluded that the conceptual model has to be adapted to the model in figure 4.3.

Figure 4.2: Mirror-tracing score (percentage in-line) per trial


Figure 4.3 shows that only the direct relation between automation reliability and performance is significant.

5. Discussion

5.1 Interpretation

The randomization check shows that the experimental groups are comparable with respect to experience and motivation. The confounding variables gender, education, nationality, experience, motivation, trust and input device were analyzed to prevent them from having an unknown effect on the outcome of this research; they do not. The manipulation check failed: the manipulation of automation reliability between the three groups turned out not to be big enough. This also shows in the results of the 50% and 25% reliability groups, which scored almost the same.

The results show that participants with 100% reliable automation scored better than participants with lower automation reliability. The data analysis also shows that the relation between automation reliability and human performance is significant. The influence of trust on this relation is not proven; therefore it can be concluded that trust in automation does not mediate the relation between automation reliability and human performance.

Figure 4.3: Adjusted conceptual model

5.2 Limitations

A major limitation of this research is that the manipulation check turned out not to be significant: although the differences between the three experimental groups were substantial, they were not significant (p=.102). This implies that the three levels of automation reliability should be designed such that the differences are significant. The lowest automation reliability in this research was 25%; in future research this can be lowered. The number of experimental groups is also a limitation: this research used three groups with different automation reliability. Using more groups, for instance seven, each with a different automation reliability, would strengthen the results. The lowest automation reliability could then be set to 0%, i.e. no automation. This would also allow further investigation of the statement of Wickens and Dixon (2007) that below 70% reliability automation has no positive influence on human performance.

5.3 Theoretical implications

Participants with 100% reliable automation scored better than participants with lower automation reliability, while almost no differences occur between the groups with 50% and 25% reliability. This can be explained by the statement of Wickens and Dixon (2007) that below 70% reliability automation has no positive influence on human performance. Both groups (25% and 50%) are below this 70% threshold, which may explain why they scored almost equally.


No significant effect of trust in automation is found on the relation between automation reliability and human performance. The relation between automation reliability and human performance itself is significant, which is in line with the findings of Lee and See (2004), who stated that well-implemented, reliable automation should lead to an increase in human performance.

Hoff and Bashir (2015) defined three kinds of trust: dispositional trust, situational trust and learned trust. Dispositional trust is measured before the experiment; trust is measured again afterwards, and a difference would indicate that participants gained trust, which can be seen as learned trust in the model of Hoff and Bashir (2015). In this research no learned trust is found. A possible explanation is that the Likert scale used (1-7) was not sensitive enough: most participants rated statements around 5.
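Whether learned trust appeared can be checked with a paired comparison of the pre- and post-task trust ratings of the same participants. A sketch with synthetic ratings; the means and spreads are placeholders, not the thesis data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical 1-7 trust ratings before and after the task for the
# same 82 participants, with no built-in systematic gain.
pre = rng.normal(5.3, 0.8, 82)
post = pre + rng.normal(0.0, 0.4, 82)
t_stat, p_value = stats.ttest_rel(post, pre)
# A non-significant paired difference mirrors the "no learned trust"
# finding; a significant positive difference would indicate gained trust.
```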

5.4 Methodological implications

In future research the manipulation of the automation reliability levels should be strengthened, as the manipulation check in this research failed. This research simulated the level of reliability with a helping dotted line that was visible for a percentage of the line equal to the reliability percentage; with 100% reliability the whole line was visible. If this experiment is repeated, a level with 0% reliability can be added to strengthen the manipulation.


An advantage of a scale from 0-100% is that participants need to think more about their score and every score is possible. A disadvantage is that the questionnaire takes more time.

5.5 Managerial implications

This study contributes to the understanding of the relation between automation reliability and performance. The findings can help managers when implementing automation in their production process. Trust in automation is less important than automation reliability: managers should aim for automation with the highest possible reliability, as this increases human performance the most.

6. Conclusion

6.1 Summary

An experiment was conducted to investigate whether the relation between automation reliability and human performance is explained by trust. The experiment simulated a manual factory job in which automation reliability was manipulated at three levels. The manipulation was unsuccessful, although the experienced usefulness of the automation did differ between the experimental groups. Because of the failed manipulation, the results of this research are limited.


6.2 Future research


7. References

Bailey, N. R., & Scerbo, M. W. (2007). Automation-induced complacency for monitoring highly reliable systems: The role of task complexity, system experience, and operator trust. Theoretical Issues in Ergonomics Science, 8(4), 321-348.

Dijkstra, J., Liebrand, W., & Timminga, E. (1998). Persuasiveness of expert systems. Behaviour & Information Technology, 17(3), 155-163.

Dzindolet, M. T., et al. (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies, 58(6), 697-718.

Faulring, E. L., Colgate, J. E., & Peshkin, M. A. (2005). High performance cobotics. In Proceedings of the 9th International Conference on Rehabilitation Robotics (ICORR 2005) (pp. 143-148). IEEE.

Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., De Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517-527.

Hoff, K. A., & Bashir, M. (2015). Trust in automation integrating empirical evidence on factors that influence trust. Human Factors: The Journal of the Human Factors and Ergonomics Society, 57(3), 407-434.

Hoffman, R. R., Johnson, M., Bradshaw, J. M., & Underbrink, A. (2013). Trust in automation. IEEE Intelligent Systems, 28(1), 84-88.


Kumar, N., & Siddharthan, N. S. (1994). Technology, firm size and export behavior in developing countries: The case of Indian enterprises. The Journal of Development Studies, 31(2).

Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society.

Ma, R., & Kaber, D. B. (2007). Effects of in-vehicle navigation assistance and performance on driver trust and vehicle control. International Journal of Industrial Ergonomics, 37(8), 665-673.

Manzey, D., Reichenbach, J., & Onnasch, L. (2012). Human performance consequences of automated decision aids: The impact of degree of automation and system experience. Journal of Cognitive Engineering and Decision Making, 6(1), 57-87.

Mateo, R., Tanco, M., & Santos, J. (2014). Less Expert workers and customer complaints: automotive case study. Human Factors and Ergonomics in Manufacturing & Service Industries, 24(4), 444-453.

Moray, N., Inagaki, T., & Itoh, M. (2000). Adaptive automation, trust, and self-confidence in fault management of time-critical tasks. Journal of Experimental Psychology: Applied, 6(1), 44.

Muir, B. M. (1994). Trust in automation: Part I. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics, 37(11), 1905-1922.


Pikosz, P. & Malmqvist, J. (1998). A comparative study of engineering change management in three Swedish companies. Proceedings of ASME DETC’98, 9006.

Rovira, E., McGarry, K., & Parasuraman, R. (2007). Effects of imperfect automation on decision making in a simulated command and control task. Human Factors, 49(1), 76-87.

Sauer, J., Kao, C. S., & Wastell, D. (2012). A comparison of adaptive and adaptable automation under different levels of environmental stress. Ergonomics, 55(8), 840-853.

Wickens, C. D., & Dixon, S. R. (2007). The benefits of imperfect diagnostic automation: A synthesis of the literature. Theoretical Issues in Ergonomics Science, 8(3), 201-212.


Appendices

Appendix A.

Questionnaire, statements rated from 1 (strongly disagree) to 7 (strongly agree).

After task:
The task was mentally demanding.
The task was difficult.
The task was physically demanding.
I was successful in performing the task.
I am satisfied with my performance.
I worked hard to accomplish this level of performance.
I felt insecure in performing the task.
I experienced stress during the task.
I concentrated during the task.
I understood the goal of the task.
I wanted to perform well.
I felt in control of the outcome of the drawing.
I felt responsible for creating a good drawing.
The task was important.
Automated systems can help humans to improve their performance.
I trusted the black squares that it performed the right drawing.
I used the black squares to create the drawing.
The black squares were helpful.
I looked at the upper panel the majority of the time.

Before task:
I trust automated systems.
Automated systems can make better decisions than humans.
Robots can replace humans in factories.


Appendix B.

Questions related to trust, mean rating per group.

Statement                                                          Group 1  Group 2  Group 3
Automated systems can help humans to improve their performance.    5.4      5.3      5.4
I trust automated systems.                                         5.3      5.2      5.2
Automated systems can make better decisions than humans.           5.6      5.4      5.5
Robots can replace humans in factories.                            5.3      5.1      5.1
I trusted the black squares that it performed the right drawing.   5.5      5.3      5.2
I used the black squares to create the drawing.                    5.7      5.6      5.4
The black squares were helpful.                                    5.8      5.4      5.4
