
Development of Human-Robot Trust

Trusting robotic co-workers

(Final version)

Master's Thesis - MSc Business Administration, Digital Business track
Author: N. A. Scheijbeler (10658165)

Thesis supervisor: dr. D. van der Meulen
Second supervisor: prof. dr. P.J. van Baalen

Amsterdam Business School - University of Amsterdam
23 June 2017


Statement of Originality

This document is written by Student Nina Scheijbeler who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Table of Contents

Abstract
1. Introduction
2. Theoretical framework
   2.1 Human-Robot Interaction
   2.2 Human-Robot collaboration
   2.3 Human-Robot Trust
   2.4 Perceived Intelligence
       2.4.1 Functionality
       2.4.2 Interactivity
   2.5 Negative Attitude towards Robots
   2.6 Training
   2.7 Conceptual model
3. Methodology
   3.1 Sample
   3.2 Survey design
       3.2.1 Training
       3.2.2 Functionality
       3.2.3 Interactivity
       3.2.4 Negative Attitude
       3.2.5 Human-Robot Trust
   3.3 Statistical analysis
4. Results
   4.1 Preliminary analyses
       4.1.1 Reliability tests
       4.1.2 Distribution of data
       4.1.3 Descriptive statistics
   4.2 Construct analysis
       4.2.1 Correlational analysis
       4.2.2 Regression analysis
       4.2.3 Interaction effect in Regression analysis
   4.3 Different types of Robots
5. Discussion
6. Conclusion
   6.1 Contributions to Theory
   6.2 Contributions to management practice
   6.3 Limitations
   6.4 Further research
References
Appendices
   Appendix A: Online survey questions
   Appendix B: Additional SPSS outputs

List of Tables

Table 1: Overview of a selection of factors influencing Human-Robot Trust identified in prior research
Table 2: Tests of normality of data
Table 3: Descriptive statistics of Human-Robot Trust, Functionality, Interactivity, Attitude towards Robots and Amount of Robot Training
Table 4: Correlation matrix of Functionality, Interactivity, Human-Robot Trust, Attitude towards Robots, Robot Training, Age, Gender and Experience with Robots, including means, standard deviations and correlations
Table 5: Hierarchical multiple regression model of Human-Robot Trust – Steps 1 & 2
Table 6: Hierarchical multiple regression model of Human-Robot Trust – Step 3
Table 7: Hierarchical multiple regression model of Human-Robot Trust – Step 4
Table 8: Hierarchical multiple regression model of Human-Robot Trust – Step 5
Table 9: One-way ANOVA – different types of robots

List of Figures

Figure 1: Conceptual model
Figure 2: Interaction effect of Interactivity and Negative Attitude towards Robots


Abstract

In the future, humans and robots will work alongside each other more frequently. To make the interaction and collaboration between humans and robots as efficient as possible, it is important that human workers trust their robotic 'co-worker'.

This study contributes to the examination of the development of Human-Robot Trust by investigating the effect of two dimensions of perceived intelligence, the Functionality and Interactivity of a robot, on the level of Human-Robot Trust. Additionally, two possible moderators of this effect were investigated: Negative Attitude towards Robots and Robot Training. This quantitative, survey-based study sampled employees who currently work together with a robot (n = 185). The hypotheses were tested using regression analyses, which indicated that only the Functionality dimension of perceived intelligence significantly and positively influenced the level of trust employees had towards the robot they worked with. The Interactivity dimension did not show a significant effect on Human-Robot Trust, and neither moderator elicited a significant moderation effect. A significant negative direct effect was found for Negative Attitude towards Robots on Human-Robot Trust.

The results confirmed that the perceived functioning of a robot has a significant positive effect on the development of Human-Robot Trust, and that the negative attitudes humans hold towards robots in general significantly reduce Human-Robot Trust. These findings should be taken into account when designing robots that are built to work together with human beings, as doing so will increase human trust towards robots.

Key words: Human-Robot Interaction; Human-Robot Trust; Perceived intelligence; Functionality; Interactivity; Negative Attitude towards Robots; Robot Training.


1. Introduction

Researchers have been fascinated by the possibilities of Robot-Robot Interaction and Human-Robot Interaction since the very beginning of biologically and humanly inspired robots (Fong, Nourbakhsh, & Dautenhahn, 2003). The word 'robot' was used for the first time in 1920, and comes from the Slavic word for 'work', 'robota'. Robots are nowadays no longer used only in technical working environments, but are increasingly entering the social sphere as well, as their capabilities grow. These robots, called social robots, can autonomously interact with humans (Fong et al., 2003). The world is entering a new age of automation, in which machines can match or even outperform human beings in certain tasks (McKinsey, 2017). Companies are starting to use robots not only in their production processes but also in a social context (Mathur & Reichling, 2015). Robots are entering our daily life, and within the next few years the market for (social) robots will grow substantially: over 300,000 service robots will be installed in professional environments by 2019, with a value of around $23 billion, and within the personal and domestic market 42 million service robots are expected to be sold in this period (International Federation of Robotics, 2016). According to McKinsey (2017), humans will continue to work alongside robots to create growth in GDP per capita. The people that robots will interact with are likely not to be technical experts in the field of robotics; they will use casual, intuitive approaches to the interaction. Besides that, they will need to acquire a new set of skills to engage more with robotics in their workplace (McKinsey, 2017). For the effective integration of robotics into the daily lives of humans, the interaction between humans and robots needs to be investigated (Hancock & Warm, 1989; Mathur & Reichling, 2015).

Research conducted within the field of Human-Robot Interaction and Human-Robot Collaboration is limited. The recent and expected growth in robot development makes this topic increasingly important and relevant to investigate. Explorative studies show that Human-Robot Interaction is influenced by the perceptions, attitudes and feelings of humans towards robots (Breazeal, 2004; Dautenhahn, Woods, Kaouri, Walters, Koay, & Werry, 2005; Fong et al., 2003). For human acceptance of robots and human usage of robotics, the construct of trust towards robotics is critical (Hancock, Billings, Schaefer, Chen, de Visser, & Parasuraman, 2011; Yagoda & Gillan, 2012). As robots are becoming integrated team members in companies, the development of mutual trust between robot and human is a big challenge (Desai, Stubbs, Steinfeld, & Yanco, 2009; Groom & Nass, 2007). A human's trust in a robotic team member is essential for a functional relationship between the two to be effective (Schaefer, 2013).

By letting humans and robots work together, the efficiency and performance of an organization can be increased; the capabilities of the two groups of 'employees' can complement each other (Bortot, Born, & Bengler, 2013). For effective Human-Robot Collaboration, robots need to portray certain characteristics to qualify as good and trustworthy team members (Chen & Barnes, 2014; Freedy, de Visser, Weltman, & Coeyman, 2007; Groom & Nass, 2007; Hancock et al., 2011; Parasuraman & Riley, 1997). A robot that is not trusted cannot help anyone (Templeton, 2017).

Various studies have identified factors that influence a human's level of trust towards a robot. The current study explores the construct of trust in Human-Robot Collaboration by looking into two dimensions of the perceived intelligence of a robot. People judge the performance of an agent not based on its actual intelligence, but on its perceived level of intelligence (Koda & Maes, 1996). To develop trust towards robotic team members, it is important that humans consider these robotic team members to be intelligent, and thus competent (Oleson, Billings, Kocsis, Chen, & Hancock, 2011; Sanders, Oleson, Billings, Chen, & Hancock, 2011). An empirical investigation is conducted of the effect of two dimensions of perceived intelligence, Functionality and Interactivity, on Human-Robot Trust. While other studies have looked into the relation between the perceived intelligence of robots and Human-Robot Trust, to date most reviews have been qualitative and descriptive (Hancock et al., 2011). The current study quantifies the effect of the two above-mentioned dimensions of perceived intelligence on the construct of Human-Robot Trust. This is done by means of a cross-sectional survey among employees that interact with robots in their workplace on a daily basis. Up to this point, research into Human-Robot Trust has not actually studied the trust of humans that work together with robots. Following from this, the research question that will be investigated is:

What is the effect of the level of Functionality and Interactivity (two dimensions of perceived intelligence) of a robot on the level of Trust towards a Robotic team member?

Additionally, two sub-questions are posed: 1) How does Human-Robot Training moderate this effect of the two dimensions of perceived intelligence on Human-Robot Trust? 2) How does the Negative Attitude of humans towards Robots moderate this effect of the two dimensions of perceived intelligence on Human-Robot Trust?

Investigating the construct of trust in the interaction between humans and robots will further our understanding of the attitudes and perceptions of humans toward robots and of the way in which robots can best be designed. Insights for designing interactions will be gained, in order to achieve the highest efficiency and performance from work teams consisting of humans and robots (Stock, 2016). Besides this, trust is also an issue that can be mitigated by training human employees to work together with robots better (Oleson et al., 2011). Trust is a critical component of the performance of teams consisting of humans and robots (Groom & Nass, 2007). Human-Robot Collaboration will increase in the coming years; designing this robotic workforce in the most functional way will therefore be of importance (Groom & Nass, 2007).

The current research contributes to the existing literature on HRI by expanding the knowledge of the psychological mechanisms related to the development of trust towards robots in Human-Robot interaction and collaboration. In addition, it can indicate a trend in Human-Robot Trust research and help to identify areas for future research.

In the following chapters of this paper, previous research on the topics of Human-Robot Interaction, Human-Robot Collaboration, Human-Robot Trust and the perceived intelligence of robots will be presented. The research field will be specified, and the research questions and hypotheses will be discussed together with the academic and practical contributions of the investigation. Next, the method and research design will be explained, after which the results of the investigation are analysed and discussed. Finally, the paper concludes with the findings, limitations and applications of the results of the study.


2. Theoretical framework

In this chapter, the existing literature within the fields of Human-Robot Interaction (HRI), Human-Robot Collaboration (HRC), Human-Robot Trust, perceived intelligence, Negative Attitude towards Robots and Robot Training is discussed and evaluated. A conceptual model based on these studies will be illustrated, and the hypotheses of this investigation will be described together with the research questions to be answered.

2.1 Human-Robot Interaction

As robots enter the daily lives of people, natural Human-Robot interaction is of importance (Park, Kim, & Chung, 2010). Research into the interactions of humans with robots is limited; the field is still in its infancy. Some studies have been performed over the years, but there is no clear foundation of knowledge available on this topic yet. It is a research field that includes psychology, cognitive science, the social sciences, artificial intelligence, computer science, robotics and engineering (Dautenhahn, 2007). Literature on human-computer interaction and human-human interaction is related; however, human-robot interaction is different from both (Dautenhahn, 2007).

The field of HRI has two dimensions. Firstly, the robot-centric perspective, which is related to how robots need to be designed, for instance what level of intelligence to give them. This perspective is complemented by the human-centric perspective, which defines how humans respond to robots. The current study focuses on the human-centric perspective of Human-Robot interaction. People's perceptions, expectations, attitudes, acceptance, comfort and trust towards robots are essential for the integration of robots into our lives, and robot designers should take these human-centric factors into account (Stock, 2016).

A clear definition of a robot is needed to be unambiguous about the difference between automation and robotics. For the current research, the following definition of a robot is used: "A machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer" (Oxford English Dictionary, 2017). Besides this definition, it is important to distinguish a robot from a computer, to make the definition of a robot more explicit. A computer is defined as: "An electronic device which is capable of receiving information (data) in a particular form and of performing a sequence of operations in accordance with a predetermined but variable set of procedural instructions (program) to produce a result in the form of information or signals" (Oxford English Dictionary, 2017).

2.2 Human-Robot collaboration

Full automation of manufacturing processes is described as neither feasible nor cost-efficient; most manufacturing processes require the flexibility of a human operator (Charalambous, Fletcher, & Webb, 2016). However, the manufacturing industry is interested in automating manufacturing processes by introducing industrial robots into production (Hägele, Schaaf, & Helms, 2002; Santis, Siciliano, Luca, & Bicchi, 2008; Schraft, Meyer, Parlitz, & Helms, 2005; Unhelkar, Siu, & Shah, 2014). In various environments, from space exploration to transportation and assembly, robots are being used to perform tasks (Oleson et al., 2011). By having robots work along with the human workforce, higher efficiency and productivity can be gained; this new form of automation could raise productivity globally by 0.8 to 1.4 percent annually (McKinsey, 2017). Robots and humans can complement each other's capabilities (Bortot et al., 2013). For instance, a robot can collaborate with a skilled human worker to perform an assembly operation. Robot capabilities are ever increasing; this will make robots enter the workplace as 'full' team members.

The introduction of robots to a team of workers does not go without challenges. For instance, big concerns were raised about human safety when working with robots. Safety issues like these were already anticipated in the Three Laws of Robotics formulated by Isaac Asimov (1950): "A robot may not injure a human being or, through inaction, allow a human being to come to harm"; "A robot must obey orders given it by human beings except where such orders would conflict with the First Law"; "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law". These laws were developed to ensure safe (human) robotic interaction. Recent technological developments and updated regulations have shown promising opportunities for employing robots: under specific circumstances it is viable and safe for humans to work closely with industrial robots (ISO, 2011).

To ensure success in the workplace, effective Human-Robot interaction and collaboration need to be present (Hancock & Warm, 1989). Humans and robots need to be directed to reach a common goal through collaboration, by working as a team (Bauer, Wollherr, & Buss, 2007). In the case of robot team members, it will be the humans who state the goal; the robot generally assists the human and takes on the human's intentions as its own (Bauer et al., 2007).

Introducing robots as 'full' members of human teams can improve team capabilities and result in better performance, but it can at the same time create challenging situations that reduce the efficiency and performance of a company (Adams, Bruyn, Houde, Angelopoulos, Iwasa-Madge, & McCann, 2003).

It is therefore important for robots that are introduced at the workplace to have the qualities of a good 'human' teammate and to present themselves as trustworthy team members (Groom & Nass, 2007). These collaborative robots require specific capabilities and certain design features (Bauer et al., 2007). Trust is one critical dimension of Human-Robot interaction: without trust, humans cannot enjoy the benefits and help of robotics (Templeton, 2017). As the capabilities of robots keep growing, trust between human and robot becomes more important and turns out to be essential for teams consisting of robots and humans (Schaefer, 2013). For a Human-Robot team to be effective, humans have to trust the robotic team member(s) of their team to protect the interests and welfare of all individuals that are part of the team (Chen & Barnes, 2014; Freedy, de Visser, Weltman, & Coeyman, 2007; Hancock et al., 2011; Parasuraman & Riley, 1997). To achieve successful acceptance and use of robotic teammates, the construct of trust needs to be explored in depth.

2.3 Human-Robot Trust

Human attitudes and feelings towards robots are a significant factor in the successful implementation of robots in companies (Breazeal, 2004; Dautenhahn et al., 2005). The human perception of robots is influenced by intrinsic as well as extrinsic factors, for instance personality, emotions, appearance and dialogue (Fong, Nourbakhsh, & Dautenhahn, 2003).

Trust is a critical component of successful interactions and cooperation between humans and robots, just as in teams consisting of humans only (Groom & Nass, 2007; Yagoda & Gillan, 2012). It is an important factor in the (social) acceptance and usage of robotic systems, and in the willingness of humans to rely on the information that robots produce and the suggestions that they make (Hancock et al., 2011; Yagoda & Gillan, 2012), especially in risky and uncertain situations and environments (Freedy et al., 2007; Park, Jenkins, & Jiang, 2008). When there is no trust in the robotic team member, the human operator will intervene and take control back from the robotic team member (de Visser, Parasuraman, Freedy, Freedy & Weltman, 2006). Trust is important for the creation and maintenance of robot-human relationships; it influences the behaviour of humans in the relationship. Robots have great potential for being perfect team members; however, due to low levels of trust towards robots, they are underutilized as a workforce (Lussier, Galien & Guiochet, 2007). The definition of trust used in this study is that of Lee and See (2004): "the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability".

In general, humans perceive the reliability of automated and machine-like agents to be higher than that of human assistance and human-like agents, even when the information provided by the human agent and the robotic agent is the same (Dzindolet, Pierce, Beck, Dawe, & Anderson, 2001). For successful interaction between humans and robots it is therefore important to calibrate appropriate levels of trust (Chen & Barnes, 2012; Parasuraman & Riley, 1997). For tasks involving high risk, human reliance on automation and robotics increases; humans trusting robots more than fellow humans could have harmful consequences.

Factors influencing the development of trust have been described in numerous exploratory investigations. Trust is a multidimensional construct that is context dependent (Charalambous et al., 2016; Yagoda & Gillan, 2012). The development of trust between two actors is a process. Muir (1994) argues that human trust in automation develops as it does in romantic relationships between humans: at the start, trust is based on performance or reliability; after a while, the trustworthiness of a person or robot is influenced by the level of dependability; and at last it develops based on faith. In contrast, faith can also be seen as an important factor influencing trust at the beginning of a relationship (Muir & Moray, 1996). Instead of looking at the development of trust as three sequential steps, Lee and See (2004) consider three different levels of abstraction of trust development.

Lee and See (2004) identified three factors influencing trust towards robots: purpose, the level of automation; process, the suitability for a specific task; and performance, which concerns the reliability, predictability and capability of the robotic system. Other research identified three different factors influencing trust and its development towards robotics, stating that trust is influenced by the operational environment, the human team members and the robot itself (Eui, Jenkins, & Jiang, 2008; Hancock et al., 2011; Oleson et al., 2011).

In addition, a person's trust towards robots is related to that person's trust in automation and technology in general. The level of Human-Robot Trust in Human-Robot interaction is also influenced by robot characteristics and performance-based factors such as size, type and reliability (Dautenhahn, 2007). Furthermore, the degree to which humans can assess the transparency and observability of the robotic system is important for the development of trust in Human-Robot interaction (Verberne, Ham, & Midden, 2012). Also, the complexity of the tasks performed by robots is suggested to affect the level of trust of humans towards an automated system such as a robot (Manzey, Reichenbach, & Onnasch, 2012; Parasuraman, Molloy, & Singh, 1993). Other factors are interface usability, the operating environment, the proximity of the robot, competence and situation awareness (Oleson et al., 2011).

The factors affecting trust in human-to-human interaction and collaboration also play a role in the development of trust with regard to robots, and mediate the relationship between humans and automated systems (Merritt & Ilgen, 2008; Sheridan, 1975; Sheridan & Hennessey, 1984). However, for trust to be important in a relationship, human agents need to be willing to put themselves at risk by delegating some level of responsibility to the other party, the robot (Lee & See, 2004).

Lastly, human-related characteristics such as prior experiences, expertise, perception and personality can also influence Human-Robot Trust (Oleson et al., 2011). However, many more factors could possibly influence the construct of Human-Robot Trust.

The studies discussed on Human-Robot Collaboration give insight into the existing knowledge of the construct of trust in Human-Robot Interaction. Table 1 shows an overview of variables influencing Human-Robot Trust according to prior research. As in earlier frameworks of Human-Robot Trust, the variables are organized along three categories: Human, Robot and Context related variables (Oleson et al., 2011; Sanders et al., 2011).


Table 1
Overview of a selection of factors influencing Human-Robot Trust identified in prior research

Human-related factors: Personality; Attitudes; Experience; Capabilities; Perceptions; Acceptance of technology; Demographics; Engagement; Expertise; Work load; Situation awareness; Self-confidence; Comfort with robots; Propensity to trust; Training; Age; Gender.

Robot-related factors: Work load; False alarms; Failure rates; Multitasking; Automation level; Anthropomorphism; Predictability; Proximity; Personality; Competence; Robot type; Robot behaviour; Reliability; Dependability; Adaptability; Interface usability; Appearance; Functionality.

Context-related factors: Robot task complexity; Culture; In-group membership; Communication; Type of task; Mental models; Team; Room; Proximity of robot; Situation awareness; Dynamics; Are others present; Perceived safety; Control; Cooperation.

Note. Factors organized along three categories: human related factors, robot related factors and context related factors.

2.4 Perceived Intelligence

In the current study, a human-related factor influencing Human-Robot Trust is the point of focus. Research has pointed out that human-related factors are key to Human-Robot interaction and are important for understanding the development of trust towards a robot (Yagoda & Gillan, 2012). Human-Robot Interaction and Human-Robot Collaboration are dependent on the human partner in these relationships (Davids, 2002). However, most prior research focusses on the robot in these interactions instead of on the human worker (Schaefer, 2013).


One of the factors influencing Human-Robot Trust is perceived intelligence: "the perceived ability to acquire and apply knowledge and skills" (Oxford English Dictionary, 2017). The perceived intelligence of a robot is of great importance for the development of acceptance of, and trust towards, a robot (Oleson et al., 2011). People judge a person's or agent's intelligence based on perceived rather than actual intelligence (Koda & Maes, 1996). This dimension should be taken into account when designing robots (Koda & Maes, 1996). The development of trust towards a robot is influenced by the intelligence, and thus the competence, of a robot (Oleson et al., 2011; Sanders et al., 2011). When a robot is perceived as intelligent, it is also perceived to be competent: the robot is then perceived to be able to perform a task successfully, so it can be trusted to perform a certain task. According to Bartneck, Kulic, Croft, & Zoghbi (2009), humans require robots to behave in a certain way to see them as intelligent agents. Moreover, they indicate that when robots are deployed in the real world, humans seem to pick up on their limited abilities when they interact with a robot for longer than a few minutes; the programmed patterns of a robot's behaviour become known and boring to humans. Robot behaviour and 'intelligence' are based on Artificial Intelligence (AI). A robot can only portray the illusion of intelligence: it can exhibit the properties that are associated with being intelligent. Robots that interact with human beings can be seen as more intelligent, as their level of intelligence is dependent on the perception and interpretation of a human (Duffy, 2003).

A big component of the perceived intelligence of a robot is based on the competence of a robot, not its appearance (Koda & Maes, 1996). However, it was shown by King (1996) that agents with human forms are perceived as more intelligent than agents with other forms. Bartneck et al. (2009) showed that the perceived intelligence of a robot influences humans' overall perception of a robot. These authors also show that perceived intelligence influences […] what extent a robot is perceived as intelligent. Research suggests that the behaviour of a robot provides cues that influence the perceptions and assumptions people have in the interaction with a robot. When robots portray the cues that evoke automatic perceptions of intelligence, these initial perceptions will influence the acceptance and collaboration of humans and robots in a positive way. Cooperating robots need to meet social as well as instrumental goals (Goetz, Kiesler, & Powers, 2003). Whether people perceive a robot to be intelligent depends on multiple factors, including the social and instrumental behaviour of a robot.

According to Yagoda (2011), the interaction between humans and technology is dependent on the level of intelligence and the level of automation of the technology. Yagoda shows that robots can be positioned as moderately intelligent and moderately automated in comparison to other technologies. Research shows that the level of intelligence of a robot is based on the robot's Functionality and on its Interaction with its user (Yagoda, 2011).

Perceived intelligence is an important component of the trustworthiness of a robot, and helps in the development of acceptance towards robots (Oleson et al., 2011; Sanders et al., 2011). Whether a robot is perceived to be intelligent by a human determines whether it can be trusted to perform a certain task (Koda & Maes, 1996). Besides that, by influencing the level of trust between these two actors, the level of perceived intelligence of a robot influences the development of a collaborative relationship between a human and a robot (Hancock et al., 2011; Yagoda & Gillan, 2012).

The current study focusses on these two dimensions of the perceived intelligence of robots: perceived Functionality, an instrumental dimension of Human-Robot collaboration, and perceived Interactivity, a social dimension of Human-Robot collaboration. In the next two paragraphs these variables are described.


2.4.1 Functionality

One of the dimensions of perceived intelligence is Functionality: whether the robot has the qualities to be suited for its purpose. The Functionality of a robot in meeting a specific purpose is an important predictor of Human-Robot Trust (Lee & See, 2004; Schaefer, 2013). The functioning of the capabilities of the robot (e.g. its behaviour) can influence the trust towards this robot (Biros, Daly, & Gunsch, 2004). Functionality becomes more important as the level of automation of the robot increases, because humans have less control (Yagoda & Gillan, 2012).

The development of trust is often based on the performance and/or reliability of a robot (Rempel, Hilmes, & Zanna, 1985). However, the level of initial trust a human has towards a robot can be based on presumptions of trustworthiness instead of the actual performance of the robot (Kramer, Brewer, & Hanna, 1996). Lee and See (2004) propose that to make automation trustable, the capabilities of the automation should be conveyed to the user, for instance by showing the past performance of the automation. The perceived Functionality of a robot is a dimension of its perceived intelligence, which influences the level of human trust towards the robot (Biros, Daly, & Gunsch, 2004; Yagoda, 2011; Yagoda & Gillan, 2012). The human perceptions of a robot, including the perceptions about its functioning, positively influence the acceptance of a robot in the collaboration with humans (Goetz et al., 2003). When a robot functions well, people are more prone to trust it (Lee & See, 2004; Schaefer, 2013).

Based on these insights, the following relationship between Functionality and Human-Robot Trust is hypothesised:

Hypothesis 1: There is a positive relationship between the level of Functionality of a Robot and the level of Trust towards that Robot.

2.4.2 Interactivity

Interaction is the second dimension of perceived intelligence (Yagoda, 2011). However, when discussing interaction between humans and robots, the term Interactivity is used. It describes "the level to which a robot is able to respond to the input of a human being" (Oxford English Dictionary, 2017).

The level of Interactivity of a robot, the social dimension of the behaviour of a robot, influences the human perception of a robot on a social level (Reeves & Nass, 1996), and it influences the human perception of the intelligence of the robot (Yagoda & Gillan, 2012). Also, the kind of interaction between a human and a robot, and the robot's behaviour in these interactions, influences human perceptions of robots (Goetz et al., 2003; Sanders et al., 2011). The level and kind of interaction can therefore be a predictor of Human-Robot Trust (Oleson et al., 2011). Research has shown that the acceptance of collaboration between humans and robots is positively influenced by human perceptions of the robot (Goetz et al., 2003). According to Goetz et al. (2003), these perceptions are also based on the degree to which robots meet the social goals within an interaction with humans. Lee and See (2004) argue that the Interactivity of a robot has a positive influence on Human-Robot Trust.

Therefore, the following relationship between Interactivity and Human-Robot Trust is hypothesised:

Hypothesis 2: There is a positive relationship between the level of Interactivity of a Robot and the level of Trust towards that Robot.

2.5 Negative Attitude towards Robots

A psychological factor influencing the interaction between humans and robots is the attitude humans have towards robots. When this attitude is negative, it could prevent individuals from interacting with robots (Nomura, Kanda, & Suzuki, 2004; Nomura, Kanda, Suzuki, & Kato, 2008). Besides this, the attitude people have towards robots could also influence the development of trust towards robots (Schaefer, 2013).

In general, people have a positive attitude towards the idea of an intelligent (service) robot (Khan, 1998). However, attitudes differ significantly between generations (Scopelliti, Giuliani, D'Amico, & Fornara, 2004). The human attitude towards robots is also influenced by a human's personality and gender, and by the task of the robot (Nomura et al., 2008).

The definition of 'attitude' used in this study is that of Chaplin (1991): "a relatively stable and enduring predisposition to behave or react in a certain way towards persons, objects, institutions, or issues; its source is cultural, familial and personal" (Nomura et al., 2008).

The role of attitude towards robots in the development of Human-Robot Trust has been investigated before (Oleson et al., 2011; Sanders et al., 2011). Humans with a more positive attitude towards automation and robotics could put too much trust into a robotic system, which could lead to lower performance (Bailey, Scerbo, Freeman, Mikulka, & Scott, 2006). However, a more negative attitude towards a robotic system could make people trust robots less (Nomura et al., 2004).

The effect of the level of perceived intelligence of a robot on the level of Human-Robot Trust towards that robot might be influenced by the attitude people have towards robots in general. When the general predisposition of a person towards robots is negative, the two dimensions of perceived intelligence, Functionality and Interactivity, are expected to have a smaller effect on the trust towards the robot that the person actually works with. As a general Negative Attitude towards Robots is relatively stable, other factors influencing the development of trust are expected to have less impact (Chaplin, 1991).


The negative attitude of people towards robots could thus negatively moderate the effect of the two dimensions of perceived intelligence on the level of Human-Robot Trust of employees that work together with robots. Based on these insights, the following two hypotheses are formulated:

Hypothesis 3: The positive relationship between the level of Interactivity of a Robot and the level of Trust towards that Robot is negatively moderated by the person's Attitude towards Robots.

Hypothesis 4: The positive relationship between the level of Functionality of a Robot and the level of Trust towards that Robot is negatively moderated by the person's Attitude towards Robots.

2.6 Training

Training employees can be a helpful tool to motivate them to do their job properly. People that currently interact with robots are most likely trained to work with robotics, or have a lot of knowledge about robotics. However, since more and more robots are entering the workplace, more employees without great knowledge of robotics will be interacting with robots. While at the moment robots and human workers are often kept separate, for instance in factories, in the future humans and robots will work alongside each other more (International Federation of Robotics, 2016).

Designing training programmes that provide employees with information about the purpose, process and performance of a robot can have a positive effect on the development of Human-Robot Trust (Lee & See, 2004; Oleson et al., 2011; Sanders et al., 2011). The exact development of trust towards robots depends on the information, and the type of information, that is provided to the employees through, among others, training. According to Lee and See (2004), such training should not only provide information, but should present this information in a way that stimulates the development of trust. In the earliest stages of human interaction with a robot, this information is more crucial for the development of trust: as experience increases, humans are able to determine the performance, reliability, predictability and dependability of robots, while in the earliest stages the perceived trustworthiness of robots is determined by the provided information about their purpose, and by faith (Hoc, 2000).

Whether or not employees have received training on working with robots could influence their development of trust towards robotics. Just as in the development of trust between people, trust in robotics develops according to the information that is available to the employees (Lee & See, 2004). This information can be provided through offering training to employees that (have to) work with robots. In the beginning of the interaction between the human and a robot, this could have the largest effect (Hoc, 2000; Lee & See, 2004; Muir & Moray, 1996), as the level of perceived intelligence of a robot is at that point solely based on the information that is provided to the employees through, for instance, training.

It is expected that the training humans receive on interacting with robots could moderate the effect that the two dimensions of perceived intelligence have on the level of Human-Robot Trust of employees that work with robots, as the information that employees receive through training can already have influenced their trust development towards robots beforehand.

Hypothesis 5: The positive relationship between the level of Interactivity of a Robot and the level of Trust towards that Robot, is positively moderated by the amount of Robot Training a person has had.


Hypothesis 6: The positive relationship between the level of Functionality of a Robot and the level of Trust towards that Robot, is positively moderated by the amount of Robot Training a person has had.

2.7 Conceptual model

Based on the knowledge acquired through the review of existing research on this topic, a conceptual model of Human-Robot Trust was created for the current investigation, which visualizes the six hypothesised effects between the variables of interest (see Figure 1). Four control variables are included: Age, Gender, Experience in working with robots, and Type of robot a participant worked with. These four variables have been shown to affect the development of trust in earlier research (see paragraph 2.3). Additionally, these variables can be measured across all types of employees, as they are relevant for all sorts of jobs in which employees work together with robots.

The research question that will be answered in this study is: What is the effect of the level of the two dimensions of perceived intelligence, Functionality and Interactivity, of a robot on the level of trust towards a robotic team member?

Additionally, two sub-questions are posed:

1) How does Human-Robot Training moderate this effect of the two dimensions of perceived intelligence on Human-Robot Trust?

2) How does the negative attitude of humans towards robots moderate this effect of the two dimensions of perceived intelligence on Human-Robot Trust?


Figure 1

Conceptual model of Human-Robot Trust

[Path diagram: the two Perceived Intelligence dimensions, Functionality of Robot (H1 +) and Interactivity of Robot (H2 +), predict Human-Robot Trust; Negative Attitude towards Robots in general moderates these paths (H3 -, H4 -), as does Human-Robot Training (H5 +, H6 +).]

Note. Control variables: Age, Gender, Experience in working with robot, Type of robot.


3. Methodology

To investigate the hypotheses formulated in the previous chapter, this study followed a deductive research approach. This quantitative study of a descriptive nature identified whether the previously described dimensions of perceived intelligence, Functionality and Interactivity, influence the Human-Robot Trust of employees that work together with robots. This was done by means of a cross-sectional survey design. This research design and method enabled the quantification of the effect of the two above-mentioned dimensions of perceived intelligence on Human-Robot Trust at a certain moment in time.

In the following parts of this chapter, the sample of the current study is described, as well as the survey design and the statistical analyses that were performed on the data set.

3.1 Sample

This study focussed on employees that work together with (a) robot(s) during their job. The sample was not restricted based on, for example, age, gender or occupation; anybody that was in close contact with a robot during their current job could participate in the study. For investigating Human-Robot Trust, this sample is of particular importance: this group of people already has experience in working with robotics, and a comparable sample has not been used in Human-Robot Interaction and Collaboration studies before. This broad population made determining the sample frame of the study difficult. Therefore, the sampling technique chosen for this study was non-probability volunteer snowball sampling. Respondents were found by contacting different companies that employed human as well as robotic 'employees', by contacting individual employees by e-mail, and through social media and forums. Additionally, respondents were found through a panel company. The intention was to create a sample of at least 160 respondents. This intended sample size was based on the number of variables and hypothesised interactions in the developed conceptual model. This number of respondents could, in addition, detect an R² as small as .15, with a power of .80, a significance level of .05 and around ten variables (Hair, Black, Babin, & Anderson, 2010). While 213 people filled in the survey completely, the final sample consisted of 185 employees that work with robots (mean age = 30; 73.68% male, 26.32% female).
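As an aside, the kind of power calculation behind such a sample-size target can be sketched as follows, here in Python rather than the tables of Hair et al. (2010); the function is an illustrative implementation based on Cohen's f² and the noncentral F distribution, not the exact procedure used in this study, and the predictor count is taken from the text.

    # Approximate power for detecting a given R^2 in multiple regression,
    # using Cohen's f^2 effect size and the noncentral F distribution.
    from scipy import stats

    def regression_power(r2, n, n_predictors, alpha=0.05):
        f2 = r2 / (1.0 - r2)                 # Cohen's f^2 effect size
        df_num = n_predictors                # numerator degrees of freedom
        df_den = n - n_predictors - 1        # denominator degrees of freedom
        nc = f2 * (df_num + df_den + 1)      # noncentrality parameter
        f_crit = stats.f.ppf(1 - alpha, df_num, df_den)
        return 1 - stats.ncf.cdf(f_crit, df_num, df_den, nc)

    print(regression_power(r2=0.15, n=160, n_predictors=10))  # intended minimum n
    print(regression_power(r2=0.15, n=185, n_predictors=10))  # realised sample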

The participants were informed beforehand about the topic of the survey, working together with robots; however, the aspect of trust was not disclosed before participation. Besides that, the participants were informed about the protective way in which their data were collected, stored, processed and analysed. They were also assured of the voluntary nature of the survey, their anonymity, and their right to ask questions and to quit or pause the survey at any time. Also, they were assured that the data collected would not be used for any purpose other than this study. Respondents were able to participate on a pc, laptop or mobile device, at any place or time. Lastly, it was indicated that, by filling in their e-mail address through a link separated from the survey, they had a chance of winning one of three €10 prizes. See Appendix A for the explanation that research participants received.

3.2 Survey design

To investigate the effect of perceived intelligence on Human-Robot Trust, a survey was developed in Qualtrics and administered online in English. The survey was distributed by e-mail and through social media platforms, and could be filled in on a computer as well as on mobile devices.

The survey started off with an explanation of the research that was being conducted and of the rights of the participants. The first question was to ensure that the participants that filled in the survey were genuinely working together with robots. Subsequently, the survey collected general information about the participants, such as gender, age, occupation and the frequency with which they worked with robots. These variables were controlled for in the hypothesised effect of the dimensions of perceived intelligence on Human-Robot Trust.

3.2.1 Training

The moderating variable Robot Training was measured by posing questions about the amount of training the participants had received in working with robots. These data were collected at the categorical level of measurement.

Afterwards, the participants were asked to indicate what type of robot they worked with most frequently, and it was indicated that the remaining questions of the survey were to be answered based on the robot that the participant worked or collaborated with the most.

3.2.2 Functionality

The following set of questions measured the independent variable Functionality. The level of perceived Functionality of a robot was measured using the Global Evaluative Assessment of Robotic Skills (GEARS) scale developed by Goh, Goldfarb, Sander, Milest and Dunkin (2012) (α = .91). Two items were slightly adjusted in order to fit the participants of the survey and to make the items compatible with all types of robots that people could be working with. An example of this adjustment: the item 'Some overshooting or missing target, but quick to correct' was changed into 'Some mistakes, but quick to correct'.

Respondents indicated how well they perceived the robot to function on six items. The responses were indicated on a 5-point scale, from poor to exceptional. Through this scale, numerical data were collected on items such as 'Efficiency'.


3.2.3 Interactivity

The other dimension of perceived intelligence, the level of Interactivity of a robot, was measured to collect numerical data on this variable. Participants indicated to what level they agreed with nine items of the expected-Interactivity measurement scale developed by Sohn and Choi (2014), which originally consists of 12 items. Three items were excluded to make the scale more suitable for research into employees that work together with robots. In addition, the tense used in the items was changed from future to present tense, as the participants are currently working with robots.

Three of the nine items were counter-indicative items. The level of agreement was administered on a 7-point Likert scale covering three dimensions (sensory, semantic, behavioural), ranging from strongly disagree to strongly agree (resp. α = .87, α = .95, α = .92). An example of one of the statements of this scale was: 'I am able to influence it.'

3.2.4 Negative Attitude

Following this, the respondents' attitude towards robots in general was measured using the Negative Attitude towards Robots Scale (NARS) developed by Nomura et al. (2004), collecting numerical data. This scale consists of 14 items divided into three subscales (S1 = interaction, S2 = social influence and S3 = emotional interaction), scored on a 5-point scale from strongly disagree to strongly agree (α S1 = .750, α S2 = .782, α S3 = .648). The three items on emotional interaction were counter-indicative items. In this survey 13 of the 14 items were used; the item that was removed was not appropriate for participants that work together with robots. One of the statements of the scale was: 'I would feel uneasy if robots really had emotions'. This scale measures the Negative Attitude towards Robots; a higher score means a more Negative Attitude towards Robots.


3.2.5 Human-Robot Trust

Lastly, the level of Human-Robot Trust of the participants was measured using a scale for trust in industrial Human-Robot collaboration created by Charalambous et al. (2016), again collecting numerical data. Ten statements were presented on a 5-point Likert scale, from strongly disagree to strongly agree (α = .84). Two items of this scale were counter-indicative. The formulation of these ten items was slightly changed to make sure it was appropriate for the targeted participants of the survey and compatible with all kinds of robots. The first statement of this measurement scale was: 'The way the robot moves makes me uncomfortable'.

One extra statement was added to this part of the survey as a control question: 'To guarantee the quality of this survey, please answer this question with "strongly disagree".' Participants that did not answer this statement correctly were omitted from the sample. This statement was added to enhance the quality of the collected data.
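To illustrate this quality check, the sketch below shows how such respondents could be filtered out of a raw data export in Python; the file and column names are hypothetical, and the actual screening in this study was performed in SPSS.

    import pandas as pd

    # Hypothetical file and column names; a real Qualtrics export differs.
    df = pd.read_csv("survey_export.csv")

    # Keep only respondents who answered the control item with
    # "strongly disagree" (coded 1 on the 5-point scale).
    n_before = len(df)
    df = df[df["control_item"] == 1]
    print(f"Removed {n_before - len(df)} respondents failing the control question")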

After all these questions, the participants could enter the lottery to win one of the three €10 prizes by leaving their e-mail address through a separate link at the end of the survey, and they were thanked for their participation. See Appendix A for all questions that were part of the survey and the way in which the questions were posed exactly. Detailed information about the survey and the collected data is available on request.

3.3 Statistical analysis

The data were analysed using the SPSS software developed by IBM. Before the data were analysed and statistical tests were performed, the dataset was cleaned and ordered: respondents that indicated not to have experience in working with robots (34 responses), partial responses (21 responses) and extreme outliers (five responses) were removed from the dataset. Responses were deleted listwise. Besides this, the respondents that did not answer the control question correctly were omitted from the dataset (23 responses were deleted). A frequency check detected no errors in the data set.

The collected data needed to be prepared before the analyses could be run. Firstly, the scales needed to be corrected for the counter-indicative items that were present. Three of the measurement scales used in the survey, the Interactivity scale, the attitude scale and the Human-Robot Trust scale, contained counter-indicative items. The collected values on these items were recoded in order to analyse the data correctly.
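A minimal sketch of this recoding step, assuming hypothetical item names and using Python instead of the SPSS syntax actually used: on a k-point scale, a counter-indicative response x is recoded as (k + 1) - x.

    # df: cleaned survey responses (see the earlier sketch).
    # On the 7-point Interactivity items, x becomes (7 + 1) - x, so 1 <-> 7.
    REVERSED_7PT = ["inter_3", "inter_6", "inter_8"]   # hypothetical item names
    for col in REVERSED_7PT:
        df[col] = 8 - df[col]

    # Same idea for counter-indicative items on the 5-point trust and
    # attitude scales: x becomes (5 + 1) - x.
    REVERSED_5PT = ["trust_1", "trust_7"]              # hypothetical item names
    for col in REVERSED_5PT:
        df[col] = 6 - df[col]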

Secondly, a reliability analysis of the scales used in the survey was performed, to make sure each scale measured its respective construct correctly, precisely and consistently. Because of the cross-sectional nature of this study, the reliability of the four scales was computed using Cronbach's Alpha.

Thirdly, to be able to test the relationships between the variables in this study, the variables measured at the scale level were transformed into new variables representing the scale means of the scales used in the survey. Lastly, the distribution of the dataset was determined.
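These two preparation steps, reliability and scale means, can be illustrated as follows; the Cronbach's Alpha formula is standard, but the item names are hypothetical and the study itself used SPSS.

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's Alpha: (k / (k - 1)) * (1 - sum of item variances
        divided by the variance of the total score)."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # df: cleaned, recoded survey responses; item names are hypothetical.
    trust_items = df[[f"trust_{i}" for i in range(1, 11)]]
    print(cronbach_alpha(trust_items))     # the reported value was .865

    # Scale score = mean of the (recoded) items of each construct.
    df["trust"] = trust_items.mean(axis=1)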

The first analyses performed were of a descriptive nature, showing statistics on the values collected through the survey for the variables of interest. Secondly, to explore whether the constructs of the developed conceptual model were related, a correlational analysis was performed on all five main variables and the four control variables. This analysis gave insight into significantly correlated variables and the strength of these correlations.
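A sketch of such a correlational analysis, again with hypothetical variable names, producing the means, standard deviations and correlations of the kind reported in Table 4 (the nominal robot-type control is left out here, as it does not lend itself to Pearson correlation):

    # df: prepared dataset with scale-mean variables (see earlier sketches).
    main_vars = ["trust", "functionality", "interactivity", "neg_attitude",
                 "training", "age", "gender", "experience"]

    summary = df[main_vars].agg(["mean", "std"]).T   # means and SDs per variable
    corr = df[main_vars].corr(method="pearson")      # correlation matrix
    print(summary)
    print(corr.round(2))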

Provided the assumptions of the tests were met, the following statistical tests were run to test the hypothesised relationships between the variables of interest. The main effects of the two dimensions of perceived intelligence, Functionality and Interactivity, on Human-Robot Trust were tested by means of a hierarchical multiple regression analysis. Subsequently, the moderation effects were tested by adding steps to the multiple regression analysis containing the interaction terms of the investigated moderators. These analyses were able to show the moderating effects of training, and of attitude towards robots, on the relationship between the two dimensions of perceived intelligence and the construct of Human-Robot Trust. These tests showed the relationships present between the five variables that are part of the conceptual model of this current study, while controlling for the control variables.
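To illustrate, a moderated hierarchical regression of this form could be specified as below. The variable names are hypothetical and the original analyses were run in SPSS; the mean-centering of predictors before forming interaction terms is a common convention assumed here, not something the text specifies.

    import statsmodels.formula.api as smf

    # Mean-center predictors before forming interaction terms, a common way
    # to limit multicollinearity between main effects and their products.
    for col in ["functionality", "interactivity", "neg_attitude", "training"]:
        df[col + "_c"] = df[col] - df[col].mean()

    # Earlier step: controls plus main effects.
    m1 = smf.ols("trust ~ age + gender + experience"
                 " + functionality_c + interactivity_c"
                 " + neg_attitude_c + training_c", data=df).fit()

    # Later step: add the hypothesised interaction terms (H3-H6);
    # '*' expands to both main effects and their product.
    m2 = smf.ols("trust ~ age + gender + experience"
                 " + functionality_c * neg_attitude_c"
                 " + interactivity_c * neg_attitude_c"
                 " + functionality_c * training_c"
                 " + interactivity_c * training_c", data=df).fit()

    print(m1.rsquared, m2.rsquared)   # compare R^2 across steps
    print(m2.summary())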


4. Results

This chapter presents and examines the results of the statistical analyses described in the previous chapter. Firstly, the preliminary analyses of the data set and some general descriptive statistics are presented. Next, the main hypothesised effects are tested, followed by the predicted moderating effects and additional analyses.

4.1 Preliminary analyses

After the data set had been cleaned (see paragraph 3.3), removing outliers and partial responses as well as responses that were unreliable due to an incorrect answer on the control question, preliminary analyses were performed on the dataset.

The final sample consisted of 185 responses. On average, respondents spent seven minutes filling in the survey. Of the 185 employees, 26.3% were female and 73.7% male. The average age of the respondents was thirty. Most respondents had a British (29.2%) or American (21.6%) nationality; overall, employees with 43 different nationalities responded to the survey. Almost half of all respondents had obtained a Bachelor's degree (48.1%), and 17.3% had also obtained a Master's degree. The respondents ranged from programmers, operators, financial workers, analysts, manufacturers and managers to IT specialists and data scientists.

Of the employees that participated in the survey, 26.5% worked with (a) robot(s) on a daily basis; the biggest group, 49.2%, indicated working with (a) robot(s) up to four days a week. In addition, the results showed that most of the employees are quite new to working with robots, as 29.2% indicated to have worked with (a) robot(s) for less than one year. Only 18.8% had been working with robots for over five years.

Most respondents, 39.5%, indicated working with an information processing robot; 30.8% worked with a robotic arm, 15.7% worked with a transport robot, and the remaining 14.1% indicated working with another type of robot. Not all respondents remembered the exact type of robot they worked with. For the respondents that did remember, the most mentioned types of robots were the ‘KUKA’, the ‘FANUC’ and robots from ‘ABB Robotics’. KUKA Robotics, FANUC Robotics and ABB Robotics are three manufacturers of industrial robot arms that can perform numerous tasks across many applications (KUKA Robotics, 2017; FANUC, 2017; ABB Robotics, 2017). These industrial robot arms can, among other things, be used in assembly, machine tending, handling, measuring, picking and packing, loading and unloading, and palletising (FANUC, 2017).

On average, the respondents indicated liking to work with robots; no one indicated disliking it, and only 3.2% disliked it somewhat. Half of the respondents (50.8%) also indicated perceiving the robot they worked with as intelligent.

Data were collected on the following five main variables: Human-Robot Trust, Functionality, Interactivity, Negative Attitude towards Robots and Robot Training. In the next paragraphs, the reliability of the scales used in the survey is tested, the distribution of the data set is determined, and the first descriptive statistics of the dataset are analysed.

4.1.1 Reliability tests

The scales used to measure Human-Robot Trust, Functionality of a robot, Interactivity of a robot, and participants’ Negative Attitude towards Robots were tested for reliability. The internal consistency of the items of each scale was estimated using Cronbach’s Alpha.

The Human-Robot Trust scale was highly reliable, with Cronbach’s Alpha = .865. According to the item-total correlations, all items of the scale (N = 10) correlated well with the total score of the scale (all above .30) (Field, 2009). Item number seven could have been deleted from the scale; however, this would not have substantially affected the reliability of the scale, since it would not have increased the alpha by at least 0.1 (Field, 2009).


The Functionality scale also had high reliability, with Cronbach’s Alpha = .816. The item-total correlations indicated no problematic items; all items (N = 6) correlated well with the total scale score (all above .30).

The Interactivity scale likewise showed acceptable reliability, with Cronbach’s Alpha = .722; all items of the scale (N = 9) correlated well with the total score of the scale (all above .30). According to the item-total correlations, item 8 could have been removed from the scale; however, deleting it would not have substantially affected the reliability of the complete scale.

Finally, the items of the Negative Attitude towards Robots Scale (N = 9) also correlated well with the total score of the scale (all above .30). The scale had high reliability, with Cronbach’s Alpha = .808. Item number 13 could have been deleted, but deleting this item would not have affected the reliability of the scale substantially.

All scales used had good reliability (Cronbach’s Alpha above .70); they could therefore all be used to calculate the respective construct score for each respondent and for further analysis.
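For illustration, these reliability checks can be sketched in Python as follows; the helper is a direct implementation of the standard Cronbach’s Alpha formula (alpha = k/(k-1) times one minus the sum of the item variances divided by the variance of the total score), and the trust item columns are hypothetical names:

```python
# Minimal sketch of the reliability analysis: Cronbach's Alpha and corrected
# item-total correlations computed from the raw item responses.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_total_correlations(items: pd.DataFrame) -> pd.Series:
    # Corrected item-total correlation: each item vs. the sum of the others.
    return pd.Series({col: items[col].corr(items.drop(columns=col).sum(axis=1))
                      for col in items.columns})

trust_items = df[[f"trust_{i}" for i in range(1, 11)]]  # hypothetical columns
print(cronbach_alpha(trust_items))            # reported in the text: .865
print(item_total_correlations(trust_items))   # expected: all above .30
```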

4.1.2 Distribution of data

Whether the current dataset followed a normal distribution was assessed with the Shapiro-Wilk and Kolmogorov-Smirnov tests. These showed that not all data collected on the five main variables were normally distributed. Table 2 gives an overview of the assessed variables and their scores on the two tests of normality. According to the Kolmogorov-Smirnov test, none of the data collected on the five variables were normally distributed (p < .05). In contrast, the Shapiro-Wilk test did indicate normally distributed data for Interactivity and attitude towards robots (respectively, p = .244, p = .063). Having assessed the normal Q-Q plots and histograms of the data, it was concluded that the collected data on the five variables deviated from a normal distribution. The distribution of Human-Robot Trust was slightly positively skewed, while the distributions of Functionality, Interactivity and attitude towards robots were slightly negatively skewed. The distribution of the categorical variable, Robot Training, deviated largely from a normal distribution.

This implied that certain statistical tests that assume normally distributed data could not be performed; instead, the data had to be transformed or non-parametric tests had to be run. However, some significance tests are quite robust to non-normality if the sample size is fairly large. Since the data collected on the independent and dependent variables in this study were measured at the numerical level, a regression analysis could be performed; regression does not explicitly assume the data themselves to be normally distributed, and a normal distribution of the residuals was more important in this case (see paragraph 4.2.2).

Table 2

Tests of normality of data

                                   Kolmogorov-Smirnov         Shapiro-Wilk
                                   Statistic  df    p         Statistic  df    p
Human-Robot Trust                  0.07       185   .040*     0.98       185   .015*
Functionality                      0.11       185   .000*     0.96       185   .000*
Interactivity                      0.07       185   .019*     0.99       185   .244
Negative Attitude towards Robots   0.09       185   .001*     0.99       185   .063
Robot Training                     0.23       185   .000*     0.84       185   .000*

Note. Lilliefors significance correction (n = 185). * significant at the 0.05 level (2-tailed).
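As a sketch of how these tests can be reproduced (SPSS was used for the actual analyses), scipy provides the Shapiro-Wilk test and statsmodels provides a Lilliefors-corrected Kolmogorov-Smirnov test; the column names are the hypothetical ones used earlier:

```python
# Minimal sketch of the normality checks on the five main variables, using
# scipy's Shapiro-Wilk test and statsmodels' Lilliefors-corrected
# Kolmogorov-Smirnov test; column names are hypothetical placeholders.
from scipy.stats import shapiro
from statsmodels.stats.diagnostic import lilliefors

variables = ["human_robot_trust", "functionality", "interactivity",
             "negative_attitude", "training"]

for var in variables:
    x = df[var].dropna()
    ks_stat, ks_p = lilliefors(x, dist="norm")
    sw_stat, sw_p = shapiro(x)
    print(f"{var}: KS = {ks_stat:.2f} (p = {ks_p:.3f}), "
          f"SW = {sw_stat:.2f} (p = {sw_p:.3f})")
```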

4.1.3 Descriptive statistics

The dataset on which the statistical analysis was performed consisted of 185 observations (n = 185). The 185 respondents indicated their judgement of the Functionality and Interactivity of the robot they were currently working with, their trust towards this robot, their Negative Attitude towards Robots in general, and the amount of Training in working with robots they had received. An overview of the descriptive statistics of these variables is shown in table 3. It is important to take into account that Human-Robot Trust, Functionality, Interactivity and Negative Attitude towards Robots were measured on Likert scales, while the amount of Robot Training was measured at the categorical level.

The descriptive analysis of the five main variables of this study is shown in table 3. On average, the respondents indicated a score above 5 on Human-Robot Trust on a 7-point Likert scale. According to Charalambous et al. (2016), the developers of this measurement scale, this is a high average trust score. The minimum Human-Robot Trust score measured through this survey was 2.8, whereas the developers of the scale indicate that only scores below 2.5 can be considered low Human-Robot Trust scores. Parts of the sample scored very high on Human-Robot Trust, up to the maximum of 7, which could indicate that some of the employees over-rely on the robots that they work with.

‘Perceived’ Functionality was measured on a 5-point Likert scale. The average score on the Functionality of the robot was 3.63, an above-average score. This indicates that, on average, the robots the respondents worked with were able to perform their tasks in a way that made them suited for their purpose.

The respondents scored the robot they cooperated with during their daily jobs lower on Interactivity. On average, the respondents indicated a score of 3.72 on the robot’s Interactivity on a 7-point Likert scale, just below the scale midpoint of 4. The robots that the participants worked with were therefore, on average, not particularly interactive.

The Negative Attitude of the respondents towards Robots in general was measured on a 7-point Likert scale, using the NARS scale, which measures the negative attitudes people hold towards robots in general. The average score of the respondents on Negative Attitude towards Robots was 2.73, which can be interpreted as a low Negative Attitude towards Robots. Moreover, the maximum score measured on this construct was only 4.46. This indicates once more that the respondents, on average, held a low Negative Attitude towards Robots in general.

The amount of Robot Training employees received for working together with robots was measured categorically (1 = no training, 2 = fewer than three trainings, 3 = three to six trainings, 4 = more than six trainings). On average, the respondents indicated having had fewer than three trainings in working together with robots. Of the respondents, 34.1% indicated not having received any training in working with robots, 37.8% said they had received fewer than three trainings, 19.5% indicated having received three to six trainings, and 8.6% reported having received more than six trainings. In other words, most respondents had received at least some training in working with robots.

Table 3

Descriptive statistics Human-Robot Trust, Functionality, Interactivity, and Negative Attitude towards Robots

                                   N     Minimum   Maximum   M      SD
Human-Robot Trust                  185   2.80      7.00      5.16   0.96
Functionality                      185   1.17      5.00      3.63   0.80
Interactivity                      185   1.22      6.11      3.72   0.90
Negative Attitude towards Robots   185   1.00      4.46      2.73   0.64

Note. Negative Attitude towards Robots is measured negatively; a high score on this scale indicates a more Negative Attitude towards Robots.
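For illustration, the descriptive statistics of table 3, together with the skewness values discussed in paragraph 4.1.2, could be reproduced with a sketch like the following (hypothetical column names as before):

```python
# Minimal sketch reproducing table 3 plus the skewness values discussed in
# paragraph 4.1.2; construct column names are hypothetical placeholders.
from scipy.stats import skew

constructs = ["human_robot_trust", "functionality",
              "interactivity", "negative_attitude"]

summary = df[constructs].agg(["count", "min", "max", "mean", "std"]).T
summary["skew"] = [skew(df[c].dropna()) for c in constructs]
print(summary.round(2))
```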

4.2 Construct analysis

4.2.1 Correlational analysis

A Pearson and Spearman correlational analysis was performed on the independent, dependent, moderating and control variables. The control variable Type of Robot was excluded from this correlational analysis, since it was measured at the nominal level. This was the first step in exploring the data: seeing which variables were correlated with each other, and how strong the positive or negative associations between the variables were. Correlations between variables never imply causal relationships. The results of this analysis are given in the correlation matrix provided in table 4.

As table 4 shows, nine significant correlations were found between the variables. Five of these correlations were significant at the higher 99% confidence level. The strongest significant relationship was found between the Negative Attitude people had towards Robots and their level of Human-Robot Trust towards the robot that they worked with: Negative Attitude was significantly related to participants’ level of Human-Robot Trust, r = -.50, p = .000. The attitude of respondents towards robots was measured on the Negative Attitude towards Robots scale (NARS), on which a higher score indicates a more Negative Attitude towards Robots; this explains the large negative correlation.

The control variable Experience in working with robots was highly significantly correlated with Age, r_s = .42, p = .000, a medium positive correlation, which seems evident: an increase in experience, up to a new ordinal category, is on average associated with an increase in age. Human-Robot Trust was also highly significantly correlated with the Functionality of the robot participants worked with, r = .37, p = .000, a medium positive correlation.

The fourth highly significant relationship was found between the amount of Robot Training participants had received and the Functionality of the robot they worked with, r_s = .19, p = .007, a small correlation. Likewise, a highly significant correlation was found between the control variable Experience in working with robots and the amount of Robot Training, r_s = .22, p = .002, a small positive correlation.
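As an illustrative sketch (the reported values come from SPSS), Pearson and Spearman coefficients with their p-values can be computed as follows; variable names are again hypothetical:

```python
# Minimal sketch of the correlational analysis: Pearson for the interval-level
# constructs, Spearman where an ordinal variable (e.g. Robot Training) is
# involved; column names are hypothetical placeholders.
from scipy.stats import pearsonr, spearmanr

r, p = pearsonr(df["negative_attitude"], df["human_robot_trust"])
print(f"NARS x Trust: r = {r:.2f}, p = {p:.3f}")                # reported: -.50

rs, p = spearmanr(df["training"], df["functionality"])
print(f"Training x Functionality: rs = {rs:.2f}, p = {p:.3f}")  # reported: .19

# Full Pearson correlation matrix over the five main variables:
print(df[["human_robot_trust", "functionality", "interactivity",
          "negative_attitude", "training"]].corr(method="pearson").round(2))
```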
