Academic year: 2021


It Moves! It Talks! It’s Alive?!

How Robot Characteristics Influence Psychological Responses and Robot Acceptance

Marieke Wieringa

10857591

Master Thesis

Graduate School of Communication

Communication Science (Research Master)

University of Amsterdam

Supervisor: Dr. R.J. Kühne


Abstract

The primary task of robots so far has been to assist humans in industrial and other

professional settings. However, robots are increasingly being designed for the purpose of

communication with humans in the home environment: to provide company for our elderly,

entertain our children and even serve as therapists. This has raised the need to investigate how

people respond to robots and what factors lead to their acceptance. The goal of this study is to

test the effects of robot characteristics on psychological responses to the robot and the

acceptance of the robot. The extent to which people had control over the robot as well as the

vocal expressions of a robot were manipulated in an experiment on human-robot interaction

(N = 92). The results showed that having high control over the robot during the interaction

led to higher perceived task performance and higher perceived ease of use compared to

having low control over the robot. However, having low control over the robot during the

interaction resulted in higher mind attribution than having high control. Furthermore,

interacting with a robot with the capability of vocal expression led to higher perceived

animacy, higher mind attribution and higher perceived human likeness compared to

interacting with a robot without this capability. Finally, perceived ease of use and perceived

human likeness were related to robot acceptance. These findings contribute to the

understanding of how robots are evaluated during human-robot interaction. Furthermore, they contribute to knowledge of which factors play a role in the acceptance of social robots.

Keywords: social robots, human-robot interaction, control, vocal expressions, perceived task performance, perceived ease of use, animacy, anthropomorphism, robot acceptance


It Moves! It Talks! It’s Alive?! – How Robot Characteristics Influence Psychological Responses and Robot Acceptance

Robots are increasingly being used in settings other than just industry, such as

education, therapy and even in the home environment. These robots do not include merely

functional robots such as robotic vacuum cleaners, but also include robots that are designed to

communicate with humans in order to entertain, emotionally engage and even serve as pets

or companions (Fong, Illah & Nourbakhsh, 2002). These robots are usually referred to as ‘social robots’. According to Libin and Libin (2004), communication between humans and social robots can lead to several benefits, such as elevated mood and well-being. For example,

robotic seal Paro has been shown to improve the mood and communication of both children and the elderly (Wada et al., 2005).

These benefits are only enjoyed when people frequently interact with the robot (Libin

& Libin, 2004). This has raised the need to investigate how individuals respond to robots and

what factors might lead to their acceptance (Beer, Prakash, Mitzner & Rogers, 2011). This

paper aims to contribute to such knowledge by investigating how robot characteristics affect

utilitarian and hedonic responses to the robot, and how these responses eventually influence

robot acceptance. Specifically, this paper investigates the influence of two robot

characteristics on psychological responses in human-robot interaction: the robot’s

controllability and vocal expression. Utilitarian responses include the perceived task

performance and perceived ease of use of the robot. Hedonic responses include perceived

animacy and anthropomorphism. Accordingly, we aim to answer the following research

question:

RQ: How do the level of control over a robot and the vocal expressions of a robot during human-robot interaction affect utilitarian and hedonic responses to the robot, and how do these responses influence the acceptance of the robot?


This study is relevant for three reasons. First, robot designers try to make their robots as user-friendly as possible (Beer, Prakash, Mitzner & Rogers, 2011). In addition, designers often try to make their robots seem lifelike in order to emotionally engage their users (Bartneck, Kulic & Croft, 2009). It is therefore of interest to

robot designers to know which robot characteristics contribute to user-friendliness as well as

the apparent animacy and human likeness of robots. Secondly, even though there are a lot of

different ways to exert control over a robot, studies on how different control methods

influence factors of robot acceptance are currently lacking (Beer, Prakash, Mitzner & Rogers,

2011). Third, in current studies on responses towards a robot (such as, for example,

anthropomorphism), people do not actually interact with a robot (e.g. Eyssel & Kuchenbrandt,

2011; Eyssel, Kuchenbrandt & Bobinger, 2011; Eyssel et al., 2012). We therefore still know

little about the role that these responses play in actual human-robot interaction and eventually

in the acceptance of the robot. This paper aims to close this research gap by studying

utilitarian and hedonic responses to a social robot in an experimental study including actual

human-robot interaction.

The next section gives a detailed description of social robots, the different types of

social robots, their functions within current society and important characteristics of the social

robot. Then, we describe how the level of control over a robot is expected to influence the utilitarian responses perceived ease of use and perceived task performance. Next, we describe the expected influence of controllability and vocal expression on the hedonic responses perceived animacy and anthropomorphism. Finally, we discuss how perceived ease of use, perceived task performance, perceived animacy and anthropomorphism are expected to influence the acceptance of the robot.

Robot Types

The term robot is defined by the Merriam-Webster dictionary as “a machine that looks

like a human being and performs various complex acts of a human being, such as walking or talking”. In scientific literature, there is agreement that there are broadly two types of robots. The first type of robot, most often referred to as ‘industrial robots’ or ‘professional service robots’, operate in industrial and other professional settings such as the military. These robots are usually fully computer controlled and do not necessarily resemble a human (Thrun, 2004;

Libin & Libin; 2004). The second type of robots, usually referred to as ‘social robots’, are

specifically designed to communicate with humans in more domestic settings, and more often

resemble a human being (Zhao, 2006). The latter type of robot is the main focus of this paper.

Social robots function in a variety of settings where they fulfil various tasks. These

settings include amongst others education and therapy (Libin & Libin, 2004). In education,

social robots are being used to teach children and young adults basic programming skills

(Fong, Illah & Nourbakhsh, 2002). In therapy, social robot KASPAR resembles a child and

aids in therapy for autistic children (Höflich, 2013), while robotic seal Paro is designed to

stimulate interaction amongst elderly (Wada et al., 2005).

The specific type of social robot that is the focus of this paper, however, is the robot designed for domestic settings, where it serves as a pet or companion. Examples of such robots are Sony's AIBO, a robotic dog that learns through interaction with humans (Fong, Illah & Nourbakhsh, 2002), and the humanoid robots RoboSapien and MiP

(Behnke, Müller & Schreiber, 2005). The main goal of these robots is to entertain and provide

company at home (Libin & Libin, 2004). This has important implications for these robots.

First, the fact that these robots are designed for the purpose of entertainment makes them not just utilitarian, but also hedonic products. In other words, these robots are not only valued for their utility, but also for the experience of using them (de Graaf & Allouch, 2011). Therefore, these robots evoke not only utilitarian responses, but also hedonic

responses (Libin & Libin, 2004). De Graaf and Allouch (2011) define utilitarian responses as

those responses tied to the utility of the robot, whereas hedonic responses relate to the

experience of the user while using a robot. Hedonic responses have no obvious relation to

task-related goals such as the utilitarian responses.

Furthermore, in order to achieve their goal of communicating with and entertaining

humans, these robots (and social robots in general) usually require two things. First, the social

robot requires at least some level of autonomy rather than being fully controlled by the human

like industrial robots (Thrun, 2004). Autonomy is defined as the extent to which a robot can

sense its environment, plan and act based on that environment with the intent of reaching

some task-specific goal without external control. The opposite of autonomy is therefore full human control (Beer, Fisk & Rogers, 2014). The second characteristic social robots require is

an interface through which they can communicate with humans, for example speech and vocal

expression.

This paper poses that these robot characteristics (the level of control over the robot and the robot’s vocal expressions) influence utilitarian and hedonic responses towards the robot. We furthermore pose that utilitarian and hedonic responses in turn influence robot acceptance.

The next section focuses on explaining how the robot characteristics are expected to influence

utilitarian and hedonic responses toward robots. Then, we will explain how utilitarian and

hedonic responses influence robot acceptance.

Utilitarian Responses

Perceived task performance. One of the most applied models used to explain the acceptance of technology is the Technology Acceptance Model (TAM: Davis, 1989). The

TAM posits that acceptance will be predicted by two utilitarian considerations: perceived usefulness and perceived ease of use. Perceived usefulness is defined as the degree to which a person believes that using a particular system would enhance his or her

performance (Davis, 1989). In the case of robots, perceived usefulness could therefore refer to

how well they themselves perform on a task (Beer, Prakash, Mitzner & Rogers, 2011).

According to Davis (1989), perceived ease of use and perceived usefulness will be

influenced by specific characteristics of the technology. In the case of robots, the level of

control over the robot will determine which tasks a robot is able to perform (Beer, Prakash,

Mitzner & Rogers, 2011). Methods of control in which the human is in control of the robot, such as remote control, are used extensively with robots in industrial and professional settings, such as planetary exploration and search and rescue operations, and are therefore currently an important default mode of control for robots (Lampe & Chatila, 2006). Humans generally feel motivated to feel in control of their environment (White, 1959).

Experiencing a lack of control can therefore lead to several negative outcomes such as stress,

frustration and poor performance (Burger, 1985). Even though very few studies have tested the effect of the controllability of a robot on its perceived task performance, a series of studies did

show that the controllability of characters in a video game influenced feelings of competence

within the game. Specifically, feelings of competence in the game were enhanced when

participants could easily control the characters compared to when the characters were more

difficult to control (Ryan, Rigby & Przybylski, 2006). In these studies, competence was

defined as feelings of effectance, and is therefore similar to performance. Accordingly, we

pose the following hypothesis:

H1: Having high control over a social robot will lead to better perceived task performance than having low control over a social robot.

Perceived ease of use. The second utilitarian variable, perceived ease of use, refers to the degree that a person believes that using a system would be free of effort (Davis, 1989).

If people expect that using a system will take considerable effort, they may perceive it as difficult to use and therefore feel apprehensive to use it. The effect of the level of control over

a robot on perceived ease of use has not yet been studied (Beer, Prakash, Mitzner & Rogers,

2011). However, as stated previously, feeling out of control can lead to several negative

outcomes (Burger, 1985). The reason that lacking control can lead to negative outcomes is

that people feel generally motivated to function effectively within their environment by

reducing uncertainty about it. This motivation to feel in control of one's environment is

referred to as effectance motivation (White, 1959). Lacking control however creates feelings

of uncertainty. According to Davis (1989), perceived ease of use relates to the effort put into

using the technology. If using certain technology increases feelings of uncertainty, it takes

more effort to use the technology compared to when people feel in control of the technology

(Luczak, Roetting & Schmidt, 2003). Therefore, we pose the following hypothesis:

H2: Having high control over a social robot will lead to increased perceived ease of use compared to having low control over a social robot.

Hedonic Responses

Animacy. Robot designers often try to make their robots as lifelike as possible (Bartneck, Kulic & Croft, 2009). According to the Merriam-Webster dictionary, “animate” means “the state of being alive”. Detecting animate entities has been necessary for survival in order to distinguish prey and predator (Pratt, Radulescu, Guo & Adams, 2010). Infants start

distinguishing between animate and inanimate objects when they are only nine months old

(Poulin-Dubois, Lepage & Ferland, 1996). In scientific literature, there are two contrasting

hypotheses that explain the attribution of animacy to objects.

The first hypothesis, the Newtonian violation hypothesis, poses that animacy is

attributed to objects whose motion violates Newtonian laws of motion (Scholl & Tremoulet,

This means that people will attribute animacy to an object if it stops or starts moving without an apparent external cause. Findings by Stewart (1982) show that certain movements can indeed influence the perception of animacy (in:

Scholl & Tremoulet, 2000, p. 304). These motions include a start from rest, a change of

direction or moving in a direct path towards an object. Furthermore, Tremoulet and Feldman

(2000) showed in an experiment including a single rigid object moving across a uniform field

that perceptions of animacy were significantly influenced by change in speed and direction.

The second hypothesis, the intentionality hypothesis, states that animacy is attributed when

intentionality is perceived, such as when an object responds to its environment (Tremoulet &

Feldman, 2006). For example, an object is perceived as more animate when it changes

direction in order to avoid an obstacle (Blythe, Miller & Todd, 1999).

The most important condition for the perception of animacy in both the Newtonian

violation hypothesis and the intentionality hypothesis is that motion has to be self-propelled,

rather than driven by an external force (Poulin-Dubois, Lepage & Ferland, 1996). This

assumption would imply that when the movements of a robot are fully controlled by the

human during human-robot interaction, it should be perceived as less animate than when the robot controls its own movements. Furthermore, according to the Newtonian violation hypothesis, attribution of animacy should further increase if the robot is also able to stop,

start, turn and accelerate independently without external force (Stewart, 1982), in other words,

without being controlled by the human. Perceived animacy should also increase if the robot is

able to respond to its environment without external control, as stated by the intentionality

hypothesis (Tremoulet & Feldman, 2006). Based on these arguments, we pose the following

hypothesis:

H3a: Having low control over a social robot during human-robot interaction leads to higher perceived animacy than having high control over the social robot.

Besides independent motion, the robot's vocal expressions should also affect perceptions of animacy. Speech and vocal expression is the main mode of communication for humans (Hargie, 2010), and most animals also have the capability of vocal expression: the lion

roars, the dog barks, the cat meows, etc. Therefore, the capability of vocal expression is an

important characteristic that increases animate perceptions when applied to robots (Fink,

2012). However, we pose that in a similar way that motion has to be self-propelled in order to

be perceived as animate (Poulin-Dubois, Lepage & Ferland, 1996), vocal expressions should

also be caused by the robot itself rather than an external force, such as a human pressing a

button. Therefore, we pose the following hypothesis:

H3b: Interaction with a low controllable social robot with the capability of vocal expression leads to higher perceived animacy compared to interaction with a low controllable social robot without the capability of vocal expression.

Perceived animacy is the degree to which a person perceives the robot as being alive.

However, a second important hedonic response takes things further by describing how we not

only have the tendency to view inanimate objects as being alive, but that we may also

perceive them as being human.

Anthropomorphism. People have the tendency to treat technological devices as if they were human (Luczak, Roetting & Schmidt, 2003). This tendency to attribute human

characteristics, intentions and emotions to nonhuman agents is referred to as

anthropomorphism (Epley, Waytz & Cacioppo, 2007). Examples of anthropomorphism

include attributing a humanlike appearance to nonhuman agents such as deities, or believing that computers possess mental capacities or minds of their own and can therefore conspire against you (Luczak, Roetting & Schmidt, 2003). Gray, Gray and Wegner (2007) showed that

perceiving something as possessing “mind” includes perceiving something as being able to

experience emotions and being conscious of its environment, while at the same time being

capable of making its own decisions. These dimensions of mind attribution are important to anthropomorphism, because the essence of anthropomorphism is perceiving nonhuman agents as if they were human (Epley, Waytz, Akalis & Cacioppo, 2008). Anthropomorphism therefore goes beyond merely describing the

actions of a nonhuman agent as humanlike: it refers to the process of attributing human characteristics, emotions and a mind of its own to a nonhuman agent. For example,

anthropomorphism occurs when a pet owner goes beyond describing the behaviour of his dog as “affectionate” to infer that “my dog loves me” (Epley, Waytz & Cacioppo, 2007).

Anthropomorphism relates to animacy in the sense that anthropomorphic inferences may

include perceptions of animacy. However, since animate life is not a uniquely human quality,

anthropomorphism goes beyond perceptions of animacy by inferring some nonhuman agent

possesses uniquely human qualities (Epley, Waytz, Akalis & Cacioppo, 2008).

The strength of anthropomorphic inferences people make can differ in different

contexts. Strong forms of anthropomorphism include not only behaving as if a nonhuman agent, such as a deity, possesses human characteristics, but also actively endorsing the belief that the agent possesses these characteristics. Weaker forms of anthropomorphism, such

as cursing at your computer, are more immediate responses and may not necessarily include

the active endorsement of the belief that the nonhuman agent actually possesses human

characteristics (Epley, Waytz, Akalis & Cacioppo, 2008).

Anthropomorphism serves as a mechanism through which uncertainty about

technology is reduced (Luczak, Roetting & Schmidt, 2003) and communication with robots is

facilitated (Duffy, 2003; Fong, Nourbakhsh & Dautenhahn, 2002). Anthropomorphising helps

people rationalise the real or imagined behaviour of nonhuman agents such as robots, by

treating the robot as if it were a rational agent whose actions are governed by choices and

desires (Duffy, 2003). What motivates people to anthropomorphise in a given situation is

described by Epley, Waytz and Cacioppo (2007), who hypothesized several key psychological determinants of anthropomorphism. One of these determinants has already been discussed earlier in this paper, namely effectance motivation.

Specifically, effectance motivation is defined as the motivation to interact effectively with one’s environment by understanding it and reducing uncertainty about it (White, 1959). Interacting with nonhuman agents such as technology can lead to feelings of uncertainty,

especially when technology is not functioning properly (Luczak, Roetting & Schmidt, 2003).

According to Epley, Waytz and Cacioppo (2007), knowledge about humans in general, and

about the self in particular, serve as a readily available heuristic for rationalizing the

behaviour of nonhuman agents. Since self-knowledge is developed in childhood before

knowledge about others, it is more readily accessible and more detailed than other-knowledge

(Epley, Waytz, Akalis & Cacioppo, 2008). Epley, Waytz and Cacioppo (2007) pose that this

readily available information about humans in general, and self-knowledge in particular is

used to reduce uncertainty while interacting with nonhuman agents in order to rationalise their

behaviour. Anthropomorphism should therefore increase when people are faced with uncertainty, because uncertainty heightens effectance motivation.

Specific characteristics of nonhuman agents such as robots can increase effectance

motivation. One of these characteristics is the apparent predictability of the nonhuman agent

(Epley, Waytz & Cacioppo, 2007). Therefore, interacting with a low controllable agent could

increase effectance motivation and subsequent anthropomorphism compared to interacting

with a highly controllable agent, since lacking control creates uncertainty about the agent's behaviour (Epley, Waytz, Akalis & Cacioppo, 2008). The findings of a study by Waytz,

Heafner and Epley (2014) support the idea that lacking control over an agent can lead to

increased anthropomorphism. In their experiment, participants using a driving simulation

drove either a normal car or an autonomous car, capable of controlling its own speed and

steering. They found that people were significantly more likely to attribute humanlike mental capacities to the autonomous car than to the normal car. Similarly, earlier studies on the influence of effectance motivation on anthropomorphism showed that people

were significantly more likely to perceive a robot as possessing humanlike traits (Eyssel &

Kuchenbrandt, 2011) as well as having a mind of its own (Eyssel, Kuchenbrandt & Bobinger,

2011) when they expected to interact with an unpredictable robot compared to a predictable

robot. Therefore, we pose the following hypothesis:

H4: Having low control over a social robot during human-robot interaction will lead to higher levels of anthropomorphism than having high control over the robot, meaning that it will lead to a) higher levels of mind attribution and b) higher perceived human likeness

The second robot characteristic discussed in this paper, the robot's vocal expression, is also expected to influence anthropomorphism for several reasons. As stated previously,

speech and vocal expression is the main mode of communication for humans (Hargie, 2010).

Therefore, it is perceived as a humanlike feature when applied to robots (Fink, 2012).

Furthermore, the ability of a robot to vocally express itself can give an impression that the

robot possesses some level of intelligence, independent thought and even emotions (Beer,

Fisk & Rogers, 2011). The idea that vocal expression can influence anthropomorphism has

also been supported by scientific research. For example, Eyssel et al. (2012) conducted an experiment in which they manipulated the robot's voice in such a way that it sounded either humanlike or robotlike and found that people were more likely to attribute mind to a robot

with a humanlike voice than a robotlike voice. Furthermore, Waytz, Heafner and Epley

(2014) found that an autonomous vehicle was anthropomorphised more when it had a female

voice than when it had no voice. In these studies, the effect of vocal expression on

anthropomorphism was tested for both a robot that seemed to be functioning autonomously

and an autonomous driving car. Vocal expressions in these studies were therefore not caused by an external force such as a human controller. In the same way that motion has to be driven by an internal rather than an external force in order to create

perceptions of animacy (Poulin-Dubois, Lepage & Ferland, 1996), vocal expressions might

influence anthropomorphism under the assumption that they are caused by an internal rather than

an external force. Accordingly, we pose the following hypothesis:

H5: Interacting with a low controllable social robot with the capability of vocal expression will lead to higher levels of anthropomorphism, including a) more mind attribution and b) higher perceived human likeness, compared to interacting with a low controllable social robot without the capability of vocal expression.

So far, we have discussed how the controllability of a robot is expected to affect the utilitarian responses perceived ease of use and perceived task performance, and the hedonic responses perceived animacy and anthropomorphism. Furthermore, we have discussed how the robot's capability of vocal expression is expected to affect perceived animacy and anthropomorphism. The next section explains how these different responses are

related to the acceptance of the robot.

Influences on Acceptance of the Social Robot

The Technology Acceptance Model (TAM: Davis, 1989) defines acceptance as a

combination of attitudes, intentions and behaviours towards technology. Heerink et al. (2008)

pose that the acceptance of robots consists of their functional acceptance, but also their

acceptance as conversational partners with whom humans could build a potential relationship.

The section below describes how the utilitarian and hedonic responses discussed above are

expected to influence robot acceptance.

Influences of Utilitarian Responses. The TAM predicts that perceived usefulness and perceived ease of use of some form of technology predict the attitude towards using the

technology and eventually the intention to use the technology (Davis, 1989). These predictions can be explained by Social Cognitive Theory (Bandura, 2004). Social Cognitive Theory describes important mechanisms that influence whether people will adopt certain

behaviour. According to Bandura (2004), one of the important mechanisms that influences

whether behaviour is adopted are the outcome expectancies. Specifically, people will be more

likely to adopt new behaviour when they believe the behaviour will result in positive

outcomes, such as enhanced performance. According to Davis (1989), perceiving a system as

useful results in positive outcome expectancies about using the system, and will therefore

increase the likelihood someone will use a system.

The TAM has been widely applied and proven to be effective in predicting technology

acceptance in various fields (Lee, Kozar & Larsen, 2003). Since perceived usefulness is defined as the belief that using certain technology would enhance task performance, in the case of robots, perceived usefulness relates to how well the robot performs on various

tasks (Beer, Mitzner, Prakash & Rogers, 2011). Findings of scientific research support the

importance of perceived usefulness in predicting robot acceptance. For example, de Graaf and

Allouch (2011) found that the perceived usefulness of a robot indeed significantly predicted

the attitude towards use. Furthermore, Heerink et al. (2010) found that usefulness predicted

both the intention and the actual use of a social robot. We therefore pose the following

hypothesis:

H7: Perceived task performance has a positive effect on robot acceptance.

The second predictor of acceptance in the TAM is perceived ease of use, which has also been successfully used in scientific research to predict robot acceptance. The prediction

that ease of use positively affects acceptance is based on the concept described in the Social

Cognitive Theory as self-efficacy. Self-efficacy is defined as the belief in one’s ability to

perform a behaviour (Bandura, 2004). According to Social Cognitive Theory, self-efficacy is

one of the most important mechanisms that determine whether someone will adopt certain behaviour (Bandura, 2004). Perceiving a system as easy to use increases self-efficacy beliefs, and should therefore have a positive

effect on the acceptance of the technology.

The importance of perceived ease of use in the acceptance of robots has also been

shown by scientific research. A survey by Ezer, Fisk and Rogers (2009) showed that

perceived ease of use of a robot was able to predict the attitude towards accepting the robot in

the home environment for both younger and older adults. Furthermore, a series of experiments

by Heerink et al. (2009; 2010) showed that perceived ease of use significantly predicted the

intention to use the social robot I-cat. Accordingly, we pose the following hypothesis:

H6: Perceived ease of use has a positive effect on robot acceptance.

Although the TAM owes its popularity mainly to the fact that it can be applied in

various fields, it has been criticised for focusing only on two utilitarian factors of acceptance

(Beer, Mitzner, Prakash & Rogers, 2011). In reality, hedonic responses also play a role in the

acceptance of social robots (de Graaf & Allouch, 2011). These responses are discussed below.

Influences of Hedonic Responses. The two hedonic responses discussed in this paper are perceived animacy and anthropomorphism. As discussed previously, anthropomorphism

serves as a function through which uncertainty about nonhuman agents such as robots can be

reduced and communication with them can be facilitated (Luczak, Roetting & Schmidt, 2003;

Epley, Waytz & Cacioppo, 2007). It allows people to establish a humanlike connection with

robots (Epley, Waytz, Akaliz & Cacioppo, 2008). According to Höflich (2013), perceiving a

robot as more humanlike increases familiarity, and familiarity increases liking. Findings of

scientific research support this idea. For example, Waytz, Heafner and Epley (2014) found

that mind attribution to an autonomous vehicle significantly predicted trust in that vehicle.

They also found that people were less likely to blame the car for a crash when they attributed mind to the car. Furthermore, research has found that people are willing to spend more time with robots that they anthropomorphise (Eyssel & Kuchenbrandt, 2012; Eyssel, Kuchenbrandt & Bobinger, 2011; 2012). Furthermore, de Graaf

and Allouch (2013) found that perceived human likeness significantly predicted whether

people viewed the robot as a potential friend. Overall, these studies show that

anthropomorphism can result in more positive attitudes towards robots. We therefore pose the

following hypothesis:

H8: Anthropomorphism has a positive effect on robot acceptance, meaning that a) mind attribution and b) perceived human likeness positively affect robot acceptance.

Finally, even though robots may not always be designed to look humanlike (for example

robotic dogs), robot designers do often try to design their robots as lifelike as possible. This is

because lifelike creatures have the power to involve people emotionally (Bartneck, Kulic & Croft, 2009). Studies show that a robot that is perceived as animate can also

be perceived as more intelligent, and that this holds true even if the robot does not resemble a

human (Bartneck, Kanda & Mubin, 2009; de Graaf & Allouch, 2011). Libin and Libin (2004)

furthermore found that lifelike robots were perceived as friendly companions. Since perceived

animacy can lead to emotional engagement, it may facilitate acceptance in a similar way as

anthropomorphism: by creating a connection between the human and robot that is perceived

as real (Epley, Waytz & Cacioppo, 2007). We therefore pose the following hypothesis:

H9: Perceived animacy has a positive effect on robot acceptance

In sum, we expect that high control over a robot will lead to higher perceived task

performance and perceived ease of use than low control. However, we expect that having low

control over the robot will lead to higher perceived animacy and anthropomorphism than

having high control. Finally, we expect that perceived task performance, perceived ease of

use, perceived animacy and anthropomorphism will positively affect robot acceptance. These

hypotheses will be tested in an experiment involving the manipulation of robot characteristics.


Method

Participants and Design

This study was conducted in April and May 2016 at the University of Amsterdam. Ninety-two students (62 women, 30 men) were recruited at the University of Amsterdam to participate in a laboratory study on the evaluation of a small humanoid robot. Participants ranged in age from 18 to 34 years (M = 23.14, SD = 2.95). The study employed a one-factorial between-subjects design with three conditions (high controllability of the robot / low controllability of the robot / low controllability with vocal expressions). Participants were randomly assigned to one of these three conditions. The study

was granted ethical approval by the Ethical Committee of the Amsterdam School of

Communication Research at the University of Amsterdam.

Procedure

Participants registered for the study online, and were welcomed in a waiting room

upon arrival. Here, participants were informed that they would take part in a study about the

evaluation of a small humanoid robot. After signing the informed consent form, the

participant was accompanied by the researcher to the lab where the interaction with the robot

took place. The experimenter made sure that the robot was already switched on in the

appropriate mode before the participant entered the lab. In the lab, the experimenter explained

to the participant that the robot works on a self-balancing mechanism, meaning that the robot

would not fall over even if it was pushed. The experimenter demonstrated by giving the robot

a push. The participant was then invited to do the same. By inviting the participants to touch

the robot, this study followed other studies on robot interaction, which also invited

participants to touch the robot (de Graaf & Allouch, 2013; Nomura et al., 2008).

After the participant touched the robot, the experimenter explained to the participant

that he/she would perform a task with the robot, namely making the robot follow a short parcours. In the high controllability condition, participants were allowed to practice with the controls, so

that they would feel in control of the robot while performing the task. Participants in the low

controllability conditions were not allowed to practice with the controls. After participants

completed the task with the robot, they were guided to a separate room where they filled in a

questionnaire.

Stimulus Material

In this study, participants interacted with the entertainment robot MiP (short for

Mobile Inverted Pendulum). MiP is a small humanoid robot and has six pre-programmed

control modes, including gesture control mode, free roaming mode and remote controlled

mode. The robot is also equipped with an IR sensor, which allows it to detect obstacles. MiP

furthermore has the ability to produce vocal expressions such as whistling sounds, happy

sounds, surprised sounds and sad sounds. Figure 1 shows an illustration of the robot.

In order to manipulate controllability of the robot, MiP was switched to different

operating modes in the high and the low controllability condition. In the high controllability

condition, participants controlled the robot with an iPad. This allowed participants to have full control over the robot’s movements. Participants were able to make the robot go forwards, backwards and turn in different directions at will.

In the two low controllability conditions on the other hand, the robot was switched to ‘free roaming mode’. This mode allows MiP to explore its environment without being

controlled by an external force. When switched to this mode, the robot will move forward and

turn in other directions independently. Its IR sensors allow it to detect and respond to

obstacles in the environment. Upon detection, the robot stops and turns in another direction. However, the direction in which the robot moves and turns cannot be controlled. This means that in the low controllability conditions, participants could only manipulate the robot’s movement by placing their hand in front of the robot’s IR sensors to make it turn in another

direction, but they had no control over the direction in which the robot would turn and drive

towards as a consequence. Furthermore, the robot also changed directions independently at

random moments, making it more difficult for participants to control the direction in which

the robot was driving. A pre-test (N = 15) showed that those in the low controllability

condition (N = 7) felt significantly less in control over the robot than those in the high

controllability condition (N = 8), t(13) = 2.4, p = .032, d = 1.24.
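The pre-test comparison can be reproduced in outline with an independent-samples t-test and a pooled-SD Cohen's d. The ratings below are hypothetical (the raw pre-test scores are not reported); only the procedure mirrors the one described above:

```python
import numpy as np
from scipy import stats

# Hypothetical perceived-control ratings; the raw pre-test scores are
# not reported in the text, only the resulting t-test and effect size.
high_control = np.array([6.0, 5.5, 6.5, 5.0, 6.0, 5.5, 6.5, 6.0])  # n = 8
low_control = np.array([4.5, 4.0, 5.0, 3.5, 4.5, 4.0, 5.0])        # n = 7

t, p = stats.ttest_ind(high_control, low_control)

# Cohen's d using the pooled standard deviation.
n1, n2 = len(high_control), len(low_control)
pooled_sd = np.sqrt(((n1 - 1) * high_control.var(ddof=1)
                     + (n2 - 1) * low_control.var(ddof=1)) / (n1 + n2 - 2))
d = (high_control.mean() - low_control.mean()) / pooled_sd
```

With these toy numbers the difference is of course larger than in the actual pre-test; the point is only the computation of t, p and d.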

In order to manipulate vocal expression, two different low controllability conditions were employed. In the first low controllability condition the robot’s vocal expressions were turned off (as in the high controllability condition), whereas the robot’s vocal expressions were turned on in the second low controllability condition. In the latter condition, the robot

would make a whistling sound at the start of the interaction and would make content

humming sounds every now and then throughout the interaction. If the robot detected an object (such as when participants blocked the robot’s IR sensors to make it turn), it would make a surprised sound while turning. Furthermore, if MiP fell over (for example if it was pushed too hard), it would make a sad sound. All these vocal expressions were absent in the other low controllability condition and in the high controllability condition.

Measures

After participants completed the interaction with the robot, they filled out a

questionnaire. The questionnaire started with thanking the participants again for their interest

in the study and asking some demographic questions, namely age, gender and nationality.

Then, the utilitarian responses task performance and ease of use were measured. The hedonic

responses animacy, mind attribution and finally human likeness were measured next. Finally,

the acceptance of the robot was measured. How these concepts were operationalized will be

described below. For the complete measurement instruments, see the appendix.

Perceived task performance. To measure the extent to which participants perceived the robot to have performed well on the task (following a certain parcours), we asked them to

what extent they believed the robot succeeded in completing the task. The scale consisted of four items, including “I think the robot performed well on the task” and “I think the robot succeeded in performing the task”. All items were rated on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). Because the scale was newly created, a

principal component analysis was conducted in order to check if the items formed a unidimensional scale. The analysis showed that the four items indeed formed a single unidimensional scale: only one component had an eigenvalue above 1 (eigenvalue 3.21) and there

was a clear cut-off point in the scree plot after one factor. Internal consistency of the scale

was good (Cronbach’s alpha = .92).
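As an aside, the internal-consistency statistic reported throughout this section can be computed directly from the item scores. A minimal sketch with hypothetical ratings (not the actual data of this study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Hypothetical 7-point ratings of four respondents on a four-item scale.
ratings = np.array([
    [7, 6, 7, 6],
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [6, 5, 6, 6],
], dtype=float)
alpha = cronbach_alpha(ratings)
```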

Perceived ease of use. Perceived ease of use was measured using a scale by Heerink et al. (2010). This scale was developed to measure the perceived ease of use of robots

specifically. The scale consists of five items, including “I think I will know quickly how to use the robot” and “I find the robot easy to use”. Items were measured along a 7-point scale


ranging from 1 (strongly disagree) to 7 (strongly agree). Internal consistency of the scale was good (Cronbach’s alpha = .80).

Animacy. Animacy was measured using a semantic differential scale developed and validated by Bartneck, Kulic & Croft (2009). The scale was developed to measure the

animacy of robots specifically. The scale consisted of six items, including dead/alive and

artificial/lifelike. Participants rated the items along a 7-point scale. Internal consistency of the scale was sufficient (Cronbach’s alpha = .74).

Mind attribution. Mind attribution was measured using an adapted version of the mind attribution scale by Kozak, Marsh and Wegner (2006). Participants rated the robot on 10 mental capacities such as “this robot can experience pain”, “this robot is capable of emotion”, “this robot has the capacity to plan actions” and “this robot is capable of doing things on purpose”. Participants rated the items along a Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). The original scale consists of two subscales. However, this study

follows previous research where the scales were combined into one (Eyssel et al., 2012;

Waytz, Heafner & Epley, 2014). The internal consistency of the overall scale was excellent (Cronbach’s alpha = .90).

Human likeness. Human likeness was measured using a semantic differential scale developed and validated by Bartneck, Kulic & Croft (2009), which was developed

specifically to measure the human likeness of robots. Examples of items include fake/natural

and machinelike/humanlike. Participants rated the items along a 7-point scale. The complete

original scale consisted of five items and had a Cronbach’s alpha of .75. However, the

reliability analysis showed that removing one item (moving rigidly/moving elegantly)

increased the reliability of the scale. This item was removed, since the robot also moved slightly differently when controlled manually (high controllability condition). The final scale therefore consisted of four items; the internal consistency of this scale was acceptable (Cronbach’s alpha = .76).

Acceptance of robot. Heerink et al. (2008) propose that robot acceptance includes functional and social acceptance. Therefore, we measured both the attitude towards using the

robot (functional acceptance), as well as the extent to which the robot was perceived as a

potential friend or companion (social acceptance). Attitude towards use of the robot was

measured using a scale by Heerink et al. (2010). The scale consisted of four items, including “I think it’s a good idea to use the robot” and “It’s good to make use of the robot”. Items were measured on a Likert scale, ranging from 1 (strongly disagree) to 7 (strongly agree). Internal consistency of the scale was good (Cronbach’s alpha = .88).

To measure the extent to which people accepted the robot as a companion, we asked

participants whether they could view the robot as a potential friend. To do so, a scale used by Lee et al. (2006) was used. The scale consisted of three items, including “I think I could spend a good time with this robot” and “I think this robot could be a friend of mine”. Participants rated the items along a 7-point Likert scale, ranging from 1 (strongly disagree) to 7 (strongly agree). Internal consistency of the scale was sufficient (Cronbach’s alpha = .75).

Results

Analytical Approach

As a first step, ANOVAs and a chi-square test were conducted to check if the random

assignment of the participants to the three experimental conditions was successful in terms of

age and gender. Then, a one-way MANOVA was performed to see whether controllability

and vocal expressions of the robot affected robot acceptance directly. Using Helmert

contrasts, we first compared the high control condition to the two low control conditions

together to test the effect of controllability of the robot. Then, the two low control conditions were compared with each other to test the effect of the robot’s vocal expressions. This same


procedure was used to test the effects of controllability and vocal expressions of the robot on

the utilitarian and hedonic response factors in a MANOVA. Additional analyses tested if

effects changed after controlling for age and gender. Finally, the whole model including the

effects of the robot characteristics on the utilitarian and hedonic responses as well as all

effects on robot acceptance was tested using path analysis.
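The Helmert contrast logic described above amounts to multiplying a contrast-weight matrix with the vector of group means. A minimal sketch; the weights are the standard Helmert coding (not the exact SPSS output) and the means are those later reported in Table 1 for perceived task performance:

```python
import numpy as np

# Helmert contrast weights for the three conditions, in the order
# (high control, low control, low control + vocal expression):
# contrast 1 compares high control with the mean of the two low
# control groups; contrast 2 compares the two low control groups.
contrasts = np.array([
    [1.0, -0.5, -0.5],
    [0.0,  1.0, -1.0],
])

# Group means for perceived task performance, taken from Table 1.
group_means = np.array([5.85, 3.44, 3.33])
estimates = contrasts @ group_means
```

The two contrast rows are orthogonal, so the two comparisons are statistically independent.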

Randomization Checks

Randomization checks showed that there were no significant differences between the three conditions in terms of gender distribution, χ²(2) = 1.19, p = .553, and participants’ age, F(2, 89) = 1.41, p = .249. These results indicate that the random assignment of the

participants to the three conditions was successful in terms of age and gender.
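A randomization check of this kind boils down to a chi-square test on the condition-by-gender contingency table. A sketch with hypothetical cell counts (only the test statistics, not the counts, are reported above):

```python
import numpy as np
from scipy import stats

# Hypothetical (women, men) counts per condition; the real cell
# counts are not reported in the text.
counts = np.array([
    [21, 10],  # high control
    [20, 10],  # low control
    [21, 10],  # low control + vocal expression
])
chi2, p, dof, expected = stats.chi2_contingency(counts)
```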

Effects of Robot Characteristics on Robot Acceptance

First, an exploratory analysis tested whether controllability and vocal expressions

influenced the acceptance of the robot directly. A one-way MANOVA was conducted using

the attitude towards use and the extent to which participants perceived the robot as a potential

companion as dependent variables. The condition variable functioned as the grouping

variable. Using Pillai’s Trace, there was a significant effect of the robot characteristics on robot acceptance (Pillai’s Trace = .16, F(4, 178) = 3.75, p = .006, η² = .08).

The univariate analysis found moderate but only marginally significant differences

on the companionship scores between the groups, F(2, 89) = 2.82, p = .065, η² = .06. However,

there were no significant differences between the groups on the attitude towards use, F(2, 89) =

2.29, p = .107. Helmert contrasts were used to check if controllability and vocal expression of

the robot affected the attitude towards use and the extent to which the robot was perceived as

a potential friend or companion. To check the possible effect of controllability, the first

contrast compared the scores of the high controllability condition to the scores of the two low control conditions. The contrast showed that those in the high control condition scored significantly higher on attitude towards use than those in the low control

conditions (p = .038). However, there was no significant difference in companionship

between the high control and the two low control conditions (p = .848).

The second contrast compared the first low control condition (not including vocal

expressions of the robot) with the second condition of low control (including vocal

expressions of the robot), leaving the high control condition out of the analysis. This contrast

revealed that there was no significant difference in the attitude towards use between the two

low control conditions (p = .732). However, there was a significant difference in

companionship between the two low control conditions: those in the condition without vocal

expression scored significantly lower on the companionship scale than those in the condition

including vocal expression of the robot (p = .02).

Effects of Robot Characteristics on Utilitarian and Hedonic Responses

The same procedure was used to test the effects of controllability and vocal

expression on the utilitarian and hedonic responses. A one-way MANOVA was conducted

using perceived task performance, perceived ease of use, perceived animacy, mind attribution

and perceived human likeness as the dependent variables. The condition variable was used as the grouping variable. Using Pillai’s Trace, there was a strong, significant effect of robot characteristics on the utilitarian and hedonic responses (Pillai’s Trace = .74, F(10, 172) = 10.03, p < .001, η² = .37). Table 1 shows the mean scores of the groups for each variable.

Hypothesis 1 predicted that having high control over the robot would lead to higher

perceived task performance than having low control. The univariate analysis indeed revealed large, significant differences between the groups on perceived task

performance (F(2, 89) = 67.10, p < .001, η² = .60). In support of hypothesis 1, the first contrast (comparing the high control condition to the two low control conditions) showed that those in the high controllability condition scored significantly higher on perceived task performance than those in the low controllability conditions (p < .001), while there was no difference in

perceived task performance between the two low controllability conditions (p = .643).
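The reported effect sizes can be cross-checked against the F statistics: partial η² = F·df1 / (F·df1 + df2). A quick consistency check using the task-performance ANOVA above:

```python
# Partial eta squared recovered from a reported F statistic and its
# degrees of freedom: eta^2 = (F * df1) / (F * df1 + df2).
def partial_eta_squared(f: float, df1: int, df2: int) -> float:
    return (f * df1) / (f * df1 + df2)

# Task-performance ANOVA reported above: F(2, 89) = 67.10.
eta = partial_eta_squared(67.10, 2, 89)
```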

Hypothesis 2 predicted that having high control over the robot would lead to higher

perceived ease of use than having low control over the robot. The univariate analysis found

large significant differences between the groups on perceived ease of use (F(2, 89) = 16.10,

p < .001, η² = .27). The first contrast comparison showed that those in the high controllability condition scored significantly higher on perceived ease of use than those in the low

controllability conditions (p < .001). There was no significant difference in perceived ease of

use between the two low controllability conditions (p = .155). Hypothesis 2 is therefore

supported. Hypothesis 3a predicted that having low control over the robot would lead to

higher perceived animacy than having high control. Furthermore, Hypothesis 3b predicted

that interaction with a low controllable robot with the capability of vocal expression would

lead to higher perceived animacy than interaction with a low controllable robot without this capability. However, the univariate analysis found only small, non-significant differences between the groups on perceived animacy (F(2, 89) = 1.75, p = .18, η² = .05).

Table 1

Mean Scores with Standard Deviations of the High Control and Low Control Conditions

                          High control   Low control   Low control (vocal expression)
Dependent variable        M (SD)         M (SD)        M (SD)
Task performance          5.85 (.89)     3.44 (.98)    3.33 (1.00)
Ease of use               5.72 (.86)     4.30 (.95)    4.67 (1.19)
Animacy                   4.03 (1.04)    3.82 (.70)    4.30 (.77)
Mind attribution          1.91 (.70)     2.16 (.99)    2.79 (1.12)
Human likeness            2.45 (.89)     2.29 (.95)    2.90 (.88)
Attitude towards use      4.96 (1.16)    4.45 (1.20)   4.32 (1.33)

The first contrast comparison found no significant difference in perceived animacy between

the high control and the low control conditions (p = .868). Hypothesis 3a must therefore be

rejected. Results of the second comparison revealed that participants in the first low control condition, not including vocal expression of the robot, perceived the robot as significantly less animate than those in the second low control condition including vocal expression (p = .030). This result

supports hypothesis 3b.¹

Hypothesis 4 predicted that having low control over a robot would lead to higher

anthropomorphism than having high control over a robot. Hypothesis 4a specified that having

low control over the robot would lead to higher mind attribution than having high control over

the robot. Furthermore, hypothesis 5a predicted that vocal expressions of the robot would increase mind

attribution. The univariate analysis found large significant differences between the groups on

mind attribution (F(2, 89) = 6.35, p = .002, η² = .14). Results of the first contrast comparison

showed that those in the low control conditions scored significantly higher on mind attribution

than those in the high control condition (p = .009). Hypothesis 4a is therefore supported.

Furthermore, the second contrast comparison showed that those in the low control condition

with vocal expressions of the robot scored significantly higher on mind attribution than those in the other low control condition not including vocal expression (p = .011). This supports

hypothesis 5a.²

Finally, hypothesis 4b predicted that having low control over a robot would lead to

higher perceived human likeness than having high control over a robot. The univariate

analysis found moderate significant differences between the groups on perceived human

¹ These results remained significant after removing outliers and controlling for age and gender.

² The results for mind attribution remained the same after controlling for age and gender.


likeness (F(2, 89) = 3.13, p = .026, η² = .08). However, the results showed that people did not perceive the robot as significantly more humanlike in the low control conditions compared to the

high control condition (p = .469). Hypothesis 4b must therefore be rejected. Hypothesis 5b

predicted that vocal expressions would increase perceived human likeness. The second

contrast showed that those in the low control condition including vocal expression of the

robot perceived the robot as significantly more humanlike than those in the low control

condition not including vocal expression of the robot (p = .009). This result supports

hypothesis 5b.³

Full Model

In order to test how the utilitarian and hedonic factors as well as controllability and

vocal expressions of the robot influenced the attitude toward using the robot and the extent to

which the robot was perceived as a potential friend, we employed a path analysis using

AMOS 23. Because of the sample size, the variables for the utilitarian and hedonic responses,

as well as the variables for attitude towards use of the robot and the extent to which the robot

was perceived as a friend were entered in the model as observed rather than latent variables.

In order to distinguish between the effects of controllability and the effect of vocal expression,

the low controllability condition not including vocal expression was used as the reference category. For

the other two groups (high controllability, low controllability with vocal expression), two

dummy variables were created and entered into the model. The first dummy variable

contained all participants that had high control over the robot. The second dummy variable

contained all participants that had low control over the robot with the capability of vocal

expression. Zero-order correlations between all variables entered in the model can be found in Table 2. The data showed no problems with multivariate normality: the critical ratio of Mardia’s coefficient stayed below the cut-off value of 1.96 (Mardia’s coefficient = -3.10, critical ratio = -1.06). Therefore, we could proceed with the analysis.

³ After removing one outlier and controlling for age and gender, the univariate ANOVA for perceived human likeness became only marginally significant, F(2, 84) = 2.07, p = .066. The effect of the condition variable remained significant, F(2, 84) = 3.30, p = .042. Results of the planned contrasts remained the same (contrast 1: p = .424, contrast 2: p = .018). Furthermore, a significant main effect of gender was found, F(1, 84) = 5.32, p = .023. Women were more likely to perceive the robot as humanlike (M = 2.64, SD = .89) than men (M = 2.26, SD = .84).
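The dummy coding described above (reference category: low control without vocal expression) can be sketched with pandas; the condition labels used here are hypothetical shorthand:

```python
import pandas as pd

# Hypothetical condition labels; the first low control condition
# ("low") serves as the reference category, as in the path model.
conditions = pd.Series(
    ["high", "low", "low_vocal", "low", "high", "low_vocal"],
    name="condition",
)
# One 0/1 column per condition, then drop the reference category.
dummies = pd.get_dummies(conditions, dtype=int).drop(columns="low")
```

Each remaining column then codes the contrast of that condition against the reference group.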

First, we estimated a model with all hypothesized paths from the dummy variables

representing high controllability and low controllability including vocal expression to the

mediators, and from the mediators to the indicators of robot acceptance attitude towards use

and companionship using maximum likelihood estimation. We furthermore estimated direct

paths from the dummy variables to the indicators of robot acceptance. We allowed the error

terms of the mediators to covary as well as the two error terms of the dependent variables.

The model had acceptable fit, although the value of the RMSEA was still above the desirable threshold (χ²(2) = 3.87, p = .144, χ²/df = 1.94, CFI = .99, RMSEA = .10, 90% CI [.00, .25]). The model showed that all direct paths between the dummy variables representing the robot characteristics and the indicators of robot acceptance were non-significant, except for the direct path between the dummy

variable for vocal expression and the attitude towards use (b* = -.21, p = .050).

Table 2

Means, Standard Deviations and Zero-Order Correlations

Variable            M      SD     1.     2.     3.     4.     5.     6.     7.     8.
1. High control
2. Vocal exp.                     .50**
3. Task perf.       4.19   1.50   .78**  .41**
4. Ease of use      4.89   1.17   .50**  -.13   .65**
5. Animacy          4.05    .86   -.02   .21*   .13    .26*
6. Mind attr.       2.29   1.01   -.26*  .35**  -.23*  -.05   .25*
7. Humanlike        2.55    .94   -.07   .27**  -.003  .16    .64**  .53**
8. Attitude use     4.57   1.25   .22*   .19    .33**  .36**  .38**  .14    .39**
9. Companion        3.29   1.28   .02    .20    .16    .40**  .43**  .27**  .48**  .52**

Note. * p < .05. ** p < .01.

We then used a nested model to test whether the three non-significant direct effects of the robot characteristics on the attitude towards use and companionship could be dropped. The

chi-square difference test showed that dropping these paths did not significantly deteriorate model

fit (Δχ²(3) = 2.01, p = .570). These paths were therefore removed from the model. The final model had good fit (χ²(5) = 5.88, p = .318, χ²/df = 1.18, CFI = 1.00, RMSEA = .04, 90% CI [.00, .16]) and is shown in Figure 2.
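The chi-square difference test reported above can be verified directly from the two models' fit statistics (the values are taken from the text):

```python
from scipy import stats

# Fit statistics reported in the text.
chi2_full, df_full = 3.87, 2        # model with the direct paths
chi2_nested, df_nested = 5.88, 5    # nested model without them

delta_chi2 = chi2_nested - chi2_full   # difference in chi-square
delta_df = df_nested - df_full         # difference in degrees of freedom
p = stats.chi2.sf(delta_chi2, delta_df)
```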

The final model found the same significant effects of the robot’s vocal expression on

perceived animacy (b* = .23, p = .037), mind attribution (b* = .28, p = .013), and perceived human likeness (b* = .28, p = .014) as were found in the ANOVAs. However, since the full

model compared the high control condition to the first low control condition (not including vocal expressions) only, the effects of the robot’s controllability differ in the final model from those in the ANOVAs. For this reason, the model does not show the significant effect of the level of control over the robot on mind attribution (b* = -.13, p = .259) that was found in the ANOVA. This is, however, not a problem when interpreting the effects of the utilitarian and hedonic responses on robot acceptance predicted in the hypotheses.

Figure 2. Final model with standardized estimates and significance levels (covariances between error terms not displayed for reasons of parsimony). χ²(5) = 5.88, p = .318, χ²/df = 1.18, CFI = 1.00, RMSEA = .04, 90% CI [.00, .16].

Hypothesis 6 predicted that perceived task performance has a positive effect on robot

acceptance. However, the model showed no significant effect of perceived task performance

on either attitude towards use (b* = .11, p = .355) or companionship (b* = -.09, p = .444).

Hypothesis 6 is therefore rejected. Hypothesis 7 predicted that perceived ease of use has a

positive effect on robot acceptance, but ease of use did not have a significant effect on the

attitude towards use (b* = .17, p = .142). However, it did have a significant positive effect on

the extent to which the robot was perceived as a potential friend (b* = .38, p < .001), showing

partial support for hypothesis 7.

Hypothesis 8 predicted that anthropomorphism has a positive effect on robot

acceptance. Hypothesis 8a specified that mind attribution has a positive effect on acceptance.

The model showed that mind attribution did not significantly affect the attitude towards use,

(b* = .07, p = .530) or the extent to which the robot was perceived as companion, (b* = .08,

p = .42). Hypothesis 8a is therefore rejected. However, perceived human likeness of the robot had a significant positive effect on both the attitude towards using the robot (b* = .28, p =

.027) and the extent to which the robot was perceived as a friend (b* =.28, p = .026). These

results support hypothesis 8b, which predicted that perceived human likeness has a positive

effect on robot acceptance.

Finally, perceived animacy was hypothesized to have a positive effect on robot

acceptance in hypothesis 9. However, no significant effects of perceived animacy were found on either the attitude towards use or the extent to which the robot was perceived as a potential friend (b* = .15, p = .185). Hypothesis 9 must therefore be rejected.

Surprisingly, the model also showed a significant negative direct effect of vocal expression on

the attitude towards using the robot (b* = -.24, p = .011).
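The standardized estimates (b*) reported for the path model correspond to regression weights computed on z-scored variables. A self-contained sketch on simulated data (the variable names and effect sizes below are illustrative only, not the thesis data):

```python
import numpy as np

def standardized_betas(X, y):
    """OLS betas on z-scored variables, analogous to the b* reported above."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

rng = np.random.default_rng(1)
n = 92
ease = rng.normal(4.9, 1.2, n)
humanlike = rng.normal(2.6, 0.9, n)
# Hypothetical outcome loosely mimicking the companionship measure.
companion = 0.4 * ease + 0.3 * humanlike + rng.normal(0, 1, n)
betas = standardized_betas(np.column_stack([ease, humanlike]), companion)
```

Because the variables are z-scored, no intercept is needed and the weights are directly comparable across predictors.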

Discussion and Conclusion

The goal of this study was to investigate the influence of two robot characteristics on

utilitarian and hedonic responses towards the robot and robot acceptance. The study showed

significant effects of the degree to which participants could control the robot during the interaction.

Having high control over the robot led to higher perceived task performance and perceived

ease of use than having low control. However, having low control over the robot during the

interaction led to higher mind attribution compared to having high control over the robot. Furthermore, significant effects of the robot’s vocal expressions were found. Interacting with a robot with the capability of vocal expression led to higher perceived animacy, higher mind

attribution and higher perceived human likeness compared to interacting with a robot without

this capability. We furthermore found significant effects of these responses on robot

acceptance: perceived ease of use and perceived human likeness were positively related to

robot acceptance. There was also a significant direct negative effect of vocal expressions on

the attitude towards using the robot.

Practical and Theoretical Implications

The findings of this study have important implications for research on the effects of

robot characteristics on psychological responses towards robots in human-robot interaction.

The effects of controllability and vocal expression found in this study were consistent with

previous research on mind attribution (Waytz, Heafner & Epley, 2014; Eyssel et al., 2012;

Eyssel & Kuchenbrandt, 2011). However, we did not find the expected effects on either

perceived animacy or perceived human likeness. The expectation that the level of control over the robot would influence perceived animacy was based on the premise that self-propelled motion creates perceptions of animacy (Poulin-Dubois, Lepage & Ferland,

1996). Even though this hypothesis has been shown to be valid in research on the perception

of animacy of geometric shapes (Scholl & Tremoulet, 2000), it did not hold up for the

self-propelled motion of a robot. One possible explanation could be that observing the movements

of abstract forms across a solid background in a video leaves more room for imagination and

interpretation about possible intentions of the object than observing the movements of a

concrete object. For example, a study by McAleer et al. (2004) found that people watching a

video where the visual cues of two people dancing were reduced to only white body

silhouettes across a black background felt more aroused compared to watching the actual

video of the two people dancing, as the former video was interpreted as two people fighting

rather than dancing.

The expectation that having low control over a robot would lead to higher perceived

human likeness of the robot than having high control was based on the theory of effectance

motivation (Epley, Waytz and Cacioppo, 2007), which states that people are more likely to

anthropomorphize when faced with uncertainty. However, this study found no effect of the level of control over the robot on perceived human likeness. An important explanation as to why we did not find this effect could be that weaker forms of anthropomorphism only include immediate behavioural reactions towards the non-human agent, treating it as if it were human.

This weak form of anthropomorphism does not include the actual endorsement of the belief

that a non-human agent possesses human qualities (Epley, Waytz, Akalis & Cacioppo, 2008).

The interaction with the robot might have only induced this weak form of anthropomorphism,

which would have led people to behave towards the robot as if it were human, but would have

not led people to actually believe that the robot possesses human qualities. Since this weaker form of anthropomorphism consists of behaviour, it cannot be detected through a questionnaire such as the one used in this study.


This study furthermore has implications for research on robot acceptance. One

important finding of this study is that perceived task performance and perceived ease of use

failed to predict the attitude towards use, as is predicted by the Technology Acceptance Model

(Davis, 1989). The TAM has been tested and validated in various fields, but is usually used to

predict the acceptance of pieces of technology functioning as tools to enhance task

performance (Lee, Kozar & Larsen, 2003). Social robots, however, are primarily hedonic rather than utilitarian products: they serve hedonic purposes, such as enjoyment and company, rather than utilitarian purposes, such as the cleaning performed by a robotic vacuum cleaner (Lee, Shin & Sundar, 2011). The

fact that perceived task performance and perceived ease of use did not predict the attitude

towards using the robot indicates that people indeed did not view the robot as a tool with a specific utilitarian function. These findings therefore support the claim that hedonic

responses play an important role when evaluating acceptance of social robots (de Graaf &

Allouch, 2011).

However, it must be noted that perceived ease of use did predict the extent to which

the robot was perceived as a potential friend. An explanation as to why perceived ease of use

did influence the acceptance of the robot as a friend but not the attitude towards use as

predicted by the TAM could be provided by the self-determination theory (Ryan, Rigby &

Przybylski, 2006). The self-determination theory describes what factors predict intrinsic

motivation (i.e., motivation derived from the satisfaction of performing an action itself),

which is the main motivation underlying play. According to the self-determination theory,

feelings of competence can increase intrinsic motivation. In a series of experiments, Ryan,

Rigby and Przybylski (2006) found that intuitive controls in video games increased feelings of

competence, which in turn increased enjoyment and, most importantly, increased preference for future play. Similarly, participants in this study who perceived the robot as easy to use may have experienced increased feelings of competence, which in turn may have led to increased enjoyment and a preference for continued play with the robot.

Another important finding relating to robot acceptance is that only perceived human

likeness significantly influenced the attitude towards using the robot and the extent to which

the robot was perceived as a potential friend, whereas perceived animacy and mind attribution

did not. One possible explanation for this finding may be that people made their judgements

about the human likeness of the robot solely based on perceptual cues such as its vocal

capabilities and its appearance, which somewhat resembled a human (see figure 1). After all,

perceived human likeness of the robot was influenced by the robot’s vocal expressions but not

by the level of control over the robot. Most designers choose to design the appearance of the

robot in a humanlike way because it increases familiarity and thereby has a positive influence

on acceptance (Fong, Nourbakhsh & Dautenhahn, 2003; Höflich, 2013). Therefore, only

perceptual cues about the robot might have influenced robot acceptance, but not the beliefs

about its animacy or its cognitive capabilities. That cognitive beliefs did not influence robot acceptance may relate to the strength of those beliefs. As discussed

previously, weaker forms of anthropomorphism do not include the actual belief that a

nonhuman agent has humanlike qualities, such as free will and emotions (Epley, Waytz,

Akalis & Cacioppo, 2008). Bartneck, Kanda, Mubin and Mahmud (2009) similarly propose that the perception of animacy is gradual: rather than judging an agent as either alive or dead, people may place it in a category of "sort of alive". Perceptions of animacy may therefore also exist in weaker and stronger forms. Taken together, the humanlike appearance of the robot might have increased familiarity and thereby acceptance (Höflich, 2013), whereas the beliefs about the robot's cognitive capabilities and its animacy may have been too weak to influence the acceptance of the robot.
