
Personified digital assistants: the effects of the level of personification and gender on users' perceived trust, enjoyment and intention to adopt


Academic year: 2021




Master Thesis

Personified digital assistants:

The effects of the level of personification and gender on users' perceived trust,

enjoyment and intention to adopt.

Author

Thijs Bokhorst

11649992

Supervisor

Roger Pruppers


Abstract

Automation has seeped into everyday life: the number of people with smartphones has skyrocketed, and people rely on digital processes to help them in their daily lives. But how can these digital processes be optimized to improve consumers' experiences? Personification, the process of making non-human objects more humanlike, is used extensively to make robots and digital assistants or digital agents more approachable to users. This thesis examines the effects of the level of personification, the gender of the agent, and the gender of the user on users' perceived trust, enjoyment and adoption intention. For the research, an automated personified digital assistant in the form of Zazu was used. The results showed no significant effects of the level of personification, the gender of the agent, or the gender of the user on respondents' perceived trust, enjoyment or adoption intention. Considering these results, practitioners should reconsider whether they want to spend substantial effort and resources on making certain services more humanlike. Other possible explanations for the findings are also discussed.

Statement of Originality

This document is written by student Thijs Bokhorst who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.

(3)

Table of Contents

1 Introduction ... 1

1.1 Phenomenon: emergence of automation and digital assistants ... 1

1.2 Gap: what are the real effects of personification in a digital assistant context? ... 2

1.3 Problem statement ... 3

1.4 Delimitations ... 5

1.5 Contributions ... 5

1.6 Structure ... 6

2. Automation ... 7

2.1 Overview automation ... 7

2.2 Issues of automation ... 8

3. Personification and anthropomorphism ... 10

3.1 Overview personification and anthropomorphism ... 10

3.2 Drivers of anthropomorphism ... 11

3.3 Drivers of personification and human-computer interaction ... 12

3.4 Levels of personification ... 13

3.5 Effects of personification: overview ... 14

3.6 Effects of personification: trust ... 14

3.7 Effects of personification: enjoyment ... 16

3.8 Effects of personification: adoption intention ... 17

4. Gender ... 18

4.1 Theories on gender ... 18

4.2 Gender and personified digital assistants ... 19

4.3 Gender similarity theory ... 21

5. Conceptual models & hypotheses ... 23

5.1 Conceptual model 1: level of personification ... 23

5.2 Conceptual model 2: gender of the agent ... 27

6. Method ... 31

6.1 Research design ... 31

6.2 Pre-test ... 34

6.3 Procedure ... 37

6.4 Measures ... 38

6.4.1 Manipulations ... 38

6.4.2 Dependent variables ... 39

7. Results ... 41


7.1 Respondents ... 41

7.2 Data preparation ... 41

7.3 Manipulation checks ... 43

7.4 Hypothesis testing: conceptual model 1 ... 51

7.5 Further analysis: conceptual model 1 ... 54

7.6 Hypothesis testing: conceptual model 2 ... 58

7.7 Further analysis: conceptual model 2 ... 61

8. Discussion ... 66

8.1 General discussion: manipulation checks ... 66

8.2 General discussion: level of personification ... 67

8.3 General discussion: gender ... 70

8.4 Theoretical implications ... 71

8.5 Managerial implications ... 73

9. Conclusion ... 75

9.1 Summary ... 75

9.2 Limitations and future research ... 76

9.3 Recommendations ... 77

References ... 79

Appendix ... 89

Appendix A: Statistics ... 89

Appendix B: Images ... 91

Appendix C: Zazu ... 92

Appendix D: Survey ... 96


1 Introduction

1.1 Phenomenon: emergence of automation and digital assistants

Over the last few decades, major developments in technology have replaced humans with machines in a range of activities. This replacement has been prevalent in almost every industry. For instance, most car manufacturers have replaced a significant number of human workers with robots or other programmable machines. Beyond such machines, other technology keeps developing at a fast rate, and automation will become an even larger part of people's personal lives. Forbes suggests that by 2020, the average person will have more conversations with robots and digital services than with their spouse (Galer, 2017). Automated applications are also taking a dominant position in our more direct and personal lives. Estimates indicate that by the end of 2017, over 2.32 billion people owned a smartphone (Smartphone users, 2017). Many of these smartphones come with digital personal assistants installed by their manufacturers, alongside hundreds of other applications that can be accessed through them.

Digital personal assistants have been common for several years. Siri, the intelligent personal assistant from Apple, has been incorporated in every iPhone since 2011. Competitors such as Microsoft, Amazon and Google are also developing digital personal assistants. These assistants are mostly directed by voice control, and they try to answer a wide array of questions and follow basic instructions. Producers launched these applications and devices as supposedly incredible inventions, made to simplify every user's life. Other personal assistants have a more narrow set of activities, such as IPSoft's cognitive knowledge worker Amelia, which can handle emails and telephone calls. Two other prime examples of automation in a more personal setting are the New York startup X.ai and the Amsterdam-based Zazu, which are developing digital personal assistants that specialize in setting up meetings between users.

1.2 Gap: what are the real effects of personification in a digital assistant context?

Personification of the user interface has been incorporated in many digital personal assistants, such as Apple's Siri and Zazu's scheduling assistant. Personification has also been used extensively in marketing and branding. Personification is the practice of making nonhuman objects more humanlike (as cited in Delbaere, 2011). The use of personification has yielded positive effects. For example, anthropomorphism can lead to greater brand love (Rauschnabel & Ahuvia, 2014) and increased trust (Waytz, Heafner, & Epley, 2014), and it helps humans interact with machines, among other demonstrated positive effects (Złotowski, Proudfoot, Yogeeswaran, & Bartneck, 2015). Research on personification in a robotic context has also found positive results, such as increased familiarity with the robot and an increased sense of use of the robot (Choi & Kim, 2009; De Graaf & Allouch, 2013). The findings from these studies may encourage the use of personification. However, much of the research on personification and automation concerns interaction with physical robots (Desai, Stubbs, Steinfeld, & Yanco, 2009; Eyssel & Hegel, 2012), and digital assistants are a different context: these assistants are not physical and are thus less pervasive in their personification.

Other research has been done on personification in digital agent contexts. Digital agents are digital representations of computer programs that have been designed to interact with, or on behalf of, a human (Bailenson, Swinth, Hoyt, Persky, Dimov, & Blascovich, 2005). Both positive (Lester & Stone, 1997; Walker, Sproull, & Subramani, 1994; Koda, 2003; Moundridou & Virvou, 2002; Maes, 1994) and negative (David, Kline, & Cai, 2007) outcomes of the practice have been found. Other research found no significant differences between anthropomorphic and non-anthropomorphic agents (Van Mulken, André, & Müller, 1998; Xiao, Catrambone, & Stasko, 2003). Different effects of personified feedback were found in different contexts, such as travel reservations (Murano, Gee, & Holt, 2011) and auctions (Murano & Holt, 2011). Murano therefore proposes studying multiple contexts to uncover the true effects of personification. More importantly, however, these studies focus on personified feedback: the personified agents assist in activities that still have to be performed by the user. In the case of, for example, a digital scheduling assistant, the agent schedules the meeting for the inviter and invitee instead of these people themselves. In such a context, where the agent performs the activity, personification may become more effective and important, as people have to trust the process behind the agent to schedule the meeting.

1.3 Problem statement

Most personal digital assistants are still seen as similar to technologies like home automation and virtual reality: popular enough to have seeped into our lives, but not yet refined enough to have become irreplaceable for most of us (Moren, 2016). But the hesitation to adopt a digital assistant may also result from the resistance to artificial intelligence (AI) that is prevalent among many people. To improve the perception of automated machines or applications, designers and developers have been known to use personification in their products and services. But it has not yet been proven whether personification really has the desired outcome. It is also important to examine which level of personification has the most positive effects, because advances in technology make ever higher levels of personification possible.


Besides the level of personification, gender also has a sizeable impact on the effect of personification of a digital service or agent. If an agent or product is highly anthropomorphized by a human, and thus seen as having human attributes, the agent is perceived as either male or female. Large corporations that use anthropomorphism in their products or services differ in their use of gender. For example, Apple's popular digital assistant Siri is undeniably female. Google Assistant, on the other hand, is less anthropomorphized than Siri. An example of a male anthropomorphized service is the helpdesk of Dutch e-retailer Bol.com (see Appendix B). Apparently, these three companies employ gender differently in their anthropomorphized services. The question therefore arises which configuration of gender is best suited for which consumers.

Alongside the gender of the agent, the gender of the user also affects the user experience. Men and women can respond differently to an anthropomorphized service or agent. Women are known to be more relationship-oriented than men (Chodorow, 1999). Because an anthropomorphized service simulates some kind of relationship, women may be more responsive to anthropomorphized services than men. Men and women may also respond differently to the gender of the anthropomorphized service. Many services that are now digitized were previously performed by humans, and gender stereotypes may arise when looking at these services. In the case of an assistant, users might be more positive towards the service when it is portrayed as female, as this fits common stereotypes of women working as assistants (Forlizzi, Zimmerman, Mancuso, & Kwak, 2007).

To shed light on the effects of different levels of personification, the effects of the gender of the agent, and the moderating role of the gender of the user, the following research question is formulated:


What is the influence of personification and gender on consumer evaluations of digital (assistance) agents in an online service context?

1.4 Delimitations

It could be expected that older people would understand the process behind artificial intelligence less well than younger people, and would therefore possibly benefit more from personified agents than their younger counterparts. But while it would be interesting to see how the age of the user affects the user experience, age was not a primary theme in this study. The focus was also not on the occupation or education levels of respondents.

While much research regarding personification in the area of automation has been done on robots or services that help users with their activities, this thesis explicitly focuses on a service that performs an activity for the user instead of assisting the user with that task. This direction was chosen because personification research on robots and digital agents that assist, rather than performing the activity themselves, is quite extensive; in contrast, research on personification in fully automated digital assistants that replace users in certain activities, e.g. the scheduling of meetings, is scarce. By researching this specific context of personification it is possible to further deepen the knowledge on personification and gender, on which further research can be based.

1.5 Contributions

The intelligent virtual assistant market was worth over USD 800 million in 2015 and is expected to exceed USD 11 billion by 2024 (IVA market size, 2015). Even in such a promising market, there is still much disagreement within the area of personification: the scientific field cannot seem to reach agreement on the subject, and the major players in the market, such as Google and Apple, also differ in the design of their digital personal assistants. This study will try to add to the debate and provide more clarity for businesses on how to effectively personify personal digital assistants and subsequently foster product adoption.

Intensive research has been done on personification and anthropomorphism in robotic and digital agent contexts. This study tries to add to the field by assessing the results of personification in a new context, i.e. the scheduling assistant. This is a contribution because it will show whether the positive effects of personification found in robotics and automation also apply to this context. More importantly, this research will widen our knowledge of the effects of personification in digital services. As stated before, digital agents have been researched intensively. However, most of these studies focused on agents represented by 3D avatars that assisted in tasks, such as playing a videogame (Kim, Chen, & Zhang, 2016), decision support aids (Pak, Fink, Price, Bass, & Sturre, 2012) or online auctions (Murano & Holt, 2011). This study will add to the field of personification by assessing the effects of a personified agent of a service or process that is performed for the user, instead of one that merely assists the user.

1.6 Structure

This thesis will first provide a review of the relevant literature, in order to gain a better understanding of the different concepts. Subsequently, the research design and research methods will be discussed. Thirdly, the results of the study will be explained. Finally, a discussion of the results is provided, consisting of conclusions, implications and suggestions for future research.


2. Automation

To shed light on the empirical work that has addressed the relationship between anthropomorphized agents, trust, enjoyment, adoption intention and gender, the coming chapters provide an overview of the relevant literature on these subjects. First, the concept of automation and its specifics will be discussed. Subsequently, personification in general and the effects on trust, enjoyment and adoption will be reviewed. Finally, the effects of gender of the anthropomorphized services and users on these user experiences will be discussed.

2.1 Overview automation

The services provided by X.ai and Zazu are examples of automation. The classic aim of automation is to replace human manual control, planning and problem solving with automatic devices and computers (Bainbridge, 1983). More recently, automation has been described more narrowly as any sort of technology that actively selects data, transforms information, makes decisions, or controls processes (Lee & See, 2004). Automation can also refer to the full or partial replacement of a function previously carried out by a human operator. This implies that automation is not all or none, but moves across a continuum. Technical developments in computer hardware and software now make it possible to introduce automation into virtually all aspects of human-machine systems (Parasuraman, Sheridan, & Wickens, 2000). Automation in all its applications exhibits tremendous potential to extend human performance and improve safety. It is seen as a way to improve productivity and product quality and to reduce human labour costs and expenses.


2.2 Issues of automation

However, automation has, and maybe always will have, certain issues. One of these is the persistent requirement of a human factor in the loop. Even highly automated systems need human beings for supervision, adjustment, maintenance, expansion and improvement (Bibby, Margulies, Rijnsdorp, Withers, & Makarov, 1975; Endsley & Kiris, 1995). While humans are still needed, the role they play alongside the machine is a much more passive one. This passive role can have negative consequences for the humans supervising or using an automated machine. For example, increased automation of an industrial production system elevated subjective fatigue (Persson, Garde, Hansen, Ørbæk, & Ohlsson, 2003), and even short-duration automation duty cycles may affect fatigue (Scallen, Hancock, & Duley, 1995).

The increase in automation also leads to a decrease in operators' manual skills, because operators have fewer chances to practice those skills (Bainbridge, 1983; Baxter, Rooksby, Wang, & Khajeh-Hosseini, 2012). To reduce these issues in the future, the authors suggest that systems development should be an interdisciplinary endeavour: it should draw not only on software and hardware engineering, but also on psychology and sociology. The reasoning is that by involving the social disciplines, developers will take appropriate account of the people who will ultimately use their systems.

That psychology should play an important role in automation is underlined by Lee and See (2004). According to their article, automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. To achieve a high degree of trust among users, design errors, maintenance problems and unanticipated variability should be absent from the automation; the algorithms should be simpler and their operation revealed more clearly. Also, operators' understanding of how context affects the capability of the automation should be improved through training. Appropriate trust can lead to human-automation performance that is superior to the performance of either the human or the automation alone. These findings show the important influence of affect and emotions on human-technology interaction. Emotional response to technology is not only important for acceptance; it can also make a fundamental contribution to safety and performance.

These emotional responses may be influenced positively through the use of personification. Stadler and her colleagues (2013) researched users' expectations of both humanoid (personified) and industrial robots in a factory context. The comparison of their results showed that anthropomorphic elements in the design of robotic systems can also be imagined for industrial robots. More importantly, their results showed that industrial robots are perceived as objects, whereas humanoid robots invoke a kind of "buddy" feeling. The combination of a functional robot design with anthropomorphic elements could foster intuitive cooperation, co-working, and assistant duties, and the anthropomorphic elements may also help users feel more comfortable during "shoulder-to-shoulder" cooperation. Other research also shows that anthropomorphic robots facilitate human-machine interaction, create familiarity with a robotic system and build on established human skills developed in social human-human interactions (Złotowski, Proudfoot, Yogeeswaran, & Bartneck, 2015; Fasola & Mataric, 2012; Choi & Kim, 2009; Kim, 2009; Schmitz, 2011). The use of personified forms is also a powerful means of solving design problems and shaping our experience with products (DiSalvo & Gemperle, 2003). Consequently, personification can help overcome the problems that people can have with automation and improve cooperation with these products.


3. Personification and anthropomorphism

3.1 Overview personification and anthropomorphism

The robots used in Stadler's research and the examples of personal assistant devices or applications stated earlier (e.g. Siri and Cortana) all make use of personification and anthropomorphism in one way or another. Applications like Siri use humanlike voices when replying to requests, while X.ai's personal assistants introduce themselves as 'real humans' via text messages. Amelia even makes use of an avatar, claiming to be "your first digital employee".

Personification has been defined as a figure of speech in which inanimate objects are characterized in terms of human attributes, thus representing the object as a living and feeling person (Ricoeur, 1977, as cited in Delbaere, 2011). Personification can be understood through anthropomorphism, the cognitive bias whereby people are prone to attribute human characteristics to things. Personification is a message characteristic, an option that can be added to a message, while anthropomorphism is an inherent audience characteristic, one that allows this particular message option to be effective. In other words, personification is an aspect of a product or service that the developer can influence, whereas anthropomorphism stems from the user and depends on, for example, user characteristics and usage context.

The humanlike characteristics that are attributed may include physical appearance, emotional states perceived to be uniquely human, or inner mental states and motivations (Epley, Waytz, Akalis, & Cacioppo, 2008). The attribution of essential human characteristics, such as a humanlike mind capable of thinking and feeling, is exceptionally important (Waytz et al., 2014). Philosophical definitions of personhood focus on these mental capacities as essential to being human. Furthermore, research shows that people define humanness in terms of emotions that implicate higher-order mental processes, such as self-awareness and memory, and traits that involve cognition and emotion (Haslam, 2006; Leyens, Paladino, Rodriguez-Torres, Vaes, Demoulin, Rodriguez-Perez, & Gaunt, 2000).

3.2 Drivers of anthropomorphism

Humans are, at their core, social beings. We need other humans in our daily lives for reasons ranging from the practical to the existential. This need is so strong that people sometimes create humans out of non-humans through the process of anthropomorphism. Epley and colleagues (2007) proposed a three-factor theory of the motivation to anthropomorphize. The theory focuses on three psychological determinants: elicited agent knowledge, effectance motivation and sociality motivation.

According to Epley, elicited agent knowledge is the accessibility and applicability of anthropocentric knowledge. Knowledge about humans in general, or the self in particular, is likely to serve as the basis for induction primarily because such knowledge is acquired earlier and is more richly detailed than knowledge about nonhuman agents. If the knowledge and information about the agent is lacking, humans are more likely to use the human knowledge to judge the agent. As knowledge about nonhuman agents is acquired, however, knowledge about humans or the self should be less likely to be used as a basis for induction simply because the knowledge about the nonhuman agent is sufficient to pass judgment.

Elicited agent knowledge works in concert with effectance and sociality motivation. Sociality describes the need and desire to establish social connections with other humans. Anthropomorphism enables satisfaction of this need by creating a perceived humanlike connection with nonhuman agents. In the absence of social connection to other humans, people can create agents out of nonhumans through anthropomorphism to satisfy their motivation for social connection. This predicts that anthropomorphism will increase when people feel a lack of social connection to other humans and decrease when they feel a strong sense of social connection. A follow-up study supported this prediction (Epley et al., 2008): participants who felt more chronically disconnected were more inclined to create agents of social support by anthropomorphizing their pets.

Humans are generally motivated to feel competent by resolving uncertainty and gaining a sense of control over their environment. In the area of anthropomorphism, effectance involves the motivation to interact effectively with nonhuman agents and enhances a person's ability to explain the actions of the agent. Attributing human characteristics and motivations to nonhuman agents increases the ability to make sense of an agent's actions, reduces the uncertainty associated with the agent, and increases confidence in predictions of the agent's future behaviour. The anxiety associated with uncertainty about an agent's behaviour therefore influences people's tendency to anthropomorphize a nonhuman agent. In the same follow-up study noted earlier, Epley and his colleagues found experimental support for this suggestion: people anthropomorphized more when faced with an unpredictable agent.

3.3 Drivers of personification and human-computer interaction

This three-factor theory can also provide insights into the personification of digital assistants and human-computer interaction. The inner workings of most modern technological agents are opaque, while the incentives for understanding and effectively interacting with such agents are very high. This effectance motivation, coupled with a general lack of understanding, means that the tendency to anthropomorphize the workings of many nonhuman agents may be especially high. With regard to sociality motivation, facilitating anthropomorphism may increase the usefulness of technological agents by creating social bonds that increase a sense of social connection. Participants playing a desert survival task in one experiment reported feeling better understood in the task when more anthropomorphic faces and voices appeared in the interface (Burgoon, Bonito, Bengtsson, Cederberg, Lundeberg, & Allspach, 2000). Such social bonds are likely to be facilitated by increasing the extent to which a technological agent is morphologically similar to selected human features.

3.4 Levels of personification

An often assumed psychological process in people's interaction with computers and digital agents is that the more humanlike the computer representation is, the more social people's responses are, which in turn results in more effective human-computer interactions. That higher levels of personification lead to more positive outcomes is underlined by research from Gong (2008). The results of his study supported the often assumed linear relationship between the degree of personification of computer representations and people's social responses. When facial representations on computers progressed from low to medium to high personification and on to real human images, people gave them more positive social judgments, greater homophily attribution, higher competency and trustworthiness ratings, and were more influenced by them in choice-dilemma decision-making.

However, a text interface with no degree of anthropomorphism whatsoever was also included in the research, and, consistent with previous studies (Burgoon et al., 2000), the results showed relatively positive judgments of this text interface. These inconsistent results may be explained by a matching hypothesis that has emerged in the literature. The hypothesis proposes to congruently match the use or design of agents, and their degree of anthropomorphism, with the nature and goal of the human-computer interaction in order to achieve effectiveness and optimal performance (Goetz, Kiesler, & Powers, 2003). For example, an interaction task that does not involve a social orientation would not call for high anthropomorphism. It will be interesting to see how these findings may influence the results of this study.

3.5 Effects of personification: overview

The level of personification influences the effects of personified digital assistants on users. But what are these effects? Research has shown that personification can have very different effects on users. Examples are an increased sense of trust in, and perceived competence of, the agent (Gong, 2008; Waytz et al., 2014; Pak et al., 2012). Other reported effects are the aforementioned more positive social judgment and greater homophily attribution (Gong, 2008). Increased trust and perceived competence can be seen as functional effects, whereas the latter two are better described as emotional and affective. One functional effect that has been researched rather extensively is trust, which is closely linked to effectance motivation, as reducing uncertainty naturally fosters trust. An example of an emotional effect is the sense of enjoyment that users experience while using the personified digital agent. It is theorized that when these two effects (trust and enjoyment) are positively influenced by personification, this will lead to a higher intention to adopt the agent.

3.6 Effects of personification: trust

Research states that the level of trust that people have in an automated system is a key factor influencing their use of that system (Desai et al., 2009), their reliance on automation (Lee & See, 2004) and their intention to adopt it (Komiak & Benbasat, 2006). Fortunately, people are inclined to anthropomorphize in order to reduce uncertainty. Trust is a multifaceted concept that can refer to the belief that another will behave with benevolence, integrity, predictability, or competence (McKnight & Chervany, 2001). Trust has also been described as the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability (Lee & See, 2004).

According to Waytz et al. (2014), anthropomorphism is an important determinant of trust in any nonhuman agent. They researched the effects of anthropomorphism on trust in the use of autonomous vehicles, a setting where trust and uncertainty avoidance are especially important: as users place themselves in such a vehicle, they surrender control and have to depend on the vehicle to transport them safely. The researchers expected that people would trust technology to perform its intended function competently if the technology seemed to have humanlike mental capacities. This prediction was built on the common association between people's perceptions of others' mental states and competent action. Because mindful agents appear capable of controlling their own actions, people judge others to be more responsible for successful actions performed with conscious awareness, foresight and planning than for actions performed mindlessly. Attributing a humanlike mind to a nonhuman agent or service should therefore make the agent seem better able to control its own actions, and thus better able to perform its intended functions competently. The results showed that participants in the anthropomorphized autonomous vehicle condition judged the vehicle to be more competent than those in the non-anthropomorphized conditions, which led to higher trust in the autonomous vehicle. Other research on automated aids showed that increasing the humanness of the automation improved trust calibration and appropriate compliance (De Visser, Krueger, McKnight, Scheid, Smith, Chalk, & Parasuraman, 2012).

Gong (2008) researched whether increasing the human likeness of computer representations elicits more social responses from people. He found that as the agent increased in degree of human likeness, users showed more social responses. This in turn led to higher scores on trustworthiness. Similar positive effects of personification on trust were found in the context of digital recommendation agents (Qiu & Benbasat, 2009). Pak and colleagues (2012) also showed that the inclusion of an image of a person can significantly alter perceptions of certain devices. In their research on the effects of anthropomorphism on the use of automated decision support aids, they found that younger participants' trust was enhanced when using an anthropomorphic aid compared to an aid without human images. The same effect was not found with older participants. As the agent used was portrayed as a young female, the authors speculate that older adults' trust is less malleable in the presence of an agent that does not share the same age category as the user. This conclusion may confirm Nass and Lee's (2001) earlier results on digital similarity-attraction, which hold that humans are attracted to agents with similar attributes, just as they are in the non-digital environment.

3.7 Effects of personification: enjoyment

Another theorized effect of anthropomorphism, on a more affective level, is the user's experienced enjoyment in the use of a humanlike device or agent. Enjoyment is defined as the feelings of joy or pleasure that the user associates with the use of the robot or agent, aside from its utilitarian value. When people evaluate a social robot, their pleasure experiences may well influence user acceptance. Enjoyment appears to be a crucial variable for social robot acceptance, as it directly influences ease of use, use attitude and use intention of robots (De Graaf & Allouch, 2013). When looking at software agents, the same effects of anthropomorphism are found. Users' affective responses increase when computer games and multimedia tutoring systems are accompanied by an anthropomorphic agent, because the addition of such an agent increases the entertainment value of an interface (Dehn & van Mulken, 2000). Research in the field of computer assisted learning (Moundridou & Virvou, 2002) showed that anthropomorphized animated speaking agents increased the enjoyment students felt when they interacted with the system. Earlier research (Walker et al., 1994; Lester et al., 1997) found similar positive results in studies on animated pedagogical agents.

3.8 Effects of personification: adoption intention

Whilst it is important and interesting to look at the cognitive and affective effects of anthropomorphism on users, from a commercial perspective it is perhaps even more important to see how anthropomorphism ultimately leads to actual behavior, such as the adoption of a service or product. Findings from a study on personalized recommendation agents indicate that emotional trust significantly increases the intention to adopt the agent (Komiak & Benbasat, 2006). Users' intention to use a digital recommendation agent increased with the addition of personification, because it enhanced users' trusting beliefs and perceived enjoyment (Qiu & Benbasat, 2009). Other research (Featherman & Pavlou, 2003) showed that perceived risk is an important inhibitor for consumers in adopting an e-service and that this perceived risk can be reduced by increasing the sense of trust.

In addition, people are more likely to adopt technology and use it more extensively when they experience immediate pleasure or joy from using the technology and perceive any activity involving the technology to be personally enjoyable (Davis, Bagozzi, & Warshaw, 1989). Enjoyment is also seen as an intrinsic motivator and an affective determinant of perceived value. Perceived value, in turn, is a major factor determining adoption (Kim, Chan, & Gupta, 2007).


4. Gender

4.1 Theories on gender

Previous research has shown that people interact with computers in ways that are comparable to human–human interactions. For example, Reeves and Nass (1996) have demonstrated repeatedly that people instinctively treat nonliving entities just like humans. The proper personification of an automated machine or digital agent will only enhance this tendency. Whenever humans interact with each other, many processes and situations can influence that interaction. One specifically human characteristic is especially influential: gender.

One of the first things a human does when encountering another human or personified agent is to determine its gender. Gender is one of the most salient and omnipresent social categories in human societies and affects virtually every aspect of our everyday lives. To a large extent, gender determines people's social roles, occupations, relationships, and opportunities (Bussey & Bandura, 1999). For example, earlier research showed that women are seen as sympathetic, kind and accessible, whilst men are seen as tough and aggressive (Huddy & Terkildsen, 1993). Competence and rationality were also labeled as mainly male attributes.

Ridgeway (2001) showed that gender is more than just a trait of an individual: it is an institutionalized system of social practices for constituting males and females as different in socially significant ways and for organizing inequality in terms of those differences. This social system leads to widely shared gender stereotypes that greatly influence the treatment of men and women. According to Wagner and Berger (1997), gender and gender stereotypes are deeply entwined with social hierarchy and leadership. They state that this is the case because these gender stereotypes contain status beliefs at their core. Expectation states theory defines status beliefs as widely held cultural beliefs that link greater social significance and general competence, as well as specific positive and negative skills, with one category of a social distinction (e.g., men) compared to another (e.g., women).

Closely linked to expectation states theory is social role theory. According to this theory, the differences in behavior of women and men originate in the contrasting distributions of men and women into social roles (Eagly, Wood, & Diekman, 2000). The gender differences that commonly occur in social behavior follow from the typical characteristics of the roles commonly held by women versus men. Women and men adjust to sex-typical roles by acquiring the specific skills and resources linked to successful role performance and by adapting their social behavior to role requirements. On the other hand, men and women are also judged according to the social role that they are supposed to fulfill. Roles considered male emphasize power, competition, or authority, whilst female roles are supposed to emphasize support, caring and human interaction. As such, women working in female roles (e.g., in education) are judged more favorably than women working in roles with more masculine characteristics (Garcia-Retamero & Lopez-Zafra, 2006). This concept is commonly referred to as role congruity. Role congruity theory proposes that a group will be positively evaluated when its characteristics are recognized as aligning with that group's typical social roles (Eagly & Diekman, 2005).

4.2 Gender and personified digital assistants

Evidence for the validity of role congruity theory is also found in the usage of personified digital assistants. Forlizzi et al. (2007) researched the relationship between the visual features of embodied agents, the tasks they perform, and the social attributions that result. The results show a clear link between agent task and agent form and reveal that people often prefer agents who conform to the gender stereotypes associated with their tasks. The schema-congruity theory discussed earlier may explain Forlizzi's findings: if the personified agent is congruent with the task or application, this may improve the effectiveness of the agent. However, opposing results were found in other research on human–robot interaction (Kuchenbrandt, Häring, Eichberg, Eyssel, & André, 2014). Their results showed that interaction with a robot in the context of a typically female work domain resulted in less optimal outcomes than working on a male task: participants made significantly more errors when performing a typically female task than a typically male task, and they were more reluctant to accept help from the robot in future tasks compared to participants who were instructed on a typically male task.

Nass and colleagues (1997) also underlined the power of gender stereotypes, even in inanimate machines. They researched whether the addition of minimal gender cues to computers evokes sex-based stereotypic responses. In the experimental conditions, all suggestions of gender were removed, with the sole exception of male or female vocal cues. The study showed that users considered the evaluation from the male-voiced computer more valid than that from the female-voiced computer; the male-voiced evaluator was rated higher with respect to friendliness and competence than the female-voiced evaluator. The results also showed that the male-voiced tutor computer was perceived as more informative about male topics (computers) than the female-voiced tutor, whilst the opposite was true for allegedly more female topics (love and relationships). These conclusions show that the tendency to gender stereotype is extremely powerful, extending even to inanimate machines.

The tendency to gender stereotype is especially strong in the area of service jobs like secretarial or clerical work. Secretarial or clerical work is widely considered to be feminine, is associated with feminine qualities such as caretaking and deference, and conforms to several female gender stereotypes (Truss, Goffee, & Jones, 1995). Most secretaries and administrative assistants are still female: according to the United States Department of Labor, 94.5% of all employees in secretarial and administrative positions are female (Most common jobs for women, 2015).

So, we theorize that the perceived gender of the personified machine, robot or application has an effect on the user's experience. But it is also possible that the gender of the user has an effect on how the personified agent or service is experienced. Social theorists have long argued that women are more caring and relationship-oriented than men, who are more task-oriented (Chodorow, 1999). This difference could be brought forth by socialization, as boys are encouraged to display traditionally male behaviors, such as aggressiveness and competitiveness, whilst girls are motivated to show emotions and nurturance (Marini, 1988). It has also been found that men receive more of the instrumental, practical aspects of relationships, while women receive more of the intimate, interactive aspects of relationships (Umberson, Chen, House, Hopkins, & Slaten, 1996). By increasing the level of personification in a service, the user experiences the interaction as more of a relationship than just an interaction. Because women tend to be more relationship-oriented, we could expect that women will evaluate the personified service more positively and will score higher on trust, enjoyment and adoption intention than men.

4.3 Gender similarity theory

So, the perceived gender of the personified machine, robot or application has an effect on the user's experience. But according to personification research, in addition to the robot's gender, the user's gender and other characteristics also affect people's reactions towards robots (Eyssel, Kuchenbrandt, Bobinger, De Ruiter, & Hegel, 2012). Their results showed that participants showed greater robot acceptance and felt psychologically closer to the robot when the robot and the participants shared the same gender. Participants even anthropomorphized a system more strongly when it used a same-gender, humanlike voice. Neurological research also showed that an anthropomorphic interface increased the favorability of users' evaluations, providing further support for this so-called gender-match theory (Benbasat, Dimoka, Pavlou, & Qiu, 2010). These results were also underlined by Hende and Mugge (2014) in their research on gender-schema congruity.

To increase our understanding of the role of gender in this context, it is useful to look at the role of gender in other contexts. Looking at leader–member exchanges in a professional setting, there is evidence for a gender-match effect that resembles the gender-congruence theory stated before: gender match seems to lead to positive affect, whilst gender dissimilarity leads to poor quality of exchange (Bhal, Ansari, & Aafaqi, 2007).


5. Conceptual models & hypotheses

A series of hypotheses was formulated to first test the relationship between the level of personification and users' perceived trust, enjoyment and intention to adopt. The gender of the user was also included in this series of hypotheses. Secondly, the relationship between the gender of the agent and users' perceived trust, enjoyment and intention to adopt was tested with a second series of hypotheses. Again, the role of the gender of the user was also assessed in this series.

5.1 Conceptual model 1: level of personification

As discussed earlier, personification is an important determinant of trust in any nonhuman agent (Waytz et al., 2014). In their research on personification in the context of autonomous vehicles, Waytz and colleagues found that people trusted technology to perform its intended function properly if the technology seemed to have humanlike mental capacities. Accordingly, their results showed that participants trusted anthropomorphized vehicles more than non-anthropomorphized vehicles. Gong (2008) also found that increasing the human likeness of computer representations increased users' social responses towards the agent, which in turn led to higher scores on trustworthiness. Other positive effects of personification on trust were found in research on automated decision support aids (Pak et al., 2012). Because the existing research concurs that increasing personification or adding anthropomorphized images to products, services or agents can increase trust, the following is hypothesized:

H1: There is a positive impact of level of personification on perceived trust.


H1a: Perceived trust is higher when personification is at a medium level than when it is at a low level.

H1b: Perceived trust is higher when personification is at a high level than when it is at a medium or low level.

Personification also affects users on a more affective level than trust (which can be seen as a more utilitarian aspect). In their research on personified social robots, De Graaf and Allouch (2013) found enjoyment to be a crucial variable for robot acceptance, as it directly influences ease of use, use attitude and use intention of robots. Dehn and van Mulken (2000) researched the effects of personification in the context of software agents. Their results also showed the importance of personification for users' affective responses: users enjoyed computer games and multimedia tutoring systems more when accompanied by an anthropomorphic agent. According to the authors, this is because the addition of such an agent increases the entertainment value of an interface. Other research on computer assisted learning (Moundridou & Virvou, 2002), animated pedagogical agents (Walker et al., 1994; Lester et al., 1997) and digital product recommendation agents (Qiu & Benbasat, 2009) found similar positive effects of personification on enjoyment. Hence:

H2: There is a positive impact of level of personification on perceived enjoyment.

More specifically:

H2a: Perceived enjoyment is higher when personification is at a medium level than when it is at a low level.

H2b: Perceived enjoyment is higher when personification is at a high level than when it is at a medium or low level.

Findings from a study on personalized recommendation agents indicate that emotional trust significantly increases the intention to adopt the agent (Komiak & Benbasat, 2006). This conclusion is underlined by other research on recommendation agents, which shows that consumers' trust in the agents as virtual assistants influences consumers' intentions to adopt the agents (Benbasat & Wang, 2005) and that the addition of personification can enhance users' trusting beliefs (Qiu & Benbasat, 2009). Other research (Featherman & Pavlou, 2003) showed that perceived risk is an important inhibitor for consumers in adopting an e-service and that this perceived risk can be reduced by increasing the sense of trust. Because of the proven importance of trust in users' intention to adopt digital agents, the following is hypothesized:

H3a: Trust mediates the relationship between levels of personification and adoption intention, such that adoption intention is stronger for the highest level of personification than the medium level of personification.

H3b: Trust mediates the relationship between levels of personification and adoption intention, such that adoption intention is stronger for the medium-level of personification than the lowest level of personification.

Enjoyment has been gaining attention in recent years as an important intrinsic motivation variable in technology adoption behavior (Hwang, 2010). Prior research has proposed intrinsic motivation, such as perceived enjoyment, as a determinant of perceived ease of use and intention to use (Venkatesh, Morris, Davis, & Davis, 2003). People are more likely to adopt technology and use it more extensively when they experience immediate pleasure or joy from using the technology and perceive any activity involving the technology to be personally enjoyable (Davis et al., 1989). Enjoyment is also seen as an intrinsic motivator and an affective determinant of perceived value; perceived value, in turn, is a major factor determining adoption (Kim et al., 2007). Other research on electronic commerce systems also found significant positive effects of enjoyment on users' intention to use the product (Hwang, 2010). Considering the past research showing the effect of enjoyment on users' intention to adopt technology, the following is hypothesized:

H4a: Enjoyment mediates the relationship between levels of personification and adoption intention, such that adoption intention is stronger for the highest level of personification than for the medium-level of personification.

H4b: Enjoyment mediates the relationship between levels of personification and adoption intention, such that adoption intention is stronger for the medium-level of personification than for the lowest level of personification.
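The mediation structure in H3 and H4 can be illustrated with a minimal sketch. This is not the analysis used in the thesis itself; it is an illustration with simulated data and hypothetical variable names and effect sizes, estimating the indirect effect as the product of the X→M regression path and the M→Y path controlling for X:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300

# Simulated data consistent with H4: personification (X) -> enjoyment (M) -> adoption (Y).
# All names and effect sizes are hypothetical, for illustration only.
personification = rng.integers(0, 3, size=n).astype(float)  # 0 = low, 1 = medium, 2 = high
enjoyment = 0.5 * personification + rng.normal(0.0, 1.0, size=n)
adoption = 0.6 * enjoyment + rng.normal(0.0, 1.0, size=n)

def ols(y, *predictors):
    """OLS coefficients (intercept first) of y on the given predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(enjoyment, personification)[1]            # path a: X -> M
b = ols(adoption, personification, enjoyment)[2]  # path b: M -> Y, controlling for X
indirect_effect = a * b                           # mediated (indirect) effect of X on Y
print(f"a = {a:.2f}, b = {b:.2f}, indirect = {indirect_effect:.2f}")
```

In practice, mediation hypotheses like these are usually tested with bootstrapped confidence intervals around the indirect effect rather than a single point estimate.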

Besides the agent itself, certain characteristics of the person using the agent can also influence that person's experiences with the agent (Eyssel et al., 2012). In this article, special attention is paid to the gender of the user. Social theorists have long argued that women are more caring and relationship-oriented than men (Chodorow, 1999). It has also been found that men derive more value from the instrumental, practical aspects of relationships, while women derive more value from the intimate, interactive aspects of relationships (Umberson et al., 1996). We expect that an increase in the personification of a digital agent will lead to an increased feeling of interacting with something conscious. Because of this feeling, users may experience the interaction with the agent as more of a social relationship, instead of the purely functional one offered by a non-personified bot. Research shows that women prefer intimate and interactive relationships more than men; therefore, we hypothesize the following:

H5: The effects of level of personification on perceived trust (H1) will be stronger for female users than for male users.


H6: The effects of level of personification on enjoyment (H2) will be stronger for female users than for male users.

The aforementioned hypotheses are combined into the following conceptual model.

Figure 1

Conceptual model 1 with Level of Personification as independent variable

5.2 Conceptual model 2: gender of the agent

In many human-to-human interactions, gender has an effect on the interaction. To a large extent, gender determines people's social roles, occupations, relationships and opportunities (Bussey & Bandura, 1999). Previous research has shown that people interact with computers in ways that are comparable to human–human interactions (Reeves & Nass, 1996). Therefore, it is very possible that gender also plays a significant part in interactions with personified agents. In this case, gender stereotypes and social roles come into play. People react more positively to gender congruence in different professions (Eagly & Diekman, 2005), and research shows that these preferences and gender stereotypes carry over to the digital environment. Forlizzi et al. (2007) found that people often prefer agents who conform to the gender stereotypes associated with the tasks they perform. Secretarial or clerical work is still widely considered to be feminine and is associated with feminine qualities (Truss et al., 1995), and the scheduling assistant can be seen as a replacement for the secretary. Because people react more positively to gender congruence, the following hypotheses are proposed:

H7: Perceived trust is stronger when the personified agent is portrayed as female than when it is portrayed as male.

H8: Enjoyment is stronger when the personified agent is portrayed as female than when it is portrayed as male.

As stated for H7 and H8, people react more positively to gender congruence in different professions (Eagly & Diekman, 2005), and these preferences and gender stereotypes also apply to digital agents (Forlizzi et al., 2007). Considering that secretarial work is still associated with female gender stereotypes (Truss et al., 1995), it is expected that, in line with gender congruity, a female digital assistant in this context also leads to higher adoption intention. Trust is expected to play an important mediating role between the gender of the agent and users' adoption intention: multiple studies on the effects of trust on adoption intention of digital agents found that trust increases this intention (Komiak & Benbasat, 2006; Benbasat & Wang, 2005; Qiu & Benbasat, 2009; Featherman & Pavlou, 2003). Besides trust, enjoyment is also expected to play an important mediating role between the gender of the agent and users' adoption intention. This expectation stems from earlier research that found enjoyment to be an important motivational variable in technology adoption (Hwang, 2010; Venkatesh et al., 2003; Davis et al., 1989; Kim et al., 2007). Considering the existing research on gender congruence in digital assistants and the effects of trust and enjoyment on adoption intention, the following is hypothesized:


H9: Trust mediates the relationship between gender of the agent and adoption intention, such that adoption intention is stronger for the female version of the agent than for the male version of the agent.

H10: Enjoyment mediates the relationship between gender of the agent and adoption intention, such that adoption intention is stronger for the female version of the agent than for the male version of the agent.

As stated before, in addition to the gender of the agent or robot, the gender and other characteristics of the user affect reactions towards robots and digital agents. Eyssel and colleagues (2012) researched the effects of personification on robot acceptance. Their results showed that participants displayed greater robot acceptance and felt psychologically closer to the robot when the personified robot and the participants shared the same gender. Other research demonstrated that when a human gender schema is primed that is congruent with consumers' own gender, consumers show more preferential evaluations and are more likely to perceive the product as human (Hende & Mugge, 2014). Neurological research on the effects of gender similarity in personified recommendation agents found that interactions between users and agents of the same gender led to higher social presence. Social presence captures the degree to which users assess the quality of their social connection with a recommendation agent and is generally considered to play an important role in the adoption and effects of technologies. Considering the existing research on gender congruity in the field of personified agents and its positive results, the following is hypothesized:

H11: The effects of the gender of the agent on perceived trust (H7) will be stronger when the user shares the same gender as the agent.


H12: The effects of the gender of the agent on enjoyment (H8) will be stronger when the user shares the same gender as the agent.

The aforementioned hypotheses are combined into the following conceptual model.

Figure 2

Conceptual model 2 with Gender of the Agent as independent variable


6. Method

This chapter presents the empirical part of the study. First, the research design and the characteristics of the collected sample are explained. Secondly, the pre-test is discussed. Lastly, the measures and measurement scales of the variables are described.

6.1 Research design

To answer the research question and to test the developed conceptual models, a quantitative study was conducted in the form of a usage situation with a digital assistant followed by a survey. A between-subjects experimental design was used to test the first conceptual model. In this design, a three-level independent variable (level of personification: low, medium and high) was used to test its effect on the dependent variable, adoption intention. Two mediators were also assessed: trust and enjoyment. A moderator in the form of the gender of the user (male or female) was also part of the design. The second conceptual model used the gender of the agent (male or female) as the independent variable. The dependent variable, mediators and moderator were the same as in the design for the first model.
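As an illustration of how a hypothesis like H1 could be tested under this between-subjects design, the sketch below runs a one-way ANOVA on simulated trust scores for the three personification conditions. The data, group means and sample sizes are hypothetical and are not taken from this study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical trust scores (7-point scale) for the three between-subjects
# conditions; the means are chosen only to illustrate the hypothesized ordering.
n_per_group = 40
trust = {
    "low":    rng.normal(4.0, 1.0, n_per_group),
    "medium": rng.normal(4.5, 1.0, n_per_group),
    "high":   rng.normal(5.0, 1.0, n_per_group),
}

# One-way ANOVA across the three levels of personification (H1).
f_stat, p_value = stats.f_oneway(trust["low"], trust["medium"], trust["high"])
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

The moderation by user gender (H5, H6) would extend this to a 3 × 2 factorial ANOVA with an interaction term, and the mediation hypotheses would require a separate mediation analysis.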

To test our hypotheses in an AI-based context, a test environment of the scheduling assistant software of Zazu was used. The Zazu service can schedule appointments between the inviter and invitee of an appointment. When the inviter has sent an invitation via e-mail to the receiver, with Zazu in the cc, the service will arrange the meeting with the invitee on the inviter's behalf. The application gives the invitee multiple available dates and times. When the invitee has chosen the preferred moment for the meeting, both invitee and inviter receive a confirmation and the appointment is scheduled. The system makes use of an agent that can be portrayed either as the female Emma or as the male Ben. Ben or Emma "is your hard working employee – an artificial intelligent personal assistant that will schedule your meetings" (Zazu.ai). Both agents communicate with the user in the same humanlike way.

The Zazu application was used for this study because it represents a modern personified digital assistant that performs an activity for the user, instead of helping the user perform that activity. In this case, the activity is the scheduling of an appointment. The application is also very useful for this study because scheduling is a prime example of an activity that is traditionally performed by humans; Zazu is essentially a replacement for a secretary or personal assistant. As a result, the service is a great stimulus to use, because the role of secretary is one where gender stereotypes come into play. For this study, respondents participated in the actual scheduling of a meeting, as would really happen if the Zazu application were used in a non-research setting. Since the experiment is based on a real, existing service, this research also resembles a field experiment in some respects. With this existing service in the design, external validity is therefore higher than if a mock-up of an agent had been used instead of a real, functioning digital scheduling assistant.

The Zazu service was adapted for the research purposes to match the three levels of human likeness of the agent used in the experiment: low (Bot), medium (Hybrid) and high. The high level of human likeness consisted of Emma and Ben. Human likeness was manipulated by increasing or decreasing how humanlike the text in the e-mails sounded. All respondents received three e-mails in total. The first one was received by every respondent, regardless of which group they were in. This first mail was a simple mail sent from my personal e-mail address that invited the respondent to a (fictional) meeting to grab a cup of coffee at my office. In this mail, the agent was asked to arrange the meeting for me. Directly after that, respondents would receive an e-mail from their respective agent with the available dates for the meeting. The third and last e-mail that the respondents received was also from the agent and confirmed the scheduled meeting on the time and date that the invitee had chosen in the previous mail.

The four groups also differed in name. The low level of personification was called the Scheduling Service, the medium level the Scheduling Assistant, and Emma and Ben retained their own names. Ben and Emma differed only in name. The names Ben and Emma were chosen because they are clearly seen as distinctively male and female names. No other adjustments were made to the text in these groups, because we only wanted to test the difference between a female and a male agent. Making Emma more feminine or Ben more masculine through anything other than the name would mean that differences between these groups could reflect not only gender but also other variables that we cannot control for, such as the degree of helpfulness. As noted before, human likeness was also manipulated by changing the name of the service.

The three levels of personification needed to increase in human likeness from low to medium to high. Therefore, each level of personification had its own text in these mails, which differed in how humanlike it felt. The text in the lowest level of personification (bot) was purposefully written to feel like an interaction with an automated bot. It was supposed to resemble the simple, everyday automatic systems that only serve the purpose of fulfilling the service they were made to provide, without any form of humanlike feedback. This was achieved by disregarding greetings and keeping the text short and straightforward. The first mail from the bot would just indicate that there were some options for the meeting to choose from, and the second, confirmatory mail would simply state when and where the meeting was going to take place. The text in the medium level of personification (hybrid) was supposed to feel more humanlike than the bot. This was done by adding a greeting, and the assistant talked about itself in the first person. The text was also more humanlike than the bot version in the sense that the sentences used were not mere notifications, as was the case in the bot version. The highest level of personification was divided into the male Ben and the female Emma, but the text was identical in both conditions. The text in these conditions was even more humanlike than in the hybrid condition. This was achieved by extending the text and making it feel friendlier to the user (see Appendix C for the texts).

6.2 Pre-test

A pilot study was conducted to determine whether the three different levels of personification (low, medium and high) really differed from each other in terms of human likeness. The pre-test was also intended to show the difference in perceived gender between the two highest levels of personification: Emma and Ben. For this test, the respondents were shown images of the e-mails that are actually sent when using the real Zazu service. In total, every respondent saw three images of three subsequent e-mails in the scheduling process, pasted into an image of a smartphone. The first image presented to the respondents was an example of the e-mail that initiates the scheduling process. This image showed that the e-mail was sent from my personal e-mail address and asked the scheduling service, which was added in the cc of the e-mail, to arrange a meeting between the respondent and me. The second image showed the follow-up e-mail sent by the scheduling service with the possible dates and times. The third image showed the confirmation e-mail. Examples of these images as used in the pre-test can be found in Appendix C.

Sixty-six respondents were approached for this study; 34 (51.5%) were female and 30 (45.5%) were male, while two respondents did not answer the gender question. The average age of the respondents was 24.76 years (SD = 8.79). A one-way ANOVA was conducted to determine whether the respondents' judgments of human likeness differed across the four conditions. Participants were classified into four groups: Bot (n = 15), Hybrid (n = 18), Emma (n = 15) and Ben (n = 18). Human likeness scores differed statistically significantly between the four groups, F(3, 62) = 6.399, p < .001. Human likeness increased from the Bot condition (M = 3.33, SD = 1.03) to the Hybrid (M = 4.13, SD = .85), Emma (M = 4.77, SD = 1.00) and Ben (M = 4.67, SD = .95) conditions. Tukey's post hoc analysis revealed that the mean difference between Bot and Emma (-1.44, 95% CI [-2.36, -.52]) was statistically significant. The same applied to the difference between Bot and Ben (-1.13, 95% CI [-2.02, -.25]). No significant differences were found between Hybrid and Bot, nor between Hybrid and either Emma or Ben.
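An analysis of this form can be sketched in outline with SciPy. The data below are illustrative placeholders simulated from the reported group means and standard deviations, not the thesis data, so the resulting statistics will not match those reported above.

```python
import numpy as np
from scipy import stats

# Illustrative human-likeness ratings on a 1-7 scale; NOT the thesis data.
rng = np.random.default_rng(42)
bot    = rng.normal(3.33, 1.03, 15).clip(1, 7)
hybrid = rng.normal(4.13, 0.85, 18).clip(1, 7)
emma   = rng.normal(4.77, 1.00, 15).clip(1, 7)
ben    = rng.normal(4.67, 0.95, 18).clip(1, 7)

# Omnibus one-way ANOVA across the four conditions.
f_stat, p_value = stats.f_oneway(bot, hybrid, emma, ben)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")

# Tukey's HSD for all pairwise mean differences (requires SciPy >= 1.8).
tukey = stats.tukey_hsd(bot, hybrid, emma, ben)
print(tukey)
```

The omnibus F-test only shows that at least one group mean differs; the Tukey HSD step identifies which pairs differ while controlling the family-wise error rate.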


Table 1

Mean Differences for Human Likeness

(I) Condition   (J) Condition   Mean Difference (I-J)   SE     p
Bot             Hybrid          -.794                   .334   .092
Bot             Emma            -1.437                  .349   .001**
Bot             Ben             -1.135                  .334   .006*
Hybrid          Bot             .794                    .334   .092
Hybrid          Emma            -.643                   .334   .228
Hybrid          Ben             -.341                   .318   .708
Emma            Bot             1.437                   .349   .001**
Emma            Hybrid          .643                    .334   .228
Emma            Ben             .302                    .335   .803
Ben             Bot             1.135                   .334   .006*
Ben             Hybrid          .341                    .318   .708
Ben             Emma            -.302                   .335   .803

To increase the differences between these conditions, the human likeness of the Emma and Ben conditions was raised for the actual experiment. This was accomplished by expanding the same text in both conditions and making it more humanlike by adding friendly sentences, such as 'Glad to meet you, even in a digital way!' (see Appendix C for the exact adaptation).

Besides human likeness, the pre-test yielded other results. An independent-samples t-test was run to determine whether Emma (n = 14) was seen as female and Ben (n = 18) as male. Emma was indeed seen as more female (M = 2.43, SD = 1.51) and Ben as more male (M = 5.67, SD = 1.50), as shown by the statistically significant mean difference of 3.24, 95% CI [2.15, 4.33], t(30) = 6.06, p < .01. Also, to assess whether respondents perceived the service as an automatic process, a one-sample t-test was performed. On a scale where one stood for very manual and seven for very automatic, the mean score (M = 4.85, SD = 1.30) was higher than the scale midpoint of 4.0 that represented neutrality. This mean difference of 0.848 was statistically significant, 95% CI [0.53, 1.17], t(65) = 5.34, p < .001. As expected, the Zazu application is seen as an example of an automated service. Besides perceptions of automation, the respondents were also asked whether they would appreciate it if the interface of the software were more humanlike. The mean score (M = 4.76, SD = 1.65) was also higher than the midpoint of 4.0, with a statistically significant mean difference of 0.758, 95% CI [0.35, 1.16], t(65) = 3.74, p < .001. These results support the premise of the research question: Zazu is seen as an automated service, and the respondents indicate that they would appreciate humanlike interactions in it.
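A one-sample test of this kind, comparing a mean rating against the neutral scale midpoint of 4.0, can be sketched as follows. The ratings are simulated placeholders drawn from the reported mean and standard deviation, not the collected responses.

```python
import numpy as np
from scipy import stats

# Simulated 1-7 automation ratings for 66 respondents; NOT the thesis data.
rng = np.random.default_rng(7)
ratings = rng.normal(4.85, 1.30, 66).clip(1, 7)

# One-sample t-test: does the mean rating differ from the neutral midpoint 4.0?
t_stat, p_value = stats.ttest_1samp(ratings, popmean=4.0)

# Mean difference from the midpoint with a 95% confidence interval.
mean_diff = ratings.mean() - 4.0
ci = stats.t.interval(0.95, df=len(ratings) - 1,
                      loc=mean_diff, scale=stats.sem(ratings))
print(f"t({len(ratings) - 1}) = {t_stat:.2f}, p = {p_value:.4f}, "
      f"diff = {mean_diff:.3f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```

The same pattern applies to the humanlike-interface item: only the simulated mean would change, while the test against the midpoint of 4.0 stays identical.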

6.3 Procedure

For the actual research, respondents were approached via Facebook, LinkedIn, WhatsApp and face to face. The respondents were required to have a functional e-mail address and an understanding of the English language; no other requirements were imposed for this research. The participants were asked to share their e-mail address so they could be set up in the scheduling process. After they had given their e-mail addresses, they were assigned non-randomly to one of the conditions (low human likeness, medium human likeness, high human likeness Emma and high human likeness Ben). Randomization was not possible because the Zazu software was not able to run the four different conditions at the same time. As a result, every condition had to be filled with respondents before the next
