
A Choice-Based Conjoint Analysis for

Determining Consumer Preferences for

Intelligent Personal Assistant Robots.

The Influence of Anthropomorphous Attributes on Purchase Intention

of Intelligent Personal Assistant Robots Considered Jointly.

by


Sebastiaan Wagenmaker

June 26, 2017

Master Thesis, MSc Marketing Intelligence
University of Groningen
Faculty of Economics and Business, Department of Marketing
PO Box 800, 9700 AV Groningen

Supervisors: dr. J. van Doorn, dr. F. Eggers

Author: Sebastiaan Wagenmaker
Zwanestraat 6a, 9712 CL Groningen

Tel: +31 (0) 6 11 34 91 59 Email: s.wagenmaker@student.rug.nl


Abstract. Robots have spread throughout many application areas in the past decades and are now also being introduced into the domestic environment. Complex questions arise when robots enter our homes. What human characteristics do we deem desirable in our robots? To what extent? And in which combinations? The primary aim of this paper is to provide insights into consumer preferences for anthropomorphous attributes of intelligent personal assistant robots. Little research has been done on anthropomorphous attributes of robots in a domestic setting, and very little with these attributes considered jointly. The anthropomorphous attributes considered in this paper are design, social interaction and level of autonomy. Using a dual-response choice-based conjoint analysis, data were gathered on the preferences of a sample from the United States. Major findings are that consumers prefer a more humanlike design, dislike a fully autonomous robot and are largely indifferent when it comes to social interaction. Moreover, a more humanlike design does not facilitate social interaction. Managerial implications are discussed in detail. Finally, the paper closes with the limitations of these insights and recommendations for future research on anthropomorphous robots.


TABLE OF CONTENTS

1. INTRODUCTION
2. CONCEPTUAL FRAMEWORK
2.1. ANTHROPOMORPHISM AND ROBOTS
2.2. SOCIABILITY OF ROBOTS
2.3. LEVEL OF AUTONOMY IN ROBOTS
2.4. DESIGN OF ROBOTS
2.5. ANTHROPOMORPHOUS ATTRIBUTES CONSIDERED JOINTLY
2.6. CONCEPTUAL MODEL
2.7. HYPOTHESES
3. RESEARCH DESIGN
3.1. RESEARCH METHOD
3.2. PLAN OF ANALYSIS
4. RESULTS
4.1. SAMPLE DEMOGRAPHICS
4.2. SAMPLE CHARACTERISTICS
4.3. RESPONSE TIME
4.4. RESPONDENTS' ATTITUDE AND UNDERSTANDING
4.5. GOODNESS OF MODEL FIT
4.6. PREDICTIVE VALIDITY
4.7. MOST PREFERRED ATTRIBUTE LEVELS
4.8. ATTRIBUTE IMPORTANCE
4.9. PRICE
4.10. SEGMENTATION
5. DISCUSSION
5.1. FINDINGS AND THEORETICAL IMPLICATIONS
5.2. MANAGERIAL IMPLICATIONS
5.3. LIMITATIONS
5.4. SUGGESTIONS FOR FUTURE RESEARCH
6. REFERENCES


1. INTRODUCTION

In the past decades, robots have spread throughout many application areas, such as medical, military and public safety contexts. A more recent development is the introduction of robots as consumer products in a domestic environment, becoming available for everyday in-home use (Young, Hawkins, Sharlin and Igarashi, 2009). Domestic service robots have long been predominantly science fiction and commercial visions of the future (Forlizzi and DiSalvo, 2006). However, the prediction is that in the marketplace of 2025, service-providing humanoid robots will be melded into numerous service experiences, including the domestic setting (Van Doorn, Mende, Noble, Hulland, Ostrom, Grewal and Petersen, 2017). Duffy (2003) also emphasizes that we are getting closer to integrating robots into our physical and social environment.

Prior research indicates that the application of personal robots in a domestic setting introduces the challenge of acceptability. Personal robotics is concerned with the application of robots in the assistance to people in various aspects of their life, including a close interaction between robot and human user, at various levels: physical, intellectual and emotional (Laschi, Teti, Tamburrini, Datteri, and Dario, 2001). Laschi and colleagues describe acceptability as the balance of perceived and real costs against the perceived tangible benefits obtained by a user in the use of given technologies or services. Acceptability is closely related to the willingness to use a system or service in a particular context, and therefore also closely related to purchase intentions.

Functionalities of intelligent personal assistants include information provision (for example about news, personal schedules, weather conditions, traffic congestion or stock prices), making dinner reservations, sending text messages, ordering food, purchasing event tickets, schedule management and personal health management (Chaudhri et al, 2006).

Another considerable functionality of intelligent personal assistants integrated with the Internet of Things is home automation. The aim of home automation is to control home devices from a central control point (Alkar and Buhur, 2005). An intelligent personal assistant could take the role of this central control point, allowing it to control all devices in the home that are connected to the internet. For example, functionalities could include control of security cameras, control and automation of refrigerators, lighting, curtains, heating and air conditioning, and opening and locking doors. The software agent that provides these capabilities requires a physical component, a hardware device, to function. These devices come in an array of forms. For example, the software could be integrated into a smartphone, like Apple's Siri and Samsung's S Voice. On the other hand, the intelligent personal assistant could come embodied in a custom-made device or robot, like Google Home or Amazon's Alexa-powered Echo. The focus of this paper is on the latter.

The possibilities for attributes of an intelligent personal assistant robot seem endless, yet little to no research has been done on consumer preferences for these attributes. Despite this lack of research, a growing assortment of intelligent personal assistants is becoming available on the consumer market. Several companies have already released their products, for example Google Home (May 2016) and Amazon's Alexa (November 2014). Even more are expected to be released in the near future, such as Mycroft, Kuri Home Robot, Jibo Robot, Ivee, Aido Robot, LG's Hub Robot, Bosch's Mykie and Apple's HomePod.

What would we want our future home robots to look like? And how would we like them to behave? Will they be like an additional family member, or just a machine that provides services? Questions about our preferences for types of robots arise when you start picturing one in your house. Specifically, how humanlike, or anthropomorphised, would we want them to be?

Second, social interaction, defined as the ability to use normal modes of interaction, and stable social norms during close encounters with humans. Third, humanoid design, defined as highly anthropomorphic in appearance; built to look and move like the human body (Broadbent, 2017).

Gaining deeper insights into the effect of and interaction between these attributes is of added value from both a scientific and a social perspective. The scientific relevance stems from extending the insights on consumer preferences for human-robot interaction and design in general, and in a domestic setting in particular. Little to no research has considered these attributes jointly, especially in the domestic setting (Schermerhorn et al, 2011). Thus we do not yet know what combination of attributes is preferred. From a social perspective, the added value lies in the expectation that, when aligned with consumer preferences, intelligent personal assistant robots could be adopted by families and individuals on a massive scale. This is obviously very useful for organizations involved in the development of intelligent personal assistants and other domestic service robots. In order to successfully integrate the intelligent personal assistant robot into the everyday life of individuals and families, clarification is needed on the desired attributes.

Therefore, I introduce the following problem statement of this paper: “What is the influence of and interplay between anthropomorphous attributes on the purchase intention of intelligent personal assistant robots?”. For the purpose of this paper, I define purchase intention as the plan to purchase a particular good or service in the future.


2. CONCEPTUAL FRAMEWORK

Many researchers have explored the area of human-robot interaction and the influence of various anthropomorphous attributes on this interaction. This section provides a thorough overview of these insights.

2.1. Anthropomorphism and Robots

Existing literature provides various definitions of anthropomorphism, which comes from the Greek anthropos, for man, and morphe, for form or structure. Duffy (2003) emphasizes that anthropomorphism helps us to rationalize the actions of inanimate objects, animals and others by attributing human characteristics to them. Breazeal (2003) supports this view by recognizing that our social-emotional intelligence strongly supports our understanding of the behaviour of people and other living creatures. Moreover, it is considered to be an innate tendency of human psychology (Hutson, 2012). Duffy (2003) also elaborates on human-robot interaction, which is often presented as the primary motivation for employing anthropomorphism in robotic systems. He states that in order to engage in meaningful social interaction with its user, a robot requires a degree of anthropomorphic, or human-like, attributes, and that our social understanding is facilitated by the use of human-like features in robots.


2.2. Sociability of Robots

Sociability seems to be the most agreed upon attribute of anthropomorphism in prior research (Breazeal, 2003; Duffy, 2003; Fong, Nourbakhsh, and Dautenhahn, 2003; Mohammad et al, 2009; Severinson-Eklundh, Green and Huttenrauch, 2003). Robots that fulfil a function as emotional companion and can autonomously interact with humans in a socially meaningful way are called socially interactive robots (Fong et al, 2003), or as Breazeal (2003) defined them, social robots.

For social robots to perform well in various functions of human-robot interaction, multiple studies have indicated that it is essential for them to have effective and intimate social interaction with their user (Breazeal, 2003; Duffy, 2003; Fong et al, 2003). Severinson-Eklundh et al (2003) even found that social interaction with only the primary user is not sufficient for service robots: the social robot should address the whole group of people where the robot is used. Mohammad et al (2009) argue that a robot that can correctly execute its tasks but fails to interact with humans in a natural way is as unacceptable as one that succeeds in social interaction but fails to achieve its tasks. They also state that a social robot should be capable of combining social interactivity with autonomy in order to succeed.

Nevertheless, some researchers argue against the statement that a robot's social skills form a critical part of its cognitive skill set and the extent to which it exhibits intelligence. Occasionally, they are still viewed as merely a necessary 'add-on' to human-robot interfaces to make the robot feel more 'attractive' to the people interacting with it (Dautenhahn, 2007). Furthermore, robots that display affection could even be evaluated negatively. Schermerhorn et al (2011), for example, found that subjects rated affect-displaying robots as more annoying than no-affect robots in a teamwork setting, and provide evidence that the display of affect could lead to the robot being perceived as less cooperative.

Third, the socially receptive subclass is socially passive: these robots respond to their user's efforts at interacting with them, but do not actively engage with them to fulfil internal social goals. Fourth, the sociable subclass is socially participative and has its own internal goals and motivations. In contrast to the socially receptive subclass, sociable robots pro-actively employ social behaviour towards their users. They do this not only to benefit the person, but also to benefit themselves by, for example, learning from the user. Breazeal (2003) also argues that endowing a robot with social skills and capabilities has benefits far beyond the interface value for the person who interacts with the robot.

2.3. Level of Autonomy in Robots

The autonomy of robots, and the extent to which they can achieve full autonomy, is an actively discussed attribute of anthropomorphization in the published literature (Haselager, 2005; Smithers, 1997). Research shows that the effect of a higher level of autonomy in a service robot is twofold: it relieves users of significant workload, but it also decreases their control over the robot's tasks (Muszynski, Stuckler and Behnke, 2012).

Haselager (2005) discusses two different levels of autonomy in robots' operations. First, their operations can be reactive, responding to what is going on in the environment. Second, they can be proactive, pursuing the goals that are active within the robot, thereby moving away from the environment as the primary influence. Robots with proactive operations can sometimes choose how to achieve these goals.

Parasuraman, Sheridan and Wickens (2000) took a more thorough approach and introduced a framework for types and levels of automation that serves as an objective basis for deciding, in the context of human-robot interaction, what to automate and to what extent. They defined four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation. They state that a system can include all types of automation, on a continuum ranging from low to high autonomy, thus from fully manual to fully automatic.

Smithers (1997) suggests that in order to be fully autonomous and deal with the sort of environments in which we live and work, robots and other agents will have to be lawmaking and not just self-regulating, and thus, in Haselager's (2005) terms, should be proactive. Haselager (2005) provides a definition of autonomous agents that captures this general intent: "Autonomous agents operate under all reasonable conditions without recourse to an outside designer, operator or controller while handling unpredictable events in an environment or niche".

Lastly, a paper by Stubbs, Hinds and Wettergreen (2007) focussed on how different types and levels of robot autonomy affect grounding: reaching common ground with the robot in terms of an accurate, shared understanding of the robot's context, planning and actions. In their research they defined three levels of autonomy: low, low to moderate, and high. Furthermore, they defined two types of problems, namely lack of transparency (why decisions were made) and missing contextual information (what happened). They found that the fewest problems occurred for high autonomy, and that these were all due to a lack of transparency. Most problems occurred for low to moderate autonomy, mainly due to missing contextual information. This is in line with Smithers' (1997) and Haselager's (2005) findings. For this paper, the division of Stubbs et al (2007) will be used as the basis for level of autonomy.

2.4. Design of Robots

A third attribute of anthropomorphism is design: the extent to which a robot's body shape is built to resemble the human body. In existing literature, it is widely recognized that the physical design of a robot can have important effects on human-robot interaction, specifically on how humans perceive the robot's capabilities, competence, sociability and trustworthiness (Aggarwal and McGill, 2007; Broadbent, Kumar, Li, Sollers, Stafford, MacDonald, and Wegner, 2013; Duffy, 2003; Oyedele, Hong and Minor, 2007; Schermerhorn et al, 2011; Wainer, Feil-Seifer, Shell and Mataric, 2007). Duffy (2003) stated that, traditionally, the obvious strategy for integrating robots successfully into our physical and social environment has been the humanoid form. It is even argued that the ultimate goal of many robotics engineers is to build a fully anthropomorphic synthetic human.

A robot with a humanlike face display was also perceived as more sociable, rated as having most mind, and as being most humanlike, alive and amiable (Broadbent et al, 2013).

In a paper more focused on intimate interaction, Oyedele, Hong and Minor (2007) showed that in the context of touching robots, people were indifferent about the degree of humanness of robotic images. In contrast, in the contexts of communicating with robots, watching robots in a movie, and living in the same house as robots, people showed more concern for the robotic images' similarity to humans.

Instead of a humanoid or machinelike design, robots could also have no embodiment. However, prior research indicates that people interact differently with, and issue more commands to, a physical robot than a simulated robot with no embodiment (Schermerhorn et al, 2011). Wainer et al (2007) also found a more positive evaluation of embodied robots and showed that an embodied robot, compared to a telepresent robot and a simulated robot, was seen as more helpful, watchful and enjoyable.

A study into medical examination and the use of robots has shown that highly human-like robots caused more embarrassment for Dutch university students than a technical box (Bartneck, Bleeker, Bun, Fens and Riet, 2010). Bartneck et al (2010) also found that the highly human-like robot was perceived more as a person than the technical box. It is therefore not unexpected that quite often engineers are attempting to make robots look and behave identically to humans, in part so that humans can interact with robots on a more intuitive and natural level (Broadbent, 2017). However, some researchers argue there is a limit to how humanlike robots should look. Mori, MacDorman and Kageki (2012) for example stated that someone's response to a robot designed to look like a human would precipitously shift from empathy to revulsion as it approached, but failed to attain, a lifelike appearance.

2.5. Anthropomorphous Attributes Considered Jointly


To summarize, the anthropomorphization of robots influences our perception of, interaction with, and feelings about robots in a variety of interesting ways. The next part of this paper discusses how these influences could interact with the purchase intention of intelligent personal assistants.

2.6. Conceptual Model

From a theoretical perspective, Schermerhorn et al (2011) emphasize that the different dimensions of anthropomorphism, such as autonomy, embodiment and interaction style, have exclusively been investigated separately. Before their research, no previous study had looked into these three attributes together. Their results show that any mono-dimensional study analysing these three constructs separately is going to miss important interaction effects. Research that includes the combination of these three dimensions should thus enrich the current literature on human-robot interaction. Consequently, I introduce the conceptual model of this paper (Figure 1).

Figure 1: Conceptual framework.

In the model, a moderating effect of design on the relationship between social interaction and purchase intention is included. Based on prior research, I expect that design strengthens the relationship between social interaction and purchase intention (Duffy, 2003; Broadbent et al, 2013, 2017; Schermerhorn et al, 2011).

In order to measure how different levels of the independent variables are preferred, I define attribute levels for each attribute based on appropriate literature. Social interaction, defined as the ability to use normal modes of interaction, and stable social norms during close encounters with humans (Mohammad et al, 2009) is divided into: 1) passive, which means it only speaks when spoken to and 2) active, which means it speaks on its own initiative. These levels are comparable to Breazeal’s (2003) subclasses socially receptive and sociable and are chosen because they are the most appropriate and applicable in the context of intelligent personal assistant robots.

Design, defined for this paper as the extent to which the robot is built to look and move like the human body, is divided into: 1) looks like a machine, 2) has facial expressions, but looks like a machine and 3) has facial expressions and body movement. Level of autonomy, which is defined by Smithers (1997) as the degree of self-regulation or control in performing tasks, is split into: 1) passive, which means it only does instructed tasks, 2) assertive, which means it suggests tasks it could do and 3) autonomous, which means it does tasks without asking.
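For concreteness, the two levels of social interaction, three levels of design and three levels of autonomy defined above, together with the three price levels used as a trade-off, span a full-factorial space of candidate profiles from which conjoint choice sets are drawn. A minimal Python sketch (the level labels are mine, paraphrasing the text):

```python
from itertools import product

# Attribute levels as defined in the text; price levels are the three
# control-variable levels used as a trade-off for the other attributes.
attributes = {
    "social_interaction": ["passive", "active"],
    "design": ["machine", "machine + facial expressions",
               "facial expressions + body movement"],
    "autonomy": ["passive", "assertive", "autonomous"],
    "price": [199, 299, 399],
}

# Full-factorial set of candidate profiles for the conjoint design.
profiles = [dict(zip(attributes, combo))
            for combo in product(*attributes.values())]

print(len(profiles))  # 2 * 3 * 3 * 3 = 54 candidate profiles
```

In practice a fractional-factorial or efficient design would sample from this space rather than show all 54 profiles.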

The combination of these dimensions in one study is very uncommon in general human-robot interaction research, and I am unaware of any research with this combination in a domestic setting. To better understand the relative relationship of the dependent and independent variables, three variables are controlled for in the model. First, the price of the intelligent personal assistant robot is added as a trade-off measure for levels of the other attributes. By doing so, insights are acquired not just on the preferred levels of each attribute, but also on the price at which they are preferred.
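Formally, the model described in this section amounts to a part-worth utility specification with a design × social-interaction interaction, estimated from the choice data via a multinomial logit. The notation below is my own sketch of one plausible specification, not taken from the thesis:

```latex
% Utility of alternative j for respondent i: part-worths for social
% interaction (SI), design (D), autonomy (A), price, and the SI x D
% interaction; epsilon is the random error of the multinomial logit.
U_{ij} = \beta_{SI}\,SI_j
       + \sum_{d} \beta_{D_d}\,D_{dj}
       + \sum_{a} \beta_{A_a}\,A_{aj}
       + \beta_{P}\,\mathit{Price}_j
       + \beta_{SI \times D}\,(SI_j \times D_j)
       + \varepsilon_{ij},
\qquad
P_{i}(j) = \frac{\exp(V_{ij})}{\sum_{k} \exp(V_{ik})}
```

Here \(V_{ij}\) is the deterministic part of \(U_{ij}\), and the dual-response ("none") option would enter the choice set as an additional alternative with its own constant.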

Calo (2010) argues that robots raise concerns for privacy for three reasons. First, due to the robots' ability to sense, process and record the world around them. Second, because they introduce new points of access to historically protected spaces. Third, due to their social meaning, which stems from their anthropomorphization: people are hardwired to react to heavily anthropomorphic technologies, such as robots, as if a person were actually present (Calo, 2010). In order to measure the privacy concerns of the participants, a Likert scale from Smith (1996) is used. Lastly, three levels of price ($199, $299 and $399) are added as a control variable and trade-off for the other attributes.

2.7. Hypotheses

An appreciable amount of literature has been published on the requisite of sociability in robots for them to function successfully in human-robot interaction (Breazeal, 2003; Duffy, 2003; Fong et al, 2003; Mohammad et al, 2009; Severinson-Eklundh et al, 2003). It is likely that a robot that includes social interaction is preferred to one that does not, because social interaction supports our understanding of the behaviour of people and other living creatures and helps to rationalize their actions (Breazeal, 2003; Duffy, 2003). Fong et al (2003) also emphasize that in order for social robots to succeed, effective interaction between humans and robots is essential. Moreover, social interaction with more than just the primary user seems needed; domestic service robots should consider their full environmental context and all people present in order to be effective (Severinson-Eklundh et al, 2003).

Also from a more psychological perspective, a more socially active robot seems preferable. Baumeister and Leary (1995) argued that humans have a need for frequent, nonaversive interactions within an ongoing relational bond. They state that social attachment and belongingness appear to strongly influence emotional patterns and cognitive processes, and even that a lack of attachment is linked to a variety of ill effects on health, adjustment and well-being.

Considering the emphasis in prior research on the magnitude of social interaction in robots, it seems not only a preferred attribute but also a necessity. However, consensus on a positive evaluation of social interaction is still lacking. Dautenhahn (2007) emphasizes that a robot's social skills are occasionally considered as just a superfluous 'add-on'. Furthermore, Schermerhorn et al (2011) found that in a teamwork setting, affect-displaying robots were perceived as less cooperative and therefore evaluated negatively.

Nevertheless, given the weight of evidence in favour of social interaction in robots, the effect is likely to be positive. Therefore, I expect that it is preferred that robots not only speak when spoken to, but also speak on their own initiative. Accordingly, I introduce the first hypothesis of this paper:

H1: An active level of social interaction has a more positive influence on the purchase intention of an intelligent personal assistant robot than a passive level.

The extent to which the embodiment of a robot has a humanoid design greatly influences how its users perceive the robot's capabilities, competence, sociability and trustworthiness (Aggarwal and McGill, 2007; Broadbent et al, 2013; Duffy, 2003; Mori et al, 2012; Schermerhorn et al, 2011; Oyedele et al, 2007; Wainer et al, 2007). The lion's share of the research on humanoid design indicates that it is preferred to a non-humanlike design.

For example, Oyedele et al (2007) showed that in the context of living in the same house as robots, people showed more concern for robotic images' similarity to humans. Furthermore, from a more psychological perspective, Hutson (2012) stated that the attribution of humanlike characteristics is an innate tendency of human psychology: people use their schemas about other humans as a basis for inferring the properties of non-human entities in order to make efficient judgements about the environment. A humanlike design should therefore be evaluated more positively than machinelike looks. Consequently, it seems reasonable that such a positive evaluation and perception of a robot would increase purchase intention.

On the other hand, Duffy (2003) states that a robot which appears too intelligent could be seen as more selfish, or as sensitive to the same weaknesses as humans, and therefore the most preferred design should not necessarily be a perfect synthetic human. The uncanny valley theory of Mori et al (2012) also emphasizes that an extreme, yet not perfect, version of humanoid design could have a serious negative effect on the robot's evaluation.

However, a negative effect of humanoid design in robots only seems to occur in the most extreme cases. Thus, I expect that it is preferred that the robot has facial expressions and body movement. For these reasons, I posit the following hypothesis:


H2b: A robot with facial expressions and body movement has a more positive influence on the purchase intention of an intelligent personal assistant robot than a robot that looks like a machine but has facial expressions.

Furthermore, a higher level of autonomy in a robot is likely to positively influence the purchase intention for that robot, because it relieves users of significant workload. However, with more autonomy the user loses control over the robot's tasks (Muszynski et al, 2012). Stubbs et al (2007) showed that the fewest common-ground issues occurred for a high level of autonomy, which indicates that it is easier to reach an accurate, shared understanding of the robot's context, planning and actions when the robot has a high level of autonomy. It is likely that consumers prefer a robot that they can understand more easily and have fewer problems with. Smithers (1997) supports this preference by emphasizing that in order to deal with the sort of environments in which we live and work, robots have to be lawmaking and not just self-regulating. A passive level of autonomy therefore seems insufficient.

Parasuraman et al (2000) showed in their automation framework that for three of the four types of robot automation (information acquisition, information analysis, decision selection) a high level of autonomy is required for reliable automation. For the fourth automation type (action implementation) they advise a moderate level of automation. Considering that the majority of the automation types require higher levels of autonomy in order to be reliable, and thus preferable to consumers, it seems likely that higher levels of autonomy are preferred, but that some control stays desirable. Furthermore, Ray, Mondada and Siegwart (2008) stated that robot autonomy should not be too high and that clear control for the user should always be maintained. They state that consumers may have developed a fear of losing control over robots, for example due to science-fiction movie scenarios.

I thus expect that it is preferred that the robot not only does instructed tasks but also suggests tasks, yet does not perform tasks without asking. Therefore, I hypothesize the following:

H3a: An assertive level of autonomy has a more positive influence on the purchase intention of an intelligent personal assistant robot than a passive level.


The extent of humanoid design, like the use of a head with eyes and a mouth, may also facilitate social interaction and thus enhance the effect on purchase intention through a moderation effect. For example, nodding or shaking the head can indicate acceptance or rejection (Duffy, 2003).

Furthermore, Broadbent et al (2013; 2017) support this with their finding that a robot with a humanlike face display was perceived as more sociable. Therefore, in order for humans to have more natural and intuitive interactions with robots, engineers attempt to make robots look identical to humans. Goetz, Kiesler and Powers (2003) even found that people not only evaluate and perceive humanlike robots more positively, but also consistently preferred robots for a certain job when their human likeness matched the sociability required for that job. This also indicates that a more humanoid design positively moderates the influence of social interaction, compared to a machinelike design. Consequently, my fourth and final hypothesis is as follows:

H4a: A robot with facial expressions and body movement strengthens the influence of active social interaction on purchase intention, compared to a robot that looks like a machine.


3. RESEARCH DESIGN

3.1. Research Method

In order to test my hypotheses, I conducted a survey among 204 participants. All participants were assured of their anonymity in an accompanying letter, which also explained the purpose of the survey. The questionnaire then started with a brief explanation of the study and the different attributes.

Furthermore, several measures were taken in order to establish predictive validity. Realistic images of the robots are used to support the correct interpretation of humanoid design. Also, video instructions are used to explain all attributes and their corresponding levels. Eggers, Hauser and Selove (2016) found no increase in precision or accuracy from a training video, suggesting that a wear-out effect overwhelmed the training effect. However, an intelligent personal assistant robot is such a relatively novel and unknown product that I nevertheless use a training video to improve predictive validity. The data collection took place in May 2017.

3.2. Plan of Analysis


4. RESULTS

4.1. Sample Demographics

In total, 204 respondents living in the United States completed the survey (48.0% male, 52.0% female). The median age of the respondents lies in the 35-44 years group (23.5%), followed closely by the group aged 25-34 years (20.6%). Furthermore, the median household income lies between $25,000 and $49,999 (22.1%), followed by the group ranging from $0 to $24,999 (19.1%). The majority of the respondents are employed and work 30 or more hours per week (49.0%). The most common educational levels are a bachelor's degree (34.3%) and some college or an associate's degree (33.8%). Lastly, most respondents have two (31.4%) or three (22.5%) co-residents. A full overview of the sample demographics is shown in Table 1.

Table 1. Sample Demographics

Variables, answer possibilities and counts (N = 204, % in parentheses):

Gender: Male 98 (48.0%); Female 106 (52.0%)

Age: < 20 years 0 (0%); 20-24 years 23 (11.3%); 25-34 years 42 (20.6%); 35-44 years 48 (23.5%); 45-54 years 37 (18.1%); 55-64 years 32 (15.7%); 65+ years 22 (10.8%)

Household income: $0 to $24,999 39 (19.1%); $25,000 to $49,999 45 (22.1%); $50,000 to $74,999 36 (17.6%); $75,000 to $99,999 25 (12.3%); $100,000 to $124,999 22 (10.8%); $125,000 to $149,999 8 (3.9%); $150,000 or more 13 (6.4%); Prefer not to answer 16 (7.8%)

Employment: Employed, working 30 or more hours per week 100 (49.0%); Employed, working 1-29 hours per week 18 (8.8%); Self-employed 6 (2.9%); Out of work and looking for work 8 (3.9%); Out of work but not currently looking for work 3 (1.5%); A homemaker 21 (10.3%); A student 7 (3.4%); Military 0 (0%); Retired 31 (15.2%); Unable to work 9 (4.4%)

Education: Less than high school graduate 5 (2.5%); High school graduate or equivalent 37 (18.1%); Some college or associate's degree 69 (33.8%); Bachelor's degree 70 (34.3%); Master's degree or higher 21 (10.3%); Prefer not to answer 2 (1.0%)

Co-residents: 1: 37 (18.1%); 2: 64 (31.4%); 3: 45 (22.5%); 4: 34 (16.7%); 5: 12 (5.9%); 6: 5 (2.5%); 6+: 6 (2.9%)

4.2. Sample Characteristics

In order to control for sample characteristics, the variables Technology Readiness Index (Parasuraman and Colby, 2015) and Privacy Concerns (Smith et al, 1996) were measured by means of 16 and 12 items respectively, on a 5-point rating scale ranging from ‘Strongly disagree’ (1) to ‘Strongly agree’ (5). Both variables were averaged across their items and mean-centered before estimation. The mean value of the Technology Readiness Index, M = 3.220 (SD = 1.187), indicates that the respondents are slightly more ready to embrace new technologies than not, although the result barely differs from the neutral point. The scale exhibited adequate internal reliability (α = 0.862). Moreover, the mean value of Privacy Concerns, M = 3.523 (SD = 1.215), shows that the respondents on average have small but noteworthy privacy concerns. This scale's internal reliability was also satisfactory (α = 0.928).
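As a sketch of how such scale scores can be computed (using hypothetical item responses, since the raw survey data are not reproduced here), the averaging, mean-centering and Cronbach's alpha steps look as follows:

```python
import random
import statistics as stats

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondent rows of equal length."""
    k = len(items[0])
    item_cols = list(zip(*items))
    item_var_sum = sum(stats.variance(col) for col in item_cols)
    total_var = stats.variance([sum(row) for row in items])
    return k / (k - 1) * (1 - item_var_sum / total_var)

random.seed(0)
# Hypothetical 5-point ratings: one latent trait per respondent plus
# item-level noise, mimicking a 16-item scale for 204 respondents.
rows = []
for _ in range(204):
    trait = random.gauss(3, 1)
    rows.append([min(5, max(1, round(trait + random.gauss(0, 1))))
                 for _ in range(16)])

alpha = cronbach_alpha(rows)
scale = [sum(r) / len(r) for r in rows]    # average across items
grand_mean = sum(scale) / len(scale)
centered = [s - grand_mean for s in scale]  # mean-centered before estimation
```

The data here are simulated, so the resulting alpha only illustrates the mechanics, not the reported values of 0.862 and 0.928.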

Table 2. Sample Characteristics

Variable Mean Standard

Deviation

Cronbach’s Alpha

Number of items

Technology Readiness Index 3.220 1.187 0.860 16

Privacy Concerns 3.523 1.215 0.929 12

4.3. Response Time


4.4. Respondents' Attitude and Understanding

To control for fatigue effects of the survey and for correct understanding of the topic and the attributes, respondents rated six statements on a 5-point rating scale, ranging from ‘Strongly disagree’ (1) to ‘Strongly agree’ (5).

As shown in Table 3, the results of statements 1 through 4 indicate that the attributes, the different options and the topic of intelligent personal assistant robots were sufficiently understood by the participants. Furthermore, the results of statements 5 and 6 indicate that the number of decisions was acceptable and that the decisions themselves were not considered boring.

Table 3. Respondents' Attitude and Understanding

1. The attributes design, social interaction, and level of autonomy were clear to me. (M = 4.41, SD = 0.961)
2. I wish I had more information about the meaning of the attributes before choosing between the options. (M = 2.78, SD = 1.311)
3. I had a clear image of the design and functionalities of the available options. (M = 4.30, SD = 0.979)
4. The topic of intelligent personal assistant robots was adequately introduced. (M = 4.28, SD = 0.923)
5. There were too many decisions to make. (M = 2.47, SD = 1.265)
6. The decisions were boring. (M = 2.34, SD = 1.251)

Furthermore, the majority of the respondents was somewhat (39.7%) or very (35.3%) interested in intelligent personal assistant robots.

4.5. Goodness of Model Fit

An assessment of model fit was done for the NULL-model and three aggregate models. Model 1 is a part-worth model and only includes the main effects and no moderators. Model 2 is a part-worth model and includes the main effects and all moderators. Lastly, Model 3 includes the main effects, all moderators and has a linear effect for the price attribute. An overview of all comparison metrics is displayed in Table 4.

Table 4. Model Comparison

Metric: NULL model / Model 1 / Model 2 / Model 3
Parameters: 0 / 8 / 20 / 19
Degrees of freedom: n/a / 196 / 184 / 185

Goodness of Fit
McFadden-R2: n/a / 0.074 / 0.082 / 0.081
McFadden-R2 adjusted: n/a / 0.072 / 0.076 / 0.076
Hit Rate: n/a / 55.98% / 56.30% / 56.27%
Mean Absolute Error: n/a / 2.92% / 2.97% / 3.06%

First, the log-likelihood of the NULL-model was calculated based on 204 cases, 10 choice sets and the number of alternatives per choice set, namely 3 for the decision between robots and 2 for the choice vs. no-choice option:

LL(0) = 204 × 10 × ln(1/3 × 1/2) = -3655.19

Next, to assess whether Model 1 was significantly different from the NULL-model (i.e. random guessing) at the 5% level, a likelihood ratio test was performed using a Chi-squared distribution with 8 degrees of freedom. The test statistic (543.54) was higher than the critical value (15.51) from the Chi-square distribution table, thus the estimated model parameters are significantly different from zero. Hence, as expected, since adding parameters always increases model fit, Model 1 is better than random guessing. Furthermore, to assess the goodness of fit relative to the NULL-model, McFadden-R2 (0.074) and adjusted McFadden-R2 (0.072), which corrects for the number of parameters, were calculated. Both values were lower than the commonly accepted range (0.2 - 0.4), indicating a poor model fit. However, this could be due to heterogeneity, which the model did not account for: overall differences in the sample were captured by the chosen levels, but no distinction was made at the individual level.
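These calculations can be sketched in code. The Model 1 log-likelihood below is reconstructed from the reported test statistic (543.54) rather than taken from the estimation output, which is an assumption:

```python
import math

# Null log-likelihood: 204 respondents x 10 choice sets; each observation
# is the joint probability of a random robot choice (1/3) and a random
# buy/no-buy decision (1/2) under the dual-response design.
ll_null = 204 * 10 * math.log(1 / 3 * 1 / 2)   # -3655.19

# Model 1 log-likelihood implied by the reported LR statistic.
ll_model1 = ll_null + 543.54 / 2
lr_stat = -2 * (ll_null - ll_model1)           # chi2, df = 8, critical 15.51

mcfadden_r2 = 1 - ll_model1 / ll_null          # 0.074, as reported
```

Note that the reconstructed log-likelihood reproduces the reported McFadden-R2 of 0.074 exactly, which supports the internal consistency of the reported figures.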

Next, the part-worth model including moderators (Model 2) was estimated and compared to Model 1. Both the log-likelihood and the (adjusted) McFadden-R2 indicate better model fit. To test whether this difference is significant, a likelihood ratio test was performed. The Chi-square statistic (53.12) was higher than the critical value (21.03) from the Chi-square distribution table (df = 12, p = 0.05). Thus, there is a significant difference between the model without (Model 1) and with (Model 2) moderators: Model 2 has a significantly better fit than Model 1.


Next, the hit rates for the models were calculated to assess how well the estimates predict the respondents' actual choices. The hit rate is the number of correctly predicted choices divided by the overall number of choices; for Model 3:

(1182 + 803 + 311) / 4080 × 100 = 56.27%

According to these calculations, 56.27% of all choices could be predicted correctly with the estimates. The hit-rates for Model 1 (55.98%) and Model 2 (56.30%) differ slightly but not significantly. Therefore, also taking into account the hit-rate, Model 3 is still preferred.
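The hit-rate computation above can be sketched as follows, with the correct-prediction counts taken from the text:

```python
# Each of the 204 respondents faced 10 choice sets, each yielding a choice
# among three robots plus a buy/no-buy decision: 204 * 10 * 2 = 4080
# recorded decisions in total.
correct = 1182 + 803 + 311
total = 204 * 10 * 2
hit_rate = correct / total * 100   # share of decisions predicted correctly
```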

Next, the mean absolute error was calculated by comparing predicted versus observed choices to further assess model fit. Model 1 (2.92%) showed the lowest error percentage, followed by Model 2 (2.97%) and Model 3 (3.06%). Considering the relatively small difference between Model 2 and 3, Model 3 is still the most preferred model.

Before I continued with further estimation, the insignificant control variables (p > 0.05) were removed from the model. Thus, in the final model the direct effect of all attributes, the none option, the effect of technology readiness on social interaction and level of autonomy and the effect of privacy concerns on design were included. Also, the moderating effect of design on the relationship between social interaction and purchase intention was kept in the model for hypothesis testing. For a full overview of the parameters and the corresponding p-values, see Appendix A.

4.6. Predictive Validity

In order to assess how well the estimates of Model 3 can predict respondents' actual choices, the predictive validity was calculated based on the holdout choice sets that were not used for estimation. Table 5 provides an overview of the predicted and observed choices for the holdout sets.

Table 5. Predictive Validity

Holdout Choice Set 1 Holdout Choice Set 2

Option Predicted Observed Predicted Observed

1 45.65% 47.06% 13.71% 16.67%

2 28.17% 26.96% 28.17% 28.43%

3 26.18% 25.98% 58.12% 54.90%

Mean Absolute Error 0.94% 2.15%


The mean absolute errors of 0.94% and 2.15% show that only a small share of the predicted choices deviated from the observed choices. These are very low error percentages, indicating very strong predictive validity.
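The holdout errors can be reproduced from the choice shares reported in Table 5; a minimal sketch:

```python
# Predicted vs. observed choice shares (%) for the two holdout choice sets.
predicted = {1: [45.65, 28.17, 26.18], 2: [13.71, 28.17, 58.12]}
observed = {1: [47.06, 26.96, 25.98], 2: [16.67, 28.43, 54.90]}

def mean_absolute_error(pred, obs):
    """Average absolute deviation between predicted and observed shares."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(pred)

maes = {s: mean_absolute_error(predicted[s], observed[s]) for s in (1, 2)}
```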

4.7. Most Preferred Attribute Levels

Next, in order to test my hypotheses, the most preferred levels of each attribute were analysed for the whole sample. An overview of the attribute parameters and their significance is displayed in Table 6.

Table 6. Attribute Parameters

Social Interaction (p = 0.015): Passive: only speaks when spoken to 0.0536; Active: speaks on its own initiative -0.0536

Design (p < 0.001): Looks like a machine -0.1486; Has facial expressions, but looks like a machine -0.1434; Has facial expressions and body movement 0.2920

Level of Autonomy (p = 0.003): Passive: only does instructed tasks 0.0262; Assertive: suggests tasks it could do 0.0721; Autonomous: does tasks without asking -0.0984

First, the attribute social interaction had a significant influence (p = 0.015). The level with the highest utility (0.0536) is "Passive: only speaks when spoken to"; the utility of "Active: speaks on its own initiative" was -0.0536. A pooled t-test indicated that the difference between these attribute levels is significant (t(203) = 3.45; p = 0.001). Therefore, my first hypothesis is not supported. Table 7 provides an overview of the hypotheses.


Third, level of autonomy also has a highly significant influence (p = 0.003). The most preferred level is "Assertive: suggests tasks it could do" (0.0721), followed by "Passive: only does instructed tasks" (0.0262) and "Autonomous: does tasks without asking" (-0.0984). A pooled t-test indicated that the difference between the assertive and passive levels was not significant (t(203) = 1.12; p > 0.05). Thus, hypothesis 3a is not supported by the data. However, the difference in utility between the assertive and autonomous levels proved to be highly significant (t(203) = 4.08; p < 0.05). Therefore, hypothesis 3b is supported.

Lastly, I looked into the moderation effect of design on social interaction. This effect was not significant for either design level (p = 0.58; p = 0.65). Therefore, hypotheses 4a and 4b are not supported.

Table 7. Overview of hypotheses

1. An active level of social interaction has a more positive influence on the purchase intention of an intelligent personal assistant robot than a passive level. (Supported: No)

2a. A robot with facial expressions and body movement has a more positive influence on the purchase intention of an intelligent personal assistant robot than a robot that looks like a machine. (Supported: Yes)

2b. A robot with facial expressions and body movement has a more positive influence on the purchase intention of an intelligent personal assistant robot than a robot that looks like a machine but has facial expressions. (Supported: Yes)

3a. An assertive level of autonomy has a more positive influence on the purchase intention of an intelligent personal assistant robot than a passive level. (Supported: No)

3b. An assertive level of autonomy has a more positive influence on the purchase intention of an intelligent personal assistant robot than an autonomous level. (Supported: Yes)

4a. A robot with facial expressions and body movement strengthens the influence of active social interaction on purchase intention, compared to a robot that looks like a machine. (Supported: No)

4b. A robot with facial expressions and body movement strengthens the influence of active social interaction on purchase intention, compared to a robot that looks like a machine but has facial expressions. (Supported: No)

4.8. Attribute Importance
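The relative importances reported in Table 8 follow from the utility ranges of the part-worths in Table 6. A sketch, assuming a price utility range of 0.86 (the reported price coefficient of -0.0043 applied to a $200 price span; this span is an assumption, not stated explicitly in the text, but it reproduces the reported figures):

```python
# Part-worth utilities per attribute level, taken from Table 6.
utility_levels = {
    "Social Interaction": [0.0536, -0.0536],
    "Design": [-0.1486, -0.1434, 0.2920],
    "Level of Autonomy": [0.0262, 0.0721, -0.0984],
    "Price": [0.0, -0.0043 * 200],   # assumed $200 price span
}

# Relative importance = an attribute's utility range divided by the
# sum of all attributes' utility ranges.
ranges = {a: max(u) - min(u) for a, u in utility_levels.items()}
total_range = sum(ranges.values())
importance = {a: round(r / total_range * 100, 2) for a, r in ranges.items()}
```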


Table 8. Relative Attribute Importance

Social Interaction: 6.79%
Design: 27.92%
Level of Autonomy: 10.80%
Price: 54.49%

4.9. Price

Next, I calculated the absolute willingness-to-pay for the most preferred intelligent personal assistant robot, that is, the maximum price consumers would be willing to pay for a robot with the most preferred attribute levels: passive social interaction, a design with facial expressions and body movement, and an assertive level of autonomy. This most preferred robot has a utility of 0.4177. The none-option has a utility of -1.6652 (p < 0.001), and the incremental effect of price is -0.0043 (p < 0.001). Therefore:

(-1.6652 - 0.4177) / -0.0043 = $484.40

Based on this calculation, if the price is above $484.40, consumers would rather not purchase the product. However, the maximum price in the conjoint experiment was $399, so I extrapolated, assuming that consumers still react linearly to price changes above $399.
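This willingness-to-pay computation can be sketched as the price at which the most preferred robot's total utility drops to the utility of the none option (values from the text):

```python
# Utilities and price coefficient reported in the text.
u_preferred = 0.4177      # passive interaction, humanlike design, assertive
u_none = -1.6652          # utility of the no-choice option
price_per_dollar = -0.0043

# Solve u_preferred + price_per_dollar * price = u_none for price.
wtp = (u_none - u_preferred) / price_per_dollar   # about $484.40
```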

4.10. Segmentation


Figure 3. Scree plot

However, for a nine-segment solution the number of parameters (116) exceeds the degrees of freedom (88), and a ten-segment solution is also unsuitable, as this difference only increases. Thus, the classification error for each number of segments was also considered. A two-segment solution showed the lowest classification error (0.011); comparing the nine- and ten-segment solutions, the nine-segment solution showed the lower classification error (0.0358).
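The information criteria used to compare segment solutions (Appendix B) follow directly from each solution's log-likelihood and number of parameters; a sketch, using N = 204 and the one-segment values from Appendix B:

```python
import math

def info_criteria(ll, npar, n=204):
    """Fit statistics of the kind reported in Appendix B."""
    return {
        "AIC": -2 * ll + 2 * npar,
        "AIC3": -2 * ll + 3 * npar,
        "BIC": -2 * ll + npar * math.log(n),
        "CAIC": -2 * ll + npar * (math.log(n) + 1),
    }

# One-segment solution: LL = -3367.86 with 12 parameters.
ic = info_criteria(ll=-3367.86, npar=12)
```

Lower values indicate a better trade-off between fit and parsimony, which is why the criteria in Appendix B level off as segments are added.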

Considering that a nine-segment solution increases the complexity of interpretation and is too fragmented, a two-segment solution seems optimal. Additionally, the most noteworthy difference between the two classes in the two-segment solution is that class one has a very low parameter for the none-option (-3.566) compared to the second class (-0.013), indicating that class one would generally prefer to actually buy the robot if it were available. Class one is also more interested in intelligent personal assistant robots (M = 4.40) than the second class (M = 3.13).


Table 9. Relative Attribute Importance for the Two Segments

Attribute: Class one (Robot Lovers) / Class two (Indifferent Heterogeneous)
Design: 35.63% / 19.39%
Social Interaction: 1.42% / 15.71%
Level of Autonomy: 9.58% / 21.97%
Price: 53.38% / 42.93%

Next, the two classes were compared. Robot Lovers includes 59.36% of the respondents and Indifferent Heterogeneous 40.64%. The relative attribute importances of the two classes were compared to gain insight into which attributes matter to each class (Table 9). Price is relatively more important to Robot Lovers (53.38%) than to Indifferent Heterogeneous (42.93%). Notable differences are also present for the other three attributes: design is almost twice as important for Robot Lovers (35.63%) as for Indifferent Heterogeneous (19.39%), whereas social interaction and level of autonomy are much more important for Indifferent Heterogeneous than for Robot Lovers.

Furthermore, the demographics of the segments were compared. Gender and education show no considerable differences between the two classes. Income, however, does show a noteworthy difference, as displayed in Figure 4. In Indifferent Heterogeneous, more than half (51.91%) of the respondents have a household income below $50,000, whereas for Robot Lovers the median income is between $50,000 and $74,999, and Robot Lovers is the larger class in every income range above $75,000.

Figure 4. Income of Segments


Additionally, as shown in Figure 5, the large majority (67.68%) of the respondents in Robot Lovers are between 35 and 54 years old, whereas Indifferent Heterogeneous is more spread out in age and contains more younger (20-24 years) and older (55+) respondents.

Figure 5. Age of Segments


5. DISCUSSION

5.1. Findings and Theoretical Implications

The results of this paper support the idea that applying some anthropomorphous attributes to robots is desirable, and they also provide insights into which attributes are more important in a domestic setting.

The most noteworthy insight is that a robot with facial expressions and body movement has a more positive influence on the purchase intention of an intelligent personal assistant robot than a robot that looks like a machine, or that only has facial expressions. This finding is in line with previous research, as the use of a human schema results in a more positive evaluation (Aggarwal and McGill, 2007). Additionally, this could be because a robot with a humanlike face display is perceived as more sociable and is rated as having the most mind and as being most humanlike, alive and amiable (Broadbent et al, 2013). However, this is not in line with the theory of the uncanny valley (Mori et al, 2012), according to which the most humanoid robot, with facial expressions and body movement, could elicit feelings of eeriness and revulsion among some observers. Walters, Syrdal and Dautenhahn (2008) researched how the uncanny valley can be avoided. They found that people tend to prefer robots with a more human-like appearance; however, this preference did not hold for introverts and people with lower emotional stability, who preferred the mechanical-looking appearance of the robot. An explanation could thus be that the observed sample has relatively higher emotional stability and a more extrovert nature. Moreover, the segment with the highest purchase intention had an even higher relative attribute importance for design (35.63%) than the whole sample (27.92%).


Another insight concerns the irrelevance of active versus passive social interaction: this is not as important to consumers in the context of intelligent personal assistant robots as the literature suggested. An active level of social interaction does not have a more positive influence on the purchase intention of an intelligent personal assistant robot than a passive level. This is unexpected, as prior research indicated that for social robots to perform well within various functions of human-robot interaction, it is essential for them to have effective and intimate social interaction with their user (Breazeal, 2003; Duffy, 2003; Fong et al, 2003). As mentioned before, the literature suggested that a robot failing to interact with humans in a natural way is as unacceptable as a robot failing to achieve its tasks (Mohammad et al, 2009). That the results do not confirm this is unexpected.

Additionally, the relative attribute importance of social interaction was the lowest for the whole sample (6.79%), as well as for both classes in the two-segment solution. Class one, the segment with the highest intention to purchase, even had a relative attribute importance of only 1.42% for social interaction. Respondents therefore seem to care relatively little about whether a robot is socially active, and thus speaks on its own initiative, or socially passive, and thus speaks only when spoken to. This is unexpected, also because voice control is the main way of interacting with an intelligent personal assistant robot; I therefore expected a less indifferent position towards the level of social interaction. Some researchers have already suggested that a robot's social skills do not form a critical part of its cognitive skill set, arguing in favour of the indifferent response towards the level of social interaction found in this paper (Dautenhahn, 2007). In the domestic setting of this paper, social interaction might only be a necessary ‘add-on’ to human-robot interfaces that makes the robot feel more ‘attractive’ to the people interacting with it, as Dautenhahn suggested. Another explanation for these results could be that some consumers find a robot with a more active level of social interaction more annoying than a no-affect robot, as Schermerhorn and Scheutz (2011) found; this could cancel out a positive effect of active social interaction.


5.2. Managerial Implications

From a managerial perspective, various valuable insights were generated. First of all, insights were found on what attributes of an intelligent personal assistant robot are more important than others and thus should be focussed on. Secondly, insights were gathered on what levels within an attribute were most preferred.

As expected, the most important attribute to consumers was price. Managers should therefore take price elasticity into consideration. Based on the calculated absolute willingness-to-pay, managers should not price their robots higher than $484.40. This price was calculated for a robot with the most preferred attribute levels; if the robot is more expensive, consumers are likely to choose not to purchase the robot at all.

In designing the robots, managers should focus on making the robot look more humanlike, as design is the relatively most important attribute after price and the most preferred level is the most humanlike one. However, this study only considered a humanlike appearance to a certain extent, as I discuss in the limitations; these results should thus not be interpreted as "the more humanlike, the better". The third most important attribute was level of autonomy, although no significant difference was found between the two most preferred levels. Managers should avoid a fully autonomous robot, as this level was least preferred. Also, managers should not spend too much energy on the social interaction of the robot, since consumers are quite indifferent between a socially passive and a socially active robot.

Lastly, based on the segmentation analysis, managers should target the segment Robot Lovers. The consumers in this segment are somewhat older, although the segment barely includes seniors, and they have a relatively higher income. This segment is a bit more price sensitive and attaches even more relative importance to the design of the robot. As Robot Lovers showed a significantly larger intention to purchase, it would be wise for managers to target this segment rather than Indifferent Heterogeneous.

5.3. Limitations


The use of a training video might also have biased consumers' perceptions of possible use moments and functionalities, as the video mainly shows the robot in a domestic household setting involving a family.

Additionally, in order to estimate the absolute willingness-to-pay, I had to extrapolate and assume that consumers would still react linearly to price changes above $399. There is no evidence for this, which is a limitation. Another caveat of this study is that the robots were only evaluated in a domestic setting; it is therefore hard to generalize the findings to different contexts of human-robot interaction.

Lastly, the study was conducted through an online survey. Hence, all preferences were measured based on the assumptions of the respondents rather than on actual experiences. Conducting similar research in, for example, a field experiment may yield different results.

5.4. Suggestions for Future Research

Future research should aim at generating more generalizable results, for example by conducting the research with a sample of more diverse nationalities. More insights into the preference for humanlike design in robots could be gathered by studying robots that look more humanlike than the ones included in this study. Also, field experiments that measure consumer preferences beyond online surveys could further validate the results or generate new insights.

Furthermore, more consumer characteristics could be taken into account when measuring preferences. For example, social interaction might be evaluated differently when comparing extrovert and introvert consumers.


6. REFERENCES

Aggarwal, P. and McGill, A.L. (2007) “Is That Car Smiling at Me? Schema Congruity as a Basis for Evaluating Anthropomorphized Products”, Journal of Consumer Research, 34(4): 468–479.

Alkar, A.Z. and Buhur, U. (2005) “An Internet Based Wireless Home Automation System for Multifunctional Devices”, IEEE Transactions on Consumer Electronics, 51(4): 1169–1174.

Bartneck, C., Kulić, D., Croft, E. and Zoghbi, S. (2009) “Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots”, International Journal of Social Robotics, 1(1): 71–81.

Bartneck, C., Bleeker, T., Bun, J., Fens, P. and Riet, L. (2010) “The influence of robot anthropomorphism on the feelings of embarrassment when interacting with robots”, Journal of Behavioral Robotics, 1(2): 109–115.

Baumeister, R.F. and Leary, M.R. (1995) “The Need to Belong: Desire for Interpersonal Attachments as a Fundamental Human Motivation”, Psychological Bulletin, 117(3): 497–529.

Broadbent, E., Stafford, R. and MacDonald, B. (2009) “Acceptance of healthcare robots for the older population: Review and future directions”, International Journal of Social Robotics, 1: 319–330.

Broadbent, E., Kumar, V., Li, X., Sollers, J. 3rd, Stafford, R.Q., MacDonald, B.A. and Wegner, D.M. (2013) “Robots with Display Screens: A Robot with a More Humanlike Face Display Is Perceived To Have More Mind and a Better Personality”, PLoS ONE, 8(8): e72589.

Broadbent, E. (2017) “Interactions With Robots: The Truths We Reveal About Ourselves”, Annual Review of Psychology, 68: 627–652.

Broekens, J., Heerink, M. and Rosendal, H. (2009) “Assistive social robots in elderly care: a review”, Gerontechnology, 8: 94–103.

Calo, R. (2010) “Robots and Privacy”, in Robot Ethics: The Ethical and Social Implications of Robotics.

Chaudhri, V.K., Cheyer, A., Guili, R., Jarrold, B., Myers, K.L. and Niekrasz, J. (2006) “A case study in engineering a knowledge base for an intelligent personal assistant”.

Dautenhahn, K. (2007) “Socially intelligent robots: dimensions of human–robot interaction”, Philosophical Transactions of the Royal Society B, 362(1480): 679–704.

Duffy, B.R. (2003) “Anthropomorphism and the social robot”, Robotics and Autonomous Systems, 42(3-4): 177–190.

Eggers, F. and Sattler, H. (2011) “Preference Measurement with Conjoint Analysis: Overview of State-of-the-Art Approaches and Recent Developments”, GfK Marketing Intelligence Review, 3(1): 36–47.

Fong, T., Nourbakhsh, I. and Dautenhahn, K. (2003) “A survey of socially interactive robots”, Robotics and Autonomous Systems, 42: 143–166.

Forlizzi, J. and DiSalvo, C. (2006) “Service robots in the domestic environment: a study of the Roomba vacuum in the home”, HRI '06: Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, 258–265.

Goetz, J., Kiesler, S. and Powers, A. (2003) “Matching robot appearance and behavior to tasks to improve human-robot cooperation”, in Proceedings of Ro-Man, 55–60.

Goodrich, M.A., Olsen, D.R., Crandall, J.W. and Palmer, T.J. (2001) “Experiments in Adjustable Autonomy”, in Proceedings of the IJCAI Workshop on Autonomy, Delegation and Control: Interacting with Intelligent Agents.

Haselager, W.F.G. (2005) “Robotics, philosophy and the problems of autonomy”, Pragmatics & Cognition, 13(3): 515–532.

Hutson, M. (2012) The 7 Laws of Magical Thinking: How Irrational Beliefs Keep Us Happy, Healthy, and Sane. New York, NY: Hudson Street Press, 165–181.

Laschi, C., Teti, G., Tamburrini, G., Datteri, E. and Dario, P. (2001) “Adaptable semi-autonomy in personal robots”, 10th IEEE International Workshop on Robot and Human Interactive Communication.

Li, D., Rau, P.P. and Li, Y. (2010) “A cross-cultural study: Effect of robot appearance and task”, International Journal of Social Robotics, 2(2): 175–186.

Mitchell, T., Caruana, R., Freitag, D., McDermott, J. and Zabowski, D. (1994) “Experience With a Learning Personal Assistant”, Communications of the ACM, 37(7): 80–91.

Mohammad, Y. and Nishida, T. (2009) “Toward combining autonomy and interactivity for social robots”, Journal of Knowledge, Culture and Communication, 24(1): 35–49.

Mori, M., MacDorman, K.F. and Kageki, N. (2012) “The Uncanny Valley [From the Field]”, IEEE Robotics & Automation Magazine, 19(2): 98–100.

…“teleoperation of personal service robots”, in RO-MAN, IEEE 2012, 933–940.

Oyedele, A., Hong, S. and Minor, M.S. (2007) “Contextual Factors in the Appearance of Consumer Robots: Exploratory Assessment of Perceived Anxiety Toward Humanlike Consumer Robots”, CyberPsychology & Behavior, 10(5): 624–632.

Parasuraman, A. (2000) “Technology Readiness Index (TRI): A Multiple-Item Scale to Measure Readiness to Embrace New Technologies”, Journal of Service Research, 2(4): 307–320.

Parasuraman, A. and Colby, C. (2015) “An Updated and Streamlined Technology Readiness Index: TRI 2.0”, Journal of Service Research, 18(1): 59–74.

Rachels, J. (1975) “Why Privacy is Important”, Philosophy & Public Affairs, 4(4): 323–333.

Ray, C., Mondada, F. and Siegwart, R. (2008) “What do people expect from robots?”, IEEE/RSJ International Conference on Intelligent Robots and Systems, 3816–3821.

Riek, L.D., Rabinowitch, T., Chakrabarti, B. and Robinson, P. (2009) “How Anthropomorphism Affects Empathy Toward Robots”, in Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, 245–246.

Schermerhorn, P. and Scheutz, M. (2011) “Disentangling the effects of robot affect, embodiment, and autonomy on human team members in a mixed-initiative task”, in Proceedings of the 2011 International Conference on Advances in Computer-Human Interactions, Gosier, Guadeloupe, France.

Severinson-Eklundh, K., Green, A. and Huttenrauch, H. (2003) “Social and collaborative aspects of interaction with a service robot”, Robotics and Autonomous Systems, 42: 223–234.

Shibata, T. and Wada, K. (2010) “Robot therapy: a new approach for mental healthcare of the elderly - a mini-review”, Gerontology, 57: 378–386.

Smith, H.J., Milberg, S.J. and Burke, S.J. (1996) “Information privacy: Measuring individuals' concerns about organizational practices”, MIS Quarterly, 20(2): 167–196.

Smithers, T. (1997) “Autonomy in Robots and Other Agents”, Brain and Cognition, 34: 88–106.

Stubbs, K., Hinds, P.J. and Wettergreen, D. (2007) “Autonomy and Common Ground in Human-Robot Interaction: A Field Study”, IEEE Intelligent Systems, 22(2): 42–50.

Van Doorn, J., Mende, M., Noble, S.M., Hulland, J., Ostrom, A.L., Grewal, D. and Petersen, J.A. (2017) “Domo Arigato Mr. Roboto: Emergence of Automated Social Presence in Organizational Frontlines and Customers' Service Experiences”, Journal of Service Research, 20(1): 43–58.

Wainer, J., Feil-Seifer, D.J., Shell, D.A. and Mataric, M.J. (2007) “Embodiment and human-robot interaction: A task-based perspective”, in Proceedings of Ro-Man, 872–877.

Young, J.E., Hawkins, R., Sharlin, E. and Igarashi, T. (2009) “Toward Acceptable Domestic Robots: Applying Insights from Social Psychology”, International Journal of Social Robotics.

7. APPENDICES

Appendix A. Significance of Parameters

Direct Effect: p-value
Design: < 0.001*
Social Interaction: < 0.001*
Level of Autonomy: < 0.001*
Price: < 0.001*
None option: < 0.001*

Moderating Effect: p-value
Design (level 1) × Social Interaction: 0.510
Design (level 2) × Social Interaction: 0.650
TRI × Design (level 1): 0.092
TRI × Design (level 2): 0.083
TRI × Social Interaction: 0.004*
TRI × Autonomy (level 1): 0.002*
TRI × Autonomy (level 2): 0.190
PC × Design (level 1): 0.061
PC × Design (level 2): 0.002*
PC × Social Interaction: 0.970
PC × Autonomy (level 1): 0.400
PC × Autonomy (level 2): 0.110

Significant parameters (p < 0.05) are marked with an asterisk (*)

Appendix B. Comparing Segment Solutions

# LL BIC AIC AIC3 CAIC Npar df CE
1 -3367.86 6799.53 6759.71 6771.71 6811.53 12 192 0.0000
2 -2870.45 5873.05 5790.10 5815.10 5898.05 25 179 0.0110
3 -2739.72 5681.53 5555.45 5593.45 5719.53 38 166 0.0255
4 -2651.08 5573.39 5404.17 5455.17 5624.39 51 153 0.0319
5 -2576.48 5493.33 5280.97 5344.97 5557.33 64 140 0.0379
6 -2511.57 5432.63 5177.14 5254.14 5509.63 77 127 0.0612
7 -2459.31 5397.26 5098.63 5188.63 5487.26 90 114 0.0357
8 -2418.04 5383.84 5042.07 5145.07 5486.84 103 101 0.0470
9 -2363.42 5343.74 4958.84 5074.84 5459.74 116 88 0.0358
10 -2334.72 5355.48 4929.44 5056.44 5484.48 129 75 0.0385
