
BEYOND R2D2

The design of nonverbal interaction behavior optimized for robot-specific morphologies

Ph.D. Graduation Committee

Chairman and Secretary: Prof. dr. P.G.M. Apers, University of Twente, NL
Promotor: Prof. dr. V. Evers, University of Twente, NL
Assistant-promotors: Dr. ir. G.D.S. Ludden, University of Twente, NL
Dr. E.M.A.G. van Dijk, University of Twente, NL
Members: Prof. dr. D.K.J. Heylen, University of Twente, NL
Prof. dr. ir. P.P.C.C. Verbeek, University of Twente, NL
Prof. dr. ir. D.M. Gavrila, Delft University of Technology, NL
Prof. dr. F. Eyssel, Bielefeld University, GE
Dr. W. Ju, Stanford University, USA

The research reported in this thesis was carried out at the Human Media Interaction group of the University of Twente.

CTIT Ph.D. Thesis Series ISSN: 1381-3617, No. 16-404

Centre for Telematics and Information Technology P.O. Box 217, 7500 AE Enschede

The Netherlands

SIKS Dissertation Series No. 2016-36

The research reported in this thesis has been carried out under the auspices of SIKS, the Dutch Research School for Information and Knowledge Systems.

The research reported in this thesis received funding from the European Community’s 7th Framework Programme under Grant agreement 288235 (http://www.frogrobot.eu/).

© Daphne Karreman, Enschede, The Netherlands

Layout in InDesign. Printed by CPI-Koninklijke Wöhrmann, Zutphen. ISBN: 978-90-365-4184-8

DOI: 10.3990/1.9789036541848

All rights reserved. No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without prior permission from the copyright owner.


BEYOND R2D2

THE DESIGN OF NONVERBAL INTERACTION BEHAVIOR

OPTIMIZED FOR ROBOT-SPECIFIC MORPHOLOGIES

DISSERTATION

to obtain

the degree of doctor at the University of Twente, on the authority of the rector magnificus,

Prof. dr. H. Brinksma,

on account of the decision of the graduation committee, to be publicly defended

on Wednesday 14 September 2016 at 12:45

by

DAPHNE ELEONORA KARREMAN

born on 23 February 1984 in Delft


This dissertation has been approved by:

Promotor: Prof. dr. V. Evers, University of Twente, The Netherlands
Assistant-promotors: Dr. ir. G.D.S. Ludden, University of Twente, The Netherlands
Dr. E.M.A.G. van Dijk, University of Twente, The Netherlands


Acknowledgment

When I was finishing my master’s degree, it became clear to me that I like to do research and that doing a PhD would be an interesting and promising next step. Today, over four years later, I am sure that I made the right decision. Looking back, even though it was not always easy, I would definitely do it again. Luckily, I did not have to do everything by myself. I want to thank many people for their support. Even though I can only mention a few of them here, I want to thank everyone who helped me in some way.

I would like to thank Vanessa and Betsy for giving me the opportunity to start the PhD in human-robot interaction, and for supervising me. Vanessa, thank you for giving me the freedom to go in the direction in which I wanted to go, and for your interesting ideas and support to improve the research and the presentation of the results. Betsy, thank you for your patience, discussing research setups and results, reading and improving papers and deliverables, and helping me out with stupid questions. Even though you were not as closely involved the last years, I very much appreciate our meetings and the feedback you gave me. A special thanks goes to Geke, who became my daily supervisor when I had been working on my PhD for two years. Thank you for diving into the world of human-robot interaction, for your support with exploring new opportunities for human-robot interaction, and of course for the encouraging moments which I needed when I was finishing the first draft.

Many more people helped me during the years. There were weekly meetings with the robot research group, which increased in size over the years. Thank you Michiel (also, thank you for all the other things you did for me), Maartje, Daniel, Vicky, Cristina, Jorge, Roelof, Jered, Jaebok, Manja, Khiet, Dennis, and others who joined over the years, for giving nice insights into your research and helping me with mine. Also, thanks to the students Gilberto, Sandra and Lex, who collaborated in my research. I want to thank Lynn for her effort put into improving my writing in papers, deliverables, and of course this thesis. Moreover, I enjoyed traveling together for FROG very much. Also, I would like to thank Charlotte and Alice for all their help with arranging and organizing everything. And of course, thanks to all other colleagues at HMI for the pleasant work environment.

Iris and Randy, I want to thank you for being my paranymphs. Iris, even though there were many differences, we followed the same path. Thank you for sharing experiences, peer pressure during writing, and the fun time we had. Randy, thank you for always being willing to help with FROG, and for the help with all questions regarding the process of finishing a PhD. A special thanks goes to Veronique who helped and encouraged me to do what I really wanted to do. Without her help, I probably would not have been at the University of Twente.

I want to thank the partners of the FROG project for the nice time we had in Seville and Lisbon. The working days in the Zoo and the Royal Alcázar were long, but we achieved a nice result. Furthermore, thank you for showing me around the city and for helping me find the best places to eat!

I found out that other activities were as important for doing my PhD as the work itself. Therefore, I thank my hockey teams in Delft and Enschede, and everyone on the violin making course, for their interest in my PhD, and also for giving me a chance to think about something completely different and to relax after a long day/week of work. To my friends from Delft, I promise to have more time during the weekends to meet with you again! I want to thank my parents for their support in whatever choice I have ever made. Hans, I enjoyed the discussions we had about everything. We do not always agree, but that is the interesting part. It gave me insights into a lot of things. Ellen, thank you for always being ready to help me, both whenever I came to Delft and when I just had a question on the phone. Japke, I want to thank you for just being there, for listening, and for sharing experiences. Also, I want to thank Lammie and Theo for their interest in my thesis and the carefree weekends.

Last but not least, I want to thank Tim. Thank you for everything in the last four years. It must have been frustrating that even though we live in Dordrecht, I was in Enschede for half of the week. I appreciate that you never complained, and gave me the chance to do what I wanted. During the last year, I often needed the weekends to work, but you kept things running and supported me when I was tired. I did not always show appreciation for that, but you know that I am thankful for all your support. Things will become less complicated, I promise.

Daphne Karreman, Enschede, August 2016


Abstract (EN)

It is likely that in the near future we will meet more and more robots that will perform tasks in social environments, such as shopping malls, airports or museums. However, design guidelines that inform the design of effective nonverbal behavior for robots are scarce. This is surprising, since the behavior of robots in social environments can greatly affect the interaction between people and robots. Previous work has shown that people have the tendency to project motivations, intentions and emotions onto non-living objects, such as robots, when these objects start to move. Moreover, it has been shown that people tend to react socially to computers, televisions and new media. Similarly, people will react socially to robots. Currently, the focus in the field of human-robot interaction is on creating robots that look like humans and that can use humanlike behavior in interaction. However, such robots are not suitable for all tasks, and humanlike robots are complex, vulnerable and expensive. Moreover, people do not always prefer humanlike robots over robots with other appearances. This indicates that there are good reasons to develop low-anthropomorphic robots: robots that do not resemble people in appearance.

A challenge in designing nonverbal behavior for low-anthropomorphic robots is that these robots often lack the abilities to imitate humanlike behavior correctly. These robots could lack the specific modalities to perform the behavior, for example, they may not have arms and fingers to point with. When a robot does have the right modalities to perform a specific behavior, these modalities might not have similar degrees of freedom to imitate recognizable nonverbal behavior. Yet, contrary to people, low-anthropomorphic robots may use screens, projections, light cues or specific movements to communicate their intentions and motivations. To optimize the use of robot-specific modalities and morphology, we developed robot-optimized behavior. This was done in the context of developing guide robots for tourist sites.

To understand the effects of both imitated humanlike behavior and robot-optimized behavior for low-anthropomorphic robots on people’s perception, controlled lab studies and in-the-wild studies have been performed. First, we analyzed the effect and effectiveness of imitated humanlike guide behavior on low-anthropomorphic robots. These studies showed that humanlike behavior is preferred over random behavior, but that it is also more distracting. This means that humanlike behavior may not be the best solution when designing behavior for low-anthropomorphic robots. An important question then was what would be a promising alternative to imitating human behavior. In follow-up studies we compared the effect and effectiveness of imitated humanlike behavior for the robot to robot-optimized behavior. From these studies we learned that robot-optimized behavior is a good alternative to imitated humanlike behavior for low-anthropomorphic robots.

To effectively perform the studies mentioned above, we developed and introduced DREAM, the Data Reduction Event Analysis Method. This method allowed us to analyze video data of in-the-wild human-robot interaction. Such a method did not yet exist, and it is essential in order to gain insight into effective behaviors for robots. Because the final user experience depends on many more aspects than only the behavior of the robot and the person interacting with it, such as the context and other people in that context, it is necessary to complement studies in controlled (lab) environments with real-world field studies. Therefore, in this thesis DREAM is introduced to analyze the rich and unstructured data of in-the-wild human-robot interactions in a fast and reliable manner. This method turned out to be effective for analyzing human-robot interaction data in the guide context.

The work presented in this thesis is a first step towards understanding 1) the effect of nonverbal behavior for low-anthropomorphic robots, 2) which design approaches can be effectively used to design this behavior and 3) how nonverbal robot behavior can be studied and evaluated in the wild. The tangible outcomes of this thesis are: 1) the robot-optimized approach to design behavior for low-anthropomorphic robots, and 2) DREAM to evaluate human-robot interaction in the wild. These results can serve as a starting point to further develop more diverse and effective design approaches in human-robot interaction.


Abstract (NL)

It is likely that we will increasingly encounter robots that perform tasks in social environments such as shopping malls, airports or museums. Nevertheless, there are as yet no guidelines that inform the design of effective nonverbal behavior for robots. This is striking, because it is precisely the behavior of robots in social environments that can greatly affect the interaction between people and robots. Previous research has shown that people tend to project motivations, intentions and emotions onto non-living things, such as robots, when these things move. In addition, it has been shown that people tend to respond socially to the behavior of computers, televisions and new media. People will therefore also respond socially to robots. At present, research in the field of human-robot interaction focuses mainly on developing robots with a humanlike appearance that can use humanlike behavior in interaction. However, such robots are not suitable for all tasks, because humanlike robots are complex, vulnerable and expensive. Moreover, it has been found that people do not necessarily prefer robots that look humanlike. There are thus good reasons to design low-anthropomorphic robots: robots that deliberately do not resemble people in appearance.

A challenge in designing nonverbal behavior for low-anthropomorphic robots is that these robots often lack the modalities to imitate human behavior correctly. A robot may simply not have the modality to perform the behavior; for example, it may have no arms and therefore cannot point. If the robot does have comparable modalities, these may not have the same freedom of movement as people have to display recognizable behavior. However, unlike people, such robots may use screens, projectors, light effects and specific movements to communicate intentions and motivations. To optimize the use of these robot-specific modalities and the robot’s appearance, we developed robot-optimized behavior. We did this in the context of developing guide robots for tourist sites.

To understand what effects both imitation of human behavior and robot-optimized behavior have on the perception of interaction partners, controlled studies and studies in the ‘real world’ were carried out. First, we analyzed the effect and effectiveness of imitated human behavior for low-anthropomorphic robots. In these studies we found that humanlike behavior is preferred over random behavior, but that it is also very distracting. This means that humanlike behavior is not the best solution for designing behavior for low-anthropomorphic robots. An important question then was what a promising alternative to copied human behavior could be. In follow-up studies we compared the effect and effectiveness of copied human behavior for the robot with behavior that was optimized for the robot’s appearance and modalities. From these studies we learned that robot-optimized behavior is a good alternative to imitating human behavior for low-anthropomorphic robots.

To carry out the studies mentioned above effectively, we developed and introduced DREAM, the ‘Data Reduction Event Analysis Method’. This method makes it possible to analyze video data of spontaneous (uncontrolled) human-robot interactions. Such a method did not yet exist, but it is necessary in order to gain insight into effective behavior for robots. Because many more aspects than the behavior of the robot and of the person interacting with it play a role, such as the physical environment and the other people in it, it is also necessary to carry out ‘real world’ field studies alongside controlled (lab) studies. DREAM was introduced to analyze rich and unstructured interactions between people and a robot in a fast and reliable manner. This method proved effective for analyzing spontaneous human-robot interaction in a guide/museum setting.

The work presented in this thesis is a first step towards understanding 1) effective nonverbal behavior for low-anthropomorphic robots, 2) which design approaches can be used to design this behavior and 3) how nonverbal robot behavior can be studied in spontaneous situations. This has led to the following results: 1) the robot-optimized approach for designing behavior for low-anthropomorphic robots, and 2) DREAM, for analyzing spontaneous human-robot interaction. These results can serve as a starting point for developing more diverse and effective design methods for human-robot interaction.


Content

Chapter 1 - Introduction 1
1.1. The influence of nonverbal behavior on communication 2
1.2. Interacting with robots in everyday environments 3
1.3. Approaches to design robots 4
1.4. Research questions 6
1.5. Research context 7
1.5.1. Application areas for the FROG project 8
1.6. Overview of chapters 11

Chapter 2 - Background and related work 15
2.1. Main concepts and definitions 16
2.2. Important theories for human-robot interaction and social robotics 17
2.3. Previous research on tour guide robots 23
2.3.1. Studies involving the first generation tour guide robots 23
2.3.2. Studies involving second generation tour guide robots 26
2.3.3. Previous research into specific tour guide robot behaviors 28
2.4. Framework to design behavior for low-anthropomorphic robots 32
2.5. Conclusion 35

Chapter 3 - Contextual analysis; A robot in a tourist site? 37
3.1. Motivation 38
3.2. Related work 38
3.2.1. Strategies in tour guiding 38
3.2.2. Factors influencing the visitor experience 39
3.3. Study 1: Contextual analysis of human tour guide behaviors 40
3.3.1. Study design 40
3.3.2. Results 42
3.3.3. Discussion 49
3.4. Study 2: Understanding visitor experiences 53
3.4.1. Study design 53
3.4.2. Results 54
3.4.3. Discussion 57
3.5. General discussion 58
3.5.1. Understanding the context 58
3.5.2. Phenomena influencing the design of behavior of a tour guide robot 59
3.5.3. Focus on user experience in human robot interaction 60

Chapter 4 - Translating human nonverbal behaviors to robots 63
4.1. Motivation 64
4.2. Related work; How humanlike nonverbal cues influence human-robot interaction 64
4.2.1. Influencing the interaction with robot gaze 64
4.2.2. Influencing the interaction with robot hand gestures 67
4.2.3. Influencing the interaction with robot bodily movement and orientation 69
4.3. Study 3: The influence of a low-anthropomorphic robot’s eye-gaze on interactions with multiple users 70
4.3.1. Hypothesis 71
4.3.2. Study design 72
4.3.3. Results 77
4.3.4. Discussion 79
4.4. Study 4; How does a tour guide robot’s orientation influence visitors’ orientations and formations? 80
4.4.1. Research question 81
4.4.2. Study design 81
4.4.3. Results 87
4.4.4. Discussion 88
4.5. General discussion 92

Chapter 5 - Beyond R2D2; Designing nonverbal interaction behaviors for robot-specific morphologies 95
5.1. Motivation 96
5.2. Related work; Designing nonverbal interaction behaviors for low-anthropomorphic robots 96
5.3. Research question 99
5.4. Two approaches towards the design of multimodal robot behavior 100
5.4.1. Approach 1: human-translated behavior 101
5.4.2. Approach 2: robot-optimized behavior 102
5.5. Methods for evaluation 102
5.5.1. Manipulation of the robot’s behavior in both studies 104
5.5.2. Method of the online video study (study 5) 104
5.5.3. Method of the in-the-wild study (study 6) 110
5.6. Results 114
5.6.1. Attention 115
5.7. Discussion 118
5.7.1. Discussion of the results 118
5.7.2. Design approach to develop robot behavior 119
5.7.3. Use of mixed methods for evaluation 120
5.8. Conclusion 120

Chapter 6 - Experiences with a low-anthropomorphic tour guide robot; A case study with FROG in the wild 123
6.1. Motivation 124
6.2. Visiting cultural heritage with a tour guide robot: A user evaluation study in the wild (study 7) 124
6.2.1. Study design 125
6.3. Results 130
6.4. Discussion 133

Chapter 7 - Introducing DREAM; Bringing back video analysis to human-robot interaction in the wild 137
7.1. Motivation 138
7.2. Related work 138
7.3. DREAM: annotating videos using thin slices 141
7.3.1. The pillars of DREAM 142
7.4. Dataset to validate DREAM 145
7.4.1. Study design 146
7.4.2. Use of DREAM 147
7.5. Validation of the method 149
7.5.1. Results of the comparison of DREAM and ‘counting predefined events’ 149
7.5.2. Experiences of annotators 151
7.6. Discussion 152
7.7. Conclusion and future work 154

Chapter 8 - Discussion and conclusions 157
8.1. Discussion of the main findings 158
8.1.1. To what extent are humanlike nonverbal behaviors effective when transferred to low-anthropomorphic robots that interact with people? (RQ1) 158
8.1.2. What is the best approach to design a consistent set of nonverbal communication behaviors for low-anthropomorphic robots? (RQ2) 159
8.1.3. In what way should the effect of robot behavior on people’s perception and experience be studied? (RQ3)
8.1.4. How should low-anthropomorphic robots display nonverbal behaviors in social interactions with people so that they are easy to understand and lead to positive experiences? (Main research question) 162
8.2. Implications for theory 164
8.2.1. Approaches to design robot behavior 164
8.2.2. Recognize overlap in the fields of design and human-robot interaction 165
8.3. Implications for future development of social robots 166
8.4. Future work 168
8.5. Conclusion 170

Appendix 173
Appendix A - Observation scheme guide behavior - study 1 173
Appendix B - Questionnaire - study 3 174
Appendix C - Annotation of participant’s attention - study 3 180
Appendix D - Coding scheme - study 4 182
Appendix E - Questionnaire - study 5 184
Appendix F - Interview questions - study 6 190
Appendix G - Coding scheme - study 6 191
Appendix H - Interview questions - study 7 197

Bibliography 199

List of publications 213
Relevant for this thesis 213
Additional publications 213

Chapter 1 - Introduction

In this chapter I will introduce the main goal of this thesis and provide the motivation for studying nonverbal behaviors for low-anthropomorphic robots. Furthermore, I will present the research questions that have guided the research. Next, I will describe the application context of the EU funded project FROG. Finally, I will give an overview of chapters for the remainder of the thesis as well as an overview of the research methods used.


1.1. The influence of nonverbal behavior on communication

R2D2 is a famous robot from the Star Wars movies. If we were to encounter R2D2 in the real world, we would probably not literally understand the beeps and movements it makes. This is because R2D2 does not closely resemble people in the way it looks or behaves. However, we might be able to deduce R2D2’s intentions from the way it moves, beeps, drives and otherwise behaves. This implicit understanding may be due to the tendency of people to project humanlike behavior onto non-human things such as robots (Duffy, 2003; Fink, 2012), their social responses to computers and technology in general (Nass & Moon, 2000; Reeves & Nass, 1996) and their inclination to interpret behavior of non-human things in a humanlike manner (Heider & Simmel, 1944; Ju & Takayama, 2009).

In the example of R2D2, it is not verbally given information that people are able to understand; rather, people interpret nonverbal communication signals to understand the situation. Nonverbal communication includes body movement, body posture, proxemics, gestures, facial expressions and tone of voice. It is the part of communication that does not use actual words to explain concepts.

It is known that nonverbal communication plays an important role in interaction. Several researchers have put forward that in human-human interaction between 60% and 93% of communication is nonverbal (Birdwhistell, 1990; Mehrabian, 1972; A. Pease & B. Pease, 2004). Also in human-product interaction, product and interaction designers develop nonverbal cues to inform users of how to use the product (Preece, Sharp, & Rogers, 2002). For example, a coffee machine uses a blinking or continuously shining light to show whether the coffee is ready. A ticket machine’s lights blink to indicate what the user should focus on next. By using nonverbal communication in machines, extensive written instructions are rendered unnecessary and first-time users can interact with the technology intuitively.

Although it is unlikely for us to encounter R2D2 in an everyday situation, it is expected that we will encounter similar service robots that perform tasks to support us in various settings. This could be a laundry robot in a hospital, a delivery robot in an office, or a guide robot in a museum. Because these machines will need to operate in populated environments and understand and interact with people intensively, the interaction becomes social rather than purely functional. For example, people expect robots to behave according to social rules and get annoyed if a robot does not (Mutlu & Forlizzi, 2008). This indicates that interaction has a social component when robots have to react to social situations, and their behavior will most likely be interpreted (a)socially by people. As a result, this inherently social component of interaction places human-robot interaction between the highly social human-human interactions and the more functional human-product interactions. Therefore, in order to work effectively in a typically human environment that is full of social encounters and social situations, robots, just like people, might benefit from adopting nonverbal communication behaviors.
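To make the idea of designed nonverbal cues concrete, the sketch below shows one way a designer might map a machine’s internal states to light cues, in the spirit of the coffee machine and ticket machine examples above. The state names, colors and the LightCue structure are illustrative assumptions, not part of any specific product or robot discussed in this thesis.

```python
# Illustrative sketch only: a hypothetical mapping from internal machine states to
# light cues, in the spirit of the coffee machine example above. The state names,
# colors and the LightCue structure are assumptions, not part of any real product.
from dataclasses import dataclass

@dataclass
class LightCue:
    color: str    # e.g. "green"
    pattern: str  # "steady", "blink_slow" or "blink_fast"

# Designer-chosen mapping: each internal state gets a distinct nonverbal cue.
STATE_TO_CUE = {
    "busy": LightCue("orange", "blink_slow"),           # still working (e.g. brewing)
    "ready": LightCue("green", "steady"),               # result is ready for the user
    "attention_needed": LightCue("red", "blink_fast"),  # user action required
}

def cue_for(state: str) -> LightCue:
    """Return the light cue a machine should display for a given internal state."""
    return STATE_TO_CUE.get(state, LightCue("white", "steady"))

print(cue_for("ready"))  # LightCue(color='green', pattern='steady')
```

The point of such a mapping is that each state gets one distinct, easy-to-read cue, so the user never needs written instructions to know what the machine is doing.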

1.2. Interacting with robots in everyday environments

Robots that perform tasks in everyday social environments include service robots for professional use or personal use. As was reported by the International Federation of Robotics (2015), in 2014 the total number of professional service robots sold (such as medical robots or milking robots) increased by 11.5% compared with 2013, to 24,207 robots. The number of service robots meant for personal and domestic use (such as robot vacuum cleaners or handicap assistance robots) sold in 2014 was 4.7 million, an increase of 28% compared with 2013. Sales of entertainment robots for personal use (such as LEGO Mindstorms or Furbies) increased by 40% between 2013 and 2014, to about 1.3 million robots sold in 2014. Moreover, the predictions for 2015-2018 are that about 152,400 service robots in the professional sector and about 35 million service robots in the personal sector will be installed by 2018 (International Federation of Robotics, 2015). This indicates an enormous growth of the service robot market in the coming years.
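As a quick sanity check on these figures, the 2013 baselines implied by the reported 2014 totals and growth rates can be back-calculated; the snippet below is only illustrative arithmetic based on the numbers quoted above, not additional data from the IFR report.

```python
# Back-of-the-envelope check: derive the implied 2013 sales from the 2014 totals and
# the reported year-on-year growth rates quoted above. Purely illustrative arithmetic.
figures_2014 = {
    # segment: (units sold in 2014, growth versus 2013)
    "professional service robots": (24_207, 0.115),
    "personal/domestic service robots": (4_700_000, 0.28),
    "entertainment robots": (1_300_000, 0.40),
}

for segment, (sold_2014, growth) in figures_2014.items():
    implied_2013 = sold_2014 / (1 + growth)
    print(f"{segment}: 2014 = {sold_2014:,}, implied 2013 ~ {implied_2013:,.0f}")
```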

All of these robots will, to some extent, encounter people when performing their tasks. Even the simplest robots that perform tasks in everyday work or home environments will need to be handled, fixed and worked or interacted with. Robots that people encounter or have to interact with in daily social environments will be held to the prevalent social norms in the environment of use. When a delivery robot passes people in a hallway, we might well expect it to move to the right to let us pass. We may get frustrated or annoyed if it positions itself in the middle or waits for a long time because it is stuck (Mutlu & Forlizzi, 2008). We might not understand if and how we can pass the robot, or we might find the robot annoying. Alternatively, when such a robot stands in line, it should maintain a proper distance from the other people in line (Nakauchi & Simmons, 2002). The robot should not come too close to the person in front of it and its behavior needs to unequivocally show that it is queuing. These examples illustrate that getting nonverbal behavior right is challenging and requires insight into and understanding of local social norms and customs.
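A hedged sketch of how one such norm might be encoded is shown below: the robot only advances in a queue while the gap to the person ahead exceeds a personal-space threshold. The threshold value and the function interface are assumptions for illustration, not parameters taken from the cited studies.

```python
# Illustrative sketch of one such norm: advance in a queue only while the gap to the
# person ahead exceeds a personal-space threshold. The threshold and the interface
# are assumptions for illustration, not values taken from the cited studies.
PERSONAL_SPACE_M = 1.2  # assumed comfortable distance to the person ahead (meters)

def queue_speed(distance_to_person_ahead_m: float, nominal_speed: float = 0.4) -> float:
    """Return a forward speed (m/s): close the gap slowly, or hold position."""
    if distance_to_person_ahead_m > PERSONAL_SPACE_M:
        return nominal_speed  # safe to move up
    return 0.0                # respect the person's personal space and wait

print(queue_speed(2.0))  # 0.4 -> move up
print(queue_speed(1.0))  # 0.0 -> wait
```

Even such a simple rule already communicates something nonverbally: by stopping at a respectful distance, the robot shows that it is queuing rather than pushing forward.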

Social behavior for robots is even more important and more complex when the interaction with people is more involved than passing or queuing. This is the case, for example, for a robot that guides people to their gate in a busy airport (Joosse et al., 2015), a robot that helps children to engage in tasks, such as tidying their rooms (Zaga, Lohse, Truong, & Evers, 2015), or a robot that autonomously guides people through a museum (e.g. (Jensen et al., 2002)). In order for these robots to offer effective, efficient, safe and enjoyable services to people, their behaviors need to be easy to interpret and the interaction needs to be natural.


Research into nonverbal communicative behaviors for social robots (discussed in Chapter 2) is often concerned with the development of humanlike robots that display humanlike behaviors to leverage familiarity and ease the interaction. However, there are limitations to this approach. While humanlike appearances and behaviors for a robot evoke natural interaction (Bischoff & Graefe, 2002) and people prefer to interact with robots in a way similar to the way they interact with other people (Fong, Nourbakhsh, & Dautenhahn, 2003), current capabilities of robots are limited. These technical limitations constrain the full imitation of human behavior (Duffy, 2003). Additionally, highly humanlike robots are complex, vulnerable, and expensive. Furthermore, for many tasks, a humanoid form for robots is probably not effective or efficient (e.g. a swimming lifeguard robot, dog walking robot). Therefore, it is imperative to consider robots that do not closely resemble people and explore alternative ways to design the nonverbal behavior of these robots beyond copying human behaviors.

Although an increasing number of social robots will enter our society in the coming years and these robots are envisioned to behave according to human social norms in order to ease the human-robot interaction in everyday environments, robots often lack humanlike modalities. Moreover, it is unknown what effective nonverbal behaviors for robots that do not closely resemble people should be. This means that important questions arise around the appropriateness of copying human behavior to robots and that new forms of robot interactive behavior need to be defined.

1.3. Approaches to design robots

In this thesis, I will disconnect the robot’s behaviors from its appearance. It is not strictly necessary that a biologically inspired robot shows only humanlike behavior, nor should a functional robot show only functional behavior. The terms that I will use in this thesis describe the appearance of the robot, while the focus of this thesis is on the exploration of effective nonverbal behavior that is optimized for the robot’s morphology.

To avoid ambiguity, I will adopt the term humanlike robot for highly anthropomorphic robots that closely resemble people in appearance and interaction modalities. Humanlike robots have features such as arms, a head, and facial features. They are able to imitate humanlike gestures, show humanlike facial expressions and full-body pose behaviors, as well as realistic humanlike head and gaze cues. I will use the term low-anthropomorphic robot for robots with bodies that are very different from human bodies and that consequently lack the physical properties to closely imitate humanlike behaviors. However, these robots may instead have interaction modalities that people do not have, such as screens to communicate information, colored lights to show intentions or beyond-human sensor capabilities.


This classification is comparable to the classification proposed by Fong et al. (2003), who distinguished between two main robot types. According to them, robots can be designed either based on biological inspiration or on functional design (Fong et al., 2003). Biologically inspired robots are highly anthropomorphic robots that are developed to physically resemble biological entities such as people or animals. As a result, such robots are designed and developed to simulate human physical, cognitive and social capabilities. The androids of Professor H. Ishiguro are good examples of biologically inspired robots (Ishiguro, 2007; Minato, Shimada, Itakura, Lee, & Ishiguro, 2006; Sakamoto, Kanda, Ono, Ishiguro, & Hagita, 2007). However, the development of Robovie V1 and its successors (see Figure 1.1) also led to quite humanlike robots (Ishiguro et al., 2001). Functionally designed robots are robots that are not inspired by the human form. Instead, their design is functionality-driven, and both the physical design and the interaction design are optimized for interaction with people. Examples of functionally designed robots are the HUG (DiSalvo, Gemperle, Forlizzi, & Montgomery, 2003) and the Care-O-Bot 3 (see Figure 1.2).

Nevertheless, most robots will not neatly fit into either the biologically inspired or the functionally designed class. In most cases influences from both classes are visible; for example, the functionally designed robot body of Snackbot carries a biologically inspired head and facial features (see Figure 1.3) (Lee et al., 2009). This shows that the classification of Fong et al. (2003) does not cover the full range of robots. Therefore, I will also refer to functionally designed robots that are provided with some biologically inspired features as low-anthropomorphic robots, because they are initially functionally designed and in most cases the seemingly biologically inspired features applied to those robots do not have the same functionality as they do in people.

Figure 1.1: Robovie R V2; a humanlike robot (taken from http://www.roboticstoday.com/robots/robovie-pc)

Figure 1.2: Care-O-Bot 3; a low-anthropomorphic robot (taken from (Mast et al., 2012))

Figure 1.3: Snackbot; a low-anthropomorphic robot with some humanlike features


Moreover, in their paper, Fong et al. (2003) focused mainly on biologically inspired robots, which according to them seems to be the most effective way to develop robots. This approach is strongly based on knowledge from Computer Science, Behavioral Sciences and Psychology. Nevertheless, Fong et al. (2003) acknowledge that other design approaches, such as those taken from Industrial Design and Interaction Design, have received far less attention. This indicates that robots used for social human-robot interaction are mainly biologically inspired and that knowledge on how to develop effective behaviors for functionally designed robots is scarce. In this thesis, I will address this shortcoming by focusing more on design approaches suitable for functionally designed low-anthropomorphic robots.

Social robots that do not have an anthropomorphic or low-anthropomorphic appearance, such as those resembling animals (zoomorphic robots) or mechanical platforms, will not be considered in the remainder of this thesis. While the existence of other biologically inspired robots such as zoomorphic robots is acknowledged, this thesis mainly focuses on humanlike aspects of robot and behavior design. Moreover, research in human-robot interaction is often carried out on early prototypes that look mechanical or machinelike (e.g. see (Hinds, Roberts, & Jones, 2004; Walters, Syrdal, Dautenhahn, te Boekhorst, & Koay, 2007)). While results from such studies are considered, these robot platforms are not included as examples of either humanoid or low-anthropomorphic robot types for the reason that they are unfinished concepts and would not be able to perform tasks in real-world environments.

1.4. Research questions

This thesis concerns the challenge of designing effective behaviors for robots that intensively interact with people in social environments. More specifically, in this thesis I focus on the exploration of effective nonverbal behaviors for low-anthropomorphic robots (robots that do not closely resemble people) in the context of guiding visitors at an outdoor tourist site.

It stems from the premise that effective methods to develop nonverbal behavior for low-anthropomorphic robots are scarce and therefore need to be studied. As previous studies have shown, people have the tendency to interpret movements of non-living objects, such as robots, in humanlike terms. This means that if a robot moves but its behavior is poorly designed or not designed at all, it will still be interpreted as intended behavior and will influence people’s reactions in the interaction. Therefore, it is imperative to understand the effects of nonverbal behaviors of low-anthropomorphic robots on human-robot interaction.

This led to the main research question:

How should low-anthropomorphic robots display nonverbal behaviors in social interactions with people so that they are easy to understand and lead to positive experiences?


To answer the main research question, several sub-questions also need to be answered. These are first of all born out of the lack of research into the effects of human behaviors transferred to low-anthropomorphic robots on users’ responses and experiences. This leads to the following research question:

RQ1: To what extent are humanlike nonverbal behaviors effective when transferred to low-anthropomorphic robots that interact with people?

The most commonly used approach to design social robot behaviors is to imitate human social behavior. However, it is as yet unknown what design approach is preferable for the design of nonverbal behaviors for low-anthropomorphic robots. Furthermore, while previous work addressed isolated robot behaviors, it is unclear how multimodal behaviors influence human-robot interaction. This leads to the following research question:

RQ2: What is the best approach to design a consistent set of nonverbal communication behaviors for low-anthropomorphic robots?

Finally, the design of behaviors for robots that are to be deployed in real world everyday environments requires methodological approaches beyond laboratory studies or technology probe-type explorations. This leads to the final research question:

RQ3: In what way should the effect of robot behavior on people’s perception and experience be studied?

Throughout the work presented in this thesis, these questions have guided the direction of the research. Additionally, the research design was influenced by the real world context in which the research was performed. In the next section, I will explain how the context influenced the research presented in this thesis.

1.5. Research context

The studies presented in this thesis were carried out in the context of the EU 7th Framework project FROG (Fun Robotic Outdoor Guide), which ran from 2011 to 2014. The goal of the FROG project was to develop an outdoor guide robot with an engaging personality for the exploration of outdoor attractions. In order to inform the design of the robot’s interactive behaviors, specific functionality, navigation behaviors and synthetic personality, several fields of expertise were involved in the FROG-project. These were vision-based detection, robotics design and navigation, human-robot interaction, affective computing, intelligent agent architecture, and dependable autonomous outdoor robot operation. All partners contributed with their expertise and combined their knowledge to create the robot platform FROG.


The robot used in three of the empirical studies of this thesis was the FROG robot that was developed within the FROG project. Even though this robot turned out to be green, had big eyes and was called FROG, it was designed as a functional robot to guide visitors and was not biologically inspired by frogs. Moreover, the robot’s role was unrelated to being a frog; only its name and appearance loosely referred to frogs, and it did not display any frog-like behaviors. Therefore, I did not expect a zoomorphic effect, that is, I did not expect people to expect frog-like behaviors from the robot.

Although the FROG project was the starting point for this thesis, the studies performed and topics used were carefully selected to find the best way to develop effective multimodal nonverbal behavior for a low-anthropomorphic robot, such as FROG. Through this research line, I informed the nonverbal behavior design of the robot in the FROG project. Moreover, by doing these studies, I gained a lot of knowledge on behavior design practices and real-world research methods in human-robot interaction and how these might be improved, which goes beyond the scope of the FROG project.

The development of an autonomous robot tour guide that engages visitors in a fun exploration of a tourist site offers a relevant case study for the application of nonverbal robot behaviors. The FROG project provided a real-world context and a low-anthropomorphic platform, and relates to a body of previous work on the effectiveness of humanlike behaviors, which made this context very suitable for exploring nonverbal communication behavior for a social robot.

1.5.1. Application areas for the FROG project

Within the FROG project, two areas of application and related real-world test locations were chosen. The two environments that combined an outdoor character with many visitors were a zoo and a famous historical site. The test locations were the Lisbon City Zoo in Lisbon, Portugal and the Royal Alcázar, a historical royal residence in Seville, Spain. Both sites offer interesting and challenging opportunities for robot guides to guide visitors. Moreover, both sites attract international visitors, which makes it possible to generalize the findings to other places as well.

The Lisbon City Zoo is a park showing several species of wild animals to the public that aims to educate people about nature and animals. Yearly, approximately 800,000 visitors (national and international) visit the Zoo. The website of the Lisbon City Zoo (http://www.zoo.pt/) provides information about the history and the mission of the Zoo. The Zoo was founded in 1884, and since 1905 it has been at its present location. The mission of the Lisbon City Zoo is to educate visitors, but also to develop both a zoological and botanical park, as a center for conservation, breeding and reintroduction of endangered species through scientific research and environmental enrichment programs. Accordingly, in 1996 a Teaching Center opened where the Zoo provides free education to school classes to raise awareness of the problems of the populations of species and the environment. Currently, the Zoo houses over 2000 animals of 300 different species, including mammals, birds, reptiles and amphibians. Whereas animals were merely showcased in the earlier years, the Zoo now takes an active role in the protection and conservation of nature. This means that animals live in enriched environments to stimulate the natural behaviors of each species.

The guides who gave the tours in the Lisbon City Zoo are trained and employed by the Zoo. Visitors of the Zoo are mainly families with one or two (young) children, couples with or without children, school classes and groups of friends. See Figure 1.4 for an impression of the Lisbon City Zoo.

The second site selected to perform studies was the Royal Alcázar in Seville (Spain). The Royal Alcázar is a royal residence; however, the royal family does not live in its buildings. Yearly, about 1,200,000 visitors visit the Royal Alcázar, ranging from 5,500 visitors a day in the busiest period in May to 1,500 visitors a day in January. Visitors stay on average for 90 minutes at the site, and the most popular location to visit is the Mudéjar Palace. The first building was built in the ninth century, and over the ages Christians and Muslims have built, destroyed and rebuilt the buildings on the site. Here, visitors can see how the Christian and the Muslim architectural styles are mixed.

Guides who offer tours in the Royal Alcázar work for independent agencies or are self-employed. All guides must have certification, but the board of the Royal Alcázar does not regularly check on the guides. The guides who give tours here do not only guide visitors through the Royal Alcázar, but also offer guided city tours through Seville. Usually, a visit to the Royal Alcázar is part of such a city tour. Visitors of the Royal Alcázar are mostly couples (with older children), groups of tourists and school classes. See Figure 1.5 for an impression of the Royal Alcázar.


1.6. Overview of chapters

In this section I will describe the content of the chapters and the methods used for the studies.

In chapter 2, I will discuss the state of the art in social robot research and social robot design that informed the studies in this thesis. First, I will focus on theories and frameworks that influence the design of appearance and behaviors of social robots. Afterwards, I will review studies that have been performed with tour guide robots. This led to insights into how the design of behavior for robots is currently influenced, as well as insights into the state of the art in effective behaviors for guide robots.

In chapter 3, I will focus on understanding the context of tourist sites. For this purpose, two studies have been carried out. The first study was carried out to gain insight into the effect of human tour guide behaviors (study 1). In the second study, the focus was on understanding the visitor experiences of visiting a site with and without tour guides (study 2). This was done to gain rich insights into effective behavior strategies in human guiding and current visitors’ experiences of the sites.

In chapters 4–6, I will present five iterative studies of human-robot interaction. In chapter 4, I will present two studies with low-anthropomorphic tour guide robots that used copied humanlike behavior to explain points of interest. For these robots, a single modality was manipulated at a time. In the first study, gaze behavior was manipulated, while in the second study orientation behavior was manipulated. These studies led to insights into whether and how humanlike behavior could be used for low-anthropomorphic robots. In chapter 5, I will explore an approach to develop behavior for low-anthropomorphic robots. By using this approach to develop nonverbal behavior for low-anthropomorphic robots, the behavior will be optimized for the robot’s morphology and the modalities that the robot has. Results of the two studies presented in this chapter give insights into how people react to robot-optimized behavior compared to human-translated behavior in terms of attention towards the robot and attitude towards the robot.

In chapter 6, I will describe a user evaluation of tours given by FROG. Results provide insight into how people experienced the robot-optimized behaviors designed for FROG in real-world robot-guided tours.

Two of the studies presented in chapters 4 and 5 were performed in controlled settings. These were a controlled lab study into the effects of robot gaze behavior on the responses of participants (study 3) and a video study to assess the influence of robot multimodal behavior on participants’ understanding of the behavior (study 5). These controlled studies allowed me to study user responses to specific nonverbal behaviors of the robot in a controlled way and to compare participants’ evaluations of two conditions of robot behavior. The other two studies of these chapters (studies 4 and 6) were semi-controlled comparisons between two conditions, performed in an in-the-wild setting. These were performed to gain insights into the effects of robot orientation on user responses and experiences (study 4) and to assess the effects of different sets of multimodal behavior on participants’ experiences (study 6). The in-the-wild approach allowed the observation of natural and original responses to the robot. The data from these studies was less controlled, but offered rich insights into people’s experiences. The study presented in chapter 6 (study 7) was performed in an in-the-wild setting with a fully autonomous robot. This allowed the observation of actual interaction between the robot and real visitors and interviewing of visitors who interacted with the robot. See Table 1.1 for an overview of the studies that I performed in relation to the research methodologies.

Table 1.1: Overview of studies presented in this thesis

Efficiency of behavior (fully controlled – two conditions)
Single-modality study: Gaze behavior in the lab (chapter 4 – study 3)
Multimodality study: Online video study of multimodal behavior for FROG (chapter 5 – study 5)
Collected data: Questionnaire data on participants’ perception of robot behavior

Experience of behavior (unstructured – two conditions; analyzed with DREAM)
Single-modality study: Orientation behavior in the wild (chapter 4 – study 4)
Multimodality study: In-the-wild study of multimodal behavior for FROG (chapter 5 – study 6)
Collected data: Observation of visitor reactions and short interviews with visitors

Experience of a full FROG tour (uncontrolled – one condition; analyzed with DREAM)
Study: FROG evaluation in the wild (chapter 6 – study 7)
Collected data: Visitor speech during tour, observations and extensive interviews with invited participants and short interviews with spontaneous visitors

In chapter 7, I will introduce DREAM, the Data Reduction Event Analysis Method, which I developed and evaluated to analyze the rich and unstructured interactions that were video recorded during the semi-controlled in-the-wild studies. I found that using DREAM resulted in reliable findings for objectively observable actions and changes in situations. This offers opportunities to learn from rich and valuable real-life data to create even better future robots.
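As a rough illustration of the data-reduction idea that thin-slice video annotation builds on, the sketch below samples short, evenly spaced slices from a long recording so that only those slices need to be annotated. The slice length and spacing are arbitrary illustrative choices; this is not the actual DREAM procedure, which is described in chapter 7.

```python
# Generic thin-slice sampling: annotate only short, evenly spaced slices of a long
# video instead of every frame. Slice length and spacing are illustrative choices,
# not parameters of DREAM itself.
def thin_slices(video_length_s: float, slice_length_s: float = 10.0,
                every_s: float = 60.0) -> list:
    """Return (start, end) times in seconds of the slices an annotator would watch."""
    slices = []
    start = 0.0
    while start < video_length_s:
        slices.append((start, min(start + slice_length_s, video_length_s)))
        start += every_s
    return slices

# A one-hour recording reduces to 60 ten-second slices (10 minutes of annotation work).
print(len(thin_slices(3600)))   # 60
print(thin_slices(3600)[:2])    # [(0.0, 10.0), (60.0, 70.0)]
```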

In chapter 8, I will discuss the results. Furthermore, I will discuss what implications the findings presented in this thesis have for theory and practice. I will close the thesis with directions for future work.

Chapter 2 - Background and related work

In this chapter I will describe the essential background information for the overall thesis and offer working definitions for the main concepts in social robotics. Then, I will describe relevant theories in human-robot interaction. Next, I will give an overview of tour guide robot platforms developed to date and a review of previous research into the effects of tour guide robot behavior design on human-robot interaction. Finally, the main methods towards approaching the design of social robot behavior will be discussed and the main framework of practice for this thesis will be provided.


2.1. Main concepts and definitions

This thesis concerns the challenge of designing effective behaviors for robots that intensively interact with people in social environments (see Chapter 1). The robots this thesis focuses on generally have the ability to detect and interpret human behavior in typical everyday environments such as offices, shopping malls, museums, and hospitals. Moreover, these robots are able to operate in such inherently social environments: they can detect and interpret people’s behavior, they can navigate autonomously while taking into account the people in the environment, they are able to adapt their behaviors to people’s responses and situations, and they can engage in prolonged interactions with people. Because of the abilities of such robots to operate in, deal with and adapt to the social environments in which they are deployed, these robots are considered social robots.

Several human-robot interaction researchers have proposed definitions to describe the concept of social robots. For example, Breazeal (2003) describes social robots as the class of robots to which people attribute humanlike characteristics in order to interact with them. Fong, Nourbakhsh and Dautenhahn (2003) use the term socially interactive robots to describe robots for which social interaction plays a key role. Furthermore, Hegel, Muhl, Wrede, Hielscher-Fastabend, and Sagerer (2009) argue that people attribute social qualities to a robot if the robot contains ‘a robot and a social interface’. From these definitions, we can conclude that social robots are robots that have interactions with people in social situations. What the definitions do not describe is how the robot should communicate and what the flow of social interaction between a robot and a person should be like. Therefore, these definitions do not address the main concerns of this thesis.

Bartneck and Forlizzi (2004) also proposed a definition for social robots in which they focused more on social situations. They define a social robot as follows: ‘A social robot is an

autonomous or semi-autonomous robot that interacts and communicates with humans by following the behavioral norms expected by the people with whom the robot is intended to interact.’ This indicates that in order to have meaningful interactions with people, a social

robot should be able to read and process social behavior of people as well as react to it in normative and social ways. A limitation of this definition is that the concepts of social and normative behavior are not further defined. Also, this definition does not address how people might experience interacting with a social robot.

Taking the above definitions into account, the working definition of social robots for this thesis should focus on adhering to the behavioral norms and achieving effective and fun interaction with people in a social context. Regardless of the robot’s morphology and regardless of the interaction modalities it has at its disposal, it should effectively engage with people to achieve specific outcomes related to the service it offers and elicit positive user experiences. Therefore, the working definition proposed for this thesis is:

‘Social robots are robots that engage and interact with people, effectively display multi-modal behaviors and take the social context into account to achieve a desired interaction outcome and user experience in everyday social environments.’

Interestingly, several authors have suggested that in order for a robot to be ‘social’, it requires humanlike features and imitations of human communication behavior (e.g. (Bartneck & Forlizzi, 2004; Breazeal, 2003; Fong et al., 2003; Hegel et al., 2009)). According to this view, only robots that resemble people in appearance and that can imitate human communication behavior should be used as social robots. However, the current state of the art in robot technology constrains opportunities to fully imitate people with robots (Duffy, 2003). Therefore, in this thesis I argue against the notion that human-likeness is essential to create an effective social robot. Instead, I propose that careful design of the behavior of low-anthropomorphic robots can offer an equally good or even better human-robot interaction experience.

The definition that I propose is not limited to humanlike robots but includes robots that do not look like people at all. As long as the robot offers an effective interaction and positive user experience and is able to interact with people in an environment which is inherently social, it is considered a social robot. Consequently, this definition does not restrict social robots to robots that look like and/or behave like people. Rather, it leaves room to explore alternative manifestations of both appearance and behavior in social robots.

To better understand the processes that determine people’s perceptions of, attitudes towards and responses to social robots and social robot behavior, in the next section I will discuss the most relevant theories from Computer and Social Sciences. Also, I will discuss design approaches from Product and Interaction Design, to broaden the view of human-robot interaction from mainly biologically inspired robots to other forms of socially interactive robots.

2.2. Important theories for human-robot interaction and social robotics

In this section I will discuss five relevant theories and frameworks that can inform the design of social robots and their behaviors. First, I will briefly describe the notion of Anthropomorphism and several studies that explore animacy in non-living objects. Second, I will discuss Mori’s Uncanny Valley Theory. Third, I will discuss Reeves and Nass’ Media Equation and its consequences for the design of a robot’s interactive behavior. Fourth, I will discuss the theory of Product Personality and the product personality scale that Mugge, Govers and Schoormans developed. Finally, I will describe Norman’s Levels of Design in relation to human-robot interaction and the design of robot behaviors.

Anthropomorphism is the act of attributing humanlike characteristics, motivations, intentions, and emotions to non-human organisms and objects, and is considered to be an inherent tendency of human psychology (DiSalvo & Gemperle, 2003; Duffy, 2003; Epley, Waytz, & Cacioppo, 2007). This phenomenon can be leveraged by purposefully designing humanlike characteristics onto objects. Nevertheless, the tendency is also present when human characteristics, such as intentions and emotions, have not been designed or programmed for a robot. Examples of anthropomorphism are feeling sad for a robot that is being dismantled, or thinking that a robot feels sad because you insulted it.

Anthropomorphism seems to be evoked even by the simplest of objects. For instance, Heider and Simmel (1944) observed that participants assigned intentions and emotions to moving triangles and dots in a simple animation, whereas no intentions or emotions were attributed when people only saw a still image of the triangles and dots (Heider & Simmel, 1944). Michotte (1963) performed studies with two moving balls and showed that people described some of the movements of the balls as factual, while for other movements people attributed motivations, emotions, relationships and even gender and age to the two balls. Furthermore, Ju and Takayama (2009) showed that people interpreted movements of doors that open automatically as intentional gestures. This suggests that people read intentions, motivations and emotions into the behavior of non-living things that do not look like people. For the field of human-robot interaction this means that not only a robot's humanlike features or overall humanlike appearance may elicit anthropomorphism; the behavior of robots can evoke anthropomorphism too (DiSalvo, Gemperle, Forlizzi, & Kiesler, 2002). The above studies also strengthen the argument made previously that robots do not need humanlike characteristics in appearance to evoke attributions of emotion or intentions. Nevertheless, several researchers have argued that human-likeness is essential for social robots. Epley et al. (2007) argue that the purposeful use of humanlike behavior and appearance for technological agents can benefit the interaction. They state, for example, that human-likeness can enable a sense of efficacy with technological agents, and that it may increase social bonding with technology, which in turn might increase liking of the technology (Epley et al., 2007). Moreover, Waytz, Cacioppo and Epley (2010) state that anthropomorphic technology leads people to follow social rules in interaction with the technology. Further, such technology was perceived as more understandable, predictable, intelligent and credible. It was found to increase engagement, appeared more effective in collaborative decision-making tasks, and was preferred because it could express emotions in a humanlike fashion (Waytz et al., 2010). However, they also state that humanlike appearance and behavior for computers and robots has its limitations. For instance, they found that people were more likely to blame humanlike technology when it malfunctioned, that people felt less responsible for success, and that humanlike technology generated inappropriate expectations of its capabilities (Waytz et al., 2010).
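Returning to the motion studies at the start of this paragraph, the sketch below shows one hypothetical way in which the timing of a single, simple action, an automatic door opening, could be parameterized so that the same physical result reads as 'eager' or 'hesitant'. The function name, parameters and easing curves are illustrative assumptions of mine; they are not the setup used in Ju and Takayama (2009) or any of the other studies cited above.

```python
# Purely illustrative sketch (my own assumptions, not taken from the cited
# studies): the same door-opening action with different timing profiles,
# which observers might read as different "intentions".
import numpy as np

def door_opening_profile(style: str, steps: int = 50) -> np.ndarray:
    """Return door opening angles (0-90 degrees) over time for a motion style."""
    t = np.linspace(0.0, 1.0, steps)
    if style == "eager":
        # Quick start that eases out: the door appears to invite the visitor in.
        progress = 1.0 - (1.0 - t) ** 3
    elif style == "hesitant":
        # Slow start with a brief hold halfway: the door appears reluctant.
        progress = t ** 3
        progress[steps // 2 : steps // 2 + 5] = progress[steps // 2]
    else:
        # Neutral: constant opening speed.
        progress = t
    return 90.0 * progress

# Both profiles end fully open; only the timing differs.
eager_angles = door_opening_profile("eager")
hesitant_angles = door_opening_profile("hesitant")
```

The point is not these specific curves but that identical end states can carry very different perceived intentions, which is exactly what the studies above suggest people read into simple motion.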

An important question, therefore, is: how humanlike should a robot's morphology be? This question is partly answered by Lee, Šabanović and Stolterman (2016), who interviewed potential users about their responses to 27 robots. Some of these robots had humanlike appearances, while others were minimalistic in form. They found that participants did not refer to robots as being humanlike or non-humanlike as a whole. Instead, they referred to parts of the robot as being humanlike (e.g. the hands, the eyes or the body). More importantly, participants evaluated how well humanlike features would fit in their everyday use contexts. This could indicate that humanlike appearances of robots are evaluated positively by users only in specific contexts of use, and that a humanlike appearance is not necessary to create social robots (Lee et al., 2016).

Duffy (2003) emphasized that it is important to separate the use of humanlike features from people's anthropomorphic tendencies. The use of humanlike features concerns the robot's form and function, such as a humanlike posture or the expression of emotions through a dynamic face. People's tendencies to anthropomorphize influence how these features are perceived, and how people interpret the form and behavior of the robot. This tension between form and function is further addressed in the Uncanny Valley theory.

Mori (1970) argues in his Uncanny Valley theory that the more a robot resembles a human, the more familiar people will feel with it (see Figure 2.1 for a graphical representation of the uncanny valley). This familiarity would in turn benefit the interaction with the robot. However, he also posits that robots which resemble people so closely that they are difficult to distinguish from people are perceived as uncanny, because they are not quite real humans. This effect is called the uncanny valley (Mori, MacDorman, & Kageki, 2012). Moreover, the effect is thought to be exacerbated when robots start moving, because movement reveals even more clearly the limitations of a robot that closely resembles a human (Mori et al., 2012). MacDorman (2006) has shown through multiple studies that the comfort or discomfort people feel when interacting with a robot is likely influenced by more factors, and more complex ones, than familiarity alone. A major factor is thought to be the discrepancy between the way the robot looks and the way it moves or behaves. For example, a mismatch between auditory and visual stimuli can elicit this feeling of eeriness (Meah & Moore, 2014). Similarly, an extremely humanlike robot will become creepy when its movements are jerky. In contrast, a simple box-like robot may seem surprisingly intelligent if it shows abilities beyond people's expectations.
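As a rough visual aid, the sketch below plots hypothetical affinity curves with the dip described above. The functional form and numbers are invented for illustration only; they are not Mori's data or any published model, and the deeper dashed dip merely echoes the conjecture that movement exacerbates the effect.

```python
# Hand-tuned, illustrative curves only: not Mori's data and not a validated
# model of the uncanny valley. They simply visualize a rise in affinity with
# human-likeness, a dip near (but not at) full human-likeness, and a deeper
# dip hypothesized for moving robots.
import numpy as np
import matplotlib.pyplot as plt

human_likeness = np.linspace(0.0, 1.0, 400)

def illustrative_affinity(x: np.ndarray, valley_depth: float) -> np.ndarray:
    rise = x ** 1.5                                          # affinity grows with human-likeness
    dip = valley_depth * np.exp(-((x - 0.85) ** 2) / 0.003)  # sharp dip near 85% human-likeness
    return rise - dip

plt.plot(human_likeness, illustrative_affinity(human_likeness, 0.9), label="still (illustrative)")
plt.plot(human_likeness, illustrative_affinity(human_likeness, 1.4), "--", label="moving (illustrative)")
plt.xlabel("human-likeness (arbitrary units)")
plt.ylabel("affinity / familiarity (arbitrary units)")
plt.title("Illustrative uncanny-valley-shaped curves")
plt.legend()
plt.show()
```

Whatever its precise shape, the valley underlines the importance of keeping a robot's appearance and its behavior aligned.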

This notion of aligning form and function can also be found in product design, where a product needs to display behaviors that correspond to users' expectations. For example, a tiny vacuum cleaner that generates an incredible amount of noise might not meet the user's expectations. The discrepancy between the visual and auditory characteristics of the product might lead to confusion or even irritation. Instead, an optimal match in which product characteristics correspond with the intended, overall experience may lead to more preferred products (Hekkert, 2006; Ludden & Schifferstein, 2007).

Closely connected to people's tendency to assign human traits to non-human entities is the Media Equation: people respond to media as they would to a person (Reeves & Nass, 1996). Reeves and Nass (1996) argue that individuals' interactions with computers, television, and new media are fundamentally social and natural, just like interactions in real life, even though people are not aware of these responses. Moreover, another study showed that even when robots are not humanlike in appearance, people tend to interpret the movements of the robots as social actions (Terada, Shamoto, Mei, & Ito, 2007).

While researchers have argued that people are likely to respond socially to robots because of their anthropomorphic form (Fong et al., 2003; Hinds, Roberts, & Jones, 2004; Mirnig, Riegler, Weiss, & Tscheligi, 2011; Sardar, Joosse, Weiss, & Evers, 2012; Siegel, Breazeal, & Norton, 2009; Syrdal, Dautenhahn, Woods, Walters, & Koay, 2007; Walters et al., 2007; Wang et al., 2010), Nass and Moon (2000) have stated that these social responses are not triggered by the anthropomorphic form of technology. They instead argue that people seem to react socially to several characteristics of the technology, such as a voice for output, interactivity, and taking on roles that were traditionally carried out by people (Nass & Moon, 2000). Moreover, the desktop PCs used by Nass and colleagues in their studies into politeness and reciprocity towards computers did not have a humanlike appearance, yet people did react to these computers socially (Nass, Fogg, & Moon, 1996). This again strengthens the argument that robots need neither to look nor to behave in a strictly humanlike manner to be perceived as, and responded to, socially.

It has been suggested that the behavior of a social robot should be consistent over time, so that people can predict what will happen next (e.g. (Duffy, 2003; Kim, Kwak, & Kim, 2008; Walters, Syrdal, Dautenhahn, te Boekhorst, & Koay, 2008; Woods, Dautenhahn, Kaouri, Boekhorst, & Koay, 2005)). Moreover, these researchers state that a personality profile can be developed for the robot to ensure that its behavior is perceived as consistent over time and to facilitate predictable and understandable actions. The personality profiles that are currently created for robots are mainly based on human personality measures such as the Big Five inventory (Digman, 1990) and the MBTI (Myers-Briggs Type Indicator) (Myers, Kirby, & Myers, 1993), and on inventories derived from these. However, a disadvantage of using measures from human psychology for research into robot personality is that not all human characteristics can be meaningfully translated to robots. Even though one of the Big Five dimensions, extraversion, is clearly recognizable in robots, the other four dimensions (conscientiousness, neuroticism, openness and agreeableness) were found to be more difficult to apply to and to recognize in robots (Lippa & Dietz, 2000). An alternative approach to create consistent behavior and a personality for a robot could be to look at how people experience the personalities of products. People use human personality characteristics to describe their impression of a product; this notion has been used to define and study product personality (Govers, 2004). Product personality describes the overall impression that a product makes on someone and can be influenced by different product characteristics such as its sensory properties, its perceived quality and its behavior.
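To illustrate how a designed personality profile could be checked against users' impressions, the sketch below compares a target profile with averaged user ratings on a few example trait dimensions. The dimension names are taken from the examples discussed below, while the 7-point rating scale, the ratings themselves and the recognition threshold are hypothetical choices of mine; the sketch only shows the general idea of comparing intended and perceived personality.

```python
# Hypothetical example: compare a robot's intended personality profile with
# averaged user ratings. The 1-7 scale, the ratings and the 1.0-point
# threshold are illustrative assumptions, not a published procedure.
from statistics import mean

designed_profile = {"cheerful": 6.0, "relaxed": 5.0, "provocative": 2.0, "honest": 6.5}

user_ratings = {
    "cheerful":    [6, 5, 6, 7],
    "relaxed":     [4, 3, 4, 5],
    "provocative": [2, 2, 3, 1],
    "honest":      [6, 6, 7, 6],
}

def recognition_report(designed: dict, ratings: dict, threshold: float = 1.0) -> dict:
    """Mark each dimension as 'recognized' if the mean rating is within the threshold."""
    report = {}
    for dimension, target in designed.items():
        perceived = mean(ratings[dimension])
        report[dimension] = {
            "target": target,
            "perceived": round(perceived, 2),
            "recognized": abs(perceived - target) <= threshold,
        }
    return report

for dimension, result in recognition_report(designed_profile, user_ratings).items():
    print(dimension, result)
```

In practice one would of course use a validated instrument and proper statistics; the sketch only conveys the comparison logic.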

Theory on product personality has been used as a tool to design for effective and specific experiences (Govers, Hekkert, & Schoormans, 2003; Janlert & Stolterman, 1997). Moreover, Mugge, Govers and Schoormans (2009) developed a scale to define and evaluate a product's personality. They found that a product's personality (including appearance, behavior, sounds, smell and interaction; the entire experience of a product) can be described by 20 distinguishable dimensions, such as cheerful, relaxed, provocative and honest. These dimensions can be used by designers to design a personality for a product, and to test whether the designed personality is recognized by the users of the product. All these dimensions are applicable to products; it could therefore be
