
The Influence of a Robot’s Embodiment on Trust: A Longitudinal Study

Anouk P.R van Maris 4044762

anoukvanmaris@student.ru.nl

Internal supervisor: Dr. Beata J. Grzyb
External supervisor: Dr. Hagen Lehmann
Second assessor: Dr. ir. Johan H.P. Kwisthout

Department of Artificial Intelligence

Extended Research Project in partial fulfillment of the requirements for the degree of Master of Science in Artificial Intelligence

(2)

Abstract

Trust, taken from the human perspective, is an essential factor that determines the use of robots as companions or care robots, especially given the long-term character of the interaction. This study investigated the influence of a robot’s embodiment on people’s trust over a prolonged period of time, where embodiment entailed the distinction between a physical robot and a virtual agent. The participants engaged in a collaborative task in 10 sessions spread over a period of 6 weeks, with either a physical robot or a virtual agent. While our results showed that the level of trust was not influenced by the type of embodiment, and no interaction between time and embodiment was found, time alone was an important factor, showing a significant increase in users’ trust. Behavior analysis showed that participants at first felt significantly more uncomfortable interacting with a physical robot than with a virtual agent. This trend changed over time, as eventually participants interacting with the physical robot felt more comfortable than participants who interacted with the virtual agent. We emphasize the importance of long-term HRI studies and the use of objective measurements in these studies. The results found in this research project raise new questions on the role of embodiment in trust and contribute to the growing research in the area of trust in human-robot interaction.

(3)

Contents

1 Introduction
2 Background
  2.1 Human-Robot Interaction
  2.2 Human-Robot Trust
    2.2.1 Antecedents of Human-Robot Trust
  2.3 Robot Embodiment
  2.4 Robot Habituation
  2.5 Conclusion
3 Method
  3.1 Participants
  3.2 Procedure
  3.3 Experimental Setup
  3.4 Task: Blank Map Game
  3.5 Metrics
    3.5.1 Trust Game
    3.5.2 Questionnaire
    3.5.3 Behavior Analysis
  3.6 Architecture
    3.6.1 Implementation of the Experiment
    3.6.2 Implementation of the Trust Game
  3.7 Tools
  3.8 Virtual Agent
4 Results
  4.1 Trust Game
  4.2 Questionnaire
  4.3 Behavior Analysis
    4.3.1 Statistical Analysis
    4.3.2 Descriptive Analysis
  4.4 Correlations
    4.4.1 Questionnaire
    4.4.2 Behavior Analysis
    4.4.3 Personality
    4.4.4 Correlation between Subjective and Objective Measures
5 Discussion
6 Conclusion
7 Acknowledgements


Chapter 1

Introduction

The level of intelligence exhibited by social robots and their actual level of intelligence may not always match. This discrepancy may provoke inappropriate user expectations of a robot’s intelligence and abilities, which has given rise to growing research in social robotics from the user’s perspective. Particularly important for effective human-robot interaction is the user’s trust in robots, as trust directly affects the outcome of an interaction between a human and a robot [1].

One possible factor that influences trust is the robot’s embodiment, i.e. whether the robot has a physical body or is simulated (a virtual agent shown on a screen). For example, Rae et al. [2] have shown that embodiment has an influence on trust.

Another important factor that may influence trust is the amount of experience that users have in interacting with a robot. It has been shown that time influences how a robot is perceived, since users’ preferences regarding robot attitudes and appearance changed over time [3].

Combining the factors mentioned above provides us with the following research question:

“How is trust in a robot affected by its embodiment during a long-term human-robot interaction?”

To investigate this question, two components are examined. The first is whether a robot’s embodiment (a physical robot versus a virtual agent) has an influence on the user’s trust. Second, time is added as a component, as the level of experience may have an impact on trust.


A longitudinal study was performed, in which all participants had a total of ten interaction sessions with a physical robot or a virtual agent over a period of six weeks. Following the study by Rae et al. [2], we hypothesize that users in our study would trust a virtual agent more than a physical robot, as people are more familiar with a virtual environment than with interacting with robots (participants were selected for having little to no experience with robots), and familiarity has an impact on trust [4]. However, we believe that this difference would diminish over time with increased exposure to the physical robot. This thesis aims to contribute to research in Human-Robot Interaction (HRI) in two respects. First, it aims to contribute to research regarding embodiment: research into differences and/or similarities between physical robots and virtual agents. Second, it is intended to provide more information regarding the importance of long-term studies in HRI.

The remainder of this thesis is organized as follows. The next chapter describes aspects that are fundamental to this research. First, the development of human-robot interaction is introduced. Second, the importance and antecedents of human-robot trust when engaging in human-robot interaction are discussed. The chapter continues with the influence of robot embodiment on one’s perception of the robot and previous findings in this area. The fourth background section elaborates on the possible influence of time on human-robot trust by reviewing previous research and its results. The chapter finishes with a brief conclusion and an explanation of why this particular research is important for human-robot interaction.

Chapter three elaborates on the experiment that was performed to answer the research question. First, information is given regarding the people who participated in the experiment, after which the procedure of the experiment is discussed. The design of the experiment is then introduced, together with the measurements used in this project to gather data.

The results found after analyzing the data gathered through the measurements discussed in chapter three are shown in chapter four. These results are discussed in chapter five, together with limitations and future research. Finally, chapter six evaluates the project and its overall results.


Chapter 2

Background

This chapter first discusses Human-Robot Interaction and the importance of research into human-robot trust. It is followed by a general note on trust, before moving on to research regarding robot embodiment. Finally, robot habituation and the influence of time are considered.

2.1 Human-Robot Interaction

HRI as a research field emerged in the late 1990s when researchers from different backgrounds (e.g. robotics, psychology, natural language) came together to collaborate [5].

A definition of a social robot is given by Dautenhahn and Billard, as being an embodied agent that is part of a society of robots and/or humans. These agents can recognize each other and join a social interaction and are able to communicate with and learn from one another [6]. However, the necessity of embodiment, which would imply that a virtual agent cannot be a social robot, is questioned. It is claimed that, when no physical objects are required for the interaction (which will be the case in this project), the social capabilities are far more important than the presence of a physical body [7].

A socially interactive robot can be described as a robot for which human-robot interaction is important [8]. A special case of a socially interactive robot is an assistive (care) robot; examples are robots that support caregivers in elderly care [9, 10] or that work with children with ASD [11, 12, 13]. Assistive robotics is an area within HRI that demands a large amount of user trust, and it is claimed to be a high-profile area of human-robot interaction [5]. The next section provides a more in-depth explanation of this topic.


2.2 Human-Robot Trust

For human-robot interaction to be successful, especially when social robots are deployed as care robots (e.g. in elder care or for children with ASD), a certain level of trust is required, as trust directly affects the outcome of an interaction between a human and a robot [1]. This human-robot trust depends on several factors, such as the robot’s appearance and its proximity to its user [1]. A disproportionate level of trust may have negative consequences, such as misuse or disuse of the robot [14, 15, 16, 17]. Misuse occurs when a robot is trusted too much and is expected to do more than it is capable of [18]. Disuse is the phenomenon where a robot is not trusted enough and in consequence is not used, even though using the robot would have provided a better result [19]. Olsen and Goodrich [20] stated that the instructions the user gives to the robot depend on how much the user trusts the intelligence of the robot. This would mean that if a user does not trust a robot, it will not be utilized to its full potential. More generally, it was found that trust can be a predictor of technology usage [21, 22] and a mediator of technology acceptance [18, 23], meaning that without trust robots would not be used to their full potential.

2.2.1 Antecedents of Human-Robot Trust

A general framework regarding factors that may influence trust is provided by Oleson et al. [15]. In this framework there are three main categories: robot-related, human-related and environment-related antecedents. Although robot-related factors (robot personality, anthropomorphism, robot behavior) are found to have the strongest impact on trust in HRI [14], it is still important to investigate human-related and environment-related factors (attitudes towards and level of comfort with robots) as well. Analysis from a human-centered perspective is necessary for understanding what is required to develop a socially acceptable robot [3, 8, 24]. It should be taken into account that there is a distinction between trust in human-human interaction and trust in human-robot interaction, since only one of the two parties is capable of feeling trust in the latter [17]. This might result in different outcomes than those found in human-human interaction.


2.3 Robot Embodiment

The notion of embodiment discussed in this thesis is the distinction between a physical robot and a virtual agent (which is shown on a screen and thus not physically present in the room). Different embodiments can be more appropriate for certain tasks. If it is sufficient for performing a certain task, it may be tempting to use a virtual agent rather than a physical robot, as this may be cheaper and easier to implement, since one does not have to take into account the limitations of a physical body (e.g. gravity). For this reason it is important to investigate the effects of using different embodiments. If there are fundamental differences between embodiments that affect the perception of an interaction with the robot, then this has strong implications for human-robot interaction [25].

Several studies have investigated the influence of embodiment (physical robot versus virtual agent) on the user’s perception of the robot. For instance, embodiment has been shown to influence users’ empathy: participants tended to empathize with physical robots, but to a lesser degree (or sometimes not at all) with a virtual agent [26]. Another finding is that it is not the physical presence of a robot that has an impact on the user’s perception of the robot, but the fact that a physical robot is perceived as a real entity whereas an animated character on a screen is seen as fictional [27]. Physical robots appear more watchful and more enjoyable than an agent [28], and are evaluated more positively regarding social presence and interaction [29].

The studies mentioned above show a general preference for a physical robot over a virtual agent. However, some studies have shown no difference in preference. For example, Bartneck et al. [30] found no significant difference in perceived intensity and recognition accuracy between embodiments. Shinozawa et al. [31] investigated whether human decision making was influenced by the robot’s embodiment and found that this depends on the interaction environment. Another study, which investigated the influence of embodiment on engaging elderly people in physical exercise, found that the elderly people themselves preferred the physical robot. However, this significant preference was not present in a young-adult group. The authors suggest that this absence may be due to the young participants recognizing that the task was not developed for them and therefore being more generous in their evaluation [32].


Although these studies focused on general perception or other aspects of HRI, some research has also been performed regarding trust. For example, Rae et al. [2] have shown that embodiment has an influence on trust: collaboration outcomes were more positive for embodied robots than for non-embodied robots, meaning there was more trust when interacting with embodied robots. Powers et al. [33] found that participants disclosed less to a robot than to an agent, indicating a larger evaluation apprehension towards the (physical) robot and thus less trust.

2.4 Robot Habituation

It is likely that the relationship between humans and robots will change over time in the same way that relationships between humans change [34]. The need for long-term HRI studies has also been emphasized by Robins et al. [35]. The studies mentioned in the previous sections entailed only one interaction between the participant and the robot. In such a single interaction, external aspects like a novelty effect may have influenced the participant’s trust without being taken into account. The novelty effect is an improvement in performance that is due not to learning or achieving a goal, but to increased interest in new technology. By performing a longitudinal study, one can investigate the influence of embodiment on trust after the novelty effect has worn off. This has been researched by Koay et al. [3], who found that time influenced people’s perception of robots, since users’ preferences changed over time. One preference that changed over time was the distance to which the robot was allowed to approach the participant: the robot was allowed to approach closer after the novelty effect had worn off and the participant had habituated to the robot.

Kanda et al. [36] investigated whether current social robots would be able to establish long-term social relationships with children during a field trial of two months. They found that increasing the number of behaviors over time resulted in improved friendship estimation performance, whereas in an earlier field trial (where the robot’s behavior was not adapted) children became bored after a week [37]. However, the trial also showed that children who did not consider the robot a partner became bored with it. These two findings show that both a first impression and an adaptation of behavior over time are important for successful relationships between robots and humans.


This impact of the novelty effect was also found by Gockley et al. [38]: people at first stopped to chat with their roboceptionist, but once they had become familiar with it they only stopped to ask for information. Adapting the robot’s behavior may prevent or reduce this drop.

2.5 Conclusion

To summarize, this chapter has described several studies that show the importance of investigating the user’s trust. It can be concluded that the embodiment of a robot can have an influence on trust, and that time can impact trust, too. However, it has not yet been investigated how embodiment and time combined may influence one’s trust. As social robots are currently being deployed for use over longer periods of time (e.g. in healthcare or with ASD patients), it is important to investigate how extended exposure to a certain embodiment of a robot influences trust. This investigation unfolds in the next chapters. The following chapter describes the set-up of the experiment that was created to answer this question.


Chapter 3

Method

This chapter describes the methods and experimental set-up used to answer the research question regarding the influence of embodiment and time on one’s trust in a robot.

3.1 Participants

In total 17 adults (9 female) voluntarily participated in this experiment, ranging in age from 21 to 30 years old (M = 25.5, SD = 2.9). Informed consent forms were obtained from all participants, together with a personality questionnaire (TIPI) [39]. Each participant interacted ten times with either a physical robot (N = 8) or a virtual agent (N = 9). The participants were relatively unfamiliar with social robots (M = 1.4, SD = 0.5 on a 5-point semantic differential scale, 1 = unfamiliar, 5 = familiar).

3.2 Procedure

The ten interaction sessions per participant were spread out over a period of six weeks. It was intended to finish the experiment in five weeks, like the experiment in [3], but this was not possible due to the limited availability of the participants. There were eight interaction sessions in the study by Koay et al. [3], but as our sessions had a shorter duration we decided to let our participants interact with the robot for ten sessions. The sessions lasted approximately ten minutes. The first and last sessions lasted longer (approximately twenty minutes), as in these two sessions a trust game was played in addition to the regular task (the blank map game) and additional information was given regarding the experiment. The details of the regular task and the trust game are given in sections 3.4 and 3.5.1, respectively. The experimental procedure is summarized in Table 3.1.

Session | Pre-phase | Experimental phase | Post-phase
1 | Explanation of overall procedure and experiment; obtain informed consent and personality questionnaire | Blank map game | Questionnaire; trust game
2 - 9 | Recap of experiment | Blank map game | Questionnaire
10 | Recap of experiment | Blank map game | Questionnaire; trust game; elaboration on research

Table 3.1: Experimental procedure for all participants

3.3 Experimental Setup

Figure 3.1 shows the experimental setup for the physical robot condition. The experimental setup for the virtual agent condition was the same, except that the virtual agent was shown on the black screen located behind the robot (see Figure 3.2). The experimenter, located behind the robot, was shielded from view with a cardboard screen to ensure the participant could not receive any (un)intended cues from the experimenter. Two blank maps were located on the table, one facing the participant and the other facing the robot. Together with these blank maps, a list of possible countries with their capital cities was provided as additional help for participants. The names on this list were ordered randomly.


As the experiment was executed via the experimenter’s laptop, a gaming headset was used for the interaction with the virtual agent (visible in Figure 3.2). The virtual agent was shown on an external screen, while the experiment was running on the laptop of the experimenter (located behind the external screen, facing away from the participant). The speech recognition and speech output of the virtual agent ran through the experimenter’s laptop and would thus be directed away from the participant. The headset was used to improve the speech recognition and to make sure the participant could clearly hear the virtual agent. Moreover, the interaction might have felt unreal if the sound of the robot speaking did not come from the same location as where the robot was seen by the participant.

Figure 3.1: Setup of experiment with the physical robot

3.4 Task: Blank Map Game

In all ten sessions of the experiment, the participants cooperated with a robot to complete a blank map that was located in front of them. We chose to use a blank map game because of its cooperative character; many alternatives, such as Hangman, are highly competitive, which in turn could influence the level of trust and the perceived performance of the robot. The task was implemented such that the robot would act autonomously, to ensure consistency of the experiment over all participants. The same implementation was used for both the physical robot and the virtual agent, resulting in the behavior of the robots being consistent between conditions. No information was given to the participants on how the robot was controlled.

Figure 3.2: Setup of experiment with the virtual agent

In order to facilitate communication about different locations on the map, all countries and capitals were numbered. The following shows an example interaction:

Robot: ‘Hi, let’s go complete this blank map! [looks at map] I think the country labelled 1 is Iceland, do you agree?’

Participant: ‘Yes, I think so too.’

R: [cheers] ‘That’s great! Do you know the capital of Iceland, labelled a on this map?’

P: ‘Yes, the capital of Iceland is Reykjavik.’

R: [nods] ‘I agree, let’s go to the next country. I think country number 2 is called Great Britain, do you think so too?’


To keep the experiment interesting for the participants, every three sessions a new map was introduced. The maps covered during the experiment showed the provinces of the Netherlands with their corresponding capital cities, some countries of Europe with their corresponding capital cities, and some countries in the world with their capital cities. Although the maps changed, the number of countries and thus the length of the interaction remained the same, namely twelve countries (or provinces) with their capital cities. An example of a blank map can be seen in Figure 3.3. Even though participants discussed the same countries with the robot for some sessions (e.g. countries in Europe for three consecutive sessions), the order in which these countries were numbered was different for every session, so that the participant always had to pay attention to the map and could not simply memorize the answers.

As can be seen in the example interaction, the robot (both physical and simulated) made some conversational gestures during the interaction, so that the interaction would not become tedious. Possible gestures are:

• Nod: to show that the robot agrees with the answer given by the participant.

• Look at map: to appear to be looking for the next location on the map.

• Cheer: to show that the robot agreed with the answer given by the participant.

• Point: to show which location on the map the robot was talking about.

These movements were not performed for all twelve countries, as this might have felt repetitive. However, the robot did perform the same movements at the same points in the interaction to keep it consistent (e.g. always looking at the map before the fifth country was discussed, or pointing at the seventh country). Thus, these movements depended on the number of the question during the interaction: it did not necessarily need to be the same country for every session, as the order in which countries were discussed differed per session.
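To make this gesture scheduling concrete, a minimal sketch is given below. The schedule and gesture names are hypothetical illustrations of the idea, not the values used in the experiment.

```python
# Illustrative sketch only: the schedule below is hypothetical, not the one
# used in the experiment. Gestures are tied to the position of a question in
# the session rather than to a specific country.
GESTURE_SCHEDULE = {
    1: "look_at_map",   # look at the map before the first country
    5: "look_at_map",   # look at the map before the fifth country
    7: "point",         # point at the seventh country
}

def gesture_for_question(question_number):
    """Return the gesture to play before the given question, if any."""
    return GESTURE_SCHEDULE.get(question_number)

print(gesture_for_question(5))  # -> "look_at_map"
print(gesture_for_question(2))  # -> None (no gesture for this question)
```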


There was also the possibility that the participant would give an incorrect answer, or would happen to disagree with the robot, for example by disagreeing that country number 3 is called Spain. An example interaction would be:

Robot: ‘Do you think the country labelled 3 is called Spain as well?’

Participant: ‘No.’

R: ‘Ow. What do you think it is called then?’

P: ‘I think it is called France.’

R: ‘Okay, then let’s go for that one! Let me see where the next country is.’

To keep the interaction consistent for all participants, it was decided that the robot would go along with the participant’s answer, even when that answer was wrong. The robot’s agreement was expected to have less influence on one’s trust than the robot disagreeing with the participant, which would affect the perceived level of intelligence; this factor would not play a role for a participant who gave all the correct answers.

It could be argued that this task is not sufficient to measure trust, since no trust is required to play this game (e.g. no instruction has to be followed). However, trust does not only occur in situations involving risk and vulnerability; it also occurs as an intrinsic feeling, based on e.g. personality or culture [40]. With the blank map game, the participant and robot share a purely vocal interaction in which there are few external factors that could influence the participant’s (intrinsic) trust, which makes it a suitable task for this research.

3.5 Metrics

Both an objective and a subjective measurement of trust were used during this experiment. As objective measurement, the participant played a short investment game, named the trust game [41], twice: after the first and after the last session. As subjective measurement, the participants filled in a questionnaire based on the Godspeed questionnaire [42] after each of the ten sessions. These measurements are discussed in more detail below.


3.5.1 Trust Game

Besides the blank map game that was played during all ten sessions, the participants also played a trust game after the first and last session. This game was chosen as an objective measure of trust, as it is generally accepted as a valid experimental instrument for measuring trust [43, 44].

In the trust game, the participant’s trust is measured using economic decision making. Berg et al. [41] designed the original game. In this research it is used to measure whether the participant’s level of trust changed after completing all ten interactions with the robot; therefore, this game was only played at the end of the first and last session.

The original trust game is played between two anonymous players. The first player is given 100 dollars, or another currency, and can decide whether to give some (or all, or none) of this money to the second player. This amount is tripled by the experimenter, meaning the second player receives three times the amount the first player has given. The second player can in turn give some of this received money (or all, or nothing) back to the first player. The amount of money given by the first player represents the first player’s level of trust, as giving more money is expected to result in regaining more money. In our experiment the participant was the first player and the robot the second player.

The game was identical between conditions: participants in both conditions played it through a computer interface, with the notion that the game was being played with the robot with whom they had just interacted. Note that the game was not played on the screen showing the virtual agent, but on a second, different screen. This resulted in the trust game being equal for all participants. Furthermore, this approach spared the participant from having to share the desired amount with the experimenter, as it was entered through a keyboard. If the trust game had been played directly with the physical robot or virtual agent instead of through this interface, the participants would have had to state the number of tokens to share out loud, knowing that the experimenter would be able to hear it, which might have influenced their decision [45].

One might argue that, since no real money is involved in the current implementation, which is therefore a hypothetical variant of the trust game, participants may respond differently than if real money had been involved. It has been investigated before whether playing a hypothetical trust game (in human-human interaction) rather than a trust game with real money would influence the participants’ decisions. It was found that playing a hypothetical trust game resulted in less money being given than when playing a trust game with actual money [46, 47]. If this effect also applies to human-robot interaction, it would have resulted in lower trust in our experiment. However, even though less money may have been given to the robot, the hypothetical aspect of the trust game was the same for all participants and is therefore not likely to have influenced the possible change in trust. Also, since this variant of the trust game results in less money being given, it provides us with a lower bound of trust. This can be useful in HRI research, as the level of trust can only increase when performing the experiment with actual money.

3.5.2 Questionnaire

The questionnaire used in this experiment is based on the Godspeed questionnaire developed by Bartneck et al. [42], together with a few additional questions that we proposed. This questionnaire and our additions can be found in Appendix A. The Godspeed questionnaire was chosen as it is a measurement often used in HRI research [48]. It also covers many aspects of HRI research that can be antecedents of human-robot trust [15]. In this questionnaire participants rate on a five-point semantic differential scale how much they feel a certain factor applies to the robot. These factors are based on five pillars: anthropomorphism, animacy, likability, perceived intelligence and perceived safety. Likability is an especially important pillar to investigate, as it is said to be strongly correlated with trust in a robot [49]. Olsen and Goodrich [20] state that the usage of a robot by its user depends on the trust the user has in the intelligence of the robot. As intelligence was investigated, we decided to also investigate competence and knowledge; since these aspects are related, we were interested to find out whether the results would also correlate. Moreover, the appearance of the robot (machinelike or humanlike) is important to investigate, as a robot’s appearance raises certain expectations in the participant. If a large difference is found between how the appearance of a simulated versus a physical robot is perceived, with correspondingly different expectations, then this is something to take into account in future studies. Appendix A contains an English translation of the questionnaire, as participants were presented with the questionnaire in their native language (Dutch).


Two aspects were added to the questionnaire: comfortability and trustworthiness. Comfortability was added because one needs to feel comfortable in order to develop a feeling of trust; moreover, since the sessions were recorded we were able to perform behavior analyses regarding comfort, which made it an interesting aspect to investigate. Trustworthiness was added because the Godspeed questionnaire does not have a component regarding trust itself; since this project intended to research trust, it was added as an item to the questionnaire.

3.5.3 Behavior Analysis

Besides the trust game and the questionnaire as measurement tools, we also made video recordings of all sessions to analyze the participants’ behavior. We rated body language signs that could show comfort or discomfort, together with how often the participant looked directly at the robot. All experimental sessions were recorded. As the intention was to find out whether there was a change in trust, and not necessarily how this trust developed over time, four of the ten sessions per participant were analyzed. The first and last session of all participants were analyzed, since the goal was to find out whether there was a change over time. Furthermore, sessions four and seven were analyzed, as this resulted in a constant interval between analyzed sessions and perhaps a trend over time could be found. A subset of the analyzed sessions, 19% (13 out of 68 sessions), was coded by a second coder who was not involved in this research and had not participated in the experiment. Since the interrater reliability was good (Cohen’s kappa κ = .72), we felt confident that our codings were an accurate measure of the participants’ level of (dis)comfort. All sessions were cut into segments of 30 seconds. For all sessions it was coded whether a participant showed signs of comfort or discomfort and how often he or she looked at the robot. The point events that were coded as showing (dis)comfort are summarized in Table 3.2.
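As an illustration of the interrater check, the sketch below computes Cohen’s kappa for two coders’ segment ratings with scikit-learn; the ratings are made-up examples, not codings from this study.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings (1-3) of the same 30-second segments by two coders;
# these numbers are illustrative, not the actual codings from this study.
coder_1 = [1, 1, 2, 3, 1, 2, 2, 1, 3, 1]
coder_2 = [1, 2, 2, 3, 1, 2, 1, 1, 3, 1]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa = {kappa:.2f}")
```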

Some events, like nodding, are more difficult to code, as several nods can be coded as one ‘nod’ or as separate nods. Hence, at least two seconds had to pass between two events; otherwise they were coded as one event.

Looking at the robot (gaze) was only coded if the duration of the gaze was at least one second. Gaze was investigated as eye contact might provide information that is not captured by a qualitative measurement, like a questionnaire [50]. The point events for measuring comfort and discomfort are based on earlier studies regarding behavior [51, 52]: there, the discomfort point events were coded as self-manipulation and mentioned as signs of anxiety, while nodding and smiling counted as affiliative behavior.

Events coded as discomfort
Fidget | touch one hand unintentionally with the other hand, play with jewellery or hair
Cover mouth with hand | finger(s) or palm of the hand cover the lips
Touch face otherwise | rub chin, scratch head, wipe eyebrow

Events coded as comfort
Nod | move the head up and down (or vice versa) at least once
Smile | one or both corners of the mouth go up, the eyes change

Table 3.2: Point events that were coded as showing a certain level of comfort or discomfort.

Point events / duration | Rating
0 or 1 event(s) | 1
2 or 3 events | 2
4 or more events | 3
5 to 10 seconds | 2
10 seconds or more | 3

Table 3.3: Coding scheme for all 30-second segments

All sessions were cut into segments of 30 seconds. Those segments were rated for comfort and discomfort on a 3-point scale, with 1 meaning no or few signs of (dis)comfort and 3 meaning the participant was very (un)comfortable. This rating depended on both the number of occurrences and their duration; a rating of 3 could thus reflect either several signs of discomfort or behavior lasting for a large part of the 30-second segment. The division of these ratings is shown in Table 3.3. The same scheme was used for rating gaze. If an event occurred for 5 to 10 seconds (rated as 2) and at least two other point events were coded in that same segment, the segment was rated as 3.
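The rating rules of Table 3.3, including the combination rule described above, can be summarized in a small function. The sketch below reflects our reading of the scheme; the actual coding was done by hand, so this is illustrative only.

```python
def rate_segment(n_events, longest_duration_s):
    """Rate one 30-second segment for comfort (or discomfort) on a 1-3 scale,
    following our reading of Table 3.3. n_events is the number of point events
    in the segment, longest_duration_s is the duration in seconds of the
    longest continuous behavior."""
    # Rating based on the number of point events.
    if n_events >= 4:
        event_rating = 3
    elif n_events >= 2:
        event_rating = 2
    else:
        event_rating = 1

    # Rating based on duration.
    if longest_duration_s >= 10:
        duration_rating = 3
    elif longest_duration_s >= 5:
        duration_rating = 2
    else:
        duration_rating = 1

    # Combination rule: a 5-10 second event together with at least two other
    # point events pushes the segment to the maximum rating.
    if duration_rating == 2 and n_events >= 3:
        return 3
    return max(event_rating, duration_rating)

print(rate_segment(n_events=3, longest_duration_s=7))   # -> 3 (combination rule)
print(rate_segment(n_events=1, longest_duration_s=2))   # -> 1
```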


3.6 Architecture

3.6.1 Implementation of the Experiment

The experiment was implemented in Choregraphe [53]. The implementation was linear, as the robot would discuss countries one through twelve. For each country, the robot would suggest the name of the country and ask whether the participant agreed with its suggestion. An example interaction was given in section 3.4. Participants had the opportunity to say ‘no’ if they disagreed, upon which the robot would ask what the name of the country was according to the participant. It would agree with every answer given by the participant, as explained in section 3.4. After agreeing on the name of the country, the robot would ask the participant whether they knew the name of its corresponding capital city. The participant could answer with the name or say ‘no’ if they did not know; in the latter case the robot would give the correct answer. These two scenarios are small sidetracks and would always lead back to the linear implementation when discussing the next country or capital city.

Halfway through the experiment, after six countries had been discussed, the robot asked the participant whether they would like to switch. This switch involved the participant now naming the countries and the robot their corresponding capital cities, instead of vice versa. Note that the conversation was still led by the robot and the participant still had to respond with ‘yes’, ‘no’ or the name of the country (where this was first the name of the capital city). Hence, this switch did not involve the participant taking the lead in the conversation. The implementation of this experiment is shown in Figure 3.4. The task was implemented such that the robot would act autonomously, to ensure consistency of the experiment over all participants. The same implementation was used for both the real robot and the virtual agent, resulting in consistent behavior of the robots between conditions.

Figure 3.4: Linear implementation of the experiment
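To make the linear flow concrete, the sketch below mirrors the agreement logic in plain Python. The say() and listen() helpers are hypothetical placeholders for the robot’s speech output and speech recognition, and the dialogue is simplified (the halfway switch, the gestures and the capital labels are omitted).

```python
# Sketch of the linear blank map dialogue; say() and listen() are hypothetical
# placeholders for the robot's text-to-speech and speech recognition.
def say(text):
    print("Robot:", text)

def listen():
    return input("Participant: ").strip().lower()

def discuss_map(countries):
    """countries: list of (suggested_country, capital) pairs in numbered order."""
    for number, (country, capital) in enumerate(countries, start=1):
        say(f"I think the country labelled {number} is {country}, do you agree?")
        if listen() == "no":
            say("Ow. What do you think it is called then?")
            country = listen()            # the robot goes along with every answer
            say("Okay, then let's go for that one!")
        say(f"Do you know the capital of {country}?")
        if listen() == "no":
            say(f"It is {capital}.")      # the robot supplies its stored answer
        else:
            say("I agree, let's go to the next country.")

discuss_map([("Iceland", "Reykjavik"), ("Great Britain", "London")])
```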

3.6.2 Implementation of the Trust Game

The trust game was implemented in Java. The participants would see an image of the robot. Beneath this image participants received information regarding this (hypothetical) game: participants would get 100 tokens, together with the question how many of those 100 tokens they would give to the robot. They were told that this amount would be tripled before the robot received it, and that of all the tokens the robot received (three times the amount given by the participant) it could decide to give a certain amount (all, nothing or a part of it) back to the participant. Participants were asked to enter the desired amount and press ‘Done’. To calculate the amount of tokens the robot would return, the amount given by the participant was tripled, over which a Gaussian curve with a mean of 50 and a variance of 5 was used. Then the amount of tokens the robot had given back to the participant and their final amount of tokens were shown on the screen. An example trust game is shown in Figure 3.5.
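The description of the return rule leaves some room for interpretation; the sketch below shows one possible reading, in which the Gaussian with mean 50 and variance 5 determines the percentage of the tripled amount that the robot returns. This is an assumption made for illustration, not the actual Java implementation.

```python
import random

def robot_return(tokens_given, rng=random):
    """One possible reading of the return rule: the participant's gift is
    tripled, and the robot returns a percentage of that tripled amount drawn
    from a Gaussian with mean 50 and variance 5 (sd = sqrt(5))."""
    tripled = 3 * tokens_given
    percentage = rng.gauss(50, 5 ** 0.5)
    returned = round(tripled * percentage / 100.0)
    return max(0, min(tripled, returned))   # cannot return more than it received

# Example: a participant who gives 40 of the 100 tokens.
given = 40
back = robot_return(given)
print(f"Robot received {3 * given} tokens and returned {back};")
print(f"the participant ends the game with {100 - given + back} tokens.")
```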

3.7 Tools

The robot used for this experiment is a Nao robot, visible in Figure 3.1, developed by Softbank Robotics [54]. The Nao sat in a crouching position during the experiment, as it can be unstable when standing and its legs obstructed its ability to point when sitting down. Also, when the external screen was turned 90 degrees, the virtual agent approached the same size as the physical robot in crouching posture. This was beneficial, as the robot’s size would then not be an external factor that could influence trust. The software used to simulate the virtual agent is the one provided with the Nao software Choregraphe [53]. NetBeans IDE 8.0 [55] was used for the implementation of the trust game. To gather data for the behavior analysis, two cameras were used. One camera, a Logitech HD webcam C525, was located above the robot, which was sitting in front of the participant; this camera recorded the front of the participant and was used for the behavioral analysis. The second camera, a Canon HF10 HD camcorder, was located behind the participant and aimed at the robot to record their interaction. This camera was used to verify where the robot was located with respect to the participant, to ensure that behavior visible through the Logitech camera was indeed aimed at the robot (e.g. gaze).

3.8 Virtual Agent

As mentioned before, a headset was used to interact with the virtual agent. This was required as otherwise vision and sound would not originate from the same location, and speech recognition for the agent would also be harder, as the laptop running the program (and thus recording the participant’s responses) was not located near the participant.

However, providing the agent with speech output and recognition brought some issues. The virtual agent that comes with the Choregraphe software has neither speech output nor speech recognition. The same holds for other alternatives to Choregraphe, e.g. the Webots robot simulator [56]. Since implementing a virtual agent with speech output and recognition would be a research project of its own, an alternative solution was found. The Choregraphe simulation also runs when the program is executed on a physical Nao. Therefore, by using Skype [57] and TeamViewer, it was possible to show the participant the virtual agent and let the agent respond as required through Skype via the physical Nao.

In more detail: two rooms were required for the experiments run with the virtual agent. One was the experiment room, identical for both conditions; the other was the experiment base room. In the experiment base room, the physical Nao was located, together with a supervisor and a laptop on which the experiment was run. The presence of a supervisor was required to ensure nothing would happen to the physical Nao during the experiment. The laptop in the experiment base room was connected to the experimenter’s laptop through TeamViewer. This way the experimenter was able to start the program on the physical Nao without needing to be in the same room. Through TeamViewer it was possible to show the participant the Nao simulation provided by Choregraphe, which runs simultaneously with the physical Nao. By using Skype, and thus the headset, the participant was able to interact with the physical Nao located in the other room while seeing the virtual agent on the screen. This construction is shown in Figure 3.6. The participants were told the headset was necessary because the sound output of the screen was poor. They were unaware of the fact that they were interacting with the physical Nao; this was revealed after the last session was finished.


Chapter 4

Results

In this chapter, the results regarding the trust game, questionnaire and be-havior analysis are discussed.

4.1 Trust Game

A repeated measures ANOVA determined that trust differed statistically significantly over time (F(1, 15) = 16.583, p < .005, ηp² = .525). Figures 4.1 and 4.2 show this difference as an increase in trust. No significant difference was found for embodiment (F(1, 15) = .69, p = .796, ηp² = .005), nor was there a significant interaction between embodiment and time (F(1, 15) = .69, p = .796, ηp² = .005). To investigate whether there was a difference in the increase in trust between embodiments, an independent samples t-test was performed; the increase in trust over time did not differ significantly between embodiments (t(15) = .263, p = .796).
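As an illustration of how such an analysis could be set up, the sketch below runs a mixed time × embodiment ANOVA and the follow-up independent samples t-test on synthetic stand-in data; the column names, the toy values and the choice of the pingouin and SciPy packages are our own assumptions, not the tooling used in this study.

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(0)

# Toy long-format data standing in for the real measurements: one row per
# participant per trust game (first and last session). All names and values
# here are illustrative, not the data collected in this study.
participants = [f"p{i}" for i in range(17)]
conditions = ["physical"] * 8 + ["virtual"] * 9
rows = []
for pid, condition in zip(participants, conditions):
    first = int(rng.integers(20, 80))
    last = first + int(rng.integers(0, 40))   # trust tends to increase over time
    rows.append((pid, condition, "first", first))
    rows.append((pid, condition, "last", last))
df = pd.DataFrame(rows, columns=["participant", "embodiment", "session", "tokens"])

# Mixed-design ANOVA: session (within subjects) x embodiment (between subjects).
print(pg.mixed_anova(data=df, dv="tokens", within="session",
                     subject="participant", between="embodiment"))

# Independent samples t-test on the per-participant increase in trust.
wide = df.pivot(index="participant", columns="session", values="tokens")
increase = wide["last"] - wide["first"]
group = df.drop_duplicates("participant").set_index("participant")["embodiment"]
print(stats.ttest_ind(increase[group == "physical"], increase[group == "virtual"]))
```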

4.2 Questionnaire

The complete overview of the data gathered from the questionnaires is shown in the Appendix, since the tables are quite large; the most important results are repeated here. Appendix B shows both the mean and median per questionnaire item for both embodiment conditions and their corresponding standard deviation. Appendix C shows the results of independent samples t-tests that were performed for all questionnaire items to examine whether the items are influenced by the embodiment of the robot. The significant results found in these tests are also shown in Table 4.1. Significant differences where the physical robot was rated higher concern the fluency of its movements, likability, responsiveness, kindness, responsibility, sensibility and, lastly, the level of comfortability of the participants. Significant differences where the virtual agent was rated higher concern appearance (the virtual agent appears less machinelike than the physical robot), whether the robot appeared dead or alive, consciousness, competence, knowledge and intelligence.

Figure 4.1: Amount of money the participants gave to the robot after the first and last session.

Figure 4.2: Amount of money the participants gave to the robot after the first and last session.

To investigate whether time had an influence on the different aspects of the questionnaire, repeated measures ANOVAs were performed for all items (Appendix D). Again, the significant results are shown here, in Table 4.2. Time appeared to have an influence on several items of the questionnaire. However, no interaction effect was found between embodiment and time for any of the items. Time influenced trustworthiness and the extent to which the participants felt agitated, surprised, anxious and/or comfortable during the experiment. All these aspects changed positively over time, meaning the robot was found to be more trustworthy, and people felt less agitated, more quiescent, less anxious and more comfortable.

Questionnaire item | t(18) | p
machinelike-humanlike | 2.51 | .022
unconscious-conscious | -4.628 | .000
moving rigidly-moving elegantly | -2.75 | .013
dead-alive | -5.423 | .000
apathetic-responsive | -2.909 | .009
dislike-like | 9.24 | .000
unkind-kind | -4.607 | .000
incompetent-competent | -6.98 | .000
ignorant-knowledgeable | -8.13 | .000
irresponsible-responsible | -2.868 | .010
unintelligent-intelligent | -10.15 | .000
foolish-sensible | -5.200 | .000
uncomfortable-comfortable | 2.76 | .013

Table 4.1: Questionnaire items showing a significant difference between robot embodiments


Questionnaire item | Time: F(9,135) | p | ηp² | Embodiment × time: F(9,135) | p | ηp²
untrustworthy-trustworthy | 2.391 | .015 | .137 | 1.341 | .221 | .082
agitated-calm | 3.628 | .000 | .195 | 1.043 | .410 | .065
surprised-quiescent | 6.131 | .000 | .290 | .702 | .706 | .045
anxious-relaxed | 6.145 | .000 | .291 | 1.376 | .205 | .084
uncomfortable-comfortable | 1.894 | .058 | .112 | 1.056 | .399 | .066

Table 4.2: Questionnaire items showing a significant influence of time, although no interaction effect between time and embodiment was found

Session | Measure | t | p
1 | discomfort | 3.486 | 0.001**
1 | comfort | 1.645 | 0.102
4 | discomfort | -1.616 | 0.108
4 | comfort | 1.512 | 0.133
7 | discomfort | 0.433 | 0.661
7 | comfort | 3.780 | 0.000**
10 | discomfort | 1.091 | 0.277
10 | comfort | 2.216 | 0.029*

**p < 0.01, *p < 0.05

Table 4.3: Difference between embodiments regarding (dis)comfort for sessions 1, 4, 7 and 10.

4.3 Behavior Analysis

4.3.1 Statistical Analysis

Several independent samples t-tests were run to investigate whether there was a change in comfort or discomfort and whether there was a difference in (dis)comfort between embodiments. The results are shown in Table 4.3. In session 1, there was a significant difference in discomfort: participants felt less comfortable interacting with a physical robot than with a virtual agent. In session 4, this difference had faded. In session 7, participants felt significantly more comfortable when interacting with the physical robot. This result was still visible in session 10.


Point event | Session 1 (physical / virtual) | Session 4 (physical / virtual) | Session 7 (physical / virtual) | Session 10 (physical / virtual)
Fidget | 0 / 0 | 3 / 15 | 6 / 10 | 6 / 8
Mouth | 11 / 0 | 9 / 5 | 11 / 7 | 3 / 3
Face | 29 / 10 | 24 / 19 | 31 / 31 | 22 / 15
Nod | 8 / 6 | 22 / 13 | 41 / 10 | 29 / 8
Smile | 61 / 45 | 19 / 21 | 13 / 15 | 26 / 15

Table 4.4: The total number of point events per session per embodiment

4.3.2 Descriptive Analysis

Differences between embodiments

The mean and standard deviation for all point events per session can be found in Appendix F, which shows the descriptive results per embodiment. It stands out that, while the average number of events where the participant covered his or her mouth increases per session for the virtual embodiment condition, the average number of smiles decreases. The number of nods increases for the physical embodiment condition. The total number of point events per embodiment is shown in Table 4.4. The number of gazes at the robot per session ranged from 13.25 to 29. Two participants tested the robot during one or more sessions by giving incorrect answers on purpose. Those two participants were both assigned to the physical robot condition, and also had the two highest average numbers of gazes per session. The two participants who gazed at the robot least were both assigned to the virtual agent condition. A complete overview of the number of gazes per session and the average is presented in Appendix G.

In total, 10 out of 17 participants showed fewer than ten signs of comfort or discomfort for the four analyzed sessions combined. Seven of those ten participants were assigned to the virtual agent condition and three to the physical robot condition. This difference in behavior is also visible in the total number of signs shown: although the physical embodiment condition contained one participant fewer than the virtual embodiment condition, both the number of signs of comfort and the number of signs of discomfort were higher in the physical embodiment condition.

Six out of eight participants in the physical embodiment condition showed signs of discomfort that lasted for at least ten seconds, whereas five out of nine participants showed such signs in the virtual embodiment condition.


These signs had a longer duration in the physical embodiment condition.

Comfort, discomfort and personality

Note that this particular section does not discriminate between embodiments. Here, it is intended to investigate whether personality has an influence on behavior.

The data of the participants were split into three groups: participants who had shown fewer than ten signs of discomfort over all sessions (five participants), participants who had shown fewer than ten signs of comfort over all sessions (five participants), and a third group containing the remaining participants (seven participants), who showed ten or more signs of both comfort and discomfort over the four analyzed sessions combined.

Four of the five participants who showed little to no discomfort scored 6 or higher on openness to experience on the ten-item personality questionnaire (TIPI) [39] that was taken before the first experiment. Also, two of them had the highest scores on agreeableness, which was not the case in the other two groups.

Four of the five participants who showed fewer than ten signs of comfort scored 6 or higher on conscientiousness. Seven of the ten participants showing no or few signs of (dis)comfort were participants who interacted with the virtual agent.

The third group showed the most signs of both comfort and discomfort, whereas it was expected that the group with little comfort would show discomfort and the group with little discomfort would show signs of being comfortable. All participants in this group had an emotional stability score of 5 or higher.

Mistakes of the robot

No trends are visible regarding trust being influenced by mistakes that occurred during the experiment. These mistakes entailed an incorrect response of the robot due to faulty speech recognition.


4.4 Correlations

4.4.1 Questionnaire

A Spearman’s rank-order correlation was performed for all items in the questionnaire. The complete table with results can be found in Appendix E. The results for the items that we added to the original Godspeed questionnaire are discussed here.

Trustworthiness proved to correlate positively with:

• consciousness (rs = .598, p = .011)
• responsiveness (rs = .774, p = .000)
• likability (rs = .800, p = .000)
• friendliness (rs = .811, p = .000)
• kindness (rs = .841, p = .000)
• pleasantness (rs = .764, p = .000)
• competence (rs = .862, p = .000)
• knowledge (rs = .776, p = .000)
• responsibility (rs = .684, p = .002)
• intelligence (rs = .839, p = .000)
• sensibility (rs = .709, p = .001)

The level of comfortability correlated positively with:

• kindness (rs = .484, p = .049)
• competence (rs = .638, p = .006)
• knowledge (rs = .527, p = .030)
• intelligence (rs = .489, p = .046)
• calmness (rs = .594, p = .012)
• quiescence (rs = .522, p = .031)
• relaxation (rs = .654, p = .004)
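The correlations above were obtained with Spearman’s rank-order correlation; a minimal sketch of such a computation is shown below, using SciPy and made-up ratings rather than the actual questionnaire data.

```python
from scipy import stats

# Hypothetical 5-point ratings of two questionnaire items across participants;
# these values are illustrative, not the data collected in this study.
trustworthiness = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]
likability      = [4, 5, 3, 5, 2, 5, 4, 2, 4, 5]

rho, p = stats.spearmanr(trustworthiness, likability)
print(f"rs = {rho:.3f}, p = {p:.3f}")
```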


Measure | Discomfort (rs, p) | Comfort (rs, p) | Gaze (rs, p)
Trust game 1 | -0.135, 0.607 | 0.241, 0.351 | -0.017, 0.947
Trust game 2 | 0.301, 0.241 | 0.156, 0.549 | -0.125, 0.633
Trustworthiness | -0.084, 0.750 | -0.120, 0.647 | -0.471, 0.056

Table 4.5: Correlations between behavior and trust

4.4.2 Behavior Analysis

To investigate whether there was a correlation between the participants’ behavior and trust, a Spearman rank-order correlation was again performed between the levels of comfort, discomfort and gaze, and the results from the trust game and the trustworthiness item in the questionnaire. Only low, non-significant correlations were found; they are listed in Table 4.5.

4.4.3 Personality

To investigate whether personality had an impact on trust over time, every participant filled in a personality questionnaire before the start of the first experiment. The results are shown as scores on the Big Five personality traits [58, 59] on a scale of seven and can be found in Appendix H.

A Spearman’s rank-order correlation was run to determine the relationship between these personality items and the level of trust (in both trust games and the questionnaire item trustworthiness). There was a strong, positive correlation between extraversion and the extent to which the robot was found to be trustworthy (questionnaire), which was statistically significant (rs = .686, p = .002). Openness to experience positively correlated with trustworthiness as well (rs = .540, p = .025). There was a negative correlation between agreeableness and the first trust game (rs = -.555, p = .021). All results regarding possible correlations between the personality items and trust can be found in Table 4.6.

Personality trait | Trustworthiness (rs, p) | Trust game 1 (rs, p) | Trust game 2 (rs, p)
Extraversion | 0.686, 0.002* | -0.019, 0.943 | -0.350, 0.168
Agreeableness | 0.026, 0.923 | -0.555, 0.021* | -0.458, 0.064
Emotional Stability | 0.016, 0.953 | -0.065, 0.804 | 0.304, 0.236
Conscientiousness | 0.238, 0.357 | -0.453, 0.068 | -0.071, 0.785
Openness to Experience | 0.540, 0.025* | 0.259, 0.316 | 0.027, 0.917

Table 4.6: Correlations between trust and personality traits.

4.4.4 Correlation between Subjective and Objective Measures

Comfortability was measured both subjectively, through the questionnaire, and objectively, by coding the participants’ behavior. Although there appears to be a significant difference between embodiments for comfortability, both scores are still very high (M = 5 and M = 4.18 for the physical robot and virtual agent, respectively). However, the behavioral data provide several occasions where participants from both conditions show signs of feeling uncomfortable. Therefore, it was investigated whether the results of the subjective and objective measures correlated, again using a Spearman’s rank-order correlation. This was also done for trust, since trust was likewise measured both subjectively (questionnaire) and objectively (trust game).

A low, non-significant correlation was found between the two results of the trust game and the questionnaire item trustworthiness (rs = -.049, p = .853 and rs = -.178, p = .493 for the first and second trust game, respectively). The same goes for the levels of comfort and discomfort found in the behavior analysis and the questionnaire item comfortability (rs = -.222, p = .391 and rs = .165, p = .528 for discomfort and comfort, respectively). Gaze did not correlate strongly with feeling comfortable or uncomfortable either (rs = -.017, p = .350 and rs = .091, p = .728 for discomfort and comfort, respectively), nor with trust (rs = -.017, p = .947, rs = -.125, p = .633 and rs = -.471, p = .056 for the first trust game, the second trust game and the questionnaire item trustworthiness, respectively).


Chapter 5

Discussion

The aim of this research was to investigate whether the embodiment of a robot has an influence on one’s trust when interacting with the robot for a prolonged period of time, more specifically ten sessions spread out over six weeks. A longitudinal experiment was performed and the gathered data were analyzed. In this chapter the results of these experiments are discussed, as well as their implications and limitations.

The research question for this project was:

“How is trust in a robot affected by its embodiment during a long-term human-robot interaction?”

We hypothesized that there would be a difference, following previous research [2], but that this difference would diminish as participants got habituated to the robot.

Our results suggest that embodiment does not have an influence on trust, as no difference was found between participants who interacted with the virtual agent and participants who interacted with the physical robot. Time, however, did appear to impact trust, as trust generally increased over time.

Contrary to the results of Rae et al. [2], we did not find a significant influence of embodiment on trust. However, their definition of embodiment differs from ours: they investigated whether a tele-presence robot influenced trust compared to a hand-held tablet, rather than the difference between a physical robot and a virtual agent that was examined in our study. Seo et al. [26] did use the same definition of embodiment as ours. Although their results, unlike ours, were significantly different for the two embodiments, they investigated the influence of the robot's embodiment on empathy, whereas our study investigated trust. Another difference that may have caused the discrepancy in outcome is that their study used a 3D simulation, whereas ours used a 2D simulation.

Even though our results do not point in the same direction as the studies mentioned above, we have pointed out differences between those studies and ours that could explain the discrepancy. In addition, by using the trust game we relied on a generally accepted objective measurement of trust [44].

Fasola et al. [32] found significant differences between embodiments, except for the participants who were between 20 and 33 years old. They offered a hypothesis as to why this group did not yield significant results; notably, this age range matches that of the participants in the current study. Dedicated research into age differences in the perception of a robot's embodiment could clarify this and is left for future work.

The results from the trust game show that time has an impact on trust. This impact should be taken into account when developing robots that will be used over a longer period of time. In this case, people's trust was positively influenced by time. The increase in trust was found in the questionnaire as well, showing that the objective and subjective measurements provide the same result. Nonetheless, other factors important for HRI, e.g. robot acceptance, may be influenced differently; this could also be a starting point for future work.

Four participants (three in the physical and one in the virtual embodiment condition, all male) gave the robot the full amount of money in the first trust game. When asked why, they all answered that this would result in them gaining the most in return. These participants approached the trust game with an economic strategy instead of relying on their perception of the robot. It was examined whether these participants shared a common pattern in their personality domains, but none was found.
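To make the economic strategy mentioned above concrete, the sketch below works through the payoff arithmetic of a standard trust game. The endowment, multiplier and returned fraction are assumptions chosen for illustration and are not necessarily the values used in this experiment (those are described in the Method chapter).

    # Illustrative payoff arithmetic for a standard trust game.
    # ENDOWMENT and MULTIPLIER are assumed values, not those of this study.
    ENDOWMENT = 10    # units the participant starts with
    MULTIPLIER = 3    # the transferred amount is multiplied before the robot responds

    def participant_payoff(transferred, returned_fraction):
        # The participant keeps what was not transferred and receives back
        # a fraction of the multiplied transfer from the robot.
        returned = returned_fraction * MULTIPLIER * transferred
        return ENDOWMENT - transferred + returned

    # If the robot is expected to return half of what it receives,
    # transferring the full endowment maximizes the payoff:
    for amount in (0, 5, 10):
        print(amount, participant_payoff(amount, 0.5))
    # prints: 0 10.0 / 5 12.5 / 10 15.0

Under these assumptions the payoff grows linearly with the amount transferred, which matches the reasoning these participants reported.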

Two participants (one from each condition, both female) gave the robot the full amount of money when playing the second trust game, while they had not done so in the first trust game. When asked why they gave the full amount, they both stated, independently of each other, that they really liked the robot and felt it deserved to receive the full amount.


Along with our findings regarding trust from the trust game, we found significant differences regarding robot embodiment for some other aspects of the questionnaire as well. For example, participants found the physical robot more likable and felt less comfortable interacting with the virtual agent. This means that, even though it does not extend to trust, physical robots may be preferred over virtual agents for long-term studies when a high level of comfortability in the user is desired. This is not necessarily the case for single interactions between the participant and the robot, as participants felt significantly less comfortable interacting with the physical robot in the first session, although this changed over time. However, the virtual agent was perceived as more intelligent, knowledgeable, competent and even more humanlike than the physical robot (although it should be noted that both embodiments were perceived as more machinelike than humanlike). This may indicate that a virtual agent is trusted sooner when a certain task has to be performed.

Intelligence, competence and knowledge are highly correlated (see Appendix E). The finding that the physical presence of a robot creates higher likability and a higher level of comfortability, but results in the robot being perceived as less intelligent, is interesting. It may indicate that people prefer to interact with a robot that appears less intelligent. Perhaps a lower level of perceived intelligence creates a feeling of being in control, or perhaps the simulation suggests that a computer running in the background provides the robot's responses, which makes the virtual agent appear more intelligent. This is a topic for future research.

Embodiment appeared to influence aspects of the pillars anthropomorphism, animacy, likability and perceived intelligence from the Godspeed questionnaire. It stood out that the virtual agent positively influenced the perceived intelligence pillar, whereas the other three pillars were positively influenced by the physical embodiment of the robot.

While embodiment did not appear to influence the fifth pillar, perceived safety, time did positively influence the aspects investigated in this pillar. This increase in perceived safety is important for HRI, as a high perception of safety is required for robot acceptance [42].

The behavioral analysis showed that people at first felt more intimidated interacting with the physical robot, but after habituation felt more comfortable than the participants interacting with the virtual agent. Between sessions 1 and 4, the difference in comfortability between embodiments changed from more discomfort with the physical robot to no difference between conditions. The shift towards feeling more comfortable interacting with the physical robot appeared between sessions 4 and 7 and persisted at session 10. In summary, where differences between embodiments regarding discomfort decreased, differences regarding comfort increased, again showing the positive influence of time on robot perception.

The fact that participants in the physical embodiment condition showed more signs of comfort or discomfort, whereas in the virtual embodiment condition these signs were sometimes barely present, may indicate that it is not the physical embodiment itself that evokes a reaction, but rather that the physically embodied robot is seen as an entity while the virtual agent is seen as an animation. A similar effect was found earlier for users' perception of a robot in general [27].

The questionnaire item trustworthiness that was added to the original Godspeed questionnaire appears to correlate with several other aspects of the questionnaire. This indicates that the finding from Oleson's framework [15], that robot-related factors have the biggest influence on how robots are perceived, also extends to trust.

While trust correlates with factors from the pillars anthropomorphism, animacy, likability and perceived intelligence, the comfortability item in the questionnaire only correlates with aspects from the perceived intelligence and perceived safety pillars.

It is highly interesting that trustworthiness does not correlate with any aspects of the perceived safety pillar, as trust is said to be influenced by perceived risk, which in turn is influenced by perceived safety [60]. This finding is reinforced by the fact that no significant correlation was found between trust and comfort (for both the subjective and objective measures), as one would expect to feel a certain discomfort when risk is involved [61]. Perhaps this was not found because the task did not involve any risk-taking. This would imply that feelings of safety are not necessary for the development of trust.

It was investigated whether the results of the subjective (questionnaire) and objective (trust game/behavior) measurements significantly correlate, and it appeared they do not. It is possible that participants were influenced by the knowledge that their answers would eventually be known by the experimenter and therefore wanted to give answers they thought were appropriate. Another possibility is that the participants were biased because the questionnaire was administered after each interaction session had finished [42]. Participants may also have been unaware of the effect the robot had on their level of comfortability.


This implies that, although subjective measurements are very useful, objective measures are necessary in HRI research in order to fully understand the user's perspective on the robot. This understanding is necessary for developing a socially acceptable robot [3, 23, 24] that can be fully accepted, as it otherwise may be misused or disused [14, 15, 17].

Evaluating these results, it appears that embodiment does not have an influence on trust over a prolonged human-robot interaction. This means no evidence was found for the hypothesis that there would be a difference, nor for the hypothesis that the difference in trust would decrease over time. This is not consistent with findings in other studies, but possible causes for this inconsistency were offered above. Although embodiment does not appear to have an influence on trust, and there is no interaction effect between embodiment and time, time itself does seem to positively influence the level of trust.


Chapter 6

Conclusion

The aim of this research project was to contribute to the HRI research field in two ways. First, it was intended to add insights regarding robot embodiment to the findings that are already known. The second objective was to show the importance of long-term HRI studies, as time is, in my opinion, one of the most important aspects to take into account when developing a socially acceptable robot.

The results provide several main conclusions. The first one is the following:

Trust is not influenced by the embodiment of a robot, nor is there an interaction between embodiment and time.

This contradicts findings from earlier studies regarding embodiment, but those studies entailed different meanings of embodiment or did not specifically measure trust but rather a more general form of robot perception.

This brings us to the second main finding from this project:

Time has a large impact on trust and general robot perception and should be taken into account when developing socially acceptable robots.

Other studies have found the influence of time as well, but it remains an underappreciated research area.


The last important finding from this study is the following:

Besides subjective measures, objective measures are a necessity in HRI research.

This conclusion follows from the finding that the results of the subjective and objective measurements in this study did not strongly correlate.

Looking at the whole process of the project, it can be concluded that this study is relevant for HRI research, and more specifically for research regarding robot embodiment and time. This contribution has been confirmed, as part of this work was submitted as an extended abstract and accepted as a poster presentation for the upcoming HRI conference in Vienna, March 2017.

If anything, this research raises new questions regarding robot embodiment and emphasizes that much more research is required before we will be able to develop a socially acceptable robot.


Chapter 7

Acknowledgements

First of all I want to thank all my participants, who volunteered to participate and came to the university for all ten consecutive sessions during the summer holidays. Thank you for your time and investment, it means a lot to me.

Hagen Lehmann, my external supervisor, thank you so much for being available and willing to answer my questions, supervise me, and guide me through the behavioral research. Also a huge thanks for bringing me into contact with Lorenzo Natale, who came up with the blank map game.

I would like to thank Johan Kwisthout for taking the time and effort to review my thesis.

I'm very grateful to Luc Wijnen, who has been my anchor during this project. You were my support in so many ways, from discussing the design of my project to supervising the physical robot during all the sessions with the virtual agent, where you were only allowed to wait quietly until the session was over (and make sure the robot would not fall off the table, which it indeed did not). During writing, you were there patiently waiting until I was done ranting, to tell me I was doing great.

The largest credit goes to Beata Grzyb, my internal supervisor. Thank you for being so patient and supportive, but not hesitant to give me a kick in the butt at the (sparse) moments I needed one. You not only guided me through my project with constructive feedback and insights, but also supervised me in such a way that I transformed from an insecure person, unsure about what she could possibly add to her field of research, into a confident individual who knows the value of her work and is not afraid to step forward to investigate unexplored areas.


Chapter 8

Bibliography

[1] Deborah R Billings, Kristin E Schaefer, Jessie YC Chen, and Peter A Hancock. Human-robot interaction: developing trust in robots. In Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction, pages 109–110. ACM, 2012.

[2] Irene Rae, Leila Takayama, and Bilge Mutlu. In-body experiences: embodiment, control, and trust in robot-mediated communication. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1921–1930. ACM, 2013.

[3] Kheng Lee Koay, Dag Sverre Syrdal, Michael L Walters, and Kerstin Dautenhahn. Living with robots: Investigating the habituation effect in participants’ preferences during a longitudinal human-robot interaction study. In Robot and Human interactive Communication, 2007. RO-MAN 2007. The 16th IEEE International Symposium on, pages 564–569. IEEE, 2007.

[4] John Lee and Neville Moray. Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35(10):1243–1270, 1992.

[5] Michael A Goodrich and Alan C Schultz. Human-robot interaction: a survey. Foundations and trends in human-computer interaction, 1(3):203–275, 2007.

[6] Kerstin Dautenhahn and Aude Billard. Bringing up robots or the psychology of socially intelligent robots: From theory to implementation. In Proceedings of the third annual conference on Autonomous Agents, pages 366–367. ACM, 1999.
