A study on how different accents influence users' compliance with service robots



Amsterdam, 24.6.2021

A study on how different accents influence users' compliance with service robots

Master thesis Ela Praznik, 13377027

MSc Business Administration, Consumer Marketing Track, University of Amsterdam

Supervisor: Andrea Weihrauch EBEC number: 20210408050413


Statement of Originality

This document is written by Student Ela Praznik who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Table of Contents




2.1. Rise of the Machines

2.2. Compliance

2.3. Accent biases

4.1. General design

4.2. Procedure and materials

4.3. Operationalization of variables

5. RESULTS

5.1. Descriptives and frequencies

5.2. Reliability analysis for scales

5.3. Correlation

5.4. Hypotheses testing

6. DISCUSSION

6.1. General discussion

6.2. Managerial and theoretical implication

6.3. Limitations to the study and future research

7. REFERENCES

Figure 1: Conceptual model

Picture 1: Robotic vs Human Service Agent displayed in the experiment

Table 1: Correlation matrix of the variables

Table 2: Hierarchical analysis output for Hypothesis 1

Figure 2: Conceptual model with effects




Technology is rapidly changing the business sector. Among the various benefits it offers, it helps automate labor and reduce costs at the same time. In the service sector, where humans were expected to be irreplaceable, technology has proved otherwise. Service agents at airports, hospitals, hotels and other locations are being replaced with robots that help speed up the process of doing business. Chatbots and anthropomorphized robots are becoming a standard.

Therefore, it is no surprise that researchers pay great attention to this emerging trend by studying the influence that different forms of agents have on consumer behavior.

Furthermore, several outcome variables have been of particular interest to scholars, one of them being people's compliance with an agent's requests. This was especially relevant at the time this study was conducted, when COVID-19 measures intended to protect people were changing daily; hence, it was important that people complied with the measures taken.

Scholars have investigated several variables that influence this, such as the non-verbal cues of the robotic agent and its physical presence. This study focused on the effect different accents (American vs. British) have on compliance. A 2 (human vs. robot) × 2 (American vs. British) between-subjects experiment was conducted online with 167 participants from all over the world. The results showed no statistically significant effect of the form of the agent on compliance, although the form of the agent did influence perceived authority. Furthermore, no statistically significant moderating effect of accent was found on the relationship between agent form and compliance, nor on the relationship between agent form and authority. The study has several limitations and implications; it could be extended by including other accents and by testing in a different environment.




Why is it that French and Italian accents are perceived as sexy, and that somehow a British accent makes the speaker seem intelligent, sophisticated, and credible (Hill et al, 2011)? And is there a way to take advantage of the biases and stereotypes we assign to certain accents and use them to influence consumer behavior when people interact with a robot?

Robots are slowly becoming our daily companions in both private and business environments (Agrawal et al, 2017). Not only are they performing tasks that make our daily lives easier within our households, they are also slowly taking over human roles and positions that require them to socially interact with their users. Replacing humans with robots is extremely cost-efficient (Adam et al, 2020), hence an implementation of robots across different industries and job positions is to be expected. All robots are designed specifically for the task they are meant to complete. Robots that welcome you at the airport, for example, need to be designed in a way that enhances the customer experience to the highest degree possible.

However, robots in customer service require much more versatility, as they are meant to serve, help, warn or even give orders to their users. The latter task requires users' compliance for the robot to successfully complete its service.

With the rise of robots in positions that require authority, there is a strong need to identify as many factors as possible that influence both authority and compliance, in order to design robots in a way that elicits the desired positive outcome, compliance (Geiskkovitch et al, 2016; Agrawal et al, 2017). Adam et al (2020) also stress the importance of studying people's acquiescence to requests made by either an artificially intelligent (AI) robot or an actual human being, and of comparing the two. It is important to establish robots as authority figures not only in emergencies but especially in times of pandemic, when rules are constantly changing and strict obedience is of great importance.



To achieve this, scholars have paid great attention to different factors that influence user compliance. Adam et al (2020) studied the influence of anthropomorphic cues on user compliance with a chatbot, whereas other scholars (Geiskkovitch et al, 2016; Haring et al, 2019) focused on differences in compliance when a request is given by either a human or a robot. The latter authors furthermore looked for differences in compliance rates between requests given by a human-like versus a non-human-like robot. Other authors focused on the moderating role of several variables, such as vocal cues (Adam et al, 2020) and facial expressions (Chidambaram et al, 2012), whereas Siegel et al (2009) researched the moderating role of robot gender on user compliance.

However, despite the thorough research conducted in this field, only a few scholars have paid attention to a robot's accent and its influence on consumer behavior. Among these researchers, Torre et al (2020) investigated how a robot's accent evokes either positive or negative attitudes towards these entities. Another study, by Dahlbäck et al (2001), found that when a robot's accent matches the participant's, the robot appears warmer and more trustworthy. Sandygulova et al (2015) showed that Irish children preferred British-accented over American-accented robots. All these findings are of great importance to the field of human-robot interaction studies. However, none of them investigated the effect an accent might have on compliance.

Because people tend to associate certain accents with stereotypes and biases, accents have the potential to influence customer behavior. At the moment, an American-accented female voice is prevalent among digital assistants (Torre et al, 2020), although several studies have shown people's preference for a British over an American accent. The British accent also evokes feelings of trustworthiness, which could potentially influence compliance with an agent's requests (Torre et al, 2020).



The gap identified in the literature can be addressed by the following research question:

RQ: To what extent can the use of a British (vs. American) accent by a service robot agent contribute to consumer compliance (i.e., with a safety-related request)?

This study extends prior research in the field of human-robot interaction by testing another variable that might positively influence user compliance. Hence, it is not only an extension of the academic area but also a contribution to designing service robots in a way that makes them be perceived as authoritative figures.




2.1. Rise of the Machines

Robots are becoming a big part of both the business world and our society at large (Agrawal et al, 2017). They are slowly integrating into various industries and sectors, taking over roles as teachers and welcoming patients in hospitals, and soon enough they might be able to examine and treat patients as well as prescribe them medications. Not only are robots performing tasks where they serve as additional assistance to humans, they are also starting to replace them (Ghazali et al, 2019). In the service sector, for example, service robots are defined as "autonomous physical devices, capable of motion and performing a service" (Murphy et al, 2017). They are further divided into personal and professional service robots, depending on the task and the degree of social interaction (Murphy et al, 2017).

Naturally, there are many advantages to the so-called 'robot invasion'. Firstly, robots designed to perform customer service are known to be extremely time-efficient; secondly, they enhance the customer experience; and most importantly, they reduce costs (Adam et al, 2020). Customer service agents, or so-called chatbots, are "systems, designed to communicate with human users by means of natural language, often based on artificial intelligence (AI)" that have almost completely replaced human chat service agents in customer support (Adam et al, 2020).

These systems alone are able to reduce the costs of current global business by 1.3 trillion dollars, just by cutting down response times and freeing up humans to work on other tasks (Reddy, 2017b). As machines, they can be available 24/7, which gives them another advantage compared to humans. Consequently, many face-to-face interactions between a company and the customer are being switched from human-driven to technology-dominant, with AI assistants acting as service interfaces (Adam, 2020).



Another form of robot, the humanoid service robot, with human-like morphology such as a face, arms and legs, is, just like disembodied chatbots, slowly replacing human service providers in various industries (Mende et al, 2019). Some of the tasks robots are taking over require giving people directions and requests, which users are meant to comply with. For example, in rehabilitation centers, robots must pressure people into completing exercises or simply maintaining their fitness goals (Geiskkovitch et al, 2016).

Furthermore, technology makes it possible to deploy robots as police officers (Haring et al, 2019) and security guards (Geiskkovitch et al, 2016).

Positions like these require people to follow orders and requests made by a robot, resulting in compliance and obedience. To achieve this, users have to perceive the robot as an authoritative figure (Geiskkovitch et al, 2016). Milgram's famous study on obedience to authority showed that participants obeyed the experimenter's orders to hurt another individual because the experimenter, in his lab coat, was perceived as authoritative (Burger, 2009). Agrawal et al (2017) even used a low-pitched voice to make the robot appear more dominant, as one of the cues that would elicit a higher degree of perceived authority.

Scholars have been trying to identify factors that make robots appear more authoritative, which in turn increases people's motivation to follow a robot's instructions. It is important to identify as many such factors as possible so that robots can be designed to elicit the highest degree of user compliance (Geiskkovitch et al, 2016).


2.2. Compliance

Firstly, there is a difference between obedience and compliance. Obedience, to both robots and humans, has mostly been the focus of prior research in the fields of robotics and psychology (Burger, 2009; Geiskkovitch et al, 2016; Aroyo et al, 2018). It is defined as "following one's orders, which can be contrary to one's moral beliefs and values" (Haring et al, 2019). However, the findings of studies in which the authority figure is a human being are not simply generalizable to situations where the authoritative role is played by a robot (Geiskkovitch et al, 2016). Hence, it is important that the research field additionally tests those findings on robots as well.

Compliance, on the other hand, refers to "following requests of continuation of a task beyond one's initial willingness" (Haring et al, 2019). Based on this distinction between the two forms, compliance seems more suitable for the purpose of this research, as the request here serves more as guidance in case of emergency or as advice to the user.

Scholars have compared human obedience to either a human or a robot in the role of the authority figure. Consistent with the findings of Geiskkovitch et al (2016) and Haring et al (2019), humans were more successful than robots in achieving users' compliance. In the first study, 86% of participants obeyed the human experimenter, while only 46% did the same when receiving an order from a robot experimenter. The authors explained this by a higher degree of authority being assigned to a human compared to a robot (Geiskkovitch et al, 2016).

Interestingly, Menne (2017) also studied user compliance when the agent was manipulated to be either a robot or a human, but did not find any difference in compliance between the two. The only difference found was in the time it took participants to follow the instructions.

Although the study by Haring et al (2019) did not find any difference between obedience to a humanoid and to a non-humanoid robot, anthropomorphism, the tendency to assign human-like characteristics, emotions, intentions, and motivations to nonhuman agents (Epley et al, 2007), plays an important role: a robot perceived as more human-like is consequently more likely to be perceived as authoritative. For the purpose of this study, the service robot is an embodied anthropomorphized robot, designed to appear as human-like as possible and hence more comparable to the actual human service agent.

Consistent with the findings presented above, a following hypothesis can be developed:

H1: People are more compliant with a human than with a robot (direct effect).

Besides these studies, which mainly compared human to robot agents, other scholars have examined how robot characteristics (Adam, 2020), non-verbal cues such as tone of voice, body movements and facial expressions (Chidambaram et al, 2012), and even the gender of the robot (Siegel et al, 2009) might influence user compliance. Of the variables listed above, the persuasiveness of the robot resulting from non-verbal cues has been shown to be an important factor influencing people's cooperation. Furthermore, robots able to elicit high behavioral realism also appeared more influential. Hence, body movements are an important cue when trying to increase compliance through persuasiveness; vocal cues on their own, however, were not (Chidambaram et al, 2012). Nevertheless, the vocal aspects of a robot should not be neglected, as Adam et al (2020) showed that an anthropomorphized voice of a service agent results in a higher compliance rate.

Another reason this research includes an embodied anthropomorphized robot is that in a study by Bainbridge et al (2011), the physical presence, and hence embodiment, of a humanoid robot had a much greater influence on people's compliance than when the robot was displayed on a video.

Although persuasiveness (Chidambaram et al, 2012) and physical embodiment (Bainbridge et al, 2011) influence compliance, perceived authority seems to be the mediator that most reliably influences it. To establish a mediating effect, the hypothesis is the following:



H2: People are more compliant with a human than with a robot because humans are perceived as more authoritative than robots, which increases compliance (full mediation).

2.3. Accent biases

Research on compliance with robots has largely investigated the variables stated above. The positive influence of vocal cues on user compliance was shown by Adam et al (2020), who manipulated the extent of human-like voice features, of which accents are a part. To be more specific, accents are defined as "any systematic differences in pronouncing the sounds of a language, that people belonging to a certain group have in common" (Torre et al, 2020). Accents are an interesting variable to explore because of the biases and stereotypes people assign to them, similarly to how they associate a brand with a certain country or value (Hill et al, 2011). Accents reveal ethnicity and whether people belong to the same social group, which affects the interaction between the people involved in the communication (Torre et al, 2020).

Some researchers have discovered the potential of linguistics in the business world. Back in 1996, DeShields et al tested the influence of a salesperson's accent on his sales performance. In an international environment, customers constantly interact with English-speaking employees who have different accents (Hill et al, 2011); hence, it is important to understand how accents influence consumer behavior and how to use them to design a robot in a favorable way. Torre et al (2020) studied whether robots in particular should have accents in order to enhance the customer experience, since accents tend to elicit either positive or negative attitudes when people associate them with stereotypes.

Certain English accents, such as Mexican and Indian, evoke negative associations because these countries score lower in socio-economic status. Furthermore, in English-speaking countries they are also perceived as less credible and less professional compared to a standard American accent (DeShields et al, 2000). On the other hand, a British accent is associated with sophistication, prestige and politeness (Hill et al, 2011). In the UK in particular, a Standard British English accent is associated with trustworthiness, whereas non-native accents are perceived less positively (Torre et al, 2020). Not only in English-speaking countries but also in several European countries, British English has been evaluated as more competent than American English. Furthermore, Jarvella et al (2001) and Carrie (2017) found that people find British English speech more desirable and attractive than American; the same results were found in a study by Tamagawa et al (2011) with New Zealander participants. Even the children participating in the study of Sandygulova et al (2015) evaluated a British-accented robot as preferable to an American-speaking one.

On the other hand, a local accent has been shown to evoke positive service evaluations (Torre et al, 2020; Tamagawa et al, 2011). It is supposed to evoke feelings of warmth and also trustworthiness when a robot's accent matches the user's (Dahlbäck et al, 2001). Andrist et al (2015) confirmed these findings: participants perceived a robot with a matching accent as more knowledgeable and credible than one with a standard British accent. However, since this study is set in an international environment such as an airport, a robot with a local English accent might not elicit those perceptions and positive evaluations because of the variety of people's nationalities.

Although the findings of previous studies are not completely consistent, in summary scholars have found a preference for British English in both English- and non-English-speaking countries, while the Standard American-accented female voice remains the most prevalent form for digital assistants (Torre et al, 2020). Several findings indicate that British English is perceived as more trustworthy and credible, which should also evoke a sense of authority; authority, together with intelligence, represents one of the dimensions that lead to credibility (McCroskey et al, 1999) and could potentially have a positive influence on compliance. That is because credibility affects how well the user agrees with a message received from a robot (DeShields et al, 2000).

To discover whether a British accent, compared to an American one, influences compliance, the following hypothesis is proposed:

H3: The effect differs between humans and robots depending on the accent used (moderated mediation).

For humans, British accents are perceived as more authoritative than American accents.


Based on the previously developed hypotheses, the following conceptual model is created.

Figure 1: Conceptual model


In this chapter, the methodological part of the study is presented, including the operationalization of the variables and the procedure of the study.

4.1. General design

In order to establish a causal effect between the independent and dependent variables, a 2 (human service agent vs. robotic service agent) × 2 (British vs. American accent) between-subjects experimental design was conducted to test the hypotheses. The main independent variable is the form of the service agent (human vs. robot), with the service agent's accent acting as a moderator. The outcome (dependent) variable is compliance, with the authority of the agent serving as a mediator. All of the variables mentioned are presented and operationalized in detail in section 4.3, Operationalization of variables.

Control variables were created to control for any differences across the four conditions; these are discussed in section 5.1, Descriptives and frequencies.

The experiment was developed with the online survey tool Qualtrics and distributed personally among the participants.
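The random assignment to the four cells of the 2 × 2 design (handled in practice by Qualtrics' randomizer) can be sketched as follows; the condition labels and the helper name are illustrative, not part of the survey software.

```python
import random

# The four cells of the 2 (agent: human vs. robot) x 2 (accent: American vs. British) design.
CONDITIONS = [
    ("human", "american"),
    ("human", "british"),
    ("robot", "american"),
    ("robot", "british"),
]

def assign_conditions(n_participants, seed=42):
    """Independently draw one of the four cells for each participant."""
    rng = random.Random(seed)  # fixed seed only to make the sketch reproducible
    return [rng.choice(CONDITIONS) for _ in range(n_participants)]

# 214 participants started the experiment before exclusions.
assignments = assign_conditions(214)
```

Simple random assignment like this yields roughly, not exactly, equal cell sizes, which is consistent with the 40/42/42/43 split reported after exclusions.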

4.2. Procedure and materials

This section presents a detailed description of the online experiment. First, participants had to agree to the terms and state that they are neither American nor British citizens. Each of the 214 participants was then randomly assigned to one of the four conditions, exposing them to either a human service agent with an American or British accent or to a robotic service agent with an American or British accent. The participants were not briefed before the experiment, ensuring that the stimulus was the only variable affecting their compliance. However, they were asked to imagine that they were interacting with the agent in real life. Next, participants across all four conditions were exposed to the same set of tasks; the stimuli were presented in video form, with the service agent giving instructions on how to solve the tasks. The video in the human agent conditions showed the Japanese engineer and inventor Hiroshi Ishiguro and was taken from an interview with him posted on YouTube. In the robotic agent conditions, the agent giving instructions in the video was Ishiguro's Geminoid robot, which looks and behaves extremely similarly to its inventor.

The video of the human agent was then edited and given a voice-over by either a person with an American accent or a person with a British one; the former was provided by Zachary Manning and the latter by James Pettem. The video of the robotic service agent was edited and given a voice-over extracted from the mobile application Text to Speech, giving it a robotic touch. In one of the robotic conditions the agent was given a British accent, and an American one in the other.

Picture 1: Robotic vs Human Service Agent displayed in the experiment

Across all four conditions, participants were asked by the service agent to perform a task. In total, there were ten different videos with instructions, each followed by a task. All of the tasks were designed to make participants feel irritated and annoyed, to ensure that the only reason they completed a task was because the agent asked them to do so.

The first task was to find and count all the odd numbers among the forty-six numbers presented in a table.

Participants were given the option either to answer the question, stating the number of odd numbers found, or to skip the task, which was a clear indicator of non-compliance.

Next, participants were asked by the agent to watch a video until the end. Afterwards, they were asked to write down the color displayed at the end of the video. They were given the option to skip the task before watching the video, which counted as non-compliance, as did a wrongly stated color.



The next task was to count the apples in a photo, which also had a skip option. After that, an attention-check question was displayed, in which participants had to click on number two out of ten. The last task was a bundle of seven tasks, in each of which people were asked to calculate five simple equations. Each individual task had a skip option which, if ticked, took the participant straight to the last section of the experiment. This indicated the level of compliance on a scale from one to seven: participants who performed all seven tasks successfully were assigned compliance level seven, while those who gave up at the third task and clicked 'opt to skip' were assigned level three.
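The one-to-seven coding of the final task bundle described above can be sketched as follows (a hypothetical helper, not the actual Qualtrics scoring):

```python
def compliance_level(completed_flags):
    """Return the 1-7 compliance level for the bundle of seven tasks.

    completed_flags: seven booleans, True if the participant completed that
    task. A participant who completed everything gets level 7; otherwise the
    level is the position of the task at which they opted to skip (so giving
    up at the third task yields level 3, as described in the text).
    """
    for position, done in enumerate(completed_flags, start=1):
        if not done:
            return position
    return 7
```

For example, `compliance_level([True] * 7)` gives 7, while a participant who skipped at the third task gets 3.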

The next part of the study was a self-report consisting of three sets of questions, in which participants had to indicate on a scale from one to seven how likely they were to follow the agent's requests in real life and how authoritative, credible, trustworthy and dominant the agent appeared.

In the last part of this section, participants had to identify the accent of the agent, which served as a control variable.

The last part of the study consisted of control variables: demographic information about the age, gender and mother tongue of the participants. At the end, participants were debriefed about the purpose of the study.

4.3. Operationalization of variables

Form of the agent

The form of the agent serves as the independent variable. The agent is defined as "the face and the voice of the organization", the link between the company they represent and the customer (Ashforth et al, 2008). The form of the agent is defined as either human or robotic: in the first case the agent is an actual human being, whereas the robotic form presents a non-human service agent, a robot.

The two forms were manipulated across the four conditions, meaning that half of the participants were exposed to the human form and the other half to the robotic form of the agent.

The accent of the service agent

The accent variable is a moderating variable, defined as "any systematic differences in pronouncing the sounds of a language, that people belonging to a certain group have in common" (Torre et al, 2020). Across the four conditions, the accent stimulus was manipulated so that a quarter of the participants were exposed to a human agent with a British accent, a quarter to a robotic agent with a British accent, a third quarter to a human agent with an American accent, and the remaining participants to a robotic agent with an American accent.

Compliance and self-report compliance

Compliance is the main dependent variable in the study, defined as "following requests of continuation of a task beyond one's initial willingness" (Haring et al, 2019). The measure for this variable was created by assigning each successfully performed task a score; based on participants' scores, a level of compliance was assigned to each participant. Furthermore, self-reported compliance was measured on a 1-to-7 Likert scale, on which participants indicated how likely they are to follow the agent's requests in real life.


Authority of the agent was another dependent variable, serving as a mediator, and is defined as "the moral or legal right or ability to control" (Cambridge University Press, 2021). It was first measured on 1-to-7 Likert scales covering several self-reported items: Dominance (How dominant did the agent appear?), Authority (How authoritative did the agent appear?), Trustworthiness (How trustworthy did the agent appear?) and Credibility (How credible did the agent appear?). A composite scale combining all of these items was then created to measure authority.
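Collapsing the four Likert items into the composite authority score can be sketched as a per-participant mean; the ratings below are made up for illustration, not the study's data.

```python
import numpy as np

# One row per participant; columns: dominance, authority, trustworthiness,
# credibility, each rated on a 1-7 Likert scale (illustrative values).
ratings = np.array([
    [4, 3, 5, 4],
    [6, 6, 7, 6],
    [2, 1, 3, 2],
])

# The composite authority scale is the mean of the four items per participant.
authority_score = ratings.mean(axis=1)  # -> [4.0, 6.25, 2.0]
```

Averaging across items is the usual way to build such a composite, provided the reliability analysis (section 5.2) shows the items hang together.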




5. RESULTS

This chapter presents and discusses the results of the SPSS data analysis. Firstly, the descriptive and frequency statistics are examined, followed by the reliability analysis of the measures. Next, differences across the four conditions are measured and explained, together with a correlation analysis of all the variables. Lastly, the three hypotheses are tested and analyzed.

5.1. Descriptives and frequencies

A 2 × 2 between-subjects experiment was performed online in the Qualtrics software with 214 participants. 47 people were immediately excluded from the analysis for either not agreeing to the terms or being British or American citizens; the latter were excluded due to potential bias. Furthermore, another 5 participants were excluded for answering the attention check (Click on number 2 if you are still paying attention) incorrectly. Participants were recruited through personal distribution of the survey. After removing irrelevant data, 167 respondents were included in the analysis, all randomly assigned to one of the four conditions. In the first two conditions, participants were exposed to a robot (N=82), and in the second two, respondents interacted with a human agent (N=85). In the first condition, 40 respondents were exposed to a robot with an American accent; in the second, 42 to a robot with a British accent; in the third, 42 to a human agent with an American accent; and in the fourth, 43 to a human agent with a British accent.

Besides the attention-check variable, a manipulation-check variable (Recognition_Check) was created across all four conditions to measure whether participants recognized the agent's accent. 55% of participants correctly identified the robot's accent in the first condition, 59,5% recognized it in the second and third conditions, and only 34,9% correctly identified the agent's accent in the fourth condition.

The main dependent variable, Compliance, consisted of ten tasks that participants had to perform correctly in order to be credited with a correct answer. Based on how many tasks they performed and answered correctly, a scale from one to ten was used to measure the compliance rate. The mean number of compliance steps was highest in the US Human Agent condition (M=4,88, SD=1,94), meaning that on average participants completed the most tasks when interacting with a human agent with an American accent. The second highest average number of tasks completed was in the first condition (M=4,85, SD=2,56), where participants interacted with a robot agent with a British accent. The second condition, with a British-accented robot, had the third highest mean (M=4,79, SD=2,21), and the mean was lowest in the third condition, with a British-accented human (M=4,02, SD=2,34). Furthermore, the Kolmogorov-Smirnov (KS) and Shapiro-Wilk (SW) tests were conducted; both were significant, and the histogram was negatively skewed (M=4,63, skewness= -.687). This indicates that the compliance steps were not normally distributed.
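The normality checks run in SPSS (Kolmogorov-Smirnov and Shapiro-Wilk) can be reproduced with scipy; the sample below is synthetic skewed data of the study's size, not the actual compliance scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic, clearly skewed sample with the same N as the study (167).
sample = rng.exponential(scale=2.0, size=167)

sw_stat, sw_p = stats.shapiro(sample)  # Shapiro-Wilk
# Kolmogorov-Smirnov against a normal with the sample's own mean and SD.
ks_stat, ks_p = stats.kstest(sample, "norm", args=(sample.mean(), sample.std()))

# p-values below .05 reject normality, mirroring the result reported above.
```

One caveat: estimating the normal's parameters from the sample itself makes the plain KS test conservative; SPSS's "Kolmogorov-Smirnov" output applies the Lilliefors correction for this.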

Another dependent variable, Authority, which serves as the mediator, consisted of four items measured on a Likert scale ranging from 1 (very strongly disagree) to 8 (very strongly agree). The mean was lowest for 'The agent seemed authoritative' (M=3,56, SD=1,66) and highest for 'The agent seemed dominant' (M=3,73, SD=1,65). The UK Human condition scored the highest evaluation on all four agent characteristics (Trustworthy, Credibility, Authority, Dominance), with its lowest mean at 4,09 for authority and its highest at 4,46 for credibility; among the remaining conditions, the highest mean was 3,69 for authority. Again, both the KS and SW tests showed that the variable is not normally distributed.

The dependent self-report variable 'How likely are you to follow the agent's requests in real life?' was also included in the analysis, measured on a scale from one to one hundred. The mean was highest in the UK Human Agent condition (M=38,39, SD=22,46) and lowest in the US Robot condition (M=26,85, SD=20,13), which was also the only condition with a mean under 30. Again, both the KS and SW tests were significant.

Next, control variables were examined. The most common mother tongue among participants was Dutch (27,95%), followed by Slovenian (17,4%). The second control variable was age, which ranged from 17 to 63 (M=25,06, SD=5,88); 49,7% of the participants were aged 23 or less. Regarding gender, 64,1% (N=107) were female, 28,7% (N=48) male, 1,8% (N=3) non-binary, and 9 participants chose not to disclose their gender.

5.2. Reliability analysis for scales

A reliability analysis of the Authority scale was performed to check how closely the items are related as a group; in other words, how consistently a participant responds across the different items. The scale consisted of four items (Authority, Trustworthy, Credibility, Dominance) and was checked with Cronbach's alpha (α), which should be higher than 0,70. The result showed a high value of Cronbach's alpha (α=0,875), which indicates good internal consistency.
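Cronbach's alpha is straightforward to compute from raw item scores: k/(k-1) times one minus the ratio of the summed item variances to the variance of the total score. A minimal sketch with invented six-respondent ratings (none of these numbers are the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item)."""
    k, n = len(items), len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(sample_var(c) for c in items) / sample_var(totals))

# Invented ratings for the four scale items (illustrative only)
authority   = [3, 4, 5, 2, 6, 4]
trustworthy = [3, 5, 5, 2, 6, 5]
credibility = [4, 5, 6, 2, 7, 5]
dominance   = [2, 4, 4, 3, 5, 3]
print(round(cronbach_alpha([authority, trustworthy, credibility, dominance]), 3))
```

Values above 0,70 are conventionally read as acceptable internal consistency, which is the threshold applied in the analysis above.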

The inter-item correlation output shows the correlation of every item in the scale with every other item. All values were positive, which means that the participants' answers went in the same direction. The highest correlation was between credibility and trustworthiness (r=0,856) and the lowest between dominance and trustworthiness (r=0,432).

The Item-Total Statistics showed that all included items correlated well with the scale, each scoring a corrected item-total correlation of at least 0,509, above the threshold of 0,40. However, the Cronbach's Alpha If Item Deleted column showed that removing the Dominance item would increase Cronbach's alpha (α=0,893). Hence, this item was excluded from the scale and from further analysis.
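The 'alpha if item deleted' diagnostic simply recomputes alpha with each item left out in turn; a sketch with invented item scores (illustrative only, not the study's responses):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns."""
    k, n = len(items), len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(c) for c in items) / var(totals))

# Invented item scores (illustrative only)
items = {
    "authority":   [3, 4, 5, 2, 6, 4],
    "trustworthy": [3, 5, 5, 2, 6, 5],
    "credibility": [4, 5, 6, 2, 7, 5],
    "dominance":   [2, 4, 4, 3, 5, 3],
}
# Recompute alpha leaving each item out in turn; a value higher than the
# full-scale alpha flags an item whose removal improves internal consistency.
for name in items:
    rest = [col for other, col in items.items() if other != name]
    print(name, round(cronbach_alpha(rest), 3))
```

An item whose deletion raises alpha, as Dominance did in the analysis above, is the usual candidate for exclusion.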

Next, a new scale (Authority_S) was created from the three remaining items (Trustworthy, Authority, Credibility) by calculating their mean. This scale serves as the mediating variable.


5.3. Correlation

Next, a correlation test including all the main variables was performed to check whether the variables were correlated. Table 1 reports the means, standard deviations, and correlations of the following variables: the self-reported willingness to comply, Authority, Compliance steps, Age, and the condition variables US robot, UK robot, US human and UK human.

The first variable, Authority, measures participants' perceived authority of the agent and is the mediator in the analysis. The second, Agent_ID2, is an independent variable that was manipulated so that participants were exposed to either a human or a robot agent. The two variables showed a significant negative correlation (r=-.253, p=.00). The correlation is considered small and means that the more human the agent, the more it is perceived as authoritative (Human=0, Robot=1). This further explains the significant correlations between UK_robot and Authority_S (r=-.165, p=.033) and UK_human and Authority_S (r=.276, p=.00), which are an important part of Hypothesis 2. Furthermore, the Agent variable showed a significant negative correlation with the self-report variable 'How likely are you to follow the agent's requests in real life?' (r=-.173, p=.033).

Another significant, moderately strong positive correlation (r=.458, p=.00) was found between Authority and the self-report variable 'How likely are you to follow the agent's requests in real life?'. This means that the more participants perceived the agent as authoritative, the more they believed they would follow its requests in real life.
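The Pearson coefficient behind these entries can be reproduced from its definition (covariance over the product of the standard deviations); a sketch with invented authority and willingness values, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented perceived-authority ratings and self-reported willingness (0-100)
authority   = [2.0, 3.0, 3.5, 4.5, 5.0, 6.0]
willingness = [15, 25, 30, 40, 55, 60]
print(round(pearson_r(authority, willingness), 3))
```

A positive r close to 1 would mean the two measures rise together, which is the direction of the relationship reported in the analysis above.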

The model also includes an accent variable (Agent_ID3), which serves as the moderator in this research and was manipulated by exposing participants to an agent with either a British or an American accent. This variable showed no significant correlation with any other variable in the model.

Compliance measures participants' compliance with the agent's requests and is the outcome variable in the analysis. It is positively correlated with the self-report variable 'How likely are you to follow the agent's requests in real life?' (r=.192, p=.01). Furthermore, a significant negative correlation was found between US_Human and Compliance steps (r=-.156, p=.044).



Table 1: Correlation matrix of the variables

    Variable                               M      SD     1        2       3       4      5        6        7        8        9
 1  Authority_S                            3,61   1,43
 2  How likely is it that you would follow
    the agent's requests in real life?     32,50  21,98  ,458**
 3  Compliance_steps                       4,63   2,28   ,169*    ,192*
 4  Age                                    25,06  5,88   ,107     ,030    ,130
 5  Agent_ID2                              0,49   0,50   -,253**  -,173*  ,080    -,128
 6  Agent_ID3                              0,51   0,50   ,099     ,090    ,090    ,009   ,006
 7  US_robot                               0,24   0,43   -,129    -,145   ,053    -,023  ,571**   -,571**
 8  UK_robot                               0,25   0,44   -,165*   -,057   ,039    -,125  ,590**   ,569**   -,325**
 9  US_human                               0,25   0,44   ,010     ,040    -,156*  ,012   -,569**  -,590**  -,325**  -,336**
10  UK_human                               0,26   0,44   ,276**   ,162*   ,065    ,134   -,578**  ,578**   -,330**  -,341**  -,341**

* Correlation is significant at the 0.05 level (2-tailed).
** Correlation is significant at the 0.01 level (2-tailed).
Unless otherwise noted, bootstrap results are based on 1000 bootstrap samples.



5.4. Hypotheses testing

Hypothesis 1: People are more compliant with a human than with a robot (direct effect)

The independent variable, the form of the agent (Agent_ID2), has two values (Human=0, Robot=1), and the dependent variable, Compliance, is measured on a 1-to-10 scale. Since there is a single independent and a single dependent variable, a simple linear regression was conducted, which is an extension of the Pearson correlation reported in Table 1. The regression (F=1.033, p=.311, R2=.006) showed no significant relationship between the variables, which is not surprising, since the Pearson correlation was not significant either. This means that the form of the agent did not predict the level of compliance.
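With one 0/1 predictor, this regression reduces to the closed-form one-predictor least-squares solution; a minimal sketch with made-up data (not the study's responses):

```python
def simple_ols(xs, ys):
    """Slope, intercept and R^2 for a one-predictor least-squares regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

# Made-up data: agent form (Human=0, Robot=1) vs compliance steps (1-10)
form       = [0, 0, 0, 0, 1, 1, 1, 1]
compliance = [5, 4, 6, 5, 5, 4, 5, 4]
slope, intercept, r2 = simple_ols(form, compliance)
print(slope, intercept, round(r2, 3))
```

With a binary predictor, the slope is simply the difference between the two group means, and a tiny R2 like the one reported above means that group membership explains almost none of the variance in compliance.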

Next, a hierarchical regression was performed to see whether additional variables in the model had predictive capacity over the dependent variable (Compliance).

Perceived authority (Authority_S) was shown to be a significant predictor of compliance (p=.014), which means that a higher level of perceived authority indicates a higher level of compliance. To test whether Authority adds anything extra in predicting compliance after Agent_ID2 and Agent_ID3 are included as predictors, we need to look at the hierarchical regression analysis.

In the first model, the variables Authority_S and Agent_ID3 were included, as these variables are not expected to have a direct effect on compliance in the proposed hypothesis. This model was not significant (p=.095), which means that these variables do not account for a significant model and do not have a direct, significant relationship with the dependent variable (Compliance).

In the second step, the variable Form of the agent (Agent_ID2) was added to the model, but no significance was discovered (p=.108). The results are summarized in Table 2 below.

This analysis indicates that Hypothesis 1 is rejected.



Table 2: Hierarchical analysis output for Hypothesis 1

                  B       SE B     β          t
Step 1            0,184   0,34     0,22
  Authority       0,046   0,024    0,124      1,604
  Agent Accent    0,019   0,008    0,195*     2,525
Step 2            0,222   0,49     0,032
  Authority       0,053   0,026    0,143      1,847
  Agent Accent    0,022   0,008    0,218***   2,782
  Agent Form      0,676   0,324    0,156*     1,984

*p<0.05, **p<0.01, ***p<0.001


Hypothesis 2: People are more compliant with a human than with a robot because humans are perceived as more authoritative than robots, which increases compliance (full mediation)

This part of the analysis concerns the mediating effect of authority on compliance; in other words, it examines why the relationship between the independent variable (Agent_ID2) and the dependent variable (Compliance) occurs.

To analyze this, a PROCESS mediation analysis was used with Authority of the agent (Authority_S) as the mediating variable. The model used was Hayes & Preacher's model 4, with 5,000 bootstrap samples and a confidence level of 95%. First, the model was run without any control variables.

The relationship between the agent form and the mediator Authority is significant (p=.001). The coefficient shows the change in the outcome variable (Authority) that can be expected with a one-unit increase in the predictor (Agent_ID2): as the agent form increases by one unit, towards more robotic (Human=0, Robot=1), participants are estimated to evaluate the perceived authority of the agent as lower (a1=-.722).

The second model estimates the direct effect of the independent variable (Agent_ID2) on the dependent variable (Compliance). This relationship was not significant (p=.10, c1'=.59); however, the overall model was significant (p=.02), and a significant relationship was found between Authority and Compliance (p=.011). This means that a change in compliance can be expected with an increase in perceived authority: the more participants perceived the agent as authoritative, the more they complied with its instructions (b1=.32).

Next, the total effect of the independent variable on the dependent variable was analyzed; it was not significant (p=.31, c1=.358), and neither was the direct effect of the form of the agent on compliance (p=.11). The indirect effect of the agent form on compliance was a1b1=-.232, which means that conditions in which the agent is more 'human' are estimated to differ by 0,232 compliance steps as a result of the higher perceived authority of the agent. The direct effect of the form of the agent on compliance was 0,59 (SE=0,358) but insignificant (p=.31).
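The logic of this mediation model (an a-path from X to M, a b-path from M to Y controlling for X, and the indirect effect a×b) can be sketched with ordinary least squares; the data and resulting coefficients below are invented for illustration and are not the study's estimates:

```python
def centered(xs):
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def two_predictor_slopes(x1, x2, y):
    """OLS slopes for y = b0 + b1*x1 + b2*x2, via the 2x2 normal equations."""
    c1, c2, cy = centered(x1), centered(x2), centered(y)
    s11, s22, s12 = dot(c1, c1), dot(c2, c2), dot(c1, c2)
    s1y, s2y = dot(c1, cy), dot(c2, cy)
    det = s11 * s22 - s12 ** 2
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

# Invented data: X = agent form (Human=0, Robot=1), M = authority, Y = compliance
x = [0, 0, 0, 0, 1, 1, 1, 1]
m = [5.0, 4.0, 4.5, 5.5, 3.0, 3.5, 4.0, 3.5]
y = [6, 5, 5, 7, 4, 4, 5, 4]

# a-path: with a 0/1 predictor, regressing M on X reduces to the group-mean gap
a = sum(mi for xi, mi in zip(x, m) if xi == 1) / 4 \
    - sum(mi for xi, mi in zip(x, m) if xi == 0) / 4
# b-path and direct effect c': regress Y on M and X together
b, c_prime = two_predictor_slopes(m, x, y)
print(round(a, 2), round(b, 3), round(a * b, 3))  # a, b, indirect effect a*b
```

PROCESS additionally bootstraps the sampling distribution of a×b; the point estimate itself is just this product of the two path coefficients.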

Hypothesis 3: The effect differs between humans and robots depending on the accent used (moderated mediation): for humans, compliance is higher with a British accent, as it is perceived as more authoritative than an American accent.

This part of the analysis examines whether the effect of the agent form on compliance operates regardless of the accent, or whether it depends on the presence of the moderating variable (Accent).

Figure 2: Conceptual model with effects



First, the independent variable Accent (Agent_ID3) was tested for a direct effect on compliance by performing an ANOVA. The results showed no significant relationship between the two variables (F=1.346, p=.248, R2=.008), which means that the accent of the agent did not predict the number of compliance steps participants took.

Next, a univariate analysis of variance was performed to analyze the interaction effect of the two main independent variables (Agent_ID2 and Agent_ID3) on the main dependent variable (Compliance_steps). No significant differences were found between the groups (p=.254), which means that the means of Compliance were similar across all four conditions (US robot: M=4,85, SD=2,56; UK robot: M=4,79, SD=2,21; US human: M=4,02, SD=2,34; UK human: M=4,88, SD=1,94). The analysis showed no statistical significance for the form of the agent (p=.302), nor for the accent of the agent (p=.259). Furthermore, the interaction between the two independent variables (Agent_ID2 and Agent_ID3) did not reach statistical significance either (p=.190).
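Using the cell means reported above, the 2x2 interaction contrast can be checked by hand as a "difference of differences"; a quick arithmetic sketch (the significance of the contrast is a separate question, answered by the ANOVA):

```python
# Cell means of compliance steps in the 2x2 design (as reported above)
means = {
    ("human", "US"): 4.02, ("human", "UK"): 4.88,
    ("robot", "US"): 4.85, ("robot", "UK"): 4.79,
}
# Simple accent effects within each agent form
accent_effect_human = means[("human", "UK")] - means[("human", "US")]
accent_effect_robot = means[("robot", "UK")] - means[("robot", "US")]
# The interaction contrast is the difference of these differences
interaction = accent_effect_robot - accent_effect_human
print(round(accent_effect_human, 2), round(accent_effect_robot, 2), round(interaction, 2))
```

The contrast is non-zero in the sample (the British accent helps with the human agent but not with the robot), but it does not reach statistical significance.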

Next, a moderated Hayes & Preacher PROCESS model 1 was run to check the effect of the moderating variable on the relationship between the form of the agent (Agent_ID2) and Compliance. The interaction was not significant (p=.19, c3=-.924), which means that it cannot be shown that compliance depends on the accent of the agent. The model itself did not reach statistical significance either (p=.254). Furthermore, the conditional effect of the agent form on compliance when the accent equals zero (British=1, American=0) was not statistically significant (p=.083, c1=.86).

Next, a moderated Hayes & Preacher PROCESS model 7 analysis was run to test the moderating effect of accent on the mediating path from the form of the agent, through authority, to compliance. The bootstrap sample was set to 5,000 with a confidence level of 95%. The results showed a negative coefficient for the interaction (-.713); however, it was not statistically significant (p=.096). This means that we cannot assume that the effect of the form of the agent on authority varies across accents. However, statistical significance (p=.034) was found for the effect of Agent_ID3 on authority, with a positive coefficient of 0,636 (SE=.298), meaning that the accent of the agent itself influenced perceived authority. Still, we cannot claim that the effect of the form of the agent on authority is moderated by the accent of the agent.
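A model of this kind quantifies moderated mediation through conditional indirect effects: the a-path becomes a1 + a3*W, so the indirect effect at moderator value W is (a1 + a3*W)*b1, and a3*b1 is the index of moderated mediation. In the sketch below, b1 = .32 echoes the mediation analysis reported earlier and a3 = -.71 roughly mirrors the interaction coefficient above, while a1 is a purely hypothetical placeholder:

```python
# Illustrative coefficients (a1 is invented; a3 and b1 loosely follow the text)
a1 = -0.40   # effect of agent form on authority when the moderator W (accent) = 0
a3 = -0.71   # form x accent interaction on authority
b1 = 0.32    # effect of authority on compliance

def conditional_indirect(w):
    """Indirect effect of agent form on compliance at moderator value w."""
    return (a1 + a3 * w) * b1

index_mod_med = a3 * b1  # index of moderated mediation
print(round(conditional_indirect(0), 3),
      round(conditional_indirect(1), 3),
      round(index_mod_med, 3))
```

PROCESS bootstraps a confidence interval around this index; moderated mediation is claimed only when that interval excludes zero.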

Another analysis, Hayes & Preacher PROCESS model 59, was performed to test all of the possible moderating effects: on the direct relationship between the main independent variable (agent form) and the main dependent variable (Compliance), on the relationship between the agent form and the mediating variable (Authority), and on the relationship between the mediating variable and the outcome variable (Compliance). The number of bootstrap samples was set to 5,000 with a confidence level of 95%.

The first model summary, with authority as the outcome variable, was statistically significant (p=.0015, R2=.0898). This means that 8,98% of the variance in authority is explained by the form of the agent, the accent of the agent, and their interaction. The independent variable (Agent_ID2) was not significant in the model, but the moderating variable (Agent_ID3) was (p=.034, c2=.636, SE=.298), which means it has a significant impact on variations in authority.

Next, the model with Compliance as the outcome variable was examined. It was statistically significant (p=.02, R2=.0789), meaning that 7,89% of the variance in compliance is explained by the form of the agent, Authority, the accent of the agent, and Interactions 1 and 2. However, only four of the five predictors brought a significant change to compliance: the form of the agent (p=.04, SE=.49), Authority (p=.002, SE=.17), the accent of the agent (p=.01, SE=1,12), and Interaction 2 (Authority_S x Agent_ID2) (p=.04, SE=.25). None of the conditional direct effects were significant.

The indirect effect of authority on compliance was also insignificant, as the bootstrap confidence interval contained zero (BootLLCI < 0 < BootULCI).


6. DISCUSSION

This thesis reports the effect of different English accents on people's level of compliance in an online experiment. Its primary aim was to examine whether people comply more with human service agents than with robotic ones. Furthermore, the intention was to explain this relationship through the mediating variable authority, which would suggest that the reason for complying with one form of agent more than with the other lies in how authoritative people perceive the agent to be. In addition, a moderating variable, the accent of the agent, was tested to see whether the agent's British or American accent influences both compliance and perceived authority.

The findings have not provided evidence that would confirm the moderating effect of the accent on the perceived authority of the agent, nor that the accent influences people's compliance with the agent. There is also no evidence of a direct effect of the form of the agent on people's compliance, which means that we cannot claim that people comply with humans more than they do with robots. Although the mediating effect in this relationship was not proven, there is evidence that the form of the agent affects authority, and that authority has an effect on the level of compliance.



6.1. General discussion

There are many potential reasons for not finding any predictive power of the form of the agent over people's compliance. Geiskkovitch et al. (2016) and Haring et al. (2019) found increased compliance when people interacted with humans compared to robots, whereas a study by Menne (2017) did not replicate this effect. Firstly, Bainbridge et al. (2011) demonstrated the importance of the robot's physical presence for increasing compliance.

This study was conducted as an online experiment, which means that both the robot and the human agent were displayed in the form of a video. There was therefore no physical presence for either agent form, which might explain why no statistically significant relationship between the two variables was found.

Furthermore, Chidambaram et al. (2012) pointed out in their study that the most important factor in increasing compliance with robots was the robot's non-verbal cues. Although the robot used in this experiment looked exactly like the human agent, the video format had its limitations: participants were not able to notice all of the non-verbal cues that the anthropomorphized geminoid robot expressed during its speech. The online format of the experiment also decreased the ecological validity of the study.

Although there was a significant relationship between the agent form and perceived authority, there is no proof of authority's mediating effect on compliance. Milgram's study found that people complied because of the perceived authority of the person instructing the participants, and that this perception stemmed from the experimenters' uniforms, a cue that was not included in this study. There is a chance that participants could not imagine the scenario presented in the experiment as a real-life situation, since the service agents of both forms did not appear to represent a particular company.

When it comes to the accent variable, several things need to be taken into consideration as well. As mentioned before, Chidambaram et al. (2012) demonstrated the importance of non-verbal cues and their effect on compliance. Among the cues they tested was the robot's voice, which was one of the factors that did not prove to have a significant impact on compliance. Furthermore, looking at the control variable for mother tongue, more than half of the participants spoke either Dutch or Slovenian as their first language. A study by Dahlbäck et al. (2001) showed that to increase the perceived trustworthiness of a speaker, the accents of both parties should match. This means that to achieve a higher level of perceived trustworthiness of the agent, the accents tested should have been Dutch-accented or Slovenian-accented English. The other option would be to run the same experiment among British and American speakers; however, both nationalities were intentionally excluded from the study to avoid potential bias. Thus, not only did the accent of the agent not match the participants'; due to the use of a voice-over inserted into the video, there was also a clear discrepancy between the voice and the video, which made it apparent that the voice did not belong to the agent.

Regarding the accent variable, as reported in the results section, between 34,9% and 59,5% of the participants across the four conditions managed to recognize and correctly identify the accent of the agent. This could be the main explanation for the weak effect of the accent on compliance and authority: since around half of the participants did not recognize the accent, the accents may not have been distinguishable enough.

Lastly, it is important to take into consideration that the experiment was distributed through personal efforts. This means that a certain percentage of the participants could potentially have been biased and not completely neutral when participating in the study. Furthermore, there is a chance that several participants were already aware of the nature of the study and the effects it was meant to test.

6.2. Managerial and theoretical implications

This study also provides several managerial and theoretical implications. It was based on previous studies conducted in the research field of robotic service agents. Several of those have focused on the differences between human and robotic forms of the agent and their influence on compliance, and researchers have started to dive deeper into the mediating and moderating variables that explain this effect. This study focused on the moderating effect of accent with the intention of contributing to the theory of designing robotic service agents in a way that elicits a higher compliance rate. Although no statistical significance was found, it still contributes to the research field.

Firstly, it suggests that the accent of the agent may not be an important factor influencing people's behavior. This means that it may not need to be taken into consideration when hiring a human agent or designing a robotic one intended to perform tasks in an international environment. Secondly, it shows a significant influence of perceived authority on compliance. This means that authority does play a role in eliciting a higher compliance rate, which confirms previous studies in this research field. Hence, companies that operate with service agents should focus on choosing human agents, and designing robotic ones, in a way that makes them appear as authoritative as possible.

The next implication takes into consideration the environment in which the experiment was conducted. Since most of the participants' mother tongue was Dutch or Slovenian, it can be assumed that in an environment where speakers of these languages interact with service agents, the accent of the agent should not have any influence on the behavior of the users.



6.3. Limitations to the study and future research

Most of the limitations to the study have already been discussed in the general discussion part.

The main limitation is probably that the experiment was conducted online instead of in a laboratory or a real-life environment, which decreases the ecological validity of the study. The online format not only failed to convey the physical presence of the agent but also limited the extent to which the results can be generalized to a real-life setting.

Participants also had the option to express questions and opinions about the study. Many expressed confusion, because they were not fully briefed about the nature of the experiment until the very last step of the survey. Several also found the robotic agent 'creepy' and expressed concern about the mismatch between the audio and the video.

Based on these comments, the design of the stimuli clearly lacked the technical refinement needed to reduce the discrepancies between the voice-over and the agents' facial expressions. Besides, the accent recognition check was not successful, as only a small percentage of the participants managed to recognize the accent.

It should also be noted that the distribution of the survey through personal efforts might have been a limiting factor in the study. Some of the recruited participants might have been biased when taking the experiment, and some were even aware of the nature of the study before it was disclosed.

Future research should take all of the limitations expressed here into consideration when conducting further research on this topic. The field could be expanded by incorporating other mediating and moderating variables, and other forms of customer agents and their influence on consumer behavior remain to be examined. For example, holographic 3D customer agents could be a way to expand the field of customer service. Furthermore, variables such as the gender or age of the agent would be a great way to extend previous studies. When it comes to accents in particular, conducting a study with accents that match those of the participants would be a great way to test whether accents really matter when interacting with customer agents.

It is important that this research field continues to expand, as it grows in importance both for companies on the one side and for satisfied customers on the other. Hopefully, other researchers will recognize the potential this field has to offer.




7. REFERENCES

Adam, M., Wessel, M., & Benlian, A. (2020). AI-based chatbots in customer service and their effects on user compliance. Electronic Markets. https://doi.org/10.1007/s12525-020-00414-7

Agrawal, S., & Williams, M. (2017). Robot Authority and Human Obedience: A Study of Human Behaviour using a Robot Security Guard. Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 57–58.


Andrist, S., Ziadee, M., Boukaram, H., Mutlu, B., & Sakr, M. (2015). Effects of Culture on the Credibility of Robot Speech: A Comparison between English and Arabic. Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, 157–164.


Aroyo, A. M., et al. (2018). Will People Morally Crack Under the Authority of a Famous Wicked Robot? 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 35–42.


Ashforth, B. E., Kulik, C. T., & Tomiuk, M. A. (2008). How Service Agents Manage the Person—Role Interface. Group & Organization Management, 33(1), 5–45.


Bainbridge, W., Hart, J., Kim, E., & Scassellati, B. (2011). The Benefits of Interactions with Physically Present Robots over Video-Displayed Agents. International Journal of Social Robotics, 3(1), 41–52.


Burger, J. (2009). Replicating Milgram: Would People Still Obey Today? The American Psychologist, 64(1), 1–11. https://doi.org/10.1037/a0010932

Carrie, E. (2017). “British is professional, American is urban”: attitudes towards English reference accents in Spain. International Journal of Applied Linguistics, 27(2), 427–447.


DeShields, O., & de los Santos, G. (2000). Salesperson's accent as a globalization issue. Thunderbird International Business Review, 42(1), 29–46. https://doi.org/10.1002/1520-6874(200001)42:13.3.CO;2-G

Epley, N., Waytz, A., Akalis, S., & Cacioppo, J. (2008). When We Need A Human: Motivational Determinants of Anthropomorphism. Social Cognition, 26(2), 143–155.


Geiskkovitch, D., Cormier, D., Seo, S., & Young, J. (2016). Please continue, we need more data: an exploration of obedience to robots.



Ghazali, A., Ham, J., Barakova, E., & Markopoulos, P. (2019). Assessing the effect of persuasive robots interactive social cues on users’ psychological reactance, liking, trusting beliefs and compliance. Advanced Robotics, 33(7-8), 325–337.


Haring, K., Mosley, A., Pruznick, S., Fleming, J., Satterfield, K., de Visser, E., Tossell, C., & Funke, G. (2019). Robot Authority in Human-Machine Teams: Effects of Human-Like Appearance on Compliance. In Virtual, Augmented and Mixed Reality. Applications and Case Studies (pp. 63–78). Springer International Publishing. https://doi.org/10.1007/978-3-030-21565-1_5

Harris, K., Kimson, A., & Schwedel, A. (2018). Why the Automation Boom Could Be Followed by a Bust. Harvard Business Review (March 13). https://hbr.org/2018/03/why-the-automation-boom-could-be-followed-by-a-bust

HiroshiIshiguroLab. (2011, July 22). Geminoid HI-1 and its source. [video]. YouTube.


McCroskey, J. C., & Teven, J. J. (1999). Goodwill: A reexamination of the construct and its measurement. Communication Monographs, 66, 90–103

Mende, M., Scott, M., van Doorn, J., Grewal, D., & Shanks, I. (2019). Service Robots Rising: How Humanoid Robots Influence Service Experiences and Elicit Compensatory Consumer Responses. Journal of Marketing Research, 56(4), 535–556.


Menne, I. (2017). Yes, of Course? An Investigation on Obedience and Feelings of Shame Towards a Robot. Social Robotics, 365–374. https://doi.org/10.1007/978-3-319-70022-9_36

Murphy, J., Hofacker, C., & Gretzel, U. (2017). Dawning of the age of robots in hospitality and tourism: Challenges for teaching and research. European Journal of Tourism Research, 11, 104–


Dahlbäck, N., Swamy, S., Nass, C., Arvidsson, F., & Skågeby, J. (2001). Spoken interaction with computers in a native or non-native language: same or different? Proceedings of INTERACT 2001, 294–301.

Rao Hill, S., & Tombs, A. (2011). The effect of accent of service employee on customer service evaluation. Managing Service Quality, 21(6), 649–666.






Reddy, T. (2017b). How chatbots can help reduce customer service costs by 30%. Retrieved from https://www.ibm.com/blogs/watson/2017/ 10/how-chatbots-reduce-customer-service- costs-by-30-percent/

Reinares-Lara, E., Martín-Santana, J., & Muela-Molina, C. (2016). The Effects of Accent, Differentiation, and Stigmatization on Spokesperson Credibility in Radio Advertising. Journal of Global Marketing, 29(1), 15–28. https://doi.org/10.1080/08911762.2015.1119919

Sandygulova, A., & O’Hare, G. (2015). Children’s Perception of Synthesized Voice: Robot’s Gender, Age and Accent. In Social Robotics (pp. 594–602). Springer International Publishing.


Siegel, M., Breazeal, C., & Norton, M. (2009). Persuasive Robotics: The influence of robot gender on human behavior. 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2563–2568. https://doi.org/10.1109/IROS.2009.5354116

Stocker, L. (2017). The Impact of Foreign Accent on Credibility: An Analysis of Cognitive Statement Ratings in a Swiss Context. Journal of Psycholinguistic Research, 46(3), 617–628.


Tamagawa, R., Watson, C., Kuo, I., MacDonald, B., & Broadbent, E. (2011). The Effects of Synthesized Voice Accents on User Perceptions of Robots. International Journal of Social Robotics, 3(3), 253–262. https://doi.org/10.1007/s12369-011-0100-4

Torre, I., & Le Maguer, S. (2020). Should robots have accents? 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 208–214. https://doi.org/10.1109/RO-MAN47096.2020.9223599

Wang, Z., Arndt, A., Singh, S., Biernat, M., & Liu, F. (2013). “You Lost Me at Hello”: How and when accent-based biases are expressed and suppressed. International Journal of Research in Marketing, 30(2), 185–196. https://doi.org/10.1016/j.ijresmar.2012.09.004



