
Chatbots as credible sources and positive assets for companies

An experimental study investigating the effects of social interactivity of virtual agents and complexity of the tasks accomplished on perceived chatbot credibility and attitude toward the company

Melania De Angelis (12234516)
Master's Thesis
Graduate School of Communication
Master's programme Corporate Communication
dr. A.C. (Anne) Kroon
30/01/2020
Word count: 7766

Abstract

Companies are progressively involving technological rather than human contributions when establishing relationships with customers online. For this reason, it is essential to have a better understanding of how specific characteristics of virtual agents (or chatbots) employed in online conversations affect their outcome, and why this is fundamental if the objective is to use chatbots in a strategic and effective way. Both social interactivity and service complexity are key determinants of the overall perception online users have of chatbots' credibility and of their attitude toward the company providing the service. For this reason, the current paper addresses this topic with an online experiment investigating the effects of four different virtual agents - displaying different levels of social interactivity and complexity of the service - on online users' perception of both the chatbot's credibility and the company providing this service. The study shows that when the virtual agent displays high levels of social interactivity, online customers tend to perceive the agent as more credible and to have a more positive attitude toward the company providing the service than when the same agent is characterised by low levels of social interactivity. Additionally, the results show that chatbots offering complex services are perceived as more credible and lead to a more positive attitude toward the company than chatbots offering simple services. However, an interaction effect between levels of social interactivity and service complexity on the same dependent variables was not found in the current research.

Keywords: Chatbots, Social interactivity, Service complexity, Source credibility, Attitude, Company

Introduction

Chatbots are shaping the way in which companies exchange conversations with customers and provide them with information in current online domains. Human voices and actions are being rapidly replaced by these artificial conversational entities, used by corporations to establish relationships with both customers and online users (Edwards et al., 2016). Currently, robots are already largely employed in customer service given their ability to deal with both relatively simple and complex tasks (Wirtz et al., 2018).

For automated conversational agents (or chatbots), the social-emotional dimension plays a crucial role when the objective is to prevent customers' skepticism. In fact, the acceptance of robots does not exclusively depend on the actual functionality of their actions, but also on elements such as social interactivity (Stock & Merkle, 2018). Several studies (Heerink et al., 2010; Stock & Merkle, 2018) showed that when automated conversational agents are perceived as socially appropriate and their messages fit the context in which they are shared, the chatbots are widely accepted by customers.

Another important factor to take into consideration when studying chatbots is service complexity. While simple tasks are considered easy for service robots to execute, services that require more complexity are usually delivered by humans (Mikolon et al., 2015; Wirtz et al., 2018). For this reason, analysing how variation in service complexity affects the overall satisfaction of consumers is deeply relevant for companies interested in employing chatbots as an asset.

Both social interactivity and service complexity are key determinants of the overall perception online users have of source credibility. Source credibility has been identified as one of the most substantial aspects to take into account when analysing the trustworthiness and the quality of the communication provided by companies online (Edwards et al., 2015). Haas and Wearden (2003) made clear that the credibility of information exchanges that happen online also consists of the accuracy, completeness and timeliness of the message, elements that together generate a sense of "believability". It follows that the correct implementation of chatbots in customer service can be guaranteed only when the source - here the virtual agent - is perceived as credible.

Being a strategic asset for companies, chatbots also play a role in influencing the perception consumers have of the company itself (Larivière et al., 2017; Araujo, 2018). For this reason, the level of social interactivity, together with the complexity of the tasks handled by the chatbots, is likely to alter customers' attitude toward the company and their overall perception of it.

This research investigates the use of chatbots for corporate communication purposes and the perception customers have of both this tool and the company employing it in customer service. The aim is to understand whether the perception of source credibility and the attitude toward the company differ across the levels of social interactivity that the robot shows in online conversations. Additionally, this paper aims to determine whether the complexity of the service provided by the virtual agent has an influence on the credibility of the source and the perception of the company. Therefore, the current paper seeks to answer the following research question:

RQ: How do different levels of social interactivity in artificial conversational agents influence online users' perception of source credibility and their attitude toward the company? And how does this differ for different levels of service complexity?

The aforementioned research question will be answered with an online experiment in which the effects of four different virtual agents - displaying different levels of social interactivity and complexity of the service - on online users' perception of the chatbot's credibility and of the company providing the chatbot service are examined. The different artificial conversational agents will be self-developed through the Conversational Agent Research Toolkit (CART), a programming toolkit created specifically for researchers (Araujo, 2018).

Companies are progressively involving technological rather than human contributions when establishing relationships with customers online (Araujo, 2018; Wirtz et al., 2018; Stock & Merkle, 2018). The current study contributes to the existing literature by providing a better understanding of how the characteristics of the chatbot employed in the conversation affect its outcome, and why this is fundamental if the objective is to use chatbots in a strategic and effective way.

Additionally, although several studies took into account the humanness of chatbots as a key determinant characteristic (Araujo, 2018; Westerman, Cross, & Lindmark, 2019; Hill, Randolph Ford, & Farreras, 2015), there is no extensive analysis of the relevance of the social interactivity of the agent as a distinctive characteristic. In fact, while the former is defined by elements such as a human language style and a human name, the latter requires explicit reasoning about dynamic social relations and the expectations of others, such as giving a clear signal of mutual intention (Mohammad & Nishida, 2009). Despite the fact that these two elements both belong to the overall social-emotional dimension (Stock & Merkle, 2018), they are distinct in terms of the concrete features they entail; therefore, the analyses conducted on humanness do not necessarily explain the effect of social interactivity on perceived source credibility. It follows that a distinct examination of this new interaction is deeply needed.

Finally, it is particularly interesting to look at the effects of service complexity for varying levels of social interactivity. Investigating this combination, which is lacking in the existing literature, may reveal undiscovered outcomes with both theoretical and practical implications.


Theoretical framework

Virtual agents in customer service

Chatbots have been defined as software agents with the ability to communicate with online users through natural language conversation (Følstad & Brandtzæg, 2018). Given their nature, virtual agents are seen as an auspicious tool for customer service objectives. Chatbots can offer personalized assistance and accomplish tasks tailored to each specific online client, hence providing services at any time and location (Chung et al., 2018). In the current fast-paced environment, virtual agents can be the key to solving customer problems and can help dictate the success or decline of companies by determining overall satisfaction (Chakrabarty, Widing, & Brown, 2014). In fact, the demand for highly personalized customer interaction typically requires highly skilled customer service personnel, but intelligent automation of the same service will make the personalization efficient while keeping costs down (Chung et al., 2018).

The success of chatbots is also determined by the increasing demand for online messaging chats in customer service. Chats can be seen as a more effective channel for the service provider than support by e-mail and telephone, as customer service personnel may handle multiple requests in parallel and provide a written summary of the interaction (Tezcan & Zhang, 2014). Additionally, Følstad and Brandtzæg (2018) found that efficiency and convenience are the two most frequently reported motivations for using chatbots, followed by user experience, social aspects and a sense of innovation.

Social interactivity

The Role Theory

One of the theoretical foundations upon which social interactivity is built is Role Theory (Solomon et al., 1985). According to Solomon et al. (1985), when two actors engage in a conversation, they both expect each other to act in accordance with a socially defined role congruency. This means that one of the requirements for a successful exchange of information is a certain level of congruency between the perceived and the expected behaviour of both actors. In the context of customer service, mutual coordination between a customer and a company's representative is required. The aspects of social interactivity and congruency are highly relevant both when human conversational agents and when automated conversational agents are employed. However, in the latter circumstance, the difference is that robots are generally considered less socially skilled than humans, therefore social interactivity can be critical when it comes to the perceived credibility of the conversation (Giebelhausen et al., 2014). Solomon et al. (1985) understood the importance of socially interactive approaches in customer service, based on the fact that roles are defined in social contexts and that social interactivity exerts influence on customers' behaviors and impressions. Social interactivity is a co-evolution the agent has both with its environment and with the other agent (Mohammad & Nishida, 2009). Hence, it is particularly essential to investigate how the concept of social interactivity applies to the case of virtual agents.

Chatbots and social interactivity

The perceived social interactivity of robots is one of the socio-emotional elements that affect customers' acceptance of automated conversational agents (Stock & Merkle, 2018). Although part of the same umbrella concept, social interactivity has been conceptualized differently from humanness and social presence. While humanness is defined as the level of anthropomorphism a robot displays, social interactivity is the ability to observe and reproduce accepted social norms, such as actions and emotions appropriate to the context (Breazeal, 2003; Duffy, 2003). What is more, social presence is defined as the extent to which customers feel that they are with another social being (Heerink et al., 2010). Conversely, Pavlik and McIntosh (2004) described social interactivity as multiple exchanges between the source and the user. It follows that active control, two-way communication, and synchronicity will be ensured when these exchanges are positively outstanding (Liu & Shrum, 2002). For this reason, the focus here will be on the social interactivity of chatbots, since their success is often determined by their ability to interact in a social context (Mohammad & Nishida, 2009).

The main dimension upon which social interactivity is built is cognitive empathy (Liu & Sundar, 2018). Cognitive empathy has been defined as a process that involves a complete understanding of others' feelings (Vossen, Piotrowski, & Valkenburg, 2015). Although typically attributed to human beings, when automated conversational agents show the ability to be empathic - therefore comprehending emotions and adjusting the conversation according to them - it can be expected that the agents have a higher probability of being trusted and evaluated as credible. In this regard, Lee et al. (2019) argued that a relationship developed online relies on empathy to form interpersonal trust, in such a way that when a supportive response is provided, interpersonal trust is achieved. This rationale leads to the ensuing hypothesis:

H1a: Artificial conversational agents with high levels of social interactivity will be perceived as more credible than artificial conversational agents with low levels of social interactivity.

Another crucial dimension at the foundation of social interactivity is personalization. When customers perceive a certain service to be customized to their own needs and when a clear receptivity of the message is established, the effects on the impressions they have of the company are notably positive (Vendemia, 2017). It follows that the attitude toward the company making use of chatbots can also be affected by the quality of the social interactivity the conversational agent shows (Larivière et al., 2017). Henceforth, the following reasoning is expected to be confirmed by the current research:


H1b: Artificial conversational agents with high levels of social interactivity will lead to a more positive attitude toward the company than artificial conversational agents with low levels of social interactivity.

Complexity of the service

Service complexity can be defined as "the subjectively perceived difficulty in making sense of a service" (Mikolon et al., 2015, p. 514). The authors categorize service robots by different types of services, such as task type and recipient of the service, emotional-social and cognitive complexity, and physical task functionality and service volume. For this research, emotional-social and cognitive complexity will be particularly taken into account.

As reported by the Embodied Interactive Control Architecture (EICA), when an interaction between two subjects happens, the immediate consequence is a unified process that keeps developing as the interaction continues (Mohammad & Nishida, 2007). This process can turn into either a successful or a conflicting cooperation, which is also determined by the complexity of the message delivered during the interaction. In online environments, complex tasks can be more difficult to manage for customer service activities, since in these situations a vis-à-vis conversation would prevent possible misunderstandings (Haas & Wearden, 2003). It follows that when a complex task is carried out by a robot, possible misinterpretations may arise. For this reason, it is extremely important to investigate the effects of service complexity for different levels of social interactivity.

When the virtual agent shows high levels of social interactivity and the task carried out is complex, the consequent outcome might be a more pronounced perceived source credibility than in the case of chatbots with low levels of social interactivity. The reason behind this is that the chatbot's ability to fit the right context while undertaking a complicated responsibility will give online customers a sense of reliability. Conversely, in circumstances in which the same virtual agent deals with simple tasks, the level of perceived source credibility is expected to be lower than in the previous scenario, because on these occasions the cognitive effort required of the online client is definitely lower. This reasoning leads to the following assumption:

H2a: The positive effect of artificial conversational agents with high levels of social interactivity - compared to low levels of social interactivity - on source credibility will be more pronounced in the context of complex services than simple services.

What is more, the complexity of the service provided by chatbots might also have consequences for the overall perception customers have of the company providing specific services. Recent research (Wirtz et al., 2018) showed that when artificial conversational agents accomplish cognitively complex tasks and manage them positively, the immediate aftermath is a more beneficial perception of the company.

When chatbots are characterised by high levels of social interactivity and are able to accomplish complex tasks, it can be assumed that online customers who interact with them consequently have a more positive attitude toward the company providing the chatbot service than in the case of chatbots with low levels of social interactivity. This can be justified by the fact that when a complex task is successfully carried out by the virtual agent, this reflects on the overall perception consumers have of the company which made the artificial conversational agent available. However, when the chatbot has low levels of social interactivity and carries out simple tasks, the attitude consumers have toward the same company will be less positive than in the previous scenario.


H2b: The positive effect of artificial conversational agents with high levels of social interactivity - compared to low levels of social interactivity - on attitude toward the company will be more pronounced in the context of complex services than simple services.

The relationships between the concepts just discussed are illustrated in Figure 1 below.

Method

Research design

The research question and the hypotheses were tested in an online experiment with a 2 x 2 between-subjects factorial design. The first factor was the social interactivity of the agent, composed of 2 levels (high levels of social interactivity vs. low levels of social interactivity), and the second factor was service complexity, again composed of 2 levels (complex service vs. simple service), as described in Table 1 below. Thanks to this study design, each participant was randomly assigned to only one of the four conditions. The main advantage of this design is that participants were not influenced by other conditions, since they were exposed to only one of them. In addition, being randomly assigned to a condition, all participants had the same likelihood of being allocated to a condition, and this ensured a high level of internal validity.

Table 1. Experimental conditions (level of social interactivity x complexity of the service)

Complexity of the service    High levels of social interactivity            Low levels of social interactivity
Simple                       Simple task with high social interactivity     Simple task with low social interactivity
Complex                      Complex task with high social interactivity    Complex task with low social interactivity
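As an illustration of the random assignment just described, the sketch below draws one of the four cells of the 2 x 2 design for each participant, giving every cell the same probability of being selected. This is a hypothetical Python example for clarity only; the actual assignment was handled by the experiment platform, and the condition labels used here are assumptions.

```python
import random

# The four cells of the 2 x 2 between-subjects design
# (social interactivity x service complexity); labels are illustrative.
CONDITIONS = [
    ("high_interactivity", "simple_task"),
    ("high_interactivity", "complex_task"),
    ("low_interactivity", "simple_task"),
    ("low_interactivity", "complex_task"),
]

def assign_condition() -> tuple:
    """Randomly assign a participant to exactly one condition,
    giving every cell the same probability of being selected."""
    return random.choice(CONDITIONS)

if __name__ == "__main__":
    # Example: simulate the assignment of 164 participants.
    assignments = [assign_condition() for _ in range(164)]
    print(assignments[:5])
```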

Sample

A non-probability sampling technique was employed for the current research, more specifically a convenience sampling technique. Participants were recruited online, where they received a website link connected to the experiment in two different ways: either they were contacted privately with the link to the online survey (e.g. via WhatsApp) or the researcher publicly shared the link on her personal social media feeds (e.g. via Facebook, LinkedIn or Instagram). The first requirement for participants was being at least 18 years old. Then, it was necessary for subjects to have an account on at least one of the aforementioned platforms (Facebook, WhatsApp etc.), since the experiment was shared through these channels. Finally, participants had to have a sufficient level of English to complete the experiment and, to verify this, they were asked during the experiment whether they believed their English proficiency was adequate to understand every section of it. In case of negative responses, these contributions were not considered when conducting the analysis of the results.

A total of 187 persons began the questionnaire, but the final sample consisted of 164 participants (N = 164), since 23 participants were removed from the dataset because they stopped completing the survey before the end. This convenience sample comprised 98 female (58.8%), 62 male (37.8%), 3 non-binary (1.8%) and 1 other (0.6%) participants. Participants' age was clustered in 4 groups: 18-30, 31-50, 51-60, and above 61. 126 participants were in the first group (76.8%), 24 respondents in the second group (14.6%), 13 respondents in the third group (7.9%), and finally 1 participant in the fourth group (0.6%). Furthermore, 67 participants identified a Bachelor's degree as their highest level of education (40.9%), 44 participants identified a high school diploma (26.8%), 37 respondents answered with a Master's degree (22.6%), 14 respondents had a vocational degree or college (8.5%), 1 respondent was below a high school degree (0.6%) and 1 respondent had a PhD (0.6%). Finally, among the final 164 participants, 67 were Italian (40.9%), 15 were Dutch (9.1%), 14 were Spanish (8.5%), 11 were British (6.7%), and the remaining 57 participants came from different parts of Europe, Africa, the Americas and Asia.

Randomization check

To check whether participants' age (0 = 18-30, 1 = 31-50, 2 = 51-60, 3 = above 61) was comparable over the four conditions (high vs. low social interactivity and simple vs. complex task), a Chi-square analysis was conducted. The variable containing the four conditions was the independent variable and age the dependent variable. The results revealed a non-significant association between the four conditions and the age of participants, χ² (9, N = 164) = 7.27, p = .609, τ = .01. In addition to this, a Chi-square analysis was also performed with the four conditions as independent variable and gender (0 = male, 1 = female, 2 = non-binary, 3 = other) as dependent variable. The results showed a non-significant association between the four conditions and the gender of participants, χ² (9, N = 164) = 7.34, p = .602, τ = .02. We can conclude from this that participants were evenly distributed across conditions in terms of age and gender. Therefore, it was not necessary to control for age and gender, and they were not added as covariates in the analyses testing the hypotheses.
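A randomization check of this kind can be reproduced with a chi-square test of independence. The sketch below uses scipy on a small, made-up data frame; the column names and values are purely illustrative and not the thesis data.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical participant-level data: assigned condition (0-3) and
# age group (0 = 18-30, 1 = 31-50, 2 = 51-60, 3 = above 61).
df = pd.DataFrame({
    "condition": [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3],
    "age_group": [0, 0, 1, 2, 0, 3, 1, 0, 2, 1, 0, 0],
})

# Cross-tabulate condition by age group and test for an association;
# a non-significant result suggests an even distribution across conditions.
table = pd.crosstab(df["condition"], df["age_group"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```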

Procedure

The current research was conducted between the 10th and the 27th of December 2019 using the online survey platform Qualtrics. The survey consisted of various parts, and a full copy of it can be found in Appendix C. In the first section, participants were asked to agree to an informed consent form to start the survey, in which it was underlined that the anonymity of every participant was ensured. After asking whether participants were at least 18 years old, a short introduction to the chatbot and the company providing the service was given. Participants were then randomly exposed to one of the four conditions and asked to start a conversation with the virtual agent. Once the conversation ended, participants were asked to enter the code the chatbot gave them in order to proceed with the online survey, ensuring that each participant completed the conversation with the agent. Subsequently, various questions were asked to check the manipulation of both the complexity of the service and the social interactivity of the agent. After that, questions related to the perceived credibility of the virtual agent were asked, followed by questions regarding the attitude toward the company providing the chatbot service. Finally, participants were asked to state whether they thought their level of English was good enough to complete and comprehend the survey, followed by some socio-demographic questions such as gender, nationality, age and education level. A final message explained to participants that both the virtual agent and the company that provided the service were created for the purpose of the study.

Stimulus material

The stimulus material of the current study consisted of an online chat to which participants were exposed. The chat was programmed through the Conversational Agent Research Toolkit (CART), and participants were asked to have a conversation with a virtual agent (Araujo, 2018). The aforementioned tool enables researchers to use Application Programming Interfaces (APIs) to create and manage various versions of a chatbot that can be used as stimuli in experiments. CART provides a toolkit written in Python, allowing the randomization of experimental conditions across participants and storing all the interactions each participant had with the chatbot. Additionally, CART enables integration with online survey platforms (here Qualtrics), so that the interaction with the chatbot can be combined with the self-reported measures. Participants were able to converse with the virtual agent without leaving the experiment environment, and therefore did not need to be redirected. A visual representation of the design of the chat embedded in Qualtrics is shown in Figure 2.

Social interactivity

Two types of conversational agents were created in this setting: an agent displaying high levels of social interactivity and an agent displaying low levels of social interactivity. In order to appear socially interactive, a robot needs to show its ability to apply social models when interacting with humans, for instance by demonstrating that it understands social norms and adapts its responses to the specific context in which the conversation takes place (Wirtz et al., 2018). For the purpose of this study, we define high levels of social interactivity as a type of interaction in which the chatbot displays an empathetic behavior mirroring customers' requests, with a high fit with the context and the ability to personalize the message to customers' specific needs. Conversely, low levels of social interactivity are characterised by a lack of emotions shown by the robot and its limited ability to mirror human behavior.

The first virtual agent was characterised by a great capability of fitting into the context in which the conversation took place, being more empathic and elaborating more articulated sentences when conversing with the user. An example of a reply from this type of agent is the following: "Great! How much do you want to spend for this trip? We love giving you the best option at your circumstances!". Conversely, the second type of agent was defined by low levels of social skills, replying to customers' questions in a more direct and less elaborate way, and showing no empathy in its replies. This is an example of a sentence from a virtual agent with low levels of social interactivity: "Tell me your budget".

The manipulation of the levels of social interactivity of the conversational agent is described in more detail in Appendix A, in which - together with the complexity of the service - all four conditions are explained.

Service complexity

The complexity of the service was the second independent variable of this study and was included as a moderator of the relationship between levels of social interactivity and source credibility. Participants were exposed either to a conversation in which a complex service was required or to a conversation in which a simple service was involved.

Two different services were chosen for this experiment: a pizza delivery service and a travel advice service. The former was part of the simple service condition: participants exposed to this condition were asked to simulate a conversation in which they ordered the pizza they wanted from the virtual agent and asked for it to be delivered to a specific address within a specific timeframe. This service was defined as simple given its mechanical nature. In fact, ordering a pizza does not necessarily require high levels of personalization or empathy. Also, given the relatively low economic effort required to order some items from the pizza menu, participants were expected to perceive this specific task as simple.

The latter was part of the complex service condition: participants exposed to it had to simulate a conversation in which they asked the virtual agent for travel advice. In particular, the virtual agent was able to suggest different destinations based on the subject's desired vacation: when they wanted to travel, with how many travellers, and what the budget was. In this condition, the service was defined as complex since it required high levels of personalization. In fact, customizing the chatbot's answers according to participants' requests generally demands strong cognitive effort, therefore leading to a more complex conversation.

Manipulation check

The effectiveness of the manipulations of the two independent variables was assessed beforehand through a pilot test. This test used a smaller sample (N = 25) than the final experiment and was useful to detect potential problems that could have had a negative impact on the final results. The pilot test had a within-subjects design, in which all participants were exposed to all four conditions of the study.

Manipulation of social interactivity

The effectiveness of the manipulation for the social interactivity variable was assessed by asking participants in three separate questions to what extent they believed the virtual agent was socially interactive, good at interacting and showed empathy in the conversation. They were invited to answer on a 7-point Likert scale (1 = strongly disagree; 7 = strongly agree) for all the questions.


This variable was computed before the manipulation could be checked. An exploratory factor analysis with Varimax rotation over all three items indicated that the scale was unidimensional, because only one component had an eigenvalue above 1, namely 2.72, which explained 90.59% of the variance. The 3-item scale also proved to be reliable, with a Cronbach's alpha of .94. The total score of 'perceived social interactivity of the chatbot' was computed by taking the mean across the three items (M = 4.93, SD = 1.60).
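For completeness, a minimal sketch of how such a unidimensionality and reliability check can be computed is shown below. It uses plain numpy rather than a dedicated statistics package, and the simulated item responses are only stand-ins for the real manipulation-check data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an item matrix (rows = respondents, columns = items)."""
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated responses to the three manipulation-check items (1-7 scale);
# purely illustrative stand-ins for the pilot data.
rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=(25, 1))
items = np.clip(base + rng.integers(-1, 2, size=(25, 3)), 1, 7).astype(float)

# Eigenvalues of the inter-item correlation matrix: a single eigenvalue
# above 1 suggests a unidimensional scale, as reported in the text.
eigenvalues = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
print("eigenvalues:", np.round(eigenvalues, 2))
print("Cronbach's alpha:", round(cronbach_alpha(items), 2))

# Scale score per respondent: the mean across the three items.
print("first scale scores:", np.round(items.mean(axis=1)[:5], 2))
```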

A repeated measures analysis was conducted to test the effectiveness of the manipulation of social interactivity. Participants reported overall higher levels of social interactivity for the agents with high social interactivity, both in the simple task (M = 5.45, SD = 1.00) and in the complex task (M = 5.41, SD = 1.13), than for the agents with low social interactivity, both in the simple task (M = 2.71, SD = 1.15) and in the complex task (M = 2.72, SD = 1.34), F(3, 72) = 58.92, p < .05, ηp² = .711.

Manipulation of service complexity

To verify whether the manipulation of the service complexity variable was successful, participants were asked whether they perceived the service provided in the chat as cognitively complex, and to what extent they found the task accomplished by the agent difficult. Here as well, participants expressed their opinion on a 7-point Likert scale (1 = strongly disagree; 7 = strongly agree).

This variable was computed before the manipulation could be checked. An exploratory factor analysis with Varimax rotation over the two items indicated that the scale was unidimensional, because only one component had an eigenvalue above 1, namely 1.99, which explained 99.28% of the variance. The 2-item scale also proved to be reliable, with a Cronbach's alpha of .99. The total score of 'perceived complexity of the service' was computed by taking the mean across the two items (M = 4.38, SD = 2.06).


A repeated measures analysis was conducted to test the effectiveness of the manipulation of service complexity. Participants reported overall higher levels of complexity for the task carried out in the complex condition, both with high social interactivity (M = 5.22, SD = 1.55) and with low social interactivity (M = 5.15, SD = 1.33), than for the task in the simple condition, both with high social interactivity (M = 2.32, SD = 1.11) and with low social interactivity (M = 2.20, SD = .98), F(3, 72) = 41.29, p < .05, ηp² = .632. Based on these findings, we can conclude that the manipulation of both factors was successful.
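The repeated measures test on the pilot data can be sketched with statsmodels' AnovaRM, which expects long-format data with one row per participant per condition. The data below are simulated and the column names are assumptions; the sketch only illustrates the shape of the analysis (one within-subjects factor with four levels, yielding F(3, 72) for 25 participants).

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format pilot data: 25 participants, each rating all
# four conditions (within-subjects), as in the pilot test.
rng = np.random.default_rng(3)
conditions = ["hi_simple", "hi_complex", "lo_simple", "lo_complex"]
data = pd.DataFrame({
    "subject": np.repeat(np.arange(25), 4),
    "condition": conditions * 25,
    "rating": rng.normal(4.0, 1.2, size=100),
})

# One within-subjects factor with four levels.
result = AnovaRM(data, depvar="rating", subject="subject",
                 within=["condition"]).fit()
print(result)
```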

Dependent variables

Source credibility

The credibility of the source was measured following McCroskey and Teven (1999), who assess the individual perception of credibility through three dimensions: competence (6 items; e.g., "intelligent/unintelligent"), trustworthiness (6 items; e.g., "trustworthy/untrustworthy") and caring (6 items; e.g., "cares about me/doesn't care about me"). All the dimensions and their respective items were measured using a 7-point Likert scale (1 = strongly disagree; 7 = strongly agree). Table 2 summarizes the operationalization of this dependent variable, and a full report of the scale measurement can be found in Appendix B.

Table 2. Source credibility operationalization

Dimension          Scale type             Reference                  Items
Competence         7-point Likert scale   McCroskey & Teven, 1999    6 items
Trustworthiness    7-point Likert scale   McCroskey & Teven, 1999    6 items
Caring             7-point Likert scale   McCroskey & Teven, 1999    6 items

Before the hypotheses could be tested and the variable computed, 6 items were reverse coded. In the end, the 18-item scale proved to be reliable, with a Cronbach's alpha of .95. The total score of 'source credibility' was computed by taking the mean across the eighteen items (M = 4.59, SD = 1.15).
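The reverse coding and averaging described here amount to a simple transformation on a 7-point scale (a score x becomes 8 - x) followed by a row-wise mean. The sketch below shows this with pandas; the column names and the particular items treated as reverse coded are hypothetical.

```python
import pandas as pd

# Hypothetical responses: 18 source-credibility items on a 7-point scale.
df = pd.DataFrame({f"cred_{i}": [2, 5, 7, 4] for i in range(1, 19)})

# Items assumed to be negatively worded; on a 1-7 scale they are
# reversed with the transformation x -> 8 - x.
reverse_items = ["cred_3", "cred_6", "cred_9", "cred_12", "cred_15", "cred_18"]
df[reverse_items] = 8 - df[reverse_items]

# Composite score: mean across all 18 items per respondent.
df["source_credibility"] = df[[f"cred_{i}" for i in range(1, 19)]].mean(axis=1)
print(df["source_credibility"])
```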

Attitude toward the company

The second outcome of the current research was measured with a scale adopted from Becker-Olsen (2003), asking participants to rate their attitude toward the company on a 7-point Likert scale (1 = strongly disagree; 7 = strongly agree) across 9 items such as negative/positive, unfavorable/favorable, bad/good, and satisfactory/unsatisfactory.

This variable also needed to be computed before the hypotheses could be tested, and 2 items were reverse coded. The 9-item scale also proved to be reliable, with a Cronbach's alpha of .94. The total score of 'attitude toward the company' was computed by taking the mean across the nine items (M = 4.66, SD = 1.13).

Analyses

A two-way ANOVA was conducted to test hypotheses H1a and H2a. The former (H1a) included the dichotomous variable levels of social interactivity (high vs. low) as independent variable and the continuous variable source credibility (1 = strongly disagree; 7 = strongly agree) as dependent variable, while the latter (H2a) added the dichotomous variable complexity of the service (simple vs. complex) as an additional independent variable. By conducting this analysis, it was possible to investigate not only the main effects of both factors, but also whether these effects were moderated by the complexity of the service.

What is more, a second two-way ANOVA was conducted to test hypotheses H1b and H2b. In this scenario, the former (H1b) consisted of the dichotomous variable levels of social interactivity as independent variable and the continuous variable attitude toward the company (1 = strongly disagree; 7 = strongly agree) as dependent variable, while the latter (H2b) added the dichotomous variable complexity of the service as an additional independent variable. With this statistical test, the effect of different levels of social interactivity on attitude toward the company was investigated, together with an analysis of what happens when this effect is moderated by the complexity of the service.
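A factorial analysis of this kind can be reproduced with statsmodels: an OLS model with both factors and their interaction, followed by an ANOVA table. The sketch below runs on simulated data with assumed column names and is not the analysis script used for the thesis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "interactivity": np.repeat(["high", "low"], 82),
    "complexity": np.tile(["simple", "complex"], 82),
    "credibility": rng.normal(4.6, 1.1, size=164),
})

# Factorial model with both main effects and the interaction term.
model = smf.ols("credibility ~ C(interactivity) * C(complexity)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)
```

The same model with attitude toward the company as the outcome covers the second pair of hypotheses; a significant interaction row would indicate the moderation predicted in H2a and H2b.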

Results

Social interactivity on source credibility (H1a)

In order to test the hypothesis stating that "Artificial conversational agents with high levels of social interactivity will be perceived as more credible than artificial conversational agents with low levels of social interactivity" (H1a), a two-way ANOVA was conducted to examine whether participants exposed to a virtual agent characterised by high levels of social interactivity scored differently on source credibility than participants exposed to a virtual agent with low levels of social interactivity. The analysis of variance showed a significant weak main effect of levels of social interactivity on source credibility, F(1, 163) = 47.47, p < .01, η² = .21. Participants who were exposed to high levels of social interactivity scored higher on source credibility (M = 5.12, SD = .11) than participants who were exposed to low levels of social interactivity (M = 4.07, SD = .11). Levels of social interactivity explained 21% of the variance in source credibility. Hence, H1a is accepted.

Social interactivity on attitude toward the company (H1b)

The hypothesis stating that "Artificial conversational agents with high levels of social interactivity will lead to a more positive attitude toward the company than artificial conversational agents with low levels of social interactivity" (H1b) was tested through a two-way ANOVA, examining whether participants exposed to a virtual agent characterised by high levels of social interactivity scored differently on attitude toward the company than participants exposed to a virtual agent with low levels of social interactivity. The analysis of variance showed a significant weak main effect of levels of social interactivity on attitude toward the company, F(1, 163) = 41.31, p < .01, η² = .19. Participants who were exposed to high levels of social interactivity scored higher on attitude toward the company (M = 5.17, SD = .93) than participants exposed to low levels of social interactivity (M = 4.17, SD = 1.09). Levels of social interactivity explained 19% of the variance in attitude toward the company. Therefore, H1b is accepted.

Complexity of the service and social interactivity on source credibility (H2a)

In order to test the hypothesis stating that "The positive effect of artificial conversational agents with high levels of social interactivity - compared to low levels of social interactivity - on source credibility will be more pronounced in the context of complex services than simple services" (H2a), a two-way ANOVA was conducted to examine the effect of the complexity of the service provided by a virtual agent, in addition to its level of social interactivity, on source credibility. The analysis of variance showed a significant weak main effect of complexity of the service on source credibility, F(1, 163) = 18.14, p < .01, η² = .08. Participants who were exposed to the simple service condition scored lower on source credibility (M = 4.25, SD = 1.17) than participants who were exposed to the complex service condition (M = 4.93, SD = 1.03). Complexity of the service explained 8% of the variance in source credibility.

Additionally, the analysis revealed a non-significant interaction effect of complexity of the service and social interactivity on source credibility, F(1, 163) = 1.32, p = .252. Therefore, it was possible to conclude that the effect of social interactivity on source credibility did not depend on the complexity of the service. Hence, H2a is rejected.


Complexity of the service and social interactivity on attitude toward the company (H2b)

To test the hypothesis stating that "The positive effect of artificial conversational agents with high levels of social interactivity - compared to low levels of social interactivity - on attitude toward the company will be more pronounced in the context of complex services than simple services" (H2b), a two-way ANOVA was conducted to examine the effect of the complexity of the service provided by a virtual agent, in addition to its level of social interactivity, on attitude toward the company. The analysis of variance showed a significant weak main effect of complexity of the service on attitude toward the company, F(1, 163) = 14.67, p < .01, η² = .07. Participants who were exposed to the simple service condition scored lower on attitude toward the company (M = 4.35, SD = 1.11) than participants who were exposed to the complex service condition (M = 4.97, SD = 1.07). Complexity of the service explained 7% of the variance in attitude toward the company.

Additionally, the analysis revealed a non-significant interaction effect of complexity of the service and social interactivity on attitude toward the company, F(1, 163) = .63, p = .430. In this scenario, the effect of social interactivity on attitude toward the company did not depend on the complexity of the service. Hence, H2b is rejected.¹

¹ In order to further check the robustness of the study, a Multivariate Analysis of Variance (MANOVA) was conducted with levels of social interactivity and complexity of the task as independent variables and source credibility and attitude toward the company as dependent variables. This analysis confirmed what the two separate two-way ANOVAs revealed, bringing additional strength to the results found.
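The robustness check reported in the footnote treats the two outcomes jointly. A minimal sketch of such a multivariate test with statsmodels is given below, again on simulated data with assumed column names rather than the original dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data: both dependent variables analysed jointly.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "interactivity": np.repeat(["high", "low"], 82),
    "complexity": np.tile(["simple", "complex"], 82),
    "credibility": rng.normal(4.6, 1.1, size=164),
    "attitude": rng.normal(4.7, 1.1, size=164),
})

# Multivariate test of the main effects and their interaction.
manova = MANOVA.from_formula(
    "credibility + attitude ~ C(interactivity) * C(complexity)", data=df
)
print(manova.mv_test())
```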

Conclusion

The overall goal of this study was to find out how different levels of social interactivity displayed by artificial conversational agents influence online users' perception of source credibility and their attitude toward the company providing the chatbot service. The research was also particularly focused on discovering how the aforementioned relationship differs for different levels of service complexity. This study built on what previous research has highlighted about the importance of social interactivity and complexity of the service in the context of chatbots utilized in customer service (Breazeal, 2003; Duffy, 2003; Haas & Wearden, 2003). In fact, the findings of this research are in line with the assumption that social interactivity exerts influence on customers' behaviors and impressions (Solomon et al., 1985), and that complex tasks - when positively managed - are linked to a more beneficial perception of both the company and the virtual agent (Wirtz et al., 2018).

The current study showed that when a virtual agent displays high levels of social interactivity in an online customer service chat, online customers perceive the agent as more credible than when the same agent shows low levels of social interactivity. Based on this result, we can conclude that the study is in line with previous research stating that the perceived social interactivity of robots is a crucial element affecting customers' acceptance of automated conversational agents (Mohammad & Nishida, 2009; Stock & Merkle, 2018).

Additionally, the research showed that virtual agents displaying high levels of social interactivity also have a more positive effect on customers' attitude toward the company providing the chatbot service than virtual agents showing low levels of social interactivity. This result strengthens what existing research has already shown. In fact, according to Larivière et al. (2017), when a company makes chatbots available as a customer service tool and these agents show strong abilities to interact in a social context, the immediate consequence is a stronger and more positive attitude toward the company than when the conversational agents display lower levels of social interactivity. Given the first two results of this research, we can conclude that the agent's level of social interactivity is a crucial aspect to consider when building a virtual conversational agent. This aspect is essential for both the source of information itself (here, the chatbot) and the company that makes this service available.


What is more, the current research also showed that the complexity of the service positively affects the credibility of the source: when the service offered by the virtual conversational agent is complex, the source is perceived by online clients as more credible than when the offered service is simple. However, the hypothesised interaction between the complexity of the service and levels of social interactivity on source credibility was not confirmed in the current research. It follows that, although the complexity of the service proved highly relevant for chatbots' credibility, the interplay between the two aforementioned variables did not have a direct repercussion on source credibility. This analysis does not confirm our initial assumption that a chatbot's ability to fit the social context while providing a complicated service would ensure a sense of reliability to online customers. This result may be explained by the fact that social interactivity is an important aspect for both complex and simple services when it comes to the perceived source credibility of the virtual agent, and that making a distinction between complex and simple services in this context does not necessarily lead to different outcomes in terms of source credibility.

Finally, the last analysis showed that when the chatbot offers a complex service, customers have a more positive attitude toward the company providing the chatbot service than when the same virtual agent offers a simple service. Nevertheless, the interaction effect of complexity of the service and levels of social interactivity on attitude toward the company was not statistically confirmed. This means that the combination of the complexity of the task carried out by the virtual agent and its level of social interactivity does not necessarily affect the overall perception consumers have of the company that made the artificial conversational agent available.

We can conclude that our initial research question - "How do different levels of social interactivity in artificial conversational agents influence online users' perception of source credibility and their attitude toward the company? And how does this differ for different levels of service complexity?" - has been answered and analysed in depth: while levels of social interactivity were confirmed to positively affect both source credibility and attitude toward the company, an interaction between this variable and the complexity of the service provided, on source credibility and attitude toward the company, was not confirmed.

Discussion

The current study brings several contributions to both theoretical and empirical academic research, but it does not come without limitations. In the following paragraphs, I will first discuss some practical implications the study adds to existing research, followed by several limitations that future researchers might want to take into consideration.

This online experiment underlines the importance of thoroughly constructed artificial conversational agents in the context of customer service. Specific characteristics, such as the social interactivity the chatbots display and the type of service they provide, can determine the overall perception online consumers have of both the company and the virtual agent. In fact, companies utilizing these technological contributions when establishing relationships with customers online must avoid the risk of jeopardizing their reputation and carefully build these agents by selecting the right characteristics (Wirtz et al., 2018; Stock & Merkle, 2018).

Furthermore, the study contributes to existing research by emphasizing the relevance of service complexity. While some services might be perfectly suitable for this type of virtual interaction between chatbot and customers, others might not be perceived in the same way (Haas & Wearden, 2003). For this reason, future research should carefully evaluate the importance of this factor in the context of virtual conversational agents.


In terms of internal validity, the current research was built on the strength of an experimental design, in which the results can be attributed to the way the study was constructed (for instance, thanks to the manipulation of the independent variables). In terms of external validity, although the results of this study cannot be generalized beyond the actual sample used in this research, Mullinix et al. (2015) showed that convenience online samples are still considered somewhat representative of the entire population and that they have considerable similarities with population-based samples.

One of the limitations that might have undermined the results of this study is the choice of the two services. The two services in the simple and complex conditions were provided by completely different companies operating in different sectors (one in the food industry, the other in the travel industry). It follows that these industries might have different levels of perceived credibility per se, independently of the specific customer service provided through the chatbot, so participants' answers might have been influenced differently by this element. The suggestion to future researchers interested in this area is to find services that differ in terms of complexity yet belong to the same industry.

The study also showed that, although the complexity of the service does not moderate the relationship between levels of social interactivity and perceived source credibility, it does have a significant direct effect on source credibility itself. This interesting result can be investigated in more depth by future researchers, for instance by adopting a mediation research design instead of a moderation one, and analysing the effect that complexity of the task has individually on both levels of social interactivity and source credibility.

Another limitation that could have determined the lack of statistical significance for some of the hypotheses is the sample size. Our final sample comprised 164 participants and the study was composed of four different conditions, therefore there were fewer than 50 participants per condition, and this might have limited the ability to confirm some of the predicted effects - for instance, the interaction effect between social interactivity and complexity of the service. Additionally, out of 164 participants 40% were Italian and the survey was in English, which means that the majority of participants were not native speakers and that some language misunderstandings could have biased participants' answers. Future researchers might want to consider reaching a bigger sample size and addressing native English speakers only, or giving participants the opportunity to select their preferred language.

Additionally, participants had the opportunity to chat with the artificial agent without being redirected to an external website. This setup kept the dropout rate very low, since participants remained in the same setting from the beginning to the end of the online experiment. However, since the chat was not embedded in a company website, the level of realism of the conversations might not have been sufficient to give participants the impression that the chats were authentic. The suggestion to researchers interested in investigating this topic further is to keep the level of realism as high as possible, for instance by embedding not only the chat but an entire corporate website in the online experiment.

To conclude, one of the elements that makes this research unique is the focus on the social interactivity of the virtual agent instead of its humanness. Although both are part of the socio-emotional dimension that the chatbot displays during a conversation, they differ in several aspects and therefore require distinct investigation. Additionally, the current paper provides a starting point for exploring the interaction between levels of social interactivity and complexity of the service. The lack of this combination in existing literature contributes to generating intellectual discussion in the academic arena of corporate communication.

References

Araujo, T. (2018). Living up the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183-189. https://doi.org/10.1016/j.chb.2018.03.051

Becker-Olsen, K. L. (2003). And now, a word from our sponsor - a look at the effects of sponsored content and Banner advertising. Journal of Advertising, 32(2), 17-32. https://doi.org/10.1080/00913367.2003.10639130

Breazeal, C. (2003). Towards sociable robots. Robotics and Autonomous Systems, 42(3), 167-175.

Bruner, G. (2013). Marketing scales handbook: Multi-item measures for consumer insight research, Volume 7 (Library version). Fort Worth, Texas: GCBII Productions.

Chakrabarty, S., Widing, R. E., & Brown, R. E. (2014). Selling behaviours and sales performance: The moderating and mediating effects of interpersonal mentalizing. Journal of Personal Selling & Sales Management, 34(2), 112-122.

Chung, M., Ko, E., Joung, H., & Kim, S. (2018). Chatbot e-service and customer satisfaction regarding luxury brands. Journal of Business Research. https://doi.org/10.1016/j.jbusres.2018.10.004

Duffy, B.R. (2003). Anthropomorphism and the social robot. Robotics and Autonomous Systems, 42(3), 177-190.

Edwards, C., Beattie, A. J., Edwards, A., & Spence, P. R. (2016). Differences in perceptions of communication quality between a Twitterbot and human agent for information seeking and learning. Computers in Human Behavior, 65, 627-634.


Edwards, A., Edwards, C., Spence, P. R., & Shelton, A. K. (2014). Is that a bot running the social media feed? Testing the differences in perception of communication quality for human agent and bot agent on Twitter. Computers in Human Behavior, 33, 372-376. https://doi.org/10.1016/j.chb.2013.08.013

Følstad, A., & Brandtzæg, P. B., (2017). Chatbots and the new world of HCI. Interactions, 24(4), 38-42. DOI: 10.1145/3085558

Giebelhausen, M., Robinson, S., Sirianni, N., & Brady, M. (2014). Touch versus tech when technology functions as a barrier or a benefit to service encounters. Journal of Marketing, 78(4), 113–124. https://doi.org/10.1509/jm.13.0056

Haas, C., & Wearden, S. (2003). E-credibility: Building Common Ground in Web Environments. L1-Educational Studies in Language and Literature, 3(1), 169–184. https://doi.org/10.1023/A:1024557422109

Heerink, M., Kröse, B., Evers, V., & Wielinga, B. (2010). Assessing acceptance of assistive social agent technology by older adults: The Almere Model. International Journal of Social Robotics, 2(4), 361-375.

Hill, J., Randolph Ford, W., & Farreras, I. (2015). Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Computers in Human Behavior, 49, 245–250. https://doi.org/10.1016/j.chb.2015.02.026

Holbrook, M. B., Chestnut, R. W., Oliva, T. A., & Greenleaf, E. A. (1984). Play as a Consumption Experience: The Roles of Emotions, Performance, and Personality in the Enjoyment of Games. Journal of Consumer Research, 11(2), 728-739.

Larivière, B., Bowen, D., Andreassen, T. W., Kunz, W., Sirianni, N. J., Voss, C., et al. (2017). "Service Encounter 2.0": An investigation into the roles of technology, employees and customers. Journal of Business Research, 79, 238-246. https://doi.org/10.1016/j.jbusres.2017.03.008

Lee, Y., Ha, M., Kwon, S., Shim, Y., & Kim, J. (2019). Egoistic and altruistic motivation: How to induce users’ willingness to help for imperfect AI. Computers in Human Behavior, 101, 180–196. https://doi.org/10.1016/j.chb.2019.06.009

Liu, B., & Sundar, S. (2018). Should Machines Express Sympathy and Empathy? Experiments with a Health Advice Chatbot. Cyberpsychology, Behavior, and Social Networking, 21(10), 625–636. https://doi.org/10.1089/cyber.2018.0110

Liu, Y., & Shrum, L. (2002). What is Interactivity and is it Always Such a Good Thing? Implications of Definition, Person, and Situation for the Influence of Interactivity on Advertising Effectiveness. Journal of Advertising, 31(4), 53–64. https://doi.org/10.1080/00913367.2002.10673685

McCroskey, J. C., & Teven, J. J. (1999). Goodwill: A reexamination of the construct and its measurement. Communication Monographs, 66, 90–103.

Mikolon, S., Kolberg, A., Haumann, T., & Wieseke, J. (2015). The Complex Role of Complexity: How Service Providers Can Mitigate Negative Effects of Perceived Service Complexity When Selling Professional Services. Journal of Service Research, 18(4), 513-528. https://doi.org/10.1177/1094670514568778

Mohammad, Y., & Nishida, T. (2009). Toward combining autonomy and interactivity for social robots. AI & SOCIETY, 24(1), 35–49. https://doi.org/10.1007/s00146-009-0196-3

Mullinix, K., Leeper, T., Druckman, J., & Freese, J. (2015). The Generalizability of Survey Experiments. Journal of Experimental Political Science, 2(2), 109–138.


Pavlik, J., & McIntosh, S. (2004). Converging Media: An Introduction to Mass Communication. Boston, MA: Allyn & Bacon.

Solomon, M., Surprenant, C., Czepiel, J., & Gutman, E. (1985). A Role Theory Perspective on Dyadic Interactions: The Service Encounter. Journal of Marketing, 49(1), 99–111. https://doi.org/10.2307/1251180

Stock, R. M., & Merkle, M. (2018). Can humanoid service robots perform better than service employees? A comparison of innovative behavior cues. Proceedings of the 51st Hawaii International Conference on System Sciences, Waikoloa Village, HI, January 3-6. Retrieved from https://scholarspace.manoa.hawaii.edu/bitstream/10125/50020/1/paper0133.pdf (accessed September 9, 2019).

Tezcan, T., & Zhang, J. (2014). Routing and staffing in customer service chat systems with impatient customers. Operations Research, 62(4), 943-956. https://doi.org/10.1287/opre.2014.1284

Vendemia, M. A. (2017). When do consumers buy the company? Perceptions of interactivity in company-consumer interactions on social networking sites. Computers in Human Behavior, 71, 99-109. https://doi.org/10.1016/j.chb.2017.01.046

Vossen, H.G., Piotrowski, J.T., & Valkenburg, P.M. (2015). Development of the adolescent measure of empathy and sympathy (AMES). Personality and Individual Differences, 74, 66–71.

Westerman, D., Cross, A., & Lindmark, P. (2019). I Believe in a Thing Called Bot: Perceptions of the Humanness of "Chatbots". Communication Studies, 70(3), 295–312. https://doi.org/10.1080/10510974.2018.1557233


Wirtz, J., Patterson, P., Kunz, W., Gruber, T., Nhat Lu, V., Paluch, S., & Martins, A. (2018). Brave new world: Service robots in the frontline. Journal of Service Management, 29(5), 907-931. https://doi.org/10.1108/JOSM-04-2018-0119


Appendix A: Manipulations

Figure 3 – Condition 1 (High social interactivity, Simple task)


Figure 5 – Condition 3 (High social interactivity, Complex task)


Appendix B: Scales measurement

Source credibility (McCroskey & Teven, 1999)

Competence

1. I found the agent in the chat intelligent
2. I think the agent in the chat was trained
3. I had the impression that the agent was not expert
4. I think the agent was informed
5. I had the impression that the agent was competent
6. I think the agent was stupid

Trustworthiness

1. I think the agent in the chat was trustworthy
2. I found the agent unethical
3. I found the agent genuine
4. I had the impression that the agent was immoral
5. I would define the agent as honourable
6. I think the agent was honest

Caring

1. I had the impression that the agent in the chat cared about me
2. I had the impression that the agent was self-centred
3. I think that the agent had my interests at heart
4. I had the impression that the agent was concerned with me
5. I think the agent was not understanding
6. I believe the agent was insensitive
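For illustration, the following minimal Python sketch shows how responses on the items above could be combined into the three credibility dimensions (competence, trustworthiness, caring). The column names (comp_1 … care_6) and the marking of negatively worded items as reverse-coded ("_r") are assumptions made for this example only and do not reflect the actual variable names used in the analysis.

# Minimal sketch (Python/pandas): scoring the McCroskey & Teven (1999) credibility
# dimensions from 7-point Likert responses. Column names and the set of
# reverse-coded items (suffix "_r") are illustrative assumptions.
import pandas as pd

def reverse_code(item: pd.Series, scale_min: int = 1, scale_max: int = 7) -> pd.Series:
    # Recode a negatively worded item so that higher values mean higher credibility.
    return (scale_max + scale_min) - item

def score_credibility(df: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical item names per dimension; "_r" marks negatively worded items.
    dimensions = {
        "competence": ["comp_1", "comp_2", "comp_3_r", "comp_4", "comp_5", "comp_6_r"],
        "trustworthiness": ["trust_1", "trust_2_r", "trust_3", "trust_4_r", "trust_5", "trust_6"],
        "caring": ["care_1", "care_2_r", "care_3", "care_4", "care_5_r", "care_6_r"],
    }
    scored = pd.DataFrame(index=df.index)
    for dimension, items in dimensions.items():
        recoded = pd.concat(
            [reverse_code(df[i]) if i.endswith("_r") else df[i] for i in items], axis=1
        )
        scored[dimension] = recoded.mean(axis=1)   # dimension score = mean of its items
    scored["credibility"] = scored.mean(axis=1)    # overall score = mean of the dimensions
    return scored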


Attitude toward the company (Becker-Olsen, 2003)

1. I have a positive attitude toward this company
2. I think the company is pleasant
3. I found the company agreeable
4. I think the company is worthless
5. I think the company is good
6. I think the company is wise
7. I am favourable of this company
8. I dislike a lot this company
9. I found the company useless
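As an illustration of how the internal consistency of this nine-item scale could be checked before averaging the items into a single attitude index, the sketch below computes Cronbach's alpha. The data frame and the column names att_1 … att_9 are hypothetical, and the negatively worded items (e.g., "worthless", "useless") are assumed to have been reverse-coded beforehand; the reliability figures reported in the thesis may have been obtained differently.

# Minimal sketch (Python/pandas): Cronbach's alpha for the attitude scale.
# alpha = k / (k - 1) * (1 - sum of item variances / variance of the total score).
# Column names att_1 ... att_9 are hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Example usage with a hypothetical data frame "responses":
# alpha = cronbach_alpha(responses[[f"att_{i}" for i in range(1, 10)]])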


Appendix C: Online survey

With this letter, I would like to invite you to participate in a research study to be conducted under the auspices of the Graduate School of Communication, a part of the University of Amsterdam. The study in which you are going to take part deals with the use of chatbots by companies. In the online survey, you are asked to simulate a conversation with a virtual agent and complete one task.

The study will take about 8 minutes.

As this research is being carried out under the responsibility of the ASCoR, University of Amsterdam, we can guarantee that:

1. Your anonymity will be safeguarded, and that your personal information will not be passed on to third parties under any conditions, unless you first give your express permission for this.

2. You can refuse to participate in the research or cut short your participation without having to give a reason for doing so. You also have up to 24 hours after participating to withdraw your permission to allow your answers or data to be used in the research.

3. Participating in the research will not entail your being subjected to any appreciable risk or discomfort, the researchers will not deliberately mislead you, and you will not be exposed to any explicitly offensive material.

4. No later than five months after the conclusion of the research, we will be able to provide you with a research report that explains the general results of the research.

For more information about the research and the invitation to participate, you are welcome to contact the project leader Melania De Angelis (deangelismelania@gmail.com) at any time. Should you have any complaints or comments about the course of the research and the way in which it is conducted, you can contact the designated member of the Ethics Committee representing ASCoR, at the following address: ASCoR Secretariat, Ethics Committee, University of Amsterdam, Postbus 15793, 1001 NG Amsterdam; 020‐525 3680; ascor‐secr‐fmg@uva.nl. Any complaints or comments will be treated in the strictest confidence.

We hope that we have provided you with sufficient information. We would like to take this opportunity to thank you in advance for your assistance with this research, which we greatly appreciate.

Kind regards,

Melania De Angelis

---

I hereby declare that I have been informed in a clear manner about the nature and method of the research, as described in the email invitation for this study.

I agree, fully and voluntarily, to participate in this research study. With this, I retain the right to withdraw my consent, without having to give a reason for doing so. I am aware that I may halt my participation in the experiment at any time.

If my research results are used in scientific publications or are made public in another way, this will be done in such a way that my anonymity is completely safeguarded. My personal data will not be passed on to third parties without my express permission.

If I wish to receive more information about the research, either now or in future, I can contact Melania De Angelis (deangelismelania@gmail.com). Should I have any complaints about this research, I can contact the designated member of the Ethics Committee representing ASCoR, at the following address: ASCoR Secretariat, Ethics Committee, University of Amsterdam, Postbus 15793, 1001 NG Amsterdam; 020‐525 3680; ascor‐secr‐fmg@uva.nl.

(1) I understand the text presented above, and I agree to participate in the study.
(2) I do not wish to participate in this research study.

---

Are you above the age of 18?

(1) Yes (2) No

---

Lisa is a virtual agent whose purpose is helping online clients with customer service tasks. You can simulate a conversation with her: start by typing "Hi" and follow her prompts to continue the conversation and let her help you with a specific task. Once the conversation ends, Lisa will provide you with a code that you can copy and paste to continue the survey. Then, you can answer some questions about the service provided.

Please provide the code the chatbot gave you to continue:

---

"Social interactivity is defined as the ability of showing to understand social norms and adapt the response to the specific context in which the conversation takes place" (Wirtz et al., 2018).


Based on this definition, please express to what extent you believe that...

The virtual agent was socially interactive:
(1) Strongly disagree (2) Disagree (3) Somewhat disagree (4) Neither agree nor disagree (5) Somewhat agree (6) Agree (7) Strongly agree

The virtual agent was good at interacting in the conversation:
(1) Strongly disagree (2) Disagree (3) Somewhat disagree (4) Neither agree nor disagree (5) Somewhat agree (6) Agree (7) Strongly agree

The agent showed empathy during the conversation:
(1) Strongly disagree (2) Disagree (3) Somewhat disagree (4) Neither agree nor disagree (5) Somewhat agree (6) Agree (7) Strongly agree

---

Please express to what extent you believe that…

The service provided in the chat was complex:
(1) Strongly disagree (2) Disagree (3) Somewhat disagree (4) Neither agree nor disagree (5) Somewhat agree (6) Agree (7) Strongly agree

The task accomplished by the virtual agent was difficult:
(1) Strongly disagree (2) Disagree (3) Somewhat disagree (4) Neither agree nor disagree (5) Somewhat agree (6) Agree (7) Strongly agree
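To illustrate how this manipulation check could be analysed, the sketch below averages the two items above into a perceived-complexity index and compares the simple-task and complex-task conditions with Welch's t-test. The variable names and condition labels are assumptions for this example; the analyses reported in the thesis may have been conducted differently.

# Minimal sketch (Python): manipulation check for perceived task complexity.
# The column names ("condition", "complex_1", "complex_2") and the condition
# labels ("simple", "complex") are illustrative assumptions.
import pandas as pd
from scipy import stats

def complexity_manipulation_check(df: pd.DataFrame):
    df = df.copy()
    # Perceived-complexity index = mean of the two 7-point items above.
    df["complexity_index"] = df[["complex_1", "complex_2"]].mean(axis=1)
    simple = df.loc[df["condition"] == "simple", "complexity_index"]
    complex_ = df.loc[df["condition"] == "complex", "complexity_index"]
    # Welch's t-test: the complex-task conditions should score higher on the index.
    t_statistic, p_value = stats.ttest_ind(complex_, simple, equal_var=False)
    return t_statistic, p_value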

---

Please express to what extent you believe the following sentences regarding the virtual agent are true:

I found the agent in the chat intelligent

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I think the agent in the chat was trained

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I had the impression that the agent was not expert

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I think the agent was informed

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I had the impression that the agent was competent

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I think the agent was stupid

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

Please express to what extent you believe the following sentences are true:

I think the agent in the chat was trustworthy

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I found the agent unethical

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I found the agent genuine

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I had the impression that the agent was immoral

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I would define the agent as honourable

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I think the agent was honest

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

Please express to what extent you believe the following sentences are true:

I had the impression that the agent in the chat cared about me

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I think that the agent was self-centred

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I think that the agent had my interests at heart

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I had the impression that the agent was concerned with me

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I think the agent was understanding

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I believe the agent was insensitive

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

---

Please express to what extent you agree with the following sentences regarding the company providing the chatbot service:


I have a positive attitude toward this company

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I think the company is pleasant

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I found the company agreeable

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I think the company is worthless

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I think this company is good

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I think this company is wise

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I am favorable of this company

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I like a lot this company

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

I found the company useless

1 (Strongly disagree) 2 3 4 5 6 7 (Strongly agree)

---
