
Implementing a chatbot that affects an organization’s Net Promoter Score: the mediation effects of Social Presence and Perceived Ease Of Use


Academic year: 2021



Master’s thesis

Implementing a chatbot that affects an organization’s Net Promoter Score: the mediation effects of Social Presence and Perceived Ease Of Use

Graduate School of Communication, University of Amsterdam Master Corporate Communication (MSc)

Author: Darko Monzio Compagnoni (12276367)

Supervisor: P.G.A. (Pernill) van der Rijt


Abstract

The chatbot Glenda was developed to conduct this research, following up on the elective course Digital Analytics taught by Dr. Theo Araujo at the University of Amsterdam. Work automation is an increasing phenomenon, especially for low-skilled jobs. This online experiment, conducted with 201 participants, focuses on companies that automate a process (here, booking movie tickets online) by implementing a chatbot that customers (impersonated by the participants) contact directly. Characteristics of how the chatbot frames its messages are related to how likely a customer, after interacting with the chatbot, would be to recommend the company to a friend (the Net Promoter Score, operationalized here as the Individual Promoter Score). These message-framing characteristics included whether the chatbot impersonated a cheery or a serious conversation partner, expected to affect the Individual Promoter Score through Social Presence, and whether it guided the conversation or not, expected to affect the Individual Promoter Score through Perceived Ease Of Use. The results showed that participants expressed a higher desire to recommend the company when the implemented chatbot conversed in a warm and active way, while telling participants what to type in the chat did not predict a higher Individual Promoter Score.

INTRODUCTION

Between 1964 and 1965 the computer scientist Joseph Weizenbaum developed a program named ELIZA, which he described as a program that makes it possible to hold a conversation with a computer using natural language (Weizenbaum, 1966). Since the development of ELIZA, often described as the very first chatbot, computer science literature has been enriched with studies on AI holding conversations, sharing the goal of passing the Turing test (Turing, 1950): being unrecognizable as machines when compared with human beings answering the same questions. Hand in hand with technological developments, new and more detailed tests appeared to measure


the degree to which a piece of software capable of conversing (a chatbot) is indistinguishable from a human being typing on a keyboard (Colby, Hilf, Weber, & Kramer, 1972). Shum, He, and Li (2017) provided an overview of the main chatbots after ELIZA and their defining characteristics: PARRY in 1972, the first to give emotionally characterized responses (Colby, 1975); ALICE in 1995, with customizable scripts; DARPA in 2000, capable of managing flight ticket reservations; the digital assistant Siri, on Apple devices from 2011; and XiaoIce in 2014, a social chatbot capable of adapting its responses to the user's emotional status, showing an artificial form of empathy. With computer science developing a technology that communicates by mimicking a human being, communication science moved towards exploring the use of chatbots for communication practices, substituting face-to-face interactions with human beings. For example, recent research has investigated the implementation of chatbots in customer service (Chung, Ko, & Kim, 2018), customer experience (Trivedi, 2019), advertising (Van den Broeck, Zarouali, & Poels, 2019), marketing (Kaczorowska-Spychalska, 2019), B2B marketing (Paschen, Kietzmann, & Kietzmann, 2019), and the emotional connection with the company (Araujo, 2018). Although all these recent examples relate to marketing and customer experience practices, none of them addresses how chatbots could affect Word of Mouth. Specifically for this research, the communicative style of the chatbot, and the help provided to users in the scenario of their very first interaction with the company, could lead to different outcomes in the overall evaluation of the customer's first experience with the company.

The Research Question guiding this research is: “In the scenario of the first interaction with a customer: how do social and technological features of a chatbot, implemented to represent a company, influence the likelihood the customer will recommend the company to his/her network?”.

Such characteristics are categorized into a social aspect, making the chatbot more human-like, and a technological aspect, aimed at facilitating the use of the chatbot and smoothing the interaction with it. Harrison-Walker (2001) found that the level to which a customer is committed and the quality of service are both positively related to WoM. The two concepts have been shown to be antecedents of WoM, but both derived from human-human interactions between the organization and the customer, not from a form of Artificial Intelligence like a chatbot (although chatbots are hardly innovative nowadays when compared to the most famous virtual assistants: Amazon Alexa, Apple's Siri, and Google Assistant). A deeper understanding of the potential of a chatbot as a customer service agent would be a starting point for new research in which artificial intelligence and customer service researchers collaborate to move forward in the automation of customer service processes. From a societal viewpoint, this research aims to set guidelines for companies that are willing to automate some of their services by implementing a chatbot. Current implementations include chatbots that take food orders, chatbots that manage bookings for hotel rooms, flights, or movie tickets, and chatbots that provide information about the organization or its services. Implementing a chatbot to take care of these tasks, often repetitive and alienating, would mean withdrawing this portfolio from employees and decreasing the possible negative consequences of work alienation, such as counterproductive behaviors (Li & Chen, 2018), less commitment towards the organization (Tummers & Den Dulk, 2013), and a higher intention to leave (Tummers, Bekkers, van Thiel, & Steijn, 2015). Once the decision to automate is taken, the characteristics of the chatbot need to be decided before commissioning its development. This research provides insights on how to balance the social and technological aspects that drive the choices for this implementation, with the goal of a positive outcome on Word of Mouth.

THEORETICAL BACKGROUND

Before introducing the concepts that form the segments of this research, an explanation of how the research itself has been segmented is necessary. This research considers the chatbot from two different perspectives, which guide the hypotheses. The first perspective concerns the social aspect of the chatbot, and drives the first hypothesis and its components: in holding a chat-based conversation through messages, language can be structured to mimic, to a certain extent, the way a human being would converse. The second perspective concerns a technological aspect of the chatbot in relation to the user, and drives the second hypothesis and its components: since the chatbot is a form of technology, and as such is aimed at simplifying or substituting a specific task, issues may arise when users find it difficult to use. Both perspectives allow inferences about the chatbot and its characteristics, but for this research it has been considered appropriate to focus on them separately before advancing the third hypothesis, which contemplates a combination of both.

Chatbots

Chatbots are the most common form of Disembodied Conversational Agents, which rely solely on text messages (including media) to communicate, as opposed to the more complex Embodied Conversational Agents, which present an avatar capable of expressing body language cues (Etemad-Sajadi, 2016; Etemad-Sajadi & Ghachem, 2015; Verhagen, Van Nes, Feldberg, & Van Dolen, 2014) as well as basic emotions (Ekman & Cordaro, 2011). Since Facebook Messenger allowed companies to build their own chatbots, several brands have started to implement this service. For example, Lyft uses a chatbot to manage ride requests, providing information about the driver, the car, and the location; Spotify's chatbot simplifies searching for music as well as giving playlist recommendations; Whole Foods' chatbot assists the customer with recipes; and MasterCard's makes checking transactions easier ("9 Great Examples of How Brands are Using Chatbots", 2018).


Individual Promoter Score

This research includes the concept of the Individual Promoter Score (IPS), based on Reichheld's Net Promoter Score, or NPS (2003). The Net Promoter Score allows companies to estimate the level of satisfaction of their customers in terms of how likely they are to recommend the company to other people, raising its awareness among the public and attracting new potential customers. Customers who score 9-10 are called promoters, and are more likely to recommend the organization to members of their online networks, triggering a positive online Word of Mouth (eWoM) that, according to Raassens and Haans (2017), is related to the Net Promoter Score. On the other hand, detractors (customers scoring 0-6) are unlikely to recommend it and are responsible for up to 90% of a company's negative Word of Mouth (Reichheld, 2006), being likely to complain to their networks about the organization and causing damage to its reputation, sales, and growth (Reichheld, 2003; Reichheld, 2006). In between promoters and detractors we find passives: satisfied with the company but not really endorsing it, and constantly vulnerable to offers from competitors ("What Is Net Promoter?", 2017). Companies such as Apple, Intuit, and Philips have put the NPS at the center of their management processes (Reichheld & Markey, 2011). Existing research has used the NPS to make inferences regarding outcomes of customer-oriented communication (Eger & Micik, 2017), the forging of academic-practitioner partnerships (Bendle, Bagga, & Nastasoiu, 2019), the performance of a volunteer-dominated workforce (Burnham & Wong, 2018), profitability (Korneta, 2018), consumer expenditure (Stander, 2016), and customers' perceptions (Laitinen, 2018). Michaels (2017) underlines how the presence of a chatbot on a website enhances the user experience by providing information, services, and solutions to the problems the customer might have, instead of alternatives consisting of static text and completion boxes on multiple webpages. The growing popularity of chatbots among companies raises the question of how a corporation's reputation might be affected when implementing one. In particular, Araujo (2018) found that "human-like cues such as language and name" (p. 183) of a chatbot influence both mindless and mindful anthropomorphism, as well as the emotional connection with the company felt by consumers after their interaction with the chatbot.

Social perspective: Communicative Type and Social Presence

When interacting with a chatbot, users tend to attribute to it a subjective level of anthropomorphism, namely human traits, emotions, and intentions (Nass, Moon, & Green, 1997; Nass & Moon, 2000; Nass & Lee, 2001; Nass & Brave, 2005; Gong & Nass, 2007; Sundar, 2008). A user who attributes high anthropomorphism behaves socially with the chatbot, as in an interaction with another human being (Reeves & Nass, 1996; Baylor & Kim, 2003; Gong & Nass, 2007; Gong, 2008; Lee, 2010; Cowan, Branigan, Obregón, Bugis, & Beale, 2015). This tendency to treat computers - chatbots in this research - as human beings is a behavior that a user mindlessly (unconsciously) perpetrates, based on cues that do not need to be complicated or realistic (Kim & Sundar, 2012). Accordingly, companies that intend to deploy a customer-facing chatbot need to take into account what is known about human-machine interaction to ensure the best experience for their customers. The concept of Social Presence was introduced by Short, Williams, and Christie (1976), and regards the capacity of an online medium to express and communicate human-like cues to the user. The concept has been used broadly, and is considered an indicator of higher learning and satisfaction outcomes in online learning contexts (Homer, Plass, & Blake, 2008; Leong, 2011; Kim, Kwon, & Cho, 2011; Lyons, Reysen, & Pierce, 2012; Weidlich & Bastiaens, 2017; Richardson, Maeda, Lv, & Caskurlu, 2017). According to Verhagen et al. (2014), the friendliness of a virtual customer service agent is a determinant of social presence, expressing more warmth and empathy (Anderson, 1995; Price, Arnould, & Deibler, 1995). For this research, the Communicative Type is the way the chatbot communicates, which can be either friendly, expressing empathy and warmth in the chat messages, or neutral, using chat messages that go straight to the point without cheering the user on. The concept is based on the findings of Go and Sundar (2019), who state that a highly interactive message compensates for the scarcity of human-like cues. Since the chatbot used for this research is a Disembodied Conversational Agent that relies on text messages to express anthropomorphism, it is expected that highly interactive messages will convey more anthropomorphism and therefore more social presence. According to Nowak and Biocca's findings (2003), interacting with an entity that is slightly anthropomorphic leads the user to perceive higher Social Presence compared to both a not-at-all anthropomorphic and a highly anthropomorphic one, "indicating that the more anthropomorphic images set up higher expectations that lead to reduced presence when these expectations were not met" (p. 481). A chatbot that communicates in a friendly way is expected to be perceived as more human-like, and to strengthen the perception of a presence with which the user socially interacts by chatting:

H1a: Participants interacting with a friendly chatbot will express a higher Social Presence than the participants interacting with a neutral chatbot.

It has been demonstrated that Social Presence positively affects customer satisfaction (Verhagen et al., 2014); a satisfied user works as a hub and channels positive Word of Mouth into his/her network (Harrison-Walker, 2001), which is positively related to the Net Promoter Score (Raassens & Haans, 2017). Following the goal of this research, it is expected that a user who perceives a high social presence in the chatbot will consequently be more likely to recommend the company to their networks:

H1b: There is a positive relationship between the Social Presence and the IPS, where the higher the Social Presence, the higher the IPS.

By merging H1a and H1b, it is possible to infer that the Social Presence of the chatbot mediates the relationship between the Communicative Type and the Individual Promoter Score:

H1c: Participants interacting with a friendly chatbot will express a higher level of IPS than participants using a neutral chatbot, mediated by Social Presence.
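The mediation claimed in H1c (and, analogously, H2c below) boils down to an indirect effect: the product of the path from the manipulation to the mediator (a) and the path from the mediator to the outcome while controlling for the manipulation (b). The sketch below only illustrates that logic in plain Python on made-up numbers; an actual analysis (e.g. with the PROCESS macro) would also bootstrap a confidence interval around the indirect effect.

```python
# Minimal illustration of a simple mediation (x -> m -> y), assuming
# ordinary least squares and no bootstrapping. All data are made up.

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

def mediation(x, m, y):
    """Return (a, b, indirect effect) for the model x -> m -> y."""
    # Path a: slope of m regressed on x.
    a = cov(x, m) / cov(x, x)
    # Path b: slope of m in the regression of y on both m and x,
    # obtained by solving the 2x2 normal equations by hand.
    smm, sxx, smx = cov(m, m), cov(x, x), cov(m, x)
    smy, sxy = cov(m, y), cov(x, y)
    det = smm * sxx - smx * smx
    b = (smy * sxx - smx * sxy) / det
    return a, b, a * b

# Toy example: x is the condition dummy (1 = friendly), m the mediator
# (e.g. Social Presence), y the outcome (e.g. IPS).
x = [0, 0, 0, 0, 1, 1, 1, 1]
m = [2.0, 2.5, 3.0, 2.5, 3.5, 4.0, 4.5, 4.0]
y = [5, 6, 6, 5, 8, 9, 9, 8]
a, b, ab = mediation(x, m, y)
```

A non-zero product a*b is what the mediation hypotheses predict; its significance would be judged against a bootstrapped interval in the real analysis.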


Technological perspective: Presence of Guidance and Perceived Ease of Use

Regarding the technological aspect of the chatbot, the implementation of such technology might meet resistance from users who are not familiar with conversing with a computer. For this research, the Presence of Guidance consists of messages provided by the chatbot that aim to guide the user through the interaction, providing instructions on what the participants need to type; it is inspired by the concept of soliciting a response (Ikemoto, Asawavetvutt, Kuwabara, & Huang, 2019) and the concepts of "navigation by asking" and "navigation by proposing" (Smyth & McGinty, 2003). A user who interacts with a chatbot that does not understand gets frustrated, and a frustrated customer is an unhappy customer. This is why companies that intend to implement a chatbot should be interested in providing guidance to their customers, facilitating their interaction with the chatbot and the overall experience, whose outcomes this research investigates. It is expected that the effect of this characteristic on the overall experience will predict a higher likelihood of recommending the company. Perceived Ease Of Use, together with Perceived Usefulness, is included in Davis' Technology Acceptance Model (TAM), based on the Theory of Reasoned Action (Fishbein & Ajzen, 1975). Davis (1989) defined Perceived Ease Of Use as "the degree to which a person believes that using a particular system would be free of effort" (p. 320). It is expected that a user whose interaction with the chatbot has been problematic will find it hard to use, while a user whose interaction went flawlessly will consider the chatbot easy to use. This concept has been considered in a wide body of research as a factor influencing the acceptance of new technologies, such as e-services for organizations (Featherman, Miyazaki, & Sprott, 2010), faculty acceptance of electronic books (Nasser Al-Suqri, 2014), intention to use Internet banking services (Danurdoro & Wulandari, 2016), intention to use e-government (Hamid, Razak, Bakar, & Abdullah, 2016), and e-purchase intentions (Moslehpour, Pham, Wong, & Bilgiçli, 2018). In this research, Perceived Ease of Use establishes how the users of the chatbot, potential customers of the company, relate to using such technology to request a service or information, instead of having an employee of the company take care of their needs. It is expected that interacting with a chatbot that explicitly says how to respond to obtain what is wanted, either information or services, will result in a smoother human-machine interaction by avoiding misunderstandings, and will make the chatbot be considered easier to use:

H2a: Participants interacting with a chatbot that gives guidance will express a higher Perceived Ease of Use than the participants interacting with a chatbot that does not give guidance.

An interaction made smooth by the Presence of Guidance would result in a better overall experience in the completion of the task (Danurdoro & Wulandari, 2016). Avoiding putting the user in a situation in which the chatbot does not understand and keeps sending an error message would prevent frustration, and once again result in a better view of the company, raising how the user would rate it in informal chats with his/her network, as well as the likelihood that these chats actually happen:

H2b: There is a positive relationship between the Perceived Ease Of Use and the IPS, where the higher the Perceived Ease Of Use, the higher the IPS.

From H2a and H2b it is possible to infer that the Perceived Ease Of Use mediates the relationship between the Presence of Guidance and the Individual Promoter Score:

H2c: Participants interacting with a chatbot that gives guidance will express a higher level of IPS than participants using a chatbot that does not give guidance, mediated by Perceived Ease Of Use.

Overall perspective: which is the best chatbot?

Following up on Araujo's findings (2018) on the use of a chatbot by an organization and the emotional connection with the user, the chatbot created for this research allows us to understand whether and how combinations of Communicative Type and Presence of Guidance affect how a customer will talk about the organization after a first-encounter scenario. In line with the previous sections, it is expected that interacting with a chatbot that is friendly and proactively helpful will result in a higher evaluation of the company it represents:


H3: Participants interacting with a friendly chatbot that gives guidance will express the highest IPS compared to the other conditions.

Model

The conceptual research model below shows the expected relationships between the concepts, distinguishing the social perspective (upper path) from the technological perspective (lower path):

Image 1: The model

METHODS

Design

To conduct this research, the chatbot Glenda has been developed using Dialogflow by Google, embedded in a survey on Qualtrics, with conditions created in Sublime Text and assigned using CART by ASCoR. Glenda (appearing as a chat box, Appendix A) has been programmed to respond with one of four different combinations of traits (Appendix B), corresponding to the four conditions of the experiment: Friendly, Friendly with Guidance, Neutral, and Neutral with Guidance. To briefly explain how the chatbot



works: the first step in Dialogflow has been to create and categorize the Entities. Each Entity comprises one or more words that the AI of the chatbot is able to recognize and assign to a specific category. In this research, in which the participants interacted with the chatbot to order a movie ticket, the main categories were Movie Titles (with Entities such as "Joker", "The Irishmen", "Frozen 2", ...), Dates (with Entities such as "April 23", "Christmas", "today", "tomorrow", "Saturday", ...), Hours (with Entities such as "18", "8pm", "20:00", ...), Rows (named with letters from "A" to "E"), and Seats (numbered from "1" to "12"). Secondary for this study, but of equal importance, were the categories Greetings ("Hi", "Hello", "Hey", ...), often opening the conversations, and Thanks ("Thank you", "Thanks!", "Cheers", ...), which the users (participants) were expected to use to terminate the conversation with Glenda. After the creation of the Entities, these elements need to be contextualized for the Artificial Intelligence into Intents. Each Intent is recognized by the chatbot as "What does the user intend to do?", and is structured with a series of Training Phrases (inserted in Dialogflow by the developer) in which the chatbot recognizes Entities. For example, in the training phrase "I want to watch Joker tomorrow at 8pm", the Entities "Joker" (associated with the category Movie Titles), "tomorrow" (associated with the category Dates), and "8pm" (associated with the category Hours) are recognized. The chatbot recognizes the Intents on the basis of the Entities it recognizes in the sentences provided by the user. The Training Phrases give the Artificial Intelligence of the chatbot a starting point, from which the Machine Learning system within Dialogflow enables the chatbot to understand complex sentences typed by the users, as long as the entities are recognized.
For instance, a chatbot trained with the sentence "I want to watch Joker tomorrow at 8pm" will understand a user who types "I want to book a ticket for Fyre, next Wednesday at 18", recognizing the entities "Fyre", "next Wednesday", and "18" and associating them with the correct categories. When programming the Intents in Dialogflow, a general Intent such as "book a movie ticket" can be programmed with prompts that request any required information not provided by the user in the first place. For example, for the sentence "I want to watch Joker" typed by a user, the chatbot recognizes the date and hour as missing information and requests them in the subsequent messages.
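The Entity-and-Intent mechanism described above can be approximated with a toy matcher. Dialogflow's actual natural-language understanding is machine-learned and far more capable; the sketch below only does literal phrase lookup against a hypothetical entity table, purely to illustrate how recognized entities fill the slots of a booking intent and how missing slots trigger follow-up prompts.

```python
# Toy illustration of the entity/intent mechanism described above.
# The entity table is a hypothetical stand-in, not the study's exact
# Dialogflow configuration.

ENTITIES = {
    "movie": ["joker", "frozen 2", "the irishmen", "fyre"],
    "date": ["today", "tomorrow", "saturday", "next wednesday"],
    "hour": ["18", "8pm", "20:00"],
}

def extract_entities(message):
    """Return {category: value} for every entity found in the message."""
    found = {}
    text = message.lower()
    for category, values in ENTITIES.items():
        for value in values:
            if value in text:
                found[category] = value
    return found

def detect_intent(entities):
    """A booking intent needs at least a movie; missing slots get prompted."""
    if "movie" not in entities:
        return "fallback", []
    missing = [slot for slot in ("date", "hour") if slot not in entities]
    return "book_ticket", missing

entities = extract_entities("I want to book a ticket for Fyre, next Wednesday at 18")
intent, missing = detect_intent(entities)
```

With all three slots recognized, the intent resolves directly; typing only "I want to watch Joker" would leave the date and hour in `missing`, mirroring the prompting behavior described above.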


It has been important to program a Fallback intent, which the chatbot provided each time the user typed something unrecognizable (for example, "I am sorry about it, but I didn't understand what you said, my only purpose is helping you with booking tickets for the Cinema Shel!" in the Friendly condition, and "Message not recognized. Type in the movie of your choice" in the Neutral with Guidance condition). The chatbot has been programmed to recognize listed synonyms, to avoid falling into the Fallback intent too often. For example, synonyms of the hour "20:00", such as "8pm" or "20", were made recognizable. Before its implementation in the Qualtrics survey, the chatbot has been tried out several times, and its features adjusted to guarantee its functionality before starting the data collection.
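The synonym handling and the Fallback intent can likewise be sketched outside Dialogflow. The synonym table and confirmation reply below are hypothetical stand-ins; only the Friendly fallback message is quoted from the configuration described above.

```python
# Sketch of the synonym handling described above: user variants of an
# hour are normalized to one canonical value before slot filling, and
# anything unrecognized falls through to the Fallback intent.

HOUR_SYNONYMS = {"8pm": "20:00", "20": "20:00", "20:00": "20:00"}

FALLBACK = ("I am sorry about it, but I didn't understand what you said, "
            "my only purpose is helping you with booking tickets for the "
            "Cinema Shel!")

def normalize_hour(token):
    """Map a recognized hour variant to its canonical form, else None."""
    return HOUR_SYNONYMS.get(token.strip().lower())

def reply_to_hour(token):
    hour = normalize_hour(token)
    if hour is None:
        return FALLBACK  # unrecognized input triggers the Fallback intent
    return f"Ticket booked for {hour}."
```

Listing synonyms up front keeps legitimate variants ("8pm", "20") out of the fallback path, which is exactly why they were made recognizable in the study.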

Procedure

All the participants performed the same task, consisting of an interaction with Glenda aimed at booking movie tickets for a fictional cinema named Cinema Shel, choosing the movie, date, time, and seat, before receiving a confirmation code to be inserted in a text box. The participants could choose the movie from a table containing the available titles (Appendix D) and their seat from a seat map (Appendix E), appearing on the same page as the chatbot. After completing the interaction, the respondents proceeded in the survey, answering several questions measuring the different variables. A system programmed within the chatbot took care of assigning participants to the different conditions while keeping the number of participants per condition balanced. The conditions consisted of different versions of the chatbot, as shown in the factorial design. The independent variables Communicative Type (with levels Friendly and Neutral) and Presence of Guidance (with levels Guidance and No Guidance) form a 2x2 factorial design, yielding four different versions of the same chatbot, Glenda.


Chatbot "Glenda"        Presence of Guidance
Communicative Type      No Guidance    Guidance
Friendly                version 1      version 2
Neutral                 version 3      version 4

Table 1: Factorial design
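The balanced assignment described in the Procedure can be sketched as follows. This is a plain-Python approximation of the behavior attributed to CART (assign each newcomer at random among the currently least-filled conditions), not ASCoR's actual implementation.

```python
# Sketch of balanced random assignment over the four experimental
# conditions: each new participant is randomly assigned among the
# conditions that currently have the fewest participants.
import random

CONDITIONS = ["Friendly", "Friendly with Guidance",
              "Neutral", "Neutral with Guidance"]

def assign(counts, rng=random):
    """Pick a condition among those with the current minimum count."""
    low = min(counts.values())
    candidates = [c for c in CONDITIONS if counts[c] == low]
    choice = rng.choice(candidates)
    counts[choice] += 1
    return choice

counts = {c: 0 for c in CONDITIONS}
for _ in range(201):
    assign(counts)
# With 201 participants, the four cells can differ by at most one.
spread = max(counts.values()) - min(counts.values())
```

Note that with 201 participants perfect balance is impossible, which is consistent with the slightly unequal cell sizes (48/50/51/52) reported for the main study, where participant dropout also plays a role.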

Sample

The main research involved the recruitment of 201 new participants between 18 and 58 years old (M = 28.56, SD = 9.47). Of these 201, 66 were male (32.7%), 130 were female (64.4%), and 5 preferred not to specify their gender (2.5%). The participants have been recruited using a combination of convenience and snowball sampling techniques. Specifically, the link to the Qualtrics online survey has been shared via WhatsApp, Facebook, LinkedIn, and Instagram, asking the reached contacts to share the survey with their networks. Within the survey, questions regarding age and language requirements filtered out potential participants under the age of 18, or not confident enough in their understanding of the English language. In total, 48 participants have been assigned to the Friendly condition, 50 to the Friendly with Guidance condition, 51 to the Neutral condition, and 52 to the Neutral with Guidance condition.

Pilot test

Before the actual data collection started, a pilot test has been conducted to check whether the manipulation was effective. If not, each set of messages (one per condition) would have needed to be modified, since the custom messages of the chatbot constitute the manipulation of the experiment (Appendix C). For this pilot test, the participants performed a task consisting of using a coupon to order a cinema ticket. After performing the task, they answered questions requesting their opinion on some features of the chatbot, namely to what extent they had perceived it to be friendly - or neutral - towards them, and whether the


chatbot was helping them by assisting them throughout the interaction. These questions aimed to investigate whether the manipulations were perceived: would a friendly version of the chatbot be perceived as friendly by the users, and would they notice the presence of some sort of guidance? Firstly, to check for the chatbot's different conversational approaches, the questions "How do you assess Glenda's conversational style?", with levels Neutral and Friendly, and "How friendly was Glenda in the conversation?", measured on a scale ranging from Not at all (1) to Very friendly (5), have been used. Secondly, to check for the chatbot's provision (or not) of extra messages guiding what to type: "Glenda was guiding the interaction by explicitly telling me what to type" and "Glenda regularly told me how to respond (for example by telling me: 'type in the hour to continue')", both categorical variables with levels Yes and No. 53 participants took part in the pilot test, aged from 19 to 52 (M = 26.5, SD = 6.19), of which 18 male (34.0%), 33 female (62.3%), and 2 who preferred not to specify their gender (3.8%). The target sample size was determined by considering a minimum of 40 participants for each condition. The assignment of participants to the four conditions has been random and balanced, with a system included in CART keeping the number of participants for each condition balanced. As explained above, four questions in the pilot test aimed to check whether the manipulations had been effective. A chi-square test of independence was performed to examine whether participants in the friendly condition perceived Glenda as friendly, and participants in the neutral condition perceived Glenda as neutral. The relation between the variable Communicative Type and the first pilot test question "How do you assess Glenda's conversational style?" was significant, X2 (1, N = 53) = 20.62, p < .001. For the second pilot test question, "How friendly was Glenda in the conversation?", an independent-samples t-test showed that the 26 participants who interacted with a friendly version of Glenda (M = 3.96, SD = 1.04) scored significantly higher on friendliness than the 27 participants who interacted with a neutral version (M = 2.15, SD = .95), t(51) = 6.64, p < .001. The first and second pilot test questions together demonstrated that the manipulation Communicative Type was effective and perceived by participants. Thirdly, to check whether the


presence (or absence) of guidance in the messages provided by Glenda was perceived: a chi-square test showed a significant relation between the variable Presence of Guidance and the third pilot test question "Glenda was guiding the interaction by explicitly telling me what to type", X2 (1, N = 53) = 38.09, p < .001, as well as the fourth pilot test question "Glenda regularly told me how to respond (for example by telling me: 'type in the hour to continue')", X2 (1, N = 53) = 38.30, p < .001. The third and fourth pilot test questions together demonstrated that the manipulation Presence of Guidance was effective and perceived by participants. Since the results of the pilot test were significant, it has been possible to proceed with the data collection of the main research.
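The two kinds of pilot-test statistics reported above follow standard formulas, sketched here in plain Python on made-up counts (not the study's raw responses): the chi-square statistic for a 2x2 table of observed frequencies, and Student's t for two independent samples with pooled variance.

```python
# Pure-Python sketch of the two pilot-test statistics; data are invented.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    observed = ((a, b), (c, d))
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / n
            stat += (observed[i][j] - expected) ** 2 / expected
    return stat

def t_independent(x, y):
    """Student's t for two independent samples with pooled variance."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    ssx = sum((v - mx) ** 2 for v in x)
    ssy = sum((v - my) ** 2 for v in y)
    pooled = (ssx + ssy) / (nx + ny - 2)
    return (mx - my) / (pooled * (1 / nx + 1 / ny)) ** 0.5

# Hypothetical counts: of 26 "friendly" participants, 24 rated the style
# Friendly; of 27 "neutral" participants, 22 rated it Neutral.
chi2 = chi_square_2x2(24, 2, 5, 22)
```

The resulting statistic would then be compared against the chi-square distribution with 1 degree of freedom (for the 2x2 table) or the t distribution with n1 + n2 - 2 degrees of freedom to obtain the p-values reported above.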

Measurements

Independent variables

All the conditions have been recoded into dummy variables. For Communicative Type, the value "1" has been assigned to the Friendly condition and the value "0" to the Neutral condition. For Presence of Guidance, the value "1" has been assigned to the with Guidance condition and the value "0" to the No Guidance condition. Finally, a third dummy variable, aimed at testing H3, had "1" assigned to the Friendly with Guidance condition and "0" to the remaining three.
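The dummy coding described above can be expressed compactly; the condition labels below are hypothetical stand-ins for whatever labels the survey export actually uses.

```python
# Sketch of the dummy coding described above: one dummy per manipulated
# factor, plus the H3 dummy for the Friendly-with-Guidance cell.

def dummies(condition):
    """Return (friendly, guidance, friendly_with_guidance) as 0/1."""
    friendly = 1 if "Friendly" in condition else 0
    guidance = 1 if "Guidance" in condition else 0
    return friendly, guidance, friendly * guidance

rows = ["Friendly", "Friendly with Guidance",
        "Neutral", "Neutral with Guidance"]
coded = {c: dummies(c) for c in rows}
```

Note that the H3 dummy is simply the product of the other two, i.e. it is 1 only in the cell where both manipulations are present.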

Dependent variable: Social Presence

For the measurement of Social Presence, a 5-point Likert scale (Appendix F) with five items has been developed, inspired by Gefen and Straub's (2003) Social Presence scale, ranging from Strongly Disagree (1) to Strongly Agree (5). Examples of items (the full list can be found in Appendix F) include "In Glenda there is a sense of personalness" and "In Glenda there is a sense of human warmth". A reliability test has been run on the scale, whose Cronbach's alpha showed a high reliability (α = .90).
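Cronbach's alpha, as reported for this and the following scale, can be computed from item-level responses with the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The responses below are made up for illustration, not the study's data.

```python
# From-scratch Cronbach's alpha; rows = participants, columns = the
# five scale items (made-up 5-point Likert responses).

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / (len(values) - 1)

def cronbach_alpha(rows):
    k = len(rows[0])                 # number of items
    items = list(zip(*rows))         # column-wise item scores
    item_var = sum(variance(list(col)) for col in items)
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

responses = [
    [4, 4, 5, 4, 4],
    [2, 2, 1, 2, 2],
    [5, 4, 5, 5, 4],
    [1, 2, 1, 1, 2],
    [3, 3, 3, 4, 3],
]
alpha = cronbach_alpha(responses)
```

Because the invented respondents answer the five items consistently, alpha comes out high, mirroring the kind of internal consistency the reported α = .90 indicates.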


Dependent variable: Perceived Ease Of Use

To measure how easy the participants found interacting with Glenda, Perceived Ease Of Use was measured with a six-item 5-point Likert scale, adapted from Gefen and Straub's (2003) scale and ranging from Strongly Disagree (1) to Strongly Agree (5), with items such as "My interaction with Glenda was clear and understandable" and "I would find it easy to get Glenda to do what I want it to do to book my ticket". A reliability test showed high reliability (Cronbach's α = .84), indicating that the items consistently measured the same construct.

Dependent Variable: Individual Promoter Score

The Individual Promoter Score (IPS), based on Reichheld's (2003) Net Promoter Score (NPS), was measured with a single item: "On a scale from 0-10, how likely are you to recommend the Cinema Shel to a friend?", ranging from Not at all likely (0) to Extremely likely (10). Respondents with an IPS from 0 to 6 are defined as Detractors, those with 7 or 8 as Passives, and those with 9 or 10 as Promoters of the organization (Reichheld, 2003). This research focuses on the IPS rather than the NPS because the IPS is a continuous variable measured for each individual, whereas the NPS is the difference between the percentage of Promoters (IPS of 9 or 10) and the percentage of Detractors (IPS from 0 to 6). The variable could have been treated as nominal, with Detractor, Passive, and Promoter as levels, but the 0-10 scale was preferred to obtain greater differentiation and to check the distribution.
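The relation between individual scores and the aggregate metric can be illustrated with a short sketch; the scores below are invented for the example.

```python
# How the aggregate NPS is derived from individual IPS ratings (0-10).
# The scores below are invented for the example.
def nps(scores) -> float:
    """Net Promoter Score: % Promoters (9-10) minus % Detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

sample_ips = [10, 9, 8, 7, 7, 6, 5, 9, 10, 3]  # 4 Promoters, 3 Passives, 3 Detractors
```

With four Promoters and three Detractors among ten respondents, this sample yields an NPS of 10; the individual 0-10 ratings retain the differentiation that the aggregate discards.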

Control variables: Level of Experience with chatbots, Rating of past Experience with chatbots, Interest to visit the Cinema Shel

To control for potential external variables affecting the dependent variable Individual Promoter Score, additional survey questions were included. The first control question measured the participants' level of experience with chatbot interactions, "Do you have experience in interacting with chatbots?", ranging from Not at all (1) to A lot (5). Participants with prior chatbot experience have probably been in contact with more advanced chatbots, programmed by teams of developers and therefore more capable than Glenda, the chatbot used for this Master's thesis; they might perceive Glenda as obsolete, with effects on Perceived Ease Of Use. The second control question asked participants to rate their past experience with chatbots, "Overall, how would you rate your experience with chatbots?", ranging from Very negative (1) to Very positive (5). This question controlled whether a good, or bad, past experience with chatbots affected the IPS. The third question checked the participants' interest in visiting the Cinema Shel, ranging from Not interested (1) to Very interested (5), as a follow-up to the measurement of the IPS providing extra information on the participants' attitude. This last question, however, is not strictly related to the aim of this research: its descriptives will be presented in the results, but no analyses including this extra variable were conducted.

All the scales integrated in the Qualtrics online survey can be found in Appendix F.

RESULTS

Manipulation Check

The questions used in the pilot test acted as the manipulation check in the experiment. In this section, the results of the manipulation check are presented. The first question, "How do you assess Glenda's conversational style?", had a significant relationship with Communicative Type, X2 (1, N = 201) = 68.14, p < .001, with participants exposed to the friendly Glenda perceiving the chatbot as friendly. For the following question, "How friendly was Glenda in the conversation?", an independent samples t-test comparing the 98 participants who interacted with the friendly version of Glenda (M = 3.95, SD = .84) with the 103 participants who interacted with the neutral version (M = 2.91, SD = .90) did not show a significantly higher friendliness score, t(199) = 8.43, p = .867.


Apparently, only the first question demonstrated that the manipulation Communicative Type was effective and perceived by participants; this, however, was considered enough to retain the results. To test whether the presence (or absence) of guidance in the messages provided by Glenda was perceived, a chi-square test was conducted, which showed a significant relation between the variable Presence of Guidance and both the third and the fourth pilot test questions: "Glenda was guiding the interaction by explicitly telling me what to type", X2 (1, N = 201) = 5.27, p < .001, and "Glenda regularly told me how to respond (for example by telling me: 'type in the hour to continue')", X2 (1, N = 201) = 38.87, p < .001. Both questions demonstrated that the manipulation Presence of Guidance was effective and perceived by participants.

Analyses

The dependent variable Social Presence (M = 2.76, SD = 0.91) appeared normally distributed in the sample, while both Perceived Ease Of Use (M = 4.26, SD = 0.63) and Individual Promoter Score (M = 7.53, SD = 1.87) were negatively skewed. Before investigating the main relationships, the influence of the control variables on the dependent variables was checked. Level of Experience with chatbots showed no significant correlation with the dependent variables, while Rating of past Experience with chatbots showed a weak but significant correlation with all three dependent variables, Social Presence, Perceived Ease Of Use, and Individual Promoter Score. Since the effects of the control variables were weak or non-significant, these variables were not included as covariates in the regression analyses below. The results of this correlation analysis can be found in Appendix G. Before testing the hypotheses with regression analyses, the values of every independent and dependent variable were standardized. The standardized variables were used to run a mediation analysis with Hayes's (2012) PROCESS macro, using Model 4.
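The standardization step can be sketched as follows, matching the usual z-score computation based on the sample standard deviation; the input values are illustrative.

```python
# z-standardization as applied to every variable before the mediation analyses.
import numpy as np

def standardize(x):
    """z-scores: mean 0 and sample SD (ddof=1) of 1, as statistics packages compute them."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

z_ips = standardize([7, 8, 6, 9, 10, 5])  # illustrative IPS values
```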


Social perspective

A regression analysis using Hayes's (2012) PROCESS macro was used to test the hypotheses connected to the social perspective of this experiment. Testing H1a, Communicative Type significantly predicted Social Presence, b = 0.378, t = 5.77, p < .001, 95% CI [0.249, 0.508]. The R2 = .14 shows that Communicative Type explains 14% of the variance in Social Presence. Since the predictor Communicative Type is a dummy variable created for the friendly condition, the regression analysis shows that the friendly chatbot condition is positively related to Social Presence, with higher values compared to the neutral chatbot condition. Testing H1b, the regression analysis predicting Individual Promoter Score from Communicative Type and Social Presence showed that Social Presence significantly predicted Individual Promoter Score, b = 0.474, t = 6.99, p < .001, 95% CI [0.341, 0.608]: an increase in the Social Presence of the chatbot relates to an increase in the Individual Promoter Score expressed by the user. When Social Presence is in the model, Communicative Type does not significantly predict Individual Promoter Score, b = -.020, t = -.30, p = .766, 95% CI [-0.154, 0.114]. The model explains 22% of the overall variance of Individual Promoter Score. When Social Presence is not in the model, Communicative Type significantly predicts Individual Promoter Score, b = .159, t = 2.28, p < .05, 95% CI [0.021, 0.297], explaining 2.5% of its variance.

Testing H1c, there was a significant indirect effect of Communicative Type on Individual Promoter Score through Social Presence, b = .180, 95% BCa CI [0.105, 0.267].
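The indirect effect and its bootstrap confidence interval can be approximated outside PROCESS. The sketch below uses a plain percentile bootstrap (PROCESS itself reports a bias-corrected interval) on synthetic data constructed to contain a positive mediation effect; none of the numbers are the study's data.

```python
# Percentile-bootstrap sketch of the indirect effect a*b (PROCESS Model 4
# reports a bias-corrected interval; a plain percentile version is used here).
# Synthetic data in which X raises M and M raises Y, so the indirect effect
# is positive by construction.
import numpy as np

rng = np.random.default_rng(42)
n = 201
x = rng.integers(0, 2, n).astype(float)   # dummy-coded condition
m = 0.4 * x + rng.normal(0, 1, n)         # mediator (e.g. Social Presence)
y = 0.5 * m + rng.normal(0, 1, n)         # outcome (e.g. IPS)

def indirect_effect(x, m, y):
    """a-path (M regressed on X) times b-path (Y on M, controlling for X)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

point = indirect_effect(x, m, y)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)           # resample participants with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
```

If the interval [lo, hi] excludes zero, the indirect effect is considered significant, which is the logic behind the BCa intervals reported in this section.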

Image 2: social perspective: results. Path model: Communicative Type → Social Presence, b = .37, p < .001***; Social Presence → Individual Promoter Score, b = .47, p < .001***; direct effect of Communicative Type on Individual Promoter Score, b = -.020, p = .766, 95% CI [-0.154, 0.114]; indirect effect, b = .180, 95% BCa CI [0.105, 0.267]. (Note: *p < .05, **p < .01, ***p < .001)


Technological perspective

As for the social perspective, the hypotheses related to the technological perspective were tested with a regression analysis using Hayes's (2012) method in PROCESS. Testing H2a, Presence of Guidance did not significantly predict Perceived Ease Of Use, b = 0.019, t = 0.27, p = .789, 95% CI [-0.121, 0.159]. Testing H2b, the regression analysis predicting Individual Promoter Score from Presence of Guidance and Perceived Ease Of Use showed that Perceived Ease Of Use significantly predicted Individual Promoter Score, b = 0.497, t = 8.05, p < .001, 95% CI [0.375, 0.618]: an increase in Perceived Ease Of Use relates to an increase in the Individual Promoter Score expressed by the user. When Perceived Ease Of Use is in the model, Presence of Guidance does not significantly predict Individual Promoter Score, b = -.011, t = -.18, p = .858, 95% CI [-0.133, 0.111]. The model explains 25% of the overall variance of Individual Promoter Score. When Perceived Ease Of Use is not in the model, Presence of Guidance does not significantly predict Individual Promoter Score either, b = -.002, t = -.02, p = .982, 95% CI [-0.141, 0.138].

Testing H2c, the indirect effect of Presence of Guidance on Individual Promoter Score through Perceived Ease Of Use was not significant, b = .009, 95% BCa CI [-.060, 0.078]. While H2b is retained, with Perceived Ease Of Use as a predictor of Individual Promoter Score, H2a and H2c are rejected: Presence of Guidance affects neither Perceived Ease Of Use nor, directly or indirectly, the Individual Promoter Score.


Image 3: technological perspective: results. Path model: Presence of Guidance → Perceived Ease Of Use, b = .02, p = .789; Perceived Ease Of Use → Individual Promoter Score, b = .50, p < .001***; direct effect of Presence of Guidance on Individual Promoter Score, b = -.002, p = .982, 95% CI [-0.141, 0.138]; indirect effect, b = .009, 95% BCa CI [-.060, 0.078]. (Note: *p < .05, **p < .01, ***p < .001)

Overall perspective

Testing H3, a regression analysis was conducted to explore the direct effect of the combination of the friendly Communicative Type and the Presence of Guidance (the Friendly with Guidance condition) on the Individual Promoter Score. No significant effect was found, b = -.070, t = .99, p = .321.

Image 4: overall perspective: results. Path model: Communicative Type (friendly) combined with Presence of Guidance (with Guidance) → Individual Promoter Score, b = -.070, p = .321. (Note: *p < .05, **p < .01, ***p < .001)

CONCLUSION AND DISCUSSION

From the results of this research, the research question can be answered by stating that, in a first-encounter scenario between a customer and a chatbot implemented to represent a company, social features do have an effect on the likelihood that the customer will recommend the company to his or her network; no evidence has been found for an effect of the technological features used in this experiment. As expected, regarding the social perspective, interacting with a friendly chatbot makes customers more inclined to advocate for the company, with Social Presence acting as a mediator. A chatbot communicating in a friendly way presents itself more clearly as a social and active entity, a finding that can be considered in light of Araujo's (2018) result that social presence predicts emotional outcomes in the customer towards the company (see also Fang, Chen, Wen, & Prybutok, 2018, and Osei-Frimpong & Mclean, 2018). On the other hand, regarding the technological perspective of this experiment, no evidence was found that programming the chatbot to instruct the user throughout the interaction influences Perceived Ease Of Use or the Individual Promoter Score.

The significance of the effect of Perceived Ease Of Use on the Individual Promoter Score, however, suggests that although the pilot test and the manipulation check were both successful, the limitation might lie in the concept behind Presence Of Guidance. Moreover, Perceived Ease Of Use, with an average value of 4.26 in the sample, presents a negative skew, suggesting that a ceiling effect for this variable might have taken place: the chatbot Glenda was easy to use, but not because of the guidance it provided. A possible explanation is that, since it was made explicit that Glenda's task was to let users book their movie tickets, participants chose seats and movies that had been previously programmed as Entities for the chatbot in Dialogflow, as explained in the Design section. With this implicit form of guidance, the participants never found themselves interacting with a chatbot that could not understand what they were requesting and that returned frustrating error messages obstructing the interaction. This matters because, in the event of service failure, the role that Social Presence plays is far from positive: it has been demonstrated to elicit higher negative Word of Mouth (He, Hu, Chen, Alden, & He, 2017). Another possible explanation for the ceiling effect in the measurement of Perceived Ease Of Use could be the young average age of the participants in this experiment, 28 years old, as newer generations are generally more tech-savvy than older ones (Hanson, 2011). The sample for this research has an average age that is too low to be generalizable to senior users: a study by Sundar, Jung, Waddell, and Kim (2017) found that senior citizens tend to prefer a serious chatbot over a friendly one that cheers and shows vivacity.

With the ultimate goal of maximizing positive Word of Mouth through the most effective chatbot, future communication science research should collaborate with chatbot developers and AI experts to refine the messaging, for example by modulating the communicative type according to the age, or other characteristics, of the user, as a way to tailor the technology to the user (Sundar, Bellur, & Jia, 2012). As already mentioned, the chatbot XiaoIce was able, as early as 2014, to adapt its replies to the emotional status of the user (Shum et al., 2018): future research could investigate company-related outcomes when a chatbot capable of tailoring its communication to the user is implemented. In conclusion, companies willing to influence positive word of mouth about them should implement a chatbot and commission the corporate communication department to produce the messages the chatbot will provide. These messages, although aligned with the Tone of Voice of the company (Barcelos, Dantas, & Sénécal, 2018; Oh & Ki, 2019), should maintain a certain level of informality and human warmth.

REFERENCES

9 Great Examples of How Brands are Using Chatbots (2018). Retrieved December 24, 2019, from https://www.socialmediatoday.com/news/9-great-examples-of-how-brands-are-using-chatbots/524138/.

Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183–189.

https://doi.org/10.1016/j.chb.2018.03.051

Anderson, C. A. (1995). Implicit theories in broad perspective. Psychological Inquiry, 6(4), 286-289.

Barcelos, R., Dantas, D., & Sénécal, S. (2018). Watch Your Tone: How a Brand’s Tone of Voice on Social Media Influences Consumer Responses. Journal of Interactive Marketing, 41, 60–80. https://doi.org/10.1016/j.intmar.2017.10.001


Baylor, A., & Kim, Y. (2003). The role of gender and ethnicity in pedagogical agent

perception. In E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (pp. 1503-1506). Association for the Advancement of Computing in Education (AACE).

Burnham, T., & Wong, J. (2018). Factors influencing successful net promoter score adoption by a nonprofit organization a case study of the Boy Scouts of America. International review on public and non-profit marketing (Vol. 15, pp. 475–495).

https://doi.org/10.1007/s12208-018-0210-x

Chung, M., Ko, E., Joung, H., & Kim, S. (2018). Chatbot e-service and customer satisfaction regarding luxury brands. Journal of Business Research.

https://doi.org/10.1016/j.jbusres.2018.10.004

Colby, K. M. (1975). Artificial paranoia: A computer simulation of paranoid processes. Behavior Therapy, 7(1), 146–146. https://doi.org/10.1016/S0005-7894(76)80257-2

Colby, K., Hilf, F., Weber, S., & Kraemer, H. (1972). Turing-like indistinguishability tests for the validation of a computer simulation of paranoid processes. Artificial Intelligence, 3(C), 199–221. https://doi.org/10.1016/0004-3702(72)90049-5

Conversational Agent Research Toolkit (CART). (2018). Retrieved January 17, 2020 from https://cart.readthedocs.io/en/latest/

What Is Net Promoter? (2017). Retrieved January 18, 2020 from https://www.netpromoter.com/know/

Cowan, B., Branigan, H., Obregón, M., Bugis, E., & Beale, R. (2015). Voice

anthropomorphism, interlocutor modelling and alignment effects on syntactic choices in human−computer dialogue. International Journal of Human - Computer Studies, 83(C), 27–42. https://doi.org/10.1016/j.ijhcs.2015.05.008

Danurdoro, K., & Wulandari, K. (2016). The Impact of Perceived Usefulness, Perceived Ease of Use, Subjective Norm, and Experience Toward Student’s Intention to Use Internet Banking. Jurnal Ekonomi dan Studi Pembangunan, 8(1), 17–22.

https://doi.org/10.17977/um002v8i12016p017

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008

Eger, L., & Micik, M. (2017). Customer-oriented communication in retail and Net Promoter Score. Journal of Retailing and Consumer Services, 35, 142–149. https://doi.org/10.1016/j.jretconser.2016.12.009

Ekman, P., & Cordaro, D. (2011). What is Meant by Calling Emotions Basic. Emotion Review,

3(4), 364–370. https://doi.org/10.1177/1754073911410740

Etemad-Sajadi, R. (2016). The impact of online real-time interactivity on patronage intention: the use of avatars. Computers in human behavior, 61, 227-232.

Etemad-Sajadi, R., & Ghachem, L. (2015). The impact of hedonic and utilitarian value of online avatars on e-service quality. Computers in Human Behavior, 52, 81–86.

Fang, J., Chen, L., Wen, C., & Prybutok, V. (2018). Co-viewing Experience in Video Websites: The Effect of Social Presence on E-Loyalty. International Journal of Electronic Commerce, 22(3), 446–476. https://doi.org/10.1080/10864415.2018.1462929

Featherman, M., Miyazaki, A., & Sprott, D. (2010). Reducing online privacy risk to facilitate e-service adoption: the influence of perceived ease of use and corporate credibility. Journal of Services Marketing, 24(3), 219–229.


Fishbein, M. & Ajzen, I., (1975). Belief, Attitude, Intention and Behavior, An Introduction to Theory and Research, Addison-Wesley Publishing Company, MA

Gefen, D., & Straub, D. (2003). Managing User Trust in B2C e-Services. E-Service, 2(2), 7–24. https://doi.org/10.2979/ESJ.2003.2.2.7

Go, E., & Sundar, S. (2019). Humanizing chatbots: The effects of visual, identity and

conversational cues on humanness perceptions. Computers in Human Behavior, 97, 304–316. https://doi.org/10.1016/j.chb.2019.01.020

Gong, L., & Nass, C. (2007). When a talking-face computer agent is half-human and half-humanoid: Human identity and consistency preference. Human communication research, 33(2), 163-193.

Gong, L. (2008). How social is social responses to computers? The function of the degree of anthropomorphism in computer representations. Computers in Human Behavior, 24(4), 1494–1509. https://doi.org/10.1016/j.chb.2007.05.007

Hamid, A., Razak, F., Bakar, A., & Abdullah, W. (2016). The Effects of Perceived Usefulness and Perceived Ease of Use on Continuance Intention to Use E-Government. Procedia Economics and Finance, 35(C), 644–649.

https://doi.org/10.1016/S2212-5671(16)00079-4

Hanson, V. (2011). Technology skill and age: what will be the same 20 years from now? Universal Access in the Information Society, 10(4), 443–452.

https://doi.org/10.1007/s10209-011-0224-1

Harrison-Walker, L. (2001). The Measurement of Word-of-Mouth Communication and an Investigation of Service Quality and Customer Commitment As Potential Antecedents. Journal of Service Research, 4(1), 60–75. https://doi.org/10.1177/109467050141006

Hayes, A. F. (2012). PROCESS: A versatile computational tool for observed variable mediation, moderation, and conditional process modelling. Retrieved January 26, 2020, from http://www.afhayes.com/public/process2012.pdf

He, Y., Hu, M., Chen, Q., Alden, D., & He, W. (2017). No Man is an Island: the Effect of Social Presence on Negative Word of Mouth Intention in Service Failures. Customer Needs and Solutions, 4(4), 56–67. https://doi.org/10.1007/s40547-017-0078-7

Homer, B., Plass, J., & Blake, L. (2008). The effects of video on cognitive load and social presence in multimedia-learning. Computers in Human Behavior, 24(3), 786–797. https://doi.org/10.1016/j.chb.2007.02.009

Ikemoto, Y., Asawavetvutt, V., Kuwabara, K., & Huang, H. (2019). Tuning a conversation strategy for interactive recommendations in a chatbot setting. Journal of Information and Telecommunication, 3(2), 180–195.

https://doi.org/10.1080/24751839.2018.1544818

Kaczorowska-Spychalska, D. (2019). How chatbots influence marketing. Management, 23(1), 251–270. https://doi.org/10.2478/manment-2019-0015

Kim, J., Kwon, Y., & Cho, D. (2011). Investigating factors that influence social presence and learning outcomes in distance higher education. Computers & Education, 57(2), 1512–1520. https://doi.org/10.1016/j.compedu.2011.02.005

Kim, Y., & Sundar, S. (2012). Anthropomorphism of computers: Is it mindful or mindless? Computers in Human Behavior, 28(1), 241–250.

https://doi.org/10.1016/j.chb.2011.09.006

Korneta P. (2018). Net promoter score, growth, and profitability of transportation companies. International Journal of Management and Economics, 54(2), 136–148.

https://doi.org/10.2478/ijme-2018-0013


Journal of Library Administration, 58(4), 394–406. https://doi.org/10.1080/01930826.2018.1448655

Lee, E. (2010). What Triggers Social Responses to Flattering Computers? Experimental Tests of Anthropomorphism and Mindlessness Explanations. Communication Research, 37(2), 191–214. https://doi.org/10.1177/0093650209356389

Leong, P. (2011). Role of social presence and cognitive absorption in online learning environments. Distance Education, 32(1), 5–28. https://doi.org/10.1080/01587919.2011.565495

Li, S., & Chen, Y. (2018). The Relationship Between Psychological Contract Breach and Employees’ Counterproductive Work Behaviors: The Mediating Effect of

Organizational Cynicism and Work Alienation. Frontiers In Psychology, 9, 1273. https://doi.org/10.3389/fpsyg.2018.01273.

Lyons, A., Reysen, S., & Pierce, L. (2012). Video lecture format, student technological efficacy, and social presence in online courses. Computers in Human Behavior, 8(1), 181–186. https://doi.org/10.1016/j.chb.2011.08.025

Moslehpour, M., Pham, V., Wong, W., & Bilgiçli, I. (2018). e-Purchase Intention of Taiwanese Consumers: Sustainable Mediation of Perceived Usefulness and Perceived Ease of Use. Sustainability, 10(1). https://doi.org/10.3390/su10010234.

Michaels, V. (2017, September 26). How And Why Chatbots Will Dominate Social Media Marketing. Retrieved January 10, 2020 from https://chatbotsmagazine.com/how-and-why-chatbots-will-dominate-social-media-marketing-daf927319c4.

Nass, C. I., & Brave, S. (2005). Wired for speech: How voice activates and advances the human-computer relationship (p. 9). Cambridge, MA: MIT press.

Nass, C., & Lee, K. M. (2001). Does computer-synthesized speech manifest personality? Experimental tests of recognition, similarity-attraction, and consistency-attraction. Journal of experimental psychology: applied, 7(3), 171.

Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of social issues, 56(1), 81-103.

Nass, C., Moon, Y., & Green, N. (1997). Are machines gender neutral? Gender‐stereotypic responses to computers with voices. Journal of applied social psychology, 27(10), 864-876.

Nasser Al-Suqri, M. (2014). Perceived usefulness, perceived ease-of-use and faculty acceptance of electronic books. Library Review, 63(4/5), 276–294.

https://doi.org/10.1108/LR-05-2013-0062.

Net Promoter Score - Your Word of Mouth Index (2016). Retrieved December 25, 2019, from https://gatherup.com/blog/net-promoter-score-word-of-mouth-index/

Nowak, K., & Biocca, F. (2003). The Effect of the Agency and Anthropomorphism on Users’ Sense of Telepresence, Copresence, and Social Presence in Virtual Environments. Presence: Teleoperators & Virtual Environments, 12(5), 481–494.

https://doi.org/10.1162/105474603322761289

Oh, J., & Ki, E. (2019). Factors affecting social presence and word-of-mouth in corporate social responsibility communication: Tone of voice, message framing, and online medium type. Public Relations Review, 45(2), 319–331.

https://doi.org/10.1016/j.pubrev.2019.02.005

Osei-Frimpong, K., & Mclean, G. (2018). Examining online social brand engagement: A social presence theory perspective. Technological Forecasting & Social Change, 128, 10–21. https://doi.org/10.1016/j.techfore.2017.10.010


implications for market knowledge in B2B marketing. Journal of Business & Industrial Marketing, 34(7), 1410–1419. https://doi.org/10.1108/JBIM-10-2018-0295

Price, L. L., Arnould, E. J., & Deibler, S. L. (1995). Consumers’ emotional responses to service encounters: the influence of the service provider. International Journal of Service Industry Management, 6(3), 34-63.

Raassens, N., & Haans, H. (2017). NPS and Online WOM: Investigating the Relationship Between Customers’ Promoter Scores and eWOM Behavior. Journal of Service Research, 20(3), 322–334. https://doi.org/10.1177/1094670517696965

Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press.

Reichheld, F. (2003). The one number you need to grow. Harvard Business Review, 81(12), 46–54, 124.

Reichheld, F. (2006). The microeconomics of customer relationships. MIT Sloan Management Review, 47(2), 73–78. Retrieved January 24, 2020 from https://dialnet.unirioja.es/servlet/oaiart?codigo=1419840

Reichheld, F., & Markey, R. (2011). The Ultimate Question 2.0: How Net Promoter Companies Thrive in a Customer Driven World. Boston, MA: Harvard Business Review Press.

Richardson, J., Maeda, Y., Lv, J., & Caskurlu, S. (2017). Social presence in relation to students’ satisfaction and learning in the online environment: A meta-analysis. Computers in Human Behavior, 71, 402–417.

https://doi.org/10.1016/j.chb.2017.02.001

Short, J., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. John Wiley & Sons.

Shum, H., He, X., & Li, D. (2018). From Eliza to XiaoIce: challenges and opportunities with social chatbots. Frontiers of Information Technology & Electronic Engineering, 19(1), 10–26. https://doi.org/10.1631/FITEE.1700826

Smyth, B., & McGinty, L. (2003). An analysis of feedback strategies in conversational

recommenders. The 14th Irish conference on artificial intelligence & cognitive science (AICS 2003), 211–216. Dublin, Ireland.

Stander, F. W. (2016). A case for loyalty-based relational business models: Assessing direct and mediating effects of the Net Promoter Score (NPS) metric in commercial football consumption decisions. African Journal of Hospitality, Tourism and Leisure, 5(4).

Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. Digital media, youth, and credibility, 73–100.

Sundar, S., Bellur, S., & Jia, H. (2012). Motivational technologies: A theoretical framework for designing preventive health applications. Lecture Notes in Computer Science, 7284, 112–122. https://doi.org/10.1007/978-3-642-31037-9_10

Sundar, S., Jung, E., Waddell, T., & Kim, K. (2017). Cheery companions or serious assistants? Role and demeanor congruity as predictors of robot attraction and use intentions among senior citizens. International Journal of Human-Computer Studies, 97, 88–97. https://doi.org/10.1016/j.ijhcs.2016.08.006

Trivedi, J. (2019). Examining the Customer Experience of Using Banking Chatbots and Its Impact on Brand Love: The Moderating Role of Perceived Risk. Journal of Internet Commerce, 18(1), 91–111. https://doi.org/10.1080/15332861.2019.1567188

Tummers, L., Bekkers, V., van Thiel, S., & Steijn, B. (2015). The Effects of Work Alienation …, 47(5), 596–617. https://doi.org/10.1177/0095399714555748

Tummers, L., & Den Dulk, L. (2013). The effects of work alienation on organisational commitment, work effort and work‐to‐family enrichment. Journal of Nursing Management, 21(6), 850–859. https://doi.org/10.1111/jonm.12159

Turing, A. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460.

Van Den Broeck, E., Zarouali, B., & Poels, K. (2019). Chatbot advertising effectiveness: When does the message get through? Computers in Human Behavior, 98, 150–157. https://doi.org/10.1016/j.chb.2019.04.009

Verhagen, T., Van Nes, J., Feldberg, F., & Van Dolen, W. (2014). Virtual Customer Service Agents: Using Social Presence and Personalization to Shape Online Service Encounters. Journal of Computer‐Mediated Communication, 19(3), 529–545. https://doi.org/10.1111/jcc4.12066

Weidlich, J., & Bastiaens, T. (2017). Explaining social presence and the quality of online learning with the SIPS model. Computers in Human Behavior, 72(C), 479–487. https://doi.org/10.1016/j.chb.2017.03.016

Weizenbaum, J. (1966). ELIZA-a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168


APPENDIX B: four versions of Glenda

Screenshots: Friendly; Friendly with Guidance


APPENDIX D: movies table

Star Wars: The Rise of Skywalker (Action, 2h35m). The conflict between the Jedi and the Sith comes to its turning point, bringing the Skywalker saga to a definitive end.

Fyre (Documentary, 1h37m). A behind-the-scenes look at the Fyre Music Festival, from its creation to its disaster.

Haunt (Horror, 1h32m). On Halloween, a group of friends encounter an extreme haunted house that promises to feed on their darkest fears.

Joker (Thriller, 2h2m). Isolated and bullied by society, Arthur Fleck descends into madness as he transforms into the genius criminal known as the Joker.

My Dear Liar (Romance, 1h56m). To rescue his asthmatic 6-year-old son, Wuhai and his friend Zhong get close to a low-class cam girl to scam her with a fake marriage.

Frozen 2 (Children, 1h43m). Elsa the Snow Queen and Anna go on an adventure far away from Arendelle. Their friends Kristoff, Olaf, and Sven will join them in this courageous adventure.

The Irishman (Crime, 3h30m). In the 1950s, truck driver Frank Sheeran gets involved with Russell Bufalino and his Pennsylvania crime family, climbing the ranks to become a top hit man.

Cats (Musical, 2h). The story of a tribe of cats called the Jellicles and the night they make the "Jellicle choice", deciding which cat will come back to a new life.

Tolkien (Biography, 1h52m). The early life experiences of the young J.R.R. Tolkien that inspired the author to write the classic fantasy novels "The Hobbit" and "The Lord of the Rings."

Jumanji: The Next Level (Adventure, 1h59m). The four players brave the jungle, desert, mountains and dangerous animals to save the fantastical video game world of Jumanji.

The King (History, 2h20m). Henry, Prince of Wales, is the son of King Henry IV of England. Uninterested in succeeding his father, he spends his days drinking, whoring, and jesting.

Harriet (History). The story of Harriet Tubman, from escaping from slavery to the dangerous


APPENDIX F: survey questions

Perceived Ease Of Use items, Strongly Disagree (1) to Strongly Agree (5):
PEOU_1 Operating Glenda has been easy for me
PEOU_2 I found it easy to get Glenda to do what I want it to do
PEOU_3 My interaction with Glenda was clear and understandable
PEOU_4 I found Glenda to be flexible to interact with
PEOU_5 It would be easy for me to become skillful at using Glenda
PEOU_6 I found Glenda easy to use

Social Presence items, Strongly Disagree (1) to Strongly Agree (5):
SP_1 In Glenda there is a sense of human contact
SP_2 In Glenda there is a sense of personalness
SP_3 In Glenda there is a sense of sociability
SP_4 In Glenda there is a sense of human warmth
SP_5 In Glenda there is a sense of human sensitivity

Individual Promoter Score, Not at all likely (0) to Extremely likely (10):
IPS On a scale from 0-10, how likely are you to recommend the Cinema Shel to a friend?

Control variables:
X_1 Do you have experience in interacting with chatbots? Not at all (1) to A lot (5)
X_2 Overall, how would you rate your experience with chatbots? Very negative (1) to Very positive (5)
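The IPS item records the standard 0-10 likelihood-to-recommend question on which the Net Promoter Score is built. At the organizational level, individual responses are conventionally aggregated by classifying scores of 9-10 as promoters and 0-6 as detractors; the sketch below (illustrative Python, not part of this study's analysis pipeline) shows that conventional aggregation.

```python
def net_promoter_score(ratings):
    """Aggregate 0-10 likelihood-to-recommend ratings into a Net Promoter Score.

    Promoters score 9-10, detractors 0-6; passives (7-8) are counted in the
    total but contribute to neither group. The result is the percentage of
    promoters minus the percentage of detractors, ranging from -100 to +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Two promoters (10, 9), two passives (8, 7), two detractors (6, 3):
print(net_promoter_score([10, 9, 8, 7, 6, 3]))  # 0.0
```

In this study the individual 0-10 response is analyzed directly as the Individual Promoter Score rather than aggregated across respondents, so the function only illustrates how the organizational-level metric relates to the survey item.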


APPENDIX G: correlation with control variables

Correlations matrix (N=201)

Variable | M | SD | SP | PEOU | IPS
Level of Experience with chatbots | 3.28 | 1.23 | -0.42 | .01 | .01
Rating of past Experience with chatbots | 3.39 | .98 | .30*** | .27*** | .35***
Social Presence (SP) | 2.76 | .91 | / | .39*** | .47***
Perceived Ease Of Use (PEOU) | 4.26 | .63 | .39*** | / | .50***
Individual Promoter Score (IPS) | 7.53 | 1.87 | .47*** | .50*** | /
