
“Do you have a complaint? A chatbot will satisfy your needs!”

An investigative study exploring the effects of response strategies and anthropomorphic characteristics of chatbots on customer satisfaction

Barbara De Bie (10420940)

Master’s Thesis

Graduate School of Communication, Master’s Programme Communication Science

Supervisor: Dhr. dr. Theo Araujo

28th of June 2019


Abstract

Today, many companies and services are implementing chatbots. This technology offers clear benefits, especially for customers, since chatbots provide a time-saving option to communicate with companies 24/7. However, little is still known about the effects chatbots may have on the level of customer satisfaction. This study therefore examines the effects of different chatbot response strategies (accommodating vs. defensive) on customer satisfaction. In addition, other factors that might influence this effect were examined: the perceived interactivity and the perceived helpfulness were explored as mediating factors, whilst the anthropomorphism of the chatbot (human-like vs. machine-like) and the pre-existing attitude towards the company were explored as moderating factors. The present study builds on the Expectancy Disconfirmation Theory (EDT), the Situational Crisis Communication Theory (SCCT) and the Computers Are Social Actors paradigm (CASA). An online experiment was conducted (N = 127) with functioning chatbots built with current technology, in which participants held a conversation with a chatbot concerning a customer complaint. The results showed that the accommodative response strategy of the chatbot had a positive effect on customer satisfaction, mediated by the perceived interactivity and the perceived helpfulness. The anthropomorphic characteristics of the chatbot and the pre-existing attitude towards the company did not moderate that effect. Based on these results, developers and companies may be expected to emphasize the creation of new chatbots with accommodating responses. This research can be seen as a precursor for future research on chatbot technology and other artificial intelligence systems and their effect on customer satisfaction.

Keywords: Chatbot, customer satisfaction, response strategy, anthropomorphism, perceived interactivity, perceived helpfulness, attitude towards the company.


Introduction

As a result of several forces, such as technological developments and globalization, society has shifted from a product-based economy to a more service-based economy (Kotler, 2000; Rust & Huang, 2014). Companies have therefore realized that delivering good service is essential when operating in highly competitive environments. For that reason, the construct of customer satisfaction has become one of the key performance indicators of a company (Mihelis, Grigoroudis, Siskos, Politis, & Malandrakis, 2001; Grigoroudis & Siskos, 2009; Curtis, Abratt, Dion, & Rhoades, 2011). In addition, the confirmation by the three tech giants, Microsoft, Google and Facebook, that chatbots are the rising means of interacting with data and services has prompted many companies to implement chatbots (Følstad, Brandtzaeg, Feltwell, Law, Tscheligi, & Luger, 2018).

Chatbots are defined as ‘systems that are designed for extended conversations, set up to mimic the unstructured conversations or ‘chats’ characteristic of human-human interaction’ (Jurafsky & Martin, 2014, p. 425). Chatbots are embedded with natural language processing technology and artificial intelligence and are able to identify sentences and questions in order to create an immediate response and answer; their response matches the input of the user (Setiaji & Wibowo, 2016). The first natural language processing computer program was created as early as 1966 by Joseph Weizenbaum (Weizenbaum, 1966). The initial function of chatbots was to interact with their users and to respond to their questions (Go & Sundar, 2019; Ciechanowski, Przegalinska, Magnuski, & Gloor, 2019); however, today’s chatbots are more advanced, can offer more sophisticated advice to the user and can create a more interactive conversation. Chatbots are seen as a major source of innovation using artificial intelligence (Huang & Rust, 2018). The advancements in chatbots have even increased the prospect that human labor and jobs are at stake (Huang & Rust, 2018). According to Vukovic and Dujlovic (2016) an advantage of chatbots is that they might eventually be able to replace the human influence in customer support and decrease company expenses. Other advantages of chatbots are the increase in productivity, the creation of new employment opportunities, public service delivery and, lastly, the improvement of satisfaction through the personalization and the 24/7 availability of these applications (Androutsopoulou, Karacapilidis, Loukis, & Charalabidis, 2019). Even though many studies have considered chatbots highly applicable for customer services (Cui, Huang, Wei, Tan, Duan, & Zhou, 2017; Verhagen, Van Nes, Feldberg, & Van Dolen, 2014; Xu, Liu, Guo, Sinha, & Akkiraju, 2017; Brandtzaeg & Følstad, 2017), developers still face general challenges regarding how a chatbot should be designed to achieve the best results.

First, it is important to understand what the response strategy of a chatbot in a conversation should be. Studies have shown that the type of response is important for the customers’ evaluation of the company, including the customers’ satisfaction and their loyalty towards the company (Conlon & Murray, 1996; Coombs, 1999; Lee & Song, 2010). Since the arrival of the Internet, customers can easily spread negative information or complaints about a company, its products or its services (Lee & Song, 2010). Deriving from the Situational Crisis Communication Theory (SCCT) of Coombs (2007), a company’s correct and proactive action or response during a complaint or a negative event is important to protect or improve the company’s reputation and to possibly avoid further criticism towards the company (Davidow, 2003; Casarez, 2002; Clark, 2001; Homburg & Fürst, 2007).

Second, it is important to understand whether the chatbot should be anthropomorphized in a conversation. Deriving from the Computers Are Social Actors (CASA) paradigm (Nass, Moon, Morkes, Kim, & Fogg, 1997), increasing the ‘social presence’ of a chatbot by equipping it with human characteristics can make the chatbot appear more human-like (Heerink, Kröse, Evers, & Wielinga, 2010). The chatbot can even develop a deeper connection with the user (Shin, 2013).


As a result of these findings, the aim of this research is to give insight into the design of a chatbot by looking at the effect of the type of response strategy and the anthropomorphic design on customer satisfaction. Moreover, since the initial function of chatbots was to interact with and respond to their users, it can be assumed that other factors might influence that effect. For instance, does a conversation with a chatbot feel like two-way communication or more like one-way communication? And does the response of the chatbot help the customer? Therefore, the perceived interactivity and the perceived helpfulness of the chatbot are included in this research as mediators. In addition, since chatbots usually represent a company or a service, it can be assumed that previously formed attitudes towards the service or the company can influence the effect on customer satisfaction.

In summary, the current research will answer the following research question: ‘To what extent does the response strategy of a chatbot affect customer satisfaction? How do the anthropomorphic design of a chatbot and the pre-existing attitude towards the company moderate that effect, and how do the perceived interactivity and perceived helpfulness mediate that effect?’

Hence, this research will make several scientific and societal contributions. It aims to fill a knowledge gap in the chatbot literature by being the first study to test the combination of the type of response strategy and the anthropomorphic design of a chatbot on customer satisfaction. Furthermore, this research will extend previous literature on customer satisfaction with the Expectancy Disconfirmation Theory (EDT), on the type of response strategy with the SCCT, and on the anthropomorphizing of chatbots with the CASA paradigm. These findings will be relevant for developers who create new and advanced chatbots. Specifically, the findings will help determine how chatbots should be built to provide added value to companies and services in the future.


Theoretical Framework

Customer satisfaction

Customer satisfaction has been a central concept in the field of marketing for many years. According to Anderson, Fornell and Lehmann (1994) there are two different conceptualizations of customer satisfaction, namely transaction-specific and cumulative. In the transaction-specific point of view, customer satisfaction is perceived as the customer’s evaluative judgment of their experience with, and reactions to, a specific product transaction, episode, or service encounter (Hunt, 1977; Oliver, 1977, 1980, 1993; Johnson, 2015). In the cumulative point of view, customer satisfaction is perceived as an overall evaluation based on the total purchase and consumption experience with a good or service over time (Fornell, 1992; Johnson & Fornell, 1991). Based on these two conceptualizations, transaction-specific customer satisfaction provides an explicit diagnosis of a single product or service encounter, whilst cumulative customer satisfaction is more an intrinsic indicator of the organization’s past, current and future performance. In this research both conceptualizations of customer satisfaction will be pursued, since they are complementary rather than competing (Johnson, 2015).

One of the important theories behind customer satisfaction is the Expectancy Disconfirmation Theory (EDT) of Oliver. Customers initially form an expectation, then observe the performance of the product or service, compare this with their initial expectation and ultimately combine this information with other levels of expectations to judge their satisfaction (Oliver, 1977, 1980, 1993). A positive disconfirmation increases customer satisfaction (the perceived performance exceeds the expectation), whilst a negative disconfirmation decreases customer satisfaction (the perceived performance falls short of the expectation). An important model for customer satisfaction is the American Customer Satisfaction Index model (ACSI) from Fornell, Johnson, Anderson, Cha and Bryant (1996). This model focuses on three key factors, namely perceived quality, perceived value and customer expectations, which are able to decrease customer complaints and increase customer loyalty and satisfaction.
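The disconfirmation logic of the EDT can be summarized compactly. The notation below is chosen here for illustration only and is not taken from Oliver (1977, 1980, 1993):

```latex
% Disconfirmation D for a single encounter: perceived performance P minus prior expectation E
D = P - E
% Satisfaction S rises with positive disconfirmation (D > 0) and falls with negative disconfirmation (D < 0)
S = \alpha + \beta_{1} E + \beta_{2} D, \qquad \beta_{2} > 0
```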

Response strategies

Customer complaints and negative information can easily be spread since the arrival of Web 2.0 (Lee & Song, 2010). The SCCT is a theory that predicts effective crisis communication responses. These responses can be used to repair a company’s reputation, reduce negative affect and prevent negative behavioral intentions (Coombs, 2007). A company’s correct action or response during a negative event is extremely important in order to protect or improve the company’s reputation (Davidow, 2003). A company’s response strategy can range from a defensive response strategy, putting the organization first, to an accommodative response strategy, putting customers’ concerns first (Coombs, 1999; Marcus & Goodman, 1991; Lee & Song, 2010). An accommodative response strategy, such as an apology or compensation after a negative event, can help to restore the image of a company, including customer satisfaction and loyalty towards the company (Coombs, 1999; Conlon & Murray, 1996; Griffin, Babin, & Darden, 1992; Lee, 2005). A defensive response strategy could intensify the complaint, lead to higher dissatisfaction with the customer and even damage the company’s reputation (Lee & Song, 2010). Clearly, an accommodative response is preferred over a defensive response when a negative event occurs. Therefore, one might assume that when a negative event occurs, customers will apply the same rules to conversations with chatbots as to conversations with humans. In addition, an accommodating response of a chatbot in a conversation will exceed the customers’ expectations and lead to satisfaction, whilst a defensive response of a chatbot will fall short of customers’ expectations and lead to dissatisfaction. The following hypothesis was formulated:


H1. ‘A chatbot with an accommodative response strategy will lead to a higher customer satisfaction compared to a chatbot with a defensive response strategy’

Pre-existing attitudes

Customers often already have a pre-existing attitude towards a company, regardless of whether they have ever communicated or interacted with it (Mazaheri, Basil, Yanamandram, & Daroczi, 2011). A general description of attitudes is ‘an overall summary evaluation’ (Petty, Wegener, & Fabrigar, 1997). Earlier research has shown that pre-existing attitudes can impact how customers process new information (Asch, 1946; Allport, 1935). According to Yi (1993), pre-existing attitudes influence how ambiguous information will be processed. Ambiguity can occur when an experience with a service does not lead to a clear, consistent interpretation (Hoch & Deighton, 1989). Therefore, when a customer has a complaint, the response of a company can be interpreted in ambiguous ways: as helpful or as confrontational (Mazaheri et al., 2011). With this in mind, customers tend to process new information in a way that is coherent with their pre-existing attitudes (Judd, Kenny, & Krosnick, 1983). Since customer satisfaction is based upon positive and negative disconfirmations, it can be expected that someone with a negative pre-existing attitude will interpret a defensive response as a negative disconfirmation more readily than someone with a positive pre-existing attitude (Mazaheri et al., 2011). As such, the pre-existing attitude towards a company can moderate the effect on customer satisfaction. A participant with a positive pre-existing attitude towards a company will experience the conversation in a more positive light, which will influence the effect on customer satisfaction. The following hypothesis was formulated:

H2. ‘A chatbot with an accommodating response strategy will lead to a higher customer satisfaction compared to a chatbot with a defensive response strategy and this effect is stronger for customers with a positive pre-existing attitude, compared to a negative pre-existing attitude’.

Anthropomorphism of chatbots

According to the CASA paradigm, people apply a variety of social rules to computers. Whenever human-computer interaction involves cues that are normally associated with human-human interaction, a user can mindlessly rely on the schemas associated with interpersonal interaction (Nass, Moon, & Carney, 1999). These cues, such as voice, language or interactivity, transfer ‘social presence’ to the user. Social presence is defined as ‘a psychological state in which virtual social actors are experienced as actual social actors in either sensory or nonsensory ways’ (Lee, 2004, p. 27). Users can feel a strong sense of social presence during a conversation with a robot when it has human cues or characteristics, such as a voice, a language or interactivity (Nass & Moon, 2000; Skalski & Tamborini, 2007). The attribution of human characteristics, human forms, or human behavior to non-human things is referred to as anthropomorphism (Bartneck, Kulić, Croft, & Zoghbi, 2009). This entails that chatbots can be ‘anthropomorphized’ by the user. Studies have shown that avatars with anthropomorphic cues appear more intelligent and more credible than avatars without anthropomorphic cues (Koda & Maes, 1995; Nowak & Rauh, 2005). Likewise, the likeability of a robot increases whenever the robot expresses more emotions in a human-like manner (Siino, Chung, & Hinds, 2008). Lastly, assigning human-like features to chatbots has been found to improve users’ perceptions and build understanding (Strait, Vujovic, Floerke, Scheutz, & Urry, 2015). Therefore, the assumption is that anthropomorphic characteristics of a chatbot moderate the effect on customer satisfaction, which has led to the following hypothesis:


H3. ‘A chatbot with an accommodating response strategy will lead to a higher customer satisfaction compared to a chatbot with a defensive response strategy and this effect is stronger for customers with a human-like chatbot, compared to a machine-like chatbot’.

Perceived Interactivity

Liu (2003) refers to perceived interactivity as the extent to which individuals perceive that the communication allows them to feel in control, as if they can communicate synchronously and reciprocally with the communicator. A computer or system is no longer seen as a simple mediator of communication, but as a source of interaction (Sundar, Bellur, Oh, Jia, & Kim, 2016). The conversation between a human and a chatbot is dependent on the user’s input, which will determine how the conversation flows and how it is reciprocated. Lee and Choi (2017) showed that reciprocity and self-disclosure are strong predictors of perceived interactivity, relationship building and users’ satisfaction. Self-disclosure can be defined as ‘the process of revealing thoughts, opinions, emotions or personal information to others’ (Pearce & Sharp, 1973). In a human-human conversation people share information with each other, and self-disclosure is necessary in order to build a relationship (Greene, Derlega, & Mathews, 2006). According to Moon (2000) and Joinson (2001) this also occurs in computer-mediated communication. Additionally, how close humans are to one another plays an important role in buffering the negative effects of harmful messages (McLaren & Solomon, 2008; Vangelisti & Young, 2000). Recently, Jin (2019) even showed that individuals who had chatted with a chatbot before experienced less negative effects when the chatbot responded with negative messages. Therefore, one might assume that if a negative event occurs and the conversation is perceived as interactive, this will mediate the effect on customer satisfaction. The hypothesis is as follows:


H4a. ‘A chatbot with an accommodative response strategy will lead to a higher customer satisfaction, in comparison to a chatbot with a defensive response strategy, mediated through the perceived interactivity’.

Moreover, according to Luger and Sellen (2016) human-computer interaction still feels restricted and limited to users, compared to real human-human interaction. However, Go and Sundar (2019) showed that when a chatbot was designed with anthropomorphic cues, participants showed more favorable evaluations in a highly interactive conversation compared to a less interactive conversation. Human-like characteristics will transfer ‘social presence’ to users, and therefore users will perceive the conversation with a human-like chatbot as more interactive and similar to human-human communication. The following hypothesis was formulated:

H4b. ‘The anthropomorphism of a chatbot will moderate the effect of the response strategy, mediated by the perceived interactivity, on customer satisfaction’

Perceived helpfulness

Coyle, Smith and Platt (2012) defined perceived helpfulness as ‘the degree to which the responses in a communication are perceived helpful, and resolving the information needed of the interaction episode or event’ (p. 29). When companies engage in online conversations, consumers will evaluate how helpful their responses are. The expectations of the helpfulness of a company’s responses are deeply embedded within the consumer and will persist when companies are responsive. If a company fails to be responsive to a consumer, the expectations of helpfulness will be obstructed, which affects the trustworthiness of the company, its perceived benevolence and the intentions to buy the product (Coyle et al., 2012). The perceived helpfulness of responses can also be applied to this study. One might therefore assume that if the conversation with the chatbot is perceived as helpful, this will mediate the effect on customer satisfaction. Therefore, the following hypothesis was formulated:

H5a. ‘A chatbot with an accommodative response strategy will lead to a higher customer satisfaction, in comparison to a chatbot with a defensive response strategy, mediated through the perceived helpfulness’.

Furthermore, according to Merkle (2019) customers experienced the same level of satisfaction when they had a conversation with a service robot as with an employee. After a service failure, however, customers assumed that employees were better capable of helping and handling unexpected situations than a robot, since a robot is tied to its programming (Merkle, 2019). Notably, perceived helpfulness has been seen as an important asset in online customer assistance and chatbots (Zarouali, Van den Broeck, Walrave, & Poels, 2018; Van den Broeck, Zarouali, & Poels, 2019). Yet, not many studies have examined the effect of the perceived helpfulness of a chatbot on customer satisfaction. The assumption here is that a human-like chatbot will moderate the mediation of perceived helpfulness on customer satisfaction. Therefore, the following hypothesis was formulated:

H5b. ‘The anthropomorphism of a chatbot will moderate the effect of the response strategy, mediated by the perceived helpfulness, on customer satisfaction’

Conceptual model

After the literature review on customer satisfaction, chatbots, response strategies and anthropomorphism, a gap was identified regarding the effects of the response strategies and anthropomorphic characteristics of chatbots on customer satisfaction. Scientists have not explored these possible effects yet. It is therefore interesting to further explore the relationship between these concepts, since many companies are implementing chatbots and other artificial intelligence technologies in their organizations. To address this scientific gap, the following conceptual framework was constructed:

Figure 1. Conceptual model

Method

Design

The experimental research design for this study was a 2 (response strategy: accommodative vs. defensive) x 2 (anthropomorphic design: human-like vs. machine-like) between-subjects factorial design. The experiment consisted of four conditions, as shown in Table 1. A complete overview of the final questionnaire is presented in Appendix 1.


Table 1. Factorial design of the experiment

                          Anthropomorphic design
Response strategy         Human-like (N)         Machine-like (N)
Accommodative             Group 1 (n = 30)       Group 3 (n = 34)
Defensive                 Group 2 (n = 33)       Group 4 (n = 30)

Data collection and sample

The experiment ran from the 7th of June, 2019, until the 13th of June, 2019; the data were collected over seven days. The participants of this research could exclusively be adult Dutch residents or native Dutch speakers, since the conversation with the chatbot was in Dutch. Further, the Dutch postal service PostNL was selected, since it received the most customer complaints of all Dutch companies according to the Consumentenbond (Consumentengids, 2019). Therefore, a complaint about PostNL was incorporated into the scenario for the conversation.

The participants were mainly recruited through the researcher’s own network, which resulted in non-probability sampling. Specifically, the methods of self-selection sampling, snowball sampling and convenience sampling were used. The participants were approached through Facebook, Instagram, email, WhatsApp and offline. A full description of the recruitment of the participants can be found in Appendix 2. A total of 150 participants participated in the online experiment. However, 23 participants were removed from the sample since their questionnaire was incomplete or they did not provide a conversation code. The final sample consisted of 127 participants, with ages ranging from 19 to 67 years old (M = 26.96, SD = 8.54). Of the participants, 88 were female (69.3%) and 39 were male (30.7%). The educational background of the participants was relatively high, since 66 participants held a WO Master’s degree (52.0%).


Pretest

The pretest (N = 23) was used to determine which complaint would be implemented in the scenario for the conversation, which icons were seen as more human-like or more machine-like, and which responses were perceived as accommodating or defensive. The participants were recruited using the same sampling techniques as in the main questionnaire. The results are shown in Table 2.

First, the participants needed to agree with the terms of the pretest to participate. Next, the four most often-occurring customer complaints about PostNL needed to be categorized (Consumentengids, 2019). The participants needed to rank the four complaints, with number one being the most irritating complaint and number four the least irritating complaint. According to the results, the complaint ‘Mailman incorrectly indicates that you were at home’ was experienced as the most irritating complaint. Therefore, a scenario was created using that complaint.

Secondly, four icons with human-like and machine-like characteristics were selected. The participants rated each icon on the pre-existing scales of Powers and Kiesler (2006): natural – unnatural, human-like – machine-like and life-like – artificial. The two icons that scored best on these scales were selected for the final experiment.

Lastly, to find out which responses would be seen as accommodating or defensive, the accommodative-defensive continuum of Coombs (1998) was used. Only six responses appeared to be suitable for the experiment. A more detailed description of the results and an overview of the questionnaire of the pretest can be found in Appendix 3.


Table 2. Results of the pretest

Construct / items                                            M        SD       Loading    α

Complaints
  ‘Mailman incorrectly indicates that you were at home’      1.83     .78
  ‘Packet is never delivered or missing’                     2.13     1.10
  ‘It is unclear when the package will be delivered again’   3.17     .89
  ‘The Track & Trace code is unclear or wrong’               2.87     1.18

Anthropomorphizing of the icon
  Icon 1 (human)                                             (3.88)   (1.01)              .73
    Natural – unnatural                                      4.00     1.28     .77
    Human-like – machine-like                                3.17     1.23     .82
    Life-like – artificial                                   4.48     1.24     .83
  Icon 2 (human)                                             (4.14)   (1.17)              .70
    Natural – unnatural                                      4.39     1.23     .78
    Human-like – machine-like                                3.39     1.78     .88
    Life-like – artificial                                   4.65     1.37     .71
  Icon 3 (machine)                                           (5.97)   (1.05)              .94
    Natural – unnatural                                      5.74     1.25     .92
    Human-like – machine-like                                6.09     .97      .95
    Life-like – artificial                                   6.09     1.08     .97
  Icon 4 (machine)                                           (5.72)   (1.34)              .97
    Natural – unnatural                                      5.43     1.41     .96
    Human-like – machine-like                                5.96     1.26     .98
    Life-like – artificial                                   5.78     1.45     .99

Accommodating-defensive statements
  Accommodating                                              (1.86)   (.83)               .76
    ‘How awful! We sincerely apologize, how would you like to have it solved?’                        2.30    1.06    .70
    ‘This should not have happened, our sincere apologies, how can we make it up?’                    1.43    1.04    .90
    ‘How annoying! Our sincere apology for the inconvenience, how can we compensate this for you?’    1.83    .94     .87
    ‘We understand your position, how can we solve this?’                                             2.4     1.20    Eliminated
  Defensive                                                  (5.74)   (1.42)              .86
    ‘Are you sure it is our fault that this has happened?’                                            5.13    1.96    .87
    ‘I understand your problem, but why is this ours?’                                                6.35    1.40    .89
    ‘These things never happen with our services, are you sure that we would deliver the package?’    5.74    1.48    .81
    ‘Could you be more specific, we have not noticed any troubles from our side.’                     4.83    1.40    Eliminated

Procedure

The experiment began with the informed consent. The participants then answered a set of demographic questions, namely age, gender and educational level. After the demographic questions, the participants answered questions about their attitude towards PostNL. Next, the participant was assigned to a condition and received instructions on how to start the conversation with the chatbot. The instructions and conversation with the chatbot lasted no longer than 3 minutes. After the conversation the participant received a unique conversation code, which they needed to enter to ensure that they had truly conversed with the chatbot. Next, the participant answered questions regarding the perceived interactivity, perceived helpfulness, customer satisfaction and the attitude towards PostNL after the conversation. Subsequently, two questions were presented in order to check whether the manipulation of the conditions was executed correctly. Finally, the experiment closed with a general debriefing in which the participants were thanked for their participation.

Stimuli

Response strategy. In advance of the conversation the participant received a scenario including instructions. The participants were briefed with a complaint scenario and were asked to begin a conversation with the chatbot, which would try to help the customer. The accommodating chatbot used extremely accommodating responses, whilst the defensive chatbot used extremely defensive responses; the responses used are shown in Table 3. The focus of this manipulation was to explore the effects of an accommodating response and a defensive response when interacting with chatbots. Visualizations of the conversation are presented in Appendix 4.

Table 3. The accommodating and defensive responses of the chatbot

Response         Sample responses

Accommodating    ‘How awful! We sincerely apologize for the inconvenience, how would you like this solved?’
                 ‘How annoying! Our sincere apology for the inconvenience, how can we compensate this for you?’
                 ‘We understand your frustration, our sincere apologies, this should have not happened, how can we accommodate you?’

Defensive        ‘Are you sure it is our fault that this has happened?’
                 ‘I understand your problem, but why is this our problem?’
                 ‘These things never happen with our services, are you sure that we would deliver the package?’

Anthropomorphic chatbot. The chatbot was specifically designed for this study with the use of the CART framework (Araujo, 2019). The chatbot was designed to interact with the participant using informal language. The human-like chatbot was named Jan_ and had an icon with human characteristics. The machine-like chatbot was named PostNLbot and had an icon with machine-like characteristics. The focus of this manipulation was to explore the effects of interacting with human-like and machine-like chatbots, each having a name and a profile picture and using only textual communication (Fig. 2a and Fig. 2b).


Figure 2a. Visualization of the human-like chatbot

Figure 2b. Visualization of the machine-like chatbot

Measurements of the concepts

The next section elaborates on the measurement of the concepts. Principal component analyses (PCA) and reliability tests were used to check the validity and reliability of the scales. In the PCA, Varimax rotation was used and all factor loadings needed to be above .45 for an item to be included. In the reliability tests, Cronbach’s alpha needed to be above .60 for a scale to be considered (moderately) reliable. All scales met this criterion. Table 4 provides all items of the scales, their factor loadings and the Cronbach’s alphas.
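As an illustration of these scale checks, a minimal sketch in Python is given below (the actual analyses were run in SPSS; the column names are placeholders, and rotation is omitted since each scale is treated as a single component):

```python
# Sketch of a one-component PCA on the item correlation matrix and Cronbach's alpha.
import numpy as np
import pandas as pd

def pca_unidimensionality(items: pd.DataFrame):
    """Return all eigenvalues, % variance of the first component and its loadings."""
    corr = items.corr().to_numpy()
    eigvals, eigvecs = np.linalg.eigh(corr)            # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    loadings = eigvecs[:, 0] * np.sqrt(eigvals[0])     # loadings on the first component
    pct_variance = 100 * eigvals[0] / eigvals.sum()
    return eigvals, pct_variance, loadings

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Example usage with simulated 7-point Likert responses for a three-item scale:
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(1, 8, size=(127, 3)), columns=["item_1", "item_2", "item_3"])
eigvals, pct, loadings = pca_unidimensionality(df)
print(eigvals[0], pct, loadings, cronbach_alpha(df))
```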


Table 4. The factor loadings and Cronbach’s alpha of all the scales.

                              Factor loadings
Item                          1        2        3        4
Pre-existing attitude 1       .88
Pre-existing attitude 2       .69
Pre-existing attitude 3       .89
Pre-existing attitude 4       .86
Perceived interactivity 1              .87
Perceived interactivity 2              .83
Perceived interactivity 3              .84
Perceived interactivity 4              .80
Perceived helpfulness 1                         .94
Perceived helpfulness 2                         .94
Perceived helpfulness 3                         .95
ACSI 1                                                   .94
ACSI 2                                                   .91
ACSI 3                                                   .94
Eigenvalue                    2.78     2.79     2.66     2.59
% of variance                 69.41    69.71    88.59    86.44
Cronbach’s α                  .84      .86      .93      .92

Note. *A Cronbach’s α was higher if this item was deleted.

**Improved α after deletion of the item.

Customer satisfaction. Customer satisfaction was measured using the pre-existing scales of the American Customer Satisfaction Index from Fornell et al. (1996). Both the cumulative and the transaction-specific point of view of customer satisfaction were measured in this study. However, for the rest of the analysis the three items of transaction-specific customer satisfaction were used, since one of the cumulative customer satisfaction scales was not reliable, as shown in Appendix 5.¹ In addition, this experiment involved only a single transaction, a conversation with a chatbot, which relates to the transaction-specific point of view. The statements of the transaction-specific customer satisfaction scale, derived from the EDT, measured the overall satisfaction, the degree to which the expectations were positively or negatively disconfirmed, and how the performance of the service compared to the ideal performance of the service. The following statements were rated: ‘Overall, I am satisfied with the chatbot’, ‘I think the chatbot falls short’ and ‘The performance of the chatbot matches how I want to be helped’, measured on a 7-point Likert scale ranging from strongly agree (1) to strongly disagree (7). For the analysis the scales were recoded to range from strongly disagree (1) to strongly agree (7). The PCA indicated that the scale was unidimensional (eigenvalue of 2.60), explaining 86.68% of the variance in customer satisfaction (M = 3.32, SD = 1.59, α = .92).

¹ The results of the PCA and reliability tests of all the latent variables of customer satisfaction are elaborated in Appendix 5.
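A sketch of the recoding step described for this scale is shown below (the column names are placeholders, not the original SPSS variable names):

```python
# Flip 7-point items so that higher scores mean more satisfaction, then average them
# into the customer-satisfaction scale score.
import pandas as pd

def flip_7point(series: pd.Series) -> pd.Series:
    """Map 1..7 to 7..1 on a 7-point scale (8 - x)."""
    return 8 - series

# Assuming df holds the raw item scores in "sat_1", "sat_2" and "sat_3":
# df[["sat_1", "sat_2", "sat_3"]] = df[["sat_1", "sat_2", "sat_3"]].apply(flip_7point)
# df["customer_satisfaction"] = df[["sat_1", "sat_2", "sat_3"]].mean(axis=1)
```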

Response strategy. For the measurement of the manipulation of the response strategy, the pre-existing accommodative-defensive continuum of Coombs (1998) was used and slightly adjusted. The participants answered the question ‘How would you classify the conversation you had with the chatbot?’ on a 7-point bipolar scale ranging from accommodating (1) to defensive (7) (M = 4.20, SD = 2.29).

Anthropomorphism of the chatbot. For the measurement of the manipulation of the anthropomorphism of the chatbot, the pre-existing scale from Powers and Kiesler (2006) was used. The participants answered the question ‘How would you classify the chatbot with which you just had a conversation?’ on three different 7-point bipolar scales. The scales ranged from human-like – machine-like, natural – unnatural and life-like – artificial. The PCA indicated that the scale was unidimensional (eigenvalue of 2.71), explaining 90.27% of the variance in the anthropomorphism of the chatbot (M = 5.17, SD = 1.72, α = .94).

Pre-existing attitude towards the company. The pre-existing attitude towards PostNL was measured with the pre-existing scale of Mitchell and Olson (1981). The participants rated the following statements: ‘Overall, I believe PostNL is good’, ‘Overall I believe PostNL is pleasant’, ‘Overall I believe PostNL has a high quality’ and ‘Overall I dislike PostNL very much’ (reverse-coded) on a 7-point Likert scale ranging from strongly agree (1) to strongly disagree (7). The PCA indicated that the scale was unidimensional (eigenvalue of 2.78), explaining 69.41% of the variance in the pre-existing attitude (M = 2.96, SD = 0.97, α = .84).

Perceived Interactivity. For the perceived interactivity of the conversation, four items were adapted from Sundar et al. (2016). The participants rated the following statements: ‘I found the conversation highly interactive’, ‘The conversation enabled two-way communication’, ‘The conversation enabled concurrent (simultaneous) communication’ and ‘The conversation felt primarily like a one-way communication’ (reverse-coded) on a 7-point Likert scale ranging from strongly agree (1) to strongly disagree (7). As with customer satisfaction, the scale was recoded to range from strongly disagree (1) to strongly agree (7) for the analysis. The PCA indicated that the scale was unidimensional (eigenvalue of 2.79), explaining 69.71% of the variance in the perceived interactivity (M = 3.79, SD = 1.32, α = .86).

Perceived Helpfulness. For the perceived helpfulness of the chatbot, three items were adapted from Yin, Bond and Zhang (2014). The participants answered the question ‘How would you describe the conversation with the chatbot?’ on three different 7-point bipolar scales. The scales ranged from not at all helpful – very helpful, not at all useful – very useful and not at all informative – very informative. The PCA indicated that the scale was unidimensional (eigenvalue of 2.66), explaining 88.59% of the variance in the perceived helpfulness (M = 3.01, SD = 1.75, α = .93).

Control variables. The demographic characteristics were collected to check whether the participants were equally distributed over the conditions and to test whether other variables might have a significant influence on the results of this research. For age, the participants were asked to answer the question ‘What is your age?’ and fill in their age in full numbers. For gender, the participants were asked to answer the question ‘What is your gender?’ on a nominal scale: (1) female, (2) male and (3) other. Lastly, for educational level the participants were asked to answer the question ‘What is your educational background, up until your current education?’ on an ordinal scale: (1) primary school, (2) high school, (3) MBO, (4) HBO Bachelor, (5) University Bachelor, (6) University Master, (7) PhD and (8) other.

Elaboration of the statistical methods

To answer the research question, the statistical software package SPSS was used. First, all the data were exported from Qualtrics to SPSS. Next, the assignment of participants to conditions was added to the data manually, since the randomization had been executed by the CART framework. The analysis started with frequency and descriptive statistics to check for possible errors and missing values. The computed scales were all measured on a 7-point Likert or 7-point bipolar scale and were all treated as having a ratio measurement level. ‘Age’ was also measured at the ratio level, ‘educational background’ at the ordinal level and ‘gender’ at the nominal level.

For the analysis, first the manipulation checks and the randomization checks were executed, followed by an exploration of the direct relationships between the main variables with a correlation analysis. These correlation coefficients provide useful insights into how strong the relationships are, yet are not suitable for drawing causal inferences (Grant & Wall, 2009). The main analysis was executed with an ANOVA and the PROCESS macro (Hayes, 2017). Models 4 and 8 of PROCESS make it possible to estimate the effects of (moderated) mediation models, where a common regression analysis is only able to provide one coefficient. This means that a moderated mediation cannot be demonstrated through a regular regression (Hayes, 2017; Edwards & Lambert, 2007; Preacher, Rucker, & Hayes, 2007).
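To illustrate the mediation logic behind PROCESS Model 4, a rough Python analogue is sketched below. The actual analyses were run with the SPSS PROCESS macro; the variable names X, M and Y and the simulated data are placeholders, not the study data:

```python
# Indirect effect a*b of X on Y through M, with a percentile bootstrap confidence interval.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(df: pd.DataFrame) -> float:
    """a*b: the X -> M path times the M -> Y path (controlling for X)."""
    a = smf.ols("M ~ X", data=df).fit().params["X"]
    b = smf.ols("Y ~ X + M", data=df).fit().params["M"]
    return a * b

def bootstrap_ci(df: pd.DataFrame, n_boot: int = 10_000, seed: int = 0):
    """Percentile bootstrap 95% confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(df), len(df))   # resample rows with replacement
        estimates[i] = indirect_effect(df.iloc[idx])
    return np.percentile(estimates, [2.5, 97.5])

# Example with simulated data (X: 0 = defensive, 1 = accommodating response).
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 127)
m = 1.0 * x + rng.normal(size=127)            # mediator depends on X
y = 0.5 * x + 0.8 * m + rng.normal(size=127)  # outcome depends on X and M
data = pd.DataFrame({"X": x, "M": m, "Y": y})
print(indirect_effect(data), bootstrap_ci(data, n_boot=2000))
```

Model 8 extends this logic by letting both the X → M paths and the direct X → Y path depend on the moderator W, which is why a single regression coefficient cannot capture a moderated mediation.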


Results

Manipulation check

The validity of the manipulations of the conditions was assessed with two Independent Samples T-tests. Two dummy variables were created to execute the manipulation checks.

Response strategy. The group means of the response strategy show that the participants exposed to the accommodating chatbot (M = 2.31, SD = 1.26) evaluated the response as more accommodating than the participants exposed to the defensive chatbot (M = 6.13, SD = 1.25) (Table 5). The results were significant, t(125) = 17.13, p < .001, CI = [3.37, 4.26], d = 3.04.

Anthropomorphism. The group means of the anthropomorphism of the chatbot show that the participants exposed to the human-like chatbot (M = 4.25, SD = 1.67) evaluated the chatbot as more human-like than the participants exposed to the machine-like chatbot (M = 6.07, SD = 1.24) (Table 5). The results were significant, t(125) = 6.98, p < .001, CI = [1.30, 2.34], d = 1.24.

Table 5. Independent Samples T-test for response strategy and anthropomorphism.

                      M       SD      N     F       t       df     p       CI-lower    CI-upper
Response strategy
  Accommodating       2.31    1.26    64    0.11    17.13   125    .000    3.37        4.26
  Defensive           6.13    1.25    63
Anthropomorphism
  Human-like          4.25    1.67    63    7.58    6.98    125    .000    1.30        2.34
  Machine-like        6.07    1.24    64
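A minimal sketch of such a manipulation check in Python is given below (the actual tests were run in SPSS; the column names are placeholders):

```python
# Independent-samples t-test between two conditions plus Cohen's d.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

def manipulation_check(df: pd.DataFrame, rating: str, condition: str):
    g1 = df.loc[df[condition] == 0, rating]
    g2 = df.loc[df[condition] == 1, rating]
    t, p = ttest_ind(g1, g2)                      # equal variances assumed, as in the SPSS default
    pooled_sd = np.sqrt(((len(g1) - 1) * g1.var(ddof=1) + (len(g2) - 1) * g2.var(ddof=1))
                        / (len(g1) + len(g2) - 2))
    d = abs(g1.mean() - g2.mean()) / pooled_sd    # Cohen's d with the pooled standard deviation
    return t, p, d

# Example usage:
# t, p, d = manipulation_check(data, rating="accommodating_rating", condition="response_dummy")
```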

Randomization check

To check whether the four conditions were equally distributed, Chi-square tests were executed. The randomization checks were executed across the demographic characteristics.


‘Gender’ was recoded into a dummy variable with female (0) and male (1). ‘Age’ was made numeric and recoded into a dummy variable using a mean split, with ‘young’ ranging from 18 to 25 years (0) and ‘old’ from 25.1 to 67 years (1). ‘Educational level’ was recoded into a dummy variable with ‘low educational level’ ranging from primary school to WO Bachelor (0) and ‘high educational level’ ranging from WO Master to PhD (1). A Chi-square test was conducted to compare ‘gender’ between the four conditions, χ2 (3) = 2.01, p = .570. A Chi-square test was also conducted to compare ‘age’ between the four conditions, χ2 (3) = 2.03, p = .565. Finally, a Chi-square test was conducted to compare ‘educational level’ between the four conditions, χ2 (3) = 1.45, p = .693. These results showed that the conditions were not significantly different, which means that the participants were randomly assigned to a condition and that there was no need to include control variables as covariates in the analysis.
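A hedged sketch of such a randomization check in Python is shown below (the original checks were run in SPSS; the column names are placeholders):

```python
# Chi-square test of independence between condition (four groups) and a dichotomized
# background variable.
import pandas as pd
from scipy.stats import chi2_contingency

def randomization_check(df: pd.DataFrame, background: str, condition: str = "condition"):
    crosstab = pd.crosstab(df[background], df[condition])   # observed frequencies
    chi2, p, dof, expected = chi2_contingency(crosstab)
    return chi2, dof, p

# Example usage:
# chi2, dof, p = randomization_check(data, background="gender_dummy")
# A non-significant p-value (e.g., p = .570 for gender in this study) indicates that the
# conditions do not differ on that characteristic.
```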

Data exploration

The first step of the main analysis was to explore the relationships amongst the variables with a correlation matrix, which is shown in Table 6.

Table 6. Pearson correlations among customer satisfaction and the main variables

                                M        SD      CS       1        2       3       4       5       6       7
Customer satisfaction (CS)      4.67     1.59    -
1. Response strategy            .50      .50     .59**    -
2. Anthropomorphism             .50      .50     .09      -.06     -
3. Pre-existing attitude        2.96     .98     -.01     -.02     -.11    -
4. Perceived interactivity      4.21     1.32    .65**    .41**    .12     -.01    -
5. Perceived helpfulness        3.01     1.75    .75**    .66**    .10     -.08    -.67**  -
6. Gender                       1.69     .46     .06      .09      .08     -.20*   -.03    .10     -
7. Age                          26.96    8.54    -.04     .04      -.18    .10     .14     -.10    -.15*   -
8. Education                    5.37     .88     -.00     -.03     -.11    -.13    .03     .04     .16     .18*

*. Correlation is significant at the 0.05 level (2-tailed).
**. Correlation is significant at the 0.01 level (2-tailed).


Hypothesis testing

Hypothesis 1. The first hypothesis explored the effect of the response strategy (accommodating vs. defensive) on customer satisfaction. The assumption was that an accommodating response strategy of a chatbot would increase customer satisfaction, compared to a defensive response strategy. The participants who received an accommodating response from the chatbot (M = 4.26, SD = 1.40) scored higher on the customer satisfaction scale than the participants who received a defensive response (M = 2.37, SD = 1.16). The results were significant, t(125) = -8.25, p < .001, CI = [-2.34, -1.43], d = 1.47. The Cohen’s d indicates a large effect size. Hypothesis 1 is therefore accepted.

Hypothesis 2. The second hypothesis explored the effect of the response strategy (accommodating vs. defensive) on customer satisfaction moderated by the pre-existing attitude towards the company. The assumption was that customer satisfaction would be higher for participants with a positive pre-existing attitude towards PostNL than for participants with a negative pre-existing attitude towards PostNL. Model 4 of PROCESS, with 10,000 iterations, showed that the effect of the response strategy on customer satisfaction was significant, b = 1.89, SE = .23, 95% BCBCI [1.43, 2.34]. However, the response strategy moderated by the pre-existing attitude on customer satisfaction was not significant, b = .00, SE = .02, 95% BCBCI [-.23; .24]. The moderating effect of the pre-existing attitude towards PostNL was not significant and therefore hypothesis 2 was not supported.

For the following hypotheses 3, 4a, 4b, 5a and 5b, Model 8 of PROCESS, with 10,000 iterations, was adopted. Further, the pre-existing attitude towards PostNL was included as a covariate, since it could affect customer satisfaction. Some concepts appeared strongly correlated, as shown in Table 6; it was therefore better to take them all together in one PROCESS model.² Hence, the conceptual model also indicated that the hypotheses could be tested with the use of one model. Model 8 estimated the direct and indirect effects of the anthropomorphism of the chatbot (W) by the response strategy (X) on customer satisfaction (Y) through the perceived interactivity (M¹) and the perceived helpfulness (M²) of the chatbot, controlled for the pre-existing attitude towards the company. The results of the direct effects are shown in Table 7 and the indirect effects in Table 8.

² A multicollinearity test showed that the three hypotheses could be merged into one model. According to the results of the VIF there were no multicollinearity issues and therefore the concepts could be merged.

Hypothesis 3. For this hypothesis the response strategy moderated by the anthropomorphism of the chatbot on customer satisfaction was explored. Model 8 showed that the direct effect of the response strategy on customer satisfaction was just not significant, b = .57, SE = .30, 95% BCBCI [-.02; 1.16]. The effect of the response strategy moderated by the anthropomorphism of the chatbot on customer satisfaction was also not significant,³ b = .14, SE = .36, 95% BCBCI [-.58; .87]. Despite the non-significant results, the effect within the human-like condition appeared marginally significant: participants conversing with a human-like chatbot with an accommodating response showed a higher customer satisfaction than participants conversing with a human-like chatbot with a defensive response (Table 8). Nevertheless, the effect of the response strategy on customer satisfaction moderated by the anthropomorphism of the chatbot was not significant and therefore hypothesis 3 was not supported.

³ For hypothesis 3, a two-way ANCOVA was also executed. However, these results were not significant either, similar to the results of PROCESS Model 8.

Hypothesis 4. Looking at the results of hypothesis 4a: the effect of the response strategy of a chatbot on customer satisfaction, mediated by the perceived interactivity, was significant, b = .33, SE = .09, 95% BCBCI [.14; .51]. This means that participants conversing with an accommodating chatbot perceived the conversation as more interactive, leading to an increase in customer satisfaction.

Looking at the results of hypothesis 4b: the indirect effect with perceived interactivity as the mediator and the anthropomorphism of the chatbot as the moderator on customer satisfaction was found not to be significant, b = -.10, SE = .16, 95% BCBCI [-.50; .15]. This means that hypothesis 4a was supported and 4b was not supported.

Hypothesis 5. Looking at the results of hypothesis 5a: the effect of the response strategy of a chatbot on customer satisfaction, mediated by the perceived helpfulness, was significant, b = .39, SE = .09, 95% BCBCI [.21; .56]. This means that participants conversing with an accommodating chatbot perceived the conversation as more helpful, leading to an increase in customer satisfaction.

Looking at hypothesis 5b: the indirect effect with perceived helpfulness as the mediator and the anthropomorphism of the chatbot as the moderator was found not to be significant, b = .11, SE = .18, 95% BCBCI [-.26; .46]. Likewise, hypothesis 5a was supported and 5b was not supported.

Table 7. Summary of the direct results of PROCESS Model 8.

                                Dependent
                                Perceived interactivity (M¹)    Perceived helpfulness (M²)     Customer satisfaction (Y)
Independent                     Coeff.    SE     p               Coeff.    SE     p             Coeff.    SE     p
Response strategy (X)           1.26      .30    <.000***        2.19      .33    .000***       .57       .30    .057
Anthropomorphism (W)            .55       .30    .074            .32       .33    .329          .03       .26    .916
Perceived interactivity (M¹)    -         -      -               -         -      -             .33       .09    <.001***
Perceived helpfulness (M²)      -         -      -               -         -      -             .39       .09    .000***
X*W                             .31       .43    .466            .29       .46    .535          .14       .37    .695
Pre-existing attitude           .05       .11    .635            -.11      .12    .382          .05       .09    .617
Constant                        2.80      .41    .000***         1.99      .45    .000***       .44       .41    .281
                                R² = .196                        R² = .464                      R² = .613
                                F(4, 122) = 7.41, p = .000       F(4, 122) = 26.43, p = .000    F(6, 120) = 31.64, p = .000


* p < 0.05, ** p < 0.01, *** p < 0.001.

Table 8. Summary of the indirect results of PROCESS Model 8.

Customer satisfaction (Y)
                                  Effect    BootSE    BootLLCI    BootULCI
Perceived interactivity (M¹)      -.10      .16       -.50        .15
  Human-like                      .31       .16       .08         .70*
  Machine-like                    .41       .21       .11         .93*
Perceived helpfulness (M²)        .11       .18       -.26        .46
  Human-like                      .96       .27       .40         1.47*
  Machine-like                    .85       .27       .34         1.39*
X*W                               .57       .30       -.02        1.16
  Human-like                      .72       .31       .10         1.34*
  Machine-like                    .57       .30       -.02        1.16

Note. * The effect was significant if the 0 was not between the Bootstrap Confidence Intervals.

Discussion and Conclusion

Discussion

According to the results, the response strategy of a chatbot affects customer satisfaction, but the pre-existing attitude towards the company and the anthropomorphism of the chatbot do not moderate that effect. The mediators, perceived interactivity and perceived helpfulness, did mediate the effect of the response strategy on customer satisfaction. In addition, the anthropomorphism of the chatbot did not moderate these two mediation effects.

The effect of the response strategy on customer satisfaction can be clarified through the EDT and SCCT (Oliver, 1980; Coombs, 2007). In the experiment the customer needed to confront the chatbot with a complaint. By starting the conversation the customer forms certain expectations, which will be positively or negatively ‘disconfirmed’. An accommodating response strategy of a chatbot generates a positive disconfirmation, leading to an increase in customer satisfaction, whilst a defensive response strategy generates a negative disconfirmation, leading to an increase in dissatisfaction (Coombs, 1999; Conlon & Murray, 1996; Griffin et al., 1992; Lee, 2005; Lee & Song, 2010). One could conclude that the more accommodating the response of a chatbot is, the more satisfied the customer will be.

The anthropomorphism of the chatbot did not appear significant. Many researchers are currently examining the CASA paradigm and speculate whether chatbots should have human characteristics. Some recent studies have shown that chatbots should possess human characteristics (Araujo, 2018; Portela & Granell-Canut, 2017; Toxtli, Monroy-Hernández, & Cranshaw, 2018; Jain, Kumar, Kota, & Patel, 2018), while other studies indicate that chatbots should not possess human characteristics. For instance, the human-like characteristics of a chatbot can reach a level where it becomes too difficult for humans to differentiate whether they are talking to a human or a bot (Davis, Varol, Ferrara, Flammini, & Menczer, 2016; Candello, Pinhanez, & Figueiredo, 2017), and a “too human-like” chatbot could even generate aversive responses (MacDorman, Green, Ho, & Koch, 2009; Strait et al., 2015). As seen in this study, within this context, building chatbots with anthropomorphic characteristics did not influence customer satisfaction. Furthermore, this study raises the following question: ‘Do chatbots even need to be anthropomorphic at all?’ The main results showed that as long as the response of the chatbot is interactive, helpful and accommodative, it increases customer satisfaction. Does it then matter for customer satisfaction whether the answer is given by a human-like or a machine-like chatbot? Future research will have to determine to what extent chatbots should have anthropomorphic characteristics.


Implications

The findings of this research have important implications for theory and practice. Previously, there was a knowledge gap concerning the effects of the response strategy of a chatbot on customer satisfaction. The theories used in this study, EDT and SCCT, have shown that their fundamentals are still of great importance in current society. Since the results of this study confirmed that these theories can be generalized to chatbots, the knowledge gap is partially closed.

For practice, the findings of this research suggest that in order to increase the level of customer satisfaction, companies should integrate accommodative responses of chatbots into their services. In addition, accommodative responses will contribute to the perceived interactivity and perceived helpfulness of a chatbot. On the contrary, the anthropomorphic characteristics of chatbots do not significantly increase customer satisfaction. In practice, it is better for developers to focus on creating chatbots with accommodative responses instead of focusing on the anthropomorphic design of the chatbot. Nonetheless, more research on chatbots and customer satisfaction will be needed to create a chatbot design framework.

Limitations and future research

This research contained several limitations that should be considered for future research. First, the sample recruited in this research was not representative of the general population, as it was recruited via non-probability sampling techniques. In addition, the demographic characteristics indicated that most of the participants were quite young, highly educated and female. This could imply that the significant results of the response strategy on customer satisfaction were obtained because young and highly educated people tend to adopt technology faster (Czaja, Charness, Fisk, Hertzog, Nair, Rogers, & Sharit, 2006). Secondly, the experiment was only online for seven days. Although the minimum required number of participants was collected, it can be assumed that if the experiment had been online for a longer period of time, this might have generated a larger and more diverse group of participants.

Thirdly, the experimental design of the research was chosen in order to provide high internal validity and to permit causal inferences (Grant & Wall, 2009). Although the pre-existing attitude was controlled for in the main analysis, the use of a specific company, PostNL, might have threatened the internal validity of the experiment. Therefore, in future research it might be more valuable to choose a fictitious company to guarantee the internal validity of the experiment.

Fourthly, the two icons representing the human-like and the machine-like chatbots were animated icons. One could assume that the absence of an effect of the anthropomorphism of the chatbots was due to the fact that the icons were animated. In future research one might consider selecting actual pictures of humans and machines to obtain stronger effects.

Fifthly, the design of the chatbot itself had several limitations. To date, chatbots are programmed in a specific style, provided by machine learning algorithms, to answer the user (Ashktorab, Jain, Liao, & Weisz, 2019). For this study, the chatbots were programmed in a Dutch language style and the participants could only interact with them in Dutch. Also, several participants did not receive a conversation code, which means that they experienced a breakdown or failure of the chatbot. The experience of a breakdown or failure in a conversation can decrease trust, satisfaction and the willingness to continue using a chatbot (Jain, Kumar, Bhansali, Liao, Truong, & Patel, 2018; Jain et al., 2018; Luger & Sellen, 2016). Therefore, future research should try to create scripts in every language, dialect or slang to prevent as many failures as possible.


Finally, the chatbot in this study was not designed to actually resolve the complaint: it was built to examine response strategies, not complaint resolution. It would therefore be interesting to develop and test a chatbot that can truly resolve a customer complaint.

Conclusion

As mentioned before, chatbots are being implemented by many different companies and services; knowledge of the effects of this technology is therefore required. This study set out to determine the influence of the response strategy and the anthropomorphic characteristics of a chatbot on customer satisfaction, and how this effect is shaped by the pre-existing attitude towards the company, the perceived interactivity and the perceived helpfulness.

The findings show that the EDT can serve as a basis for future research into the effects of chatbots on customer satisfaction. Future chatbots may be able to communicate about and resolve any unforeseen event or complaint that arises. For now, this study demonstrated that the responses of a chatbot, whether it is a human-like or a machine-like chatbot, should be accommodating in order to be perceived as interactive and helpful, and thereby to satisfy the customer as much as possible.


References

Allport, G. W. (1935). Attitudes. In A Handbook of Social Psychology (pp. 798-844). Worcester, MA, US: Clark University Press.

Anderson, E. W., Fornell, C., & Lehmann, D. R. (1994). Customer satisfaction, market share, and profitability: Findings from Sweden. Journal of Marketing, 58(3), 53-66.

Androutsopoulou, A., Karacapilidis, N., Loukis, E., & Charalabidis, Y. (2019). Transforming the communication between citizens and government through ai-guided chatbots. Government Information Quarterly, 36(2), 358-367.

Ashktorab, Z., Jain, M., Liao, Q. V., & Weisz, J. D. (2019). Resilient Chatbots: Repair Strategy Preferences for Conversational Breakdowns. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI'19).

Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183-189.

Araujo, T. (2019, May). Going beyond the wizard: Using computational methods for conversational agent communication research. Presented at the International Communication Association (ICA) Conference, Washington, DC, USA.

Asch, S. E. (1946). Forming impressions of personality. The Journal of Abnormal and Social Psychology, 41(3), 258.

Bartneck, C., Kulić, D., Croft, E., & Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1(1), 71-81.

Brandtzaeg, P. B., & Følstad, A. (2017, November). Why people use chatbots. In International Conference on Internet Science (pp. 377-392). Springer, Cham.


Candello, H., Pinhanez, C., & Figueiredo, F. (2017, May). Typefaces and the perception of humanness in natural language chatbots. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 3476-3487). ACM.

Casarez, N. B. (2002). Dealing with cyber smear: How to protect your organization from online defamation. Public Relations Quarterly, 47, 40–45.

Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Generation Computer Systems, 92, 539-548.

Clark, A. (2001). They are talking about you: Some thoughts about managing online commentary affecting corporate reputation. Journal of Communication Management, 5, 262–276.

Coombs, W. T. (1998). An analytic framework for crisis situations: Better responses from a better understanding of the situation. Journal of Public Relations Research, 10(3), 177-191.

Coombs, W. T. (1999). Information and compassion in crisis responses: A test of their effects. Journal of Public Relations Research, 11, 125–142.

Coombs, W. T. (2007). Protecting organization reputations during a crisis: The development and application of situational crisis communication theory. Corporate Reputation Review, 10(3), 163-176.

Conlon, D. E., & Murray, N. M. (1996). Customer perceptions of corporate responses to product complaints: The role of explanations. Academy of Management Journal, 39, 1040–1056.

Consumentengids (2019). Klachten pakketbezorging 'uw pakket ligt in de kliko' [Complaints about parcel delivery 'your parcel is in the wheelie bin']. Consumentenbond. Retrieved on 16th of April 2019 from https://www.consumentenbond.nl/binaries/content/assets/cbhippowebsite/gidsen/consumentengids/2019/nummer-4---april/201904p10-klachten-pakketbezorging-p.pdf

Coyle, J. R., Smith, T., & Platt, G. (2012). “I'm here to help” How companies' microblog responses to consumer problems influence brand perceptions. Journal of Research in Interactive Marketing, 6(1), 27-41.

Cui, L., Huang, S., Wei, F., Tan, C., Duan, C., & Zhou, M. (2017). Superagent: A customer service chatbot for e-commerce websites. In Proceedings of ACL 2017, System Demonstrations, 97-102.

Curtis, T., Abratt, R., Dion, P., & Rhoades, D. L. (2011). Customer satisfaction, loyalty and repurchase: Some evidence from apparel consumers. Review of Business, 32(1), 47.

Czaja, S. J., Charness, N., Fisk, A. D., Hertzog, C., Nair, S. N., Rogers, W. A., & Sharit, J. (2006). Factors predicting the use of technology: Findings from the Center for Research and Education on Aging and Technology Enhancement (CREATE). Psychology and Aging, 21(2), 333.

Davis, C. A., Varol, O., Ferrara, E., Flammini, A., & Menczer, F. (2016, April). Botornot: A system to evaluate social bots. In Proceedings of the 25th International Conference Companion on World Wide Web (pp. 273-274). International World Wide Web Conferences Steering Committee.

Davidow, M. (2003). Organizational responses to customer complaints: What works and what doesn’t. Journal of Service Research, 5(3), 225-250.

Edwards, J. R., & Lambert, L. S. (2007). Methods for integrating moderation and mediation: A general analytical framework using moderated path analysis. Psychological Methods, 12(1), 1-22.

Følstad, A., Brandtzaeg, P. B., Feltwell, T., Law, E. L., Tscheligi, M., & Luger, E. A. (2018, April). Sig: Chatbots for social good. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (p. SIG06). ACM.

Fornell, C. (1992). A national customer satisfaction barometer: the Swedish experience. Journal of Marketing, 56(1), 6-21.

Fornell, C., Johnson, M. D., Anderson, E. W., Cha, J., & Bryant, B. E. (1996). The American customer satisfaction index: nature, purpose, and findings. Journal of Marketing, 60(4), 7-18.

Go, E., & Sundar, S. S. (2019). Humanizing Chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior.

Grant, A. M., & Wall, T. D. (2009). The neglected science and art of quasi-experimentation: Why-to, when-to, and how-to advice for organizational researchers. Organizational Research Methods, 12(4), 653-686.

Greene, K., Derlega, V. J., & Mathews, A. (2006). Self-disclosure in personal relationships. The Cambridge handbook of personal relationships, 409-427.

Griffin, M., Babin, B. J., & Darden, W. R. (1992). Consumer assessments of responsibility for product-related injuries: The impact of regulations, warnings, and promotional policies. Advances in Consumer Research, 19, 870– 878.

Grigoroudis, E., & Siskos, Y. (2009). Customer satisfaction evaluation: methods for measuring and implementing service quality (Vol. 139). Springer Science & Business Media.

Hayes, A. F. (2017). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. Guilford Publications.


Heerink, M., Kröse, B., Evers, V., & Wielinga, B. (2010). Relating conversational expressiveness to social presence and acceptance of an assistive social robot. Virtual Reality, 14(1), 77-84.

Hoch, S. J., & Deighton, J. (1989). Managing what consumers learn from experience. Journal of Marketing, 53(2), 1-20.

Homburg, C., & Fürst, A. (2007). See no evil, speak no evil: A study of defensive organizational behaviour towards customer complaints. Journal of the Academy of Marketing Science, 35(4), 523–536.

Hunt, H. K. (1977). Conceptualization and measurement of consumer satisfaction and dissatisfaction. Marketing Science Institute, 77-103.

Huang, M. H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155-172.

Jain, M., Kumar, P., Bhansali, I., Liao, Q. V., Truong, K., & Patel, S. (2018). FarmChat: A Conversational Agent to Answer Farmer Queries. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(4), 170.

Jain, M., Kumar, P., Kota, R., & Patel, S. N. (2018, June). Evaluating and informing the design of chatbots. In Proceedings of the 2018 on Designing Interactive Systems Conference 2018 (pp. 895-906). ACM.

Jin, B. (2019). Criticism From Artificial Agents: Prior Interaction Reduces Negative Effects. Communication Research Reports, 1-10.

Johnson, M. (2015). Customer Satisfaction.

Johnson, M. D., & Fornell, C. (1991). A framework for comparing customer satisfaction across individuals and product categories. Journal of Economic Psychology, 12(2), 267-286.
