
The undesirable side effects of AI-powered chatbot assistants:

Customer-focused understanding of the process of value co-destruction and the impact of the level of anthropomorphism

and cultural dimension

University of Amsterdam
MSc Business Administration
Track: Digital Marketing

Master Thesis (Final Version)
Thesis Supervisor: Myrthe Blösser
EBEC: 20220415100410
Vera Salemans – 11061510
Submission Date: June 24, 2022
Word count: 17,934


STATEMENT OF ORIGINALITY

This document is written by student Vera Salemans who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Table of Contents

List of Figures and Tables
List of Abbreviations
Abstract
1. Introduction
2. Literature Review
   2.1 AI-powered chatbots
       2.1.1 AI-powered chatbot assistants
   2.2 Value and value co-creation in AI-powered service settings
   2.3 The recent rise of value co-destruction in AI-powered service settings
   2.4 Customer adoption intention strategy
   2.5 Chatbot behavior: Error-free versus cognition challenges
       2.5.1 Error-free AI-powered chatbot
       2.5.2 Antecedent of value co-destruction: Cognition challenges
   2.6 The effect of chatbot behavior: Error-free versus cognition challenges on customer adoption intention strategy
   2.7 Anthropomorphism
       2.7.1 Service robot anthropomorphism and the subsequent effect on customer adoption intention
       2.7.2 Level of anthropomorphism as moderator
   2.8 Culture
       2.8.1 Hofstede's cultural dimension: individualism vs. collectivism
       2.8.2 Cultural dimension: individualism vs. collectivism as moderator
       2.8.3 Service robot anthropomorphism and cultural dimension: individualism vs. collectivism
   2.9 Conceptual Model
3. Research Method
   3.1 Experimental design
   3.2 Sample
   3.3 Data collection
   3.4 Procedure
   3.5 Stimulus material
   3.6 Experimental Vignette Methodology (EVM)
       3.6.1 Manipulation of chatbot behavior (error-free vs. cognition challenges)
       3.6.2 Manipulation of the level of anthropomorphism (low vs. high anthropomorphism)
   3.7 Operationalization and measurements of the variables
4. Results
   4.1 Data preparation
   4.2 Descriptive and frequency statistics
   4.3 Factor analysis
   4.4 Reliability analysis
   4.5 Randomization check
   4.6 Manipulation check
   4.7 Correlation analysis
   4.8 Hypothesis testing
       4.8.1 Hypothesis 1: Total direct effect (H1)
       4.8.2 Hypotheses 2a, 2b: Moderating effect of level of anthropomorphism, Model 1
       4.8.3 Hypotheses 3a, 3b, 3c & 3d: Moderated moderation, Model 3
   4.9 Hypotheses results
5. Discussion
   5.1 Theoretical implications
   5.2 Managerial implications
   5.3 Limitations and future research
   5.4 Conclusion
6. References
7. Appendices
   Appendix A: Transcript of study, stimulus used, illustrating experimental manipulations
   Appendix B: Stimuli Material
   Appendix C: Hofstede's Cultural Dimension Index
   Appendix D: Qualtrics Survey

List of Figures and Tables

Figures
Figure 1. Conceptual Model
Figure 2. PROCESS Model 1
Figure 3. PROCESS Model 3

Tables
Table 1. Differences between Individualism vs. Collectivism
Table 2. Four Conditions of a 2 (Chatbot Behavior) x 2 (Level of Anthropomorphism) Factorial Between-Subjects Design
Table 3. Experiment Manipulation of the Level of Anthropomorphism
Table 4. Hofstede's Cultural Dimension Index
Table 5. Overview of Number of Valid Observations (N) and Means (M) per Condition
Table 6. Gender Distribution
Table 7. Descriptive Statistics per Condition – CAIS
Table 8. KMO and Bartlett's Test
Table 9. Reliability Analysis
Table 10. Correlation Matrix: Means, Standard Deviations, and Correlations
Table 11. Regression Results of PROCESS Model 1: Moderation Analysis
Table 12. Regression Results of PROCESS Model 3: Moderated Moderation Analysis
Table 13. Conditional Effects of Chatbot Behavior (Error-Free vs. Cognition Challenges)
Table 14. Overview of Hypothesis Results

List of Abbreviations

AI Artificial Intelligence
B2C Business-to-Consumer
CA(s) Conversational Agent(s)
CAIS Customer Adoption Intention Strategy
CC Chatbot behavior x Cultural dimension
CD Cultural Dimension
CDC Cultural Dimension Collectivism
CDI Cultural Dimension Individualism
CL Chatbot behavior x Level of anthropomorphism
CLC Chatbot behavior x Level of anthropomorphism x Cultural dimension
EVM Experimental Vignette Methodology
FLE(s) Frontline Employee(s)
INI Individualism Index
IV Independent Variable
KMO Kaiser-Meyer-Olkin
LC Level of anthropomorphism x Cultural dimension
M Mean(s)
N Number of observations
NLP Natural Language Processing
NLU Natural Language Understanding
NOPEIAICA No Prior Experiences with AI-powered Chatbot Assistants
NWOM Negative Word of Mouth
OFD Online Food Delivery
PCA Principal Components Analysis
PEAICA Prior Experiences with AI-powered Chatbot Assistants
SST Self-Service Technology
TT Trust in Technology
USA United States of America


Abstract

An increasing number of firms introduce AI-powered chatbot assistants to provide automated services to customers. However, these human-chatbot interactions do not always run smoothly, and cognition challenges are a frequent occurrence. Research has yet to explore the undesirable side effects of AI-powered chatbot assistants in the process of value co-destruction and their impact on the customer adoption intention strategy, especially when cognition challenges occur.

To fill this gap, the present study aimed to determine the relationship between chatbot behavior (error-free vs. cognition challenges) and customer adoption intention strategy in AI-driven online service settings. Further, it investigated whether the level of anthropomorphism and the cultural dimension (individualism vs. collectivism) moderate this relationship.

An online experiment was distributed to citizens of China, Greece, and the USA. Participants were randomly assigned to one of four conditions of a human-chatbot interaction.

Findings from a sample of 427 participants show a significant negative effect of chatbot behavior on customer adoption intention strategy: participants who experienced cognition challenges reported lower customer adoption intention strategies than participants who experienced an error-free AI-powered chatbot. Additionally, no moderating effects of the level of anthropomorphism or the cultural dimension were found in this study.

Based on these findings, this study contributes to a better understanding of the process of value co-destruction in AI-driven online service settings. Moreover, directions for future research are discussed for scholars and practitioners to maximize the customer adoption intention strategy of AI-powered chatbots in a process of value co-destruction from the customers' perspective.

Keywords: chatbot behavior, customer adoption intention strategy, level of anthropomorphism, cultural dimension, trust in technology, prior experiences with AI-powered chatbot assistants


1. Introduction

Nowadays, Artificial Intelligence (AI) is rapidly transforming service experiences, as frontline employees (FLEs) are increasingly supported or even replaced by AI technology (Castillo et al., 2021). Driven by advances in AI technology, the nature of the service interface is radically changing from one that is human-driven to one that is predominantly autonomous and technology-dominated (Larivière et al., 2017). When AI-powered chatbots facilitate customer service independently of FLEs, they can, according to Van Doorn et al. (2017), be conceptualized as a self-service technology (SST).

Moreover, AI chatbots can assist with a variety of tasks along the customer’s journey.

Customers in the business-to-consumer (B2C) domain use these chat services for different reasons, such as acquiring information, product details, or assistance in solving technical problems (Adam et al., 2021). For example, AI-powered chatbots for online food delivery (OFD) services are on the rise (De Cicco et al., 2021). Missing an ordered product at a flash delivery service such as Flink, Gorillas, or Zapp? With the help of an AI-powered chatbot, you get an immediate, automated, personalized response at any time and place (Nair et al., 2018). Nonetheless, AI chatbots can also satisfy more complex service demands; retail banks, for instance, are turning to AI chatbots as financial advisors to meet increasing demand (Chong et al., 2021). As these examples show, most AI chatbots improve marketing effectiveness through personalization and innovation as well as increased efficiency (Grewal et al., 2021).

There are expectations from both academics and practitioners that AI will drastically influence business strategies and customer behavior (Abrardi et al., 2021; Davenport et al., 2020). In addition, it is predicted that by 2025, 95% of customers' online service interactions will be powered by an AI chatbot (Ashfaq et al., 2020). Similarly, an analysis of multiple AI chatbot cases by Chui et al. (2018) highlights the relevance of AI chatbots in marketing domains, which further emphasizes their relevance for marketing practitioners and researchers. Moreover, there is growing awareness in the literature of customers' perspectives on AI-powered chatbots, contributing to the facilitation and understanding of value co-creation (Bassano et al., 2020; Lalicic & Weismayer, 2021; Payne et al., 2021).

However, while the promise of AI chatbots is tempting and the benefits of value co-creation have been significant, there is also a downside when customers feel misunderstood by AI-powered chatbots and turn away (Balaji et al., 2016).


As an example, the failure of Ikea's AI-powered chatbot Anna was partly because she was perceived as too human in appearance, while her conversations with customers suffered from misunderstanding. This created customer expectations that could not be met, which in turn led to frustration with follow-up consequences (Brandtzaeg & Følstad, 2018). These undesirable side effects of AI-powered chatbots, delivering the opposite of value co-creation (that is, value co-destruction), are less well understood and inadequately studied (Järvi et al., 2018; Ostrom et al., 2015; Smith, 2013). Academia has been lagging in examining the value co-destruction arising from AI-powered chatbots.

Most of the existing AI chatbot research tends to highlight the positive side of AI chatbots in service settings (Adam et al., 2021; Cheng & Jiang, 2020). Given the continued expansion of AI chatbots in service environments, as well as the significant implications for researchers and practitioners, it is increasingly crucial to understand the process of value co-destruction. In light of this emerging complexity, there is a need to evaluate the potential service facilities resulting from AI technology and how they can negatively affect customers in their value-creating processes.

Therefore, this paper addresses the imbalance in the existing literature on AI-powered chatbots by delving deeper into the process of value co-destruction from the customer's point of view.

According to Adam et al. (2021), the failure of AI chatbots to consistently deliver quality services is still a key barrier to their wide-scale use by customers. Specifically, AI technologies depend on customer participation, which increases the complexity of the service and, ultimately, the likelihood that the service will not succeed (Hilton & Hughes, 2013). As customers invest more of their commitment and time in an interaction, they may feel annoyed and frustrated when the co-created service does not meet their expectations (Grönroos & Voima, 2013; Harrison & Waite, 2015). These occurrences represent a loss of valuable resources, like patience and time (Harrison & Waite, 2015).

In the process of value co-destruction, customers often choose a specific, mostly negative, course of action in an attempt to restore their well-being (Mick & Fournier, 1998), ultimately implying a lower customer adoption intention strategy (Castillo et al., 2021). This is subdivided into an 'avoidance' customer adoption intention strategy, in which customers may refuse to cooperate in the interaction again. Alternatively, it may go a step further and result in a 'confrontational' customer adoption intention strategy, where customers express the failed interaction through negative word of mouth (NWOM) and switch to a competitor. This has the potential to damage the service provider's image or even its reputation (Balaji et al., 2016).

To better explain how this harm to the service provider arises, the process of value co-destruction starts with the antecedents of value co-destruction, which determine how AI can lead to diminished value creation (Bock et al., 2020). Although there is a lack of research in this area for AI chatbots, previous research has focused on the antecedents of value co-destruction of service robots (Echeverri & Skålén, 2011; Järvi et al., 2018; Laud et al., 2019).

Previous studies that examined the process of co-destruction through interaction with physical service robots identify cognition challenges as an important key antecedent of co-destruction (Čaić et al., 2018; Järvi et al., 2018; Vafeas et al., 2016). Cognition challenges are defined as a lack of understanding, or an incorrect interpretation, on the part of the physical robot or, in this case, the AI chatbot (Castillo et al., 2021; Chaves & Gerosa, 2021). As a result, the progression of the customer-chatbot interaction is low (Bazeley & Jackson, 2019).

Recently, initial research on co-destruction in AI chatbots has indeed shown that cognition challenges are an antecedent of co-destruction (Castillo et al., 2021). However, this first study in this specific area appears to have limited generalizability due to its qualitative approach. As AI chatbots in service settings are affected by several specific antecedents, according to Castillo et al. (2021) each antecedent requires further investigation.

Therefore, this paper focuses on this key antecedent (cognition challenges) as a starting point for the undesirable effects of the process of value co-destruction. It does so by comparing two separate AI-powered chatbot behaviors: a human-chatbot interaction in which cognition challenges take place versus a human-chatbot interaction that runs smoothly, a so-called hypothetical error-free AI-powered chatbot.

Moreover, to provide insight into the process of value co-destruction, this study investigates possible explanations for the link between the key antecedent of value co-destruction, cognition challenges (vs. error-free), and customer adoption intention strategy, based on the adoption of characteristics of human-like communication. Therefore, the conditional effect of the level of anthropomorphism will be examined.

Anthropomorphism is described as a set of human-like features that vary in social and visual cues (Feine et al., 2019; Go & Sundar, 2019). For decades it has traditionally been perceived in robotics as a static feature that, once experienced during a short-term interaction, represents a sustained social effect (Fink, 2012; Sheehan et al., 2020). Hence, it is not surprising that previous research suggests that AI-based conversational agents (CAs) should create a higher level of anthropomorphism (Rafaeli & Noy, 2005; Zhang et al., 2020) by adopting characteristics of human-like communication (Elkins et al., 2012).

However, most of these studies focused on anthropomorphic cues, and their findings on the impact on human behavior regarding customer adoption intention are not consistent (Adam et al., 2019; Moussawi et al., 2021). In addition, this work primarily focused on embodied CAs such as robots. In contrast to AI chatbots, robots can show nonverbal anthropomorphic cues, such as facial expressions. Due to the limited capabilities of anthropomorphic AI chatbots, consumers might develop an aversion to an AI chatbot in a service setting.

Moreover, research in an AI chatbot setting that uses the level of anthropomorphism as a moderator in a value co-destruction process is scarce and needs attention (Blut et al., 2021). This paper therefore addresses this gap by studying the conditions under which the effect of cognition challenges on the customer adoption intention strategy operates, from the customer's point of view.

Another challenge for AI chatbots relates to cross-cultural differences between countries. Culture is referred to as "the collective programming of the mind" (Hofstede et al., 2005, p. 3), which shapes customers' perceptions and behaviors in service settings (Chan et al., 2010). AI chatbots nowadays need to deal with customers from different cultural backgrounds and with varying levels of anthropomorphism (Chebat & Morrin, 2007; Michon & Chebat, 2004).

Tan et al. (2018) investigated cross-cultural comparisons between customers from China and the USA in anthropomorphizing robots and revealed that Chinese users rated robots higher in overall anthropomorphism perception than USA users. This is consistent with previous findings that cultural background affects customers' evaluations of anthropomorphized robots (Lee & Šabanović, 2014; Swoboda et al., 2016).

Hofstede's (1982, 1983) four cultural dimensions, a research-based theory of cultural differences among nations, offer an explanation of these results. The emphasis is on the cultural dimension of individualism vs. collectivism, which provides the most powerful explanation of cross-cultural differences in behaviors (Heuer et al., 1999).

However, a condition neglected in previous research is for which type of cultural background (individualism versus collectivism) the effect of the level of anthropomorphism in the process of value co-destruction of AI-powered chatbots on customer adoption intention strategy (mostly) exists.


This study therefore responds to the calls of Blut et al. (2021), Castillo et al. (2021), and Rese et al. (2020) to analyze a more complex moderating effect by exploring how consumers from different cultural backgrounds interpret chatbot-human interactions with respect to dimensions of anthropomorphism.

Therefore, this study aims to answer the following research question:

RQ: What is the effect of AI-powered chatbot assistants' cognition challenges (vs. error-free) on customer adoption intention strategies, and how is this relationship moderated by the AI chatbot's level of anthropomorphism and by customers' cultural dimension?

This study provides several valuable contributions. First, the findings contribute to a more developed understanding of the process of value co-destruction in AI-driven service settings. Such an understanding of value co-destruction and the influence of the level of anthropomorphism is especially important in light of the pervasiveness of AI technologies in services, particularly regarding the key antecedent of cognition challenges, which determines how AI may lead to diminished value creation (Bock et al., 2020). Second, the results represent a contribution to the literature on AI-powered chatbot assistants.

This study is the first to show how the relationship between cognition challenges (vs. error-free) and customer adoption intention strategy varies along with the level of anthropomorphism and along a key cultural dimension (individualism versus collectivism).

These insights are of great importance for practitioners as well. They indicate in which cultural setting service providers should implement AI-powered chatbots with a low or high level of anthropomorphism, such that customer adoption intention remains high when cognition challenges take place during the chatbot-customer interaction. The findings support the decisions of service firms that operate in international markets and aim to implement AI chatbots to increase customer adoption intention. By considering these results from a cultural perspective, international service firms can gain substantial insights into how to implement AI chatbots successfully in competitive international markets and increase customer welfare.

The remainder of this paper is structured as follows. First, variable definitions and the relevant theoretical background are presented in the literature review. The method section then describes how the experimental research was conducted, before delving into the results and discussion. Finally, the limitations of the research and suggestions for future research are discussed.


2. Literature Review

2.1 AI-powered chatbots

AI-powered chatbots are defined as computer programs with natural language capabilities, which can be configured to converse with human users and can facilitate decision-making processes (Mauldin, 1994; Tintarev et al., 2016). The word "chatbot" consists of two parts: the former refers to conversation and the latter to robots. They are often referred to as "AI-powered chatbots" because they are powered by Artificial Intelligence and are capable of understanding and communicating via human language through natural language processing (NLP) (Griol et al., 2013).

The chatbot ecosystem includes not only text-based systems deployed on companies' instant messaging platforms (e.g., Myca of IBM, Fatema of Bank ABC, and CoachBot of Saberr) but also voice-activated digital assistants (e.g., Alexa of Amazon.com, Google Home of Google LLC, and Siri of Apple Inc.).

The focus of this study is purely on AI-driven, text-based chatbots used by companies on online digital platforms. Thanks to progress in machine learning, AI-powered chatbots have become an attractive customer service solution for many companies (Følstad et al., 2018). They are still evolving every day, making it increasingly difficult to distinguish between AI-powered chatbots and FLEs, and expectations of their performance grow as the technology improves over time. According to Suthar (2020), AI-powered chatbots can already answer at least 80% of standard customer inquiries.

Today, AI-powered chatbots are used not only for services but also to interact with customers, build relationships, and replace FLEs. Recently, Chong et al. (2021) classified AI-powered chatbots into three anthropomorphic roles (assistants, coaches, and collaborators). This study focuses only on AI-powered chatbot assistants in service settings, as there is evidence that these chatbots are gaining popularity as part of customers' daily lives (Chong et al., 2021). In the remainder of this paper, AI-powered chatbots are referred to as AI-powered chatbot assistants.


2.1.1 AI-powered chatbot assistants

AI-powered chatbot assistants have a rather passive role that facilitates the customer in accomplishing tasks and making choices (Chong et al., 2021). They strengthen the customer's capabilities and control in the interaction with the service. For instance, an assistant provides the customer with essential information, like product availability and delivery times, or can help a customer find desired products faster. According to Thakur (2018), this process strengthens the customer's self-agency, in other words, the customer's capacity to make decisions during a shopping journey.

On the other hand, AI-powered chatbot assistants are designed for relatively easy tasks, which limits both the range of the service (when based on scripted dialogue) and the technology it builds on. There is little flexibility in the AI chatbot's capability to take commands (Chong et al., 2021). This can lead to failure in meeting customers' expectations, with co-destruction as a consequence, due to the chatbot's inability to understand the customer's input (Følstad et al., 2018). An example is Facebook's virtual AI-powered chatbot M, which is said to have failed in over 70% of interactions, requiring an FLE to intervene (Griffith & Simonite, 2018). Miscommunication errors can lead to co-destruction and even damage the reputation of the company. This motivates research into different elements to improve and understand the present-day impact of AI-powered chatbots.

2.2 Value and value co-creation in AI-powered service settings

Initially, it is important to understand the concept of value before moving on to the interpretation and process of value co-destruction. Research reveals that there are multiple approaches to value, making it hard to provide one single definition. In general, however, value can be defined as the result of a trade-off between benefits and costs: not only monetary costs, but also time, effort, etc. (Lindgreen & Wynstra, 2005; Plé, 2017).

Then, in the early 2000s, authors observed that value is also interactional: it can only be co-created through the interactions that actors have with each other. These interactions can be either direct (e.g., actor-to-actor interactions) or indirect (e.g., interactions via devices such as services). Furthermore, they facilitate the integration of the resources of one of these actors with those of the other actor(s), leading to value co-creation (Akaka et al., 2012; Edvardsson et al., 2012).

In an AI-powered service setting, the service provider can only create a value proposition, which involves the availability of the AI-powered chatbot together with a set of modules, such as an NLP interpreter module and a knowledge base, for understanding and interpreting meaning from human language (Buhalis & Cheng, 2020). Through the integration of resources (e.g., time, skills, internet access), it is the customer who seeks to interact with the AI-powered chatbot, often because of a lack of available FLEs, and value co-creation may emerge as a resulting outcome.

2.3 The recent rise of value co-destruction in AI-powered service settings

Conversely, a few recent studies have highlighted the possibility that in AI-powered service settings, the interactions between a service provider and a customer can also result in negative outcomes. This is referred to as value co-destruction: the actor(s) experience a loss of value from the interaction with the other actor (Castillo et al., 2021; Plé, 2017). More specifically, the well-being of these actors (in this case, the customer(s)) decreases, due to the discrepancy between the actors' expectations regarding resource integration (Plé & Chumpitaz Cáceres, 2010).

According to Plé (2017), understanding what can result in value co-destruction is equally as important as understanding what can result in value co-creation, because it makes it possible to avoid a scenario in which value co-destruction is the result of a process originally intended to co-create value. Moreover, research on value co-destruction with customers is recognized as a research priority (e.g., Castillo et al., 2021; Plé, 2017).

Ostrom et al. (2015) noted that "customers play a greater role in service development and delivery and that, even when technology assists them in such roles, more is expected of them" (p. 139). Research focusing on the customer perspective when value co-destruction occurs remains relatively scarce (Engen et al., 2021; Hsu et al., 2021). Therefore, value co-destruction is defined here through a customer-focused understanding, examining the impact on this process from the customer's point of view in AI-powered service settings.

2.4 Customer adoption intention strategy

Intention is broadly defined as a construct influenced by specific attitudes and beliefs before adopting or not adopting a technology (Ajzen & Fishbein, 1980; Fishbein & Ajzen, 1975). Chen et al. (2009) describe intention as the "continued use of the information system" (p. 1251). Many customer service activities have been restructured to allow technology to either support or replace FLEs (Wang et al., 2013). From a company's point of view, the use of SSTs offers several benefits. However, it is not the implementation of SSTs itself that provides these benefits. Instead, benefits are gained once consumers try the SST and commit to future use, also referred to as high adoption intention (Sheehan et al., 2020).

Meuter et al. (2003) found that people with positive experiences with SSTs had significantly higher adoption intention to use SSTs, while people with negative experiences had significantly lower adoption intention to use the same SST in the future. Moreover, a recent experiment by Sheehan et al. (2020) shows that an AI chatbot that fails in understanding (error chatbot) produced significantly lower adoption intention scores than an AI chatbot without errors (error-free chatbot). Furthermore, according to Prior and Marcos-Cuevas (2016), customers involved in a failed interaction may refuse to interact with the SST in subsequent interactions.

Castillo et al. (2021) were the first in the literature to propose a model of co-destruction from the customer's point of view. If the intention to use the AI-powered chatbot next time is low, Castillo et al. (2021) show that the customer deploys a strategy: a specific, usually negative, action in an attempt to restore their well-being (Mick & Fournier, 1998). Prominent actions when customers attempt to restore their well-being in interactions with AI services are 'avoidance' and 'confrontative' strategies. Avoidance strategies include using FLE support and refusing to use the AI chatbot next time. Confrontative strategies include discontinuing the service, switching to a competitor, or even NWOM.

In a nutshell, according to the literature, positive experiences with SSTs lead to higher adoption intention, whereas negative experiences with SSTs lead to significantly lower adoption intention.

2.5 Chatbot behavior: Error-free versus cognition challenges

Nowadays, there is wide variety in the quality of AI-powered chatbots (Sheehan et al., 2020). The behavior of the AI-powered chatbot can have a different impact on the customer adoption intention strategy from the customer's perspective. Therefore, this study compares a hypothetically perfect AI-powered chatbot without errors (error-free) to one displaying an antecedent of value co-destruction (cognition challenges).

2.5.1 Error-free AI-powered chatbot

An error-free AI-powered chatbot can be defined as a chatbot that interprets all consumer utterances correctly and responds with its own relevant and precise utterances (Sheehan et al., 2020). According to recent literature (Sheehan et al., 2020; Toader et al., 2020), commercial examples of error-free AI-powered chatbots do not presently exist.

2.5.2 Antecedent of value co-destruction: Cognition challenges

Given that value co-destruction has an impact expressed in various ways, identifying the different antecedents enables early warning signals of co-destruction to be attained (Laud et al., 2019). Previous studies on physical service robots and the process of value co-destruction found that cognition challenges are a key antecedent of co-destruction (Čaić et al., 2018; Järvi et al., 2018; Vafeas et al., 2016). Even the first recent study in the literature that examined the antecedents for AI chatbots also identified cognition challenges as a key antecedent (Castillo et al., 2021). Each antecedent has its own process of value co-destruction in service settings (Hsu et al., 2021). This study therefore focuses on one antecedent of value co-destruction (cognition challenges) to examine whether it indeed leads to the perception of a failed service interaction from the customer's point of view and results in a customer adoption intention strategy.

Cognition challenges are described as the AI-powered chatbot displaying a lack of understanding (Chaves & Gerosa, 2021). The chatbot makes an incorrect interpretation, and the progression of the interaction is therefore low (Bazeley & Jackson, 2019; Castillo et al., 2021). This is also referred to as misinterpretation, which occurs when AI chatbots misunderstand a problem or question and give an irrelevant answer (Følstad & Brandtzæg, 2017).

Examples include an AI chatbot asking an excessive number of questions to understand the customer's problem, or repeatedly giving the same answer to different customer questions. A respondent in the in-depth semi-structured interviews of Castillo et al. (2021) experienced a cognition challenge with an AI-powered chatbot. He complained that the chatbot obviously did not understand his question: "the reply that I received had nothing to do with my question, it was irrelevant" (Castillo et al., 2021, p. 912). This clearly indicates a chatbot giving an answer that is out of context.

2.6 The effect of chatbot behavior: Error-free versus cognition challenges on customer adoption intention strategy

Predicting that the hypothetically perfect chatbot (error-free) will lead to a high customer adoption intention strategy is quite straightforward. A study by Toader et al. (2020) indeed showed that customers commit the service to future use (higher customer adoption intention) after a flawless chatbot interaction. However, the study also demonstrated that in the non-error-free chatbot condition, differences arise in trusting beliefs and positive consumer responses (Toader et al., 2020).

The question arises whether there will be a difference in the adoption intention strategy under different chatbot behaviors (error-free versus cognition challenges). The scarce literature on AI-powered chatbots offers different arguments on this point.

Scheutz et al. (2011) identify the cognition challenges of the AI chatbot as a very common communication error, because developing conversational software that can logically interpret meaning from context is complex. According to Castillo et al. (2021), customers do not expect the AI chatbot to solve their problems or questions completely, but do expect it to at least understand the context of the problem and provide appropriate guidance. However, it appears that cognition challenges trigger feelings of anger and frustration in customers. This may lead to a lower customer adoption intention strategy compared to an error-free chatbot.

Nonetheless, findings from the in-depth semi-structured interviews of Castillo et al. (2021) indicate that customers attributed these cognition challenges both to the service provider and to themselves. To the service provider, because the chatbot was not skilled enough and needed more development; to themselves, because they admitted they needed to make sure the AI-powered chatbot understood their request. Being aware of synonyms or adjusting their writing style could help the customer be understood by the AI-powered chatbot.

Interestingly, another study shows that an error-free chatbot receives the same adoption intention score as one in which cognition challenges occur (Sheehan et al., 2020). The explanation is that the AI-powered chatbot is smart enough to identify the source of the cognition challenge in the interaction, known as a problem source (Schegloff, 1992), resulting in clarification by either the sender or the receiver, similar to Fromkin's (1971) concept of conversation repair. The message sender (in this case the chatbot), if the possibility of miscommunication is felt, may reformulate its previous statement, following an utterance such as "What I mean is....", or will repeat it. Another possibility is that the receiver (in this case the customer) asks for clarification of a message, with "huh?" or even an apology-based form such as "I'm sorry, what do you mean?" (Robinson, 2006; Sheehan et al., 2020). Corti and Gillespie (2016) argue that clarification is a fundamental part of the intersubjective effort, defined as shared meaning within two or more conscious minds (Stahl, 2015). Therefore, an AI-powered chatbot that seeks clarification in a cognition challenge showed the same adoption intention score as an error-free chatbot, because seeking clarification is a natural part of interpersonal communication.
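To illustrate the two response policies contrasted above, consider the following minimal Python sketch. It is not the stimulus material of this study; the intents, similarity scoring, and confidence threshold are illustrative assumptions. One policy always returns its best canned answer, however poor the match (the irrelevant-answer pattern behind cognition challenges), while the other seeks clarification when confidence is low, mimicking conversation repair:

```python
# Minimal sketch of "irrelevant answer" vs. "clarification-seeking" policies.
# All intents, scores, and thresholds are illustrative assumptions.
from difflib import SequenceMatcher

INTENTS = {
    "where is my order": "Your order is on its way and should arrive today.",
    "cancel my order": "I have cancelled your order and issued a refund.",
}

def match_intent(utterance: str) -> tuple[str, float]:
    """Return the best-matching canned intent and a crude similarity score."""
    best, score = next(iter(INTENTS)), 0.0
    for intent in INTENTS:
        s = SequenceMatcher(None, utterance.lower(), intent).ratio()
        if s > score:
            best, score = intent, s
    return best, score

def reply_without_repair(utterance: str) -> str:
    # Always answers with the closest canned response, however weak the
    # match: the out-of-context reply pattern behind cognition challenges.
    intent, _ = match_intent(utterance)
    return INTENTS[intent]

def reply_with_repair(utterance: str, threshold: float = 0.6) -> str:
    # Below the confidence threshold, ask for clarification instead of
    # guessing: the conversation-repair behavior described above.
    intent, score = match_intent(utterance)
    if score < threshold:
        return "I'm sorry, what do you mean? Could you rephrase that?"
    return INTENTS[intent]

if __name__ == "__main__":
    question = "an item is missing from my delivery"
    print(reply_without_repair(question))  # likely an irrelevant answer
    print(reply_with_repair(question))     # a repair-style clarification request
```

In this sketch, a vague utterance yields an out-of-context canned reply under the first policy but a repair-style clarification request under the second, which is the behavioral contrast Sheehan et al. (2020) found to neutralize the adoption-intention penalty.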

However, according to Järvi et al. (2018) and Castillo et al. (2021), cognition challenges caused by a lack of understanding by the AI chatbot remain a waste of time and a source of frustration for the customer. It is therefore expected that this weighs more heavily. Consequently, the following hypothesis is formulated:

H1. Cognition challenges (vs. error-free) lead to lower customer adoption intention strategy.

2.7 Anthropomorphism

Before delving deeper, it is crucial to understand the meaning of anthropomorphism, as it is a core construct for interpreting consumers' reactions to AI-powered chatbots. The definition of anthropomorphism in this study refers to the extent to which customers experience AI-powered service chatbots as human-like, rather than the extent to which companies design them as human-like. According to Epley et al. (2007, p. 865), this perception results from the attribution of human characteristics or traits to non-human AI-powered service chatbots.

Research shows that for intelligent objects, such as robots and AI-powered chatbots, anthropomorphizing is comparatively straightforward (Novak & Hoffman, 2019). By transforming non-humans into humans, anthropomorphism can fulfill two fundamental human needs: the need for social connection and the need for comprehension and control of the environment (Epley et al., 2007).

In marketing, anthropomorphism certainly increases product and brand liking (Aggarwal & McGill, 2012). Nevertheless, due to a lack of research, it remains unclear whether the level of anthropomorphism in AI-powered service chatbots affects customer adoption intention strategy in the process of value co-destruction. However, several meta-analyses in this research area have been conducted with service robots rather than AI-powered service chatbots and identified the effect of the level of anthropomorphism on the customer (Hancock et al., 2011; Roesler et al., 2021).


2.7.1 Service robot anthropomorphism and the subsequent effect on customer adoption intention

Findings on whether anthropomorphism increases customers' adoption intention to use a service robot are not consistent. Some results point to the positive subsequent effect of value co-creation (Stroessner & Benitez, 2019), some remain neutral (Goudey & Bonnin, 2016), while others point to the negative subsequent effect of value co-destruction (Broadbent et al., 2011).

Researchers arguing for a positive effect hold that the perception of human attributes in service robots enhances engagement with customers. According to Nass et al. (1994), customers experience the service robot as more controllable and predictable, and the two-way interaction as more convenient and trustworthy. Empirical studies have shown that consumers prefer perceived similarity to humans; in this view, a high level of anthropomorphism is indispensable. It increases customer adoption intention with service robots if customers perceive human-like attributes in the interaction with the service robot (Stroessner & Benitez, 2019).

However, the results of Goudey and Bonnin (2016) show that service robot anthropomorphism affects the consumer neither positively nor negatively and remains neutral. Others are more skeptical: as perceived anthropomorphism increases, "consumers will experience discomfort, specifically: feelings of creepiness and a threat to their human identity" (Mende et al., 2019, p. 539). Broadbent et al. (2011) indicated that customers prefer less human-like robots, and Vlachos et al. (2016) even found that customers prefer an explicitly machine-like robot; both suggest a negative experience of robot anthropomorphism with a subsequent effect of a low customer adoption intention strategy. An example of a robot with human characteristics is Fabio, employed in a grocery store in Edinburgh to welcome customers and answer their product requests. Nevertheless, his performance was not what was expected, and customers started to avoid him in the store. Despite the human tendency to anthropomorphize Fabio, customers felt frustrated, irritated, and uncomfortable with his actions (Mele et al., 2021).

These mixed findings indicate the complexity of the relationship between anthropomorphism and customer intention. Moreover, they suggest that the effects of a service robot's anthropomorphism on customers' usage intention are multidimensional and context-dependent.


2.7.2 Level of Anthropomorphism as moderator

Even though hardly any research has been done in this area for AI-powered chatbots in the process of value co-destruction, the mixed results for service robots can be considered for AI-powered chatbot assistants. However, it is important to note that service robots can show nonverbal anthropomorphic cues, such as facial expressions (Adam et al., 2019; Moussawi et al., 2021). As a result, the impact on customer adoption intention may be different for AI-powered chatbots, due to a difference in how anthropomorphism is expressed.

Blut et al. (2021) have observed that AI chatbot service providers increasingly deploy highly anthropomorphized chatbots, primarily to convince customers that these AI-powered chatbots are capable of providing services traditionally performed by FLEs. Furthermore, Araujo (2018) has associated AI-powered chatbot anthropomorphism with a higher customer adoption intention strategy, in that AI-powered chatbots that generate more human-like cues create stronger emotional connections with the specific firm.

On the contrary, there is increasing evidence that even though AI-powered chatbots can enhance the customer experience by learning from previous conversations with customers and consistently adjusting their responses based on those learnings (Xu et al., 2017), they can still cause cognition challenges, just as service robots can (Mende et al., 2019).

The question arises whether the level of anthropomorphism can affect the relationship between chatbot behavior (error-free vs. cognition challenges) and customer adoption intention strategy. When cognition challenges were made by the AI-powered chatbot, Corti and Gillespie (2016) found that customers made more effort to correct these misinterpretations when the AI-powered chatbot was perceived as human than when it was perceived as an automated conversational agent.

Additionally, when cognition challenges occur, a high level of anthropomorphism may arguably support a higher customer adoption intention strategy by including features reflecting human-like social and visual cues (Corritore et al., 2005; Feine et al., 2019; Go & Sundar, 2019). De Visser et al. (2016) found that AI-powered chatbots are more resilient to failures as their degree of similarity to humans increases. Therefore, introducing more human-like cues for trust restoration can increase the customer's confidence in the AI-powered chatbot, which in turn may lead to less negative customer adoption intention strategies.

Moreover, in parallel with these studies, Sheehan et al. (2020) established that anthropomorphic chatbots that seek clarification in case of cognition challenges were found to satisfy customers' social expectations and even to be as effective as idealized error-free chatbots. As Sheehan et al. (2020) argue, when cognition challenges have taken place in a chatbot with a high level of anthropomorphism, clarification will occur: a problem source is identified and there is an intersubjective effort (Kaplan & Hafner, 2006). This is seen as human behavior, a normal part of interpersonal communication.

It is assumed that a high level of anthropomorphism when cognition challenges have taken place will not be as effective as an error-free chatbot with respect to the customer adoption intention strategy, since an error-free chatbot will not lead to frustration and irritation in the customer (Mele et al., 2021). Therefore, this study predicts that, given the cognition challenges of the AI-powered chatbot, a high level of anthropomorphism will lead to less negative customer adoption intention strategies. Consequently, this study proposes that the level of anthropomorphism may contribute to explaining the relationship between chatbot behavior (error-free vs. cognition challenges) and customer adoption intention strategy.

Therefore, based on the previous studies about service robot anthropomorphism and AI-powered chatbot anthropomorphism, the following hypotheses are formulated to test the moderating effect of the level of anthropomorphism:

H2a. A high level of anthropomorphism will have a positive effect on the negative relationship between cognition challenges (vs. error-free) and customer adoption intention strategy, such that cognition challenges will lead to less negative customer adoption intention strategies.

H2b. A low level of anthropomorphism will have a stronger effect on the negative relationship between cognition challenges (vs. error-free) and customer adoption intention strategy, such that cognition challenges will lead to lower customer adoption intention strategies.


2.8 Culture

The conception of culture has evolved and undergone numerous interpretations, depending on the dominant theoretical perspective of the time. A recurring conception is "Culture is the collective programming of the mind, which distinguishes the members of one group of people from others" (Hofstede et al., 2005, p. 28). In other words, culture determines customers' perceptions, attitudes, and behavior: how they experience cognitive processes, how they interact, and how they respond to stimuli (Pick & Eisend, 2016). Researchers have intensely discussed differences in cultural values and norms between countries across different theoretical and methodological foundations (e.g., Schwartz, 1999; Swoboda et al., 2016).

In 1980, Hofstede published a book in which he reduced national culture to four dimensions. This study relies on Hofstede's main dimensions of culture because they are widely used in international marketing studies focusing on customer behavior and relationship marketing (e.g., Lam et al., 2009; Schumann et al., 2010). Most importantly, Hofstede's four dimensions are seen as "a valuable tool for understanding an individual's fundamental cultural orientation" (Reimann et al., 2008, p. 65). According to Hofstede, the four main dimensions of culture are individualism-collectivism, power distance, uncertainty avoidance, and masculinity-femininity (Hofstede et al., 2010). Nevertheless, the cultural dimension of individualism-collectivism offers the most powerful explanation of cross-cultural differences in behaviors (Heuer et al., 1999; Pick & Eisend, 2016). Therefore, only this cultural dimension is further explained in this study and examined as a moderator.

2.8.1 Hofstede’s cultural dimension: individualism vs. collectivism

Hofstede's work was groundbreaking in the sense that it highlighted the cross-cultural differences among nations and societies across the world (Roy, 2020). Based on a survey of more than 116,000 IBM employees in 112 countries1, Hofstede constructed cultural dimension indices for the four main dimensions. Thus, an Individualism Index (INI) was created for 79 countries. The remaining 33 countries in the available database are not taken into consideration because INI data are not available for them (Hofstede, 2011; Hofstede et al., 2010). See Appendix C for a sample overview of countries and three regions for the INI. The USA stands out with the highest score for individualism (INI = 91), while China (INI = 20) and Greece (INI = 35) score low. This indicates that the USA has a highly individualistic culture, while China and Greece have collectivistic cultures.

1 Data available at: http://geerthofstede.com/research-and-vsm/dimension-data-matrix/

Individualism relates to the degree to which people in a country prefer to act as individuals pursuing personal goals, rather than as members of a group (Hofstede, 1980; Pick & Eisend, 2016). Individualistic cultures (as in the USA) imply 'I' behavior and independence from others. In contrast, collectivistic cultures (as in China and Greece) imply 'we' behavior and are conformity-oriented; they show a higher degree of group behavior and classify others as in-group or out-group rather than as individuals. Furthermore, in collectivistic cultures the relationship prevails over the task, while in individualistic cultures the task prevails over the relationship (Hofstede, 2011). For a clear overview of the differences between individualism and collectivism according to Hofstede (2011), see Table 1.

Table 1
Differences between Individualism vs. Collectivism

Individualism | Collectivism
'I' behavior | 'We' behavior
Others classified as individuals | Others classified as in-group or out-group
Personal opinion expected | Opinions and votes predefined by in-group
Task prevails over relationship | Relationship prevails over task
Independent | Dependent and maintain harmony
Right of privacy | Stress on belonging
Expected to look after him/herself and his/her immediate family | Born into extended families (e.g., uncles, aunts) which protect them in exchange for unquestioning loyalty
Meet others less frequently; personal communication | Meet others more frequently; interpersonal communication

Hofstede's major theory is not the only one to confirm these individualism versus collectivism differences for the USA, China, and Greece. The dominant USA culture has also been characterized as individualistic in other studies, with an emphasis on values associated with self-maximization and self-achievement, such as being independent and developing one's full potential (e.g., Harwood et al., 1995; Harwood et al., 2001; Tamis-LeMonda et al., 2002). Greece, as well as Asian societies such as China, has been described as collectivist (e.g., Georgas, 1989; Kalogeraki, 2009; Tamis-LeMonda et al., 2002), valuing dependence and the maintenance of harmony (Georgas et al., 1997).

For this reason, this study focuses on the cultural dimension individualism versus collectivism, with the USA as an individualistic cultural background and China and Greece as collectivistic cultural backgrounds.

2.8.2 Cultural dimension: individualism vs. collectivism as moderator

Previous research suggested that culture moderates the effect of customers' preferences for personalized service (Mattila, 1999). According to researchers, customer behavior and, as a direct result, customers' adoption intention are strongly dependent on the cultural background in which this behavior is embedded (De Mooij, 2011; Kumar & Pansari, 2016). Looking at the differences between individualism and collectivism for service providers, prior research offers different arguments for the moderator.

Previous research has shown that collectivists, as a result of more frequent communication, praise a service provider more than consumers from individualistic cultures do (Liu et al., 2001). Collectivists who experience issues after a purchase avoid expressing their complaints to the service provider; they avoid revealing to the provider that they made a wrong decision. Instead, collectivists engage in a low customer adoption intention strategy, namely a confrontative strategy with NWOM to others for feedback (Ngai et al., 2007). Thus, in the process of value co-destruction of AI-powered chatbots, it is more likely that this will lead to a confrontative strategy in collectivistic countries than in individualistic countries.

On the other hand, collectivists have a higher willingness to cooperate and maintain relationships; as previously mentioned, the relationship prevails over the task (Hofstede et al., 2010) (see Table 1). It is therefore expected that collectivists are more interested in interacting with a highly anthropomorphic AI-powered chatbot than with a low-anthropomorphic one, because the highly anthropomorphic chatbot shows more human-like cues. They will try to maintain this relationship, even when cognition challenges occur. This is a way of showing relationship-supportive behavior and loyalty, which collectivists will try to maintain (Hofstede, 2011; Wasti, 2003).

By contrast, consumers in individualistic countries have lower relationship orientations, and the task prevails over the relationship (Hofstede et al., 2010) (see Table 1). Consequently, it can be argued that they prefer a low-anthropomorphic AI-powered chatbot. Moreover, they show relationship-supportive behavior only when it is convenient and are less motivated to support the AI-powered chatbot when the key antecedent of value co-destruction has taken place. They are more prone to dissolve a relationship when it becomes inconvenient (Pick & Eisend, 2016), which is likely to lead to an avoidance strategy.

2.8.3 Service robot anthropomorphism and cultural dimension: individualism vs. collectivism

Even though hardly any research on the cultural dimension has been done in this area for AI-powered chatbots in the process of value co-destruction, the effects of cultural factors on service robot anthropomorphism can be considered for AI-powered chatbot assistants. Previous research showed that cultural background affects people's perceptions of robots (e.g., Bartneck et al., 2005; Kaplan, 2004; Šabanović, 2010). In addition to cultural background, people also differ in the extent to which they anthropomorphize robots (Fink, 2012).

According to Bartneck et al. (2005) and Lee and Šabanović (2014), Asian and Western people differ in their acceptance of different kinds of robots. Specifically, a cross-cultural survey study by Lee and Šabanović (2014) showed that Koreans had a stronger preference for anthropomorphic robots than USA participants. According to Hofstede's INI index (see Appendix C), South Korea scores an INI of 18, indicating a highly collectivistic culture. This suggests that a collectivistic cultural background prefers a high level of anthropomorphism.

Moreover, a study by Tan et al. (2018) examined how people from different cultural backgrounds (especially Chinese and American) responded to the anthropomorphic

characteristics of robots, and showed indeed differences. The results show that cultural background influences users’ perception of anthropomorphism, such that Chinese users rated robots with a high level of anthropomorphism higher compared to USA users in the same condition.

This is in line with the arguments in paragraph 2.8.2: people with a collectivistic cultural background are more interested in interacting with a highly anthropomorphic chatbot than people with an individualistic cultural background.

Summarizing these arguments, based on previous studies on individualism versus collectivism, Hofstede’s theory, and existing research on the effects of cultural background on service robot anthropomorphism, the following hypotheses have been formulated for the situation in which cognition challenges have taken place:


H3a: A collectivistic cultural background will strengthen the relationship between a high level of anthropomorphism in AI-powered chatbots and the customer adoption intention strategy when cognition challenges have taken place.

H3b: A collectivistic cultural background will weaken the relationship between a low level of anthropomorphism in AI-powered chatbots and the customer adoption intention strategy when cognition challenges have taken place.

H3c: An individualistic cultural background will weaken the relationship between a high level of anthropomorphism in AI-powered chatbots and the customer adoption intention strategy when cognition challenges have taken place.

H3d: An individualistic cultural background will strengthen the relationship between a low level of anthropomorphism in AI-powered chatbots and the customer adoption intention strategy when cognition challenges have taken place.


2.9 Conceptual Model

Figure 1 shows the conceptual model including the independent variable, the dependent variables, the moderators, and the seven corresponding hypotheses.

Figure 1

Conceptual Model of the value co-destruction/value co-creation process of AI-powered chatbot assistants and the interactions explored (moderated moderation)


3. Research Method

This chapter elaborates on the techniques that were employed in this research. The following steps are explained: the experimental design, sample, data collection, procedure, stimulus material, Experimental Vignette Methodology (EVM), manipulations, and operationalization and measurement of the variables. Furthermore, each decision taken is justified.

3.1 Experimental design

To answer the research question, an experimental design was set up because the research examines a relationship that varies between groups and seeks to establish causal evidence (Trochim et al., 2016). Furthermore, an online experimental setting was chosen for two reasons. Firstly, the online setting allows for controlled manipulation of the level of anthropomorphism and of cognition challenges during the human-chatbot interaction. Secondly, reaching the minimum required sample size was more convenient in an online experiment.

The effect of these variables is tested using a 2 (chatbot behavior: error-free vs. cognition challenges) x 2 (level of anthropomorphism: low vs. high) factorial between-subjects design, shown in Table 2 below, with chatbot behavior as the independent between-subjects variable and level of anthropomorphism as a second between-subjects variable.

Participants were randomly assigned to one of the four conditions in an online experimental setting; an illustrative sketch of such an assignment procedure follows Table 2. Randomization increases internal validity and decreases possible confounding factors in the experiment (Campbell & Stanley, 2015).

Table 2

Four conditions of a 2 (chatbot behavior) x 2 (level of anthropomorphism) factorial between-subjects design

                               Low Level Anthropomorphism          High Level Anthropomorphism
Error-free Chatbot             Error-free x Low Level              Error-free x High Level
                               Anthropomorphism (1)                Anthropomorphism (2)
Cognition Challenges Chatbot   Cognition Challenges x Low Level    Cognition Challenges x High Level
                               Anthropomorphism (3)                Anthropomorphism (4)
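To make the balanced assignment over the four cells of Table 2 concrete, the following Python snippet is a minimal, hypothetical sketch of block randomization. It is not part of the survey implementation (in the actual study, assignment was handled within Qualtrics), and the function name and seed below are illustrative assumptions.

```python
# Hypothetical sketch of balanced (block) random assignment over the four
# cells of Table 2; the actual study randomized participants within Qualtrics.
import itertools
import random

chatbot_behavior = ["error-free", "cognition challenges"]  # factor 1
anthropomorphism = ["low", "high"]                         # factor 2

# The four cells of the 2 x 2 design, corresponding to cells (1)-(4) in Table 2.
conditions = list(itertools.product(chatbot_behavior, anthropomorphism))

def assign(participant_ids, conditions, seed=42):
    """Assign participants to conditions in reshuffled blocks of four,
    so every condition stays (near-)equally filled."""
    rng = random.Random(seed)
    assignment, block = {}, []
    for pid in participant_ids:
        if not block:              # refill and reshuffle an empty block
            block = conditions.copy()
            rng.shuffle(block)
        assignment[pid] = block.pop()
    return assignment

# Example: 8 participants yield exactly 2 participants per condition.
print(assign(range(8), conditions))
```

Block randomization keeps the cell sizes balanced while each individual participant’s condition remains unpredictable, which is one common way to realize the random assignment described above.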


3.2 Sample

Before launching the experiment, the sample size needed to be determined. An a priori power analysis was conducted using the software G*Power version 3.1.9.6 (Faul et al., 2007) to determine the minimum sample size required to test the study hypotheses.

The experiment involves the statistical test ‘ANOVA: repeated measures, between factors’; the test family used for this power analysis was therefore the F-test. The required sample size was calculated to achieve a power (1 – β error probability) of 0.95, meaning a 95% chance of detecting an effect, for a medium effect size of f = 0.25 at a significance criterion of α = .05. Since no relevant effect-size estimates were available from previous research or similar articles, Cohen’s (1988) rule of thumb for a medium effect was used. The number of groups was set to four, corresponding to the research setup.

Based on these input parameters, G*Power calculated a required sample size of N = 176. Thus, the minimum required sample size per condition to test the study hypotheses is n = 44.
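As a cross-check, the calculation can be approximated in Python with the statsmodels library. This is a hedged sketch rather than a reproduction of the G*Power run: statsmodels implements power analysis only for a plain one-way between-subjects ANOVA (FTestAnovaPower), not for G*Power’s ‘ANOVA: repeated measures, between factors’ test used above. Because correlated repeated measurements typically add power, this approximation yields a larger required N than the reported N = 176.

```python
# Approximate the a priori power analysis with statsmodels. This is a
# one-way between-subjects ANOVA approximation, not G*Power's repeated-
# measures test, so the resulting N exceeds the N = 176 reported above.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.25,  # Cohen's f, medium effect (Cohen, 1988)
    alpha=0.05,        # significance criterion
    power=0.95,        # 1 - beta: a 95% chance of detecting the effect
    k_groups=4,        # the four cells of the 2 x 2 design
)
print(f"Required total N (between-subjects approximation): {n_total:.0f}")
print(f"Required n per condition: {n_total / 4:.0f}")
```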

3.3 Data collection

For the online experiment, participants were recruited through a convenience sample, a form of non-probability sampling (Battaglia, 2008). This sampling technique was used because it is affordable and meets certain practical criteria: participants are easily accessible and available at a given time for this study (Dörnyei, 2007; Etikan et al., 2016).

This study was set up via Qualtrics, an online platform for collecting quantitative research responses. Subsequently, the Qualtrics survey was published on ProlificAcademic.co, a data collection platform that connects diverse people around the world and enables fast, reliable, and high-quality data collection. Moreover, Prolific was chosen because this platform made it possible to recruit only citizens of the USA, China, and Greece (see paragraph 2.8.1).


3.4 Procedure

First, participants were recruited via Prolific with a brief description of the study. To continue with the experiment, participants had to approve the informed consent form. The sampling criteria were restricted to citizens of the USA, China, and Greece; therefore, the survey started with a question determining where the participant currently resides: the USA, China, or Greece.

This was followed by an introduction to the situation that participants would be shown next. The introduction was the same for all conditions. Participants had to imagine that they were the customer (named ‘Rox’), who had just come home from a long day at work and was craving some food. Among other things, they had ordered biscuits online from the fictitious delivery company ‘FoodNOW’. Unfortunately, Rox did not seem to have received the biscuits. Therefore, the participant contacted FoodNOW’s online customer support, where an AI-powered chatbot agent handled the customer service.

Moreover, the introduction was followed by a brief, one-sentence definition of an AI-powered chatbot, so that each participant could understand the fictitious scenario that followed: “an AI-powered chatbot is described as a computer program with natural language capabilities, which can be configured to converse with human users and can facilitate the service process.”

Subsequently, each participant was randomly assigned to one of the four conditions, and the visual vignette started to play: a video of the assigned type of online customer service interaction between an AI-powered chatbot assistant and a customer.

After viewing this scenario, participants answered questions measuring their scores on the dependent variables (customer adoption intention strategy), the manipulation checks for the moderators (level of anthropomorphism, cultural dimension), and questions measuring the control variables (age, gender, trust in technology, and prior experience with AI-powered chatbot assistants).

Furthermore, attention-check questions were included to determine whether participants had viewed the video of the human-chatbot interaction.

At the end, a thank-you message and a debriefing page with details about the experiment were shown to the participant. The complete Qualtrics survey can be found in Appendix D.


3.5 Stimulus material

In all experimental conditions, a video of a human-chatbot interaction was shown as stimulus material. Four hypothetical customer service support videos of an AI-powered chatbot were designed for this study to accommodate the required manipulations. The videos of the human-chatbot interaction were created with the program Landbot.com.

During the first component of this Experimental Vignette Methodology (EVM) study, exposure to the vignette itself (Atzmüller & Steiner, 2010), the independent variable (chatbot behavior) was manipulated using two different stimulus elements (error-free and cognition challenges). In addition, the variable level of anthropomorphism was manipulated using two different stimulus elements (low-level and high-level anthropomorphism).

3.6 Experimental Vignette Methodology (EVM)

This experiment makes use of an online experimental vignette methodology (EVM) study. According to Atzmüller and Steiner (2010), a vignette is described as “a short, carefully constructed description of a person, object, or situation, representing a systematic combination of characteristics” (p. 2). An EVM study always consists of two components: participants are first exposed to the vignette itself and are afterward asked to complete questions measuring respondent-specific characteristics.

The vignette approach of this research was operationalized through a fictitious video scenario of a human-chatbot interaction. Before the participants were exposed to one of the four vignettes, they first read a short introduction.

The introduction described the situation regarding an online customer service question, contextualized around the online customer support of the fictitious delivery company ‘FoodNOW’. The online food delivery (OFD) industry was chosen as the setting for the fictitious human-chatbot service interaction because the use of AI-powered chatbots for OFD services is increasing, as they optimize customer service through automated, personalized content available at any time and place (Nair et al., 2018).

However, the use of chatbots for OFD services to answer consumers’ questions is still at an early stage, and insights into the process of value co-destruction for these AI-powered chatbot assistants are scarce (De Cicco et al., 2021). According to Outgrow.co, 80% of companies operating in the food industry are expected to have an AI-powered chatbot
