
3. Research Method

3.6 Experimental Vignette Methodology (EVM)

This experiment uses an online experimental vignette methodology (EVM) study. Atzmüller and Steiner (2010) describe a vignette as "a short, carefully constructed description of a person, object, or situation, representing a systematic combination of characteristics" (p. 2). An EVM study always consists of two components: participants are first exposed to the vignette itself and are afterward asked to complete questions measuring respondent-specific characteristics.

The vignette approach of this research was operationalized as a fictitious video scenario of a human-chatbot interaction. Before participants were exposed to one of the four vignettes, they first read a short introduction.

The introduction described a situation involving an online customer service question, contextualized around the online customer support of the fictitious delivery company ‘FoodNOW’. The online food delivery (OFD) industry was chosen as the setting for the fictitious human-chatbot service interaction because the use of AI-powered chatbots in OFD services is growing: they optimize customer service by making automated, personalized content directly available at any time and place (Nair et al., 2018).

However, the use of chatbots to answer consumers’ questions in OFD services is still at an early stage, and insights into the process of value co-destruction for these AI-powered chatbot assistants are scarce (De Cicco et al., 2021). According to Outgrow.co, 80% of companies operating in the food industry were expected to have an AI-powered chatbot assistant by 2020². It is therefore essential to analyze AI-powered chatbots from the customer’s perspective in the OFD industry, especially when value co-destruction occurs.

In the introduction, before exposure to the vignette itself, participants were explicitly informed that they would be watching a fictitious interaction between an AI-powered chatbot and a customer of the fictitious company ‘FoodNOW’. In this way, participants could never be under the illusion that both conversation partners were human.

After the introduction, the participants were randomly assigned to one of four video vignettes (a video of their assigned type of human-chatbot interaction with the online customer support of ‘FoodNOW’). Both the customer’s messages and the responses of the AI-powered chatbot assistant were displayed in the video vignettes. The four video vignettes differ from each other in the manipulation of chatbot behavior (error-free or cognition challenges) and the level of anthropomorphism (low or high).
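Purely for illustration, the random assignment to this 2 × 2 between-subjects design can be sketched as follows; the condition labels and helper function are assumptions for the sketch, not how the study’s randomization was actually implemented.

```python
import random

# Hypothetical sketch of the 2 x 2 between-subjects assignment; the condition
# labels mirror the two manipulated factors but are assumptions, not the
# study's actual survey tooling.
CONDITIONS = [
    (behavior, level)
    for behavior in ("error-free", "cognition challenges")
    for level in ("low anthropomorphism", "high anthropomorphism")
]

def assign_vignette() -> tuple[str, str]:
    """Randomly assign a participant to one of the four video vignettes."""
    return random.choice(CONDITIONS)

print(assign_vignette())  # e.g., ('error-free', 'high anthropomorphism')
```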

The text-based AI-driven conversation in this study is designed to look and feel like an actual interaction with the fictitious ‘FoodNOW’ online customer support. To add realism, the conversation takes place within a frame image of an iPhone.

The participants are exposed to the stimulus within this scenario chatbot interaction. The conversations are kept consistent in all other respects, except, of course, for the experimental manipulation.

Screenshots of part of the fictitious AI-powered chatbot assistant conversation for the various vignettes, including their design, are included in Appendix B (Figures 1-8); the full transcript used for each condition can be found in Appendix A.

EVM studies allow the independent variable to be manipulated and controlled, which increases both internal and external validity (Atzmüller & Steiner, 2010). However, it is important to note that the results of EVM studies are based on hypothetical situations, which do not necessarily reflect real-life situations (Lohrke et al., 2010).

Overall, the EVM approach provides greater control over the manipulation and combines the higher internal validity of an experiment with the higher external validity of survey research (Aguinis & Bradley, 2014; Atzmüller & Steiner, 2010).

² https://www.code-brew.com/pdf/FOOD-INDUSTRY-REPORT-INSIGHTS.pdf

3.6.1 Manipulation of chatbot behavior (error-free vs. cognition challenges)

The independent variable chatbot behavior is divided into two different behaviors: error-free or cognition challenges. An error-free AI-powered chatbot can be defined as a chatbot that interprets all consumer utterances correctly and responds with relevant and precise utterances of its own (Sheehan et al., 2020), whereas cognition challenges are described as the AI-powered chatbot displaying a lack of understanding (Chaves & Gerosa, 2021). The variable chatbot behavior is operationalized and manipulated by assigning the participants to one of the two conditions:

1. AI-powered chatbot which behaves error-free

2. AI-powered chatbot which behaves with cognition challenges

The scenario is adapted to both a cognition-challenges situation and an error-free situation, and is contextualized around the online customer support of the fictitious delivery company ‘FoodNOW’.

The manipulated variable in the scenario is based on the theory of Castillo et al. (2021) and Sheehan et al. (2020), as explained in paragraph 2.5. In the scenario with a faultless AI-powered chatbot (error-free), the chatbot interprets consumer utterances correctly and responds with relevant and precise utterances (Sheehan et al., 2020).

In contrast, in the cognition-challenges scenario, the AI-powered chatbot does not understand the customer’s problem and provides an irrelevant, out-of-context answer (Chaves & Gerosa, 2021). Moreover, the customer is asked the same question twice (Castillo et al., 2021).
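As a minimal sketch of this contrast (not the study’s implementation; the utterances and answers below are hypothetical stand-ins for the scripted transcripts in Appendix A):

```python
# Illustrative sketch of the two scripted behavior conditions; the utterances
# and answers are hypothetical stand-ins for the transcripts in Appendix A.
SCRIPT = {
    "Where is my order?": (
        "Your order left the restaurant and should arrive within 10 minutes."
    ),
}

def chatbot_reply(utterance: str, condition: str) -> str:
    if condition == "error-free":
        # Error-free: interprets the utterance correctly and responds with a
        # relevant, precise answer (Sheehan et al., 2020).
        return SCRIPT[utterance]
    # Cognition challenges: displays a lack of understanding by answering out
    # of context; in the full script, the customer is also asked the same
    # question twice (Chaves & Gerosa, 2021; Castillo et al., 2021).
    return "Would you like to see today's menu? What is your order number?"

print(chatbot_reply("Where is my order?", "cognition challenges"))
```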

The transcript of these manipulated chatbot behavior scenarios is included in Appendix A.

3.6.2 Manipulation of the level of anthropomorphism (low vs. high anthropomorphism)

The level of anthropomorphism is divided into two levels: low and high. This variable operates as a moderator in this study.

The variable level of anthropomorphism is operationalized and manipulated by assigning the participants to one of the two conditions:

1. Low level of anthropomorphism

2. High level of anthropomorphism

For the low and high levels of anthropomorphism implemented in the scenario, this study followed the approach to manipulating anthropomorphism taken in previous studies (e.g., Go & Sundar, 2019; Kim & Sundar, 2012; Morana et al., 2020). These studies show that the degree to which an AI-powered chatbot assistant is anthropomorphized influences consumers’ behavior toward, and perception of, the chatbot.

Therefore, it was decided to assess two levels of anthropomorphizing the AI-powered chatbot (low and high). To implement these two levels, several key social cues (e.g., Barcelos et al., 2018; Feine et al., 2019; Leite et al., 2013) and visual cues (e.g., Aggarwal & McGill, 2007; Diederich et al., 2019; Go & Sundar, 2019) are used to make the AI-powered chatbot appear human-like or non-human-like.

Participants of this experiment are continuously exposed to the manipulation of the online AI-powered chatbot assistant’s social and visual cues, at a low or high level of anthropomorphism, during the interaction in the fictitious video vignette.

For example, the two levels varied by the name and avatar shown during the interaction with the consumer. In the low condition, participants see no name and an avatar of a non-human being (e.g., Go & Sundar, 2019; Hodge et al., 2021; Morana et al., 2020), whereas in the high condition they see a name for the AI-powered chatbot assistant (Hodge et al., 2021; Kim & Sundar, 2012) and an avatar of a real human being (Morana et al., 2020; Wuenderlich & Paluch, 2017).

Moreover, to simulate a less or more human-like conversational experience across the two degrees of anthropomorphism, the way the AI-powered chatbot writes also differs between the levels, including the tone of voice, typing emulation, and whether or not emoticons are used. In the low condition, the AI-powered chatbot has a corporate voice with a professional, distant, and formal conversation style (Barcelos et al., 2018; Yu et al., 2016), whereas in the high condition it has a human voice, and a daily, human-like conversation style has been added to the transcript of these vignettes (Kuligowska, 2015).

Furthermore, no emoticons are used in the low condition and no typing emulation is present, while in the high condition emoticons and typing emulation are present, since research shows that these are associated with a high level of anthropomorphism (Aggarwal & McGill, 2007; Diederich et al., 2019; Kwon & Sung, 2011).
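As a compact, hypothetical summary of these design decisions (mirroring Table 3 below; the field names and avatar file names are assumptions for the sketch, not the study’s actual materials):

```python
# Hypothetical configuration mirroring the design decisions above and Table 3;
# field names and avatar file names are assumptions, not the study's materials.
ANTHROPOMORPHISM_CUES = {
    "low": {
        "name": None,                      # no name shown
        "avatar": "non_human_avatar.png",  # avatar of a non-human being
        "greeting_and_goodbye": False,
        "self_reference": False,
        "civility": False,
        "remembers_consumer_name": False,
        "emoticons": False,
        "typing_emulation": False,
        "tone_of_voice": "corporate: professional, distant, formal",
    },
    "high": {
        "name": "Eve",
        "avatar": "human_photo_avatar.png",  # avatar of a real human being
        "greeting_and_goodbye": True,
        "self_reference": True,
        "civility": True,
        "remembers_consumer_name": True,
        "emoticons": True,
        "typing_emulation": True,
        "tone_of_voice": "human: natural, daily, human-like",
    },
}
```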

In a nutshell, the two AI-powered chatbot assistant designs vary in their degree of anthropomorphism. Table 3 summarizes the differences between the two levels of anthropomorphism implemented in the fictitious scenario of this experiment, with the various design decisions for social and visual cues and the research studies in which these cues were examined. Appendix B includes screenshots of the scenario at a low and a high level of anthropomorphism, showing several of these elements.

Table 3

Experiment Manipulation of the Level of Anthropomorphism – AI-powered Chatbot Assistant Anthropomorphic Design and Conversation Style

| Social or visual cue | Low anthropomorphism | High anthropomorphism | References |
|---|---|---|---|
| Name | No name | Eve | Hodge et al., 2021; Kim & Sundar, 2012; Sheehan et al., 2020 |
| Avatar | Avatar of a non-human being | Avatar of a real human being | Go & Sundar, 2019; Morana et al., 2020; Wuenderlich & Paluch, 2017 |
| Greeting & goodbye | ✗ | ✓ | Leite et al., 2013; Sabelli et al., 2011 |
| Self-reference | ✗ | ✓ | Aggarwal & McGill, 2007; Nass et al., 1994 |
| Civility | ✗ | ✓ | Derrick & Ligon, 2014; Fogg & Nass, 1997 |
| Remembers the consumer’s name | ✗ | ✓ | Richards & Bransky, 2014 |
| Emoticons | ✗ | ✓ | Aggarwal & McGill, 2007; Kim et al., 2020; Kwon & Sung, 2011 |
| Typing emulation | ✗ | ✓ | Diederich et al., 2019 |
| Tone of voice | Corporate voice – professional, distant, and formal conversation style | Human voice – resembling a natural, daily, human-like conversation style | Barcelos et al., 2018; Kuligowska, 2015; Yu et al., 2016 |