5. Discussion

5.1 Theoretical implications

Given the scarcity of literature on AI-powered chatbots, previous research offered conflicting arguments about whether customer adoption intention strategies differ when different chatbot behaviors occur (error-free versus cognition challenges) (e.g., Castillo et al., 2021; Scheutz et al., 2011; Sheehan et al., 2020). This study therefore expected cognition challenges (vs. error-free behavior) to lead to a lower customer adoption intention strategy.

In line with previous research (Castillo et al., 2021; Järvi et al., 2018), a relationship between chatbot behavior and customer adoption intention strategy could indeed be established.

The results showed that visual vignettes in which the AI-powered chatbot exhibited cognition challenges in an online service setting led to lower customer adoption intention strategies than visual vignettes depicting an error-free scenario from the customer’s perspective. This effect was statistically significant, and hypothesis 1 was therefore accepted.

Most importantly, this corroborates earlier findings that customers commit to future use of the online service (a higher customer adoption intention strategy) after a smooth and faultless chatbot interaction. Furthermore, the underlying cause of a lower customer adoption intention strategy when cognition challenges occur is that such challenges trigger feelings of anger and frustration in customers: the chatbot’s lack of understanding leaves the customer with wasted time and irritation (Järvi et al., 2018).

Moreover, in line with the results of this study and with the model of value co-destruction proposed by Castillo et al. (2021), customers attempt to restore their well-being after cognition challenges in interactions with AI services through an ‘avoidance’ or ‘confrontative’ strategy. As a result, they may refuse the AI-powered chatbot next time and turn to frontline employee (FLE) support, switch to a competitor, or even engage in negative word-of-mouth (NWOM).

Secondly, it was hypothesized that the level of anthropomorphism affects the relationship between chatbot behavior and customer adoption intention strategy. Previous research found that when the AI-powered chatbot made cognition challenges, customers put more effort into correcting these misinterpretations when the chatbot was perceived as human than when it was perceived as an automated conversational agent (Corti & Gillespie, 2016).

Additionally, De Visser et al. (2016) found that introducing more human-like cues for trust restoration when cognition challenges occur can increase the customer’s confidence in the AI-powered chatbot, which in turn may temper the decline in customer adoption intention strategies. Therefore, it was expected that, following cognition challenges (vs. error-free behavior) by the AI-powered chatbot, a high level of anthropomorphism would lead to less negative customer adoption intention strategies.

Contrary to these expectations, it did not matter whether the level of anthropomorphism was low or high: the results indicate that the moderating effect was not statistically significant.

Thus, based on this study, it cannot be claimed that a higher level of anthropomorphism moderates the negative relationship between cognition challenges and customer adoption intention strategies. These findings are in line with the mixed findings on service robot anthropomorphism (e.g., Broadbent et al., 2011; Goudey & Bonnin, 2016; Stroessner & Benitez, 2019), suggesting that the effects of AI-powered chatbot anthropomorphism on customers’ adoption intention are multidimensional and context-dependent.

Regarding this context dependence, several factors may have contributed to the non-significant result for this hypothesis. Crolic et al. (2022) found that when customers enter an AI-powered chatbot interaction in an angry emotional state, a highly anthropomorphized chatbot has a negative effect on customer adoption intention strategies.

However, this is not the case for customers in non-angry emotional states. The emotional state of the participants was not considered in this experiment.

Another factor may be the gender of the AI-powered chatbot: a male name and a male avatar were used in this experiment for the high level of anthropomorphism. However, recent research showed that female AI-powered chatbots are forgiven much more frequently for cognition challenges than male chatbots (Toader et al., 2020).

Presenting customers with a female AI-powered chatbot may create stronger perceptions of kindness and less negative customer adoption intention strategies. It is therefore suggested that follow-up research take into account both the emotional state of the participants and the gender of the AI-powered chatbot.

Thirdly, building on the existing literature on cultural dimensions and Hofstede’s groundbreaking IBM study (Hofstede, 1983), individualism versus collectivism might strengthen or weaken the relationship between the level of anthropomorphism and the customer adoption intention strategy when cognition challenges have taken place, since AI-powered chatbots nowadays need to deal with customers from different cultural backgrounds and with varying levels of anthropomorphism (Chebat & Morrin, 2007; Michon & Chebat, 2004). This effect was expected to be stronger for a collectivistic (vs. individualistic) cultural background combined with a high (vs. low) level of anthropomorphism. However, PROCESS Model 3 showed no three-way interaction effect, so hypothesis 3 and all of its sub-hypotheses were rejected. It therefore does not appear to matter which cultural background customers come from or whether they are dealing with an AI-powered chatbot with a high or low level of anthropomorphism. A sketch of how such a moderated moderation test can be specified is given below.
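For readers unfamiliar with PROCESS Model 3, the following minimal sketch illustrates the equivalent moderated moderation test as an ordinary regression with a three-way interaction. The variable names, coding, and file name are hypothetical illustrations, not the materials of this study; PROCESS itself estimates this type of model via OLS regression.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data, one row per participant:
# behavior: 0 = error-free, 1 = cognition challenges
# anthro:   0 = low, 1 = high level of anthropomorphism
# collect:  0 = individualistic, 1 = collectivistic background
# adoption: customer adoption intention strategy score
df = pd.read_csv("experiment_data.csv")  # hypothetical file

# Three-way interaction model analogous to PROCESS Model 3
# (moderated moderation): behavior * anthro * collect expands into
# all main effects, two-way interactions, and the three-way term.
model = smf.ols("adoption ~ behavior * anthro * collect", data=df).fit()
print(model.summary())

# A non-significant coefficient on behavior:anthro:collect would
# mirror the rejected sub-hypotheses reported above.
```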

The fact that there is little prior research on the impact of cultural dimensions and anthropomorphism, both aimed at AI-powered chatbots in a process of value co-destruction, could explain the rejected hypotheses of this study.

Moreover, an alternative explanation for the rejection is that this study focused on only one of Hofstede’s four cultural dimensions (individualism vs. collectivism) to capture cultural background. The international business literature proposes different frameworks for assessing cross-national differences that may affect the effects of anthropomorphism (Swoboda et al., 2016). Scholars should consider testing additional country differences using alternative theories besides Hofstede’s, as well as primary data. Future research is needed to determine the possible effects.

Moreover, interesting insights from this study are derived from the control variable trust in technology. Since changes in trust can affect continuity of use (McKnight, 2005), higher trust in technology is more beneficial during a process of value co-destruction.

Although it was not the main focus of this research, it is notable that trust in technology correlates with the level of anthropomorphism: customers who experienced a high level of anthropomorphism reported more trust in technology. Furthermore, trust in technology also correlates with customer adoption intention strategy: customers with more trust in technology reported a higher customer adoption intention strategy.

Finally, this study contributes to the overall literature on AI-powered chatbots.

In particular, this online experimental vignette study was set up in the context of an error-free AI-powered chatbot versus an AI-powered chatbot exhibiting cognition challenges. This research therefore offers a widened perspective within the scarce literature on the effects of the process of value co-destruction by AI-powered chatbots on customer adoption intention strategies. Additionally, this study expands the literature on the moderating roles of the level of anthropomorphism and cultural dimension.

Moreover, this study further supports and enhances research on the customer adoption intention strategy towards AI-powered chatbots in an online customer service setting. By examining the customer adoption intention strategy for different chatbot behaviors, this study expands the scope of the existing literature on the processes of value co-creation and value co-destruction in online service settings.