
The AI’s decision is mine: Does information about AI influence outcome acceptance?

Master Thesis

M.Sc. in Business Administration – Digital Business track

University of Amsterdam, Faculty of Economics and Business

Academic Year 2017–2018

Anna-Katharina Konow (11439750)

Amsterdam, 22 June 2018

Statement of Originality

This document is written by Student Anna-Katharina Konow who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it. The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.

Table of contents

List of figures
List of tables
List of abbreviations
Abstract
Introduction
Literature review
    AI-enabled decision aids
    Acceptance of algorithms
    Acceptance of automation
    Information about artificial intelligence
    Task-related expertise
    General trust in artificial intelligence
Methodology
    Research design
    Data collection
    Procedure
    Stimulus
    Measures
    Statistical procedure
Results
    Descriptive statistics and frequencies
    Reliability analysis for scales
    Correlation analysis
    Hypotheses testing
    Further investigation of advice-taking behavior
Discussion
    General discussion
    Implications for theory
    Managerial implications
    Limitations and future research
References

List of figures

Figure 1: Conceptual framework of this study including hypotheses

List of tables

Table 1: Overview of the conditions and variables of this study
Table 2: Correlation matrix: means, standard deviations, and Pearson’s correlations
Table 3: One-way ANOVA descriptive statistics
Table 4: One-way ANOVA
Table 5: ANCOVA Test of between-subject effects
Table 6: PROCESS model summary for task-related expertise (SK)
Table 7: Conditional effects at levels of task-related expertise (SK)
Table 8: PROCESS model summary for task-related expertise (OK)
Table 9: Conditional effects at levels of task-related expertise (OK)
Table 10: PROCESS model summary for general trust in AI
Table 11: Conditional effects at levels of general trust in AI
Table 12: Examples of the possible advice-taking behaviors
Table 13: Decision matrix showing the results per scenario

List of abbreviations

AI Artificial intelligence

CRM Customer relationship management

OK Objective knowledge

SK Subjective knowledge

Abstract

A substantial body of literature has investigated algorithm reliance and has revealed the phenomenon of “algorithm aversion”. However, only a few studies have done justice to the new generation of algorithms, which are no longer programmed based on human features and cues but work independently towards a set goal and appear as a “black box” to the user. Thus, this study combined theories from two literature streams, decision aids & forecasting and automation, to reevaluate the findings of previous research for a new generation of algorithms. Specifically, this study examined the effect of information about AI on acceptance of the AI-generated outcome and the moderating roles of task-related expertise and general trust in AI. In an online experiment, participants either received educational information about AI or not before interacting with an AI-enabled decision aid in a CRM scenario. Results showed that information about AI did not influence outcome acceptance and that neither task-related expertise nor general trust in AI had a moderating effect. Additionally, this study used a twofold measurement of outcome acceptance and measured both whether people relied on the AI advice and whether they changed their initial choice after receiving advice. The participants’ behavior revealed that they do not want to follow the AI advice blindly. In over a third of all cases, people changed their choice to another option after receiving advice. These findings suggest that outcome acceptance is a more complex construct which needs further examination in future research. For practitioners, these findings emphasize that employees still want some control over the decision, which must be taken into consideration for establishing a successful human-AI relationship.

Keywords: artificial intelligence, algorithms, decision aids, acceptance, information, trust, task-related expertise, subjective knowledge, objective knowledge

Introduction

Artificial intelligence (AI) has been a major management trend in 2017 and is considered the most important technology of our era (Brynjolfsson & McAfee, 2017). Because of the fast-tracking development and adoption of AI, global GDP is predicted to increase by 14% by 2030, which would lead to an additional $15.7 trillion in economic power (PwC’s Global Artificial Intelligence Study, 2017). In the upcoming years, almost every industry will take advantage of AI and will transform its core processes and business models (Brynjolfsson & McAfee, 2017). In this process, robotics and autonomous vehicles, computer vision, natural-language processing, virtual agents as well as machine learning and deep learning form the essential AI technology systems. They will trigger the next digital disruption and substantiate further advances in other AI technologies (McKinsey Global Institute, 2017). Automation, cognitive computing, and crowds will transform the future workforce and lead to a paradigm shift in the way we work (Deloitte University Press, 2017).

Despite the bright prospects, AI seems to threaten jobs and careers as we know them today, which leads to considerable concern and fear among employees (Ransbotham, Kiron, Gerbert, & Reeves, 2017). The so-called “augmented workforce” initiates a reinvention of almost every job and a reorganization of work and companies (Deloitte University Press, 2017). However, it is essential to acknowledge that, for the foreseeable future, cognitive systems will only perform single tasks, not complete jobs. Therefore, current management strategies aim at integrating human and machine work instead of replacing the human employee completely (Davenport & Ronanki, 2018). Thus, the biggest challenge for businesses is to ensure that the workforce is retrained to work with AI rather than compete with it, and to prepare for a hybrid workforce in which AI and human employees work together to reach a common goal (McKinsey Global Institute, 2017; PwC’s Global Artificial Intelligence Study, 2017).

In the hybrid workforce, the AI-generated outcome serves as a decision aid which helps employees to solve their work tasks (Palmeira & Spassova, 2015). Examples of such collaborations include judgmental advice systems for doctors, recruiting software, or algorithms predicting crime for police control (Prahl & Van Swol, 2017; Logg, 2017; Lynch, 2016). The decision aids & forecasting literature revealed the recurring phenomenon of “algorithm aversion”, which describes the tendency of humans to neglect algorithmic advice in favor of personal advice (Dietvorst, Simmons, & Massey, 2015). This neglect of algorithmic advice is mostly explained by the human belief that algorithms cannot learn (Highhouse, 2008), the concern that algorithms do not consider individual targets or factors (Önkal, Goodwin, Thomson, Gönül, & Pollock, 2009), and the loss of trust after seeing algorithms err (Dietvorst, Simmons, & Massey, 2015). A recent study presented a different approach by explaining algorithm aversion through the lack of understanding of how the algorithm generated the advice (Yeomans, Mullainathan, Shah, & Kleinberg, 2018). Automation research has produced similar findings, which emphasize the importance of understanding the underlying mechanisms for automation reliance (Lee & See, 2004). This corresponds to research in advice taking, which shows that people choose their own opinion over others’ advice because they cannot reproduce the decision-making process of the advisor to the same extent as their own (Yaniv & Kleinberger, 2000). In addition to examining the effect of understanding outcome generation on the employee’s willingness to accept and use the outcome, two employee-related factors will be taken into consideration. First, in line with previous research (Merritt & Ilgen, 2008), general trust in AI might influence the relationship. General trust in AI might decrease the need to understand the exact nature of the outcome generation. The importance of initial trust to overcome risk and uncertainty perceptions before the first use has been emphasized in the broader technology context (Li, Hess, & Valacich, 2008), and has been briefly introduced to the automation literature (Hoff & Bashir, 2015). In the context of AI, the perceived risk arises from the delegation of control to a machine (Castelfranchi & Falcone, 2000). However, the impact of initial trust on accepting the AI-generated outcome has not been studied so far. More interestingly, this study aims to look at the moderating role of task-related expertise. Previous research has shown that people with high task-related expertise neglect advice irrespective of whom the advice comes from and how it is generated. Such effects of task-related expertise have been shown to affect trust in advice for judges (Ecken & Pibernik, 2015), but also very recently in the context of algorithmic advice (Logg, 2017). However, it has not been researched for AI-generated advice so far. In conclusion, the gaps developed above lead to the following research question: To what extent does information about artificial intelligence influence acceptance of the AI-generated outcome and what are the moderating roles of task-related expertise and general trust in artificial intelligence?

This study contributes to theory in multiple ways. Firstly, it is one of the first to research the actual effect of information about AI on acceptance of the AI-generated outcome and constitutes a further step in this research area. Secondly, this research followed recent studies in investigating the moderating effect of task-related expertise (Logg, 2017). However, it is the first to distinguish task-related expertise into perceived knowledge and actual knowledge, combining two knowledge measurements which have not been used together before. Future research projects can make use of this method. Thirdly, this study is one of the first to do justice to the change in the programming of algorithms (Yeomans, Mullainathan, Shah, & Kleinberg, 2018). Considering general trust in AI as a moderating factor marks a major first step towards combining the knowledge of two literature streams, decision aids & forecasting and automation, and reevaluating the earlier findings for the new algorithm types. Lastly, this study helps in shedding light on the phenomenon of algorithm aversion by using a more complex, twofold measurement of outcome acceptance (Bonaccio & Dalal, 2006). Thus, this study represents a crucial step in investigating the decision-making process of people deciding whether to rely on algorithmic advice.

This study offers significant implications for practice. The biggest hurdle to accepting an AI is often the manager’s lack of understanding of how the AI system functions (Accenture Institute for High Performance, 2016; Kolbjørnsrud, Amico, & Thomas, 2017). However, this study shows that information per se does not increase outcome acceptance and suggests that even though information will be the fundamental basis for increasing outcome acceptance, it might not be the only success factor. Furthermore, this study reveals that people want some control over the decision and seem involved enough not to want to follow the AI-generated advice blindly. This emphasizes the urgency of distinguishing between tasks that need to be automated, so that humans do not interfere with the AI system, and tasks where human skills are fostered to improve the performance of the human-AI relationship.

This study will first review the relevant literature to gain a better understanding of the different concepts. Then, the research design and research methods will be discussed. Subsequently, the results of the study will be presented, followed by the discussion of the results, their academic and managerial contributions, as well as the limitations of this study and recommendations for future research.

Literature review

AI-enabled decision aids

AI is one of the most recent and most promising trends in the technology landscape. In its current weak notion, AI exceeds human capabilities in one specific area such as visual perception, understanding context, probabilistic reasoning, or dealing with complexity (Hengstler, Enkel, & Duelli, 2016, p. 105). Strong AI, on the other hand, refers to a system with human intelligence in all areas, which currently does not exist (Hengstler, Enkel, & Duelli, 2016). AI opens new economic opportunities and does not only address problems differently but can solve completely different problems. It can discover complex structures, uncover generalizable patterns, and produce predictions, which can be used for business decisions or for solving other complex problems. One big advantage of machine learning, as a subset of AI, is its ability to process data such as images or language information, which regular computer-based support tools cannot handle (Mullainathan & Spiess, 2017).

AI-enabled decision aids are categorized as low-level automation systems. The AI-enabled system serves as a decision support tool where the human operator performs a task with the help of recommendations provided by the system (Kaber & Endsley, 2004). The decision aid represents a hybrid approach, where machine advice complements the experience and knowledge of the human operator. Compared to full automation, the machine advice is not a substitute for human judgment but a provider of additional input that the human operator cannot process to the same extent (Palmeira & Spassova, 2015). The current literature emphasizes the importance of the human-machine relationship for every level of automation. The human-machine collaborative system describes the cooperating relationship between human and machine to achieve a common goal (Ezer, Fisk, & Rogers, 2005). A relationship between human and machine becomes successful when there is a match between the human’s belief about the machine’s capabilities and the machine’s clear statement about its capabilities (Bernstein, Crowley, & Nourbakhsh, 2007). A successful human-machine relationship exists when the human operator chooses to trust the machine and accepts its advice. Therefore, the focal point of this study will be on acceptance of the AI-generated outcome.

This study examines human behavior towards an AI-enabled decision aid, where the AI-generated outcome supports human decision-making. Intelligent decision support tools are currently used mostly in situations with high uncertainty, incomplete information, or substantial risk (Razmak & Aouni, 2015). The application of AI-enabled decision aids encompasses wide-ranging practice scenarios. Machine learning-assisted diagnostic tools in medicine review a large amount of information, including the patient’s electronic medical record, textbooks, and journal articles, to give possible diagnoses based on the patient’s symptoms (Hafner, 2012). Hedge funds use machine learning to optimize the evaluation of investment opportunities, as they can analyze a larger amount of information and exploit patterns in markets which have not been discovered before (Kumar, 2017). AI-based HR systems enhance recruitment decisions by improving recruiting practices, tightening decision-making, and matching the best candidates to the right positions through data-driven and effective choices (Mather, 2017).

This study wants to shed light on this highly relevant field by investigating acceptance of the AI-generated outcome in a workplace scenario. For this purpose, this study builds on two large bodies of literature about decision aids & forecasting and automation. By combining the findings of both literature streams, this study diverges from prior work in important ways, theoretically and practically. Earlier work about decision aids & forecasting has focused on Dawesian algorithms, which have been designed to make human judgment more consistent. By programming algorithms based on human features and cues, they have produced more accurate forecasts and decisions than humans. However, today’s algorithms work fundamentally differently. Using machine learning, the programmers only set the goal and the desired outcome, but the algorithm finds its own best way to reach it (Yeomans, Mullainathan, Shah, & Kleinberg, 2018). Therefore, modern algorithms resemble automation to a certain extent, as the analysis and judgment task is performed entirely independently by the algorithm and appears as a “black box” to the human operator. Humans cannot reproduce the algorithmic decision to the same extent and perceive a lack of control and uncertainty. Accordingly, this study builds on previous research in decision aids & forecasting by examining acceptance of the AI-generated outcome while using insights from the automation literature to do justice to the different, independent approach of current algorithms.

Acceptance of algorithms

It has been statistically proven that algorithms outperform humans in almost every forecast. Algorithms are superior regardless of the judgment task, type of judges, the judge’s level of experience, or the data input (Grove, Zald, Lebow, Snitz, & Nelson, 2000). However, research has unveiled a phenomenon which is called “algorithm aversion”. This phenomenon links findings across numerous studies that people prefer personal advice over algorithmic advice, take human input more into consideration than algorithmic input, and condemn professionals using algorithmic advice instead of reaching out to other experts (Dietvorst, Simmons, & Massey, 2015). This phenomenon is consistent across multiple backgrounds. Arkes et al. (2007) have shown that patients do not like physicians who rely on computer-assisted diagnostic aids. In multiple experiments, participants have been confronted with either a scenario where the physician uses a diagnostic support system or a scenario where the physician does not use such a system. In all cases, participants have believed that physicians who rely on diagnostic support systems do not have a high diagnostic ability and thus are less capable as physicians (Arkes, Shaffer, & Medow, 2007). Similarly, Promberger et al. (2006) have discovered that people follow the medical advice of a human more than that of a computer. Participants have decided whether to have an operation or not in different medical scenarios. Results have shown that people are more likely to follow the recommendation of a physician and that they trust the physician more than the computer program to make a good recommendation or decision (Promberger, Baron, & Dawes, 2006). Moreover, Eastwood et al. (2012) have revealed that people prefer judicial decisions based on human judgment over statistical sources. The study has used four types of decision-making strategies: clinical/fully rational, clinical/heuristic, actuarial/fully rational, and actuarial/heuristic. The participants rated each strategy regarding preference, accuracy, fairness, and ethicalness in legal scenarios. Overall, the results have shown that people prefer legal advice by humans over actuarial tools, and they prefer fully rational decisions over heuristic-based strategies (Eastwood, Snook, & Luther, 2012). Furthermore, Önkal et al. (2009) have found that people discount financial advice from statistical methods in favor of personal advice. The participants have adjusted their stock price forecasts after seeing either the advice of a human expert or of a statistical forecasting method. The results have shown that participants adjust their forecast based on the advice, but they discount the advice of the statistical method much more harshly (Önkal, Goodwin, Thomson, Gönül, & Pollock, 2009).

Research has come up with multiple explanations for this phenomenon. Some of the reasons result from ethical concerns, whether it is appropriate to rely on algorithms for crucial decisions or that algorithms are dehumanizing (Dawes, 1979). Other reasons concern the trust in the capabilities of the algorithms. People do not believe that algorithms can learn (Dawes, 1979), or they assume that algorithms cannot learn through experience as proficiently as humans do (Highhouse, 2008). Further reasons are based on the concerns that algorithms cannot process and incorporate qualitative data and individual case-based information (Grove & Meehl, 1996), and thus can only make general decisions and cannot adequately consider individual targets (Önkal, Goodwin, Thomson, Gönül, & Pollock, 2009). A crucial factor, which is also addressed by a bigger automation literature stream, is the case of algorithm error. After seeing algorithms or automation err, people lose their trust and do not believe that machines will learn from their mistakes. Therefore, algorithm error has a long-lasting negative impact on algorithm acceptance (Dietvorst, Simmons, & Massey, 2015).

Research has shown that people prefer personal advice when it is possible to choose between human and machine advice. However, this is not always the case in practical scenarios. Increasingly, people will be in a situation where they can only choose between the advice generated by a machine and their own choice. This presents a much harder choice, and the phenomenon of algorithm aversion can pose a risk to that development in practice (Lee M. K., 2018). Fortunately, recent studies show that algorithm aversion can be reduced to an extent. Logg (2017) has tested subjectivity of the decision as a mechanism for reliance. The results have shown that people prefer algorithmic advice in the context of objective decisions based on facts and information. In contrast, for subjective decisions based on emotions and intuition they still prefer personal advice. Similarly, if expert advice is available, people always prefer the expert advice to algorithmic advice, regardless of the decision (Logg, 2017). Furthermore, Dietvorst et al. (2016) have shown that people are more willing to use algorithms if they can adjust the outcome. Participants could choose between their own forecast and an algorithmic forecast. The results have shown that people are considerably more likely to choose the algorithmic forecast when they have the chance to modify the forecast, regardless of the extent of the modifications (Dietvorst, Simmons, & Massey, 2016). Moreover, Önkal et al. (2009) have revealed that the difference in advice discounting is only present when people get both personal advice and statistical advice. When participants get only personal or only statistical advice, both advice types have the same impact on the participant’s decision (Önkal, Goodwin, Thomson, Gönül, & Pollock, 2009).

Acceptance of automation

Automation literature has defined trust as the fundamental ground for outcome acceptance and automation reliance (Schaefer, Chen, Szalma, & Hancock, 2016). Trust is widely acknowledged as a mediating factor between the dependent variable of automation reliance and the independent variables regarding the automation, the human operator, and the human-automation relationship (McBride, Rogers, & Fisk, 2014). It is a dynamic construct resulting from the human-automation interaction and the performance of the automation (Merritt & Ilgen, 2008). Higher perceptions of the capabilities, functions, and reliability of the automation lead to higher reliance on the automation, as human operators tend only to use automation they trust and refuse to use automation they do not trust (Dzindolet, Peterson, Pomranky, Pierce, & Beck, 2003; Pop, Shrewsbury, & Durso, 2015). This study follows the same assumption, as it defines acceptance of the AI-generated outcome as a behavioral measure of trust in the AI-enabled decision aid.

Lee & See (2004) have established three bases for trust in automation and the resulting reliance on automation: performance, process, and purpose. Performance specifies the direct observation of the system’s behavior and contains characteristics such as reliability, predictability as well as competency or expertise. Process describes the understanding of the underlying mechanisms, including factors such as persistence, integrity, and dependability. Purpose, on the other hand, refers to the intended use of the system and corresponds to loyalty, benevolence, and faith (Lee & See, 2004).

Information about artificial intelligence

There are only very few studies investigating the influence of knowledge and understanding on automation acceptance. However, these studies emphasize the importance of knowledge and understanding for an appropriate automation reliance. As stated above, understanding is a critical factor in the trust-building mechanism during automation interaction. The human operator will trust automation more if the underlying algorithms are understandable and seem able to achieve the human operator’s specific goal (Lee & See, 2004). Knowledge about automation is also a very critical variable for automation error management. In an ideal scenario, the human operator has a clear and more substantial understanding of the automation they are interacting with. Furthermore, the human operator should understand their own role and the role of the automation within the task as well as the context they are working in (McBride, Rogers, & Fisk, 2014). By understanding how the system works and what is happening, they can reproduce the error and create a causal explanation for it. This improves appropriate trust in automation, but also inhibits drastic automation rejection in the future (Dzindolet, Peterson, Pomranky, Pierce, & Beck, 2003). The introductory information people receive about the automation and its reliability influences how people perceive the automation. The initial expectations correspond to the given information and have a long-lasting influence on human perceptions and reliance on automation (Barg-Walkow & Rogers, 2016). Furthermore, meta-information about the decision improves the knowledge about automation and results in a more appropriate utilization of automated decision aids. Transparency and understandability lead to a higher acceptance of automated systems, as people can understand the “inner workings” of those systems (Seong & Bisantz, 2008).

Recent algorithm research has produced similar findings. Eastwood & Luther (2016) have shown that giving people educational information about the advantages of an actuarial tool and its practical benefits dramatically reduces the resistance to its implementation. After receiving information about the tool, participants show an increase in both satisfaction and the willingness to adopt the tool while also having higher perceptions of fairness and ethicalness of the tool (Eastwood & Luther, 2016). Accordingly, Yeomans et al. (2018) suggest that the problem of algorithm aversion runs deeper than just concerns about algorithmic errors and accuracy. They propose that people are averse to algorithms because they do not understand the process behind the algorithm’s recommendation and are more comfortable with personal advice because of the subjective feeling that they understand that decision-making process better. In their study, they have tested whether people rely on the advice of a recommender system when predicting whether a friend or family member will like a joke or not. The results have shown that people do not rely as much on the recommender system as they should have and that the lack of subjective understanding of the recommendation process predicts aversion to using recommender systems. Then, they have tested whether an explanation of how the recommender system works increases the usage of recommender systems. Participants have seen either a sparse or a rich explanation of the recommendation process. The results have shown that a rich explanation of the recommender system leads to increased understanding of the recommendation process and results in higher willingness to use the recommender system (Yeomans, Mullainathan, Shah, & Kleinberg, 2018).

Both studies have used a short text which explains the procedure and the benefits of the algorithm-based decision to manipulate the understanding of the recommendation process. While Yeomans et al. (2018) have used a scenario about a joke recommendation engine and Eastwood & Luther (2016) have compared a medical and a legal scenario, this study aims to investigate a workplace scenario. Based on the prior findings, this study assumes that information can increase the acceptance of the innovative technology. Thus, information about AI is proposed to be positively related to acceptance of the AI-generated outcome.

Hypothesis 1: There is a positive relationship between information about AI and acceptance of the AI-generated outcome.

Task-related expertise

Task-related expertise consists of high subjective task knowledge and high task-specific expertise (Bearden, Hardesty, & Rose, 2001). Task-specific expertise or objective knowledge (OK) refers to the actual knowledge, which results from accurate information stored in memory and the person’s ability to perform a specific task. Subjective knowledge (SK), on the other hand, corresponds to the person’s self-beliefs about their knowledge and the person’s confidence in their ability to perform a particular task (Alba & Hutchinson, 1987). With high task-related expertise, the human operator feels capable of and assured in their decisions or behaviors and is less likely to be influenced by other opinions or concerns about social rejection (Bearden, Hardesty, & Rose, 2001).

A meta-analysis has revealed that across many studies, people tend to discount the advice of others in favor of their own judgment as they overestimate their judgmental performance (Bonaccio & Dalal, 2006). Ecken & Pibernik (2015) have replicated this finding in a field study with legal experts. The results have shown that judges tend to stick to their judgment rather than changing it according to the advice they have received. This is even the case when the task is exceedingly difficult and the advice comes from a valid source of information (Ecken & Pibernik, 2015). For one thing, the information asymmetry regarding the decision can explain advice discounting. People have a clear understanding of their internal justification for their specific choice and strong supporting evidence for their own decision. However, they lack evidence to explain the advisor’s decision. As they feel more comfortable when they know how the decision is made, they tend to stick to their own choice (Yaniv & Kleinberger, 2000). In contrast, the expert’s advice discounting behavior is explained by the egocentric bias, since experts assume that their opinion is superior to others’ and especially to the advisor’s choice. Therefore, experts always discount advice, regardless of the task, its difficulty, the advisor, or how much information about the advisor’s decision-making is available (Krueger, 2003).

The discounting behavior of experts also holds when the advisor is a computer. Automation reliance is negatively biased by task-related expertise (Lee & See, 2004). People with higher self-confidence and higher expertise are more likely to rely on manual control instead of relying on the automation (Lee & Moray, 1994). Recently, Logg (2017) has shown that expertise also suppresses reliance on algorithmic advice. In the experiment, participants have received algorithmic advice regarding national security. By comparing lay responses with responses from U.S. government employees who work in national security, the results have revealed that providing advice from algorithms increases advice consideration for non-experts, but experts still heavily discount the algorithmic advice (Logg, 2017). Findings from Yaniv & Kleinberger (2000) suggest that information about AI can increase acceptance of the AI-generated outcome for lay people. Information about AI might solve the problem of lacking evidence to explain the advisor’s decision. Since this study focuses on workplace scenarios, most people will be experts. It is questionable whether information about AI will increase acceptance for experts as well or if experts still discount AI advice, regardless of the information beforehand. Thus, based on the findings discussed above, the effect of information about AI on acceptance of the AI-generated outcome is proposed to be dependent on task-related expertise. Specifically, information about AI is proposed to be positively related to acceptance of the AI-generated outcome for non-experts, whereas acceptance of the AI-generated outcome is proposed to be low in both conditions for experts.

Hypothesis 2: The effect of information about AI on acceptance of the AI-generated outcome is moderated by task-related expertise, such that for people with low task-related expertise it leads to an increase in acceptance but has no influence for people with high task-related expertise.

General trust in artificial intelligence

Trust in AI evolves over time and develops across three dimensions: predictability, dependability, and faith. Before the first interaction, trust is mostly driven by the predictability of the AI technology and whether it is possible to anticipate future behavior. Once the risk perceptions have been overcome and the person is willing to use the AI technology, trust further develops through dependability and consistent behavior of the AI technology. Finally, the user develops faith in the AI technology and wholly relies on it (Hengstler, Enkel, & Duelli, 2016).

The crucial role of the first dimension, “predictability”, in the form of initial trust has been a subject of technology adoption research for years. Initial trust is especially important in the technology context, as it helps users to overcome their risk perceptions and uncertainty before interacting with an innovative technology for the first time (Li, Hess, & Valacich, 2008). Resistance to technological innovation can be influenced by a range of factors such as perceived radical change in habits and usage, missing benefits, potential risks, demographics, or perceived (job) threat (Mani & Chouk, 2017; Brougham & Haar, 2017). However, the antecedents of initial online trust are different for every person because of their unique set of beliefs about algorithms, which makes them either accept or resist it (Beldad, de Jong, & Steehouder, 2010).

While the mediating impact of dynamic trust on automation reliance has been extensively studied by now, the potential of initial trust as a moderating factor has been neglected so far. Only a few studies have even distinguished between initial trust in automation before the interaction and dynamic trust during the interaction (Hernández-Ortega, 2011). Initial trust in automation reflects the human operator’s diffuse perception of the automation’s trustworthiness before the interaction between human operator and machine (Merritt & Ilgen, 2008). Hoff & Bashir (2015) have systematically reviewed the automation literature to determine antecedents of initial trust and have presented a three-layered initial trust model consisting of dispositional trust, situational trust, and learned trust. Dispositional trust is influenced by demographic factors as well as personality traits. Situational trust again can be distinguished between external and internal variability. External variability summarizes factors such as system complexity, workload, perceived risk, and organizational settings, whereas internal variability contains factors like self-confidence and subject-matter expertise. Finally, initial learned trust is based on personal expectations, experience with the system, and understanding of the system (Hoff & Bashir, 2015).

In algorithm research, trust has not been studied so far. As algorithms are no longer based on human features and cues but find their own way to a given goal, the algorithm’s decisions are no longer predictable and reproducible. This might increase perceived risk, as people delegate control to a machine (Castelfranchi & Falcone, 2000). Accordingly, initial trust might become relevant to overcome those risk perceptions. Therefore, this study introduces the concept of initial trust in automation to the algorithm literature. General trust in AI is defined, in line with previous automation research, as the initial trust which is independent of the specific AI algorithm the human operator will work with. It reflects the human operator’s diffuse perception of the AI algorithm’s trustworthiness before the interaction.

High general trust in AI might decrease the need to understand the exact nature of outcome generation in order to rely on the outcome. Based on the findings of Hoff & Bashir (2015), understanding the system is only one of many antecedents of initial trust. Therefore, information about AI might not be necessary in cases where general trust in AI is already developed based on many other antecedents of the situational and dispositional trust factors. However, for people with low general trust in AI, information about AI might be necessary to decrease risk perceptions before the interaction. Many of the antecedents of dispositional and situational trust are personal and fixed and cannot easily be manipulated. Understanding the system as part of the initial learned trust, on the other hand, can be influenced by information about AI. Thus, for low general trust in AI, information about AI might increase acceptance of the AI-generated outcome. Consequently, the relationship between information about AI and acceptance of the AI-generated outcome is proposed to be dependent on general trust in AI. Specifically, information about AI is proposed to be positively related to acceptance of the AI-generated outcome for low general trust in AI, whereas acceptance of the AI-generated outcome is proposed to be high in both conditions for high general trust in AI.

Hypothesis 3: The effect of information about AI on acceptance of the AI-generated outcome is moderated by general trust in AI, such that for people with low general trust in AI it leads to an increase in acceptance but has no influence for people with high general trust in AI.

Figure 1 shows the conceptual framework of this study and illustrates the proposed relationships of information about AI, acceptance of the AI-generated outcome, task-related expertise, and general trust in AI.

Figure 1: Conceptual framework of this study including hypotheses

Methodology

Research design

This study used a 2 (information about AI: information vs. no information) x task-related expertise (OK and SK measured separately as continuous variables) x general trust in AI (measured as a continuous variable) between-subjects design. The between-subjects design was used to avoid carryover effects between conditions. Table 1 gives an overview of the conditions and variables used in this study.

IV: Information about AI | IV: No information about AI (Control)
1. DV: acceptance of the AI-generated outcome | DV: acceptance of the AI-generated outcome
2. Moderator: general trust in AI | Moderator: general trust in AI
3. Moderator: task-related expertise (SK + OK) | Moderator: task-related expertise (SK + OK)
4. Manipulation check: understanding of AI | Manipulation check: understanding of AI
5. Control variables & demographics | Control variables & demographics
N = 140 | N = 140

Table 1: Overview of the conditions and variables of this study

Data collection

To collect responses, the experiment was set up in the form of an online questionnaire. This method was time efficient, affordable, and offered the opportunity to reach many people (Reips, 2002). Furthermore, an online questionnaire offered high internal validity, thanks to the high degree of control over variables and the random allocation of participants to conditions (Evans & Mathur, 2005). The data collection took place in April 2018. The experiment was conducted with Qualtrics, an online platform for quantitative research, and the participants could choose to take the experiment in English or German. To acquire 140 participants per condition, this study aimed for a sample size of at least 280 participants. The participants were acquired through convenience sampling and were reached through the student network of the University of Amsterdam, social platforms such as Facebook, Xing, and LinkedIn, as well as word-of-mouth. Through this method of sampling, it was possible to collect data quickly and economically (Gravetter & Forzano, 2015).

Procedure

First, the participants were randomly assigned to either a condition that showed a text about AI or a condition that showed a text about solid-state batteries, which served as the control group. Then the participants were introduced to a customer service scenario and handled three different customer complaints with the help of a fictitious AI-enabled decision aid. Afterward, participants stated their general trust in AI and answered questions to measure their SK and OK in both the field of Customer Relationship Management (CRM) and AI. Finally, they shared their demographic information (age, gender, mother tongue, and education) for control purposes and reported any suspicion or question they might have about the survey (see Appendix for the questionnaire).

Stimulus

Information about and understanding of the underlying mechanisms have been emphasized as a principal factor in the acceptance of algorithmic recommendations (Yeomans, Mullainathan, Shah, & Kleinberg, 2018). To confirm these findings for AI, each respondent was randomly assigned to either the manipulated condition that showed a text about AI, or the control condition that showed a text about solid-state batteries.

In accordance with the classification of scientific knowledge, the text offered both factual and procedural information. Factual knowledge is understood as the knowledge of scientific facts and concepts, which involves a vocabulary of basic scientific constructs to make sense of the essential points. Procedural knowledge, on the other hand, extends the factual knowledge by understanding how science generates knowledge and evaluates findings. While factual knowledge refers to the sound knowledge of scientific facts, procedural knowledge helps to determine the validity of scientific claims and influences judgment and attitudes towards science (Miller, 1998; Lee & Kim, 2018). Accordingly, the information about AI contained both factual and procedural statements to give a holistic introduction. Moreover, the credibility of the advisor has a substantial influence on a person’s reliance on their advice. An advisor proves to be credible when they are competent and trustworthy enough to deliver an unbiased perspective (Hodge, Mendoza, & Sinha, 2018; Mercer, 2005). Thus, the goal of giving information about AI was to increase the perceived advice competence of the AI-enabled decision aid.

Pretest of stimulus

Initially, the experiment was planned as a between-subjects design in which the independent variable had three levels (no information vs. simple information vs. complex information) to test various levels of information. The levels were manipulated through density and syntax modifications: the complex condition used specialist terminology, consisted of more adjectives and adverbs, and used a more complex sentence structure, whereas the number of words and the content remained almost the same (Gorin, 2005). A pretest of the manipulation was conducted, in which respondents (N = 33) were randomly assigned to a condition that showed a simple text about AI, a complex text about AI, or a text about solid-state batteries, which functioned as the control condition. After reading the text, they reported their perceived understanding of AI on four seven-point Likert scales (1 = not at all, 7 = very much; Cronbach’s alpha = 0.836): “How … knowledgeable, informed, smart, and confident do you feel about Artificial Intelligence?” as well as their perceived ability to work with an AI-enabled system on four seven-point Likert scales (1 = not at all, 7 = very much; Cronbach’s alpha = 0.911): “How … knowledgeable, informed, smart, and confident do you feel about working with an AI-enabled system?”. Additionally, they indicated the perceived complexity of the text on a seven-point Likert scale (1 = not at all, 7 = very much).

According to the pretest, there was no statistically significant effect of information about AI on perceived knowledge about AI, F(2,30) = 0.305, p > 0.05, or on perceived knowledge about AI-enabled systems, F(2,30) = 0.074, p > 0.05. Tukey post-hoc tests revealed no statistical difference between the simple information group and the complex information group for either the perceived knowledge about AI (p = 0.968) or the perceived knowledge about AI-enabled systems (p = 0.965).
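
As a minimal illustration only: the pretest’s one-way ANOVA and Tukey post-hoc comparisons could be reproduced in Python roughly as sketched below. The DataFrame and its column names (condition, ai_knowledge) are hypothetical placeholders, not the thesis’s actual data.

```python
# Sketch of the pretest analysis: one-way ANOVA across the three text
# conditions plus Tukey post-hoc comparisons. Column names are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def pretest_anova(df: pd.DataFrame) -> None:
    # Group the perceived-knowledge ratings by pretest condition.
    groups = [g["ai_knowledge"].to_numpy() for _, g in df.groupby("condition")]
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"One-way ANOVA: F = {f_stat:.3f}, p = {p_value:.3f}")

    # Pairwise Tukey HSD comparisons between the three conditions.
    tukey = pairwise_tukeyhsd(endog=df["ai_knowledge"], groups=df["condition"])
    print(tukey.summary())
```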

Despite the non-significant effects, the results showed two important findings. Firstly, the two manipulated conditions showed very similar results and did not work as intended. The perceived knowledge about AI showed the same tendencies for both the simple manipulation (M = 3.93, SD = 1.28) and the complex manipulation (M = 3.83, SD = 0.79). Similar tendencies are seen for the perceived knowledge about AI-enabled systems, with close results for both the simple manipulation (M = 3.38, SD = 0.95) and the complex manipulation (M = 3.25, SD = 1.46). This might be the case because simply changing the language of a short text is not a strong enough manipulation of information. To reinforce the manipulation, it might be necessary to modify other properties such as the scope, the medium, or the level of interaction with the information as well. However, this kind of manipulation was not feasible in this master thesis. Furthermore, the complex condition showed lower results in perceived understanding. These findings coincide with research in consumer knowledge, where complexity or amount of information is often used to manipulate the SK of consumers. Even though they objectively have more information and thus a better understanding, the perceived understanding is lower because they feel they do not fully understand the information they received (Hadar, Sood, & Fox, 2013). Secondly, although not statistically significant, the differences between the two manipulated conditions and the control condition showed the intended direction. Both the perceived knowledge about AI (M = 3.60, SD = 0.77) and the perceived knowledge about AI-enabled systems (M = 3.18, SD = 0.99) were lower for the control group. Consequently, the main study only used two conditions for the manipulation of the independent variable: information about AI vs. no information about AI. The simple manipulation was used in the main study to manipulate the information about AI, as the knowledge ratings for this text were slightly higher and therefore more contrasting compared to the control group.

Measures

Dependent variable: acceptance of the AI-generated outcome

In this study, advice utilization is used as a behavioral measure of trust in the AI-enabled system, which concurs with research in interpersonal advice (Mayer, Davis, & Schoorman, 1995). The participants were confronted with a customer service scenario and imagined being a customer service employee at a hotline where they manage customer complaints. They were introduced to a new system which supports their interaction with the customer and is enabled by AI. The AI-enabled system “listened” to the phone conversation with the customer and then advised whether to handle the customer complaint based only on the standard procedure or to give an additional benefit. The participants could choose to take that advice or to react based on their own judgment.

After being introduced to the scenario, the participants saw three customer complaints in random order. Every customer complaint was briefly introduced, then three options to react were presented (two additional benefits and no additional benefit), followed by the advice of the AI-enabled system. This advice was randomly generated so that there was not one correct way to handle the complaint. The respondents were offered three options to give them more room for their decision and more possibilities than simply agreeing or disagreeing with the AI-enabled decision aid. Afterward, the participants chose the option they wanted to communicate to the customer. Advice acceptance was operationalized as “matching”, specifically the consistency between the participant’s decision and the recommendation of the AI-enabled system. Furthermore, the respondents indicated whether their initial decision based on their own judgment would have been the same. Thus, advice acceptance is measured as “shifting” vs. “holding” the initial choice in addition to “matching” (Bonaccio & Dalal, 2006). This measurement is considered very strong, since respondents do not indicate whether they would hypothetically rely on the AI-generated advice but are forced to make an actual decision whether to rely on the AI-enabled system.
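
The twofold acceptance coding described above could be computed along the following lines; this is a sketch, and the column names (participant_id, initial_choice, final_choice, ai_advice) are assumed placeholders rather than the study’s actual variable names.

```python
# Sketch of the twofold advice-acceptance coding: "matching" the AI advice and
# "shifting" vs. "holding" the initial choice. One row per scenario response;
# the column names are assumed placeholders.
import pandas as pd

def code_acceptance(responses: pd.DataFrame) -> pd.DataFrame:
    coded = responses.copy()
    # "Matching": the communicated choice equals the AI-generated advice.
    coded["matched"] = (coded["final_choice"] == coded["ai_advice"]).astype(int)
    # "Shifting" vs. "holding": did the participant move away from the initial choice?
    coded["shifted"] = (coded["final_choice"] != coded["initial_choice"]).astype(int)
    # Summing the matches over the three scenarios yields the 0-3 acceptance score.
    coded["acceptance"] = coded.groupby("participant_id")["matched"].transform("sum")
    return coded
```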

Moderating variable: general trust in artificial intelligence

Initial trust in automation reflects the human operator’s diffuse perception of the algorithm’s trustworthiness before the interaction between human operator and machine (Merritt & Ilgen, 2008). Participants answered twelve items that gauged their level of general trust in AI (Jian, Bisantz, & Drury, 2000), such as “Artificial intelligence’s action will not have a harmful or injurious outcome.” and “I am confident in artificial intelligence.” (1 = strongly disagree, 7 = strongly agree). Since people tend to avoid extremes on pre-coded lists, this study used a seven-point Likert scale. It balances offering enough choice options for the participants with not confusing or overwhelming them (Lee R. M., 1993, p. 77).

Moderating variable: task-related expertise

In this study, task-related expertise reflects high task knowledge and expertise, resulting in high self-esteem and confidence in one’s judgment. It was operationalized in two measures, OK and SK, with OK representing the actual knowledge and SK reflecting the person’s confidence in their ability (Carlson, Vincent, Hardesty, & Bearden, 2009). Firstly, the participants indicated whether they had a professional background in the field of CRM and customer service. A scale from consumer knowledge research was used to measure the SK (Flynn & Goldsmith, 1999). Participants answered five items that gauged their level of SK, such as “I feel very knowledgeable about CRM.” and “Compared to most other people, I know more about CRM.” on a seven-point Likert scale (1 = strongly disagree, 7 = strongly agree). Additionally, a set of five multiple-choice questions was developed to test the participants’ OK of CRM. The five statements were answered on a “true”/“false” scale. There was no “I do not know” answer, which forced respondents to think and make up their mind about the proposed statements. The final OK measure was computed as the total number of correct responses, thus ranging from 0 to 5 (Park, Mothersbaugh, & Feick, 1994).
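
A sketch of how the SK and OK scores could be derived is shown below. The item column names and the true/false answer key are illustrative assumptions, and SK is assumed here to be averaged across its five items.

```python
# Sketch of the expertise scoring. The item columns and the true/false answer
# key are illustrative assumptions; SK is assumed to be averaged over its items.
import pandas as pd

SK_ITEMS = [f"sk_{i}" for i in range(1, 6)]                     # five 7-point Likert items
OK_ANSWER_KEY = {"ok_1": True, "ok_2": False, "ok_3": True,     # hypothetical key for the
                 "ok_4": False, "ok_5": True}                   # five true/false statements

def score_expertise(df: pd.DataFrame) -> pd.DataFrame:
    scored = df.copy()
    # Subjective knowledge: mean of the five Likert items (1-7).
    scored["sk_score"] = scored[SK_ITEMS].mean(axis=1)
    # Objective knowledge: number of correct true/false answers (0-5).
    correct = pd.concat({item: scored[item] == key for item, key in OK_ANSWER_KEY.items()}, axis=1)
    scored["ok_score"] = correct.sum(axis=1)
    return scored
```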

Control variables

The advice contained a benefit. For every scenario, the respondents were confronted with three options for how to react to the customer complaint. Two of the options contained an additional benefit for the customer, and the third option did not include an additional benefit for the customer. This could lead to the tendency that people might choose the options including a benefit more often, which in turn might affect the acceptance of the AI-generated outcome in case it suggests not giving the customer an additional benefit. Thus, to control for whether the advice contained a benefit, it was operationalized as “yes”/“no” per scenario and, on an aggregated level, as the number of advices containing a benefit per respondent (0 = no advice, 3 = all advices).

Expertise in the field of AI. For valid results, it is essential that the prior level of knowledge about AI is not significantly high, since this would undermine the manipulation. Therefore, participants indicated whether they have a professional background in AI.

Manipulation check: understanding of AI. Furthermore, a manipulation check was conducted to ensure that the information about AI significantly increased the understanding of AI. SK and OK about AI were measured using the same procedure as for the task-related expertise.

Personal information. Personality traits and demographics can affect a person’s response to automation or AI (Schaefer, Chen, Szalma, & Hancock, 2016). Thus, participants indicated demographic information regarding age, gender, mother tongue, and educational background.

Statistical procedure

The statistics tool IBM SPSS was used for descriptive statistics and data analysis. To test hypothesis 1, whether information about AI influences acceptance of the AI-generated outcome, a one-way ANOVA was conducted. To include the covariates in the analysis, a one-way ANCOVA was performed as well. To test hypotheses 2 and 3, whether the relationship between information about AI and acceptance of the AI-generated outcome is moderated by either task-related expertise or general trust in AI, the Preacher & Hayes Model 1 of the PROCESS macro for IBM SPSS was executed.
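
Although the analyses were run in IBM SPSS, the same steps can be sketched in Python with statsmodels: the moderation test corresponds to an OLS regression with an interaction term (PROCESS Model 1 additionally mean-centers the predictors and probes simple slopes). The column names below (acceptance, info, trust, age, gender, education) are assumed placeholders.

```python
# Sketch of the main analyses with statsmodels. The column names are assumed
# placeholders; the moderation model mirrors PROCESS Model 1 as an OLS
# regression with an interaction term.
import statsmodels.api as sm
import statsmodels.formula.api as smf

def run_analyses(df):
    # H1: one-way ANOVA of the information condition on outcome acceptance.
    anova = sm.stats.anova_lm(smf.ols("acceptance ~ C(info)", data=df).fit(), typ=2)

    # H1 with covariates: one-way ANCOVA.
    ancova_fit = smf.ols("acceptance ~ C(info) + age + C(gender) + C(education)", data=df).fit()
    ancova = sm.stats.anova_lm(ancova_fit, typ=2)

    # H2/H3: moderation via an interaction term, shown here for general trust in AI;
    # task-related expertise (SK, OK) would be tested in the same way.
    moderation = smf.ols("acceptance ~ info * trust", data=df).fit()
    return anova, ancova, moderation
```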

Results

Descriptive statistics and frequencies

In total, 313 people (54.6% female, mean age = 31.7) participated in this study without compensation. 85.6% of the participants had German nationality, and 83% had obtained a university degree. Respondents were randomly assigned to one of the two conditions. Overall, 163 people saw the text of the manipulated condition about AI and 150 people saw the text of the control condition about solid-state batteries. The normality and outlier checks did not reveal any cases that needed to be excluded from the analysis. Furthermore, 21 participants indicated a professional background in AI. The following analyses have been conducted both including and excluding these 21 participants. As both analyses showed the same effects, the results including all 313 participants are reported in the following section.

The dependent variable “acceptance of the AI-generated outcome” was operationalized as a continuous scale by taking the sum of the matches between the respondent’s choice and the AI-generated advice across the three scenarios in the study. Low acceptance meant that none of the respondent’s choices matched the AI-generated advice, whereas high acceptance meant that all of the respondent’s choices matched the AI-generated advice. Thus, the dependent variable ranged from 0 to 3 (M = 1.55, SD = 0.87).

The moderating factor “task-related expertise” was measured as two separate constructs, OK and SK. Task-related expertise (SK) showed a platykurtic distribution with a negative kurtosis of -1.139 (M = 3.41, SD = 1.70). Task-related expertise (OK), on the other hand, had a normal distribution (M = 2.71, SD = 1.07). The mean of the second moderating factor “general trust in AI” was equal to 3.88 (SD = 0.98), suggesting that the respondents had a slightly positive attitude towards AI.

Reliability analysis for scales

General trust in artificial intelligence. All participants indicated their general trust in AI with the help of a twelve-item questionnaire developed by Jian et al. (2000). It contained statements such as "Artificial intelligence's action will not have a harmful or injurious outcome." and "I am confident in artificial intelligence." and was answered on a seven-point Likert scale (1 = strongly disagree, 7 = strongly agree). Cronbach's alpha was used for the reliability analysis to measure the internal consistency of the scale. With Cronbach's alpha = 0.864 > 0.70, the scale showed high reliability (Cronbach, 1951). All questionnaire items had a good correlation with the total score of the scale, and none of the items would substantially change the reliability if deleted.
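For reference, Cronbach's alpha for a scale of k items is defined by the standard formula below (shown here for clarity; it is not reproduced from the thesis materials):

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
```

where \sigma^{2}_{Y_i} is the variance of item i and \sigma^{2}_{X} is the variance of the total scale score; values above 0.70 are conventionally treated as acceptable reliability.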

Task-related expertise: subjective knowledge. The participants indicated their SK about CRM through a five-item scale created by Flynn & Goldsmith (1999). Participants answered items such as "I feel very knowledgeable about CRM." and "Compared to most other people, I know more about CRM." on a seven-point Likert scale (1 = strongly disagree, 7 = strongly agree). Again, Cronbach's alpha was used to measure the internal consistency of the scale. With Cronbach's alpha = 0.964 > 0.70, the scale showed very high reliability (Cronbach, 1951). All questionnaire items had a good correlation with the total score of the scale, and none of them would substantially change the reliability if deleted.

Task-related expertise: objective knowledge. The five multiple-choice questions used to determine the OK about CRM discriminated between experts and novices. Experts (15 persons with a professional background and working experience) answered M = 3.87 (SD = 0.915) questions correctly on average (range: 3-5); novices (29 participants without a professional background who rated their knowledge as very low [1 on a seven-point Likert scale]) answered only M = 1.48 (SD = 0.634) questions correctly on average (range: 0-2). This difference was significant, t(42) = -10.135, p < 0.001, indicating that the multiple-choice questions were an appropriate instrument to assess objective knowledge amongst participants.
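This comparison corresponds to an independent-samples t-test with pooled variances (df = 15 + 29 - 2 = 42). A minimal sketch, assuming two hypothetical arrays of correct-answer counts:

```python
from scipy import stats

def compare_expert_novice(novice_scores, expert_scores):
    """Student's t-test with equal variances assumed, matching the reported df of 42."""
    t_value, p_value = stats.ttest_ind(novice_scores, expert_scores)
    # A negative t-value indicates that the first group (novices) scored lower on average.
    return t_value, p_value
```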

Correlation analysis

Table 2 shows an overview of the means, standard deviations, and Pearson's correlations of the survey items. Information about AI and acceptance of the AI-generated outcome were not significantly related. The two measures of task-related expertise were moderately positively correlated (r = 0.199, p < 0.01). This coincides with research on consumer knowledge, which indicates that SK and OK are not strongly positively related and are thus treated as two distinct constructs (Carlson, Vincent, Hardesty, & Bearden, 2009). However, neither variable had a significant relationship with acceptance of the AI-generated outcome, and both showed only minimal effect sizes. General trust in AI was slightly positively correlated with acceptance of the AI-generated outcome (r = 0.123, p < 0.05). Furthermore, task-related expertise (SK) was moderately positively correlated with general trust in AI (r = 0.215, p < 0.01). This concurs with automation research, which considers subject-matter expertise as one of many antecedents of initial trust (Hoff & Bashir, 2015). The control variable for whether the advice contained a benefit was not correlated with any variable in this study's model; therefore, this control variable was left out of the further analysis. The control variable age was slightly negatively correlated with general trust in AI as well as with both measures of task-related expertise, indicating that older participants were more pessimistic towards AI and had less expertise in the field of CRM. Moreover, the control variable education was slightly positively correlated with both task-related expertise measures, which suggests that a higher educational level is reflected in higher task-related expertise. Furthermore, gender was moderately negatively related to task-related expertise (SK), indicating that women estimated their task-related expertise lower than men.


| Variable | M | SD | N | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 Information about AI | 0.52 | 0.50 | 313 | – | | | | | | | | | |
| 2 Acceptance of the AI-generated outcome | 1.55 | 0.87 | 313 | -0.03 | – | | | | | | | | |
| 3 General trust in AI | 3.88 | 0.98 | 313 | -0.06 | 0.123* | (0.864) | | | | | | | |
| 4 Task-related expertise (SK) | 3.41 | 1.70 | 313 | 0.00 | 0.01 | 0.215** | (0.964) | | | | | | |
| 5 Task-related expertise (OK) | 2.71 | 1.07 | 313 | 0.00 | -0.03 | 0.04 | 0.199** | – | | | | | |
| 6 Control: Benefits | 2.04 | 0.83 | 313 | -0.05 | 0.05 | -0.10 | 0.03 | -0.03 | – | | | | |
| 7 Control: Age | 31.70 | 11.50 | 313 | -0.03 | 0.05 | -0.169** | -0.171** | -0.152** | -0.02 | – | | | |
| 8 Control: Gender | 1.55 | 0.50 | 313 | 0.09 | 0.01 | -0.10 | -0.231** | -0.07 | 0.00 | 0.05 | – | | |
| 9 Control: Language | 1.21 | 0.54 | 313 | -0.01 | 0.04 | 0.127* | 0.09 | -0.07 | 0.01 | -0.232** | 0.09 | – | |
| 10 Control: Education | 3.08 | 0.96 | 313 | 0.03 | -0.01 | -0.01 | 0.145** | 0.114* | 0.05 | 0.202** | -0.09 | 0.00 | – |

*. Correlation is significant at the 0.05 level (2-tailed).

**. Correlation is significant at the 0.01 level (2-tailed).

Note: Values in parentheses on the diagonal are Cronbach's alphas.

Table 2: Correlation matrix: means, standard deviations, and Pearson's correlations


Hypotheses testing

Primary effect: ANOVA and ANCOVA

The assumptions of normality and independence were tested and met. Furthermore, the assumption of homogeneity of variances was tested, and the results of Levene's test indicated that there was no statistically significant difference in variances between the two conditions, F(1,311) = 0.052, p > 0.05. Therefore, a one-way ANOVA between the independent and dependent variable at a 95% confidence interval was used to test the hypothesis that information about AI influences acceptance of the AI-generated outcome. The results in Table 4 show that there is no statistically significant effect of information about AI on acceptance of the AI-generated outcome, F(1,311) = 0.354, p > 0.05. As shown in Table 3, the two groups did not differ significantly in their acceptance of the AI-generated outcome: the group with information about AI accepted M = 1.52 (SD = 0.88) advices and the group without information about AI accepted M = 1.58 (SD = 0.86) advices in this experiment.

| | N | M | SD |
|---|---|---|---|
| No information about AI | 150 | 1.58 | 0.86 |
| Information about AI | 163 | 1.52 | 0.88 |
| Total | 313 | 1.55 | 0.87 |

Table 3: One-way ANOVA descriptive statistics

| | SS | df | MS | F | p |
|---|---|---|---|---|---|
| Information about AI | 0.27 | 1 | 0.27 | 0.354 | 0.55 |
| Error | 235.21 | 311 | 0.76 | | |
| Total | 235.48 | 312 | | | |

Table 4: One-way ANOVA

A factorial ANCOVA was conducted to control for the covariates age, gender, language, and education. All assumptions for an ANCOVA were tested and met. The test of between-subject effects showed no significant effects for any covariate, all p > 0.05 (see Table 5). Furthermore, the small effect sizes of the variables (partial eta squared) indicated that the studied variables explained only very little or none of the variance in acceptance of the AI-generated outcome. Overall, these results suggest that hypothesis 1 needs to be rejected: information about AI did not influence acceptance of the AI-generated outcome.

| | SS | df | MS | F | p | η² |
|---|---|---|---|---|---|---|
| Age | 0.841 | 1 | 0.841 | 1.104 | 0.29 | 0.004 |
| Gender | 0.011 | 1 | 0.011 | 0.014 | 0.91 | 0.000 |
| Language | 0.540 | 1 | 0.540 | 0.709 | 0.40 | 0.002 |
| Education | 0.106 | 1 | 0.106 | 0.139 | 0.71 | 0.000 |
| Information about AI | 0.231 | 1 | 0.231 | 0.303 | 0.58 | 0.001 |
| Error | 234.01 | 307 | 0.762 | | | |
| Total | 987.00 | 313 | | | | |

Table 5: ANCOVA test of between-subject effects
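A rough equivalent of this ANCOVA outside of SPSS can be specified as a linear model with the condition dummy and the covariates entered as single-degree-of-freedom terms, mirroring the df = 1 rows in Table 5. The sketch below assumes hypothetical column names in a pandas DataFrame df and requests Type III sums of squares, as SPSS does by default.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def ancova(df):
    """OLS with the condition factor plus numerically coded covariates; Type III ANOVA table."""
    model = smf.ols(
        "acceptance ~ C(condition) + age + gender + language + education",
        data=df,
    ).fit()
    return sm.stats.anova_lm(model, typ=3)
```

Partial eta squared per effect can then be obtained as SS_effect / (SS_effect + SS_error).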

Additionally, the manipulation check showed that the information about AI did not significantly change the SK and OK about AI for the participants in the manipulated condition. The scores for SK about AI were only moderately higher in the information condition (M = 3.03, SD = 1.52) than in the control condition (M = 2.83, SD = 1.22); t(311) = -1.32, p > 0.05. Similarly, the scores for OK about AI were only slightly higher in the information condition (M = 2.36, SD = 1.09) than in the control condition (M = 2.31, SD = 1.18); t(311) = -0.38, p > 0.05.

Moderating effects: PROCESS

The moderating effects of general trust in AI and task-related expertise on the relationship between the independent variable information about AI and the dependent variable acceptance of the AI-generated outcome were tested in separate analyses using the Preacher & Hayes Model 1 of the PROCESS macro for IBM SPSS. Furthermore, the control variables age, gender, language, and education were included in the analyses.
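Conceptually, PROCESS Model 1 is an OLS regression with an interaction term, after which the conditional (simple) effects of the manipulation are evaluated at low, mean, and high values of the moderator. A minimal Python sketch, assuming hypothetical columns "acceptance" (Y), "condition" (X, coded 0/1), and a moderator column such as "trust" or "expertise_sk" (M):

```python
import statsmodels.formula.api as smf

def moderation(df, moderator):
    """OLS with X, the mean-centered moderator, and their interaction (PROCESS Model 1 analogue)."""
    data = df.copy()
    data["m_c"] = data[moderator] - data[moderator].mean()  # mean-center M
    model = smf.ols("acceptance ~ condition * m_c", data=data).fit()
    b = model.params
    # Conditional effect of X at a centered moderator value v: b_X + b_interaction * v,
    # evaluated here at -1 SD, the mean, and +1 SD of the moderator.
    sd = data["m_c"].std()
    simple_slopes = {v: b["condition"] + b["condition:m_c"] * v for v in (-sd, 0.0, sd)}
    return model, simple_slopes
```

PROCESS additionally reports standard errors and Johnson-Neyman regions for these conditional effects, which this sketch omits.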

Task-related expertise: subjective knowledge. The results in Table 6 show that the regression coefficient for the interaction term (XM) was 0.11 and was not statistically different from zero, t(313) = 1.07, p = 0.29. The PROCESS analysis was also conducted including the control variables, which did not show significant results. A closer inspection of the conditional effects did not reveal further insights (Table 7). Furthermore, the Johnson-Neyman analysis did not show any regions of significance. Thus, task-related expertise (SK) did not have a moderating effect.

| | | Coefficient | SE | t | p |
|---|---|---|---|---|---|
| Intercept | i1 | 1.58 | 0.07 | 22.22 | 0.00 |
| Information about AI (X) | c1 | -0.06 | 0.10 | -0.59 | 0.55 |
| Task-related expertise: SK (M) | c2 | -0.05 | 0.07 | -0.72 | 0.47 |
| Interaction (XM) | c3 | 0.11 | 0.10 | 1.07 | 0.29 |

R² = 0.0048, F(3,309) = 0.5017, p > 0.05

Table 6: PROCESS model summary for task-related expertise (SK)

| Task-related expertise (SK) | Effect | SE | t | p |
|---|---|---|---|---|
| -1.30 | -0.20 | 0.16 | -1.21 | 0.23 |
| 0.00 | -0.06 | 0.10 | -0.60 | 0.55 |
| 1.17 | 0.07 | 0.15 | 0.43 | 0.67 |

Table 7: Conditional effects at levels of task-related expertise (SK)

| | | Coefficient | SE | t | p |
|---|---|---|---|---|---|
| Intercept | i1 | 1.58 | 0.07 | 22.19 | 0.00 |
| Information about AI (X) | c1 | -0.06 | 0.10 | -0.59 | 0.55 |
| Task-related expertise: OK (M) | c2 | -0.01 | 0.07 | -0.19 | 0.85 |
| Interaction (XM) | c3 | -0.02 | 0.10 | -0.22 | 0.82 |

R² = 0.0022, F(3,309) = 0.2243, p > 0.05

Table 8: PROCESS model summary for task-related expertise (OK)

| Task-related expertise (OK) | Effect | SE | t | p |
|---|---|---|---|---|
| -0.67 | -0.04 | 0.12 | -0.37 | 0.71 |
| 0.27 | -0.06 | 0.10 | -0.63 | 0.53 |
| 1.21 | -0.09 | 0.16 | -0.55 | 0.58 |

Table 9: Conditional effects at levels of task-related expertise (OK)

Task-related expertise: objective knowledge. Similarly, the results in Table 8 show that the regression coefficient for the interaction term (XM) was -0.02 and was not statistically different from zero, t(313) = -0.22, p = 0.82. Again, the PROCESS analysis was conducted including the control variables, which did not show significant results. A closer inspection of the conditional effects did not reveal further insights (Table 9). Furthermore, the Johnson-Neyman analysis did not show any regions of significance either. Thus, task-related expertise (OK) did not have a moderating effect either.

Even though the results were not significant, they suggested that task-related expertise (SK) was more influential and relevant in this context than task-related expertise (OK), as SK had a larger interaction effect than OK. Furthermore, the results showed that the effect of task-related expertise (SK) changed from negative to positive across its levels, whereas for task-related expertise (OK) it stayed similarly negative at all levels. However, based on these findings, hypothesis 2 needs to be rejected: the relationship between information about AI and acceptance of the AI-generated outcome did not depend on task-related expertise.

General trust in artificial intelligence. As shown in Table 10, the regression coefficient for the interaction term (XM) was 0.15 and was not statistically different from zero, t(313) = 1.52, p = 0.13. A further PROCESS analysis was conducted including the control variables, which did not show significant results. Furthermore, a closer inspection of the conditional effects did not reveal further insights (Table 11). Again, the Johnson-Neyman analysis did not show any regions of significance. Even though the results were not significant, they showed that the effect of general trust in AI changed from negative to positive across its levels. However, according to the results, hypothesis 3 needs to be rejected: the relationship between information about AI and acceptance of the AI-generated outcome was not dependent on general trust in AI.
