
The effects of perceived risk and perceived trust on the acceptance of AI systems

A case study within the Dutch financial sector

University of Amsterdam - Amsterdam Business School

Executive Programme in Management Studies – Strategy Track

Student Name: Chantal Donkervoort

Student ID: 13117130

Supervisor: dr. Hüseyin Güngör

Version: Final version 1.0

Date: March 31, 2022

EBEC approval number: EC 20211123021115


Statement of Originality

This document is written by Student Chantal Donkervoort who declares to take full responsibility for the contents of this document. I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it. The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.

Signature ___________________________________________


Table of contents

Statement of originality
Abstract
1. Introduction
2. Literature review
2.1 Definition of AI
2.2 AI within the financial industry
2.3 TAM Model for the acceptance of AI systems
2.4 Knowledge about AI
2.5 Perceived Risk and Perceived Trust
2.6 Control variables
2.7 Research Question and conceptual model
3. Method
3.1 Research design
3.2 Sampling Technique
3.3 The measures
4. Results
4.1 From raw data to the data set
4.2 Demographic analysis
4.3 Reliability analysis
4.4 Correlation analysis
4.5 Hypotheses testing
4.5.1 Direct effects
4.5.2 Indirect effects
4.6 Summary of the results
5. Discussion
5.1 General Discussion
5.2 Managerial implications
5.3 Limitations and suggestions for future research
6. Conclusion
7. References
8. Appendix: Questionnaire

Abstract

The opportunities of Artificial Intelligence (AI) within the financial industry have become more important, since fintech organizations are ahead of the traditional financial industry. The development of this technology will impact the financial industry and how its employees perform their tasks and make decisions.

This study investigates whether the perceived risk or perceived trust of an employee affects the behavioral intention to use an AI system. For this study, the existing Technology Acceptance Model is used in combination with added variables. To test the hypotheses about these variables within the Technology Acceptance Model, an online survey was distributed within the Dutch financial industry.

The results of this study show that there is a moderating effect of perceived risk and perceived trust on the relationship between knowledge about AI and perceived usefulness. Therefore, reducing employees' perceived risk and strengthening their perceived trust could be considered when implementing AI within the organization.


1. Introduction

In recent years, Artificial Intelligence (AI) has become more and more useful for financial institutions to implement in their organization. AI is changing the financial industry from a careful approach, in which AI is used in small parts of the organization, to growing confidence in AI and its use throughout the whole organization. Moreover, the development of new AI systems will accelerate even further (McKinsey, 2018). This change impacts the development of different financial institutions. Fintech organizations, for example, see AI as the most important future technology: in a report by N-iX, 67 percent of fintech organizations consider AI the most impactful technology for the coming decade. Although AI is not yet fully adopted by fintech organizations, they are taking the first steps towards adoption by using big data and machine learning, where automation is the critical driver because it increases productivity and reduces the workload of their employees (Kitsela, 2021). Academic research on the adoption of AI has reached different conclusions: on the one hand, some argue that humans should work together with AI systems (Davenport & Bean, 2017; Jarrahi, 2018); on the other hand, others argue that employees will be replaced by AI systems so that organizations become ever more efficient (Frischmann & Selinger, 2017).

Nevertheless, if organizations want to use AI to its full potential, AI systems require human supervision, and fintech organizations need a strong team of experts to implement relevant AI and keep the systems running. Such AI-powered systems can support human intelligence in making decisions, through which customer value can be increased and performance improved (Jarrahi, 2018). Moreover, they allow organizations to enhance the quality of their decisions beyond what artificial intelligence could achieve on its own (Colson, 2019). Although AI is still a developing technology and its full potential is yet to be discovered, it is essential for fintech organizations to adopt it early and gain a competitive advantage over traditional financial organizations (Kitsela, 2021). This could be a threat to the more traditional financial organizations, where AI is not yet widely accepted and adopted because many institutions struggle with scaling AI technologies across the whole organization (McKinsey, 2019). Therefore, AI is now mainly used for support or control activities within the organization (NVB, 2020). The question arises: why are financial institutions not willing to start using AI systems within the whole organization?


The adoption and use of information technology in the workplace remain a central concern of information systems research and practice (Venkatesh and Davis, 2000). Much research has been done to determine whether an employee within an organization is likely to accept a new technology, and the Technology Acceptance Model (TAM) is a commonly used theoretical framework. The TAM of Davis (1989) explains a substantial proportion of the variance in usage intentions and behavior (Figure 1). This model conjectures that an individual's behavioral intention to use a system is determined by its perceived usefulness and perceived ease of use.

TAM proposes that external variables, such as the characteristics of the behavioral target, influence behavioral intentions only indirectly by "influencing the individual's beliefs, evaluations, normative beliefs, motivation to comply, or the importance weights on the attitudinal and subjective norm components" (Fishbein and Ajzen, 1975; Venkatesh and Davis, 2000).

Several researchers suggest that for a successful collaboration between AI and employees, trust in the technology is needed (Gaines-Ross, 2016; Marler et al., 2009).

In order to achieve value creation by using AI tools, financial organizations need a careful analysis of the risks, a sophisticated understanding of the tools, and sufficient initial investment to avoid inadvertent value destruction (Canhoto & Clear, 2020). Furthermore, to adopt AI, financial organizations need technical, strategic, and administrative skills; they should arrange a clear AI strategy aligned with the critical risk areas and inform employees about those key risk areas in order to adopt an innovation (Nelson and Winter, 1982; Zekos, 2021). Moreover, a human-like AI device has higher acceptance when it can show empathy in its interaction with the human consumer (Pelau et al., 2021). Without understanding how people incorporate information from algorithms into their decisions, firms risk misusing the opportunities presented by technological advances (Logg, Minson & Moore, 2019).

Furthermore, research by Güngör (2020) indicates that having high 'self-assessed' knowledge on the subject of AI influences the perception of AI positively. Therefore, this paper focuses on the role of perceived risk, perceived trust, and AI knowledge in the acceptance of AI-driven systems within organizations in the financial industry, in order to assess which risks should be eliminated to apply AI to its full potential so that it can benefit financial institutions without disturbance. This results in the following research question:

'What is the effect of AI knowledge in combination with the perceived risk and perceived trust on the acceptance of AI-driven decision systems within the financial industry?'


Figure 1: Technology acceptance model (Davis, 1986)

The first part of this research presents the in-depth literature review, which provides an overview of all the relevant concepts used within this research. This is followed by a description of the applied methodology and the data collection techniques used, including all measurements of the variables used in the analysis. Thirdly, the analysis results will be examined. Finally, we present the theoretical and practical implications and limitations of this study, propose options for future research, and draw the conclusions of this research.


2. Literature review

The adoption of artificial intelligence within the financial industry is the central topic of this research. This topic has been examined by looking at key articles related to the acceptance of artificial intelligence and adding more recent literature, which in some cases gives a critical or opposite view on the key articles. The main concepts that apply to this research will be discussed in this section. First, based on different literature, a definition of AI will be given. After that, literature about AI in the financial industry will be discussed. Furthermore, the TAM model will be discussed, including all the variables in this model. The differentiating variables in this research are perceived risk, perceived trust, and knowledge about AI, which will be discussed in the last part of this literature review. Finally, an overview of the conceptual model and the associated research question and hypotheses will be set out.

2.1 Definition of AI

There are many different definitions of AI. In this research, based on different literature, we have chosen the following description: AI is a computer-driven system that is trained to perform a specific task better over time by learning, and that could perform most (data-driven) activities within an organization faster, better, and cheaper than humans (Burgess, 2018; McCarthy, 2004; Finlay, 2018; Kaplan & Haenlein, 2019). In addition, McCarthy (2004) argues that AI can be seen as the science and engineering of making intelligent machines, especially intelligent computer programs, and that it is related to the similar task of using computers to understand human intelligence. In this research, we will focus on strong AI, a form of AI in which a machine has intelligence equal to humans, with a self-aware consciousness that can solve problems, learn, and plan for the future (IBM, 2020).

2.2 AI within the financial industry

The added value of using AI within the whole organization is now increasingly recognized within the financial industry, because AI systems could increase the performance of financial institutions, which could result in increased value for their customers (Lui & Lamb, 2018; Burgess, 2018). At this moment, financial organizations adopt AI only for some technology activities within some departments. For example, they use big data for fraud prevention, and they provide formal online financial advice based on data analysis (NVB, 2020). AI systems have an advantage over humans alone in activities related to data collection. Within the financial industry, there are many complex processes with an abundance of variables and elements that need to be taken into consideration. Using the superior computational, analytical, and quantitative capabilities of AI equips human decision-makers with the tools to properly process complex situations and could support organizational decision-making (Jarrahi, 2018). When employees rely only on their intuition, they trust their gut feeling based on past experiences and make decisions without conscious attention. This decision-making approach goes beyond simple, rational thinking and allows an employee to decide by applying a holistic and abstract view (Dane et al., 2012; Salas et al., 2010).

On the other hand, analytical decision-making relies on a more conscious approach where information is gathered and analyzed to develop different solutions (Dane et al., 2012).

Furthermore, financial organizations should understand how people incorporate information from algorithms into their decisions. Otherwise, they might risk misusing the opportunities presented by technological advances (Logg, Minson & Moore, 2019). Decision-making within organizations faces three challenges: uncertainty, complexity, and the decision's equivocality. To face the uncertainty and complexity of decision-making, collaboration with AI-driven decision systems could help human decision-makers (Jarrahi, 2018; Choo, 1991).

Because AI technologies advance rapidly, organizations must remain vigilant to the strengths and limitations of AI in fully delegated and hybrid human-AI decision-making structures (Shrestha et al., 2019). Nevertheless, in introducing AI to organizational decision-making, managers must first build on the organization's internal capabilities to decide on the inputs to the algorithm, the algorithms themselves, and the interpretation of predictions. In addition, for adopting new technologies within an organization, the agility of a firm must be sufficient. It is also essential that organizations align the choice for agility with the internal and dynamic capabilities of the firm and its strategy and position in the market (Teece, Peteraf & Leih, 2016). Moreover, financial organizations could struggle because of a lack of a clear AI strategy, a lack of knowledge, or a lack of investment in technology (McKinsey, 2019; Burgess, 2018; Cubric, 2020). Therefore, financial organizations should arrange a clear AI strategy aligned with the key risk areas and inform their employees about those key risk areas. In order to achieve value creation by using AI tools, a careful analysis of the risks, a sophisticated understanding of the tools, and sufficient initial investment are therefore essential when adopting AI (Zekos, 2021; Canhoto & Clear, 2020).


However, many financial institutions struggle with scaling AI technologies across the whole organization (McKinsey, 2019). Therefore, AI is now mainly used to support or control activities within the organization (NVB, 2020). A report from the Boston Consulting Group (BCG) (2017), involving more than 3,000 international business executives, managers, and analysts, also shows the willingness of employees to adopt AI within the organization: more than 75% of the respondents believe that AI will enable organizations to gain or establish a competitive advantage by using AI systems within the whole organization. However, 40% of the respondents see AI as a strategic risk. A significant driver of adopting AI within an organization could be the employees' perception of the risk or uncertainty of the AI system (Hoeffler, 2003; Ostlund, 1974; Rogers, 2003). This could be a reason why AI is not yet fully adopted within the financial industry. Therefore, we will add the variables 'knowledge about AI' and 'perceived risk' regarding the use of an AI system to the TAM model.

2.3 TAM Model for the acceptance of AI systems

The TAM model of Davis (1989) is a widely used model based on the Theory of Reasoned Action (TRA), created for making predictions on the acceptance and use of new information technologies and systems (Fishbein and Ajzen, 1975; Davis et al., 1989). It could identify the variables that drive value for organizations' AI systems and employees' adaptability to use AI systems in their daily work. According to Davis (1989), the behavioral intention to use a new technology, such as an AI system, can be measured by the TAM, in which the individual employee's actual usage of the new technology is determined (Davis et al., 1989). Furthermore, Davis states that the goal of TAM is to "provide an explanation of the determinants of computer acceptance that is general, capable of explaining user behavior across a broad range of end-user computing technologies and user populations" (Davis et al., 1989, p. 985).

Some studies criticize the TAM and argue that the model's simplicity results in a lack of explanatory power for user behavior (Bagozzi, 2007). Moreover, they argue that the model is not designed to address the organizational context, but mainly the perception of an individual person (Ajibade, 2018). On the other hand, TAM has been applied in many studies to investigate employee acceptance of new technologies, where on average about 40% of the variance could be explained (Venkatesh & Davis, 2000). In addition, TAM posits two central beliefs: perceived ease of use and perceived usefulness. Although these were considered the two main components in TAM, later studies indicate that perceived usefulness is the main component, because perceived usefulness directly influences intention while perceived ease of use acts indirectly through perceived usefulness (Davis, 1989; Fishbein and Ajzen, 1975). Therefore, in this research, we focus on the effect of perceived usefulness on the behavioral intention to use an AI system. The behavioral intention will be considered as an employee's intention to use AI systems during their daily job. The perceived usefulness will be defined as the degree to which an employee believes that using an AI system in their work life could enhance their job performance (Davis, 1989; Saade and Bahli, 2005).

Many empirical studies of TAM have shown a significant effect of perceived usefulness on the behavioral intention to use a system (Davis, 1989; Venkatesh, 2000). This indicates that employees in the financial industry who consider AI systems useful for their job performance are likely to accept AI technology at work. Therefore, the following hypothesis will be tested in the research model of this study:

H1: Perceived usefulness will have a positive direct effect on behavioural intention to use an AI system.

2.4 Knowledge about AI

Several studies have shown that adoption behavior is influenced by the employees' prior experience with AI. Employees who are more knowledgeable about an AI system are more likely to adopt it; educating employees is therefore necessary to ensure AI adoption within the whole organization (Fountaine et al., 2019; Thong, 1999; Choi, Kim, & Kim, 2010; Schwartz et al., 2004). An earlier study (Güngör, 2020) showed that AI knowledge could affect perceived usefulness, arguing that a high 'self-assessed' knowledge level regarding AI influences the perception of AI in a positive manner. In addition, a significant amount of research supports a direct positive relationship between the knowledge a user has of a system and their acceptance of that system: users show an increase in both satisfaction and willingness to adapt to the system after receiving information about it, and explaining the system leads to increased understanding and a higher willingness to use it (Eastwood & Luther, 2016; Schwartz et al., 2004). Therefore, in this research we will use the effect of AI knowledge on the acceptance of AI systems within the whole model, where the hypothesis is that higher knowledge of AI will lead to a higher acceptance of AI systems. This results in the following hypotheses:

H2: There is a significant positive direct effect between actual knowledge about AI and Perceived usefulness.

H3: There is a significant positive direct effect between actual knowledge about AI and behavioural intention to use an AI system.

2.5 Perceived Risk and Perceived Trust

One of the main reasons that people within financial organizations are held back is that they are concerned about risks, also described as the uncertainty of the unknown (Huck et al., 2020). Decision-makers will make riskier decisions when the perceived risk of the output of an AI system is low. Employees who see the use of a new AI system as risky will perceive higher levels of risk, which deflates the perceived usefulness and therefore the behavioral intention to use an AI system (Sitkin & Weingart, 1995; Featherman and Fuller, 2003). However, if employees trust the AI system used within the organization and feel they can manage those risks, these broader anxieties will probably not surface (Slovic, 1987). Therefore, trust in AI systems is required when this uncertainty, and thus perceived risk, is present. Uncertainty may increase an employee's perceived risk, whereas technology-driven uncertainty exists because of the unpredictability of a technology, which is often beyond the control of the employee or the employer (Mitchell, 1999; Pavlou, 2003). In addition, Simon et al. (2020) argue that trust plays a vital role in AI implementation; their research found evidence for an effect of trust on perceived usefulness. If an employee does not feel threatened, the higher the level of trust in an AI system, the higher the behavioral intention to use it.

On the other hand, a high level of trust may cause problems for the employee's acceptance of new technology, because employees with too high a level of trust in AI are vulnerable to ignoring contradictory information (Lee, 2004). Therefore, we will investigate whether perceived trust and perceived risk could be essential factors in the relationship between knowledge about AI and behavioral intention, and in the relationship between knowledge about AI and perceived usefulness. The following hypotheses are formulated:

H4a: The relationship between knowledge about AI and intention to use an AI system is moderated by perceived trust.


H4b: The relationship between knowledge about AI and intention to use an AI system is moderated by perceived risk.

H5a: The relationship between knowledge about AI and Perceived usefulness is moderated by perceived trust.

H5b: The relationship between knowledge about AI and Perceived usefulness is moderated by perceived risk.

2.6 Control variables

As mentioned earlier, the main criticism of the TAM model is that it is questionable whether real behavioral intention can be measured, because hidden personality traits can often motivate behavior. This means that end-users of an information system may not necessarily base their acceptance of a new system on its perceived usefulness. Therefore, the model must consider age, gender, and educational level, which could influence the acceptance of a system (Ajibade, 2019). The control variables used in the conceptual model are 'age,' 'gender,' and 'educational level,' because these variables could influence the acceptance of an AI system (Ajibade, 2019). In addition, men and women could have different perceptions of technology (Gefen and Straub, 1997).

2.7 Research Question and conceptual model

Much empirical research has been done concerning the adoption of artificial intelligence, but there is a lack of systematic research in some early AI adoption sectors, such as the financial industry. Moreover, there is little available literature on the adoption of artificial intelligence in relation to perceived risk and perceived trust. The following research question is formulated:

'What is the effect of AI knowledge in combination with the perceived risk and perceived trust on the acceptance of AI-driven decision systems within the financial industry?'

The hypotheses for this research will be:

• H1: Perceived usefulness will have a positive direct effect on behavioural intention to use an AI system.

• H2: There is a significant positive direct effect between actual knowledge about AI and Perceived usefulness.


• H3: There is a significant positive direct effect between actual knowledge about AI and behavioural intention to use an AI system.

• H4a: The relationship between knowledge about AI and intention to use an AI system is moderated by perceived trust.

• H4b: The relationship between knowledge about AI and intention to use an AI system is moderated by perceived risk.

• H5a: The relationship between knowledge about AI and Perceived usefulness is moderated by perceived trust.

• H5b: The relationship between knowledge about AI and Perceived usefulness is moderated by perceived risk.

• H6: Behavioural intention to use AI systems is controlled by age.

• H7: Behavioural intention to use AI systems is controlled by educational level.

• H8: Behavioural intention to use AI systems is controlled by gender.

Figure 2: Conceptual model


3. Method

In this section, the methods of collecting and analysing the data are set out. First, the research design is described, followed by the sampling technique and the measurement scales.

3.1 Research design

This is a quantitative study using a survey with mostly questions that can be answered on a 7-point Likert scale. The survey was created in Qualtrics and starts by explaining the purpose of the research. It consisted of 23 questions and was only available in English. Before publishing the survey, a pre-test was conducted among direct colleagues within the author's financial organization. Based on their feedback, minor adjustments were made to the survey. Table 1 shows the design summary of the survey.

Table 1: Survey design summary

Method: Online survey via Qualtrics
Population size: 207,000
Sampling technique: Convenience sampling
Sample size: 384
Survey participants: Colleagues from the author's financial organization, the author's network within financial organizations, and the author's LinkedIn connections
Survey response time: November 26th, 2021 – December 10th, 2021


3.2 Sampling Technique

The population for this research consists of every employee working in the financial industry in the Netherlands. The latest data from UWV (2021) indicate that 207,000 people work in the Dutch financial sector. The ideal sample size is 384 employees (Qualtrics), based on a 95% confidence level and a 5% margin of error. The sampling technique used in this research is convenience sampling, using the author's network within the financial industry. The survey was sent out via email. The respondents had a time limit of two weeks to fill in the survey; after one week, they received a reminder via email. The survey can be found in Appendix A.
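To make the arithmetic behind this target concrete, the sketch below applies Cochran's sample-size formula with a finite population correction, which is one standard way to reproduce the 384 figure from the stated parameters. It is an illustration, not the calculator actually used; the function name and defaults are ours.

```python
import math

def sample_size(population: int, z: float = 1.96,
                margin: float = 0.05, p: float = 0.5) -> int:
    """Cochran's formula with finite population correction.

    Defaults follow the text: 95% confidence (z = 1.96), 5% margin of
    error, and maximum variability p = 0.5.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size (~384.2)
    n = n0 / (1 + (n0 - 1) / population)       # correct for the finite population
    return math.ceil(n)

print(sample_size(207_000))  # -> 384, the ideal sample size reported above
```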

3.3 The measures

The dependent variable, behavioral intention to use AI systems, is measured using the TAM of Davis (1989); the measurement scale from prior research by Venkatesh & Davis (2000) is used for this variable. The independent variable 'knowledge about AI' is measured with the 11-point scale of Güngör (2020), which measures the respondents' self-assessed knowledge of AI. This scale was adopted because its description was expected to be the most apparent to the survey respondents. The mediator variable 'perceived usefulness' is measured with the scale from prior research by Davis (1989): respondents answer three questions about the probability that an AI system will increase their job performance in the organizational context (Davis, 1989). For the moderating variable 'perceived risk,' respondents answer five questions about the risk they could perceive when using an AI system in their work environment, based on prior research (Nicolaou & McKnight, 2006). The other moderating variable, 'perceived trust,' is measured with three questions about the perceived trust of an employee when using an AI system in their work environment (Pennington et al., 2003). Table 2 below gives an overview of the measurement of all variables.

Table 2: Measurement

Source                      Variable                      Items                            Measurement scale
Güngör (2020)               Knowledge about AI            1 item, 11-point scale           0 (don't know anything about AI) to 10 (know a lot about AI)
Davis (1989)                Perceived Usefulness          3 items, 7-point Likert scale    "Strongly disagree" to "Strongly agree"
Nicolaou & McKnight (2006)  Perceived Risk                5 items, 7-point Likert scale    "Strongly disagree" to "Strongly agree"
Pennington et al. (2003)    Perceived Trust               3 items, 7-point Likert scale    "Strongly disagree" to "Strongly agree"
Venkatesh & Davis (2000)    Behavioural Intention to Use  7 items, 7-point Likert scale    "Strongly disagree" to "Strongly agree"


4. Results

This section presents the results of the various statistical analyses conducted using SPSS. First, the data cleaning is described; then, the demographic analysis is shown. Furthermore, the reliability of the scale items and the correlation analysis are explained. Lastly, the hypotheses are tested using regression analyses and the PROCESS model of Hayes (2018). This results in observing whether the proposed hypotheses are rejected or supported.

4.1 From raw data to the data set

For this study, an online survey was conducted via the survey tool Qualtrics. It was sent out via email to the target group using the convenience sampling method in the period of 26th November until 10th December. After that, the raw data from Qualtrics were exported to IBM SPSS Statistics V.26. Before the statistical analysis, the data were cleaned through analysis, preparation, and removal of invalid records, which included a manual check for irregularities. This check identified 24 incomplete responses, which were deleted from the data set. The data of the sub-questions were then transformed into variables by computing the scale means of the sub-questions with IBM SPSS Statistics formulas. After that, a normality check, a Pearson correlation table, and a scale reliability check were conducted in IBM SPSS Statistics; these tests did not result in a further reduction of the data set. Finally, the macro PROCESS modelling tool V4.0 from Andrew F. Hayes (2018) was loaded into SPSS so that the indirect effects of the model could be tested.
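As an illustration of these cleaning steps (the thesis used SPSS, not Python, so the sketch below is a hypothetical reconstruction with made-up column names and simulated data), incomplete responses are dropped and each construct's items are averaged into a scale mean, with item counts taken from Table 2:

```python
import numpy as np
import pandas as pd

# Simulated raw export: item counts per construct as in Table 2
# (3 usefulness, 3 trust, 5 risk, 7 intention items), 7-point scale.
rng = np.random.default_rng(0)
cols = ([f"pu_{i}" for i in range(1, 4)] + [f"pt_{i}" for i in range(1, 4)]
        + [f"pr_{i}" for i in range(1, 6)] + [f"bi_{i}" for i in range(1, 8)])
df = pd.DataFrame(rng.integers(1, 8, size=(173, len(cols))).astype(float),
                  columns=cols)
df.iloc[:24, 0] = np.nan  # mimic the 24 incomplete responses

df = df.dropna()          # remove incomplete responses
# Compute the scale means, as done with SPSS formulas in the thesis.
df["perceived_usefulness"] = df[[f"pu_{i}" for i in range(1, 4)]].mean(axis=1)
df["perceived_trust"] = df[[f"pt_{i}" for i in range(1, 4)]].mean(axis=1)
df["perceived_risk"] = df[[f"pr_{i}" for i in range(1, 6)]].mean(axis=1)
df["behavioral_intention"] = df[[f"bi_{i}" for i in range(1, 8)]].mean(axis=1)
print(len(df))            # -> 149, the sample size analyzed in Table 3
```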

4.2 Demographic analysis

For the demographic analysis, the descriptive statistics are used; see Table 3. Of all respondents, 28.9% are female and 71.1% are male. Furthermore, most respondents are aged 45-54, which is in line with the statistics in the factsheet about employees in the Dutch financial sector from NVB (NVB, 2021). Moreover, respondents are highly educated, since more than 80% have a bachelor's degree or higher. This is in line with the CBS StatLine statistics, which show that at least 60% of the employees in the financial sector are highly educated (CBS, 2021). Most respondents did not work with AI before, whereas 26.8% did. Furthermore, most respondents (80%) work for the author's company.


Table 3: Demographic analysis

Variable Frequency Percentage (%)

Age < 25 years old 7 4.7

25-34 years old 40 26.8

35-44 years old 28 18.8

45-54 years old 46 30.9

> 55 years old 28 18.8

Total 149 100.0

Gender Female 43 28.9

Male 106 71.1

Total 149 100.0

Educational Level High school or the equivalent 12 8.1

Secondary vocational education 9 6.0

Bachelor's degree 55 36.9

Master's degree 67 45.0

Candidate/PhD 6 4.0

Total 149 100.0

Worked with AI before Yes 40 26.8

No 89 59.7

I do not know 20 13.4

Total 149 100.0

4.3 Reliability analysis

To test the reliability of the different scales used in the survey, a reliability analysis was conducted in SPSS in which all scale items were included. To draw statistically relevant conclusions, the Cronbach's alpha score (α) of a scale needs to be at least .70. In this research, all scales are above .70, except for the variable 'knowledge about AI.' Furthermore, all scale items show a significant correlation with the scale's total score, since all item-total correlations are above .30. Looking at the scale items of the variable 'knowledge about AI,' none of them would meaningfully increase the scale's reliability (by more than 0.10) when removed. Therefore, we use the scale with a reliability of α = .635 for 'knowledge about AI.' However, as the next paragraph shows, there are significant correlations between 'knowledge about AI' and other variables.

Table 4: Reliability analysis

Scale                          Variable               Reliability
Perceived Usefulness           Mediator               α = .894
Perceived Trust                Moderator              α = .839
Perceived Risk                 Moderator              α = .802
Attitude towards AI            Independent variable   α = .848
Behavioural Intention to Use   Dependent variable     α = .913
Knowledge about AI             Independent variable   α = .635
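For reference, Cronbach's alpha as reported in Table 4 can be computed directly from the item responses. The sketch below is an illustrative Python equivalent of SPSS's reliability analysis; the demo data are random, so the printed value is not one of the thesis' coefficients.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scale items."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example with random 7-point Likert answers for a 3-item scale:
rng = np.random.default_rng(1)
demo = rng.integers(1, 8, size=(149, 3)).astype(float)
print(round(cronbach_alpha(demo), 3))
```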

4.4 Correlation analysis

To test the relationships between the continuous variables, Pearson's correlation test is conducted. This test shows the strength and direction of the relationship between these variables (Field, 2018). Table 5 offers an overview of the correlations, means, and standard deviations.

Based on the Pearson correlation, there is a positive correlation between the variables 'perceived trust' and 'perceived usefulness' (r = .458) and between the variables 'behavioral intention to use' and 'perceived trust' (r = .557), both significant at p < .01. There is also a low positive correlation between the variables 'knowledge about AI' and 'perceived trust' (r = .171), significant at p < .05. These outcomes suggest that respondents with more trust in AI systems are more inclined to see the usefulness of an AI system and are more likely to use AI systems in their daily job. Significant negative effects are found between the variables 'perceived trust' and 'perceived risk' (r = -.539, p < .01) and between 'perceived usefulness' and 'perceived risk' (r = -.297, p < .01). This suggests that respondents who perceive more risk in AI systems trust them less and see them as less useful. The correlation between 'knowledge about AI' and 'perceived risk' is negative but weak and not significant (r = -.044).

The control variables do not show significant correlations in the model; therefore, behavioral intention is not controlled by any of the control variables age, gender, or educational level. Moreover, the variables 'perceived usefulness' and 'behavioral intention' from the TAM model of Davis show a strong positive significant correlation (r = .640, p < .01), which confirms the relationship between these variables in the theory of the TAM model of Davis (1989).


Table 5: Means, standard deviations, and correlations

Variables M SD 1 2 3 4 5 6 7 8

1. Age 3.32 1.193 -

2. Gender 1.71 .455 .098 -

3. Educational Level 3.31 .951 -.088 -.011 -

4. Perceived Usefulness 3.96 .692 -.077 -.054 .030 -

5. Perceived Trust 3.57 .707 -.028 .055 -.050 .458** -

6. Perceived Risk 2.59 .798 -.074 .010 .021 -.297** -.539** -

7. Behavioural Intention to Use 4.08 .727 -.086 .101 .099 .640** .557** -.435** -

8. Knowledge about AI 4.56 2.041 -.081 .002 .206* .106 .171* -.044 .255** -

* Correlation is significant at the 0.05 level (2-tailed).

** Correlation is significant at the 0.01 level (2-tailed).
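The matrix in Table 5 is a standard Pearson correlation table; the sketch below shows how the same statistics could be reproduced in Python on illustrative data (SPSS additionally marks the two-tailed significance levels, which scipy.stats.pearsonr would provide):

```python
import numpy as np
import pandas as pd

# Illustrative data frame with the five continuous scales from Table 5.
rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(4, 1, size=(149, 5)),
                  columns=["perceived_usefulness", "perceived_trust",
                           "perceived_risk", "behavioral_intention",
                           "knowledge_ai"])

print(df.agg(["mean", "std"]).round(2))    # the M and SD columns
print(df.corr(method="pearson").round(3))  # the Pearson correlation matrix
```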


4.5 Hypotheses testing

In this section, the hypotheses will be tested. First, the direct effects between the independent variables and dependent variables are tested via multiple regression analysis. After that, the indirect effects are tested using PROCESS by Andrew F. Hayes (2018).

4.5.1 Direct effects

The direct effects in this research have been tested using hierarchical multiple regression analysis. The first step of this analysis included only the control variables and the dependent variable, behavioral intention to use AI systems. This model did not show any significant effects of the control variables. Therefore, we conclude that the dependent variable, behavioral intention to use an AI system, is not controlled by any of the control variables, and we have enough evidence to reject hypotheses H6, H7, and H8.

In the second step, the independent variables were added to the model to test whether 'perceived usefulness' and 'knowledge about AI' have a direct effect on the dependent variable, behavioral intention to use an AI system. Table 6 shows that the research model is statistically significant (p < .001). The R² is .445, which means that the independent variables perceived usefulness and knowledge about AI explain 44.5% of the variance in behavioral intention to use an AI system. An examination of the independent variables individually shows that perceived usefulness (β = .620, t = 9.989, p < .01) is significant and has the strongest positive effect on the behavioral intention to use an AI system: for every 1-unit increase in perceived usefulness, the behavioral intention to use an AI system is .620 higher. The other independent variable, knowledge about AI, is also significant (β = .189, t = 3.048, p < .01) and has a positive direct effect on the behavioral intention to use an AI system: for every 1-unit increase in knowledge about AI, the behavioral intention to use an AI system is .189 higher.

We can conclude that perceived usefulness and knowledge about AI both have a significant positive direct effect on the dependent variable, behavioral intention to use an AI system. Therefore, there is enough evidence to support the hypothesis 'There is a significant positive direct effect between actual knowledge about AI and behavioural intention to use an AI system' (H3) and the hypothesis 'Perceived usefulness will have a positive direct effect on behavioural intention to use an AI system' (H1).

Furthermore, we estimated the model for the direct effect between knowledge about AI and the perceived usefulness of an AI system. This model is not statistically significant (p > .01). Therefore, there is enough evidence to reject the hypothesis 'There is a significant positive direct effect between actual knowledge about AI and perceived usefulness' (H2).

Table 6: Regression model of behavioural intention to use an AI system

                        B      SE B    β      t       p
Total model (R = .667, R² = .445)                     .000
Knowledge about AI     .067    .022   .189    3.048   .003
Perceived usefulness   .651    .065   .620    9.989   .000
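The hierarchical procedure behind Table 6 can be expressed as two nested OLS models, step 1 with only the controls and step 2 adding the predictors. The sketch below is an illustrative Python analogue of the SPSS analysis, on simulated data with made-up effect sizes:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 149
df = pd.DataFrame({
    "age": rng.integers(1, 6, n).astype(float),
    "gender": rng.integers(1, 3, n).astype(float),
    "education": rng.integers(1, 6, n).astype(float),
    "knowledge_ai": rng.normal(4.6, 2.0, n),
    "perceived_usefulness": rng.normal(4.0, 0.7, n),
})
df["behavioral_intention"] = (0.65 * df["perceived_usefulness"]
                              + 0.07 * df["knowledge_ai"]
                              + rng.normal(0, 0.5, n))

# Step 1: controls only; step 2: controls plus the two predictors.
step1 = smf.ols("behavioral_intention ~ age + gender + education", df).fit()
step2 = smf.ols("behavioral_intention ~ age + gender + education"
                " + knowledge_ai + perceived_usefulness", df).fit()
print(step1.rsquared, step2.rsquared)  # the R² gain comes from the predictors
print(step2.summary())                 # B, SE, t and p per predictor
```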

4.5.2 Indirect effects

The indirect effects in this research have been tested using the PROCESS macro of Andrew F. Hayes (2018), which was used to analyze the indirect effects of the moderator variables 'perceived trust' and 'perceived risk.' The other variables are the independent variable 'knowledge about AI,' the mediating variable 'perceived usefulness,' and the dependent variable 'intention to use.' To test whether there is a moderated mediation effect between these variables, model 10 is used (Hayes, 2018). A moderating variable can be described as a third variable that affects the strength of the relationship between an independent and a dependent variable. A moderated mediation, also known as a conditional indirect effect, can be described as an effect of an independent variable on a dependent variable via a mediator that differs depending on the levels of a moderating variable (Hayes, 2018).



Figure 3: PROCESS Model 10 (Hayes, 2018)

The first model has perceived usefulness (M) as its outcome. This model has an R² of .269, which means that the variables in this model explain 26.9% of the variance in perceived usefulness. The model is significant (p < .01). Furthermore, the interaction terms XW and XZ in the model for M are significant (p < .05), which means that there is a moderating effect of perceived trust and perceived risk on the effect between knowledge about AI and perceived usefulness. The interaction XW in the model for M is significant (a4 = .089, p < .05), as is the interaction XZ (a5 = .126, p < .05). This means that the direct effect between knowledge about AI (X) and perceived usefulness (M) gets stronger when an employee has a higher level of trust in an AI system and when an employee perceives lower levels of risk in an AI system. Therefore, we have enough evidence to support hypotheses H5a and H5b.

The results of the model with the dependent variable intention to use (Y) as its outcome show that the direct effect of the independent variable knowledge about AI is significant (c'1 = .056, p < .01). Also, the effect of the mediator perceived usefulness in this model is significant (b1 = .487, p = .000). This agrees with the earlier literature about these variables in the TAM model of Davis (1989). However, the interaction terms for both XW and XZ in this model are statistically not significant (p > .05). Therefore, we conclude that the effect of knowledge about AI on behavioral intention to use an AI system is not moderated by either the level of perceived risk or the level of perceived trust, and there is enough evidence to reject hypotheses H4a and H4b.
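Conceptually, PROCESS Model 10 estimates two regressions: a mediator model for perceived usefulness containing the X×W and X×Z interactions, and an outcome model for intention to use; Hayes' macro then bootstraps the index of moderated mediation. The sketch below reproduces only the two OLS equations on simulated data, as an illustration of the model structure rather than the thesis analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 149
df = pd.DataFrame({
    "knowledge_ai": rng.normal(4.6, 2.0, n),     # X
    "perceived_trust": rng.normal(3.6, 0.7, n),  # W
    "perceived_risk": rng.normal(2.6, 0.8, n),   # Z
})
df["perceived_usefulness"] = (0.45 * df["perceived_trust"]      # M
    + 0.09 * df["knowledge_ai"] * (df["perceived_trust"] - 3.6)
    + rng.normal(2.5, 0.5, n))
df["intention_to_use"] = (0.49 * df["perceived_usefulness"]     # Y
    + 0.06 * df["knowledge_ai"] + rng.normal(1.5, 0.5, n))

# Mediator model: M ~ X + W + Z + X:W + X:Z (the a-paths of Model 10).
m_model = smf.ols("perceived_usefulness ~ knowledge_ai * perceived_trust"
                  " + knowledge_ai * perceived_risk", df).fit()
# Outcome model: Y ~ X + M + W + Z + X:W + X:Z (b- and c'-paths).
y_model = smf.ols("intention_to_use ~ perceived_usefulness"
                  " + knowledge_ai * perceived_trust"
                  " + knowledge_ai * perceived_risk", df).fit()
print(m_model.params.round(3))
print(y_model.params.round(3))
```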


Table 7: PROCESS Model 10

                                          Consequent
                                  Perceived Usefulness (M)       Intention to use (Y)
Antecedent                           Coeff.    SE     p           Coeff.    SE     p
Knowledge about AI (X)           a1  -.004   .025   .884     c'1   .056   .021   .008
Perceived Usefulness (M)              ---     ---    ---     b1    .487   .070   .000
Perceived Trust (W)              a2   .453   .086   .000     c'2   .234   .078   .003
Knowledge AI x Perc. Trust (XW)  a4   .089   .043   .042     c'3   .041   .037   .266
Knowledge AI x Perc. Risk (XZ)   a5   .126   .039   .002     c'5   .007   .034   .842
Perceived Risk (Z)               a3  -.061   .075   .420     c'4  -.139   .063   .029
Constant                         i1  3.948   .050   .000     i2   2.137   .278   .000
                                     R² = .269                    R² = .544
                                     F(5,143) = 10.527, p < .001  F(6,142) = 28.178, p < .001

Moderated mediation              Effect    SE      p      LLCI    ULCI
Direct effect (c'1)              .056      .021    .008    .015    .098

Index of moderated mediation     Index     BootSE  BootLLCI  BootULCI
Perceived Trust                  .043      .025    -.001     .097
Perceived Risk                   .061      .022     .021     .106

4.6 Summary of the results

To summarize the results of this research, an overview of the outcome model is provided in Table 8 below. This table shows the research model with the corresponding results of the hypotheses and effect sizes, including p-values.




Table 8: Outcome model

Hypothesis                                                              Effect   p-value   Result
H1   Perceived usefulness will have a positive direct effect on
     behavioural intention to use an AI system.                          .620    .000      Supported
H2   There is a significant positive effect between actual knowledge
     about AI and perceived usefulness.                                  .106    .199      Not supported
H3   There is a significant positive effect between actual knowledge
     about AI and behavioural intention to use an AI system.             .189    .003      Supported
H4a  The relationship between knowledge about AI and behavioural
     intention to use an AI system is moderated by perceived trust.      .037    .266      Not supported
H4b  The relationship between knowledge about AI and behavioural
     intention to use an AI system is moderated by perceived risk.       .034    .842      Not supported
H5a  The relationship between knowledge about AI and perceived
     usefulness is moderated by perceived trust.                         .043    .042      Supported
H5b  The relationship between knowledge about AI and perceived
     usefulness is moderated by perceived risk.                          .039    .002      Supported
H6   Behavioural intention to use AI systems is controlled by age.      -.088    .287      Not supported
H7   Behavioural intention to use AI systems is controlled by
     educational level.                                                  .093    .261      Not supported
H8   Behavioural intention to use AI systems is controlled by gender.    .111    .181      Not supported


5. Discussion

The acceptance of an AI system in the financial industry is the central topic in this research. Until this study, no academic research has focused on the moderating role of the perceived risk and perceived trust on the relationship between knowledge about AI and perceived usefulness, and the relationship between knowledge about AI and the behavioral intention to use an AI system. The main goal of this research is to find out if perceived risk or perceived trust could have an effect on an existing relationship in the TAM model.

5.1 General Discussion

A fundamental assumption when starting this study was the proven relationship between perceived usefulness and the behavioral intention to use a system (Davis, 1989). Even though the relationship between these two variables has been widely researched already, the hypothesis was integrated into this study as well, and this study turned out to be no exception: the results show that perceived usefulness has a significant positive direct effect on the behavioral intention to use a system, which means that H1 is accepted. The higher the perceived usefulness, the higher the behavioral intention to use an AI system within the financial industry (Davis, 1989). This is entirely in line with earlier studies about the TAM model.

In earlier research, the variable knowledge about AI was added to the TAM model. More specifically, based on the literature, high 'self-assessed' knowledge regarding an AI system should positively affect the perception of AI (Güngör, 2020). Therefore, the expectation in this study was that AI knowledge would have a positive direct effect on perceived usefulness.

However, in this study, we did not find a significant direct effect between knowledge about AI and perceived usefulness. Moreover, the reliability of the scale for knowledge about AI was not high. Gordon (1991) argues that employees cannot rate themselves, because people judge themselves on their perceived capacities and not on their actual knowledge and performance. This could be a reason that the scale of the variable 'knowledge about AI' was not reliable: there is a chance that respondents have given an incorrect answer because they were not aware of their knowledge level. Therefore, conclusions about the effects of knowledge about AI in this study are possibly not reliable.


Nevertheless, the correlation analyses showed multiple significant correlations. The results show a significant positive direct effect between knowledge about AI and the behavioral intention to use an AI system. Employees who have a higher level of knowledge about AI, and are aware of that knowledge, are more willing to use an AI system. This is in line with the expectations based on the literature study about the effect of knowledge about AI on the acceptance of an AI system (Gordon, 1991; Güngör, 2020).

Besides these direct effects, we found some further effects. The first is the negative relationship between perceived risk and perceived trust: employees who have a lower level of perceived risk also have a higher level of perceived trust.

Furthermore, the results indicate statistically significant interactions when perceived trust and perceived risk are added to the TAM model. We found that the relationship between knowledge about AI and perceived usefulness is moderated by perceived trust and perceived risk. This means that the positive direct effect between knowledge about AI and perceived usefulness gets stronger when an employee has a higher level of trust in an AI system and when an employee perceives lower levels of risk in an AI system. Therefore, we can say that perceived risk and perceived trust have a minor influence on the relationship between knowledge about AI and perceived usefulness. This is in line with earlier studies about perceived risk and trust in relation to the acceptance of an AI system. Based on this study, we can say that trust, together with a low level of perceived risk, is required for the acceptance of AI systems (Sitkin & Weingart, 1995; Featherman and Fuller, 2003; Slovic, 1987).

In contrast, the effect of knowledge about AI on behavioral intention to use an AI system is not moderated by either the level of perceived risk or the level of perceived trust. Thus, these variables do not moderate the relationship between knowledge about AI and the behavioral intention to use an AI system. A reason for this could be that the criticism about the TAM model is correct and that, indeed, the TAM model lacks the explanatory power of the user behavior (Bagozzi, 2007).

However, since perceived usefulness has a direct effect on behavioral intention to use an AI system, the moderated effect of perceived risk and perceived trust on the relationship between knowledge about AI and perceived usefulness indirectly influences the behavioral intention to use an AI system. Thus, those variables do have an effect on the behavioral intention to use an AI system and influence the whole TAM model.

5.2 Managerial implications

From a managerial perspective, implementing AI within the whole organization will cost time and a significant investment. But managers can positively influence this: by increasing employees' knowledge about AI, their behavioral intention to adopt an AI system will also increase. Therefore, the level of knowledge about AI among employees should be increased, for example by giving AI knowledge sessions within the organization. Second, the risks employees perceive in using an AI system should be eliminated. This will result in a higher level of trust in the AI system and, therefore, a positive effect on the behavioral intention to use an AI system within the financial organization. Accordingly, employees should be informed about the risks and also about the positive impact on the total risk strategy of the organization, since AI can also influence the total risk strategy of an organization.

Lastly, managers should take into consideration that a high level of trust may cause problems, because employees are then more likely to ignore contradictory information (Lee, 2004).

5.3 Limitations and suggestions for future research

This research was carried out by a single author within a limited timeframe and monetary budget. Therefore, there are some limitations to the methodology and the scope of this study. First, the generalisability could be improved, since this research focused only on the Dutch financial industry. Furthermore, 80 percent of the respondents were from the same organization; therefore, this study cannot be generalised to the whole financial industry or to other industries. Also, instead of convenience sampling, random sampling could be used to ensure that enough respondents from different organizations are involved and represented sufficiently within the sample.

The second limitation is that this research was cross-sectional, which does not consider that the behavioral intention to use AI might change over time. Currently, the adoption of AI technology is in its beginning phase, and adoption is an ongoing process (McKinsey, 2018). It is therefore recommended that future research uses longitudinal data to measure potential changes in adoption attitudes, such as the contingent effect of employees' experience with AI.


In this way, short and long-term adoption behavior can be analysed, which might facilitate a more comprehensive understanding of the relationships between the different constructs.


6. Conclusion

AI creates opportunities for the more traditional financial organizations by increasing their performance, making them more efficient, and cutting costs. Therefore, AI has become more and more useful for financial institutions to implement in their organization, and the expectation is that the development of new AI systems will accelerate even further. Fintech organizations already adopt AI within their whole organizational structure, which could be a threat to the more traditional financial organizations. Therefore, financial organizations should follow these trends and know how they can adopt AI within the whole organization.

With this study, we have aimed to provide an understanding of the effects of adopting AI within the financial industry. The existing TAM model was extended with the variables knowledge about AI, perceived risk, and perceived trust. More in depth, we researched the effects of AI knowledge in combination with perceived risk and perceived trust on the acceptance of AI-driven decision systems within the financial industry.

A fundamental assumption when starting this study was the proven relationship between perceived usefulness and the behavioural intention to use an AI system (Davis, 1989). The results of this study have also shown that perceived usefulness has a significant positive direct effect on the behavioural intention to use an AI system. Moreover, the variable knowledge about AI was added in this research, and the results show that there is a significant positive direct effect between knowledge about AI and the behavioural intention to use an AI system. This is interesting for financial organizations, because by increasing their employees' knowledge of AI, the employees' intention to use AI systems will also increase.

Besides the variable knowledge about AI, perceived risk and perceived trust were also added to the model. The main goal was to find out if perceived risk or perceived trust would have an effect on an existing relationship in the TAM model. The results show that perceived risk and perceived trust have a minor influence on the relationship between knowledge about AI and perceived usefulness. This is interesting for financial organizations, because by managing these risks and reducing their employees' perceived risk, they can increase the employees' perceived usefulness of AI systems. We also found that employees who have a lower level of perceived risk have a higher level of perceived trust. This is in line with earlier studies about perceived risk and trust in relation to the acceptance of an AI system.


In contrast, these variables do not moderate the relationship between knowledge about AI and the behavioural intention to use an AI system. Further research about the acceptance of AI systems could use an experimental design to discover other insights into the employee factors for accepting AI systems. Also, to expand the research about the TAM model in relation to the acceptance of AI systems, future research could extend this study to other industries or countries.

There is an opportunity for financial organizations to expand their performance by starting to scale their AI activities. They can do that by increasing employees' knowledge about AI, in combination with informing them about the risks and eliminating uncertainty.


7. References

Ajibade, P. (2018). Technology acceptance model limitations and criticisms: Exploring the practical applications and use in technology-related studies, mixed-method, and qualitative researches. Library Philosophy and Practice (e-journal).

Bagozzi, R. (2007). The legacy of the Technology Acceptance Model and a proposal for a paradigm shift. Journal of the Association for Information Systems, 8 (4), 244-254.

Boston Consulting Group, (2017). Most Companies Have Big Gaps Between AI Ambition and Execution. Retrieved from BCG website:

https://www.bcg.com/d/press/6september2017-gapbetween-ai-ambition-execution- 169791.

Burgess, A. (2018). The Executive Guide to Artificial Intelligence: How to identify and implement applications for AI in your organization. https://doi.org/10.1007/978-3- 319- 63820-1.

Canhoto, A.I & Clear, F. (2020). Artificial intelligence and machine learning as business tools: A framework for diagnosing value destruction potential, Business Horizons, Volume 63, Issue 2, 2020, Pages 183-193.

Choi, H., Kim, Y., & Kim, J. (2010). An acceptance model for an internet protocol television service in korea with prior experience as a moderator. The Service Industries Journal, 30 (11), 1883–1901.

Choo, C. W. (1991). Towards an information model of organizations. The Canadian Journal of Information Science, 16(3), 32–62.

Colson, E. (2019). What AI-driven decision making looks like. Harvard Business Review.

Cubric, Marija. (2020). Drivers, barriers and social considerations for AI adoption in business and management: A tertiary study,Technology in Society,Volume 62, 2020, 101257, ISSN 0160-791X, https://doi.org/10.1016/j.techsoc.2020.101257.

Dane, E., Rockmann, K. W., & Pratt, M. G. (2012). When should I trust my gut? Linking domain expertise to intuitive decision-making effectiveness. Organizational Behavior

(36)

and Human Decision Processes, 119(2), 187–194.

https://doi.org/10.1016/j.obhdp.2012.07.009.

Davenport, T. H., & Bean, R. (2017). How P&G and American Express are approaching AI.

Harvard Business Review. Available at https://hbr.org/2017/03/how-pgand-american- express-are-approaching-a.

Davis, F. D. (1986). A technology acceptance model for empirically testing new end-user information systems: Theory and results (Doctoral dissertation). MIT Sloan School of Management, Cambridge, MA.

Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13 (3), 319-340.

Eastwood, J., Luther, K. (2016). What You Should Want From Your Professional: The Impact of Educational Information on People’s Attitudes Toward Simple Actuarial Tools. Professional Psychology: Research and Practice, 47(6), 402-412.

Featherman M., and Fuller M., (2003). "Applying TAM to e-services adoption: the

moderating role of perceived risk," 36th Annual Hawaii International Conference on System Sciences, 2003. Proceedings of the, 2003, pp. 11 pp.-, doi:

10.1109/HICSS.2003.1174433.

Field, A. (2018). Discovering Statistics Using IBM SPSS Statistics. 5th Edition. London, UK:

SAGE.

Finlay, S. (2018). Artificial Intelligence and Machine Learning for Business. 3rd Edition. GB, Relativistic.

Fishbein, M., & Ajzen, I. (1975). Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Researched. Reading: Addison-Wesley.

Fountaine, T., McCarthy B., Saleh, T. (2019) Building the AI-Powered Organization. Harvard Business Review, 97 (4), 62-73.

Frischmann, B., Selinger, E. (2017, September 25). Robots have already taken over our work, but they’re made of flesh and bone. The Guardian. Available at

https://www.theguardian.com/commentisfree/2017/sep/25/robots-taken-overwork-

(37)

Gaines-Ross, L. (2016). What do people–not techies, not companies–think about artificial intelligence. Harvard Business Review, 24.

Gefen, D. and Straub, D. (1997), ‘‘Gender differences in the perception and use of e-mail: an extension to the technology acceptance model’’, MIS Quarterly, Vol. 21 No. 4, pp.

389-400.

Güngör, H. (2020). Creating Value with Artificial Intelligence: A Multi-stakeholder Perspective. Journal of Creating Value, 6(1), 72–85.

https://doi.org/10.1177/2394964320921071.

Gordon, M. J. (1991). A review of the validity and accuracy of self-assessments in health professions training. Academic Medicine, 66 (12), 762-9.

Hayes, A. F. (2018). Introduction to mediation, moderation, and conditional process analysis:

A regression-based approach (2nd edition). New York: The Guilford Press.

Hayes, A. F. (2021). PROCESS v4.0.

Hoeffler, S. (2003). Measuring Preferences for Really New Products. Journal of Marketing Research, 40(4), 406–420. https://doi.org/10.1509/jmkr.40.4.406.19394.

Huck, Johnson, Kiritz and Larson (2020). "Why AI Governance Matters." The RMA Journal, vol. 102, no. 8, May 2020, p. 18. Gale General OneFile. Available at:

https://rmajournal.org/rmajournal/may_2020/MobilePagedArticle.action?articleId=15 83558#articleId1583558.

IBM, (2020). What is artificial intelligence? https://www.ibm.com/cloud/learn/what-is- artificial-intelligence.

Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586.

https://doi.org/10.1016/j.bushor.2018.03.007.

Kaplan, A., Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62, 15-25.

Kitsela, Vladyslav (2021). AI in Fintech: How to prepare for a massive shift in financial

(38)

Lee, H. J. (2004). The role of competence-based trust and organizational identification in continuous improvement. Journal of Managerial Psychology.

Logg, J. M., Minson J. A., Moore D.A., (2019). Algorithm appreciation: People prefer algorithmic to human judgment, Organizational Behavior and Human Decision Processes, Volume 151, 2019, Pages 90-103.

Lui, A., & Lamb, G. W. (2018). Artificial intelligence and augmented intelligence

collaboration: regaining trust and confidence in the financial sector. Information &

Communications Technology Law, 27(3), 267-283.

Marler, J. H., Fisher, S. L., & Ke, W. (2009). Employee self-service technology acceptance:

A comparison of pre-implementation and post-implementation relationships.

Personnel Psychology, 62 (2), 327–358.

McCarthy, John. (2004). What is Artificial Intelligence? Computer Science Department Stanford University Stanford, CA 94305 jmc@cs.stanford.edu http://www- formal.stanford.edu/jmc/ 2004 Nov 24, 7:56 p.m. Revised November 24, 2004.

McKinsey, (2018). Platform operating model for the AI bank of the future. McKinsey Analystics. Available at https://www.mckinsey.com/industries/financial-services/our- insights/platform-operating-model-for-the-ai-bank-of-the-future.

McKinsey, (2020). AI-bank of the future: Can banks meet the AI challenge? McKinsey Analytics. Available at https://www.mckinsey.com/industries/financial-services/our- insights/ai-bank-of-the-future-can-banks-meet-the-ai-challenge.

Mitchell, V.-W. (1999). Consumer perceived risk: conceptualisations and models. European Journal of marketing.

Nelson, Richard R. and Sidney G. Winter (1982), An Evolutionary Theory of Economic Change, Cambridge, MA: Belknap Press, Harvard University Press.

Nicolaou, A. I., & McKnight, D. H. (2006). Perceived Information Quality in Data Exchanges: Effects on Risk, Trust, and Intention to Use. Information Systems Research, 17(4), 332-35l.

NVB, 2020. Digitalisering, innovatie & technologie. Available at

(39)

NVB, 2021. Factsheet werkgeversschap. Available at

https://www.bankinbeeld.nl/app/uploads/2018/07/NVB-Factsheet-Werkgeverschap- januari-2021.pdf.

Ostlund, Lyman E. (1974), Perceived Innovation Attributes as Predictors of

Innovativeness, Journal of Consumer Research, Volume 1, Issue 2, September 1974, Pages 23–29, https://doi.org/10.1086/208587.

Pavlou, P.A. (2003). Consumer acceptance of electronic commerce: Integrating trust and risk with the technology acceptance model. International Journal Electronic Commerce, 7 (3) (2003), pp. 69-103.

Pelau, C., Dabija, D.-C., & Ene, I. (2021). What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic

characteristics in the acceptance of artificial intelligence in the service industry. Computers in Human Behavior, 122, 106855–.

https://doi.org/10.1016/j.chb.2021.106855.

Pennington, R., Wilcox, H. D., & Grover, V. (2003). The role of system trust in business-to- consumer transactions. Journal of management information systems, 20(3), 197-226.

Rogers, E.M. (2003) Diffusion of innovations. (5th ed.), Free Press, New York, NY.

Salas, E., Rosen, M. A., & DiazGranados, D. (2010). Expertise-based intuition and decision making in organizations. Journal of Management, 36(4), 941–973.

https://doi.org/10.1177/0149206309350084

Saade, R. and Bahli, B. (2005), ‘‘The impact of cognitive absorption on perceived usefulness and perceived ease of use in on-line learning: an extension of the technology

acceptance model’’, Information Management, Vol. 42, pp. 317-27.

Schwarz, A., Junglas, I. A., Krotov, V., & Chin, W. W. (2004). Exploring the role of

experience and compatibility in using mobile technologies. Information Systems and e-Business Management, 2(4), 337-356.

Shrestha, Y. R., Ben-Menahem, S. M., & Von Krogh, G. (2019). Organizational decision- making structures in the age of artificial intelligence. California Management Review, 61(4), 66-83.

(40)

Simon, O., Neuhofer, B., & Egger, R. (2020). Human-robot interaction: Conceptualising trust in frontline teams through LEGO® Serious Play®. Tourism management

perspectives, 35, 100692.

Sitkin, S. B., L. R. Weingart. 1995. Determinants of risky decision-making behavior: A test of the mediating role of risk perceptions and propensity. Acad. Management J. 38 1573- 1592.

Slovic, P. (1987). Perception of risk. Science 236, 280-285.

Teece, D., Peteraf, M., & Leih, S. (2016). Dynamic capabilities and organizational agility:

Risk, uncertainty, and strategy in the innovation economy. California Management Review, 58(4), 13-35.

Thong, J. Y. (1999). An integrated model of information systems adoption in small businesses. Journal of management information systems, 15(4), 187-214.

Venkatesh, Viswanath & Davis, Fred. (2000). A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Management Science. 46. 186- 204. 10.1287/mnsc.46.2.186.11926.

Zekos G.I. (2021) Risk Management Developments. In: Economics and Law of Artificial Intelligence. Springer, Cham. https://doi.org/10.1007/978-3-030-64254-9_5.

(41)

8. Appendix: Questionnaire

The Acceptance of Artificial Intelligence

Start of Block: INTRODUCTION BLOCK

INTRODUCTION Thank you for your participation in this research.

This online survey is part of my master's thesis graduation research on the acceptance of Artificial Intelligence (AI): a case study within the financial industry. I would like to know whether you have any experience with Artificial Intelligence. Do you already use it? Do you see any risks? Do you trust it?

On average, this survey takes you 5 minutes to complete. Your responses are completely anonymous.

Chantal Donkervoort

Executive Program Management Studies

Amsterdam Business School, University of Amsterdam (UvA)

End of Block: INTRODUCTION BLOCK

Start of Block: General Questions

What is your age?

o < 25 years old (1)
o 25-34 years old (2)
o 35-44 years old (3)
o 45-54 years old (4)
o > 55 years old (5)

What is your gender?

o Female (1)
o Male (2)
o I'd rather not say (3)
