
Master Thesis

“Achieve better artificial intelligence (AI) implementation: a modified version of technology acceptance model- AIAM”

Jiaming Liang 12929662

Submission date: 24th of June, final draft
MSc in Business Administration – Strategy Track
Amsterdam Business School | University of Amsterdam

EC 20210407020408

Supervised by Dr. Gijs van Houwelingen


Statement of originality

This document is written by Student Jiaming Liang who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Table of Contents

Abstract

1. Introduction

2. Literature Review

2.1. Type of AI

2.2. AI as Threat

2.3. AI as Benefit

2.4. Power

2.5. Technology Acceptance Model

2.6. Artificial Intelligence Acceptance Model

3. Hypotheses

4. Method

5. Result

6. Discussion

7. Limitations, Implications, and Future directions

7.1. Implications and Future

7.1.1 Practical implication

7.1.2 Theoretical implication

7.2. Limitations and Future

8. Conclusion

9. Acknowledgements

10. Reference


Abstract

As AI implementation gradually attracts wider attention in the business world, understanding how to successfully manage such implementation becomes increasingly important. Yet challenges also occur: many firms experience obstacles in getting employees on board with AI implementation. This paper analyzes how the positive and negative attitude mechanisms within employees affect AI acceptance. A modified version of the technology acceptance model (TAM) is proposed, named the artificial intelligence acceptance model (AIAM). This model offers a conceptual framework that shows how to reach higher chances of successful AI implementation by manipulating the type of AI presented to employees and employees' perception of power.

Keywords: Artificial intelligence, technology acceptance, AI implementation, perceived benefit, threat, power, attitude towards use


1. Introduction

By 2030, almost 50% of jobs could potentially be replaced by artificial intelligence technology (Press, 2019). These include not only low-skilled occupations but also well-paid middle-class jobs (Brougham and Haar, 2018). Therefore, understanding how firms can make their employees more willing to work with this "future colleague" becomes relevant.

Nowadays, artificial intelligence technology is becoming more and more popular and is implemented by firms more often. Artificial intelligence is described as a new generation of technologies capable of interacting with the environment and simulating human intelligence (Glikson and Woolley, 2020). Many researchers predict that AI will reshape how we do business in the upcoming technology trend, as manufacturing, retailing, and virtually every other industry in the world transforms its core capabilities and business models to fully use AI technology (Brynjolfsson and McAfee, 2017). Therefore, understanding how to implement and adopt AI becomes crucial.

AI can be implemented within the firm to meet internal needs for efficiency, or to help the firm react to its environment and tackle external problems. The goal of implementing AI is clear: improving the work of humans or reducing costs. To do that, AI takes many forms. AI includes many different technologies, such as machine translation, customer service chatbots, and self-learning algorithms. Most of these technologies enable users to understand the environment better and take corresponding actions. Firms have been adopting AI technology to adapt to or disrupt their technology environment while refining their strategic and competitive advantages (Wamba-Taguimdje et al., 2020).

It is fair to say that successfully implementing AI can bring great benefits to firms, but AI implementation is not without challenges. Many firms experience obstacles in getting employees on board with AI implementation. One reason is that employees do not want to work with AI as colleagues. Lichtenthaler (2019) suggested that employees prefer to collaborate with real human colleagues rather than AI-based ones because employees may not have real interaction with AI or do not trust AI. These negative attitudes may jeopardize the implementation of AI.

Despite these negative scenarios, people are still keen to find out what AI can provide. They look forward to using a new technology if they perceive a benefit from it. Overall, it is fair to say that many employees hold paradoxical attitudes towards AI.

Understanding the reason behind these contradictory attitudes is the focus of this paper. This paper aims to analyze how these paradoxical attitudes (sometimes seeing AI as a threat, sometimes as a benefit) affect attitude towards using artificial intelligence technology within firms.

This paper will first analyze why AI is special as a technology and the reasons behind these attitudes. Then, a new version of the technology acceptance model will be introduced to explain AI acceptance in implementation. In the end, implications will be provided based on the analysis.

2. Literature Review

2.1. Type of AI

John McCarthy introduced the term artificial intelligence in 1955; he later described AI as a creation combining science and engineering to make technical products, such as machines, that interact with human intelligence (McCarthy, 2007). In a more recent interpretation, AI can be seen as a creation that mimics human intelligence by understanding, analyzing, and learning from big data via designed algorithms (Reeves, 2020). Examples of AI can be seen everywhere: AI appears in consumers' homes as smart assistants like Siri and Alexa, and in industrial environments, such as translation algorithms and self-driving vehicles (Adams, 2017). This study focuses on AI implemented in a business context, such as contract-drafting algorithms at law firms or pick-up robots in warehouses.

To the extent of how AI interacts with humans in a business context, there are two types of AI technologies: automation and augmentation. Automation AI implies that machines replace humans when a task is assigned, whereas augmentation AI means that the machine collaborates closely with humans to perform a task (Raisch and Krakowski, 2020). The critical difference between these two types of AI is whether human decisions or input are involved in the process. An example of automation is robotic process automation for the ETA prediction of flights, which works without humans involved in the process. On the other hand, an example of augmentation is a chatbot that needs human input.

Leyer and Schneider (2021) studied the effect of different types of AI by analyzing the augmentation and automation of AI-enabled software performing a similar task in a business context. An example from that research is as follows: an augmentation AI example is software that helps lawyers draft contracts. When using this software, users control the process, as it is still their own decision to accept or deny the result. Hence, lawyers' jobs change in this process from drafting contracts to auditing and approving contracts. An automation AI example is software that automates contract reviews. The human factor is taken out of the process whenever the software does not flag an error, giving humans more time to focus on more critical jobs.

The results are intriguing: augmentation AI and automation AI have similar effects, but via their own unique pathways. On the positive side, Leyer and Schneider discovered that augmentation AI supports users by increasing efficiency and accuracy, which eventually frees up more capacity. Interestingly, automation AI reaches this same end result, freeing up resources by completely removing human factors from decision-making. On the negative side, augmentation AI may lead users to unlearn how to perform a task without AI's assistance and become dependent on AI. As a result, accountability in these decision-making processes drops accordingly. These adverse effects also apply to automation AI because the human factor is taken out in the first place.

We can see patterns in how these two types of AI affect employees. Automation AI has the potential to affect humans by replacing them. With augmentation AI, human roles and work tasks change significantly upon the introduction of AI because a new routine and process is formed (Grønsund and Aanestad, 2020). This means that with the implementation of either type of AI, changes in the working environment will occur. These changes may generate a certain level of negative attitudes among employees due to the uncertainty that arises. These negative attitudes can be interpreted as the fear of losing jobs to AI or the fear of facing disruption in the current working environment, and they raise concerns towards AI implementation among employees (Winick, 2018).

2.2. AI as Threat

There have been numerous studies regarding negative attitudes towards AI. For example, researchers sometimes focus on fear (Liang and Lee, 2017), sometimes on threat (Recht and Bryan, 2017), and sometimes on lack of trust (Toreini et al., 2020). These negative attitudes are in most cases related to each other. For example, Simon et al. (2020) have shown that trust plays a vital role in AI implementation; the mechanism they discovered is that the more threatened one feels by AI, the lower one's trust level, and vice versa. As another example, Johnston and Warkentin (2010) explained that fear can be the start of an awareness of threat. After careful comparison, this study focuses on the term threat as representative of the negative attitudes caused by AI. The main reason is that most of the other negative attitudes ultimately result in threat.

Threats can be understood as security conditions that can result in undesirable outcomes for lives or property (Crews et al., 2021). For example, in the context of AI implementation in business, undesirable outcomes for employees can be losing their job, a decreasing salary, difficulty in asking for a raise, or losing skills due to dependency on AI (Ivanov et al., 2020). With these undesirable outcomes, employees may see AI as a threat and may not be willing to cooperate when AI is implemented. Therefore, managing the perceived level of threat among employees is crucial, because negative attitudes, such as feeling threatened, are the main obstacles standing in the way of successful implementation, rather than the technology itself (Bean, 2019).

Thus, understanding which type or types of employees have concerns about AI becomes essential. Agrawal et al. (2019) examined what type of human labor will be a substitute for versus a complement to AI, showing that frontline workers may have more concerns about being replaced by AI than managers, due to less human judgment being involved when performing their jobs. Plastino and Purdy (2018) suggested that many companies may need to shift their focus from hiring for specific skill sets to discovering employees who have expert judgment or tacit skill sets, such as creative thinking skills, that complement AI technologies. In other words, jobs that require less human assessment in the working process may face a higher threat of being replaced or relocated. These jobs are mainly at the frontline of the company, and most of the time they have less control over their work. This also hints that those who have less power at work may be more easily affected by AI and therefore perceive higher threat when facing AI.


2.3. AI as Benefit

As much as we focus on the negative attitudes above, no one is willing to accept a technology that does not attract positive attitudes; Lichtenthaler (2019) suggests that the main reason why people choose to use a technology is still that they perceive a benefit from it. In the context of this study, the benefits of AI can include improving users' work efficiency and quality, giving users better control, and making jobs easier (Legris et al., 2003). Potentially, if someone perceives more benefit when working with AI, he or she is more likely to accept the implementation.

After analyzing both negative and positive attitudes towards AI, a discussion of what causes these attitudes to differ among employees is needed. For example, not having control can be a reason why some employees perceive a higher level of threat and a lower level of benefit.

2.4. Power

Not having control over one's environment is most likely to be associated with severe negative consequences. And feeling powerless is often accompanied by this kind of perceived loss of control (Rucker and Galinsky, 2008).

Power is typically defined as control over resources (Bugental and Lewis, 1999).

However, power can also be understood as the perception of one's ability to affect others or to resist the influence of others (Fiske and Dépret, 1996). In conclusion, individuals' perception of power can be related to control over resources, authority, or position in others' eyes (Anderson et al., 2012). Therefore, a powerful mindset can form from various causes: perhaps an individual perceives that he or she can affect others, or others simply consider him or her more powerful. Scholars believe that a powerful mindset (versus a powerless mindset) leads individuals to experience less negative affect and more positive affect. On top of that, power can also affect individuals' behavior directly: a higher perception of power can trigger people to act in ways that are energizing or activating in domains they perceive as necessary. Power also helps people gain clarity of focus, eagerness of desire, and the drive to work toward desires and aims (Guinote, 2017). As much as feeling powerful can do on the beneficial side, feeling powerless can do just as much on the opposing side. States of powerlessness might lead people to feel as if they have fewer resources available, which may lead to them not fully utilizing the resources given to them (Keltner et al., 2003).

In this study, power can be interpreted as employees' perceived ability to avoid undesirable consequences when working with AI, or their perceived ability to fully use AI to pursue benefits for themselves. Therefore, those who feel powerful may be more willing to accept working with AI because they perceive they can avoid unwanted positions, such as being replaced, and focus more on AI's positive effects, such as achieving higher work efficiency. On the other hand, those who feel powerless may be more resistant to accepting AI because they do not perceive that they can outperform AI and fear that, in the end, AI will replace them; or they do not see the benefit as much as those who feel powerful do. Therefore, managing employees' perception of power is also essential in the case of AI implementation, as the perception of power can help employees see less threat and more benefit in AI.

As a next step, a model that can incorporate these attitudes into acceptance is needed.

2.5. Technology Acceptance Model

There has been a substantial amount of research examining how new technology is adopted; TAM is the most prominent model. To address whether an information technology system will be accepted by its users, Fred Davis introduced the technology acceptance model (TAM) in 1985. This model indicates that two components determine users' level of acceptance of an information technology system. The first component is perceived usefulness, understood as the degree to which an individual perceives that the technology will improve their work performance (Opoku and Enu-Kwesi, 2020). The second component is perceived ease of use, the degree to which an individual perceives that the technology requires the least effort to use.

In short, this model examines the mediation of perceived ease of use and perceived usefulness in the effect of external variables on technology use (Legris et al., 2003). The model is shown in Figure 1.

Figure 1: Technology Acceptance Model (Davis, 1989)

Source: Davis, F. D. (1989). “Perceived usefulness, perceived ease of use, and user acceptance of information technology.” MIS Quarterly, 13(3), 319-340.

This figure shows that perceived ease of use and perceived usefulness are the two most important mechanisms explaining acceptance of a system. One interpretation of this model is that increasing users' perception of how useful the technology is, or of how easy it is to use compared to alternatives, can increase the acceptance level of this technology among users.


Later, Schepers and Wetzels (2007) added the subjective norm to this model; the new version is shown in Figure 2. Subjective norm is defined as how people perceive whether others, especially those who are important to them, judge their behavior (Flanders et al., 1975). This version of TAM indicates a significant influence of subjective norms on perceived usefulness and behavioral intention to use.

Figure 2: Technology Acceptance Model (Schepers and Wetzels, 2007)

Source: Schepers & Wetzels. (2007). A meta-analysis of the technology acceptance model:

Investigating subjective norm and moderation effects. Information & Management, 44(1), 90–103.

This upgraded model shows that TAM can evolve over time, which creates hope for applying it to AI. However, to assess whether this model can be applied to AI, it is essential to discuss what the model is missing with regard to AI.


2.6. Artificial Intelligence Acceptance Model

It is not difficult to see that the threat factor is not included in TAM. As presented above, the two main mechanisms (perceived usefulness and perceived ease of use) relate to what users may gain from the technology rather than what they may lose (the negative attitude mechanism).

This study proposes a new version of the technology acceptance model, the artificial intelligence acceptance model (AIAM), to describe AI acceptance in the context of business implementation; the threat factor is added to the model.

Perceived threat (PT) is the term used for this threat factor: the perception of undesired outcomes resulting from using AI. Perceived threat represents the negative attitude mechanism that affects AI acceptance.

On the other hand, perceived benefit (PB) is the term used for the perception of the benefit users can gain from using AI. Perceived benefit is aligned with perceived ease of use and perceived usefulness from the previous TAM because they all represent positive attitudes towards technology.

As a next step, it is necessary to discuss the possible causes of different levels of perceived threat and perceived benefit (the external variables in the original TAM).

Davenport and Kirby (2015) propose a change from pursuing automation AI to promoting augmentation AI, because, even when performing similar tasks, this seemingly simple shift of name has profound implications for how successfully AI is implemented. Employees will come to see AI as a partner and collaborator in the working environment when AI is presented as augmentation rather than as automation. Therefore, this study proposes the type of AI presented to users as the main driver in this model.


Power is proposed as a moderator in this model. As the leading cause of perceived benefit and perceived threat comes from the AI itself, users' power perception may alter these attitudes but is not their cause.

Therefore, the primary focus of this study is the impact of the type of AI, moderated by power, on attitude towards use via a positive mechanism (perceived benefit) and a negative mechanism (perceived threat). The conceptual model of AIAM is shown in Figure 3 below:

Figure 3: Artificial Intelligence Acceptance Model

3. Hypotheses

With the model presented, hypotheses regarding AI implementation in business can be formulated.

As Davenport and Kirby (2015) proposed, when AI is presented as augmentation, employees will see it as a partner rather than a threat, compared to AI presented as automation. Therefore, it is reasonable to assume that for the same AI tool, the type presented may change attitudes. Because augmentation AI is presented as a helping hand to humans, employees are expected to perceive a higher level of benefit. Automation AI, by definition, replaces human workers with the newly implemented AI technology, so employees are expected to perceive a higher level of threat.


• H1: Augmentation AI will lead to higher perceived benefit than automation AI.

• H2: Automation AI will lead to higher perceived threat than augmentation AI.

Someone who feels powerful is expected to feel less threatened by the AI presented. At the same time, powerful people usually hold higher positions in the firm and see more benefit from implementing AI than lower-level employees (who are relatively powerless).

• H3: Feeling powerful will increase the level of perceived benefit caused by the type of AI; feeling powerless will decrease it.

• H4: Feeling powerful will decrease the level of perceived threat caused by the type of AI; feeling powerless will increase it.

As proposed, perceived benefit acts as a positive mechanism and perceived threat as a negative mechanism on attitude towards use. Therefore, it is expected that perceived benefit is positively related to attitude, and perceived threat is negatively related to attitude.

• H5: Higher perceived benefit will increase attitude towards use.

• H6: Higher perceived threat will decrease attitude towards use.

The hypotheses within the conceptual model are shown in Figure 4 below:

Figure 4: Hypotheses details within AIAM


4. Method

To examine the impact of the type of AI, moderated by power, on attitude towards use via a positive mechanism (perceived benefit) and a negative mechanism (perceived threat), this study adopted a quantitative approach by conducting an online experiment. Participants were randomly allocated to one of four groups, resulting from orthogonally manipulating both type of AI and power: two levels of type of AI (augmentation and automation) and two levels of power (powerful and powerless). In total, four groups (Augmentation + Powerful, Augmentation + Powerless, Automation + Powerful, and Automation + Powerless) were designed.
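For concreteness, the random allocation into the four cells could be implemented along the following lines; a minimal Python sketch, assuming participant identifiers are known in advance (the condition labels and seed are illustrative, not taken from the thesis):

```python
# A minimal sketch of the 2x2 between-subjects assignment described above;
# the condition labels, participant ids, and seed are illustrative.
import random

CONDITIONS = [
    ("augmentation", "powerful"),
    ("augmentation", "powerless"),
    ("automation", "powerful"),
    ("automation", "powerless"),
]

def assign(participant_ids, seed=42):
    """Randomly assign participants to the four cells with (near-)equal sizes."""
    ids = list(participant_ids)
    random.Random(seed).shuffle(ids)
    # Cycling through the four cells keeps the design balanced.
    return {pid: CONDITIONS[i % 4] for i, pid in enumerate(ids)}

groups = assign(range(252))  # e.g., 252 completed participants -> 63 per cell
```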

All participants were undergraduates from the University of Amsterdam. Data were collected via the online platform Qualtrics EMEA. In total, 354 responses were received. Due to an attention check question, many participants ended the experiment without answering all questions. After screening the data, 252 participants had finished all questions, and all completed samples were kept. Of these 252 participants, 103 were female, 148 were male, and one did not specify. Mean age was 20.67 (SD = 1.88, min = 17, max = 30). Group details are shown in Table 1.

Participants were given detailed scenarios based on their assigned groups. The difference between the type-of-AI groups is whether the AI presented will replace human workers. The difference between the powerful and powerless groups is whether the participant has control over the decision to implement that AI technology. Scenario details are as below:


Table 1: Group information

Group              n
Augmentation       126
Automation         126
Powerful           125
Powerless          126
Aug + Powerful     63
Aug + Powerless    63
Auto + Powerful    63
Auto + Powerless   63

Scenarios given:

• Augmentation group: Smart Shopping Cart is a smart shopping solution that automatically scans groceries as customers shop, so there is no scanning at the cashier anymore. All customers need to do is go to the cashier, where staff verify a few random items and then proceed with payment immediately. This AI technology aims to shorten waiting time at the cashier and increase the performance of the cashier staff. Note: No staff will be replaced or reallocated after implementation.

• Automation group: Smart Shopping Cart is a smart shopping solution that automatically scans groceries as customers shop. With a payment terminal built directly into the shopping cart, there is no need for a cashier anymore. All customers need to do is walk out of the door when finished; payment is deducted from their account immediately. This AI technology aims to improve the shopping experience for customers and save staffing costs for the supermarket. Note: Staff WILL be replaced or reallocated after implementation.


• Powerful group: Please imagine you have been working at a supermarket two days a week for some years now. You enjoy the work very much and you can really use the money. As part of your role, you HAVE control: you decide who will be hired and who will be let go, and you HAVE a say over investment decisions. This means you WILL get to have a say over the decision whether Smart Shopping Cart will be implemented.

• Powerless group: Please imagine you have been working at a supermarket two days a week for some years now. You enjoy the work very much and you can really use the money. As part of your role, you have no control: you do NOT decide who will be hired and who will be let go, and you have NO say over investment decisions. This means you will NOT get to have a say over the decision whether Smart Shopping Cart will be implemented.

With the scenarios given, participants were asked to answer Likert-scale questions in three categories: perceived benefit, perceived threat, and attitude towards use.

Perceived benefit was measured with 5 items adapted from Legris et al. (2003), on a 7-point Likert scale (1 = totally disagree, 4 = neutral, 7 = totally agree). The items are shown in Table 2 below.

Perceived threat was measured with 5 items adapted from Ivanov et al. (2020), on the same 7-point Likert scale. The items are shown in Table 3 below.

Attitude towards use was measured with 3 items adapted from Davis (1989), on the same 7-point Likert scale. The items are shown in Table 4 below.


Table 2: Items of perceived benefit

Item
PB1  Implementing this technology improves the quality of work at the supermarket for staff
PB2  Implementing this technology improves the quality of the shopping experience for customers
PB3  Implementing this technology supports critical aspects of my job
PB4  Implementing this technology makes it easier to do my job
PB5  Overall, I think it is a good idea to implement this technology

Table 3: Items of perceived threat

Item

PT1 I fear I might lose my job due to this technology

PT2 I fear that my salary will decrease after the implementation of this technology

PT3 I fear that, if my salary is increased, my company will have good reason to replace me after the implementation of this technology

PT4 I fear that the implementation of this technology will lead to new requirements of knowledge and skills that I do not have

PT5 Overall, the implementation of this technology makes me feel threatened


Table 4: Items of attitude towards use

Item
Attitude1  I will follow the instructions to use this technology after implementation
Attitude2  I will make sure the implementation of this technology goes smoothly
Attitude3  Overall, I’m looking forward to using this technology

A factor analysis was conducted to examine how well the items contributed to measuring the underlying factors. The Kaiser-Meyer-Olkin measure verified the sampling adequacy of the analysis, KMO = .84, and Bartlett's test of sphericity, χ²(78) = 1437.158, p < .001, indicated that the correlation structure was adequate for factor analysis. A maximum likelihood factor analysis with a cut-off point of .40 and Kaiser's criterion of eigenvalues greater than 1 yielded a three-factor solution as the best fit for the data, accounting for 63.19% of the variance.
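For readers who want to retrace this step, below is a minimal Python sketch of the same workflow, assuming the 13 item responses sit in a pandas DataFrame with columns named as in Tables 2-4 and using the third-party factor_analyzer package (one possible tool; the thesis does not state which software was used):

```python
# A minimal sketch of the reported analysis: KMO, Bartlett's test, and a
# maximum likelihood factor analysis with three factors and a .40 cut-off.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

ITEMS = ["PB1", "PB2", "PB3", "PB4", "PB5",
         "PT1", "PT2", "PT3", "PT4", "PT5",
         "Attitude1", "Attitude2", "Attitude3"]

def factor_check(df: pd.DataFrame):
    x = df[ITEMS].dropna()
    _, kmo_total = calculate_kmo(x)             # sampling adequacy (KMO)
    chi2, p = calculate_bartlett_sphericity(x)  # Bartlett's test of sphericity
    fa = FactorAnalyzer(n_factors=3, method="ml", rotation="oblimin")
    fa.fit(x)
    loadings = pd.DataFrame(fa.loadings_, index=ITEMS)
    # Suppress loadings below the .40 cut-off used above.
    return kmo_total, chi2, p, loadings.where(loadings.abs() >= 0.40)
```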

The first factor had an eigenvalue of 4.96 and accounted for 38.18% of the variance in the data. The second factor had an eigenvalue of 2.05 and accounted for a further 15.75% of the variance. The third factor had an eigenvalue of 1.20 and accounted for a further 9.26% of the variance. Results are shown in Table 5 below. Within the perceived benefit items, PB2 was an outlier (it cross-loaded on two factors), and it is the only item not directly related to the participant's working status. Therefore, PB2 was removed from further analysis.

Within the perceived threat items, PT4 was an outlier that did not sufficiently contribute to the perceived threat factor, and it is the only item not directly related to the participant's job security. Therefore, PT4 was removed from further analysis.

Within the attitude towards use items, all three items sufficiently contributed to the attitude towards use factor. Therefore, all three items were kept.


For the further analysis of the hypotheses, several variables were created. A new variable PBM was created after the removal of PB2, as the mean of the perceived benefit items: PBM = Mean(PB1, PB3, PB4, PB5). The same was done for perceived threat (removing PT4) and attitude towards use: PTM = Mean(PT1, PT2, PT3, PT5) and Attitude = Mean(Attitude1, Attitude2, Attitude3).
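A minimal sketch of this scoring step, under the same DataFrame assumption as above:

```python
# A minimal sketch of the composite scores, dropping PB2 and PT4 as above.
import pandas as pd

def composites(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["PBM"] = df[["PB1", "PB3", "PB4", "PB5"]].mean(axis=1)
    out["PTM"] = df[["PT1", "PT2", "PT3", "PT5"]].mean(axis=1)
    out["Attitude"] = df[["Attitude1", "Attitude2", "Attitude3"]].mean(axis=1)
    return out
```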

Table 5: Factor analysis result

Item                                                                                    Loading
PB1  Implementing this technology improves the quality of work at the supermarket for staff   .74
PB2  Implementing this technology improves the quality of the shopping experience for customers   .50 / .45 (cross-loading on two factors)
PB3  Implementing this technology supports critical aspects of my job                     .76
PB4  Implementing this technology makes it easier to do my job                            .73
PB5  Overall, I think it is a good idea to implement this technology                      .75
PT1  I fear I might lose my job due to this technology                                    .76
PT2  I fear that my salary will decrease after the implementation of this technology      .78
PT3  I fear that, if my salary is increased, my company will have good reason to replace me after the implementation of this technology   .80
PT4  I fear that the implementation of this technology will lead to new requirements of knowledge and skills that I do not have   .50
PT5  Overall, the implementation of this technology makes me feel threatened              .86
Attitude1  I will follow the instructions to use this technology after implementation     .87
Attitude2  I will make sure the implementation of this technology goes smoothly           .86
Attitude3  Overall, I’m looking forward to using this technology                          .54

Factor 1: percentage of variance 38.18%, eigenvalue 4.96, Cronbach’s alpha .82
Factor 2: percentage of variance 15.75%, eigenvalue 2.05, Cronbach’s alpha .80
Factor 3: percentage of variance 9.26%, eigenvalue 1.20, Cronbach’s alpha .76


5. Result

As a next step, a correlation test was conducted. For analysis purposes, within type of AI, augmentation was given the value -1 and automation the value 1; within power, powerless was given the value -1 and powerful the value 1. The means, standard deviations, and correlations of the study variables are provided in Table 6 below. Type is negatively related to PBM (-0.49, p < .001) and positively related to PTM (0.49, p < .001), meaning that moving from augmentation to automation significantly decreases perceived benefit and significantly increases perceived threat. PBM is positively related to Attitude (0.50, p < .001), and PTM is negatively related to Attitude (-0.35, p < .001).

Table 6: Means, Standard Deviations, Correlations

Variables     M     SD    1             2             3             4

1. Type       0     1
2. Power      0     1     .00 (1.00)
3. PBM        4.38  1.14  -.49 (<.001)  .17 (.006)
4. PTM        4.60  1.31   .49 (<.001)  -.23 (<.001)  -.45 (<.001)
5. Attitude   5.21  .95   -.29 (<.001)  .18 (.005)    .50 (<.001)   -.35 (<.001)
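The contrast coding and correlation matrix above can be reproduced with a short sketch, again assuming a DataFrame with raw string condition columns (the column names `type` and `power` are illustrative):

```python
# A minimal sketch of the -1/+1 contrast coding and the correlation matrix.
import pandas as pd

def code_and_correlate(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["Type"] = out["type"].map({"augmentation": -1, "automation": 1})
    out["Power"] = out["power"].map({"powerless": -1, "powerful": 1})
    return out[["Type", "Power", "PBM", "PTM", "Attitude"]].corr()
```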

As a last step, a PROCESS test was conducted to verify the whole model, with type of AI (type) as the independent variable, attitude towards use (Attitude) as the dependent variable, perceived benefit (PBM) and perceived threat (PTM) as mediators, and power as the moderator. Based on these variables, model 7 in PROCESS was selected. Three sub-models are included in this test: model 1 with PBM as outcome, model 2 with PTM as outcome, and model 3 with Attitude as outcome.
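PROCESS is a macro for SPSS/SAS/R; as an illustration of what model 7 with two parallel mediators estimates, its three sub-models can be approximated with plain OLS regressions in Python's statsmodels (a sketch, not the macro itself; column names follow the coding sketched earlier):

```python
# A sketch approximating PROCESS model 7 with two parallel mediators.
import statsmodels.formula.api as smf

def process_model7(df):
    m1 = smf.ols("PBM ~ Type * Power", data=df).fit()           # mediator 1: a1, a2, a3
    m2 = smf.ols("PTM ~ Type * Power", data=df).fit()           # mediator 2
    m3 = smf.ols("Attitude ~ Type + PBM + PTM", data=df).fit()  # outcome: c', b1, b2
    return m1, m2, m3
```

PROCESS additionally derives bootstrap confidence intervals for the (conditional) indirect effects; that resampling step is omitted from this sketch.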

Then, the regression assumptions were checked. All three models showed autocorrelation statistics close to 2 (model 1 = 1.97, model 2 = 1.99, model 3 = 1.87), indicating low autocorrelation, and all VIF values were below 5, indicating the absence of strong multicollinearity. Furthermore, the residuals were normally distributed with constant variance.
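A sketch of these assumption checks, assuming the reported autocorrelation statistics are Durbin-Watson values (values near 2 indicate little autocorrelation):

```python
# A minimal sketch of the assumption checks for a fitted statsmodels OLS model.
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.outliers_influence import variance_inflation_factor

def check_assumptions(fitted):
    dw = durbin_watson(fitted.resid)
    exog = fitted.model.exog  # design matrix; column 0 is the intercept
    vifs = [variance_inflation_factor(exog, i) for i in range(1, exog.shape[1])]
    return dw, vifs           # flag VIF values of 5 or more
```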

Results from PROCESS were as follows. Model 1 (outcome: PBM) showed that both the direct effect from type (β = -0.49, 95% CI [-0.60, -0.39], p < .001) and from power (β = 0.17, 95% CI [0.07, 0.28], p = .002) on PBM were significant. This indicates that augmentation AI was perceived as more beneficial, supporting H1: augmentation AI will lead to higher perceived benefit than automation AI. However, the interaction effect was insignificant (β = 0.04, 95% CI [-0.07, 0.15], p = .457). H3 was rejected; no interaction of type and power was seen in this model.

Model 2 (outcome: PTM) showed that both the direct effect from type (β = 0.49, 95% CI [0.38, 0.59], p < .001) and from power (β = -0.23, 95% CI [-0.34, -0.13], p < .001) on PTM were significant, indicating that automation AI was perceived as more threatening, supporting H2: automation AI will lead to higher perceived threat than augmentation AI. However, the interaction effect was insignificant (β = 0.03, 95% CI [-0.07, 0.14], p = .524). H4 was rejected; no interaction of type and power was seen in this model.

Model 3 (outcome: Attitude) showed that the direct effect from type to attitude was insignificant (β = 0.00, 95% CI [-0.13, 0.13], p = .996), while the effects from PBM (β = 0.43, 95% CI [0.31, 0.56], p < .001) and PTM (β = -0.15, 95% CI [-0.28, -0.02], p = .021) were significant. So, there might be a significant indirect effect. Further analysis showed that the indirect effect of type via PBM was statistically significant both at power = -1 (powerless; IE = -0.23, 95% CI [-0.35, -0.14]) and at power = 1 (powerful; IE = -0.20, 95% CI [-0.30, -0.11]). Likewise, the indirect effect of type via PTM was statistically significant both at power = -1 (powerless; IE = -0.07, 95% CI [-0.13, -0.007]) and at power = 1 (powerful; IE = -0.08, 95% CI [-0.15, -0.009]).

Given that the model treats the IV (type of AI) and the moderator (power) symmetrically, an extra PROCESS test was run with power as the IV and type of AI as the moderator, to see if PBM and PTM also mediate the effect of power. The direct effect from power to attitude was insignificant (β = 0.07, 95% CI [-0.04, 0.18], p = .200). The indirect effect of power via PBM was statistically significant both at type = -1 (augmentation; IE = 0.06, 95% CI [0.0002, 0.12]) and at type = 1 (automation; IE = 0.09, 95% CI [0.02, 0.17]). Likewise, the indirect effect of power via PTM was statistically significant both at type = -1 (augmentation; IE = 0.04, 95% CI [0.002, 0.08]) and at type = 1 (automation; IE = 0.03, 95% CI [0.0008, 0.07]).

Therefore, there were, in fact, significant indirect effects of both type and power on attitude via PBM and PTM, even though these effects did not depend on the interaction between type and power. H5 and H6 were both supported.

Figure 5 below presents a summary of the p-values.

Figure 5: p-value summary in conceptual model


6. Discussion

This research shows that the type of AI significantly affects employees' levels of perceived benefit and perceived threat. In detail, AI presented as augmentation received a higher level of perceived benefit, and AI presented as automation received a higher level of perceived threat. Perceived benefit and perceived threat indeed mediate the effect of type of AI on attitude towards use: a higher perceived benefit increases willingness to accept, and a lower perceived threat likewise leads to higher acceptance of AI. Unfortunately, the interaction effect of type and power on perceived benefit and perceived threat did not pan out as hypothesized, meaning that power does not moderate the impact of type on either perceived benefit or perceived threat.

One thing is worth noticing, however: contrary to expectations, a significant direct effect from power on perceived benefit and perceived threat was observed. Furthermore, perceived benefit and perceived threat also mediate the effect of power on attitude towards use, which can be a topic for further research. It suggests that power may itself be a main driver of differences in willingness to accept, playing the same role as AI type.


7. Limitations, Implications, and Future directions

7.1. Implications and Future

7.1.1 Practical implication

One practical implication of this study is to present the AI to employees as augmentation AI. As discussed by Davenport and Kirby (2015), the type of AI shown can be manipulated. Even when tasks are similar, augmentation and automation AI have very different effects on employees' acceptance of AI. When firms want to adopt AI, employees' attitudes will be a key factor in the result. One way to manage employees' attitudes is to present the AI as a helping hand with which they can reach better working performance. The critical point is to let employees know that the implemented AI is not a threat that will replace them in the future. In this way, employees will perceive a higher level of benefit and a lower level of threat from AI. As a result, attitude towards use (acceptance) will increase, and the possibility of successful implementation rises accordingly.

The above implication applies when the type of AI can be altered, but what if the AI can only be presented as automation? This study provides a possible way of thinking that needs further verification. As discussed above, power does not moderate the effect of the type of AI on attitude. However, power may have an indirect impact on attitude via perceived benefit and perceived threat by itself. Therefore, when an AI has no choice but to be presented as automation, a possible way to increase the chance of successful implementation is to increase employees' perception of power. A possible explanation is that those who feel powerful are more willing to accept working with AI because they perceive they can avoid unwanted positions, such as being replaced, and focus more on the positive effects of what AI can bring, such as achieving higher efficiency at work. But once again, the impact of power on this acceptance model needs further verification.

7.1.2 Theoretical implication

With a new version of the technology acceptance model, AIAM, this study hopes to clarify the relationship between AI implementation, employees' attitudes, and the powerful or powerless mindset. It may serve as a basis for further research and guide practitioners in merging human employees and AI tools and creating value in the broader business context.

Besides that, this study has further implications for future AIAM research. First, it is one of the first studies to systematically evaluate both positive attitudes (perceived benefit) and negative attitudes (perceived threat) together in relation to AI acceptance. It contributes to the research field on the business value of employees' attitudes towards AI, as it confirms the relevance of managing employees' mindsets from both the positive and the negative attitude angle for technology implementation. Second, even though power did not contribute to AI acceptance as this study assumed, it opens future possibilities for exploring the business value of power-mindset manipulation in business.

7.2. Limitations and Future

Nonetheless, these implications must be interpreted with caution, and several limitations should be discussed. First, the participants in this study were all students from the University of Amsterdam. Although these students have business study backgrounds, and using students as a sample minimized costs and made the research more feasible for a master thesis project, this study would have been stronger if performed in a real business environment, because it focuses on business AI implementation.


Second, this study only represents a snapshot of the experiment at a certain point in time. It does not consider longitudinal variation in the implementation process or shifts in employees' attitudes.

A longitudinal study could include factors such as a switch in the type of AI that influences employees' perceived benefit and perceived threat, or a change in power mindset that affects these attitudes. As the implementation of a technology is a longitudinal process, such attitude changes may have a significant impact on the result. Third, this study did not include information on the differences in implementation cost and duration between augmentation and automation AI. These differences may provide crucial information for firms when deciding which type of AI to implement. Therefore, further, more detailed studies based on these differences should be carried out.

8. Conclusion

TAM, as a tool to assess technology acceptance, has proven to be more than useful in the field of technology implementation (Legris et al., 2003). TAM provides theoretical thinking about the essential parts that affect technology acceptance, which inspired AIAM to include two opposing attitude mechanisms when explaining AI implementation, because the original TAM did not include the mechanism that most people would mention when talking about AI: negative attitudes (perceived threat). With AIAM, this study hopes to provide a somewhat systematic way to analyze AI implementation.

As AI implementation gradually attracts wider attention in business, it is extremely important to be aware of employees' attitudes and mindsets when incorporating AI into human teams.


9. Acknowledgements

First of all, I would like to express my thanks and gratitude to my supervisor, Dr. Gijs van Houwelingen, for providing constant and helpful guidance throughout this whole master thesis project. He provided me with insights for reaching an exciting research question, gave me hints on finding a research gap, showed me examples of demonstrating methodology, and guided me on how to present the research. It was a great privilege and honor to finish this study under his tutoring. I am also highly grateful for his patience and efficiency. Although my questions could sometimes be too frequent, he always managed to respond speedily with explicit instructions. I could not have asked for a better supervisor for my master thesis project.

Next, I would like to say thank you to my fellow alumnus and roommate, Mr. Sander van Rijn. He provided me with useful feedback every time I felt stuck. He also offered useful insights on how to tell a better story in this study because, I must admit, elaborating is not my strong suit. He also helped me keep on track throughout the thesis process so that I did not have to rush at the very last stage. I would not have been able to finish this thesis without his assistance.

Last but not least, I would like to express my appreciation to my parents, Qiuhua Liang and Huiping Chen. They have been extremely supportive during the whole master program, both emotionally and financially. I know it has not been easy for them, especially with me almost turning 30, which, in Chinese culture, is not a give-up-a-good-salary-job-and-go-back-to-being-a-student kind of age. Yet they have never said a word to make me feel bad about this decision. Therefore, a thank you from the bottom of my heart is needed.


10. Reference

Adams, R. L. (2017, January 10). 10 Powerful Examples Of Artificial Intelligence In Use Today.

Forbes. https://www.forbes.com/sites/robertadams/2017/01/10/10-powerful-examples-of- artificial-intelligence-in-use-today/

Agrawal, A., Gans, J. S., & Goldfarb, A. (2019). Exploring the impact of artificial Intelligence:

Prediction versus judgment. Information Economics and Policy, 47, 1–6.

Anderson, C., John, O. P., & Keltner, D. (2012). The personal sense of power. Journal of personality, 80(2), 313-344.

Bean, R. (2019). Why fear of disruption is driving investment in AI. MIT Sloan Management Review. https://sloanreview.mit.edu/article/why-fear-of-disruption-is-driving- investment-in-ai/

Brougham, D., & Haar, J. (2018). Smart Technology, Artificial Intelligence, Robotics, and Algorithms (STARA): Employees’ perceptions of our future workplace. Journal of Management & Organization, 24(2), 239-257. doi:10.1017/jmo.2016.55

Brynjolfsson, E., & McAfee, A. (2017). The business of artificial intelligence: What it can — and cannot — do for your organization. Harvard Business Review. https://hbr.org/2017/07/the-business-of-artificial-intelligence

Bugental, D. B., & Lewis, J. C. (1999). The paradoxical misuse of power by those who see themselves as powerless: How does it happen?. Journal of Social Issues, 55(1), 51-64.

Crews, G. A., Markey, M. A., & Kerr, S. E. M. (2021). Mitigating Mass Violence and Managing Threats in Contemporary Society. IGI Global.


Davenport, T. H., & Kirby, J. (2015). Beyond automation. Harvard Business Review, 93(6), 58- 65.

Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. In MIS Quarterly (Vol. 13, Issue 3, p. 319). https://doi.org/10.2307/249008

Fiske, S. T., & Dépret, E. (1996). Control, Interdependence and Power: Understanding Social Cognition in Its Social Context. In European Review of Social Psychology (Vol. 7, Issue 1, pp. 31–61).

https://doi.org/10.1080/14792779443000094

Flanders, N. A., Fishbein, M., & Ajzen, I. (1975). Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research. Addison-Wesley.

Glikson, E., & Woolley, A. W. (2020). Human Trust in Artificial Intelligence: Review of Empirical Research. In Academy of Management Annals (Vol. 14, Issue 2, pp. 627–660).

https://doi.org/10.5465/annals.2018.0057

Grønsund, T., & Aanestad, M. (2020). Augmenting the algorithm: Emerging human-in-the-loop work configurations. The Journal of Strategic Information Systems, 29(2), 101614.

Guinote, A. (2017). How power affects people: Activating, wanting, and goal seeking. Annual Review of Psychology, 68, 353-381.

Ivanov, S. H., Kuyumdzhiev, M., & Webster, C. (2020). Automation fears: Drivers and solutions. https://doi.org/10.31235/osf.io/jze3u

Johnston, A. C., & Warkentin, M. (2010). Fear appeals and information security behaviors: An empirical study. MIS quarterly, 549-566.

Keltner, D., Gruenfeld, D. H., & Anderson, C. (2003). Power, approach, and inhibition.

Psychological review, 110(2), 265.

Legris, P., Ingham, J., & Collerette, P. (2003). Why do people use information technology? A critical review of the technology acceptance model. In Information & Management (Vol. 40, Issue 3, pp. 191–204). https://doi.org/10.1016/s0378-7206(01)00143-4


Leyer, M., & Schneider, S. (2021). Decision augmentation and automation with artificial intelligence: Threat or opportunity for managers?. Business Horizons.

Liang, Y., & Lee, S. A. (2017). Fear of autonomous robots and artificial intelligence: Evidence from national representative data with probability sampling. International Journal of Social Robotics, 9(3), 379-384.

Lichtenthaler, U. (2019), "Extremes of acceptance: employee attitudes toward artificial intelligence", Journal of Business Strategy, Vol. 41 No. 5, pp. 39-45.

McCarthy, J. (2007). What is artificial intelligence.

https://kewd.pw/what_is_artificial_intelligence.pdf

Opoku, M., & Enu-Kwesi, F. (2020). Relevance of the technology acceptance model (TAM) in information management research: A review of selected empirical evidence. Research Journal of Business and Management, 7(1), 34-44.

Plastino, E. and Purdy, M. (2018), "Game changing value from Artificial Intelligence: eight strategies", Strategy & Leadership, Vol. 46 No. 1, pp. 16-22. https://doi.org/10.1108/SL-11- 2017-0106

Press. (2019, July 15). Is AI Going To Be A Jobs Killer? New Reports About The Future Of Work. Forbes.

Recht, M., & Bryan, R. N. (2017). Artificial intelligence: threat or boon to radiologists?. Journal of the American College of Radiology, 14(11), 1476-1480.

Reeves, S. (2020, May 5). 8 Helpful Everyday Examples of Artificial Intelligence.

https://www.iotforall.com/8-helpful-everyday-examples-of-artificial-intelligence

Rucker, D. D., & Galinsky, A. D. (2008). Desire to acquire: Powerlessness and compensatory consumption. Journal of Consumer Research, 35(2), 257-267.


Schepers & Wetzels. (2007). A meta-analysis of the technology acceptance model: Investigating subjective norm and moderation effects. Information & Management, 44(1), 90–103.

Shaw,J., Rudzicz,F., Jamieson,T., Goldfarb,A. (2019) Artificial Intelligence and the Implementation Challenge. J Med Internet Res 2019;21(7):e13659

Simon,O., Neuhofer, B., & Egger, R.(2020). Human-robot interaction: Conceptualising trust in frontline teams through LEGO® Serious Play®. Tourism Management Perspectives 35:100692

Toreini, E., Aitken, M., Coopamootoo, K., Elliott, K., Zelaya, C. G., & van Moorsel, A. (2020, January). The relationship between trust in AI and trustworthy machine learning

technologies. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 272-283).

Wamba-Taguimdje, S.-L., Fosso Wamba, S., Kala Kamdjoug, J.R. and Tchatchouang Wanko, C.E. (2020), "Influence of artificial intelligence (AI) on firm performance: the business value of AI-based transformation projects", Business Process Management Journal, Vol. 26 No. 7, pp. 1893-1924

Winick, E. (2018), “Every study we could find on what automation will do to jobs, in one chart”, MIT Technology Review. https://www.technologyreview.com/2018/01/25/146020/every- study-we-could-find-on-what-automation-will-do-to-jobs-in-one-chart/
