
A New Way of Working:

Acceptance Toward Artificial Intelligence in a Work Environment

‘Intelligence Artificielle’ – (Peshkova, 2020)

Yasmin van den Brink, 12940925

June 2021, final version
Supervisor: Dr. H. Güngör

EBEC approval number: EC 20210316030316


Statement of Originality

This document is written by Yasmin van den Brink, who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.

Date:

30 June 2021

Signature:

Y.E. van den Brink


Table of Contents

Abstract
1. Introduction
2. Literature Review
2.1 Artificial Intelligence
2.2 The need for Artificial intelligence in the workplace
2.3 The challenges of AI at the workplace
2.4 Implementing Artificial intelligence
2.4.1 Employee perspective
2.4.2 Consumer perspective
2.5 Aspects to trust
2.6 Environmental factors
2.7 The resistance against AI
2.8 Job security
2.9 Models
2.9.1 Research model
2.9.2 Hypotheses
3. Methodology
3.1 Research design
3.2 Pre-test
3.3 Sample
3.4 Measurement
3.4.1 Survey
4. Results
4.1 Preliminary steps
4.1.2 Demographic analysis
4.2 Reliability and normality analysis
4.3 Correlation analysis
4.4 Hypothesis testing
4.4.1 Direct effect
4.4.2 Indirect effects
4.5 Summary of results
5. Discussion
5.1 General discussion
5.2 Theoretical and managerial implications
5.3 Limitations
5.4 Further research
6. Conclusion
7. Literature
8. Appendix
8.1 Survey
8.2 Table with demographics of the participants


Abstract

Artificial Intelligence (AI) is everywhere. Whether it is a smartphone application or a self-driving car, the software has infiltrated society. The term AI is an umbrella for a computer performing tasks with human capabilities, including self-learning. Examples of the benefits of AI are accelerated processes and a smaller margin of error, but AI could also answer the labour shortage in the Western world. As a result, the question of whether AI should be applied in companies no longer seems relevant; the question is how to apply it.

The biggest issue with implementing AI will be its acceptance by employees. Employees struggle with the fear of being replaced by AI and with uncertainty about what AI entails. In addition, AI software is not easy to understand and not transparent, which creates a fog around it. This ambiguity leads to the current situation, in which the factors that play a role in ultimately accepting AI in the workplace are still unclear.

This research clarifies which factors play a role in this acceptance. Several leading models are combined: the UTAUT, the TPB and the VAM. Through a survey among 154 participants, the willingness to use AI in the workplace was measured. The study shows a relation between performance expectancy and behavioural intentions. The effect of subjective norms on behavioural intentions has also been established. In addition, both transparency and explainability affected the intention to use AI.

These results are valuable because companies can already consider them when designing implementation strategies for AI. The results contribute directly to clarifying this subject and offer great opportunities for follow-up research. Since AI is a topical subject, follow-up research is recommended.

Keywords: artificial intelligence, algorithms, acceptance, trust, explainable ai, transparent ai, job security, xai, employee acceptance.


1. Introduction

In today's world, digitalisation is indispensable. Whenever we have a question, we 'Google' the answer. When we do not have time to go to the store, we can order our groceries online. Before we take the train, we check the departure times and order a ticket on our smartphone. Digitalisation has infiltrated our lives more than we might realise. When shopping online, the website immediately offers an item similar to what we have been viewing, or an item bought by other consumers who viewed the same product. All this information is based on an algorithm (Burgess, 2018). The ads we see on websites do not appear on our screens randomly; they are based on our previous searches and clicks.

In addition to this, artificial intelligence (AI) is also increasingly infiltrating our work life in all sorts of ways. More and more strategic directions are mapped out based on expectations calculated by an algorithm (Duan, Edwards & Dwivedi, 2019). Furthermore, an algorithm can select the most suitable candidates when hiring new employees, as Unilever introduced in 2017 (Gee, 2017). While the use of AI can have many benefits for a company or organisation, things can also go awry. A very recent case with major consequences for many Dutch parents is the so-called child benefits affair. The Dutch Tax Authorities (Belastingdienst) used an algorithm to detect fraud among child benefit recipients, which resulted in thousands of parents being wrongfully regarded as fraudsters (Kleinnijenhuis, 2020). Whether the algorithm or the human input is to blame is still open to discussion. However, people's opinions regarding the use of AI are influenced by such messages, which is an important observation to keep in mind now that the number of decisions made based on data or an algorithm is growing every day.

A combination of a human's opinion and a decision based on an algorithm will likely be a good starting point for enhanced collaboration between the two (Guszcza, Lewis & Evans-Greenwood, 2017). Nevertheless, for such a collaboration to get off the ground, it is essential that employees have confidence in the concept (Duan et al., 2019). Working with systems that are not trusted cannot lead to a good result: a lack of confidence could lead to the non-use or incorrect use of AI, which can influence the firm's performance (Duan et al., 2019). How much confidence do employees have in the decisions that are made based on AI?

Furthermore, the effect of confidence in AI on the performance of organisations is not limited to the confidence of employees. The use of AI is perhaps more present in the home environment than at work. Consumers are confronted with a diversity of AI at different moments in their purchases or services (Burgess, 2018). Consumers' confidence in how a company deals with AI can play a role in whether or not they choose that company (Kaplan & Haenlein, 2019). In addition, the experience a person has as a consumer shapes the expectations of this same person as an employee (Araujo et al., 2020). Therefore, it is crucial for a company that consumers have confidence in its use of AI.

Other than a lack of confidence in AI, the use of AI could also go awry due to improper implementation. To fully utilise AI's capabilities, it must be implemented appropriately (Floridi et al., 2018; Fountaine et al., 2019; Kaplan & Haenlein, 2019). Part of the implementation is to get employees on board (Floridi et al., 2018; Hengstler et al., 2016). However, there still appears to be much resistance, for example, the fear that AI will take over people's work (Burgess, 2018; Kreutzer & Sirrenberg, 2019). Understanding what AI entails could help to allay this fear in co-workers and ultimately lead to the acceptance of AI (Fountaine et al., 2019; Lee & See, 2004).

To better implement AI in the workplace, further research is needed into the acceptance of AI by employees. This research will contribute to the search for answers to the question of how to introduce AI. It is necessary to look further than just the academic perspective by also offering the industry tools to deal with this.


2. Literature review

Artificial intelligence (AI) plays a major role in both the work environment and private life (Burgess, 2018; Kaplan & Haenlein, 2019). But what exactly is AI? Companies use various applications and different forms of AI to get a head start on their competitors. However, employee trust seems to play an essential role in the ultimate success of AI in business (Burgess, 2018; Güngör, 2020; Kaplan & Haenlein, 2019; Kreutzer & Sirrenberg, 2019).

2.1 Artificial Intelligence

According to Güngör (2020): “Artificial intelligence (AI) is an umbrella term for various methodologies that are designed to provide computers with human-like abilities of hearing, seeing, reasoning and learning.” Understanding what AI entails and where it can be applied is essential to understanding how it works.

Artificial intelligence is also referred to as the study of letting a machine make better choices than humans do, or at least having a machine perform the cognitive tasks that a human can perform. This means that the machine must be able to do various things, such as learning independently, arriving at a solution, and arguing for it. To this end, three types of assessment are used: description (what is happening), prediction (what will happen) and prescription (what should happen) (Kreutzer & Sirrenberg, 2019).

The step from the machines that performed complex calculations in the 1950s to the capacities mentioned above might seem big. However, in recent years there have been developments towards this by feeding machines large amounts of data (Kreutzer & Sirrenberg, 2019). A revolutionary step was taken from the moment large volumes of data, big data, became available (Burgess, 2018). The AI system works in such a way that when it has seen many photos of a cat, it 'recognises' the cat in a new picture. Although this may seem like a steady process, trouble can still arise. For example, it will not be a problem for the AI software to distinguish between a red cat and a lion in its natural habitat, since it was fed many pictures displaying a similar context. However, when the red cat is photographed on the savannah, the system might just think that it is a lion.

A comparison can be made with the human brain to understand how this learning method of the AI system works. On one side, information is entered; on the other, a result comes out. In between, different layers have made connections, which can be compared to the neurons in our brain. By offering more information to the system, new connections can be made. This allows the machine to achieve better results and further develop itself independently.
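To make this picture of layered connections concrete, the following minimal sketch shows a feed-forward pass in Python with NumPy. The layer sizes, random weights and input values are purely illustrative assumptions; training, which would adjust the weights as more information is offered to the system, is omitted.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative network: 4 input features -> 3 hidden units -> 1 output.
# The weight matrices stand in for the 'connections' between the layers.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 1))

def forward(x: np.ndarray) -> float:
    """One pass from input, through the hidden layer, to an output score."""
    hidden = np.tanh(x @ W1)                 # intermediate 'connections'
    out = 1 / (1 + np.exp(-(hidden @ W2)))   # e.g. probability of 'cat'
    return out.item()

print(forward(np.array([0.2, 0.9, 0.1, 0.5])))  # hypothetical picture features
```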

The first algorithm, or in other words the first connection, will serve as the basis for further developed algorithms. This process is also called machine learning (ML) (Kreutzer & Sirrenberg, 2019). According to Kreutzer and Sirrenberg (2019), an algorithm is "a programmed statement that processes input data in a predefined form and outputs results based on it". To let algorithms improve themselves, lots of data is needed. Three types of learning are possible:

The first is supervised learning. In supervised learning, the input is labelled and the answers are already known; the point is to obtain the answers from the dataset as precisely as possible (Kaplan & Haenlein, 2019; Kreutzer & Sirrenberg, 2019). Separating photos or documents in a labelled dataset is an example of this (Kaplan & Haenlein, 2019).

The second type of learning is unsupervised learning. With this system, no target outcomes have been defined: the input is not labelled, so the AI system will have to identify the labels by itself. For example, this way of learning is used for marketing purposes to track down a group of potential customers with specific characteristics (Kaplan & Haenlein, 2019; Kreutzer & Sirrenberg, 2019).

The last type of learning is reinforcement learning. In reinforcement learning, no optimal result is indicated at the start; the system itself has to find out what works through trial and error. A comparison can be made with an AI system that tries to get better at a video game. In that way, the system constantly improves itself (Kaplan & Haenlein, 2019; Kreutzer & Sirrenberg, 2019).
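As a minimal illustration of the first two learning types, the sketch below applies scikit-learn to invented toy data; the data, labels and cluster count are assumptions made purely for the example, and reinforcement learning is omitted because it needs an interactive environment.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)
X = rng.normal(size=(100, 2))              # toy input data
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # known answers: the supervised setting

# Supervised learning: the labels y are given and the model learns to predict them.
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))

# Unsupervised learning: no labels; the model has to find groups by itself.
groups = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)
print(groups[:5])
```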

2.2 The need for Artificial intelligence in the workplace

AI is an increasingly important theme for companies because the software can accelerate the work process with fewer errors (Burgess, 2018; Kaplan & Haenlein, 2019). The use of AI may or may not lead to a competitive advantage (Kaplan & Haenlein, 2019). However, there is another aspect that increases the urgency of the use of AI: the ageing population. Much of the Western world faces an increasingly ageing population combined with fewer young people (Acemoglu & Restrepo, 2017; de Meijer et al., 2013). This results in two parallel problems.

Firstly, the need for medical care is increasing. Simply put, the elderly (above 50) need more healthcare than young people (up to 50). Due to this increasing need, the capacity of healthcare facilities will have to increase. In order to meet the need for health care, diagnoses will have to be made more efficiently (de Meijer et al., 2013). AI could play a supporting role in this.

Secondly, the labour market is contracting. Due to the declining growth of the population, there are fewer working people, and this number will continue to decline in the coming years (Acemoglu & Restrepo, 2017; de Meijer et al., 2013). In the example of healthcare facilities, this leads to fewer staff members, who can no longer cope with the increasing need for health care. This situation can be translated to other sectors as well.

Previous research has shown that the ageing population does not have to harm economic growth, for example when robots are used (Acemoglu & Restrepo, 2017). AI can already be seen in the work environment nowadays, such as chatbots in the services sector, the selection of required information in law firms, or a source of advice when hiring new employees. In this way, AI works as a supplement to human work: it supports employees in their work (Burgess, 2018; Floridi et al., 2018).

2.3 The challenges of AI at the workplace

Despite the many benefits of AI for an organisation, implementation is often difficult (Floridi et al., 2018; Fountaine et al., 2019; Kaplan & Haenlein, 2019). Various studies have shown that the opportunities of AI mainly lie in the further development of the technology, while the challenges seem to lie in the field of human capital. Employees will have to be involved in the further development of AI (Barredo Arrieta et al., 2020; Dwivedi et al., 2021; Lichtenthaler, 2019). As mentioned earlier, artificial intelligence can function well as a supplement in the workplace. However, AI is often seen by employees as a replacement rather than an addition.

Nevertheless, implementing AI technologies and making sure they are perceived as a supplement and not a replacement can be challenging for a company. According to the research of Cooper and Zmud (1990), identifying the importance of AI positioning during implementation contributes to the eventual adoption of the software. In addition, Markus (1983) shows that a change, such as an AI implementation, will be absorbed by employees when they believe it contributes to the sense of power they have in the company. However, when there is a feeling that the innovation will decrease that power, employees will refuse the software. For this reason, understanding what AI is and what it can mean for an employee is essential (Lapointe & Rivard, 2005).

Another challenge that arises more and more often is the ethical side of AI development: what is allowed and possible with the technology, and how far can one go? These two points seem to be related: how can AI be trusted when the ethical side is not (sufficiently) considered? Moreover, what are the consequences of not fully explaining how the system arrived at an outcome (Duan et al., 2019; Dwivedi et al., 2021)?

The biggest opportunity for companies probably lies in getting employees on board with the changes AI brings to work processes. When this challenge has been overcome, further consideration can be given to developing the technology and the ethical frameworks. The importance of (better) understanding what AI entails and how a specific application of AI works is often reflected in the literature. The same applies to transparency about how a result is achieved by applying AI (Barredo Arrieta et al., 2020; Borges et al., 2021; Davenport & Ronanki, 2018; Duan et al., 2019; Dwivedi et al., 2021).

In addition to the understanding of the algorithm itself, not enough attention has been paid to the effect of implementing AI in the company. Do employees who understand and use AI in their work also see its benefits, or does this turn out not to be the case (Duan et al., 2019)? All these facets lead to the following research question: What factors impact the acceptance of employees towards artificial intelligence in their work?

2.4 Implementing Artificial intelligence

The ways in which artificial intelligence can be applied in a company are almost innumerable, but management will need to understand where and how to apply AI to benefit from it (Kaplan & Haenlein, 2019; Kreutzer & Sirrenberg, 2019). Understanding what the software entails is one of the critical factors in making an implementation successful. It is necessary to look both from the perspective of the company as a whole and at the understanding of the software from a user perspective (Cooper & Zmud, 1990).

Furthermore, AI should not be expected to be a user-ready package that will be profitable right from implementation. Time and effort are needed to build a stable AI system and enough expertise within the company, and it will take time to incorporate a new way of working into the corporate culture (Fountaine et al., 2019).

The leadership style in implementing AI systems seems to play an essential role in further use. Even when introducing minor parts, it will be necessary to think ahead about the approach. An example of this is the software company IBM: when implementing AI, they decided not to use the term 'Artificial Intelligence' but chose 'Cognitive Computing', an approach used to indicate that the AI systems serve as support for employees and not as a replacement (Kaplan & Haenlein, 2019).

Involving the employees who will work with the AI application in thinking about the implementation and its functionality in the day-to-day work can increase the chance of acceptance. The same applies to including departments other than the department that will use the systems. A collaboration between employees and experts ensures that the AI system is workable (Fountaine et al., 2019). It is unclear to what extent this is applied in practice. Before any implementation or cross-departmental collaboration can be considered, senior management must first be on board. Without management understanding the urgency of AI, its use is likely to fail (Fountaine et al., 2019).

The use of AI will change things internally and externally for the company. Internally, the way of working will change, which requires flexibility and trust from employees. Externally, faster processing could lead to a competitive advantage. However, the consumer will need to have enough confidence in the company to trust it with personal data (Kaplan & Haenlein, 2019).

2.4.1 Employee perspective

More and more employees will have to deal with AI during their daily work. However, AI systems are not applications that are implemented once and remain unchanged. They are systems that are constantly changing and developing in order to improve. This improving character requires employees to adapt flexibly to change (Kaplan & Haenlein, 2019). Since the new generation of workers generally do not work their entire working life for one employer but 'job hop' instead (Khatri et al., 2001), an employee must deal with many different applications of AI. Understanding AI systems will play an essential role in performing work properly. Training and refresher courses from the company could therefore contribute to the success of the AI systems (Fountaine et al., 2019; Kaplan & Haenlein, 2019). On the other hand, for AI to function properly, a human check is required, because the results cannot (yet) be trusted blindly. New errors can always occur and there is a risk of being hacked (Kaplan & Haenlein, 2019).

2.4.2 Consumer perspective

The first acquaintance with AI usually occurs in the domestic sphere, for example Siri or Google Assistant (Burgess, 2018), or 'smart' thermostats and Philips Hue lighting. Since AI is in many cases first discovered in the private sphere (Burgess, 2018), this can already leave a mark on expectations, especially when people experience AI as a consumer in a different way than when it is associated with work (Güngör, 2020).

The expectations that consumers have about the operation of AI can also impact the use of AI systems. Consumers could switch to a competitor if they have more confidence in the method used there, which is of course a very undesirable situation (Kaplan & Haenlein, 2019). A situation like this can apply to various sectors, from assessing medical data to using an application for taking clothing measurements. In the end, the company will always want to radiate the most confidence in order to be chosen by the consumer. A competitive advantage is no longer easily obtained just by using AI systems to speed up the process: the competition also uses AI solutions, so the way employees use the AI systems and how the consumer judges the use of AI methods within the company will play an increasingly important role in creating an advantage.

The question of how AI can be applied to get the most out of it, with respect for people and ethics, in systems that we do not always seem to fully understand, might not count for consumers (Kaplan & Haenlein, 2019). Research by Araujo et al. (2020) shows that, surprisingly enough, consumers consider decisions made by AI systems to be equal to, or sometimes even better than, decisions made by people. This result gives the impression that these consumers trust AI to some extent.

This brings up an old Dutch saying, loosely translated as 'what the farmer does not know, he does not eat', meaning that new things are scary and therefore more easily rejected. In his book Trust and Power, Luhmann (2018) states that familiarity is a precursor to trust in something. Previous research shows a significant link between being familiar with, for example, an e-ticket and actually using it (Wan & Che, 2004). Since most people get acquainted with AI in the domestic sphere, the following question arises: does the consumer experience with AI also contribute to the willingness, as an employee, to work with the software (Araujo et al., 2020; Duan et al., 2019; Jones et al., 2010)?

Hypothesis 1: The performance expectancy with AI as a consumer contributes to the behavioural intention to accept AI in a work environment.

2.5 Aspects to trust

Exploiting the many possibilities that artificial intelligence offers is only possible if the technology is implemented correctly (Floridi et al., 2018; Fountaine et al., 2019; Kaplan & Haenlein, 2019). This can be a challenge in many areas, one of which is getting the employees on board. Without the trust of employees in AI, the technology will not be done full justice. The confidence of employees to work with AI arises from the expectation and intention to use it, as mentioned before, with acceptance as the ultimate goal (Floridi et al., 2018; Hengstler et al., 2016).

The importance of trust in AI is also mentioned by Lee and See (2004). Their results indicated that trust contributes to the acceptance of automation, especially in a dynamic environment. Trust therefore also seems to be a factor in the development of the use of AI, especially when people are expected to cooperate with the technology and, in a way, let the machine take over some of the control (Hengstler et al., 2016).

People tend to trust technology or algorithms more when they understand them (Hengstler et al., 2016). A lack of confidence, or even aversion, can arise when an AI system produces an incorrect outcome. Even though the system, on average, has a correct outcome more often than a human, it seems more challenging to forgive mistakes made by the system. The more the technology is seen as reliable, the more likely it is to be accepted (Wirtz et al., 2018).

In order to make a positive contribution to daily work, AI will have to become an added value. The system will have to make work easier and therefore not demand any psychological or mental learning effort (Kim et al., 2007). Especially for new users, the ease with which the software can be used plays a vital role in its acceptance; complex systems have a negative relationship with the acceptance of the system (Kim et al., 2007). If its use is perceived as complex, or if it is expected to be difficult, it will not contribute to acceptance.

Hypothesis 2: The complex technicality of AI plays a negative role in behavioural intention.

2.6 Environmental factors

Additionally, the lack of understanding of the technology in the media does not contribute to the behavioural intention to use AI. In some recent cases, an understanding of AI is hard to find; the media show that there is still little knowledge about interpreting research related to AI (Schwartz, 2018). Take, for instance, Cornell University's research on communication, especially business negotiations. In this research, boundaries were not adequately adjusted, which led to a mixture of words instead of the intended sentences (Lewis et al., 2017). The business magazine Fast Company translated this into an article about robots that communicated in their own language (Wilson, 2018). This type of reporting seems to influence the public's expectations of AI. According to the survey of Araujo et al. (2020), citizens are concerned about the use of algorithms in making decisions.

Not only reporting in the media but also people's immediate environment influences their expectations regarding AI. The opinions of important people in that environment, such as a manager at work or a good friend in private life, can leave a mark on the intention to use. These subjective norms bring a certain social pressure to look at AI in the same way (Ajzen, 1991). Therefore, Fountaine et al. (2019) recommend 'walking the talk' in the Harvard Business Review: ensuring that leaders act as role models should make the adoption of the software more accessible. This method is then rolled out step by step to involve the entire company (Fountaine et al., 2019). This sounds like a good theory, but currently this does not seem to be the case in many companies (Araujo et al., 2020).

Therefore, the following hypothesis is put forward:

Hypothesis 3: Subjective norms negatively influence the intentional behaviour of accepting AI.

2.7 The resistance against AI

The fear of AI is mentioned in the scientific field. Holloway and Hand predicted in 1988 that AI would take the place of actual humans. Years before this, in 1950, Norbert Wiener expressed his concern about machines that would function as demons that can learn and think for themselves (Yanyu, 2019).

The question that seems to be hiding behind this fear is whether AI is being applied to make machines more intelligent than humans, or to improve people's (work) lives by using smart machines (Burgess, 2018). A valid question, since the 'founders' of AI, Douglas Engelbart and Marvin Minsky, do not seem to have an answer to it themselves (Yanyu, 2019).

On the other hand, human intelligence consists of a diverse range of areas, including linguistic intelligence, musical intelligence, spatial intelligence, and so on. The diversity between these areas shows that the fear of being taken over by AI or robots is exaggerated (Kreutzer & Sirrenberg, 2019).

More empirical literature indicates that understanding AI and its exact impact on business and daily work is essential for accepting AI (Fountaine et al., 2019). By making employees understand AI and its application better, it will also be easier to uncover the reason for the resistance in the first place (Fountaine et al., 2019). Understanding the technology could potentially lead to increased employee confidence, which in turn can result in better implementation in daily work. AI can support employees in their daily tasks, and perhaps humans and AI can even progress together (Floridi et al., 2018).

The fear of AI seems to have arisen from projecting one's own behaviour onto what AI could mean: it highlights the worst human behaviour imaginable (Szollosy, 2016). Perhaps a better understanding of what the technology entails will contribute to understanding its possibilities and, therefore, also what is due not to AI but to human behaviour.

Understanding AI is a growing theme in the literature; there is a need to apply understandable machine learning in real life. This movement is called eXplainable Artificial Intelligence (XAI). Explainability is viewed from two angles: the transparency of the ML model and the explainability of the model (Barredo Arrieta et al., 2020). Due to transparency in a model, the privacy of the data used can be better protected. Because the model is easier to interpret, ethical considerations also become more transparent and can be explained. This will ultimately help remove the foggy cloud surrounding AI (Barredo Arrieta et al., 2020). However, does understanding AI practically lead to trust in an algorithm? Does this influence the performance expectancy, the subjective norms and the expected technicality? This leads to the following moderators:

Hypothesis 4a: The transparency of AI positively influences the performance expectancy towards the behavioural intention to accept AI in a work environment.

Hypothesis 4b: The explainability of AI positively influences the performance expectancy towards the behavioural intention to accept AI in a work environment.

Hypothesis 5a: The relationship between the technicality of AI towards the behavioural intention of using it in a work environment is moderated by the transparency of AI.

Hypothesis 5b: The relationship between the technicality of AI towards the behavioural intention of using it in a work environment is moderated by the explainability of AI.

Hypothesis 6a: The transparency of AI moderates the relation between the subjective norms and the intentional behaviour towards accepting AI.

Hypothesis 6b: The explainability of AI moderates the relation between the subjective norms and the intentional behaviour towards accepting AI.

2.8 Job security

Previous research has shown that there is uncertainty at every level of the company about the consequences of AI for job security (Zhu et al., 2020). The risk of losing a job leads to more fear among employees with a temporary contract, a logical consequence of the difference in rights in the event of dismissal (Chung, 2016). As a result, having a temporary or permanent job can affect the acceptance of AI. It may be that employees with a less stable position in the company, namely a temporary contract, will be less likely to accept AI for fear of losing their job. With this, the diversity of contracts could affect the working environment of a department or business unit.

Hypothesis 4c: The type of contract an employee has, has an impact on the relation between the performance expectancy and the behavioural intention to accept AI in a work environment.

Hypothesis 5c: The relationship between the technicality of AI towards the behavioural intention of using it in a work environment is moderated by the type of contract an employee has.

Hypothesis 6c: The type of contract an employee has moderates the relation between the subjective norms and the intentional behaviour towards accepting AI.

2.9 Models

There are many different models to measure acceptance of new technologies. In 1986, Davis developed what is now the most commonly used model: the Technology Acceptance Model (TAM), based on the Theory of Reasoned Action (Sohn & Kwon, 2020; Williams et al., 2015). In the years that followed, the TAM was further optimised. In 2003, Venkatesh et al. came up with a further extension that combines the TAM with seven other models (Williams et al., 2015). This new model is called the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003). It consists of four variables that indicate the Behavioural Intention (BI) to use the technology and, indirectly, the Use Behaviour (UB) (Williams et al., 2015).

A comparison can be made with the Thomas theorem, 'If men define situations as real, they are real in their consequences' (Thomas & Thomas, 1928), which first appeared in the book 'The Child in America' (Smith, 1995). This well-known sociological statement explains that it is the interpretation of a situation, not the actual facts, that causes the actions that follow. It can be said that this way of thinking underlies the first three core variables in the UTAUT model of Venkatesh et al. (2003): in terms of performance, effort and social influence, employees' expectations about the use of technology are examined. These expectations seem to be more important than the actual facts on the road to the use of AI and eventual acceptance (Smith, 1995; Venkatesh et al., 2003).

In addition, Ajzen's (1991) Theory of Planned Behaviour (TPB) is a widely used variant (Sohn & Kwon, 2020). This model is known for the combination it makes between types of behaviour and the social environment. However, recent research suggests that the newer Value-based Adoption Model (VAM) from Kim et al. (2007) seems to explain the adoption of AI best (Pal et al., 2020; Sohn & Kwon, 2020). In this model, the independent variables are divided into two groups, Benefit and Sacrifice, in which the aspects that contribute to or count against the adoption of AI are measured. Because the VAM is focused on the adoption of products in which AI is used, the focus is on the costs and benefits for both the investor and the user. For this reason, the model is less in line with this research: an employee does not have to pay the direct costs of the AI systems, and the positive feeling derived from using, for example, a new smartwatch differs slightly from using a system for work. The model for this study is therefore based on Kim's cost-benefit approach, but adapted to the perspective of an employee.

2.9.1 Research model

This study distinguishes the benefits and sacrifices that an employee is likely to experience, as used in the model by Kim et al. (2007). Given previous experiences with AI from the customer perspective, the employee will be able to see the added value of the performance of AI (Venkatesh et al., 2007). On the other hand, the opinions of influential people in the environment can negatively influence the intention to use AI (Ajzen, 1991). AI could also deter employees because they are afraid of not being able to understand the system correctly (Kim et al., 2007).

The model used for this study has three moderating factors: transparency, explainability and job security. These factors can play a role in the ultimate acceptance of AI and are therefore taken into account. Accepting AI will likely be more difficult when an employee is afraid of being replaced by the software; with a temporary contract, this risk seems even more relevant. The understandability of AI will also play a role in ultimately accepting the software. This variable is divided into two groups, namely transparency and explainability. These two factors seem to be why employees do not trust AI (Shin, 2021).

All of the above leads to the dependent variable, behavioural intentions. According to Venkatesh et al. (2003), behavioural intention can be described as "the person's subjective probability that he or she will perform the behaviour in question". Various studies have shown that behavioural intentions are a significant predictor of the actual use of technology (Ajzen, 1991; Chau & Hu, 2002; Mathieson, 1991; Venkatesh et al., 2003).

The control factors for this research are gender and age. Previous research has shown that these variables influence behavioural intentions (Venkatesh et al., 2003).

In addition, other aspects that could affect the research have been taken into consideration. Despite the COVID-19 pandemic, various industries are investing more and more in applications of AI, mainly companies that see the added value of the use of AI. But is there also a difference in the acceptance of AI between the different industries? In order to be able to control for this, the participants are asked in which industry they work, with a choice from the six industries that, according to McKinsey's survey, perform the highest (McKinsey, 2020).

To determine whether there is a difference in the acceptance of AI between different functions, function is applied as a control factor as well. The eight functions in which the highest adoption of AI was found, according to the report 'The state of AI in 2020' by McKinsey, are used (McKinsey, 2020).

This research entails the following model, Figure 1. The entire model, including the hypotheses, can be seen below.

Figure 1: the research model

2.9.2 Hypotheses

H1: The experience with AI as a consumer contributes to the behavioural intention to accept AI in a work environment.

H2: The complex technicality of AI plays a negative role in behavioural intention.

H3: Subjective norms negatively influence the intentional behaviour of accepting AI.

H4a: The transparency of AI positively influences the performance expectancy towards the behavioural intention to accept AI in a work environment.

H4b: The explainability of AI positively influences the performance expectancy towards the behavioural intention to accept AI in a work environment.

H4c: The type of contract an employee has, has an impact on the relation between the perceived experience and the behavioural intention to accept AI in a work environment.

H5a: The relationship between the technicality of AI towards the behavioural intention of using it in a work environment is moderated by the transparency of AI.

H5b: The relationship between the technicality of AI towards the behavioural intention of using it in a work environment is moderated by the explainability of AI.

H5c: The relationship between the technicality of AI towards the behavioural intention of using it in a work environment is moderated by the type of contract an employee has.

H6a: The transparency of AI moderates the relation between the subjective norms and the intentional behaviour towards accepting AI.

H6b: The explainability of AI moderates the relation between the subjective norms and the intentional behaviour towards accepting AI.

H6c: The type of contract an employee has moderates the relation between the subjective norms and the intentional behaviour towards accepting AI.


3. Methodology

To answer the research question, quantitative research will be carried out. The research design will be further explained in this chapter. How the data will be collected and how the target group was defined will also be described. Furthermore, the components of the survey are discussed and a timeline is laid out.

3.1 Research design

This study aims to determine which factors contribute to the acceptance of using AI in the workplace. In addition, the research will focus on whether the experiences gained with AI as a consumer play a role in the acceptance as an employee. The model used is partly derived from Venkatesh's Unified Theory of Acceptance and Use of Technology (2003), Ajzen's Theory of Planned Behaviour (1991) and Kim's Value-based Adoption Model (2007).

An online survey was used for this research. The survey was created in the tool Qualtrics, distributed via WhatsApp, e-mail, social media, LinkedIn and within a business community, and published online for about three weeks. Answering the questions in the survey took around five minutes. The questions were selected from previous research and categorised according to the five variables, plus a category with the demographic questions as control variables. There were a total of 26 questions, which were mostly answered on a 7-point Likert scale. The survey participants were not pre-selected; the only requirement for participants is that they have a job.

The summary of the survey design can be found in Table 1.

Table 1: Summary of the survey design

Method: Online survey
Survey participants: People with a job (of any kind)
Sample size: 385
Sampling technique: Convenience sampling
Survey tool: Qualtrics
Survey response time: +/- 3 weeks

3.2 Pre-test

A pre-test was held to test the working of the survey. It was also tested whether the survey was clear and whether the participants could complete it without the author's intervention. The pre-test was held among several acquaintances of the author and some fellow students; the total number of participants in this phase of the study was five. In this way, the survey was tested both among people who look at it from a completely blank perspective and among people with the same experience from the master's programme.

3.3 Sample

The main requirement for this study is that the participants have a job. In the Netherlands alone, 9 million people were employed in the fourth quarter of 2020 (Centraal Bureau voor de Statistiek, 2020); the number of working people worldwide is even larger. All these people fall within the target group of this study. There is no restriction on the position the respondent performs: both a manager and an executive employee can complete the survey. It does not matter whether they use AI in their work or not.

Participants are also not selected on knowledge about AI, precisely so that it can be determined whether this contributes to confidence. The ideal sample size for this study is 385 participants, taking into account a confidence level of 95% and a margin of error of 5% (Sample Size Calculator, 2021). However, due to the short time frame, N = 100 will be used as the starting point (Green, 1991). This number is derived from the rule of thumb by Samuel Green (1991): N > 50 + 8m, where m is the number of predictors (for example, with six predictors this gives N > 98).

The survey was distributed online using convenience sampling. The channels used for this were WhatsApp, e-mail, social media, LinkedIn, the community platform of the Randstad Group where the author works, and the author's social circle. The survey was shareable by all participants to create a snowball effect. The survey was published online for about three weeks; if the minimum of 100 participants had not been reached within three weeks, the survey would remain online longer.

The survey had the option to be conducted anonymously.

3.4 Measurement

The survey starts with an introductory text. It briefly explains what the survey is about, its goal, and what the participant can expect. The survey consists of eight parts: the model's six variables and two parts with demographic questions. Most of the questions are answered on a 7-point Likert scale, from 1 = strongly disagree up to 7 = strongly agree. This scale has also been used in previous research to answer these questions. Some of the questions are answered differently.

Here, the questions will be discussed further per variable:

Dependent variable: Behavioural intention (BI)

This part of the questionnaire comes from the validated model of Venkatesh et al. (2003). The word 'system' has been changed to 'AI'. Furthermore, it was decided to use 'the coming months' instead of a specific number. The questions are answered on a 7-point Likert scale. This part is introduced in the questionnaire with a short text explaining that a link must be made with previous experiences, and the questions are preceded by some examples of AI that everyone is familiar with.

Independent variables: Performance Expectancy (PE)

Previous experiences with AI as a consumer could affect a person's expectations as an employee. The questions for measuring Performance Expectancy come from the UTAUT model, which has been validated, and these items have been used in the questionnaire. The questions from this research by Venkatesh et al. (2003) have been reused in many other studies. The 7-point Likert scale is used for the answers.

Technicality (Tech)

In order to make a positive contribution to daily work, AI will have to become an added value. If its use is perceived as complex or if it is expected to be difficult, it will not contribute to acceptance (Kim et al., 2007). For this reason, the questions about technicality from the study by Kim et al. (2007) are used. Again, the 7-point Likert scale is used for the answers.

Subjective Norms (SN)

The environment plays a role in the acceptance of new things: think of newspaper articles that influence an opinion, but a colleague or manager can also leave a mark on an implementation. For this reason, this factor is included in order to measure its effect on ultimate acceptance. The questions come from the research by Ajzen (1991) and the 7-point Likert scale is also used here. For this variable, one question is asked in reverse: the first three questions are asked from a negative perspective, while the last question is asked from a positive starting point.

Moderators: Transparency (Trans)

In order to understand and fathom an algorithm, it should be transparent what exactly happens inside it. But does this also play a role in the expectation of how the algorithm will perform? To answer this question, the variable transparency is included as a moderator. The questions come from Shin's (2021) research and have been validated. Again, the answers are given on the 7-point Likert scale.

Explainability (Ex)

The algorithm itself should also be understandable. Does it inspire more confidence when a 'decision' can be traced back? Questions from Shin's (2021) research are also included in the questionnaire and answered on the 7-point Likert scale.

Job Security (Position)

This variable is a moderating factor in the model. Participants were asked whether they have a job; for participants without a job, the survey ends after this question. One further question asks about the type of contract the participant has, which acts as the moderating factor. Both questions can be answered with 'Yes' or 'No'; for the second question, the 'No' answer is broken down into a choice between two options.

Control variables: Industry (Indus)

In order to control for the sector the participant works in, it was asked in which industry the person works. There is a choice from the six industries that, according to McKinsey's survey, perform the highest (McKinsey, 2020). An 'Other' option was added, where the participant could write in an industry themselves. The answer options for this question were: “Healthcare systems and services/pharma and medical products”, “Automotive and assembly”, “Financial services”, “Consumer and packaged goods/retail”, “Business, legal, and professional services”, “High tech/telecom” and “Other”.

Function (Func)

The participants were also asked what type of position they hold. The eight functions in which the highest adoption of AI was found, according to the report 'The state of AI in 2020' by McKinsey, are included (McKinsey, 2020). The answer options were: “Product and/or service development”, “Service operations”, “Marketing and sales”, “Risk”, “Manufacturing”, “Human resources”, “Supply-chain management”, “Strategy and corporate finance” and “Other”. Here, too, it is possible to choose the “Other” option.

Age

At the end of the questionnaire, the participants' age was asked. A choice could be made from five options: “Younger than 20”, “20 - 35 years old”, “36 - 50 years old”, “51 - 65 years old” and “Older than 65”.

Gender

The last question of the survey is about the gender of the participant. There are three options for this question: "Female", "Male" and "I prefer not to say".

Table 2 gives an overview of the questions per variable.

3.4.1 Survey

As mentioned earlier, the survey can be completed anonymously; the answers given in the survey cannot be traced back to a person. However, there is a possibility to stay informed of the results, for which participants are asked for an e-mail address. This is stored completely independently of the answers obtained and cannot be traced back to the completed survey. The only person who will see any e-mail addresses is the author.

The survey was made in the online tool Qualtrics and distributed further online afterwards. After retrieving the required data, with a minimum of 100 participants, the data were analysed using IBM SPSS. First, it was determined whether the data were normally distributed; outliers were examined and the results were prepared for comparison. The level of reliability was checked against the Cronbach's alpha score; it must be above α = .70 to be considered reliable (Field, 2018). After this, the hypotheses were tested.

Table 2: construct and measurement of the variables

Variable | Source | Scale | Items | Remark
Performance Expectancy | Venkatesh et al., 2003 | 7-point Likert scale | 3 | α = .91
Behavioural Intention | Venkatesh et al., 2003 | 7-point Likert scale | 3 | α = .92
Technicality | Kim et al., 2007 | 7-point Likert scale | 4 | α = .86
Subjective Norms | Ajzen, 1991 | 7-point Likert scale | 4 | α = .81
Transparency | Shin, 2021 | 7-point Likert scale | 3 | α = .85
Explainability | Shin, 2021 | 7-point Likert scale | 3 | α = .74
Industry | - | 7 options | 1 | -
Function | - | 9 options | 1 | -
Age | - | 5 options | 1 | -
Gender | - | 3 options | 1 | -
Job security | - | Yes/No | 1 | -
Position | - | Yes/No | 1 | -
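The reliability analysis itself was carried out in SPSS; purely as an illustration, Cronbach's alpha for one question block could be computed along the lines of the sketch below, in which the DataFrame and its column names are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical block of three 7-point Likert items, e.g. the three PE questions.
block = pd.DataFrame({"PE1": [5, 6, 7, 4, 5],
                      "PE2": [5, 7, 6, 4, 6],
                      "PE3": [6, 6, 7, 3, 5]})
print(cronbach_alpha(block))  # considered reliable above .70 (Field, 2018)
```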


4. Results

4.1 Preliminary steps

The survey was published online between Saturday the 24th of April and Thursday the 14th of May and was filled in by 269 people. Of these, N = 156 were fully completed. The drop-outs ranged from answering only a few questions to abandoning an almost completed questionnaire; they have not been included in the further analysis. There were also two respondents without a job and one respondent who did not answer the age question, who were likewise excluded. This brings the total to N = 154.

The data was initially prepared by a manual check for irregularities and conspicuous values. This revealed that not all answers had the same numerical output: for example, some questions were on a scale from 1 to 7, while others had a scale from 9 to 15 or from 12 to 18. This has been adjusted to a scale of 1 to 7 for all output.

Then a look was taken at the answers to question 4 of the variable Subjective Norms. This question was asked in a reversed way compared to the other questions, as a control question. For this reason, the answers to this question have been reversed (as in 7 = 1, 6 = 2 and so on) to ensure that the questions are equal and can be compared.
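The recoding was done in SPSS; in pandas, the same two preparation steps would look roughly like the sketch below, with hypothetical file and column names.

```python
import pandas as pd

df = pd.read_csv("survey_export.csv")  # hypothetical export of the survey data

# Shift a block that came out on a 9-to-15 scale back to 1-to-7.
df["Tech1"] = df["Tech1"] - 8

# Reverse-code the fourth Subjective Norms question (7 = 1, 6 = 2, and so on).
df["SN4"] = 8 - df["SN4"]
```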

Then the variable Gender was looked at further. For this variable, there were three answer options. To include this variable in further analysis, a dummy variable has been made: the question has been adjusted to two answers, with 'Male' and 'Prefer not to say' combined, as these were the two smallest groups (male N = 51, prefer not to say N = 4). The same has been done for the variable Position, where 'No (temporary contract or on-call basis)' (N = 34) and 'No (self-employed)' (N = 10) have been combined to make it easier to compare with other variables. The full output for these questions can be seen in Figures 2 and 3.

Figure 2: gender of the participants (N = 154): 65% female, 33% male, 2% unknown

Figure 3: job security of the participants (N = 154): 72% permanent position, 22% temporary or on-call, 6% self-employed
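In pandas, this dummy coding could be expressed as in the sketch below; the column names and answer labels are hypothetical stand-ins for the SPSS recode.

```python
import pandas as pd

# Tiny stand-in for the prepared survey DataFrame, for illustration only.
df = pd.DataFrame({"gender": ["Female", "Male", "I prefer not to say"],
                   "position": ["Yes", "No, temporary or on-call", "No, self-employed"]})

df["gender_dummy"] = (df["gender"] == "Female").astype(int)    # 1 = female, 0 = the two combined groups
df["position_dummy"] = (df["position"] == "Yes").astype(int)   # 1 = permanent, 0 = combined 'No' groups
print(df)
```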

4.1.2 Demographic analysis

In order to get a clear picture of the participants, the demographics were examined further. The most common age range in the study is 20 to 35 years old: 60% of the participants fall into this category, as reflected in Figure 4. There were no participants in the categories 'Younger than 20' and 'Older than 65', so these are not included in the figure.

Figure 4: age of the participants (N = 154): 60% 20-35 years old, 24% 36-50 years old, 14.8% 51-65 years old

The industry and function of the participants are shown in Figures 5 and 6. The first of these shows that most participants work in the Business sector, namely 43.2%. This sector is followed by Other at 23.2% and Financial Services at 12.3%. For the complete table, see Appendix 2.

Figure 5: industry the participants work in (N = 154): Healthcare 9.7%, Automotive 0.6%, Financial 12.3%, Retail 4.5%, Business 43.2%, Telecom 6.5%, Other 23.2%

Figure 6: function of the participants (N = 154)

4.2 Reliability and normality analysis

Subsequently, the reliability of the questions was checked. A Cronbach's alpha test was carried out per question block. All question blocks had an outcome of at least α = .70 and were estimated reliable (Field, 2018). Therefore, no questions were removed from the questionnaire, as removing them would not make the total question block distinctly stronger. See also Table 2 in the method section for all the results.

The next step was to compute the mean of the question blocks per variable. This method of computing scale means has been applied to test the hypotheses and compare the effects (Field, 2018). For the variables Tech and SN, the mean was calculated by subtracting the raw mean from 8. In this way, both the positive and negative questions point in the same direction and can be compared more easily.

The distribution has also been looked at, based on the skewness and kurtosis. These showed that all variables are approximately within the threshold of -2 and 2 (Field, 2018). A normality check was also done based on the Kolmogorov-Smirnov test and the Shapiro-Wilk test. This analysis showed that none of the variables were normally distributed, at P < .05. However, given the Likert scale used, the answers are bounded between 1 and 7. The full results can be found in Table 3.

Table 3: Skewness, Kurtosis and normality check (N = 154)

Variable | Skewness | Kurtosis | Kolmogorov-Smirnov | Shapiro-Wilk
PE | -1.34 | 3.61 | .13** | .89**
Tech | -.13 | -.95 | .09* | .97*
SN | -.53 | .17 | .12** | .96**
Trans | -1.12 | 1.90 | .18** | .90**
Ex | -.22 | -.52 | .08* | .98*
BI | -.84 | 1.03 | .13** | .93**

**. Significant at the .01 level (2-tailed) *. Significant at the .05 level (2-tailed)
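These distribution checks were run in SPSS; as an illustration, the sketch below computes skewness, kurtosis and both normality tests with SciPy on randomly generated stand-in data. Note that SciPy's one-sample Kolmogorov-Smirnov test only approximates the Lilliefors-corrected variant SPSS reports.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(seed=2)
df = pd.DataFrame({v: rng.integers(1, 8, size=154).astype(float)
                   for v in ["PE", "Tech", "SN", "Trans", "Ex", "BI"]})  # stand-in scale means

for var in df.columns:
    col = df[var]
    ks = stats.kstest(col, "norm", args=(col.mean(), col.std(ddof=1)))
    sw = stats.shapiro(col)
    print(var,
          round(stats.skew(col), 2),       # threshold: roughly between -2 and 2
          round(stats.kurtosis(col), 2),   # excess kurtosis
          round(ks.pvalue, 3),             # P < .05 suggests non-normality
          round(sw.pvalue, 3))
```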

4.3 Correlation analysis

Based on Pearson's correlation test, it was determined whether the variables correlate (Field, 2018), and the strength of each correlation was examined. No causal conclusions can be drawn based on these results: a correlation cannot prove a causal relationship, it can only determine whether two variables are related (Meyers, 2013). The results of this test can be found in Table 4. The dependent variable Behavioural Intentions has a positive significant correlation with Performance Expectancy (r = .58, P < .01), Tech (r = .20, P < .05), SN (r = .51, P < .01) and Ex (r = .24, P < .01). This suggests that people with a positive score on Performance Expectancy also have positive intentions towards using AI. Likewise, if the people around the participant have positive expectations of the software and the people using it, the participant will probably be more likely to have a positive attitude towards using it themselves. For Technicality and Explainability there is a significant correlation, but a weak one. It is still expected that both technically easy software and understandable software will lead to a behavioural intention to use AI. These correlations were also expected from the literature (Barredo Arrieta et al., 2020; Kim et al., 2007).

In addition, Transparency has a significant negative correlation with Behavioural Intentions (r = -.17, P < .05). It is a weak relation, but it would still suggest that when the algorithm is transparent, the participants are less likely to use the software. This was not the expectation from the literature, which suggested that transparency would lead to a behavioural intention to use AI (Barredo Arrieta et al., 2020).

Furthermore, Age has a significant negative correlation with Transparency (r = -.18, P < .05); the same is true for Gender and Age (r = -.21, P < .01) and Gender and Position (r = -.16, P < .05). These, however, are not relevant for this research. Subjective Norms and Performance Expectancy (r = .53, P < .01), Subjective Norms and Technicality (r = .31, P < .01) and Age and Position (r = .31, P < .01) all have a significant positive correlation. For example, if participants experience their environment as positive towards AI, they also have a positive expectation about the performance. The same applies to Technicality: in a positive environment, it is also expected that AI can work well. All the results of the correlations can be found in Table 4.

Table 4: Means, Standard Deviations, Correlations and Reliabilities (N = 154)

Variable | Mean | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
1 BI | 5.24 | 1.22 | (.92) | | | | | | | |
2 PE | 5.60 | 1.08 | .58** | (.92) | | | | | | |
3 Tech | 4.29 | 1.31 | .20* | .13 | (.86) | | | | | |
4 SN | 5.07 | 1.09 | .51** | .53** | .31** | (.81) | | | | |
5 Trans | 5.70 | 1.04 | -.17* | -.06 | -.10 | -.05 | (.85) | | | |
6 Ex | 4.21 | 1.23 | .24** | .06 | .30** | .11 | .08 | (.74) | | |
7 Pos | .71 | .45 | .13 | .05 | .07 | .13 | -.00 | .03 | - | |
8 Age | 2.55 | .74 | .08 | .00 | .08 | .10 | -.18* | .12 | .31** | - |
9 Gender | .65 | .48 | -.04 | -.02 | .02 | -.01 | .11 | .03 | -.16* | -.21** | -

**. Correlation is significant at the .01 level (2-tailed) *. Correlation is significant at the .05 level (2-tailed)
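A correlation matrix like Table 4 can be reproduced in pandas along the lines of the sketch below; the data here are random stand-ins for the actual scale means and dummy variables, and scipy.stats.pearsonr supplies the p-values behind the significance stars.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=3)
variables = ["BI", "PE", "Tech", "SN", "Trans", "Ex", "Pos", "Age", "Gender"]
df = pd.DataFrame(rng.normal(size=(154, len(variables))), columns=variables)  # stand-in data

print(df.corr(method="pearson").round(2))   # r values, laid out as in Table 4

r, p = pearsonr(df["BI"], df["PE"])         # pairwise test; Table 4 reports r = .58, P < .01
print(round(r, 2), round(p, 3))
```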

4.4 Hypothesis testing

There are many different ways to test hypotheses whose outcomes are measured on a Likert scale. In this study, it was decided to perform a hierarchical regression. This is a method in which more variables are added at each step, creating different models, so it can be checked whether the addition of a variable has a significant effect and makes the model stronger.

First, the variables were centred, ensuring that the models can be compared; this also reduces the risk of multicollinearity. In the entire study, the highest VIF value does not exceed 6.8, which allows us to assume that no multicollinearity has occurred, since this value is not higher than 10 (McClelland et al., 2016).
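A sketch of this centring and multicollinearity check using statsmodels is given below; the data are randomly generated stand-ins, the variable names are hypothetical, and the VIF cut-off of 10 follows the text above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(seed=4)
cols = ["PE", "Tech", "SN", "Trans", "Ex", "Pos"]
df = pd.DataFrame(rng.normal(size=(153, len(cols))), columns=cols)  # stand-in predictors
df["BI"] = rng.normal(size=153)                                     # stand-in outcome

X = df[cols] - df[cols].mean()   # mean-centre before adding interaction terms
X = sm.add_constant(X)

for i in range(1, X.shape[1]):   # skip the constant
    print(X.columns[i], round(variance_inflation_factor(X.values, i), 2))  # flag VIF > 10

model = sm.OLS(df["BI"], X).fit()  # first step of the hierarchical regression
print(model.rsquared)
```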

When running the first analysis, an outlier emerged (#85). This participant had a residual of -5.14: a BI of 1, while the expected value was 6.14. This participant was therefore characterised as an outlier and is not included in the following analyses. As a result, the total number of participants was adjusted to N = 153.

4.4.1 Direct effect

Table 5 shows that only two direct effects are significant in this model. The model itself explains R² = 45.36% of the variance at P < .01.

Performance Expectancy has a positive relationship with the dependent variable Behavioural Intentions (β = .50, P < .01). This means that, based on a positive Performance Expectancy, the participant has more intention to use AI. With this, H1 is supported. This positive relationship is in line with the expectations from the literature (Araujo et al., 2020; Duan et al., 2019; Venkatesh et al., 2003; Jones et al., 2010).

Subjective Norms also has a positive relationship with Behavioural Intentions (β = .24, P < .01). Therefore, an environment that is positive about AI is more likely to lead to the intention to use AI, and the other way around. H3 is, as a result, supported. This relationship is also in line with the expectations in the literature (Ajzen, 1991; Fountaine et al., 2019).

However, Technicality has no significant relationship with Behavioural Intentions (β = .12, P > .05). For this reason, H2 is rejected: the difficulty of the technicality does not lead to a lower intention to use AI in this research.

All of the control variables have no significant effect on the model.
