
When will consumers accept advice generated by artificial intelligence (over human advice)? The effect of technology readiness and the differences among the financial, medical, and retail sector


Academic year: 2021



When will consumers accept advice generated by artificial intelligence (over human advice)?

The effect of technology readiness and the differences among the financial, medical, and retail sector.

Advice is probably the only free thing that people won’t take. — Lothar Kaul

University of Amsterdam

Faculty of Economics and Business

MSc. in Business Administration – Marketing Track

Student: Estelle Muys
Student ID: 11416149
Supervisor UvA: Andrea Weihrauch
Supervisor KPMG: Wilco Leenslag

Version: Final


Statement of originality

This document is written by student Estelle Muys who declares to take full responsibility for the contents of this document. I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it. The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Acknowledgements

After a period of five months, I can now say that this thesis has come to an end. It has been an intensive period, where I combined writing my thesis with an internship at KPMG’s Innovation Advisory department. As I learned a lot in the past five months, I would like to thank the people who supported me through this period.

First, I would like to thank my colleagues at KPMG for giving me the opportunity to write my thesis at the Innovation department. Special thanks go to Wilco Leenslag and Thomas Beelaerts, who supervised and coached me throughout the whole process by being patient, taking the time to discuss topics and changing conceptual models, and by proofreading. Next, I would like to express my gratitude to Vincent Pluijmers and Bart Gramberg for proofreading with (or without) a red marker ;).

Moreover, I would like to thank my thesis supervisor, Andrea Weihrauch, for having faith that writing a master thesis at a company would be a good combination and for having the patience to answer all my questions.

Lastly, I would like to thank my boyfriend, Mart Menken, and my mother, Sandy Jacobs, for always being there for me.


Table of Contents

Abstract

1. Introduction

2. Literature review
2.1 Innovation and disruptive innovation
2.2 Artificial intelligence
2.2.1 Artificial intelligence definitions
2.2.2 Artificial intelligence applications
2.2.3 Artificial intelligence as a disruptive innovation
2.3 Advice taking
2.4 Interaction between humans and artificial intelligence systems
2.5 Technology Readiness
2.6 Sector differences in advice taking
2.6.1 Financial services sector
2.6.2 Medical sector
2.6.3 Retail sector

3. Data & Methods
3.1 Experimental design
3.2 Data Collection
3.3 Procedure
3.4 Operationalisation

4. Results
4.1 Data preparation
4.2 Statistical assumptions
4.3 Descriptive and frequency statistics
4.4 Correlation matrix
4.5 Reliability analysis for scales
4.6 Hypotheses testing
4.6.1 One-way ANOVA – Hypothesis 1
4.6.2 Pearson's chi-square (χ²) – Hypothesis 1
4.6.3 One-way ANOVA – Hypothesis 2
4.6.4 Hierarchical multiple regression – Hypothesis 3
4.6.5 One-way ANOVA – Hypotheses 4a, 4b, and 4c

5. Discussion
5.1 Discussion of the results
5.1.2 Technology Readiness
5.1.3 Sectors
5.2 Academic contributions
5.3 Managerial implications
5.4 Limitations and suggestions for future research
5.5 Conclusion

References

Appendices
Appendix 1: Work plan
Appendix 2: G*power analysis


Abstract

Introduction: The development and usage of artificial intelligence applications have increased. However, no studies have explored the factors that influence the acceptance of artificial intelligence generated advice, or in which sectors this acceptance will occur. This study aims to predict the acceptance of artificial intelligence generated advice across three advice conditions, examines the influence of technology readiness, and compares three sectors.

Methods: This online experiment was conducted in 2017 with 161 consumers. Data were collected through an online survey in Qualtrics, with questions about advice taking (across sectors) and switching between types of advice. Statements from TRI 2.0 (Parasuraman & Colby, 2015) were used to measure technology readiness. Analyses were done with IBM's SPSS software.
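The analyses were run in SPSS; purely as an illustration, a one-way ANOVA comparing acceptance scores across the three advice conditions (one of the tests reported later in this thesis) could be sketched in Python with SciPy. The scores below are simulated stand-ins, not the study's data:

```python
import numpy as np
from scipy import stats

# Simulated 7-point acceptance scores for the three advice conditions;
# group sizes sum to N = 161, matching the sample size of this study.
rng = np.random.default_rng(seed=42)
human_advice = rng.integers(1, 8, size=54)  # human advice
ai_via_human = rng.integers(2, 8, size=54)  # AI advice communicated by a human
ai_only = rng.integers(1, 7, size=53)       # solely AI-generated advice

# One-way ANOVA: does mean acceptance differ across the three conditions?
f_stat, p_value = stats.f_oneway(human_advice, ai_via_human, ai_only)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

With real data, a p-value below .05 would indicate that mean acceptance differs between at least two of the conditions.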

Results: The results show that participants in the human advice condition were willing to switch to artificial intelligence generated advice communicated by humans, and that participants in the condition with artificial intelligence generated advice communicated by humans were willing to switch to solely artificial intelligence generated advice. A trend, although not significant, was found: the higher the level of technology readiness, the higher the acceptance of artificial intelligence generated advice. Furthermore, artificial intelligence generated advice was accepted in the retail sector.

Conclusion: The willingness to switch to a type of advice involving artificial intelligence indicates a likelihood of acceptance of artificial intelligence generated advice, but it has to be combined with human advice first. The transition from human advice to solely artificial intelligence generated advice appears too large, unless a high level of technology readiness is present. As this last effect was not significant, more research is needed.

Keywords: artificial intelligence, advice, technology readiness, financial services sector, medical sector, retail sector


1. Introduction

In June 2017, the United Nations organised the "AI for Good" Global Summit, as artificial intelligence innovation will be central to achieving the United Nations Sustainable Development Goals (Stolze, 2017). The goal of the summit was to find out how artificial intelligence could help solve global problems such as poverty, hunger, health, education, and the environment, and how important certain solutions could be (International Telecommunication Union, 2017). Het Financieele Dagblad mentioned that the summit finished with two concerns of the United Nations: "AI for Good" could turn into "AI for bad" when artificial intelligence applications are used by illegal sources, and there is still little awareness of how people will respond to the increase of artificial intelligence applications (Stolze, 2017).

Additionally, the user landscape in several sectors is changing. Consumers are starting to replace traditional healthcare services with digital ones: millennials are already twice as likely as Baby Boomers to contact their physicians via e-mail or text messages (Atluri, 2016). Furthermore, two technology giants, Microsoft and Amazon, are working together in the field of their artificial intelligence based voice assistants, Cortana (Microsoft) and Alexa (Amazon). The aim of this collaboration is the development of a system that automatically routes the right question to the right assistant, in which Cortana focuses on professional tasks and Alexa on consumers' individual lives (Weinberger, 2017). Although the 'war on virtual assistants' has started, consumers are hesitant to use them: it is possible that your toaster's voice assistant is incompatible with the voice assistant of your home entertainment system, and when you say 'hello' to your home, a chorus of different voices may answer back. Even though significant steps have been made, consumers are also hesitant to use technology in the automotive sector. Nowadays, automated cars are almost entirely


ready (95%) for consumer usage; the biggest challenge lies in public acceptance (Nederlandse Omroepstichting, 2017). In summary, all of the above-mentioned examples show 1) how disruptive the impact of artificial intelligence will be in different fields, and 2) how unsure people and organisations are about the usage of artificial intelligence.

Artificial intelligence started as a field aiming to replicate human-level intelligence in a machine (Brooks, 1991). This goal has not been achieved. Partly, this has to do with the differences between human cognition and machine cognition: computers can easily execute calculation tasks that people consider extremely difficult, but fail entirely at common-sense tasks that are intuitive for humans. This is known as Moravec's paradox (Hassabis, 2017). Artificial intelligence applications focus on specific elements such as natural language processing, vision, deep learning, and planning, scheduling, and optimisation. All the work in these subspecialties is benchmarked against things humans do within these areas, which makes artificial intelligence a competitor to humans (Brooks, 1991). A more recent study states that artificial intelligence systems may become humanity's final invention, which may end human supremacy (Barrat, 2013). According to technology optimists, accelerated artificial intelligence progress can lead to a breakthrough in the next 20 years, based, amongst others, on deep learning that imitates the way young children learn (Parloff, 2016). This new era is called the "Fourth Industrial Revolution" (Schwab, 2016) or Industry 4.0 (Andelfinger & Hänisch, 2017).

A critical question for the coming years will be: “what will the role of humans be at a time when computers and robots could perform as well or better and much cheaper, practically all tasks that humans do at present?” (Makridakis, 2017). Optimists predict a future where Genetics, Nanotechnology, and Robotics (GNR)


revolutionise everything, while the pessimists believe that when machines outperform humans, they will start making decisions and will devalue humans to a second-rate status (Makridakis, 2017). The artificial intelligence revolution aims to substitute, supplement, and amplify practically all tasks currently executed by humans, becoming, for the first time, a serious competitor to humans (Makridakis, 2017).

Nowadays, people recognise the significant benefits that the development of artificial intelligence can bring for mankind (e.g., increased lifespan, better health and well-being, and increased food cultivation). On the other hand, people fear threats such as unemployment and poverty at the individual level (Atkinson, 2017). Regardless of our attitudes towards robots and artificial intelligence, they have arrived and are here to stay. Companies, employees, managers, and consumers have no choice but to adapt to this new phase of the economy (Ivanov & Webster, 2017). Therefore, much is to be gained by deepening our knowledge about when and why consumers will prefer artificial intelligence over humans, in all sorts of areas. Many decisions are already ripe for the use of algorithmic advice, considering its low cost and widespread availability (Logg, 2017). Advice generated by artificial intelligence can be seen as an extension of algorithmic advice and is therefore an interesting area to examine.

In order to fully understand if, when, and why consumers prefer advice generated by artificial intelligence, it is helpful to compare different types of advice among different sectors to get a thorough overview. Additionally, this study contributes to existing literature in several ways. First, previous research that has examined the impact of disruptive innovations (like artificial intelligence) has mostly used frameworks such as the technology adoption cycle (Saaksjarvi, 2003), Rogers's adoption of innovation framework (1962), and the technology acceptance model


(TAM) (Porter & Donthu, 2006). While these provide interesting insights, these models are quite outdated as the pace of technological development has accelerated. On top of that, advice generated by artificial intelligence will potentially be harder for consumers to accept than other technological innovations, as it directly influences the decision-making process and is more similar to humans than any other technology (Furman, 2016). The Technology Readiness Index (TRI) – "people's propensity to embrace and use new technologies for accomplishing goals in home life and work" (Parasuraman, 2000) – has recently been updated and streamlined by Parasuraman and Colby (2015) into TRI 2.0. Technology readiness measured by TRI 2.0 can be used as a valuable psychographic variable in applied, decision-oriented research in contexts where technology-based innovation plays an important role, and can allocate consumers into groups with varying TR levels for understanding the role of technology beliefs in the marketplace. Using TRI 2.0 as a measure in this study will therefore help explain consumers' motivation for choosing advice generated by artificial intelligence over advice provided by a human.
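The TRI scoring this argument relies on can be illustrated with a minimal sketch. It assumes the scale's standard four dimensions (optimism, innovativeness, discomfort, and insecurity) with the two inhibitor dimensions reverse-scored; the item means below are hypothetical, not data from this study:

```python
import statistics

# Hypothetical mean item scores per TRI 2.0 dimension (5-point Likert scale)
optimism = 4.2        # motivator
innovativeness = 3.8  # motivator
discomfort = 2.5      # inhibitor: reverse-scored below
insecurity = 3.0      # inhibitor: reverse-scored below

REVERSED = 6  # on a 1-5 scale, the reversed score is 6 minus the raw score

# Overall technology readiness: mean of motivators and reverse-scored inhibitors
tr_score = statistics.mean([
    optimism,
    innovativeness,
    REVERSED - discomfort,
    REVERSED - insecurity,
])
print(f"Overall technology readiness score: {tr_score:.2f}")
```

Respondents could then be allocated to low, medium, or high TR groups by cutting this score, which is how TR levels can enter an analysis as a psychographic variable.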

Second, this thesis contributes to the existing human-technology interaction literature, as past research on human-technology interaction has mostly focused on aspects of usefulness and usability (Thüring & Mahlke, 2007). It is relevant to dive into the interaction of humans with artificial intelligence, as this technology is increasingly integrated into real-life settings (Kaptelinin et al., 2017). Advances in artificial intelligence reveal a new era of breakthrough innovation and opportunity, at a faster pace than ever (McKinsey Global Institute, 2017). This technology revolution has caused tension for service providers, consumers, and employees between the positive aspects of increased value and the negative aspects of having to learn and develop trust in new methods of doing business (Parasuraman & Colby, 2015).


Consumers also face trade-offs associated with trying to get maximum value from technology-based service options without encountering frustration or failure. This is especially relevant, as artificial intelligence has an advantage over humans in many areas (Furman, 2016). Therefore, the acceptance of artificial intelligence applications (and advice) may be more complex than that of other technologies. Understanding why some individuals or businesses adopt technologies while others do not, and what constraints they face, can help in understanding the evolution of productivity, growth, and inequality (Barham et al., 2015). Furthermore, answering questions about how to get consumers to actually take advice from a machine could improve choices and reduce search costs, which is relevant for examining consumer behaviour (Yeomans et al., 2017).

Third, this study contributes to the current advice-taking literature, as it is becoming more common to take advice from technology-based applications instead of from humans. A better understanding of the interaction between consumers and artificial intelligence applications can help lower the barrier to interacting with innovations, for both organisations and society (Kaptelinin, 2017). Therefore, defining the differences between consumers in accepting advice from humans or advice generated by artificial intelligence will give insights to both academics and managers.

This thesis also aims to be relevant for practitioners. McKinsey Global Institute (2017) examined how companies are adopting artificial intelligence and found that companies in the high tech, telecom, automotive, and financial services sectors are adopting artificial intelligence faster than those in other industries. This is shown in Figure 1 below. The financial services sector has one of the highest rankings, the retail sector sits in the middle, and the healthcare sector has one of the lowest scores in adopting artificial intelligence. It would be valuable to see if


differences in adoption among these three industries also exist in an academic research setting. Therefore, these three sectors are examined in this thesis.

Figure 1: AI adoption is occurring faster in more digitalised sectors across the value chain. Adapted from McKinsey Global Institute (2017).

The adoption of new technology will have major implications for service providers, consumers, and employees. Going forward, as technology revolutionises products and services across sectors (such as advice generated by artificial intelligence), managers have to deal with more complex challenges concerning innovative consumer experiences, while assuring their clients and bosses that


consumers are willing to use those experiences (Parasuraman & Colby, 2015). Due to the potential for substantial growth in artificial intelligence applications, and the expected increase in consumer usage of these applications, the findings of this research can help managers identify the potential early success of advice generated by artificial intelligence. This thesis will give managers insight into if and when to deploy artificial intelligence generated advice over human advice to solve consumers' challenges. Furthermore, the results of this study can help marketers satisfy their customers with personalised and customised services (Herbas-Torrico & Frank, 2017). Knowing what type of advice the targeted consumer prefers could help companies optimise their services and contributes to customisation.

In sum, this research aims to shed light on the differences between three types of advice, namely human advice, advice generated by artificial intelligence, and advice generated by artificial intelligence but communicated by a human; on the acceptance of advice generated by artificial intelligence; and on whether people will switch between these types of advice. On top of that, the influence of technology readiness on the acceptance of artificial intelligence generated advice is examined. A closer look is taken at the differences in taking advice from an artificial intelligence application in the financial, medical, and retail sectors. Building on the crucial need for consumers to accept artificial intelligence applications, the following research question is addressed:

What is the effect of technology readiness and the difference among the financial, medical and retail sector in determining whether consumers accept artificial intelligence generated advice (over human generated advice and artificial intelligence advice communicated by humans)?


The remainder of this thesis is organised as follows: it starts with a literature review, in which an overview of definitions, concepts, and models is provided, as well as a connection with previous literature in order to present the hypotheses. Subsequently, the research design is presented, in which the experimental design, operationalisation of the variables, and methods are explained. Next, the details of data collection and analysis are elaborated on, and the results are presented and discussed. This thesis concludes with implications and suggestions for further research. The work plan that elaborates on the planning of this thesis can be found in Appendix 1.


2. Literature review

The aim of this chapter is to develop a clear understanding of the key concepts that are used in this thesis. Furthermore, the connection with previous literature is made to explain the development of the hypotheses.

2.1 Innovation and disruptive innovation

Rogers (1995) defined innovation as "an idea, practice, or object that is perceived as new by an individual or other unit of adoption". Innovations can be classified as continuous, dynamically continuous, or discontinuous based on their impact on behaviour and social structure (Robertson, 1971). Continuous innovations are products or services with slight modifications of already existing products (e.g., the introduction of a new chocolate flavour such as the popular caramel-sea-salt version), whereas dynamically continuous innovations can be newly created products or services or modifications to existing ones (e.g., laptops instead of wired computers). Discontinuous innovations involve the creation of previously unknown products or services that usually require a significant amount of new learning (e.g., Apple's iPhone) (Robertson, 1971). These three types of innovation are known as the innovation classification scheme. Technological innovations, such as artificial intelligence, fall into the category of discontinuous innovations, as they require a considerable amount of learning before use and can therefore be seen as knowledge-intensive innovations (Saaksjarvi, 2003). The innovation classification scheme is essential for considering adoption behaviour, as the innovation category directly affects the kind of knowledge that is needed to comprehend the different types of innovations (Saaksjarvi, 2003). The original concept of disruptive technology was grounded in studies of physical products, including the hard disk industry, and was first introduced by Bower and Christensen in 1995. Later on, the concept evolved into the theory of disruptive


innovation to align with the original interpretation of the concept (competition, market, and business model) and the contemporary business discourse (Christensen & Raynor, 2003). A disruptive innovation is "an innovation that dramatically disrupts the current market" (Christensen, 1997). Møller, Gertsen, Johansen, and Rosenstand (2017) add to this:

A disruptive innovation is a new product or service – typically launched by a smaller company – with a lower and/or different performance targeted at a low-end segment of the market and then incrementally improved into the point where it dominates (disrupts) companies in the mainstream market (and makes the incumbents of that market obsolete).

These authors then developed a model that maps the features of traditional disruption and digital disruption. Later in this thesis, this model is used to consider artificial intelligence as a disruptive innovation. Before looking into whether artificial intelligence can be labelled a disruptive innovation, it is vital to look at the definitions and applications of artificial intelligence.

2.2 Artificial intelligence

2.2.1 Artificial intelligence definitions

Defining artificial intelligence (AI) is difficult, as a sufficient understanding of the nature of the artificial only helps as long as there is also a proper understanding of the idea of intelligence (Fetzer, 2012). Things that are artificially intelligent differ from things that are naturally intelligent (such as humans), as artefacts have special properties that are ordinarily possessed by non-artefacts (such as humans). Artefacts are therefore things that have a certain property


(in this case intelligence) caused by a particular process (in this case because the artefacts are created, designed, and/or manufactured that way). These artefacts can be seen as machines, but the question remains: when are they intelligent? The artefacts can be seen as intelligent when they are able to obtain a certain special property that human beings are known for. The difficulty here is that scholars have not settled on the specific elements of human intelligence. For example, attributing emotions such as anger and jealousy to the artefacts could be seen as intelligent, unless scholars already know these emotions are not aspects of intelligence. Until then, the ability to rule them out is lacking (Fetzer, 2012). Artificial intelligence is built on algorithms: "scripts for sequences of mathematical calculations or procedural steps" (Logg, 2017).

The definition of artificial intelligence has evolved over the years, as it is a rapidly changing field of research. Nilsson (1980) developed a definition in the 1980s, known as a time of prosperity in artificial intelligence research:

Artificial intelligence is a subpart of computer science, concerned with how to give computers the sophistication to act intelligently, and to do so in increasingly wider realms. It (artificial intelligence) participates thoroughly in computer science’s passion for abstraction, programming and logical formalisms, and detail – for algorithms over behavioural data, synthesis over analysis, and engineering (how to do) over science (what to know).

Luger (2005) adjusted the definition of artificial intelligence by taking into account that artificial intelligence, like every science, is made of human attempts, and is best understood in that context: “Artificial intelligence may be defined as the branch of computer science that is concerned with the automation of intelligent behaviour. It


(artificial intelligence) is the collection of problems and methodologies studied by artificial intelligence researchers". On top of that, artificial intelligence is still a young discipline and its structure, concerns, and methods are not yet as clearly defined as those of more mature sciences. Therefore, the definition keeps being adjusted as artificial intelligence research keeps evolving. Tirgul and Naik (2016) developed one of the latest definitions of artificial intelligence:

Artificial intelligence is a branch of Science, which deals with helping machines find solutions to complex problems in a more human-like fashion. This generally involves borrowing characteristics from human intelligence, and applying them as algorithms in a computer friendly way.

This thesis follows the definition of Tirgul and Naik (2016), as they incorporated the human intelligence aspect into the definition, an aspect that artificial intelligence applications nowadays contain more and more often. Even though the description by Tirgul and Naik (2016) gives a well-outlined overview of artificial intelligence, people tend to use the term in different gradations. On top of that, artificial intelligence is an overarching concept covering multiple practices, as shown in Figure 2 below.


Figure 2: Artificial Intelligence taxonomy. Reprinted from https://blogs.thomsonreuters.com/answerson/artificial-intelligence-legal-practice/

Technology-triggered transformation in services is likely to accelerate in the future, because current technologies are increasing rapidly in speed, capacity, connectivity, functionality, and ease of use, while potentially groundbreaking innovations are still nascent (Parasuraman & Colby, 2015). For example, the latest developments in artificial intelligence concern 'strong AI', also known as Artificial General Intelligence (AGI), defined as "artificial intelligences that can successfully perform any intellectual tasks that humans could" (Atkinson, 2017). Müller and Bostrom (2016) argue that artificial intelligence systems will probably reach overall human ability by 2040-50, and very likely by 2075. After reaching human ability, artificial intelligence will move on to superintelligence, which can be defined as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" (Bostrom, 2014).

2.2.2 Artificial intelligence applications

Machines have been able to out-calculate humans for decades. The new wave of machines (such as artificial intelligence systems) is exceeding humans in several tasks, ranging from image recognition to video-gaming. Soon, machines might outperform humans in certain jobs, and they might even challenge humans in areas such as creativity (Paul-Choudhury, 2016). Artificial intelligence applications are used for real-world problems, which are characterised by incompleteness and uncertainty (Ramos et al., 2008). The current field of artificial intelligence applications can be divided into two categories: the first tries to duplicate human intelligence, and the second seeks to augment human intelligence by supplementing human


capabilities that exploit the power of computers for augmenting human decision making (Makridakis, 2017).

Even though artificial intelligence is seen as a fairly young field of research, it is applicable in many areas, such as language understanding, problem-solving, modelling, learning and adaptive systems, robotics, games, and perception analysis (Firschein et al., 1973). A few examples of current applications of artificial intelligence are companion robots for the care of the elderly (Tirgul & Naik, 2016); e-skin that impersonates characteristics of human skin based on an artificial intelligence sensor (Zang et al., 2015); and predictive analysis in the financial sector to invest in stocks (Tirgul & Naik, 2016). Furthermore, the industry sector benefits from artificial intelligence, as equipment in factories gains intelligence in addition to strength, and machines become 'colleagues and partners' to humans. People remain in charge of the processes, but the machines get more responsibility for tasks and outcomes (Parunak, 1996). Another artificial intelligence application that gives advice is the chatbot: a technology that makes it possible for man and machine to interact with each other using natural language (Lokman et al., 2009).

One of the most well-known artificial intelligence applications is IBM Watson. This application can tease apart human language and identify inferences between text passages with human-like accuracy, at a speed and scale far beyond what any human can achieve alone. This means that IBM Watson can understand a question and deliver the most accurate and correct answer, due to artificial intelligence (High, 2012).

2.2.3 Artificial intelligence as a disruptive innovation

As mentioned before, this thesis examines whether the features of disruptive innovation can be applied to artificial intelligence, which would make artificial intelligence a


disruptive innovation. To do so, the digital disruption descriptions table from Møller, Gertsen, Johansen, and Rosenstand (2017) is used, as artificial intelligence applications are built on the foundation of digitisation (McKinsey Global Institute, 2017). The table below maps each feature of digital disruption and assesses whether artificial intelligence fits the description.

Features of a disruptive innovation, each with its digital disruption description (Møller, Gertsen, Johansen, & Rosenstad, 2017) and an assessment of whether artificial intelligence fits the description:

Role of disruptor in the value chain/network
- Description: Often serves as an intermediary between (but digitally immersed with) suppliers and users/customers, thus rearranging the existing value chain/network.
- Artificial intelligence fit: As explained in the previous chapter, artificial intelligence is already used to care for the elderly (Tirgul & Naik, 2016). This is a typical example of intervening between user (elderly) and supplier (nursing home).

Organisational structure of disruptor/disrupting network
- Description: Often a virtual organisation structure and close collaborative networks of interdependencies.
- Artificial intelligence fit: Among the top artificial intelligence companies are DeepMind, Google, Facebook and OpenAI (Forbes, 2017). These firms are known for their collaborative networks and virtual structures.

Impact on identity of organisations and people involved in the disruption
- Description: Disruptors may often enter with an identity different from established players, e.g., a less authoritative identity to fit an intermediary's role. The user identity may change as an effect of being more engaged, e.g., as co-creator of a service.
- Artificial intelligence fit: IBM's Watson uses artificial intelligence to combine natural language processing, hypothesis generation and evaluation, and dynamic learning. By combining these, Watson is the first machine able to give direct, confidence-based responses (High, 2012): "It can tease apart the human language to identify inferences between text passages with human-like accuracy, and at speeds and scale that are faster and far bigger than any person can do on their own." Therefore, IBM Watson enters with a different identity than machines before it.

Business model
- Description: An area of focus, creativity and combinations, often to quickly create and exploit scale; e.g., taking a slice of the value, expanding via open source or freemium. Often involving/engaging users/customers in an exploiting, mutually beneficial or democratic way.
- Artificial intelligence fit: Spotify uses artificial intelligence to make recommendations and give users suggestions. Spotify has a freemium model. This means that artificial intelligence can be applied to open source/freemium business models.

Use of big data
- Description: The digital domain often holds a convenient possibility of big data as a disruptive strategy.
- Artificial intelligence fit: Artificial intelligence is used in multiple ways to smooth the path for capturing and organising big data, and it has been used to analyse big data for key insights (O'Leary, 2013).

The role of platforms
- Description: Disruptive information technologies, especially computing platforms, are often followed by pervasive and radical innovations in software development organisations, both in relation to digital services and processes. Platforms often enable others to create and thus engage many to scale fast.
- Artificial intelligence fit: A few examples of radical innovations in software development by artificial intelligence: Microsoft announced real-time translation robots and inventive image-recognition technology, and Amazon uses it for autonomous robots in its delivery system (Lu et al., 2017).

Speed of diffusion
- Description: Fast to exponential building or penetration of the market (scaling), exploiting digital channels. Speed of down-sizing can also be fast.
- Artificial intelligence fit: Speed of technology is high, speed of consumer adoption is low (Logg, 2017).

Initiators of disruption
- Description: Start-ups or established players using digital platforms (e.g., Apple App Store, Amazon).
- Artificial intelligence fit: Both established players (IBM) and start-ups (OpenAI) use artificial intelligence.

Nature of offer
- Description: Often a service or a large part of a service (intangible). Functions are created by software and electronic devices, engaging users/customers.
- Artificial intelligence fit: Robotic Process Automation (RPA) is an application of artificial intelligence in which software and algorithms are used to automate human efforts to maintain efficient business processes (Lu et al., 2017).

Role of invention
- Description: Often recombination of existing solutions.
- Artificial intelligence fit: Looking again at IBM Watson, this machine combines three existing capabilities (natural language processing, hypothesis generation and evaluation, and dynamic learning), which makes IBM Watson unique (High, 2012).

Distribution of value created
- Description: Value is often more distributed in the value network involved. Value that goes to the disruptor is often a minor charge of a scaled attention of users/customers.
- Artificial intelligence fit: Change in the value network can be high, speed of consumer adoption is low (Logg, 2017).

Physical assets
- Description: Few, often 'rented' assets.
- Artificial intelligence fit: Depends on whether artificial intelligence is used by large or small companies.

Humans in work
- Description: Often more flexible 'free-lance' engagement of staff.
- Artificial intelligence fit: Depends on whether artificial intelligence is used by large or small companies.

Control of the business
- Description: Tends to be real-time, interactive and automated.
- Artificial intelligence fit: Looking again at IBM Watson, it is able to unlock hidden value in data to find answers, monitor trends and surface patterns (IBM website, 2017).

Sustainability
- Description: Connectedness allows for potentially better use of resources, as visible, e.g., in sharing-economy concepts.
- Artificial intelligence fit: IBM Watson enables interpreting languages and translating them in order to be better connected (IBM website, 2017).

Democratisation
- Description: Connectedness has the potential to widely engage.
- Artificial intelligence fit: IBM Watson can quickly build and deploy chatbots and virtual agents across a variety of channels, including messaging platforms and robots (IBM website, 2017).

Table 1: Features of disruption and their descriptions by Møller, Gertsen, Johansen and Rosenstad (2017), filled in for artificial intelligence.

In summary, artificial intelligence in general and current artificial intelligence applications tick the boxes of the features of disruption framework of Møller, Gertsen, Johansen, and Rosenstad (2017). This is relevant, as it highlights vital aspects and

perspectives that need to be taken into consideration when working with disruptive innovations. To conclude, it is safe to say that artificial intelligence can be identified as a disruptive innovation.

2.3 Advice taking

Advice taking is inherent to decision making. A fundamental question in studying behavioural decision making is: "What does the decision maker do with the available information and advice?". Previous research answers this question by saying that decision makers engage in interactive social and cognitive processes to build a consistent basis of information (Yaniv & Kleinberger, 2000). This is done by consulting opinions from worthy advisors and assessing their benefits. The role of social processes (with humans) in decision making is therefore highlighted (Sniezek & Buckley, 1995; Yates et al., 1996). On top of that, people are likely to have three aims in mind when considering advice from others: they want to improve the quality of judgments, avoid rejecting help that is offered, and share responsibility for high-risk judgments (Harvey & Fischer, 1997). Advice is not always accepted; it can also be rejected (Heritage & Sefi, 1992). An explanation for rejecting advice is natural asymmetry, which means that giving advice implies a belief in the recipient's lack of ability or knowledge about the issue at hand (Pudlinski, 2002). For example, first-time mothers often reject advice to show their capability to care for their child (Heritage & Sefi, 1992).

Advice can be explained as "a recommendation from the advisor, favouring a particular option" (Bonaccio & Dalal, 2006). After the advice is given, the decision to be made is posited to be influenced by the advice. There are multiple ways to examine advice taking: for example, the advice can be deliberately given (e.g., a recommendation for a particular option is presented) or the advice can be

requested (e.g., the participant him-/herself demands more information before making a decision) (Gino & Moore, 2007). For this study, advice was deliberately given to the participants, as the goal is to see whether the advice is taken or ignored, not whether it is requested. This was chosen because receiving (unasked) advice often exposes the decision maker to a potential conflict between their own initial opinion and the advice, and hence to complexity in combining the opinions (Yaniv & Kleinberger, 2000). Schotter (2003) also acknowledged this, adding that word-of-mouth advice is a powerful tool in shaping the choices that people make and that it tends to push decisions in the advised direction. This study aims to find out what happens with these assumptions when the advice is given not by a human, but by an artificial intelligence system.

2.4 Interaction between humans and artificial intelligence systems

Human advisors are different from artificial intelligence advisors. There will be situations in which it is possible to demand more from humans, and other cases in which it might be possible to hold artificial intelligence systems to a higher standard of explanation (Doshi-Velez et al., 2017). Increasingly, expert systems (such as artificial intelligence systems) are perceived as more credible than human advisors for executing certain tasks (e.g., making calculations, doing predictions) (e.g., Dijkstra, 1999). Therefore, as artificial intelligence capabilities increase, individuals and artificial intelligence systems will have to function alongside each other more often (Hancock et al., 2011). For this parallel functioning to work, however, effective interaction between humans and artificial intelligence systems is needed, which is often complicated by an aversion among humans to accepting the system (Adams et al., 2003).

Previous literature shows that people assess systems from their initial view of computers as advisors, combined with their experience with the actual system (Waern & Ramberg, 1996). Knowing the domain appears to be a major influence on whether the computer is perceived as a trustworthy advisor. In these contexts, trust directly affects the willingness of people to accept information and suggestions produced by artificial intelligence. Trust in automation, in general, has been studied with respect to its various performance influences (Chen & Terrence, 2009; Lee & See, 2004; Parasuraman et al., 2009; Sheridan, 2002). The alleged "black box" nature of artificial intelligence systems poses a barrier to trust and adoption of these systems (Shrikumar et al., 2016): understanding the features that lead to a particular output enhances user acceptance, which this opacity makes difficult. Advantages can be gained when artificial intelligence systems are presented with anthropomorphic shapes ("a phenomenon that describes the human tendency to see human-like shapes in the environment") in order to create an impression of human-likeness, as this helps to explain the unknown (Złotowski et al., 2015).

The risk of using anthropomorphic shapes for artificial intelligence systems is that people expect these systems to follow human social norms (Złotowski et al., 2015). This is likely to happen, as people have a propensity to apply rules of human-human interpersonal interaction to their interaction with intelligent machines: “users would benefit if machines were designed to incorporate characteristics of humanness that would, in turn, elicit social responses from the human user” (Madhavan & Wiegmann, 2004). Nass and Moon (2000) called the frequent application of social rules of human-human interaction to machines ‘ethopoeia’. As people engage in ‘ethopoeia’, people focus more on errors of machines than on mistakes made by humans (Dzindolet et al., 2002). This is seen as a barrier to accepting the system and its applications.

Consumer resistance to machine recommendations has been widely studied, with a common outcome: algorithms are seen as better predictors than people, yet people prefer to take advice from other humans (e.g., Yeomans et al., 2015; Sunstein, 2014). This phenomenon is called 'algorithm aversion', and it persists even when people see algorithmic predictors outperform human predictors (Dietvorst et al., 2015). The lack of flexibility and the inability to step in when the algorithm is suspected to be wrong are the most recurring explanations for algorithm aversion (Dietvorst et al., 2016). Other studies conclude that people favour having humans present information (Eastwood et al., 2012; Diab et al., 2011). This is because humans know the context of what they are advising, can draw on experiences similar to the advisee's situation, and because humans are accustomed to human advice (Yeomans et al., 2017). Another explanation is that algorithms and computers are often portrayed as competitors of humans instead of complements to them. For example, IBM Watson competed against the best human players in the game "Jeopardy" (High, 2012), and Google's DeepMind AlphaGo beat the best human "Go" player (Vincent, 2017).

It seems that people still prefer recommendations from others instead of from machines. Therefore, this thesis proposes that when human advice is available, people will still choose this type of advice and they will not accept solely artificial intelligence generated advice. Furthermore, as people are used to recommendations from other people, I expect that when artificial intelligence generated advice is communicated by humans (so, basically, when the artificial intelligence based advice has a human touch), advice generated by artificial intelligence will be accepted. This is presented in the following hypotheses:

H1: Consumers are less likely to accept solely artificial intelligence generated advice than human advice or artificial intelligence generated advice communicated by humans.

H2: Consumers who receive human advice are more likely to switch to artificial intelligence generated advice when it is communicated by humans than to solely artificial intelligence generated advice.

2.5 Technology Readiness

The Technology Readiness Index (TRI) was originally anchored in the literature on the adoption of new technologies and people-technology interactions. It is a 36-item scale measuring technology readiness, which can be defined as "people's propensity to embrace and use new technologies for accomplishing goals in home life and at work" (Parasuraman, 2000). The TRI measures the general readiness of an individual to use new technology using four personality traits: optimism ("a positive view of technology, with beliefs in increased control, flexibility and efficiency in life due to technology"), innovativeness ("a tendency to be the first using new technologies"), discomfort ("having a need for control and a sense of being overwhelmed by new technologies"), and insecurity ("distrusting technology for security and privacy reasons") (Parasuraman, 2000). Individuals high in optimism and innovativeness and low in discomfort and insecurity are more likely to use new technologies (Walczuch, Lemmink & Streukens, 2007).
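To make the scoring logic of the four traits concrete, the sketch below illustrates how an overall technology-readiness score could be computed, with the two inhibiting traits (discomfort and insecurity) reverse-coded. The item values, the 5-point scale, and the simple averaging scheme are illustrative assumptions, not Parasuraman's published scoring key.

```python
# Illustrative sketch of a technology-readiness composite score.
# Scale range and averaging scheme are assumptions for illustration.

def tri_score(optimism, innovativeness, discomfort, insecurity, scale_max=5):
    """Average the four trait means, reverse-coding the inhibitors."""
    def mean(xs):
        return sum(xs) / len(xs)

    # Discomfort and insecurity inhibit readiness, so reverse-code them
    # on the assumed 1..scale_max Likert scale before averaging.
    def reverse(xs):
        return [scale_max + 1 - x for x in xs]

    dims = [mean(optimism), mean(innovativeness),
            mean(reverse(discomfort)), mean(reverse(insecurity))]
    return mean(dims)

# Example: a respondent high on the motivators, low on the inhibitors.
score = tri_score(optimism=[5, 4], innovativeness=[4, 5],
                  discomfort=[2, 1], insecurity=[1, 2])  # -> 4.5
```

On this illustrative scheme a higher score signals higher readiness, which is how the moderator is interpreted in the hypotheses below.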

Furthermore, of the four dimensions of technology readiness, innovativeness is the most studied. As TRI 1.0 measures general readiness, it also measures general innovativeness, which can be explained as a general tendency to be innovative regardless of the field, whereas domain-specific innovativeness describes a person adopting new technologies in one field but being slower in another (Liljander et al., 2006). For example, a person could have the newest smartphone, but at the same time have traditional home appliances.

The TRI 1.0 has been widely used during the exponential growth of technology's influence in the service domain (Parasuraman & Colby, 2015). Simultaneously with and subsequent to the TRI's development, other scholars examined the advantages and drawbacks of new technology-based systems and their implications for stimulating consumer acceptance (Hoffman et al., 1999; Mick & Fournier, 1998; Meuter et al., 2003, 2005). Since the TRI's publication, the pace of technological change has accelerated, with the rise of developments such as high-speed Internet access, mobile commerce, social media, and cloud computing. In this context, and informed by over 12 years of experience using the TRI, Parasuraman and Colby (2015) developed an updated and streamlined TRI 2.0, from which items are used for this thesis. Technology readiness measures the general innovativeness of consumers, which is not domain-specific. Not all empirical studies that used TRI 1.0 confirmed the relationship between technology readiness and the adoption of new technologies (Citrin et al., 2000; Roehrich, 2004). For example, when the complete TRI 1.0 scale was examined on insurance officers, support was found only for the validity of optimism and innovativeness, not for insecurity and discomfort (Taylor et al., 2002). As examined by Walczuch, Lemmink, and Streukens (2007), an individual's personality makes a difference in the adoption process of new technology: character based on the four traits of technology readiness has a significant positive effect on technology adoption. On top of that, the TRI 2.0 is able to distinguish users from non-users of innovative technologies (Liljander et al., 2006). Therefore, the TRI 2.0 offers a way to divide individuals based on underlying positive and negative

technology beliefs. In previous studies, the TRI and TRI 2.0 have mostly been used to explore the technology readiness of a specific target group (e.g., Caison et al., 2008; Elliott et al., 2008; Lai, 2008; Sermeus, 2016) and to examine technology readiness for a particular new technology (e.g., Taylor et al., 2002; Kuo et al., 2013; Lundberg, 2017). In this thesis, the TRI will be used to study technology readiness for a new technology: advice generated by artificial intelligence.

As not all consumers are the same or motivated by the same beliefs, I expect a difference between consumers with a high level of technology readiness and consumers with a low level of technology readiness in accepting artificial intelligence generated advice. Therefore, I argue that consumers who experience a high level of technology readiness are open to artificial intelligence generated advice, whereas consumers with a low level of technology readiness are not willing to accept advice from an artificial intelligence application and will prefer the advice of a human. This generates the following hypothesis:

H3: The effect of the type of advice (human advice, artificial intelligence generated advice, and artificial intelligence generated advice communicated by humans) on the acceptance of advice is stronger as technology readiness levels increase.

2.6 Sector differences in advice taking

2.6.1 Financial services sector

The financial crisis that started in 2007 was a groundbreaking time for the financial services sector. In the aftermath of the economic crisis, banking is undergoing

major changes regarding regulations, trust challenges, and opportunities from disruptive technologies (Olanrewaju, 2014). Järvinen (2014) examined trust in the banking sector, concluding that trust differs among contexts and types of services in banking. Consumer trust was highest for banking accounts and lowest for investments and pensions (Järvinen, 2014). Traditionally, banking is a very technology-intensive sector, in which technological progress has resulted in improved quality and variety of banking services (Dauda & Lee, 2015). As Hõbe (2015) concluded about current developments in the financial services sector:

It can even be said that technological advances will cause changes in traditional banking through, on the one hand, the creation of customer-facing products and services. On the other hand, technology has introduced new entrants in the financial services sector, who are ready to rapidly create, adjust to user preferences, and deliver innovative solutions.

Sillence and Briggs (2007) found that for financial decisions, consumers want independent and unbiased online advice. On top of that, consumers preferred personalised and tailored information and advice from online sources. As younger generations are raised with an 'I want it, and I want it now' mentality, emerging technologies become essential in receiving information and advice almost instantaneously and in empowering consumers to effortlessly compute complex calculations (Wall Street Journal, 2015). The fact that investments in financial technology startups (so-called fintechs) have tripled in the past five years shows that the market is responding to changing consumer needs by creating innovative applications using cloud computing technology, smartphone applications and other

disruptive technologies (Dietz et al., 2016). Fitzpatrick, Reichmeier, and Dowell (2017) stated: "If advisors can achieve this and help to eliminate the significant distrust in the financial services industry, then the technological innovations will serve as a catalyst to propel the industry to new heights and successes". The prediction is that a complete shift will occur: technology will no longer be used to supplement financial advisors, but advisors will be used as a supplementary resource for new technological models (Fitzpatrick et al., 2017). On top of that, Deloitte (2016) has identified that "automation using artificial intelligence might be the next game changer in terms of process efficiency in the financial industry".

As the financial services sector is subject to technological change, consumers are used to the support of machines in computing the correct numbers in finance (e.g., using a calculator), and consumers increasingly demand tailored and personalised advice, I argue that artificial intelligence generated advice is likely to be accepted by consumers when making financial decisions. This leads to the following hypothesis:

H4a: Advice generated by artificial intelligence in the financial services sector is more likely to be accepted by consumers in the artificial intelligence generated advice condition and in the artificial intelligence generated advice communicated by humans condition than in the human advice condition.

2.6.2 Medical sector

Although advances in information technology in the past decade have developed immensely in nearly every aspect of our lives, they seem to be coming at a slower pace in the field of medicine (Dilsizian & Siegel, 2014). Health information

technologies that are designed to improve clinical decision making are interesting for their potential to make the growing information overload that clinicians face comprehensible (Chaudhry, 2008). However, despite the theoretical and intuitive benefits of these technologies, existing studies found mixed empirical results (Agostini et al., 2008). Graber and Mathew (2008) found three different perspectives to categorise these mixed results. These perceptions were technology-specific (people had a positive impression of integrating computers into clinical care); professional (a perceived risk to the autonomy of doctors, which led to a negative attitude); and health-sciences related (the amount of information and education determines the positive opinion towards the technology).

Previous studies about the role of artificial intelligence in medicine, in radiology and pathology particularly, conclude that these roles will be redefined and that specialists will have to adapt incrementally to artificial intelligence, as it could improve patient care (Jha & Topol, 2016; Chockley & Emanuel, 2016). Furthermore, artificial intelligence (mostly machine learning) is likely to assist physicians with the diagnosis of diseases, suggestions for treatment options, and recommendations in specialisms such as cardiology (Dilsizian & Siegel, 2014). On top of that, Artificial Intelligence in MEdicine (AIME) has identified three research directions that are promising for artificial intelligence applications in medicine, namely: big data and personalised medicine; evidence-based medicine; and business process modelling and process mining (Peek et al., 2015). This shows that a shift is happening in the medical sector towards integrating artificial intelligence systems.

The difficulty in the medical sector is that consumers (patients) are usually not advised as in the financial services and retail sector (to be discussed in the next paragraph), as the doctor makes the decision for you (e.g., you have a disease, the doctor

decides if you need to take antibiotics and, if so, which ones you should take). This means that in this case, the doctor would be advised by the artificial intelligence system and (s)he then decides what to do with the patient. Even though it is fascinating to look further into the challenges (e.g., who is responsible if the doctor uses a diagnosis made by an artificial intelligence system and the diagnosis is incorrect?) and opportunities of artificial intelligence systems in medicine, this is not within the scope of this thesis. The knowledge gap between advisors (doctors) and consumers (patients) exists due to the complexity of understanding jargon, treatments, and processes in the body. Added to the fact that patients are used to receiving advice from a doctor, I argue that it is not likely that consumers will accept medical advice from an artificial intelligence system.

H4b: Advice in the medical sector is less likely to be accepted by consumers in the artificial intelligence generated advice condition and in the artificial intelligence generated advice communicated by humans condition than in the human advice condition.

2.6.3 Retail sector

The retail landscape is rapidly evolving: new technologies (e.g. robots, the Internet of Things), new business models (e.g. subscription models), big data, and predictive analytics (also a component of artificial intelligence, see Figure 1) show that the shopping process is on the verge of a giant leap into the unknown (Grewal et al., 2017). Technology helps the retailer to target the desired consumers and enables consumers to make better-informed decisions about which products or services to buy. Predictions are that the shopping field will depend on newer emerging forces, such as

the Internet of Things, virtual and augmented reality, artificial intelligence, robots, drones, and driverless vehicles (Laurent et al., 2015).

Apps that already rely on artificial intelligence can have a positive impact on the customer shopping experience, both online and offline. For example, these apps can provide information about where products are physically located in a store, answer questions about the functionalities of an item, and make suggestions about which other products might combine well with the purchased product (Grewal et al., 2017). On top of that, artificial intelligence systems are used by multiple firms to test advances in robotics and drones for retailing purposes (Van Doorn et al., 2017).

As explained in paragraph 2.2, robotics is a component of artificial intelligence. After a field experiment in a shopping mall, Rodríguez, Paredes, and Yi (2016) concluded that consumers have a strong interest in robots for localisation and navigation purposes. In addition, loyal customers have been found to believe that technology applications are relevant when making shopping decisions (Martos-Partal & González-Benito, 2013). Grewal, Roggeveen, and Nordfält (2017) state: "Artificial intelligence systems create engaged customers, but they might also mean that service employees' jobs would need to be retooled to enable them to provide information at an even higher level than available in an artificial intelligence application". This shows that consumers respond positively to technology in retail and that the role of traditional shopping assistant jobs is shifting. Taking all this information into account, I argue that consumers see the benefits of artificial intelligence systems in retail and are likely to accept their suggestions before making a purchase decision.

H4c: Advice generated by artificial intelligence in the retail sector is more likely to be accepted by consumers in the artificial intelligence generated advice condition than in the human advice condition and in the artificial intelligence generated advice communicated by humans condition.

As shown in the paragraphs above, six hypotheses were created. When all six hypotheses are answered, it should be possible to tell which type of advice encourages the acceptance of artificial intelligence generated advice as well as in which sector advice generated by artificial intelligence is preferred. Together, these hypotheses can be combined into a conceptual model, which can be found below in Figure 3. The plan for testing these hypotheses is explained in the next chapter of this thesis.

3. Data & Methods

The aim of this chapter is to describe the techniques that have been used to answer the research question and how these methods contribute to the generation of insightful outcomes. First, the research design is illustrated, which includes a description of the measurement variables. Subsequently, the data collection and sample are described. Next, the procedure is explained, followed by the operationalisation of the variables. Lastly, the method of the data collection is clarified.

3.1 Experimental design

An online survey-based experiment with three conditions was conducted to test the proposed hypotheses. The experiment used a 3 (type of advice: human advice / artificial intelligence generated advice communicated by humans / artificial intelligence generated advice) × technology readiness between-subjects design. This method was chosen because surveys allow measurement, description, and comparison of participants' knowledge and attitudes (Fink, 2009). Besides, distributing a survey online has multiple advantages, such as reaching a large number of participants in a short amount of time (Van Selm & Jankowski, 2006). Furthermore, the experimental design made it possible to study causal relationships between the independent and the dependent variables (Saunders & Lewis, 2012).

As mentioned before, the independent variable 'type of advice' was manipulated into three types of advice, namely: human advice, artificial intelligence generated advice communicated by humans, and artificial intelligence generated advice. The dependent variable was the acceptance of artificial intelligence generated advice. Two sets of control variables were added.

The first control variable was expertise, as previous literature suggests that human advisors and machines are sometimes seen as experts, which makes participants rely

on them because of their expertise rather than because of the type of advice. For example, Promberger and Baron (2006) found that participants took the advice of a doctor over the advice of a computer for making a medical decision. This potentially confounds human judgment with expertise. In other work, participants depended on an 'expert system' more than on another person (Dijkstra et al., 1998; Dijkstra, 1999), which suggests that the system was confounded with expertise. To control for this, the human advisor was presented as an 'independent advisor' in this thesis, and the artificial intelligence advisor was presented as an 'artificial intelligence application'. By controlling for expertise, the external validity will be higher than when comparing a 'human expert' with artificial intelligence generated advice. As Logg (2017) explained: "For example: when people are lost, they consult Google Maps, or if pressed, they ask a stranger on the sidewalk. They do not think to call a travel agent or cartographer".

The second set of control variables consists of demographics, namely age, education, and gender. Previous literature suggests that there are apparent differences between age groups with regard to the importance of various factors in technology adoption and usage (Morris & Venkatesh, 2000; Lee & Coughlin, 2015). Relatively younger people are much more likely to have been exposed to information technology at an early age than older people. Therefore, it is reasonable to assume that older people are more accustomed to traditional approaches (here: human advice) and younger people are more reliant on technology (here: artificial intelligence generated advice) (Harris et al., 2013). Furthermore, previous research on technology adoption suggests that clear gender differences exist in the acceptance of technology (Venkatesh & Morris, 2000). Gender shapes the initial decision process that drives technology adoption and usage behaviour in the short term, suggesting that men are more focused on deciding whether to adopt a technology, whereas women are more balanced

(38)

37 (Venkatesh et al., 2000). On top of that, previous research suggests that people with higher education are more likely to use technology over people with less schooling (Riddell & Song, 2012). To make sure that neither of these demographics influences the results of this thesis, they were controlled for.

3.2 Data Collection

The research population of this study consists of consumers. Given that this study aimed to analyse consumers in the broadest sense of the term, the goal was to involve as many different participants as possible. Therefore, the self-selection sampling technique was used to reach participants, meaning that participants identified themselves as sample members (Saunders & Lewis, 2012). This sampling technique has become popular for online experiments, as it is an effective way to collect data from various online sources. However, it has the disadvantage that the participants who choose to complete the survey may hold strong opinions about the research topic or be specifically interested in it (Saunders & Lewis, 2012). This can result in a sample that is not representative of the general public.

The online experiment survey was communicated through my personal social media channels (mainly Facebook.com, Linkedin.com, and the messaging service WhatsApp) and distributed through my KPMG e-mail account. While self-selection sampling was used to attract participants, those who started the online survey were randomly assigned to one of the experimental conditions (human advice, artificial intelligence generated advice communicated by a human, or artificial intelligence generated advice). By randomly assigning participants across conditions, participant attributes are expected to be distributed evenly, minimising systematic bias (Field & Hole, 2003).
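Random assignment with (near-)balanced group sizes can be sketched as follows. This is a minimal illustration of the principle, not the survey tool's built-in randomiser; the condition labels are shorthand introduced here.

```python
import random
from collections import Counter

def assign_conditions(participant_ids, seed=42):
    """Shuffle participants, then deal them round-robin over the three
    advice conditions so group sizes stay (near-)balanced."""
    conditions = ["human", "ai_via_human", "ai"]
    rng = random.Random(seed)  # fixed seed only to make the sketch reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: conditions[i % len(conditions)] for i, pid in enumerate(ids)}

assignment = assign_conditions(range(141))
print(Counter(assignment.values()))  # 141 participants -> 47 per condition
```

Because the shuffled list is dealt round-robin, the groups differ in size by at most one participant, while the shuffle ensures which participant lands in which group remains random.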


Before collecting data, an a priori power analysis was conducted to determine the sample size required for adequate statistical power (Cohen, 1988). This was done with G*Power 3, a general stand-alone power analysis programme for statistical tests that is widely used in social and behavioural research (Faul et al., 2007). The power analysis indicated that a total of 140 participants were needed for this experiment (see Appendix 2 for the analysis).
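G*Power computes this analytically, but the same idea can be approximated by Monte Carlo simulation. The sketch below is illustrative only: the inputs (effect size f = 0.25, α = .05, three groups of 47) and the hard-coded critical value F(2, 138) ≈ 3.06 are assumptions for the sketch, not the G*Power inputs documented in Appendix 2.

```python
import random
import statistics

def anova_f(groups):
    """One-way ANOVA F statistic for a list of sample lists."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    means = [statistics.fmean(g) for g in groups]
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def simulated_power(n_per_group=47, effect_f=0.25, sims=1000, f_crit=3.06, seed=1):
    """Share of simulated experiments in which the F test rejects H0.
    f_crit approximates F(2, 138) at alpha = .05; the three group means are
    spaced so the between-group SD equals effect_f (within-group SD = 1)."""
    d = effect_f * (3 / 2) ** 0.5
    group_means = (-d, 0.0, d)
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        groups = [[rng.gauss(m, 1.0) for _ in range(n_per_group)]
                  for m in group_means]
        if anova_f(groups) > f_crit:
            hits += 1
    return hits / sims
```

Note that with these illustrative inputs the simulated power comes out somewhat below .80; the reported N = 140 follows from the specific G*Power settings in Appendix 2, which may differ from the values assumed above.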

3.3 Procedure

Before starting the survey, participants were asked to agree to a protection statement for participants of the Amsterdam Business School. The survey then presented information about three different banks and a scenario in which the participants were told they were planning to switch banks. Subsequently, the participants were asked to make an initial decision about which bank to choose. They were then randomly assigned to one of three groups: one received human advice, one received artificial intelligence generated advice communicated by a human, and one received artificial intelligence generated advice. The advice stated that the participant's initial choice was not the best for their personal financial situation and suggested another bank. Hereafter, the participants were asked to make a final decision regarding their likelihood of accepting the advice and whether, if they had to decide, they would take the advice or not. This was followed by questions about how easy or hard it was to decide between accepting or rejecting the advice, and the underlying reason. Subsequently, participants were asked whether they would rather have received the advice from (solely) an artificial intelligence system or from a human (depending on the condition they were assigned to). These open questions aimed to gather in-depth answers. Although less common, open questions can be used to collect the data needed in experimental settings (Campbell, 1975; Sofaer, 1999).


Furthermore, participants' technology readiness level was measured with four statements with which they could agree or disagree on a scale from 1 (strongly disagree) to 7 (strongly agree). Then, participants were asked whether they would take artificial intelligence generated advice for financial, medical, and retail decisions, and what their definition of artificial intelligence was. Finally, the last three questions concerned the participants' demographics, so these could be used as control variables.

3.4 Operationalisation

In developing the stimuli for this thesis, the independent variable 'type of advice' was manipulated into three conditions: human advice, artificial intelligence generated advice communicated by humans, and artificial intelligence generated advice. These were operationalised as 'an independent financial advisor' (human advice), 'an independent financial advisor who consulted an artificial intelligence based application' (artificial intelligence generated advice communicated by humans), and 'an artificial intelligence based application' (artificial intelligence generated advice). Advice-taking was measured in three ways: first, on a 7-point Likert scale ranging from 1 (extremely likely) to 7 (extremely unlikely); second, as a choice between 'I would take the advice' and 'I would not take the advice'; and third, with the question whether they would switch to another type of advice, on a binary scale (definitely will – definitely will not). The outcomes of these three measurements formed the three dependent variables (Acceptance, Choice, and Likelihood to Switch).
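For analysis, responses on these measures typically need recoding, e.g. reversing the acceptance item so that higher scores mean greater acceptance, and dummy-coding the forced choice. The sketch below illustrates one such convention; it is a hypothetical recode introduced here, not necessarily the coding scheme used in the thesis.

```python
def recode_acceptance(raw_score, scale_max=7):
    """Reverse-code the acceptance item (1 = extremely likely ...
    7 = extremely unlikely) so that higher values mean greater acceptance."""
    return scale_max + 1 - raw_score

def encode_choice(answer):
    """Dummy-code the forced choice: 1 = would take the advice, 0 = would not."""
    return 1 if answer == "I would take the advice" else 0

print(recode_acceptance(1), encode_choice("I would take the advice"))  # -> 7 1
```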

Technology readiness statements were derived from the TRI 2.0 (Parasuraman & Colby, 2015), which contains 16 items divided over four personality traits (optimism, innovativeness, discomfort, and insecurity). To avoid respondent fatigue, not all 16 TR items proposed by Parasuraman and Colby (2015) were included. Instead, a reduced set of four items was chosen, one for each of the four dimensions, selected on the basis of their reported factor loadings (Parasuraman & Colby, 2015). This also helped to prevent item nonresponse bias, i.e. surveys being returned incomplete (Fraenkel & Wallen, 1993). The four items range from 'new technologies contribute to a better quality of life' for the optimism construct and 'other people come to me for advice on new technologies' for the innovativeness construct, to 'when I get technical support from a provider of a high-tech product or service, I sometimes feel as if I am being taken advantage of by someone who knows more than I do' for the insecurity construct and 'people are too dependent on technology to do things for them' for the discomfort construct. Participants indicated the degree to which they agreed with each statement on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree).
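Scoring the reduced four-item scale can be sketched as follows. TRI scoring conventionally averages the items after reverse-coding the inhibitor dimensions (discomfort and insecurity); the function below assumes that convention on the 7-point scale used here.

```python
def technology_readiness(optimism, innovativeness, discomfort, insecurity,
                         scale_max=7):
    """Overall TR score: mean of the four items, with the two inhibitor
    items (discomfort, insecurity) reverse-coded on the 1..scale_max scale."""
    reverse = lambda score: scale_max + 1 - score
    items = (optimism, innovativeness, reverse(discomfort), reverse(insecurity))
    return sum(items) / len(items)

# A maximally tech-ready respondent: high on motivators, low on inhibitors.
print(technology_readiness(7, 7, 1, 1))  # -> 7.0
```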

As this survey contained two open questions that participants could optionally fill in, these were analysed as well. The open-ended answers were coded into categories and analysed by frequency to gather information about why participants would take or ignore artificial intelligence generated advice, and why they would or would not switch to a type of advice other than the one they initially received (see Appendix 3). This was done because combining quantitative and qualitative methods enhances the benefits and minimises the weaknesses of each technique in practice (Hussein, 2015).
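The frequency analysis of the coded open answers amounts to a simple tally. A minimal sketch (the category labels are hypothetical, not the codes actually used in Appendix 3):

```python
from collections import Counter

# Each open answer has already been coded into a category label
# (the labels here are invented for illustration).
coded_answers = [
    "trusts_data_driven_advice", "misses_human_touch", "trusts_data_driven_advice",
    "privacy_concerns", "trusts_data_driven_advice", "misses_human_touch",
]

frequencies = Counter(coded_answers)
for category, count in frequencies.most_common():
    print(f"{category}: {count}")
```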
