Academic year: 2023

Share "List of Figures "

Copied!
91
0
0

Bezig met laden.... (Bekijk nu de volledige tekst)

Hele tekst


Statement of originality

This document is written by Student Fabian van Netten who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.

Signature:______________________________________________________

F. van Netten


Table of Contents

Statement of originality 2

Table of Contents 3

List of Figures 5

List of Tables 5

Abstract 6

1. Introduction 7

2. Literature review 15

2.1 Type of service: human vs AI 15

2.1.1 Type of service (AI vs human) and customer trust 16

2.2 Technology Acceptance (TA) 18

2.3 Task characteristic (cognitive vs social intelligence) 21

2.4 Age and AI 25

2.5 Conceptual Model 27

3. Data and research method 29

3.1 Pretest 29

3.2 Study design 32

3.3 Participants 33

3.4 Procedure 34

3.5 Measures 35

3.5.1 Technology Acceptance 35

3.5.2 Customer trust 36

3.5.3 Control variables 37

3.5.4 Manipulation checks 37

4. Data Analysis 39

4.1 Preliminary analyses 39

4.1.1 Mean average per category 39

4.1.2 Reliability check 39

4.1.3 Grouping Technology Acceptance 40

4.1.4 Grouping Age 40

4.2 Normality of Distribution 41


4.3 Correlation 42

4.4 Hypotheses testing 44

4.4.1 Hypothesis 1: direct effect of service type on customer trust 45

4.4.2 Hypothesis 2a & 2b: moderation effect of TA 46

4.4.3 Hypothesis 3a & 3b: moderation effect of task characteristics 48

4.4.4 Hypothesis 4a & 4b: moderation effect of age 52

4.5 Summary of the results 54

5. Discussion 56

5.1 General Discussion 56

5.2 Theoretical contribution 59

5.3 Managerial contribution 60

5.4 Limitations 62

5.5 Future research 64

6. Conclusion 65

References 66

Appendices 77

Appendix 1: Cognitive task: Analyse customer problems 77

Appendix 2: Social intelligence: Empathize and communicate with patients 78

Appendix 3: Online experiment 79

Appendix 4: Cronbach's Alpha analysis 91


List of Figures

Figure 1: Conceptual model 27

Figure 2: Diagram of the Pretest: to test the most identified robotic voice 31

Figure 3: Diagram of the experiment: to test the hypotheses 32

Figure 4: PROCESS extension 45

List of Tables

Table 1: Overview of the hypotheses 28

Table 2: Cognitive and social intelligence task characteristics 31

Table 3: TAM: 10 questions to measure technology acceptance 36

Table 4: Trust: 5 questions to measure trust 37

Table 5: Combined means of customer trust 39

Table 6: Cronbach’s α of Technology Acceptance 39

Table 7: Median Low TA vs High TA 40

Table 8: Digital natives vs Digital immigrants 41

Table 9: Correlation matrix 43

Table 10: Repeated measures ANCOVA findings Hypothesis 1 46

Table 11: Repeated measures ANOVA findings Hypotheses 2a & 2b 47

Table 12: PROCESS analysis moderation effect TA 48

Table 13: Repeated measures ANCOVA findings Hypotheses 3a & 3b 50

Table 14: PROCESS analysis moderation effect cognitive task characteristic 51

Table 15: PROCESS analysis moderation effect social intelligence task characteristic 52

Table 16: Repeated measures ANOVA findings Hypotheses 4a & 4b 53

Table 17: PROCESS analysis moderation effect age groups 54

Table 18: Overview of the results of the hypotheses 55


Abstract

During the last years, digital transformation has been key for many businesses (Urbinati, Chiaroni, Chiesa and Frattini, 2018), and more companies nowadays use AI to assist their customers with various activities (Zhao, Zhang, Friedman & Tan, 2016). Due to contradicting findings in previous research, this research was performed to identify whether AI service results in more customer trust than human service, and whether technology acceptance (low vs high), task characteristics (cognitive vs social intelligence) and age (digital immigrants vs digital natives) influence the preferred service type and the level of customer trust. This is important to investigate, as previous research shows conflicting findings and as the moderators might influence the direct effect.

To investigate the main and moderating effects, a 2x2 within-subjects online experiment was created, in which service type and task characteristics were manipulated and technology acceptance and age were measured. In total, 335 respondents took part in this online experiment.
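The design described above can be made concrete with a short sketch. This is purely illustrative, not the thesis's materials: the condition labels are hypothetical shorthand for the manipulated factors, and the only figures taken from the text are the 2x2 structure and the 335 respondents.

```python
# Illustrative sketch of the 2x2 within-subjects design: each respondent
# sees every combination of the two manipulated factors.
# Labels are hypothetical shorthand, not the experiment's exact wording.
from itertools import product

service_types = ["human", "AI"]
task_types = ["cognitive", "social intelligence"]

conditions = list(product(service_types, task_types))  # 4 conditions
# Within-subjects: all 335 respondents rate trust in every condition.
n_observations = 335 * len(conditions)                 # 1340 ratings
```

Because every respondent experiences all four conditions, trust ratings can be compared within the same person rather than between groups.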

The data analysis showed that, overall, human service results in higher customer trust than AI service. The analysis shows that this direct effect is not moderated by technology acceptance, task characteristics or age. This implies that in all conditions, human service is preferred over AI service. These findings provide strategists with guidelines on how to use service type to create more customer trust.

Keywords:

Strategy, service type, Artificial Intelligence, Customer trust, Task characteristics, Technology Acceptance


1. Introduction

During the last years, digital transformation has been key for many businesses (Urbinati, Chiaroni, Chiesa and Frattini, 2018), wherein companies have explored new digital technologies to exploit their benefits (Matt, Hess, & Benlian, 2015). Paired with this exploration of digital technologies, the popularization of the internet, the use of big data and the development of e-commerce is the rapid increase in research on, and applications of, Artificial Intelligence (AI) (Pan, 2016). The combination of industrial demands and AI resulted in compelling changes in the way companies provide their products and services to their customers (Pan, 2016). More companies nowadays use AI to assist their customers with various activities, such as customer support based on customer profiles, product comparisons and personalized advice or recommendations (Zhao, Zhang, Friedman & Tan, 2015). An explanation for this might be the shift from a product-based economy towards a more service-focused economy, making companies realize that implementing a perfect service is essential in a competitive market (Rust & Huang, 2014).

Despite the fact that AI is a major source of innovation in terms of personalized service and advice, it has also become a threat to human service jobs (Huang & Rust, 2018).

According to Karatepe, Yavas, Babakus and Avci (2006), service jobs require employees with high personal efficacy who deal with customers’ dissatisfaction and complaints, deliver the best service and always try to achieve customer satisfaction. Several surveys have shown that people expect human work to be replaced by AI, as tasks become automated (Osawa, Ema, Hattori, Akiya, Kanzaki, Kubo, Koyama & Ichise, 2017). One of the researchers who expressed this thought was Stephen Hawking. Back in 2015 he mentioned:

“Computers will overtake humans with AI at some point within the next 100 years.” This is in line with research of Halal, Kolber and Davis (2016), which states that a high percentage of the duties performed by employees in high-paying jobs can be automated using current technology.

However, Khatib, Yokoi, Brock, Chang and Casal (1999) state that service robots do have some shortcomings, due to their limited abilities for interaction and manipulation. In sum, many researchers have studied the threat of AI to human jobs, but they do not agree on whether AI will, in the future, completely replace human service jobs or merely complement them, depending on the preference of the end user.

Although researchers do not agree on whether AI will replace human service jobs, they do agree that making use of either humans or AI in service brings both advantages and disadvantages (Jones and Brown, 2002; Hengstler, Enkel and Duelli, 2016; Hill, Ford & Farreras, 2015; Nadimpalli, 2017). The research of Jones and Brown (2002) states that the advantage of algorithms and formulas is their accuracy and consistency, while human judgements in combination with data might suffer from irregularity. This is partly supported by Nadimpalli (2017), who states that the increasing level of standardization can be seen as an advantage. However, Nadimpalli (2017) also stated that the use of AI will only be helpful if the data feeding the innovative technology is good as well. This so-called “garbage in, garbage out” principle implies that the outcome depends on the incoming data (Hengstler, et al., 2016). Furthermore, research also states that AI can reduce the level of emotions that human beings have (Nadimpalli, 2017). The reason for this is that AI makes choices based on rationality instead of emotions, which in the longer term will result in humans adapting to this rationality, with emotions becoming less important (Nadimpalli, 2017). The rationality of AI also stands in sharp contrast to human service, as it is well known that humans suffer from bounded rationality (Jones & Brown, 2002).

While identifying the advantages and disadvantages of both service types, a question could be raised: would a customer have more trust in service provided by humans, or in service provided by AI? To our knowledge, little research has been performed regarding the effect on customer trust of companies providing service by humans versus providing service by AI.

This means that a research gap can be identified in the existing literature and that filling it would be highly beneficial for the strategic literature.

Research also shows that technology, and therefore AI, can be considered more efficient in certain tasks than in others (Glikson & Woolley, 2020). To strengthen this contribution, this research added another predictor of trust in AI, namely task characteristics. Task characteristics also influence trust according to the article of Ramchurn, Wu, Jiang, Fischer, Reece, Roberts, Rodden, Greenhalgh and Jennings (2016). In their article they pointed out that people tend to trust AI over humans when it comes to cognitively driven tasks. This is due to the high intelligence of machines, which increases both the number of tasks that can be done and the performance itself. Therefore, it can be an insightful predictor of trust in AI (Hancock, Billings, Schaefer, Chen, De Visser and Parasuraman, 2011).

Building on this, Allam and Dhunny (2019) elaborated that AI can bring multiple benefits to cognitive tasks and can reveal dimensions and trends that are otherwise overlooked by human beings. Rich, Knight and Nair (2009) added that AI will replicate human capabilities, such as cognitive data tasks. The article of Gauglitz (2019) was less convinced and mentioned that AI is beneficial for cognitive tasks, but that it will never overtake humans in, for example, data analysis, and thereby in customer trust.

Another spectrum wherein AI tasks are not preferred over human tasks is outlined in the article of Dietvorst, Simmons and Massey (2018). They mentioned that people tend to trust humans over artificial intelligence when tasks have social intelligence as a component. Gaudiello, Zibetti, Lefort, Chetouani and Ivaldi (2016) added that trust in the task increases if the task contains social intelligence. Robins, Dautenhahn, Te Boekhorst and Billard (2005) added in their article that people will trust and bond more with artificial social intelligence the more they depend on the social intelligence of the machine. Not everyone is convinced about the added value of artificial social intelligence (ASI): Rouchier (2002) wrote that social intelligence is too complex and that artificial intelligence is not capable of replicating social life.

Although the effect of human or AI service on customer trust is not widely researched, research has made several implications regarding trusting AI in general. Trust refers to the willingness of one side to be vulnerable to the actions and activities of another side, based on the belief that the other will execute a particular action important to the trustor, irrespective of the capability to control or manage that side (Mayer, Davis & Schoorman, 1995). Trust can be seen as a critical cornerstone in the development of decision support systems and as an effective way to lower the perceived uncertainty, resulting in a sense of safety among people (Ba & Pavlou, 2002).

Looking at the research that has been conducted regarding the effect of AI on trust, research of Coeckelbergh (2011) states that people already rely heavily on technologies and already assign tasks to machines. This implies that people already put a lot of trust in machines (Coeckelbergh, 2011). The research of Hancock, Billings, Schaefer, Chen, de Visser and Parasuraman (2011) states that humans trust robots as they will protect both welfare and interest. In addition, the research of Boutilier, Caragiannis, Haber, Lu and Procaccia (2015) states that humans trust AI based on its intelligence, as AI intelligence will replace human knowledge where human knowledge is not sufficient. However, AI also raises certain concerns (Boutilier, et al., 2015; Hengstler, Enkel & Duelli, 2016; Mason, et al., 2020). Mason, Peoples and Lee (2020) stated concerns regarding AI meeting human values, its ability to make decisions that would be fair in the eyes of humans and its capability to make valid decisions. This influences the level of trust. Besides, the research of Cave (2019) describes other short-term concerns and immediate challenges that might harm humans, such as accountability, algorithmic bias and the privacy of the human being. For the long term, Cave (2019) further elaborated that an uncontrollable super machine could be created and that wide-scale job losses could result from the use of AI. Research explicitly mentions that without tackling customer concerns, many customers will not trust AI, meaning that they will not adopt AI and will not see the positive capabilities that AI brings (Rossi, 2019). In addition, research of Wang and Siau (2019) shows that trust is crucial in the acceptance of AI and could be seen as the cornerstone of the relationship between humans and AI. This is important to know, as research shows that the amount of trust a customer has influences their behaviour (Wang & Siau, 2019) and, more importantly, has an effect on customer purchase intention (Chiu, Hsu, Lai & Chang, 2012). Research showed that higher customer trust will result in higher purchase intention (White & Hong, 2012). This indicates the importance of investigating the effect of type of service (human or AI) on customer trust.

It is interesting to see that multiple studies have been conducted on customer trust in machines and on customer trust in AI; however, to our knowledge, little research has compared customer trust in AI and in humans within a single study. This makes the research gap regarding the effect of human or AI service on customer trust even stronger, as both the comparison and the service-provision setting fill a research gap.

The research of Lee and Wan (2010) states that trust in technology is important as technologies are not infallible. This means that human beings will become more vulnerable as they rely and depend more and more on technologies (Martin, 1996).

However, trust can be seen as an effective way to lower the perceived uncertainty, resulting in a sense of safety among people (Ba and Pavlou, 2002). This safety is related to the adoption and acceptance of technologies (Van Raaij and Schepers, 2008). To measure this technology acceptance (TA), several models have been analyzed; the technology acceptance model (TAM) has been chosen since it is one of the best models to identify technology-related adoption (Belanche, Casaló & Flavián, 2012). The two determinants of this model are Perceived Ease of Use (PEOU) and Perceived Usefulness (PU) (Davis, 1989). PEOU can be defined as the level to which people believe it is easy to use a certain technology (Zarouali, Van den Broeck, Walrave & Poels, 2018). Looking at the second determinant, PU can be defined as the level to which people believe that using a certain technology will improve their task performance (Zarouali, et al., 2018). The model elaborates the information coming from respondents, which is affected by behavioral intentions and subjective norms, reflected in the antecedents of the adoption of technology (Chang & Guohua, 2010). The question that remains is: what is the effect of TA on the direct effect of type of service (AI vs human) on customer trust? According to Gefen, Karahanna and Straub (2003), PEOU has a positive influence on trust, particularly for systems in the initial adoption phase.

This is because PEOU will motivate customers to commit to the relationship with the company, which will lead to investments coming from the customers. Based on the article of Benbasat and Wang (2005), PEOU is an important component when it comes to increasing PU. The positive influence of trust on perceived usefulness is elaborated further by Pavlou (2003), especially in the case of the online environment. Previous research investigated the relationship between the TAM and trust (Gefen, et al., 2003). In addition, Van Pinxteren, Wetzels, Ruger and Wetzels (2019) investigated the drivers of anthropomorphism and their effect on trust, to help managers with selecting humanoid robots. In that study, participants' perceptions of robots with increasingly human-like appearances were researched: participants were shown machines that looked like humans (with eye coloring). The research lacked a comparison between trust in human service and trust in machine service. In their article, the researchers acknowledged that research on trust and intentions regarding the usage of robot service is highly deficient (Van Pinxteren, et al., 2019). This once again underlines the importance of investigating the effect of type of service (human or AI) on customer trust.

Over the years, multiple studies have been conducted regarding the direct effect of service type on customer trust; however, very little research has been conducted regarding the overall influence of age on customer trust in AI. One of the few studies regarding the influence of age on trust in AI has been done by Przegalinska, Ciechanowski, Stroz, Gloor and Mazurek (2019). According to this research, the age group Generation Y views advice coming from AI as a tool that provides wiser decisions than humans. However, this research only focuses on millennials. Therefore, future research suggests doing similar research on the effect of different age groups on customer trust in AI (Przegalinska, et al., 2019). Research shows that most elderly people are technologically uneducated and unfamiliar with using technology (Rodeschini, 2011). Therefore, it could be expected that younger customers would trust the service of AI more than elderly people would. In contrast, elderly people would probably have higher customer trust when service is provided by humans rather than by AI.

Based on the identified research gaps, the following main research question has been formulated: What is the relationship between human or AI service and customer trust, and is this moderated by TA, task characteristics and age?

This research will make the following academic and managerial contributions to the strategic literature. Firstly, multiple studies have been conducted regarding humans and machines, and research has already been done regarding humans and artificial intelligence, while, to our knowledge, no research has been conducted comparing the effect of human service and AI service on customer trust, meaning that this comparison would be a true contribution to the literature. By investigating this, it could be clearly seen whether it would be strategically better for companies to use human service or AI service, resulting in more customer trust and eventually higher purchase intention. Furthermore, this research will also contribute to the strategic literature in terms of the moderating effect of TA on the relationship between service type (human vs AI) and customer trust. The examination of this moderator will provide insights into the extent to which TA explains customer trust in service provided either by humans or by AI. Besides, the research will also provide insights into the moderation of task characteristics and age on this direct effect. This will help companies to identify which service type will result in the highest customer trust (and eventually company effectiveness), based on the task characteristics and the age of the target market and/or customer. In other words, this research will deliver new insights and both managerial and academic contributions to the strategic literature.


2. Literature review

2.1 Type of service: human vs AI

The job tasks of employees are becoming more service-minded due to the increasingly modern market (Budianto, 2019). Rust and Huang (2014) described this shift from a product-based economy towards a more service-focused economy in their article, wherein companies realize that implementing a perfect service is essential in a competitive market. Another force driving this shift is the increase in the scale of service production and therefore a growth of services. These services should create value for companies through their skilled employees (Buera and Kaboski, 2012; Larivière, Bowen, Andreassen, Kunz, Sirianni, Voss, Wünderlich and De Keyser, 2017). Fan, Wu and Mattila (2016) elaborated that there will come a time when machines will also replace service employees. This replacement can potentially help companies to improve customer service and decrease costs (Van Pinxteren, Wetzels, Ruger and Wetzels, 2019). An important factor in cloning human intelligence is mimicking human emotions, as further outlined in the article of Martínez-Miranda and Aldea (2005). They conclude that emotional models in artificial intelligence systems are in general mostly applicable to the specific domain for which the artificial intelligence system has been designed. Creating a machine with human-related factors is elaborated further in the anthropomorphism theory of Epley (2007). This theory is about the attribution of human appearances, character or behavior to non-humans, and claims that if a non-human has these factors, people will behave socially towards it on an interaction level (Moon, 2000). The article of Nilsson, Lierman, Dynesius and Revenga (2005) mentions that succeeding with anthropomorphism, and thereby cloning human intelligence, is what researchers want to accomplish in the long run. To measure whether a human is successfully replicated by a machine, researcher Alan Turing created the “Turing test”, since defining thinking is too hard to conceptualize (Turing, 1950). In this test, the machine passes if it is able to convince the human participant of the research that they have been communicating with another human instead of a machine.

While it is considered that service jobs are hard to automate with AI because they rely more on spontaneous communication and contextual understanding (Autor and Dorn, 2013), Huang and Rust (2018) developed a theory about the likelihood that human job tasks will be replaced by innovative AI technologies. According to their article, the replacement of jobs by AI will take place at a task level instead of at a job level. In this case, AI takes over a part of the human service job; Huang and Rust (2018) described this stage as “augmentation”. Besides augmentation, the article further elaborated on the transition to AI, stating that in the future AI will be capable of matching human performance in both empathetic and intuitive tasks.

2.1.1 Type of service (AI vs human) and customer trust

According to Mayer, Davis and Schoorman (1995), trust can be defined as the willingness of one side to be vulnerable to the actions and activities of another side, based on the belief that the other will execute a particular action important to the trustor, irrespective of the capability to control or manage that side. Trust can be seen as a critical cornerstone in the development of decision support systems (Muir & Moray, 1996). Furthermore, trust can be seen as an effective way to lower the perceived uncertainty, resulting in a sense of safety among people (Ba and Pavlou, 2002). To establish the relational concept of trust, a minimum of three elements is required (Hancock, Billings & Schaefer, 2011). First of all, there needs to be at least an actor that sends information as well as a receiver of that information. Secondly, that information needs to flow through a communication channel. Last but not least, the information is affected by the interplay coming from the human characteristics (Hancock, et al., 2011).

Looking at the research that has been conducted regarding the effect of AI on trust, research states that people already rely a lot on technologies and already assign tasks to machines (Coeckelbergh, 2011). The research of Coeckelbergh (2011) indicates that a lot of trust has already been put in machines. In addition, trust is an important predictor of machine success (Pavlou, 2003; Ratnasingam, 2005). The research of Boutilier, Caragiannis, Haber, Lu and Procaccia (2015) states that humans trust AI based on its intelligence. The reason for this is that AI intelligence will replace human knowledge where human knowledge is not sufficient. Furthermore, research states that humans trust robots as they will protect both welfare and interest (Hancock, Billings, Schaefer, Chen, de Visser & Parasuraman, 2011).

Nevertheless, AI also raises certain concerns (Boutilier, et al., 2015; Hengstler, Enkel & Duelli, 2016; Rossi, 2019). Rossi (2019) states several concerns regarding AI. First of all, there are concerns regarding AI meeting human values. Secondly, the research also states concerns regarding its ability to make decisions that would be fair in the eyes of humans and its capability to make valid decisions. All these concerns influence the level of trust in AI.

Research explicitly mentions that without tackling customer concerns, many customers will not trust AI, meaning that they will not adopt AI and will not see the positive capabilities that AI brings (Rossi, 2019). Furthermore, research of Wang and Siau (2019) shows that trust is crucial in the acceptance of AI. Besides, it could be seen as the cornerstone of the relationship between humans and AI. This indicates that the amount of trust a customer has influences their behavior (Wang & Siau, 2019). More importantly, this eventually has an effect on customer purchase intention (Chiu, Hsu, Lai & Chang, 2012). This is confirmed by the research of White and Yuan (2012), which shows that higher customer trust will result in a higher purchase intention. Once again, this indicates the importance of investigating the effect of type of service (human or AI) on customer trust.

However, research also reveals that certain customers do not fully trust AI to offer them the same quality of service as human service would (Yen & Chiang, 2020). This implies that service provided by AI is likely to result in a lower trust level compared to human service among certain customers. Besides, the research of Lee and Wan (2010) states that trust in technology is important as technologies are not infallible. This means that human beings become more vulnerable when they rely and depend fully on technologies (Martin, 1996). Due to this increasing vulnerability, it can be assumed that customers will have a lower trust level in AI service in comparison with human service. This results in the formulation of the following hypothesis:

H1: Human service will yield a more positive effect on customer trust, compared to AI service.

2.2 Technology Acceptance (TA)

Research states that trust can be seen as an effective way to lower the perceived uncertainty, resulting in a sense of safety among people (Ba and Pavlou, 2002). This safety is related to the adoption and acceptance of technologies (Schepers and Wetzels, 2007).

To measure this TA, several models have been created. One of these models is the Matching Person & Technology (MPT) model, which assesses the most suitable technology based on a person’s goals and needs. The model measures the barriers that might exist when people use the technology and the type of additional support that might increase technology usage (Scherer & Craddock, 2002). However, according to Scherer, Sax, Vanbiervliet, Cushman and Scherer (2005), this model is complex due to the reactions and expectations of humans regarding technologies. Another model focusing on TA is the Unified Theory of Acceptance and Use of Technology (UTAUT). The model is used to measure people's behavioral intentions (Liu, Miguel, Rios, Buttar, Ranson & Goertzen, 2015). According to Dwivedi, Rana, Jeyaraj, Clement and Williams (2017), the UTAUT model explains the variance in behavioral intentions. An advantage of this model is that it integrates demographic determinants (Chen, Li & Li, 2011). However, a disadvantage is that the model ignores relationships which might be important for the research and that it proposes certain relationships which are not applicable to all contexts (Dwivedi, Rana, Jeyaraj, Clement & Williams, 2017). Furthermore, a popular and widely used model is the technology acceptance model (TAM). TAM provides a better understanding of the adoption of technologies. The model is one of the most popular models, as it can be characterized as simple (parsimony), able to predict acceptance and the use of technology (generalizability), and backed by data (verifiability) (Rauniar, Rawski, Yang & Johnson, 2013).

Taylor and Todd (1995) added another advantage of the TAM: it is an easy method for explaining test outcomes. However, the fact that TAM does not include the role of attitude could be seen as a disadvantage (Dwivedi, et al., 2017). Besides, compared to UTAUT, it does not include any demographic determinants. Nevertheless, research considers TAM one of the most significant models in current research, and due to its other advantages, TAM has been chosen to measure TA.

TAM has two main determinants: Perceived Ease of Use (PEOU) and Perceived Usefulness (PU) (Davis, 1989). PEOU can be defined as the level to which people believe it is easy to use a certain technology (Zarouali, Van den Broeck, Walrave & Poels, 2018). TAM suggests that people's attitude will improve after they perceive a greater ease of use of a technology (Davis, 1989). In other words, if people perceive AI as easy to use, they will have a more positive attitude towards AI. Looking at the second determinant, PU can be defined as the level to which people believe that using a certain technology will improve their task performance (Zarouali, et al., 2018). Again, TAM suggests that people's attitude will improve after they perceive the technology as useful, meaning that people will benefit from using the technology. In other words, people's positive attitude towards AI will increase when they perceive AI as more useful. The higher the score on this 12-item TAM scale, divided into 6 questions related to PEOU and 6 questions related to PU, the higher the level of TA of customers.

Looking at the effect of TA on customer trust, it can be stated that people's attitude improves once they perceive the technology as useful, meaning that people will benefit from using the technology. This is supported by Gefen, Karahanna and Straub (2003), who state that PEOU has a positive influence on trust, especially for technology systems in the initial adoption phase (Gefen, et al., 2003). The reason these systems in particular benefit is that PEOU motivates customers to commit to the relationship with the company, which leads to investments from the customers. Not only PEOU but also PU is strongly related to trust (Mou, Shin & Cohen, 2017). As both PEOU and PU, antecedents of TA, appear to have an influence on trust, many researchers combine TA and trust to demonstrate certain relationships (Belanche, Casaló & Flavián, 2012; Suh & Han, 2002; Ghazizadeh, Peng, Lee & Boyle, 2012).

As stated, certain customers do not fully trust AI to offer them the same quality of service as human service would, implying that AI service results in lower trust among certain customers (Yen & Chiang, 2020). This resulted in the formulation of hypothesis 1. The question that remains is whether TA moderates the relationship between service type and customer trust. As the hypothesis indicates that human service will have a more positive effect on trust compared to AI service, it can be expected that TA plays a role in this. A moderating effect of TA on this relationship would indicate that those customers not fully trusting AI (preferring human service) also have a lower TA. Vice versa, this would indicate that AI service has a more positive effect on customer trust for customers with high TA.

Therefore, the following hypotheses can be formulated:

H2a: Low technology acceptance is expected to have a more positive effect on customer trust, when service is provided by humans, compared to service provided by AI.

H2b: High technology acceptance is expected to have a more positive effect on customer trust, when service is provided by AI, compared to service provided by humans.

2.3 Task characteristic (cognitive vs social intelligence)

Research shows that technology, and therefore AI, can be considered more efficient in certain tasks than in others (Glikson & Woolley, 2020). This is due to the high intelligence of machines, which increases both the number of tasks that can be performed and the performance itself.

Therefore, task characteristics can be an insightful predictor of trust in AI (Hancock et al., 2001). Cheney (1986) mentioned that task characteristics can be seen as part of user behavior and that they increase both work performance and user satisfaction. This only holds when the technology output matches these task characteristics (Ghani, 1994).

Looking at previous research, the well-cited article of Huang and Rust (2018) identifies four task characteristics, also referred to as intelligences, based on tasks that can be performed by both humans and AI. The first intelligence described is mechanical intelligence, the ability to perform in an automated and routine way. This type of task is low in creativity, as tasks can be done with little thought because they have already been performed several times (Huang & Rust, 2018). The second intelligence described in this research is analytical intelligence, known as the ability to convert and process information for problem-solving purposes. Examples of such tasks include accounting, financial analysis and technology-related work. These tasks are more complex and require consistent and systematic work. The third intelligence is intuitive intelligence, the capability for creative and experience-based thinking. Compared to analytical intelligence, understanding can be considered the key characteristic of intuitive intelligence (Huang & Rust, 2018). Last but not least, Huang and Rust (2018) identified empathetic intelligence as the fourth task characteristic. These tasks are socially and emotionally oriented and require empathy and highly interactive service.

Looking at another well-cited article related to task characteristics and AI, Wirtz, Patterson, Kunz, Gruber, Lu, Paluch and Martins (2018) build on the article of Huang and Rust (2018) and their four levels of intelligence. However, this research states that the first three intelligences can be combined and identified as cognitive-complexity tasks, while the empathetic intelligence identified by Huang and Rust (2018) can be identified as emotional-social tasks. These two task types are confirmed by Hudges (2014), who also states that task characteristics can be divided into cognitive skills and social intelligence skills.

The research of Xiao and Kumar (2019) and the research of Huang and Rust (2020) make the division between hedonic and utilitarian tasks. They describe hedonic tasks as tasks focusing on the emotional and affective aspects of the services provided, which relates to the emotional-social tasks mentioned by Wirtz et al. (2018) and the empathetic intelligence identified by Huang and Rust (2018).

According to Xiao and Kumar (2019) and Huang and Rust (2020), utilitarian tasks are related to the functional and efficient aspects of the service, indicating that they relate to cognitive complexity.


Combining the research above, a division can be made between cognitive tasks (the ability to process and use information) and social intelligence tasks (the ability to understand and adjust feelings and behaviours). Therefore, these two task characteristics will be considered in this research.

Looking at the cognitive task characteristic, the article of Ramchurn, Wu, Jiang, Fischer, Reece, Roberts, Rodden, Greenhalgh and Jennings (2016) points out that people tend to trust AI over humans when it comes to cognitively driven tasks. Allam and Dhunny (2019) elaborated that AI can yield multiple benefits when it comes to analyzing data and information, revealing dimensions and trends that would otherwise be overlooked by human beings. Gauglitz (2019) is less convinced and mentions that although AI is beneficial for analyzing data and information, it will never replace humans in data analysis, and thereby in earning customer trust. This is not in line with the research of Rich, Knight and Nair (2009), who elaborated that AI will replace human capabilities such as cognitive tasks. This suggests a positive effect on customer trust (Glikson & Woolley, 2020), since people tend to hand both authority and control over to the technology. The research of Glikson and Woolley (2020) further elaborates that people accept and trust the actions of the technology best when the task characteristics match the abilities of the technology.

As most authors state that cognitive task characteristics yield a more positive effect on customer trust when the service is provided by AI rather than by humans, the following hypothesis can be formulated:

H3a: Cognitive tasks will have a more positive effect on customer trust when service is provided by AI, compared to service provided by humans.


The research of Dietvorst, Simmons and Massey (2016) outlines that AI tasks are not always preferred over human tasks. They mention that people tend to trust humans over AI when tasks involve social intelligence. Gaudiello, Zibetti, Lefort, Chetouani and Ivaldi (2018) state that if a task contains social intelligence, the trust in that particular task increases. A contradiction can be found in the article of Bainbridge, Brent, Carley, Heise, Macy, Markovsky and Skvoretz (1994), wherein they mention that AI is also able to create social institutions and structures. This article mentions that sociologists should be aware of the potential of artificial social intelligence (ASI) and predicts that ASI will add significance to sociology. Robins, Dautenhahn, Te Boekhorst and Billard (2005) add that the more people trust and bond with ASI, the more dependent they become on the social intelligence of the machine.

However, not everyone is convinced of the added value of ASI. Rouchier (2002) writes that social intelligence is too complex and that artificial intelligence is not capable of replicating social life, due to AI's lack of communicative flexibility compared to the communicative skills of humans. As a result, it will take a while before people trust AI with such tasks.

Therefore, it can be expected that a task related to social intelligence will be trusted more when it involves human service instead of AI. Keeping the above in mind, the following hypothesis can be formulated:

H3b: Social intelligence tasks will have a more positive effect on customer trust when service is provided by humans, compared to service provided by AI.


2.4 Age and AI

According to Gibson and Sodeman (2014), every age group has developed its own unique features when facing technologies. The article elaborates on the differences between the age groups that were exposed to technology later in life due to the period in which they were born, Baby Boomers (1943-1960) and Generation X (1961-1980), and the age groups that have grown up exposed to technology, Millennials (1981-1999) and Generation Z (2000 onward). Research by Morris and Venkatesh (2000) states that older people may be more accustomed to applying non-technology solutions to tasks, as their opportunity to interact with information technology was more limited compared to younger people. Younger people are more reliant on the use of technology, as they grew up in the age of the personal computer. This is in line with research by Rodeschini (2011), which shows that most elderly people are technologically uneducated and unfamiliar with using technology. This is supported by the research of Holzinger and Miesenberger (2009), which states that with increasing age, the ease of using a technological device decreases. Besides, the research of Morris and Venkatesh (2000) implies that older people might be less confident in their capability to make independent judgements about (new) technology and might be more likely to seek opinions offered by humans, for example friends and co-workers. Both the fact that older generations are less familiar with technology and that they prefer to seek opinions offered by humans might imply that elderly people probably have a lower TA. This contrasts with younger people, who are familiar with technology and have a higher need for autonomy (Cook and Wall, 1980), indicating that they probably have a higher TA.

In addition to the research of Gibson and Sodeman (2014), Kesharwani (2020) made a distinction between two age generations: digital immigrants and digital natives. Digital immigrants are people born before 1980, whose lifestyle changed completely due to the adoption of new technologies. Digital natives are people born after 1980, who grew up in the internet environment. This is supported by the research of Morris and Venkatesh (2000), which shows that age has a very important impact on TA. Looking at age in relation to TAM, the article of Plaza, Martín, Martin, & Medrano (2011) investigated the influence of age on the TA of mobile applications. This research shows that different age groups prefer different usability characteristics. Younger people focus more on the usefulness of a mobile application, while older people focus on ease of use, as they often first need to learn how to use the new application. Although the influence of age on technology has been investigated (Gibson and Sodeman, 2014; Morris and Venkatesh, 2000; Rodeschini, 2011; Holzinger and Miesenberger, 2009; Cook and Wall, 1980; Kesharwani, 2020; Plaza, Martín, Martin, & Medrano, 2011), very little research has been conducted regarding age and general customer trust in AI. One of the few studies on the influence of age on general trust in AI has been done by Przegalinska, Ciechanowski, Stroz, Gloor and Mazurek (2019). According to this research, generation Y views advice coming from AI as a tool that enables wiser decisions than humans can provide. However, this research only focuses on millennials (Przegalinska, et al., 2019).

Therefore, prior work suggests conducting similar research on the effect of different age groups on customer trust in AI. This can be identified as a research gap.

While the study of Yang and Shih (2020) did separate the different age groups, it did not investigate the impact of age group on trust in artificial intelligence.

Their article reveals that younger people, seen as digital natives, are more likely to adopt a technology compared to older people, seen as digital immigrants. This indicates that generation Y, who can be seen as digital natives, would strengthen the direct effect of AI-provided service on trust, compared to the baby boomers and generation X, who are digital immigrants. In contrast, baby boomers and generation X, who can be seen as digital immigrants, would strengthen the direct effect of human-provided service on trust, compared to generation Y. Based on this, the following hypotheses can be created:

H4a: Digital natives (people born after 1980) will have a more positive effect on customer trust when service is provided by AI, compared to service provided by humans.

H4b: Digital immigrants (people born before 1980) will have a more positive effect on customer trust when service is provided by humans, compared to service provided by AI.

2.5 Conceptual Model

Based on the literature and research gap, a conceptual framework has been created and hypotheses have been formulated. The framework shows the direct effect of service type (human vs AI) on customer trust, with TA (low vs high), task characteristic (cognitive vs social intelligence) and age group (digital immigrants vs digital natives) moderating this relationship. The conceptual framework is summarized in figure 1.

Figure 1: Conceptual model

[Diagram: Type of Service (Human vs AI) → Customer Trust (H1), moderated by Technology Acceptance (Low vs High) (H2a, H2b), Task characteristic (cognitive vs social intelligence) (H3a, H3b) and Age groups (Digital immigrants vs Digital natives) (H4a, H4b).]


The following hypotheses could be created based on research and literature:

Table 1: Overview of the hypotheses

H1 Human service will yield a more positive effect on customer trust, compared to AI service.

H2a Human service is expected to have a more positive effect on customer trust, when having low technology acceptance.

H2b AI service is expected to have a more positive effect on customer trust, when having high technology acceptance.

H3a Cognitive tasks will have a more positive effect on customer trust when service is provided by AI, compared to service provided by humans.

H3b Social intelligence tasks will have a more positive effect on customer trust when service is provided by humans, compared to service provided by AI.

H4a Digital natives (people born after 1980) will have a more positive effect on customer trust when service is provided by AI, compared to service provided by humans.

H4b Digital immigrants (people born before 1980) will have a more positive effect on customer trust when service is provided by humans, compared to service provided by AI.


3. Data and research method

This chapter provides detailed information on how the hypotheses of this research were tested. A pretest was set up to check whether the audio fragments used in the experiment were perceived as intended. After the elaboration on the pretest, the study design will be discussed, followed by the participants of the study, the experiment procedure and the scales used in the research.

3.1 Pretest

A pretest was performed to test the manipulations. The pretest was distributed via convenience sampling, meaning it was shared with personal contacts to reach a sufficient number of respondents. This resulted in 35 respondents taking part in the pretest.

The pretest consisted of two parts. The first part was used to test the AI service manipulation: it tested which of the most popular conversational assistants' voices best represents a non-human voice. The pretest used the same non-human voices as an earlier study by Bickmore, Olafsson, O'Leary, Asadi, Rickless and Cruz (2018). That research used Alexa (Amazon), Google Assistant (Google) and Siri (Apple), since these are representative and used all over the world, with billions of voice searches every month. Hoy (2018) also confirmed that these voice assistants are the most used, since they are integrated into home speakers and smartphones. Both Kumar, Dixit, Javalgi and Dass (2015) and Canbek and Mutlu (2016) elaborated that the rise in popularity of these voice assistants leads humans to quickly embrace voice assistance and grow dependent on it. Brill, Munoz and Miller (2019) take this a step further and predict that these popular voice assistants will manage at least 85% of the customer relationships within enterprise businesses when it comes to human interaction. Since multiple researchers see Alexa (Amazon), Google Assistant (Google) and Siri (Apple) as the most popular robotic AI-driven voices, these three voices have been used in the pretest.

The 35 respondents were asked which voice fragment would best represent a robotic service assistant. The text used for the voice fragments of the pretest comes from the study of Adam, Wessel & Benlian (2020). Of the three voice fragments provided, voice fragment 1, with the Siri voice, was identified as the best representation of a robotic voice assistant (57,14%). Voice fragment 2, Amazon Alexa, received 22,86% of the votes, and in last place was Google Assistant with 20%.

The second part of the pretest consisted of two multiple-choice questions related to the task characteristics variable. Respondents were asked to identify which task would best represent the cognitive and the social intelligence task characteristic. Based on the article of Huang & Rust (2020), four cognitive tasks and four social intelligence tasks, derived from four jobs, were identified. Based on this literature, a list of cognitive and social intelligence tasks was presented to the respondents (table 2) (Huang & Rust, 2018; Huang & Rust, 2020).

In the second part of the survey the respondents answered the question: Out of the following tasks, which can be identified as a "cognitive" task? The cognitive task chosen most often (28,57%) was "Analyse customer problems", followed by "Figure out which tax rules applied to which client's particular situation" (22,86%). Next came "Empathize and communicate with patients for emotional support and solutions" (14,29%) and, with the same share of votes, "Analyse conversation" (14,29%). The second-to-last choice was "Analyse conversation" (11,43%) and in last place "Calm down customer" (8,57%). The other question the respondents answered was: Out of the following tasks, which can best be identified as a "social intelligence" task? The respondents placed "Empathize and communicate with patients for emotional support and solutions" first (57,14%), followed by "Calm down customer" (14,28%). Next were "Analyse conversation" (11,43%) and "Tell a patient she/he has cancer" (11,43%).

Table 2: Cognitive and social intelligence task characteristics

Cognitive tasks:
Analyse customer problems
Figure out which tax rules applied to which client's particular situation
Analyze clinical decision support system
Analyze conversation

Social intelligence tasks:
Calm customer down
Commiserate with clients who have to pay a high amount of tax
Tell a patient she/he has cancer
Empathize and communicate with patients for emotional support and solutions

(Huang & Rust, 2018; Huang & Rust, 2020)

Figure 2: Diagram of the pretest: to test the most identified robotic voice

[Diagram: respondents listened to three voice fragments (Apple Siri, Amazon Alexa, Google Home) to identify the most robotic voice, and reviewed the cognitive and social intelligence tasks to identify each task characteristic.]


3.2 Study design

The hypotheses have been tested via a two (service type: human vs AI) by two (task characteristic: cognitive vs social intelligence) within-subjects experimental design. For the research, two independent variables have been manipulated: service type and task characteristic. The moderators, TA (low vs high) and age group (digital immigrants vs digital natives), have not been manipulated but measured; these two moderators serve as between-subjects factors.
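The 2x2 within-subjects structure means every respondent is exposed to all four combinations of the manipulated factors. A minimal sketch of the design (the labels are illustrative, not taken from the survey software):

```python
from itertools import product

# The two manipulated within-subjects factors
service_types = ["human", "AI"]
task_types = ["cognitive", "social intelligence"]

# Every respondent sees all four combinations
conditions = list(product(service_types, task_types))
for service, task in conditions:
    print(f"{service} service x {task} task")
```

Crossing the factors rather than assigning respondents to a single cell is what allows the trust scores of the same person to be compared across conditions.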

Figure 3: Diagram of the experiment: to test the hypotheses

[Diagram: respondents first answered demographic questions (gender, education, age, nationality), then heard human service (human voice) and AI service (robotic voice identified in the pretest) fragments, each combined with a cognitive and a social intelligence task, and rated trust statements after each fragment (e.g. "The provided service is trustworthy", "I trust the provided service keeps my best interests in mind").]


3.3 Participants

The experiment was filled in by 407 respondents. Of those 407 respondents, 72 responses were not imported into SPSS, because the survey was either not finished or finished in less than 120 seconds instead of the proposed 300 seconds. In total, 335 respondents were analyzed.
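The exclusion rule above (drop responses that were unfinished or completed in under 120 seconds) can be sketched as follows. The field names `finished` and `duration_seconds` are hypothetical, since the actual screening was done in SPSS:

```python
def filter_responses(responses, min_seconds=120):
    """Keep only responses that were finished and took at least min_seconds."""
    return [r for r in responses
            if r["finished"] and r["duration_seconds"] >= min_seconds]

# Hypothetical raw responses
raw = [
    {"id": 1, "finished": True, "duration_seconds": 310},   # kept
    {"id": 2, "finished": False, "duration_seconds": 400},  # dropped: unfinished
    {"id": 3, "finished": True, "duration_seconds": 95},    # dropped: too fast
]
kept = filter_responses(raw)
print(len(kept))  # 1
```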

No respondents were disqualified from the experiment. Both males and females from different age groups were included. Respondents were reached via convenience sampling through multiple social media platforms, namely LinkedIn, Instagram and Facebook, where the link to the survey was shared. In addition to convenience sampling, respondents were gathered via snowball sampling, wherein the link to the experiment was shared with personal contacts via WhatsApp and SMS, who in turn shared it with their personal contacts.

Of all the 335 respondents, 188 (56,12%) were male and 147 (43,88%) female.

156 (46,57%) respondents were between 21-30 years old, 67 (20,00%) between 41-50 years old, 39 (11,64%) between 51-60 years old, 38 (11,34%) between 31-40 years old, 18 (5,37%) between 0-20 years old, 11 (3,25%) between 61-70 years old, 7 (1,49%) between 71-80 years old and, lastly, 1 respondent (0,30%) of 81+ years old participated in the experiment. Regarding education, 164 (48,96%) respondents had a Bachelor degree, 74 (22,09%) a Secondary Vocational Education, 69 (20,60%) a Master degree, 19 (5,67%) Secondary school, 7 (2,09%) a PhD and 2 (0,60%) Elementary school.


3.4 Procedure

The respondents for the experiment were reached via the internet, since this makes it easier and faster to share the survey and results in a larger and more diverse population for the experiment (Finley & Penningroth, 2015) (the survey can be found in appendix 3).

This study uses a within-subjects design, wherein all respondents saw the same survey. The reason is that the research aimed to capture the thoughts of every participant regarding both conditions (human vs AI). On top of that, the article of Charness, Gneezy and Kuhn (2011) mentioned that the internal validity of a within-subjects design does not rely on random assignment. In addition, the article mentioned that a within-subjects design minimizes differences resulting from random noise. For these reasons, the within-subjects design was selected.

The first part of the survey consists of information about the purpose of the experiment and the rights of the respondents, including the fact that their data would be used anonymously. After this information, some demographic questions were asked, such as gender, education and age. The next part consists of a scenario wherein the respondents need to imagine that they are at home and having trouble connecting to the internet. They call their internet provider in order to solve the problem. This scenario was tested in earlier research by Liu & Sundar (2018) (appendix 1). The scenario is followed by a voice fragment of a robot and multiple 5-point Likert questions, including a manipulation check to verify that the respondent recognizes the voice fragment as a robot. For the same scenario a voice fragment of a human is used, with the same questions and the same manipulation check. This scenario is based on an earlier study by Hardy (2017) (appendix 2). After the first scenario there was a second scenario wherein the respondents needed to imagine that they are at home and not feeling well. They call their psychiatrist to receive emotional support. This scenario also included a voice fragment of a human with the same questions as in the first scenario, including a manipulation check. Next, the respondents were exposed to a voice fragment of a robot with, again, the same 5-point Likert questions and a manipulation check.

After the respondents submitted their surveys, they saw a debriefing message thanking them for their time and explaining the research.

3.5 Measures

3.5.1 Technology Acceptance

To measure TA, TAM has been used. TAM was chosen as it is a popular and widely used model (Davis, 1989). Besides, it can be characterized as simple (parsimony), able to predict acceptance and the use of technology (generalizability), and backed by data (verifiability) (Rauniar, Rawski, Yang & Johnson, 2013). As described, TAM consists of two main determinants: PEOU and PU (Davis, 1989). To measure TA, 12 questions were asked: 6 items for PEOU and 6 items for PU. These questions can be found in table 3. The responses were recorded on a 5-point Likert scale, ranging from 1 (Strongly disagree) to 5 (Strongly agree) (Abu-Dalbouh, 2013; Davis, 1989). The higher the score on this 12-item TAM scale, divided into 6 questions related to PEOU and 6 questions related to PU, the higher the customer's level of TA.


Table 3: TAM: 12 questions to measure technology acceptance

Perceived Usefulness
1. AI enables me to accomplish tasks more quickly
2. Using AI improves my job performance
3. Using AI increases my productivity
4. Using AI enhances my effectiveness on the job
5. Using AI makes it easier to do my job
6. Overall, I find AI useful in my job

Perceived ease of use
7. Learning to operate the AI is easy for me
8. I find it easy to get the AI to do what I want it to do
9. Usage of the AI is clear and understandable
10. I find it flexible to interact with
11. It would be easy for me to become skillful at using AI
12. Overall, I find the AI easy to use

(Abu-Dalbouh, 2013; Davis, 1989)
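Under the scoring rule described above, a respondent's TA score is the mean of the 12 Likert responses (6 PU items and 6 PEOU items). A minimal sketch with hypothetical answers (the thesis computed these scores in SPSS):

```python
def ta_score(pu_items, peou_items):
    """Overall TA: mean of the 12 Likert items (each scored 1-5)."""
    items = list(pu_items) + list(peou_items)
    if len(items) != 12:
        raise ValueError("expected 6 PU and 6 PEOU answers")
    return sum(items) / len(items)

# Hypothetical respondent: 6 PU answers, then 6 PEOU answers
score = ta_score([4, 4, 5, 3, 4, 4], [3, 4, 4, 3, 5, 4])
print(round(score, 2))  # 3.92
```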

3.5.2 Customer trust

Customer trust was measured with a 5-item scale (Turel, Yuan and Connelly, 2008). The 5 items were used to measure the difference in trust between the human service and the AI service. The items were measured on a 5-point Likert scale, from strongly disagree (1) to strongly agree (5).


Table 4: Trust: 5 questions to measure trust

1. The provided service is trustworthy.

2. I trust the provided service keeps my best interests in mind.

3. The provided service will keep promises it makes to me.

4. I believe in the information the provided service provides me.

5. The provided service wants to be known as one that keeps promises and commitments.

(Turel, Yuan and Connelly, 2008)

3.5.3 Control variables

When researching the relationships between the variables, control variables are needed to investigate whether other factors could have a significant effect. The control variables used in this research are gender and education.

The article of Khakurel, Penzenstadler, Porras, Knutas & Zhang (2018) mentioned that because males form a larger share of the technology field, and thereby have more experience with AI, males might have a preference for AI. The answer possibilities regarding gender were: Men or Women. The other control variable used in this research is education, measured with the following answer possibilities: Elementary school (Dutch: basisschool) (=1), Secondary school (Dutch: middelbare school) (=2), Secondary vocational education (Dutch: MBO) (=3), Bachelor degree (Dutch: HBO) (=4), Master degree (=5), PhD (=6), none (=7).

3.5.4 Manipulation checks

To validate whether the manipulation of the service types worked well, respondents were asked: How would you identify the last provided voice fragment? The participants could answer this question with the following answer possibilities: robotic voice (1), human voice (2) or other (3). The data showed that for the four voice fragments provided, most participants identified the intended voice. The manipulation check of the robotic voice fragments showed means of M=1.07 and M=1.21. The manipulation check of the human voice fragments showed means of M=2.03 and M=1.95. This indicates that the voice fragments were perceived as intended, meaning that the manipulation can be considered effective. Therefore, the data could be used to test the hypotheses.

No manipulation check regarding the task characteristics was included in the experiment, as there was a clear difference between the two tasks. Besides, an introduction text provided to the respondents clearly showed the difference between the tasks (please see appendix 3 for the survey).


4. Data Analysis

4.1 Preliminary analyses

4.1.1. Mean average per category

Each voice fragment was followed by 5 questions regarding customer trust. The answers to these 5 questions were combined to measure the overall customer trust per combination of task characteristic and service type. The outcomes can be found in table 5.
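The combination step described above, averaging the five trust items per respondent and then across respondents within a condition, can be sketched like this (the ratings are hypothetical, not the thesis data):

```python
from statistics import mean

def condition_mean(responses):
    """Average the 5 trust items per respondent, then across respondents."""
    return mean(mean(items) for items in responses)

# Hypothetical 5-item trust ratings (1-5) from three respondents in one condition
robotic_cognitive = [[3, 4, 3, 3, 4],
                     [4, 3, 3, 4, 3],
                     [2, 3, 3, 2, 3]]
print(round(condition_mean(robotic_cognitive), 2))  # 3.13
```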

Table 5: Combined means of customer trust

Service type & task characteristic    M     σ
Robotic & Cognitive                   3.28  0.99
Robotic & Social Intelligence         2.98  1.18
Human & Cognitive                     3.87  0.82
Human & Social Intelligence           3.72  0.99

σ = Standard Deviation

4.1.2 Reliability check

Sun and Han (2002) measured the internal consistency of the PEOU and PU items with Cronbach's Alpha (α = 0.956), which can be considered extremely reliable (Sun & Han, 2002). The present research likewise showed that the TAM scale has high reliability, with Cronbach's Alpha = 0.884 (see table 6 and appendix 4). The corrected item-total correlations showed that all items correlate well with the total scale score, as they are all above 0.30.

Besides, deleting any single item would not significantly affect the reliability.

Table 6: Cronbach's α of Technology Acceptance

Variable                  Cronbach's α
Technology Acceptance     0.884
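For reference, Cronbach's α can be computed directly from the item scores as (k / (k − 1)) · (1 − Σ item variances / variance of the total score). The sketch below uses made-up responses, not the study's actual TA data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                            # number of items
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Made-up 5-point responses: 6 respondents x 4 items
scores = [
    [4, 4, 3, 4],
    [2, 2, 2, 3],
    [5, 4, 5, 5],
    [3, 3, 3, 2],
    [4, 5, 4, 4],
    [1, 2, 1, 2],
]
print(round(cronbach_alpha(scores), 3))
```

Values above roughly 0.8, such as the 0.884 reported for the TA scale, are conventionally read as good internal consistency.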


4.1.3 Grouping Technology Acceptance

The answers to the 12 TA questions were combined to calculate an overall mean per participant. Based on this overall mean, participants were divided into a low TA group and a high TA group using a median split. The advantage of a median split is that the data are simplified by being transformed into a categorical variable (DeCoster, Galluci & Iselin, 2011). The data revealed that the median of the TA scores was 3.25. The low TA category included participants scoring lower than or equal to 3.25, while the high TA category included participants scoring higher than 3.25. This split allocated 174 respondents to the low TA category and 161 respondents to the high TA category. A consequence of splitting at 3.25 is that the low TA category also includes respondents who actually answered the TA questions slightly positively. In other words, the low TA category also contains slightly high TA respondents, which might bias the results.
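A median split of this kind can be reproduced in a few lines. The scores below are hypothetical; in the study the actual median was 3.25, with scores at or below the median assigned to the low TA group.

```python
import numpy as np

# Hypothetical overall TA means per participant (1-5 scale)
ta = np.array([2.50, 3.25, 3.10, 4.00, 3.75, 2.90, 3.30, 3.25])

median = np.median(ta)
# Scores <= median -> low TA; scores > median -> high TA
group = np.where(ta <= median, "low", "high")

print(median)
print(dict(zip(*np.unique(group, return_counts=True))))
```

Note that this is exactly where the limitation discussed above arises: participants scoring precisely at the median (here 3.25) end up in the low TA group despite a mildly positive attitude.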

Table 7: Median Low TA vs High TA

Variable        Outcome
Median Split    3.25
Low TA          174
High TA         161

N = 335

4.1.4 Grouping Age

As two of the hypotheses focused on age, participants were divided into two groups based on their age. By grouping the ages, the hypotheses regarding digital natives (born after 1980) and digital immigrants (born before 1980) could be tested. This resulted in a digital natives group of 212 participants, while the digital immigrants group
