
MSc in Business Administration – Digital Marketing Track

Ethical dilemmas in the customer journey: Algorithms and Humans

Student name: Na Song
Student number: 12918830
Date of submission: 22nd June 2021 (final)
Institution: ABS, UvA
EBEC approval number: 20210330110346
Thesis supervisor: Dr. Roland Tabor


Statement of Originality

This document is written by Na Song who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Table of Contents

Statement of Originality
List of Tables
Acknowledgements
Abstract
Introduction
Literature Review
Algorithmic bias and Human bias
Consumers’ attitudes towards AI and Humans
Ethical issues involvement of Consumers’ attitudes towards AI and Humans
Consumers’ attitudes towards unethical behaviors of the companies
Consumers’ attitudes towards Algorithmic bias and Human bias
Conceptual Framework
Data and Method
Research Design
Conditions in the experiment
Pretest 1
Pretest 2
Procedure and Measures
Sampling method
Data preparation
Reliability of Scales
Results
Descriptive of the Sample
Descriptive of the Group
Manipulation Check
Correlations
Hypotheses testing
Hypotheses 1, 2 and 3: Consumers’ satisfaction on AI vs. Humans
Hypotheses H4a and H4b: The interaction effect of ethical issues involvement
Hypotheses H5a and H5b: The comparison effect of consumers’ attitudes
Discussion
The consumers’ attitudes towards Human performance and AI performance
The consumers’ attitudes towards unethical behaviors caused by Human bias and Algorithmic bias
The consumers’ likelihood to forgive unethical behaviors caused by Human bias and Algorithmic bias
Additional findings
Managerial implications
Limitation and Future Research Directions
Conclusion
References
Appendices
Appendix 1: Survey for Pretest 1
Appendix 2: Survey for Pretest 2 and Main experiment
Appendix 3: Descriptive statistics and Results of the additional analyses


List of Tables

Table 1. Conditions in the experiment
Table 2. Reliability checks for pretest 2
Table 3. Scenarios in the experiment
Table 4. Results of Cronbach’s Alpha analysis
Table 5. Number of participants per condition
Table 6. Distribution of participants’ gender among the four conditions
Table 7. Distribution of participants’ attitude towards AI among the four conditions
Table 8. Distribution of participants’ ages among the sample and four conditions
Table 9. Means and standard deviations of perceived trustworthiness for the AI vs. Humans groups
Table 10. Means, standard deviations, correlations
Table 11. Descriptive statistics, one-way ANOVA: the relationship between consumers’ satisfaction on AI vs. Humans
Table 12. Results of one-way ANOVA: the relationship between consumers’ satisfaction on AI vs. Humans
Table 13. Descriptive statistics, factorial ANOVA: the interaction effect between AI vs. Humans and ethical issues involvement on consumers’ attitude towards the performance
Table 14. Results of factorial ANOVA: the interaction effect between AI vs. Humans and ethical issues involvement on consumers’ attitude towards the performance
Table 15. Descriptive statistics, one-way ANOVA: the comparison effect of consumers’ attitudes on ethical issues involvement
Table 16. Results of one-way ANOVA: the comparison effect of consumers’ attitudes on ethical issues involvement
Table 17. Descriptive statistics, one-way ANOVA: the comparison effect of consumers’ forgiveness of unethical behaviours caused by Humans and by AI
Table 18. Results of one-way ANOVA: the comparison effect of consumers’ forgiveness of unethical behaviours caused by Humans and by AI
Table 19. Descriptive statistics, one-way ANOVA: the relationship between consumers’ perceived trustworthiness of AI vs. Humans
Table 20. Results of one-way ANOVA: the relationship between consumers’ perceived trustworthiness of AI vs. Humans
Table 21. Descriptive statistics, factorial ANOVA: the interaction effect of gender and AI vs. Humans on consumers’ attitudes towards perceived trustworthiness (no ethical issues involved)
Table 22. Results of factorial ANOVA: the interaction effect of gender and AI vs. Humans on consumers’ attitudes towards perceived trustworthiness (no ethical issues involved)
Table 23. Descriptive statistics, factorial ANOVA: the interaction effect of participants’ attitudes towards AI and AI vs. Humans on perceived trustworthiness (no ethical issues involved)
Table 24. Results of factorial ANOVA: the interaction effect of participants’ attitudes towards AI and AI vs. Humans on perceived trustworthiness (no ethical issues involved)
Table 25. Descriptive statistics, factorial ANOVA: the interaction effect between AI vs. Humans and ethical issues involvement on consumers’ perceived trustworthiness
Table 26. Results of factorial ANOVA: the interaction effect between AI vs. Humans and ethical issues involvement on consumers’ perceived trustworthiness


Acknowledgements

This research is the final part of my Master’s program in Business Administration – Digital Marketing Track at the University of Amsterdam. It examines consumers’ general attitudes towards Artificial Intelligence (AI) recruiting tools and Human recruiters; moreover, it investigates consumers’ attitudes towards unethical behaviour (e.g., gender bias) caused by Algorithmic bias and Human bias.

It would have been impossible to finish this research without the help of several people whom I would like to thank. First of all, I would like to express my greatest gratitude to my thesis supervisor, Dr. Roland Tabor, who gave me professional advice throughout the writing of this thesis. His scientific rigour helped me explore the boundaries of my research, and his experience in the marketing field will guide my future career.

Moreover, I would like to thank all the professors in the Faculty of Economics and Business at UvA; thanks to their help and teaching, I gained a great deal of academic marketing knowledge. I would also like to thank the second reader of my thesis for taking the time and effort to read it. I hope you will like it.

Lastly, I would like to thank my boyfriend, Mr. B. Brouwer, for his support during my Master’s study and during this pandemic period; all of my friends in Amsterdam who have accompanied me, with whom I have had a great time; and finally my dear family, who always support the decisions I make: thank you for believing in me no matter what happens.


Abstract

The use of Artificial Intelligence (AI) has penetrated many fields of business. Algorithms have become a common tool that supports HR during the recruiting process, saving considerable time and energy. Yet the popularity of AI has raised profound concerns about algorithmic bias, such as Amazon’s recruiting tool that discriminated against women. This research examines to what extent consumers’ attitudes differ towards AI performance and Human performance, and to what extent consumers’ attitudes differ towards unethical behaviours caused by Human bias and Algorithmic bias. To answer these questions, we designed an online experiment using the data collection tool Qualtrics, in which participants were randomly assigned to one of eight scenarios and then asked to fill out a survey. We tested all hypotheses and found the following: (1) most participants believe that algorithmic bias is based on biased data input, which originates from human bias; (2) people tend to rate a human recruiter’s performance higher than an AI recruiting tool’s performance; (3) likewise, people generally trust human recruiters more than AI recruiting tools; (4) however, people trust human recruiters less than AI recruiting tools when gender bias occurs during the recruiting process; and (5) there is no significant difference between consumers’ forgiveness of AI bias and of Human bias. These results have managerial implications: AI designers should guard against unconscious human bias when programming algorithms, and HR departments should carefully consider the outcomes when using AI in the recruiting process.


Introduction

‘AI-systems deliver biased results. Search-engine technology is not neutral as it processes big data and prioritizes results with the most clicks relying both on user preferences and location’, as indicated by UNESCO (Artificial Intelligence: Examples of Ethical Dilemmas, 2020). Does Artificial Intelligence (AI) hold the same biases as humans? To answer that, we must look at the origin of AI technology.

In the first half of the 20th century, the world knew science fiction built around the concept of ‘artificially intelligent robots’. From 1957 to 1974, AI flourished (Anyoha, 2017). Nowadays, AI refers to machines that are programmed to think and act like humans by simulating human intelligence (How Artificial Intelligence Works, 2021). Pioneering literature from the 1950s indicated that simple algorithms could perform tasks such as diagnosing medical and psychological illness very well (Dawes, Faust, and Meehl 1989; Grove et al. 2000, cited by Castelo et al., 2019). With the popularity of AI use, unethical behaviors caused by algorithmic bias have been noticed and have caused controversy in the marketing field. Since AI is based on human thinking, would human bias eventually become AI bias, thereby producing the same unethical behaviors? And what would be consumers’ attitudes towards those unethical behaviors?

In the modern healthcare system, clinical Artificial Intelligence can help diagnose many health problems, determine payment methods, and more (Bennett & Hauser, 2013). Artificial intelligence has also been used to predict high-impact weather events such as severe thunderstorms, tornadoes, and hurricanes (McGovern et al., 2017). Firms use AI to predict consumers’ behavior by gathering and analyzing data, thereby achieving deep market engagement (Farrokhi et al., 2020).

In addition, algorithms trained on data from the Web are often used in recommendation and personalization systems. However, data bias, which appears in different forms, can cause algorithmic bias and thereby lead to unethical AI behavior (Baeza-Yates, 2016). Algorithms also play an important role in the functioning of autonomous systems; as the use of AI-based Autonomous Vehicles (AVs) increases all over the world, so does the possibility of algorithmic bias (Danks & London, 2017).

The use of algorithms as the basis of decision-making involves safety risks and inevitable discrimination (Lim & Taeihagh, 2019). A famous example is the ‘trolley dilemma’ discussed by Bazerman & Sezer (2016): a thought experiment around the ethical dilemma of whether to sacrifice one individual in order to save five people. How the ‘trolley dilemma’ is settled for self-driving cars could become a source of algorithmic bias (Bazerman & Sezer, 2016). Besides, Lambrecht & Tucker (2019) studied gender-based discrimination in the display of STEM career ads, which promote job opportunities in fields such as technological innovation, engineering, and aviation.

Also, in 2019, algorithmic bias was for the first time defined in terms of AI and health systems as ‘the instances when the application of an algorithm compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, and sexual orientation to amplify them and adversely impact inequities in health systems’ (Panch et al., 2019).

Many examples of algorithmic bias exist. In 2016, Microsoft launched the Twitter bot Tay after the success of the chatbot ‘Xiaoice’ in China in 2014. However, Tay, being essentially a robot parrot with an internet connection, taught itself all sorts of misogynistic, racist opinions people tweeted and started to repeat these sentiments back to users (Vincent, 2016). In October 2020, BBC News reported that the UK passport photo checker shows bias against dark-skinned women: when dark-skinned women submit their passport photos online, the chance of rejection is more than twice that of lighter-skinned men (Ahmed, 2020). Similarly, a New Zealand passport robot told applicants of Asian descent to open their eyes; a New Zealand man of Asian descent had his passport photo rejected when the facial recognition system mistakenly thought that his eyes were closed (Staff, 2016). Another example is PredPol’s drug crime prediction algorithm, in use since 2012 in many large cities in the USA. PredPol was trained on biased data reflecting past housing segregation and past policing. Because it is based on crime reports, police officers may focus on drug arrests around public housing while ignoring drug use on college campuses (Data-Informed Predictive Policing Was Heralded As Less Biased. Is It? – The Markup, 2020).

However, such unethical behaviors stem not only from algorithmic bias but also from human bias.

Even though human equity has been improving for a long time, discrimination (for example by gender, race, age, religion, or sexual orientation) still happens all over the world. Some discrimination is based on unconscious bias, which refers to social stereotypes and prejudice about certain groups of people (Conner, 2020). Especially in recruiting, those biases remain a real issue, even though recruitment discrimination is illegal (Discrimination in Recruitment, 2020).

Past research indicated that women are 30% less likely to be hired than men (Staff, 2019). Gurchiek (2019) reported that multiple women were rejected from jobs because their names were perceived as ‘ghetto names’. Such discrimination based on human bias has negative effects and lowers applicants’ self-esteem. Although the lowered self-esteem from this kind of discrimination may not make people suffer all the time, individuals automatically keep reliving those traumas (Goldman et al., 2006). What, then, if recruitment discrimination happens with AI?

Amazon’s AI recruiting tool that showed bias against women is one of the best-known examples of algorithmic bias (Dastin, 2018). Since 2014, an Amazon team had been building a recruiting tool that reviewed candidates’ resumes to select top candidates. By 2015, however, the company realized that its new algorithmic system did not rank candidates by a gender-neutral standard. This was because the designers had trained the recruiting model on resumes submitted over the previous 10 years. Most of those candidates were men, so the recruiting model taught itself that male candidates were preferable (Dastin, 2018).

Previous research has only examined consumers’ attitudes towards task performance by humans and by AI in general: people essentially hold algorithms to a higher standard than they hold humans, expecting algorithms to meet performance goals that humans themselves may not (Dietvorst et al., 2015). Therefore, the research question that this thesis aims to answer is: to what extent do consumers’ attitudes differ between unethical behaviors caused by humans and by AI in the context of the recruiting process?

The purpose of this research is to investigate why we expect a higher moral standard of AI, how consumers’ attitudes differ across types of unethical behavior (such as gender, race, or age bias), and the possible ways unethical behavior arises from algorithms. Based on this research, managers will gain empirical evidence of how consumers’ attitudes towards AI and human beings differ when unethical behavior is involved and can carefully consider in which situations AI should be used in business; marketers will understand the advantages and risks of using AI, combining an understanding of artificial intelligence with human insight and finding an optimal, ethically grounded way to use it.


Literature review

Algorithmic bias and human bias

Algorithms outperform humans in many fields, such as predicting recidivism, predicting heart attacks, and even writing text. However, algorithms are not flawless (Lin et al., 2020; Weng et al., 2017). Algorithmic bias has been causing a series of problems in the digital age. Its roots lie in machine learning and deep learning (Alake, 2020). Deep learning algorithms are built from large amounts of data: the cleaner the data an algorithm uses, the better the performance it can achieve. However, missing and inaccurate data can create blind spots in the training process, which turn into algorithmic bias (Dickson, 2018). For example, a Boston University researcher and his Microsoft colleagues found that the computer itself can become sexist (Boston University, 2019); job titles such as ‘programmer’, ‘engineer’, and ‘doctor’ mostly return pictures of men in Google’s search engine, whereas words like ‘homemaker’ and ‘nurse’ are normally associated with women (Dickson, 2018).

The human brain is powerful but has limitations. People often use rules of thumb when trying to figure something out and make a quick decision. Cognitive bias then occurs due to this fast information processing, feeding racism, discrimination, and stereotypes (How Cognitive Biases Influence How You Think and Act, 2020).

Krieger (1995) indicated that much of the bias in employment decisions came not from discriminatory motivation but from unconscious judgment errors rooted in normal human cognitive functioning. Unconscious bias can cause serious problems in our daily life because our brain often misreads the signals from our unconscious judgement and shifts into categorized social labeling. As a result, people tend to hold prejudice and discriminate against others before getting to know them. In the recruiting process, candidates may get rejected before the recruiter even reads their resumes (Unconscious Bias, 2020).


Human unconscious bias is no coincidence, and neither is algorithmic bias. The deep learning of AI systems cannot guarantee that the output is unaffected by human bias, especially unconscious bias (Human Bias and Discrimination in AI Systems, 2019). The Amazon AI recruiting tool’s bias against women shows that the training data used for deep learning were imbalanced. This algorithmic bias occurred due to human unconscious bias: one stereotype is that men outperform women, and this stereotype influenced the deep learning of the AI system, resulting in the Amazon recruiting tool’s preference for men (Dastin, 2018). The training data of an AI system also reflect past discrimination. For example, if women’s past job applications for ‘engineering’ or ‘programming’ were rejected more frequently than men’s based on gender, then any algorithmic model built on these training data may show the same kind of discrimination, even if the designers did not intend to add that bias (Human Bias and Discrimination in AI Systems, 2019).

In many cases, AI can reduce humans’ subjective judgment of data, because machine learning systems learn to improve their predictive accuracy, thereby outperforming human beings. For example, Jon Kleinberg and others indicated that algorithms could reduce racial disparities in the criminal justice system. However, much research shows that Artificial Intelligence models can embed the prejudices of people and society and deploy them at a large scale, as algorithms are designed on the basis of training data (Silberg & Manyika, 2020). Saligrama indicated that machine learning algorithms are based on the documents and data we put in; whatever bias exists in our daily life is embedded into the AI system (Boston University, 2019).

All in all, these findings show that algorithms themselves are not biased: algorithmic bias occurs because of the training data that human beings put in. Therefore, the following hypothesis is formulated:

H1: Algorithmic bias is based on biased data input, which originates from human bias.

Consumers’ attitudes towards AI and Humans

In 2019, Statista surveyed consumers’ attitudes towards AI. The research showed that business buyers tend to have a more positive attitude than consumers: for example, 70% of business buyers said they trust companies to use AI in a way that benefits them, whereas only 48% of consumers felt the same way (Statista, 2020). Indeed, AI has become increasingly popular in many business fields, such as automation, data analysis, and streamlining operations. In 2018, Harvard Business Review predicted that AI would improve performance in services such as manufacturing and supply chain management (Development, 2020). Thus, what consumers think of AI being used in daily life has become a central topic.

Current research has shown that consumers may believe that other customers are willing to accept AI devices because they can benefit from using them, for example through improved performance effectiveness. Moreover, based on social identity theory, if one customer believes in AI devices, other people from the in-group tend to adopt a positive attitude towards using them (Gursoy et al., 2019). In most countries, more and more household members are willing to make investment decisions based on advice from algorithms; in particular, men were more willing than women to trust investment advice from computer programs (Waliszewski & Warchlewska, 2020). AI has also contributed to tourism management: Martin et al. (2020) indicated that anthropomorphic tendency and cognition were positively associated with attitudes towards AI-curated reviews (AAICR). Moreover, a study of college students at two universities in Timisoara showed that many students held positive attitudes towards the use of AI, believing that the development and sustainability of AI technology will make society as a whole better (Gherheș & Obrad, 2018).

However, with the development of Artificial Intelligence, the relationship between humans and AI has changed, and trust has become the uncertain factor in this relationship. Pitardi & Marriott (2021) investigated the challenges of building trust with Artificial Intelligence voice-based assistants (VAs), even though VAs offer retailers and consumers many opportunities. Their study indicated that, based on social entities and human social rules, the interaction between consumers and VAs can support usage from a para-social perspective. Although AI has been accepted in many fields, the use of medical Artificial Intelligence has caused great controversy. Research has shown that patients are more likely to hold negative attitudes if a provider of medical services is automated rather than human, regardless of performance, because they think AI providers lack unique emotional characteristics (Longoni et al., 2019). Other research discussed both advantages and disadvantages of automation-based AI technology, such as smart homes and self-driving cars: on the one hand, algorithms help customers make easier choices, thereby improving performance effectiveness; on the other hand, they may lower consumers’ sense of autonomy, undermining human independence (André et al., 2017).

Ethical and emotional considerations aside, research indicates that algorithms outperform human beings in predicting recidivism and heart attacks (Lin et al., 2020; Weng et al., 2017). Furthermore, algorithms might be just as good at writing our assignments (Köbis & Mossink, 2021). Li et al. (2020) investigated online shoppers’ attitudes towards AI customer service and found that 71.5% of customers accepted, or at least did not refuse, AI customer service. Taking the above into consideration, the following hypotheses are formulated:

H2: There is a significant difference in consumers’ attitudes towards AI performance and Human performance.

H3: Specifically, consumers trust AI to perform tasks better than humans do.

Ethical issues involvement of Consumers’ attitudes towards AI and Humans


With the popularity of AI today, consumers have started to worry about the biggest challenge in developing AI: ethical issues. Even though the ethical implications of machine learning and automation were already mentioned in the 1950s (Samuel, 1959), the growing use of AI has generated new questions. What are the most important ethical concerns raised by the increasing use of AI? How do we take AI ethics into account in business and marketing?

One of the main ethical concerns is whether AI replaces human workers. While AI has proved to be more effective than humans at performing a task (Castelo et al., 2019), it is not a job killer; what AI does is help digitalize the transformation of business processes. Research has shown that companies that adopted artificial intelligence not only accelerated their Return on Investment (ROI) but also created a more efficient and warm work environment. In particular, AI helps humans do a better job instead of replacing them (Walch, 2019).

Another ethical concern is whether artificial intelligence has rights. In 2018, an autonomous car killed a pedestrian; it was believed to be the first time that a machine, self-driving technology, had killed a human, and people were outraged (Wakabayashi, 2018). Yet around 3,700 people are killed globally every day in car accidents (Road Traffic Injuries and Deaths—A Global Problem, 2020). Why were people outraged only by this one accident? Because we may never accept a machine killing a human. If we do not have regulations regarding AI, this scenario will happen on the road again and again (Walch, 2019). Therefore, putting ethical laws and regulations in place is necessary.

Many big companies, such as Google, Facebook, and Amazon, have already set up ethics boards to monitor their AI implementation (Lufkin, 2017). Companies take such measures because AI ethics, or ethics in general, lacks mechanisms to enforce its normative regulation. Thus, the membership of the ‘Partnership on AI’, including Amazon, Apple, Baidu, Facebook, Google, IBM, and Intel, can be called upon whenever there is a serious legal issue in terms of AI ethics (Hagendorff, 2020). Besides, unavoidable normative regulations have been included in decision-making systems in the field of autonomous vehicles (Lo Piano, 2020).

AI ethics is considered not only by companies but also by consumers. In 2019, a new report on consumers’ attitudes towards AI indicated that only 25% of consumers would trust a decision about their bank loan made by AI over one made by a person. Beyond the regulatory process, what matters more to consumers is whether the AI system would act ethically (Cannon, 2019). This relates to the fact that consumers trust and rely on algorithms less for tasks that seem subjective (vs. objective) in nature, because they think that algorithms do not have feelings (Castelo et al., 2019).

In fact, even though algorithms outperform humans, consumers often reject them; this phenomenon has been called ‘algorithm aversion’ (Dietvorst et al., 2014). Further research by Dietvorst and his colleague suggests that people may not truly be averse to algorithms but rather prefer exceptional accuracy: people reject algorithms in uncertain decision domains because of diminishing sensitivity to forecasting error (Dietvorst & Bharti, 2020). Artificial intelligence ethics happens to be such an uncertain domain in decision-making.

While algorithms outperform humans (Lin et al., 2020; Weng et al., 2017; Gebru et al., 2017), ethical concerns seem to moderate consumers’ attitudes. Current research has shown that most people believe AI does not exercise morality or empathy: only 12% of consumers thought that AI could tell the difference between good and evil, whereas over half (56%) believed that machines cannot behave morally (Cannon, 2019). From these findings, the following hypotheses are constructed:

H4a: Ethics involvement moderates consumers’ attitudes towards AI vs. Humans.

H4b: In particular, when ethical issues are involved, consumers trust AI less than humans.


Consumers’ attitude towards unethical behaviors of the companies

Workplace ethics is the set of standards, values, and moral rules that have to be followed in the workplace. Proper ethics helps employees communicate effectively, develop professional relationships, and take responsibility (Blog, 2019). However, many unethical behaviors based on human bias have been exposed at many companies, harming brand image and reputation. Thus, what is consumers’ attitude towards those unethical behaviors of companies?

As one of the best-known brands, Coca-Cola has struggled with ethical crises. Since the 1990s, Coca-Cola has been accused of racial discrimination, environmental pollution, depletion of natural resources, and more (Magill, 2006). Its biggest scandal was the child labor issue: in 2004, Coca-Cola’s sugar supplier in El Salvador was exposed by Human Rights Watch for using child labor to harvest sugar cane (El Salvador: Child Labor on Sugar Plantations - Business & Human Rights Resource Centre, 2004). This incident caused global controversy. Mark Thomas published his book ‘Belching Out the Devil: Global Adventures with Coca-Cola’ in 2009, whose ‘funny’ cover picture and bold red ‘devil’ left a strong impression (Williams, 2013). In 2011, Sussex University banned all Coca-Cola products due to the company’s unethical practices, and other campuses followed (Frith, 2011). Camps-Cura (2016) indicated that gender bias is one of the most important factors explaining why children continue to work in some developing countries. Stenberg (2019) also called on people to boycott Coca-Cola for four reasons: plastic pollution worldwide, extreme animal cruelty towards dairy cows at Fairlife, donations to politicians supporting abortion restrictions, and misleading information about the causes of food-related diseases.

Nestle, one of the world’s largest food and beverage companies, has been boycotted for the unethical behavior of promoting baby milk formula to mothers in developing countries and spreading misleading information (Five Unethical Companies, 2020). Moreover, other unethical scandals have continually been exposed at Nestle. For example, Nestle took clean drinking water from areas where local people desperately need it and was implicated in child labor and human trafficking (O’Callaghan, 2019); Nestle also uses unsustainable palm oil and sources ingredients at the lowest possible prices for its innovative technology, regardless of whether those ingredients are healthy (Wolf, 2015). People seem to have lost faith in Nestle after these unethical behaviours. In 2015, the Indian government sued Nestle for ‘unfair trade practices’ related to its Maggi noodle products (BBC News, 2015).

Racial discrimination has been a serious issue in the workplace, and many large companies have been sued for it, such as Walmart Inc., Abercrombie & Fitch, and General Electric (5 Big Companies Sued for Racial Discrimination, 2020). In 2003, Abercrombie & Fitch was accused of discrimination against African-American, Latino, and Asian American applicants and employees; even when such people were hired, in rare cases, they were given positions where the public could not see them (Legal Organization Fighting., 2018). In 2005, Marc Thomas, head of General Electric’s aviation materials, accused the company of racism against African-Americans and stated that they were not allowed to ask any questions (BBC NEWS | Business | GE Accused of Race Discrimination, 2005). Likewise, in 2008, a group of 4,500 truck drivers sued Walmart for racial discrimination: they had applied for jobs as truck drivers but were rejected. In particular, black applicants had to provide a good credit history when they applied, a requirement that did not apply to white applicants (Schlein, 2019). These unethical behaviors based on human bias, particularly racial bias, caused many protests and made customers lose faith in those brands. For example, a mom in suburban Washington, D.C., packed up her kids’ Abercrombie clothes and returned them to headquarters, and a group of students made a video condemning Abercrombie (Abcarian, 2013).

Panchanadeswaran & Dawson (2011) indicated that high levels of discrimination significantly reduce people’s self-esteem and subsequently influence their mental health. These examples show that consumers are likely to lose faith in, and boycott, companies once unethical behaviors occur. To test this supposition, the following hypothesis is formulated:

H5a: Consumers have a negative attitude towards companies when unethical behaviors occur, compared with ethical behaviors.

Consumers’ attitudes towards Algorithmic bias and Human bias

Artificial Intelligence (AI), as a futuristic advanced technology, has permeated almost every industry and become part of humans’ daily life. Consumers use AI tools to solve basic problems without realizing it. However, consumers’ attitude towards the use of AI remains a debated topic, especially when it comes to algorithmic bias (Hazlegreaves, 2021). While algorithms are created to solve problems and make better decisions, algorithmic bias, as their by-product, causes a lot of trouble and harms human development. Research has indicated that algorithmic bias has harmed racial groups and caused more serious discrimination, both implicitly and explicitly (Turner Lee, 2018).

Besides causing discrimination, algorithmic bias also shows in lowering the visibility of social media content. Facebook and Google have become the most important channels through which populations worldwide get information. To deal with the huge amount of news on social media websites, these companies started to use filtering algorithms based on personal information and preferences. Humans and these algorithms mutually influence each other in determining the specific content shown to users (Bozdag, 2013). Likewise, Abul-Fottouh et al. (2020) examined the effect of YouTube’s algorithmic recommendation systems on anti-vaccine videos and found that its algorithm-based demonetization policy for harmful content may have lowered the visibility of anti-vaccine content.

Algorithms play a key role in the marketing field, and algorithmic bias, as their by-product, is inevitable. Thus, what would consumers’ attitudes be when algorithms fail?


Back in 1988, St. George’s Hospital Medical School was found guilty of racial and sexual discrimination due to algorithmic bias. The school used algorithms to select candidates, but women and people with non-European names were rejected because of the human bias embedded in the algorithms (Garcia, 2016). In 2018, the Pew Research Center conducted research on public attitudes towards algorithms and found that the public shows great concern about using algorithms to make decisions: most Americans found it unacceptable for AI to make decisions that might influence people’s real lives and were concerned about algorithmic bias (Smith, 2018).

It is well known that algorithmic systems risk leading humans to make biased decisions. However, most algorithmic systems fail because humans embed their unconscious bias into the system (Review into Bias in Algorithmic Decision-Making, 2021). Implicit tests have shown that biases we may not notice or realize have started to harm our society. For example, in Florida, African-American defendants were mislabeled ‘high risk’ by criminal justice algorithms at twice the rate at which white defendants were mislabeled (What Do We Do About the Biases in AI?, 2019). This may relate to the fact that humans hold racial bias: even nowadays, in the U.S. labor market, African American workers still face obstacles to finding a job, let alone a good one (African Americans Face Systematic Obstacles to Getting Good Jobs, 2019).

Past research has investigated the difference between algorithmic and human performance (Zerilli et al., 2018; Huang & Rust, 2018). Castelo et al. (2019) indicated that even though AI is programmed to think and act like humans, people trust algorithms less for subjective (vs. objective) tasks. Huang & Rust (2018) indicated that AI is increasingly improving service by performing different tasks and creating a large number of innovations, thereby threatening human jobs. Nevertheless, humans still hold higher standards and expectations of AI, and people lose confidence in algorithms more quickly than in human forecasters after seeing them make the same mistake (Dietvorst et al., 2015). From these findings, the following hypothesis is constructed:

H5b: Consumers tend to be more tolerant when unethical behaviors are caused by humans than by AI.


Conceptual model

The proposed conceptual framework uses Artificial Intelligence (AI) vs. Humans as the independent variable (IV) and consumers’ attitude as the dependent variable (DV). In addition, ethics involvement moderates the relationship between AI and consumers’ attitude and the relationship between Humans and consumers’ attitude. In particular, this research investigates and compares consumers’ attitudes towards AI and towards Humans when performing a task, and, further, to what extent ethics involvement changes consumers’ attitudes towards AI and Humans.

Figure 1. Conceptual Model

Main effect:

H1: Algorithmic bias is based on biased data input, which originates from human bias.

H2: There is a significant difference in consumers’ attitudes towards AI performance and Human performance.

H3: Consumers trust AI to perform tasks better than humans do.

Moderation effect:

H4a: Ethics involvement moderates consumers’ attitudes towards AI vs. Humans.


H4b: In particular, when ethical issues are involved, consumers trust AI less than humans.

Comparison effect:

H5a: Consumers have a negative attitude when unethical behaviors occur, compared with ethical behaviors.

H5b: Consumers tend to be more tolerant when unethical behaviors are caused by humans than by AI.


Data and Method

Research Design

To test the hypotheses in this research, an online experimental vignette study was conducted using the data collection tool Qualtrics. The independent variable was Humans vs. Artificial Intelligence. The dependent variable was consumers’ attitudes, operationalized by evaluating consumers’ attitudes towards AI vs. Humans. Additionally, as Ouchchy et al. (2020) observed, AI ethics is mentioned more and more in the development of AI systems and on social media. Thus, ethical issues involvement was added as a moderator variable, and this study measured the change in consumers’ attitudes towards AI and Humans when ethical issues were involved.

Conditions in the experiment

For the manipulation of the independent variable, there were two conditions: an AI condition and a Human condition. In the AI condition, an AI recruiting tool was introduced in the recruiting process; in the Human condition, participants experienced the traditional recruiting process with a Human recruiter.

To test the further hypotheses, ethics involvement (ethical behaviors vs. unethical behaviors) was used as the moderator variable. Thus, a 2x2 between-subjects design was conducted, consisting of four condition groups:

Table 1. Conditions in the experiment

Condition   Independent variable   Moderator (Ethical issues involvement)
1           AI recruiting tool     No
2           AI recruiting tool     Yes
3           Human recruiter        No
4           Human recruiter        Yes

Stimuli

Out of a number of possible unethical behaviors, gender bias was selected for the design of the experimental stimuli. Firstly, gender bias is still a serious issue in the workplace: some recruiters prefer male candidates to female ones, even if that is an invisible requirement (Braddy et al., 2019). Secondly, compared with racial bias, the effect of gender bias on consumers’ attitudes during the recruiting process was easy to define, since participants identify as either male or female. Multiple pretests were conducted to test the manipulations of the AI and Human scenarios; the methodology for collecting the stimuli data is described below.

Pretests

To pretest the operationalization of the manipulations, two pretests were conducted. Both used the online data collection tool Qualtrics, and participants were selected randomly through different online platforms. Pretest 1 consisted of 36 participants and took place on 22-26 March; pretest 2 consisted of 56 participants and took place on 18-22 April. During pretest 1, a few flaws in the scenario manipulations were found, and these were fixed for pretest 2. A detailed description of both pretests follows; the full surveys are included in Appendices 1 and 2.

Pretest 1

The purpose of pretest 1 was to test the readability and understandability of the materials and messages of the four different scenarios. To measure the understandability of the scenarios, a 5-point Likert scale was used for Q1 (1: strongly disagree; 2: somewhat disagree; 3: neither disagree nor agree; 4: somewhat agree; 5: strongly agree). Among the 36 participants (N=36), 83.3% chose either strongly agree or somewhat agree (N1=30), and the mean understandability of the scenario materials was 4.11 (M=4.11). Thus, the scenario messages were deemed understandable for this research. However, all four scenarios were designed around gender bias against women, so male participants were asked to pretend to be female during the scenario. Considering that the results might differ when male participants faced gender bias against themselves, the scenarios were improved for pretest 2.

Pretest 2

The objective of the second pretest was to test the reliability of the study. Firstly, four additional scenarios for male participants were added, designed from the initial female scenarios; the only difference was that male participants faced gender bias against male candidates when applying for a nursing job. This improved on pretest 1 because male participants no longer had to pretend to be female and face gender bias against women. Thus, all participants were assigned to scenarios based on their gender (male vs. female). Secondly, to simplify filling out the survey, the slider questions were changed to matrix-table questions. The effect of both adaptations on reliability was checked in pretest 2. Reliability is the extent to which data collection tools or analysis procedures yield consistent results (Saunders et al., 2009). Reliability checks in this pretest were run for satisfaction with the recruiting process, performance, trustworthiness, and AI acceptance. Cronbach’s alpha represents the internal consistency of a scale; Table 2 shows that the Cronbach’s alphas of the three constructs were 0.893, 0.798, and 0.878 respectively (all > 0.7), indicating a high level of internal consistency for this specific sample of 56. Therefore, the manipulations from pretest 2 could be used in the official experiment to gather further data.
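For reference, this is the standard definition of Cronbach’s alpha (a general formula, not taken from the thesis) for a scale of k items, where sigma_i^2 is the variance of item i and sigma_t^2 is the variance of participants’ total (summed) scores:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_t^{2}}\right)
\]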

Table 2. Reliability Checks for pretest 2

Construct                 Cronbach’s Alpha   N of items
Consumers’ satisfaction   0.893              3
Performance               0.798              3
Trustworthiness           0.878              3

Procedure and measures

For this research, eight scenarios were designed: four for female participants and four for male participants. The eight scenarios were:

1. No unethical behavior occurred during the CV checking stage by the AI recruiting tool (for male participants).

2. Unethical behavior occurred during the CV checking stage by the AI recruiting tool (for male participants).

3. No unethical behavior occurred during the CV checking stage by a Human recruiter (for male participants).

4. Unethical behavior occurred during the CV checking stage by a Human recruiter (for male participants).

5. No unethical behavior occurred during the CV checking stage by the AI recruiting tool (for female participants).

6. Unethical behavior occurred during the CV checking stage by the AI recruiting tool (for female participants).

7. No unethical behavior occurred during the CV checking stage by a Human recruiter (for female participants).

8. Unethical behavior occurred during the CV checking stage by a Human recruiter (for female participants).


After answering the self-identified gender question, female and male participants were randomly assigned to one of their four respective scenarios. For male participants, the unethical-behavior scenarios involved gender bias against men engaged in nursing study/work; for female participants, they involved gender bias against women engaged in mechanical study/work. Participants who chose not to disclose their gender were redirected to the end of the survey; see Table 3 and the assignment sketch below it.

Table 3. Scenarios in the experiment

Gender                  Scenario number   Scenario
Male identification     1                 No unethical behavior x AI recruiting tool
                        2                 Unethical behavior x AI recruiting tool
                        3                 No unethical behavior x Human recruiter
                        4                 Unethical behavior x Human recruiter
Female identification   5                 No unethical behavior x AI recruiting tool
                        6                 Unethical behavior x AI recruiting tool
                        7                 No unethical behavior x Human recruiter
                        8                 Unethical behavior x Human recruiter
Prefer not to say       -                 End of the survey
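As an illustration of this branching logic, here is a hypothetical Python re-implementation (the actual flow was configured in Qualtrics; the function name is invented for illustration):

```python
import random
from typing import Optional

def assign_scenario(gender: str) -> Optional[int]:
    """Randomly assign a participant to a scenario following Table 3:
    males get scenarios 1-4, females get scenarios 5-8, and participants
    who prefer not to disclose their gender are sent to the end of the survey."""
    if gender == "male":
        return random.choice([1, 2, 3, 4])
    if gender == "female":
        return random.choice([5, 6, 7, 8])
    return None  # 'prefer not to say': survey ends

print(assign_scenario("female"))  # e.g., 6
```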

Participants who selected either male or female were asked to fill out the questionnaire after reading the specific scenario they were assigned to; the questionnaire differed depending on that scenario. Firstly, participants were asked which job type they saw in the scenario. This checked whether participants had read the scenario carefully and ensured that the subsequent data were valid. Secondly, consumers’ attitudes towards the recruiting process, performance, trustworthiness, and AI acceptance were evaluated in the questionnaire. In addition, participants assigned to an unethical-behavior (e.g., gender bias) scenario were asked to assess their forgiveness of the AI recruiting tool/Human recruiter. Lastly, participants’ general attitudes towards AI technology and the future were assessed. At the end of the survey, participants were asked to fill out demographic information, such as age. For the official survey, see Appendix 2 (the same survey as for pretest 2).

Independent variable: AI vs. Humans

The independent variable was manipulated into two groups: AI or Humans. The AI group was operationalized by using an AI recruiting tool during the CV-check stage; the Humans group was operationalized by using a Human recruiter during this stage.

Dependent variables: consumers’ attitudes towards the recruiting process, performance, trustworthiness, AI acceptance, and forgiveness


Consumers’ attitudes towards the recruiting process, performance, trustworthiness, and forgiveness were measured on 7-point Likert scales. These four dependent variables were operationalized by evaluating participants’ attitudes on four adjective pairs (Extremely dissatisfied/Extremely satisfied, Extremely bad/Extremely good, Not at all/A great deal, Extremely unlikely/Extremely likely). Participants were asked to evaluate their satisfaction with the recruiting process (1: Extremely dissatisfied; 7: Extremely satisfied), then to assess the performance of the AI recruiting tool/Human recruiter (1: Extremely bad; 7: Extremely good), and then their trust in the AI recruiting tool/Human recruiter (1: Not at all; 7: A great deal). Finally, participants assigned to unethical-behaviour scenarios were asked to what extent they were likely to forgive the unethical behaviour caused by the AI recruiting tool/Human recruiter (1: Extremely unlikely; 7: Extremely likely). Furthermore, AI acceptance was measured on a 5-point Likert scale based on the adjective pair Extremely unacceptable/Extremely acceptable; again, higher scores (from 1 to 5) indicated higher AI acceptance.

Other variables: participants’ attitudes towards AI technology and the future, and the origin of algorithmic bias

Before the demographic questions, participants assigned to unethical-behaviour scenarios were asked whether they thought algorithmic bias originally stems from human bias, on a scale from ‘Definitely yes’ to ‘Definitely no’; higher scores (from 1 to 5) indicated that they did not think so.


For participants’ attitudes towards AI technology and the future, a multiple-choice question with three answers (an optimist, a pessimist, it’s complicated) was designed. Participants who selected ‘It’s complicated’ were asked to explain their opinion.

Sampling method

The main experiment was run with the online survey tool Qualtrics from 24 April to 3 May 2021. Since no professional knowledge of AI was required, and given the strict Covid-19 rules in the Netherlands, participants were gathered through online platforms. The survey was distributed through social media such as LinkedIn, WhatsApp, and Facebook, and through online research platforms such as Reddit, resulting in a total of 559 voluntary participants.

Data preparation

Data preparation was carried out with IBM SPSS Statistics 26. Firstly, after data collection via Qualtrics was finished, the data were checked for missing values: 65 of the 559 participants had not completed the questionnaire, so their data were excluded from the analysis. Furthermore, 16 of the 559 participants chose not to disclose their gender and were redirected to the end of the survey, so their data were excluded as well. In addition, 14 participants answered the scenario-related question incorrectly; assuming they had not read the scenario before filling out the survey, their data were marked as invalid and removed from the database. No outliers were found in any variable.

After removing invalid data, 464 participants remained for further analysis. All questions in the survey were coded in the same direction, so high scores indicated high values. In addition, with N ≥ 30 per group, the Central Limit Theorem applied, meaning the sample means could be treated as normally distributed. Thus, further data analysis could be performed.


To differentiate the condition groups of the experiment, a scale from 1 to 4 was used: 1 indicated the scenario ‘No unethical behaviors occurred during the CV checking stage by AI recruiting tool’; 2 indicated ‘Unethical behaviors occurred during the CV checking stage by AI recruiting tool’; 3 was coded as ‘No unethical behaviors occurred during the CV checking stage by the Human recruiter’; and 4 was coded as ‘Unethical behaviors occurred during the CV checking stage by the Human recruiter’. Gender was not included in differentiating the condition groups. Additionally, the independent variable was coded as AI vs. Humans (1=AI; 2=Humans), the moderator as ethical issues involvement (0=No; 1=Yes), and gender as male, female, or prefer not to say (1=male; 2=female; 3=prefer not to say).

To be able to analyse the data, the 7-point Likert responses were coded as scale data from 1 to 7; for consumers’ attitude towards the recruiting process, 1 indicated ‘extremely dissatisfied’ and 7 ‘extremely satisfied’. The same coding was used for the other variables, except for consumers’ attitude towards AI technology and the future, which used a multiple-choice question: 1 indicated ‘an optimist’, 2 ‘a pessimist’, and 3 a neutral opinion.
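The preparation steps above can be illustrated with a short pandas sketch (a hypothetical re-implementation with invented file and column names; the actual preparation was done in SPSS):

```python
import pandas as pd

# Hypothetical Qualtrics export; column names are assumptions for illustration
df = pd.read_csv("qualtrics_export.csv")

df = df.dropna()                       # 65 incomplete questionnaires excluded
df = df[df["gender"] != 3]             # 16 'prefer not to say' (coded 3) excluded
df = df[df["scenario_check_ok"] == 1]  # 14 participants who failed the scenario question excluded

# Collapse the eight gendered scenarios into the four condition groups
df["condition"] = df["scenario"].map(
    {1: 1, 5: 1,   # no unethical behavior x AI recruiting tool
     2: 2, 6: 2,   # unethical behavior x AI recruiting tool
     3: 3, 7: 3,   # no unethical behavior x Human recruiter
     4: 4, 8: 4})  # unethical behavior x Human recruiter

df["iv_ai_human"] = df["condition"].map({1: 1, 2: 1, 3: 2, 4: 2})      # 1=AI, 2=Humans
df["ethics_involved"] = df["condition"].map({1: 0, 2: 1, 3: 0, 4: 1})  # 0=No, 1=Yes
```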

Reliability of scales

The reliability of the scales was checked for internal consistency using Cronbach’s Alpha. Consumers’ satisfaction, performance, and trustworthiness were all measured with 7-point Likert scale questions. The reliability results can be found in Table 4: the Cronbach’s Alpha of each construct was above 0.7, indicating that the data could be used for further analysis. The forgiveness question was only shown to participants assigned to the unethical-behaviours scenarios and was therefore not included in the reliability check.
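A minimal sketch of this check in Python (hypothetical data; the thesis used SPSS), implementing the standard Cronbach’s alpha formula given earlier:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) array of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point Likert responses for a three-item construct
scores = np.array([[6, 5, 6], [2, 3, 2], [7, 7, 6], [4, 4, 5], [5, 6, 5]])
print(round(cronbach_alpha(scores), 3))
```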

Table 4. Results of Cronbach’s Alpha analysis

Construct                 Cronbach’s Alpha   N of items
Consumers’ satisfaction   .818               3
Performance               .713               3
Trustworthiness           .850               3


Results

Descriptive of the sample

Tables 5 to 8 show the distribution of participants among the four conditions. Table 5 shows that the number of participants in each condition was equal, due to the even random-distribution setting in Qualtrics. Table 6 shows that there were more male than female participants: 60.1% of participants were male and 39.9% female. The data of participants who chose ‘prefer not to say’ were excluded during the data preparation stage and are not shown in Table 6.

Furthermore, Table 7 shows that more than half of the participants (53.7%) were optimists about AI technology and the future, whereas 19.4% were pessimists and 26.9% held a neutral attitude. This may reflect the fact that most participants thought AI technology improves people’s daily lives; those with a neutral attitude believed AI technology has both a good and a bad side, depending on how we use it. Lastly, Table 8 shows that the majority of participants (81.0%) were aged 18-34: 34.9% were aged 18-24 and 46.1% were aged 25-34.

Table 5. Number of participants per condition

Total Condition 1 Condition 2 Condition 3 Condition 4

Participants 464 (100%) 116 (25%) 116 (25%) 116 (25%) 116 (25%)

Table 6. Distribution of participants’ gender among the four conditions

Gender   Total         Condition 1   Condition 2   Condition 3   Condition 4
Male     279 (60.1%)   74 (63.8%)    71 (61.2%)    67 (57.8%)    67 (57.8%)
Female   185 (39.9%)   42 (36.2%)    45 (38.8%)    49 (42.2%)    49 (42.2%)

Table 7. Distribution of participants’ attitude towards AI among the four conditions


Attitude Total Condition 1 Condition 2 Condition 3 Condition 4

Optimist 249 (53.7%) 58 (50.0%) 64 (55.2%) 68 (58.6%) 59 (50.9%)

Pessimist 90 (19.4%) 21 (18.1%) 26 (22.4%) 20 (17.2%) 23 (19.8%)

Neutral 125 (26.9%) 37 (31.9%) 26 (22.4%) 28 (24.1%) 34 (29.3%)

Table 8. Distribution of participants’ ages among the sample and four conditions

Age group Total Condition 1 Condition 2 Condition 3 Condition 4

< 18 6 (1.3%) 1 (0.9%) 1 (0.9%) 3 (2.6%) 1 (0.9%)

18-24 162 (34.9%) 40 (34.5%) 41 (35.3%) 36 (31.0%) 45 (38.8%)

25-34 214 (46.1%) 48 (41.4%) 58 (50.0%) 60 (51.7%) 48 (41.4%)

35-44 57 (12.3%) 17 (14.7%) 13 (11.2%) 11 (9.5%) 16 (13.8%)

45-54 11 (2.4%) 4 (3.4 %) 1 (0.9%) 3 (2.6%) 3 (2.6%)

55-64 12 (2.6%) 5 (4.3%) 2 (1.7%) 3 (2.6%) 2 (1.7%)

> 64 2 (0.4%) 1 (0.9%) 0 (0.0%) 0 (0.0%) 1 (0.9%)

Descriptive of the group

An analysis was conducted to examine whether the four conditions had similar characteristics or significant between-group differences in gender, consumers’ attitude towards AI and the future, and age. Because both the independent and the dependent variables are categorical, a one-way ANOVA on ranks was used. For gender, no significant differences between the four conditions were found (F=0.461, p=0.498 > 0.05). Furthermore, the results showed no significant differences between the conditions in consumers’ attitudes towards AI and the future (F=2.098, p=0.124 > 0.05), so an equal distribution of these attitudes across the four conditions can be assumed. Moreover, there was no significant difference between the age groups (F=0.512, p=0.825 > 0.05).

These results show that there were no significant differences between the groups on the three characteristics of gender, consumers’ attitude towards AI and the future, and age (all p > 0.05). Thus, group means could be used for further testing.

Manipulation check

Considering that participants’ attitudes towards AI and humans may influence perceived trustworthiness, a manipulation check was conducted using an independent-samples t-test. The aim was to examine whether perceived trustworthiness was higher (vs. lower) when participants were assigned to the AI recruiting tool group (vs. the Human recruiter group). The test showed that the manipulation of AI vs. Humans was successful (F=19.162, p < 0.05). However, a reverse result was found: participants in the AI recruiting tool group (M=3.09) perceived lower trustworthiness than participants in the Human recruiter group (M=3.16). Thus, further tests were conducted for correlations.

Thus, more tests were conducted for correlations.

Table 9. Means and standard deviations of perceived trustworthiness for the AI vs. Humans groups

IV                   N     M      SD
AI recruiting tool   232   3.09   1.49
Human recruiter      232   3.16   1.81
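An equivalent check in Python would look like the sketch below. It runs on simulated data drawn from the means and standard deviations in Table 9, not on the thesis data, which were analysed in SPSS:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ai = rng.normal(3.09, 1.49, 232)     # simulated trustworthiness scores, AI group
human = rng.normal(3.16, 1.81, 232)  # simulated trustworthiness scores, Human group

print(stats.levene(ai, human))                      # equality-of-variances check
print(stats.ttest_ind(ai, human, equal_var=False))  # Welch's independent-samples t-test
```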

Correlations

A correlation matrix was computed in SPSS to examine how the six variables correlate with each other (Chapman, 2017). The full correlation matrix can be found in Table 10. No significant correlation was found between variable 1 (Humans vs. AI) and variables 2, 4, 5, and 6.

The table shows that ethical issues involvement and consumers’ satisfaction with the process were negatively correlated, as previously expected (-0.669, p < 0.01). Furthermore, a positive correlation was found between performance and process satisfaction (0.757, p < 0.01). Next to this, performance and trustworthiness, and process satisfaction and trustworthiness, were positively correlated as expected (0.709, p < 0.01; 0.580, p < 0.01, respectively). Moreover, ethical issues involvement was negatively correlated with both performance and trustworthiness (-0.767, p < 0.01; -0.560, p < 0.01, respectively).

Table 10. Means, standard deviations, correlations

Variables                       M      SD      1        2         3         4        5
1. Humans vs. AI                1.50   0.501
2. Process satisfaction         3.50   2.079   0.084
3. Ethical issues involvement   0.50   0.501   0.000   -0.669**
4. Performance                  3.54   2.030   0.026    0.757**  -0.767**
5. Trustworthiness              3.12   1.657   0.021    0.580**  -0.560**   0.709**
6. Forgiveness                  2.09   1.475  -0.064    0.185**   .b        0.248**  0.383**

.b Cannot be computed because at least one of the variables is constant.
**. Correlation is significant at the 0.01 level (2-tailed).

Additionally, the results showed no significant correlation between the demographic variables and any of the variables above (all p > 0.05).
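A matrix like Table 10 can be produced in a few lines of pandas. This sketch uses synthetic stand-in data with assumed variable names, not the thesis data set:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({  # synthetic stand-in for the prepared 464-participant data set
    "process_satisfaction": rng.integers(1, 8, 464),
    "performance": rng.integers(1, 8, 464),
    "trustworthiness": rng.integers(1, 8, 464),
    "forgiveness": rng.integers(1, 8, 464),
})

print(df.corr(method="pearson").round(3))  # Pearson correlation matrix, as in Table 10
```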

Hypotheses testing

To test the hypotheses, ANOVA tests were conducted to examine whether there is a significant relationship between the independent variable AI vs. Humans and the dependent variables consumers’ satisfaction with the process, performance, and perceived trustworthiness. One-way ANOVA was used to test whether statistically significant differences exist on a quantitative dependent variable between the groups of a categorical independent variable (Kim, 2017). The interaction of the moderator variable ethical issues involvement with the AI vs. Humans variable was examined using factorial ANOVA; as Bevans (2021) indicated, factorial ANOVA tests whether statistically significant differences exist in scores on a quantitative outcome variable between groups defined by more than one categorical predictor variable.

Before testing each of the hypotheses, normality was checked using the Kolmogorov-Smirnov test, which showed that each variable is normally distributed. Moreover, Levene’s test of homogeneity showed that the variances of all variables are homogeneous (p > 0.05).
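As an illustration of the two test types, here is a Python sketch on synthetic stand-in data with assumed column names (the thesis analyses were run in SPSS):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({                                   # synthetic stand-in data set
    "iv_ai_human": rng.integers(1, 3, 464),           # 1=AI, 2=Humans
    "ethics_involved": rng.integers(0, 2, 464),       # 0=No, 1=Yes
    "process_satisfaction": rng.integers(1, 8, 464),  # 7-point Likert score
})

# One-way ANOVA: consumers' satisfaction across AI vs. Humans
one_way = smf.ols("process_satisfaction ~ C(iv_ai_human)", data=df).fit()
print(sm.stats.anova_lm(one_way, typ=2))

# Factorial (2x2) ANOVA: main effects plus the interaction with ethical issues involvement
factorial = smf.ols("process_satisfaction ~ C(iv_ai_human) * C(ethics_involved)", data=df).fit()
print(sm.stats.anova_lm(factorial, typ=2))
```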

Hypotheses 1, 2 and 3: the relationship between consumers’ satisfaction on AI vs. Humans

Firstly, H1 was supported in the Literature Review by citing previous research. Then, to further verify this finding, a question about whether participants thought AI bias was initially from
