
AI-Enabled Decision Support Systems: Developing A Predictive Model for The Adoption of Online Review Checkers (ORCs)


Academic year: 2021



AI-Enabled Decision Support Systems:

Developing A Predictive Model for The Adoption of Online Review Checkers (ORCs)

Anna Georgoudis

Student number: 11732059

Master’s Thesis

Graduate School of Communication

Master’s programme Communication Science

Research Master track

Supervisor: Guda van Noort

Date of completion: 31st of January 2020


Abstract

Online Review Checkers (ORCs, e.g. fakespot.com) are new AI-enabled systems that promise to aid consumers’ purchasing decisions. By employing algorithms, ORCs gauge the authenticity of products’ online reviews and summarize the reviews’ content. ORCs’ premise is tackling two recent problems associated with online reviews: 1) companies’ manipulation of reviews, which jeopardizes their trustworthiness (quality problem), and 2) the growing number of reviews, which impairs consumers’ decision ability through information overload (quantity problem).

Drawing from information processing theory, the Technology Acceptance Model, decision-making theory, online reviews research and research on trust in AI technology, the current study develops an innovative model that explains the underlying mechanisms of ORC use. Data were collected from a representative sample of 965 Dutch online consumers through a survey. The model was tested by applying Structural Equation Modeling. Findings suggest that the model is predictive of the consumer’s intention to use an ORC in future purchases. Interestingly, the individual’s trusting beliefs in AI technology, the ORC’s underlying logic, are a strong predictor of future use intention. Practical advice is given to communication professionals on the implementation of ORCs as strategic marketing tools.

Keywords: technology acceptance model, information processing, decision-making, online reviews, trust in AI technology


AI-Enabled Decision Support Systems:

Developing A Predictive Model for The Adoption of Online Review Checkers (ORCs)

Seeking the opinion of others is what we do before making purchasing decisions.

Especially online, this natural human need can be fulfilled (Burton & Khammash, 2010). The shift from brick-and-mortar stores to online shopping platforms provided consumers with a larger source of those opinions, extended beyond friends and family to the entire network of Internet users (Cheung, Lee, & Rabjohn, 2008). Those opinions are better known as online reviews and, by mirroring consumers’ subjective experience of a product, they emerged as a reliable cue for trusting online retailers and as a safeguard against the uncertainty associated with online transactions (e.g. product quality; Hong, Xu, Wang, & Fan, 2017). Although online reviews are considered one of the most influential sources of information consumers rely on when purchasing (Lee & Shin, 2014), concerns regarding their quality and quantity have been raised. Regarding quantity, studies show how the growing number of reviews available online is undermining consumers’ ability to make decisions by increasing information overload (Furner & Zinko, 2017; Kwon, Kim, Duket, Catalán, & Yi, 2015). Regarding quality, there are examples, across different industries, of companies jeopardizing the trustworthiness of online reviews by manipulating them (i.e. fake reviews) (Hu, Bose, Koh, & Liu, 2012; Racherla & Friske, 2012).

Online review checkers (ORCs; e.g., fakespot.com, thereviewerindex.com and reviewmeta.com) are recently developed AI-enabled systems that seem to tackle these two problems associated with online reviews. In fact, ORCs promise to help consumers make better purchasing decisions (Fakespot.com, 2019) by gauging the authenticity of the reviews associated with a product and a seller (i.e., addressing the quality issue) and by providing a summary of the content of those reviews (i.e., addressing the quantity issue). This information is presented to the consumer in the form of a “report” generated through statistical analysis and artificial intelligence (AI) technology.

While ORCs appear to be promising consumer decision-making support tools, we propose that their adoption is threatened by the lack of transparency surrounding the process of fraudulent review detection and by consumers’ distrust of AI technology (Enkel, 2017). Therefore, the current study investigates whether ORCs indeed support and improve consumers’ online decision-making by addressing the online reviews’ quality and quantity issues, and whether this, along with the influence of the transparency associated with the algorithmic analysis process and the individual’s trusting beliefs in AI technology, in turn explains the further adoption of such systems.

To understand ORC use and its determinants we build on information systems (IS) research (Häubl & Trifts, 2000) and argue that ORCs are a specific type of web-based decision support system (DSS), namely recommendation agents (RAs) (Xiao & Benbasat, 2007). More specifically, we identify four evident similarities between ORCs and RAs that form the basis of the suggested framework for this research: facilitating decision-making by decreasing cognitive effort and increasing decision confidence, lack of transparency in the technology’s working logic, employment of AI technology, and users’ concerns about (AI) technology. Given empirical findings demonstrating that these four aspects are predictive of RA adoption (Aljukhadar, Senecal, & Daoust, 2012; Cramer et al., 2008; Enkel, 2017; Nilashi, Jannach, Ibrahim, Esfahani, & Ahmadi, 2016; Ruijing, Benbasat, & Hasan, 2018; Xu, Benbasat, & Cenfetelli, 2014), the current study aims to examine whether these elements are also predictive of ORC use.

The study combines three streams of research to build a theoretical model identifying the critical determinants of ORCs’ future use and explaining whether ORCs influence the consumer’s decision making. The first regards online reviews and how their quality and quantity dimensions affect consumer decision making; such studies are rooted in information processing theory and decision-making (Elwalda, Lü, & Ali, 2016; Gao, Zhang, Wang, & Ba, 2012; Han et al., 2006). The second concerns the user’s adoption of RAs (Xiao & Benbasat, 2007); fundamental theories here are the Technology Acceptance Model (TAM; Davis, 1985) and decision-making (Xu et al., 2014). The third stream involves studies on trust in AI technologies (Tussyadiah, Zach, & Wang, 2019; Van Eeuwen, 2017).

More specifically, the study applies and extends a conceptual model developed by Xiao and Benbasat (2007) for understanding the user’s adoption of RAs. Drawing from the original framework, we propose that the consumer’s evaluation of the ORC (perceived ease of use, perceived usefulness, satisfaction and trust) and the consumer’s perception of the ORC’s influence on the decision-making process (perceived decision effort) and outcome (perceived decision quality) both predict the intention of using an ORC for future purchasing decisions. Moreover, based on a translation of the original framework to the current context and on the literature on online reviews, the study further claims that characteristics of the ORC’s process (the perceived transparency of the ORC’s underlying algorithm) and output (the perceived quality of the information contained in the ORC’s review analysis report) predict future use of ORCs by influencing the individual’s evaluation of the ORC and the decision-making.

Furthermore, we investigate how the individual’s trusting beliefs in AI technology influence the future use of an ORC. In sum, the study addresses the following question:

To what extent is the intention of using an ORC in the future predicted by the user’s evaluation of the ORC, the user’s perception of the ORC’s influence on the decision-making process and outcome, ORC characteristics and trusting beliefs in AI?

The study makes five important theoretical contributions. 1. Despite extensive research on RAs, the current study is, to our knowledge, the first to shed light on the influence that ORCs have in aiding consumers’ online purchasing decisions. 2. By applying the Xiao and Benbasat (2007) conceptual model to a new phenomenon, we provide empirical support on whether this theoretical framework applies to technologies other than RAs in online purchasing contexts. 3. The model contributes to understanding how ORCs overcome the quality and quantity problems of online reviews by enhancing the perceived decision quality and information quality. 4. The study contributes to elucidating our understanding of this emergent ORC phenomenon by taking the consumer’s perspective and moving beyond existing research, which has mostly focused on developing and evaluating algorithms for the detection of fake reviews (Arjun, Vivek, Bing, & Natalie, 2013; Li, Chen, Liu, Wei, & Shao, 2015; Lin et al., 2014). 5. The study extends the Xiao and Benbasat (2007) framework by investigating the role played by the individual’s trusting beliefs in AI technology in the future use of ORCs, thus bringing a new understanding to the consumer’s adoption of AI-enabled technologies.

The research also has societal and managerial relevance for three main reasons. Firstly, given the considerable advantages that online reviews bring to brands and companies (e.g., increasing sales, enhancing customer involvement; Singh et al., 2017; Weathers, Swain, & Grover, 2015), and the interest in the effectiveness of online reviews for consumers’ purchasing decisions (Bellman, Johnson, Lohse, & Mandel, 2006; Liu & Park, 2015), understanding the ORC phenomenon in connection with future trends in online reviews is paramount. Secondly, the study gives communication professionals advice on the underlying mechanisms involved in the ORC’s promise to help consumers in their purchasing decisions, thus highlighting whether ORCs can be implemented as strategic marketing tools and how their benefits should be communicated to consumers. Finally, given the global concern regarding rising misinformation (Lazer et al., 2018) and the general skepticism towards artificial intelligence, this study sheds light on whether consumers are ready to accept and use tools designed to fight misleading information online.


Theoretical Framework

How do online review checkers (ORCs) work?

ORCs are new AI-enabled systems that promise to help consumers make better purchasing decisions. This is currently done by informing the consumer about the ‘fakeness’ of a product’s online reviews and by providing a ‘summary’ and an overall evaluation of the content of the reviews analyzed (AMZDiscover, 2018). ORCs are accessible through web interfaces, and their recent appearance is probably a result of three phenomena: the increasing role played by online reviews in affecting purchasing decisions, the rising concern about online misinformation, and the computational power at our disposal. Online consumers interested in purchasing a product on one of the most familiar marketplaces (e.g. Amazon) can copy and paste the link of the product into one of the ORC-based websites (e.g. fakespot.com, reviewmeta.com). After this simple task, the ORC performs an analysis of the product’s reviews and provides the consumer with a “report”. Despite differences in add-ons (e.g. compatibility with online retailers), the following analyses are consistent across existing ORCs: an estimate of misleading reviews, an adjusted product rating, sentiment analysis, and the words reviewers most use to describe the product. These analyses are executed by employing machine learning, a sub-category of AI technology, where proprietary algorithms and statistical models are used to recognize patterns and make inferences on the fetched data, that is, online reviews.
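The proprietary models behind these reports are not public, so the following Python snippet is only a toy sketch of the aggregation an ORC report performs once suspicious reviews have been flagged: recomputing an adjusted product rating from the remaining reviews and extracting the words most used to describe the product. The `flagged` field and the sample data are hypothetical stand-ins for the verdict of a real fake-review classifier.

```python
from collections import Counter
import re

def orc_report(reviews):
    """Toy ORC-style report over (rating, text, flagged) tuples.

    `flagged` is a hypothetical stand-in for a fake-review
    classifier's output; real ORCs infer it with proprietary
    machine-learning models.
    """
    genuine = [r for r in reviews if not r[2]]
    adjusted = (round(sum(r[0] for r in genuine) / len(genuine), 2)
                if genuine else None)
    # Word frequencies are computed over the genuine reviews only.
    words = Counter(w for _, text, _ in genuine
                    for w in re.findall(r"[a-z']+", text.lower()))
    return {
        "misleading_share": round(1 - len(genuine) / len(reviews), 2),
        "adjusted_rating": adjusted,
        "top_words": [w for w, _ in words.most_common(3)],
    }

sample = [
    (5, "Great product, works great", False),
    (1, "Broke after a week, poor quality", False),
    (5, "Best ever buy now amazing deal", True),  # flagged as suspicious
]
print(orc_report(sample))
```

A real checker would replace the `flagged` field with a learned classifier and add the sentiment analysis mentioned above; the report’s elements, however, mirror those listed in the paragraph.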

How do ORCs address online reviews’ quality and quantity issues?

ORCs base their algorithmic analyses on online reviews, one of the most important information sources for consumers nowadays (Liu & Park, 2015). Also called consumer-generated opinions, online reviews exist for the majority of industries and types of goods (search vs experience), and they drive consumers’ purchases by, among other things, increasing the individual’s confidence in the decision, improving customer feedback and creating a more authentic shopping experience (Clement, 2018). The reviews’ qualitative and quantitative dimensions (e.g., valence, reviewer profile image), along with their information richness, determine their helpfulness for consumer shopping behavior (Karimi & Wang, 2017; Qazi et al., 2016; Weathers et al., 2015).

Two drawbacks have emerged from consumers’ reliance on online reviews for purchasing decisions: 1) there are too many reviews and 2) their quality is questionable (Furner & Zinko, 2017; Hu et al., 2012). These volume and quality issues negatively influence consumer decision making. First, studies show how the increasing volume of online reviews threatens consumers’ ability to locate relevant information, thus increasing the perception of information overload and affecting the subsequent decision quality (Zha, Li, & Yan, 2013). Second, the number of suspicious reviews has increased over time, and studies have confirmed how business frauds such as writing fake reviews or filtering out negative ones undermine reviews’ credibility (Luca & Zervas, 2016).

The negative effect of online reviews’ quality and quantity on the consumer’s decision-making is explained by information processing theory. First, individuals’ limited cognitive ability to process information may trigger dysfunctional consequences (i.e. anxiety, confusion) when many reviews are present (Han et al., 2006); these undesirable results can then lead to lower confidence in the decision making. Second, the quality dimension of reviews is crucial for effective processing of information, as it informs the individual’s perceived usefulness of that information (Elwalda et al., 2016; Gao et al., 2012).

How to conceptualize ORCs?

ORCs are classifiable as having features in between information-filtering (IF) and information-retrieval (IR) systems, according to the categorization of information systems (IS) suggested by Hanani et al. (2001). Like ORCs, both IF and IR systems manage a large amount of information by filtering only what is relevant for the user (Hanani et al., 2001). Consistent with IF, ORCs are designed to screen a dynamic stream of data. Conforming to IR, ORCs are designed for ad-hoc use, do not collect user information, describe user needs as queries, select data from data items (i.e., online reviews), are accessible to anyone, and do not deal with social issues like user privacy (Hanani et al., 2001). Moreover, ORCs hold the following four similarities with IS designed to support decision-making during different stages of online shopping, such as RAs. Firstly, RAs facilitate purchasing decisions by decreasing the individual’s cognitive effort and improving the decision quality during the information search phase of the shopping activity, owing to their ability to analyze a large amount of data (Aljukhadar, Senecal, & Daoust, 2012; Xiao & Benbasat, 2007). In a similar vein, this mechanism of cognitive effort reduction may play an important role in ORCs, where the system facilitates the analysis of the abundance of information contained in online reviews, thus strengthening confidence in the purchasing decision.

Secondly, both ORCs and RAs lack transparency in the review analysis process and the personalized recommendation process, respectively. In fact, ORCs do not provide a clear explanation of their reasoning in determining misleading reviews. In an analogous way, studies have raised transparency issues for RAs (Cramer et al., 2008).

Thirdly, ORCs, like the second generation of RAs, employ AI technology to analyze large amounts of data (Ruijing et al., 2018). Finally, the study is informed by existing consumer concerns about using RAs and ORCs. Individuals’ trust in RAs is threatened by the implicit gathering of consumers’ preferences (i.e. past browsing and purchasing behaviors) used for providing personalized recommendations (Cramer et al., 2008; Xu et al., 2014). For ORCs, by contrast, given the employment of objective data unrelated to the user, trust concerns are likely related to the AI technology itself, and not to data privacy, as suggested by the general public skepticism towards AI (Enkel, 2017).


Conceptual model development for ORC use

The study’s conceptual model is developed by drawing from three streams of research: the user’s adoption of RAs, online reviews, and studies on trust in AI technologies. Starting from the first, relevant are studies revealing behavioral issues and responses in RA use and adoption (Zahedi, Song, & Jarupathirun, 2008). Findings suggest that RAs, for example, increase the consumer’s effectiveness in searching for products online, increase consumer satisfaction (Hostler, Yoon, Guo, Guimaraes, & Forgionne, 2011) and enhance the consumer’s confidence in purchasing decisions by lowering the perceived information overload (Aljukhadar et al., 2012; Ruijing et al., 2018). In a similar vein, ORCs might positively influence the consumer’s perceived decision quality as well.

Among the factors explaining the adoption of RA systems, the existing literature reports trust as the most relevant antecedent (Benbasat & Wang, 2005, 2007; Ruijing et al., 2018), followed by affective responses such as enjoyment (Ashraf, Ismawati Jaafar, & Sulaiman, 2019) and satisfaction (Ashraft, Jaafar, & Sulaiman, 2016), transparency (Benbasat & Wang, 2016; Cramer et al., 2008; Xu et al., 2014) and the explanation aid of the system (Benbasat & Wang, 2007; Tan, Tan, & Teo, 2012). Similarly, all these predictors might play an important role for ORCs.

Empirical studies on RAs have mainly departed from the Technology Acceptance Model (TAM; Davis, 1985). Although first developed to explain individuals’ acceptance of computer-based systems in organizational settings, the TAM has been widely employed for empirically testing the acceptance of various technologies such as ride-sharing services (Wang, Wang, Wang, Wei, & Wang, 2018), QR codes (Sang Ryu & Murdock, 2013), RFID technology (Hossain & Prybutok, 2008), internet banking (Alsajjan & Dennis, 2010), wearable technologies (Hwang, 2014) and the Internet of Things (Mital, Chang, Choudhary, Papa, & Pani, 2018). The similarities between ORCs and RAs imply that the TAM framework could also explain the consumer’s future use of ORCs. Therefore, the current study borrows the framework proposed by Xiao and Benbasat (2007) for studying RAs in the context of e-commerce, which was built on the TAM. The original conceptual model postulates that the user’s direct experience with an RA informs two responses, the user’s evaluation of the RA and the perception of the RA’s influence on the decision-making, which subsequently influence the RA’s adoption.

Moreover, it identifies important aspects of RAs (provider credibility, RA characteristics, factors related to the product, the user, and the user-RA interaction) hypothesized to be predictors of the responses to RA use. In a similar vein, we propose that the individual’s direct experience with an ORC informs the consumer’s future use of ORCs in online purchasing settings by influencing: 1) the consumer’s perception of the ORC’s influence on the decision-making process (perceived cognitive effort) and outcome (perceived decision quality), and 2) the consumer’s evaluation of the ORC system (perceived ease of use, perceived usefulness, trust in the ORC and satisfaction with the ORC). Moreover, characteristics of both the ORC’s process and output influence the user’s evaluation of the ORC. Regarding the process, we propose that the individual’s perceived transparency of the algorithmic analysis of the reviews (perceived process transparency) influences the evaluation of ORCs. Regarding the output, we draw on empirical evidence from research on online reviews, the second stream of research, and propose that the quality of the information provided by the ORC (perceived information quality) is a compelling predictor of the user’s evaluation of the ORC and of the perceived decision quality. In fact, as sustained by information processing theory, information quality affects the evaluation of the usefulness of the reviews, which subsequently influences confidence in the decision (Elwalda et al., 2016; Gao et al., 2012; Keller & Staelin, 1987).


Finally, the third stream of research extends the Xiao and Benbasat (2007) model by considering the influence that the individual’s trusting beliefs in AI technology have on ORC future use intention. This construct captures the individual’s orientation towards the technology employed by the ORC, and it is believed to be important for two main reasons: the complex data processing of the ORC and the general sentiment of distrust that the public holds towards AI.

The predictive model of ORC use is visually depicted in Figure 1. Assumptions about the relationships between the constructs are formulated in the following paragraphs (see summary in Table 1).

Hypotheses development

The model assumes that the future use intention of ORCs is explained by the user’s evaluation of the ORC system, the perception of the ORC’s influence on the consumer’s decision-making process and outcome, ORC characteristics, and the user’s trusting beliefs in AI. Moreover, it also assumes relationships between these sets of variables.
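As a reading aid alongside Figure 1, the direct paths hypothesized in the following paragraphs can be summarized as a small directed graph. The construct names below are shorthand introduced here, not the thesis’s measurement labels, and the mediation hypotheses (H10, H12) follow from chaining these direct paths.

```python
# Direct hypothesized paths of the conceptual model (schematic only;
# construct names are illustrative shorthand).
PATHS = {
    "decision_effort":      ["decision_quality", "future_use"],  # H1c, H1a
    "decision_quality":     ["usefulness", "future_use"],        # H2, H1b
    "usefulness":           ["future_use"],                      # H3a
    "ease_of_use":          ["usefulness", "future_use"],        # H4, H3b
    "satisfaction":         ["ease_of_use", "future_use"],       # H6, H5
    "trust_in_orc":         ["usefulness", "future_use"],        # H8, H7
    "process_transparency": ["trust_in_orc"],                    # H9
    "information_quality":  ["satisfaction", "usefulness",       # H11a, H11b
                             "decision_quality"],                # H13
}

def direct_predictors(outcome):
    """Constructs with a direct hypothesized arrow into `outcome`."""
    return sorted(c for c, outs in PATHS.items() if outcome in outs)

print(direct_predictors("future_use"))
```

Listing the direct predictors of future use intention this way makes visible that every other construct in the model reaches the dependent variable either directly or through at most one mediator.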

Influence of the ORC’s perceived influence on the decision-making process and outcome on ORC future use intention

Consumers surfing the web for online purchases find themselves in an overwhelming, information-rich environment, where it is very hard to navigate all the available information (Park & Lee, 2008). This is especially true for online reviews, where individuals, in the attempt to gain other consumers’ opinions about a product, become even more confused and experience information overload (Park & Lee, 2008). Given the individual’s limited cognitive abilities, information processing theory proposes that people employ different strategies when making decisions, ultimately compromising between minimizing cognitive effort and reaching an accurate decision (Xu et al., 2014). This behavior is relevant in the first phases of decision making, when people search for information and evaluate their considered set of alternatives (Gao et al., 2012). It is in those initial stages that decision aid systems mainly offer their contribution (Xiao & Benbasat, 2007), by allowing individuals to increase their confidence in the quality of the decision’s outcome (Häubl & Trifts, 2000; Phu & Tho, 2019; Xu et al., 2014). ORCs, by narrowing down the content of online reviews and extracting only the information relevant for the consumer (e.g. the words most used to describe the product), relieve consumers of the burden of cognitive constraints and, ultimately, allow them to reach an optimal decision. These perceived benefits of reduced cognitive effort and increased decision quality would then lead consumers to use an ORC for future purchases (Chen, 2017). It is therefore proposed that:

H1a: The perceived decision effort increases the consumer’s intention of using ORC in the future

H1b: The perceived decision quality increases the consumer’s intention of using ORC in the future

H1c: The perceived decision effort increases the perceived decision quality

Decision-making might not only predict future ORC use, but also the perceived usefulness of the ORC. Previous RA research (Xiao & Benbasat, 2007) revealed that interacting with an RA might cause users to overestimate, in a sort of “illusion of control”, performance beliefs (e.g., decision confidence) about the effectiveness of RAs (Xiao & Benbasat, 2007). This asserts a relationship between decision quality and RA usefulness. Similarly, we propose that the relationship between decision quality and perceived usefulness applies to the ORC phenomenon as well, and therefore:

H2: The perceived decision quality increases the perceived usefulness of the ORC

Influence of the consumer’s evaluation of ORC on ORC’s future use intention

According to the TAM, the individual’s behavioral intention to use a technology is determined by a set of beliefs formed during the user’s direct experience with that technology (Davis, 1989). The TAM specifies perceived usefulness and perceived ease of use as the two salient beliefs in the user’s evaluation of a system influencing its use. Perceived usefulness describes the user’s perception that the system is beneficial for performing the prescribed activity, whereas perceived ease of use captures “the degree to which a person believes that using a particular system would be free of effort” (Davis, 1989, p. 320). Past empirical findings have repeatedly proven the strong positive influence of perceived usefulness on the individual’s intention to use a new technology and on RAs (Al-Natour, Benbasat, & Cenfetelli, 2008). Moreover, they have provided support for the direct and indirect (via perceived usefulness) influence of perceived ease of use on the behavioral intention of future use (Hansen, Saridakis, & Benson, 2018; Kanchanatanee, Suwanno, & Jarernvongrayab, 2014; Sheng & Zolfagharian, 2014). Therefore, we propose for ORCs that:

H3: The consumer’s a) perceived usefulness and b) perceived ease of use of the ORC increases the intention of using the ORC in the future

H4: The consumer’s perceived ease of use increases the perceived usefulness, and subsequently it increases the intention of using the ORC in the future

Criticized for its parsimony (Sheng & Zolfagharian, 2014), the TAM is broadened by adding two variables, trust and satisfaction, to the set of beliefs that predict behavioral intention towards the technology and that are part of the set of ORC evaluation variables explaining its future use. With regard to satisfaction, prior studies suggested that the technology acceptance and user satisfaction literatures work in complementary ways in understanding information systems use (Wixom & Todd, 2005), and they confirmed that the individual’s satisfaction with the information system, after initial use, positively affects the individual’s intention to continue using the system (Xiao & Benbasat, 2007). Moreover, the more satisfied an individual is with the system, the more they will perceive it as easy to use (Wixom & Todd, 2005). The following is therefore proposed for ORCs:


H5: The consumer’s satisfaction with the ORC increases the intention of using the ORC in the future

H6: The consumer’s satisfaction with the ORC increases the perceived ease of use

With regard to trust, several studies demonstrate the important role played by trust as an antecedent of the intention to use a new technology (Aghdaie et al., 2012; Siau & Wang, 2018) and information systems adopted as decision aids (Komiak & Benbasat, 2006). As RA use is influenced by consumers’ confidence in the product recommendations provided by the RA (Xiao & Benbasat, 2007), for ORCs we expect the individual’s intention to use an ORC in the future to be influenced by the perception that the information provided after the online review analysis is trustworthy. Moreover, Xiao and Benbasat (2007) propose that the information asymmetry involved in the relationship between the individual and an RA makes trust critical because of the individual’s inability to determine whether the system is working for the sole benefit of the user, especially in online environments (Aghdaie et al., 2012). Given these findings, we propose for ORCs that:

H7: The consumer’s trust in the ORC increases the intention of using the ORC in the future

Trust does not only influence use intention, but also the evaluation of perceived usefulness. By guaranteeing the individual’s expectation that the system is going to perform in the desired way, trust increases the certainty of the system’s behavior, thus also positively affecting the perceived usefulness of the system (Aghdaie et al., 2012; Gefen, Karahanna, & Straub, 2003). It is therefore posited for ORCs:

H8: The consumer’s trust in the ORC increases the perceived usefulness

Influence of ORC characteristics on ORC’s evaluation and on perceived decision quality

As proposed by Xiao and Benbasat (2007), characteristics referring to the RA’s process and output influence the user’s evaluation of the RA. In a similar vein, we propose that ORC characteristics such as the perceived transparency of the ORC’s algorithmic analysis (process) and the information provided in the ORC’s analysis report (output) are fundamental predictors of the consumer’s evaluation of the ORC and of the perceived decision quality.

Studies of RAs have demonstrated how the transparency of the process is a positive trust-building mechanism towards the system (Benbasat & Wang, 2016; Brunk, Mattern, & Riehle, 2019; Nilashi et al., 2016). Process transparency deals with the user’s understanding of how the RA works and what it bases its recommendations on (Cramer et al., 2008). The individual’s acceptance of recommendations provided by intelligent RAs is higher for transparent systems, due to an enhanced understanding of the system’s workings (Cramer et al., 2008). Thus, we propose that the transparency of the ORC process, the extent to which the consumer understands how the ORC analyzes the online reviews (Jiang, 2018), is crucial for the ORC’s future use. Due to the asymmetry of information, the general sentiment of distrust that individuals feel towards AI technology, and the difficulty of understanding the complexity of the algorithms employed by ORCs without adequate mathematical knowledge, we predict that a transparent explanation of the ORC’s analysis logic is essential for building trust in ORCs (Enkel, 2017; Jiang, 2018). This is further supported by recent studies revealing how transparency is fundamental for building trust in applied AI (Hengstler, Enkel, & Duelli, 2016) and raising debates about the ethical importance of disclosing cases where AI technology is the “author” of communications (Guzman & Lewis, 2019). Therefore, the following is formulated:

H9: The perceived process transparency increases the trust in ORC

H10: The positive influence of perceived process transparency on trust in the ORC subsequently increases the intention of using the ORC in the future


Not only the process, but also the output of the ORC, the perceived quality of the information in the ORC’s review analysis report, is relevant for the consumer’s evaluation of the ORC and for the perceived decision quality. Information quality positively affects the perceived satisfaction with an IS and, indirectly, its success and future adoption (Hanani et al., 2001; Prasanna & Huggins, 2016; Yoon, Hostler, Guo, & Guimaraes, 2013). In online environments, different dimensions of information quality (i.e. accuracy, timeliness, relevance and comprehensiveness) are precursors of the individual’s perceived usefulness of the information (Cheung et al., 2008). As regards ORCs, it is posited that the information quality of the content provided in the analysis report is a predictor of the individual’s intention of using an ORC in the future, and that this relationship is subsequently influenced by perceived satisfaction and usefulness.

As supported by information processing theory and by empirical findings, the quality of the information informs the consumer's decision-making both in studies involving online reviews and in studies involving information systems (Chen, Nguyen, Klaus, & Wu, 2015; Hicks et al., 2012; Lane & Staelin, 1987). These studies also show how dysfunctional consequences triggered by the review quality issue lead individuals to be less confident in their decisions (Han et al., 2006). It is therefore posited for ORCs:

H11: The perceived information quality of the ORC increases a) the consumer’s satisfaction with and b) the consumer’s perceived usefulness of the ORC

H12: The perceived information quality positively influences a) the satisfaction and b) the perceived usefulness and subsequently increases the consumer’s intention of using the ORC in the future

H13: The perceived information quality increases the perceived decision quality


As previous research suggests, the credibility of information is informed by both the trust in the medium and the trust in the source of that information (Lucassen & Schraagen, 2012). This was indeed demonstrated by studies on the determinants of product reviews, where credibility informs the trust in the information (Lee & Shin, 2014; Mackiewicz & Yeats, 2014). In a similar vein, we argue that consumers trust ORCs, the medium of the information, once they trust the source of the information, the AI technology. Therefore, the individual's trusting beliefs in AI technology, the "author" of the online review analysis report, should positively affect the trust in the ORC itself. This is further supported by previous studies on the adoption of novel technologies, which found user characteristics to be relevant antecedents of trust in the technology. In fact, the individual's attitude towards robots predicts trusting beliefs in intelligent service robots (Tussyadiah et al., 2019), and the individual's attitude towards mobile advertisement positively induces the attitude towards using a messenger chatbot (Van Eeuwen, 2017). Thus, we assume the following:

H14: The consumer’s trusting beliefs in AI technology increase the intention of using the ORC in the future

H15: The consumer’s trusting beliefs in AI technology increase the trust in the ORC Method

Sample

To test the hypotheses, a structured questionnaire was distributed among a representative sample of consumers (aged 18–65) in the Netherlands over 7 days during December 2019. Participation required individuals to be at least 18 years old and to understand basic English. These requirements were communicated in the study's introduction and monitored by specific questions at the beginning of the questionnaire. Participants who did not meet these requirements automatically exited the survey. Failing the attention check questions was another exclusion criterion. Quota sampling was employed and a total of 4200 individuals were contacted by Dynata, the panel agency processing the data. After accounting for screened-out participants, the final sample consisted of 695 individuals, meeting the thresholds of a minimum of 200 cases and 5 observations per estimated parameter sufficient for structural equation model analyses (Wolf, Harrington, Clark, & Miller, 2013). Participants were 342 men (49.2%) and 353 women (50.8%), with a mean age between 45 and 54 years old (SD 1.35). 32.4% of participants held a bachelor's degree, 16.8% a high school degree, 15.4% a master's degree and 12.2% a college degree. The majority were Dutch (96.3%) (see Table 2 for descriptives).

Design

The experiment was administered online. Respondents received an email invitation to participate, containing a unique URL accessible only once. Each participant received the recruiting agency's standard compensation. An introductory statement assured anonymity and voluntary participation; participants gave data privacy consent before beginning the study.

Participants took on average 13 minutes to complete the survey.

After initial demographic questions (gender, age, nationality and education), participants were introduced to a scenario-based shopping situation in which they wanted to purchase a mattress found on Amazon.com. A mattress was chosen for being an experience product: individuals are more likely to search for information and feedback for experience products than for search products, due to the uncertainty and equivocality in evaluating their subjective attributes (Gao et al., 2012). Amazon was chosen for consumers' general popularity of and familiarity with this e-commerce platform. The scenario informed respondents that, before the final decision, they wanted to read some reviews on the product, and that they decided to evaluate them by visiting the review checker website fakespot.com. Past findings have demonstrated that the individual's direct experience with an IS serves as a basis for the user's evaluation of the IS and its subsequent adoption (Bajaj & Nidumolu, 1998; S. S. Kim & Malhotra, 2005; Limayem, Cheung, & Chan, 2003; Venkatesh, Morris, Davis, & Davis, 2003). Therefore, a short video (duration 1.48 minutes) showed the participants a recording of the steps involved in requesting the review analysis report from fakespot.com. The video showed the Amazon link of the mattress being copied and pasted into fakespot.com, the submission of the analysis request and the final report. Although less realistic than asking individuals to perform this task themselves, this option was considered optimal because it prevented participants from temporarily exiting the survey (thus ensuring a higher response rate). Moreover, it ensured that all participants viewed the same report: the review checker performs its analysis on real-time data, and new reviews posted by customers for a product might create small changes in the analysis report's result.

After watching the video that simulated an experience with the ORC, participants answered attention check questions, followed by the central variable measurements and the control variables.

Finally, participants were thanked for their participation and debriefed about the exact research aim of the study.

Measures

The content validity of the study was ensured by employing and adjusting validated scales. All constructs, except for the socio-demographic variables, control variables and satisfaction items, were measured on seven-point Likert scales ranging from strongly agree to strongly disagree (see Table 3 for a summary of constructs).

Intention to use ORC

The individual’s intention to use ORC in the future in a similar context after the initial use uses three items adapted from Benlian, Titah, and Hess (2012).


The degree to which the consumer believes that the ORC enables a better purchasing decision, the perceived usefulness (PU), and the degree to which the ORC is perceived as easy to use, the perceived ease of use (PEOU), are each measured by three items adapted from scales developed by Benlian et al. (2012). The six items measuring the consumer's trust in the ORC (TRO) capture the competence, benevolence and integrity dimensions of this construct, and the scale is adapted from Benlian et al. (2012) and Nicolaou and McKnight (2006). The four seven-point semantic differential items for the satisfaction (SA) construct are adapted from Dabholkar and Sheng (2012).

Trusting beliefs in AI technology

Trust in AI technology (TRAI) captures the functionality, helpfulness and reliability of a specific technology with items borrowed from Tussyadiah et al. (2019).

Consumer’s decision-making process and outcome

The perceived decision effort (PDE) measures the cognitive effort required while using the ORC for making product decisions, with four items adapted from Wang and Benbasat (2016) and Xu et al. (2014). The perceived decision quality (PDQ) captures the subjective dimension of the consumer's confidence in the purchasing decision aided by the ORC and is assessed with three items derived from Tan et al. (2012).

ORCs characteristics

The perceived information quality (PIQ), the extent to which the information provided by the ORC analysis report is relevant, understandable, accurate, complete, value-adding and timely, adopts six items from Filieri and McLeay (2014). The perceived process transparency (PPT), the degree to which the consumer understands the motives and the inner workings of the ORC's review analysis, is measured with four items adapted from Wang and Benbasat (2016).


Control variables measured the previous use of an ORC (Q8), the frequency of online review use (Q11), the familiarity with the mattress brand (Q9) and the feeling towards online reviews (Q10). The final questionnaire (see Appendix) consisted of 54 items.

Attention checks

Attention check variables were included to ensure the quality of the results. The participants' attention during the video was assessed by asking them to select the correct answers about the ORC's procedure and function. Moreover, participants could only proceed after fully playing the video. The participants' attention during the questionnaire was assessed through the item "For this question, please select the "Somewhat Agree" answer to demonstrate your attention", placed after some batteries of questions. To rule out the risk of acquiescence bias, especially in batteries of questions, some items were negatively phrased (Krosnick & Presser, 2009).

Data analysis

Structural Equation Modeling (SEM) in Amos 25 with Maximum Likelihood Estimation (MLE) was used to test the hypotheses and develop a predictive model of the consumers' intention to use an ORC in the future. This method was chosen over multiple regression because it simultaneously assesses the relationships among variables and the degree of error for the independently estimated variables (Joo & Sang, 2013). During the initial data screening we checked for missing values and controlled for outliers, multicollinearity, normality of distribution and reverse-coded scales. The data failed to meet the requirements for multivariate normality, with a Mardia's normalized coefficient outside the acceptable range (−3 < c.r. < +3) (Kline, 2011). As multivariate normality did not improve after removing cases far from the centroid, the decision was to retain the outliers and account for the non-normality of the data with Bollen–Stine bootstrapping during the measurement model analysis. Following a two-step approach,


the evaluation of the measurement part of the model preceded the structural one (Kline, 2011).
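The normalized Mardia's kurtosis coefficient used in this screening can be illustrated as follows (a minimal numpy sketch of the standard formula applied to synthetic data, not the Amos implementation):

```python
import numpy as np

def mardia_kurtosis_cr(X):
    """Normalized Mardia's multivariate kurtosis (critical ratio).

    Under multivariate normality the c.r. is approximately standard
    normal, so values outside roughly (-3, +3) flag non-normality.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = np.cov(X, rowvar=False, bias=True)                    # ML covariance estimate
    d2 = np.einsum("ij,jk,ik->i", Xc, np.linalg.inv(S), Xc)   # squared Mahalanobis distances
    b2p = np.mean(d2 ** 2)                                    # Mardia's kurtosis statistic
    expected = p * (p + 2)                                    # its value under normality
    return (b2p - expected) / np.sqrt(8 * p * (p + 2) / n)

rng = np.random.default_rng(0)
normal_data = rng.normal(size=(1000, 5))
skewed_data = rng.exponential(size=(1000, 5))
print(mardia_kurtosis_cr(normal_data))   # small in absolute value
print(mardia_kurtosis_cr(skewed_data))   # large positive: non-normal
```

The second print shows how heavy-tailed, skewed data pushes the critical ratio far outside the acceptable range, which is the situation that motivated the Bollen–Stine bootstrapping here.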

Measurement model

A Confirmatory Factor Analysis (CFA) was performed to assess the validity of the latent constructs employed in the study (Nilashi et al., 2016).

The overall goodness of fit of the measurement model was confirmed by the following fit indices: CMIN/DF = 2.66, GFI = 0.87, AGFI = 0.85, TLI = 0.94, CFI = 0.95, IFI = 0.95, RMR = 0.08, RMSEA = 0.05, 95% CI [0.05, 0.05]. The observed value of the model chi-square (χ²/df = 2023.94/762, p < .001) was significant, suggesting a poor fit. However, the sensitivity of this fit index to sample size (N > 200) and multivariate non-normality (Kline, 2011) renders it inappropriate for assessing the goodness of fit in this context. The model fit was achieved after re-specification, conducted by adding error correlations between items belonging to the same scale (see Figure 2), as suggested by the modification index values (M.I. > 20). In this case, correlating the error variances was theoretically sound because the items were measured on the same scale (Byrne, 2001).
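As a sanity check, CMIN/DF and RMSEA can be recomputed directly from the reported χ², degrees of freedom and sample size using the standard formulas (this sketch is independent of Amos):

```python
import math

def normed_chi_square(chi2, df):
    """CMIN/DF: chi-square divided by its degrees of freedom."""
    return chi2 / df

def rmsea(chi2, df, n):
    """Point estimate of the Root Mean Square Error of Approximation."""
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

# values reported for the measurement model (N = 695)
chi2, df, n = 2023.94, 762, 695
print(round(normed_chi_square(chi2, df), 2))   # 2.66
print(round(rmsea(chi2, df, n), 2))            # 0.05
```

Both recomputed values match the indices reported above.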

The model’s reliability in measuring the latent construct, assessed in SPSS, was confirmed by very good Cronbach’s alpha values (α > .80) for each scale (see Table 4). Factor loadings above minimum (> 0.60) indicated unidimensionality of the scales (Kline, 2011). Three items with factor loadings slightly below 0.60 (TRAI_5, TRO_6 and PPT_4) were retained given the achieved goodness of the fit of the model and in order to preserve the face validity of the validated scales. Moreover, the indicator reliability was also confirmed by an acceptable Composite Reliability (CR) score for each scale above the required threshold of 0.70 (Fornell & Larcker, 1981).

The ability of the indicators to measure the right construct, convergent validity, was confirmed by the statistically significant item loadings (p < .001) and by Average Variance Extracted (AVE) values above 0.50. Although one scale (PIQ) showed an AVE value slightly below 0.50 (0.49), the convergent validity of the scale is considered acceptable given its CR value higher than 0.70 (Fornell & Larcker, 1981). Discriminant validity, the ability of the scales to measure distinct constructs, was assessed with two tests. First, the AVE scores were compared with the squared correlations among the constructs. All AVEs across corresponding rows and columns, except for the scales PIQ, TRO and PU, were higher than the squared correlations among the constructs (see Table 5). In the second test, discriminant validity was demonstrated by the correlations between pairs of constructs not exceeding the accepted threshold of 0.90 (Kline, 2011) (see Table 6). Considering these results, the model was deemed acceptable for moving on to the structural part.
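The first discriminant validity test amounts to comparing each pair's squared correlation against the constructs' AVEs; it can be sketched as follows (illustrative values, not the thesis estimates):

```python
import numpy as np

def fornell_larcker_violations(ave, corr):
    """Return construct pairs whose squared correlation exceeds either AVE."""
    ave = np.asarray(ave)
    corr = np.asarray(corr)
    flagged = []
    for i in range(len(ave)):
        for j in range(i + 1, len(ave)):
            if corr[i, j] ** 2 > min(ave[i], ave[j]):
                flagged.append((i, j))
    return flagged

ave = [0.62, 0.49, 0.55]                       # hypothetical AVEs for three constructs
corr = np.array([[1.00, 0.40, 0.85],           # hypothetical construct correlations
                 [0.40, 1.00, 0.30],
                 [0.85, 0.30, 1.00]])
print(fornell_larcker_violations(ave, corr))   # [(0, 2)]: 0.85**2 = 0.72 > 0.55
```

A flagged pair, as for PIQ, TRO and PU here, indicates that the two scales share more variance with each other than with their own indicators.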

Results

The second phase of the analysis investigated the strengths and directions of the relationships between the latent constructs. This was performed through Maximum Likelihood Estimation (MLE) in Amos 25, controlling for the non-normality of the data with bootstrapping (Bollen–Stine bootstrapping was not applied here because it does not produce the 95% CI needed to determine the effects' significance). The original model did not fit the data. Therefore, as suggested by the modification index values, a theoretically sound path was added (PDE → PEOU). The final model (see Figure 3) was accepted after showing the following fit indices: χ²/df = 2545.75/909, p < .001, CMIN/DF = 2.80, GFI = 0.86, AGFI = 0.83, RMR = .10, IFI = 0.93, TLI = 0.92, CFI = 0.93, RMSEA = 0.05, 95% CI [0.05, 0.05]. The R² of the endogenous variable, the consumer's intention of using an ORC in the future, is 0.67, meaning that the exogenous constructs explain 67% of its variance.

Overall, the model supports most of the hypotheses. Starting from the direct effects (see Table 7), the model supports a medium effect of PDE on I2R (H1a, b* = 0.38, p < .001).


However, the effect of PDQ on I2R (H1b, b* = 0.04, p = .307) is not significant. PDE is not a predictor of PDQ (b* = 0.04, p = .402), not supporting H1c. PDQ has a small effect (b* = 0.08, p < .05) on PU, supporting H2. As posited by H3a, PU has a very strong effect on I2R (b* = 0.77, p < .001). H3b is partially supported, with a weak but negative effect of PEOU on I2R (b* = -0.23, p < .001). The effect of SA on I2R posited by H5 (b* = 0.08, p = .062) is not significant. However, SA has a small significant effect (b* = 0.18, p < .001) on PEOU, supporting H6. The effect of TRO on I2R is not significant (b* = -0.12, p = .233), thus not supporting H7. TRO has a strong significant effect on PU (b* = 0.69, p < .001), as posited by H8. The effect of PPT on TRO is significant with a moderate effect (b* = 0.38, p < .001), supporting H9. PIQ is a strong predictor of SA (b* = 1.08, p < .001) but not of PU (b* = -0.04, p = .606), supporting H11a but not H11b. PIQ significantly predicts PDQ (b* = 0.76, p < .001), supporting H13. Finally, TRAI is a significant predictor of both I2R (b* = 0.23, p < .001) and TRO (b* = 0.59, p < .001), supporting H14 and H15.

With regard to the proposed indirect effects (see Table 8), the model supports a small indirect effect of PEOU on I2R via PU, as posited by H4 (b* = 0.10, p < .01). The indirect effect of PPT on I2R via TRO proposed by H10, although significant (b* = 0.03, p < .01), is almost null. The indirect paths posited by H12a (b* = 0.04, p = .156) and H12b (b* = 0.03, p = .156) are not supported.
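The bootstrap logic behind such indirect-effect tests can be illustrated on synthetic mediation data (a simplified sketch with a single mediator and ordinary least squares slopes, not the full SEM; the construct labels in comments are only for orientation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 695                                   # same size as the study sample
X = rng.normal(size=n)                    # predictor (e.g., PPT)
M = 0.4 * X + rng.normal(size=n)          # mediator (e.g., TRO)
Y = 0.5 * M + rng.normal(size=n)          # outcome (e.g., I2R)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]            # x -> m slope
    b = np.polyfit(m, y, 1)[0]            # m -> y slope (no direct path assumed)
    return a * b

# percentile bootstrap of the a*b product
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)           # resample cases with replacement
    boot.append(indirect_effect(X[idx], M[idx], Y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo > 0)   # a 95% CI excluding zero indicates a significant indirect effect
```

The percentile interval of the resampled a·b products replaces a normal-theory standard error, which is why bootstrapping is the preferred significance test for indirect effects under non-normal data.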

Within the control variable effects (see Table 9), Q8 (b* = -0.20, p < .01) and Q11 (b* = -0.14, p < .001) have a weak negative effect on I2R. Q10 predicts PDQ and TRO, although the effects are rather small (b* = -0.15, p < .01; b* = -0.11, p < .05). Finally, Q11 significantly influences SA (b* = -0.10, p < .01), TRO (b* = -0.11, p < .001) and PU (b* = -0.10, p < .001).


Lastly, no equivalent models arise according to the Lee–Hershberger replacing rules (Kline, 2011), as there are no complete blocks at the beginning of the model and no unidirectionally linked endogenous variables sharing the same cause.

Conclusion & Discussion

The aim of the current study was to investigate whether ORCs, by addressing the quality and quantity issues of online reviews, improve the consumer's online decision-making, and whether this benefit translates into the consumer's adoption of ORCs for future purchasing decisions. Moreover, we investigated the influence of trusting beliefs in AI technology, the underlying logic of ORCs' algorithmic analyses, on the consumer's future use of ORCs. Through the combination of decision-making theory, information processing theory and the TAM, and by drawing from empirical research on AI technology and online reviews, we developed a model of relevant predictors of ORC use, offering crucial opportunities for both academics and practitioners. Departing from the proposed similarities between ORCs and RAs, we applied and extended Xiao and Benbasat's (2007) framework, originally developed for RA acceptance in e-commerce contexts. Data was collected through an online survey administered to a representative sample of Dutch online consumers. Subsequently, the model was tested through structural equation modeling (SEM) analysis.

The results of this study provide insights into the consumer's intention of using ORCs for future purchasing decisions in online environments. Overall, the satisfactory fit of the model supports four main findings. 1. ORCs tackle the quantity issue of online reviews, as demonstrated by the significant effect of perceived decision effort on the intention of using an ORC in the future. 2. ORCs address the online reviews' quality issue, as demonstrated by the significant effect of the perceived information quality on the perceived decision quality. 3. The TAM, through its fundamental constructs, perceived ease of use and perceived usefulness, is a timely theoretical framework able to explain the adoption of newly developed technologies. 4. The individual's trusting beliefs in AI technology are a critical predictor of the future use of technologies embedding AI and, as shown by this research, are a precursor of the individual's trust in ORCs. Moreover, the extent to which the system transparently communicates its inner logic lays the basis for the development of trust in the system itself and influences the future adoption of the technology as well.

By looking at the paths’ specific findings, the study confirms the importance of the perceived usefulness of a technology for its future use (Al-Natour et al., 2008). Although previous findings supported a direct effect of the constructs involved in the user’s evaluation of the technology, the present study offers a different, yet interesting, perspective. In fact, it shows that the individual’s perceived ease of use of the ORC, along with the individual’s trust in the ORC and the perceived satisfaction, indirectly affect its future use via the

perceived usefulness. In line with previous studies, those set of beliefs that are formed during the individual’s direct experience with the technology are still relevant, but we provide support for the strong mediation of the perceived usefulness.

Our results corroborate the existing notion that, in the relationship between a user and a new technology, trust mirrors whether the user perceives that the ORC is working for the user's benefit and is trustworthy (Aghdaie et al., 2012; Siau & Wang, 2018). Interestingly, not only the individual's trust in the medium of the information, but also the trust in its source, is a relevant predictor of technology use (Lucassen & Schraagen, 2012). In fact, the individual's trusting beliefs in AI technology (the source), the underlying logic of the ORC, and the trust in the ORC (the medium) are predictors of the consumer's intention of using an ORC in the future. Moreover, the study shows how the individual's pre-existing attitude towards AI technology positively induces the intention of using a technology that embeds AI.

Theoretical insights


This study sheds light on the simultaneous effect of both the user's evaluation of a new technology and its perceived influence on decision-making. Interestingly, the consumer's enhanced confidence in the decision influences the perceived usefulness of the ORC which, subsequently, is a stronger predictor of the intention of using an ORC in the future. Moreover, low levels of perceived decision effort positively influence the intention of using an ORC in the future. This finding provides further evidence that a decision support system frees the individual from the cognitive effort of reviewing a great quantity of information, tackling the online review quantity issue. Although previous studies demonstrated a direct impact of these factors on the intention of using a new technology (Chen, 2017; Xiao & Benbasat, 2007), the difference in this study is that we measured this effect simultaneously with the individual's evaluation of the technology. The perceived usefulness proved to be a stronger predictor than the perception that the technology improves the individual's decision quality. Moreover, the study demonstrates that a reduced cognitive effort also increases the perception that the technology is easy to use.

ORC’s characteristics are influential in increasing the intention of using an ORC in the future. As previously shown, the extent to which the user understands how the ORC works and its underlying logic predicts the trust in the ORC and, subsequently, the intention of using an ORC in the future (Cramer et al., 2008).

Finally, the successful adoption of an ORC is also determined by the extent to which the user perceives the information to be of high quality. The findings provide clear evidence that the more the individual perceives the information to be reliable, trustworthy and up to date, the more they will think the ORC is useful and, therefore, the more inclined they will be to use it in future purchases (Elwalda et al., 2016; Gao et al., 2012). Moreover, the quality of the information influences the consumer's confidence in the decision, thus supporting the ORC's contribution in tackling the online review quality issue, in line with the findings of information processing theory.

Societal and managerial implications

Managers and professionals equipped with this new understanding are aware of the driving predictors of ORC use and, by focusing on these factors, they can develop activities designed to encourage consumers to use ORCs. Moreover, they are aware of the ORCs' underlying mechanism of enhancing purchasing decisions. They should also be aware that a good experience with the product itself is not enough for its future adoption; they should therefore consider the user's existing beliefs about the embedded technology, AI, as critical. Given the relevance of online reviews for brands, this research contributes to the understanding of future trends in online reviews (Bellman, Johnson, Lohse, & Mandel, 2006; Liu & Park, 2015). Professionals who want to introduce AI-enabled technologies supporting consumers' decisions into the market should consider the importance of transparency about the logic behind the workings of the technology and should advocate for open communication with consumers about how the product works. With this research, we can also confirm that this is extremely important for products that employ AI technologies and algorithms that are not yet understood by the majority.

Limitations and future research

Despite its merits, the current study is not free from shortcomings concerning both its validity and reliability. Starting with the latter, three endogenous variables did not pass one of the tests for discriminant validity (PIQ, PU and TRO). This might be explained by having measured the constructs with the same answer scales and instrument (survey design) (Podsakoff, MacKenzie, & Podsakoff, 2012). Further analyses should control for the presence of common method bias and should replicate the study by placing attention on pauses between the items of different constructs during the survey (Podsakoff et al., 2012).


Although the population validity of the study is ensured by having employed a representative sample of online consumers of Dutch nationality recruited through an accredited panel agency, careful attention should be paid when generalizing the findings to consumers of other countries. The Netherlands has a high rate of educated citizens (European Commission, 2018) and is technologically developed (Boyrikova, 2017), meaning that people have already come across AI-enabled information systems and, overall, understand their underlying workings. This might not be true for countries outside Europe where, for example, the influence of trusting beliefs in AI might be even stronger.

Another threat to the external validity of the study is the generalizability of the findings to purchasing situations involving experiential products other than mattresses (e.g., restaurants) and search products (e.g., cameras). In fact, it has been demonstrated that RAs are more influential for experience products than for search ones (Benlian et al., 2012). Variability in the findings might also exist depending on the ORC employed, because of differences in the transparency and explanation of the employed algorithms. Empirical evidence of the actual behavior of consumers should be gathered through the participant's interaction with the ORC itself. In fact, we acknowledge that the present study, by showing participants a video, was only able to measure the intention of the behavior. The behavioral predictions of ORC use should also be tested by controlling for the consumer's online purchasing behavior characteristics (maximizers vs. satisficers).

Finally, the internal validity of the study should be further assessed by replicating the study in a longitudinal fashion, rather than by only using a cross-sectional technique. This would confirm the temporal causality of the constructs employed in the study, as well as shed light on which factors play an important role in the evolution of the long-term adoption of ORC systems after actual use. Lastly, by conducting an experimental study, future research would be able to assess and compare the differential effect of online reviews and ORC systems in aiding the purchasing decision.


References

Aghdaie, S. F. A., Sanayei, A., & Etebari, M. (2012). Evaluation of the Consumers’ Trust Effect on Viral Marketing Acceptance Based on the Technology Acceptance Model. International Journal of Marketing Studies, 4(6), 79–94.

https://doi.org/10.5539/ijms.v4n6p79

Aksoy, L., Cooil, B., & Lurie, N. H. (2011). Decision Quality Measures in Recommendation Agents Research. Journal of Interactive Marketing, 25(2), 110–122.

https://doi.org/10.1016/j.intmar.2011.01.001

Al-Natour, S., Benbasat, I., & Cenfetelli, R. T. (2008). The effects of process and outcome similarity on users’ evaluations of decision aids. Decision Sciences, 39(2), 175–211. https://doi.org/10.1111/j.1540-5915.2008.00189.x

Aljukhadar, M., Senecal, S., & Daoust, C. E. (2012). Using recommendation agents to cope with information overload. International Journal of Electronic Commerce, 17(2), 41–70. https://doi.org/10.2753/JEC1086-4415170202

Alsajjan, B., & Dennis, C. (2010). Internet banking acceptance model: Cross-market examination. Journal of Business Research, 63(9–10), 957–963.

https://doi.org/10.1016/j.jbusres.2008.12.014

AMZDiscover. (2018). 3 Free Fake Reviews Detectors to help you Spot Fake Reviews on Amazon. Retrieved from https://www.amzdiscover.com/blog/3-free-fake-reviews-detectors-to-help-you-spot-fake-reviews-on-amazon/

Arjun, M., Vivek, V., Bing, L., & Natalie, G. (2013). Fake Review Detection: Classification and Analysis of Real and Pseudo Reviews. Technical Report, 80(2), 159–169. Retrieved from https://pdfs.semanticscholar.org/4c52/1025566e6afceb9adcf27105cd33e4022fb6.pdf

Ashraf, M., Ismawati Jaafar, N., & Sulaiman, A. (2019). System- vs. consumer-generated recommendations: affective and social-psychological effects on purchase intention. Behaviour and Information Technology, 0(0), 1–14.

https://doi.org/10.1080/0144929X.2019.1583285

Ashraft, M., Jaafar, N. I., & Sulaiman, A. (2016). The mediation effect of trusting beliefs on the relationship between expectation-confirmation and satisfaction with the usage of online product recommendations. The South East Asian Journal of Management, 10(1), 75–94.

Bajaj, A., & Nidumolu, S. R. (1998). A feedback model to understand information system usage. Information and Management, 33(4), 213–224. https://doi.org/10.1016/S0378-7206(98)00026-3

Bellman, S., Johnson, E. J., Lohse, G. L., & Mandel, N. (2006). Designing marketplaces of the artificial with consumers in mind: Four approaches to understanding consumer behavior in electronic environments. Journal of Interactive Marketing, 20(1), 21–33. https://doi.org/10.1002/dir.20053

Benbasat, I., & Wang, W. (2005). Trust In and Adoption of Online Recommendation Agents. Journal of the Association for Information Systems, 6(3), 72–101.

https://doi.org/10.17705/1jais.00065

Benbasat, I., & Wang, W. (2007). Recommendation agents for electronic commerce: Effects of explanation facilities on trusting beliefs. Journal of Management Information Systems, 23(4), 217–246. https://doi.org/10.2753/MIS0742-1222230410

Benbasat, I., & Wang, W. (2016). Empirical Assessment of Alternative Designs for Enhancing Different Types of Trusting Beliefs in Online Recommendation Agents. Journal of Management Information Systems, 33(3), 744–775. https://doi.org/10.1080/07421222.2016.1243949

Benlian, A., Titah, R., & Hess, T. (2012). Differential effects of provider recommendations and consumer reviews in e-commerce transactions: An experimental study. Journal of Management Information Systems, 29(1), 237–272. https://doi.org/10.2753/MIS0742-1222290107

Bonsón Ponte, E., Carvajal-Trujillo, E., & Escobar-Rodríguez, T. (2015). Influence of trust and perceived value on the intention to purchase travel online: Integrating the effects of assurance on trust antecedents. Tourism Management, 47, 286–302.

https://doi.org/10.1016/j.tourman.2014.10.009

Boyrikova, A. (2017). Why the Netherlands is the new Silicon Valley. Retrieved from https://innovationorigins.com/why-netherlands-is-the-new-silicon-valley/

Brunk, J., Mattern, J., & Riehle, D. M. (2019). Effect of Transparency and Trust on Acceptance of Automatic Online Comment Moderation Systems. 2019 IEEE 21st Conference on Business Informatics (CBI), 01, 429–435.

https://doi.org/10.1109/cbi.2019.00056

Burton, J., & Khammash, M. (2010). Why do people read reviews posted on consumer-opinion portals? Journal of Marketing Management, 26(3–4), 230–255.

https://doi.org/10.1080/02672570903566268

Byrne, B. M. (2001). Structural Equation Modeling With AMOS, EQS, and LISREL: Comparative Approaches to Testing for the Factorial Validity of a Measuring Instrument. International Journal of Testing, 1(1), 55–86. https://doi.org/10.1207/S15327574IJT0101

Chen, C.-W. (2017). Five-star or thumbs-up? The influence of rating system types on users’ perceptions of information quality, cognitive effort, enjoyment and continuance intention. Internet Research, 27(3), 478–494. https://doi.org/10.1108/IntR-08-2016-0243

Chen, C. H., Nguyen, B., Klaus, P. “Phil,” & Wu, M. S. (2015). Exploring Electronic Word-of-Mouth (eWOM) in The Consumer Purchase Decision-Making Process: The Case of Online Holidays – Evidence from United Kingdom (UK) Consumers. Journal of Travel and Tourism Marketing, 32(8), 953–970. https://doi.org/10.1080/10548408.2014.956165

Cheung, C. M. K., Lee, M. K. O., & Rabjohn, N. (2008). The impact of electronic word-of-mouth: The adoption of online opinions in online customer communities. Internet Research, 18(3), 229–247. https://doi.org/10.1108/10662240810883290

Clement, J. (2018). Impact of user-generated content such as customer reviews and ratings according to online shoppers in the United States as of March 2017. Retrieved from https://www.statista.com/statistics/253371/ways-online-customer-reviews-affect-opinion-of-local-businesses/

Cramer, H., Evers, V., Ramlal, S., Van Someren, M., Rutledge, L., Stash, N., … Wielinga, B. (2008). The effects of transparency on trust in and acceptance of a content-based art recommender. User Modeling and User-Adapted Interaction, 18(5), 455–496. https://doi.org/10.1007/s11257-008-9051-3

Dabholkar, P. A., & Sheng, X. (2012). Consumer participation in using online recommendation agents: Effects on satisfaction, trust, and purchase intentions. Service Industries Journal, 32(9), 1433–1449. https://doi.org/10.1080/02642069.2011.624596

Davis, F. D. (1985). A technology acceptance model for empirically testing new end-user information systems: Theory and results (Doctoral dissertation). Massachusetts Institute of Technology.

Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008

Elwalda, A., Lü, K., & Ali, M. (2016). Perceived derived attributes of online customer reviews. Computers in Human Behavior, 56, 306–319. https://doi.org/10.1016/j.chb.2015.11.051

Enkel, E. (2017, April). To Get Consumers to Trust AI, Show Them Its Benefits. Harvard Business Review, pp. 2–5.

European Commission. (2018). Education and Training. Monitor 2018: Netherlands. Luxembourg. https://doi.org/10.2766/067643

Fakespot.com. (2019). Say no to fake reviews and counterfeits. Retrieved from https://www.fakespot.com

Filieri, R., & McLeay, F. (2014). E-WOM and Accommodation: An Analysis of the Factors That Influence Travelers’ Adoption of Information from Online Reviews. Journal of Travel Research, 53(1), 44–57. https://doi.org/10.1177/0047287513481274

Fornell, C., & Larcker, D. F. (1981). Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. Journal of Marketing Research, 18(1), 39–50.

Furner, C. P., & Zinko, R. A. (2017). The influence of information overload on the development of trust and purchase intention based on online product reviews in a mobile vs. web environment: an empirical investigation. Electronic Markets, 27(3), 211–224. https://doi.org/10.1007/s12525-016-0233-2

Gao, J., Zhang, C., Wang, K., & Ba, S. (2012). Understanding online purchase decision making: The effects of unconscious thought, information quality, and information quantity. Decision Support Systems, 53(4), 772–781. https://doi.org/10.1016/j.dss.2012.05.011

Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM in Online Shopping: An Integrated Model. MIS Quarterly, 27(1), 51–90. https://doi.org/10.2307/30036519

Graefe, A., Haim, M., & Haarmann, B. (2018). Readers’ perception of computer-generated news: Credibility, expertise, and readability. Journalism, 19(5), 595–610. https://doi.org/10.1177/1464884916641269

Guzman, A. L., & Lewis, S. C. (2019). Artificial intelligence and communication: A Human–Machine Communication research agenda. New Media and Society. https://doi.org/10.1177/1461444819858691

Han, I., Park, D.-H., & Lee, J. (2006). Information Overload and its Consequences in the Context of Online Consumer Reviews. In Pacific Asia Conference on Information Systems (PACIS) (p. 28). Retrieved from http://aisel.aisnet.org/pacis2006/28

Hanani, U., Shapira, B., & Shoval, P. (2001). Information filtering: Overview of issues, research and systems. User Modeling and User-Adapted Interaction, 11(3), 203–259. https://doi.org/10.1023/A:1011196000674

Hansen, J. M., Saridakis, G., & Benson, V. (2018). Risk, trust, and the interaction of perceived ease of use and behavioral control in predicting consumers’ use of social media for transactions. Computers in Human Behavior, 80, 197–206. https://doi.org/10.1016/j.chb.2017.11.010

Häubl, G., & Trifts, V. (2000). Consumer decision making in online shopping environments: The effects of interactive decision aids. Marketing Science, 19(1), 4–21. https://doi.org/10.1287/mksc.19.1.4.15178

Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied artificial intelligence and trust: The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change, 105, 105–120. https://doi.org/10.1016/j.techfore.2015.12.014

Hicks, A., Comp, S., Horovitz, J., Hovarter, M., Miki, M., & Bevan, J. L. (2012). Why people use Yelp.com: An exploration of uses and gratifications. Computers in Human Behavior, 28(6), 2274–2279. https://doi.org/10.1016/j.chb.2012.06.034

Hong, H., Xu, D., Wang, G. A., & Fan, W. (2017). Understanding the determinants of online review helpfulness: A meta-analytic investigation. Decision Support Systems, 102, 1–11. https://doi.org/10.1016/j.dss.2017.06.007

Hossain, M. M., & Prybutok, V. R. (2008). Consumer acceptance of RFID technology: An exploratory study. IEEE Transactions on Engineering Management, 55(2), 316–328. https://doi.org/10.1109/TEM.2008.919728

Hostler, R. E., Yoon, V. Y., Guo, Z., Guimaraes, T., & Forgionne, G. (2011). Assessing the impact of recommender agents on on-line consumer unplanned purchase behavior. Information and Management, 48(8), 336–343. https://doi.org/10.1016/j.im.2011.08.002

Hu, N., Bose, I., Koh, N. S., & Liu, L. (2012). Manipulation of online reviews: An analysis of ratings, readability, and sentiments. Decision Support Systems, 52(3), 674–684. https://doi.org/10.1016/j.dss.2011.11.002

Hwang, C. (2014). Consumers’ acceptance of wearable technology: Examining solar-powered clothing (Thesis). Iowa State University.

Irina, M., & Sutton, S. G. (2015). The effects of decision aid structural restrictiveness on cognitive load, perceived usefulness, and reuse intentions. International Journal of Accounting Information Systems, 17, 16–36. https://doi.org/10.1016/j.accinf.2014.02.001

Jiang, L. (2018). Prediction tool for consumer decision making in E-Commerce: Exploring “If not” type of explanation facilities on trust. Americas Conference on Information Systems 2018: Digital Disruption, AMCIS 2018, 1–5.
