
Believe it or not. The credibility of online product reviews.

An experiment on the effects of review valence and reputation badges on credibility, brand attitude and purchase intention.

Ieke Deckers - 10373012

Master’s Thesis – Persuasive Communication
Supervisor: Dr. Marieke Fransen

Graduate School of Communication
Date: June 23, 2020


Abstract

Online product reviews are an important source of information in the decision-making process of consumers. Consumers who come across positive reviews are expected to be more willing to complete a purchase, but only when they perceive the review as credible. This experiment (N = 194) examined how review valence (moderately positive / moderately negative / extremely negative) presented with or without a reputation badge affected consumer responses and perceptions of credibility. The main findings of this experiment were that a moderately positive review resulted in higher purchase intentions than a moderately negative review, and that a moderately positive review led to more positive brand attitudes compared to both negative reviews. In addition, our findings indicate that presenting a reputation badge near a product review does not increase consumers’ perceived credibility.

Key words: review valence, eWOM credibility, reputation cues, online product reviews, brand attitude, purchase intention.


Introduction

Online product reviews have become an important and frequently used source of pre-purchase information for consumers browsing product pages of e-commerce websites (Park & Kim, 2008; Sen & Lerman, 2007). Product reviews are considered an influential type of electronic word-of-mouth (eWOM), a term that refers to “any positive or negative online statement made by actual, potential, or former customers about a product or company, made available to a multitude of people via the Internet” (Hennig-Thurau, Gwinner, Walsh, & Gremler, 2004). In contrast to traditional face-to-face word-of-mouth, eWOM is created and published online by unfamiliar people. Online product reviews are considered one of the most powerful eWOM channels affecting the purchase decision-making process (Mudambi & Schuff, 2010; Bambauer-Sachse & Mangold, 2011; Duan, Gu, & Whinston, 2008).

By inviting former customers to write a product review, a positive or negative product evaluation can be conveyed; this evaluative direction is referred to as review valence (Frijda, 1986). Positive reviews are advantageous for brands and retailers, as prior research has revealed that reading a positive product review can improve brand attitude and increase the likelihood that the consumer will purchase the product (e.g. Keller, 2007; Schlosser, 2011; Doh & Hwang, 2009). In contrast, a negative product review can worsen a consumer’s brand attitude and lower his or her purchase intention (Cheung & Thadani, 2012; Sparks & Browning, 2011; Lee, Park, & Han, 2008). However, consumers often discredit product evaluations when they believe that the source of the message is not believable, and are less willing to accept and act upon product information that they do not perceive as credible (Crowley & Hoyer, 1994). As in offline retail, credibility is considered crucial in an online retail context, possibly even more so because of the impersonal nature of online selling. But because a review is written by a stranger whose identity is hidden, assessing the credibility of an online product review is complex (Cheung, Luo, Sia, & Chen, 2007). This assessment includes perceptions of the reviewer’s trustworthiness and relevant expertise (Chatterjee, 2001; Schindler & Bickart, 2005). Some online retailers have found ways to signal to consumers which reviews are likely to be credible, by presenting cues that give consumers an indication of the reputation of the reviewer. Such a cue can be a reputation badge next to a review and the reviewer’s name, indicating that this reviewer’s earlier product reviews were appreciated by other website visitors. Because they aggregate the assessments of past users, such badges have been found to increase consumers’ perceptions of review credibility (Shan, 2016; Xu, 2014).


Because reviews are considered so influential in the consumer decision-making process, the relationship between review valence and credibility has been widely researched. While most researchers agree that positive reviews are perceived as more credible than negative reviews (e.g. Lin & Xu, 2017; Pentina, Bailey, & Zhang, 2018), research has found mixed results on the perceived credibility of moderately negative product information. For instance, according to Crowley and Hoyer (1994) and Kupor and Tormala (2018), consumers perceive product information that contains some negative statements (but is not extremely negative) as more thoughtful than exclusively positive product information. These mixed findings suggest that, if credibility-enhancing cues are provided, moderately negative product reviews can also be perceived as highly credible. As reputation badges have been found influential in shaping eWOM credibility perceptions, depicting a reputation badge near a moderately negative review may increase its credibility.

Therefore, this study examines the differences between consumer responses after exposure to reviews of different valence that are shown with or without credibility-enhancing reputation cues. We will examine the effect of review valence on consumer responses and, more importantly, add to the field of research on the effect of reputation cues on consumers’ perceived credibility. The findings of this research will support e-commerce websites and online retailers in their decisions on providing credibility-enhancing cues on their product pages.

RQ: What are the effects of review valence and reputation cues on consumers’ perceived credibility of an online review, and how does perceived credibility subsequently affect consumers’ brand attitude and purchase intention?

Theoretical framework

The influence of reviews on consumer responses is often explained in relation to how and why consumers use eWOM in general. Mostly, consumers read eWOM, including online product reviews, to make better and well-informed purchase decisions (Hennig-Thurau, Gwinner, Walsh, & Gremler, 2004). In addition to supporting consumers’ purchase intention, numerous studies have also found that consumer reviews are able to influence consumers’ brand attitudes (Cheung, Xiao, & Liu, 2012; Duan, Gu, & Whinston, 2008; Mudambi & Schuff, 2010; Park, Lee, & Han, 2007). Brand attitude is a consumer’s overall evaluation of a brand, for instance, whether he or she values or trusts that brand (Wu & Wang, 2011). As consumers who have a positive attitude towards a brand are more likely to purchase one of the brand’s products, brand attitude is considered an important

construct in consumer research (Mudambi & Schuff, 2010; Park, Lee, & Han, 2007). Consumers who come across positive reviews are more likely to approve of the brand, and will often feel encouraged to complete a purchase (Keller, 2007; Schlosser, 2011). Numerous studies examined consumers’ decision-making process depending on review valence, and found that a positive product review increased consumers’ attitudes toward the product (Wang, Cunningham, & Eastin, 2015; Doh & Hwang, 2009; Sen & Lerman, 2007). One possible underlying reason for this finding is that positive product reviews confirm consumers’ positive pre-existing attitudes about the product or brand, and reinforce further consideration of the product or brand (Forman, Ghose, & Wiesenfeld, 2008). Through this so-called confirmation bias, people who detect that certain information is consistent with their prior beliefs, attitudes, or knowledge are more likely to trust the presented information (Crocker, 1981; Alloy & Tabachnik, 1984) and to take it into account in subsequent purchase decisions (Peterson & Wilson, 1985; Zeithaml, 1988).

Mudambi and Schuff (2010) examined the effect of moderately or extremely positive reviews on the consumer decision-making process, and found that consumers perceived moderately positive reviews about “experience products” as more helpful than extremely positive reviews. Experience products are products whose attributes and quality cannot be estimated easily, or depend upon subjective evaluations. In contrast, reading negative reviews about a product or brand can decrease consumers’ interest in a brand and discourage them from buying the product (Lee, Park, & Han, 2008; Keller, 2007; Schlosser, 2011). For instance, Lee, Rodgers, and Kim (2009) presented their participants with a product page for a computer accompanied by a positive review, a moderately negative review, an extremely negative review, or no review at all. They found that positive reviews led to a more positive brand attitude, whilst an extremely negative review led to a more negative brand attitude. In addition, they found that an extremely negative product review led to more negative attitudes compared to a moderately negative review or no review at all.

We also expect that varying degrees of review valence will influence consumer responses differently. In line with Lee, Rodgers, and Kim (2009), we expect that exposure to a moderately positive review will lead to more positive brand attitudes and increased purchase intentions compared to moderately negative and extremely negative reviews. Similarly, we expect that exposure to a moderately negative review will lead to more positive brand attitudes and increased purchase intentions compared to an extremely negative review. The following hypothesis is proposed:


H1a. Exposure to a moderately positive review results in a more positive brand attitude and higher purchase intention compared to a moderately negative and an extremely negative review. Exposure to a moderately negative review results in a more positive brand attitude and higher purchase intention compared to an extremely negative review.

Consumers often discredit product endorsements when they believe that the source is not credible, or when they suspect that the given opinion is not based upon a truthful evaluation (Crowley & Hoyer, 1994). But how does one assess the credibility of an online product evaluation when the writer is anonymous? For traditional communication, Hovland and Weiss (1951) described how the attractiveness, physical features, familiarity, and power of a communicator may affect the credibility of the source. The assessment of the credibility of online information works differently than in offline face-to-face communication. After all, in computer-mediated communication, many of the attributes described by Hovland and Weiss are impossible to assess. For instance, the nature of virtual textual exchange does not permit the conveyance of credibility cues such as facial expressions (Hovland, Janis, & Kelley, 1953). In WOM research, the construct of credibility is often considered multi-dimensional, and can be examined through the constructs of source credibility and message credibility. Ohanian (1990) considered source credibility to consist of perceptions of source expertise and trustworthiness. As interpreted by the receiver, source expertise refers to the extent to which a source is perceived as being able to make valid statements, whilst source trustworthiness refers to the level of confidence that the source intends to communicate valid assertions (Ohanian, 1990; Hovland & Weiss, 1951). In addition, the construct of message credibility is associated with the quality of the message, such as whether the reader perceives the message as factual and accurate (Cheung, Sia, & Kuan, 2012).

Previous studies revealed that reviews differing in valence influenced consumers’ perceived credibility differently. For instance, Lin and Xu (2017) found that a positive review was perceived as more credible than a negative review. However, varying degrees of positive or negative reviews can be distinguished. Pentina, Bailey, and Zhang (2018) compared three types of review valence, and found that a one-sided positive message increased consumers’ perceptions of review trustworthiness and credibility more than a one-sided negative review or a two-sided review consisting of both negative and positive statements. In addition, Schindler and Bickart (2012) found that a moderately positive review was perceived as more trustworthy than an extremely positive review, because consumers questioned the reviewer’s motives when a review was overly positive. Finally, research


considers extremely negative product information the least credible type of eWOM (Pentina, Bailey, & Zhang, 2018; Sen & Lerman, 2007). According to Sen and Lerman (2007), consumers often attribute extremely negative product statements to internally based motivations and subjective reasoning on the part of the reviewer. In line with the described findings, we propose that:

H1b. Exposure to a moderately positive review results in higher perceived credibility compared to a moderately negative and an extremely negative review. Exposure to a moderately negative review results in higher perceived credibility compared to an extremely negative review.

Various eWOM studies have examined the role of review credibility in the relationship between review valence and consumer responses (e.g. Jimenez & Mendoza, 2013; Wang, 2005; Ba & Pavlou, 2002; Chang, Rhodes, & Lok, 2013). Multiple studies have described that the effects of eWOM valence on purchase intentions and brand attitudes only materialize when the consumer perceives the message as credible (Ba & Pavlou, 2002; Wang, 2005). For instance, Wang (2005) examined whether product endorsements from sources that consumers perceived as experts led to more positive brand attitudes than product endorsements written by reviewers considered to have low expertise. He found that eWOM source credibility was an important factor in the formation of brand attitude and purchase intention for both luxury and daily consumer goods. Secondly, Ba and Pavlou (2002) examined the effect of a positive or negative review on the number of premium-priced products sold, taking into consideration the role of trust. They found that sales of premium-priced products depended on the degree of trust in the reviewer. As trust is considered a main component of source trustworthiness, it is likely that this mediating effect will be found in the present study too. Finally, Reichelt, Sievert, and Jacob (2014) and Chang, Rhodes, and Lok (2013) described that the effect of eWOM messages on consumers’ brand attitudes and purchase intentions depended on consumers’ assessment of the credibility (Reichelt, Sievert, & Jacob, 2014) or trustworthiness (Chang, Rhodes, & Lok, 2013) of the online information.

In line with the above-described findings confirming the role of credibility in the relationship between reviews and consumer responses, the present study expects that perceived credibility of the review mediates the relationship between review valence, brand attitudes, and purchase intentions. More specifically, we expect that:


H1c. The relationship between review valence and consumer responses is mediated by perceived credibility.

Above, we explained that most research agrees that positive eWOM is perceived as most credible, followed by neutral or slightly negative eWOM and, finally, extremely or solely negative eWOM (e.g. Pentina, Bailey, & Zhang, 2018; Schindler & Bickart, 2012). But not all research agrees that moderately negative product information is necessarily less credible than moderately positive information. In contrast, some studies have shown that, under the right circumstances, moderately negative product information can be perceived as more credible than positive information. First, Kupor and Tormala (2018) found that a moderately negative product review was perceived as more credible than positive reviews, but only when this moderately negative review was surrounded by reviews with a deviating valence. However, this effect has not yet been found for moderately negative reviews that are presented to consumers without the presence of other reviews. Second, Crowley and Hoyer (1994) examined how product information that contained some negative product statements affected the credibility of the message. They found that including some negative product statements led consumers to conclude that the writer was “telling the truth” and being more honest about the product, compared to product information that did not mention any negative product attributes. Moreover, according to Crowley and Hoyer, product information that contains some negative statements runs counter to “expected” positive endorsements. This perception may subsequently increase consumers’ perceived credibility of the product information. Although Crowley and Hoyer examined the credibility of product information in general, and not specifically consumer product reviews, their findings do suggest that there may be more to the credibility of moderately negative eWOM.

Building upon the findings of Kupor and Tormala (2018) and Crowley and Hoyer (1994), depicting certain credibility-enhancing cues may increase the perceived credibility of moderately negative product information. In the last few years, multiple online retailers have integrated such cues into the review sections of their product pages. For instance, Amazon has awarded 500 registered accounts a “Top-500 reviewer” badge. Such badges are supposed to help consumers identify which reviews contain the most credible information, based upon the to-date reputation of the reviewer (Park & Lee, 2009). Usually, a registered reviewer can earn such a badge after posting reviews that have been ranked as helpful by other website visitors. More specifically, such a badge is awarded through a


calculation based on the number of ‘helpful votes’ relative to the number of reviews published by that reviewer. System-calculated cues, such as the described badge, are considered influential, trustworthy, objective, and free of bias, and can significantly increase the perceived credibility of eWOM messages (Sundar & Nass, 2001; Van Der Heide, Johnson, & Vang, 2013). Earlier eWOM research has shown how reputation badges increase consumers’ perceptions of credibility. For instance, when Shan (2016) examined the relationship between the reputation of a reviewer and consumers’ perceived source credibility, he found that when a reputation badge was presented with a review, higher trustworthiness scores were given than by consumers who were exposed to a review without the badge. Secondly, Cheung, Luo, Sia, and Chen (2007) examined how multiple determinants of an eWOM message affected consumers’ perceived eWOM credibility. Of all determinants, the reputation of the reviewer was the strongest in increasing consumers’ perceived credibility of his or her message. More specifically, Cheung, Luo, Sia, and Chen found that presenting a reputation cue near an online product review increased consumers’ perceived trustworthiness of the source, compared to the same review without the cue. These findings suggest that depicting a reputation cue can significantly increase consumers’ perceived credibility of a review.

We expect that presenting a reputation badge with a review will cause an interaction effect between review valence, the badge, and perceived credibility. In correspondence with the findings of Shan (2016) and Cheung, Luo, Sia, and Chen (2007), we expect that consumers who are exposed to a moderately negative review presented with a reputation badge will perceive this review as significantly more credible than the same review without such a badge. Presenting the reputation badge with this review is expected to affect credibility such that consumers will believe that the reviewer has honestly described his or her disappointing product experience. The badge is not expected to increase consumers’ perceived credibility of the extremely negative review, because extremely negative product evaluations are often perceived as a subjective and emotional rant of the reviewer (Sen & Lerman, 2007). In addition, it is unlikely that the badge will further increase the credibility of moderately positive product evaluations, because according to Kupor and Tormala (2018) and Crowley and Hoyer (1994), this type of product review is already perceived by consumers as thoughtful and sensible. Earlier, we explained that we expect that exposure to a positive review will lead to higher perceived credibility compared to both negative reviews, and that exposure to a moderately negative review will lead to higher perceived credibility than an extremely negative review; without a badge, this “traditional” relationship between review valence and credibility is expected. In sum, the following hypothesis on the described interaction effect is formulated:

H2. A reputation badge significantly changes the effect of a moderately negative review on perceived credibility, such that a moderately negative review with a badge will be perceived as significantly more credible than a moderately negative review without such a badge. In contrast, showing a badge does not significantly increase the perceived credibility of reviews that are moderately positive or extremely negative.

The concepts and relationships of this research are integrated into the conceptual model (Figure 1).

Figure 1: Conceptual model

Method

Participants

Since 80% of the Dutch adult population reports shopping online (Centraal Bureau voor de Statistiek, 2019), Dutch adults were a suitable target group for this research design. The participants of this experiment were recruited via a “neighbourhood safety prevention” WhatsApp group and the researcher’s own network. This non-probability strategy of convenience sampling is an effective and economical way to reach the target group when time and financial resources are limited (Belch & Belch, 2019). In total, 225


participants started the experiment. However, thirty-one were excluded from the final sample because of incomplete surveys (n = 11) or because they answered the control question incorrectly (n = 20), leaving the total sample at N = 194. The age of the participants ranged between 19 and 73 years, with a mean age of 34 years (SD = 14.97).

Design

This study used a between-subjects design with two experimental factors: 2 (reputation badge: with versus without badge) X 3 (review valence: moderately positive / moderately negative / extremely negative). A minimum of 180 participants was needed, as according to Brysbaert (2019), each condition in an experiment should include at least 30 participants. The participants were randomly assigned to one of the six experimental conditions. See table 1 for an overview of the mean characteristics of each condition.
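The random assignment to the six cells of the 2 x 3 design can be sketched as follows. This is a hypothetical illustration (the participant labels and condition names are made up for the sketch), not the randomizer actually used in the experiment:

```python
# Sketch of random assignment to a 2 (badge) x 3 (valence) design.
# Labels are hypothetical placeholders, not taken from the thesis.
import itertools
import random

badges = ["with badge", "without badge"]
valences = ["moderately positive", "moderately negative", "extremely negative"]
conditions = list(itertools.product(badges, valences))  # six cells

random.seed(42)  # reproducible assignment for this sketch only
participants = [f"P{i:03d}" for i in range(1, 195)]     # N = 194
assignment = {p: random.choice(conditions) for p in participants}

print(len(conditions))        # number of experimental conditions
print(assignment["P001"])     # the cell assigned to one participant
```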

Table 1

Mean characteristics per experimental condition

Characteristic                  1      2      3      4      5      6      p-value
N                               37     30     25     37     36     29
Age                             33.08  37.61  34.62  33.47  32.78  35.63  .780
% female                        67.53  70.92  65.45  84.24  75.02  73.34  .564
Tendency to read e-reviews      3.97   3.65   3.96   3.95   3.91   3.87   .754
% that uses a health-watch      14     16     35     18     22     10     .230
Scepticism e-reviews            4.22   4.06   3.96   3.89   4.06   4.23   .909

Note. P-values are derived from one-way ANOVA tests (ordinal variables) and a chi-square test (nominal variable).

Materials

Reviews of health-watches by the brand Fitbit served as the materials of this experiment. This product was selected because it was expected to appeal to both women and men of different ages. To ensure that the reviews in this study were perceived as close to actual e-commerce product reviews, the statements in the reviews were based on existing Fitbit health-watch reviews. For instance, as multiple actual Fitbit health-watch reviews included statements about the watches’ software and the comfort of the wristband, the reviews in this study also included these attributes. All reviews were kept as identical as possible; only the minimal changes needed to realize the intended manipulations were made for the different conditions. The stimulus material is presented in Appendix 1.

Review valence. The product statements in the reviews denoted either a moderately positive, moderately negative, or extremely negative product evaluation. For instance, the moderately positive review stated that the software was “good enough”, which was replaced by “just not good enough” in the moderately negative condition and by “just bad” in the extremely negative condition. To ensure that the manipulations of review valence were interpreted as intended, a pre-test was conducted. Participants (N = 40) were asked to indicate, on a seven-point Likert scale, how negative or positive they perceived the writer of the review to be about the product (1 = Extremely negative, 7 = Extremely positive). ANOVA results indicated a significant difference between the groups, F (2, 39) = 47.253, p < .001. LSD post-hoc comparisons indicated that the group exposed to the moderately positive review perceived it as significantly more positive (M = 5.08, SD = 1.19) than the groups exposed to the moderately negative review (M = 2.86, SD = .95, p < .001) and the extremely negative review (M = 1.46, SD = .66, p < .001). The participants in the moderately negative condition also perceived their review as less negative than the extremely negative review was perceived (p < .001). These results showed that the participants perceived the material in correspondence with their intended experimental condition, confirming that these reviews could be used in the experiment.

Figure 2: Pre-test results: perceived valence per review condition
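The pre-test analysis described above (a one-way ANOVA with pairwise follow-ups) can be sketched as follows. The rating lists are hypothetical placeholders, not the pre-test data, and the LSD follow-ups are approximated here with plain pairwise t-tests:

```python
# Sketch of the pre-test analysis: a one-way ANOVA on perceived valence
# across the three review conditions, followed by pairwise comparisons.
from scipy import stats

mod_positive = [5, 6, 4, 5, 6, 5, 4, 6, 5, 5, 4, 6, 5]      # hypothetical 1-7 ratings
mod_negative = [3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 2, 3, 3]
ext_negative = [1, 2, 1, 1, 2, 1, 2, 1, 1, 2, 1, 1, 2, 1]

# Omnibus test: do mean valence ratings differ between conditions?
f_stat, p_value = stats.f_oneway(mod_positive, mod_negative, ext_negative)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")

# LSD-style pairwise follow-ups are, in essence, t-tests on each pair
for name, (a, b) in {
    "mod+ vs mod-": (mod_positive, mod_negative),
    "mod- vs ext-": (mod_negative, ext_negative),
}.items():
    t, p = stats.ttest_ind(a, b)
    print(f"{name}: t = {t:.2f}, p = {p:.4f}")
```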

Reputation badge. A “Top Reviewer” badge was either presented or not presented with the review in this experiment. The badge was designed in a similar fashion to the “Top contributor” badges of large online retailers such as Amazon and, similar to Amazon and Tripadvisor, was shown right next to the (nick)name of the reviewer. The badge was depicted relatively large compared to the review, to ensure that it was noticed when shown. Presenting such reputation badges near a review is expected to increase consumers’ perceived credibility of the information (Sundar & Nass, 2001). With the same group of participants as in the earlier described pre-test (N = 40), it was pre-tested whether they noticed the presence of the badge. Among the participants who were shown a review without a badge (n = 20), 95% rightfully indicated this. Of those who were shown a review with a badge (n = 20), 100% indicated its presence correctly. These results demonstrated that almost all participants answered the question correctly, and that the designed badge could be used in the experiment.

Procedure

A Qualtrics link directed participants to the online experiment webpage, where they were welcomed and introduced to the study. Before starting the experiment, all participants had to agree to the terms described in the informed consent on their privacy rights. The questionnaire began with a few demographic questions (age, gender) and control questions (general scepticism towards the truthfulness of online reviews, general tendency to read product reviews pre-purchase, and whether the participant currently used a health-watch). Then, all participants were randomly assigned to one of the six experimental conditions (see table 1) and were requested to read the depicted review. After reading the review, participants answered multiple questions about their perceptions of the review. These questions examined participants’ perceived credibility, their attitude towards the brand Fitbit, and their intention to purchase the depicted Fitbit health-watch. Finally, participants were thanked for their participation and invited to contact the researcher if they were interested in receiving the final research report.

Measures

See table 1 for an overview of all scores per group, and appendix 2 for the experiment’s questionnaire.

Brand attitude. Participants’ brand attitude was measured with two items: participants’ confidence in the brand (“I feel confidence in this brand”) and how valuable they perceived the brand to be (“I see this brand as valuable”). This scale was derived from the research of Delgado-Ballester (2004). Participants could answer on a 7-point Likert scale (1 = Strongly disagree, 7 = Strongly agree). A factor analysis confirmed that this component explained 90.36% of the variance (EV = 1.81), Cronbach’s alpha = .89. The average score of these items, used to measure consumers’ brand attitude, was M = 4.43, SD = 1.24.

Purchase intention. Participants’ purchase intention was measured using one item: “If I would be interested in buying a health watch, I would buy this product”. This item was adapted specifically for the chosen product from the scale of Schmuck, Matthes, Naderer, and Beufort (2018). Participants could answer on a seven-point Likert scale (1 = Strongly disagree, 7 = Strongly agree); results showed M = 3.96, SD = 1.47.

Credibility. The mediator credibility was measured through a scale introduced by Ohanian (1990) that is still used by researchers today (e.g. Shan, 2016). Source credibility consists of the dimensions source trustworthiness (reviewer honesty, reviewer reliability, and reviewer trustworthiness) and source expertise (reviewer experience and reviewer qualification). In addition, the dimension message credibility was measured with the items message believability and message credibility, a scale of Cheung, Sia, and Kuan (2012), adapted from Block and Keller (1995) and Smith and Vogt (1995). Participants could answer on a 7-point Likert scale (1 = Strongly disagree, 7 = Strongly agree). A factor analysis showed that the items measuring source trustworthiness explained 81.88% of the variance (EV = 2.46), Cronbach’s alpha = .89. The average score of these items was M = 4.92, SD = 1.02. The component source expertise explained 83.17% of the variance (EV = 1.66), Cronbach’s alpha = .80. The average score of these items was M = 4.33, SD = 1.23. Finally, message credibility explained 79.36% of the variance (EV = 1.59), Cronbach’s alpha = .74, and the average score of these items was M = 4.81, SD = 1.17.
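The kind of internal-consistency statistic reported above (Cronbach’s alpha) can be computed with a short routine. The formula below is the standard alpha; the response matrix is a hypothetical placeholder, not the thesis data:

```python
# Minimal sketch of Cronbach's alpha for a multi-item scale
# (e.g. three trustworthiness items rated 1-7 by each participant).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = participants, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# hypothetical answers of 6 participants on 3 items
responses = np.array([
    [6, 5, 6],
    [4, 4, 5],
    [7, 6, 6],
    [3, 4, 3],
    [5, 5, 6],
    [6, 6, 7],
])
alpha = cronbach_alpha(responses)
print(f"alpha = {alpha:.2f}")
```

Alpha rises when the items vary together (consistent answering across items) and falls when they vary independently.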

Control questions. Certain character traits or existing attitudes of participants can affect the results of an experiment. To minimize these influences, multiple control questions were included in the experiment. Participants’ general scepticism about trusting online product information was assessed with an eWOM scepticism item, “In general, I am not sceptical about the truthfulness of online reviews”, from a scale developed by Zhang, Ko, and Carpenter (2015). Descriptive analyses showed M = 4.07, SD = 1.46. The second control variable was whether participants used a health-watch, as those who have bought a health-watch before are expected to have a more positive attitude towards the product or brand. This item was based on Xu (2014), and participants could answer with “yes” or “no”. Only 19.11% of the participants reported currently using a health-watch. The third and final control variable was consumers’ tendency to read consumer reviews, measured with the item “I … read consumer reviews before buying an expensive product”. This item could be answered on

(15)

a 5-point Likert scale (1 = Never, 5 = Always). Possibly, consumers who usually or always read product reviews perceived the review of this experiment with higher credibility. Descriptive analyses showed M = 3.89, SD = .95. Finally, to assure that the participants had carefully read the review, all participants had to answer if “The review message was about a health-watch”, which could be answered with either “yes” or “no”. The participants who gave the wrong answer to this question (n = 20) were excluded from the sample.

Results

Randomization check

One-way ANOVA and Chi-Square tests were conducted to check whether the randomization between groups worked as intended (see all means and p-values in table 1). These tests assessed whether participants were equally distributed between the experimental groups with respect to their answers on the control questions. The Chi-Square test showed that the distribution of gender did not differ significantly between groups, χ2 (5) = 3.898. The ANOVA results showed that the mean ages of the six groups did not differ significantly, F (5, 188) = .494. In addition, the ANOVA results revealed no significant differences between the groups for review scepticism, F (5, 188) = .306, current health-watch use, F (5, 188) = 1.389, and tendency to read product reviews pre-purchase, F (5, 188) = .529. In sum, the tests showed no differences between the experimental conditions in the distribution of age, gender, current product use, general scepticism towards reviews, or tendency to read product reviews pre-purchase. Thus, the randomization was successful for demographics and the described control questions.
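The chi-square check on the gender distribution can be reproduced by hand from a contingency table of counts. The counts below are hypothetical (the thesis reports only the test statistic, not the raw table):

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    # Sum of (observed - expected)^2 / expected over all cells.
    return sum((obs - row_tot[i] * col_tot[j] / n) ** 2
               / (row_tot[i] * col_tot[j] / n)
               for i, row in enumerate(table)
               for j, obs in enumerate(row))

# Hypothetical male/female counts in the six experimental groups (N = 194).
gender_by_group = [
    [12, 20], [14, 18], [13, 19],
    [15, 17], [12, 20], [14, 20],
]
stat = chi_square(gender_by_group)
# With df = (6-1)*(2-1) = 5 the .05 critical value is 11.07; a statistic
# below that, like the reported 3.898, indicates no detectable imbalance.
```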

Testing assumptions

Multiple assumption tests were conducted to ensure valid interpretation of the results. The data contained no missing values, and the standardized variables showed no outliers. The assumption of normality was assessed with skewness and kurtosis statistics, which verified that the data for purchase intention, brand attitude, source trustworthiness and expertise, and message credibility were normally distributed (see table 2).
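The skewness and kurtosis statistics reported in table 2 are moment-based measures that can be sketched as follows; the scores passed in below are an invented symmetric set, not the study's data:

```python
from statistics import fmean

def skew_kurtosis(xs):
    """Moment-based sample skewness and excess kurtosis."""
    m = fmean(xs)
    m2 = fmean([(x - m) ** 2 for x in xs])   # second central moment
    m3 = fmean([(x - m) ** 3 for x in xs])   # third central moment
    m4 = fmean([(x - m) ** 4 for x in xs])   # fourth central moment
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3  # skewness, excess kurtosis

# A perfectly symmetric set of hypothetical 7-point scores:
# its skewness is exactly zero, and its kurtosis is mildly negative.
skew, kurt = skew_kurtosis([1, 2, 3, 4, 5, 6, 7])
```

Near-zero values, such as those in table 2, support treating the distributions as approximately normal.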


Table 2

Descriptive statistics of the dependent variables

Dependent variable        M      SD     Skewness (SE)   Kurtosis (SE)
Brand attitude            4.42   1.23   -.32 (.18)      -.17 (.35)
Purchase intention        3.96   1.47   -.06 (.18)      -.70 (.35)
Source trustworthiness    4.92   1.02   -.57 (.18)       .44 (.35)
Source expertise          4.33   1.23   -.37 (.18)       .11 (.35)
Message credibility       4.81   1.17   -.73 (.18)       .39 (.35)

Hypothesis testing

Review valence

In H1a, we expected that exposure to a moderately positive review leads to a more positive brand attitude and a higher purchase intention than a moderately negative and an extremely negative review, and that exposure to a moderately negative review leads to a more positive brand attitude and a higher purchase intention than an extremely negative review. This expected effect of review valence on consumer responses was tested with a MANOVA, with brand attitude and purchase intention as the dependent variables and review valence as the independent variable. The results indicated that the overall effect of review valence on consumer responses was significant according to Wilks' Lambda, F (4, 194) = 2.825, p = .025, η² = .03.

The ANOVA results showed that the overall effect of review valence on brand attitude was significant, F (2, 194) = 3.524, p = .031, η² = .04. Post-hoc LSD tests revealed that exposure to the moderately positive review resulted in a significantly higher brand attitude compared to the moderately negative review (Mdiff = .53, p = .011) and a marginally significantly higher brand attitude compared to the extremely negative review (Mdiff = .42, p = .052). The moderately negative review did not result in a higher brand attitude than the extremely negative review (Mdiff = -.11, p = .621). See all means in table 3.

The results showed that the effect of review valence on purchase intention was also significant, F (2, 194) = 4.505, p = .012, η² = .05. Post-hoc LSD tests revealed that exposure to the moderately positive review resulted in significantly higher purchase intentions compared to the moderately negative review (Mdiff = .75, p = .003). However, the moderately positive review did not result in higher purchase intentions than the extremely negative review (Mdiff = .39, p = .125), nor did the moderately negative review result in higher purchase intentions than the extremely negative review (Mdiff = -.35, p = .179). See all means in table 3.
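The one-way ANOVA and Fisher's LSD comparisons used above can be illustrated with a hand computation. The condition scores below are invented for illustration and do not reproduce the study's data:

```python
from math import sqrt
from statistics import fmean

def one_way_anova(groups):
    """F statistic for a one-way ANOVA across k independent groups."""
    grand = fmean([x for g in groups for x in g])
    ss_between = sum(len(g) * (fmean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - fmean(g)) ** 2 for x in g) for g in groups)
    df_b = len(groups) - 1
    df_w = sum(len(g) for g in groups) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

def lsd_t(g1, g2, ms_within):
    """Fisher's LSD t statistic for one pairwise comparison, using the
    pooled within-group mean square from the overall ANOVA."""
    se = sqrt(ms_within * (1 / len(g1) + 1 / len(g2)))
    return (fmean(g1) - fmean(g2)) / se

# Hypothetical brand-attitude scores in the three valence conditions.
pos = [5, 6, 4, 5]
mod_neg = [4, 3, 4, 5]
ext_neg = [4, 5, 3, 4]
groups = [pos, mod_neg, ext_neg]
F = one_way_anova(groups)
ms_w = sum(sum((x - fmean(g)) ** 2 for x in g) for g in groups) / 9
t = lsd_t(pos, mod_neg, ms_w)  # one LSD pairwise comparison
```

The LSD procedure performs all pairwise t tests against the pooled error term, which is why each Mdiff above comes with its own p-value.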

Table 3

Descriptive statistics review valence on consumer responses

Valence               Brand Attitude      Purchase Intention
                      M       SD          M       SD
Moderately Positive   4.71    1.20        4.34    1.47
Moderately Negative   4.20    1.19        3.58    1.40
Extremely Negative    4.29    1.30        3.96    1.47

Perceived Credibility

In H1b, we expected that exposure to a positive review results in higher perceived credibility compared to a moderately negative and an extremely negative review, and that exposure to a moderately negative review results in higher perceived credibility compared to an extremely negative review. In H2, we expected an interaction effect between review valence and the reputation badge on perceived credibility: a moderately negative review with a badge would be perceived as more credible than a moderately negative review without such a badge, whereas showing the badge would not increase the perceived credibility of the other two reviews. Both hypotheses were tested with a MANOVA, with the three dimensions of perceived credibility (source trustworthiness, source expertise, and message credibility) as the dependent variables, and review valence and the reputation badge as the independent variables.

The MANOVA revealed that the overall effect of review valence on review credibility was not significant, Wilks' Lambda F (6, 194) = 1.203, p = .304, η² = .02. The ANOVA results showed that the effect was absent for trustworthiness, F (2, 194) = 2.593, p = .077, η² = .03, for expertise, F (2, 194) = .087, p = .917, η² = .01, and for message credibility, F (2, 194) = 2.018, p = .136, η² = .02 (see all means in table 4). These findings do not support the main effect predicted in H1b.


Table 4

Descriptive statistics review valence on credibility

Valence               Source Trustworthiness   Source Expertise   Message Credibility
                      M       SD               M       SD         M       SD
Moderately Positive   5.09    1.05             4.36    1.32       4.95    1.26
Moderately Negative   4.96     .93             4.36    1.15       4.90    1.02
Extremely Negative    4.69    1.05             4.28    1.22       4.55    1.24

The results also revealed no main effect of showing the reputation badge on perceived credibility, Wilks' Lambda F (3, 194) = .368, p = .776, η² = .01. The ANOVA results showed that the effect was absent for trustworthiness, F (1, 194) = .425, p = .515, η² = .01, for expertise, F (1, 194) = .247, p = .620, η² = .01, and for message credibility, F (1, 194) = 1.084, p = .299, η² = .01. See table 5 for all means.

Table 5

Descriptive statistics reputation badge on credibility

Reputation badge   Source Trustworthiness   Source Expertise   Message Credibility
                   M       SD               M       SD         M       SD
Without badge      4.89    1.00             4.29    1.28       4.73    1.21
With badge         4.97    1.04             4.38    1.18       4.90    1.16

Finally, the MANOVA revealed that the expected interaction effect was not significant, Wilks' Lambda F (6, 194) = 1.049, p = .393, η² = .02. The ANOVA results revealed that the effect was absent for trustworthiness, F (2, 194) = .573, p = .565, η² = .01, for expertise, F (2, 194) = .033, p = .968, η² = .01, and for message credibility, F (2, 194) = .308, p = .736, η² = .01. Contrary to H2, presenting the moderately negative review with the badge did not increase its perceived credibility compared to the same review without the badge. See all means in table 6.


Table 6

Descriptive statistics reputation badge and review valence on credibility

Condition                             Source trustworthiness   Source expertise   Message credibility
                                      M       SD               M       SD         M       SD
Moderately positive * without badge   5.02     .88             4.34    1.38       4.93    1.21
Moderately positive * with badge      5.16    1.20             4.37    1.29       4.98    1.33
Moderately negative * without badge   5.02    1.03             4.31    1.22       4.72    1.13
Moderately negative * with badge      4.90     .85             4.40    1.01       5.07     .83
Extremely negative * without badge    4.54    1.08             4.20    1.25       4.48    1.25
Extremely negative * with badge       4.81    1.04             4.35    1.22       4.61    1.24

Consumer responses

A MANOVA test was conducted to examine whether the reputation badge had a main effect on consumer responses. The MANOVA, with brand attitude and purchase intention as the dependent variables and review valence and the reputation badge as the independent variables, showed that the reputation badge had no main effect on brand attitude or purchase intention, Wilks' Lambda F (2, 194) = .166, p = .848, η² = .01. See all means in table 7.

Table 7

Descriptive statistics reputation badge on consumer responses

Reputation badge   Brand attitude      Purchase intention
                   M       SD          M       SD
Without badge      4.44    1.18        4.02    1.58
With badge         4.40    1.30        3.93    1.39

Explorative analyses

We expected that credibility mediates the relationship between review valence and consumer responses (H1c). However, with no main effect of review valence on credibility, the expected mediating role of credibility in this relationship was also rejected. Still, a linear regression was conducted to examine whether a participant's brand attitude or perceived credibility predicted purchase intention. A linear regression with purchase intention as the dependent variable and brand attitude, source trustworthiness, source expertise and message credibility as the independent variables showed a significant regression model, F (4, 194) = 31.629, p < .001. This model explained 40% of the variance in purchase intention (R² = .40). The association between brand attitude and purchase intention was significant and strong, b* = .72, t = 10.149, p < .001, 95% CI [.576, .854]. In addition, B = .60 reflects that with each one-point increase on the 1-7 brand attitude scale, purchase intention increases by .60. The test revealed that none of the constructs of perceived credibility were significant predictors of purchase intention.
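The interpretation of the unstandardized coefficient B and R² above can be made concrete with a one-predictor least-squares fit. The attitude and intention scores below are invented for illustration; the thesis model additionally included the three credibility predictors:

```python
from statistics import fmean

def simple_ols(x, y):
    """Slope, intercept and R-squared of a one-predictor least-squares fit."""
    mx, my = fmean(x), fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                # unstandardized slope (B)
    a = my - b * mx              # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return b, a, 1 - ss_res / ss_tot

# Hypothetical brand-attitude (x) and purchase-intention (y) scores.
attitude = [1, 2, 3, 4, 5]
intention = [1.5, 2.0, 3.5, 3.0, 4.0]
slope, intercept, r2 = simple_ols(attitude, intention)
# A slope of .60, as reported for B in the thesis, would mean each
# one-point gain in brand attitude predicts a .60-point gain in intention.
```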

Conclusion and discussion

The goal of this study was to examine consumers' perceived credibility of a review, as well as their brand attitude and purchase intention, after exposure to a review about a Fitbit health-watch that was moderately positive, moderately negative or extremely negative. In addition, we examined the effect of presenting reviews with a reputation badge on consumers' perceived credibility. In line with the first hypothesis (H1a), the results showed that exposure to the moderately positive review led to significantly more positive brand attitudes compared to the moderately negative as well as the extremely negative review. In addition, we found that exposure to the moderately positive review resulted in significantly higher purchase intentions compared to the moderately negative review. These findings correspond to multiple studies that also found that exposure to positive reviews led to more positive brand attitudes (Wang, Cunningham, & Eastin, 2015; Doh & Hwang, 2009) and increased purchase intentions (Schlosser, 2011; Sen & Lerman, 2007) compared to negative reviews. Furthermore, explorative analyses revealed that the degree to which a consumer thinks positively about a brand significantly increases his or her purchase intention.

A strength of this study was that the participants who had a Fitbit health-watch were distributed equally between the groups, because they were expected to have a more positive attitude towards the reviewed product than those who do not have a health-watch. The confirmation bias describes how people who detect that presented information confirms their prior attitudes are more likely to accept that message (Peterson & Wilson, 1985; Zeithaml, 1988). In future research, it would be interesting to examine the role of the confirmation bias in the effect of positive reviews on consumer responses. When actually shopping online, consumers who read a positive review may notice that their interest in the product and their existing positive attitude towards it match the positive evaluation in the review. In addition, the confirmation bias likely plays a larger role when consumers with a positive attitude towards a product are shown a positive review accompanied by a correspondingly high star rating. Compared to a purely textual review, the sight of a textual review accompanied by a high star rating could then bring about a more positive brand attitude and a higher purchase intention through the confirmation bias.

The current study examined how different degrees of review valence affected the three constructs of perceived credibility. Contrary to H1b, exposure to the moderately positive review did not lead to significantly higher perceived credibility than the negative reviews, nor did exposure to the moderately negative review lead to significantly higher credibility than the extremely negative review. These findings do not correspond to previous studies (e.g., Lin & Xu, 2017; Sen & Lerman, 2007; Pentina, Bailey, & Zhang, 2018). There are two possible explanations why the expected effect was not found.

The first explanation could be related to the review itself. According to Sen and Lerman (2007), reviews of different valence about hedonic products, which are desirable objects that stimulate feelings of pleasure, fun, and enjoyment, lead to significantly different perceptions of trustworthiness. Sen and Lerman found that negative reviews about hedonic products are perceived as less trustworthy than positive ones, because consumers believe that these negative evaluations are driven by the subjective taste and personal motives of the reviewer. In contrast, Sen and Lerman did not find this effect of review valence on credibility for practical and functional products, which they refer to as utilitarian products. The material of this study predominantly addressed functional attributes of the product, such as the software and the battery duration of the watch. The watch can also be seen as a hedonic product, as thinking about buying such a (luxury) watch can stimulate one's imagination about feeling healthful when working on health goals. However, the reviews in this study did not focus on hedonic thoughts and criteria; for instance, they did not mention any health-related feelings or aspirations associated with using the watch. Since the material only mentioned functional attributes, exposure to the reviews is likely to have triggered participants' utilitarian thoughts and criteria. In accordance with the findings of Sen and Lerman, these choices in the material may have contributed to not finding the expected effect of review valence on consumers' perceived credibility. Future research might reveal effects of review valence on perceived credibility when the review does include hedonic comments. With reviews that trigger hedonic criteria, significant differences in perceived credibility might emerge after exposure to different degrees of review valence, with negative reviews leading to significantly lower credibility than positive ones.

The second potential reason for the findings on review valence and credibility may be related to the questionnaire used. Before exposure to the reviews, all participants were asked to indicate their general scepticism towards the truthfulness of online reviews. This item could have made participants more sceptical about trusting the presented review in this experiment, particularly those who were shown the moderately positive review. These participants may have perceived the positive statements in that review as less credible, because their general scepticism towards trusting online information was heightened by the described item. One counterargument for this explanation is that the credibility scores for the moderately positive review were still quite high. Still, if this question had been asked as the final question in the experiment, the perceived credibility among participants exposed to the moderately positive review might have been higher, and significant differences in perceived credibility between the moderately positive review and the negative reviews might have been detected.

In H1c, we expected that consumers' perceptions of the credibility of the review would mediate the effect of review valence on consumer responses. However, since no main effect of review valence on credibility was found, the expected mediation effect of credibility had to be rejected. In fact, explorative analyses showed no effects of any of the constructs of credibility on brand attitude or purchase intention. This indicates that consumers' beliefs about the relevant expertise or the motivations of the reviewer to communicate valid statements did not affect brand attitude or purchase intention in this study. However, when actually shopping online, consumers who visit product pages and read product reviews are often already interested in purchasing the product. A highly credible review might then persuade them to either buy the product or not, depending on the evaluation, especially when the financial impact of the purchase is high. In this experiment, by contrast, it is unlikely that many participants were seriously considering purchasing the Fitbit watch. Consequently, this might have prevented an influence of review valence on purchase intention, regardless of the perceived credibility of the presented review.

Also, the material of this study did not present any personal information about the reviewer. According to Mackiewicz (2010), presenting personal information about a reviewer, such as their age and gender, increases perceptions of source trustworthiness: the more consumers know about a reviewer's identity, the more they trust the reviewer. By varying such personal information, future research could further investigate the effect of source trustworthiness on consumer reactions.

This study examined how a reputation badge presented with a review interacted with the valence of the review in shaping perceived credibility. Contrary to H2 and previous findings of Shan (2016) and Cheung, Luo, Sia and Chen (2007), presenting the moderately negative review with the badge did not increase the credibility of the review. In addition, showing the badge did not increase the perceived credibility of the moderately positive or extremely negative review compared to the same review without the badge. Our findings suggest that consumers were hesitant to trust the objectiveness or truthfulness of the badge. Still, the source trustworthiness and message credibility scores of all reviews were relatively high, with or without a badge. An explanation for the non-significant effect of the reputation badge on perceived credibility could be that most product pages of actual e-commerce websites contain multiple reviews, not just one as in the present study. Similar to Kupor and Tormala (2018), who found that a review with a deviating valence is perceived as more credible than its surrounding reviews, it is possible that a review presented with a reputation badge affects credibility more strongly when it is surrounded by multiple reviews without such a badge. In such a scenario, the one review presented with a badge could lead to higher perceived credibility compared to the surrounding reviews without one.

Practical implications

All in all, the reputation badge did not increase participants' perceived credibility or purchase intention, nor did it lead to a more positive brand attitude. We therefore do not recommend that e-commerce websites invest in this computer-generated badge. This study did confirm that review valence is important to brands and retailers, as exposure to a moderately positive review led to higher purchase intentions than a moderately negative review. Also, exposure to a moderately positive review led to significantly higher brand attitudes than a moderately negative and an extremely negative review. Both findings support the notion that positive product reviews on product pages are advantageous to online sellers. Although sellers cannot exert direct influence on these reviews, this should stimulate them to live up to consumers' expectations, for instance by ensuring the quality of their products and by providing consumers with honest and realistic product descriptions.


References

Alloy, L., & Tabachnik, N. (1984). Assessment of covariation by humans and animals: The joint influence of prior expectations and current situational information. Psychological Review, 91(1), 112–149. https://doi.org/10.1037/0033-295X.91.1.112

Ba, S., & Pavlou, P. (2002). Evidence of the effect of trust building technology in electronic markets: Price premiums and buyer behavior. MIS Quarterly, 26(3), 243–268.

https://doi.org/10.2307/4132332

Bambauer-Sachse, S., & Mangold, S. (2011). Brand equity dilution through negative online word-of-mouth communication. Journal of Retailing and Consumer Services, 18(1), 38–45. http://doi.org/10.1016/j.jretconser.2010.09.003

Basuroy, S., Chatterjee, S., & Ravid, S. (2003). How Critical Are Critical Reviews? The Box Office Effects of Film Critics, Star Power, and Budgets. Journal of Marketing, 67(4), 103-117. Retrieved June 4, 2020, from www.jstor.org/stable/30040552

Belch, G., & Belch, M. (2018). Advertising and promotion: An integrated marketing communications perspective (Eleventh edition.). New York, NY: McGraw-Hill Education.

Bickart, B., & Schindler, R. M. (2001). Internet forums as influential sources of consumer information. Journal of Interactive Marketing, 15(3), 31-40.

https://doi.org/10.1002/dir.1014

Block, L. G., & Keller, P. A. (1995). When to accentuate the negative: The effects of perceived efficacy and message framing on intentions to perform a health-related behavior. Journal of marketing research, 32(2), 192-203.

https://doi.org/10.2307/3152047

Brysbaert, M. (2019). How many participants do we have to include in properly powered experiments? A tutorial of power analysis with reference tables. Journal of Cognition, 2(1), 16. https://doi.org/10.5334/joc.72

Centraal Bureau voor Statistiek. (2019). Online winkelen: kenmerken aankoop, persoonskenmerken. Retrieved on June 5, 2020, from


Chang, T. P. V., Rhodes, J., & Lok, P. (2013). The mediating effect of brand trust between online customer reviews and willingness to buy. Journal of Electronic Commerce in Organizations (JECO), 11(1), 22-42. https://doi.org/10.4018/jeco.2013010102

Chatterjee, P. (2001). Online Reviews: Do consumers use them? Advances in Consumer Research, 28, 129–133.

Cheung, M., Luo, C., Sia, C., & Chen, H. (2007). How do people evaluate electronic word-of-mouth? Informational and normative based determinants of perceived credibility of online consumer recommendations in China. PACIS Proceedings, 18, 69–81.

Cheung, C., Sia, C., & Kuan, K. (2012). Is This Review Believable? A Study of Factors Affecting the Credibility of Online Consumer Reviews from an ELM Perspective. Journal Of The Association For Information Systems, 13(8), 618–635.

https://doi.org/10.17705/1jais.00305

Cheung, C., & Thadani, D. (2012). The impact of electronic word-of-mouth communication: A literature analysis and integrative model. Decision Support Systems, 54(1), 461– 470. https://doi.org/10.1016/j.dss.2012.06.008

Chiou, J., & Cheng, C.I. (2003). Should a company have message boards on its Websites?

Journal of Interactive Marketing, 17(3), 50–61. https://doi.org/10.1002/dir.10059

Crocker, J. (1981). Judgment of covariation by social perceivers. Psychological Bulletin, 90(2), 272–292. https://doi.org/10.1037/0033-2909.90.2.272

Crowley, A., & Hoyer, W. (1994). An Integrative Framework for Understanding Two-Sided Persuasion. Journal of Consumer Research, 20(4), 561–574.

https://doi.org/10.1086/209370

Delgado-Ballester, E. (2004). Applicability of a brand trust scale across product categories. European Journal of Marketing, 38(5/6), 573–592.

https://doi.org/10.1108/03090560410529222

Doh, S., & Hwang, J. (2009). How consumers evaluate eWOM (Electronic Word-of-Mouth) messages. CyberPsychology & Behavior, 12(2), 193–197.


Duan, W., Gu, B., & Whinston, A. (2008). Do online reviews matter? — An empirical investigation of panel data. Decision Support Systems, 45(4), 1007–1016.

https://doi.org/10.1016/j.dss.2008.04.001

Forman, C., Ghose, A., & Wiesenfeld, B. (2008). Examining the relationship between reviews and sales: The role of reviewer identity disclosure in electronic markets. Information systems research, 19(3), 291-313. https://doi.org/10.1287/isre.1080.0193

Frijda, N.H. (1986). The emotions. New York, NY: Cambridge University Press.

Greer, J. D. (2003). Evaluating the credibility of online information: A test of source and advertising influence. Mass Communication and Society, 6(1), 11–28.

https://doi.org/10.1207/S15327825MCS0601_3

Van Der Heide, B., Johnson, B. K., & Vang, M. H. (2013). The effects of product photographs and reputation systems on consumer behavior and product cost on eBay. Computers in Human Behavior, 29(3), 570–578.

http://dx.doi.org/10.1016/j.chb.2012.11.002

Hennig-Thurau, T., Gwinner, K., Walsh, G., & Gremler, D. (2004). Electronic Word-of-mouth via consumer-opinion platforms: What motivates consumers to articulate themselves on the Internet? Journal of Interactive Marketing, 18(1), 38–52.

https://doi.org/10.1002/dir.10073

Hovland, C., & Weiss, W. (1951). The influence of source credibility on communication effectiveness. Public Opinion Quarterly, 15(1), 635-650.

https://doi.org/10.1086/266350

Keller, E. (2007). Unleashing the power of word of mouth: Creating brand advocacy to drive growth. Journal Of Advertising Research, 47(4), 448–452.

https://doi.org/10.2501/S0021849907070468

Kupor, D., & Tormala, Z. (2018). When moderation fosters persuasion: The persuasive power of deviatory reviews. Journal of Consumer Research, 45(3), 490–510. https://doi.org/10.1093/jcr/ucy021

Lee, J., Park, D., & Han, I. (2008). The effect of negative online consumer reviews on product attitude: An information processing view. Electronic Commerce Research and Applications.

Lee, M., Rodgers, S., & Kim, M. (2009). Effects of valence and extremity of eWOM on attitude toward the brand and website. Journal of Current Issues & Research in Advertising, 31(2), 1–11. https://doi.org/10.1080/10641734.2009.10505262

Lin, C., & Xu, X. (2017). Effectiveness of online consumer reviews. Internet Research, 27(2), 362–380. https://doi.org/10.1108/IntR-01-2016-0017

Mackiewicz, J. (2010). Assertions of expertise in online product reviews. Journal of Business and Technical Communication, 24(1), 3-28.

https://doi.org/10.1177/1050651909346929

Mudambi, S., & Schuff, D. (2010). What makes a helpful online review? A study of customer reviews on Amazon.com. MIS Quarterly, 34(1), 185–200.

Ohanian, R. (1990). Construction and validation of a scale to measure celebrity endorsers' perceived expertise, trustworthiness, and attractiveness. Journal of Advertising, 19(3), 39-52. https://doi.org/10.1080/00913367.1990.10673191

Park, D., Lee, J., & Han, I. (2007). The Effect of Online Consumer Reviews on Consumer Purchasing Intention: The Moderating Role of Involvement. International Journal of Electronic Commerce, 11(4), 125–148. https://doi.org/10.2753/JEC1086-4415110405

Park, D., & Kim, S. (2008). The effects of consumer knowledge on message processing of electronic word-of-mouth via online consumer reviews. Electronic Commerce Research and Applications, 7(4), 399–410.

https://doi.org/10.1016/j.elerap.2007.12.001

Park, C., & Lee, T. M. (2009). Information direction, website reputation and eWOM effect: A moderating role of product type. Journal of Business research, 62(1), 61-67.

https://doi.org/10.1016/j.jbusres.2007.11.017

Pentina, I., Bailey, A., & Zhang, L. (2018). Exploring effects of source similarity, message valence, and receiver regulatory focus on yelp review persuasiveness and purchase intentions. Journal of Marketing Communications, 24(2), 125–145.

https://doi.org/10.1080/13527266.2015.1005115

Peterson, R. A., & Wilson, W. R. (1985). Perceived risk and price reliance schema as price-perceived quality mediators. Perceived quality, 247-68.


Reichelt, J., Sievert, J., & Jacob, F. (2014). How credibility affects eWOM reading: The influences of expertise, trustworthiness, and similarity on utilitarian and social functions. Journal of Marketing Communications, 20(1-2), 65-81.

https://doi.org/10.1080/13527266.2013.797758

Schindler, R. M., & Bickart, B. (2005). Published “word of mouth”: Referable, consumer-generated information on the Internet. In C. P. Haugtvedt, K. A. Machleit & R. F. Yalch (Eds.), Online consumer psychology: Understanding and influencing behavior in the virtual world (pp. 35- 61). Mahwah, NJ: Lawrence Erlbaum Associates.

Schlosser, A. (2011). Can including pros and cons increase the helpfulness and persuasiveness of online reviews? The interactive effects of ratings and arguments. Journal of Consumer Psychology, 21(3), 226–239. https://doi.org/10.1016/j.jcps.2011.04.002

Shan, Y. (2016). How credible are online product reviews? The effects of self-generated and system-generated cues on source credibility evaluation. Computers in Human Behavior, 55, 633–641. https://doi.org/10.1016/j.chb.2015.10.013

Sen, S., & Lerman, D. (2007). Why are you telling me this? An examination into negative consumer reviews on the Web. Journal of Interactive Marketing, 21(4), 76.

https://doi.org/10.1002/dir.20090

Smith, R., & Vogt, C. (1995). The Effects of Integrating Advertising and Negative Word-of-Mouth Communications on Message Processing and Response. Journal of Consumer Psychology, 4(2), 133–151. https://doi.org/10.1207/s15327663jcp0402_03

Sparks, B., & Browning, V. (2011). The impact of online reviews on hotel booking intentions and perception of trust. Tourism Management, 32(6), 1310–1323.

https://doi.org/10.1016/j.tourman.2010.12.011

Sundar, S., & Nass, C. (2001). Conceptualizing sources in online news. Journal of Communication, 51(1), 52–72. https://doi.org/10.1111/j.1460-2466.2001.tb02872.x

Wang, A. (2005). The effects of expert and consumer endorsements on audience response. Journal of Advertising Research, 45(4), 402–412.

http://dx.doi.org/10.1017/S0021849905050452

Wang, S., Cunningham, N. R., & Eastin, M. S. (2015). The impact of eWOM message characteristics on the perceived effectiveness of online consumer reviews. Journal of Interactive Advertising, 15(2), 151–159. https://doi.org/10.1080/15252019.2015.1091755

Wu, P., & Wang, Y. (2011). The influences of electronic word-of-mouth message appeal and message source credibility on brand attitude. Asia Pacific Journal of Marketing and Logistics, 23(4), 448–472. https://doi.org/10.1108/13555851111165020

Xu, Q. (2014). Should I trust him? The effects of reviewer profile characteristics on eWOM credibility. Computers in Human Behavior, 33, 136–144.

https://doi.org/10.1016/j.chb.2014.01.027

Zeithaml, V. A., & Zeithaml, C. P. (1988). The contingency approach: its foundations and relevance to theory building and research in marketing. European Journal of Marketing. https://doi.org/10.1108/EUM0000000005291

Zhang, X., Ko, M., & Carpenter, D. (2016). Development of a scale to measure scepticism toward electronic word-of-mouth. Computers in Human Behavior, 56(C), 198–208.


Appendix

Appendix 1: Stimuli for experimental conditions

Condition 1: Moderately positive x Badge

Condition 3: Extremely negative x Badge

Condition 5: Moderately negative x Without badge

Appendix 2: Experiments’ questionnaire

Hello,

For my Master thesis I am conducting a study and I need your help! The goal of this research is to generate insight into how consumers assess consumer reviews. The survey will take about 3 minutes. As this research is being carried out under the responsibility of the ASCoR, University of Amsterdam, we can guarantee that:

1) Your anonymity will be safeguarded, and your personal information will not be passed on to third parties under any conditions, unless you first give your express permission for this.

2) You can refuse to participate in the research or cut short your participation without having to give a reason for doing so. You also have up to 7 days after participating to withdraw your permission to allow your answers or data to be used in the research.

3) Participating in the research will not entail your being subjected to any appreciable risk or discomfort, the researchers will not deliberately mislead you, and you will not be exposed to any explicitly offensive material.

4) No later than five months after the conclusion of the research, we will be able to provide you with a research report that explains the general results of the research.

For more information about the research and the invitation to participate, you are welcome to contact the project leader Ieke Deckers at any time.

I hereby declare that I have been informed in a clear manner about the nature and method of the research, as described in the email invitation for this study.

I agree, fully and voluntarily, to participate in this research study. With this, I retain the right to withdraw my consent, without having to give a reason for doing so. I am aware that I may halt my participation in the experiment at any time.

If my research results are used in scientific publications or are made public in another way, this will be done in such a way that my anonymity is completely safeguarded. My personal data will not be passed on to third parties without my express permission.

If I wish to receive more information about the research, either now or in future, I can contact Ieke Deckers, by emailing iekedeckers94@gmail.com.

Should I have any complaints about this research, I can contact the designated member of the Ethics Committee representing the ASCoR, at the following address: ASCoR secretariat, Ethics Committee, University of Amsterdam, Postbus 15793, 1001 NG Amsterdam; 020-525 3680; ascor-secr-fmg@uva.nl.

o I understand and accept the text presented above, and I agree to participate in the research study


What is your gender?
o Male
o Female
o Prefer not to say

Do you use a watch that tracks your daily movement and sleep?
o Yes
o No

How often do you read consumer reviews before buying an expensive product?
I ... read consumer reviews before buying an expensive product
o Never
o Rarely
o Sometimes
o Usually
o Always

What is your age?

________________________________________________________________

"In general, I am not sceptical about the truthfulness of online reviews"
I ... with this statement
o Strongly disagree
o Disagree
o Somewhat disagree
o Neither agree nor disagree
o Somewhat agree
o Agree
o Strongly agree


Please read the review below. This review is about a product that measures your heart rate, daily movement and quality of sleep. You can proceed to the next question after 8 seconds.


In my opinion, the review message is accurate
o Strongly disagree
o Disagree
o Somewhat disagree
o Neither agree nor disagree
o Somewhat agree
o Agree
o Strongly agree

In my opinion, the review message is believable
o Strongly disagree
o Disagree
o Somewhat disagree
o Neither agree nor disagree
o Somewhat agree
o Agree
o Strongly agree

The review was about a health-watch
o True
o False

In my opinion, the person who wrote the review seems honest
o Strongly disagree
o Disagree
o Somewhat disagree
o Neither agree nor disagree
o Somewhat agree
o Agree
o Strongly agree

In my opinion, the person who wrote the review seems reliable
o Strongly disagree
o Disagree
o Somewhat disagree
o Neither agree nor disagree
o Somewhat agree
o Agree
o Strongly agree

In my opinion, the person who wrote the review seems trustworthy
o Strongly disagree
o Disagree
o Somewhat disagree
o Neither agree nor disagree
o Somewhat agree
o Agree
o Strongly agree

In my opinion, the person who wrote the review seems qualified to write this review
o Strongly disagree
o Disagree
o Somewhat disagree
o Neither agree nor disagree
o Somewhat agree
o Agree
o Strongly agree

In my opinion, the person who wrote the review seems experienced to write this review
o Strongly disagree
o Disagree
o Somewhat disagree
o Neither agree nor disagree
o Somewhat agree
o Agree
o Strongly agree
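The reviewer-credibility items above form a multi-item Likert scale. A common analysis step for such scales (not shown in the thesis itself) is to check internal consistency with Cronbach's alpha before averaging the items into a single credibility index. A minimal sketch in Python, assuming responses are coded 1 (Strongly disagree) to 7 (Strongly agree); the example response matrix is hypothetical:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # sample variance per item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses of four participants to the five
# reviewer-credibility items (honest, reliable, trustworthy,
# qualified, experienced), coded 1-7.
responses = np.array([
    [6, 6, 7, 5, 6],
    [4, 5, 4, 4, 5],
    [7, 6, 6, 6, 7],
    [3, 3, 4, 3, 3],
], dtype=float)

alpha = cronbach_alpha(responses)
credibility_index = responses.mean(axis=1)  # one scale score per respondent
```

If alpha is acceptable (conventionally above .70), the row means in `credibility_index` can be used as the dependent measure of source credibility.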

I feel confidence in the brand Fitbit
o Strongly disagree
o Disagree
o Somewhat disagree
o Neither agree nor disagree
o Somewhat agree
o Agree
o Strongly agree

I see the brand Fitbit as valuable
o Strongly disagree
o Disagree
o Somewhat disagree
o Neither agree nor disagree
o Somewhat agree
o Agree
o Strongly agree

If I were interested in buying a health watch, I would buy this product
o Strongly disagree
o Disagree
o Somewhat disagree
o Neither agree nor disagree
o Somewhat agree
o Agree
o Strongly agree
