
[Cover image: Emerce, 2016]

eWOM: Does platform type moderate the effect of review valence on review credibility?

Master’s Thesis

January 14th, 2019

F. Steenbeek

S2539438

Marketing Management

University of Groningen

Supervisor: Dr. J.C. Hoekstra

Second supervisor: Dr. B. Harms


Preface

This master’s thesis, titled ‘eWOM: Does platform type moderate the effect of review valence on review credibility?’, is a research study on how platforms affect the credibility of reviews. It was written to fulfill the graduation requirements of the master’s program in Marketing Management at the University of Groningen. The topic of this thesis is the ‘effectiveness of branded digital content’. After having spent some time searching for a thesis topic, I decided to write about online reviews, as they are increasingly being consulted by consumers prior to purchasing products. Prior research has not studied the combined effect of valence and platform type on the credibility of reviews. Finding literature on this topic was difficult, but I managed to identify some useful papers on which to base this thesis.

I would like to thank my supervisor, Dr. Janny Hoekstra, for the critical feedback and the encouraging words that she provided while I was writing this thesis, both of which stimulated me greatly. It was a pleasure working with her. Dr. Hoekstra’s meetings were very helpful in leading me to take further steps during the writing process, and she encouraged me to continue working on this study. I would also like to thank PhD researcher Bianca Harms for her feedback on this thesis. In addition, I would like to thank all of my respondents; without their cooperation, it would not have been possible to write this thesis.

Last, but certainly not least, I would like to thank my family and friends for their support throughout my study and the course of this master’s thesis. Your encouragement kept me going, and I am therefore very thankful.

I hope you will enjoy reading this thesis.

Fennita Steenbeek

Groningen, 14th January 2019


Abstract

Many consumers consult reviews before purchasing products or services. However, review valence might have an effect on review credibility, as customers may perceive positive reviews as manipulated marketing actions. Furthermore, marketer-generated platforms displaying reviews might be perceived as less credible than consumer-generated platforms due to the former’s perceived selling intentions. This research investigates the effects of review valence and platform type on review credibility. Prior research provides inconsistent findings with regard to the effect of review valence on review credibility. This study aims to determine whether review valence has an effect on review credibility and, if so, whether this effect is moderated by platform type. Factors influencing review credibility should be assessed, as reviews are an important aspect of the customer journey that can lead to purchases. An online experiment comparing the effects of positive, neutral, and negative reviews displayed on either consumer-generated or marketer-generated platforms was conducted. We find that review valence has an effect on review credibility, but that the moderating effect of platform type is non-significant. A possible explanation for the non-significant outcomes could be the relatively small sample size. Future research could investigate how consumers search for reviews: for example, which types of platforms they use more frequently, whether consumers tend to use platforms interactively, and whether such interaction might influence factors such as review credibility and purchase intention.

Keywords: eWOM, review, platform type


Table of Contents

1. Introduction

2. Conceptual model and hypotheses

2.1 Conceptual model

2.2 Review valence and review credibility

2.3 Moderating effect of CGPs and MGPs

3. Method

3.1 Research design and data collection

3.2 Construct measurement

3.3 Factor analysis

3.4 Manipulation checks

3.5 Test for control variables

3.6 Analyses

4. Analyses and results

4.4.1 Review valence and perceived review credibility

4.4.2 Interaction effect of review valence and platform credibility

5. Conclusion

5.1 Limitations

5.2 Future research and concluding remarks

Sources

Appendices

Appendix A: Data distribution

Appendix B: Data boxplot

Appendix C: Negative, neutral and positive reviews posted on CGP

Appendix D: Negative, neutral and positive reviews on MGP

Appendix E: Means and standard deviations


1. Introduction

Due to the rise of the Internet, people are able to search for information anywhere and at any time. Beyond searching for information, the Internet also allows users to share information about experiences or products with other people on a wide variety of platforms (e.g. social media, forums, and blogs). This type of communication is called electronic word of mouth (eWOM): ‘a process of dynamic, ongoing information exchange among potential, actual, or former consumers regarding a product, service, brand, or company that is available to a multitude of people and institutions’ (Ismagilova, Dwivedi, Slade, & Williams, 2017). Electronic word of mouth may have a large impact on firms, since it has a significant reach and is publicly available. More importantly, it affects consumers’ brand choices and the sales of goods and services (Goldsmith & Horowitz, 2006). This makes it very important for firms to know what actual or former consumers communicate on the Internet concerning their brands or firms and how other people are influenced by such communication.

It is possible to distinguish between four types of eWOM: one-to-one, one-to-many, many-to-one, and many-to-many (Xia, Huang, Duan, & Whinston, 2009). This paper focuses specifically on reviews (i.e. one-to-many). Evaluations of products or services are sources of information that can help consumers make purchase decisions (Lackermair, Kailer, & Kanmaz, 2013). The evaluative direction of a review can be positive, negative, or neutral (Lee & Youn, 2009).

Previous studies have researched the effects of review valence and message credibility, although findings on this topic are inconsistent (Cheung, Luo, Sia, & Chen, 2009; Kusumasondjaja, Shanka, & Marchegiani, 2012). A possible explanation for this is that studies differ in the platform type used in their experiments. Platform types can be divided into consumer-generated and marketer-generated platforms. The latter (which are also referred to as ‘corporate platforms’) include brand websites such as Amazon.com. Consumer-generated platforms are owned and operated by independent organizations or individuals (e.g. TripAdvisor).

The purpose of this paper is to investigate whether these inconsistent findings with regard to the impact of review valence on review credibility are due to differences in platform types, as prior research studies have generally included only one of the two platform types in their experiments. Therefore, we hypothesize that the effect of review valence on review credibility is moderated by platform type.


We expect that negative reviews will result in greater perceived credibility compared to positive reviews. Platform type is expected to moderate this effect, so we assume that, for positive reviews, the marketer-generated platform (MGP) will be more credible and that, for negative reviews, the consumer-generated platform (CGP) will be more credible.

This thesis contributes to the existing literature in the field of online marketing by providing insights into the impact of review valence on review credibility and the role that platform type plays in this relationship. The findings of this research could help firms stimulate customers to post reviews on the most credible platform (in order to attract new customers) and to monitor reviews on that platform to learn how consumers experience their products or services, as this might influence potential customers.

This paper is structured as follows: Chapter 2 discusses the conceptual model, along with the hypotheses. The research design is described in Chapter 3, and Chapter 4 presents the results of the analyses. Finally, a discussion of the results, limitations, and directions for future research is provided in Chapter 5.

2. Conceptual model and hypotheses

2.1 Conceptual model

Figure 1 depicts the conceptual model used in this study. We expect that negative reviews will result in greater review credibility compared to both positive and neutral reviews.

Figure 1. Conceptual model


Platform type is expected to moderate this effect, so we assume that, for negative reviews, the consumer-generated platform (CGP) will be more credible and that, for positive reviews, the marketer-generated platform (MGP) will be more credible.

2.2 Review valence and review credibility

Although previous studies have researched the effects of review valence, they did not obtain consistent results. The majority of previous research on this topic indicates that negative information generally has stronger effects on consumer judgment than either neutral or positive information (Chiou & Cheng, 2003; Mizerski, 1982). This can be explained with reference to the negativity effect: the notion that, in most situations, negative events, emotions, or information are more salient, potent, dominant, and generally more efficacious than positive ones (Rozin & Royzman, 2001). This effect occurs because negative information is scarcer and can play a more diagnostic role than positive information: negative information helps assign a target to a lower quality category more easily than positive information helps assign the target to a higher quality category (Ahluwalia & Gürhan-Canli, 2000; Herr, Kardes, & Kim, 1991). In addition, according to theories of impression formation, one of the major elements affecting the informativeness of a cue is its discrepancy from expectations. Most people expect life outcomes to be moderately positive, and, since negative information is more distant from expectations than positive information, negative information should carry greater weight when cues are combined into an impression (Fiske, 1980; Skowronski & Carlston, 1989). When considering reviews specifically, positive reviews have been found not to affect message credibility (Doh & Hwang, 2009). Furthermore, Cheung et al. (2009) demonstrated that message valence has no impact on message credibility. However, other studies (Ballantine & Au Yeung, 2015; Kusumasondjaja et al., 2012) have found that negative reviews are perceived as more credible than positive reviews. Due to their apparent independence from the firm in question, negative review sets are perceived as being more accurate than positive review sets (Cheung et al., 2009). Therefore, we formulate the following hypothesis:

H1: Negative reviews are perceived as more credible compared to neutral and positive reviews.


2.3 Moderating effect of CGPs and MGPs

Consumers access online reviews on either consumer-generated platforms (CGPs) or marketer-generated platforms (MGPs). Studies have found that these two types of platforms have different effects on consumer judgment. Lee and Youn (2009) employed the discounting principle of Kelley (1973) as the underlying framework in their study on the influence of platform type on consumer judgment. They state that, as reviews on a marketer-generated platform can be influenced by marketers (e.g. a reviewer could be compensated by a brand for reviewing its product), consumers may discount a product’s actual performance and hence may be less persuaded by the reviewer’s recommendation. In contrast, as consumer-generated websites are not influenced by marketers, consumers are more likely to attribute the review to the actual performance of the product. Hence, reviews on consumer-generated websites are perceived as more credible than reviews posted on marketer-generated ones. The authors did not find platform type to have a direct impact on consumers’ judgments of products. However, studies on how consumers process information from online platforms have demonstrated that consumers report greater product interest when they obtain information from consumer-generated platforms than from marketer-generated ones (Bickart & Schindler, 2001). Xue and Phelps (2004) questioned this finding, as they found that this relationship is moderated by audience characteristics: platform type only mattered for consumers with low product involvement when they had more experience with offline consumer-to-consumer communications and when they felt that those interactions had been useful.

In their meta-analysis of the determinants of online review helpfulness, Hong, Xu, Wang, and Fan (2017) found that platform type can have mixed effects on the determinants of helpfulness. Online review platforms moderate the effect of review depth, review age, reviewer information disclosure, and reviewer expertise on how customers perceive the helpfulness of a review. As noted in Section 2.2, previous studies on the effects of review valence and message credibility have obtained inconsistent results. Taking into account the moderating role played by platform type with regard to determinants of review helpfulness, we intend to test whether platform type is also a moderating factor in the relationship between review valence and credibility. Taking both review platforms into account, Jeong and Koo (2015) found that subjective negative reviews posted on a consumer-generated website were perceived as more helpful than the same reviews posted on a marketer-generated website, whereas, when the reviews were subjective and positive, subjects perceived information posted on marketer-generated platforms to be more useful than information on consumer-generated platforms. The authors found no statistical difference between platform types for objective negative reviews. We expect to find the same effect on our dependent variable, as the construct of helpfulness is built upon credibility (Li, Huang, Tan, & Wei, 2013). Therefore, we formulate the following hypotheses:

H2: Negative reviews posted on consumer-generated platforms will have a stronger impact on review credibility when compared to marketer-generated platforms; and

H3: Positive reviews posted on marketer-generated platforms have a stronger impact on review credibility when compared to consumer-generated platforms.

3. Method

3.1 Research design and data collection

This study is based on quantitative research. A 3 (positive vs. neutral vs. negative review) × 2 (consumer-generated vs. marketer-generated platform type) between-subjects experimental design is used. We focus on product evaluations of experience goods (restaurants). The participants were Internet users in the Netherlands between the ages of 16 and 69. Participants were presented with the following scenario: the participant is planning an important dinner with friends and has to pick a restaurant. As this dinner is meaningful to the participant, he or she decides to check some reviews of the restaurant in question (since he or she has not been there before). After this explanation, the participants were shown one of six conditions. Each condition depicts a web page displaying reviews of a fictitious restaurant called ‘Max’. The reviews were composed of existing reviews posted on TripAdvisor. Each of the six conditions provides the same quantity of information (see Appendices C and D for all of the conditions). The neutral valence condition consists of a balanced set of negative and positive reviews. Platform type and valence were randomly assigned to the participants. Platform type was manipulated using either a fictitious CGP or a fictitious MGP; fictitious platforms were used to control for familiarity with the platform and for confounding effects of the credibility of the platform or reviews. After having seen their condition, the participants were presented with the survey questions. Potential respondents were approached via Facebook or email. To increase the participation rate, a lottery in which one could win a bol.com gift card was included in the survey. A total of 251 participants took part in the experiment; after excluding those who did not finish the survey, 140 remained.
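As an aside, the random assignment described above can be sketched in a few lines. This is an illustrative reconstruction only; the condition names, the seed, and the helper function are assumptions, not taken from the actual survey tooling:

```python
import random

# The six cells of the 3 (valence) x 2 (platform type) between-subjects design
VALENCES = ["positive", "neutral", "negative"]
PLATFORMS = ["consumer-generated", "marketer-generated"]
CONDITIONS = [(v, p) for v in VALENCES for p in PLATFORMS]

def assign_conditions(n_participants, seed=42):
    """Independently draw a random condition for each participant."""
    rng = random.Random(seed)
    return [rng.choice(CONDITIONS) for _ in range(n_participants)]

assignments = assign_conditions(251)  # one condition per recruited participant
```

Because each participant draws a condition independently, cell sizes come out unequal, which is consistent with the unequal group sizes reported in Table 3.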

To check the CGP vs. MGP manipulation, a question stating ‘I think the web page I just saw is independent of the restaurant’, measured on a 7-point Likert scale ranging from 1 (totally disagree) to 7 (totally agree), was included. The valence manipulation was checked by a question stating ‘The reviews on this web page are mainly: …’, using a 7-point Likert scale ranging from 1 (extremely negative) to 7 (extremely positive).

Lastly, questions on age, education, review experience, web page credibility, and attention paid to the reviews and the web page were included. These variables were included in the survey in order to determine whether we should consider them as covariates in our analyses.

3.2 Construct measurement

Review credibility was measured using the 3-item scale of Block and Keller (1995), with items measured on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). Attention to the web page and to the review were measured using the 5-item scale of Laczniak and Muehling (1993) on a 7-point Likert scale ranging from 1 (none at all) to 7 (very much). For credibility of the web page, eight items adapted from Ohanian (1990) and Rodgers (2004) were used; this construct was measured using semantic differential scales. All scales and their corresponding items are presented in Table 1.

3.3 Factor analysis

Factor analyses were conducted on all of the constructs. Before conducting the factor analysis for review credibility, items 2 and 3 had to be recoded, as these questions were phrased negatively, in contrast to the first item. A reliability analysis showed a Cronbach’s alpha of .357. Deleting the second item resulted in an increased Cronbach’s alpha of .445, which was still below .5 and therefore unacceptable. When conducting the factor analysis, the third item loaded highest (.802) on the construct. Therefore, we decided to keep only this item for further analysis. For the construct of web page credibility, eight measurement items were used.


Table 1.
Factor items, factor loadings, and Cronbach’s alpha

Construct (source)                      Item                                                         Factor loading   CA

Review credibility                      I think the information in this review is unbelievable
(Block & Keller, 1995)                  The information in the review is credible*
                                        I think the information in the review is exaggerated*

Web page credibility                    I think this website is not trustworthy/trustworthy          .865             .876
(Ohanian, 1990; Rodgers, 2004)          I think this website is not credible/credible                .869
                                        I think this website is not believable/believable            .836
                                        I think this website is unrealistic/realistic                .848
                                        I think this website is biased/unbiased*
                                        I think this website is not objective/objective*
                                        I think this website is compromising/uncompromising*
                                        I think this website is unethical/ethical*

Review attention                        How much attention did you pay to the review?                .882             .858
(Laczniak & Muehling, 1993)             How much did you concentrate on the review?                  .827
                                        How involved were you with the review?                       .777
                                        How much thought did you put into evaluating the review?     .732
                                        How much did you notice the review?                          .784

Web page attention                      How much attention did you pay to the web page?              .896             .928
(Laczniak, Russell, & Muehling, 1993)   How much did you concentrate on the web page?                .930
                                        How involved were you with the web page?                     .856
                                        How much thought did you put into evaluating the web page?   .852
                                        How much did you notice the web page?                        .878

* Items deleted after factor analysis


Four items with factor loadings lower than .4 were deleted (Matsunaga, 2010), so four items remained. For web page credibility, the Kaiser-Meyer-Olkin measure of sampling adequacy was .803, above the recommended value of .6, and Bartlett's test of sphericity yielded significant results; the Cronbach’s alpha was .876 after the factor analysis. The review attention and web page attention items all loaded high on their constructs, so no items were deleted from these scales. Their Kaiser-Meyer-Olkin measures of sampling adequacy were .837 and .880, respectively. All constructs show satisfactory values for Bartlett’s test of sphericity and Cronbach’s alpha. The factor loadings and Cronbach’s alphas for the items can be found in Table 1.
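The reliability statistics reported in this section can be computed mechanically. The following is a minimal Python sketch on simulated data (not the thesis dataset); the function names are illustrative:

```python
import numpy as np

def reverse_code(raw, scale_max=7):
    """Reverse-code negatively phrased Likert items (1<->7 on a 7-point scale)."""
    return (scale_max + 1) - np.asarray(raw)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative: three items that measure the same underlying trait plus noise
rng = np.random.default_rng(0)
trait = rng.normal(4, 1, size=200)
items = np.column_stack([trait + rng.normal(0, 0.5, 200) for _ in range(3)])
alpha = cronbach_alpha(items)
```

Reverse-coding maps a response r on a 7-point scale to 8 - r, which is the recoding applied to the negatively phrased review credibility items above.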

3.4 Manipulation checks

The valence manipulation was checked using a control question stating ‘The reviews on this web page are mainly: …’, measured using a 7-point Likert scale ranging from 1 (extremely negative) to 7 (extremely positive). Two independent-samples t-tests were conducted to compare the valence conditions. First, there was a significant difference between the scores for the positive (M = 6.1, SD = .76) and neutral (M = 3.1, SD = .90) review conditions, t(95) = 17.30, p < .001. Second, negative reviews (M = 1.6, SD = 1.0) also differed significantly from the neutral review conditions, t(91) = 7.89, p < .001. These results suggest that the manipulation of valence was successful.
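For illustration, the kind of independent-samples t-test used in these manipulation checks can be run with SciPy. The data below is simulated to resemble the reported cell means and standard deviations; it is not the study’s data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated 7-point manipulation-check scores; parameters mimic the reported
# positive (M = 6.1, SD = .76, n = 47) and neutral (M = 3.1, SD = .90, n = 50) cells.
positive = rng.normal(6.1, 0.76, size=47)
neutral = rng.normal(3.1, 0.90, size=50)

# Student's independent-samples t-test (equal variances), df = 47 + 50 - 2 = 95
t_stat, p_value = stats.ttest_ind(positive, neutral)
```

With group means roughly three scale points apart, the test comes out strongly significant, mirroring the pattern reported above.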

To check the consumer-generated versus marketer-generated platform manipulation, a control statement stating ‘I think the web page I just saw is independent from the restaurant’, measured using a 7-point Likert scale ranging from 1 (totally disagree) to 7 (totally agree), was included. An independent-samples t-test was conducted to check this manipulation. There was a significant difference between the scores for the independent (M = 5.45, SD = 1.29) and dependent (M = 3.89, SD = 2.11) platform conditions, t(138) = -5.21, p < .001. These results suggest that the manipulation of platform independence was also successful.


3.5 Test for control variables

The survey included questions on age, education, review experience, web page credibility, and attention paid to the reviews and the web page. We conducted several tests to determine whether we should consider these variables as covariates. To analyze whether the average review credibility reported by men differs from that reported by women, we performed an independent-samples t-test with gender and review credibility. The result of this test was not significant, t(138) = .51, p = .614: the average review credibility of men (M = 5.1, SD = 1.42) did not differ from that of women (M = 5.2, SD = 1.17). To analyze whether education level influences review credibility, we performed a one-way ANOVA, which was not significant, F(5, 134) = 1.77, p = .122.

Table 2.

Regression results for control variables

Variable               R²     F      df      p      B      t
Age                    .002   .34    1, 138  .56    .009   12.46
Review experience      .052   .37    1, 138  .55    .06    14.27
Web page attention     .014   2.029  1, 138  .157   -.107  20.08
Web page credibility   .184   31.12  1, 138  .000   .501   6.08
Review attention       .084   12.60  1, 138  .001   .354   6.49

a. Dependent variable: review credibility

By means of regression analyses, the variables age, review experience, web page attention, web page credibility, and review attention were analyzed to determine whether they should be included as control variables. The results of these analyses can be found in Table 2. Variables with non-significant results were not included in the subsequent analyses. Only web page credibility (M = 4.89, SD = 1.07) and review attention (M = 5.1, SD = 1.02) showed significant results, and they were therefore controlled for in all of the analyses.
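Each row of Table 2 comes from a single-predictor regression. As a hedged sketch of how those statistics are obtained (simulated data; all names are illustrative, not the thesis dataset):

```python
import numpy as np
from scipy import stats

def simple_regression(x, y):
    """Single-predictor OLS: returns slope B, R^2, F(1, n-2) and its p-value."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    X = np.column_stack([np.ones(n), x])            # intercept + predictor
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    f = r2 / ((1 - r2) / (n - 2))                   # df = (1, n - 2)
    p = stats.f.sf(f, 1, n - 2)
    return beta[1], r2, f, p

# Illustrative: a covariate that genuinely predicts review credibility
rng = np.random.default_rng(0)
covariate = rng.normal(5, 1, 140)
credibility = 2.5 + 0.5 * covariate + rng.normal(0, 0.5, 140)
b, r2, f, p = simple_regression(covariate, credibility)
```

A variable would be retained as a covariate when its regression p-value is significant, as with web page credibility and review attention above.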

3.6 Analyses

Both valence and review platform are categorical variables; therefore, it was appropriate to conduct an ANCOVA for all of the hypotheses.


4. Analyses and results

After removing those participants who did not complete the survey, 140 of the 251 remained as the sample. Participant age ranged from 16 to 69, with an average age of 24 and a standard deviation of 6.8. With regard to the composition of the sample, 34.3% were men and 65.7% women. Almost half of the participants (47.9%) had obtained a bachelor’s degree, 17.9% had obtained a degree higher than a bachelor’s degree, and 34.2% a degree lower than a bachelor’s degree. Furthermore, most participants (40.7%) read reviews before going to an unknown restaurant most of the time, 11.4% always read reviews before going to an unknown restaurant, and 47.9% read them about half of the time or less.

Table 3.

Sample sizes

                     Positive   Neutral   Negative
Consumer-generated   22         27        17
Marketer-generated   25         23        26
Total                47         50        43

Table 4.

Summary of findings

H1: Negative reviews are perceived as more credible compared to neutral and positive reviews.
Finding: Mixed support

H2: Negative reviews posted on consumer-generated platforms will have a stronger impact on review credibility when compared to marketer-generated platforms.
Finding: Not supported

H3: Positive reviews posted on marketer-generated platforms have a stronger impact on review credibility when compared to consumer-generated platforms.
Finding: Not supported

Random assignment led to the sample sizes in each treatment presented in Table 3. The results of the hypothesis tests are summarized in Table 4.


4.4.1 Review valence and perceived review credibility

An ANCOVA was conducted to test the first hypothesis, with two covariates in the analysis: web page credibility and review attention. The independent variable, valence, comprises three groups: positive reviews (M = 5.04, SD = 1.40, n = 47), neutral reviews (M = 5.08, SD = 1.31, n = 50), and negative reviews (M = 5.44, SD = 0.98, n = 43). The assumptions of analysis of covariance were tested before conducting the actual analysis. The normality assumption was evaluated using a histogram (see Appendix A); the data is slightly skewed to the left, though the normality assumption is met. A few outliers were identified in the dataset (see Appendix B). Furthermore, both the assumption of homogeneity of regression slopes and the assumption of homogeneity of variances were found to hold; the latter was evaluated and found tenable using Levene’s test, F(2, 137) = .166. The analysis of covariance shows that the effect of review valence on review credibility is significant, F(2, 131) = 3.222, p = .043, ES = .047, which means there is a significant difference in review credibility between the three valence conditions.
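Conceptually, this ANCOVA is a nested-model comparison: do the valence dummies explain variance beyond the two covariates? A minimal sketch on simulated data (all variable names and parameter values are illustrative assumptions, not the thesis dataset):

```python
import numpy as np
from scipy import stats

def partial_f_test(X_reduced, X_full, y):
    """F-test for the extra columns of X_full over the nested X_reduced (OLS)."""
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return r @ r
    df_num = X_full.shape[1] - X_reduced.shape[1]
    df_den = len(y) - X_full.shape[1]
    f = ((rss(X_reduced) - rss(X_full)) / df_num) / (rss(X_full) / df_den)
    return f, stats.f.sf(f, df_num, df_den)

rng = np.random.default_rng(0)
n = 140
webpage_cred = rng.normal(4.9, 1.1, n)   # covariate 1
review_att = rng.normal(5.1, 1.0, n)     # covariate 2
valence = rng.integers(0, 3, n)          # 0 = positive, 1 = neutral, 2 = negative
y = (3.0 + 0.5 * webpage_cred + 0.35 * review_att
     + 0.4 * (valence == 2) + rng.normal(0, 1.0, n))

ones = np.ones(n)
X_cov = np.column_stack([ones, webpage_cred, review_att])      # covariates only
X_full = np.column_stack([X_cov, valence == 1, valence == 2])  # + valence dummies
f_valence, p_valence = partial_f_test(X_cov, X_full, y)        # df = (2, n - 5)
```

The resulting partial F statistic for the valence dummies corresponds to the kind of F-test for valence that the ANCOVA reports.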

Table 5.

Pairwise comparisons

(I) Valence   (J) Valence   Mean difference (I-J)   SD     p
Positive      Neutral       -.115                   .224   .608
              Negative      -.447                   .232   .056
Neutral       Positive      .115                    .224   .608
              Negative      -.332                   .229   .149
Negative      Positive      .447                    .232   .056
              Neutral       .332                    .229   .149

a. Dependent variable: review credibility
b. Based on estimated marginal means
c. Adjustment for multiple comparisons: Least Significant Difference (equivalent to no adjustments)


Table 6.

Estimated effects

Valence    Mean    SD
Positive   4.987   .161
Neutral    5.102   .156
Negative   5.434   .167

a. Dependent variable: review credibility
b. Covariates appearing in the model are evaluated at the following values: web page credibility = 4.8875, review attention = 5.1100.

Tables 5 and 6 show the post-hoc analyses; no significant effects are found in the LSD post-hoc test (see Table 5). A possible explanation for these results is discussed in Chapter 5.

4.4.2 Interaction effect of review valence and platform credibility

An ANCOVA was conducted to compare the review credibility of the three valence types with platform type included as a moderator. Platform type comprises two groups: the dependent platform (M = 5.17, SD = 1.22, n = 74) and the independent platform (M = 5.18, SD = 1.30, n = 66). The normality assumption was met (see Appendix A). The homogeneity of regression slopes assumption is violated; this could influence the results and make them less reliable, though we nevertheless continue with the ANCOVA. The assumption of homogeneity of variances was tested using Levene’s test, F(5, 134) = .438; as the test shows non-significant results, this assumption is met. The analysis of covariance shows that the moderating effect of platform type on the relationship between review valence and review credibility is non-significant, F(2, 126) = .718, p = .490, ES = .011. The direct effect of valence, F(2, 126) = 5.273, p = .006, ES = .077, is significant.
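The moderation test reported here amounts to asking whether valence × platform interaction terms add explanatory power beyond the main effects. A hedged, self-contained sketch on simulated data (no true interaction is built in, so a non-significant result is the expected pattern; all names and values are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 140
platform = rng.integers(0, 2, n)   # 0 = marketer-generated, 1 = consumer-generated
valence = rng.integers(0, 3, n)    # 0 = positive, 1 = neutral, 2 = negative
# Simulated credibility: a valence main effect but no valence x platform interaction
y = 5.0 + 0.4 * (valence == 2) + rng.normal(0, 1.2, n)

def rss(X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

ones = np.ones(n)
main = np.column_stack([ones, valence == 1, valence == 2, platform])
full = np.column_stack([main,
                        (valence == 1) * platform,   # interaction dummies
                        (valence == 2) * platform])

df_num = full.shape[1] - main.shape[1]               # 2 interaction terms
df_den = n - full.shape[1]
f_int = ((rss(main) - rss(full)) / df_num) / (rss(full) / df_den)
p_int = stats.f.sf(f_int, df_num, df_den)            # large p: no moderation detected
```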

5. Conclusion

Consumers increasingly consult online reviews before buying products or services. This study examined the moderating role of platform type in the relationship between review valence and review credibility. H1 is partially supported: review valence has a significant effect on review credibility, but the post-hoc analysis shows non-significant results. These non-significant results could be due to a lack of statistical power, as the sample size was small. The pairwise comparison between positive and negative reviews is close to significance, and the estimated effect is stronger for negative reviews than for positive ones, so it seems that negative reviews have stronger effects on credibility than positive reviews. However, this should be interpreted with caution, as the post-hoc test results were non-significant. Furthermore, no moderating effect of platform type was found on the relationship between review valence and review credibility. It is important to note that web page credibility and review attention were included as covariates, as they were also found to have an effect on review credibility.

5.1 Limitations

One of the limitations of this research was the small sample size, which leads to lower statistical power. A possible explanation for the high drop-out rate might have been the presence of a language barrier, as most participants were Dutch and the survey was in English. Some of them may have perceived this as a barrier to completing the survey, as they may have feared not understanding the questions given their level of English or felt that translating the questions would have taken too much time. In addition, the sample was 34.3% male and 65.7% female; a more evenly distributed sample could have ruled out outcomes due to gender differences.
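To make the power limitation concrete, the power of a one-way ANOVA can be approximated from the noncentral F distribution. This is an illustrative sketch; the effect size used below (Cohen's f = .25, a conventional 'medium' effect) is an assumption, not a value estimated from the thesis data:

```python
from scipy import stats

def anova_power(effect_f, n_total, k_groups, alpha=0.05):
    """Approximate power of a one-way ANOVA for a Cohen's f effect size."""
    df1, df2 = k_groups - 1, n_total - k_groups
    f_crit = stats.f.isf(alpha, df1, df2)      # rejection threshold under H0
    nc = effect_f ** 2 * n_total               # noncentrality parameter
    return stats.ncf.sf(f_crit, df1, df2, nc)  # P(F > f_crit | H1)

# Illustrative: a medium effect with the study's N = 140 versus a larger sample
power_small = anova_power(0.25, 140, 3)
power_large = anova_power(0.25, 420, 3)
```

Under these assumptions, tripling the sample pushes power toward 1, which illustrates why a larger sample could have yielded more conclusive post-hoc results.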

5.2 Future research and concluding remarks

As attention to the web page was low (see Appendix E), future research could investigate whether consumers in natural settings also pay little attention to the web page they are visiting. Future research could also study how consumers search for reviews, for example, which types of platforms they use more frequently, whether they use platforms interactively, and whether such interaction might influence factors such as review credibility and purchase intention. Furthermore, reviews are now available in video as well as written formats. As Xu, Chen, and Santhanam (2015) have already studied the effects of format type on factors such as credibility and helpfulness, it could also be tested whether these differing format types are influenced by platform type (e.g. whether online video reviews are perceived as more credible on Instagram, on YouTube, or on other platform types).


Sources

Ahluwalia, R., & Gürhan-Canli, Z. (2000). The Effects of Extensions on the Family Brand Name: An Accessibility-Diagnosticity Perspective. Journal of Consumer Research, 27(3), 371–381.

Ballantine, P. W., & Au Yeung, C. (2015). The effects of review valence in organic versus sponsored blog sites on perceived credibility, brand attitude, and behavioural intentions. Marketing Intelligence & Planning, 33(4), 508-521.

Bickart, B., & Schindler, R. M. (2001). Internet forums as influential sources of consumer information.

Journal of Interactive Marketing, 15(3), 31-40.

Block, L. G., & Keller, P. A. (1995). When to accentuate the negative: The effects of perceived efficacy and message framing on intentions to perform a health-related behavior. Journal of Marketing Research, 32(2), 192–203.

Cheung, M. Y., Luo, C., Sia, C. L., & Chen, H. (2009). Credibility of Electronic Word-of-Mouth: Informational and Normative Determinants of On-Line Consumer Recommendations. International Journal of Electronic Commerce, 13(4), 9–38.

Chiou, J.-S., & Cheng, C. (2003). Should a Company Have Message Boards on Its Web Sites? Journal of Interactive Marketing, 17(3), 50-61.

Doh, S. J., & Hwang, J. S. (2009). How Consumers Evaluate eWOM (Electronic Word-of-Mouth) Messages. CyberPsychology & Behavior, 12(2), 193-197.

Emerce. (2016). Consumenten prikken door betaalde reviews van bloggers heen [Image]. Retrieved from https://www.emerce.nl/wire/consumenten-prikken-door-betaalde-reviews-bloggers-heen

Fiske, S. T. (1980). Attention and Weight in Person Perception: The Impact of Negative and Extreme Behavior. Journal of Personality and Social Psychology, 38(6), 889–906.

Goldsmith, R. E., & Horowitz, D. (2006). Measuring Motivations for Online Opinion Seeking. Journal of Interactive Advertising, 6(2), 2–14.

Herr, P. M., Kardes, F. R., & Kim, J. (1991). Effects of Word-of-Mouth and Product-Attribute

Information on Persuasion: An Accessibility-Diagnosticity Perspective. Journal of Consumer Research, 17(4), 454–462.

Hong, H., Xu, D., Wang, A., & Fan, W. (2017). Understanding the determinants of online review helpfulness: A meta-analytic investigation. Decision Support Systems, 102, 1–11.

Ismagilova, E., Dwivedi, Y. K., Slade, E. L., & Williams, M. D. (2017). Electronic Word Of Mouth (eWOM) in the Marketing Context: A State of the Art Analysis And Future Directions. New York, NY: Springer Publishing.

Jeong, H.-J., & Koo, D.-M. (2015). Combined effects of valence and attributes of e-WOM on consumer judgment for message and product. Internet Research, 25(1), 2–29.

Kelley, H. H. (1973). The Processes of Causal Attribution. American Psychologist, 28(2), 107-128.

Kusumasondjaja, S., Shanka, T., & Marchegiani, C. (2012). Credibility of online reviews and initial trust: The roles of reviewer's identity and review valence. Journal of Vacation Marketing, 18(3), 185–195.

Lackermair, G., Kailer, D., & Kanmaz, K. (2013). Importance of Online Product Reviews from a Consumer's Perspective. Advances in Economics and Business, 1(1), 1-5.

Laczniak, R. N., & Muehling, D. D. (1993). The relationship between experimental manipulations and tests of theory in an advertising message involvement context. Journal of Advertising, 22(3), 59–74.

Lee, M., & Youn, S. (2009). Electronic word of mouth (eWOM): How eWOM platforms influence consumer product judgement. International Journal of Advertising, 28(3), 473–499.

Li, M., Huang, L., Tan, C.-H., & Wei, K.-K. (2013). Helpfulness of Online Product Reviews as Seen by Consumers: Source and Content Features. International Journal of Electronic Commerce, 17(4), 101-136.

Matsunaga, M. (2010). How to factor-analyze your data right: Do's, don'ts, and how-to's. International Journal of Psychological Research, 3(1), 97–110.

Mizerski, R. W. (1982). An Attribution Explanation of the Disproportionate Influence of Unfavorable Information. Journal of Consumer Research, 9(3), 301-310.

Ohanian, R. (1990). Construction and validation of a scale to measure celebrity endorsers' perceived expertise, trustworthiness, and attractiveness. Journal of Advertising, 19(3), 39–52.

Rodgers, S. (2003). The effects of sponsor relevance on consumer reactions to internet sponsorships. Journal of Advertising, 32(4), 67–76.

Rozin, P., & Royzman, E. B. (2001). Negativity bias, negativity dominance, and contagion. Personality and Social Psychology Review, 5(4), 296–320.

Skowronski, J. J., & Carlston, D. E. (1989). Negativity and Extremity Biases in Impression Formation: A Review of Explanations. Psychological Bulletin, 105(1), 131–142.

Xia, M., Huang, Y., Duan, W., & Whinston, A. B. (2009). Ballot box communication in online communities. Communications of the ACM, 52(9), 138–142.

Xu, P., Chen, L., & Santhanam, R. (2015). Will video be the next generation of e-commerce product reviews? Presentation format and the role of product type. Decision Support Systems, 73, 85-96.

Xue, F., & Phelps, J. E. (2004). Internet-facilitated consumer-to-consumer communication: The moderating role of receiver characteristics. International Journal of Internet Marketing and Advertising, 1(2), 121–136.


Appendices

Appendix A: Data distribution

Appendix B: Data boxplot


Appendix C: Negative, neutral and positive reviews posted on CGP


Appendix D: Negative, neutral and positive reviews on MGP


Appendix E: Means and standard deviations

Variable                Mean (SD)
Web page credibility    4.9 (1.1)
Review attention        5.1 (1.0)
Web page attention      3.4 (1.4)
