
Master’s Thesis, Graduate School of Communication
Master track: Persuasive Communication

Author: Ingeborg Oost
Student ID number: 11131888
Supervisor: Sandra Zwier

Date of completion: 31-05-2016

The acceptance of online reviews


Foreword

You are now reading the master’s thesis ‘The acceptance of online reviews’. This thesis was written as the final graduation assignment for my master’s programme in Persuasive Communication at the University of Amsterdam. I completed this thesis and executed the experiment under the guidance of Sandra Zwier, whom I want to thank, because without her help I could not have done all of this. I also want to thank the University of Amsterdam for giving me the opportunity to learn at their institution. Finally, I want to thank all the participants who took part in the experiment; without them I could never have collected the data to complete this paper.

Parts of this thesis arose from a research proposal written for the master’s course ‘Marketing Communication’ in December 2015, and were further developed with the guidance of the course ‘Research Methods Tailored to the Thesis’ in January 2016.

I hope you will enjoy reading this thesis.

Ingeborg Oost


Abstract

This paper aims to find out whether platform type, product involvement, and people’s generation influence the acceptance of online product reviews. Consumers nowadays have ever more channels to voice their opinions, including the internet. Through online product reviews consumers can state their opinions about products and thereby make businesses grow or decline. In an experiment, two types of online review platforms, brand generated and non-brand generated, were compared to see whether they differ in the level of online review acceptance. Product involvement and people’s age generation were included as moderating variables. Platform type, product involvement, and people’s generation were not found to impact online review acceptance in the present study: no evidence was found that platform type influences the acceptance of online reviews, or that level of product involvement and people’s generation moderate this effect. These conclusions suggest that brands can create their own website pages for reviews about their products with the knowledge that such pages can be accepted by consumers as much as reviews on non-branded website pages.


Introduction 


Remember the days of Web 1.0? The good old days when the great majority of brands only had to manage websites with simple and clear information, when interaction between the brand and its customers was not visible, let alone interaction between customers. Well, say hello to Web 2.0, where interaction possibilities are everywhere and everybody can add their opinions and thoughts about a brand on the World Wide Web (Berthon, Pitt, Plangger & Shapiro, 2012). According to Berthon et al. (2012) the technologies of Web 2.0 have shifted the locus of brand activity from the desktop to the World Wide Web, the locus of value production from the brands to the consumers, and the locus of power from the brand to the consumer. In short, this means that consumers nowadays play a bigger role in brand management than ever before, since they can use their power to influence the perceived value of branded products and services.

One of the ways for consumers to use their influence is the posting of online reviews. According to Keller (2007) business growth can be predicted by positive consumer reviews of a product or service. However, little is known about the acceptance of online reviews. According to Baumer and Goldman (2013) consumers are increasingly learning that online reviews can be manipulated by brands and may therefore be inauthentic. Bambauer-Sachse and Mangold (2013) concluded that further research is needed on whether consumers differ in their awareness that online reviews can be manipulated by brands, and on whether the credibility of the source plays a role in such judgements. The present research will try to answer some of these questions.


More specifically, the aim of the present research is to find out whether consumers accept online reviews on different online review platforms equally, and whether product involvement and age generation impact this process. To give a clear direction to the research, the following research question is formulated:


Does the platform type where online reviews are placed influence the acceptance of online reviews? And to which extent do level of product involvement and people’s generation moderate this effect?



Theoretical framework

In the last couple of years many brands and other internet companies have created the possibility for customers to leave their opinions and thoughts about branded products or services by providing them online space to add a review (Anderson & Simester, 2014). In some cases, the main purpose of a website itself is the gathering of reviews (e.g., TripAdvisor.com, yelp.com). Research on online reviews has so far mainly focused on why customers write reviews and on whether and how these reviews can influence others, but more recently research is also beginning to focus on fraudulent and deceptive reviews and how these may influence customers (Luca & Zervas, 2013).


Online reviews can be considered from many perspectives, but one of the defining aspects is the extent to which other customers accept the online review. Acceptance is defined here as an act of believing, approving, recognizing and assenting. Therefore, the more a review is accepted, the more impact it can be assumed to have during the buying process. Meanwhile, brands not only monitor written online reviews, but also go a step further by publishing positive reviews about their own products and/or negative reviews about their competitors’ products that are not actually authentic (Dellarocas, 2006; Hu, Bose, Gao & Liu, 2011).


For consumers it can be very difficult to distinguish real customers’ opinions from manipulated ones. However, the possibility of manipulating online reviews will at least raise consumers’ awareness that an online review might not be authentic, and this could lead to increased scepticism towards, and less acceptance of, online reviews among consumers (Mayzlin, 2006). The present research will focus on a number of factors that could make a review more or less likely to be accepted by customers. The first of these is the type of platform on which the online review is published.

Platform type

As mentioned before, there are different kinds of platforms where people can post their reviews. These include brand generated platforms, where a brand gives consumers space to write a review on its own platform, as well as non-brand generated platforms, which are created by another party than the reviewed brand. It can be expected that brand generated and non-brand generated platform types differ in the level of review acceptance by customers. Watts and Zhang (2008) state that one of consumers’ main perceptions of online review platforms is source credibility. It can be assumed that consumers see the brand generated platform primarily as a marketing tool on which brand managers attempt to manipulate the reviews. According to Reactance Theory (Brehm, 1966), people will then try to resist these reviews, driven by their desire to maintain their freedom and their wish not to be manipulated in any way.

On the other hand, it can be assumed that consumers see non-brand generated platforms as more independent and less as a place where brand managers can manipulate reviews. Van Reijmersdal, Neijens and Smit (2010) state that when a message is not perceived as persuasive, it will receive more attention and be perceived as more believable. Therefore, reviews on non-brand generated platforms will be more likely to be accepted, whereas reviews on brand generated platforms will be seen as more persuasive, since the brand itself is involved, and will therefore be less likely to be accepted. Accordingly, the first hypothesis is formulated:

H1: Reviews on non-brand generated platforms are more likely to be accepted by consumers in comparison to reviews on brand generated platforms.

Level of involvement 


According to the Elaboration Likelihood Model there are two routes to attitude change, the central route and the peripheral route (Petty, Cacioppo & Schumann, 1983). These two routes provide a framework for understanding the effectiveness of persuasive communication. According to Petty et al. (1983) the two routes differ in the level of involvement and elaboration. People who take the central route think more critically about arguments and may need further information before forming an attitude. The peripheral route is taken by people who use less cognitive effort and do not rely on well thought through arguments. According to the Elaboration Likelihood Model, individuals vary in their motivation and ability to elaborate. Motivation is related to the personal relevance of processing a persuasive message, while the ability to process persuasive messages refers to the cognitive competence of an individual.

According to Kim, Kim and Park (2010) product involvement refers to the motivation and ability of a person to learn more about a product before purchasing it. When a person is more involved, he or she will be more likely to be critical towards the given product information. Applied to the processing of online reviews, this means that review cues will be judged more critically under conditions of high product involvement. Reviews on brand generated platforms are then less likely to be accepted, since people could be more suspicious about the influence of the brand on these reviews. When product involvement is low, especially the peripheral cues of a review platform will be cognitively processed (Kim et al., 2010). Therefore it is likely that under conditions of low involvement, platform type has less of an impact on review acceptance than under conditions of high involvement. In conclusion, the following hypotheses are formulated:


H2a: Reviews for lower involvement products are more likely to be accepted by consumers in comparison to reviews for higher involvement products.


H2b: Reviews on non-brand generated platforms are more likely to be accepted by consumers in comparison to reviews on brand generated platforms, especially if the reviews are about low involvement products in comparison to high involvement products.

Generation


The so-called ‘X’ and ‘Y’ generations are the younger generations compared to the Baby Boom generation (born between 1940 and 1960), and a lot has been written about their technological usage and preferences (Heaney, 2007). Generation X consists of people who were born in the 1960s to the 1980s (O’Bannon, 2001) and generation Y consists of people who were born in the 1980s to the 2000s (Weiler, 2005).

Generation X and Generation Y are often considered similar in their pragmatic outlook on life, but some differences can be noticed as well. Hymowitz (2007) states that generation Y can be characterized as more optimistic and idealistic and more inclined to value traditions in comparison with generation X. According to a study by Heaney (2007), people in the Y generation are significantly heavier internet users, and this generation uses the internet to obtain information more often than the X generation: seventy per cent of generation Y uses the internet to gather information, compared with 45.4 per cent of generation X (Heaney, 2007).


Just like generation Y, generation X can comprehend the technology of the internet and has a strong preference for online and e-mail business communications (Reisenwitz & Iyer, 2009). Generation Y, however, is also tech savvy and has the honour of being the first generation to have used the internet and all its tools, like e-mail and webpages, since childhood (Tyler, 2008).

According to Lazarevic (2012) generation Y has a very distinctive attitude towards brands, since its members have been raised in a time where everything is branded. This makes generation Y more comfortable with and conscious of brands in comparison with earlier generations, including generation X (Heaney, 2007). Because this generation is more conscious of brands, its members are likely also more aware of marketing tactics than generation X (Lazarevic, 2012). This awareness could possibly lead to more scepticism towards online product reviews among generation Y than generation X, since the former are more likely to be aware of the possibility that brands generate inauthentic online reviews. Accordingly, the following hypotheses are formulated:

H3a: Generation X is more likely to accept an online review in comparison to generation Y.

H3b: Reviews on non-brand generated platforms are more likely to be accepted by consumers in comparison to reviews on brand generated platforms, especially if the reviews are judged by consumers who belong to generation X.

H3c: Reviews for lower involvement products are more likely to be accepted by consumers in comparison to reviews for higher involvement products, especially if the reviews are judged by consumers who belong to generation X.


H4: Reviews on non-brand generated platforms are more likely to be accepted by consumers in comparison to reviews on brand generated platforms, especially if the reviews are about low involvement products in comparison to high involvement products, and even more if the reviews are judged by consumers who belong to generation X.

In short, as is shown in Figure 1, it is expected that the type of online review platform (brand generated versus non-brand generated) influences acceptance of online reviews. Further, it is expected that this effect will be moderated by involvement with the product under review, which can be either high or low, and the reader’s generation, which can be either X or Y.

Figure 1: Conceptual model of the effect of platform type, product involvement, and generation on online review acceptance.

Methods 


An experimental study was conducted to test the systematic effect of platform type (brand generated versus non-brand generated), product involvement (low involvement versus high involvement product), and generation (generation X versus generation Y) on participants’ acceptance of online product reviews. Before executing the actual experiment, a pilot study was conducted. This pilot study targeted the stimulus materials in which the factors Platform type and Product involvement were manipulated. It tested whether the differences in stimulus materials for the branded and non-branded platform types were clearly noticed by a number of pilot participants representative of the sample in the main study, and whether the pilot participants recognized differences between the stimulus materials representing high and low involvement products.


Pilot study

Sample

The sample of the pilot study consisted of 23 participants. Of these participants, 21.7 per cent (N = 5) were male and 78.3 per cent (N = 18) were female; 30.4 per cent (N = 7) belonged to generation X and 69.6 per cent (N = 16) to generation Y.

Design

This pilot had a 2 x 2 factorial experimental design, with Platform type (2 levels, Brand generated versus Non-brand generated), and Product involvement (2 levels, Higher involvement versus Lower involvement) as between-subjects variables.

Independent variables

Platform type: Platform type refers to an online website about brands or products where people can see different kinds of content, including online reviews. For this experiment two types of platforms on which online reviews can occur were distinguished: a) Non-brand generated platforms, such as blogs, review sites, social network sites, and brand communities, which are created and managed by consumers or other non-brand owners, and b) Brand generated platforms, such as corporate blogs and brand sponsored messages on the internet, which enable the brand to engage in a dialogue with consumers. These platforms are created and managed by the brand itself (Van Noort & Willemsen, 2012).

People in the pilot study were shown a website containing four online product reviews. Figure 2 shows the website pages as shown to participants in the branded versus non-branded version. As can be seen in Figure 2 (bottom panels), the website link of the brand generated platform had the same name as the logo, paired with a brand-related slogan. In addition, an explanation of the brand was shown at the top of the page to make it even more obvious that this concerned a brand generated platform. The chosen brand name was fictitious and could not be linked to any earlier experience, to rule out the role of previous brand judgements.

The non-brand generated platform also shown in Figure 2 (top panels) looked exactly the same as the brand generated platform. However, in this version the website link had a different name than the brand name of the reviewed product and a different website slogan was added. This slogan made clear that the website was non-brand generated and made ‘by consumers for consumers’.


Figure 2: The stimulus materials, with non-brand generated websites on top, brand generated websites on the bottom, the low involvement product on the left, and the high involvement product on the right.

Product involvement:

Product involvement refers to the extent to which a potential buyer perceives a certain product or service to be important for himself/herself and how much value is given to a certain product. The more involved a person is, the more actively information on products and services will be gathered (Hong, 2015). As a low involvement product a biro was used (see Figure 2, left panels) and as a high involvement product a laptop was chosen (see Figure 2, right panels). To make sure that only the products mentioned in the reviews created the differences in outcomes, and not the reviews themselves, the reviews for both products were exactly the same, with only the word ‘biro’ replaced by ‘laptop’.


Dependent variables


Whether the manipulation of the factor Platform type was noticed by the pilot participants was tested with the following two items: ‘This website is independent’ and ‘This website presents these reviews out of own interest’. The manipulation of the factor Product involvement was tested with the following two items: ‘Before I buy the shown product I read a lot to make sure I buy the right one’ and ‘It matters which brand and type of the shown product I buy’. All items were answered on a 5-point Likert scale ranging from ‘I totally agree’ (1) to ‘I totally disagree’ (5).



Procedure


All participants were approached via WhatsApp messenger. Through this app they received a personalized message briefly explaining that the included link led to a short survey serving as a pre-test for the main experiment of my graduation project, and that they would be a great help by filling it in. They were also told that they could ask questions at any time and could stop the pilot whenever they wanted. Clicking on the link took them directly to the survey, where they first answered questions about gender and age. After these two questions they were asked to take a good look at the image of the website page presented to them and to answer the four questions mentioned earlier under ‘Dependent variables’. After filling in these questions they were shown a final screen stating that the pilot was finished, together with a thank-you note.



Results

An independent samples t-test was conducted to indicate whether the participants noticed differences between the stimulus materials for product involvement (high versus low involvement). The analysis indicated that the difference in means between the low involvement product, the biro (M = 3.80, SD = 1.32), and the high involvement product, the laptop (M = 1.50, SD = .52), was significant for the question whether the participant would read a lot before buying the product, t(21) = -5.60, p < .001. This means that the participants would gather more information before buying a laptop than before buying a biro. An independent samples t-test was also conducted for the question whether the participants thought brand and type were important. The analysis indicated that the difference in means between the low involvement (M = 3.55, SD = 1.37) and high involvement (M = 1.50, SD = .52) conditions was significant, t(10) = 9.81, p < .001. This means that people care more about which brand and type they buy for a laptop than for a biro.

An independent samples t-test was conducted to indicate whether the participants noticed differences between the stimulus materials for platform type (brand versus non-brand). The analysis indicated that the difference in means between the brand platform (M = 3.40, SD = 1.16) and the non-brand platform (M = 2.75, SD = 1.24) for the question whether the website seems independent was not significant, t(21) = -1.22, p = .236. However, the difference in means showed that the participants did notice the difference between the brand generated and the non-brand generated website in the expected direction. A final independent samples t-test was conducted to test whether the participants noticed differences between the stimulus materials for platform type (brand versus non-brand) for the question whether the website showed the reviews out of own interest. The difference in means between the brand platform (M = 2.93, SD = 1.16) and the non-brand platform (M = 3.4, SD = 1.06) was in the expected direction but not significant, t(21) = .89, p = .382.
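The comparisons above are standard independent-samples t-tests. As an illustration only (the raw pilot ratings are not reproduced in this thesis, so the scores below are invented), such a comparison could be run as follows:

```python
from scipy import stats

# Hypothetical 5-point Likert ratings; NOT the thesis' actual pilot data.
brand_ratings = [3, 4, 4, 5, 3, 4]      # e.g. ratings in the brand generated condition
nonbrand_ratings = [2, 3, 2, 3, 2, 3]   # e.g. ratings in the non-brand condition

# Student's t-test (equal variances assumed), matching the integer
# degrees of freedom reported in the pilot, e.g. t(21).
result = stats.ttest_ind(brand_ratings, nonbrand_ratings)
print(result.statistic, result.pvalue)
```

Note that the fractional degrees of freedom reported later in the main study (e.g. t(22.638)) suggest a Welch correction, which `ttest_ind` applies when called with `equal_var=False`.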

Conclusion

The pilot study confirmed that the stimulus materials for the factor product involvement were ready to use, since the participants of this pilot study confirmed differences in the level of involvement in buying the products (biro versus laptop). Although no significant differences were found between platform types, the means showed that the pilot participants perceived the brand generated platform as less independent and as showing the reviews more out of own interest than the non-brand generated platform. For this reason the stimulus materials were retained for the main study.

Main study

Sample

In total 136 respondents participated in the experiment. One participant was below eighteen and one participant claimed to be one hundred years old; these two were removed for being too young or probably not serious. Another six participants were removed for obtaining a z-score below -2 or above 2 on the message acceptance scale, and were thus identified as outliers on the critical dependent variable. The remaining 128 participants were used for the data analysis.
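The outlier rule described above (dropping respondents whose scale score lies more than two standard deviations from the mean) can be sketched as follows; the scores are invented for illustration:

```python
import numpy as np

def drop_z_outliers(scores, cutoff=2.0):
    """Return only the scores whose z-score (using the sample SD) is within ±cutoff."""
    scores = np.asarray(scores, dtype=float)
    z = (scores - scores.mean()) / scores.std(ddof=1)
    return scores[np.abs(z) <= cutoff]

# Hypothetical acceptance-scale means for ten respondents; the last one is extreme.
scores = [3, 3, 3, 3, 3, 3, 3, 3, 3, 10]
kept = drop_z_outliers(scores)
print(len(kept))  # prints 9: the extreme respondent is removed
```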

Of the final 128 participants, 63.3 % (N = 81) were female and 36.7 % (N = 47) were male. Regarding generation, 54.7 % (N = 70) of the participants belonged to Generation Y, with a mean age of M = 24.61 (SD = 4.037), and 45.3 % (N = 58) belonged to Generation X, with a mean age of M = 50.31 (SD = 8.306). Concerning level of education, the largest share of the participants (46.1 %, N = 59) attended education at college level, 21.9 % (N = 28) at university level, 18.8 % (N = 24) at intermediate vocational level, and 13.2 % (N = 17) at lower vocational/high school level.

The largest share of the participants, 35.2 % (N = 45), uses the internet one to two hours a day; 31.3 % (N = 40) three to four hours a day; 14.0 % (N = 18) more than seven hours a day; 12.5 % (N = 16) five to six hours a day; and 7.0 % (N = 9) less than one hour a day. As for searching for product information and reviews on the internet, 43.0 % (N = 55) does so once a week, 35.1 % (N = 45) once a month, 10.9 % (N = 14) once a year, 10.1 % (N = 13) once a day, and 0.8 % (N = 1) never to almost never.

Design

The experimental study had a 2 x 2 x 2 factorial design, with Platform type as a between-subjects variable (2 levels, Brand generated versus Non-brand generated), Product involvement as a between-subjects variable (2 levels, Higher involvement versus Lower involvement), and Generation as a quasi-experimental variable (2 levels, Generation X versus Generation Y). The factorial design is shown in Figure 3.


Figure 3: A 2 x 2 x 2 factorial design, with Platform type as a between-subjects variable, Product involvement as a between-subjects variable, and Generation as a quasi-experimental variable.

Independent variables

Platform type:

For a definition, see the pilot study section. Since the results of the pilot study showed that the stimulus materials were interpreted as intended, the stimulus materials of the pilot study were also used for the main experiment.

Product involvement:

For a definition, see the pilot study section. Since the results of the pilot study showed that the stimulus materials were interpreted as intended, the stimulus materials of the pilot study were also used for the main experiment.

Generation:


The year a respondent was born defined his or her generation. For this experiment there were two levels: ‘Generation X’ and ‘Generation Y’. Most researchers use birth years ranging from the 1960s to the 1980s to define Generation X (O’Bannon, 2001), so the youngest members of Generation X in our sample were 36 years old in 2016. Generation Y is the generation born after Generation X; most researchers use birth years ranging from the 1980s to the 2000s (Weiler, 2005), so the oldest members of Generation Y in our sample were about 36 years old in 2016.

To make sure this variable was measured correctly, every participant was asked for their age at the end of the experiment.

[Figure 3, not reproduced here, crossed Platform type (Brand generated versus Non-brand generated) with Product involvement (Higher versus Lower) separately for participants younger than 36 years (Generation Y) and older than 36 years (Generation X).]

Dependent variable

Acceptance of online reviews:

Acceptance was defined as an act of believing, approving, recognizing and assenting to an online review. After viewing the stimulus website, every participant filled out a 7-item message acceptance scale. This message acceptance scale was created by translating and shortening the message acceptance scale of McLaughlin (2011). Example items of this scale are: ‘These reviews are unbiased’, ‘These reviews are accurate’, and ‘These reviews are believable’. The items were answered on a 5-point Likert scale ranging from ‘I completely disagree’ (1) to ‘I completely agree’ (5). See Appendix A for the full message acceptance scale. The acceptance scale was found to be reliable, with a Cronbach’s alpha of .846, and had a general mean of 2.787 (SD = 1.202).
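Cronbach's alpha, used here to assess the reliability of the 7-item scale, is computed as α = k/(k−1) · (1 − Σs²ᵢ/s²ₜ), where s²ᵢ are the item variances and s²ₜ is the variance of the summed scores. A minimal sketch, with invented responses rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the sum scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Four respondents answering three perfectly consistent items: alpha is 1.0.
print(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]))
```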

Manipulation checks

To check whether the participants perceived the experimental factors Platform type and Product involvement as intended, a manipulation check was added at the end of the experiment. The manipulation check questions were asked after message acceptance was measured. To check whether participants perceived the manipulated factor Platform type as intended, the following questions were asked: ‘How much do you believe the website with the shown reviews is independent?’ and ‘How much do you believe the website shows the reviews out of own interest?’ The two items together had a Cronbach’s alpha of .651. To check whether participants perceived the manipulated factor Product involvement as intended, the following questions were asked: ‘Do you think it is important to learn more about the shown product before buying?’ and ‘Do you think it is important to pay attention to brand and type when buying the product?’ Both manipulation check questions were answered on a 5-point Likert scale ranging from ‘I totally agree’ (1) to ‘I totally disagree’ (5). The two items together had a Cronbach’s alpha of .896.

Procedure

Participants were approached through WhatsApp messenger, Facebook or LinkedIn. Through these media the participants were asked to take part in a study that was part of a Master’s thesis. When the participants clicked on the link they saw the introduction page, which gave some information about their rights and stated the overall goal of the research. The exact goal of the research was not given, to make sure the participants were not primed. At the end of the introduction page they had to agree to an informed consent; if they did, they were shown one of the four created website images containing the reviews. They were asked to take a good look at the image and answer some questions at the end of the page about the shown reviews; this is where the message acceptance scale was presented. When these questions were answered they were directed to the next page with the manipulation check items, and after answering those they were directed to a page with demographic questions about age, gender, education, internet usage, and searching for product information and product reviews on the internet. Finally, they were directed to the last page, showing a thank-you note and an e-mail address in case they had any questions or wanted more information about the research.

Results

Randomization check

Several randomization checks were performed to check whether or not the participants were randomly assigned to the different conditions in terms of properties such as gender and age. These analyses confirmed that the participants were randomly distributed over experimental conditions. See appendix B for a description of the randomization checks and results.
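The thesis does not specify which tests were used for the randomization checks; a common choice for a categorical property such as gender is a chi-square test of independence between condition and gender. A sketch with hypothetical counts (not the study's actual cell counts):

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of women and men in each of the four conditions.
table = [[20, 12],
         [21, 11],
         [20, 12],
         [20, 12]]
chi2, p, dof, expected = chi2_contingency(table)
# A large p-value indicates no evidence of uneven gender distribution.
print(round(chi2, 3), round(p, 3), dof)
```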

Manipulation checks

To test whether the manipulation of platform type was perceived as intended, an independent samples t-test was conducted with Platform type as the independent variable (brand generated versus non-brand generated) and the platform manipulation check variable as the dependent variable. The mean in the brand generated website condition (M = 3.461, SD = .742) was significantly higher than in the non-brand generated website condition (M = 2.828, SD = 1.001), t(9.507) = 4.064, p = .003. This result confirmed that the reviews shown on the brand generated website were indeed seen as less independent and more as presented out of the website’s own interest than the reviews shown on the non-brand generated website.


To test whether the manipulation of product involvement was perceived as intended, an independent samples t-test was conducted with Product involvement as the independent variable (higher involvement product versus lower involvement product) and the involvement manipulation check variable as the dependent variable. The mean in the higher involvement condition (M = 1.723, SD = .739) was significantly lower than in the lower involvement condition (M = 3.793, SD = 1.149), t(22.638) = -12.162, p < .001. This result confirmed that the high involvement product, the laptop, was indeed seen as involving more deliberation before buying than the low involvement product, the biro.

Main analyses

An analysis of variance (ANOVA) was conducted with Platform type (brand generated versus non-brand generated) and Product involvement (higher versus lower) as independent variables and mean scores on the Message Acceptance Scale as dependent variable. Contrary to what was expected in Hypothesis 1, platform type did not significantly affect acceptance of the online reviews, F(1, 126) = .713, p = .400. As can be seen in Table 1, the mean acceptance of the online reviews on the brand generated website was slightly, but not significantly, higher than the mean acceptance of the online reviews on the non-brand generated website. Thus, platform type did not influence the level of online review acceptance.

The same ANOVA showed that, contrary to what was expected in Hypothesis 2a, product involvement did not significantly affect acceptance of the online reviews, F(1, 126) = .161, p = .689. As can be seen in Table 1, the mean acceptance of the online reviews for the high involvement product was comparable to the mean acceptance of the online reviews for the low involvement product. Thus, product involvement did not influence the level of online review acceptance.

The ANOVA further showed that the interaction between platform type and product involvement did not significantly affect acceptance of the online reviews, F(7, 120) = .000, p = .991, so Hypothesis 2b had to be rejected. As can be seen in Table 1, platform type and product involvement each had a comparable influence on online review acceptance, meaning that they did not interact in influencing the level of online review acceptance.


Contrary to what was expected in Hypothesis 3a, generation did not significantly affect acceptance of the online reviews either, F(1, 126) = 2.784, p = .098. As can be seen in Table 1, the mean acceptance of the online reviews among Generation X was slightly, but not significantly, higher than among Generation Y. Thus, generation did not influence the level of online review acceptance.

Contrary to what was expected in Hypothesis 3b, the ANOVA showed that the interaction between platform type and generation did not significantly affect acceptance of the online reviews, F(7, 120) = 2.111, p = .149. As can be seen in Table 1, platform type and generation each had a comparable influence on online review acceptance, meaning that they did not interact in influencing the level of online review acceptance.

Contrary to what was expected in Hypothesis 3c, the analysis showed that the interaction between product involvement and generation did not significantly affect acceptance of the online reviews, F(7, 120) = 2.597, p = .110. As can be seen in Table 1, product involvement and generation each had a comparable influence on online review acceptance, meaning that product involvement and generation did not interact in influencing the level of online review acceptance.

Finally, and contrary to what was expected in Hypothesis 4, the analysis showed that the three-way interaction between platform type, product involvement, and generation did not significantly affect acceptance of the online reviews, F(7, 120) = .027, p = .869. This means that the combination of platform type, product involvement and generation did not influence the level of online review acceptance either.
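For a balanced design, the F ratios reported above can be obtained by partitioning the sums of squares over the two factors and their interaction. The sketch below is a simplified illustration with invented, balanced cell data; the thesis analysis itself was run in statistical software on the collected (and possibly unbalanced) sample, so the function and the numbers here are assumptions for demonstration only.

```python
import numpy as np

def two_way_anova(cells):
    """F statistics for factor A, factor B, and the A x B interaction
    in a balanced two-factor design. cells maps (a_level, b_level) to
    an equal-length list of scores."""
    data = {k: np.asarray(v, dtype=float) for k, v in cells.items()}
    n = len(next(iter(data.values())))            # scores per cell
    a_levels = sorted({a for a, _ in data})
    b_levels = sorted({b for _, b in data})
    grand = np.mean([x for v in data.values() for x in v])
    cell_mean = {k: v.mean() for k, v in data.items()}
    a_mean = {a: np.mean([cell_mean[(a, b)] for b in b_levels]) for a in a_levels}
    b_mean = {b: np.mean([cell_mean[(a, b)] for a in a_levels]) for b in b_levels}
    # Between-group sums of squares for each effect.
    ss_a = n * len(b_levels) * sum((a_mean[a] - grand) ** 2 for a in a_levels)
    ss_b = n * len(a_levels) * sum((b_mean[b] - grand) ** 2 for b in b_levels)
    ss_ab = n * sum((cell_mean[(a, b)] - a_mean[a] - b_mean[b] + grand) ** 2
                    for a in a_levels for b in b_levels)
    ss_err = sum(((v - cell_mean[k]) ** 2).sum() for k, v in data.items())
    df_err = len(a_levels) * len(b_levels) * (n - 1)
    ms_err = ss_err / df_err
    f_a = (ss_a / (len(a_levels) - 1)) / ms_err
    f_b = (ss_b / (len(b_levels) - 1)) / ms_err
    f_ab = (ss_ab / ((len(a_levels) - 1) * (len(b_levels) - 1))) / ms_err
    return f_a, f_b, f_ab

# Invented balanced example: platform matters, involvement does not.
f_platform, f_involvement, f_interaction = two_way_anova({
    ("brand", "high"): [1, 2, 3],
    ("brand", "low"): [1, 2, 3],
    ("non-brand", "high"): [4, 5, 6],
    ("non-brand", "low"): [4, 5, 6],
})
```

In this constructed example only the platform factor separates the cell means, so its F ratio is large while the involvement and interaction F ratios are zero, mirroring how a null result such as the one reported here would look in the F column.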


Table 1: Mean acceptance of the online reviews per condition.

Effect                               Condition            M      SD
Platform                             Brand                2.769  .090
                                     Non-brand            2.661  .091
Involvement                          High                 2.741  .089
                                     Low                  2.689  .091
Generation                           X                    2.822  .086
                                     Y                    2.689  .094
Platform x Involvement               Brand, High          2.794  .126
                                     Brand, Low           2.744  .128
                                     Non-brand, High      2.688  .126
                                     Non-brand, Low       2.635  .130
Platform x Generation                Brand, X             2.783  .123
                                     Brand, Y             2.755  .131
                                     Non-brand, X         2.860  .120
                                     Non-brand, Y         2.462  .136
Involvement x Generation             High, X              2.783  .123
                                     High, Y              2.755  .131
                                     Low, X               2.866  .120
                                     Low, Y               2.462  .136
Platform x Involvement x Generation  Brand, High, X       2.921  .164
                                     Brand, High, Y       2.667  .191
                                     Brand, Low, X        2.644  .184
                                     Brand, Low, Y        2.844  .197
                                     Non-brand, High, X   2.979  .197
                                     Non-brand, High, Y   2.396  .197
                                     Non-brand, Low, X    2.742  .160
                                     Non-brand, Low, Y    2.528  .206


Conclusion

In conclusion, the level of online review acceptance in the present study was not influenced by the type of platform on which the reviews were shown. Nor was online review acceptance affected by the level of product involvement or the generation of the reader. These conclusions are meaningful because the manipulation checks confirmed that participants did perceive the brand generated website as less independent than the non-brand generated website; nevertheless, participants exposed to reviews on the non-brand generated platform did not show a higher level of acceptance of the online reviews than participants exposed to the brand generated platform. Likewise, participants did perceive the high involvement product (the laptop) as requiring more deliberation before buying than the low involvement product (the biro), yet online review acceptance was no higher among participants who saw the low involvement product than among those who saw the high involvement product. We also found no differences in online review acceptance between Generation X and Generation Y. Therefore, the research question "Does the platform type where online reviews are placed influence the acceptance of online reviews? And to what extent do level of product involvement and people's generation moderate this effect?" can be answered as follows: our study found no evidence that platform type influences the acceptance of online reviews, or that level of product involvement and people's generation moderate this effect.

The above conclusion was unexpected, since earlier studies suggested that differences in acceptance between the platform types, levels of product involvement, and generations could be expected. Apparently, people find reviews on brand generated websites and on non-brand generated websites equally credible, even though the same participants perceived the non-brand generated website itself as more credible than the brand generated website. This is not in line with the earlier finding by Watts and Zhang (2008) that source credibility is one of consumers' main perceptions of online review platforms, or with Van Reijmersdal, Neijens and Smit (2010), who stated that messages on credible platforms are not perceived as persuasive and are therefore more believable and acceptable. It was also expected that a more involved person would be more critical and less likely to accept product reviews (Kim et al., 2010). However, the results of this experiment showed that the different levels of product involvement did not influence acceptance of the product reviews. Further, although generation Y can be supposed to have a unique attitude towards brands, having been raised in a time in which everything is branded (Lazarevic, 2012), and to be more conscious of brands than generation X (Heaney, 2007), this did not influence the acceptance of online reviews in the present study: people from generation Y were not more likely to accept the reviews than people from generation X.

Discussion

Several decisions had to be made when designing the experiment of the present study. One of these was whether or not to add a page break after the website image was shown to participants. Such a break could make the results more reliable, since participants would not be able to look at the website image again while answering the online review acceptance scale. However, it was expected that many participants would open the link to the experiment on a mobile device and therefore have a small screen, meaning that they might not see all cues the first time around. It was therefore decided not to use a page break, so that participants could take a second look at the website image and search for cues before answering the review acceptance scale. It should be noted that this decision could have strengthened the observed effects relative to everyday use, since participants could answer the questions more critically after a second look. Hence, this possible shortcoming of our study does not explain the absence of any effects.

A clear explanation for the results of the experiment is hard to give, since none of the hypotheses was confirmed. The participants did notice the differences in platform type (the non-brand versus brand versions) and product involvement (biro versus laptop), so this cannot form an explanation. Perhaps the reviews were so clear and unambiguous that they were accepted on the brand and non-brand platforms, and for the high and low involvement products, alike. That no difference was found between generations could be because product reviews and word of mouth are not an entirely new concept: before the internet was invented, people also saw product reviews in brochures and as marketing slogans. Generation X could thus also be familiar with the ways in which reviews are presented and used to persuade people, so the expected advantage of generation Y, with their knowledge of the internet and marketing, could be negligible. Since all the differences between conditions, although not significant, were in the expected direction, it is also possible that platform type, product involvement, and generation do have the expected effects on review acceptance, but that other factors overruled these effects in this experiment. One such factor could be that participants were asked to pretend the situation was real: they clearly knew that the shown images and reviews were fake, but had to act as if they were genuine. It is possible that, because of this, participants took the reviews more seriously than they would in a real daily life situation.

Since all hypotheses were rejected, it may be that other factors lead to the expected differences in the acceptance of online reviews. For example, the acceptance of online reviews on social media websites such as Twitter, Facebook and Instagram could be compared with acceptance on more traditional websites, where only a small part of the website is consumer-generated. In this experiment only traditional websites were compared, and no differences were found; there could still be a difference in the level of acceptance of online reviews on social media platforms compared with more traditional platforms. It could also be that product information, rather than product reviews, leads to differences in acceptance: perhaps people are more sceptical towards product information for higher involvement products than for lower involvement products. Product information for high and low involvement products could therefore be compared in terms of acceptance.

Finally, the conclusions seem encouraging for brands that have to deal with online product reviews, since the results of the present study suggest that brands can generate their own website pages for reviews about their products in the knowledge that reviews on these pages are accepted as much as reviews on non-branded website pages. It also does not seem to matter whether high involvement or low involvement products, or products aimed at younger or older consumers, are marketed, since these factors do not seem to make product reviews less acceptable to consumers.


References

Anderson, E. T., & Simester, D. I. (2014). Reviews without a purchase: Low ratings, loyal customers, and deception. Journal of Marketing Research, 51(3), 249-269.
Bambauer-Sachse, S., & Mangold, S. (2013). Do consumers still believe what is said in online product reviews? A persuasion knowledge approach. Journal of Retailing and Consumer Services, 20(4), 373-381.
Berthon, P. R., Pitt, L. F., Plangger, K., & Shapiro, D. (2012). Marketing meets Web 2.0, social media, and creative consumers: Implications for international marketing strategy. Business Horizons, 55(3), 261-271.
Brehm, J. W. (1966). A theory of psychological reactance. New York, NY: Academic Press.
Dellarocas, C. (2006). Strategic manipulation of internet opinion forums: Implications for consumers and firms. Management Science, 52(10), 1577-1593.
Heaney, J. G. (2007). Generations X and Y's internet banking usage in Australia. Journal of Financial Services Marketing, 11(3), 196-210.
Hong, I. B. (2015). Understanding the consumer's online merchant selection process: The roles of product involvement, perceived risk, and trust expectation. International Journal of Information Management, 35(3), 322-336.
Hu, N., Bose, I., Gao, Y., & Liu, L. (2011). Manipulation in digital word-of-mouth: A reality check for book reviews. Decision Support Systems, 50(3), 627-635.
Hymowitz, C. (2007, July 9). Managers find ways to get generations to close culture gaps. Wall Street Journal, p. B1.
Keller, E. (2007). Unleashing the power of word of mouth: Creating brand advocacy to drive growth. Journal of Advertising Research, 47, 448-452.
Kim, J. U., Kim, W. J., & Park, S. C. (2010). Consumer perceptions on web advertisements and motivation factors to purchase in the online shopping. Computers in Human Behavior, 26(5), 1208-1222.
Lazarevic, V. (2012). Encouraging brand loyalty in fickle generation Y consumers. Young Consumers, 13(1), 45-61.
Lee, J., & Lee, J. N. (2009). Understanding the product information inference process in electronic word-of-mouth: An objectivity-subjectivity dichotomy perspective. Information & Management, 46(5), 302-311.
Lee, T. Y., & Bradlow, E. T. (2011). Automated marketing research using online customer reviews. Journal of Marketing Research, 48(October), 881-894.
Luca, M., & Zervas, G. (2013). Fake it till you make it: Reputation, competition, and Yelp review fraud. Working paper, Harvard University.
Mayzlin, D. (2006). Promotional chat on the internet. Marketing Science, 25(2), 155-163.
McLaughlin, M. L. (2011). Communication Yearbook 9. New York, NY: Routledge.
O'Bannon, G. (2001). Managing our future: The generation X factor. Public Personnel Management, 30(1), 95-110.
Petty, R. E., Cacioppo, J. T., & Schumann, D. (1983). Central and peripheral routes to advertising effectiveness: The moderating role of involvement. Journal of Consumer Research, 10(2), 135-146.
Reisenwitz, T. H., & Iyer, R. (2009). Differences in generation X and generation Y: Implications for the organization and marketers. Marketing Management Journal, 19(2), 91-103.
Tyler, K. (2008). Generation gaps. HRMagazine, 53, 69-73.
Van Noort, G., & Willemsen, L. M. (2012). Online damage control: The effects of proactive versus reactive webcare interventions in consumer-generated and brand-generated platforms. Journal of Interactive Marketing, 26(3), 131-140.
Watts, S. A., & Zhang, W. (2008). Capitalizing on content: Information adoption in two online communities. Journal of the Association for Information Systems, 9(2), Article 3.


Appendix A

Message acceptance scale (McLaughlin, 2011)

The reviews are:

believable / convincing / acceptable / important / accurate / persuasive / independent

Response options: I completely agree / agree / neutral / do not agree / completely disagree

Appendix B

Randomization check

A chi-square test was conducted to test whether participants in the platform type conditions (brand generated versus non-brand generated) and the product involvement conditions (higher versus lower) were equally divided in terms of gender. Of the 67 participants in the brand generated condition, 22 (32.8%) were male; of the 67 participants in the non-brand generated condition, 28 (41.8%) were male. No significant difference was found for platform type, χ2 (1) = 1.149, p = .284, nor for product involvement: of the 66 participants in the higher involvement condition, 24 (36.4%) were male, and of the 68 participants in the lower involvement condition, 26 (38.2%) were male, χ2 (1) = .050, p = .823. Therefore, males and females were equally divided across experimental conditions.

A chi-square test was conducted to test whether participants in the platform conditions (brand generated versus non-brand generated) and the involvement conditions (higher versus lower) were equally divided in terms of education level (elementary education versus VMBO/MAVO/IBO versus MBO versus HAVO versus VWO versus HBO versus WO). Of the 64 participants in the brand generated condition, 33 (51.6%) had an HBO degree; of the 64 participants in the non-brand generated condition, 26 (40.6%) had an HBO degree. No significant difference was found for the platform type conditions, χ2 (5) = 3.751, p = .586, nor for the involvement conditions: of the 65 participants in the higher involvement condition, 27 (41.5%) had an HBO degree, and of the 63 participants in the lower involvement condition, 32 (50.8%) had an HBO degree, χ2 (5) = 1.742, p = .884.

A one-way ANOVA was conducted to test whether participants in the platform type conditions (brand generated versus non-brand generated) and the involvement conditions (higher versus lower) were equally divided in terms of age (18 to 76). No effect was found for platform type, F(1, 126) = .091, p = .763 (brand generated: M = 36.60, SD = 14.159; non-brand generated: M = 35.88, SD = 14.557), nor for involvement, F(1, 126) = .003, p = .953 (higher involvement: M = 36.18, SD = 14.013; lower involvement: M = 36.33, SD = 14.719). So there were no significant differences between the conditions concerning age.

A one-way ANOVA was conducted to test whether participants in the platform type conditions (brand generated versus non-brand generated) and the involvement conditions (higher versus lower) were equally divided in terms of online search habits (often to never). No effect was found for platform type, F(1, 126) = 2.571, p = .111 (brand generated: M = 2.547, SD = .733; non-brand generated: M = 2.773, SD = .845), nor for involvement, F(1, 126) = 4.402, p = .083 (higher involvement: M = 2.515, SD = .780; lower involvement: M = 2.809, SD = .780). So there were no significant differences between the conditions in online search habits, and the randomization for this variable was successful.
