
Online reviews: The role of information about the reviewer on the reviewer’s trustworthiness

A study examining the effects of a reviewer label, profile picture, and type of display of the reviewer’s name on the reviewer’s trustworthiness, partially mediated by likability.

By

Online reviews: The role of information about the reviewer on the reviewer’s trustworthiness

Mats Neeft

S3534294

Peizerweg 27-P

(06)83542070

m.p.neeft@student.rug.nl

Master Thesis

Supervisor: Dr. J.A. Voerman

Second supervisor: Dr. J.C. Hoekstra

Rijksuniversiteit Groningen

Faculty of Economics and Business

Department of Marketing

PO box 800


ABSTRACT


TABLE OF CONTENTS

1. INTRODUCTION
1.1 A lack of trust in OCRs
1.2 Factors of OCRs affecting its trustworthiness
1.3 Information about the reviewer
1.3.1 Label assigned to the reviewer
1.3.2 Profile picture of the reviewer
1.3.3 The type of display of the reviewer’s name
1.3.4 Reviewer’s likability
1.4 Problem statement
1.5 Research questions
1.6 Thesis structure
2. LITERATURE REVIEW
2.1 Effect of reviewer label on the reviewer’s trustworthiness
2.2 Effect of profile picture on the reviewer’s trustworthiness and likability
2.3 The way the reviewer’s name is displayed on the reviewer’s trustworthiness
2.4 Interaction effect of profile picture and reviewer’s name on the reviewer’s trustworthiness
2.5 Partial mediation of likability
2.6 Effect of product involvement on the effect of the presence of a reviewer label on the reviewer’s trustworthiness
2.7 Skepticism towards the reviewer’s identity
2.8 Conceptual model
2.9 Hypothesis table
3. METHODOLOGY
3.1 Experimental research design and participants
3.1.1 Population and sample
3.2 Survey procedure and design
3.2.1 Introduction
3.2.2 The product to review
3.2.3 Manipulation of independent variables and stimuli
3.2.4 Manipulation/interpretation check
3.2.5 Demographic variables
3.3 Operationalization and reliability of scales
3.3.1 – Multi-Item scales, Likert scale and slider
3.3.2 The variable reviewer’s trustworthiness
3.3.3 The variable likability
3.3.4 The variable skepticism towards the reviewer’s identity
3.3.5 The variable product involvement
3.4 Plan of Analysis
3.4.1 Data preparation
3.4.2 Factor analysis
3.4.3 Reliability analysis
3.4.4 Analysis of variance (ANOVA)
3.4.5 Regression analysis
3.4.6 Mean centering
3.4.7 Multicollinearity
3.4.8 Mediation analysis
3.4.9 Results interpretation/manipulation check
3.4.10 Correlation between reviewer’s trustworthiness and trustworthiness of the review
4. RESULTS
4.1 Average scores per condition and experimental levels
4.2 Experimental variables on reviewer’s trustworthiness
4.2.1 Differences in mean values per condition
4.2.2 Differences between effects of the experimental variables
4.3 Regression analysis conceptual model without ‘likability’
4.3.1 Results model 1
4.3.2 Results model 2
4.3.3 Results model 3
4.3.4 Results model 4
4.3.5 Results model 5
4.3.6 Mediation analysis of likability
4.4 Results of the hypothesis table
5. CONCLUSIONS & RECOMMENDATIONS
5.1 Conclusion
5.2 Answering and discussing the partial research questions
5.3 Managerial and academic implications
5.4 Limitations
5.5 Recommendations for future research
REFERENCES
APPENDIX A – QUALTRICS SURVEY
APPENDIX B – FACTOR ANALYSES
APPENDIX C – RELIABILITY ANALYSES
APPENDIX D – ANOVA OUTPUT
APPENDIX E – ONE SAMPLE T-TEST

1. INTRODUCTION

Imagine a situation in which you need advice to make a purchase decision. Before the rise of the internet, you would probably have trusted your friends or family to give you this kind of advice (traditional word-of-mouth), or you might have decided to trust the information provided by the seller himself. Nowadays, however, you can also base your purchase decision on electronic word-of-mouth (eWOM), which can be defined as “all informal communications directed at consumers through internet-based technology related to the usage or characteristics of particular goods and services, or their sellers” (Litvin, Goldsmith, & Pan, 2008). In other words, eWOM represents all the reviews you can find online, such as consumer reviews and ratings, which are the most accessible and prevalent form of eWOM (Chatterjee, 2001). It is widely recognized that eWOM influences customer purchase decisions, and thereby has an effect on sales (Floyd et al., 2014). Unlike traditional word-of-mouth, which is rather limited in its reach, eWOM is made available to “a multitude of people and institutions” (Hennig-Thurau, Gwinner, Walsh, & Gremler, 2004). Websites such as TripAdvisor, Google, and Amazon host millions of Online Consumer Reviews (OCRs), and their influence on consumers’ purchase decisions for products and services is constantly growing (Filieri & McLeay, 2014).

Thus, as OCRs reach more people than traditional word-of-mouth and as they are increasingly affecting consumer purchase decisions, it is easy to imagine that companies find it important to manage them.

1.1 A lack of trust in OCRs

While OCRs are increasingly important in consumer purchase decisions, trust in reviews remains an important point of discussion (Teixeira & Kornfeld, 2014). Even stronger, consumers feel a general mistrust towards OCRs (e.g., Engle, 2019): survey data from Maritz Research shows that 40% of its participants did not trust the content and ratings of the more popular review websites (Sterling, 2013). This mistrust also exists because the media frequently reveal scandals of managers posting fake reviews (Floyd et al., 2014): “Fake reviews can breed mistrust among consumers, and are likely to prompt a negative backlash in the marketplace” (Moyer, 2010). For example, take the ‘mind-blowing’ and


because companies offer consumers (monetary) incentives to write them. Subsequently, biased OCRs lead to mistrust in OCRs among consumers (Jurca et al., 2010).

Thus, many consumers still do not trust OCRs. But what does it mean to have trust in something? In its basic form, trust can be seen as “a trustor’s expectations about the motives and behaviour of a trustee” (Doney & Cannon, 1997). Simpson (2012) explains that trust arises because humans need to live socially and to rely on each other. Definitions of trust often include a situation in which the person relying on another is in a vulnerable position: there is a possibility of harm, while the trusting party wants to reduce uncertainty (Friedman, Kahn, & Howe, 2000; Hurley, 2006).

To sum up, a general mistrust towards OCRs exists due to issues such as companies posting fake reviews or offering consumers incentives to write reviews. As it is very important for companies that their OCRs are perceived as trustworthy, it is necessary to look into the different factors of an OCR to see which ones help to increase its trustworthiness.

1.2 Factors of OCRs affecting its trustworthiness

There are multiple factors of an OCR that can influence its trustworthiness. “When people choose to trust they have gone through a decision-making process, based on factors that can be identified, analysed and influenced” (Hurley, 2006). The current literature already examines several of the factors that affect an OCR’s trustworthiness.

Regarding the content, Liu et al. (2008) find that what is written in a review influences the trustworthiness of the OCR. People tend to perceive negative OCRs as more trustworthy than positive ones; this is the effect of review valence (Banerjee & Chua, 2019; Filieri, 2016; Dong et al., 2019). Banerjee & Chua (2019) address style factors of an OCR, where the attractiveness of the title and the informativeness of the description affect trustworthiness. Not only the content of the OCR can influence trust in OCRs, but also the platform on which the review appears (Dong, Li, & Sivakumar, 2019), for example the retailer’s website, review websites, or social network platforms. Furthermore, trustworthiness can even vary depending on the device on which the review is read (Grewal & Stephen, 2019): people tend to have greater trust in reviews when they read them on mobile devices.


impact of perceived similarity between the reviewer and the reader. Yet, the researchers do not take into account that an OCR is rather limited in the amount of information it gives about the reviewer. The research of Xu (2014) experimentally tests the impact of the absence of a profile picture of the reviewer on the reviewer’s trustworthiness. However, that conceptual model lacks a direct link to the trustworthiness of the review itself. The current literature thus seems to lack an empirically tested model that looks at different kinds of information about the reviewer to examine the reviewer’s trustworthiness.

To sum up, there are multiple factors of an OCR that can influence its trustworthiness. This research focuses on the reviewer’s trustworthiness, the reviewer being the person who wrote the review.

1.3 Information about the reviewer

As mentioned in the previous paragraph, the information available about the reviewer in an OCR is limited. Nevertheless, the information that is available could impact the amount of trust a consumer has in the reviewer, and subsequently in the review. This research focuses on the few elements in a review that actually do tell something about the reviewer. These elements are often peripheral cues: factors that require relatively little cognitive effort when forming a positive or negative evaluation of the attitude object (Whittler & Spira, 2002). In the context of OCRs, a peripheral cue is thus an easy-to-process element of the review that helps the reader form an opinion about the reviewer. Because the number of peripheral cues in online reviews is typically limited (Kusumasondjaja, Shanka, & Marchegiani, 2012), it is important to understand what effects the available peripheral cues have on the reviewer’s trustworthiness.

1.3.1 Label assigned to the reviewer


Figure 1.1 Top fan label on Facebook (source: www.Facebook.com)

But what does such a ‘Top fan’ label mean? It can be acquired “by being one of the most active people on your page” (Facebook Help Centre, 2019). No further details are given, which makes the meaning of such labels unclear. This area is currently unexplored in the literature, yet it is important to know if and how these kinds of labels impact the reviewer’s trustworthiness.

1.3.2 Profile picture of the reviewer

A profile picture can work as another peripheral cue (Van der Land & Muntinga, 2014) that gives the reader some information about the reviewer. An inspection of the top 10 consumer review websites (Vendasta, 2019) makes clear that a majority of them allow the reviewer to connect with a social media account, e.g. Facebook. When a reviewer connects a social media account to the review platform, their profile picture is automatically attached to the review. Hum et al. (2011) study identity construction on social networking sites by examining profile pictures of Facebook users. They find that the majority of profile pictures include the participant as the only person in the picture. In the context of online reviews, the profile picture thus gives the reader an idea of who the reviewer is.

1.3.3 The type of display of the reviewer’s name


1.3.4 Reviewer’s likability

The peripheral cue ‘profile picture’ is expected to have an effect not only on the reviewer’s trustworthiness, but also on the reviewer’s likability. Likability is an accumulation of several characteristics that other people perceive and judge in a positive or negative way (Oliver, 2013). These characteristics are typically visible in a reviewer’s profile picture. Therefore, likability might partially explain the mechanism behind the relationship of the profile picture and the reviewer’s name with the reviewer’s trustworthiness. The likability of a person is closely related to a consumer’s trust (Wood, Boles, & Babin, 2008) and has often been suggested as a possible underlying concept of trustworthiness.

1.4 Problem statement

Summarizing, with the rise of the internet, consumers can base their purchase decisions on OCRs. However, consumers feel a general mistrust when it comes to OCRs, which is a problem for many companies: when consumers do not trust an OCR, this negatively affects their purchase decisions, resulting in a decrease in potential sales. Previous research shows that trust in OCRs derives from multiple factors. This research focuses on the reviewer. It experimentally tests information about the reviewer that might influence the reviewer’s trustworthiness, and thereby the trustworthiness of the OCR. That information can be a label assigned to the reviewer, the reviewer’s profile picture, and the reviewer’s name. The literature defines this kind of information as peripheral cues. The usage of labels that claim something about the reviewer is as yet unexplored in the current literature. Insights could help to fill this theoretical gap. Furthermore, they could help companies decide how to use peripheral cues containing information about the reviewer in their reviews. This knowledge could help companies design their OCRs in such a way that consumers perceive a higher trust in the reviewer.

1.5 Research questions

To acquire insights into the problem statement above, this research addresses one main research question distributed over several partial research questions.

RQ: How do a reviewer label, a profile picture and the type of display of the reviewer’s name affect the reviewer’s trustworthiness, and how does


This leads to the following partial research questions:

1. What is the effect of the absence or presence of a reviewer label on the reviewer’s trustworthiness?

2. What is the effect of the absence or presence of a profile picture of the reviewer on the reviewer’s trustworthiness and the reviewer’s likability?

3. What is the effect of reading the reviewer’s full name or system-generated username on the reviewer’s trustworthiness?

4. Are there any interaction effects of the reviewer label, the profile picture and the type of display of the reviewer’s name on the reviewer’s trustworthiness or likability?

5. Which other variables play a role in influencing the reviewer’s trustworthiness?

6. Which other variables might influence the effects of a reviewer label, the profile picture and the type of display of the reviewer’s name on the trustworthiness of the reviewer?

1.6 Thesis structure


2. LITERATURE REVIEW

In this chapter, the existing literature on the effect of information disclosure on the reviewer’s trustworthiness is discussed, more specifically to get a better understanding of the effect of a reviewer label, a profile picture, and the type of display of the reviewer’s name. It also discusses a possible partial mediation effect of likability in the relationship between the peripheral cue profile picture and the reviewer’s trustworthiness. Based on a better understanding of the relationships between these concepts, appropriate hypotheses are formulated.

2.1 Effect of reviewer label on the reviewer’s trustworthiness

Although the literature does not directly discuss the effects of a label assigned to the reviewer, the usage of labels is not a new phenomenon in marketing. For instance, labels are widely used on food products to highlight claims (Bissinger, 2019). These claims tell more about the product, such as its quality or environmental friendliness.

Bernard, Duke, & Albrecht (2019) discuss two sorts of labels. The first is a no-information label, defined as a label that provides information that is already known to the consumer or does not give any information at all. The second is a minimal-information label: a label with an unclear meaning that lacks substantive evidence. The labels attached to a review, serving as peripheral cues, tend to resemble this second definition. Such labels do provide new information about the reviewer, e.g. ‘This reviewer is an expert’ or ‘This reviewer is a Top Fan’, but those statements are based on unclear evidence. Interestingly, the results of Bernard et al. show that the usage of such minimal-information labels can affect consumer perceptions and lead to a higher willingness to pay: “The addition of any label, regardless of information provided, may inherently increase some consumers’ trust in the quality of the products because the label might be perceived as an act of approval by an authority or expert” (Bernard et al., 2019).


Thus, a reviewer label seems to be a minimal-information label that lacks evidence. Nevertheless, this kind of label could increase the reviewer’s trustworthiness, as it has been shown to increase trust in the quality of food products. Multiple theories describe possible explanations that can help to understand why a reviewer label would increase the reviewer’s trustworthiness. The following hypothesis is formulated:

H1. The presence of a reviewer label in an OCR will increase the reviewer’s trustworthiness.

2.2 Effect of profile picture on the reviewer’s trustworthiness and likability

To determine the effect the presence or absence of a profile picture could have on the reviewer’s trustworthiness and likability, several areas of research are reviewed. First of all, the branding literature teaches us that processing fluency, which is the ease with which consumers identify and recognize the target (Lee & Labroo, 2004), accounts for positive brand evaluations. The authors emphasize that visual information generally enhances processing fluency more than text does. Accordingly, a profile picture in a review could add such visual information and thus increase processing fluency. This may lead to more positive evaluations of the reviewer’s trustworthiness or likability.

Then, looking into the trustworthiness of user profiles, Hawlitschek & Lippert (2015) find, based on data from Airbnb, that the presence of profile pictures positively influences the trustworthiness of user profiles. Also, Filieri (2016) states that the presence of a self-picture is beneficial for trustworthiness. Based on 38 in-depth interviews with users of OCRs, the author shows that consumers indeed use a profile picture in their judgement of perceived trustworthiness, and that the effect of its presence is positive (e.g.: “I also look at the profile picture of the reviewer...I generally look with suspicion to accounts with default profile pictures”). The following is hypothesized:

H2. The presence of a profile picture will increase the reviewer’s trustworthiness.

And:

H3. The presence of a profile picture will increase the reviewer’s likability.

2.3 The way the reviewer’s name is displayed on the reviewer’s trustworthiness

In the current research, the third variable revealing information about the reviewer is the way the reviewer’s name is displayed in the review. Mesch (2012) finds that internet users who post information online using their real names are perceived as more trustworthy than those who do not. This is in line with the research of Uslaner (2004), who distinguishes two groups of internet users: (1) ‘mistrusters’, who are generally mistrustful and worried when surfing the internet, and (2) ‘trusters’, who have more trust and see the internet as gentle and kind. His findings suggest that “63.1 % of generalized ‘trusters’ say that they use their real names on the Web, compared to 55.7% of ‘mistrusters’”. Based on this, the following is hypothesized:

H4. The presence of a reviewer’s full name instead of a system-generated username will increase the reviewer’s trustworthiness.

2.4 Interaction effect of profile picture and reviewer’s name on the reviewer’s trustworthiness

Next to the main effects, an interaction effect between the presence of a profile picture and the way the reviewer’s name is displayed can be expected. The literature finds that people make ‘face-name associations’ (Karl et al., 2001). When the reader of a review simultaneously encounters a reviewer’s full name and a profile picture, the two pieces of information about the reviewer may reinforce each other. The following hypothesis is formulated:

H5. The presence of a reviewer’s full name versus a system-generated username strengthens the positive effect of a profile picture on the reviewer’s trustworthiness.

2.5 Partial mediation of likability

The concepts of the reviewer’s trustworthiness and the reviewer’s likability are closely related to each other (Wood et al., 2008). “In general, greater liking leads to greater trust” (Nicholson, Compeau, & Sethi, 2001). The effect of likability on interpersonal trust has often been examined in buyer-seller situations. For example, in the research of Doney & Cannon (1997), likability served as the strongest predictor of interpersonal trust. Chen & Dhillon (2003) mention that both in the traditional buyer-seller relationship and in e-commerce, the trustworthiness of a source partly depends on the likability of that same source. The following is hypothesized:

H6. An increase in the reviewer’s likability will increase the reviewer’s trustworthiness.

Likability may not only directly influence the reviewer’s trustworthiness but could also serve as a mediator. For example, in the study of Nicholson, Compeau, & Sethi (2001), the researchers find that liking serves as a mediator between personal interaction and trust. Moreover, their findings suggest that this mediation effect is strongest when the relationship is ‘young’. Such a ‘young’ relationship also holds between the reader of a review and the reviewer. Since the profile picture is expected to increase the reviewer’s trustworthiness (hypothesis 2) and likability is expected to be part of the underlying mechanism that explains this increase (hypothesis 3), the following hypothesis is formulated:

H7. The effect of the presence of a profile picture on the reviewer’s trustworthiness is partially mediated by the reviewer’s likability.

2.6 Effect of product involvement on the effect of the presence of a reviewer label on the reviewer’s trustworthiness

Product involvement refers to the degree to which an individual is involved with a given product on a regular basis, and includes the perceived personal relevance of the product (Hanzaee & Ghafelehbashi, 2012). The literature states that the impact of expertise claims depends on the type of product; different effects have been found for, for example, digital goods (Amblee & Bui, 2007), travel (Zhang, Zhang, & Yang, 2016), mortgages (Kuusela, Spence, & Kanto, 1998), and wine (Friberg & Grönqvist, 2012). The amount of involvement with a product may therefore influence the effect of a reviewer label on the reviewer’s trustworthiness. The following is hypothesized:

H8. The positive effect of the presence of a reviewer label on the reviewer’s trustworthiness

is stronger if the reader of the review is more involved with the product.

2.7 Skepticism towards the reviewer’s identity

According to Malhotra (2009), there may be additional variables to control for that account for variation in the dependent variable. Such control variables are included in this research. One control variable that should be taken into account, as it could very well influence the perceived trustworthiness of any reviewer, is people’s skepticism towards the reviewer’s identity in online reviews. McKnight & Chervany (2002) describe this skepticism as people’s attitude towards the reliability of a communication source. If someone is more concerned about the authenticity of reviewers in general, this could affect the level of trust that person has in the reviewer. The following is hypothesized:

H9. An increase in skepticism towards the identity of the reviewer will decrease the amount of trustworthiness in the reviewer.


2.8 Conceptual model


2.9 Hypothesis table

This table includes all the descriptions of the above-mentioned hypothesized relationships between the concepts.

Hypothesis and description

1. The presence of a reviewer label in an OCR will increase the reviewer’s trustworthiness.
2. The presence of a profile picture will increase the reviewer’s trustworthiness.
3. The presence of a profile picture will increase the reviewer’s likability.
4. The presence of a reviewer’s full name instead of a system-generated username will increase the reviewer’s trustworthiness.
5. The presence of a reviewer’s full name versus a system-generated username strengthens the positive effect of a profile picture on the reviewer’s trustworthiness.
6. An increase in the reviewer’s likability will increase the trustworthiness of the reviewer.
7. The effect of the presence of a profile picture on the reviewer’s trustworthiness is partially mediated by the reviewer’s likability.
8. The positive effect of the presence of a reviewer label on the reviewer’s trustworthiness is stronger if the reader of the review is more involved with the product.
9. An increase in skepticism towards the identity of the reviewer will decrease the amount of trustworthiness in the reviewer.


3. METHODOLOGY

In this chapter, the research design used to test the hypotheses from chapter two is explained. This design serves as a blueprint for conducting the empirical research and specifies the details of the approach (Malhotra, 2009). A quantitative method is used in the form of an experiment (i.e., causal research), in which several independent variables that add information about the reviewer are manipulated within a review. The goal is to measure the effects of these experimental variables on the reviewer’s trustworthiness and likability.

3.1 Experimental research design and participants

In order to make causal inferences between the concepts, a between-subjects factorial experimental design is set up. This design is used to measure the effects of the different independent variables at various levels (Malhotra, 2009). This factorial design allows for interactions between the variables and consists of conditions for every possible combination of independent variables. In this experiment, there are three independent variables with two levels each: the presence of a reviewer label (absent/present), presence of a picture of the reviewer (absent/present), and the type of display of the reviewer’s name (reviewer’s full name/ system-generated username). Each participant will be randomly assigned to one of the eight conditions, shown in table 3.1.

Conditions              No profile picture of reviewer                     Profile picture of reviewer
                        System-generated     Reviewer's                    System-generated     Reviewer's
                        username             full name                     username             full name
No reviewer label       Condition 1          Condition 2                   Condition 5          Condition 7
Reviewer label          Condition 3          Condition 4                   Condition 6          Condition 8

Table 3.1 2x2x2 factorial design
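To make the resulting design concrete, the sketch below (an illustration only; in the actual study Qualtrics handles the random assignment, and the variable names are assumptions) enumerates the 2x2x2 combinations and randomly assigns a participant to one of the eight conditions.

```python
# Illustrative sketch: enumerate the 2x2x2 between-subjects design and
# randomly assign a participant to one of the eight conditions.
# (In the actual study Qualtrics performs the randomization.)
import itertools
import random

LEVELS = {
    "reviewer_label": ["absent", "present"],
    "profile_picture": ["absent", "present"],
    "name_display": ["system-generated username", "reviewer's full name"],
}

# All 2 x 2 x 2 = 8 combinations, i.e. the eight experimental conditions.
CONDITIONS = list(itertools.product(*LEVELS.values()))
assert len(CONDITIONS) == 8

def assign_condition(participant_id: int) -> dict:
    """Randomly assign one participant to a condition (between-subjects)."""
    label, picture, name = random.choice(CONDITIONS)
    return {"participant": participant_id, "reviewer_label": label,
            "profile_picture": picture, "name_display": name}

print(assign_condition(1))
```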

3.1.1 Population and sample

Before the collection of responses, the target population of this experiment should be clarified, as well as the sample of the population.


the researchers’ personal social media channels or social environment. The expectation is that, by using this method, all participants in the sample come from a population that uses online consumer reviews and is proficient in the English language. However, this technique might lead to a biased sample of the population; even so, it is chosen due to time restrictions. The average age of the sample is expected to be relatively young because of the digital form of data collection. For each condition, 30 participants are required, which means that at least 240 valid responses are needed.

The data for this paper were collected using Qualtrics software (Qualtrics, Provo, UT, 2005), version 56, an online tool for building and distributing surveys. The survey was open for a period of two weeks, starting on the 29th of November and closing on the 12th of December. A total of 377 participants started the survey, of whom 289 finished it. Of the 289 who finished, 259 respondents correctly answered the attention check with “Disagree”; the answers of those 259 respondents are used in the further analysis. Among these 259 respondents there are relatively more females (60.52%) than males (39.48%), with the greatest gender difference in condition six. The age of the respondents, with an average of 24.58, is spread equally across conditions and gender. Table 3.1.1 shows an overview of the respondents per condition.

Condition   N              Male            Female          Average age
1           31 (11.97%)    17 (54.84%)     14 (45.16%)     24.42
2           31 (11.97%)    12 (38.71%)     19 (61.29%)     25.00
3           31 (11.97%)    15 (48.39%)     16 (51.61%)     25.55
4           33 (12.74%)    13 (39.39%)     20 (60.61%)     23.58
5           31 (11.97%)    10 (32.26%)     21 (67.74%)     26.23
6           33 (12.74%)     8 (24.24%)     25 (75.76%)     23.61
7           36 (13.90%)    11 (30.56%)     25 (69.44%)     23.06
8           33 (12.74%)    16 (48.48%)     17 (51.52%)     25.58
Total       259 (100%)     102 (39.48%)    157 (60.52%)    24.58

Table 3.1.1 Respondents per condition


3.2 Survey procedure and design

This paragraph explains what the respondents encountered during their participation in the survey, which product was chosen, and how the experimental variables were manipulated. It also shows the results of the manipulation/interpretation check.

3.2.1 Introduction

The survey starts with an introduction that welcomes participants to the study. It gives an indication of the time required to finish and a general overview of the survey structure. No information is given on the purpose of the research that might influence the answers. The introduction ends with an explanation of the possibility of winning a gift card, based on the findings of Yu et al. (2017), in which the response rate went up by 18% when there was a $10 incentive and the number of respondents finishing the survey increased by 30%.

3.2.2 The product to review

After the introduction, the participants are asked to imagine a situation in which they have become interested in photography and are browsing the internet to find a camera that fits their needs. They encounter a webpage about an SLR camera. The choice of the SLR camera is based on the study of Zhang et al. (2012), who present a product ranking model that applies weights to product reviews. They mention that on Amazon.com, the world’s largest e-commerce retailer, the SLR camera in the price range of $500 - $700 belongs to the products most affected by online reviews. Today, in 2019, the SLR camera is still one of the most reviewed categories (Amazon, 2019). All indicators of the brand SONY are removed from the picture and the name of the SLR camera is changed to SLR2041. Furthermore, the layout of the original webpage from Amazon.com is changed in such a way that it is unrecognizable.

Although participants are expected to be proficient in the English language, ordinary words are used in the questionnaire and will match the vocabulary level of the respondents (Malhotra, 2009). Because of this, and because it is crucial to understand for which product the review has been written, the term ‘SLR camera’ will be followed by the Dutch translation ‘spiegelreflexcamera’.

3.2.3 Manipulation of independent variables and stimuli


object or event to which the responses are measured (Malhotra, 2009). Those stimuli are created by manipulation of the experimental independent variables.

The base of each stimulus consists of an actual consumer review of an SLR camera as found on Amazon.com. In this review, the name of the original author, the default profile picture, the date, and the brand name are deleted. The review that is chosen has a three-star rating out of five. Indeed, the effect of a one-star review is bigger than that of a five-star review (Sun, 2012), while a three-star review is often described as ‘worth trying’. Also, the text of the review reveals both positive characteristics of the camera (e.g. “movie shoot is good”) and negative ones (“not in super zoom”). The rating and text could push the outcomes of the dependent variable towards more average scores overall, following the theory about the effect of review valence on trust (e.g. Filieri, 2016). The review was written on April 21, 2019. The basis for each stimulus is shown in figure 3.1.

Figure 3.1 Stimulus base

The different levels of the experimental variables are added to this review. Table 3.2 shows a graphical representation of all the manipulated levels of the experimental variables.

Experimental variable                          Levels
Presence of a reviewer label                   None / reviewer label (image)
Presence of a picture of the reviewer          None / profile picture (image)
Type of display of the reviewer's name         Esther Smith / User83712

Table 3.2 Manipulated levels of the experimental variables


• The first experimental variable, the reviewer label, is completely made up by the researcher. Its graphics are derived from cleanpng.com1. The form and relative size to

the profile picture are based on the ‘top-fan’ label of Facebook.com, shown in figure 1.1. The text ‘expert’ is placed next to the image.

• Secondly, the profile picture of the reviewer is a non-existing face of a female acquired via Thispersondoesnotexist.com2. This is done due to the importance of ethical

concerns in marketing research (Malhotra, 2009). The size of the profile picture (H: 3.85 cm x W: 3.85 cm) is equal to that of popular websites for online reviews, such as TripAdvisor and Facebook.

• Lastly, the variable for the reviewer’s name is operationalized by randomly generating the full name and username on the website name-generator.org3.

For instance, in condition 6, the stimulus consists of the reviewer’s label and reviewer’s picture being present together with a system-generated username. Subsequently, those levels are added to the basic review, shown in figure 3.2.

Figure 3.2 Stimulus as used in condition six

3.2.4 Manipulation/interpretation check

As discussed in chapter 2.1, a reviewer label might be perceived as an act of approval by an expert. On the other hand, the possibility exists that participants do not link the label to expertise. Since this research expects the impact of expertise claims to depend on the type of product, it is important that participants perceive the reviewer as an expert due to the reviewer label. An additional question is therefore included in the survey to test how participants interpret the reviewer label.

1 https://www.cleanpng.com is a free and open source for acquiring Portable Network Graphics (PNGs).


3.2.5 Demographic variables

Furthermore, there are more general demographic variables to consider that could account for variance in the dependent variable. These kinds of variables are used to characterize segments (Gupta & Chintagunta, 1994). The most commonly used demographic variables are age and gender; questions asking for these demographics are added to the survey.

3.3 Operationalization and reliability of scales

After the participants are exposed to one of the eight stimuli, a set of questions follows that includes the dependent variable and all other variables of interest. These questions and their scales are presented in table 3.3. Extended explanations of the variables, based on the plan of analysis in section 3.4, are given below the table.

3.3.1 – Multi-Item scales, Likert scale and slider


Table 3.3 Operationalization

Dependent variable: Reviewer’s trustworthiness (source: Ohanian, 1990)
Items:
1. The person that wrote the review was dependable.
2. The person that wrote the review was honest.
3. The person that wrote the review was reliable.
4. The person that wrote the review was sincere.
5. The person that wrote the review was trustworthy.
Scale: 7-point Likert scale: strongly disagree – strongly agree
Eigenvalue 3.03; variance 60.5%; α = .830 (.842 when item 1 is deleted)

Manipulation / interpretation check: Interpretation check of the reviewer label and manipulation check for the effect of the moderator
Item:
1. How much do you agree with the following statement: “When the label, shown above, is attached to an online consumer review, I think that the writer of the review is an expert.”
Scale: 7-point Likert scale: strongly disagree – strongly agree
N/A

Mediator: Likability (source: Reysen, 2005)
Items:
1. The person that wrote the review is friendly.
2. The person that wrote the review is likeable.
3. The person that wrote the review is warm.
4. The person that wrote the review is approachable.
5. I would ask the person that wrote the review for advice.
6. I would like the person that wrote the review as a co-worker.
7. I would like the person that wrote the review as a roommate.
8. I would like to be friends with the person that wrote the review.
9. The person that wrote the review is physically attractive.
10. The person that wrote the review is similar to me.
11. The person that wrote the review is knowledgeable.
Scale: 7-point Likert scale: strongly disagree – strongly agree
Eigenvalue 4.78; variance 43.45%; α = .848

Moderator: Product involvement (source: Traylor & Joseph, 1984)
Items:
1. When other people see me using this product, they form an opinion of me.
2. You can tell a lot about a person by seeing what brand of this product he uses.
3. This product helps me express who I am.
4. This product is “me”.
5. Seeing somebody else use this product tells me a lot about that person.
6. When I use this product, others see me the way I want them to see me.

Attention check (source: Malhotra, 2009)
Item:
1. To check if you are still paying attention to the questions in this survey, please select ‘Disagree’.
Scale: 7-point Likert scale: strongly disagree – strongly agree
N/A

Control variables:
Skepticism towards the reviewer’s identity in online reviews (source: Zhang et al., 2016)
Items:
1. I don’t think that most online reviewers are the people who they claim to be.
2. The identities of the online reviewers are often deceptive.
3. People rarely write consumer reviews for their own business.
4. People writing online product reviews are not necessarily the real customers.
5. People write online reviews pretending they are someone else.
6. Different reviews are often posted by the same person under different names.
Scale: 7-point Likert scale: strongly disagree – strongly agree
Eigenvalue 2.82; variance 47.07%; α = .647 (.795 when item 3 is deleted)

Gender: Please indicate your gender. (Male – Female) N/A
Age: Please indicate your age. (Ratio scale) N/A

Extra variable: Trustworthiness of the OCR (source: Miller, 2016)
Item:
1. Overall, did you trust the online review that was presented to you?
Scale: 7-point Likert scale: no trust at all – complete trust
N/A

3.3.2 The variable reviewer’s trustworthiness


3.3.3 The variable likability

The factor analysis for the variable likability can be found in appendix B.2. The results show that this variable has three components with an eigenvalue higher than one (1: 4.78; 2: 1.49; 3: 1.01). The KMO is .849 and Bartlett’s test of sphericity is significant (.000). After performing separate reliability tests on each factor, presented in appendix C.2, it can be concluded that all factors are reliable. Looking at the different items of the scale, the existence of three factors becomes logical: the items across the factors clearly capture different aspects of likability. Items 1, 2, 3 and 4 indicate that the reviewer is friendly, likeable, warm and approachable, whereas items 7, 8, 9 and 10 ask the participant whether he or she feels similar to the reviewer, about the reviewer’s physical attractiveness, and whether the reviewer would be suitable as a roommate or friend. Finally, items 5, 6 and 11 concern the perceived knowledge of the reviewer, such as whether the participant would ask the reviewer for advice. This research aims to measure likability as indicated by factor one. This factor still accounts for 43.45% of the variance in the answers for the variable. Cronbach’s alpha is .848 for the four items.

3.3.4 The variable skepticism towards the reviewer’s identity

The control variable skepticism towards the reviewer’s identity, appendix B.3, has two components with an eigenvalue above one (1: 2.82; 2: 1.13). The KMO is .745 and Bartlett’s test of sphericity is significant (.000). The first component explains 47.07% of the variance. Only the first component is chosen as a factor, reasoning that the second factor only exists due to item Q3_3, “People rarely write consumer reviews for their own business.” This item differs considerably from the other items in the scale and, although people are expected to be proficient in English, the sentence could be confusing for Dutch participants. This becomes clear when interpreting the results of the reliability analysis in appendix C.3. Cronbach’s alpha is .65 for the six items. Deleting any item other than Q3_3 would bring the alpha down to approximately .52, whereas deleting only item Q3_3 increases Cronbach’s alpha to .8.

3.3.5 The variable product involvement


3.4 Plan of Analysis

When the experiment is done, and all the data is collected, the process of analysing the data begins. To do so, a plan was created prior to the data analyses. It explains the choices for various statistical techniques to gather insights out of the data. The software Statistical Package for the Social Sciences (SPSS) is used to perform the analyses.

3.4.1 Data preparation

After deleting all unfinished participants and the participants who did not pass the attention check question, the dataset from Qualtrics consists of 259 respondents. The answers of these respondents contain no missing data, apart from the optional question where respondents could fill in their e-mail address. One participant indicated their age as “twenty-five”; this text is changed to its numeric value in the dataset.

3.4.2 Factor analysis

With a ‘clean’ dataset, the first step is to perform factor analyses on the multi-item scales in this research. The outputs of the factor analyses are presented in appendix B. Factor analysis is used for data reduction and summarization (Malhotra, 2009). The goal of this factor analysis is to reduce the number of items per concept by combining the items based on common variance. With the factor analysis, we test whether the theoretically assumed one-factor structure of the items holds. The resulting factor also reduces multicollinearity between variables later on in the regression models.
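As a minimal sketch of this step (assuming the item responses are available as one column per Likert item in a pandas DataFrame; the SPSS output in appendix B remains the authoritative analysis), the eigenvalues used for the Kaiser criterion can be obtained from the items’ correlation matrix:

```python
# Minimal sketch: eigenvalues of the items' correlation matrix, used to apply
# the Kaiser criterion (retain components with an eigenvalue above 1).
import numpy as np
import pandas as pd

def kaiser_eigenvalues(items: pd.DataFrame) -> np.ndarray:
    """Return the eigenvalues of the items' correlation matrix, largest first."""
    corr = items.corr().to_numpy()
    return np.linalg.eigvalsh(corr)[::-1]  # eigvalsh: for symmetric matrices

# Example with simulated 7-point Likert answers for a five-item scale.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 8, size=(259, 5)),
                     columns=[f"item_{i}" for i in range(1, 6)])
eigenvalues = kaiser_eigenvalues(items)
print("Eigenvalues:", np.round(eigenvalues, 2))
print("Components retained (Kaiser):", int((eigenvalues > 1).sum()))
```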


3.4.3 Reliability analysis

It is necessary to look at the internal consistency reliability of the different developed factors, the so-called coefficient alpha. This measure of reliability focuses on the internal consistency of the set of items forming the factor(s) (Malhotra, 2009) and is commonly known as Cronbach’s alpha. It ranges from 0 to 1, and a value above 0.6 is needed for adequate internal consistency reliability. The outputs of the reliability analyses are presented in appendix C.
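For illustration, coefficient (Cronbach’s) alpha can be computed directly from the item scores; the sketch below is an assumption-based stand-in for the SPSS output in appendix C and also reproduces the ‘alpha if item deleted’ statistic reported there.

```python
# Minimal sketch: Cronbach's alpha and "alpha if item deleted" for a set of items.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def alpha_if_item_deleted(items: pd.DataFrame) -> pd.Series:
    """Recompute alpha after dropping each item in turn (cf. appendix C)."""
    return pd.Series({col: cronbach_alpha(items.drop(columns=col))
                      for col in items.columns})
```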

3.4.4 Analysis of variance (ANOVA)

After conducting the factor and reliability analyses, an ANOVA is used to examine differences in the mean values of the dependent variable associated with the effects of the controlled independent variables (Malhotra, 2009). Thus, it assesses the variance in the dependent variable reviewer’s trustworthiness across the experimental variables and the eight conditions, testing the null hypothesis that all means are equal.
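A minimal sketch of such an ANOVA (assuming a DataFrame with a numeric trustworthiness score and a categorical condition column; column names are assumptions and the thesis itself uses SPSS):

```python
# Minimal sketch: one-way ANOVA of reviewer's trustworthiness across the eight
# conditions, testing the null hypothesis that all condition means are equal.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def condition_anova(df: pd.DataFrame) -> pd.DataFrame:
    """Fit trustworthiness ~ condition and return the ANOVA table (F-test)."""
    model = smf.ols("trustworthiness ~ C(condition)", data=df).fit()
    return anova_lm(model)
```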

3.4.5 Regression analysis

In order to quantify the relationships of the variables in the conceptual model without the mediator ‘likability’, several multiple regression analyses are conducted (see table 3.4 for an overview of the formulas for the five regression models).

Model   Description                                                        Regression formula
1       Main effects EVs on the DV                                         Y = b0 + b1*EV1 + b2*EV2 + b3*EV3 + ε
2       Main effects EVs on the DV and interaction effect                  Y = b0 + b1*EV1 + b2*EV2 + b3*EV3 + b4*(EV2*EV3) + ε
3       Main effects EVs on the DV, interaction effect and moderator       Y = b0 + b1*EV1 + b2*EV2 + b3*EV3 + b4*(EV2*EV3) + b5*Product involvement + b6*(EV1*Product involvement) + ε
4       Main effects EVs on the DV, interaction effect and control         Y = b0 + b1*EV1 + b2*EV2 + b3*EV3 + b4*(EV2*EV3) + b5*Skepticism towards the reviewer’s identity + b6*Age + b7*Gender + ε
        variables
5       Main effects EVs on the DV, interaction effect, moderator and      Y = b0 + b1*EV1 + b2*EV2 + b3*EV3 + b4*(EV2*EV3) + b5*Product involvement + b6*(EV1*Product involvement) + b7*Skepticism towards the reviewer’s identity + b8*Age + b9*Gender + ε
        control variables

Table 3.4 Regression models


The null hypothesis of the regression model implies that there is no linear relationship between the ‘X-variables’ and the Y-variable reviewer’s trustworthiness. The F-statistic should exceed its critical value at the chosen significance level (which depends on the degrees of freedom) for the model to be significant. The coefficient of determination (R2) shows how much variance the model explains relative to the total variance in the variable reviewer’s trustworthiness. When a variable that explains relatively little additional variance in the reviewer’s trustworthiness is added to the model, the R2 is penalized; this is reflected in the adjusted R2.
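A minimal sketch of how models 1, 2 and 5 from table 3.4 could be fitted outside SPSS (column names such as ‘trust’, ‘label’, ‘picture’ and the ‘_c’ suffix for mean-centered variables are assumptions; standardizing all variables beforehand would yield the standardized betas reported in chapter 4):

```python
# Minimal sketch: OLS estimates of regression models 1, 2 and 5 from table 3.4.
import pandas as pd
import statsmodels.formula.api as smf

def fit_regression_models(df: pd.DataFrame):
    """Return fitted models 1, 2 and 5 (main effects, interaction, full model)."""
    m1 = smf.ols("trust ~ label + picture + name", data=df).fit()
    m2 = smf.ols("trust ~ label + picture + name + picture:name", data=df).fit()
    m5 = smf.ols(
        "trust ~ label + picture + name + picture:name"
        " + involvement_c + label:involvement_c"
        " + skepticism_c + age_c + gender",
        data=df,
    ).fit()
    return m1, m2, m5  # each exposes .fvalue, .rsquared and .rsquared_adj
```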

3.4.6 Mean centering

The continuous variables are mean centered. Mean centering is done by subtracting the mean of each continuous variable from every individual observation of that variable. This is done to make the interactions in the model interpretable. Mean centering changes the interpretation of the intercept in the regression analysis.
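A minimal sketch of this step (column names are assumptions):

```python
# Minimal sketch: mean-center continuous variables by subtracting each
# variable's mean from every observation, keeping interactions interpretable.
import pandas as pd

def mean_center(df: pd.DataFrame, columns: list) -> pd.DataFrame:
    centered = df.copy()
    for col in columns:
        centered[col + "_c"] = df[col] - df[col].mean()
    return centered

# e.g. mean_center(data, ["product_involvement", "skepticism", "age"])
```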

3.4.7 Multicollinearity
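A common diagnostic for multicollinearity among a regression model’s predictors is the variance inflation factor (VIF). The sketch below assumes VIFs are the check used here; this is an assumption for illustration rather than a documented part of the analysis.

```python
# Hedged sketch: variance inflation factors for the model's predictors.
# Values well above ~10 are usually taken as a sign of problematic multicollinearity.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(predictors: pd.DataFrame) -> pd.Series:
    """Return the VIF per predictor column."""
    X = sm.add_constant(predictors)
    return pd.Series({col: variance_inflation_factor(X.values, i)
                      for i, col in enumerate(X.columns) if col != "const"})
```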


3.4.8 Mediation analysis

Since in this study we want to acquire insights into the role of the mediator variable likability, which may partially explain the relationship between the profile picture and the reviewer’s trustworthiness, a mediation analysis is performed. First, the book on mediation models by Andrew F. Hayes was consulted (Hayes, 2013). The tool of Hayes cannot be used because none of its models represent the conceptual model in this study; as a consequence, we are unable to estimate the full mediation model in one step. However, the method for mediation suggested by Baron & Kenny (1986) allows us to obtain insights into the relationships between the profile picture, likability and the reviewer’s trustworthiness. There are four steps, including three regressions, involved in the Baron and Kenny procedure:

• Step 1 (path c) estimates a model for the effect of a profile picture on the reviewer’s trustworthiness. This model is equivalent to model 5 in the regression analysis; however, model 5 is estimated without bootstrapping (see the next paragraph).

• Step 2 (path a) estimates a model for the effect of a profile picture on the mediation variable likability.

• Step 3 (path b) estimates a model for the effect of likability on the reviewer’s trustworthiness.

• Step 4 (path c’): full mediation would occur if the effect of a profile picture on the reviewer’s trustworthiness disappears; for the expected partial mediation, the effect of a profile picture on the reviewer’s trustworthiness should become weaker but remain present.

The significance of the models is tested by bootstrapping. Bootstrapping is a re-sampling procedure (Byrne, Gustafsson, & Martenson, 2002): multiple sub-samples are drawn randomly, with replacement, from the original sample. The default setting in SPSS uses 1000 sub-samples for the bootstrapping procedure.
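A minimal sketch of the Baron & Kenny steps with a bootstrapped indirect effect (simplified to the three key variables, whereas the thesis models also include the control variables; column names are assumptions and the thesis itself uses SPSS with 1000 bootstrap samples):

```python
# Minimal sketch: Baron & Kenny paths (c, a, b, c') for
# profile picture -> likability -> reviewer's trustworthiness,
# plus a bootstrap confidence interval for the indirect effect a*b.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def mediation_paths(df: pd.DataFrame) -> dict:
    c = smf.ols("trust ~ picture", data=df).fit().params["picture"]
    a = smf.ols("likability ~ picture", data=df).fit().params["picture"]
    full = smf.ols("trust ~ picture + likability", data=df).fit()
    return {"c": c, "a": a, "b": full.params["likability"],
            "c_prime": full.params["picture"]}

def bootstrap_indirect_effect(df: pd.DataFrame, n_boot: int = 1000, seed: int = 0):
    """Resample rows with replacement and collect a*b; return a 95% CI."""
    rng = np.random.default_rng(seed)
    effects = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(df), size=len(df))
        paths = mediation_paths(df.iloc[idx])
        effects.append(paths["a"] * paths["b"])
    return np.percentile(effects, [2.5, 97.5])
```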

3.4.9 Results interpretation/manipulation check


reviewer not as an expert due to such a reviewer label, the mean would be expected to be low and to differ significantly from 3.5. Vice versa, if participants perceive the reviewer as an expert due to the reviewer label, the mean would be expected to be high and to differ significantly from 3.5. The results of the one-sample t-test can be found in appendix E. The observed mean value for the interpretation check is 4.37, which differs significantly from 3.5 (P < .01). The participants thus indicated that when the reviewer label used in the survey is attached to a review, they perceive the reviewer as an expert.
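A minimal sketch of this interpretation check (assuming the label question’s scores are available as a pandas Series; the SPSS one-sample t-test in appendix E is the analysis actually used):

```python
# Minimal sketch: one-sample t-test of the label question against the
# reference value 3.5 used in this chapter.
import pandas as pd
from scipy import stats

def interpretation_check(scores: pd.Series, reference: float = 3.5):
    """Return the observed mean, t-statistic and p-value."""
    t_stat, p_value = stats.ttest_1samp(scores.dropna(), popmean=reference)
    return scores.mean(), t_stat, p_value
```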


4. RESULTS

In this chapter, the results of the data analyses are discussed. Complete overviews of all performed analyses can be found in appendices D through F.

4.1 Average scores per condition and experimental levels

This paragraph presents in Table 4.1 an overview of the average scores on the dependent variable reviewer’s trustworthiness for each of the eight conditions. To acquire insights into the effect of the experimental variables on the mediator likability, this variable is also included as a dependent variable in the analyses. Table 4.2 shows the total average of the reviewer’s trustworthiness and likability for each level of the experimental variables.

The average scores per condition range from a minimum of 4.67 in condition eight to a maximum of 5.04 in condition six. There are visible differences across the conditions. However, these differences are less clear between the total averages for the levels of the experimental variables, which all score approximately 4.8 on average. The experimental variable with the largest difference between its total average scores is the type of display of the reviewer’s name.

                      No profile picture of reviewer                      Profile picture of reviewer
                      System-generated      Reviewer's                    System-generated        Reviewer's
                      username              full name                     username                full name
No reviewer label     Condition 1           Condition 2                   Condition 5             Condition 7
                      RT: 4.74              RT: 4.86                      RT: 4.87, LA: 4.27      RT: 4.91, LA: 4.15
Reviewer label        Condition 3           Condition 4                   Condition 6             Condition 8
                      RT: 4.89              RT: 4.75                      RT: 5.04, LA: 4.32      RT: 4.67, LA: 4.42

Table 4.1 Average scores on reviewer’s trustworthiness (RT) and likability (LA) per condition

                                        Total average
No reviewer label                       4.84
Reviewer label                          4.84
No profile picture of reviewer          4.81
Profile picture of reviewer             4.87
System-generated username               4.88
Reviewer's full name                    4.80


4.2 Experimental variables on reviewer’s trustworthiness

To see whether the information shown about the reviewer leads to any significant differences in the reviewer’s trustworthiness, an ANOVA is conducted. The extended output of these analyses can be found in appendix D.

4.2.1 Differences in mean values per condition

The first two ANOVAs test whether there are any differences between the eight conditions in the mean values of the variables reviewer’s trustworthiness and likability. Figure 4.2.1 presents a bar chart of the mean value (Y-axis) per condition (X-axis) for both reviewer’s trustworthiness and likability.

Figure 4.2.1 Bar chart of mean value per condition for the mediator and dependent variable

Looking at the figure above, the means are spread quite evenly across the eight conditions for both reviewer’s trustworthiness and likability. Although the reviewer’s trustworthiness scores somewhat higher than likability, the two variables seem to follow the same pattern.


at the bar chart in figure 4.2, this is a logical result, since the mean values for all the conditions are very similar to each other.

4.2.2 Differences between effects of the experimental variables

The third ANOVA tests for differences in the mean values among the experimental variables. This three-way ANOVA, with an interaction effect between profile picture and reviewer’s name, turns out to be insignificant (F-value = .358). Figure 4.2.2 shows the relation of the interaction effect.

Figure 4.2.2 Line chart of mean value likability for the interaction effect

The figure above shows that the mean value of the reviewer’s trustworthiness stays rather constant between the conditions with and without a profile picture when the reviewer’s full name is shown. Almost the same mean value for the reviewer’s trustworthiness holds for a username without a profile picture. However, an interaction effect seems to exist for the combination of a profile picture and a username.

The same analysis is performed with likability as the dependent variable. The model is insignificant, but with a higher F-value (.138). The experimental variable profile picture turns out to be highly significant (F = 6.317). This effect is visible in figure 4.2.3.


Figure 4.2.3 Line charts of mean values likability for effect profile picture


4.3 Regression analysis conceptual model without ‘likability’

In this paragraph, the results of the regression analyses are presented. These analyses are based on the regression models presented in table 3.4. The extended outputs can be found in appendix F. Table 4.3, presented below, shows the effect sizes as standardized betas for all explanatory variables.

                                          Model 1    Model 2    Model 3    Model 4    Model 5
Constant                                  .016***    -.023***   4.828***   4.734***   4.713***
Reviewer label                            -.008      -.006      -.008      .000       -.001
Profile picture                           .065       .081       .070       -.071      .067
Reviewer's name                           -.088      -.005      -.005      -.021      -0.16
Profile picture * Reviewer's name                    -.076      -.072      -.069      -.071
Product involvement                                             1.596                 .128
Product involvement * Label                                     -.042                 -.081
Skepticism towards reviewer's identity                                     -.345***   -.340***
Age                                                                        -.053      -.053
Gender                                                                     .034       .041
R2                                        .004       .006       .019       .129       .137
Adjusted R2                               -.008      -.010      -.005      .104       .106
F-Value                                   .321       .358       .806       5.290***   4.384***

Table 4.3 Results of the regression models on the DV reviewer’s trustworthiness

***p-value < .01, **p-value < .05, *p-value < .10

4.3.1 Results model 1

In the first model it becomes clear that the main effects of the experimental variables on the dependent variable reviewer’s trustworthiness are not significant. The overall model is not significant, with an F-value of .321. The R2 of .0045 shows that the first model explains 0.4% of the variance in the reviewer’s trustworthiness. Although a negative R2 is not possible, the adjusted R2 is -.008 because of the penalty for adding variables that do not contribute to explaining variance in the reviewer’s trustworthiness.
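As a worked check of this penalty (assuming the usual adjusted R2 formula with n = 259 respondents and k = 3 predictors, and the rounded R2 = .004 from table 4.3):

\[
R^2_{\text{adj}} = 1 - (1 - R^2)\,\frac{n-1}{n-k-1} = 1 - (1 - .004)\cdot\frac{258}{255} \approx -.008
\]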


4.3.2 Results model 2

The second model, in which the interaction effect between a profile picture and the reviewer’s name is included, is not significant. The F-value is .358 and the model explains 0.6% of the variance. After penalization, the adjusted R2 decreases to -.010. This indicates that the second model, in which the interaction effect is included, is not better than the first model.

4.3.3 Results model 3

In the third model, the moderator product involvement on the experimental variable label is included. The model is still insignificant (F-value = .809). The constant increases to 4.828. The R2 slightly increases to .019, with a corresponding adjusted R2 of -.005.

4.3.4 Results model 4

The fourth model, with an F-value of 5.290, is significant. In this model the control variables are added to the main effects and the interaction effect of the profile picture and the reviewer’s name. The reason for this significance is the control variable skepticism towards the reviewer’s identity, which is highly significant (P < .01) with a standardized beta of -.345. This variable also increases the R2 to .129 and, subsequently, the adjusted R2 to .104.

4.3.5 Results model 5

Model five is the full model, apart from the possible mediation effect of likability. This model is also significant, with an F-value of 4.384. Again, the model is significant because of the inclusion of the control variable skepticism towards the reviewer’s identity. In this model the standardized beta of skepticism towards the reviewer’s identity is -.340, meaning that its effect on the reviewer’s trustworthiness is slightly weaker than in the fourth model, which excludes the moderator product involvement. As more explanatory variables are added, the R2 increases to .137; the adjusted R2 also improves to .106.

4.3.6 Mediation analysis of likability


                                          Model 1 (DV: RT)    Model 2 (DV: LA)    Model 3 (DV: RT)
Constant                                  4.713***            4.141***            4.781***
Reviewer label                            -.002                                   -.025
Profile picture                           .120 (Path c)       .269** (Path a)     .056 (Path c')
Reviewer's name                           -.028                                   -.001
Profile picture * Reviewer's name         -.142                                   -.152
Product involvement                       .112                                    .101
Product involvement * Label               -.100                                   -.118
Reviewer's likability                                                             .291*** (Path b)
Skepticism towards reviewer's identity    -.294***                                -.281***
Age                                       -.008                                   -.008
Gender                                    .075                                    .052
R2                                        .137                .024                .213
Adjusted R2                               .106                .020                .181
F-Value                                   4.384***            6.288**             6.682***

Table 4.3.5 Results of the regression models for the mediation analysis

***p-value < .01, **p-value < .05, *p-value < .10

In the first model, the effect of the profile picture on the reviewer's trustworthiness (path c, b = .120) is not significant (P > .05). According to Baron & Kenny (1986), the mediation analysis would stop here. However, based on our theory in chapter two, we can continue the analysis (Shrout & Bolger, 2002). In the second model, the effect of the profile picture on likability (path a, b = .269) is significant (P < .05). In the third model, the effect of likability on the reviewer's trustworthiness (path b, b = .291) is significant (P < .01). This model also shows that the b of the profile picture decreases from .120 (path c) to .056 (path c') when likability is included as an independent variable in the model. Figure 4.3.5 illustrates these findings:


Figure 4.3.5 Beta coefficients for partial mediation
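To make the path logic concrete, here is a minimal sketch of how paths c, a, b and c' could be estimated with ordinary least squares. It is illustrative only: it is not the procedure or software used in this thesis, it omits the other experimental variables for brevity, and the variable names are hypothetical placeholders for the study's data.

```python
# Illustrative sketch of the Baron & Kenny / Shrout & Bolger mediation steps.
# Variable names (trust, likability, picture, skepticism, age, gender) are
# hypothetical placeholders; picture is assumed to be a 0/1 dummy
# (1 = profile picture shown).
import pandas as pd
import statsmodels.formula.api as smf

def mediation_paths(df: pd.DataFrame) -> dict:
    controls = "+ skepticism + age + gender"

    # Path c: total effect of the profile picture on the reviewer's trustworthiness.
    m_c = smf.ols(f"trust ~ picture {controls}", data=df).fit()

    # Path a: effect of the profile picture on the mediator (likability).
    m_a = smf.ols("likability ~ picture", data=df).fit()

    # Paths b and c': mediator and predictor together on trustworthiness.
    m_bc = smf.ols(f"trust ~ picture + likability {controls}", data=df).fit()

    return {
        "c": m_c.params["picture"],         # total effect
        "a": m_a.params["picture"],         # effect on the mediator
        "b": m_bc.params["likability"],     # effect of the mediator
        "c_prime": m_bc.params["picture"],  # direct effect with mediator included
    }
```

In such a sketch, partial mediation would show up exactly as in the figure above: paths a and b significant, and c' noticeably smaller than c.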

4.4 Results of the hypothesis table

Hypothesis and outcome (accepted / rejected):

1. The presence of a reviewer label in an OCR will increase the reviewer's trustworthiness.
   Rejected (P > .05)

2. The presence of a profile picture will increase the reviewer's trustworthiness.
   Rejected (P > .05)

3. The presence of a profile picture will increase the reviewer's likability.
   Accepted (P < .01)

4. The presence of a reviewer's full name instead of a system-generated username will increase the reviewer's trustworthiness.
   Rejected (P > .05)

5. The presence of a reviewer's full name versus a system-generated username strengthens the positive effect of a profile picture on the reviewer's trustworthiness.
   Rejected (P > .05)

6. An increase in the reviewer's likability will increase the reviewer's trustworthiness.
   Accepted (P < .01)

7. The effect of the presence of a profile picture on the reviewer's trustworthiness is partially mediated by the reviewer's likability.
   Partially accepted (based on Shrout & Bolger, 2002): the effect of X on Y is not significant (P > .05), but b decreased when the mediator was added.

8. The positive effect of the presence of a reviewer label on the reviewer's trustworthiness is stronger if the reader of the review is more involved with the product.
   Rejected (P > .05)

9. An increase in skepticism towards the identity of the reviewer will decrease the amount of trustworthiness in the reviewer.
   Accepted (P < .01)


The ANOVA results show no significant effects (P > .05) of the experimental variables reviewer label, profile picture, and type of display of the reviewer's name on the reviewer's trustworthiness. This means that there is no support for hypotheses 1, 2, and 4.

The ANOVA is significant (P < .05) for the experimental variable profile picture when the dependent variable is the reviewer's likability. The b for this effect from the mediation analysis is .269. This indicates that there is support for hypothesis 3.
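For completeness, a between-subjects ANOVA of this kind could be specified as sketched below. Again, this is only an illustration under assumed, hypothetical column names, not the analysis files of this study.

```python
# Minimal sketch of a between-subjects ANOVA on the reviewer's trustworthiness,
# with the three manipulated factors as categorical predictors.
# Column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def anova_table(df: pd.DataFrame) -> pd.DataFrame:
    model = smf.ols(
        "trust ~ C(reviewer_label) + C(profile_picture) + C(name_display)",
        data=df,
    ).fit()
    return sm.stats.anova_lm(model, typ=2)  # Type II sums of squares
```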

None of the regression models shows a significant (P < .05) interaction effect between the type of display of the reviewer's name and the profile picture. Hypothesis 5 is therefore rejected.

The mediation analysis reveals a significant (P < .01) effect of the reviewer's likability on the reviewer's trustworthiness (b = .291), supporting hypothesis 6. Furthermore, this analysis shows a decrease in the b of the profile picture from .120 to .056 when the mediator variable reviewer's likability is added to the model. Hypothesis 7 is therefore partially accepted.


5. CONCLUSIONS & RECOMMENDATIONS

This chapter starts with answering the main research question in an overall conclusion, based on the results. Subsequently, the answers to the partial research questions are given and discussed. After that, managerial implications are presented. The boundaries of the research are then discussed in the limitations. The final paragraph proposes directions for future research.

5.1 Conclusion

The goal of this research was to gain insight into the potential relationships of a reviewer label, a profile picture, and the type of display of the reviewer's name in an OCR with the reviewer's trustworthiness. The results show that, in this research, this kind of information about a reviewer in an OCR has no direct effect on the reviewer's trustworthiness. However, the study shows that the profile picture does affect the reviewer's likability, and that an increase in the reviewer's likability increases the reviewer's trustworthiness.

5.2 Answering and discussing the partial research questions

In this research, no effect has been found of the presence or absence of a reviewer label on the reviewer's trustworthiness. According to this study, this implies that adding such a reviewer label to a consumer review does not lead to increased trust in the reviewer. The results also show that this effect does not differ depending on the participant's involvement with the product. This finding suggests that labels in online reviews do not have the same effect on trust in the reviewer as they do on trust in food products (e.g. Bernard et al., 2019). A reason for this could be that the use of labels on food products is well known to consumers, while the use of labels in online reviews is quite a new phenomenon. It could also be that trust in products differs too much from trust in a person.


For the profile picture, an effect was found on the reviewer's likability instead of the reviewer's trustworthiness. Thus, adding a profile picture to the online review increases the reviewer's likability. Subsequently, an increase in likability increases the reviewer's trustworthiness. The effect of a profile picture on likability is in line with the theory about processing fluency (Lee & Labroo, 2004). According to this theory, the profile picture is easy to process and may therefore account for a positive evaluation of the reviewer's likability.

The research did not find a significant effect of the type of display of the reviewer's name on the reviewer's trustworthiness. This implies that it does not matter for the amount of trust in the reviewer whether a system-generated username or the reviewer's full name is stated in the OCR. Furthermore, the study does not find an interaction effect between the type of display of the reviewer's name and a profile picture. The absence of a difference between a system-generated username and the reviewer's full name contradicts the findings of Uslaner (2004) and Mesch (2012), who did find that real names are perceived as more trustworthy online.

Apart from information about the reviewer in the OCR that could influence the reviewer's trustworthiness, this study also looks at other variables that could directly influence the amount of trust in a reviewer. The reviewer's trustworthiness depended neither on whether the participant was female or male, nor on their age. However, significant results were found for the variable skepticism towards the reviewer's identity: participants who were skeptical towards the identity of the reviewer had considerably lower trust in the reviewer. This is a rather interesting finding, because when information was added to the review that disclosed the reviewer's identity (e.g. the profile picture or the reviewer's name), trust in the reviewer did not increase. A possible explanation could be that the information about the reviewer shown through the explanatory variables did not disclose enough about the reviewer's identity. Another explanation could be that participants perceived the experimental cues showing information about the reviewer's identity as not real. This would be in line with the theory about fake eWOM, in which people are concerned about the authenticity of the reviewer (e.g. Zhang, Ko, & Carpenter, 2016).

5.3 Managerial and academic implications


Since a profile picture increases the reviewer's likability, which in turn increases the reviewer's trustworthiness, managers could encourage reviewers to add a profile picture to their review. They could do this either by promoting the creation of a profile on the (review) website or by giving reviewers the possibility to link with their social media profiles. Furthermore, while this research does not find a significant effect of a reviewer label on the reviewer's trustworthiness, it could help companies in deciding whether to attach such reviewer labels to a review. A reviewer label could also have a negative effect on the reviewer, especially when it is attached without valid evidence for the claim or a clear explanation.

From an academic perspective, this research helps to fill the theoretical gap concerning labels attached to reviewers. Besides that, it proposes a new model to assess the effects of different peripheral cues shown in a review on the reviewer's trustworthiness, and it discusses the role of the reviewer's likability.

5.4 Limitations

While the results of this study show no direct effects of the experimental variables showing information about the reviewer on the dependent variable, the reviewer's trustworthiness, it should be acknowledged that this conclusion may depend on how the experimental variables were constructed. For example, the profile picture has specific components (e.g. size or facial expression). Furthermore, marketing communication literature teaches us that factors such as attractiveness could influence the trustworthiness of the source (Fennis & Stroebe, 2015): people who are perceived as attractive are also seen as more intelligent and are trusted more. Eagly et al. (1991) describe this as the 'halo effect', which can be understood as 'what is beautiful is good'. Also, a sense of similarity to the reviewer could differ per type of experimental variable. Even before consumers consulted online reviews, WOM communication was perceived as much more efficient and effective between individuals with shared backgrounds (Gilly et al., 1998). Feeling similar to someone is a factor that drives trust not only in WOM, but also in eWOM (Racherla et al., 2012).

References
