Social desirability bias in online review systems

T.H. Drexhage

Student number: 5922178

University: Universiteit van Amsterdam (UvA)
Master: Managerial Economics

Supervisor: dr. A.S. (Adam) Booij

ECTS: 15

Declaration of originality

This document is written by Student Toon Drexhage who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Abstract

Ratings and reviews represent an important measure of quality on P2P and P2C platforms, but they have proven to be overwhelmingly positive and may not represent a truthful measure of performance. This paper explores the variables that previous research has found to influence review and rating outcomes and proposes one additional, not yet identified, variable. We investigate a sample of reviews and ratings on Google and Facebook for 499 restaurants in Amsterdam and find evidence suggesting that reviewers tend to review in a socially desirable manner, which positively biases rating and review outcomes. The magnitude of this bias is determined by the ease with which a review can be linked to a reviewer's personal online profile.


1. Introduction

“Like many transactions in the online economy, ‘perfect’ has become the default rideshare rating. The majority of customers give 5 stars across the board, reserving one-star scores for only the most egregious experiences. There’s hardly any middle ground,” reads a quote in Wired magazine regarding the rating system of Uber (Kat Kane, Wired, 2015).

As the sharing economy grows, so do the peer-to-peer (hereafter: "P2P") companies and platforms affiliated with it (Zervas et al., 2015). Companies such as AirBnB, Uber, and eBay have gained popularity over the past years, and the sharing economy is expected to account for a considerable part of the entire economy in the near future (Matzler et al., 2015; Schor, 2016).

One of the challenges accompanying this growth is how to determine the trustworthiness and quality of the near limitless number of potential suppliers. In a traditional market the importance of signaling quality and building a consumer relationship to improve sales is universally recognized and studied. Earlier studies show that consumers' quality predictions and consuming behavior are heavily influenced by price, advertising campaigns, and brand name (de Langhe et al., 2015; Dellarocas, 2003). Arguably, small or private sellers operating on P2P platforms do not have the budget to establish a brand name or set up advertising campaigns, or they do not operate on a scale at which this is cost-effective.

It is on these grounds that P2P platforms have adapted and developed a way for sellers to communicate a measure of quality and establish a reputation for their good or service. In the majority of cases this is done through a system that allows consumers to leave a rating and an accompanying review after consuming a good or service. The aggregated average rating is shown on the purchasing page as a measure of quality, with the reviews serving as an additional reference. These systems provide "a viable mechanism for fostering cooperation among strangers in such settings by ensuring that the behavior of a trader toward any other trader becomes publicly known and may, therefore, affect the behavior of the entire community toward that trader in the future. Knowing this, traders have an incentive to behave well toward each other, even if their relationship is a one-time deal." (Dellarocas, 2003). Besides forming an incentive for traders to behave in a socially desirable manner, these rating and review mechanisms also provide consumers with a measure of quality that potentially eliminates information asymmetry between sellers and buyers (Malbon, 2011).

Even though there are distinct differences, these review systems are often compared with "old-fashioned" word of mouth, also known as WOM (Schlosser, 2011; Gupta et al., 2010; Dellarocas, 2003; Duan et al., 2008). Word of mouth is a system in which two non-financially motivated parties verbally exchange information with each other in real time and space (Schlosser, 2011). WOM is often identified as an important influencer of consumer behavior (Gupta et al., 2010).

The most noticeable difference between traditional WOM and online reviews (also referred to as e-WOM) is that online reviews transcend traditional boundaries. A single person's review is no longer bounded by time and social ties (Duan et al., 2008), so a review may reach a much larger audience. On the other hand, this could potentially decrease the value traditionally attributed to word of mouth, as credibility is often determined by the relationship with the reviewer, who, in the case of online reviews, is mostly unknown (Schlosser, 2010; Wei et al., 2014). The question arises how much value prospective consumers attribute to these online reviews and to what extent they influence the eventual consuming decision.

Other criticism of these review systems comes from numerous empirical papers that have studied the rating distributions that arise on these platforms. They all find that ratings and reviews given by consumers are overwhelmingly positive (Zervas et al., 2015; Muchnik et al., 2013; Hu et al., 2009; Chevalier et al., 2006). Given the assumption that preferences are heterogeneous and the fact that not all products and services can yield a utility that is above average, these results suggest that the review systems currently in place do not represent a truthful measure of quality or performance. Research has identified multiple variables of influence to explain the skewed distribution of reviews on these types of P2P platforms.

This paper gives insight into which variables influence the outcome of social review and rating systems and tests whether a not yet identified variable, social image concerns, influences the review outcome.

The variables identified by previous research can be coarsely divided into four groups: ex ante selection, ex post influences, manipulation, and reciprocity or fear of retaliation. Ex ante selection states that consumers only consume products and services of which their expected utility is high, creating a selection effect which positively biases the consumer group and therefore the review outcomes (Chevalier & Mayzlin, 2006; Hu et al., 2009). Ex post influences state that after consuming a good or service reviewers are influenced by previous reviews (Salganik et al., 2008; Salganik & Dodds, 2006; Muchnik et al., 2013; Nan Hu et al., 2006), or that ex post selection occurs in which only a certain group of reviewers is willing to accept the cost of leaving a review (Nan Hu et al., 2006). Manipulation states that producers or sellers influence the review outcome by creating false reviews (Malbon, 2013; Hu et al., 2012; Wessel et al., 2015). Finally, there is reciprocity or fear of retaliation, which is mostly reserved for platforms with two-sided review systems; the way these two-sided review systems are set up may result in strategic manipulation and gaming (Klein et al., 2006; Zervas et al., 2015; Dellarocas & Wood, 2008).

This paper introduces and tests another variable that influences ex post review outcomes. This variable is driven by social image concerns: reviewers will review in a way that is socially desirable by signaling socially desirable personality traits, which creates a positive bias, as reviewers do not want to be viewed as overcritical or complaining. This influence is arguably bigger when the review is easily linked to the reviewer's online persona. To test this, a sample of reviews for 499 restaurants in Amsterdam is compared on Facebook and Google. These platforms are comparable in users and review systems but differ in the ease with which a review can be coupled to the reviewer's online profile.

Suggestive evidence is found that points to social image concerns as a variable of influence on review outcomes. To fully establish the effect social image concerns have on review systems and to prove a causal relation, more research is needed. This paper aims to provide additional insight into the variables of influence on review systems and contributes to a better understanding of what influences reviewers in their review behavior.

2. Rating systems in theory

This chapter provides an overview of the relevant literature regarding the effect of WOM and e-WOM on purchase decisions and elaborates on the variables of influence on online review/rating systems. In addition, this chapter provides a theoretical base for the hypothesis "The easier a review can be linked to a person or a person's cultivated online profile, the bigger the positive social desirability bias of the review". The base for the main hypothesis is further explained in section 2.2.5. Sub-hypotheses are given in sections 2.3.1 through 2.3.4.

2.1 The influence of WOM and e-WOM on consuming decisions

To determine the value of online review and rating systems, the first objective is to determine if, and how, consumers use these systems to make purchase decisions. As mentioned in the introduction, the value attributed to word of mouth is often determined by contextual clues and the receiver's relationship to the reviewer (Dellarocas, 2003; Wei et al., 2014). In the case of e-WOM there is virtually no, or only a very superficial, relationship, which could potentially diminish the value a future consumer accredits to online reviews (Gupta & Harris, 2010).

This abatement in perceived value is not substantiated by empirical research, which finds that e-WOM recommendations do significantly influence consuming considerations and choices (Gupta et al., 2010; Zervas et al., 2015; Pantelidis, 2010). De Langhe et al. (2015) even go as far as to state: "They (consumers) place enormous weight on the average user rating as an indicator of objective quality compared to other cues. They also fail to moderate their reliance on the average user rating when sample size is insufficient." In addition, consumer ratings appear to be perceived as especially valuable when making consuming decisions in the service industry, due to its nature (Racherla & Friske, 2012; Wei et al., 2014).

Concluding, there is evidence that potential consumers use review systems to help make consuming decisions. If these systems are distorted and do not represent a reliable measure of performance or quality, this could result in consumers making suboptimal consuming decisions. Therefore, there is a need to determine which variables disturb these systems, so that they can eventually display a reliable measure of performance.

2.2 Mechanisms of influence on social review and rating systems

As described before, a number of empirical papers have analyzed the rating distribution on P2P and P2C (peer-to-company) platforms and find that the ratings and reviews given by consumers are overwhelmingly positive (Chevalier & Mayzlin, 2006; Hu et al., 2009; Zervas et al., 2015; Muchnik et al., 2013). One even finds that 99.1% of all reviews given on a certain platform are positive reviews (Dellarocas, 2003). Other reports also note that the reviews are not normally distributed but follow a J-curve, first described by Nan Hu et al. (2006): a very high fraction of very positive reviews and a lower fraction of very negative reviews is observed, with virtually no average reviews. A typical rating distribution on Facebook and Google is shown in figure 1 for three restaurants in Amsterdam.

Figure 1: J-curve. Rating distributions on Facebook and Google for three restaurants in Amsterdam (George Bistro, Portugalia, Bordewijk), 09-07-2017. The y-axis shows the fraction of ratings per star value (1 to 5), i.e. the number of ratings with that value divided by the total number of ratings.

Assuming preferences are heterogeneous, random, and independent, it is unlikely that this J-distribution or positive skewness is a truthful representation of performance. Nan Hu et al. (2009) show that a normal distribution arises for a certain set of products when no selection effects are at play. This suggests that the review mechanisms in use do not truthfully represent the quality of the good or service: not every product can yield a utility that is above average. On account of the observed J-curve, the information value that can be attributed to the often-shown aggregated average diminishes when that average is based on a multimodal distribution.

In the following parts of this section an overview is given of the variables to which previous research has attributed the skewed distribution and positive bias. They are coarsely divided into the following subgroups: ex ante selection, ex post influences, manipulation, and retaliation.

2.2.1 Ex ante selection

Consumers will only consume when, ex ante (before consuming), there is a high probability that they will gain a higher than average utility from consuming a good or service. Due to this ex ante selection, consumers have a higher probability of enjoying their product or service and will review accordingly, creating a positive bias (Chevalier & Mayzlin, 2006). Nan Hu et al. (2009) conclude the same: "…since only people with higher product valuations purchase a product, those with lower valuations are less likely to purchase the product, and they will not write a (negative) product review (purchasing bias). Purchasing bias causes the positive skewness in the distribution of product reviews and inflates the average."

In their research, Nan Hu et al. (2009) demonstrate in an experiment that the observed J-distribution disappears and a normal distribution arises when the same product is reviewed by an unbiased, random group of consumers, as opposed to the consumers that left a review on Amazon after purchasing the same product.

At first glance this outcome is a very strong argument for the case of ex ante selection, but it does not necessarily prove the ex ante hypothesis. Despite both papers claiming that ex ante selection is the reason for the bias, the selection could also happen ex post (after consuming), with only consumers that received a high utility from consuming the good feeling inclined to leave a review. A valuable lesson learned from this research is that, in these cases, preferences are normally distributed when there is no selection effect.

2.2.2 Ex post influences

Ex post influences can be divided into two subgroups. The first argues that a bias occurs because a reviewer is influenced by former reviews when reviewing him- or herself. The second finds that ex post selection effects occur: leaving a review is inherently costly in time and effort, only a small fraction of all consumers is willing to accept these costs and leave a review, and because of heterogeneity in the sample this ultimately biases the outcome.


We first consider ex post selection effects. Hu et al. (2006) introduce the "brag and moan" model, which argues that only consumers that received a very high (brag) or a very low (moan) utility from consuming a product will be willing to accept the cost of leaving a review. More than any other study, this model explains the often observed J-distribution instead of only the observed high fraction of positive outcomes. Having said this, they only prove that the multimodal distribution exists and do not present any empirical evidence to support their theory that it is due to brag-and-moan ex post selection. However, if we look at reviews as a public good we can argue the same: leaving a review is costly in time and money, and a reviewer will only leave a review if they receive a higher utility from doing so. Depending on their social preferences, this may only be the case when preventing someone from having a very bad experience (moan) or making sure someone has a very good one (brag). This may rationally make sense, but it is very speculative without supporting research.

Another explanation for the high number of extreme outcomes that involves ex post selection is given by Wei et al. (2014), who show that a review is viewed as more helpful by prospective consumers when it deviates far from the average rating. They do this by looking at reviews on Yelp, a platform that gives consumers the option to mark a review as helpful. The research finds that reviews that deviate far from the average receive more of these "helpful votes" than those around the average, suggesting that these reviews may contain added or surprising information. A reviewer who wants his review to be of value to future consumers, or simply wants his review to be read, might recognize this added value and may purposely leave a non-representative, highly deviated review.

The first subgroup of ex post influences is attributed to ex post herding, which holds that consumers who rate are biased by previous ratings. Muchnik et al. (2013) prove this by studying positive and negative votes on comments on news aggregation websites. They manipulate the first rating on a comment to be a positive or a negative vote. A random positive manipulation of a comment "…increases the likelihood of positive ratings by 32% and created accumulating positive herding that increased final ratings by 25% on average". This bias only exists if it is a positive bias; the authors conclude that "negative social influence is neutralized by crowd correction", meaning that a random negative manipulation will not lead to accumulating negative herding.


This result can be placed in the context of infectious popularity or a self-fulfilling prophecy. A self-fulfilling prophecy is "a false definition of the situation evoking a new behavior which makes the originally false conception come true" (Merton, 1948).

Salganik et al. (2008) find a related form of self-fulfilling prophecy: songs that the researchers initially falsely marked as popular in a laboratory experiment became popular over time. This suggests that the utility consumers gain from consuming a product is influenced by other consumers' appreciation of the product, whether or not this appreciation is correctly grounded.

Even though this result is observed, concluding that quality has nothing to do with review outcomes or popularity would be erroneous. Salganik et al. (2008) find that consumers are capable of determining real quality, as really "good" songs always become popular even if other songs are falsely marked as good.

Another way to interpret the self-fulfilling prophecy in the context of online reviews is that when a good or service gets a positive review, it is more likely to be consumed and reviewed again, creating a positive feedback loop (Ho-Dac et al., 2013; Godes & Mayzlin, 2004).

To sum this up, a self-fulfilling prophecy might help lower-quality goods accumulate good reviews when the initial reviews are positively manipulated. This conclusion gives sellers a strong incentive to manipulate initial reviews.

2.2.3 Manipulation

As discussed in section 2.1, ratings and reviews have a significant influence on consuming decisions, and in most cases the reviewer is a stranger to the prospective consumer (Schlosser, 2011). These factors may lead to sellers trying to take advantage of, or game, these review systems by manipulating reviews (Malbon, 2013; Hu et al., 2012; Hu et al., 2011; Wessel et al., 2015).

Manipulation can manifest itself in different ways. Sellers might incentivize consumers to leave a good review by offering free samples or paying a monetary fee (Malbon, 2013). Sellers may also simply buy reviews from "click farms". In the latter case a seller pays a fixed amount for a number of positive reviews, likes, or follows to establish a stronger online profile. These reviews, likes, or follows are given by fictitious online personas and cost anywhere from $22 for 100 to $172 for 1,000 reviews on sites like http://www.followersandlikes4u.com.

Multiple empirical papers prove the existence of manipulation of online reputation mechanisms and find manipulation numbers to be as high as 10.3% of all reviews (Hu et al., 2012; Hu et al., 2011; Wessel et al., 2015). Nan Hu et al. (2011) also conclude that, even when consumers are aware of the fact that these systems are manipulated, they are unable to sufficiently correct for this.

Putting together the manipulation and self-fulfilling prophecy findings, we can conclude that they offer a solid explanation for part of the positive bias found in online rating and review systems. But even with manipulation numbers as high as 10.3%, they do not fully explain the overwhelmingly positive reviews found in earlier research.

2.2.4 Retaliation and reciprocity

The last mechanism determined to be of influence on the outcome of e-WOM review systems is driven by fear of retaliation or reciprocity. This mechanism is exclusive to platforms that have two-sided review systems, such as AirBnB, Uber, and eBay. On these platforms the consumer does not only rate the seller; the ratings also go the other way, which can result in gaming and strategic manipulation of reviews (Klein et al., 2006; Zervas et al., 2015; Dellarocas & Wood, 2008), with consumers underreporting negative experiences or strategically posting them at the last minute to avoid retaliatory negative feedback. Positive feedback is given as soon as possible to give the other party sufficient time to behave reciprocally (Klein et al., 2006), eventually resulting in a positive bias. In the case of eBay the number of positive reviews far outnumbers the negative ones (99.1%).

2.2.5 Introducing social desirability

This paper proposes that, besides the previously described variables, there is another variable of influence, one driven by social image concerns: in P2P or P2C contact, users will review in a way they believe is socially desirable. Reviewers do this to artificially create a "better" image of themselves towards review recipients, prospective readers, and their peers.

The tendency to strategically cultivate a socially desirable reputation is found in multiple modes of scientific research. It was first researched in the context of socially desirable responding (hereafter: "SDR"), in which participants in surveys give answers that are socially desirable instead of truthful (Tourangeau et al., 2007; Fisher, 1993). Another form of strategic manipulation of content is found on social media platforms. Due to the structure, and the nature of the relationships established on social media, these platforms give users the opportunity to create an "ideal self" by strategically presenting information (Rui et al., 2013; Ellison et al., 2006; Counts & Stecher, 2009). Rating and review systems on social media platforms give users an extra means to signal positively regarded personality traits towards prospective readers and review receivers, as well as their own "friends" base. These rating and review systems arguably become less of a means to signal personality traits when it is difficult to couple them to the reviewer's online persona. Consequently, a social desirability bias mainly exists when a review is easily coupled to a user's online persona.

What is socially desirable behavior? Complaining is often seen as negative or antisocial behavior. Lott and Lott (1970) show that people who are disliked are described with words such as "complaining, narrow-minded, and overcritical", while people who are liked are described with words such as "broad-minded, forgiving, and generous", suggesting that we dislike people that complain a lot and like people that are generally positive and open-minded.

Reviewers arguably recognize this and will therefore leave a positively biased review or rating so as not to be seen as complaining or overcritical. This bias will be more pronounced when the review is easily linked to the reviewer's cultivated online persona.

The observation of very negative reviews can also be explained by this mechanism: only consumers who are very aggrieved by a good or service will not feel as if they are complaining or overcritical, but will feel justified in leaving a bad review. This closely relates to the "brag and moan" argument proposed by Nan Hu et al. (2006).

This results in the following hypothesis: “The easier a review can be linked to a person or a person’s cultivated online profile the higher the positive social desirability bias of the review.”

2.3 Sub Hypotheses

To test the main hypothesis, "The easier a review can be linked to a person or a person's cultivated online profile, the higher the positive social desirability bias of the review," this paper uses data gathered from the review systems of both Google and Facebook. Both have a review system that gives users the ability to rate and review small to larger-sized companies such as restaurants, shops, and even hospitals. However, the platforms have one key difference when it comes to how easily a review is linked to a reviewer's online persona or profile. This difference makes it possible to make a clear comparison between the two and determine the influence of the social desirability bias.

To be able to confirm the main hypothesis, the following sub-hypotheses will be tested:

2.3.1 Rating difference

First off, the social desirability bias states that reviewers will rate in a more socially desirable way if a review is more easily linked to their online persona or profile. Therefore a higher aggregated average rating is expected on Facebook, resulting in the hypothesis: "If a review is more easily linked to a reviewer's online profile, a higher aggregated average is observed."

2.3.2 Difference in restaurant rating variance

When the social desirability bias diminishes, a review or rating becomes a better representation of reality and is less driven by other reviewer influences. Closely relating to the argument of Muchnik et al. (2013) discussed in section 2.2.2, it is expected that reviewers are influenced by previous reviews and ratings. This influence is arguably larger when the social desirability bias is high and reviewers look to other reviews to determine what response is socially desirable. The influence of previous reviews results in a smaller variance in ratings when the social desirability bias is high, leading to the following hypothesis: "If a review or rating is more easily linked to a reviewer's online profile, a smaller spread in reviews is observed."

2.3.3 Peer knowledge

Arguably, the social desirability bias will diminish when the peer group of the reviewer is not aware of the quality of the reviewed restaurant, which makes the reviewer feel that he can be more honest. This gives the hypothesis: "If a review or rating is difficult to check by the reviewer's peers (such as friends of the reviewer), the social desirability bias diminishes, thus a lower rating and a higher variance is observed". This hypothesis is tested by controlling for location and review frequency and is further elaborated in section 3.2.

2.3.4 Reviews and rating dispersion

Socially desirable behavior can be signaled in two ways: either by giving a high rating, or by strategically giving a high rating with an accompanying review. When the written review is seen as a true measure of appreciation, the accompanying rating is the part in which a reviewer can signal positive personality traits. He can, for example, review somewhat critically and give a reasonably high rating alongside the review. In this manner the reviewer signals that he is generous, not overcritical, and forgiving. Therefore, when looking at reviews and ratings, a higher rating is expected on Facebook than on Google for a review that communicates the same amount of satisfaction. The refutable hypothesis is therefore: "When a review is more easily linked to a reviewer's online profile, the accompanying rating for a review is higher, resulting in a higher accompanying rating on Facebook than on Google".

The next section explains what data are collected and why. Subsequently, it gives a brief overview of the choices made in the selection process.

3. Data and design

3.1 Platforms

Facebook and Google are chosen as platforms of interest because they use very similar rating systems in which every rating or review is weighted the same, as opposed to other platforms that make use of "premium" reviewers or other forms of hierarchy. On most of these hierarchically structured platforms, reviewers that have written a lot of reviews get a higher weighting factor for their reviews. When looking at the ex ante selection argument described in section 2.2.1, this could pose a problem: a reviewer might have certain preferences that do not represent the opinion of the population, yet write a lot of reviews which are subsequently weighted more heavily. This could skew the distribution and inaccurately represent the opinion of the general population.

Another problem with research on platforms that use a hierarchical structure is that no insight is given into how reviewers and reviews are weighted, which makes it difficult to determine what the effect of these weighted reviews is.

Google and Facebook are very comparable in their review procedure: users can leave a review consisting of a star rating on a scale of one to five, where five indicates the best and one the worst product or service experience. On both Facebook and Google, the person who is about to review can see the aggregated average and the first two or three textual reviews, depending on screen size.

There is one distinct difference between the platforms, and this difference is the reason why data from Google and Facebook were chosen for this paper. When a reviewer leaves a review on Facebook, the reviewer's name is displayed in a list with all reviews, and clicking on the reviewer's name in this list redirects users to the reviewer's personal page. On Google, clicking on a user's name redirects to other reviews the person has given, but not to his personal page. An example of a reviewer's page on Google is included in the appendix. This difference in personal exposure is instrumental in testing the main hypothesis, as it means that a review on Facebook is more easily linked to a person's cultivated online page and persona, which, according to the hypothesis, would make the social desirability bias higher on Facebook.

There are some smaller differences between the platforms. On both platforms reviewers get the option to further explain their star rating by writing a short review; on Facebook this review has a minimum of 50 characters, whereas Google reviews have no minimum requirement. Google gives users the option to upload a picture with their review, Facebook does not. Another difference is that Facebook shows in one or two words what a star rating means when giving a rating. Even though these differences exist, I observe no large difference in review length between the two platforms and argue that these differences are not substantial in influencing reviewer behavior.

3.2 Datasets

The review and rating differences are analyzed by looking at reviews and ratings for restaurants in Amsterdam. Data have been gathered from the opening date of a restaurant up to March 2017. This means that for the same restaurant the same time period is analyzed on both platforms; thus data from both platforms capture the same exogenous shifts. The dataset contains all the reviews given to restaurants within the Amsterdam inner ring road, with the exception of Amsterdam Noord, on both Google and Facebook.

To gain a better understanding of how reviewers rate on both platforms, an in-depth analysis of the reviews and ratings of 35 randomly selected restaurants is performed (hereafter also referred to as the "sub-sample"), further discussed in section 3.2.2.

3.2.1 Main Dataset

The dataset consists of ratings and reviews for 499 restaurants and shows the aggregated average computed over nearly 70,000 individual ratings. The dataset also contains the number of ratings given, the number of 5-star ratings, the number of 1-star ratings, the variance of the ratings for each individual restaurant, and dummy variables for location and review frequency. The data contain all restaurants within the ring road of Amsterdam, with the exception of Amsterdam Noord. This means the data contain restaurants from all price classes and with a broad variety of quality and products, resulting in a heterogeneous sample.

However, not all 499 restaurants could be included in the analysis: some restaurants are not active on Facebook or have chosen not to offer the option to review and rate. Restaurants with reviews on only one of the two platforms were removed from the dataset. A second restriction is the number of reviews: only restaurants with at least 4 reviews were included. This potentially excludes small or new restaurants, but it also excludes volatile changes attributable to very few reviews. Taking the above criteria into account results in a dataset of 413 restaurants.
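As an illustration of these selection criteria only, the minimal sketch below applies them in Python with pandas. The file name and the column names (restaurant_id, platform, n_ratings) are hypothetical placeholders, not the thesis' own data layout.

```python
import pandas as pd

# Hypothetical input: one row per restaurant-platform pair, with the number of
# ratings on that platform; names are illustrative assumptions.
df = pd.read_csv("restaurants_amsterdam.csv")

# Keep only restaurants that are present on both platforms ...
platforms_per_restaurant = df.groupby("restaurant_id")["platform"].nunique()
on_both = platforms_per_restaurant[platforms_per_restaurant == 2].index
df = df[df["restaurant_id"].isin(on_both)]

# ... and that have at least 4 ratings on each platform.
enough = df.groupby("restaurant_id")["n_ratings"].min() >= 4
df = df[df["restaurant_id"].isin(enough[enough].index)]
```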

Before running tests or regressions on the data, two things have to be accounted for. First, the characteristics of the data (each restaurant is observed twice) call for clustered standard errors to account for within-cluster correlation of the error term. Second, the distribution has to be reviewed. As argued by Stevens (1984), outliers and influence points can have a sizable influence on regression results, so a regression can be considered sensitive to outliers and influence points in the dataset. Therefore, skew in the distribution of interval/ratio variables in a regression has to be minimized as much as possible. In addition, a linear regression assumes a linear relationship between the dependent and the independent variable, and this assumption is more likely to hold when both variables are approximately normally distributed. When the relationship between the variables can be considered reasonably linear, then according to Cohen (2003) the residuals are more likely to be homoscedastic and normally distributed.

To test for a normal distribution, a Shapiro-Wilk test is performed and the Q-Q plot is analyzed. The Shapiro-Wilk test gives enough evidence to reject the hypothesis that the data are normally distributed (p<0.01). In an effort to resolve this, a logarithmic and a square-root transformation were conducted, but these do not satisfactorily transform the data to a normal distribution.

When inspecting the histogram of the rating variable on both Google and Facebook, a long left tail is observed, as shown in figures 5 and 6 in the appendix. To account for this tail, the data are trimmed at 5% (at 3.6 and 4.1 respectively), resulting in an approximately normally distributed dataset. The same process is repeated for the individual variance per restaurant. Table 1 shows the descriptive statistics for the dataset and the constructed sample. Note that due to the trim there is a reduction in observations for the variables "Rating" and "Variance".
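A minimal sketch of the normality check, the attempted transformations, and the 5% trim, continuing the hypothetical dataframe from the previous sketch; the column avg_rating is an assumed name, and the cut-offs correspond to the values reported above (about 3.6 on Google and 4.1 on Facebook).

```python
import numpy as np
from scipy import stats

# Per-restaurant average ratings for one platform (hypothetical column name).
avg_rating = df.loc[df["platform"] == "google", "avg_rating"].to_numpy()

# Shapiro-Wilk test: a small p-value rejects the hypothesis of normality.
stat, p_value = stats.shapiro(avg_rating)

# Log and square-root transformations (tried but, per the text, not sufficient).
log_rating, sqrt_rating = np.log(avg_rating), np.sqrt(avg_rating)

# Trim the long left tail at the 5th percentile.
cutoff = np.quantile(avg_rating, 0.05)
trimmed = avg_rating[avg_rating >= cutoff]
```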

In order to run a regression, a dummy variable is created for Facebook, which is 1 if a review is given on Facebook and 0 if it is given on Google. This dummy variable is the independent variable indicating how easily a review is linked to a reviewer's personal page.

To test the hypothesis introduced in section 2.3.3, two dummy variables are created, one for location and one for review frequency. The location dummy is 1 if the restaurant is in a known touristic area of Amsterdam (Leidseplein, Damstraat, or Dam) and 0 otherwise; it is assumed that reviewers in these areas are mostly tourists. The assumption driving this dummy variable is that the peer group of these tourists does not know the reviewed restaurant, so the reviewer will review more honestly. The same is done for review frequency: if a restaurant has over 100 reviews, it is assumed that a reviewer's peer group knows about the restaurant or that this knowledge is easily obtained by reading other reviews, which makes a consumer's review more socially desirable. Therefore the dummy variable for frequency is 1 if a restaurant has over 100 ratings and 0 if it has fewer.
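The three dummies can be constructed as follows; again a sketch on the hypothetical dataframe, where the columns area and n_ratings are assumed names for the restaurant's neighbourhood and its number of ratings.

```python
TOURIST_AREAS = {"Leidseplein", "Damstraat", "Dam"}  # touristic areas named above

df["fb"] = (df["platform"] == "facebook").astype(int)        # 1 = Facebook, 0 = Google
df["location"] = df["area"].isin(TOURIST_AREAS).astype(int)  # 1 = touristic area
df["frequency"] = (df["n_ratings"] > 100).astype(int)        # 1 = more than 100 ratings
```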

Firstly, after trimming, the variables "rating" and "variance" meet the assumptions for a mean comparison test for paired data: the observations are independent of one another, and the dependent variables are now approximately normally distributed and do not contain outliers. This test gives insight into the influence of platform on rating outcome and restaurant rating variance. Then, to control for the added independent variables introduced above, the following regressions are executed.

Rating = β0 + β1·FB + β2·Location + β3·Frequency + β4·(FB × Location) + β5·(FB × Frequency) + ε

Variance = β0 + β1·FB + β2·Location + β3·Frequency + β4·(FB × Location) + β5·(FB × Frequency) + ε

In which FB × Frequency and FB × Location are the interaction variables between the Facebook variable and frequency, and between the Facebook variable and location. The regression is clustered at the restaurant level to account for within-cluster correlation of the error term. The results of these regressions are shown in tables 2 and 3.

The result of the mean comparison test for paired data is not reported, as it is the same as the result in the first column of tables 2 and 3.
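A minimal sketch of the estimation step, assuming the hypothetical dataframe and dummy columns built in the previous sketches: a paired t-test for the mean comparison and an OLS regression with standard errors clustered at the restaurant level, in the spirit of the specification above.

```python
from scipy import stats
import statsmodels.formula.api as smf

# Paired mean comparison: one Google and one Facebook average per restaurant.
paired = df.pivot(index="restaurant_id", columns="platform", values="avg_rating").dropna()
t_stat, p_value = stats.ttest_rel(paired["facebook"], paired["google"])

# OLS with interaction terms; standard errors clustered at the restaurant level.
model = smf.ols(
    "avg_rating ~ fb + location + frequency + fb:location + fb:frequency",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["restaurant_id"]})
print(result.summary())
```

The same call with the per-restaurant variance as the dependent variable would correspond to the second regression.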

3.2.2 Sub Sample

For the analysis of the sub-sample, reviews for 35 restaurants are grouped per category per restaurant, using the following categories: best ever; only good points; good and bad points; only bad points; worst ever. A review is included in the "best ever" group if the written review states it is the "best in Amsterdam", "best I ever had", etc., and in the "only good points" group when the reviewer does not mention any criticism. A review falls in the "good and bad" group if some form of criticism is reported, in the "bad" group when there is only criticism, and in the "worst ever" group if the reviewer states "worst I ever had" or a similar phrase. The written reviews are assumed to be a true measure of satisfaction, as they are less subjective than a rating: giving a rating of 3 stars may mean something entirely different to one reviewer than to another, whilst a written review is a more accurate tool to transfer a certain sentiment.

The accompanying ratings are noted per category, giving an average rating per category per restaurant. These averages are compared between platforms. This clustering creates a measure that allows comparing whether the same appreciation or satisfaction is rated with the same, a higher, or a lower grade on Facebook versus Google. The hypothesis states that reviewers rate in a socially desirable manner (positive, non-complaining), which would result in a higher average rating per group on Facebook than on Google. This difference is most likely expressed in the good/bad group, as in this group criticism is given but the accompanying rating can signal socially desirable behavior, such as not being complaining or overcritical.

The textual reviews have been categorized by one person. To test for internal consistency, the same categorization procedure was repeated by four other people for five restaurants out of the same subset. For this sample of five restaurants, the Cronbach's alpha coefficient is determined. For all restaurants in this sample of five, the coefficient is higher than the threshold value of 0.7, from which it is concluded that the categorization of the sub-sample is internally consistent.
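Cronbach's alpha can be computed directly from its definition. The sketch below, a minimal illustration rather than the thesis' own script, treats the coders as the "items" and the category codes of the doubly-coded reviews as observations.

```python
import numpy as np

def cronbach_alpha(codes: np.ndarray) -> float:
    """codes: 2-D array, rows = reviews, columns = coders (numeric category codes)."""
    k = codes.shape[1]                          # number of coders ("items")
    item_variances = codes.var(axis=0, ddof=1)  # variance of each coder's codes
    total_variance = codes.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 4 reviews coded by 5 people on the 5-point category scale.
example = np.array([[5, 5, 5, 4, 5],
                    [3, 3, 2, 3, 3],
                    [1, 1, 1, 1, 2],
                    [4, 4, 4, 4, 4]])
print(cronbach_alpha(example))
```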

To obtain results, a linear regression is run with the Facebook dummy as the independent variable, representing how easily a review is linked to a reviewer's personal page, and the average rating per group as the dependent variable. As discussed in the former section, the dependent variable is tested for a normal distribution using the Shapiro-Wilk test, and the regression uses standard errors clustered at the restaurant level to account for within-cluster correlation of the error term.

4. Results

4.1 Main dataset

              Google (all)                     Google (sample)                  Facebook (sample)
              Mean     S.D.        Obs.        Mean     S.D.        Obs.        Mean     S.D.        Obs.
Rating        4.231    0.3927678   499         4.305    0.2745      391         4.6228   0.2210      382
Variance      -        -           -           0.766    0.3894      371         0.5445   0.332       373
Total rat.    56.06    66.37901    499         60.48    70.71705    413         106.48   136.5576    413
Fract. 5      0.5489   0.1784617   499         0.5558   0.1642584   413         0.7437   0.1452114   413
Fract. 2-4    0.4003   0.1636582   499         0.3981   0.1523271   413         0.2260   0.1314502   413
Fract. 1      0.0507   0.0736545   499         0.0461   0.0622449   413         0.0304   0.0451176   413

Table 1. Descriptive statistics for the full dataset and the main sample; the variables Rating and Variance have been trimmed and therefore have fewer observations.


Table 1 shows the descriptive statistics for the dataset. The variable “Variance” displays the individual variance per restaurant; the fractions displayed in the table are the normalized ratings. So “fraction 5” displays the number of 5 star ratings divided by the total number of ratings given.

The dataset displays the same features as found in previous research: the reviews are overwhelmingly positive. Respectively 30.1% and 71.1% of restaurants on Google and Facebook have an average rating of 4.5 or higher. Over half of all ratings given on Google are 5-star ratings; on Facebook this number is as high as 74 percent. A lower share of one-star ratings is found (respectively 5% and 3%) than might have been expected from previous research, but in many cases a noticeable J-distribution is still observed.

Before starting to interpret the results, the homogeneity between the Facebook and Google users has to be determined. Therefore the correlation between the two datasets is reviewed; specifically, the correlation between the aggregated average ratings on both platforms is tested. Preferences are inherently heterogeneous, so if the preferences of the users on both platforms are correlated, the composition of users on both platforms must be comparable as well. A significant positive correlation coefficient of 0.523 (p<0.01) is found. This correlation coefficient is not sufficient to completely rule out any ex ante selection effects between platforms, but it suggests there are no sizable differences.
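Continuing the hypothetical paired table from the earlier estimation sketch, this check corresponds to a simple Pearson correlation of the per-restaurant averages on the two platforms.

```python
from scipy import stats

# Pearson correlation between a restaurant's average rating on Google and on Facebook.
r, p_value = stats.pearsonr(paired["google"], paired["facebook"])  # paper reports r = 0.523, p < 0.01
```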


                (1)         (2)         (3)         (4)         (5)         (6)         (7)
Facebook        0.317***    0.317***    0.315***    0.307***    0.333***    0.346***    0.336***
                (0.0179)    (0.0146)    (0.0146)    (0.0153)    (0.0153)    (0.0184)    (0.0185)
Location                                -0.153***   -0.199***                           -0.195***
                                        (0.0369)    (0.0468)                            (0.0477)
FB*Location                                         0.0999**                            0.107**
                                                    (0.0470)                            (0.0484)
Frequency                                                       -0.0897***  -0.0507     -0.0374
                                                                (0.0213)    (0.0378)    (0.0381)
FB*Freq.                                                                    -0.0613     -0.0716*
                                                                            (0.0412)    (0.0414)
Clustered                   X           X           X           X           X           X
Obs             773         773         773         773         773         773         773
Nrest           413         413         413         413         413         413         413

Note: standard errors are in parentheses and, in the columns marked "Clustered", clustered (robust) at the restaurant level; *p<0.10, **p<0.05, ***p<0.01.

Table 2. Regression results; dependent variable: rating.

                (1)         (2)         (3)         (4)         (5)         (6)         (7)
Facebook        -0.222***   -0.222***   -0.219***   -0.217***   -0.239***   -0.278***   -0.272***
                (0.0262)    (0.0248)    (0.0242)    (0.0254)    (0.0251)    (0.0304)    (0.0306)
Location                                0.196***    0.209***                            0.215***
                                        (0.0461)    (0.0577)                            (0.0582)
FB*Location                                         -0.0270                             -0.0524
                                                    (0.0809)                            (0.0837)
Frequency                                                       0.0935***   -0.0244     -0.0440
                                                                (0.0279)    (0.0468)    (0.0435)
FB*Freq.                                                                    0.1837***   0.1966***
                                                                            (0.0554)    (0.0535)
Clustered                   X           X           X           X           X           X
Obs             744         744         744         744         744         744         744
Nrest           413         413         413         413         413         413         413

Note: standard errors are in parentheses and, in the columns marked "Clustered", clustered (robust) at the restaurant level; *p<0.10, **p<0.05, ***p<0.01.

Table 3. Regression results; dependent variable: restaurant rating variance.

Interpreting the results in tables 2 and 3 leads to the following. Looking at the last column of the regression results, the Facebook variable is positive and significant (β1 = 0.336, p<0.01). This suggests that, even when controlling for the effects of frequency and location, Facebook has a higher average rating than Google, which supports the hypothesis proposed in section 2.3.1: "If a review is more easily linked to a reviewer's online profile, a higher aggregated average is observed."

The Facebook variable also has a significant effect on restaurant rating variance (β1 = -0.272, p<0.01), supporting the hypothesis in section 2.3.2: "If a review or rating is more easily linked to a reviewer's online profile, a smaller spread in reviews is observed."

Location has a negative and significant effect on the average rating (β2 = -0.195, p<0.01), which indicates that restaurants in touristic areas receive a lower rating on average than restaurants in the rest of the sample. In contrast, the interaction between Facebook and location has a significant positive effect on rating (β4 = 0.107, p<0.05) and a non-significant influence on the dependent variable variance, thereby increasing the difference between rating outcomes on Facebook and Google. This rejects the hypothesis proposed in section 2.3.3, as it was argued that the difference between the ratings on Facebook and Google would diminish when a rating or review is difficult to check by a reviewer's peers. An alternative explanation for the correlation found is that when reviews are difficult to check by a closely related third party, a review or rating becomes more of a means to game and falsely signal positive experiences to peers, which would increase the social desirability bias.

Review frequency has a non-significant effect on the average rating and on restaurant rating variance. In contrast, the interaction between Facebook and review frequency has a significant negative effect on rating (β5 = -0.0716, p<0.10) and a significant positive influence on the dependent variable variance (β5 = 0.197, p<0.01). This result again leads to rejecting the hypothesis proposed in section 2.3.3. The alternative explanation given above also works for this interaction variable: when the total number of reviews grows, a review is more easily checked against other reviews and therefore becomes less of a means to (falsely) signal personality traits.
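To give a rough sense of the magnitudes, the coefficients in the last column of table 2 imply, within the estimated model and holding everything else fixed, a Facebook-Google gap in the average rating of about β1 = 0.336 stars for a restaurant outside the touristic areas with at most 100 ratings, about β1 + β4 = 0.336 + 0.107 ≈ 0.44 stars for a restaurant in a touristic area, and about β1 + β5 = 0.336 - 0.072 ≈ 0.26 stars for a restaurant with more than 100 ratings.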

The conclusion that can be drawn from the first two results is that there is suggestive evidence not to reject the main hypothesis that a social desirability bias positively influences ratings and reviews. A significant increase in average ratings and a significant decrease in variance are observed when the ease with which a review is linked to a reviewer increases, suggesting that reviewers leave higher ratings, and that reviews are more influenced by previous reviews, when the reviewer is less anonymous.

Looking at the results for the interaction variables, more research is needed to conclusively state that the differences found in the sample are caused by a social desirability bias and whether there is a theoretical base for the alternative explanation given.

To obtain a better understanding of how consumers review restaurants, an additional test is performed on the reviews of 35 restaurants, assessing whether the textual reviews show a slightly more critical audience on Google or whether the same kind of criticism is rewarded with a lower star rating on Google. To perform this test, the written reviews are assumed to be a true measure of appreciation and, per group, the difference in aggregated average is compared between platforms.

4.2 Sub-sample

An overview of the sub-sample is presented in table 4. Even though the sub-sample consists of 35 restaurants, very few "worst" reviews are observed on both platforms, and the grade given for such a review is always a one. The grade given for a "best" review is (almost) always a five. Consequently, only the Good, Good/Bad, and Bad variables are normally distributed according to the Shapiro-Wilk test.

Sub-sample

              Google                          Facebook
              Mean     S.D.      Obs.         Mean     S.D.      Obs.
Best          4.946    0.183     33           5        0         29
Good          4.640    0.151     35           4.848    0.173     34
Good/Bad      3.585    0.519     33           3.814    0.916     23
Bad           1.503    0.489     28           1.595    0.580     17
Worst         1        0         11           1        0         8

Table 4. Descriptive statistics of the sub-sample.

To test the hypothesis proposed in section 2.3.4, a linear regression is run on the aggregated average rating per group. Because the same restaurant is observed twice, the standard errors are clustered at the restaurant level to control for within-cluster correlation of the error term. The outcome of the regression is presented in table 5.


Sub-sample

                             Only good     Good/bad      Bad
Facebook                     0.208***      0.229         0.0928
                             (0.0323)      (0.2111)      (0.167)
Clustered (restaurant nr.)   X             X             X
Obs                          69            56            45
Nrest                        35            35            35

Note: standard errors are in parentheses and clustered at the restaurant level; *p<0.10, **p<0.05, ***p<0.01.

Table 5. Regression results, sub-sample; dependent variable: average rating per category.

No regression is run for the highest and the lowest group (i.e., Best and Worst): looking at the summary statistics, these groups are very much alike across platforms and are not informative for the hypothesis. When a reviewer says it is the best or worst experience he or she has ever had, the accompanying rating is expected to be at the extremes (5 or 1) on both platforms, so no difference will be observed.

The first observation in the sub-sample does point towards accepting the hypothesis stated in section 2.3.4 that a similar textual review receives a higher rating when the review is more easily linked to the reviewer's personal profile. The Facebook variable is positive and significant (β = 0.208, p<0.01) for the "Only Good" subgroup, suggesting that reviewers on Facebook with the same product or service appreciation give a higher accompanying rating. Giving a higher rating for the same review signals socially desirable personality traits, such as being generous and not overcritical. This result thus supports the main hypothesis. In the other two groups no significant difference is observed. Even though the sign is as expected for these two groups, the standard errors are large; with a larger sample these would probably decrease and might make the difference for the Good/Bad and the Bad group significant. For now, no significant difference for these groups is observed, which rejects the hypothesis stated in section 2.3.4.

5. Discussion and conclusion

Even though the results in the previous chapter point to accepting the overall hypothesis that the easier a review is linked to a person's online profile, the higher the social desirability bias, no conclusive relation can be confirmed with the results obtained. There are a few alternative explanations that need to be ruled out by future research to completely isolate the effect of the social desirability bias.

Future research has to make sure the two sample groups are identical in order to rule out ex ante selection based on platform preferences of different consumers. One way to achieve this is by studying consumers that rate and review on both platforms; however, this might overcomplicate the collection of a sufficient amount of data. Another possibility is to conduct laboratory research in which the proposed variable can be manipulated, or to show that the variable has influence on a number of different platforms with various degrees of social exposure.

Another disturbing factor may be introduced by the minimum review length on Facebook. The minimum review length of 50 characters may prove to be a threshold that leads some prospective reviewers to only leave a rating, which could have two effects on the overall outcome of the star rating. Writing a short textual review might make a consumer reconsider his or her gained utility more than when just giving a rating, resulting in an overall lower average rating on Google. It could also point in the direction of an ex ante selection of reviewers, in which reviewers that are more inclined to write a short review of just a few words use Google more than Facebook, introducing heterogeneity. Though it is a possibility, no studies in the literature show that reviewers report a lower appreciation when a textual explanation is required.

Throughout this paper one important assumption has been made that has not been addressed explicitly but may be of considerable influence: consumers and reviewers have been assumed to be ignorant of the review distributions on the different platforms. However, consciously or subconsciously, they may correct for the positive bias observed on different platforms, and also for the bias between platforms, essentially implying that a higher star rating on Facebook is "worth less", or indicates a lower quality, than on Google. This could also be interpreted as a form of platform-wide herding behavior in which one platform started off with higher star ratings than the other, tying in to the research of Muchnik et al. (2013). This could be checked via a survey to see whether consumers are actually able to distinguish between ratings given on one platform as opposed to the other.

Another differentiating factor between reviews on Google and Facebook that has not been addressed appropriately is the ability to react to reviews. Facebook gives users and companies the ability to react to given reviews; thus, by leaving a moderate or bad review, the reviewer might expect some sort of retaliation in the form of a reprimand by the company or another user. Though in collecting the data we barely observed cases in which this happened, that does not exclude the potential influence it has on reviewers. Former research does suggest that the effect of possible retaliation is small (Dellarocas, 2008), so no large impact is expected.

Concluding, there is suggestive evidence that socially desirable reviewing is a variable of influence on the outcome of rating and review systems, though to establish a causal relation more research is necessary.

The implications of the social desirability bias for rating and review systems are peculiar: when a rating system wants to portray a more reliable measure of performance, the social desirability argument suggests making reviews more anonymous. This anonymity subsequently devalues the given rating or review, as the relationship or context has proven to be an important influence on review value in both WOM and e-WOM.


6. Appendix


Figure 5. Histogram of the rating distribution on Google.


References

Chevalier, J. A., & Mayzlin, D. (2006). The effect of word of mouth on sales: Online book reviews. Journal of marketing research, 43(3), 345-354.

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2013). Applied multiple

regression/correlation analysis for the behavioral sciences. Routledge.

Counts, S., & Stecher, K. B. (2009, March). Self-presentation of personality during online profile creation. In ICWSM.

Duan, W., Gu, B., & Whinston, A. B. (2008). Do online reviews matter?—An empirical investigation of panel data. Decision Support Systems, 45(4), 1007-1016.

De Langhe, B., Fernbach, P. M., & Lichtenstein, D. R. (2015). Navigating by the stars: Investigating the actual and perceived validity of online user ratings. Journal of Consumer

Research, 42(6), 817-833.

Dellarocas, C., & Wood, C. A. (2008). The sound of silence in online feedback: Estimating trading risks in the presence of reporting bias. Management Science, 54(3), 460-476.

Dellarocas, C. (2003). The digitization of word of mouth: Promise and challenges of online feedback mechanisms. Management Science, 49(10), 1407-1424.

Ellison, N., Heino, R., & Gibbs, J. (2006). Managing impressions online: Self‐presentation processes in the online dating environment. Journal of Computer‐Mediated

Communication, 11(2), 415-441.

Fisher, R. J. (1993). Social desirability bias and the validity of indirect questioning. Journal of

consumer research, 20(2), 303-315.

Godes, D., & Mayzlin, D. (2004). Using online conversations to study word-of-mouth communication. Marketing science, 23(4), 545-560.

Gupta, P., & Harris, J. (2010). How e-WOM recommendations influence product consideration and quality of choice: A motivation to process information

perspective. Journal of Business Research, 63(9), 1041-1049.

Ho-Dac, N. N., Carson, S. J., & Moore, W. L. (2013). The effects of positive and negative online customer reviews: do brand strength and category maturity matter?. Journal of

Marketing, 77(6), 37-53.

Hu, N., Bose, I., Koh, N. S., & Liu, L. (2012). Manipulation of online reviews: An analysis of ratings, readability, and sentiments. Decision Support Systems, 52(3), 674-684.

Hu, N., Liu, L., & Sambamurthy, V. (2011). Fraud detection in online consumer reviews. Decision Support Systems, 50(3), 614-626.


Hu, N., Zhang, J., & Pavlou, P. A. (2009). Overcoming the J-shaped distribution of product reviews. Communications of the ACM, 52(10), 144-147.

Hu, N., Pavlou, P. A., & Zhang, J. (2006, June). Can online reviews reveal a product's true quality?: empirical findings and analytical modeling of Online word-of-mouth

communication. In Proceedings of the 7th ACM conference on Electronic commerce (pp. 324-330). ACM.

Klein, T. J., Lambertz, C., Spagnolo, G., & Stahl, K. O. (2006). Last minute feedback.

Lott, A. J., Lott, B. E., Reed, T., & Crow, T. (1970). Personality-trait descriptions of differentially liked persons. Journal of Personality and Social Psychology, 16(2), 284.

Malbon, J. (2013). Taking fake online consumer reviews seriously. Journal of Consumer Policy, 36(2), 139-157.

Matzler, K., Veider, V., & Kathan, W. (2015). Adapting to the sharing economy. MIT Sloan

Management Review, 56(2), 71.

Merton, R. K. (1948). The self-fulfilling prophecy. The Antioch Review, 8(2), 193-210.

Muchnik, L., Aral, S., & Taylor, S. J. (2013). Social influence bias: A randomized experiment. Science, 341(6146), 647-651.

Pantelidis, I. S. (2010). Electronic meal experience: A content analysis of online restaurant comments. Cornell Hospitality Quarterly, 51(4), 483-491.

Racherla, P., & Friske, W. (2012). Perceived ‘usefulness’ of online consumer reviews: An exploratory investigation across three services categories. Electronic Commerce Research

and Applications, 11(6), 548-559.

Rui, J., & Stefanone, M. A. (2013). Strategic self-presentation online: A cross-cultural study. Computers in Human Behavior, 29(1), 110-118.

Salganik, M. J., & Watts, D. J. (2008). Leading the herd astray: An experimental study of self-fulfilling prophecies in an artificial cultural market. Social psychology quarterly, 71(4), 338-355.

Salganik, M. J., Dodds, P. S., & Watts, D. J. (2006). Experimental study of inequality and unpredictability in an artificial cultural market. science, 311(5762), 854-856.

Schor, J. (2016). Debating the sharing economy. Journal of Self-Governance & Management Economics, 4(3).

Schlosser, A. E. (2011). Can including pros and cons increase the helpfulness and

persuasiveness of online reviews? The interactive effects of ratings and arguments. Journal


Stevens, J. P. (1984). Outliers and influential data points in regression analysis. Psychological

Bulletin, 95(2), 334.

Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological

bulletin, 133(5), 859.

Wei, L., Xu, W., & Islands, C. (2014). Exploring heuristic cues for consumer perceptions of online reviews helpfulness: The case of Yelp. com. In Proceedings of (pp. 52-63).

Wessel, M., Thies, F., & Benlian, A. (2015, May). A Lie Never Lives to be Old: The Effects of Fake Social Information on Consumer Decision-Making in Crowdfunding. In ECIS.

Zervas, G., Proserpio, D., & Byers, J. (2015). A first look at online reputation on Airbnb, where every stay is above average.

Referenties

GERELATEERDE DOCUMENTEN

It is assumed that the bridge will withstand the load according to EN 1991-2 using Load Model 71 [5]. The characteristic values are multiplied by a factor α on lines carrying

In the summer of 2012, a first workshop was organised in Ghent and Athens to facilitate interaction among different stakeholders (i.e. citizens, professional developers,

Physical penetration tests are seldom done without social engineering, because when entering a location, it is imminent that the testers will have to interact with the

In de kern gaat het om de laagdrempeligheid van de wijkcoaches (zowel voor gezinnen als voor scholen), de wijkgerichtheid van de aanpak, de integrale hulpverlening op

Bamboo fibres recently attracted interest as a sustainable reinforcement fibre in (polymer) composite materials for structural applications, due to specific mechanical properties which

The data collected from this project promises to provide important information about: (1) the level of psycho- logical distress and the prevalence of common mental disorders

The title of the research is: “An evaluation of the administration and payments of social grants in the Northern Cape and Western Cape: Its strengths and

272 2009 An Exact Solution Procedure for Multi-Item Two-Echelon Spare Parts Inventory Control Problem with Batch Ordering 271 2009 Distributed Decision Making in Combined