
Taking and Giving: Public Goods Dilemmas in the Online World of Rating and Ranking

A research on (un)motivated behavior on restaurant review platforms

Renske Brands 11786000

First supervisor: Olav Velthuis

Second supervisor: Alex van Venrooij

MSc Sociology – Cultural Sociology
Faculty of Social and Behavioral Sciences
University of Amsterdam

08-07-2019

Table of Contents

Abstract
Chapter 1. Introduction
1.1 Online rating and reviewing
Chapter 2. Theoretical framework
2.1 Collective action problems
2.1.1 Collective action on the Internet
2.2 Electronic word-of-mouth
2.3 Rewards in the public goods dilemma
2.4 Sociological relevance
Chapter 3. Methods
3.1 Data collection
3.1.1 Participant recruitment
3.1.2 Interviews
3.2 Participants
3.3 Data analysis
Chapter 4. Three understandings of review platforms
4.1 Frames
4.2 The consumer-focused understanding
4.2.1 Non-reviewers
4.2.1.1 The emotional impact of not reviewing
4.3 The restaurant-focused understanding
4.3.1 Non-reviewers
4.4 The self-focused understanding
4.5 Theoretical frameworks
4.5.1 Non-reviewers
4.5.2 Consumer-focused reviewers
4.5.3 Restaurant-focused reviewers
4.5.4 Self-focused reviewers
4.5.5 Affordances
4.6 Final remarks
Chapter 5. Discussion
5.1 Limitations
5.2 Future research
5.3 Practical implications
5.4 Conclusion
Literature
Appendix A: Motivational factors and categories
Appendix B: Recruitment flyer
Appendix C: Interview Guide [Dutch]
Appendix D: Participant characteristics

Abstract

The use of user-generated review platforms has exploded in the last few decades. Restaurant owners nowadays depend on online reviews to run a thriving business. Online review platforms thus have a big impact on contemporary society. However, the question as to why people review restaurants has remained a puzzle to this day. After all, review platforms constitute public goods that are available to everyone, contributors and non-contributors alike. Even though it is in every individual's self-interest to use such a platform without contributing to it, the platform cannot exist without these contributions. The existence and popularity of the platforms indicate that some people do behave in a group-oriented manner (i.e., engage in collective action). Many theories (i.e., collective action theories, (electronic) word-of-mouth theories, and affordances theories) have proposed (partial) explanations for this behavior, but none has been able to fully explain people's reviewing tendencies. The current research aimed to solve part of this puzzle by conducting semi-structured interviews with eleven reviewers and four non-reviewers. The data were analyzed using an abductive approach to discourse analysis. The interviews yielded 17 motivations to review and 10 motivations not to review. From these motivational factors, three different understandings were identified that the interviewees used to make sense of restaurant review platforms: the (1) consumer-focused, (2) restaurant-focused, and (3) self-focused understanding. These understandings consist of cognitive frames that (non-)reviewers use to explain and justify their behaviors, and include specific narratives about the goal and utility of review platforms. They also have practical implications for people's behavior. While the first understanding corresponds well with much of the literature about reviewing behavior, current theoretical perspectives cannot explain the other two as well. The current research is the first to identify how these three understandings shape and motivate reviewing behaviors, thereby enriching the literature and opening doors to new investigations into this subject.

Chapter 1. Introduction

“Before TripAdvisor, the customer was only nominally king. After, he became a veritable tyrant, with the power to make or break lives.” (Kinstler, 17 August 2018)

As of 2019, the Internet has pervaded all areas of social life, including hospitality. Before the Internet, one had to rely on the opinions of one's direct social network to find the best restaurants in a specific neighborhood or city. Nowadays, we can search for and compare infinite possibilities online. This is made possible with the help of our fellow citizens. Voicing one's opinion about a restaurant online helps to create an information database that the entire world population can access and benefit from. Exchanging information and opinions online can thus help others. The existence and popularity of online rating and ranking websites show that many people do voice their opinions online. The power of such online opinions has grown quickly. As the quote above says, online reviews have given consumers the power to grow from a 'nominal king' into a possible 'tyrant' (Kinstler, 2018). Yet not everybody seems to be equally interested in contributing to these online knowledge databases. With all of this information readily available on the Internet, individual dilemmas arise: do we want to take the effort to contribute to the knowledge database, or do we just use the information that is already available? Though ranking websites have been around for a while, the question of why people (do not) review restaurants on restaurant ranking websites has not yet been answered. Why do some people feel compelled to review restaurants online, even though they could also use the website without contributing to it? Moreover, why do others not feel this need to contribute, even though they use others' contributions to their own benefit? The current research tries to formulate an answer to the question of what motivates people to contribute to such restaurant ranking websites, or why they abstain from doing so.

While many theorists have already tried to answer this question, no definite answer can be formulated yet. We still do not know exactly why these restaurant review websites exist and keep existing, even though it seems like it is not in a contributor’s direct personal interest to contribute. The goal of the current research is therefore to investigate people’s motivations (not) to write restaurant reviews by examining people’s own interpretations of their behavior.

This thesis is structured as follows. First, an overview of the online review environment will be created in the remainder of this introductory chapter. Next, in chapter 2, we will start a dialogue between three strands of research that have touched upon this subject: the collective action literature (originating from sociology), the electronic word-of-mouth literature (originating from marketing studies), and the affordances literature (originating from platform studies). After presenting our theoretical framework, our methodology will be discussed in chapter 3. The results of the research and an analysis of these results will be presented in chapter 4, where our main findings will show that the theoretical approaches as they currently exist are insufficient to explain online restaurant reviewing behaviors. That is, the current research has found that people can have different understandings of restaurant review platforms (which shape their reviewing behaviors), not all of which can be explained by the theories used in our theoretical framework. Finally, chapter 5 will focus on the (theoretical and practical) implications of our findings, as well as the limitations and the conclusion of this research.

1.1 Online rating and reviewing

To understand why people write online reviews, we first need to understand what the online review environment looks like. Historically, the restaurant industry used to be reviewed only by experts. A gradual move towards a more participatory role for consumers in the valuation process of restaurants can be seen from the 1970s onwards, with the biggest change happening at the end of the 1990s (Mellet, Beauvisage, Beuscart, & Trespeuch, 2014). Though consumer participation was not entirely absent in the 1970s, when only expert paper guides existed (e.g., Michelin), it was minimal: consumers could write letters and/or e-mails to the guides to "rectify omissions or report on developments that had occurred since the visit of the official critic" (Mellet et al., 2014, p.14). Later, paper guides appeared that were created from consumer surveys, though the editors of these guides still heavily mediated the consumers’ contributions. The online reviews as we know them now, fully 'democratized', emerged in the late 1990s, with Amazon being the "pioneer in the field of online customer feedback" (Mellet et al., 2014, p.16).

This development was accelerated by the rise of the so-called Web 2.0, which “refers to a set of collaborative online technologies and services that proliferated rapidly starting in the mid-2000s … [which] resulted in an explosion of user-generated content freely available online” (Lewis, 2012). The Web 2.0 is a way to collect, store, connect, and retrieve an unlimited amount of information, mobilizing “the collective intelligence of its users” (Lewis, 2012).

On the Web 2.0, user-generated content (UGC) platforms (e.g., social networks, review sites, online communities; Ayeh, Au, & Law, 2013) exist that help us direct our searches. UGC platforms are built upon customer participation, where the customers (i.e., the website visitors) help to create the website itself by virtually exchanging information, expressing thoughts, and reconfiguring knowledge with one another (Ayeh, Au, & Law, 2013; Shang, Li, Wu, & Hou, 2011). In doing so, new knowledge is created (Shang, Li, Wu, & Hou, 2011).

The current research will investigate one type of such UGC platforms: online evaluation systems, particularly rating and ranking websites. On these platforms, the UGC consists of online customer reviews (OCRs) and ratings. That is, most review websites collect visitors’ opinions in two ways: a star/bullet value (i.e., a rating; often ranging from one to five), and a textual description of one’s opinion (i.e., an OCR) about a particular establishment (Banerjee & Chua, 2015). The website subsequently creates a summary of the reviews and an average of the ratings in order to produce a ranked list of the evaluated subjects (Beuscart, Mellet, & Trespeuch, 2016). Even though all review platforms differ from each other in their particular histories, designs, and positioning, they do work in the same way (i.e., collecting both ratings and reviews to create a knowledge database) and have the same goal (i.e., the “empowerment of ordinary consumers”; Mellet et al., 2014, p.17). Since posting and reading reviews on these platforms is free, the platforms rely on income generated from advertisements to keep existing (Filieri, Alguezaui, & McLeay, 2015). That is, the more popular and influential a restaurant review platform is, the more e-retailers from this industry would be willing to pay for a sponsored link from this website (Filieri, Alguezaui, & McLeay, 2015).

In the past two decades, the use of such online evaluation systems has exploded due to our need for accountability, transparency and efficiency (Espeland & Sauder, 2007). The pervasiveness of this culture in our social lives is shown by the fact that, by 2008, 30% of Internet users had posted an online comment or review and 30% had issued a rating online (Pang & Lee, 2008). A 2017 study amongst Dutch citizens found that almost half of Dutch consumers had already posted an online review (Königs & Moha, 2017).

Even though the reviews posted on UGC platforms are often posted by people that we do not know personally, we tend to trust them. Research has shown that 66% of the population trusts other customers' online reviews (Nielsen, 2015). Trusting the validity of these reviews implies that we also let them shape our opinions. The implications for business are therefore very real, even though the reviews may not always be. That is, experts estimate that around one-third of all customer reviews online are fake (Streitfeld, 2012). A complete industry has even formed out of companies' desire for positive reviews: there are many businesses and people that get paid to write (positive) reviews, and companies buy and sell fake accounts to write their own reviews (Vleugels, Schellevis, & Felix, 2019). The validity and reliability of OCRs is often questioned due to their subjective nature, as well as the legitimacy and representativeness of the inexpert reviewers (Ayeh, Au, & Law, 2013; Beuscart, Mellet, & Trespeuch, 2016; Dellarocas, 2003). One of the most important drawbacks of UGC platforms is therefore that not all reviews can be checked (for their validity and/or reliability) before they are posted online. Moreover, the transparency of the website cannot always be guaranteed. That is, UGC platforms sometimes collaborate with organizations that use the data on the platforms for their own benefit (Hearn, 2010). Lastly, the content of the reviews may even be hurtful, both for the persons and for the businesses that are on the receiving end of the review (Beuscart, Mellet, & Trespeuch, 2016). The UGC platforms are aware of their shortcomings, though, and constantly try to improve themselves. For example, the international restaurant ranking and review website TheFork has changed its website in such a way that only people who made a reservation via the website can post a review of the restaurant they visited. This solves part of the reliability problem the UGC platforms face, though it creates new problems, since the website now excludes restaurants that do not use the reservation system of that platform.

However, it is important to highlight the positive sides of UGC platforms and OCRs as well. First of all, online evaluation systems allow for a greater information distribution and a greater motivation for organizations to improve (Espeland & Sauder, 2007). Furthermore, the numeric lists of such ranking/review systems help us to organize and understand our social worlds better. The numeric lists these websites generate make it easier to sort through the overwhelming amounts of information that can be found on the Internet (Hearn, 2010). They therefore reduce uncertainty in our lives (Karpik, 2010). Finally, proponents of OCRs "resort to an ideological discourse that associates OCRs with the democratization of markets and the empowerment of consumers" (Beuscart, Mellet, & Trespeuch, 2016, p.460). Especially in the culinary world, which is traditionally dominated by elite critics, such an ideology is very powerful (Beuscart, Mellet, & Trespeuch, 2016). After all, "why believe what one or a few experts have to say about something when you can see an average ranking of hundreds, if not thousands of consumers?" (Lehmann, 2012, p.10). Review websites thus enable consumers to make informed decisions based on other laypersons' opinions and to claim their power in the relationship between them and the restaurant, thereby enabling them to hold restaurants accountable.

The fact that UGC platforms have grown exponentially in the last couple of decades shows that these advantages are important to consumers. Still, they cannot explain why people decide to contribute to the platform. The current research will try to answer that question. To do so, we will first delve into the theoretical perspectives that have already touched upon this subject.

Chapter 2. Theoretical framework

Even though the question why people write online reviews has been studied before, primarily in the social sciences and economics (i.e., disciplines that emphasize the importance of self-interest in human decision-making), an explanation for the popularity of writing online reviews has not yet been found. The current research will try to fill this theoretical gap. To do so, however, an overview of the existing theories about this subject is needed. The following chapter will therefore provide an overview of the theories upon which the current research will build: the collective action literature, the electronic word-of-mouth literature, and the affordances literature. We will show how these three strands of research highlight different aspects of the same problem and differ in their interpretations of, as well as their answers to, the main question of this research: why do people write restaurant reviews (even when they can also use others' reviews without contributing themselves)?

2.1 Collective action problems

Writing reviews on restaurant review platforms can be classified as ‘collective action’. Collective action concerns the alignment of multiple people’s behavior in order to achieve a particular common interest or objective (Olson, 1965). In order to do so, each individual in this group of people has to decide that they want to achieve this goal and must want to serve the interest of the group. Social dilemmas underlie such collective action, since individuals often have multiple interests, some of which may be conflicting. A social dilemma is a dilemma where “what is best for the individual is in conflict with what is best for the group, yet if all pursue their own self-interest, the group as a whole is worse off” (Harrell & Simpson, 2016, p.1283). In other words, everyone would benefit from taking action, but no one will take the action if they are the only one to do so. This dilemma is also known as a ‘collective action problem’: a problem “where rational individual action can lead to a strictly Pareto-inferior outcome, that is, an outcome which is strictly less preferred by every individual than at least one other outcome" (Taylor, 1987, p.19).

Collective action problems have been around since the beginning of society. That is, early social scientists already investigated collective action by focusing on the puzzle of social order in our physical society. For example, in trying to explain how and why individuals engage in collective action (thereby transcending self-interest), Marx focused on class interests, Durkheim used the concept of solidarity, and Parsons stressed the inculcation of common ends (Willer, 2009). The well-renowned social scientist Willer (2009) even wrote that it is "one of sociology's classic puzzles how groups motivate their members to set aside self-interest and contribute to collective action" (p.23). The current research aims to solve part of this puzzle.

The current research will focus on a particular social dilemma or collective action problem: the public goods dilemma. A public goods dilemma is “a situation in which members of a group must decide whether or not to contribute their resources to a public good wherein, when once successfully accomplished, every member will have an equal share of the associated benefits, regardless of his/her actual contribution” (Chen & Bachrach, 2003, p.139). Thus, in this dilemma, it is in the long-term interest of the group to contribute towards a common good, even though contributing may not be in the individual interests of the group members.

Public goods have two distinct characteristics: "non-excludability" and "non-rivalry". This means that they are goods "that are accessible to all at the international level, the consumption of which by one individual does not reduce the possibility of consumption by other individuals" (Héritier, 2015, p.540). Thus, public goods are available to everyone. The non-excludability aspect of public goods allows people to take advantage of others' efforts. People that do not contribute to a public good but do profit from others' contributions, thereby maximizing their personal profit, are 'free-riders' (Chen & Bachrach, 2003). They place their self-interest above the interest of the group. Free riding is, of course, only beneficial if enough people do contribute to the public good. If too many people free ride, it is not possible to maintain or provide the public good. Also, free riding on the provision of a public good is often seen as morally wrong (Hardin, 2003).

Collective action theorists assume that people always act in their own self-interest. The concept of public goods is therefore a puzzle, as it seems impossible to create and maintain public goods if everyone only does what is best for themselves (Willer, 2009). Still, society exists, and with it, collective action and public goods. Public goods are actually quite common in our social worlds. Examples include the national military (in which not all citizens have to serve but that will still protect all citizens when needed) and a clean environment (since no one is obligated to take shorter showers, eat less meat or drive electric cars, but the entire world population benefits from a decrease in pollution; Hardin, 2003; Lane, 2011).

2.1.1. Collective action on the Internet. The examples described above take place in our real, physical society, though. The current research, in contrast, aims to study another social space: the Internet. The development and rise of the Internet has allowed for the creation of new public goods and new corresponding collective action problems, inviting new investigations into the subject.

On the Web 2.0, just like in our physical society, collective action is possible. The most famous example of a Web 2.0 service that relies on collective action is Wikipedia, a free online encyclopedia that is created and maintained by Internet users (Lewis, 2012; Shang, Li, Wu, & Hou, 2011). Such a platform is an example of a public good; it is created from collective action. That is, in order for such platforms to exist and/or be maintained, people have to contribute to the websites voluntarily, even though doing so might not be in their best self-interest. After all, Wikipedia writers are not obligated to contribute to the website, nor do they receive any material rewards for doing so, but they still spend time contributing.

It is important to note, however, that although these Web 2.0 platforms are necessarily non-rival (i.e., the usage of one user does not impede the usage of another user), they are not necessarily non-excludable. Some Web 2.0 services do exclude some people from using their service by turning it into a paid service. Take, for example, the music streaming service Spotify: on this platform, certified users can contribute to the platform by adding music to the streaming service, and normal users can contribute by, among other things, creating public playlists. However, we cannot enjoy these songs or playlists if we do not pay for a Spotify subscription. The possibility to free ride is thereby limited. Many of the Web 2.0 platforms that we would classify as public goods do not engage in such restrictive measures, though, since their existence is dependent on consumer participation (i.e., user-generated content). It is therefore in their own best interest not to restrict the number of website visitors.

The current research will focus on one specific digital collective action problem: reviewing restaurants online, on restaurant ranking and rating platforms. While there is not much literature on this specific topic (i.e., the collective action problem of writing restaurant reviews on the Web 2.0), the classical collective action principles apply here as well. As the existence of online restaurant evaluation platforms depends on user-generated content, they constitute public goods. That is, many people have to put in the effort to rank and review establishments; without these public efforts, the platforms could not exist. Also, these reviews and ratings are visible to every visitor of the platform, both contributors and non-contributors, and all of these visitors benefit from the fact that these platforms exist. The decision whether to review or to free ride therefore constitutes a collective action problem (i.e., a public goods dilemma): a choice between self-interest (i.e., taking advantage of others' contributions to the platform without contributing yourself) and the group interest (i.e., contributing towards the platform in order for the platform to keep existing). And just as the sustainment of public goods in our actual, physical society constitutes a puzzle to collective action theorists (who assume people always act out of self-interest), so does the digital collective action problem of reviewing restaurants online and sustaining the review platform.

2.2 Electronic word-of-mouth

Besides the collective action literature, research from other fields has taken an interest in the question of what motivates people to participate in such online evaluation systems. An important strand of research in this area comes from the field of marketing, which, interestingly enough, does not share the assumption of the collective action literature that self-interest is the driver behind all human action. These theorists believe that self-interest is not necessarily incompatible with an interest in the welfare of others (Balasubramanian & Mahajan, 2001). Thus, in clear contrast to the collective action literature, the marketing literature believes altruism can be part of the explanation for human (reviewing) behavior. This literature thus takes a very different perspective from that of the collective action literature, and contrasting the two may yield interesting insights.

The marketing literature perceives UGC platforms not as public goods but as communication tools. That is, the rise of UGC platforms has allowed new forms of communication to thrive and has changed the process of decision-making. We used to rely on our friends and family to tell us what restaurants to go to. Such word-of-mouth (WOM) advertisement (i.e., "an oral form of interpersonal non-commercial communication among acquaintances"; Cheung & Lee, 2012, p.219) has been shown to heavily influence consumers' decisions (Arndt, 1967; Litvin, Goldsmith, & Pan, 2008), as it provides the involved actors with credible information (Karpik, 2010). This is especially true for the hospitality industry, where the intangibility of the products makes them hard to evaluate before purchasing (Litvin, Goldsmith, & Pan, 2008). This uncertainty is reduced through WOM communications, leading to faster and better buying decisions (Hennig-Thurau & Walsh, 2014).

The Internet, however, has enabled consumers to engage in so-called "electronic word-of-mouth" (eWOM), which concerns "any positive or negative statement made by potential, actual, or former customers about a product or company, which is made available to a multitude of people and institutions via the Internet" (Hennig-Thurau, Gwinner, Walsh, & Gremler, 2004, p.39). As the definition shows, reviews constitute eWOM communications, as they consist of positive or negative statements about a restaurant visit made on a publicly available platform on the Internet. However, it is important to note that not all eWOM communications are reviews; eWOM also includes other types of online communication (e.g., forum questions and posts, video comments).

eWOM differs from traditional WOM communications in many ways. For instance, it enables consumers to reach many individuals at once who would otherwise be socially or geographically more difficult to reach; the content stays available to the public for an indefinite amount of time; and it is possible to engage in it anonymously (Hennig-Thurau et al., 2004). Thus, eWOM increases the speed, scale, persistence, and accessibility of information, but decreases the ability to judge credibility, in comparison to traditional WOM communications (Cheung & Lee, 2012). With these features, eWOM has become an important factor in people's decision-making processes: research has shown that 61% of American consumers use some kind of online customer feedback (i.e., online reviews, blogs, et cetera) before buying a product or service ("Online Reviews Sway Shoppers", 2008). Reasons for using such customer feedback mainly concern obtaining additional information, checking the quality of the product or service, or selecting/prioritizing choices (Königs & Moha, 2017).

Writing an online review, in contrast to reading one, depends on different motivational factors. Recent research by Königs and Moha (2017) shows that Dutch consumers primarily post reviews if they have had a particularly positive or negative experience; consumers with a less distinct experience were less inclined to write a review. Having a strongly positive or negative experience cannot explain the phenomenon of online reviewing on its own, though. To truly understand the drivers of eWOM communication, one might have to start by looking at traditional WOM communication. Sundaram, Mitra and Webster (1998) found eight motivational factors that could explain people's engagement in traditional WOM communications. Four of them predicted people's engagement in positive WOM communication (i.e., altruism, product involvement, self-enhancement, and helping the company) and four of them explained negative WOM communications (i.e., altruism, anxiety reduction, vengeance, and advice seeking). Balasubramanian and Mahajan (2001) also investigated this topic and found three motivational categories. The first, focus-related utility, entails the belief that one adds value through one's contributions, thereby strengthening or advancing the community/platform. The second, consumption utility, concerns the benefit one gets from consuming others' contributions (Balasubramanian & Mahajan, 2001). For example, by reading others' contributions, readers may become motivated to write a comment themselves, for instance to seek advice and request problem-solving information post-purchase (Hennig-Thurau et al., 2004). The third motivational category, approval utility, entails the satisfaction one feels when others consume and approve one's contributions (Balasubramanian & Mahajan, 2001). The second motivational category in particular raises the question why people would not just use others' contributions without putting in the effort themselves (i.e., free ride). The eWOM theorists do not see free riding as a viable option; in fact, they do not even discuss this behavior. Though they admit that not all people will necessarily have to contribute, they believe that even "such 'silent' constituents can encourage increased contributions by others" by providing approval utility (Balasubramanian & Mahajan, 2001, p.126).

A couple of years later, Hennig-Thurau and colleagues (2004) took this research to the online environment. They confirmed the existence of the three above-mentioned categories in this new environment, and identified two more motivational categories: moderator-related utility (i.e., the convenience one experiences and active support one gets from the platform in solving problems, making it easier for consumers to complain) and homeostase utility (i.e., the desire to balance one’s life by expressing positive feelings and venting negative feelings). From these five broad categories, they identified specific motivational factors that could explain consumers’ engagement in eWOM communications. Significant factors were a desire for social interaction, economic incentives (i.e., economic rewards), concern for other consumers, increasing self-worth (i.e., through drawing attention and receiving positive feedback), and (to a lesser extent) seeking advice.

These findings are not conclusive, though. Other research has found other motivational factors. For example, Cheung and Lee (2012) showed that a sense of belonging to a community, a person's reputation, and the enjoyment of helping others are the most important factors that motivate consumers to engage in eWOM communication on UGC platforms. Though one could argue that these factors overlap (e.g., concern for others with helping others; belonging with social interaction), they are certainly not identical. Moreover, according to Cheung and Lee (2012), reciprocity and moral obligation do not predict consumers' eWOM behavior, which at first glance seems contradictory to Balasubramanian and Mahajan's (2001) findings (about altruism), since some researchers would identify reciprocity and morality as components of altruism. However, Cheung and Lee (2012) argue that only the enjoyment of helping others is truly an altruistic motive. In their view, reciprocity entails the expectation of getting something in return in the future (i.e., an egoistic motivation), and morality is purely based on a sense of duty or obligation (i.e., moral principles such as justice or utilitarianism). Neither of these two motivational factors is therefore, according to Cheung and Lee (2012), truly altruistic, which may account for the fact that these factors were not found to have explanatory power.

An overview of the discussed motivational categories and the corresponding motivational factors for people to engage in (e)WOM communications can be found in Appendix A. As the table in Appendix A (i.e., Table 1) shows, some motivational factors return in multiple articles (in slightly different forms), such as altruism, advice seeking, and self-enhancement (i.e., increasing self-worth and reputation). The fact that these factors are found most consistently implies that they are among the most important motivational drivers in the (e)WOM literature.

One of these recurring factors is the self-interested motivation of reputation or status enhancement. Research has found that UGC platforms are 'social spaces' that allow for status to be accumulated within them (Levina & Arriaga, 2014). The platforms not only display internal sources of status or capital, accumulated through other users' expressions of admiration for either the content or the contributor itself (e.g., number of likes or followers), but also allow for external sources of capital to be imported into the platform. For example, users' photographic skills increase the quality of their reviews' content. Research has shown that the possibility to develop a reputation as an 'expert' within such an online platform is an important motivator to contribute to the platform (Levina & Arriaga, 2014). Kovács and Horwitz (2018) have conducted a comprehensive exploration of this motivational factor for restaurant reviewers. They too found that status enhancement is an important driver to review restaurants online and argued that people use their consumption patterns to signal status (i.e., "conspicuous consumption"). Importantly, though, others need to be aware of the purchase. To ensure this, people post a review online. This is especially relevant in the case of restaurants, which people readily associate with a certain level of social status and generally see as an indicator of the consumer's social status (Kovács & Horwitz, 2018). Since only a limited number of people are present in a restaurant, one can enlarge the audience of one's consumption (and apparent status) by reviewing it online. People are therefore also more likely to write a review of a high-status restaurant than of a lower-status restaurant. It does not matter that reviewers do not know who will read their reviews, since "the very act of writing a review is a form of public display" and therefore fulfills the writer's need to show off their status (Kovács & Horwitz, 2018).

Despite some factors being found repeatedly, there is currently no consensus in the literature on the question of what motivates people to engage in eWOM. The same holds for the question of why people do not engage in eWOM communications. The reasons most often found in the literature are time constraints (Gretzel, Yoo, & Purifoy, 2007; Murphy, Gil, & Schegg, 2010; Wilson, Murphy, & Fierro, 2012), privacy and security concerns (Gunn, 2015; Murphy et al., 2010; Wilson et al., 2012), lack of confidence in one's writing or ability to help (Gretzel et al., 2007; Gunn, 2015; Murphy et al., 2010; Wilson et al., 2012), technological issues (Gunn, 2015; Preece, Nonnecke, & Andrews, 2004), and being lazy and/or not feeling the need to contribute or share (Preece et al., 2004; Gretzel et al., 2007; Murphy et al., 2010). However, again, many more factors can be found in the literature. These studies do not see this behavior as 'free riding', though, as they do not see the UGC platforms as public goods (as the collective action literature does) and believe that even non-contributors play a role in motivating others to contribute through the provision of approval utility (Balasubramanian & Mahajan, 2001).

Overall, one can conclude that the eWOM literature is very scattered. It offers many options, but no clear answer. What the eWOM literature does have in common (in contrast to the collective action literature) is a focus on altruism instead of self-interest in its explanations. However, this altruism explanation is not entirely convincing. That is, while these researchers indicate altruism might be an explanatory factor, they do not specify what it is about altruism, specifically, that drives people to behave in an unselfish way. Altruism is a highly debated concept. In the academic literature, it has often been equated with many other concepts that are closely related but different, such as "giving, sharing, cooperating, helping, and different forms of other-directed behavior" (Monroe, 1994, p.862). Whereas behavioral scientists define it as "costly acts that confer economic benefits on other individuals" (Fehr & Fischbacher, 2003, p.785), psychologists often take a broader approach and see it as "behavior intended to benefit another, even when doing so may risk or entail some sacrifice to the welfare of the actor" (Monroe, 1994, p.862). However, even though the exact definitions in various disciplines differ, economists, evolutionary biologists and psychologists all accept the assumption that self-interest is the norm of human behavior (i.e., egoism; Gantt, Reber, & Hyde, 2013; Monroe, 1994). This even applies to altruistic behavior: one might help another person to avoid negative feelings (or to generate positive feelings) or even to help oneself in the long term (Gantt, Reber, & Hyde, 2013). It seems like the eWOM literature talks about 'pure altruism', which means that one truly acts with the benefit and concern of the other as the goal, whereas the collective action literature assumes people predominantly commit 'impure altruism' (i.e., "pro-social behavior motivated by more self-centered motives […] potentially in addition to concern for others"; Wittek & Bekkers, 2015, p.579). Taking a collective action perspective, one would argue that 'altruism' in itself is not a motivational factor, since there is always an underlying motive at play (i.e., a reason why people want to help) that might be egoistic and deserve a more thorough exploration. In the remainder of this thesis, both pure altruism and impure altruism will be considered as potential explanations, as no a priori conclusions can be drawn on the validity of either option.

2.3 Rewards in the public goods dilemma

While the collective action literature focuses on self-interest as the primary motivation for people’s (online reviewing) behavior, the literature on eWOM communications does not share this assumption and allows altruism to be a (partial) explanation. However, it is important to keep in mind that some UGC platforms are built in a way that promotes collective action by ‘rewarding’ individuals for altruistic behavior. The collective action problem is then partially solved by giving the users incentives, thereby aligning the self-interest and the group-interest (Wittek & Bekkers, 2015). This solution to the public goods dilemma is an example of Cheung and Lee’s (2012) egoism motivation and plays into Balasubramanian and Mahajan’s (2001) approval utility of eWOM communications: they provide the reviewer with a sense of satisfaction or achievement, which makes them more prone to engage in such behavior again. Such tools are examples of “affordances”: material artifacts (e.g., technologies) that determine what people are allowed to do (Bucher & Helmond, 2018).

In the context of the current research, these affordances particularly entail features built into the review systems that might influence people's decision to review. For example, TripAdvisor, an American travel website using review and ranking systems, gives its most avid reviewers titles (e.g., "top contributor") and badges (e.g., "hotel expert", "senior photographer", "helpful reviewer"), as well as points. Moreover, it counts the number of 'likes' a reviewer's reviews have received and assigns reviewers (on the basis of these likes and the number of points they have) to a certain 'contributor level'. Google Reviews (i.e., a public feature on Google Maps where one can rank and rate establishments) uses a similar system: it too provides reviewers with points and titles (e.g., "local guide"), assigning them to a certain 'guide level'. TheFork works with titles as well (e.g., "master taster"), but also uses an economic incentive to persuade website visitors into reviewing. That is, it awards reviewers with so-called "Yums" that reviewers can use to get a discount on their next reservation through the website. Research has shown that 45% of Dutch consumers have received a (chance at a) reward for writing a review online (Königs & Moha, 2017). A third of the consumers that had ever posted an online review even indicated that they could win a prize for writing a positive review (Königs & Moha, 2017). The same research also concluded that younger people are more prone than older people to write reviews for this particular reason (i.e., receiving a reward or winning a prize; Königs & Moha, 2017).

Such technological features of websites are more than just features. They carry symbols and connotations, and therefore meanings (Bucher & Helmond, 2018), which makes affordances an important concept to take into account in this research. The concept of 'affordances' was originally introduced by Gibson in 1979, in the field of ecological psychology (Hafezieh, 2015). Gibson saw an affordance as a relational construct, related to both the object's properties (i.e., the environment) and the subject's perception (i.e., the people; Hafezieh, 2015). That is, affordances have to be perceived in order for them to exist. This relational view is not always shared in the literature; other researchers have argued for a more dispositional view that focuses on the intrinsic properties of the environment that provide the possibility to take action (i.e., enable or constrain people's actions; Hafezieh, 2015). Gibson's concept was brought to the field of design studies by Norman in 1988, who took the role of designers as a focal point (instead of that of the perceiver). He saw affordances as "a relationship between the properties of an object and the capabilities of the agent that determine just how the object could possibly be used" (Norman, 2013, p.11). He thus took a relational stance. However, he gave most of the power to the designer. Norman argued that a good design would lead to easily perceivable affordances, which would then also lead to the device being used in the way that the designer had intended (Bucher & Helmond, 2018). Gaver (1991) refined these theories by stating that, at least in the case of technology, affordances could be either perceptible or hidden to the subjects. Postigo (2016) elaborated on this perspective by defining affordances as the ways in which "social structures […] take shape in association with a given technical structure" (p.336). Hutchby (2001) then brought this concept into the sociological realm and applied it to technology. He argued that affordances have a function in the social world, namely that they enable or constrain actions. That is, while the affordances of artefacts do not prescribe certain actions, they set limits on what people can do with or via the technology (Hutchby, 2001). Affordances thus shape the conditions of possible actions (Bucher & Helmond, 2018). The current research positions itself within this socio-technological framework. The importance of perception is not to be forgotten, however, because "if people do not see or understand that a particular behavior is possible, they are unlikely to engage in that behavior (except by accident)" (DeVito, Birnholtz, & Hancock, 2017, p.741). This perceptibility is a product of both the system's design and the user's attributes (DeVito, Birnholtz, & Hancock, 2017).

All in all, though researchers have not yet agreed on the exact nature of the relationship between technology and people (in the context of affordances), one thing is clear: website features are more than features. They have the ability to direct or change people's behaviors and are influential in people's meaning-making processes. These features therefore also have the potential to influence people's decision-making process in the face of a (public goods) dilemma. That is, affordances can shape individuals' behavior by changing their interests. The affordances literature therefore assumes that people's self-interest can be molded. This literature thus also assumes that people will primarily act out of self-interest (in accordance with the collective action literature and in contrast to the eWOM literature). However, even though these affordances can steer people's decisions and all UGC platforms under investigation in this research make use of such affordances, there are still website visitors that do not post reviews. Thus, these affordances do not have the power to 'solve' the public goods dilemma completely.

2.4 Sociological relevance

The question why people engage in collective action on restaurant review websites (i.e., write reviews) thus remains. None of the discussed strands of research has been able to formulate a satisfying answer. In fact, they present strongly opposing interpretations of this phenomenon, creating an even larger puzzle. The current research aims to solve part of this puzzle by investigating people’s representations of their own motivational behavior.

The sheer number of people using online ratings and reviews, and the influence of these ratings and reviews on people's consumption decisions, demonstrate the importance of solving this puzzle. The fact that reviews have become such integral parts of our social lives makes it necessary to expand our theoretical knowledge about this subject, especially because the literature creates such contradictory expectations. The current research will be the first to start a dialogue between the two areas of research that study this phenomenon from such different perspectives. That is, the collective action literature assumes that people act out of self-interest. They assume people will always consider their individual costs and benefits before acting. Online reviewing therefore becomes a social dilemma where one has to choose between the self and the group. In contrast, the literature on eWOM communications assumes that people sometimes do act altruistically. These researchers thus do not share the assumption that people always act out of their own interest. They therefore do not see online reviewing as a social dilemma. The current research adds a concept from 'platform research' (i.e., affordances) to this dialogue as well, which assumes that individuals' self-interest can be molded. As the current research is the first to connect these three strands of literature, it has the potential to explore and discuss the validity of these perspectives, which makes it highly relevant and valuable to the existing literature on motivational behavior.

Furthermore, the studies discussed above all use quantitative research methods (i.e., surveys). Quantitative data are more easily compared, replicated and generalized than qualitative data (Ushioda & Dörnyei, 2012). However, this process requires one to average the participants' answers and thereby does not "do justice to the subjective variety of an individual life" (Dörnyei & Ushioda, 2011, p.204). With the current study's focus on individuals and their individual reasons for reviewing online, it would be very disadvantageous to strip away the individuality of people's experiences. Therefore, the current research will use a qualitative research method (i.e., interviewing). Qualitative research is "interested in analyzing subjective meaning or the social production of issues, events, or practices by collecting non-standardized data" (Flick, 2014, p.542). Thus, interviews allow for a deeper understanding of how people make sense of their social worlds and allow us to get to the underlying meanings of people's initial answers. In the current research, this particularly pertains to people making sense of public goods dilemmas and their own behavior in the face of such a dilemma. Being able to dive deeper into this subject together with the participants is important, since people might find it difficult to understand their position in a public goods dilemma at first. That is, they might experience difficulty identifying and exploring their motivations without being prompted to think deeper about their behavior and explanations for said behavior. That is not to say that the previously discussed literature is not useful. On the contrary, these surveys have provided us with important insights into people's practical consciousness (i.e., unconscious, automatic cognition; Vaisey, 2008). The current qualitative research, however, aims to refine, clarify and extend these previous quantitative studies by tapping into participants' discursive consciousness (i.e., conscious, explicit cognition; Vaisey, 2008). Moreover, interviews have been shown to be useful for studying attitudes, values, beliefs and motives, as they allow the interviewer to probe for explanations and provide an opportunity for the researcher to look for non-verbal clues hinting at the validity of the participants' answers (Barriball & While, 1994). Through this in-depth data collection and subsequent comprehensive understanding of people's motivations and intentions, the outcomes of the current research will be a valuable addition to the existing literature.

Additionally, research on online reviewers' motivations has not yet been conducted in the Netherlands, so the current study may also provide an exploration of cultural differences or similarities. The only Dutch research connected to our research question (i.e., Königs & Moha, 2017) was predominantly concerned with motivations to use online reviews rather than to write them. It only investigated one factor related to writing online reviews, namely the effect of prizes. However, the fact that a Dutch review website (i.e., IENS) has been able to grow from a local guide into an international network (i.e., TheFork) within a decade or two shows the importance of ratings and rankings (and the platforms) in Dutch society. It is therefore important to study Dutch reviewers' motivations.

Finally, the societal implications of online reviews are very real: online reviews can 'make or break' an establishment. This is especially true because we tend to trust online reviews (Nielsen, 2015) and thus let them shape our opinions, which can create a snowball effect. The underlying processes of this online phenomenon are therefore important to study. Understanding people's motivations to (not) behave in accordance with the group interest (i.e., understanding the public goods dilemma in the context of online reviewing websites) is the first step in understanding the bigger processes at play in this online world of rating and ranking. Finding structural factors that underlie these motivations may help us to better understand the relational aspects of collective action problems.

The implications of understanding these motivations are not limited to the hospitality industry, however. Many other Web 2.0 services are dependent upon consumer participation (e.g., donation platforms). Understanding how pro-social behavior emerges and is shaped on UGC platforms (or what hinders people from engaging in pro-social behavior) is therefore highly important for understanding human motivational behavior in general.

Chapter 3. Methods

To be able to formulate an answer to our research question, the current research made use of a qualitative research method. That is, semi-structured interviews were conducted in the Randstad area of the Netherlands. The following chapter will first provide a detailed overview of the data collection process, including information on the participant recruitment process and the data gathering (i.e., conducting the interviews). Next, the demographic composition of the sample used in the current study will be presented, followed by a step-by-step overview of the data analysis process.

3.1 Data collection

3.1.1 Participant recruitment. Participants had to meet the following entry requirements to be suitable for participation in the current research: they had to be between 18 and 60 years old, they had to have been out for dinner in the (recent) past, and they had to be familiar with restaurant reviewing websites.

Participants were recruited in various ways. The first method of participant recruitment was through the Internet, specifically the UGC platforms themselves. That is, the researcher reached out to reviewers on TripAdvisor, asking them to participate in the current research. The contacted reviewers were found by searching through the reviews that had recently been given to restaurants in Leiden and The Hague. TripAdvisor is the only restaurant review platform that allows members to message each other, so, unfortunately, this method restricted the search for participants to TripAdvisor reviewers. A second way in which participants were recruited was through flyers. Flyers were created by the researcher and distributed amongst four restaurants in the city of Leiden. Two of these restaurants were more expensive, luxurious restaurants, whereas the other two were cheaper, bigger, and more low-key. The choice of these restaurants was made based on the researcher's estimation of the type of customer these restaurants attracted (i.e., older and younger people, respectively). The restaurant owners all promised to distribute the flyers amongst their customers, together with the check. The flyers displayed the goal of the research, the entry requirements and the contact details of the researcher, along with the question whether the reader would like to participate in this research (in Dutch). A digital example of the flyer can be found in Appendix B. As a third method of participant recruitment, this flyer was also included in the digital newsletter of one of the (more expensive) restaurants. The fourth way in which participants were recruited was through the distribution of flyers at four buildings of Leiden University (on bulletin boards). A final approach to recruiting participants concerned a snowballing technique, starting from the social network of the researcher. That is, the researcher searched within their own social network for people who might know people that could be suitable for the current research.

The research included both people from the millennial generation (i.e., Generation Y, born between 1980 and 2000) and people from Generation X (i.e., born between 1960 and 1980). While Generation X participants were found quite easily, the recruitment of millennials turned out to be very difficult. That is, none of the people contacted through TripAdvisor were from Generation Y, not one millennial responded to the flyers in the restaurants or University buildings, and there were not many young people in the researcher's direct or indirect social network who reviewed restaurants online. A few potential participants had to be turned down because they reviewed hotels but not restaurants, and most of the millennials who were approached did not review anything. For this reason, the total number of millennials (five) in this research is significantly smaller than the number of Generation X participants (ten).

3.1.2 Interviews. Fifteen semi-structured interviews were conducted in order to find an answer to the research question. This method allowed for an in-depth exploration of the individual motivations and underlying meanings (i.e., structural explanations behind people's motivations). Before the interviews started, the researcher reminded the participants that participation was voluntary, that they could stop the interview at any time without any consequences, and that they did not have to answer questions they felt uncomfortable answering.

An interview guide structured the interviews but did not limit the researcher in asking additional questions when the opportunity arose. That is, the semi-structured format allowed the researcher to ask follow-up questions on issues raised by the participant whenever these seemed relevant to the results of the study. This way, richer data could be collected. The interview guide can be found in Appendix C. All interviews were conducted face-to-face at a date, time, and location of the interviewee's choosing. Locations where the interviews were conducted include offices, homes, University buildings, a terrace, and a company's coffee bar. All interviews were recorded, with the interviewee's consent, using both a voice recorder and an iPhone. The average length of the interviews was 51 minutes and 49 seconds (with a range between 20:26 and 1:18:22). The cumulative duration of the interviews added up to 12 hours, 57 minutes and 29 seconds.

At the end of each interview, the participant received a chocolate bar of their choosing (i.e., dark chocolate, milk chocolate, milk-hazelnut, or white chocolate) as a reward for their participation.


3.2 Participants

The sample of the current study consisted of 15 participants: five Dutch men (33.3%) and ten women (66.7%) from the Randstad area (i.e., an urban agglomeration of cities) in the Netherlands. Six participants lived in Leiden (40%), five lived in Rotterdam (33.3%), three lived in Utrecht (20%), and one lived in The Hague (6.7%). The research included five participants (33.3%) from Generation Y and ten participants (66.7%) from Generation X. Participants were between 23 and 59 years old, with an average age of 41.6 years. Considering the two age groups separately, the average age of the millennial participants was 23.8 years, whereas the average participant from Generation X was 50.5 years old. The educational level of the participants (i.e., highest completed degree) differed: six participants (40%) had finished a degree at a University of Applied Sciences, five had finished a Master's degree at a University (33.3%), three had finished a Bachelor's degree at a University (20%), and one had earned a vocational degree (6.7%). Concerning race or ethnicity, fourteen participants identified as Caucasian/white (93.3%) and one as Hindu (6.7%).

Participants included both people who review restaurants online via restaurant review sites and people who do not, in order to compare the motivations and interests of the two groups. Of the fifteen participants, eleven (73.3%) identified themselves as (avid or occasional) reviewers, whereas four (26.7%) identified themselves as non-reviewers. The five men were all reviewers, so all non-reviewers were female (i.e., 40% of the female participants were non-reviewers). With regard to the age groups: in the Generation X group, eight out of ten people were reviewers (80%), whereas in the Generation Y group, three out of five people were reviewers (60%). An overview of all participants and their characteristics can be found in Appendix D (Table 2). Participants' identities are protected through the use of pseudonyms; no real names are used in this paper.

For the current research, it did not matter what type of restaurant review platform the participants used. The literature on affordances has shown that features of the websites themselves can influence people's decision to review or not to review. Since the current research tries to tap into the motivations behind these decisions, it was beneficial to include all UGC platforms. In terms of platform usage, all participants used TripAdvisor. Most participants also used TheFork, and some discussed using Google Reviews. Other platforms that participants discussed were Yelp, BookDinners, ViaMichelin, Facebook, Deliveroo, and De Buik van Rotterdam. In terms of writing reviews, most participants used TripAdvisor, IENS, and/or Google Reviews.


3.3 Data analysis

The data were analyzed using an abductive approach to discourse analysis. The goal of abductive analysis is to produce “new hypotheses and theories based upon surprising research evidence” (Timmermans & Tavory, 2012, p.170). This data-driven analytical approach allowed the researcher to analyze the interviews without being limited to any pre-existing categories or theories, while still being guided by broad knowledge of different theories (Charmaz, 2014; Emerson, Fretz, & Shaw, 1995; Timmermans & Tavory, 2012). That is, the data themselves showed which categories and processes were important (Charmaz, 2014; Emerson, Fretz, & Shaw, 1995). However, the researcher did read theory before conducting the interviews and during the coding process, which made it possible to generate “novel theoretical insights that reframe empirical findings in contrast to existing theories” (Timmermans & Tavory, 2012, p.174).

The use of discourse analysis allowed the researcher to analyze the interview data qualitatively in order to understand the underlying meanings. An abductive approach to this type of analysis made sense for the current research because individual motivations cannot be studied directly. One therefore has to rely on interviewees' interpretations of their own behavior (i.e., their discourse: what and how people communicated about their motivations). People's motivations (i.e., the underlying meaning of the discourse) could thus only be found within the interview data; they can only be studied indirectly, by interpreting the participants' discourse. Moreover, the researcher did not want to steer the participants' answers in a particular direction and therefore chose not to be limited to any pre-existing theories.

The interviews were transcribed and coded in order to find patterns, contradictions, and commonalities among the interviews (Charmaz, 2014). The transcription and coding of each interview started as soon as possible after the interview was conducted. Because the researcher used an abductive approach to data analysis, the coding process started with line-by-line coding of the Dutch interview transcriptions in the qualitative data analysis software program Atlas.ti. In doing so, the researcher tried to capture as many ideas and themes as possible, while staying as close to the data as possible, as a way of opening up new avenues of inquiry (Charmaz, 2014; Emerson, Fretz, & Shaw, 1995). Line-by-line coding prescribes coding as much as possible, which facilitates the later search for patterns. In accordance with Charmaz's (2014) advice, the researcher tried to preserve actions and keep the codes active. This initial round of coding yielded 1700 codes (i.e., an average of 113.33 codes per interview). Next, all of the initial codes were categorized into broad categories by hand. These categories were all data-driven, found by the researcher while (re-)reading the codes. The codes were then categorized again digitally, double-checking whether the right categorizations had been applied in the first round. The relevant categories created from the initial codes were ‘motivation to review’ (287 initial codes), ‘hypothetical motivation to review’ (89 initial codes), ‘motivation not to review’ (195 initial codes), ‘hypothetical motivation not to review’ (18 initial codes), ‘goal of reviewing’ (30 initial codes), and ‘emotion in reviewing and reflection process’ (70 initial codes). The category ‘hypothetical motivation to review’ consisted of imagined motivations discussed by non-reviewers (imagining reasons why others could review, or imagining factors that could persuade them to review) and by reviewers (imagining other reviewers' motivations that differ from their own). In the case of ‘hypothetical motivation not to review’, the focused codes reflected answers by reviewers imagining why others might not review, and by non-reviewers imagining reasons not to review other than their own.

Subsequently, for each category, the initial codes were summarized and merged into focused codes. That is, the researcher searched for patterns, contradictions, and commonalities among the initial codes, as well as for more general categories the initial codes could be an instance of (Emerson, Fretz, & Shaw, 1995). Such themes were written down when noticed, with all corresponding initial codes listed next to them. When finalizing this process for a category, the researcher critically assessed the focused codes again, often rewriting and restructuring them until they accurately captured the data set. Because the interviews were conducted in Dutch, the initial codes were Dutch as well; the focused codes, however, were created in English, since they would be used in the analytical process of writing up the results. An overview of all of the focused codes found in the data can be found in Appendix E (Table 3). These focused codes encompassed almost all relevant initial codes found in the data. After compiling the focused codes for all of the categories, they were compared both within and between categories, again searching for similarities and differences (Charmaz, 2014). This analysis was the starting point of the results section.


Chapter 4. Three understandings of review platforms

The current empirical research found a wealth of different motives for people to write, or not to write, reviews. This chapter first discusses the results of the interviews before turning to their connections to the theoretical framework presented in Chapter 2.

Some of the motivations we found were driven by self-interest (in accordance with the collective action literature), but many others were driven by interests that cannot be related as easily to the collective action literature. The complexity and variety of the findings were not without structure, however. Indeed, in the narratives of the participants, one can identify three ways of making sense of restaurant review platforms. These understandings comprise what the respondents perceive as the goal or function of the platform, and therefore also influence the way they use the platform. The three types of understandings we found in the current study are: (1) the consumer-focused understanding, (2) the restaurant-focused understanding, and (3) the self-focused understanding. These understandings have practical implications for people's behaviors and therefore enable us to understand why people do or do not write restaurant reviews (i.e., what motivates them).

Both reviewers and non-reviewers can have the same type of understanding of a platform. However, some non-reviewers started the interview by stating that they simply did not think about reviewing as a thing to do. They never consciously decided to review or not to review; it just never crossed their minds. While this is not a specific factor that motivates people to abstain from reviewing, it is one of the most important things to highlight when asking why people do not review restaurants (according to the interview data of the current study). Marie, an older woman who never reviews restaurants, summarized this sentiment as follows:

I just never thought about it. It’s never been a conscious decision, but I just never thought about the possibility of writing a review about that dinner. … No, no, I’ve just never really consciously considered it.

Thus, people seemingly forget. Reviewing is not “on top of their mind” or “part of their thought process”, as stated by Juliette, a millennial who never reviews. However, most respondents were able to give a more detailed answer to the question of why they do (not) review restaurants. The motivations they discussed can be structured into three different understandings of restaurant review platforms.
