
THE MODUSER: FORTIFYING THE COMMENT SPACE THROUGH ACTIVE FILTERING

THE DEVELOPMENT OF COMMENT MODERATION: FROM COLLABORATIVE FILTERING TO PROFESSIONAL MODERATORS AND MODUSERS

Master’s Thesis New Media

Name: Lisanne Verena Blomberg
Student number: 10199438

MA Media Studies, New Media and Digital Culture

Thesis Supervisor: prof. dr. R.A. (Richard) Rogers
Second Reader: dr. T.J. (Tim) Highfield


ABSTRACT

Online comment spaces are places where users can discuss almost anything with each other. These discussions often contain trolling and bad comments with foul language, which is a reason for many users to no longer read the comments. Comment moderation, however, prevents the worst comments from appearing online: it fortifies the comment space. Moderation can happen in many ways, and this thesis gives a schematic overview of the different techniques, based on literature and research. The largest part of moderation is done by professionals; a large platform such as Facebook, for example, employs many moderators. Moderation can be done before or after something is posted: pre- or post-moderation. Furthermore, the development of moderation has also been researched for this thesis. This resulted in a second overview, a timeline of the changing roles of users and moderators. Moderation has evolved from a collaborative filtering system in the early days of the internet to professional content moderation today. This timeline, together with historical internet archive research on the platforms Slashdot and Reddit, whose users play an important role in the moderation process, leads to an important finding: users are gaining more awareness of and influence in the moderation process. While moderation is still mainly operated in the invisible back-end of platforms, it is changing towards a new type of moderating in which the user has more influence: the moduser. Users will become an important part of the moderation process by actively filtering the comments they read. The large online platforms will see the benefits of the wisdom of the crowds, and together they can create a safer online space.

Keywords: moduser, active filtering, comment space, comment moderation, content moderation, fortifying, internet archive


TABLE OF CONTENTS

1. Introduction 4

2. Theoretical Framework 7

2.1 Comment Space 7

2.2 Content and Comment Moderation 13

2.3 Pre- and Post-Moderation 17

3. Methodology: Researching Platforms and their Moderation Techniques 20

4. Case Study: Moderation of the Comment Space Over the Years 22
4.1 Characteristics of the Comment Space: Front- and Back-End 22
4.2 Development of (Moderating) the Comment Space 24

4.3 Development of Comment Moderation Rules 30

4.3.1 Slashdot and Reddit: Platforms with Modusers 30

4.3.2 Newspapers: Old Media Moderating Online 35

5. Conclusions: Professional Moderators and Modusers in the Comment Space 38


1. INTRODUCTION

The internet started as a place where 'geeks' could discuss different topics and where information on many different things could be found. For someone born in 1992, the early days of the internet are fascinating and seem utopian. The internet was a place where everyone could do and say as they pleased, and online forums existed where users helped each other out. When online access became standard, and more users from all around the world joined, the tone of the online discussions changed too. The online comment space is a feature of the internet that has been part of it since the beginning, ranging from discussions on forums and reviews of items users bought to social platforms like Twitter. These comment spaces, in all their different shapes and sizes, and with all their different types of communities, are almost all moderated in some way. This thesis will focus on how moderation of comments posted by users has changed and developed over the years, and on the many different forms of moderation that exist.

Nowadays, the online comment space seems to have turned into a place where users always disagree, and which is generally filled with negativity. During large events – for example, the election of the president of the USA in 2016 – there seems to be no room for nuance in the online world, only two parties with completely different points of view standing directly opposite each other, disagreeing on everything. Users are getting more accustomed to the online negativity, which seems to seep into the vocabulary of everyday life, with politicians addressing each other in less formal ways and harsher language uttered on the streets. In March 2018 this happened when menswear brand Suitsupply launched an advertising campaign portraying two kissing men. Not only were the online comments very harsh and discriminatory, the actual posters were also smeared with swastikas and offensive slurs (Garnier).

However, users do not even see the worst parts of their online timelines, feeds, and comment spaces, because companies prevent users from seeing them; this is called moderation. Moderating the comments can be done in different ways and is a way of fortifying the discussion against bad and illegal content. Platforms differ in how open they are about their policies; many are quite secretive, and users are often unaware of these rules. (Inter)national laws and the ideology of the platforms influence this list of rules, which is visible to and used by employees of the platform, as only a small part of moderation is done by algorithms while the majority is done by employees. There are more and more vacancies to be filled by these so-called content moderators, who decide what content can stay online and what gets deleted. This decision can be taken before or after the comment is posted, but moderation happens mostly without users being aware of it. Sarah Roberts, an assistant professor in information studies, wrote her PhD about content moderation jobs in 2010; she calls this commercial content moderation. Call centers in countries such as India and the Philippines have started providing this service for companies, as almost every platform that hosts user generated content needs it. A platform like Facebook, with 2 billion users, needs many moderators, but it is very secretive about this. Its workers are obliged to sign a declaration stating that they will not reveal the details of their job to anyone, they are not allowed to have their phones in the workspace, and the windows are blinded to prevent anyone outside from gathering information (Punsmann, Kreling et al.). In early 2018 the Dutch newspaper De Volkskrant interviewed (former) moderators from the Netherlands working for Facebook, whose job consists of full-time reviewing of potentially illegal content, which frequently leads to psychological problems. While the employer offers some help in processing the content they have seen, this does not prevent many employees from quitting after just a few months of work. The examples mentioned in the De Volkskrant article describe the kind of intense and violent content the moderators are coping with, which leads many of them to find an outlet in drinking and doing drugs. These problems could be why Facebook is so secretive about the existence of these jobs. However, when asked for a reaction, the company does claim that its workers are looked after very well and get all the help they need. Humans are not equipped to cope with so many violent images all day long. Unfortunately, at the moment algorithms can only catch 10% of the illegal content, so humans are too important to dismiss (Kreling et al.). However, other jobs that entail dealing with images of illegal content, such as police officers having to watch child pornography, require that employees work a maximum of four hours a day instead of full-time.

A noteworthy aspect of online moderation is that it seems to have grown a lot over the years. Globalization gave more people access to the internet, which led to larger online communities and thus more moderation. Other reasons could be that (new) local laws force platforms to be more restrictive, or that feedback from users asks for a safer online space. These options will be reviewed in chapter 4 to see how moderation has developed since the early days of the internet. Besides looking at the development, I will research and compare multiple platforms to see what different forms of comment spaces and moderation exist. This part of the research will start at the level of reviewing (front-end) interface characteristics and will move to (back-end) moderation specifics. All these findings are put into two diagrams: the first represents an overview of the different moderation techniques at the front- and back-end of platforms, and the second gives a timeline of the development of comment moderation divided into decades up until the 2020s. So, I research not only the past development, but also the direction in which moderation may develop.

In this thesis I will use the terms content moderation and comment moderation alongside each other. Content moderation covers all the user generated content that is moderated, which includes videos, photos and text. Comment moderation is part of content moderation, but it is solely focused on the comments. This part in particular is interesting because the comment section is the place where users encounter others with opposite opinions, which often leads to heated debates with offensive comments. In my personal experience the comment space has become a lot more negative and emotional. Many users try to avoid the comment space, but according to Joseph Reagle – an assistant professor in communication studies who wrote about the comment space in Reading the Comments (2015) – they should still read it to engage in the discussion. Furthermore, the comment space cannot be ignored as it has become part of everyday life. It has become a standard way of communicating for a whole generation that has grown up in this digital world (126).

First, I will present the theoretical framework and give general background information on the comment space and on how extensive this space is. Then I will explain why and how moderation of comments and content is needed. Moreover, I will dig into how this moderation has developed over the years. Then I will study what characteristics different types of platforms have and determine how moderation is done and how this has developed. These findings will be substantiated by the previously mentioned diagrams. Lastly, changes in the way platforms and newspapers moderate will also be determined by looking at the About Us and FAQ pages of the platforms Slashdot and Reddit in the Internet Archive Wayback Machine, and by analyzing changes in newspaper comment spaces. Slashdot and Reddit are interesting platforms to look at because their users are important in the moderation process, thus providing the research with another form of moderation. Newspapers show another side: old media struggling with their online role and with moderation techniques and rules that change constantly. It has been a trend for newspapers to turn off their online comment spaces, but they have also been trying to introduce new systems to provide a good discussion platform where journalists and readers can find each other and where moderating should be easier.

The development of the moderation of online comment spaces shows that users at first had a lot of influence in the moderation process, and that they collaboratively formed a filter that was responsible for the content in the online communities. When these communities grew, moderation became more difficult, and moderating was professionalized, leading to the creation of actual jobs as moderators. These professional moderators have been hidden for many years, and most users were unaware of their existence. Over the past year or so, a lot has been written about the moderators, and it has become clear that this situation is no longer tenable because of their working conditions. Perhaps users should gain back some of the responsibilities, as over the years some newly introduced features in the comment space have already made this happen a little. The platforms Reddit and Slashdot are good examples of how users can have more moderation power. Moduser is the new term for these users who also moderate.

Facebook will be discussed a lot in this thesis, because it is an important platform with over 2 billion users worldwide. Furthermore, since the beginning of 2018 more and more Facebook moderators have been sharing their stories with journalists, breaking the secrecy that has been surrounding their jobs. All articles that contain stories of these moderators are useful, because Facebook itself did not, until recently, share its moderation rules and has made the moderators sign contracts that forbid them from talking to outsiders. It seems to be an escalating movement, where users are becoming more aware of how they are being moderated and what they can do themselves. In the early days of the internet, harsh comments or illegal content would be deleted (or perhaps only judged) by users who did this voluntarily, but back then users felt more connected to a community, and thus felt more obligated to keep it a clean and positive place. Naturally, it was not only positive, and discussions could take on a negative tone, but the 'wisdom of the crowds' played an important part in keeping it mostly legal and non-violent. Nowadays large companies like Google and Facebook form an enormous part of the internet, therefore know a lot about their users, and are for a large part responsible for how the web is moderated. Due to recent articles about Facebook's moderators, the personal data it collects from its users, and the General Data Protection Regulation (GDPR) that took effect in the European Union on 25 May 2018, users are becoming more aware of their rights and of the moderation that is taking place at the back-end of platforms. This new awareness could mean that more users slowly become modusers and that the platforms, users and moderators can work together on creating a good online community.

2. THEORETICAL FRAMEWORK

2.1 COMMENT SPACE

The online comment space includes a lot of different forms of commentary. Commentary has existed for centuries; the difference in this century is that now anyone can comment on anything online. The comment space does not have a great reputation, and some people suffer because of it, whether it is a professional critic who needs to find his own style and be exclusive to compete with amateur online reviewers, or writers who struggle with fake book reviews, or anyone who has ever been in an online argument with a provocative troll. Online discussions have become harsher, which is partly because of the rise of online advertisements and consumerism. Discussing items online was more innocent in the early days of the internet, when online advertisement was still non-existent (Reagle 71). The internet seems to be the biggest instigator of the hatred, being a communication network that makes "very friendly people furious reactors [and] rabid haters that constantly want to attest their aversion" (Van Gijssel).

The online comment space regularly contains a stream of bad comments and foul language from trolls and other commentators, causing readers to try to avoid this section more (Reagle 2). However, the thrill of wanting to read the comments still exists. The most important function of online comments, in a nutshell, is to inform others and share thoughts for their benefit (38). Still, websites have difficulty maintaining a comment section, because it is hard to moderate; therefore some websites no longer have this feature. Websites of newspapers used to have very popular comment spaces: in 2008, 75% of online news media in the United States had them (Ruiz et al. 464), but since then that percentage has been declining. The development of comment spaces on newspaper websites will be further discussed and researched in paragraph 4.3.2. Smaller groups create better discussions; the paradox is that because of these good debates the small groups become more popular and thus their number of users grows, which leads to less fruitful discussions (10).

The key to having a civil comment section is that users need to feel like they know the whole community they are interacting with. Furthermore, Joseph Reagle states that websites need to fortify their comment space to establish this civility, meaning "to make the system more resistant to abuse" (7). This can be done, for example, by requiring users to complete an extra task before commenting (such as retyping a sentence), using moderators who decide which comments are the best, forcing users to use their real names (as Facebook aims to do), or asking users for a membership fee (websites of newspapers tend to use this system, where only subscribers can comment on the articles). All in all, these solutions are not flawless, and they are mostly aimed at keeping the community small. Extra tasks are easily "hacked" by paying workers in third world countries to perform them; moderators also need to be moderated, or else they are not neutral; using real names has proven to help decrease the number of comments, because it brings along an important part of etiquette: do not say anything that you would not say in front of your mother (9); and a membership fee often comes with login codes that can be shared with multiple people, so it can lead to even more users and an even larger community (10). All these solutions fall within the fortifying spectrum; however, for this concept the term 'moderating' will be used more frequently in this thesis.

Keeping a friendly comment space turns out to be difficult, as comment spaces always grow because comments always beget other comments (26). Reviews are one particular part of the comment space and have been around for a very long time. These reviewing comments also receive comments, for example in the form of shares, likes and +1's, which Reagle states can be perceived as a "personal offering that reflects enthusiasms and experiences for the betterment of other people" (33). There are many reviews online; anyone can write a review about anything. Therefore, the role of critics has become somewhat insignificant and is buried under the clutter of users longing for attention with their often badly written reviews. So, if critics want to stand out from the clutter they should pay more attention to their style and writing technique (Burn). When anything has been purchased online, buyers receive an e-mail that asks them to leave a review; the market deems feedback very important. It is difficult to persuade consumers to leave a review, so Amazon, for example, rewards loyal reviewers by introducing them to its Vine program, wherein reviewing customers receive free items under the condition that they write about them (Reagle 70/71). Fake reviews and comment manipulation are a consequence of everyone being able to write a review (57); this is preceded by a long history of writers and academics who would, centuries ago, review their own work under a pseudonym (49). Nowadays this phenomenon occurs in the form of trolls who write many of the same nonsense reviews about a wide variety of items. Research shows "that between 10 to 30 percent of online reviews are fake" (50). Five-star reviews are openly traded online, which happens frequently for websites such as Amazon and eBay, and which is difficult to prevent (Box and Croker). Fake comments and reviews will appear anywhere anything can be commented on. This type of trolling can also be used by higher institutions: some countries pay civilians to post positive comments about national businesses; for example, Thai soldiers get paid to write positive comments about the monarchy (Reagle 53). It goes beyond a little trolling; it can extend to influencing national affairs. International affairs can be affected too: during the US presidential election in 2016, Russian trolls created fake social media accounts and posted ads that were against Hillary Clinton and pro Donald Trump (Bump).

The fake comment writers can be divided into three types: the fakers who praise their own work or trash others, the makers who will do this for a fee, and the takers who pay for these services (Reagle 50). Fake likes and upvotes can also be bought in large numbers for platforms such as Facebook, Twitter and YouTube, thus creating more attention for the accounts that have more upvotes (56). Platforms try to prevent these fake comments from having much effect. For instance, Amazon created a new system where well-rated reviews appear at the top, so readers do not have to read through all the average reviews (57). As said before, websites are trying to fortify their comment space and protect it from clutter, for example by implementing a CAPTCHA that can only be solved by humans and not by computers; but this measure is no longer completely reliable, because bots keep learning and are increasingly able to solve them, and because of the use of cheap labor in third world countries (58). Bad reviews can cause someone's business to shut down, so preventing this is very important to them; therefore positive reviewers can get paid hundreds of dollars (65). Several social media platforms now offer sponsored and promoted comments, which are more clearly distinguished from regular comments and target and reach potential consumers (68). Overall, the market around reviewing has made the tone of online discussions and comments more negative (71). Nonetheless, these fake reviews are not merely negative; the humorous review is a more harmless form (152). Online, a lively exchange of the funniest parodic reviews of various items can be found (159).

Besides reviews written about various items, people receive a lot of feedback too (74). This personal feedback intends to help others achieve a goal and in doing so distinguishes itself from other types of feedback, which usually help others decide whether to buy a product or not (88). This evaluation is delivered in different forms and for different reasons. A teacher explained to Reagle that asking for anonymous online feedback after a course results in many rude responses (78). In online writing the tone of voice is often harder to read, so personal feedback can come across as harsher than the writer intended. The anonymity, combined with the inability to read the right tone of voice, creates (typical) online comments that can come across as mean and offensive. Some online communities share harsher feedback than others; for example, feedback among coders is traditionally quite unforgiving (87/88). Perhaps this online community has been influential in setting the overall tone of voice of online discussions, because it is an outspoken community that has been online since the beginning of the web.

Online feedback can often be read by many people; if it is focused on one speaker or writer, it can be perceived as shameful for the artist to receive this public feedback (90). Hierarchy hardly exists online, so a student can speak to his professor in a very different tone than he would face-to-face, and not seeing someone's reaction while talking to them means that social cues and context are missed (94). Emojis could be helpful in getting the message across; however, symbols can be interpreted differently, generating confusion (156). Psychologists have researched anonymous behavior in real life; anonymity causes people to behave differently, to lose the feeling of being oneself and to disregard social norms. Online, many users are anonymous, in the early days of the internet more so than now, but even when users operate under their real identity their behavior can still be interpreted as offensive. This could be because they are used to being surrounded by mean comments in this area or because they live in an online bubble with like-minded people (95). Another phenomenon in bad online behavior is trolls, in this case meaning the users whose comments' sole purpose is to 'trigger' other users into responding and arguing. Besides the self-loss felt while being anonymous, online comment spaces can also induce a shift from identifying as oneself to identifying with a whole group: besides abandoning one's own norms and values, users take on the expectations of that group (97). Trolls too are just part of a group and hide behind the façade that this 'culture' has raised. The comment space (and the internet as a whole) seems to be an inviting place for trolling, as every online visitor is familiar with this behavior, accepts it to some extent and knows and follows the rule "don't feed the trolls" (99). Haters (sometimes called flamers) are a next step beyond trolls. Besides making fun of and triggering users, they target people and harass them online, which can lead to real consequences. All of this is covered under the 'freedom of speech' blanket, which gives them the right to say anything. This right differs somewhat globally and causes issues with bringing these trolls and haters to justice. This issue will be discussed more extensively in the next paragraph about content moderation.

Trolls' comments are purely provocative. Research shows that many of these kinds of users are male and come from a masochistic online culture (106). A platform like 4chan has many of these types of visitors; its random page is notorious for the gross content it shows. The content on the front page of 4chan often stays there for just a couple of seconds, so being as provocative as possible can lead to more screen time on 4chan (107). The platform can be considered an important influence in trolling becoming meaner and grosser over the last couple of years. Trolling can cause online discussions to escalate, with two small parties starting the discussion and two large groups standing face to face at the end. These discussions devolve into one-liners taken from the original discussion, with context getting lost and offensive comments flying across. The loss of context is typical for these online discussions, wherein both groups only respond to what has been said recently and do not take the time to read the whole thread. They do not hesitate to join the debate even when they are not very well informed (108).

As much as users are told not to feed the trolls, they still engage with them and get upset. Reagle proposes not to completely ignore trolls (or comment spaces that include them), but to step up and help victims of trolls and haters, "whether emotionally, financially, or legally" (119). People with different points of view or those who are part of a minority are often bullied online, but they can also find like-minded people and a safe place. The comment space has a schizophrenic character in being a comforting place while also being mean-spirited (123). Just ignoring the comment space and its "bad" side would be too easy, especially since there is a whole generation growing up that does not know a world without this online comment space. For these teens, reacting and posting online is part of everyday life and not something to consider quitting; part of their identity comes from their online behavior (123/124). All the words, photos and videos users upload form their online identity and can be controlled (125/126). Comments from others on this posted content help shape their self-image and form their identity (127).

Besides the risk of having to deal with negative comments and trolls, there is another reason for no longer wanting to engage in the comment space: the lack of quality and quantity in the words of the comments (149). Facebook comments consisting merely of tagging others are quite standard, but a solution could be that the social media platform offers to show the top comments, which can still consist of arguments exceeding just a couple of words. In this case readers lose the chronological order in which the comments were posted, so following a discussion (or reading it back) out of order will be difficult; however, Facebook's option of reacting directly to a comment helps create a chronological discussion underneath one comment (161). Furthermore, confusion in discussions can be caused by assumptions the debaters make about each other. The image that is created in one's mind can turn out to be false, for example when they expect someone to be a man while it is a woman (164). Lack of quality and quantity can also be seen when popular artists post anything and often receive "First!" as one of their first comments, which does not lead to a productive discussion.

The website on which the comment space is located is important; in other parts of this thesis the differences between these websites will be explained. Twitter is such a platform: at first the creators were unsure what it would be, and whether messages would be private or open (160). Then it became the microblogging medium it is today, with open messages and still containing a built-in option to tweet other users privately. As said, a lot can go wrong in the comment space: lacking in value, too little space to say everything, out of context, written to the wrong person, use of wrong symbols, all in all causing confusion and thus letting a discussion escalate. The comment space can function as a reflection of the worse side of humanity (172), and websites are struggling with how to moderate these comments. When Twitter CEO Jack Dorsey sent out a tweet in the beginning of March 2018 asking users to help in making the public conversation healthier and more civil, he realized the impact Twitter could have on the real world. A month prior to Dorsey's tweet, Facebook CEO Mark Zuckerberg also used his own platform to inform users of the many changes made to help improve the civility in the comment space, all to prevent fake news and hate speech. It is striking that Zuckerberg posted all these improvements, considering that since the start of Facebook he had been more positive about the role of his platform in this whole debate; now he is starting to acknowledge the negative side. Even though Facebook has often been accused of violating privacy and other issues, since 2018 it seems Zuckerberg can no longer deny these accusations. This reached a high (or low) point on April 10th, when Zuckerberg had to testify before the US House of Representatives (because the company Cambridge Analytica had been able to collect data about millions of unknowing Facebook users), where he had to answer questions concerning privacy issues and what sort of data Facebook collects from its users to sell to third parties. Even though users know that Facebook collects their personal data, they stay on the platform. It seems like an ever-growing platform (monster) that cannot be escaped (Mims). Being one of the platforms with the most users worldwide brings along its difficulties: content placed by so many users demands a lot of moderation. Facebook is very secretive about this, but recent articles and a documentary (The Cleaners, 2018) have revealed that it has many moderators working full time reviewing potentially illegal content. The platform received a lot of questions regarding its moderation techniques, to which it responded with a statement in its newsroom saying that it had extended the community standards with its internal guidelines (Bickert). The community standards consist of six headings divided into twenty-two subheadings, ranging from 'Violence and criminal behavior' to 'Safety' and 'Offensive content', using a lot of phrasing such as "striving to [protect]" and "in an attempt to [prevent online damage]" (facebook.com/communitystandards). All in all, it is an extensive list of what they aim their platform to be, but it is (probably) not the same as the specific list of rules their moderators are using.

The online comment space could be considered the modern public sphere, as described by Jürgen Habermas, because it is a place for democratic discussions (Ruiz et al. 464). Commenters may not always be aware of the visibility of their posts, and act as if they are in a closed space. Privacy options can restrict how many users can see one's posts, but when commenting on a public article these restrictions are not in effect. In the early days of the internet it started as a sort of utopian place where everyone was equal and where users could be who they wanted to be; now it is overwhelmed with negative comments and violent content. Sarah Roberts argues that if the work of content moderators were more widely known, the comment space could become more of the utopian place it was once supposed to be (Roberts2 9). The internet used to be free of control, and every user could be themselves, but it has been controlled by governments and large corporations more and more over the years (Dahlberg 617). Not only are the platforms controlled by these institutions, they are also used by them to establish a certain identity and control their image (DePaula and Dincelli). Online comment spaces are used to contribute to the critical public sphere, "by reproducing normative conditions for public opinion formation" (Rasmussen 20). The comment space is crowded and has many types of users; more users means more moderation is needed, which can be done in different ways that will be explored in the next paragraph.

2.2 CONTENT AND COMMENT MODERATION

The comment space is a broad term and a place that can be found everywhere on the web. To keep this place safe from harsh, mean and/or racist comments, moderation is needed. It is a long-lasting problem of the online world: how to control this space? Once it was a limitless place, and now it is controlled in many ways, of which content moderation is one (Roberts1 68). Typically, these controlling settings are invisible to users, as is the case for most forms of moderation (68/69). Online control is not the same as classic forms of control such as a government, because online power is decentralized and part of a distributed network without a classic form of hierarchy (Galloway 83/85). Even without centralized control, governments can still collect data about their citizens and use what they post online to control them (Roberts1 71). Users can feel free when roaming the internet, unaware of being watched, controlled and moderated behind the scenes (76).

It is striking how large parts are flooded with negativity: fake reviews, trolls and simply too many users in one comment space; therefore the comment space needs to be moderated. Moderation means keeping platforms free of illegal content, which can be executed in various ways. The negativity in the comment space is often already filtered, meaning users do not even see the worst comments and other types of content. Platforms try to keep their space clean in different ways, because some content could be considered illegal (in some countries) and the platforms therefore have to act upon it. They tackle this problem in different manners. On a website such as Reddit, subreddits (the many pages with different topics Reddit is divided into) often have a list of rules about how the redditors (users of Reddit) should behave. Larger subreddits try to prevent haters and trolls from ruining and cluttering the front page this way. Since it was founded in 1997, Slashdot has been an important website for getting the latest news about developments in the tech world. It too has an extensive list of rules regarding the comments (Lampe and Resnick 543, Slashdot.org/faq). Facebook has rules about what type of content may be posted too, but handles it differently. Apparently, it has vacancies for "Customer Care Agents" in different languages situated in Berlin (Rogers2), which seem to be hidden vacancies aiming to find content moderators.

This is a scattered and secret business, in which different platforms, companies and brands take part (Roberts2 1). It is kept secret because it should appear as if the content concerned naturally disappeared (2). Facebook has many employees who moderate content in Germany, because a German law requires companies to remove violent content and hate speech (Punsmann), and because renting a large workplace is cheaper there than in other cities. Similarly, it has workers in low-wage countries such as the Philippines and India. Content that is illegal is of a very shocking and violent nature. Being a content moderator means looking at all this content and deciding whether it should be posted or not; it is a job that requires secrecy and is quite depressing according to those who have been moderating (Roberts1 96).

So, moderating content is a job with parts that can only be done manually and that require a lot of time, as algorithms cannot judge the way humans can (Roberts1 13). These jobs are hidden, because the websites do not want the end results that users see to be soiled by this idea, and they do not want to reveal their way of working to users who want to troll or post provocative content (15). This is not a new phenomenon; these types of jobs have always been hidden away, like the factory workers of the nineteenth century (89, Postigo 222). Furthermore, the internet and what is posted on it tend to be seen as something almost divine that is not controlled by humans, so users forget that all this can be controlled in some way too (Roberts1 34). Since 2017 more and more has been written and exposed about these jobs, starting with The Guardian posting segments from Facebook's secret policy that the moderation team has to follow (Hopkins). Still, this back side remains quite hidden, and more articles that expose these policies could make users more aware of the human side at the back-end.

Users do encounter negative sides of the web, which could remind them of the human side of it. Online hatred and trolling are frequently encountered; when this is frowned upon, trolls hide behind the right of free speech (Reagle 102). However, this is not a right that justifies discrimination or hatred. There have been court cases in some countries where online hatred has been up for debate, leading to the conviction of the online bullies. For example, online threats addressed to Dutch politician Sylvana Simons were condemned in court in the spring of 2017 (Bahara). Facebook is secretive about its content moderation rules (or was, up until recently, as mentioned in the previous paragraph) and about how the moderating happens (Krause and Grassegger). The rules for this process are constantly changing, following laws and learning on the job (Punsmann). Platforms change their rules frequently and national laws differ too, which makes moderating a difficult task to execute.

Moderating has become an important job with many vacancies to fill. The reasons people become content moderators differ. For example, journalist Burcu Gültekin Punsmann wrote an article about the three months she worked at the content moderation department for Facebook in Berlin, which she calls her "[t]hree months in hell". At first it intrigued her: her colleagues came from all over the world and she would work for an important social media platform. When Sarah Roberts discovered there were workers who moderated online content, she was immediately fascinated by this and researched the phenomenon further. She interviewed content moderators for her research on commercial content moderation. The interviewees had been scouted straight out of college to become content moderators, lured in by the idea of working for an important tech company in Silicon Valley, because they were mostly graduates of the faculty of humanities and job opportunities were scarce (Roberts1 95). Besides being a professional moderator or doing it for research, some do it as a task that is required for their job. For example, Alan Taylor – a photo editor for The Atlantic – moderated the comments that were posted underneath his own articles. Up until the beginning of 2018 readers could post comments, and Taylor pre-moderated all of the comments posted on his photo blogs. For ten years this task had been a huge part of his life, and he had to moderate comments almost every day. Since readers get upset when their comment takes a long time to get published, this task induces a lot of pressure. So, there are different reasons to become an online moderator: some just want to make the online world a safer place, others are in dire need of a job, but the personal mental outcomes for the workers are hardly ever good.

Moderating is a tough job that goes on 24/7 (Roberts2 2), and it is difficult for workers to establish for themselves what it is they do, because it can be seen either as violating freedom of speech or as protecting innocent users from seeing violent content (Punsmann). Punsmann eventually quit after three months, because she noticed that she was getting used to all the horrible violent and criminal content, and it had affected her worldview, making her more worried about her loved ones. Furthermore, the job only requires the content to be deleted; there is almost never any intervention with the users who posted it, while Punsmann often felt the urge to get them help from a psychologist. She wished she could communicate with the persons posting the cruelty, but the job merely asked her to delete the posts and every now and then send a notification to a poster. After three months Punsmann realized that society is in need of education, which would require a collective effort. The secrecy surrounding this job does not benefit this purpose; still, she is positive about the fact that the companies are taking their responsibility (even though this is prohibited according to some laws) and in the end have a positive effect on the online comment space. Also, Facebook does not want its newsfeed covered in violence, because that creates an environment wherein users are too intimidated to post anything. As the company profits from users communicating with each other and looking at advertisements (Krause and Grassegger), this could be viewed as the most important reason for Facebook to have moderators.

At Facebook the content is only investigated (and moderated) when it is flagged by other users; these reports are handled by the moderators (Punsmann). The workers judge the content, and the judgement can vary depending on their own moral codes and values (Roberts2 1). Punsmann estimates that about 6.5 million posts are flagged weekly. All content is always published first, because it has to be flagged before it can be reviewed (2). Besides Facebook, this flagging method (or reporting) is used on a lot of popular websites that contain user generated content, such as YouTube, Twitter and Instagram (Crawford and Gillespie 411). These websites use this technique because the enormous amount of content that is placed on their pages leaves them no choice but to hand some of the responsibility over to their users (412). Furthermore, this tool gives the websites a reason to shift the blame to their community when illegal content does show up. The flagging system offers few ways to express the nature of the complaint (413), ranging from one simple click to filling in a whole explanation of the reason to flag (414). After flagging, users sometimes get feedback on the reception of their report, but more often they have to trust that the flag has been received and queued to be reviewed later (416). Once the content has been removed, there are often no signs left indicating it has ever been there (a removed tweet, for example, will be visible as "deleted tweet" when it has received reactions); moreover, the reason for removal is not published. Flagging can also be misused – as a prank, to generate attention, or by gamifying it and letting tons of users flag for one purpose – undermining the status of the flag (420). Gamifying the system leads to the flag gaining a political status, as it is being used to collectively get across a message, for example when an anti-Muslim group flags content posted by Muslims (421). These actions undermine the free speech zone of the internet, and it is not even fully visible when this happens (Roberts1 43). Transparency in the process of deleting content could help platforms and users be more trustworthy towards each other, and prevent things like gamification of the system from happening (Crawford and Gillespie 423). If this shows anything, it is how difficult it is to have a working moderating system, especially at platforms with many users.
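To make the mechanics described above concrete, the following is a minimal sketch in Python, with invented names; it does not represent Facebook's or any other platform's actual system. It models the flag-and-review workflow: posts stay online once published, user reports accumulate, and a post is queued for human review once a flag threshold is reached.

```python
from collections import deque


class FlagQueue:
    """Toy model of post-moderation by flagging: publish first, review when reported."""

    def __init__(self, threshold: int = 1):
        self.threshold = threshold      # number of reports before a post is queued
        self.flag_counts = {}           # post_id -> number of reports received
        self.review_queue = deque()     # post_ids awaiting a human moderator

    def flag(self, post_id: str) -> None:
        """Register one user report; queue the post once the threshold is reached."""
        self.flag_counts[post_id] = self.flag_counts.get(post_id, 0) + 1
        if self.flag_counts[post_id] == self.threshold:
            self.review_queue.append(post_id)

    def next_for_review(self):
        """Hand the oldest flagged post to a human moderator, or None if the queue is empty."""
        return self.review_queue.popleft() if self.review_queue else None
```

A purely count-based threshold like this also illustrates why the flag can be gamified: coordinated mass-flagging pushes content into the review queue regardless of what that content actually contains.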

So, moderation is largely professionalized, and users only play a small part in this process. Users could gain more responsibility in the process of moderation, and the online platforms could have more faith in them. Paragraph 4.3.1 discusses two platforms where the users are more involved in moderating, and these users seem to feel responsible for the online community and moderate it according to their standards. The involvement of users in moderating the comment space mostly does not extend beyond the option to flag or report, or to up- and downvote comments; they are hardly involved in the whole moderation process. Instead of perceiving users as passive readers, they should be seen as active and engaged ones who feel responsible for the conversation, not just by reacting to each other but by helping to moderate too. The situation is reminiscent of the active filtering audience concept from television studies. In the early years of television, viewers were thought of as passive readers who absorbed the images that came out during the final phase of the television production line. However, in the 1970s this idea started to change when researchers thought of viewers as being equipped with an active filtering system (Morley 46). An online comment can be perceived as a media text that is being read by users, just as television programs are media texts. Cultural theorist Stuart Hall extended this theory by stating three different reading positions: subordinate, dominant and negotiated, which shows that reading and receiving images is never passive or straightforward. Professor of communications David Morley modified these positions and stated that reading is always an interaction between discourses of the text and discourses of the reader (Storey 11). Overall, the role of users could be reconsidered, and they could become more important in the moderation process, leading to a new media version of active filtering. I propose the term moduser as an interpretation of this theory, as a new way to address the actively moderating user.

2.3 PRE- AND POST-MODERATION

Comment moderation falls within the broader category of content moderation. In the beginning of the internet, comment spaces existed and gatekeeping was not yet necessary. Moderation was mostly done by volunteers (Roberts4 4). While certain websites grew, their comment spaces grew too. The popularity and easy access (lack of fortifying) caused a flood of comments, including the rise of the troll space. Websites started using different techniques to moderate the comments, such as letting users moderate each other and forcing users to subscribe and enter their personal data. While this works for websites such as Reddit and Slashdot, an enormous website like Facebook takes drastic measures to keep being able to moderate; it uses cheap labor for commercial content moderation. Sarah Roberts sees a link between this cheap labor and e-waste: the former disposes of digital trash, while the latter is the hardware waste that is piling up in the countries where a lot of the cheap labor takes place (2). In both cases companies close their eyes and leave their (digital) waste in the hands of someone else (7). The way the world is being flooded with garbage is a bigger problem than the digital garbage, but both kinds of work have negative physical and mental influences on workers and are inflicted on the same parts of the world.

Content that needs moderation reaches moderators in some way. Half of the moderating is done when the comments are already posted; this is called post-moderating or editing (Goodman and Cherubini). Users have a role in this too: they can flag each other's comments, so it is partly based on the wisdom of the crowds. The other half of the moderating is done before the content is posted: the pre-moderating technique, or gatekeeping. Other websites use a combination of pre- and post-moderation (Goodman and Cherubini, Ruiz et al. 464). Comment spaces of newspaper websites from around the world conduct it differently – or they used to, as many have turned off the option to comment – and accept that they are going to have to read a lot of comments in pre-moderation. They also turn it off for some articles to keep it manageable (Ruiz et al. 473), or comments are put in a hidden queue and judged before being published (Taylor).
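As an illustration of the distinction, a minimal sketch follows (in Python, with invented function names; it is not taken from any of the platforms discussed): pre-moderation holds a comment back until a moderator approves it, while post-moderation publishes immediately and only sends flagged comments to a review queue afterwards.

```python
from dataclasses import dataclass


@dataclass
class Comment:
    author: str
    text: str
    published: bool = False
    flags: int = 0


def pre_moderate(comment: Comment, approve) -> Comment:
    """Gatekeeping: the comment only goes online after a moderator approves it."""
    if approve(comment):
        comment.published = True
    return comment


def post_moderate(comment: Comment, review_queue: list) -> Comment:
    """Editing: the comment goes online immediately; flagged comments are reviewed afterwards."""
    comment.published = True
    if comment.flags > 0:
        review_queue.append(comment)
    return comment
```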

Newspapers' social media pages have taken over the discussion role, which is another reason for them to turn off their comment space. Dutch news site NU.nl claimed this was one of its reasons for turning it off. NUjij (translation: NOWyou), its infamous comment platform, was often flooded with negativity, which could also be a reason that it was shut down. However, after being offline for a year and a half, NU.nl has brought its comment space back, because it now has new software that should make it easier for its journalists to react to the readers and easier to moderate (NU.nl). NUjij will use the software Talk from The Coral Project, which was founded by newspapers such as The Washington Post and The New York Times. Their main goal is to focus on the content and to encourage a good discussion. The more users and comments a website has, the more difficult moderating obviously is, but the trend of shutting down the comment space seems to be turning around now that software such as Talk is being introduced. Moderating is a task done by both humans and algorithms. It can be very time-consuming: with the gatekeeping technique every comment has to be judged, while with the editing technique only comments that have been flagged need to be judged; in any case, both techniques can lead to a large number of comments needing to be moderated.

Newspapers have been struggling to find good ways to edit since they first went online. They had been in a gatekeeping position for years, determining the content of the news, but online many voices could be heard, and the newspapers lost their leading role (Bruns1 32). Axel Bruns noticed in an article in 2003 how gatekeeping shifted to gatewatching, because websites linked to other sources of news instead of publishing it themselves, watching the gates rather than keeping them shut (34). Bruns thought this shift was typical for the internet back then (43), but fifteen years later he revised his theory and updated it to reflect all the changes that have occurred over these years. Mostly the arrival of social media has changed a lot: journalists and users alike are gatewatchers on these platforms (Bruns2 8), and Twitter especially has become a platform used by journalists to find news and to share it with users (10). Users decide what news to share and act as gatewatchers (Ihlebæk 474). Newspapers have not lost their important place in providing news; they have adapted to the online world by including features such as liveblogging about events and by being present on social media. How involved users are and how they are being moderated differs, but on Twitter most journalists can easily be contacted. Being on multiple channels changes the way to moderate. On their own websites newspapers can make their own moderation decisions, but on websites such as Twitter this is not in their control, which can cause a decrease in the amount of moderation (476).

In 2013 the World Editors Forum (Goodman and Cherubini) reported on the best ways for online newspapers to moderate comments. This is one of the only reports in which this topic has been researched on such a large global scale: the authors interviewed journalists from 104 newspapers spread across 63 countries. Some main findings of this research are that the use of pre- and post-moderation is quite evenly divided among these newspapers. After post-moderation, on average 11% of the comments are deleted. Deletion happens for different reasons, such as being generally offensive, containing hate speech or bad language, or being spam. The (il)legality of the comments is a grey area; moderators are often unaware of the rules (or the lack thereof). Moderators do not feel like they are limiting users' freedom of speech, as they believe there are other places online where everything can be said when not discussing a specific news article. There is no consensus about the preferability of real names versus anonymity: the former supposedly leads to a higher-quality conversation, while the latter offers a voice to commenters who might not be able to speak freely under their own name. Facebook has a real identity policy, and a lot of the newspapers have a Facebook page too. These are moderated less than their own websites, because this policy is seen as a way to make the discussions less controversial. Journalists may participate in discussions about their own articles, which can lead to a better quality of discussion, but some see it as a space solely for the readers to discuss amongst each other.

So, moderating is a job mostly done by humans; a small part of it can be done by algorithms (Roberts1 13), but it is a human process with human decisions and values (92). Sometimes, algorithms can help the moderators, for example when scanning the comments for banned words (14). Platforms that use hashtags, like Twitter and Instagram, use these too to look for banned words. The hashtags can help make the moderation work easier, and platforms can also block certain hashtags that could be dangerous; for example, when someone searches for pro eating disorder tags, the platform asks the user if they need help, instead of the search resulting in content about that topic (Gerrard 6). It is difficult to determine which hashtags should be blocked and to what extent this needs to be executed. It means that a clear set of rules needs to be created, and deciding upon this list is a difficult first step. Not all rules are as apparent as, for example, having to ban content that is of an abusive or racist nature. Local laws influence these rules too; for example, in the United States online platforms are not responsible for user generated content. As long as the platform is unaware of posted illegal content, it is not obligated to act upon it, which is why platforms can leave it up to their users with a flagging or reporting system (Crawford and Gillespie 419). However, platforms are quite free from the law when creating their set of rules, so the governing comes mostly from the platforms themselves (Gillespie, Suzor). Who decides what happens with flagged material and who judges it differs per platform; most platforms have two layers of judgement, and if the first layer decides that the content falls within a grey area it moves up and is judged by a content policy team (Crawford and Gillespie 419). The decisions are often made by the platforms themselves, forming a sort of monarchical system with one ruler (422). Other platforms, such as Wikipedia, leave room for discussion among their users (421). However, this system is shadowed by users desiring more to be on top than to strive for openness and shared knowledge (Van Dijck and Nieborg 862). Despite this negative side, it does offer a solution for the way the flagging system can be gamified, because users can discuss this openly and articulate their concerns (Crawford and Gillespie 423).
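The kind of algorithmic assistance mentioned at the start of this paragraph can be pictured with a small sketch (Python, with invented word lists and function name; the real systems and their rule sets are not public): a simple scan for banned words or blocked hashtags that routes matching comments to human moderators rather than deciding on its own.

```python
import re

# Placeholder lists: in practice these are set per platform and per jurisdiction.
BANNED_WORDS = {"examplebannedword"}
BLOCKED_HASHTAGS = {"#exampleblockedtag"}


def needs_human_review(comment: str) -> bool:
    """Return True if the comment contains a banned word or a blocked hashtag.

    The scan only pre-filters; the actual judgement stays with human moderators.
    """
    tokens = set(re.findall(r"#?\w+", comment.lower()))
    hashtags = {t for t in tokens if t.startswith("#")}
    return bool((tokens & BANNED_WORDS) or (hashtags & BLOCKED_HASHTAGS))
```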

3. METHODOLOGY: RESEARCHING PLATFORMS AND THEIR MODERATION TECHNIQUES

The internet used to be a place of free speech, where everything could be said and done; it was perceived as a place that could be utopian (Roberts2 9). Over the years this idea of a utopia changed into the messy internet as it is today, where the right of freedom of speech has largely been taken over and abused by trolls and flamers. Websites such as Facebook and Twitter are trying different ways to moderate the content, and other websites have completely turned off the comment space. The online comment spaces have been developed to cope better with trolls, fortifying the space against them. Over the years this fortifying has been done in different ways, which I will put into context and support with an overview in chapter 4.

To research how comment moderation has been done over the years, I will look into web history. While most websites give a rough outline of how moderation is currently done (to be found on their FAQ or About pages), it is more difficult to see exactly how this works per website and how it has changed and developed over the years. However, literature provides insight into how moderating was done previously, which helps in creating a general overview, and the current moderating systems can be added to this overview. Therefore, the research and the overview will be based on articles from those days that discuss moderation techniques, as well as on general ways to moderate content. The research will not focus too much on specific websites, only where possible and necessary. From these sources I will specify different moderating systems and give them fitting terms and concepts.


It is quite difficult to research the history of the web, as websites often do not have an archive and changes occur very fast. Niels Brügger – professor in internet studies and digital humanities – researched the history of Facebook. He studied the functions and elements that are available to users and focused on what happened in the development of Facebook and on whether there is a general development of the platform as a media text. He focused solely on these elements, because researching the history of Facebook as a company, or other aspects, is almost impossible without access to its internal documents. The Internet Archive does not hold any recordings of large companies such as Facebook and Twitter, because they have been excluded (Brügger, Rogers1 363). Information provided by Facebook itself can be used, but this information is not confirmed by other sources. So, researching (the history of) such a large website as Facebook has its restrictions. Researching comment moderation proves difficult too, as Sarah Roberts found out, because it is such a secretive business that it is hard to find out what the work entails (Roberts1 58).

When researching a digital feature, the difficulty is that it is always changing, so available literature is quickly outdated, although it gives a useful insight into how things used to be. Brügger argues that it is important to archive certain websites, because they change so fast and will later offer a glimpse of what users thought was important at that moment in time. This awareness is growing: at first, archives digitized their (analogue) collections by publishing them online, and now the internet itself is being archived too. Thanks to initiatives such as The Internet Archive, material that only exists online will also be able to be researched in the future (Rogers3 160). National libraries controlled by the government are archiving websites for the same reason; choosing which websites to archive proves difficult, but it is a developing field (kb.nl). Web history is important because when periods such as the 1990s and 2000s are researched without looking at the web, a crucial part of society will be overlooked (Milligan 80).

So, literature will function as a base for researching the changes in online content moderation, but I will also use the previously mentioned Internet Archive. This organization has been archiving websites for over twenty years, and these archived pages can be found in its Wayback Machine (Internet Archive). The Internet Archive Wayback Machine Link Ripper, a tool from the Digital Methods Initiative, will help in collecting a list of URLs from a certain timeframe. The URLs will be from the FAQ or About pages of two platforms with prominent moderation techniques: Slashdot and Reddit. Furthermore, the plugin Grab Them All for the Mozilla Firefox browser will be used to convert all the URLs into screenshots. Collecting screenshots allows one to see the changes of these websites at a glance, because the tool arranges them chronologically in a folder, which provides a sort of time-lapse of the website (Rogers3 161). The Wayback Machine originally began as a tool to prevent the web from having dead links, so that archived versions of pages could still be found there (162). A lot of websites have a brief lifespan, with estimates ranging from 44 to 75 or 100 days (Van Bussel 7), which shows how ephemeral the web is, and archiving makes it less so (Rogers3 164).
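As an illustration of this workflow, the sketch below approximates the two steps in Python; it is not the Link Ripper or Grab Them All themselves, but my own assumption of how the same result could be obtained with the Wayback Machine's public CDX API and a headless Firefox driven by Selenium. The target page, years and output folder are placeholders.

```python
import json
import os
import urllib.request

from selenium import webdriver
from selenium.webdriver.firefox.options import Options

CDX_API = "https://web.archive.org/cdx/search/cdx"
TARGET = "slashdot.org/faq"          # placeholder: page whose history we want
FROM_YEAR, TO_YEAR = "2002", "2018"  # placeholder timeframe
OUT_DIR = "snapshots"

def collect_snapshot_urls(target: str, from_year: str, to_year: str) -> list[str]:
    """Query the Wayback Machine CDX API and build one snapshot URL per capture."""
    query = (f"{CDX_API}?url={target}&from={from_year}&to={to_year}"
             "&output=json&collapse=digest")
    with urllib.request.urlopen(query) as response:
        rows = json.load(response)
    header, captures = rows[0], rows[1:]
    ts, orig = header.index("timestamp"), header.index("original")
    return [f"https://web.archive.org/web/{row[ts]}/{row[orig]}" for row in captures]

def screenshot_all(urls: list[str], out_dir: str) -> None:
    """Save one PNG per snapshot, named by timestamp so the files sort chronologically."""
    os.makedirs(out_dir, exist_ok=True)
    options = Options()
    options.add_argument("--headless")
    driver = webdriver.Firefox(options=options)
    try:
        for url in urls:
            timestamp = url.split("/web/")[1].split("/")[0]
            driver.get(url)
            driver.save_screenshot(os.path.join(out_dir, f"{timestamp}.png"))
    finally:
        driver.quit()

if __name__ == "__main__":
    screenshot_all(collect_snapshot_urls(TARGET, FROM_YEAR, TO_YEAR), OUT_DIR)
```

Browsing the resulting folder of timestamp-named screenshots gives the same kind of time-lapse of a page's development that the combination of the Link Ripper and Grab Them All produces.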

This research focuses more on how dynamic the web is and on what changes happen over the years, rather than on websites that no longer exist. Furthermore, focusing on one or two websites in particular offers the opportunity to tell a larger story from the changes to these pages (166). Overall, archiving the web offers many different ways to research it, and it can tell stories of the web itself or of other cultural and political events and histories (168).

So, in this research the Wayback Machine will function as a way to tell the story of certain platforms. Large platforms such as Facebook and Twitter are not archived, but others such as Slashdot and Reddit are. These are interesting platforms to research, because they let users do a large part of the moderation and they are very open about their rules. By looking at their pages concerning the moderation rules, I hoped to find a story of how and why these have changed over the years. Moreover, I looked at the websites and comment spaces of newspapers and at how they have changed or arranged their moderation rules. This part is also based on literature, since their moderation rules cannot be found online, although they have published some articles about the topic themselves. Online newspapers are interesting to look into, considering that many of these websites have turned off their comment space and have let social media take over the role of discussion leader. However, there seems to be a turnaround now that newspapers are creating new commenting systems that make it easier for journalists to interact with users, and that perhaps make the moderation easier too. The goal is to find out what changes have been made over the years, and whether the role of users and readers has become more important.

4. CASE STUDY: MODERATION OF THE COMMENT SPACE OVER THE YEARS

4.1 CHARACTERISTICS OF THE COMMENT SPACE: FRONT- AND BACK-END

Commenting and giving feedback has a long history prior to the internet; many analogue ways of commenting have shifted towards the internet, such as commenting on newspaper articles. Where before the newspaper editors could function as gatekeepers to keep rude and harsh comments out, online moderating turned out to be more difficult. In recent years newspaper websites, such as that of the New York Times, turned off the option to comment online and now offer only limited options. Users must register, not all articles are open for comments, others only for twenty-four hours, and there is a list of fifteen items that users must take into consideration before commenting (help.nytimes.com). A large part of this list reminds users that moderating is human work and thus very difficult to carry out, as discussed in paragraphs 2.2 and 2.3. Having many commenting users and not enough manpower to moderate all comments is a recurring problem for websites.


Since the early 2000s a couple of large websites have been dominating the internet, some of which are social platforms that have also dominated the way comments are moderated. Facebook is the largest social platform: in over a decade the website has developed from a local into a global network, extending its reach onto other websites by incorporating plug-ins and buying other large platforms such as Instagram and WhatsApp, and becoming the enormous machine collecting and selling information about its users that it is now known as. The other big player in the field is Google, which owns many other websites where users can comment, such as the blogging website Blogger, video platform YouTube and its own social medium Google+. A smaller player is Yahoo!, which also owns several websites, such as the photo sharing platforms Flickr and Tumblr. Other platforms stand on their own: Twitter, Reddit and Amazon, to name a few larger ones. All these platforms have comment spaces in different shapes and sizes, all have changed in some form since their beginning, and the importance of the comment space differs per website. To see to what extent the moderation has changed, I make a distinction between front-end and back-end moderation, representing the first and second layer of moderation. The back-end layer is the one that users do not see and that has been discussed in the theoretical framework (see paragraphs 2.2 and 2.3). In short, this is the layer best known as the place where the comment moderators operate. The front-end layer is what users do get to see; this category is quite broad and extends from users entering personal data to log in to the platform, to age restrictions, to the characteristics of the comment space itself. Therefore, I will discuss the most important differences in this space across different platforms.

In the beginning of Facebook (2004), commenting was not its focus; at its base it is a platform for connecting with other people (Brügger). The platform has seen a lot of changes in its interface and other elements, but one of its most famous features is the like-button (2009). Since the introduction of this button, users have asked for a counterpart in the form of a dislike-button, but Facebook has not yet given in to these demands. Facebook is an ever-changing platform: at the beginning of May 2018 it introduced the new feature of down- and upvoting comments, which appears in some discussions and still seems to be in a testing phase before being broadly launched. The aim of this new feature is to stop bad comments from influencing the comment space too much, by letting users anonymously downvote (or upvote) comments (Withers). Zuckerberg claims that Facebook wants to have a positive atmosphere, but as it turns out, just being able to 'like' does not make for a merely positive environment. However, Joseph Reagle states that all these forms of likes reflect enthusiasm by users and serve the betterment of other people (33). In 2017, Facebook extended the like-button with more reactions, adding emotions such as 'love' and 'sad' next to the famous thumb indicating a 'like', giving users the possibility to no longer react only positively with the buttons. It seems Facebook's original approach has successfully influenced other platforms. Twitter (2006) uses the same sort of upvoting system, where users can comment, 'heart' or retweet but not downvote, as do photo platforms Instagram (2010), Flickr (2004) and Tumblr (2007). Networking platform LinkedIn (2003) uses the same upvoting system and has the option to share a post, just as Facebook has. Facebook has been influential towards other platforms, but it has also taken inspiration from others in introducing the option to down- or upvote comments.

Other platforms have a system that allows users not only to upvote comments and reactions, but to downvote them too. Reddit is an example of a website that employs this system, where the total number of points for one comment or post is determined by all down- and upvotes. Furthermore, these points are reflected in users' 'karma', which tells other users how positive or simply how active they are on the platform; Reddit claims the karma number is not formed by simply adding up all the upvotes, but by its secret algorithm. This up- and downvoting system resembles that of websites such as Amazon, where users write reviews and can judge each other's reviews on how helpful they were. Users on both platforms get to keep the points they received and can establish a certain status this way. Just as the amount of karma decides how visible a comment is in a thread on Reddit, on Amazon popular reviews are shown on top and popular reviewers are rewarded by Amazon (see paragraph 2.1). Slashdot is a platform that also uses karma as an indication for users of how positively their comments have been received. Furthermore, users can change the threshold, meaning that they can set the minimum score of the comments they want to see, ranging from -1 to +5. This means that Slashdot users can decide what sort of comments they want to see: if they set the threshold at its lowest, every comment will show, including irrelevant, bad or troll comments, while setting it high will show only excellent comments (Slashdot.org/faq).
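To make this score-and-threshold mechanism concrete, the sketch below shows in Python, as a simplified illustration and not Slashdot's or Reddit's actual code, how a comment's score can be derived from up- and downvotes and how a reader-chosen threshold then filters which comments are displayed. The example comments and the clamping of scores to the -1 to +5 range are assumptions made for the illustration.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    upvotes: int
    downvotes: int

    @property
    def score(self) -> int:
        # Simplified scoring: net votes, clamped to the visible range of -1 to +5.
        return max(-1, min(5, self.upvotes - self.downvotes))

def visible_comments(comments: list[Comment], threshold: int) -> list[Comment]:
    """Return only the comments at or above the reader's chosen threshold."""
    return [c for c in comments if c.score >= threshold]

if __name__ == "__main__":
    thread = [
        Comment("Insightful analysis of the article", upvotes=7, downvotes=1),
        Comment("First!!!", upvotes=0, downvotes=4),
        Comment("Some nuance to add here", upvotes=3, downvotes=1),
    ]
    # A reader who sets the threshold to +2 no longer sees the troll-like comment.
    for comment in visible_comments(thread, threshold=2):
        print(comment.score, comment.text)
```

The key point of this design is that filtering happens on the reader's side: the low-scored comments are not deleted, they are merely hidden for readers who choose a higher threshold.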

Where most back-end moderation systems are invisible to users, Reddit has a moderation system that falls in between the front- and back-end. Every subreddit has its own moderator(s), who have the power to delete comments. Reddit is very upfront about this system, and redditors can message the moderators to ask them anything. Furthermore, bots play an important role on Reddit. Bots are software applications that run automated scripts and perform tasks. Because of the open system of the platform, users can code them to perform many different tasks in the comment space. The moderators are therefore often not just users, but also bots. So, the back-end moderation can be divided into visible and invisible systems. Paragraph 4.3.1 says more about the development of moderation on Reddit and Slashdot.
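As an impression of what such a moderation bot might look like, the sketch below uses the third-party Python library PRAW (the Python Reddit API Wrapper) to remove comments containing banned words from a subreddit. This is a hypothetical example rather than an actual Reddit bot: it assumes the bot account has moderator rights on the subreddit, and the credentials, subreddit name and word list are placeholders.

```python
import praw

# Placeholder credentials for a Reddit 'script' app; the bot account
# is assumed to be a moderator of the target subreddit.
reddit = praw.Reddit(
    client_id="CLIENT_ID",
    client_secret="CLIENT_SECRET",
    username="moderation-bot",
    password="PASSWORD",
    user_agent="comment-moderation-sketch by u/moderation-bot",
)

BANNED_WORDS = {"slur1", "slur2"}  # hypothetical list; real lists are curated per subreddit

def run(subreddit_name: str) -> None:
    """Watch new comments in the subreddit and remove those containing banned words."""
    subreddit = reddit.subreddit(subreddit_name)
    for comment in subreddit.stream.comments(skip_existing=True):
        body = comment.body.lower()
        if any(word in body for word in BANNED_WORDS):
            comment.mod.remove()  # requires moderator privileges on the subreddit
            print(f"Removed comment {comment.id} by {comment.author}")

if __name__ == "__main__":
    run("example_subreddit")
```

Bots of this kind sit precisely in the space between front- and back-end: they are written and run by users, yet perform the removal work that is otherwise hidden in the back-end.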

4.2 DEVELOPMENT OF (MODERATING) THE COMMENT SPACE

Platforms copy characteristics and features of other platforms, so differences in the comment sections can be found in the details. They may have different interfaces, but the way comments are rated, upvoted, flagged et cetera, shows much overlap. As stated before, the number of comments
