
Rumor Has It…

Epistemological questioning in relation to new forms of information circulation

Stefanie Voortman


Rumor Has It…

Epistemological questioning in relation to new forms of information circulation

Special thanks to: Dr. M.D. Tuters, Tommaso Eli, Gerrit Krijnen and the Digital Methods Initiative (University of Amsterdam).

Master Thesis New Media and Digital Culture

Title: Rumor Has It: Epistemological questioning in relation to new forms of information circulation

Name student: Stefanie Voortman
Student number: 10432779
Institution: Department of Media Studies, University of Amsterdam
Name Supervisor: Dr. M.D. Tuters
Second Reader: Dr. E.J.T. Weltevrede
Number of words: 22,586
Date of completion: 26 June 2017


Table of Contents

Introduction

Methodology

Theoretical Framework

Case Study: Mapping Fake News Hotspots on Facebook

Chapter 1: Beyond Fake News

1.1. New Regimes of Truth and Personalized Information Universes

1.1.1. New Regimes of Truth

1.1.2. The Dangers of Personalized Filtering

1.1.3. Critique of the filter bubble argument

1.2. Could it be us?

1.3. In Sum

Chapter 2: Facebook Entities as New Actors In Spreading Political Rumor

Sub Question A: What are the hotspots of fake news on Facebook?

Sub Question B: What are the agendas of these hotspots; how do they enroll political fake news within their own modes of existence?

Sub Question C: How are the hotspots related to each other; are there hotspot clusters?

Implications

Discussion

Conclusion


Introduction

Recently, the phenomenon of fake news has received much attention in academic writing as well as in journalistic articles. The term can refer to different things: to its use by politicians to dismiss stories they do not like, or to the “stories that are provably false, have enormous traction in the culture, and are consumed by millions of people” (Pelley, “How fake news becomes a popular, trending topic”).

While this thesis will not be about fake news per se, it will offer some understanding as to why it is increasingly difficult to distil one ‘truth’, why people believe unfounded stories, and what influence online platforms exert in this process. The goal of this thesis is to move beyond fake news and look into the nature of truth, the formation of beliefs, and the influence of internet sources.

The Pew Research Center study “The Modern News Consumer” found that even though people still follow national, local and international news, they increasingly do so via online platforms; especially the younger generations (50% of 18-29-year-olds, 49% of 30-49-year-olds). Nearly twice as many adults (38%) get their news online as via newspapers, and nearly half of these use social media (Mitchell et al.). For this reason, it is important to understand the epistemological effect of these news distributors; especially in a time full of fake news and political rumor.

One should note that the statistics from these studies are based on the population of the United States. This also forms the spatial boundary of this thesis, and in particular of the case study, whose focal point is the United States presidential election of 2016. Not only because the statistics do not include other geographical locations, but also because the 2016 United States election forms a great study ground for the spreading of misinformation (information that is incorrect but is spread without the intent to mislead) and disinformation (the deliberate spreading of false information with the intention to mislead; often executed by governmental organizations) (Sheridan Libraries). During this event, a lot of fake news was spread and it was not always clear that this information was unfounded, which may have influenced the results of the election. Misinformation and disinformation have always been around, but during the 2016 presidential election the phenomenon increased in size because of the circulation tactics of online platforms (Drobnic Holan). This makes it the perfect case study to research fake news in relation to misinformation, disinformation, and truth as a variable.

Furthermore, this thesis focuses mainly on Facebook because, due to its size, reach and power, it is effectively the most influential social media platform (Read). It is a powerful source of information for an immense number of people. The Pew Research Center stated in its article “News Use Across Social Media Platforms 2016” that a majority of American adults (62%) use social media as a news source. Of all social media platforms, Facebook is by far the largest, and 44% of the U.S. population relies on the platform for news. Furthermore, 66% of its users use the platform as a news source.

Moreover, according to this research, most users (64%) receive news from only one social media platform, which is mostly Facebook. An important factor of this platform, among others, is that users no longer actively pull information but get information pushed by the algorithms on Facebook; therefore, Facebook has a real influence on the information users receive (Gottfried and Shearer).

Therefore, the following research question will guide this thesis: what is the epistemological effect of Facebook and other online information sources on what is considered the truth?

Over the course of 2016, Facebook received a lot of critique on its role as distributor of information. Company founder and CEO Mark Zuckerberg has repeatedly stated that Facebook is not a media company but a technology company, and should, therefore, have no responsibility for its content (Gibbs). Facebook has been accused of, among other things, influencing the 2016 U.S. election and banning iconic war images.

It began on 3 May 2016, when gadget and technology blog Gizmodo published the article “Want to Know What Facebook Really Thinks of Journalists? Here's What Happened When It Hired Some” about a secretive Facebook project on Facebook’s trending news. It turned out that the algorithms were not up to the task of managing the news feed and therefore, Facebook hired human editors to support and train the algorithm (Nunez). According to Facebook, the company needed the human editors because the algorithms brought up around 40% noise due to their focus purely on engagement (Osofsky; Read). Facebook hired human editors to oversee the process, but presumably did not disclose this so as not to lose its role as an unbiased pipeline for news (Isaac; Nunez, “Want to Know What Facebook Really Thinks of Journalists?”).

However, human editors were told to suppress conservative news, which made the operation similar to a traditional newsroom. Several employees stated that they worked with an ‘injection tool’ to make certain topics trending while removing others (Nunez, “Former Facebook Workers: We Routinely Suppressed Conservative News”). Facebook turned out to be biased and to reflect certain institutional values while pretending to be objective.

In August 2016, the company again made headlines, this time because it fired all its human editors and returned to a solely algorithm-driven trending feed (Facebook). This led to another series of scandals within the first weekend. Trending topics included a false story about Fox News’ Megyn Kelly, a controversial piece about a four-letter word attack on right-wing commentator Ann Coulter, and an article about a video of a man masturbating with a McDonald’s sandwich (Thielman). Human curators had been instructed to stick to a list of trusted media sources; the algorithms were not. This created a great environment for fake news to flourish.

In November, after Donald Trump had won the U.S. presidential election, the social media giant again made headlines, this time because of two other issues concerning Facebook as an infrastructure for news: fake news and filter bubbles. American news feeds got more polarized into either red or blue, there were a lot of unknown media sources, and there was the (unexpected) victory of Donald Trump, which raised the question of whether social media has the real power to change history in unpredictable ways (Manjoo).


Over the course of 2016, Facebook users learned that the pope had endorsed Trump, that a Democratic operative was murdered when he agreed to testify against Hillary Clinton, and that Bill Clinton raped a 13-year-old girl. These are just a few examples of untrue news stories that circulated on the social media platform. Vox journalist Timothy B. Lee explains in his article “Facebook's fake news problem, explained” that stories like this thrive on Facebook because they have a high level of engagement, which the trending algorithms prioritize. Since there is no longer human supervision on which topics or sources go trending, fake news has free rein.

Journalist Max Read explains that many users were served emotionally charged ‘news’ stories about the candidates because the algorithm understood from experience that users wanted to read these types of stories. Even though a lot of stories were fake or parodies, their placement in the news feed and appearance made them indistinguishable from the rest of the feed. The problem snowballed when users started clicking the stories, which in combination with the engagement-driven algorithms resulted in a lot more exposure (Read).

The problem of fake news got bigger after Facebook fired its human editors, because trending stories no longer had to pass human judgment.

The other critique was that of filter bubbles: the idea that personalization algorithms cause an internet environment in which only the interests and opinions of the user get reflected. In the article “Bursting the Facebook Bubble”, Wong, Levin, and Solon describe asking a group of left- and right-oriented voters to swap filter bubbles to see how people would react (Wong, Levin, and Solon). A Republican supporter stated that he had never seen positive ‘stuff’ about Hillary Clinton before he entered the opposite filter bubble experiment. Several participants said that they did seek out opposing viewpoints on Facebook but that their feed generally provided a rather one-sided experience. Participants also felt frustrated by the incorrectness of the stories of ‘the other side’. What was surprising was that two participants found the conservative Facebook feed actually a lot more diverse than the liberal one, which would be logical if Facebook indeed suppressed conservative news. At the end of the experiment, some participants were reinforced in their viewpoint by seeing the viewpoint of ‘the other’, while others wanted to open up the conversation with the ‘other side’.


This demonstrates that Facebook is not just a pipeline for news but has a real influence on which information users receive. In this thesis, I will make use of several academic theories to explain how there can be multiple truths operationalized at once, how people form beliefs, and what the influence of online information is in this process. Furthermore, by making use of a case study, I will offer some understanding as to who shares fake news and why they do so.


Methodology

As I have explained above, this is not a thesis on fake news. Rather, it will move beyond this issue and explore a deeper problem: how did the truth become something that is debatable? There are more complex processes at work that make factors such as true or false no longer certain, and circulation plays a big part. This thesis consists of two parts which together offer a better understanding of the situation around fake news. My first chapter provides an exploration of different theories on how and why different people hold different ideas of the truth, and what the influence of online platforms is. The second chapter consists of an empirical study on fake news hotspots on Facebook, which finds its origins in the publication “A Field Guide to Fake News” by the Public Data Lab, to which I have contributed. However, as I will explain below, in this thesis it is taken a step further and becomes embedded within the offered framework.

Theoretical Framework

As explained above, this part offers an exploration of different theories of how truth became fragmented within societies. The first segment of this part will combine the idea of ‘regimes of truth’ by philosopher Michel Foucault with the idea of ‘regimes of post-truth’ by political communication scholar Jayson Harsin. This offers some insight into how truth is normally operationalized by institutions, such as the traditional (news) media, and, in contrast, what the influence of new forms of circulation (internet sources) is on this system. According to the project “A Field Guide to Fake News”, fake news is not simply a type of content that circulates online. Rather, it is the nature of the circulation and reception which makes something into fake news. This means that it cannot be grasped without taking into account the infrastructure that facilitates it (9).

Building on this idea, the next segment will deal with one of the most distinctive features of these new forms of news circulation: personalized information. As is argued by internet activist Eli Pariser in The Filter Bubble: What the Internet is Hiding from You (2011), personalization algorithms on platforms such as Facebook filter information on the basis of the user's online identity. The dangers of personalization are also the theme of Republic.com (2001) by legal scholar Cass Sunstein. Combining these theories with the idea of regimes of (post-)truth allows me to build an argument on how the personalized circulation of content could, in theory, lead to extremism, fragmentation and personalized regimes of truth.

However, the idea of the filter bubble is not without its critics, and therefore this thesis would not be complete without providing an overview of these objections; especially because there are other factors at play that make us debate the truth.

The last segment of this chapter returns to the idea of epistemology and specifically focuses on how humans form beliefs. It covers theories on forming beliefs and the limitations that personalization imposes. For the latter, I make use of the article “Justified Belief in a Digital Age: On the Epistemic Implications of Secret Internet Technology” by researchers on the subject of social epistemology and new media, Boaz Miller and Isaac Record. One shall see that the human process of forming beliefs is not as rational as one might think; rather, it is a more emotional process, as is argued by social and cultural psychologist Jonathan Haidt in his work The Righteous Mind: Why Good People are Divided by Politics and Religion (2012). Next to this, I will make use of the work The Internet of Us: Knowing More and Understanding Less in the Age of Big Data (2016) by epistemological philosopher Michael P. Lynch, which deals with the implications of internet sources for understanding rather than knowing.


Case Study: Mapping Fake News Hotspots on Facebook

The second part of this thesis zooms in on the U.S. presidential elections of 2016; an event that makes it possible to track fake news, as a symptom, in the real world. As I noted in the introduction, the U.S. election was an event during which fake news, misinformation, and disinformation were heavily circulated. Fake news travels extremely fast when it is embedded within a platform (Public Data Lab 9). In order to fully understand the explosive rise of fake news, it makes sense to track the actors that have ‘contaminated’ the network by releasing fake news content on it, and what their motives for doing so are.

In this case study, the entities that have shared political fake news content on Facebook are tracked. Facebook has not just become nearly ubiquitous in the online experience of (American) internet users, but it has also centralized the consumption of news. The platform is used by more than 200 million people each month; more than 60% of the overall population of the United States (Herrman). In addition, as stated earlier, a lot of people use Facebook to inform themselves. Therefore, Facebook hosts a big part of the political conversation in the United States. This raises the question of how this conversation is formed and what motives entities have to spread misinformation and/or disinformation.

During this case study I will answer the following sub-questions:

A. What are the hotspots of fake news on Facebook?

B. What are the agendas of these hotspots; how do they enroll political fake news within their own modes of existence?

C. Are the hotspots related to each other; are there hotspot clusters?

The issue of the circulation of fake news (on Facebook) is quite a complex one; multiple actors are at play, which together have formed the situation around fake news. In order to investigate this field, I will make use of the method of Actor Network Theory by sociologist and philosopher Bruno Latour, as expressed in Reassembling the Social (2005), in combination with Digital Methods to execute the mapping process. Latour’s work functions as a field guide, or as he calls it himself, a travel guide to map controversies. The aim is not to use a social structure or force as the explanation for the issue, but to flatten the situation and look at the actors at play. The situation around fake news is complex, and the actions of different actors have, whether knowingly or unknowingly, played a part in the reality that formed. By observing the goals and behavior of the different actors in the field, it is possible to form a map of reality that provides insight (Latour, Reassembling the Social, 16-24, 64-67, 87-95).

In order to do so, Latour calls for a different way to study the social. He makes a distinction between two definitions. Typically, the social is understood as something pre-given; the social glue that forms society and would therefore explain everything happening in it. However, according to Latour, the social is not something stable but something that is constantly in motion; the social is constantly (re)assembled by performance. The social is being (re)constructed by different particles in the field, which Latour calls actors; these actors are constantly in movement and therefore, reality is constantly being reassembled (Reassembling the Social, 4-8, 16-25).

The social becomes traceable when actors are in movement; when something is changing or controversial. In order to map reality, one has to trace the actions of different actors in the field (Reassembling the Social, 21-23). One of the instances where it is possible to track the social is when things fall apart. In normal circumstances, we do not pay a lot of attention to the functioning of the Facebook timeline; it is only when things go wrong, as in the aftermath of the explosive rise of fake news, that things become traceable again (Latour, Reassembling the Social, 80-81).

Another important factor in Latour’s method is that objects have agency too. This implies a change in what defines action; anything that influences the social is a part of the map. Not just human-to-human or object-to-object connections matter, but also the connections between human actors and non-human actors (Reassembling the Social, 63-76). Non-human actors such as algorithms can influence the outcome of an action and should therefore be included in the map. In the case of fake news, there is a discussion about the influence of Facebook algorithms on the circulation of content; algorithms have an influence on what content users see on Facebook, and therefore influence the outcome and reality. Personalization algorithms are what Latour calls mediators rather than intermediaries; their input is never a clear indicator of their output. They function as a sort of black box and therefore, their role has to be taken into account.

I operationalize the practice of mapping by making use of Digital Methods, which allow me to track associations online. Web epistemologist and author of Digital Methods, Richard Rogers, explains digital methods as a way to use web data and internet research to study society. Rogers proposes that researchers should follow the logic and the techniques of the medium in order to study it (4-17). However, following these web-native techniques results in some obstacles and limitations. For example, this case study is limited by the logics of Facebook; Facebook functions as a sort of walled garden, where only “friends” (and “friends” of “friends”) can view each other’s profile and post information. Researchers, therefore, do not have access to information about personal accounts (Rogers 24-25).

Digital Methods make it possible to track where fake news “enters” the platform, or in other words, which Facebook actors (groups, pages, and users) posted a fake news article on Facebook that is not Facebook-native. These actors are metaphorically named hotspots, in reference to the geological phenomenon of hotspots, where flows of magma make their way through the earth's crust and spread over the earth’s surface. Resembling this phenomenon, these Facebook entities function as portals that enable material to spread into another domain. To provide an example, the story "Donald Trump Protester Speaks Out: 'I Was Paid $3,500 To Protest Trump's Rally'" claims that the Democratic Party hired people to protest against Donald Trump during his rallies. This story was originally published by the website abcnews.com.co (a site that was intentionally made to look like ABC News) (Silverman, “Here Are 50 Of The Biggest Fake News Hits On Facebook From 2016”). However, it was picked up by 60 Facebook pages that each shared the story with their public. Their public interacted with it, resulting in an overall engagement of 426,972.
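To make this mapping concrete, the sketch below illustrates how such per-story hotspot counts and engagement totals could be computed from a CrowdTangle-style export. It is a minimal sketch only: the file name fake_news_shares.csv and its columns (story_url, page_name, engagement) are hypothetical stand-ins for whatever export format is actually used.

```python
import csv
from collections import defaultdict

# Hypothetical CrowdTangle-style export: one row per Facebook post that shared
# one of the fake news stories on the Buzzfeed list. Assumed columns:
# story_url, page_name, engagement (reactions + comments + shares).
pages_per_story = defaultdict(set)       # story_url -> pages that shared it ("hotspots")
engagement_per_story = defaultdict(int)  # story_url -> summed engagement

with open("fake_news_shares.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        pages_per_story[row["story_url"]].add(row["page_name"])
        engagement_per_story[row["story_url"]] += int(row["engagement"])

# Report hotspot count and total engagement per story, mirroring the example
# above of one story shared by 60 pages with 426,972 total engagement.
for url in sorted(engagement_per_story, key=engagement_per_story.get, reverse=True):
    print(f"{url}: {len(pages_per_story[url])} hotspot pages, "
          f"{engagement_per_story[url]:,} total engagement")
```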

One cannot know for certain whether the users who reacted to this piece believed it to be true. Nonetheless, recent surveys found that fake news headlines on the subject of the election were accepted as true by 75% of American voters (Silverman, “Here Are 50 Of The Biggest Fake News Hits On Facebook From 2016”). If this is true, these 60 hotspots caused roughly 320,195 people to believe the misinformation or disinformation they spread. That is a considerable impact, and Facebook is not the only platform that allows people to share such stories.

As stated before, starting in August 2016, Facebook’s automated trending module created problems in the circulation of content. However, this may also have caused the great rise of fake news on the platform. Journalist Craig Silverman of Buzzfeed found that during the last three months of the U.S. presidential campaign, engagement with fake election news on Facebook grew enormously. During this period, the top-performing election fake news stories outperformed the top news stories from traditional news sources, such as the New York Times, Washington Post, Huffington Post and NBC News. Up until that moment, traditional news sources had been the dominant source of news, but then they were surpassed by fake news and hoaxes (Silverman, “This Analysis Shows How Viral Fake Election News Stories Outperformed Real News On Facebook”).

In this case study, I researched which actors functioned as hotspots for the most successful political fake news stories of 2016, as identified by Buzzfeed in the article “Here Are 50 Of The Biggest Fake News Hits On Facebook From 2016”. Examples from the political section of this list are “Obama Signs Executive Order Banning The Pledge Of Allegiance In Schools Nationwide” from ABCNews.com and “Pope Francis Shocks World, Endorses Donald Trump for President, Releases Statement” from Ending the Fed.

The tool CrowdTangle enables one to view which actors have shared the stories on this list on Facebook, and therefore makes it possible to map the hotspots of fake news on Facebook. According to Herrman in his article “Inside Facebook’s (Totally Insane, Unintentionally Gigantic, Hyperpartisan) Political-Media Machine”, Facebook-native pages on the subject of political news and advocacy played an important part in forming the political conversation on the social media platform. They identify themselves with names such as “Occupy Democrats”, “The Angry Patriot” or “Addicting Info” and often have a large follower base, ranging from a few hundred to millions of followers (Herrman). These Facebook-native pages have, by reason of their follower base, the capacity to share messages with an even larger audience, and thus the ability to influence the political debate on a platform that plays an important role in news distribution.

In this case study, I aim to find out what types of entities function as hotspots on Facebook. Furthermore, I will investigate with what purposes specific Facebook entities acted and whether there are connections and ties between them. This case study will function as a map of Facebook-native actors.
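Sub-question C, on whether the hotspots form clusters, can be pictured as a simple co-sharing network: two pages are linked whenever they shared at least one of the same stories, and connected groups of pages are candidate clusters. The sketch below only illustrates that idea; the page names and story identifiers are invented placeholders, and the actual analysis of ties between hotspots is carried out as described above rather than by this toy code.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical record of which pages shared which fake news stories.
shares = [
    ("Page A", "pope-endorses-trump"),
    ("Page B", "pope-endorses-trump"),
    ("Page B", "obama-bans-pledge"),
    ("Page C", "some-other-story"),
]

stories_per_page = defaultdict(set)
for page, story in shares:
    stories_per_page[page].add(story)

# Connect two pages whenever they shared at least one story in common;
# connected groups of pages are candidate "hotspot clusters".
edges = {
    (a, b)
    for a, b in combinations(sorted(stories_per_page), 2)
    if stories_per_page[a] & stories_per_page[b]
}
print(edges)  # {('Page A', 'Page B')}
```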

From Networks to Modes of Existence

However, as Latour remarks in An Inquiry Into the Modes of Existence: An Anthropology of the Moderns (2013), Actor Network Theory as a method has some limitations. On the one hand, this method allows a researcher to cross the boundaries of multiple domains and track the associations between them, but on the other hand, it fails to qualify these traced associations. It focuses on quantifying and tracking connections instead of figuring out what these connections actually mean. Actor Network Theory allows for flattening differences between actors from different domains; however, one has to take into account that different domains have different modes of existence: different modes have different values and different criteria for verifying what counts as a true statement. In order to fully understand actors, a researcher has to try to understand these values and therefore seek to qualify the associations and tracings within the network. This makes it possible to better capture different experiences, or different modes of existence.

In An Inquiry Into the Modes of Existence, Latour offers three main critiques of tracing actions (Latour, An Inquiry Into the Modes of Existence 157-161):

1. Action is doubled

Firstly, by tracing actions, the researcher might miss the source of action: the actor that made something or someone do something. For example, by making use of digital methods one can detect a Facebook page sharing a piece of fake news. However, it remains unclear who or what made this Facebook page act.


2. Direction of action is uncertain

Secondly, to continue on the first point, by mapping, the researcher will not know what made an actor act. Latour offers the example of a sculptor, in which it is unclear whether the artist authored the sculpture or whether the sculpture motivated the artist to start the work in the first place. To relate this to the previous limitation within the realm of digital methods: digital methods in themselves will not provide an answer as to what motivated certain Facebook pages to share fake news. Additional steps are needed to gain a better understanding.

3. The action is qualified as good or bad

Thirdly, since facts and society are constructed, the action might stem from a judgment about what is true or false; important or dismissible.

Thus, to fully comprehend the situation and the reasoning of Facebook hotspots, a qualitative layer has to be added by researching their identity, goals, motives, and ambitions in their own words, and by finding a way to compare them with respect to their own modes of existence. In order to do so, I used a manual classification method that focused on describing actors in their own language. By applying multiple levels of analysis, I was able to form bigger categories out of these descriptions, each with their own unique characteristics. Together these provide an insight into the field of fake news sharing: to find out whether the Facebook hotspots have malicious intents or are caught up in misinformation like many of us. The case study shows who the actors are and what they do, but in order to fully understand the situation, one cannot go without asking why. This is where the first chapter is useful. It provides context as to why it is possible to believe in these types of unfounded stories and why truth by its nature is not a uniform construct.


Chapter 1: Beyond Fake News

Surrounding the 2016 U.S. Presidential Election, there was a lot of speculation about the role of Facebook as a distributor of news. This chapter is about the effect of such new news actors on the field of truth: do information sources from online platforms create fragmentation in the truth? In this chapter, I look at several theories from different disciplines about the establishment of truth, such as the theory of regimes of truth by philosopher Michel Foucault, which will form the starting point. The entire chapter revolves around epistemology: the philosophical study of knowledge, verification and the forming of beliefs.

Part 1 of this chapter deals with the new distributors of information and their effects on society. Part 2 revolves around how humans form beliefs and how new internet sources play a role in this process.

1.1. New Regimes of Truth and Personalized

Information Universes

1.1.1. New Regimes of Truth

According to philosopher Michel Foucault, there is no such thing as a universal truth because “each society has its regime of truth” (Foucault 130). The regime of truth is a type of discourse; the construct which forms social reality. It determines which statements society accepts as true or false. Furthermore, it also determines which mechanisms and instances people use to distinguish between true and false, and with which techniques and instances people value truth. Lastly, it fixes which entities have a leading role in determining which statements are true; these are the gatekeepers of truth (Foucault 130). Foucault does not discuss the subject of media at length, but he does indicate that news media, among other apparatuses, function as gatekeepers; thus these institutes have an impact on which statements are seen as true in society (Foucault 131).

However, the dynamics of apparatuses and discourses that form regimes of truth change historically. Therefore, political communication scholar Jayson Harsin proposes an update to Foucault’s theory to help understand the changes in communication technology, globalization and communication styles (Harsin, “Regimes of Posttruth”, 1-4). Today’s news apparatus is less centralized than before. The internet enables a multiplicity of new news sources, in the form of blogs, websites and social media feeds (Harsin, “Regimes of Posttruth”, 166). In addition, in recent years trust in the traditional news institutions has been falling. A study by research company Gallup found that media trust among Americans has been declining since 1973, and hit a new low point in 2016. According to this study, 32% of Americans trust traditional media, such as television news and newspapers. Young people and Republicans especially have lost trust: respectively, only 26% and 14% still trust the traditional news system (Swift). In effect, the power of traditional news institutions over what is considered to be truth has decreased.

Internet activist Eli Pariser attributes this falling trust in traditional media to these new internet sources. He argues that it is hard to see the errors and omissions in the news when one relies on a single source, for example The New York Times. In contrast, when the internet provides a multiplicity of sources, it enables the public to see the faults in the original news source. People do not distinguish between the trustworthiness of a random blog or Facebook page and that of a newspaper. This leads to a distrust of news institutions (The Filter Bubble, 64).

Political communication scholar Jayson Harsin agrees partly with this explanation. However, he indicates that part of the problem is the changing values within the news sector (“Attention! Rumor Bombs, Affect, Managed Democracy 3.0 Working Paper”, 6-7). He argues that the internet news sources have eliminated the role of traditional media as gatekeepers by simply dissolving the gates. The concept of gatekeepers was first identified in the work “Forces Behind Food Habits and Methods of Change” (1943) by psychologist Kurt Lewin. Lewin researched how food consumption habits could be changed during World War II. He found that “food does not move by its own impetus. Entering or not entering a channel and moving from one section of a channel to another is affected by a ‘gatekeeper’” (Lewin 37). In the process of getting food from the store to the table, a lot of decisions have to be made; decisions that were often made by the woman or maid of the household. They thereby decided which food entered the channel and ended up on the dinner table. As the gatekeepers of food, they were in control of the flow of food that their families consumed (Lewin 37-60). Nowadays, this concept is often used in communication studies to refer to the persons or institutions in charge of deciding which information is selected for dissemination. The old media system had fewer broadcasting channels and closer ties to government, educational and military institutions. This meant that ‘what came to the table’ was controlled by a tightly clustered group; information was relatively uniform. However, the current news apparatuses are many-headed hydrae (Harsin, “Regimes of Posttruth”, 2-4). News institutions receive an increasing amount of competition from a dense field of internet sources; sources that are faster, more widely circulated and more entertaining than traditional media (“Attention! Rumor Bombs”, 10).

However, the tactics of these new news actors and new media interfaces emphasize speed and circulation with fewer resources. In order to keep up with the competition, news institutions need to meet the demand for real-time reporting, at high speed and low cost. By adopting this trend for fast and entertaining news, there are fewer resources for verification, which has caused a crisis of verification (Harsin, “Attention! Rumor Bombs”, 6-11).

In this new media environment, original news institutions and government institutions no longer have true control over the information that people receive; the gates have dissolved. People can choose which source they consume from, meaning that there is an entire assorted buffet of information. The multiplicity of news sources, in combination with the new technologies of targeted content (which will be discussed later in this chapter), produces a reality in which it would make more sense to talk about “truth markets” instead of regimes of truth. The new internet sources have dissolved the role of traditional news as gatekeepers, and as a result, there are different ‘truths’ operationalized at the same time (Harsin, “Regimes of Posttruth”, 4-5). In addition to this, governments increasingly make use of public relations or propaganda, which makes it hard for citizens to trust the ‘truth’ that these organizations tell them (Harsin, The Rumor Bomb 2-7). For example, the Bush administration used the political rumor that Iraq had weapons of mass destruction and ties with Al Qaeda to justify the Iraq War. However, many Americans believe that the claims were exaggerated and manipulated by the government to sell the war (Judis and Ackerman).


As explained before, according to Latour, different institutions can have different modes of existence; therefore, there are different methods of deciding what is ‘truth’ and thus multiple ‘truths’ at work at the same time. Because there are multiple modes of existence at work, Harsin theorizes that the regimes of truth have been replaced by regimes of post-truth, wherein the truth is not simply a fact but fragmented (Harsin, “Regimes of Posttruth”, 4-5). To give an example: creationist institutions have a different justification system than scientific institutions and will, therefore, have different attitudes towards the age of the earth. This theorizing is similar to the argument of Foucault; both authors reject a presupposed universal truth and state that different parties (whether institutions for Latour or societies for Foucault) have different systems of truth. Both state that each system has its own methods for distilling which statements are true and its own value set for what is right and important to it. However, Latour argues that several institutions within one society can have different views of reality, while Foucault states that there is one regime of truth within the entire society (130). Nonetheless, both agree that there are multiple ways to interpret the truth depending on the discourse at play.

In line with this idea, it would no longer be useful to fact-check, because there are multiple regimes of truth which each have their own verification systems. Fact-checking within one regime will not debunk the story within other regimes. To refer to the example above, scientists believe that the world is roughly 4.5 billion years old based on radiometric age dating of meteorite material. However, using the Bible as testimony suggests that the earth is just 6,000 years old. Creationists rank the latter form of verification higher and therefore believe the world is roughly 6,000 years old, while scientists value radiometric age dating more highly. Since nobody from that time is alive to give a conclusive answer, it is one method against another. Both regimes are right within their own mode of verification.

Among other factors, the internet as dissolver of the gates of truth enables a state of post-truth. However, there are also other implications of getting news from the news feed instead of the newspaper. Software researcher Tarleton Gillespie argues in the essay “The Relevance of Algorithms” that journalists, like algorithms, have invisible tactics for determining what is important. However, journalistic and algorithmic objectivity are not the same thing. Journalistic objectivity is based on an institutional promise and on professional training and expertise. Because of this institutional grounding, journalism produced a more or less stable regime of truth. In contrast, the objectivity of algorithms finds its origin in the promised neutrality of the technological and the mechanical. The content that is distributed to users is presented as if there was no influence in the process (n.p., pdf: 15-17). Nonetheless, Gillespie argues that, just like the promise of journalistic objectivity, algorithms are not neutral by definition; they can also sort for commercial gain, political interests or personal gain. Algorithms can act as gatekeepers, whose powers can supplement editorial power (Gillespie n.p., pdf: 25-26). Algorithms can influence which information people have access to.

Harsin also places an emphasis on the role of personalization algorithms. Next to the multiplicity of news sources and the crisis of verification, personalization algorithms and marketing play a role in producing the regime of post-truth. Within this argumentation structure, it is useful to draw on a theory that is often seen as a follow-up to Foucault’s theories on discourse and discipline: the concept of ‘societies of control’ by philosopher Gilles Deleuze. Deleuze asserts that there is no longer such a thing as an individual, because the word individual implies something that is no longer divisible. According to Deleuze, people are not self-contained units because they can be divided into different aspects and roles within society. Deleuze calls these divided parts dividuals. The implication of this is that power no longer disciplines individuals; rather, there is a centralized form of power that controls these dividual parts (Deleuze 3-7). On the internet, algorithms have the ability to quantify online behavior by measuring and producing groups based on these dividual aspects. In effect, personalization algorithms direct attention: they steer the attention of a user to certain aspects while not giving them direct access to other data (Harsin, “Regimes of Posttruth”, 4-5). Big-data-driven predictive analytics produce segmented, fragmented and personalized spheres of truth, which control individuals on a dividual level. In the next segment, I discuss the implications of personalization algorithms for the circulation of content in more depth.


1.1.2. The Dangers of Personalized Filtering

As demonstrated above, algorithms play a role in producing a regime of post-truth: a place where fake news can flourish and multiple truths are at hand. Continuing from the last part, this segment will offer a deeper understanding of the dangers of information personalization algorithms. What is the argument behind the accusation that Facebook, by its method of circulating content, had an influence during the election?

Internet activist Eli Pariser’s theory on filter bubbles offers an understanding of the impact of the way that content circulates. In his work, The Filter Bubble:

what the Internet is hiding from you (2011), Pariser famously argued that online

personalization of information leads to a space in which users only encounter their own interest and beliefs.

Personalization algorithms on (social) media platforms might do a great job of recommending movies and music that users might like. But as Pariser argues, it might become dangerous when these mechanisms get applied to politics and the filtering of information. Internet filters decide what users are shown based on what or whom users ‘like’ and what they have clicked on. In this way, personalization shapes the way in which information flows. The unique micro-universe that these filters generate is what Pariser calls the filter bubble: a unique space that fundamentally changes the way people encounter ideas and information (The Filter Bubble, 9).

According to media scholar José van Dijck, an algorithm can be understood as “a finite list of well-defined instructions for calculating a function, a step-by-step directive for processing or automatic reasoning that orders the machine to produce a certain output from given input” (30). In addition to this rather technical definition, Facebook’s algorithms, EdgeRank and GraphRank (van Dijck 49), also have an additional role. They track the behavior of users on Facebook and on connected sites, in order to maximize profit from advertising and the collection of data (Skeggs and Yuill 391). By tracking behavior online, Facebook’s algorithms have the power to sort and filter the available content per user and therefore control their access to content. Thus, Facebook’s algorithms control what ideas or news users encounter (Van Dijck 49). They fix and limit what is possible for the user, and for this reason can be seen as a sort of lens between reality and the user, which could potentially skew users’ perception of the world (Pariser, The Filter Bubble, 10-15).
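To illustrate the kind of mechanism being described, the sketch below shows a toy version of engagement- and interest-driven filtering: items are scored against a profile built from what a user has previously clicked, so content resembling past behavior is ranked higher and everything else drops away. This is an illustrative sketch only, not Facebook’s actual EdgeRank or GraphRank; every name, weight and data structure in it is invented.

```python
from collections import Counter

def build_profile(clicked_items):
    """Toy user profile: how often the user engaged with each topic before."""
    profile = Counter()
    for item in clicked_items:
        profile.update(item["topics"])
    return profile

def score(item, profile):
    """Relevance = overlap with past clicks, boosted by overall engagement."""
    topical_match = sum(profile[topic] for topic in item["topics"])
    return topical_match * (1 + item["engagement"] / 1000)

def personalized_feed(candidates, clicked_items, top_n=2):
    """Keep only the highest-scoring items; everything else is filtered out."""
    profile = build_profile(clicked_items)
    return sorted(candidates, key=lambda i: score(i, profile), reverse=True)[:top_n]

# A user whose history consists only of partisan political stories...
history = [{"topics": ["politics", "scandal"]}, {"topics": ["politics"]}]
candidates = [
    {"title": "Candidate X scandal deepens", "topics": ["politics", "scandal"], "engagement": 900},
    {"title": "Local sports roundup", "topics": ["sports"], "engagement": 400},
    {"title": "Science breakthrough explained", "topics": ["science"], "engagement": 300},
]
# ...is mostly shown more of the same, while other topics drop out of view.
for item in personalized_feed(candidates, history):
    print(item["title"])
```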

However, filtering systems do make it possible to manage the immense amount of information published online. Without filtering, no user would be able to read everything that is important online. Personalization filtering enables users to easily find the information that is important to them. What is most important to them is a reflection of what fits the user’s interests and opinions (Pariser, The Filter Bubble, 4). Facebook COO Sheryl Sandberg stated that people do not want information that is targeted to the whole world; instead they prefer something that reflects their own interests (Pariser, The Filter Bubble, 110). Or as Mark Zuckerberg famously put it: “A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa” (qtd. in Pariser, The Filter Bubble). What is relevant and interesting differs per user.

Nowadays, personalization is a big part of our daily experience. In 1995, MIT technology specialist Nicholas Negroponte predicted the coming of “the Daily Me”: a communications package that is personally designed for a specific user. The transmitter (television, radio, newspaper, magazine, etc.) used to determine which information the user received, but computers, which are designed to filter, sort and prioritize information, are able to sort information based on what the user might want to perceive (Negroponte 153). This might sound like a good thing, because it would empower consumers to consume the information that is most ‘relevant’ to them. However, as legal scholar Cass Sunstein argues in his book Republic.com (2001), there is a downside to the growing power of consumers to filter what they see (8-9). It should be noted that the argument of Sunstein predates the argument of Pariser by roughly ten years and is often seen as a predecessor of the filter bubble argument. Moreover, one should take into account that personalization technologies have evolved a lot over this time frame.


Sunstein illustrates this argument by using the example of the newspaper. When reading the newspaper, people would skip over articles that they did not find interesting, especially in the political section. Nevertheless, they still had to glance over information that was unwanted or unplanned, in order to decide whether to dismiss it (Sunstein 27). Thus, people were exposed to unwanted information. However, because of personalization systems, users can eliminate certain views, opinions or topics that do not reflect their interests before even taking note that they are out there (Sunstein 27). Moreover, as Pariser argues, nowadays the exclusion of information happens without users even being aware that it is happening, or that the information they do see has been selected for them (10). As Van Dijck states: “as software increasingly structures the world, it also withdraws, and ‘it becomes harder and harder for us to focus on as it is embedded, hidden, off-shored and merely forgotten about’” (29). The problem with a technical infrastructure is that it and its effects tend to become invisible over time.

Another difference with earlier forms of personalization is that the filter bubble is not optional. Previously, users made a choice to consume certain information and disregard other content, but now the personalization algorithms work automatically and filter out certain information before it reaches the user. One last difference is that each user is alone in their own filter bubble. There were information channels with a certain alignment or bias before, but they always reached multiple people; the filter bubble is a you-niverse (10).

If personalization algorithms limit people to only consuming content that fits their worldview, what would be the effect? In his argumentation, Sunstein refers to the right of free speech in the United States, which allows (controversial) speech and expression to flow freely on public ground (Sunstein 27). This system results in exposure to a diversity of views, which might be unwanted, but it also offers people the chance to enrich their minds with unexpected new worldviews and new knowledge; Pariser refers to this phenomenon as serendipity (Sunstein 33; Pariser 1-18). Even when these diverse opinions do not offer people fresh insights, they do make them aware of different worldviews (Sunstein 33).


In contrast, the internet enables people to solely consume the information they find relevant or interesting, whether they consciously set the filter themselves or whether it is unconsciously filtered by algorithms. According to Sunstein, this could lead to multiple problems. The first problem is the fragmentation of society. The internet enables users to get stuck in places that only reflect their own opinions or worldviews, which are called echo chambers. In echo chambers, the beliefs of users get echoed back, which leads to a reinforcement of those beliefs; this could lead to extremism (48). Echo chambers, or filter bubbles, lead to clustered (extremist) views of the world, and this makes it harder to understand each other's viewpoint. Segmentation makes it harder to tackle social issues as one community. Another effect is the phenomenon of cyber cascades, in which a view becomes widespread because a lot of people believe in it, which creates a snowball effect (49).

The second problem of personalization involves the social spreading of information. Normally, the sharing of information also has a social role. This is especially true for so-called solidarity goods: information goods that become more valuable with the number of people that consume them. Solidarity goods are things like presidential debates or sporting games, which also work as a sort of bonding tool because they are worth more when you can discuss them with somebody (Sunstein 49-50).

The third and final problem concerns the understanding of freedom. By understanding freedom as consisting of the sovereignty of consumers, one would believe that freedom solely consists in the satisfaction of making our own choices. However, freedom also lies in the chance to form one's own preferences and beliefs under the correct conditions; in an environment wherein there is a sufficient amount of diverse information (Sunstein 50). This is what Pariser refers to as the influence of our online identity on our offline identity. The filter bubble can limit users' ability to develop and to choose how to live their lives. When users enter a filter bubble, it is decided for them which options they are aware of, and therefore they could get stuck in a static, ever-narrowing version of themselves (110-113). Pariser notes that there is a danger in algorithmically created versions of our identity; they are not a direct reflection of who we really are. As stated before, personalization algorithms only capture certain aspects of our lives to form an image of who we are. However, they are unable to grasp other aspects or simply miscategorize us. In "A New Algorithmic Identity" (2011), critical media scholar John Cheney-Lippold claims that individual users are unable to experience which effect algorithms have on their lives. The reason for this is that algorithms are not targeted towards individual users but rather target them as part of a certain category. The baseline for this categorization is not the individual user but smaller parts of them; the dividual level (Cheney-Lippold, 14-19).

According to Cheney-Lippold, the categorization is based on these dividual aspects of one's life, such as one's job or love of mystery novels (14-19). Philosopher Gilles Deleuze, who coined the term, states that people are not so much controlled by discipline but rather by a form of power that controls what is open and what is closed. New data-gathering technologies reduce individuals to dividuals (Deleuze 3-8). Personal data and dividual aspects are re-used outside the control of the user. Based on this data and a recombination of several dividual aspects, the user is put into a category. Categories are formed out of dividual aspects, which in turn control what individual users with those specific dividual aspects see. The fact that users receive certain content then softly persuades the user towards a model of normalized behavior and identity that fits the category (Cheney-Lippold, 14-19). The problem with this system is that people normally do not fit exactly into nice, clear-cut categories, but are softly nudged, by the content the algorithms return, to fit themselves more to the mold. As Pariser notes, humans do not only shape media but are also shaped by media; if a user is exposed to a certain opinion a lot, then there is a big chance that they will accept this opinion as their own (Pariser 124-125). Or as communication scholar John Culkin once summarized the work of media philosopher Marshall McLuhan: "We shape our tools and, thereafter, our tools shape us." (Culkin 51-53).
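The dividual categorization described above can be pictured as grouping users by fragments of their behavior rather than by who they are as whole persons. The sketch below illustrates only that general idea; the users and attributes are invented, and it is not a description of any platform’s actual categorization system.

```python
from collections import defaultdict

# Hypothetical users described only by "dividual" fragments of their behavior,
# not as whole persons.
users = {
    "user_a": {"reads_mystery_novels", "clicks_political_scandal", "night_owl"},
    "user_b": {"clicks_political_scandal", "watches_sports"},
    "user_c": {"reads_mystery_novels", "night_owl"},
    "user_d": {"watches_sports", "early_riser"},
}

# Each dividual attribute defines a category; one person falls into many at once.
categories = defaultdict(set)
for user, attributes in users.items():
    for attribute in attributes:
        categories[attribute].add(user)

# Content is then targeted at the category rather than at the individual:
# everyone who shares the fragment receives the same kind of content.
for attribute, members in sorted(categories.items()):
    print(f"category '{attribute}': {sorted(members)}")
```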

The arguments that Sunstein and Pariser make both emphasize the importance of serendipity, or unexpected encounters, in allowing the user to develop themselves. It should be noted that this argument is rooted in a cosmopolitan worldview, which revolves around the thought that every human is a citizen of the world and that different views should therefore meet with mutual understanding and respect. Professor of American and English literature Paul Gilroy claims that a degree of estrangement from one's own culture is essential to a cosmopolitan commitment. With estrangement, he means the "process of exposure to otherness", in order to create the value of diversity within sameness (Gilroy 67). In other words: it is a worldview in which humans have to try to keep themselves open to other opinions and customs, in order to create unity in society.

Therefore, if humans solely consumed content that fits their worldview, it could form a threat to a heterogeneous society. As stated above, it could lead to extremism and polarization in society. Or, as journalist Jenna Wortham explained, one society could be split into two different countries: one that sees blue and black, and one that sees white and gold.

Sunstein states that group polarization is the effect of a system in which individuals and groups make different choices (65). Group polarization occurs when people get trapped in echo chambers, which results in a reinforced, more extreme belief. The first explanation is that of persuasive arguments. The idea is that an individual's position is based on which argument seems the most convincing. In echo chambers, the group discussion is already inclined in a certain direction, and therefore the members will offer a disproportionately large number of arguments for their own case, and hardly any arguments against their initial claim (Sunstein 67-68). Because of this, the original opinion will likely seem more convincing, and if members of the group shift, they will generally move towards a more extreme stance instead of a more cosmopolitan worldview. This mechanism of group polarization revolves around a ‘limited argument pool’, one that is skewed towards the group's own stance (Sunstein 68).

The second explanation that Sunstein offers for group polarization is that of social comparison. This mechanism revolves around the idea that people want to be perceived favorably by other members of the group. This means that they will often adjust their stance towards the dominant opinion within the group (Sunstein 68). Another effect is what political scientist Elisabeth Noelle-Neumann calls the “spiral of silence”, in which people with minority standpoints within the group silence themselves for fear of isolation (276). This is related to the concept of the Overton window. The Overton window is a construct for the range of publicly acceptable stances on ideas, or, to put it more bluntly, which ideas are socially acceptable (Lehman). Before access to the internet, politicians and media only expressed opinions that were deemed mainstream (Manjoo). As a result, people with minority, or alternative, stances hardly ever heard their own opinions reflected (Manjoo). This could, in turn, lead to a spiral of silence in which people with alternative ideas would silence themselves. With social media, it is possible to connect with users all over the world, which allows people with non-mainstream views to find people that share their stance. This allows them to act together. Social media can produce echo chambers in which fringe ideas get repeated and therefore gain power. It becomes accepted to share fringe stances within echo chambers, which in turn stretches the Overton window, to the right as well as the left (Tuler and Kasperson 156-157). As Sunstein argues, echo chambers empower people to take a more radical standpoint, which leads to a more extreme difference between the political left and the political right. This could lead to polarization in society.

However, as Pariser argues, because of personalization algorithms, people are no longer aware that content is filtered to fit their views. Facebook and other online platforms would create an automatic echo chamber without the user having to search for it themselves: a filter bubble. Filter bubbles, like echo chambers, cause extremism and polarization. Pariser's argument is that users only see their own opinion, which would automatically steer them towards a more reinforced belief; a skewed worldview. Users would no longer have the chance to encounter unwanted or unplanned content that gives them the freedom to form their own beliefs. Personalization algorithms even eliminate the chance for users to glance over other content. Instead, personalization algorithms keep users trapped in an endless version of themselves, with no place for serendipity. Also, social media has made it possible for fringe opinions to be heard and reflected, which, in turn, enlarges the Overton window and makes the (political) debate more intense.
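To make this self-reinforcing mechanism concrete, the sketch below is a minimal illustration of my own; it is not Facebook's or any platform's actual ranking code, and all item names, tags, and numbers are invented. It shows how a feed that ranks items by similarity to a user's earlier clicks, and then learns from what gets clicked, keeps pushing ideologically aligned items to the top.

# Toy illustration only (not any platform's real algorithm): a feed ranked by
# similarity to a user's past clicks, where every click reinforces the profile.
from collections import Counter

ITEMS = [
    {"id": 1, "tags": {"politics", "left"}},
    {"id": 2, "tags": {"politics", "right"}},
    {"id": 3, "tags": {"sports"}},
    {"id": 4, "tags": {"politics", "left", "climate"}},
    {"id": 5, "tags": {"politics", "right", "economy"}},
]

def rank_feed(items, profile):
    # Order items by how many of their tags the user has engaged with before.
    return sorted(items, key=lambda it: -sum(profile[t] for t in it["tags"]))

def simulate(clicked_tags, rounds=3):
    profile = Counter(clicked_tags)   # engagement history
    seen = set()
    for r in range(rounds):
        feed = rank_feed(ITEMS, profile)
        top = next(it for it in feed if it["id"] not in seen)  # reads the highest-ranked unread item
        seen.add(top["id"])
        profile.update(top["tags"])   # the click reinforces the same tags
        print(f"round {r + 1}: feed order {[it['id'] for it in feed]}, clicked {top['id']}")

if __name__ == "__main__":
    simulate(clicked_tags=["politics", "left"])

Because the highest-ranked items are also the ones most likely to be clicked, every click feeds the same tags back into the profile; it is this feedback loop, rather than any single ranking decision, that the filter bubble argument targets.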

Even though Pariser and Sunstein seem to make a convincing case about the dangers of online personalization, not everybody agrees with the filter bubble argument. In the following part, I will discuss different arguments against the filter bubble concept within the academic discourse.


1.1.3. Critique of the filter bubble argument

As stated before, the theory of the filter bubble, or that of algorithms narrowing users' minds, is not perfect. In the part below, I will offer five categories of critique that are used in the academic discourse to counter the filter bubble argument.

“Birds of a feather flock together”

The first category of critique is the idea that humans themselves already filter out information that they do not find interesting or that does not fit their worldview. Digital librarian Nina de Jesus states that the filter bubble problem is not a technological problem but a social one. In general, people do not seek out information or people that do not ‘fit’ within their view. This social bubble is constructed by many factors, such as race, gender, class, and geography. Solving the technical filter bubble problem will not change the fact that people in society are divided into groups.

In addition to this, the study “Promoting Civil Discourse Through Search Engine Diversity” by user behavior researcher Elad Yom-Tov et al. demonstrated that users tend to click on items that reflect their own opinion. However, personalization algorithms add another layer to this phenomenon, and, as Cass Sunstein claims, technological filtering systems eliminate the chance of encountering unwanted content.

“It is just a theoretical concept”

The second critique is that the filter bubble is merely a theoretical concept that does not exist in real life, or that the effect of algorithmic filtering is negligible. Information law researchers Frederik J. Zuiderveen Borgesius et al. made a recent inventory of existing empirical studies of the phenomenon. They concluded that some studies indicate a small personalization effect, but that the technology at this point is simply not advanced enough to cause a completely personalized internet experience. Personalization technology on most sites is in its infancy and therefore poses little risk at this moment. However, if the technology improves, it could become a problem.


After the publication of Pariser's work The Filter Bubble, Facebook conducted its own study to assess how serious the claim laid out by Pariser is. The research by Facebook data scientists Eytan Bakshy et al., “Exposure to Ideologically Diverse News and Opinion on Facebook”, measured the effect of personalization algorithms on the feeds of American users who described themselves as liberal or conservative. On average, the ‘filter bubble effect’ is about 6% (1130-32), meaning that algorithmic ranking removes roughly 6% of the cross-cutting stories users would otherwise see in their feed. Apparently, the actions of ‘friends’ have more influence on the content of the feed than the personalization algorithm (Pariser, Did Facebook’s Big New Study Kill My Filter Bubble Thesis?). However, as noted above, if people ‘flock’ together in clusters of like-minded people in real life, they may well do the same online. Therefore, if the majority of one's Facebook friends have a characteristic in common, one's Facebook feed will show a stronger bubble effect. Another finding of the study is that the ‘filter bubble effect’ is slightly stronger for self-described liberal Facebook users: 8% of cross-cutting content is lost by algorithmic filtering, versus 5% for conservative users. However, the study also finds that users' own choices to ignore or not click on ‘unwanted’ content have a bigger impact on why individuals might not read enough content outside their comfort zone. The study estimates that conservatives' choices not to click on cross-cutting content reduce their exposure by 17%, while liberals' click choices lead to a reduction of 6%. Overall, users decide to limit the variety of their information intake more than the algorithms do (Bakshy et al. 1130-32). Moreover, personalization can also help to give people access to a diverse set of information, and pre-selected personalization might even lead people to make more diverse choices (Helberger 441; Nguyen et al. 80-86). However, when people are wired to choose what is familiar to them, algorithmic filtering does add another layer of eliminating cross-cutting information, without users controlling what is excluded. Furthermore, the filter bubble effect may be stronger for one user than it is for others.
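As a rough back-of-the-envelope reading of these figures (my own illustration; the only inputs are the percentages from Bakshy et al. cited above, and treating the two reductions as independent filters is a simplifying assumption), one can compare how much cross-cutting content survives algorithmic ranking versus users' own click behavior:

# Illustrative arithmetic only, using the percentages cited above from Bakshy et al.:
# how much cross-cutting content remains after algorithmic ranking and after
# users' own click choices, assuming the two act as independent, sequential filters.
reductions = {
    # group: (algorithmic filtering, individual click choices)
    "liberal": (0.08, 0.06),
    "conservative": (0.05, 0.17),
}

for group, (algo, choice) in reductions.items():
    remaining = (1 - algo) * (1 - choice)
    print(f"{group:>12}: ~{remaining:.0%} of cross-cutting content still reaches the reader")

Under this simple multiplicative reading, conservative users' own choices remove considerably more cross-cutting content than the ranking algorithm does, while for liberal users the two effects are of similar size, which is broadly consistent with the study's conclusion that individual choice matters at least as much as the algorithm.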


“It is not a bubble”

The third type of critique of the filter bubble argument is that the metaphor of the bubble is not the best way to describe the phenomenon. Anders Koed Madsen, a researcher on the subject of big data, proposes a shift in his article “Beyond the Bubble: Three Empirical Reasons for Re-Conceptualizing Online Visibility”. The shift he proposes is to replace the metaphor of the filter bubble with the term web visions to explain how real-world representations are shaped by online visibility. A web vision is an “environment of information that is open to exploration by a web user interested in specific issues” (9). The problem with Pariser's argument is that he is solely focused on algorithms as the cause of the information that is visible to the user, while in reality the algorithm is just one of multiple selection mechanisms (6-7). Madsen discovered a discrepancy between the search engine results page, a direct representation of the algorithm, and the websites that users actually visited; the web vision is therefore not dependent on the algorithm alone (21). The implication is that users do not get trapped in a bubble, but get a certain starting point for their exploration of the web sphere.

“Do not throw the baby out with the bathwater”

The fourth category of often-used critique is that we must not forget that there is a positive side to personalization, and that we must not, so to speak, throw the baby out with the bathwater. According to legal scholar Eric Goldman, personalization algorithms counter the popularity bias of normal algorithms and thus reduce the “winner takes all” mentality of promoting popular items simply because they are popular. Due to personalization, the results differ per person, and the outcome is more relevant and fair (199-200). Another issue that personalization solves is that of managing the vast amount of information online. Without personalization filters, there would be too much information to comprehend and to find anything useful in. Personalization algorithms offer the information that is most relevant to you at that time (Intelligence Squared U.S. Debates).
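To illustrate the contrast Goldman points to, the toy comparison below is my own sketch (the articles, view counts, and topic labels are invented): a purely popularity-based ranking shows every user the same “winner takes all” list, while a personalized ranking lets each user's interests change the outcome.

# Toy contrast (my own illustration, not Goldman's): global popularity ranking
# versus a ranking that puts interest match first and popularity second.
ARTICLES = [
    {"title": "Celebrity scandal", "views": 9000, "topics": {"entertainment"}},
    {"title": "Local council budget", "views": 300, "topics": {"politics", "local"}},
    {"title": "Niche open-source release", "views": 150, "topics": {"technology"}},
]

def rank_by_popularity(articles):
    return sorted(articles, key=lambda a: -a["views"])

def rank_personalized(articles, interests):
    # Interest match first, popularity only as a tie-breaker.
    return sorted(articles, key=lambda a: (-len(a["topics"] & interests), -a["views"]))

print([a["title"] for a in rank_by_popularity(ARTICLES)])              # same for everyone
print([a["title"] for a in rank_personalized(ARTICLES, {"technology"})])  # differs per user
print([a["title"] for a in rank_personalized(ARTICLES, {"politics"})])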

Lastly, most big actors in the field of personalized filtering, such as Google and Facebook, do allow the user to turn off personalization if they wish to do so.


“Reading does not equal believing”

The last critique of the filter bubble argument is not about whether or not there exists such a thing as a filter bubble, but rather about the assumption that users believe everything they read. Pariser and Sunstein argue that, because of personalized information, people would get stuck in extremist viewpoints; that if a Facebook user reads only pro-Trump articles, including fake news and hoaxes, this person will automatically accept everything they read, as if Facebook itself were a hypodermic needle injecting beliefs. The last argument against the filter bubble, then, is that humans have their own ways of forming beliefs and justifying their opinions (Intelligence Squared U.S. Debates).

In sum

In the first part of this chapter, I have offered an overview of the new infrastructure of news circulation and how this infrastructure might lead to a ‘market of truth’ or ‘regime of post-truth’ in which consumers can choose for themselves which ‘truth’ they prefer. In the second part, I have demonstrated that, in the time of personalized algorithms and big data, users might not even have to choose which truth they prefer because, according to the filter bubble argument, users will only see one mode of truth and will not be able to see the world within different modes of existence, or within another regime. In theory, the filter bubble could create a country divided into two nations: one that sees gold and white, and another that sees blue and black. Nonetheless, in the last segment I have offered some objections to this argument, which indicate that the technical workings of the filter bubble are not strong enough to produce a personalized regime of truth for all of us. In the next part, I will investigate the other side of the feed: the users. To fully understand the impact and context of fake news, it is important to understand how humans form beliefs.


1.2. Could it be us?

In the previous part, I offered an exploration of different theories that focus on the distribution of knowledge rather than on the perception of content. In this second part, the focus therefore shifts to the actual perception of knowledge, which returns us to the study of epistemology. To understand why humans have trouble identifying fake news, in the sense of unfounded content, it is important to understand the process of forming beliefs and moral standpoints. This part will explore how people form beliefs, and what influence the internet as a distributor of knowledge has in this process.

As indicated earlier, people increasingly rely on internet sources as an information source. Furthermore, there is an increasing number of sources to consult online. However, all this data might not lead us to more knowledge. Data is not information, not all information is good information, and good information alone does not automatically lead to knowledge. Founding father of information theory Claude Shannon argued that data signals are messy, and that for data to turn into meaningful information one has to filter out some of the noise (A Mathematical Theory of Communication 1-4). Thus, information is filtered or extracted from what we receive, but this still is not knowledge. Lynch defines knowledge by making use of Plato: “Knowing is having a correct belief (getting it right, having a true opinion) that is grounded or justified, and which can therefore guide our action“ (Lynch 14). Knowledge, therefore, is a true belief based on grounds that one can trust to act on. The tendency of news sources to emphasize speed and entertainment rather than sourcing reduces their trustworthiness and, therefore, the overall knowledge in the news. However, there are many different ways to ground information: reliable sources, experience, or a grasp of the big picture (Lynch 15).
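Shannon's idea of filtering noise can be stated more formally. The equations below are the standard information-theoretic definitions, added here as an illustrative gloss of my own rather than something taken from Lynch or from this thesis's other sources: the information a received signal Y carries about a source X is what remains of the source's uncertainty once the noise has been accounted for.

% Standard definitions from information theory (Shannon), given as a gloss.
\begin{align}
  H(X)        &= -\sum_{x} p(x)\,\log_2 p(x)              && \text{uncertainty in the source} \\
  H(X \mid Y) &= -\sum_{x,y} p(x,y)\,\log_2 p(x \mid y)   && \text{uncertainty remaining after receiving } Y \\
  I(X;Y)      &= H(X) - H(X \mid Y)                        && \text{information actually conveyed}
\end{align}

In these terms, filtering out noise means maximizing I(X;Y); yet even a perfectly transmitted message only yields information, not the grounded, justified belief that Lynch calls knowledge.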

According to scholars on the subject of the philosophy of technology and epistemology Boaz Miller and Isaac Record, relying on search engines and other online platforms would not lead to knowing, since online information is often incomplete and biased. Therefore, they argue in their article “Justified Belief in a Digital Age: On the Epistemic Implications of Secret Internet Technologies” that a subject's belief based on information from the internet is less justifiable.


This is because people often do not understand the workings of the circulation system, nor do they rely on additional sources (117). Their argument is that humans should form a practicably responsible justified belief: they should investigate claims to the extent that is possible and required in a particular situation. What is possible and required depends on the role expectancies and competencies of the individual and on the tools at his or her disposal. Personalization algorithms result in different outcomes for different users, and, in line with the dangers of the filter bubble, it simply would not be enough for users to rely on just their search results or their Facebook feed to form a responsible belief. Miller and Record also state that factors such as bias, emotion, or gut feeling should not play a role in forming a true responsible belief (118-125). However, one must keep in mind that different regimes of truth, or different modes of existence, have different verification systems, so Miller and Record's argument is rather normative and will therefore not be accepted by everybody as the correct method to distil the truth.

However, within their line of thought, personalization algorithms have the following features that make it harder for users to form responsibly justified beliefs based on their output: firstly, online personalized filtering often makes it hard to tell the difference between experts, commercial parties, hooligans, and biased sources. Secondly, algorithms are opaque; it is not clear how relevance is calculated, nor are users aware of them being at work. And finally, it is unclear what the results cover: what is included in the dataset and what is excluded. According to Miller and Record, as a result of these features, using internet services that sort information will often lead to beliefs that are not responsibly justifiable on technical grounds (127-129).

Miller and Record claim that, in order to make personalized web services a more qualified place to form beliefs and gain knowledge in an epistemologically responsible way, four actions could be undertaken: 1) stop filtering results, 2) give the user more control over the filtering process, 3) make users more aware of the limitations and features of algorithms in forming responsible beliefs, or 4) verify internet sources on the basis of how trustworthy they are and eliminate troublesome sources (128-129).
