
Echo Chambers, Filter Bubbles and Fake News:

Examining the Rise of Left-Wing Fake News on Facebook

Cristiaan van Wijk Student number 5734630 University of Amsterdam

Acknowledgements

Completing this master’s thesis would probably not have been possible without the support of my thesis supervisor, Dr Esther Weltevrede, whose positive attitude and useful feedback encouraged me to power through. Erik Borra provided me with a great variety of sources, both from inside and outside of academia, which were hugely beneficial. Thanks also go to my fellow student Charlotte Leclerq, who offered her fresh perspectives on my work during our regular feedback sessions. And last but not least, I would like to thank my girlfriend, Lieke Verheijen, for her continual support and for applying her knowledge of academic writing in an effort to help me improve the quality of my thesis.

Table of Contents

Acknowledgements

1. Introduction: Echo Chambers, Fake News, and Conspiracy Theories

2. Political Discourse on the Internet: From Optimism to Realism

3. Echo Chambers and Filter Bubbles: Influencers of Polarization and Media Bias in the American Political Landscape

4. Methodology

5. Results

6. Discussion

7. Conclusion


1. Introduction: Echo Chambers, Fake News, and Conspiracy Theories

With Donald Trump as the winner of the 2016 United States presidential election, it seems that the partisan divide in the American voting population has increased significantly. The rather vile tone used primarily by Republican presidential candidate Donald J. Trump during the televised debates, in mainstream media, and on his Twitter page seems to have contributed to the further polarization of the Republican and Democratic voter bases. As Baldassarri and Gelman put it, “[p]olitical polarization constitutes a threat to the extent that it induces alignment along multiple lines of potential conflict and organizes individuals and groups around identities, thus crystallizing interests into opposite factions” (409). Besides Donald Trump’s own behavior, and the scandals that this behavior brought forth in the media, the explanation for the further polarization of the American right might lie in the efforts of certain online communities that knowingly or unknowingly spread propaganda, sometimes in the form of what has recently been coined fake news. The election gave birth to countless Facebook-native ‘news’ pages, many of which seem to follow the same business model: “build a big following, post links to articles on an outside website covered in ads and then hope the math works out in your favor” (Herrman). The tactics used to trick users into visiting outlinks containing fake news stories, and into believing that the content is real, vary as widely as definitions of fake news do.

One tactic is to mimic legitimate news outlets. The URL CBSnews.com.co, for example, deliberately tries to trick viewers into thinking that they are viewing the actual CBS News website. Traditional media organizations have spent years trying to find ways to drive traffic from Facebook to their own websites. These new, Facebook-native media outlets differ from older media organizations in that they are at home on Facebook: it gives them all the tools they need to publish, as well as a way to profit from the content they publish.

A term like ‘fake news’ would lead one to think that it has a straightforward definition, which can easily be explained. However, the term is used by both right-wing and left-wing citizens to refer to different kinds of news sources. The right, including the current president of the United States, likes to use the term ‘fake news’ to refer to the large mainstream media, implying that their message is biased and that the news they publish skews the facts. Yet other people use ‘fake news’ for completely fictional, made-up stories. These concocted stories often feature a clickbait title and are guaranteed to rile up readers who deem the story factual. In one definition, fake news does not refer to news articles written by biased journalists, or distorted headlines deliberately written to generate clicks (and, in effect, revenue). Instead, fake news refers to the deliberate spread of false news stories on social media outlets with the intention to earn ad revenue, to push a certain political narrative, or to outright troll the users of these social networks (The Guardian). To be specific, ‘trolling’ used in the context of these past US elections differs from what has traditionally been considered trolling. Trolling used to refer to the actions of individuals who deliberately spread false information, expressed offensive opinions, or spewed vulgarity at other users on the web “with no apparent instrumental purpose” (Buckels, Trapnell, and Paulhus 97). Those who trolled others on the web mainly did so to get a rise out of people and to antagonize them for their own amusement. In the definition used for ‘trolls’ in the last decade, trolls simply trolled for fun. At present, however, the label ‘troll’ is also applied (by themselves or by others) to members of the alt-right who use memes to push their narrative. Yet this use of the term is somewhat unjust, since their actions are distinctly different from classic troll tactics and goals. As Whitney Phillips, Jessica Beyer, and Gabriella Coleman point out in their article about the trolling tactics of the far right during the past election cycle, by referring to their practices as trolling, far-right trolls use the term as a cop-out. Referring to their own actions as trolling dismisses critique of their ideas and passes the blame onto the person who is getting trolled, or who lets himself get trolled. This, the authors argue, “normalizes an antagonist-centered worldview, in which the aggressor gets to choose their own terms and assert what the appropriate reaction to their behavior might be” (Phillips, Beyer, and Coleman par. 8). It seems that trolling is no longer void of ideology, and that the act of trolling has become part of the tactics the alt-right uses to spread its political message and attract new members.

The definition of fake news is thus more complicated than simply “news that is not true,” and while the issue of fake news itself is of great importance in the current political climate, I will argue that its working definitions are fluid and highly subjective. In an effort to show that fake news is part of a larger phenomenon of falsehoods and manipulated information that is spread on the internet, Claire Wardle outlined seven different types of mis- and disinformation ranging from satire to fabricated or manipulated content, as can be seen in Figure 1.


Figure 1: The seven types of mis- and disinformation Claire Wardle distinguishes (FirstDraft).

While sounding straightforward, the term ‘fake news’ is used by different groups in different ways, and can mean multiple things simultaneously. It has been used as “a catch-all phrase to refer to everything from news articles that are factually incorrect to opinion pieces, parodies and sarcasm, hoaxes, rumors, memes, online abuse, and factual misstatements by public figures that are reported in otherwise accurate news pieces” (Weedon, Nuland, and Stamos 4). There is fake news written by Eastern European people with the intention of making a profit off the accompanying advertisements (Silverman and Alexander). There are also partisan news websites that balance on the line between promoting a heavily biased narrative and spouting utter falsehoods, of which the extreme-right website Breitbart is a good example. Among other incidents, such as feeding the lie that former president Barack Obama was a Kenyan-born Muslim (Goldstein), Breitbart published a false story about a mob of 1000 Muslims that supposedly set fire to a church in Dortmund on New Year’s Eve 2016 (France-Presse). The fact that Breitbart generally publishes true, though heavily biased, news articles makes it hard for readers to separate fact from fiction. What is truly remarkable is that the current President of the United States of America and many of his followers use the term fake news as an attack on mainstream media outlets that are critical of the current administration, without actually putting in the effort of debunking the news that is branded as fake by this right-wing movement. Donald Trump has referred to both CNN (Smith) and The New York Times (Morin) as fake news, and has repeatedly questioned the integrity of journalists working for major mainstream news outlets. Ironically, the president himself has referred to sources that can be described as dubious at best to promote the birther movement (the conspiracy theory that former president Barack Obama is not a U.S. citizen), claims of large-scale voter fraud during the 2016 election, and the (non-existent) link between the vaccination of children and the development of autistic behavior (Maheshwari). Given that public trust in journalism has been plummeting for several decades, with only 32% of Americans (and 14% of Republicans) stating that they have a “great deal or fair amount of trust in the media” (Swift), these accusations of fake news directed at the mainstream media fit into a larger trend of distrust in media – and perhaps into the overall trend of low confidence in large, intricate institutions. In a series of surveys held by the Nieman Journalism Lab, researchers found that distrust in news media in the United States is especially prominent among right-wing voters. After identifying the political affiliation of the respondents, it turned out that 51 percent of left-wing respondents reported trusting news outlets, while only 20 percent of right-wing respondents answered this question affirmatively (Owen). As a side note, the situation in the United Kingdom seems to be reversed: 37 percent of left-wing voters in the UK reported trusting the news, while exactly half of the right-wing respondents were shown to trust the media. In this thesis, my focus is not solely on the content of fake news, but also on the contexts in which it circulates (Bounegru et al.).

Dissemination of fake news can lead to badly informed groups and individuals, especially considering that users of online media generally pick and choose which sources may push news items to them, and that algorithms influence the kind of content a user gets to see. A Facebook or Twitter feed provides its users with a limited perspective on what is happening in the world. These closed-off online spaces, which function as a news bubble wherein users gather information, are referred to as echo chambers (Sunstein 6). In Republic.com, American legal scholar Cass Sunstein argues that for a democracy to function properly, citizens should be exposed to “materials that they would not have chosen in advance” (5) and that they should share a “range of common experiences” (5). He describes a situation where citizens who participate in modern communication technologies actively restrict themselves to their own points of view. In an online space that functions as an echo chamber, users generally share the same views on the issue(s), person(s), or phenomena to which the community is devoted, and there is a severe lack of criticism of, or nuance on, the status quo. Even if such nuance is presented by a user to others within this virtual chamber, this user’s views run the risk of being ignored, and the user might face harsh criticism from others for going against the status quo. The interaction users have with each other may strengthen their preconceived notions about certain issues. When false information is fed to the community, and nobody addresses its incorrectness, groups may be manipulated and pushed to extreme views. Since social media and social networks play an increasingly important role in shaping public opinion and public perception of certain issues, it matters that echo chambers “can prevent engagement with alternative viewpoints and promote extreme views” (Williams et al. 135), and that a lack of critical scrutiny sometimes leads citizens to move to extreme positions on partisan issues. In some more extreme cases, a complete rejection of mainstream media accounts of significant events can lead individuals to accept a fringe conspiracy theory as truthful. A space where opinions that oppose the dominant ones are quelled, combined with the (un)intentional spread of blatant misinformation by users in that space, poses a serious risk to the inhabitants of that space, as they might be willing to reject logic and instead choose to believe a more exciting false narrative.

In the current public debate, the notion exists that the spread of fake news and biased news articles has played an important role in the outcome of the past US presidential election. Several commentators have suggested that fake news was a vital element in Donald Trump’s election victory (Dewey 2016, Parkinson 2016, and Read 2016). In their recently published study, Allcott and Gentzkow argue that the role of fake news in the election is overstated in the mainstream media. The authors show that only 14 percent of Americans called social media their “most important” source of election news, and that “fake news in [the] database would have changed vote shares by an amount on the order of hundredths of a percentage point. This is much smaller than Trump’s margin of victory in the pivotal states on which the outcome depended” (20). Instead, mainstream media are blamed for having played an important role in creating the momentum for Donald Trump. Besides the modern way major news outlets reach a target audience through online content, television still plays an important role in reaching the political base of both Democrats and Republicans. Large television networks like CNN and Fox constantly cover politics and still reach millions of American voters. Both channels have been criticized, by the right and left wing respectively, for pushing partisan news that either spins news coming from one of the parties or attacks the other party. During the past election cycle, CNN was nicknamed ‘Clinton News Network’ by Republicans supporting Trump. Fox received criticism for spinning facts on numerous occasions. Several instances of Fox News anchors pushing disinformation to their viewers can be found in the documentary feature film Outfoxed.

The present thesis focuses on fake news that went viral on the internet in the months after Donald Trump was inaugurated as the 45th president of the United States of America. I have investigated which fake news stories were pushed by questionable websites on the social media site Facebook in the wake of the 2016 presidential election, and what role fact-checking websites played in combating the fake news phenomenon. One of the main goals of this thesis has been to determine to what extent users interacted with the fake news stories, and which types of stories garnered (significantly) more interaction than others. I have focused on discovering what kinds of sites or platforms were responsible for spreading the fake news, and whether the topics written about in the months after the inauguration differed significantly from those written about during the election. To collect a corpus of popular, widely shared stories, the fact-checking website Snopes has been used. My research question can be condensed as follows:

What kind of fake news stories were engaged with the most in the months following Donald Trump’s inauguration, and can analyzing these stories add to the existing definition(s) of the term ‘fake news’?

In the coming sections, I will first construct a theoretical framework, to shed light on the ways the democratic process has become intertwined with the internet and to show that the early optimism about the potential of the internet has by now turned into concern in the current debate. After this section, the focus shifts towards contemporary problems surrounding echo chambers and the algorithmically created filter bubble. I will connect these concepts to the relatively new issue of fake news and will argue that fake news is part of a larger phenomenon of dis- and misinformation techniques. Next, we will arrive at the empirical part of the thesis, where I will identify key fake news stories that were debunked in the weeks after Donald Trump’s inauguration. At the same time, I will study the sources that pushed these stories. In the discussion, I will critically assess the methods used, before finally arriving at my conclusion.


2. Political Discourse on the Internet: From Optimism to Realism

Before conducting an empirical study, we must, of course, explore what others have written about how citizens’ engagement with political issues is affected by the internet, how the development of digital technologies has influenced the way citizens engage with politics, and how the internet can suppress certain voices while amplifying others.

The rise of the internet has complicated the idea of the public sphere, a term that refers to how communities influence the state through deliberation. The term ‘public sphere’ was popularized by Jürgen Habermas, who describes its ideal form as “made up of private people gathered together as a public and articulating the needs of society with the state” (176). Peter Dahlgren expands the definition when writing about the role of the internet in that public sphere. He defines a public sphere as a “constellation of communicative spaces in society that permit the circulation of information, ideas, debates – ideally in an unfettered manner – and also the formation of political will” (148). One can argue that the internet is not an ideal place for large-scale deliberation, nor for expressing certain wishes to the state in a clear and uniform way. Richard Rogers and Noortje Marres (2005) argue that the activity that takes place within a network of actors on the internet surrounding a certain issue is not exactly a debate: these actors are not really having a conversation, nor do they present a clear point of view. Instead, they refer to interlinked webpages dedicated to an issue as “issue-networks” (2). This structure differs greatly from how Habermas envisioned a public sphere should function.

The internet was thought to expand the public sphere and, as is explained below, on the surface it is easy to believe that it would provide new opportunities for democratic deliberation: citizens would be able to gather resources on issues, voice their opinions, and deliberate with each other on the internet, all resulting in a stronger or better version of democracy. Writing about bias in a pre-internet world, Robert A. Hackett already noted that “objectivity [can no longer] be taken as the opposite of ideology in media, if indeed the forms and rhetoric of objectivity help to reproduce dominant political frameworks, or position the media audience as passive observers and consumers” (253).

The democratic potential of the internet and the influence globally networked individuals would have on the public sphere were lauded in the early stages of the internet. In the 1990s, there was a utopian attitude among academics, with Nicholas Negroponte coining the term the Daily Me (Negroponte 153) to emphasize “the highly tailored information and knowledge we could acquire to become better-informed citizens and consumers” (Benkler et al. 5). It was Yochai Benkler who stressed the importance of the commons, which refers to the “institutional devices that entail government abstention from designating anyone as having primary decision-making power over use of a resource” (Benkler 2). A commons-based model for information production would diminish “the power of the state and of incumbent media to shape public debate, and that radically decentralized, commons-based production by once passive consumers would enhance participation and diversity of views” (Benkler et al. 5). Like many scholars writing about the early stages of the internet, Benkler was optimistic about its potential, as he saw once passive consumers turning into active users who would be able to engage in “open social conversation” (Benkler 579) on a new, democratic, and open medium. This optimism came under pressure after researchers began conducting network analyses on the web, starting with hyperlink analysis.

After the new millennium, researchers started to perform network analysis to study the way users deliberate with each other on the internet (and with whom), and how online communities are shaped. It is important to realize that before the current, Web 2.0 internet paradigm, microblogging platforms and social network sites as we know them now were non-existent. The research performed around the new millennium showed that conversation was not neatly and evenly dispersed, but mainly concentrated in hubs, and that participation on the internet followed a power law distribution (Barabási and Albert 1999; Shirky 2003).

This power law distribution is created by the possibility to choose what to view on the internet. Clay Shirky remarks that “[i]n systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic, […] even if no members of the system actively work towards such an outcome.” This argument undermines the idea that the internet is an inherently democratic medium, since most of its users will remain unheard. Popular websites in the early stages of the internet depended for their popularity on receiving many links. Once a site became more popular, the chance of other web pages linking to it increased severely, while popular pages would rarely link to unpopular web pages. If they did start linking to a relatively unknown website, the chance of that site becoming (more) popular would, as logic dictates, increase accordingly. Networked practices are commonly characterized by these power law distributions, which show positive feedback loops on both the demand side, “where attention to a given node feeds back additional attention to that same node” (Benkler et al. 597), as was first shown in citation counts in the academic world (Price), and on the supply side. For instance, power law distributions can be seen in open source software projects. In their analysis of open source software development projects on the open source database and collaboration site SourceForge, Madey, Freeh, and Tynan found that the “initial success [of a project] breeds more success because developers prefer to be part of a successful project” (1810). Furthermore, in online encyclopedia projects like Wikipedia, most of the work is done by a small minority of dedicated users: Muchnik and others found, in their network analysis of Wikipedia pages, that only “5% of users contribute 80% of the edits” (2). All these findings strongly suggest that, although users were in fact able to voice their opinions, most of these voices would go unheard. Only the websites at the top of the power law distribution would eventually be seen by others.
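This ‘rich get richer’ dynamic can be made concrete with a short simulation of the preferential-attachment mechanism identified by Barabási and Albert: each new page links to an existing page with probability proportional to that page’s current number of links. The sketch below is my own minimal illustration of that mechanism (the function name, network size, and the 5% cut-off are illustrative choices, not taken from the cited literature); it shows how a small minority of pages ends up holding a disproportionate share of all links, even though no page actively works towards that outcome.

```python
import random

def preferential_attachment(n_nodes: int, seed: int = 42) -> list[int]:
    """Simulate a simple preferential-attachment network.

    Each new node links to one existing node, chosen with probability
    proportional to that node's current degree. Returns the degree
    (link count) per node.
    """
    rng = random.Random(seed)
    degrees = [1, 1]   # two seed nodes, linked to each other
    targets = [0, 1]   # one entry per link endpoint; drawing uniformly from
                       # this list samples nodes proportionally to degree
    for new_node in range(2, n_nodes):
        target = rng.choice(targets)
        degrees[target] += 1
        degrees.append(1)
        targets.extend([target, new_node])
    return degrees

degrees = preferential_attachment(10_000)
degrees.sort(reverse=True)
top_5pct = sum(degrees[: len(degrees) // 20])
share = top_5pct / sum(degrees)
print(f"Top 5% of nodes hold {share:.0%} of all link endpoints")
```

Under a uniform distribution the top 5% of nodes would hold roughly 5% of the links; with preferential attachment their share is several times larger, which is exactly the concentration that Shirky and Benkler describe.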

Drezner and Farrell provided an argument that might be truer now than when they first approached the issue of democracy and participation on the web. They state that there was an overlap between the mainstream political media, i.e. the mediasphere, and the blogosphere. Since the early blogosphere, there has been an important link between mainstream journalism and the then new medium of blogs (24). Early bloggers who wrote about politics were often figures from the world of journalism, and “therefore [blogs] affect political debate by affecting the content of media reportage and commentary about politics” (22). Wallsten’s analysis of the overlap in topical discussion between political blogs and mainstream media presents a different view, since he concludes that “on a significant number of domestic and foreign policy issues, the blog and media agendas respond to different factors and are, therefore, likely to be unrelated to each other in any significant way” (579). This does not mean that there is no overlap at all. In fact, Wallsten states that he noticed a “positive, bidirectional relationship between media coverage and blog discussion” (580), where the discussion of a certain topic in mainstream media would be covered in the blogosphere shortly after, and vice versa. With increasing pressure on print media, coupled with the fact that all major, and minor, news media are represented on the internet, today the lines between mainstream media and political content on the web have not just been blurred: they have dissolved completely.

In his book The Myth of Digital Democracy, Matthew Hindman tries to diminish the utopian view of the public sphere on the web by arguing that “[b]loggers fit poorly into the narrative that has been constructed for them” (103). In this narrative, blogs are filled with empowered ordinary citizens who expand the “social and ideological diversity of the voices” (102) found online. The author identifies this kind of framing in the mainstream media in stories he calls “Joe Average Blogger” features (108). In such stories, journalists write about an ordinary citizen who suddenly gained a voice in the political arena through his or her blog. Instead of a collection of empowered citizens, however, a new elite of content creators has developed on the internet. Most content on the web will either never be seen or mostly be ignored. Only a few voices reach a large audience: his empirical study showed that “the overall size of the political public sphere was negligible” (Benkler 597). Hindman states that these facts make it difficult to assume that the rise of blogs has changed the ways in which citizens “have their voices heard in politics” (128). Overall, the author remains optimistic, since he argues that the debates that take place in the blogosphere are still of value to democracy.

After having shed light on the internet’s democratic potential and criticism on this perceived potential, the coming section will shift attention to two related, but very different concepts: the issue of echo chambers, as briefly mentioned in the introduction, and the concept of the filter bubble, which Eli Pariser describes as a “unique universe of information” created by algorithms that learn from past behavior (10). Here we move away from discussing earlier digital communication methods towards more contemporary ways, of which social media networks are the most relevant for the current web.


3. Echo Chambers and Filter Bubbles: Influencers of Polarization and Media Bias in the American Political Landscape

To further explore the issue of echo chambers on the web, we must try to provide a working definition of the concept. As briefly touched upon in the introduction, Cass Sunstein introduced the issue of echo chambers in 2001 in his book Republic.com. The term might be more popular now than ever, after the long and impactful presidential election of 2016 in the United States of America. The concept is clear-cut: like-minded individuals are likely to band together in an online environment, and those with differing opinions are likely to be pushed away from the space that these like-minded people inhabit. Certain opinions and views are prominent within this space; alternative opinions are drowned out. The inherent problem with this phenomenon is that understanding of, and empathy for, the Other will be eroded.

Writing about climate policy in the United States and the influence of echo chambers on policy making regarding climate change, Jasny and others define echo chambers as “social network formations that transform the ways in which information is transmitted and interpreted by actors” (782). The logic is quite simple: the nature of the internet makes it very easy for its users to find and connect with like-minded individuals, and at the same time to avoid deliberation with those who have a different outlook on issues. This practice erodes understanding of the other and undermines a shared (media) experience, long considered a prerequisite for a well-functioning democracy. Matthew Baum and Tim Groeling examined how decisions made by the editorial staff of partisan websites (FoxNews.com, DailyKos.com, and FreeRepublic.com) skew the content that gets featured on these sites, compared to the non-partisan online news outlets Associated Press and Reuters. The authors found some support for accusations from both the right and the left that there is a clear ideological bias in the media. They found evidence of “the right-skewed political orientation” of FoxNews.com (360), and found that the online platform of Fox favors Republican and conservative interests. It is important to note that they studied major news outlets, while in the current media landscape extreme partisan websites are gaining influence on readers and public trust in mainstream media is declining. Interestingly, Baum and Groeling also identified that the non-partisan Associated Press “prefers stories critical of Republicans” (360), but they note that this might be explained by the political climate prevailing when they conducted their research. For both liberal and conservative new media outlets, there exist “clear and strong preferences for news stories that benefit the party most closely associated with their own ideological orientations” (359).


Users who reside in echo chambers on the web and are not, or hardly, exposed to alternative viewpoints, or who are not open to empathizing with viewpoints other than their own, are at risk of polarization. Cass Sunstein writes, in his book Going to Extremes, about the dangers of like-minded individuals banding together in their own political bubble. Looking at experiments conducted in diverse fields, Sunstein identifies a common theme in extremist behavior, as he claims that “[w]hen people find themselves in groups of like-minded types, they are especially likely to move to extremes” (2). He argues that people who deliberate with other members of a group (be it slightly racist individuals, moderate feminists, or individuals who are inclined to invest in real estate) end up becoming more extreme in their views after having been in contact with these groups (3-4). This means that people within groups are inclined to move to more extreme positions on certain issues, corresponding to those of other group members. The internet offers many ways for individuals to connect to like-minded individuals. One of those is the modern equivalent of the space where in antiquity the Romans and Greeks met for social interaction: the forum. The internet forum is an ideal space for people to engage in discussion on topics they find interesting and for like-minded users to find one another, since forums are often devoted either to one specific field of interest or to a large number of topics. In the Dutch-speaking web sphere, the latter type of forum is exemplified by the Fok forum, which offers users the opportunity to discuss everything from film and philosophy to politics and planes. Forums usually display the available categories plainly, and topics are neatly organized and easily recognizable. The internet offers narrower focus as well. For example, people who enjoy growing bonsai trees can use search engines to identify forums that discuss only aspects of this particular hobby.

In the context of online social networks, individuals who find themselves in a minority that holds an opposite view on a certain issue can be deterred from this space by vocal (and even aggressive) others, as their efforts to start an open debate with those others prove fruitless.

In the current debate taking place in the mainstream media, attention has been given to the rise of mass public manipulation through data. According to Carole Cadwalladr, writing for The Guardian, the billionaire Robert Mercer has, in his offline adventures, funded the political campaigns of both Ted Cruz and Donald Trump, and has put large sums of money into ultra-conservative non-profit organizations and a right-wing think tank that propagates climate 'skepticism'. Reports show that Mercer tried to influence voters by disrupting the mainstream media using data technology. Mercer is alleged to have a stake in the company Cambridge Analytica, which has worked for Donald Trump's campaign and the Leave campaign in the United Kingdom, the movement that wanted British citizens to vote 'leave' in the Brexit referendum held in 2016. Cambridge Analytica is accused of using trackers to learn in detail about the personality traits of right-wing-leaning but mostly undecided voters whom the data deemed susceptible to subtle influence. The company then allegedly targeted these undecided voters with specific ads on Facebook, in an attempt to provoke an emotional reaction from the user and, in effect, to influence the political preference of that user. Michal Kosinski, the lead scientist of Cambridge University's Psychometric Centre, "found that with knowledge of 150 likes, their model could predict someone's personality better than their spouse" (Cadwalladr). Information gathered through likes on Facebook can be combined to create psychological profiles that can subsequently be used to predict and control behavior, which can lead to citizens being unknowingly coerced into voting against their own interests. Here it is relevant to refer back to the recently published study by Allcott and Gentzkow, though, since they concluded that the role social media played during the last elections is overstated by mainstream media outlets. So it is important to keep in mind that, despite the accusations directed at Robert Mercer, the actual impact of the shady activities he is accused of undertaking cannot at present be assessed accurately.

By turning away individuals who do not share the same sentiment towards an issue as other group members within an (online) group, an even more homogeneous group will be created. Following Sunstein's reasoning, this entire process leads to greater chances of polarized behavior and a higher likelihood that individuals subscribe to conspiracy theories or other fringe theories. It should be noted that users' adoption of a more polarized stance on certain issues need not always be a negative development. Consider, for example, a group that is moderately concerned about the future of the rainforest. Following Sunstein's argument, the individuals in this group will most likely become more concerned about the destruction of the rainforest and even less optimistic about its future than they were before interacting with other group members. This might make them willing to put more effort into trying to preserve rainforests, and members would be inclined to stop buying palm oil products, since oil palms are often planted in deforested areas (Clay 32). In other words, it very much depends on the viewpoint itself whether intensification of that viewpoint is in any way detrimental or not. But even in the example about rainforests, a group that subscribes to extremist views regarding a subject might be pushed to use extreme measures to achieve their common goal.


Indeed, the end point of extremism often is violence towards those who hold a point of view differing from that of the polarized group, as well as attempts to suppress the views of those who hold a different or moderate view. These violent aspects of polarization can, for instance, be seen in the countless terrorist attacks that have been committed since the attacks on the World Trade Center in New York on September 11, 2001. Strongly connected to the issues of false and heavily biased information on the web, and of users mainly receiving information through personalized Facebook and Twitter feeds, is the rise of conspiracy theories. The 2016 US elections were plagued by scandals and (verbal) attacks by Republicans on Democrats and vice versa. In what can be regarded as one of the most outrageous attacks, Republicans claimed that the Democratic candidate Hillary Clinton, and her campaign manager John Podesta, were at the head of a pedophile ring operating out of the basement of Comet Ping Pong, a Washington D.C. pizza restaurant (Politifact). The conspiracy theory came out of right-wing blogs and was picked up by Donald Trump supporters who frequent the discussion boards of 4chan and certain parts of the content aggregation site Reddit. The theory came to be known as Pizzagate. On both 4chan and Reddit, a rather large and very active group of Donald Trump supporters participated in pushing the theory. Reddit users created the subreddit /r/pizzagate in order to discuss the theory and present new 'evidence'. In an effort to quell fake news, Reddit administrators quickly banned the subreddit (Ohlheiser). The so-called evidence for the Pizzagate controversy was 'found' in a series of emails from the Democratic National Committee (DNC), the formal governing body of the Democratic Party in the United States, which were made public by WikiLeaks. In some of these emails, prominent Democratic Party member John Podesta referred to pizza in unusual phrasing, which prompted Donald Trump supporters to believe that pizza must have been code for something more sinister.

As is common in conspiracy thinking, proponents of the theory took the remarks made by Podesta and connected them to all kinds of unrelated elements, thereby creating a web of what they deem evidence for their claims. Users of Reddit, 4chan, and Twitter saw code words supporting their theory in the hacked DNC emails published by WikiLeaks. Pizzagate plainly exemplifies how fringe conspiracy theories can have an effect in the real world, since the hoax led proponents of the theory to visit and harass the restaurant's owner, and even incited an armed man to enter Comet Ping Pong (the Washington pizza restaurant referred to above) and fire his assault rifle in an apparent effort to 'investigate' the claims related to the theory.


As with most conspiracy theories that originate and circulate in online environments such as chat rooms, Pizzagate has been debunked by several mainstream media outlets, among them the fact-checking website Snopes and the New York Times. And as with most conspiracy theories, proponents easily dismiss these sources as being part of the conspiracy or, maybe somewhat ironically, as left-wing propaganda. In Conspiracy Theories and Other Dangerous Ideas, Cass Sunstein states that a conspiracy theory can be considered as such "if it is an effort to explain some event or practice by referring to the secret machinations of powerful people who have also managed to conceal their role" (16). The Pizzagate conspiracy theory actually diverges from this definition, since there was no actual event or practice to be explained by a vast conspiracy. What seems to be unique about this specific conspiracy theory is that it functions as a weapon in the war waging between Republicans and Democrats in the highly polarized American political landscape. It seems no longer to matter whether certain claims are true, as long as a certain political narrative can be pushed by referring to untrue claims. It is no coincidence that Oxford Dictionaries declared 'post-truth' the 2016 international word of the year (BBC). Now that the elections are over and Donald Trump has won, the theory is still being pushed by users of social media sites such as Twitter and Facebook, even though the political 'war' is over.

In the case of the Pizzagate controversy, even after the armed man with his rifle had to conclude that there was no basement at Comet Ping Pong, which meant that an important aspect of the theory was wrong, the man was not convinced of the inauthenticity of the conspiracy theory. Sunstein's argument that participation on the internet can increase polarization gained support from the work of Adamic and Glance, who examined linking patterns and discussion topics of both left- and right-wing political blogs during the 2004 presidential elections in the United States. They found that only one in six links at the top of the liberal and conservative blogospheres went to a source of the opposite ideology. In fact, left-wing blogs mainly linked to sources and blogs that were ideologically homogeneous, and in their discussion, "liberal and conservative blogs [focused] on different news, topics, and political figures" (43). In The Wealth of Networks, Benkler argues that crosslinking to different ideological material is not necessarily evidence of polarization and fragmentation, but could be considered "a normal allocation of attention to debates within one's political milieu and across the divide" (597). In the context of (genuine) fake news, linking across an ideological divide is often done by using a title that can be considered clickbait, so as to provoke outrage and, at the same time, arouse curiosity. This mix of emotions is deliberately generated to attract more clicks.

By combining link analysis with the more qualitative approach of content analysis, Hargittai, Gallo and Kane find little evidence for an increase in fragmentation and polarization, but instead conclude both that "political commentators are much more likely to engage those with similar views in their writing [and that] they also address those on the other end of the ideological spectrum" (85). Also addressing the polarization argument laid out by Sunstein, Gentzkow and Shapiro show that individuals are exposed to more ideologically diverse opinions on the web than in their offline lives, and that even those who actively try to shield themselves from dissenting opinions are generally subjected to opposing viewpoints anyway. In other words, there is no escaping the other's voice. They conclude that ideological segregation on the internet is "significantly lower than segregation of face-to-face interactions in social networks" (1831). The fact that it seems difficult to restrict exposure to one's own point of view does not mean that people accept other points of view, nor does it mean that dissenting opinions are addressed in a respectful manner or that they have an influence on the other's opinion.

In her book Hearing the Other Side, political scientist Diana Mutz addresses the apparent tension between the desire for a deliberative democracy and the simultaneous existence of a participatory democracy. A deliberative democracy demands from its citizens exposure to, and acceptance of, alternative opinions, or as Lawrence, Sides and Farrell put it: "we would like citizens both to be enthusiastic participants in politics and to respect diverse perspectives" (141). At the same time, a participatory democracy requires citizens to group with like-minded individuals in order to get their voice heard by politicians. According to W. Lance Bennett, this practice is what characterizes our era. He identifies our age as the age of personalized politics, an age in which large-scale political action can be achieved in a way that was never possible before, and in which "individual expression displaces collective action frames in the embrace of political causes" (37). This action can be directed not only at politicians, but at a broad range of targets. Bennett acknowledges that the personalization of political action poses several challenges for organizing protests, "chief among which concerns negotiating the potential trade-off between flexibility and effectiveness" (Bennett and Segerberg 772). By looking at how several networks that engaged in protests at the 2009 G20 London Summit used digital media, Bennett and Segerberg found that the group that adopted "a more personalized communication strateg[y] [...] maintained the strongest network" (Bennett and Segerberg 794). Diana Mutz stresses the seemingly contradictory ideal of the politically active individual who is regularly exposed to alternative viewpoints, and who respects and understands these alternative views:

We want the democratic citizen to be enthusiastically politically active and strongly partisan, yet not to be surrounded by like-minded others. We want this citizen to be aware of all of the rationales for opposing sides of an issue, yet not to be paralyzed by all of this conflicting information and the cross-pressures it brings to bear. We want tight-knit, close networks of mutual trust, but we want them to be among people who frequently disagree. And we want frequent conversations involving political disagreement that have no repercussions for people’s personal relationships. (125-126)

In the quote above, Diana Mutz implicitly critiques the general idea that Sunstein proposes in Republic.com. Sunstein argues that being exposed to alternative opinions is not only healthy for citizens, but in fact absolutely necessary: citizens must engage in deliberation with individuals who have different ideas about political and non-political issues, so as to avoid navigating in an echo chamber where one's own views are constantly confirmed. That users organize themselves in groups is not necessarily bad, but it becomes a problem "when politically active individuals can avoid people and information they would not have chosen in advance, [as] their opinions are likely to become increasingly extreme as a result of being exposed to more homogeneous viewpoints and fewer credible opposing opinions" (Conover et al. 89). Lawrence, Sides and Farrell investigate the idea put forward by Mutz, and argue that online deliberation and political participation may indeed be at odds with each other (141). In their research, readers of political blogs report a high degree of participation in politics, but at the same time the authors found that those who were most politically engaged were often also the most subject to political polarization (151-152). This is in itself not a remarkable finding, since it is rather impossible for individuals to form a polarized worldview without being bombarded by the same messages and opinions.

In the context of the internet, the ideal scenario for a participatory democracy would entail that citizens are active, check on their government and, for example, join a collective to take action when they strongly disagree with proposed government policy. This would mean that citizens have to find and deliberate with like-minded individuals, who most likely share the same opinion on partisan topics. For a deliberative democracy, on the other hand, these same active, organized citizens would have to be exposed to alternative opinions and would actually have to be active in different online spaces to be able to meet these politically heterogeneous individuals. Of course, it is unfair to present a false dichotomy. The internet does give its users ample opportunities to encounter alternative viewpoints. For example, people can find alternative opinions in the comments section underneath online articles. The problem with this example is that certain newspapers attract specific readers, with either a Liberal or a Conservative bias. In the case of Twitter, users can participate in deliberation with people who do not share the same viewpoint when they are using the same hashtag. But, although users can present several points of view around the same topic connected to a hashtag, not all hashtags are inherently neutral. A neutral hashtag is one that projects no bias, or as little bias as possible, as opposed to an ideologically charged hashtag. Examining bias in political searches on Twitter, Kulshrestha and others found that "queries related to democratic and republican candidates as well as democratic and republican debates have a democratic-leaning input bias." The hashtag #MAGA, which stands for "Make America Great Again," refers to Donald J. Trump's main campaign slogan during the 2016 presidential elections. It is an example of an ideologically charged hashtag. But even though one would expect this hashtag to be used mostly by right-wing Trump supporters, politically active opponents sometimes try to engage in what resembles hashtag hijacking. Hashtag hijacking refers to the overtaking of a specific hashtag that was purposely created to push a certain narrative; such hijacking provides an unintended counter-narrative to the preferred one. It does not necessarily have to be related to politics: the most infamous example of hashtag hijacking lies outside of that realm. When the McDonald's marketing team decided to push the #McDStories hashtag on Twitter, in the hope that Twitter users would do the online marketing for them, users quickly started sharing negative experiences they had had when visiting a McDonald's restaurant, with the #McDStories hashtag attached to their tweet (thebigstory.nl). As for politically engaged hashtag hijackers, Hadgu, Garimella, and Weber found that these users "are more active than other politically interested users" (56). The next section will discuss an issue related to Sunstein's concern about echo chambers: the filter bubble.

In his book The Filter Bubble, Eli Pariser warns readers about the inherent dangers of a personalized web experience. Pariser refers to modern-day internet filters on websites such as Google or Facebook as "prediction engines" (10). Internet companies monitor what users do, like, share, and have in common with like-minded others, in an effort to predict future behavior and preferences, and to shape a personalized browsing experience. This leads to, as the title of his book suggests, users navigating in a filter bubble, a tailor-made browsing experience created by algorithms that monitor user behavior (10). Of course, people have always consumed media that aligns with their worldview and interests, but Pariser argues that the filter bubble is a new phenomenon by identifying three dynamics unique to new media. Firstly, as opposed to audience members watching the same television program, you are alone in this bubble. This deprives people of a shared experience, and "pull[s] us apart" (10). Secondly, the filter bubble is invisible to its inhabitants. When viewers watch a certain talk show on a channel that is generally considered to push a right- or left-wing narrative, they are aware that at least some bias is present. On the internet, users are generally unaware of their predefined experience. Lastly, it is extremely hard or even downright impossible to opt out of the filter bubble. One would have to refrain from using mainstream social media sites, search engines, and even most mainstream browsers to be able to escape the bubble (10-11). An important element to highlight in the case of the filter bubble is that human action ultimately affects the tailor-made experience, since "human factors play a role not only in the development of algorithms, but in their use as well" (Bozdag 214); it is the interplay between the algorithm and the individual that determines the contents of their bubble. Humans possess agency, and especially web-savvy individuals are not slaves to algorithms.

Referring back to Cambridge Analytica's attempts to influence voters in both the Leave movement's Brexit campaign and the presidential elections in the United States further problematizes Pariser's notion of the filter bubble. Besides the engineers who created the online platforms we all use (perhaps unintentionally) influencing the way we view the (digital) world, members of the upper class now use high-tech companies like Cambridge Analytica to promote their own political views and deploy big data to strengthen the power of the elite and sow distrust between societal groups, as the case of Robert Mercer exemplifies. In studying Facebook, Bakshy, Messing and Adamic (2015) found evidence suggesting that users are exposed to a substantial amount of news content on their feeds from Facebook friends who are ideologically opposed to them. This nuances fears of users navigating in an ideological bubble that is created for them by algorithms.

What seems to connect the issues of the echo chamber and the filter bubble is the shaping of a biased worldview, either occurring 'naturally' through users' own efforts (although the platforms these users use have their own agency in this process) or created by algorithms. Several authors have investigated how far the issue of bias in (online) news consumption reaches, resulting in differing conclusions about the size of this perceived problem. The following paragraphs examine several of these studies.

Peter Dahlgren argues that democracy has fallen on hard times and that the "hope is often expressed that the Internet will somehow have a positive impact on democracy and help alleviate its ills" (147). Although the internet has become a part of the public sphere, or rather, a new sphere for the public to deliberate in, it is becoming clear that this does not necessarily benefit the values of a democratic society. Instead, in recent years the internet seems to have been increasingly used as a tool for misinformation, and as a weapon in what is starting to look more and more like ideological propaganda warfare. Indeed, the internet gives its users both the tools and the space to spread their message and to spread false (or skewed) information. These phenomena drive groups who identify with either side of the political spectrum further away from each other, and provide a hotbed for polarization. Coming from a statistics background, Flaxman, Goel, and Rao wrote about the effect of the rise of search engines and social networks on ideological segregation (299). They examined the browsing behavior of 50,000 users of the Bing search engine, using data provided by the Bing Toolbar. Although users were confronted with more material from the opposite end of the political spectrum than in the offline age, the authors concluded that "articles found via social media or web-search engines are indeed associated with higher ideological segregation than those an individual reads by directly visiting news sites" (318). At the same time, the authors found that the consumption of online news generally did not differ very much from offline consumption, as users mainly visited the homepages of mainstream news media in line with their own ideological views (318). In the environmental sciences, Williams and others performed a network analysis of Twitter users to find out how skeptics – or rather denialists – of climate change interact with opposing views. Based on the content of the tweets, the authors identified what kind of attitude users have (activist, skeptic, or neutral) and visualized their behavior in three types of networks: follower, retweet, and mention networks (128). They found that users indeed mainly interacted with like-minded individuals and that "exposure to alternative views was relatively rare" (135). Users who did engage with others holding alternative viewpoints mostly held less polarized views.

Experiencing a walled-off browsing experience, whether in a filter bubble or inside an echo chamber, can have an effect on users' perception of issues. James Druckman and Michael Parkin examined if and how American voters are influenced by editorial slant, which they define as "the quantity and tone of a media outlet's candidate coverage as influenced by its editorial position" (1030). They found that editorial slant indeed has an effect on a voter's view of a certain candidate (1046). Other research suggests, however, that there is no inherent bias in (American) news media. Dave D'Alessio and Mike Allen performed a meta-analysis of media bias in presidential elections in the United States, for which they considered 59 quantitative studies. They first identify three types of bias discussed in the body of work they examine: gatekeeping bias (deciding what gets covered and what does not), coverage bias (how much a certain topic gets covered), and statement bias (how much a member of the media can insert their own opinion into a piece) (135-136). Ultimately, the authors conclude that "there is no evidence whatsoever of a monolithic liberal bias in the newspaper industry, at least as manifest in presidential campaign coverage" (148), nor is there any evidence of a conservative bias in the industry. The authors nuance this claim by adding that specific newspapers and/or reporters do, in fact, show a clear and significant bias. Having scrutinized the literature on echo chambers, filter bubbles, conspiracy theories, and fake news, in the following chapter I will explain the methodology I chose to use for the empirical part of this thesis.

4. Methodology

The empirical method employed in this thesis was established by taking inspiration from one of the studies conducted in the Field Guide to Fake News, which was developed by the Public Data Lab.1 This so-called field guide relied on digital methods (Rogers) for investigating several issues connected to the dissemination of fake news. In the chapter 'Mapping Fake News Hotspots on Facebook', the researchers relied on a list of 22 fake news stories about political topics related to the 2016 presidential elections. Provided by BuzzFeed News,2 these were the stories that garnered the most engagement on Facebook, in the form of likes or shares. In an effort to shed light on fake news that was propagated on Facebook not during, but after the elections, the method applied in the aforementioned field guide has served as a basis for the methodology I developed for this thesis. The reasons for focusing on fake news written after the inauguration of the current president are as follows. I would like to investigate what kind of news stories were deemed newsworthy once the battle (the presidential elections) had been fought, and whether the topics then written about differ significantly from those written about during the elections: are people, as one might hypothesize, less likely to engage in discussions about political content after hot political events such as an election? Another focus is determining what kind of actors were then responsible for pushing any fake news.

My methodology will be described step by step below, but one important aspect has to be stressed first. The distinction between the method applied in this thesis and the method used for exposing publics that engage with fake news on Facebook in the aforementioned Field Guide is that for this thesis, a new list of popular fake news stories had to be compiled from scratch, since the focus lies on fake news after the elections. This means that the starting point for the empirical study was to compile a list of popular, preferably heavily shared fake news stories that appeared after the elections. This list was compiled using the popular fact-checking website snopes.com. As on other fact-checking websites, journalists working for Snopes investigate popular stories on the web for their authenticity and judge the validity of these stories. Fact-checking websites have begun to play an important role in the democratic process. Not only do websites such as snopes.com, politifact.com, and factcheck.org handle individual claims made by prominent politicians, they also devote articles to debunking popular viral news stories that contain dubious or downright untrue claims. It is worth addressing, though, that in the current media landscape and political climate in the West, even fact-checking websites are polarized. At the same time that Donald Trump's (easily verifiable) lies were piling up during his campaign, his supporters' trust in fact-checking websites was dwindling. In a YouGov poll held in October 2016, 77 percent of Trump supporters expressed that they had no or not much trust in fact-checking websites, compared to 11 percent of Hillary Clinton supporters (Golshan). The right-wing news organization Breitbart has criticized Facebook for asking several fact-checking websites (Snopes, ABC, Politifact and Factcheck.org) to help fight fake news on the social media site. According to Breitbart's commentator Ben Kew, the aforementioned fact-checkers "all [...] have records of left-wing partisanship." While there is no fact-checking website that checks the claims made on other fact-checking websites, one would assume that if disinformation were spread on Snopes or similar sites, a simple Google search would reveal at least some reliable sources addressing this; no such sources were found.

1 http://www.publicdatalab.org

All stories featured on Snopes receive a specific rating, ranging from false to true, with ratings like 'mostly false' and 'mostly true' in between. The stories that Snopes judges are listed chronologically, so cataloguing widely spread hoaxes within a specific time frame and at a certain interval can be done rather easily. As a starting point for gathering stories, January 21, 2017 was chosen, given that this is the day after Donald Trump's inauguration. This date was a clear breaking point with the past: the election, and with it the fierce battle, was over and the new president was installed. For the ten weeks after the inauguration, I selected two stories per week. These are stories that the reporters at Snopes deemed 'false' and that can thus be conceived of as fully-fledged fake news. In selecting these stories, there was a preference for stories that are about politics or political in nature, at the expense of more random stories (stories on, for example, celebrities or food). After selecting a story, I followed the source that Snopes accused of propagating the story, as in some cases Snopes was able to identify the original purveyor of the fake news story. If a direct website was not provided by Snopes, a Google search was performed for the story title in a clean, freshly installed and never before used research browser (Firefox), so as not to compromise the search results. I then selected the highest-ranking website that pushed the fake news story, after which I entered the URL into the browser plugin CrowdTangle. This tool lets users see how many interactions users have had with a certain story, on which social media platform(s) the story has been shared, and how often. Interactions are calculated in CrowdTangle by adding up the number of likes, comments, and shares that a story has garnered on social media sites. After following the steps outlined above, a list of twenty popular fake news stories was created, which is presented in Figure 2. It is important to stress that interacting with a story (sharing, liking, or commenting on it) is not equivalent to endorsing or uncritically engaging with that false story. Most people who are active in the Twittersphere would recognize the by now (in)famous phrase "retweets do not equal endorsements." The same logic applies to the behavior exhibited by critical users who interact with content on Facebook. Just because a Republican voter likes or comments on a fake news article does not automatically mean that the user believes the story, nor that they agree with the sentiment expressed in it.

WEEK NUMBER — FAKE NEWS STORY

Week 1
Obama Daughters BUSTED–Dad Has To Bail Them Out Of Jail At 3AM (1836 interactions)
URL: http://thelastlineofdefense.org/breaking-obama-daughters-busted-dad-has-to-bail-them-out-of-jail-at-3am/
Trump Identifies Canadian Women As 'Second Biggest Threat to U.S. After ISIS' (6731 interactions)
URL: http://www.burrardstreetjournal.com/trump-threatened-by-canadian-women/

Week 2
Pope Francis slams Trump's immigration ban; 'It's hypocrisy to call yourself a Christian' (2374 interactions)
URL: http://en.southlive.in/world/2017/01/29/pope-francis-slams-trumps-immigration-ban-its-hypocrisy-to-call-yourself-a-christian
Kellyanne Conway: "Racism Is A Small Price To Pay For Being Great Again" (22598 interactions)
URL: http://politicot.com/kellyanne-conway-racism-small-price-pay-great/

Week 3
President Fires White House Butler, Holdover From Eisenhower Era (9057 interactions)
URL: https://extranewsfeed.com/president-fires-white-house-butler-holdover-from-eisenhower-administration-4b480c83f82b
President Trump Makes ENGLISH The OFFICIAL LANGUAGE OF THE United States (69589 interactions)
URL: http://donaldtrumppotus45.com/2017/01/26/breaking-president-trump-makes-english-the-official-language-of-the-united-states/

Week 4
Trump Just Ended Obama's Vacation Scam And Sent Him A Bill (59497 interactions)
URL: http://thelastlineofdefense.org/breaking-trump-just-ended-obamas-vacation-scam-and-sent-him-a-bill-you-have-to-see-to-believe/
Sabotage: Obama is commanding an army of 30,000 anti-Trump activists from his home 2 miles from the White House (18681 interactions)
URL: https://www.infowars.com/sabotage-obama-is-commanding-an-army-of-30000-anti-trump-activists-from-his-home-2-miles-from-the-white-house/

Week 5
Donald Trump Just Sent FEDERAL TROOPS Into The CHICAGO GHETTO. There Are ARMED TANKS Rolling Through CHIRAQ! (94567 interactions)
URL: https://www.facebook.com/mediatakeout/videos/1561809773850964/
2 Liberal Democrat Congressmen Arrested For Planning Trump's Assassination (17838 interactions)
URL: http://thelastlineofdefense.org/breaking-2-liberal-democrat-congressmen-arrested-for-planning-trumps-assassination/

Week 6
George Soros Arrested And Charged With Hate Crimes Against America (7314 interactions)
URL: http://thelastlineofdefense.org/breaking-george-soros-arrested-and-charged-with-hate-crimes-against-america/
Whoopi Goldberg: Navy SEAL Widow was "Looking for Attention" (57173 interactions)
URL: http://undergroundnewsreport.com/the-truth/whoopi-goldberg-navy-seal-widow-looking-attention/

Week 7
Mike Pence Promises To Create The Department Of Anti-Witchcraft (4446 interactions)
URL: http://www.patheos.com/blogs/laughingindisbelief/2017/02/mike-pence-promises-create-department-anti-witchcraft/
FBI Issues Warrant For Obama's Arrest After Confirming Illegal Trump Tower Wiretap (6792 interactions)
URL: http://thelastlineofdefense.org/breaking-fbi-issues-warrant-for-obamas-arrest-after-confirming-illegal-trump-tower-wiretap/

Week 8
Snoop Dogg Arrested For Conspiracy After Talking About His 'Murder Trump' Video (3874 interactions)
URL: http://thelastlineofdefense.org/breaking-snoop-dogg-arrested-for-conspiracy-after-talking-about-his-murder-trump-video/
Nancy Pelosi Was Just Taken From Her Office In Handcuffs (4806 interactions)
URL: http://thelastlineofdefense.org/breaking-nancy-pelosi-was-just-taken-from-her-office-in-handcuffs/

Week 9
Hillary Clinton Appointed To Replace Disgraced US Senator (2064 interactions)
URL: http://thelastlineofdefense.org/breaking-hillary-clinton-appointed-to-replace-disgraced-us-senator/
Anthony Weiner Placed In Protective Custody–Will Turn State's Evidence Against Hillary (3770 interactions)
URL: http://thelastlineofdefense.org/anthony-weiner-placed-in-protective-custody-will-turn-states-evidence-against-hillary/

Week 10
Jeff Sessions Disbarred About Thousands Of Complaints (5780 interactions)
URL: https://leftliberal.com/2017/03/29/jeff-sessions-disbarred-about-thousands-of-complaints/
Sweden's 'no-go zone' crisis: Three police officers injured after being attacked by thugs (2625 interactions)
URL: http://www.express.co.uk/news/world/766057/Three-police-officers-injured-attacked-thugs-Stockholm-no-go-zone

Figure 2: List of collected fake news stories (including URLs). The list was manually compiled by going through all articles that Snopes debunked in the ten weeks after Donald Trump's inauguration.


To get a sense of which news stories were interacted with the most, the list above was used to create an Excel sheet featuring each news story (in the form of a short key expression), the number of interactions the story garnered, and the week in which the story was featured. This can be seen in Figure 3. This prepared data sheet was then entered into the online data visualization program Raw. For the visualization, the chart type 'Circle Packing' was chosen, in order to demonstrate the weight of each story relative to the others (see Figure 4 on the next page).

Figure 3: Representation of Excel sheet. Featured are: a shortened story title, the total number of interactions as reported by the Crowdtangle tool, and in what week after Donald Trump’s inauguration the story was featured on Snopes.

News story Total interaction number Week following inauguration

Obama's daughters 1836 Week 1a

Canadian women 6731 Week 1b

Pope Francis 2374 Week 2a

Kellyanne Conway 22598 Week 2b

Butler fired 9057 Week 3a

English language 69589 Week 3b

Vacation bill 59497 Week 4a

Anti-Trump activists 18681 Week 4b

Tanks in Chicago 94567 Week 5a

Trump assassination plot 17838 Week 5b

George Soros arrested 7314 Week 6a

Whoopi Goldberg 57173 Week 6b

Department of anti-Witchcraft 4446 Week 7a

Illegal wiretap 6792 Week 7b

Snoop Dogg arrested 3874 Week 8a

Nancy Pelosi arrested 4806 Week 8b

Hillary Clinton senator 2064 Week 9a

Evidence against Hillary 3770 Week 9b

Jeff Sessions disbarred 5780 Week 10a

Sweden no-go zone 2625 Week 10b


Figure 4: Data visualization of the compiled list of fake news stories (created with Raw). Size represents the number of interactions (with larger bubbles indicating more interaction). The ‘Tanks in Chicago’ story is by far the most interacted with on Facebook, while the story on Obama’s daughters, who were claimed to have been arrested, is the least interacted with.
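The ranking underlying Figure 4 can also be reproduced programmatically. The following sketch uses a subset of the Figure 3 data to order the stories by interaction count (a minimal illustration only; the actual visualization was produced with Raw, not with this code):

```python
# Subset of the Figure 3 data: (story key, total interactions, week).
stories = [
    ("Obama's daughters", 1836, "Week 1a"),
    ("Tanks in Chicago", 94567, "Week 5a"),
    ("English language", 69589, "Week 3b"),
    ("Vacation bill", 59497, "Week 4a"),
    ("Whoopi Goldberg", 57173, "Week 6b"),
]

# Sort in descending order of interactions; the circle sizes in
# Figure 4 reflect exactly this relative weight.
ranked = sorted(stories, key=lambda s: s[1], reverse=True)

print(ranked[0][0])   # most interacted with: Tanks in Chicago
print(ranked[-1][0])  # least interacted with (in this subset): Obama's daughters
```

The same ordering step is what Raw performs internally when mapping the interaction column to circle size, which is why the 'Tanks in Chicago' bubble dominates the visualization.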
