Academic year: 2021

MA Erasmus Mundus Master’s Degree:

Journalism, Media and Globalisation

FAKEBOOK:

How Users Perceive Fake News and Misinformation on Facebook

by

Alessandro Timur Amato

Student ID: 11300280

Master’s Thesis

Graduate School of Communication

Master’s programme Communication Science

Supervisor/Examiner: Dr. Rachid Azrout


Abstract

This study examines how people identify and perceive fake news on Facebook. Recent academic research has focused mainly on the reasons behind the spread of misinformation, such as people’s psychological traits or political ideology. This study looks at more practical factors involved in the choices Facebook users make when clicking on news content, and investigates whether those users perceive and identify fake news based on elements such as clickbait, media sources (source credibility and the political alignment of the source) and the political message conveyed by the news. Both the political message of the news and the alignment of the source are expected to influence users’ perceptions of the news in terms of political congruence. A one-group, repeated-exposure experiment determines the relevant predictors of news perception by Facebook users, for instance whether and to what extent their perceptions and reactions are affected by political ideology and knowledge of the media source. The results show that Facebook users mostly tend to trust news and sources that match their overall political ideology. The source of the news also significantly affects users’ perception of the news, whereas the effect of clickbait language is not as strong as expected.

Introduction

Social media such as Facebook have proven numerous times to be an ideal platform for the spreading and thriving of misinformation (Del Vicario et al., 2016). During the 2016 American presidential campaign, the circulation of misinformation and fake news on Facebook increased dramatically, triggered by viral news stories with all sorts of allegations concerning the candidates Donald Trump and Hillary Clinton (Allcott & Gentzkow, 2017). Media organizations have tried to combat the epidemic, often with conflicting results, as these same organizations might also be responsible for spreading misinformation; that is, information that is later corrected with a follow-up update. What we have witnessed is a battlefield where mainstream media countered the influence of fake political news on people’s opinions by engaging in wide debunking operations against online trolls and misperceptions (The New York Times, 2016).


Academic research often uses the term misinformation to indicate the spreading of incorrect information through media channels, or at times the more specific term disinformation to identify fabricated information deliberately aimed at deceiving the audience (Ecker, Cook, & Lewandowsky, 2015). Misinformation can be a quite broad category, for instance including the circulation of news and debatable opinions regarding controversial issues such as GMOs or vaccines (Bode & Vraga, 2015), where the information is often incorrect or without clear scientific foundations; but it can also include news content, often breaking and sensational, that is created to misinform and generate traffic to sell advertisements. However, media have recently labeled online misinformation as fake news, stressing the deceptive nature of its spreading as opposed to the veracity of the information the audience should consume from trustworthy media organizations (Nature, 2016). In other words, there is a societal definition of fake news stemming from the media that is very much in line with the academic definition of misinformation: the fabrication and spreading of incorrect information, the purpose of which might be either the deliberate deception of the targeted audience or a mistake in spreading information, possibly corrected by a later update. Since these two labels refer to the same concept, this study will use both fake news and misinformation without distinction.

Scholarly literature on misinformation has encompassed various fields of academic research and focused on different elements that characterize the spreading of misinformation, from the polarization of the audience to political partisanship, from personality traits to emotions such as anger and anxiety (Bessi, 2016; Chen, Sin, Sei-Ching, Theng & Lee, 2015; Weeks, 2015). Specifically, online misinformation has captured the attention of scholars for its rapidity or viral attributes, its scope and its long-term effects on people’s misperceptions (Bode & Vraga, 2015).

Within the communication field, some have investigated the reasons behind the sharing of fake news online among students, with results showing that most respondents in their sample had shared misinformation at least once, above all for social interaction purposes (Chen et al., 2015). Other studies have mainly focused on the causes and factors behind the spreading of fake news or the specific traits of online trolls, such as the content analysis conducted by Bessi et al. (2014) on Facebook conspiracy pages, which showed a correlation between Facebook users’ engagement and the news consumption patterns of their network. According to those findings, users’ aggregation around certain shared (mis)beliefs may determine the virality of the (mis)information.

Fake news and its spreading have also impacted academic fields outside communication research, where attempts at combatting online misinformation have led to the creation of algorithms that detect fake news and prevent it from going viral, based on language elements such as the frequency with which certain words appear in a text (Conroy, Rubin, & Chen, 2015). With approximately the same purpose but a different approach, some authors within the communication field have experimented with ways of correcting misinformation and misperceptions online through various techniques, such as Facebook’s related-article feature, whereby Facebook suggests an article or related content based on the user’s preferences, displayed right below the article of interest (Bode & Vraga, 2015; Ecker, Cook, & Lewandowsky, 2015).

Notwithstanding the increasing number of publications on the topic, communication research still lacks literature that analyzes the elements involved in how people perceive and identify online fake news, and how these are characterized in the first place. Before the act of sharing fake news on social media, there are important factors that drive people’s news habits which are crucial for their evaluation of the news and, in turn, the spreading of potentially fake news. As Tony Harcup and Deirdre O’Neill showed in their study on news values (2016), people’s online news habits are influenced by values that pertain mainly to the specific topic of the news item, such as news about celebrities, good news or news that resonates with a specific culture, but also by other more structural factors including entertainment and shareability of the content. Harcup and O’Neill (2016) identified shareability as generally funny stories people want to share to laugh or make friends laugh, or outrageous stories people share to find empathy or agreement from their peers. The authors used shareability as a cross-category encompassing news values such as celebrity stories, cultural resonance and good news, based on the extent to which these were shared on Facebook (Harcup & O’Neill, 2016). In this sense shareability can be interpreted as a potential dimension affecting the perception of news in a certain audience, therefore leading to a higher or lower likelihood of sharing the information. The present study is concerned both with the factors involved in the process of news perception by Facebook users and with whether, and to what extent, these users are willing to read and share the information regardless of its trustworthiness. Findings might give additional insight into whether and why Facebook users share news that they perceive as fake, thus adding to the academic literature on viral misinformation and misinformation sharing (Bessi et al., 2014; Chen et al., 2015).

Considering both news values and the literature investigating the reasons behind the act of spreading misinformation, this experiment sets out to investigate the importance and magnitude of the cognitive and structural factors involved in the process of perceiving and identifying fake news on Facebook. In other words, this study will aim to answer the following research question:

Which factors lead Facebook users to perceive news as fake?

Specifically, the goal of the experiment is to investigate what makes people perceive and trust news on Facebook to different degrees based on several hypothetical factors such as clickbait elements, and find out whether people’s political ideologies and knowledge of the media source have a prominent effect on their perception of news as fake or real. Additionally, the study seeks to investigate whether users who identify certain news as fake are willing to read or share the content on Facebook, and why so.

The results will provide insight not only into people’s behaviors on Facebook and towards fake news, but will possibly provide new ways of combatting the spreading and scope of online fake news more effectively by identifying those prominent factors that influence Facebook users’ perception of news.


Theoretical Framework and Hypotheses

Studies conducted on the polarizing effects of social media (Bessi et al., 2014) showed that users who have a defined stance on and perception of certain controversial issues, such as GMOs and vaccines, differ in their tendency to perceive news as fake. The analyses by Bessi et al. (2014) identified recurrent patterns in users’ attitudes on Facebook regarding conspiracy theories and scientific information, according to which there is a positive relationship between the likelihood of users sharing misperceptions and the number of likes these misperceptions have among their network of friends and pages; moreover, these patterns tend to be highly polarized according to users’ likes in their networks. These findings are in line with a specific trend in communication research that sees Facebook as a catalyst for misperceptions and fake news, in that it provides users with echo chambers where these are constantly confirmed and rarely challenged (Bessi, 2016). The study by Bode and Vraga (2015) also showed how misperceptions on Facebook can be reinforced by motivated reasoning and a search for news driven by confirmation bias. In their study, the authors experimented with a new way of correcting misinformation on Facebook using the then-new feature of related links, and their findings suggest that, since misperceptions are significantly reduced when a link to correct information is given, social media can potentially be used to correct misinformation (Bode & Vraga, 2015). There are still, however, many issues concerning users’ selective exposure to news on Facebook, and their motivated reasoning and confirmation bias in the search for news. Studies on political partisanship have looked at how political ideology affects the inclination toward misinformation regarding certain political candidates or broader news topics.
In his study, Weeks (2015) showed not only that users are inclined to consume information that fits their political views, but that this relationship is moderated by feelings of anger and anxiety concerning certain political issues.

Selective exposure and motivated reasoning are well-known phenomena in communication studies. In their barest form, these concepts refer to how people seek confirmation of their beliefs or preexisting attitudes when selecting and processing information.


With an increasing share of news being consumed on the internet and the shift to a more active search for news online, scholars have observed that these phenomena might have intensified, given users’ ability to neglect certain news sources altogether (Stroud, 2008). The findings of Iyengar and Hahn (2009) suggest that users are inclined to look for and consume media that support and reinforce their beliefs and political preferences; this tendency creates a polarized media audience, at least in the United States, where conservatives’ news organization of choice is Fox News while Democrats prefer MSNBC. Not only do these results depict a country, and increasingly a world, where the audience is politically split in half, but they also encourage these same news organizations to cater to their audiences by presenting them with increasingly slanted news (Iyengar & Hahn, 2009). The claim that the online realm, and by extension social media, might intensify this trend has been challenged by Stroud in her study of online users’ potential exposure to diverse news types and sources (Stroud, 2008). Notwithstanding the wide availability of news sources thanks to new media, her results still show a very strong relationship between Republican political leaning and use of conservative media outlets on the one hand, and Democratic political leaning and use of liberal media outlets on the other, although with relatively higher significance when applied to US cable TV news (Stroud, 2008). These considerations on the influence of selective exposure and motivated reasoning on users’ news consumption habits allow us to draw a connection to the perception of fake news on Facebook. Since academic literature in the communication field still lacks studies on selective exposure and motivated reasoning applied to social media, this study will apply those concepts to investigate whether users’ perception of news as fake is conditioned by their political ideology and other factors.

To clarify further, the perception of news as fake might be influenced not only by the inherent political message of the news, whether in line with users’ perspective or not, but also by whether it comes from a source users resonate with in terms of matching political views. Political ideology will likely play a role here regarding the specific news sources users trust. The perception of


Wolfgang (2015) in their study might in turn be influenced by the political alignment of news organizations, as illustrated by the above-mentioned studies on selective exposure (Iyengar & Hahn, 2009; Stroud, 2008). In other words, users might trust only certain news organizations whose political alignment matches theirs. Hence, we expect Facebook users to be more likely to consider as fake news that does not match their political ideology, and likewise news that comes from a source whose editorial standpoint contrasts with their political ideology: for instance, news from MSNBC for a conservative, or news from Fox News for a liberal Democrat. To summarize, this first argumentation treats the perception of news as fake as a more subjective evaluation of news, based on individual political ideology, the political message expressed in the news and the political alignment of the news source. Hence, the first two hypotheses of this study:

H1: Facebook users will likely consider news that does not match their political ideology as fake.

H2: Facebook users will likely consider news that comes from sources whose political alignment does not match their ideology as fake.

But the perception of news is not an exclusively subjective, ideological process of evaluation. In fact, both academics and media professionals would argue that there is an inherent degree of objectivity in news, and that there are ways to discern trustworthy news from fake news.

Consequently, this experiment will also take a more objective approach and test whether and to what extent the perception of news is influenced by content-specific elements of Facebook posts such as clickbait language.

Clickbait language refers to written language that is emotionally overcharged, even hyperbolic, and that has specific cues in the structure of its syntax and in the use of verbs and adjectives (Chakraborty, Paranjape, Kakarla, & Ganguly, 2016). The concept of clickbait language has also been applied in studies within the computer science field experimenting with ways of automated fake news detection (Conroy, Rubin, & Chen, 2015). In fact, automated fake news detection uses two different approaches: linguistic approaches and network approaches. Whereas the network approach looks at metadata and other web-based cues that are considered instances of deception (such as inconsistent meta titles and descriptions, but also inherent characteristics of web pages that are part of a network of websites), the linguistic approach is concerned with linguistically structural elements that are usually coded as instances of deception in fake news (Conroy, Rubin, & Chen, 2015). However, the problem with analyzing just the presence of clickbait language is the subjective nature of its interpretation on the part of the audience: in other words, someone might consider clickbait something that feels completely normal to someone else. In fact, one of the main problems that characterized the surge in fake news during the 2016 American presidential election is that fake news had all the linguistic characteristics of real news. Traditional media organizations, too, sometimes publish articles that contain clickbait cues such as “read more” or “click here”. In this study, it is expected that social media users are quite familiar with the concept of clickbait language, can recognize it, and can detect whether the news post they are viewing consists of accurate, trustworthy information or might be fake news.

H3: The presence of clickbait language in the Facebook post will likely lead users to perceive the content as fake news.

Additionally, the wide availability of news sources online can make the process of checking sources extremely difficult and overwhelming, and Facebook adds to this problem by suggesting news posts based on algorithms whose reliability and accuracy have been severely questioned more than once (Oremus, 2016). This creates confusion on the part of the user, whose capability of source recognition might be impaired. That is why the presence or absence of clickbait alone is not a sufficiently objective measure to flag fake news on social media; source credibility should be investigated as well: that is, whether the news comes from a well-known and well-respected source (high credibility) or from an unknown website or questionable blog (low credibility) (Knobloch-Westerwick et al., 2015). For this reason, this experiment looks at source credibility within social media and tests whether the name of the source has a prominent impact on users’ perception of news as fake. Source credibility is here used to indicate whether participants in the survey have a relatively clear idea of where the news content comes from: in other words, whether the content is distributed by a well-known, trustworthy news organization such as Reuters or The Guardian, or whether it is generated by a website that is not listed or recognized as a news outlet or journalistic publication. In their study on political information searching in Germany, Knobloch-Westerwick et al. (2015) defined source credibility as a means by which users attend to the persuasiveness of a certain political message. In other words, the authors hypothesized that users are more likely to be persuaded by political messages stemming from high-credibility sources than from low-credibility ones (Knobloch-Westerwick et al., 2015), based on findings of previous studies in the communication field (Petty & Cacioppo, 1986). As for the fourth hypothesis of this study, users are expected to recognize the source of the news and thereupon evaluate the accuracy of the news content:

H4: Facebook users will likely consider news that comes from low-credibility sources as fake, and news that comes from high-credibility sources as accurate.

While political leaning, source credibility and the presence or absence of clickbait language might influence how users perceive news as fake on Facebook, this experiment, as mentioned in the introduction, will also look at how users react to the same news on Facebook: in other words, whether users are willing to read and share the content of a Facebook news post regardless of their evaluation of it (fake or reliable). There are several reasons behind the act of reading or sharing news content on Facebook that go beyond the mere evaluation of its trustworthiness. Regardless of whether it is perceived as fake, users read and share content on Facebook for reasons of simple social interaction, because they seek confirmation or because they would like to know what their peers think about the issue at hand (Chen et al., 2015). In this sense, the concept of shareability adopted by Harcup and O’Neill (2016) in their study on news values plays an important role, as it provides a new dimension of newsworthiness that is specific to social media and that users might rely on when consuming and sharing news content on Facebook. Shareability has been defined by Janine Gibson as “stuff that makes you laugh and stuff that makes you angry,” both feelings that are likely to be aroused in users by political news on Facebook, whether fake or not (quoted in Harcup & O’Neill, 2016, p. 11). The act of reading and sharing news content will thus be treated in this study as a dependent variable, influenced on different levels by political leaning, source credibility, presence of clickbait language and perceived accuracy of the news. We optimistically expect that users will be more likely to share trustworthy news content based on considerations regarding the language cues and source credibility, regardless of their political leaning. The perceived accuracy of the news is expected to act as a mediator of users’ willingness to read and share news. Based on these assumptions, the fifth hypothesis is the following:

H5: The perceived accuracy of the news will play a mediating role in the main relationship: the less accurate the news is perceived to be, the smaller the likelihood of it being read or shared.

Methodology

To test the five hypotheses illustrated above, this experiment relies on an online survey embedding a within-subject research design with multiple exposures. The following section illustrates the sampling method used to recruit respondents, the operationalization of the concepts discussed in the theoretical framework, and the procedure and realization of the conditions and survey design, including the reasons behind the specific choices.

Sampling Method

The ideal sample for this experiment would be a population consisting of average Facebook users, with a mix of age, gender, ethnicity and profession as diverse as possible so that results could be projected onto the general population. To achieve this goal without any budget, an online survey was circulated among the Mundus Journalism students’ community and their peers, and among the employees of Elsevier, a multinational publishing company. The community of Mundus Journalism students offers an extremely diverse sample in terms of origins and gender, although most of its members belong to the same community of practice (journalism, media and communication) and are thus more likely to recognize trustworthy media outlets and detect clickbait language. This rather narrow sample could potentially hamper the external validity of the results. Consequently, to obtain more variance on variables such as age and profession, the survey was also circulated among Elsevier’s workforce, which consists of professionals working in fields ranging from IT and publishing to communication, management and sales; the company kindly agreed to support the experiment. The total number of recruited respondents was 183, of whom 145 completed the survey: a completion rate of 79%. This choice of sample is mainly justified by circumstantial limitations but holds an acceptable degree of generalizability. To account for the limitations of the chosen sample, the survey was designed in a completely randomized way so as to minimize order effects. Thanks to the within-subject design of the experiment with multiple exposures, as well as the randomized order, every respondent functioned as his or her own control condition, thus ensuring a high level of internal validity. All the respondents agreed to participate in the study before starting the survey and were debriefed about the purpose and nature of the experiment once the survey was completed. The survey was anonymous: no personal information such as name or address was collected, and automatically stored information such as IP addresses, as well as the respondents’ answers, is kept confidential in line with the ethical form signed and submitted to the University of Amsterdam.

Survey Design

To test the five hypotheses of this study, the experiment consisted of a within-subject design with multiple exposures; in other words, each respondent was presented with all the conditions of the experiment: the presence or absence of clickbait elements, the political message of the news story and the source of the news story. Since the purpose of the survey was to measure users’ perception of news on Facebook, the survey was designed to contain only Facebook posts showing 12 different news stories. These posts were independently designed to contain all the possible combinations of the variables: clickbait language, source credibility, political message and source alignment.1 None of the news stories was true; in fact, all were created based on existing news and were not actual fake news stories that circulated during the campaign, thus eliminating the risk that a respondent might have already read the same news. Six stories contained clickbait language and six did not; six stories conveyed a left-wing political message while six conveyed a right-wing political message; four stories were from made-up, low-credibility sources, four from left-aligned publications and four from right-aligned publications. To simulate the news stories that circulated during the 2016 American presidential elections, the Facebook posts were created on topics concerning the candidates Donald J. Trump and Hillary Clinton. This also made the conditions more consistent: had a wider spectrum of political issues been chosen instead, variation would have been due to respondents’ differing levels of opinion and understanding of each issue. After a pre-test questionnaire in which respondents were asked to indicate their age, gender and overall political ideology, each respondent was presented with the 12 Facebook posts in a randomized order, and for each post they were asked to indicate on a scale from 1 to 10 how accurate they perceived the news to be, to what extent they would read the story and to what extent they would share the story with other people. At the end of the survey, two optional questions appeared only to those respondents who chose a value smaller than 5 on the accuracy scale and greater than 5 on the likelihood-of-reading scale, and the same combination for the likelihood-of-sharing scale. These were open questions investigating the reasons behind certain choices: especially why respondents wanted to read and/or share a news story they deemed to be inaccurate. In conclusion, all the respondents were thanked for participating in the experiment and debriefed about the purpose of the survey and the simulative nature of the stories they viewed.
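The presentation logic described above can be sketched as follows. This is a minimal illustration, not the actual survey software: the post identifiers, the answer triples and the function name are assumptions, and the real survey was administered through an online survey tool.

```python
import random

POSTS = list(range(1, 13))  # the 12 stimulus posts, identified here as 1..12

def run_survey(respondent_answers):
    """Shuffle the 12 posts for one respondent and flag the posts that
    would trigger the optional open questions: perceived accuracy below 5
    combined with a reading or sharing likelihood above 5."""
    order = random.sample(POSTS, k=len(POSTS))  # fresh random order per respondent
    flagged = []
    for post in order:
        accuracy, read, share = respondent_answers[post]
        if accuracy < 5 and (read > 5 or share > 5):
            flagged.append(post)  # would receive an open question at the end
    return order, flagged
```

For example, a respondent who rates a post 2/10 on accuracy yet 8/10 on reading likelihood would be asked, at the end, why they would read a story they consider inaccurate.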

1 All the Facebook posts were intentionally created in Photoshop as faithfully as possible to Facebook’s visual design, and are


Operationalization

Hypotheses 1 and 2 required a combination of variables in order to be tested. First, in the pre-test, each respondent answered the question “What is your overall political ideology?” on a scale from 0 to 10, with 0 being left-wing and 10 being right-wing. Second, they viewed the 12 Facebook posts and answered the following questions: 1) “How accurate do you think this news is?”; 2) “On a scale from 1 to 10, how likely are you to read this story?”; 3) “On a scale from 1 to 10, how likely are you to share this story?”. The principle behind these questions is similar to the method employed by Knobloch-Westerwick et al. (2015) to investigate participants’ reactions to the news, but this survey relied on a numerical scale without labels, more suitable for measuring the immediate perceptions of the respondents without their overthinking the answers. Depending on the Facebook post and the manipulation, the value on the political ideology scale was then compared to the scores on perceived accuracy, reading likelihood and sharing likelihood. According to the earlier observations on selective exposure (Iyengar & Hahn, 2009), Facebook posts conveying a left-wing message (supportive of Hillary Clinton) or a right-wing message (supportive of Donald Trump) are expected to be perceived as more or less accurate by respondents depending on their political ideology. In the same way, Facebook posts published by sources that are publicly considered to be politically left- or right-wing (e.g. The Guardian as a left-aligned publication and Breitbart as a right-aligned publication) are expected to match, or not, the political views of the respondents. To summarize with an example, a story supportive of Donald Trump published by a right-wing outlet like Breitbart will score high on all three dependent variables (perceived accuracy, reading likelihood and sharing likelihood) if the respondent has right-wing political views (>5 on the political ideology scale). The stimuli were thus constructed in a way that would show respondents a clear political message in the headline as well as the name and URL of the media source. Accordingly, the independent variables for H1 and H2 were operationalized as story congruence and source congruence, respectively.
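The congruence coding just described can be sketched as follows. The function names are illustrative, and the text does not state how an ideology score of exactly 5 was treated, so it is classed here as neither side; this is an assumption.

```python
def political_leaning(ideology):
    """Map a 0-10 ideology score to a side: below 5 left-wing, above 5
    right-wing. A score of exactly 5 is treated as 'center' (assumption)."""
    if ideology < 5:
        return "left"
    if ideology > 5:
        return "right"
    return "center"

def story_congruence(ideology, story_lean):
    """H1: the post's political message matches the respondent's ideology."""
    return political_leaning(ideology) == story_lean

def source_congruence(ideology, source_alignment):
    """H2: the outlet's alignment matches the respondent's ideology.
    Made-up, unknown sources carry no alignment (None)."""
    return source_alignment is not None and political_leaning(ideology) == source_alignment
```

Under this coding, a respondent scoring 8 on the ideology scale is congruent with a pro-Trump story from Breitbart on both dimensions, but congruent on neither with a pro-Clinton story from an unknown source.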


As discussed and hypothesized (H3) in the theoretical section, the presence of clickbait in the Facebook posts was operationalized based on the research conducted by Chakraborty et al. (2016), and identified in the use of capital letters in the headlines, such as “BREAKING” or “EXCLUSIVE”; the use of hyperbolic or emotionally charged language, for instance by reporting insults or expressing disgust, shock or outrage in such a way that the audience would feel moved or sympathetic about the content of the Facebook post; and lastly the use of calls to action such as “read more/click here” or “watch the video.” The independent variable for H3 was thus the presence of clickbait language in the headline of the Facebook news posts.
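These three cue types can be approximated with a rough heuristic detector, sketched below. The cue lists are illustrative assumptions rather than the lists used to build the stimuli, and since hyperbolic emotional tone is hard to capture lexically, repeated exclamation marks stand in for it here.

```python
import re

# Assumed cue lists: all-caps markers and common calls to action.
CAPS_CUES = {"BREAKING", "EXCLUSIVE"}
CALLS_TO_ACTION = ("read more", "click here", "watch the video")

def has_clickbait_cues(headline):
    """Return True if the headline shows any of the three cue types
    operationalized in the text: all-caps markers, a call to action,
    or overcharged punctuation (a crude proxy for hyperbolic language)."""
    caps_runs = re.findall(r"[A-Z]{2,}", headline)  # runs of 2+ capital letters
    if any(run in CAPS_CUES for run in caps_runs):
        return True
    lowered = headline.lower()
    if any(cta in lowered for cta in CALLS_TO_ACTION):
        return True
    return "!!" in headline
```

A headline like “BREAKING: candidate caught on tape” would be flagged, while “Clinton leads in new national poll” would not.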

Our fourth hypothesis (H4) purports that Facebook users will trust news depending on where it comes from; in other words, source credibility as conceptualized in the theoretical part of this study and by Knobloch-Westerwick et al. (2015) plays an important role. This is a crucial part of the stimuli, as it reflects how fake news during the presidential elections came from dubious sources and publications with names deceptively similar to mainstream media (BuzzFeed, 2016). The stimuli were thus divided into “known media source” and “unknown (made-up) media source”, which resulted in three different groups of sources: four left-aligned well-known sources, four right-aligned well-known sources, and four unknown, low-credibility sources. The independent variable for H4 was thus operationalized as source credibility. For the sake of clarity, the unknown sources also followed the “political message stimulus” (H1): two of them conveyed right-wing messages (stories supportive of Trump) while the other two conveyed left-wing messages (stories supportive of Clinton). As for the dependent variables, this study was designed with three dependent variables affected by all the independent variables operationalized so far: perceived accuracy, reading likelihood and sharing likelihood. The dependent variables were all interval variables based on a scale from 1 to 10. All the stimuli are summarized in the following table, including the names of the publications used in the Facebook posts.


Table 1: Summary of Stimuli — Facebook Posts

Source type             | Clickbait, left-wing message | Clickbait, right-wing message | No clickbait, left-wing message | No clickbait, right-wing message
Unknown source          | Break News                   | Last Line of Defense          | The Democrat Report             | US All News
Left-wing known source  | BuzzFeed                     | Business Insider              | The Guardian                    | Vice News
Right-wing known source | Forbes                       | Breitbart                     | Fox News                        | New York Post

Finally, the fifth hypothesis (H5) theorized a mediated relationship between the respondents’ perceptions of the news and their willingness to read and/or share the story on Facebook: if a respondent deems a story to be fake, he or she will be less likely to read and share it. This hypothesis could not be operationalized in a single survey question, so the mediation was tested during the data analysis. It is worth mentioning that the open questions aimed at respondents who would read and share a story they considered inaccurate give an indication of the limits of this mediation: the more such cases, the less supported the hypothesis.

Analysis and Results

Since all participants in the survey were exposed to all the conditions, we used paired sample t-tests, comparing two conditions at a time, to find out whether there were significant differences in the sample’s mean scores. For the first hypothesis (H1), concerning the political message, the responses were computed in SPSS Statistics as:

- Congruous: political ideology < 5 and mean score of left-wing stories on the dependent variables (e.g. perceived accuracy);

- Alternatively: political ideology > 5 and mean score of right-wing stories on the dependent variables;

- Incongruous: political ideology < 5 and mean score of right-wing stories on the dependent variables;

- Alternatively: political ideology > 5 and mean score of left-wing stories on the dependent variables.

The t-tests identified a significant difference between the mean scores of the congruous and incongruous conditions on the perceived accuracy scale (t=13.9, p<.001), the reading likelihood scale (t=12.6, p<.001) and the sharing likelihood scale (t=4.6, p<.001). A similar analysis was conducted for the other conditions by grouping the responses and comparing their mean scores with paired sample t-tests. The results are summarized in Tables 2, 3, 4 and Fig. 1.
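The recoding and paired comparison described above can be sketched as follows. The ratings are hypothetical and only illustrate the procedure, not the study’s data.

```python
import numpy as np
from scipy import stats

# Hypothetical data: each respondent's mean perceived accuracy of left-wing
# and right-wing stories, plus ideology on a 0-10 scale (<5 = left-leaning).
ideology = np.array([2, 8, 3, 9, 1, 7])
left_story_acc = np.array([6.5, 3.0, 7.0, 2.5, 6.0, 3.5])
right_story_acc = np.array([3.0, 6.0, 2.5, 7.0, 3.5, 6.5])

# Congruous = rating of stories matching the respondent's ideology.
congruous = np.where(ideology < 5, left_story_acc, right_story_acc)
incongruous = np.where(ideology < 5, right_story_acc, left_story_acc)

# Paired-sample t-test comparing the two conditions within respondents.
t, p = stats.ttest_rel(congruous, incongruous)
print(round(t, 2), round(p, 4))
```

Because the same respondents rated both conditions, the paired test compares within-person differences rather than two independent groups.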

Table 2: Mean Scores of Independent Variables Story Congruence (H1) and Clickbait (H3) with Significance

Independent Variable | Accuracy (1-10) | t-test            | Reading Likelihood (1-10) | t-test           | Sharing Likelihood (1-10) | t-test
Congruous            | 5.678           | t=13.9, p<.001    | 5.297                     | t=12.6, p<.001   | 2.255                     | t=4.6, p<.001
Incongruous          | 3.678           |                   | 3.699                     |                  | 1.799                     |
Clickbait            | 4.596           | t=-1.687, p=.094  | 4.448                     | t=-.993, p=.322  | 1.998                     | t=-.814, p=.417
No Clickbait         | 4.773           |                   | 4.552                     |                  | 2.047                     |

Table 3: Mean Scores of Independent Variables Source Congruence (H2) and Source Credibility (H4)

Independent Variable | Accuracy (1-10) | Reading Likelihood (1-10) | Sharing Likelihood (1-10)
Left-Wing Source     | 5.05            | 5.144                     | 2.167
Right-Wing Source    | 5.013           | 4.599                     | 2.008

Table 4: Results of t-tests for the above Independent Variables (Table 3)

Paired Sample t-test                 | Accuracy         | Reading Likelihood | Sharing Likelihood
Unknown Source & Left-Wing Source    | t=-8.08, p<.001  | t=-9.5, p<.001     | t=-3.9, p<.001
Unknown Source & Right-Wing Source   | t=-9.48, p<.001  | t=-6.7, p<.001     | t=-1.99, p<.05
Left-Wing Source & Right-Wing Source | t=.327, p=.744   | t=4.5, p<.001      | t=2.6, p<.05

Fig. 1: Mean scores of all independent variables (Congruous/Incongruous, Clickbait/No Clickbait, Unknown/Left-Wing/Right-Wing Source) on the dependent variables Accuracy, Reading Likelihood and Sharing Likelihood (1-10 scales). [Bar chart omitted]

But the paired sample t-tests only tell us about the effect of each variable in isolation. Since the respondents were presented with all the conditions simultaneously, it is necessary to examine all the hypothesized effects together while also accounting for age, gender

and political ideology (measured in the pre-test questionnaire). This was achieved through a mixed model analysis, also known as multi-level modelling. To run the mixed model analysis in SPSS, the data needed to be restructured so that each individual respondent appeared 12 times as a unit of analysis, once for each experimental condition. This operation was necessary because the experimental design had no independent observations: all the conditions were shown to all the participants.

The mixed model analysis allows us to test the estimated effect of our independent variables on one dependent variable at a time, while controlling for age, gender and political ideology. Additionally, we can look at the interactions between the different conditions and, for instance, political ideology, to see whether and to what extent their effects change when other stimuli are in place. The output of the multi-level modelling provides an “estimate of fixed effect” (including standard error, t value and significance) and “covariance parameters”.

For our main hypothesis (H1), the mixed model analysis shows that a congruous story, that is a story in line with the respondent’s political ideology (<5 for left-wing and >5 for right-wing on the scale), scores 1.187 points higher than an incongruous one on the perceived accuracy scale (t=9.85, p<.001), .813 points higher on the reading likelihood scale (t=6.777, p<.001), and .306 points higher on the sharing likelihood scale (t=4.249, p<.001). Since the independent variable for H2, source congruence, required a different approach, let us first establish the effect of the presence of clickbait elements in the Facebook posts on the perceived accuracy of the news. Here the estimate of fixed effect for the clickbait condition is -.222, meaning that the Facebook posts with clickbait elements scored .222 points lower than those without clickbait on the perceived accuracy scale.
Although this is quite a small difference, the t value (-1.979) and p value (<.05) indicate that the result is statistically significant, thus supporting the third hypothesis (H3). For reasons of brevity, the individual results of the mixed model analysis are summarized in the tables below (Tables 5, 6, 7), but it is important to point out the effects of source credibility and source congruence: a Facebook post published by a well-known source is perceived as 1.066 points more accurate than one published by an unknown, low-credibility source (t=9.173, p<.001).
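The restructuring step (one row per respondent per condition) and a random-intercept mixed model can be sketched in Python with pandas and statsmodels. The data here is randomly generated and the condition coding is an assumption for illustration only; the thesis itself used SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical wide data: 40 respondents x 12 condition ratings (1-10).
n, k = 40, 12
wide = pd.DataFrame(rng.integers(1, 11, size=(n, k)).astype(float),
                    columns=[f"post_{i}" for i in range(k)])
wide["respondent"] = range(n)
wide["ideology"] = rng.integers(0, 11, size=n)

# Restructure so each respondent appears 12 times, once per condition.
long = wide.melt(id_vars=["respondent", "ideology"],
                 var_name="condition", value_name="accuracy")

# Illustrative condition coding: flag the first six posts as clickbait.
long["clickbait"] = long["condition"].isin(
    [f"post_{i}" for i in range(6)]).astype(int)

# Random intercept per respondent, fixed effects for clickbait and ideology.
model = smf.mixedlm("accuracy ~ clickbait + ideology",
                    long, groups=long["respondent"])
result = model.fit()
print(result.params)
```

The random intercept absorbs each respondent’s baseline tendency to rate posts high or low, so the fixed effects estimate the condition differences within respondents.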

Source congruence, instead, seems to have a negative and non-significant main effect on the perceived accuracy of the news. However, if we look at the interaction between the respondents’ political ideology and the alignment of the source (source congruence), the data indicates that the political alignment of publications does influence how respondents perceive the news, in line with hypothesis two (H2). More specifically, political ideology acts as a moderator, so that source alignment has a significant effect on perceived accuracy at different levels of political ideology. By including political ideology in the mixed model analysis we can establish the estimated effect of the sources on the perceived accuracy of the Facebook posts. Table 5 shows that when the respondent’s political ideology equals 0 (fully left-wing), a Facebook post published by a left-wing source is perceived as 1.7 points more accurate, and one published by a right-wing source as 1.55 points more accurate. The analysis also provides an interaction coefficient of political ideology for both source types: -0.17 for left-wing sources and -0.55 for right-wing sources. This means that for each one-point increase in political ideology (1, 2, 3…), the effect of a left-wing source decreases by 0.17 and that of a right-wing source by 0.55. We can then calculate the estimated fixed effect of the sources at each level of respondents’ political ideology by adding the interaction term to the baseline estimate. For instance:

- Political ideology = 1 → estimated effect of left-wing source = 1.70 − 0.17 × 1 = 1.53

In conclusion, the analysis only partially supports hypothesis two (H2): there is no significant main effect of source congruence, but there is a significant interaction between political ideology and the alignment of the source, which can produce significant effects on perceived accuracy (see Table 6 for the complete calculation).
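The arithmetic behind Table 6 can be reproduced directly from the reported coefficients (the baseline effects at ideology 0 and the per-point interaction terms):

```python
# Marginal effect of a politically aligned source at a given ideology level:
# ideology-0 estimate plus the interaction coefficient times ideology.
def source_effect(base: float, interaction: float, ideology: int) -> float:
    return round(base + interaction * ideology, 2)

# Left-wing source: 1.70 at ideology 0, interaction -0.17 per scale point.
left = [source_effect(1.70, -0.17, i) for i in range(11)]
# Right-wing source: 1.55 at ideology 0, interaction -0.55 per scale point.
right = [source_effect(1.55, -0.55, i) for i in range(11)]

print(left[1], right[3])  # 1.53 -0.1, matching Table 6
```

Running this reproduces both columns of Table 6, e.g. a left-wing source’s estimated effect falls to 0 at ideology 10, while a right-wing source’s effect reaches -3.95.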


Table 5: Mixed Model Analysis — Effect of Independent Variables on Perceived Accuracy (one model for each independent variable)

Independent Variable                        | Estimated fixed effect | t      | p     | SE
Story Congruence                            | 1.187                  | 9.85   | <.001 | 0.117
Source Congruence                           | -0.287                 | -2.244 | .025  | 0.128
Clickbait                                   | -0.222                 | -1.979 | .048  | 0.112
Source Credibility                          | 1.066                  | 9.173  | <.001 | 0.116
Left-Wing Source (Ideology Respondent = 0)  | 1.704                  | 5.498  | <.001 | 0.31
Right-Wing Source (Ideology Respondent = 0) | 1.55                   | 5.003  | <.001 | 0.31

Table 6: Mixed Model Analysis — Estimated Effect of Source Congruence for Different Levels of Respondents’ Political Ideology

Ideology Respondent | Estimated Fixed Effect, Left-Wing Source | Estimated Fixed Effect, Right-Wing Source
0                   | 1.704*                                   | 1.55**
1                   | 1.53                                     | 1.00
2                   | 1.36                                     | 0.45
3                   | 1.19                                     | -0.1
4                   | 1.02                                     | -0.65
5                   | 0.85                                     | -1.2
6                   | 0.68                                     | -1.75
7                   | 0.51                                     | -2.3
8                   | 0.34                                     | -2.85
9                   | 0.17                                     | -3.4
10                  | 0                                        | -3.95
*/** p<.001


Table 7: Mixed Model Analysis — Effect of Independent Variables on Reading Likelihood (unique model, all IVs)

Independent Variable | Estimated fixed effect | t     | p     | SE
Story Congruence     | .307                   | 4.25  | <.001 | 0.072
Source Congruence    | .142                   | 1.6   | .110  | 0.088
Clickbait            | -0.040                 | -.558 | .577  | 0.072
Source Credibility   | .183                   | 2.072 | .038  | 0.088

Table 8: Mixed Model Analysis — Effect of Independent Variables on Sharing Likelihood (unique model, all IVs)

Independent Variable | Estimated fixed effect | t      | p     | SE
Story Congruence     | 1.187                  | 9.85   | <.001 | 0.117
Source Congruence    | -0.287                 | -2.244 | .025  | 0.128
Clickbait            | -0.222                 | -1.979 | .048  | 0.112
Source Credibility   | 1.066                  | 9.173  | <.001 | 0.116

Finally, our fifth hypothesis (H5) theorized a mediation effect between the perceived accuracy of the Facebook posts and the likelihood of reading and sharing the stories: the more accurate a respondent perceives the news to be, the more likely he or she is to read the story and share it with other people. According to Baron and Kenny (1986), mediation occurs when three conditions are met: 1) the independent variable influences the assumed mediator (path a); 2) the assumed mediator has a significant effect on the dependent variable (path b); 3) when paths a and b are controlled, the direct relationship between the independent variable and the dependent variable (path c) is zero, or as close to zero as possible. Running the mixed model analysis with all the independent variables together (clickbait, source credibility, story congruence and source congruence) shows their effect on the assumed mediator, perceived accuracy (path a; Table 9). Then, by inserting the accuracy scale into the model as an independent variable, we test its effect on the likelihood of reading and sharing the stories; in both cases a higher value of perceived accuracy leads to higher chances that the story would be read and shared. Finally, while testing this last relationship we can see that the effects of the other independent variables are drawn much closer to 0, thus meeting all the requirements of a mediation and supporting hypothesis five (H5). The results of this last mixed model analysis, including t values, p values and standard errors, are shown in Tables 9, 10 and 11.
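The three Baron and Kenny steps can be sketched with ordinary least squares on simulated data. The coefficients below are invented to mimic the hypothesized causal chain (congruence → perceived accuracy → sharing), not estimates from this study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300

def ols(y, *cols):
    """Least-squares coefficients for y on an intercept plus the given columns."""
    X = np.column_stack([np.ones(n), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Simulated data following the hypothesized chain with invented coefficients.
congruence = rng.integers(0, 2, n).astype(float)
accuracy = 3 + 1.2 * congruence + rng.normal(0, 1, n)
sharing = 1 + 0.6 * accuracy + rng.normal(0, 1, n)

path_a = ols(accuracy, congruence)[1]   # IV -> mediator
path_c = ols(sharing, congruence)[1]    # total effect IV -> DV
_, path_b, path_c_prime = ols(sharing, accuracy, congruence)

# Baron & Kenny: with the mediator controlled, the direct path c'
# should shrink toward zero relative to the total effect c.
print(round(path_a, 2), round(path_b, 2),
      round(path_c, 2), round(path_c_prime, 2))
```

With data generated this way, path a and path b come out clearly positive while the direct effect c' collapses toward zero once accuracy is controlled, which is the pattern the mediation hypothesis predicts.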

Table 9: Effect of Independent Variables on Perceived Accuracy (path a)

Variable           | Estimated fixed effect | t      | p     | SE
Story Congruence   | 1.187                  | 10.109 | <.001 | 0.538
Source Congruence  | 0.039                  | 0.275  | .783  | 0.143
Clickbait          | -0.168                 | -1.43  | .153  | 0.117
Source Credibility | 1                      | 7.009  | <.001 | 0.116

Table 10: Effect of Perceived Accuracy on Reading Likelihood (paths b and c)

Variable           | Estimated fixed effect | t     | p     | SE
Story Congruence   | 0.108                  | 1.064 | .288  | 0.102
Source Congruence  | 0.374                  | 3.1   | .002  | 0.12
Clickbait          | 0.005                  | 0.054 | .957  | 0.099
Source Credibility | 0.257                  | 2.09  | .036  | 0.123
Accuracy           | 0.594                  | 26.29 | <.001 | 0.022

Table 11: Effect of Perceived Accuracy on Sharing Likelihood (paths b2 and c2)

Variable           | Estimated fixed effect | t      | p     | SE
Story Congruence   | -0.011                 | -0.16  | .873  | 0.068
Source Congruence  | 0.131                  | 1.641  | .101  | 0.08
Clickbait          | 0.005                  | 0.072  | .943  | 0.065
Source Credibility | -0.086                 | -1.064 | .288  | 0.081


Discussion and conclusion

Political views are often so deeply rooted in people’s minds that altering them in any efficient way is deemed impossible. Neuroscientists have studied the human brain to discover why this happens, and why human beings treat inconvenient truths as personal insults (Kaplan, Gimbel, & Harris, 2016). It is no wonder that social media are not immune to these aspects of the human mind, but the scale at which they are reiterated and confirmed is alarming, especially when the fairness of the most important political elections in the world is at stake. The results of this experiment confirm the claims of the branch of communication studies that investigates news consumption phenomena such as selective exposure, confirmation bias and motivated reasoning, as well as the fears that Facebook isolates users in echo chambers, or filter bubbles, where their truths are constantly reaffirmed by the media and peers they follow and interact with (Iyengar & Hahn, 2009; Stroud, 2008; Petty & Cacioppo, 1986).

Political ideology and message resonance (or congruence) turned out to be the first and foremost indicators on which people base their evaluation of news. These results are all the more troubling in a digital age of near-infinite media possibilities, where anyone can search, read and research anything in a few clicks. Instead, Facebook users form their judgments based on the few headlines, comments and likes they see on a Facebook post, without necessarily knowing, or even caring, where the news comes from or whether it is true (all of the news in this experiment was ultimately fake). The fact that Facebook users seek confirmation of their political views when looking for news also goes against what Bode and Vraga (2015) optimistically envisioned in their study: that certain features of social media might in fact help correct people’s misperceptions around key societal and political issues. In such a context, it is perhaps not surprising that fake news about Pope Francis endorsing Donald Trump ahead of the elections outperformed, in terms of engagement, legitimate news shared by the most popular media organizations on Facebook (BuzzFeed, 2016). The results of this survey also suggest that other


factors, such as the source of the news and the use of clickbait language, influence users’ perception of news to some extent, but their effect, although significant, remains very much subordinate to that of the inherent political message of the news story. In fact, another somewhat worrisome result is that Facebook users may not pay much attention to the so-called “objective cues” used in automated fake news detection, such as clickbait language (Chakraborty, Paranjape, Kakarla, & Ganguly, 2016): its effect on the evaluation of news was minor and barely significant compared to that of the political message of the news story.

On the bright side, however, the results show that users do rely on the source of the information when judging the accuracy and trustworthiness of the news they encounter: source credibility has a significant effect on all the dependent variables of this study (perceived accuracy, reading and sharing likelihood). Additionally, source credibility, understood as trust in well-known media as opposed to unknown ones, has a greater effect on users’ perceptions than the political congruence of the source hypothesized in hypothesis two. Perhaps this is a hopeful sign, as Knobloch-Westerwick et al. (2015) observed in their study, that professional, objective journalism from well-known, trusted media organizations still makes a difference, and that we are not spiraling towards a world where slanted news and blatant political bias are the new professional standards. But media organizations should take the recent virality of fake news on Facebook as a warning, a reminder of what professional journalism is not and of where the difference lies between fake news and legitimate news. On this last point, it seems useful to recall the concept of shareability as explained in the theoretical section (Harcup & O'Neill, 2016). One of the characteristics of fake news shared during the 2016 American presidential campaign was its ability to spark curiosity in readers by presenting them with hyperbolic headlines, alleged exclusive content and utterly unbelievable facts. In this experiment certain news stories (mostly the ones from unknown, made-up sources) tried to recreate this phenomenon. Two questions were specifically asked of those respondents who deemed the news to be inaccurate but


said they would nonetheless read or share it. Of those responses, 93 (more than 50%) said they would read at least one of the stories they considered to be inaccurate, and 20 explained that they would do so for fun, entertainment or curiosity. Although only 10 respondents would also share inaccurate stories, these numbers tell us something about how fake news becomes viral, and potentially more dangerous, as it reaches more and more people who might actually believe what is being reported. As unreasonable as it might sound, perhaps professional journalists and media organizations committed to the profession’s ethical conduct should stop chasing clicks in a race they cannot win, and instead differentiate their content in other forms, relying more on fact-checking websites or on third-party institutions that can verify the news being reported. Virality should not be the objective of professional, trustworthy news reporting. Changing people’s political views is a more distant goal, but what objective journalism can do is provide the audience with facts and their not-too-speculative interpretation while fighting the counterfeit out there.

Finally, the limitations of this experiment deserve a few words. Certainly the greatest obstacle to achieving broadly generalizable results is the sampling method: a purposive, not fully randomized sample cannot produce results that can be projected onto the general population. Moreover, the fact that story congruence and source congruence relied on the same measure made it impossible to analyze both independent variables in the same model, leaving a greater degree of uncertainty as to the effect of source congruence across the conditions. A larger and more diverse sample would also allow for a more precise examination of the source and its influence on news perception, as one would expect its effect to run in a similar direction to that of the political message conveyed by the story. Future studies on social media and selective exposure to news content could build on these results and analyze variables such as the political message and the alignment of the source in more depth, and not necessarily in a binary way (left-wing or right-wing). Last but not least, this study was conducted on political news around the American presidential election of 2016, a very specific and narrow context. Different contexts could provide


more interesting and relevant results, and the same method could be applied to non-political news with a focus on one particular issue, be it climate change or GMOs, to gauge the degree of polarization around those issues.

References

Allcott, H., & Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. Stanford University Press, 1-40.

Azrout, R., Van Spanje, J., & De Vreese, C. (2012). When News Matters: Media Effects on Public Support for European Union Enlargement in 21 Countries. Journal of Common Market Studies, 50(5), 691-708. DOI: 10.1111/j.1468-5965.2012.02255.x

Baron, R. M., & Kenny, D. A. (1986). The Moderator-Mediator Variable Distinction in Social Psychological Research: Conceptual, Strategic, and Statistical Considerations. Journal of Personality and Social Psychology, 51(6), 1173-1182.

Bessi, A. (2016). Personality Traits and Echo Chambers on Facebook. Computers in Human Behavior, 65, 319-324.

Bessi, A., Petroni, F., Del Vicario, M., Caldarelli, G., Scala, A., Zollo, F., & Quattrociocchi, W. (2014). Viral Misinformation: The Role of Homophily and Polarization. arXiv, 1-12.

Bode, L., & Vraga, E. K. (2015). In Related News, That Was Wrong: The Correction of Misinformation Through Related Stories Functionality in Social Media. Journal of Communication, 65, 619-638.

BuzzFeed. (2016, December 15). BuzzFeed: Fake News 50. Retrieved January 20, 2017, from BuzzFeed.com: https://docs.google.com/spreadsheets/d/1sTkRkHLvZp9XlJOynYMXGslKY9fuB_e-2mrxqgLwvZY/edit#gid=652144590

BuzzFeed. (2016, November 16). This Analysis Shows How Viral Fake Election News Stories Outperformed Real News On Facebook. Retrieved December 1, 2016, from BuzzFeed.com: https://www.buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook

Chakraborty, A., Paranjape, B., Kakarla, S., & Ganguly, N. (2016). Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media. arXiv, 1-8.

Chen, X., Sei-Ching, J. S., Yin-Leng, T., & Chei, S. L. (2015). Why Students Share Misinformation on Social Media: Motivation, Gender, and Study-level Differences. The Journal of

Conroy, N. J., Rubin, V. L., & Chen, Y. (2015). Automatic Deception Detection: Methods for Finding Fake News. Association for Information Science and Technology (pp. 6-10). St. Louis: ASIST.

Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., . . . Quattrociocchi, W. (2016). The spreading of misinformation online. PNAS, 113(3), 554-559.

Ecker, K., Cook, J., & Lewandowsky, S. (2015). Misinformation and How to Correct It. (John Wiley & Sons, Ed.) Emerging Trends in the Social and Behavioral Sciences, 1-17.

Harcup, T., & O'Neill, D. (2016). What is news? Journalism Studies, 1-20.

Iyengar, S., & Hahn, K. S. (2009). Red Media, Blue Media: Evidence of Ideological Selectivity in Media Use. Journal of Communication, 59, 19-39.

Kaplan, J. T., Gimbel, S. I., & Harris, S. (2016). Neural correlates of maintaining one’s political beliefs in the face of counterevidence. Scientific Reports, 6, 1-11.

Knobloch-Westerwick, S., Mothes, C., Johnson, B. K., Westerwick, A., & Wolfgang, D. (2015). Political Online Information Searching in Germany and the United States: Confirmation Bias, Source Credibility, and Attitude Impacts. Journal of Communication, 65, 489-511.

Nature. (2016). Epistemology of Fake News. Springer Nature, 540, p. 525.

Oremus, W. (2016). Slate. Retrieved February 15, 2017, from www.slate.com: http://www.slate.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_works.html

Petty, R., & Cacioppo, J. (1986). The elaboration likelihood model of persuasion. In L. Berkowitz (Ed.), Advances in Experimental Social Psychology, 19, 123-205.

Stroud, N. J. (2008). Media Use and Political Predispositions: Revisiting the Concept of Selective Exposure. Political Behavior, 30, 341-366.

The New York Times. (2016). Media’s Next Challenge: Overcoming the Threat of Fake News. Retrieved February 25, 2017, from www.nytimes.com: https://www.nytimes.com/2016/11/07/business/media/medias-next-challenge-overcoming-the-threat-of-fake-news.html


Appendix: Experimental Conditions

Below are all the Facebook posts used in the experimental conditions. Each post comes with a description of the particular stimuli applied to it. It is useful to remember that the order of the posts was randomized using Qualtrics, so that every respondent saw them in a different order.

Facebook Post 1: Unknown, low-credibility source; No Clickbait; Right-wing message (supportive of Trump)

Facebook Post 2: Known left-wing source; Clickbait; left-wing message (unsupportive of Trump)


Facebook Post 3: Unknown low-credibility source; Clickbait; right-wing message (unsupportive of Clinton)

Facebook Post 4: Known left-wing source; No Clickbait; right-wing message (supportive of Trump)


Facebook Post 5: Known right-wing source; Clickbait; left-wing message (unsupportive of Trump)

Facebook Post 6: Unknown low-credibility source; No Clickbait; left-wing message (supportive of Clinton)


Facebook Post 7: Unknown source; Clickbait; left-wing message (unsupportive of Trump)

Facebook Post 8: Known right-wing source; No Clickbait; left-wing message


Facebook Post 9: Known left-wing source; No Clickbait; left-wing message (unsupportive of Trump)

Facebook Post 10: Known right-wing source; Clickbait; right-wing message (unsupportive of Clinton)


Facebook Post 11: Known right-wing source; No Clickbait; right-wing message (unsupportive of Clinton)

Facebook Post 12: Known left-wing source; No Clickbait; right-wing message
