
Safeguarding Information:

Perceived credibility of algorithms as news writers

Hendrikus F. Horst

11634561

University of Amsterdam Master’s Thesis

Graduate School of Communication

Master’s programme Communication Science


Abstract

Algorithms appear to be taking over from the traditional gatekeepers of information, along with certain tasks from journalists. This study explores the potential of algorithms as the new gatekeepers for modern-day news outlets and their perceived credibility among news consumers. Perceived source credibility was analyzed by means of a vignette survey consisting of 16 scenarios. The scenarios included four moderating factors: (a) the type of news writer, (b) the political leaning of the news outlet, (c) the topic of the news story, and (d) the type of data collected. Contrary to the first hypothesis, journalists were perceived as more credible than algorithms. Furthermore, none of the moderating factors had any effect on perceived credibility. Lastly, neither the political leaning of the individual nor the conditional effect of the political leaning of the news outlet had a significant effect on perceived credibility. These results reveal that, when confronted with the journalistic process, individuals tend to prefer a journalist to an algorithm.


Introduction

The Internet has allowed for the rapid dissemination of an abundance of information to any individual with access to the network. News outlets have greatly benefitted from this advanced flow of information. The increased pace of transmission has allowed news outlets to provide immediate updates on developing news stories, whether national or international. Social media platforms have granted media institutions a greater reach to individuals, and content created by citizens, in addition to already available data, is easily accessible and ready for integration into news articles. At first glance, the technology of the Information Age appears beneficial to all stakeholders, but the onset of the virtual network has not been entirely beneficial for news outlets and the news consumer.

Besides the increased speed and greater access, the Internet has diminished the traditional gatekeeping capabilities of media institutions. The Internet has enabled the creation of online news, which is boundless in terms of space for news stories. This is contrary to the limited space of traditional journalism, which dictated gatekeeping and news selection (Tsfati, 2010). "News content now comes from more than just the elite media" (Dahmen & Morrison, 2016, p. 659). Internet users with unspecified objectives are now capable of creating and sharing information without having to endure editorial procedures. This lack of a traditional gatekeeper in an environment of excess information has produced an unfavorable dynamic between the news consumer and news outlets. The term 'Fake News' embodies an assortment of issues (Mould, 2018) that sum up the repercussions of the disappearing classic gatekeeper. First, the Internet, with its low costs, has allowed for an increase in alternative news outlets with extreme leanings on the political spectrum. Second, the news stories spread by these outlets are generally politically motivated and riddled with biases (Mohseni & Ragan, 2018). Lastly, these news stories usually contain, and spread, misinformation.

This misinformation not only misleads the news consumer, but also decreases trust in news outlets by reducing the perceived credibility of factual information and its sources.

The absence of total control of information by media institutions has redefined why and how the news consumer trusts news outlets. The news consumer today has the option to disagree with mainstream media, and is more likely to do so in a politically divided environment. This mistrust leads the news consumer to find alternative news sources, primarily nonmainstream sources (Tsfati, 2010). Mistrust in media by the news consumer is one among the numerous hurdles that the news industry faces today. A 2016 Gallup poll shows a steady decrease in trust in mass media over the last decade across all age ranges and political leanings of American citizens (Swift, 2016). Other countries have found similar trends of mistrust (Tsfati, 2010). As numerous websites claim to provide news of varying quality and bias, the news consumer grows more uncertain about the credibility of the source and its content.

Simultaneously with the declining role of the traditional gatekeeper, new technology has allowed for new methods of data management. Larger mainstream news outlets (e.g., the British Broadcasting Corporation (BBC), the Associated Press (AP), the Washington Post, Reuters) have applied artificially intelligent technology in order to deal with the large amounts of information, in addition to neutralizing misinformation. Whether the journalistic tools are developed in-house or purchased from private companies, numerous news outlets have embraced innovations related to artificial intelligence that have the potential to raise the speed of various journalistic processes and increase news content quality. This trend, the application of artificially intelligent technologies within journalistic processes, appears to be growing at a steady rate and has reached a point where 'automated', 'robot', or 'augmented' journalism has become a topic of discussion among academic researchers (e.g., Kim & Kim, 2018; Graefe, 2016; Clerwall, 2014). In terms of research, academics have primarily concentrated on news content written by algorithms, that is, a list of steps defined by code that mimics the human process. Specifically, these studies have looked at two factors: the quality and credibility of news content written by automated processes (Wölker & Powell, 2018; Jung et al., 2017; van der Kaa & Krahmer, 2014; Clerwall, 2014) and the credibility of an algorithm as a news writer (Liu & Wei, 2018; Kim & Kim, 2018; Haim & Graefe, 2017).

In short, these past studies have concluded that algorithms as news writers invoke less emotion, are perceived as less biased, are expected to write credible content, and do write content well enough to be found credible (Liu & Wei, 2018; Wölker & Powell, 2018; Jung et al., 2017; Haim & Graefe, 2017; van der Kaa & Krahmer, 2014; Clerwall, 2014). Seemingly, algorithms as news writers have the potential to mitigate distrust in news outlets by lowering perceived bias and increasing credibility. However, the majority of the previous studies have focused on fact-based content that can be created by algorithms, such as sports and finance, which tends to lack qualities that interpretative topics may contain, such as opinion or ideological slant (e.g., Wölker & Powell, 2018; Jung et al., 2017; Clerwall, 2014). Moreover, these studies primarily observed factors related to the content, and not the source. The current study focuses on interpretative topics, such as human interest and politics, and analyzes the perceived credibility of algorithms and journalists by placing them into conceivably real-life scenarios, with the intent of identifying factors that contribute to a change in perception of the news writer. The overarching question of this study asks the following:

To what extent, and under which conditions (i.e., political leaning of the news outlet, the topic of the news story, and type of data collected), is an algorithm as news writer perceived as more credible than a traditional journalist?


Artificially intelligent technologies have much potential in journalistic processes, as algorithms have, among other functions, allowed for data sorting, automatic updates, and simple news story creation on topics such as weather, finance, and sports. News outlets may grant future artificially intelligent processes more gatekeeping responsibilities by allowing algorithms to gather data, select essential content, and write articles about more interpretative topics, such as politics. This study focuses on the prospects of automated journalism, and on how the public perceives the credibility of algorithms under specific conditions.

The following section will give a short review of the relationship between trust and credibility, followed by a review on previous academic studies on algorithms and credibility. The third section will present the hypotheses with supporting theories, which will transition into the details of the research method. The results of the analyses are then presented. Finally, the results are discussed, and I acknowledge several limitations of the study, which are to be followed by concluding remarks.

Trust, Credibility, and Cognitive Heuristics

Depending on the study, the concept of trust may slightly vary in meaning. Lucassen and Schraagen (2012) base their study on a definition of trust as a willingness to be vulnerable to another's actions, regardless of the potential risk. Tsfati (2010) describes it as the expectation that the interaction with the trustee will be beneficial. In an environment where information is abundant, but where the quality of that information varies, trust is not immediately gained. In the context of this study, trust is thus 'earned' and the end result of exposure to high levels of credibility.

Credibility itself consists of various levels: medium, message, and source credibility (Lucassen & Schraagen, 2012). Within this study, the immediate source is the news writer; the study therefore focuses primarily on source credibility. It should be noted, however, that this study analyzes source credibility from the traditional definition in which there is a single source. Academics have argued for a definition that encompasses the diversity of sources in cyberspace (Metzger et al., 2010). In other words, numerous sources today shape information (or news articles), making it difficult to determine who or what the source actually is and where the content derives from. The research design of this study includes a second level of sources (social media and other news websites), but explicitly asks participants to rate the news writer in the hope of avoiding confusion about the source. Source credibility has been established as containing two factors: 'expertise', the competence of the source, and 'trustworthiness', the intentions of the source (Lucassen & Schraagen, 2012). For instance, a reader of a news article would need to determine the credibility of the journalist by concluding whether he or she is competent and whether the source is well intentioned.

The abundance of information online, however, poses a predicament for the information consumer. Due to the continuous flow of information, and the absence of an editorial gatekeeper on that flow, individuals lack the time to systematically determine whether information is credible or not. News consumers therefore tend to rely on credibility heuristics, or mental shortcuts, to determine the credibility of information (Metzger et al., 2010). For instance, credibility heuristics in relation to source credibility may simply rely on factors related to the source, such as an image, name, or demographics of the author, if that information is available. Source credibility may also be evaluated by previous experiences with the source (Lucassen & Schraagen, 2012). This study intends to observe which factors, or cues, surrounding the news writer influence the perceived credibility of the source.


Algorithms and Credibility

One of the earliest studies on algorithms and perceived credibility focused on news content created by both algorithms and journalists: a recapitulation of a sports game. The pilot study found that participants could not discern between content written by either news writer, with indicators such as objectivity rated higher for algorithms (Clerwall, 2014). An extended study by van der Kaa and Krahmer (2014) included journalists as participants and explicitly bylined the author. The results supported the pilot study: the source (algorithm) and the content of the stimuli (sports and finance articles) were perceived as just as credible as a journalist (van der Kaa & Krahmer, 2014). Also, journalists appeared to rate the expertise indicators higher for algorithms than for journalists (van der Kaa & Krahmer, 2014). These results were echoed by a later study that analyzed, in addition to credibility, the quality of the content, which, similar to van der Kaa and Krahmer's findings, was perceived as higher when coming from an algorithm (Jung et al., 2017). Other researchers have also found that co-authorship between the two types of news writers yields equal perceived credibility (Wölker & Powell, 2018). These results support not only the idea that algorithms can equal journalists in basic content quality, but also that algorithms are perceived as superior in some cases.

It was surmised that individuals might have lower expectations for the algorithm and higher expectations for the journalist (Jung et al., 2017; van der Kaa & Krahmer, 2014), which was confirmed in a later study: individuals did, in fact, have higher expectations for journalists in regard to quality, but expected automated news to be as credible as, or more credible than, news from journalists (Haim & Graefe, 2017). In search of why individuals expect the news writers to be equal in terms of credibility, academics have proposed that the public may believe that algorithms lack the bias that journalists presumably possess (Wölker & Powell, 2018; Haim & Graefe, 2017; Jung et al., 2017). Research related to the algorithmic recommendation of news on social media supports this same idea: individuals appear to believe that algorithms are immune to the bias of the news outlets by which they are deployed (Thurman et al., 2018). Individuals may perceive an algorithm as a news writer to be a neutral party without bias that provides only factual information.

In a similar vein, a recent experiment analyzed the emotional involvement induced by news written by algorithms. Contrary to the previous studies, this research concentrated on spot and interpretive news, and revealed that content bylined with an algorithm as news writer invoked less emotional involvement than content attributed to a traditional journalist (Liu & Wei, 2018). More importantly, this held true even on websites from news outlets on either side of the political spectrum. The algorithm was also perceived as more objective, but with less expertise, which may be explained by Haim and Graefe's (2017) study on expectations of this type of technology. In general, the results suggest that automated journalism may have the capability to reduce resentment towards ideological opposition while maintaining a level of credibility.

Theoretical Framework

The overarching theory that drives this study is the idea that the traditional gatekeeper, the news outlet, has lost much control over the flow of information, which has allowed unvalidated information to enter the general public discourse. In short, gatekeeping theory espouses the idea that mass media manages the flow of information in terms of what should and what should not be presented to the consumer (Coddington & Holton, 2014). Shoemaker (1991) explains it as "the process by which billions of messages get cut down and transformed into hundreds of messages that reach a given person on a given day" (p. 1). This information management includes selection, addition, withholding, displaying, channeling, shaping, manipulation, repetition, timing, localization, integration, disregarding, and deletion (Barzilai-Nahon, 2008). With the onset of information communication technology (ICT), researchers have expanded traditional gatekeeping theory to fit modern-day dynamics between the information provider and the information consumer.

Gatekeeping theory today takes into account the virtual network, changing roles, individual interests, flows of information, and so on (Barzilai-Nahon, 2008), and has broadened to include other 'guards' of information, such as routines, codes of conduct, and algorithms (Coddington & Holton, 2014). Search engines and social media platforms exemplify the use of algorithms as gatekeepers by recommending trending and popular content to users. Algorithms are able to perform a multitude of functions in a fraction of a second, making them ideal for data management and the gatekeeping of credible information.

Additionally, research has indicated that algorithms as news writers are perceived as equally or more credible than traditional journalists when writing news content (Liu & Wei, 2018; Jung et al., 2017; van der Kaa & Krahmer, 2014; Clerwall, 2014). Therefore, the current study proposes the first hypothesis, which states the following:

H1: An algorithm as a news writer (compared to a journalist) is perceived as more credible.

Socially, individuals and organizations that align themselves with a liberal ideology appear to embrace change to a greater extent than those that align themselves with a more conservative ideology. The same could be said about economic policies. Combining these two societal components, social and economic, it is predicted that liberal companies are more likely to apply recent technology within their business processes. Liberal news outlets might thus be more likely to experiment with algorithms than conservative outlets due to underlying social and economic ideologies. This presumption would fall in line with the fact that the news outlets mentioned in a previous paragraph that have embraced artificial intelligence are moderate to left leaning. The widespread application of algorithms would lead individuals to have greater confidence in news outlets that have utilized this type of technology for some time, compared to companies that have not. In terms of news writing and news outlets, the following hypothesis states:

H2a: An algorithm as a news writer (compared to a journalist) that is employed by a liberal news outlet is perceived as more credible than one employed by a conservative news outlet.

Related to gatekeeping is the notion of the hostile media. 'Hostile media' theory suggests that political ideology influences perceived trust in the media (Matthes, 2013; Vallone et al., 1985). Lee (2005) expanded on that, revealing that distrust, along with cynicism, predicts perceptions of media bias. In line with this research, individuals leaning a certain way on the political spectrum would distrust news outlets on the opposing side. This effect appears to be more prominent among conservatives (Lee, 2005; Lee, 2010). Furthermore, that distrust would derive from the media outlet and not necessarily the source. A different result is expected when the news outlet leans in a direction matching the political views of the news consumer. As the news consumer is more likely to agree with the news outlet, the notion of 'hostile media' is absent, potentially giving way to 'confirmation bias'. News consumers are more likely to favor news that is in line with their view (Westerwick et al., 2017) over the potentially 'neutral' viewpoint of an algorithm. The hypothesis states the following:

H2b: An algorithm as a news writer (compared to a journalist) employed by a liberal news outlet is perceived as more credible by a news consumer who leans right on the political spectrum.


The topic is also believed to influence the credibility of the source. In line with hostile media theory, individuals may expect more bias from journalists on politically charged topics than on human-interest or less interpretative topics. Therefore, and drawing from previous conclusions that artificial intelligence may be perceived as less biased than human journalists (Haim & Graefe, 2017; Thurman et al., 2018), I predict that:

H3: An algorithm as news writer (compared to a journalist) covering a political protest is perceived as more credible, while an algorithm and a journalist as news writer are perceived as equally credible when covering a royal wedding.

Lastly, and more of a modern journalistic issue, is that of data collection, which also has the potential to influence credibility. Numerous online news outlets appear to draw information from other news sites (Vargo & Guo, 2017). Also, more and more news outlets integrate social media data into their news stories (Rony et al., 2018). If the news writer presents the reader with already parsed data from another news site, it would give the perception of a single viewpoint. If the news writer collects data from social media, it may offer numerous viewpoints on the topic, allowing the story to be less biased. Since I expect that algorithms are perceived as neutral, it is believed that in either case of data collection the algorithm is preferred, even if data collection from other news sites reiterates a pre-existing frame of the news story.

H4: An algorithm as news writer (compared to a journalist) collecting data from social media or other news websites is perceived as more credible.


Gatekeeping theory guides the main prediction that algorithms are able to replace traditional journalists in data management and writing news content, while the other hypotheses are guided by hostile media theory, previous studies on the topic, and speculation. Unlike previous studies exploring the possibilities of algorithms as news writers, the current study adopts a research design that inserts algorithms into scenarios. These scenarios consist of the general gatekeeping functions held by the news writer, while specific factors, including the type of news writer, are systematically varied and compared.

Method

A factorial within-subjects vignette survey was conducted to measure perceived source credibility under varying conditions. Participants read a total of four scenarios and rated each on a source credibility scale.

Design

The experiment used a 2 (source: journalist versus algorithm) x 2 (political leaning of news outlet: conservative versus liberal) x 2 (topic: political protest versus royal wedding) x 2 (data type: social media versus other news sites) factorial survey design. The design was based on the journalistic process outlined by van der Kaa and Krahmer (2014). This process, researching a news topic, selecting relevant details, structuring the information, and writing the story, can easily be applied to algorithms. The scenarios were presented in such a way that the surrounding factors could be applied to both a human journalist and an algorithm. Each participant was shown four random scenarios and asked to indicate the perceived credibility of the source for each scenario. Two instances of the scenarios, which in combination include all the factors, are found below:

A conservative news outlet decides to use a journalist to cover an upcoming news story about a political protest. The journalist will collect data about the topic of the news story from social media, choose important details for the reader, and write the article. The article is then published on the website from the news outlet.

A liberal news outlet decides to use an automated process (algorithms) to cover an upcoming news story about a royal wedding. The automated process (algorithms) will collect data about the topic of the news story from other news websites, choose important details for the reader, and write the article. The article is then published on the website from the news outlet.
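
To make the factorial structure concrete, the following minimal sketch (in Python) enumerates all 2 x 2 x 2 x 2 = 16 combinations from a template resembling the scenarios above and draws four of them for a participant. The template wording, variable names, and sampling call are illustrative assumptions, not the original Qualtrics implementation.

# Minimal sketch: enumerate the 2 x 2 x 2 x 2 = 16 vignette combinations and
# sample four per participant. Template wording and names are illustrative
# assumptions, not the original survey.
from itertools import product
import random

writers = ["a journalist", "an automated process (algorithms)"]
outlets = ["conservative", "liberal"]
topics = ["a political protest", "a royal wedding"]
data_sources = ["social media", "other news websites"]

TEMPLATE = ("A {outlet} news outlet decides to use {writer} to cover an upcoming "
            "news story about {topic}. The news writer will collect data about the "
            "topic of the news story from {source}, choose important details for "
            "the reader, and write the article. The article is then published on "
            "the website from the news outlet.")

# All 16 scenario texts, one per factor combination.
scenarios = [TEMPLATE.format(writer=w, outlet=o, topic=t, source=s)
             for w, o, t, s in product(writers, outlets, topics, data_sources)]
assert len(scenarios) == 16

# Within-subjects design: each participant rates four randomly drawn scenarios.
shown_to_one_participant = random.sample(scenarios, k=4)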

Sample

Initially, and in line with the scientific standard, the goal was 30 participants for each of the 16 scenarios, for a total of 480 participants. Participation was open to anyone over the age of 18. A maximum of 500 participants were recruited through Amazon's online platform, MTurk, and 13 more by contacting acquaintances. In total, 513 participants were collected; however, 19 failed the attention check and were dropped, leaving 494. Participants ranged in age from 18 to 98 years, with a mean age of 34.32 years (SD = 11.53). As for education, 19% of the respondents had completed high school or equivalent, and less than one percent had completed less than high school. A quarter (25%) of the respondents had completed graduate school or higher, with the remainder falling in between. Lastly, political leaning ranged from 1 to 10, with a mean of 5.26 (SD = 2.57).

Stimulus

While much recent research on automated journalism has used survey and experimental methods, this study utilizes a 'vignette', or factorial, survey. While traditional surveys tend to possess high external validity, the ability to be applied to real-world situations, they also tend to possess low internal validity, preventing definite conclusions about causation. Experiments possess the reverse: low external validity and high internal validity. This grants experiments potential conclusions about causation but, due to the controlled environment, limits their application to real-world situations. Vignette surveys intend to combine the advantages of both methods by creating various scenarios imagined from real-life situations, with the potential of finding causation. In terms of this research, individuals may have certain expectations about the roles of journalists and algorithms. This method is intended to limit extraneous factors impacting source credibility.

Measures

After every scenario, participants were asked how they would rate the credibility of the source, in this case the journalist or algorithm, on a source credibility scale that had been used in a previous study by Liu and Wei (2018). The 7-point bipolar scale includes two subscales, labeled 'expertise' and 'trustworthiness', each containing five items. These items consisted of adjectives and their counterparts: dependable, honest, reliable, sincere, trustworthy, expert, experienced, knowledgeable, qualified, and skilled. This study focused on perceived credibility and disregarded any further analysis of the separate 'expertise' and 'trustworthiness' subscales.

For each scenario, a factor analysis was conducted. A principal component analysis (PCA) with a 'direct oblimin' rotation shows that the 10 items form a single-dimensional scale, with one component retaining an eigenvalue above 1. Across scenarios, this eigenvalue ranged from a minimum of 6.92 to a maximum of 8.12, averaging 7.79 over all 16 scenarios. Contrary to expectations, the scale did not distinguish between 'expertise' and 'trustworthiness' items. For this reason, only the full scale, named 'credibility', was used in further analyses. A further reliability analysis of the scale gave a Cronbach's alpha ranging from .95 to .97 across scenarios, with an average of .97 over all 16 scenarios (M = 42.87, SD = 15.34).
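
The scale checks reported above were presumably run in a standard statistics package; purely as a hedged illustration, the sketch below computes the leading eigenvalue of the item correlation matrix (with a single retained component, the oblimin rotation does not affect the eigenvalues) and Cronbach's alpha for ten hypothetical credibility items. The column names and simulated ratings are assumptions, not the original data.

# Hedged sketch of the reported scale checks: leading eigenvalue of the item
# correlation matrix and Cronbach's alpha for 10 credibility items.
# Column names (item_1 ... item_10) and the simulated ratings are assumptions.
import numpy as np
import pandas as pd

def leading_eigenvalue(items: pd.DataFrame) -> float:
    # Largest eigenvalue of the item correlation matrix (unrotated components).
    corr = items.corr().to_numpy()
    return float(np.linalg.eigvalsh(corr).max())

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)
    k = items.shape[1]
    item_variances = items.var(ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated 7-point ratings for one scenario: 500 respondents, 10 items.
rng = np.random.default_rng(seed=1)
ratings = pd.DataFrame(rng.integers(1, 8, size=(500, 10)),
                       columns=[f"item_{i}" for i in range(1, 11)])
print(leading_eigenvalue(ratings), cronbach_alpha(ratings))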

Procedure

The survey was created in Qualtrics, a survey-construction platform, and presented on MTurk, which offers qualification options for filtering participants during recruitment. The qualification selected was the 'approval percentage of participants', which was set at 97%. The number of participants requested on the platform was 500, which was reached within five days.

Each participant was shown four random scenarios. This prevented potentially skewed results from repetition effects, in which participants' expectations change after the first scenario. Moreover, a scenario could appear similar to a previously shown scenario, giving participants the feeling that the scenarios were the same. The randomization therefore reduced primacy and recency effects, in which responses depend on the order in which questions are presented.


An attention check question was inserted after the measurement of perceived credibility to ensure thoughtful answers and to prevent the inclusion of answers by 'bots' (scripts running on the MTurk platform with the goal of filling in surveys purely for financial gain). Lastly, participants were asked general demographic questions, including a question about their political leaning.

Results

Descriptives

Table 1 provides the means, standard deviations, and correlations between the primary variables in the study. The results show a negative correlation between age and credibility (r = -.135, p < .001), meaning that younger individuals tended to rate credibility higher. Furthermore, there is a positive correlation between political leaning and credibility (r = .174, p < .001): the more individuals lean towards the conservative side of the spectrum, the higher their credibility ratings. Lastly, there is a negative correlation between news writer and perceived credibility (r = -.175, p < .001), meaning that when individuals were confronted with an algorithm in the vignette survey, they tended to find the source less credible.

Table 1: Means, standard deviations, and correlations among the primary variables

Variable  Mean  SD  Age  Political Leaning  News Writer  News Outlet  News Topic  News Data  Credibility
Age  34.32  11.52  -
Political Leaning  5.26  2.57  -.004  -
News Writer [1=journalist]  1.50  .50  .012  .005  -
News Outlet [1=conservative]  1.49  .50  -.020  .009  -.004  -
News Topic [1=protest]  1.51  .50  -.041  .018  -.011  -.001  -
News Data [1=news websites]  1.50  .50  -.022  .019  .006  .000  .003  -
Credibility  4.28  1.56  -.135**  .174**  -.175**  .009  .010  .020  -

* p < .05, ** p < .01, *** p < .001


Hypothesis 1 – Direct Effect of the News Writer

A multi-level analysis with restricted maximum likelihood (REML) estimation was conducted in order to explain the differences between the two levels of the independent variable news writer ('algorithm' and 'journalist') on the dependent variable 'credibility'. Explicitly, the hypothesis predicts that an algorithm as a news writer is perceived as more credible than a journalist.

Table 2: Linear Mixed Model

β SE p

Intercept 4.01 .05 < .001

News Writer [Journalist] .55 .07 < .001

Note: Results of the linear mixed model analysis measuring direct effect of the news writer on perceived credibility.

As shown in Table 2, the results indicate a statistically significant effect of news writer on perceived credibility (β = .55, p < .001). On average, participants perceived the credibility of journalists as higher (M = 4.56, SD = 1.47) than that of algorithms (M = 4.01, SD = 1.59). The effect of the difference between groups is small to medium (Cohen's d = .35). There is thus a statistically significant effect, but contrary to my expectations: journalists were perceived as more credible. The first hypothesis is therefore not supported.
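
The mixed-model analyses reported here and below were presumably conducted in a standard statistics package; purely as a hedged illustration, the sketch below shows how the same kind of random-intercept model could be fitted with REML in Python's statsmodels, assuming a long-format data frame with hypothetical column names (participant_id, credibility, news_writer, news_outlet, news_topic, data_type, political_leaning). The commented lines indicate how the moderation models for H2a, H2b, H3, and H4 extend the same call, and a common pooled-SD formula for Cohen's d is included.

# Hedged sketch of the random-intercept (multi-level) models; `df` is an assumed
# long-format DataFrame with one row per scenario rating.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_reml(formula: str, df: pd.DataFrame):
    # Linear mixed model with a random intercept per participant, REML estimation.
    model = smf.mixedlm(formula, df, groups=df["participant_id"])
    return model.fit(reml=True)

# H1:  fit_reml("credibility ~ C(news_writer)", df)
# H2a: fit_reml("credibility ~ C(news_writer) * C(news_outlet)", df)
# H2b: fit_reml("credibility ~ C(news_writer) * C(news_outlet) * political_leaning", df)
# H3:  fit_reml("credibility ~ C(news_writer) * C(news_topic)", df)
# H4:  fit_reml("credibility ~ C(news_writer) * C(data_type)", df)

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    # Standardized mean difference using a simple pooled standard deviation.
    pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
    return float((group_a.mean() - group_b.mean()) / pooled_sd)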


Hypothesis 2a – News Writer & News Outlet Interaction

A multi-level analysis with REML estimation was conducted in order to explain the differences between the two independent variables, news writer ('algorithm' and 'journalist') and news outlet ('conservative' and 'liberal'), on the dependent variable 'credibility'. Explicitly, the hypothesis predicts that an algorithm deployed by a liberal news outlet is perceived as more credible than a journalist.

Table 3: Linear Mixed Model

β SE p
Intercept 4.00 .07 < .001
News Writer [Journalist] .59 .09 < .001
News Outlet [Conservative] .02 .09 .805
News Writer [Journalist] × News Outlet [Conservative] -.09 .14 .471

Note: Results of the linear mixed model analysis measuring the moderating effect of news outlet bias on the news writer and perceived credibility.

As shown in Table 3, there is no statistically significant interaction between news writer and news outlet bias on perceived credibility (β = -.09, p = .471). The political leaning of the news outlet does not influence the direct effect of the news writer on perceived credibility. The hypothesis is not supported.


Hypothesis 2b – News Writer & News Outlet & Political Leaning Interaction

A multi-level analysis with REML estimation was conducted in order to explain the differences between the two levels of the independent variable news writer ('algorithm' and 'journalist'), moderated by two other variables: news outlet ('conservative' and 'liberal') and political leaning. The dependent variable is 'credibility'. Explicitly, the hypothesis predicts that an algorithm (compared to a journalist) deployed by a liberal news outlet is perceived as more credible by a news consumer who leans right on the political spectrum.

Table 4: Linear Mixed Model

β SE p
Intercept 3.41 .16 < .001
News Writer [Journalist] 1.07 .22 < .001
News Outlet [Conservative] -.25 .22 .246
Political Leaning .11 .03 < .001
News Writer [Journalist] × News Outlet [Conservative] -.36 .31 .248
News Writer [Journalist] × Political Leaning -.09 .04 .015
News Outlet [Conservative] × Political Leaning .05 .04 .187
News Writer [Journalist] × News Outlet [Conservative] × Political Leaning .06 .05 .290

Note: Results of the linear mixed model analysis measuring the moderating effect of political leaning on news outlet bias, which moderates the effect on the news writer and perceived credibility.

As shown in Table 4, there is no statistically significant three-way interaction between news writer, news outlet, and political leaning on perceived credibility (β = .06, p = .290). Political leaning does not influence the moderating effect of the news outlet on the relationship between the news writer and perceived credibility. An algorithm deployed by a liberal news outlet is not perceived as more credible by a news consumer who leans right on the political spectrum; thus the hypothesis is not supported.

Hypothesis 3 – News Writer & News Topic Interaction:

A multi-level analysis with REML estimation was conducted in order to explain the differences between the independent variable news writer ('algorithm' and 'journalist') and the news topic ('political protest' and 'royal wedding') on the dependent variable 'credibility'. Explicitly, the hypothesis predicts that an algorithm as a news writer covering a political protest is perceived as more credible than a journalist, while the two types of news writers are equally credible when covering a royal wedding.

Table 5: Linear Mixed Model

β SE p
Intercept 4.04 .07 < .001
News Writer [Journalist] .52 .09 < .001
News Topic [Political Protest] -.05 .09 .607
News Writer [Journalist] × News Topic [Political Protest] .05 .14 .730

Note: Results of the linear mixed model analysis measuring the moderating effect of news topic on the news writer and perceived credibility.

As indicated in Table 5, there is no statistically significant interaction between news writer and news topic on perceived credibility (β = .05, p = .730). The news story topic does not influence the effect of the news writer on perceived credibility. The hypothesis is not supported.

Hypothesis 4 – News Writer & Data Type Interaction

A multi-level analysis with REML estimation was conducted in order to explain the differences between the independent variable news writer ('journalist' and 'algorithm') and the type of data collected ('other news websites' and 'social media') on the dependent variable 'credibility'. Explicitly, the hypothesis predicts that an algorithm as a news writer collecting data from social media or other news websites is perceived as more credible than a journalist collecting data from those sources.

Table 6: Linear Mixed Model

β SE p
Intercept 4.02 .07 < .001
News Writer [Journalist] .60 .09 < .001
Data Type [Other News Websites] -.01 .09 .914
News Writer [Journalist] × Data Type [Other News Websites] -.11 .14 .417

Note: Results of the linear mixed model analysis measuring the moderating effect of the type of data collected on the news writer and perceived credibility.

As shown in Table 6, there is no statistically significant interaction between news writer and the type of data collected on perceived credibility (β = -.11, p = .417). The type of data collected does not influence the effect of the news writer on perceived credibility. The hypothesis is therefore not supported.

Figure 2: The interaction effect between the news writer and the political leaning of the news outlet. [Figure not reproduced.]

Figure 3: The interaction effect between the news writer, the political leaning of the news outlet, and the political leaning of participants. Panels: (1) conservative news outlet, high political leaning; (2) conservative news outlet, low political leaning; (3) liberal news outlet, high political leaning; (4) liberal news outlet, low political leaning. [Figure not reproduced.]

Figure 4: The interaction effect between the news writer and the news topic. [Figure not reproduced.]

Figure 5: The interaction effect between the news writer and the source from which the data was collected. [Figure not reproduced.]


Discussion

This study intended to identify potential factors causing a change in perceived credibility between the two types of news writers, algorithm and journalist, by presenting participants with scenarios of the journalistic process. Past research found algorithms to score higher on credibility descriptors than journalists (Liu & Wei, 2018; Haim & Graefe, 2017), yet the current study showed the opposite: the algorithm was perceived as less credible than the journalist as a news writer. This is especially striking when considering Jung et al.'s (2017) study, which found algorithms to be more credible than journalists when participants were made aware (and misled) of the actual author.

Regarding the further assessment of the results, none of the other factors presented (the political leaning of the news outlet, the topic of the news story, and the type of data collected) had a moderating impact on perceived credibility. Most interesting is the lack of difference in perceived credibility across news topics. It was assumed that the societal importance of the topics would influence credibility, but it did not. Previous findings by Liu and Wei (2018) revealed no difference in credibility for a journalist when writing about spot or interpretive news, but there was a difference for an algorithm. It may be that participants perceived the topics (royal wedding and political protest) to be of equal importance and felt that the journalist would be the most credible writer for both. It could also be that individuals felt that a human-interest topic demanded a human touch, while a news story about a political protest called for interpretation or analysis that an algorithm is thought incapable of providing.

Lastly, there was no interaction effect between the news writer, the news outlet, and political leaning. It was predicted that participants would regard the algorithm as less biased, or unbiased, compared to the journalist and rate its perceived credibility higher when confronted with a news outlet that opposed their political leaning, as suggested by previous results (Liu & Wei, 2018). The results of the current study reveal, however, that this is not the case. A potential reason may be that participants assumed that the algorithm reflects the bias of its developer. A recent poll by Pew Research Center found that nearly 60% of the American public believes this to be the case (Smith, 2018). This reasoning is in line with the reality of algorithms, which are susceptible to biases such as skewed data distributions and various engineered processes (Mohseni & Ragan, 2018).

The results of the current study contribute to the literature in two ways. First, the findings contradict those of previous studies on algorithms and credibility, meaning there might be moderating factors that influence the credibility of algorithms, one of which has previously been identified. Expectation, which had been analyzed by Haim and Graefe (2017), may have played a role in the results. Algorithms are increasingly complex, which makes it difficult for the general public to understand how they perform their specific tasks. This lack of understanding inevitably creates uncertainty and may lower expectations of this type of technology. As suggested by previous results, the public expects more from journalists than from an algorithm (Haim & Graefe, 2017). It may be the case that the public has different expectations of the news writer in the tasks presented in the scenarios.

Second, the results have strong implications for studies on algorithms and gatekeeping capabilities. The research design of this study applied a vignette survey in order to position algorithms as the exclusive news writer for news outlets, granting them the responsibility of gatekeeper. It is the first study to fully place an algorithm as a gatekeeper for news outlets. Previous studies on algorithms and credibility presented the algorithm as a news writer without any responsibilities other than writing the news (i.e., van der Kaa & Krahmer, 2014; Clerwall, 2014). The current study explicitly allocated responsibilities to the algorithm by having it perform the entire journalistic procedure. On the basis of the suggestion that expectations differ by task, the public may have less confidence in algorithms to perform certain tasks, such as news writing, than others (collecting data, selecting relevant information), regardless of surrounding factors. Currently, it appears that the public still relies on human journalists to provide credible information and discounts the algorithm as a viable candidate for gatekeeping information for news outlets.

Limitations

This study utilized Amazon's virtual marketplace, MTurk, through which researchers are able to recruit participants for their studies. Previous studies have questioned the legitimacy and quality of answers given by respondents on the platform (Goodman et al., 2013; Casler et al., 2013; Buhrmester et al., 2011). Due to limited resources, participants were paid a marginal amount, which may have impacted the quality of their answers. In addition, one may argue that the 'approval percentage of participants', set at 97%, is low, which could also have produced low-quality answers.

Furthermore, the scenarios presented to participants may have been oversimplified in order to fit the procedures of both a human writer and an algorithm. No detailed explanation was given of how algorithms would collect data, how they would choose relevant information for the reader, or how they would, or could, write the article. This simplification could have led participants to believe that a human journalist would be more capable in all these functions because it is something that they can conceptualize. An algorithm, on the other hand, would perform various meticulous sub-functions leading to its final intended objective. Considering the simple scenario that was presented to participants, the journalist can be interchanged with an algorithm; however, when considering the underlying methods of the tasks that need to be performed by either the journalist or the algorithm, they cannot.

Similarly, and drawing on the discussion of the results, there may have been a general lack of understanding of algorithms. Although algorithms operate in commonly used search engines and social media, they are rarely understood by the general public. A page informing individuals about how an algorithm could perform the specific journalistic tasks, and about its capabilities, could have assured participants that algorithms are viable candidates to be news writers.

Conclusion

Despite potential limitations, the results of this study suggest that individuals, knowing the journalistic procedure and the media bias of the news outlet, perceive an article written by a journalist as more credible than one written by an algorithm. In an economically and technologically driven environment, in which news outlets enthusiastically integrate automated systems and grant algorithms the responsibility to complete monotonous and meticulous tasks in order to do more with less, it is important to acknowledge that the technology does, in fact, impact the news consumer. This study intended to support current and future implementations of artificial intelligence in journalistic procedures. Contrary to predictions, however, a journalist as news writer was perceived as more credible even when writing about a human-interest news topic such as a royal wedding, meaning that the full integration of algorithms as fully-fledged news writers is disadvantageous and potentially harmful to credibility.

In light of the results, there are questions that could be considered for future research. While previous research has revealed that individuals have difficulty distinguishing, or cannot distinguish, algorithmically created content from content written by a journalist (Wölker & Powell, 2018; van der Kaa & Krahmer, 2014; Clerwall, 2014), the current study has revealed that participants perceive the complete journalistic procedure performed by a human as more credible. It would be interesting to expand on the current study by describing, prior to the experiment, how an algorithm would succeed in all the journalistic tasks. In understanding how an algorithm could be more efficient and lack political bias, participants may perceive credibility differently. This would not only explain the results of the current study, but also reveal, albeit implicitly, the preference of the public.

In addition, there are many questions to be answered in relation to algorithms performing certain tasks and how these tasks are perceived by the public in terms of credibility. Future research could focus on these functions in standalone studies, rather than in combination, as the present study did. Data collection by algorithms, for instance, will most likely become a popular topic for news outlets and journalists in the near future, especially in an environment where credibility is difficult and time consuming to establish.

In a virtual domain without the traditional gatekeeper, information of varying quality and carrying extreme political ideologies bounces around the web between individuals by means of social media, making it difficult for the news consumer to establish the credibility of information. This study examined whether the public would perceive an algorithm to be credible in all steps of the journalistic process, in the hope of results that could mitigate current levels of distrust in news outlets. Algorithms, with their data-managing capabilities and news-writing proficiencies, have much potential and may, in the near future, become the definitive gatekeeper of information for news outlets. For now, however, it appears that the public has more trust in journalists than in algorithms to manage all the information available on contentious political topics, choose relevant material, and write an objective news article for individuals on either side of the political spectrum.


References

Barzilai-Nahon, K. (2008). Toward a theory of network gatekeeping: A framework for exploring information control. Journal of the American Society for Information Science and Technology, 59(9), 1493-1512.

Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1), 3-5.

Casler, K., Bickel, L., & Hackett, E. (2013). Separate but equal? A comparison of participants and data gathered via Amazon's MTurk, social media, and face-to-face behavioral testing. Computers in Human Behavior, 29(6), 2156-2160.

Clerwall, C. (2014). Enter the robot journalist: Users' perceptions of automated content. Journalism Practice, 8(5), 519-531.

Coddington, M., & Holton, A. E. (2014). When the gates swing open: Examining network gatekeeping in a social media setting. Mass Communication and Society, 17(2), 236-257.

Dahmen, N. S., & Morrison, D. D. (2016). Place, space, time: Media gatekeeping and iconic imagery in the digital and social media age. Digital Journalism, 4(5), 658-678.

Goodman, J. K., Cryder, C. E., & Cheema, A. (2013). Data collection in a flat world: The strengths and weaknesses of Mechanical Turk samples. Journal of Behavioral Decision Making, 26(3), 213-224.

Haim, M., & Graefe, A. (2017). Automated news: Better than expected? Digital Journalism, 5(8), 1044-1059.

Jung, J., Song, H., Kim, Y., Im, H., & Oh, S. (2017). Intrusion of software robots into journalism: The public's and journalists' perceptions of news written by algorithms and human journalists. Computers in Human Behavior, 71, 291-298.

Lee, T. T. (2005). The liberal media myth revisited: An examination of factors influencing perceptions of media bias. Journal of Broadcasting & Electronic Media, 49(1), 43-64.

Lee, T. T. (2010). Why they don't trust the media: An examination of factors predicting trust. American Behavioral Scientist, 54(1), 8-21.

Liu, B., & Wei, L. (2018). Machine authorship in situ: Effect of news organization and news genre on news credibility. Digital Journalism, 1-23.

Lucassen, T., & Schraagen, J. M. (2012). Propensity to trust and the influence of source and medium cues in credibility evaluation. Journal of Information Science, 38(6), 566-577.

Matthes, J. (2013). The affective underpinnings of hostile media perceptions: Exploring the distinct effects of affective and cognitive involvement. Communication Research, 40(3), 360-387.

Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60(3), 413-439.

Mohseni, S., & Ragan, E. (2018). Combating fake news with interpretable news feed algorithm. arXiv preprint arXiv:1811.12349.

Mould, T. (2018). Introduction to the special issue on fake news: Definitions and approaches. Journal of American Folklore, 131(522), 371-378.

Rony, M. M. U., Yousuf, M., & Hassan, N. (2018). A large-scale study of social media sources in news articles. arXiv preprint arXiv:1810.13078.

Smith, A. (2018, November 18). Public attitudes towards computer algorithms. Pew Research Center.

Swift, A. (2016, September 16). Americans' trust in mass media sinks to new low. Gallup. Retrieved from https://news.gallup.com/poll/195542/americans-trust-mass-media-sinks-new-low.aspx

Thurman, N., Moeller, J., Helberger, N., & Trilling, D. (2018). My friends, editors, algorithms, and I: Examining audience attitudes to news selection. Digital Journalism, 1-23.

Tsfati, Y. (2010). Online news exposure and trust in the mainstream media: Exploring possible associations. American Behavioral Scientist, 54(1), 22-42.

Vallone, R. P., Ross, L., & Lepper, M. R. (1985). The hostile media phenomenon: Biased perception and perceptions of media bias in coverage of the Beirut massacre. Journal of Personality and Social Psychology, 49(3), 577.

Van der Kaa, H., & Krahmer, E. (2014, October). Journalist versus news consumer: The perceived credibility of machine written news. In Proceedings of the Computation + Journalism Conference, Columbia University, New York (Vol. 24, p. 25).

Vargo, C. J., & Guo, L. (2017). Networks, big data, and intermedia agenda setting: An analysis of traditional, partisan, and emerging online US news. Journalism & Mass Communication Quarterly, 94(4), 1031-1055.

Westerwick, A., Johnson, B. K., & Knobloch-Westerwick, S. (2017). Confirmation biases in selective exposure to political online information: Source bias vs. content bias. Communication Monographs, 84(3), 343-364.

Wölker, A., & Powell, T. E. (2018). Algorithms in the newsroom? News readers' perceived credibility and selection of automated journalism. Journalism, 1464884918757072.
