
AI in the news?

Effect of modality on perceived news credibility of robot journalism

An-Yu, Lin

Student ID: 12367842

Master’s Thesis

Graduate School of Communication

Master’s programme Communication Science

Erasmus Mundus Masters Journalism, Media & Globalisation

Supervisor: Dr. Sanne Kruikemeier

Date of completion: 3 June 2019


Abstract

This research conducted an experiment to investigate how news consumers (N = 214) perceive automated news with respect to authorship and modality. With the development of AI technology, news companies not only adopt algorithms to automatically produce news but also apply AI to create news videos. Since the AI news video is a new format of automated journalism, it is necessary to investigate whether the public perceives it as credible. Previous research examined the credibility of automated news by manipulating bylines and source attributions and by comparing it with human-written stories. Following these studies, this research manipulates the news author (AI vs. human) and the news modality (text vs. video) to understand people's credibility perceptions of AI news. The results show that the public perceived automated news attributed to AI and automated news attributed to a human journalist as equally credible. Moreover, the perceived credibility of the AI anchor news video and of the AI news article is almost the same, meaning that modality does not affect credibility perceptions of automated news. These results carry the important implication that AI news videos are perceived as credible and that robot journalism is to a large extent accepted by the public.

Key words: AI, artificial intelligence, robot journalism, automated journalism, modality,


AI in the news?

Effect of modality on perceived news credibility of robot journalism

Robot journalism is increasingly applied in the newsroom (Graefe, 2016). Some news organizations, such as the Associated Press (AP), apply algorithms to automatically generate content, especially for news that relies on statistics to support the story, for example financial and sports news (Graefe, 2016; Jung, Song, Kim, Im & Oh, 2017). Beyond AP, many leading media organizations such as Forbes, the Chinese media company Tencent and The Financial News in Korea employ “robot reporters” to help produce news articles (Jung et al., 2017; Waddell, 2018). This process of using algorithms to automatically write news stories from data with little or no human intervention is referred to as robot journalism (Clerwall, 2014) or automated journalism (Carlson, 2014; Haim & Graefe, 2017; Latar, 2014; Napoli, 2014).

Robot journalism is based on the use of algorithms, software and, in particular, artificial intelligence (AI) to produce news stories (Montal & Reich, 2017). AI is increasingly and widely used within the news industry and has gradually taken over parts of human journalists' work. For instance, several journalistic functions are now performed by AI, such as data mining, text writing, topic selection and personalized recommendation algorithms (Miroshnichenko, 2018). Moreover, robot-generated news is no longer limited to the text format; it also appears in videos. Specifically, AI is now used to take over the role of the news anchor. Xinhua news agency has launched the first AI anchor, noting that it can work 24 hours a day because it needs no rest (Xinhua, 2018). The AI anchor now works within the anchor team, so audiences frequently see it reporting news in the videos released on Xinhua news agency's online platforms.

Applying AI and algorithms to produce news is not new, but the range of applications is growing larger and deeper, from text to video, so it is inevitable that people will encounter more automated news in their daily lives. Audiences consume the news and receive the information, so their perceptions of automated news matter. From the news organizations' angle, people's willingness to consume automated news is a top concern in their business strategy for introducing robot journalism (Kim & Kim, 2016). News consumers' satisfaction is likely to affect media organizations' decisions on how, and to what extent, to apply AI to the news (Haim & Graefe, 2017). Thus, whether the public perceives automated news as credible has important implications for news organizations. Therefore, understanding people's perceptions of news attributed to AI and how the rise of robot journalism influences news credibility is the main objective of this study.

News consumers' perceptions are important for the development of robot journalism. Regarding the evaluation of people's perceptions of the news, most previous research focuses on the credibility of automated news (e.g., Clerwall, 2014; van der Kaa & Krahmer, 2014), and some studies extend news credibility to news quality (e.g., Jung et al., 2017; Zheng, Zhong & Yang, 2018), for example by measuring readability and liking. However, their findings are inconsistent. The findings from Graefe, Haim, Haarmann and Brosius (2016) show that regardless of whether the source is declared or actual, machine-written news tends to be rated higher than human-written news. Jung et al. (2017) found that the Korean public rated the quality of algorithm-written news higher when it was declared to be written by a robot and lower when it was declared to be a journalist's work. The research by Waddell (2018), however, reaches a different conclusion: news assumed to be machine-written is perceived as less credible than human-written news. Wölker and Powell (2018) found that credibility perceptions of human, automated, and combined content differ little. Thus, some studies found that news attributed to robots is less credible, while others found that the perceived credibility of automated news is higher. Given that past findings are equivocal, and that robot journalism is presented not only in articles but also in videos, this study examines the credibility perception of automated news using both articles and videos. Examining the effect of news modality is important because previous research shows that television news is perceived as more credible than newspapers (Metzger et al., 2003).


Applying this to the modality of robot journalism, it is interesting to know how people perceive AI news videos compared to AI news articles. In addition, given that previous credibility perception studies focus solely on automated texts, with no studies examining videos attributed to robots, this study can fill that gap. It is thus meaningful to understand whether people perceive the two news modalities of automated journalism differently.

Furthermore, many factors might influence people's perceptions of robot journalism indirectly, and one of these is the ability to recall robots. Waddell (2018) showed that users who can recall robots from popular media are less likely to evaluate news attributed to machines negatively. Since automated news can be viewed as a computer-generated product, the image of “computers” might indirectly affect people's perceptions of it. Considering that it is much easier for the public to recall computers than robots, people's general perceptions of computers are a more suitable variable for investigating the relationship with automated news credibility. It is thus reasonable to expect that if people trust computers more, they will evaluate automated news more positively.

Overall, the objectives of this research are to investigate (1) whether the public perceives automated news as credible compared to human journalists' work, (2) how the modality of automated news affects people's credibility perceptions of the news, and (3) whether people's general perceptions of computers positively influence their credibility perceptions of news from AI. To what extent does AI influence credibility in the news, and what is the role of modality and of individuals' trust in computers?

Theoretical framework

Previous research on the credibility of robot journalism and the emergence of the AI anchor

Credibility is a perceived concept that depends on people's perceptions, which result from assessing and evaluating multiple aspects simultaneously, for instance bias, trust and accuracy (Wölker & Powell, 2018; Flanagin & Metzger, 2000; Tseng & Fogg, 1999: 40; Meyer, 1988). Credibility research can be traced back to the 1950s (e.g., Hovland & Weiss, 1951; Hovland, Janis & Kelley, 1953), and credibility remains an important criterion for journalism (Clerwall, 2014; Wölker & Powell, 2018). With the development of automated journalism, the public has gradually started paying attention to this new form of journalism. Consequently, many studies have investigated how the public perceives the credibility of automated news compared to journalists' work.

Previous research examined the effect of automated journalism on news credibility by comparing the perceived credibility of machine-written and human-written articles. Graefe et al. (2016) built on and extended prior work by Clerwall (2014) and van der Kaa and Krahmer (2014) by controlling the declared source and the actual source. Their finding shows that machine-written news tends to be rated higher than human-written news, regardless of whether the source is declared or actual. One explanation for this result is that most of this news writing focuses on presenting facts and lacks complex narration, so participants tend to rate articles from both sources as credible (Graefe et al., 2016). Another possibility is that readers have lower expectations of machine-written news, so they may be positively surprised after reading it and give higher scores (Graefe et al., 2016; van der Kaa & Krahmer, 2014).

The research by Jung et al. (2017) shows that when the automated news story was purportedly written by a robot journalist, the public gave it higher scores; when the same automated news was declared to be a journalist's work, people scored it lower. They attribute this to the public's negative attitude toward journalists' credibility and to a craving for new information and technology in Korean society (Jung et al., 2017). The finding from Waddell (2018) is different, however: news purportedly generated by a machine is perceived as less credible than news attributed to a human journalist. The study by Melin et al. (2018) focused on one particular system, Valtteri, a Natural Language Generation (NLG) system, as a case study, with credibility as one of the measures used to evaluate content quality. They found that while Valtteri-generated content was on average rated lower than articles written by journalists, Valtteri ranked highest on the credibility ratings (Melin et al., 2018). The study by Wölker and Powell (2018) indicates that credibility perceptions of human, automated, and combined content and sources are equal; only for sports articles was automated news perceived as significantly more credible than human-written texts (Wölker & Powell, 2018).

Previous studies on credibility perceptions of robot journalism thus yield inconsistent findings, which can be attributed to differing methodological paradigms and contextual factors (Liu & Wei, 2018). For example, Clerwall (2014) and Graefe et al. (2016) compared news actually generated by an algorithm with a human-written story on the same topic, while another paradigm focuses on machine authorship (e.g., Graefe et al., 2016; Haim & Graefe, 2017; van der Kaa & Krahmer, 2014; Waddell, 2018), comparing news with identical content but declared to be reported either by a human or by a robot (Liu & Wei, 2018). Research following the first paradigm found that people tend to rate computer-generated news as more credible than human-written work. However, studies that manipulated machine authorship produced different findings: the study by Waddell (2018) shows that news purportedly written by a machine is less credible, while Haim and Graefe (2017) found that participants preferred human-written news for readability but automated news for credibility. Therefore, attention must be paid to differences in methodology as well as in contextual factors (Liu & Wei, 2018).

The contextual factors include the length, topic, genre, the source (e.g., the news organization) (Liu & Wei, 2018) and the modality of the news stimuli. Such elements may serve as cues that affect how people evaluate the experimental materials. Therefore, the influence of contextual factors should be considered by researchers seeking to understand how media consumers perceive robot journalism.

It is also notable that these studies examined the text modality of robot journalism in particular. This may be because the news article was the primary modality of automated journalism in its early stage, and because most news organizations that adopted robot journalism in their newsrooms collaborated with NLG firms, such as Automated Insights and Narrative Science, to produce computer-generated texts (Wölker & Powell, 2018). Nevertheless, artificial intelligence technology is developing steadily and is now applied not only to news texts but also to news videos.

Previous research has focused entirely on the text format of robot journalism. However, with the breakthrough of AI technology, robot journalism has developed into another area, being applied not only to online news articles but also to videos, so it is important to study the audience's perceptions of AI news videos. This insight is inspired by the Xinhua news agency in China, which launched the first AI anchor to report the news, surprising the public (Xinhua, 2018). This new application of AI in the news media has caught the public's attention and marks significant progress in the development of robot journalism. Since it is a new area, research is needed to support its development by understanding people's perceptions.

Therefore, following the research conducted by van der Kaa and Krahmer (2014) and Graefe et al. (2016), attribution is manipulated in this study. One video is modified by replacing the AI anchor title with a human name and removing all marks and cues about AI from the video. The other retains all elements of the original AI news video, so viewers can see the title “AI news anchor” directly. For the news articles, the content is identical to the AI news videos, with one article attributed to a human journalist and the other to an AI writing algorithm. How, then, does the public perceive news attributed to AI compared to news attributed to a human journalist? Do people perceive a news video attributed to an AI anchor as more credible than one attributed to a human anchor? Does the public perceive automated news as more credible?

Thus, the first research question is:

RQ1: To what extent do people find automated news credible compared to human journalists?

Effect of modality

The existing research on the credibility perception of robot journalism yields mixed findings. Clearly, not only news content and news source affect the results; other factors play a role as well, for example news topics, genres, and embeddedness in different news outlets (Liu & Wei, 2018). These factors may influence how people evaluate machine-written news. The finding from Liu and Wei (2018) that a trusted institution enhances the perceived credibility of machine-written news strengthens this argument. Therefore, to understand how robot-generated news is received by news consumers, the influence of news modality should also be taken into account.

Modality has generally been defined as the use of text, audio, graphics, and video to present a message (Horning, 2017; Sundar & Limperos, 2013). Research in the field of media credibility has a longstanding interest in which medium is perceived as more credible (Metzger et al., 2003). The dispute over the relative credibility of newspapers and television news dates back to the 1950s: at first, newspapers were argued to be more credible, but television later surpassed newspapers in perceived believability (Metzger et al., 2003). Common explanations for television's higher perceived credibility include its visuality, differences between the newspaper and television news industries, and the brevity of reporting (Metzger et al., 2003). The finding from Ibelema and Powell (2001) echoed this: compared to newspapers, cable television news was rated higher in perceived credibility because of its higher levels of visual and aural stimuli.

Nevertheless, such studies all focus on traditional media, or more specifically, traditional journalism. Although news with higher levels of visual and aural stimuli tends to be perceived as more credible (Ibelema & Powell, 2001), what happens when the news article is generated by robots rather than human beings and the anchor of the news video is an AI? Unlike traditional journalism, robot journalism developed out of the internet and is presented mostly on online platforms. The study by Kiousis (2006) indicated that the perceived credibility of online news stories may be influenced by differences in modality, but that the effect may not be direct: modality may shape audience perceptions of credibility, but this depends on users' engagement with the content (Kiousis, 2006).

Applying the findings of media credibility research on modality effects to AI journalism research, it is reasonable to ask whether people perceive AI news videos as more credible than AI news articles, as the findings from traditional journalism research would suggest. It is also important to understand whether modality directly affects the perceived credibility of automated news. An AI news video differs from text, since the audience can directly watch the lifelike AI anchor and hear his or her voice rather than only read text purportedly generated by robots. However, the AI news video is still not the same as TV news: even though the AI anchor's face looks human, the expressions and voice still differ slightly. How credible, then, do people find AI news videos compared to automated news articles?

The AI news video is a new format of AI journalism and is not yet as prevalent as robot-generated texts, which are applied in many media organizations. Given that there is no research on the effect of modality in robot journalism, a new research question is needed: How does news modality affect the perceived credibility of automated news? Which modality is viewed as more credible when AI technology is applied?


Thus, the following research question is:

RQ2: To what extent does the modality of automated news (video vs. text) affect people's credibility perceptions of the news?

Computers credibility and credibility of news attributed to AI

People's prior exposure to technology and their prior experiences with computers may influence their expectations of AI journalism. Some scholars have tried to verify such assumptions. The experiment conducted by Sundar, Waddell, and Jung (2016) demonstrated the “Hollywood Robot Syndrome”: people who can recall a robot from popular media show less anxiety about robotics, and compared to those who are unable to recall a robot, their concerns about robotics are assuaged. The study by Waddell (2018) built on this and tested the relationship between prior recall of robots and the credibility of news ascribed to machines. The results show that users who can recall robots from popular media are less likely to evaluate news attributed to machines negatively; this recall ability indirectly affects the perceived credibility of machine-attributed news.

Given these past works, in which the image of the “robot” and prior recall indirectly affect credibility perceptions, it is reasonable to assume that the image of the “computer” might also serve as a cue. Since AI journalism, robot journalism and automated journalism all refer to algorithmic, computer-generated news, it is logical to expect that not only “robot” but also “computer” may affect the credibility of news attributed to machines. Computers, moreover, are tools widely used in people's daily lives, and it is much easier for the public to recall a computer than a robot. Therefore, instead of testing the effect of prior recall of computers, the perceived credibility of computers is the more crucial variable.


Fogg (2003) explored the relationship between computers and credibility, proposing that people's assumptions, stereotypes and impressions contribute to credibility perceptions (Fogg, 2003, p. 132). Such pre-existing impressions of computers may thus indirectly affect the credibility of AI journalism. Does whether or not people trust computers affect their credibility perceptions of news attributed to AI? It is important to understand this relationship. Based on these insights, it can be expected that if people perceive computers as credible, they will evaluate news attributed to computers positively.

H1: If people's general perceptions of computers are more positive (compared to more negative), it is likely that people are more positive about the credibility of news from AI compared to a human journalist.

Figure 1. Research model. Authorship (automated news vs. human journalist; RQ1), modality (RQ2) and computers credibility (H1) are examined as predictors of credibility perceptions.

Methods

To understand credibility perceptions of automated journalism, an experiment was conducted. The advantage of experiments is their high degree of internal validity (McDermott, 2002): investigators control the experimental design and setting and manipulate the stimuli, so that only the independent variables vary.



Therefore, causal inferences can be strongly supported. The disadvantages, however, are concerns about external validity, the artificial environment and experimenter bias (McDermott, 2002). To understand how the modality of automated news affects credibility perceptions, how people perceive automated news compared to human journalists' work, and whether computers credibility positively correlates with AI news credibility, an experiment is needed. Through the experimental design, the news content, modality and authorship can be controlled, so causal inferences between the dependent variable (news credibility) and the independent variables (e.g., modality) can be supported. Moreover, since this is an audience study, it was important to recruit participants to complete the experimental survey.

In the experiment (N = 214), a 2 (author: AI or human) x 2 (modality: video or article) between-subjects design was used. Two formats of AI news are manipulated; the content is exactly the same for the videos and the articles, but the source differs (AI vs. human). In a similar way to previous research (Graefe et al., 2016; van der Kaa & Krahmer, 2014), the purported source is manipulated by using a byline to present the article as human-written or AI-written, and by presenting the video as delivered by the AI anchor or by a human anchor.

Each respondent was presented with one article or one video on the same topic, with random assignment to conditions. As randomization checks, multiple ANOVAs showed no statistically significant differences in age, gender, education, understanding Brexit from the news, familiarity with AI journalism or prior knowledge of the news media industry in China across the four experimental conditions (a minimal check of this kind is sketched after the list):

1. AI news video attributed to the human anchor (N = 54);
2. AI news video attributed to the AI anchor (N = 55);
3. Texts from the AI news video attributed to the AI writing algorithm (N = 51);
4. Texts from the AI news video attributed to the human journalist (N = 54).
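As an illustration of this kind of randomization check, the following is a minimal sketch in Python, assuming the responses sit in a hypothetical CSV with columns such as `age` and `condition` (the thesis itself used SPSS; file and column names here are illustrative only):

```python
# Minimal randomization-check sketch (illustrative file and column names).
import pandas as pd
from scipy.stats import f_oneway

df = pd.read_csv("experiment_responses.csv")  # hypothetical data file

# One-way ANOVA of age across the four experimental conditions.
groups = [g["age"].dropna() for _, g in df.groupby("condition")]
f_stat, p_value = f_oneway(*groups)
print(f"Age across conditions: F = {f_stat:.3f}, p = {p_value:.3f}")

# The same check can be repeated for gender, education, Brexit knowledge, and so on.
```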


Procedure

The experimental survey consists of a pre-exposure questionnaire and a post-exposure test. Before exposure to the stimuli, each respondent provided demographic information and completed a pre-exposure question about their perceptions of computers credibility: participants were asked, “On a scale of 1-7, how do you perceive computers?”, and rated four indicators of credibility, namely “trustworthy, reliable, accurate and unbiased”.

Furthermore, after answering questions about their knowledge of Chinese news media and their familiarity with the Brexit issue, participants were exposed to either a news article or a news video purportedly attributed to AI or to a human being. Participants were assigned to one of the following conditions: an AI news article attributed to an AI source, an AI news article attributed to a journalist, an AI news video attributed to the AI anchor, or an AI news video in which the AI anchor title was removed and replaced with a real human name. After the stimuli, each respondent completed a series of post-exposure questions measuring perceived news credibility.

Sample

Participants were recruited through social media platforms such as Facebook, LinkedIn and Twitter. The sample comprises news consumers worldwide. This decision was made because the stimulus materials are in English; considering possible language effects, it was important to recruit participants who understand English well. Furthermore, previous studies either recruited samples at the national level (e.g., Graefe et al., 2016; van der Kaa & Krahmer, 2014) or focused on one continent (e.g., Wölker & Powell, 2018). To fill this gap, this study conducted the experiment at the international level and seeks to provide a more comprehensive result.

In total, 336 people answered the questionnaire. The final sample consists of N = 214 respondents after removing incomplete surveys. There are 68 male and 146 female participants. Age ranged from 18 to 60, with an average of 29 years (M = 29.31, SD = 9.17). Among the 214 respondents, 101 (47.2%) hold a Bachelor's degree, 90 (42.1%) a Master's degree, 9 (4.2%) a high school diploma, 6 (2.8%) some college but no degree, 5 (2.3%) an Associate's degree and 3 (1.4%) a doctoral degree.

Overall, 64% of respondents indicated that they are from Asia, 29% from Europe, 2.8% from North America, 1.9% from Africa, 1.4% from Oceania and 0.9% from South America. There are 40 different nationalities in the sample. One respondent holds dual citizenship, and two respondents chose not to provide their citizenship but revealed that they are from Asia and Europe respectively. Although respondents from Asia and Europe account for over 90% of the sample, it is a mixed, international sample whose diversity exceeds that of previous studies.

Stimuli

Each respondent was presented with either one article (news attributed to AI or news attributed to a journalist) or one video (the AI news video attributed to the AI anchor or to the human anchor).

The AI anchor video used in this study comes from the Xinhua news agency of China, which launched the first AI anchor to report the news, surprising the public (Xinhua, 2018). According to Xinhua, the AI anchor adopts a human anchor's face, so it looks just like a human being (Xinhua, 2018). The AI anchor videos all carry a clear title (“Xinhua AI Anchor”), so viewers know they are watching an AI anchor. In this research, one video was kept in its original form, with the title “Xinhua AI Anchor” clearly visible. In the other, the title was removed and replaced with the name “Zhang Zhao”, a made-up human name intended to make the video pass as one presented by a real human anchor.


The topic of the video is a survey showing that there will be more job opportunities for UK graduates despite Brexit. This topic was chosen because Brexit is an important issue that most people follow or have a basic understanding of, making it comparatively appropriate for an international audience study. Furthermore, the news is based on a survey and is therefore a more data-driven piece, an important characteristic of automated journalism.

To measure the effect of modality, the text stimulus was created by transcribing the selected AI anchor video. We adopted the same frame, font, colour and design as the Xinhua news website to make the articles look more realistic. The content of the two stimulus articles is identical, but the source is manipulated: one is purportedly written by the correspondent “Zhang Zhao” and the other is declared to be generated by an advanced AI writing algorithm of Xinhua News Agency, “MAGIC” (Machine-generated content). Therefore, the videos and articles contain exactly the same news content but differ in modality and authorship. Additionally, the authorship of each article and video is made clearly visible to control these variables. The stimulus materials used in the experiment are shown in Appendix A.

Measures

Understanding Brexit from news media

To determine to what extent participants understand the Brexit issue and whether they learned about it through the news media, three items were measured. Participants rated the following statements (1 = totally disagree, 7 = totally agree): “I have a comprehensive understanding of Brexit.”, “Most information about Brexit that I had was from the news.”, and “I follow various news organizations to understand Brexit.” (M = 4.15, SD = 1.28, Cronbach's α = 0.67). Factor analysis of this measure (KMO = 0.62; p < .001) results in one component with an eigenvalue greater than 1 (eigenvalue 1.8), with all items correlating positively with the component.

Prior knowledge about news media industry in China

Prior knowledge about Chinese media was measured with four items, one of which is reverse-coded. Respondents indicated their agreement with the following statements (1 = totally disagree, 7 = totally agree): “I have a comprehensive understanding of news media environment in China.”, “I consume news from Chinese news media for a long time.”, “I know the development of news industry in China.” and “I do not know the news media in China.” (M = 3.33, SD = 1.50, Cronbach's α = 0.81). Factor analysis (KMO = 0.78; p < .001) presents one component with an eigenvalue above 1 (eigenvalue 2.7), and each item correlates positively with the component.
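Before such an index is averaged, the reverse-coded item has to be flipped. A minimal sketch on a 7-point scale, using purely illustrative column names (not the thesis dataset):

```python
import pandas as pd

df = pd.read_csv("experiment_responses.csv")  # hypothetical data file

# On a 1-7 scale, a reverse-coded item is flipped as 8 - x before building the index.
df["knows_china_media_r"] = 8 - df["does_not_know_china_media"]  # illustrative names

china_items = df[[
    "understands_china_media",
    "consumes_china_media",
    "knows_china_development",
    "knows_china_media_r",
]]
df["china_media_knowledge"] = china_items.mean(axis=1)  # index = mean of the four items
```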

Familiarity with AI journalism

Familiarity with AI journalism was measured by asking participants to indicate their agreement with the following statements (1 = totally disagree, 7 = totally agree): “I have heard about robot journalism, automated journalism, AI journalism, etc.” and “I have a comprehensive understanding of robot journalism, automated journalism, AI journalism, etc.” (M = 3.19, SD = 1.88, Cronbach's α = 0.88). Factor analysis (KMO = 0.5; p < .001) results in one component with an eigenvalue greater than 1 (eigenvalue 1.8), with each item correlating positively with the component. These questions were asked after participants had consumed the news and rated its credibility, in order to avoid priming respondents for the experiment.

Independent variable

Robot journalism

Different terms, such as “computational journalism”, “automated journalism” and “algorithmic news”, are used to represent the idea of using computers and software to gather and produce news (Clerwall, 2014). News is categorized as robot journalism when it is automatically generated by computers and algorithms without human intervention or with little human input (Zheng et al., 2018).

To operationalize automated journalism, this study selects an AI anchor video from Xinhua News Agency as the stimulus to understand how people perceive the credibility of an AI news video. Furthermore, to measure the effect of modality on robot journalism, the text format is transcribed from the AI anchor video, so the content is the same but the modality differs.

Dependent variable

Credibility

Credibility is complex and has to be evaluated along multiple dimensions (Meyer, 1988). In previous studies, Graefe et al. (2016) applied Sundar's (1999) four-factor rating method (credibility, liking, quality, and representativeness), using “accurate”, “trustworthy”, “fair”, and “reliable” to capture perceived credibility. Wölker and Powell (2018) employed “believable, accurate, trustworthy, unbiased and complete” to measure message credibility. This study adopts Sundar's (1999) four-factor rating method and adjusts the approaches of Graefe et al. (2016) and Wölker and Powell (2018): the indicators used to operationalize credibility are “accurate, trustworthy, reliable and unbiased”, measured on a 7-point Likert-type scale (1 = not at all, 7 = very much).

Since most previous studies applied Sundar's (1999) method, these credibility indicators have been repeatedly tested. In this study, computers credibility, news credibility and source credibility were all measured with these four operationalized dimensions. Before exposure to the stimuli, participants answered the computers credibility question. Because authorship is manipulated in the research, for source credibility participants were asked to rate the credibility of the author (either AI or human); for news credibility, they were asked to evaluate the news itself.

Principal component factor analysis and reliability tests were conducted to assess validity and reliability. For news credibility (KMO = 0.83; p < .001), factor analysis with Varimax rotation on the four indicators presents one component with an eigenvalue above 1 (eigenvalue 3.24); each item correlates positively with the component, and the reliability of the scale is excellent (Cronbach's α = 0.92). For source credibility (KMO = 0.82; p < .001), the results show one component with an eigenvalue above 1 (eigenvalue 3.41), all items correlate positively with the component, and the reliability is also excellent (Cronbach's α = 0.94). Factor analysis of computers credibility (KMO = 0.75; p < .001) yields one component with an eigenvalue greater than 1 (eigenvalue 2.4), with each item correlating positively with the component; the reliability of this scale is acceptable (Cronbach's α = 0.77).
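To illustrate how such a four-item index and its internal consistency can be computed, the following is a minimal sketch, again with illustrative column names; Cronbach's alpha is computed from its standard formula rather than via the SPSS routines used in the thesis:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the item sum)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

df = pd.read_csv("experiment_responses.csv")  # hypothetical data file
cred_items = df[["accurate", "trustworthy", "reliable", "unbiased"]]  # illustrative names

df["news_credibility"] = cred_items.mean(axis=1)  # scale score = mean of the four items
print(f"Cronbach's alpha = {cronbach_alpha(cred_items):.2f}")
```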

In general, news credibility (M = 3.89, SD = 1.20), source credibility (M = 3.83, SD = 1.20) and computers credibility (M = 5.20, SD = 0.91) were rated as credible across all conditions.

Data collection process

Respondents were recruited through the internet, using Facebook, LinkedIn and other social media platforms, and were invited to take part in the online experiment. Data were collected from April 29, 2019 to May 12, 2019, a period of two weeks.

Ethical considerations

After data collection, IP address information was deleted before the data were analyzed. Once the analysis and the thesis research are completed, the dataset will be permanently deleted.


Analyses

Univariate analysis of variance was conducted to examine respondents' perceived credibility by comparing the mean news credibility scores (dependent variable) of the AI news article attributed to the AI writing algorithm, the AI news article attributed to a human journalist, the AI news video attributed to the AI anchor and the AI news video attributed to the human anchor (independent variables). Through this test, the mean ratings across news modalities (video vs. text) and authorship (automated news vs. human journalist) can be compared; univariate analysis of variance is also used for the main-effect analyses. Furthermore, regression analysis is used to test whether computers credibility significantly predicts news credibility and whether there is an interaction between AI news authorship and computers credibility.
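A minimal sketch of this kind of 2 x 2 between-subjects ANOVA in Python, assuming the same hypothetical DataFrame as above with `authorship`, `modality` and `news_credibility` columns (the thesis itself ran these tests in SPSS):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("experiment_responses.csv")  # hypothetical data file

# 2 (authorship: AI vs. human) x 2 (modality: video vs. text) between-subjects ANOVA.
model = smf.ols("news_credibility ~ C(authorship) * C(modality)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects of authorship and modality, plus their interaction
```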

Results

First, we analyzed perceived news credibility by news author (automated news versus human journalist). No main effect was found, F(1, 212) = 0.828, p = .364: news consumers perceive the credibility of AI-attributed and human-attributed news equally. Comparing the mean scores of all conditions, people rated the AI news video attributed to AI (M = 3.97, SD = 1.13) as somewhat more credible than the same video attributed to a human (M = 3.68, SD = 1.15), while the mean scores for the text stimuli barely differ. Mean ratings of news credibility for all conditions are summarized in Table 1. With these results, RQ1 can be answered: in general, participants perceived the AI and the human journalist as equally credible, although they scored the stimuli attributed to AI slightly higher than those attributed to human journalists.


Table 1. News credibility perceptions by four experimental conditions

Condition                                               N     News credibility
Total                                                   214   3.894 (1.20)
AI news video attributed to the human anchor            54    3.680 (1.16)
AI news video attributed to the AI anchor               55    3.973 (1.14)
AI news texts attributed to the AI writing algorithm    51    3.966 (1.16)
AI news texts attributed to the human journalist        54    3.958 (1.35)

Note. Mean ratings of news credibility. News credibility was measured by four bipolar items on a seven-point Likert scale (not reliable/reliable, inaccurate/accurate, not trustworthy/trustworthy, biased/unbiased). Standard Deviations are reported in parentheses.

Furthermore, we measured the influence of modality (video versus text), but there was no significant main effect, F(1, 212) = 0.661, p = .417. The credibility of video news and text news can therefore be assumed to be equal. Comparing the mean scores of the AI news video with those of the AI news article also reveals no differences in people's credibility perceptions: the ratings are almost the same, so AI news in video form is not perceived as more credible than AI news in text form. Modality may not be a contextual influence that indirectly affects the credibility perceptions of automated news. These results answer the second research question (RQ2): modality does not appear to affect people's credibility perceptions of automated news.


In addition, for source credibility, no significant effects were found for either attribution, F(1, 212) = 0.154, p = .695, or modality, F(1, 212) = 0.836, p = .362. Thus, the credibility of all sources can be presumed to be equal. Mean ratings of credibility perceptions overall are included in Appendix B.

Regarding the relationship between credibility perceptions of computers and perceived AI news credibility, no statistically significant effect was found in the univariate analysis of variance, F(3, 210) = 0.878, p = .453. We also conducted a regression analysis to test the interaction of AI news authorship and computers credibility, but again there was no significant effect, F(3, 210) = 0.758, p = .519. See Table 2 for the summarized regression analysis.

Table 2. Regression analysis results

Predictor                 B        SE       β
(Constant)                3.366    0.672
Authorship                -0.066   0.961    -0.028
Computers credibility     0.088    0.128    0.066
Interaction               0.040    0.182    0.090

R² = 0.011, ∆R² = 0.011, F Change = 0.758, Sig. F Change = .519

Note. Dependent variable: news credibility. *p<.05, **p<.01, ***p<.001.
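A minimal sketch of how such an interaction model can be estimated, using the same hypothetical DataFrame and column names as above; it mirrors the authorship x computers-credibility regression reported in Table 2 but is not the exact SPSS specification used in the thesis:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_responses.csv")  # hypothetical data file

# News credibility regressed on authorship, computers credibility, and their interaction.
# authorship is assumed to be coded 0 = human byline, 1 = AI byline (illustrative coding).
model = smf.ols("news_credibility ~ authorship * computers_credibility", data=df).fit()
print(model.summary())  # coefficients, R-squared, and the model F statistic
```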

Thus, these results do not support hypothesis one, and H1 must be rejected: people's general credibility perceptions of computers do not positively correlate with the credibility of news from AI. It is noticeable, however, that perceived computers credibility (M = 5.20, SD = 0.91) is higher than perceived news credibility (M = 3.89, SD = 1.20) overall.

Supplementary analyses tested whether familiarity with AI journalism, understanding Brexit from the news media and prior knowledge of the news media industry in China influence news credibility perceptions. Regression analysis was conducted to examine whether they are predictor variables. The results showed no statistically significant effects except for prior knowledge of Chinese media, F(1, 212) = 8.22, p = .005; a linear regression established that it statistically significantly predicts news credibility. A further moderation test examined whether prior knowledge of Chinese media acts as a moderator. However, using the PROCESS macro, a regression-based analysis tool for SPSS (Hayes, 2016), the moderation effect was not significant, F(1, 210) = 1.075, p = .301. Since the aims of the study are to investigate the perceived credibility of automated news and the effect of modality, the relationship between prior knowledge of Chinese media and news credibility is not discussed further.
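PROCESS is an SPSS macro; an equivalent moderation check can be sketched in Python by adding an interaction term to an ordinary regression, again with the hypothetical column names used above (e.g., `china_media_knowledge`). This is a sketch of the general approach, not the exact PROCESS model:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_responses.csv")  # hypothetical data file

# Moderation sketch: does prior knowledge of Chinese media moderate the effect of
# authorship on news credibility? The interaction term carries the moderation effect.
model = smf.ols("news_credibility ~ authorship * china_media_knowledge", data=df).fit()
print(model.summary())
# A significant authorship:china_media_knowledge coefficient would indicate moderation.
```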

Discussion and conclusion

The purpose of the current study was to determine whether the public finds automated news as credible as human journalists' work and to investigate the effect of modality. This research is the first to study the video format of AI journalism, and to do so in an international setting. Overall, the research has shown that automated news is perceived as being as credible as human journalists' work. Moreover, the modality of automated news does not affect people's credibility perceptions: the perceived credibility of the AI news video and of the AI news article can be assumed to be equal. This finding for a sample of diverse nationalities corroborates previous research testing the credibility perceptions of automated news in different country settings (Clerwall, 2014; Graefe et al., 2016; van der Kaa & Krahmer, 2014) and in a European setting (Wölker & Powell, 2018), which also found automated news credible.


There is no significant main effect between news attributed to AI and news attributed to a human; they can be assumed to be equally credible. This may be explained by the fact that people are positive about AI technology: with the advance of AI, people seem to accept the changes rather than resist them. A craving for new information and technology may thus have increased the perceived credibility of automated news (Jung et al., 2017). Another possible explanation is that people's expectations of AI news are lower than their expectations of human journalists' work (Haim & Graefe, 2017; Graefe, 2016). When participants consumed automated news attributed to AI, they found it to be actually good and therefore evaluated it as credible; when they consumed automated news attributed to a human, it fell short of their expectations, since people tend to expect much more of human journalists' work (Haim & Graefe, 2017). As a result, credibility perceptions of automated news and of human journalists' work become similar.

Furthermore, source attribution does not clearly affect the credibility perceptions of automated news in this research. Findings from previous research indicate that news purportedly written by human journalists tends to be scored higher than news attributed to an algorithm (Graefe et al., 2016; Waddell, 2018). The results of this research do not echo those findings: automated news is not rated higher on credibility when the author is declared to be a human anchor or a journalist. On the contrary, when automated news is correctly attributed to AI, people gave slightly higher scores. Looking at the mean news credibility ratings across the four conditions, the public scored the AI news video higher (M = 3.97) when it was labelled as the real AI anchor, and lower (M = 3.68) when the anchor was presented as a human. A possible explanation is that the AI anchor is easy to discern: although the AI anchor's face is identical to a human's, its voice is mechanical and its body movement is unnatural. A pleasant and believable vocal style is an important communication tool for a news anchor (Burgoon, 1978), and fluency, clarity and pleasantness in news delivery heighten credibility perceptions (Burgoon, 1978). Thus, given the AI anchor's strange expressions, people remain skeptical of the news source even when the title is replaced with a human name.

Focusing on the effect of modality on automated news, the research has shown that, unlike news organizations and news types, which do affect automated news credibility (Liu & Wei, 2018), modality may not be a contextual factor that indirectly affects people's credibility perceptions of AI news. This finding is partly consistent with Kiousis (2006), who demonstrated that modality does not affect perceived source and message credibility directly but is contingent on its ability to stimulate engagement. The result is also not in line with previous research on the effect of modality in traditional journalism, as it could not demonstrate that the news video is perceived as more credible than the news article.

People do not perceive the AI news video as more credible, which may be explained by the fact that AI journalism differs from traditional journalism. AI news videos are presented on online platforms, which means they belong to the web medium; research from traditional journalism therefore cannot easily be generalized to robot journalism. Furthermore, although modality does not affect people's credibility perceptions of automated news, this hints that the AI news video is broadly acceptable: this research supports the conclusion that AI news videos are perceived as being as credible as news articles. Since AI journalism is expanding rapidly, various presentations of automated news can be expected.

The current study also found that, contrary to expectations, perceived computers credibility does not positively affect perceived news credibility. This result does not agree with Waddell's (2018) finding that the ability to recall robots and evaluations of automated news are positively related. It may be that the public separates computers credibility from AI news credibility and does not perceive automated news as a computer product. Another explanation is that computers are everyday tools and, unlike “robots”, do not stimulate the imagination in the same way. Therefore, unlike the image of “robots”, the image of “computers” is not a cue that affects automated news credibility.

This research carries several implications. The findings show that the credibility of automated news equals that of human journalists' work. For the development of robot journalism and for media organizations, this is a good sign, since AI's work is competitive with human journalists' work; in this respect, AI journalists and AI anchors have reached a status equal to that of their human colleagues (Wölker & Powell, 2018). Automated news does not surpass human-generated stories for now, but as AI advances it may well stand out in the future. Moreover, AI news videos, as a new product in the field of automated journalism, should be developed further, because they are perceived as being as credible as automated news articles. Finally, for journalists this is as much a warning as a realization: the perceived credibility of human journalists' work is not higher. Even though journalists cannot yet be replaced by AI technology, this does not mean it will never happen.

To conclude, the results show that the public perceived automated news attributed to AI and automated news attributed to a human journalist as equally credible. Moreover, the perceived credibility of the AI anchor news video and of the AI news text is almost the same, meaning that neither authorship nor modality affects credibility perceptions of automated news. Furthermore, there is no significant difference between the perceived credibility of robot journalism and human journalism. As this study demonstrates, people gave automated news good scores. This is an important indication that robot journalism is to a large extent perceived as credible and is accepted by the public.

Taken together, this study has demonstrated that the differences in perceived credibility between automated news and human journalists' work are small, and little effect of modality on automated news was found. With the advance of AI journalism, news production is expected to become more diverse, the modalities of automated news to multiply, and people's attitudes towards AI news to remain positive. The development of AI is unavoidable, so there will certainly be more applications of it in the news industry. Human journalists and human anchors should therefore reflect on their jobs, their responsibilities, their competitive position and the unique qualities that cannot be replaced by automation.

Limitation and further research

The major limitation of this study is that the article content was transcribed from the AI news video and then adjusted to resemble automatically generated content. It is therefore not content actually generated by algorithms, so using the text stimuli as AI-generated news articles is imperfect. However, for the purposes of the experiment and the manipulation of modality, the content of the articles and videos had to be identical. Furthermore, external validity is reduced by these changes to the stimulus materials. Language is another problem in the experimental survey: participants were recruited worldwide and the questionnaire was in English. This seems sensible, since English is an international language, but participants could not answer the survey in their mother tongue, which might have led to misunderstandings and might have influenced the study results.

Further research should address the limitations above. This study manipulates the attribution of the news, namely the authorship, so the stimuli used in the research are all AI-generated but differ in attribution. Unfortunately, the study did not include a true human anchor video to compare with the AI anchor. Further studies might therefore explore credibility perceptions of a true human anchor news video versus an AI anchor news video, to test whether people perceive the two differently. In other words, using real news videos and news articles assigned to their true authors, the results might differ from this research and would also be closer to reality.


References

Burgoon, J. K. (1978). Attributes of the newscaster's voice as predictors of his credibility. Journalism Quarterly, 55(2), 276-300. doi:10.1177/107769907805500208

Carlson, M. (2014). The robotic reporter. Digital Journalism, 3(3), 416-431. doi:10.1080/21670811.2014.976412

Carlson, M. (2015). The robotic reporter: Automated journalism and the redefinition of labor, compositional forms, and journalistic authority. Digital Journalism, 3(3), 416-431.

Clerwall, C. (2014). Enter the robot journalist. Journalism Practice, 8(5), 519-531. doi:10.1080/17512786.2014.883116

Flanagin, A. J., & Metzger, M. J. (2000). Perceptions of internet information credibility. Journalism and Mass Communication Quarterly, 77(3), 515-540.

Fogg, B. J. (2003). Persuasive technology: Using computers to change what we think and do. Amsterdam: Morgan Kaufmann.

Graefe, A. (2016). Guide to automated journalism. Available at: http://towcenter.org/research/guide-to-automated-journalism/ (accessed 12 May 2019).

Graefe, A., Haim, M., Haarmann, B., & Brosius, H. (2016). Readers' perception of computer-generated news: Credibility, expertise, and readability. Journalism, 19(5), 595-610. doi:10.1177/1464884916641269

Haim, M., & Graefe, A. (2017). Automated news. Digital Journalism, 5(8), 1044-1059. doi:10.1080/21670811.2017.1345643

Hayes, A. F. (2016). Supplement to DOI: 10.1111/bmsp.12028. Available at: http://afhayes.com/public/bjmspsupp.pdf (accessed 12 May 2019).

Horning, M. A. (2017). Interacting with news: Exploring the effects of modality and perceived responsiveness and control on news source credibility and enjoyment among second screen viewers. Computers in Human Behavior, 73, 273-283. doi:10.1016/j.chb.2017.03.023

Hovland, C., Janis, I., & Kelley, H. (1953). Communication and persuasion. New Haven, CT: Yale University Press.

Hovland, C., & Weiss, W. (1951). The influence of source credibility on communication effectiveness. The Public Opinion Quarterly, 15(4), 635-650. Retrieved from http://www.jstor.org/stable/2745952

Ibelema, M., & Powell, L. (2001). Cable television news viewed as most credible. Newspaper Research Journal, 22(1), 41-51. doi:10.1177/073953290102200104

Jung, J., Song, H., Kim, Y., Im, H., & Oh, S. (2017). Intrusion of software robots into journalism: The public's and journalists' perceptions of news written by algorithms and human journalists. Computers in Human Behavior, 71, 291-298. doi:10.1016/j.chb.2017.02.022

Kim, D., & Kim, S. (2016). Newspaper companies' determinants in adopting robot journalism. Technological Forecasting and Social Change, 117, 184-195.

Kiousis, S. (2006). Exploring the impact of modality on perceptions of credibility for online news stories. Journalism Studies, 7(2), 348-359. doi:10.1080/14616700500533668

Latar, N. L. (2014). The robot journalist in the age of social physics: The end of human journalism? In The economics of information, communication, and entertainment: The new world of transitioned media (pp. 65-80). doi:10.1007/978-3-319-09009-2_6

Liu, B., & Wei, L. (2018). Machine authorship in situ. Digital Journalism, 1-23. doi:10.1080/21670811.2018.1510740

McDermott, R. (2002). Experimental methods in political science. Annual Review of Political Science, 5, 31-61.

Metzger, M. J., Flanagin, A. J., Eyal, K., Lemus, D. R., & McCann, R. M. (2003). Credibility for the 21st century: Integrating perspectives on source, message, and media credibility in the contemporary media environment. Annals of the International Communication Association, 27(1), 293-335. doi:10.1080/23808985.2003.11679029

Meyer, P. (1988). Defining and measuring credibility of newspapers: Developing an index. Journalism Quarterly, 65, 567-574.

Miroshnichenko, A. (2018). AI to bypass creativity. Will robots replace journalists? (The answer is "yes"). Information, 9(7), 183. doi:10.3390/info9070183

Montal, T., & Reich, Z. (2017). I, robot. You, journalist. Who is the author? Digital Journalism, 5(7), 829-849. doi:10.1080/21670811.2016.1209083

Napoli, P. M. (2014). Automated media: An institutional theory perspective on algorithmic media production and consumption. Communication Theory, 24(3), 340-360. doi:10.1111/comt.12039

Sundar, S. S. (1999). Exploring receivers' criteria for perception of print and online news. Journalism & Mass Communication Quarterly, 76(2), 373-386. doi:10.1177/107769909907600213

Sundar, S. S., & Limperos, A. M. (2013). Uses and grats 2.0: New gratifications for new media. Journal of Broadcasting & Electronic Media, 57(4), 504-525. doi:10.1080/08838151.2013.845827

Sundar, S. S., Waddell, T. F., & Jung, E. (2016). The Hollywood Robot Syndrome: Media effects on older adults' robot attitudes and adoption intentions. Proceedings of HRI 2016: ACM/IEEE International Conference on Human-Robot Interaction, 343-350. doi:10.1109/HRI.2016.7451771

Tseng, S., & Fogg, B. J. (1999). Credibility and computing technology. Communications of the ACM, 42(5), 39-44.

van der Kaa, H., & Krahmer, E. (2014). Journalist versus news consumer: The perceived credibility of machine written news. In Proceedings of the Computation + Journalism Conference, New York. Available at: https://pure.uvt.nl/ws/files/4314960/cj2014_session4_paper2.pdf (accessed 3 Feb 2019).

Waddell, T. F. (2018). A robot wrote this? How perceived machine authorship affects news credibility. Digital Journalism, 6(2), 236-255. doi:10.1080/21670811.2017.1384319

Wölker, A., & Powell, T. E. (2018). Algorithms in the newsroom? News readers' perceived credibility and selection of automated journalism. Journalism: Theory, Practice & Criticism. doi:10.1177/1464884918757072

Xinhua. (2018, Nov 8). World's first AI news anchor makes "his" China debut. XINHUANET. Retrieved from http://www.xinhuanet.com/english/2018-11/08/c_137591813.htm

Xinhua. (2019, Jan 22). More job opportunities for UK graduates despite Brexit. Retrieved from http://www.cncnews.cn/new/detail/113329.jhtml

Zheng, Y., Zhong, B., & Yang, F. (2018). When algorithms meet journalism: The user perception to automated news in a cross-cultural context. Computers in Human Behavior.

Appendix A

Stimulus materials: screenshots of the AI anchor news videos and the manipulated news articles used in the four experimental conditions.

Note. The cover of the AI news video attributed to the human anchor “Zhang Zhao”. There are no marks or logos indicating that the anchor is an AI.


Appendix B

Table B1. Credibility perceptions overall by four experimental conditions

Condition                                               N     News credibility   Source credibility
Total                                                   214   3.894 (1.20)       3.828 (1.20)
AI news video attributed to the human anchor            54    3.680 (1.16)       3.551 (1.22)
AI news video attributed to the AI anchor               55    3.973 (1.14)       3.955 (1.15)
AI news texts attributed to the AI writing algorithm    51    3.966 (1.16)       3.760 (1.19)
AI news texts attributed to the human journalist        54    3.958 (1.35)       4.042 (1.22)

Note. Mean ratings of both news credibility and source credibility by four experimental conditions. News credibility and source credibility were measured by four bipolar items on a seven-point Likert scale (not reliable/reliable, inaccurate/accurate, not trustworthy/trustworthy, biased/unbiased). Standard Deviations are reported in parentheses.
