Deepfakes and Disinformation: The Effects of Ultra-Realistic Fake Videos on News Credibility and Acceptance of Information.

Agathe G. Rialland
12536040

Master's Thesis
Graduate School of Communication
Master's Program Communication Science: Political Communication
Supervisor: dhr. T. Dobber MSc

June 26, 2020
7,444 words


ABSTRACT

Deepfakes have been at the center of recent discussions on the future of political communication in a digital age. This state-of-the-art technology brings new challenges in terms of disinformation because of its presumed deceptive power. This paper tackles questions regarding news credibility when juxtaposed with a deepfake. Through message credibility, source credibility, the priming effect, and the effects of video content on information processing, the main goal of this paper is to determine the influence of deepfakes on the acceptance of disinformation involving a political figure. The results demonstrate, contrary to expectations, that deepfakes have a counterproductive effect when used to spread political disinformation, as most people became aware of being exposed to fake content, while text-based disinformation alone led to more uncertainty and less resistance to the disinformation. Overall, this paper offers an optimistic view of the future of deepfakes and disinformation, revealing that people are not so easily deceived by deepfake video content, and in turn not easily convinced by disinformation. However, these results are to be interpreted carefully and in the context of the particular treatment used in this research.



“Technologists expect that advances in AI will soon make it impossible to distinguish a fake video and a real one. How can truths emerge in a deepfake ridden ‘marketplace of ideas?’ Will we take the path of least resistance and just believe what we want to believe, truth be damned?”

- Danielle Citron, TEDSummit 2019

INTRODUCTION

A picture is worth a thousand words, as the saying goes. But what about a video?

In our digital and interconnected world, a video shared on social media can be the match that starts the wildfire of social unrest and outcry. As we recently witnessed, the viral and tragic video of George Floyd's death at the hands of police officers in Minnesota (Cobb, 2020) showed how a simple cellphone video can be a tool to inform, spark protest, raise awareness and induce societal change across the globe. Communication today is deeply rooted in technology, enabling us to share and connect with one another through the World Wide Web. Almost indispensable, the Internet has become the arena for leisure and information, and even a tool for justice against racist actions and words. But the fact that the digital realm is also easily malleable leaves many questions hanging when we start thinking about what is really possible with today's tools and technology. Disinformation, whereby one is able to distort a depiction of reality through the news, is not a new concept.

However, today's technology brings new dimensions to this 'truth decay' (Citron & Chesney, 2019). What happens when one is able to distort reality through video content, making the detection of what is true and what is false almost impossible? Just as Danielle Citron pondered this issue during her TED talk, this question led me to conduct research and investigate the influence of deepfake video technology on the perception of information.


Deepfake is the name given to a machine learning technology involving Artificial Intelligence (AI) and deep learning, through which video content can be altered or fabricated in order to replace one face with another (Laugée, 2019). Moreover, it can alter voices and speech in an almost undetectable way (idem). Deepfakes are ultra-realistic videos of someone saying or doing something they have never, in reality, said or done. Although the technology has been primarily used for pornographic purposes - by implanting the face of a usually famous actor or actress onto a pornographic scene (Pitt, 2019) - there could be tremendous political and societal impacts from a deepfake used to spread false information intentionally, in turn threatening democracy (Vaccari & Chadwick, 2020; Bennett & Livingston, 2018). Deepfake technology has not been developed for wrongful purposes, and in some cases it turns out to be useful for many harmless reasons, such as education, humour and satire (Zeng & Olivera-Cintrón, 2019). However, manipulation of images has unfortunately been a tool for conducting malicious operations and altering the view of reality for decades (Fernand, 2017). While Danielle Citron imagines deepfakes depicting scenes such as American soldiers burning a Koran, and other scenarios with significant potential for incitement, I thought about the implications that such videos have for changing people's opinion of a particular political leader. In the current political climate, public opinion of political leaders can be very fragile. What happens when the public believes that a politician has said or done something they have not?

A recurring topic that has recently been associated with deepfakes is the 2020 presidential election in the United States (US). Preceding the 2016 US elections in particular, the term 'fake news' was popularised among citizens, especially through Donald Trump's use of the term in an attempt to attack the credibility of certain newspapers and journalists (Ștefăniță, Corbu & Buturoiu, 2018). In some ways, the delegitimisation problem that arose during the 2016 elections has a high chance of being seen again during the 2020 elections. However, in this case, claiming "this is fake news" would be replaced by claiming "this is a deepfake". Although I will refer to the term as disinformation, the aim is the same: to persuade through a false story that appears to be news.

Since the associated technology is fairly recent, literature on deepfakes is sparse. Although the technology has been addressed in more detail in technological fields than in the social sciences, the level of visual deception that deepfakes seem to allow has led some social science scholars to theorise on their effects. The majority of the academic literature about deepfakes consists of descriptive work on deepfakes and their potential threat to democracy (Nelson & Lewis 2019; Pitt 2019; Westerlund 2019; Zeng & Olivera-Cintrón 2019) and society (Chesney & Citron, 2019), or of studies on deepfake detection systems and video authentication (Korshunov & Marcel, 2019), for criminal justice purposes (Maras & Alexandrou 2019; Chesney, Citron & Baker 2019) or human rights (Kozemczak, 2019). Some of the literature mentions disinformation and the implication of deepfakes as a potential threat; nevertheless, there is no scientific research available on the actual effects of deepfakes on disinformation. Interestingly, however, a study very recently conducted by Vaccari & Chadwick (2020) on the deceptiveness of deepfakes, using the exact same stimulus as the one I used, concluded that deepfakes bring uncertainty, which in turn leads to distrust in news media. Although unpublished at the time of conducting my research, Vaccari & Chadwick's work represents a good literature base for my research on deepfakes, while leaving space to respond to their unanswered question of how people approach the role of political deepfakes in public discourse (idem). The knowledge gap that this study intends to fill is the scientific bridge between theoretical assumptions on deepfakes in a context of disinformation and empirical evidence on the threat that deepfakes pose as a vehicle for disinformation.

Expressly, does a false news story actually become more convincing through an evidential deepfake? Closely related is thus the main research question of the thesis:


RQ: To what extent are deepfakes able to influence the acceptance of disinformation?

This paper will tackle subjects such as message credibility in disinformation, media trust, source credibility and the priming effect. It is the interlocking and layering of multiple factors that makes deepfakes a threat: from the accessibility of the technology to the speed at which information spreads on the internet (Chesney & Citron, 2019). To answer the research question, an experiment was conducted whereby participants were exposed to a fake webpage of a fictive newspaper containing disinformation, as well as a clip from a deepfake video of former US president Barack Obama made by BuzzFeed, supporting the disinformation.

THEORETICAL FRAMEWORK

a. Disinformation (fake news)

This research is first and foremost a study on disinformation, commonly called "fake news". The term "fake news" has often been used in the media to describe the sharing of non-factual information, whether it be a complete fabrication or manipulation of a piece of information, or a form of propaganda (Ștefăniță, Corbu & Buturoiu, 2018). What "fake news" actually refers to in academic terms is misinformation or disinformation. Scholars usually prefer to refer to "fake news" as disinformation, which can be defined as wrong information purposely shared with the intent to deceive and influence public opinion, meaning that there is an intention to spread information that is fabricated or untrue (Corner, 2017). Disinformation is different from misinformation, as the latter does not include the intention to spread untrue facts (Hendricks & Vestergaard, 2019). Disinformation is not a new development, as the term, which emerged during the Cold War, has been used to describe deceptive information ranging from government propaganda to misleading newspaper articles to hoaxes on the Internet (Levine, 2014). Because of its societal impact, disinformation has been widely studied in the field of communication, but also in psychology and political science. In particular, scholars' work on digital disinformation enabled this research to be put into its theoretical context. In a study on the effects of exposure to multiple news sources and disinformation on perceived realism and political attitudes, Balmas (2014) finds that political attitudes are affected by the perception of fake news as realistic, rather than by mere exposure to disinformation. In other words, to theorise on political attitudes, it is important to first take into account whether individuals take disinformation seriously when exposed to it. Furthermore, Ștefăniță, Corbu & Buturoiu (2018) find that people generally do not tend to consider themselves influenced by disinformation, while considering that distant others might be more influenced than they are. This finding shows that people expect themselves to be capable of determining what is disinformation and what is not, while others cannot, or only to a lesser extent. However, this is not representative of people's actual capacity to verify the legitimacy of a piece of information, and this lack of awareness could also lead people not to double-check information seen online, making them easy targets for persuasion. The key word here is persuasion. In essence, disinformation is primarily concerned with its persuasive power. This leads to the question of how deepfakes can make a difference: are deepfakes the new tool to heighten disinformation's persuasiveness?

b. Ethos: the credibility

In communication science, the art of persuasion is inherent to the field, with the philosopher Aristotle first describing it two thousand years ago (Gallo, 2019). Aristotle's rhetorical pillar of persuasion, ethos, is achieved through credibility. Ethos is what this research mainly intends to investigate: do people find the message credible? Message credibility refers to "perceptions of believability, either of the source or of the source's message" (Metzger, Flanagin, Eyal, Lemus, & Mccann, 2003, p. 302, as cited in Ernst, Kühne & Wirth, 2017), although I believe that for this paper, Appelman & Sundar's own definition of credibility is more appropriate, since it defines message credibility as "an individual's judgment of the veracity of the content of communication" (2016, p. 63). In political communication, credibility matters because of the influence it has over opinion and political attitudes (Pascu & Coman, 2013): credibility leads to better processing of information as well as less resistance and, in turn, more positive attitudes towards information (Ernst, Kühne & Wirth, 2017). In this research, the credibility factor concerns the deepfake. The credibility of deepfakes lies in the perceived credibility inherent to videos. A study by Diakopoulos & Essa (2010) on video credibility reveals "a significant effect for the visualisation on the overall credibility of the information in the video" (p. 79). Additionally, Rössler et al. (2018) show that users' capability to distinguish deepfakes from real videos is low. Linking these findings to news credibility, Brosius, Donsbach & Birk (1996) explain that the particularity of television news lies in the use of moving pictures that give the news a sense of authenticity, credibility and actuality, making the viewer an "eyewitness" to the event (p. 180). Because of the "realism heuristic" (Frenda et al., 2013, as cited in Vaccari & Chadwick, 2020), images have a stronger misleading power than text alone, which, in the case of political figures making a statement, creates "visual cues that appraise the credibility [...] of the spokesperson", as mentioned by Graber (1990, p. 153). Deepfakes being moving images, the same sense of credibility could be derived from the assumptions mentioned above. Based on this, I drew the first hypothesis:


H1: Participants exposed to a deepfake video are more likely to find the message credible than those not exposed to the deepfake.

In his book on web content credibility, Wierzbicki (2018) points out the importance of distinguishing the two concepts of credibility and truth, which are closely related but not completely equal: whereas credibility is more related to the cognitive capacities of each individual, truth is a more objective and universal concept, making it possible that credible information is not always true, and true information is not always credible. As this research presents disinformation, that is, information that is untrue, it seemed important to also make this distinction in the assessment of credibility, leading to the second hypothesis:

H2: Participants exposed to a deepfake video are more likely to accept the message as true than those not exposed to the deepfake.

Credibility is not only achieved through the message, but through the source too. Previous research on message credibility often mentions the importance of assessing the acceptance of information through multiple underlying factors, not message credibility alone, testing for the simultaneous interaction between these different factors (Gualda & Rúas, 2019; Wierzbicki, 2018). A study by Miller & Kurpius (2010) finds that viewers deem stories from official sources more credible than stories from unofficial or citizen sources. Appelman & Sundar (2016) as well as Foy et al. (2017) also suggest the importance of investigating the interaction between source credibility and message credibility, as both studies claim that if a source is seen as credible, the message tends to be seen as credible too. Hence the third hypothesis:


H3: Participants who find the disinformation source credible are more likely to find the message credible than those who qualify it as not credible.

c. Logos: the argument

Arguably, the findings of Foy et al. (2017) suggest that both message and source credibility can be influenced by the plausibility of the message, as people tend to accept messages to a greater extent when they coincide with their beliefs about the world and tend to trust a source more when the message is plausible. Studies of credibility tend to look at the message, the source and the format, but less attention has been given to people's capacity to process the information, which seems important to note here. As previously mentioned, Aristotle's rhetoric explains persuasion through logos, or a logical assessment of facts (Rapp, 2010). Following the persuasive aim, the deepfake becomes the logos, the argument itself that brings persuasion. When it comes to fact checking, deepfakes can provide video evidence inciting people to believe a piece of information. Rationality and critical thinking could potentially offer alternative explanations for the way people might be persuaded by disinformation and deepfakes, and for this reason it seemed important to hypothesise about the respondents' assessment of video as a fact-checking device, leading to the fourth hypothesis:

H4: Participants exposed to the deepfake video are more likely to consider the video the proof of veracity of the disinformation than participants not exposed to the deepfake.


d. Priming

In social science research, priming is defined as a stimulus that can activate memory and thus increase the weight given to a certain consideration in a judgement task (Lavrakas, 2008). In other words, priming can shift attention to a certain aspect of a question, nudging respondents towards a certain response. Generally, when conducting research, it is important to avoid priming, as it can influence respondents in their responses because information will be evaluated with the priming effect in close proximity (Watson et al., 2018). In addition, the concept of selective exposure, which explains that people tend to expose themselves to information that corresponds to their pre-existing beliefs (McGuire & Papageorgis 1962; Knobloch-Westerwick & Kleinman, 2012; Knobloch-Westerwick, Mothes & Polavin, 2020), has also been used in some research to explain attitude changes in persuasion situations. For instance, McGuire & Papageorgis (1962) explain the influence of forewarning a person of an attack on their beliefs before asking for their opinion on dissonant information, leading their participants to resist the dissonant information more. This theory fits within the cognitive dissonance theory developed by Festinger (1957), which claims that people are likely to avoid a situation where information clashes with their beliefs and attitudes (Donsbach, 2008). The juxtaposition of priming and cognitive dissonance has been described by Marquis (2007) as "Awareness of Persuasion" (p. 191) in his discussion of the moderators of the priming effect. In this understanding, those who are made aware of influence, through priming, are more inclined to resist the influence instead of assimilating it. The priming effect thus has the power to inform subjects that they are probably being manipulated, leading them to correct the dissonant information. In other words, the priming effect can act as the bridge between the state of cognitive dissonance brought upon by disinformation and the return to consonance. A study by Wu et al. (2016) supports this theory by finding that priming does increase the perception of manipulative intent and in turn decreases the perceived quality and credibility of information. Accordingly, these assumptions led me to the last hypotheses:

H5: Participants primed about the existence of deepfakes are less likely to find the message credible than those who are not primed.

H5a: Participants primed about the existence of deepfakes are less likely to accept the message as true than those who are not primed.

H5b: Participants exposed to deepfakes and primed are less likely to find the message credible than those exposed to the deepfake and not primed.

H5c: Participants exposed to deepfakes and primed are less likely to accept the message as true than those exposed to the deepfake and not primed.

I conceptualised each hypothesis in the model below (see Figure 1).


RESEARCH DESIGN, METHOD AND DATA

a. Design

The hypotheses were tested through an online survey experiment with a self-administered questionnaire. The experimental design was chosen to establish causality between deepfakes and message credibility while excluding alternative factors and controlling for a priming effect. Data were collected from April 29th, 2020 to May 25th, 2020. The participants were randomly assigned to four groups: three groups with a treatment and one control group (see Figure 2). The 2 (priming) x 2 (deepfake) factorial design makes it possible to isolate the effect of each stimulus as well as the combined effect of both stimuli.
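As an illustration of the 2 x 2 structure, the sketch below shows how participants could be assigned to the four conditions. It is written in Python with hypothetical participant IDs; in the actual study, the assignment was handled by the Qualtrics randomisation function (see Treatment).

    import random

    # The four cells of the 2 (priming: yes/no) x 2 (deepfake: yes/no) design.
    # Group numbering follows the description under Treatment.
    CONDITIONS = [
        {"group": 1, "primed": True,  "deepfake": False},
        {"group": 2, "primed": True,  "deepfake": True},
        {"group": 3, "primed": False, "deepfake": False},  # neither stimulus (control)
        {"group": 4, "primed": False, "deepfake": True},
    ]

    def assign(participant_ids, seed=2020):
        """Assign participants to the four conditions in (approximately) equal numbers."""
        rng = random.Random(seed)
        cells = [CONDITIONS[i % 4] for i in range(len(participant_ids))]
        rng.shuffle(cells)
        return dict(zip(participant_ids, cells))

    assignments = assign(range(399))  # hypothetical IDs 0..398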

Each participant was given access to an experimental block consisting of an image, completely fabricated for the purpose of this research, representing a fictive online news website named “The Observer News”. The page represented the “imposter content” form of disinformation (Wardle & Derakhshan, 2017), in which disinformation poses as an official source, including for instance logos, names and formatting aiming to make the information credible. This type of disinformation was chosen to be replicated in this research with a stimulus created specifically to appear as a legitimate, credible and reliable source of information, with the aim of persuading through a false story that appears to be real news.

“The Observer News” webpage was presented to the participants as a screenshot with the headline “Former President Obama controversial statement: ‘President Trump is a total and complete dipshit’” (see Appendix C). Its design was inspired by the online version of the newspaper The New York Times, in order to give the webpage credibility and a professional feel and thereby increase the ecological validity of the experiment. This was done to maximise the deceptive effect of the screenshot, making it appear as legitimate and real to the viewer as possible. On the right-hand side of the page, a column referring to two other articles was used as the placement for the priming effect. The top article space was used for the priming caption (see Treatment) while the other referred to the 2020 US elections. This position seemed like the best place to put the caption used to prime participants, as it was easily interchangeable with a non-priming caption for the control group. The second article caption was chosen to place the article in the context of imminent presidential elections.

b. Sample

The participants were selected through social media (Facebook, Instagram, WhatsApp and political science threads on Reddit) and participated in the experiment through a self-administered online survey. In order to reduce non-desired priming effects, participants were deceived by being told that they were completing a survey on political news habits.


The total number of participants recruited was N = 604. Of this number, some did not complete the survey, while others chose not to save their answers; these were removed, which brought the sample to N = 421. In order to verify that people had read and understood the questions, two attention check questions were added regarding the content of the disinformation treatment. Respondents who failed to answer both questions correctly were removed from the sample, as were respondents who indicated being under the age of 18, which brought the final sample to N = 399.

The final sample included participants with ages ranging from 18 to 67 (M = 26.4), coming from 43 different countries. The United States is the most represented country with 171 participants (42.9%), followed by the Netherlands with 62 participants (15.5%). Participants indicated various educational levels, with 21.1% indicating that they had completed “some college”, 40.4% that they had a Bachelor's degree, and 23.1% that they had a Master's degree. The sample is predominantly male, with 65.2% male respondents and 33.3% female.

For the questions regarding news habits, created following Prochazka & Schweiger's (2019) scale for trust in news media, 73.3% of the respondents indicated following the news through online news websites and 16.5% through social media, while traditional news media (newspaper, television and radio) accounted for a cumulative 8.8% of the respondents. The sample indicated generally trusting news accessed through online news websites: on a scale of 1 (distrust completely) to 7 (completely trust), 66.2% of participants placed themselves above the midpoint (M = 4.65, SD = 1.35), while indicating much more distrust for news accessed through social media, with 77.4% placing themselves below the midpoint (M = 2.77, SD = 1.22) (see Appendix D - Tables 1 & 2).


c. Treatment

The deepfake treatment consisted of adding a 4-second deepfake video of Obama saying: “President Trump is a total and complete dipshit”. This deepfake video was created by BuzzFeed in collaboration with American actor Jordan Peele in April 2018 and was shared massively on social media (Mack, 2018). This existing and arguably well-known deepfake video was chosen despite the fact that it may have been seen before by the participants. After a review of multiple deepfakes that could be used for this experiment, as well as the possibility of creating a deepfake for the sole purpose of this experiment, it was decided that the BuzzFeed deepfake was arguably one of the most convincing deepfakes available. In addition, it fulfilled the key requirement of featuring a recognisable politician as well as offering the statement “President Trump is a total and complete dipshit”, which in itself has great potential to be assumed to be disinformation.

The priming treatment was included in the fake webpage of “The Observer News” through the caption: “Deepfake technology makes fake information harder to detect, say experts.”, placed in the right-hand side column, as if referring to another article. The participants who were not primed read the caption: “Tiny digital businesses play key role in local economies, study says” (see Appendix C). This second caption was chosen because it is a neutral statement that would not interfere with the experiment while being a plausible caption that could appear on a news website.

Participants were randomly assigned to the different conditions of the experiment. The randomisation function on Qualtrics equally divided the participants into the four conditions, with about 25% of the participants in each condition (N = 399). Groups 1 and 2 (49.8% of the participants) were primed. Groups 2 and 4 (49.5% of the participants) were exposed to the deepfake. This means that groups 3 and 4 were not primed and groups 1 and 3 did not have access to the deepfake.


d. Measurement

i. Message Credibility and Perceived Validity

1. Perceived Validity

After being exposed to the experimental block, participants answered questions about their opinion on the fake news headline. The first question asked participants about the perceived validity of the claim made by “The Observer News”, to establish whether or not they thought the claim was true, testing for the concept of Perceived Validity (Wierzbicki, 2018). The item was measured on a 5-point scale (1 = “definitely true”, 5 = “definitely false”) (M = 4.14, SD = 1.03) (see Appendix D - Chart 3).

2. Message Credibility

Following Appelman & Sundar's (2016) scale for message credibility, participants were asked to indicate how well the following adjectives describe the content they saw, on a 5-point scale (1 = “extremely well”, 5 = “not well at all”): accurate (M = 4.36, SD = 1.04), authentic (M = 4.38, SD = 1.04), believable (M = 4.28, SD = 1.01). To combine these three items into one factor, a principal axis factor analysis with oblique rotation was conducted on the three items. The Kaiser-Meyer-Olkin measure of sampling adequacy was above the minimum criterion of 0.5 (KMO = .67), and Bartlett's test of sphericity showed the correlations between variables to be significantly different from zero, χ²(3) = 348.93, p < .001. Based on Kaiser's criterion (and the scree plot), I retained one extracted factor, which had an eigenvalue of 2.09 and explained more than half of the variance (69%) in the individual items. As all items had factor loadings well above .4, all items were retained as measures of “message credibility”. Together, these three items form a reliable scale (Cronbach's α = .78). A new variable consisting of the mean scores of the items was computed into a scale named “message credibility” (1 = “extremely credible”, 5 = “not credible at all”) (see Appendix D - Chart 4).
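As an illustration of this scale-construction step, the sketch below reproduces the checks in Python using the third-party factor_analyzer package; the thesis analysis itself was run in SPSS, and the column names ("accurate", "authentic", "believable") and the file "responses.csv" are hypothetical.

    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha for a set of items (rows = respondents, columns = items)."""
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var / total_var)

    df = pd.read_csv("responses.csv")                 # hypothetical data file
    items = df[["accurate", "authentic", "believable"]]

    chi2, p = calculate_bartlett_sphericity(items)    # Bartlett's test of sphericity
    _, kmo_total = calculate_kmo(items)               # overall KMO sampling adequacy

    fa = FactorAnalyzer(n_factors=1, method="principal", rotation="oblimin")
    fa.fit(items)
    eigenvalues, _ = fa.get_eigenvalues()             # Kaiser's criterion: eigenvalue > 1
    loadings = fa.loadings_                           # retain items loading above .4

    alpha = cronbach_alpha(items)                     # internal consistency
    df["message_credibility"] = items.mean(axis=1)    # mean-score scale, 1-5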

3. Source Credibility

The same process was applied to the three questions related to trust in “The Observer News”. After reverse-coding the second item (“likely to misrepresent reality”) to follow the direction of the two other items (“credible”; “trustworthy”), a factor analysis was conducted, with the three items loading onto one factor (eigenvalue = 2.4; Cronbach's α = .86). A new scale variable named “source credibility” (M = 5.37; SD = 1.32) was computed, ranging from most credible to least credible (1 = “strongly agree”; 7 = “strongly disagree”) (see Appendix D - Chart 5). The operationalisation of source credibility is not fully based on the literature, which usually recommends the indicators “objective”, “trustworthy” and “biased/unbiased” (Karlsson, Clerwall & Nord, 2014). However, in this research the source assessed is fictive, meaning that the participants could not base their answer on anything other than the feel and look of the page. It therefore seemed better to use the item “credible” instead of “biased/unbiased”. Furthermore, the phrasing “likely to misrepresent reality” was chosen instead of “objective” as it seemed clearer and could therefore avoid misinterpretation of the question.
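The reverse-coding step can be sketched as follows, continuing the hypothetical DataFrame from the previous example (the item column names are again assumptions): on a 7-point scale, a response x becomes 8 - x, after which the scale is built exactly as above.

    # Reverse-code the 7-point item so that all three items run in the same direction.
    df["misrepresent_rev"] = 8 - df["misrepresent"]            # 1 <-> 7, 2 <-> 6, ...

    source_items = df[["source_credible", "source_trustworthy", "misrepresent_rev"]]
    alpha_source = cronbach_alpha(source_items)                # reliability check, as above
    df["source_credibility"] = source_items.mean(axis=1)       # 7-point mean-score scale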

4. Acceptance of information through video content

To the best of my knowledge, there is no existing scale able to measure this item, as the concept is very particular to the topic of disinformation through video content. The technology acceptance model (Davis, 1989) was used as a base to create this measurement scale, as it explains individuals' behaviour concerning their engagement with technology through the items 'perceived usefulness' and 'intention of use' (Ju & Albertson, 2018), but the measurement had to be adapted to fit this research. The concept was measured through two questions in which participants indicated on a 7-point scale (1 = “strongly agree”; 7 = “strongly disagree”), first, whether they considered the video (deepfake) to be proof of the claim made in the article and, second, whether they would only believe the claim by seeing the video. The two questions were phrased this way in order to allow every participant to respond regardless of their exposure to the video treatment.

Again, a factor analysis was applied, with the two items loading onto one factor (eigenvalue = 1.62; Cronbach's α = .76; M = 4.14; SD = 1.79) (see Appendix D - Chart 6), and a new variable, “acceptance of information”, was computed.

ii. Manipulation Check

The last question of the survey acted as a manipulation check. In that question, participants were asked to indicate whether they believed the video of Barack Obama to be real or fake. The intended manipulation was that participants who had been exposed to the deepfake would believe the video to be real. The manipulation check reveals that 67.2% of the experimental group indicated that the video was “definitely fake” and another 21.9% “probably fake”, bringing the cumulative share of participants indicating the video was fake to 89.1%.

In order to get more insight, an additional open question was added for participants who believed the video to be fake, asking them to indicate why. This question is not part of the manipulation check per se but helps to provide context to participants' responses. The answers revealed that only a minority of the respondents categorised the video as probably or definitely fake because they knew it was a deepfake. The majority of the respondents instead based their judgement on the fact that they did not believe that Barack Obama would say such a vulgar sentence to a camera. They based this opinion on their knowledge of the former president's intelligence and formal manners.

RESULTS

a. Effect of the deepfake on credibility (H1) and effect of the deepfake on validity (H2)

The first hypothesis (H1) predicts a positive relationship between exposure to deepfakes and the message credibility of disinformation. The second hypothesis (H2) predicts a similar relationship for the perceived validity of disinformation. These hypotheses were tested through independent samples t-tests, as I am testing the differences in mean scores between two groups, one exposed to the deepfake and one not. For the first hypothesis (H1), Levene's F-test (for equality of variances) is statistically significant with a p-value of .012, which means we cannot assume equal variances between the experimental and control groups. Those exposed to the deepfake (M = 4.44, SD = .80) (see Appendix A - Table 1) are on average less likely to find the message credible than those not exposed to the deepfake (M = 4.22, SD = .89). The mean difference (of –0.22) is statistically significant, t(394) = –2.62, p = .009, 95% CI [–.39, –.05], and represents a small effect, d = 0.26.

For the second hypothesis (H2), Levene's F-test is not significant, with a p-value of .478 well above .05, so the assumption of homoscedasticity is met. The t-test reveals that participants in the group exposed to the deepfake (M = 4.43, SD = .94) (see Appendix A - Table 3) are on average more likely to consider the message untrue than those not exposed to the deepfake (M = 3.86, SD = 1.04). The mean difference is statistically significant, t(397) = –5.75, p < .001, 95% CI [–.76, –.37] (see Appendix A - Table 4), and represents a moderate effect, d = 0.57.
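A sketch of this test sequence in Python with SciPy and NumPy is shown below (the original analysis was run in SPSS; df and message_credibility come from the earlier sketch, and the grouping column exposed_deepfake is another hypothetical name). Levene's test decides whether the standard or the Welch-corrected t-test is reported, and Cohen's d uses the pooled standard deviation.

    import numpy as np
    from scipy import stats

    def cohens_d(a, b):
        """Cohen's d based on the pooled standard deviation of two groups."""
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
        return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

    exposed = df.loc[df["exposed_deepfake"] == 1, "message_credibility"]
    control = df.loc[df["exposed_deepfake"] == 0, "message_credibility"]

    lev_stat, lev_p = stats.levene(control, exposed)                  # equality of variances
    t, p = stats.ttest_ind(control, exposed, equal_var=lev_p >= .05)  # Welch correction if Levene is significant
    d = cohens_d(control, exposed)                                    # effect size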


b. Effect of source credibility on overall credibility (H3)

The third hypothesis checks whether source credibility positively correlates with message credibility, expecting participants who find the disinformation source credible to be more likely to find the message credible than those who qualify the source as not credible. With Pearson's correlation coefficient, it is possible to assess the relationship between the two variables. The results indicate a significant and strong association between source credibility and message credibility (r(399) = .562, p < .001). Subsequently, a linear regression analysis was run. The model is statistically significant, F(1, 397) = 183.25, p < .001, with R² = .316, indicating that 31.6% of the variance in the dependent variable (message credibility) can be explained by the independent variable (source credibility). Source credibility positively predicts message credibility, b = .37. This effect is statistically significant, t = 13.54, p < .001, 95% CI [.31, .42] (see Appendix A - Table 5), and represents a strong effect, b* = .56. This result supports H3: the more credible the source is assumed to be, the more credible the message is presumed to be.
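The correlation and regression could be reproduced as sketched below, again in Python rather than SPSS and with the same hypothetical column names; statsmodels' OLS reports the F-statistic, R², the unstandardised b and its confidence interval described above.

    from scipy import stats
    import statsmodels.formula.api as smf

    r, r_p = stats.pearsonr(df["source_credibility"], df["message_credibility"])

    model = smf.ols("message_credibility ~ source_credibility", data=df).fit()
    print(model.fvalue, model.f_pvalue)   # overall model test
    print(model.rsquared)                 # proportion of variance explained
    print(model.params)                   # intercept and unstandardised slope b
    print(model.conf_int())               # 95% confidence intervals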

c. Acceptance of the information through video content (H4)

For the fourth hypothesis, I looked at the differences in mean scores between the experimental group and the control group in their responses on whether the video is proof of what the disinformation claims, expecting participants exposed to the deepfake to be more likely to consider the video proof of veracity than the group not exposed to the deepfake. For this analysis, an independent samples t-test was run. Levene's F-test (for equality of variances) is not statistically significant, with p = .365. Contrary to expectations, those exposed to the deepfake (M = 5.19, SD = 1.49) are on average less likely to consider the video to be proof of the veracity of a claim than those not exposed to the deepfake (M = 3.12, SD = 1.43) (see Appendix A - Table 6). The mean difference (of –2.07) is statistically significant, t(397) = –14.17, p < .001, 95% CI [–2.37, –1.79] (see Appendix A - Table 7), and represents a very strong effect, d = 1.42, which here means that more than 92% of the control group is below the mean of the experimental group.
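The "more than 92%" reading of d = 1.42 corresponds to Cohen's U3 under an assumption of normally distributed scores: the proportion of the control group below the experimental group's mean is the standard normal CDF evaluated at d, Φ(1.42) ≈ .92. A one-line check:

    from scipy.stats import norm
    u3 = norm.cdf(1.42)   # ≈ 0.922, i.e. about 92% of the control group lies below the experimental mean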

d. Priming, deepfake and moderation effect on credibility (H5, H5a, H5b, H5c)

Similar to the first two hypotheses, the fifth set of hypotheses (H5, H5a) looks at the effect of priming on message credibility and perceived validity, predicting a negative relationship between the independent and dependent variables. After meeting the requirement of homoscedasticity (p = .77), the effect of priming on message credibility (H5) turns out to be statistically non-significant, p = .877 (see Appendix A - Table 9). Accordingly, we cannot reject the null hypothesis that message credibility does not differ between those who have been primed (M = 4.33, SD = .86) and those who have not (M = 4.32, SD = .86) (see Appendix A - Table 8). Similar results are observed for the perceived validity of the message (H5a), where the t-test is also not statistically significant, p = .967 (see Appendix A - Table 11), and thus does not allow us to say that there is a difference in perceived validity between those who have been primed (M = 4.14, SD = 1.05) and those who have not (M = 4.14, SD = 1.01) (see Appendix A - Table 10). In other words, priming has no direct effect on the dependent variables.

The last hypotheses are concerned with the moderating effect of priming on the relationship between the independent variable, exposure to the deepfake, and the dependent variables, message credibility and perceived validity of the message. To test these hypotheses, two-way ANOVAs were run in SPSS. The four groups compared are roughly of equal size and all have more than 30 cases; the assumptions for running an ANOVA are therefore satisfied. For the effect of priming on the relationship between the deepfake and message credibility (H5b), the factorial analysis of variance reveals that no interaction effect of priming between exposure to the deepfake and message credibility could be demonstrated. Contrary to what was expected, when exposed to the deepfake, participants who had been primed (M = 4.41, SD = .81) found the message slightly more credible than participants who had not been primed (M = 4.46, SD = .79). In the control condition, conversely, message credibility is lower for those who had been primed (M = 4.25, SD = .90) compared to those who had not (M = 4.18, SD = .89) (see Appendix A - Table 12). Despite these differences between the groups, the mean scores on the 5-point scale (1 = “extremely credible”, 5 = “not credible at all”) indicate that the participants still generally found the message not credible. The model shows a significant but very weak main effect of exposure to the deepfake on message credibility, F(1, 395) = 6.83, p = .009, η² = .017, but no significant main effect of priming, F(1, 395) = 0.02, p = .878, η² < .001. Finally, the interaction effect of priming with exposure to the deepfake on message credibility is very weak and not statistically significant, F(1, 395) = 0.54, p = .463, η² = .001 (see Appendix A - Table 13).

The same analysis was run for the perceived validity of the message (H5c). Once again, the model shows a significant main effect of exposure to the deepfake on perceived validity, with a small effect size, F(1, 395) = 32.90, p < .001, η² = .07, but no significant main effect of priming, F(1, 395) = 0.01, p = .972, η² < .001. The interaction effect of priming with exposure to the deepfake on perceived validity is very weak and not statistically significant, F(1, 395) = 0.285, p = .594, η² = .001 (see Appendix A - Table 15). Although the analysis of variance is not statistically significant, the descriptive statistics confirm the results found for H5, showing that participants who were not exposed to the deepfake generally lean towards categorising the message as more true on the scale, whether not primed (M = 3.83, SD = 1.09) or primed (M = 3.88, SD = .99), while participants exposed to the deepfake generally perceive the message as more false, with almost no difference between those not primed (M = 4.45, SD = .91) and primed (M = 4.40, SD = .97) (see Appendix A - Table 14). Again, despite the differences, the mean scores indicate a general tendency to perceive the message as false.
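A sketch of the 2 x 2 factorial ANOVA in Python with statsmodels follows (the thesis used SPSS; "primed", "exposed_deepfake" and the outcome columns are the hypothetical names from the earlier sketches). Eta squared for each effect is computed as its sum of squares divided by the total sum of squares.

    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Main effects of priming and deepfake exposure plus their interaction.
    model = smf.ols("message_credibility ~ C(primed) * C(exposed_deepfake)", data=df).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)   # Type II sums of squares; SPSS defaults to Type III
    print(anova_table)                              # F and p for each effect

    ss_total = anova_table["sum_sq"].sum()
    eta_squared = anova_table.drop(index="Residual")["sum_sq"] / ss_total   # effect sizes (η²)

    # The same model with a hypothetical "perceived_validity" outcome covers H5c.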

LIMITATIONS

Before I address the conclusions of this research, it is important to mention the limitations that give context to the understanding of the results. Firstly, the manipulation check reveals that a large majority of the respondents were not deceived by the deepfake and confidently reported it as fake. This goes against much of the literature on deepfakes, which expects that most people are not able to distinguish deepfakes from real videos (Rössler et al., 2018). The video used in this research exists on various platforms on the Internet and, given its high view count, there is a chance that it was already known to the respondents, defeating the deceptive potential of the deepfake and thereby undermining the external validity of the experiment. In addition, the association of a well-spoken politician with vulgar language reduces the plausibility of the claim. Although this particular deepfake seemed much better than other deepfakes available on the Internet, in terms of its level of realism and the inclusion of sound, the choice of this particularly vulgar segment of the video could have contributed to its low deceptiveness. As a recommendation for further research, including a treatment designed and fitted for the purpose of the research could help improve the reliability of the findings. Secondly, the experiment was conducted through an online questionnaire, where participants had the possibility to fact-check the disinformation and the fictive online news website, which undermines the controlled environment that a lab-based experiment would provide. Finally, around 60% of the sample indicated having completed higher education. This is not only unrepresentative of the general population; education level is also a determinant of political knowledge (Grönlund & Milner, 2006), which could explain why the deepfake failed to deceive the participants.


CONCLUSION & DISCUSSION

In this study, I set out to determine whether disinformation becomes more credible through deepfakes. The results of this analysis are the opposite of what was expected at the beginning of the research. The first hypotheses (H1 and H2) expected that deepfakes would have a persuasive effect, making disinformation more credible and seemingly true. Yet the results disprove this assumption and reveal that the deepfake had the opposite effect, leading more of the respondents exposed to the deepfake to qualify the message as not credible and false, while those who were only exposed to the text-based disinformation were more hesitant in their answers. These results partly coincide with the findings of Vaccari & Chadwick (2020), who conducted similar research and concluded that deepfakes may not always deceive individuals but instead stimulate uncertainty. The analysis strongly shows that deepfakes do not automatically deceive, as the manipulation check shows that most respondents did not qualify the deepfake as real.

However, I would not agree with Vaccari & Chadwick (2020) that deepfakes create uncertainty. In this case, the respondents who had been exposed to the deepfake tended to lean more strongly towards the extremes of the scales asking about credibility and believability than those who had access to the text-based disinformation alone. This is confirmed by the analysis of the fourth hypothesis (H4), which indicates more uncertainty when the disinformation stands alone than when it is accompanied by a deepfake. In Aristotle's persuasion typology, the deepfake was supposed to act as logos, serving as a fact-check on the content of the disinformation. It could be argued that the deepfake indeed acted as such, but in this case individuals used logic, as well as their knowledge of the world, to make sense of the disinformation. Chadwick (2019), whose article contributes to explaining how digital media shape political opinion, theorises about how people might make sense of a deepfake. He points out the importance of taking into account prior beliefs, political knowledge, and even familiarity with the physical characteristics and mannerisms of the person who is the object of the deepfake. In that sense, he makes a point for other researchers not to underestimate expectations of rationality when analysing deepfakes and individual behaviour. The follow-up question to the manipulation check shows that this concept of “familiarity with the physical characteristics” of the person is the reason why people did not believe the video to be real, as most deemed that Barack Obama knows better than to utter a vulgar insult while facing a camera.

In addition, the effect of priming was not statistically supported by the analysis, showing that in this case a small caption about deepfakes had no impact on people's attitude towards the disinformation. Accordingly, there is also no substantial evidence that priming can moderate the acceptance of information through a deepfake. On the other hand, the insignificant effect of priming can be explained by the fact that most people were not deceived by the deepfake in the first place, making the priming information irrelevant. However, in the last question, in which participants could indicate why they believed the video to be fake, two participants who had not been exposed to the deepfake indicated having seen the priming caption as the reason why they believed the video to be fake. Although there is no statistical evidence for this, this small piece of information seems to be in accordance with Marquis' “awareness of persuasion” (2007, p. 191) mentioned previously. Instead of influencing people outside of their awareness, as argued by Molden (2014), this kind of priming effect could be seen as influencing people into leaving their state of uncertainty during decision-making. However, this can only be assumed, and further research is needed to confirm this theory.


Interestingly, yet not surprisingly, the results of the third hypothesis (H3) show that message credibility is highly influenced by source credibility. In other words, it seems that people tend to trust what looks professional and legitimate. This has implications for disinformation warfare, as it shows that an imitation of a trustworthy source could potentially share disinformation that would then be deemed credible. Knowing that this particular fake webpage was easily created with a widely available Adobe programme, the feasibility of creating such a deceptive fake webpage is high. The credibility factor in this research was thus not the deepfake, as expected, but the source. In terms of the “realism heuristic” (Frenda et al., 2013, as cited in Vaccari & Chadwick, 2020), it can still be argued that visual cues can confer credibility (Graber, 1990), but in this case credibility came through the design of the fabricated news page used to deliver the disinformation. Accordingly, it could be interesting to add an extra source credibility dimension to research like this by comparing the credibility of different fabricated news webpages that differ in their professional design and perceived legitimacy.

It is important to note, however, that high source credibility does not automatically produce high message credibility, as seen in this research. This can be linked to the findings of Clark & Evans (2014), whose research on source credibility and persuasion mentions the importance of looking at pro-attitudinal and counter-attitudinal messages in persuasion research. According to their research, when a message is counter-attitudinal, recipients might be more inclined to defend their existing views when the source is credible, and in turn be less inclined to qualify the source as credible. Based on the responses to the follow-up question of the manipulation check, it can be concluded that the disinformation presented is counter-attitudinal, clashing with the existing beliefs of the participants, which explains the strong level of resistance to the message.


In the deepfake literature, emphasis has been put on deepfake detection at the system level. New detection technology is being developed to contribute to the accurate detection of deepfakes, for instance through the work of FaceForensics++, which produced a massive public dataset of manipulated images to be used in research (Rössler et al., 2019). While I agree that detecting deepfakes at the source of the spread can only reduce the harm of their fraudulent content, it is important to understand what happens at the micro level and how deepfakes influence people's perception of information. What is clear from this research is that although the disinformation and the deepfake presented to the participants created at least some uncertainty about what is true or not, the results carry a more optimistic societal perspective than most of the literature on deepfakes, showing that people are not deceived as easily as it seems. However, even if the deepfake used here was not able to achieve its disinformation purpose, deepfakes' potential consequences for the truth, for trust in the media and for democracy should not be disregarded as the technology keeps evolving. There is still a lot of research to be carried out to contribute to the literature on deepfakes, as their novelty allows for many theoretical assumptions.

In conclusion, if I had to give an answer to Danielle Citron's quote, which inspired this research, I would tell her that "no, truth may not be damned". Although deepfake technology appears to be a powerful tool to undermine the truth, acceptance of information is constructed through a layering of factors, appealing to people's logic and cognitive processes, and deepfakes alone do not hold the power to make disinformation credible.


REFERENCES

Appelman, A., & Sundar, S. (2016). Measuring message credibility: Construction and validation of an exclusive scale. Journalism & Mass Communication Quarterly, 93(1), 59–79. https://doi.org/10.1177/1077699015606057

Balmas, M. (2014). When fake news becomes real: Combined exposure to multiple news sources and political attitudes of inefficacy, alienation, and cynicism. Communication Research, 41(3), 430–454. https://doi.org/10.1177/0093650212453600

Barthel, M., Mitchell, A., & Holcomb, J. (2016). Many Americans believe fake news is sowing confusion. Pew Research Center, 05.12.2016, verified 17.06.2018: http://www.journalism.org/2016/12/15/many-americans-believe-fake-news-is-sowing-confusion/

Beldad, A., de Jong, M., & Steehouder, M. (2010). How shall I trust the faceless and the intangible? A literature review on the antecedents of online trust. Computers in Human Behavior, 26(5), 857–869. https://doi.org/10.1016/j.chb.2010.03.013

Brosius, H., Donsbach, W., & Birk, M. (1996). How do text-picture relations affect the informational effectiveness of television newscasts? Journal of Broadcasting & Electronic Media, 40(2), 180–195. https://doi.org/10.1080/08838159609364343

Chadwick, A. (2019). The new crisis of public communication: Challenges and opportunities for future research on digital media and politics (Version 1). Loughborough University. https://hdl.handle.net/2134/11378748.v1

Chesney, R., & Citron, D. (2019). Deepfakes and the new disinformation war: The coming age of post-truth geopolitics. Foreign Affairs, 98(1), 147–155.

Clark, J. K., & Evans, A. T. (2014). Source credibility and persuasion: The role of message position in self-validation. Personality and Social Psychology Bulletin, 40(8), 1024–1036. https://doi.org/10.1177/0146167214534733

Cobb, J. (2020). The death of George Floyd, in context. Retrieved 17 June 2020, from https://www.newyorker.com/news/daily-comment/the-death-of-george-floyd-in-context

Copeland, D., Gunawan, K., & Bies-Hernandez, N. (2011). Source credibility and syllogistic reasoning. Memory & Cognition, 39(1), 117–127. https://doi.org/10.3758/s13421-010-0029-0

Davis, F. (1989). Perceived usefulness, perceived ease of use and end user acceptance of information technology. MIS Quarterly, 13(3), 319–340.

Donsbach, W. (1991). Exposure to political content in newspapers: The impact of cognitive dissonance on readers' selectivity. European Journal of Communication, 6(2), 155–186. https://doi.org/10.1177/0267323191006002003

Donsbach, W., Brosius, H., & Mattenklott, A. (1993). How unique is the perspective of television? A field experiment on the perception of a campaign event by participants and television viewers. Political Communication, 10(1), 37–53. https://doi.org/10.1080/10584609.1993.9962962

Ernst, N., Kühne, R., & Wirth, W. (2017). Effects of message repetition and negativity on credibility judgments and political attitudes. International Journal of Communication, 11, 21.

Gualda, E., & Rúas, J. (2019). Conspiracy theories, credibility and trust in information. Communication & Society, 32(1), 179–193. https://doi.org/10.15581/003.32.1.179-193

Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.

Foy, J. E., LoCasto, P. C., Briner, S. W., et al. (2017). "Would a madman have been so wise as this?" The effects of source credibility and message credibility on validation. Memory & Cognition, 45, 281–295. https://doi.org/10.3758/s13421-016-0656-1

Gallo, C. (2019). The art of persuasion hasn't changed in 2,000 years. Retrieved 19 June 2020, from https://hbr.org/2019/07/the-art-of-persuasion-hasnt-changed-in-2000-years

Graber, D. A. (1990). Seeing is remembering: How visuals contribute to learning from television news. Journal of Communication, 40, 134–156. https://doi.org/10.1111/j.1460-2466.1990.tb02275.x

(31)

Grönlund, K. and Milner, H. (2006), The Determinants of Political Knowledge in Comparative Perspective. Scandinavian Political Studies, 29, 386-406. doi:10.1111/j.1467-9477.2006.00157.x

Harris, Douglas. (2019). Deepfakes: False Pornography Is Here and the Law Cannot Protect You. Duke Law and Technology Review, 99.

Hendricks V.F., Vestergaard M. (2019) Alternative Facts, Misinformation, and Fake News. Reality Lost. Springer, Cham Ju, B., & Albertson, D. (2018). Exploring Factors Influencing Acceptance and Use of Video Digital Libraries.

Information Research, 23(2), 35.

Karlsson, M., Clerwall, C., & Nord, L. (2014). You Ain’t Seen Nothing Yet. Journalism Studies, 15(5), 668–678. https:// doi-org.proxy.uba.uva.nl:2443/10.1080/1461670X.2014.886837

Knobloch-Westerwick, S., & Kleinman, S. (2012). Preelection Selective Exposure: Confirmation Bias Versus Informational Utility. Communication Research, 39(2), 170–193. https://doi.org/10.1177/0093650211400597 Knobloch-Westerwick, S., Mothes, C., & Polavin, N. (2020). Confirmation Bias, Ingroup Bias, and Negativity Bias in

Selective Exposure to Political Information. Communication Research, 47(1), 104–124. https://doi.org/ 10.1177/0093650217719596

Lavrakas, P. J. (2008). Encyclopedia of survey research methods (Vols. 1-0). Thousand Oaks, CA: Sage Publications, Inc. doi: 10.4135/9781412963947

Levine, T. R. (2014). Encyclopedia of Deception. SAGE Publications, Inc.

Mack, D. (2018). This PSA About Fake News From Barack Obama Is Not What It Appears. Retrieved 22 June 2020, from https://www.buzzfeednews.com/article/davidmack/obama-fake-news-jordan-peele-psa-video-buzzfeed

Marquis, L. (2007). Moderators of Priming Effects: A Theory and Preliminary Evidence from an Experiment on Swiss European Policy. International Political Science Review, 28(2), 185–224. https://doi.org/10.1177/0192512107075405

McGuire, W., & Papageorgis, D. (1962). Effectiveness of Forewarning in Developing Resistance to Persuasion. The Public Opinion Quarterly, 26(1), 24–34. Retrieved June 20, 2020, from www.jstor.org/stable/2747076

Miller, A., & Kurpius, D. (2010). A Citizen-Eye View of Television News Source Credibility. American Behavioral Scientist, 54(2), 137–156. https://doi.org/10.1177/0002764210376315

Molden, D. (2014). Understanding Priming Effects in Social Psychology: What is “Social Priming” and How does it Occur? Social Cognition, 32(Supplement), 1–11. https://doi.org/10.1521/soco.2014.32.supp.1

Diakopoulos, N., & Essa, I. (2010). Modulating video credibility via visualization of quality evaluations. In Proceedings of the 4th Workshop on Information Credibility (WICOW ’10) (pp. 75–82). Association for Computing Machinery. https://doi.org/10.1145/1772938.1772953

Pascu, M., & Coman, C. (2013). Mass-Media and Political Communication Credibility. Bulletin of the Transilvania University of Braşov, Series VII: Social Sciences and Law, 2, 287–290.

Payne, G., & Dozier, D. (2013). Readers’ View of Credibility Similar for Online, Print. Newspaper Research Journal, 34(4), 54–67. https://doi.org/10.1177/073953291303400405

Petersen, M. B., Osmundsen, M., & Arceneaux, K. (2018, September 1). A “need for chaos” and the sharing of hostile political rumours in advanced democracies. PsyArXiv Preprints. https://psyarxiv.com/6m4ts/

Piazza, R., & Haarman, L. (2016). A pragmatic cognitive model for the interpretation of verbal–visual communication in television news programmes. Visual Communication, 15(4), 461–486. https://doi.org/10.1177/1470357215621688

Pitt, J. (2019). Deepfake Videos and DDoS Attacks (Deliberate Denial of Satire) [Editorial]. IEEE Technology and Society Magazine, 38(4), 5–8. https://doi.org/10.1109/MTS.2019.2948416


Prochazka, F., & Schweiger, W. (2019). How to Measure Generalized Trust in News Media? An Adaptation and Test of Scales. Communication Methods and Measures, 13(1), 26–42. https://doi.org/10.1080/19312458.2018.1506021

Rapp, C. (2010). Aristotle’s Rhetoric. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2010 ed.).

Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., & Nießner, M. (2019). FaceForensics++: Learning to Detect Manipulated Facial Images.

Salwen, M., Garrison, B., & Driscoll, P. (2004). Online News and the Public. Hoboken: Taylor and Francis.

Thompson, S. (1952). Credibility and Truth. The Journal of Religion, 32(4), 272–275. https://doi.org/10.1086/484333

Törnberg, P. (2018). Echo chambers and viral misinformation: Modeling fake news as complex contagion. PLOS ONE, 13(9), e0203958.

Vaccari, C., & Chadwick, A. (2020). Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. Social Media + Society, 6(1). https://doi.org/10.1177/2056305120903408

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559

Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an interdisciplinary framework for research and policy making (pp. 20–49). Strasbourg: Council of Europe. Retrieved from https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c

Watson, P., Wiers, R., Hommel, B., & Wit, S. (2018). Motivational sensitivity of outcome-response priming: Experimental research and theoretical models. Psychonomic Bulletin & Review, 25(6), 2069–2082. https://doi.org/10.3758/s13423-018-1449-2

Weidner, K., Beuk, F., & Bal, A. (2019). Fake news and the willingness to share: a schemer schema and confirmatory bias perspective. Journal of Product & Brand Management, 29(2), 180–187. https://doi.org/10.1108/JPBM-12-2018-2155

Westerlund, M. (2019). The Emergence of Deepfake Technology: A Review. Technology Innovation Management Review, 9(11), 40–53. https://doi.org/10.22215/timreview/1282

Wierzbicki, A. (2018). Understanding and Measuring Credibility. In Web Content Credibility. Cham: Springer. https://doi.org/10.1007/978-3-319-77794-8_2

Wu, M., Huang, Y., Li, R., Bortree, D., Yang, F., Xiao, A., Wang, R., Wojdynski, B., & Golan, G. (2016). A Tale of Two Sources in Native Advertising: Examining the Effects of Source Credibility and Priming on Content, Organizations, and Media Evaluations. American Behavioral Scientist, 60(12), 1492–1509. https://doi.org/10.1177/0002764216660139

Zeng, C., & Olivera-Cintrón, R. (2019). Preparing for the World of a "Perfect" Deepfake (pp. 1-30).

Ștefăniță, O., Corbu, N., & Buturoiu, R. (2018). Fake News and the Third-Person Effect: They are More Influenced than Me and You. Journal of Media Research, 11, 5–23. https://doi.org/10.24193/jmr.32.1


APPENDIX A: RESULTS TABLES

Table 1 - Attitudes towards message credibility (H1)

Table 2 - T-test results: deepfake on message credibility (H1)

Table 3 - Attitudes towards perceived validity (H2)


Table 6 - Attitudes towards acceptance of information through video content (H4)


Table 8 - Attitudes towards message credibility, primed groups (H5)

Table 9 - Results of the T-test: Priming on message credibility (H5)

Table 10 - Attitudes towards perceived validity, primed groups (H5a)


Table 12 - Descriptive statistics of priming onto the relationship between deepfakes and message credibility (H5b)

Table 13 - Results of the two-way ANOVA: priming onto the relationship between deepfakes and message credibility (H5b)


Table 14 - Descriptive statistics of priming onto the relationship between deepfakes and perceived validity (H5b)

Table 15 - Results of two-way ANOVA: priming onto the relationship between deepfakes and perceived validity (H5c)


APPENDIX B: SPSS SYNTAX

* Encoding: UTF-8.

* Cleaning data.

* Selecting cases: only completed responses.
DATASET ACTIVATE DataSet1.
USE ALL.
COMPUTE filter_$=(Q28=4).
VARIABLE LABELS filter_$ 'Q28=4 (FILTER)'.
VALUE LABELS filter_$ 0 'Not Selected' 1 'Selected'.
FORMATS filter_$ (f1.0).
FILTER BY filter_$.
EXECUTE.

* Filtering out check question.
DATASET ACTIVATE NewSample1.
DATASET COPY Newsample.
DATASET ACTIVATE Newsample.
FILTER OFF.
USE ALL.
SELECT IF (Q14 = 1 AND Q15 = 2).
EXECUTE.
DATASET ACTIVATE NewSample1.

* Recoding string variables.
AUTORECODE VARIABLES=Q10 /INTO EducationRecoded /PRINT.
AUTORECODE VARIABLES=Q32_1 /INTO OnlineTrust_REC /PRINT.
AUTORECODE VARIABLES=Q38_1 /INTO ValidityClaim_REC /PRINT.
AUTORECODE VARIABLES=Q39_1 /INTO Q391_REC /PRINT.
AUTORECODE VARIABLES=Q39_2 /INTO Q39_2_REC /PRINT.
AUTORECODE VARIABLES=Q39_3 /INTO Q39_3_REC /PRINT.
AUTORECODE VARIABLES=Q42_1 /INTO Q42VideoFake_REC /PRINT.

* Recode conditions and compute new variable.
RECODE Q35 (1=2) INTO Q35_Rec.
VARIABLE LABELS Q35_Rec 'News article + video / Primed'.
EXECUTE.
RECODE Q36 (1=3) INTO Q36_REC.
VARIABLE LABELS Q36_REC 'News article / Not-primed'.
EXECUTE.
RECODE Q37 (1=4) INTO Q37_Rec.
VARIABLE LABELS Q37_Rec 'News article + video / Not-primed'.
EXECUTE.
COMPUTE Conditions=SUM(Q34,Q35_Rec,Q36_REC,Q37_Rec).
EXECUTE.

* Descriptive statistics: conditions.
FREQUENCIES VARIABLES=Conditions /NTILES=4 /STATISTICS=STDDEV MODE SUM /BARCHART PERCENT /ORDER=ANALYSIS.

* Sample age.
FREQUENCIES VARIABLES=Q4.0 /STATISTICS=STDDEV MEAN /ORDER=ANALYSIS.

* Filtering out minors.
USE ALL.
COMPUTE filter_$=(Q4.0 >= 18).
VARIABLE LABELS filter_$ 'Q4.0 >= 18 (FILTER)'.
VALUE LABELS filter_$ 0 'Not Selected' 1 'Selected'.
FORMATS filter_$ (f1.0).
FILTER BY filter_$.
EXECUTE.

* Descriptive statistics: sample.
DESCRIPTIVES VARIABLES=Q4.0 /STATISTICS=MEAN STDDEV MIN MAX.
FREQUENCIES VARIABLES=Q8 /STATISTICS=STDDEV MODE /PIECHART PERCENT /ORDER=ANALYSIS.
FREQUENCIES VARIABLES=EducationRecoded /STATISTICS=STDDEV MODE /BARCHART PERCENT /ORDER=ANALYSIS.
FREQUENCIES VARIABLES=Q12 /STATISTICS=STDDEV MODE /BARCHART PERCENT /ORDER=ANALYSIS.
FREQUENCIES VARIABLES=Q16 /STATISTICS=STDDEV MODE /BARCHART PERCENT /ORDER=ANALYSIS.
DESCRIPTIVES VARIABLES=Q20 /STATISTICS=MEAN STDDEV.
FREQUENCIES VARIABLES=Q20 /STATISTICS=STDDEV MODE /BARCHART PERCENT /ORDER=ANALYSIS.
AUTORECODE VARIABLES=Q32_2 /INTO OnlinetrustSoc /PRINT.
COMPUTE OnlineTrustSum=SUM(OnlinetrustSoc,OnlineTrust_REC).
EXECUTE.
FREQUENCIES VARIABLES=OnlineTrustSum /STATISTICS=STDDEV MEAN MODE /BARCHART PERCENT /ORDER=ANALYSIS.
DESCRIPTIVES VARIABLES=OnlinetrustSoc /STATISTICS=MEAN STDDEV.
FREQUENCIES VARIABLES=OnlinetrustSoc OnlineTrust_REC /STATISTICS=STDDEV MEAN MODE /BARCHART PERCENT /ORDER=ANALYSIS.

* Description of measurements.
FREQUENCIES VARIABLES=ValidityClaim_REC /STATISTICS=STDDEV MEAN MODE /PIECHART PERCENT /ORDER=ANALYSIS.
FREQUENCIES VARIABLES=Q391_REC Q39_2_REC Q39_3_REC /STATISTICS=STDDEV MEAN MODE /BARCHART PERCENT /ORDER=ANALYSIS.

* Factor analysis: credibility.
FACTOR
  /VARIABLES Q391_REC Q39_2_REC Q39_3_REC
  /MISSING LISTWISE
  /ANALYSIS Q391_REC Q39_2_REC Q39_3_REC
  /PRINT UNIVARIATE INITIAL KMO EXTRACTION ROTATION
  /PLOT EIGEN
  /CRITERIA MINEIGEN(1) ITERATE(25)
  /EXTRACTION PC
  /CRITERIA ITERATE(25) DELTA(0)
  /ROTATION OBLIMIN
  /METHOD=CORRELATION.
RELIABILITY
  /VARIABLES=Q391_REC Q39_2_REC Q39_3_REC
  /SCALE('ALL VARIABLES') ALL
  /MODEL=ALPHA
  /STATISTICS=DESCRIPTIVE SCALE CORR
  /SUMMARY=TOTAL.

* Create new factor: message credibility.
COMPUTE MessageCredibility=(Q391_REC + Q39_2_REC + Q39_3_REC) / 3.
EXECUTE.

* Recoding misrepresent reality question.
RECODE Q37_2 (1=7) (2=6) (3=5) (4=4) (5=3) (6=2) (7=1) INTO RepReality_REC.
VARIABLE LABELS RepReality_REC 'likely to represent reality'.
EXECUTE.

* Factor analysis: source credibility.
FACTOR
  /VARIABLES RepReality_REC Q37_1 Q37_3
  /MISSING LISTWISE
  /ANALYSIS RepReality_REC Q37_1 Q37_3
  /PRINT UNIVARIATE INITIAL KMO EXTRACTION ROTATION
  /PLOT EIGEN
  /CRITERIA MINEIGEN(1) ITERATE(25)
  /EXTRACTION PC
  /CRITERIA ITERATE(25) DELTA(0)
  /ROTATION OBLIMIN
  /METHOD=CORRELATION.
RELIABILITY
  /VARIABLES=RepReality_REC Q37_1 Q37_3
  /SCALE('ALL VARIABLES') ALL
  /MODEL=ALPHA
  /STATISTICS=DESCRIPTIVE SCALE CORR
  /SUMMARY=TOTAL.
COMPUTE SourceCredibility=(Q37_1 + Q37_3 + RepReality_REC) / 3.
EXECUTE.
FREQUENCIES VARIABLES=Q42_1 /STATISTICS=STDDEV MEAN MODE /ORDER=ANALYSIS.

FACTOR
  /VARIABLES Q41_1 Q41_2
  /MISSING LISTWISE
  /ANALYSIS Q41_1 Q41_2
  /PRINT UNIVARIATE INITIAL KMO EXTRACTION ROTATION
  /PLOT EIGEN
  /CRITERIA MINEIGEN(1) ITERATE(25)
  /EXTRACTION PC
  /CRITERIA ITERATE(25) DELTA(0)
  /ROTATION OBLIMIN
  /METHOD=CORRELATION.
RELIABILITY
  /VARIABLES=Q41_1 Q41_2
  /SCALE('ALL VARIABLES') ALL
  /MODEL=ALPHA
  /STATISTICS=DESCRIPTIVE SCALE CORR
  /SUMMARY=TOTAL.
COMPUTE VideoAcceptanceInfo=(Q41_1 + Q41_2) / 2.
EXECUTE.
FREQUENCIES VARIABLES=VideoAcceptanceInfo /STATISTICS=STDDEV MEAN MODE /ORDER=ANALYSIS.
FREQUENCIES VARIABLES=Conditions /STATISTICS=STDDEV MEAN MODE /ORDER=ANALYSIS.

* Creating two groups for t-tests: exposed / not exposed to deepfake.
RECODE Conditions (1=0) (2=1) (3=0) (4=1) INTO Condition_Grouped_Deepfake.
VARIABLE LABELS Condition_Grouped_Deepfake '0=no deepfake 1=deepfaked'.
EXECUTE.

* T-test for H1.
T-TEST GROUPS=Condition_Grouped_Deepfake(0 1) /MISSING=ANALYSIS /VARIABLES=MessageCredibility /CRITERIA=CI(.95).

* Testing H2.
T-TEST GROUPS=Condition_Grouped_Deepfake(0 1) /MISSING=ANALYSIS /VARIABLES=ValidityClaim_REC /CRITERIA=CI(.95).

* Creating two groups for t-tests: primed / not primed.
RECODE Conditions (3=0) (4=0) (1=1) (2=1) INTO Condition_Grouped_Primed.
VARIABLE LABELS Condition_Grouped_Primed '0=not primed; 1=primed'.
EXECUTE.

* T-test for H5.
T-TEST GROUPS=Condition_Grouped_Primed(0 1) /MISSING=ANALYSIS /VARIABLES=MessageCredibility /CRITERIA=CI(.95).

* T-test for H5a.
T-TEST GROUPS=Condition_Grouped_Primed(0 1) /MISSING=ANALYSIS /VARIABLES=ValidityClaim_REC /CRITERIA=CI(.95).

* ANOVA for H5b.
UNIANOVA MessageCredibility BY Condition_Grouped_Deepfake Condition_Grouped_Primed
  /METHOD=SSTYPE(3)
  /INTERCEPT=INCLUDE
  /POSTHOC=Condition_Grouped_Deepfake(BONFERRONI)
  /PLOT=PROFILE(Condition_Grouped_Deepfake*Condition_Grouped_Primed) TYPE=LINE ERRORBAR=NO MEANREFERENCE=NO YAXIS=AUTO
  /PRINT DESCRIPTIVE HOMOGENEITY
  /CRITERIA=ALPHA(.05)
  /DESIGN=Condition_Grouped_Deepfake Condition_Grouped_Primed Condition_Grouped_Deepfake*Condition_Grouped_Primed.

* ANOVA for H5c.
UNIANOVA ValidityClaim_REC BY Condition_Grouped_Deepfake Condition_Grouped_Primed
  /METHOD=SSTYPE(3)
  /INTERCEPT=INCLUDE
  /POSTHOC=Condition_Grouped_Deepfake(BONFERRONI)
  /PLOT=PROFILE(Condition_Grouped_Deepfake*Condition_Grouped_Primed) TYPE=LINE ERRORBAR=NO MEANREFERENCE=NO YAXIS=AUTO
  /PRINT ETASQ DESCRIPTIVE HOMOGENEITY
  /CRITERIA=ALPHA(.05)
  /DESIGN=Condition_Grouped_Deepfake Condition_Grouped_Primed Condition_Grouped_Deepfake*Condition_Grouped_Primed.

* Correlation for H3.
CORRELATIONS /VARIABLES=SourceCredibility MessageCredibility /PRINT=TWOTAIL NOSIG /STATISTICS DESCRIPTIVES /MISSING=PAIRWISE.

* Regression analysis for H3.
REGRESSION
  /MISSING LISTWISE
  /STATISTICS COEFF OUTS CI(95) R ANOVA
  /CRITERIA=PIN(.05) POUT(.10)
  /NOORIGIN
  /DEPENDENT MessageCredibility
  /METHOD=ENTER SourceCredibility
  /SCATTERPLOT=(*ZRESID, *ZPRED).

* T-test for H4.
T-TEST GROUPS=Condition_Grouped_Deepfake(0 1) /MISSING=ANALYSIS /VARIABLES=VideoAcceptanceInfo /CRITERIA=CI(.95).
