
“Why Am I Seeing This Ad?”

Facebook’s Ad Explanations and Users’ Privacy Concerns and Attitudes

By Charlie Kluin 10353089

Master’s Thesis

University of Amsterdam - Graduate School of Communication
Master’s program Communication Science

Supervisor: Sanne Kruikemeier
January 31, 2020


Abstract

Advertisers on social media often use tracking to personalize advertisements. This practice is usually covert and not transparent. To be more transparent, Facebook has added a ‘Why am I seeing this ad?’ button to explain to users why they were selected to see an advertisement. However, previous research has shown that these ad explanations are often incomplete and misleading. This study aims to understand what happens when Facebook users see a more transparent ad explanation: whether this affects their privacy concerns and attitudes, and whether this relationship is mediated by persuasion knowledge and by persuasion resistance in the form of negative affect. A second aim is to find out whether there is a difference when participants are exposed to a political Facebook advertisement. An online experiment with a between-subjects design showed no significant mediation effect through persuasion knowledge, nor a significant moderated mediation effect with type of ad as moderator. However, the results did show a significant mediated effect of more transparent Facebook ad explanations on privacy concerns through persuasion resistance. This finding indicates that persuasion resistance is more important for privacy concerns than previously thought.

Keywords: Facebook, Online behavioral advertising, Ad explanations, Political ads,


Introduction

After the 2016 elections in the United States and the Brexit referendum, it became world news that the Trump campaign and the Leave campaign had worked with Cambridge Analytica, a company that used digital data to influence voters (Graham-Harrison & Cadwalladr, 2018). Cambridge Analytica designed algorithms that micro-targeted specific social media users to make them more susceptible to the advertisements that the Trump campaign released (Lewis & Hilder, 2018). The moment this became public knowledge, it was an enormous scandal. The digital data was gathered through Facebook, and many of the political advertisements were shown to specific Facebook users (Lewis & Hilder, 2018). As a consequence, Facebook received a lot of backlash. For example, in April 2018 Facebook’s CEO and co-founder Mark Zuckerberg was called to testify before Congress, where he spent almost five hours answering questions. In July 2019 Facebook was fined 5 billion dollars to settle a lawsuit regarding data privacy violations, while the European Union is still investigating and likely to penalize Facebook as well (Davies & Rushe, 2019).

The Trump campaign used personalized advertisements on social media to sell its political message to specific groups of people (Lewis & Hilder, 2018). The practice of targeting specific users with specific ads is known as Online Behavioral Advertising (OBA). OBA is defined as “the practice of monitoring people’s online behavior and using the collected information to show people individually targeted advertisements” (Boerman, Kruikemeier & Zuiderveen Borgesius, 2017, p. 364). This happens on multiple social networking sites, including Facebook.

In response to the Cambridge Analytica scandal and other privacy scandals, Facebook has been trying to change its image. One of the changes is to be more transparent about why users see certain advertisements. The ‘Why am I seeing this ad?’ button above advertisements has been available since 2014 but has undergone some changes, to give users the feeling that they have more understanding of and control over the ads they see (Facebook Business, 2018; Hearn, 2019). This is noteworthy, as OBA is often covert (Boerman, Kruikemeier & Zuiderveen Borgesius, 2017). However, according to Andreou et al. (2018), these explanations are at times misleading and often incomplete. In an experiment using a browser extension and self-made advertisements, Andreou et al. (2018) show that there is a mismatch between the ad explanations Facebook gives users and the scope of possible selection criteria advertisers can choose from when targeting a population. Their study shows that Facebook does not give users the whole picture regarding the advertisements on its platform, even though transparency is very important, especially since research shows that users often do not have much knowledge about OBA (Smit, van Noort & Voorveld, 2014; Ur, Leon, Cranor, Shay & Wang, 2012).

Furthermore, Ghosh, Venkatadri and Mislove (2019) show that political advertisers also target specific Facebook users using their personally identifiable information. This makes it even more important for users to understand what OBA is and what tracking entails, since it can influence the democratic process (Barocas, 2012; Zuiderveen Borgesius et al., 2018). However, previous studies have found contradictory user reactions to more transparency. For instance, Weinshel et al. (2019) show that when users have more knowledge about tracking, they intend to protect their privacy more. Oulasvirta, Suomalainen, Hamari, Lampinen and Karvonen (2014) show that users can perceive data collection without transparency and explicit permission as negative, while when intentions were openly communicated, users had fewer privacy concerns. Eslami, Krishna Kumaran, Sandvig and Karahalios (2018) found that ad explanations should be interpretable for users. In the study by Dogruel (2019), the amount of detail in ad explanations was perceived as positive when there was a medium amount of detail, but as negative when there was a high amount of detail. These divergent outcomes show that there is not yet enough insight into users’ concerns and attitudes when they are confronted with more transparent information about OBA and tracking. Understanding this becomes even more important when political advertisements are involved, since very little research has been done on transparency and political advertisements on social media. Another aspect of the transparency literature that has not yet received sufficient attention is the link between transparency and persuasion knowledge: in what way would more transparency influence persuasion knowledge?

There has been research on online advertisements and their effects on persuasion knowledge and consumers’ attitudes (Boerman, Willemsen & van der Aa, 2017; van Reijmersdal et al., 2016). However, many studies have focused on the difference in users’ attitudes when they are notified that a (social media) post is sponsored (Boerman & Kruikemeier, 2016; Boerman, Willemsen & van der Aa, 2017; Kruikemeier, Sezgin & Boerman, 2016; van Reijmersdal et al., 2016). These studies focus on the difference between covert and overt advertisements, not on the tactics used for these advertisements. Yet Friestad and Wright (1994), who originally coined the term ‘persuasion knowledge’, specified that there are many different kinds of persuasion knowledge, including knowledge of marketers’ tactics. They argue that a persuasion attempt “includes the target’s perceptions of how and why the agent has designed, constructed, and delivered the observable message(s)” (Friestad & Wright, 1994, p. 2). For instance, Friestad and Wright give an example in which a person believes that one of the persuasion tactics of an advertisement is to connect the advertisement with things familiar and important to the target (1994, p. 14). This is exactly what happens with OBA, where advertisers personalize advertisements geared towards the target. Therefore, it is important to understand the link between OBA and users’ perception of the persuasion tactics marketers use. However, this is missing in the literature, as most research about OBA and persuasion knowledge focuses on disclosure (Boerman & Kruikemeier, 2016; Boerman, Willemsen & van der Aa, 2017; Kruikemeier et al., 2016; van Reijmersdal et al., 2016).

Furthermore, targets of persuasion attempts will have judgements about how appropriate certain persuasive tactics are (Friestad & Wright, 1994, p. 5). This is likely to be relevant for OBA as well, since users will also have judgements about the appropriateness of tracking and are likely to have concerns about their privacy (Boerman, Kruikemeier & Zuiderveen Borgesius, 2018; Estrada-Jiménez, Parra-Arnau, Rodríguez-Hoyos, & Forné, 2017; Smit et al., 2014; Ur et al., 2012; Youn & Kim, 2019). However, although there has been some research on OBA, research on the relationship between OBA on Facebook and consumer knowledge and concerns is missing (Boerman, Kruikemeier & Zuiderveen Borgesius, 2017, p. 372). Therefore, this study aims to fill this gap by studying what happens to users’ privacy concerns and attitudes when they get a more transparent view of Facebook’s OBA. To understand this, the following research question was formulated:

To what extent do Facebook users’ privacy concerns and attitudes change when exposed to more transparent ad explanations of Facebook’s Online Behavioral Advertising?

The aim of this study is to expand the literature on Facebook transparency, especially with regard to political advertisements; to broaden the available research on persuasion knowledge by focusing more on marketers’ tactics regarding OBA; and to research the link between transparency and privacy concerns and attitudes.

Theoretical Framework

Online Behavioral Advertising

In order to answer this question, it is first important to understand the background of OBA, after which we will delve into the literature on political targeted ads. Many people will recognize that often, when they search for something online, suddenly all the online advertisements they see are about this item or similar items. This is the basic premise of OBA: online behavior is followed and used as a basis for personalized advertisements (Boerman, Kruikemeier & Zuiderveen Borgesius, 2017). There are multiple steps in this process: tracking users in their online activity, sharing the gathered data with advertisers, and showing personalized advertisements to internet users (Estrada-Jiménez et al., 2017). Altaweel, Good and Hoofnagle (2015, p. 24) found that tracking on websites increased from 2012 to 2015, sometimes even doubling the number of cookies on websites. Google had the highest number of tracking cookies on popular websites, followed by Facebook (Altaweel et al., 2015, p. 18). On the online marketplace, cookies are matched together and shared or sold to advertisers (Estrada-Jiménez et al., 2017, p. 36).

OBA is often covert, meaning that people often do not know that they are being tracked and that the advertisements they see are based on their internet behavior (Boerman, Kruikemeier & Zuiderveen Borgesius, 2017). The study by Smit et al. (2014, p. 21) shows that Dutch citizens often do not have enough knowledge about OBA, and that they understand even less about cookies. Although Facebook attempts to be more overt by adding the ‘Why am I seeing this ad?’ explanation, a lot of information is missing from these explanations, according to Andreou et al. (2018). For instance, the explanations Facebook gives do not mention cookies or tracking. Thus, Andreou et al. (2018) show that Facebook is not transparent enough about its collection and usage of online data. Following Boerman, Kruikemeier and Zuiderveen Borgesius (2017, p. 368), because Facebook does not disclose its collection and usage of personal information, and also does not ask for consent, this is likely to have a negative effect on users’ trust towards Facebook.

Although Facebook does place a ‘Sponsored’ label above its advertisements, it is not transparent about exactly how it personalizes advertisements and how it gathers the data relevant for personalizing them (Andreou et al., 2018). This is problematic, since many users have limited knowledge of data tracking and its use for advertising (Ur et al., 2012; Weinshel et al., 2019). Furthermore, according to Oulasvirta et al. (2014, p. 637), when users’ data is collected without their explicit permission, they perceive it as negative. However, the outcomes are very different when looking at users’ attitudes towards OBA and transparency. There have been reports that when data collectors are transparent about their intentions, users have fewer privacy concerns (Oulasvirta et al., 2014). Another study showed that if users are truly interested in the topic of an advertisement, they are less bothered by the personalization of the ad (Dolin et al., 2018). Furthermore, the amount of detail in explanations of how OBA and tracking work also influences whether users are concerned (Dogruel, 2019; Eslami et al., 2018; Ur et al., 2012). The study by Weinshel et al. (2019) shows that when users have a more accurate understanding of online tracking, they have higher intentions to take privacy-protective measures.

Additionally, multiple studies have shown that increasing transparency activates users’ persuasion knowledge, which can lead to resistance against the persuasion attempt (Boerman, Willemsen & van der Aa, 2017; van Reijmersdal et al., 2016), for instance through ad avoidance (Youn & Kim, 2019). Therefore, it is expected that when users are exposed to the condition in which the ad explanation is more transparent – specifically by disclosing that Facebook tracks users’ online behavior and stores and shares their personal data – users’ persuasion knowledge will be activated, and participants will have higher privacy concerns and stronger privacy attitudes. Furthermore: “Those with more knowledge might perceive the cost and benefits differently and might believe that the negative consequences are more severe” (Boerman, Kruikemeier & Zuiderveen Borgesius, 2017, p. 372). Therefore, the following hypothesis was formulated:

H1: Exposure to a more transparent Facebook ad explanation will activate (a) privacy concerns and (b) attitudes, compared to when the Facebook ad explanation is less transparent.

Political Advertisements

As previous research about transparency shows, many aspects can influence users’ privacy concerns and attitudes. Oulasvirta et al. (2014) show that both the intention and the identity of the data collector matter for the privacy concerns of users. Similarly, Boerman, Willemsen and van der Aa (2017) show that the source of Facebook ads is important, and that who or what the source of the ad is can have different effects on users’ persuasion knowledge. They found that when a celebrity posts a disclosed sponsored Facebook ad, users’ persuasion knowledge is activated, and this leads to critical and distrusting feelings (Boerman, Willemsen & van der Aa, 2017, p. 89). When brands disclosed that their Facebook post was sponsored, this did not necessarily activate users’ persuasion knowledge, possibly because users already knew that posts by brands are advertisements (Boerman, Willemsen & van der Aa, 2017, pp. 90-91). This shows that there can be a difference in users’ activation of persuasion knowledge, which might be attributed to the source of the Facebook ad. But what if the source of the Facebook ad is political? This is important to research because political advertisers also target specific users through Facebook (Ghosh et al., 2019).

Boerman and Kruikemeier (2016, pp. 286-287) argue that consumers might be less likely to perceive political ads as advertisements, since they have different expectations of the communication of political parties. Similar to what Boerman, Willemsen and van der Aa (2017) found, Boerman and Kruikemeier (2016, p. 287) found that users automatically activate persuasion knowledge when confronted with a message posted by a brand, while they do not automatically recognize political messages as persuasive. This can be explained by brands and political parties having different reasons and goals when communicating with an audience (Boerman & Kruikemeier, 2016; van Steenburg, 2015). Thus, when users are confronted with a political message explicitly labeled as an advertisement, their perception of that message can change, subsequently activating their persuasion knowledge (Boerman & Kruikemeier, 2016, p. 287). Therefore, because messages posted by political parties are not necessarily recognized as persuasive, users are more likely to activate persuasion knowledge when political messages are explicitly labeled as advertisements.

Furthermore, Boerman and Kruikemeier (2016, p. 292) found that when users recognized political promoted tweets as ads, they had less favorable opinions of the source and of its trustworthiness, and were more skeptical of the tweet, compared to tweets by brands. Boerman and Kruikemeier (2016) thus show that when social media ads are political in nature, they are more likely to be perceived negatively. Therefore, it is expected that users will perceive political Facebook ads more negatively than non-political Facebook ads, and that political Facebook ads will generate higher privacy concerns and stronger privacy attitudes. The following hypothesis was formulated:

H2: Exposure to a political Facebook ad will activate persuasion knowledge which in turn will activate resistance, leading to higher levels of (a) privacy concerns and (b) attitudes, compared to when the Facebook ad is not specified as a political ad.

As mentioned above, when users are confronted with a more detailed explanation of OBA and tracking, they are more likely to be concerned (Dogruel, 2019; Eslami et al., 2018; Ur et al., 2012) and can have higher intentions to protect their privacy (Weinshel et al., 2019). This is probably due to their persuasion knowledge being activated, which can lead to persuasion resistance (Boerman, Willemsen & van der Aa, 2017; van Reijmersdal et al., 2016; Youn & Kim, 2019). It is expected that this relationship will be stronger when users are confronted with a political advertisement, as they do not automatically activate their persuasion knowledge when confronted with a political message (Boerman & Kruikemeier, 2016). Furthermore, when participants are exposed to a more transparent ad explanation of a political advertisement, it is likely that their persuasion knowledge will be activated, leading to persuasion resistance and possibly to privacy concerns and attitudes. This is in line with the findings of Turow, Delli Carpini, Draper and Howard-Williams (2012), who found that Americans were very critical of tailored political advertisements, with a large percentage indicating that they would not vote for candidates who use them. Therefore, it is expected that when an ad is both transparent about tracking and sharing online data and specified as a political ad, users’ privacy concerns and attitudes will be higher than when the ad is not specified as a political ad. Therefore, the following hypothesis was formulated:

H3: The effect of exposure to a more transparent Facebook ad explanation, which will activate persuasion knowledge and subsequently activate resistance, leading to higher levels of (a) privacy concerns and (b) attitudes, is moderated by the type of ad, where the relationship is stronger when the Facebook ad is political, compared to when the Facebook ad is not political.

Method

Design

To test the hypotheses and answer the research question, a between-subjects online survey-embedded experiment was conducted through Qualtrics in December 2019.

Participants were recruited through convenience sampling. This was mostly done by posting a link on multiple Facebook pages and messaging Facebook connections through Facebook Messenger, but also by sending a link to acquaintances on WhatsApp. To attract more participants, five bol.com vouchers of 10 euros were raffled off among the participants.


After reading and consenting to the informed consent form, participants were asked some questions about their age, gender, nationality, education and Facebook activity. After this, participants were randomly assigned to one of four experimental conditions, resulting in approximately 40 participants per condition (Min = 38, Max = 42). The experiment had a between-subjects 2 × 2 factorial design, with the independent variable ‘ad explanation’ tested on two levels: participants were shown either an original Facebook ad explanation or a more transparent one. The more transparent ad explanation was created on the basis of the study by Andreou et al. (2018).

All participants were shown a short video of about one minute. The video started with an image of a Facebook advertisement, which lasted approximately ten seconds, after which the mouse cursor moved to the three dots displaying the options and clicked on ‘Why am I seeing this ad?’. Participants were then shown a new window overlapping the original advertisement that explained why they were shown this advertisement. To change as little as possible between the conditions, only one sentence differed. The condition with the original Facebook ad explanation read ‘This is information based on your Facebook profile and where you’ve connected to the internet’; in the condition with the more transparent ad explanation, this sentence was changed to ‘This is information based on tracking your internet behavior and storing and selling your personal data’. For screenshots of the different ad explanations, see the Appendix.

The other factor in the experiment was the moderator variable, ‘type of ad’. This was manipulated by showing participants either an explicitly political advertisement, marked as such by adding the word ‘political’ both in the original advertisement and in the explanation window, or a non-specified advertisement. In the political condition, the Facebook page of the advertisement was called ‘The Urban Political Party’; in the non-political condition, the Facebook page was called ‘The Urban Party’. The control group in this experiment was shown the non-political and less transparent ad explanation, since this is how everybody currently sees the ad explanations. The video used for the experimental conditions was made by taking screenshots of Facebook ads, digitally altering the screenshots and putting them together into a realistic representation of Facebook using InVision.1 Because the stimuli were made to look realistic, external validity was high.

After being exposed to one of the four conditions, participants were asked to fill out some questions relating to persuasion knowledge, resistance in the form of negative affect, privacy concerns and privacy attitude intentions. After this they were asked what explanation Facebook gave for showing them the ad and what the name of the Facebook page was, as a manipulation check. If participants wanted to join the raffle, they were redirected to another Qualtrics webpage where they could fill in their email address. Because the raffle was held on another Qualtrics page, the participants’ email addresses could not be connected in any way with their answers to the study, and thus their privacy was ensured. This was also explicitly mentioned to the participants.

Sample

Although there were 206 responses to the survey, only 169 participants filled out all the questions. Furthermore, since the study is about Facebook ads, only participants who were active on Facebook at least once a week were included. This further reduced the sample to 159 participants. The participants were aged 19 to 58 (M = 28.1, SD = 7.2), with 73 female participants (45.9%). The sample was highly educated, with 52.8% having received at least a university bachelor’s degree. Although the sample comprised 24 different nationalities, the Dutch nationality was overrepresented, making up 75.5% of the total sample.

1 The experimental condition videos can be found through

Measures

Persuasion knowledge and resistance. There were two mediators in this experiment. The first was persuasion knowledge, as used in Ham, Nelson and Das (2015). Participants had to indicate on a seven-point scale (1 = strongly disagree, 7 = strongly agree) whether they agreed with statements such as ‘The post of the Urban (Political) Party feels like an ad’ (Ham et al., 2015). This variable was measured with five items, with a higher score indicating higher persuasion knowledge. The mean score of the five items was computed into the variable ‘persuasion knowledge’ (Eigenvalue = 2.67; explained variance = 53.33%; α = 0.77; M = 5.86; SD = 0.82).
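The scale construction used throughout this section, taking each participant’s mean across the items and reporting Cronbach’s α for internal consistency, can be sketched in a few lines. The response matrix below is hypothetical, not the study’s data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the sum scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 7-point responses of five participants to the five
# persuasion-knowledge items (rows = participants, columns = items)
scores = np.array([
    [6, 5, 6, 7, 6],
    [5, 5, 4, 6, 5],
    [7, 6, 6, 7, 7],
    [4, 5, 5, 4, 4],
    [6, 6, 7, 6, 5],
])
persuasion_knowledge = scores.mean(axis=1)  # one mean score per participant
alpha = cronbach_alpha(scores)
```

The same recipe applies to the resistance, privacy-concern and privacy-attitude scales, only with different items and response ranges.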

The second mediator was resistance against the persuasion attempt in the form of negative affect, labeled ‘persuasion resistance’. It was measured with four items on a five-point scale from 1 (strongly disagree) to 5 (strongly agree), derived from Zuwerink and Cameron (2003). These questions were all formulated as ‘While reading the explanation Facebook gave for showing me this ad, I felt...’, with the four items being ‘Angry’, ‘Enraged’, ‘Irritated’ and ‘Annoyed’. The mean score was used to indicate persuasion resistance (Eigenvalue = 3.09; explained variance = 77.13%; α = 0.90; M = 2.80; SD = 1.00).

Privacy concerns and attitudes. The dependent variables of this study were privacy concerns and attitudes. Privacy concerns were adapted from the study by Choi and Land (2016, p. 874) and measured with five items on a seven-point scale from 1 (strongly disagree) to 7 (strongly agree). Participants were asked to indicate whether they agreed with statements such as ‘I am concerned that Facebook may sell my personal information to other companies’. The mean score of these five items was computed into the variable ‘privacy concerns’ (Eigenvalue = 3.72; explained variance = 74.43%; α = 0.91; M = 5.51; SD = 1.20).

Privacy attitude intentions were adapted from Boerman et al. (2018, p. 10), although renamed from privacy protection behavior to privacy attitude intentions. This variable was measured with ten items, where participants were asked ‘After seeing Facebook's explanation of why you see certain ads, how likely are you to...’. Examples of the items are ‘delete cookies’ and ‘use the “do not track” function in your browser’. Furthermore, one of the items from the original study was replaced with ‘use a VPN’. Answers were given on a six-point scale from 1 (never) to 6 (I already do this). The ten items were computed into the variable ‘privacy attitude intentions’ (Eigenvalue = 4.99; explained variance = 49.89%; α = 0.88; M = 3.67; SD = 1.15).2

Manipulation check. For the manipulation check, two crosstabulations were executed. To ensure that participants had read the ad explanation, the ad explanation conditions were cross-tabulated with the question ‘What was one of the reasons Facebook gave for showing you this ad?’. Of the participants exposed to the less transparent ad explanation, 59% selected the correct explanation; of the participants exposed to the more transparent ad explanation, 56.8% did. The majority of the responses were correct, and Pearson’s chi-square test was significant (χ²(2) = 20.71, p < .001). However, a large number of participants still did not answer correctly, which is in line with previous disclosure studies (Boerman, Willemsen & van der Aa, 2017, p. 87).

2 A factor analysis showed that there were two factors explaining the ‘privacy attitude intention’ items. However, because the second factor had an eigenvalue of 1.07 and explained only 10.67% of the variance, and there was no clear theoretical distinction between the items, the decision was made to compute the items together into one variable.


The manipulation check results were better for the political versus non-political ad condition. Of the participants exposed to the non-political ad, 87.8% answered correctly; of the participants exposed to the political ad, 76.6% answered correctly (χ²(2) = 91.80, p < .001).
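A Pearson chi-square manipulation check like the two above can be computed directly from the crosstabulated counts. A minimal sketch with hypothetical counts (rows are the two explanation conditions, columns the answer options; these are not the study’s actual cell counts):

```python
import numpy as np

def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for a contingency table."""
    table = np.asarray(table, dtype=float)
    # Expected counts under independence: outer product of margins / grand total
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    stat = ((table - expected) ** 2 / expected).sum()
    df = (table.shape[0] - 1) * (table.shape[1] - 1)
    return stat, df

# Hypothetical counts: correct / wrong / don't know, per explanation condition
table = [[47, 20, 13],   # less transparent condition
         [45, 24, 10]]   # more transparent condition
stat, df = chi_square(table)
```

With two conditions and three answer options, df = (2 − 1)(3 − 1) = 2, matching the χ²(2) tests reported above.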

Results

Randomization

To make sure that randomization was successful and there were no differences between the experimental groups, separate analyses of variance (ANOVAs) were executed with age and education as dependent variables and the experimental conditions as independent variables. Furthermore, crosstabulations were conducted with gender, nationality and Facebook activity as dependent variables and the experimental conditions as independent variables. The results showed no significant differences between the experimental groups in terms of age (F(3, 155) = 1.18, p = .321), education (F(3, 155) = 0.67, p = .572), gender (χ²(3) = 5.82, p = .121), nationality (χ²(69) = 76.00, p = .263) or Facebook activity (χ²(9) = 7.41, p = .594).
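The randomization check for a continuous variable such as age is a one-way ANOVA across the four conditions. A minimal sketch with hypothetical ages (the four groups and their values are illustrative, not the study’s data):

```python
import numpy as np

def one_way_anova(groups):
    """F statistic and degrees of freedom for a one-way ANOVA."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = np.concatenate(groups).mean()
    # Between-groups and within-groups sums of squares
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df1, df2 = k - 1, n - k
    return (ss_between / df1) / (ss_within / df2), (df1, df2)

# Hypothetical participant ages in the four experimental conditions
ages = [[25, 31, 28, 24], [27, 29, 33, 26], [30, 22, 28, 25], [26, 35, 27, 29]]
f_stat, (df1, df2) = one_way_anova(ages)
```

With the study’s four conditions and 159 participants, the degrees of freedom become F(3, 155), as reported above.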

Effects of Ad Explanation on Privacy Concerns and Attitudes

To test the hypotheses, regression analyses were conducted using PROCESS, developed by Andrew Hayes (2018). To test H1, two separate serial mediation analyses were conducted using Model 6 of PROCESS (Hayes, 2018). The first mediation analysis had ad explanation as independent variable, persuasion knowledge as the first mediator, persuasion resistance as the second mediator, and privacy concerns as dependent variable. The results (see Figure 1) showed no significant effect of ad explanation on persuasion knowledge (b = 0.01, p = .968), nor of persuasion knowledge on persuasion resistance (b = 0.01, p = .886) or on privacy concerns (b = 0.10, p = .351). However, there was a significant effect of ad explanation on persuasion resistance (b = 0.42, p = .009), and of persuasion resistance on privacy concerns (b = 0.36, p < .001). This shows that participants who read the more transparent Facebook ad explanation were more likely to feel persuasion resistance. Furthermore, participants who felt more persuasion resistance were more likely to have higher privacy concerns, controlling for ad explanation.

The effects of a more transparent ad explanation on persuasion knowledge, and subsequently on privacy concerns, were both non-significant. This means that for participants who read the more transparent ad explanation, persuasion knowledge was not necessarily activated, nor did persuasion knowledge lead to higher privacy concerns. The ad explanation itself did not have a significant direct effect on privacy concerns (b = -0.10, p = .590), and there was no significant indirect effect of ad explanation on privacy concerns through both mediators (indirect effect = 0.00, boot SE < 0.01, 95% BCBCI [-0.011; 0.013]). Thus, participants exposed to the more transparent ad explanation did not necessarily have higher privacy concerns. These results do not support H1a.

Figure 1. Mediation model: Effect of ad explanation (less transparent vs. more transparent) on privacy concerns via persuasion knowledge and persuasion resistance (n = 159). *p < .05, **p < .01, ***p < .001.
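A two-mediator serial model like PROCESS Model 6 boils down to three OLS regressions: the first mediator on X; the second mediator on X and the first mediator; and Y on X and both mediators. The indirect effects are products of the resulting path coefficients. A minimal numpy sketch on simulated data (all values are hypothetical, only loosely patterned on the coefficients reported above; PROCESS additionally bootstraps confidence intervals for the indirect effects, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 159  # sample size reported in the thesis

# Simulated stand-in data (hypothetical, not the real dataset):
# x  = ad explanation (0 = original, 1 = more transparent)
# m1 = persuasion knowledge, m2 = persuasion resistance, y = privacy concerns
x = rng.integers(0, 2, n).astype(float)
m1 = 5.86 + 0.82 * rng.standard_normal(n)             # no effect of x, as found
m2 = 2.60 + 0.42 * x + rng.standard_normal(n)         # a2-path set near b = 0.42
y = 4.50 + 0.36 * m2 + 1.10 * rng.standard_normal(n)  # b2-path set near b = 0.36

def ols(dep, *cols):
    """Least-squares coefficients of dep on an intercept plus the given columns."""
    X = np.column_stack([np.ones(len(dep)), *cols])
    beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return beta

# The three regressions behind the serial mediation model
a1 = ols(m1, x)[1]               # x -> m1
a2, d21 = ols(m2, x, m1)[1:3]    # x -> m2 and m1 -> m2
b1, b2 = ols(y, x, m1, m2)[2:4]  # m1 -> y and m2 -> y, controlling for x

indirect_via_resistance = a2 * b2  # the path the thesis found significant
serial_indirect = a1 * d21 * b2    # x -> m1 -> m2 -> y (non-significant here)
```

In the thesis’s results, only the a2 and b2 paths were significant, so the product a2 × b2 carries the significant mediation through persuasion resistance while the serial product stays near zero.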

For H1b, the dependent variable is privacy attitude intentions (see Figure 2). Only the effects on privacy attitude intentions differed in this second mediation: persuasion knowledge had a non-significant effect on privacy attitude intentions (b = -0.06, p = .577), as did persuasion resistance (b = 0.03, p = .764) and ad explanation (b = 0.20, p = .278). Thus, although the more transparent ad explanation did lead to significantly higher persuasion resistance, this resistance did not translate to participants' privacy attitude intentions. The indirect effect of ad explanation on privacy attitude intentions was also not significant (indirect effect = 0.00, boot SE < 0.01, 95% BCBCI [-0.003; 0.003]). Thus, H1b was not supported.

Figure 2. Mediation model: Effect of ad explanation (less transparent vs. more transparent) on privacy attitude intentions via persuasion knowledge and persuasion resistance (n = 159). *p < .05, **p < .01, ***p < .001.

Effect of the Type of Ad on Privacy Concerns and Attitudes

In order to test the second hypothesis, two separate serial mediation analyses were again conducted using Model 6 of PROCESS (Hayes, 2018). To test H2a, type of ad was used as the independent variable, persuasion knowledge as the first mediator, persuasion resistance as the second mediator, and privacy concerns as the dependent variable (see Figure 3). Contrary to our expectations, type of ad had a non-significant effect on persuasion knowledge (b = -0.04, p = .766) and a non-significant effect on persuasion resistance (b = 0.15, p = .344). Persuasion knowledge also had a non-significant effect on persuasion resistance (b = 0.02, p = .864). There was no significant effect of persuasion knowledge on privacy concerns (b = 0.10, p = .352), although there was a significant effect of persuasion resistance on privacy concerns (b = 0.35, p < .001). This indicates that exposure to a political ad did not lead to more persuasion resistance. Type of ad had a non-significant direct effect on privacy concerns (b = -0.01, p = .971), and the indirect effect was also non-significant (indirect effect < -0.01, boot SE < 0.01, 95% BCBCI [-0.013; 0.011]). Thus, H2a is not supported.

Figure 3. Mediation model: Effect of type of ad (not political vs. political) on privacy concerns via persuasion knowledge and persuasion resistance (n = 159). *p < .05, **p < .01, ***p < .001.

The dependent variable in H2b is privacy attitude intentions (see Figure 4). Type of ad had a non-significant effect on privacy attitude intentions (b = -0.09, p = .643), as did persuasion knowledge (b = -0.06, p = .571). Persuasion resistance also had a non-significant effect on privacy attitude intentions (b = 0.05, p = .569). The overall indirect effect was likewise not significant (indirect effect = 0.00, boot SE < 0.01, 95% BCBCI [-0.004; 0.004]). These results do not support H2b.


Figure 4. Mediation model: Effect of type of ad (not political vs. political) on privacy attitude intentions via persuasion knowledge and persuasion resistance (n = 159). *p < .05, **p < .01, ***p < .001.

Moderated Mediation on Privacy Concerns and Attitudes

In order to test the two moderated mediation hypotheses, Model 7 of PROCESS was used (Hayes, 2018). The first moderated mediation tested H3a and was run with ad explanation as the independent variable, type of ad as the moderator, persuasion knowledge as the first mediator, persuasion resistance as the second mediator, and privacy concerns as the dependent variable (see Figure 5). This test showed that ad explanation had a non-significant effect on persuasion knowledge (b = 0.17, p = .677), as did type of ad (b = 0.13, p = .755). The interaction between ad explanation and type of ad on persuasion knowledge was non-significant (b = -0.11, p = .669). Ad explanation had a non-significant effect on persuasion resistance (b = 0.69, p = .159), as did type of ad (b = 0.43, p = .384). The interaction between ad explanation and type of ad on persuasion resistance was also non-significant (b = -0.19, p = .552). Persuasion knowledge did not have a significant effect on privacy concerns (b = 0.10, p = .351); persuasion resistance, however, did (b = 0.36, p < .001). The transparency of the ad explanation had a non-significant effect on privacy concerns (b = -0.10, p = .590). In short, adding type of ad as a moderator did not change the effect of ad explanation on persuasion knowledge, and the same holds for the effect of ad explanation on persuasion resistance.

Comparing the conditional indirect effects of the two types of ad on privacy concerns through persuasion knowledge shows that both the non-political ad (indirect effect < 0.01, boot SE = 0.03, 95% BCBCI [-0.056; 0.074]) and the political ad (indirect effect < -0.01, boot SE = 0.03, 95% BCBCI [-0.078; 0.055]) had a non-significant indirect effect on privacy concerns through persuasion knowledge. However, the non-political ad did have a significant indirect effect on privacy concerns through persuasion resistance (indirect effect = 0.18, boot SE = 0.10, 95% BCBCI [0.023; 0.393]), while the political ad did not (indirect effect = 0.12, boot SE = 0.09, 95% BCBCI [-0.028; 0.316]). This indicates that the non-political Facebook ad had a positive significant indirect effect on participants' privacy concerns, mediated by persuasion resistance. However, because this holds only for the non-political Facebook ad, and only when mediated by persuasion resistance rather than persuasion knowledge, H3a is not supported.

Figure 5. Moderated mediation model: Effect of ad explanation (less transparent vs. more transparent) on privacy concerns via persuasion knowledge and persuasion resistance, moderated by type of ad (n = 159). *p < .05, **p < .01, ***p < .001. Note: PROCESS did not report the total indirect effect.
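The conditional indirect effects reported for Model 7 come from moderating the first (a) path and multiplying the condition-specific a-path by the b path. A simplified one-mediator sketch in Python (hypothetical variable names and toy data; the model above additionally includes a second serial mediator) illustrates the computation:

```python
import numpy as np

def ols(X, y):
    """OLS coefficients for y ~ X (X already contains an intercept column)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def conditional_indirect_effects(x, w, m, y):
    """PROCESS Model 7 logic with a single mediator:
        m = i_m + a1*x + a2*w + a3*(x*w)
        y = i_y + c'*x + b*m
    The indirect effect of x on y at moderator value w0 is (a1 + a3*w0) * b,
    and a3 * b is the index of moderated mediation."""
    ones = np.ones(len(x))
    a = ols(np.column_stack([ones, x, w, x * w]), m)
    b = ols(np.column_stack([ones, x, m]), y)[2]
    a1, a3 = a[1], a[3]
    effects = {w0: (a1 + a3 * w0) * b for w0 in (0.0, 1.0)}  # per ad type
    return effects, a3 * b

# Tiny illustrative dataset: x = ad explanation (0/1), w = type of ad (0/1).
x = np.array([0., 0., 1., 1.])
w = np.array([0., 1., 0., 1.])
m = x + 2 * x * w        # mediator responds to x, more strongly when w = 1
y = 5 * m                # outcome driven by the mediator
effects, index = conditional_indirect_effects(x, w, m, y)
print(effects)   # indirect effect of x at w = 0 and at w = 1
print(index)     # index of moderated mediation
```

Comparing the two entries of `effects` against zero-excluding bootstrap intervals is what the conditional-indirect-effect comparison above amounts to.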


In order to test H3b, the dependent variable was changed to privacy attitude intentions (see Figure 6). Because only the dependent variable changed, the effects of ad explanation on persuasion knowledge and on persuasion resistance were the same, as was the moderation by type of ad. Persuasion knowledge had a non-significant effect on privacy attitude intentions (b = -0.06, p = .577). Persuasion resistance had a non-significant effect on privacy attitude intentions (b = 0.03, p = .764), as did ad explanation (b = 0.20, p = .278). The conditional indirect effects of the two types of ad on privacy attitude intentions through persuasion knowledge show that both the non-political ad (indirect effect < -0.01, boot SE = 0.03, 95% BCBCI [-0.069; 0.041]) and the political ad (indirect effect < 0.01, boot SE = 0.03, 95% BCBCI [-0.049; 0.059]) had a non-significant indirect effect. Furthermore, the conditional indirect effects of the non-political ad (indirect effect < 0.01, boot SE = 0.06, 95% BCBCI [-0.104; 0.143]) and the political ad (indirect effect < 0.01, boot SE = 0.04, 95% BCBCI [-0.072; 0.105]) on privacy attitude intentions through persuasion resistance were also not significant. Since there was no significant effect of ad explanation on privacy attitude intentions, mediated by persuasion knowledge and persuasion resistance and moderated by type of ad, H3b is not supported.

Figure 6. Moderated mediation model: Effect of ad explanation (less transparent vs. more transparent) on privacy attitude intentions via persuasion knowledge and persuasion resistance, moderated by type of ad (n = 159). *p < .05, **p < .01, ***p < .001.


Discussion and Conclusion

This study focused on whether more transparent Facebook ad explanations activate users' persuasion knowledge and persuasion resistance, and whether this has an effect on users' privacy concerns and attitudes. A second purpose was to determine whether there was a difference when users were exposed to a political Facebook ad compared to a non-political one. The third purpose was to find out whether the relationship between transparent Facebook ad explanations activating persuasion knowledge and resistance, leading to higher privacy concerns and attitudes, was moderated by the ad being political. The results provide no support for the hypotheses. Thus, the answer to the research question is that Facebook users' privacy concerns and attitudes do not significantly change when users are exposed to more transparent explanations of Facebook's OBA.

Furthermore, this study found that more transparent Facebook ad explanations did not have an effect on participants' persuasion knowledge. This could be explained by participants' persuasion knowledge being activated automatically when confronted with a Facebook ad, as Boerman and Kruikemeier (2016) suspect. However, according to Boerman and Kruikemeier (2016), this would indicate that there would be a difference if the ad was political in nature, which was not the case in this study. A possible explanation could be that Facebook users are accustomed to political Facebook ads, since multiple studies show that political ads have steadily become more common on social media (Boerman & Kruikemeier, 2016; Ghosh et al., 2019; Kruikemeier et al., 2016; Turow et al., 2012). Furthermore, this would be in line with the study by Kruikemeier et al. (2016, pp. 370-371), who found that additional explanation about the practice of personalizing ads did not change participants' persuasion knowledge. Kruikemeier et al. (2016) believe this might be because participants already seem to have considerable persuasion knowledge of how political advertising on social media works.

Interestingly, this study found a statistically significant relationship between transparent ad explanations, persuasion resistance, and privacy concerns. Although this relationship was not hypothesized, it is nonetheless an interesting finding, since it implies that more transparent ad explanations do have an effect on persuasion resistance in the form of negative affect. This is in line with other studies showing that disclosures and information about OBA can lead to feelings of skepticism, creepiness, and irritation (Boerman & Kruikemeier, 2016; Ur et al., 2012; Youn & Kim, 2019), although there have also been findings that show the opposite (Eslami et al., 2018; Oulasvirta et al., 2014). Furthermore, this study found a significant relationship between persuasion resistance in the form of negative affect and privacy concerns, which is in line with previous studies that found a relationship between negative affect and privacy concerns (Ur et al., 2012; Youn & Kim, 2019).

However, although there was a significant mediation with privacy concerns, there was no significant mediation when the dependent variable was privacy attitude intentions. This is interesting, because it implies that although participants feel anger, annoyance, and other forms of negative affect, they do not necessarily intend to change their privacy behaviors. This is in line with the findings of Boerman et al. (2018, p. 16), where participants felt threatened by tracking practices but had little confidence in their own ability to prevent them. Furthermore, they found that participants were not sure of the effectiveness of taking privacy-protective measures (Boerman et al., 2018, p. 16). This could explain why this study found that users who feel more resistance do not necessarily intend to change their privacy attitudes. It seems that Facebook users do feel resistance, but that they do not intend to actually resist the privacy-invasive practices of OBA and tracking.


Limitations and Future Research

The most important limitation of this study is that the experimental conditions did not differ with regard to ad disclosures. Every experimental condition mentioned that the post was an advertisement, which might explain the high mean score of participants' persuasion knowledge, since it is possible that all participants' persuasion knowledge was activated. It is also possible that a ceiling effect for persuasion knowledge was reached. Either way, the possible activation of all participants' persuasion knowledge cannot be ruled out, since every condition disclosed that participants were seeing an advertisement. The decision not to include a non-disclosure condition was made because the number of experimental conditions would otherwise have been too high, which was not feasible for this study. A recommendation for further research would therefore be to either include experimental conditions in which there is no mention of the Facebook post being an advertisement, or to adjust the questions relating to persuasion knowledge so that they measure a different aspect of it, counteracting the possible ceiling effect. One aspect of persuasion knowledge that would be interesting to test is participants' knowledge of marketers' tactics.

Another important limitation is that the Facebook ads used in the experimental conditions were not from real brands or political parties. This was done to change as little as possible between the conditions; however, it lowered the ecological validity, since the ads were less realistic and the findings might therefore be less generalizable. Furthermore, multiple studies have shown that the source of an advertisement matters, as it can influence users (Boerman, Willemsen, & van der Aa, 2017; Dolin et al., 2018; Oulasvirta et al., 2014; Ur et al., 2012). However, since none of the participants recognized the advertisements, none of them were influenced by recognition of the source, which benefited the internal validity. Nonetheless, it would be interesting to find out whether users' persuasion resistance in the form of negative affect would change when they are exposed to ads from different sources.

Future research should focus more on the relationship between transparency and persuasion resistance in the form of negative affect, as this relationship appears to be more important than previously thought. It is especially interesting that although participants felt negative affect, they did not intend to change their privacy attitudes. More research is needed here, as it is important to understand what this means for citizens' perceptions of their online privacy, especially now that personalized political advertisements on Facebook are becoming more common. Furthermore, these findings highlight the need for more research on what motivates Facebook users to change their privacy attitudes.

Implications

More research about the relationship between transparency and negative affect can have implications for Facebook itself, as previous research about transparency has shown that users have very divergent reactions when they are confronted with more transparent ad explanations (Dogruel, 2019; Eslami et al., 2018; Oulasvirta et al., 2014; Ur et al., 2012; Weinshel et al., 2019). This makes it important for companies such as Facebook to understand when users perceive transparent ad explanations as negative (e.g. ‘creepy’, Ur et al., 2012) or as positive (e.g. increasing user trust, Eslami et al., 2018).

The societal implications of this study lie mostly in what was not found. This study shows that although participants felt anger, irritation, and other forms of negative affect, they did not intend to change their privacy behavior. Since there have been more instances of personalized political online ads (Ghosh et al., 2019; Turow et al., 2012), and these can have a negative influence on democracy (Barocas, 2012; Zuiderveen Borgesius et al., 2018), it is very important to understand how citizens can be motivated to protect their online privacy, if they can be motivated at all. If they cannot, this shows an even greater need for laws and regulations protecting citizens' online privacy, since it seems that even when citizens are angry, they do not intend to change their behavior.

Theoretically, this study contributes to the existing literature by expanding the research on transparency, especially by integrating the literature on persuasion knowledge and persuasion resistance with the transparency literature. In particular, it shows the importance of persuasion resistance when studying transparency and highlights its significance for further research.

References

Altaweel, I., Good, N., & Hoofnagle, C. J. (2015). Web Privacy Census. Technology Science, December 15. Retrieved from http://techscience.org/a/2015121502

Andreou, A., Venkatadri, G., Goga, O., Gummadi, K. P., Loiseau, P., & Mislove, A. (2018). Investigating Ad Transparency Mechanisms in Social Media: A Case Study of Facebook's Explanations. Proceedings 2018 Network and Distributed System Security Symposium. doi: 10.14722/ndss.2018.23191

Barocas, S. (2012). The price of precision: Voter microtargeting and its potential harms to the democratic process. In Proceedings of the first edition workshop on Politics, elections and data, 31-36. doi: 10.1145/2389661.2389671

Boerman, S. C., & Kruikemeier, S. (2016). Consumer responses to promoted tweets sent by brands and political parties. Computers in Human Behavior, 65, 285–294. doi: 10.1016/j.chb.2016.08.033

Boerman, S. C., Willemsen, L. M., & van der Aa, E. P. (2017). "This Post Is Sponsored": Effects of Sponsorship Disclosure on Persuasion Knowledge and Electronic Word of Mouth in the Context of Facebook. Journal of Interactive Marketing, 38, 82–92. doi: 10.1016/j.intmar.2016.12.002

Boerman, S. C., Kruikemeier, S., & Zuiderveen Borgesius, F. J. (2017). Online Behavioral Advertising: A Literature Review and Research Agenda. Journal of Advertising, 46(3), 363–376. doi: 10.1080/00913367.2017.1339368

Boerman, S. C., Kruikemeier, S., & Zuiderveen Borgesius, F. J. (2018). Exploring Motivations for Online Privacy Protection Behavior: Insights from Panel Data. Communication Research, 1-25. doi: 10.1177/0093650218800915

Choi, B. C., & Land, L. (2016). The effects of general privacy concerns and transactional privacy concerns on Facebook apps usage. Information & Management, 53(7), 868–877. doi: 10.1016/j.im.2016.02.003

Davies, R., & Rushe, D. (2019, July 24). Facebook to pay $5bn fine as regulator settles Cambridge Analytica complaint. Retrieved from https://www.theguardian.com/technology/2019/jul/24/facebook-to-pay-5bn-fine-as-regulator-files-cambridge-analytica-complaint

Dogruel, L. (2019). Too much information!? Examining the impact of different levels of transparency on consumers' evaluations of targeted advertising. Communication Research Reports, 36(5), 383–392. doi: 10.1080/08824096.2019.1684253

Dolin, C., Weinshel, B., Shan, S., Hahn, C. M., Choi, E., Mazurek, M. L., & Ur, B. (2018). Unpacking Perceptions of Data-Driven Inferences Underlying Online Targeting and Personalization. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18. doi: 10.1145/3173574.3174067

Eslami, M., Krishna Kumaran, S. R., Sandvig, C., & Karahalios, K. (2018). Communicating Algorithmic Process in Online Behavioral Advertising. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18. doi: 10.1145/3173574.3174006

Estrada-Jiménez, J., Parra-Arnau, J., Rodríguez-Hoyos, A., & Forné, J. (2017). Online advertising: Analysis of privacy threats and protection approaches. Computer Communications, 100, 32–51. doi: 10.1016/j.comcom.2016.12.016

Facebook Business. (2018, December 21). Changes We Made to Ads in 2018. Retrieved from https://www.facebook.com/business/news/changes-we-made-to-ads-in-2018

Friestad, M., & Wright, P. (1994). The Persuasion Knowledge Model: How People Cope with Persuasion Attempts. Journal of Consumer Research, 21(1), 1. doi: 10.1086/209380

Ghosh, A., Venkatadri, G., & Mislove, A. (2019). Analyzing Political Advertisers' Use of Facebook's Targeting Features. IEEE Workshop on Technology and Consumer Protection (ConPro '19).

Graham-Harrison, E., & Cadwalladr, C. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. Retrieved from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Ham, C.-D., Nelson, M. R., & Das, S. (2015). How to Measure Persuasion Knowledge. International Journal of Advertising, 34(1), 17–53. doi: 10.1080/02650487.2014.994730

Hayes, A. F. (2018). Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. New York, NY: Guilford Press.

Hearn, I. (2019, July 15). Facebook Expands "Why Am I Seeing This Ad" Feature to Provide More Detailed Insights to Users. Retrieved from https://www.impactbnd.com/blog/facebook-expands-why-am-i-seeing-this-ad-feature-to-provide-more-detailed-insights-to-users

Kruikemeier, S., Sezgin, M., & Boerman, S. C. (2016). Political Microtargeting: Relationship Between Personalized Advertising on Facebook and Voters' Responses. Cyberpsychology, Behavior, and Social Networking, 19(6), 367–372. doi: 10.1089/cyber.2015.0652

Lewis, P., & Hilder, P. (2018, March 23). Leaked: Cambridge Analytica's blueprint for Trump victory. Retrieved from https://www.theguardian.com/uk-news/2018/mar/23/leaked-cambridge-analyticas-blueprint-for-trump-victory

Oulasvirta, A., Suomalainen, T., Hamari, J., Lampinen, A., & Karvonen, K. (2014). Transparency of Intentions Decreases Privacy Concerns in Ubiquitous Surveillance. Cyberpsychology, Behavior, and Social Networking, 17(10), 633–638. doi: 10.1089/cyber.2013.0585

Smit, E. G., van Noort, G., & Voorveld, H. A. (2014). Understanding online behavioural advertising: User knowledge, privacy concerns and online coping behaviour in Europe. Computers in Human Behavior, 32, 15–22. doi: 10.1016/j.chb.2013.11.008

Turow, J., Delli Carpini, M. X., Draper, N. A., & Howard-Williams, R. (2012). Americans Roundly Reject Tailored Political Advertising. Annenberg School for Communication, University of Pennsylvania. Retrieved from http://repository.upenn.edu/ascpapers/398

Ur, B., Leon, P. G., Cranor, L. F., Shay, R., & Wang, Y. (2012). Smart, useful, scary, creepy. Proceedings of the Eighth Symposium on Usable Privacy and Security - SOUPS '12. doi: 10.1145/2335356.2335362

Van Reijmersdal, E. A., Fransen, M. L., van Noort, G., Opree, S. J., Vandeberg, L., Reusch, S., … Boerman, S. C. (2016). Effects of Disclosing Sponsored Content in Blogs. American Behavioral Scientist, 60(12), 1458–1474. doi: 10.1177/0002764216660141

Van Steenburg, E. (2015). Areas of research in political advertising: a review and research agenda. International Journal of Advertising, 34(2), 195–231.

Weinshel, B., Wei, M., Mondal, M., Choi, E., Shan, S., Dolin, C., … Ur, B. (2019). Oh, the Places You've Been! User Reactions to Longitudinal Transparency About Third-Party Web Tracking and Inferencing. Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security - CCS '19. doi: 10.1145/3319535.3363200

Youn, S., & Kim, S. (2019). Newsfeed native advertising on Facebook: young millennials' knowledge, pet peeves, reactance and ad avoidance. International Journal of Advertising, 38(5), 651–683. doi: 10.1080/02650487.2019.1575109

Zuiderveen Borgesius, F. J., Möller, J., Kruikemeier, S., Fathaigh, R. Ó., Irion, K., Dobber, T., … Vreese, C. D. (2018). Online Political Microtargeting: Promises and Threats for Democracy. Utrecht Law Review, 14(1), 82. doi: 10.18352/ulr.420

Zuwerink, J. J., & Cameron, K. A. (2003). Strategies for resisting persuasion. Basic and Applied Social Psychology.

Appendix

Screenshots of Ad Explanations

Experimental condition 2. Political & low transparency ad explanation.


Experimental condition 3. Non-political & high transparency ad explanation.
