
Targeted advertising and consumer privacy: practices and underlying reasons that evoke privacy violation feelings in young adults.


Academic year: 2021



Targeted advertising and consumer privacy: practices and underlying reasons that evoke privacy violation feelings in young adults.

Master thesis towards the degree of Master of Science, MSc Marketing (Management track)
University of Groningen, Faculty of Economics and Business

Presented by: Anastasia Grammatikopoulou


Abstract

The development of online social networks and e-commerce websites has created an opportunity for companies to use consumer data for targeted advertising. Targeted advertising means tailoring ad content to the specific characteristics of an end-user. Targeting, when done well, has been confirmed to be effective for companies; at the same time, extant research has found that consumers often feel violated by it. The questions of when exactly consumers feel their privacy violated by targeted advertisements, and why, have remained open. This study addresses these questions by interviewing 12 participants aged 19-25, a group with a high chance of being exposed to targeted advertising. Data collection through private messages, and ad displays that use obvious retargeting and high personalization, were found to violate user privacy. Consumer marketing knowledge was found to affect the intensity of the violation feeling. Furthermore, three underlying reasons for privacy violation feelings are revealed, related to data collection and ad display methods respectively: invasion into private life, consumer awareness of data collection, and personal image threat. The relations between targeted advertising practices that violate consumer privacy and these reasons are presented in a framework, which serves as the main contribution of this paper.


Table of Contents

1. INTRODUCTION
2. LITERATURE REVIEW
   2.1 TARGETED ADVERTISING
   2.2 ONLINE PRIVACY
      Privacy violation
3. METHODOLOGY
   Participants
   Procedure
4. FINDINGS
   THEME 1: DATA COLLECTION
      Private conversations
   THEME 2: AD DISPLAY
      Obvious retargeting
      High personalization
   THEME 3: CONSUMER MARKETING KNOWLEDGE
5. DISCUSSION
   Invasion into private life
   Awareness of data collection
   Personal image threat
6. CONCLUSION
   6.1 LIMITATIONS AND FUTURE RESEARCH
   6.2 MANAGERIAL IMPLICATIONS
REFERENCES
APPENDICES
   APPENDIX A - INTERVIEW GUIDE


1. Introduction

Remember looking for a product on an e-commerce website and later seeing that product, or a related suggestion from the same website, appear on your social media feed? This is an example of an online retailer collecting data about you (the user) and using it for marketing purposes. The online world and technological development have enabled companies to access consumers’ personal information through big data: organizations capture and use people’s information to better understand consumers, predict choices and behavior (Tirunillai and Tellis, 2014), and enhance marketing analytics (Gartner Report, 2012; Quix and Sloot, 2017). Marketing analytics form the basis for the personalization used in the targeted advertising shown to consumers online. As an outcome of such activities, many users report privacy concerns and feelings of privacy violation online, with two-thirds perceiving that they do not have full control over their information (European Commission, 2015; Tucker, 2012).

User privacy is defined as selective control of access to personal information about users online (Altman, 1975; Nill and Aalberts, 2014). Extant literature has confirmed that although users are highly concerned about their privacy, they are still willing to give their personal details to companies (Barnes, 2006; Brown, 2001; Spiekermann et al., 2001; Acquisti and Grossklags, 2005; Norberg et al., 2007). However much they feel violated by personalized advertising, social network users do not want to see irrelevant content on their feed (Sutanto et al., 2013). Previous research has termed this contradiction between privacy concerns and actual behavior on the internet the privacy paradox (Aguirre et al., 2015; Barnes, 2006; Norberg et al., 2007). Despite declaring that the provision of personal information online is at times disturbing, the majority of people still acknowledge that it is an essential part of using online services (European Commission, 2015; Norberg et al., 2007; Zafeiropoulou et al., 2013). Often, in order to access certain benefits, users are required to accept the requirement of sharing personal information (Zafeiropoulou et al., 2013). This raises a question: when exactly do consumers feel violated by targeted advertisements?


access the needed information (Zafeiropoulou et al., 2013). However, all of these contexts were tested by means of experimental research or surveys, which are voluntary and based on self-reported behavior that often differs from actual behavior. Such research does not ask consumers directly about the motivations for their behavior, which provides opportunities for further research. Some research has revealed that users do feel violated by targeted advertisements (Johnson, 2013; Tucker, 2012). However, the marketing practices that cause the violation feeling are still unknown. Johnson (2013) has looked at the privacy violation, or ‘harm’ as it is referred to in that paper, caused by targeted advertising, and found that overuse of advertising technologies makes consumers more prone to privacy violation and, as a result, to advertising avoidance, which is in line with other findings in the literature (Alreck and Settle, 2007; Godfrey et al., 2011; White et al., 2008). Moreover, extant findings suggest that attitudes toward, and violation feelings about, targeted advertising may differ between consumer groups (Johnson, 2013) and age groups (Barnes, 2006; Van den Broeck et al., 2015), and may change depending on the targeting strategy and technology used.

It is important to investigate and understand the types and motivations of user responses, users’ attitudes towards targeted advertisements, and when such advertisements appear to violate their privacy. This study addresses this issue and aims to reveal the targeted advertising components and practices that violate consumer privacy, and to understand the motivations behind consumers’ privacy violation feelings. Therefore, the research question of this study is: “When and why do consumers feel their privacy to be violated by targeted advertisements?”.


The current study reveals three themes that influence privacy violation feelings: data collection, ad display methods, and consumer background knowledge about targeting. These include data collection through private conversations, obvious retargeting, and high personalization. Extant literature and interview findings are used to explain three main reasons for privacy violation feelings: invasion into private life, awareness of data collection, and personal image threat. The relations between the cases in which consumers feel violated by targeted advertisements and the reasons for those feelings are used to develop a targeted advertising privacy violation framework, which serves as the theoretical contribution of this paper. This study extends previous research by identifying the exact marketing practices that violate the privacy of a major consumer group. The findings also contribute to marketing management practice by identifying targeted advertising elements that are violative and may therefore be ineffective, as suggested by extant literature (Acquisti, 2012; Athey and Gans, 2010; Johnson, 2013). Insights relevant to ethical considerations in marketing tactics can also be drawn from the results of this study. The research is structured as follows: in Section 2, existing literature on targeted advertising and (online) privacy is reviewed. Subsequently, the methodology and research process are explained in Section 3. Section 4 presents the findings of the interview analysis, followed by the discussion of the findings in Section 5. Lastly, Section 6 concludes and provides managerial implications and limitations of the current research.

2. Literature Review

2.1 Targeted Advertising

The sharp increase in the popularity of social media has created a new form of advertising for companies, which have started to actively use social networks for marketing purposes and consumer insight (Kozinets, 2002). Social media, especially Facebook, enables firms to tailor ad content to the specific needs or characteristics of end-users, allowing them to personalize it using shared consumer information (Tucker, 2014). This kind of personalized ad practice is called targeted advertising. Data shared online by users includes far more than just the date of birth,


Ho, 2006). In the online world, as in the example shown in the introduction, algorithms record users’ online browsing behavior, allowing advertisers to push advertisements related to previous searches. Online personalization, including targeted advertising, tends to increase consumer loyalty and the cross-selling of products and services (Alba et al., 1997; Chellappa and Sin, 2005). Tracking users’ personal information also helps to improve website usability and enables personalization of a website’s look, messages, and offers (Boerman et al., 2017; Buchanan et al., 2007). Most often, companies use information such as demographics and purchasing behavior for targeting. Tracking purchasing behavior across websites is an effective tool for predicting future behavior, and it is therefore used extensively in targeting (Heilman et al., 2003; Rossi et al., 1996). However, as the use of personalization in online advertising grows, so does the risk of privacy concerns on the consumers’ side. While targeted advertising on social networks might appeal to consumers in the sense of seeing relevant content on their feed, at times it is still frightening from a privacy perspective (Stone, 2010). Pronounced advertisements may evoke a feeling of being manipulated in consumers’ minds, especially when users start processing and analyzing the appearance of an ad (Campbell, 1995). White et al. (2008) note that finding advertising intrusive or invasive can negatively affect users’ favorability towards it.
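The retargeting loop described above — record browsing behavior, then serve ads tied to past searches — can be illustrated with a minimal, hypothetical matcher. The category names, ad inventory, and most-viewed-category rule below are assumptions for illustration only, not any platform's actual algorithm.

```python
from collections import Counter

def record_view(history, product_category):
    """Log a product-page view in this (hypothetical) user profile."""
    history[product_category] += 1

def pick_ad(history, inventory):
    """Serve an ad from the category the user browsed most, if one exists."""
    if not history:
        return None
    top_category, _ = history.most_common(1)[0]
    return inventory.get(top_category)

# A user views running shoes twice and headphones once ...
history = Counter()
for category in ["running_shoes", "running_shoes", "headphones"]:
    record_view(history, category)

# ... so a later ad request is matched to the dominant interest.
inventory = {"running_shoes": "Ad: 20% off trail runners"}
print(pick_ad(history, inventory))  # Ad: 20% off trail runners
```

Real systems combine many more signals (demographics, purchase history, cross-site tracking), but the core idea — past behavior drives ad selection — is the same.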


emails, have found that higher personalization is perceived negatively if users do not find a valid reason why their personal data is being used. Therefore, while consumers favor targeted advertising over generic advertising, there is a thin line between it being perceived as useful and as invasive. Cho and Cheon’s (2004) analysis has shown that negative experiences with advertisements, and the dissatisfaction, disruption, and irritation they cause, lead to both cognitive and behavioral ad avoidance. Therefore, it is important to determine when such feelings appear. To identify this, the literature on consumer perception of online privacy and privacy violation first has to be examined.

2.2 Online privacy

Privacy concerns refer to people’s beliefs and perceptions about their privacy, and how much they are (not) ready to share (Malhotra et al., 2004). As the popularity of the internet and social media increases, so do threats to user privacy (Mainier and O’Brien, 2010). Previous studies have identified that people experience discomfort when encountering targeted advertisements, as they realize that information about them was collected online without their awareness (Tucker, 2012). At the same time, actual marketplace behavior suggests that consumers are not particularly protective of their data and freely provide personal information that is subsequently used for targeting (Norberg et al., 2007).


self-disclosing for a purpose associated with more likable things. Simultaneously, they overestimate such risks in situations that they associate with things they dislike (Kehr et al., 2015; Wakefield, 2013). The discrepancy between user privacy attitudes and behavior has also been explained by the ritualization of social network site use. Debatin et al. (2009) discovered that social media has become a routine for most people, who therefore do not pay attention to how much information they disclose about themselves. There is thus a clear mismatch between consumers’ privacy attitudes and behavior, and previous studies have addressed possible reasons for it. Furthermore, the literature has found that the feeling of privacy violation does exist among consumers, despite their being generally tolerant towards targeted advertisements. Thus, it is essential to look at online privacy violation cases, the aspects that influence them, and the literature that has addressed them.

Privacy violation

The rise of social networks has resulted in consumers being pushed to share personal information. While social media websites offer settings to regulate user privacy, it is easier for people to share information online than to hide it (Schneider and Zimmer, 2006). The introduction of the Facebook News Feed in 2006 evoked the first information privacy discussions. At the same time, Hoadley et al. (2009) revealed that Facebook users are moderately concerned about who exactly can access such data and how it can be used. Dienlin and Trepte’s (2014) research showed that privacy concerns did not negatively influence people’s behavior on social media. Moreover, when given precise information on the possibility of changing their privacy settings on Facebook, only a minority of people changed at least one setting, even after being primed with legal or anecdotal stories (Nosko et al., 2012).


those users’ acceptance of more tailored personal advertisements. Perceived control of access to information also reduces consumers’ privacy concerns regarding the information shared on the Facebook News Feed (Hoadley et al., 2009). Baek (2014) showed that the discrepancy between privacy concerns and behavior is reduced by providing users with arguments for and against sharing personal information with companies or social media. The degree of trust in the company that collects data also affects consumers’ attitudes towards self-disclosure (Bleier and Eisenbeiss, 2015; Chellappa et al., 2005). At the situational level, there is an inverse correlation between privacy and trust, with higher trust compensating for lower privacy, and the other way around (Joinson et al., 2010). Crisis literature has also shown that the perceived seriousness of a violation influences consumer response (Bansal and Zahedi, 2015). For the online world, Bansal and Zahedi show that the use of consumer information shared internally on particular websites violates user trust, as consumers accuse the company itself of the information leak. Interestingly, they find that such a violation of trust has more impact on consumers than hacking.

Bucher (2017) found that Facebook users experience the algorithms used by the social network; that is, they understand that the network uses technology to tailor the appearance of their feed. Some consumers see targeting algorithms as a further violation of their privacy (Johnson, 2013; Rader, 2014). Lack of knowledge seems to influence people’s perceptions of their capability to protect their data online (Boerman, 2018). Boerman’s (2018) study showed that people who are more aware of the various ways to protect their data engage in privacy protection behavior more, and suggests that the amount of such knowledge depends on the consumer’s level of education. Accordingly, people with lower education feel more of a privacy threat (Boerman, 2018). This results in some users engaging in strategies to avoid targeting, such as falsifying personal data, especially when they feel in need of more control over their information (Punj, 2017).


used incongruently with the company’s promise and customer expectations, privacy issues arise (Moore et al., 2015). Relationship marketing focused on creating more intimacy has actually been perceived as intrusive by customers (O’Malley et al., 1997). Recent technological developments have created opportunities for companies to collect and utilize consumers’ private information both online (Kachhi and Link, 2009) and offline (Lekakos, 2009). Godfrey et al. (2011) have also looked at the integration of technology in marketing for data collection, and found that online marketing communication has limits that, when exceeded, create negative reactions among consumers. Extant literature has pointed out that violating consumer expectations of advertising and marketing communication, for example by tracking online behavior, may result in negative responses (Alreck and Settle, 2007; White et al., 2008).

The amount of information that users share online is influenced by social norms. For example, people view profiles that are available to “only friends” favorably. Users tend to customize their profile pages to match the look of other profiles and consider social aspects when forming certain impressions of themselves (Strater and Lipford, 2008). This implies a difference between the information users are willing and unwilling to share online. Sivanathan and Pettit (2010) describe how people want to signal a particular image of themselves to others. Belk (2013) described digital consumption through the concept of the “extended self”, whose arguments align with the literature on self-protection through consumption (Sivanathan and Pettit, 2010). These authors argue that consumers are driven by a need for self-integrity. This applies to digital consumption because people perceive their possessions, including online profiles, as parts of themselves. Therefore, when the image other people receive of a person differs from the intended one, the person feels threatened.


online behavior (Madden & Smith, 2010; Steijn, 2014; Van den Broeck et al., 2015). Their concrete attitudes towards their privacy on social media nevertheless remain blurred. Johnson (2013) has looked at consumer attitudes towards targeted advertising and identified that consumers most likely want to receive advertisements different from the ones they are receiving. The violation feeling, or harm as it is described in Johnson’s (2013) paper, may occur in different cases depending on the consumer group and the technology used for targeting.

Consumers often perceive certain marketing techniques as annoying. Moore et al. (2015) have determined that aggressiveness and stalking are characteristics of annoying marketing. The literature has also found that such aggressive tactics negatively affect companies no matter how well known their brand is (Campbell et al., 2003). The argument for privacy issues in Johnson’s (2013) analysis is that consumers do not receive their preferred advertisements. According to the literature above, ‘preferred’ advertisements may relate to the type of ad displayed on a consumer’s screen, or to the method of collecting user information. Aspects that may influence consumers’ attitudes towards the advertisements they receive include the need for self-integrity, trust in companies, the level of education, and the technology used for information collection and targeting. The current research takes a step towards revealing consumer advertising preferences, or rather the targeting practices that are clearly unwelcome to the consumer group aged 19-25 because they violate their privacy. This paper contributes to the path towards improved targeted advertising, which would eliminate the privacy violation feeling and therefore ad avoidance by consumers, as proposed by extant literature (Acquisti, 2012; Athey and Gans, 2010; Johnson, 2013).


consumers feel their privacy to be violated by targeted advertisements?”. The phenomenological approach of the current research uses consumers’ language as the main instrument for deriving the meaning of users’ conscious experiences with targeted advertising (Goulding, 2005). The in-depth interview findings reveal privacy attitudes toward encountered targeted advertisements. The interview analysis is used to determine which methods of data collection and ad display consumers find violative, and looks into the reasons behind such feelings by investigating consumers’ motivations. Because privacy violation cases have not been determined by previous research, they constitute the findings of this paper. As consumer motivations and the aspects influencing the violation feeling serve as underlying explanations for the research findings, they are described in the discussion section of this paper.

3. Methodology


for violation feelings. User motivations, whether expressed when asked directly why they feel violated or offered spontaneously during a conversation, were used to identify common justifications and cluster them into three aspects that explain why the violation feeling occurs.

The empirical part of this research proceeded as follows: after reviewing the relevant literature and determining current issues in the field of targeted advertising, a research question was formed. Semi-structured interviews were developed (Appendix A) and used as the main method of data collection. The conducted interviews were then transcribed, coded, and analyzed.

Participants


Table 1 – Participants of the study

Procedure


The audio recordings of the interviews were transcribed and coded to identify concepts for the analysis of patterns and the relationships between them. The software ATLAS.ti was used for the analysis, highlighting interview quotes that serve as exemplars of the findings. During the analysis, these statements were used to construct meanings of experiences. Recurrent themes across interviews were clustered into sub-dimensions as suggested by Goulding (2005). The revealed themes are described in the findings and discussion sections of this research to answer the two questions this research addresses: “when” and “why” people feel violated by targeted advertising.
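As a rough illustration of the clustering step, codes that recur across interviews can be grouped into candidate themes. The code labels and interview IDs below are illustrative stand-ins, not the study's actual ATLAS.ti codebook.

```python
from collections import defaultdict

# Hypothetical (code, interview) pairs produced during open coding.
coded_quotes = [
    ("private_messages", "FI1"),
    ("microphone_listening", "FI2"),
    ("private_messages", "FI3"),
    ("microphone_listening", "MI6"),
    ("discount_benefit", "MI1"),
]

# Group quotes by code across all interviews.
by_code = defaultdict(list)
for code, interview in coded_quotes:
    by_code[code].append(interview)

# Codes appearing in more than one interview become candidate (sub-)themes.
recurrent = sorted(code for code, hits in by_code.items() if len(hits) > 1)
print(recurrent)  # ['microphone_listening', 'private_messages']
```

In the actual study this grouping was done qualitatively in ATLAS.ti rather than programmatically; the sketch only shows the recurrence logic behind theme formation.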

The findings section of this paper covers the particular advertising practices that create privacy violation feelings among consumers. This section presents interview quotes and the concrete cases that were clustered during the interview analysis. Two main themes are revealed: data collection and ad display methods. Additionally, the interview analysis revealed a third theme, which suggests that the feeling of violation can be intensified or decreased by the consumer’s knowledge of targeting practices. It is also important to look into why consumers feel violated. As highlighted by Maxwell (2012), underlying reasons in qualitative research concern the mental and physical causal processes that determine particular outcomes, rather than just identifying a relationship. The discussion section therefore focuses on the underlying processes and motives that result in a privacy violation feeling. The reasons consumers gave during interviews for why they feel violated, alongside extant literature, demonstrate why there is a relationship between targeted advertising practices and privacy violation feelings.

4. Findings


share online, with some respondents reporting that they always keep this thought in mind when they are online. This supports previous research arguing that users aged 19-25 are more pragmatic about the type of information they share online (Madden & Smith, 2010; Steijn, 2014; Van den Broeck et al., 2015). However, when asked about the kind of information they would never share online, the answers were mainly connected to types of information that would damage their safety or their personal or professional image. While answers to such questions do not seem directly related to the explanation of the privacy violation feeling, they will later help readers understand the reasons why such feelings appear. Three themes related to targeted advertising that influence the privacy violation feeling among consumers have been revealed and are presented in this chapter.

Theme 1: Data Collection

Private conversations

Respondents defined privacy as freedom of choice and control over whether and what kind of information to share. While being aware that information stored online can be accessed by anyone, consumers still expect their private online conversations to remain private. Nevertheless, users believe that their private messages on social media are accessed by third parties. During the interviews, participants mentioned algorithms reading their private conversations on social media for better targeted advertising, the realization of which evokes a privacy violation feeling.

FI1: “I have this a lot of times when I talk to someone over WhatsApp or Facebook [...] even though technically we know how easy it is for, for example, Facebook Group, to target things, it still is very creepy. Because you still have that feeling that whenever you text someone or send a private message, that it stays only between two of you, but it is not really the case anymore.”


disturbing if that is not the case. Interviewees who stated that targeted advertisements can be useful when looking for specific products still disapproved of algorithms that had recorded this information through private messages:

FI3: “I text more about sports with friends since recently, [...] but I did not put it on Instagram, and then suddenly Instagram showed me all the sports ads. That is creepy.”

Additionally, sending public content, such as posts made by public profiles on social media, in a private message to another person is also considered information shared only between the participants of the conversation. Although the algorithms involved might technically not be reading private conversations, but rather collecting the sharing data of a public post, consumers are usually not aware of the exact techniques the algorithms use. Therefore, they still feel violated by advertisements that use privately shared information. This means that such data collection methods have the same influence on consumers’ feelings as those that use keywords from a private conversation.
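The distinction drawn here — inferring interests from the metadata of a share event rather than from the message text itself — can be sketched as follows. The event fields and profile structure are hypothetical, chosen only to make the contrast concrete.

```python
def log_share_event(profile, event):
    """Record an interest from share-event metadata (which public post was
    forwarded, over which channel) without ever parsing the message text."""
    profile.setdefault("interests", set()).add(event["post_topic"])

profile = {}
# A user forwards a public sports post to a friend in a private message.
log_share_event(profile, {"post_topic": "sports", "channel": "private_message"})
print(profile["interests"])  # {'sports'}
```

From the consumer's point of view the resulting ad looks identical to one derived from reading the conversation, which is why both methods evoke the same violation feeling.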

However, intrusion into private conversations is not limited to private messages. The majority of interviewees expressed concerns about the privacy of their offline conversations as well. Respondents are aware of the microphones on their devices listening to their voices and extracting keywords for targeted advertising. Privacy concerns related to microphone-enabled devices are only growing with time, as more of these devices get incorporated into our daily lives (Gray, 2016).

FI2: “Everybody got this experience when you were just talking about something [...] and your phone was somewhere around, and then the next day you would get an advertisement for the same thing that you were talking about. That is quite a scary experience because you may know that your phone is recorded.”


microphone on their devices is turned on at all times (Gray, 2016). Gibler et al.’s (2012) research on leaks in the Android operating system confirmed that data is being recorded through the microphone on mobile devices. Facebook has also confirmed hiring contractors to transcribe private user conversations, which resulted in the company’s share price dropping by 1.3% (Frier, 2019). Consumers who expressed concerns about this type of data collection called such practices “very intrusive”, “violative”, “shocking”, “creepy”, and “scary”. Interviewees expressed concerns about their freedom of speech and behavior. Moreover, some participants noted that listening to all conversations is not profitable for companies: if algorithms detect keywords about a product or service that consumers are not actually interested in and start to advertise it, there will not be any positive effect on sales.

MI6: “The ads that appear because I talked about the product, it is scary, because most of the time you do not search for it online, so it is mostly the microphone in your phone that listens.”

FI2: “[...] every time when I do not type but just speak to someone and that same ad comes up on the phone, I feel like being recorded and that I am not safe.”

FI1: “[...] even if you do not think about how big the invasion is all the time, there is a weird shocking moment when I talk about ordering a pizza and two minutes later I have an advertisement.”

MI7: “It is a waste of money for the company because I am not interested in [...]. The way they got the information is creepy because I never searched for it, I remember that. Then, they would have been listening to my conversations or they would have seen that my phone was near [...] phone and that we were close to each other, and they decided to advertise it to me as well.”


Even when consumers acknowledged benefits of the advertisements they encountered, for example in the form of a discount, this did not decrease the violation feeling for them.

Theme 2: Ad Display

Obvious retargeting

Advertising based on past browsing behavior was indicated as the targeting strategy most familiar to consumers. Interviewees admitted that this type of advertising has become so common that it no longer provokes any reaction, and they habitually “scroll past” it. At the same time, participants reported noticing such advertisements the most. The time between browsing and the ad’s appearance was mentioned as one condition that creates a feeling of privacy violation. This applies both to advertising that appears too soon and to advertising that appears days after the web search. Consumers consider advertisements that appear too soon after a single search for a product or service intrusive, as a single search often does not imply an intention to buy. Kumar and Tomkins (2010) have identified that the average number of page views in a search session is 6.3. A single page view should therefore not be considered a search session (Kumar and Tomkins, 2010), and retargeting based on such behavior intrudes on consumer privacy.
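One way an ad server could respect this finding is a minimum-views threshold before a session triggers retargeting. The threshold value below is an illustrative assumption, not a figure taken from the thesis or from Kumar and Tomkins (2010); their 6.3-view average only motivates treating a single view as weak evidence of intent.

```python
def eligible_for_retargeting(session_page_views, min_views=2):
    """Treat a browsing session as showing purchase intent only when it
    contains more than a single page view (threshold is an assumption)."""
    return session_page_views >= min_views

print(eligible_for_retargeting(1))  # False: one glance should not trigger ads
print(eligible_for_retargeting(6))  # True: close to the 6.3-view session average
```

A platform could likewise delay or suppress ads for stale sessions, addressing the complementary complaint about retargeting that appears days after a single search.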

Interviewees explained that becoming aware that their personal data is being recorded and stored for longer periods makes them feel more violated when retargeted advertisements appear later in time.

MI1: “If I search for something on the web and then get the ad for the exact same product a week later, that would make me feel uneasy because then it would keep that information stored and use it when they thought would be the appropriate time.”


Consumer loyalty and intention to buy closely relate to levels of trust (Flavian and Guinaliu, 2006). Flavian and Guinaliu (2006) find that trust in the online world is highly influenced by perceived security with respect to the way companies handle private consumer data. Perceived security can be achieved by companies setting up clear privacy policies on their websites (Flavian and Guinaliu, 2006). Even though consumers rarely read the privacy terms on websites, they still expect them to be there, and instantly have higher trust in companies or websites that provide privacy statements (Pan and Zinkhan, 2006). However, multichannel targeting negatively affects the trust consumers associate with these terms and policies. During the interviews, participants were asked to open their Facebook feed and talk about the advertising they saw on their screen. They were also asked to recall what action could have led to their receiving a particular ad they encountered. When advertisements were related to products that respondents had searched for on platforms other than web search, e.g. the App Store or Google Play, it negatively affected their perception of safety.

MI1: “It makes me less trusting of the online environment. Even though many companies say that your privacy is respected, when targeted advertising goes across multiple platforms then it makes me very disillusioned of what they promise.”

Corresponding to advertising that appears too soon, multichannel retargeting of single-search products is described by consumers as “unnecessary” and “violative”. Users with more knowledge about marketing and the process behind retargeting campaigns are more accepting of such campaigns, but negatively characterize targeting that takes place too quickly.

MI3: “It feels violated when I get an ad immediately after I Google one word for one time. For example, I have been interested in what are current prices to fly to a certain country, and then after one search, you get so many targeted ads. So that is when I think: “calm down a little”.”


Multichannel advertisements for other types of websites were described as intrusive and annoying.

High personalization

Personalized advertising involves marketing analytics to tailor targeted advertisements to the specific needs, characteristics, or geolocation of a user (Tucker, 2014). While personalized advertising has proved more effective than general retargeting practices, there are boundaries which, if crossed, increase feelings of privacy violation. This was reported to be the case with location-based advertising, defined as advertising that depends on and is activated by positional information about the user, usually through mobile devices (Hirsch et al., 2006). For many, location-based advertising focused on a general area, e.g. a city, is familiar and useful. However, the more specific the information about a user’s location it employs, the more violated consumers feel. Interviewees do not favor apps and advertising providers that know their exact location, or even the geographic area of the city they are in. Only a couple of respondents were aware that most location advertisements are based on the catchment area of the business. Even so, they prefer such advertising to be subtle and unobvious.

FI4: “[...] they already do some location-targeted marketing, but if they would also show me my exact location that would be creepy. I know they use your location, but they do not put it in your face that they do so.”

Consumers reported noticing personalized advertisements the most, and at the same time experienced privacy violation through them. Interviewees mentioned that high personalization makes them feel restricted in their online behavior, and this feeling intensifies the more personalized an ad becomes. Combining numerous pieces of personal information in targeting created a more negative reaction. Consumers reasoned that high personalization makes them aware of the algorithms behind it. As found by Johnson (2013), becoming aware of the algorithms used in targeted advertising creates a feeling of privacy violation among consumers.


FI4: “If you are confronted with too much information, like your name, birthday, and so on, I think that would be kind of creepy.”

For this reason, it is important to touch upon the use of sensitive information for personalized advertisements and the motives behind it. Using sensitive data such as physical and mental health, sexuality, dating life, political ideologies, or religious beliefs in personalized advertisements was described as violative by users. These matters are usually out of people’s control, which is why consumers do not favor advertisements that make them feel that companies are trying to profit from delicate areas of their lives.

MI7: “[...] I can see how the fact that companies are using my internet search is a pathway to worse practices. I do not have anything to hide, but let’s say, somebody does. There are topics that no one would like to see on their feed. For example, targeted advertising on sexual topics.”

MI1: “[...] if I was to gain an injury, I would not like to get targeted ads for pain relief or any other type of medicine. Information that […] the average company would not be able to guess even with the advertisement.”

Algorithms that use online behavior across platforms, apps, and sites to suggest potentially relevant products constitute a personalization method that strongly influences the violation feeling. Interviewees provided examples of algorithms that do not use browsing information for simple retargeting, but rather to try to guess a consumer’s underlying needs. The best-known example is the case in which an algorithm inferred a user’s pregnancy and targeted corresponding products before her family was aware of the news (Hill, 2012).


MI6: “I would be careful with sexual products. That is not appropriate for a broader variety of people. So, things where certain kinds of people, or groups of people, could think negatively of you. Like racism or other things that you do not want to be associated with. For example, right-wing parties, because that is not something that represents me. The topics that are more sensitive to people.”

Just as people often avoid bringing up controversial or sensitive topics in private conversations to escape uncomfortable discussions, consumers would prefer not to receive targeted advertising when someone else can see their screen. Hence, such sensitive topics were mentioned the most when participants were asked about the types of advertisements they would not want other people to see, for example when someone walks by in a café or library. People prefer to conform to social norms (Cialdini and Goldstein, 2004). Because others who may notice advertisements on a user’s screen do not know the user’s actual beliefs, an ad inconsistent with what is considered “normal”, or accepted by the majority, creates a violation feeling.

Theme 3: Consumer marketing knowledge


FI4: “[...] before I started working, I did not really know anything about [targeted advertising] [...] Back in the days, I would accept all the cookies, I would not even look at it. And due to my work, I got more aware [...] that I should be more careful.”

These findings support the conclusion of Boerman et al. (2018) that the amount of consumer knowledge influences attitudes towards online data protection, suggesting that people with less knowledge are more vulnerable to privacy threats. People with more knowledge of targeting practices and technology are more likely to engage in privacy protection (Boerman et al., 2018). Thus, the more extensive the knowledge consumers have about targeted advertising practices, the less violated they feel. It is important to note that consumer marketing knowledge does not eliminate the violation feeling, but only diminishes it. Therefore, in addition to the cases described in the previous two themes of this section, the feeling of violation occurs in a stronger form when background marketing knowledge is limited.

5. Discussion


Figure 1 - Privacy violation feelings of consumers aged 19-25 caused by targeted advertising

Invasion into private life


Trust has been identified as one of the significant influences on consumer self-disclosure to companies (Bleier and Eisenbeiss, 2015; Chellappa and Sin, 2005). Realizing that they have not been informed about who exactly can read or listen to private conversations negatively affects consumer trust in companies. Lower trust results in less information sharing by consumers online, and therefore fewer possibilities for better targeting (Joinson et al., 2010). Interviewees reasoned that the use of highly specific personal information, or of data collected from private messaging, invades their private lives. Using consumers’ private data for targeting creates a feeling that companies want to monetize people and their private information. Extant research, as well as the current study’s respondents, indicates that a feeling of control over their own data makes consumers more willing to disclose it and more favorable towards targeted advertising (Goldfarb and Tucker, 2011; Tucker, 2014). Therefore, by invading people’s private conversations, algorithms jeopardize consumer interest in advertising. The feeling of having no control results from the difficulty of opting out of unnecessary cookies, because websites make it harder to decline them than to agree to them (Schneider and Zimmer, 2006). Likewise, users are not able to opt out of being monitored by algorithms in messenger apps, and most certainly not in real life. The unfulfilled need for control over personal information may lead consumers to falsify their data or engage in other advertising avoidance strategies, which negatively affects marketing practices (Punj, 2017).

Awareness of data collection


Rader’s (2014) study of Facebook and Google users has also found that awareness of the fact that websites collect data about users is not associated with concern about unwanted access to their data. The data collection methods people are most aware of are likes or links that they click on (Rader, 2014). That research characterizes unwanted data collection as information gathered across websites, or information such as political party preference, which corresponds with the findings of the current paper. Realizing that companies are able to monitor and record their behavior heightens consumers’ privacy concerns (Rapp et al., 2009). This suggests that finding a middle point in the way an ad appears on users’ screens could make targeted advertising less intrusive, and therefore more acceptable.

Personal image threat

Consumer psychology explains that people tend to conform to social norms and are likely to behave in ways that match what is generally accepted by the majority (Cialdini and Goldstein, 2004). People’s offline and online behavior is motivated by the desire to manage the impressions they make (Strater and Lipford, 2008). Belk (2013) has related digital consumption to the concept of the “extended self”, which implies that people perceive their material possessions as parts of themselves (Belk, 1988). Therefore, when consumers fail to create the desired impression or personal image, they feel threatened. When it comes to targeted advertising, the strategies consumers consider violating involve the use of past browsing behavior or information about personal preferences that might threaten their desired personal image. Such categories include sexual products, job postings, beliefs, political views, and other controversial topics.


6. Conclusion

This study explores how consumer perceptions of their privacy are influenced by targeted advertising; in particular, when and why targeted advertising violates consumer privacy. Previous research found that consumers perceive targeted advertising as more violative when advertising technologies are overused (Alreck and Settle, 2007; Godfrey et al., 2011; Johnson, 2013; White et al., 2008). Additionally, these attitudes may differ between consumer groups (Johnson, 2013) or age groups (Barnes, 2006; Van den Broeck et al., 2015). This study extends the research by looking into the conscious experiences of consumers through semi-structured interviews with users aged 19-25. The interview findings reveal data collection and ad display practices that users find violative towards their privacy. Consumer marketing knowledge affects the strength of the privacy violation feeling. A framework has been developed to connect targeted advertising practices with the underlying reasons for violation feelings, providing a more structured and clear visualization of the issue.


with less marketing knowledge. Nevertheless, it is important to note that higher knowledge does not eliminate the violation feeling, but only weakens it.

The current study focused on interviews with people aged 19-25, a consumer group with a higher chance of being exposed to targeted advertising. Hoadley et al. (2009), studying people of the same age group, found that giving users more perceived control over the public availability of their personal data reduced privacy concerns. This research finds that consumers aged 19-25 are highly concerned about who exactly sees their data. Through semi-structured in-depth interviews, the current research revealed that consumer knowledge of online data collection practices is extensive. Even with perceived control, users are aware that any information they share online can be accessed by third parties, which does not reduce privacy concerns, contradicting the findings of Hoadley et al. (2009).

This study contributes to previous research in the following ways. First, it determines when consumers feel privacy violation with targeted advertising, identifying concrete data collection and ad display methods that need to be reevaluated. Second, the underlying motives behind violation feelings are identified and linked to the targeted advertising methods that consumers find violative. Based on these relations, a framework was developed to capture the relationship between targeting strategies and the consumer feeling of violation. This framework can be used and validated in future research.

6.1 Limitations and future research


Finally, extant research implies that privacy attitudes differ between consumer groups (Johnson, 2013) and age groups (Barnes, 2006; Van den Broeck et al., 2015). This study focused on only one group and is therefore unable to compare it to other consumer groups or to conclude how exactly privacy attitudes differ. Further research could explore this matter by conducting the study with other consumer groups. It could also examine whether such attitudes vary depending on the device or the social media platform or website where consumers encounter targeted advertising.

6.2 Managerial implications

Although this study focused on the experiences of a limited number of consumer group representatives, some managerial implications were drawn during the analysis. The current paper highlights the importance of research and testing in targeted advertising campaigns, which implies checking whether an ad covers topics that could be sensitive for consumers. For multichannel advertising, the findings suggest considering more subtle retargeting practices that distort the daily online experience less; subtle targeted advertising was described by consumers as less violating and therefore more favorable. For retargeting, this research observes that advertising based on past browsing behavior is received better when it serves as a reminder of products that consumers were searching for previously and repeatedly. Retargeting algorithms based on past browsing behavior could therefore focus on web pages or products that users visited repeatedly, in line with the average search session of 6.3 page views reported by Kumar and Tomkins (2010). Moreover, advertisers might need to reconsider the way consumers are targeted on topics such as sexuality, job switching, religious beliefs, political views, and mental and physical health. One suggestion could be to make data collection on these topics less obvious, or to activate retargeting only in cases of repeated search.


that consumer awareness does not eliminate the violation feeling, but only mitigates it. Therefore, an implication would be to assume the least that consumers can know about advertising and make managerial decisions from there; that is, using no more than one piece of personal information, such as age, gender, or location, in personalized advertising, as such information was more violative to consumers with less marketing knowledge.

A way to build consumer trust in companies while using their personal information for targeting is to give them the ability to manage their data. In the case of private conversations, companies could build more trust by allowing users to manage preferences, or by providing clear explanations of what kind of data from private conversations is used, e.g. informing users that algorithms detect keywords rather than reading or listening to whole conversations. Although allowing users to manage access to their data may leave companies with less information for product improvement, the resulting long-term customer loyalty will positively influence performance. Additionally, when a company is transparent in its operations and builds trust among consumers, customers might be willing to share more information than when they are unaware of how it is used. Gray (2016) has provided the example of Apple practices that disassociate an audio file from its identifier (device ID, etc.) over time. In this way users keep their privacy, while companies still obtain consumer information for managerial decisions. When implementing such a practice, companies could also develop statements and terms on data usage conditions that consumers can access.

The use of algorithms and user personal information in targeted advertising has benefits for both companies and consumers. However, in order to attain consumer willingness to self-disclose and minimize the violation feeling, it is important to recognize consumers’ motivations for privacy concerns and create appropriate solutions. Becoming more transparent about the use of consumer information, developing more subtle, less “in-your-face” advertising, and, most importantly, matching companies’ actions to their promises could help build consumer trust and willingness to share, and decrease the privacy violation feeling caused by targeted advertisements.


References:

Acquisti, A. (2004) Privacy in electronic commerce and the economics of immediate gratification. Proceedings of the 5th ACM conference on electronic commerce, May 17-20, New York, USA; 2004.

Acquisti, A. (2012). Inducing customers to try new goods. Review of Industrial Organization, 44(2), 131-146.

Acquisti, A., Gross, R. (2006, June). Imagined communities: Awareness, information sharing, and privacy on the Facebook. Paper presented at the 6th Workshop on Privacy Enhancing Technologies, Cambridge, UK. Retrieved from: http://people.cs.pitt.edu/~chang/265/proj10/zim/imaginedcom.pdf

Aguirre, E., Mahr, D., Grewal, D., Ruyter, K. D., & Wetzels, M. (2015). Unraveling the personalization paradox: The effect of information collection and trust-building strategies on online advertisement effectiveness. Journal of Retailing, 91, 34–59.

Alba, J., Lynch, J., Weitz, B., Janiszewski, C., Lutz, R., Sawyer, A., Wood, S. (1997) Interactive Home Shopping: Consumer, Retailer, and Manufacturer Incentives to Participate in Electronic Marketplaces, Journal of Marketing, 61(3), 38-53.

Alhabash, S., & Ma, M. (2017). A Tale of Four Platforms: Motivations and Uses of Facebook, Twitter, Instagram, and Snapchat Among College Students? Social Media + Society. https://doi.org/10.1177/2056305117691544

Alreck, P. L., & Settle, R. B. (2007). Consumer reactions to online behavioural tracking and targeting. Journal of Database Marketing & Customer Strategy Management, 15(1), 11-23.

Altman, I. (1975). The environment and social behavior: Privacy, personal space, territory, and crowding. Monterey, CA: Brooks/Cole.

Ashworth, L., & Free, C. (2006). Marketing dataveillance and digital privacy: Using theories of justice to understand consumers’ online privacy concerns. Journal of Business Ethics, 67(2), 107-123.

Athey, S., & Gans, J. S. (2010). The impact of targeting technology on advertising markets and media competition. American Economic Review, 100(2), 608-13.

Baek, Y. M. (2014) Solving the privacy paradox: A counter-argument experimental approach. Computers in Human Behavior, 38, 33-42.

Baek, Y. M., Kim, E., Bae, Y. (2014) My privacy is okay, but theirs is endangered: Why comparative optimism matters in online privacy concerns. Computers in Human Behavior, 31, 48-56.

Barnes, S. B. (2006). A privacy paradox: Social networking in the United States. First Monday, 11(9). Retrieved from http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1394/1312

Barth, S., de Jong, M. (2017). The privacy paradox – Investigating discrepancies between expressed privacy concerns and actual online behavior – A systematic literature review. Telematics and Informatics, 34(7), 1038–1058.

Belk, R. W. (1988). Possessions and the Extended Self, Journal of Consumer Research, 15, 139–68.

Belk, R. W. (2013). Extended self in a digital world. Journal of consumer research, 40(3), 477-500.

Bleier, A., & Eisenbeiss, M. (2015). The importance of trust for personalized online advertising. Journal of Retailing, 91, 390–409.


Boerman, S. C., Kruikemeier, S., Zuiderveen Borgesius, F. J. (2018). Exploring motivations for online privacy protection behavior: Insights from panel data. Communication Research, 1-25.

Boyd, D. M., Ellison, N. B. (2008) Social Network Sites: Definition, History, and Scholarship. Journal of Computer-Mediated Communication, 13, 210-230.

Brandimarte, L., Acquisti, A., Loewenstein, G. (2012). Misplaced confidences: Privacy and the control paradox. Social Psychological and Personality Science, 4(3), 340-347.

Brown, B. (2001). Studying the internet experience. HP Laboratories Technical Report (HPL-2001-49). Retrieved from: http://www.hpl.hp.com/techreports/2001/HPL-2001-49.pdf

Buchanan, T., Paine, C., Joinson, A. N., & Reips, U. (2007). Development of measures of online privacy concern and protection for use on the internet. Journal of the Association for Information Science and Technology, 58, 157-165.

Bucher, T. (2017) The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30-44.

Campbell, M. C., & Keller, K. L. (2003). Brand familiarity and advertising repetition effects. Journal of consumer research, 30(2), 292-304.

Chellappa, R. K., & Sin, R. G. (2005). Personalization versus privacy: An empirical examination of the online consumer’s dilemma. Information Technology and Management, 6(2-3), 181–202.

Cho, C. H., & Cheon, H. J. (2004). Why do people avoid advertising on the internet? Journal of Advertising, 33(4), 89-97.

Cho, H., Lee, J. S., Chung, S. (2010). Optimistic bias about online privacy risks: Testing the moderating effects of perceived controllability and prior experience. Computers in Human Behavior.

Cialdini, R. B., & Goldstein, N. J. (2004). Social influence: Compliance and conformity. Annual Review of Psychology, 55, 591-621.

Corbin, J., & Strauss, A. (2014). Basics of qualitative research: Techniques and procedures for developing grounded theory. Sage Publications.

Debatin, B., Lovejoy, J. P., Horn, A. K., Hughes, B. N. (2009). Facebook and online privacy: Attitudes, behaviors, and unintended consequences. Journal of Computer-Mediated Communication, 15, 83-108.

Duggan, M. (2015a). The demographics of social media users. Pew Research Center. Retrieved from http://www.pewinternet.org/2015/08/19/the-demographics-of-social-media-users/

Ellison, N. B., Steinfield, C., Lampe, C. (2007). The benefits of Facebook “friends”: Social capital and college students' use of online social network sites. Journal of Computer-Mediated Communication, 12, 1143-1168.

European Commission. (2015). Data protection. Brussels, Belgium: European Commission. Retrieved from https://ec.europa.eu/commfrontoffice/publicopinion/archives/ebs/ebs_431_en.pdf

Eurostat. (2020). Internet users who bought or ordered goods or services for private use in the previous 12 months, by age group, EU-28, 2009-2019. E-commerce statistics for individuals. Retrieved from: https://ec.europa.eu/eurostat/statistics-explained/index.php/E-commerce_statistics_for_individuals

Flavián, C., & Guinalíu, M. (2006). Consumer trust, perceived security and privacy policy. Industrial Management & Data Systems.


Frier, S. (2019, August). Facebook Paid Contractors to Transcribe Users’ Audio Chats. Bloomberg Report. Retrieved from: https://www.bloomberg.com/news/articles/2019-08-13/facebook-paid-hundreds-of-contractors-to-transcribe-users-audio

Gabisch, J. A., Milne, G. R. (2014). The impact of compensation on information ownership and privacy control. Journal of Consumer Marketing, 31, 13–26.

Gibler, C., Crussell, J., Erickson, J., & Chen, H. (2012, June). AndroidLeaks: Automatically detecting potential privacy leaks in Android applications on a large scale. In International Conference on Trust and Trustworthy Computing (pp. 291-307). Springer, Berlin, Heidelberg.

Gilovich, T., Griffin, D., Kahneman, D. (2002) Heuristics and biases: the psychology of intuitive judgment. Cambridge University Press.

Godfrey, A., Seiders, K., & Voss, G. B. (2011). Enough is enough! The fine line in executing multichannel relational communication. Journal of Marketing, 75(4), 94-109.

Goldfarb, A., & Tucker, C. (2011). Online display advertising: Targeting and obtrusiveness. Marketing Science, 30, 389–404.

Goulding, C. (2005). Grounded theory, ethnography and phenomenology: A comparative analysis of three qualitative strategies for marketing research. European Journal of Marketing, 39(3/4), 294–308.

Gray, S. (2016, April). Always on: Privacy implications of microphone-enabled devices. Future of Privacy Forum.

Heckathorn, D. D. (2011). Comment: Snowball versus respondent-driven sampling. Sociological Methodology, 41(1), 355-366.

Heilman, C. M., Kaefer, F., Ramenofsky, S. D. (2003). Determining the appropriate amount of data for classifying consumers for direct marketing purposes. Journal of Interactive Marketing.

Hill, K. (2012). How Target figured out a teen girl was pregnant before her father did. Forbes. Retrieved from: https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/ [Accessed 20 April 2020].

Hirsch, F., Kemp, J., Ilkka, J. (2006). Mobile web services: Architecture and implementation. John Wiley & Sons.

Hoadley, C. M., Xu, H., Lee, J. J., Rosson, M. B. (2009). Privacy as information access and illusory control: The case of the Facebook News Feed privacy outcry. Electronic Commerce Research and Applications. doi:10.1016/j.elerap.2009.05.001

Johnson, J. P. (2013). Targeted advertising and advertising avoidance. The RAND Journal of Economics, 44(1), 128–144.

Joinson, A., Reips, U., Buchanan, T., Schofield, C. (2010). Privacy, trust, and self-disclosure online. Human-Computer Interaction, 25(1), 1-24.

Kachhi, D., & Link, M. W. (2009). Too much information: Does the internet dig too deep?. Journal of Advertising Research, 49(1), 74-81.

Kehr, F., Kowatsch, T., Wentzel, D., Fleisch, E. (2015). Blissfully ignorant: The effects of general privacy concerns, general institutional trust, and affect in the privacy calculus. Information Systems Journal, 25(6), 607-635.

Kokolakis, S. (2017) “Privacy attitudes and privacy behaviour: A review of current research on the privacy paradox phenomenon”, Computers & Security, 64, 122–134.

Kozinets, R. V. (2002). The field behind the screen: Using netnography for marketing research in online communities. Journal of Marketing Research, 39(1), 61—72.


Lee, H., Park, H., Kim, J. (2013) Why do people share their context information on Social Network Services? A qualitative study and an experimental study on users' behavior of balancing perceived benefit and risk. International Journal of Human-Computer Studies, 71(9), 862-877.

Lekakos, G. (2009). It's personal: Extracting lifestyle indicators in digital television advertising. Journal of Advertising Research, 49(4), 404-418.

Madden, M., & Smith, A. (2010). Reputation management and social media. Retrieved from http://www.pewinternet.org/2010/05/26/reputation-management-and-social-media/

Mainier, M. J., O’Brien, M. (2010). Online social networks and the privacy paradox: a research framework. Issues in Information Systems, 11(1), 513-517.

Malhotra, N. K., Kim, S. S., & Agarwal, J. (2004). Internet users’ information privacy concerns: The construct, the scale, and a causal model. Information Systems Research, 15, 336–355.

Maxwell, J. A. (2004). Causal explanation, qualitative research, and scientific inquiry in education. Educational Researcher, 33(2), 3-11.

Maxwell, J. A. (2012). The importance of qualitative research for causal explanation in education. Qualitative Inquiry, 18(8), 655-661.

Moore, R. S., Moore, M. L., Shanahan, K. J., & Mack, B. (2015). Creepy marketing: Three dimensions of perceived excessive online privacy violation. Marketing Management, 25(1), 42-53.

Morando, F., Iemma, R., Raiteri, E. (2014) Privacy evaluation: what empirical research on users’ valuation of personal data tells us. Internet Policy Review, 3(2).


Norberg, P. A., Horne, D. R., & Horne, D. A. (2007). The privacy paradox: Personal information disclosure intentions versus behaviors. Journal of Consumer Affairs, 41(1), 100–126.

Nosko, A., Wood, E., Kenney, M., Archer, K., Pasquale, D., Molema, S., & Zivcakova, L. (2012). Examining priming and gender as a means to reduce risk in a social networking context: Can stories change disclosure and privacy setting use when personal profiles are constructed? Computers in Human Behavior, 28(6), 2067–2074.

O'Malley, L., Patterson, M., & Evans, M. (1997). Intimacy or intrusion? The privacy dilemma for relationship marketing in consumer markets. Journal of Marketing Management, 13(6), 541-559.

Pan, Y., & Zinkhan, G. M. (2006). Exploring the impact of online privacy disclosures on consumer trust. Journal of Retailing, 82(4), 331-338.

Papacharissi, Z., Gibson, P. L. (2011). Privacy online: Perspectives on privacy and self-disclosure in the social web. Heidelberg and New York: Springer, 75-89.

Perrin, A. (2015). Social media usage: 2005-2015. Pew Research Center: Internet, Science & Tech. Retrieved from http://www.pewinternet.org/2015/10/08/social-networking-usage-2005-2015/

Petronio, S. (2012). Boundaries of privacy: Dialectics of disclosure. New York: SUNY Press.

Phelps, J., Nowak, G., & Ferrell, E. (2000). Privacy concerns and consumer willingness to provide personal information. Journal of public policy & marketing, 19(1), 27-41.


Rader, E. (2014). Awareness of behavioral tracking and information privacy concern in Facebook and Google. In 10th Symposium on Usable Privacy and Security (SOUPS 2014) (pp. 51-67).

Rapp, J., Hill, R. P., Gaines, J., & Wilson, R. M. (2009). Advertising and consumer privacy. Journal of Advertising, 38(4), 51-61.

Reynolds, B., Venkatanathan, J., Goncalves, J., Kostakos, V. (2011) Sharing Ephemeral Information in Online Social Networks: Privacy Perceptions and Behaviours. INTERACT, 3, 204-215, University of Madeira.

Rossi, P. E., McCulloch, R. E., Allenby, G. M. (1996) “The Value of Purchase History Data in Target Marketing”, Marketing Science, 15(4), 321-340.

Sivanathan, N., & Pettit, N. C. (2010). Protecting the self through consumption: Status goods as affirmational commodities. Journal of Experimental Social Psychology, 46(3), 564-570.

Spiekermann, S., Grossklags, J., Berendt, B. (2001). E-privacy in 2nd generation e-commerce: Privacy preferences versus actual behavior. In: Proceedings of the 3rd ACM Conference on Electronic Commerce, October 14-17, Florida, USA.

Statista (2019) Average daily internet usage worldwide 2019, by age and device. Retrieved from: https://www.statista.com/statistics/416850/average-duration-of-internet-use-age-device/

Statista (2019) Distribution of internet users worldwide as of 2019, by age group. Retrieved from:


Statista (2020) Most popular social networks worldwide as of January 2020, ranked by number of active users. Retrieved from: https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/

Statista (2020) Share of global mobile website traffic 2015-2019. Retrieved from: https://www.statista.com/statistics/277125/share-of-website-traffic-coming-from-mobile-devices/

Steijn, W. M. P. (2014). Developing a sense of privacy: An investigation into privacy appreciation among young and older individuals in the context of social network sites. Tilburg Institute of Law, Technology, and Society. Retrieved from https://www.tilburguniversity.edu/nl/onderzoek/rechten/event-phd-defense-w-steijn/

Stone, B. (2010, March 3). Ads Posted on Facebook Strike Some as Off-Key. New York Times.

Strater, K., & Lipford, H. R. (2008). Strategies and struggles with privacy in an online social networking community. People and Computers XXII Culture, Creativity, Interaction 22, 111-119.

Subrahmanyam, K., Reich, S. M., Waechter, N., & Espinoza, G. (2008). Online and offline social networks: Use of social networking sites by emerging adults. Journal of Applied Developmental Psychology, 29(6), 420-433.

Sutanto, J., Palme, E., Tan, C. H., & Phang, C. W. (2013). Addressing the personalization-privacy paradox: An empirical assessment from a field experiment on smartphone users. MIS Quarterly, 37(4), 1141-1164.

Tirunillai, S., & Tellis, G. J. (2014). Mining marketing meaning from online chatter: Strategic brand analysis of big data using latent Dirichlet allocation. Journal of Marketing Research, 51, 463-479.

Tucker, C. E. (2014). Social networks, personalized advertising, and privacy controls. Journal of Marketing Research, 51(5), 546-562.

Van den Broeck, E., Poels, K., & Walrave, M. (2015). Older and wiser? Facebook use, privacy concern, and privacy protection in the life stages of emerging, young, and middle adulthood. Social Media + Society, 1-11.

Wakefield, R. (2013). The influence of user affect in online information disclosure. The Journal of Strategic Information Systems, 22(2), 157-174.

White, T. B., Zahay, D. L., Thorbjørnsen, H., & Shavitt, S. (2008). Getting too personal: Reactance to highly personalized email solicitations. Marketing Letters, 19, 40-50.

Zafeiropoulou, A. M., Millard, D. E., Webber, C., & O'Hara, K. (2013). Unpicking the privacy paradox: Can structuration theory help to explain location-based privacy decisions? In WebSci '13: Proceedings of the 5th Annual ACM Web Science Conference, New York, USA, 463-472.

Zincir, O. (2017). Knowledge workers’ social media usage as a personal knowledge

Appendices

Appendix A - Interview Guide

Background of the participant and online behavior:

1. How old are you?

2. What is your occupation?

3. Approximately, how many hours per day do you spend online?

Privacy:

4. What is privacy for you?
a. How do you define privacy?

b. What words do you associate with privacy?
c. When do you feel the most safe/private?

5. What are the steps that you take to protect your privacy? For example, in your everyday life/online?

a. Are you changing privacy settings on your social media accounts? Which ones are you changing?

6. How do you perceive your privacy on social media websites? How much are you willing to share online?

a. What kind of information are you usually sharing online?
b. What would you never share online?

c. Could you give an example of when you wanted to post something but then decided not to because it was too private? Why did you change your mind?

Targeted advertising:

7. How often do you notice targeted ads online?
a. When do you usually notice them?

c. Could you think of an example of a targeted ad you came across recently that caught your attention?

8. How does it affect the online experience for you?

a. What is your reaction when you see a targeted ad? How does it make you feel? Why?
b. Could you think of examples when you found a targeted ad useful? Why?

c. Could you think of an example when you felt like your privacy was violated by the information used in a targeted ad? Why?

d. Ads for what kind of products do you find more useful?

Ask to open their Facebook feed. Ask to find a targeted ad on their feed:

9. Why do you think you got this ad?

a. What kind of previous behavior (conversation, search engine use, etc.) do you think led to this ad appearing on your feed?

b. Have you been talking about or considering [topic related to the ad]?

10. Once you remembered the reason the targeted ad appeared on your feed, how does it make you feel?

11. Do you find targeted advertising useful?
a. Ads for what kind of products do you find more useful?

12. What if another person would see the targeted ads you are getting? How would it make you feel?

a. How would you feel if another person saw the TA you are getting? OR How would you feel if another person saw the TA we just discussed?

b. Would you want other people to see/notice the targeted ads you are getting?

c. Ads for what kind of products or services would you feel uncomfortable for other people to see?

14. What type of ads would you definitely not like to see on your feed? E.g., a specific type of product/service, or a specific way in which the targeting was done.
