
Privacy Economics: The effect of perceived control on Facebook behavior

Laure van Waardenburg (10789456)

Master Thesis MSc Economics
Mastertrack: Behavioral Economics
University of Amsterdam

Thesis supervisor: Audrey Hu
August 17, 2016

Abstract

Online social network sites offer attractive means for interaction and communication, but also raise privacy and security concerns. People claim to be anxious about the information that is available about them online, yet they are still inclined to share personal data. One of the most popular answers to these privacy concerns is to give people more control over their information online. Facebook introduced one such solution in 2015, the 'Privacy Basics Tool'. This tool increases users' perceived control over the information they share on Facebook, even though the objective risks are unchanged. This study focuses on the question whether increasing perceived control over the release of private information decreases individuals' concern about privacy and increases their propensity to disclose sensitive information, even when the objective risks associated with such disclosures do not change or even worsen. The study concentrates on one of the most popular social network sites, Facebook, and introduces the new construct of Perceived Control Effect (PCE) to examine the effect of feeling in control over one's information on people's willingness to share it. A linear regression model and several mediation analyses are performed on a dataset of 226 respondents to test these relationships. The results show that PCE has a significant positive effect on the willingness to share personal information. This effect is mediated by privacy concern towards Facebook and by knowledge of Facebook's privacy policy. The findings highlight how technologies that make individuals feel more in control over the release of personal information may have the unintended consequence of eliciting greater disclosure of sensitive information. The results of this thesis can help governments produce suitable laws to optimally protect people's personal privacy.

Keywords: Behavioral Economics, Facebook, Privacy Basics Tool, online privacy, perceived control, privacy paradox, willingness to share information, mediation model


Statement of Originality

This document is written by Student Laure van Waardenburg who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics is responsible solely for the supervision of completion of the work, not for the contents.

Table of Contents

1. Theoretical Framework
1.1. General Privacy Economics
1.2. Privacy and Behavioral Economics
1.2.1. Incomplete Information
1.2.2. Cognitive Biases
1.3. Privacy and Control
1.3.1. Control and Social Network Sites
2. Research Question and Hypotheses
3. Methodology
3.1. Study Design
3.2. Data Collection
4. Data Description
4.1. General Data Analysis
4.1.2. Privacy Attitude
4.1.3. Information Provided on Facebook
4.1.4. Awareness of Facebook Policy
4.1.5. Self-Reported Visibility
4.1.6. Attitudes Toward Facebook
4.1.7. Effect of the Facebook Privacy Tool
5. Regression Analysis
5.1. Results of Regression Analysis
5.1.1. Direct Effects
5.1.2. Mediating Effects
6. Discussion & Conclusion
References
Appendices
Appendix A
Appendix B


1. Theoretical Framework

1.1. General Privacy Economics

Economists have been writing about privacy since at least the 1970s, starting with the contributions of Chicago scholars such as Posner (1978) and Stigler (1989), who argue that the protection of privacy creates inefficiencies in the marketplace by concealing potentially relevant information from other economic agents. Renewed interest in the topic arose with Varian (1996), who observed that consumers may suffer privacy costs when too little personal information about them is shared with third parties, rather than too much. This view that privacy protection causes market inefficiency is, however, not shared by all economists. Hirshleifer (1980), for instance, notes that the assumptions of rational behavior underlying the Chicago scholars' privacy models fail to capture the complexity of consumers' privacy decision-making. Privacy is clearly a complex decision problem, leading to differing opinions and attitudes. From early on, economic studies of privacy have viewed individuals as rational economic agents. According to this view, individuals are fully informed or base their decisions on probabilities from known random distributions, and always aim to maximize utility.

This assumption that privacy preferences are consistent and constant has been challenged by recent research, surveys and experiments. These studies show that individuals can be exceptionally inconsistent in their behavior where private information is concerned. Surveys continue to identify privacy as one of the most pressing concerns individuals have when it comes to information technology. Turow et al. (2012) found that 66% of Americans do not want to receive behaviorally targeted advertisements. A study by the Dutch Dialogue Marketing Association found that 70% of participants did not want behavioral advertising (Helberger et al., 2012). A study in the United Kingdom found that only 8% of participants were comfortable with behavioral advertising (Bartlett, 2012); 10% were comfortable with Gmail scanning the contents of emails for targeted advertising, and 80% worried about firms using their data without consent and selling it to third parties. In addition, some of the numerous surveys in this field show not only that individuals are concerned about the privacy and security of their personal information, but also that many would be willing to take steps to protect their own information and are in some cases even willing to pay for it (Bogerius, 2015). The results of these surveys suggest that many people value their privacy and are willing to take steps to protect it. Despite this, several empirical studies suggest that even people who claim to be privacy-conscious are willing to provide personal information for relatively small rewards. This seems contradictory given the findings discussed above and inspires a discussion on the difference between attitudes towards privacy and actual privacy behavior, the so-called 'privacy paradox' (Rifon et al., 2007). Evidence for this paradox can be found everywhere on the web: people continuously tick the 'I agree' box on websites to accept cookies without even reading the terms. People also share a lot of information without being aware of it; many apps, for example, collect the physical location of their users without the users noticing. On the other hand, social network sites are a perfect example of people actively seeking opportunities to share sensitive information (photos, addresses, etc.) on platforms such as Facebook, Twitter and Instagram (Leslie et al., 2011).

Privacy decisions are attempts to control information flows in order to balance the competing interests (the costs and benefits) of sharing personal information or not. This makes them a natural field of study for economics. However, as the research discussed above shows, traditional economic models have made overly restrictive assumptions about the nature of individual privacy preferences. Several psychological factors also seem to play a role in deciding whether to protect or share our personal information. Sometimes we might worry about personal intrusions of little significance, yet overlook those that may cause significant damage. Traditional economic models disregard these psychological and emotional components of privacy decisions. Behavioral economics does take these components into account and is hence a promising field for further study of the privacy paradox.

1.2. Privacy and Behavioral Economics

To make sense of the inconsistencies discussed above, much research in recent decades has applied behavioral economics to privacy economics. Previous research (Acquisti, 2002; Margulis, 2003; Norberg et al., 2007) identifies two main decision-making 'hurdles': privacy decision-making is influenced by 1) incomplete information and 2) cognitive biases, both of which are discussed below.


1.2.1. Incomplete information

One of the main issues in privacy economics is what information an individual has access to when preparing to make privacy-sensitive decisions. Individuals making the decision might not be (completely) aware of the risks involved in certain transactions, or they may even ignore the existence of protective technologies. Many firms track people via behavioral targeting without them even being aware of it. When one sees releasing personal data as payment for services (as a rational economic model does), it is clear that there are information asymmetries. As Cranor & McDonald (2010) state: "People understand ads support free content, but do not believe data are part of the deal." In order to make an informed choice, people first have to realize that they are making a choice. This point is reinforced by their finding that 86% of participants were aware that behavioral targeting takes place, while the same participants admitted that they know very little to nothing about how this data is collected and what happens with it: "it seems people do not understand how cookies work and where data flows" (Cranor & McDonald, 2010). As a result, individuals have to decide based upon incomplete or asymmetric information.

The concept of asymmetric information is well known in the economic literature; Akerlof (1970) first analyzed it in his well-known 'market for lemons' paper. Lee et al. (2005) sketch privacy economics as a classic market for lemons: consumers cannot recognize quality (here, the absence of data collection for advertising) and therefore will not pay for it. As a result, the market spirals downward. In their article they propose signaling as a solution. In practice, however, it is difficult for firms to distinguish themselves from others by offering privacy-friendly services. Almost all privacy policies start the same way: "The privacy of our users is a top priority for us." This means that all privacy policies look similar at first glance, and nobody actually takes the time to read them fully (Bogerius, 2015). It is nearly impossible for people to figure out exactly what happens to their data, and with whom their information will be shared. Four reasons can be identified as to why people do not read privacy policies. First, life is simply too short: Cranor & McDonald (2010) calculated that it would cost an individual 244 hours per year to read the privacy policies of all the websites she visits. Second, privacy policies are often very long, vague and difficult to read. Jensen & Potts (2004) conclude that the vast majority of privacy policies are too difficult to understand for the average Internet user. Moreover, the privacy policy on one website often refers to the privacy policies of other firms, which means that people may sometimes have to read dozens of privacy policies in order to learn about the data collection of only one website. The situation is further complicated by the fact that websites change their privacy policies regularly and without notice, meaning that people would have to check them repeatedly in order to be completely informed (Bogerius, 2015). All of this makes it difficult for people to make a well-informed choice regarding data protection. Third, even if people knew what firms did with their data, it would be difficult to predict the consequences: what exactly will their information be used for (price discrimination, identity fraud, etc.)? Fourth, it is difficult for people to attach a monetary value to information about their behavior, meaning they do not know how much they 'pay'. For instance, people might not know how much profit a firm makes from their information, or what the costs of a possible privacy invasion would be (Acquisti, 2010).
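The 244-hour figure is easy to reproduce as a back-of-the-envelope calculation. The inputs below (number of sites with policies per year, minutes per policy) are illustrative assumptions in the spirit of the original estimate, not figures quoted from Cranor & McDonald (2010):

```python
# Back-of-the-envelope reading-time estimate in the style of the 244-hour figure.
# Both inputs are illustrative assumptions, not quoted from the paper.

sites_with_policies_per_year = 1462   # assumed unique sites visited per year
minutes_per_policy = 10               # assumed average reading time per policy

total_hours = sites_with_policies_per_year * minutes_per_policy / 60
print(f"~{total_hours:.0f} hours per year")   # ~244 hours per year
```

Even under conservative assumptions like these, the time cost dwarfs what anyone actually spends, which supports the "life is too short" explanation.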

Even if people get to the point where they read and understand the privacy policies of the websites they visit, there is no guarantee that they know what will happen with their data: firms sometimes choose not to act according to their own privacy policy. An example is when Google stated that people who used the Safari browser on certain devices were effectively opted out of tracking, because Safari blocks third-party cookies; it later turned out that Google bypassed Safari's settings (Angwin et al., 2012). A second example is Facebook, which was heavily criticized in 2014 for manipulating the timeline of its users for research purposes. Only four months afterwards did they add the term 'research' to their privacy policy, thereby invading users' privacy (McNeal, 2014). It would simply take people too much time to keep track of changing and complicated privacy policies and, on top of that, to check whether firms actually comply with them. A hypothetical fully rational person would know how to deal with information asymmetry and uncertainty, but people do not tend to deal with information asymmetry in a 'rational' way. For this reason, asymmetric information poses a problem when people make privacy-sensitive decisions.

1.2.2 Cognitive Biases

Making rational choices about complex matters such as privacy is difficult, and people often rely on heuristics for such choices. Relying on heuristics for privacy decisions can lead to biases, some of which are discussed below. Tversky & Kahneman (1974) describe three main heuristics in their famous article. Heuristics can be seen as rules of thumb people use when making decisions. One heuristic relevant to privacy economics is the representativeness heuristic. An example is the finding that "Individuals are less likely to provide personal information to professional-looking sites than unprofessional ones, or when they are given strong assurances that their data will be kept confidential" (Acquisti, 2009). Furthermore, there are three biases that influence decision-making in the privacy domain. First, there is the endowment effect, according to which individuals value goods more when they are already in their possession (Tversky & Kahneman, 1974). Acquisti et al. (2013) find preliminary evidence that individuals tend to assign a higher 'sell' value to their personal information (the amount they request from others to give them their information) than 'buy' value (the amount they are willing to spend to make sure that the same information is not released to others). This difference between Willingness to Accept (WTA) and Willingness to Pay (WTP) does not fit the traditional economic model. Second, the status quo bias has been mentioned in several papers. This bias refers to the fact that most people tend to favor the existing situation and avoid any effort involved in changing their choices. For privacy economics this means that default settings have a big impact on the dynamics between a firm and its users (Sunstein, 2013). A study by Acquisti (2010) showed that in online social networks the vast majority of users do not change their default privacy settings. Third, the so-called present bias also influences privacy decision-making. This bias refers to the tendency of individuals to trade off privacy costs and benefits in ways that are inconsistent with their initial plans, damaging their future selves in favor of immediate gratification. Ainslie (1975) found that discount functions are approximately hyperbolic. Hyperbolic discount functions are characterized by a relatively high discount rate over short horizons and a relatively low discount rate over long horizons. Thus, individuals tend to heavily discount long-term risks and losses while acting in privacy-sensitive situations.
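The preference reversal implied by hyperbolic discounting can be illustrated numerically. The functional forms below are the standard exponential and simple-hyperbolic specifications; the parameter values and payoff amounts are illustrative assumptions, not figures from Ainslie (1975):

```python
# Hyperbolic vs. exponential discounting: a preference-reversal sketch.

def exponential(amount, delay, r=0.05):
    """Exponential discounting: constant per-period discount rate r."""
    return amount / (1 + r) ** delay

def hyperbolic(amount, delay, k=0.5):
    """Hyperbolic discounting V = A / (1 + k*D): steep near-term, flat later."""
    return amount / (1 + k * delay)

# Choice made now: 50 today vs. 100 in 5 periods.
print(hyperbolic(50, 0), hyperbolic(100, 5))    # 50.0 vs ~28.6: take 50 now

# Same choice viewed 10 periods in advance: 50 at t=10 vs. 100 at t=15.
print(hyperbolic(50, 10), hyperbolic(100, 15))  # ~8.3 vs ~11.8: wait for 100

# Exponential discounting ranks the options the same at both horizons
# (here it prefers the 100 in both cases), so no reversal occurs.
print(exponential(50, 0) < exponential(100, 5))     # True
print(exponential(50, 10) < exponential(100, 15))   # True
```

The hyperbolic agent prefers the immediate reward when the choice is imminent but the larger later reward when both options are distant: exactly the plan inconsistency that the present bias describes for privacy decisions.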

1.3. Privacy and Control

The research discussed above shows that considerable work has already linked privacy economics to behavioral economics, but a lot of ground remains uncovered. In the privacy literature, the concept of privacy is closely associated with psychological control of personal information (Westin, 1967). Several privacy theorists have used a control perspective when defining what privacy is. For example, Altman (1974) defines privacy as "the selective control over access to the self or to one's group". Stone et al. (1983) define privacy as "the ability of the individual to control personal information about one's self". Lastly, Margulis (2003) defines privacy as "control over or regulation of or, more narrowly, limitations on or exemption from scrutiny, surveillance, or unwanted access". Consumers tend to perceive information disclosure as less privacy-invasive if they believe that they have control over the collection and use of their personal information (Culnan and Armstrong, 1999). Malhotra et al. (2004) state that control is one of the most important factors affecting privacy concerns among Internet users. In the psychology literature, control is usually treated as a perceptual construct, because perceived control affects human behavior much more than actual control does (Skinner, 1996). In addition, prior research has suggested that people are more willing to take risks, and judge those risks as less severe, when they feel in control (Slovic, 1987). As a cognitive construct, perceived control is defined as a belief about the extent to which an agent can produce desired outcomes (Eccles & Wigfield, 2002). Given the key role that control plays in the literature and in discussions of privacy, it is clearly useful to draw on control theories when building a theoretical framework for privacy research, which is what this study attempts to do.

Hoadley, Lee and Rosson (2010) highlight the importance of analyzing control as a psychological perception (instead of actual control) and examining information access as a perception of ease of access. They show how the illusion of loss of control, introduced by the News Feed feature on Facebook, gave users the idea that there was an increase in information accessibility (outside of their control). This 'illusion' then led to higher privacy concern among these users. Other research supports these results, finding that consumers tend to have lower privacy concerns if they perceive a certain degree of control over the collection of their personal information (Sheehan and Hoy, 2000).

In the privacy literature, control over personal information is often constructed as instrumental to privacy protection, to such an extent that many policy makers and data-holding companies see control as the key solution to many modern privacy challenges. It is not only external parties that see more control as the solution; users also continuously state that they want more control. The question is whether this is actually to their benefit. Brandimarte, Acquisti and Loewenstein (2012) show that 'more' control can actually lead to 'less' privacy, in the sense of higher objective risks associated with personal information disclosure. They argue that people fail to distinguish between two different forms of control: control over the release of personal information, and control over access to and usage of that information by others. If people are able to determine which information they release, they feel more in control, even though the parties who receive this information still create risks the users cannot control. Higher levels of control may therefore not always serve the intended goal of enhancing privacy protection. In fact, the findings discussed above highlight how technologies that make individuals feel more in control over the release of personal information may carry the unintended consequence of eliciting riskier disclosures. This leads to an interesting and at the same time alarming question: is providing people with more 'control' over their personal information in the online environment a good move? Perceived control in particular leads people to share more information, even though this does not align with their best interests. As said before, people are generally not able to fully read and grasp the privacy policies of websites, so giving them responsibility over their own information does not put them in full control, since there is an information asymmetry. For this reason, the question whether more control leads to less privacy is central in this study.

1.3.1 Control and Social Network sites

When it comes to the flow of personal information, there is no larger platform than the Internet, and more specifically the various Social Network Sites found online. Over the past decade, Online Social Network sites (OSNs) have dramatically increased in popularity, reaching all over the world and providing new opportunities for interaction and socialization. Facebook, for example, has transformed from a simple college network website into one of the most popular OSNs in the world, with more than 1.71 billion monthly active users from all around the world. Facebook users share more than 30 billion pieces of content (e.g. photos, news stories, blog posts) and interact with over 900 million objects each month (Facebook statistics, 2016). This introduces various risks for Facebook users, including the more obvious ones such as accidental information disclosure and reputation damage. More interestingly, there are also less obvious risks involved: a Wall Street Journal study found numerous third-party applications on Facebook extracting identifiable user information from the platform and sharing this bounty with advertising companies. The large amount of information exchanged between third-party developers and Facebook users thus introduces an additional privacy risk on Facebook. Stutzman, Gross and Acquisti (2013) investigated the behavior of Facebook users over time in a longitudinal study and found two interesting and slightly contradictory results. First, Facebook users increasingly show privacy-seeking behavior: they limit the amount of data they share with unconnected profiles on Facebook. Second, the amount of personal information those Facebook users shared privately with connected profiles ('friends') actually increased over time. In doing so, users ended up increasing the disclosure of personal information to other entities on the network as well: third-party apps, advertisers and Facebook itself. The results of this study highlight the privacy paradox: the tension between privacy choices driven by individual preferences and the role of the environment (including factors such as asymmetric information, bounded rationality and cognitive biases) in affecting those choices. As stated before, this paradox can be seen everywhere on the web. Surveys keep finding that people are very concerned about their online privacy and feel uncomfortable about behavioral advertising (Bartlett, 2012; Helberger et al., 2012), yet at the same time people seem to act in the opposite manner (accepting cookies, sharing personal information on social media websites, etc.). Combined with the worldwide increase in social network use, this raises the question what the effects of changing privacy policies on these social network sites are. Hoadley et al. (2010) researched the effect of the introduction of the News Feed feature on Facebook. Although this new interface did not change actual control over who had access to what information, Hoadley et al. (2010) found that it induced lower levels of perceived control over personal information due to easier information access, which in turn led users to be more careful with the information they shared.

In 2015 Facebook introduced its new 'privacy tool', a tool that makes it easier for users to control who can see the information on their profile. At the same time, Facebook also released its new privacy policy, which shows that the 'privacy tool' has some spurious elements. The tool makes it easy for users to see and control who can see the information they provide online, but if your friends use certain apps via Facebook, it is they who decide which of your information is shared, not you yourself. In light of earlier research, the question arises whether this new privacy tool really favors the people using Facebook. Does it lead people to make decisions that are in line with their initial preferences, or does it actually trick them into going against those preferences? Last year Facebook admitted for the first time that it also keeps track of its users' behavior on platforms besides Facebook (websites, apps, etc.): "We're continuing to improve ads based on the apps and sites you use off Facebook." (Facebook, 2015). This does not necessarily have to be a problem for everybody, but it does show that there is a lot more than meets the eye when it comes to the use of social networks.

CEO Mark Zuckerberg has repeatedly stressed the role of privacy controls, emphasizing that the new privacy rules exist to optimize the user-friendliness of social networks for Facebook users. But do they really?


2. Research question and hypotheses

This study aims to shed light on the behavior patterns of Facebook users, and specifically on changes in behavior related to the newly introduced 'privacy tool' on Facebook. This tool grants people more control over what information they share on Facebook. Unfortunately, this control is largely a myth: Facebook still owns all the data you share online, whether you decide to share it only with friends or not, and can still use it and sell it to third parties. Moreover, the amount of information that is visible to others also depends on the privacy settings of your friends, not just your own. For this reason, the privacy tool on Facebook mainly adds to the perceived control of users and does not grant them any actual extra control over their online data. Following the literature on privacy economics discussed above, the research question on which this study focuses is:

Does more perceived control on Facebook lead users to reveal more personal information?

This study contributes to the academic literature in Behavioral Economics in two ways. First, research combining perceived control with privacy exists, but only in a limited number of studies; this study stands out by focusing specifically on one social network site (Facebook). Second, this is the first study to use the newly constructed Perceived Control Effect (PCE), which focuses on the effect of the recently introduced 'Privacy Basics Tool' on Facebook. This tool was introduced to fulfill users' wishes for more control over the information they share; this study measures the effects of its introduction.

As said before, the main focus of governments right now is to give the consumer more control when it comes to sensitive information online, and there is an ongoing debate whether this is the correct way to handle online privacy. For this study to be relevant, it is important to first establish whether people are interested in protecting their personal information at all. Facebook does allow you to choose whether only your friends can see a specific picture, but Facebook still owns the rights to that picture; moreover, Facebook tracks your information flow not only on Facebook itself, but also on all other websites you visit. It is assumed that people do care about what happens to the flow of their personal information online. This leads to the first hypothesis that will be tested:


Hypothesis 1:

Individuals want to protect their personal information online.

One of the main aspects of Facebook that people are unaware of is the fact that it uses your flow of information (the information you provide to Facebook and the other websites you visit) to offer customized advertising. It is assumed that people do not appreciate these behaviorally targeted advertisements. This leads to the second hypothesis that will be tested in this study:

Hypothesis 2:

Individuals do not appreciate behavioral advertisements.

Following this, we can return to the tool mentioned before: does the privacy tool that Facebook introduced last year have an effect on the perception of control on Facebook? As Skinner (1996) pointed out, perceived control affects human behavior much more than actual control does. This leads to the third hypothesis that will be tested:

Hypothesis 3:

The availability of personal control through privacy-enhancing technology (the Privacy Basics Tool) is positively related to consumers' perceived control over their personal information.

Since people tend to change their behavior based on perceived control rather than actual control over their information, the question arises whether providing people with more control over their personal information online is the best way to protect their online privacy. This study can help shed light on that question. To do so, it introduces a new concept called the Perceived Control Effect (PCE). The main question this study focuses on is to what extent perceived control influences our willingness to share information online. The conceptual framework is shown in Figure 1.


Figure 1: Conceptual Framework

The following hypotheses have been derived from the direct effects of PCE on willingness to share personal information.

The construct of Perceived Control Effect measures the amount of control an individual feels he or she has when sharing information on Facebook. The focus is on perceived rather than actual control because the new privacy tool on Facebook gives people the option to choose who can see their information, while in reality everything you put on Facebook still belongs to the company, which can share and sell this information to other parties.

The main goal of this study is to determine whether there is a relationship between PCE and the willingness to share information. This leads to the fourth hypothesis to be tested:

Hypothesis 4:

An increase in perceived control over the release of private information on Facebook will not decrease an individual's concern about privacy and therefore not increase their tendency to reveal sensitive information, even when the objective risks associated with such disclosures do not change.

It is important not only to study the effect of PCE on the willingness to share information, but also the behavioral process through which PCE affects it. In order to understand this process, several


mediating effects on the relationship between the independent variable (PCE) and the dependent variable (willingness to share personal information) have also been studied. These mediating effects are: 1) privacy concern, 2) value of privacy, 3) value of ad relevance, 4) knowledge of privacy policy and 5) privacy concern towards Facebook. In summary, the PCE concept assumes that consumers are heterogeneous in terms of privacy concern, how much they value privacy and ad relevance, and how aware they are of the privacy policy. This heterogeneity is expected to translate into different "privacy calculus" strategies, captured by different valuations of these variables which, in turn, explain the effect of PCE on willingness to share personal information.

1. The construct privacy concern measures to what extent a consumer is concerned about his or her online privacy. People who score high on this variable are worried about the future consequences of sharing their information, and are therefore expected to be less likely to share personal information online. Considering this, PCE is expected to have a negative effect on privacy concern, which in turn is expected to have a negative effect on the dependent variable willingness to share personal information. This means that people who perceive a high level of control will be less concerned about their privacy, and people who are less concerned about their privacy will be more willing to share their personal information online.

2. The construct value of privacy measures how much a consumer values his or her privacy. A consumer with a high value of privacy prefers to keep information to him- or herself rather than share it with others, whereas a person with a low value of privacy has less of a problem with other people knowing this information. Where privacy concern measures the general concern people have about the concept of privacy, value of privacy measures more specifically whether people feel intruded upon when asked for their personal information. Somebody who scores high on this variable is not likely to share a lot of personal information. PCE is therefore expected to have a negative effect on the variable value of privacy, which in turn is expected to have a negative effect on the dependent variable willingness to share personal information.

3. The construct value of ad relevance measures to what extent people appreciate relevant (behaviorally targeted) advertisements over general advertisements. It is expected that ad relevance will have a negative effect on PCE and a


positive effect on the willingness to share personal information. This means that a person who values behaviorally targeted advertisements over general ones will perceive a lower PCE, since they are aware that their information is used by Facebook for these advertisements. For the same reason, people with a high value of ad relevance are expected to be more willing to share personal information. This follows the reasoning of Phelps et al. (2000), who found similar results.

4. The construct privacy concern towards Facebook measures the general trust people have in Facebook and how concerned they are when sharing their information there. Where privacy concern measures people's concern for their privacy in general, this construct measures their privacy concern specifically towards Facebook. A negative relationship is expected between privacy concern towards Facebook and both the dependent and the independent variable. This means that participants who score high on this variable are expected to feel less in control over their information and to be less likely to share personal information on Facebook.

5. The last construct, knowledge of the privacy policy, measures how informed people are about Facebook's privacy policy. People who score high on this construct are well informed, and vice versa. This construct is expected to have a negative effect on PCE and on the willingness to share personal information, since well-informed people will be aware that they are not actually in control of their own data and will therefore be less likely to share it. It is, however, also possible that this relationship runs the other way: according to Nowak and Phelps (1997), consumers are more willing to share personal information when they know what the information will be used for. It will therefore be interesting to see in which direction the mediation effect goes.

All of the information discussed above leads to the fifth and final hypothesis to be tested:

Hypothesis 5:

The relationship between perceived control and tendency to reveal sensitive information as explained in Hypothesis 4 is not mediated by several factors: 1)


privacy concern, 2) value for privacy, 3) value of ad relevance, 4) privacy concern towards Facebook and 5) knowledge of Facebook privacy policy.

Lastly, a few consumer characteristics will also be used as moderator variables. Sheehan and Hoy (2000) did not find any meaningful patterns when researching privacy concerns across demographics. On the other hand, Graeff and Harmon (2002) found that privacy concerns do vary depending on age, gender and education.


3. Methodology

3.1. Study Design

In order to test the hypotheses in this study, quantitative research was performed: a questionnaire was distributed to gather the necessary data. The questionnaire consisted of approximately 40 closed-ended questions. The full questionnaire can be found in Appendix A.

All questions required an answer; respondents could not continue or finish the questionnaire without answering every question. This format was chosen to prevent respondents from (accidentally) skipping questions. Most questions used a 5-point Likert scale. Wherever a different answering method was used, this is indicated below.

The first questions in the questionnaire were a set of demographic questions. For the variable education the option “I don’t want to answer” was added. The reason for this is that not everyone wants to give such personal information. Without this option participants might choose another answer at random just so they do not have to give out private information and this would influence the results.

Second, questions were asked to measure the variable privacy concern. This was done with a set of calibration questions similar to those in a study by Acquisti and Gross (2006). To make sure that privacy attitudes were measured without priming the participants on the topic of this research, privacy questions were alternated with questions on unrelated topics such as economic policy and immigration. The questions ranged from general ones (e.g., "How important do you consider the following issues in the public debate?"), to more specific issues (e.g., "How do you personally value the importance of the following issues for your own life on a day-to-day basis?"), and personal ones (e.g., "Specifically, how worried would you be if" [a certain scenario] took place?). For all questions, participants had to rank the issues in order of importance (1 being most important and 6 being least important). Ranking was chosen here instead of a 6-point Likert scale to force participants to decide which issue was more important than another, in order to get stronger results.

Third, the variable value of privacy was measured by asking participants how they felt about sharing their information on numerous occasions.

Fourth, the value of ad relevance was measured by asking participants about their feelings towards personalized advertisements.


Fifth, participants were asked general Facebook-related questions to get a sense of the differences in usage among participants and their attitudes towards Facebook.

Sixth, willingness to share personal information was measured by asking how willing the respondents were to share 10 types of private information on Facebook and who can access their profile information (Phelps et al., 2000).

Seventh, the knowledge of Facebook privacy policy was measured by a set of simple yes/no/don’t know questions about the Facebook privacy policy and the Privacy tool on Facebook.

Eighth, privacy concern towards Facebook was measured by asking about the amount of trust people feel towards Facebook, their view on its privacy policy and to what extent they feel in control over the information they share there.

Ninth, questions were asked to measure the Perceived Control Effect (PCE). These questions were only asked of people who were aware of certain possibilities to control information sharing on Facebook. They included questions such as: "If you could not manage this (who can search for and find your profile), how much information would you share on your profile?" or "How much information would you share on Facebook if this privacy tool did not exist?". These questions were all asked on a 3-point Likert scale, making them easy to compare.

Lastly, a question was added to check whether participants actually read all the questions and answered them seriously. This quality control question asked participants to select the first answer (strongly agree). Participants who answered this question incorrectly were left out of the analysis, since there was a chance that they had filled in everything randomly just to win the prize. An overview of the constructs used and the questions belonging to each construct can be found in Appendix B.
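The filtering step can be sketched as follows. The record layout and values below are hypothetical; the actual analysis was performed in SPSS on the survey export:

```python
# Minimal sketch of the attention-check filter (hypothetical records).
# Respondents who did not select "strongly agree" on the quality
# control question are dropped before any further analysis.
responses = [
    {"id": 1, "quality_check": "strongly agree"},
    {"id": 2, "quality_check": "agree"},            # fails the check
    {"id": 3, "quality_check": "strongly agree"},
    {"id": 4, "quality_check": "neutral"},          # fails the check
]

valid = [r for r in responses if r["quality_check"] == "strongly agree"]
print(len(valid))  # prints 2: the number of respondents kept
```

In the real data set, this filter is what reduces the 284 accessed questionnaires to the 226 used in the analysis.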

3.2. Data collection

The survey was constructed using Qualtrics (http://www.qualtrics.com), a free online tool for creating surveys. The data was collected by spreading the questionnaire via Facebook, since participants needed to be users of this social network site. The questionnaire was distributed to friends, family and colleagues, who were asked


to spread it further. In addition, the questionnaire was posted in several Facebook groups (e.g. 'Social Science group Amsterdam', 'Sharing is Caring Maastricht', etc.). These methods of distribution were combined to make the sample as representative as possible of the whole population. To prevent discrepancies in the data as a result of translation differences, the questionnaire was only provided in English. As an incentive, everyone who filled in the questionnaire completely and left their email address had a chance of winning a gift voucher; the winners were drawn randomly.


4. Data Description

A total of 284 participants accessed the survey. All questions were mandatory, which meant that there were no incomplete questionnaires. The question about education had the option 'I prefer not to answer this question', which was chosen by a large group of people. The quality control question was answered incorrectly by 58 of the 284 participants, a little more than 20%. This question was added to filter out people who filled in the questionnaire just for the prize and did not actually take the time to read and answer the questions; these participants were therefore taken out of the analysis. This left 226 participants, of whom 52% were female and 48% male. The mean age score was 2.7 and the median 3.0, meaning the average age was between 25 and 34. The education level was only filled in by 146 participants and had a mean of 4.6, meaning the average education level was between a bachelor's and a master's degree.

It is important to have a reliable measuring method, since this guarantees the quality of the research. For a measuring method to be reliable it has to produce stable and consistent results (Barlow & Proschan, 1975). The reliability of this questionnaire was ensured in three ways. First, participants were only asked questions they could answer, or were given the option to answer 'I don't know'. Second, as explained above, a question was added to check whether participants actually read all the questions and answered them seriously, leaving 226 participants for the analysis. Third, Cronbach's alpha was used for the multi-item scales and Pearson's r for the two-item scales to check the reliability of each construct used in the questionnaire. The measured Cronbach's alphas and Pearson's r's can be found in table 1.
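For reference, Cronbach's alpha follows directly from the item variances and the variance of the summed scale: α = k/(k−1) · (1 − Σ item variances / variance of total score). A minimal sketch with hypothetical item scores (the values in table 1 come from SPSS):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: list of k items, each a list of the same respondents' scores.
    alpha = k / (k - 1) * (1 - sum(item variances) / var(total score))
    """
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(sample_var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - item_var_sum / sample_var(totals))

# Three perfectly consistent items yield alpha of (numerically) 1.0
print(cronbach_alpha([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]))
```

Lower inter-item correlations shrink the total-score variance relative to the item variances, which is exactly what pulls the alpha of the weaker constructs in table 1 below the 0.7 threshold.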


Construct                                   Cronbach's α / Pearson's r
Privacy Concern (B)                         0.580 (0.726 excluding Q4)
Value of Privacy                            0.405
Value of ad relevance                       0.392
Willingness to share personal information   0.912
Privacy concern towards Facebook            0.705
Knowledge of Facebook Privacy Policy        0.796
Perceived Control Effect                    0.747

Table 1. Reliability of the constructs used

Values of 0.7 and 0.8 are commonly seen as acceptable levels of Cronbach's alpha (and Pearson's r). Table 1 shows that only the constructs 'Willingness to share personal information', 'Knowledge of Facebook Privacy Policy', 'Privacy concern towards Facebook' and 'Perceived Control Effect' reach this level. However, the construct privacy concern would also pass the threshold if question 4 were taken out of the comparison (the Cronbach's alpha then becomes 0.726). Since question 4 only measures how important participants think privacy is in the public debate, it is unproblematic to take it out; in contrast to the other questions, it does not tell us anything about the participants' own concern for privacy. The remaining three variables score a lot lower. This is problematic because it could mean that the questions in these constructs do not measure the same thing, which would reduce the statistical power of the scale. However, the scientific literature disagrees about the acceptable level of correlation. Cattell (1965) and Briggs and Cheek (1986), for example, argue that if the correlation is too high, one of the items on the scale might be redundant since it makes the construct too specific; they argue that a correlation between 0.2 and 0.4 is optimal. Because of this disagreement about the correct value of correlation, the decision was made to continue with all constructs, while acknowledging that this may be a limitation of this research.

4.1. General Data Analysis

Before going into the regression analysis, the general data will be analysed in this section.

4.1.2. Privacy attitude


This section examines the general privacy attitude of the participants, measured with a series of questions inspired by a study performed by Acquisti and Gross (2006). Participants were asked to rank several statements in order of relevance (1 most relevant and 6 least relevant). On average, privacy is considered a fairly important issue in the public debate by participants (mean: 2.77; sd: 1.47). In fact, it was regarded as a more important issue in the public debate than the state of the economy (statistical significance confirmed by a Wilcoxon signed-rank test: z = -4.947, p = 0.000), global warming (z = -10.267, p = 0.000) and education (z = -11.638, p = 0.000). It was, however, considered less important than immigration (z = -2.738, p = 0.006) or terrorism (z = -3.150, p = 0.002). Interestingly, when asked how important the issues are to them on a day-to-day basis, participants ranked privacy highest (mean: 1.76; sd: 0.99). For all categories, participants assigned slightly (but statistically significantly) more importance to the issue in their own life than in the public debate (in the privacy case, a Wilcoxon signed-rank test returns z = -5.70, p = 0.000 for the higher valuation of the issue in their own lives). Similar results were found when asking about the participants' concern with a number of issues directly relevant to them: the state of the economy where they live, threats to their personal privacy, the threat of terrorism, global warming and the flood of immigrants. The final question in this part of the questionnaire asked how worried participants would be if a number of specific events took place in their lives, again asking them to rank these events. The highest level of concern was registered for "A stranger knew where you live and the location and schedule of the classes you take" (mean of 1.57, with 52.2% of participants ranking this issue first).
This was followed by "Five years from now, complete strangers would be able to find out easily your sexual orientation, the name of your current partner, and your current political views" (mean of 1.92, with 39.8% of the participants ranking this situation first), followed, in order, by the 'global warming' scenario ("The European Union rejected all new initiatives to control climate change and reduce global warming"), the security scenario ("It was very easy for foreign nationals to cross the borders undetected") and the 'politics' scenario ("Britain decides to exit the EU"). The questionnaire continued with some general questions measuring participants' attitude to the use of their personal information for behavioral advertisement.
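The reported z values come from statistical software; the statistic behind them can be sketched as follows. This simplified version drops zero differences, uses midranks for ties, and applies the normal approximation without a tie correction (packages refine this), so it illustrates the method rather than reproduces the exact figures:

```python
import math

def wilcoxon_z(x, y):
    """Normal-approximation z for the Wilcoxon signed-rank test on paired data.

    Zero differences are dropped; tied absolute differences get midranks.
    No tie correction is applied to the variance (a simplification).
    """
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    abs_sorted = sorted(abs(d) for d in diffs)

    def midrank(v):
        first = abs_sorted.index(v) + 1
        last = first + abs_sorted.count(v) - 1
        return (first + last) / 2

    w_pos = sum(midrank(abs(d)) for d in diffs if d > 0)  # rank sum of positive diffs
    mean_w = n * (n + 1) / 4
    sd_w = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_pos - mean_w) / sd_w

# Hypothetical paired rankings: the first list is consistently ranked lower
print(wilcoxon_z([1] * 10, [2] * 10))
```

A consistently lower (more important) ranking in the first sample produces a negative z, matching the sign convention of the tests reported above.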


Figure 2. General questions on value for privacy

As can be seen in figure 2, a majority of the participants indicate being worried about the possible risks of giving out personal data (78%) and about the amount of data gathered about them (67%). In addition, 60% claim to be annoyed when asked for their personal information. Participants were also asked two questions about their attitudes towards behavioral advertising: 76% indicate that companies should not be allowed to track consumers in order to send them personalized advertisements, and 65% indicate that they do not appreciate personalized advertising. Given the data analyzed in this section, Hypothesis 1 (Individuals do not want to protect their personal information) and Hypothesis 2 (Individuals do appreciate behavioral advertisements) can both be rejected.

4.1.3. Information provided on Facebook

The next interesting question is the reported use of Facebook: what information do users share on Facebook, with whom, and of what quality? Looking at the results, many people are quite selective with the information they share (see table 2). Most publish their birthday, school or workplace and relationship status, but hide their phone number, address and sexual orientation. There is no significant difference in the amount of information male and female participants share, nor are there significant differences in the amount of information shared across age or education level. The general results to these questions can be found in table 2 and figure 3. The willingness to share information is affected by the frequency with which people update their profile (a Pearson


χ²(120) = 185.031, Pr = 0.000 shows that the relationship is significant). The willingness to share information is not affected by the frequency of login.
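The Pearson chi-square statistic reported here can be reproduced from a cross-tabulation of observed counts. A minimal sketch with a hypothetical 2×2 table (the actual tables are much larger, e.g. 120 degrees of freedom above):

```python
def pearson_chi2(table):
    """Pearson's chi-square statistic and degrees of freedom for a
    contingency table given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

# Under perfect independence, observed equals expected, so chi-square is 0
print(pearson_chi2([[10, 10], [10, 10]]))  # prints (0.0, 1)
```

Large chi-square values relative to the degrees of freedom (as in the test above) indicate that the two categorical variables, here update frequency and willingness to share, are not independent.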

Information            Share       Do not share
Birthday               96% (218)    4% (8)
School or workplace    91% (205)    9% (21)
Phone number           40% (91)    60% (135)
Personal address       40% (91)    60% (135)
Political views        66% (149)   34% (77)
Religious views        39% (90)    61% (136)
Sexual orientation     38% (86)    62% (140)
Relationship status    74% (168)   26% (58)
Partner's name         64% (144)   36% (82)
Family members         69% (157)   31% (69)

Table 2. What accurate information do you share on Facebook?

Figure 3. Who can see the information Facebook users share?

4.1.4. Awareness of Facebook policy

How well informed is the average Facebook user when it comes to the website’s privacy policy? By default, every member of Facebook appears in searches of


everyone else, and every member of the social network site can read every profile. Other users can be friends, friends of friends or non-friends. Facebook does have a privacy tool that gives users a certain amount of control over what information to reveal to whom: members can select their profile visibility (who can read their profile) as well as their profile searchability (who can find their profile through the search features) by type of user. When asked whether they had read Facebook's privacy policy, 61% state they have not read it and 37% state they have skimmed over it. This is a low number, but not a very surprising one, and it does not tell us exactly how informed people are about Facebook's privacy policy: people might be well informed even without having read it. Among the participants of this questionnaire, 24% claim not to know whether it is possible to manage who can find their Facebook profile, or think that they do not have this possibility. 26% claim not to know whether they have any control over who can actually read the information on their profile, and 31% claim not to know whether they can manage who sees specific information (e.g. birthday visible to friends of friends, but relationship status only visible to friends). See figure 4 for an overview of this data.

This awareness of who can see their profile is affected by the frequency with which participants update their profile (a Pearson χ²(12) = 39.908, Pr = 0.000 shows that the relationship is significant). Participants were also asked how much information they believed Facebook owns ('Does Facebook keep and own all information you provide to Facebook?', 'Do you believe Facebook collects information about its users outside of Facebook?' and 'Does Facebook also combine information about you from other sources?'). The answers to these questions can be found in figure 6. None of these questions are significantly affected by the frequency with which participants update their profile.

To summarize, the majority of Facebook members claim to know about ways to control the visibility and searchability of their profiles, but a significant minority of members are unaware of those tools and options.


Figure 4: Awareness of visibility information on Facebook


Figure 6: What information do people believe Facebook gathers?

4.1.5. Self-reported visibility

Next, it is interesting to look at whether participants were aware of how visible and searchable their online profiles were. The focus here is on those participants who claimed not to have changed their privacy settings or who claimed they did not know it was possible to change these settings (those who answered 'no' or 'don't know' to questions 17 and 18). Because they did not change these settings, their profiles would be visible and searchable by all members of Facebook. Of these participants, 15% incorrectly did not believe that everybody was able to find their profile on Facebook, and 20% incorrectly did not believe that everybody was able to see the information on their profile.

4.1.6. Attitudes Toward Facebook

To see to what extent users know the privacy guidelines and trust Facebook as a website, a few questions were added about the general attitude towards Facebook. 61% of the participants have not read Facebook's privacy policy, 2% read it thoroughly and 37% skimmed through it. The answers to the question of how much people trust Facebook and the people on Facebook can be seen in table 3. As expected, most people seem to trust their friends, and this number


gets lower for friends of friends and unconnected users. 47.4% of the participants feel some degree of distrust towards Facebook, which makes Facebook the least trusted of all options. An independent t-test showed no significant influence (p = 0.342) of whether or not people had read the Facebook privacy policy on the amount of trust they felt towards Facebook.

How much do you trust?                    Trust       Moderately   Neither trust  Moderately  Completely
                                          completely  trust        nor distrust   distrust    distrust
Facebook                                   0.9% (2)   22.1% (50)   29.6% (67)     33.2% (75)  14.2% (32)
Friends on Facebook                       39.8% (90)  35.4% (80)   16.8% (38)      8.0% (18)   0.0% (0)
Friends of friends on Facebook            15.9% (36)  52.7% (119)  22.1% (50)      8.4% (19)   0.9% (2)
Users not connected to you on Facebook     0.4% (1)   22.6% (51)   52.7% (119)    21.2% (48)   4.0% (9)

Table 3: How much do people trust those they share their information with?

4.1.7. Effect of the Facebook Privacy Tool

To test whether the introduction of the 'Privacy Basics Tool' on Facebook had any effect on users' perceived control over their information, the answers to question 21 ('Were you aware of the existence of this tool?') were compared to the scores on the construct Perceived Control Effect, using an independent t-test. The results showed a significant difference (p = 0.000) between the scores of participants who were aware of the existence of the privacy tool (M = 1.48, SD = 0.24) and those who were not (M = 1.266, SD = 0.32). This means that we can reject Hypothesis 3 ("The availability of personal control through privacy-enhancing technology (privacy basics tool) is not positively related to consumers' perceived control over their personal information").
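The pooled-variance t statistic behind such a comparison can be sketched as follows. The actual test was run in SPSS on the real group scores, so the PCE values below are hypothetical:

```python
import math

def independent_t(x, y):
    """Pooled-variance independent-samples t statistic for two groups."""
    nx, ny = len(x), len(y)
    mean_x, mean_y = sum(x) / nx, sum(y) / ny
    var_x = sum((v - mean_x) ** 2 for v in x) / (nx - 1)
    var_y = sum((v - mean_y) ** 2 for v in y) / (ny - 1)
    pooled = ((nx - 1) * var_x + (ny - 1) * var_y) / (nx + ny - 2)
    return (mean_x - mean_y) / math.sqrt(pooled * (1 / nx + 1 / ny))

# Hypothetical PCE scores: aware vs. unaware of the privacy tool
aware = [1.5, 1.4, 1.6, 1.5, 1.4]
unaware = [1.2, 1.3, 1.2, 1.4, 1.3]
print(independent_t(aware, unaware))
```

A positive t indicates that the first group (here, those aware of the tool) scores higher on average, matching the direction of the difference reported above (M = 1.48 vs. M = 1.266).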


5. Regression Analysis

In order to test Hypotheses 4 and 5, several simple regression analyses were performed testing the effect of the independent variable (PCE) on the dependent variable (willingness to share information). The mediation model introduced by Baron and Kenny (1986) was used to test whether the mediator variables (privacy concern, value of privacy, value of ad relevance, knowledge of the Facebook privacy policy and privacy concern towards Facebook) had a significant effect. The mediation model that was used can be seen in figure 7, where X is the independent variable, M the mediator variable and Y the dependent variable.

Figure 7: The mediation model

In order to test the mediation effect Kenny (2014) discusses four steps that need to be taken:

1. Show that the independent variable (X) is correlated with the dependent variable (Y). This step establishes that there is an effect that may be mediated.

2. Show that the independent variable (X) is correlated with the mediator. Use M as the criterion variable in the regression equation and X as a predictor (test path a), this step essentially involves treating the mediator as if it were an outcome variable.

3. Show that the mediator variable (M) affects the dependent variable (Y). Note that in this model the total effect is c = ab + c′, the direct effect is c′ = c − ab, and the indirect effect is c − c′ = ab. Use Y as the criterion variable in a


regression equation, with X and M as predictors (estimate and test path b).

4. To establish that M completely mediates the X-Y relationship, the effect of X on Y controlling for M (path c′) should be zero.

These are also the four steps followed in the regression analysis that is performed in this study. The first step is to start with a simple regression analysis to the test the effect of the independent variable (PCE) on the dependent variable (Willingness to share information).

1) Y = β0 + β1 × X

The second model is then used to test the effect of the independent variable (PCE) on the mediator variable (privacy concern, value of privacy, value of ad relevance, knowledge of the Facebook privacy policy or privacy concern towards Facebook). It thus tests path a in figure 7, which is step two in Kenny's (2014) model.

2) M = β0 + β1 × X

The last model tests the effect of both the independent variable and the mediator variable on the dependent variable (willingness to share information). This tests the third point in Kenny's (2014) list.

3) Y = β0 + β1 × X + β2 × M
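The three models above can be estimated with ordinary least squares. The sketch below uses closed-form OLS slopes on hypothetical data to illustrate the decomposition c = c′ + ab, which holds exactly for these regressions:

```python
def slope(y, x):
    """OLS slope of y regressed on a single predictor x (intercept implied)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def two_predictor_slopes(y, x, m):
    """OLS coefficients (b_x, b_m) of y regressed on x and m together."""
    def cross(u, v):
        mu, mv = sum(u) / len(u), sum(v) / len(v)
        return sum((a - mu) * (b - mv) for a, b in zip(u, v))
    sxx, smm, sxm = cross(x, x), cross(m, m), cross(x, m)
    sxy, smy = cross(x, y), cross(m, y)
    det = sxx * smm - sxm ** 2
    return (sxy * smm - smy * sxm) / det, (smy * sxx - sxy * sxm) / det

# Hypothetical PCE (x), mediator (m) and willingness-to-share (y) scores
x = [1.0, 1.2, 1.5, 1.7, 2.0, 2.2]
m = [2.1, 2.0, 2.6, 2.5, 3.1, 3.0]
y = [2.0, 2.4, 2.9, 3.1, 3.8, 4.0]

c = slope(y, x)                              # model 1: total effect (path c)
a = slope(m, x)                              # model 2: path a
c_prime, b = two_predictor_slopes(y, x, m)   # model 3: paths c' and b
print(abs(c - (c_prime + a * b)) < 1e-9)     # prints True: c = c' + ab
```

The equality c = c′ + ab is an algebraic identity of OLS with a single mediator, which is why the mediated share of the total effect can be read directly off the estimated paths.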

Finally, it is necessary to test whether this mediation effect is significant. For this, a Sobel test was used, with equation:

Sobel test: z = (a × b) / √(b² × Sa² + a² × Sb²)

This Sobel statistic is treated like a z-statistic: absolute values larger than 1.96 are significant at the 0.05 level and are sufficient to reject the hypothesis that there is no mediation effect.
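The Sobel statistic can be computed directly from the path coefficients and their standard errors. The sketch below uses the rounded values later reported in table 5 for knowledge of the privacy policy; because of rounding, the result differs slightly from the value of 5.2324 reported there:

```python
import math

def sobel_z(a, b, s_a, s_b):
    """Sobel test statistic: z = ab / sqrt(b^2 * Sa^2 + a^2 * Sb^2)."""
    return (a * b) / math.sqrt(b ** 2 * s_a ** 2 + a ** 2 * s_b ** 2)

# Path coefficients for 'knowledge of privacy policy' (rounded, from table 5)
z = sobel_z(a=0.636, b=0.907, s_a=0.070, s_b=0.1407)
print(z > 1.96)  # prints True: significant mediation at the 0.05 level
```

Since |z| exceeds 1.96, the indirect effect through this mediator is significant at the 0.05 level, matching the conclusion drawn from table 5.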


5.1 Results of Regression Analysis

5.1.1 Direct Effects

The result of the general regression analysis testing the correlation between the independent and the dependent variable can be found in table 4. The tables with the complete results can be found in Appendix C.

                                  β (PCE)   Std. Error   Sig.    R²      Adjusted R²
Willingness to share information  0.800     0.198        0.000   0.068   0.063

Table 4: Regression output

These outcomes show that the effect of the Perceived Control Effect (PCE) on the willingness to share information is significantly positive (β = 0.800, p = 0.000). This means that when people feel they have more control over the information they share, they are more likely to share information, in line with what was predicted in the hypotheses. Based on this regression analysis, Hypothesis 4 ("An increase in perceived control over the release of private information on Facebook will not decrease an individual's concern about privacy and therefore not increase their tendency to reveal sensitive information, even when the objective risks associated with such disclosures do not change.") can be rejected.

The moderating effect of consumer characteristics was also tested on this model; the effects of gender, age and education were not significant. The adjusted R squared in table 4 is a measure of goodness of fit and indicates how well the model would predict a different sample from the same population. The adjusted R squared (0.063) is low, which suggests that other variables also affect the dependent variable. The residual plot was checked for heteroscedasticity, but did not show any signs of it.

5.1.2 Mediating effects

The mediating effects of privacy concern, value of privacy, value of ad relevance, knowledge of the Facebook privacy policy and privacy concern towards Facebook were tested using the method of Baron and Kenny (1986) in combination with the Sobel test. Table 5 below shows the input and output of the Sobel test. The complete analysis, performed in SPSS using


the PROCESS macro developed by Andrew F. Hayes (2013), can be found in Appendix C.

             Privacy    Value of   Value of      Knowledge       Privacy concern
             concern    Privacy    ad relevance  privacy policy  Facebook
a            -0.155      0.001     -0.049         0.636          -2.36
b            -0.037      0.521     -0.086         0.907          -0.316
c             0.800      0.800      0.800         0.800           0.800
c′            0.806      0.800      0.796         0.223           0.054
Sa            0.176      0.226      0.276         0.070           0.195
Sb            0.084      0.049      0.045         0.1407          0.054
Sobel test    -          -          -             5.2324          5.160
p             -          -          -             0.000           0.000

Table 5: Sobel test

As can be seen from the Sobel test, only two of the variables significantly mediate the effect of PCE on the willingness to share information at the 95% confidence level: privacy concern towards Facebook and knowledge of privacy policy. For the variables privacy concern, value of privacy and value of ad relevance, the Sobel test could not be performed, since the effect of the independent variable on the mediator (path a) was not significant and there is therefore no indirect effect. Given the 95% confidence interval in combination with these outcomes, the following conclusions can be drawn. First, PCE had a significant negative effect (β = -2.36, p = 0.000) on privacy concern towards Facebook, which in turn has a negative effect (β = -0.316, p = 0.000) on the willingness to share information. So if people feel more in control over the information they share on Facebook, they will feel less concerned about their privacy, and people who are more concerned about their privacy on Facebook are less willing to share information online. Second, PCE has a significant positive effect on knowledge of privacy policy (β = 0.636, p = 0.000), meaning that people who feel more in control are also better informed about the privacy policy on Facebook. The surprising thing, however, is that there is also a positive effect of knowledge of privacy policy on the willingness to

(35)

35

have better knowledge of what happens to their private information would be less likely to share information online. Next to this is can be seen that the variables

privacy concern, value for privacy and value of add relevance have no significant

mediating effect. The variable privacy concern had a significant negative effect (β= -0.037 and p= 0.0001) on the dependent variable but no significant effect on the independent variable. The variable value of ad reference has a significant negative effect (β= 0.086 and p= 0.000) on the dependent variable but no significant effect on the independent variable. The same goes for the variable value for privacy, which has a significant positive effect on the dependent variable (β= 0.521 and p= 0.000) but no significant effect on the independent variable. These are very interesting outcome since it shows that the relationship between PCE and Willingness to share

information cannot be explained by the amount of privacy concern, the value of

privacy people feel or the value of add relevance. Therefore even people who are highly concerned about their privacy (or highly value their online privacy or dislike behaviorally targeted advertisements) will still share more information if they perceive a higher amount of perceived control. These models were all tested for heteroscedasticity and none of the models showed any signs of heteroscedasticity. Based on the analysis in this part of the study hypothesis 5 (The relationship

between perceived control and tendency to reveal sensitive information as explained in Hypothesis 4 is not mediated by several factors: 1) privacy concern, 2) value for privacy, 3) value of add relevance, 4) knowledge of Facebook privacy policy and 5) Privacy concern towards Facebook) can be partly rejected. Only the variable privacy concern towards Facebook and knowledge of Facebook privacy policy have a

significant mediating effect when taking a 95% confidence interval, so for these mediating variables the hypothesis can be rejected.


6. Discussion & Conclusion

The overall goal of this study was to further illuminate the effects of perceived control mechanisms on consumers' willingness to share information. The expectation was that such effects were exhibited through mediation by 1) privacy concern, 2) value for privacy, 3) value of ad relevance, 4) knowledge of Facebook privacy policy and 5) privacy concern towards Facebook. The evidence from this study provides empirical support that perceived control is a predictor of people's willingness to share information. In addition, it showed that privacy concern towards Facebook and knowledge of the Facebook privacy policy were the key mediators in this relationship.

This study introduces a new construct, the Perceived Control Effect, to research how this consumer trait affects consumers' trade-off between privacy concern, privacy concern specific to Facebook, privacy knowledge, value for privacy and ad relevance on the one hand, and the willingness to share information on the other. This information is relevant to governments because it can help shape policies regarding online privacy. The idea that control is an essential component of privacy is so universal that control has become a code word employed by government bodies in proposals for enhanced privacy protection, and by data holders and service providers in order to avert criticism regarding the privacy risks borne by data subjects. The argument is understandable and appealing. It is further strengthened by the fact that people in general also state that they want more control over how their information is collected and used (Consumer Reports National Research Center, September 2008, http://www.consumersunion.org/pub/core_telecom_and_utilities/006189.html).

Yet, higher levels of control may not always serve the ultimate goal of enhancing privacy protection. The findings in this research further illustrate the paradoxical policy implication that the feeling of security conveyed by the provision of privacy tools may lower concerns regarding the actual accessibility and usability of information, driving those provided with such protections to reveal more sensitive information to a larger audience, even though this is not in line with their interests. This can specifically be seen from the fact that the construct of privacy concern has no mediating effect on the relation between PCE and willingness to share information. People with very high concern about their (online) privacy increase their information flow just as easily as people with low concern for privacy as soon as they perceive more control. This illustrates that providing people with a sense of control can lead them to act against their own initial interests. New and easy-to-use privacy
