
THE VALUE OF PRIVATE

INFORMATION

Thesis

BORIS NIENHUIS

STUDENT NO. 0165824

27-01-2017

EXECUTIVE PROGRAMME IN MANAGEMENT STUDIES- MARKETING TRACK

ABS, UVA



Statement of Originality

This document is written by Student Boris Nienhuis who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.



Table of contents

Abstract ... v

1 Introduction ... 1

1.2 Private data as a new asset ... 1

1.3 Monetizing personal information... 3

1.4 Theories of personal data markets ... 6

Perturbation-based data privacy ... 7

1.5 Pricing privacy and private data ... 8

1.6 Value of private information ... 9

1.7 Summary... 10

1.8 Research Gap ... 11

1.9 Research question ... 12

2 Literature review ... 14

2.1 Privacy ... 14

2.2 Privacy and private data ... 16

2.2.1 Privacy economics & psychology of ownership and asset valuation ... 16

2.2.2 Type of data/information sensitivity ... 17

2.2.3 Value of data sensitivity ... 19

2.4 Trust ... 20

2.5 Legitimacy ... 22

2.6 Summary of variables ... 24

2.7 Conceptual model ... 25

3 Methodology ... 27

3.1 Technique of data collection ... 27

3.2 Items used ... 27

3.3 Sample participants ... 29

3.4 Strength and limitations ... 29

4 Results research ... 31

4.1 Results of test survey ... 31

4.2 Results final questionnaire main research ... 32

Questionnaire ... 32

Demographics ... 33

Scenarios ... 33



Effect of variables on valuation of private information ... 38

5 Discussion ... 41

6 Conclusion ... 43

Limitations ... 44

Future research ... 44

References ... 46

Appendix ... 50

Test questionnaire ... 50

Final questionnaire ... 52

Tables & Figures

Table 1 List of organizations who protect and monetize private data ... 3

Table 2 Private information from most to least valued ... 19

Table 3 Scores of different scenarios ... 34

Table 4 Difference in group means ... 37

Figure 1 Conceptual model 1 ... 13

Figure 2 Conceptual model 2 ... 25

Figure 3 Mean score of each scenario ... 37



Abstract

Technological developments have created several new possibilities to collect, distribute, store and manipulate individuals’ private information. As a result, an individual’s private information is increasingly less private. Private information can range from an individual’s name, address and date of birth to medical information, e-mail messages and social security number. This information is of great value for parties such as (online) marketers, search engines and social media, but also for governments, universities and research facilities. These data are so valuable that to some they can be considered the world’s new oil.

The possibilities to leverage this private information and create a marketplace have been explored in several studies and by different (commercial) organizations. There is, however, no ubiquitous method to calculate the value of private information, and so far the value private information has for individuals is not included in the calculation. This study investigates the valuation of private information by individuals and which factors influence this valuation. The central research question is: how do people value their private information depending on the nature of this private information, trust in the party who requests this information and the legitimacy of the information request? This is investigated via an experiment by survey. Eight different scenarios were presented to eight different groups of respondents, each scenario describing a situation where a party is interested in the respondents’ private information. The respondents could indicate for how much money they were willing to share their private information in that scenario. The results of the survey show that the sensitivity of the private information is an important driver of its valuation: the more sensitive the information, the higher its perceived value. Trust in the party which asks for the information also influences the value: the more trust, the lower the private information is valued, and vice versa. Legitimacy, however, does not seem to influence the valuation of private information by individuals. The results of this study contribute to a better understanding of the value of private information and perhaps to a possible monetization of these data by individuals. Furthermore, the results give an indication of what type of information individuals consider acceptable for companies and other parties to inquire about; moreover, they stress the importance of trustworthiness of organizations when it comes to dealing with the private information of their customers.



1 Introduction

Technological innovations such as the internet and smartphones have enabled governments and companies alike to amass individuals’ private information on an unprecedented scale. On the one hand, people share their private information involuntarily, through government spying programs and companies tracking their online behaviour via cookies. On the other hand, people share their private information voluntarily with the world via social media. An incredible amount of data is shared every day. According to Spiekermann, Acquisti, Böhme and Hui (2015, p. 161): “Individuals around the world send or receive 196 billion emails, submit over 500 million tweets on Twitter, and share 4.75 billion pieces of content on Facebook”. In cyberspace an individual’s personal information is often required to enable specific online activities. To create an online account, one often needs to share personal information such as name, email address, phone number, date of birth and address with a third party. Every time a mobile app is downloaded and installed, its user shares a bit of his or her privacy and (mobile) behaviour with the (corporate) world.

New technologies have created a give-and-take relationship between technology and individuals’ information: people are progressively more dependent upon technologies, which in turn need people’s personal information in order to function (Jerome, 2013). Encouraged by the development of technologies that facilitate the collection, distribution, storage and manipulation of personal consumer information, privacy has become a ‘hot’ topic for policy makers and businesses alike: “Our age of big data has reduced privacy to a dollar figure”. “Bits of personal information sell for a fraction of a penny, and no one’s individual profile is worth anything until it is collected and aggregated with the profiles of similar socioeconomic categories”. “It would be interesting to see if there are possibilities for individuals to sell their data” (Jerome, 2013, p. 48).

1.2 Private data as a new asset

The World Economic Forum (2011, 2012) describes personal data as a new ‘asset class’. The European Commissioner for Consumer Protection compares personal data with oil to illustrate how important personal information has become (Spiekermann et al., 2015). The business models of today’s biggest tech companies operate by providing a service to customers for free while in return collecting and monetizing their personal information (Carrascal, Riederer, Erramilli, Cherubini & de Oliveira, 2013). Companies such as Google and Facebook are built upon collecting and selling millions of files to identify patterns in the behavior of different demographics. Companies from more traditional industries such as banking and retail are also exploring opportunities to monetize the pile of data they are sitting on.

According to Spiekermann et al (2015, p161): “Personal data is seen as a new asset because of its potential for creating added value for companies and consumers, and for its ability to enable services hardly imaginable without it. Companies use personal data for a variety of purposes: reduce search costs for products via personalized and collaborative filtering of offerings; lower transaction costs for themselves and for consumers; conduct risk analysis on customers; and increase advertising returns through better targeting of advertisements. Personal data can also be a product in itself, when it is entangled with user generated content, as in the case of social media. Personal data can also become strategic capital that allows businesses to derive superior market intelligence or improve existing operations. This can materialize in better or new forms of product development (for instance, mass customization: Henkel and von Hippel 2005) as well as price discrimination (Acquisti and Varian 2005). Businesses can also build competitive advantage or create market entry barriers by using personal information to lock customers in (Shapiro and Varian 1998).”

The value of personal data for companies thus lies on the one hand in collecting large amounts of these data for market research: what are the market preferences of consumers of a certain age, gender, social background etc.? On the other hand, companies are interested in collecting data from individuals when this enables them to make a personalised offer: a product or service for a reduced price.

Although personal data is sometimes described as a new asset class, most people share their private information for free. Many believe this will end in the coming years: businesses that depend on free data will soon need to offer a decent incentive to consumers to share their private information (Financial Times, 2015, http://www.ft.com/cms/s/0/9046ec52-8168-11e5-8095-ed1a37d1e096.html#ixzz3sDcr3Gml access date: October 25, 2015). There is an increasing focus on the fact that, while third parties significantly profit from buying and selling consumers’ data, the individuals whose data is being traded do not receive any financial compensation. It is estimated that by 2020 people’s digital identity applications, for businesses and consumers, can bring an annual benefit of approximately one trillion euros in Europe (Liberty Global, Boston Consulting Group, 2012). A discussion is going on about whether users should benefit from the data they generate, since there is no reason why third parties should not pay individuals for the use of their data. What is problematic, however, is that people do not consider their private information as property and therefore are not aware of its value. It would be interesting to see what happens when people do consider their private data as property and value it as such, and what effect this would have on companies and institutions that depend on the use of individuals’ private data. So far, however, individuals do not have a clear idea of how much their data is actually worth.

1.3 Monetizing personal information

According to a study by the OECD (2013) about the economics of personal data, there is no common method to estimate the value of personal data. This can be done either via “market valuations of personal data, or other related market measures, or on individual perceptions of value of personal data and privacy” (OECD, 2013, p. 18). Several companies, non-profit organizations and individuals, however, are becoming aware of the potential worth of private data for individuals and propose business models and market structures to give consumers more control to leverage their personal data. According to Jerome, “Monetizing privacy has become something of a holy grail in today’s data economy” (Jerome, 2013, p. 48). Creating a market for personal data could result in a shift in the balance of power between consumers and companies that collect data. Companies and non-profit organisations such as Datacoup, Meeco, Social Data Collective and Qiy are offering their services to consumers to protect and leverage their data. See table 1 below for more information on these different companies and services.

Table 1 List of organizations who protect and monetize private data

Meeco

“Meeco is a new service to help you manage life and all your important digital relationships. Add, organise, edit and securely share all your information. Meeco was created with the purpose to empower people to own and benefit directly from their personal data. Reward is not just about money; it is what matters to you. Meeco is about helping you gain the insight and have the data to negotiate better outcomes for you and your family”. https://meeco.me/why-meeco.html (access date: February 1 2016)

Citizen Me

“Today, your data is used by businesses to help them make better decisions. This is a good thing, it provides us with amazing products and services to use. But what can we achieve when we leverage our own data to help us manage our lives and make better decisions? CitizenMe places the power of data science in the hands of everyday people. By doing so, our users can unlock insights about themselves to help them navigate the unpredictable challenges life throws their way”.


Digi.me

“We are enabling the personal data economy” https://get.digi.me/ (access date: February 1 2016)

Digi.me, currently operating as a digital storage facility for just under 350,000 users in 150 countries around the world, plans to launch a function on its existing app that will allow businesses to approach subscribers directly to purchase their data.

Datacoup

“Almost every link in the economic chain has their hand in our collective data pocket. Data brokers in the US alone account for a $15bn industry, yet they have zero relationship with the consumers whose data they harvest and sell. They offer no discernible benefit back to the producers of this great data asset – you. Datacoup is changing this asymmetric dynamic that exists around our personal data. The first and most important step is getting people compensated for the asset that they produce. We are building for a future where individuals like you are in control of your data and are the chief beneficiaries of its value”. https://datacoup.com/docs#about (access date: February 1 2016)

Qiy.com

“The Qiy Foundation's mission is to give people control over their data and facilitate them to do smart things with it. This applies to data they produce themselves and data that is available from third parties”. https://www.qiyfoundation.org/about-qiy/ (access date: February 1 2016)

Apart from the examples of companies and organizations that focus on data management and data monetization in the table above, companies that deal with a lot of private data of their users have also transformed data into a commodity for their customers. In China, for example, companies such as e-commerce retailer Alibaba and technology giant Tencent are developing and already working with a personal credit system called Sesame. “Sesame Credit is the first credit agency in China to use a scoring system based on online and offline data to generate individual credit scores for consumers and small business owners; it tracks financial and consumption activities of our users for generating the score” (The Wall Street Journal, 2015, http://blogs.wsj.com/chinarealtime/2015/11/06/china-wants-to-tap-big-data-to-build-a-bigger-brother/ access date: October 20 2015). Alibaba creates an incentive for customers to use its own payment system by raising the Sesame score for those who use Sesame. Sesame awards points to its users for their purchases, but also for the amount of private information they are willing to share with Sesame: professional network, financial history, educational background etc. Sharing such information increases a person’s social score, and the higher the social score, the more rewards a consumer gets. “The company makes no secret about their creditworthiness and character”. “Someone who plays video games for ten hours a day, for example, would be considered an idle person, and someone who frequently buys diapers would be considered as probably a parent, who on balance is more likely to have a sense of responsibility,” Li Yingyun, Sesame’s technology director, told Caixin, a Chinese magazine. In some ways, Tencent’s credit system goes further, tapping into users’ social networks and mining data about their contacts, for example, in order to determine their social-credit score (The New Yorker, 2015, http://www.newyorker.com/news/daily-comment/how-china-wants-to-rate-its-citizens access date: December 15, 2015).

The above is a clear, perhaps a bit scary, example of the monetization of private data and privacy in general. For the data market to really take off, however, it is up to consumers to decide how important monetizing their data is and whether they need a financial compensation. The consumer has to decide whether to control his or her data: perhaps not only to sell it, but also to manage it. Although companies such as those mentioned in table 1 are already attempting to leverage people’s personal information, it is contested whether personal information can be considered a “property” at all. There are, according to Spiekermann et al. (2015, p. 162), several ‘data protection principles’ which leave “little room for market negotiations between the data subject and the data controller, let alone between third parties”. “Thus personal data markets must deal with these constraints, or operate in grey areas – as many currently do. For example, some firms use enforcement gaps or regulatory arbitrage between jurisdictions to engage in the trade of personal data”. “As a result, aside from special and tightly regulated ventures (such as address brokerage for direct marketing, credit reporting, government health insurance, or pollster panels), most of the personal data gathered today online seems to remain a rather inconvenient unit of account”. “If that was to change and a true kind of ‘currency’ or monetary value were to be sought for personal data, then we would need to deal with the fact that data has – in many respects – the traits of a free commons. By its nature, personal data is non-rival, cheap to produce, cheap to copy, and cheap to transmit”. [...] “Hence, markets for personal data would need to rely on legal frameworks that establish alienability, rivalry, and excludability for personal data, and assign initial ownership to an entity such as the data subject”.

When personal data is considered to be property and property rights are enforced, business models that rely heavily on free data might be negatively affected and no longer be profitable. Evaluating the worth of personal data is, according to Spiekermann et al. (2015, p. 163), “due to context-dependence and contingencies that affect the costs and benefits arising from the protection or the sharing of personal data” notoriously difficult. [...] “The difficulty of measuring the value of data raises many questions about price discovery in personal data markets”. “How can buyers and sellers negotiate in a setting where information is inherently asymmetric? What market mechanism can determine the right price under the constraint of minimal information leakage? How should buyers and sellers do the accounting for their trades?”

There is no clear consensus among scholars and studies about what a person’s private information is exactly worth; there are different assessments of the worth of an individual’s private data. In 2013 Federico Zannier started a Kickstarter campaign to sell his own information: “I've data mined myself. I've violated my own privacy. Now I am selling it all. But how much am I worth?”. Depending on how much of Zannier’s private information people wanted to know, they could pledge an amount of money: the bigger the donation, the more information. In 30 days 213 people pledged $2,733 (Kickstarter, 2013, https://www.kickstarter.com/projects/1461902402/a-bit-e-of-me access date: March 7 2016).

In a 2012 study by J.P. Morgan Chase, each unique user was worth approximately $4 to Facebook and $24 to Google (Li, Yang Li, Miklau & Suciu, 2014). Other studies have shown, however, that a person’s personal data is worth less than a dollar (Financial Times, 2013, how much is your personal data worth? http://www.ft.com/intl/cms/s/2/927ca86e-d29b-11e2-88ed-00144feab7de.html#axzz3imjegqZC access date: April 5, 2016). In 2014 a Dutch student sold his ‘personal soul’ in an online auction to the website The Next Web for £288 (The Guardian, 2014, http://www.theguardian.com/news/datablog/2014/apr/22/how-much-is-personal-data-worth access date: February 1 2016).

1.4 Theories of personal data markets

Several studies investigate the possibilities to monetize private information via mathematical methods, creating a market where individuals’ private information can be sold via market makers to interested parties. Gkatzelis, Aperjis and Huberman (2015) consider a market where buyers can access unbiased samples of private data by appropriately compensating the individuals to whom the data corresponds. Their approach leverages the randomized nature of sampling, along with the risk-averse attitude of sellers, in order to discover the minimum price at which buyers can obtain unbiased samples. They show that their mechanism provides optimal price guarantees in several settings (Gkatzelis et al., 2015).

The problem that the authors address is that there are large amounts of private data that cannot be accessed due to privacy concerns (one of the principles Spiekermann et al. (2015) addressed that causes private data to be different from other commodities), while at the same time a lot of individuals’ data is bought and sold by big data brokers without the knowledge of the owners of these specific data (Gkatzelis et al., 2015). A solution for this would be to create a market for private data where sellers can offer their data to interested parties, while preserving the privacy of the sellers’ data by providing access to an unbiased sample of a certain size. In the design proposed by Gkatzelis et al. (2015), the market maker operates as an intermediary between the buyer and the seller. The goal of the mechanisms Gkatzelis et al. envision is to design a “truthful mechanism that the market-maker can use, i.e., mechanisms that incentivize the sellers to always report their true privacy related preferences. Also, to ensure that no seller regrets participating in the market, we restrict ourselves to individually rational mechanisms: that is, mechanisms that always reward each seller at least as much as the privacy cost that he suffered. Note that this private data is already stored and the individuals cannot change it, so the market maker does not need to worry about eliciting the true value of the data as well.” (Gkatzelis et al., 2015, p. 111).
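The two properties quoted above, truthfulness and individual rationality, can be illustrated with a toy posted-price market maker. This is only a sketch under simplifying assumptions (a fixed posted price and the function name are inventions of this example, not part of the cited design): because a seller’s payment never depends on his own report, reporting the true privacy cost is a dominant strategy, and every accepted seller is paid at least that cost.

```python
def posted_price_market(reported_costs, price):
    """Toy market maker with a fixed posted price (an illustrative
    assumption, not the mechanism from Gkatzelis et al.).

    Each seller reports a privacy cost; sellers whose cost is at most
    the posted price sell one record and are paid the posted price
    itself.  The payment never depends on the seller's own report, so
    truthful reporting is a dominant strategy (truthfulness), and
    payment >= cost keeps participation individually rational."""
    accepted = [i for i, cost in enumerate(reported_costs) if cost <= price]
    payments = {i: price for i in accepted}
    return accepted, payments

# Three sellers with privacy costs 1.0, 5.0 and 2.0; posted price 2.5:
# sellers 0 and 2 participate and each receives 2.5.
accepted, payments = posted_price_market([1.0, 5.0, 2.0], 2.5)
```

Overstating one’s cost here can only forfeit a profitable sale, and understating it risks selling below cost, which is the intuition behind truthful elicitation.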

A similar study was done by Li, Yang Li, Miklau and Suciu (2014). They proposed a similar mechanism to protect the owner’s data while releasing to analysts noisy versions of aggregate query results. The article adopts and extends key principles from both differential privacy and query pricing in data markets (Li et al., 2014).

Perturbation-based data privacy

Perturbation-based data privacy is about protecting the individual’s data while simultaneously releasing to interested parties the result of aggregate computations over a large population (Li et al., 2014). The theory of perturbation-based privacy mechanisms is not, however, applied in practice, since internet companies have followed a business model where data is acquired by offering a free service and selling these data to third parties (Li et al., 2014). Li et al. propose a framework to assign prices to queries in order to compensate the owners of data for their loss of privacy. For privacy protection reasons, perturbation of the data is always necessary, and the more perturbation, the cheaper the price. According to Li et al., “the relationship between the accuracy of a query result and its cost depends on the query and the preferences of contributing data owners. Formalizing this relationship is one of the goals of this work” (Li et al., 2014, p. 34). The second goal of the authors was “formalizing arbitrage for noisy queries [...] it is possible to design quite flexible arbitrage-free pricing functions” (Li et al., 2014, p. 43). A third goal is to “formalize the relationship between privacy loss and payments to the data owners”. There are three parties involved: the owner of the data, the market maker, and the buyer and his queries (Li et al., 2014, p. 43). The article identifies that data owners suffer a degree of privacy loss, which they have to be compensated for. The proposed framework calculates a price based on different variables such as the amount of added ‘noise’, the type of data etc. A mathematical expression calculates the revenue for the selling party (Li et al., 2014).
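The intuition that more perturbation makes a query cheaper can be sketched in a few lines. This is a minimal illustration, not the pricing framework of Li et al.: the Laplace mechanism, the linear price function and all parameter names are assumptions made for the example.

```python
import math
import random

def laplace_sample(scale, rng=random):
    # Draw Laplace(0, scale) noise via inverse-CDF sampling.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_average(values, epsilon, value_range=(0.0, 1.0)):
    """Answer an average query under Laplace perturbation.  For n values
    bounded in [lo, hi] the sensitivity of the average is (hi - lo)/n,
    and the noise scale is sensitivity / epsilon: a smaller epsilon
    means more noise, i.e. more perturbation."""
    lo, hi = value_range
    scale = (hi - lo) / (len(values) * epsilon)
    true_avg = sum(values) / len(values)
    return true_avg + laplace_sample(scale)

def query_price(n_owners, epsilon, rate_per_owner=0.01):
    """Illustrative price rule: linear in accuracy (epsilon) and in the
    number of data owners, so more perturbation -> cheaper query."""
    return n_owners * epsilon * rate_per_owner
```

With this toy rule, an accurate query (epsilon = 1.0) over 100 owners costs 1.00, while a heavily perturbed one (epsilon = 0.1) costs 0.10, echoing “the more perturbation, the cheaper the price”.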

The above studies help to create an algorithm which enables a hypothetical marketplace for producers of and parties interested in private data. Via the mechanisms described, the individuals whose private data is traded will also be financially compensated. These studies do not, however, look into how people themselves value private data and the possible effect this has on the price. The studies in the next paragraph take a closer look at this.

1.5 Pricing privacy and private data

A study from 2013 investigates how users value different types of private information (PI) while being online, and compares this to what extent consumers value their PI in an offline environment (Carrascal, Riederer, Erramilli, Cherubini & de Oliveira, 2013). According to Carrascal et al., the ecosystem of service providers on one end and users on the other can be viewed as a two-sided market, where the ‘good’ that is being traded is the PI of users. For users to have a better understanding of the cost-benefit of the transaction, it is important that they first know the value of the PI they are trading away. The value that users put on their own PI depends on the context and on the type of interaction (Carrascal et al., 2013). The research focused on what value users associate with their PI, more specifically their web browsing behaviour (Carrascal et al., 2013). The study showed that users value PI related to their offline identities more (three times) than their browsing behaviour: “Users also value information pertaining to financial transactions and social network interactions more than activities like search and shopping. We also found that while users are overwhelmingly in favour of exchanging their PI in return for improved online services, they are uncomfortable if these same providers monetize their PI” (Carrascal et al., 2013, p. 189). Thus people are willing to share their private information in return for better services or deals, but they would rather not share it with third parties. The median value users assign to their offline identity is €25 and to their browsing history €7. The valuation of offline identity does not vary across different services, but it does for online web browsing behaviour: social and finance websites receive on average a higher valuation. What the study furthermore shows is that most users are overwhelmingly negative about their PI being monetized (Carrascal et al., 2013).

Another study investigated whether consumers are willing to pay money to websites not to be tracked via cookies, or whether they are willing to receive money for being tracked by a website (McDonald & Cranor, 2010). The results from this study show that a majority of consumers are not willing to pay for their privacy; they are, however, willing to receive money in return for their privacy. A similar result was obtained by Acquisti (2009): in his study, consumers regarded the financial advantage of not paying more favourably than their privacy. Similar studies by Beresford, Kübler and Preibusch (2012) also show that consumers are not willing to pay money for their privacy. As a result, the value of personal information is often estimated too low by consumers (Spiekermann, 2012). A study by Savage and Waldman (2013) shows, however, that consumers are indeed willing to pay for their privacy. The possible difference with this research is that it was a survey and the study did not investigate the behaviour of people, which tends to be different when it comes to their privacy. Roosendaal (2013) investigated whether consumers are willing to share their private information in return for services. A majority indicated having neutral feelings about their information being used and sold to third parties, which, according to the study, is caused by the fact that consumers know very little about the value of their personal information to begin with. However, a minority indicated feeling good about giving their private information in return for free services. Acquisti (2009) underscores this by indicating that consumers are oblivious about the value of their private information and what companies are using it for, and therefore cannot make a good estimation of what their information is worth when they are paying with it.

Benndorf and Normann (2014) did an experiment to investigate whether people are willing to sell their personal data, depending on the type of personal data and the elicitation method. According to the authors, the experiments are novel in that they are incentivized, the focus on privacy issues is salient, and the use of the data – for marketing purposes – is transparent and unambiguous. Moreover, their experiment differs from other experiments in that those do not make clear who will make use of the personal data and for what purpose. Ambiguity can have an effect on the price individuals demand for their data, since it decreases their willingness to sell their private data (Benndorf & Normann, 2014). Another difference was that the privacy issue was salient; a lack of salience also caused lower values to be elicited in other studies. In a laboratory setting, using a Becker-DeGroot-Marschak (BDM) mechanism (measuring utility by a single-response sequential method) and a TIOLI (take it or leave it) mechanism, subjects could indicate how much money they wanted for different bundles of data, consisting of: preferences, contact data, a combination of these two, Facebook ‘about’ and Facebook timeline (Benndorf & Normann, 2014). To compare the results in an experimental setting with a non-experimental setting, a subsequent survey was done among a different population in which respondents were asked whether they were willing to sell the same bundles. The subjects of the survey were not as willing as the group in the experimental setting. In the controlled laboratory experiment about 5 of 6 participants were willing to sell their private data. The price for which the participants were willing to sell their data was around €5 in the TIOLI test and between €15 and €19 in the more advanced BDM design (Benndorf & Normann, 2014).
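The BDM elicitation mentioned above can be sketched in a few lines of code; the function name and the uniform price distribution are illustrative assumptions of this example. A random buying price is drawn, the sale happens only if that price is at least the participant’s stated willingness to accept (WTA), and the participant is paid the drawn price rather than the stated one, which makes stating one’s true WTA optimal.

```python
import random

def bdm_round(stated_wta, max_price, rng=random):
    """One Becker-DeGroot-Marschak round (illustrative sketch).

    A buying price is drawn uniformly at random from [0, max_price].
    The data bundle is sold iff the drawn price is at least the
    stated willingness to accept, and the seller receives the drawn
    price, never the stated one - so overstating the true WTA can
    only forfeit profitable sales, and understating it risks selling
    below one's true valuation."""
    offer = rng.uniform(0.0, max_price)
    sold = offer >= stated_wta
    payment = offer if sold else 0.0
    return sold, payment
```

Stating a WTA of zero guarantees a sale at whatever price is drawn, while a WTA above the maximum drawable price guarantees no sale; in between, truthfulness maximizes expected payoff.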

1.6 Value of private information

According to the OECD paper about the economics of personal data, valuing private information can be done via a survey or an economic experiment, although research in this area is still in a “preliminary” stage. A survey was done among respondents in different countries to investigate privacy and security in a connected life. This study was sponsored by Trend Micro and performed by the Ponemon Institute, “to learn if consumers are worried about how technology that captures personal information, especially Internet of Things (IoT) or Internet of Everything (IoE), is affecting their privacy and security” (Ponemon Institute, LLC, 2015, p. 1). Amongst other things the study explored how people value different types of private information, classifying private information into different categories. A small majority of the respondents in the study (56%) was willing to share their personal data with a trusted party in exchange for money (Ponemon Institute LLC, 2015).

In 2001 Adams and Sasse investigated which factors influence people's willingness to share privacy-sensitive information with a third party. They discovered that the assessment of privacy risk depends on three main factors: 1) information receiver: how much trust people have in the party that accesses their information; 2) information usage: the way this data is used; and 3) information sensitivity: the different sensitivities of the various items requested will have an impact on the disclosure rate. A social security number, for example, is considered more privacy-sensitive than one's name or address (Malheiros, Brostoff, Jennett, Sasse, 2013).

1.7 Summary

Below is a summary of the different studies discussed above:

• People, scholars and others increasingly consider private data a commodity.

• Private information is very different from other property, which makes it very difficult to price.

• Most people do not feel ownership of their private information.

• Different parties (companies and researchers) are thinking of ways to monetize private information and let individuals be financially compensated for the use of their data.

• Several studies have designed models which consist of a market maker who operates as a broker between the sellers and the buyers.

• Some of these consider a perturbation-based model, where perturbation-based data privacy is about protecting the individual's data while simultaneously releasing to interested parties the result of aggregate computations over a large population (Li et al, 2014).

• The more perturbation, the cheaper the price: the identity of the seller is not disclosed, so it is not possible to discover the identity of the person who sold the data, only their profile.

• Private information is valued differently depending on the type of private information.

• The value that users put on their private information and privacy depends on the context and on the type of interaction.

• Types of private information can be categorized in different classes based upon sensitivity.

• The more consumers trust a third party, the more they are willing to share their information with this third party.

• When it comes to privacy, there is a difference between what consumers say and the way they behave.

• There is a difference between willingness to pay (WTP) for privacy and willingness to accept payment (WTA) for giving up privacy. Studies have shown that people are less willing to pay for privacy, and the WTA amount exceeds the WTP amount.

• There is no universal formula for pricing personal data or privacy. It is therefore difficult to calculate the market value of private information.

• Assessment of privacy risk depends on three factors: 1) trust in the receiver of the information, 2) sensitivity of the private information and 3) usage of this information.
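The perturbation-based release summarized above can be illustrated with a simple noisy aggregate. This is a sketch in the spirit of such models, not the actual mechanism of Li et al (2014); the Laplace noise and the function name are assumptions made here.

```python
import math
import random

def noisy_average(values, scale=1.0, rng=random):
    """Release a perturbed aggregate over many individuals' data.

    The individual values never leave the data broker; only the average
    plus Laplace-distributed noise is published. A larger `scale` means
    more perturbation and hence stronger protection of each individual,
    which is why heavily perturbed data commands a lower price in
    perturbation-based market models.
    """
    true_avg = sum(values) / len(values)
    # Sample Laplace(0, scale) noise via inverse-CDF sampling.
    u = rng.uniform(-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_avg + noise
```

A buyer of the perturbed average learns something about the population without being able to single out any one seller's value.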

1.8 Research Gap

Investigating what value private information has to individuals is important for several reasons.

1) There are different ways to calculate the value of a person's private information. The value an individual assigns to his or her private information, however, is not included in any of the described pricing methods. Some studies, such as the survey of the Ponemon institute (2015) and the study by Benndorf and Normann (2014), have investigated the pricing of types of private data and the willingness of individuals to sell private data. The results of these studies show that factors such as the context and usage for which the data request is made, and trust in the party which asks for the data, influence the valuation and pricing of data by individuals; however, these factors were not taken into the design of the survey and experiment. For example, the difference between the valuation of private information in relation to a trusted and a non-trusted party was not examined. Thus, it is unclear what effect these factors have on the eventual valuation of the data. It would be interesting to take a closer look at how they influence the valuation of private data. This would further contribute to the field of valuation of private information, which is, according to the OECD, in a "preliminary stage" (OECD, 2013).

2) Private information is an important commodity: 'the new oil'. Most people and institutions, however, don't consider it that way. For society it is important to understand the true value of private information. To further enable private information to be treated as a commodity, it is important that people get a better understanding of its value.

3) If we want to establish data markets in which both consumers and producers of data benefit, it is important that the price of private data somehow reflects the value it has for an individual. Therefore, for a pricing mechanism to work, it is important that the value people assign to their own data is brought into the equation.

4) Entire industries are built upon the concept of people sharing their private information in return for the use of a platform, service, etc. Valuing data is important for business because: "by estimating how much customers value the protection of their personal data, managers seek to predict which privacy-enhancing initiatives may become sources of competitive advantages and which intrusive initiatives may trigger adverse reactions" (Acquisti, John, Loewenstein, 2013, p249).

5) In the ongoing societal discussion about the need for and possibilities of collecting private data, commercial goals and privacy concerns, a better understanding of the value people assign to their private information can serve as guidance, since the costs of privacy violations are often difficult to assess (Acquisti et al, 2013). Valuation of privacy by individuals is therefore important for policymakers, who often have to choose between policies that trade off privacy for other goals. Whether these other goals are worth it may depend partly on the value that people put on their privacy.

6) Some authors, such as Adams and Sasse (2001), claim that context, data sensitivity and usage influence the disclosure rate of private information. It is interesting to further investigate whether each of these factors also influences the valuation of private information.

1.9 Research question

Based on the literature about the pricing of privacy, a conceptual model can be designed in which the main driver of the value of private information is the degree of privacy sensitivity of that information. The value is further moderated by the type of party which makes the data inquiry and by the usage of, or context for, the requested information. See figure 1 below for the conceptual model.


Figure 1 conceptual model 1

The above relations between the type of private information, the context and the party which makes the data inquiry will be investigated further. The central research question is therefore:

How do people value their private information depending on the nature of this private information, the party who requests this information and the context of the information request?

Sub-questions are:

What is the influence of the type of private information on the valuation by individuals?

How does the context in which the request is made influence the valuation of private information?

How does the party who requests the information influence the valuation of private information?

In the next chapter the concepts of privacy, types of private information, trust and legitimacy will be investigated, the conceptual model further elaborated and hypotheses formulated.


2 Literature review

2.1 Privacy

Privacy has long been a concern for different fields of study: the social, philosophical and legal sciences. It has been considered a fundamental human right by the United Nations Declaration of Human Rights, the International Covenant on Civil and Political Rights and the Charter of Fundamental Rights of the European Union, and is considered a crucial issue in democratic societies (Acquisti, Gritzalis, Lambrinoudakis, di Vimercati, 2008). There is no universal definition of privacy; several have been used. According to Solove: "Privacy, however, is a concept in disarray. Nobody can articulate what it means. Currently, privacy is a sweeping concept, encompassing (among other things) freedom of thought, control over one's body, solitude in one's home, control over personal information, freedom from surveillance, protection of one's reputation, and protection from searches and interrogations. Philosophers, legal theorists, and jurists have frequently lamented the great difficulty in reaching a satisfying conception of privacy" (Solove, 2008, p1). The definition of privacy is furthermore influenced by technological innovations. Today's new technologies' growing capacity for information processing, plus their complexity, have made privacy an increasingly important issue (Flavian, Guinaliu, 2006). In relation to the internet, privacy is affected in different respects, such as the obtaining, distribution or non-authorized use of personal information (Wang, Lee, Wang, 1998).

Zuiderveen Borgesius (2014) distinguishes three perspectives on privacy:

1) Limited access to the private sphere

2) Individual control over personal information

3) Privacy as the freedom from unreasonable constraints on identity construction

"The three perspectives highlight different privacy aspects of behavioural targeting" (Zuiderveen Borgesius, 2014, p83). There are, however, no clear boundaries between the different privacy perspectives, and Zuiderveen Borgesius stresses that none of the three is absolute. Other authors identify similar perspectives, or contexts in which privacy can be defined, such as 'the concealment of information', the 'right to peace and quiet' and the 'right to freedom and autonomy' (Posner, 1981).

Privacy as limited access to the private sphere

Privacy as limited access is about an individual's personal sphere, where a person can remain out of sight and be left alone. This definition suggests that too much intrusion into one's private sphere restricts privacy (Zuiderveen Borgesius, 2014). The perspective has some weaknesses. In a way it can be regarded as too narrow, because it is not always obvious which aspects of privacy are supposed to be in the scope of privacy as limited access. To build a relationship with other people, for example, a person sometimes has to disclose personal information to others. On the other hand, the definition can be considered too broad, since it "doesn't explain which aspects of one's life are so private that access shouldn't be permitted" (Zuiderveen Borgesius, 2014, p86).

Individual control over personal information

The focus on privacy as control over personal information started in the 1960s, a time when computers became more widespread and the public started to fear that computers would be able to collect personal data on an unprecedented scale. A 1974 OECD report argued that the concept of privacy should move to the control approach: "if people fear that organisations make decisions about them without the possibility of having a say in the decision process, the answer doesn't have to lie in ensuring that information isn't collected. Having control over information concerning oneself may be at least as important" (Zuiderveen Borgesius, 2014, p88).

Privacy as control can be summarized according to the definition of Westin: "Privacy is the claim of individuals, groups or institutions to determine when, how and to what extent information about them is communicated to others. This can be summarised as privacy as control" (Zuiderveen Borgesius, 2014, p87). Privacy as control over information suggests that a lack of control, or losing control, over personal information affects privacy, and it underlines people's liberty to decide what to do with information that relates to them (Zuiderveen Borgesius, 2014). Moreover, privacy as control has the advantage of respecting people's individual preferences: it accommodates the fact that people have different privacy wishes (Zuiderveen Borgesius, 2014). According to Brandimarte, Acquisti and Loewenstein, "control is construed as instrumental to privacy protection – so much so that privacy itself is often defined as the control over personal information flows" (Brandimarte, Acquisti, Loewenstein, 2012, p242).

Privacy as the freedom from unreasonable constraints on identity construction

Recently, a third perspective on privacy has become popular among scholars: privacy as "the freedom from unreasonable constraints on identity construction" (Zuiderveen Borgesius, 2014, p92). This perspective holds that "individual self-determination is itself shaped by the processing of personal data" (Zuiderveen Borgesius, 2014, p92). Privacy as identity construction therefore concerns protection against too pervasive manipulation of data by third parties, possibly via technology; a personalized ad is an example (Zuiderveen Borgesius, 2014). Other authors have also defined privacy in relation to individuals' control over personal information and have stressed the influence of technological innovations on privacy perspectives. According to Flavian and Guinaliu, privacy has "generally been defined as an individual's ability to control the terms by which his personal information is acquired and used" (Flavian, Guinaliu, 2006, p604).

2.2 Privacy and private data

There is a close relationship between privacy and personal data: depending on the type of private information, a person is more or less likely to disclose it and, as a result, possibly harm his or her privacy. The European directive on the protection of personal data (directive 95/46/EC, 16, DPD) regulates the processing of personal data. Article 2(a) of this directive defines personal data as: "personal data shall mean any information relating to an identified or identifiable natural person ('data subject'); an identifiable person is one who can be identified, directly, or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural, or social identity". Examples are name and address (Roosendaal, 2013, p9). The OECD Privacy Guidelines contain a similar definition of personal data: "any information relating to an identified or identifiable individual (data subject)" (OECD, 2013, p7).

2.2.1 Privacy economics & psychology of ownership and asset valuation

As investigated in the previous chapter, personal data has value to its owners and to parties who are interested in analysing this data. Their interests, however, are often at odds (Li et al, 2014). According to Acquisti, "Privacy decisions often involve attempting to control information flows in order to balance competing interests – the costs and benefits of sharing or hiding personal information" (Acquisti, 2009, p82). Acquisti refers to the concept of privacy economics: "Broadly speaking, privacy economics deals with informational trade-offs: it tries to understand, and sometimes quantify, the costs and benefits that data subjects (as well as potential data holders) bear or enjoy when their personal information is either protected or shared" (Acquisti, 2009, p82). People share their information depending on different factors. New technologies have furthermore made privacy trade-offs more difficult to navigate, since they have enhanced the possibilities to collect private data, and people often respond to these new technologies by demanding more privacy.

According to Spiekermann, Korunovska and Bauer (2012), the research stream of behavioural economics of privacy focuses on privacy behaviour in the realm of daily communication, where people continuously weigh the privacy costs of disclosure against its benefits. These studies underline scholars' interest in the value of privacy: they examine the sensitivity of information in different contexts and the question of how monetary expectations reflect this sensitivity. In general, these studies seem to confirm that people who are more privacy-sensitive put a higher value on their private information. Spiekermann et al. (2012) suggest, however, that personal information economics is different from the economics of privacy, since the value of personal information is driven by more factors than people's privacy considerations alone, and the economics of personal information relates to all of the information that people share; they question whether privacy is the main factor determining the value of people's private information (Spiekermann et al, 2012). The endowment effect is mentioned as behaviour relevant for the valuation of private information: people ask for more money to give up something they own than they are willing to pay for the same thing. Private information might therefore be considered an asset they own, as their 'property'. The psychology of ownership has been identified as a core construct for reflecting people's possessive feelings. According to Spiekermann et al. (2012, p5), "several arguments exist for a positive relationship between a psychology of ownership for PI and PI asset valuation". The authors also argue that there is a close connection between the valuation of private information and 'asset consciousness': when people realize that there is a market for their private information, this will influence the valuation they place on it (Spiekermann et al, 2012, p5). The higher the psychology of ownership, the higher the valuation of private information. The low willingness to pay for privacy can also be explained by the fact that people assume their information to be kept private by default and therefore do not accept paying for it.

2.2.2 Type of data/information sensitivity

There are different types of private information, which can be categorized depending on their privacy sensitivity. There is, however, no clear definition of sensitivity or of what counts as sensitive. Also, the context in which a request is made can influence the degree of sensitivity of data. The EU's Article 29 Data Protection Working Party describes sensitive data as personal data revealing racial origin, political opinions, or religious or philosophical beliefs, trade-union membership, and data concerning health or sex life (Article 29 Data Protection Working Party, 2011).

The OECD (2013) distinguishes the following types of personal data:

1) User-generated content, including blogs and commentary, photos and videos, etc.

2) Activity or behavioural data, including what people search for and look at on the Internet, what people buy online, how much and how they pay, etc.

3) Social data, including contacts and friends on social networking sites, etc.

4) Locational data, including residential addresses, GPS and geo-location (e.g. from cellular mobile phones), IP address, etc.

5) Demographic data, including age, gender, race, income, sexual preferences, political affiliation, etc.

6) Identifying data of an official nature, including name, financial information and account numbers, health information, national health or social security numbers, police records, etc." (OECD, 2013, p8).

Schneier (2010) differentiates six types of data:

1) "service data, which is provided to open an account (e.g., name, address, credit card information, etc.);

2) disclosed data, which is entered voluntarily by the user;

3) entrusted data, taking as an example the comments made on other people's entries;

4) incidental data, which is about a specific user, but uploaded by someone else;

5) behavioural data, which contains information about the actions users are undertaking when using the site and may be used for targeted advertising; and

6) inferred data, which is information deduced from someone's disclosed data, profile or activities" (OECD, 2013, p8).

A further common distinction for personal data is between personally identifiable information (PII) and non-PII (OECD, 2013). PII is data that can directly identify a person; non-PII, on the other hand, is often considered not to be identifying in itself. PII can, for example, consist of:

• name and address information,
• social security, health or other unique identifying numbers,
• health records and financial information.

Non-PII generally consists of data related to people's online and offline behaviour, such as:

• search terms used, visited websites, online purchasing behaviour, etc.

Other types of data are more difficult to categorize (e.g. GPS position, IP address). Moreover, via analytical methods it is possible to combine different types of information so that individuals can be identified. Today's techniques can often enable data relating to search terms, websites visited, GPS positions, and IP address to be linked back to an identifiable individual (OECD, 2013).

2.2.3 Value of data sensitivity

How the degree of data sensitivity is valued has been investigated in several studies. A study done by Frog (2011) in the United States, India and China showed that there are three levels of data that people want to protect; per level, the degree of protection (and the assigned value) differs. The top level consists of social security numbers (national identity numbers) and credit card information; this information is valued highly (USD 150 – 240 per entry). The middle level consists of digital communication history, such as web browsing history, location and health information (around USD 50). The bottom level consists of information containing facts about individuals, such as online purchasing history and online advertising click history; little value is assigned to this type of information (USD 3 – 6) (Frog, 2011).

Table 2 below lists the different categories used in the survey done by the Ponemon institute (2015), from the most valued category to the least, with the average value attributed to each.

Table 2: private information from most to least valued

Type of information                        Value in $
Passwords (login details)                  75.8
Health condition                           59.8
Social Security number                     55.7
Payment details (credit card)              36.0
Credit history                             29.2
Names of friends & family members          23.5
Purchase histories                         20.6
Physical location (GPS)                    16.1
School or employer                         13.3
Home address                               12.9
Photos & videos                            12.2
Hobbies, tastes & preferences              12.2
Email address                              8.0
Browser settings & histories               7.1
Special dates including date of birth      5.9
Phone numbers                              5.9
Name                                       3.9
Gender                                     2.9

The surveys by the Ponemon institute (2015) and Frog (2011) have shown that people value different types of private information differently. In both studies, personally identifiable information such as social security numbers and information about health condition is considered more valuable than less privacy-sensitive information such as age or gender. Studies such as Frog (2011) have also shown that the sensitivity of data and the value people assign to it are correlated. Overall, personally identifiable information, such as passwords, health condition and social security number, is valued more than non-personally identifiable information. We therefore propose the following hypothesis:

H1: The more privacy-sensitive personal information is, the higher it is valued.
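As a quick plausibility check of this hypothesis, the Table 2 figures can be grouped into PII and non-PII categories and compared. The grouping below is an illustrative assumption made here; the Ponemon report does not label its categories this way.

```python
# Average reported value (USD) of selected Ponemon (2015) categories,
# grouped into personally identifiable information (PII) and non-PII.
# The grouping itself is an illustrative assumption, not the report's.
pii = {
    "passwords": 75.8,
    "health condition": 59.8,
    "social security number": 55.7,
    "payment details": 36.0,
    "name": 3.9,
}
non_pii = {
    "purchase histories": 20.6,
    "physical location (GPS)": 16.1,
    "hobbies, tastes & preferences": 12.2,
    "browser settings & histories": 7.1,
    "gender": 2.9,
}

mean_pii = sum(pii.values()) / len(pii)
mean_non_pii = sum(non_pii.values()) / len(non_pii)
print(mean_pii, mean_non_pii)  # the PII group is valued higher on average
```

Even with a low-valued PII item such as name included, the PII group's average clearly exceeds the non-PII group's, consistent with H1.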

2.4 Trust

Trust in a third party is an important driver for people to either share or not share their private information (Acquisti, 2009). Trust is regarded as an important factor from the marketing perspective. According to Flavian and Guinaliu, "trust is considered to be, along with commitment, communication and satisfaction, one of the basic pillars supporting the relationship marketing theory" (Flavian, Guinaliu, 2006, p602). Trust may be defined as "one party's belief that its needs will be fulfilled in the future by actions undertaken by the other party". "Thus, trust is a set of beliefs held by a consumer as to certain characteristics of the supplier, as well as the possible behaviour of the supplier in the future" (Flavian, Guinaliu, 2006, p602). Trust can refer to the value that one party assigns to specific features of the opposite partner in the exchange, mostly the degree of honesty of the other partner (Gundlach, Murphy, 1993). Flavian and Guinaliu investigated the factors which influence consumers' trust in e-commerce websites: "trust is considered to be made up of three basic dimensions: the web site's perceived (1) honesty, (2) benevolence and (3) competence" (Flavian, Guinaliu, 2006, p602).

Different studies show that consumers find it very important that their private information is protected and that the businesses involved handle this information appropriately. Pickard and Swan (2014) investigated whether consumers are willing to share their personal health data with third parties. Results from an online survey showed that respondents are willing to share health information for research purposes. Building on these results, the authors present a framework to increase health information sharing based on trust, motivation, community, and informed consent (Pickard, Swan, 2014).

In a study done by TNO about privacy experience on the internet in the Netherlands (TNO, 2015), the privacy concerns of Dutch citizens in relation to cyberspace were investigated. This study takes a closer look at the trust people have in third parties and whether the degree of trust influences their privacy concerns. The results revealed that the respondents who participated in the study often make a deliberate choice whether or not to share their private information with a party on the internet. Different factors determine the degree of privacy concern while online: 1) prior experience of the respondents: abuse or loss of private information in the past can influence the behaviour of people (TNO, 2015, p17); 2) the context in which the private information is collected: whether information is asked for in a medical, private, or commercial context; 3) a party's motivation: when the motivation for requesting private information is purely commercial, respondents were more reluctant to share it (TNO, 2015, p17); and 4) the degree of trust people have in the organization which requests and processes their information, which influences the willingness of the respondents to share their data. The results of the study show a great difference in the degree of trust the respondents have in different organizations in relation to how institutions and organizations handle their private information. Respondents could rate their trust on a scale from 1 to 10. They had the most trust in the police, tax authorities, and healthcare organizations, and the least trust in social media, webstores, market research firms and, perhaps surprisingly, non-profit organizations (TNO, 2015, p30). It is furthermore interesting that organizations that principally operate in the offline environment are more trusted than organizations whose operations are primarily online (TNO, 2015). The study showed further that people who have more trust in a party are more willing to share their private information with that party (TNO, 2015).

The study by Frog (2011) found that the parties who make an information inquiry, such as brands and industries, are trusted to different degrees, and that willingness to disclose depends on to whom the data is revealed. Respondents to the Frog (2011) survey indicated that they put more trust in financial institutions, technology firms, wireless carriers and wireless handset makers, and less trust in government agencies and social networks (OECD, 2013).

The degree of trust influences the degree of privacy invasion experienced by an individual: the greater the degree of trust, the less privacy invasion is experienced. Privacy invasion, or willingness to share information, can be translated into the value that people assign to their private information. Thus:

H2: The more trust individuals have in a party which makes a private information inquiry, the lower this data is valued.

2.5 Legitimacy

The study by the OECD indicated that context is an important variable when it comes to measuring the value of private data for people: "In recent years, several experimental studies have attempted to quantify individual valuations of personal data in diverse contexts. Even though research in this area is still in a preliminary stage, two general messages can already be extracted. First, people tend to differ with respect to their individual valuation of personal data (i.e., the amount of money sufficient for them to give away personal data) and their individual valuation of privacy (i.e., the amount of money they are ready to spend to protect their personal data from disclosure). Second, empirical studies point out that both the valuation of privacy and the valuation of personal data are extremely sensitive to contextual effects" (OECD, 2013, p5).

In the book 'The Spy in the Coffee Machine', O'Hara and Shadbolt (2008) explore the concept of "contextual integrity" and describe examples of people reacting negatively or positively to data requests depending on the context in which they are made. For example, collecting information about someone's marital status during a job interview can be considered inappropriate (O'Hara and Shadbolt in Malheiros et al, 2013, p243). Other studies have shown that when the reason, or context, for a data collection request is explained, people are more willing to provide their information (Malheiros et al, 2013, p243).

The circumstances and context under which information is requested and used influence the legitimacy, and possibly the value, of this information. This is a significant driver of privacy costs: the lower the perceived legitimacy of the data request, the more privacy individuals felt they were giving away (Annacker, Spiekermann, Strobel, 2001).

In their study 'Strangers on a plane: context-dependent willingness to divulge sensitive information', John, Acquisti and Loewenstein claim that "concerns about privacy are responsive to contextual cues that often bear little connection to the objective dangers and benefits of divulging information" (John, Acquisti, Loewenstein, 2011, p1). Contextual cues can either rouse or downplay privacy concerns (John, Acquisti, Loewenstein, 2011). The authors distinguish between the type of information and its privacy relevance. They claim that cues affecting privacy concerns will have a bigger effect on the willingness to disclose sensitive rather than nonthreatening information (John, Acquisti, Loewenstein, 2011). What is noteworthy is that the authors' experiments discovered that consumers are more forthcoming with information when sensitive questions are asked informally. These results suggest that marketers might be more successful in obtaining private information when they make the fewest promises to protect consumers' privacy (John, Acquisti, Loewenstein, 2013). Annacker, Spiekermann and Strobel (2001) also investigated which factors influence the perceived cost of providing personal information. The authors claim that the perceived legitimacy of an information request in a specific context is one of the drivers of the perceived cost of providing it (Annacker et al, 2001). The context influences the justification of a question. The authors furthermore claim that legitimacy is influenced not only by the context, but also by the importance of the context: when consumers consider the information request important for the fulfilment of the service, they perceive little cost in providing the information. Importance here is defined as "the perceived degree to which an information request can contribute to an optimal product or service experience" (Annacker et al, 2001, p3).

Malheiros et al. (2013) investigated which circumstances lead people applying for a credit card to accept certain data requests and which lead them to refuse. The authors discovered that the perceived relevance of the data request mattered: requests considered relevant were perceived more positively than irrelevant ones. Perceived fairness, which relates to how acceptable it is to use a data item from an ethical perspective, was also associated with relevance perceptions. Perceptions of the consequences of disclosing a data item influenced acceptability as well: when participants thought that disclosure would result in a positive or neutral outcome, they saw it as more acceptable, but when they thought it would harm them in the future, they did not. Moreover, acceptability was influenced by the sensitivity of the data request: when too personal or invasive, the request was not considered acceptable, and more sensitive items were consequently more likely to be withheld (Malheiros et al., 2013). The authors furthermore suggest that transparency about the purpose of a data request, as in the context of a credit application, can make individuals feel more comfortable answering questions. Hence, when the purpose of a data request is given, it can influence the extent to which people feel their privacy is compromised (Malheiros et al., 2013).

The theories above suggest that legitimacy is an important driver of the cost individuals perceive when giving up their private information in a data request. Legitimacy is in turn influenced by several factors: the context of the data request, the transparency of the request, the explanation offered for the inquiry, the perceived fairness of the request, the perceived consequences of the request, and its importance for the fulfilment of a service. Therefore the following hypothesis is proposed:


H3: Participants will value their data lower when a reason or explanation is given for the data request than when no reason for the data inquiry is given.

2.6 Summary of variables

Based on the literature discussed above, the variables below will be used to study the value of private information:

Type of private information

Private information varies in its degree of sensitivity. Sharing a person's address with a third party, for example, might be considered less privacy-intrusive than sharing that person's medical history. Different studies have shown that private information can be categorized into different types and that each category carries a certain degree of sensitivity.

Legitimacy

According to several authors, legitimacy has a moderating effect on the valuation of privacy and private data. Legitimacy is influenced by the degree of transparency. According to Jennett, Malheiros and Brostof (2012) and Malheiros et al. (2013), transparency about the purpose of a data request can make respondents more willing to answer it. For this study the degree of legitimacy is therefore determined by the degree of transparency. According to Malheiros et al. (2013), whether or not the party interested in the data offers an explanation for the inquiry influences the transparency and, as a result, the willingness to share private information. Thus the probability of giving consent to a request for private information is expected to be higher when a reason is given for that request than when no reason is given.

The degree of trust in the party that requests the information

Trust is an important factor in online and offline transactions. For this study, trust will be operationalized as the degree of trust individuals have in the parties dealing with their private information. The type of party interested in the private information influences a person's willingness to share that information. The type of party, and the trust placed in it, therefore indirectly influence the value of a person's private data.


2.7 Conceptual model

Based on the theories discussed in this chapter, the conceptual model can be elaborated as shown below:

Figure 2: Conceptual model 2

Based on the above theories, I expect people to value their private information depending on the type of private information: the more sensitive the type of information, the higher its value to people. This relation is moderated by the legitimacy of the data request, which is influenced by whether an explanation is given for the inquiry, and by the trust in the party that asks for the information.

The central research question is therefore refined into:

How do people value their private information depending on the nature of that information, their trust in the party that requests it, and the legitimacy of the information request?

Sub-questions are:

How does the privacy sensitivity of information influence individuals' valuation of their private information?


How does trust in the party that requests the information influence the valuation of private information?
