
Escaping Privacy Violation

An application of right of exit theory to privacy

violation on the internet

MSc Thesis in Political Science

Track: Political Theory and Political Behaviour

University of Amsterdam

Graduate School of Social Sciences

Titus Vreeke

Student number: 10171169

Supervisor: Dr. Enzo Rossi

Date: 24-06-2016


Abstract

The invention of the internet has changed our way of life. Most of the changes are advantageous to individuals and society. However, the internet has proven detrimental to individual privacy, allowing companies to track our online behaviour and collect incredible amounts of personal data. The existing approaches to dealing with this privacy infringement are ineffective. In this paper I propose an alternative approach: providing a right of exit from the privacy infringement of the internet. Using existing right of exit theory, I argue that the privacy problems affiliated with the internet can be evaded by making it possible for individuals to choose whether their privacy is violated. Based on the risks attached to privacy violations and the value of privacy for individuals and societies, I conclude that we are in dire need of a right of exit from the internet's privacy violation and that, if we uphold basic liberal ideals, it is the state's responsibility to provide this exit.


Contents

Introduction
1. Privacy
   1.1. Defining privacy
   1.2. Big data and its dangers
   1.3. The value of privacy
   1.4. Conclusion
2. Controlling Privacy
   2.1. Privacy control by the state
   2.2. Right of exit from the internet
   2.3. The necessity of exit
   2.4. Conclusion
Conclusion


Introduction

The invention of the internet near the end of the 20th century has since changed the world in innumerable ways. It has allowed for unprecedented data storage, has interconnected the world in an extraordinary way and, with the rise of social media, has now also started to change our social relations and behaviour. Undoubtedly, the internet has had a positive impact on the world, providing humankind with new opportunities in almost every domain of life. Science, culture, industry and leisure have greatly benefitted from these newfound advantages. There are, however, negative sides to the internet that are more easily overlooked and have only begun to receive the attention they deserve since the turn of the century. The most obvious example is the issue of privacy. Ever since Edward Snowden revealed the malicious practices of the National Security Agency (NSA) in the United States, the issues around privacy on the internet have been highly debated in the media and scientific literature. The scope of the privacy invasion by nation states has been laid bare. Yet privacy infringement by nation states, however important, is not the issue I wish to discuss here. Similar issues might be raised beyond the domain of the state.

The private sphere is home to various privacy issues as well. Corporations use the internet to track the online behaviour of consumers and collect tremendous amounts of data about their lives. The collected 'big data' is used by companies for various purposes, like advertising and making managerial decisions (Provost and Fawcett 2013: 4). The data allow companies to predict what individual consumers want and thereby play into their needs and desires, either by targeting consumers with advertisements that are aimed at their personal situation, or by predicting what people want on a more general level and adjusting course accordingly. What differs from the privacy infringement by governments is that the latter can generally still be defended using a security argument. Governments usually argue for their privacy-infringing practices in light of national security. One can debate how to weigh privacy and security against one another, but at least to some extent the privacy invasion by states holds significant advantages for most of the invaded parties. In the case of companies invading the privacy of their consumers, this argument is much harder to make. Because of this, I wish to focus on privacy infringement by private parties.

The privacy concern with the internet and other modern technologies has been widely discussed in scientific and philosophical literature (Solove 2004, Solove 2008, Solove et al. 2010, Mayer-Schönberger and Cukier 2013, Floridi 2014). Quite common in the philosophical critiques is a concern for the autonomy of people when their privacy is violated. Taken on its own, privacy is a hard value to defend. It is no surprise that many people smother the raised concerns for their privacy by stating that they have 'nothing to hide'.


However, it is not solely their visibility to others that is at stake, but also their capacity to make autonomous and unbiased decisions. When people realize that their actions might be under supervision, they tend to behave differently. They lose a certain kind of spontaneity, overthink their actions and adapt their behaviour to match a social norm (Reiman 2002). If the privacy issue negatively affects autonomy, it comes into conflict with a core liberal value. Typically, when liberty, autonomy, equality or any other liberal value is impeded in western liberal societies by commercial enterprises, there are two options for dealing with this. The first is bottom-up and market-centred, where the market is stimulated to reform and adapt in order to fix the negative externalities. The second is top-down and state-centred, where the state sets up regulations and attempts to counter the problems head on. Although arguments have been made for either solution in the field of internet privacy (Schwartz 1999), neither approach is very effective when it comes to the internet. The market fails to reform because there is too little competition over the privacy issue. This locks companies into a status quo that is profitable for them, and thus the incentive for change is non-existent. States are in an awkward spot because of the opaque nature of the internet. It is very hard to control, or even nudge, corporate behaviour on the internet because it is quite easy for businesses to prevent the government from inspecting their data or monitoring their behaviour.

There is, however, a third and less obvious option. Instead of trying to change the malicious practice of privacy invasion, the state can attempt to provide an exit from the privacy infringement of internet corporations. Usually applied to social groups, right of exit theory roughly states that illiberal practices in a liberal society are tolerable as long as there is a meaningful and substantial right of exit: a possibility to opt out. Although on the internet we are not dealing with a clearly distinguishable social group that needs an exit route, there are similarities. Most prominent among these is that when we participate on the internet, our privacy is invaded without our direct consent. Much like in many religions, the most common example in right of exit theory, people participate on the internet without truly realizing that this participation has a negative effect on certain liberal rights they possess. In this paper I will argue that if we maintain that a right of exit from a certain practice is necessary when core liberal values are affected, and we accept that the internet does indeed harm our privacy, the state should attempt to provide an exit route if it fails to negate the damages to liberal values in a different manner. I will defend this statement over the course of two chapters.

The first chapter will set the stage for a discussion of right of exit theory applied to internet privacy. I will investigate possible conceptions of privacy and use the results to form a definition of privacy that fits the situation of the internet and is workable in relation to right of exit theory. I will then turn to the internet's privacy-infringing practices. Big data collection has become a standard for internet companies across the board. I will discuss the benefits for the companies involved and point out the risks these practices carry for society and individuals. The increasing trust in numbers has several implications that strengthen the call for a right of exit. Equality and justice are two core liberal values that risk damage from 'the big data revolution'. In the final part of the chapter I will discuss the value of privacy for individuals and societies. Although often unnoticed by the affected individuals, privacy infringement is harmful to negative freedom, autonomy and emancipating forces in society. By discussing these values, privacy infringement is further problematized and the need for a solution becomes apparent.

The second chapter covers the core arguments of this paper and will contain my reasoning for the provision of a right of exit. I will start by discussing the state's options for controlling or limiting the privacy infringement that takes place on the internet. Both the market-centred and the state-centred approach will receive attention, and their advantages and disadvantages will be explained in full. Ultimately, I will argue that neither approach is capable of providing satisfactory privacy protection for citizens. After dismantling these common strategies I will introduce my own idea of providing a right of exit from the internet. In order to do so I will first shed light on right of exit theory in its standard usage. However, because internet users cannot be understood as a clearly delineated social group, I will adjust the theory slightly. By focusing on the parts of the theory that are pertinent to the problems of the internet, I will explain how providing a right of exit might overcome the privacy issues. Not by solving them, but by providing a means of escape. It is important to note that this exit must not be understood as an option to stop using the internet completely. I do not want to propagate an alternative to the internet, nor a way to quit using it. We are far past that: the internet has taken an extremely dominant position in western societies, and expecting people to completely refrain from using it would be naïve. Instead, exit must be understood as an escape from the privacy infringement of the internet: a possibility for individuals to decide whether, by whom and to what extent their privacy is violated. In the final section of the chapter, I will highlight the necessity of a right of exit. Because many people do not seem to care about their privacy, it is easily argued that providing them with a choice is a useless endeavour. Using arguments from previous parts of the paper, I will refute this claim and argue that if we wish to uphold liberal ideals, the state should provide a substantial and meaningful right of exit.

My conclusion will be that we are in dire need of a right of exit from the internet's privacy violation and that it is the state's responsibility to provide it. With a right of exit in place, companies can choose whether they wish to continue their privacy infringement, without harming consumers who do not have a choice. When individuals have the possibility to choose whether their privacy is infringed upon, it becomes their personal responsibility. Furthermore, a right of exit could stimulate the market to become more privacy-oriented. If privacy becomes an issue and companies start to compete around it, the pool of possible choices will grow even further. I will end this paper with some speculations on how a right of exit could be provided. This question is not under investigation here and is problematic to say the least. By discussing the problems surrounding this question, I intend to open avenues for further research into a right of exit as a counter to the privacy problems that plague internet users.

Before I turn my attention towards the matters at hand, I would like to note that the problems I address are to be interpreted as a general trend in modern western societies. I realize that not everyone experiences the same amount of privacy infringement from their online behaviour. Undoubtedly there are some who barely use the internet or are skilled and aware enough to shield themselves from the infringement. Yet I do believe that everyone is affected to some extent. As will become clear in the remainder of this paper, shielding yourself fully is nearly impossible to accomplish, as is refraining from use altogether. Furthermore, because of the phenomenon's relative youth, the internet's implications still differ amongst social groups. However, new technologies like the internet play the most dominant social role in the lives of younger generations (Jones 2009), and because of this I have every reason to believe that the internet's impact on our lives, and the affiliated privacy problems, will grow for decades to come.


1. Privacy

Privacy as a concept has traditionally been under dispute. The term originates from Aristotle's classical distinction between the public and the private sphere, but was properly introduced in 1890 when Samuel Warren and Louis Brandeis published their essay 'The right to privacy', in which they propagated a 'right to be let alone'. The notion has since been interpreted and criticized in innumerable ways and remains highly contested to this day. Over the past decade, privacy has received renewed attention due to new technological advancements that supposedly create novel threats to our privacy. The internet and the affiliated practice of big data collection have made profiling of internet users frighteningly accurate. This allows internet companies to predict social, medical and political trends in societies. Furthermore, the profiling is often used for targeting individuals via advertisements, which lets companies increase their revenue. Although at first glance this may seem innocent or at times even beneficial to consumers, there are a number of implications and dangers to our privacy that threaten the treasured liberal way of life in western societies.

This stage-setting chapter will consist of three distinct parts and will contain a discussion of (1) different usages of the concept of privacy and a relevant definition for privacy infringement on the internet, (2) the threats the internet and large-scale collection of personal data pose to privacy and (3) the implications of this privacy invasion for individuals and core liberal values. In the first section I will discuss six possible conceptions of privacy. Although all six of these are relevant to internet privacy in one way or another, two of them are useful for creating a conception of privacy that is applicable to the situation on the internet. The 'limited access to the self' and the 'control over personal information' conceptions can be merged to create a new definition of privacy that accurately captures the problems of the internet and allows for an application of right of exit theory. In the second section I will discuss the risks that are tied to big data. Companies are using big data to make decisions that affect our lives, and that data can be flawed. Using several examples I will show how usage of big data can pose problems for basic liberal values like equality and autonomy. The third section will discuss the value of privacy. Privacy is usually understood as an instrumental value that helps promote other things in society. After a brief explanation of why privacy is best understood as instrumentally valuable, I will delve into the values it helps promote and how people and societies benefit from individual privacy.


1.1. Defining privacy

In his book 'Understanding Privacy', Daniel Solove brands the term 'a concept in disarray' (Solove 2008: 1). According to him, no one is able to accurately describe what it means. He distinguishes six common definitions of privacy in modern scientific literature: (1) the right to be let alone, (2) limited access to the self, (3) secrecy, (4) control over personal information, (5) personhood and (6) intimacy (Solove 2008: 15-34). Solove suggests that none of these six possibilities fully covers what privacy is, because privacy as a concept is highly situational. The situation in which the term is used decides what parts of the different possible definitions are appropriate (Solove 2008: 38).

I am inclined to agree with Solove. In a situation of personal contact between relatives, for example, we might treasure a certain degree of secrecy and intimacy. In a parent-child relationship we expect that the two parties generally have a right to keep certain secrets from each other. A child might not want his parents to know what he discusses with his friends, and the parents will generally want to shield their child from their sexual affairs. At the same time, however, a degree of intimacy through the sharing of other personal, intimate information of their own choosing is vital for maintaining a healthy parent-child relationship. When we consider the privacy of individuals with regard to the state, different definitions come to the fore. Here we might treasure 'limited access to the self' more highly. At least to some degree, we want to be able to limit what the government knows about us and at what moments it has access to our information. Philosopher Jeffrey Reiman, whose ideas on the value of privacy I will discuss in more detail in the third part of this chapter, uses this type of privacy to critique governmental use of IVHS technology, a system that allows the government to track our movements by car through license plate recognition. According to him, this technology undermines our control over who accesses our personal information. Whom we visit, for example, might for some people be highly personal information, and publication could have devastating effects on their personal lives (Reiman 2004: 196). A final example of the situational nature of privacy is also the most classic one. In relation to media, 'the right to be let alone' is by far the most fitting. It is no surprise that the previously touched upon and earliest notion of privacy by Warren and Brandeis was written as a cry for the 'right to be let alone'. Warren and Brandeis aimed this request at media that had started to employ photo cameras and published personal pictures of the rich and famous. To this day, celebrities that call upon their privacy are most keen on being let alone. In all of the examples I have given, aspects of the other conceptions of privacy might play a part, but in most cases there is a conception that is clearly most dominant.

If a definition of privacy is indeed situational, I must find a suitable definition for internet privacy with regard to corporations before the dangers of this relatively new type of privacy infringement can be accurately discussed. In order to find this definition I will go over the six different possibilities offered by Solove, decide to what extent they are relevant for internet privacy and lengthen my discussion of them accordingly. As will become clear, some of the conceptions are quite similar and some are more relevant than others, but almost every one of them provides new insights into what privacy is and how it is best defined for my intentions. Bundling the relevant information together ultimately creates the type of situational definition that Solove recommends.

‘The right to be let alone’ has already received some attention in previous paragraphs. Like many conceptions of privacy, it emerged as a response to new technological developments. The rise of the newspaper at the end of the 19th century proved problematic for the higher classes of society at that time. Largely sensationalistic, the newspapers sought to document the lives of rich individuals for the pleasure of the common people. After the photo camera became more widely available in 1884, the invention rapidly gained in popularity. Warren and Brandeis were soon fed up with the intrusive pictures in the newspapers and in 1890 they published ‘The right to privacy’ (Solove 2008: 15). In the article, they formulate a defence of privacy based upon existing intellectual property laws: ‘The principle which protects personal writings and any other productions of the intellect or of the emotions, is the right to privacy, and the law has no new principle to formulate when it extends this protection to the personal appearance, sayings, acts, and to personal relation, domestic or otherwise.’ (Warren and Brandeis 1890: 213). In short, they argue that individual outings and appearances are part of one’s intellectual property and should therefore be protected by law1.

Regarding internet privacy, however, the conception is of little relevance. Warren and Brandeis were targeting media that reported directly about people's lives, leaving their images and stories scattered throughout the media. The problem with this type of privacy infringement is that people have a right to some sort of private life, free from public exhibition. This is not what is problematic about the companies that violate our privacy on the internet. Their violation is most often invisible to the public eye. Only the companies that gather the data, and the potential buyers of this data, get to see the extensive knowledge about the lives of internet users. If, however, companies were to actively start spreading images of or stories about their clients that they acquired through their data collection, the right to be let alone would indeed become relevant. At the moment, however, this seldom happens. Facebook has launched a new project in which people's profile pictures are shown next to certain products or organizations they have 'liked'. However, these pictures are only shown to friends and only after the connection is made (facebook.com). These connections and photos are already visible to friends, so although it is reminiscent of public exhibition, it is rather a redirection to already public information. This is not enough for the right to be let alone conception to be relevant for the practices on the internet. There is no new personal information that is made public.

1 The article has had a big impact on legislature and the juridical system. As late as 1967, Warren and Brandeis's view

Better suited to privacy infringement on the internet is the ‘limited access to the self’ conception. In some ways, it is similar to the conception made famous by Warren and Brandeis. At roughly the same time as their ‘The Right to Privacy’, Edwin L. Godkin published ‘The Rights of the Citizen’, in which he states that privacy encompasses ‘the right to decide how much knowledge of [a person’s] personal thought and feeling (…) private doings and affairs (…) the public at large shall have.’ (Godkin 1890: 65). At first glance, this formulation might seem identical to the previous conception. What makes it different, and far more relevant, however, is the inclusion of choice. According to this conception, we should have the possibility to make a decision on what is known about ourselves: ‘a form of individual control over who has access to the self’ (Solove 2008: 19). Right of exit theory relies heavily on the possibility for people to choose whether they wish to participate in illiberal practices, and this conception fits that emphasis on decision-making. Exit in right of exit theory is most prominently an option to opt out: a choice between retaining membership of a group and enduring its illiberal practice, or leaving the group to escape from that practice. If it were the case that people actively chose to reveal themselves on the internet, thereby allowing companies to thrive on their personal data, there would not be a problem according to this conception, nor according to right of exit theory. The point is exactly that people most often lack this choice and are forced, one way or another, to use the internet extensively, allowing companies to make heavy use of the data this provides. Clearly then, the definition of privacy relevant for my critique of privacy infringement on the internet must include choice as an important factor.

The next conception discussed by Solove is what he calls ‘secrecy’. According to this conception, privacy violation is the ‘public disclosure of previously concealed information’ (Solove 2008: 21). Solove notes that it is a subset of the limited access to the self conception, because keeping secrets is a way to limit the access of others to the self (Solove 2008: 20). It is, however, much narrower than the previous conception. Aimed only at information that people wish to conceal, secrecy does not fully cover what is at stake on the internet. While surfing the web, people generate bits and pieces of data that, taken by themselves, often do not uncover any of their secrets. They might sometimes, for example when people visit pornographic websites or talk to others on adultery websites like ‘second love’. Yet for the most part, there is no direct cause for secrecy in our surfing behaviour. However, it might be contested that people can simply want secrecy, no matter the reason or the things they are trying to hide. Furthermore, the problem with big data is that with the incredible amount of data that is collected, it is possible to paint such accurate pictures of individuals that we might well consider the sum a secret. People’s behaviour, preferences, social contacts and other personal information build up to form a profile that allows for incredible accuracy in predicting how someone will respond in certain situations, what his personality is like and what he desires (Mayer-Schönberger and Cukier 2013: 78). If we were able to, we would not readily and on our own initiative give up this cluster of information to a random corporation. The desire for concealment of this information means that the total package of information qualifies as a secret. A secret that is uncovered piece by piece, but nevertheless of grand proportions. Secrecy thus is important to consider. However, because secrecy can be regarded as a subset of limited access to the self, the conception adds little further to a definition of my own. Secrecy is a choice to withhold information, and the importance of choice was already captured in the previous conception. My argument does, however, shed light on the sneakiness of big data collection: without clear warning, the internet lures people into giving up far more of themselves than they would consciously allow.

Another subset of limited access to the self is the ‘control over personal information’ conception (Solove 2008: 25). Unlike the secrecy conception, however, this subset provides useful focus for forming my own definition. Control over personal information type privacy demands exactly what the name suggests: that individuals control who sees what parts of their personal information (Solove 2008: 26). This is lacking on the internet. Partially because people do not realize what information they give up while surfing, and partially because people often do not have a realistic choice to refrain from using the internet. The argument for this conception is relatively similar to the argument I already made when discussing the limited access to the self conception. The focus on the control of the subject over what is disclosed to the outside world is of crucial importance when considering internet privacy in relation to right of exit theory. The choice between enduring an illiberal practice and opting out is what right of exit theory promotes. However, the control over personal information conception does bring something new and relevant to the table. The focus on personal information and our direct control over this information makes it more fitting to the problems with privacy I am discussing. Big data collection is primarily about the gathering of personal information, and the problem I distinguish is exactly that people lack control over this process. It is the internet corporations that have claimed control over personal information. According to this conception, it should be the individual who decides what information they wish to share.


The next conception to be discussed is ‘personhood’. However, I will not go into much detail on personhood here. In a nutshell, privacy as personhood aims to protect personal decisions that are essential in shaping our individual identities. Our personalities and identities are subject to the choices we make and personhood strives to protect these choices from outside interference. The underlying idea is that a lack of privacy can affect the choices people make and thereby shape their identities. I will discuss a similar idea in section three of this chapter, where I delve into the value of privacy. However relevant for valuing privacy, personhood is not much of a conception of privacy on its own. It is more commonly used as an addition to other conceptions to aid them in explaining why privacy is important (Solove 2008: 30).

The final conception discussed by Solove is ‘intimacy’. Like personhood, it can hardly be regarded as a full conception of privacy, but it illuminates a specific value of privacy. Privacy as intimacy upholds that intimate relationships cannot exist without being private to some extent. Formulated as an attempt to move away from the traditional hard focus on informational privacy, the intimacy conception targets interpersonal relationships in which not only information is private, but also the actions that flow from love or care (Solove 2008: 35). Privacy in this sense stems from being able to share intimate information and actions with the people close to you. Although it might be argued that the data collected on the internet contain bits and pieces of intimate relations that are maintained via the internet, intimacy is of little importance for my intentions. Usage of big data comes down to packages of information that allow corporations to profile people on a large scale. The contents of intimate online conversations or other types of communication are not directly hindered by this, nor are they specifically analysed or intruded upon. Intimacy as a definition of privacy is too narrow to suit the scale of privacy infringement on the internet.

It seems that out of the six possible conceptions, two cater best to privacy infringement on the internet. The limited access to the self conception and, as a subset of this, the control over personal information conception provide an understanding of privacy that is useful in several ways. First of all, the emphasis on the existence of choice in what we show of ourselves is crucial when internet privacy is discussed from the standpoint of right of exit theory. If a right of exit from the internet is to exist, people must be given a meaningful choice in whether they will have their personal information used or not. When this choice exists, we can no longer call it privacy infringement if people consciously choose to grant this information to others. This means that if a right of exit is successfully provided, people would not only be able to choose whether they will participate in an illiberal practice, but the practice would even cease to be illiberal once they consciously choose to partake. Secondly, an emphasis on information caters best to privacy infringement on the internet. Large pools of information are the cornerstone of the illiberal practice, as big data is the main tool for corporations to make a profit from our surfing behaviour. Privacy on the internet in relation to corporations thus is: the right for individuals to consciously and actively decide what personal information is generated through usage of the internet and who is able to see and make use of this information, without excessive costs to the individual. I have added a part to the definition that addresses the costs to the individual because a right of exit cannot be regarded as credible if these costs are unrealistically high. I will explain this in more detail in chapter two when I discuss right of exit theory. Although personhood or intimacy might be relevant for the value of privacy, they bring little to the table when we want to understand what is currently happening to the privacy of people who use the internet on a daily basis. Secrecy and the right to be let alone enlighten some pieces of the puzzle but are too limited to provide the full picture. Secrecy helps envision that we must have a right to shield ourselves; the grand exposure that happens right now is illegitimate in the sense that very few people would consciously allow it. The right to be let alone can be relevant, but is too specific for my cause. It only provides new insights in cases where the images or outings of people are publicly displayed to everyone by the internet conglomerates.

1.2. Big data and its dangers

According to Mayer-Schönberger and Cukier, the start of the 21st century heralded the ‘big data revolution’. At the break of the millennium, only one quarter of the world’s stored information was digital. Most information was still stored in books, on film, cassettes or other non-digital media. The rapid rise of the internet and other technological advances have shifted this balance in an unbelievable manner. In 2013 it was estimated that a mere 2 percent of stored information was non-digital (Mayer-Schönberger and Cukier 2013: 9). The capacities for storing and gathering information on digital media have increased exponentially, and this provides many opportunities for almost every part of society. Healthcare, education, government and many other institutions benefit one way or another from having tremendous amounts of data at their disposal. It is no surprise that the opportunities offered by big data have not gone unnoticed in the world of business. Many companies have already turned to making data-driven decisions in various fields, from pricing and production to other managerial matters (McAfee and Brynjolfsson 2012: 64). There are countless different usages of big data that are not all relevant for my purpose. Factories, for example, produce tons of data on the functioning of machines, allowing for streamlining of the production process.


What I am interested in, however, is the data that is gathered about individuals by tracking their behaviour on the internet. The possibilities offered by this type of data collection are endless. Google is able to accurately predict a flu epidemic using the search terms of its users, and advertisements shown on the internet are often targeted at specific consumers using the profile that was created of them. Additionally, many websites use algorithms that make use of the data to provide their users with suggestions or advertisements. Websites like Amazon index previously visited products and ratings to suggest products customers might like. Google is well known for its Google Ads that target specific groups with catered advertisements (www.wordstream.com, a). All this seems nothing but beneficial to consumers, and in many ways it indeed is. However, it must not be forgotten that these companies use these algorithms for a simple reason: to make more profit. Amazon hopes to sell more of the suggested products and Google is paid remarkably well for every click the advertisements receive2. Furthermore, there are many risks involved with large-scale data gathering that are not always recognized. If big data were nothing but positive, the privacy infringement necessary to gather the data would be easier to accept. In this section I intend to shed light on the dangers of using big data and point out that we should be wary of relying too heavily on it. It is tempting to surrender ourselves to the power of numbers, but we must not forget that numbers can distort reality. I will start with a short introduction of a definition of personal data that will allow for a distinction between different types of data. Of all possible kinds of data, personal data specifically is often gathered in a manner that is harmful to privacy. After the distinction is made, I will discuss the possible uses of personal big data more extensively. This discussion will provide me with clear examples for the last part of this section, where I will turn my attention towards the risks that are tied to big data.

Personal data is a broad term that is generally used to describe any type of data that relates to an individual. It is defined in the European Union Directive as follows: ‘personal data shall mean any information relating to an identified or identifiable natural person (‘data subject’); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity.’ (European Parliament and Council of the European Union 1995). There are two things that are important to note about this definition. First of all, it does not matter whether the specific information about a person directly leads to them being identifiable; any information that can be tied to a person and that might help in identifying them is considered personal. In this sense, even seemingly irrelevant information like the colour of clothing on a specific day can be regarded as personal. Secondly, personal data is a broad term that can be applied to information regarding individual characteristics (their opinions, appearances, habits and the like) as well as to information that is not tied to their specific personal characteristics (their passport number or address). Usage of such personal data by businesses is widespread and very diverse. To illustrate some of the possible usages I will introduce two specific but typical examples of big data usage before discussing the practice on a more general level.

2 97% of Google’s revenue comes from advertising. The keyword with the highest cost per click is ‘insurance’. Each click on the advertisements that appear after searching for this keyword earns Google a whopping $54.91 (www.wordstream.com, b).

In 2004 hurricane Frances was sweeping across the Caribbean and was on its way to hit the east coast of the United States. A week before it reached Florida, Walmart’s chief information officer, Linda M. Dillman, decided to create a forecast of sales based upon data gathered during previous storms. By doing this, she was able to predict what goods would be in high demand and stock up accordingly. Not surprisingly, bottled water was one of the products that needed extra stocking. One would hardly need a data analysis to predict this, but the analysis also predicted some more unusual demands. Who would have thought that strawberry Pop-Tarts would sell approximately seven times as much before a storm, or that the demand for beer would be nearly as high as the demand for water (Provost and Fawcett 2013: 3)? Although the usage of data in this example is still quite simple and not very privacy-infringing, because it was not recorded which customers bought which products, it clearly illustrates the advantages that big data can provide to businesses. If Walmart had relied on the instincts of their managers they would never have been able to stock up on all the right products and would have lost revenue because of it. Furthermore, although privacy was not under direct threat in this example, adding names to the sold products is only a small step to take, and this makes a world of difference when it comes to the degree of privacy infringement.

My next example shows how adding names to retail information can suddenly create a completely different situation. The example has quickly risen to become a classic for illustrating the possibilities of big data predictions. In 2002, Target started a project that would allow them to predict whether female customers were pregnant. Young mothers represented a large increase in sales because of all the baby-related products they need. The problem was that birth records are usually public, and once a baby was born, the new mother received endless vouchers and advertisements from numerous stores. Target wanted to try and beat their competition by creating an algorithm that could predict pregnancy based upon the shopping behaviour of their female customers. They succeeded with flying colours. Sometime after they had implemented the algorithm, an angry father came to the store to complain about the advertisements sent to his sixteen-year-old daughter. He accused Target of encouraging his daughter to get pregnant with all the baby-related products they were showcasing to her. It wasn’t until weeks later that his daughter confessed her pregnancy to him and he came back to Target to apologize (Duhigg 2012). What is frightening about this example is that the prediction could be made even though the girl was clearly trying to hide her pregnancy from the rest of the world. The smallest of deviations from our normal behaviour can, through the use of big data, give away the biggest of our secrets. By placing the data individuals produce in algorithms, companies are able to discover highly personal information. Through general, impersonal analysis they can come closer to an individual than they would ever be able to with a more personal approach.
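The mechanics behind such a prediction can be sketched in a few lines of code. The following toy example is purely illustrative: the product names, weights and cutoff are my own assumptions, not Target’s actual model, which is proprietary and reportedly combined around twenty-five products into a ‘pregnancy prediction’ score (Duhigg 2012).

```python
# Toy sketch of propensity scoring from purchase data. All weights,
# products and the 0.5 threshold are invented for illustration only.

# Hypothetical weights: how strongly each purchase is assumed to
# correlate with the outcome the retailer wants to predict.
WEIGHTS = {
    "unscented lotion": 0.30,
    "vitamin supplements": 0.25,
    "large bag of cotton balls": 0.20,
    "scent-free soap": 0.15,
}

def propensity_score(purchases):
    """Sum the weights of matching purchases into a crude score capped at 1.0."""
    return min(1.0, sum(WEIGHTS.get(item, 0.0) for item in purchases))

# A shopper whose basket mixes flagged and unflagged items.
basket = ["unscented lotion", "vitamin supplements", "bread"]
if propensity_score(basket) >= 0.5:
    print("flag customer for baby-related mailings")
```

Even this crude version shows the mechanism at work: no single purchase is revealing on its own, but the aggregate crosses a threshold that triggers action towards the individual, and a mistaken score triggers that action just as readily as a correct one.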

These anecdotes are exemplary in two ways. First and foremost, the Target example illustrates how companies collect data about our behaviour and put it to use in their marketing. This is the most common and widespread use of big data. The ads we see on the internet are usually the result of our past surfing behaviour. Secondly, it indicates how companies are using big data to get ahead of their competitors. By predicting how customers will behave, companies can make decisions that help them gain an edge. In the case of Target, they were able to present specific products to customers before the competition could even know about these customers. In Walmart’s case, the usage of big data allowed the company to stock their stores in an unusual manner, thereby securing revenue that other companies would likely miss. There are endless other possibilities with big data. Internet streaming service Netflix uses highly sophisticated algorithms to predict what we like to watch and offers us corresponding films and TV series. Airline companies use big data to predict the flight behaviour of customers and adjust pricing accordingly, meaning that apart from the supply of and demand for a certain plane ticket, our previous search behaviour can influence the prices we pay for air travel. Generally, the more personal data is collected by a business, the better it can predict things about individuals that are important to business decision-making (Provost and Fawcett 2013: 342).

One thing should be clear from the previous examples, and it cannot be stressed enough: ‘at its core, big data is about predictions!’ (Mayer-Schönberger and Cukier 2013: 11). This observation is highly important when the risks of big data are considered. Almost every one of them has to do with the fact that big data can only predict, and by definition, predictions can turn out to be wrong. No matter how accurate a prediction might be, there is a margin of error that is easily ignored. Putting too much trust in the numbers carries significant risks. Apart from the more obvious privacy issue related to big data, Mayer-Schönberger and Cukier distinguish two further risks. The first risk flows from the type of privacy infringement that is inherently linked to many types of big data: ‘penalties based on propensities’. The second risk is described by them as ‘a dictatorship of data’. I will discuss them separately.

Punishing people based upon probability seems like a good way to prevent things from happening. Governments have a tendency to try to prevent rather than react. Smoking is heavily taxed in an attempt to prevent lung cancer, wearing a seatbelt is mandatory to prevent injury or fatality in car accidents, and people are thoroughly searched when they wish to enter airports or other places that risk a terrorist attack. This all sounds very sensible, and most people are more than happy to allow these small constraints on their freedom to better the whole of society (Mayer-Schönberger and Cukier 2013: 160). However, when data is used to try to predict what people might do in order to prevent them from doing it, it carries a type of constraint on freedom that these preventions do not. For prevention based on propensity to be effective, there would have to be some sort of punishment tied to an expected crime. Without the fear of punishment, people would not be put off from trying again later. Governments in the United States are already experimenting with big data to predict the chances of someone committing crimes, enabling police departments to keep a close eye on them (Mayer-Schönberger and Cukier 2013: 159). Although in these cases people are not yet punished before they commit a crime, it is a small step to take. Apart from the risk of a wrong prediction because of the imperfection of the data, this type of analysis also feeds into discrimination and is detrimental to equality. It places people who have certain characteristics like race or class into groups that apparently need extra supervision. What this means is that if someone belongs to such a group, his privacy is impeded even further and he is not treated equally to other citizens on the basis of his characteristics.

In the case of governments, trusting the predictions made from big data might have the troubling consequence of large, unjust punishments, and we should be glad that for the moment this is not a reality. Some businesses, however, have already started to employ a type of punishment based on propensity. Insurance companies have been known to adjust their policy prices to certain characteristics of people. A genetic profile that suggests greater chances of disease can be a reason for higher insurance premiums. In car insurance too, policies are no longer priced solely upon past accidents. Some insurance companies have started installing devices in cars that track driving behaviour, and prices are adjusted accordingly for each individual (Brouns-Van Engelen 2013). These are just minor examples, but the idea is that with a large pool of data these types of predictions can be made more often and more accurately. If businesses come to believe that their predictions are perfect, or at least treat them like they are, they run the risk of penalizing people without solid proof. Belonging to a certain group or behaving a certain way is no guarantee of future behaviour and thus it should not be treated as such. It negates the presumption of innocence, feeds into discrimination and threatens fair judgment.

This threat partially ties into the second risk discussed by Mayer-Schönberger and Cukier. The dictatorship of data refers to the tendency of humans to put all their trust in data and leave as little as possible open to human judgment. As an example they describe the hiring policy of the internet giant Google. For some time during the company’s younger years, job candidates were selected based upon their SAT scores and their grade point averages when they graduated from college. Even accomplished managers who had been working for years were still subjected to these checks. This is surprising, because of all types of businesses it is in the tech sector that college degrees have often proved to be irrelevant. Bill Gates, Mark Zuckerberg and Steve Jobs are just three well-known examples who would probably have failed to make the cut had they applied for a job at Google, because they dropped out of college early to start their own businesses. It might be contested that these are examples of extreme and rare outliers, but the problem of using predictions to make decisions that affect people’s lives is exactly that the outliers can be negatively affected. In the case of using college scores to select job candidates, anyone who has dropped out for any reason is immediately on the back foot. Things like talent and drive, which do not necessarily translate into scores, are unknowingly ignored.

Aggregative reasoning of the sort Google employed violates individual rights. Individuals in liberal states are entitled to be treated equally and fairly; jumping to conclusions based upon probability is neither fair nor equal. As with punishments based on propensity, certain determined characteristics are employed to make possibly life-changing decisions. What the example indicates is that if we put too much trust in data, we fail to see certain things, because not everything is included in the data. If punishment based upon propensity is something to be wary of, then certainly placing utmost trust in numbers to reward people is not a good idea either. When everything is extracted from numbers and correlation or causation, we will inevitably miss things that simply were not or cannot be expressed in data. These are risks that are inherently tied to big data, and they are tied heavily to the privacy infringement problem. Without the collection of immense amounts of data, there would be no possibility of punishing based on propensity nor cause to put all our trust in numbers. Privacy thus has practical implications, but its value is not limited to this. In the next section I will discuss the value of privacy for individuals and societies.


1.3. The value of privacy

In the first section of this chapter I explained why privacy is a slippery concept that differs in meaning in different contexts. This affects how privacy is valued. Generally there are two ways of valuing privacy: its worth can be expressed in the advantages it confers upon society or upon the individual. Furthermore, privacy is usually understood as an instrumental value (Moor 1997, Solove 2008). It is supportive of, or detrimental to, some other value depending on the situation. On a societal level, privacy is often attacked by arguing that protecting the privacy of individuals does not outweigh the benefits infringement might have for society. Criminals or terrorists can benefit from privacy laws, for example, helping them stay in the shadows. However, these types of arguments usually pertain to privacy from the state, as the state is typically responsible for maintaining security. In relation to businesses, the societal benefit argument against privacy is harder to make. One might contend that a thriving business environment ultimately benefits the whole of society, including the individuals whose privacy is invaded. This argument is very indirect, however, and there is no proof to suggest that such an environment cannot be achieved without privacy-infringing practices. As part of its instrumental value, privacy can very well be beneficial to society. In the coming section it will become clear that individual privacy has great benefits for individuals. Indirectly, however, these individual benefits can also have positive effects for society.

Before I continue, it must be noted that some philosophers have also attributed intrinsic value to privacy (Johnson 1994: 89). The idea behind this is that privacy is a necessary condition for other values like autonomy or security: without privacy we cannot have security or autonomy, and because these are intrinsically valuable, privacy is too. This, however, feels very much like a workaround. Several arguments can be made against this line of thinking. Firstly, it is highly debatable whether security and autonomy cannot exist without privacy (Moor 1997: 28-29). Secondly, if privacy merely enables intrinsic value, does that not make it instrumental? I believe it does: no matter how necessary privacy is for something else, the fact that its value lies in this necessity only confirms its instrumentality. Because of this, I will argue for privacy as an individual value that is instrumental to other values held dear in western liberal societies. In order to do so, I will follow and expand on philosopher Jeffrey Reiman’s thoughts. He argues that there are four risks to the individual when their privacy is invaded: (1) the risk of extrinsic loss of freedom, (2) the risk of intrinsic loss of freedom, (3) symbolic risks and (4) the risk of psychopolitical metamorphosis (Reiman 2004: 201). The first three of these arguments relate to the individual; the fourth takes these concerns for the individual and transforms them into a societal argument.


By extrinsic loss of freedom Reiman means: ‘all those ways in which lack of privacy makes people vulnerable to having their behaviour controlled by others.’ (Reiman 2004: 201). There are two ways in which privacy infringement might control the behaviour of an individual. First of all, open access to personal information might be used to directly pressure someone in order to make them behave a certain way. Blackmail is the most obvious example of this kind of behaviour control (Reiman 2004: 201). Secondly, when individuals know their information might be exposed, this can also have an effect on how they behave. Knowing that certain actions, once they become known, can have consequences for finding a job or joining a certain group can make people think twice about undertaking these actions. The knowledge that there is a possibility of negative consequences when they do something is often enough for people to reconsider the action (Reiman 2004: 202).

This type of mechanism echoes Foucault’s panoptical model of society. In Discipline and Punish, Foucault famously describes the panopticon, a structure thought up by Jeremy Bentham whose architecture allows constant supervision of human subjects placed in cells around a watchtower. The subjects do not know if or when they are being watched, but they are aware of the possibility. Foucault argues that this possibility is enough for them to behave in a desirable way (Foucault 2012: 200). In society too, the possibility of being watched influences the way we behave. So much so, in fact, that ultimately conscious awareness of this constant gaze becomes unnecessary for the adjustment of our behaviour. We internalize the gaze and let it dominate our every move. On the internet most people are aware that their movements are tracked and can be accessed by various parties. They might not think about it constantly while surfing the internet, but their behaviour is still influenced by the awareness. Hence control over behaviour by others is very much relevant when we consider the internet and the privacy infringement that takes place there.

Reiman refers to this control over behaviour as a loss of freedom for the controlled agent. For the remainder of this paper, I will refer to it as a loss of autonomy. Freedom as a concept does not accurately represent what is at work here. Freedom is generally split up into positive and negative freedom, and Reiman fails to make clear which type he is referring to. Negative freedom generally refers to the ability to act free from any external or internal restrictions (Christman 2015). A physically disabled person, for example, is not free to do everything he pleases because his disability prevents him from taking part in certain physical activities. He can, however, still be positively free and autonomous. Conceptions of positive freedom often overlap with those of autonomy. Both refer to the possibility for self-realization of the individual; the possibility to rule ourselves authentically. Autonomy is distinguishable from negative freedom in the sense that it refers to the independence and authenticity of the desires that make us act in the first place, not the lack of barriers to an action. It fosters the ability to make decisions based upon our desires, emotions or values (Christman 2015). In the case of the panoptical gaze that influences behaviour, it is these decisions that are pressured. People are not harmed in their negative freedom to act a certain way; there are no direct boundaries or threats that prevent them from doing so. However, the possibility of being watched is crippling to an extent where people, consciously or unconsciously, fail to act upon their desires in the way they otherwise would.

The risk of intrinsic loss of freedom as described by Reiman is closer to a loss of negative freedom. According to Reiman, a loss of privacy directly limits our available choices. Some acts differ when they are observed from when they are performed in private. Sex is the most obvious example, but some types of conversations also fall into this category. Criticizing an individual, for example, has a different meaning when it is done in private than when it is done in front of others. What this means is that publicity of behaviour changes its very nature (Reiman 2004: 204). If someone has a video conversation with a friend through a social medium, he is no longer solely having the conversation and intimate face-to-face interaction; he is doing these things and creating a record of them. The option of simply having a conversation through the social medium is forgone because every bit and piece of data is stored. When people realize they are observed, they identify with the observer’s viewpoint and change their behaviour accordingly. Because the participants realize the video and its sound might be stored in a database, they may refrain from certain acts or expressions that could harm them if they were made public. The fact of being under supervision changes the things people do and the way they do them, thereby limiting the options of choosing to do things that rely on privacy for them to be authentic.

By symbolic risks Reiman means that invasion of privacy is insulting to the invaded party. When people’s data is constantly recorded and stored, they are denied a certain form of self-ownership. The accurate picture that can be painted of an individual from their online behaviour means that other parties have access to that individual’s person without their direct consent. Of course, people are not denied their self-ownership in the same way slaves or prisoners are, but the point is that there is no possibility to deny access to their person. Even when the individual does not realize that their privacy is neglected and the data is not used in any harmful way, the spying eye ignores basic human dignity and as such is insulting to the individual (Reiman 2004: 205). This is why the risk is symbolic: the critique stands even when there is no immediate danger to the individual. Furthermore, it is important to keep in mind that even when the data is not collected with malicious intent, it is stored in large quantities and for a long time. Although there might not be a danger now, we can never be sure of the future and of whose hands the data might fall into.


The final risk described by Reiman is not an immediate threat to the individual but a society-wide end result that looms if we do not pay privacy the respect it deserves. The risk of psychopolitical metamorphosis follows from the previously discussed risks and entails that continuously ignoring privacy will lead to an impoverished society of uncritical and one-dimensional people. Because of the strong desire to adhere to social trends and the realization that behaviour is stored, people will start to mirror each other. Scared to stand out, think differently or act unconventionally, individuals will all try to adhere to the same social norms and standards, forming a one-dimensional mass3. Inner life and critical thought are dampened and replaced by the mindless following of convention and tradition (Reiman 2004: 208). Solove also views conformity to social norms as one of the implicit risks of privacy infringement and expands on these thoughts more than Reiman does. Our desire to conform and fit in with the rest of society makes us very susceptible to frozen norms and standards. Social norms are useful and even necessary to a certain extent. They set boundaries and delineate what is acceptable or unacceptable behaviour and thought (Solove 2008: 95). It is, however, equally important to sometimes cross these boundaries. Deviance can instigate social change, shifting the existing norms. New norms can create opportunities for previously oppressed groups, (de)criminalize certain activities or even lead to the implementation of a new political system. If no one dares to cross these boundaries anymore because they are scared of public humiliation or even of punishment, the norms will remain fixed. Personal privacy, according to Solove, is not merely beneficial to the individual but of the utmost importance for societies (Solove 2008: 99). Without privacy we risk falling into a social tyranny. Existing social norms and standards are preserved because no one dares to be different and challenge the status quo. The feeling or fear of being watched is enough to keep us in check, reminding us once again of Foucault’s explanation of panoptical power.

1.4. Conclusion

In this chapter I have set the stage for a discussion of a right of exit from the internet. A call for an exit right is justified on the basis of the internet’s practices being detrimental to privacy. Privacy, however, is not a concept that is easily defined, nor is its value always obvious. In the previous sections I have explored what a fitting definition of privacy is, how privacy is violated on the internet, the risks this poses to individuals and, finally, what the value of privacy is for individuals and society. There are a number of observations that are important going into the next chapter. I will run through them one by one.

3 Herbert Marcuse addressed the dangers of a one-dimensional society as early as 1964. Although not written from a privacy perspective, his theory highlights the implications of one-dimensionality and is a must-read for those interested in this subject (Marcuse 1964).


Privacy is a situational concept; its meaning and form vary under different circumstances. In order to find a conception that is applicable to the situation on the internet, I discussed six common conceptions and used them to form my own definition. Of the six, the limited access to the self and the control over personal information conceptions are most fitting. What is important in these conceptions is that they include choice as a dominant factor in privacy. In right of exit theory, the possibility of choosing to leave is crucial in deciding whether a substantial right of exit exists. Furthermore, the latter conception’s focus on personal information fits the privacy infringement on the internet, which targets personal data. Ultimately, the inquiry led me to form the following conception of privacy: the right for individuals to consciously and actively decide what personal information is generated through usage of the internet and who is able to see and make use of this information, without excessive costs to the individual. The focus on choice highlights that privacy is not a passive state but instead something that can be sought after through conscious decision-making. Furthermore, the emphasis on personal information caters to the common practices of the internet. Finally, the requirement that a privacy-protecting decision come without excessive costs is crucial when the conception is applied to right of exit theory. As will be explained more extensively in the next chapter, a right of exit cannot be thought substantial or credible if the costs to the exiting individual are excessively high.

In the second section I addressed the common usages of big data and the risks affiliated with them. Internet companies are using big data in ways that are often beneficial to the internet users themselves. This draws attention away from the possible risks of the so-called 'big data revolution'. Websites use the data they gather to suggest products, or to show targeted advertising fitted to consumers. On the one hand, consumers can benefit from this. On the other hand, however, the companies are profiting greatly from these practices. Furthermore, there are two important risks to consider. First of all, the danger of punishment based on propensity looms. Big data usage is all about making predictions. The problem with making data-based predictions is that they are never completely foolproof. What this means is that if companies use the results from their data-based algorithms to make decisions that affect people, they are taking the risk of unjustly penalizing these individuals. Furthermore, making decisions based upon certain personal attributes goes against a basic idea of equality. Someone's background or situation that can be captured in data should not be cause for unequal treatment. Secondly, the human tendency to place all trust in numbers and equations makes falling into a 'dictatorship of data' an important risk to consider. When decisions are made completely based on data, there is a risk of missing certain things. Human judgement and insights are foregone, leaving decision-making prone to miss outliers or things that simply cannot be accurately captured by data. These observations are important when discussing the necessity of a right of exit. The risk of unjust treatment of individuals is one of the possible justifications for a right of exit from big data collection.

Other justifications can be deduced from the value that privacy has for individuals and society. Without privacy, individuals are at risk of losing a degree of autonomy and negative freedom. Autonomy is lost because the knowledge of being under supervision changes the way people act and the decisions they make. The knowledge that a certain action might come back to haunt them if it became public can be enough for people to refrain from taking this action. Negative freedom is lost because certain intimate activities change in nature when they are not done in privacy. These activities can no longer be engaged in when privacy is threatened, limiting the number of options available. Apart from the value for individuals, society as a whole also benefits from individual privacy. Without it, people are more prone to conformity because deviating from social norms and standards can have negative consequences. This locks societies into a status quo, where change is not as easily instigated by alternative individuals or groups because they feel less secure in expressing their deviance.

In the next chapter I will discuss possible ways of dealing with the privacy issue affiliated with the internet. This chapter has set the stage to do so, introducing the important concepts and justifying the need for a way to deal with privacy infringement. This paves the way to discuss the introduction of an exit right from the internet's privacy infringement as one of the possible solutions. In the final conclusion of this paper, the important observations from this chapter will be discussed in conjunction with my theory on right of exit and its relation to the internet.


2. Controlling Privacy

In the previous chapter I explained what privacy is, how privacy is affected by the internet and why privacy is important. Privacy affects many different types of values, both on a personal and a societal level. Without privacy, people lose a degree of autonomy, are harmed in their negative freedom and become prone to conformity. This can have grave effects for individuals as well as societies. In this chapter I turn my attention towards possible options for controlling privacy on the internet. In the first part of this chapter I discuss the two currently dominant approaches to privacy control: a market-centred and a state-centred approach. The former puts its trust in the self-correcting mechanisms of the market, relying on its invisible hand to tackle problems like privacy infringement. The latter allocates a role to the state to ensure the protection of privacy online. Both of them are flawed and ineffective in providing privacy for individuals. The self-correcting mechanism of the market fails because of a collective action problem, and the state is incapable of penetrating the opaque nature of the internet and thus cannot effectively control internet companies. In the second part of this chapter I will introduce the alternative of providing a right of exit. Using existing right of exit theory from the multiculturalist tradition, I will investigate whether exit from the internet currently exists. The short answer is that it does not. The internet occupies an extremely dominant position in our society and it is unreasonable to ask people to forgo the advantages it provides. Furthermore, the existing alternatives like The Onion Router (Tor) network are unsatisfactory: they are widely unknown and require skill and knowledge to use effectively. The final section will answer whether a right of exit from the internet is necessary. People often seem to care little about their privacy, so why should they be given the opportunity to protect it? Using arguments from various parts of this paper, I will come to the conclusion that a right of exit from the internet is necessary if we wish to uphold certain basic liberal values.

2.1. Privacy control by the state

In an article from 1999, Schwartz distinguishes three sets of opposing approaches for a state that wishes to control internet privacy. Although the internet and its surrounding practices have progressed far beyond their 20th-century condition, his observations remain highly relevant today. In some cases, his observations have grown in relevance rather than declined. Schwartz calls the three sets of approaches (1) market versus state, (2) bottom-up versus top-down and (3) self-regulation versus formal law. All three sets of options are similar: either letting the market and businesses rule themselves or placing this
