
Privacy violation perception in the surveillance society

A case study of Digital Natives and Non-Digital Natives in the Netherlands

March 13, 2018

Anne van Loon s1155407

Master Thesis Crisis and Security Management

Supervisor: T.J.M. Dekkers MSc


Contents

Page

1. Introduction 2

2. Theoretical Framework 5

2.1 Surveillance 5

2.2 The Law on Intelligence and Security 2018 7

2.3 Privacy 8

2.4 A Taxonomy of Privacy Violation 10

2.5 Digital Natives 14

3. Methodology 16

3.1 Procedure 16

3.2 Measures 17

3.3 Reliability and Validity 18

3.4 Process of Analysis 19

4. Results 20

4.1 Descriptive Statistics 20

4.2 Results Total Population 20

4.3 Results per Age Group 22

4.4 t-test 25

4.5 Correlations 26

4.6 Regression 28

5. Conclusion 31

5.1 Conclusion 31

5.2 Discussion 32

5.3 Limitations 33

5.4 Recommendations for Further Research 34

Bibliography 33

Appendix A: Questionnaire (Dutch) 39


1. Introduction

On Dutch television, the TV show ‘Hunted’ was broadcast. The show follows various Dutch citizens who have become fugitives, while a fictitious team of the Dutch General Intelligence and Security Service tries to find them using a broad range of technologies and processes. While watching the show, it becomes obvious that in today’s society it is almost impossible to disappear and not be found. In the case of fugitives, criminals or terrorists, this might be a good thing. But it also raises questions about the privacy of other citizens: should we protest against these surveillance measures, which have such a big impact on our privacy, or is this something we need to give up in order to make sure the intelligence agencies have the right means to ensure a safer society?

These questions are currently central in so-called surveillance societies: societies full of routine, mundane surveillance (Lyon 2001). The surveillance society gained a boost after 9/11; the event prompted widespread international concern for security in the face of global terrorism, the ‘War on Terror’ was declared, existing surveillance was reinforced at crucial points, and many countries rapidly passed new laws permitting unprecedented levels of intelligence surveillance (Lyon 2002, 1). Ever since, the surveillance society has been expanding in the name of protecting societies against threats such as terrorism, human trafficking and cybercrime. To some extent, citizens accept the expansion of surveillance practices because it carries a justification with it: being subject to monitoring and physical searches is accepted when citizens are in high-security areas, such as airports (Lyon 2001, 3). However, the surveillance practices once thought necessary only in high-security zones now seem to have spilled over into ordinary life and ordinary places (Zedner 2003, 169).

Political and public debates within surveillance societies are often ignited by surveillance revelations: previously undisclosed and unpopular surveillance practices revealed to the public, which bring to light the extent to which surveillance has spilled over into ordinary life. Edward Snowden, a former contractor who worked for the American National Security Agency (NSA), caused a global uproar in 2013 when he and journalists of the Guardian leaked NSA documents that revealed a tremendous part of its spying activities and its cooperation with other governments, intelligence agencies and companies in global surveillance programs. The documents revealed that ordinary citizens were spied on without having done anything wrong (Lyon 2014, 2). A global debate emerged on how far privacy extends in the Internet age. On the one hand, it is questioned to what extent it is justified that governments violate privacy in order to combat terrorism, while on the other hand, the spillover of surveillance into ordinary life is justified and normalized in the interest of national security (Wahl-Jorgensen et al. 2017, 18-19).

In the Netherlands, the debate on privacy versus surveillance has reached a new political and public height due to the renewal of the Law on Intelligence and Security, which has been in effect since 1 January 2018. This law includes, among other measures, new and expanded authorities for the General Intelligence and Security Service (AIVD), such as wiretapping communications, hacking devices and storing gathered data of citizens who may merely live in the same neighbourhood as a suspected person. Among Dutch citizens, these new authorities of the AIVD have caused a debate to emerge. On the one hand, citizens protest against these new authorities because they might affect innocent citizens who have done nothing wrong but still have their privacy invaded, while on the other hand, citizens do not mind the new authorities of the AIVD because they acknowledge the importance of a safer society, or because ‘they have nothing to hide’. The latter argument is often heard among the public; in discussions it is the first thing people say when asked what they think of surveillance (Martijn and Tokmetzis 2016, 13). Counterarguments come in the form of ‘you do have something to hide, even though you might not be aware of it’, which indicates that the dominant idea of privacy is one of concealment and secrecy.

The public debate in the Netherlands about the new Law on Intelligence and Security is the starting point of this research. This research examines how Dutch citizens perceive the newly implemented law in the interest of a safer society, and to what extent they feel their privacy is violated by the new authorities of the AIVD. This research uses the taxonomy of privacy violation introduced by Solove (2006), which allows privacy violation to be studied as more than one thing, moving beyond the dominant idea that privacy is only about concealment and secrecy. Solove’s pragmatic approach towards privacy violation will be adopted, which suggests that privacy violation may be several things at once, determined by the context and practices in which it occurs.

Additionally, within this research a distinction will be made between two age groups: Digital Natives and Non-Digital Natives. The generation known as Digital Natives, or Millennials, has grown up with computerized devices and the Internet, and they are seen as native speakers of the digital language (Prensky 2001, 1). They are used to living an online life, and this might have transformed how they perceive their lives and relate to the world around them (Palfrey and Gasser 2008, 3). This might indicate that there is a difference between Digital Natives and Non-Digital Natives in how they perceive important issues, such as privacy. Indeed, research has shown that Digital Natives tend to lag behind on privacy issues; they feel they need to give up parts of their privacy in order to live an online life (Fulton and Kibby 2017, 197). This raises the question how Digital Natives and Non-Digital Natives feel about the new Law on Intelligence and Security, and to what extent there is a difference in how they feel their privacy is violated by it. This question is central in this research and leads to the following research question:

“To what extent is there a difference in how Dutch citizens feel about, and violated in their privacy by, the new Law on Intelligence and Security 2018, and how can this difference be explained by being a Digital Native?”

This research contributes to academia by expanding the knowledge on differences between Digital Natives and Non-Digital Natives regarding privacy and surveillance. Current research on Digital Natives has only addressed privacy issues and concerns that are seen as an integral part of the online environment in which they live their lives. This research addresses whether Digital Natives are less concerned with, and feel less violated in their privacy by, the surveillance measures that are increasingly implemented in expanding surveillance societies. This thesis will first outline the contours of the surveillance society, so that the new surveillance authorities of the AIVD as stated in the Law on Intelligence and Security can be positioned within it. Secondly, the concepts of privacy and privacy violation will be elaborated on, discussing the extensive literature and addressing the difficulty of conceptualizing privacy, and therefore also privacy violation. Third, using the framework of privacy violation introduced by Solove (2006), the methodology of this research will be outlined to demonstrate that it moves beyond the difficulty of conceptualising privacy, and beyond the argument of ‘I’ve got nothing to hide’. Fourth, the analysis of the survey results will be discussed and placed in relation to other literature, and the research question will be answered. Finally, the limitations of this research will be addressed, which result in recommendations for further research.

2. Theoretical Framework

2.1 Surveillance

Surveillance is everywhere in our daily lives. A stroll through the centre of a big city is captured by numerous cameras, and even the message that communications in the mobile messaging app WhatsApp are encrypted has to do with surveillance. The expansion of surveillance can be attributed to a more general shift in the field of security. Governments have felt the pressure to think and act preemptively, especially after catastrophic events such as 9/11. There has been a shift from a post- to a pre-crime society, in which forestalling risks has taken precedence over responding to wrongs done (Zedner 2007, 262). A pre-crime society is about anticipating and forestalling that which has not yet occurred and may never do so (Zedner 2007, 262). Surveillance is seen as the perfect means to accomplish this pre-emptive state, and therefore it is integrated into our societies. In surveillance societies, routine, mundane surveillance is embedded in every aspect of life (Lyon 2001, 1). Today, the most important means of surveillance resides in computer power; the massive growth in computer application areas and technical enhancement have made communication and information technologies central to surveillance (Lyon 2001, 2).

So what exactly is surveillance, and which surveillance practices are currently most used within surveillance societies? Surveillance is any collection and processing of personal data, whether identifiable or not, for the purposes of influencing or managing those whose data have been garnered (Lyon 2001, 2). Personal data collected by surveillance can be seen as everything that can be linked to an individual, and for which they may be held accountable (Trottier 2012, 18). Surveillance can be broken down into four aspects (Ball et al. 2006, 4). First, it is purposeful: the watching can be justified by a publicly agreed goal, such as control. Second, it is routine: it happens as we all go about our daily business. Third, it is systematic: it is planned and carried out according to a schedule that has nothing to do with randomness. Lastly, it is focused: most surveillance refers to identifiable persons whose data is collected and processed. The data collected by surveillance is of the numerical or categorical type, referring to transactions, exchanges, statuses, accounts and so on, which in turn involve the use of credit cards, bank cards, mobile phones, the Internet, a purchase, a search or a phone call (Ball et al. 2006, 20). This is often referred to as ‘dataveillance’: monitoring or checking people’s activities or communications in automated ways, using information technologies (Ball et al. 2006, 4).

To understand the surveillance society, one needs to understand the technologies that are used within it. There are five areas in which key changes and developments have taken place, and which currently make up the contours of the surveillance society (Ball et al. 2006). The first area is that of telecommunications, which includes the infrastructural technological processes of communication, the systems and devices through which telecommunications are achieved, and the exchange of data, messages or information (Ball et al. 2006, 17). This area also includes the huge range of communicative functions enabled by large-scale digital and computing systems, such as the Internet (Ball et al. 2006, 17).

The second area in which key changes have taken place is video surveillance. Almost as soon as it was invented, the camera was being used to record the faces and other physical characteristics of criminals (Ball et al. 2006, 19). Closed-circuit television (CCTV) is the most prominent and well-known surveillance technology in this area; the estimated number of CCTV cameras in public spaces in the Netherlands is one million (Het CCV 2014). Additionally, the police use, among other things, mobile camera units, body cams and cameras to guard vital objects and persons; the Ministry of Defence uses drones; and traffic cameras are installed to check license plates and fine those who are speeding (Flight 2013).

The third area in which key changes have taken place is that of the computer database. The data gathered by surveillance practices can now be stored, categorized and cross-referenced far faster and more accurately (Ball et al. 2006, 20). Different datasets may be matched against each other to identify persons and suspicious patterns of activity, or the data may be ‘mined’: analysed in great depth by technologies to reveal patterns that may require further investigation (Ball et al. 2006, 20). The Schengen Information System (SIS), an information system that allows police and border guards to enter and consult alerts on individuals and objects, is an example of such a database.

The fourth area is that of biometrics. Biometrics is “the automated technique of measuring a physical characteristic or personal trait of an individual and comparing that characteristic or trait to a database for purposes of recognizing that individual” (Woodward 1997, 1481). Biometric scanning is then the process whereby biometric measurements are collected and integrated into a computer system, so that in the future these measurements can be used for identification and verification. The appeal of biometrics is that the body provides a constant, direct link between record and person, so the risk of fraud is reduced (Ball et al. 2006, 23). Biometrics include, among others, fingerprints, facial recognition and iris scanning, and are visible in ID systems such as passports and ID cards.

The final area in which key changes and developments have taken place is locating, tracking and tagging. Surveillance practices are increasingly referenced, organised and located through Geographical Information Systems (GIS). Many systems track the geographical movements of people, vehicles or commodities using RFID chips, Global Positioning Systems (GPS), smart ID cards, transponders or the radio signals given off by mobile phones or portable computers (Ball et al. 2006, 24).


Underlying the above-mentioned technologies of the surveillance society are two important processes. The first is the collection of Big Data, a term used to describe the creation and analysis of massive datasets gathered by surveillance practices. Big Data is a notable concept because of the huge amount of personal data that can be processed, and because of the way data in one area can be linked to other areas and analyzed to produce new inferences and findings (Richards 2013, 1939). Software and technologies are developed to create surveillance practices based on Big Data; the Snowden revelations revealed to the public that the NSA ran surveillance programmes that collected huge amounts of data on American citizens (Lyon 2014, 3). The Big Data gathered from such surveillance practices is used to create profiles, which is the second process underlying current surveillance practices. A profile, in this case, refers to any accumulation of information about an individual by an organisation (Trottier 2012, 19). The information gathered on individuals is thus used to create a digital identity, one that exists alongside one’s physical identity. These profiles are then used to sort people: the data gathered with surveillance practices is used to cluster persons into categories based on, among other things, their income, habits and offences (Lyon 2004, 140). The issue with these profiles is that citizens cannot control the personal information that makes up such a profile, and often they are not aware of the digital identity that has been created.

2.2 The Law on Intelligence and Security 2018

The above-named technologies and practices are extensively used by governments; surveillance revelations, such as the Snowden revelations, bring this to the attention of the public. In the Netherlands, the government felt that the authorities of the General Intelligence and Security Service (AIVD) were lagging behind the developments in surveillance technologies and processes (Rijksoverheid 2017). In order to expand the authorities of the AIVD, the Dutch government pushed for a renewal of the Law on Intelligence and Security, which covers the authorities of both the AIVD and the Military Intelligence and Security Service (MIVD). The former is focused on safeguarding Dutch society, while the latter focuses on intelligence of military importance. The Law on Intelligence and Security (Wet op de inlichtingen- en veiligheidsdiensten, hereafter: Wiv) dates from the 1990s and has been overtaken by new technological developments. The Dutch government believes that, in order for both intelligence agencies to do their job properly, the law needs to be adjusted, especially in the area of data gathering from telephones, e-mails and the Internet (Rijksoverheid 2017). Therefore, the renewal of the Wiv was accepted, and the new Wiv has been in effect since 1 January 2018. The Wiv 2018 covers various areas in which the intelligence agencies operate. The law covers, among other things, wiretapping communications, hacking devices, data processing,


databases, data sharing with foreign intelligence agencies, cooperation with foreign intelligence agencies, oversight of the practices of the intelligence agencies by a commission, and the duty of secrecy of employees of the AIVD and MIVD (Wet op de inlichtingen- en veiligheidsdiensten 2018).

In the public discussion, however, only a few key changes within the law are central, because they affect citizens’ ordinary lives the most. From here on, Wiv 2018 will refer only to these key changes central in the public debate. The first key change is that the AIVD has gained new authorities to wiretap communications on a larger and broader scale. When a suspected person lives in a certain neighbourhood, the AIVD may be able to wiretap the communications of other people living in the suspected person’s surroundings. The second key change is that the AIVD may be allowed to hack computerized appliances, such as computers, smart TVs and smartphones. The final key change is that the data the AIVD gathers with surveillance practices will be retained in a database for three years, and that this data may be shared with other intelligence agencies.

Together, these three changes have ignited public and political debate in the Netherlands. Various organisations (the Dutch Data Protection Authority, the Netherlands Institute for Human Rights, Amnesty International), political parties (GroenLinks, D66, SP, and others) and public figures (Arjen Lubach) have voiced their concerns regarding this law. The main point made by opponents of the law is that it violates the privacy of innocent citizens due to the use of a so-called ‘dragnet’: the AIVD will gather as much data as possible in the surroundings of a suspected person, thus also including the data of people other than the suspected person. The dragnet metaphor refers to fishermen who use a net to catch fish but always end up catching more fish, or other species, than intended. Proponents of the law, such as some political parties (VVD, PVV, CDA, and others) and the AIVD and MIVD themselves, argue that the law is necessary in order to map the current, increasing threat to Dutch society and to protect national security and the democratic state (Parool 2017). As the director of the AIVD, Rob Bertholee, once said in an interview: “[...] in the battle against terrorism, the citizen should not complain about privacy” (Modderkolk 2016).

2.3 Privacy

The most prominent argument in the debate about privacy and surveillance is ‘I have nothing to hide’ (Martijn and Tokmetzis 2016; Solove 2007; Lyon 2004). Often, this argument is accompanied by ‘I am not doing anything illegal’ or ‘I am not doing anything wrong’. The idea behind this is that when an individual engages in legal activity and does not engage in criminal, embarrassing or humiliating activities, he or she has nothing to worry about and therefore nothing to hide (Solove 2007, 747). People have tried to counter this argument by stating that individuals do have something to hide; a simple example would be a negatively toned WhatsApp conversation with a coworker about your boss. However, both this argument and its counterargument assume that privacy is only about hiding something that you do not want other people to know. But is this really what privacy is all about, or does it involve more than this dominant idea?

Privacy is a concept with an extensive history, and therefore various approaches to the concept exist. First of all, privacy is known as a distinctive right. The first to acknowledge this were Warren and Brandeis, in their article The Right to Privacy (1890). They approached privacy as ‘the right to be let alone’. Warren and Brandeis acknowledged the growing importance of solitude and retreat in the face of the increasing intensity and complexity of life (Baghai 2012, 952). Their approach to privacy thus implies seclusion and separation. Today, privacy as a right is enshrined in Article 12 of the United Nations Universal Declaration of Human Rights and Article 8 of the European Convention on Human Rights, both of which focus on non-interference in one’s private life.

Secondly, besides being a right, privacy is also approached as a value, or idea, that is held by individuals. Alan Westin establishes the social value of privacy in his book Privacy and Freedom (1967). He states that privacy provides individuals and groups in society with a preservation of autonomy, a release from role-playing, a time for self-evaluation and for protected communication. According to him, privacy is “the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others” (Westin 1967, 7). James Griffin argues that privacy is both about autonomy and liberty: “it is a feature of deliberation and decision; it has to do with deciding for oneself, and it is a feature of action; it concerns pursuing one’s aims without interference.” (Griffin 2007, 701). Raymond Wacks states that the idea of privacy embraces the desire to be left alone, free to be ourselves - uninhibited and unconstrained by the prying of others (2010, 30). Diggelman and Cleis (2014) argue that privacy is about two competing core ideas. They state that “privacy is about creating distance between oneself and society, about being left alone, but it is also about protecting elemental community norms concerning intimate relationships or public reputation” (Diggelman and Cleis 2014, 442).

Thirdly, other scholars break privacy down into a narrower conceptualisation. With the technological developments in the surveillance society, privacy increasingly seems to be about concerns regarding the protection of one’s personal information (Tavani 2008, 135). This is referred to as informational privacy, which Floridi defines as “freedom from epistemic interference that is achieved when there is a restriction on facts about someone that are unknown” (Floridi 1999, 52). Bannister argues in a similar vein that if what person A does neither has, nor will have, any impact on B, there is no basis for B having a right to know what A is up to (Bannister 2005, 66). Informational privacy has to do with control over access to oneself and to information about oneself (Moore 2007, 812).

2.4 A Taxonomy of Privacy Violation

As becomes clear from the literature, privacy is an umbrella term: it encompasses different things, such as freedom of thought, freedom of decision, freedom from prying by others, freedom from interference and the right to be let alone. Many scholars, among them those mentioned above, have tried to find the essence of privacy. They seek clear characteristics of privacy, so as to create a certain category, or framework, into which things might or might not fall. Daniel Solove criticises this traditional way of conceptualising privacy; he argues that these theories are either too narrow, because they fail to include aspects of life that we typically view as private, or too broad, because they fail to exclude matters that we do not deem private (Solove 2002, 1094). He proposes another way of conceptualising privacy, with two important aspects. First, he draws on Ludwig Wittgenstein’s notion of family resemblances, which entails that certain concepts might not have a single common characteristic; rather, they draw from a common pool of similar elements. For example, in a family, each child has certain features similar to each parent, and the children share similar features with each other; even though they may not all resemble each other in the same way, they all bear a resemblance to each other (Solove 2002, 1098). Applying this to the conceptualisation of privacy, Solove argues that privacy can draw from different conceptions, which might overlap. According to Solove, there are six general headings, which are also reflected in the existing literature on conceptualising privacy: “(1) the right to be let alone; (2) limited access to the self - the ability to shield oneself from unwanted access by others; (3) secrecy - the concealment of certain matters from others; (4) control over personal information - the ability to exercise control over information about oneself; (5) personhood - the protection of one’s personality, individuality, and dignity; and (6) intimacy - control over, or limited access to, one’s intimate relationships or aspects of life.” (Solove 2002, 1092).

Secondly, Solove suggests a pragmatic approach to conceptualising privacy. Pragmatism moves beyond an ultimate reality and a fixed essence, or core element, of privacy. It emphasizes the importance of context and the dynamic nature in which the concept of privacy is used. Thus, conceptualising privacy should be about understanding privacy in specific contextual situations (Solove 2002, 1128). Solove starts from the bottom up; he states that conceptualizing privacy is about understanding and attempting to solve certain problems, and these problems involve disruptions to certain practices. Practices refer to various activities, norms, traditions and customs, such as writing a letter, making decisions, having a conversation, and so on. Privacy is a dimension


of these practices, and so when we state that we are protecting privacy, we are claiming to guard against disruptions to certain practices. Privacy invasion occurs when practices are disrupted: for example, when someone reads a private letter or overhears a private conversation. There are various types of practices, and also various ways in which these practices can be disrupted. Solove argues that privacy should be conceptualized by focusing on the specific types of disruptions and the specific practices disrupted, rather than by looking for a core essence of privacy (Solove 2002, 1130). He does this by introducing a taxonomy of privacy violation: an overview of different privacy disruptions. Conceptualising privacy is thus done by using the taxonomy of privacy violation and applying it in a certain context.

In order to better understand what can be understood as privacy violation, Solove (2006) has developed a taxonomy that aims to identify and understand different kinds of socially recognized privacy violations. The taxonomy identifies and organizes so-called harmful activities, which are only problematic or harmful if a person does not consent to them (Solove 2006, 484). The taxonomy, displayed in Table 1, consists of four basic groups of activities, each consisting of different related forms of harmful activities. These groups of harmful activities are not meant to create strict boundaries or categories; the privacy violations may share many commonalities, yet they do not share a common denominator (Solove 2006, 562). At the centre of the taxonomy lies the data subject: the individual whose life is most directly affected by the activities classified in the taxonomy (Solove 2006, 489). Other people, businesses or the government might collect information from the data subject, which can be perceived as harmful. The data collectors then process the data, which is labeled ‘information processing’ in the taxonomy. Hereafter, the data collectors might share the gathered data with others, or they might release the information. In this step, ‘information dissemination’, the data moves further away from the data subject. Finally, the last basic group is invasion, which involves impingements directly on the individual and does not necessarily involve information.

Table 1

Taxonomy of Privacy Violation

Information collection: data gathering

Surveillance: the watching, listening to, or recording of an individual’s activities

Interrogation: the pressuring of individuals to divulge information

Information processing: the use, storage and manipulation of data that has been collected

Aggregation: the gathering together of information about a person, involving the combination of bits and pieces of data to form a digital double

Identification: connecting information to individuals, and connecting the digital double to the person in real life

Insecurity: contamination of one’s digital double, such as identity theft

Secondary use: the use of data for purposes unrelated to the purposes for which the data was initially collected, without the data subject’s consent

Exclusion: the failure to provide individuals with notice and input about their records

Information dissemination: the revelation of personal data or the threat of spreading information

Breach of confidentiality: the revelation of secrets about a person, by which a violation of trust in a specific relationship occurs

Disclosure: true information about a person is revealed to others, which damages their reputation

Exposure: the exposing to others of certain physical and emotional attributes about a person that we would rather not expose

Increased accessibility: records of people that are made more accessible to organisations, such as government offices

Blackmail: coercing an individual by threatening to expose her personal secrets if she does not accede to the demands of the blackmailer

Appropriation: the use of one’s identity or personality for the purposes and goals of another

Distortion: the manipulation of the way a person is perceived and judged by others; involves the victim being inaccurately exposed to the public

Invasion: does not necessarily involve information

Intrusion: invasions or incursions into one’s life; disturbs the victim’s daily activities and routines, destroys her solitude and often makes her feel uncomfortable and uneasy

Decisional interference: governmental interference with people’s decisions regarding certain matters of their lives

Note. The data in this table is from Solove (2006): 499-557

As has become evident from the literature, privacy is a difficult concept to operationalise in research. It is described as a right, a value or an idea, and it has many different approaches and definitions that each emphasize a different aspect of what privacy might be. Additionally, privacy may mean something different to each person. Therefore, instead of trying to find the one concept of privacy that fits this research best, the conceptualisation of privacy will take the pragmatic approach suggested by Solove (2006). This means that the meaning of privacy will be derived from the context, and that the taxonomy of privacy violation will be used in order to determine which elements are relevant in the context of the Wiv 2018. Multiple elements will be used in order to move the measurement of privacy violation beyond the argument ‘I’ve got nothing to hide’, which assumes that privacy is only about concealment and secrecy.

In the taxonomy of Solove, there are four groups of harmful activities. First of all, information collection is highly relevant in the case of the Wiv 2018, because the new authorities of the AIVD are mainly focussed on new technologies to gather intelligence. The subgroup surveillance is applicable; the Wiv 2018 includes new authorities for the AIVD to wiretap citizens and hack computerized appliances. The subgroup interrogation is not; the Wiv 2018 is not aimed at pressuring citizens to give up certain parts of information. Secondly, data that has been gathered by the AIVD will be stored in a computerized database for three years. Information processing is therefore also relevant, but not all subgroups are applicable in this context. Some harmful activities relate to a digital identity that is created by putting together all the information that is gathered about a person. However, citizens have no insight into whether this actually happens, and if it does, to what extent. Questions regarding digital identities would therefore be based on assumptions and result in guesswork. Consequently, the subgroups aggregation, identification and insecurity are not applicable in the context of the Wiv 2018. The subgroup secondary use is applicable because, as is stated in the Wiv 2018, the AIVD might share the gathered data with other intelligence agencies, and citizens would not be informed about this. Additionally, citizens have no idea what data has been gathered about them, and there are no ways in which they can easily access this information. Therefore, the subgroup exclusion is also applicable. The third harmful activity in the taxonomy of Solove is information dissemination, which refers to the revelation of personal data or the threat of spreading information. This group can be disregarded as a whole, because intelligence agencies operate in secrecy. Their goal is not to expose citizens with the information that they have gathered, but rather to secretly investigate people that might cause any harm to society. Finally, the fourth group of harmful activity is invasion. The subgroup intrusion is applicable; because the AIVD might hack or wiretap the devices that citizens use in their personal environment, there is a possibility of the AIVD intruding into their private space and lives. The subgroup decisional interference is not applicable; the government is not trying to influence citizens’ decisions by gathering information on them.


2.5 Digital Natives

The development of digital technologies has led to a development in surveillance practices; as the implementation of the Wiv 2018 shows, governments feel the need to keep up with these developments in order to provide intelligence agencies with the most up-to-date technologies and authorities. These developments in surveillance practices mirror the technological developments in our lives; it has almost become impossible to imagine that we once lived without devices such as smart TVs, laptops and smartphones, and applications such as Facebook, YouTube, Twitter and Google Maps. Currently, there is a generation that has grown up with these new technologies. They have spent their entire lives surrounded by and using computer and video games, digital music players, video cams, cell phones and email; the Internet and instant messaging are integral parts of their lives (Prensky 2001, 1). Additionally, this generation is used to living an online life, without explicitly distinguishing it from the offline one; they share large parts of their lives on social media, read blogs, chat, download music, download books and articles from e-libraries, and play games online. This generation is often referred to as the Digital Natives, or Millennials, and consists of people born in or after 1982 (Howe and Strauss 2009, 4). They are seen as native speakers of the digital language of computers, video games and, most importantly, the Internet (Prensky 2001, 1).

Growing up with these developments and living in the digital era has transformed how people live their lives and relate to one another and to the world around them (Palfrey and Gasser 2008, 3). Therefore, for Digital Natives, privacy has a very different meaning than it has for the generations before them. Digital Natives lead social lives mediated by online services, but very few think about the consequences of the data they leave behind (Palfrey and Gasser 2008, 54). They have no idea what their digital dossiers contain, nor how they might access this information. For starters, most Digital Natives take little care to review the privacy policies of the social media they use, and they have no idea of the consequences of disclosing personal information on social media and the Internet, where it might remain for a long period of time (Palfrey and Gasser 2008, 57). Research has shown that Digital Natives tend to accept a reduction in their privacy as part of contemporary life; they feel like they need to give it up in order to participate in the online environment (Fulton and Kibby 2017, 197). Additionally, most of them lack knowledge of personal privacy and security on social networking sites (Lawler et al. 2012, 22). In comparison with older generations, who are generally concerned about their online privacy and are also more sensitive to privacy issues, the generation of Digital Natives is lagging behind (Zukowski and Brown 2007, 198).

Research examining privacy perception among Digital Natives has mostly focused on the online lives that they are so used to living. A gap in this literature is that there is no research on privacy perceptions among Digital Natives in relation to increasing surveillance practices implemented by governments. This research will try to fill that gap by examining whether Digital Natives in the Netherlands feel differently about the Wiv 2018 and privacy violation than Non-Digital Natives do. Since the general findings in the literature on Digital Natives suggest that older generations tend to care more about their privacy, the following hypothesis has been formed:

“Digital Natives will perceive the new Wiv 2018 as more justified in the interest of a safer society, and feel less violated in their privacy by the new authorities of the AIVD, than Non-Digital Natives.”

3. Methodology

3.1 Procedure

The aim of this research is to examine differences in public perception of privacy violation among Dutch citizens. The case of the Wiv 2018 has been chosen as the starting point for this research. In order to map the public perception, the choice was made to use a quantitative methodology to collect data, so as to reach as many citizens as possible. An online survey was created with Google Forms and spread via email, Facebook, LinkedIn and WhatsApp to direct family members, friends, colleagues and others. The survey can be found in Appendix A and B, in the original Dutch format and in the English format. Friends and family were asked to share the survey via email and social media. Many of them did so, which enlarged the reach of the survey and created a more diverse response. The survey introduced the respondents to the context of the research; a short explanation of the Wiv 2018 was given and the new authorities of the AIVD were introduced, to make sure that every respondent entered the survey with the same context in mind. The survey was conducted in a two-week period, making it a cross-sectional study.

The survey was divided into three parts. The first part is aimed at gaining general information about the respondents, such as gender and age. In this section, the respondents’ date of birth was asked in order to determine who was born in or after 1982 and can be considered a Digital Native. The second part consists of 17 statements, which have been divided into two sets. The first set consists of statements that ask about privacy, security and how respondents feel about the Wiv 2018 in the interest of a safer society. The second set consists of statements concerning the authorities of the AIVD specifically, and is intended to measure to what extent the respondents feel violated in their privacy by these new authorities. All the statements in the survey are set up according to the Likert scale; respondents could indicate on a scale from 1 to 5 how much they agree or disagree with a statement. The values are 1: strongly disagree, 2: disagree, 3: neutral, 4: agree and 5: strongly agree. This scale gives a form of nuance to the answers, and allows the respondents to indicate the intensity of their answers.

The survey was first distributed as a pilot to four individuals: two Digital Natives and two Non-Digital Natives. The feedback received from them was taken into account, and the adjusted version was the final version of the survey that was distributed. The total number of respondents to the survey is 208. However, some responses were invalid. Due to a technical problem that some respondents encountered when filling out the survey on their telephone, some respondents were not able to enter the right birth year. This resulted in 27 respondents filling out 2017 as their birth year. These responses have been removed from the original sample, which resulted in a final sample of 181 respondents. In this research, the response rate is hard to pinpoint. The distribution of the survey relied on a snowball effect; friends and family were asked to share the survey via social media. As this was done by a high number of people, it is estimated that the number of people that laid eyes on the survey is much higher than the number of respondents that actually filled it out.

3.2 Measures

In this research, the goal is to examine whether there is a difference between Digital Natives and Non-Digital Natives in their perception of the Wiv 2018 and their perceived privacy violation. The first set of statements, as mentioned before, is aimed at measuring what respondents think of the Wiv 2018 in the interest of a safer society. These statements reflect what side of the debate the respondents are on: are they willing to accept the Wiv 2018 in the interest of a safer society, or do they feel that the law is unjustified? This set of statements is represented by the variable ‘Wiv 2018’. The first two statements in this part of the survey are general; they ask to what extent privacy and security are perceived as important, and are therefore not included in the measurement of the variable Wiv 2018. The variable Wiv 2018 thus consists of statements 3-7. Statements 8-17 are aimed at measuring to what extent the respondents feel violated in their privacy by the new authorities of the AIVD; the variable these statements represent is therefore called ‘Privacy Violation’.

The ten statements cover four of the harmful activities from Solove’s privacy violation taxonomy, in combination with the three changes in the new Wiv 2018 that are central to the public debate. In the Appendix, the statements have been colour coded to easily show which statement measures which harmful activity. The first three statements (in red) measure to what extent citizens find the new surveillance measures of the AIVD in themselves an unpleasant idea. This reflects the harmful activity Surveillance as stated in the privacy violation taxonomy. The next three statements (in green) reflect the harmful activity Invasion, and measure whether the respondents perceive the new authorities of the AIVD as an invasion of their privacy. The next two statements (in blue) measure the harmful activity Exclusion, and focus on whether respondents find it an unpleasant idea that they have no insight into how the AIVD uses and stores their data, and no access to that data. Finally, the last two statements (in yellow) measure Secondary Use, and focus on whether respondents fear that their personal information might be shared with other organisations or used for other purposes.

As both variables consist of multiple statements, the scores for the individual statements have been summed in order to create the computed variables Wiv 2018 and Privacy Violation. For the variable Wiv 2018, this means that the minimum score is 5 and the maximum score 25. The higher the score, the more the respondent feels that the Wiv 2018 is not justified in the interest of a safer society. For the variable Privacy Violation, the minimum score is 10 and the maximum score 50. The higher the score, the more violated the respondent feels in their privacy.
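As an illustration, the summing of Likert scores into the two computed variables can be sketched as follows. This is a hypothetical sketch in Python, not the SPSS procedure that was actually used; the answer values are invented for the example.

```python
def scale_score(responses, items):
    """Sum the 1-5 Likert scores of the given item indices for one respondent."""
    return sum(responses[i] for i in items)

# One hypothetical respondent: 17 answers on a 1-5 scale (0-indexed list).
answers = [4, 5, 2, 3, 3, 2, 4, 5, 4, 4, 3, 5, 4, 4, 5, 3, 4]

wiv_2018 = scale_score(answers, range(2, 7))            # statements 3-7, range 5-25
privacy_violation = scale_score(answers, range(7, 17))  # statements 8-17, range 10-50

print(wiv_2018, privacy_violation)  # prints 14 41
```

A respondent scoring "strongly agree" on every statement would reach the maxima of 25 and 50 used in the interpretation above.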

3.3 Reliability and Validity

In order to examine the scale reliability of the computed variables, Cronbach’s alpha was computed. Any result higher than 0.7 means that the reliability is acceptable and that no statements need to be deleted. Cronbach’s alpha was computed for the variables Wiv 2018 and Privacy Violation, as well as for the indicators Surveillance, Invasion, Secondary Use and Exclusion. The results of the Cronbach’s alpha test are displayed in Table 2. All the computed variables and indicators score higher than 0.7, which means that they are reliable.

Table 2

Cronbach's Alpha for Computed Variables

Variable/Indicator Number of items Cronbach’s alpha

Wiv 2018             5     0.866
Privacy Violation    10    0.951
Surveillance         3     0.925
Invasion             3     0.945
Exclusion            2     0.834
Secondary Use        2     0.905
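For readers unfamiliar with the statistic, Cronbach’s alpha can be computed from the item variances and the variance of the summed scale. The sketch below is illustrative Python with invented responses, not the SPSS output reported in Table 2.

```python
from statistics import variance

def cronbach_alpha(items):
    """items: list of per-item score lists, one inner list per statement."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent summed scale
    item_var = sum(variance(scores) for scores in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Three hypothetical 1-5 Likert items answered by five respondents.
items = [
    [4, 5, 3, 2, 4],
    [4, 4, 3, 2, 5],
    [5, 5, 2, 2, 4],
]
alpha = cronbach_alpha(items)
print(round(alpha, 3))  # prints 0.921
```

Because the three invented items move together across respondents, alpha comes out well above the 0.7 threshold, as it does for the real scales in Table 2.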

By using a survey, this research might be sensitive to response tendencies. The respondents might fill out the survey in a way that they think is politically correct, socially desirable, or what they think the researcher wants to hear. Additionally, the use of a Likert scale may cause tendencies of its own. Respondents might always agree or always disagree with statements; they might always or never pick the extreme scores, or they might take the previous statements as a guide and end up giving the same scores to subsequent statements. The statements might also be steering, as they have all been worded in the same direction to facilitate data analysis. These tendencies would result in systematic errors, which would reduce the validity of this research. However, by using a scale from 1 to 5 instead of presenting respondents with just two extremes, respondents could give more nuance to their answers. In this way, the problem of reduced validity was weakened as much as possible.


3.4 Process of Analysis

For the analysis of the data, SPSS Statistics was used. For all the different analyses used in this research, the level to evaluate significance is α =.05. The next section will begin by outlining the results of the survey. This will be done in two ways: first the results for the total sample size will be discussed, which will bring to light the trends among the respondents. Second, the results will be split up according to Digital Natives and Non-Digital Natives, to bring to light the trends per age group and to identify some possible differences between the two groups. To visualise the trends, histograms, or frequency distributions, will be used. A histogram is a graph that displays how many times a score has occurred among a certain population. It is a graph plotting values of observations on the horizontal axis, with a bar showing how many times each value occurred in the data set. This will give a good indication of trends among populations.

In order to draw more in-depth conclusions from the data, and thus to examine whether there are connections, correlations, relations and significant differences, various statistical analyses will be applied to the data. First of all, in order to examine whether any observed differences between Digital Natives and Non-Digital Natives are significant, a t-test will be used. A t-test assesses whether the observed difference between the two age groups is systematic, and not caused by accident, such as coincidence, a confounding variable or an unrepresentative sample. If the difference is not caused by accident, the result of the t-test will be statistically significant.
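To make the mechanics of the t-test concrete, it can be sketched in a few lines. This is an illustrative stdlib-only Python implementation of Welch's variant (which matches the non-integer degrees of freedom reported for Statement 1 in Section 4.4); the two groups below are invented, not the survey data.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Return the t statistic and Welch-Satterthwaite degrees of freedom."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

digital = [4, 4, 3, 5, 4, 3, 4, 5]       # hypothetical Likert scores, group 1
non_digital = [5, 4, 5, 5, 4, 5, 4, 5]   # hypothetical Likert scores, group 2

t, df = welch_t(digital, non_digital)
print(round(t, 3), round(df, 1))  # prints -1.93 12.4
```

The sign of t follows the order of the groups: a negative t here means the first group scores lower on average, exactly as with the negative t values reported below.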

Secondly, Pearson’s correlation test will be executed on different parts of the data in order to examine the strength and direction of the association between variables. The strength of the relationship can be read from the correlation coefficient, which varies from -1 to +1. The correlation test will be executed to see whether there is an association between Age, Gender, the variable Wiv 2018 and the variable Privacy Violation. Additionally, Pearson’s correlation will also be used to examine the strength of the association between Privacy Violation and the indicators that make up this variable. In this way, it can be determined which indicator has the strongest association with Privacy Violation.
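The coefficient itself can be sketched in a few lines of Python. The scores below are invented for illustration and chosen only to produce a strong positive association like the one reported in Section 4.5; the shared variance R² is simply r squared.

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson's product-moment correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

wiv_scores = [12, 15, 20, 9, 18, 14]   # hypothetical Wiv 2018 scores
pv_scores = [30, 39, 47, 28, 35, 33]   # hypothetical Privacy Violation scores

r = pearson_r(wiv_scores, pv_scores)
print(round(r, 3), round(r ** 2, 3))  # prints 0.872 0.761
```
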

Finally, a regression analysis will be executed on the significant correlations found in this research. This will give a deeper meaning to the existing associations between variables. A regression analysis is based on the statistical model of a straight line, y = ax + b, where y is the dependent (outcome) variable, x is the independent (predictor) variable, and a and b are the regression coefficients: the slope and the intercept. A regression analysis tells us how much the outcome variable changes when the predictor variable goes up by one. Additionally, the regression analysis gives the shared variance between two variables as R².
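A minimal sketch of this least-squares line, again with invented data rather than the thesis sample, shows how the slope a, intercept b and shared variance R² described above are obtained:

```python
from statistics import mean

def linreg(x, y):
    """Least-squares fit of y = a*x + b, returning (a, b, R^2)."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    a = sxy / sxx                 # slope: change in y per one-unit change in x
    b = my - a * mx               # intercept
    r2 = sxy ** 2 / (sxx * syy)   # shared variance R^2
    return a, b, r2

x = [12, 15, 20, 9, 18, 14]       # hypothetical predictor scores
y = [30, 39, 47, 28, 35, 33]      # hypothetical outcome scores

a, b, r2 = linreg(x, y)
print(round(a, 3), round(b, 3), round(r2, 3))  # prints 1.508 13.21 0.761
```

For a simple regression with one predictor, R² equals the square of Pearson’s r between the two variables.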


4. Results

4.1 Descriptive Statistics

The final sample of the survey consists of 181 respondents; 95 respondents (52.5%) were born in or after 1982 and are grouped as Digital Natives, and 86 respondents (47.5%) were born before 1982 and are therefore grouped as Non-Digital Natives. The final sample consists of 110 females (60.77%), 70 males (38.67%) and 1 other gender (0.55%). The group Digital Natives (n=95) consists of 63 females (66.3%) and 32 males (33.7%), and the group Non-Digital Natives (n=86) consists of 47 females (54.7%), 38 males (44.2%) and 1 other gender (1.1%). The mean age for the total sample is 39.39, displayed in Table 3. The mean age for the group Digital Natives is 26.55, and the mean age for the group Non-Digital Natives is 53.36. As can be seen in Table 3, the standard deviation for the group Digital Natives is smaller than that of the group Non-Digital Natives, which indicates that the spread of age is smaller in the former group. This can be ascribed to the narrower age range of that group; while the group Non-Digital Natives includes every respondent older than 35, the group Digital Natives can only include respondents aged between 18 and 35 years.

Table 3

Mean Age for Total Sample and per Age Group

Population            N     Minimum   Maximum   Mean    Standard Deviation
Total Sample          181   18        77        39.39   15.981
Digital Natives       95    18        35        26.55   4.388
Non-Digital Natives   86    36        77        53.36   11.736

4.2 Results Total Population

The survey distributed to the respondents consists of 17 statements, of which 15 have been divided over two variables: Wiv 2018 and Privacy Violation. The first two statements in the survey ask whether the respondents value their privacy and security, independently of one another; they have not been included in the measurement of either variable. Statement 1 is “My privacy is important to me.” and Statement 2 is “My security is important to me.” In Figures 1 and 2, the histograms of both statements are displayed. As can be seen, both statements deviate from a normal distribution by having a negative skew: the most frequent scores are clustered at the higher end of the scale. Respondents thus value both their privacy and their security; however, a slight difference can be seen between the two statements: 124 respondents strongly value their security, while only 72 respondents strongly value their privacy. This indicates that respondents value their security more strongly than they value their privacy.

Figure 1. Histogram Statement 1 (privacy)
Figure 2. Histogram Statement 2 (security)

The variable Wiv 2018 consists of 5 statements (statements 3-7, Appendix B). These statements ask the respondents whether they think the new Wiv 2018 is justified in the light of a safer society. In Figure 3, the histogram for the variable Wiv 2018 is displayed. The histogram shows a slight deviation from a normal distribution: the scores cluster just below the middle score of 15. For the variable Wiv 2018, a higher score means that the respondent is less accepting of the new law in the interest of a safer society. The most frequent scores lie just below the middle, which means that those respondents slightly accept the new Wiv 2018 in the interest of a safer society. This trend can also be seen in the mean score, which is 13.91, and thus also slightly below the middle.


The remaining statements (statements 8-17, Appendix B) reflect the variable Privacy Violation, which consists of the indicators Surveillance, Invasion, Secondary Use and Exclusion. These statements all reflect the new authorities of the AIVD as stated in the Wiv 2018, and the higher the score on this variable, the more respondents feel violated in their privacy. The histogram for the variable Privacy Violation is displayed in Figure 4. As can be seen there, the scores on Privacy Violation are not normally distributed; there is a negative skew, as the most frequent scores are clustered at the higher end. Remarkable for this variable is that the most frequent score is the maximum score: 16 respondents have given this score, which indicates that they feel strongly violated in their privacy by the new authorities of the Wiv 2018. The negative skew towards the higher scores is also reflected in the mean score, which is 35.232.

The variables Wiv 2018 and Privacy Violation do not measure the same thing, and can therefore not be compared directly. However, it is interesting to see that there seems to be a slight discrepancy. The statements in the first part of the survey are phrased in a general manner: what do the respondents think of the new law in relation to a safer society? The general trend here seems to be that respondents slightly accept the law and perceive it as justified in relation to a safer society. This creates the expectation that respondents who accept the law would also accept the new authorities, and thus would not feel strongly violated in their privacy by them. As a comparison of the histograms and mean scores of both variables, represented in Figure 3 and Figure 4, shows, this expectation is not reflected in the variable Privacy Violation. The statements in the second part of the survey are phrased in a more personal way, and do not weigh the authorities of the AIVD against the interest of a safer society. These statements ask whether respondents would feel violated in their privacy if specific surveillance practices of the AIVD were applied to their lives. Here the general trend seems to be that respondents do feel violated in their privacy. There might thus be a difference between what respondents think of the law in the interest of a safer society, and what they think of the authorities within the new law when these affect their personal lives.

4.3 Results per Age Group

As mentioned before, the first two statements of the survey measure whether privacy and security are important to the respondents. In Figure 5 and Figure 6, the histograms for both statements, split up according to Digital Natives and Non-Digital Natives, can be seen. An expectation extracted from the literature about Digital Natives and privacy was that Digital Natives care less about their privacy than Non-Digital Natives. As can be seen in Figure 5, this also holds among the respondents, as there is a difference in how strongly both groups value their privacy. Among the Digital Natives, 25 respondents have indicated that they strongly agree with the statement that privacy is important to them, while in the group Non-Digital Natives 47 respondents have done so. For Statement 2, displayed in Figure 6, there is no considerable difference visible between the two groups. In general, however, it can be said that both groups tend to value their security more than their privacy; more respondents in each group have indicated that they strongly value their security than their privacy (63 Digital Natives and 61 Non-Digital Natives for security, compared to 25 Digital Natives and 47 Non-Digital Natives for privacy).

Figure 5. Histogram of Statement 1 (privacy), split up according to Digital Natives and Non-Digital Natives

Figure 6. Histogram of Statement 2 (security), split up according to Digital Natives and Non-Digital Natives

The second part of the statements (3-7) measures the variable Wiv 2018. In Figure 7, the histogram for this variable is displayed per age group. As can be seen, no big difference between the two groups is immediately visible, which is also reflected in the mean scores of both groups: Non-Digital Natives have a mean score of 14.419, while Digital Natives have a mean score of 13.453. Digital Natives tend to be slightly more accepting of the Wiv 2018, as their mean score is lower. Additionally, as can be seen in Figure 7, the peak of the group Digital Natives lies at the score 12, while for the group Non-Digital Natives it lies at 13. These peaks differ by only one point, and the mean scores of both groups also differ by about one point, so there is no big difference between the two age groups for this variable. It can be concluded from this data that both groups slightly accept the new Wiv 2018 in the interest of a safer society.

The third and final part of the statements in the survey (8-17) represents the variable Privacy Violation. In Figure 8, the histogram for this variable is displayed per age group. As can be seen in Figure 8, both groups show a negative skew: the higher the scores, the more frequently they occur. However, the group Non-Digital Natives is more extreme in its scores: the maximum score of 50 has been given 12 times by respondents in that group, while in the group Digital Natives only 4 respondents have given the maximum score. This is also reflected in the mean scores; the group Digital Natives has a mean score of 34.516, while the group Non-Digital Natives has a mean score of 36.023. The data thus indicates that the group Non-Digital Natives tends to feel more violated in their privacy by the new Wiv 2018 than the group Digital Natives.

Figure 7. Histogram for the variable Wiv 2018, split up according to Digital Natives and Non-Digital Natives

Figure 8. Histogram of the variable Privacy Violation, split up according to Digital Natives and

Non-Digital Natives

4.4 t-test

To examine whether differences between the two age groups are significant in this research, a t-test has been executed. First of all, the literature addressing the difference between Digital Natives and Non-Digital Natives has indicated that Digital Natives tend to be less aware of privacy issues and less concerned about their privacy. The first statement in the survey asks the respondents to what extent they think their privacy is important. On average, respondents that are Non-Digital Natives score higher on this statement (M = 4.33, SE = .100) than respondents that are Digital Natives (M = 3.99, SE = .088). This difference, -.336, BCa 95% CI [-.599, -.074], was significant, t(173.541) = -2.536, p = .006. This means that in this research, Digital Natives rate their privacy as significantly less important than Non-Digital Natives do.

Secondly, the differences in scores on the variables Wiv 2018 and Privacy Violation need to be examined. On average, respondents that are Non-Digital Natives score higher on the variable Wiv 2018 (M = 14.42, SE = .508) than respondents that are Digital Natives (M = 13.45, SE = .519). This difference, -.966, BCa 95% CI [-2.400, .468], was not significant, t(179) = -1.330, p = .093. However, even though the t-statistic is not statistically significant, this does not mean that the effect is unimportant in practical terms. The effect size can tell us whether the effect is substantive; for this variable r = .099, which translates as a weak effect. There is thus no significant difference for the variable Wiv 2018, and additionally no substantive effect.

For the variable Privacy Violation, respondents that are Non-Digital Natives on average score higher (M = 36.023, SE = 1.284) than respondents that are Digital Natives (M = 34.516, SE = 1.083). This difference, -1.507, BCa 95% CI [-4.803, 1.788], was also not significant, t(179) = -.903, p = .184. The r value for the variable Privacy Violation is .067, which likewise indicates a weak effect. Thus, there is no significant difference between the two age groups for the variable Privacy Violation, and no substantive effect.
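The effect sizes reported above follow directly from the t statistics and degrees of freedom via r = √(t² / (t² + df)). The short sketch below recomputes the two reported values as a check:

```python
def effect_size_r(t, df):
    """Convert a t statistic and its degrees of freedom into the effect size r."""
    return (t ** 2 / (t ** 2 + df)) ** 0.5

print(round(effect_size_r(-1.330, 179), 3))  # Wiv 2018: prints 0.099
print(round(effect_size_r(-0.903, 179), 3))  # Privacy Violation: prints 0.067
```

Both recomputed values match the r values of .099 and .067 reported in this section.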

4.5 Correlations

In order to examine whether there are any associations between the variables included in this research, Pearson’s correlation test has been used. First, the test has been executed to examine the correlations between Age, Gender, Wiv 2018 and Privacy Violation. The results of the Pearson correlation for these variables can be seen in Table 4. The test reveals no significant relationship between Age and any of the other variables, nor between Gender and the other variables. The test does reveal one significant correlation: there is a significant strong positive correlation between Wiv 2018 and Privacy Violation (r = .777, p = .000, p < α). The variable Wiv 2018 shares 60.4% of its variability with the variable Privacy Violation (R² = .604). This means that the scores of both variables are related: when the score on the variable Wiv 2018 goes up, the score on the variable Privacy Violation also goes up. The higher the score on the variable Wiv 2018, the less accepting the respondents are of the new law, and the higher the score on the variable Privacy Violation, the more respondents feel violated in their privacy. This correlation thus indicates that the less accepting respondents are of the Wiv 2018, the more they feel violated in their privacy.

Table 4

Pearson Correlations

Variable            Age     Gender   Wiv 2018   Privacy Violation
Age                 1       -.094    .099       .067
Gender                      1        .000       .032
Wiv 2018                             1          .777**
Privacy Violation                               1

**. Correlation is significant at the .01 level

Secondly, the Pearson correlation test has been used for the variable Privacy Violation and its indicators Surveillance, Invasion, Exclusion and Secondary Use, in order to see the nature of the association between the indicators and the variable. The correlation coefficients are displayed in Table 5. As can be seen in Table 5, all correlations are significant, positive and strong (the p-value for all correlation coefficients is .000, p < α). This indicates that when the score on any of the indicators goes up, the overall score on the variable Privacy Violation also goes up. First, the correlation of Surveillance and Privacy Violation has an effect size of 87.9% (R² = .879), so 87.9% of the variability in Surveillance is shared with the variable Privacy Violation. Second, the correlation between Invasion and Privacy Violation has an effect size of 85.2% (R² = .852), and thus 85.2% of the variability in Invasion is shared with Privacy Violation. Third, 75% of the variability in Exclusion is shared with the variable Privacy Violation (R² = .750). Finally, the correlation between Secondary Use and Privacy Violation has the smallest effect size; 61.6% of the variability in Secondary Use is shared with Privacy Violation (R² = .616). The data thus shows that the indicator Surveillance, having the biggest effect size, has the largest influence on the variable Privacy Violation, while the indicator Secondary Use, with the smallest effect size, has the smallest influence.
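These shared-variance figures follow directly from squaring the Pearson r values reported in Tables 4 and 5; small differences against the reported percentages stem from rounding to three decimals. A quick check in Python:

```python
# Squaring each reported Pearson r reproduces the shared variance (R²),
# up to rounding in the reported three-decimal values.
correlations = {
    "Wiv 2018 (overall)": 0.777,  # from Table 4
    "Surveillance": 0.938,        # from Table 5
    "Invasion": 0.923,
    "Exclusion": 0.866,
    "Secondary Use": 0.785,
}
for name, r in correlations.items():
    print(f"{name}: R² = {r ** 2:.3f}")
```

This also illustrates why R² is an informative follow-up to r: it expresses the correlation on a proportion-of-variance scale that can be read as a percentage.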

Finally, Pearson's correlation has been used to analyse the associations between variables within the different age groups. The results of the Pearson test per age group are displayed in Table 6. Within the group Non-Digital Natives, there is a significant strong positive correlation between Wiv 2018 and Privacy Violation (r = .764, p = .000, p < α). This indicates that, within this group, the score on the variable Wiv 2018 is associated with the score on the variable Privacy Violation.

Table 5

Pearson Correlations for Privacy Violation and Indicators

Variable/Indicator   Privacy Violation   p-value
Surveillance         .938**              .000
Invasion             .923**              .000
Exclusion            .866**              .000
Secondary Use        .785**              .000

**. Correlation is significant at the .01 level

As the score on Wiv 2018 goes up, so does the score on the variable Privacy Violation. The effect size of this correlation is 58.4%, which means that 58.4% of the variability in the variable Wiv 2018 is shared with the variable Privacy Violation (R² = .584). Within the group Digital Natives, there is also a significant strong positive correlation between Wiv 2018 and Privacy Violation (r = .791, p = .000, p < α). The correlation in this age group is larger, and therefore the effect size is also larger than in the group Non-Digital Natives: 62.6% of the variability in the variable Wiv 2018 is shared with the variable Privacy Violation (R² = .626). The data thus indicates that the association between the variables Wiv 2018 and Privacy Violation is stronger in the group Digital Natives than in the group Non-Digital Natives.

Table 6

Pearson Correlations per Age Group

                                        Gender   Wiv 2018   Privacy Violation
Non-Digital Natives  Gender             1        .063       .004
                     Wiv 2018                    1          .764**
                     Privacy Violation                      1
Digital Natives      Gender             1        .089       -.043
                     Wiv 2018                    1          .791**
                     Privacy Violation                      1

**. Correlation is significant at the .01 level

4.6 Regression

The results of the Pearson correlation test reveal no significant correlation between Age and either Wiv 2018 or Privacy Violation. There is thus no basis for a regression analysis measuring the extent of the influence of age on either variable. However, Wiv 2018 and Privacy Violation do have a strong positive correlation with each other. In order to examine this relationship further, a simple linear regression was calculated to predict the score on Privacy Violation from the score on Wiv 2018. First, this was done for the total sample (N = 181); the results are displayed in Table 7. A significant regression equation was found (F(1, 179) = 271.882, p < .001), with an R² of .603. Respondents' predicted score on Privacy Violation equals 10.463 + 1.780*(Wiv 2018). The R² value has also been discussed in the previous section as a follow-up calculation of the Pearson correlation: 60.3% of the variability in Privacy Violation is accounted for by Wiv 2018. Additionally, the predicted score on Privacy Violation increases by 1.78 for every one-point increase in the score on Wiv 2018. With the results of this regression analysis, a prediction can be made of the score on Privacy Violation based on the score on the variable Wiv 2018.

Earlier in this section, the data of the total population was analysed using histograms. Comparing the histograms of the variables Wiv 2018 and Privacy Violation, it became obvious that some sort of gap exists between the two: Wiv 2018 has a mean score below the middle of the scale, while Privacy Violation has a mean score above it. When the regression formula that represents the estimated linear line is filled in, this gap becomes visible. For example, when the most frequently given score on the variable Wiv 2018 is entered, which is 13, the predicted score on Privacy Violation is 33.6. This shows that even though respondents might score below average on Wiv 2018, their predicted score on Privacy Violation is above average.
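The worked example above can be reproduced by plugging the reported coefficients into the fitted equation; a minimal Python sketch (the helper name `predict_privacy_violation` is illustrative, and the coefficients are taken from Table 7):

```python
def predict_privacy_violation(wiv_score, intercept=10.463, slope=1.780):
    """Predicted Privacy Violation score from the fitted simple regression."""
    return intercept + slope * wiv_score

# Most frequently given Wiv 2018 score in the sample:
print(round(predict_privacy_violation(13), 1))  # → 33.6
```

Since the midpoint of the Privacy Violation scale lies below 33.6, even a below-average Wiv 2018 score of 13 maps onto an above-average predicted Privacy Violation score, which is the gap the histograms revealed.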

Table 7

Results regression analysis Wiv 2018 and Privacy Violation. 95% bias corrected and accelerated confidence intervals reported in parentheses

           b                      Standard Error b   p
Constant   10.463 (7.32, 13.60)   1.592              .000
Wiv 2018   1.78 (1.57, 1.99)      0.108              .000

Note. R = .777, R² = .603

Secondly, a simple linear regression was calculated for the groups Non-Digital Natives and Digital Natives separately. The results of these calculations are displayed in Table 8. Within the group Non-Digital Natives, a significant regression equation was found (F(1, 84) = 117.451, p < .001), with an R² of .538. Non-Digital Natives' predicted score on Privacy Violation equals 8.741 + 1.892*(Wiv 2018). This means that in the group Non-Digital Natives, when the score on Wiv 2018 goes up by 1, the predicted score on Privacy Violation goes up by 1.892.

Additionally, within the group Digital Natives, a significant regression equation was found (F(1, 93) = 155.955, p < .001), with an R² of .626. Digital Natives' predicted score on Privacy Violation equals 11.791 + 1.689*(Wiv 2018). This indicates that in the group Digital Natives, when the score on Wiv 2018 goes up by 1, the predicted score on Privacy Violation goes up by 1.689. As a comparison of the two regression equations shows, the score on Wiv 2018 has a larger effect on the score on Privacy Violation in the group Non-Digital Natives than in the group Digital Natives (b = 1.892 compared to b = 1.689).
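The comparison of the two group equations can be made concrete by evaluating each at the same Wiv 2018 score; a minimal sketch using the coefficients reported in Table 8 (the group labels and helper name are illustrative):

```python
# Group-specific predictions using the coefficients reported in Table 8.
GROUP_COEFFICIENTS = {
    "non_digital_natives": (8.741, 1.892),   # (intercept, slope)
    "digital_natives": (11.791, 1.689),
}

def predict(group, wiv_score):
    intercept, slope = GROUP_COEFFICIENTS[group]
    return intercept + slope * wiv_score

# At the most frequently given Wiv 2018 score of 13 the predictions are close,
# but the steeper Non-Digital Native slope widens the gap at higher scores.
print(round(predict("non_digital_natives", 13), 1))  # → 33.3
print(round(predict("digital_natives", 13), 1))      # → 33.7
```

The steeper slope means that, per additional point on Wiv 2018, the predicted Privacy Violation score rises faster for Non-Digital Natives, even though their intercept is lower.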


Table 8

Results regression analysis Wiv 2018 and Privacy Violation, split up by age group. 95% bias corrected and accelerated confidence intervals reported in parentheses

                               b                        Standard Error b   p
Non-Digital Natives  Constant  8.741 (3.467, 14.015)    2.652              .001
                     Wiv 2018  1.892 (1.545, 2.239)     .175               .000
Digital Natives      Constant  11.791 (7.943, 15.639)   1.938              .000
                     Wiv 2018  1.689 (1.421, 1.958)     .135               .000
