
Students’ Attitudes and Awareness Regarding Public

Universities’ Data Policies: An Empirical Study

Bachelor thesis by Sebastiaan Cornelis Stuurman

Name: Sebastiaan Cornelis Stuurman (10212388) Supervisor: Gábor Kismihók

Department: Business Studies Date of submission: 29 June 2015


Statement of originality

This document is written by Student Sebastiaan Stuurman who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Abstract

Because of the rather exploratory and theoretical nature of most work in the emerging field of learning analytics, this research attempts to contribute by empirically studying relationships between students’ privacy attitudes towards the government, online behavioral advertising and public universities. Based on a sample of 155 student questionnaires, one of the main findings was that students’ attitudes towards public universities’ data policies are shaped by both their experiences with online behavioral advertising and information directly supplied by the university. However, this second aspect proved to be problematic, as universities currently provide insufficient information about their data access and ownership policies.


Contents

Abstract
1. Introduction
2. Conceptual framework
   2.1 Universities’ data analysis
   2.2 Level of trust in the government
   2.3 Online behavioral advertising and privacy
       2.3.1 Legislative influences
3. Methodology
   3.1 Literature
   3.2 Quantitative research design
   3.3 Sample
   3.4 Questionnaire design
       3.4.1 Trust in government
       3.4.2 Privacy concerns regarding OBA
       3.4.3 Privacy concerns regarding Blackboard
       3.4.4 Specific privacy risks
4. Results
   4.1 Sample characteristics
   4.2 Gender differences
   4.3 Faculty differences
   4.4 Reliability
   4.5 Correlations
   4.6 Regressions
5. Discussion
   5.1 Trust in government and concerns regarding Blackboard
       5.1.1 Unawareness due to lack of transparency
       5.1.2 Unawareness due to low political involvement
   5.2 Relationship between OBA concerns and Blackboard concerns
   5.3 Limitations and suggestions for future research
6. Conclusion


1. Introduction

The Dutch government recently announced that from 2015 on, students in higher education will no longer receive financial support, but instead can apply for a loan (Rijksoverheid). At the same time, universities are pressured into working more efficiently, due to declining contributions from the government and demands for an increase in the quality and measurability of their educational activities. These developments caused Dutch universities to sign an agreement, the ‘Hoofdlijnenakkoord’, in which they pledged to put more effort into actively selecting and guiding students (VSNU). Technology is predicted to play a major role in realizing this, as it could enable institutions to take advantage of the large quantities of data they possess. For example, tracking and analyzing students’ interaction with online learning environments such as Blackboard holds great potential. At the same time, many commercial environments have already seen a similar technology-powered revolution.

For example, our online shopping experience is nowadays strongly shaped by personalized offers and recommendations. Through the use of cookies and trackers, companies such as Google and Facebook are able to construct consumer profiles that perfectly reflect our lifestyles and tastes (Jai, Burns and King 3). In a famous example, the analysis of a girl’s online behavior revealed her pregnancy to her father before she had personally told him, as advertising companies started sending pregnancy-related advertisements to her house (Greengard 18). Such advanced analytical techniques often serve a commercial purpose, but their success has opened up a wide range of other possible applications, also in educational contexts.

In education, data analysis is still an emerging field. This can be partially blamed on an issue that also arises in commercial applications: privacy concerns. While privacy concerns in relation to online advertising have been extensively researched and attitudes towards educational uses are gradually being mapped, little has been done to find out how the two are connected (McDonald and Cranor, Slade and Prinsloo). As a result, this thesis offers a detailed overview of students’ concerns regarding both behavioral advertising and data analysis on universities’ online learning environments, after which the relationship between them is empirically analyzed. Specifically, students’ attitudes toward Blackboard are analyzed, which is an online learning environment used by many universities, including the University of Amsterdam where this study takes place. Because the majority of universities in the Netherlands are public institutions, and privately owned universities function in a more commercial way, this thesis focuses on public universities. This inspired an additional


From this analysis emerges an empirical investigation of the degree to which students’ level of trust in the government is connected to their level of privacy concerns over Blackboard. An important consideration in doing this is attempting to understand the relationship between actual privacy issues and perceived privacy issues, which results in a focus on transparency aspects.

By connecting privacy issues and transparency aspects of the respective institutions, this research hopes to contribute to our understanding of the ethical side of emerging uses of data analysis. By looking at students’ experiences with the described concepts, this thesis specifically attempts to clarify whether previous experiences with privacy and data analysis shape attitudes towards educational uses.

2. Conceptual framework

2.1 Universities’ data analysis

Tene and Polonetsky describe how analyzing data does not only have advantages for commercial actors, but also for governments. Governments and their institutions are known to use expansive online systems and databases, for example for tax purposes, but also in order to keep records concerning healthcare (63). In this thesis, the focus will be on the storage and analysis of data by a specific type of (semi)-government institution: public universities.

Motivated by increasing numbers of students, tighter budgets, intense international competition and greater calls for reliable quality and accountability measures, universities are increasingly developing a data-driven mindset (Siemens, Dawson and Lynch 5). The field of learning analytics describes the way in which technology can be utilized in order to improve education. By using data analysis to enhance their understanding of students’ behaviors and learning processes, universities are able to improve their entire educational systems. As a result, Siemens, Dawson and Lynch suggest that: “Education systems that do make the transition towards data-informed planning, decision making, teaching and learning will hold significant competitive and quality advantages over those that do not” (2). In practice, this changing mindset translates into both the use of new technologies and confrontation with the associated risks. A technology that has a central place in this development is the use of online learning environments by universities.

The use of online learning environments can benefit the institution itself, but also students and their instructors. These systems offer, for example, course materials ranging from assignments to entire online lectures. Not only can the instructions for these assignments be found online, but students often have to turn in their work on these systems as well, meaning that they either contain a database of assignments or are linked to such a database. Also, online learning environments provide insight into a student’s study progress by giving an overview of the courses he or she attends and the results obtained for these courses. Finally, they contain discussion boards and other communication tools (Slade and Prinsloo).

Students leave a data ‘footprint’ when using these systems, allowing universities to track them and analyze their behaviors (LACE). These footprints together form a vast amount of information, which universities subject to complex analytical calculations “in order to discover and display previously unknown, and potentially useful, data patterns” (Ferguson 6). These patterns in specific behavioral aspects or student characteristics can be used to optimize not only the system itself, but entire study programs, by tailoring them to the needs of students. Analyzing the progress of large numbers of students can for example help make predictions about how well a student fits a specific course, providing opportunities to offer customized advice.

Arnold and Pistilli found that giving students automatic warning signs when the data suggested that they were falling behind significantly improved their results. Their research was based on the analysis of a wide variety of Blackboard data, including grades, demographic characteristics, academic history and the intensity of students’ interaction with Blackboard (2). This positive influence on study results is an example of how different stakeholders can benefit from data analysis in online learning environments: students improve their grades, teachers save time because of these automatic feedback systems, and universities save money as a result of higher retention rates. Finally, such detailed information about the success of specific study programs and types of students is welcomed by policymakers, as they put pressure on universities to deliver measurable outcomes. This helps them assign or defend budgets (Slade and Prinsloo 1511).

Slade and Prinsloo argue that privacy issues in educational contexts originate from a tension between two points of view. Namely, from a moral point of view, students participate in educational systems as actors who are vulnerable, because their identities are still in a developing state. Because of this temporal and developing state, optimal discretion and integrity are required from universities. On the other hand students can be looked at from a market-centered perspective, in which they are users who provide data, which should be used to optimize efficiency and output. They call this last perspective the “commoditization of education” (1512). Currently, even universities themselves are still figuring out what their standpoint should be, contributing to a lack of transparency towards students about what boundaries they set for themselves (1513).


As technical possibilities increase, the risk of inappropriate use and unsafe storage of data also grows. A risk that does not directly seem to be related to privacy is that universities could go too far in guiding students. According to Sclater, we should always consider the power imbalance between institutions, staff and students when studying learning analytics. He argues that data analysis plays an important role in this increasing power of institutions over individuals: “Institutional power is increasing at the expense of individual identity, which can increasingly be shaped by applying analytics to our interactions” (Sclater 5).

The previously described example, in which students receive advice based on their study progress, may eventually develop into a situation in which students are subject to over-generalizations. They are then no longer treated as individuals, as is the case with face-to-face counselling, but are judged based on statistical regularities in the behaviors of thousands of others. If universities were to track and advise students during every step of their educational careers, students would be subject to certain ideologies that universities express through their guidance. Pariser even uses the word “indoctrination” to describe the potential that data-centered systems have to constantly guide users towards a certain lifestyle or choice pattern (121). Slade and Prinsloo confirm these concerns by arguing that institutional algorithms “will always reflect and perpetuate current biases and prejudices” (1524).

Besides these somewhat abstract or even futuristic ideas, there are privacy hazards we are already encountering today. Data can end up being used for other purposes than for which it was originally collected, without the consent of the user. This goes for both advertising and educational uses of data analysis. The level of detail and accuracy of the data universities collect makes this type of information very versatile. For this reason, Slade and Prinsloo warn that the use of educational data for non-education purposes is to be considered especially dangerous. This is the case because of both the confidential nature of the information and its ‘completeness’. Completeness refers to the fact that the information universities have about students covers so many aspects of their lives and is so detailed, that the resulting datasets are especially vulnerable to misuse (Slade and Prinsloo 1512). The use of data for originally unintended purposes can be done both by universities themselves and by third parties who acquire the data in an illegal way. Because of this second ‘option’, storing students’ data in a secure way is an essential part of guaranteeing their privacy (Mayer and Mitchell 45).

Returning to the ‘completeness’ aspect, data collection by public universities carries a risk that is specific to governmental institutions. Cavoukian and Jonas describe how tightening budgets and calls for measurability also cause other governmental institutions to increasingly focus on data analysis. “Public health authorities, for example, have a need for more detailed information in order to better inform policy decisions related to managing their increasingly limited resources” (3). The fact that, among other institutions, tax, education and healthcare are all connected to the same government creates an additional privacy risk for consumers, as each of these organizations holds sensitive information. Together, the information sets held by the government cover citizens’ entire lives, which theoretically creates unlimited possibilities for data analysis.

For this reason, Cavoukian and Jonas argue that organizations need to be clear to users about the ownership and control of the data they store (5). More specifically, universities should clarify to what extent and under what circumstances they allow other governmental or even commercial institutions to access their student data. A crucial factor in describing the scale of these risks is the extent to which universities are linked to the government and other governmental institutions. As the next paragraph will show, public universities have significant ties to the government, which makes this one of the most serious risks.

Finally, Blackboard openly states that its systems come with an entire list of pre-set technologies that register information, including location data: “Cookies may be used to uniquely identify a user and they may be used to track individual preferences and other information about a web user. Blackboard products use this technology to provide secure learning experiences, track usage and manage application performance” (Blackboard). Besides gathering this information, it also describes how the Blackboard system allows universities to analyze data: “Identify at-risk populations for early intervention based on a rich multitude of predefined, but customizable, factors like enrollment status, GPA, financial aid load, high school test scores, ethnicity, gender, student zip code, and more” (Blackboard).

In the first place, this connection between universities and Blackboard is interesting because it shows that commercial businesses are closely involved in the development of educational systems. This is concerning because it can make universities dependent on external parties. Both technically and financially, it becomes difficult for a university to switch to a different system once it works with a specific data management system. When a client becomes dependent on a single supplier in such a way, this is called a ‘vendor lock-in’. This is a major issue because it steers the client, in this case a university, towards an unhealthy decision-making mechanism (White 7). Namely, vendor lock-in motivates universities to be guided by, for example, their desire to avoid high switching costs, rather than purely their desire to achieve their educational goals.


However, the fact that the description of what universities can do with Blackboard comes from Blackboard’s commercial website rather than from the university’s system itself is also interesting on a different level. It is an indication of what Sclater describes as an issue that relates to nearly all of the aspects described above: transparency. From this perspective, the most important questions are not what types of data universities collect and what they do with it, but rather to what extent we are informed about and agree with universities’ policies (Sclater 15). On the most basic level, universities could inform students about their data collection and analysis activities by either showing the information when a user is new to the system or by clearly making it visible in a menu. A more thorough approach would be to explicitly ask for consent, in a way similar to how consent is requested for data collection in advertising contexts. This will be discussed in section 2.3.

Overview of the most significant concerns in relation to universities’ data analysis

General issues
  Legislation (more in section 2.3): different governmental bodies have different policies (EU, national, universities); legislation lags behind current developments.
  Transparency: lack of transparency damages universities’ goodwill.

Privacy risks
  Data storage: security issues; internal storage versus outsourcing.
  Data access and ownership: connections to other government institutions; ties to commercial actors; vendor lock-in.
  Data collection: visibility, consent; user-provided versus system-generated data.


2.2 Level of trust in the government

Teo, Srivastava and Jiang argue that attitudes towards online environments are not primarily based on the interaction between consumers and platforms themselves, but rather on the relationship with the underlying party (104). This means that in commercial environments, consumers do not necessarily mistrust websites, but rather the companies that are behind them. As public universities are funded by the government, general attitudes towards the government could affect attitudes towards universities, but this depends on the extent to which they are integrated or are perceived to be. The relationship between public universities and the government is sometimes problematic. To understand exactly how they relate to each other it is helpful to analyze several aspects of this relationship step by step.

The first and foremost function of public universities is to develop and distribute knowledge. Our concept of a ‘modern’ university is strongly based on the views of nineteenth-century German scholar Wilhelm von Humboldt, who argued that universities should not serve the interests of a government. Rather, they should operate independently: their institutional and academic autonomy should be guaranteed. Harbers describes this view as follows: “Scientific practices, aiming for the production of true, objective and universally valid knowledge, should distance themselves from any economic and political interests” (4). Even today, academic and organizational independence is still considered ideal by many scientists, which is an important part of what causes the tensions described below.

Van Mierlo argues that during the twentieth century, universities gradually moved from being mostly privately funded institutions to being nearly a hundred percent funded by the government (15). This was necessary primarily due to increasing numbers of students after the Second World War. However, this increased scale also required universities to mature on an organizational level: they gradually turned into what Van Gils and Linthorst call “professional bureaucracies” (79). One of the main characteristics of such an organization is that its policies are primarily developed by professionals, in this case academics. Such an approach is effective for managing universities’ daily activities such as teaching and planning research projects, but proves to be highly ineffective for long-term strategic policy making. This is the result of a relatively ‘flat’ organizational structure: academics prefer not to be managed in a top-down and strongly hierarchical way, which makes decision making slow (Van Gils and Linthorst 79). While the increasing scale and changing environment demanded flexibility, public universities were often unable or unwilling to adapt accordingly.

Consequently, and also because of its significant financial input, the central government increasingly got involved in the strategic decision-making process of public universities.


“The decline of financial autonomy in the 1950’s was followed by a loss of policy independence in the 1980’s” (Van Mierlo 16). This intervention primarily took place at macro-level, for example in planning growth in student capacity, rather than in its daily activities (Van Mierlo 15).

The distant involvement of the government at times compromises mutual understanding between these two levels of authority. Because of the rather cold, technical approach of the government, part of the Dutch academic community developed a concern that the central government is not sufficiently aware of what students and teachers actually need in practice. The tension between these academic and technical approaches to education recently surfaced when a group of students and teachers occupied several university buildings, demanding a stronger focus on students’ personal development rather than measurability (University of Amsterdam).

These sometimes conflicting academic and organizational views serve as a useful background for analyzing the way in which public universities and the government are connected from a data perspective. Namely, these perspectives illustrate where many of the previously named aspects of the discussion around student data come from, such as productivity and integrity. By now, it is clear that the government does exercise a degree of influence on public universities and that this happens mostly at macro level rather than on their operational decision-making. So, to what extent and at what level(s) do the two institutions connect when it comes to data? More specifically: to what extent do government institutions, other than universities themselves, have access to data acquired by universities?

As this discussion shows, the central government does not control every aspect of public universities. Rather than centrally storing all data that emerge from government institutions, the government combines information from separate databases. The WRR, a scientific governmental advisory board, describes several ways in which it does so. One example is that when a student applies for financial support, the responsible institution automatically calculates the amount he or she is entitled to, by retrieving income data from the government’s tax department (WRR 26). The question is to what extent such data sharing activities are possible between public universities and the government.

Explicitly searching for privacy policies, one finds that the University of Amsterdam does provide a statement saying that it complies with the WBP, a general law for the protection of personal information (University of Amsterdam). However, as we will also see later in the discussion of behavioral advertising, privacy legislation is often vague. Namely, an important part of the WBP is that the government is only allowed to use personal information for “well defined, justified purposes” (CPBweb). Even though this appears to be a safeguard against the abuse of student data, such ‘tests’ involve a high degree of subjectivity.

Because of these institutional ties and a lack of clarification on data policies from a legal perspective, in this research the level of trust students have in the government is hypothesized to affect to what extent they believe public universities can be trusted with their Blackboard data.

Hypothesis 1: A higher level of trust in the government reduces privacy concerns regarding universities’ management of online learning systems data.

2.3 Online behavioral advertising and privacy

Goldfarb and Tucker suggest that using online behavioral advertising has one primary advantage for online vendors, namely that it significantly increases consumers’ purchase intent (395). Doing so, it has become a valuable tool for many companies involved in online retail and services. From the consumer’s perspective, online behavioral advertising cannot be clearly defined as either a positive or a negative development, as there is a tension between its advantages and disadvantages. On one hand, they benefit from the fact that it gives vendors the ability to offer free content and personalize suggestions, offers and promotions. At the same time, Internet users are often suspicious of online behavioral advertising activities due to privacy concerns. Chellappa and Sin describe this relationship as a direct trade-off, as consumers may even decide not to use certain websites as a result of these privacy concerns (182). Their research shows that consumers are only willing to accept online behavioral advertising practices if the advantages outweigh their privacy objections. McDonald and Cranor define this as the balance between willingness to accept (WTA) and willingness to pay (WTP). The authors found that 64% of consumers find online behavioral advertising intrusive, but what exactly are the techniques and associated risks consumers are concerned about?

In the early days of the Internet, consumers were often unaware of the fact that their Internet use was far from confidential. But by now, most of them realize that being involved in advertising and tracking practices is the price they have to pay for being able to access free content (Tene and Polonetsky 281). Despite rising levels of awareness and skepticism, still only a few understand the exact mechanisms behind data collection, the purposes data are collected for, and the actors involved (Tene and Polonetsky 282). The precision with which users’ online behavior is logged is fueled by advertisers’ desire to maximize conversion rates. Namely, the more is known about a consumer’s interests and behaviors, the more specifically advertisers can tailor ads based on this information, and the more likely the consumer is to click on them. In order to effectively gather these sets of data, advertisers use a number of techniques.

Probably the most well-known form of tracking Internet users is through cookies. These come in several varieties, but essentially they enable advertisers to track users’ behavior across different webpages. Initially, these tools were developed to temporarily store products in online shopping baskets while the user continued to look for other purchases. Also, users could now store login information and create personalized browser and website settings. As this type of cookie, the so-called first-party cookie, only stores information related to a single website, it creates only minor privacy risks (Tene and Polonetsky 290).

Later on, however, third-party cookies were introduced. These allow external parties to gather information about users across their entire Internet use, by installing small pieces of software in users’ browsers. By giving each user a unique ‘identifier’, websites are able to recognize which user visits a certain website and how he uses it. Subsequently, all these pieces of information from different websites are put together in order to create ‘profiles’, which are then sold on ad networks. These networks set up automatic auctions for information packages to determine for what kind of advertiser the user is the most valuable (Tene and Polonetsky 291).

One of the most recent developments in online tracking is related to the rise of mobile devices. Because consumers usually carry their mobile phones and tablets with them, advertisers can not only collect information about what websites people use and what they search for, but can also link such information to specific locations. Additionally, mobile browsers currently offer far fewer options to manage cookie settings. The fact that mobile browsing is predicted to soon take over browsing on traditional computers as the most common way of accessing the Internet increases the importance of these concerns (McDonald and Cranor). Also, these concerns are not only related to mobile browsing, as mobile apps are gaining popularity. Many apps are linked to ad networks in a similar way to websites, for example those created by game developers or shopping platforms. These apps are usually even less transparent when it comes to privacy settings than browsers (Grace et al. 4). It is clear that both first and third parties now have the ability to collect all sorts of consumer data using a variety of techniques, but what exactly makes this development dangerous?


Cavoukian and Jonas argue that misuse of data can take place on two levels: through unauthorized access by external individuals and by the advertising organizations themselves. The first scenario is linked to security issues in relation to data storage. The second refers to organizations using the collected data for purposes for which it was not intended: “If the analytics system is used for a purpose that goes beyond its legal mission, privacy may be at risk” (Cavoukian and Jonas 7).

Officially, even when the data is misused in either of these ways, the consumer would still not be at great risk. After all, the collector supposedly anonymizes the collected data. But the main reason behind online privacy concerns is that it is often unclear to what extent and how easily these seemingly anonymous packages of data can be traced back to real individuals (63). Companies involved in online advertising attempt to make it impossible to do so; however: “Over the past few years, computer scientists have repeatedly shown that even anonymized data can often be re-identified and attributed to specific individuals” (Tene and Polonetsky 63).

The (de-)anonymization issue is inherently linked to the fact that data collection is quickly getting more efficient. According to Cavoukian and Jonas, the more advanced data collection techniques are, the more complete the information is and the easier it becomes to identify actual individuals. They use jigsaw puzzles as an analogy for this process: “The most simplistic way to think about this is to consider why the last few pieces of a puzzle are the easiest: there is more ‘data’ in front of you than ever before” (6). So effectively, the increasing sophistication of data collection techniques is working against efforts the same companies make to anonymize data.

2.3.1 Legislative influences

A major factor that is keeping the online privacy discussion alive is that both the national government and the EU have not yet sufficiently tackled online privacy issues. Currently, legislation consists of a framework focusing on transparency and user consent (Sclater 11). On one hand, the advertising industry uses users’ superficial consent to ‘terms and conditions’ as a permit to do anything that falls within the often vague definitions used in such agreements. Tene and Polonetsky consider this unfair to the user, as even advertising professionals often do not fully understand the processes behind data collection, let alone the average consumer. On the other hand, privacy advocates argue that users should always have the option to ‘opt-out’ of online advertising practices. This suggestion is equally unrealistic, as it ignores the fact that online advertising is essentially an exchange: user data versus free content. Consequently, it would be difficult to simply remove one side of the equation and still expect consumers to be able to use websites at no cost (285). The user is stuck between the two sides, and as long as policymakers don’t find a way to create an acceptable balance, the regulatory part of the online privacy discussion will only keep strengthening consumers’ doubts.

As this discussion of the literature shows that consumers are skeptical of data management by businesses and that the risks are similar when it comes to governments, their privacy concerns regarding OBA are expected to affect their concerns regarding data management by universities. What strengthens this expectation is the fact that the online privacy debate has so far largely taken place in commercial contexts, which makes this a factor that is expected to have a high impact.

Hypothesis 2: Privacy concerns regarding Online Behavioral Advertising increase privacy concerns regarding universities’ management of Blackboard data.

Because a high level of trust in the government is expected to reduce privacy concerns over universities’ data management, while privacy concerns over OBA increase students’ skepticism about data management in general, the latter is expected to weaken the former relationship.

Hypothesis 3: Privacy concerns over OBA weaken the relationship between trust in the government and privacy concerns regarding universities’ management of Blackboard data.


3. Methodology

As this thesis consists of a combination of analyzing existing literature and using empirical methods to contribute to this existing knowledge, the methodology will be explained following these steps. After the criteria for the used literature are clarified, the considerations behind the methods used for collection will be described. This is followed by descriptions of the sample and the design of the questionnaire.

3.1 Literature

As a first step, a more extensive literature review was executed in order to build a strong foundation for the analysis of empirical data. In selecting relevant literature for the discussion of online privacy, the decision was made to only use work published after 2005, because statistics show that the part of the population that has access to the internet has barely grown since then, while in the early years of the twenty-first century there was still a strong annual increase (CBS). For this reason, the assumption was made that internet use reached a relatively mature state during this period. This is essential, because it suggests that work from before this period describes a state of the Internet that is no longer fully relevant to today’s society, making it difficult to apply or even irrelevant. This aspect is relevant for both the discussion of online behavioral advertising and of educational uses of data.

The fact that data use for educational purposes has an even shorter history has certain implications for the types of sources from which literature could be used. As learning analytics is still a relatively small but quickly developing field of research, few dedicated journals have emerged so far. Even though, for example, the Journal of Learning Analytics, published by the Society for Learning Analytics Research (SOLAR), helps establish a solid foundation for this discipline, many of the articles take a rather exploratory approach. Of course the nature of existing work influenced the decision to take a similar perspective in this thesis, but it also meant that I could not rely solely on traditional journal articles, but had to consult additional sources. Namely, because learning analytics is in this developing state, a significant part of its knowledge framework can be found in blog posts on the websites of dedicated organizations, as well as in conference proceedings.

For example, the SOLAR network hosts annual conferences at which scientists discuss the latest developments in learning analytics. Because of the short history of the field, such events are considered a major factor in its development. The 2014 edition of the conference on Learning Analytics and Knowledge (LAK) brought together 237 people (SOLAR). Because of this concentration of knowledge, the proceedings of such conferences offer a dense overview of the most recent developments. Similarly, organizations such as JISC and LACE offer platforms that provide blog posts and interviews with experts on learning analytics.

3.2 Quantitative research design

The decision to go for a quantitative approach was primarily made because it offers opportunities to distinguish specific relationships between clearly defined constructs, which can be difficult in qualitative analyses, as those depend on coding. Data for this research was collected through questionnaires in order to be able to collect information from a large group of students.

Students were approached in university buildings with the question whether or not they would like to participate. In case a student wished to participate, he or she was handed a questionnaire. Students of course decide for themselves whether or not they wish to participate, which has a negative influence on the sample composition, but this is a factor that can never be completely ruled out. The questionnaires were handed out physically, which has several advantages. On a practical level, it could increase students’ willingness to participate, as it is more transparent than many online survey designs. By this, I mean that it can be difficult with online questionnaires to see how many questions are left or what types of questions will follow on other pages, while a physically administered survey reveals these characteristics instantly. Secondly, as the sample description shows that this thesis required a diverse group of participants, physically handing out surveys enabled the researcher to balance different types of participants, by for example visiting different faculties. Online questionnaires do not offer nearly the same level of control over this aspect, as the researcher would have to acquire for example e-mail addresses of suited participants, which would have been difficult in this case considering the time frame.

The downside of approaching people is that it inherently results in a process of subjective selection, no matter how rationally researchers create their samples. Also, the presence of the researcher could negatively influence the intended neutral and anonymous nature of the questionnaire (Keller 162). To minimize this effect, the questions and definitions were formulated and explained as clearly as possible, as this reduces the necessity for students to interact with the researcher during their participation.


3.3 Sample

The sample used in this thesis consisted of 155 university-level bachelor students studying at the University of Amsterdam. This seemingly odd number is the result of the two phases in which data was collected. In the initial survey design, the questions regarding specific privacy risks, placed at the end of the questionnaire, were not yet included. The decision to add these was made after 55 questionnaires had already been filled out. Because the goal was to have at least 100 data points for each question, 100 of the revised questionnaires were handed out.

An important part of creating the sample was making sure that there was consistency in the participating students’ understanding of the concepts ‘government’ and ‘university’. This resulted in the decision to only include students from one university, who all have the same nationality. Because non-Dutch students probably do not have a clear or complete view of the Dutch government, only Dutch students were included in the sample.

The sampled group ideally consisted of a representative mix of students in relation to the different faculties, as well as genders. For example, male students in the computer sciences faculty may be more inclined to have privacy concerns in relation to online advertising than female students in the economics faculty, or the other way around. All faculties were included: Economics and Business, Law, Humanities, Medicine, Science, and Social and Behavioural Sciences (University of Amsterdam). It is nearly impossible to draw a sample that perfectly represents all students at the University of Amsterdam, but it is always important to strive for a diverse group of participants. As a result, the aim of this research was to draw a sample that did not deviate too significantly from the actual distribution across these faculties. This result was obtained by visiting each of the main university buildings, of which most are used by specific faculties. At each of these locations, the idea was to find the number of participants that would match the relative size of that faculty. For example, if 40 percent of all university students attended the Economics and Business faculty, 40 percent of questionnaires were randomly handed out at the Roeterseiland-building, which primarily belongs to that faculty. Of course not all students at a certain location belonged to the same faculty, which led to a certain deviation from the actual distribution. These deviations are shown in table 1 of the ‘results’-section.


3.4 Questionnaire design

The first part of the questionnaire consisted of short demographic questions. More specifically, as a result of the considerations described above, students were asked about their gender and the faculty they are studying at. This decision was made for two reasons. Firstly, it helps respondents get started with the questionnaire in a comfortable way due to the simple, straightforward nature of these questions (Keller 163). More importantly, it allows the researcher to describe the sample characteristics, to make sure the relationships between the main variables are not distorted by external factors. Because the main goal of this study is not to investigate relationships that include gender or faculties, these variables are not included in the hypotheses but are added as control variables. Because students are not equally distributed across the different faculties, it is likely that some faculties are overrepresented and others underrepresented. In order to be able to do statistical testing, it is often required that there are at least 30 observations in each group, because this means that they can be treated as approximately normally distributed (Saunders 219). This makes it unlikely that all faculties can be used for statistical testing, but the information is still useful for describing the sample. Gender, on the other hand, can be used as an actual quantitative variable.

3.4.1 Trust in government

The first main variable that was measured is the students’ trust in the Dutch government. To measure students’ general level of trust in the government, a previously validated set of four items was used, which measure attitudes on a seven-point Likert scale ranging from ‘fully agree’ to ‘fully disagree’ (Teo, Srivastava and Jiang 2008). In order to be able to analyze the results, these items were combined into one measure for trust in the government.

1. I feel that government acts in citizen’s best interest

2. I feel fine interacting with the government since government generally fulfils its duties efficiently

3. I always feel confident that I can rely on the government to do its part when I interact with it
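To illustrate how such Likert items could be combined into a single composite measure, a minimal sketch is given below. It simply sums the four 7-point items per respondent; the column names and the example responses are hypothetical and do not come from the actual survey data.

```python
# Minimal sketch (not the original analysis): combining four 7-point Likert items
# into one composite "trust in government" score per respondent.
import pandas as pd

responses = pd.DataFrame({
    "trust_item_1": [5, 3, 6],   # 1 = fully disagree ... 7 = fully agree
    "trust_item_2": [4, 2, 7],
    "trust_item_3": [5, 3, 6],
    "trust_item_4": [6, 4, 5],
})

# Sum the four items, giving a composite score in the 4-28 range used in the results tables.
item_columns = ["trust_item_1", "trust_item_2", "trust_item_3", "trust_item_4"]
responses["government_trust"] = responses[item_columns].sum(axis=1)

print(responses["government_trust"])  # e.g. 20, 12, 24
```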


3.4.2 Privacy concerns regarding online behavioral advertising

In order to measure students’ privacy concerns regarding online behavioral advertising, a previously validated survey developed by Dinev and Hart was used. They developed a general measurement tool for online privacy concerns.

This part of the survey consisted of four items, which use a five-point Likert scale. Each item is a statement and the participants have to indicate to what extent they are concerned about a certain aspect of privacy, ranging from “Not at all concerned” (1) to “Very concerned” (5). In order to specify that these questions have to be answered in relation to OBA, the items were introduced using the following text, as suggested by Kim:

"Below are some statements that might describe your reaction to Online Behavioral Advertising in general. How much do you agree or disagree with each statement? Please mark your responses by checking one number for each statement. There are no right or wrong answers. When I see Online Behavioral Advertising:”

1. I am concerned that the information I submit on the Internet could be misused.

2. I am concerned that a person can find private information about me on the Internet.

3. I am concerned about submitting information on the Internet, because of what others might do with it.

4. I am concerned about submitting information on the Internet, because it could be used in a way I did not foresee.

3.4.3 Privacy concerns regarding the University of Amsterdam’s Blackboard system

In order to measure students’ privacy concerns regarding the university’s Blackboard system, the same pre-validated scale developed by Dinev and Hart was used as the one for measuring privacy concerns regarding OBA. This decision was made because literature suggests that the concerns consumers have about both types of data storage and analysis are similar (Slade and Prinsloo 1523; Tene and Polonetsky 63). To clarify that these questions are only about students’ experiences with Blackboard, an adaptation is made in relation to the previous questions: “the internet” is replaced by “Blackboard”. The items were also introduced using a different text, namely:


"Below are some statements that might describe your reaction to the Blackboard system used by the University of Amsterdam. How much do you agree or disagree with each statement? Please mark your responses by checking one number for each statement. There are no right or wrong answers. When I use the university’s Blackboard system:”

1. I am concerned that the information I submit on Blackboard could be misused.

2. I am concerned that a person can find private information about me on Blackboard.

3. I am concerned about submitting information on Blackboard, because of what others might do with it.

4. I am concerned about submitting information on Blackboard, because it could be used in a way I did not foresee.

3.4.4 Specific Blackboard privacy risks

The additional specific questions described earlier were selected based on suggestions found in the literature regarding concrete privacy risks. The point of adding these questions was to measure to what degree students attach concrete risks to their general concerns. These questions used a five-point Likert scale.

“Below are some statements that might describe your reaction to the Blackboard system used by the University of Amsterdam. When I use Blackboard:”

1. I am concerned that my location data can be misused.

2. I am concerned that data regarding my study progress can be misused.

3. I am concerned that my internet activity outside Blackboard is being tracked.

4. Results

Before the findings are analyzed in relation to the described literature, this chapter will give an overview of the most important statistical results. The first part consists of a description of the sample, meaning that the actual sample is compared to the ideal sample as proposed in the methodology. These sample characteristics, specifically gender and faculty, will then be used to test whether any of the effects in relation to the main variables can be explained by looking at control variables. Secondly, the newly created combined measures will be tested for reliability using Cronbach’s alpha. In the final part, the results of correlations and regressions are discussed.


4.1 Sample characteristics

Initially, 55 questionnaires were handed out, and another 100 with the specific questions were added. In the end, 155 students completed the questionnaire, of which 152 were used in the final dataset; three questionnaires were excluded because they were incomplete. In the described ideal sample, this group of participants would consist of an almost equal number of males and females. The actual sample proves to be fairly close to this goal, as 71 participants are male and 81 are female. That a slight majority of the participants is female (53.3%) corresponds with statistics regarding the male/female ratio of all students attending the University of Amsterdam, namely 56% female and 44% male (University of Amsterdam).

Table 1: Sample distribution across faculties compared to the entire university population in percentages

Faculty                            University of Amsterdam (UvA)   Sample
Law                                11.10                           5.30
Humanities                         21.10                           40.70
Medicine                           7.70                            0.70
Science                            17.40                           22.40
Business & Economics               15.90                           13.20
Social and Behavioural Sciences    26.50                           17.80

Note: (N = 152)

When it comes to the faculties that the students are attending, the distribution is slightly different from the ideal sample composition (Table 1). Creating a perfectly representative sample was already considered impractical beforehand. However, this is not the main point of showing these statistics, as this aspect was primarily included as a check to make sure that at least a fair number of students from different faculties were included.


4.2 Gender differences

Because Likert scales were used, the actual combined scores for each individual measure are not particularly meaningful, but the differences between groups do give an indication of aspects that may be interesting to look at in the future. When it comes to students’ concerns over privacy issues in relation to both online behavioral advertising and universities’ online learning environments, the results showed that there were almost no differences between the sexes (Table 2). On the other hand, looking at students’ level of trust in the government, the results did show a significant difference (p = .028): female students (M = 15.68) are substantially less likely to trust the government than their male counterparts (M = 17.15). The fact that such gender differences were almost non-existent in the other variables makes this an even more interesting observation.

Table 2: Gender scores and differences

                                    Average Male   Average Female   Average Total   P-value for difference between sexes
Government trust (4-28)             17.15          15.68            16.37           .028*
Concerns OBA (4-20)                 13.69          13.86            13.78           .757
Concerns Blackboard (4-20)          7.37           7.23             7.30            .798
Concerns Internet Activity (1-5)    2.51           2.33             2.42            .475
Concerns Study Progress (1-5)       2.06           1.80             1.94            .176
Concerns Location (1-5)             2.12           1.98             2.05            .547

Note: (N-male = 71, N-female = 81), *Significant at the .05 level (2-tailed).
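The gender comparison described above amounts to an independent-samples t-test on the composite scores. A minimal sketch of how such a test could be run is given below; the arrays contain hypothetical scores, not the actual survey data.

```python
# Minimal sketch, not the original analysis: comparing male and female
# composite trust scores with an independent-samples t-test.
import numpy as np
from scipy import stats

# Hypothetical composite "trust in government" scores (range 4-28).
trust_male = np.array([18, 16, 20, 17, 15, 19])
trust_female = np.array([14, 16, 15, 17, 13, 16])

t_stat, p_value = stats.ttest_ind(trust_male, trust_female)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value below .05 would mirror the significant gender difference reported in Table 2.
```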

4.3 Faculty differences

In general, samples need to consist of at least 30 data points in order to be treated as approximately normally distributed, which is a requirement for many statistical analyses (Keller 264). As a result, it is difficult to include all faculties in this part of the analytical procedure. Consequently, both the Law and the Medicine faculty were excluded from this section.

Important to note in relation to the averages shown in Table 3 is that the variable “Trust in Government” has a reverse scale compared to the other variables. This means that a high score on “Concerns Blackboard” is comparable to a low score on “Trust in Government” in terms of perceived trustworthiness. Using an ANOVA test, analysis of the variables that remain after removing “Law” and “Medicine” shows that there are noticeable differences between the faculties, but that these are not statistically significant (Table 3). This goes for both concerns regarding online behavioral advertising and students’ level of trust in the government. At first, this appears to be a somewhat surprising result, but a possible explanation can be found in the number of students in each category. Because the total sample of 152 students is distributed across six categories, the number of students in each category is so small that statistically significant differences between them are unlikely to occur. A larger sample size may change these outcomes, especially the ones that are close to being significant (Keller 354).

Table 3: Differences and averages per faculty

                                    Economics & Business   Humanities   Science   Social & Behavioural Sciences   P-value for differences
Government trust (4-28)             16.05                  15.89        17.50     17.50                           .054
Concerns OBA (4-20)                 13.00                  14.45        12.74     13.85                           .195
Concerns Blackboard (4-20)          6.75                   7.77         6.91      7.19                            .602
Concerns Internet Activity (1-5)    2.50                   2.86         2.07      2.16                            .099
Concerns Study Progress (1-5)       1.88                   2.19         1.77      1.79                            .345
Concerns Location (1-5)             2.13                   2.14         2.03      2.00                            .875

Note: (N = 152)
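The faculty comparison uses a one-way ANOVA across the remaining faculty groups. The sketch below shows how such a test could be run; the group scores are hypothetical placeholders, not the collected data.

```python
# Minimal sketch, not the original analysis: one-way ANOVA across faculties.
from scipy import stats

# Hypothetical composite OBA-concern scores (range 4-20) per faculty group.
economics  = [13, 12, 14, 13, 11]
humanities = [15, 14, 16, 14, 13]
science    = [12, 13, 11, 14, 12]
social     = [14, 13, 15, 12, 14]

f_stat, p_value = stats.f_oneway(economics, humanities, science, social)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# With small groups, even sizeable mean differences may not reach significance,
# which parallels the pattern reported in Table 3.
```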


4.4 Reliability

For each of the main variables, multiple items were combined into a single measure. To make sure that each of these new scales was reliable, a reliability test was performed using Cronbach’s Alpha. The first measure that was created from the raw data is “Trust in Government” (Table 4). Results show that the total reliability score is .85, indicating that the created measure is reliable, as a value above .70 is usually considered acceptable (Keller 396). In case removing an item would result in a major improvement of the total measure’s Cronbach’s Alpha, the item would be excluded from the combined measure (Keller 397). This is not the case, however, as Cronbach’s Alpha would not improve significantly if any of the items were removed. Secondly, four items were combined into the variable “Concerns Online Behavioral Advertising”. Results in relation to this measure also show that the total Cronbach’s Alpha is high, meaning that the measure is reliable, and that it cannot be significantly improved by removing one of the items (Table 4). The final combined measure is “Concerns Blackboard”. For this variable, the same holds as for the previous two.

Table 4: Reliability of combined measures

                         Cronbach's Alpha   Standard deviation   Standard error
Government trust         0.850              4.140                0.340
  If item 1 is removed   0.823
  If item 2 is removed   0.787
  If item 3 is removed   0.826
  If item 4 is removed   0.802
Concerns OBA             0.850              3.450                0.280
  If item 1 is removed   0.823
  If item 2 is removed   0.787
  If item 3 is removed   0.826
  If item 4 is removed   0.802
Concerns Blackboard      0.850              3.140                0.260
  If item 1 is removed   0.823
  If item 2 is removed   0.787
  If item 3 is removed   0.826
  If item 4 is removed   0.802

Note: (N = 152).
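Cronbach’s alpha can be computed directly from the item scores with the standard formula; the sketch below implements it on hypothetical item data and is not the original analysis.

```python
# Minimal sketch, not the original analysis: Cronbach's alpha for a set of
# Likert items, using the standard formula
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to the four trust items (7-point scale).
scores = np.array([
    [5, 4, 5, 6],
    [3, 2, 3, 4],
    [6, 7, 6, 5],
    [4, 4, 5, 5],
    [2, 3, 2, 3],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # values above .70 are usually considered acceptable
```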


4.5 Correlations

As a first step towards understanding the relationships between the main variables, correlations were checked (Table 5). Trust in the government has a weak positive correlation with concerns regarding Blackboard (r = .166, p = .041). The strongest correlation is found between concerns regarding online behavioral advertising and concerns regarding Blackboard (r = .562), which is even significant at the .01 level.

Table 5: Correlations between the main variables

                           Trust in Government   Concerns OBA    Concerns Blackboard
Government trust           -                     -               -
Concerns OBA               -                     -               -
Concerns Blackboard (DV)   .166 (.041*)          .562 (.000**)   -

Note: N = 102, *Significant at the .05 level (2-tailed), **Significant at the .01 level (2-tailed).

Secondly, it is interesting to see to what extent respondents’ answers to the questions in relation to specific Blackboard risks correlate with their general level of concern towards Blackboard (Table 6). All three of the specific questions show a strong positive relationship with the general measure for privacy concerns regarding Blackboard. How these results can be interpreted will be expanded on in the discussion section.

Table 6: Correlations between "Concerns Blackboard” and specific measures

                     Correlation with Concerns Blackboard   P-value
Internet Activity    0.523**                                .00
Study Progress       0.546**                                .00
Location             0.507**                                .00
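For readers who want to reproduce this type of analysis, the pairwise correlations reported in Tables 5 and 6 could be computed roughly as sketched below; the column names are hypothetical stand-ins for the combined measures and specific risk items described above, and the snippet is not the original analysis script.

# Minimal sketch: Pearson correlations between the combined measures and the
# specific Blackboard risk items. Column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_scores.csv")            # hypothetical file with scale scores
pairs = [("government_trust", "concerns_blackboard"),
         ("concerns_oba", "concerns_blackboard"),
         ("risk_internet_activity", "concerns_blackboard")]
for x, y in pairs:
    r, p = stats.pearsonr(df[x], df[y])          # correlation coefficient and p-value
    print(f"{x} vs {y}: r = {r:.3f}, p = {p:.3f}")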


4.6 Regressions

After controlling for differences explained by gender and faculty, the main variables were examined using a hierarchical regression approach. Hypothesis 1 stated that a student with a higher level of trust in the government would show a lower level of concern regarding Blackboard. First of all, the adjusted R-squared shows that the model including both trust in the government and OBA concerns (Model 1) explains 31.6% of the variation in students’ privacy concerns regarding Blackboard (Keller 244). However, the predictive power of the model appears to depend entirely on students’ concerns regarding OBA. This conclusion is based on the p-values as well as the Betas (Tables 7 & 8): Government trust’s Beta is relatively low (β=-.100) and not statistically significant (p=.143>.05). As a result, hypothesis 1 is rejected.

Table 7: Regression including the main variables and moderation effects

                             Model 1                          Model 2
Concerns Blackboard (DV)     Coefficient   SE      Beta       Coefficient   SE      Beta
Constant                     1.636***      1.284             1.569***      1.282
Government trust (IV)        -.076         .051    -.100     -.071         .051    -.093
Concerns OBA (IV)            .501          .062    .550      .497*         .062    .546
Moderation effect                                            -.261         .194    -.091
r²                           .316                            .320


Table 8: F-scores and p-values for the regression models

                             Model 1                  Model 2
                             F-score      P-value     F-score      P-value
Complete model               35.917       .000**      24.68        .000**
Government trust (IV)                     .143                     .170
Concerns OBA (IV)                         .000**                   .000**
Moderation effect                                                  .80

Note: N = 152, *Significant at the .05 level, **Significant at the .01 level.

“Concerns OBA”, however, does prove to be a strong factor in explaining students’ concerns regarding Blackboard. In hypothesis 2, students’ attitudes towards online behavioral advertising were expected to positively affect their level of concern over Blackboard. The issue of causality will be further addressed in the discussion section, but the results show that a high level of concern over OBA is strongly and positively linked to a high level of concern over Blackboard. The relationship is highly significant (p=.000<.001), and an increase of one standard deviation in a student’s OBA concern score corresponds to an increase of .55 standard deviations in his Blackboard concern score (β=.550) (Keller 276). This means that hypothesis 2 is confirmed.

Hypothesis 3 concerned the predicted moderation effect of OBA concerns on the relationship between students’ trust in the government and their level of concern over Blackboard. As trust in the government is shown not to have any significant relationship with Blackboard concerns, however, no moderation of that relationship can be observed either. This is confirmed by the statistics: the p-value of Government trust (p=.143>.05) actually increases (p=.170>.05) with the addition of the moderating variable. The moderator itself is also far from significant (p=.80>.05), meaning that hypothesis 3 is rejected.
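To make the analysis steps concrete, the sketch below shows one way such a hierarchical regression with a moderation term could be set up in Python with statsmodels. The column names are hypothetical, and mean-centering the predictors before forming the interaction term is a common, but not the only, approach; this is not the original analysis script.

# Minimal sketch: hierarchical regression with a moderation (interaction) term.
# Column names are hypothetical placeholders for the combined measures.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_scores.csv")                                     # hypothetical file
df["trust_c"] = df["government_trust"] - df["government_trust"].mean()   # mean-center predictors
df["oba_c"] = df["concerns_oba"] - df["concerns_oba"].mean()

# Model 1: main effects only
m1 = smf.ols("concerns_blackboard ~ trust_c + oba_c", data=df).fit()
# Model 2: add the interaction term to test for moderation
m2 = smf.ols("concerns_blackboard ~ trust_c + oba_c + trust_c:oba_c", data=df).fit()

print(m1.rsquared_adj, m2.rsquared_adj)          # change in explained variance across models
print(m2.pvalues["trust_c:oba_c"])               # significance of the moderation effect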


5. Discussion

In this section, the statistical results will be discussed on a theoretical level, both by relating them to the literature described earlier and by presenting additional considerations. Furthermore, the limitations of the current study will be described, together with related suggestions for future research.

5.1 Trust in government and concerns regarding online learning environments

The main finding in relation to the first hypothesis is that students’ level of trust in the government is not associated with their level of privacy concerns regarding public universities’ online learning environments. There are several ways to explain this unexpected result.

An aspect that may be relevant is that the literature discussed earlier described both the actual relationship between the government and public universities and the degree to which this relationship is visible. It was expected that the relative invisibility of data sharing practices would make students even more skeptical. In reality, however, it is possible that this invisibility actually prevented students from being concerned about governmental access to their data at all.

This suspicion arises from the fact that the literature suggests that public universities and the government are to some degree connected, while this appears not to be recognized by students in practice (Van Gils and Linthorst; Van Mierlo). For example, earlier it was mentioned that Teo, Srivastava and Jiang argue that users base their attitudes towards online platforms on their attitude towards the underlying party (104). In their own research, the authors did find that “Trust in the government is positively related to trust in e-government web sites” (Teo, Srivastava and Jiang 100). Even though online learning environments are technically not websites, the absence of a comparable effect in the present study suggests that students do not see them as systems that have ties to the government. Because previous work explicitly names governments as a major stakeholder in universities, this seems contradictory: students appear to misinterpret the relationship between the government and public universities. The idea that students are not fully aware of the control and ownership side of public universities does not stand alone, however. It is strengthened by literature suggesting that students’ awareness is also relatively low when it comes to a specific aspect of universities’ functioning, namely their use of data collection and analysis activities (Slade and Prinsloo 1524).


Where does this unawareness come from? The apparently low level of awareness among students of how public universities actually handle data ownership and access has two possible causes: either students are unaware due to a lack of clarity from the university’s side, or they are simply uninterested.

5.1.1 Unawareness of the relationship between public universities and the government due to a lack of transparency

As announced above, the first possible explanation for students’ unawareness of this connection is to ‘blame’ the university. Sclater discusses various questions related to the visibility of data collection, storage and analysis in educational settings. He describes a wide range of data types for which these questions are relevant: “personal information provided by the student, the student’s study record, sensitive information such as ethnic origin and disability, details of contacts between the student and the University, interactive content generated by the student, system-generated data such as accesses to the Virtual Learning Environment (VLE), data derived from other data and data generated internally e.g. student use of a library subscription service” (15). Each of these data types has its own characteristics and its own transparency issues, but the most basic distinction Sclater makes is between user-provided data and system-generated data. For both types it can be unclear for which purposes the data will be used, but for the latter type the user is often not even aware of what information is stored about him, because he does not enter it himself.

Like regular websites, universities usually have a data privacy policy. However, it is often either hard to find or too complex and extensive to read (Sclater 15). In this case, ownership and control aspects are especially relevant to the user, as these often leave him without a clear view of who has access to the collected data (16). For this reason, students also do not know to what extent other government bodies have access to their educational data. Because of the relatively short history of advanced data collection, analysis and storage by universities, students may not yet be aware of the scale and sophistication of these processes. While data collection techniques appear to be relatively well known, ownership and access issues seem to be largely unexplored by the average user.

This belief is strengthened by the findings in relation to the questions that referred to three specific Blackboard privacy issues. The data showed that students strongly associate the named risks (location data, study progress and external internet activity) with their general level of concern regarding Blackboard. This indicates that they had a relatively clear idea of the data collection ‘risks’ that would exist on Blackboard, even before the actual suggestions were made. The empirical findings in relation to hypothesis 1 tell us that students do not recognize the relationship between universities’ data policies and the government, while theory suggested that this relationship exists, or at least has plenty of organizational and legal room to exist. Consequently, the fact that students do relate Blackboard to data collection concerns but do not see the link with the government suggests that ownership and access issues are far less well understood. Paragraph 5.2 will offer an explanation for this difference. Looking at the theory, however, the conclusion that students are aware of the fact that they are being tracked, but not of who accesses or owns these data, is alarming. In the first place, this is supported by the result that students do recognize specific Blackboard-related risks, expressed in the high correlation between the specific questions, which relate to the collection of specific types of data, and students’ general concern over Blackboard. However, if they were as aware of access issues as they are of collection risks, they would also recognize the relationship with the government, as the literature suggests government access forms a major risk in relation to university data (Harbers; Cavoukian and Jones 5). This is not the case, however, as students prove not to relate their level of trust in the government to their concern over their Blackboard data. If we do not know whether our data is stored, accessed and owned by reliable people or institutions, what is the value of knowing what data is being collected? This argument will be further explored in relation to online behavioral advertising, which offers an explanation related to visibility.

5.1.2 Unawareness due to students’ low degree of political involvement

A different, but perhaps the simplest, explanation would be to point at students’ lack of political involvement. Verhoeven describes how several respected writers in the field of political science argue that Dutch citizens nowadays often do not take the time to actively participate in politics. Rather, we live in a “democracy of spectatorship” instead of one of involvement. We are said to distance ourselves from political processes in such a way that we base our opinions on the actions of others (21). Due to prolonged periods of peace and wealth, we have become less watchful for developments that may turn out to be damaging in the long term (43). These broad trends may sound too general to be applied to such a specific context, but developments at the university show that low involvement of students is a concrete issue. For example, the turnout at the most recent elections for the University of Amsterdam’s student council was only 21% (Folia). This relatively passive attitude towards the policy and functioning of the university may cause students simply not to be aware of the organizational and informational links between public universities and the government. Even though nearly all students will know that public universities in the Netherlands are almost entirely funded by the government, they may not be as aware of the way in which the government could benefit from the university’s data.

Namely, if a student were to realize that the separation between the university’s data and other governmental organizations is relatively thin, he would at least partially base his level of trust in the university’s systems on his level of trust in the government. Firstly, the literature showed that the legal barrier for the government to use citizens’ data strongly depends on subjective definitions like ‘justified use’ (CPBweb). This leaves open the question of the extent to which universities are a government organization. However, researchers have stated that due to external developments, public universities increasingly depend on the government when it comes to strategic planning activities, suggesting an increasing degree of influence of the Dutch government over public universities. Despite the fact that universities are still autonomous in most of their activities, these developments would be expected to cause students to at least partially link their trust or mistrust in the government to their trust in universities’ data policies.

A completely different way of approaching this result is to suggest that students might actually be very much aware of developments in the relationship between universities and the government, and that this relationship is different from how most authors describe it. For this perspective, we can look at the work of Kezar, who suggests that the government and public universities might not be as closely related anymore as they have been in the past. She describes the traditional function of universities as: “educating citizens for democratic engagement, supporting local and regional communities, preserving knowledge and making it available to the community” (430). In fulfilling these functions, universities worked closely with the government as an integrated system, in order to contribute to the democratic society. However, Kezar argues that higher education gradually moved away from being a social, governmental institution and developed a more market-oriented, industrial set of goals and values (430). This development can be related to the “commodification of education” concept described earlier (Slade and Prinsloo 1512).

So far, it has been proposed that students either are not sufficiently interested in the functioning of universities and their data policies, are in fact very much interested, or are kept unaware by universities’ lack of transparency. Most likely, however, not one but a combination of these explanations accounts for the results found. For example, it is possible that one part of the student population is highly interested in politics and data
