
MA thesis by Joris Heijkant (s2203170)
Faculty of Arts, Department of Media Studies and Journalism
Rijksuniversiteit Groningen, March 2017

Supervisor: prof. dr. M.J. Broersma
Second reader: dr. A. Heinrich


Contents:

Introduction
Theoretical framework
Chapter 1.1: Information reduction before the age of the internet
Chapter 1.2: News audiences, gatekeepers and media bias
Chapter 1.3: The challenges of news personalization
Chapter 1.4: Solutions for the filter bubble
Chapter 1.5: New news economics
Chapter 1.6: A new balance and the future
Methodology
Chapter 2.2: Sample
Chapter 2.3: Taxonomy
Chapter 2.4: Qualitative research interviews
Chapter 2.5: User survey
Results
Chapter 3.1: Taxonomy
Chapter 3.2: Qualitative research interviews
Chapter 3.3: Audience research
Conclusion
Limitations and further research
Acknowledgements
References
List of figures
Appendices
Appendix 1: Qualitative research interviews, editors – interview protocol
Appendix 2: User survey question sheet
Appendix 3: NOBO quota
Appendix 4: Transcript – introductory interview editors
Appendix 5: Transcript – interview Dennis Van Luling, AD
Appendix 6: Transcript – interview Laurens Verhagen, VK


Introduction

Personalization is a “hot topic” in the news world. Much of the information the public consumes nowadays is tailored to individual users, and news websites are actively vying for their piece of the pie.

Although personalized news has yet to reach maturity, the algorithms behind successful internet companies such as Google and Facebook might become the new distribution standard in online journalism.

News personalization is criticized for leading to ‘filter bubbles’, polarization and ‘infobesity’ (Pariser, 2011: 9, 73, 113, Sunstein, 2001: 67). Some news media acknowledge that filter bubbles played a role in the election of Donald Trump (El-Barmawy, 2016, Bear, 2016) and the Brexit vote (Bell, 2016). Bill Gates recently spoke out against filter bubbles (Delaney, 2017), and Mark Zuckerberg named them one of the “two most discussed concerns” at Facebook last year (Zuckerberg, 2017). The alleged problems with news personalization make it worthwhile to investigate how various actors within the news industry act and react to the issue.

Moreover, news personalization tampers with basic journalistic principles such as gatekeeping and the view of the audience. Traditional ideas regarding the journalist as a gatekeeper to the public’s information (Shoemaker & Vos, 2009: 1) are under pressure as algorithmic filtering gains ground. Audience responsiveness is becoming more important, if not the new norm (Anderson, 2011a: 529).

Although there has been research on news personalization before, interest in the subject mostly came from academic fields outside of journalism studies. Until Neil Thurman published his categorization of personalization features (Thurman, 2011), research on personalization was often conducted from psychological (Oulasvirta & Blom, 2007, Lavie et al., 2009) or technological (Kalyanaraman & Sundar, 2006) angles. It is important to shed light on the topic from a journalism studies angle as well.

In this thesis, I will examine the state of news personalization in the Netherlands, looking at the online platforms of both De Volkskrant and Algemeen Dagblad. Through a mapping of their personalization features, interviews with their chief online editors and a user study, I hope to answer the following research question:

RQ: What is the state of news personalization in the Dutch news landscape, looking at both De Volkskrant and Algemeen Dagblad?

This research question is divided into three sub-questions:

Q1: How do De Volkskrant and Algemeen Dagblad currently use news personalization on their websites?

Q2: How do the chief online editors of De Volkskrant and Algemeen Dagblad approach news personalization?

Q3: What are users’ needs and wants regarding news personalization on the online platforms of De Volkskrant and Algemeen Dagblad?

With this research, I aim to capture both the current state of news personalization in the Netherlands and a possible future.

This thesis consists of four chapters. In the theoretical framework, I will argue that the rise of personalized news cannot be explained through new technology alone, but lies at the intersection of the emergence of network technologies, human psychology, new news economics and a shift in journalistic culture. Moreover, the phenomenon and the criticism surrounding it can be explained using historical counterparts. In the methodology chapter, I will explain the mixed-methods approach used, dissecting the different parts of my research. The results chapter presents and analyzes the research results. The insights are combined in the conclusion and placed in the perspective of earlier research on journalism and technology.


Chapter 1: Theoretical framework

Since the advent of the internet and digital technologies, the amount of data produced by humans has increased sharply. In 2011, IBM calculated that 90 per cent of the data in the world at that point had been created in the previous two years (IBM, 2011). Eric Schmidt, former CEO of Google, used the catchphrase that “if you record all human communication from the dawn of time up to 2003, it would take up 5 billion gigabytes of storage space. Now we’re creating that much data every two days” (Schmidt, in Pariser, 2011: 11). The rise of the internet and the enormous waves of data that followed have strongly affected the media world. In the following paragraphs I will explain why information abundance is intrinsic to the technology behind the internet, and how information abundance and audience tracking led to an inevitable outcome of the ‘race for relevance’ for news websites: personalization features.

From the very early days of the internet, its advocates have emphasized its open nature. In his famous Declaration of the Independence of Cyberspace, John Perry Barlow defines computer-linked networks as “a new social space, global and antisovereign, within which anybody, anywhere can express to the rest of humanity whatever he or she believes without fear” (Barlow, 1996 in Keane, 1999: 10). Kari Karppinen implies that advocates of the open internet such as Barlow adhere to the traditional thought that information scarcity leads to a concentration of media power (Karppinen, 2013: 103). When information is scarcer, distribution channels hold great power because alternatives are limited. The early internet advocates saw their technology as debunking these limitations on distributed information, and thus as diminishing the power of traditional distribution channels. Or as John Keane puts it, “abundance is the ideology of computer-linked electronic communications networks” (Keane, 1999: 9). The open nature of the internet did indeed breathe life into a whole array of new media. To use the terminology of John Ellis, we moved from an “age of scarcity” towards an “age of availability” and finally towards an “age of plenty” (Ellis, 2000, in Karppinen, 2013: 103). Combined with the advent of cable TV and the omnipresence of mobile media, we now live in an era of “supersaturation” (Gitlin, 2001: 12). The abundance of information makes it increasingly hard for people to find relevant information online (Bozdag, 2013: 209). The vast amounts of information produced every day only grow bigger, while only a fraction is relevant to a given person. This works against the logic of pre-internet mass media, where the media themselves act as gatekeepers and decide what information appears in the limited media channels (Shoemaker & Vos, 2009: 1). In the media landscape, the internet led to a news market that is driven by users’ search and demand rather than by editorial power (Karppinen, 2013: 104).

The internet became user-driven because in a world where information is almost infinite, one word becomes key: ‘relevance’. It became important for every information intermediary, be it a newspaper or a search engine, to be able to pick the most relevant news articles for a given user. As Pariser states it: “users, not sites, were now the focus” (Pariser, 2011: 50). This still fits in with the view of the early internet evangelists. They saw the internet as an “active” or “pull technology”, where a given user puts in the web address and gets the content from the page, as opposed to the “push technology” of television or newspapers, where the media themselves “push” their messages towards the user (Pariser, 2011: 67). But in a world with too much information, this user engagement is simply too much to ask. In order to provide their users with the most relevant news on the web, news organizations and information intermediaries use ‘agents’ to pick specific news items for their users. These agents can be either human or algorithmic. Since human agents are expensive, most websites use algorithmic ones (Pariser, 2011: 51), although one could remark that modern news media tend to have at least a few ‘hand-picked’ newsletters because of their high conversion rate (Owen, 2016). Human or algorithmic, these agents build, manage and represent information customized for individual users (Bozdag, 2013: 211) and in this way make the content of a website more relevant. The better these technologies become, “the amount of energy we’ll have to devote to choosing what we see will continue to decrease” (Pariser, 2011: 69).
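To make the working of such an algorithmic ‘agent’ concrete, the minimal sketch below builds a profile from a user’s reading history and keeps only the candidate articles it scores as most ‘relevant’. All names and the simple word-overlap scoring rule are illustrative assumptions, not a description of any actual news site’s system.

```python
# A minimal sketch of an algorithmic 'agent': it scores candidate
# articles against a profile built from reading history and discards
# the rest. Purely illustrative; no real system is mirrored here.
from collections import Counter

def build_profile(read_articles):
    """Aggregate word frequencies over previously read articles."""
    profile = Counter()
    for text in read_articles:
        profile.update(text.lower().split())
    return profile

def relevance(article, profile):
    """Score an article by vocabulary overlap with the reading history."""
    words = article.lower().split()
    if not words:
        return 0.0
    return sum(profile[word] for word in words) / len(words)

def personalize(candidates, profile, k=5):
    """Return the k highest-scoring candidates; everything else is,
    in Pariser's terms, filtered out of view."""
    return sorted(candidates, key=lambda a: relevance(a, profile), reverse=True)[:k]
```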

In order to be able to tailor news content to a certain user, a news website needs information about this user. In the modern era, tracking internet users in order to create online profiles of them is common practice. These practices are often gathered under the term ‘big data’. Tracking and storing data has been embraced in various areas of social and economic life and there is “growing evidence that something important is changing in the nature of data” (Lewis & Westlund, 2014: 447, emphasis in original). However, ‘big data’ is more than just collecting large amounts of user information. Following Boyd & Crawford, big data “is about a capacity to search, aggregate and cross-reference large data sets” (Boyd & Crawford, 2012: 663). They see big data at the crossroads of technology, analysis and mythology, the latter referring to “the belief that large data sets offer a higher form of intelligence and knowledge that were previously impossible” (Boyd and Crawford, 2012: 663). For news websites, this higher form of knowledge partly refers to the fact that these algorithms can map out their users (both individually and grouped together) in a way they couldn’t when the newspaper was print-only. As Pariser argues, “the key to relevance, the solution to sorting through the mass of data on the web was … more data” (Pariser, 2011: 32). In order to fully understand its readers, a news website has to know what its users click on and how long they stay on a given article’s page. Big data, understood as the collection and processing of user behavior, is thus one of the contributing factors of news personalization.
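As an illustration of the behavioral tracking described above, the sketch below logs which articles a user opens and how long they dwell on each page; such events could then feed a profile like the one in the previous sketch. The event schema and class name are hypothetical.

```python
# Illustrative sketch of click and dwell-time tracking; the schema is
# an assumption, not any publisher's actual analytics pipeline.
import time

class BehaviorTracker:
    def __init__(self):
        self.events = []   # in practice: a database or analytics pipeline
        self._opened = {}  # (user_id, article_id) -> timestamp of page open

    def page_open(self, user_id, article_id):
        self._opened[(user_id, article_id)] = time.time()

    def page_close(self, user_id, article_id):
        started = self._opened.pop((user_id, article_id), None)
        if started is not None:
            self.events.append({
                "user": user_id,
                "article": article_id,
                "dwell_seconds": time.time() - started,
            })
```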

In this thesis I will focus on the personalization techniques that news organizations use on their online platforms. However, news personalization did not start out on news websites. Although Nicholas Negroponte wrote his visionary words on “The Daily Me” in 1995 (Negroponte, 1995: 153), it was not until deep into the 21st century that news outlets fully embraced personalization techniques. The news market was still relatively uncluttered in those days, and because online advertising still paid off, there was less economic pressure to innovate (Turow, 2011: 111-112). In order to understand the development of personalization at news websites, we have to focus on web shops, social media and search engines. The first major internet company to fully embrace personalization in its business model was Amazon (Pariser, 2011: 25). This company built an online bookstore that had personalization built in from the start, to “enhance the process of discovery” (Pariser, 2011: 26). As Cass Sunstein notes, personalized filtering is an extension of the free-market ideal of consumer sovereignty (Sunstein, 2007: 38). It might seem contradictory to call the limitations that filters impose on consumer options a free-market ideal. But in the process of tailoring news consumption to the user’s preferences and interests, the notion of consumer sovereignty is very much at work, as long as “we act as if the purpose of a system of communication is to ensure that people see exactly as they want” (Sunstein, 2007: 40). Despite the success of filtering technologies in web stores, user data input there could be fairly limited. Paradoxically, true sophistication of web filtering occurs when there is more data to be filtered, while more data requires more sophisticated filters (Pariser, 2011: 38). So it wasn’t until social networks and search engines stepped into the game that personalization really became widespread. Their intermediary functions allowed them to gather large and useful sets of user data, whilst their business models, based on relevance, pushed personalization algorithms forward.

Besides these companies there is a third kind of player in the online data world: the data broker. For these companies, gathering data about consumers is not only part of their business model; it is their business model. Data brokers gather users’ data on the internet and package it into user groups. They sell these to other companies that might tailor their products to specific user groups (Committee on commerce, science and transportation, 2013: i). These companies “operate hidden from consumer view” (Committee on commerce, science and transportation, 2013: i) and are criticized for selling data on vulnerable groups in society, such as “rural and barely making it” or “ethnic second-city strugglers” (Committee on commerce, science and transportation, 2013: ii). Data brokers dig deep: their tracking can include political leaning or medical conditions (Committee on commerce, science and transportation, 2013: 13-14). Because they sell their data to many other companies, data brokers can be seen as the ‘middle men’ of a tailored internet. If media companies were to work with these data brokers, they could profile their specific users extremely accurately. These companies do not only sell packages of users for targeted advertisements; they can also track down IP addresses of website visitors in order to provide information about them (Lewis & Westlund, 2014: 457).

But omnipresent as it may be, personalization using large-scale data sets isn’t the only way of implementing the technology. When the Washington Post added Facebook friends’ recommendations to its website earlier this decade, it also started the “Personal Post”, a personalization service based on the data users generated on the Post’s own website (Washington Post, 2012).


1.1 Information reduction before the age of the internet

In a 2006 study on Yahoo! News, Kalyanaraman & Sundar manipulated the portal, inserting various degrees of personalization. It turned out that the more they personalized the portal, the more satisfied the user was (Kalyanaraman & Sundar, 2006: 126). This study underlines an important point: personalized news filters exist because users like them. They “have positive attitudes towards news personalization and are interested in receiving personalized news” (Lavie et al., 2009: 492). This leaning towards personalized information is caused by a multitude of factors. Firstly, humans, “despite evidence to the contrary […] have a tendency to believe that the world is as it appears to be” (Pariser, 2011: 81). In this view of ‘naïve realism’, we have a strong tendency to believe in our own command of the facts and the patterns we see in them (Pariser, 2011: 81). If personalized filters present us with our own truths, the comfort of being in control is strengthened.

This line of reasoning is highly compatible with that of ‘dissonance theory’. Dissonance theory states that individuals react aversively to experienced cognitive dissonance, which arises “because of the negative aspects of the chosen alternative and the positive aspects of the non-chosen alternative” (Fischer et al., 2005: 470). It is an implication of naïve realism: if we expect to be in control of the facts and the patterns we see in them, facts that do not fit our supposedly controlled view of the world arouse negative emotions. Within dissonance theory, the tendency to pick supporting rather than conflicting information is called “confirmation bias” (Fischer et al., 2005: 470). Confirmation bias can also be inverted: under the heading “positive test strategy”, individuals have a tendency to “discredit information that runs counter to a hypothesis they hold” (Fischer et al., 2005: 470). Although the confirmation bias and positive test strategy theses may hold, acquiring new ideas is still a vital part of the human thinking process, Pariser argues (Pariser, 2011: 84). There is a fine balance between acting upon what we already know and gathering new ideas. Personalized filtering clearly favors ideas with which we are already familiar, possibly strengthening our confirmation biases (Pariser, 2011: 84).

The human mind has its limits. While the amount of data in the world has steadily increased, our abilities to absorb all of this data have not (Bozdag, 2013: 211). A study by Eppler & Mengis shows that individuals can reason and decide better when they receive more information, but only up to a certain point, where “information overload” occurs and reasoning and decision making rapidly deteriorate (Eppler & Mengis, 2004 in Bozdag, 2013: 211). Although it is a modern buzzword, the digital world does not have a monopoly on the idea of information overload. In pre-internet times, the amount of information a human had to claw through could already be enormous. It was Plato who warned against unbridled written communication in his Phaedrus, commenting that written texts could not choose their beholder and could therefore threaten the social order, would affect people’s abilities to memorize, and would provide people with too much choice. The latter would prevent them from discerning which texts were morally just and which were not (Furedi, 2015, Gleick, 2011: 30).

In pre-modern times many great thinkers agreed with Plato. The early rabbis disapproved of extensive reading, as the practice could be endless and wear one out intellectually. Seneca declared in his Epistles that one should study only a handful of thinkers in order to be able to fully grasp their ideas (Seneca, in Furedi, 2015). After the invention of the printing press, another wave of information criticism emerged. Erasmus argued that the ‘flood’ of new books made them “foolish, libelous and subversive” (Erasmus, in Furedi, 2015). Again, he was not alone in this view: renowned philosophers such as René Descartes and Sir Francis Bacon agreed. The latter even associated sadness and anxiety with information overload (Furedi, 2015). But as Furedi argues, it is not the availability of the actual texts themselves that evokes emotions such as these; it is that we have to “choose and discriminate between them” (Furedi, 2015). According to Furedi, conceptions of information overload arise “when society lacks an authoritative philosophical and intellectual paradigm through which sources of information and knowledge can be interpreted” (Furedi, 2015).

Most historical accounts of information reduction focus on choosing between large amounts of information. It was the German philosopher Martin Heidegger who took this argument one step further and discussed the filtering of the “inexhaustible richness of being” (Beinsteiner, 2012: 2). For Heidegger, every culture in human history had its own vision of what it means to ‘be’, a “clearing of being”, and following that reasoning, that clearing of being limits reality to whatever that vision of ‘being’ allows one to think (Beinsteiner, 2012: 2). According to Heidegger, the fact that the clearing of being is mutable, and that man’s view of whatever it means to ‘be’ can change, means that the freedom of mankind is possible. When new aspects of reality are perceived, the clearing of being is redefined (Beinsteiner, 2012: 2). In our age, one might see the mythological knowledge attributed to big data (Boyd and Crawford, 2012: 663) as one such redefinition.

Although Heidegger thought of the filtering of being as something inherent to being itself, he also spoke of a trend in the modern age, which he named the “age of the world picture” (Beinsteiner, 2012: 2). The main aspect of this ‘age’ is that man is now the subject of everything and that “to represent [vor-stellen] means to bring what is present at hand [das Vorhandene] before oneself as standing over against, to relate it to oneself, to the one representing it, and to force it back into this relationship to oneself as the normative realm” (Heidegger, 2008 in Beinsteiner, 2012: 3). What this rather dark and admittedly dense statement of Heidegger’s entails is that in the age of the world picture, all that is not relevant to man is “filtered out of existence” (Beinsteiner, 2012: 3). In a very basic way, this conflicts with Heidegger’s hermeneutic conception of truth, and with his ideas on the freedom of man.

Digital technologies built around big data and personalized filtering are a rather radical form of Heidegger’s age of the world picture, while at the same time appearing to constitute a new clearing of being. The mythological belief that big data provides us with new aspects of reality (Boyd and Crawford, 2012: 663), and the fact that big data enthusiasts have goals that far outreach the course of human life, could signal the latter. The fact that the algorithms working these limitless data sets are still man-made, however, signals a radical form of Heidegger’s age of the world picture. Furthermore, big data devotees often adopt a solutionist viewpoint that centers man-made ideology in all these realms (Morozov, 2014: 5). The personalization features a firm like Google inserts in its code have one clear goal: it does not want to return thousands of links, but just a single one, “the one that you wanted” (Pariser, 2011: 33).

All personalization algorithms follow this mantra: they want to provide the user with just the content that is relevant to them, and nothing more. In this sense, the application of personalization techniques can be seen as an attempt to make sense of the unprecedented amounts of information that digital technology now allows humanity to gather. These attempts take place in the absence of a knowledge paradigm, a situation that is common just after the invention of a new information technology (Furedi, 2015). As Furedi mentions in his historical account of information overload, “it is through the process of selection and of interpretation that any given quantity of information … can be transformed into authoritative and meaningful knowledge” (Furedi, 2015). The following decades will provide us with answers to the question whether the data drift can grow into a truly ideological movement. As gaining meaning is “a cultural accomplishment” (Furedi, 2015), this will demand more than larger hard drives and a smartwatch. Data would have to drive our decision-making and morality, and we would have to change the nature of our thinking itself. But if measurability were ever to become a cornerstone of our culture, free human thinking might be the first sacrifice, as Heidegger warned.

1.2 News audiences, gatekeepers and media bias

To grasp what exactly makes personalization so momentous, we have to inspect the functions of collective media consumption and elaborate on the inner workings of democracy. Democracy is much more than voting and gathering the people’s preferences (Dryzek, 2002 in Hindman, 2009: 7). Citizens need to actively engage in discussions about policy making in order for a democracy to function properly (Hindman, 2009: 7). However, the degree to which citizens need to do this varies according to the model of democracy a country adheres to. Strömbäck outlines four main forms of democracy and their implications for journalistic practice. He distinguishes between “procedural democracy”, “competitive democracy”, “participatory democracy” and “deliberative democracy” (Strömbäck, 2005: 337). These forms differ mainly in their normative expectations of citizens: in a procedural democracy citizens only need to respect the democratic procedures, whereas in a deliberative democracy they also need to be ready to participate in public discussion, where they can change opinion and strive for mutual consensus (Strömbäck, 2005: 337).


The notion of journalism changes when the form of democracy changes. Within a procedural democracy, just acting as a watchdog is adequate, whereas in a deliberative democracy media should also provide a platform for public discussion (Strömbäck, 2005: 341). One should keep in mind that what is “considered good journalism from the perspective of one model of democracy is likely to be judged differently from an alternative perspective” (Strömbäck, 2005: 341). However, most westernized countries adhere to a deliberative model of democracy, or the adjacent participatory democracy. In these countries it is important for the press to “link active citizens together” (Strömbäck, 2005: 341). This fosters the exchange of ideas needed to implement a further-reaching form of journalism. The space in which this form of active engagement takes place was famously dubbed “the public sphere” by Jürgen Habermas (Habermas, 1996 in Castells, 2008: 78). As Habermas described it, the public sphere is “a network for communicating information and points of view” (Habermas, 1996 in Castells, 2008: 78). This is where citizens gather and rationalize their shared ideas to address policy makers in the political system (Castells, 2008: 78). The media are important for the public sphere in two ways. First, they have an organizational function, because they provide a platform for deliberation (Castells, 2008: 79). Furthermore, they provide the public with the media messages they need in order to be informed and to join the deliberative process (Shoemaker & Vos, 2009: 1).

Media are selective in what they report. Countless events occur each day, but the media report on only a few of them, namely those they deem newsworthy. This process is called gatekeeping (Shoemaker & Vos, 2009: 1). Shoemaker and Vos define gatekeeping as “the process of culling and crafting countless bits of information into the limited number of messages that reach people each day” (Shoemaker & Vos, 2009: 1). The first personification of a gatekeeper was “Mr. Gates”, a wire editor at a small-city newspaper who appeared in the research of David Manning White (White, 1950 in Shoemaker & Vos, 2009: 15). White found that the decision whether to use or discard wire copy was based on “highly subjective” criteria (White, 1950: 386 in Shoemaker & Vos, 2009: 15). Shoemaker and Vos later expanded on this model by including all kinds of variables, including organizational characteristics, life experiences of editors and the influence of outside forces such as public relations and interest groups (Shoemaker & Vos, 2009: 113-115). And although the gatekeeping process might not be as simplistic as White observed it to be, he still touched on the very core of the subject: the media’s role in shaping public debate. If the media have a gatekeeper role, they also have a role in deciding which material is available for deliberation. Or as Strömbäck states it, “since it is through media and journalism that citizens mainly access political discussions, the deliberative model of democracy places exacting demands on media and journalism” (Strömbäck, 2005: 340). But as stated before, the power that the media acquire through their gatekeeping role depends on the scarcity of information (Verhulst, 2007 in Karppinen, 2013: 104). The information abundance of today’s world, largely caused by network technologies, is quickly diminishing the media’s gatekeeping role and transferring it to new players in the game: personalization algorithms.

The use of personalization algorithms also has severe consequences for the idea of media audiences. In journalism’s traditional view of the audience, the gatekeeper model provides journalists with a situation where “the wants and desires of audience members are subordinated to journalists’ expert judgment about the stories that audience members need to know” (Anderson, 2011a: 529). But as the importance of relevance in news work grows, audience responsiveness is slowly becoming the norm (Anderson, 2011a: 529). In the internet zealot’s lingo this is seen as a democratic force, breaking down the excessive power of media outlets in the public sphere. But within this supposition two main problems arise: the dichotomy between professional and responsive visions is not straightforward, and the democratizing potential of audience responsiveness can be questioned, as this disintermediation is only pretense (Anderson, 2011a: 536 & Morozov, 2013: 165).

Within the paradigm of “professional journalism”, there is a strong paradox. Whilst journalists adhering to this vision see their audience as “consumptive”, they also see them as “disengaged”, largely ignoring the agenda-setting potential of the audience (Anderson, 2011a: 538). In a way, journalists within the “professional paradigm” deliver a consumer product without caring for consumer needs. As network technologies emerged and the scarcity of information disappeared, news media adopted various new ways of understanding and involving the audience. Within the paradigm of “public journalism”, journalists let go of the consumptive aspect, envisioning their public as engaged citizens instead of “atomistic individuals” (Anderson, 2011a: 537-538). But despite the fact that the audience is partly agenda-setting within this perception, news organizations retained their gatekeeping power (Schudson, 1999: 122 in Anderson, 2011a: 539). Within the paradigm of “Indymedia journalism”, the border between consuming news and agenda-setting is discarded entirely. Here the audience actively creates content within fragmented spheres, rejecting the notion of a unified audience in favor of an agonistic and deeply participatory public (Anderson, 2011a: 540). What both of these paradigms have in common is their participatory audience. It is the audience that sets the journalistic agenda, though in varying degrees. Within Anderson’s fourth and final audience paradigm, the “algorithmic audience”, this notion is adopted as well. However, this agenda-setting function does not come about in a participatory or deliberative manner, but through traceability and behavioral tracking of the audience (Anderson, 2011a: 540, Turow, 2011: 2).

Following this logic, the dichotomy between the traditional view of the audience and the algorithmic one seems straightforward. Nonetheless, there is a strong similarity between these two audience perceptions, in the sense that they both comply with the idea of the consumptive, atomistic news consumer (Anderson, 2011a: 540). The same intrinsic market logic of the traditionalist’s view is adapted to the networked news environment. There is one catch to this hypothesis, however. In the traditionalist’s view, the consumptive news user could come into being partly because of a lack of interest in user behavior. In print, it could be hard to obtain information about how much time readers spent on each article, or whether they even read their newspapers at all. Moreover, journalists often weren’t interested in this information (Anderson, 2011a: 533). But in today’s world of user metrics and advanced behavioral tracking, this no longer holds. With the ability to map out the reader’s needs and wants precisely, the consumptive aspect shifts to fully catering to those needs, rendering the traditional dilemma between supposed user needs and journalists’ gatekeeper functions moot (Anderson, 2011a: 540).

Besides the fact that the democratizing effects of audience responsiveness on news production largely depend on the de-individualization of the user, something completely absent in the paradigm of algorithmic audiences, the aspect of disintermediation also deserves further scrutiny.

Disintermediation inevitably flows from the loss of the gatekeeper effect of the media and/or individual journalists. This is heralded by internet advocates and academics alike, because these gatekeepers can carry various biases, inefficiencies and hidden agendas (Morozov, 2013: 165). These are indeed present in the current media system. From the moment that journalists adopted ‘objectivity’ as one of their main ideals, they instantaneously “recognized its limits” (Schudson, 2001: 164). And whereas American journalists tend to cling to their objective status, other news cultures (such as the European one) openly accept more subjective newsworkers (Schudson, 2001: 165).

However, disposing of these biases by disposing of journalistic gatekeeping altogether does not automatically lead to disintermediation, as in our media-saturated world full disintermediation is simply impossible. When we discard our journalistic gatekeepers, we simply trade them for algorithmic gatekeepers (Morozov, 2013: 165). Morozov calls this ‘the opposite of disintermediation’ because “the digitization of our public life is also giving rise to many new intermediaries” (Morozov, 2013: 165). From automatic spam services to personalization tools, our digital information flows through a plethora of information filters. And most importantly, these new intermediaries are kept out of sight. Morozov thus concludes that we live in an era of “hyperintermediation” rather than “disintermediation” (Morozov, 2013: 165). If disintermediation is indeed only pretense, these new intermediaries should be analyzed, for their role might stand in the way of the supposed democratizing aspects of the internet. It is not the intermediation process itself that poses a problem: as long as the algorithms truly are free from the biases that exist within the routines of journalistic gatekeeping, they could pose a far better alternative.

Theorists and public alike often “consider online intermediaries themselves as neutral” (Bozdag, 2013: 210). Machine-like automation, combined with algorithms “lurking in the dark territory of the unknown” (Pariser, 2011, in Morozov, 2011), leads to a common view in which algorithmic gatekeepers seem to operate without the well-known biases of their human counterparts (Bozdag, 2013: 210). This can be debated in two ways, according to Bozdag. First of all, computer systems have shown that they can carry biases with them. Secondly, humans still affect the way algorithms work once they are implemented (Bozdag, 2013: 210). Bozdag’s first argument lines up with a broader tradition of scholars who inquire into the political and ideological dimensions of technology. As Kate Crawford mentions, this is a line of thought that stretches back as far as Plato, in modern times gaining momentum through critiques of industrialism and computation (Crawford, 2016: 78). Crawford splits the argument that technology is political into two schools of thought. She starts by pointing out that certain features of a device or system can create a way of gaining power and authority, but that social actors remain the decisive factor in how those features will be configured. The second line of thought sees technology as inherently political, as “unavoidably linked to institutionalized patterns of power and authority” (Crawford, 2016: 78). Whether personalization algorithms simply provide means of establishing authority or are inherently political is debatable. But as these algorithms take over the gatekeeping role of the press, it is important to recognize that they carry with them the same political and possibly biased structures as journalistic gatekeeping. Personalization algorithms “exert power to shape the user’s experience and even their perception of the world” (Eslami et al., 2015: 1).

Acknowledging that these algorithms possibly carry political biases is in fact a liberating turn in the debate surrounding the political aspect of personalization, because it implies that these algorithms are worth investigating. Crawford argues that “the criteria by which algorithms determine evaluation are obscured while they are enacting political choices about appropriate and legitimate knowledge. Yet these deliberations are crucial.” (Crawford, 2016: 86). This reasoning does not imply that algorithms should be investigated ‘as-is’, focusing on the isolated arithmetic. The Algorithm with a capital A does not exist, as algorithms are merely agents in a process of negotiation between people, algorithms and institutions acting “in relation to another” (Crawford, 2016: 83).

Bozdag reinforces this argument by adding a human operator to his model of personalization algorithms (Bozdag, 2013: 215). He states that the political nature of these algorithms does not stem solely from their human-made origins; human agents also actively manipulate and adjust algorithms once they are installed (Bozdag, 2013: 216-217). All in all, personalization algorithms are not an objective gate. As Bozdag demonstrates, many of the news routines that journalists use are integrated into the personalization algorithms that news websites deploy, and human agents intervene in the selection and filtering process all the time (Bozdag, 2013: 217). Above all, personalization algorithms operate at the complex intersection of various actors’ interests, being neither self-contained gatekeepers nor part of a deliberative paradigm of equal agents (Crawford, 2016: 86-87).

1.3 The challenges of news personalization

Here the great paradox of personalized news filtering arises. The information abundance that came with networked technologies led to a radicalization of the ‘age of the world picture’, where everything non-relevant to man is “filtered out of existence” (Beinsteiner, 2012: 3). It also created a new concept of what it means to ‘be’, a ‘clearing of being’, derived from the endless and near-mythological knowledge attributed to big data (Beinsteiner, 2012: 3 & Boyd and Crawford, 2012: 663). At the same time, the shift from human towards algorithmic gatekeepers spawns atomistic, consumptive news users who acquire strong agenda-setting roles without the deliberative or participatory aspects of earlier audience paradigms that challenged media gatekeeping power (Anderson, 2011a: 540). Combined, these trends lead to an internet where identity is reconstructed from a panoply of signals left online, and where that presupposed identity is used to cater news consumption and to hide articles deemed ‘irrelevant’ from view. This process of personalized news filtering leads to a fundamental narrowing of political debate through the creation of so-called “filter bubbles” (Pariser, 2011: 9).

Although Eli Pariser was the first to coin the term ‘filter bubble’, he was not the first to associate news personalization with this idea. As early as 1995, Nicholas Negroponte wrote that “being digital will change the economic model of news selections” and “make your interests play a bigger role” (Negroponte, 1995: 153). He envisioned “The Daily Me” (Negroponte, 1995: 153), where your newspaper would be personalized as if the full newsroom were working just for you. He noted that people might still be interested in its counterpart, “The Daily Us”, and that personalization should be customizable “with a knob that, like a volume control, allows you to crank personalization up or down” (Negroponte, 1995: 154). In the same era, more critical voices emanated from the academic landscape. In 1996 Elihu Katz saw the multitude of channels on TV as a severe threat to democracy: “Yet, from the point of participatory democracy, television is dead, almost everywhere. It no longer serves as the central civic space” (Katz, 1996: 24). Democracy needs a limited source of information, he argues, which was first to be found in newspapers, then radio and later television. Although using a somewhat technologically determinist viewpoint (which even he himself admits (Katz, 1996: 26)), Katz continues by touching upon news personalization on the web and even envisions the digital personalized newspaper (Katz, 1996: 26). He sees this trend as incompatible with the participatory nation-state (Katz, 1996: 26). So as all media drift towards a panoply of channels and customization, Katz leaves us with a rather bleak outlook of media that are no longer able to prevent segmentation (Katz, 1996: 33).

Cass Sunstein takes this argument one step further in his book Republic.com. He concludes that a well-functioning system of free expression “requires far more than restraints on government censorship and respect for individual choices” and places the “growing power of consumers to filter what they see” within this arena (Sunstein, 2001: 8). Whereas Katz loathed personalization because it breaks down a formerly unified audience, Sunstein sees it as also actively dismantling the ability to participate in democracy (Sunstein, 2001: 8). The most important step Sunstein takes is to couple the recommendation systems of sites like Amazon and Moviefinder to the personalization of news content. Whereas the debate before mostly focused on ‘explicit’ personalization, now ‘implicit’ systems of recommendation come into play. The recommendation systems Sunstein describes might indeed be harmful, but at the time, personalization technologies were not nearly as advanced as they are now. Even when he wrote an update to his book in 2007, called Republic.com 2.0, he still saw personalization as something users apply themselves. When he describes his vision of the personalized newspaper, he writes that “for your own daily dose of news, you might subscribe to several review services and let a software agent or a human one pick and choose from them to compile your completely customized ‘newspaper’” (Sunstein, 2007: 39).

When Pariser writes about filter bubbles, he adopts much of the thinking of Sunstein and Negroponte. In his theory, too, one becomes atomized as personalization continues to intertwine with network technologies (Pariser, 2011: 9). However, he adds two dimensions to the concept: the invisibility of the filter bubble and its involuntary incorporation of the individual (Pariser, 2011: 9-10).

The fact that the filter bubble becomes invisible is primarily explained through the rise of information intermediaries and their eager adoption of filtering technologies. When Sunstein published Republic.com 2.0, Google was indeed on the rise and information intermediaries were quickly gaining importance. But although search results were better optimized than ever, they were still universal: Google has only used personalized search results since December 2009. Or as Pariser writes, since that date “there is no standard Google anymore” (Pariser, 2011: 2).

The launch of the ‘new Google’ was a silent one, and with the filtering process hidden, most people still have no idea that certain information is tucked away within their search results (Pariser, 2011: 10). That one might not recognize a filtering process at work is adjacent to Pariser’s third dimension of the filter bubble: its involuntary entry by many users (Pariser, 2011: 10). Pariser argues that most web services that use personalization do not allow users to shut off personalization features entirely. This no longer holds. Since the publication of Pariser’s book, many privacy concerns have been raised, and companies like Google and Facebook now allow you to customize most of their personalization settings (Morozov, 2011). With some, such as Google, opting out might limit some product features. But while personalization might be voluntary in nature, it is still largely opt-out, and most users may lack the technical knowledge or the interest to adjust these features.

One of the main effects of filter bubbles takes place in the political arena. Since most westernized countries adhere to a “participatory” or “deliberative” model of democracy (Strömbäck, 2005: 337), filter bubbles have a major impact on citizens’ ability to participate in the proceedings of these democratic forms. As Strömbäck outlines, unlike in “procedural” and “competitive” forms of democracy, in the more far-reaching democratic models of “participatory” and “deliberative” democracy the normative expectations of citizens are higher. They have to be cooperative and aware of each other’s opinions, not to mention politically interested and opinionated (Strömbäck, 2005: 337). Among the first things the filter bubble hides from sight are dissenting political opinions. This is catastrophic for the workings of the above democratic systems.

Cass Sunstein identifies two trends that conflict with the normative expectations of citizens that models of participatory and deliberative democracy imply. The first problem ensues from the situation where individuals have complete control over their systems of communication and “exercise that control so as to decrease shared communications experiences” (Sunstein, 2001: 48). Filter bubbles can lead to a very fragmented audience - the end point of the process is an audience of one - which in turn leads to a “considerable difficulty in mutual understanding” (Sunstein, 2001: 48). As difficulties in mutual understanding abound, deliberation becomes increasingly hard; Strömbäck’s demands of cooperative citizens who are aware of each other’s arguments become progressively more difficult to meet as personalized filtering sweeps away most shared experiences. As Pariser writes: in order for democracy to function we must be “capable of thinking beyond our narrow self-interest. But to do so, we need a shared view of the world we cohabit” (Pariser, 2011: 164).

He concludes that instead of a medium where whole groups of people “could co-create their culture through discourse”, the filter bubble gave us “a public sphere … hostile to dialogue” (Pariser, 2011: 164). One of the main aspects of personalized filters that spawns this fragmented public sphere is that personalization decreases “exposure to materials that would not have been chosen in advance but that nonetheless are beneficial” (Sunstein, 2001: 48). In the process of filtering, opposing viewpoints might be ruthlessly cut out of your information flow. This prevents the refinement of arguments: within the filter bubble we are confronted only with ideas we already agree with, and so we will not contest our own viewpoints (Bozdag & Van den Hoven, 2015: 252). Whilst being able to change our opinions might be beneficial to ourselves, it is crucial to deliberative democracy to have unplanned encounters with ideas we do not necessarily agree with at first (Strömbäck, 2005: 337). Or as Morozov states it: “to practice active moral citizenship one needs to know that things could be otherwise” (Morozov, 2013: 201).

It might not even be citizens themselves who feel the urge to exclude dissenting opinions from their news consumption. Whereas Sunstein concludes that ultimate free choice in consumption will inevitably lead to consumer choices based on existing ideas, Garrett argues that “people are not using these capabilities to isolate themselves entirely from other ideas” (Garrett, 2009: 281). The effects of explicit personalization features on content exposure are debatable, but once the filter bubble comes into play, consumers have very limited choice in whether they isolate themselves or not. Algorithmic filtering is mostly invisible, and the amount of personalization added to the information mix is only accessible from a tucked-away menu (Pariser, 2011: 10). As it becomes increasingly hard for consumers to escape from personalization, it also becomes harder for them to escape the endless repetition of their own ideas and arguments.

The second problem Sunstein has with personalization is that this echoing of arguments is dangerous for democracy because it leads to strong forms of polarization (Sunstein, 2001: 67). Sunstein argues that filtering techniques (remember, his book Republic.com was written in pre-‘filter bubble’ times) lead to an augmentation of two elements that form the basis of group polarization. Although Sunstein’s analysis is based on the polarization of online hate groups, his arguments are easily transferable to the discussion of atomization caused by the filter bubble. In the first place, polarization thrives when there is a “limited argument pool” (Sunstein, 2001: 68). The underlying idea is that one’s position is formed by the persuasive arguments one encounters. In a largely homogeneous group of people, the range of arguments will be skewed in a certain direction. Thus, deliberation in a homogeneous environment where all arguments are cast in the same direction may lead to a more polarized outcome for the deliberator(s) (Sunstein, 2001: 68).

This same “limited argument pool” is seen within the filter bubble, and might not only lead to the strengthening of one’s ideas but also to their radicalization. Sunstein’s second argument relies on “social comparison” (Sunstein, 2001: 68). This theory says that people who identify with a certain social group also want to be seen as identifying with that group. They will adhere to the view dominant within that social group once they hear what other members of the group believe. An overload of information in favor of this social group might lead to a lessening of deliberation and an adoption of dominant perspectives (Sunstein, 2001: 68-69). This is applicable to the filter bubble environment as well: one is served information matched to presupposed, identifiable social groups, pushing one’s ideology towards the dominant ideology within these groups, thereby feeding polarization.


One of the main reasons for news websites to implement personalization features is to generate traffic. Because of user traceability, advertisers can now see how many people see their ads, and nearly all advertisers pay ‘per click’ nowadays. This trend has a strong influence on the media world. The use of user metrics varies per journalistic outlet. The Huffington Post actively promotes its “dashboard”, where journalists can see how well their articles are ‘performing’ (Pariser, 2011: 70). In 2004, a Chilean newspaper started basing its contents fully on what users clicked on (Pariser, 2011: 71). Media such as the New York Times are more reluctant in their use of user metrics (Pariser, 2011: 70), as they fear that metrics might end up guiding their coverage too much. They cling to a more traditional understanding of the audience, where journalists’ judgment rules supreme over audience needs and wants (Anderson, 2011a: 529).

Audience tracking leading news coverage might not only cause journalists to miss out on initial leads that do not immediately look like an interesting story; it might also lead to a future where traffic-friendly entertainment stories entirely prevail over editorial judgment (Pariser, 2011: 74-75). Investigating the effect of user metrics at the website Philly.com, Anderson concludes that “it is not an exaggeration to say that web traffic often appeared to be the primary ingredient in Philly.com news judgement” (Anderson, 2011b: 561, emphasis in original). When editorial judgment is left out of personalization techniques completely, Pariser speaks of “sycophantic personalization” (Pariser, 2011: 73). Pariser seems at ease with some application of sycophantic personalization processes, but in condemning the absence of editorial judgment he quotes Michael Schudson, commenting on Gawker’s ‘Big Board’ (showing live user metrics in the main newsroom): “It is a scary extreme, it’s surrender” (Schudson, in Pariser, 2011: 73). When researching the causes of the rise of ‘Big Boards’, Anderson concluded that whether user tracking overrides editorial judgment largely depends on three factors: if newsroom technologies, the overall business climate and basic journalistic culture remain constant, the prevalence of user metrics-driven reporting is less likely (Anderson, 2011b: 563).

Being in a filter bubble actively diminishes your ability to engage in a democratic system, while possibly polarizing your political viewpoints (Pariser, 2011: 164, Sunstein, 2001: 48 & 67). Both arguments flow from the fact that the media you consume are based on a supposed identity that media derive from your online activities. In turn, the media you consume also define who you are. So in the process of finding the right fit between you and the media you consume, the filter bubble might, in the end, change you (Pariser, 2011: 112). Here the cognitive dissonance effect is at work again: we accept most things personalized filters throw at us because, in line with naïve realism, we self-justify our news consumption choices. We feel an urge to pretend to be in control of our behavior, even within the filter bubble (Pariser, 2011: 127).

Nonetheless, identity is a much more complex process than this line of reasoning implies. Following Buckingham, at the heart of the term ‘identity’ there is a strong dichotomy. Whilst identity implies (idem is Latin for ‘the same’) that we form it through relationships with collectives or social groups, it is also “something unique to each of us that we assume more or less consistent” (Buckingham, 2008: 1). Identity is fluid and identity formation is a highly social process. Bauman states that identity is “almost infinitely negotiable” (Bauman, in Buckingham, 2008: 2). Bourdieu emphasizes that “being-perceived […] exists fundamentally through other people” (Bourdieu, 1991: 224). In social interaction, only the naming of identity frees the particularity from the unthought (Bourdieu, 1991: 224).

This logic becomes muddled when algorithmic personalization comes into play. We spend a great part of our lives online, and if our online environment is indeed intertwined with all kinds of personalization algorithms in the (near or distant) future, these algorithms and the data they are based on will start to play a part in our identity. A strange and confusing feedback loop is conceived: data mining leads to one or more profiles of you being stored in a database; these profiles are used to customize your online reality; this in turn influences your identity and therefore your online behavior; and that behavior finally leads to an adaptation of your online profile. Even language is just a representation of our identity (Bourdieu, 1991: 224), so we cannot expect data mining to fully grasp the complexities of who we are. We can, however, expect it to influence who we will be. When the filter bubble is at work, we run the risk of a self-fulfilling identity.


1.4 Solutions for the filter bubble

Eli Pariser poses a three-way solution: consumers, internet companies and governments alike should all work together in breaking, or at least mapping out, filter bubbles (Pariser, 2011: 219). In this section, I will outline these solutions, together with some critiques of them and some concrete examples to illustrate practical means of bubble breaking. As a start, Pariser argues, consumers should become more aware of the paths they follow in their internet routines: “Just by stretching your interests in new directions, you give the personalizing code more breadth to work with” (Pariser, 2011: 223). He also assigns some responsibility to users regarding their ‘information diet’, as something on which they should actively work (Pariser, 2011: 222). Morozov disagrees with this rhetoric, as he insists that not everybody will have the financial means to establish a healthy ‘information diet’ (Morozov, 2013: 283). Moreover, he sees it as diverting attention from structural reform in the media world and governmental regulation towards users’ responsibility (Morozov, 2013: 284).

According to Pariser, actively manipulating or diverting personalization algorithms is a second option for the consumer. Through cookie deletion, opting out of privacy-endangering services and incognito browsing, one can avoid his or her filter bubble (Pariser, 2011: 224-226). However, Bozdag and Van den Hoven (2015: 254) state that these methods are tiresome and discard the possible benefits of personalization as well. The third option Pariser names is for users to acquire some basic “algorithmic literacy” (Pariser, 2011: 228).

This solution adapts the arguments of the ‘learn to code’ movement, which holds that coding skills are vital because “increasingly, citizens will have to pass judgment on programmed systems that affect our public and national life” (Pariser, 2011: 228). While a basic understanding of code can certainly enhance the comprehension of personalization principles, this school of thought also breeds a technologically deterministic view of society. If everybody learns to code, we increasingly give way to a society in which all problems are treated as solvable by computational technologies, even if they are not, or if the problem is only conceived as such through algorithmic logic; a mentality that Morozov has defined as “solutionism” (Morozov, 2013: 5). By focusing on breaking the filter bubble through better code, arguments regarding human editors could fade from the discussion.
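What such literacy buys the reader can be illustrated with a deliberately simple, hypothetical ranking function; every name and weight below is assumed for the sake of argument. Anyone who can read it sees at once that ‘relevance’ here is nothing more than past clicking behavior plus a freshness bonus, weighted by constants the platform chose: an editorial decision hidden in code.

# Hypothetical ranking function, for illustration only.
ENGAGEMENT_WEIGHT = 0.8  # set by the platform; invisible to the reader
RECENCY_WEIGHT = 0.2

def relevance(article, click_shares):
    """Score = past clicking behavior + freshness; civic importance is absent."""
    affinity = click_shares.get(article["topic"], 0.0)  # share of past clicks
    freshness = 1.0 / (1.0 + article["age_hours"])      # newer scores higher
    return ENGAGEMENT_WEIGHT * affinity + RECENCY_WEIGHT * freshness

reader = {"politics": 0.7, "sports": 0.3}      # hypothetical click history
scoop = {"topic": "culture", "age_hours": 1}   # fresh, but an unread topic
habit = {"topic": "politics", "age_hours": 30}
print(relevance(scoop, reader))  # 0.1
print(relevance(habit, reader))  # ~0.57: the familiar beats the fresh scoop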

In his account of “The Daily Me”, Nicholas Negroponte materialized the personalization features inherent in such a medium as “a computer display of news stories with a knob that, like a volume control, allows you to crank personalization up or down” (Negroponte, 1994: 154). As it turns out, news personalization does not currently work that way. One of the filter bubble’s most prominent features is that it is largely hidden, and that its users mostly cannot crank its prominence up or down. It is such a knob that Pariser proposes as a means for internet companies to take their fair share in breaking the filter bubble (Pariser, 2011: 235). He combines this with a quest for openness, wanting those companies to be more transparent about their personalization practices (Pariser, 2011: 229-230). This openness consists of two main parts: being open about our profiles, showing us who these companies think we are, and being open about how they use this data to shape our online experience (Pariser, 2011: 231-232). It is important to note here that Pariser’s view is highly US-centered: in the EU, this openness is demanded by law (European Commission, 1995). After the publication of the book, both Facebook and Google have created simple-to-use dashboards that allow their users to access their privacy and personalization settings more easily (Newman, 2016 & Wagner, 2014).
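Purely as an illustration, such a knob is easy to sketch. All scores and article names below are hypothetical; the point is only that a single user-visible parameter could blend an editorial ranking with a personalized one, making the degree of personalization both visible and adjustable.

# Hypothetical sketch of Negroponte's "personalization knob".
def blended_feed(articles, personal, editorial, knob):
    """knob = 0.0 gives a purely editorial feed; knob = 1.0 a fully personal one."""
    def score(a):
        return knob * personal[a] + (1.0 - knob) * editorial[a]
    return sorted(articles, key=score, reverse=True)

articles = ["election", "match", "festival"]
personal = {"election": 0.1, "match": 0.9, "festival": 0.2}   # this reader likes sports
editorial = {"election": 0.9, "match": 0.3, "festival": 0.5}  # editors' news judgment

for knob in (0.0, 0.5, 1.0):
    print(knob, blended_feed(articles, personal, editorial, knob))
# 0.0 -> ['election', 'festival', 'match']  (the editors' front page)
# 0.5 -> ['match', 'election', 'festival']  (a visible middle ground)
# 1.0 -> ['match', 'festival', 'election']  (the reader's bubble)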

The knob that Pariser wants to install at the top of your news feed not only increases user control over the use of personal information, it also makes clear that a form of personalization is going on (Pariser, 2011: 235). This type of technology might be repugnant to some ideals of functionalism.

However, we might broaden our vision of ‘functionality’ when we acknowledge that a debate about a personalized news feed is (sociologically) functional as well (Morozov, 2013: 328-329). When we do just that, we enter the realm of “adversarial design”. This is a way of thinking about technology and design, coined by Carl DiSalvo, in which “the goal of design is not to build an artifact to fulfill some genuine social need ‘out there’ but also make us reflect on how that need has emerged, how it has become a project worth pursuing, and how, all things considered, it may actually not be worth pursuing at all” (Morozov, 2013: 329). Crawford also advocates the use of these design methods. She argues that algorithms have a nature of contest, “one that is choosing from often counterposed perspectives, within a wider sociotechnical field where irrationality, passion and emotion are expected” (Crawford, 2015: 87). We could show the complex processes that lie at the heart of these contests when “the backstage debates about the content are visible” (Crawford, 2015: 87).

In their 2015 study, Bozdag and Van den Hoven analyzed 15 different tools developed to counter the negative effects of filter bubbles (Bozdag & Van den Hoven, 2015: 254). These tools range from visualizations of someone’s online reading habits to websites designed to show omitted Google Search results. They concluded that the solutions these tools offered were largely shaped by the democratic model their creators envisioned during the creation process (Bozdag & Van den Hoven, 2015: 263). This implies that most of these tools do not address all negative aspects of the filter bubble, nor claim to be an all-in-one solution to the problem of personalized filtering. The more liberal variants “allow users to be aware of their own (or the platform’s) biases, understand these biases and allow the user to control incoming information and filters” (Bozdag & Van den Hoven, 2015: 260). Deliberative solutions let users “discover diverse facts, perspectives and disagreements, reflect on their own (and others’) arguments, aim for informed debate with epistemic justifications and increase the epistemic quality of information” (Bozdag & Van den Hoven, 2015: 260). The republican and agonistic views on democracy are absent in the tools that Bozdag and Van den Hoven researched (Bozdag & Van den Hoven, 2015: 259). Most of these plug-ins or platforms focus on either awareness or informed debate, and all of them are third-party solutions (except for an experiment by Microsoft with its Bing engine), making their reach and effectiveness limited.

For a true solution, we have to get the platforms themselves aboard, and that is what Pariser asks of information intermediaries. He advocates “more serendipity, a more humanistic sense of identity and an active promotion of public issues and a cultivation of citizenship” (Pariser, 2011: 233). As curators gain power by gatekeeping information flows, they have to use that power in an ethical manner; Pariser concludes that currently they do not (Pariser, 2011: 291). Morozov shares that very same conclusion, but uses it in favor of a wholly different argument. He states that implementing serendipity in personalization algorithms would mean that the weighing of which information is ‘societally important’, and which information outside of one’s filter bubble is relevant, only transfers more cultural paternalism into the hands of those who have shown themselves unable to embrace basic democratic ideals (Morozov, 2013: 290). If information intermediaries felt the need to build in a decent serendipity engine, they would need more data from their users, not less (Morozov, 2013: 290). Serendipity has a very different nature than randomness: randomness entails a lack of variables in the selection process, while serendipity (as Pariser describes it) actually needs more variables, because the ones used so far produce a data set that is perceived as too narrow. Built-in serendipity would indeed break some aspects of one’s filter bubble; paradoxically, it would also mean an even higher level of personalization. Instead of handing more societal power to programming engineers, we could also choose a route in which the negative effects of personalized filtering are handled by both individuals and politics.
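The distinction can be made concrete in a short, hypothetical sketch. The random recommender below needs no user data at all, whereas the serendipity recommender needs both the reader’s profile (to know what lies outside it) and an additional quality signal (to decide which unfamiliar items are worth showing): exactly the extra data Morozov points to.

# Hypothetical sketch: randomness versus serendipity.
import random

CATALOGUE = [
    {"title": "budget row",    "topic": "politics", "quality": 0.9},
    {"title": "derby report",  "topic": "sports",   "quality": 0.4},
    {"title": "opera review",  "topic": "culture",  "quality": 0.8},
    {"title": "tax explainer", "topic": "economy",  "quality": 0.7},
]

def random_pick(catalogue):
    """Randomness: requires no knowledge of the user whatsoever."""
    return random.choice(catalogue)

def serendipitous_pick(catalogue, familiar_topics):
    """Serendipity: needs the profile (to leave it) AND a quality signal
    (to pick something outside it that is still worth the reader's time)."""
    unfamiliar = [a for a in catalogue if a["topic"] not in familiar_topics]
    return max(unfamiliar, key=lambda a: a["quality"]) if unfamiliar else None

profile = {"politics", "economy"}  # hypothetical reading history
print(random_pick(CATALOGUE)["title"])                  # may land inside the bubble
print(serendipitous_pick(CATALOGUE, profile)["title"])  # "opera review"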

Apart from users and companies, governments also have a major role in monitoring and breaking filter bubbles, because of their legislative powers. Legislation regarding personalization techniques focuses on the aggregation of data, the adjacent data economy, and the possibilities of auditing algorithmic gatekeepers in the same way we audited human journalistic gatekeepers. Personal profiles are based on the aggregation of personal data; without data mining and big data practices, the current state of personalization techniques would be impossible. The need to impose restraints on personalization practices is therefore best enforced through privacy law. Pariser argues that in order to create and enforce better privacy laws we have to enhance our awareness of the data economy by “thinking of personal data as a kind of personal property and protecting our rights on it” (Pariser, 2011: 240). There is no such thing as a free lunch: our free services on the internet are paid for with our data. However, this transaction is opaque: an information intermediary knows the value of customer data, but the customer has no clue (Pariser, 2011: 240).

Thinking of your data as something with a specific value could strengthen customer awareness and restore the power balance between information intermediaries and their users. One way of doing this is using web services that pay users for their data, such as Datacoup, Citizen.me and Digi.me.
