Emillie V. de Keulenaar

Algorithmic diplomacy: Implementing mediation techniques in YouTube’s recommender system

September 2018

Supervised by Dr. Bernhard Rieder and read by Dr. Marc Tuters
rMA Media Studies (New Media and Digital Culture)


Table of contents

Introduction

Chapter 1
Searching for deliberation in information systems design
1. Introduction
2. The operationalisation of values in technical design
2.1. Information filtering and normative thinking
2.2. Philip Agre, intellectual history and Critical Technical Practice
2.3. Systems and deliberation: conceptual parallels in values in design and society-oriented design
2.4. Alternative design practices by industrial actors
3. Operationalisation as a form of deliberation
3.1. The legacy of political thought and technical design
3.2. Cybernetics: modularity and information flows
3.3. Floridi’s information ethics
3.4. Cooperative responsibility

Chapter 2
Conflict and polarisation in YouTube’s recommender systems
1. Introduction
2. Basic recommender system architectures
2.1. What is a recommender system?
2.2. Content-based recommenders
2.3. Collaborative filtering systems
2.4. Hybrid recommenders: the YouTube ‘related videos’ system
2.4.1. The YouTube recommender system in 2010
2.4.2. The YouTube recommender system in 2016
3. Notions of conflict in information filtering systems
3.1. Sunstein’s ‘filtering effect’
3.2. Pariser’s filter bubble
4. Polarisation in YouTube’s recommender system
4.1. Polarisation in collaborative filtering systems
4.2. Collaborative filtering systems and the polarisation of perception
4.3. Popular content and misinformation rabbit holes
4.4. Critiques: the need for conflict and different models of sociality

Chapter 3
Applying mediation techniques in YouTube’s recommender system
1. Introduction
2. Current applications of mediation techniques in information filtering systems
2.1. Computer-aided methods for conflict management
2.2. Mediation via critical interfaces
2.3. Diversification and diverse exposure
3. Collaborative filtering systems and contextualisation
3.1. Contextualisation as a mediation technique
3.2. Contextualisation in the collaborative filtering system
4. Dialogical recommendations
4.1. Mediation processes and modularity
4.2. Recommending argumentation
5. Alternative mappings

Conclusion


Acknowledgments

My immense gratitude to Bernhard is not limited to the help, presence and patience he offered me during his supervision of this thesis, but extends to the ideas he formulates and proposes, in which my interest seems never to find an end.

I am equally grateful to Ivan, who has not just provided me with his time and valuable help, but also with company I have come to appreciate greatly.

I thank Julia for giving me the time to finish this thesis even in times when she needed my assistance for those thousand and one missions she still manages to accomplish every day. My gratitude also goes to Marc and Thomas, who have offered to read this thesis, and who have, in the course of this past year, offered me numerous opportunities to teach and pursue research interests that make for a continuous source of development.

I wouldn’t have begun the research topic this thesis is the subject of if it wasn’t for Rhubi and Maria, with whom I have not just worked on this in the past, but equally on a friendship that has come to mock the constraints of the distance between us.

Finally, I would like to dedicate this thesis to Gianluca D’Avola, the memory of whom will always constitute my interest in technology as part of a project we initiated together long ago.



Introduction

Since the early two-thousands, scholars have been preoccupied with the ways in which they suspect platforms and their information systems process and redesign several practices central to democracy (Hofbauer; Dandekar et al.; Yardi and boyd; Barbera). A number of key concepts have since shaped the scholarly and popular imagery of these platforms and their information systems as sites that host, reproduce or contribute to social conflict. The notions of political polarisation and various terms drawn from social and organisational psychology (homophily, confirmation bias, selective exposure, and biased assimilation) partly reflect how scholars have come to locate conflict online as an ensemble of political phenomena specifically associated with selective information distribution and several of its techniques, particularly personalisation, the maximisation of engagement, and the sorting of items based on user similarity. They suggest, for example, that users dwell in ‘filter bubbles’ curated by the information filtering techniques of recommender systems, or that they suffer from being trapped in the ‘echo chambers’ of social networks, where content posted by friends is curated by the platform’s Newsfeed algorithms and its constant manufacturing of consent (Pariser; Sunstein, Republic.Com).

YouTube’s recommender system, in particular, has become the target of considerable public criticism not just from specialists in the field of information systems design, but from popular journalism and platform users at large (Bou-Franch and Garcés-Conejos Blitvich; Pihlaja; Lewis; Kaiser). In early 2018, for example, The Guardian published an investigative piece that enumerated a number of problematic techniques in YouTube’s recommender. It depicted the recommender as being tilted to reproduce problematic content in a misguided and irresponsible prioritisation of ‘popular’ content, inciting channels to sensationalise their content and exaggerate their input on political issues to obtain more views and higher chances of appearing on the ‘related videos’ bar or the ‘trending’ sections of the platform’s home page (Lewis; Albright). These critiques have often pointed to the platform’s important role as a meeting point for new radical political factions whose communication amongst each other and against opponent groups has made for starkly polarised successions of recommendations (Ricke; Nicas; Kaiser).


One of the problems with polarisation on the YouTube recommender system, however, is that it is partly a collateral effect and computational rendering of an immensely intricate and tightly knit apparatus the platform needs in order to continue thriving online. The basic architecture of recommenders is designed to facilitate a user’s discovery of new information by making inferences from a user’s personal browsing data and recommending items based on what other, similar users have consulted before (Aggarwal 3). Associated with collaborative filtering systems, the tendency to partition recommendations based on different clusters of ‘similar’ users is what has most often been indicated as contributing to polarising content on YouTube (Badami 33).

Faced with the problematic aspects of these information systems, scholars have formulated another set of notions attuned to the informational formalisations of conflict in recommender systems. Information diversity, exposure diversity and ‘SmartParticipation’ attest to emerging efforts to engage relevant institutions in formulating models of social concord akin to the informational dynamics of algorithmic architectures (Bozdag and Poel; Helberger, Karppinen, et al.; Adomavicius and YoungOk Kwon; Terán Tamayo). The recommender system itself has been used to pitch political and educational causes, for instance by actively recommending informational and social diversity to different social strata in order to attend to various political objectives (Hajian et al.). These experimentations may easily be welcomed as alternative recommender system designs that public governmental actors can tap into in order to actualise political solutions to the above-mentioned techno-political problems. The recommender may well suggest information based on different models of sociality, on approaches to associating information with users based on different conceptions of who a user can be, and on what (else) a technique such as ‘recommendation’ can be used for (Johansson).

Yet, such explorations often fall short of what appears to be an essentially epistemological problem dividing technical and political actors. On the one hand, computer scientists do not have access to the knowledge and perceptions of actors specialised in the social and political issues their recommenders replicate. Additionally, they often find that they cannot place normative judgment in systems that must aim for neutral consensus, objectivity and functionality (P. E. Agre). On the other side of the aisle, a lack of interest in thinking about algorithmic techniques as media for political expression and actualisation often meets an implicit assumption, among practitioners of fields relevant to those political issues, that examining the technicalities of information technologies is an expertise far too ‘technical’ for professions largely based on social and political thought (Manor). In this setting, the implementation of algorithmic techniques is too often synonymous with losing decision-making, versatility and the appreciation of all sorts of circumstantial nuances that algorithmic design and programming languages may not always be able to capture and express (Bogen).

Both of these postures ignore a number of foundational similarities that computation and political deliberation share in their intellectual history and epistemological premises. Like computation, political thought also consists in encoding and amending values within a given system, or within the boundaries of normative or practical viability. It seeks to attend to the functionality of values and deliberates over systems, structures and procedures in terms that are none other than design, for design formulates functionality. Accordingly, the idea that information technologies organise and conjugate elements of meaning, such as information and the bits and pieces of the ideas, knowledge and world-views it comprises, does not exempt platforms from sharing a normative responsibility in managing processes central to democratic societies, namely the production and management of culture, ideas and sociality.

The tight link that binds technical and political action gives reason to contemplate a form of political deliberation attuned to software as a contemporary instrument of power. In a world partly ‘eaten up’ by it (Andreessen), this is an idea that calls for an epistemological bridging of technical and political thought on the level of software design (Rieder). It gives reason to invite public governance to embrace technique as a medium through which to intervene, express and actualise policy objectives, in an effort to formulate resolutions to techno-political issues in a synthesis of technical and political thought. When it comes to polarisation in recommender systems, for example, it would ask that political actors be able to deliberate in terms that speak to the reality of information and information filtering systems (Helberger, Kleinen-von Königslöw, et al. 10).

Numerous efforts in academia have contributed to an ongoing experiment to marry various aspects of algorithmic design with political reasoning. Scandinavian design, values in design and society-oriented design have each left behind them an extensive collection of methods, concepts and initiatives that industrial actors and organisations close to policy making have explored to some extent (to think of CLAIRE in Europe and AI Now in the US) (Knobel and Bowker; Rieder; Lebow). Being placed in a position Agre once located at ‘the borderlands between social practices and computation’ (P. E. Agre) would imply finding the conceptual tools to bridge their conceptualisation of the practices they partake in. Why and how would a recommender system like that of YouTube output results that constitute a polarised set of users and information? How does it capture and operationalise elements that constitute polarisation, such as social discord, concord, difference and similarity? How does the computational formalisation of an activity such as ‘recommendation’ relate to a phenomenon like conflict? In virtue of presenting solutions to the problem of polarisation, one could then ask how political actors possessing expertise in conflict resolution may apply their own techniques or solutions to those of YouTube’s recommender. How could a professional specialised in conflict resolution apply his or her own techniques in the form of informational techniques on a site of conflict such as YouTube? What and where is the meeting point between a computational and, say, ‘diplomatic’ understanding of conflict and conflict resolution?

The possibility of applying mediation techniques to a recommender system such as YouTube’s is an open one, in the sense that information technology design still has space for myriad further conceptions of the objects it captures, whether that be language, action, interest, deliberation or other dynamics it formalises. The elements that partake in dynamics of polarisation in YouTube’s recommender system are an example: are concepts such as ‘similarity’, ‘interest’, ‘preference’, and ‘engagement’ sufficiently exhaustive, sensible or accurate descriptions of the broader reality they are part of? Do the designers who operationalise them neglect to capture other elements that may enrich the particular reality they are attempting to recreate in an information system? What other ideas can they welcome into their blueprints? These questions are not intended to portray information systems as having to be complete or ideal. They are instead meant to underline, as scholars of values in design do, that deliberation is constitutive of design (Niiniluoto; Manders-Huits; Hoven and Weckert; Friedman and Kahn); that design is malleable to the contingent nature of the elements it captures and to how one conceives of them.

Integrating solutions to polarisation in an information filtering system such as YouTube’s recommender is indeed a possibility this thesis intends to explore. This thesis investigates the practical and conceptual possibilities of implementing a set of techniques provided by a body of theory specialised in conflict resolution — mediation theory — as an alternative to a set of techniques criticised for contributing to polarisation in YouTube’s recommender system. Consequently, in its first chapter, the first of the questions this exploration will have to attend to is primarily of a methodological nature. It seeks to explore how one can envision professionals in mediation applying conflict mediation techniques to an information filtering system like YouTube’s. This exploration can in turn branch into multiple smaller questions. First, how does one integrate normative thinking into information systems and their respective fields of study, such as systems design, AI, or computation (all of which have generally been avoidant of any type of contemplation of normative values)? And second, how can a political actor such as a professional in conflict mediation negotiate with platforms for an alternative design of their information systems? In light of the former question, I focus on how a long tradition of works around critical technical practice (Agre), values in design (Knobel and Bowker; Niiniluoto; Manders-Huits) and society-oriented design (Helberger, Karppinen, et al.; Rieder) has proposed that notions originating from ‘application domains’ outside of computer science be brought into existing information systems and proposed to the actors responsible for designing them. This will imply that I then underline the conceptual similarities between technical and political thought, notably by revisiting works on cybernetics and information ethics. In doing so, I argue that the notion of ‘norms’ may be superfluous in the face of information systems design, in the sense that norms may otherwise be conceived as information flows and as different operationalisations thereof. In light of my second question, I go on to examine a few brief measures elaborated by Helberger, Pierson et al. on how to join the diverse actors entangled by the ‘multisided markets’ of platforms (Rieder and Sire) so that they collectively negotiate in the context of a ‘cooperative responsibility’.

In a second chapter, I then explore previous literature that has scrutinised the recommender system and detected techniques deemed to contribute to polarisation. To this end, I begin by describing what recommender systems are and how YouTube’s own recommender is designed. I then focus on key critiques addressed against the recommender system, and particularly against the collaborative filtering system. These include notions such as the ‘filtering effect’ (Sunstein), the filter bubble (Pariser), and polarisation (Badami). Finally, I contextualise these critiques within a broad (but brief) history of the collaborative filtering system and YouTube’s own recommender. While ‘legacy’ prototypes of the collaborative filtering system (Karlgren; Resnick et al.) help me understand the fundamental rationale or ‘raison d’être’ behind the recommender, recent publications by YouTube on its recommender system (Davidson et al.; Covington et al.) help me understand not just how YouTube’s recommender system is designed, but also what objectives it sought and still seeks to accomplish.


In chapter three, I examine how a number of mediation techniques may be applied to the YouTube recommender system. I first outline how mediation theory and computation have already been combined in the past, as well as what specific solutions to polarisation have already been applied to recommender systems. Such applications have already been attempted, in practice, in recent experiments in computer science. These include primarily efforts to open collaborative filtering systems to diversity, be it exposure diversity (Helberger et al.), aggregate recommendation diversity (Adomavicius and YoungOk Kwon), information diversity (Helberger et al.), the encouragement of diverse political viewpoints through recommender systems (Munson et al.) or content diversity (Möller et al.). While these studies demonstrate how certain resolutions or values may be applied to existing recommenders, they may not share the same expertise as professionals in mediation and conflict resolution. Additionally, the question remains as to what models of diversity users can choose from and, in that, what diversity means in light of other criteria used to differentiate or associate users and information. I thus turn to listing a number of applicable mediation techniques (contextualisation and bidirectional recommendations of information) that provide ideas as to how these criteria might be varied. I then describe how they could be formalised into a recommender system.


Chapter 1


Searching for deliberation in information systems design

1. Introduction

Whether they take root in Platonic ideas of cybernetics or in the more fundamental attempt to place values within an elaborate concatenation of causes and effects, normative thinking and technical design appear to have seldom been apart in the history of philosophy and political thought. Particularly over the last few decades, there has been a concentrated effort from a plethora of interdisciplinary studies to actively tackle the association of values and design in regard to information technologies. Efforts to associate them include studies in values in design and value-sensitive design (Friedman et al.), critical technical practice (P. Agre, Computation and Human Experience), Scandinavian design (Segerstad), critical design (the Danish-Swedish UTOPIA project in Sengers et al. 50), critical computing (Bertelsen), reflective design (Sengers et al.) and society-oriented design (Rieder). These efforts have tried to answer questions ranging from how information technologies operationalise values and ideas in computational techniques to how one could actively integrate and operationalise values in the running process of these techniques.

2. The operationalisation of values in technical design

2.1. Information filtering and normative thinking

Inquiring how to integrate ‘normative’ ideas external to information systems design must imply that one first looks into how these two knowledge and practice domains can share the same epistemological premises. Until recently, representatives of GAFA platforms would often excuse themselves from a conversation about the normative dimensions of their products under the premise that applying values to their products would directly touch upon a perverse intention to engineer users into making and executing choices that are ultimately not theirs. These platforms will not welcome ‘values’ as long as those are not decided by users themselves in the course of their experience with the platform’s products. And so, to the problem of polarisation and ‘filter bubbles’, numerous platforms have usually responded with studies showing correlations between a user’s own choices, preferences and actions and polarised or biased results (Bakshy, Messing, and L. Adamic).

YouTube’s response to a negative article the Guardian released earlier this year makes for a good example (Lewis). The article quoted an ex-YouTube employee, who claimed the recommender was skewed to ‘make you spend more time online’ and was ‘not [optimised] for what is truthful, or balanced, or healthy for democracy.’ He explained that ‘watch time’, an important indicator used to judge which items to recommend to users, ‘was the priority’, leading to ‘distortions that might result from a simplistic focus on showing people videos they found irresistible.’ (Lewis). Such criticism of the platform’s focus on functionality, set by watch time maximisation, led YouTube to state that normative effects only occurred where there are normative intentions: the ‘search and recommendation systems’, they replied, ‘reflect what people search for, the number of videos available, and the videos people choose to watch on YouTube.’ The effect of their viewership, they stated, ‘is not a bias towards any particular [recommendation]; it is a reflection of viewer interest.’ (Lewis).

Their statement appears to have been correct, to the extent that the categories chosen to curate content for users drew from relatively neutral indicators of global consensus, such as ‘relevance’ and ‘popularity’. But the effects caused by the choice of such metrics — which the platform does not, and in some cases cannot, anticipate or take responsibility for — do remain problematic to the vast extent to which they, being part of the over-arching platform, draw from and touch upon information about myriad aspects of the reality of their application domains. This reality touches upon various processes crucial to democratic societies, of which communication and dialogue make for a significant part. Information filtering systems may not be intentionally normative in their constitution, but they do operationalise meaningful content filled with normative significance. Referring to ranking mechanisms such as Google’s PageRank, for example, Rieder notes how ‘any system of ranking will favour certain sites over others; the question is which ones.’ (Rieder 9) This goes to say, as Winkler did, that ‘there is no anti-hierarchic medium’; it will always frame, or enframe, as Heidegger would say (in the sense of ‘conditioning’), certain aspects of a partial reality (Heidegger 19; Rieder 4).


2.2. Philip Agre, intellectual history and Critical Technical Practice

To this end, multiple efforts in values in design, society-oriented design and critical technical practice have begun to challenge the very notion that technical design and philosophical (or political) deliberation are fundamentally distinct. Following Simondon (1958), Agre’s contribution to this debate was perhaps fundamental, in the sense that he proposed a perspective of technologies as not just resembling or touching upon some aspects of philosophy and its many epistemological junctions, but as being subject to scrutiny in the same vein as an intellectual notion. Agre’s principal work, Computation and Human Experience (1997), for example, makes for a painstaking attempt to invite the discipline of artificial intelligence to overcome some of its technical impasses by developing a perspective of the field as a historically and intellectually contingent field of knowledge.

Agre himself was mostly concerned with formulating a way of thinking about technique in terms beyond functional imperatives, leading him to want to step beyond the culture, language and metaphors of self-sufficient computational logic. The ‘bound up’ aspect of ideas in computation, in particular, often comes across as barring the possibility to decompose and historicise function as a philosophical notion. Function is an opaque and incontrovertible element that, in the synaptic processes that bind its raison d’être as ‘working’, seems not to be able to work otherwise. This conception of function would be accompanied by a work ethic that cannot question what a technical object is functioning for, perhaps in the same sense that ‘a technical method can be perfectly well defined and perform exactly according to specification, while at the same time being as wrongheaded as you like.’ (P. Agre, Computation and Human Experience 13).

To remedy this, Agre proposes to develop a way of thinking about technique that does not fall into the ‘unreflective emphasis on problems’ that philosophers in the likes of Heidegger and Günther Anders warned against so firmly (Anders 1956; Heidegger 1954). ‘Critical technical practice’ would have to develop, as Heidegger would say, ‘a “free relation”’ to technology, so that those functional imperatives stop ‘colonizing our awareness of our lives.’ (P. Agre, Computation and Human Experience 9). This is not to say that a technician would need to be freed from a strict technological determinism devoid of human sensibilities (P. Agre, Computation and Human Experience 9). Instead, critical technical practice proposes at its basis how one can formulate ‘a technical practice for which critical reflection upon the practice is part of the practice itself’ (P. Agre, Computation and Human Experience xii).

One of the methods Agre proposes should support critical technical practice is to study technical objects as products of intellectual lineages. This would imply a study of both technical objects in relation to philosophy and technical objects as exercising (or reproducing) philosophies of their own. Notions in AI, for instance, are rooted in a long genealogy of ideas originating from philosophical debates that have given way to notions in science and engineering that have later become central to AI applications (P. Agre, Computation and Human Experience 23). But they are equally part of ‘the structure of ideas in AI and how these ideas are bound up with the language, the methodology, and the value systems of [their own] field.’ (P. Agre, Computation and Human Experience 4) It follows that, if intellectual history interrogates why and how ideas came to be formulated, it could equally examine the raison d’être of a function, what it is that its designer attempts to make functional, and why. This way, Agre speaks of the ‘internal logic’ of a function’s development, placing function and logic in a similar conceptual plane where both are bound by their own systems, binding their own semantic or practical sense (P. Agre, Computation and Human Experience 3). This ‘internal logic’ would then be the innermost component that binds a larger functioning whole, including ‘the value systems of the field.’ (P. Agre, Computation and Human Experience 4).

As information systems capture various elements of their application domain, they must operationalise and forge these elements within a coherent system. This system may be qualified as ‘normative’ partly in the sense that those elements it captures are forged within an order that works, that altogether makes (functional) sense. It is not (just) that those elements are normative by default; that they are units of ideas or values that a recommender, for example, would then process. Rather, these elements may be bits and pieces of a given situation that they then contribute to give meaning to or make sense of, in that they help place it within a system that specifies their role, proportion and context. A recommender’s capture of ‘user interest’ on YouTube may appear benign. In the context of the entire recommender, however, one will see that it not only has a significant role in enhancing customer satisfaction on the site, but equally in determining the similarities that come to delineate user clusters and the choice of videos to be recommended to the user and his or her similar others. This way, ‘user interest’ becomes an important basis of social cohesion on the site, yet it is (significantly) problematic for its role in contributing to the polarisation of user clusters (Badami 31).

In this sense, technique continues an implicit conversation among users, designers and the outside world. The ‘problem solving’ nature of techniques in artificial intelligence is one feature that answers to the expectation of computer scientists to make sense of information about the sites their techniques are part of. AI techniques offer functional interpretations (or ‘outputs’) of those problems they are designed to solve (Rich, ‘Artificial Intelligence and the Humanities’). They are then used to ‘tell us what knowledge’, or indeed what information or data, ‘would enable a machine to solve those same problems.’ (Rich, ‘Artificial Intelligence and the Humanities’ 117). The resulting systems capture, as ‘representational artefacts’, elements and processes of the phenomena they are intended to compute, since ‘the people who design them often start by constructing representations of the activities found in the sites where they will be used.’ (P. Agre, ‘Towards a Critical Technical Practice: Lessons Learned in Trying to Reform AI’ 131). They do not ‘simply have an instrumental use in a given site of practice; the computer is frequently about that site in its very design’ (P. Agre, ‘Towards a Critical Technical Practice: Lessons Learned in Trying to Reform AI’ 131).

Information systems then build schemes that extend that site of application and add to its production of information (P. Agre, Computation and Human Experience 132). A famous example is David Fincher’s The Social Network and its depiction of Facebook’s early conception (Fincher). In an early scene, Mark Zuckerberg (played by Jesse Eisenberg) explains to Eduardo Saverin the concept behind the early platform by proposing to ‘take the entire social experience of college and [put] it online.’ The platform did indeed capture and operationalise several aspects of a standard college experience, such as being able to display various valuable aspects of one’s personal life in public: one’s love interest, one’s tastes and one’s political convictions. The information students placed on the early platform added to their experience of being students in the sense that it gave practical affordances to things they didn’t have so much power over before (for example, being able to glimpse into someone’s personal life in more detail), thereby adding information to their environment.

Those who are in the position Mark Zuckerberg was in at the time, the ‘residents’ of the borderlands between social practices and computation, thus become translators ‘between languages and world-views: the formalisms of computing and the craft culture of the “application domain.”’ (P. Agre, ‘Towards a Critical Technical Practice: Lessons Learned in Trying to Reform AI’ 132). Indeed, and as one can gather from reading media archaeologies such as Mackenzie’s (Mackenzie) and Rieder’s (forthcoming), the range of elements one would need to take into account to study these processes is considerably larger than computer programs and their technical descriptions (P. Agre, Computation and Human Experience 21–22). Methods from media archaeology opt, for example, for a historical analysis of techniques embedded in several different uses over time, anchored in ongoing episodes of technical ‘concretisation’ (Simondon 21). Mackenzie, for example, set out to explore machine learning ‘as a form of knowledge production and a strategy of power’ by identifying the ‘positivities’ of knowing delimited by machine learning (Mackenzie 9). These ‘positivities’ are articulated by what Mackenzie identifies as a constellation of humans (researchers, developers) and techniques, which altogether contribute, as described in his case study, to the development of a greater technology (his case study being machine learning) (Mackenzie xi). They would constitute ‘specific forms of accumulation of statements grouped in a discursive practice and an operational formation’, thus bringing forth what he calls ‘moments of formalization [...], circulation [...], generalization [...], and stratification [...]’ (Mackenzie 6). This would imply examining objects that secure these developmental stages of a technique, such as ‘code, equations, diagrams, and statements circulated in articles, books and various online formats (blogs, wikis, software repositories)’ (Mackenzie 7). Some of the concepts contained in such materials belong to the technical values of the object; Coles and Norman, for example, identify ‘effectivity, flexibility, precision, confidence and usability’ as ‘technical values of their own’ (Coles and Norman 160). Others are imported into computation from other application domains: ‘value, price and cost’ and ‘proportion and workmanship’ constitute respectively economic and aesthetic values (Coles and Norman 160).

2.3. Systems and deliberation: conceptual parallels in values in design and society-oriented design

In virtue of the difficulty in applying ‘normative’ thinking in technical design, considerable scholarly work has already been done that appears to have laid some conceptual foundations for designing technical objects while taking into account their normative dimensions. Many voices have critiqued industry actors for only posing ethical questions at the very end of their creative processes, hoping perhaps to leave them for law professionals to deal with (Floridi, ‘Technoscience and Ethics Foresight’). Authors such as Flanagan, Howe, Nissenbaum, Knobel, Bowker and Detweiler opt instead to make values ‘a critical component of the design process’ of technical objects (Knobel and Bowker 27).


In part, values in design asks that a given value be moulded and systematised into the particular practice that a technical object binds together (Hoven and Weckert 323). This stems from a distinction Jacob Metcalf marks between ethics and values. As Metcalf puts it, ‘Ethics are a set of prescriptions’, stable ‘nouns’ (Knobel and Bowker 28). Values are instead ‘tied to action’, hence verbs. Such values should be an integral part of the conception, the functioning and the results of a technical object (Knobel and Bowker 28). It asks that values be ‘conjugated’ by the technical object, going on par with Rieder’s own definition of ‘society-oriented design’. Instead of ‘a […] form of normative speculation’, society-oriented design attempts ‘to bridge the gulf between the contextualized practice of technical creation and the considerations of social benefit that go beyond efficiency, control and material prosperity.’ (Rieder 9)

2.4. Alternative design practices by industrial actors

Industrial actors have equally followed along these lines. A recent case was Facebook’s new ‘time well spent’ metric for the Facebook and Instagram Newsfeeds, conceived after a barrage of criticism against the platform’s accentuation of attention-maximising metrics as possible stimulators of sensationalistic misinformation (Newton). The initiative was portrayed as a complete change in the company’s policy, with Zuckerberg stating that he was ‘changing the goal I give our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions.’ (Zuckerberg) Instead of relevance, the goals of the metric became maximising ‘meaningful interactions’ between users by, for example, having them ‘see more from [...] friends, family and groups.’ The Newsfeeds would, for example, show a greater number of ‘posts that inspire back-and-forth discussion in the comments’ at the top, while ‘public content’, including ‘videos and other posts from publishers or businesses’, would be pushed further down the page (Zuckerberg; Newton). They would also attempt to make users less passive (not just ‘scroll through stuff’) by simply reducing engaging media, such as videos.¹

On Instagram, ‘time well spent’ would be particularly influential on interface notifications (Constine). If users spend too much time on the platform, a notification will warn them ‘they’re all caught up’ and that they’ve ‘seen all new posts from the past 2 days.’ (Constine). Past this notice, users would only be allowed to access ‘posts that iOS and Android users have already seen’ or that ‘were posted more than 48 hours ago.’ The rationale goes that while the feed is intended to show the most engaging posts, ‘it also can make people worry they’ve missed something’, justifying the presence of a warning to best ‘give them peace of mind.’ (Constine)

¹ It may be interesting to note that this measure has been almost actively counterproductive, in that it became necessary to prevent additional undesired effects by bringing the efficiency (and thus profit) of the whole machinery of the platform down.

The ‘time well spent’ concept was originally conceived by members of a non-profit initiated by Tristan Harris, an ex-Google employee whose role as ‘product philosopher’ was to advise the company on how to best design its products in line with values other than engagement metrics. Joe Edelman, one of the founding members of the non-profit, sought to deepen Zuckerberg’s resolution by pointing to the need for designers to value the context in which their products are applied. Those values cannot be decided beforehand but must give space to whatever actions the user deems worthy of engaging in within a product’s interface, letting ‘users live out their values through software.’ (Constine) ‘Most platforms’, he goes on to explain, ‘encourage us to act against our values: less humbly, less honestly, less thoughtfully, and so on.’ (Edelman, ‘How to Design Social Systems (Without Causing Depression and War)’). Designers can then ‘address this by understanding what’s difficult about relating according to different values’, in the sense that they must already consider how an interface will delimit prospective actions. They must then recognise ‘what features of social spaces can make [these actions] easier’ (emphasis his). They can consider whether, for each value a user has, there are ‘features of social spaces which make practicing it easier’ (Edelman, ‘Can Software Be Good for Us?’).

3. Operationalisation as a form of deliberation

3.1. The legacy of political thought and technical design

What is still missing from academic and industry initiatives to combine elements of normative thinking with information systems design is perhaps a specification of how different political actors and their expertise can embrace information systems design as a medium of deliberation and of the actualisation of policy. Existing solutions to polarisation provided by computer scientists may lack contributions from professionals possessing exhaustive information about the causes and dynamics proper to conflicts. These professionals may possess ‘techniques’ of conflict resolution, in that they may have formulated ways of processing and operationalising those causes and dynamics. The question, for now, is not yet to apply these techniques, but at least to envision how and why information system design can serve as a medium to actualise them.

There is reason to believe that these recent efforts attempt to do what an older science — political philosophy — has already been doing since its epistemological inception. Juxtaposing political and algorithmic thought on the level of design may be redundant, since political thought already consists in deliberating over the functionality and viability of values in the form of various (societal) systems. The difficulty in having political governance embrace technical media may be a problem of literacy and accessibility. Information systems are especially challenging in that, despite their ubiquity, highly adaptable character and significant role in society, they remain a field of study badly understood by those who employ them indirectly. Their developers may have sway in deciding where to take the development of information filtering systems and how, but those unfamiliar with their constitution will be unable to have an informed opinion on how they should be used and why.

Not examining the structure and internal mechanisms of problematic information systems would dismiss the opportunity to approach online architectures as systems worthy of scrutiny, perhaps in the same sense that broader, public structures (to think of public establishments, institutions) are the object of daily political critique. There does appear to be something profoundly conservative about technical design, carried as it often is by the conviction that the purpose of running systems cannot be conceived otherwise than in virtue of their functioning state (Agre). Still, as ‘models’, information filtering systems are contingent interpretations of various aspects of reality, bound by functional imperatives that invite an exercise which speaks to the very nature of political philosophy (at least of the modern liberal strand). This exercise implies untying tightly knit ‘functions’ or ways of working (if not ‘traditions’) to give way to a different system in the effort to accommodate a set of convictions or causes.

To this end, and in virtue of creating or modifying a given information system, ‘the freedom of a programmer’, says Rieder, can consist in ‘his or her capacity to formalise his or her thought […] in terms of software capable of processing such data and making it accessible in the form of a usable interface.’ (Rieder and Thévenet, Sphère Publique et Espaces Procéduraux 7) It is this particular process that establishes programming as a key power and deliberative activity. It would consist in systematising codified logic into functions, whether these be small scripts, standalone software, or policy that places itself, as Agre would put it, in the place of ‘residents’ of the borderlands between social practices and computation (P. Agre, ‘Towards a Critical Technical Practice: Lessons Learned in Trying to Reform AI’ 131). Hence Rieder’s valorisation of the ‘freedom of a programmer’: it is a freedom to touch upon the power that resides in how information systems operationalise information, activities and various processes by formulating and negotiating for yet other systems and users, procedures, mechanisms, information, and so on. This requires not just that public actors be able to read, at least on some level, programming language (Rieder and Thévenet, Sphère Publique et Espaces Procéduraux 6). It may ask that a ‘critique be able to scrutinise all forms of code’ – that is, all forms of rule and governance – ‘be it legislative or informational, as a whole.’ (Rieder and Thévenet, Sphère Publique et Espaces Procéduraux 8)

3.2. Cybernetics: modularity and information flows

In actuality, there have already been entire traditions of political, economic and eventually informational thought dedicated to translating different ‘codes’ of their own. Besides economics, cybernetics has later become, notably through Floridi, an important ‘intermediary resident’ between the formalisms of computer science and social practices. The question of how to organise social relations to their optimal and most balanced state was always close to cybernetics. After Cybernetics: Or Control and Communication in the Animal and the Machine (1948), Wiener’s Human Use of Human Beings (1950) was especially receptive to elaborating a type of deliberation attuned to information systems. Cybernetics itself is a theory about capturing and gathering information about the external world and then deriving ‘logical conclusions from that information’, ‘decid[ing] what to do’, and ‘carry[ing] that decision’ in the form of output (Hoven and Weckert 17). This would benefit from the malleability of information systems design. Moor’s view, for example, was that ‘computer technology is “logically malleable” in the sense that hardware can be [modified] and software can be adjusted, synthetically and semantically, to create devices that will carry out almost any task.’ (Hoven and Weckert 20). While running systems process information, ‘the processing of information becomes a crucial ingredient in performing and understanding the activities themselves.’ (Hoven and Weckert 20) Such ‘informational enrichment’ can then ‘change the meaning of old terms’, creating so-called ‘conceptual muddles’ that ‘have to be classified before new policies can be formulated.’ (Hoven and Weckert 20)


3.3. Floridi’s information ethics

Extending the tradition of cybernetics is Floridi’s information ethics, a branch of philosophy popular for its easily applicable notion of information and discernment of ethical situations partly as information managing problems. Ethics, for example, are defined as ‘informational’, in the sense that every ethical situation asks for the regulation or use of information. ‘A well-informed agent is more likely to do the right thing’, just as ‘evil and morally wrong behaviour’ can be said to result from a mismanagement or lack of information (Floridi, ‘Information Ethics: Its Nature and Scope’ 41). The same goes for moral responsibility: it tends to be ‘directly proportional to [one]’s degree of information: any decrease in the latter corresponds to a decrease in the former.’ (Floridi, ‘Information Ethics: Its Nature and Scope’ 41). Being able to act and deliberate by accessing, selecting and combining information implies that one lives in an ‘informational environment.’ (Floridi, ‘Information Ethics: Its Nature and Scope’ 57). In turn, those that design information systems thus have what Floridi calls ‘ecopoietic’ responsibilities; these responsibilities are addressed ‘not just to the “users” of the world but also to producers [of information] who are “divinely” responsible for its creation and well-being.’ ‘Ecopoiesis’ thus refers to the ‘morally informed construction of the environment’ (Floridi, ‘Information Ethics: Its Nature and Scope’ 58).

This is indeed reflective of several practices carried out amongst governmental and other political actors. Conflict resolution, for example, often calls for information to be provided to negotiating actors, so that they understand each other’s context or backgrounds. Likewise, conflict-based situations such as misunderstanding, neglect or disinterest reflect situations whereby actors (sometimes actively) lack information about one another. Mediation between political actors is often done in the hope that actors exchange the information necessary for each other to reach a common understanding – in a sense, information they both agree on, undoing the structural conditions that kept animosity between them. In a Spinozian vein, one’s respect for one another would equally follow from a respect for their life and what Floridi calls ‘entropy’ – the informational right to subsist – in the sense that ‘all aspects and instances of being are worth some initial, perhaps minimal and overridable, form of moral respect.’ (Floridi, ‘Information Ethics: Its Nature and Scope’ 57)


Floridi’s ‘ecological’ perspective on information flows points to the broader, structural dynamics that situate a problem such as polarisation in the YouTube recommender system, notably at a time when platforms play the role of multisided markets (Rieder and Sire). Platform mechanisms are inserted in a broader network of implications that touch upon the harder domain of the law and of political, social and economic processes. The entangled character of multisided markets implies at a minimum that platforms are not just interdependent with interests from other societal actors but also constantly negotiated amongst these.

While public governmental actors can negotiate with tech actors for more knowledge of their systems, tech actors would (as they do) lobby for public governmental actors to relax conditions so that their market interests are accommodated. In this setting, public governmental actors are often mocked for the lack of knowledge they display about the products platforms maintain, losing political authority in a field where platforms have significantly more leverage: governing information systems. Public governmental actors could thus take the liberty to, in fact, enter this field by making technical expertise a condition of their own governance. Ultimately, negotiation between these actors implies levelling out mutual interests, but equally mutual knowledge.

This may touch directly upon how the systematic organisation of information through algorithms constitutes the means through which political solutions may be applied. Facebook and Google have already been attempting to tackle problems such as filter bubbles and fake news from a technical standpoint – yet they may very much benefit from the perspective of those specialised in interpreting and resolving the nature of such issues, such as conflict and misinformation. Professionals in conflict mediation are in a unique place not just to offer their expertise, but also to formulate their own political philosophy of computational foreign policy by moulding values, strategies and processes proper to their field into information and information-organising systems.

Such initiatives may come across as naïve for expecting too much of companies driven by private gains and peculiar platform business models. But it is this very problem that may push public governance to balance the public responsibility of tech actors and their patented systems (both widely originating from the U.S.), precisely by inviting the actors that design them to share a continuous, collaborative responsibility with their foreign, public counterparts. This collaborative responsibility would be facilitated by mediators attuned to the technical and political dimensions of the issues that such systems reproduce.


Chapter 2


Conflict and polarisation in YouTube’s recommender systems

1. Introduction

This chapter will specify how the recommender system, in particular YouTube’s, has been subject to criticism for its alleged contribution to political polarisation, filter bubbles and other politically problematic effects. I begin by giving a short outline of what recommenders are and how YouTube’s own recommender is designed, and then focus on at least three key concepts that have described the recommender in these terms: the filtering effect, attributed to Cass Sunstein (Sunstein, Republic.Com); the filter bubble, by Eli Pariser (Pariser); and polarisation, a notion echoed by research from several authors (Bessi et al.; Dandekar et al.; Badami). I then elaborate on how certain techniques belonging to the YouTube recommender may contribute to the phenomena these notions describe. Of these, the collaborative filtering system has attracted much criticism and will be sketched and examined in detail. This chapter is thus an opportunity for a small archaeology of the YouTube recommender system, as well as an occasion to unearth some of its basic rationales and the choices YouTube has made to realise them in algorithmic form.

2. Basic recommender system architectures

2.1. What is a recommender system?

Since at least 1994, recommenders have multiplied and been applied to an enormous variety of situations, ranging from the recommendation of books and documents to films, music, news, research articles, search queries, social tags, products, experts, collaborators, restaurants, financial services, answers to questions, candidates to a presidential election, insurance policies and romantic partners (Ricci et al. 10–14). Nearly forty years of history have backed the (at times unlikely) development of recommender systems as one of the most essential algorithms for the success and viewership of platforms such as Amazon and YouTube (Aggarwal 6). Amazon, in particular, once reported that ‘35% of its sales came from its recommendation systems’, while Netflix, in 2012, reported that ‘75% of what its users watched came from recommendations.’ (Nguyen et al. 677). ‘Company insiders’ at YouTube reportedly said that their recommender ‘is the single most important engine of YouTube’s growth.’ (Lewis). These facts alone invite one to think of recommenders as an essential actor in the history of present platforms and large parts of the web, particularly in the design of user navigation and of the relationship between users and content.

Early conceptions of the recommender proved to be quite wide-ranging intellectual explorations intended to provide practical solutions to information overload and aesthetic inconveniences, such as interface cluttering and World Wide Web searches (Ricci et al. 2–3). Upon first impression, the history of the recommender is profoundly tied to questions not just about user taste and preferences but about the formalisation and management of human perception and its formation of knowledge and ideas. As was the case in Elaine Rich’s User Modeling via Stereotypes (1979) and Jussi Karlgren’s An Algebra for Recommendations (1994), ‘document recommenders’ were computational blueprints of a basic but fundamental human experience: that of the discovery, browsing and consecution of information, as one has, classically, when exploring vast library shelves or organising a simpler personal bookshelf (Karlgren 2; Rich, User Modeling via Stereotypes 329). In retrospect, then, the recommender was bound to gain a political dimension in a very basic sense of the term: it was a formalisation of a process by which one can discover, accumulate, associate, deliberate and create ideas and other notions of reality in relation to other users and items.

Recommenders have, since the times of Rich and Karlgren, evolved into a subclass of information filtering systems. Most of these systems (with the exception of the Amazon recommender) function by answering one key problem: if user A consults item B, then which other items will s/he want to consult next? This problem proceeds from the assumption that ‘significant dependencies exist between user- and item-centric activity’ which can be used as ‘good indicators of future [user] choices’ (Aggarwal 1–2). Solving this problem is thus a question of prediction; it requires guessing which ‘rating’ or ‘preference’ a user will give to an item. In turn, generating predictions is done mainly via two methods that are occasionally combined into ‘hybrid’ recommender systems such as YouTube’s: content-based recommendations and collaborative filtering systems.

2.2. Content-based recommenders

The first and oldest of these methods, content-based filtering, collects item descriptors (usually in the form of tags or metadata fields) to find and recommend other items with similar properties (Aggarwal 1–2). It creates user profiles by collecting information about a user’s browsing history and, with it, recommends items that best match those the user has consulted, taking into account users with similar browsing histories (Badami 13).

Elaine Rich’s 1979 model, in particular, sheds much light on early personalisation of content browsing. The purpose behind her model was to address the need to have pre-designed models of clients, customers or other user types for computers to attend to (Rich 329). The example Rich uses is modelling library customers for librarians to best recommend books. To this end, her recommender would build a ‘user model’ with user profiles consisting of associations between each user and certain categories, such as ‘interests’, ‘politics’, or whether or not users would tolerate books with violence (Rieder, forthcoming). A user profile can, in this case, ‘indicate the user’s preferences in terms of […] keywords or attributes of their preferred items. These in turn can help the algorithm formulate a relevant query in order to find popular items using similar keywords.’ (Badami 13). This is all the more suggestive of a process that, given the predictive nature of the recommender, requires the quantification of the user into identification categories that can be computed in constant form (e.g., ‘the “liberal” user will almost always enjoy books such as The Motorcycle Diaries or Juliette’). This profile could then be used to ‘find similar users’, or ‘stereotypes’ in order to ‘recommend their interests to [a given] user.’ (Badami 13).
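To make this rationale concrete, the following minimal sketch shows, in Python, how a content-based recommender can rank candidate items against a profile aggregated from the descriptors of items a user has already consulted. The toy catalogue, its descriptors and the function names are invented for illustration only; they are not drawn from Rich’s model or from any platform discussed here.

# A minimal, hypothetical sketch of content-based filtering: items carry
# keyword 'descriptors' (tags or metadata fields), a user profile is built
# from the descriptors of items the user has already consulted, and candidate
# items are ranked by how well their descriptors match that profile.
from collections import Counter

# Toy catalogue: item id -> descriptors (invented for illustration).
ITEMS = {
    "motorcycle_diaries": {"politics", "travel", "biography"},
    "juliette":           {"politics", "fiction"},
    "hannah_arendt_doc":  {"politics", "history", "biography"},
    "terminator":         {"science-fiction", "action"},
    "alien":              {"science-fiction", "horror"},
}

def build_user_profile(consulted_item_ids):
    """Aggregate the descriptors of consulted items into a weighted profile."""
    profile = Counter()
    for item_id in consulted_item_ids:
        profile.update(ITEMS[item_id])
    return profile

def score_item(profile, item_id):
    """Score a candidate item by the overlap between its descriptors and the profile."""
    return sum(profile[d] for d in ITEMS[item_id])

def recommend(consulted_item_ids, top_n=2):
    """Rank the items the user has not yet consulted by their profile score."""
    profile = build_user_profile(consulted_item_ids)
    candidates = [i for i in ITEMS if i not in consulted_item_ids]
    return sorted(candidates, key=lambda i: score_item(profile, i), reverse=True)[:top_n]

# A user who consulted two 'political' items is steered towards more of the same.
print(recommend(["motorcycle_diaries", "juliette"]))

In such a sketch, a user who has consulted two ‘political’ items keeps being offered further ‘political’ items, a small-scale illustration of the narrowing tendency that the critiques discussed later in this chapter take issue with.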

2.3. Collaborative filtering systems

The second and most popular type of recommender system, collaborative filtering, builds models from users' past behaviour that are based on similar decisions made by other users (Badami 13). The primary goal of the collaborative filtering system is to predict the items a user would like to be recommended by drawing from the tastes and preferences of users with a similar background (having liked similar items in the past, for example) (Aggarwal 8; Badami 14). It is considered 'collaborative', then, in the sense that missing information about a user's interests ('ratings') need not always be actively provided by the user, but can be predicted from multiple users in a collaborative fashion (Aggarwal 2). 'Similarity' between users is defined by the information they consult: the more similar the information they each consult, the more they will be considered 'similar'. Prediction matrices used to calculate the probability of a user's inclination for a given item are fed by 'ratings', which users can give implicitly by expressing a positive relationship towards the items they consult with, for example, clicks (as counted by the Google News recommender), watch time (as counted for videos in the YouTube recommender) or acquisitions (as counted for recommended products on Amazon) (Aggarwal 9).
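How such implicit 'ratings' might be derived can be sketched very simply; the scaling below is an arbitrary assumption rather than the actual metric used by any of these platforms.

```python
# A hedged illustration of turning an implicit signal (watch time) into a 'rating'.
def implicit_rating(watch_seconds: float, video_length_seconds: float) -> float:
    """Map the fraction of a video actually watched onto a 0-1 'rating'."""
    if video_length_seconds <= 0:
        return 0.0
    fraction = min(watch_seconds / video_length_seconds, 1.0)
    return round(fraction, 2)

# Three hypothetical watch events for the same 600-second video.
for watched in (30, 300, 600):
    print(implicit_rating(watched, 600))   # 0.05, 0.5, 1.0
```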

There exist two methods to predict a user’s ratings for a possible recommended item: memory-based methods and model-based methods (Aggarwal 9). Also known as neighbourhood-based collaborative filtering algorithms, memory-based methods predict item ratings on the basis of a user’s neighbourhood. These neighbourhoods are defined in two ways. They are either user-based or item-based. User-based neighbourhoods are intended to determine which users are similar to a target user (Aggarwal 9). If, for example, ‘Alice and Bob have rated movies in a similar way in the past, then one can use Alice’s observed ratings on the movie Terminator to predict Bob’s unobserved ratings on this movie.’ (Aggarwal 9) Item-based neighbourhoods, on the other hand, are used to make predictions of the ratings for an item by looking into similar items, instead of similar users. One’s ratings on, say, science fiction movies like Alien and Predator can be used to predict that same user’s rating on a similar movie like Terminator (Aggarwal 9).
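A user-based, memory-based prediction of this kind can be sketched as follows. The ratings are invented, and the similarity measure (a plain cosine over co-rated items, without the mean-centering common in production systems) is kept deliberately simple: Bob's unknown rating for Terminator is estimated as a similarity-weighted average of the ratings given by users who did rate it.

```python
import numpy as np

# Hypothetical ratings by four users for four movies (np.nan = unrated).
movies = ["Alien", "Predator", "Terminator", "Amelie"]
R = np.array([
    [5.0, 4.0, 5.0,    1.0],   # Alice
    [4.0, 5.0, np.nan, 1.0],   # Bob: his Terminator rating is unknown
    [1.0, 1.0, 2.0,    5.0],   # Carol
    [2.0, 1.0, 1.0,    4.0],   # Dan
])

def similarity(u, v):
    """Cosine similarity between two users, computed over items both have rated."""
    mask = ~np.isnan(u) & ~np.isnan(v)
    if not mask.any():
        return 0.0
    a, b = u[mask], v[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict(target, item, ratings):
    """Predict the target user's rating for an item as a similarity-weighted
    average of the ratings given by the user's 'neighbours' for that item."""
    sims, vals = [], []
    for other in range(len(ratings)):
        if other != target and not np.isnan(ratings[other, item]):
            sims.append(similarity(ratings[target], ratings[other]))
            vals.append(ratings[other, item])
    sims, vals = np.array(sims), np.array(vals)
    return float(sims @ vals / sims.sum()) if sims.sum() else float("nan")

print(predict(1, movies.index("Terminator"), R))  # Bob's predicted Terminator rating
```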

2.4. Hybrid recommenders: the YouTube ‘related videos’ system

As a hybrid recommender system, YouTube's 'related videos' algorithm has been designed to process the enormous amounts of data present on the platform (approximately 'a billion users' watching 'a billion hours of video' every day), calling for intuitive techniques with minimal need for external intervention ('Press - YouTube'). In a situation where 'a wider variety of inputs is available', one can indeed enjoy 'the flexibility of using different types of recommender systems for the same task.' (Aggarwal 19) The flexible character of hybrid recommenders mirrors the field of ensemble analysis, 'in which the power of multiple types of machine learning algorithms is combined to create a more robust model.' (Aggarwal 20) YouTube's recommender model is indeed in constant mutation and has already, since Google's last publication on the model (Covington et al.), evolved significantly, albeit silently (Lewis and McCormick).
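One of the simplest hybridisation strategies, a weighted combination of the scores produced by different recommenders, can be sketched as follows; the weights, scores and video names are assumptions made for illustration and do not describe how YouTube actually merges its signals.

```python
# A hedged sketch of weighted hybridisation: combine a content-based score and a
# collaborative-filtering score for each candidate video into a single ranking score.
def hybrid_score(content_score: float, collaborative_score: float,
                 w_content: float = 0.3, w_collab: float = 0.7) -> float:
    return w_content * content_score + w_collab * collaborative_score

candidates = {
    "video_x": hybrid_score(content_score=0.9, collaborative_score=0.2),
    "video_y": hybrid_score(content_score=0.4, collaborative_score=0.8),
}
ranking = sorted(candidates, key=candidates.get, reverse=True)
print(ranking)   # video_y first: its collaborative signal outweighs video_x's content match
```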

2.4.1. The YouTube recommender system in 2010

By 2010, research on recommender systems was already an established field. YouTube sought to contribute to it by discussing the 'unique opportunities and challenges for content discovery and recommendation' the platform began to face (Davidson et al. 293). Indeed, 'discover' is a word the authors frequently use, suggestive, perhaps, of YouTube's nascent role as an experimentation ground for information retrieval systems. The 2010 paper claims that one of the explicit goals of YouTube's recommender was to 'provide personalized recommendations that [helped] users find high quality videos relevant to their interests', in part also to '[…] keep users entertained and engaged.' (Davidson et al. 293) To prevent recommendations from becoming too scattered and irrelevant, it was equally important to keep 'recommendations […] reasonably recent and fresh, as well as diverse and relevant to the user's recent actions.' (Davidson et al. 294)

The recommender system loosely consisted of metrics such as relevance, watch time, view counts, likes and dislikes, comments, followers and semantic metadata, combined with a user's personal data and browsing history. The algorithm needed to go through several steps before deciding which 'candidate' videos to recommend and how to rank them. It went through four main steps to capture and process data, namely: 1) data from a given input or 'seed video'; 2) data related to this 'seed video'; 3) a ranking of 'candidate' recommendations based on the seed video; and 4) a feedback loop that continually provided the system with further data it 'learned' along the way.


Fig 1. A sketch of the 2010 recommender, as outlined by Davidson et al. 'Results' are here used purely for illustrative purposes; they have not been outputted by the 2010 recommender, but were instead obtained in February 2017.

In its input phase, the recommender first draws information from a user's personal activity and uses that information as seed data. Other such data include various kinds of data associated with an input video, such as raw video streams, watch activity (the sequence and time stamps of video watches) and video metadata (title, tags, description) (Davidson et al. 294). This metadata is then combined with user activity data the recommender collects from a user's browsing history within one of several saved YouTube sessions.

In order to find related videos, the recommender will go through innumerable videos located in YouTube’s database (and by extension, Google’s) to fine-tune a selection of candidates that best resemble the input data. Candidate generation is done through collaborative filtering. A seed video is expanded by ‘traversing a co-visitation based graph of videos’ (Davidson et al. 294), including videos that are co-watched. This allows the system to capture videos from less explicit areas of one’s interests. It then ‘normalises’ candidate recommendations by assessing how much its candidates conform to their standard viewership or ‘global popularity’, ‘essentially favouring less popular videos over popular ones.’ (Davidson et al. 294).
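The co-visitation logic can be sketched as follows. The sessions are invented, and the normalisation (dividing the co-visitation count by the product of each video's overall visit count) is only one possible way of discounting 'global popularity'.

```python
from collections import Counter
from itertools import combinations

# Hypothetical watch sessions: which videos were co-watched within a session.
sessions = [
    ["v1", "v2", "v3"],
    ["v1", "v2"],
    ["v2", "v4"],
    ["v1", "v3"],
]

# Count co-visitations (how often two videos appear in the same session)
# and each video's global visit count.
co_counts, visits = Counter(), Counter()
for session in sessions:
    visits.update(set(session))
    for a, b in combinations(sorted(set(session)), 2):
        co_counts[(a, b)] += 1

def relatedness(a, b):
    """Co-visitation count normalised by global popularity, here approximated as the
    product of the two videos' visit counts (one of several possible normalisations)."""
    pair = tuple(sorted((a, b)))
    return co_counts[pair] / (visits[a] * visits[b])

# Candidates related to a seed video, ranked by relatedness.
seed = "v1"
candidates = {v: relatedness(seed, v) for v in visits if v != seed}
print(sorted(candidates.items(), key=lambda kv: kv[1], reverse=True))
```

In this toy example, v3, the less frequently visited of the two videos co-watched with the seed, ends up ranked above the more popular v2, illustrating how the normalisation can favour less popular videos over popular ones.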

Ranking is done with a classification of candidate videos based on video quality, user specificity and diversification. The recommender then tries to rank its chosen candidates in an order 'relevant to the [user's] interest', based on a degree of 'popularity' and 'diversification' (Davidson et al. 295). Video quality refers to the 'signals to judge the likelihood that the video will be appreciated irrespective of the user' (Davidson et al. 295), and is intended to indicate general values such as view count, ratings, favouriting, sharing activity, upload time and the visual quality of the video. User specificity refers to videos closely matched to a user's 'unique taste and preferences'; it is defined by properties of seed videos in a user's watch history, such as view count and time of watch (Davidson et al. 295).
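Treating ranking as a weighted combination of such signals can be sketched as follows; the signal names, normalised values and weights are assumptions made for the sake of illustration, not Davidson et al.'s actual features.

```python
# A hedged sketch of ranking as a linear combination of 'quality' and
# 'user specificity' signals, all assumed to be normalised to [0, 1].
WEIGHTS = {"view_count": 0.2, "rating": 0.3, "freshness": 0.1, "seed_affinity": 0.4}

def rank_score(signals: dict) -> float:
    """Weighted sum of normalised quality and specificity signals."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

candidates = {
    "video_x": {"view_count": 0.9, "rating": 0.6, "freshness": 0.2, "seed_affinity": 0.3},
    "video_y": {"view_count": 0.4, "rating": 0.7, "freshness": 0.8, "seed_affinity": 0.9},
}
for video, signals in sorted(candidates.items(),
                             key=lambda kv: rank_score(kv[1]), reverse=True):
    print(video, round(rank_score(signals), 2))   # video_y ranks above video_x
```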

In its final stage of ranking, the recommender goes through a process of 'diversifying' candidate videos. The authors recognise that recommendations are often too 'narrow', that they 'fail to recommend something new to the user' and that they must keep up with the volatility of a user's interests (Davidson et al. 295). The solution they propose is to broaden recommendations by 'expanding [the candidate set] by taking a limited transitive closure over the related videos graph.' (Davidson et al. 295) 'Transitive closure' entails multiplying the paths of a directed graph of candidate videos. Other techniques used are to 'impose constraints on the number of recommendations that are associated with a single seed video' and limit 'the number of recommendations from the same channel', alongside searching for videos that are diverse in terms of topics and content with 'topic clustering and content analysis.' (Davidson et al. 295)
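The expansion step can be sketched as a bounded walk over a related-videos graph. The graph, channels and caps below are hypothetical; the point is simply that candidates are gathered a few steps away from the seed while per-seed and per-channel constraints keep the set from being dominated by any single source.

```python
# A hedged sketch of a 'limited transitive closure' over a related-videos graph:
# follow related-video edges for a small number of steps, with caps on how many
# candidates a seed and a channel may contribute.
related = {                        # directed related-videos graph (hypothetical)
    "seed": ["a", "b"],
    "a": ["c", "d"],
    "b": ["d", "e"],
    "c": [], "d": [], "e": [],
}
channel_of = {"a": "ch1", "b": "ch2", "c": "ch1", "d": "ch3", "e": "ch2"}

def expand(seed, steps=2, per_seed_cap=4, per_channel_cap=2):
    candidates, frontier, per_channel = [], [seed], {}
    for _ in range(steps):
        next_frontier = []
        for video in frontier:
            for neighbour in related.get(video, []):
                channel = channel_of[neighbour]
                if (neighbour not in candidates
                        and len(candidates) < per_seed_cap
                        and per_channel.get(channel, 0) < per_channel_cap):
                    candidates.append(neighbour)
                    per_channel[channel] = per_channel.get(channel, 0) + 1
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return candidates

print(expand("seed"))   # candidates reached within two steps, subject to the caps
```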

The recommender then generates 4 to 60 candidates after choosing videos optimised for 'a balance between relevance and diversity.' (Davidson et al. 295) It presents the top-ranked of these candidates as related videos. Scores are subject to a minimum threshold, meaning that the recommender cannot find many recommendations from seed videos with too low a view count.

Davidson et al. also dedicate particular attention to the user interface, since it had an important role in assisting users to make prompt decisions as to 'whether they are interested in a video' and in allowing the internal workings of the platform to be more transparent (Davidson et al. 295). Descriptors such as a video's thumbnail, title, age (or 'freshness') and popularity would all indicate how worthy a video is of being watched. Curiously, at the time Davidson et al.'s paper was presented at the RecSys 2010 conference in Barcelona (26–30 September), the YouTube interface still allowed users to customise their page and decide how many recommendations to obtain and where to locate recommendations on the home page (Davidson et al. 295).


Fig. 2. The YouTube home interface at the time of publishing of Davidson et al.'s paper, dating September 29, 2010. Note, on the upper region of the page, the platform's presentation of the 'Recommended for You' feature. On the upper right region of the page, users would be invited to customise their homepage to decide the location and quantity of recommended videos (Wayback Machine).
