
People You May Know and More Pages You May Like

Algorithmic recommendations and Islamophobic sentiments on the social media platform Facebook

Author: Dominique Verschragen
Student Number: 6284035
Email: do.verschragen@gmail.com

Date of Submission: 28-05-2016

University: University of Amsterdam, Graduate School of Humanities
Program: MA in New Media and Digital Culture

Supervisor: Stefania Milan
Second reader: Carolin Gerlitz


ABSTRACT

We live in a critical period during which populist movements in Europe are gaining popularity. The increasing number of migrants and refugees, in relation to recent terrorist attacks such as the Charlie Hebdo shooting and the November 2015 Paris attacks, seems to be used as a justification for negative attitudes towards Islam and Muslims in general. Since September 2015 the social media service Facebook has appeared several times in the newspapers regarding the platform's attitude towards xenophobic, hateful or Islamophobic user-generated content. More recently, the platform launched a campaign supporting counter speech and signed up to the EU code of conduct, which includes a series of commitments to combat the spread of illegal hate speech online in Europe. Both of these attempts seem to focus on user behaviour and minimize the platform's role as curator of our social landscape. This thesis aims to display that sentiments of Islamophobia on Facebook are not only explained by the online behaviour of users, but are also shaped by its algorithmic recommendations.


ACKNOWLEDGEMENTS

The following people have helped me amazingly, in one way or another. Thank you:

- Lucie Fortuin, for everything
- Simon Padt, for everything
- Stefania Milan, for motivating me throughout the process and excellent feedback
- Josse, for going to Africa
- Auke Rick Akkerman, for inspiring projects
- My parents, for everything

I would also like to thank all the people of New Media and Digital Cultures, both teachers and students!


TABLE OF CONTENTS

1. INTRODUCTION
2. LITERATURE
2.2 ISLAMOPHOBIA, A SOCIAL REALITY
2.2.1 Islamophobia, a Contested Phenomenon
2.2.2 Islamophobia, an Identifiable Phenomenon?
2.2.3 Islamophobia in the Netherlands, a historical approach
2.3 FACEBOOK, A CURATED LANDSCAPE
2.3.1 Facebook, Violating the Terms
2.3.2 Facebook, a Public Sphere?
2.3.3 Facebook, Filter Bubble and the Culture of Connectivity
2.3.4 Facebook, Responding to Criticism
2.3.5 Facebook, Suggested Friends, Groups, Pages and the Issue of Polarization
3. METHODOLOGY
3.1 RESEARCH QUESTION
3.2 A PRIORI CONSIDERATIONS
4. RESULTS: ETHNOGRAPHIC DIARY
5. ANALYSING RESULTS
5.1 STEP 1: ETHNOGRAPHIC DIARY
5.1.1 The Subtle Locations of Algorithmic Recommendations
5.1.2 Algorithmic Friendship
5.1.3 Algorithmic Polarization of Groups and Pages
5.2 STEP 2: ISLAMOPHOBIC SENTIMENTS
5.2.1 Overview Outcome Step 1: Suggested Groups
5.2.2 Presence of Islam and Muslim targeted Posts
5.2.3 Categorization of Islamophobic sentiments
6. CONCLUSION
BIBLIOGRAPHY
APPENDIX


1. INTRODUCTION

More than one million migrants and refugees entered Europe in 2015, mostly from Syria, Africa and South Asia (IOM "Irregular Migrant"). On April 26, 2016, the International Organisation for Migration counted 181,476 people who had reached Europe by sea since the start of 2016, and another 1,232 people either dead or missing (IOM "homepage"). Political agents, particularly populist parties, invoke this growing group of migrants and refugees, in close relation to the so-called terrorist attacks, in order to justify their negative attitudes towards Islam and Muslims. With the increasing influx of migrants and refugees, online communities of populist movements have been growing in strength in Europe. Herein populism is understood as an ideology that 'pits a virtuous and homogeneous people against a set of elites and dangerous 'others' who are together depicted as depriving (or attempting to deprive) the sovereign people of their rights, values, prosperity, identity, and voice' (Albertazzi and McDonnell 5). Concerning the migration crisis, populist movements are 'defined by their opposition to immigration and multiculturalism, and concern for protecting national and European culture, particularly from Muslim majority countries' (Bartlett, Birdwell and De Lange 12). In 2015, three populist spokespersons from different Western European countries faced lawsuits for spreading hate speech and intolerance towards Islam, in both offline and online environments. The German founder of the anti-Islamic and anti-immigration movement Pegida, Lutz Bachmann, described Europe's new refugee population as "cattle" and "scum" in a Facebook post (Owen). The Dutch Party for Freedom (PVV) frontman Geert Wilders called for fewer Moroccans (Guardian "Dutch politician"). Moreover, the French National Front (FN) president Marine Le Pen compared Muslim street prayers to the Nazi occupation in World War II (Breeden).


Exposure to news, opinion and civic information is increasingly mediated through online social network services (SNS). Yet, in Germany for example, hate speech is routinely subject to investigation and prosecution only when spoken or printed. A debate has therefore recently caught fire over whether hate speech should be removed from SNS, mainly focusing on the most popular platform, Facebook. Facebook users express their positions for or against Islam, Muslims, immigrants and refugees by joining specific communities. Hence, a division regarding Europe's problem of how to deal with resettling people is mirrored online. General online communities such as Rapefugees Not Welcome and Close the EU borders are opposed to communities such as Refugees are Welcome and Help Refugees. The same goes for national and local online communities such as Refugees Not Welcome in UK and Refugees are Welcome - UK. In a broader perspective, Stop Islam Now, Islam free world, Infidels at war with Islam and Destroy Islam, Save HUMANITY are opposed to communities such as Non Muslims supporting Muslims and Stop Insulting Islam. In September 2015 the German chancellor Angela Merkel asked the founder and chief executive of Facebook, Mark Zuckerberg, what he was doing to police racist and xenophobic content on the SNS (Mamiit). In February 2016 the Guardian published an article in which Zuckerberg suggests that the company is going to take action on that account by policing hate speech: '[h]ate speech has no place on Facebook and in our community. [...] Until recently in Germany I don't think we were doing a good enough job, and I think we will continue needing to do a better and better job' (Guardian "Zuckerberg on"). In March 2016, two German lawyers, Chan-jo Jun and Christian Solmecke, accused Zuckerberg of facilitating the posting of anti-Semitic and other inflammatory content online (Dyer). Jun and Solmecke have compiled a list of more than 300 Facebook pages and posts that contain swastikas and other Nazi-related images, as well as calls for violence against the Middle Eastern and North African immigrants who have flooded Germany over the past year.


In the academic field, scholars like José van Dijck, Tarleton Gillespie, Eli Pariser, Taina Bucher, Antoinette Rouvroy, David Beer, John Cheney-Lippold and Rob Kitchin argue that Facebook, like other online services, shapes our lives. Moreover, they point out that news, opinion and civic information are not only mediated by the interface of online services, but are also fostered by personalisation technologies. Algorithms guide an increasing proportion of our choices: where to eat, what to read, where to sleep and who to sleep with. From the search engine Google, to the online retailer Amazon, to the dating service Tinder, to the SNS Facebook, algorithms help shape what we know and how we act. Eli Pariser's well-known book The Filter Bubble (2011) highlighted two concerns. First, that algorithms help people surround themselves with media that support what they already believe. Second, that algorithms tend to down-rank different kinds of media, which are of utmost necessity in a democracy when it comes to the sharing of news and information on social topics (Pariser "Did Facebook's"). Referring to Facebook's news feed, the constantly updated list of stories in the middle of each personal home page, Pariser argues that users get less exposure to conflicting viewpoints and are intellectually isolated in their own informational bubble ("Did Facebook's"). Legal scholar Cass Sunstein argues that people's opinions have been shown to become more extreme simply because their view has been corroborated, and because they grow more confident in those opinions after seeing that their views are widely shared (176). There is a case to be made that Facebook stimulates a similar process, as it applies personalized technologies to the News Feed and, moreover, recommends Groups, Pages and Friends. As such, the platform seems to know our preferences. However, Facebook pushed back on the filter bubble critique in May 2015 when it published an article, "Exposure to Ideologically Diverse News and Opinion on Facebook", in Science, discussing polarization on Facebook. The authors suggest that:


individual choices (...) more than algorithms (...) limit exposure to attitude-challenging content in the context of Facebook. Despite the differences in what individuals consume across ideological lines, our work suggests that individuals are exposed to more cross-cutting discourse in social media than they would be under the digital reality envisioned by some (Bakshy, Messing and Adamic 1131)

The text does not specify who is meant by the word some and therefore provides no basis for specific comparisons. Subsequently, Pariser himself reflected on the article in a blogpost, "Did Facebook's Big New Study Kill My Filter Bubble Thesis?", wherein he mentioned that the 'co-authors were gracious enough to reach out and brief me at some length'. Regardless of the political consequences, the authors point out that the power to expose oneself to contrasting perspectives lies first and foremost with individuals. In other words, the platform presents itself as a medium that enables social connectedness, and therewith minimizes its role as curator of social networks and their inseparable visible content. This is in contrast with academic studies that take the position that Facebook curates the social landscape. Although several academic studies point out that Facebook's algorithm organizes, selects, and presents information on the user's news feed (Bucher, "Want to be on the top? Algorithmic power and the threat of invisibility on Facebook"; Van Dijck, The Culture of Connectivity; Pariser, "Did Facebook's Big New Study Kill My Filter Bubble Thesis?"), little is yet known about the ways Facebook's algorithmic recommendations shape our life. Considering the critical period we are now witnessing, with increasing discrimination towards Islam, Muslims, migrants and refugees and growing acceptance thereof, a debate about the ethics and effects of algorithmic recommendations is incredibly important.


In democratic societies, the question of whether and how to regulate hate speech in offline and online environments is the subject of ongoing discussion. During the past months, especially after the meeting between Merkel and Zuckerberg mentioned above, this discussion received a lot of attention; the EU and international IT companies announced a code of conduct regarding online hate speech, and Facebook launched a campaign supporting counter speech. This thesis aims to contribute to the ongoing debate by exploring whether the platform Facebook - through the workings of algorithmic recommendations - suggests Groups, Pages and people that accept and actively produce sentiments of Islamophobia, theoretically defined by Erik Bleich as indiscriminate negative attitudes or emotions directed at Islam or Muslims (1593). Although the two are closely related, I have chosen to use the term and phenomenon 'Islamophobia' rather than 'xenophobia', for two reasons. First, this thesis is solely concerned with Islam and Muslims, and does not address foreigners in general. Second, the term Islamophobia is commonly used in regard to negative sentiments towards Islam and Muslims and is, at the same time, a highly contested term in need of further investigation. I also prefer the term 'Islamophobia' to 'hate speech', as this thesis designates a social reality: 'that Islam and Muslims have emerged as objects of aversion, fear, and hostility in contemporary liberal democracies' (Bleich 1593). This thesis aims to display that sentiments of Islamophobia on Facebook are not only explained by the online behaviour of users, but that they are also shaped by Facebook's algorithmic recommendations.

2. LITERATURE

This chapter is divided into two sections: 2.2 Islamophobia, a Social Reality and 2.3 Facebook, a Curated Landscape. To understand what Islamophobia refers to, we must first familiarize ourselves with the main points of discussion regarding the term. The first section will therefore start with the first definition proposed by the Runnymede Trust, and it continues with other definitions and criticisms of Islamophobia. Furthermore, it discusses how to identify such a contested term and phenomenon. The second section discusses Facebook as curator of social connectivity. It starts out by giving a short overview of the current state of Facebook's and Europe's policies against hate speech and other violating content.

2.2 ISLAMOPHOBIA, A SOCIAL REALITY

The term Islamophobia has become increasingly common in everyday speech. However, what exactly defines anti-Muslim or anti-Islamic attitudes remains contested. This section aims to display that Islamophobia, as a product of human dialogue and ongoing interaction, has turned into a social reality and therefore needs to be considered an identifiable phenomenon. As the regional focus of this thesis is the Netherlands, the final part of this chapter discusses sentiments of Islamophobia in the Netherlands specifically.

2.2.1 Islamophobia, a Contested Phenomenon

In 1997, the publication "Islamophobia: A Challenge for Us All" by the British independent research agency The Runnymede Trust played a pioneering role in identifying Islamophobia. The Runnymede Trust identified a number of essential elements, which it used to define Islamophobia as 'an outlook or world-view involving an unfounded dread and dislike of Muslims, which results in practices of exclusion and discrimination' (1). The report continued by stating that Islamophobia is the 'shorthand way of referring to dread or hatred of Islam – and, by extension, to fear or dislike of all or most Muslims' (The Runnymede Trust 1; Allen, "Islamophobia" 149). This description can be seen as a benchmark; ever since it was coined, the phenomenon and concept of Islamophobia have been widely discussed and debated by scholars and journalists, often using the Runnymede report as a starting point for the general discussion on Islamophobia (Chris Allen; Ineke van der Valk; Andre Oboler; Erik Bleich). In discussing Islamophobia, two particular criticisms are regularly raised. The scholars Roland Imhoff and Julia Recker explain that 'the more intransigent objection denounces Islamophobia as a discursive weapon intended to silence well-justified critique of Islamic practices and dogmas' (812). In other words, this criticism claims that a phobia is a state of mind for which there is no rational basis, involving irrational fear or hatred, whereas the revulsion against Islam and Muslims is, in this view, not phobic in character but based on the actual existence of terror and murder. The website Bare Naked Islam: It isn't islamophobia when they really are trying to kill you is an example of such criticism. The second criticism is mainly discussed in academic spheres. Prominent scholars such as Erik Bleich, Chris Allen and Ineke van der Valk argue that the term and phenomenon lack a clear definition. In short, these scholars maintain that Islamophobia is so contested as a concept that any evidence for its occurrence is unlikely to be forthcoming. This is simply because there is little agreement on what exactly Islamophobia entails, and therefore it remains unclear what evidence would either support or undermine it. Despite its nebulosity, Allen suggests that 'the spread of Islamophobia both discursively and conceptually in the public and political arena has meant that Islamophobia is now a social reality' ("Islamophobia" 167). In 2004, Secretary-General Kofi Annan opened the United Nations seminar "Confronting Islamophobia: Education for Tolerance and Understanding" with the lament: 'when the world is compelled to coin a new term to take account of increasingly widespread bigotry, that is a sad and troubling development. Such is the case with Islamophobia' (qtd. in Bleich 1582). Newspaper headlines repeatedly affirm Allen's observation; 'Decent Tories must speak out against their party's Islamophobic mayoral campaign' (the Guardian), 'How Islamophobia is driving young Canadian Muslims to reclaim their identity' (CBC) and 'Islamophobic crime in London up by 70%' (BBC) are just a few examples of newspaper articles published last year. In these articles Islamophobia is used without being explicitly defined. Drawing on the theoretical studies of Allen and Bleich, this thesis claims that, despite the complexity of the term and phenomenon, Islamophobia has become a social reality. In the language of Bleich, this encompasses that 'Islam and Muslims have emerged as objects of aversion, fear, and hostility in contemporary liberal democracies' (1583).

The author of the book Islamophobia in the Netherlands, Ineke van der Valk, describes how history has helped shape today's Islamophobia via two routes. On the one hand, there is the culturally oriented Islamophobic rhetoric that originates in colonial Orientalism, in which Muslims are primarily considered exotic. On the other hand, there is the Islamophobic vision that regards them as marauding aggressors (16). She explains the first route through the prism of Edward Said's concept of Orientalism. This image is one of a rational, progressive West at constant and irreconcilable odds with an irrational, backward East (Van der Valk 16). The second route has its origins in the crusades; from a western point of view the Christian Holy Land (an area roughly located between the Jordan River and the Mediterranean Sea) had been in the hands of the Muslims since 638, and it was against them that the crusades were, at least nominally, directed. Reflecting on this early history, scholar Slobodan Drakulic explains that 'Europeans tend to harbour profound prejudices about Muslims and non-Europeans, regarding them as socio-culturally regressive and aggressive, bent on undermining the European social order and way of life' (248). The early history of Islam in the Netherlands can be traced to the 16th century, when Ottoman traders began settling in the country's port cities. Yet, accounting for less than 0.1% of the population, Muslims and Islam appeared as external, distant and essentially foreign, both physically and conceptually. However, with the beginning of the large immigration inflows in the 1960s, this critical, and possibly safe, distance disappeared. Within such conditions, Muslims are no longer the enemy 'other' but, much more immediately, the enemy 'within' (Allen, "Islamophobia" 152).

In recent years, anti-Muslim and anti-Islam attitudes have been fueled by public anxiety regarding immigration and the integration of Muslim minorities in Europe. These tensions have been exacerbated by the aftermath of the economic crash of 2007, the rise of populist nationalist parties, and the growth of Islamic extremism and terrorism since the start of the new millennium (Open Society Foundations). Ever since 9/11, Islam, terrorism and violence have often been mentioned in the same breath. Terrorist attacks in the name of Islam and wars in the name of freedom - such as the Charlie Hebdo shooting and the November 2015 Paris Attacks - have influenced the general perception of Muslims and Islam and have aggravated concerns and anxieties about the integration of Muslims in Western societies. To an increasing degree, Muslims are being seen as a threat not only to security but also to European cultural values and the continent's supposed cultural homogeneity (van der Valk 16). Along those lines, Gabriele Marranci argues that:

it is not through the stereotyping of physical characteristics that Islamophobia spreads, but through the misrepresentation of the Muslim world, and the representation of their lifestyle as alien from Western society. The misuse of Islamic texts, the reproduction of orientalist, and colonialist images of Muslim men as violent and patriarchal and of Muslim women as submissive and oppressed, [...] facilitate the representation of Islam as barbaric culture, and hence founded on anti-Western values. In this way, Muslims are increasingly represented as members of a threatening 'transnational society', in which people want only to 'stone women', 'cut throats', and 'beat their wives'. (107)


Salman Sayyid suggests that those who favor the use of the specific term Islamophobia argue that the concept can serve as an instrument to describe a situation that would otherwise go unreported and unattended. More than an expression of hatred, Islamophobia can be understood as a fear of a multicultural society in which Islam may become a recognised and meaningful part of the European landscape (Marranci 116), expressed in indiscriminate negative attitudes or emotions directed at Islam or Muslims (Bleich 1593).

2.2.2 Islamophobia, an Identifiable Phenomenon?

According to Sabri Ciftci, 'the degree of Islamophobia has been revealed on multiple occasions: [...] debates about the construction of an Islamic center near Ground Zero and mosque controversies in a dozen states in the USA; anti-veiling legislation in France; the "minaret" row in Switzerland; and the killing of Turkish immigrants in Germany are some well known examples' (293). Correspondingly, reports by the European Monitoring Centre on Racism and Xenophobia (EUMC) in Europe and by the Council on American-Islamic Relations (CAIR) in the USA have also pointed to the rise of Islamophobia (Ciftci 293). The section above described that Islamophobic attitudes are more nuanced than a simple dislike of Muslims. The section below elaborates further on how to identify Islamophobia, specified as a phenomenon that is identifiable as a perceived threat and fear, which is different from general dislike.

The key to uncovering reliable indicators of Islamophobia lies in consistency (Bleich 1589). Along the same line of thought, Salman Sayyid argues that 'interpreting Islamophobia [...] is not a subjective practice, but rather a skilled one in which there has to be a sufficient degree of overlap between one's reading and the other readings in play in the culture at the time'. He continues by stating that 'Islamophobia should be measurable in ways that produce evidence, which could be the basis of a rational policy' (22, 23). According to Bleich, 'it is possible to directly measure Islamophobia by examining unsolicited statements proffered by politicians, civil servants, public figures, religious leaders, journalists, bloggers, and others whose words are recorded for posterity'. When far-right politicians or public figures make anti-Muslim proclamations, these statements are not just indicators of their own Islamophobia; they are also likely to amplify or to reinforce Islamophobia among their devotees (Bleich 1590). For example, in 1997 the Dutch politician Pim Fortuyn published a book titled Against the Islamization of Our Culture, in which he attempted to show how Islam was incompatible with Western culture and, moreover, the dangers it poses to the survival of Western civilisation (Marranci 108). Bleich argues that the best way to measure Islamophobia is to seek out the most direct indicators of indiscriminate negative attitudes or emotions directed at Islam or Muslims. In the section above I described that Islamophobia can be understood as feelings of fear and threat of a multicultural society. In order to indicate what exactly this fear and threat is based on, the researchers Dunn, Klocker and Salabay identified different forms that such fear can take: economic, military or cultural threat ("Contemporary racism and Islamophobia in Australia"). In other words, they asked: what are the central components of contemporary anti-Islam and anti-Muslim sentiments? Similarly, the Online Hate Prevention Institute (OHPI) released a report, "Islamophobia on the Internet, the growth of online hate targeting Muslims", wherein they divided hate speech into seven different types, ranging from content dehumanising or demonizing Muslims to the portrayal of Muslims as terrorists (Oboler).

2.2.3 Islamophobia in the Netherlands, a historical approach

The Netherlands has a relatively long history of immigration. Both refugees and economic migrants have come to the country in large numbers. Currently almost 20% of the Dutch population consists of immigrants or children of immigrant parents (Ersanilli 1).

The author of the study Focus Immigration, Netherlands, Evelyn Ersanilli, describes that 'for a long time the Dutch took pride in the fact that many people came to their country because of its relative tolerance towards other cultures and religions' (1). Along the same line, Hein de Haas suggests that until 1973 migration was viewed with optimistic eyes, as migrants were transferring capital and knowledge to the developing countries (2). Immigrants entering the country after the Second World War, such as those welcomed as guest workers and citizens from former colonies, were initially encouraged to maintain their own cultures, even after it became clear they would stay in the Netherlands permanently. Access to citizenship was easy, and the pressure to assimilate was low. For immigrants who were not proficient in Dutch, many government services and documents were provided in their mother tongues.

However, between 1973 and 1990 this attitude changed into migration pessimism, as concerns over brain drain rose (Haas 8). It became clear that the former guest workers and, to a lesser extent, migrants from the former colonies were economically marginalised. Many different policies were executed to improve their position, but to little avail, and with the turn of the new century a turbulent time for Dutch multiculturalism arose (Ersanilli). Many wonder whether the Netherlands has left the multicultural track, which had aspired to recognize immigrants as culturally distinct groups, and is now pursuing a strict form of assimilation. Since 1998, several new immigration and integration laws have been introduced, and without exception they have made Dutch immigration and integration policies stricter. Immigrants and their descendants continue to be viewed as culturally distinct groups, but whereas early integration policies aimed at maintaining cultural diversity, this diversity is now increasingly seen as something that obstructs integration into Dutch society.

In the 2002 elections, Pim Fortuyn, a politician who advocated the closure of borders and who claimed that "Islam is a backward culture", secured twenty percent of the popular vote for his party, the LPF. On May 6, 2002, Fortuyn was assassinated by Volkert van der Graaf. At his trial, van der Graaf, an environmental and animal rights activist, stated that he had murdered Fortuyn in order to stop him from exploiting Muslims as "scapegoats" and targeting "the weak members of society" in seeking political power (Hokstam). According to Merel van Beeren, Ayaan Hirsi Ali, a politician of Somali descent, was probably the most prominent figure in the negative portrayal of Islam in the Netherlands (51). Hirsi Ali judged Islam for its treatment of women and proposed not reform but a complete rejection of the religion (van Beeren 51). Together with the filmmaker Theo van Gogh, she created the film Submission, Part I. A film on the oppression of women in Islam, it featured, amongst other things, images of the projection of Qur'anic verses on a naked woman underneath a sheer veil (van Beeren 52). In 2004, van Gogh was shot dead by Mohammed Bouyeri. The assaults on Pim Fortuyn and Theo van Gogh showed that the conversation surrounding Islam and the expression of Islamophobic sentiments in the Netherlands had escalated to a point where they could no longer be ignored. The Dutch politician who now most closely resembles Fortuyn is Geert Wilders. In 2006, Wilders founded the PVV (Partij Voor de Vrijheid), which presents itself explicitly as an anti-Islam party. The party "does not consider Islam as a religion, but primarily as a political ideology in the disguise of a religion. [...] [T]he political ideology of Islam is not moderate – it is a totalitarian cult with global ambitions" (Wilders 25, 26). In March 2008, Wilders released the film Fitna, which argued that Islam, and Muslims generally, are violent, intolerant and bent on world domination. In the Dutch general election of 2010, his party received 24 of the 150 seats, making it the third-largest party. The extent of support for first the LPF and later the PVV indicates a polarization in the Netherlands with regard to Islam and Muslims, as these political parties enforce issues of exclusion and separation which allow sentiments of Islamophobia in the Netherlands to develop.


2.3 FACEBOOK, A CURATED LANDSCAPE

In the first quarter of 2016, Facebook had 1.65 billion monthly active users (Statista). It is well known that the platform connects people by allowing users to share personal information, join groups, send private messages and leave public notes on either a group's or an individual's wall, a section of screen real estate designated for this purpose (Oboler). Moreover, users can share photographs and multimedia and install and use third-party applications based on the Facebook platform (Oboler). Zuckerberg has repeatedly stated that Facebook brings people together and therewith helps to make the world a better place. In an open letter to his newborn daughter Max, he wrote that in many ways the world is getting better: '[h]ealth is improving. Poverty is shrinking. Knowledge is growing. People are connecting. Technological progress in every field means your life should be dramatically better than ours today' (Facebook 2016). Yet these statements can be questioned. In The Culture of Connectivity, José van Dijck discusses Facebook as a curated landscape, in other words as a curator of 'social' connectivity. Following van Dijck, this section aims to understand the consequences of an age in which our sense of social coherence is based in technology rather than in natural, organic contact with the other.

As mentioned in the introduction, numerous media scholars argue that algorithms shape our world. These researchers critically scrutinize the workings of algorithms, not by fully understanding their technical workings, but mainly by treating the algorithms as a black box. This allows them to research the impact of algorithms on user behaviour and to question how algorithms shape our social understanding. Drawing on previously mentioned studies, this section discusses algorithms in the construction of sociability mainly through the writings of José van Dijck ("Facebook as a Tool for Producing Sociality and Connectivity" and The Culture of Connectivity), Tarleton Gillespie ("The Relevance of Algorithms") and Eli Pariser (The Filter Bubble). Before diving into theories, I will briefly expand on Facebook's attitude and the activities it undertakes in relation to hate speech and other forms of harmful expression on the platform.

2.3.1 Facebook, Violating the Terms

The rules governing the use of Facebook, published in its terms of service (TOS), are created and modified by the company itself. Facebook's TOS forbids content that is harmful or threatening and any content that has the potential to stir hatred and incite violence (Facebook "What does"). In its community standards, Facebook states that the platform 'removes hate speech, which includes content that directly attacks people based on their: race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender or gender identity, or serious disabilities or diseases' ("Community Standards"). Although the community standards clearly state that '[o]rganisations and people dedicated to promoting hatred against these protected groups are not allowed a presence on Facebook', they further state: '[w]e allow humour, satire or social commentary related to these topics, and we believe that when people use their authentic identity, they are more responsible when they share this kind of commentary. For that reason, we ask that Page owners associate their name and Facebook Profile with any content that is insensitive, even if that content does not violate our policies. As always, we urge people to be conscious of their audience when sharing this type of content' (Facebook, "Community Standards"). In 2012, Facebook introduced the possibility for users to report content they consider to violate the TOS agreement. Moreover, the platform offers tools to "socialize" reporting, allowing users to privately notify the author of a specific piece of content before formally asking Facebook for it to be removed (Gagliardone 31). According to the author of Countering Online Hate Speech, Iginio Gagliardone, 'these new opportunities represent interesting additions to other measures to respond to perceived hate speech, even if evidence is lacking about how effective they have been over time and whether or not users are satisfied with the options they are being offered' (31).

As mentioned in the introduction, in September 2015 Merkel called out Zuckerberg for allowing racist and xenophobic content on Facebook. In January 2016, Facebook launched a campaign, the Online Civil Courage Initiative (OCCI), based in Berlin, to counter racist and threatening posts on the platform (Guerrini). The OCCI Counterspeech Handbook describes the campaign's mission as promoting the civil courage displayed by organisations and grassroots activists who carry out valuable counter speech work online. Herein, counter speech is described as 'responses or content that is created to counter a range of extremisms and hate speech online' (OCCI). The handbook explains the motivation for the campaign: '[w]e believe that engagement is more powerful than censorship in reforming prejudiced and bigoted opinions and voices, and are committed to amplifying campaigns which encourage positive dialogue and debate' (OCCI). Furthermore, it states that '[g]oals guide the campaign and keep it focused on its original purpose i.e. to persuade people not to join far-right or Islamist extremist groups' (OCCI, italics added). In December 2015, the European Commission launched the EU Internet Forum. According to the European Commission, it brings together almost exclusively US Internet companies, government officials and law enforcement agencies to 'counter terrorist content and hate speech online' ("EU Internet Forum"). On 31 May 2016, the European Commission and the world's biggest IT companies, Facebook, Twitter, YouTube and Microsoft, agreed on a code of conduct that includes a series of commitments to combat the spread of illegal hate speech online in Europe (European Commission "European Commission"). Referring to this code of conduct, Vĕra Jourová, the EU Commissioner for Justice, Consumers and Gender Equality, said that '[t]he recent terror attacks have reminded us of the urgent need to address illegal online hate speech. [...] I welcome the commitment of worldwide IT companies to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary' (European Commission "European Commission"). From the European Commission's perspective, the aim of the agreed code of conduct is that messages containing hate speech will be deleted quickly. Hate speech is defined in Article 1, 'Offences concerning racism and xenophobia', of the 2008 Framework Decision on Racism and Xenophobia, which requires EU Member States to make sure that the intentional public incitement "to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin" is made "punishable" ("Council Framework Decision 2008").

Reflecting on the code of conduct, Monika Bickert, Head of Global Policy Management at Facebook, said:

We welcome today's announcement and the chance to continue our work with the Commission and wider tech industry to fight hate speech. With a global community of 1.6 billion people we work hard to balance giving people the power to express themselves whilst ensuring we provide a respectful environment. As we make clear in our Community Standards, there's no place for hate speech on Facebook. We urge people to use our reporting tools if they find content that they believe violates our standards so we can investigate. Our teams around the world review these reports around the clock and take swift action. (European Commission "European Commission")

According to Joe McNamee, the executive director of European Digital Rights (EDRi), 'the "code of conduct" downgrades the law to a second-class status, therewith placing it behind the "leading role" of private companies that are asked to arbitrarily implement their terms of service. This process, established outside an accountable democratic framework, sets unclear, almost hidden rules for companies. Therewith it also creates serious risks for freedom of expression on SNS, as legal but controversial content may well be deleted as a result of this voluntary and unaccountable take down mechanism' (McNamee). In other words, McNamee criticises the fact that the IT companies handle these issues themselves, rather than leaving them to law enforcement. McNamee further argues that the code of conduct defines hate speech vaguely; it is defined in the document as 'all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin'. Originally, EDRi was part of the EU Internet Forum that created the code of conduct along with the IT companies and the EU Commission, but together with Access Now (an international organization dedicated to defending and extending the digital rights of users at risk around the world), it announced its withdrawal from the code of conduct ("EDRi and Access Now").

Both the code of conduct and the OCCI initiative point to the serious consequences of the promotion of online hate speech. Terrorist and extreme-right groups radicalise, recruit and mobilise new supporters through the easy spread of online content. One of the public commitments states that '[t]he IT companies to have in place Rules or Community Guidelines clarifying that they prohibit the promotion of incitement to violence and hateful conduct' (code of conduct). Moreover, the handbook of the OCCI aims to persuade people not to join far-right or Islamist extremist groups by supporting counter speech. However, since the beginning of 2016, several conservative sources have claimed that Facebook censors content offensive towards Islam: Facebook Censorship and the War on Free Speech (Hayward, Breitbart News Network), Google, Facebook and Twitter Yield to German Govt Demand to Censor Anti-Migrant Hate Speech (Kent, Breitbart News Network), Facebook Now Censoring People Who Oppose Wave of Muslim Refugee Immigration (Campbell, PolitiStick).

2.3.2 Facebook, a Public Sphere?

It should come as no surprise that online conversations today exist primarily in the realm of social media, most of which is owned by private companies. These privately owned platforms have thus grown to occupy a significant role in our daily social interaction.

In the book The Structural Transformation of the Public Sphere (1962), Habermas argues that the eighteenth century knew a short period of a flourishing public sphere. In spaces separated from those that belonged to the established powers of state and church, people gathered for discussion and the exchange of ideas. These places operated as an arena for debate rather than for commercial transactions (Habermas 65; Van Dijck "Facebook as a Tool" 162). Summarizing Habermas's theory, van Dijck explains that the rise of the public sphere was short-lived: 'driven by specific social circumstances, such as the emergence of a commercial press, growing literacy rates, and more access to printed literature', the public sphere transformed into the commercialised participation of the masses (162). Capitalist organisations shifted the public debate from the dissemination of reliable information to the formation of public opinion (Boeder). This is in stark contrast with the ideal public sphere, wherein isolated opinions are brought together in order for the public's collective opinions to result in an informed decision (Van Dijck "Facebook as a Tool" 163). The public sphere as envisioned by Habermas is seen as central to a strong, democratic culture, 'enabling the voicing of diverse views on any issue' (Dahlberg 828).

There has been much academic discussion, and increasing research has been done, on the possibility of the internet facilitating the idea of a public sphere as introduced by Habermas (Aaron Barlow, The Rise of the Blogosphere; Graham, Deliberating in Online Forums: New Hope for the Public Sphere?; Sparks, The Internet and the Global Public Sphere). The beginning of the Internet encouraged a wave of optimism: when interactive platforms came into being, they promised to make our culture more participatory, empowering users to connect, create and discuss. It was predicted that the growth of online interactivity and the decentralisation of power relations would allow for a rational and informed debate. In other words, interactive platforms could enable individuals to encounter more diverse views, and thus attain more moderate positions on political matters. Pointing at the blogosphere, Aaron Barlow suggested that the technology could act as a buffer against a public sphere dominated by commercial forces, and could even allow it to be opened up again (3). Similar to the public sphere indicated by Habermas, people could debate separated from the established powers. Moreover, it was argued that online participants readily seek out and deliberately discuss with actors holding markedly different views, thereby allowing the public sphere to expand (Dahlberg 828). In the article "The Virtual Sphere", Zizi Papacharissi argued that:

people who would never be able to come together to discuss political matters offline are now able to do so online, and that is no small matter. The fact that people from different cultural backgrounds, states, or countries involve themselves in virtual political discussions in a matter of minutes, often expanding each other’s horizons with culturally diverse viewpoints, captures the essence of this technology (23).

However, the initial optimism associated with a virtual public sphere has been replaced by doubts about whether this model is appropriate for the development of democratic values (Iosifidis). In contrast to the optimistic view, other scholars, such as Pariser and Sunstein, argue that much of the online interaction simply involves the meeting of 'like-minded' individuals, allegedly leading to a fragmented public sphere wherein group positions and practices are reinforced rather than openly critiqued, as proposed in the ideal public sphere (Dahlberg 828). Habermas himself, in a 2006 speech, seriously questioned the internet as a viable new public sphere, arguing that "the rise of millions of fragmented chat rooms across the world lead to the fragmentation of large but politically focused mass audiences into a huge number of isolated public issues" (Habermas 423). In line with this assumption, Pariser points explicitly to algorithms, including Facebook's EdgeRank (the algorithm that decides which stories appear in each user's news feed), as manipulating the public sphere by connecting people and creating solidarity (28). In her book The Culture of Connectivity, van Dijck argues that social media constitute an arena of public communication where norms are shaped and rules get contested (19). Drawing on Michel Foucault, van Dijck continues that 'the power of norms, in the arena of sociality, is much more influential than the power of law and order' (19).

In a previous research project I used Google Search: I typed in 'people without Facebook' and Google Search Autocompletion suggested 'people without Facebook are psychopaths'. Following Google's suggestion, I came across the sensation-seeking newspapers The Huffington Post and Daily Mail, and scrolling down the pages I found articles with titles such as 'Jobseekers getting asked for Facebook passwords' (Valdes and McFarland) and 'Beware, Tech Abandoners. People without Facebook Account are Suspicious' (Hill). Although these articles cannot count as valid academic sources, they point towards a certain natural and expected presence of Facebook. It can be argued that the platform is considered an infrastructure. Moreover, as van Dijck mentioned in relation to Foucault, 'the platform itself shapes the norms of sociality; to share, to like, to be connected constantly but also to accept personalized advertisements and other forms of monetization' ("The Culture" 19). Regardless of the individual advantages, Facebook can be understood as a commercial platform of public communication that steers and curates connections for monetary benefits. The media scholar Danah Boyd conceptualizes SNS as networked 'publics': publics that are restructured by networked technologies. She argues that publics are simultaneously a space and a collection of people, and that the architecture of networked publics, referring to the organization of code that produces digital environments, differentiates them from more traditional notions of publics (Boyd 3). Arguably, it does not make much sense to compare the social media sphere to more traditional notions of publics such as the public sphere of Habermas, because of the ways in which networked technologies shape and configure connectivity within the digital environment. However, we should not forget the symbolic value of the public sphere as a social concept enabling the voicing of diverse views on any issue.

2.3.3 Facebook, Filter Bubble and the Culture of Connectivity

The Internet contains a hardly imaginable amount of data. Since individuals are no longer able to orient themselves in the world of big data, it has become necessary to delegate this task to algorithms. As mentioned in the introduction, there are many different types of algorithms, and they are used in the most diverse domains. Yet algorithms are not neutral procedures; they are devices that profoundly influence our daily lives, more so than it would appear at first sight. In the book The Culture of Connectivity (2013), van Dijck states that 'social media are inevitably automated systems that engineer and manipulate connections. In order to be able to recognize what people want and like, Facebook and other platforms track desires by coding relationships between people, things and ideas into algorithms' (12). As mentioned above, in Zuckerberg's letter to his daughter he wrote that the world is getting better, inter alia, because people are connecting. Believing in the progress enabled by technology, he states that Facebook 'was built to accomplish a social mission - to make the world more open and connected' (Sengupta and Miller). Van Dijck argues that making the world more social has turned out to be 'making sociality technical', also understood as platformed sociality ("The Culture" 12, 157). Based on the data Facebook collects, the platform steers its users' specific needs. There is a case to be made that the logics at work in these algorithms not only shape user practices but also lead users to internalize the algorithmic norms and priorities (Bucher; Gillespie). The section above mentioned individual presence on Facebook as a norm. However, this norm is not internalized through an algorithm, but through the social norm of being visible on Facebook, of having a virtual passport connected with the rest of the world. In the context of Facebook's news feed, the internalization of algorithmic norms is described by Bucher as a created impression that everybody participates on the platform; as such, Facebook simultaneously suggests that participation is the norm (1175). Technological pressure from the platform ranges from the pressure to adjust to the algorithmic norms and to be visible on the news feed, to connecting to people you may know and groups you may like. Of interest for this thesis is that algorithms structure our interactions with others as members of networked publics: '(1) the space constructed through networked technologies and (2) the imagined collective that emerges as a result of the intersection of people, technology, and practice' (Boyd 1).

Pariser (2011) and, more recently, Gillespie (2014) have argued that, when algorithmic information services are personalized to a certain degree, the diversity of public knowledge and political dialogue may be undermined. We are led, by algorithms and our own preference for the like-minded, into "filter bubbles" (Pariser, The Filter Bubble), where we find only the news we expect and the political perspectives we already hold (Gillespie "The Relevance"). The term was introduced by Pariser in his eponymous book The Filter Bubble: What the Internet Is Hiding from You, published in 2011. A filter bubble is the result of personalized search, in which a website's algorithm selectively guesses what information a user would like to see based on information about the user (such as location, past click behaviour and search history); subsequently, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles. Pariser argues that 'to some extent we have always consumed media that appealed to our interests and avocations and ignored much of the rest. But The Filter Bubble introduces three dynamics we've never dealt with before' (32). Namely, every person stands alone in the filter bubble, the filter bubble is invisible, and, finally, no one chooses to be inside the filter bubble. Prime examples are Google Personalized Search results and Facebook's personalized news stream. In both cases the algorithms do not act from a distance in a static, fixed manner, but rather work in concert with each individual user. As algorithmic recommendations are likewise reactive and mutable to inputs, this thesis argues that the filter bubble critique can also be applied to algorithmic recommendations: suggested Friends, Groups and Pages.
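
To make this feedback loop concrete, the following minimal sketch shows how a ranker that scores content purely by similarity to a user's past clicks will keep surfacing the familiar and bury cross-cutting material. This is not Facebook's actual code; the data, topic labels and function names are hypothetical, invented purely for illustration.

from collections import Counter

def rank_feed(items, click_history):
    """Rank candidate items by overlap with topics the user clicked before."""
    preference = Counter(topic for item in click_history for topic in item["topics"])
    def score(item):
        return sum(preference[topic] for topic in item["topics"])
    return sorted(items, key=score, reverse=True)

# Each click feeds back into the profile, so items resembling past clicks
# climb the ranking while unfamiliar viewpoints sink out of sight.
history = [{"topics": ["immigration", "security"]}]
candidates = [
    {"id": 1, "topics": ["immigration", "security"]},   # reinforcing item
    {"id": 2, "topics": ["culture", "integration"]},    # cross-cutting item
]
print([item["id"] for item in rank_feed(candidates, history)])  # -> [1, 2]

Run repeatedly, with each shown-and-clicked item appended to the history, such a loop narrows the feed exactly in the way Pariser's filter bubble critique describes.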

Similar to Pariser, Gillespie argues that 'algorithms play an increasingly important role in selecting what information is considered most relevant to us, a crucial feature of our participation in public life' ("The Relevance"). 'Algorithms impinge on how people seek information, how they perceive and think about the contours of knowledge, and how they understand themselves in and through public discourse' (Gillespie "The Relevance"). Again, the attention paid to Islamophobic sentiments on Facebook indicates a pressing need to focus critically on algorithms and the influence they have on our everyday world.


2.3.4 Facebook, Responding to Criticism

As mentioned in the introduction, Facebook responded to the criticisms discussed above with an article published in the journal Science. In their own words, the authors responded 'to speculation around the creation of "echo chambers" (in which individuals are exposed only to information from like-minded individuals) and "filter bubbles" (in which content is selected by algorithms based on a viewer's previous behaviors), which are devoid of attitude-challenging content' (Bakshy, Messing and Adamic 1). The study, led by two Facebook data scientists, Bakshy and Messing, and a data scientist from the University of Michigan, Adamic, had access to the anonymised data of more than a million Facebook users. It measured the extent to which individual users encounter more or less diverse news while interacting via Facebook's algorithmically ranked news feed. Based on a large sample of users and seven million web links, they found that many of the shared stories exposed people to viewpoints different from their own. The study concluded that, in comparison to algorithmic ranking, individuals' choices about what to consume had a stronger effect in limiting exposure to cross-cutting content (1). In other words, "individual" choice plays a larger role in limiting exposure to different viewpoints. The article explains that the combination of filters at work on Facebook still allows exposure to ideologically challenging news, but that this challenge depends solely on "friends" who claim an opposing political ideology and whose content is put forward by people's news feeds, thus reflecting those diverse views. As such, the article draws a line between the online and the offline environment, in the sense that the diversity of voices depends on one's network. If you are surrounded by people with different political views, you will have access to a range of opinions far broader than that of those who live in an isolated community. The article concludes with the statement that 'we do not pass judgment on the normative value of cross-cutting exposure' (2). It was only a couple of months after the article was published that Facebook launched the OCCI campaign supporting counter speech. Moreover, scholarly research about social media has expressed concern about the consequences of increasing selectivity and polarization (Pariser; Sunstein; Bucher). In relation to the focus of this thesis, it is important to examine to what extent "individual" choice is shaped by algorithmic recommendations. And can we still speak of individual choice, or have we already accepted the coercive power of algorithms?

2.3.5 Facebook, Suggested Friends, Groups, Pages and the Issue of Polarization

The previous sections display that Facebook can be seen as an online facilitator of human networks. However, under the guise of social connectedness, relationships are commoditized. Van Dijck argues that 'even though communicative traffic on social media platforms seems to be determined by social values such as popularity, attention, and connectivity, they are impalpably translated into monetary values and redressed in business models made possible by digital technology' ("Facebook as a Tool" 162). Moreover, the results of human input are shaped by technological output and the other way around. Individuals share their thoughts, ideologies and creations on the platform, but the embedded automated systems also affect what individuals think and do; as such, Facebook engineers and manipulates connections. Facebook, however, stresses the importance of a platform that enables social connectedness, and it minimizes its role as curator of social networks.


Algorithmic recommender systems are becoming integral to how users connect with people and discover media. They are used for all major types of media, such as books, movies, music, news and television, and they are commonplace at major online firms (Amazon, Netflix, and Apple's iTunes store). On Facebook, features such as People You May Know, Pages You May Like and Suggested Groups 'help' individuals to find friends, discover interests and connect around collective interests. The People You May Know feature operates on a "friend of a friend" approach (Chen et al.). Friend of a friend (FOAF) is a common algorithmic technique for modeling friend recommendations online; a good potential friend, according to the FOAF algorithm, is one who already shares a friend with you (Bucher, "The Friendship" 843). Additionally, the automated system suggests which Pages and Groups an individual might be interested in. Moreover, Facebook suggests creating a Group and displays what interests your friends have. As such, Facebook encourages the user to carry out specific actions in specific directions.
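To make the FOAF principle concrete, the following is a minimal sketch of mutual-friend ranking in Python. It is not Facebook's actual People You May Know code, which weighs many more signals (shared networks, profile data, interaction history); the toy graph and names are invented for illustration.

```python
from collections import Counter

# A toy social graph: each user maps to the set of their friends.
graph = {
    "lies":  {"anna", "bart"},
    "anna":  {"lies", "bart", "carla"},
    "bart":  {"lies", "anna", "daan"},
    "carla": {"anna", "daan"},
    "daan":  {"bart", "carla"},
}

def foaf_recommendations(graph, user):
    """Rank non-friends by the number of friends they already share with `user`."""
    friends = graph[user]
    mutual_counts = Counter()
    for friend in friends:
        for candidate in graph[friend]:
            # Skip the user themselves and people they already know.
            if candidate != user and candidate not in friends:
                mutual_counts[candidate] += 1
    # Candidates with the most shared friends are suggested first.
    return mutual_counts.most_common()

print(foaf_recommendations(graph, "lies"))
# e.g. [('carla', 1), ('daan', 1)]: each shares one friend with "lies"
```

What the sketch makes visible is that such a recommender can only reproduce the structure of the existing network: whoever is not already connected to one's connections never appears among the suggestions.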

According to Facebook, 'Facebook Pages enables public figures, businesses, organizations and other entities to create an authentic and public presence on Facebook [and] are visible to everyone on the internet by default'. Yet simultaneously: 'Facebook Groups are the place for small group communication and for people to share their common interests and express their opinion. Groups allow people to come together around a common cause, issue or activity to organize, express objectives, discuss issues, post photos and share related content' (Facebook). Thus Facebook stresses that the main difference between Groups and Pages is that Pages are used by official public figures or entities to share updates, whereas Groups can be created by any user in order to interact with other users who share the same interests. Facebook's suggested Pages and Groups features recommend to users what might be of personal interest, recommendations that are predicted by an algorithm. In a different text, Gillespie explains that when Amazon recommends a book 'that "customers similar to you" bought, it is invoking and claiming to know a public with which we are invited to feel an affinity - though the population on which it bases these recommendations is not transparent, and is certainly not coterminous with its entire customer base. These algorithmically generated groups may overlap with, be an inexact approximation of, or have nothing whatsoever to do with the publics that the user sought out' ("The Workings"). Facebook algorithms assist users in finding Groups, Pages and friends and, similarly to the Amazon example, supposedly know users' preferences. In relation to the filter bubble critique, there is a case to be made that algorithmic recommendations serve as 'invisible autopropaganda; indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown' (Pariser, "The Filter Bubble", italics added).
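By way of illustration of the 'customers similar to you' logic Gillespie refers to, here is a minimal, hypothetical sketch of user-based collaborative filtering over purchase histories. It is a toy, not Amazon's actual system, and the users and books are invented; note that the 'public' it invokes amounts to nothing more than an overlap count.

```python
# Toy purchase histories; user and book names are invented.
purchases = {
    "user1": {"book_a", "book_b", "book_c"},
    "user2": {"book_a", "book_b", "book_d"},
    "user3": {"book_e"},
}

def recommend(purchases, target):
    """Suggest books bought by the user whose basket overlaps most with the target's."""
    own = purchases[target]
    # "Customers similar to you" = all other users, ranked by basket overlap.
    similar = sorted(
        (u for u in purchases if u != target),
        key=lambda u: len(purchases[u] & own),
        reverse=True,
    )
    for u in similar:
        new_items = purchases[u] - own
        if new_items:
            return new_items
    return set()

print(recommend(purchases, "user1"))  # {'book_d'}, borrowed from the nearest "public"
```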

The filter bubble critique touches upon the broader field of polarization. Polarization is subject to much scholarly inquiry, including research on the workings of communication patterns, ideologies, fear, geography, and the influence of media on the ways in which people conceptualize and position issues (Dickinson, Foss and Chen 12). Findings suggest that the more individuals and groups speak about a specific issue, the more extreme their views will become (Mackie, "Social Identification Effects in Group Polarization"; Sia, Tan and Wei, "Group Polarization and Computer-Mediated Communication: Effects of Communication Cues, Social Presence, and Anonymity"; Sunstein, Going to Extremes: How Like Minds Unite and Divide). This effect is at work to an even larger extent during times of fear: after events such as the Charlie Hebdo shooting and the November 2015 Paris attacks, individuals tend to divide the world into two camps, those who are similar to them and the others, in order to restore a sense of certainty, security, and civility. Group threat theory implies that the stronger the threat against group interests (either perceived or real), the more negative the attitudes towards other groups of people grow (Hjerm 1259). This us-versus-them mechanism reifies stereotypes and solidifies beliefs (Dickinson, Foss and Chen 12). Arguably, social media platforms help intensify polarization, as individuals tend to seek and consume media that reinforces, accentuates, and therewith further polarizes their beliefs. It should be noted, however, that 'individuals' in this last sentence could just as well be replaced by 'algorithmic recommendations'.

3. METHODOLOGY

3.1 RESEARCH QUESTION

This thesis underlines the need to examine the effects of algorithmic recommendations on Facebook in relation to Islamophobic sentiments: user generated content targeting Islam, Muslims, immigrants and refugees.

RQ: Does Facebook suggest, in the form of algorithmic recommendations, populist Groups, Pages and People that generate Islamophobic sentiments, understood as indiscriminate negative attitudes or emotions directed at Islam, Muslims, immigrants and refugees?

This thesis aims to display that sentiments of Islamophobia, including practices of hate speech, are not only explained by the online behaviour of users, but that they are also shaped by algorithmic recommendations. As discussed in section 2.3 Facebook, a Curated Landscape, van Dijck argues that 'web 2.0 culture has been overwhelmingly hailed as a culture that celebrates the idea of sharing, participation, and community bonding' (13), or, in Zuckerberg's language, one can call it a platform that 'makes the world more social'. This research aims to contribute to studies that raise critical awareness of the culture of connectivity in relation to its effects on the circulation of Islamophobic sentiments.

Step 1

The research is divided into two steps. First of all, I created a fake Facebook profile of a Dutch girl named Lies Meerveld. Lies is 33 years old and supports the PVV. Through the character of Lies, this thesis explores whether Facebook's algorithmic recommendations - People You May Know, More Pages You May Like, suggested Pages and suggested Groups - draw Lies into a bubble of ostensibly populist Groups, Pages and People, in all probability framed against Islam, Muslims, immigrants and refugees. So, if Lies Likes the PVV Page or joins a PVV Group, does Facebook suggest politically and/or ideologically related Pages and Groups, or does the platform suggest contradicting viewpoints or, as mentioned in the OCCI campaign, counter speech?

Algorithmic recommendations embedded in Facebook organize information based on implicit and unexamined assumptions about popularity, relevance, and value. Kitchin argues that 'by examining what data are fed into an algorithm and what output is produced it is possible to start to reverse engineer how the recipe of the algorithm is composed' (10). However, 'understanding the work and effects of algorithms needs to be sensitive to their contextual, contingent unfolding across situation, time and space' (8). As such, this thesis considers an ethnographic diary a promising approach to document the algorithmic recommendations in relation to the inputs. The diary documented the proceedings of a fictional PVV supporter and the algorithmic recommendations under investigation on Facebook, over the period of one month. In order to come to grips with the algorithmic system, one should use carefully selected data and observe what is outputted under different scenarios. The ethnographic diary does not aim to uncover the recipe of the algorithm, however, but intends to gain insights into the possible effects of the emergent and constantly unfolding algorithmic recommendations on an individual level. As such, the diary can be understood as an attempt to adopt an attitude that is both curious and critical, and as a means to develop competences at the individual level: to understand how we interact with the digital world by exposing a process of technological connectivity through recording observations, thoughts and questions as they happen.

The ethnographic diary also respects Gillespie's notion of the sociology of algorithms. He argues that 'a sociological analysis must not conceive of algorithms as abstract, technical achievements, but must unpack the warm human and institutional choices that lie behind these cold mechanisms' ("The Relevance"). In the study "Post-demographic Machines", Richard Rogers argues that post-demographics 'could be thought of as the study of the data in social networking platforms, and, in particular, how profiling is, or may be, performed' (1). This ethnographic diary has affinity with Rogers' study of post-demographics in the sense that the 'interests are not the traditional demographics of race, ethnicity, age, income, and educational level - or derivations thereof such as class - but rather of tastes, interests, favorites, groups, accepted invitations, installed apps and other information that comprises an online profile and its accompanying baggage' (2). Facebook can be seen as a platform that captures and uses data, wherein the users themselves input data in order to create and maintain social relations (Rogers 2). As such, one has to ask 'how profilers [such as Facebook] recommend information, cultural products, events or other people ("friends") to users, owing to common tastes, locations, travel destinations and more' (Rogers 2).
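To keep the diary comparable across the month, each observation can be recorded in a fixed structure. The sketch below is a hypothetical Python schema for such entries; the field names are illustrative rather than part of a prescribed protocol. Every entry couples an input action by the research profile to the recommendations observed afterwards.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DiaryEntry:
    """One observation session of the research profile."""
    day: date
    action: str                          # the input, e.g. "liked the PVV Page"
    suggested_groups: list = field(default_factory=list)
    suggested_pages: list = field(default_factory=list)
    suggested_people: list = field(default_factory=list)
    notes: str = ""                      # free-form ethnographic remarks

diary = []
diary.append(DiaryEntry(
    day=date(2016, 4, 1),
    action="joined a PVV Group",
    suggested_groups=["<group title as displayed>"],
    notes="Suggestions appeared in the right-hand column of the Group page.",
))
```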

Step 2

The second step of this research aims to identify Islamophobic sentiments placed within the algorithmically suggested Groups (the outcome of the first step). The first step explored whether the algorithmic recommendations potentially create a populist bubble framed against Islam and Muslims; potentially, because the results only display the title of the recommended Group. The second step investigates this bubble more closely by considering whether the user generated content posted in a certain Group can be classified as Islamophobic. According to Bonnie Brennen, the author of Qualitative Research Methods for Media Studies, one of the most challenging aspects of this method of research is to gain access to a group, culture or organisation (163). Although on Facebook the Public Groups are accessible for observation, one needs to be a member in order to observe the Closed Groups. Depending on the settings of the Group, one needs to be accepted by either the administrator of the Group or by one of the current members. As the second step is inherently linked with the first step, I use the profile of Lies to explore the user generated content posted in the suggested Groups.

To gain an indication of the presence of Islamophobic content, I calculated how many of the last 50 photos target Muslims, Islam, immigrants or refugees. Moreover, this part of the study includes a content (image and text) analysis of a sample of posts including a photo within the suggested Groups. In every Group, the column on the left displays the option 'recent Group photos, see all' (see screenshot 3). By clicking on this option, Facebook displays an overview of all the photos added in the specific Group, organised by date (see screenshot 4). I decided to research Groups instead of Pages as all members are allowed to post and share photos.

The section 2.2.2 Islamophobia, an Identifiable Phenomenon? stressed that there should be a sufficient degree of overlap between one's reading and the other readings in play in the culture at the time (Sayid 22). In brief, I referred to the Online Hate Prevention Institute (OHPI), which released the report "Islamophobia on the Internet: The Growth of Online Hate Targeting Muslims", examining the different types of anti-Muslim hate on the platform Facebook. The report aimed to map Islamophobia on Facebook 'to the extent that this content gives a window into hate speech against Muslims that is currently circulating in society' (Oboler 6). Capturing the multiple forms in which anti-Muslim sentiment is expressed online helps to understand the specific types of attitudes that link Muslims to a perceived feeling of fear and threat. For the second step of the research I decided to adopt the report's categorization, as its researchers analysed similar content. They examined 50 anti-Muslim Facebook Pages with names such as "The Islamic threat" and "Mohammad the PIG". From these 50 Pages the report documented 349 images of anti-Muslim hate and divided them into seven categories of hate (A-G), plus a residual category (H) for images containing no form of hate:

● category A, Muslims as a Security Threat or Threat to Public Safety
● category B, Muslims as a Cultural Threat
● category C, Muslims as an Economic Threat
● category D, Content Dehumanising or Demonizing Muslims
● category E, Threats of Violence, Genocide and Direct Hate Targeting Muslims
● category F, Hate Targeting Refugees/Asylum Seekers
● category G, Other Forms of Hate
● category H, No Form of Hate

The report explains the categories as follows. Category A indicates that Muslims are dangerous; it often equates Muslims with terrorism, lawlessness or a desire to kill anyone who is not Muslim. In category B, the cultural threat argument is often linked to the idea of a Muslim "take over", either through immigration or demographics; in the section 2.2.1 Islamophobia, a Contested Phenomenon, this is also explained as fear of a multicultural society. Category C suggests Muslims are a threat to economic well-being. Category D approaches make the victim appear a legitimate target for attack, or undeserving of the basic rights and respect that would be accorded to others in society. The following category, E, is a broad category for all content that promotes violence against Muslims. Category F involves an overt attack on Muslims as refugees or asylum seekers.
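As a sketch of how the photo counts in this step can be tabulated, the Python below tallies manually coded photos against the categories listed above. The coding itself remains a qualitative judgement; the entries in coded_photos are hypothetical examples, not data from this study or the OHPI report.

```python
from collections import Counter

# OHPI categories from the list above.
CATEGORIES = {
    "A": "Security threat", "B": "Cultural threat", "C": "Economic threat",
    "D": "Dehumanising/demonizing", "E": "Violence and direct hate",
    "F": "Hate targeting refugees", "G": "Other forms of hate",
    "H": "No form of hate",
}

# Hypothetical manual coding of the most recent photos of one suggested Group:
# each photo is assigned one category letter by content analysis.
coded_photos = ["A", "H", "F", "B", "H", "A"]  # ... continue up to 50 photos

counts = Counter(coded_photos)
targeted = sum(n for cat, n in counts.items() if cat != "H")
share = targeted / len(coded_photos)
print(f"{targeted} of {len(coded_photos)} photos target Muslims, Islam, "
      f"immigrants or refugees ({share:.0%})")
for cat, n in counts.most_common():
    print(f"Category {cat} ({CATEGORIES[cat]}): {n}")
```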

3.2 A PRIORI CONSIDERATIONS

Data Collecting

On the 20th of April, 2016, the leading Dutch digital rights organisation Bits of Freedom (BOF) published an opinion article referring critically to Pages removed by Facebook on account of hate speech (van der Kroft). On May 6th, 2016, Facebook removed the Group 'Steun de PVV' (Support the PVV) after several users posted threatening messages targeting the Dutch member of parliament Tunahan Kuzu, who is associated with the political movement DENK (van der Galien). As mentioned in the introduction, Zuckerberg announced that Facebook would actively fight hate speech. Therefore, it is likely that users who are distributing hate against Islam, Muslims, immigrants and refugees will be blocked during the research, and that hate-inciting Groups and Pages may be removed. User initiatives such as the Facebook community Stop de racistische Pagina's (Stop Racist Pages) actively report Groups, Pages and Users making use of hate speech.

Research Browser

Before I created the profile of Lies, I downloaded a new browser (Firefox) and created a new email address on yahoo.com. Although this research has little to do with Google Search, I
