
Content Removal by Commercial Social Media Platforms: Implications for Compliance with Local Laws

By Afef Abrougui (student number: 11102950)

Under the supervision of Dr. Stefania Milan

Master’s in Media Studies: New Media and Digital Culture
University of Amsterdam: Graduate School of Humanities

24 June 2016


Table of Contents

Acknowledgements
Abstract
Keywords
Chapter 1: Introduction
Chapter 2: Literature Review
    Speech Regulation in Private Hands
    Governments as Powerful as Ever
    Implications for Compliance with Local Laws
    Not All Requests are Treated Equally
    Involving Users in Questions of Governance
Chapter 3: Research Design
    Turkey as a Case Study
    Data collection methods
    Data analysis methods
Chapter 4: Findings and Discussion
    Implications of the Country-Withheld Content Policies
    Facebook and Twitter: Platforms of Concern
    A Demand for More Transparency
    How Should Platforms Handle Requests
    Interference of Business Interests
Chapter 5: Conclusion
Appendices
    Appendix A: Interview Guide
    Appendix B: Two Interview Samples
    Appendix C: List of Interviews
Bibliography


Acknowledgements

I would like to express my gratitude to my supervisor Dr. Stefania Milan for her guidance and support throughout the entire writing process. I would also like to thank all the respondents who contributed their valuable insights and knowledge to this dissertation.


Abstract

Taking Turkey as a case study, this thesis examines the implications of country-withheld content (CWC) policies, which enable governments to request that social media platforms remove content for violating local laws. CWC policies have serious implications for freedom of expression online. As the case study shows, speech-restrictive governments like that of Turkey are exploiting social media platforms’ policies of compliance with local laws to crack down on legitimate speech and silence critics. The dissertation draws upon previously published research and primary data collected through interviews with Turkish and non-Turkish net freedom advocates. This thesis contributes to existing research by exploring the attitudes of the net-freedom activist community towards content removal in compliance with local laws. Respondents are concerned about governments’ abuse of these policies and platforms’ increasing compliance. Further, they want to see platforms appealing these requests more often, and revealing to the public their criteria for compliance.


Keywords

Country-withheld content (CWC), governance, transparency report, content removal (takedown) requests, country-specific restrictions, compliance.

Chapter 1: Introduction

Enthusiasts of new media technologies, or “cyber-optimists” (Haunss 33), maintain that social media platforms empower citizens to do their own reporting and publish stories that receive little to no coverage from corporate- or government-owned media, and that they enable the emergence and coordination of social movements and protests. This stance is often backed by the coverage of real-life events, including natural disasters, riots, elections, and protests, by citizens using their mobile phones, an internet connection, and “free” commercial platforms on which they publish and disseminate their content. According to social media theorist Clay Shirky, the internet empowers anyone who has a message to easily spread it to a global audience. In his book Public Parts, journalism professor Jeff Jarvis describes the internet as “everyone’s printing press”, adding that “all of us no longer watch the same, shared news with the same, one-size-fits-all viewpoint” (24). As Yochai Benkler points out in his book The Wealth of Networks, the internet enables the creation of a networked public sphere, providing “anyone with an outlet to speak, to inquire, to investigate, without need to access the resources of a major media organization” (12). He goes on: “We are seeing the emergence of new, decentralized approaches to fulfilling the watchdog function and to engaging in political debate and organization” (Benkler 12). For instance, content posted by users on social media platforms, or user-generated content (UGC) in general, has made available to users “new hope and new possibilities for public reinvolvement in affairs of common interest” (Langlois 92).


When anti-government protests broke out in late 2010 in Tunisia, the state-owned media and the privately-owned radios and TV stations that existed at that time either turned a blind eye or focused on airing the government’s narrative, which depicted protesters as “thugs” (Alakhbar), ignored their demands, and downplayed the number of victims (Ryan). On Facebook and Twitter, however, videos and photos of police brutality against defenceless protesters demanding “jobs, freedom and dignity” were widely circulated, in a challenge to the mainstream media’s coverage of the events (Delany). In another instance showcasing how content generated by users empowers involvement in affairs of public interest, the dissemination of footage of police brutality against unarmed black men and women in the US helped spark a debate about discrimination in the country’s criminal justice system (Laughland and Swaine). On 17 July 2014, 43-year-old Eric Garner was pronounced dead in hospital following his arrest by the New York Police Department (NYPD). Witness Ramsey Orta captured Garner’s final moments in a video he filmed with his mobile phone, before handing it over to the New York Daily News. In the video, Garner can be heard repeatedly uttering the phrase “I can’t breathe” as an NYPD officer placed him in a chokehold (Sanburn). Ever since, several more videos showing excessive, and at times deadly, use of force by police officers against unarmed black Americans have been recorded and published by bystanders and witnesses. Speaking to CNN, one editor of a black newspaper said that they have been covering police abuse for years, but today “there are more ways to expose it”, thanks to tools that allow instant dissemination of footage (McLaughlin).

In this regard, social media platforms and content hosting services like to maintain that their mission is all about empowering users to communicate and express themselves freely (Mackinnon). Youtube provides its users with “a forum to connect, inform, and inspire” (Youtube About Page), while Twitter allows them to “watch events unfold, in real time, from every angle” (Twitter homepage). Facebook, on the other hand, “connects [its users] with friends and the world around them” (Facebook homepage). Over the past years, however, these platforms have been increasingly “facing questions about their responsibilities to their users, to key constituencies who depend on the public discourse they host, and to broader notions of the public interest”, writes communication studies professor Tarleton Gillespie (“The Politics of Platforms” 348). Governance of speech is one area of regulation platforms often face questions about.

Commercial social media platforms and other online content-hosting services are continuously under the spotlight over their governance of the troves of content their users generate on a daily basis. Providing services to a global community of consumers living under different cultural norms and jurisdictions, these platforms face the challenge of drawing the line between what is acceptable speech and what is not (Gillespie, “Facebook’s improved…”). When Facebook and other corporate actors make decisions about which type of content is allowed on their platforms and which is not (whether it is nudity, hate speech or graphic content), they are authorising themselves to act as “custodians of content” (Gillespie, “The dirty job of…”). Mostly owned by American companies, these platforms are also under increased pressure to comply with requests made by foreign governments to remove content under repressive laws, which goes against their promise of user empowerment. For example, during the second half of 2015, Twitter received 2,211 content takedown requests from Turkish authorities over content said to violate local laws, and complied with 23 percent of those requests (Twitter Transparency Report). Turkish activists say that among the content affected were tweets and accounts addressing corruption and criticising public figures (Daraghi and Karakas).


Serving as a “quasi-public sphere” (York, “Policing content…” 3) for a large number of international users, including those based in high-growth countries under undemocratic systems of governance, platforms will continue to face a never-ending dilemma: which comes first, their commitment to freedom of expression or the pursuit of revenue growth? This dilemma often leads to contradictions in the ways platforms handle government takedown requests. In one situation a platform may choose to fight a government request on the grounds that it violates the right to free speech, while in another it may decide to comply even though its compliance clearly violates that same right. Policies of compliance with local laws, and the lack of transparency about how such policies are enforced, leave users at the mercy of private social media platforms and their interests. This has serious implications for freedom of expression online. As the case study will show, Turkish users seeking to express themselves away from government censorship often find themselves silenced, at the request of their government, by the very platforms that were supposed to enable their right to free expression.

In this dissertation, I explore how commercial social media platforms interfere with users’ free speech rights, and how foreign governments and jurisdictions affect their content regulation policies and practices, taking Turkey as a case study. The Turkey case study will look into the implications of compliance with local laws for Turkish users’ exercise of their online free speech rights, by combining data collected from interviews conducted mostly with Turkish internet freedom advocates, and reports illustrating the impact of these policies on Turkish users, such as news reports of content being taken down or transparency reports released by the companies.

The literature used in this dissertation includes academic research in the fields of platform studies, social media governance (Flew) and the privatisation of speech regulation. Research conducted by organizations and activist groups advocating for better protections for human rights online, such as the Electronic Frontier Foundation and the Ranking Digital Rights project, is also referred to in this dissertation, as their work usually illustrates concrete examples and cases. Throughout the paper I will be referring to the country-withheld content (CWC) rule or policy. This is how Twitter describes its policy of withholding content in a specific country for violating local laws (Twitter Help Center). Other platforms and companies do not name this policy; they simply refer to country-specific restrictions or government requests for content removal. Though I focus on Turkey as a case study, I will be referring to other countries whenever needed for the purpose of providing relevant examples. Chapter two is a review of the existing literature on the privatisation of speech regulation and its implications for users’ free speech rights. Chapter three lays down the data collection and analysis methods used, while in chapter four I analyze and discuss the findings of the Turkey case study.

Chapter 2: Literature Review

When it comes to the regulation of speech online, commercial social media platforms play a powerful role in deciding what kind of speech is permissible and what is not. Through algorithms, content moderators, and community-reporting or flagging tools, platforms govern the troves of content posted by their users on a daily basis, to make sure that certain red lines such as nudity, threats of violence and hate speech are not crossed. These platforms are not the only powerful actors in the regulation of online content. In fact, governments are increasingly influencing content removal decisions, particularly as platforms have started to abide by local laws.


Speech Regulation in Private Hands

By taking and enforcing decisions about what is (and should be considered) appropriate content and what is not, commercial platforms have come to exercise significant power over public speech (Langlois, MacKinnon, Zuckerman). They have been described as “custodians of content” (Gillespie, “The dirty job of…”), “proxy censors” (Kreimer 14), a “social media police force” (Dencik), and “digital superpowers” (MacKinnon 34).

Social media platforms, search engines, and app stores occasionally face accusations of censorship, raising questions about their policies and the power they have gained over our rights and liberties, including our rights to free speech and access to information. Every now and then, we hear of a photo or a video taken down, an account suspended, or a link removed from the results of a search engine. In February 2016, Facebook removed a photograph by Tunisian photographer Karim Kammoun showing bruises on the naked body of a woman who had survived domestic violence (Marzouk); the platform does not allow nudity. On 31 December 2015, Twitter suspended the account of human rights activist Iyad El-Baghdadi for half an hour after confusing him with Abu Bakr al-Baghdadi, leader of the terrorist group ISIS (BBC, “Twitter ‘confuses’”...). Twitter has also cracked down on groups and individuals using its service to threaten or promote terrorist acts, banning over 125,000 accounts between mid-2015 and early 2016 (Twitter). In September 2015, Apple removed from its app store Metadata+, an application that maps US drone strikes around the world, citing “excessively crude or objectionable content” (Smith).

The cases we hear about from time to time in no way reflect the full picture, as not all cases become public and social media platforms do not publish data about actions taken to enforce their terms of use. Ranking Digital Rights, a project that ranks ICT-sector companies on respect for free expression and privacy, including the internet companies that own the most popular platforms such as Google, Twitter and Facebook, found that not one of the eight internet companies it evaluated publishes information about its terms of service enforcement, including the number of accounts affected and the types of content restricted.

The internet is often credited for its decentralized infrastructure, and the new possibilities it offers citizens to publish and access content without any interference, particularly from governments (Hintz). While attending the 1996 World Economic Forum in Davos, John Perry Barlow, founder of the digital-rights advocacy group the Electronic Frontier Foundation (EFF), wrote and published “A Declaration of the Independence of Cyberspace”. Addressing governments in the declaration, Barlow wrote: “you have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear”. Written on the same day the then US President Bill Clinton signed the Communications Decency Act to regulate “obscene” content on the internet, the declaration warned governments that the “cyberspace does not lie within [their] borders” (Greenberg).

This belief in the emancipatory power of the internet soon proved to be “naive” (Morozov xiii), as both governments and corporate actors managed to acquire key roles in governing cyberspace and the rights of users. In addition, these two actors are increasingly collaborating with each other, whereby the “state outsources interventions into citizens’ communication to these platforms” (Hintz 195). In this regard, cyberspace is neither “independent” nor “sovereign”, as Barlow declared it to be in 1996. “While there might be a radical decentralization of communication online, it does not mean that power relations have disappeared”, writes communication studies scholar Ganaele Langlois (99). According to Crawford and Lumby, networked media regulation “accounts for a diverse, contested environment of agents with differing levels of power and visibility: users, algorithms, platforms, industries and governments” (9). In fact, unlike the regulation of traditional media, which usually involves governments and industry actors, new media governance further involves consumers and computer code.

In the governance of speech online, commercial platforms are key actors, acting as both hosts of and gatekeepers to content generated by their users, through their algorithms and Terms of Service (ToS). Through algorithms, platforms decide what content is most relevant to a specific user (Tufekci). For instance, Google’s algorithms rely on more than 200 “clues”, such as freshness of content and location, “to guess what [users] might really be looking for” (Google Inside Search). In another example, Facebook’s EdgeRank algorithm decides which stories are prioritized on each user’s newsfeed according to his or her connections and activity (Facebook Help Center). Algorithms further help companies enforce their terms of service, alongside community reporting and content moderators (Youmans and York). For instance, Youtube has algorithms that help it detect pornographic content (Robinson), while Facebook’s artificial intelligence algorithms are flagging more photos as “offensive” than users or content moderators do (Constine). Terms of Service, on the other hand, play the role of a legally binding document between users and platforms (Bayley), and they “function much as traditional laws do” (Ammori 2276), since they determine what speech is permissible and what is not. Platforms have different conditions, and as a result what is permissible on one platform may not be allowed on another. For instance, while Facebook and Instagram have strict rules regarding nudity, Twitter only asks its users to mark graphic content or nudity as “sensitive” (Fitzpatrick).
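To make this mechanism concrete, the sketch below models ToS enforcement as a per-platform policy table applied to classified content. It is a minimal illustrative sketch, not any platform’s actual code: the policy tables, category labels, and function names are all hypothetical, and real systems rely on machine-learned classifiers and human review. The point is that the decision lives in the policy table rather than in the classifier, mirroring how the same image can violate Facebook’s rules yet merely require a “sensitive” label on Twitter.

    # Illustrative sketch only: a toy model of platform-specific Terms of
    # Service enforcement. All rules, labels, and names are hypothetical.

    FACEBOOK_LIKE_POLICY = {"nudity": "remove", "violent_threat": "remove"}
    TWITTER_LIKE_POLICY = {"nudity": "mark_sensitive", "violent_threat": "remove"}

    def classify(post_text):
        """Stand-in for a real content classifier: returns detected categories."""
        categories = []
        if "nude" in post_text.lower():
            categories.append("nudity")
        return categories

    def enforce(post_text, policy):
        """Apply one platform's policy table to a post.

        The same post can be removed on one platform and merely labelled
        'sensitive' on another, depending solely on the policy table.
        """
        actions = [policy.get(category, "human_review")
                   for category in classify(post_text)]
        if "remove" in actions:
            return "remove"
        if "mark_sensitive" in actions:
            return "mark_sensitive"
        return "keep" if not actions else "human_review"

    post = "artistic nude photograph"
    print(enforce(post, FACEBOOK_LIKE_POLICY))  # -> remove
    print(enforce(post, TWITTER_LIKE_POLICY))   # -> mark_sensitive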


Despite the increased responsibilities and powers in the hands of the private sector, governments are not completely sidelined, as they increasingly seek to exercise control by pressuring platforms to enforce national laws and take down certain types of content. For instance, these platforms have recently been under criticism from a number of governments for not doing enough to respond to the spread of extremist propaganda (Frenkel) and the surge of hate speech in the midst of Europe’s refugee crisis (Donahue). In addition, commercial platforms enlist their users as agents in the governance of speech, by providing them with tools to flag or report content for violating community guidelines and rules. Even though the final decision to remove or retain a piece of content remains in the hands of moderators or a company’s policy team, flagging is useful to platforms as it “provides a practical mechanism” (Crawford and Gillespie 2) for regulating the vast troves of content users post every day, and further “offers a powerful rhetorical legitimation” (ibid.) to platforms when they face criticism for deciding to either remove or keep a specific piece of content. These community-reporting tools have previously been used by government and law enforcement agencies in a number of countries. Crawford and Gillespie report that the UK’s Metropolitan Police has “super flagger” status to report extremist content on Youtube (Crawford and Gillespie 7), while industry sources confirmed to the authors of a UNESCO report entitled “Fostering Freedom Online: the Role of Internet Intermediaries” that governments “sometimes seek to have content restricted” via community reporting mechanisms (Bar, Hickok, Lim and MacKinnon 143).

In the governance of UGC, no actor intervenes independently, as algorithms, terms of service, consumers, corporations, and governments interact with one another, at different levels, to determine what speech is tolerated and what is not. Algorithms are not independent of the corporations that put them in place, and they act to enforce corporate choices and policies (Pariser 175). The regulatory choices and policies corporations adopt are, in turn, influenced by foreign laws and norms (Ammori 2014). The most popular social media platforms and content-hosting services are owned by transnational companies headquartered in the US. Though these platforms may be subject to US law, their terms of use and community guidelines are also influenced by foreign jurisdictions and norms. Most users of these platforms live outside the US: 83.6 percent of Facebook’s daily active users are outside the US and Canada (Facebook newsroom), and 79 percent of Twitter accounts are also outside the US (Twitter Usage). Thus, it is in a company’s interest to put in place harmonious policies influenced by foreign laws and norms in order to attract a broad range of users (Ammori 2263). In this regard, the First Amendment of the US Constitution, which guarantees freedom of speech, is but “a local ordinance”, writes Marvin Ammori, internet policy expert at the Center for Internet and Society at Stanford Law School (ibid.). This is reflected in the regulatory approaches social media platforms take to govern “controversial” content that is usually tolerated under the First Amendment, such as hate speech or nudity. In the US, the First Amendment offers broad speech protections compared to European legislation, and of course to that of undemocratic regimes. For example, while in several European countries hate speech is criminalised, this type of speech is more tolerated under the First Amendment unless it is accompanied by a threat or incitement “intended to produce an imminent illegal conduct” (Volokh). As far as nudity is concerned, in the US, even though “obscenity” is considered unprotected speech under the First Amendment, nudity and most pornography are still protected (Ruane). Yet American social media platforms have taken different approaches to regulating such content. Facebook, for instance, has a strict policy on nudity compared to other platforms like Twitter and Tumblr. While Facebook and Instagram ban photographs of genitals and female nipples (Facebook Community Standards), Twitter and Tumblr only require users posting similar content to flag their accounts and blogs (The Twitter Rules; Tumblr Community Guidelines).

Governments as Powerful as Ever

In addition to taking foreign laws and norms into account when crafting their terms of use and community guidelines, American social media platforms are increasingly expected to cooperate with governments. “We are forming our own Social Contract. This governance will arise according to the conditions of our world, not yours. Our world is different”, asserted Barlow in his 1996 declaration. The decentralised infrastructure of the internet promised a space independent of any government interference, where “anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity” (Barlow).

Barlow, like many other cyber-libertarians, failed to anticipate how governments would respond by purchasing and manufacturing technologies that allow them to filter and monitor this new, supposedly liberating space. For example, Iran is reportedly building a “halal internet”, an intranet with its own national and religiously permitted services that would isolate the Iranian cyberspace from the rest of the world (Doug). Undemocratic governments do not need to go as far as building their own national internet; they can simply acquire filtering and surveillance technologies, which they usually purchase from corporations based in western democracies. When trouble arises, such as an uprising or protests, they can resort to entirely shutting down access to internet services, as the Egyptian authorities did in January 2011 to clamp down on protests against the dictatorship of then President Hosni Mubarak (Williams).

Further, throughout the years governments have found ways to make internet companies collaborate with them, enlisting them as “proxy censors to control the flow of information” (Kreimer 14), and pressuring them to take down content for violating their local laws. Platforms, on the other hand, comply based on calculations of legal and financial risks (Zuckerman). For instance, companies that have an office in a foreign country are under a legal obligation to comply with that country’s laws; if they do not comply, they risk having their staff arrested or prosecuted. In countries where they do not have operations, platforms could risk being blocked, which in turn could result in loss of profits (see “Not All Requests Are Treated Equally” in this chapter and “Interference of Business Interests” in chapter four).

Facebook states that governments may ask it to restrict access to a piece of content if they believe it violates their local laws, such as legislation banning Holocaust denial in Germany. If Facebook finds the content in question to be in violation of local laws, it restricts access to it in the relevant country or territory (Facebook government requests report, “About the Reports”). Twitter’s international users “agree to comply with all local laws regarding online conduct and acceptable content” (The Twitter Rules). Google, on the other hand, states that it “regularly receives requests from courts and government agencies around the world to remove information from [its] products” (Google Transparency Report). Other services and platforms that receive requests from foreign governments and courts include the online collaborative encyclopedia Wikipedia (Wikimedia Foundation), the multinational technology company Yahoo (Yahoo Transparency Report) and the messaging application Snapchat (Snapchat Transparency Report).

In this regard, the “open” and “borderless” infrastructure of the internet is after all not without borders and constraints, and users are increasingly expected to behave according to the local jurisdictions under which they live. Otherwise, they may be silenced, at the request of their governments, by the same platforms that promise to empower them. Arne Hintz, senior lecturer at the Cardiff School of Journalism, Media and Cultural Studies, writes that “the deterritorialized spheres of the internet have partly been reterritorialized by states” (217). York, meanwhile, writes that “instead of an unregulated, decentralized internet, we have centralized platforms serving as public spaces: a quasi-public sphere”, which is “subject to both public and private content controls spanning multiple jurisdictions and differing social mores” (3).

Further, the fact that users across the world today rely on a few big corporations that own the most popular services and products makes it easier for states to exercise power in cyberspace. One particular company, Google, owns so many services that media studies professor Siva Vaidhyanathan described its expansion as the “googlization of everything” (Vaidhyanathan). Google’s services include the largest search engine, Google Search; one of the most popular online news-aggregating services, Google News; the blogging service Blogger; the video-sharing website Youtube, which has over a billion users (Youtube Statistics); and the biggest mobile operating system, Android, which in 2015 had a global market share of 81.5 percent (Hahn). Facebook, on the other hand, is by far the most popular social networking site, with more than 1 billion active users. Over the past years, the company has also purchased the photo-sharing service Instagram and the instant-messaging app Whatsapp. This concentration of ownership in the hands of a few corporate actors leaves users with very limited options and alternatives. For instance, activists living under repressive regimes might be reluctant to leave a large platform like Facebook for a service that is less popular but more supportive of freedom of expression, as they have better opportunities of reaching out to their audiences on the larger platform.


Implications for Compliance with Local Laws

Social media platforms’ local-law compliance policies have three main implications for users’ free speech rights. First, by agreeing to comply with local laws, internet companies and online platforms risk silencing what is considered legitimate speech under international human rights standards. Governments and platforms do have legitimate reasons to restrict certain types of content, such as child pornography and cyber-bullying, as international human rights law allows for certain restrictions that are “provided by law” and “necessary”. Under article 19 of the International Covenant on Civil and Political Rights (ICCPR), freedom of expression may be restricted for the purpose of ensuring “the respect of the rights or reputations of others, and the protection of national security or of public order, or of public health or morals”. However, legal restrictions on freedom of speech differ from one country to another: what is illegal speech in one country may be totally legal in another. And while some countries have in place legislation that is consistent with international human rights law, others do not. So when companies agree to comply with local laws, they also agree to honour requests that violate their customers’ free speech rights, which are guaranteed by international human rights law. This, in turn, contradicts the public commitment to free speech these platforms and their leaders make, as described earlier. For instance, between January and June 2014, Facebook restricted seven pieces of content inside Saudi Arabia for violating local laws that ban criticism of the royal family (Facebook Government Requests Report. Saudi Arabia). And between January and June 2015, Google received a total of 14 requests from Indian authorities to remove content for violating local laws that prohibit religious offense (Google Transparency Report. India). While Google is legally required to comply with Indian legislation since it has offices inside the country, Facebook has no offices in Saudi Arabia and is not in any way under an obligation to abide by Saudi law.

Second, there is the risk of misinterpreting and misunderstanding local laws, and as a result “inflict[ing] unacceptable collateral damage” on users’ free speech rights (Kreimer 47). As mentioned earlier, most users of social media platforms are based outside the US, living under a variety of jurisdictions, from the most restrictive to the most liberal. These jurisdictions and laws are themselves influenced by “informal institutions”, which include cultural norms and values, traditions and customs, and belief systems (Flew 1). As a result, to properly understand the local laws of a foreign jurisdiction, it is not sufficient to study the laws; it is equally important to understand the cultural, social and political contexts. Internet companies rely on content moderators to remove pieces of content from their platforms in order to enforce their terms of service. Recently, media coverage of and academic research into commercial content moderation revealed how major social media platforms outsource moderation to workers usually based in countries like India and the Philippines, who “soak up the worst of humanity in order to protect the rest of us” (Chen), in other words keeping our feeds “clean” of gore, nudity, hate speech and threats of violence. This raised questions as to how moderators can judge the appropriateness of content destined for an audience living in culturally, socially, and politically different environments from theirs. As media studies scholar Sarah Roberts puts it, content moderation tasks “often involve complicated matters of judgment and thus require a moderator be steeped in the social norms and mores of the places in the world for which the content is destined” (Roberts 2). These same questions apply to the legal teams of social media platforms tasked with addressing content takedown requests made by the governments and law enforcement authorities of foreign countries whose language, laws, judicial system, cultural norms and socio-political context they may not be familiar with. Though transnational American social media platforms regularly publish transparency reports shedding light on content takedown requests made by courts and law enforcement in foreign countries, there is a lack of transparency about the process of evaluating and responding to such requests, and particularly about the teams that respond to them. To what extent are these teams diverse in terms of the regions represented and the languages spoken, such that they can make sound decisions about which content to withhold and which to keep in response to requests made by governments from across the world? These platforms may be capable of understanding and accurately interpreting the laws of the countries in which they have offices and operations, but what about the larger number of countries in which they do not? “Even cultures that are broadly aligned — such as the United States and Europe — still have marked differences in their approaches to controversial issues”, notes internet governance and privacy expert Emily Taylor in a 2016 report for the Global Commission on Internet Governance (12). Taylor goes on to conclude that “making the right decision is difficult”, and that “for the most part, the line between what is acceptable and unacceptable is not so easy to draw; decisions are difficult and nuanced, and different cultures have varying levels of tolerance” (13).

The third implication is that undemocratic and anti-free speech governments will increasingly make requests to social media platforms to take down content they find “objectionable”, as they now expect these platforms to comply with their local laws. “As governments grow aware of the fact that stifling speech is as easy as submitting an order to a corporation, the number of those orders will drastically increase”, writes York (“Complicity in censorship…”). In recent years, there has indeed been an increase in the number of government requests received by social media platforms and companies offering content hosting services. Since it first started publishing data about third-party requests for content takedown or user data, also known as the transparency report, Google has seen the number of government removal requests more than triple: over the second half of 2009, the company received 1,062 requests, a number that reached 3,467 over the first half of 2015 (Google Transparency Report). Twitter, on the other hand, saw an even more dramatic increase in the number of takedown requests. In 2012, the platform received only 48 requests from governments, a number which jumped to 5,631 requests in 2015 (Twitter Transparency Report). Though the increase in the number of requests could partly be attributed to the growth in the number of users, the figures also reflect governments’ growing exploitation of content takedown policies. For instance, while Twitter’s number of users did not grow significantly over 2015 (Oreskovic), the number of requests the platform receives continued to rise: over the second half of 2015, Twitter received 4,618 requests, compared to only 1,013 during the first half of the same year. Similarly, during the first half of 2013, Google registered a 68 percent rise in the total number of government takedown requests compared to the second half of 2012 (Google Official Blog).
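As a quick check on these trends, the growth factors can be computed directly from the figures cited above (a short sketch; the numbers are those reported in the respective transparency reports, while the variable names are mine):

    # Growth factors derived from the transparency-report figures cited above.
    google_h2_2009, google_h1_2015 = 1062, 3467
    twitter_2012, twitter_2015 = 48, 5631

    print(f"Google: x{google_h1_2015 / google_h2_2009:.1f}")  # x3.3
    print(f"Twitter: x{twitter_2015 / twitter_2012:.1f}")     # x117.3

    # Twitter's half-on-half jump within 2015:
    rise = (4618 - 1013) / 1013 * 100
    print(f"H2 2015 vs H1 2015: +{rise:.0f}%")                # +356%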

Not All Requests are Treated Equally

Commercial platforms are owned by large internet companies and corporations whose aims include increasing the number of users, boosting revenues and entering new markets (Youmans and York). In this regard, a ban on a commercial platform, particularly in large and profitable markets, would result in the loss of users and revenues. For instance, with more than 44 million internet users as of 2015 (Internet Live Stats), Turkey represents an important market for platforms, and companies might prefer to censor their users there rather than face significant financial losses.

Though “business considerations may trump civic considerations” (Youmans and York 324), engaging in too much censorship could also be harmful to commercial platforms, by bringing about bad publicity (Zuckerman). As a result, these platforms need to strike a balance between their financial interests and user satisfaction. In other words, they need to make sure that they do not upset governments, but do not censor too much either. As Roberts writes in reference to content moderation by commercial platforms: “these decisions may revolve around issues of ‘free speech’ and ‘free expression’ for the user base, but on commercial social media sites and platforms, these principles are always counterbalanced by a profit motive; if a platform were to become notorious for being too restrictive in the eyes of the majority of its users, it would run the risk of losing participants to offer to its advertisers” (5). Ammori (2279), meanwhile, notes: “these companies must figure out how to adopt and enforce policies that comply with various national laws while advancing a corporate (and individual) interest in freedom of expression”.

The need to strike this kind of balance can lead to inconsistencies in the free speech policies and practices of these platforms. What a platform agrees to restrict in one country (or in one context), it may refuse to take down in another country (or in a different context). In other words, platforms may be more willing to collaborate with authorities in countries where they have high stakes than in those where they do not. Following the 2015 attack against the French satirical magazine Charlie Hebdo, Facebook founder and CEO Mark Zuckerberg posted a #jesuisCharlie support message and revealed that in 2010 his platform had refused to ban content deemed offensive to the Prophet Muhammad in Pakistan, despite the death threats he received (Zuckerberg). A few days later, however, Facebook agreed to ban pages for insulting the same prophet in Turkey, following a court ruling (Akkoc, “Facebook censors…”). Turkey and Pakistan are both countries with Muslim-majority populations and legislation banning religious offense. In another example, while in the US Google fought a court order demanding the removal of the controversial “Innocence of Muslims” film posted on Youtube in 2012, and won (Roberts J), the company still agreed to remove the same video inside Pakistan. Authorities there had blocked Youtube since 2012, until Google agreed to launch a localized (in other words, censored) version of its video-sharing platform in the country (Variyar).

For years, several transnational companies sought to rely on American free speech legislation in response to foreign governments’ content takedown requests, but they eventually ended up adopting country-withheld content policies (Ammori). Unlike a free speech policy based on the American legal system, a country-withheld content policy gives social media platforms more flexibility in dealing with government requests, while allowing them to avoid the risk of being completely banned. Platforms are able to push back on certain requests, but they can always comply when the stakes are high. Meanwhile, if they face criticism from activists and free speech advocates, they have local laws to blame. To give an example, during a Facebook Q&A event, Zuckerberg defended his company’s decision to operate in speech-restrictive countries by laying the blame on local laws. He said: “Most countries have laws restricting some form of speech or another...The problem is if you break the law in a country then often times that country will just block the whole service entirely... I can’t think of many examples in history where a company not operating or shutting down in a country in a protest of a law like that has actually changed the law. I think overwhelmingly our responsibility is to continue operating” (Q&A with Mark).

Involving Users in Questions of Governance

Regulation of speech online is part of the broader question of internet governance, to which several international and United Nations-led conferences and meetings have been dedicated over the past years, most notably the Internet Governance Forum (better known by its acronym, IGF). Governments and corporations today dominate the Internet Governance (IG) process, including the regulation of our speech rights and the shaping of what kind of content users are and are not allowed to post online, as shown above.

One principle of the internet governance process is “multistakeholderism”, which refers to the participation on an equal footing of multiple stakeholders, including governments, non-governmental organizations, the private sector, academics and the technical community, in forums and discussions that shape IG policies at the international, regional and local levels (Diplo). This principle, however, is not always honoured, as powerful actors (governments and the private sector in particular) often enjoy larger representation in such events than civil society and other marginalised communities, mainly thanks to the resources they have access to. In some regional or national IGFs, government and private actors also have the power to shape the agenda of the meeting and to exclude or marginalise certain participants and communities, civil society in particular. For instance, the 2015 Arab region IGF edition focused on financial and security concerns while marginalising human rights issues (Nachawati Rego). In addition, not a single civil society member took part in any of the forum’s plenary sessions (Tarakiyee).

It may not be possible to adopt a multi-stakeholder approach in every single decision or policy corporations take or approve, but greater involvement from users, rights groups and activists is needed. Over the past few years, corporations and civil society actors have taken a number of steps to maximize protections for user rights. In 2008, the Global Network Initiative (GNI) was founded with the purpose of “advancing freedom of expression and privacy in Information and Communication Technologies” (GNI brochure). ICT companies, civil society organizations, academics, and investors are involved in GNI activities, including companies like Google, Microsoft and Yahoo, and civil society groups like Human Rights Watch and the Committee to Protect Journalists. GNI put in place “principles on freedom of expression and privacy”, which member companies commit to implement. In another positive development, several companies today publish transparency reports providing insights into requests made by governments for user data or content takedown, even though the information provided is often not comprehensive, and as a result does not allow users and rights groups to get the full picture and hold these companies to account.

However, there is still much to do, not only in terms of enhancing transparency and maximizing rights protections, but also when it comes to involving customers. In her book Consent of the Networked, internet freedom advocate Rebecca Mackinnon refers to the principle of the “consent of the governed” in political philosophy, under which the authority of a government should be derived from the consent of its people. In cyberspace, however, there is no such explicit consent, except for the terms of use most users agree to without reading. This makes governments and corporations “sovereigns operating [in the cyberspace] without the consent of the networked” (Mackinnon, xxiii). In this regard, netizens need to play greater roles in influencing decision- and policy-making processes and holding companies to account. This, however, is only possible when companies “build processes for engagement with users and customers, who are re-envisioned as constituents”, writes Mackinnon (248). “If companies want to gain trust amongst users, they need to be aware of the human rights implications of their policies and develop systems to resolve issues between activists—as well as average users—and companies”, noted York in a 2010 report entitled “Policing Content in the Quasi-Public Sphere” (29). One area in which social media platforms can engage users and civil society actors is the development of “realistic and robust processes for content moderation that comply with international human rights standards” (Taylor 16). The development of such standards would make it easier for companies to make decisions regarding content removal, and easier for users to hold the companies to account. A number of respondents in the Turkish case study also mentioned the development of such standards as one of the solutions to mitigate threats to free speech rights when companies take content removal decisions in compliance with local laws (see chapter four).

Chapter 3: Research Design

The purpose of the case study is to examine the attitudes of internet freedom advocates towards the country-withheld content policies and practices of social media platforms. These attitudes are explored through interviews conducted via email or Skype, mostly with Turkish activists.

Turkey as a Case Study

Over the past few years, Turkey has been gradually heading toward more authoritarianism under the rule of the Justice and Development Party (AKP), led by one of its founders, Recep Tayyip Erdogan, who currently serves as the country’s President after winning the presidential election in 2014 (BBC, “Recep Tayyip Erdogan: Turkey’s ruthless president”). In his bid to consolidate his grip on power, Erdogan staged a crackdown on civil and political liberties, including press and media freedoms (Kirişci). In 2013, the Committee to Protect Journalists (CPJ) listed Turkey as the world’s “leading jailer of journalists” (Taal), documenting numerous attacks on and legal cases against journalists covering the anti-government Gezi protests (Dewey). Detentions and prosecutions of journalists remain common to this date. On 6 May, a court in Istanbul sentenced two journalists from the opposition newspaper Cumhuriyet to jail for revealing to the public Turkish arms shipments to Syrian rebels (The Economist).

In addition to silencing journalists and cracking down on media coverage, Erdogan has also been wary of the internet’s role in supporting political speech in Turkey, describing social media as a “menace to society” and threatening to “eradicate” Twitter in 2014 (Ries). Seeking to control the internet, the parliament passed legislation that requires Internet Service Providers (ISPs) to keep a record of their users’ online activities for two years (Hurriyet, “Turkish parliament…”) and gives the government the power to shut down websites without a court order (Hurriyet, “New bill gives…”). Further, the country’s internet censorship law (Law No. 5651, passed in 2007, on the Regulation of Publications on the Internet and Combating Crimes Committed by Means of Such Publications) bans, among other types of content, child pornography, “obscenity”, and criticism of the founder of the Turkish Republic, Mustafa Ataturk (Akgul). Prosecutions of users for exercising their right to free speech online are not uncommon in Turkey. In 2014, for instance, 29 Twitter users were put on trial for sending tweets related to the 2013 Gezi protests (Amnesty).

The Turkish government repeatedly resorts to blocking social media platforms (Kasapoglu) as a way to “bully” them into complying with its censorship demands (Galperin). In March 2014, Facebook, Twitter, and Youtube were temporarily blocked to halt the circulation of leaked audio recordings purportedly showing evidence of corruption within the inner circle of then Prime Minister (and current President) Recep Tayyip Erdogan. Turkish authorities also block or throttle access to social media platforms to thwart coverage of terror attacks. Following a deadly car bomb attack in the capital Ankara on 17 February, activists reported that ISPs either blocked or slowed down access to Twitter and Facebook (Sozeri, “Turkey cracks down…”). Blocking tactics often backfire, with users usually circumventing the censorship. For example, in March 2014 Twitter users in Turkey posted 2.5 million tweets in the hours after ISPs blocked access to the site (Gayomali).

In addition, Turkish authorities have been increasingly taking advantage of content removal policies to pressure social media platforms, particularly Facebook, Twitter, and Youtube, to comply with the country’s anti-free speech laws. Recently, following a complaint filed by Turkcell, one of the country’s largest mobile service providers, a court ordered Twitter to delete 862 tweets about a child abuse scandal at a religious foundation that is linked to the company and has close ties to President Erdogan and his family (Sozeri, “Turkish court orders…”). In fact, Turkey makes a significant number of requests to different companies. During the second half of 2015, Turkish authorities made 2,211 content removal requests to Twitter, almost half the total number of requests received by the social networking site over that period (Twitter Transparency Report). Though the volume of requests can partly be explained by growth in the number of internet users in the country, Turkey still makes more requests than countries with larger numbers of users, such as the US, Germany or India.

Even though the US, Germany, and India have more internet users (see Table 1), Turkey made more content takedown requests to both Google and Twitter from 2012 to mid-2015, as shown below in Figures 3.1 and 3.2. Facebook data is not represented because the company only provides the number of pieces of content restricted and does not publish the number of requests it receives from each country. But between January and June 2015, the company restricted 4,496 pieces of content inside Turkey, mostly under the country’s internet censorship law. Only India ranks ahead of Turkey, with more than 15,000 pieces of content restricted. However, India has more users than Turkey. In addition, the company has several offices in India, making it subject to the country’s legislation.

Year    Turkey [1]    Germany [2]    India [3]      US [4]
2012    33,779,438    66,273,592     158,960,346    249,635,976
2013    35,253,433    67,812,285     193,204,330    267,028,444
2014    39,568,141    69,509,013     233,152,478    279,070,327
2015    43,953,971    70,569,048     354,114,747    283,712,407

Table 1: Growth of the number of internet users in Turkey, Germany, India, and the US from 2012 to 2015. Source: Internet Live Stats; data elaborated by the International Telecommunication Union (ITU), the World Bank and the United Nations Population Division.

Figure 3.1: Number of removal requests made by Turkey, Germany, India and the US to Google from 2012 to 2015. * denotes first half of 2015 only. Source: Google Transparency Report, removal requests from Germany, India, Turkey, and the US.

[1] Number of Internet users in Turkey. Internet Live Stats. Accessed on 30 April 2016. <http://www.internetlivestats.com/internet-users/turkey/>.

[2] Number of Internet users in Germany. Internet Live Stats. Accessed on 30 April 2016. <http://www.internetlivestats.com/internet-users/germany/>.

[3] Number of Internet users in India. Internet Live Stats. Accessed on 30 April 2016. <http://www.internetlivestats.com/internet-users/india/>.

[4] Number of Internet users in the US. Internet Live Stats. Accessed on 30 April 2016.


Figure 3.2: Number of removal requests made by Turkey, Germany, India and the US to Twitter from 2012 to 2015. Source: Twitter Transparency Report.

It is for the reasons mentioned above that Turkey was selected for the case study of this dissertation. The government there is increasingly authoritarian and wary of the role the internet plays in providing a public and open space for critics and political opponents. Among the tactics it employs to crack down on dissent is making content removal requests to social media platforms.

Data collection methods

To gain insight into the implications of the country-withheld content policy for the free speech rights of users in Turkey, qualitative interviews were conducted with Turkish and non-Turkish free speech and internet freedom advocates. I chose to conduct qualitative interviews instead of surveys as my aim is “integrating multiple perspectives” (Weiss 25) from the internet-activist community, not the collection of statistical data. In this study, I am not interested in statistics and numbers as much as in exploring how Turkish activists perceive the country-withheld content rule and its use by their government to silence critics and suppress dissent. In his book Learning From Strangers: The Art and Method of Qualitative Interview Studies, published in 1995, Robert S. Weiss writes that when we conduct qualitative interviews “we gain in the coherence, depth, and density of the material each respondent provides. We permit ourselves to be informed as we cannot be by brief answers to survey items. The report we ultimately write can provide readers with a fuller understanding of the experiences of our respondents” (16).

Nine interviewees were selected, based on their knowledge of the topic and their experience advocating for an open and free internet in Turkey or elsewhere. Due to the political situation in Turkey, respondents were given the opportunity to remain anonymous. As Brennen writes, qualitative interviews raise “potential ethical dilemmas arising from the use of personal information”, and interviewers “have a moral responsibility to protect their respondents” from any “potential harm” (29). Only one interviewee requested to have her identity concealed. The interviewees include activists, academics, journalists, and researchers. They are:

❖ Ahmet A. Sabancı, a freelance writer and researcher focusing on internet freedom issues in Turkey

❖ Arzu Geybulla, a freelance Azerbaijani writer who also covers Turkey

❖ Asli Telli Aydemir, a researcher with the Alternative Informatics Association, an Istanbul-based civil society group promoting digital rights and internet freedom

❖ Efe Kerem Sozeri, a Turkish journalist who covers internet freedom issues in Turkey

❖ Erkan Saka, a political blogger and assistant professor at Bilgi University in Istanbul

❖ Isik Mater, a Turkish internet freedom activist

❖ Reem AlMasri, technology editor and researcher at 7iber.com, an online magazine based in Amman, Jordan

❖ Sarah Myers West, PhD researcher at the Annenberg School for Communication at the University of Southern California

❖ and finally, a Turkish lawyer who requested to remain anonymous

Having myself conducted research and reported on the intersection of technology and human rights, I have come to personally know and meet most of the respondents, which made it easy for me to get in touch with them. In addition, one respondent recommended and helped me reach out to two other interviewees.

Before conducting the interviews, an ‘interview guide’ was developed based on the literature review and research questions (see Appendix A for the interview guide). Weiss defines an interview guide as “a listing of areas to be covered in the interview along with, for each area, a listing of topics or questions that together will suggest lines of inquiry” (Weiss 80). The interviews are semi-structured in the sense that there is a “pre-established set of questions” (Brennen 28) that do not require limited answer options. There is also “greater flexibility” with semi-structured interviews, as the order of questions may change and interviewers may ask follow-up questions (ibid.). For this study, questions differed slightly from one interview to another, depending on how each interview went and the background of each respondent (see Appendix B for two sample interviews). The non-Turkish interviewees were not asked specific questions about Turkey, but rather more general questions related to the regions and countries they focus on. The fact that I do not speak Turkish did not represent a challenge, since all respondents speak English, the language in which the interviews were conducted. The interviews were conducted via Skype or email, as not all respondents had time for Skype conversations (see Appendix C for the interviewing method of each respondent). For interviews conducted by email, follow-up questions were sent to respondents whenever needed. Email interviews may reduce the spontaneity of synchronous interviewing or communication, and “spontaneity can be the basis for the richness of data collected in some interviews”; however, they also provide respondents with the opportunity to think about their answers, and even to list references and resources (Opdenakker 2006).

Data analysis methods

After each interview, answers (for emails) and transcripts (for Skype calls) were "coded" (Weiss) to explore which questions and issues were raised, in order to identify "important insights, and information, outline key concepts, opinions, patterns and themes" (Brennen 38). The main themes of each interview were then listed in a spreadsheet, with the corresponding quotes and information provided by each respondent. Whenever needed, further research was conducted to "provide a contextual frame of reference from which the interview quotations are interpreted" (ibid.), and to ease the later writing of the findings and discussion chapter. Brennen notes that qualitative interviewing presents "one potential issue", namely the "reliability of the information provided by respondents" (38). For this reason, information and data presented by respondents as "factual" were verified and assessed for accuracy whenever needed. This does not, however, concern the personal opinions and perspectives of interviewees, which in this case study are treated as "authentic responses", unlike factual information, which "should be verified from other research sources as well as corroborated by other respondents during subsequent interviews" (ibid.).

The findings and discussion chapter (chapter four) was written using the interviews and a qualitative textual analysis of news coverage and research related to Turkey's internet freedom situation, in particular how the authorities there exploit social media platforms' content removal policies. It was written based on an "issue-focused analysis" (ibid.). An "issue-focused description is likely to move from discussion of issues within one area to discussion of issues within another, with each area logically connected to the others" (ibid.). This made it possible to divide the chapter into sections, each focusing on a particular theme.

Chapter 4: Findings and Discussion

In the interviews, respondents addressed a number of concerns related to country-specific content removals. Turkish respondents, in particular, were openly critical of the growing compliance of social media platforms (mainly Facebook and Twitter) with their government's takedown requests, which, they said, usually aim at silencing critics and suppressing legitimate speech. Respondents want to see platforms disclose more information about government requests and about their criteria for complying with those requests.

Implications of the Country-Withheld Content Policies

Respondents expressed unfavourable views of country-specific restrictions of content in general, and of how they are being exploited by governments, particularly the Turkish government in this case. Ahmet Sabanci, a writer and researcher focusing on internet freedom issues in Turkey, said he opposes such restrictions by social media platforms for two reasons. First, such restrictions contribute to the "balkanization" of the internet. The term "balkanization" is often used pejoratively to refer to the fragmentation of the "universal" and "borderless" internet into "splinternets", or networks "that are walled off from the rest of the Web" (Maurer). However, the threat of "balkanization" is not only due to attempts by governments such as those of China and Iran to build their own networks isolated from the rest of the web (Meinrath). "Balkanization" is also the result of non-harmonious legislation and regulatory regimes (Bleiberg), which governments increasingly use to demand content removal. As a result, some tweets and accounts that a user in the Netherlands is able to see may not be accessible to a user in Turkey. Second, according to Sabanci, these restrictions are "useless", since users can bypass them by "changing some little account settings", such as setting the account location to a country other than Turkey.

Reem AlMasri, who researches internet governance in the Arab region, thinks social media platforms "should not be subject to local laws, especially in this part of the world where laws are made to restrict content". Country-withheld content policies "give countries like Turkey more leverage to persecute those who say things the authorities do not necessarily like", said writer and journalist Arzu Geybulla. Citing as examples hate speech laws in Europe and legislation banning Holocaust denial in Germany, Efe Kerem Sozeri, a journalist who covers internet freedom issues in Turkey, stated that internet companies can comply with national laws, but only when there is an independent judiciary, which is not the case in Turkey, where removal requests are "politically motivated" (Sozeri, Efe Kerem. Interview. 11 May 2016). As explained above (see the 'Research Design' chapter), Turkish authorities do indeed exploit content removal policies to crack down on speech and dissent. As a result, these policies have implications for users' free speech rights in Turkey and elsewhere. Respondents mentioned four types of content in particular that Turkish authorities target through country-specific restrictions:

❖ Criticism of Ataturk: internet law No. 5651 prohibits defamation of Mustafa Kemal Ataturk, the founder of the Turkish Republic (Human Rights Watch, "Turkey: Internet Freedom, Rights in Sharp Decline").

❖ Criticism of the ruling authorities: Article 301 of the Penal Code prohibits the "denigration of the Turkish nation, the state of the Republic of Turkey, the Turkish Parliament, the government of the Republic of Turkey and the legal institutions of the state" (Human Rights Turkey). Article 299 of the same code prohibits insults against the President (Ognianova).

❖ Content and accounts related to Kurdish politicians and activists, and the outlawed Kurdistan Workers' Party (PKK), a leftist militant group seeking to establish an independent Kurdish state (BBC, "Profile: Kurdistan Workers' Party"). Turkey often uses its anti-terror laws to crack down on activists peacefully promoting Kurdish rights (Human Rights Watch, "Turkey: Terror Laws Undermine Progress on Rights"). The United Nations has previously criticized the use of such laws to prosecute those involved in "non-violent discussions of the Kurdish issue" (Global Legal Monitor).

❖ Coverage of major political events or breaking news: Turkish authorities often place gag orders on coverage of major events, and these orders also apply to social media platforms. For instance, in 2015 a court ordered a ban on Twitter and YouTube over footage showing the hostage-taking of an Istanbul prosecutor by members of a far-left group (Akkoc, "Turkey blocks social…"). The two platforms were, however, unblocked after they complied with the court's removal requests (Franceschi-Bicchierai). In another example, in an attempt to obstruct the circulation of images showing the aftermath of a deadly car bombing in the capital Ankara in March 2016, both Twitter and Facebook were blocked following a court gag order (Akkoc, "Ankara blast…"). Once again, the two platforms were later unblocked.

Turkey's tactic of intimidating platforms into complying with its takedown (or censorship) requests is not new. In 2007, YouTube was blocked over videos deemed "insulting" to Ataturk, even though the platform had removed the videos (Associated Press). The blogging platforms blogger.com and wordpress.com were also targeted over posts deemed libelous against Adnan Oktar, a creationist TV host and author (Ben Gharbia). This trend has continued to grow over the past few years, as Turkish authorities strengthen their crackdown on civil and political liberties and as the number of internet users in the country increases. This has in turn brought content regulation and moderation challenges, as platforms struggle to honour their public commitments to freedom of expression while avoiding upsetting governments that might ban them. While social media platforms "could have once used more light touch content moderation, they are now experiencing difficult cultural norms from country to country as well as different government imperatives as their customer base is growing globally", said Sarah Myers West from onlinecensorship.org, a project that collects reports from users censored by social media platforms. "What was always a tension has only grown over time as a result of the increasing global base" (Myers West, Sarah. Interview. 2 May 2016).

Facebook and Twitter: Platforms of Concern

A platform's response to government requests for content removal depends on a number of factors, including its commitment to free speech, its user base, the importance of the market, and whether it has staff and offices inside the country issuing the request. In the interviews, respondents expressed concerns about the ways platforms handle takedown requests coming from Turkey, focusing mainly on two platforms: Facebook and Twitter. There were 39 million daily Facebook users in Turkey as of June 2015 (Daily Sabah and Anadolu Agency), and 6.5 million monthly Twitter users as of late 2014 (Dogramaci).

Facebook received considerably more negative criticism than Twitter for how it handles the Turkish government's takedown requests. Respondents were explicitly critical of Facebook, blaming the platform for not fighting back or appealing requests, and for complying "very quickly and very easily" (Aydemir, Asli Telli. Interview. 2 May 2016). Facebook has on several occasions faced accusations of (political) censorship in Turkey. During and in the aftermath of the 2013 anti-government Gezi protests, activists accused Facebook of removing accounts and pages of political activists and parties, such as the page of the Kurdish Peace and Democracy Party (BDP) (Hurriyet 2013), the citizen journalism group the Others' Post (Güler, "Facebook facing…"), and the page of the anti-racism and anti-hate speech initiative "DurDe!", or "Say Stop!" in English (Güler, "Facebook censors…"). The social networking site also removes LGBT-related content (Korkmaz) and posts deemed blasphemous or insulting to Islam, the dominant religion in Turkey (Akkoc, "Facebook censors..."). Because it "removes everything the Turkish government wants [to be removed]", Facebook is becoming more of a "pro-government media", Sabanci said.

Respondents also expressed concerns about Turkey's exploitation of Twitter's country-withheld content tool, but they were less critical of the platform. For Sabanci, even though Twitter realizes that Turkey is a big market and maintains relationships with the Turkish government, the platform "tries its best to fight back against censorship" by appealing in courts and not always complying. "They are trying to find a middle ground in this situation, and so far they did some good jobs for fighting back against censorship", he asserted. According to Erkan Saka from Istanbul's Bilgi University, "Twitter is relatively careful about handling the situation. It informs the users at every stage, although it complies, and they regularly counter-sue the government to unblock messages or accounts".

One example of how Twitter "fights back" is its decision to appeal a court order requesting the removal of 860 tweets related to a child-rape scandal involving an organization close to the Erdogan circle (Sozeri, "Turkish court orders…"). That is not the only case of Twitter appealing a Turkish request. In response to an appeal filed by the platform, in March 2014 a court overturned a previous ban on an account that posted tweets accusing a former government minister of corruption (Gadde). During the second half of 2015, the company said that it appealed 70 per cent of the 477 removal requests it received from Turkey.


The fact that Facebook is not doing enough to appeal the requests led a number of activists, including Sozeri, to quit the platform, preferring Twitter instead to address politics. This turned Twitter into the "most political platform" for Turkish users (Sozeri, Efe Kerem. Interview. 11 May 2016). In addition, the fact that Twitter, unlike Facebook, does not require real-name registration seems to make it possible for users to evade censorship by staying anonymous or adopting pseudonyms. Sabanci mentioned how, once users are notified by Twitter that their accounts or tweets are the subject of a court removal order, they switch their usernames or open new accounts. Once a user changes their username, Twitter is unable to apply the court order, as it can no longer find the username listed in that order. Such a tactic would, of course, not be possible where real-name registration is required, as that policy makes it easier for a platform like Facebook to disable or "punish" accounts, either because they violated its community standards or because they are the subject of a government request (York, "Facebook's 'real name' policy…").

But as Twitter continues to serve as a "public forum" (Akdeniz and Altiparmak) for Turkish users seeking to debate their country's current affairs, expose government wrongdoing, and access viewpoints and perspectives sidelined by the country's mainstream media, the platform faces growing pressure from the Turkish authorities to comply with their requests. In fact, Twitter started to implement its country-withheld content policy in Turkey following a 2014 ban on the site over tweets related to a corruption scandal involving the inner circle of the then Prime Minister and current President Erdogan (Sozeri, "Uncovering the accounts…"). Though the ban was later overturned by the country's constitutional court, the damage was already done, as Twitter bowed to Turkey's pressure and started to comply more often with its content takedown requests.
