
DISRUPTING AND GUIDING ONLINE CONTENTION: Automated and non-automated pro-government agents on social media

THESIS by

Giovanna M. Salazar Ojeda

Student number: 10848223
giovanna.salazarojeda@student.uva.nl

Research Master Media Studies
Graduate School of Humanities

June 2016

Thesis Advisor: dr. T. (Thomas) Poell


DISRUPTING AND GUIDING ONLINE CONTENTION: Automated and non-automated pro-government agents on social media

ABSTRACT

Research demonstrates that states across the world, from dictatorial regimes to liberal democracies, are determined to control and monitor online communication. Authoritarian regimes have gone the furthest in this respect, actively intervening in such communication, mainly through the use of internet filtering.

The filtering capabilities of states have, however, been increasingly undermined by the emergence of a wide variety of circumvention and surveillance-evasion tools. Consequently, states have started to develop new techniques to control online contention. A major target is social media platforms, which channel so much communication and play a particularly crucial role in processes of activist mobilization and communication. This project therefore examines how platform-facilitated activism takes shape in the face of these new modes of online control; how states are trying to sway online communication to their advantage without resorting to direct censorship and filtering mechanisms; and how these techniques are potentially facilitated or obstructed by the techno-commercial strategies of social platforms.

This dissertation examines these questions by exploring several recent cases in which different types of political regimes have attempted to shape online contention by employing non-automated and automated pro-government agents that obstruct and complicate online discourse and contention on social media. It sets out to examine how these agents operate and to what extent they are enabled by the particular characteristics of platforms. The investigation specifically focuses on China, Russia and Mexico, which in combination cover a significant portion of the political spectrum, ranging from an authoritarian regime to a flawed democracy. This makes it possible to critically discuss how such disruptive techniques are employed in regimes with different degrees of internet freedom.

This study demonstrates that (non-automated and automated) pro-government agents have been successfully deployed by the Chinese, Russian and Mexican states in order to interfere with social media activism. Furthermore, it argues that non-automated and automated pro-government agents follow different tactical principles. Non-automated pro-government agents such as “patriotic hackers” follow a guiding principle; that is, their intention is to steer public opinion, whereas automated pro-government agents such as Twitter bots follow a disruptive principle, meaning that they are primarily deployed to interfere with and disturb online expression. The cases illustrate that both the Chinese and the Mexican state target national online contention, while Russia is more concerned with targeting online dissent at the international level. Overall, this dissertation shows that governments across the political spectrum are exploiting critical features of social platforms to amplify and enable their propagandistic and controlling efforts online.


TABLE OF CONTENTS

Abstract

1. INTRODUCTION
2. THEORETICAL DISCUSSION
   2.1 Internet filtering
   2.2 Bypassing filtering capabilities of states
   2.3 Beyond filtering
   2.4 Disruption of social media activism
       2.4.1 State disruption
       2.4.2 Commercial disruption
   2.5 Research questions
   2.6 Operationalization / Methodology
3. ‘PATRIOTIC HACKING’: non-automated pro-government agents on social media
   3.1 The rise of non-automated pro-government agents on social media
   3.2 “Patriotic hackers” at work
   3.4 Interplay of features of platforms, ‘patriotic hackers’ and online contention
4. ‘TWITTER BOTS’: automated pro-government agents on social media
   4.1 The rise of political Twitter bots
   4.2 “Twitter bots” at work
   4.4 Interplay of features of platforms, political ‘Twitter bots’ and online contention
5. CONCLUSION

Acknowledgements


1. INTRODUCTION

As protests against the government of Bashar Al-Assad spread across Syria in 2011, so did the use of social media by political dissidents and activists. Social networking sites, such as Twitter, Facebook, and YouTube, enabled activists to contribute to the coordination and organization of demonstrations throughout the country. These sites also brought public attention to police abuses and arbitrary arrests of peaceful protesters and citizens (see, among others, Comninos 2011; Khamis et al. 2012; Nachawati Rego 2012). Twitter was particularly useful for tracking the development of protests through the hashtags #Syria, #Daraa and #Mar15, as well as for bringing attention to the uprising (York 2011). However, pro-revolution Twitter users soon found their narrative obstructed by the activities of a set of recently created pro-regime accounts.

On the one hand, automated pro-regime accounts managed by the Bahraini company EGHNA flooded the #Syria hashtag by sending coordinated, identical sets of tweets every few minutes (see Verkamp and Gupta 2013). According to Syrian blogger Anas Qtiesh (2011), the content of the messages was mostly unrelated to the revolution, “such as photography, old Syrian sport scores, links to Syrian comedy shows, pro-regime news”. The high volume of tweets produced by such Twitter bots, or automated agents, accompanied by the #Syria hashtag made it much harder to find any information about the protests and thus effectively diluted the hashtag’s relevance for the Syrian revolution.
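The flooding pattern described above also makes such bots detectable. A minimal, purely illustrative heuristic (the account names, sample tweets, and threshold below are invented, not data from Verkamp and Gupta's study) flags accounts that post the same text repeatedly under a hashtag:

```python
from collections import defaultdict

def flag_flooding_accounts(tweets, min_duplicates=3):
    """Flag accounts that repeatedly post identical text.

    `tweets` is a list of (account, text) pairs; an account is flagged
    when it posts any single message `min_duplicates` or more times,
    mimicking the identical-tweet floods observed on the #Syria hashtag.
    """
    counts = defaultdict(int)
    for account, text in tweets:
        counts[(account, text.strip().lower())] += 1
    return sorted({acc for (acc, _), n in counts.items() if n >= min_duplicates})

# Hypothetical sample: one repeating bot, one genuine user.
sample = [
    ("@bot_example", "Old Syrian sport scores #Syria"),
    ("@bot_example", "Old Syrian sport scores #Syria"),
    ("@bot_example", "Old Syrian sport scores #Syria"),
    ("@activist", "Protest in Daraa today #Syria"),
]
print(flag_flooding_accounts(sample))  # → ['@bot_example']
```

Real bot-detection research combines many more signals (timing regularity, account age, network structure); exact duplication at high frequency is simply the most visible one in this case.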

On the other hand, a different set of pro-government Twitter accounts also interfered with the Syrian opposition during this period. These accounts, run by individuals, verbally assaulted users, threatened government critics, and posted activists’ contact details online, while also tweeting favorably about the regime (see Noman 2010). Qtiesh further specifies: “Those accounts were believed to be manned by Syrian mokhabarat (intelligence) agents with poor command of both written Arabic and English, and an endless arsenal of bite and insults.” (ibid.). Such attacks were also linked to the ‘Syrian Electronic Army’ (see Karam 2011). Even though it has been difficult to verify the nature of the relationship between the so-called Army and the Syrian government, studies have reported their close connection (Al-Rawi 2014). President al-Assad has even thanked them publicly, “hailing them as a ‘real army in virtual reality’ (al-Assad, 2011)” (Youmans and York, 323). These non-automated pro-government agents certainly served the purpose of amplifying a pro-regime narrative across Twitter, as well as of intimidating protesters and fostering self-censorship among them.


These examples demonstrate that the relationship between activists, social media, and governments is complicated and often contradictory. While current research has extensively highlighted how social networking sites enable processes of activist mobilization and communication (see, among others, Bennett and Segerberg 2012; Castells 2015; Gerbaudo 2012), the previously outlined case of Syria illustrates that platform-facilitated activism is, in fact, highly vulnerable to state disruption. Moreover, it demonstrates the relevant role of social media platforms in providing the framework within which such disruption takes place.

Although a large body of research in the prolific field of online censorship has concerned itself with studying the different state actions that aim to control online contention, most of this research focuses on the ways in which authoritarian states employ filtering and blocking techniques (see, among others, Clayton et al. 2006; Faris and Villeneuve 2008; Kalathil and Boas 2001; Murdoch and Anderson 2008; Rahimi 2007; Zittrain and Edelman 2003). What is more, very few studies have examined how social media activism is currently, and increasingly, being curtailed specifically by states’ infiltration and exploitation of crucial features of social platforms.

Consequently, this dissertation seeks to understand, on the one hand, how states characterized by different political regimes are adapting and adopting new strategies to shape (rather than simply suppress) social media contention. To accomplish this, it examines how China, Russia and Mexico have sought to sway online contention by deploying automated and non-automated pro-government agents across different social platforms. On the other hand, the thesis explores the role of the platforms in facilitating or obstructing such state actions. Given that social platforms are, above all, designed to produce profits, mostly through the tracking and profiling of user behaviors and patterns of use (Langlois et al. 2009), this work recognizes that the commercial strategies of platforms do not necessarily correspond with activist interests. Furthermore, as work in the tradition of the political economy of platforms and software studies demonstrates, social media architectures significantly shape users’ interactions and activity (Beer 2009; Fuchs 2011; van Dijck and Poell 2013). Thus, this study also critically examines how the techno-commercial strategies of platforms play a role in undermining social media activism.

The aim is to provide insights into the mutual articulation between state control efforts and the techno-commercial strategies of platforms.


The investigation is organized as follows: Chapter 2 critically discusses the current research on state efforts to control and shape online contention. It specifically focuses on the changing nature of internet controls, through which states have shifted from limiting what can be discussed to influencing how topics are discussed. An examination of how the corporate and commercial nature of platforms can have negative (often unintended) consequences for activism will follow. Ultimately, this chapter posits that the obstruction of social media activism does not only have a state dimension (governments seeking to control online contention through increasingly complex techniques), but also a corporate dimension prompted by the techno-commercial strategies of platforms. The chapter closes with the focused research questions of the present investigation and an outline of the interpretive case study methodology and the criteria behind the selection of the case studies and countries of analysis. Chapter 3 systematically describes and analyzes the state strategy of deploying non-automated agents (also referred to as ‘patriotic hackers’) in the three selected countries. By the same token, Chapter 4 investigates the strategy of using automated pro-government agents on Twitter (political Twitter bots). This platform was chosen as it has played a particularly relevant role in the world of social media activism and online contention; additionally, focusing on this specific platform allows for a more stable comparison between the countries of analysis. The final chapter provides an overall conclusion on the ways in which such disruptive strategies are employed by different types of political regimes and on the role of social platforms in providing the landscape for such disruption to take place.


2. THEORETICAL DISCUSSION

This study on the efforts of states to control and shape online contention through pro-government agents should be seen as part of a larger body of research on internet censorship and control. As such, this research draws from the numerous compelling studies published over the past years on the practices and mechanisms that governments worldwide are using to censor and control the Internet. Given that these studies have been most prominently concerned with blocking and filtering mechanisms, the first section of this chapter is devoted to such government practices that directly deny and restrict Internet access and content.

2.1 INTERNET FILTERING

From 2008 to 2014, the interdisciplinary and collaborative partnership The OpenNet Initiative (ONI)1 monitored and reported on internet filtering practices by governments worldwide. Of particular relevance and utility for the field has been its series of detailed reports: Access Denied (2008), Access Controlled (2010), and Access Contested (2011). The significance of this series lies in the mixed-method approach they developed to detect Internet filtering and surveillance practices worldwide, as well as in their empirical scope (see OpenNet Initiative 2008). Furthermore, these reports also identify the changing nature of the methods that governments are using globally to stifle the free flow of information in cyberspace. According to their findings, the dynamic of internet governance has shifted from directly controlling access to the network to, instead or simultaneously, shaping and influencing online expression through more subtle and sophisticated strategies. This led Principal Investigators Ron Deibert and Rafal Rohozinski (2010) to divide the techniques of information controls into different “generations”.

According to Deibert and Rohozinski (2010), the practice of Internet filtering corresponds to the first generation of information controls, which consists of denying access to specific Internet content by “directly blocking access to servers, domains, keywords, and IP addresses” (22). Hence, this practice is mostly carried out through technical means. Among the most common practices of Internet filtering are: domain name system (DNS) filtering, Internet protocol (IP) address filtering, and URL filtering (see Zittrain and Palfrey 2008). As explained by Murdoch and Anderson (2008), the choice of filtering mechanism depends on the goals that governments seek to achieve (e.g. they may opt to block specific websites or services, make their access unreliable, or dissuade users from attempting to access the sites in the first place) (58).

1 The ONI is composed of the Citizen Lab at the Munk School of Global Affairs, University of Toronto; the Berkman Center for Internet & Society at Harvard University; and the SecDev Group (Ottawa).

One of the most thorough technical internet filtering systems is found in China, where the state erected a technological barrier through which large portions of the Internet are filtered and blocked accordingly. This defensive perimeter, best known as the ‘Great Firewall of China’, works in part through keyword-based URL filtering; that is, it relies on inspecting web traffic and has the capacity to block websites based on keywords in URLs (see Clayton et al. 2006; Zittrain and Palfrey 2008). The Chinese filtering regime has been extensively analyzed over the years, as it has grown in complexity and scope (see, among others, Crandall et al. 2007; Deibert et al. 2008; Wright 2014; Xu et al. 2011; Yang 2009, 2012; Zittrain and Edelman 2003).
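The logic of keyword-based URL filtering can be sketched in a few lines. The blocklist terms below are invented placeholders for illustration, not actual Great Firewall entries, and real systems operate on network traffic rather than on strings:

```python
def is_blocked(url, keyword_blocklist):
    """Return True if any blocklisted keyword appears in the URL.

    Mimics, very schematically, the keyword-in-URL inspection
    described above: the censor never needs a full list of banned
    pages, only a list of banned terms.
    """
    url = url.lower()
    return any(kw in url for kw in keyword_blocklist)

# Placeholder blocklist for illustration only.
blocklist = ["jasmine", "bannedterm"]
print(is_blocked("http://example.com/news/jasmine-revolution", blocklist))  # True
print(is_blocked("http://example.com/weather", blocklist))                  # False
```

The sketch also makes the overblocking problem visible: any innocuous URL that happens to contain a listed keyword is caught as well.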

China was one of the first countries to implement a national filtering system; yet other countries, such as Iran, Saudi Arabia, and Cuba, have adopted similar schemes. Even though this filtering style is rather strict and seemingly fixed, it is, paradoxically, also very dynamic. During the tests that Zittrain and Edelman (2003) ran to assess the nature and scope of filtering in China, they noticed that the set of sites blocked in the country is by no means static; on the contrary, “whoever maintains the block lists is actively updating them, giving special attention to certain general-interest high profile sites where content changes frequently” (74-75). Similarly, MacKinnon (2008) details in her study on Chinese censorship that the lists of forbidden words and services fed to the filtering software of, specifically, blog-hosting companies are constantly maintained and updated (38). This system is very similar to the Iranian “centralized filtering system”, which searches for forbidden keywords or domain names in all routed traffic, and in which the banned information and content is constantly revised and updated. In this way, thousands of websites are blocked on a regular basis, particularly those of international news media, human rights groups, and social media platforms such as Twitter and Facebook (Freedom House, Freedom on the Net 2015).

Countries that run this type of nationwide filtering mechanism are compelled to keep the system very dynamic, most probably because, as the OpenNet Initiative maintains, “such a filtering regime is effective in part by keeping citizens guessing as to how the blocking will work over time and introducing uncertainty into the equation.” (OpenNet Initiative 2004-2005). The Iranian government, for example, in 2014 introduced what it called “an ‘intelligent’ filtering program” that censors specific pages within a platform instead of blocking sites in their entirety. This new filtering technique was first implemented on Instagram (see Alimardani and Jacobs 2015; Digital Methods Initiative Wiki 2016).

A substantial body of studies has also focused on internet filtering practices in the Middle East and North Africa, particularly in Egypt, Iran, Syria, and Tunisia, where governments have a long tradition of blocking websites for pornographic or politically sensitive content, while also retaining legal provisions to imprison Internet users for expressing critical views (see, among others, Al-Rahdi et al. 2013; Human Rights Watch 2005; Reporters Without Borders 2012; Mina 2007; Noman 2010).

Nevertheless, it is worth pointing out that internet filtering mechanisms are a global phenomenon, not exclusive to dictatorships or authoritarian regimes. In fact, a significant number of countries have implemented some sort of internet filtering technique in recent years (see Faris and Villeneuve 2008). Arguments related to safeguarding national security, protecting children against exploitation and pornography, and securing intellectual property rights have increasingly been used by all types of governments around the world in order to gradually filter and increasingly monitor citizens’ online activities2.

The research outlined in this section demonstrates that, during the first decade of state control efforts, filtering and blocking techniques were the prime methods used mostly by authoritarian states to assert control over the online activities of their citizens. However, as governments have gained experience in the technologies of internet controls, citizens have also acquired technical knowledge to circumvent such filters. The following section explores such tactics.

2.2 BYPASSING FILTERING CAPABILITIES OF STATES

As the internet continues to grow, filtering and blocking internet access and content through technical means has become increasingly difficult. According to Zittrain and Palfrey (2008), technical internet filtering is not perfect in any jurisdiction, given that “every system suffers from at least two shortcomings: a technical filtering system either underblocks or overblocks content, and technically savvy users can circumvent the controls with a modicum of effort.” (34).

2


As reported in Freedom House’s yearly assessment of the state of online freedom around the world, during 2015 the effectiveness of internet filtering was significantly undermined by two particular developments: on the one hand, the widespread adoption of HTTPS (a secure protocol that encrypts communication over the internet) and, on the other, the profusion of circumvention tools that seek to sidestep the restrictions imposed by filtering regimes.
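Why HTTPS undermines keyword filtering can be shown schematically: an on-path censor can read the full URL of a plain-HTTP request, but for HTTPS it typically sees (roughly) only the hostname, e.g. via the TLS Server Name Indication, so path- and keyword-level blocking fails. A minimal sketch under that simplifying assumption:

```python
from urllib.parse import urlparse

def visible_to_censor(url):
    """Return the part of a request an on-path observer can read.

    Plain HTTP exposes the full URL; HTTPS encrypts the path and
    query, leaving (roughly) only the hostname visible to a
    keyword filter. This is a deliberate simplification of what
    real network observers see.
    """
    parsed = urlparse(url)
    if parsed.scheme == "https":
        return parsed.hostname
    return url

print(visible_to_censor("http://example.com/protest-report?id=7"))
# the full URL: keyword filtering on the path still works
print(visible_to_censor("https://example.com/protest-report?id=7"))
# 'example.com' only: the sensitive path is hidden
```

This is why states confronted with HTTPS face a coarser choice: block an entire domain or allow it wholesale, which raises the political cost of censoring popular platforms.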

In their analysis of the various tools and technologies used for Internet filtering, Murdoch and Anderson (2008) briefly point to the circumvention possibilities for each of the filtering methods. The authors note that one of the most common ways to bypass several filters is by selecting alternative/external DNS servers, as well as by redirecting traffic through an open proxy server (67); however, they fail to provide concrete examples of how such circumvention tools are, in fact, utilized.

Several organizations have released reports that survey and compare a variety of circumvention tools, particularly within repressive regimes (where they are most frequently used). For example, the Global Internet Freedom Consortium (2007), an alliance between various technology companies and non-profit organizations that has developed and deployed “anti-censorship” tools since 2006,3 provides a detailed overview of the ‘anti-censorship’ devices, as it terms them, that have been developed and used in China since the late 1990s, and compares many aspects of such technologies, namely Garden, UltraSurf, Dynaweb, GPass, FirePhoenix, Tor, Triangle Boy and Freenet. The Consortium then provides practical guidance for users “who need to make judicious choices of the best tools available for their protection” (3). However, its findings and suggestions need to be taken cautiously, given that several of the Consortium partners are companies that develop and commercialize circumvention tools.

Freedom House (2010), a non-governmental organization that conducts research and advocacy on, among other things, the state of freedom of expression and internet freedom worldwide,4 technically evaluated how well eleven different tools (i.e. Dynaweb, Freegate, GTunnel, GPass, Hotspot Shield, JAP, Psiphon, Tor, UltraSurf, YourFreedom, and Google (Reader, Translation, Cache)) work in circumventing internet filters in Azerbaijan, Burma, China, and Iran. Each of the tested tools proved successful, to different degrees, in challenging internet filters from inside the selected countries. Moreover, most of these tools achieved high scores in the overall evaluation and proved to deliver a high level of security and usability across the world (11). Yet, it is noteworthy that the latest reports from the Berkman Center for Internet & Society5 on circumvention tool usage (Palfrey et al. 2010; Roberts et al. 2011) found, first, that many internet users prefer simple web proxies and Virtual Private Networks (VPNs) to the more dedicated censorship circumvention tools (notably Freegate, Ultrasurf, Tor, and Hotspot Shield); and, second, that the overall usage of circumvention tools is still “very small in proportion to the number of Internet users in countries with substantial national Internet filtering” (2010, 2).

3 See <http://www.internetfreedom.org/about/index.html>

Nonetheless, this collection of studies and reports shows that, regardless of how intricate states’ filtering and blocking mechanisms may be, they can be circumvented with relative ease.

The study by Al-Saqaf (2010), a Yemeni software developer dedicated to the study and development of tools to confront and map censorship in the Arab world,6 is also particularly relevant, given that it tests the effectiveness of the circumvention tool alkasir precisely across Arab countries; users of the tool were, however, also based in North and South America, Europe, Sub-Saharan Africa, Asia and Oceania. As such, and according to the author, “although the main beneficiaries of alkasir appeared to be Arab Internet users particularly in Yemen and Saudi Arabia, its potential in becoming a global viable circumvention solution was demonstrated by Chinese, Iranian and other netizens who used alkasir to reach social networking and multimedia sharing websites such as facebook.com and youtube.com.” (88).

Numerous other organizations, such as Reporters Without Borders,7 the Electronic Frontier Foundation,8 and The Citizen Lab,9 have published detailed guides to inform internet users on how to assess which circumvention tool best fits their specific needs and, most importantly, which of them works best in their contextual settings.

5 A research hub based at Harvard University that focuses on the study of cyberspace. See <https://cyber.law.harvard.edu/>

6 See <http://www.su.se/profiles/walsa-1.228225>

7 Reporters Without Borders. “Handbook for bloggers and cyber-dissidents”. 2004. 22 March 2016. <http://archives.rsf.org/rubrique.php3?id_rubrique=542>

8 See <https://ssd.eff.org/en/module/how-circumvent-online-censorship>

9 Citizen Lab, “Everyone’s Guide to Bypassing Internet Censorship”. (Citizen Lab, Munk School of Global Affairs, University of Toronto, Toronto, ON, 2007). 22 March 2016. <https://citizenlab.org/publications/>


Overall, in light of continual online innovations that facilitate the development of ever more effective and affordable tools to sidestep pervasive filtering regimes, the effectiveness of the “first generation” of internet controls has been significantly undermined. Consequently, as outlined in the following section, states have started to develop new techniques to control and shape online contention that go beyond filtering and direct censorship.

2.3 BEYOND FILTERING

Precisely because defeating internet filters is relatively easy, and also because most governments do not have the complete control of the internet infrastructure in their countries that China, Cuba and Iran enjoy, states are increasingly exploring and employing new and more complex strategies of internet control that go beyond establishing borders in cyberspace and denying access accordingly.

Such strategies of information control are what Deibert and Rohozinski (2010) have termed “next-generation” controls: more subtle techniques that aim to shape online expression. Rather than deliberate filtering and blocking mechanisms, “[they are] strategies that seek to compete, engage, and dominate opponents in the informational battle space through persistent messaging, disinformation, intimidation, and other tactics designed to divide, confuse, and disable” (29). The different strategies of “next-generation” controls identified by the authors (2010b) are as follows:

- Justifying acts of internet censorship, surveillance, or silencing through the use of legal measures. As the authors maintain (ibid.), creating a regulatory framework for the internet does reflect a natural progression of bringing rules to cyberspace. However, sometimes “it also reflects a deliberate tactic of strangulation, since threats of legal action can do more to prevent damaging information from surfacing than can passive filtering methods implemented defensively to block websites. Such laws can create a climate of fear, intimidation, and ultimately self-censorship” (50). Among these latter laws are those related to slander, libel, and copyright infringement, as well as several others that obstruct and limit online activities and expression.

- Disabling or attacking internet services at key moments in time through, for example, the use of distributed denial-of-service (DDoS) attacks or botnets. Other methods include “shutting off power in the buildings where servers are located or tampering with domain-name registration so that information is not routed to its proper destination” (ibid.).

- Carrying out targeted surveillance through the use of software designed to infiltrate an unsuspecting user’s computer (54) such as ‘social malware’ or other technical resources. This type of infiltration allows governments to access activists’ computers and networks, and to retrieve sensitive information.

- Informally requesting Internet Service Providers, online hosting services, mobile phone operators, social media platforms, and other private companies to tamper with or curtail internet services, remove information, or even render services inoperative (e.g. shutting down the internet connection). Most often, such government requests are met by private companies in order to ensure they can keep offering their services within particular jurisdictions.

- Pressuring private companies (ranging from the aforementioned ISPs, online hosting services, etc. to social media platforms and Internet cafes) into making decisions about content controls in order to comply with local censorship practices, laws and regulations. Companies are thus held liable for the type of information that is posted, hosted, accessed or communicated online and, as a result, are compelled to censor and surveil internet activity (52). This state strategy basically consists of ‘outsourcing’ censorship and monitoring controls to private businesses.

- Informally encouraging or tacitly approving the actions of patriotic groups who patrol chatrooms and online forums, post pro-government information, and censure critics.

From this manifold of technical, legal, political and coercive strategies available to states to shape and influence internet activity and communication, two tendencies are particularly striking. First, these strategies often take place in a rather covert way; hence, it is very difficult to attribute responsibility to those carrying out such offensive (and often illegal) methods of internet control. Second, and closely related to this, states are increasingly relying on non-state actors to interfere with and obstruct online contention, including “third-party” social media platforms.


Given that social networking sites channel so much communication and have a growing importance for processes of online contention, this study finds it particularly relevant to further explore the relationship between activists, platforms, and states. How does platform-facilitated activism take shape in the face of these new modes of online control? To what extent are states filtering social media platforms specifically? What role do social platforms play in such disruptive state strategies? Drawing from available studies on the topic, the following section aims to provide some initial answers to these questions.

2.4 DISRUPTION OF SOCIAL MEDIA ACTIVISM

2.4.1 State disruption

As evidenced by the wide variety of social movements and protests that have emerged across the world during the past decade, social media has enabled activist communication and mobilization in remarkable ways. The paramount role of social media for activism, for instance, was specifically highlighted during the wave of protests that shook the Middle East and North Africa (MENA) region in late 2010 and throughout 2011. From massive protests in Tunisia to large mobilizations in Egypt, Libya, Yemen, Syria and Bahrain, among other countries, this wave of protests was characterized, as della Porta and Mattoni (2014) explain, by the relevant role that social networking sites played "in nurturing and supporting the mobilizations as well as keeping the mass media informed about protesters’ claims and actions" (39).

In light of the above, several governments within the region banned social media in an attempt to restrain the formation and expansion of the mobilizations, going as far as to cut off internet and mobile networking services (see Eltantawy and Wiest 2011). Yet, in other cases states opted to infiltrate social media rather than restricting access to it altogether; as Howard and Muzammil (2011) maintain, “[governments] worked social media into their own counter-insurgency strategies”. In this sense, Salem (2015) recognizes that during the 2011 Egyptian mobilizations social media were used by both protesters and the government for completely opposite ends. While she highlights that the rise of social media in Egypt surely allowed young activists to communicate and organize beyond the control of the state, she also observes that it was only a matter of time before the authorities caught up on social media; in her words, “The state was also able to use new technologies to exert control over Egypt, with key examples including surveillance, control over state media and the use of social media to address the nation.” (186). Drawing from various studies on Egypt’s uprising of early 2011,

(15)

Poell (2014) recognizes that social media were especially helpful to bring together "previously disconnected groups, and provide platforms for expressing grievances against the dictatorial regime (Gerbaudo 2012, 58-59; Lim, 2012, 244)" (193). However, he further specifies that during such period activist users were inclined to mistrust each other precisely because of state’s infiltration in social media. Furthermore, according to Youmans and York (2012) Bashar Al-Assad’s government went as far as to unblock social media sites such as Facebook, Blogspot, and YouTube, which had been blocked since 2007, in order to more effectively carry out targeted surveillance of activists and protesters during the Syrian revolution of 2011.

It is noteworthy that regimes such as China and Iran, which explicitly block access to major international social media platforms (Facebook, YouTube, Twitter, Instagram and WordPress, among others), have in fact favored the growth of domestic social networks. In the case of China, the microblogging service Sina Weibo and WeChat dominate as social networking sites, whereas in Iran the pro-government social platform Velayatmadaran.ir was created to promote Ayatollah Khamenei's purportedly divine right to rule the country (see Esfandiari 2010; Rahimi 2011; Rhoads and Fowler 2009).

As the case of China demonstrates, governments can exercise more direct control over such domestic social networks. Sina Weibo, for instance, employs both manual censors and automated programs to filter and delete messages that contain politically sensitive terms (see Bamman et al. 2012). The regime has even pressured Sina Weibo into disabling certain functions at critical moments (see Yang 2012), as in the case of the calls for a ‘Jasmine Revolution’ in 2011 when, according to Poell et al. (2014), “the state severely curtailed Weibo activity: ‘Post forwarding and photo publishing were suspended, and searches for the word jasmine were blocked’ (Canaves, 2011, p.77)” (12). Strikingly, in their thorough analysis of censorship practices across Chinese social media, King et al. (2013) found that the posts most likely to be removed by the different social media services are those representing, reinforcing or spurring collective action. In fact, Chinese authorities are more inclined to tolerate government criticism than messages that may lead to social mobilization (14).

Chinese authorities at all levels have, in fact, created their own social media accounts. According to Shirk (2011) this serves two main purposes: identifying issues before they escalate, and favoring the emergence of self-censorship across the platform. As Poell et al. (2014) maintain, the Chinese state has, to a certain extent, embraced the Internet because it "facilitates the governing process" (3), allowing the regime to scrutinize online debates and to identify matters of interest and concern to netizens.

Studies exploring the limits and obstacles of social media activism within non-authoritarian contexts have also been published in recent years. Such is the case of one of the latest articles by Emiliano Treré (2016), in which he depicts a scenario where "everyday frictions, conflicts and struggles" (129) plague activists’ use of social media, particularly in Mexico. To illustrate this, Treré draws from his previous exploration of the movement #YoSoy132 (#IAm132) to describe how the movement’s website was infiltrated by a government spy who stole relevant data from the activists. As he explains, this spy even managed to "post two videos to try to undermine the reputation of #YoSoy132" (133). He also describes how, during the Mexican presidential elections of 2012, social media, and Twitter in particular, were used by political candidates to manipulate public perception online and to artificially boost consent through the massive use of bots, trolls and ghost followers.

In short, it is important to recognize that as much as social networking sites have provided new venues for expressing dissent and have served as relevant tools for activist communication and mobilization purposes, they can also be employed by state actors who seek to dampen political opposition. As della Porta and Mattoni (2014) maintain: “the use of Internet technologies in the context of political mobilizations always developed according to ‘a pendulum swing from activist advantage to government revanche to dense tactical contention between the two’ (Joyce 2011)” (56).

This tense dynamic is further complicated by the fact that social media have a commercial and corporate nature. In this sense, drawing from available studies on the subject, the following subsection explores how the “architectures and business models underpinning social platforms” (Poell and van Dijck 2015, 529) can facilitate or obstruct the disruption of social media activism.

2.4.2 Commercial disruption

Prominent portrayals of social media as empowering tools in the hands of activists (see Diamond 2010; Shirky 2011), or even as responsible for igniting social movements (see Sullivan 2009), have often obscured the fact that social platforms are guided by commercial interests which can, often negatively, impact activist users. In this sense, several critical theorists have striven to shed light on the capitalist market structure in which these social media operate, in order to demonstrate that commercial platforms are far from being neutral tools and rather constitute a corporate power in themselves (see, among others, Fuchs 2011; Dean 2009; Morozov 2011).

By the same token, some authors have developed a critical perspective on the techno-commercial strategies of social media in relation to contemporary activism. Youmans and York (2012), for example, highlight in their study that major social platforms, such as Facebook, Twitter and YouTube, constrain users in at least two ways: through platform design and through company policies. They specifically examine four cases in which particular platform developments (namely evolving policies, functionalities, and user guidelines) (316) altered the way in which activists employed social media, and even put them at risk during the Arab Spring uprisings. The authors convincingly demonstrate that architectural changes in the platforms often adversely impact activist users, as platforms are not built to cater to activists but rather are driven by commercial interests.

Similarly, Poell (2014) sets out to examine how the political and techno-commercial strategies of platforms take shape in dictatorial settings, and how they steer social media activism (192). Taking as a starting point that the architecture of social platforms is directly informed by their commercial strategies, and in line with the tradition of software studies which maintains that platforms shape users’ activity (Beer 2009; Chun 2011; Langlois et al. 2009b), Poell makes clear that social media activism takes shape within complex configurations in which political, commercial and technological mechanisms mutually articulate each other (202). He concludes: "only by critically examining these interconnections is it possible to arrive at informed assessments of, and interventions in, episodes of social media protest communication and mobilization." (203).

In combination, the studies outlined in these sections show that, to more thoroughly understand the dynamics of social media activism, it is necessary to move beyond the simplistic notion of social media as revolutionary tools which governments merely seek to block and censor. On the one hand, as Downing (2008) highlights, it is a common misconception to conceive of social media as “technological message channels rather than as the complex sociotechnical institutions they actually are” (41). On the other hand, as previously pointed out, it is also crucial to note that social media represent large global corporations driven by a commercial logic.


Drawing from the political economy of platforms perspective and from software studies, this work seeks to substantiate the concern that platform-facilitated activism is challenged not only by the (increasingly complex) controlling efforts of states, but also by the commercial strategies and corporate nature of the social networking sites on which it increasingly relies.

2.5 RESEARCH QUESTIONS AND OBJECTIVES

As we have seen, many of the studies that explore the dynamics between social media activism and state intervention tend to focus on specific examples (mostly within authoritarian regimes), while the techno-commercial aspect of social platforms is either neglected or explored separately. Hence, this dissertation approaches the subject matter in a different way by asking: How are states with different types of political regimes trying to sway online communication to their advantage? And how are these techniques potentially facilitated or obstructed by social media platforms?

The research objectives of this dissertation are to systematically assess, across different types of regimes, how states are trying to influence online expression and shape online contention without resorting to direct censorship and filtering mechanisms on social media, and to examine to what extent social platforms potentially facilitate or obstruct such disruptive state strategies.

2.6 OPERATIONALIZATION / METHODOLOGY

In order to accomplish the above, this study examines the particular state strategy of deploying pro-government agents on social media. It does so, firstly, by making a clear distinction between non-automated and automated pro-government agents. The former category corresponds to what Deibert and Rohozinski (2010) dubbed ‘patriotic hackers’: individuals who take creative actions online against perceived threats to their country’s national interest (54). Such individuals “take it upon themselves to attack adversarial sources of information, often leaving provocative messages and warnings behind” (ibid.). The latter category is inspired by the notion of ‘automated disruption’ advanced by Milan (2015), which refers to “virtual agents performing automated tasks [that] operate alongside with regular users, potentially distorting their activities and perceptions” (5).

In order to understand exactly what each of these pro-government agents entails (how they are employed, what they consist of, and which features of social platforms are exploited), one research chapter is devoted to systematically analyzing non-automated pro-government agents (‘patriotic hackers’) across different social platforms, while the second focuses on studying automated pro-government agents (bots) on Twitter. This specific platform was chosen both because it has played a particularly relevant role in the world of social media activism and online contention, and because it allows for a more stable comparison between the countries of analysis.

The study employs the interpretive case study approach, understood as a “detailed examination of an aspect of a historical episode to develop or test explanations that may be generalizable to other events” (George and Bennett 2005, 5).

As such, the following research chapters provide a detailed examination of several incidents involving the use of non-automated and automated pro-government agents on social media, which took place specifically in China, Russia and Mexico. The countries were selected in order to cover an appropriately wide portion of the political spectrum. According to the Democracy Index (Economist Intelligence Unit 2015), China is categorized as an authoritarian regime; Russia as a hybrid regime verging on authoritarianism; and Mexico as a flawed democracy. Furthermore, these countries have different degrees of internet freedom: according to Freedom House (Freedom on the Net, Country Score Comparison 2015), they are distributed along the range from ‘not free’ to ‘partly free’.

A comprehensive inventory of the different incidents in which the Chinese, Russian and Mexican governments resorted to the use of (non-automated and automated) pro-government agents on social media was built, drawing from a wide range of sources, including reports of specialized research institutes, alternative media outlets, activists’ blogs, and major news media, among others. The incidents to be examined in detail were selected from this inventory based on the following criteria: they took place between 2010 and 2015; credible and verifiable documentation was available, meaning several sources of evidence confirmed the incidents had happened; and interference with activist purposes had been documented, either in activists’ blogs, on advocacy platforms, or in statements provided to media outlets by activists themselves.

The case studies demonstrate how these pro-government agents operate, how state interference with social media activism takes place, and how such interference is enabled by the particular characteristics of platforms.


3. ‘PATRIOTIC HACKERS’: NON-AUTOMATED PRO-GOVERNMENT AGENTS ON SOCIAL MEDIA

As previously noted, the use of ‘patriotic hackers’ is a “next-generation” control technique (Deibert and Rohozinski 2010) organized and incentivized by different governments to manage and steer online communication, especially contentious communication. According to the authors (ibid.), the practice consists of the deployment of pro-government groups that patrol chatrooms and online forums, posting information favorable to the regime and chastising critics accordingly (54)10. Because of the surreptitious nature of the activity itself, accessing information that confirms this state strategy is highly complicated; in consequence, the way in which the strategy works has long been a matter of speculation (see Fitzpatrick 2015; Pomerantsev 2015; Stern and Brien 2012). However, over the years, several leaks and anonymous interviews granted by alleged ‘patriotic hackers’ across the world have provided relevant insights into how this disruptive practice is employed by a variety of governments.

This chapter explores how this practice is used in China, Russia and Mexico. It does so by systematically describing and analyzing particular incidents in which ‘patriotic hackers’ attempted to disrupt social media activism in these countries. The incidents are reviewed in detail as a means to explore the inner workings of this disruptive strategy.

The chapter is organized as follows: first, an introduction details the forms of recruitment and main functions of the non-automated pro-government agents in each of the countries of analysis; a set of illustrative examples of how this strategy has been employed across the different types of regimes follows; and an analysis of the way in which the strategy exploits the features of social platforms, and of its impact on the culture of online contention in each country, concludes the chapter.

10 The concept of ‘patriotic hacking’ is also used to refer to individuals who, having ties of allegiance to a certain country, conduct politically motivated cyber-attacks against perceived enemies in the name of patriotism (see Carvalho 2015; Dahan 2013). However, for the purposes of this dissertation, we employ the term ‘patriotic hacking’ following Deibert and Rohozinski’s (2010) definition.


3.1 THE RISE OF NON-AUTOMATED PRO-GOVERNMENT AGENTS ON SOCIAL MEDIA

China has arguably the most renowned example of ‘patriotic hackers’: the wumaodang, otherwise known as the ‘50 Cent Party’ or ‘50 Cent Army’, named after the amount of money each commentator is allegedly paid for every online post that advances the Communist Party line. As reported by Han (2012), the earliest mention of the ‘50 Cent Party’ is found in a leaked 2004 official report of the Municipal Committee of the Province of Hunan (10). The report states that commentators were to be paid a basic monthly salary of 600 Yuan ($88), plus 5 Mao (50 cents) for each post that was favorable towards the government at the local level. It was not long before the use of pro-government agents spread across the different levels and sectors of the Chinese government.

According to The Economist (2013), when anti-Japanese protests erupted across China in 2005, authorities at the central, provincial and local levels began to worry about online public opinion spiraling out of control, especially because the protests were largely organized through the chat groups of the social networking site Tencent QQ. Ministries of the central government thus concluded that both state policies and the ideology of the party needed to be advanced across Chinese cyberspace, leading them to hire online commentators extensively “to steer conversations in the right direction”.

Up until then, as reported by David Bandurski in his blogpost “China’s Guerrilla War for the Web” (2008), China’s traditional propaganda apparatus had been geared towards the suppression of news and information: “This or that story, Web site or keyword could be banned, blocked or filtered. But the Party found itself increasingly in a reactive posture, unable to push its own messages” (ibid.). In consequence, by the end of 2005 pro-government agents were working more or less simultaneously at all levels of the administration. In 2007, Chinese President Hu Jintao even called on Chinese Communist Party (CCP) members to “assert supremacy over online public opinion, raise the level and study the art of online guidance, and actively use new technologies to increase the strength of positive propaganda” (ibid.). Soon after that speech, the State Council began assembling teams of pro-government agents with the aid of schools and party organizations across the country.

Back in 2013, King et al. estimated that between 250,000 and 300,000 ‘patriotic hackers’ were working for the Chinese government. More recently, after developing a largely quantitative approach to analyze an archive of emails leaked from the Internet Propaganda Office of Zhanggong, the same authors estimated that the Chinese government in fact fabricates and posts about 488 million social media comments a year (King et al. 2016). Among other things, the researchers found that nearly all of the 50 Cent Party posts were written by workers at government agencies, including tax and human resource departments and courts (9-10).

In an anonymous interview, a former Chinese ‘patriotic hacker’ told the Global Times of China that commentators either work full-time for state-owned news portals, such as xinhuanet.com, people.com.cn and southcn.com, or work part-time as government employees in various government branches, including ministries, public security and academic institutions (Lei 2010). Certain recruitment criteria are common, including loyalty to the party and the state, and online communication skills. For instance, according to Han’s investigation (2012), a leaked document of the Hengyang Party School laid out the following four requirements (13):

(1) Must have a solid political stance; must champion CCP’s leadership; must firmly uphold the Party’s rules, principles, and policies; must be law-abiding, and must have good ideology and moral character as well as the spirit of professionalism;

(2) Must be equipped with theoretical training, good at cyber languages, with a wide scope of knowledge and good at writing;

(3) Must be familiar with party school work and have basic computer skills and can adeptly use relevant software and internet applications.

(4) Must accept supervision and guidance of Party School Frontline website.

As evidenced by several anonymous interviews given by former 50-centers, they are explicitly hired to amplify the narrative of support for the regime’s policies and the Chinese Communist Party's political stance (the following subsection of this chapter elaborates on this point).

Similarly, back in 2005 a self-styled group called Nashi emerged in Russia precisely to “stand up to any such challenge to Putin's rule” (Elder 2011). Following the pro-democracy revolutions in Ukraine and Georgia, the group became infamous for its devotion to Vladimir Putin, its mass actions and its grand public rallies (Atwal and White 2011). According to several reports, it rapidly became the largest political youth movement of its kind in Russia, claiming upwards of 300,000 members during the electoral cycle of 2007 (ibid.). Also in 2005, the youth branch of the regime’s political party, United Russia, was born under the name ‘Young Guard’. Both Nashi and Young Guard have been identified as state-sponsored movements aimed at stimulating youth electoral participation in favor of the Kremlin.

As explained by Miriam Elder in her breaking report for The Guardian, in 2012 Nashi’s close ties to the Kremlin, as well as the “longstanding suspicions that the group uses sinister methods”, were confirmed following a massive leak of private emails between high-ranking members of both Nashi and Young Guard. The leak, authored by the Russian hacker group Anonymous International, provides insights into the group’s strategy to boost pro-Putin coverage on the Internet. It confirmed that the group spends huge sums of money on running a network of non-automated pro-government agents to “create the illusion of Putin's unfailing popularity”, carry out web-based advertisement campaigns in its favor, and discredit opposition activists and media through trolling and distributed denial of service attacks.

In 2014, a second set of leaks by the same hacker collective, Anonymous International, revealed that pro-government agents are now run through a Kremlin-funded firm called the Internet Research Agency. The documents detail that an army of young netizens is hired, on a regular basis, to flood websites' comment sections around the world with pro-Russian and anti-Western rhetoric, as well as to infiltrate the Russian blogosphere and Russian-speaking social networks in order to amplify the pro-Kremlin narrative (Seddon 2014). The number of Russian ‘patriotic hackers’ is estimated at a minimum of 400, with a budget of at least 20 million rubles (roughly 217,000 euros) a month (Chen 2015).

As reported by journalist Andrei Soshnikov in the Russian independent media outlet Moi Region11, the large state-sponsored Internet Research Agency, based in St. Petersburg, regularly employs over 400 people to produce hundreds of pro-Kremlin posts on LiveJournal, Twitter, Facebook, Google+ and several other social media platforms, including the Russian social network Vkontakte; its workers are often recruited through newspaper ads (see Toler 2015). With the aid of Ludmila Savchuk, a former employee of the professed agency, Soshnikov collected documents and relevant materials that revealed the large “corporate” world of state-sponsored commentators in Russia. As Savchuk described it, working at the Internet Research Agency was very "business-like". ‘Patriotic hackers’ have fixed schedules and quotas (she maintained she had to write at least five political posts, 10 non-political posts and 150 to 200 comments on other workers’ posts over her two 12-hour shifts, which were followed by two days off); her supervisors monitored social media statistics such as page views, number of posts and traffic charts, and employees were compensated or fined accordingly12. According to Savchuk, the fabricated posts had, for the most part, to downplay any support for Ukraine and the United States, as well as to target the Russian opposition, while promoting the government’s policies and interests both domestically and in neighboring countries.

In 2009 Mexico also witnessed the rise of a self-styled youth group of “ectivists”13, aimed at supporting the presidential candidacy of Enrique Peña Nieto, then governor of the State of Mexico. On its website the group described itself as “a national network of young people committed to Mexico and Enrique Peña Nieto”14. During the disputed elections of 2012, after which Enrique Peña Nieto won the presidency, some of its members openly admitted to having organized several rallies and support campaigns for his candidacy; however, they always emphasized that they were independent and were not receiving money from any political party or candidate. This claim would be debunked over the course of the following years.

By the end of 2012 a YouTube video surfaced showing a meeting in which a man stood in front of a crowd, giving instructions to a group of young people on how and what to post on Twitter and Facebook to “spin online opinions into support towards Enrique Peña Nieto”15. The political team of the Institutional Revolutionary Party (PRI) did in fact recognize the veracity of the video, but continued to emphasize that the youth group was formed by unpaid volunteers. Yet, in 2015 an investigative report from the alternative Mexican outlet SinEmbargo published a copy of a contract demonstrating that the PRI had hired a private company to deploy non-automated pro-government commentators across Twitter (dubbed in the contract as “trolls”) during the elections of 2012. The payment amounted to over 3 million Mexican pesos (approximately 68,000 euros), and the contract stated that the company would make use of at least 500,000 non-automated pro-Enrique Peña Nieto internet commentators (see SinEmbargo 2015).

12 See <http://www.nytimes.com/2015/06/07/magazine/the-agency.html?_r=0>

13 They replaced the letter “a” in “activists” with an “e” to denote the digital backbone of the group’s strategies.

14 See <http://ectivismo.com/>

15 See <http://aristeguinoticias.com/0805/post-elecciones/difunden-en-video-supuesto-entrenamiento-de-los-pena-bots/>

The so-called ectivists were mainly recruited throughout 2012 on the website ectivismo.com in order to carry out “volunteer work in support of the Institutional Revolutionary Party (PRI) and its presidential candidate Enrique Peña Nieto (EPN)”. However, as demonstrated by the leaked contract made public by the alternative media outlet SinEmbargo, many thousands of such non-automated pro-government agents were hired to counteract negative opinions against EPN on social media. According to Figueiras (2012), an estimated 700,000 ‘patriotic hackers’ worked systematically during the campaign period to spread and position Peña Nieto’s image on social media platforms.

The aforementioned information demonstrates that ‘patriotic hacking’ is a highly organized state-sponsored activity that emerged, for the most part, as a way for political regimes to maintain and enhance their legitimacy by spreading propaganda online and downplaying oppositional views, without being directly linked to such efforts16. But how exactly do such pro-government agents operate? What are the effects and reach of this state strategy? These questions are addressed in the following sections.

3.2 “PATRIOTIC HACKERS” AT WORK

16 This state strategy resembles another internet practice known as ‘astroturfing’, a prevalent campaign practice in the marketing field that simulates widespread online support, mostly in favor of corporate agendas (see Lee 2010; McNutt 2010).

Following the detention of renowned human rights lawyer Pu Zhiqiang in Beijing back in May 2014, his name was rapidly banned and censored throughout Chinese social networking sites (see Freedom on the Net – China 2015). After advocating in a considerable number of freedom of speech cases and having participated in the defense of an investigation that exposed corruption and abuse of power by Chinese officials, Pu was detained and formally arrested in 2014 on charges of ‘creating a public disturbance’ and ‘illegally obtaining the personal information of citizens.’ Such allegations were related to a number of posts Pu had made on the Chinese social medium Sina Weibo, in which he had “questioned the party's policies toward the Tibetan and Uighur ethnic minorities in the Tibet and Xinjiang regions, and mocked political figures” (Parra 2015). As the day of his trial approached, a huge amount of support for him and for his release spread throughout social media, particularly on Twitter, WeChat and Sina Weibo. Yet, as reported by the alternative media outlet China Digital Times (CDT)17, the comment threads of such supportive messages were dominated by posts such as the following18:

58. Let’s keep a close eye on the Internet, so that people like Pu Zhiqiang don’t have anywhere to hide!

96. Pu Zhiqiang’s case has been publicly tried as a warning to others!

Critical messages of this type were found repeatedly across social media, even containing the same preceding numbers, which gives the impression, as the China Digital Times reports, of their having been copy-pasted from a list. Several of these messages, such as the following, declared that Pu had in fact already pleaded guilty and had recognized the criminal charges of his prosecution, even though this was not the case:

99. The court’s guilty verdict on the Pu Zhiqiang case is correct, and the penalty is fitting. It is fair and reasonable.

Some days later, an official document issued by the Chinese authorities, detailing the type of posts that pro-government agents (50-centers) were required to spread online regarding Pu Zhiqiang’s case, was leaked and distributed online. A translation of this ‘government directive’ was made available by the China Digital Times in a blogpost titled “Commentary Tasks for Pu Zhiqiang Verdict”19. One of the most direct instructions, which in fact resonates with the aforementioned message, read:

Please everyone follow news reports to arrange posts with the following content: Pu always plead guilty, offered a sincere apology to the community and victims, and recognized the criminal charges of the prosecution.

The directive even details specific timeslots for posting, includes examples of the position that pro-government agents should adopt, and demands that a work report be made, among other specifics.

More on the workings of this strategy has been disclosed by government employees themselves. In 2012, for example, Chinese dissident and artist Ai Weiwei landed an interview with a pro-government commentator in exchange for an iPad (see Weiwei 2012). Throughout the interview, it becomes apparent that the work of an online commentator can be more nuanced than simply posting verbatim copies of the directives provided by the different government agencies. According to the anonymous interviewee: "The principle I observe is: don’t directly praise the government or criticize negative news. Moreover, the tone of speech, identity and stance of speech must look as if it’s an unsuspecting member of public; only then can it resonate with netizens." (ibid.).

17 This site was started by Xiao Qiang, with support from the University of California, Berkeley, and is dedicated to aggregating and providing news and information from and about China. See <http://chinadigitaltimes.net/about/>

18 See “Pu Zhiqiang: Guilty in the Court of ’50 Cent’ Opinion”. 2015. China Digital Times. 18 May 2016. <http://chinadigitaltimes.net/2015/12/189287/>

This statement is revealing, as it points to the fact that the 50-centers’ ultimate goal is to gain netizens' trust in order to sway their opinions and expressions in favor of the regime, especially as controversial issues unfold. By resorting to this strategy, the Chinese regime aims to manipulate online expression instead of relying on overt censorship. This demonstrates a certain degree of resilience in the way in which the Chinese state approaches internet controls.

In Russia, “patriotic hackers” are also taking on the task of helping the Kremlin limit and shape critical voices online. In February 2015, for example, within hours of the assassination of Boris Nemtsov (a prominent Putin critic, a leading opponent of the Russian war in Ukraine, and a former deputy prime minister), Russian pro-government agents employed at the Internet Research Agency received a written description of their task for the day, which consisted of “going online to pour bile on the former deputy prime minister and claim he was killed by his own friends rather than by government hitmen, as many suspect” (Parfitt 2015). This was explained by Lyudmila Savchuk, a freelance journalist who worked undercover as a ‘patriotic hacker’ for two months. In an interview with The Telegraph, Savchuk detailed that her job consisted of spending 12 hours a day praising the Kremlin and condemning perceived dissidents on social networks (ibid.). Among the evidence provided by Savchuk are a brief undercover video from inside the agency20 and a list of guidelines 'patriotic hackers' were to follow when producing their posts (see Toler 2015). Samples of their assignments show they were to target the Ukrainian government, Barack Obama, and Kremlin critics such as Alexei Navalny and the feminist activists Pussy Riot, both inside Russia (on domestic social media) and outside (on major social platforms and international sites).


A testament to the latter is the thorough investigation conducted by journalist Jessikka Aro (of the Finnish Broadcasting Company Yle) into how Kremlin-funded ‘patriotic hackers’ potentially influence Finnish public opinion online (see Aro 2015, 2015b). After conducting a widespread survey among Finnish internet users, and interviewing administrators of Finnish websites as well as experts on the subject, her investigation revealed that pro-Kremlin online commentators often manage to deceive Finns and manipulate online debates with “massive spreading of disinformation as their means” (Aro 2015). In addition, Aro closely followed and reported on the activity of online profiles that systematically distributed propaganda messages in support of the Russian leadership across Finland’s cyberspace, eventually becoming a target of the commentators’ attacks herself (see Miller 2016).

As several journalistic reports have demonstrated, Russian ‘patriotic hackers’ are mainly active on the social media platforms Facebook, Twitter, VKontakte, Google+, YouTube, and LiveLeak; on the discussion forums Reddit, Topix, Pprune, Russian.fi, and Suomi24; and in the comment sections of major news media outlets such as The Guardian, CNN, Bloomberg, and Forbes.

For instance, according to Chen (2015), during the minor Ebola outbreak in the United States at the end of 2014, hundreds of Twitter accounts started posting information about a huge outbreak of Ebola in Atlanta under the hashtag #EbolaInAtlanta, which in fact trended briefly. A YouTube video also quickly emerged showing a medical team in protective clothing, allegedly transporting a victim from the airport21. This turned out to be an intricate hoax, later proven to have been planned and carried out within the Russian Internet Research Agency headquarters. The hoax followed the same pattern as an earlier orchestrated campaign, which reported a hazardous chemical explosion at the Columbian Chemicals plant in Louisiana, supposedly carried out by ISIS. That campaign involved “dozens of fake accounts that posted hundreds of tweets for hours, targeting a list of figures precisely chosen to generate maximum attention” (Chen 2015). As Chen further explains, in this case ‘patriotic hackers’ even manipulated screenshots from the CNN homepage to announce the explosion, cloned websites of Louisiana TV stations and newspapers, uploaded a YouTube video where

