
The World Wide Insta-market: A picture is worth a thousand dollars

A platform studies and Actor-network theory approach towards the commercialization of the photo-sharing platform Instagram

Atossa Atabaki 10259899

Supervisor: Lonneke van der Velden
Second reader: Anne Helmond
University of Amsterdam

Media Studies: MA New Media and Digital Cultures 24 June 2016


Abstract

In September 2015, the photo-sharing platform Instagram opened its platform to all business enterprises. Through sponsored posts, companies are now able to advertise on Instagram. In addition, Instagram is increasingly dominated by companies showcasing their products and collaborating with social media influencers, who in turn offer their followers a personalized yet commercialized account to follow. This thesis examines the role of humans and technology within Instagram’s commercialized platform. It is situated within the field of platform studies and complemented by actor-network theory. This is carried out by reflecting on the history and interface of the platform itself, but also by focusing on the agency of users who act commercially on the platform for their own business objectives. In order to gain a better understanding of their expertise and experience with the platform, a total of ten Dutch companies and social media influencers were interviewed. As Instagram is still rapidly commercializing but has not yet received much academic attention within platform studies, the contribution of this thesis is to offer a better understanding of the importance of professional and commercialized users in relation to the platform’s technology.

Keywords: Platform studies, Actor-network theory, Platforms, Instagram, Commercialization, Affordances, Intermediaries


Acknowledgements

I would like to thank my supervisor Lonneke van der Velden for her dedicated guidance throughout this thesis project and Anne Helmond for critically reviewing this work.

A special thanks goes to my dear parents, who never stopped believing in me and never deprived me of their enduring encouragement and support. I would also like to thank my friends for their great support and care over the past six months, especially Cindy Krassen, who has been a true inspiration to me. I really enjoyed her company and feedback during our many thesis-writing dates in the University Library. Last, but surely not least, I would like to thank Michael ten Pas for all his support during both my BA and MA and for being such a patient and motivational partner.


Table of contents

1. Introduction 4

2. Theoretical framework 7

2.1 Web History 7

2.1.1 The History of the Web 7

2.1.2 Commercialization of the Web 9

2.2 Methodological Approaches 12

2.2.1 Technological Determinism 12

2.2.2 The Social Construction of Technology 13

2.3 Platform Studies 16

2.3.1 Platformization of the Web 19

2.3.2 Platform Politics 20

2.3.3 Platform Affordances 22

2.4 Actor-Network Theory 24

3. Methodology 30

3.1 Historiography of Instagram 30

3.2 Interface Analysis 32

3.3 Interviews with Dutch Experts: Companies and Influencers 32

4. Analysis 36

4.1 Historiography of Instagram 36

4.2 Interface Analysis 46

4.3 Relations Dutch Instagram Experts 50

5. Discussion 53

5.1 Instagram: An Actant Interaction Process 53

5.2 Instagram’s Rhizome Network 56

5.3 Instagram’s Commercializing Platform 58

5.4 Social Influencer Marketing 60

5.5 Platform dynamics 61

6. Conclusion 63

Bibliography 65

List of Figures 80


1. Introduction

On the sixth of October 2010, the mobile photo-sharing application Instagram was launched in Apple’s App Store. As I had been keen on Facebook for years for sharing photos with friends, I only discovered the application three years ago. Within a day I was convinced that this was the perfect social media platform for me: it enabled me to express myself visually and to share these pictures with friends. As I had been following bloggers for a while, the platform sparked my interest to follow their stories and lives through pictures. However, I noticed that people who were not bloggers also started to become popular through Instagram. For example, the Dutch Instagrammer Negin Mirsalehi rapidly gained popularity and today, in 2016, she has over three million followers. As an MA student in New Media with an interest in online marketing, other aspects of Instagram sparked my interest as well, for example the fact that Instagram has become a business for influential Instagram users like Mirsalehi, who almost exclusively post pictures related to brands. Little did I know that a picture could be worth so much: in 2015 it was announced that celebrity Kendall Jenner, with over 57 million followers, is estimated to be worth approximately 300,000 dollars per post (Boon, 2015; Siegel, 2015). After Instagram opened its platform to businesses to buy sponsored posts in September 2015, I realised that Instagram was becoming highly commercialized.

In order to further investigate the origin of the commercialization of this platform, we need a better understanding of the different elements and factors that are present and that might have played a role in this commercialization trend. These elements and factors can be both human and non-human (e.g. technology). Therefore, within this thesis I study the roles of both human and non-human elements within Instagram’s commercialized network by asking: In what ways have both humans and technology contributed to Instagram as a commercialized platform in contemporary society?

This thesis is positioned in the field of platform studies, a recent field within new media studies introduced by Ian Bogost and Nick Montfort in 2007. Platform studies is concerned with how the technical layer of platforms, and thus technology, is able to shape and influence the production of culture (Bogost and Montfort, “Platform Studies” 12): “Platform studies is about the connection between technical specifics and culture. In one direction, it allows investigation of how particular aspects of a platform’s design influenced the work done on that platform” (Ibid., 15).


As platform studies enables in-depth research on Instagram, it provides a framework for focusing on how the platform works and how its affordances are carried out. This will be researched via three approaches. The first concerns the historiography of Instagram, in order to demonstrate how the platform has commercially evolved over time. The second is an interface analysis of the rise of Instagram’s sponsored posts. In order to study the commercialization in relation to the research question stated above, this thesis is not limited to a technological investigation, but also takes non-technological factors into account. The third approach therefore consists of interviews with Dutch experts in Instagram’s commercialized landscape, namely Dutch companies that carry out their marketing through Instagram and professional Instagrammers who make a living out of their Instagram account and collaborate with companies on the platform.

A number of authors in the field have focused on the economic interests of platforms, particularly Facebook and Twitter. One can think of authors such as Ganaele Langlois et al. (“Networked Publics”), Carolin Gerlitz and Anne Helmond (“The Like Economy”) and danah boyd (“Social Network Sites”). Arguably, not much light has yet been shed on Instagram as a commercializing platform, even though it was launched six years ago and was bought by Facebook in April 2012, after which its commercialization increased (Constine and Cutler, n. pag.). Analysing the human and non-human elements and factors within Instagram’s commercialized landscape has brought me to actor-network theory. Actor-network theory is an approach introduced by the Science and Technology Studies scholars Michel Callon, Bruno Latour and John Law around 1982, which holds that both humans and technology have the ability to act and form relations (Law, “Actor network theory” 142). Thus, where platform studies looks mainly at the technical and less at the social, through its five levels that characterize how digital media have to be analysed (Bogost and Montfort, Racing the Beam 146), actor-network theory complements platform studies by setting out the relations between these factors, studying both human and non-human elements.

In order to answer the aforementioned research question, the next chapter focuses on the theories and methodological approaches within the theoretical framework. It starts with an overview of the history of the Web, briefly explaining its development and how it has enabled commercialization. Thereafter, a number of classical methodological approaches are outlined that argue for different positions of humans and technology; these theories are set out in order to position platform studies and actor-network theory in this debate. Actor-network theory is then explained and argued for in the light of this research, with a focus on how interactions between different actors can take place. The final part of the theoretical framework focuses on how platforms have commercialized and what their affordances are. The third chapter outlines the methods for researching Instagram’s historiography, the interface analysis and the interviews with Dutch Instagram experts. In the fourth chapter these elements are analysed. The fifth chapter discusses the ways in which both humans and technology have contributed to Instagram’s commercialization with regard to the three research approaches. Finally, the conclusion reflects on the most important findings and sets out a number of recommendations for future research and for platform studies. By analysing Instagram’s commercialization and the role of humans and technology through both platform studies and actor-network theory, this study offers a number of new insights, for example concerning the importance of human actors within the network and the way relations between human and technological elements are always subject to change. So, what does this mean for Instagram’s commercialization?


2. Theoretical Framework

This chapter discusses different classical theoretical approaches regarding the position of both humans and technology: technological determinism, social determinism and the social construction of technology. By setting out these approaches, both platform studies and actor-network theory are situated with reference to the research question concerning Instagram’s commercialization. To better understand how Instagram has evolved over time, it is also important to briefly review the commercialization of the Web and what both humans and technology have contributed to today’s society. Platform studies is then explained in more depth, with regard to platform politics and platform affordances, in order to discuss how this theory contributes to the study of Instagram’s commercialization and the roles humans and technology play. At the end of the chapter, actor-network theory’s approach to the interaction of human and non-human elements is discussed, and it is argued how this approach complements platform studies’ understanding of humans and technology.

2.1 Web History

2.1.1 The History of the Web

The terms Internet and Web are often misinterpreted and used interchangeably. The Internet is a networking infrastructure that was up and running by the 1970s (Berners-Lee, 18). Via the Internet, computers can be connected to each other to transfer and store information (Beal, n. pag.). The World Wide Web, or simply the Web, on the other hand, is software built on top of the Internet that makes the stored information accessible (Baldi et al., 10). The World Wide Web was invented and implemented by the British computer scientist Tim Berners-Lee between 1980 and 1991 at CERN, the European Particle Physics Laboratory (Berners-Lee, 19). In his 1999 book Weaving the Web, Berners-Lee argues that his aim was to create a universal medium that enabled people to share information and create content. Back in 1980, Berners-Lee carried out this idea by building a system named Enquire, which was initially for his own personal use and enabled him to remember connections between people, computers and projects at the laboratory. Soon he realized that the system could become a global information space if it were decentralized and available to everyone. Berners-Lee believed that the computer was capable of making connections that otherwise would not exist. He decided to work with hypertext1, as he aimed for the computer to label and link everything so that it could refer to any node or document (Ibid., 5). Thus Hypertext Markup Language (HTML) was created for writing Web pages (Cailliau et al., 2; W3C, n. pag.). This was the start of a Web of information that can be considered a global information space (Berners-Lee, 6).
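The idea that hypertext lets a machine follow the links between documents can be made concrete with a small sketch. The fragment below is not part of the thesis: it is a minimal illustration, with an invented HTML snippet and invented link targets, that uses Python’s standard `html.parser` module to extract the `href` targets of anchors, i.e. the machine-readable “links to other texts” that make a document hypertext.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the targets of <a href="..."> anchors in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Each anchor tag carries the address of another node or document.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# A hypothetical hypertext page with two outgoing links.
page = ('<html><body><p>See <a href="/node2.html">another node</a> '
        'and <a href="http://info.cern.ch/">CERN</a>.</p></body></html>')

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/node2.html', 'http://info.cern.ch/']
```

Following such extracted links from page to page is, in essence, how both early browsers and later search-engine crawlers traversed the “global information space”.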

In 1989 Berners-Lee had built all the tools that were needed for the World Wide Web (Berners-Lee, 33). In August 1991 he posted a short summary of the World Wide Web project to several Internet newsgroups. This was a milestone for the Web, as it had become a publicly available service on the Internet (Cailliau et al., 2). By 1992 the Web consisted of a small number of servers; from 1993 onwards, however, the Web started growing as more Web browsers emerged. From 1996 onwards there were signs of the Web commercializing (Cailliau et al., 3). At this stage there was an increase in companies that carried out most of their business online, through websites with a “.com” domain. This period was marked as the dot-com bubble (Dzikevicius and Zamzickas, 170). The dot-com bubble burst in 2001, as many dot-com companies were unable to become profitable on the Web (Ibid., 171). Web pioneers Tim O’Reilly and Dale Dougherty argue that the burst of the dot-com bubble marked an important turning point for the Web, and so in 2004 the term Web 2.0 was coined (22). Many critics have argued that Web 2.0 is merely a marketing buzzword (Cormode and Krishnamurthy, n. pag.), or, as Berners-Lee put it, “a piece of jargon that nobody even knows what it means” (Laningham, n. pag.). According to O’Reilly, Web 2.0 was nothing new, but rather a fuller realization of the potential of the Web as a platform (34). Whereas Web 1.0 was the Web as a medium for publishing information, Web 2.0 turned the Web into an infrastructure on which applications could be built. Another important element that distinguishes Web 1.0 from Web 2.0 is the shift from a read-only Web to a service that enables user participation (Ibid., 34).

Nicholas Gane and David Beer argue that this move towards Web 2.0 was due to a shift within the Internet itself. They believe that the technology of the archive started to decentralize and become more open to access and production. According to them, this changed the Web from “a static information source to a dynamic space where anyone with a connection can, with even the most basic of computer skills, create content” (72). Felicia Song marks this as “a discursive move from information to participation” (252), meaning that a shift has emerged from a mere consumption culture towards a participatory culture2 (Jenkins, xxii): users are invited to become creators, rather than merely acting as consumers of content (Cormode and Krishnamurthy, n. pag.). O’Reilly has defined this co-creation as the architecture of participation (29), which ranges from blogging (Gane and Beer, 72) to creating and commenting on videos on video-sharing websites such as YouTube.com, and co-writing articles on the online encyclopaedia Wikipedia.org. Web 2.0 also gave rise to social network sites like MySpace and Facebook, due to the technologies that were built on Web 2.0, such as mashups, AJAX3 and comments (Cormode and Krishnamurthy, n. pag.).

1 The term hypertext was coined by Ted Nelson around 1965 and describes a text that has links to other texts (W3.org, n. pag.).

2.1.2 Commercialization of the Web

The previous section outlined how the Web emerged, with a specific focus on the shift from Web 1.0 to Web 2.0. This section focuses on how and why the Web became commercialized. From January 1993 onwards the first signs of commercialization of the Web emerged, as commercial developers started launching their own Web browsers: aesthetically appealing products (full colour, with pictures) that were improved on the basis of users’ feedback (Berners-Lee, 69). From January 1994 the Internet, too, was rapidly commercializing, as Internet service providers emerged that offered access to the Internet via a local telephone call (Berners-Lee, 81). By this time, the software company Netscape had started displaying advertisements from other companies on its home page. Companies paid large sums to reach a wide audience, and it became ever clearer that the Web had become a business (Berners-Lee, 93). Cailliau et al. have argued that the invention of the Internet sparked one of the biggest economic booms in history (3). As argued in the previous section, around 1996 a commercial shift took place: the period of the dot-com bubble (Dzikevicius and Zamzickas, 170). By that time, many people invested in the Web because they thought the industry had high potential and a wide reach, due to the speed of its growth, low prices and accessibility (Berners-Lee, 106). As traditional companies noticed the speed of this growth, they shifted from traditional advertising outlets, such as newspapers and magazines, to the Web and became online businesses. Within this period Web users came to be considered online customers (Halavais, 112). This led to direct Web-based commerce, known simply as e-commerce. Subsequently, more companies started to present their products on the Web (Cailliau et al., 3). By 1998 the Web had become a commercial battlefield, or even a product (Leiner, 108), as people started buying up popular domain names, such as soap.com or sex.com. Many people noticed that these domains could increase in value and snapped them up (Berners-Lee, 127). This rapid commercialization also opened up the discussion regarding the privacy of online users (Ibid., 143). As user data became more accessible, questions arose as to whether unauthorized parties were able to monitor one’s online activity (O’Neil, 17). For example, in 1996 the Internet ad company DoubleClick was launched, which developed databases by monitoring the behaviour of Internet users. By monitoring users, DoubleClick was able to create tailored digital advertising for them (Vogelstein, 39). According to Berners-Lee, from around 1997 onwards such companies made it easier for online marketers to anticipate Web users’ behaviour. As software became available that could log users’ clicks, it became easier for companies to track and analyse users’ clicking patterns on particular websites. In turn, this helped companies to map and research users’ overall online behaviour (Ibid., 145). Berners-Lee argues that this information could also be sold to other companies for commercial purposes. In addition, the Web became familiar with the use of magic cookies, or simply cookies4. Before the 21st century the use of cookies was not transparently communicated to online users (Ibid., 145), and thus cookies could capture information without users being aware of it.

2 Participatory culture stands for the culture in which users do not merely consume, but also contribute and produce (Jenkins, xxii).

3 The concept was introduced in 2005 by Jesse James Garrett and stands for Asynchronous JavaScript + XML. Ajax incorporates several technologies and represents a fundamental shift in what is possible on the Web, as it eliminates the start-stop nature of interaction on the Web by including an intermediary, the Ajax engine, between the user and the server. This engine renders the interface the user encounters and communicates with the server on behalf of the user. The engine thus allows the user to interact with applications independently of the communication with the server (Garrett, 1; Ullman and Dykes, 2).
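Berners-Lee’s description of a cookie as a reference number that the server assigns to the browser can be illustrated with a short sketch. This is not part of the thesis: it is a minimal example using Python’s standard `http.cookies` module, and the identifier `ref-12345` is invented for illustration.

```python
from http.cookies import SimpleCookie

# Server side: on a first visit, assign the browser a reference number.
cookie = SimpleCookie()
cookie["visitor_id"] = "ref-12345"   # hypothetical identifier
cookie["visitor_id"]["path"] = "/"
header = cookie.output()             # the Set-Cookie header sent to the browser
print(header)

# On a later visit the browser sends the value back in its Cookie header;
# the server parses it and recognizes the returning user.
returned = SimpleCookie()
returned.load("visitor_id=ref-12345")
print(returned["visitor_id"].value)
```

It is exactly this silent round trip, invisible to most users before disclosure requirements, that allowed ad companies such as DoubleClick to link repeat visits into behavioural profiles.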

One of the most prominent features of Web 2.0 has been the rise of the blogosphere in 2002. Although personal home pages already existed in Web 1.0, blogging was different because it incorporated RSS5. RSS enabled users to subscribe to particular Web pages and receive a notification whenever a new post was uploaded (O’Reilly, 24). Such subscriptions via RSS can be tracked, which meant that blogging also made room for direct Web advertising (O’Reilly, 25). This characterizes Web 2.0, which relies on databases that capture, store and share data with third parties (Ibid., 25). Alexander Halavais argues that within Web 2.0, blogging opened up the possibility for many bloggers to present their story to a large audience. Some bloggers even became ‘A-list’ bloggers, as their audience kept growing and they achieved the same reach as popular mass-media outlets (Ibid., 115). This sparked the interest of marketers in promoting their products through bloggers (Halavais, 117; Gillin and Moore, 23; Hsu et al., 81). This shift gave rise to online influencer marketing, which encompasses marketing through individuals on social media instead of through large traditional companies (Gillin and Moore, 180; Brown and Hayes, 53, 76). The individuals covered by this term are known as social media influencers, who according to Karen Freberg et al. are “a new type of independent third party endorser who shape audience attitudes through blogs, tweets, and the use of other social media” (90).

4 According to Berners-Lee, cookies can be understood as reference numbers that the server assigns to the browser in order to recognize when the same person returns to a Web site (145).
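The RSS subscription mechanism mentioned above can be sketched briefly. The fragment below is not from the thesis: it shows a hypothetical RSS 2.0 feed (titles and URLs invented) of the kind a blog would publish, parsed with Python’s standard library. A feed reader periodically fetches such a document and compares the item list against what it has already seen; a new entry at the top is what triggers the notification to the subscriber.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 2.0 feed such as a blog might publish.
feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example blog</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

root = ET.fromstring(feed)
# A subscribed reader lists the current item titles; anything not seen
# on the previous poll counts as a new post to notify the user about.
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)  # ['First post', 'Second post']
```

Because every fetch of the feed is an HTTP request to the publisher’s server, subscriptions of this kind are trackable, which is precisely what opened blogging up to direct Web advertising.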

As this thesis is centred on the involvement of both human and technological elements, both companies that advertise on Instagram and social media influencers who operate on the platform are interviewed, in order to gain a better understanding of their interaction with the platform. This is explained in more depth in the next chapter. According to Duncan Brown and Nick Hayes, the aim of the social media influencer’s message, through whatever publishing medium, is that people embrace it, act on it and, lastly, share it with their own audience (77). These influencers can be bloggers, Instagrammers and YouTubers who operate within a large network of followers, in which they are able to create more awareness for brands or offer recommendations (Ibid., 174). According to blogger Jason Kottke (n. pag.), publishing through media like Instagram, YouTube and Facebook demonstrates a shift from blogging towards other social media networks. The blogosphere has embraced the functionality of social media platforms and is consequently shifting towards the ecosystem of social media (Helmond, The Web as Platform 68). Social media influencers differ from traditional influencers in a variety of ways. As opposed to today’s social media influencers, who can be anyone, traditional influencers were mostly professional bodies, such as celebrities, governments, press, investors and pressure groups (Peck et al., 9). In addition, social media influencers have a concentrated, though large, online network of followers to whom they can address their posts (Basille, n. pag.). As opposed to traditional marketing, online word of mouth (OWOM)6 (Westbrook, 216; Kozinets, 75) coming from influential and trusted people has the capacity to reach a larger audience (Hsu et al., 81). Stefan Bernritter argues that when OWOM is carried out publicly, online and positively, it can lead to online brand endorsement. When it is carried out by an influential person, the chance of people endorsing a brand is significant (Ibid., 34; Brown and Hayes, 183). As everyone is now able to contribute to the Web, the role of influencer is potentially open to everyone. Thus a shift has taken place from Web users being mere consumers to being producers.

6 OWOM can be understood as “informal communications directed at other consumers about the ownership, usage or characteristics of particular goods and services and/or their sellers” (Westbrook, 216).


This section has thus set out how and why the Web has commercialized and, more importantly, which actors have been involved in the process of commercialization. These human and technological elements will be studied in relation to Instagram and explained in more depth in the rest of this thesis. The next section focuses on a number of classical theories in order to position the role of humans and technology within the platform.

2.2 Methodological Approaches

As the Web commercialized further over the years, it is interesting to recapture the dream Berners-Lee described in his book (Weaving the Web). His dream for the Web was twofold: firstly, he dreamt that the Web would become a more powerful means of enabling collaboration between people; secondly, he dreamt of a machine becoming capable of analysing content, links and transactions between people and computers on the Web. Berners-Lee created the Web for social purposes, so that people could participate in the process of the Web by browsing and creating content themselves (123). His intention was socially driven; the execution, however, was carried out through technology and had its effects on society and its activities (Kaplan, xvii). This section focuses on a number of classical concepts and theories that view technology’s effects on society, and society’s effects on technology, from different perspectives.

2.2.1 Technological Determinism

The belief that technology is a key governing force in society dates back to the early stages of the Industrial Revolution. This belief is centred on the idea that changes in technology have a greater influence on society than any other factor (Smith, 102; Bijker, 66; Chandler, 44). The term technological determinism was coined by the American sociologist and economist Thorstein Veblen around the turn of the twentieth century (Marx, 238). Earlier, pioneers of the Enlightenment7 had already regarded technology as a liberating force (Smith, 253). Technological determinism is, as the name suggests, a deterministic theory, meaning that it holds that both social and historical developments rest on one determining factor (Chandler, 2). Merritt Roe Smith argues that technological determinism as a mode of thought was deeply embedded in American culture from the 1780s onward, when people started to attribute agency to technologies as industries emerged. Although this deterministic thinking later reached Europe, it was in America that its effect was enormous, as people there were taken with the idea of progress (Ibid., 3). For most Americans this meant that the pursuit of technology could improve people intellectually, morally and spiritually, and bring them more wealth (Ibid., 3).

7 The Enlightenment is an eighteenth-century philosophical movement that became dominant in the world. Highlighting the position of the individual, it argues for individual autonomy, liberating the individual from the dominance of religion and thereby paving the way for the emergence of modern democratic societies. Its ideas were centred on liberty, progress and individualism (Zafirovski, 34).

As argued by Sergio Sismondo, technological determinism holds that available technologies can determine social events, mostly for economic reasons. Technological systems have the ability to affect the world and therefore have much impact on how economic choices are made (Ibid., 79). Technological determinism is thus a theory that focuses on cause and effect. Daniel Chandler has argued that technological determinism is a reductionist theory, as it believes in mono-causality, meaning that it aims to reduce a complex whole, in this case society, to the effects of merely one factor: technology (Ibid., 6)8. Technological determinism comes in two versions. The first is soft determinism, which argues that changes in technology have the ability to determine what happens in society, while technology itself can at the same time be influenced by social pressure coming from society (Chandler, 3). As opposed to soft determinism, hard determinism holds that technological development is an independent force that cannot be affected by social influences (Ibid., 3). In strong contrast, social determinism critiques technological determinism as a whole. From a social deterministic perspective, it has been argued that technological determinism does not acknowledge that humans in fact created technology and machines (Jackaway, n. pag.). Furthermore, Lelia Green has argued that one should rather speak of social determinism, as society is responsible for the development and distribution of technology (3). Green thus advocates a different view of technology, one in which it is determined by social processes that serve society’s own ends (8). However, I believe social determinism is also too extreme and can be critiqued, as it gives full agency to society without taking into account technology’s ability to exert influence. In this thesis I argue that both classical theories are rather extreme and not entirely applicable to platforms. Thus, by focusing on the role of both humans and technology within Instagram’s platform, I criticize both hard technological determinism and social determinism.

2.2.2 The Social Construction of Technology

A more nuanced vision, contrary to both technological determinism and social determinism, can be found within the constructivist strand of Science and Technology Studies (Sismondo, “Some Social Constructions” 516). This approach focuses on the possibilities for construction, rather than on a force that determines influence, as argued within deterministic theories (Winner, 367). Contrary to social determinism, this approach holds that “facts, knowledge, theories, phenomena, science, technologies, and societies are socially constructed” (Sismondo, An Introduction to Science and Technology Studies 51). Social construction finds its origins in constructivism, a theory of knowledge that set itself apart from other theories of cognition (von Glasersfeld, “Aspects of Constructivism” 2). This approach was developed in the 1930s by the Swiss psychologist Jean Piaget (von Glasersfeld, Radical Constructivism 54). According to Piaget, knowledge has an adaptive function, meaning that human sensory experience is based on the ways in which we observe the world (von Glasersfeld, “Aspects of Constructivism” 3). The term social construction was first used in the title of Peter Berger and Thomas Luckmann’s 1966 work The Social Construction of Reality (Sismondo, An Introduction to Science and Technology Studies 52; Bijker, 64). Berger and Luckmann were interested in understanding the process by which human knowledge develops within social reality, a reality they perceived as the institutions and structures that emerge as a result of people’s actions (3).

8 This resonates with the definition that the Oxford English Dictionary gives for the term reductionism: “the practice of describing or explaining a complex (especially mental, social, or biological) phenomenon in terms of relatively simple or fundamental concepts, especially when this is said to provide a sufficient description or explanation; belief in or advocacy of such an approach” (n. pag.).

In the 1970s a group of concerned scientists, philosophers, sociologists and historians argued that scientific knowledge had to be understood in sociological terms (Sismondo, An Introduction to Science and Technology Studies). This gave rise to the strong programme in the sociology of knowledge (Ibid., 42). The social study of science has since been concerned with demonstrating the extent to which science and technology can be explained through the work done by scientists and engineers. However, clashing thoughts emerged regarding the extent to which knowledge was social or technical, which led to conflicts amongst several scholars within the programme.9 The strong programme has influenced the field of Science and Technology Studies, which adopted the term social construction in the late 1970s, as carried out by authors like Bruno Latour and Steve Woolgar (Sismondo, An Introduction to Science and Technology Studies 42).

9 In his 1981 article, Woolgar criticized that the social world is not fully determinate, as science and technology studies themselves construct aspects of the social world (373).

Within Science and Technology Studies, two approaches to social construction can be distinguished. The first approach concerns a mild perspective, which argues that it is important to take the social context into account when looking at developments in science and technology (Sismondo, “Some Social Constructions” 246; Bijker, 64). In the early 1980s the social construction of technology (abbreviated as SCOT) was developed as a theory to shed a critical light on technological determinism (Bijker, 71). Wiebe Bijker and Trevor Pinch first coined the term social construction of technology in 1984. In their 1987 work, they argue that the black box10 of technology has to be opened to see what is inside (8). Here they refer to the development and success of the bicycle in the 1870s. They argue that the bicycle was socially constructed through the different meanings given to it by social groups11 (Klein and Kleinman, 28; Bijker and Pinch, 414). Bijker thus argues that, according to the social construction of technology, these social processes are able to influence and construct the technicality of machines (65). As with the example of the bicycle, meaning was given based on people’s experience with an artefact: through the eyes of women the bicycle was seen as an “unsafe” machine, whereas young men argued it to be “the macho machine” (Ibid., 36). According to Bijker, the social construction of the bicycle illustrates the central idea of the methodological relativism that the social construction of technology advocates (Bijker, 74). Methodological relativism holds that there is not one definition or truth, as meaning depends on the different interpretations that are given to something. This method thus enables researchers to explain the development of a technology by focusing on the process through which its meaning is constructed, rather than by looking at its final success (Bijker, 74). The result is multiple artefacts, due to the different meanings that have been given to them by social groups (Bijker and Pinch, 428). After multiple meanings arise, some meanings of artefacts become dominant over others, based on how they are perceived by different social groups. This results in the emergence of a single artefact (Bijker, 68). Within this stage the meaning of an artefact is placed within a broader technological frame to stabilize it (Bijker, 69).
The technological frame is understood by Sismondo as “the set of practices and the material and social infrastructure built up and around an artefact or collection of similar artefacts” (83). The technological frame thus describes the actions and interactions of actors, explaining how they socially construct technology, now and in the future (Bijker, 70). Bijker argues that this technological frame distinguishes the social constructivist approach from technological determinism, as it demonstrates that technology does not have its own logic, but is rather socially shaped by humans (Ibid., 66).

10 The term black box is given to a complex device or system of which only the inputs and outputs are known; what happens inside the box remains unclear (Winner, 365).

11 Social groups are groups of people who are relevant for describing an artefact when they explicitly attribute a meaning to that artefact (Bijker, 68).

In addition, the social construction of technology focuses on social dimensions that are denied by technological determinism and emphasizes people’s ability to act and interact with technology. The social construction of technology can be classified as a soft social constructivist approach, as Bijker argues that it is equally important to stress technological factors, since technology has the ability to shape what we as humans do and think. This is linked to the technological frame, which describes both the interactions that lead to the social construction of technology and the influence technology has on the social (Bijker, 70). The technological frame has thus provided a theoretical link between social constructivism and technological determinism: it holds that there is a constant interaction between technology and the meaning that people give to it. Technology and society simultaneously shape each other (Kaplan, 2). Within this approach, artefacts are therefore no longer determined by the natural world, but are seen as constructions of individuals or social groups in relation to technology (Law, 105).

An overall critique of this strand of Science and Technology Studies has come from theorists such as Sally Wyatt. Wyatt argues that technological determinism is considered dead within Science and Technology Studies. She believes that the concept should no longer be dismissed, as it is important to have an understanding of technological determinism’s different manifestations (176). She therefore advocates that technological determinism be highlighted in science and technology studies, and treated just as thoughtfully as technology itself. It is important to take these different classical theories into account, as they provide the nuances and ramifications needed to understand where both platform studies and actor-network theory are situated. In the following pages, actor-network theory and platform studies are explained and situated within this thesis.

2.3 Platform Studies

Up to the previous section, the focus was on the different perspectives on technology and human involvement offered by different theories. This section demonstrates a shift from the more classical study of technique to platform studies, as platform studies can also be understood in relation to the previous analyses of the development of technology. Platform studies is concerned with studying how the technical layer of platforms, and thus technology, can influence culture, and has had to position itself in the technology-human debate after being critiqued for being technologically deterministic (Bogost and Montfort, “Platform Studies” 12).

As this thesis studies both the human and technological elements that have contributed to Instagram’s commercialization, this study is placed within the field of platform studies. Platform studies was introduced by Ian Bogost and Nick Montfort in 2007, as they believed that the computing infrastructure of platforms was ignored in the field of digital media (Bogost and Montfort, “Platform Studies” 12). Bogost and Montfort’s aim with platform studies is therefore to describe how platforms have been shaped and how they influence the production of culture (Bogost and Montfort, “New Media as Material Constraint” 178). This new focus in the study of digital media investigates the relationships between the hardware and software design of platforms and the creative works produced on those platforms. Examples of these creative works are not limited to video games, but also entail digital art, virtual environments and electronic literature (Bogost and Montfort, “Platform Studies” 14; Apperley and Parikka, 3). Platform studies has also been criticized for being a brand or theme, with regard to the book that Bogost and Montfort have written on the topic. According to Dale Leorke, instead of taking platform studies in a challenging direction, the book functions more like a formula that can be implemented when studying a platform (266).

In their article “Platform Studies: Frequently Questioned Answers”, Bogost and Montfort argue that even though platform studies has only recently been introduced, it is already misunderstood in many ways within the new media studies community (11). One of these misinterpretations is the argument that platform studies entails technological determinism because it studies the underlying infrastructure of Web services (Ibid., 12). Bogost and Montfort argue that platform studies positions itself in opposition to the radical version of technological determinism, the hard deterministic approach. As argued in this chapter, hard determinism comprises the view that technological changes impact culture autonomously, excluding any form of social intervention (Kaplan, 2). Opposed to radical technological determinism, the soft deterministic perspective believes that technology has the ability to influence people. David Kaplan has argued that “technology mediates and steers society but it does not quite drive it” (Ibid., xvii). These technologies have not necessarily changed social life, but they have influenced the way we carry out tasks, such as communicating through email or using technologies to cook our food. According to Kaplan, such technologies softly determine our lives, without having a great effect on society as a whole (Ibid., xvii). From a platform studies perspective, Bogost and Montfort critique both the social constructivist and the hard technological deterministic perspective (“Platform Studies” 11). Their criticism of the social construction of technology is that this approach merely focuses on the ways in which technology is developed, rather than looking at how people respond to the technology after it has been constructed. Bogost and Montfort likewise criticize Science and Technology Studies and Latour’s work for not focusing on how people respond to technology after it has come into being (Ibid., “Platform Studies” 12). Within this thesis I discuss how actor-network theory can actually contribute to and complement platform studies by demonstrating what happens after technology has come into being, in relation to Instagram’s commercialization. Bogost and Montfort also criticize hard technological determinism, as they reject the idea that technology occurs and evolves without any form of human interference. Platform studies does not state this directly, but from its position within these debates it can be argued that it leans more towards the soft deterministic approach. I argue this because Bogost and Montfort state in their article that people are continuously negotiating with technologies as they develop cultural ideas and artefacts, and that people create technologies themselves as a reaction to social, cultural, material and historical issues (13). Thus platform studies holds that the interaction between the digital object and human content is a constant negotiation.

Bogost and Montfort argue that the literature within the field of platform studies has three elements in common. First, it focuses on a single platform or on platforms that are related to each other. Second, it investigates how these platforms work. Lastly, it discusses how platforms exist in a context of culture and society, as they are developed on the basis of cultural concepts and contribute to culture in many ways; an example of this is the manner in which people perceive and interact with platforms (Racing the Beam viii). Platform studies also enables the analysis of digital media on the basis of the five levels that characterize digital media analysis: reception/operation, interface, form/function, code and platform (Bogost and Montfort, Racing the Beam 145). On the reception/operation level, the focus is on the aesthetics of reception, including how an actor (user, viewer or reader) deals with interaction on a platform. The second level concerns interface studies, which examines the platform’s interface, the layer that “sits between the core of the program and the user” (Ibid., 146). The form/function level includes the rules that are carried out on the platform. On the code level, the source code of a platform is analysed. Lastly, the platform level is the layer underneath the code, on which the code is built (Ibid., 147).

Within this thesis, the levels of digital media analysis stated above are applied to the research on Instagram’s commercialization and the role of both human and technological actors. The reception/operation level is carried out by interviewing human experts in the field: Dutch companies active on Instagram and influential, professional Dutch Instagrammers. On the interface level, Instagram’s interface is mapped out, both through Instagram’s historiography and through an interface analysis concerning recent updates and the increase in sponsored posts. On the form/function level, the affordances of the platform are outlined on the basis of the previous level of analysis. Moreover, on the code level the historiographical research approach focuses on the Instagram API developer documentation in order to grasp the underlying code. Lastly, on the platform level it is investigated how the platform itself allows developers to build applications on top of it. This can be studied through the API developer documentation.
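To make the code- and platform-level analysis concrete, the sketch below illustrates (in Python) how a third-party application could have addressed Instagram's API as documented for developers around the time of writing. This is an illustrative sketch, not the thesis's own method: the endpoint and parameters follow Instagram's legacy v1 REST API (since retired), the access token is a placeholder, and the example only constructs the request URL rather than performing a network call.

```python
# Illustrative sketch of the developer-facing side of Instagram's platform:
# the legacy v1 REST API let registered applications request a user's media.
# The token below is a placeholder; no actual request is made here.
from urllib.parse import urlencode

BASE = "https://api.instagram.com/v1"

def recent_media_url(access_token: str, count: int = 10) -> str:
    """Build the request URL for the authenticated user's recent media."""
    query = urlencode({"access_token": access_token, "count": count})
    return f"{BASE}/users/self/media/recent/?{query}"

# A third-party application would issue an HTTP GET to this URL and receive
# JSON describing the user's posts (captions, likes, comments, image URLs).
print(recent_media_url("PLACEHOLDER_TOKEN", count=5))
```

It is exactly this programmability — outside developers building on top of documented endpoints such as the one sketched here — that the platform-level analysis in this thesis examines.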

2.3.1 Platformization of the Web

As mentioned earlier in this chapter, O’Reilly argues that one aspect of the shift from Web 1.0 to Web 2.0 is that network sites have become platforms. Consequently, the Web was no longer a medium solely for publishing information, but had become an infrastructure of participation that was suitable for building applications on top of (34). Tarleton Gillespie argues that the term platform has recently emerged to describe the online services of content companies that function as intermediaries serving users, advertisers and professional content producers (348). Gillespie states that the meaning of the word platform can be traced back to the four semantic territories in which the Oxford English Dictionary has signified the word in the past (348). The Oxford English Dictionary’s oldest definition describes the platform as a physical object: the architectural platform, a raised level or surface on which people or things can stand (349). The second definition is computational and describes the term from a technical perspective: the platform is an infrastructure or an online environment that allows users to design and install applications on top of it. The third definition is figurative, stating that the platform serves as a foundation for an action or event. Lastly, the political definition signifies the platform as a physical stage for politicians to articulate their political ideas, which is associated with freedom of speech (350). Apart from these earlier definitions, Gillespie argues that “a platform is not a platform because it allows code to be written or run, but because a platform affords an opportunity to communicate, interact or sell” (351). With this approach he emphasizes the participatory and economic aspects of a platform.
In contrast, as mentioned before, Bogost and Montfort focus more on the computational aspect of platforms, as they argue that among the many things currently being defined as platforms, “computational platforms are the specific basis for digital media work” (“Platform Studies” 13). They embrace Mosaic co-creator Marc Andreessen’s 2007 definition of platforms: “a system that can be reprogrammed and therefore customized by outside developers, users, and in that way, adapted to countless needs and niches that the platform’s original developers could not have possibly contemplated, much less had time to accommodate” (“Platform Studies” 13). Bogost and Montfort argue that this explanation shows the flexible character of the platform, as it enables users to be creative.

Before the term platform became a widely used word on the Web, social media platforms were described as social network sites, which provide their users with diverse possibilities to create and maintain social relations within the network (Boyd and Ellison, 210). Boyd and Ellison provide an extensive definition of social network sites in their article:

We define social network sites as web-based services that allow individuals to (1) construct a public or semi-public profile within a bounded system, (2) articulate a list of other users with whom they share a connection, and (3) view and traverse their list of connections and those made by others within the system. The nature and nomenclature of these connections may vary from site to site. (211)

Anne Helmond argues that social network sites have transformed into social media platforms, as they have extended themselves into the rest of the web, making web data outside the platform “platform ready” (“Platformization of the Web” 7). In her 2015 work The Web as Platform: Data Flows in Social Media, she traces this transformation of social network sites into social media platforms in order to see how social media have transformed the web. Helmond introduced the concept of platformization as a critical perspective on the programmability of platforms, in order to understand this transformation from an infrastructural point of view (24). The platformization of the web is understood as “the rise of the platform as the dominant infrastructural and economic model of the social web and the consequences of the expansion of social media platforms into other spaces online” (Helmond, The Web as Platform 157). The concept thus centres on the computational aspect of platforms to expose how social media platforms work. In addition, it demonstrates how social media platforms have economic interests. This in turn can be linked to Gillespie’s definition mentioned before, which approaches platforms from a participatory and economic perspective (351).

2.3.2 Platform Politics

The previous section described Helmond’s concept of platformization in relation to platforms expanding themselves into other online spaces (Helmond, The Web as Platform 157). This section places the emphasis on the rules that platforms carry out. Platform politics can be understood as an approach that critically examines a platform’s architecture in order to determine how platforms shape a networked society and carry out their affordances (Helmond, The Web as Platform 16; Gillespie, 350). This demonstrates how the platform’s economic interest is carried out, as opposed to how it appears on the surface: for example, as a fun photo-sharing service or as a network offering users the tools to extend their social network. The concept of platformization is applicable to social media platforms like Facebook, Twitter and Instagram.

(22)

Gillespie argues that social media platforms appear to be open and to encourage their users to become creators. Nevertheless, their most important aspiration is to make a profit (Gillespie, 352). According to Ganaele Langlois and Greg Elmer, corporate social media platforms may seem transparent on the surface, but they are certainly not, as these are companies that aim to make a profit.12 Whereas in earlier stages of the Web corporate social media merely leaned on advertisement revenue, they are now also focused on designing how users have to communicate through digital objects such as liking, sharing and commenting. In turn, corporate social media are able to monitor all the activities the user carries out, such as providing profile information, messaging, sharing, liking, et cetera (“The Research Politics” 2). By monitoring the user’s activity, corporate social media platforms weave themselves into the everyday life of users in order to profit from the user’s data. Langlois and Elmer argue that in this process corporate social media carry out double articulations of code and politics, which they define as “articulations of code and politics that link and reshape informational processes, communicational constraints and possibilities, and political practices in different and sometimes contradictory ways” (“Networked Publics” 417). They argue that this double articulation explains how corporate social media’s communication through digital objects can take place at one level and simultaneously create new articulations at other levels. These articulations take place on three levels: the user-interface level, the information-representational process level and the black-box level (“Networked Publics” 417). Figure 1 demonstrates how these levels relate to the user’s experience and the interests and goals of corporate social media.

User-interface level
  User’s experience: engaging with different features of the platform.
  Goal of company/platform: collect as much user data as possible.

Information-representational process level
  User’s experience: the user is confronted with a number of advertisements and recommendations, which (s)he can follow.
  Goal of company/platform: carry out political and economic interests through advertisements and recommendations/suggestions.

Black-box level
  User’s experience: the user often embraces the recommendations and acts on them.
  Goal of company/platform: continue capturing data, which can be translated into valuable information about the user.

Figure 1. Summary of Elmer and Langlois’ double articulations on three levels (created by Atabaki)

12 Langlois and Elmer refer to social media platforms as corporate social media platforms due to their commercial character.


Corporate social media platforms can have an economic interest in collecting online user data. This process takes place on different levels. First, on the interface level, the user is presented with different social features to use. As the user navigates the platform and clicks on different buttons, this communication creates data about the user that can be translated into targeted interventions at the interface level, coming from the back-end. As a result of clicking on specific buttons and following actions that are mediated by the informational practices of the digital object, the information-representational process takes place. What is important here is that political and economic interests are translated into actions that are presented to the user in the form of advertisements, recommendations or suggestions of people and stories to follow. Thus, according to Elmer and Langlois, the digital object does not merely include human actions; there is a constant interaction, a back and forth between human actions that are triggered by the double articulations incorporated in the digital object. By carrying out their communication through these articulations, corporate social media are able to generate valuable information. They therefore constantly process and analyse the data in their back-end to improve their communication, which constitutes the black-box level. By doing so, they can grasp the data and act on it in order to obtain as much valuable data as possible.

2.3.3 Platform Affordances

The previous sections have focused on the field of platform studies, the platformization of the web and its politics. This section examines these politics in more depth by looking at affordances, to determine how platforms carry out activities in relation to the user. The term affordance originates from Aufforderungscharakter, a term coined by Kurt Lewin, which J.F. Brown translated in 1929 as having an invitation character (Gibson, 138). James Gibson elaborated on this translation, deriving the term affordance from the verb “to afford” (127). Gibson approached the term from the perspective of nature, arguing that the affordances of an environment are what it provides the animal. This perception shifted, however, as people modified the natural environment by using artificial materials. The affordances of an object thus invite people to do something with it, which leads to a particular behaviour, as it is in the object’s nature to do so (Ibid., 139).

In relation to social media platforms and their politics, Danah Boyd discusses the different aspects of the social network sites she has been researching for the past six years. She argues that social network sites can be seen as a form of networked publics: “publics that are structured by networked technologies; they are simultaneously a space and a collection of people” (42). Social network sites are networked publics as they shape and configure how people within the network act. Boyd argues that the technology behind these networked publics can structure them in such a way that they introduce their affordances without forcing them upon users (35). According to Bogost and Montfort, “the platform's influence as experienced by a user is mediated through code, the formal behaviour of the program, and the interface” (“Platform Studies” 16). A group of people can then act upon that and engage within the environment of the network, which amplifies the possibilities for interaction. From an architectural perspective13, networked publics differ from other, more traditional types of publics, as they have the ability to structure how information flows in these networks and how people interact. These affordances are able to influence how people act when they engage in social life (Boyd, 42). The content that flows through these networked publics (whether interactions between people or someone posting content) is constructed through bits14 and can thus easily be stored, distributed and searched for. It is important to note that affordances do not necessarily determine social practices, as technological determinism has set out. One can think of various features, or simply put affordances, that are offered by these sites, such as creating profiles, maintaining friend lists and communicating with others (figure 2).

Profiles
  Profiles enable individuals to represent themselves online. In addition, profiles function as a place where people start conversations and share content. Profiles therefore serve as the locus of interaction for networked technologies.

Friend list
  Having and maintaining a friend list is not simply a matter of social accounting. People often think they have chosen a list of friends to connect with: their “imagined audience”. However, this intended public is not the actual public, as various actors within the network are part of the public as well. Boyd argues: “Yet, the value of imagining the audience or public is to adjust one’s behaviour and self-presentation to fit the intended norms of that collective” (Boyd, 45).

Public communication
  As argued in relation to friend lists that are imagined as a public, comments too cannot simply be seen as a dialogue between two people. They are rather a performance of social connection that is presented to a broader audience. The social network site Instagram, for example, introduced ‘status updates’ to encourage people to share short messages.

Figure 2. Various features on social network sites and their affordances (Boyd, 45, created by Atabaki)

13 Architecture here can be understood as the digital structural difference in technology (Papacharissi, 205).

14 The term “bit” is short for binary digit and refers to the smallest unit of data in a computer (Rouse, n. pag.).


The content that is produced through interactions, creating profiles and maintaining friendships is made out of bits. This form of information packaging is simple to store, duplicate, distribute and search for. According to Boyd, four types of affordances emerge out of these properties of bits and play a significant role in configuring networked publics (46). The first concerns the storage of information: user actions are automatically recorded and archived, which means that user data can persist on the Web. The second type of affordance concerns the easy duplicability of content on social media networks; because bits are easily shared across networks, it is impossible to distinguish the copy from the original source. A third type of affordance that shapes networked publics lies in scalability: as the internet has opened up a space with new possibilities for distribution, there is a lot of content to be found, yet what is shown within these networked publics can contain data that not everyone wishes to see. The last type of affordance is searchability. With the rise of search engines, information can nowadays be easily accessed. The searchable character of networked publics is important, as it makes it possible to find people in networked publics through GPS-enabled mobile devices that are now part of everyday life and carried to almost any destination (Boyd, 49). As the content that people produce emerges from contributions to the affordances of networked publics, it can be argued that networked technology is capable of being more pervasive than other media. This relates to Lessig’s argument in Code is Law, where he discusses how code regulates the structures that emerge in cyberspace and makes it what it is (5). Thus social network sites are becoming more pervasive, and through the affordances that succeed, people act as the social network sites intend. This relates to what Lessig argues when he states that code is power (79). Placing this affordance theory within platform studies, it can be argued that a platform’s affordances are able to shape (without dictating) how people think and how they engage in social life according to the activity on these platforms (Boyd, 36; Bogost and Montfort, “Platform Studies” 13).

2.4 Actor-network Theory

This section focuses on actor-network theory, as this approach examines relations between actors by following them and looking at their associations within networks, instead of giving predetermined agency to one actor. Whereas Bijker ends the discussion in his work by stating that both technology and society have the ability to affect each other (74), Bruno Latour had already argued for such an understanding of technology and science (technoscience) in actor-network theory (Latour, “Promises of Constructivism” 29; Gane and Beer, 27). John Law argues that actor-network is not a theory, but rather an approach. Around 1982, actor-network theory, often abbreviated as ANT, was developed within Science and Technology Studies by the scholars Michel Callon, Bruno Latour and John Law (142). The approach emerged as a reaction to the strong programme, as researchers within this programme argued that scientific knowledge could be linked to social variables (Ibid., 142). Actor-network theory is a constructivist theory, but contrary to the social construction of technology and the strong programme, it holds that the production of scientific knowledge should be seen as a complex set of associations of heterogeneous elements that in itself constitutes a network (Law, “Actor Network Theory” 141): “It is as false to argue that a machine on its own has agency as it is to suggest that only humans do; rather it is the network as a whole that acts, affects and determines” (Lister et al., 338; Latour, We Have Never Been Modern 120). According to Latour, these are connections between non-social elements (Reassembling the Social 5), between both human and non-human actors (Reassembling the Social 74). As opposed to the previously discussed theories, actor-network theory is not focused on restricting agency to either human or non-human actors as such, but rather looks at their associations and thus at how their actions can be traced within a network (Latour, “On Recalling ANT” 16; Latour, Reassembling the Social 87; Bucher, 37).15 According to John Law, there are two main aspects to the actor-network theory approach. First, actor-network theory emphasizes the “semiotics of materiality”, which argues that the relational view of signs has to shift from the realm of semiotics to the analysis of everyday objects (“After ANT” 4). By doing so, actor-network theory describes how the relations between objects are formed (Law, “Actor Network Theory” 2). Furthermore, actor-network theory believes in performativity: the question of how actors, who can be both human and non-human, become what they are through their connections with other actors within a network (Ibid., 4).

Even though within actor-network theory the focus is on networks, Latour argues in his 1996 article “On Actor-network Theory: A Few Clarifications” that the term ‘network’ is not entirely apt for actor-network theory, as the word is often misunderstood. First, he argues, it is often misinterpreted as referring to a technical network.

15 Olga Amsterdamska critiques Latour’s actor-network theory, arguing that it is problematic not to make a distinction between human and non-human actors. According to Latour, an actor’s power within a network depends on the number of allies it can enrol and the relations it establishes within the network. Amsterdamska instead holds that it is more important to examine why both human and non-human actors have to be enrolled, and to focus on the specific characteristics of a given network in order to determine why some actors succeed while others fail (500-501).


An actor-network differs from a technical network, such as a train network that is final and stabilized, in the sense that an actor-network is not stable and is subject to change. The second misinterpretation concerns actor-network theory’s relation to the study of social networks. Whereas the study of social networks focuses on the social relations of human actors, actor-network theory is also interested in studying non-human actors (Latour, “On Actor-network Theory” 369). This makes actor-network theory heterogeneous, as it does not assign agency based on whether an actor within a network is human or non-human: it holds that, in the first place, both kinds of actors have the ability to act (Latour, “On Actor-network Theory” 387; Sismondo, An Introduction to Science and Technology Studies 65; Murdoch, 359). The reason the actor-network theory approach is mapped as a network is to convey that there is no hierarchy between different actors or actants, and to show merely changes in the types, numbers and topography of connections (Latour, “On Actor-network Theory” 371). Returning to the misinterpretation of the word network, Latour argues that one has to think of a network as a global entity with as many connections as possible. This corresponds with what Gane and Beer argue in their chapter “Network”. They state that Latour found value in building on Deleuze and Guattari’s idea of the rhizome16, as it represents connections that can move from any node to another without restrictions (29). Therefore Latour argues that the value of actor-network theory lies in the fact that one can trace these relations between actants in such networks (Latour, Reassembling the Social 128; Gane and Beer, 31).

What distinguishes actor-network theory from other approaches, then, is that within its framework the focus is on actants instead of actors. Contrary to an actor, an actant can be defined as someone or something that acts, or to which an activity is assigned by others (Latour, “On Actor-network Theory” 373). By using the term actant, no analytical priority is given to either the human or the non-human: “Instead of starting with entities that are already components of the world, science studies focuses on the complex and controversial nature of what it is for an actor to come into existence. The key is to define the actor by what it does – its performances – under laboratory trials” (Latour, Pandora’s Hope 303). Actor-network theory is therefore concerned with relations, which can only mean something if they are taken into account together with other, non-human factors such as materials, objects or technology (Latour, “Pragmatogonies” 792). According to this framework, artefacts, machines, humans and animals are all equal in terms of what they are, but can differ in dominance concerning their performance within the network, as they can exert influence on other actors. Thus this

16 Deleuze and Guattari define the rhizome through its ‘principles of connection and heterogeneity: any point of a rhizome can be connected to anything other, and must be’ (7).


network has to be considered complex, as dynamic lines arise from the connections that are made. Latour argues that what is most important in these networks is the work that is carried out, that is, the connections that are made (Latour, The Social as Association 83).

In his book Reassembling the Social, Latour argues that from an actor-network theory perspective actors are enrolled into networks and are followed on the basis of their abilities to draw connections, as opposed to being pushed into predetermined groups. He argues that groups are continuously formed and that the word ‘group’ on its own would be meaningless, as “meaning is given to it when action takes place and groups are formed in relation to associations” (29). Within these networked associations, actors become defined with substance and essence (Ritzer, 1), as for actor-network theory neither society nor the social exists in the first place; rather, both are shaped by the actors within the network. According to Latour these connections have the ability to explain how the social world is generated, as they leave behind traces that can be picked up in the form of visible data (Reassembling the Social 31). In addition, actor-network theory

believes that, apart from their connections, there would be no groups without group makers and group holders who constantly perform the making and remaking of groups through non-social means (Ibid., 32). Social groups are therefore not ostensive17 objects, but rather performative ones, as

the actors within these networks have the ability to move, change or even disappear when the action or performance stops (Ibid., 38). Latour argues that the means through which the social can then be produced are intermediaries and mediators. An intermediary can be counted as one whole and is defined by Latour as “what transports meaning or force without transformation: defining its inputs is enough to define its outputs” (Reassembling the Social 105), which indicates that meaning is transported without changing the actions within the network. As Latour puts it, “if some ‘social factor’ is transported through intermediaries, then everything important is in the factor, not in the intermediaries” (105). A mediator, by contrast, can count for one, for nothing, for several or for countless parts, and can translate or modify the meaning of the elements it is supposed to transport. Opposed to the intermediary, a mediator’s input is never a good predictor of its output; mediators therefore have the ability to shape the message they carry (Ibid., 39).

For actor-network theory there is no preferable type of social generator, as mediators have the ability to transform into intermediaries. According to Latour, as both intermediaries and mediators are part of the interaction process, they both have agency. However, the moment this

17 According to Latour an ostensive object is a fixed object that is defined by how it can be pointed at, as opposed to a performative object, which is defined by how it performs and moves itself. The benefit of the performative character, Latour argues, is that “it draws attention to the means necessary to ceaselessly upkeep the groups and to the key contributions made by the analysts’ own resources” (Reassembling the Social 35).
