
MA New Media and Digital Cultures

2013/2014

What you do can be used

(against you)

The quantification of qualities and the commercial

repurposing of user data through social media

Student: Serkan Yildizeli

Student number: 6233376 / 10020640

Supervisor: Dr. Carolin Gerlitz


TABLE OF CONTENTS

Abstract

1. Introduction

2. Literature Review

2.1 Surveillance studies and its relation to databases

2.1.1 ‘New’ surveillance and computational control: Providing subjects and data doubles

2.1.2 Captured: ‘Data-based’ identities

2.1.3 Data, intentions and predictions

2.2 Revaluing the Social for the sake of markets and brands

2.2.1 Late capitalism and the relation to the ‘social’

2.2.2 Valuation and ‘qualculation’ in calculative economies

2.2.3 Markets, brands and the valorisation of sociality

2.3 Quantification, measures and the role of data interoperability

2.3.1 Quantification and performativity

2.3.2 Measures and the ranking of (data) subjects

2.3.3 Measuring and valorising affective investment and sentiments

2.3.4 The role of data interoperability and APIs on the repurposing of user data

3. Methodology

4. Case Study

4.1 Power Editor: A sorting system and data governing tool for commercial objectives

4.1.1 FBX and platform updates: Intensified valorisation of post-demographic data and ‘affect’

4.2 Social.com: Monetising the intersection of Ads API data circulation and brand managing corporations

4.3 Coosto: Valorising social media user activity by means of recontextualisations through data interoperability

4.4 Klout: Voluntary self-valorisation through the accountability and recontextualisation of social data and user profiling

5. Discussion

6. Conclusion

6.1 Reflection

6.2 Further research

Bibliography


ABSTRACT

This thesis is dedicated to researching the distinctive practices through which users’ activity is tracked, recontextualised, held accountable and valorised through social media. Following an empirical platform and software studies approach, I extend my analysis to the processes and contexts in which data is refashioned. My specific interest lies in the path the data takes: moving from relatively technical data harvesting and exporting environments to spheres in which data is repurposed and introduced into marketable, commercial contexts. In order to critically analyse these practices and their implications, I pose a framework of concepts through which the social media networks and third-party platforms are analysed in the case studies of Power Editor, Social.com, Coosto and Klout. The platform (and software) studies approach also takes into account the data and information ecology and networks around the related entities. In more general terms, cases of social media advertising, API interoperability and digital marketing analytics are studied. Finally, I suggest that the digital communicative acts and platform interactions on social media converge to a point where user activity is continuously captured, recontextualised and valorised through the aforementioned actors and practices in order to generate commercial value, revenue and assets, ranging from brand equity to ROI and even financial rent.

Word Count: 22.980

Keywords: data capture, data bodies, new surveillance, data valorisation, social media, advertising, brands

1. INTRODUCTION

The emergence and mass usage of social media networks and platforms in the digital world have presented their promises and services mainly through the notion of being able to maintain, improve and generate creative work and social/affective relations with other users. The global increase in social media activity and usage has not gone unnoticed by actors such as brands, marketers, information/data brokers and the (financial) economy (Ahmad, 2013). Of course, this has also been noted by platform owners and networks such as Facebook and Twitter, which, as a result of growing stakeholder interest and power, have been constantly reshaping and redesigning their interfaces.

From a general perspective, popular social networks like Instagram (Delo, 2013), Twitter (Vascelaro, 2013), and YouTube (The YouTube Team, 2007) have been implementing increasingly extensive commercial infrastructures, providing brands, advertisers and third parties a platform on which to build and extend their commercial worlds and data gathering. Together with this, the capture and detail of user information have become even more fine-grained through the same platform features that seemingly serve users, but are in fact implemented with an underlying monetisation strategy. Using Agre’s (2003) capture model and Langlois et al.’s (2009) double articulation, it can be said that corporate social media platforms embody such articulations by ostensibly promoting unfettered communication, while also employing data processing and analysis in their back-end to transform affective and behavioural acts of interaction and communication into (economically) valuable data (Langlois and Elmer, 2013: 5-6). Technical platform updates should therefore also be seen as offering commercially interested parties market information that can be used to pinpoint and shape markets and segments.

Furthermore, in parallel with this, advertising spending on social networking sites has increased and is expected to grow even further in relative terms (eMarketer, 2014). This increase in (absolute) spending can also be seen as an increase in the valuation of social media themselves for their role in managing commercial strategies and purposes. In a sense, it could be argued that social networks have become institutionalised and validated as legitimate and effective marketing channels.

Moreover, user data is not only being evaluated by platforms but also by an ecology of third-party platforms that are increasingly built on top of other existing social networks and services (Bodle, 2011). The types of services in this scope are the more broadly oriented (big) data monitoring/listening and analytics platforms such as Coosto1 and Radian62. These services offer marketers and brands ways to make sense of big data, APIs and other algorithmic systems. These third parties often pay for API partnerships that establish data circulation and recombination. Conversely, their main sources of income are mostly fees for memberships and subscriptions to the tools and analytics, paid by brands and agencies. Through these means, brands gain the ‘conditions of possibility’ (Kant, 1998) to monitor, in a new surveillance manner (Lyon, 1992, 2001, 2003), activity and sentiment associated with their labels. These analytics systems support marketers in the sense that there is continuous tracking of the platforms and user activity, which potentially enables the prediction of ‘market needs’.

1 http://www.coosto.nl
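The monitoring-and-aggregation workflow described above can be illustrated with a minimal sketch. The post structure, the brand name “AcmePhone” and the keyword lists are invented for this example; commercial listening platforms such as Coosto rely on far more elaborate, proprietary language processing.

```python
# Hypothetical sketch of what a social media "listening" tool does with
# captured user activity: filter a stream of posts for brand mentions and
# reduce them to aggregate sentiment metrics.

POSITIVE = {"love", "great", "recommend"}
NEGATIVE = {"hate", "broken", "refund"}

def naive_sentiment(text: str) -> int:
    """Score a post +1, -1 or 0 by counting polarity keywords."""
    words = set(text.lower().split())
    return int(bool(words & POSITIVE)) - int(bool(words & NEGATIVE))

def brand_report(posts: list[dict], brand: str) -> dict:
    """Aggregate all posts mentioning `brand` into simple metrics."""
    mentions = [p for p in posts if brand.lower() in p["text"].lower()]
    scores = [naive_sentiment(p["text"]) for p in mentions]
    return {
        "mentions": len(mentions),
        "positive": sum(s > 0 for s in scores),
        "negative": sum(s < 0 for s in scores),
    }

posts = [
    {"user": "a", "text": "I love my new AcmePhone"},
    {"user": "b", "text": "my AcmePhone arrived broken"},
    {"user": "c", "text": "nice weather today"},
]
print(brand_report(posts, "AcmePhone"))  # → {'mentions': 2, 'positive': 1, 'negative': 1}
```

Even in this crude form, the sketch shows how user activity that was produced in a social context is recontextualised into market information about a brand.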

Branding and advertising are just some of the many practices that rely and act on user data generated by interaction with these platforms. A different, yet in many ways similar development besides these digital platforms and interfaces has been services that monitor and police personal online reputation, such as Klout3 and TrustCloud4. Here, the usage of and acting on users’ data is effectuated in a more evaluative and reputation-structuring way, for technically acquired and algorithmically formed social data identities and market segments are potentially used and held accountable (Garfinkel, 1967) in ways that were previously unimaginable. Social data nowadays is put into use when evaluating reputation and trustworthiness and is given meaning through various metrics and values, whereby profiles and ranks of individuals are shaped inside the platforms. These scores are mainly established through APIs offered by social media platforms, by which data - representing a combination of a user’s engagement, ‘qualities’ and reviews across sites - are recombined and restructured into new forms, measures and ranks.
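Since scoring models such as Klout’s are proprietary, only the general principle can be sketched: cross-platform engagement counts, pulled in via the platforms’ APIs, are collapsed into a single rank. The weights, the signal names and the logarithmic scaling below are all invented for the example.

```python
import math

# Hypothetical illustration of recombining cross-platform engagement data
# into one reputation rank. The weights and scaling are invented; real
# reputation services use proprietary models.

WEIGHTS = {"retweets": 3.0, "likes": 1.0, "comments": 2.0, "followers": 0.1}

def reputation_score(signals: dict[str, int]) -> int:
    """Collapse weighted engagement counts into a 0-100 score.

    A log transform dampens very large counts, so the result behaves
    like a rank rather than a raw popularity count.
    """
    raw = sum(WEIGHTS.get(k, 0.0) * v for k, v in signals.items())
    return min(100, int(10 * math.log10(1 + raw)))

# One user's activity, merged from several (hypothetical) API responses.
user = {"retweets": 40, "likes": 250, "comments": 30, "followers": 1200}
print(reputation_score(user))
```

The point of the sketch is structural: dividual data points captured in different contexts only become a ‘reputation’ once they are recombined under a single, externally imposed measure.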

As mentioned in the example above, (affective) user engagement originating from social media spheres is repurposed and brought into commercial and evaluative contexts. Moreover, the processing of individual data is in no way linked back, and the user is not necessarily notified about the multitude of ways in which their data is put into relation. All of these cases in which data is in various ways gathered, processed and valorised have one purpose in common: to generate commercial returns and revenue by mining, recombining and repurposing user data. Specific questions that originate from this scoping concern the ways in which user data is captured and commodified on multiple platforms and through various technical means. In extension of this lies the question of accountability and whether the captured data can be or is used to derive judgments and evaluations on behalf of the users that eventually affect the subjects. Another related issue is the way in which captured data is put into different contexts in order to be ‘prepared’ and valorised for commercial returns. Hence, this study will be involved with questions and issues of (user) accountability, data (self-)valorisation and recombination. In fact, I will suggest that users and people are increasingly held accountable for their digital deeds and actions (digital data profiles/identities), while not really being in a position to control the data and evaluations. Also, users are not, or barely, informed that this (real-time) process of companies sharing and exploiting data for revenue is taking place. So there is also an information asymmetry between the users and the platforms and services that manage the data and can repurpose it for revenue-generating activities, through which the data is recontextualised and commodified. Additionally, in a more general sense and through concepts such as database quantification (Mackenzie, 2012) and the Like Economy (Gerlitz and Helmond, 2013), I will be arguing that the value of data itself does not lie in single data points or specific contexts, but rather in its recombination and recontextualisation.

3 http://klout.com/home

As a result of this line of thinking and approach, the research question that will be guiding and delimiting the research and its scope is formulated as follows: “In what ways is user data tracked, captured and held accountable through social media platforms, and how does the resulting data recontextualisation enable data valorisations that potentially provide stakeholders with commercial returns?”

The relevance of this research question stems from the contested topics it encompasses and aims to study. In fact, social media usage around the globe has conquered and taken up a significant part of daily lifestyles, communications, work and socialities, attracting not only creative work but also businesses and brands.

As can be deduced from the question outlined above, the research will be concerned with determining the ways in which users and their data are tracked and recombined. In extension of this, the focus points will be the resulting data recombination and valorisation practices through which user qualities and interests are quantified and commodified. An important focus in this perspective is the critical analysis of the abstraction of users into data bodies and the data governmentality and accountability that result from this.

Gaining insight into the debate and key topics mentioned in the research question will enable the deconstruction of the monetisation strategies and business models of actors such as social media platforms and third parties. Through the findings of this research, a new set of critiques will be contributed to the existing debates, adding to the conceptions of how and in which ways data pertaining to communicative acts, expressed (abstracted) sentiments, suggestive platform interactions and abstracted online/social behaviour from social media platforms is put to use and is processed, introduced, recombined and valorised in commercial contexts.

This research scope and the associated corpus will be analysed by means of platform and software studies, supplemented by an ‘intraplatform’ approach studying the networks of commercial stakeholders of both social media platforms and third-party services, as well as the interoperating streams of data. The software focus on the tools and platforms will be complemented by the study of developer documentation and specific pieces of code and platform technicities.

Additionally, the question concisely includes the parties and actors of interest that will be featured in the case study. Through a platform studies approach, the interfaces and data/information ecosystem of the platforms, data brokers and their services will be empirically studied. This classification of the specific data-repurposing actors will allow me to deploy the different layers of the theoretical framework in order to make sense of the cases. In the following chapter, key literature and concepts will be reviewed and made into a framework.


2. LITERATURE REVIEW

This chapter comprises the literature review and provides a more detailed approach regarding the concepts and research that have already been covered and discussed in this area of interest. Through the review, analysis and combination of existing concepts, a coherent theoretical framework is constructed. Herein, different schools of thought are distinguished and reviewed in thematic sections, which will guide the narrative and objectives of the research. This way, a seamless transition from the literature review to the case study will be made.

2.1 SURVEILLANCE STUDIES AND ITS RELATION TO DATABASES

The discussion of surveillance studies in the research context is necessary because I argue that mining and valorising various forms of user data is inherent to a digital ‘new’ surveillance. From a control society and new surveillance studies standpoint, I emphasise concepts that are involved with topics such as the control and accountability of information and the governmentality based on data bodies.

2.1.1 ‘NEW’ SURVEILLANCE AND COMPUTATIONAL CONTROL: PROVIDING SUBJECTS AND DATA DOUBLES

The foundation of, and journey to formulating, the theoretical framework I want to propose starts with the ideas of Gilles Deleuze (1992) as presented in ‘Postscript on the Societies of Control’. In this essay, he moves beyond Michel Foucault’s historical understanding of ‘disciplinary societies’, where power is exercised within enclosed institutions and spaces, towards the notion of ‘societies of control’. Drawing on Foucault’s ‘Discipline and Punish’ (1979), Deleuze argues that our environment and conditions have shifted from disciplinary societies to societies of control (1992: 4). In addition, he has discussed that we are living in a generalised crisis where spaces of enclosure mold people into data ‘dividuals’ that can be easily transported, aggregated and contained in databases and banks (6).

Accordingly, in these societies of control what is important is no longer an individual’s signature or a number, but a code: a password that is compatible with the dividual’s identity contained in underlying databases. He has further suggested that control has a numerical language and that it is made of codes that “mark access to information, or reject it” (6). To highlight the dominant machines and tools in use in such a society, he has noted that the societies of control, as opposed to the machines of the disciplinary society which involved energy and dangers of entropy and sabotage, operate with machines of a third type, computers, with dangers of jamming and viruses (idem). In addition to his argument about dividualisation, he has argued that these machines of a third type are integrated in the process of tracking the person’s position, whereby a universal modulation is being effected. From this, it can be derived that there is (remote) acting upon the individual who has become a data double or dividual, whereby power and decentralised control are executed on the data. As will be analysed later, forming profiles on social media simultaneously gives away fruitful information for the recombination and repurposing of user data.

Similar to Deleuze, David Lyon (1992) has criticised and built on Foucault in his text ‘The New Surveillance’. Like Deleuze, he mentions that new technologies, based on microelectronics, make up a ‘new surveillance’. He has argued that, due to computers’ routinising and deepening capacities, these technologies alter surveillance in a way that amounts to qualitative and not just quantitative change (159). As opposed to Deleuze (1992), he has focused on practical cases and exertions of this new surveillance, arguing that it is computers and databases that automatically track the daily lives of persons (Lyon, 1992: 165). He even makes the point that personal data is bought and collected by companies for direct marketing purposes.

Finally, he has noted that computer-aided surveillance power bears panoptic traits, whereby the (physical) limitations of the panopticon and Bentham’s project are overcome due to the (virtual) decentralisation of the subject and control mechanisms (169). In fact, to him, subjects are willingly participating in their own monitoring, “by bearing the bar-codes, triggering the signals that locate and identify us to agencies and machine-systems unseen” (170).

In the same way as Deleuze and Lyon have remarked in their aforementioned works, Mark Poster (1996) has also perceived a shift to control societies, in terms of a kind of superpanopticism. Like Lyon (1992), he argues that it does not operate via external force or internalised norms but instead, in almost Deleuzian terms, through discourse and the linguistic properties of digital computation. In Poster’s sense, a database can be considered a discourse because it constitutes subjects. He proposes that databases produce subjects who are actually willing participants in their own surveillance. At the core of the superpanopticon, which is a technologically enhanced realisation of Foucault’s Panopticon, is a computerised database: a sorting machine that organises and produces subjects (1996: 179). The superpanopticon also interpellates and governs the subject through the discourse of databases, because the subject is now part of a discursive network and system, which can act on the data subject (185). The database thus produces subjects that are multiple and decentred. This superpanoptic ‘characteristic’ is clearly formulated by Lyon:

the subject is multiplied and decentred in the database, acted on by remote computers each time a record is automatically verified or checked against another, without ever referring to the individual concerned … computers become machines for producing retrievable identities (Lyon 2001: 115).

The decentring or doubling of the subject (Deleuze’s dividualising) can be understood in terms of a simulation of the lived body under surveillance. Hence, it can be stated that in the control society, individuals do not produce their databased selves, but the databased selves produce them. Besides the loss of the power of the lived body, this process also leads to an alienation between the subject and its data double, which constitutes its digital body. Potentially, this can alienate the lived body from the data body, as the data is an abstracted, ‘data-based’ representation of the subject. In this sense, this ‘dataveillance’ is not aimed and exerted at the body, but instead at the data double (Agre, 2003; Langlois et al., 2009).

According to Lyon (2003), the goal with abstract data is to plan and predict by classifying and assessing user profiles. He has called this classifying drive of contemporary surveillance ‘social sorting’. In this sense, the idea can be viewed from the notion of commercial worlds and the tracking and classification of users based on their (post-)demographic data. Lyon (2003) has addressed this briefly by stating that with database marketing, “the policing systems are symptomatic of broader trends” (15). The trend in his context is towards the attempted prediction and pre-emption of (consumer) behaviours.

The next section will take this further and provide an overview of the means and concepts by which the mining and capture of user data can be understood and analysed.

2.1.2 CAPTURED: ‘DATA-BASED’ IDENTITIES

In addition to Deleuze (1992), Lyon (1992, 2001, 2003) and Poster’s (1996) ‘extension’ and review of Foucault’s Panopticon (1979), Philip Agre (2003) has presented a useful framework for understanding the emergence and practices of the surveillance and database society. More generally, as he notes, the ‘capture model’ has manifested itself mostly in the work of information technologists. The capture model in this sense is able to explain the implications of data mining in a different and more comprehensive way than a surveillance concept might (743-744).

Additionally, Agre has noted that “whereas the surveillance model originates in the classically political sphere of state action, the capture model has deep roots in the practical application of computer systems” (744). According to Agre, and as noted earlier, the underlying mechanisms of this capture are tracking systems (i.e. web cookies and web activity tracking pixels utilised by platform owners). He remarks that these systems have a lot in common, because in each case some entity changes state and a computer internally represents those states. Certain technical and social means are provided whereby the correspondence between the representation and the reality is maintained, but without the inclusion of the social and contextual relations. In the context of digital media, he has pointed out that users of new media are able to perform a set of standardised activities, which in turn produce standardised data that is aggregated and mined through standardised interactive user interfaces (755), observable in platform functions such as Facebook’s like button and Twitter’s ‘retweet’ (RT). This in turn produces numbers and potentially makes these activities portable and reusable in various contexts. More on this notion will be discussed in the last section of the theoretical framework.
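Agre’s point that standardised interface actions produce standardised, portable data can be made concrete with a small sketch. The event schema below is invented for illustration and does not correspond to any platform’s actual data model.

```python
from dataclasses import dataclass
from collections import Counter

# A minimal sketch of a "grammar of action": every interface action
# (like, retweet, ...) is captured as one standardised record, which is
# precisely what makes the activity portable and reusable elsewhere.

@dataclass(frozen=True)
class CapturedEvent:
    user_id: str
    verb: str        # one entry from the platform's fixed action grammar
    object_id: str   # the post, page or ad the action was applied to
    timestamp: int

events = [
    CapturedEvent("u1", "like", "post42", 1000),
    CapturedEvent("u2", "like", "post42", 1005),
    CapturedEvent("u1", "retweet", "post42", 1010),
]

# Because all records share one schema, they aggregate trivially -- the
# same rows can feed an engagement counter, an ad-targeting profile or a
# reputation metric without being re-collected.
engagement = Counter(e.verb for e in events)
print(engagement)
```

The capture thus happens in the very grammar of the interface: there is no way to ‘like’ something that does not simultaneously produce a record like the ones above.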

Similarly, Felix Stalder (2012) contrasts different tracking systems and has noted that in (traditional) mass media, the limited amount of detailed data led to the targeting of only very large groups based on age and location. He has argued that this has changed online, because individuals can now be tracked in great detail, and groups of any size and characteristics can be dynamically aggregated into relevant audiences (250). Accordingly, he has written that every activity online can generate a trace that can then be gathered and compiled. As an example, he has stated that companies like Google go to great lengths to make sure that traces are generated in a manner in which they can be gathered and (‘monetisingly’) used. In fact, by providing a host of services on its own servers, as well as integrating its advertising offers and Google Analytics into ‘external’ sites (Leaver, 2013), Google is able to gather and capture user data in both web ‘spheres’.

Additionally, Agre has outlined the use of the ‘capture’ metaphor, which points to the act of acquiring (user) data as input. Still, he recognises that much of the personal information people ‘leave’ behind arises initially through the capture of activities of various kinds (757). This line of thought can be traced back to Foucault’s (1979) notion that inducing in the inmate a state of conscious and permanent visibility assures the automatic functioning of power. In the capture model, the ‘inmates’ are the subjects and the data they generate, whereby more fine-grained levels of information yield equally diverse computational processes through which data can be analysed, governed and further recombined with other activity data.

This property of the capture model, in which information goes in many directions and serves many purposes, is a notable difference from the surveillance model. Besides, through grammars of action, continuous tracking and capture systems, activity and its capture coincide and become a simultaneous process, through which captured data can be restructured for commercial purposes (Langlois et al., 2009; Langlois and Elmer, 2013). On this, Langlois and Elmer (2013) argue that the use of social media platforms results in the “patterning of communication” through so-called media objects and features (2). It is through these platform functionalities that communication and creative acts are recorded. Also, through this standardisation of the input and capture of information on social media, data recombinations and recontextualisations are made possible.


In the case of Facebook, this could be possible through the user interface and interactive features of the platform, which can be analysed by another aspect of the capture model, the structural metaphors: the captured activity is figuratively assembled from a “catalog” of parts provided as part of its institutional setting (744) and can be used for different purposes. Having discussed concepts that review the tracking and capture of subjects and information, I will now outline concepts that are more related to the recombination and commodification of digital information.

2.1.3 DATA, INTENTIONS AND PREDICTIONS

In his book The Search, John Battelle (2005) explores how search technology works and describes the power of targeted advertising as a business model. Aside from the relation to the Google search engine, my aim here is to outline the idea of the ‘database of intentions’.

Broadly, this database can be described as the place where user information and search queries are aggregated to understand intentions as well as general trends in social wants and needs (Battelle, 2005). With this concept, Battelle emphasises how the capture of user input in databases can present us with what our culture wants (2). In the search engine context, he points to the Google Zeitgeist and the millions of queries streaming into Google’s servers (4). Through this, as Battelle has argued, Google has created the ‘Database of Intentions’ in a commercial manner. Battelle has speculated on the issue that Google and companies like it might come to know what the world wants, which will undoubtedly attract commercial interest (10). Nowadays, this might well be realised and is still evolving; examples are Google Trends and Google Now5, but also Facebook advertising with its ability to use and combine internally and externally generated data. Regarding the idea of ‘data bodies’, Battelle has noted that through companies like Google, an individual’s digital identity is immortalised and can be retrieved and acted on by demand.

Related to the capture model and the database of intentions, Langlois et al. (2009) write about the paradox that major commercial Web 2.0 sites present us with: the ease of communication, discovery of interests and other benefits of the service “can only take place through agreeing to terms of service and terms of use that allow for dataveillance and the commercialisation of user-generated content through advertising”.

Consequently, the authors conclude in their critique that the evolution of Web 2.0 is about the creation of inhabitable ‘worlds’ where users and subjects can exist and extend themselves according to “technocultural logics” that shape the conditions of possibility for them and their world. In a similar way, Cheney-Lippold (2011) has argued that purchase and other behavioural data are aggregated for the sake of marketers’ insights on and interactions with individuals through the construction of ‘databases of intentions’. Herein, mathematical algorithms applied to the data assist marketers and allow them to make (commercial) sense out of these data. These efforts are then related to a better understanding of how to more effectively target services, advertisements and content. The data can be processed even further by identifying and labelling the commonalities and patterns between data. In almost Deleuzian terms, Cheney-Lippold has continued by noting that dividual pieces obtained by internet marketing surveillance networks are made intelligible and thus “constitute the digital subject through code and computer algorithms” (169).
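A toy sketch can illustrate what such an algorithmic constitution of the subject amounts to computationally. The categories, signal names and weights below are invented; real advertising systems use large statistical models rather than lookup tables, but the logic of matching dividual traces against marketer-defined profiles is the same.

```python
# A toy version of an "algorithmic identity": dividual behavioural traces
# are scored against marketer-defined category profiles, and the
# best-scoring category becomes the user's inferred identity.

CATEGORY_SIGNALS = {
    "young_parent": {"stroller_ads_clicked": 2.0, "parenting_pages": 3.0},
    "sports_fan": {"match_streams": 3.0, "sports_pages": 2.0},
}

def infer_category(traces: dict[str, int]) -> str:
    """Return the category whose signal profile best matches the traces."""
    def score(signals: dict[str, float]) -> float:
        return sum(w * traces.get(sig, 0) for sig, w in signals.items())
    return max(CATEGORY_SIGNALS, key=lambda c: score(CATEGORY_SIGNALS[c]))

user_traces = {"match_streams": 5, "sports_pages": 4, "parenting_pages": 1}
print(infer_category(user_traces))  # → sports_fan
```

Note that the subject never chooses or even sees the category: the identity is derived entirely from recombined traces, which is precisely Cheney-Lippold’s point.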

This can then be related to the concepts of ‘soft biopower’ and ‘soft biopolitics’. Soft biopower refers to the changing and fluid nature of the categories that on their own regulate and manage life; it regulates how those categories themselves are determined to define life (175). Here, the constitution of the algorithmic identity is a dynamic and continuously optimised process. As user identities become tethered to a set of statistically defined movable modules, the process of identification becomes mediated (or interpellated) by soft biopolitics exerted by institutions and corporations (178). Because of the integration of web analytics firms and their surveillance networks in the web, it can be stated that users are continuously subject to biopolitics.

The decisions made through data processing can then be criticised for their faith in digital numbers and data that ‘should’ represent living bodies. In fact, data mining allows for the identification of unforeseen or hidden relations and factors, and therefore establishes the possibility for actions predicated on the basis of topologically abstracted patterns of data (Fuller and Goffey, 2012).

In this sense, Fuller and Goffey have noted that “concentrations of data, and the information it grounds, establish new centres of gravity as they couple with sorting systems” (327). In line with this thought, it can be argued that we are moving from surveillance to capture, from narrative and a coherent body to modular parts and patterns, and from the present to the future (predictability through algorithms) (326). This shift towards predicate logic can be connected to relational databases, as they are able to govern, sort, analyse and project implicit but usable relations. Thus, not only do technocultural logics create commercial worlds, but marketers/brands and relational databases also have their share in the performativity and predicate logic of it. In order to clarify this idea, the next part is devoted to an extensive study of related concepts.

2.2 REVALUING THE SOCIAL FOR THE SAKE OF MARKETS AND BRANDS

Before providing concepts covering processes of data valorisation and commodification practices in contemporary (information) capitalism, it is epistemologically helpful to provide theoretical insights into concepts about the economic interest in the valuation, remaking and predicting of the social and ways of life. In the following section, I will look into concepts of how information, data and sociality coincide and are recombined in order to have economic value in contemporary forms of capitalism.

2.2.1 LATE CAPITALISM AND THE RELATION TO THE ‘SOCIAL’

In late capitalism, consumption, being a qualitative process (Callon, 2002), is seen as a productive practice in which the enterprise does not create its object but rather the world within which the object exists (Lazzarato 2004: 188). This line of thought is especially interesting with regard to social media platforms (Langlois et al., 2009), which provide a space and an interaction-rich interface for self-creation but make no claims to produce anything. In this respect, the affordances for users (Gaver, 1991: 79) that are built into the architecture can be seen as the means for companies to achieve preferred outcomes. These companies create conceptual spaces that actors such as users, marketers and data traders can inhabit to use and create value while remaining seemingly autonomous. This notion will be extended and become clearer throughout the case studies on such platforms. Maurizio Lazzarato (2004) continues by explaining that in current capitalism, companies invest a significant share of their turnover in marketing and advertising, thus capturing the mind, whereby investments in 'the expression machine' can even exceed the investments in 'labour' (189). In this sense, capturing a clientele's attention and desires means establishing loyalty, which is also a prominent idea in brand management and valuation, as will become clear in the following sections. Furthermore, he has argued that the machines of expression and the constitution of the sensible also act on financial markets, and that processes similar to those in advertising apply to the fixation of rates on the stock exchange (196).

Lazzarato has also made clear that surplus value is not being produced in the factory but in society and in everyday life through productive capacities of life and ‚heterogeneity of subjectivities‛ (202-203).

Thus, through Lazzarato’s (1996, 2004) conceptions about immaterial labour and ideas on today’s capitalism, it can be stated that in order to create value from worlds, relationships and collective knowledge, life (outside of the ‘factory’) is being put to work. The following section will give way to the presentation of insights and theories on the workings of economies and the valuation of affects and social relations.

2.2.2 VALUATION AND ‘QUALCULATION’ IN CALCULATIVE ECONOMIES

Moving along the perspectives on labour and the valorisation of it, Fabian Muniesa (2012) proposes a shift from talking about value(s) to talking, with a pragmatist approach, about valuation as an action, or performance of consideration and appraisal/appreciation. Referring
to John Dewey, Muniesa takes issue with the understanding of value which posits that value is a property something has by "virtue of how people consider it" (24), or a property that an object has as a result of its own condition. He explains this by stating that value is neither objective nor subjective but practical, in the sense that the situation in which a judgment of value is required is not mental but existential (25). Muniesa then writes that 'signification', or valuation, happens because interpreting a phenomenon, actor or object as valuable is a relational and active process out of which something can come to hold as the value of something (32). In addition to this, Graeber (2006) distinguishes three different yet interrelated uses of the term 'value': moral values, market value and value as meaningful difference (in relations). Values can also imply the importance, worthwhileness or meaningfulness of something, while this importance should be seen in comparative terms (439-440). In contrast, Stark (2011) notes that economic valuation ought not to be reduced to pricing and that market value and moral values are entangled (317). Still, Graeber (2006) makes a distinction between forms of value that are considered unique and incommensurable and others that may be ranked (452-453). In the last part of the literature review, I will look into this issue of measuring and ranking the (in)commensurable.

François Vatin (2013) takes a different approach to the notion of valuation and remarks that it blurs a distinction that is important for understanding economic processes: that between processes of assessment (évaluer) and processes of production (valoriser) (31). Regarding this distinction, Vatin notes that "evaluation no longer appears as a simple preliminary to valorisation" (45). To him, valorisation is present in acts of evaluation along the chain of production "in that they are provisional modalities for establishing a value that is under construction." Thus, assessment and production are continuous processes that can be adjusted as desired. This relates well to the idea presented earlier that data bodies are continuously governed and algorithmically optimised, which highlights the evaluative, accountable and judgmental properties of data processing and valorisation.

More concretely, Vatin states that we must consider the work of qualification in a dynamic way, because acts of production cannot be understood without thinking about how they insert themselves into the economic realm where the market is already calibrated,
classified and measured. Finally, it can be said that valuation is shaped by values and is productive of value.

Related to the valuation and qualification of products and services, Callon et al. (2002) outline that qualifying products and positioning goods are major concerns for agents evolving within the 'economy of qualities'. This qualification is articulated as the classification of goods that are offered to consumers in markets.

Callon et al. (2002) have also related the economy of qualities to the use of new information and communication technologies such as the web, arguing that here the logic of singularisation reaches its peak. On the internet this logic appears in a purer form, because the products offered to users on the new medium are constantly qualified (classified, evaluated and judged). In addition, the authors had already foreseen that "e-commerce companies hope to base their competitive lead on their ability constantly to observe customers making choices, linking products and showing their preferences" (210). Accordingly, these companies are able to record customers' previous purchases and judgments, such as reactions to new offers, so that suppliers gather as much information about customers as about what they want and expect.

Nowadays, this tracking, monitoring and accumulation of experiences in a database is a common practice and is closely related to the computational 'new surveillance' discussed in the first part of the literature review. This way, it also becomes apparent that the qualification of goods is strongly entangled with the ability to capture the market and its subjects, and to act on databases containing data doubles, which can include the intentions, behaviours, interests and even predictive patterns of users and profiles on platforms such as Facebook and Twitter. Additionally, Lury (2004) has viewed brands as new media objects due to their patterns of continuous interaction and adaptation that create feedback loops. In a sense, then, brands, in their constitution and performance, seem to have a (cybernetic) 'soul' (Deleuze, 1992; Lazzarato, 2011).

Building on the concept of the economy of qualities, Callon and Muniesa (2005) have examined the question of the calculability of goods. Accordingly, they describe a market as a collective device for the evaluation of goods (1230). The authors state that "this calculation is possible only if goods can be calculated by calculative agencies whose encounters are
organised and stabilised to a degree" (1245). Here, the diversity of calculative agencies is determined by their tools, whereby the authors consider the algorithmic configurations in which such agencies and goods encounter each other to be multiple and diverse. This can be further associated with the 'content and contact' strategies used to differentiate brands and qualifications (Spurgeon, 2008).

An accomplished calculation can then be obtained by isolating objects from their context, grouping them in the same frame, establishing original relations between them, classifying them and summing them up (Callon and Muniesa, 2005: 1231). Moreover, calculation can be placed on a continuum from intuition and (qualitative) judgement to algorithmic formulation (quantitative calculation). For this continuum, the intermediate term 'qualculation' is used in order to redefine the notion of calculation so that it includes judgment (Callon and Law, 2005: 717). An example of this on social media is users' judgment in interacting with a constant stream of newsfeed or timeline objects.

The co-production of singular and objectified properties requires the involvement of certain 'market professionals'. The requirement here is that designers and sellers study and know the buyers' attachments in order to be able to propose new ones. In the authors' description, consumers face a multitude of professionals "armed with computers" who study (or track) their movements and calculate margins (or ROI) (1238).

Here, Callon and Muniesa acknowledge that the utilisation of computers has changed our conception of markets. They have argued that the growth of e-commerce and the automation of financial markets have highlighted the "multiplicity of practical forms of confrontation between supply and demand" (1240).

Similar to Callon et al. (2002), they have added that algorithmic configurations in this new economy are real sociotechnical arrangements and that the 'market' is interlinked with and dependent on them for its operationality and performativity. Here, algorithmic configurations are able to identify and calculate encounters for market mechanisms. The novelty of information technologies then lies in their capacity to allow "physically distant and desynchronised" entities to constantly renew that encounter (1242). In the next section, I will move from markets and actors as being 'qualculative' to relatively more concrete concepts involving brand valuation and the valorisation of data generated in social spheres.

2.2.3 MARKETS, BRANDS AND THE VALORISATION OF SOCIALITY

After qualifying markets and products and identifying key agents in a broad sense, we can turn to practices of comparison, singularity and agency in brand management and valuation. The concepts in this field can be seen as extending the previous parts, since brands utilise platforms and track or mine user data themselves, or use intermediaries and brokers to do so. In this respect, brands require user data for their valuation purposes, whereby user data is a key 'asset'. This section constitutes a framework to study the various ways in which 'socially' rich user data is processed in different contexts in order to be valorised in contexts where brands can exert power on subjects and govern their data. In their paper, Moor and Lury (2011) study how aspects of the social world, including relationships and affects, are absorbed into the brand as values. They show that brands and the agencies that offer them services have developed ways of acknowledging (online) social and relational activities and capabilities in their valuation. Due to this, they are considered to have created distinctive calculative spaces.

Attempts such as measuring the type and intensity of sentiments associated with brands, and the degree to which brands can function as nodes within social networks, are considered in the valuation. Brands, then, have the objective of quantifying their value in order to gain competitive advantage by being singular and thus hard to compare (441).

Moor and Lury have also situated valuation in online spheres and note that, through the tracking of behaviour, the 'propensity to recommend' is elaborated as simultaneously a property of the brand (445). They argue that valuation professionals and marketers now see brand equity as residing in a wider 'information environment' and in any potential encounter with the brand, thus including the minds of consumers (448). Here, the recurrence of the 'encounter' and the interpellation of subjects can be distinguished (Callon and Muniesa, 2005; Poster, 1996). Further, it can be stated that brands and marketers also evaluate, rely and act on data doubles. On this, Arvidsson notes that:

"Brand management contains a variety of techniques that all aim at controlling, pre-structuring and monitoring what people do with brands, so that what these practices do adds to its value." (Arvidsson, 2006: 82)

It is through these techniques and means that brands are inserted into existing networks of interaction and communication. This adds social and more affective dimensions of use-value to the brand (Arvidsson, 2006: 69). Hence, as Arvidsson has critically pointed out, "the brand becomes a hyper-socialised, de-territorialised factory" (82).

In this respect, Colleoni et al. (2011) have noted that social media platforms play a twofold role: they increase the strategic importance of corporate reputation management while also rendering its management more difficult. This is mainly due to potentially more complex and diverse consumer networks and utterances (3). In fact, information and opinions can, theoretically, spread through online communities very rapidly without any control (8). This is one of the reasons why companies and brands would love to know, monitor and preferably 'steer' the market.

From a different point of view, Hugh Willmott (2010) has noted that "contemporary investment in branding is related to the financialisation of brands as intangibles that make a growing contribution to market capitalisation". Since social media platforms are brands as well, financialisation is also leveraged here, establishing yet another way in which user data is turned into commercial returns. In relation to this, Akasia (2000) had argued that because the stock market rewards brand-owning companies with high price/earnings ratios, these companies can earn huge returns on their capital and grow faster.

As outlined so far, data and its valorisation are a common phenomenon in processes of brand management, valuation and financialisation. These are but a few ways in which data recontextualisation and valorisation are conceptualised and academically documented. In the last part of the theoretical framework, I outline more digitally native forms of these processes and build on measures and API data interoperability, where social media platforms effectively decide which data is measured and is to be (re)used and recontextualised.

2.3 QUANTIFICATION, MEASURES AND THE ROLE OF DATA INTEROPERABILITY

In this section, I build on the first two parts of the theoretical body and discuss how valuation and valorisation, after extensive monitoring and capture, are accomplished technically on social media platforms. Here, I specifically focus on the role of quantification and metrics and the ways they potentially yield monetary value in information capitalism. In fact, issues of measure and value, and the relations between the two, have been studied and discussed often since the inception of sociology (Adkins and Lury, 2012).

2.3.1 QUANTIFICATION AND PERFORMATIVITY

After having discussed concepts related to user and information tracking, valuation, economic qualities and brands, it is necessary to introduce ideas around the quantification and measurement of the social, since supplementing the framework with these concepts allows me to further analyse and enhance the critical sense-making of the cases.

In line with this, Latour (2010) has pointed out that sociology has been obsessed by the goal of becoming a quantitative science without ever reaching it (1). He has also stated that sticking to the individual results in detecting qualities, while a move towards the structural and distant could gather quantities (3). For Tarde the opposite held: intimacy with the individual would yield discrete quantities, while moving away from the individual towards the aggregate would, due to the lack of proper quantitative instruments, result in a loss of quantities. Still, today's technological capabilities and the emergence of social networking sites seem to have undermined this limitation, because 'big data' can now be processed with contemporary instruments.

When social scientists study qualities they handle only one entity, and quantification begins, as Latour writes, with collections of large numbers of those entities. For Tarde, however, quantification began with the individual and was very difficult to maintain when shifting to aggregates (4). Thus, for Latour the aggregation of entities and qualities has the inevitable outcome of quantification. In the traditional view, quantification started when enough individual atoms were assembled so that a structure began to appear. According to Latour, this structure starts as a "shadowy aggregate (a double), then as a whole, and finally as a law dictating how to behave and act on specific elements" (10). This connects well to dividuals and data doubles that consist of moulded parts and data points that can be acted on through database governmentalities (Deleuze, 1992; Agre, 2003; Langlois et al., 2009).

As stated by Latour, Tarde had imagined a progressive fusion between the technologies of statistical instruments and the very physiology of perception, which can in a way be found in the foundations of control societies. Again relevant for digital tracking practices, Tarde predicted that the standardisation and development of statistics and data-gathering instruments would begin to follow the trajectory of data about the social world.

Regarding this, Latour rightly remarks that the digital medium is in fact an extension of this principle of traceability (13). He points out that it is used for aspects such as opinions, rumours and personal buying behaviour, adding that what was once only possible for scientific activity is now possible for most events, which leave digital traces and cookies that are then archived in digital databanks (idem).

Arnold Roosendaal (2012) has hinted at this by noting that tracking users over the web is valuable in the sense that it allows the profiling of users and audiences. Consequently, the revealed behaviours and interests of users can be targeted with personalised advertisements (Roosendaal, 2012: 1; Langlois and Elmer, 2013: 3). Hence, companies that earn their revenues from targeted advertising have great stakes and interest in using these surveillance-style data mining techniques.

Further, a (third-party) cookie, which is a small text file, is usually placed in the user's web browser without any direct visibility (5). Langlois and Elmer have added that besides existing data trackers like cookies, Facebook's 'like' button is increasingly embedded on webpages, allowing even richer contextual and behavioural clues about users. Due to the interconnection and entanglement of such websites and services, the externally captured data bodies can be combined with existing Facebook profiles in order to be used for revenue-generating activities like targeted advertising (Roosendaal, 2012; Gerlitz and Helmond, 2013; Langlois and Elmer, 2013).
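The tracking logic described above can be illustrated with a minimal sketch (the site names, categories and cookie ID below are invented for illustration): a single third-party cookie ID observed across unrelated publishers is enough to stitch separate browsing events into one behavioural profile against which advertisements can be matched.

```python
# A minimal, hypothetical sketch of cross-site profiling via a shared
# third-party cookie ID. Each event is (cookie_id, site, category).
from collections import defaultdict

def build_profiles(events):
    """Aggregate tracking events into per-cookie interest profiles."""
    profiles = defaultdict(lambda: defaultdict(int))
    for cookie_id, site, category in events:
        profiles[cookie_id][category] += 1  # each visit enriches the profile
    return profiles

def top_interest(profile):
    """The category a targeted ad would most likely be matched against."""
    return max(profile, key=profile.get)

# The same cookie ID ("u42") appears on three different publishers' pages,
# because each page embeds the same tracker (e.g. an ad pixel or button).
events = [
    ("u42", "newssite.example", "politics"),
    ("u42", "shop.example", "running shoes"),
    ("u42", "blog.example", "running shoes"),
]
profiles = build_profiles(events)
print(top_interest(profiles["u42"]))  # → running shoes
```

The point of the sketch is that no single site holds the full profile; the shared identifier is what makes the aggregate, and hence the targeting, possible.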

Thus, quantitative instruments that Tarde had no way of obtaining, like the "gloriometer" (Latour, 2011) for following reputation and conversations in order to understand economic transactions, are now easily accessible and are the object of many marketing analytics tools. Latour has shown that once unimaginable approaches, tools and insights for academic and commercial spheres are now possible through digital (social) media and data. Through Callon (2007), the performativity of economics (and of the markets in which such data circulates) can be reviewed. Here, performativity suggests that rather than theories being true or false, they are accepted or not accepted, and in this acceptance process there are many roles to be played by humans and non-humans alike. Still, as mentioned above, this does not simply mean that (only) beliefs matter, or that economic theories are "performed" as self-fulfilling prophecies (15). Callon has further elaborated that:

"the notion of expression is a powerful vaccination against a reductionist interpretation of performativity; a reminder that performativity is not about creating but about making happen" (22).

Concluding on this, it can be said that economics plays an important role not only in describing markets and economies but also in framing and shaping them (Callon and Muniesa, 2005), in the same way as marketers and brands try to do on a different level for brand management and valuation through user data on social media. In the next section, individual data subjects and their measurement will be discussed, while also relating ranking metrics to the aggregate level.

2.3.2 MEASURES AND THE RANKING OF (DATA) SUBJECTS

Performativity, as mentioned before, is not only an effective phenomenon within economics and the quantification of data, but also emerges within digital metrics and personal rankings. Espeland and Sauder (2007) have written about the proliferation of measures that respond to increasing demands for accountability and transparency. They have, in short, argued that measures and the resulting rankings create different forms of reactivity. This is, as the authors argue, because people "continually monitor and interpret the world and adjust their actions accordingly" (2). Performativity here is thus the altering of behaviours in reaction to being evaluated or measured (6).

Then there are also the self-fulfilling prophecies and the reinforcing standardisation of measures by ranking mechanisms, which influence the 'validity' of what is measured (Espeland and Sauder, 2007: 15). On this, Strathern (1996) had noted that "when a measure becomes a target, it ceases to be a good measure" (35). In a way, rankings seem to shift importance from quality and context to quantity and rank.

Also, suggestive rankings and the classification of users induce accountability, allowing social media platforms, institutions, (commercial) third parties and other networks of actors to evaluate these subjects. Holding users accountable on the basis of their digital activity implies that the data-governing party can recontextualise the data of abstracted user investments, evaluate the subject against distinct context-specific criteria and pass judgment, even though this evaluation takes place outside the context and environment in which the data was generated. Since the subject is continuously monitored through dataveillance on online platforms, this process is carried out without the subject being aware of her evaluation; she is thus unable to interfere with, adapt to or reflect on this evaluative process, resulting in a situation where the databased self produces the individual.

Furthermore, Espeland and Sauder (2007) discuss the transformation of qualities into quantities that share a universal or standardised metric (16). A result of this is the reduction or abstraction of complexity into simple, authoritative numbers (idem). This way, the simplification that the authors call 'commensuration' is able to produce decontextualised, depersonalised numbers. Because numbers are highly portable and easily decontextualised, they also make recontextualisation possible.

This recontextualisation then adds another layer of meaning to the abstracted number. Likewise, in Verran's (2012) argument, numbers should not be seen as objective descriptions but as enumerated entities. Derived from the notion of an encompassing, shared metric, commensuration can be seen as distinguishing and uniting objects around a common relationship to each other (Espeland and Sauder, 2007: 19).
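Commensuration in this sense can be sketched minimally (the signal types and weights below are invented for illustration): qualitatively different activities are collapsed into a single standardised score, which makes otherwise incomparable users rankable against each other.

```python
# A minimal sketch of commensuration: heterogeneous qualities (posts,
# replies, endorsements -- hypothetical signals and weights) are reduced
# to one number that a ranking mechanism can act on.

def commensurate(signals, weights):
    """Collapse a dict of qualitatively different counts into one score."""
    return sum(weights[k] * signals.get(k, 0) for k in weights)

weights = {"posts": 1.0, "replies": 2.0, "endorsements": 5.0}
users = {
    "alice": {"posts": 40, "replies": 3, "endorsements": 1},
    "bob": {"posts": 2, "replies": 1, "endorsements": 12},
}
ranking = sorted(users, key=lambda u: commensurate(users[u], weights),
                 reverse=True)
print(ranking)  # → ['bob', 'alice']
```

The choice of weights is exactly the kind of decision the text attributes to the data-governing party: it determines what counts, and the resulting number travels into new contexts stripped of that decision.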

In a different way and context, Gerlitz and Lury (2014) have discussed the case of self-evaluation on social media. The key approach here is to create relations and re-aggregations of data points that are captured by the predefined features of platforms. As a result of this (re-)aggregation, data and numbers are brought into new contexts. Through the disaggregation and re-aggregation, processed by relational databases of social media platforms, data points become decomposed and recomposed. A result of this is the creation of new relations between data points, which provide novel ways for recontextualisation and valorisation (Latour, 2010; Gerlitz and Lury, 2014).

Participation in this sense is validated through, and can be explained by, the thoughts of Adrian Mackenzie (2011) on the relational database, who states: "no-one belongs to a database as an element, but many aspects of contemporary lives are included as parts of databases" (12). Although Mackenzie does not state it explicitly, the potential for the recontextualisation of numbers and data, and ways of valorising them, can be distinguished in his argument. As mentioned before, Gerlitz and Lury (2014) argue that social media activities and interactions are also multivalent and can thus be valorised through repurposing in 'appropriate' contexts. As with enumeration, social network engagement can be turned into numbers and subsequently deployed in various contexts for different purposes (17). In line with this, Bechmann (2013) has argued that profiling can be used for personalisation in third-party apps or advertisements, but can also potentially be used to single out individuals in risk assessments by organisations and businesses ranging from government agencies to the financial industry (75).

In this sense, commercial organisations increasingly and continuously produce profiles by aggregating and analysing information and social media data such as likes, behaviour and tastes, which are in turn repurposed for a range of (monetisation) purposes (Adkins and Lury, 2012: 1).

Moving along the lines of measures exerted on users participating on platforms such as Klout, and the resulting accountability of the data body, Harold Garfinkel's (1967) ethnomethodology provides a useful framework for understanding this way of tracking, capturing and profiling/ranking user activities. As Garfinkel has stated, ethnomethodology's "central recommendation is that the activities whereby members produce and manage settings of organised everyday affairs are identical with members' procedures for making those settings account-able" (Garfinkel 1967: 1). Here, account-able refers to being able to give an account of or for something, and to being (held) accountable.

In the context of research (and data gathering), he relates ethnomethodology to the rational properties of indexical expressions of the organised, artful practices of everyday life (11). Since social media try to mediate aspects of everyday life such as friendship, relations and communicative expressions, ethnomethodology can contribute significantly to understanding social media user accountability. The definition implies that the indexed record of users' artful (creative) activities can potentially be used to deduce judgement and hold the actual subject accountable for her captured (digital) deeds.

This is precisely what Tristan Thielmann has done: relating Garfinkel's contribution to the theorisation of the accountability constituted on social media (Oosthuyzen, 2013). Accordingly, Thielmann has argued that Garfinkel's socio-techniques from the 1950s remain relevant for exploring social media. Since, as Thielmann has stated, social networks are constructed around gathering personal data and self-documentation, they are characterised by increased accountability. As self-documentation in the form of networks of subjects and objects, brand preferences and demographic information enters the digitally public spheres and contexts of social media, predictability increases and supports the predicate logic (Fuller and Goffey, 2012) used to valorise user information.

Continuing on this repurposing and recontextualising of data, Mackenzie (2011) argues that shifts in data aggregation methods and techniques blur the boundaries between distinct commercial, scientific, technological and regulatory domains. In this sense, vertical aggregates superimpose economic, state, political and cultural processes in shifting consortia of actors at local as well as international levels.

Further, Mackenzie notes the emergence of relational database management systems (RDBMS) as the standard form of the database (6). Accordingly, the SELECT statement operating across relational databases can be seen today in Web 2.0 services and on smartphones, as they generate 'like' suggestions and connections or manage large aggregate identities and groups (9). As argued by Mackenzie, the relational database takes openness as its principle of operation, meaning openness to the insertion of new elements and thus to the derivation of new relations. This openness allows databases to aggregate and recontextualise on large scales (11). In the following section, concepts on affective investments, their aggregation in databases and their commercial repurposing will be reviewed.
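Mackenzie's observation about the SELECT statement deriving new relations can be sketched with a toy sqlite3 example (the schema and data are invented for illustration): the query produces a 'like' suggestion, a relation that is stored in no single row but derived by joining users through their shared likes.

```python
# A minimal, hypothetical sketch of SELECT deriving a new relation:
# pages 'ann' has not liked are suggested because they co-occur with her
# likes in other users' rows of the same table.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE likes (user TEXT, page TEXT);
    INSERT INTO likes VALUES
        ('ann', 'BrandX'), ('ann', 'BandY'),
        ('bob', 'BrandX'), ('bob', 'ClubZ');
""")
rows = con.execute("""
    SELECT DISTINCT candidate.page
    FROM likes AS mine
    JOIN likes AS peer
      ON peer.page = mine.page AND peer.user != mine.user
    JOIN likes AS candidate
      ON candidate.user = peer.user
    WHERE mine.user = 'ann'
      AND candidate.page NOT IN
          (SELECT page FROM likes WHERE user = 'ann')
""").fetchall()
print(rows)  # → [('ClubZ',)]
```

This is the 'openness' the passage describes: inserting one new row into `likes` can immediately change which suggestions the same SELECT derives, without any change to the query itself.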

2.3.3 MEASURING AND VALORISING AFFECTIVE INVESTMENT AND SENTIMENTS

Closely related to the ideas of the simplification of affections and feelings and the enumeration of qualculative interactions are the ways in which affective judgements are measured, commoditised and valorised. Here, Jenkins's (2006) notion of 'affective economics', as outlined in Convergence Culture, contributes to the interpretation and critical elaboration of the monetisation of feelings. The notion of "the enhanced ability of consumers to participate in the process", outlined by Mark Andrejevic (2011), is directly relatable to the aforementioned phenomena and features that track, mine and capture new types of (human) interactions and evaluations. This can be related to Jenkins's (2006) idea of 'emotional capital', which Andrejevic builds on (606).

Andrejevic views this "circulating, undifferentiated emotion" as an exploitable resource that is part of the 'infrastructure' of this economy (608). The goal of this affective economy, then, is to structure, govern and manipulate affect, whereby it is transformed and abstracted into discrete, calculable emotive units. It is thus more about "probing the emotional pulse of the Internet" than about gauging personal human emotion (610).

These anticipative analytics of consumer behaviour using big data open ways for database governmentality. In fact, as Kosinski et al. (2013) have argued in their quantitative study, patterns and analyses of Facebook liking behaviour make it possible to reveal user attributes such as sexual orientation, ethnicity and personality traits. As discussed before with the feedback loops of brands (Lury, 2004), these strategies of detailed tracking can be applied to modulate audience feedback, volume and circulation by intervening actively and adjusting in real time to alter the information/economic landscape and its qualities (Andrejevic, 2011; Callon et al., 2002). These practices have both pre-emptive and productive attributes, in the sense that they can manage threats while at the same time adapting to 'demand' and maximising sales (Andrejevic, 2011; Callon et al., 2002). The utilisation of this in shaping decision-making strategies is made feasible because "pre-emption brings the future into the present" (Andrejevic, 2011: 614).
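The kind of inference Kosinski et al. point to can be sketched with toy data (the labels and like-sets below are invented; the actual study fitted regression models over millions of like records): an unlabelled user's attribute is guessed from the overlap between her likes and those of users whose attribute is already known.

```python
# A minimal, hypothetical sketch of attribute inference from likes:
# the predicted label is the one whose known users' like-sets overlap
# most with the target user's likes.

def predict(likes, labelled):
    """Return the label with the largest total like-overlap."""
    scores = {
        label: sum(len(likes & user_likes) for user_likes in users)
        for label, users in labelled.items()
    }
    return max(scores, key=scores.get)

labelled = {
    "extravert": [{"festivals", "team sports"}, {"festivals", "parties"}],
    "introvert": [{"chess", "poetry"}, {"poetry", "libraries"}],
}
print(predict({"poetry", "festivals", "libraries"}, labelled))
# → introvert
```

Even this crude overlap score illustrates the governmentality at stake: the subject never declares the attribute, yet the database derives it from behavioural traces.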

Concluding, Andrejevic (2011) states that the growing volume of expressed emotions and of tracked and aggregated behaviour will only feed predictive systems, allowing marketers to refine ways to 'fix' affect so that it translates into greater commercial earnings. Feelings thus become tied to monetisation in an affective economy fuelled by emotive units of data.

Thus, while social media platforms increase the empowerment of stakeholders, they also capture and provide a large-scale source of information about the feelings and sentiments of people, who (un)willingly allow technical infrastructures to measure and monitor reputation through the analysis of user-generated content (9-13).

In 'General Sentiment', Arvidsson (2011) states that "the corporate economy itself has opened up to the inclusion of such diverse orders of worth by means of the calculative devices that it deploys to determine value" (41). This is, he argues, made viable through developments in the objectification, abstraction and measurement of affect. As a result, affect has entered into the very calculative devices and conditions by which economic values are assessed and set. Individual affective investments made through digital interactions have thus become manifestations of an abstract general equivalent, which he calls 'general sentiment'.

Moreover, the monitored and captured public and measurable affect can be seen as a commodity and asset for valuation and monetisation. This affect can be represented independently of the source and context to which it is linked; actual actions and feelings become abstracted into data points (Arvidsson, 2011; Lyon, 2001, 2003). As indicated before, social media add to the process of making affect public and turning it into commodities. Affective relations and investments become tangible through natural language processing (NLP) techniques such as sentiment analysis, wherein the affective valence of words is recognised automatically (Arvidsson, 2011: 51). The focus here is on the affective proximity of words, derived algorithmically, mainly from big data sets aggregated from social media platforms. The criterion of 'distance' is able to generate a measure of this general sentiment that is independent of the particular ideas and representations that might ground individual value judgements. This creates a "universal, if temporary, scale of positivity and negativity" which can be used for commercial purposes like brand valuation (53).
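The mechanics of such a valence scale can be sketched minimally. The lexicon below is hypothetical (real systems such as those Arvidsson describes derive word valences from large social media corpora); the sketch only shows how heterogeneous utterances are collapsed onto a single positivity/negativity scale:

```python
# Hypothetical mini-lexicon mapping words to affective valence on a
# -1 (negative) to +1 (positive) scale; the entries are illustrative.
VALENCE = {"love": 0.9, "great": 0.7, "good": 0.5,
           "bad": -0.5, "awful": -0.7, "hate": -0.9}

def sentiment_score(text):
    """Average valence of recognised words; 0.0 when none match.
    The output lives on a universal -1..+1 scale, independent of
    the particular context in which the words were uttered."""
    words = text.lower().split()
    hits = [VALENCE[w] for w in words if w in VALENCE]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("i love this brand it is great"))
```

Note how the score strips away everything qualitative about the utterance (who spoke, about what, in which situation): this is exactly the abstraction of affect into a general equivalent that the passage above describes.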

Moreover, the 'trend' of monetising online spaces can be related to what Gerlitz and Helmond (2013) have called the 'Like economy'. In the Like economy, the main determinant of value is direct forms of user engagement, made quantifiable through the proliferation of social buttons such as Facebook's Like and Share buttons (4-5).
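How such button-driven engagement becomes a single comparable number can be sketched as follows. The weights and field names are assumptions for illustration, not Facebook's actual valuation formula; the point is only that qualitatively different gestures (liking, sharing, commenting) are collapsed into one metric:

```python
# Illustrative weights: a share is assumed to count for more than a
# like because it propagates content further. These are not real values.
WEIGHTS = {"likes": 1.0, "comments": 2.0, "shares": 3.0}

def engagement_value(counts):
    """Collapse heterogeneous user engagement counts into one number,
    the form in which the Like economy renders sociality comparable."""
    return sum(WEIGHTS[kind] * counts.get(kind, 0) for kind in WEIGHTS)

post = {"likes": 120, "comments": 25, "shares": 10}
print(engagement_value(post))
```

Once engagement is expressed this way, posts, pages and brands become directly rankable against one another, which is what makes the metric commercially actionable.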

In relation to Facebook's far-reaching tracking systems, Roosendaal (2012) has outlined that cookies and web user interests are of commercial and monetising importance in scenarios such as users already having a Facebook account, users without an account, and users becoming or withdrawing from being a Facebook user (2012: 6; 2010). In this respect, the main purpose of cookies is recognition, which potentially enables interpellation (Poster, 1996) and valorisation in Facebook's monetisation strategy.

Contributing to theories on the control society and to Garfinkel, Roosendaal (2012) has written that when user data from different roles and contexts are collected and combined by a party like Facebook, the individual will no longer be able to "keep roles and contexts separated" (12). The individual is thus restricted in the ability to construct an individual identity of his own and gets 'taken over' by the platform and the data-constructed self (idem).

Hence, the individual user's autonomy is affected by the tracking technologies deployed by social networking platforms, while his accountability increases through the multivariate data points constituting the data body that is evaluated. It can thus be concluded that the individual loses control over his aggregated and recombined personal data and identity.

2.3.4 THE ROLE OF DATA INTEROPERABILITY AND APIS ON THE REPURPOSING OF USER DATA

Not only are user investments involved in the process of data monetisation, but also technicity: data-circulating APIs, content-structuring algorithms, platform plugins and interrelated third-party actors. From this approach there appears to be an established ecology of agents, whereby value making is increasingly decentralised among them. This section outlines the roles of APIs and data
