
Who assembles the ‘future of work’? Mapping actors and their relations in U.S. American web debates on job automation


Academic year: 2021



Who assembles the ‘future of work’? Mapping actors and their relations in U.S. American web debates on job automation

Paul Dunshirn (12287997)
Research Master Social Sciences
Graduate School of Social Sciences, University of Amsterdam

Supervisor: Prof. Dr. Marieke de Goede
Second Reader: Assoc. Prof. Eelke Heemskerk

17th August 2020, Vienna

Abstract

Scholars from the field of science and technology studies (STS) tend to criticize public debate about the ‘future of work’ for being overly sensationalist, deterministic or unrealistically utopian/dystopian. While this critique may be accurate in certain cases, the aim of this thesis is to investigate how important these sensationalist discourses actually are within the overall public debate on automation in the United States. For that purpose, I conduct an inductive ‘mapping’ of these debates, concentrating on concrete actors that formulate sensationalist and other types of discourses in relation to the issue of job automation. This issue mapping exercise is based on a self-collected hyperlink and webpage data set. Methodologically, it relies on a combination of network analysis and qualitative close-reading of linking practices and webpage contents. As this analysis reveals, public web debates about automation are not centrally driven by sensationalist discourses. Instead, debates are shaped by a variety of discourses on automation (some more and some less polarized), and organized in terms of social relations amongst institutions or clusters around concrete sub-issues.


Contents

1. Questioning automation sensationalism by empirically ‘mapping out’ public debate
2. Theoretical framework: The politics of assembling job automation on the web
2.1. Conceptualizing actorness with ANT
2.2. The politics of issue formatting
2.3. Combining ANT and network analysis
3. Data and methodology
3.1. Data collection and preparation
3.2. Identifying network patterns using community detection
3.3. Analyzing community assembling practices
4. Analysis of actors and debate structure of the full network
4.1. Actors
4.2. Debate structure and communities
5. Case study 1: Job automation as inevitable technological progress in the ‘Singularity community’
5.1. Hyperlink practices in the ‘Singularity community’
5.2. The content of the Singularity community
5.3. Relations to the rest of the network
6. Case study 2: The politics of automation in the ‘Warehouse community’
6.1. Hyperlink practices in the ‘Warehouse community’
6.2. The content of the Warehouse community
6.3. Relations to the overall structure of the network
7. Reflections
7.1. Findings regarding the topic of job automation
7.2. Reflections on the research approach


1. Questioning automation sensationalism by empirically ‘mapping out’ public debate

Many academic and non-academic commentators criticize public debates about job automation for their high levels of sensationalism and polarization, as well as for misunderstanding or overblowing empirical research on the topic. Carl Benedikt Frey and Michael A. Osborne’s intensively discussed working paper (Frey & Osborne 2013) is a good case in point. The paper sparked public debate particularly for its estimation that 47% of US jobs are at ‘high risk’ of being displaced by automation over the next decade or two. For instance, Bloomberg News took up these findings and published a page titled ‘Find Out If Your Job Will Be Automated’1. On this page, readers are invited to search for particular jobs and find out ‘the probability of your job being automated’. While the page indicates that its information is based on Frey and Osborne’s estimations, the two researchers never actually estimated such probabilities2. Instead, Frey and Osborne estimated the technological capabilities needed to automate a job and assessed, on this basis, how vulnerable certain professions are compared to others. If a job falls into the estimated ‘high risk’ category, it only means that this job is among the most vulnerable in comparison to other jobs; it says nothing about how likely it actually is that the job is going to be displaced. As the authors emphasize, the probability of automation is a different question that depends on many other factors such as regulation, politics, and social pressure. These factors were beyond the scope of their analysis: ‘We make no attempt to estimate the number of jobs that will actually be automated and focus on potential job automatability’ (Frey & Osborne 2013:44).

These fine but important differences in the interpretation of this and similar research often get lost or confused in public debate. In some cases, such as the Bloomberg page, empirical research gets re-packaged into dystopian imaginaries about a future in which about 47% of all jobs in the U.S. are probably going to be automated. In this way, non-deterministic estimations of automation susceptibility are transformed into highly deterministic and ‘revolutionary’ tropes about how computers and automation are impacting existing jobs. Misinterpretations of research and sensationalism occur not only in news reporting, but also in widely read book publications on the topic3. Silicon Valley technophiles and corporate actors, such as the tech company IBM, also contribute to sensationalist public debate, yet in utopian rather than dystopian terms (Boenig-Liptsin & Hurlbut 2016).

This apparent polarization between utopian and dystopian perspectives has led anthropological researchers to criticize public discourse for misrepresenting the complex process of technology-society interaction and for stoking unrealistic narratives about a looming ‘computer revolution’ (Hakken & Andrews 1993; Collins 2018). STS scholar Judy Wajcman voices a similar critique, arguing that these discourses create overly deterministic imaginaries of technological progress, thereby distracting from the fact that the outcome of job automation is essentially a question of the distribution of societal power and of influential actors’ ability to ‘define the future of work’ in this context (Wajcman 2017).

1 See Whitehouse & Rojanasakul (2017, July 7). Find Out If Your Job Will Be Automated. Bloomberg. https://www.bloomberg.com/graphics/2017-job-risk/ [all webpages referenced in this thesis were accessed between June and August 2020].

2 For an article discussing the misinterpretation of Frey and Osborne’s estimations, see Schumpeter (2019, June 27). Will a robot really take your job? The Economist. https://www.economist.com/business/2019/06/27/will-a-robot-really-take-your-job.

While these critiques of public debates are certainly not unreasonable, my thesis seeks to establish an alternative argument by drawing on STS scholar Sally Wyatt’s particular perspective on the issue of technological sensationalism. Wyatt has pointed out that simply criticizing sensationalist discourses for being sensationalist has not succeeded in de-legitimizing them (Wyatt 2008). According to her, it is time for researchers to inquire into why these sensationalist discourses have been around for such a long time, who the actors are that articulate them, and what functions they serve within general debates about particular technologies (or automation in my case). Inspired by this argument, the aim of this thesis is to abstain from a critique of the truth value of sensationalist discourses (e.g. of whether they provide accurate descriptions of technology-society interactions), and instead to inductively and empirically explore the actors and the social dynamics that together constitute public debate about automation as a whole. As such, the research questions of this thesis are:

RQ1: Who are the influential actors that structure U.S. American public web debates about automation?

RQ2: What is the structure of U.S. American public web debates about automation?

RQ3: What social dynamics drive the formation of this structure?

This approach differs from the previously described critiques in that it does not take the importance of notions such as polarization or sensationalism for granted, but seeks to explore public debate as a whole in order to assess the relative importance of utopian, dystopian, and other kinds of discourses in the debate. It does so by following a research approach known as ‘issue mapping’ (Rogers et al. 2015; Marres & Rogers 2005). Issue mapping is a research program that guides researchers to inductively explore the general structure of a public debate and the involved actors, rather than limiting the scope of analysis beforehand to presumably important discursive instances (such as particular utopian or dystopian discourses).

An issue mapping approach to public debate about automation

Issue mapping is an approach that originated in the fields of media studies and STS at the end of the 1990s, when researchers started investigating how public debates and techno-scientific controversies played out on the Internet – mostly in the context of environmental or health-related matters. Due to the dominant role of hyperlinks in guiding web surfers and organizing content on the web at that time, these researchers started considering links not only as reflecting the overall structure of public debate, but also as mediators of social and discursive relations between influential web actors (Rogers & Marres 2000; see also Park 2003). In this sense, issue mapping researchers trace hyperlink relations empirically, thereby systematically discovering and visually ‘mapping out’ the issues, the actors, and the relations between them.


It needs to be clear that the combination of issue mapping and hyperlink data is just one of many possible ways to study public debate. Identifying actors and their practices of articulating discourses about automation can alternatively be done, for instance, by analyzing articles from a pre-defined set of news organizations. However, the use of hyperlink data entails several advantages over this more conventional approach. One key advantage of hyperlinks is that they naturally extend beyond geographically and institutionally defined settings. So, instead of studying e.g. the discourse in ‘the 5 most widely read U.S. American newspapers’, hyperlinks allow researchers to trace discursive associations both within the U.S. and between the U.S. and other geographical settings. For instance, my corpus features web actors that are not strictly U.S. American, such as the World Economic Forum, in their role of articulating issues about automation in U.S. public debate. Furthermore, hyperlinks display discursive associations not only for news organizations, but also for other types of actors, such as think tanks, public institutions, or international organizations. These are important advantages for the purpose of the open-ended and inductive exploration of public debate and its actors that I intend to pursue in this thesis.

Besides the advantages of hyperlink data in pluralizing geographical contexts and the scope of actor types under consideration, hyperlink data opens up research on public debate to practical techniques of data collection and analysis. Hyperlink crawling, which I use for data collection in this thesis, is a practical and time-efficient technique for collecting mid- to large-sized corpora of webpages for analysis. In addition, the hyperlinked web is naturally organized in a network format (as implied in the term world wide web), which facilitates productive methodological and conceptual combinations of discursive research and network analysis. For instance, I use in-degree counts of nodes (i.e. webpages) to infer the influence of actors who participate in public debate on job automation on the web. I also use network analysis to identify structural patterns in the debate by applying a community detection algorithm to the network.
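To illustrate the in-degree logic described above, a minimal sketch follows. The domains and link pairs are purely illustrative stand-ins, not the thesis corpus:

```python
from collections import Counter

# Toy hyperlink data: (source page, target page) pairs. The domains
# are made up for illustration only.
links = [
    ("blog-a.example", "newspaper.example"),
    ("blog-b.example", "newspaper.example"),
    ("blog-a.example", "thinktank.example"),
    ("blog-b.example", "thinktank.example"),
    ("newspaper.example", "thinktank.example"),
]

# In-degree = number of incoming links per page, read here as a rough
# proxy for an actor's influence within the debate network.
in_degree = Counter(target for _, target in links)
ranking = in_degree.most_common()
print(ranking)  # thinktank.example leads with 3 in-links
```

The same count can of course be computed by any network analysis package; the point is only that ‘influence’ here reduces to counting incoming references.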

A third reason for using hyperlink data is the increasing difficulty of using platforms like Twitter or Facebook for research. While data sources from social media or apps have received a lot of attention from discourse researchers in recent years (Rogers et al. 2015), these platforms’ current attempts to control and set their own conditions for research have led digital methods and communication researchers to argue that hyperlinks and scraping techniques are likely to regain some of their importance in studying digital phenomena (Venturini & Rogers 2019; Freelon 2018). In this context, my research can be understood as an exploration of the advantages and disadvantages of using hyperlink data for independent and critical forms of discourse analysis. I will return to this point in the concluding chapter. For now, I move on to describe the chapter structure of the thesis.

Chapter structure

I discuss issue mapping in more detail in the upcoming chapter 2, explaining its theoretical grounding in principles of actor-network-theory (ANT) and how this approach allows me to study the politics of assembling automation on the web. Chapter 3 explains the operationalization of this issue mapping approach concretely by outlining the data collection and preparation procedure, as well as by introducing the methodological combination of network analysis and qualitative close-reading.


I present the findings of this research in chapters 4-6. To begin with, chapter 4 provides an analysis of the entire network. It first responds to RQ1 by identifying the debate’s influential actors in terms of in-degree counts and by describing their institutional types (section 4.1.). It then responds to RQ2 by describing the general structure of the observed web debates (section 4.2.). This structural analysis is based on a systematic community detection procedure (Blondel et al. 2008) through which I gradually determine a meaningful number of network partitions. The results indicate that different network formations exist within the observed debates, some apparently driven by ‘social’ dynamics, while others are more related to a shared concern about particular thematic issues.

This observation prepares the ground for chapters 5 and 6, which present one case study for each of these two types of network formation dynamics. Both case studies follow the same structure. At the beginning of each case, I visualize the community network and interpret the hyperlink practices through which actors have assembled it into its current form. I then analyze the community’s framing of job automation by close-reading the content and the hyperlinks of the domains of the 15 highest-degree actors (i.e. actors with the highest sum of in-links and out-links). Towards the end of each case study, I analyze the investigated community’s structural relation to the rest of the network and the function it serves within it. Overall, these two case studies respond to RQ3 about the social dynamics driving public debates about automation. Besides reflecting the assumed difference between issue-driven and socially driven network formation, the case studies also highlight a lack of interest from the general debate network in the polarized discourses that are central to the investigated communities. Finally, chapter 7 gathers the insights from the three research questions and provides a discussion of the methodological and thematic insights of this thesis.

2. Theoretical framework: The politics of assembling job automation on the web

The broad theoretical approach of this thesis is inspired by work in the field of critical discourse analysis, which assumes reality to be shaped by discourse in one way or another. For instance, Fowler (1991) has argued that discourse should be understood as a technology of governance through which actors exert power and establish consensus in relation to individual consumers of discourse. Similarly, DiMaggio et al. (2013) argue that media re-present conflicts and alliances between different political and public elites, thereby shaping the views of the reading public. My own work builds on these assumptions, but takes a specific issue mapping approach that focuses on the discursive as well as on the socio-material form in which actors engage with one another. According to this understanding, not only discourses matter in establishing what can be legitimately thought and acted upon, but also social relations and their material manifestation are involved in how these discourses emerge, are negotiated, and achieve their particular effects in co-producing reality.

In this chapter, I outline the theoretical foundations of this approach. In the first section, I discuss some of the principles of ANT and how they relate to this concern about the form of interaction between actors. Then, I move on to specify how these principles shape my own issue mapping approach, and how this facilitates an analysis of the politics of assembling automation in public debate.

2.1. Conceptualizing actorness with ANT

ANT is a prominent research approach in the field of STS. Back in its early days in the 1980s, researchers started studying social practices involved in constructing scientific facts with close attention to the diversity of involved actors, such as humans, objects, concepts, and geographical settings (e.g. Latour 1987). Throughout the years, this research approach has developed into a widely applicable framework that seeks to challenge taken-for-granted concepts or truths by studying how domains (such as science or technology) get ‘assembled’ as objective, technological, or autonomous in the first place. In this sense, ANT entails an agenda to limit ontological assumptions about a domain as much as possible in order to study inductively how statuses are being established by concrete actors in and across particular settings (Latour 2005).

The topic of automation of labor lends itself nicely to such an ANT approach, especially considering the already mentioned sensationalist discourses that often frame automation as an autonomous process impinging on society in pre-determined manners, either with desirable or undesirable consequences. In contrast to these sensationalizing accounts, an ANT perspective on automation encourages us to focus on concrete actors and their practices of framing automation in sensationalist (or other) terms. According to this understanding, actors who participate in public debate engage in processes of negotiating and constructing imaginaries about automation, thereby assembling forms of governance and influence in relation to media consumers.

These negotiations between actors in assembling automation publicly may best be understood in terms of questions about which entities ‘act’ upon other entities, and to what effect. In other words, the politics of assembling automation are about who influences whom in constructing a narrative or discourse about automation. According to ANT principles, these actors should not be considered as acting individually and autonomously. Rather, their ‘actorness’ is constituted by the sum of their interactions with other actors. This means that an actor influences another if it is recognized as such in the other’s account – for instance, if the second actor cites the first actor in support of an argument. The more citations an actor registers, the more ‘stable’ or ‘institutionalized’ its actorness is (Latour et al. 2012). Actors with a high level of citations (i.e. in-link counts) can thus be considered influential, in the sense that they function as central points of reference in a public debate. In the context of my investigated set of hyperlink data, this relational conceptualization of actorness or influence is not an objective measure of a webpage’s general influence on the web. Rather, it has to be understood as a context-specific indicator of actorness within my particular curated debate network. This indicator of actorness gradually emerges through the iterative process of data collection (more on this procedure in the methodology section).

This conceptualization of relational actorness guides my approach of identifying and analyzing actors at different levels of influence in the observed debate. In the next section, I explain how the notion of issue mapping connects to this conceptual approach.


2.2. The politics of issue formatting

Issue mapping draws on the ANT principle of following actors in their practices of engaging or associating to one another in the process of assembling reality. In this context, the term ‘issue’ can be understood as the ‘thing’ that triggers these actors to interact with one another4. Noortje Marres, who was centrally involved in the development of issue mapping as a research program, has theorized this form of ‘thing politics’ in reference to literature on participatory democracy (Marres 2007). She argues that issues are a crucial dimension of democratic politics, as they offer citizens an occasion for articulating interests towards decision makers. In these terms, questions of who participates in the articulation of a public issue and to what effect are understood as crucial factors to the quality of democratic participation in or across political systems. In this sense, issue mapping provides theoretical resources and an empirical research program that connects ANT principles to an explicit interest in the politics of public involvement and questions about the democratic quality thereof.

Crucial to this approach is an analysis of how public debate is organized or formatted in the first place. As Noortje Marres and Richard Rogers (2000; 2005) found out, studying public debate on the web puts into question the very notion of ‘public debate’. Indeed, in the cases that Marres and Rogers studied at the beginning of the 2000s, web debates between actors were organized in terms of various hyperlinked clusters of webpages that link a lot internally, but rarely to one another. In this sense, public involvement is not very well characterized in terms of a ‘big public debate’, but rather as various sets of actors discussing an issue in more or less connected manners. Indeed, Marres and Rogers observed that these sets of actors, which they call ‘issue networks’, take various shapes and displayed a variety of social dynamics in articulating an issue. For instance, the authors studied issue networks about ‘Ferghana Valley’ (Marres & Rogers 2005), a valley in Central Asia that has received a lot of international attention for issues such as drug trafficking, ethnic tensions, and Islamic fundamentalism. Using hyperlink analysis, they identified various types of networks around the ‘Ferghana Valley’, including national media networks, governmental networks, and a network of international NGOs. These networks differed considerably in their attention to the various issues that affected the region, in their framing of the issues, as well as in their inclusion or exclusion of actors in the debate. Overall, the insights from this issue mapping project were very much shaped by the fact that Central Asian countries were under semi-dictatorial rule, which considerably restricted the involvement of particular actors like local NGOs and the overall access to information. Other issue mapping projects have investigated topics such as climate change or racism in social media (Rogers & Marres 2000; Matamoros-Fernández 2017). These cases often reflect much more civil society engagement – and thus an entirely different form of issue articulation.

Crucial to the analysis of issue networks and their politics is thus an investigation of the dynamics that drive network formation. Indeed, as Rogers and Marres emphasize, sets of actors that can be identified on the web do not necessarily take the form of issue networks. Networks of hyperlinked actors often share a geographical location, funding, or political leaning, to name only a few of the dynamics that constitute networks on the web. For such a network to qualify as an issue network, its actors have to actually share concerns for a particular topic and be engaged in a process of ‘publicizing’ it – in assembling a topic as a political issue in this sense (Marres & Rogers 2005). If a network cluster does not reflect such a shared affiliation to a particular topic or issue, the observed pattern is better understood as a mere ‘social network’ rather than an ‘issue network’.

As these elaborations indicate, mapping out the process of articulating issues allows researchers to investigate the patterns and dynamics of inclusion or exclusion of actors in a debate network. This actor-centered perspective encourages us to stop thinking deterministically about the future of work and instead to scrutinize the ways in which actors with different levels of influence ‘assemble’ and negotiate the meaning of job automation in terms of various more or less connected hyperlink clusters on the web.

2.3. Combining ANT and network analysis

Issue mapping entails an idea of combining ANT with network thinking. It is important to point out that this is not an unproblematic combination. Indeed, issue mappers often abstain from taking advantage of network analytical techniques, as they are considered to be somewhat incompatible with some of the principles of ANT. For instance, in most applications of social network analysis, network edges are considered to represent more or less stable relations between entities. In contrast, a lot of ANT accounts seek to problematize the stability of such relations, to highlight the processes through which they came into being in the first place.

Nevertheless, some ANT researchers have defended the combination of ANT and network analysis. For instance, Venturini et al. argue that network analysis can be a simplistic yet useful tool for representing the results of an ANT analysis in a visually comprehensible form: like geographical maps, network visualizations do not look ‘like’ the territories they represent, but they can still be useful for informing us about certain relevant aspects of the represented territory (Venturini et al. 2018:11). This argument certainly inspires my approach in this thesis, but I intend to go beyond the mere use of network visualization. By applying a network community detection algorithm (Blondel et al. 2008) to the traced associations between actors, I test whether such advanced network analytical techniques are capable of contributing to ANT-inspired research. As I describe in section 3.2., I consider the idea of detecting and investigating structurally existing communities to be nicely compatible with the inductive issue mapping research agenda outlined above.
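As a minimal sketch of what such a community detection step can look like in practice, the following uses the networkx implementation of the Blondel et al. (2008) algorithm; the toy graph is a made-up stand-in for a real hyperlink network, not the thesis data:

```python
import networkx as nx

# Two tightly knit clusters of five nodes each, joined by a single
# bridging edge -- a toy stand-in for hyperlink clusters that link a
# lot internally but rarely to one another.
G = nx.Graph()
G.add_edges_from(nx.complete_graph(range(0, 5)).edges)
G.add_edges_from(nx.complete_graph(range(5, 10)).edges)
G.add_edge(4, 5)  # the bridge between the two clusters

# Louvain community detection (Blondel et al. 2008). Fixing the seed
# makes the otherwise randomized algorithm reproducible.
communities = nx.community.louvain_communities(G, seed=42)
print([sorted(c) for c in communities])
```

The `resolution` parameter of `louvain_communities` can be varied to obtain coarser or finer partitions, which corresponds to the idea of gradually determining a meaningful number of communities rather than accepting a single fixed partition.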

After this brief elaboration of the theoretical framework, I am now moving on to the operationalization of this approach, explaining the data collection and the methodology.

3. Data and methodology

This chapter describes how I translated the theoretical ideas of issue mapping into concrete procedures of data collection, preparation, and analysis. I introduce the data collection and preparation steps one by one (section 3.1.), starting with the selection of geographical settings, moving on to the selection of webpages as starting points and the hyperlink crawling procedure, and eventually describing the preparation of the resulting corpus of webpages for the subsequent analysis. In the remaining two sections of this chapter, I describe the analytical work I did to interpret the debate structure (3.2.) and to study network formation processes (3.3.), the latter being an analysis of what I call ‘community assembling practices’.

3.1. Data collection and preparation

Case selection

The initial aim of my thesis was to study public debate about automation across various countries. In order to identify suitable countries for comparison, I started off by superficially exploring the state of public web debates in various countries around the world using simple Google searches. The U.S. appeared to be a good case right away, because of the relatively large share of articles about automation in U.S. American news media, and because of the fact that the topic was widely discussed during this year’s Democratic Primaries5. I quickly settled on the U.S., and moved on to define criteria for the further selection of countries to be compared to the U.S.:

1. In order to interpret a debate, I need to understand a country’s national language.

2. A country should be considered a ‘developing nation’, as I was specifically interested in how issues play out in transnational relations of unequal access to, or benefits from, technological innovation.

While certain African or Caribbean countries might also have qualified, I eventually decided to restrict my search to Spanish-speaking Latin American countries, given their particular socio-economic relation to the U.S. I then considered various factors6, and finally chose the cases of Chile, Mexico, and Argentina.

As it turned out, this cross-country approach did not work very well. During data collection, I discovered that job automation is simply not discussed much publicly in the Latin American countries I looked at. There were also very few existing hyperlink connections between Latin American and U.S. webpages on automation. While I kept the few Latin American webpages that I collected in the corpus, I now abstain from a specific cross-country analysis, as the number of Latin American pages is simply too low. In this sense, the analysis of this study is limited to the specific case of public debate in the U.S. (even though the transboundary nature of hyperlinks means that webpages from other contexts that are involved in U.S. American debate are still included).

Re-purposing Google to establish country-specific webpages on job automation

Every researcher working on public discourse needs to decide what data to use in order to capture the ‘public debate’ of a chosen country. It is common practice to collect a corpus of text items from widely read newspapers or from social media platforms, such as Twitter. My approach is different in that I use Google’s search algorithm as a re-purposed research device to identify a starting set of ‘texts’ (i.e. webpages) for consideration. Using Google in this way is useful as it does not require a pre-defined set of actors to extract information from. Instead, Google returns a list of diverse types of actors as suggestions in response to a query that a researcher conducts (Rogers 2013). In this sense, it is a more inductive, as well as a temporally and thematically grounded, form of identifying starting pages for analysis. This method is particularly useful in combination with hyperlink crawling, as the returned webpages can easily be used as starting points for a subsequent tracing of hyperlinks to map out public debate7.

5 See Vigdor, N. (2019, October 15). Is Automation Threatening American Jobs? Democrats Debate. The New York Times. https://www.nytimes.com/2019/10/15/us/politics/automation-democratic-debate.html.

6 My search gradually narrowed down to Chile and Mexico, as they are both heavily entangled with U.S. American investment and chains of production. Because these two economies are considered very open to economic globalization, I added a more protectionist country, Argentina, in order to have some contrast. There are other factors in the economic literature that I considered (e.g. levels of employment polarization due to advances in technology; more generally, how the production of value is organized; and welfare state systems).

How exactly do I use Google to identify starting pages for the subsequent crawling procedure? I simply query ‘job automation’ on the U.S. domain of Google8. I confined the search to pages registered as U.S. American and to webpages in English. I then saved the first 40 search returns9 to use in the next stage as starting entities. I repeated the same procedure for the Chilean, Argentinian, and Mexican Google domains (looking for Spanish-language webpages resulting from the combined query of ‘automatización’ and ‘trabajo’). Combining the returns for all four countries, I eventually ended up with a set of 160 starting pages. I derive the ideas and the technical details of this procedure from related ‘digital methods’ research (Rogers 2013).

Crawling webpages in Hyphe

Once the identification of starting pages was finished, I moved on to crawl the 160 starting webpages for further relevant actors. Crawling is an automated process of identifying the out-links of webpages, following them to their target pages, and saving the followed links and the newly discovered pages as data. If necessary, this process can be repeated multiple times. While crawling has been used to construct very large data sets for more quantitative research interests, qualitative researchers have also identified it as a promising tool, particularly in the context of issue mapping (Marres & Weltevrede 2013). For qualitative or mixed-methods researchers, crawling offers a useful technique for setting up systematic data collection procedures of middle to large scope that still allow for a considerable level of qualitative inspection and control.
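Hyphe performs this step internally; purely to illustrate the depth-0 principle, the following sketch extracts a page’s out-links without descending into any other sub-pages of the same host. The page URL and HTML snippet are invented examples, and the parsing uses only Python’s standard library:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class OutLinkParser(HTMLParser):
    """Collects the href targets of all <a> tags on a single page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.out_links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL.
                    self.out_links.append(urljoin(self.base_url, value))

def crawl_depth_0(page_url, page_html):
    """'Depth 0': only the given page itself is parsed for out-links;
    the crawler does not descend into other sub-pages of the host."""
    parser = OutLinkParser(page_url)
    parser.feed(page_html)
    return parser.out_links

sample_html = '<p><a href="/tech/robots">in-site</a> <a href="https://qz.com/automation">external</a></p>'
links = crawl_depth_0("https://www.nytimes.com/article", sample_html)
# links == ['https://www.nytimes.com/tech/robots', 'https://qz.com/automation']
```

Repeating this over the discovered target pages would correspond to an additional crawl iteration; the design choice in the thesis is to keep each iteration at depth 0 and to decide qualitatively which discovered pages to crawl next.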

For this crawling procedure I used the software Hyphe (developed by the Medialab at Sciences Po). Hyphe was created by a team of researchers working with ANT, specifically for the purpose of facilitating a combination of crawling techniques and qualitative decision-making over which webpages to include or exclude. While other programs for mapping out issues through semi-controlled

7 It needs to be clear that re-purposing Google introduces new problems. Most importantly, my selection of starting entities is strongly shaped by Google’s recommendation algorithm. I will discuss the implications of this research approach in more detail in the reflection section.

8 I am using a clean research browser and the online tool ‘Search Engine Scraper’ to facilitate the search (https://wiki.digitalmethods.net/Dmi/ToolSearchEngineScraper).

9 I inspected the search returns superficially and only excluded returns for job postings on search platforms. Theoretically, companies looking for ‘automation engineers’ through such platforms might also be considered part of the issue network. But for the selection of starting entities, I considered it more relevant to take webpages that actually contribute something to public debate on job automation.


crawling procedures exist10, I chose Hyphe mostly because of its ability to define ‘web entities’ at different levels – either in terms of homepage domains or as individual sub-pages (Jacomy et al. 2016). For my case, being able to differentiate these two levels seemed important for the following reasons:

1. Breaking individual sub-pages out of their host domains and looking at whether they still link to the network is technically a much more time-efficient way of building a large hyperlink network while also allowing for qualitative control (because there is an in-built function for this in Hyphe). This approach still allows me to merge various sub-pages back into their host at a later stage, as I do for the analysis of actor in-link counts in chapter 4. The alternative, more time-consuming approach would be to look at the sub-pages of each domain individually and decide whether they are relevant parts of their host actor in terms of stance on job automation or not.

2. Having individual sub-pages makes the visualization of network structures more meaningful. Visualizing a network with host pages as nodes (such as nytimes.com or amazon.com) may be interesting for an analysis of relations between institutions, but less so for the purpose of analyzing issue assembling practices.

The decision to break host actors into various entities has implications for the ‘depth’ at which researchers crawl pages in Hyphe. In my case, I crawled the 160 starting pages at depth 0, which means that the crawler only searches for outgoing links on the exact starting pages that I defined, rather than moving deeper into their hierarchical webpage structure to search for additional links from other sub-pages of the same host. A crawling depth of more than 0 would make more sense if hosts and their sub-pages were kept together11.

Inspecting discovered webpages in Hyphe

Through the crawling procedure in Hyphe, I discovered several hundred additional webpages as potential candidates for adoption into the corpus. I evaluated their relevance based on one main principle: a webpage needs to be substantively connected to the topic of job automation. I assessed whether a webpage qualified in various ways. For newspaper articles, a quick look at the article’s title, combined with a simple keyword search for ‘automation’ or ‘robots’, was usually enough. Other webpages required closer inspection. For instance, I discovered various webpages on artificial intelligence (AI). In order to maintain the focus on job automation in my network (which is necessary due to limited time resources), I excluded pages that discuss technical properties of AI without really connecting to issues of labor. I scrutinized webpages in a similar way if they formulated their content in terms of ‘digitalization’ or ‘technological globalization’.

10 For instance, researchers from the Digital Methods Initiative at the University of Amsterdam developed the software ‘IssueCrawler’ specifically for the purpose of issue mapping. See: http://www.govcom.org/Issuecrawler_instructions.htm.

11 The whole procedure of defining starting entities through Google and of crawling through connected pages took place between the 13th and 28th of March. Thus, my data certainly reflects some specificities of this particular time period. The analysis that I present in the following chapters does not consider the time dimension. However, as I discuss in the reflection chapter, it might be important to do so in subsequent research if somebody were to reproduce a similar project.


However, whether a webpage has a substantive association to job automation is often a tricky and ambiguous decision. For instance, I excluded articles on the impact of chatbots on elections, because they seem to deal with questions of democracy rather than job automation, even though these bots also replace human labor in political campaigning. I also excluded webpages on blockchain technology, because the fact that this technology replaces certain workers employed in more traditional forms of money circulation did not seem reason enough to expand my network to the huge debate on blockchain. In other cases, I decided to include ambiguous topics, such as discussions on the value of ‘handmade’ products or on the political organization of workers (I provide a comprehensive overview of identified topics in chapter 4).

If I considered a webpage as ‘in’12 the network, I moved on to establish its sub-pages as separate actors by breaking them out of their host. Once I had evaluated and separated all the discovered pages, I crawled them again at depth 0. Overall, I repeated this crawling procedure 3 times, which resulted in a corpus of 1828 webpages13.

Preparing the corpus of webpages for analysis

Once the crawling was finished, I moved on to classify each of the included webpages in terms of two variables for further analysis: actor type (whether an actor is a firm, an author, an NGO, etc.) and textuality (whether a page presents a textual narrative, such as a New York Times online article, or serves infrastructural purposes, such as the homepage nytimes.com). In many cases, looking at the URL (and already being familiar with the included webpages because of the previous inspection) was enough to determine the values of the two variables. In less clear cases, I manually inspected each webpage in order to classify it correctly, or looked up additional information from external sources.

For most of the analysis presented in this thesis, I use the textuality variable to exclude all non-textual pages, which reduces my dataset to 1491 pages. While this means losing some information and some links between pages, sticking to textual pages only turned out to facilitate a much cleaner identification of structural patterns with the community detection algorithm. With non-textual pages included, the algorithm mostly discovers structural patterns that originate in institutional affiliation, such as links between a New York Times article and the homepage nytimes.com. However, my main interest lies in mapping out the relations between particular discursive contents, which works better when the community detection is limited to textual pages only (because non-textual pages do not have much discursive content to interpret).

I narrowed the dataset further down to only include connected components with 7 or more webpages, which resulted in a final network of 861 textual pages, organized in 9 connected

12 When evaluating candidate webpages, Hyphe allows researchers to set them either to ‘in’, ‘out’, or ‘undecided’. Leaving webpages on blockchain or chatbots as undecided for a certain time allows researchers to explore linkages between topics and to decide at a later stage whether it is worth expanding the network into these topics.

13 Due to time limitations, in the final round I only considered pages that received in-links by more than 1 other page in the corpus. The possibility to inspect in-link counts of webpages and to look at their prospective network position in Hyphe is helpful to evaluate relevance of the discovered pages.


components14. Excluding these very small components has the advantage that network visualizations become more readable. Furthermore, analyzing sets of webpages smaller than 7 pages does not reveal much about hyperlink issue assembling practices (because they do not have a lot of links to analyze). In the final set of 861 pages, the largest (main) component of my network of textual pages has 759 webpages, the second-largest 29, and the third-largest 18.
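The component filter described above can be sketched in a few lines of pure Python (an illustration only, not the tooling actually used; node names and the toy graph are invented). Components are found by breadth-first search over an undirected view of the hyperlink graph, and pages in components below the size threshold are dropped:

```python
from collections import deque

def connected_components(nodes, edges):
    """BFS over an undirected view of the hyperlink graph;
    returns one list of nodes per connected component."""
    neighbors = {n: set() for n in nodes}
    for src, tgt in edges:
        neighbors[src].add(tgt)
        neighbors[tgt].add(src)  # link direction is ignored for components
    seen, components = set(), []
    for start in nodes:
        if start in seen:
            continue
        queue, comp = deque([start]), []
        seen.add(start)
        while queue:
            node = queue.popleft()
            comp.append(node)
            for nb in neighbors[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        components.append(comp)
    return components

def filter_small_components(nodes, edges, min_size=7):
    """Keep only pages in components with at least `min_size` members."""
    kept = [c for c in connected_components(nodes, edges) if len(c) >= min_size]
    return {n for comp in kept for n in comp}
```

For example, a 7-page chain survives the filter while an isolated pair of pages is discarded, mirroring the reduction from 1491 to 861 pages described above.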

3.2. Identifying network patterns using community detection

Once the data collection and preparation were finished, I moved on to consider ways of analyzing the resulting hyperlink network. Noortje Marres, Richard Rogers and various other digital methods researchers have used a variety of techniques to identify and distinguish networks that assemble around issues. A common strategy for studying issue maps is to combine network and content analysis. For instance, digital methods researchers often use a research tool called the Lippmannian device15, which measures the number of manually defined keyword mentions per webpage in the corpus (Rogers et al. 2015). These counts can then be plotted onto the network nodes in order to analyze how keyword frequencies differ across areas of the network. In this way, researchers may analyze how actors in different domains of the network frame an issue in diverging languages, potentially disagreeing over what the issue is all about in the first place.

Because of my explicit interest in the form of the debate on job automation, I decided to explore a different way of analyzing an issue map, one that draws more extensively on network analytical techniques and focuses on the hyperlink network’s structural properties. For that purpose, I apply the so-called Louvain community detection algorithm to study clusters of actors (which Marres and Rogers described as networks that assemble around an issue) in terms of structurally existing communities16. This algorithm analyzes the network structure to identify the partition at which communities display the highest internal linking density (Blondel et al. 2008). The procedure is based on the principle of modularity maximization: for various candidate partitions, the algorithm compares the density of links inside communities to the density of links between communities in order to find the ideal balance.
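To make the maximized quantity concrete, the sketch below computes the modularity Q of a given partition of an unweighted, undirected graph. This is only the objective function that Louvain optimizes (Blondel et al. 2008), not the optimization algorithm itself, and it simplifies hyperlinks to unweighted, undirected ties:

```python
def modularity(nodes, edges, community):
    """Modularity Q of a partition: the fraction of links falling inside
    communities minus the fraction expected if links were placed at
    random. `community` maps each node to a community label."""
    m = len(edges)
    degree = {n: 0 for n in nodes}
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # Observed fraction of edges inside communities.
    intra = sum(1 for u, v in edges if community[u] == community[v]) / m
    # Expected intra-community fraction under the random null model.
    expected = sum(
        (sum(degree[n] for n in nodes if community[n] == c) / (2 * m)) ** 2
        for c in set(community.values())
    )
    return intra - expected
```

For two triangles joined by a single bridge edge, splitting the graph along the two triangles yields Q = 6/7 − 0.5 ≈ 0.357, whereas lumping all nodes into one community yields Q = 0. Raising the resolution parameter mentioned in section 4.2 effectively re-weights this trade-off so that smaller communities become optimal.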

The assumption behind the thereby identified structural communities is that they do not form at random, but that certain social dynamics led their members to hyperlink more intensively to one another. In the context of the issue mapping approach, actors might interlink into such a structural community because of an open public debate about a shared issue, or for different reasons, such as a shared affiliation to a particular think tank (this refers to Marres & Rogers’ already mentioned distinction between issue and social networks). Besides the aim of investigating the structural form of a network, the main purpose of detecting communities in the described way is to

14 A network component is a set of webpages that link to one another, but do not link to other sets of webpages.

15 https://wiki.digitalmethods.net/Dmi/ToolLippmannianDevice.

16 As far as I know, the use of community detection algorithms to interpret issue maps is not very common. A paper by Burgess & Matamoros-Fernández (2016) is one of the rare cases in which this technique is actually used for that purpose.


investigate exactly these dynamics or processes that drive network formations empirically (Blondel et al. 2008).

I consider the idea of studying structural community formation processes and the inductive research agenda of issue mapping to be largely compatible. Combining the two, I do not presuppose the ‘nature’ of the communities that I detect (not even whether they are an issue network or a social network in Marres & Rogers’ terms). Instead, my research aim is to describe a community’s form in terms of the involved actors’ community assembling practices (i.e. linking and framing practices), so as to infer why it assembled into its particular shape. Due to time restrictions, I select two case studies of communities for this kind of analysis.

3.3. Analyzing community assembling practices

For the analysis of linking practices, I am drawing on literature that applies ethnographic principles for investigating linking behavior. The anthropologist and STS scholar Anne Beaulieu has introduced such an approach that focuses on textual linking contexts and on the positionality of actors in the overall debate to interpret relations between interlinked websites. This analysis consists of an interpretation of the way an actor on the receiving end is being framed by the actor who establishes a link (Beaulieu 2005). It also involves a consideration of the receiving webpage’s function to the sending webpage.

Some hyperlink research has drawn on Beaulieu’s approach to establish typologies of hyperlinking motivations. For instance, Richard Rogers studied linking behavior between various actors involved in controversies about climate change. He observed that NGOs tend to link frequently to governmental pages, while governmental pages usually refrain from linking back (Rogers 2012:199). Based on an interpretation of hyperlinks as social relations, he typologized links as cordial, critical or aspirational associations. Cordial links are the most common, representing friendly links between partners or otherwise agreeing web actors. Critical links occur when an actor criticizes another web actor (such as NGOs criticizing governmental institutions in the example). This occurs rarely, because in-links are considered endorsements by search engines. Actors who are critical of one another are thus more likely to ignore each other or to voice criticism in text form rather than by establishing a hyperlink (Venturini et al. 2018:7). Aspirational links occur when smaller or less recognized institutions link to more powerful actors, such as funding institutions or other actors who receive a lot of in-links on the web. I will refer to this typology at certain moments in the analysis. However, the main point of this approach is not really to establish generalizable typologies of linking motivations, but to interpret links inductively in terms of the functionality and meaning they assemble in concrete network settings (Beaulieu 2005; Thelwall 2006).

In my case studies of two structural communities, I engage in this kind of analysis of linking practices to abstract the patterns that shape each community. Based on a combined analysis of a community’s actor types (whether they are NGOs, think tanks, news reports, etc.), a visual analysis of its hyperlink patterns (see Visualizations 3 and 5), and a close reading of linking contexts and involved webpage contents, I investigate whether a community’s ‘substance’ constitutes a social or an issue network in Marres and Rogers’ sense. This is essentially a question about what connects the actors in a


community: is it a mere network of like-minded actors? Or are actors articulating an issue out of a sense of conflict and dissent? Certain actors might form a community because they share an interest in settling or closing an issue (Beder 1991). This agenda could be inferred if actors appear to silence, ignore, or integrate dissenting voices. In other cases, a community might assemble because of actors who articulate disagreement in relation to an issue, for instance over which scientific report to trust, or about the very definition of the issue (e.g. whether job automation is an issue of state regulation or one of education and training). But, again, these are the empirical questions I am going to investigate. For the analysis of webpage contents, I conduct a structured coding to identify recurrent themes in a community, which I then relate back to the hyperlink patterns.

For each of the communities that I investigate in more detail, I focus on the 15 actors with the highest community-internal degree counts (i.e. the sum of an actor’s in-links and out-links). My assumption is that these degree counts reflect the importance of actors in assembling a community17. In this sense, I assume that actors with high degree counts reflect the linking and content dynamics of a community in ideal-typical form, as they were most frequently involved in formatting it.
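The selection of these 15 actors can be sketched as follows (an illustration with invented node names; as noted in footnote 17, degree here ignores link direction, and only links with both endpoints inside the community are counted):

```python
from collections import Counter

def top_actors(edges, community_members, k=15):
    """Rank the actors of one community by community-internal degree:
    in-links plus out-links, counting only links whose source and
    target both belong to the community."""
    members = set(community_members)
    degree = Counter()
    for src, tgt in edges:
        if src in members and tgt in members:
            degree[src] += 1  # out-link of src
            degree[tgt] += 1  # in-link of tgt
    return [actor for actor, _ in degree.most_common(k)]
```

A link from a community member to an outside page (or vice versa) thus contributes nothing to the ranking, which keeps the focus on actors that assemble the community internally.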

4. Analysis of actors and debate structure of the full network

This chapter presents the results of my investigations of the full network. This includes the analysis of actor influence and actor type, which is the subject of section 4.1. Importantly, this actor analysis includes all 1828 pages (textual and non-textual), and is based on a definition of actorness located at the level of web domains, not at the level of individual sub-pages (see the section on actor influence for an explanation). In contrast, the investigation of the debate structure in section 4.2 necessitates interpretable page contents, which means that I restricted it to the 861 individual textual pages. The analysis of the debate structure directly prepares the subsequent case studies of community assembling practices in chapters 5 and 6.

17 This is slightly different from the conceptualization of relational actor influence, because actors who do not receive any in-links but establish a lot of out-links also have a strong assembling effect on the communities (the algorithm does not account for the directionality of the links).


4.1. Actors

Actor types

Table 1 displays the outcome of the actor type classification for the full network of textual and non-textual pages. As the table indicates, I classified a large share of the 1828 pages as related to news reporting. This category contains various online articles on job automation from a large number of different news organizations, such as the New York Times or The Economist. It also includes articles from several news organizations specialized in technological issues, such as the MIT Technology Review or the magazine Wired, as well as from non-U.S. American domains, such as the British newspaper The Guardian or various Latin American news outlets.

The second largest actor group, think tanks, features mostly U.S. American institutions such as Brookings or the Aspen Institute. Identifying an actor as a think tank can be quite ambiguous. For instance, the so-called Singularity Hub or Singularity University could be considered a think tank, a training and educational institution, a blog, or even a news actor. In these unclear cases, I concentrated on the nature of the content that the respective actor added to the general debate. Based on this, I classified actors such as the Singularity University as think tanks, because of their primary function of performing and communicating research and advocacy in my data.

Other frequently occurring actor types are scientific actors, firms, and public institutions. The category of scientific pages contains publications that were published either on university web domains or on the domains of academic publishers, such as Elsevier or Sage. Many of these publications are behind a paywall, so in these cases the URLs only lead to the abstracts of articles or books. The firms category contains webpages of various technology companies, such as IBM or Amazon. It also features firms that specialize more specifically in automation technology, such as WP Engine, as well as pages of consultancy firms, such as McKinsey or Deloitte. Public institutions feature research published by the

+------------------------------+-------+
| Actor type                   | Count |
+------------------------------+-------+
| News                         |   738 |
| Think tank                   |   366 |
| Science                      |   207 |
| Authors/Books/Blogs/Politics |   183 |
| Firms                        |   155 |
| Public institutions          |    75 |
| Intergov. organizations      |    46 |
| Videos                       |    26 |
| NGOs                         |    21 |
| Education/Training           |    11 |
+------------------------------+-------+

Table 1. Distribution of actor types in the corpus of textual and non-textual pages.


U.S. Department of Labor, the U.S. Bureau of Labor Statistics, the National Highway Traffic Safety Administration, or the White House, only to name a few.

I included an umbrella category for (non-academic) book authors, books, blogs, and political pages. Blogs are quite diverse types of webpages, such as freakonomics.com or prospect.org. Books mostly comprise specific book entries on the domains of online retailers, such as Amazon.com, or on domains that provide free access to them, such as openlibrary.org. The only pages that I classified as ‘politics’ were part of Andrew Yang’s homepage – the candidate in this year’s Democratic Primaries who adopted automation and universal basic income as his central campaign topics. Surprisingly, I did not discover any other pages of political parties or politicians through my data collection procedure.

The corpus features only a small number of intergovernmental pages, videos (mostly on YouTube), NGOs, and webpages related to educational or training purposes. The intergovernmental pages belong either to the OECD, the World Economic Forum, or the World Bank. NGOs include various pages of non-profit organizations that articulate particular issues or concerns in relation to automation, such as the domain basicincome.org. As noted above, the distinction between NGOs and think tanks is hard to draw in some cases. Finally, pages in the ‘education/training’ category offer training programs for people to acquire new skills to cope with the effects of automation on workplace settings.

Actor influence

My initial plan for measuring actor influence was simply to count the in-degree of pages after confining the network to the 861 textual webpages in my corpus. However, the results of this approach did not actually provide insights into actor influence, but rather reflected the number of pages of a particular domain. For instance, all articles on automation published on the Argentinian news domain Perfil.com had quite high in-degree values, because they interlink to one another heavily. However, these articles were not at all well connected to the rest of the network. In this case, it did not seem reasonable to conclude that all these different Perfil.com articles are ‘influential’ because of their high number of links. It might have been possible to partially account for this problem by combining in-degree values with different network centrality or clustering measures. But this would have conflicted with my conceptualization of relational actorness, which entails the idea of establishing influence by counting the number of references a page receives. In the end, I decided to deal with this problem by merging all the pages of my corpus (textual and non-textual), as well as their links, back into their host domains. The resulting dataset features 383 of these domains, which need to be understood as aggregations of all their individual sub-pages. For instance, I aggregated all the individual New York Times articles and their hyperlinks into one ‘nytimes.com’ node.
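The aggregation step can be sketched as follows (an illustration only; the URLs are made-up, and whether repeated links from the same source domain count once or multiple times is a design choice that the sketch resolves by counting every cross-domain link). Each page is mapped to its host via the URL’s network location, within-domain links are discarded, and the remaining links are tallied per target domain:

```python
from urllib.parse import urlparse

def host(url):
    """Map a page URL to its host domain, collapsing a leading 'www.'."""
    netloc = urlparse(url).netloc
    return netloc[4:] if netloc.startswith("www.") else netloc

def domain_indegree(page_edges):
    """Aggregate page-level hyperlinks to host domains and count, for
    each domain, the in-links arriving from *other* domains only."""
    indegree = {}
    for src_url, tgt_url in page_edges:
        src, tgt = host(src_url), host(tgt_url)
        if src != tgt:  # drop e.g. a nytimes.com article linking its own homepage
            indegree[tgt] = indegree.get(tgt, 0) + 1
    return indegree
```

Dropping within-domain links is what prevents the Perfil.com effect described above: a domain whose articles only cite one another ends up with an aggregated in-degree of zero.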

The results of this analysis of influence between aggregated actors are presented in Table 2. With the additional help of the actor type classification layer, we can see that the domains that receive by far the most hyperlinks from other domains are mostly news organizations, such as the New York Times, Quartz, or Forbes. Highly cited domains of other actor types include the retail company Amazon,


+------------------------------+----------+---------------------+
| Domain                       | Indegree | Actor type          |
+------------------------------+----------+---------------------+
| nytimes.com                  |       28 | News                |
| gizmodo.com                  |       23 | News                |
| theverge.com                 |       18 | News                |
| qz.com                       |       17 | News                |
| vox.com                      |       16 | News                |
| amazon.com                   |       13 | Firms               |
| forbes.com                   |       13 | News                |
| theguardian.com              |       13 | News                |
| medium.com                   |       12 | News                |
| cnbc.com                     |       11 | News                |
| politico.com                 |       11 | News                |
| hbr.org                      |       11 | News                |
| brookings.edu                |       11 | Think tanks         |
| bls.gov                      |       10 | Public institutions |
| mckinsey.com                 |       10 | Firms               |
| wsj.com                      |        9 | News                |
| buzzfeednews.com             |        9 | News                |
| doi.org                      |        8 | Science             |
| theatlantic.com              |        8 | News                |
| pewinternet.org              |        7 | Think tanks         |
| dol.gov                      |        7 | Public institutions |
| youtube.com                  |        7 | Videos              |
| clarin.com                   |        7 | News                |
| obamawhitehouse.archives.gov |        6 | Public institutions |
| pewresearch.org              |        6 | Think tanks         |
| singularityhub.com           |        6 | Education/training  |
| tcf.org                      |        6 | Think tanks         |
| technologyreview.com         |        6 | News                |
| blogs.lse.ac.uk              |        5 | Science             |
| stlouisfed.org               |        5 | Public institutions |
+------------------------------+----------+---------------------+

Table 2. The 30 most cited domains in the hyperlink network.

the U.S. American think tank Brookings, or the U.S. Bureau of Labor Statistics (bls.gov), to name a few. Indeed, not only are news organizations the most influential pages amongst the top 30 domains, but actors in this category also display higher average in-degree values than other actor types in the overall network. While news domains register about 2.7 in-links on average, this value is considerably lower for other actor types, such as public institutions (2.0), and lowest for NGOs (1.1).
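The per-type averages reported above (e.g. 2.7 for news domains) amount to a grouped mean over the aggregated domains; a minimal sketch, with invented domain names and counts:

```python
def mean_indegree_by_type(indegree, actor_type):
    """Group domains by actor type and average their in-degree counts.
    Domains absent from `indegree` are treated as having 0 in-links."""
    totals, counts = {}, {}
    for domain, kind in actor_type.items():
        totals[kind] = totals.get(kind, 0) + indegree.get(domain, 0)
        counts[kind] = counts.get(kind, 0) + 1
    return {kind: totals[kind] / counts[kind] for kind in totals}
```

Counting never-cited domains as zeros (rather than excluding them) is the choice that pushes averages down for actor types, such as NGOs, whose pages rarely receive links.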

According to the conceptualization of relational and aggregated actorness, these findings indicate that news organizations are generally most often referenced when actors establish their narrative or argument. I assume that this finding is partially caused by the relatively large number of news actors


in my corpus, as well as by a tendency for these news pages to link more intensively to one another than to other types of actors.

In response to RQ1, I conclude that large U.S. American and British news organizations are the most influential actors in articulating issues about job automation on the hyperlinked web. However, particular think tanks, scientific organizations, firms, and public institutions are also important points of reference in the observed debate. In order to analyze how these influential actors frame automation, and how they relate to one another discursively and socially (i.e. through hyperlinks), I now move on to the question of the debate structure. As explained in the methodology section, studying the debate structure through community detection also reveals patterns of intensive social or issue-specific interaction between these actors, which in turn allows us to understand the meaning and function of high-influence actors in the overall debate.

4.2. Debate structure and communities

I investigate the structure of the debate network by applying the Louvain community detection algorithm at different resolution parameters (Blondel et al. 2008). Adjusting the resolution parameter changes the number and sizes of the communities that the algorithm calculates. For instance, if researchers want to investigate whether the hyperlink network displays a polarized structure, they can lower the resolution from its structurally ideal setting until it results in only two communities. If no meaningful pattern emerges when the network is separated into two communities, or if researchers are interested in identifying a hierarchical structure of sub-topics within topics, it is possible to increase the number step by step and to look at the network at different scales (for an example of this procedure for a different type of network see Heemskerk & Takes 2016).

In my case, I first tested whether separating the network into two communities creates a meaningful pattern. I consider a community as meaningful if a quick reading of the webpage names and contents is sufficient to get an idea of the assembling mechanisms that brought them together. My assumption is that a meaningful two-fold community pattern could be an indicator of polarization, reflecting tendencies for particular actors to be structurally closer either to utopian or dystopian accounts about automation. However, no meaningful patterns emerged for the separation into two communities, nor for other low resolution settings. I thus increased the resolution step-by-step, and the partitions started to make some sense once I reached about 13 communities. At this setting, I could identify a large cluster of webpages discussing autonomous vehicles and automation of transportation. However, I continued increasing the resolution because other communities were still too large to reveal interpretable patterns. I finally settled on 21 communities as the ideal setting for further interpretation. At this resolution, the ‘transport and autonomous vehicles community’ had separated into two distinct communities. Several other meaningful communities became identifiable at this setting as well. Cutting the network into 21 communities means that there are 13 communities in the main component and 8 communities in separate components (see Visualization 1)18.

18 As I describe in chapter 3, I excluded all components of the network with fewer than 7 nodes, in order to conduct a more focused analysis. After this exclusion, I was left with these 8 disconnected components and


As I found out during a first interpretation of these communities, both the question of whether communities are driven by issues or by institutional affiliations (i.e. whether a community is a social and/or an issue network) and the question of the coherency of framing automation appear to depend on community size. For example, there are some communities consisting of roughly 20-60 webpages of mostly the same host domain. Coming from the same domain, they often present rather similar ways of framing job automation. This is the case for community 10 (56 webpages), which I call ‘Singularity’ because most of its pages belong to the think tank actor Singularity University. Actors in this community are on average very positive, clearly articulating sensationalist and mostly utopian perspectives on automation. Similarly, community 12 (48 webpages) deals with transport and autonomous vehicles (hence the name ‘Transport and autonomous vehicles II’), mostly containing webpages published by the information platform fleetowner.com. All these pages discuss various technological and economic developments that affect trucking in the U.S. Other cases of networks that are clearly structured in terms of institutional affiliation are: the ‘World Economic Forum; Forbes.com community’ (community 1; 46 nodes), which features several reports published by the World Economic Forum and the business magazine Forbes; the ‘Retraining workers community’ (community 13; 33

with the main component. The modularity algorithm classifies each disconnected component as a community in itself (communities 14-21). The enumeration of the communities does not have any meaning.

Visualization 1. Hyperlink network partitioned into 21 communities. Node color reflects community membership. Community number labels are placed on the node with the highest degree value inside of the community. Node

(22)

webpages), which focuses on the state of worker retraining to adapt to job automation and contains news reports and various posts by the U.S. think tank Aspen Institute; and the ‘Andrew Yang community’ (community 9; 27 webpages), which revolves around webpages of Andrew Yang’s campaign for the 2020 Democratic Primaries and discusses Universal Basic Income as one of his policy proposals. For all of these smaller sized communities, there appears to exist a pattern of shared affiliation to particular institutions or web actors, as well as a tendency to frame job automation homogenously within the community. In these cases, the concrete topic itself (e.g. warehouses or trucking) seems less significant in bringing the involved actors together. However, this observation is based on a rather superficial analysis, and I engage in a more detailed analysis these assumed patterns in the two case studies.
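The distinction drawn here, communities dominated by a single host domain versus communities spread across many domains, can be made explicit with a simple concentration measure. The following sketch is illustrative only: the URLs are invented, and the actual data set was of course analyzed differently.

```python
from collections import Counter
from urllib.parse import urlparse

def domain_concentration(community_urls):
    """Share of pages in a community that come from its most common host.

    Values near 1.0 suggest an institutionally driven (social) community,
    such as the 'Singularity' community dominated by one think tank's pages;
    lower values suggest an issue-driven community with diverse domains.
    Returns (most_common_host, share).
    """
    hosts = [urlparse(u).netloc for u in community_urls]
    most_common_host, count = Counter(hosts).most_common(1)[0]
    return most_common_host, count / len(hosts)

# Hypothetical example pages (not taken from the actual data set):
community = [
    "https://su.org/blog/ai-jobs",
    "https://su.org/blog/future-of-work",
    "https://example-news.com/automation",
]
host, share = domain_concentration(community)
```

Such a measure can only flag candidate communities; whether shared affiliation or a shared topic actually brought the pages together still requires close reading, as the case studies below undertake.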

In contrast to these institutionally driven patterns, other communities of roughly 60-100 pages appear to be organized much more around a shared topic, while being far more diverse in featured web domains. For example, the ‘Technological unemployment numbers community’ (community 3; 101 webpages) is the largest community in the network. It revolves around the already mentioned study by Frey and Osborne (2013) and features many other webpages of think tanks, news media, academic blogs, or reports by the consulting company McKinsey & Company. This community appears to be driven by a discussion about the number and types of jobs endangered by automation (thus the name ‘Technological unemployment numbers’). The ‘General community’ (community 4; 96 webpages) is the second largest community and features many different actors who discuss job automation in very general and unspecific terms. The ‘Working conditions at Amazon warehouses community’ (community 2; 81 webpages) features a relatively large number of newspaper articles from various media outlets and discusses working conditions at Amazon, particularly in its warehouses. The ‘Transport and autonomous vehicles I community’ (community 8; 62 webpages) consists mostly of news reports on various types of jobs that are being automated, with a focus on transportation (e.g. autonomous trucking or Amazon’s plans to use drones in its delivery system). The ‘Warehouse community’ (community 7; 59 webpages) also deals with warehouses but is less specific about Amazon. Instead, it features scientific reports published by the Labor Center of UC Berkeley, as well as news reporting on labor union politics and discussions about automation’s impact on job quality. This community appears to be rather negative in sentiment, especially in comparison to the other community on warehouse work. The ‘Etsy vs. Amazon community’ (community 5; 56 webpages) consists of various news reports and blog posts about the online retail company Etsy. It discusses the company’s competition with Amazon, the value of handmade products, and the impact of automation on small-scale businesses more generally.

One case that does not fit this pattern well is the ‘Brookings.edu community’ (community 6; 73 webpages), which is relatively large but still appears to be institutionally driven, namely by the U.S. think tank Brookings and by several newspaper articles from The Guardian. These pages discuss job automation rather generally, but with a focus on policy responses. The ‘? community’ (community 11; 21 webpages) is the smallest connected community of the main network component; for this community, no pattern can be identified from webpage names alone. For the sake of focus, I will not analyze the disconnected network components individually. However, they also present
