The Societal Impact of Robotization

A Distant Reading of British Broadsheet Newspaper Articles on Robots in 2010 and 2016

MA Thesis Digital Humanities,

2016-2017, University of Groningen (Netherlands)

Name student: Marco R. S. Post, MA Student number: sXXXXXXX

Name first supervisor: Prof. Dr. Johan Bos Name second supervisor: Dr. Barbara Plank Course code: LHU999M15

Workload: 15 ECTS

Date of submission: July 6th, 2017

Abstract:

Due to the exponential rate of progress in robotics, robots have become vastly more important in everyday life within just a few years. For the four British broadsheet newspapers analyzed in this research project alone (the Financial Times, The Guardian, The Independent and The Daily/Sunday Telegraph), the amount of news coverage increased by 746% from 2010 to 2016. Drones and self-driving cars, in particular, are hot topics. The dataset of newspaper articles was collected from the online database LexisNexis and consisted of 1,040 newspaper articles (after all irrelevant articles had been filtered out). Although the implementation of robots has massive implications for the power dynamics in society, no clear differences could be detected between the news coverage of leftwing and rightwing newspapers; instead, each newspaper expresses a unique point of view on the robotization of society, with subtle but poignant differences in both emphasis and tone. Moreover, this research project reveals that even though anxiety towards robots abounds in popular culture, a pessimistic tone does not seem to be prevalent in the newspaper articles of the dataset. Neither the sentiment analysis (using the tool VADER) nor the opinion mining (with SentiWordNet 3.0) algorithms used in this research project could detect any general bias towards either optimism or pessimism about the robotization of society in the dataset. On the basis of an analysis of bigram collocation frequencies, though, the Financial Times turned out to be the most optimistic of the four newspapers about the robotization of society and The Telegraph the least optimistic.

Keywords: Robots, Artificial Intelligence, Societal Impact of Automation, Computational


Table of Contents

Chapter 1: Introduction

Chapter 2: Theoretical Background

Section 2.1: Robots in General

Section 2.2: Robots and Employment

Section 2.3: Robots and Warfare

Chapter 3: Hypotheses

Chapter 4: Methodology I – Acquisition of the Dataset

Chapter 5: Description of the Dataset

Chapter 6: Methodology II – Data Analysis


Chapter 1: Introduction

For much of the twentieth century, the only robots that the average person came into contact with were those in fiction. From the robots of Isaac Asimov's Foundation novels to C-3PO and R2-D2 in Star Wars, from the Replicants in Ridley Scott's Blade Runner to the relentless assassin-cyborg in James Cameron's Terminator, robots have sparked the imagination of screenwriters and novelists alike for decades and have acquired the status of cultural icons. Robot technology in real life, however, lagged considerably behind its imaginary counterpart, and scientists long struggled to make robots perform tasks that are very simple for us humans, such as walking on the pavement, climbing a ladder or navigating a labyrinth of corridors in a building. Within the last decade, though, a series of momentous breakthroughs in robotics has dramatically accelerated the development of robots. Within a very short span of time, what was once only science fiction has become reality (Ford 2015, Jordan 2016, Brynjolfsson & McAfee 2014).

Over the past few years, we have already seen robots being put to service on a massive scale for practical tasks in the real world, and if this trend continues (which is more than likely), their implementation in society will have massive consequences for almost everyone. The mass-scale adoption of driverless cars, for instance, will significantly affect not only the way we commute from home to work, but also traffic regulations, safety, car ownership and city planning, and it will likely put millions of cab drivers and truckers out of work. Or, to take another example, the deployment of drones in warfare already has serious ramifications for how wars are fought, from a strategic, logistic, psychological, political and economic perspective. Moreover, once military drones are operated by artificial intelligence rather than by a remote human pilot, we will be forced to rethink the ethical and legal aspects of allowing a machine to make decisions about killing flesh-and-blood people. Whether it concerns the advent of 3D printing (additive manufacturing), drones from Amazon.com delivering our orders, robotic prosthetics to aid disabled people, Roombas to clean our houses or the robots we send to the planets and moons of our solar system for space exploration, robots are here to stay and their impact on people's lives will most likely be massive (Ford 2015, Jordan 2016, Brynjolfsson & McAfee 2014).


ones they think should better be avoided. The decisions that are being made right now are likely to set the standards for robotics for decades to come. Given the vast scope of these developments, the participation of as many people as possible in this debate is highly desirable (Jordan 2016). In this process, journalists have an important role to play, and their participation is twofold. On the one hand, it is their job to inform people of recent developments in robotics, so that they can make well-informed decisions about them. Thorough and substantive media coverage of robotics is of the essence, so that the general public can know what to expect of robots and how they will affect their lives, and so that people can think about effective, fact-based policy on the implementation of robots in society. On the other hand, journalists are more than a passive conduit of information: it is by their very process of selecting specific news items (but not others) and giving these certain emphases and highlights that they actively shape the societal debate about robots. For instance, newspapers act as a platform where influential thinkers and engineers can express their hopes, expectations and fears about robots, but it is the journalists behind these newspapers who decide whom to interview and whom not.

As a consequence, it is very important to have a thorough overview of, and a keen insight into, the way journalists report on robotics, as this will also tell us much about the shape of the debate about robots in society in general. This is exactly what I hope to accomplish with this research project. In this MA thesis, I will map the newspaper coverage of robots in the UK in the years 2010 and 2016. I will do so on the basis of a dataset from LexisNexis, compiled of newspaper articles from four British broadsheet newspapers: the Financial Times, The Guardian, The Independent and The Daily/Sunday Telegraph. I will be looking at news about robots in the broadest sense of the word, that is, including driverless cars and military drones. My analysis will consist of a distant reading of the newspaper articles, for which I will make use of techniques from computational linguistics: sentiment analysis, opinion mining, bigram collocation frequencies, frequencies of specific keywords and, of course, summary statistics of article word lengths.
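The thesis does not spell out its implementation at this point, but the idea behind bigram collocation frequencies can be sketched with the Python standard library. The lowercase regex tokenizer and the example sentence below are illustrative assumptions, not the actual preprocessing pipeline used in this project:

```python
from collections import Counter
import re


def bigram_frequencies(text):
    """Count adjacent word pairs (bigrams) in a text.

    A toy stand-in for the bigram collocation counts used in the
    analysis; the simple lowercase word tokenizer is an assumption.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    # Pair each token with its successor and tally the pairs.
    return Counter(zip(tokens, tokens[1:]))


freqs = bigram_frequencies(
    "Driverless cars and military drones: driverless cars are a hot topic."
)
print(freqs.most_common(1))  # [(('driverless', 'cars'), 2)]
```

In an actual analysis, raw counts like these would typically be normalized by corpus size or ranked with an association measure before newspapers are compared.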

In this research project, I will be addressing the following research questions:

1.) What are the most important issues about the robotization of society that are raised in the media? Which aspects of the impact of robots on people's lives receive the most attention?

2.) What is the predominant angle from which the discussion about the robotization of society


3.) Are there any significant dissimilarities in the framing of robotization by rightwing or leftwing broadsheet newspapers?

4.) Have there been any significant developments in this over time? What are the most


Chapter 2: Theoretical Background

Section 2.1: Robots in General

Before I commence with my MA thesis, one very important question should be answered first, namely: what is a robot? This question is very simple to ask but, as it turns out, not at all simple to answer. As John Jordan observes: “Reason 1 why robots are hard to talk about: the definitions are unsettled, even among those most expert in the field” (Jordan 2016, emphasis in original). This is caused by the large variation among robots, which range from the humanoid Atlas robot by Boston Dynamics (Figure 2.1) to the affective seal-shaped PARO robot by the AIST company (Figure 2.2), and from self-driving cars by Tesla (Figure 2.3) to the X-47 drone from Northrop Grumman, which is able to autonomously launch from and land on aircraft carriers (Figure 2.4). 'Robot' is hence very much a catch-all term for intelligent, versatile devices rather than a term for a specific tool designed with a specific task in mind, and it is difficult to come up with a definition which includes all robots but excludes all non-robots.


Figures 2.3 and 2.4

(Sources: Figure 2.1: en.wikipedia.org/wiki/Boston_Dynamics, Figure 2.2: catch.org.uk/past-project/paro/, Figure 2.3: http://time.com/4391175/tesla-crash-autopilot-driverless-cars/, Figure 2.4: dronecenter.bard.edu/weekend-roundup-519/; Accessed: 7 May 2017)

Let us look at the different definitions of the concept 'robot' given by some authorities in the field to see what we can infer from them. According to Maja Matarić, “[a] robot is an autonomous system which exists in the physical world, can sense its environment, and can act on it to achieve some goals” (2007). This is a very concise definition, but a very profound one nonetheless. The fact that it has to exist in the physical world means that it excludes autonomously acting pieces of software with no connection to the world of matter, such as chatbots. It needs to have sensors of some sort (in order to “sense its environment”), such as cameras, microphones and/or LiDAR, and it needs to have tools to engage with the outside world (called effectors), such as wheels, tracks, mechanical arms and/or magnets. Furthermore, Maja Matarić explicitly states that the robot needs to be “autonomous,” which means that what she calls “teleoperated machines” do not count (2007). This means that she would exclude from her definition of a robot a Predator drone surveying the airspace in Afghanistan which is remotely piloted from a military base in Nevada. P.W. Singer has a similar but slightly different conceptualization:


There is considerable overlap between the definitions of Singer and Matarić, but with the important distinction that, according to Singer, the autonomy of a robot (the “think” part of the “sense-think-act paradigm”) is a continuum. As he elaborates further on in his monograph:

That a machine can make a decision on its own, with or without a human in the loop, does not define whether or not it is a robot. The relative independence of a robot is merely a feature called “autonomy.” Autonomy is measured on a sliding scale from direct human operation at the low end to what is known as “adaptive” at the high end. (Singer 2009)

Hence, according to P.W. Singer, military drones most definitely fall into the category of “robots.”


rather than only a few in depth. It is therefore only fitting that my approach to analyzing these newspapers be calibrated accordingly.

The very same difficulty of defining robots also emerges when attempting to discuss their history. The concept of humans creating artificial life and imbuing it with intelligence and a will of its own is a very ancient motif across cultures: think only of the myth of the sculptor Pygmalion in Ovid's Metamorphoses, who fell in love with one of his own sculptures, which eventually came to life, or of the Golem in Jewish folklore. Also of particular note in this respect is Mary Shelley's Frankenstein, or the Modern Prometheus. The term “robot” itself comes from fiction as well: it was coined in 1921 by the Czech playwright Karel Čapek for use in his play Rossum's Universal Robots (R.U.R.), which is about the creation of a new race of sentient but subservient beings that eventually revolts against its masters and gains world supremacy. The term “robot” is derived from the Czech words robota and robotnik, meaning “obligatory work” and “serf” respectively (Matarić 2007). However, the robots in R.U.R. were made of flesh and blood; only later were robots imagined to be made of steel. The fact that the term robot originates from fiction is poignant and also synecdochic for the fact that for most of the twentieth century people mainly experienced robots through fiction rather than through real-life encounters. The result of fiction preceding fact is that fictional representations of robots have contributed much to our preconceptions of what robots are (Kakoudaki 2014). As John Jordan asserts: “Robots are technological tools. Yet few tools have such rich mythologies supporting them” (Jordan 2016). However, robots in fiction are often designed to suit their role in the plot. As cold, unemotional entities, they serve, for instance, as the ideal villain, and narratives about robot uprisings or villainous robots abound.

These fictional constructs, though, are rather at odds with the present state of robotics, where practical questions, such as which models for arm prosthetics would be most desirable and practical or which machine learning algorithm is most efficient for letting cars drive autonomously, are more directly relevant than far-fetched scenarios of a world takeover by a robot army. As such, science fiction does not always offer suitable guidance for our immediate questions about developments in robotics (Jordan 2016). Nonetheless, the very fact that such a wide mythology exists about robots makes studying their history a wholly different matter from studying the history of, e.g., toasters or hot-air balloons. Societal expectations, hopes and fears about robots range from the utopian and messianic to the dystopian and infernal. “Deep religious concepts, including salvation, eternal life, and some state of otherworldly perfection are all informing our discussions of robots just as surely as are considerations of battery life, machine vision, and path-planning algorithms” (Jordan 2016).

With respect to the technological side of the story of robotics, too, it can be said that the construction of robot-like devices has a very long history, but here as well the definitions of what robots are, and the expectations people have of them, are rather fluid. Throughout the early-modern period, automata were created which are in many ways reminiscent of robots, without entirely being so. Particularly noteworthy is the eighteenth-century invention of The Writer, an automaton in the shape of a boy which can write with a quill on parchment, automatically refills the ink in the quill when depleted, and has eyes that move in accordance with its hands, to mimic the appearance of looking at what it is writing. Even more remarkably, it can be programmed to write any short text desired of it. It was made by the Swiss clockmaker Pierre Jaquet-Droz (1721-1790) (HowToMake 2014). However, as this device neither senses what it is doing nor thinks about its actions, it cannot be regarded as a genuine robot. The development of what are now considered to be real robots started in the 1940s with Grey Walter's autonomous biomimetic machines, which could do simple things such as moving on their own or following a light source (Matarić 2007). For most of the twentieth century, though, the development of robotics was very limited and proceeded at a snail's pace. The reason for this is that computers, necessary for letting robots interpret what they sense and act on it on their own, were for the most part too cumbersome to fit into a man-sized machine. This was the time when mainframe computers filled a whole room, after all (Matarić 2007).

Another factor which significantly retarded the pace of robotics development is known as Moravec's Paradox. To quote the famous roboticist Hans Moravec himself: “It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility” (Moravec, in Brynjolfsson & McAfee 2013). The environment within a computer is stable, predictable and safe: for most calculations a computer has to deal with, it only encounters a limited number of unknown variables, if any at all. The physical world of real life, by contrast, is messy, complex and unpredictable. Computers therefore have a lot more trouble coping with everyday activities in real life, such as folding a piece of cloth, than with abstract mathematical affairs such as playing chess. Take just the area of machine vision: we humans have no trouble at all distinguishing an apple from a tomato or a cat from a dog, as we possess highly sophisticated brains which evolution has trained to do precisely these tasks over hundreds of millions of years. For a computer, however, distinguishing physical objects on the basis of vision is highly complex and requires much effort from programmers. This is precisely why, when people are confronted for the first time with a real robot doing daily household chores, such as picking up a beer bottle from the ground, they experience the robotic technology as “underwhelming” (Jordan 2016): it does not live up to the expectations of dynamic and highly competent robots raised by science fiction. It is also precisely this which significantly hampered progress in robotics for decades, as real-life situations were too complex for most robots to deal with until quite recently. The very first setting where robots were put to commercial use for practical purposes was the factory, but even here they could only work as long as every movement was calculated in advance and input was handed to the robot in a highly predictable fashion. If the input was given to the robot only an inch off its expected position or merely half a second too late, the robot could not function properly (Jordan 2016).


months. The impact of Moore's law on computing is enormous, as it de facto means that the processing speed of CPUs has doubled every eighteen months. Given that Moore's law still applies to developments in computer science, Gordon Moore has proven himself to be remarkably prescient. The mathematical term for the kind of growth predicted by Gordon Moore is exponential growth, and the peculiarity of exponential growth is that while it might seem to proceed slowly at first, it can add up surprisingly quickly in later stages, with the rate of growth increasing more and more at each step. As both the “sense” and “think” parts of the “sense-think-act” trinity of robots depend on integrated circuits, exponential growth in microchips has boosted the growth of robotics likewise. With respect to the second of the three causes mentioned by Brynjolfsson and McAfee: thanks to the Internet, and soon the Internet of Things, there is now far more digitized data available than ever before. Digital data is, for instance, necessary for the development of driverless cars, in the form of very accurate maps of the roads the autonomous car is driving on: much more accurate than is currently possible with GPS technology, which is only accurate to the level of meters, whereas digitized geographical data for driverless cars needs to be accurate to the centimeter. Recent advances in the digitization of data have made this possible. Thirdly, recombinant innovation means that all the recent advances in technology can be combined in new and innovative ways. Technological developments often reverberate into adjacent fields in manners previously unforeseen by anyone (Brynjolfsson & McAfee 2013):

There's a thought experiment in which you're asked to take the position of an engineer in 1890 and project the amount of horse manure in New York City in 1920. Linear extrapolation produces a frightening result, which, of course, never happened: the invention of the automobile shifted the externalities of transportation, and, instead of staggering quantities of horse manure in 1920, we got suburbs, McDonalds, high-speed roadways, and dozens of other side effects, starting in 1930 and continuing on through the present. (Jordan 2016)
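The contrast between linear and exponential extrapolation drawn in this quotation can be made concrete with a small sketch. The starting value and increments below are illustrative numbers only, not figures from the thesis; the 1.5-year doubling period mirrors the eighteen-month doubling attributed to Moore's law above:

```python
def extrapolate(start, years, annual_increment=None, doubling_years=None):
    """Project a quantity forward either linearly or exponentially."""
    if doubling_years is not None:
        # Exponential: the quantity doubles once per doubling period.
        return start * 2 ** (years / doubling_years)
    # Linear: the same fixed amount is added every year.
    return start + annual_increment * years


# Starting from 1,000 units, eighteen years out:
exponential = extrapolate(1_000, 18, doubling_years=1.5)   # 1,000 * 2**12
linear = extrapolate(1_000, 18, annual_increment=500)      # 1,000 + 9,000
print(int(exponential), int(linear))  # 4096000 10000
```

After eighteen years the linear projection has grown tenfold, while the doubling process has multiplied the starting value by 2^12, which is why the early-stage similarity of the two curves is so deceptive.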


rugged terrain in the Mojave desert. There were 106 applicants in total, but only a few of these even made it beyond the start line. Carnegie Mellon's Red Team was the most successful, but their converted Humvee caught fire after seven and a half miles and got stuck in an embankment. Nobody even made it near the finish line, and DARPA did not award the prize money to anyone that year. Eighteen months later, in 2005, DARPA hosted another Grand Challenge: same track, same conditions, but now with doubled prize money. 195 teams applied to compete in the race. This time, five teams reached the finish line, with Stanford University's team, led by Sebastian Thrun, winning the race with their vehicle Stanley, completing the course in six hours and fifty-four minutes (Singer 2009).

With all these developments in robotics occurring in the blink of an eye, it is crucial for scientists and the general public alike to start a debate about the direction these developments should take and which directions had better be avoided. Within the robotics community itself, self-reflection and contemplation of the ethical aspects and the societal impact of technological breakthroughs are often felt to be lacking (Singer 2009). As a very large number of people will be affected by these developments very soon, large-scale participation of the general public is most desirable, lest people be confronted with huge negative effects of technology without having a say in what is happening to their lives. In the following two sections, I would like to map out some of the moral dilemmas of robotization in two key domains: employment (Section 2.2) and warfare (Section 2.3). I do not pretend to be comprehensive here, as that would not be feasible within the limited scope of this research project; rather, I aim to give the reader a quick overview of the complexities involved when contemplating the societal impact of robotization.

Section 2.2: Robots and Employment

Concerns about technology taking over people's jobs have been around for quite a while, starting with the Luddite uprising at the beginning of the nineteenth century, when British textile workers destroyed the mechanized weaving looms which were felt to be ousting them from employment. Among a large group of economists there is therefore significant scepticism about the propensity of technology to reduce employment opportunities: after all, why would it be different this time, now that we have heard the same argument for over two centuries? However, as Martin Ford keenly observes: “This time is always different where technology is concerned: that, after all, is the entire point of innovation” (2015).


expanding service sector, recent advances in artificial intelligence and robotics are now threatening to take away many white-collar jobs as well. Computers are now able to diagnose certain diseases like breast cancer, to write fluent and clear journalistic summaries of sports events, to trade shares and bonds on the stock market within milliseconds, and to translate text (and soon speech as well) from one language to another. The implications of these recent developments for the labor market seem massive and are likely to grow even larger as technology progresses exponentially (Ford 2016).

Another problem closely associated with this is the so-called polarization of the job market: when computers and robots take away solid middle-class jobs, only the jobs at the top and the bottom of the labor market remain. For some highly demanding jobs, humans remain better than robots and computers, and maybe they always will be: it is here that humans have specific talents which computers cannot (yet) best. These are the elite jobs which yield high salaries but of which only very few positions are available; the competition for them is therefore intense. Jobs that fall into this category include CEO, senior software developer, cybersecurity specialist, marketing strategist and top-tier positions in the cultural sector. At the other end, there are jobs which require little skill from humans but which are very hard and expensive to automate, partly because of Moravec's paradox. As these jobs often yield low wages, it is not profitable for companies to automate them. Jobs in this category include hairdresser and gardener. The disappearance of middle-class jobs might result in an hourglass-shaped labor market with either a few highly-paid elite jobs or a lot of underpaid menial jobs. Failure to land one of the top-tier jobs would then automatically mean ending up in the bottom rungs of the labor market (Ford 2016). To quote Erik Brynjolfsson and Andrew McAfee:

there's never been a better time to be a worker with special skills or the right education, because these people can use technology to create and capture value. However, there's never been a worse time to be a worker with only 'ordinary' skills and abilities to offer, because computers, robots, and other digital technologies are acquiring these skills and abilities at an extraordinary rate. (2014)


Figure 2.6: The cumulative change in labor productivity and workers' compensation in the US (Source: Ford 2015)

Figure 2.7: The contribution of labor to the national income of the US (Source: Ford 2015)


Figure 2.9: Creation of new jobs in the US per decade (Source: Ford 2015)

Figure 2.6 is a graph of the cumulative change in labor productivity and the financial remuneration laborers received for their work in the US. It shows that until the 1970s, the two were positively correlated, but that since the late 1970s productivity has increased without a similar rise in salaries for employees. Figure 2.7 shows how the contribution of labor to the US national income has decreased, from 65% in 1947 to 58% in 2014; instead, more profit seems to come from the ownership of capital (including the ownership of high-tech machinery) rather than from labor. Figure 2.8 is a graph of corporate profits as a percentage of US GDP; it shows that much of the profit gained by automation has gone into the pockets of major companies. Contrary to the trickle-down economics professed by rightwing economists, these increases in corporate profits have not resulted in the creation of new jobs. Figure 2.9 shows job creation on the US labor market per decade, relative to the preceding decade. As can be seen, this has been in decline for several decades. It might be argued that these economic tendencies towards greater income inequality and less remuneration for more productivity are caused by the US's rightwing economic policies, but these economic trends appear in by far most Western industrialized countries, regardless of the economic policy chosen, albeit more vehemently in the US due to the lack of mitigating policy measures. Even though it is exceptionally difficult to pinpoint a direct causal relationship between the economy and technological development, the start of rising income inequality does correlate with the advent of the information revolution in the late 1970s. Nonetheless, there is much debate amongst economists over the exact interpretation of these statistics, and no definitive conclusions have been reached yet (Ford 2015). One might argue that the information revolution creates just as many IT jobs as it takes away in other sectors, but this turns out not to be the case (West 2015). Major technology companies like Google and Facebook wield immense power and have a net worth of hundreds of billions of dollars, but employ only a few thousand employees (Ford 2015). In fact, the US Department of Labor even predicts that for the next few years there will be a decrease of jobs in the information sector (West 2015).

Even if we were to judge the warnings about dwindling employment due to advances in AI and robotics as premature and lacking substantive empirical evidence, the fact remains that the rapid mass-scale implementation of these technologies will radically alter business models throughout the entire economy. To take just one example: IBM has created Watson, an artificial intelligence that was able to defeat the very best human players of Jeopardy!, a famous American TV quiz. The commercial deployment of this invention could, among other things, radically alter the way call centers work. Additionally, the implementation of the driverless car will force the whole logistics and mobility sector to re-evaluate its business model, just as new advances in the construction of houses by additive manufacturing might radically upset the construction sector (Ford 2015). The mass-scale implementation of robots in society at large also brings with it all sorts of ethical dilemmas. Take for instance the implementation of affective robots for elderly care, like the PARO. These could take away some of the burden on people working in this sector, but would we feel comfortable outsourcing the giving of affection and attention to our parents and grandparents to machines? It might even make things worse, as children might no longer feel guilty about neglecting their elderly parents, since their affective needs are already being “taken care of” by the robot (Jordan 2016).

Sometimes peculiar traits of human psychology might also slow the advance of robots. Prof. Masahiro Mori has hypothesized the phenomenon of the “uncanny valley”: people are willing to accept robots when they are either very machine-like or completely indistinguishable from humans; when they look almost, yet not entirely, human, however, this leads to a psychological repulsion of the human towards the robot (Jordan 2016). Due to Moravec's paradox, it might yet take a while to create humanoid robots that are completely indistinguishable from humans. In the meantime, the adoption of humanoid robots in, for instance, schools and hospitals might be hampered by the uncanny valley.

One of the most important determinants of the impact of mass-scale robotization and the implementation of advanced AI will be the question of which skills will remain uniquely human and which can effectively be taken over by machines. Many predictions about this have already been made, but many have also been falsified by the course of history (Brynjolfsson & McAfee 2014). Nonetheless, as John Jordan observes:

The human metaphors embedded in robotics and artificial intelligence are shaping progress in this field, likely holding us back as much as they inspire. Twenty-second century AI will probably no more mimic the human brain than the airplane mimics a bird, or a wheel mimics a leg. (Jordan 2016)

If this is true, then robots should be perceived as complementing rather than replacing humans, with each excelling at its unique features. There is some evidence that points in this direction: already in 1997, IBM's Deep Blue managed to defeat the world's best human chess player, Garry Kasparov. One might therefore expect that, with the computer technology of today, almost any computer could easily best any human at chess. However, it turns out that the combination of a human chess player plus a computer is better than even the very best of chess computers today. In this particular future scenario, humans and computers working in tandem will amplify one another's strengths rather than undermining them (Brynjolfsson & McAfee 2014). This might also take place in the shape of bionic implants inserted into humans, so as to augment their abilities. However, here too there are serious ethical ramifications: if humans have for the past millennia been willing to wage war, to torture, to rape, and to enslave fellow humans over trivial differences like race or religion, what would it mean to society if one group of people were to become vastly more intelligent, strong and healthy than others by means of these bionic implants? What might seem like a utopia to some might appear like the resurrection of old fascist dreams of a Herrenvolk to others (Singer 2009).


their societal function as the most important distributor of wealth – simply because there are no longer enough of them – the state should provide a basic income to all people (West 2015, Ford 2015). The availability of more free time on people's hands could mean that people have more time for volunteer work or for the pursuit of culture and art (West 2015), or it could mean the creation of a technofeudalism with a vast underprivileged underclass versus a very small elite of highly privileged individuals, if no measures are taken to check the growing social inequality caused by robotization (Ford 2015). The disruptions caused by the exponential growth of robotics are likely to be so large and to occur so rapidly that it will be very difficult to oversee these developments reasonably, while at the same time good planning and policy are now more than ever required to put the tremendous power that these new technologies offer to good purposes.4

Section 2.3: Robots and Warfare

Another very important dimension of the robotization of society is the interrelation between robots and warfare. DARPA is the most important funding agency for robotics research, as we also saw earlier with the DARPA Grand Challenge for driverless cars. The US Department of Defense (DoD) has for decades been the single largest investor in computer technology, and many technologies that are currently being exploited by Silicon Valley companies have de facto been funded by American taxpayers. For much of the 1950s and 1960s, computers were not really economically viable for exploitation by the private sector, so many of the investments in this technology were made instead by the DoD (Winner 1992). Even though the private sector is currently also investing in IT R&D, it is estimated that approximately 80% of all research funding for artificial intelligence comes from the US DoD (Singer 2009).

The technology for robotics in warfare has been around for several decades – the US military already made experimental use of remotely piloted aircraft and torpedoes in WWI and WWII – but the robotization of warfare commenced at a serious level only quite recently. During the wars in Bosnia and Kosovo in the 1990s, the US military started to make incidental use of drones for reconnaissance and surveillance. It was in the wars in Afghanistan and Iraq, though, that they were to be deployed on a massive scale. For example, at the onset of the war in Iraq, the US had zero robots on the ground. By the end of 2004, this number was already up to 150. One year later, it was 2,400. By the end of 2006, the US military deployed over 5,000 ground robots in Iraq. At first, military units were compelled from above to make use of a robot within their operations, even though they were not really enthusiastic about using them. When the robots started to demonstrate their usefulness in action not long after their first deployment, though, they became quite popular amongst the lower ranks as well, and military units then almost started to beg to have a robot on their team too (Singer 2009).

4 For a very insightful if somewhat speculative and idiosyncratic contemplation on the societal impact of future

The two primary modes of deployment for robots in the military are drones and driverless vehicles: drones are deployed both for reconnaissance and for air-to-ground operations (so far, there has hardly been any air-to-air combat with drones), while robotic vehicles on the ground are ideal for explosive ordnance disposal, carrying luggage, or scouting dangerous terrain, for instance entering a suspicious building first, before the rest of the troops do. The Navy would also like to use robots on or underneath the surface of the sea, but the salt water of the maritime environment is detrimental to robotic technology, which has hampered significant progress in this direction so far. Several ships in the US fleet, however, have automated turrets that can automatically take out incoming missiles at a considerably quicker reaction speed than a human operator could (Singer 2009).


action there, resulting in the end of the US presence on the ground in Somalia (Singer 2009). Another important benefit of robots is that they are better at performing dull, mind-numbing routine tasks: when a human pilot has to surveil a building or a town square for suspicious activities, his or her concentration will quickly diminish, whereas a drone can be used for automated surveillance for 24 or sometimes even up to 48 hours, whilst remaining sharp and ready all the time (Jordan 2016).

The deployment of robots is unlike any previous change in equipment or weaponry, as it is a radical reconceptualization of what it means to be a warrior. In the past, war was a domain reserved for humans only; now we delegate decisions about life and death to software embodied in metal machinery. War is about more than strategy, economy, or the effective use of violence; it also encompasses a large psychological and ethical component which is very difficult to outsource to a machine. As P.W. Singer observes: "humans' 5,000-year-old monopoly over the fighting of war is over. (…) [T]he introduction of unmanned systems to the battlefield doesn't change simply how we fight, but for the first time changes who fights at the most fundamental level. It transforms the very agent of war, rather than just its capabilities. It is difficult to weigh the enormity of such a change" (2009). According to John Pike of GlobalSecurity.org: "First, you had human beings without machines. Then, you had human beings with machines. And, finally you have machines without human beings" (in Singer 2009). Or, in the words of security analyst Christopher Coker: "We now stand on the cusp of post-human history" (in Singer 2009).

Perhaps the most controversial issue currently with respect to robotics and warfare is the surrendering of complete autonomy to robots, including decisions about life and death. As of this moment, the US military does not possess robots that can go out on missions on their own, pick their own targets and then make the decision to kill them, after which they return to home base, all without any human intervention. Having said that, robot autonomy is a continuum, and there are many robotic systems which already have an advanced degree of autonomy. In the Demilitarized Zone between North and South Korea, for instance, there are robotic sentry guns from Samsung which will automatically kill any transgressor of the border, without any human to back this decision. However, the supervision of a human over a robot does not preclude the occurrence of machine errors. On July 3rd, 1988, the U.S.S. Vincennes was patrolling the Persian Gulf equipped with an Aegis radar system, which registered an approaching Iranian civilian airliner as a hostile plane and suggested to shoot the vehicle out of the sky. The command crew on board of the U.S.S. Vincennes consisted of eighteen persons, but no one chose to overrule the decision by the Aegis radar system to treat it as a hostile entity, as the crewmembers relied completely on its efficacy and had full faith in its wisdom and accuracy. As a result, they allowed the Aegis to target the airliner and take it down, with the death of 290 civilians as a consequence. So even with a human supervising a computer, the computer's reputation of efficacy, or the time pressures and stress experienced by the people operating it, might still result in such a "glitch" going unnoticed (Singer 2009).

The most important problem with completely autonomous weapon systems is that it is very difficult to point out who is responsible for the act of killing. Even with our ordinary laptops and desktops at home, bugs, glitches and computer crashes are a frequently recurring phenomenon: what if a bug in the software of a killer robot made it go on a killing spree?5 Alternatively, what if the robot deliberately chose to murder a civilian, knowing full well that it would transgress war ethics, but doing so anyway because it reasons that a higher purpose is served by this? Who should then be made to answer for this crime? The software developers of the robot, because they designed the software (deliberately or not) to be malicious? The company which manufactured and sold the robot? Or the robot itself, for making such an unethical decision on its own? But how exactly is one then to hold a robot accountable for war crimes? Put it in prison, or deliberately destroy some of its equipment as a kind of punishment? Robots, though, do not feel pain or guilt or remorse, and if they did, their main advantage of being efficient yet expendable assets that one can easily send on risky assignments would fall away (Sparrow 2007). Robert Sparrow therefore argues that autonomous military killer robots should legally be treated as child soldiers: both are morally unaccountable for their actions, both are ethically highly problematic, both can cause unchecked mayhem if let loose upon the world, and both should be outlawed by the ius in bello (2007).

However, in spite of all the moral ambiguities with respect to completely autonomous military robots, the US military is very hesitant to open up a discussion about the moral pros and cons of this topic. According to P.W. Singer: "Arming these more intelligent, more capable, and more autonomous robots is the equivalent of Lord Voldemort in the Harry Potter novels. It is the Issue-That-Must-Not-Be-Discussed" (2009). On the rare occasions that the US military is willing to discuss this topic, it focuses exclusively on the benefits of giving

5 One DARPA-funded roboticist replied to the question of accountability over malfunctioning military killer


military robots more autonomy, completely denying even the possibility that things might go wrong here. There most definitely is an ongoing trend within the military towards giving robots more rather than less autonomy. There is an inherent logic within robotics to do so, as this would weed out some of their inherent weaknesses and significantly amplify their strengths. For one, having one human operator per machine is not cost-efficient: if robots are about saving manpower, giving robots a larger degree of autonomy would be more in agreement with this goal. Secondly, the reliance of robots on instructions sent to them over radio channels makes them particularly vulnerable to opponents jamming these signals, as this would result in the robot becoming entirely inert while it awaits further instructions (Singer 2009). Do these perceived benefits outweigh the cost of the amorality that accompanies the further granting of autonomy to military robots, though? A public discussion on the benefits, disadvantages and intricacies of this subject would be more than welcome.

Even if military robots were not equipped with full-fledged autonomy, their deployment still carries with it massive implications for the logistics, strategy, economics and psychology of war. For instance, would the considerably lower price tag of warfare by robots, both in financial terms and with respect to human lives, mean that the threshold to engage in warfare becomes significantly lower as well? In the past, concerns about both money and the lives of one's own troops were a significant restraint on declaring war upon another power. Equally important is the question how the home front would respond to the deployment of one's nation's military robots in foreign countries if the general public does not need to pay a significant price for it. The wars of the first half of the twentieth century were an experience that completely immersed and thoroughly affected the whole of society: every family knew at least one soldier that was sent to the front, people were incentivized to buy war bonds to lend the government a helping hand in winning the war, and a defeat in the war would mean the destruction of one's cities, the subjugation of one's populace and the annihilation of one's political system. If war means to a people just daily updates on which "villains" got killed in some remote country, with live video feeds demonstrating the alleged efficacy of one's war equipment, as happened in the First Gulf War, the nature of war might very well be reduced to some sort of "pest control" (Singer 2009).


are more familiar with these, so that these interfaces require less training to use. Of course, it is equally true that in a war with soldiers on the ground, it is precisely the emotional tension, chaos and uncertainty of battle, frequently coupled with the recent loss of team members, that might push soldiers psychologically beyond their limits, so that they freak out and commit war crimes. It could therefore in theory be argued just as easily that a more detached manner of warfare is cleaner, more objective and more professional. However, many psychologists agree that it is generally easier to kill an anonymous, dehumanized person at a distance than to kill a living, breathing person standing right in front of you (Singer 2009, Whetham 2013).

It must not be forgotten, though, that many of the opponents being fought by US drones are more than willing to sacrifice their lives in suicide missions to become martyrs for their faith. If they perceive that their opponent is unwilling to risk lives in battle and instead outsources its warfare to robots in order to minimize all potential risks, this might be perceived as cowardice by the insurgents and inspire them even more in their fight. Alternatively, the lack of sufficient "eligible" targets on their own soil might further incentivize terrorists to seek out these targets on the soil of Western countries instead, thereby further legitimizing terrorism in their point of view (Singer 2009, Whetham 2013).


formerly might have used a semiautomatic rifle for a killing spree. As robots with the capacity for violence become ever more pervasive, there is a serious risk of what could be called "open-source warfare." As P.W. Singer observes:

War no longer involves mass numbers of citizen soldiers clashing on designated fields of battle. Nor is it being carried out exclusively by states. So, in a sense, we are witnessing the linked breakdown of two of the longest-held monopolies in war and politics. History may look back at this period as notable for the simultaneous loss of the state's roughly 400-year-old monopoly over which groups could go to war and humankind's loss of its roughly 5,000-year-old monopoly over who could fight in these wars. (2009)

Consequently, similar to the worries that the threshold for declarations of war will be lowered for nations owning armies of robots, there is much concern over whether the abundant availability of technologies that can harm many people with little effort would also facilitate violence by individuals or by political, religious or ethnic groups (Singer 2009).

The deployment of robots in war therefore has much more than merely technological ramifications; it requires a wholly different conceptualization of how to wage war. In the words of P.W. Singer:

Because they are not human, these new technologies are being used in ways that were impossible before. Because they are not human, these new technologies have capabilities that were impossible before. And, because they are not human, these new technologies are creating new dilemmas and problems, as well as complicating old ones, in a manner and pace that was impossible before (2009).


Chapter 3: Hypotheses

On the basis of the theoretical framework presented in the previous chapter, I will now formulate the following hypothetical answers to the research questions posed in the introduction:

1.) What are the most important issues about the robotization of society that are raised in the media? Which aspects of the impact of robots on people's lives receive the most attention?

I predict that robots in warfare and the impact of robots on employment have received the most attention. These were also the subjects about which the most books and articles were written by researchers interested in the societal impact of robots.

2.) What is the predominant angle from which the discussion about the robotization of society is held in the media? Is the tone generally optimistic and forward-looking, or pessimistic and distrustful towards this pervasive technological innovation?

I hypothesize that the newspaper articles will contain a blend of optimistic and anxious interpretations of the robotization of society. Throughout my reading of the theoretical works, I have found both optimism and pessimism, even though anxiety about robots seems to be the most prevalent sentiment in the books about the social impact of robotization. The books focusing on the technology of robotics, however, were generally more optimistic.

3.) Are there any significant dissimilarities in the framing of robotization by rightwing or leftwing broadsheet newspapers?

I predict that rightwing newspapers, which are generally more pro-capital and less critical of the existing power dynamics, will be more optimistic about robots, whereas I expect the leftwing newspapers to be more critical of the impact of robots on society.

4.) Have there been any significant developments in this over time? What are the most important developments in the news coverage on robotization in 2016 compared to that in 2010?


Chapter 4: Methodology I – Acquisition of the Dataset

If one is interested in analyzing the public debate about a certain topic of societal relevance by means of distant reading, several corpora are available for this purpose. One option would, for instance, be scraping online posts about the topic of interest from social media. Many social media, however, are primarily used for discussing topics of everyday life of interest to one's friends and are less commonly used for the discussion of topics of societal relevance. Twitter is the exception to this rule: of all the social media, it is probably the one most frequently used for participating in what Jürgen Habermas would call the public sphere (Habermas 1991, Dahlgren 2005, Colleoni, Rozza & Arvidsson 2014). When I did a quick preliminary investigation on Twitter, which entailed scanning the search results for robot-related keywords, however, I found a lot of "noise" amidst the data: a very sizeable part of the dataset contained tweets about science-fiction robots, people dressed up as robots, people having made drawings of robots, or people using the word "robot" in a jocular, metaphorical sense. I therefore decided that making use of Twitter to obtain a dataset about the public debate on robots would not be a prudent decision. Alternatively, I could of course also have scraped comments and articles about robots from specialized blogs and discussion forums on the Internet. The downside of this, though, is that the participants in these discussions are generally experts on the issue, so that my dataset would hardly be gauging the lay public's discussion of this topic.


Another disadvantage of LexisNexis is that it only contains the content of general newspapers, not that of popular scientific journals. It would certainly have been interesting to make a comparison between the coverage of robots in a digital dataset of general newspapers versus popular scientific journals with an interest in technology such as Wired.

I decided to compile my dataset out of newspaper articles from four British broadsheet newspapers: The Guardian, The Independent, the Financial Times and The Daily/Sunday Telegraph. I wanted to choose newspapers from a major industrialized country which was quite advanced in robotics, so that I could be assured of enough newspaper coverage and so that the newspaper articles would be covering developments happening nearby which have a direct link to the people of that country. The issue of language proficiency of the researcher also affected the choice of country: as the undersigned is not fluent in Japanese or Chinese, for instance, newspapers from these countries were not an option. Another advantage of choosing a dataset of English-language newspaper articles is that my research would be transparent to a broad, international audience, whereas this would have been less the case had I selected Dutch or German newspaper articles. Finally, the availability of the dataset was also somewhat determining in selecting the country: for all four major broadsheet newspapers of the UK, the full-text editions of the newspaper articles are available, whereas for the USA many important newspapers are either unavailable or only available as abstracts, such as The Boston Globe, The Wall Street Journal and the Chicago Tribune. One cannot blindly assume a priori that the USA and UK are in the same stage of implementation of robotics, so I did not want to mingle articles from one country with those from another. Another disadvantage of having a dataset from several countries is that generalizations about the whole dataset would obscure potential intercultural differences, which is why I thought it important to have a dataset of newspaper articles from just one country. Given these considerations, I decided that a dataset of newspaper articles from the UK would be the best choice.


newspapers, on the other hand, tend to be more comprehensive and thorough in reporting on background issues.

For the selection of newspapers, I wanted to be sure to have an even distribution of the political leanings of the newspapers. One reason for this is to ensure a diversity of viewpoints, making sure various positions on the political spectrum are included, and to limit politically-based biases in my dataset. Another important reason is that I wanted to compare leftwing versus rightwing newspapers. In the theoretical background section of this thesis, we saw how the robotization of society might have a rather sizeable impact on opportunities for employment, income inequality and the position of employers versus employees. We also read there how the robots used in warfare have a direct impact on the power balance in international relations. Due to the inherent intertwinement of robotization with power relations, it could therefore be hypothesized that leftwing newspapers, generally more critical in their coverage of the power dynamics in society, might be more critical in their coverage of robots than rightwing newspapers. At the moment, however, this is merely an a priori assumption, one which I would like to test by means of an analysis of my dataset. For these reasons, I included two leftwing newspapers (The Guardian and The Independent) and two rightwing newspapers (The Daily/Sunday Telegraph and the Financial Times) in the dataset.

In addition to a division of the dataset into articles from four different newspapers, the dataset can also be divided into newspaper articles from two years, 2010 versus 2016, so as to make possible a diachronic as well as a synchronic comparison. 2016 is the most recent year, at the moment of writing this thesis, for which data is available for the entire year, providing me with the most up-to-date newspaper coverage of developments in robotics. In order to be able to measure some sort of difference, I chose a year of comparison which was a few years before 2016. The interval of several years between 2010 and 2016 is large enough to make it likely to see some sort of development (as the progress in robotics is very rapid). However, because 2010 postdates key reference works by P.W. Singer (2009) and Maja Matarić (2007), which describe the recent exponential advances in robotics, it is still recent enough to cover modern robots, that is to say, robots after the revolutionary quickening of the pace of developments occurring roughly halfway through the first decade of the twenty-first century. Newspaper coverage of robots from e.g. the 1980s or the 1990s would be reporting on this issue before the mass implementation of robots in society.


sections in the newspaper articles where you can look for keywords, and I decided to focus on keywords in the headline. The headline summarizes the content of an article within a single sentence; the occurrence of a robot-related keyword in the headline therefore makes it relatively likely for the article to be primarily about robots, although this is of course no guarantee, as the robot-related keyword might be used metaphorically there. Looking for the occurrence of a keyword in the body of the text, however, would have resulted in the database returning every article from that newspaper and year which mentions the robot-related keyword at least once, even if only tangentially, and that would have resulted in the acquisition of a dataset with considerably more noise.

The exact choice of keywords is both a crucial and a difficult part of compiling the dataset, as I had to find a compromise between being over-restrictive in my selection, which would mean excluding relevant articles from my dataset, and being too permissive, which would result in a messy dataset with much noise that would interfere with my results. To quote a famous computer science adage: "Garbage in, garbage out." This difficulty is further complicated by the fact that there is much diversity and heterogeneity in the forms, functions and variants of robots. Different labels are applied to them by the general populace, but not in a consistent or systematic manner. Adhering to the position established in the theoretical background of this thesis, I will make use of a broad definition of robots, including robots of various degrees of autonomy, but excluding robots that have no direct effect on the physical world, such as chatbots. On the basis of these factors, I came to the decision to query the LexisNexis database for the following keywords: "robot", "robotisation", "robotization", "cyborg", "autonomous car", "self-driving car" and "drone".
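The headline filter that such a query approximates can also be reproduced locally. The following is a minimal sketch using the keyword list above; the function name and word-boundary pattern are my own illustration and not the actual LexisNexis query syntax:

```python
import re

# Keyword list from the LexisNexis query described above. The pattern
# requires a word boundary before each keyword, so plural forms such as
# "drones" or "robots" still match as prefixes of longer words.
KEYWORDS = ["robot", "robotisation", "robotization", "cyborg",
            "autonomous car", "self-driving car", "drone"]
PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(k) for k in KEYWORDS) + r")",
    re.IGNORECASE,
)

def headline_matches(headline: str) -> bool:
    """Return True if the headline contains a robot-related keyword."""
    return PATTERN.search(headline) is not None
```

Such a local filter is useful mainly as a sanity check on whatever the database's own headline search returns.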


more, sometimes the word "car" is replaced by "vehicle", as in "autonomous vehicle". The lack of standard definitions on this topic and my failure to include all variants of the keywords could potentially mean that the dataset on driverless cars is not as comprehensive as it could and should have been.

I downloaded the newspaper articles from LexisNexis as plain text files (.TXT format). In addition, I also downloaded metadata about the newspaper articles in a separate CSV file, containing an index number, the date of publication, the headline of the newspaper article, the length of the article in words, the name of the newspaper where the article was published, and the section of the newspaper in which the article appeared. I could not download all the newspaper articles simultaneously; I had to do a separate query for each year and for each newspaper. Furthermore, there is a download limit of 200 articles per query, so if for a given year a newspaper had published more than 200 relevant articles, I had to do several queries to get all the articles I was interested in. I did this by splitting the year into several months, so that I would, for instance, first download the newspaper articles for the first third of the year, then for the middle third, and finally for the last third.

The acquisition of all the newspaper articles plus metadata did not mean that I was ready for analyzing my data, though, as I still had to clean the data and prepare it for data analysis.


cat text_file.txt | sed -e '/[0-9]* of [0-9]* DOCUMENTS/,+9d' | sed 's/BODY://' | tr -s '\n' ' ' > cleaned_file.txt

The first of these commands (the part before the first pipe symbol, '|') loads the .txt file with the full text of the newspaper articles. The name "text_file.txt" is a placeholder here, as in reality I used a different .txt file each time; the output is written to a second file, since redirecting it back into the input file would truncate that file before it is read. The next command deletes the first part of the header, starting at the characteristic beginning "X of Y DOCUMENTS" plus a few lines underneath. The number of lines underneath differed from newspaper to newspaper and in some cases from year to year. This is also why I had to process one .txt file at a time, instead of conveniently processing all newspaper articles simultaneously with "*.txt". Then I removed the superfluous word "BODY:", which was not really part of the newspaper article but textual formatting added by LexisNexis, after which I removed all the newlines, so that the entire .txt file would be on a single line. Subsequently, I was planning to convert all instances of "HEADLINE:", another piece of formatting added by LexisNexis which was now the sole delimiter between separate newspaper articles, into newline symbols, so that all formatting by LexisNexis would be removed and each article would be positioned on a separate line. However, for some reason I could not accomplish this in the Linux bash shell with the command

sed 's/HEADLINE:/\n/'

My hypothesis is that the surrogate Linux I made use of, Cygwin, is not powerful enough to process a single line of text of such length. Instead, I opened the .txt file in Notepad++ and replaced "HEADLINE:" with "\n" there. At the end of this procedure, I had a couple of .txt files with one newspaper article on each separate line, without any formatting provided by LexisNexis. It was very important to remove the extra text used for formatting by LexisNexis, as the words used there recur in every article, so they would have interfered with the analysis of word frequencies. Now the full text of the articles was ready to be inserted into the CSV files with the metadata. For this purpose, I wrote a special Python script called append_data_to_csv.py (Appendix A). After having merged the data with the metadata, I did a manual check to see whether the data and metadata corresponded. This turned out not to be the case for one CSV file; apparently, I had made a mistake in the downloading process at LexisNexis, and after downloading the TXT and CSV files again and merging them a second time, the metadata and data corresponded for all files.
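The script itself is reproduced in Appendix A; purely as an illustration of the merging step it performs, a sketch could look as follows. It assumes the file layout described above (one cleaned article per line, row order matching the metadata CSV); the file paths and the column name "text" are my own choices, not necessarily those of the original script:

```python
import pandas as pd

def append_data_to_csv(txt_path: str, csv_path: str, out_path: str) -> pd.DataFrame:
    """Attach the full article texts to the metadata rows, one per line."""
    with open(txt_path, encoding="utf-8") as f:
        articles = [line.rstrip("\n") for line in f if line.strip()]
    meta = pd.read_csv(csv_path)
    # A length mismatch signals a download error like the one described
    # above, so fail loudly instead of silently misaligning rows.
    if len(meta) != len(articles):
        raise ValueError(f"{len(meta)} metadata rows vs {len(articles)} articles")
    meta["text"] = articles
    meta.to_csv(out_path, index=False)
    return meta
```

The explicit length check is what makes the kind of download mistake described above surface immediately rather than during later analysis.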


CSV files because I had to divide the query for some newspapers into several parts per year, due to the download limit of 200 articles per query. I wrote another Python script, merge_csvs_into_one.py (Appendix B), for merging the separate CSVs for one year of a newspaper into one. However, while merging these files, something I did not anticipate happened: as I merged several CSV files whose indices all started at 1, Pandas created an extra index, so that the result contained both the original per-file numbering and the new numbering for the whole CSV. Furthermore, it turned out that the original CSV file was somewhat contaminated, as it had one comma too many, resulting in one superfluous column with no contents or value. That is why I created yet another program, remove_unnecesarry_columns_in_datasets.py (Appendix C), to remove these two redundant columns (the empty column and the original indexing).
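The double-index problem can be avoided at merge time with Pandas' `concat` and `ignore_index=True`. A minimal sketch, with invented file contents standing in for the per-query CSV exports:

```python
import io
import pandas as pd

# Two per-query CSV exports, each with its own numbering starting at 1
# (contents are made up for illustration).
csv_a = "id,headline\n1,Robot builds house\n2,Drone delivers parcel\n"
csv_b = "id,headline\n1,Self-driving car tested\n"

parts = [pd.read_csv(io.StringIO(csv_a)), pd.read_csv(io.StringIO(csv_b))]

# ignore_index=True renumbers the rows, so no clashing indices survive.
merged = pd.concat(parts, ignore_index=True)

# Drop the redundant per-query numbering that came from the files themselves.
merged = merged.drop(columns=["id"])

# index=False keeps to_csv from writing yet another index column on export.
out = merged.to_csv(index=False)
```

Writing with `index=False` is what prevents the exported file from accumulating an extra index column on every merge round.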

For the previous three programs, as well as throughout much of this research project, I made use of the Python library Pandas. Pandas is a highly versatile tool for data analysis which brings to Python much of the functionality offered by other data-analysis languages such as R or SQL. I was very concerned about the commas and quotation marks in the headlines and bodies of the articles, as most CSV readers interpret these as special characters that affect the division of the columns. Pandas, however, knows when to interpret commas and quotation marks as column delimiters and when not. As a corollary of my procedure, though, the CSV files of my dataset can only be read reliably by the CSV reader of the Pandas library, not by simpler CSV readers that do not fully implement these quoting conventions.
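This quoting behaviour can be demonstrated with a small round trip; the headline and newspaper name below are invented:

```python
import io
import pandas as pd

# A headline containing both a comma and quotation marks (invented example).
df = pd.DataFrame({
    "headline": ['Robots, "friends" or foes?'],
    "newspaper": ["The Guardian"],
})

# Pandas quotes the problematic field on the way out ...
dumped = df.to_csv(index=False)

# ... and un-quotes it on the way back in, so no columns shift.
restored = pd.read_csv(io.StringIO(dumped))
```

On export, Pandas wraps the field in quotation marks and doubles the embedded quotes, so the internal comma is no longer read as a column delimiter.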


flexibility: if there are errors in the relevance detection, one merely has to change the value of that particular field. Copying and pasting rows from other CSV files would have been much more cumbersome.
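Such a relevance field can be sketched as a boolean column in Pandas. The headlines and section names below are hypothetical, and the list of "suspect" sections simply illustrates the culture/film/sport heuristic used in this chapter:

```python
import pandas as pd

# Hypothetical dataset with a section column per article.
df = pd.DataFrame({
    "headline": [
        "Robot arm speeds up car factory",
        "Review: new sci-fi film about robots",
        "Robot referee trialled at football match",
    ],
    "section": ["Business", "Film", "Sport"],
})

# Sections where fictional or metaphorical robots are likely (made up here).
SUSPECT_SECTIONS = {"Film", "Television", "Culture", "Sport"}

# Initial automatic tagging: articles in suspect sections are irrelevant.
df["relevant"] = ~df["section"].isin(SUSPECT_SECTIONS)

# A manual correction is a single assignment: here, the Sport article
# turns out to describe an actual robot after all.
df.loc[2, "relevant"] = True
```

Correcting a misclassification is thus one field update rather than moving rows between files.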

The most effective way of separating relevant from irrelevant articles is to judge all the articles manually on a case-by-case basis, either by reading the entire article or by merely reading all the headlines. However, as the dataset is rather large, this method is quite time-consuming; moreover, reading each and every article by hand would defeat the very purpose of distant reading, which entails the algorithmic analysis of certain micropatterns in a large corpus of texts rather than a close reading of a relatively small text (Moretti 2013). Alternatively, one could have outsourced the manual reading and classification of all the articles to platforms such as CrowdFlower or Amazon Mechanical Turk, but since the newspaper articles are copyrighted material, this was not an option either. Hence, I decided on another method: I inventoried all the sections into which the newspaper articles are divided, zoomed in on sections where I suspected I would find a lot of noise, and then determined which sections, or else which keywords, to tag as irrelevant. Before conducting the classification in this manner, I did a quick manual scan of about two hundred newspaper headlines to check which articles were most likely to be irrelevant. The vast majority of the articles turned out to be about actual robots in real life, but there were also a few articles about fictional robots, or instances where robots are used metaphorically, for instance as a nickname for an American baseball player. On the basis of this, I decided to look in particular at sections about culture, television, film and sports, as I was most likely to find instances of fictional or metaphorical robots there. I applied this method to the articles of the four British newspapers in 2016 and for The Guardian in 2010. The number of articles for the other newspapers in 2010 was so small,


Table 4.1: Overview of different sections per newspaper

Financial Times | Guardian 2010 | Guardian 2016 | Independent | Telegraph
Letters | Financial | Technology | Features | News
Leader | Comment | Art | MMA | Sport
Comment | International | Television | Gaming | Business
Companies | Home | World | Americas | Letters
World | Review | US | News | Features
Front | The Guide | Business | Photography | Review
Work | Features | Kia | Tech | Technology
National | Leader | Opinion | Editorials | Ultra Travel
Arts | Sport | Film | International | Your Money
UK | Film | Media | Europe | China
Talking Tech | Education | Sport | Home | Travel
Book Review | Politics | Business | Cars
Personal Technology
Fashion | Middle East | Telegraph
Analysis | Children | People | Careers
Small Talk | Music | Science | Gardening
Companies & Markets
Education | Reviews | Editorial
Lombard UK Asia The Connected Business Environment Africa Life Voices Science UK Global News Crime Society Australasia
Football | Olympics
Books | Features | Marketing | Obituaries
Side Hustle | Property
Travel | TV
Sustainable | Motoring | Culture | Films
