
How to address the social and economic consequences of automation-driven technological unemployment


Academic year: 2021



How to Address the Social and Economic Consequences of

Automation Driven Technological Unemployment

Tim Hosters S4323866

Word count: 22.158

Thesis Submitted in Partial Fulfilment of the Requirements for the Degree of Master in Political Science (MSc)

Master Political Theory

Supervisor: dr. B.R. van Leeuwen
Nijmegen School of Management
Radboud University, Nijmegen, The Netherlands
28-06-2020

Image by John Danaher via https://philosophicaldisquisitions.blogspot.com/2015/11/will-technological-unemployment-lead-to.html


Contents

Introduction
Chapter 1: The Limits of Artificial Intelligence
    The Turing Test
    The Chinese room
    The Intentional Stance Theory
Chapter 2: Will Automation Lead to Technological Unemployment?
    A History of Automation
    Is This Time Different?
    Addressing Scepticism
Chapter 3: The Social and Economic Consequences of Mass Technological Unemployment
    The Economic Consequences
    The Social Consequences
Chapter 4: A Universal Basic Income as Solution to Automation Driven Technological Unemployment
    What is Universal Basic Income?
    Would a UBI address the issues caused by Technological Unemployment?
    So would UBI be a good solution?
Conclusion


Abstract

This paper discusses whether a universal basic income (UBI) is adequate as a solution to the problems caused by possible mass unemployment following further automation. In their 2013 study, Carl Benedikt Frey and Michael A. Osborne predict that about 47% of employment in the US and other industrialised economies is at high risk of automation. It does not take a lot of imagination to predict that the economic consequences of 47% of people being long-term or even permanently unemployed would be disastrous. It would leave millions of people without an income to support themselves, forcing them to rely on doubtlessly strained welfare systems. On top of that, if almost half of the population were to lose access to a stable income, mass-market industries would start to fail rapidly. And for many people their job has important non-monetary value: their work is a source of meaning and self-esteem, and loss of employment can often have strong negative effects on people's mental wellbeing. While it is easy to see how a UBI would solve the economic problems that result from people's loss of a stable income, it is less clear how it could address this more social dimension of mass unemployment. A life of sitting at home, lazily living off free money provided by the government, might sound appealing to some, but for many the lack of purpose and meaning, and the feeling of being unable to contribute, would wear them down psychologically. This paper argues that a UBI, if applied properly, would not lead to such a future. The opposite, in fact: instead of leaving people to languish away with nothing to do, a UBI would set people free to contribute to their communities in ways that are meaningful to them.

Keywords: automation, technological unemployment, universal basic income

Introduction

There are two horses in the early 1900s talking to each other about all the new technological creations that humans keep coming up with. One of them starts discussing the automobile, and is worried that this new mechanical muscle will start replacing horses, leaving them without jobs. The other tells him not to worry, and reminds him that all the other human technologies until now haven't managed to replace horses, and in fact have made their lives easier. He reminds his friend that back in the day, horses had much harder jobs. Back then, horses had to pull heavy ploughs in the fields and carry messages to far-off places, sometimes even dying from exhaustion in the process, not to mention riding into deadly battle during wars. Sure, all of those jobs have now been taken over by machines, but those jobs were all terrible, and these new jobs of driving human carriages around in cities are much more pleasant. And even if these automobiles become popular, there will be new, likely better jobs for horses in the future that the two of them can't even imagine.

Of course this prediction did not come to pass, and today genuine labour for horses is almost non-existent, and the horse population itself has dwindled as a result (Kilby, 2007). Today automation technologies seem to be coming for human labour, and many experts echo the sentiments of the sceptical horse.

Worries about machines replacing human workers are certainly nothing new. We can see examples as far back as 1811, when the English Luddites rebelled against automation in the textile industry. And in every large wave of automation since then, there have been people warning against the harms that automation would entail for our society. Yet our society is still standing, and those who would once have found employment on farms or in factories now have other, perhaps even better, employment opportunities. It is therefore no surprise that many economists and theorists are little concerned by the coming of further automation technologies. Many of them are staunch believers in the creative nature of capitalism, arguing that while automation and other technological advancements might cost some people their jobs in the short term, a functioning capitalist economy will create other, often better opportunities for them. The jobs that are lost as a result of automation will often be low-skilled, low-paying jobs on the less desirable end of the job market. Far from being a disaster, automation will free these workers up to pursue further training and education, and as a result land more desirable and higher-paying jobs instead.

However, others are wary of the possibility that the future increase in automation could result in a large degree of structural and even permanent unemployment, as workers will be unable to keep up with technological advancements. In his 1930 essay “Economic Possibilities for our Grandchildren” Keynes referred to this phenomenon as “technological unemployment”, which he defined as “unemployment due to our discovery of means of economizing the use of labour outrunning the pace at which we can find new uses for labour”. Erik Brynjolfsson and Andrew McAfee argue in The Second Machine Age (2014) that while both conventional economic theory and a large amount of historical data support the argument that technology cannot create ongoing structural unemployment, there are also good reasons to believe that technological unemployment is likely to have a significant impact on the job market in the near future. And it is far from certain that the creative nature of capitalism will be able to provide these ‘freed up’ workers with better jobs, even if they pursue further training and education. First of all, it is uncertain how long the majority of workers can keep climbing the skill ladder before they reach their own limits. Can any person do any job, given the necessary amount of training and education? Or does every person have certain natural skills in a certain area? What is someone to do if the industry in which their strengths lie has been fully automated, their specific skill set making them not just unemployed, but effectively unemployable? But even if it turns out that we are able to teach any person to do any kind of job, there is a second problem. The labour market has always been pyramid-shaped, with more lower-skilled, more routine jobs at the base, and fewer higher-skilled, often creative jobs at the top.
If the coming wave of automation were to effectively replace all the human workers at the base of the pyramid, it is far from certain that there is enough room at the top for all of the people who used to occupy that base. And as even high-skilled and better paid positions start falling victim to automation, the space at the top itself will start to shrink.

Those who believe that technological unemployment poses a real problem are concerned with the impact on economic justice (Martin Ford 2016; Brynjolfsson & McAfee 2014), or more specifically the inequality that can come about as a result of technological unemployment. Automation has the potential of heavily favouring owners of capital (in this case specifically, the owners of automated machinery), who will reap the benefits of the greater productivity that these machines bring at lower costs. At the same time, those workers who have been displaced as a result of automation will have their income significantly reduced. And if they are truly a victim of technological unemployment, meaning that there is no longer any need for their skills in the economy, they will have little means of providing for themselves or their families.

But this also has wider implications for the economy. Our current system of consumer capitalism relies on people actually buying the products that are being produced. And as Martin Ford (2016) rightly points out, workers are also consumers. A worker who cannot find a job due to technological unemployment will have a hard time consuming anything but the bare necessities (and sometimes they might not even be able to afford those). If the people hit by technological unemployment remain relatively few, this is regrettable for them personally, but it would not have a significant impact on the wider economy. But some believe the impact on employment will be much more severe: Carl Benedikt Frey and Michael A. Osborne (2013) at the University of Oxford go so far as to predict that nearly half of US employment will become vulnerable to technological unemployment in the near future. Our current consumer economy is hardly equipped to deal with these numbers.

Therefore, one of the most frequently discussed measures to deal with massive labour disenfranchisement as a result of technological unemployment is to get purchasing power back into the hands of consumers. Automation technology will almost certainly create a large amount of wealth; to redistribute some of the wealth created by these new machines across society would be one way to at least partially address the hardship that results from heavy use of automation. It would also allow those disenfranchised by technological unemployment to remain consumers and keep the economy going. Martin Ford (2016), among others, proposes a form of universal basic income, while Brynjolfsson and McAfee (2014) prefer a negative income tax, believing it addresses some shortcomings with basic income, but many of these proposals boil down to the same principle: give people money.
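The family resemblance between these proposals can be sketched numerically. The grant, threshold, and tax rates below are illustrative assumptions of my own, not figures taken from Ford or from Brynjolfsson and McAfee:

```python
def ubi_net_income(earned: float, grant: float = 12_000, tax_rate: float = 0.30) -> float:
    """Universal basic income: everyone receives the grant unconditionally,
    while earned income is taxed at a flat rate to help fund it."""
    return grant + earned * (1 - tax_rate)

def nit_net_income(earned: float, threshold: float = 40_000, rate: float = 0.30) -> float:
    """Negative income tax: below the threshold the state pays out a fraction
    of the shortfall; above it, an ordinary (here flat) tax applies."""
    if earned < threshold:
        return earned + rate * (threshold - earned)
    return earned - rate * (earned - threshold)

# With these hypothetical parameters, someone with no earnings ends up with
# 12,000 under the UBI and 0.30 * 40,000 = 12,000 under the NIT: the income
# floors coincide, which is why both proposals boil down to "give people money".
```

The two schemes differ mainly in administration (an unconditional payment versus a tax credit that phases out with earnings), not in the income floor they can guarantee.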

But one of the problems that solutions like these do not address is that a person's work is often more than a means to make money; it typically forms an important part of their identity. For many people their job gives them the sense that they contribute to society and their local community, and this often makes it an important factor in their feeling of self-worth (Schweiger, 2014). What kind of future are we offering the people hit by technological unemployment by giving them a guaranteed income? A life of sitting on the couch playing video games, while receiving a monthly payment so they can provide for themselves and keep the consumer economy going? Some could consider such a semi-luxurious lifestyle greatly desirable, but I suspect that many would feel that such an existence lacks a sense of purpose. So we must also consider ways to provide the technologically unemployed not only with resources to maintain a somewhat comfortable lifestyle, but also with avenues through which they can contribute to society or give meaning to their lives. This paper is an attempt to take a closer look at these concerns, and asks the question: given the real possibility of structural and increasing technological unemployment, is a universal basic income the best way to deal with this threat?

It is important that we answer this question, and others like it, because the next wave of automation is fast approaching, and in some areas is already happening. The emergence of the self-driving car is just one of many examples of developments that are ongoing and sure to have far-reaching consequences. This invention will undoubtedly put many taxi drivers and truck drivers out of business, and will likely affect anyone currently employed in any kind of transport-related industry. Will all of those who lose their jobs as a result of this new technology be able to find new and stable jobs, or will they fall prey to technological unemployment? And how do we deal with it if these people (as well as, undoubtedly, people from many other industries that face automation) cannot find new employment because they are outpaced by technology? It is vitally important to society that we find suitable answers to these questions before the scenario of technological unemployment plays out.

This paper is divided into three sections. In the first section I will discuss the limits and capacities of Artificial Intelligence. Here I will try to figure out whether AI could ever perform all tasks as well as, or better than, human minds. After all, if AI is only capable of competing with humans in a limited number of jobs, we may not have to worry about such machines replacing human workers on a large scale. There are several divided stances on this question. Alan Turing, one of the pioneers of Artificial Intelligence research, argues that AI will likely one day match humans in mental capacity, and has tried to develop a test to determine whether a machine has reached said level of mental capacity. John Searle however, in his famous Chinese room argument, says that A.I. programming (at least as it exists now) is of a completely different category and is not comparable to human-like intelligence.

The second section will discuss how AI-driven automation has affected the labour market in different sectors, and to what extent the labour market is under threat of further automation in the near future. Here I will also address several sceptical arguments which dispute the severity of the impact of AI-driven automation. This section also covers the effect that automation could have on the people affected, and how it will impact the distribution of wealth across society.

The final section will discuss how we could best deal with the results of large-scale technological unemployment. The solution proposed in this paper is a universal basic income as a way to distribute wealth across society. This would ensure that all people would still have a livelihood, even if they are unable to find employment due to automation. However, it does not replace the role that work plays in providing a sense of purpose and meaning in many people’s lives. In this part I will also discuss how this more existential challenge can be addressed.


Chapter 1: The Limits of Artificial Intelligence

Not everyone believes that automation is likely to cause serious problems for human employment. For some this is because they believe that artificial intelligence is fundamentally different from human intelligence, and by extension, that AI will never be able to match or surpass humans in the ability to perform certain tasks, often because these capacities are thought to be essentially human. It isn't always clear which specific capacities are exclusive to humans. And since the capacities of AI are rapidly advancing, there seems to be an ever moving (or retreating) goalpost of human-exclusive skills. For these reasons I will not directly address claims about specific tasks or forms of labour that machines will supposedly be forever incapable of performing. Instead I will attempt to go right to the common source of these arguments, and analyse the differences and similarities between AI and the human mind. There are already some prominent theories that discuss whether or not A.I. could possess these qualities. Perhaps the two most important of these, each roughly covering one side of the argument, are the Turing test, by Alan Turing, who argues that A.I. will likely one day be capable of conscious thought and has devised a method to test for it, and the Chinese room argument by John Searle, who argues that A.I. programming as it exists now is of a different category entirely from human-like intelligence and will not be capable of developing true minds. Before we can truly decide to what extent AI is comparable to the human mind, we must also delve into the concept of minds themselves and establish exactly what they are. Daniel Dennett offers a good look into the nature of consciousness, and gives a good starting point for the question of whether the concept of mind is applicable to A.I.

The Turing Test

Alan Mathison Turing (1912-1954) was a British computer scientist and one of the most important thinkers in the field of artificial intelligence (A.I.). His theories are the foundation for many modern A.I. theories and have been at the centre of the debate surrounding A.I. cognition. His theory of what we today call the Turing test is one of the first and most influential modern theories trying to explain the possibility of artificial minds. The Turing test was presented as a method for determining whether a machine could think. The theory first appeared in Turing's paper 'Computing Machinery and Intelligence' (1950), with the central question 'can machines think?'

Before we can discuss Turing's theory on thinking machines, we must first define the type of machine that Turing was talking about. For in the broad sense we humans are ourselves thinking machines, but this is obviously not the kind of machine Turing has in mind. He therefore tries to establish conditions that exclude humans but include machines such as computers and robots. He starts off by suggesting three criteria that must be met for something to be considered a machine. These criteria pertain to how the machine can be constructed. First, he wants to cast the net as wide as possible by suggesting that every kind of engineering technique can be used to make the machine. Second, the engineers don't have to be able to describe how the machine operates; this is because we want to allow them to use experimental methods, even if they themselves are not completely sure how it works. Third, he states that all human beings born in the usual manner should be excluded. But he then rejects these conditions, because together they would allow a cloned human to be considered a machine. For this reason, he abandons the condition that every kind of engineering technique should be allowed. Because of the difficulty of setting a strict demarcation between humans and machines, Turing decides it is best to focus on one specific type of machine, in this case the digital computer.

Turing defines a digital computer as a discrete-state machine. This means that the machine jumps from one definite internal state to another depending on the input. In theory, if one knows what state a discrete-state machine is in, one should be able to predict all its future states based on the input it receives. What makes a digital computer unique is its ability to mimic any other discrete-state machine. This is why Turing considers it a good candidate when considering whether machines can think.
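Turing's notion of a discrete-state machine can be illustrated with a small sketch. The states and inputs below (a light switch responding to button presses) are my own hypothetical example, not Turing's:

```python
# A discrete-state machine: the next state is fully determined by the current
# state and the input, so all future states can be predicted from the starting
# state and the input sequence alone.
TRANSITIONS = {
    ("off", "press"): "on",
    ("on", "press"): "off",
    ("off", "timeout"): "off",
    ("on", "timeout"): "off",
}

def run(state: str, inputs: list) -> str:
    """Step through the machine, one input at a time."""
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state

# Knowing the machine starts in "off", we can predict with certainty that the
# sequence press -> timeout -> press leaves it in "on".
```

A digital computer, on Turing's view, is special because it can be programmed to mimic the transition table of any other discrete-state machine.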

The Turing test consists of what Turing himself called the imitation game. In this game there are three players: two humans and one computer. One of the humans takes the role of interrogator: it is their job to figure out which of the other two players is the computer, and which is the fellow human. The goal for the computer will be to fool the interrogator into falsely identifying it as a human, while the second human player will try to help the interrogator make the correct distinction. In order to exclude factors that Turing deems irrelevant, such as appearance and tone of voice, the players will not be able to see each other and all communication will be text based.

If the machine is capable of consistently fooling the interrogator into believing that it is actually a human, Turing says that we have good reasons for believing that this machine is capable of thought. It is not an uncommon belief among philosophers that the ability to use language in the way humans do is exclusive to thinking things. Descartes also seemed to allude to this belief in his ‘Discourse on the Method’ (Lafleur, 1960). For Descartes the ability of humans to use language to express our thoughts and beliefs is one of our most defining features. Descartes believed that, while we could probably construct a machine that is capable of mimicking the sounds of human words, it would never be able to arrange those sounds to form a meaningful reply to anything that was said in its presence. The reason for this would be that, according to Descartes, human reason is a universal instrument, while machines would only ever be capable of performing the limited set of actions that they are designed for. So even though Descartes believed that machines will never be capable of rational thought, he would seem to agree that the Turing test is an appropriate method for testing this hypothesis.

The next question is what Turing claims his test can do for us, and what kind of information it provides. Does Turing mean to say that only entities that can pass the Turing test can possess intelligence (making it a necessary condition)? Is it so that all entities that show reliable success in the Turing test must be intelligent (making it a sufficient condition)? Or is there a different kind of information that it can provide us with?

Let’s first discuss whether the Turing test gives us a necessary condition for possessing intelligence and consciousness. If it does, that would mean that only those entities that are capable of passing the Turing test could be considered intelligent. Personally, I believe we can safely dismiss this possibility, and it seems unlikely that Turing meant for his idea to be interpreted in this way. For it is not hard to imagine intelligent beings that, for one reason or another, would have trouble with the Turing test. There are plenty of examples of non-human animal species that, while by many definitions not as intelligent as humans, are still attributed at least some level of intelligence. And even though we do consider these animals to possess intelligence, they would be unable to pass the Turing test.

Is it a sufficient condition then? In other words, does reliable success in the Turing test guarantee the presence of mind and intelligence? There are theorists who seem convinced that the Turing test is supposed to give us a sufficient condition for intelligence, meaning they suppose the Turing test claims that it is either logically or practically impossible for something that lacks intelligence to succeed in it. Under this premise some of these theorists give reasons against accepting the Turing test as a valid method for establishing the presence of intelligence. For example, Ned Block (1981) argues that it would be possible, at least in theory, for something that clearly lacks intelligence to successfully pass the Turing test. A similar counterargument to the Turing test has been put forth by John Searle in his thought experiment ‘the Chinese room’. This counterargument will be discussed more in depth later in this chapter.

However, I would argue that the Turing test is most likely meant to give us neither necessary nor sufficient conditions, but rather to provide probabilistic support for the presence of a human-like mind and intelligence. The idea that passing the Turing test serves as a probability raiser is supported by Turing's own predictions regarding its results. Turing predicted that around fifty years after the publication of the article (fifty years after 1950 being the year 2000) A.I. computer science would have advanced to the extent that the interrogator in the Turing test would have no more than a 70% chance of correctly identifying the computer. He believed that at this point one could speak of thinking machines without the expectation of being contradicted. Though it is evident that his predictions about the year 2000 have not come true, this does paint a probabilistic picture of Turing's theory and its claims: Turing believed that if we have only a 70% chance of correctly distinguishing robots from humans, we will have good reasons for ascribing thought to robots.

The Chinese room

One of the most important opponents of the Turing test as a way of confirming the presence of intelligent minds is John Searle, with his thought experiment known as the Chinese room argument. This argument originates from Searle's article 'Minds, Brains and Programs' (1981). He argues that an A.I. program can, by itself, never be sufficient for thinking, and that A.I. research does not contribute to a further understanding of the human mind.

To illustrate his argument Searle uses his now famous thought experiment, known as the Chinese room. In this thought experiment, Searle imagines himself locked in a room. In this room he has two batches of Chinese writing, one batch labelled ‘題’ (meaning ‘questions’), the other labelled ‘答案’ (‘answers’) (note: all Chinese translations in this article are courtesy of Google Translate and have not yet been checked by a native Chinese speaker), and a set of rules in English. (Technically speaking “Chinese” isn’t a language, and it would be more accurate to refer to it as Mandarin. However, since Searle’s thought experiment is called “the Chinese room”, and Searle himself referred to the language as Chinese, I will do the same.) The rules give him instructions and tell him how to correlate a set of Chinese symbols from one batch with a set of Chinese symbols from the other. For example, the instructions tell him that if he is given a collection of Chinese symbols saying ‘什麼是中國的首都’ (‘What is China’s capital?’) he should respond by writing (though from his perspective it can perhaps better be described as ‘drawing’) ‘北京’ (‘Beijing’). The catch is of course that Searle cannot read or write a single word of Chinese; he is not even aware that these symbols are meant to spell out questions and answers. To him ‘Chinese writing is just so many meaningless squiggles’ (1981, p. 3).

Outside of this room (this room being the titular Chinese room) are native Chinese speakers who write down questions in Chinese on pieces of paper and slide them into a small opening in the Chinese room, expecting a coherent response. When Searle takes these pieces of paper, he looks up the symbols in the ‘題’ batch and, following the instructions, seeks the proper response in the ‘答案’ batch. When he then draws the corresponding symbols that form the answer on another piece of paper, he slides it through the opening, handing it to the native Chinese speakers. Now, to them the output of the Chinese room is indistinguishable from answers that would be given by another native speaker, leading them to believe that the person/mechanism inside the room must have a proper understanding of Chinese. But of course, there is no understanding going on in the Chinese room, for Searle can’t read or write Chinese. For him, these Chinese characters could have been replaced with any kind of random squiggles, and from his perspective the process would remain unchanged; Searle is merely engaging in meaningless symbol manipulation.

This thought experiment demonstrates that just because two different systems have the same input and output, this does not mean that they work through the same internal process. And the point that Searle aims to make is that the principles that govern a computer program cannot be considered sufficient for understanding, because the same process can be replicated inside of the Chinese room, where there clearly isn’t any understanding present. Pure symbol manipulation can by itself never be sufficient for understanding Chinese, or anything else for that matter.
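Searle's point can be made concrete by sketching the room as a lookup table. The single rule below is an illustrative stand-in for Searle's batches of symbols; nothing in the program represents what the symbols mean:

```python
# The rule book: pure shape-matching from one string of symbols to another.
RULE_BOOK = {
    "什麼是中國的首都": "北京",  # "What is China's capital?" -> "Beijing"
}

def chinese_room(question: str) -> str:
    """Return the answer the rules pair with the question's symbols.
    The lookup never consults meaning, only the shapes of the symbols."""
    return RULE_BOOK.get(question, "我不知道")  # fallback: "I don't know"
```

To an observer outside the room the output can look like understanding, yet replacing every Chinese character with an arbitrary squiggle would leave the program's behaviour unchanged, which is exactly Searle's point about symbol manipulation.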

What then is the difference between humans and computer programs? Why is one capable of understanding and intelligence, while the other can only engage in meaningless symbol manipulation? Searle argues that humans have minds because of the causal powers of the brain, which in turn are the result of the brain's physical properties. Brains do not produce understanding because of the programs that are installed on them. Searle argues that current AI is about programs, not about machines. It is not concerned with the physical properties of machines, but instead with the symbol manipulation program installed on them. And it is only the physical properties of the machine that could potentially produce understanding, as is the case with brains.

The Intentional Stance Theory

I will finish the cognition part of this chapter with Daniel Dennett's intentional stance theory. The intentional stance theory gives us a new explanation of mind and intelligence that is different from the Turing test, but does not necessarily oppose it. The theory also manages to support some parts of the Chinese room argument while opposing others. Dennett agrees with Searle that it is the physical machine and its characteristics that determine whether or not it is conscious, but Dennett's theory also puts computers and brains on the same continuous scale, where the difference is one of degree, while Searle argued they belonged to completely different categories.

Dennett begins his 1981 article ‘True Believers: The Intentional Strategy and Why it Works’ by positioning himself in a centre position within a debate about the nature of beliefs, with a ‘realist’ position on the one hand and an ‘interpretationist’ position on the other. For realists the question of whether someone has a particular belief is a matter of objective fact about the brain. To them having a belief must in the end come down to the brain being in a particular physical state. So if I believe there is milk in my refrigerator, someone could deduce this by dissecting my brain and inspecting a particular spot, verifying the presence of this belief in that way. For interpretationists, whether someone has a particular belief is a matter of interpretation. It is a question similar to whether that person is fashionable or a good friend: it depends on your point of view.

For many, including both realists and interpretationists, these positions are mutually exclusive and exhaustive, yet in a way Dennett tries to combine them in his Intentional stance theory. To Dennett if a person has a particular belief this is a perfectly objective fact about them (realism), but this fact can only be discovered by adopting a certain point of view, or in this case, a stance (interpretationism).

In Dennett’s theory a stance is a strategy for predicting the behaviour of a system. Here the concepts of both behaviour and system are defined very broadly. For example, a windmill could be considered a system and the rotation of its blades could be described as its behaviour. In Dennett’s theory, a system can partly be defined by the predictive strategies that can be used to reliably predict and explain its behaviour.

Dennett describes three prediction strategies in his theory: the physical stance, the design stance and the intentional stance. The physical stance is a strategy that predicts the behaviour of a system via our knowledge of the laws of physics. To use the physical stance you must first determine the physical makeup of the system whose behaviour you are trying to predict. Then you regard the system's interactions with the outside world and use the laws of physics to predict an outcome. How deep down this goes depends on the nature of the system and the behaviour you are trying to predict. You can use this strategy to predict that water will boil at a hundred degrees Celsius, in which case you only need some elementary knowledge of the physical world. But if you are trying to predict the outcome of an intricate physics experiment, you might have to describe the system and its interactions down to the molecular or even sub-atomic level. Dennett posits that in principle all behaviour of all systems can be predicted through the use of the physical stance; some determinist thinkers, such as Laplace, would theorise that even the behaviour of the universe itself can be predicted in this way. As such, all systems, including the universe itself, can be considered physical systems. However, this holds only in principle: because it can require a great deal of knowledge to use the physical stance properly, it is sometimes more practical to use other strategies, even if those other strategies are less accurate.

The predictions that you can make with the design stance are often less accurate than ones made using the physical stance. Also, because not every system we encounter has been intentionally designed, the reasoning behind the design stance is not always applicable. However, the design stance is often much easier and more intuitive to use, even if the assumptions behind it are technically incorrect. To use this strategy, you must assume that a system is designed to perform a certain function and predict that it will behave as it is designed to behave in this particular situation. This allows people who aren’t necessarily familiar with the inner workings of a system to still reliably predict its behaviour. Most people, for example, have very little knowledge of the technology inside their smartphones or computers, but they can still effectively use these devices because they know what they are designed for. Naturally, the drawback is that it only works on designed systems, though depending on your definition of ‘design’ it can also be applied to systems that were not deliberately designed, such as the organs in a human body (you could say they are ‘designed’ by evolution, or depending on your beliefs, by a god). Even then, only designed behaviour can be predicted. If you want to predict the behaviour of your smartphone while it is submerged in liquid helium, you will have to fall back on the physical stance.

The intentional stance is probably the most used strategy of the three: it is the strategy used in social interaction, as well as the strategy used to predict the behaviour of non-human animals. In the past (and to an extent in the present), many have applied this strategy to the natural world, claiming for instance that a natural disaster was the result of an angry god. To use it, you have to treat the system you are observing as a rational agent with beliefs and desires. First, you attribute beliefs to the system, primarily by figuring out what beliefs, from your perspective, it ought to have. If you see me browsing the internet on my smartphone, it is a pretty safe assumption that I believe there to be a smartphone in my hand. Then you figure out what desires, from your perspective, the system is most likely to have. In humans this will often include the basics, such as survival, absence of pain, food, comfort, procreation, entertainment, etc. In general, we ascribe to a system desires for those things it believes to be good for it.

As previously said, this strategy is useful in predicting the behaviour of humans and non-human animals. But it can also be used to predict the behaviour of systems that in themselves clearly lack intentionality. One might for example look at a simple thermostat and ascribe to it the ‘belief’ that the room is too cold, and the ‘desire’ to turn the boiler on in order to make it warmer. Thermostats and other such simple machines obviously won’t be passing the Turing test any time soon, but it can still sometimes be a useful shorthand to pretend that they have beliefs and desires in order to predict their behaviour.

But what method do we have then to distinguish systems that really do have this intelligence and intentionality on the one hand from systems for which it can be useful to pretend like they have it on the other? It could be that a system is truly an intentional system if it displays behaviour that can only be predicted through the intentional stance. But are the differences between the different stances not merely the result of human epistemic fallibility? Perhaps a far more intelligent being would be able to easily predict the behaviour of humans and other complex systems by simply using the physical stance, which would in turn make the intentional stance irrelevant to them.

Dennett discusses this idea in a scenario involving highly intelligent aliens that are able to use the physical stance to predict, not just the behaviour of humans, but, in truly Laplacian fashion, that of the entire universe. In this scenario an alien subjects a human to a sort of Turing test, this time in the form of a prediction contest, to see whether the human is an intentional system. Here the alien and the human both observe the following scene.

The telephone rings in Mrs. Gardner's kitchen. She answers, and this is what she says: "Oh, hello dear. You're coming home early? Within the hour? And bringing the boss to dinner? Pick up a bottle of wine on the way home, then, and drive carefully." (p. 562)

Using the intentional stance, the human predicts that within a specified amount of time a type of mobile metal container will arrive at the house, that two humans will come out, one of them carrying a glass bottle containing an alcoholic fluid, and that they will walk to the door. Using the physical stance, the alien comes to roughly the same prediction, but has to use a lot more information about the physical state of the universe to get there. Information that, as the alien knows full well, the human has no access to. This leads to a situation where the alien, who only knows the physical stance, has no way of explaining how the human was able to arrive at her/his prediction. The only solution is for the alien to adopt the intentional stance and acknowledge the human as an intentional system. This shows us both that the intentional stance is capable of uncovering real patterns in the behaviour of intentional systems that cannot be seen via any other strategy, and that successful use of the intentional stance can be evidence that its users are themselves intentional systems.

But this still leaves us with the question of when we can consider something an intentional system. Dennett claims that there is no ‘magic moment’ in the transition from a simple thermostat to a true intentional system. Instead the road from thermostat to human is gradual. According to Dennett, even something as simple as a thermostat has what he calls belief-like states, for example about the temperature of the room or the current state of the boiler. But these belief-like states are of course no substitutes for real beliefs. If we wish to turn our thermostat into an intentional system, what we must do is give it a closer and richer connection to the world. Doing this would give its belief-like states ‘more to do’ and increase the system’s internal complexity. As the machine’s attachments to the world become richer and its internal complexity grows, its belief-like states will come closer and closer to the status of actual beliefs, and the system itself will come closer and closer to being a true intentional system. Though again, there is no magic moment, and therefore no clear cut-off point.

So are AI and human intelligence categorically different things? And does this mean that there are tasks which will always remain the exclusive domain of humans? Or is it the case, as Turing and Dennett argue, that the difference between AI and human intelligence is one of degree, driven by their respective internal complexities? Real-world developments seem to favour the latter position, as more and more areas once deemed the exclusive purview of humans are being encroached upon by machines. Also, to say that Searle’s Chinese room argument shows that there is a categorical demarcation between humans and machines is not quite accurate. It would be more accurate to say that Searle argues that machines will not reach the same level as human intelligence if you only focus on the AI program installed on them. But developments are being made on both the software and hardware fronts. And as the internal complexity of both AI and computers continues to advance, the gap between what humans can do and what machines are capable of will shrink further and further.


Chapter 2: Will Automation Lead to Technological Unemployment?

The worry that automated machines will replace human labour is not new. Increasing automation has been going on since the industrial revolution at least, and likely even earlier than that. And yet there are still enough jobs for the vast majority of people. The economy has been able to produce new jobs for most people whose previous work had been automated. So is there any reason to believe that this time will be any different? There are those who believe that it will be. In this chapter I will cover various arguments and trends that seem to support the idea that, unlike in previous cases, the next wave of automation is likely to have serious, long-term effects on human participation in the labour market. I will first go over a short history of automation. Then I will cover the arguments that automation would lead to what John Maynard Keynes (1963) called ‘technological unemployment’: the situation where, due to rapidly advancing technology, the rate at which we are able to automate labour outruns the pace at which we can find new uses for labour. And I will end this chapter by addressing various counterarguments from those sceptical of the idea of technological unemployment.

A History of Automation

To get a feel for the impact that automation can have on the jobs in a sector, one need look no further than the agricultural sector. Agriculture used to encompass almost half of the total jobs in western countries. But through the introduction of various forms of machinery, agriculture’s share of the western job market has today fallen to around two percent (Ford, 2016). And as the field of robotics advances this number is likely to drop even further.

In their article ‘The Future of Employment’ (2013) Frey and Osborne argue that the rate of automation is not determined by the rate of scientific research and invention, but rather by where the social and economic interests happen to lie at the time. To illustrate this, they bring up two contrasting examples of the invention of the stocking frame knitting device by William Lee in 1589, and the Luddite revolt from between 1811 and 1816. In 1589 William Lee invented a knitting device that would mechanise the process of hand-knitting. But when Queen Elizabeth I came to inspect his invention, she seemed primarily concerned with the impact the device could have on employment and refused to grant him a patent.

Contrast this to the case of the Luddite revolt. Revolts broke out when the British Parliament revoked a law prohibiting the use of certain types of automation in the production of wool. Many workers in wool production saw this development as a dire threat to their livelihood, and responded by destroying the offending machines. However, by the time of the revolt the attitude of the government towards automation had changed significantly. The destruction of machinery had been made punishable by death, and the British government suppressed the revolt by sending an army of 12,000 men.

What could explain this stark difference in response to automation by the British government? Frey and Osborne argue that the balance between job conservation and technological progress reflects the balance of power in society, and that the boundaries for automation are set not by a lack of inventive ideas, but by whether those in positions of power benefit from maintaining the status quo. The decision of the Queen to deny Lee his patent in 1589 was likely motivated by pressure from the country’s guild system, which at the time still held considerable sway. The guilds were afraid that the new invention would threaten the position of their artisan members. And as the crown earned quite a bit of money from the granting of guild licences, this gave Queen Elizabeth I motivation to deny Lee a patent. As argued by Kellenbenz (1974, p. 243), “guilds defended the interests of their members against outsiders, and these included the inventors who, with their new equipment and techniques, threatened to disturb their members’ economic status”. But by 1811 the influence of the guilds had all but disappeared, and with the establishment of Parliamentary supremacy over the British Crown, large property owners had become the dominant class in British politics. This new ruling class stood to gain much from the introduction of new mechanisation technologies, and so instituted policies to promote the development of new forms of automation and to protect the machinery. The workers and artisans no longer had the political clout to resist these developments.

From these examples we can learn that when we observe the development of automation technology (or any new technology for that matter) we should not look merely at what is technologically feasible, but also at who stands to gain from the introduction of new technologies.

While today we mostly believe that automation technology poses a threat to relatively unskilled workers, while complementing the role of more skilled and creative work, this wasn’t always the case. During the nineteenth century mechanisation largely played the role of ‘deskilling’ work. Factors such as cutting a process into smaller, highly specialised and simplified sections, the introduction of interchangeable parts for machines, and the introduction of the assembly line all worked together to take a process that used to be performed by a highly skilled artisan and simplify it. And by substituting for skill these machines increased the relative demand for unskilled labour.

However, in the twentieth century this began to change as, partly through the introduction of electrically powered machines, the demand for unskilled labour began to decrease. These new machines no longer deskilled the process, but started to fully automate it. More skilled workers, on the other hand, became more sought after, as they had the skills necessary to operate and maintain these new machines. This new demand was soon to be met, as in many Western countries the education system was reformed in what Frey and Osborne refer to as the high school revolution (p. 11). But they also point out that this new supply was followed by a sharp decline in the wages of clerical occupations relative to the wages of production workers, as the larger supply of educated workers started to result in degree inflation. This process shows that even if the population becomes more educated, their wages do not automatically rise with it. And the introduction of computers would further automate labour performing routine tasks, eroding wages for such clerical occupations. The manufacturing industry has also already been heavily automated. In many cases, a factory that employed over 2,000 people some decades ago may count only around 150 jobs today, while still producing the same or even a greater quantity of products, and many of those remaining jobs amount to filling in the gaps between machines, gaps that continue to shrink. However, recently the process of automating factories has resulted in the creation of some jobs in the sector. For while highly automated factories do not employ a lot of people, automation does make the local manufacturing industry more competitive with low-wage countries, bringing back some jobs that were lost due to offshoring, as well as creating more jobs in peripheral areas, such as suppliers and transportation. There is however no guarantee that these jobs will be around for long, as factories become more and more fully automated.

Throughout the history of automation, many voices have expressed the fear of technological unemployment, yet time and time again these concerns have proven exaggerated. It is commonly believed by economists that automation has not led to large-scale long-term unemployment because of a rebound effect. As machinery replaces workers in production, costs go down. In many cases, lower production costs will lead to lower prices, increasing people’s purchasing power. This means that consumers will have more money left, which increases the demand for other products. This in turn creates more employment opportunities for the previously displaced workers. But now, once again, certain people are ringing the alarm bells, proclaiming that, with a new wave of automation on the horizon, technological unemployment is going to put many people in danger of becoming permanently unemployed, or unemployable.

Is This Time Different?

Martin Ford (2015) identifies seven economic trends that suggest that this time might be different. These “seven deadly trends”, as he calls them, while not directly related to automation, seem to suggest that new technologies are changing our economic landscape. Ford argues that these trends are the result of the computerisation that emerged around the same time. Whereas previous technological innovation allowed wages to rise alongside productivity, the introduction of computerisation increased productivity and did improve wages for the people with the right skill set, but for many other workers computers had a more negative effect. The tech bubble and the rise of the IT sector did create a lot of new well-paying jobs, and as a result wages did improve, but they still remained below productivity growth. And as information technology accelerated, many of these new well-paying jobs began to be either automated or offshored. Increasingly, computers and machines were replacing workers rather than increasing the value of their labour.

The first of these trends is the stagnation of wages and the decoupling of wages from productivity. Productivity has been growing steadily, and wages grew with it until around 1970; after that, wages started to stagnate despite the fact that productivity kept growing at the same rate.

The second trend concerns the shrinking fraction of national income going to labour versus the part going to capital. Like the link between wages and productivity, the idea that these fractions remained relatively stable over time had become accepted fact among many economists (this idea is known as Bowley’s Law). But this has changed, again from the 1970s on: the share of GDP going to labour started to shrink. Ford remarks that this trend is all the more surprising considering that, when calculating the share going to labour, the calculation is not limited to the income of the average middle- to low-income worker, but also includes the income of CEOs, Wall Street executives and movie stars. These all count as labour as well, and since their incomes have demonstrably been increasing rather than declining, it means that if you were to limit the calculation to the bottom 99 percent of labour income, the labour–capital distribution would likely become even more askew.

The third trend describes the declining labour force participation. Ford notes that after the Great Recession, whenever unemployment seemed to be declining, this was not the case because people were finding new employment, but because workers became discouraged and decided to stop looking for work and leave the labour force entirely.

The fourth trend is connected to the third, as it concerns diminishing job creation. Ford describes how the last decades have seen fewer and fewer new jobs created, and how the 2008 financial crisis only managed to make things considerably worse. He speaks of a jobless recovery: the problem of joblessness after the crisis was not a product of more jobs being destroyed, but of fewer new jobs being created during the following recovery.

Ford’s fifth trend is growing inequality. This trend is likely a direct result of the other trends. Ford describes how between 2009 and 2012, 95% of all income gains went to the wealthiest 1% of people.

The sixth trend is that recent college graduates are dealing with declining incomes and increasing unemployment. While they still fare better than those with only a high-school education, the incomes of recent college graduates have fallen by about 15% in the 2000s, and half of new graduates are unable to find work that relates to their education.

The final “deadly” trend concerns the polarisation of the labour market, as most of the jobs destroyed in the recession have been the good middle-class jobs, and those that have come in their place are largely in low-wage sectors. This polarisation leads to an hourglass-shaped labour market where workers who cannot secure a job at the top have to content themselves with remaining at the bottom.

Before a task can be automated, a programmer must be able to program a computer to perform that task. To be able to do this, the task must first be specified and broken down into clear rules and instructions. For this it is important that the criteria for success are quantifiable and the results can be easily evaluated. Thus the boundaries for automation are set by the question of which tasks can be sufficiently specified.
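This distinction can be made concrete with a small sketch. The rules below are entirely hypothetical, invented for illustration and not drawn from any of the sources cited here; they show how a routine clerical task, whose success criteria are fully quantifiable, can be written down as explicit instructions a machine can follow.

```python
# Illustrative sketch (hypothetical rules, not from the cited sources):
# a routine clerical task expressed as explicit, checkable rules.

def approve_invoice(amount: float, has_purchase_order: bool, vendor_known: bool) -> bool:
    """Decide whether an invoice can be paid without human review."""
    if not vendor_known:
        return False              # unknown vendors always go to a human
    if amount <= 1000:
        return True               # small invoices are approved automatically
    return has_purchase_order     # larger ones need a matching purchase order

print(approve_invoice(500.0, False, True))    # True
print(approve_invoice(5000.0, False, True))   # False
```

A non-routine task, such as judging whether a cover letter is persuasive, resists this treatment: we cannot articulate the rules, which is why such tasks remained out of reach until machine learning made it possible to infer rules from examples.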

David Autor, et al. (2003) categorised tasks along a two-by-two matrix. On one side they divided tasks into manual and cognitive tasks, on the other into routine and non-routine tasks. Routine tasks are defined as tasks that follow explicit rules. Non-routine tasks however are not sufficiently understood to be formulated in explicit rules. Manual tasks involve physical labour, while cognitive tasks involve mostly knowledge work. Many people used to assume that automation mostly targeted routine tasks that followed clear and explicit rules. This is because those rules can then be turned into computer code, allowing the machine to perform the task. Something that is much harder to accomplish with tasks which are not inherently defined by clear and explicit rules. However, since the data revolution and advances in machine-learning algorithms, many of these barriers have been overcome. Big data produces a lot of examples of successful attempts at the task, which can be used to improve the algorithm and further quantify the criteria for success, so that these can be put into code. Following these developments, automation is no longer confined to just routine tasks, and jobs centred around non-routine tasks are now under threat of being automated.

For cognitive tasks, one of the most important advantages computers have over humans is scale. Computers are almost always much better at processing large amounts of information and at performing large calculations.


As illustrated in their graph, Frey and Osborne (p. 37) predict that we are going to see two large automation waves, separated by a plateau. The first wave includes the jobs that they consider to be at high risk of automation, the following plateau comprises those considered to be at medium risk, and the last wave consists of the professions considered to be at low risk of being automated soon. They speak of waves because the assumption is that even the group concentrated at the low-risk end of the graph is likely to be automated eventually, just not quite yet. So this graph can also be considered an automation timeline prediction, in which the large number of high-risk jobs will be automated relatively soon, followed by a slow period in which the relatively few jobs in the medium-risk category will be automated, closed off by the second (and last) wave of automation, made up of the professions currently considered to be at low risk. And as this graph contains 100% of the working population, this means that when the final wave has happened, all jobs will have been automated. Frey and Osborne give no specific dates for their prediction, especially not for the medium- and low-risk categories, which are probably considered too far off for accurate predictions. However, they do state:

“According to our estimate, 47 percent of total employment is in the high risk US category, meaning that associated occupations are potentially automatable over some unspecified number of years, perhaps a decade or two.”

The idea that 47% of total employment is at high risk of being automated in the next two decades is rather alarming.

Addressing Scepticism

Those who are sceptical of the premise that advancement in artificial intelligence will render a significant portion of the population unemployable will often bring up the theory of comparative advantage. According to supporters of this theory a person will always be capable of finding employment as long as he or she specialises in their area of expertise due to a concept known as opportunity cost. Opportunity cost means that when a person decides to do something, they must necessarily give up the opportunity to do something else. If I decide to spend a day watching shows on Netflix or reading a book for pleasure, I give up time I could also have used to make money, or write my thesis.

To illustrate this theory I will use a scenario of these concepts in action, given by Martin Ford (2015). This scenario concerns two people, Jane and Tom, who have different skill sets. Jane is a very good cook as well as one of the best neurosurgeons. Tom is a reasonably good cook, but knows absolutely nothing about surgery. Now the question is: should Jane hire Tom as a cook? The intuitive answer is no: since Jane is a better cook than Tom, she has no reason to hire him to cook for her, as she would be better off cooking for herself. However, if she spends time cooking for herself, she has less time to devote to her greatest skill, neurosurgery. According to the law of comparative advantage, even though Jane is also a better cook than Tom, she would benefit from hiring him as a cook, because this frees her up to focus on surgery. In this way, according to the theory of comparative advantage, as long as someone specialises in the thing they are most skilled at, they will be able to find employment. And even if AI were to become advanced to the point that it eclipsed humans in a number of skills, humans could still find jobs.
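The opportunity-cost arithmetic behind this scenario can be made concrete with hypothetical numbers (the figures below are my own illustration, not Ford’s):

```python
# Hypothetical figures to illustrate opportunity cost (not from Ford's text).
SURGERY_RATE = 400    # what Jane earns per hour of neurosurgery
COOKING_HOURS = 1     # hours Jane would need to cook dinner herself
TOM_WAGE = 30         # what Jane would pay Tom to cook that same dinner

# If Jane cooks, she forgoes an hour of surgery income: that forgone income
# is her opportunity cost of cooking, and it dwarfs Tom's wage.
opportunity_cost = SURGERY_RATE * COOKING_HOURS   # 400
savings_from_hiring_tom = opportunity_cost - TOM_WAGE

print(savings_from_hiring_tom)  # 370: hiring the worse cook pays off
```

So long as Jane’s time is scarce, even an inferior cook is worth hiring. It is precisely this scarcity of Jane’s time that replicable machines remove.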

Yet, as Ford (ibid.) points out, this idea fails to take into account one important characteristic of machines and AI, namely that they can often be easily replicated. Let’s take another look at our scenario, and now give Jane the ability to clone herself. In this alternate scenario the situation for Tom seems significantly more dire. The theory of comparative advantage relies on the concept of opportunity cost: Jane cannot be in two places at once, performing two tasks at once. If she is cooking she cannot perform surgery. But with Jane’s ability to clone herself, this concept no longer applies. She can clone herself until she has exhausted the demand for brain surgeons, and still make more copies to cook. In this scenario, there is little chance of Tom (or any other cook of his skill level) being able to find employment as a cook. The fact that AI and robots can be easily replicated completely upends the concept of opportunity cost, to the point that the theory of comparative advantage no longer functions.

Another popular counterargument against the idea that automation can lead to large scale unemployment cites the creative force of capitalism. Erik Brynjolfsson and Andrew McAfee (2014, p. 98) summarise the argument as follows:

“while technological progress and other factors definitely cause some workers to lose their jobs, the fundamentally creative nature of capitalism creates other, usually better, opportunities for them. Unemployment, therefore, is only temporary and not a serious problem.”

The idea is that as workers are replaced by machines, the production process becomes cheaper and more efficient. An increase in the efficiency of production leads to a reduction in the price of the goods produced. And as the price of certain goods becomes lower, consumers will have more money left to spend in other places. Thus an increase in efficiency in one area will lead to an increase in demand in other areas, which will need more workers to fulfil this higher demand. And so the workers who became unemployed because of automation will find new jobs as a result of the very machines that replaced them.

The idea that automation and other types of technological innovation end up creating more jobs than they destroy rests both on accepted economic theory and on hundreds of years of historical evidence. But Brynjolfsson and McAfee (2014) bring up several counterarguments in favour of the possibility of technological unemployment.

The first argument, that an increase in productivity lowers the prices of goods, which leads to people spending the money they save on other goods, expresses the idea that the overall economy has, in technical terms, a price elasticity of one. This means that as prices decline by 1 percent, demand also rises by 1 percent. Keynes (1930) disagreed: he believed that lower and lower prices would not mean that people would consume more and more. Eventually people would become satiated and start to consume less.
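A quick way to see what a ‘price elasticity of one’ asserts is with made-up numbers (my own illustration, not from Brynjolfsson and McAfee):

```python
# Illustrative arithmetic for unit price elasticity (hypothetical numbers).

def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

price_old, price_new = 100.0, 99.0    # price falls by 1 percent
qty_old, qty_new = 1000.0, 1010.0     # quantity demanded rises by 1 percent

# Elasticity = % change in quantity / % change in price (sign flipped so
# that a downward-sloping demand curve yields a positive number).
elasticity = pct_change(qty_old, qty_new) / -pct_change(price_old, price_new)
print(elasticity)  # 1.0 -> total spending stays roughly constant
```

Keynes’s satiation point corresponds to an elasticity below one: prices keep falling but demand barely responds, so total spending shrinks and the rebound mechanism stalls.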

The second argument is concerned with the ability of workers and institutions to keep pace with the rate of technological innovation. Those who believe that the threat of technological unemployment is addressed by the inherently creative nature of capitalism say that when workers lose their job due to automation, their unemployment is only temporary and, given time, they will find a new (possibly better, higher-paying) job. But even if that is true, this process takes time. It may take a while for the invisible hand to work its market magic and create new jobs in other sectors. And it will take even longer for the unemployed to start filling these new jobs, as to do so, they will in all likelihood first have to acquire the new skills and training necessary to perform them. This process can only start after the new jobs have been created, because unless they possess incredible foresight, the newly unemployed will not know where these new positions are going to open up. And all the while, automation technology will continue marching on. The faster technological progress goes, the harder it will be for workers and institutions to keep up, widening the gaps and increasing the possibility of technological unemployment.

The structural economic changes that result from recent technological advancement have created groups of winners and losers. Workers with the right kind of skills – skills that either have not yet been automated, or skills that are augmented by being paired with new technology – will find their work becoming more valued. On the other hand, workers with the kind of skills that are being encroached upon by machines will find their labour being devalued, and their incomes suffering as a result. The second group of winners is those who possess a significant amount of capital. After all, the process of automation is one of replacing labour with capital. And as discussed earlier in this chapter, capital’s overall share of the economy has been steadily growing these past decades, at the expense of labour’s share.

The third group of winners comprises superstars in winner-takes-all markets, which have been supercharged by the introduction of digital technology. Whenever a field or industry becomes more digital, meaning that its products can be digitally copied and distributed, we quickly see an upsurge in the income and popularity of that field’s superstars, while even those that are second-best will start to struggle, no matter how close they come to the number one. Frank and Cook (1996) attributed this to the difference between relative performance and absolute performance. Absolute performance can be found in a traditional market: there the most skilled worker cannot take every job, and so someone who is almost as skilled can still compete and make almost as much money. But in a digitised economy the most skilled worker can create a product that can then be cheaply copied and distributed to everyone, and so the second-best will have to compete with an infinite number of copies of the number one. A programmer who writes a program that is slightly better than the competition can come to completely dominate the market, and in that market there is going to be little to no demand for the tenth-best program. This is an example of relative performance.

All of these factors work together to reduce demand for certain types of skills and labour. And if demand falls low enough, workers may be unable to sell their labour at all, even at near-zero cost. I might offer to sing a song, but even if I did so for free, my audience might (rightly) prefer to pull up Spotify and listen to that song performed by a famous artist. This leads to a scenario that is well summarised by Brynjolfsson and McAfee (p. 101):

“Thus, there is a floor on how low wages for human labour can go. In turn, that floor can lead to unemployment: people who want to work, but are unable to find jobs. If neither the worker nor any entrepreneur can think of a profitable task that requires that worker’s skills and capabilities, then that worker will go unemployed indefinitely. Over history, this has happened to many other inputs to production that were once valuable, from whale oil to horse labour. They are no longer needed in today’s economy even at zero price. In other words, just as technology can create inequality, it can also create unemployment.”

Even so, these recent negative economic trends could also be ascribed to other factors. The most popular alternative explanation is globalisation, and in particular the offshoring of jobs to lower-wage countries like China. Proponents of this explanation argue that any competitive market will tend towards a single common price for a given good (in this case, labour). And because our world is becoming ever more connected, globalisation is creating a single global market for some forms of labour. This means that workers in Western countries have to compete against workers in countries like China, who can do the same work at lower cost. This reduces the competitive price for labour in the single global market, leading to lower wages and unemployment in Western countries.

But this narrative is undercut by several conflicting observations, as described by Martin Ford (2016). First, the percentage of workers employed in manufacturing has been declining since before the economic rise of China and the trend of offshoring began, and in the US that decline seems to have halted recently. Then there is the fact that manufacturing employment in China itself is also in decline: since 1996 it has fallen by around 25 percent. It seems that Western workers are not being replaced by Chinese workers; rather, both Western and Chinese workers are being replaced by automation.

This idea is further reinforced by statistics showing that, despite declining manufacturing employment, both China and the US are producing more than ever before. We are producing more goods; we are simply doing so with more machines and fewer workers.

In this chapter I have shown that there are good reasons to expect that, in time, developments in automated machine technology will lead to high levels of technological unemployment. In the next chapter, I will discuss some of the possible ramifications. How would our economy react to large-scale technological unemployment? And since work forms such a large and important part of many people's lives, how will people be affected on a social and personal level if they become essentially unemployable?

Chapter 3: The Social and Economic Consequences of Mass Technological Unemployment

Work forms a core part of many people's lives, and employment figures are often seen as one of the most important measures of a country's economic health. It should therefore come as no surprise that the mass technological unemployment predicted in the previous chapter would have large social and economic consequences, both on the macro-scale of society as a whole and on the micro-scale of the individual. After all, our current economy is largely based on mass consumerism, and since most people get their money from employment, mass unemployment would pose a serious threat to a consumer economy. But on a social level, people get more than money out of their jobs. For many, a job is an important part of their identity, as well as a source of meaning and self-esteem. So what will happen to people when such sources become unavailable to them?

The Economic Consequences

While automation threatens millions of jobs and could lead to technological unemployment on an unprecedented scale, it will also undoubtedly create large amounts of wealth. By replacing human workers with automated machines, corporations can save substantially on production costs, and their profits will soar as a result – at first. The question is to what extent this economic growth will be sustainable in the face of mass unemployment and growing inequality. After all, workers are also consumers, and with millions of workers hit by technological unemployment, far fewer people will be able to actually buy the products and services that these automated machines provide.

Imagine that tomorrow someone develops androids with fully comprehensive general artificial intelligence, capable of doing almost anything a human worker could do. At first such machines would be very expensive, but once costs came down, businesses would quickly start replacing human workers with these androids to save on labour costs. Productivity would go through the roof, and thanks to reduced production costs, profits would soar. Even employers who were initially reluctant to let go of their human workers would be forced to start replacing them with androids if they wished to stay competitive. As a result, unemployment figures would rise to unprecedented levels. Many would appeal to the government to stop this large-scale displacement of workers, but the government would be in much the same position as the initially reluctant businesses: if a country were to enforce a ban or heavy restrictions on the use of these androids, it would put itself at a vast disadvantage internationally, compared to countries that do allow their use.

The owners of capital would reap great profits from these developments, as sales of luxury goods and services boomed. And as the rise in productivity allowed products to be made at lower prices, the lower socio-economic classes could continue to remain consumers, at least for a time. But as long-term unemployment dragged on, savings would run out, and due to the massive scale of technological unemployment, unemployment benefits would be spread thin. As a result, large numbers of people would have to cut back on their spending, and would only be able to afford the bare necessities. The androids, of course, need resources to keep running, but other than that they do not consume; they certainly do not go out and buy the products they are making. The rising corporate profits would then turn out to be unsustainable: most of them would have come from cutting labour costs, but as people stopped buying products, overall earnings would stagnate and ultimately decline.
