
Human-machine interactions: implications for different types of jobs

A Simulation Study

Master Thesis

Niels Haan

s2234602


Abstract

Empirical evidence suggests that AI machines can potentially be detrimental to employment and can positively affect productivity. This study aims to contribute to the AI literature by comparing the performance and cost impacts on different types of jobs after introducing an AI machine to an organization. This should help managers better judge whether introducing an AI machine will be beneficial for the overall costs and performance of the organization. Three types of jobs are examined through computational simulation: creative, normal, and routinized jobs. The results show that, after the introduction of an AI machine, normal jobs outperform routinized and creative jobs in performance and costs in the long run. Furthermore, in terms of performance and costs, the creative type of job outperforms the routinized type of job in the long run. The findings suggest that normal jobs strike the right balance for replacing a lot of work without making the employees worry about losing their jobs to the machine.

Keywords: machines, artificial intelligence, jobs

Contents

1 Introduction
2 Literature Review
  2.1 Complementarity or substitutability
  2.2 Machinery and its impact on the economy
3 Model
  3.1 Model setup
    3.1.1 Input parameters
  3.2 Machine
    3.2.1 Attributes
    3.2.2 Functions
  3.3 Human
    3.3.1 Attributes
    3.3.2 Costs function
  3.4 Firms
    3.4.1 Attributes
    3.4.2 Functions
    3.4.3 Recruitment decisions
  3.5 Simulation design
  3.6 Model robustness
    3.6.1 Job variety
    3.6.2 Machine
    3.6.3 Substitutable threshold
    3.6.4 Costs performance variable
4 Results
  4.1 Main results
  4.2 Routinized work
  4.3 Normal work
  4.4 Creative work
  4.5 Robustness of the model
    4.5.1 Job variety
    4.5.2 Machine
    4.5.3 Substitutable threshold
    4.5.4 Costs performance dependency
5 Discussion and Conclusion
  5.1 Theoretical implications
  5.2 Practical implications
  5.3 Limitations
  5.4 Future research
A Robustness Checks
  A.1 Job variety
    A.1.1 Equal to 0.3
    A.1.2 Equal to 0.1
    A.1.3 Equal to 0.05
  A.2 Machine
    A.2.1 Initial performance equal to minus two
    A.2.2 Learning rate equal to 0.5
    A.2.3 Learning rate decay equal to 0.00001
  A.3 Substitutable threshold
    A.3.1 Equal to 0.6
    A.3.2 Equal to 0.95
    A.3.3 Equal to 0.99
  A.4 Costs performance dependency
    A.4.1 Equal to 0.2
B Source Code

1 Introduction

Technology has advanced tremendously in the past decade. The rise of information technology, artificial intelligence (which also encompasses machine learning), data analytics, industrial robots, and other digital technologies is changing the innovation landscape. Firms that want to stay competitive have made these digital technologies an essential part of their business models. Therefore, digital innovation, which is “the creation of (and consequent change in) market offerings, business processes, or business models that result from the use of digital technologies” (Nambisan et al., 2017), is becoming more and more important for the survival of the firm (Cefis & Marsili, 2005).


In this thesis, one particular digital technology, artificial intelligence (AI), and its implications within organizations will be researched. According to Kurzweil (1990), AI can be defined as “the art of creating machines that perform functions that require intelligence when performed by people”. The creation of these machines has raised the usual questions with regard to the impact on human productivity (Cockburn, Henderson, and Stern 2018; Brynjolfsson, Rock, and Syverson 2018). Will these machines substitute or complement humans in the workforce? (Autor, 2015; Markov, 2015; Acemoglu and Restrepo, 2016).

Historically, the usage of machines in general (i.e., not necessarily artificial intelligence machines) and the resulting increase in automation has both complemented and substituted human jobs (Autor and Salomons, 2018). As shown by Acemoglu and Restrepo (2017), at the macro level “one more robot per thousand workers reduces the employment to population ratio by about 0.18-0.34 percentage points and wages by 0.25-0.5 percent”. It can be expected that if organizations start buying artificial intelligence machines, even more humans will be replaced, reducing the employment to population ratio further. Moreover, once an organization decides to buy such a machine, it wants to optimize its usage so that performance and profit are maximized and costs are minimized (i.e., organizations want to maximize their benefit/cost ratios). This is more complicated than one might initially expect: an organization often consists of different tasks requiring different sorts of skills from the employee. In this thesis three types of jobs are central: creative, normal, and routinized.


Routinized jobs consist mainly of routine tasks. In this thesis, we define creative jobs as jobs consisting mainly of non-routine tasks. According to Autor, Levy, and Murnane (2003), two broad sets of such non-routine tasks can be distinguished, which have proven challenging to computerize. The first category consists of ‘abstract’ tasks, which require problem-solving capabilities, intuition, creativity, and persuasion. They are characteristic of professional, technical, and managerial occupations. For this category, workers with high levels of education and analytical capability are employed, and they place a premium on inductive reasoning, communications ability, and expert mastery. The second broad category includes tasks requiring situational adaptability, visual and language recognition, and in-person interactions, which the authors call ‘manual’ tasks. Manual tasks are characteristic of food preparation and serving jobs, cleaning and janitorial work, grounds cleaning and maintenance, in-person health assistance by home health aides, and numerous jobs in security and protective services. In this thesis, normal jobs are defined as the middle ground between creative and routinized jobs: they consist more or less equally of routine and non-routine tasks, and thus include tasks fitting both the routinized and the creative category. Hence, roughly half of the job can be codified, whereas the other half cannot.


After the introduction of the machine, a human can behave in two ways: either the human is not scared of losing his job and is willing to use the machine to its full benefits, or the human is scared to lose his job to the machine and therefore tries to sabotage it. For this reason, managers face a tradeoff in their decision making. On the one hand, they want to introduce the machine to improve performance and reduce costs. On the other hand, the introduction of the machine can lead to employees sabotaging it, which increases costs for the organization. To date, little to no prior research has examined how a manager can deal with this problem. Therefore, in this research, we try to find which type of job performs best in terms of performance and costs after the introduction of the machine. The manager can then become more aware of which types of jobs are most prevalent within the company, and can better consider whether introducing the machine will be beneficial for the organization's overall costs and performance.

2 Literature Review

In this thesis, a broad and comprehensive definition of machines will be used, covering all technologies able to replace tasks previously performed by one or more humans (i.e., industrial and service robots, artificial intelligence (AI) machines, and other automated machines). Under this definition, the ultimate goal of every machine is automation.

2.1 Complementarity or substitutability

Will machines substitute or complement humans in the workforce? (Autor, 2015; Markov, 2015; Acemoglu and Restrepo, 2016). According to Autor (2015), machines usually have a comparative advantage over humans in performing routine and codifiable tasks, such as diagnosing diseases, predictive analytics, image and speech recognition, translating and processing languages, or assisting customers after sales (Brynjolfsson, Mitchell, and Rock, 2018; Wilson & Daugherty, 2018).

Conversely, humans still have a comparative advantage in (some) problem-solving tasks, and in jobs requiring creativity, flexibility, adaptability, and subjective judgment (Gaggl and Wright, 2017). The fact that machines can replace humans might influence how humans use the machine. In this research, humans may voluntarily sabotage the machine to protect their jobs, which are at risk because of possible replacement by the machine. A trade-off therefore arises: on the one hand, humans want to improve their performance by using the machine; on the other hand, they do not want to lose their jobs to it.

2.2 Machinery and its impact on the economy

Frey and Osborne (2013) estimate the susceptibility of occupations to computerization. To do this, the authors categorize tasks by their likelihood of being automated. These tasks are then mapped to the O*NET job survey, which provides open-ended descriptions of the skills and responsibilities involved in an occupation over time. Combining this with data from the Bureau of Labor Statistics (BLS) on employment and wages allows Frey and Osborne to identify subsets of the labor market at high, medium, or low risk of automation. The study finds that 47 percent of U.S. employment is at high risk of computerization.

The method of Frey and Osborne (2013) has also been applied by researchers in other parts of the world. Brzeski and Burk (2015) mapped the occupation-level findings to German labor market data and found that 59 percent of German jobs are highly likely to be automated in the future. The same analysis by Pajarinen and Rouvinen (2014) for Finland put 35.7 percent of jobs at risk of automation.

Trajtenberg (2018) proposes several strategies for governments to enhance the positive effects of AI. First, education should be made more oriented to the changing nature of skills required for the AI era (e.g., analytical, creative, interpersonal, and personal). Second, personal care occupations, especially in healthcare and education, should be professionalized: these occupations provide the bulk of future employment growth, but currently they involve little training and technology and have low wages. Finally, governments should steer the direction of technological change, and companies should strive for human-enhancing rather than human-replacing innovations.

There has been more systematic work on the effect of industrial and service robots on the economy. According to Graetz and Michaels (2015), these robots have large effects on productivity growth. Using data at the country, industry, and year level from the International Federation of Robotics (IFR), they found that robots may be responsible for a ten percent increase in the GDP of the countries studied between 1993 and 2007, and for a fifteen percent increase in productivity. Moreover, Graetz and Michaels (2015) showed that wages on average increase with robot use, while hours worked drop for low- and middle-skilled workers. However, using the same IFR data at the industry level, Acemoglu and Restrepo (2017) found that industrial robot adoption in U.S. labor markets was negatively correlated with employment and wages between 1990 and 2007. In addition, they estimate that each newly introduced robot can reduce employment by six workers and wages by half a percent. According to the authors, these effects are most visible in the manufacturing industry.

3 Model

As described earlier, an agent-based computational model is developed to compare performance and costs over time for three different types of jobs (i.e., routinized, normal, creative). Multiple scholars have found that computer simulation can contribute significantly to the social sciences (Davis et al., 2007), despite the fact that it is still fairly unknown in social and business theory development. Moreover, according to Harrison et al. (2007), it is recognized as a third way of doing science. Simulation can be defined as a “method for using computer software to model the operation of real-world processes, systems or events” (Davis et al., 2007). In addition, simulation is easily understood, relatively free of mathematics, and often superior to mathematical methods that may be too complex or unavailable (Berends & Romme, 1999). In this research, a simulation approach is appropriate, as it enables us to objectively study the complex interactions and actions between the agents (i.e., the machine and the humans). Moreover, no data on this matter is available, and such data would not be easy to obtain. The main reason is that it is very difficult to measure the performance levels of the machine and the human after each human-machine interaction. In addition, humans are probably biased to a certain extent, because they will most likely not admit that they are deliberately sabotaging the machine in order not to lose their job. Thus, in this case, the simulation approach is preferable to empirical or mathematical analysis, because objective data to study the human-machine interactions is unavailable.

3.1 Model setup

The code for the simulation is written in C++, a general-purpose programming language created by Bjarne Stroustrup as an extension of the C programming language, or "C with Classes" (wiki ref here). I chose C++ because simulations can take a long time, so performance and speed were important. C++ is fast because it provides only a minimal amount of abstraction, at little cost to performance and speed. Because of this minimal amount of abstraction, C and C++ are comparatively low-level languages compared to other languages (e.g., JavaScript, Python, Java). In general, the more abstraction a language adds, the farther it moves from the machine code, which is bad for speed and performance.
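For reference, the simulator is a plain command-line program: it reads an input file with the parameters and writes the averaged results to an output file (see Appendix B). The build line below is an assumption rather than part of the thesis — any C++11 compiler should do, firm.cpp is assumed to exist alongside firm.h, and the -fopenmp flag matches the omp.h include in simulator.cpp:

    g++ -std=c++11 -O2 -fopenmp -o simulator main.cpp simulator.cpp parameters.cpp machine.cpp human.cpp firm.cpp
    ./simulator parameters.in results.out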

3.1.1 Input parameters

Table 3.1 gives an overview of all the input parameters. For each type of job, a different level of complementarity is run: creative (0.2), normal (0.5), and routinized (0.8). These levels were chosen to produce a distinct change in pattern and to obtain meaningful information from the simulations. The values refer to the mean used to draw values from a random normal distribution function; more information can be found in the human section. The other parameters are used to test the robustness of the model. Each parameter and its function in the model are briefly described below.

Name                                  Symbol     (Default) Value(s)
time                                  t          1000
number of humans                      N_humans   200
number of firms                       N_firms    1000
introduction machine                  t_machine  100
costs performance dependency          α          0.8
type of work                          WT         'creative', 'normal', or 'routinized'
costs related to the machine          C_machine  10
(initial) performance of the machine  P_machine  -5
learning rate of the machine          µ          0.1
substitutable threshold               γ          0.8
job variety                           φ          0.2
learning rate decay                   ε          0.00005

Table 3.1: Input parameters.

Each iteration is equal to one time unit. Every time unit the firms update themselves, resulting in updated values for performance and costs. The default value is one thousand, similar to other literature (Article here). The parameter N_humans (i.e., the number of humans) determines the sample size; in this research, all simulations are executed with two hundred humans, which is also similar to other literature (Greve, 2002). The parameter N_firms (i.e., the number of firms) specifies how many simulations are performed. This allows the average to be taken over all the simulations later on, which provides a more accurate picture of what happens to the firm's performance and costs over time when humans are able to use a machine. The parameter t_machine (i.e., introduction machine) specifies after how many time units the machine is bought by the company and thus introduced. For the simulations, the machine is introduced at t = 100, to get a clear view of what happens before and after its introduction. The parameter α (i.e., costs performance dependency) specifies the degree to which performance and costs are related to each other. In this research, the assumption is made that if a human performs well at his tasks, this results in fewer costs for the company; more on this relationship is described in the human section. The parameter WT (i.e., type of work) specifies for which type of work simulations are performed; ultimately, the type of work defines the complementarity level. As introduced earlier, there are three levels of complementarity: creative (0.2), normal (0.5), and routinized (0.8). The parameter C_machine (i.e., the costs related to the machine) specifies the costs of the machine, which the company incurs after each time unit (i.e., average costs per time unit in the form of maintenance and replacement). The parameter P_machine specifies the initial performance of the machine (i.e., its performance at t = 0). Human performance levels are drawn randomly from a standardized normal distribution. To make the performance and costs of the machine and the human comparable, the assumption is made that a machine at t = 0 is comparable to a human performing his tasks really poorly. Therefore, the default value for P_machine is minus five, meaning its performance is five standard deviations worse than that of the average human. For C_machine a similar assumption is made: the default value is ten, meaning that compared to the average human, the machine incurs ten standard deviations more costs. The parameter µ (i.e., the learning rate of the machine) specifies how well the machine learns from each interaction with a human; the more interactions there are, the faster the machine can learn. The parameter ε (i.e., learning rate decay) specifies how quickly this learning rate decreases over time (see formula 2). The parameter γ (i.e., substitutable threshold) defines at which complementarity level humans will feel that their jobs are at risk of automation. For example, a human with a complementarity level of 0.9 will feel that his job is at risk if the default threshold (0.8) is used. The final input parameter φ (i.e., job variety) specifies the standard deviation used when the humans' complementarity levels are randomly drawn from a normal distribution: the higher the job variety, the more variation between the humans in terms of their complementarity levels.
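As an illustration, an input file with the default values from table 3.1 would look as follows — one value per line, in the order in which Simulator::readInput (Appendix B) reads them, with the character 'n' selecting normal work (this sample file itself is not part of the thesis):

    1000
    200
    1000
    100
    0.8
    n
    10
    -5
    0.1
    0.8
    0.2
    0.00005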

3.2 Machine

3.2.1 Attributes

The machine's initial attributes are all defined at the beginning of the simulation. In the current model, there are only fixed costs for the machine, reflected in the parameter C_machine, which does not change over time. However, the parameters µ and P_machine do change over time, as summarized in table 3.2.

Name                 Symbol     Comment
learning rate        µ          An input parameter; slowly decreases over time.
learning rate decay  ε          An input parameter; stays constant over time.
costs                C_machine  An input parameter; stays constant over time.
performance          P_machine  An input parameter; changes over time, depending on the humans and the quality of information they provide to the machine.

Table 3.2: Attributes of the machine.

3.2.2 Functions

For the machine, two functions are defined. First, there is a function which is called every time a human interacts with the machine (see formula 1). In this formula, I is the quality of the information the human provides to the machine, i refers to the firm in which the machine is located, and t refers to the current time unit (i.e., how many iterations have already passed for the firm).

P_machine_{i,t+1} = P_machine_{i,t} + µ_{i,t} · I    (1)

Second, after each interaction the machine's learning rate decays slightly (see formula 2). Over time, the learning rate therefore decreases to the point that the machine can no longer improve; at this point, the machine has reached its limit.

µ_{i,t+1} = µ_{i,t} · (1 − ε)    (2)
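Formulas 1 and 2 correspond directly to Machine::use in Appendix B; a minimal self-contained sketch of that update:

    // Sketch of formulas (1) and (2), mirroring Machine::use in Appendix B.
    struct MachineSketch {
        double performance = -5.0;   // P_machine at t = 0 (default from table 3.1)
        double learningRate = 0.1;   // µ (default)
        double decay = 0.00005;      // ε (default)

        void use(double input) {     // input is I, the quality of information
            performance += learningRate * input;   // formula (1)
            learningRate *= (1.0 - decay);         // formula (2)
        }
    };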

3.3 Human

3.3.1 Attributes

The human has three important attributes (see table 3.3): performance, costs, and complementarity. It is important to know that in the current model these values do not change any more after initialization. Performance is drawn randomly from a standardized normal distribution. This performance value, together with the costs performance dependency, is then used to determine the human's costs (see 3.3.2).

Name             Symbol   Comment
performance      P_human  Initialized by a random draw from a standardized normal distribution.
costs            C_human  Defined by a random draw using P_human and α.
complementarity  θ        Initialized by a random draw from a normal distribution using the input parameters WT and φ.

Table 3.3: Attributes of the human.

The type of work determines which mean is used for the random draw, whereas the job variety determines the standard deviation of the draw. To keep the values for complementarity between zero and one, a function rounds values above one down to one and values below zero up to zero. The level of complementarity has to lie between zero and one because it is used as the fraction of a given task performed by the machine rather than by the human.
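This initialization mirrors the Human constructor in Appendix B; a condensed sketch (not the thesis's literal code):

    // Condensed sketch of the complementarity draw from human.cpp (Appendix B).
    #include <algorithm>
    #include <random>

    double drawComplementarity(char typeOfWork, double jobVariety, std::mt19937 &gen) {
        double mean = 0.5;                  // 'n': normal job
        if (typeOfWork == 'r') mean = 0.8;  // routinized job
        if (typeOfWork == 'c') mean = 0.2;  // creative job
        std::normal_distribution<double> dist(mean, jobVariety);
        return std::max(0.0, std::min(1.0, dist(gen)));  // keep θ within [0, 1]
    }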

3.3.2 Costs function

If we define a function f(mean, sd) that returns random values from a normal distribution, then the costs for each human j are initialized in the following way:

C_human_{j,0} = α · (−P_human_{j,0}) + (1 − α) · f(0, 1)    (3)

Because of the minus sign before P_human, a higher performance level results in lower costs for the firm.
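A minimal sketch of formula 3, with the minus sign written out explicitly:

    // Sketch of formula (3): cost initialization for one human (α = 0.8 by default).
    #include <random>

    double initCost(double performance, double alpha, std::mt19937 &gen) {
        std::normal_distribution<double> stdNormal(0.0, 1.0);  // f(0, 1)
        // the minus sign ties high performance to low costs
        return alpha * (-performance) + (1.0 - alpha) * stdNormal(gen);
    }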

3.4 Firms

3.4.1 Attributes

In this research, the firm's performance and costs levels are central. Moreover, for each firm we are interested in how many of its employees got fired, so there is a variable for the number of people fired. The number of employees who are substitutable is also tracked: a human (or employee) can potentially be substituted if his complementarity level exceeds the substitutable threshold. Finally, all firms have N_humans employees and one machine; therefore each firm has a machine object and a list (in C++, a vector is used as the data structure) containing all the human objects. All the firm's attributes and symbols are shown in table 3.4.

Name                            Symbol           Comment
costs                           C_firm           Reset to zero each iteration, so that the firm's costs are known for each iteration.
performance                     P_firm           Reset to zero each iteration, so that the firm's performance is known for each iteration.
number of people substitutable  N_substitutable  Each iteration, the number of substitutable people is counted.
number of people fired          N_fired          Tracks how many people have already had to leave the company.
humans                          L_humans         A list of all the human objects currently in the company (for the human's attributes see table 3.3).
machine                         M                An instance of the Machine class (for the machine's attributes see table 3.2).

Table 3.4: Attributes of the firm.

3.4.2 Functions

For each firm i, the costs, the performance, and the number of substitutable people are calculated in every iteration. For each firm i we iterate through all of the humans (i.e., L_humans). For each human j, we check whether his complementarity level is higher than the substitutable threshold; if so, the human is marked as substitutable. Therefore, after iterating through L_humans_i, the number of substitutable people is known. The complementarity level θ is a measure of how much of the human's tasks can be replaced by the machine; before the machine is introduced, it can be perceived as zero. The possibility of perceiving θ as zero in formula 4 allows us to show what happens in just one formula, making it more understandable.

P_firm_{i,t} = ( Σ_{j=1}^{N_humans} [ θ_j · P_machine_{i,t} + (1 − θ_j) · P_human_j ] ) / N_humans    (4)

Since the costs of the machine and the humans do not change in the current model, the calculation of the firm's costs is more straightforward than that of the firm's performance. It works in the same way as for performance: in both calculations, an average is used to obtain the final output. However, in the costs formula we divide by N_humans + 1, because the firm's costs are determined by taking the average costs over all the units (i.e., the humans and the machine). Conversely, in the performance calculation we divide by N_humans, since we are interested in the average performance over all the tasks, where each task is partly determined by the machine and partly by the human (depending on the level of complementarity). Note that in this model the number of tasks equals the number of humans, which is probably not the case in the real world.

C_firm_{i,t} = C_machine_{i,t} / (N_humans + 1) + Σ_{j=1}^{N_humans} [ C_human_j / (N_humans + 1) ]    (5)
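A minimal sketch of formulas 4 and 5 for a single firm and time unit; Firm::nextIteration is not shown in full in Appendix B, so this is an illustration rather than the thesis's exact code:

    // Sketch of formulas (4) and (5): firm-level performance and costs.
    #include <vector>

    struct HumanSketch { double performance, costs, theta; };

    void aggregate(const std::vector<HumanSketch> &humans,
                   double machinePerformance, double machineCosts,
                   double &firmPerformance, double &firmCosts) {
        const double n = humans.size();      // N_humans
        firmPerformance = 0.0;
        firmCosts = machineCosts / (n + 1);  // machine's share in formula (5)
        for (const HumanSketch &h : humans) {
            // formula (4): each task is split between machine and human by θ
            firmPerformance += (h.theta * machinePerformance
                                + (1.0 - h.theta) * h.performance) / n;
            firmCosts += h.costs / (n + 1);  // formula (5)
        }
    }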

3.4.3 Recruitment decisions

A human is fired under two conditions: the machine can do most of the human's job, and the machine achieves a higher performance in performing the (complementary part of the) human's job. After the firing, the manager randomly hires a new human; the reason for this is that we assume the market is inefficient (e.g., quality signals do not work). Therefore, the number of humans within the company stays the same, and over time only the best humans are able to survive (i.e., humans with good performance and low complementarity levels). The substitution of human capital brings adjustment costs (e.g., recruitment costs): every time a human is fired and replaced by a new one, extra costs are randomly drawn from a standardized normal distribution and added to the total costs for the firm. Therefore, if the firm has to fire human capital at time t, the total recruitment costs C_recruitment come on top of the total costs for the firm; if the firm does not have to fire human capital at time t, C_recruitment = 0. To visualize this, formula 5 can be extended in the following way:

C_firm_{i,t} = C_recruitment + C_machine_{i,t} / (N_humans + 1) + Σ_{j=1}^{N_humans} [ C_human_j / (N_humans + 1) ]
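The two firing conditions can be expressed compactly; since the firm's recruitment code is not shown in Appendix B, the following is only an assumed shape of that check:

    // Sketch of the firing rule: the machine must be able to do most of the job
    // (θ above the substitutable threshold γ) and do that part better than the human.
    bool shouldFire(double theta, double substitutableThreshold,
                    double humanPerformance, double machinePerformance) {
        return theta > substitutableThreshold && machinePerformance > humanPerformance;
    }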

3.5 Simulation design

To obtain the results, we iterate over time and over all the firms. In each iteration, the values for each firm at time t are saved, so that in the end the average measures per time unit are obtained. The process is shown in algorithm 1. Finally, a Python script is used to plot the obtained data (i.e., using matplotlib).

Algorithm 1: Pseudocode of the main loop in Simulator::Run()

create arrays to save average values
initialize all the firms in an array named firms
for each t in the time interval do
    for each firm i in firms do
        i.nextIteration()
        avgPerformances[t] += P_firm_{i,t} / N_firms
        avgCosts[t] += C_firm_{i,t} / N_firms
        avgNumberOfPeopleSubstitutable[t] += N_substitutable_{i,t} / N_firms
        avgNumberOfPeopleFired[t] += N_fired_{i,t} / N_firms
        avgMachinePerformance[t] += P_machine_{i,t} / N_firms
    end
end
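Each line of the resulting output file contains the time unit followed by the five averages, separated by semicolons, mirroring Simulator::saveOutput in Appendix B:

    t; avgPerformances[t]; avgCosts[t]; avgNumberOfPeopleSubstitutable[t]; avgNumberOfPeopleFired[t]; avgMachinePerformance[t]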

3.6 Model robustness

The job variety, the machine parameters, the substitutable threshold, and the costs performance dependency could be changed in order to check whether the model is robust.

3.6.1 Job variety

3.6.2 Machine

For the machine, the initial performance, the learning rate, and the learning rate decay were varied (set to minus two, 0.5, and 0.00001, respectively).

3.6.3 Substitutable threshold

The substitutable threshold cannot realistically be a very low value: it would be strange for a company to start firing someone who still does most of the job. Therefore, the most extreme lower case is 0.6, where the human does less than the machine but still contributes a lot to the total performance of the task. Conversely, the substitutable threshold can realistically be a very high value: it would make sense that a human only loses his job if the machine can do 95 or 100 percent of it. Therefore, to validate the robustness of the model, these two high extreme cases (0.95 and 0.99) are tested together with the lower extreme case of 0.6.

3.6.4 Costs performance variable

4 Results

4.1 Main results

In the simulation, three types of work were studied: routinized, normal, and creative. The different types of jobs had different means for complementarity: creative (0.2), normal (0.5), and routinized (0.8). Figure 4.1 shows the main results in terms of performance and costs for these three types of jobs, plotting the average performance and costs for each type of job over a thousand periods. The figure clearly shows that the introduction of the machine worked out best for firms with normal jobs (i.e., using a mean of 0.5 to obtain complementarity levels for the humans). In both performance and costs, the normal type of job outperforms the routinized and creative types of job in the long run. Furthermore, in terms of performance and costs, the creative type of job outperforms the routinized type of job in the long run. Therefore, the introduction of the machine seems least useful for routinized jobs in terms of performance and costs, compared to creative and normal jobs.

Figure 4.1: Main results for the three types of work ((a) performance; (b) costs).

For routinized jobs, the machine never gets the opportunity to perform better than the human. Hence, the manager cannot fire any human, because the performance levels of the humans stay higher than the machine's performance level. Looking closer at the peaks for creative and normal jobs, we can also see that the costs for normal jobs fluctuate more than those for creative jobs. Moreover, for normal jobs the costs start to increase earlier, take longer to reach their maximum, and take longer to become stable than for creative jobs. The stronger fluctuations for normal jobs can be explained by the fact that a mean of 0.5 is used to obtain the complementarity level of a newly hired human, whereas for creative jobs a mean of 0.2 is used. The mean used for normal jobs is closer to the substitutable threshold (here γ = 0.8), so when a new human is hired, there is a higher likelihood that he exceeds the substitutable threshold (i.e., θ > γ) than for creative jobs. The same reasoning explains why for normal jobs the curve starts to increase earlier and takes longer to become stable. It cannot, however, explain why the maximum is reached earlier for creative jobs; this is because for creative jobs the machine's performance grows faster and reaches a higher level than for normal jobs, as can be seen in figure 4.2 (used and explained in more detail later). The higher machine performance increases the likelihood that the machine is better than the human, so the maximum in recruitment costs is reached earlier for creative jobs than for normal jobs.

The cost pattern can further be explained by Darwin's notion of ‘survival of the fittest’: only the best humans (i.e., those with a good performance in their jobs) survive the introduction of the machine. Third, by the assumption made for the cost performance dependency variable (i.e., people with higher performance incur fewer costs for the firm), the people who do survive will have lower costs. Therefore, in the end, the total costs for normal jobs are lower than for the other types of jobs. With the help of figure 4.2, which shows the machine's performance over time for the three types of jobs, the pattern becomes more understandable.

There are a couple of things to see in this figure. First, for routinized work the machine's performance does not seem to improve; this is shown more clearly in section 4.2. The main reason the machine's performance does not increase for routinized work is that too many humans are sabotaging the machine. Therefore, introducing the machine for routinized work does not seem to be valuable.

Figure 4.2: Machine’s performance for the three types of work

For creative jobs, although the machine reaches a higher performance, it is harder for the machine to assist the human in his job. For normal jobs, a larger part of the job can therefore be substituted than for creative jobs. Once the machine has reached a good level of intelligence, it can do this larger part of the normal job a lot better than the human would. Therefore, for normal jobs the machine has a greater potential to influence the firm's overall performance than for creative jobs.

4.2 Routinized work

From figure 4.3 it becomes clear that for routinized work the introduction of the machine had no real impact on the firms. As mentioned before, the main reason the machine's performance does not increase is that too many humans are sabotaging the machine.

Figure 4.3: Results for routinized work ((a) performance vs costs; (b) jobs; (c) machine performance).

For routinized jobs, the substitutable threshold is close to the mean used to draw the complementarity levels from the normal distribution. Therefore, there is a high likelihood that humans exceed the substitutable threshold, which makes them feel that their jobs are at risk of automation. When so many people are sabotaging the machine at the same time, the machine never gets the opportunity to learn over time and become good. Figure 4.3c even shows that the performance decreases over time, although this decrease is really small and the machine's performance stays around minus five (where it also started). Overall, routinized jobs have the potential to be replaced by the AI machine, but because of how the humans use the machine, it can never reach its full potential.

4.3 Normal work

The same ‘survival of the fittest’ reasoning can be applied here.

Figure 4.4: Results for normal work ((a) performance vs costs; (b) jobs; (c) machine performance).

As we can see from figures 4.3b, 4.4b, and 4.5b, the number of people substituted is highest for normal jobs. Companies with normal jobs therefore had the chance to replace worse human capital with better human capital, resulting in a higher performance level for the company. Moreover, due to the assumption behind the costs performance dependency parameter, higher performance also means lower costs. This is why normal jobs score better on cost reduction.

4.4 Creative work

For creative jobs, managers will not have to worry about employees sabotaging the machine or not using it, but they should worry about how well the AI machine can actually help the humans in their jobs.

Figure 4.5: Results for creative work ((a) performance vs costs; (b) jobs; (c) machine performance).

4.5 Robustness of the model

To analyze the robustness of the model, multiple robustness checks were executed; the results can be found in Appendix A.

4.5.1 Job variety

4.5.2 Machine

To check for robustness, the machine parameters initial performance (i.e., P_machine at t = 0), learning rate (i.e., µ), and learning rate decay (i.e., ε) were changed (see A.2.1-A.2.3). In all cases the same results were found: normal jobs outperform routinized and creative jobs in performance and costs. In fact, the differences in performance between the jobs have become even larger. It is also interesting to see that for routinized jobs the machine now has made a positive impact (i.e., performance increases and people are being fired), although the effect is still very small compared to creative and normal jobs.

4.5.3 Substitutable threshold

With a higher substitutable threshold, fewer humans feel that their jobs are at risk, so most use the machine with good intentions. As a result, the machine is able to improve over time, which makes it possible for the manager to fire people whose complementarity exceeds the threshold and who perform worse than the machine.

4.5.4 Costs performance dependency

5 Discussion and Conclusion

5.1 Theoretical implications

Prior studies focused mainly on how AI and industrial machines impact productivity and employment, and did not compare the performance and costs impacts on different types of jobs after the introduction of an AI machine. To the best of our knowledge, this thesis is the first to introduce a simulation model to compare how an intelligent (AI) machine impacts different types of jobs in terms of costs and performance. In this study, we use agent-based modeling to understand the impact and the usage of the machine, which is able to learn over time, on the different types of jobs.

In an organization, humans will behave differently after the introduction of the machine depending on the type of job they have, because the more the job consists of routine tasks, the more the human feels that his job is at risk of automation. The human can behave in two ways: either he is scared to lose his job to the machine and therefore tries to sabotage it, or he is not scared and is willing to use the machine to its full benefits. Managers therefore face a tradeoff in their decision making. On the one hand, they want to introduce the machine to improve performance and reduce costs. On the other hand, the introduction of the machine can lead to employees sabotaging it, which increases costs for the organization. This thesis contributes to the AI literature by showing how managers can cope with this tradeoff. It tries to give managers an idea of the types of job for which the introduction of the machine works best in terms of performance and costs. We find that after the introduction of the machine, normal jobs outperform routinized and creative jobs both in performance and costs. Also, in terms of performance and costs, creative jobs outperform the routinized type of job in the long run.

Proposition 1: Normal jobs outperform routinized jobs and creative jobs in performance and costs after the introduction of an AI machine in the long run.

Proposition 2: Creative jobs outperform routinized jobs in performance and costs after the introduction of an AI machine in the long run.

For routinized jobs, because too many humans feel that their jobs are at risk of automation, the machine does not get the opportunity to increase its intelligence and performance. Therefore, the manager cannot fire any human, because the performance levels of the humans stay higher than the machine's performance level. Also, because the machine's performance level does not increase, it cannot influence the firm's overall performance level.

Normal jobs are defined as the middle ground between routinized jobs and creative jobs, consisting more or less equally of routine and non-routine tasks. Therefore, for normal jobs, a mean of 0.5 is used to obtain complementarity levels for the humans. This complementarity level is lower than for routinized jobs, so the machine cannot assist the human in his job as well as for routinized jobs. However, the humans will feel less that their jobs are at risk of automation, since the gap to the substitutable threshold has become larger compared to routinized jobs. Therefore, in contrast with routinized jobs, the machine can become better over time, because people are willing to use the machine to its full benefit instead of sabotaging it. Admittedly, for routinized jobs the machine had a higher potential to replace tasks and help the human because of the higher average complementarity level, but the result for routinized jobs was only an increase in costs, while performance stayed stable over time.

Proposition 3: The higher the average complementarity level of the humans, the higher the risk the machine will only incur extra costs for the organization.

The machine's performance can only grow when the humans are using it with good intentions. This implies that when the average complementarity level of the humans is low, the machine's performance can become high. At the same time, although the machine's performance can become high, the lower complementarity level of the job makes the machine less useful: the machine is simply not capable of performing a large part of the job. However, the small part of the job which the machine is capable of doing will be performed really well, since the machine's performance is high.

Proposition 4: The lower the average complementarity level of the humans, the less effective the machine becomes for increasing performance and reducing costs for the organization.

Proposition 5: The lower the average complementarity level of the humans, the higher the machine’s performance level will become.

To conclude: although the machine reaches a higher performance for creative jobs than for normal jobs, the overall firm performance level for normal jobs is higher, because the machine is less useful for creative jobs than for normal jobs. The machine can better assist the human in normal jobs and therefore has more influence on improving the overall performance of the firm. Conversely, for routinized jobs the machine is better able to assist the human than for normal jobs, but for routinized jobs the machine's performance is not able to grow in the way it does for normal and creative jobs. This is because too many people feel that their jobs are at risk of automation, which gives them an incentive to sabotage the machine in order to keep their jobs. This implies that normal jobs strike the right balance: a lot of work can be replaced without making the employees worry about losing their jobs to the machine. It also implies that there is a tradeoff between the machine's performance level and the machine's effectiveness (i.e., how useful the machine is).

To use the machine optimally, there must be a balance between seeking to increase the machine's performance on the one hand and seeking to increase the machine's effectiveness on the other.

5.2 Practical implications

One way is to introduce competition among the employees, for example through a policy that rewards the top ten percent of employees with bonuses on top of their base salary. This will motivate the employees to use the machine, because the machine has the potential to raise the performance of a certain task or to allow the human to do more in less time. Also, humans who do not use the machine, or who try to sabotage it, will not survive the competition, which is another reason why competition may motivate employees to use the machine in a good way. Another way is to introduce policies which aim to decrease the average level of complementarity. This could be done by giving the humans tasks the machine cannot do, which will make them feel that they will not lose their jobs to the machine. A manager should therefore think about re-educating these employees, training them to do more non-routine tasks which the machine cannot do. This is similar to what Trajtenberg (2018) suggested with innovative strategies for governments to enhance the positive effects of AI, where one of his strategies was to make education more oriented to the changing nature of skills required for the AI era (e.g., analytical, creative, interpersonal, and personal).

5.3 Limitations

5.4 Future research

Bibliography

[1] Acemoglu, D., & Restrepo, P. (2016). The race between machine and man: Implications of technology for growth, factor shares, and employment. Working paper, MIT.

[2] Acemoglu, D., & Restrepo, P. (2017). Robots and jobs: Evidence from US labor markets. NBER Working Paper, (w23285).

[3] Autor, D. H., Levy, F., & Murnane, R. J. (2003). The skill content of recent technological change: An empirical exploration. The Quarterly Journal of Economics, 118(4), 1279-1333.

[4] Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. The Journal of Economic Perspectives, 29(3), 3-30.

[5] Autor, D., & Salomons, A. (2018). Is automation labor share-displacing? Productivity growth, employment, and the labor share. Brookings Papers on Economic Activity, 2018(1), 1-87.

[6] Berends, P., & Romme, G. (1999). Simulation as a research tool in management studies. European Management Journal, 17(6), 576-583.

[7] Bessen, J. (2018). AI and jobs: The role of demand (No. w24235). National Bureau of Economic Research.

[8] Brynjolfsson, E., Rock, D., & Syverson, C. (2017). Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics (No. w24001). National Bureau of Economic Research.

[9] Brynjolfsson, E., Mitchell, T., & Rock, D. (2018, May). What can machines learn, and what does it mean for occupations and the economy? In AEA Papers and Proceedings (Vol. 108, pp. 43-47).

[10] Brzeski, C., & Burk, I. (2015). Die Roboter kommen. Folgen der Automatisierung für den deutschen Arbeitsmarkt. ING-DiBa Economic Research, 30.

[11] Cefis, E., & Marsili, O. (2005). A matter of life and death: Innovation and firm survival. Industrial and Corporate Change, 14(6), 1167-1192.

[12] Cockburn, I. M., Henderson, R., & Stern, S. (2018). The impact of artificial intelligence on innovation (No. w24449). National Bureau of Economic Research.

[13] Daugherty, P. R., & Wilson, H. J. (2018). Human + machine: Reimagining work in the age of AI. Harvard Business Press.

[14] Davis, J. P., Eisenhardt, K. M., & Bingham, C. B. (2007). Developing theory through simulation methods. Academy of Management Review, 32(2), 480-499.

[15] Frey, C. B., & Osborne, M. (2013). The future of employment.

[16] Gaggl, P., & Wright, G. C. (2017). A short-run view of what computers do: Evidence from a UK tax incentive. American Economic Journal: Applied Economics, 9(3), 262-294.

[17] Graetz, G., & Michaels, G. (2015). Robots at work. CEP Discussion Paper No. 1335.

[18] Harrison, J. R., Lin, Z., Carroll, G. R., & Carley, K. M. (2007). Simulation modeling in organizational and management research. Academy of Management Review, 32(4), 1229-1245.

[19] Levy, F., & Murnane, R. J. (2005). The new division of labor: How computers are creating the next job market. Princeton University Press.

[20] Markov, J. (2015). Machines of loving grace. HarperCollins Publishers, New York.

[21] Mycielski, J. (1992). Games with perfect information. Handbook of Game Theory with Economic Applications, Volume 1, pp. 41-70.

[22] Nambisan, S., Lyytinen, K., Majchrzak, A., & Song, M. (2017). Digital innovation management: Reinventing innovation management research in a digital world. MIS Quarterly, 41(1).

[23] Pajarinen, M., & Rouvinen, P. (2014). Computerization threatens one third of Finnish employment. ETLA Brief, 22(13.1), 2014.

[24] Kurzweil, R. (1990). The age of intelligent machines. MIT Press.

A Robustness Checks

A.1 Job variety

A.1.1 Equal to 0.3

Figure A.1: Results for φ = 0.3 for routinized work ((a) performance vs costs; (b) jobs; (c) machine performance).
Figure A.2: Results for φ = 0.3 for normal work ((a) performance vs costs; (b) jobs; (c) machine performance).

A.1.2 Equal to 0.1

Figure A.4: Results for φ = 0.1 for routinized work ((a) performance vs costs; (b) jobs; (c) machine performance).
Figure A.5: Results for φ = 0.1 for normal work ((a) performance vs costs; (b) jobs; (c) machine performance).

A.1.3 Equal to 0.05

Figure A.7: Results for φ = 0.05 for routinized work ((a) performance vs costs; (b) jobs; (c) machine performance).
Figure A.8: Results for φ = 0.05 for normal work ((a) performance vs costs; (b) jobs; (c) machine performance).

A.2 Machine

A.2.1 Initial performance equal to minus two

Figure A.10: Results for P_machine,t=0 = −2 for routinized work ((a) performance vs costs; (b) jobs; (c) machine performance).
Figure A.11: Results for P_machine,t=0 = −2 for normal work ((a) performance vs costs; (b) jobs; (c) machine performance).

A.2.2 Learning rate equal to 0.5

Figure A.13: Results for µ = 0.5 for routinized work ((a) performance vs costs; (b) jobs; (c) machine performance).
Figure A.14: Results for µ = 0.5 for normal work ((a) performance vs costs; (b) jobs; (c) machine performance).

A.2.3 Learning rate decay equal to 0.00001

Figure A.16: Results for ε = 0.00001 for routinized work ((a) performance vs costs; (b) jobs; (c) machine performance).
Figure A.17: Results for ε = 0.00001 for normal work ((a) performance vs costs; (b) jobs; (c) machine performance).

A.3 Substitutable threshold

A.3.1 Equal to 0.6

Figure A.19: Results for γ = 0.6 for routinized work ((a) performance vs costs; (b) jobs; (c) machine performance).
Figure A.20: Results for γ = 0.6 for normal work ((a) performance vs costs; (b) jobs; (c) machine performance).

A.3.2 Equal to 0.95

Figure A.22: Results for γ = 0.95 for routinized work ((a) performance vs costs; (b) jobs; (c) machine performance).
Figure A.23: Results for γ = 0.95 for normal work ((a) performance vs costs; (b) jobs; (c) machine performance).

A.3.3 Equal to 0.99

Figure A.25: Results for γ = 0.99 for routinized work ((a) performance vs costs; (b) jobs; (c) machine performance).
Figure A.26: Results for γ = 0.99 for normal work ((a) performance vs costs; (b) jobs; (c) machine performance).

A.4 Costs performance dependency

A.4.1 Equal to 0.2

Figure A.28: Results for α = 0.2 for routinized work ((a) performance vs costs; (b) jobs; (c) machine performance).
Figure A.29: Results for α = 0.2 for normal work ((a) performance vs costs; (b) jobs; (c) machine performance).

B Source Code

The source code used for the simulations and the usage instructions can also be found here: https://github.com/nbhaan/Agents-Based-Model-Simulation

/* main.cpp
 *
 * written by Niels Haan
 */

#include "simulator.h"
#include <time.h>

int main(int argc, char *argv[]) {
    clock_t startTime = clock();
    if (argc < 2 || argc > 3) {
        std::cerr << "Usage: " << argv[0] << " in-file [out-file]" << std::endl;
        return 1;
    }

    Simulator simulator;

    if (!simulator.readInput(argv[1])) {
        std::cerr << "Error: reading input from " << argv[1] << " failed - no output generated." << std::endl;
        return 1;
    }

    std::string ofname;
    if (argc >= 3) {
        ofname = argv[2];
    } else {
        ofname = argv[1];
        if (ofname.size() >= 3 && ofname.substr(ofname.size() - 3) == ".in") {
            ofname = ofname.substr(0, ofname.size() - 3);
        }
        ofname += ".out";
    }
    simulator.run();
    simulator.saveOutput(ofname);

    // elapsed run time in seconds (reconstructed line)
    std::cout << "--- " << (double)(clock() - startTime) / CLOCKS_PER_SEC
              << " seconds ---" << std::endl;
    return 0;
}

Code 1: main.cpp

/* simulator.h
 *
 * written by Niels Haan
 */

#ifndef SIMULATOR_H
#define SIMULATOR_H

#include <iostream>
#include <iomanip>
#include <string>
#include <algorithm>
#include <vector>
#include "parameters.h"
#include "firm.h"

class Simulator {
private:
    Parameters *parameters;
    std::vector<Firm> firms;
    std::vector<double> avgPerformances;
    std::vector<double> avgCosts;
    std::vector<double> avgNumberOfPeopleSubstitutable;
    std::vector<double> avgNumberOfPeopleFired;
    std::vector<double> avgMachinePerformance;
    void initFirms();
public:
    Simulator() { };
    ~Simulator();
    void run();
    bool readInput(std::string filename);
    void saveOutput(std::string filename);
};

#endif

Code 2: simulator.h

/* simulator.cpp
 *
 * written by Niels Haan
 */

#include "simulator.h"
#include "omp.h"
#include <fstream>
#include <iomanip>
#include <math.h>

Simulator::~Simulator() {
    delete parameters; // parameters is allocated with new, so delete (not free) is required
}

bool stringToBool(std::string s) {
    if (s == "true" || s == "True" || s == "TRUE" || s == "T" || s == "t") {
        return true;
    } else if (s == "false" || s == "False" || s == "FALSE" || s == "F" || s == "f") {
        return false;
    } else {
        std::cerr << "Invalid argument for function stringToBool" << std::endl;
        return false;
    }
}

void Simulator::initFirms() {
    for (int i = 0; i < parameters->numberOfFirms; i++) {
        firms.push_back(Firm(*parameters));
    }
}

void Simulator::run() {
    int progress = 0;
    avgPerformances.resize(parameters->time);
    avgCosts.resize(parameters->time);
    avgNumberOfPeopleSubstitutable.resize(parameters->time);
    avgNumberOfPeopleFired.resize(parameters->time);
    avgMachinePerformance.resize(parameters->time);

    initFirms();

    std::cout << "Running simulation.." << std::endl;
    for (int t = 0; t < parameters->time; t++) {
        for (int i = 0; i < parameters->numberOfFirms; i++) {
            firms[i].nextIteration(*parameters);
            avgPerformances[t] += firms[i].getPerformance() / parameters->numberOfFirms;
            avgCosts[t] += firms[i].getCosts() / parameters->numberOfFirms;
            avgNumberOfPeopleSubstitutable[t] += (double)firms[i].getNumberOfPeopleSubstitutable() / parameters->numberOfFirms;
            avgNumberOfPeopleFired[t] += (double)firms[i].getNumberOfPeopleFired() / parameters->numberOfFirms;
            avgMachinePerformance[t] += firms[i].getMachinePerformance() / parameters->numberOfFirms;
        }
        progress++;
        std::cout << "\r" << std::fixed << std::setprecision(2) << "Progress: "
                  << (double)progress / parameters->time * 100 << "% " << std::flush;
    }
    std::cout << std::endl << "Simulation finished." << std::endl;
}

bool Simulator::readInput(std::string filename) {
    std::ifstream file(filename);
    std::string line;

    if (!file) {
        std::cerr << "Error: unable to open " << filename << " for reading." << std::endl;
        return false;
    }

    getline(file, line);
    int time = stoi(line);
    getline(file, line);
    int numberOfHumans = stoi(line);
    getline(file, line);
    int numberOfFirms = stoi(line);
    getline(file, line);
    int introMachine = stoi(line);
    getline(file, line);
    double costsPerfDependency = stod(line);
    getline(file, line);
    char typeOfWork = line[0]; // 'r', 'n', or 'c' (reconstructed line; see the Human constructor)
    getline(file, line);
    double costsMachine = stod(line);
    getline(file, line);
    double initPerfMachine = stod(line);
    getline(file, line);
    double learningRateOfTheMachine = stod(line);
    getline(file, line);
    double substitutableTreshold = stod(line);
    getline(file, line);
    double jobVariety = stod(line);
    getline(file, line);
    double learningRateDecay = stod(line);
    parameters = new Parameters(time, numberOfHumans, numberOfFirms, introMachine,
        costsPerfDependency, typeOfWork, costsMachine, initPerfMachine,
        learningRateOfTheMachine, substitutableTreshold, jobVariety, learningRateDecay);

    return true;
}

void Simulator::saveOutput(std::string filename) {
    std::ofstream output;
    output.open(filename);

    for (int t = 0; t < parameters->time; t++) {
        output << std::fixed << std::setprecision(4) << t << "; "
               << avgPerformances[t] << "; " << avgCosts[t] << "; "
               << avgNumberOfPeopleSubstitutable[t] << "; "
               << avgNumberOfPeopleFired[t] << "; "
               << avgMachinePerformance[t] << std::endl;
    }
    output.close();
}

Code 3: simulator.cpp

/* parameters.h
 *
 * written by Niels Haan
 */

#ifndef PARAMETERS_H
#define PARAMETERS_H

class Parameters {
public:
    const int time;
    const int numberOfHumans;
    const int numberOfFirms;
    const int introMachine;
    const double costsPerfDependency;
    const char typeOfWork;
    const double costsMachine;
    const double initPerfMachine;
    const double learningRateOfTheMachine;
    const double substitutableTreshold;
    const double jobVariety;
    const double learningRateDecay;

    Parameters(int time, int numberOfHumans, int numberOfFirms, int introMachine,
        double costsPerfDependency, char typeOfWork, double costsMachine,
        double initPerfMachine, double learningRateOfTheMachine,
        double substitutableTreshold, double jobVariety, double learningRateDecay);
};

#endif

Code 4: parameters.h

1 /* parameters.cpp

2 *

3 * written by Niels Haan

4 */ 5

6 #include "parameters.h"

7

8 Parameters::Parameters(int time, int numberOfHumans, int

numberOfFirms, int introMachine, double costsPerfDependency, char

typeOfWork, double costsMachine, double initPerfMachine, 9 double learningRateOfTheMachine, double substitutableTreshold,

double jobVariety, double learningRateDecay): time(time), numberOfHumans(numberOfHumans), 10 numberOfFirms(numberOfFirms), introMachine(introMachine), costsPerfDependency(costsPerfDependency), typeOfWork(typeOfWork), costsMachine(costsMachine), initPerfMachine(initPerfMachine), 11 learningRateOfTheMachine(learningRateOfTheMachine), substitutableTreshold(substitutableTreshold), jobVariety( jobVariety), learningRateDecay(learningRateDecay) 12 { } Code 5: parameters.cpp 1 /* machine.h 2 *

3 * written by Niels Haan

4 */ 5

(74)

7 #define MACHINE_H 8 9 class Machine { 10 private: 11 double learningRate; 12 double performance; 13 double costs; 14 double learningRateDecay; 15 public: 16 Machine(){ };

17 Machine(double learningRate, double performance, double costs,

double learningRateDecay); 18 void use(double input); 19 double getPerformance(); 20 double getCosts(); 21 }; 22 23 #endif Code 6: machine.h 1 /* machine.cpp 2 *

/* machine.cpp
 *
 * written by Niels Haan
 */

#include "machine.h"
#include <iostream>

Machine::Machine(double learningRate, double performance, double costs,
    double learningRateDecay): learningRate(learningRate),
    performance(performance), costs(costs),
    learningRateDecay(learningRateDecay)
{ }

void Machine::use(double input) {
    performance = performance + learningRate*input;
    learningRate = learningRate*(1 - learningRateDecay);
}

double Machine::getPerformance() {
    return performance;
}

double Machine::getCosts() {
    return costs;
}

Code 7: machine.cpp
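Machine::use implements a simple learning curve: every use shifts performance by learningRate times the (signed) input, while the learning rate itself decays geometrically, so later uses move performance less than earlier ones. A minimal sketch of that dynamic against the interface above; the constructor values are illustrative, not the thesis parameterization:

#include <iostream>
#include "machine.h"

int main() {
    // Illustrative values only: learning rate 0.1, initial performance -1.0,
    // costs 0.5, learning rate decay 0.0001.
    Machine machine(0.1, -1.0, 0.5, 0.0001);

    // Feed the machine a constant positive input: performance rises,
    // but each step adds slightly less than the previous one.
    for (int t = 0; t < 5; t++) {
        machine.use(1.0);
        std::cout << "t=" << t << " performance="
                  << machine.getPerformance() << std::endl;
    }
    return 0;
}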

/* human.h
 *
 * written by Niels Haan
 */

#ifndef HUMAN_H
#define HUMAN_H

#include "parameters.h"
#include "machine.h"

class Human {
private:
    double performance;
    double costs;
    double complementarity;
public:
    Human() { };
    Human(Parameters parameters);
    void update(Parameters parameters, Machine &machine);
    void print();
    double getPerformance() const;
    double getComplementarity() const;
    double getCosts() const;
    double getRecruitmentCosts();
    bool operator<(const Human &h) const;
};

#endif

Code 8: human.h

/* human.cpp
 *
 * written by Niels Haan
 */

#include "human.h"
#include <iostream>
#include <cmath>
#include <random>

const double MEAN_STANDARDIZED_DIST = 0.0;
const double STANDARDIZED_SD = 1.0;

// type of jobs
const double ROUTINIZED_MEAN = 0.8;
const double NORMAL_MEAN = 0.5;
const double CREATIVE_MEAN = 0.2;

double randomValueFromNormalDistribution(double mean, double sd) {
    std::random_device rd;
    std::mt19937 gen(rd());
    std::normal_distribution<double> distribution(mean, sd);
    return distribution(gen);
}

void setWithinSpecifiedRange(double &var, double min, double max) {
    if (var < min) {
        var = min;
    } else if (var > max) {
        var = max;
    }
}

Human::Human(Parameters parameters) {
    performance = randomValueFromNormalDistribution(
        MEAN_STANDARDIZED_DIST, STANDARDIZED_SD);
    costs = parameters.costsPerfDependency*performance
        + (1 - parameters.costsPerfDependency)
        *randomValueFromNormalDistribution(
            MEAN_STANDARDIZED_DIST, STANDARDIZED_SD);

    switch(parameters.typeOfWork) {
    case 'r': // routinized job
        complementarity = randomValueFromNormalDistribution(
            ROUTINIZED_MEAN, parameters.jobVariety);
        break;
    case 'n': // normal job
        complementarity = randomValueFromNormalDistribution(
            NORMAL_MEAN, parameters.jobVariety);
        break;
    case 'c': // creative job
        complementarity = randomValueFromNormalDistribution(
            CREATIVE_MEAN, parameters.jobVariety);
        break;
    }
    setWithinSpecifiedRange(complementarity, 0, 1);
}

void Human::update(Parameters parameters, Machine &machine) {
    if (complementarity > parameters.substitutableTreshold) {
        // reason to sabotage
        machine.use(-std::abs(randomValueFromNormalDistribution(
            MEAN_STANDARDIZED_DIST, STANDARDIZED_SD))
            /parameters.numberOfHumans);
    } else {
        // reason to use machine to full benefit
        machine.use(std::abs(randomValueFromNormalDistribution(
            MEAN_STANDARDIZED_DIST, STANDARDIZED_SD))
            /parameters.numberOfHumans);
    }
}

void Human::print() {
    std::cout << "Human: " << "[" << performance << ", " << costs
        << ", " << complementarity << "]" << std::endl;
}

double Human::getPerformance() const {
    return performance;
}

double Human::getComplementarity() const {
    return complementarity;
}

double Human::getCosts() const {
    return costs;
}

double Human::getRecruitmentCosts() {
    return std::abs(randomValueFromNormalDistribution(
        MEAN_STANDARDIZED_DIST, STANDARDIZED_SD));
}

bool Human::operator<(const Human &h) const {
    return performance < h.getPerformance();
}

Code 9: human.cpp
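The only attribute separating the three job types is the mean of the complementarity draw: 0.2 for creative, 0.5 for normal, and 0.8 for routinized work, each with jobVariety as standard deviation and clipped to [0, 1]. In update, anyone above the substitutable threshold sabotages the machine; everyone else improves it. A stand-alone sketch of that draw-and-classify step (the means match human.cpp; the jobVariety, threshold, and fixed seed below are illustrative assumptions):

#include <iostream>
#include <random>

int main() {
    const double means[3] = {0.2, 0.5, 0.8};   // creative, normal, routinized
    const char* labels[3] = {"creative", "normal", "routinized"};
    const double jobVariety = 0.2;             // assumed standard deviation
    const double substitutableTreshold = 0.8;  // assumed threshold

    std::mt19937 gen(42);  // fixed seed, for a reproducible illustration
    for (int type = 0; type < 3; type++) {
        std::normal_distribution<double> dist(means[type], jobVariety);
        double complementarity = dist(gen);
        if (complementarity < 0) complementarity = 0;  // clip to [0, 1]
        if (complementarity > 1) complementarity = 1;
        std::cout << labels[type] << ": " << complementarity
                  << (complementarity > substitutableTreshold
                      ? " (substitutable -> sabotages)"
                      : " (complementary -> uses machine)") << std::endl;
    }
    return 0;
}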

/* firm.h
 *
 * written by Niels Haan
 */

#ifndef FIRM_H
#define FIRM_H

#include <vector>
#include "human.h"

class Firm {
private:
    int time;
    double performance;
    double costs;
    int numberOfPeopleSubstitutable;
    int numberOfPeopleFired;
    std::vector<Human> humans;
    Machine machine;
    void initHumans(Parameters parameters, std::vector<Human> &humans,
        int sampleSize);
public:
    Firm() { };
    Firm(Parameters parameters);
    void nextIteration(Parameters parameters);
    double getMachinePerformance();
    double getPerformance();
    double getCosts();
    int getNumberOfPeopleSubstitutable();
    int getNumberOfPeopleFired();
    void print();
    void printHumans();
};

#endif

Code 10: firm.h

/* firm.cpp
 *
 * written by Niels Haan
 */

#include "firm.h"
#include <iostream>
#include <iomanip>

Firm::Firm(Parameters parameters) {
    time = 0;
    performance = 0;
    costs = 0;
    numberOfPeopleSubstitutable = 0;
    numberOfPeopleFired = 0;
    initHumans(parameters, humans, parameters.numberOfHumans);
    machine = Machine(parameters.learningRateOfTheMachine,
        parameters.initPerfMachine, parameters.costsMachine,
        parameters.learningRateDecay);
}

void Firm::initHumans(Parameters parameters, std::vector<Human> &humans,
        int sampleSize) {
    Human human;
    for (int i=0; i<sampleSize; i++) {
        human = Human(parameters);
        humans.push_back(human);
    }
}

void Firm::nextIteration(Parameters parameters) {
    performance = 0;
    costs = 0;
    numberOfPeopleSubstitutable = 0;
    for (int h=0; h<(int)humans.size(); h++) {
        if (humans[h].getComplementarity() > parameters.substitutableTreshold) {
            numberOfPeopleSubstitutable++;
        }

        if (time > parameters.introMachine) {
            humans[h].update(parameters, machine);
            if (machine.getPerformance() > humans[h].getPerformance()
                    && humans[h].getComplementarity()
                        <= parameters.substitutableTreshold) {
                performance += (humans[h].getComplementarity()
                    *machine.getPerformance()) / parameters.numberOfHumans;
            } else {
                performance += humans[h].getPerformance()
                    / parameters.numberOfHumans;
            }

            costs += humans[h].getCosts() / (parameters.numberOfHumans + 1);

            if (humans[h].getComplementarity() > parameters.substitutableTreshold
                    && machine.getPerformance() > humans[h].getPerformance()) {
                // fire human

                // replace human with other human
                humans.at(h) = Human(parameters);

                // adjustment costs for new human
                costs += humans[h].getRecruitmentCosts();

                // remove human instead of replacing
                //humans.erase(humans.begin() + h);

                numberOfPeopleSubstitutable--;
                numberOfPeopleFired++;
            }
        } else {
            performance += humans[h].getPerformance()
                / parameters.numberOfHumans;
            costs += humans[h].getCosts() / parameters.numberOfHumans;
        }
    }
    if (time > parameters.introMachine) {
        costs += machine.getCosts() / (parameters.numberOfHumans + 1);
    }
    time++;
}

double Firm::getMachinePerformance() {
    return machine.getPerformance();
}

double Firm::getPerformance() {
    return performance;
}

double Firm::getCosts() {
    return costs;
}

int Firm::getNumberOfPeopleSubstitutable() {
    return numberOfPeopleSubstitutable;
}

int Firm::getNumberOfPeopleFired() {
    return numberOfPeopleFired;
}

void Firm::print() {
    std::cout << std::fixed << std::setprecision(2) << "Firm: " << "["
        << time << ", " << performance << ", " << costs << ", "
        << numberOfPeopleSubstitutable << ", " << numberOfPeopleFired
        << "]" << std::endl;
}

void Firm::printHumans() {
    for (int i=0; i<(int)humans.size(); i++) {
        humans[i].print();
    }
}

Code 11: firm.cpp
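Taken together, Firm::nextIteration averages performance over the workforce (blending in the machine for complementary humans it outperforms), spreads costs over numberOfHumans + 1 once the machine is introduced, and replaces any substitutable human the machine outperforms. A minimal driver over the public interface above, assuming the headers from Codes 4 and 10; the parameter values are illustrative, not the thesis defaults:

#include <iostream>
#include "firm.h"

int main() {
    // Illustrative configuration: 500 periods, 50 humans, 1 firm,
    // machine introduced at t=100, normal jobs ('n'); the remaining
    // values are assumptions, not the reported parameterization.
    Parameters parameters(500, 50, 1, 100, 0.5, 'n',
                          0.5, -1.0, 0.1, 0.8, 0.2, 0.0001);
    Firm firm(parameters);

    for (int t = 0; t < parameters.time; t++) {
        firm.nextIteration(parameters);
    }
    firm.print();  // [time, performance, costs, #substitutable, #fired]
    return 0;
}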

# make_figure.py
#
# written by: Niels Haan.

import sys
import pandas as pd
from matplotlib.figure import Figure
from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib import style

def plot_all(df, ax, column):
    row_length = int(len(df[column])/3)
    ax.plot(df.index[0:row_length], df[column][0:row_length],
            label='creative')
    ax.plot(df.index[0:row_length], df[column][row_length:2*row_length],
            label='normal')
    ax.plot(df.index[0:row_length], df[column][2*row_length:3*row_length],
            label='routinized')

def plot_ratio(df, ax):
    ax.plot(df['performance'], label='performance')
    ax.plot(df['costs'], label='costs')

def plot_jobs(df, ax):
    ax.plot(df['substitutable'], label='Substitutable')
    ax.plot(df['fired'], label='Fired')
