
Bachelor Thesis Economics

Title: The Computerization of Labor in Europe and the US: Fad or Fact?
Name: Riaan Zoetmulder
Date: 29-06-2015
Student number: 6072909
Supervisor: Ron van Maurik


Statement of Originality

This document is written by Student Riaan Zoetmulder who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for its contents.

Abstract

This research aims to find a relation between the probability of computerization and the level of employment for 25 European countries and the United States. Furthermore, it aims to explain the jobless recovery in the US by assessing the job growth for occupations with different probabilities of computerization. It builds on research conducted by Frey and Osborne (2013), who calculated the probabilities of computerization for 702 occupations in the United States. Data about inflation, economic growth, employment and internet usage were acquired from the Bureau of Labor Statistics, Eurostat and the World Bank. The probabilities were converted to an international standard using the weighted average of employees per occupation. A time fixed effects regression showed a significant effect of the probability of computerization on employment, and of internet usage on employment. However, the probabilities of computerization did not explain the jobless recovery in the United States. Furthermore, this paper discusses several strengths and limitations of the research. This research concludes by suggesting that policymakers should not underestimate the likelihood of large scale computerization of occupations and its effects on employment.


Index

Introduction
Literature review
    What has held back technological unemployment?
    What will cause technological unemployment?
The present research
Method
    Probabilities of Computerization
    Further Data
    Analysis
Results
    Results panel 2004-2014
    Results United States 2010-2014
Conclusion
Discussion
    Post-Recession Job Growth United States Unrelated to Probabilities
    Missing Data Army
    External Validity
    Dual Hypothesis Testing
    Reverse Causality
    Discriminant Validity


Introduction

The debate about whether technology can cause widespread unemployment in economies has been reinvigorated since Frey and Osborne published a paper on the subject in 2013. Historically, the debate about "technological unemployment", unemployment caused by technological progress, has been a recurring one. An example of widespread worry about technological unemployment was the Luddite protest in England (Mortensen & Pissarides, 1997, p.735). During this protest, workers in the textile industry protested against the mechanization of various aspects of their job. Despite the Luddites' protests, mechanization of their jobs did occur. However, because of mechanization, managerial positions became more widespread (Katz & Margo, 2014, p.5) and the loss of occupations was compensated by the creation of new occupations. Another example is the invention of a stocking frame knitting machine in 16th-century England. The queen forbade the implementation of the machine in industry because it would deprive her subjects of their source of income (Acemoglu and Robinson, 2012, as cited by Frey and Osborne, 2013, p.6). These two historical examples illustrate the contrasting impacts that attempts to improve or supplant labor with technology may have.

Furthermore, predictions about the impact of technology on the demand for labor are also common. An example is the prediction made by John Maynard Keynes of unemployment "due to our discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor" (Keynes, 1933, p.3, as cited by Frey and Osborne, 2013, p.2). Besides predictions and historical protests, there are also examples of how technology has replaced workers more recently. Examples include phone operators and data entry clerks (Bresnahan, 1999, pp.403-405). Because of such examples of technology destroying specific occupations, it is important to know whether more recent technological advances are capable of destroying more jobs than they create. If this is the case, policy can be implemented to remedy the adverse effects of high unemployment.


The current paper aims to research whether technological progress will cause technological unemployment in the near future. It does so by, firstly, reviewing the literature on the effects of technological progress on unemployment and arguing why technology will destroy more jobs than it creates. Secondly, it reviews other variables that may impact employment growth. Thirdly, the methodology used to answer the research question is discussed. Fourthly, the results of the research are presented. Fifthly, the findings of the present research are discussed and linked to the literature review.

Literature Review

Substantial literature has been devoted to researching technological unemployment. In this section, this literature is discussed. Firstly, a discussion will be presented of why previous technological breakthroughs have not already destroyed more jobs than they created. Secondly, theory will be discussed that argues that technology has progressed far enough to allow for the computerization of jobs that were previously not possible to computerize.

What has Held Back Technological Unemployment?

Previous technological breakthroughs have not destroyed more jobs than they created because the engineering problems involved could not be specified in sufficient detail to be computerized (Autor et al., 2003, p.4). Furthermore, researchers believed that there were two competing effects in the economy. The first effect was the "creative destruction" effect. Creative destruction was caused by economic growth, which spurred innovation, which in turn caused job loss due to automation and skill obsolescence (Aghion & Howitt, 1992, p.477). The second effect was the "capitalization effect", which was also caused by growth. Growth in turn led to more firms entering the market, because of increased returns, and to increased job openings (Aghion & Howitt, 1992, p.478). Whether unemployment rises or falls depends on the strength of these effects.


Mortensen and Pissarides (1997, p.734) build on this model by allowing firms three choices: continue producing with the technology present at the job, pay a fixed renovation cost to update the technology, or destroy the job and stop producing. They found that a higher rate of technological progress would lead to a higher rate of job destruction; however, if the cost of renovation was sufficiently low, firms would renovate rather than destroy (Mortensen & Pissarides, 1997, p.752). On the job creation side, they found that higher technological progress combined with high implementation costs would make job creation drop. However, if renovation costs were sufficiently low, job creation would be stimulated.

What Will Cause Technological Unemployment?

The main reason why some researchers believe technology will now destroy more jobs than it creates is due to advances in "machine learning" and "mobile robotics", both subfields of artificial intelligence (Frey & Osborne, 2013, p.14). These recent advances will allow computerization, the automation of jobs and tasks by use of computers, of workplace tasks which were previously impossible to computerize, and as such would increase the amount of "creative destruction". To properly explain why these tasks can be computerized nowadays, a brief summary of Autor et al.'s (2003) task categorization model is necessary. According to this model, workplace tasks may be divided along two dimensions: routine versus non-routine and cognitive versus manual. Routine tasks follow strict rules and can thus be easily accomplished by an algorithm. Non-routine tasks are more ambiguous and cannot be translated to code as easily. Manual and cognitive work refer respectively to the amount of physical and cognitive labor required for the completion of the job. Whereas previous technological breakthroughs merely infringed upon the domain of workers performing routine tasks (Autor & Dorn, 2013, as cited by Frey and Osborne, 2013, p.15), the recent advances in machine learning and mobile robotics allow computers to do non-routine labor too.


Because of these advances, Autor et al.'s (2003, as cited by Frey & Osborne, 2013, p.14) task categorization model will no longer hold when predicting the effect of computerization on the task content of employment (Frey & Osborne, 2013, p.23). They suggest the following production function as a substitute:

Q = (L_S + C)^{1-\beta} \cdot L_{NS}^{\beta}    (1)

where \beta \in (0,1), L_S stands for the amount of labor susceptible to computerization, C stands for computer capital, L_{NS} stands for non-susceptible labor and Q stands for the quantity of goods produced. Three engineering bottlenecks were identified. All of these bottlenecks corresponded to a certain type of labor input. These three labor inputs were labor requiring perception and manipulation, creative intelligence and social intelligence, denoted respectively by L_{PM}, L_{C} and L_{SI}. As such, non-susceptible labor can be defined as:

L_{NS} = \sum_{i=1}^{n} (L_{PM,i} + L_{C,i} + L_{SI,i})    (2)

According to Frey and Osborne (2013, p.27), the probability of a task being automated can be described as a function of the task characteristics in formula two. Many non-routine tasks can be computerized as well, now or in the near future; however, tasks that require a higher degree of perception and manipulation, creative intelligence and social intelligence will be computerized later.

Frey and Osborne (2013) calculated the probability that each job could be computerized. They did this using data from O*NET, an online tool developed for the Department of Labor in the United States. O*NET provides both qualitative and quantitative information on the characteristics of occupations as they develop. The probabilities were categorized into three categories: p < 0.3, 0.3 < p < 0.7 and p > 0.7. This yielded the distribution presented in figure 1.

Figure 1

Figure Displaying the Probability of Computerization per Occupation in the United States.

Note. Adapted from "The Future of Employment: How Susceptible are Jobs to Computerisation?" by C.B. Frey and M.A. Osborne, 2013, p.37. Copyright 2013 by C.B. Frey and M.A. Osborne. Adapted with permission.

With regard to the interpretation of the model, Frey and Osborne (2013) stated the following:

It shall be noted that the probability axis can be seen as a rough timeline, where high probability occupations are likely to be substituted by computer capital relatively soon. Over the next decades, the extent of computerization will be determined by the pace at which the above described engineering bottlenecks to computerization can be overcome. (p.38)

Furthermore, they explain that computerization will occur in two waves, separated by a "technological plateau". As can be seen from figure 1, the high risk category will be computerized first, after which computerization will continue at a slow pace through the medium risk occupations. Lastly, the low risk occupations will be computerized (Frey & Osborne, 2013, pp.38-40).

The Present Research

The present research attempts to explore the relationship between the probability of computerization and employment growth. However, because the model proposed by Frey and Osborne (2013) is forward looking and the technologies that will computerize the occupations have not yet been implemented, one additional assumption is needed to conduct this research. This is the assumption of forward looking economic agents. We can hypothesize that if agents are forward looking, and understand that many jobs will be replaced by computers in the near future, they will be less inclined to hire more costly workers. This is because investing in workers would have a higher opportunity cost than saving and investing in cheaper machinery. As such, those jobs that have a high probability of being computerized would already have depressed employment growth rates, whereas jobs that have a low or medium probability of being computerized would not. Furthermore, if such an effect is found, it would be a new form of forward looking behavior, in the sense that previously forward looking behavior required a formal and credible policy announcement by an institution, whereas this forward looking behavior would not.

Such forward looking behavior would require a great deal of information from news sources. An economic agent would need to remain up to date on issues regarding technological development. A source of such information is the internet; therefore, in our analysis we control for internet users per 100 inhabitants per country. We expect the number of internet users per 100 inhabitants to be positively related to employment growth, since the internet also provides opportunities for employment. Another reason for including internet users per 100 inhabitants is that internet penetration is related to economic growth via international trade (Meijers, 2014, p.27). Economic growth is, in turn, related to the growth rate of employment. This is part of the statistical relationship known as Okun's Law (Chamberlin, 2011, p.104), which was observed in 1962 by Arthur Okun. The rule stated that for every two point drop in economic growth, unemployment grew by one percent (Chamberlin, 2011, p.104). Okun's law is not a structural relationship but a statistical one; as such, it can be subject to structural breaks (Chamberlin, 2011, p.104). Although Chamberlin suggests in his paper that Okun's law has lost some of its usefulness as a tool, it still remains interesting in the short term (Chamberlin, 2011, p.131). Therefore, economic growth is included in the model and is expected to have a positive influence on employment growth.
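As a stylized illustration of the two-to-one rule of thumb described above (a rough sketch only, not a coefficient estimated in this research), Okun's law can be written as:

\Delta u_t \approx -\tfrac{1}{2} \, (g_t - \bar{g})

where \Delta u_t is the change in the unemployment rate, g_t is the growth rate of real GDP and \bar{g} is the growth rate at which unemployment remains constant.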

Furthermore, the statistical relationship between inflation and unemployment, known as the Phillips curve, must be accounted for in the present research. Although it has received a lot of critique, according to Fuhrer (1995, p.55) no evidence could be found that the model is not structurally sound. The hypothesized relationship is that higher inflation reduces unemployment and vice versa. It is therefore expected that inflation has a positive relationship with employment.

The periods after the past three recessions in the United States have been characterized by "jobless recovery". Jobless recovery is defined as a recovery in aggregate GDP combined with a lower rate of growth in employment (Jaimovich & Siu, 2014, p.2). Another phenomenon that has been occurring in the United States labor market is "job polarization". This phenomenon refers to the trend of a growing number of jobs requiring either high or low skill, while middle-skilled jobs disappear. According to Autor et al. (2003, p.4), computerization acts as a substitute for routine occupations. This is an explanation for why job polarization occurs. Furthermore, Autor et al. (2003, p.42) show that job polarization is related to jobless recovery. Frey and Osborne, however, predict that the low skill jobs will be most vulnerable to computerization. To test this, a regression is performed on the post-2010 United States data. It is expected that the occupations with the highest probability of computerization will have less job growth than the occupations with a low or medium probability of computerization.

This paper will firstly explore the relationship between these variables using the 2004-2014 data. Secondly, it will examine the relationship between these variables post 2009, to see whether technology can account for the jobless recovery that is occurring in the United States.


Method

In this section the methodology of this research will be discussed. Firstly, an explanation will be given of how the probabilities of computerization were calculated. Secondly, the data about economic growth, employment growth, inflation and internet users per 100 inhabitants will briefly be discussed. Thirdly, an overview of the analysis will be presented.

Probabilities of Computerization

Before discussing how the probabilities were calculated per sector, it is necessary to give a brief description of how the probabilities were calculated by Frey and Osborne (2013). As was noted earlier, they ranked the occupations first. The first step in their ranking process was to label, together with various experts in the field of machine learning, an occupation as fully computerizable or not computerizable, assigning a 1 or a 0 respectively. They did this for 70 occupations. The second step consisted of ranking the jobs according to O*NET data. Nine different characteristics of each occupation were classified as contributing to either the degree of perception and manipulation, creative intelligence or social intelligence required for the occupation. As their third step they used probabilistic classification. Probabilistic classification is a technique whereby an algorithm is implemented that can predict the probability of a variety of classes given a sample input. For the 70 hand-labeled occupations, the nine O*NET variables were put into a vector, and these 70 vectors were used as training data (Frey & Osborne, 2013, p.32). The 70 labels that were assigned by hand were given alongside the corresponding vectors. From this the probabilistic classification algorithm could learn to see patterns in the data. They only "trained" the algorithm on half the training data so the other half could be used as a test. They found that the labels assigned by the algorithm closely matched those that were given by hand. After having validated their approach, they continued to the next step, which was finding the probabilities of computerization. They used a logistic regression to find this probability for each of the 702 occupations. Lastly, they categorized the occupations according to their probabilities in a high, medium or low probability of computerization category. All occupations with P > 0.7 were categorized in the high category, those with 0.3 < P < 0.7 in the medium category and those with P < 0.3 in the low category.

For the current research, however, the thresholds for low, medium and high probability are, respectively, P < 0.33, 0.33 < P < 0.66 and P > 0.66. This is done to ensure that all groups cover an equal range of probabilities.
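To make the classification and categorization steps described above concrete, the following is a minimal sketch in Python. It is illustrative only: the synthetic feature matrix, labels and helper function are hypothetical stand-ins, and the actual estimation by Frey and Osborne (2013) was performed on the nine O*NET variables for the hand-labeled occupations.

# Illustrative sketch of probabilistic classification of occupations and of the
# low/medium/high categorization used in this research (cutoffs 0.33 and 0.66).
# The data below are synthetic placeholders, not the O*NET variables themselves.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(70, 9))                              # nine characteristics for 70 hand-labeled occupations
y_labeled = (X_labeled[:, 0] + X_labeled[:, 1] > 0).astype(int)   # 1 = computerizable, 0 = not (synthetic rule)

# Train on half of the labeled data and validate on the other half.
X_train, X_test, y_train, y_test = train_test_split(
    X_labeled, y_labeled, test_size=0.5, random_state=0)
classifier = LogisticRegression().fit(X_train, y_train)
print("validation accuracy:", classifier.score(X_test, y_test))

# Predict a probability of computerization for every occupation (here: 702 synthetic rows).
X_all = rng.normal(size=(702, 9))
probabilities = classifier.predict_proba(X_all)[:, 1]

def categorize(p, low=0.33, high=0.66):
    # Thresholds used in the current research (equal-width probability bands).
    return "low" if p < low else ("medium" if p < high else "high")

categories = [categorize(p) for p in probabilities]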

Further Data

The United States data on employment per SOC code, the code by which occupations are classified in the United States, was acquired from the Bureau of Labor Statistics (BLS). This was done for the years 2004-2014. The data for all of the occupations were matched with the correct probabilities from the paper of Frey and Osborne (2013). Frey and Osborne used the 2010 version of the SOC; for data about years prior to 2010, some of the codes were changed to match the 2010 SOC system using a conversion table, so that these data could be included. Some occupations were split into different occupations; these were removed from the dataset, since it was impossible to ascertain the exact number of people in those occupations. In total this yielded 680 analyzable occupations. The data were then converted to ISCO standards, the international standard for classifying occupations, using a crosswalk that was also acquired from the BLS. Some of the occupations in the SOC system could not be translated using the crosswalk; three occupations were omitted as a result. Using the crosswalk, the probabilities of computerization per ISCO sector were calculated by taking the weighted average of the probabilities per occupation. This yielded nine probabilities; since insufficient data were available about the armed forces (sector 0), due to it being classified information, this sector was omitted from the analysis. The calculated probabilities are displayed in table 1. Furthermore, the number of people employed per sector was calculated using the crosswalk. The matching of the SOC codes and the calculation of the probabilities and the number of people employed per sector were performed by a custom program written for this research; a simplified sketch of this conversion step is shown after table 1.

Table 1.

Probabilities of Computerization per Sector and Respective Categories (Low, Medium, High).

ISCO code Name of sector Probability

1 Managers 0.129*

2 Professionals 0.173*

3 Technicians and associate professionals 0.538**

4 Clerical support workers 0.847***

5 Service and sales workers 0.775***

6 (a) Skilled agricultural, forestry and fishery workers 0.661**

7 (a) Craft and related trades workers 0.666***

8 Plant and machine operators and assemblers 0.85**

9 Elementary occupations 0.758***

Note. (a) These two sectors could have been categorized in either the high or medium category. However, if rounded to two decimals, sector 6 would be categorized in the medium category and sector 7 would be categorized as high.

*Low Probability, **Medium Probability, *** High Probability.
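The conversion from SOC occupations to ISCO sectors described before table 1 can be sketched as follows. This is a minimal illustration, not the custom program written for this research; the file names and column names (soc_code, isco_sector, employment, probability) are hypothetical placeholders.

# Illustrative sketch: employment-weighted average probability of computerization
# per ISCO sector. File and column names are hypothetical placeholders.
import pandas as pd

soc = pd.read_csv("soc_employment_probabilities.csv")    # soc_code, employment, probability
crosswalk = pd.read_csv("soc_to_isco_crosswalk.csv")     # soc_code, isco_sector

# Occupations that cannot be translated by the crosswalk drop out of the merge.
merged = soc.merge(crosswalk, on="soc_code", how="inner")

def weighted_probability(group):
    # Weight each occupation's probability by its share of sector employment.
    return (group["probability"] * group["employment"]).sum() / group["employment"].sum()

sector_probability = merged.groupby("isco_sector").apply(weighted_probability)
sector_employment = merged.groupby("isco_sector")["employment"].sum()
print(pd.DataFrame({"probability": sector_probability, "employment": sector_employment}))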

The European data was acquired from Eurostat. Because the American data was acquired in May, the values for employment per sector were taken from quarter two, to make sure the data was acquired at approximately the same date. Furthermore, the data about the inflation rates was also acquired from Eurostat; the measure of inflation that was chosen was the Harmonized Index of Consumer Prices (HICP). This measure was chosen because it attempts to harmonize all the inflation rates in the EU and the BLS also calculates it for the US. The data about GDP growth was acquired from Eurostat. The data about internet users per 100 inhabitants was downloaded from the World Bank. Values for the year 2014 for internet users per 100 inhabitants were missing.

To sum up, data was acquired for Belgium, the Czech Republic, Denmark, Germany, Ireland, Greece, Spain, France, Croatia, Italy, Cyprus, Latvia, Luxembourg, Hungary, Malta, the Netherlands, Austria, Poland, Portugal, Finland, Sweden, the UK, Iceland, Norway, Switzerland and the United States.

Analysis

Using Stata, the format of the data was changed. It was reformatted to long format, to make it suitable for panel data analysis. Dummy variables were created to indicate low, medium and high probabilities of computerization. Because of the possibility of omitted variables that change over time, such as recessions, the method used to analyze the data was a time fixed effects regression. Prior to the regression, the regression assumptions, absence of collinearity and of heteroscedasticity, were tested. Next, descriptive statistics were acquired for the variables economic growth, employment, inflation and internet users per 100 inhabitants. Lastly, the time fixed effects regression was performed on the data.
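As an illustration of this estimation strategy, the sketch below shows an equivalent specification in Python; the analysis in this thesis was carried out in Stata, and the file and column names used here (employment, econ_growth, inflation, internet_per_100, medium_prob, high_prob, year) are hypothetical placeholders.

# Illustrative sketch of the estimation strategy described above, written in Python
# (the thesis itself used Stata). Variable and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor

# df: long-format panel, one row per country-sector-year.
df = pd.read_csv("panel_long.csv")

# OLS with year dummies corresponds to a time fixed effects specification.
formula = ("employment ~ econ_growth + inflation + internet_per_100 "
           "+ medium_prob + high_prob + C(year)")
ols = smf.ols(formula, data=df).fit()

# Collinearity check: VIF per regressor (tolerance = 1 / VIF).
exog = ols.model.exog
for i, name in enumerate(ols.model.exog_names):
    if name != "Intercept":
        print(name, variance_inflation_factor(exog, i))

# Breusch-Pagan test for heteroscedasticity.
lm_stat, lm_pvalue, _, _ = het_breuschpagan(ols.resid, exog)
print("Breusch-Pagan LM p-value:", lm_pvalue)

# If heteroscedasticity is present, re-estimate with robust standard errors.
robust = smf.ols(formula, data=df).fit(cov_type="HC1")
print(robust.summary())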

In order to analyze the post-2009 data for the United States, all countries except the United States were removed from the database. Furthermore, all data prior to 2010 were removed. Tests were performed for collinearity and heteroscedasticity. Descriptive statistics for employment were acquired; lastly, the time fixed effects regression was performed.


Results

Firstly, the results of the tests on the data from 2004-2014 for 26 countries are presented; secondly, the results of the tests on the data for the United States post 2009 are discussed.

Results panel 2004-2014

The first thing the data was tested for was collinearity. These tests indicated that collinearity was not strong enough to be of concern; both the VIF and tolerance values were well within the required cutoffs. See table 2 for the results.

Table 2.

Results for the Tests of Collinearity of the Data. Both VIF and Tolerance are Indicated and Within the Required Cutoffs.

Variable VIF Tolerance

Economic Growth 1.04 0.9651

Inflation 1.02 0.9805

Internet users per 100 1.03 0.9704

Medium probability 1.33 0.75

High probability 1.33 0.75

A Breusch-Pagan test was performed to test for heteroscedasticity. This test indicated that heteroscedasticity was a problem (χ²(5, N = 2,322) = 152.77, p < 0.001). Therefore, robust standard errors were used in the time fixed effects regression. Descriptive statistics were calculated and are indicated in table 3.

Table 3.

Descriptive Statistics for the Data.

Variable Mean Standard Deviation N
Economic Growth 1.455634 3.254021 2,556
Inflation 2.341549 2.066998 2,556
Internet users per 100 67.52422 18.31439 2,340
Employment 1,361,267 3,083,416 2,574

The time fixed effects regression was performed; the individual coefficients, their respective standard errors, t-values, p-values and number of observations (N) are reported in table 4, where the F-value can also be found.

Table 4.

Time Fixed Effects Regression Output Showing the Coefficients of Economic Growth, Inflation, Internet Users per 100 Inhabitants and the Medium and High Probability of Computerization Dummies.

Regressor Coefficient Standard Error t-value p-value N F-value
Economic Growth -17399.32 73296.2 -0.24 0.818 2,556 1,239.35***
Inflation -61478.29 52254.2 -1.18 0.27 2,556
Internet users per 100 8637.3 3787.061 2.28 0.049* 2,340
Medium probability 760466.8 27512.14 27.64 0.000***
High probability -99223.42 26084.31 -3.8 0.004**
Constant 945805.3 252609.2 3.74 0.005**

Note. *p < 0.05. **p < 0.01. ***p < 0.001.

As can be seen in table 4, the coefficients of internet users per 100 inhabitants, medium probability and high probability are significant. These three coefficients are in line with the expectations. Furthermore, economic growth and inflation are not significantly related to employment, which is not in line with expectations. Lastly, the F-value indicates that the model is significantly different from zero.

Results United States 2010-2014

The results for the data about the United States post 2009 will now be presented. Firstly, tests for collinearity indicated that collinearity was not a concern for the data (Medium Probability: Tolerance = 0.75, VIF = 1.33; High Probability: Tolerance = 0.75, VIF = 1.33). Furthermore, a Breusch-Pagan test for heteroscedasticity did not indicate there was a problem.

Descriptive statistics for employment were calculated (M = 13,000,000, SD = 8,053,171, N = 45). Lastly, a time fixed effects regression indicated that the model does not explain a significant amount of variance. None of the coefficients were significant, although the medium probability dummy was almost significant (t(45) = 1.94, p = 0.06) and the constant was significant (t(45) = 4.62, p = 0.00).

Conclusion

In this paper it has been shown that, from a theoretical point of view, computerization has an ambiguous effect on employment. It depends on the strength of the creative destruction effect, the capitalization effect and the cost of renovation. The model that was recently proposed by Frey and Osborne (2013), however, suggested that computerization has put 47 percent of the United States labor force at risk of losing their jobs within ten to twenty years (Frey & Osborne, 2013, p.38).

The empirical research conducted in this paper has found that internet usage growth is related to employment growth. Furthermore, over the panel data covering all the selected countries, the medium and low risk of computerization categories had a higher number of jobs created, whereas the high risk category had a lower number of jobs created. This was in line with the predictions made by Frey and Osborne (2013). Against expectations, economic growth and inflation did not have an impact on the number of jobs created. The analysis of the post-recession United States data indicated no difference in job growth between the categories. This could be due to the United States having more entrepreneurs compared to Europe (Reynolds & Curtin, 2008, p.158). Therefore, more businesses are created in the United States, which creates more jobs, which increases the strength of the capitalization effect. This could cause the effect of technological development on unemployment to be less pronounced when contrasted with Europe. Several points that warrant further discussion will be addressed now.


Discussion

Several points of discussion will be reflected upon in this section; furthermore, recommendations for future research will be made. Firstly, a paragraph is devoted to discussing reasons why, in the post-recession data of the United States, the probability of computerization is seemingly unrelated to employment growth. Secondly, the implications of the missing data about the army will be discussed. Thirdly, threats to the external validity of this research will be reviewed. Fourthly, dual hypothesis testing will be discussed. Fifthly, the possibility of reverse causality will be reflected upon. Sixthly, the discriminant validity of the construct "probability of computerization" is discussed.

Post-Recession Job Growth United States Unrelated to Probabilities

The finding that post-recession job growth in the United States was unrelated to the probability of computerization was curious. It was expected that the probability of computerization would explain the jobless recovery. It is, however, possible that the effect was obscured due to the deletion of some of the occupations from the analysis. It was indeed the case that the medium probability dummy was close to being deemed to have a positive effect on the creation of jobs. Therefore, future research should try to include all of the SOC codes rather than 680, so as to increase the representativeness of the data. Furthermore, the calculation of the probabilities also requires discussion. While calculating the probabilities, occupations were removed from the analysis because no categorization was available for them in the ISCO system. Therefore, some of the calculated probabilities may have been affected. This could have an impact on the results of this analysis, since two of the sectors were on the fringe of being categorized in a different risk category. Even when using the original cutoff scores for the categories proposed by Frey and Osborne (2013, p.38), several sectors could still be categorized differently if all SOC codes could be converted. Future research should therefore try to categorize the occupations that do not have a corresponding sector in the ISCO system.

Missing Data Army

Another shortcoming of the data is that data about the army is classified. As such, no conclusions can be drawn about this sector. It would be a very interesting sector to study, since computerization is also present in the military; the recent adoption of autonomous drones is an example. Potential information about computerization of this sector would also be important for policymakers. It would require discussion about the moral and ethical standpoints on autonomous drones: for example, should completely autonomous machines be allowed to make life and death decisions? Analysis of this sector is therefore not only important for labor economics, but also for the future direction our society will take. Therefore, future research should attempt to acquire this data.

External Validity

Another point of discussion is the external validity of the results. Firstly, it needs to be mentioned that several European countries did not have data for the correct periods and therefore could not be analyzed. This could have had an influence on the results. However, all major economies in Europe were analyzed, and it is unlikely that the results would be very different. Secondly, several other western economies were not included in the data: Australia, New Zealand and Japan. It is therefore unknown whether the results can be generalized to these economies. Future research should therefore also acquire the relevant data from these countries. Lastly, countries in the Middle East, South America and Africa were not included in this analysis. It is therefore unknown whether the results, both theoretical and empirical, generalize to countries in these regions. It is very likely that, due to the structural differences between developed and developing economies, the models and empirical findings for the latter type of economies differ from those for the former. Therefore, an interesting research question that researchers should address next is what the impact of computerization on employment in developing economies would be.

Dual Hypothesis Testing

Another issue with the current research is that two hypotheses are tested at once. This is a shortcoming that has been pointed out by Koopmans during the "measurement without theory" debate. One of Koopmans' arguments in that debate was that "statistical analysis of the data requires additional assumptions about their probabilistic characteristics that cannot be subject to statistical testing from the same data. These assumptions need to be provided by economic theory and should be tested independently." (Koopmans, 1947, as cited by Boumans & Davis, 2010, p.38). In this research, two hypotheses are also simultaneously being tested, namely whether economic agents are forward looking and whether the probability of computerization has an impact on employment. Because multiple hypotheses are being tested at once, it is possible that only one of the hypotheses is true, but the test nonetheless indicates that both are true. Relating this problem to the current research, an effect has been found for the probabilities of computerization. This could mean one of three things: firstly, economic agents are forward looking and there is an effect of computerization on the natural rate of unemployment; secondly, there is already an effect of computerization on the natural rate of unemployment, but economic agents are not forward looking; lastly, economic agents are forward looking, but incorrectly assume that computerization will destroy occupations. Because it has not explicitly been tested whether economic agents are forward looking, it cannot be ascertained which of these three hypotheses is true. Therefore, future research should try to measure the beliefs of economic agents in the labor market, through use of techniques from behavioral economics or psychology.

Reverse Causality

Another methodological point of discussion is reverse causality. There is no way of establishing a causal relationship between the engineering bottlenecks, probabilities of computerization and employment. Other potential relationships are possible; for example, unemployment could create a large incentive to computerize certain occupations. Although much was controlled for in this research, it is impossible to rule out reverse causality.

Discriminant Validity

The last point of discussion regards the correlation between education, wage and the probability of computerization. Frey and Osborne have indicated that the probability of computerization is strongly and negatively correlated with education and wage level: the higher the educational attainment and wage, the lower the probability of computerization, and vice versa. This has implications for the interpretation of this research. Could it be that the results that have been found up until now are caused by wage level and educational attainment? The possibility surely exists. Future research should therefore assess whether the construct "probability of computerization" sufficiently discriminates from wage level and educational attainment; future research should also assess whether the probability of computerization explains additional variance when compared to wage level and educational attainment.

This research has given insight into the future of employment and has opened up future research directions. Furthermore, the results of this research provide sufficient evidence for policymakers to seriously consider the influence of computerization on employment. Whether or not policymakers and researchers decide to use this information, it does add to the public debate about whether computerization can influence the natural rate of unemployment in developed economies.


References

Aghion, P., & Howitt, P. (1994). Growth and unemployment. The Review of Economic Studies, 61(3), 477-494.

Autor, D. H., Levy, F., & Murnane, R. J. (2003). The skill content of recent technological change: An empirical exploration. Quarterly Journal of Economics, 118(4), 1279-1333.

Boumans, M., & Davis, J. (2010). Economic methodology: Understanding economics as a science. Palgrave Macmillan.

Bresnahan, T. F., Brynjolfsson, E., & Hitt, L. M. (1999). Information technology, workplace organization and the demand for skilled labor: Firm-level evidence (No. w7136). National Bureau of Economic Research.

Chamberlin, G. (2011). Okun's Law revisited. Economic and Labour Market Review, 5(2), 104-132.

David, H., Levy, F., & Murnane, R. J. (2001). The skill content of recent technological change: An empirical exploration (No. w8337). National Bureau of Economic Research.

Frey, C. B., & Osborne, M. A. (2013). The future of employment: How susceptible are jobs to computerisation? Retrieved September 7, 2013.

Fuhrer, J. C. (1995). The Phillips curve is alive and well. New England Economic Review, (Mar), 41-56.

Jaimovich, N., & Siu, H. E. (2012). The trend is the cycle: Job polarization and jobless recoveries (No. w18334). National Bureau of Economic Research.

Katz, L. F., & Margo, R. A. (2014). Technical change and the relative demand for skilled labor. Human Capital in History: The American Record, 15.

Meijers, H. (2014). Does the internet generate economic growth, international trade, or both? International Economics and Economic Policy, 11(1-2), 137-163.

Mortensen, D. T., & Pissarides, C. A. (1998). Technological progress, job creation, and job destruction. Review of Economic Dynamics, 1(4), 733-753.

Reynolds, P. D., & Curtin, R. (2008). Business creation in the United States: Panel study of entrepreneurial dynamics II initial assessment. Foundations and Trends in Entrepreneurship, 4(3), 155-307.
