
HOW TO PREDICT THE EFFECTS OF AUTOMATION ON JOBS

UTILISING DATA TRIANGULATION TO DEVELOP A FRAMEWORK OF TASK CATEGORIES

JELMER JAN KOORN1

Student number UvA: 10444955
Student number VU: 2528577
University of Amsterdam, Faculty of Science

Thesis Master Information Studies: Business Information Systems
Final version: 13-07-2017

Supervisor: prof. dr. ir. Hajo Reijers
Examiner: dr. Vanessa Dirksen

Abstract. The effects of automation on jobs, particularly job losses, are both relevant and heavily debated. At the core of the debate is the question of which tasks can and cannot be automated. This research focusses on making a more realistic prediction of the effects of automation on jobs. To make such predictions, various data sources and perspectives are adopted, each with its own strengths and weaknesses. Using accountants and auditors as exemplary cases, an in-depth study is conducted using data triangulation of historical data, literature, and interviews. To compare the results from these various data sources, a new framework of task categories is defined. The new framework consists of both new task categories and redefined traditional task categories. Applying the new framework to other jobs seems to generate a more nuanced prediction of the effects of automation on jobs. Future research should attempt to further develop and evaluate the newly proposed framework.

Keywords. Technological Change, Automation, Employment, Job Tasks, Human-Machine Interaction.

1 Jelmer Koorn is a master's student in Information Studies: Business Information Systems at the University of Amsterdam (UvA). He would like to


Table of Contents

Introduction
1. Literature Review
1.1 The Effects of Automation on Society
1.2 Human-machine interaction
1.3 Data collection and task categories
2. Methodology
2.1 Case studies
2.2 Data sources
2.3 Research design
2.4 Task categories
2.5 Historical data
2.6 Literature
2.7 Interviews
2.8 Evaluation
3. Results
3.1 Historical data analysis
3.2 Literature analysis
3.2.1 Task categories
3.2.2 Scenario A or NA?
3.3 Interviews
3.3.1 Task categories
3.3.2 Scenario A or NA?
4. Framework design
4.1 New framework
4.2 Redefining SO task categories
4.3 Additional task categories
4.4 Remaining task categories
5. Framework evaluation
5.1 Difference in prediction
5.2 Generalisability
5.3 Reflection on critique
6. Discussion
7. Conclusion
8. Appendix – Abbreviation table
9. Appendix – Literature review: Methodology
9.1 Methodology literature long list
9.2 Short list
10. Appendix – Literature review: DOA model
11. Appendix – Verb list
12. Appendix – Methodology: Historical data
12.1 General information
12.2 Tasks
12.2.1 Accountant
12.2.2 Auditor
12.2.3 Teachers and salespersons
12.3 Work activities
12.3.1 Accountant
12.3.2 Auditor
12.3.3 Teachers and salespersons
13. Appendix – Historical data: Importance vs level scores
14. Appendix – Methodology: Interviews
15. Appendix – Results: Historical data analysis
16. Appendix – Results: Literature analysis
17. Appendix – Results: Interview analysis
18. Appendix – Results: New framework
19. Appendix – Results: Evaluation
19.1 Short summary comparison SO on categorising tasks


Introduction

“According to our estimates around 47 percent of total US employment is in the high risk category. [These] jobs, we expect, could be automated relatively soon. Perhaps over the next decade or two.” (Frey & Osborne, 2013, p. 48). Statements like this drew a lot of attention to the work of Frey and Osborne. Before delving deeper into this discussion it is important to clarify a definition. What Frey and Osborne refer to as computerisation is also referred to by other authors as automation (e.g. Parasuraman et al., 2000). In this research the term automation will be used; it is defined as: “the full or partial replacement of a function previously carried out by the human operator.” (Parasuraman et al., 2000, p. 287).

Frey and Osborne's findings have been heavily criticised from various fields of study. Most authors argue that the 47 percent estimate heavily overstates what will happen in reality. Most of the debate revolves around the methodological decisions that laid the groundwork for their ‘misjudgement’.

Arntz et al. (2016) show that part of the problem with the methodology of Frey and Osborne is that they aggregate to occupation level, where they should be analysing on task level. According to Arntz et al. a job is nothing more than a bundle of tasks. Working under this assumption, they replicate the analysis of Frey and Osborne and find that only nine percent of all jobs are at risk of being automated. This illustrates the wide variation in predictions. Predictions seem to be either over- or underestimating the effects of automation on jobs.

The field of ergonomics reveals another important aspect: Frey and Osborne do not take into account that with an increase in automation, an increase in human-machine interaction tasks is inevitable (Parasuraman et al., 2000). The DOA model by Parasuraman et al. (2000) addresses human-machine interactions and the delicate balance between automation and productivity. From the DOA model it becomes apparent that the automation of some types of tasks does not lead to an increase in productivity. This shows how a task can technically be automated, but will not be automated in practice because it is not profitable. This raises the question: ‘What types of tasks will be automated?’.

In the literature there are various ways to categorise tasks in order to determine which tasks are susceptible to automation. Each task categorisation has been defined using different data sources. The four main data sources in the literature are: (1) historical data, (2) literature, (3) field-work, and (4) panel of experts. Correspondingly, there are task categorisations with various scopes and levels of detail. Autor et al. (2003) laid the groundwork for the research of, amongst others, Frey and Osborne. Autor et al. adopted a broad scope when predicting the effects of automation on skills. Their framework consists of four categories: routine manual, non-routine manual, routine analytical and interactive, and non-routine analytical and interactive.

In order to come to a more realistic prediction and a new task categorisation, this research takes a number of steps. First, the issue of the method of data collection is addressed. The starting point of this research is that there is no superior method of data collection; each method has its respective strengths and weaknesses. Therefore, data triangulation is applied to combine three methods of data collection. Three data sources based on the literature will be adopted: (1) historical data, (2) literature, and (3) interviews.

Having decided on the methods of data collection, the different task categorisations can be addressed. In order to compare the results of the different types of data sources, it is important that one task categorisation is chosen as a starting point. Spitz-Oener (2006) provides a useful task categorisation, as it is holistic and adds detail to the task categories to modernise the outdated categorisation of Autor et al. (2003).

To develop a new framework, the occupations of accountant and auditor have been chosen as exemplary cases. These occupations ensure that all data sources provide valuable insights. Both occupations have been rated as most vulnerable to the effects of automation by both Frey & Osborne (2013) and Arntz et al. (2016). In addition, these occupations have seen rapid change in the past, see for example: Wilson & Sangster (1992) or Moffitt et al. (2016). Thus, these occupations are very likely to provide sufficient and valuable data, both in retrospect (historical data) and for future predictions (literature and interview data).

The following research question is formulated:

How can types of tasks be categorised to realistically predict the effects of automation on jobs?

The following sub-questions are posed in order to guide the path taken to answer the main question:

1) Which data sources and task categorisations are useful to predict the effects of automation on jobs?
2) How can data triangulation be utilised to obtain a reliable prediction of the effects of automation?
3) How can an up-to-date framework of task categories be developed using the exemplary cases of accountants and auditors?


1. Literature Review

A structured literature review was conducted, starting with a long list based on search terms (see Appendix 9.1). The list was reduced to a short list based on explicit criteria (see Appendix 9.2). The literature review starts with a broad review of the effects of automation on society and then zooms in ever further. Next, a closer look is taken at a model describing human-technology interaction and the effects of an automated system on an employee. The smallest scope is adopted by looking in detail at which data sources and task categorisations are used to predict the effects of automation on jobs. In this research a number of abbreviations will be used; Table 5, Appendix 8, gives an overview of these abbreviations and their corresponding descriptions.

1.1 The Effects of Automation on Society

This is not the first time that fear has broken out over the effects of advances in technology. Autor (2015) describes how such fear has existed for a long time, for example when the first factory was built in the UK. Much of the most recent public debate about the effects of automation on society started when Frey & Osborne (2013), from here on referred to as FO, published their research stating that around half of the jobs existing today in the USA were likely to disappear in the near future (the coming decade or two). Their methodology was replicated and applied to other OECD countries.

Soon, strong critique was voiced on the methodology adopted by FO. Arntz et al. (2016), from here on referred to as AEA, combine a number of these critiques to rebut FO: FO consider automation of whole jobs rather than tasks; there is a (large) time lag between the invention and implementation of a technology; and even if a technology is ready, other obstacles, e.g. laws, ethics, and economic incentives, could prevent its implementation. The main critique of AEA is that tasks rather than jobs are automated. They empirically test how this different perspective impacts the findings of FO. They find that, on average, around nine percent, rather than 47 percent, of jobs in OECD countries are in danger of being automated.

The discussion on the effects of automation is partly a methodological one: should jobs be considered as a whole or as a bundle of tasks? Besides this methodological point, AEA also point out that there is a gap between what can and what will be automated. The next section will elaborate on how the field of ergonomics shows that this gap partly depends on the interaction between humans and machines.

1.2 Human-machine interaction

One of the earliest models to describe various levels of automation was a model by Sheridan & Verplank (1978; see also Sheridan, 1992). This model was expanded by Parasuraman et al. (2000), from here on referred to as PAR, to describe not only levels of automation but also classes of automation. As a result, the degree of automation (DOA) model was proposed with four classes and ten levels. At level zero a task is performed fully manually, at level ten it is performed fully automatically. The four classes of automation concern types of tasks: information acquisition, information analysis, action selection, and action implementation. The levels and classes of automation combined result in a degree of automation of a task. This model maps to what extent a system is automated. In Table 6, Appendix 10, a description of the classes of automation can be found.

Wickens et al. (2010) perform a meta-analysis on four indicators that capture the interaction between humans and machines: routine performance, failure performance, situation awareness, and workload. They found evidence that if the DOA goes up, routine performance goes up and workload goes down. This means that if both machine and human perform as expected, more work can be done in less time and the human operator has a lower workload. However, as more tasks are automated to a higher level, failure performance and situation awareness deteriorate. This is also referred to as the lumberjack effect: ‘the higher a tree, the harder it falls’. Meaning, if the system breaks down, less work is done compared to when there was no system, and the human operator loses more and more overview of what is happening and how the processes work.

Hancock et al. (2013) and Onnasch et al. (2014) try to uncover the reasoning behind these relations, most importantly to understand why failure performance and situation awareness deteriorate when the DOA increases. Hancock et al. (2013) find that reduced failure performance originates in two phenomena the human operator experiences: (1) the generation/out-of-the-loop effect, and (2) complacency. The former reflects the lack of involvement in the development of various (alternative) actions and decisions. The latter refers to the tendency to rely too heavily on a machine. Combined, these two phenomena cause the human operator to lose awareness of the situation. Onnasch et al. (2014) investigate this more thoroughly and find that there is a critical threshold after which the lumberjack effect is notably stronger. This threshold is crossed when tasks past class two of the DOA model are automated, i.e. when action selection and action implementation tasks are automated. When this threshold is crossed, the negative effects of automation severely reduce the productivity of a worker.

Onnasch et al. (2014) conclude that a medium degree of automation is ideal in terms of productivity as this keeps the human operator in the loop. A medium degree of automation means the first two classes of tasks are highly automated, but the third and fourth class have a low level of automation. Class three describes how decisions are selected (based on the analysis from class two) and class four describes how these decisions are turned into action implementation. The underlying theoretical boundary that is crossed here is the move from analysis to synthesis. During analysis one increasingly zooms in from a larger picture into detail. Synthesis describes how to move from detail to the larger picture. In other words, synthesis requires the person or machine to take into account a large number of factors from the environment on which a decision has an influence. In contrast, analysis reduces the number of factors to the smallest number possible to get a better understanding.
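To make the structure of the DOA model and the threshold discussed above concrete, the following minimal sketch (not part of the thesis) represents a task's automation profile over the four classes and checks whether automation extends past class two; the class names and the cut-off value are illustrative assumptions.

# A minimal sketch of the DOA model structure described above: four classes of
# automation, each automated to a level between 0 (fully manual) and 10 (fully
# automatic). The cut-off value 'low' is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class TaskAutomationProfile:
    information_acquisition: int   # class one
    information_analysis: int      # class two
    action_selection: int          # class three
    action_implementation: int     # class four

    def past_class_two(self, low: int = 3) -> bool:
        # Onnasch et al.'s threshold: automation beyond a low level in classes
        # three and four pushes the profile past the 'medium degree of automation'.
        return self.action_selection > low or self.action_implementation > low

# A profile resembling the medium degree of automation described above.
medium_doa = TaskAutomationProfile(information_acquisition=9, information_analysis=8,
                                   action_selection=2, action_implementation=1)
print(medium_doa.past_class_two())  # False: the human operator stays in the loop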

However, not all authors agree that this is the optimal degree of automation. Jipp & Ackerman (2016) argue that the role of the human operator changes from performer to supervisor. According to them, this would make a high degree of automation the optimal point. One main critique of this study is that the gains from a reduced physical workload are countered by the increase in mental workload.

The balance between man and machine is a delicate one and is shaped like an inverted U-curve, with the DOA on the horizontal axis and productivity on the vertical axis. Although the optimal point of automation in this spectrum is contested, it is evident that more automation does not necessarily mean higher productivity. Meaning, there are situations in which it is technically possible to automate a task, but the task will not be automated in practice as it does not increase productivity. This refers back to the gap presented in the previous section between technical possibilities and practical decisions with regard to technology. The field of ergonomics thus suggests which types of tasks will not lead to an increase in productivity from a human-machine interaction perspective.

1.3 Data collection and task categories

Having focussed on human-machine interactive tasks, this section will focus on the question: ‘What types of tasks will be automated?’. This is an inherently difficult question to answer as automation is ever progressing and changing. In the field of ergonomics it is argued that tasks can be divided into categories, referred to as classes. As described in the previous section, the DOA model of PAR consists of four classes of tasks. This classification is based on Broadbent (1958), who reviewed literature to build a model of selective attention. This categorisation is very strong for defining tasks describing the interaction between human and machine. However, this strength is at the same time a weakness, as the categories are strongly limited to capturing these types of tasks. It is hard to categorise tasks describing, for example, human-human interaction. Besides the DOA model, other methods to categorise tasks have been proposed in the literature. Below, four other methods and the critiques of these methods will be discussed.

The model of Müller et al. (2016) is based on fieldwork: observing companies with an assembly plant and studying the pilot studies conducted in these companies. In other words, they studied the advancements of technology on-site and in pilot studies to determine to what degree tasks could be automated. The strength of this model is that it shows what is and will be automated in practice in the near future. In terms of generalisation, their approach is more limited compared to the DOA model categories. Müller et al. define basic tasks and their corresponding specific tasks, which are almost unique for each job. For example, the set of basic tasks for an assembly line worker is very different from that of a software engineer.

Autor et al. (2003) aimed to gain insights into changes in job skill demands. As a consequence, their categories have a high degree of generalisability. For their research, four task categories were formulated: routine manual, non-routine manual, routine analytical and interactive, and non-routine analytical and interactive. The categories are formulated under the assumption that routine tasks can be automated and non-routine tasks cannot be automated. Autor et al. are not explicit in terms of what data sources were used to define these categories. The main strength of this task categorisation is its generalisability. The categorisation of Autor et al. is utilised by a number of other studies (e.g. FO and AEA). However, the main weakness of the task categorisation of Autor et al. is that it is outdated.

AEA replicated part of the method of FO, in which a panel of technology experts is asked to indicate, in a workshop setting, what can and cannot be automated. Note that their goal is to find what technically can and cannot be automated. Their fundamental assumption is that everything can be automated, but not all at the same pace. ‘Engineering bottlenecks’ indicate the types of tasks that will be automated at a slower pace. They argue that the distinction between routine and non-routine made by Autor et al. is no longer relevant and that a new task categorisation should be made. To do this, the engineering bottlenecks are separated into three task categories: (1) perception and manipulation, (2) creative intelligence, and (3) social intelligence. All other tasks are placed in an implicit fourth task category: automated.

Spitz-Oener (2006), from here on referred to as SO, uses West German historical data to confirm the decreasing relevance of the distinction between routine and non-routine. She emphasises a more precise subdivision within the non-routine category by splitting it into three categories: non-routine analytical (also referred to as analytical), non-routine interactive (also referred to as interactive), and non-routine manual.


All studies discussed above use different data sources, resulting in different task categorisations. The differences in methodologies can be highlighted by answering two questions: ‘What data sources are used to develop a task categorisation?’ and ‘What are the definitions of the task categories?’.

2. Methodology

First, the choice of accountant and auditor as exemplary cases is explained. The second part elaborates on the strengths and weaknesses of the various data sources used in this research. Subsequently, the research design is presented in the third section. Then, an extensive discussion of the strengths and weaknesses of the various task categories is presented, at the end of which a decision is taken on the task categorisation used as a starting point for this research. The next three sections describe the methodology of the three data sources utilised in this study: historical data, literature, and interviews. Finally, the method used to validate the findings of the research is discussed.

2.1 Case studies

In order to gain fruitful insights into the research question, a job group is selected based on the worst-case scenario as depicted by AEA, i.e. the job group that is, according to their findings, most likely to be automated in the near future. The occupations selected in this study are accountant and auditor. These occupations are chosen as both FO and AEA predict that these jobs are influenced more by automation than other jobs. In addition, Wilson and Sangster (1992) show that accounting and auditing have a long history of being strongly impacted by automation. They describe how automation has been a general concern to the accountant/auditor throughout the years. This is reconfirmed by Moffitt et al. (2016), who conclude the same from a historical literature review of AIS (Accounting Information Systems) between 1986 and 2014. Therefore, accountancy and auditing can be considered exemplary cases in portraying the effects of automation.

2.2 Data sources

As has become evident from the literature, there are four main data sources commonly utilised in the various studies: (1) historical data, (2) literature, (3) field-work, and (4) panel of experts. This research will utilise historical data and literature, and will combine field-work and panel of experts into interviews, as will be explained below. In addition, each of these data sources and their respective strengths and weaknesses are discussed. An overview of the three data sources utilised in this research and their respective strengths and weaknesses, expressed in four characteristics, is given in Table 1. The timeframe indicates whether the data is retrospective, prospective, or a combination of both. The perspective specifies whether the data source focusses on theoretical possibilities or practical implications of a new technology. New categories indicates whether the data source is capable of providing insights into possible new task categories besides the ones provided in the literature. ‘Hard’ evidence indicates whether a data source provides results based on evidence or on opinions.

Table 1. Strengths and weaknesses of each data source utilised in this research.

                  Timeframe       Perspective    New categories   ‘Hard’ evidence
Historical data   Retrospective   Practical      No               Yes
Literature        Combination     Theoretical    Yes              Both
Interviews        Prospective     Both           Yes              No

Historical data is used to reveal changes in the various types of tasks over the last decades. This type of data consists of an annual recording of the tasks belonging to an occupation. The historical data is particularly strong in providing hard evidence to support claims of changes in types of tasks over the years. The most prominent weakness of historical data is that it cannot indicate how new task categories could be formed.

A literature study is one way of accumulating various predictions, resulting in a balanced view of the effects of automation. In this research, statements made by various authors as to what can/will and can/will not be automated are used as ‘raw data’ for the literature data source. The main strength of this data source is that it can indicate how new task categories could be formulated. The literature may or may not provide hard evidence, depending on the evidence used to make a prediction. Its main weakness is that it is biased towards providing a more theoretical perspective.

The third data source utilised in this research is interviews. Müller et al. (2016) consider only practical examples and pilot studies at large corporations, which represent the potential of technology in the very near future. This heavily understates the potential of technology, as it does not consider the theoretical possibilities. In contrast, a workshop with a panel of experts heavily overstates the potential of technology, as the experts are less inclined to consider the practical complications and thus predict what will happen in a much more distant future. In this research, interviews will be conducted to combine these practical and theoretical perspectives into one data source. To ensure both perspectives are represented, a diverse sample consisting of both professionals and academics is taken. The main strength of this data source is that it provides insights into new categories within a timeframe the researcher can determine. Its main weakness is that it cannot be assumed that the results are based on hard evidence.

2.3 Research design

In Figure 1 a visual representation of the design of this research can be found. In the first phase of the research, data is collected from the three data sources described above. The historical data provides hard evidence as to how types of tasks have changed thus far. However, it cannot make predictions for the future, especially in terms of defining new task categories. Therefore, literature data is added, from which new task categories can be derived even when they are not explicitly named in the literature. The literature is biased towards making predictions from a theoretical rather than a practical perspective. Lastly, interviews are added as they can combine both practical and theoretical possibilities in a prediction. However, their main weakness is that they do not provide hard evidence. This shortcoming is countered by the literature and historical data, which can provide hard evidence.

It is important to consider each data source as a separate building block for the new framework. Each building block provides its unique perspective. Thus, it is expected that the findings of each building block differ from the findings of the other building blocks. Only in the design phase of the research are the results of all building blocks combined. This results in a framework of task categories that is both holistic and fine-grained.

The last phase of the research focusses on the evaluation of this newly designed framework. The evaluation consists of three parts: (1) the differences in predictions between the new framework and existing literature, (2) the generalisability of the new framework, and (3) a reflection on how the methodology of this research compares to the critiques voiced in the literature.

Figure 1. Research design

2.4 Task categories

There is a notable distinction between narrow and broad task categorisations in the literature. The task categorisation of Müller et al. (2016) categorises tasks for a particular job and is not suited for generalisation. The task categorisation of PAR is narrow as it focusses on human-machine interaction tasks. Broader task categorisations are presented by Autor et al. (2003) and the studies that take their categories as a guideline. As this research adopts a broad scope, viable candidates to act as a starting point for the task categorisation are Autor et al. (2003) and its modified versions as presented by FO and SO.

Both FO and SO present different ideas on where a division should be made within the non-routine task categories. The approach taken by FO is to add detail to the non-routine cognitive types of tasks. At the same time, they presume that the remaining task types are automatable. Consequently, they lose all detail in those types of tasks. In contrast, SO makes an extra subdivision within the non-routine category while retaining the other task categories. Her task categories are: routine manual, non-routine manual, routine cognitive, (non-routine) analytical, and (non-routine) interactive. Thus, the task categorisation of SO provides broad, yet fairly detailed task categories that cover a wide range of types of tasks, and it is more modern than that of Autor et al. (2003). SO also published a list of task-specific verbs corresponding to the task categories. For example, the verb ‘to negotiate’ refers to the interactive task category. This list has been used as a guideline for the coding and can be found in Appendix 11. Note that the task categories are also used to categorise work activities in the historical data.
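As a minimal illustration of how such a verb list can guide coding, the sketch below maps verbs to SO task categories and suggests a category for a task description. Only ‘to negotiate’ → interactive is taken from the text above; the other verbs are illustrative placeholders rather than SO's actual list (that list is in Appendix 11).

# A minimal sketch of verb-based coding; placeholder verbs, not SO's actual list.
VERB_TO_CATEGORY = {
    "negotiate": "interactive",        # example given in the text
    "calculate": "routine cognitive",  # placeholder
    "analyse": "analytical",           # placeholder
    "operate": "routine manual",       # placeholder
}

def suggest_category(task_description: str) -> str:
    """Return the first matching SO category, or 'rest' if no listed verb matches."""
    words = task_description.lower().split()
    for verb, category in VERB_TO_CATEGORY.items():
        if any(word.startswith(verb) for word in words):
            return category
    return "rest"

print(suggest_category("Negotiate fees with clients"))     # interactive
print(suggest_category("Develop a new audit hypothesis"))  # rest: candidate for a new category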

Rohrbach-Schmidt and Tiemann (2013) reviewed three methods of defining task categories in detail: (1) statistical classification, (2) criterion validation, and (3) literature. They find that the factor analysis used for the statistical classification yields internal consistency scores below social science standards, meaning that not all items in their categories measure the same construct. Furthermore, using criterion validation, tasks were only assigned to the routine or non-routine manual task categories. As this research considers cognitive tasks as well as manual tasks, criterion validation is not a good fit. Therefore, the literature-based approach is most suitable to gain insights into the effects of automation on types of tasks across various jobs; for this research this means adopting the definitions and verbs as proposed by SO.

2.5 Historical data

The O*NET database (National Center for O*NET Development, 2017) is used to gather historical data. In this database, over 13,000 jobs are classified together with skills, tasks, educational level required, and much more. The database has been expanded and kept up to date over the years, and online databases are available from 1998 onwards. The data is collected from three different sources: incumbents, occupational experts, and occupational analysts. In order to determine what tasks and work activities exist in a selected occupation two random samples are chosen to fill in questionnaires: (1) one random sample of businesses expected to employ workers in the targeted occupation, and (2) one random sample of workers in those occupations within those businesses. This data is complemented by data from job incumbents who fill in standardised questionnaires (National Center for O*NET Development, 2017).

For this research, the variables tasks and work activities are of importance. The choice to include work activities as an additional variable was two-fold: it was a pragmatic choice as well as a theoretical one. On the pragmatic side, the data for 1998 proved to be significantly less detailed and inconsistent in wording and categorisation compared to the two later data points. On the theoretical side, work activities closely resemble tasks in terms of their definition in the O*NET database. The work activities variable describes the general types of behaviour associated with multiple jobs, e.g. ‘Documenting/recording information’. Tasks are defined as: an activity that occurs in order to produce a product or outcome required on the job (Peterson et al., 1999). For each variable, importance scores are measured to reflect the importance of every task/work activity to a job. Appendix 12 provides basic descriptive statistics of the variables and elaborates on the importance scores.
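As a minimal sketch (with hypothetical records rather than actual O*NET data), the snippet below shows how the two measures used later in the analysis can be derived once tasks have been assigned to SO categories: the size of a category (number of tasks or work activities) and its average importance score.

# A minimal sketch with hypothetical records showing how the size and average
# importance score of each task category can be computed.
from collections import defaultdict
from statistics import mean

# (task description, assigned SO category, importance score) - illustrative only
tasks_2017 = [
    ("Prepare annual financial statements", "routine cognitive", 4.2),
    ("Advise clients on tax strategy",      "interactive",       3.8),
    ("Analyse business operations",         "analytical",        4.5),
    ("Report discrepancies to management",  "interactive",       4.0),
]

scores_by_category = defaultdict(list)
for _description, category, importance in tasks_2017:
    scores_by_category[category].append(importance)

for category, scores in scores_by_category.items():
    print(f"{category}: size={len(scores)}, average importance={mean(scores):.1f}")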

To make the most of the retrospective character of this data source, the timeframe it covers starts with the most recent available data point and ends with the oldest digitally available data point. The most recent available point is 2017. As mentioned above, the oldest digitally available point is from the year 1998. Between these two data points, the year 2007 has been chosen as a controlling third data point. This brings the distance between each of the three data points to roughly a decade. Although a new database is released every year, not all occupations are updated each year. Therefore, it is possible that there are small deviations between the year of the database and the year the data was collected.

2.6 Literature

This study revisits the literature, as selected in the short list presented in Appendix 9. Quotes are used as raw data to find indications as to what types of tasks can/will and can/will not be automated. In order to keep the literature predictions as up to date as possible, only the works between 2000 and 2017 from the short list are included in the sample. One thing to note is that because the findings of FO regarding what can and cannot be automated are replicated in AEA, FO is not included in the literature sample, so as to prevent counting the same findings twice. In order to analyse the quotes, the following coding scheme was utilised:

Step 1: all relevant quotes were selected and extracted from the source texts with a line number and letter to allow tracing back to the exact origin of the quote. Then, all quotes were given a general code: automated, not automated, or too vague to analyse.

Step 2: each quote was summarised in key words.

Step 3: the key words were used to assign each quote to its appropriate task category as defined by SO. In case the quote did not fit into the SO task categories, the quote was included in a ‘rest’ category.

Step 4: all quotes that did not fit into the SO task categories were grouped together and in vivo coding was applied to generate new task categories.

Step 5: after the first cycle of coding, a second cycle of coding was conducted. The first part of the cycle consisted of revisiting all task categories within each general code (automated or not automated) and comparing them with each other.

Step 6: the second part of the cycle consisted of comparing all task categories across the general codes (automated or not automated). This ensured that similar categories were combined under one umbrella term or larger groups were split into more precise subgroups. This led to a finalised list of task categories.
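To make the bookkeeping behind steps 1-4 concrete, the sketch below shows one possible data structure for a coded quote. The quote, identifiers, keywords, and codes are illustrative assumptions; the actual coding in this research was done manually.

# A minimal sketch of one possible data structure behind coding steps 1-4.
from dataclasses import dataclass, field

SO_CATEGORIES = {"analytical", "interactive", "routine cognitive",
                 "routine manual", "non-routine manual"}

@dataclass
class CodedQuote:
    source: str                                      # article identifier (step 1)
    line: int                                        # line number for traceability (step 1)
    text: str
    general_code: str                                # 'A', 'NA', or 'vague' (step 1)
    keywords: list = field(default_factory=list)     # step 2
    categories: list = field(default_factory=list)   # step 3: SO categories or 'rest'

quote = CodedQuote(
    source="article-07", line=42,                    # hypothetical identifiers
    text="Flexibility, creativity, negotiation and communication skills",
    general_code="NA",
    keywords=["flexibility", "creativity", "negotiation"],
)
# Step 3: keywords covered by SO map to her categories; the rest feed the in vivo
# coding of steps 4-6, from which new categories such as 'adaptive' emerge.
quote.categories = ["rest", "rest", "interactive"]
print(quote.categories)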

In total, twelve journal articles provided useful quotes. In these articles a total of 116 quotes were found, of which 101 were included in the final sample. In total, 39 of these quotes fell into the rest category. Adopting the inductive coding exercise described above, six new task categories were formed and one of the task categories from SO was expanded. Some quotes were placed in multiple categories as they mention multiple tasks. For example: ‘Flexibility, creativity, negotiation and communication skills’ is placed in the categories: adaptive, creative, and interactive tasks.

During the coding, fifteen quotes were excluded for several reasons. One group of quotes was excluded because they were too vaguely formulated to categorise. For example, ‘routine tasks’ could be included in routine cognitive and routine manual, or could even be interpreted as an interactive task. Therefore, the quote either has to be excluded or included in all possible categories. Exclusion from the data set is the better option, as the alternative would result in unnecessary noise in the data. On the basis of this argument nine quotes were excluded. Three quotes were removed as they referred to skills, for example: ‘Skilled white-collar work’. Two quotes were removed as it was unclear whether they referred to routine or non-routine situations. An additional single quote, ‘Alert nurse to abnormal patient symptoms’, was removed as it does not clarify whether it concerns an analytical task (e.g. light a bulb or make a sound) or an interactive task (e.g. communicate with the nurse).

2.7 Interviews

Interviews provide the flexibility of combining academics and professionals into one sample while simultaneously focussing on the changes that will take place in the coming decade. The timeframe of a decade was chosen as it ensures, on the one hand, that the results are not in danger of becoming outdated too fast. On the other hand, it limits the margin of error from wrong predictions and wild guessing that comes with an extended timeframe.

The interview part of this research consists of two phases: (1) explorative interviews, and (2) in-depth interviews. The explorative phase consisted of an interview with a professor from TIAS. The main goals of this phase were to: (1) get a better understanding of the field of accounting and auditing, (2) construct an interview guideline, and (3) create a list of candidates for the in-depth interviews. To include the theoretical as well as the practical perspective, both academics and professionals were included in the sample. It has to be noted that academics in the field of accountancy and auditing are often also active as professionals. A snowball sampling method was adopted for the in-depth interviews with the goal of creating a sample as diverse as possible. In total four people were interviewed in the in-depth phase, of which three were male and one female. One interviewee is currently director of the digital innovation and assurance department at one of the bigger consultancy companies. The second respondent has twenty years of accountancy experience and now leads a start-up company specialising in automating accountancy analyses. The third interviewee is currently a partner in audit innovation at an established accounting firm and is working on a Ph.D. project concerning the integration of process mining in auditing. The final respondent is a professor at Tilburg University specialised in accounting information systems and has experience as an accountant and partner at an established accountancy firm.

The interviews were semi-structured and increased in depth of questioning as the interview progressed; the interview questions are presented in Appendix 14. Each interview consisted of two parts: (1) uncovering which tasks will and will not be automated, and (2) reflecting upon the task categories as defined by SO. Throughout the interviews it was important to continuously keep the interviewee thinking in terms of tasks rather than general trends. If a tendency towards general trends was noticed during the interview, the interviewee was asked to provide a detailed practical example. Three interviews were conducted in person and one over the phone. The interviews lasted between 45 and 60 minutes. All interviews were recorded and fully transcribed.

The same coding scheme as presented for the literature was applied to the interview quotes. In total 201 quotes were extracted, of which 188 were included in the final sample. One specific general code was added for quotes regarding the final question concerning the reflection on the task categories presented by SO. Thirteen quotes were deleted in total for being too vague (e.g. “Routine accountancy tasks can be automated”) or for not specifically addressing a task that can be automated (e.g. “Automation will allow us to place all our tooling within the cloud”). In total, 99 quotes were coded as will be automated and 70 as will not be automated. In addition, nineteen quotes were coded as reflecting on task categories.

2.8 Evaluation

First, it is important to clarify the goal of the evaluation part of this research. This paper started by stating the prediction that around half of the jobs would disappear in the coming decades. The aim of this research is to make a more realistic prediction of the effects of automation on jobs. In order to do this, data triangulation was utilised for the exemplary cases of accountant and auditor. As there is no way of looking into the future, there is no other way but to wait and see whether the newly proposed task categorisation predicts the effects of automation more realistically than others have done so far. Thus, the evaluation of this research cannot provide conclusive evidence. However, for the new categorisation to have better predictive power, two fundamental requirements need to be met: (1) ‘Is the new categorisation generalisable beyond the exemplary cases?’, and (2) ‘Does the new categorisation lead to a different prediction regarding the effects of automation on tasks?’.

As it is not possible to apply the new framework to all jobs, besides the exemplary cases two jobs at the ends of the spectrum of automation are chosen: one job heavily affected and one job lightly affected by automation over the years. This should indicate whether the new framework does not merely reflect the changes in the exemplary cases, but also captures changes in task categories relevant to other occupations. Taking FO and AEA as the starting point, retail salespersons are indicated to be in the high-risk category in both studies, while elementary school teachers are predicted to be in the lowest-risk category in both studies.

Next, an assessment must be made of whether the new framework leads to a different prediction compared to others. This can be done by taking two measures: (1) the difference in classification of tasks/work activities, and (2) the difference in the predicted number of automatable tasks. First, it is important to determine whether the tasks and work activities of the two jobs selected for the evaluation would be classified differently using the new framework compared to using the one from SO. Second, a comparison should be made of the number of tasks that are predicted to be automated based on the findings of FO. The prediction of FO is that all tasks are automated, except for tasks that fall into one of three engineering bottlenecks: (1) creative intelligence, (2) social intelligence, and (3) perception and manipulation.
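A minimal sketch of the second measure is given below: counting how many of a job's tasks are predicted to be automatable under the FO assumption (everything outside the three engineering bottlenecks) versus under the new framework. The task labels and the new-framework rule are hypothetical illustrations, not results from this research.

# A minimal sketch of measure (2); labels and the new-framework rule are assumed.
FO_BOTTLENECKS = {"creative intelligence", "social intelligence",
                  "perception and manipulation"}
NEW_FRAMEWORK_AUTOMATABLE = {"routine cognitive", "routine manual"}  # assumed rule

# Hypothetical categorisation of a retail salesperson's tasks under both schemes.
tasks = [
    {"task": "Greet customers",             "fo": "social intelligence", "new": "interactive"},
    {"task": "Process payments",            "fo": "automated",           "new": "routine cognitive"},
    {"task": "Arrange product displays",    "fo": "automated",           "new": "adaptive"},
    {"task": "Resolve customer complaints", "fo": "social intelligence", "new": "judgement"},
]

fo_count = sum(t["fo"] not in FO_BOTTLENECKS for t in tasks)
new_count = sum(t["new"] in NEW_FRAMEWORK_AUTOMATABLE for t in tasks)
print(f"FO prediction: {fo_count} of {len(tasks)} tasks automatable")
print(f"New framework: {new_count} of {len(tasks)} tasks automatable")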

3. Results

The results from each of the three data sources will be discussed in turn: historical data, literature, and interviews. The literature and interviews sections start with an elaboration on the task categories, followed by a presentation of which types of tasks are mentioned in which scenario. The scenario of can/will be automated is referred to as scenario A, and the scenario of can/will not be automated is referred to as scenario NA. Making this distinction adds depth to the analysis, as it can illustrate the reasoning behind why tasks are perceived to be automatable or not. This will aid in developing the new framework presented in chapter 4.

3.1 Historical data analysis

Table 2 gives an overview of the trends per occupation, per variable (tasks and work activities), for both the number of tasks/work activities (size) and the average importance score percentages (importance), for all relevant task categories. The non-routine manual task category will not be discussed as no tasks or work activities were found for it in any year for any of the jobs. The routine manual task category is excluded from the table because no tasks and only one work activity were identified for this category. Figures 3-6, Appendix 15, present an additional, more detailed picture of the results per occupation, split into tasks and work activities with their importance scores for each of the years.

Table 2. Trends in number of tasks/work activities (size) and average importance score (importance) per variable (tasks and work activities) for accountants and auditors between 1998 and 2017 for all relevant task categories.

                               Analytical          Interactive         Routine cognitive
Occupation   Variable          Size   Importance   Size   Importance   Size   Importance
Accountant   Tasks             ↓      −            ↑      −            ↑      ↑
             Work activities   ↓      ↑            −      ↑            ↓      ↑
Auditor      Tasks             ↑      ↑            −      ↑            −      −
             Work activities   ↑      −            ↑      ↑            −      ↑

The importance scores show a very clear pattern: the scores always increased or, in some cases, remained equal over time. However, there seems to be no link between size and importance scores. Most trends in the size of the categories do not show a clear pattern. To start, there is no clear trend in the size of analytical tasks or work activities. For accountants, both analytical tasks and work activities have decreased in size. In contrast, auditors have seen an increase in the size of both analytical tasks and work activities over the same time period. For the category of interactive tasks/work activities there is a clearer trend in the data: it either increased or remained equal in size. Routine cognitive is a category for which a mix of trends was found. Within accountants, a contradicting movement was found where the size of routine cognitive tasks increased, but the size of routine cognitive work activities decreased. At the same time, for auditors, no changes were observed in the size of either routine cognitive tasks or work activities.

The routine manual category is an interesting case. Exactly one work activity was identified for both occupations each year. Although the size remained equal, it saw the biggest increase in importance score of all task categories, from 15 to 25%. A closer look reveals that the definition of the work activity changed between 1998 and 2007/2017, from the broader ‘Operating Vehicles or Equipment’ in 1998 to the more precise ‘Interacting with Computers’ in 2007 and 2017. It is debatable whether interacting with computers should be coded as routine manual in the first place, but SO clearly states that routine manual tasks include tasks or activities that consist of “Operating or controlling machines.” (p. 243).

The message that can be taken from the historical data in light of the new framework is that there exist contradicting trends within the analytical and routine cognitive task categories. This could indicate the need for an additional subdivision within these task categories, as it seems that one subcategory of analytical/routine cognitive tasks is automatable and another subcategory is not. However, the historical data cannot provide insights into where exactly this subdivision should be placed.

3.2 Literature analysis

3.2.1 Task categories

In Table 3 an overview is given of the aggregate results from the literature analysis.

Table 3. Visual representation of task categories and their corresponding future perspectives as taken from the literature.

Automated               Debated      Not automated
Routine manual          Adaptive     Interactive
Routine cognitive       Analytical   Tacit
Decision selection                   System supervision
Action implementation                Creative
                                     Non-routine manual

Quotes from twelve carefully selected articles were analysed. Some of the quotes did not fit the task categories defined by SO. During the regrouping of these quotes a number of things stood out. Firstly, it became clear that SO defined her categories under the (implicit) assumption that tasks are done either completely by people or completely by machines. As automation progresses, it becomes evident that tasks arise which, at their core, concern the interaction between people and machines. These types of tasks are hard to capture using the SO categorisation. Routine manual tasks do include tasks that require machine operation, but at a much more practical level of operation; the interaction between people and machines at a more abstract level of operation is not captured well. In the field of ergonomics, PAR built the DOA model with the goal of characterising the nature of the interaction between humans and machines. In order to categorise the group of codes describing human-machine interaction, two task categories were formed based on classes three and four of the DOA model: decision selection and action implementation.

From the description given by PAR, information acquisition (class one in the DOA model) is interpreted as mostly being a routine cognitive task. If the information acquisition task is non-routine, it is categorised as an analytical task. To see the effect of this, the routine cognitive task category is expanded with the following key words: retrieving, sorting, and storing of information. The second class from the DOA model (information analysis) is already included in the analytical task category of SO.

Furthermore, four other categories are defined during the recoding of the quotes. Firstly, system supervision tasks are added. In the categorisation of SO supervisory tasks are only named in interactive tasks as they are assumed to involve the supervision of people. However, the tasks that are included in this new category are tasks regarding the supervision of systems rather than people. The operation and controlling of machines which is mentioned in the routine manual task category is not considered to cover these tasks because supervision is assumed to be neither routine nor manual.

The second category that has been added is adaptive. This task category includes tasks for which the input of the environment is crucial for the completion of a task. The context in which a task takes place is important. It also concerns tasks for which no precedent exists as the environment is too unpredictable.

Thirdly, tacit tasks refer to tasks of which we understand how to perform them but cannot express this in words. Quotes containing words like ‘intuition’ or ‘common sense’ were included in this category.

Finally, creative tasks concern tasks such as ‘developing new meaningful ideas’ or ‘developing a hypothesis for a poorly understood phenomenon’. Pinpointing what creativity exactly entails is extremely difficult. Nonetheless, it is not a task that can be captured in any one of the SO task categories.

3.2.2 Scenario A or NA?

Table 7, Appendix 16, shows for every category which articles have named it and whether it was named in scenario A or NA. There is no discussion in the literature that routine manual and routine cognitive tasks can be automated. This underscores the common thinking that routine tasks, of any kind, can be automated. One can think of tasks like alphabetising a list or operating a machine on an assembly line. In addition, decision selection and action implementation are consistently named in scenario A. These statements originate in the DOA model. These task categories are prominent in, for example, aviation. Here, systems can support a pilot in making decisions. For example, the system could present various flight paths and let the human operator decide which path to take. Alternatively, if the auto-pilot is switched on, the system could take and implement these decisions autonomously, with the possibility of the human operator interfering.

On the other side of the spectrum there is consensus on the fact that the tasks in the following categories cannot be automated: interactive, tacit, system supervision, creative, and non-routine manual. These include tasks such as: ‘Forging relationships with customers’ [interactive], ‘common sense’ [tacit], ‘supervisory control’ [system supervision], and ‘Develop a new hypothesis’ [creative]. The results for the non-routine manual task category should be interpreted with caution as it was only mentioned once in one journal article.

For two task categories quotes in both scenario A and NA were found: adaptive and analytical. In the adaptive task category, quotes in scenario NA contain words like ‘Flexibility’, ‘Adaptability’ and ‘Unpredictable environment’. Only one quote mentioned an adaptive task in scenario A: ‘Self-driving vehicles’. For analytical tasks no such pattern became apparent. For example, quotes containing words such as ‘Diagnosis’ were mentioned in both scenario A and NA.

The literature has thus yielded six potential additional task categories for the new framework, two of which capture human-machine interaction. The question remains whether these categories of the DOA model form separate categories or whether they highlight a division that has to be made within the task categories defined by SO. Three newly defined task categories were only mentioned in scenario NA: creative, tacit, and system supervision. One new task category was mentioned in both scenarios: adaptive. Finally, with regard to the SO task categories: the literature results highlighted that analytical tasks might be too broad a category, but no clear indication could be found of how this category should be split into subcategories. In addition, the routine cognitive task category was expanded with tasks concerned with information acquisition.

3.3 Interviews

3.3.1 Task categories

In Table 8, Appendix 17, an overview of the categories with their codes can be found. In Table 4 all task categories and their corresponding scenarios are presented.

Table 4. Visual representation of task categories and their corresponding future perspectives as taken from the interviews.

Automated                 Debated                  Not automated
Information acquisition   Action implementation    Tacit
Routine cognitive         Assurance                Big picture
                          Monitoring
                          Information exchange
                          Information processing
                          Standardisation
                          Analytical
                          Interactive
                          Judgement

Three task categories: analytical, interactive, and routine cognitive, are taken from SO. Interestingly, all interviewees see automation as a ‘tool in their toolbox’ which they can use to do their job better. Three categories closely related to the DOA model are found: information exchange, information processing, and action implementation. Information exchange consists of the interchange of information, i.e. incoming and outgoing information from or to a system or organisation. This task category is an expansion of the information acquisition category in the DOA model which only captures the incoming information stream. Information processing is included in the information analysis category in the DOA model, but was explicitly distinguished from analysis in the interviews and thus included as separate category.

Six more task categories arose from coding the interviews: tacit, monitoring, big picture, standardisation, judgement, and assurance. Tacit refers to tasks we know how to perform but cannot put into words. Monitoring is exclusively focussed on observing data and data changes. Big picture refers to tasks for which the broader context has to be considered, for example placing the results of an analysis in a broad context to come to a conclusion. Standardisation refers to the standardised structuring and meaning of data and data elements. Finally, judgement and assurance form the core of the accountancy and auditing field and are thus, unsurprisingly, mentioned often. There is a fine line between judgement and assurance. In this research, judgement refers to the, often implicit, judgement made by an individual; judgement is therefore relatively open to adjustment. In contrast, assurance is a judgement converted into a value statement. This statement guarantees and nails down a specific judgement.

3.3.2 Scenario A or NA?

Table 9, Appendix 17, shows for every task category in which scenario it was named by the interviewees. Important to note is that the code reporting is mentioned in both the analytical and the routine cognitive category. In the interviews two types of reporting can be distinguished. The first type refers to the process of making a non-standard report, for example a report about an outlier in the data. In this case reporting is coded as an analytical task. The other type of report is standardised both in structure and in the definition of the data elements, for example a report based on the annual financial statement. In this context reporting was coded as routine cognitive.

Interestingly, the vast majority of the task categories are mentioned in both scenarios. There exists some disagreement amongst the interviewees as to which tasks will and will not be automated. Interviewees also contradicted themselves by making statements throughout the interview in which they mentioned the same category in both scenario A and NA. Thus, it seems likely that there exists a subdivision within these task categories. Below, all categories found in the debated scenario are discussed.

The division within action implementation is highlighted in the following example. A company uses scanning and recognition software which can automatically recognise a bill and update the amount in the corresponding place in the budget. If something goes wrong, for example a digit in an account number cannot be read, the system can automatically send a notification that something went wrong. However, the system cannot act to correct the mistake; this requires human intervention.

For assurance, all interviewees mentioned that the role of the accountant/auditor will shift from providing assurance on document level to assurance on data level. Document level assurance refers to reports that are structured in a standardised way with a standardised definition for each data element, for example, annual financial statements. As such, they are perceived to be extremely vulnerable to automation. In contrast, data level assurance is where accountants guarantee the quality of data flows and data elements, for example, the output of an ERP system. This is perceived to be much less susceptible to automation.

Monitoring. It was often mentioned in the interviews that the speed of financial traffic has increased to such a degree that companies desire real-time monitoring rather than a single financial report at the end of the year. Real-time monitoring, for example, allows accountants/auditors to warn their clients when the client is in danger of going over budget. Most quotes on monitoring are placed in scenario A, but some are placed in scenario NA. For example, when an (automatic) message arrives notifying the accountant of an outlier, that outlier has to be evaluated by a human operator (e.g. accountant/auditor). Moreover, it will often be the human operator who designs the bandwidths used to detect an outlier.
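A minimal sketch of this split, assuming a simple threshold (‘bandwidth’) check in Python with illustrative names and values: the continuous checking is automated, while setting the bandwidth and evaluating a flagged outlier remain human tasks.

def monitor_spending(transactions, budget_limit, warning_margin=0.9):
    # Automated, real-time part: flag when cumulative spending approaches the budget.
    # budget_limit and warning_margin are the 'bandwidths' designed by the human operator.
    total = 0.0
    alerts = []
    for amount in transactions:
        total += amount
        if total >= warning_margin * budget_limit:
            alerts.append("Spending at " + str(total) + " of " + str(budget_limit) + ": review required")
    return alerts

# The alert is generated automatically; evaluating it is left to the accountant/auditor.
print(monitor_spending([400.0, 300.0, 250.0], budget_limit=1000.0))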

Three other task categories are closely related to each other: information exchange, information processing, and standardisation. Designing the coupling of data sources is not automatable, but once the template has been designed, exchanging and processing data can be automated. XBRL, a standardised format for exchanging business reports, was often mentioned as an example: “After a company submits their data in XBRL format to the bank, without human intervention, the XBRL data can be processed within the banking system.”. Quotes in scenario A typically describe a situation where the accountant/auditor only has to press a button and an automated template imports or couples data from (various) data sources, processing the data in such a way that it is ready for analysis. The underlying insight is that the standardisation tasks determine whether information exchange and information processing tasks are considered automatable or not.
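This insight can be illustrated with a small sketch (Python, with hypothetical field names): once the structure and meaning of the data are standardised, as in an XBRL-like format, exchange and processing reduce to routine transformations, whereas designing the mapping itself remains a human task.

import json

# Designing this mapping (the 'coupling' of data sources) is the non-automatable part.
FIELD_MAPPING = {"omzet": "revenue", "kosten": "expenses"}

def import_standardised_report(raw: str) -> dict:
    # Automatable part: once the template exists, exchange and processing run without intervention.
    data = json.loads(raw)
    return {FIELD_MAPPING[key]: float(value) for key, value in data.items() if key in FIELD_MAPPING}

report = import_standardised_report('{"omzet": "120000", "kosten": "85000"}')
print(report["revenue"] - report["expenses"])  # data prepared for further analysis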

The remaining three task categories were placed in both scenarios: analytical, interactive, and judgement. These were also addressed by the interviewees during the final part of the interview, where they were asked to reflect on the task categorisation of SO. The analytical task category of SO was criticised by all interviewees. Five themes were raised which the interviewees believed would be valuable distinguishing factors within the analytical task category: judgement, interpretation, (placing in) context, subjectivity, and experience. Furthermore, the routine/repetitive distinction was also addressed, extending the original split proposed by Arntz et al. (2003). The interactive task category was criticised by half of the interviewees, who mentioned the following potential distinguishing factors: routine, repetitive, and standardisation.

From the interviews a number of potential new task categories were formulated with regard to the new framework. The DOA model could be recognised in four of these new task categories. In addition, the task categories assurance, judgement, monitoring, standardisation, tacit, and big picture were formulated. Most of the task categories had an implicit subdivision and thus appeared in both scenario A and NA; only the tacit and big picture tasks were exclusively named in scenario NA. With regard to the SO categorisation, the analytical and interactive tasks were at the centre of debate, and most interviewees suggested implementing a split within each category. Routine cognitive was expanded by the inclusion of the task of standardised reporting.

4. Framework design

The role of the new framework in this research is to facilitate the comparison of the results from the three data sources in order to make a more realistic prediction of the effects of automation on tasks. The new framework should reflect the current state of technology by updating the task categorisation of SO. An additional aspiration for the new framework is to serve as a template for future research. The new framework should strike a balance: on the one hand, it should function well on an aggregate level by not taking every detail into account; on the other hand, it should be open to modification as automation progresses, so that details can be added to the task categories to suit a smaller research scope as well. Two main observations can be made from the results of the three data sources.

Firstly, from the task categories of SO, interactive, analytical, and routine cognitive were identified as most interesting to investigate more thoroughly. From the historical data it could be seen that the interactive task category was the only one with a clear general trend: it consistently increased in both size and importance level over the years. The literature showed that interactive tasks were perceived to be relatively resilient to the effects of automation. In contrast, the interviews indicated that certain types of interactive tasks are more susceptible to automation than others. Regarding the analytical and routine cognitive tasks, the historical data revealed contradicting evidence concerning the trends in the sizes of both categories. The contradicting trend in analytical tasks consistently resurfaced in the literature and interviews; only the interviews gave indications as to what the division within this task category could be. The routine cognitive task category was consistently named in scenario A, but literature and interviews redefined routine cognitive in different ways.

Secondly, a number of additional categories were found. In terms of automatable tasks, the implications of the DOA model were prominent in the literature and interviews. However, the task categories based on the DOA model were also named in the NA scenario in the interviews. Task categories exclusively named in the NA scenario in the literature and/or interviews were: tacit, big picture, system supervision, and creative. The literature presented one new category on which no consensus was found: adaptive tasks. The interviews presented a large number of task categories placed in both scenarios; interviewees also presented concepts describing the split between scenario A and NA for most of these task categories.

4.1 New framework

The new framework is presented in Figure 2 where the left side presents the general framework and the right side a more detailed version. A number of things are important to note. First, the vertical axis represents the perceived difficulty of automation based on the findings of this research. As automation continuously progresses, there is little value in defining which task categories will and will not be automated. Rather, the new framework should be adaptable to the progress of automation by shifting task categories along the vertical axis.

In order to make a valuable comparison in the evaluation, assumptions are made as to which types of tasks will and will not be automated, based on the results of this research. In sections 4.2 and 4.3 all task categories are discussed in more detail. The creative and adaptive task categories are the most difficult to automate. The task categories perceived to be most easily automatable are routine cognitive, information exchange, and information processing. Routine interactive tasks are automatable, whereas non-routine interactive tasks are not. Within the analytical task category, the leftmost side of the axis (findings) is automatable, while the tasks from interpretation up to recommendation are not. In most cases there was insufficient detailed information on the level of standardisation; this will be further elaborated on in the discussion.

Interestingly, two out of the three engineering bottlenecks defined by FO correspond with the top categories of this research. The other engineering bottleneck, interactive tasks, is placed lower in this framework, as the interviewees explicitly mentioned that it could be partially automated. The second thing that requires explanation is the dashed line which runs above the bottom two task categories. This line represents the fact that not enough evidence surfaced in the data to predict the effects of automation on manual tasks; the discussion will return to this point.

Figure 2. New task categorisation framework: general (left), detailed (right).

4.2 Redefining SO task categories

For the analytical task category, contradictions were found throughout the three data sources. However, both the historical data and the literature gave no indication of a subdivision within this category; only in the interviews were suggestions made as to how to divide it. These suggestions consisted of the following aspects: judgement, interpretation, (placing in) context, subjectivity, routine/non-routine, and experience. Two concepts are chosen as the factors that split the category of analytical tasks: evaluation and standardisation. Figure 7 in Appendix 18 provides a visual representation of the new analytical category.

Evaluation is an umbrella term for the terms found in the interviews. Patton (1987) has suggested one way of distinguishing between the concepts within evaluation. He argues that there are four levels: findings, analysis/interpretation, judgement, and recommendations. The analytical task category is split by a continuous axis that starts with findings and ends with recommendations, as the lines between these concepts are very thin. Moving from beginning to end, it is perceived to be increasingly difficult to automate these tasks.
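As an illustration of how this continuous axis could be operationalised, a hedged sketch in Python follows; the ordering is taken from Patton's levels, but the numeric positions are assumptions made for illustration, not an established measurement.

# Patton's four evaluation levels, ordered from easiest to hardest to automate.
EVALUATION_LEVELS = ["findings", "analysis/interpretation", "judgement", "recommendations"]

def perceived_automation_difficulty(level: str) -> float:
    # Position on the continuous axis: 0.0 (easiest) to 1.0 (hardest).
    return EVALUATION_LEVELS.index(level) / (len(EVALUATION_LEVELS) - 1)

# Under the assumptions of section 4.1, only tasks at the 'findings' end are treated as automatable.
for level in EVALUATION_LEVELS:
    print(level, perceived_automation_difficulty(level))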

The second axis splitting the analytical task category is the degree to which standardised data is used. Standardised data consists of two parts: standardised structure and standardised meaning. Once both these
