
The Machine Learning Implementation Framework: An Empirical Study of the Implementation Processes of Machine Learning

A Case Study in the Netherlands

Mark Plantagie (11174528)

Date and version: August 2018, final version

University of Amsterdam / Amsterdam Business School

Executive Programme in Management Studies (MSc Business Administration), Strategy Track

Master thesis supervisor: dr. ir. J. Kraaijenbrink


Statement of Originality

This document is written by student Mark Plantagie, who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Table of Contents

List of Tables ... 5

List of Figures ... 5

ABSTRACT ... 6

1. INTRODUCTION ... 7

2. LITERATURE REVIEW ... 9

2.1 Background ... 9

2.2 Synthesis of Implementation Frameworks ... 11

2.2.1 Systematic Literature Review ... 11

2.2.2 Consulting Experts ... 13

2.2.3 Results ... 13

2.3 Design principles for machine learning ... 15

2.4 Theoretical Framework ... 18

3. RESEARCH METHOD & DATA ... 23

3.1 Design ... 23

3.2 Data Collection ... 24

3.3 Analysis Methods ... 26

4. RESULTS ... 28

4.1 Within-case analysis ... 28

4.2 Cross case analysis ... 37

5. DISCUSSION AND CONCLUSION ... 43

5.1 Conclusion ... 43

5.2 Limitations and Future Work ... 45

REFERENCES ... 47

Appendix A: Interview Questions ... 51


List of Tables

Table 1: Sources for implementation frameworks included in the synthesis ... 13

Table 2: Design principles for implementing machine learning ... 16

Table 3: Summary of the phases and critical steps ... 17

Table 4: Definition key constructs theoretical framework ... 18

Table 5: Distribution of participants... 24

List of Figures

Figure 1: Flow Diagram of selected sources ... 12

Figure 2: Theoretical framework machine learning implementation ... 21


ABSTRACT

Since becoming applicable in the enterprise, machine learning has become one of the top technology trends. However, the implementation of machine learning goes beyond traditional IT implementation and software development requirements, for which little practical guidance is offered, making machine learning implementation more challenging.

The first goal of this thesis was to examine the existing literature to see what can be learned from implementation frameworks to improve the current way of working in organizations. For this, a theoretical framework is created by synthesizing the existing literature and consulting experts in the area. The second goal was to empirically test and enrich this theoretical framework to describe what a framework for machine learning implementation should look like.

As a result of the study, a coherent framework is presented with different workstreams and iterative phases for implementing machine learning successfully. Based on the empirical research, the developed framework is deemed comprehensive, implementable, complete, and useful. A comprehensive overview of machine learning is delivered with novel insights into the nature of machine learning implementation processes.

The implementation processes can serve as a useful blueprint for future research and practice, and the findings take the first steps toward a comprehensive method for implementing machine learning.

Keywords: Machine learning – Implementation – Implementation framework – Implementation processes


1. INTRODUCTION

Machine learning is not new, but the hype around it has been fueled by increasing computing power, the availability of open-source tools and libraries, and the growing volume of data available to organizations. Most technologies at the top of the Gartner hype cycle are essentially associated with machine learning (Bini 2018), and these technologies are now increasingly available in enterprise applications to drive improvements and to solve business problems that cannot easily be handled by humans. This transformation toward a more strategic role for the technology is in line with the concept of Exponential Organizations, which reach exponential growth by investing in these new technologies. "An exponential organization describes an organization because of its ability to leverage new technologies to claim production, output or overall impact that is at least ten times larger than a regular organization in the same field" (Ismail 2014). Deloitte Global predicts the number of machine learning pilots and implementations will double in 2018 compared to 2017, and double again by 2020. However, it has also been predicted that the majority of these initiatives are doomed to fail due to poor understanding of the machine learning implementation processes and the technology itself. Despite the increasing awareness of the potential of machine learning, there is little theoretical work that guides its implementation. This is supported by Holtel (2016), who argues that these new technologies cannot be implemented with current methodologies, procedures, and best practices. The way machine learning works is not present in traditional IT implementations or software development, which are static. The testing and training of a model based on data is substantially different from other implementation techniques. To date there is no generic framework available that guides the implementation processes of machine learning. Within organizations a standard way of working is adopted based on existing knowledge throughout the organization and past experiences. This gap in the current knowledge of management and organization is researched here.

Therefore, in this thesis the best practices related to the implementation processes of machine learning are researched. First, the current literature on IT implementation and design methods is synthesized, complemented by consulting experts in the area. Second, the resulting theoretical framework is evaluated via interviews with experts in a machine learning project.


Research Question

The topic of implementing machine learning successfully in organizational environments has not yet been empirically explored. Hence, the aim of this thesis is to connect real life with theory, a unique proposition, and to answer the following research question: What should a framework for successful machine learning implementation look like?

This research contributes to the ongoing research on (IT) implementations and explores the current situation in organizations. For organizations themselves, the research attempts to provide practical recommendations on how to successfully design and implement processes for the adoption of machine learning. Twelve cases are included in this research, all within one banking organization in the Netherlands.

Overview of this thesis

This thesis comprises five chapters. The second chapter contains a systematic literature study with a description of machine learning design principles and a synthesized theoretical framework. It discusses the key constructs of the theoretical framework in more detail. The third chapter contains the second part of the research and describes the methods and data collection for the empirical research. Chapter four presents the results in a within-case analysis followed by a cross-case analysis to identify what steps are taken, what worked, and what should be changed in the theoretical model. Finally, in chapter five the research question is answered and the findings are discussed, followed by the conclusion and directions for future research.


2. LITERATURE REVIEW

This section provides a brief overview of machine learning and the work on the theoretical framework in which current knowledge is synthesized. For the latter, a systematic literature review is combined with approaches to software development and IT implementation recommended by experts in the area. This approach complements the systematic literature review and ensures a comprehensive synthesis of current knowledge.

2.1 Background

Machine learning

To define machine learning, Artificial Intelligence (AI) is first explained to put it into perspective. Machine learning is related to AI because an intelligent system should be able to adapt to changes in its environment (Russell and Norvig 2002).

Artificial Intelligence (AI) is a very broad field, and depending on the scope of application many definitions exist. To define AI, different definitions currently used in business research are combined. AI is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans (Techopedia). Gartner (Sicular 2017) defines AI as systems that change their behavior without being explicitly programmed, based on collected data, usage analysis, and other observations. AI is the ability of machines to exhibit human-like intelligence (Burnett 2017). All these definitions have in common that AI refers to intelligent systems that exhibit human-like intelligence, based on data, without being explicitly programmed. AI learns, based on data, to identify and classify input patterns, make probabilistic predictions, and operate unsupervised. Humans are now able to create systems that simulate human-like capacities and even outperform human expertise in specific domains or tasks, such as playing board games and answering trivia questions.

Within AI we distinguish between narrow AI and general AI. Narrow AI is the ability to learn within a limited, particular area and has shown enterprise applicability. General AI, on the other hand, seeks to perform any intellectual task that a human can do. This is still more a theoretical concept, or science fiction. General AI is similar to the human brain, and the ultimate goal is to have a computer operating as a human. However, the focus of this thesis is on narrow AI, more specifically on machine learning, a branch of AI that executes one specific task. Machine learning is used to obtain useful insights, predictions, and decisions. This allows organizations to tackle business problems that are difficult to handle with conventional approaches due to the amount of data and the ambiguity of information.

For this research, an implementation is considered machine learning when solutions fueled by data, with the ability to learn and improve by using algorithms, provide new insights without being explicitly programmed to do so (Sapp 2017). Machine learning is a technical discipline that solves business problems using mathematical models that can extract knowledge from data. This is in contrast to traditional IT and software development, which aim to solve business problems by explicitly defining the software logic (Sicular 2017). With machine learning systems, one hypothesizes a model design to meet a particular purpose and then lets the model be adapted to the specific situation, as described by the data. The data, which is the biggest driver of machine learning progress, is used to simulate human learning. For a theoretical background of machine learning, please refer to the work of Alpaydin (2010).

Machine learning is programming computers to optimize a performance criterion using example data or past experience. Machine learning uses the theory of statistics to build mathematical models whose parameters are fitted to the data, because the core task is making inferences from a sample. This approach to machine learning corresponds best with the work of data scientists. Data scientists design experiments to make observations and collect data. They then try to extract knowledge by finding simple models that explain the data they observed.

Machine learning implementation differs from IT implementations and software development in three ways: it is much more data-intensive and detailed; models are typically trained based on training data; and the models improve by learning (Davenport and Ronanki 2018). Machine learning is about acquiring knowledge through data, and it differs from traditional applications or programs that generate statistics or engineering output. The data model of machine learning needs to be tested in an iterative manner, and its maintenance is also different, because the model needs to be continuously retrained (Davenport and Ronanki 2018). Model development differs from traditional software development because of the requirement to monitor and tune machine learning in never-ending iterations.
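The contrast described above, a model whose behavior comes from fitting parameters to example data rather than from explicitly defined logic, and that is later retrained when new data arrives, can be sketched in a few lines. The following is a hypothetical, minimal illustration in plain Python (a one-variable linear model fitted by gradient descent), not a prescribed implementation:

```python
# Illustrative sketch only: the model's behavior is not explicitly
# programmed; its parameters w and b are fitted to example data by
# minimizing squared error, and the model is later retrained when
# new observations arrive (the maintenance step described above).

def train(xs, ys, epochs=2000, lr=0.01):
    """Fit y ~ w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Initial training sample drawn from the relation y = 3x + 1.
xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 4.0, 7.0, 10.0]
w, b = train(xs, ys)

# Maintenance: new observations arrive, so the model is retrained on
# the combined sample instead of being reprogrammed.
xs += [4.0, 5.0]
ys += [13.0, 16.0]
w, b = train(xs, ys)
```

After retraining, w and b recover roughly 3 and 1: the changed behavior comes from new data, not from changed code, which is exactly the difference from traditional software development noted above.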

2.2 Synthesis of Implementation Frameworks

The term implementation framework is used to describe relevant research that focuses on IT implementation, software development, and to a lesser degree data science implementation. This is consistent with prior work in which the term framework is defined as “strategic or action-planning models that provide a systematic way to develop, manage, and evaluate intervention” (Tabak et al. 2012) and are used to describe and/or guide the process of translating research into practice by specifying steps (Nilsen 2015).

2.2.1 Systematic Literature Review

This section describes the frameworks currently available in the academic literature; the key constructs are obtained by synthesizing and clustering the key concepts into one approach. The study was conducted as a systematic literature review: first identifying relevant sources, keywords, and initial literature pieces, thereafter interpreting and evaluating the obtained results to operationalize the key constructs (Xiao and Watson 2017). The constructs from the literature serve as a backbone for the theoretical framework and enable the empirical study to extend current knowledge rather than be exhaustive (Meyers et al. 2012, Thomas and Harden 2008).

Inclusion Criteria and Literature Search Procedures

To be included in the review, academic articles about implementation had to meet two criteria: (1) contain an implementation framework, and (2) be published in English between 1998 and 2018. The strategy used to locate relevant literature was keyword-based searches in the Thomson Reuters Web of Knowledge database using variants of multiple search terms (e.g. IT OR information technology OR AI OR artificial intelligence OR machine learning OR system OR software OR information system OR IS AND development OR implementation OR design AND process OR approach OR method OR framework OR model), and backward searches using the reference lists of relevant articles.


Not included are articles that have not been cited more than once in the literature, articles that focus on contextual factors that can influence implementation, articles that focus on evaluating implementations, or articles that do not put enough focus on the process of implementation. Instead, articles are only included when they offer a framework for implementation that can be applied generally across one or more areas. Figure 1 is a flow diagram depicting the study selection for the synthesis of implementation frameworks. Each article was examined to extract the key concepts about the processes of implementation and to identify differences, commonalities, and themes in order to synthesize the findings into a theoretical framework (Neely et al. 2014). Generalizations and overarching themes were found by adopting an existing synthesis method, creating broad categories to group similar concepts and activities from the different frameworks into phases (Meyers et al. 2012).

Figure 1 Flow diagram of included sources for the implementation framework synthesis.

2.2.2 Consulting Experts

To ensure a comprehensive synthesis of current knowledge, experts in the area were consulted for additional frameworks for software development, IT implementation, and data science. Four experts (Managing Director Emerging Technologies, Emerging Technologies Leader, Artificial Intelligence Executive Director, Service Design Lead) participated in this study, which was conducted by phone. The questions focused on machine learning in general, the approaches currently used in the field, and current research on IT implementation. Four additional frameworks were added to the synthesis (Agile, Design Thinking, IBM Analytics Solution Unified Method, Microsoft Team Data Science Process).

2.2.3 Results

A total of 11 relevant sources were uncovered; they are listed in Table 1 with a short reference to each framework. Most of the frameworks concern IT implementation and describe high-level implementation steps for application (n = 1), product (n = 2), or software development (n = 2), while other sources specifically relate to modelling tools (n = 2).

Table 1 Sources for implementation frameworks included in the synthesis

Source: Framework
Gernaey and Gani (2010): Model-based framework (modelling tool)
Morschheuser et al. (2018): Method for engineering gamified software
Gallardo et al. (2012): Model-driven method
Kulkarni and Padmanabham (2016): Extended waterfall model & agile model
Prasad (2000): Intelligent Information System (IIS) Framework & Concurrent Engineering (CE)
Bajgoric et al. (2002): The Fusion method
Browning (2017): Integrative Process Model
Takeuchi and Nonaka (1986): Agile (Scrum) Methodology
Brown and Martin (2015): Design Thinking Theory
IBM ASUM Web site: Analytics Solution Unified Method (ASUM)
Microsoft Azure Web site: Team Data Science Process (TDSP)

When analyzing the key concepts, substantial overlap in constructs is found between the frameworks. All but one framework (Morschheuser et al. 2018) are used for traditional IT implementation or software development and follow similar well-defined process steps. This staged approach is linear and follows a specified sequence, but it is at the same time iterative by revisiting earlier activities.

Furthermore, three steps are present in all the frameworks: 'Analyze, Design, and Implementation'. In general, the analyze phase highlights understanding the requirements of the project. In the design phase the technical solution is documented prior to the full implementation. A common theme in most of the frameworks is that the requirements are known before starting the actual implementation, which can be linked to the waterfall model (Prasad 2000). The waterfall model is seen as the traditional way for IT and only works after all the requirements are defined (Kulkarni and Padmanabham 2016, Prasad 2000). While software development basics such as analysis, design, implementation, testing, and deployment still matter, the waterfall methodology is too cumbersome. With this methodology there is no going back to modify the project or its direction; each phase must be ended before starting a new phase. Detailed requirements and plans are created upfront and passed sequentially from function to function. A bottleneck in one phase can slow or even halt the entire development process. In the context of machine learning systems, where not everything is known upfront due to the high level of uncertainty introduced by the data component, an adaptive Agile (Takeuchi and Nonaka 1986) approach is more suited. Last is the implementation step, where most frameworks describe separate development and test phases with some overlap or iterations to build in flexibility and learning (Kulkarni and Padmanabham 2016, Browning 2017).

The four additional frameworks obtained from consulting experts are based on experience in software development, on published research, or originated in organizations (e.g. IDEO Design Thinking, IBM Analytics Solution Unified Method, Microsoft Team Data Science Process). In comparison with the academic literature, these methods provide more practical guidance for the implementation process.

2.3 Design principles for machine learning

To support the implementation of machine learning and add to the findings from the existing frameworks, key design principles are formulated below: principles that machine learning methods should cover. Based on the knowledge from the literature review, the results are summarized as six principles: (1) Business Understanding; (2) Focus on User Needs; (3) Machine Learning Knowledge; (4) Continuous Feedback-loop; (5) Iterative Design Process; (6) Embed in Organization. These principles are deemed most important for implementing machine learning (Table 2). The design principles in more depth:

• Design Principle 1 ‘Business Understanding’: A clear understanding of the business perspective or problem is fundamental for implementing machine learning. A common design principle that is found in the literature and methods is, therefore, a profound understanding of the business needs or problem and the operational context in which the solution should be applied.

• Design Principle 2 'Focus on User Needs': Both the literature (Brown and Martin 2015, Morschheuser et al. 2018) and experts recommend designing based on customer needs rather than the internal needs of the business, and stress the importance of user involvement, especially in the ideation and design phases. In these phases quick feedback loops ensure that the solution addresses actual user needs. Implementation starts with the design, driven by the business, focused on delivering value, and ensuring that people will use it (Walls et al. 1992).

• Design Principle 3 'Machine Learning Knowledge': Machine learning designers need an understanding of the user, the data, and the technical feasibility. Design methods in the literature are helpful to start, but the experts emphasize that these methods cannot replace the knowledge and experience needed to model machine learning (Holtel 2016). Machine learning is data-intensive and trained based on data, which needs to be collected and cleaned into the right format first (Zhou et al. 2017, Bini 2018). In this context, machine learning systems should be designed by testing with data, without falling into the pitfall of trying to use a generic model to fit the data due to missing knowledge of the technology. An applied principle is thus the work of interdisciplinary teams to combine the required skills, both data scientists and business representatives.

(16)

16

• Design Principle 4 'Continuous Feedback-loop': The literature recommends testing machine learning systems early on, before the full solution is implemented and substantial resources are invested. The developer is present from the beginning, starting in the design phase, to build and test prototypes together with the customer and to validate the problem and solutions continuously.

• Design Principle 5 'Iterative Design Process': Implementing machine learning systems is an iterative development process that allows agility and learning. With machine learning systems a model is tested and trained based on data, which is substantially different from other implementation methods because the logic is not explicitly defined. The iterative evaluation to determine performance and adjust the parameters or choose different algorithms is generally aligned with the Agile methodology (Zhou et al. 2018, Bini 2018). The literature recommends continuous evaluation for feedback to adjust the algorithms as a prerequisite for success.

• Design Principle 6 'Embed in Organization': With machine learning systems there is no clear end or formal hand-over to the business. A machine learning system is not like a standard IT implementation, but more like growing the workforce digitally. You can compare it with people that need to be trained, managed, provided with work, supervised, and retrained. Never-ending (data) maintenance, monitoring, and retraining from the perspective of the user and the model are required.
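The loop named in Design Principle 5, evaluate, adjust parameters or choose different algorithms, and re-evaluate, can be made concrete with a small sketch. The example below is hypothetical: a toy nearest-neighbour regressor in plain Python whose single parameter k is tuned against held-out data. It stands in for the broader train-evaluate-adjust cycle rather than for any framework from the literature:

```python
# Hedged sketch of the iterative evaluation in DP5: train candidate
# configurations, measure each on held-out data, keep the best. The
# candidates here are values of k for a toy nearest-neighbour
# regressor; in practice they could be entire algorithms.

def knn_predict(train, x, k):
    """Average the y-values of the k training points nearest to x."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def holdout_error(train, holdout, k):
    """Mean squared error of the k-NN predictor on held-out points."""
    return sum((knn_predict(train, x, k) - y) ** 2
               for x, y in holdout) / len(holdout)

# Noisy training sample of y = x^2 (deterministic alternating +/-0.5
# "noise") and a clean held-out sample at the in-between points.
train_set = [(i / 5, (i / 5) ** 2 + (0.5 if i % 2 == 0 else -0.5))
             for i in range(10)]
holdout = [((2 * j + 1) / 10, ((2 * j + 1) / 10) ** 2)
           for j in range(10)]

# The iterative loop: adjust the parameter, re-evaluate, keep the best.
best_k = min(range(1, 6), key=lambda k: holdout_error(train_set, holdout, k))
```

In this constructed sample, averaging two neighbours cancels the alternating noise, so the loop settles on k = 2. The mechanism, continuous evaluation feeding back into the model configuration, is what DP5 prescribes.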

Table 2 Design principles for implementing machine learning

Design principles
DP1: Understanding of the business needs and context
DP2: Focus on user needs and involve them during the ideation and design phase
DP3: Profound knowledge in machine learning technology and work with interdisciplinary teams
DP4: Test and evaluate machine learning design ideas as early as possible
DP5: Follow an iterative design process
DP6: Embed the solution in the organization with continuous maintenance, monitoring, and retraining


Two channels were used for the literature review; thus far, knowledge of current implementation frameworks has been gathered and a list of machine learning design principles has been developed. To synthesize this knowledge into one approach, an overview of the key constructs in the literature was made. This overview was documented in a summary table that divided the process phases and steps into three distinct categories: pre-implementation, process of implementation, and post-implementation. Based on this categorization and the derived design principles, a coherent theoretical framework was created that synthesizes the identified implementation frameworks in the literature. The framework reflects the design principles (see Table 3) and represents, based on the literature alone, the answer to the research question.

Table 3 Summary of the 5 process phases and 10 critical steps that are associated with successful machine learning implementation

Phase 1: Analyze
1. Business understanding
2. Plan & Communicate
Phase 2: Ideate
3. Data Acquisition & Preparation
4. Design & Prototyping
Phase 3: Implement Design
5. Modeling
6. Evaluation
Phase 4: Implement Solution
7. Deployment
8. Validation
Phase 5: Operate
9. Maintenance
10. Retraining
Project Management (supports all phases)

2.4 Theoretical Framework

As illustrated in Table 3, the critical steps of the theoretical framework can be consolidated into five phases: (1) Analyze; (2) Ideate; (3) Implement Design; (4) Implement Solution; (5) Operate; where Project Management supports all phases. The theoretical framework indicates a staged but highly iterative approach which highlights where to focus when designing and implementing machine learning. Table 4 provides an explanation of the key constructs before each is discussed in more detail.

Table 4 Definition key constructs theoretical framework

Analyze (IBM ASUM, Bajgoric et al. 2002, Kulkarni and Padmanabham 2016): Defines the product, its evaluation criteria, and the problem that needs to be solved. Obtain agreement between all parties about these objectives and requirements.

Ideate (Brown 2008, Morschheuser et al. 2018): Development of a user-centered design, identification of data sources, and rapid prototyping to generate feedback and match people's needs with what is technologically feasible.

Implement Design (Microsoft TDSP, Browning 2017): Train, test, and evaluate with the users to improve the accuracy of the model in iterative cycles to create a Minimum Viable Product (MVP).

Implement Solution (IBM ASUM, Kulkarni and Padmanabham 2016): Integrate the machine learning solution (MVP) in the overall product, configure as necessary, communicate the deployment to the business user audience, and validate customer acceptance.

Operate (Takeuchi and Nonaka 1986, Morschheuser et al. 2018): Monitoring and managing of the subsequent iterations of the model to continuously improve and retrain to keep up the performance.

Project Management (IBM ASUM): Consists of processes which assist with managing and monitoring the progress and maintenance of the project.

Analyze

According to DP1, understanding of the business needs and context is important to design and implement machine learning solutions (Table 2). All the framework literature supports activities to engage with the business to define the problem, the need, or both. The practice of machine learning based design is grounded in an understanding of real users, their goals, tasks, experiences, needs and wants, rather than business objectives (Walls et al. 1992, Earl 1993, Kulkarni and Padmanabham 2016). However, only a few studies highlight the importance of understanding the context-specific requirements of the machine learning software.

The first step is done by the business representatives together with the knowledge engineers and consists of selecting the use cases and conducting a small assessment that combines the business insights, the technology, and the human element into a small feasibility study. With machine learning the human-machine interaction and technical feasibility are also important: if the business is to adopt the solution, it needs to be functional. By looking at the business need, the objectives and requirements, whether machine learning is the appropriate solution, and the technical feasibility, a first go/no-go decision is included in the process (Morschheuser et al. 2018). Additionally, for the objectives a clear measure of success should be used that can guide and quantify the success of the project, such as the adoption rate and the accuracy of the model.

The objectives and requirements from the business perspective are translated by the knowledge engineers into user stories for planning, and to keep customers and stakeholders involved the reasons to change need to be clearly communicated. The work involved in this first phase is in preparation for the iterative phases that follow.


Ideate

The second phase of the implementation framework is 'Ideate'. Rather than asking knowledge engineers to make applications that support the business, companies are asking them to use technology as an enabler to create products that better meet customers' needs. The former role is tactical and results in incremental added value; the latter is more strategic and leads to dramatic forms of value and change (Morschheuser et al. 2018).

There are two important activities in this phase: 1) data acquisition and preparation; and 2) design and prototyping. Machine learning starts with prototyping models and algorithms on small sample data sets to capture and refine the hypothesis you want to test. The first step is to produce clean, high-quality data, followed by designing from the perspective of the user and prototyping. You need to assemble data and involve key stakeholders from different disciplines for contribution and validation: people from the business to translate the customer needs, knowledge engineers to bring in the solution knowledge, and data scientists to gather and clean the data. By brainstorming with the business, knowledge engineers, and other stakeholders a list of ideas is created, followed by iterative cycles of prototyping, testing, and refinement with sample data (Brown 2008).
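The data acquisition and preparation activity can be illustrated with a deliberately simple sketch. The record structure and field names below are invented for illustration; the pattern, dropping unusable records and then scaling the remaining feature values, is the generic first step the paragraph describes:

```python
# Hypothetical sketch of producing clean, high-quality data from a raw
# sample before any modelling: filter out records with missing or
# invalid values, then min-max scale the numeric feature to [0, 1].
# All field names and values are invented.

raw_sample = [
    {"customer_id": 1, "monthly_spend": 120.0, "churned": 0},
    {"customer_id": 2, "monthly_spend": None, "churned": 1},    # missing value
    {"customer_id": 3, "monthly_spend": 480.0, "churned": 1},
    {"customer_id": 4, "monthly_spend": -50.0, "churned": 0},   # invalid value
    {"customer_id": 5, "monthly_spend": 300.0, "churned": 0},
]

# Step 1: drop records that cannot be used for training.
clean = [dict(r) for r in raw_sample
         if r["monthly_spend"] is not None and r["monthly_spend"] >= 0]

# Step 2: scale the feature so downstream models see a comparable range.
lo = min(r["monthly_spend"] for r in clean)
hi = max(r["monthly_spend"] for r in clean)
for r in clean:
    r["monthly_spend"] = (r["monthly_spend"] - lo) / (hi - lo)
```

On this toy sample two of the five records are discarded and the remaining spend values become 0.0, 1.0, and 0.5; the same filter-then-scale discipline, at a much larger scale, is what the prototyping iterations depend on.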

This ideation phase, seeing how it works and fixing it, is important for machine learning models, which is also highlighted in DP4. If the model accuracy does not reach the threshold, you can decide to stop, not invest further in using machine learning to solve this business problem, and look for other solutions. Engineering here should be a situational and iterative development process with a high degree of user involvement and early testing of design ideas (Morschheuser et al. 2018). Because machine learning is not as static as traditional IT or software development, there is no one size fits all, and the number of alternatives, the different models that fit the sample data, is narrowed down by trial and error in an iterative manner.

Implement Design

After evaluating the design and prototype, the modelling starts by testing the outcomes of the model to refine and strengthen its training. This testing and training of the model with data is substantially different from other implementation techniques (Davenport and Ronanki 2018).


With machine learning you implement the solution in a small and iterative manner to learn something new from each iterative step; there is no large-scale rollout of the solution (Brown and Martin 2015). The Agile-like lean series, with test-and-learn cycles, is carried out by involving users in executing the performance tests. It is better to develop one simple component at a time than to build the whole solution at once, which is highlighted in the Agile manifesto.

The actual implementation of the design and prototype requires installing a development environment where an MVP of the solution can be built, to be improved in subsequent iterations after deployment to production. The decisions on which vendor to select, which software to adopt, and how to deploy will steer the actual implementation approach.

Implement Solution

The fourth phase indicates that the solution will be implemented into the production environment. The two critical steps in this phase are deployment and validation of whether the project meets customers’ needs, based on the V-model (Browning 2018, IBM ASUM). Deployment means planning a business simulation and a progressive ramp-up in a pre-production environment before actually deploying the solution into production. Deployment includes creating a plan to run and maintain the solution, documenting, communicating the deployment to the business, and configuring as necessary, because the test and production environments are rarely identical. People need to start adopting the solution, and sometimes additional training is needed.

Figure 2 Theoretical Framework to move from concept to product in each phase. The arrows indicate a continuous flow of new inputs throughout the process. While a sequence of steps is visualized, the framework is in fact highly iterative with evaluations in each step.

Operate

While machine learning is perceived as a never-ending iterative process of ideate, implement design, implement solution, and operate, in accordance with DP5 and DP6, the reviewed methods in the literature do not highlight this aspect. Most traditional IT implementation and software development methods recommend a clear end to the process: a hand-off to another operational team for maintenance and monitoring. Although a hand-over to another team is possible, this is not where the implementation stops, because of the learning component: the model behaves more like an employee than like static software. In line with Agile, machine learning delivers a perpetual product that is never finished, and the solution will become part of the workforce. You need to train, test, manage, provide work, supervise, build means for escalation, and retrain the solution. All the consulted experts emphasized that machine learning should not be treated as a traditional IT implementation or software development with a clear end. Even after the initial MVP, the solution needs to be retrained on new data sources to keep up its performance. A typical outcome of this phase is a list of improvements or a plan for new iterations.
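The operate-phase monitoring and retraining described above could be sketched as a simple performance check over recent production predictions; the class name, window size, and threshold below are assumptions for illustration, not part of the framework:

```python
from collections import deque

class PerformanceMonitor:
    """Track prediction outcomes in production; flag when retraining is due."""

    def __init__(self, window=100, threshold=0.9):
        self.window = deque(maxlen=window)  # recent correct/incorrect flags
        self.threshold = threshold          # minimum acceptable accuracy

    def record(self, prediction, actual):
        """Log whether one production prediction matched the observed outcome."""
        self.window.append(prediction == actual)

    def needs_retraining(self):
        """True when accuracy over the recent window drops below the threshold."""
        if not self.window:
            return False
        return sum(self.window) / len(self.window) < self.threshold

# Illustrative use: accuracy over the last five predictions is 2/5 = 0.4.
monitor = PerformanceMonitor(window=5, threshold=0.8)
for pred, actual in [(1, 1), (0, 0), (1, 0), (0, 1), (1, 0)]:
    monitor.record(pred, actual)
```

In practice such a check would feed the list of improvements, or trigger the plan for a new iteration, that this phase produces.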

Project management

In support of all the previous phases, project management has a critical role in the implementation process. With the introduction of Agile, Scrum teams, DevOps, and similar approaches, organizations are adopting the organization of projects through self-organizing teams and overlapping development phases.


3. RESEARCH METHOD & DATA

3.1 Design

The knowledge objective of this research is to obtain a more sophisticated understanding of, and new insights into, how organizations can successfully implement processes for the adoption of machine learning. For this research a mixed-methods approach is used: by employing a comparative case study design with process tracing and content analysis, the variables in the data are identified (Beach 2017, Skocpol 1979). By conducting interviews, the developed theoretical framework is evaluated, and factors of cause and effect are collected to develop the characteristics of a methodology. With process tracing and structured cross-case comparison, different events, successes and failures, are analyzed to capture the similarities and drivers of successful machine learning implementation.

Process tracing is used to demonstrate causality, through the role of the logic of inference. The working of causal mechanisms is traced as they operate within a case, to arrive at generalizable causal mechanisms linking the different events to a successful outcome in similar cases. Each process is analyzed to see whether it had a positive impact on the outcome, to unpack the causal process and identify logical shortcomings in the theoretical framework and critical links to causal stories that are particularly interesting to elaborate on. See Beach (2017) for the criteria to determine that a causal connection exists between two or more variables.

To generalize, the study is combined with a comparative case study, which is appropriate in new topic areas (Eisenhardt 1989). The objective is to establish causes of successful implementation, and since comparative historical analysis uses comparisons among positive cases, and between positive and negative cases, to identify and validate causes rather than descriptions, a similar methodology will be applied (Skocpol 1979). Multiple cases with different outcomes are used to make a systematic comparison of themes across successful and failed instances of machine learning projects to formulate a coherent framework (Skocpol 1979). This coherent framework of themes presents an accurate understanding of the big picture instead of emphasizing specific propositions (Stemler and Bebbel 1999).


The research combines an inductive and a deductive approach. Based on the theoretical framework, the data is searched for a set of themes, but also for new patterns, by isolating key constructs that make meaningful contributions to the research question (Stemler 2001, Eisenhardt 1989).

3.2 Data Collection

The research includes primary qualitative data from 12 interviews with subject matter experts. As little research is available on the topic, rich and detailed information is necessary that can be used to generate generalized themes. Furthermore, interviews allow the researcher to clarify ambiguous answers, seek follow-up information, and collect comparable data (Yin 2009, Paradis et al. 2016).

As the research topic is relatively new for enterprises, no large number of available cases exists in the Netherlands. The appropriate case study is ING Bank Netherlands, where multiple cases in one context are available to examine examples of both success and failure (Gering 2004, Eisenhardt 1989). People who are involved in machine learning implementations, subject matter experts, were contacted to participate in this study. Table 5 shows the overview of the interviews, the departments of the participants, their function, years at the company, and duration of the interview.

Table 5 Distribution of participants

#    Department                                Function                                        Years at the company    Duration of the interview
1    Wholesale Banking                         Head of Robotics                                7-8 years               43 min
2    Wholesale Banking Advanced Analytics      Data Scientist                                  2-3 years               45 min
3    Wholesale Banking Advanced Analytics      Data Scientist                                  5 years                 59 min
4    Domestic Bank Financial Markets IT
5    Retail                                    BI Developer and Scrum Master                   >15 years               68 min
6    Innovation Center Advanced Analytics      Product Owner Data / Lead Non-Financial Risk    7-8 years               64 min
7    Retail IT                                 Chapter Lead Engineer                           5-6 years               48 min
8    Wholesale Banking                         Product Owner                                   15 years                60 min
9    Retail Advanced Analytics                 Data Analyst                                    2-3 years               37 min
10   Retail Advanced Analytics                 Data Analyst & Data Engineer                    2-3 years & 3-4 years   48 min
11   Wholesale Banking                         Principal Product Owner                         >9 years                46 min
12   Innovation Center Advanced Analytics      Product Owner                                   8 years                 44 min

The interviews were conducted face-to-face and through Skype (interview 12) and lasted on average 50 minutes. A list of predetermined questions was used to ensure all topics were covered. The topics focused on how projects started, whether a standardized organizational approach was used, what the steps in the process were, whether the implementation process deviated from the theoretical model, and the outcome of the project. The interview questions used can be found in Appendix A.

The interviews were recorded in Dutch or English and transcribed to enable the analysis and documentation (transcripts are included in Appendix C). These transcriptions were then sent to the respondents to verify whether they agreed with the contents and had any remarks on the transcripts, which in turn increased the validity of the data.

3.3 Analysis Methods

Content analysis is used to identify variables in the data in a replicable and systematic manner. The data analysis was an iterative process; the key constructs and central outcome of the research became clear during the analysis. The extant theory is used as a grounded understanding of the events and to operationalize the key constructs of the study (definition of product success, various phases within the product development cycle). Next, patterns in the data are examined to isolate the cross-case similarities and differences, to see which process steps contribute to success.

Success criteria to measure and determine success

Project success consists of product success and project management success (Lech 2013). Baccarini (1999) makes a distinction between project (management) success, which consists of time, budget, and functionality criteria, and meeting the customer's organizational expectations, i.e. product success. The definition of Lech (2013) will be used for this research to have a binary metric: a project is either a success or a failure. A project is considered successful when 1) business/organizational goals are met and 2) time, budget, and functionality criteria are met or adjusted for uncertainty. If changes in time, budget, and functionality are caused by changes in the project circumstances that could not have been predicted during the objective setting, the project can still be considered successful. Failed projects, however, are projects that did not meet business/organizational goals, regardless of the project management criteria. This means that the project management criteria alone are not sufficient to evaluate the success of a project. Success is more than just meeting the requirements in the business case; organizations should focus on agreeing on the definition of success before a project starts and on the delivery of benefits to the company (Thomas and Fernández 2008). Although project management criteria are easier to measure, they can be impacted because machine learning projects are highly uncertain.

The interview transcripts are coded in MAXQDA Analytics Pro. This software allowed the researcher to structure the transcripts and code all interview data. Each case is analyzed individually and compared to the others to examine patterns and similarities that contribute to the holistic view that was set out to be formulated (Baxter and Jack 2008).
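The binary metric above can be expressed as a simple decision rule; the function below is an illustrative sketch of the Lech (2013) definition as applied in this study, not part of the thesis's analysis tooling:

```python
def project_success(goals_met, pm_criteria_met, deviations_unforeseeable=False):
    """Binary project success following the Lech (2013) definition.

    goals_met: business/organizational goals were achieved.
    pm_criteria_met: time, budget, and functionality criteria were met.
    deviations_unforeseeable: deviations in time/budget/functionality were
    caused by circumstances that could not have been predicted during
    objective setting (the 'adjusted for uncertainty' clause).
    """
    if not goals_met:
        return False  # failure regardless of project management criteria
    return pm_criteria_met or deviations_unforeseeable
```

For example, a project that met its goals but overran its budget due to genuinely unforeseeable circumstances still counts as a success, while a project that hit every project-management target without meeting its business goals does not.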


An inductive and deductive approach was used by marking data that addresses the research questions (open coding) and assigning codes to paragraphs or specific sentences (Strauss 1987). Subsequently these codes were related back to the theoretical framework's themes, and new labels were created for the new factors that emerged from the data, before generalizing the overarching themes to create a ‘house’ that contributed to answering the research question (Seidel and Kelle 1995). The codebook can be found in Appendix B.


4. RESULTS

The research findings from the twelve interviews are discussed in this chapter, first by looking at the results per case and subsequently cross-case, to retrieve patterns in the data that provide an answer to the research question. Besides following a clear interview structure, the implementation process approach of each interview is described with its most important conclusions, to ensure a systematic analysis and to prepare for the cross-case analysis. The implementation process is described using the theoretical framework discussed in the literature review: analyze, ideate, implement design, implement solution, operate, and project management.

4.1 Within-case analysis

Interview 1

The first interview was with the head of robotics, who is involved in multiple projects related to Robotic Process Automation, Natural Language Processing, and chatbots, not on a functional level but rather from a governance perspective.

We discussed the implementation of a chatbot in a business-to-business environment, where the dependency on data was stressed because the data is stored across multiple systems and one must think about how to translate this data into client communication. The project was initiated from the business in an innovation project in which the organization looked into how they could build a chatbot. For this, an internally developed approach was followed that includes multiple stage gates, such as what the client's needs are and how the service will be delivered.

Regarding the agile methodology, on which the organization's planned approach is based, the respondent emphasized that the project started with testing an idea in the market to see if there is demand. By testing it in the market, client understanding is created for subsequent phases. “This is actually the biggest mistakes big companies are making, that they build something because they think that is what the customer wants” – Respondent 1

To implement the prototype, data was needed, and the project experienced that the technology worked but that finding correct and sufficient data was too complex. The project was stopped because the data needed to be collected from different internal sources, with optimal security, and those sources had different data mappings. The organization changed the implementation process to collect more data first, analyze the data to learn what the client need is, and then see whether this could be improved by using technology. Here the focus is on client needs, to improve the process or experience, instead of business needs, because the solution will differ.

To conclude, it is better to start by testing a prototype in the market and learn by doing, instead of analyzing and designing based on the business needs. The client and the business should be brought together, but success is measured based on client adoption. Furthermore, start experimenting with a small scope, collect more data, and adopt an iterative design process.

Interview 2

The second interview was with a Senior Data Analyst in Advanced Analytics who is involved in projects from idea to finished product.

The project originated rather organically and not with classic business case development; the data science team worked with the business to see how they could help.

After this organic part they started a proof of concept, almost a feasibility study, to show that it is feasible, that the data could be collected for idea generation, and to start with the design and prototyping for user feedback. If tested feasible, they proceed with a minimum viable product (MVP) that is developed further by involving users for ideas and trial and error to optimize the accuracy. In this chaotic process, where multiple phases are revisited and steps like business understanding and modelling happen continuously, a standardized method is used with different stage gates to see if the product would be a success. Validation of the model is done at the end, before going to production, because models change often. The validation, to see whether the model works and keeps working as designed, is done by interpreting the outcome. Data scientists need to know what the model does, build specialism, and therefore need a profound knowledge of machine learning. “You can drive a car, but that does not imply that you know how the engine works. We do want you to know how the engine works.” – Respondent 2

To summarize, start ideating with the business to create a proof of concept, something clickable, for feasibility validation and to show that it makes their lives easier, before moving to an MVP. Start with data acquisition for idea shaping and modelling, and build a prototype where the steps are continuous and non-linear. Unlike typical business processes, where there is overview and the next steps are known upfront, data science is try-first before implementing. The classical scrum is difficult to apply here.

Interview 3

The respondent works as a Data Scientist in the Analytics Tribe, a team comprising 12 data scientists and 50 data analysts who help the business with data insights.

Projects start by bringing ideas to the business and discussing past projects with the product owners for inspiration. However, the simplest situation is when the business already has ideas and needs help refining the business problem.

The project made use of agile and scrum to create user stories and prioritize the desired features by impact, to guide the data collection. This prioritization was done together with the users in the field to have a starting point and an understanding of the data. The first results after building the model were evaluated with the business based on sampling and provided new ideas for the next iterations. These steps are repeated to build the models in iterations by showing the business what the model generates. Next was piloting of the model, to know whether the model works in real life and to get a sign of its potential. “Only then you really know what the model does. You can simulate this on your computer, but that is always an interpretation.” – Respondent 3

This first process is prototyping; the second process is bringing it into production, where the software engineering aspect comes into play. For the second process, a governance framework supports the handover from the prototype environment to the production environment, where a different squad maintains and monitors.

In summary, iterations are conducted over the phases in the first process, prototyping, but only after the end customer's pain points have been defined, so that it is known what to build and the scope will not change. It is paramount to involve the business, the user, in the whole process for feedback and to create ownership. Standardization between the separated platform teams creates clarity, and validation is done on a functional level to test whether the produced outcomes make sense. Both teams need to know how the solution is built up and how it works, which deviates from normal software development.


Interview 4

This case is an example of a project on a best-effort basis, next to the respondent's main daily function, which never reached the production stage.

The project started by going to the business to help them with their daily work; together they came up with a shortlist and started prototyping. This was driven by the business: “...that was the purpose, my purpose, to not to come up with anything of myself because the vision is quite different between IT person and the business persons… we just need to do something that is really useful.” – Respondent 4

The project did not follow a certain methodology but common sense, and from one use case they built some prototypes with good accuracy. The project started lean: build small, ask for improvements, keep an inflow of feedback, and test to prevent the model from generating inadequate data. However, when submitting the developed work to the regulator for approval, they were not ready to accept a black box. Machine learning is still a relatively new topic and not everybody has the same perspective on it.

The solution needs to be close to the end user, and involving them in testing and feedback is important as they understand the business and context. Looping through the different phases of data, modelling, and testing several times supports an iterative design process, where a deployment pipeline should be in place that connects the test and production environments. This pipeline can be reused by different projects because they use the same framework and standardized tools.

Interview 5

Interview was with a member of the Retail DevOps team that is responsible for the operational application maintenance and development.

In the agile environment, projects arise diffusely: people seek each other out to solve business problems and find budget to start. People with machine learning skills sell themselves to the business and start by collecting data and modelling in an improvised environment. However, problems arise when they want to go live. The infrastructure that is needed is most probably not in place or needs to be adjusted, which takes time. “An infrastructure within a bank is the most slowest in the world. There is nothing slower than infrastructure. Even organizational development is quicker” – Respondent 5

In a Machine Learning Implementation Framework, infrastructure needs to be incorporated. When people start analyzing and experimenting they need a framework, a specific laboratory environment, to ensure that the model can also be transferred to production. A more professional way of working is required for deployment, with separated environments and clear roles and responsibilities. Standardization supports this hand-over between the two teams because there need to be rules. It is not a software implementation but a configuration that is transferred, and you need to prevent a platform gap. However, there is a trade-off between quick experimentation and quick deployment. Therefore, people from both teams need to align on new tools in the market to ensure that the two environments are identical.

Interview 6

During the project the respondent had the role of data engineer and is now product owner data and, as lead non-financial risk, responsible for securing that the delivered products fit within legal compliance, operational, and risk management requirements.

The project idea came from another product and the business was interested because they had a clear need. However, normally it starts with a business problem and examining which technologies could be beneficial.

It started with a prototype based on real data, proving that the concept is real, to justify further investment and to make it land in the business. Together with the business, the needs or requisites were prioritized for the scope and milestones of the MVP. From the start they worked with super users from the business who already started using the product, for constant feedback and to sponsor the embedding. After the MVP followed the onboarding of the users and the collection of feedback for future steps of the product, because it was good enough to launch but certainly not finished. The implementation process followed the scrum methodology, with a product owner from the business and, on top of that, a steering committee that functioned as a kind of stage gate next to the user testing and validations. For the project, a development production line was used. Development and testing were done with a lot of data to validate, test, and train, and the acceptance environment was identical to production. This was the last step, where the version was validated before it was sent to production and integrated with the product.

To conclude, it starts with analysis in self-steering interdisciplinary teams instead of the classic project management governance. The understanding of business needs and context develops when the business is involved in the process. If the data is already there, continue with experimenting: trying different models, modelling, and evaluating. Fail fast and already look for optimization before you integrate with the product.

The hardest part is user adoption. Commitment of senior management helps, but in general people do not really know what machine learning is or what it does. The users must trust the output, which is supported by data scientists with profound machine learning knowledge explaining what the model does. However, the biggest key factor is the constant supply of quality data.

Interview 7

The respondent is IT Chapter Lead, has a background in software development, and is involved in multiple projects.

The respondent's project followed a software development way of working, starting with a business case and analyzing the intentions. The project looked at what data is available and what data is needed, and then tested different models. Evaluation was done based on predefined criteria to reach the end goal of the MVP. Projects do follow a certain sequencing the first time; after that, however, the order of the implementation process varies widely.

What usually happens is that the data scientist and product owner sit together, create a solution which is translated into a model, and then push to bring it into production. This leads to security, data, and infrastructure challenges for operations, and means that the current way of working is not scalable. Therefore, interdisciplinary teams are important in the implementation process to enable delivery at scale. “That is really the next step… The gap between data science and operations, closing that is one of the biggest enablers for scaling of your business cases.” – Respondent 7


Interview 8

The respondent is responsible for transferring operational work to a nearshore Shared Service Center and introducing one Agile way of working.

There was a clear business problem foreseen by the respondent, and this was the trigger to make a business case and start a proof of concept with machine learning techniques. A team with both business and machine learning knowledge was composed and started prioritizing the top 10 cases. Next was collecting enough data samples for these top 10 and directly starting to train the model, looking at the results, and fine-tuning the models by involving the users for their process knowledge. Step by step, by evaluating the output with the business for retraining, the model became accurate enough to go to production. In the beginning the scope of the solution was still limited, but the tool was embedded in the new way of working. This embedding was supported by a new work instruction and a presentation about what machine learning is and what it does.

To conclude, the project followed a combination of the agile and waterfall methodologies, with a clear business case, building an MVP, performing UAT, and improving the tool in subsequent iterations with the users. For project success it is important to compile a team of people who know what is required, who have machine learning knowledge and hands-on development skills, and who know the process. Start small, build something concrete by focusing on users' needs, and then scale it.

Interview 9

The interviewee started around six years ago as a quantitative analyst and was asked to support a kind of solo machine learning project.

When the respondent joined the project, the analysis was already done and the question was clear. The first step was collecting relevant data and doing data preparation; then they started trying several models to gain a clear understanding of the data and the models behind them. Then they applied the models to the data, analyzed the results, and tried to improve the model. This first part was iterative, since they performed this cycle several times with different ideas. When the model was good enough, it was delivered to the end user. This implementation was based on common sense rather than following a standard procedure or approach.

Important in the implementation is to discuss with as many people as possible, such as other data scientists and users, for understanding the data, getting ideas, and sharing experience.

Interview 10

The respondents are a data analyst and a data engineer on a project that was about speeding up the reporting phase and also trying to identify data quality issues without doing it manually. The project started from a business case with a business problem that needed to be addressed, before sitting together with the advanced analytics team to gather knowledge and think about how they could build something useful. The team was expanded with an additional person focusing on the model development because they experienced a gap there. Furthermore, meetings were held weekly to see what was done, what needs to be done, and what the challenges going forward are. During these meetings implementation happened by brainstorming and looking into the data and the results. Starting from the business, by looking at the end users, they started with something basic that was improved by building a parallel process: because the process was done manually at the same time, the differences between the manual output and the output of the model were used for adjusting and improving the model. This recursive process of model improvement has also continued now that the model is in production. During the implementation process they also focused on stakeholders, because when the project is live it needs to be used.
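The parallel manual/model process this case describes, using disagreements between the two outputs to drive the next round of model improvement, might be sketched as follows (the function, record ids, and values are hypothetical illustrations, not from the case):

```python
def disagreements(manual_output, model_output):
    """Return the record ids where the model deviates from the manual result;
    these cases feed the next round of model adjustment."""
    return sorted(
        record_id
        for record_id, manual_value in manual_output.items()
        if model_output.get(record_id) != manual_value
    )

# Illustrative outputs keyed by record id (e.g. a data-quality flag per record).
manual = {"r1": "ok", "r2": "issue", "r3": "ok"}
model = {"r1": "ok", "r2": "ok", "r3": "ok"}
to_review = disagreements(manual, model)
```

Running the manual process alongside the model in this way gives a continuous stream of labelled disagreements, which matches the recursive improvement loop described above.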

To conclude, for the analyzing phase the team composition and the involvement of the right people are important: “In your team the content guys should also be data driven so they can get insides from the data. But on the other side IT guys should also have business understanding how the processes are going in the company. Otherwise there is miscommunication between these two parts.” – Respondent 11

This communication should keep going, and a business case needs to be established before analyzing and designing are started.

(36)

36

Interview 11

Formally the respondent is the product owner for the client portal, but is also involved in experimental projects with newer technologies such as natural language processing, OCR, and image recognition. The project discussed was a collaboration with an external partner.

The project started through an inspiration session where the external partner showcased their capabilities to the business and where people could see the application for it. “The way where the business can see what technology is available towards them and that allows it to be able to provide a context towards this. When that context is being identified then you can go into the analyze phase just to really take this a little bit further.” – Respondent 12

The project then proceeded by establishing an engagement with the partner in a mixed team and analyzing the problem to be solved. They went through an agile way of working, with sprints and refinement sessions, to create an MVP but were dependent on the data. “So the results we saw from this was that the parts where we had good quality structured data provided a higher confidence level…” – Respondent 12

The project focused on four different workstreams: general project governance, data, non-financial risk, and the development workstream. Effectively all these workstreams worked in parallel with linking points, and within the MVP part they had a demonstrable product in the acceptance environment. They wanted to demonstrate the business value upfront, so the training elements came as a secondary delivery, which would be seen as an MVP+. However, when ready to move to production there was pushback from IT. The general rationale was that they had not been involved early enough in the process to secure their buy-in. This ended up being the blocker for the project to continue.

What to conclude: first focus on something that works, demonstrate the business value, and get the buy-in and confidence of the stakeholders to continue with the MVP+. For this, governance is important. Make sure IT gets a chair at the table from the beginning. Make sure the support is there by involving all relevant parties to increase the probability of success. Besides the dependence on the right level, quality, and amount of data, success can also come from experience: experience with the current way of working through the implementation process.


Interview 12

As product owner, the respondent had a bridging function between the data scientists and the business, translating the user needs into development priorities for the team.

The project was initiated by combining a theoretical analysis with a concrete business need; this became the test case for making the theoretical concept concrete. The project started with a session with all users to brainstorm on the deliverables needed to bring an MVP to production. To build an MVP that makes an impact, the project went continuously through the cycles of business understanding, data acquisition, preparation, build, test, and evaluate. A key point was bringing together the two worlds of the person who builds the model and the end user. This needs to be facilitated during the implementation, but also afterwards. An important process step in production was the training of the users to embed the tool in the current way of working, explaining how to interpret the results to increase the impact of the model.

In summary, it is important to keep the end goal in view and to have a bridge between the end user and the person building the solution. Work towards a point on the horizon, and when closing in, set a new one to focus on. Overall, you want to lower the risk of developing a nice tool that does not make an impact.

4.2 Cross-case analysis

Analyze

Across cases there are differences in how projects start. Cases 3 and 11 started by inspiring the business, showing use cases or explaining what the technology could do. Once this context is identified, the analyze phase starts. Other projects (cases 2, 6, and 12) started by going to the business, proposing to improve their daily work, and defining possible use cases together. Although this way of generating projects works, it helps when the business already has a clear business problem or opportunity (cases 4, 5, and 7-10).

Analyze as an explicit phase was not mentioned in the cases, but the understanding of the business problem or need was reiterated throughout multiple cases. For example, case 3


mentioned a recommended preparatory phase to establish a clear business understanding before the start of development. Business understanding is important to ensure that the expectations of all parties are aligned and to prevent the scope from changing.

In summary, the first step of the implementation process is defining the problem or opportunity that motivates the search for solutions. This business understanding is in line with the theoretical model and has some overlap with the following phase: ideate.

Ideate

As discussed in the literature review, this phase is about generating, developing, and testing ideas based on the problem or opportunity defined (Brown, 2008). All but two cases (5 and 8) mentioned these processes in their approach, starting with experimentation. The cases experimented with the available data to generate feedback, or built prototypes to test feasibility early in the process. By conducting these processes repeatedly, new ideas are generated and more is learned about the user.

In general, machine learning projects are ad hoc at the beginning because the data scientists need to try things first, learn from them, and then proceed accordingly. Furthermore, multiple participants pointed out that the phasing of the theoretical model deviates from the actual implementation: ideate should come before the analyze phase, because the process starts with ideas, then the search for a business case, and subsequently the analysis for design.

To conclude, the results are consistent with the theoretical framework in which experimenting starts in the ideate phase; however, the sequencing should be adjusted so that the implementation process starts with ideate instead of the analyze phase.

Implement Design

Based on the literature, this phase description would indicate that it starts with a design as input. However, this is not the case with machine learning. All cases started by defining objectives, or by prioritizing the solution and iteratively building towards that end goal. The data collection, modeling, testing, evaluating, and refining steps are often revisited multiple times before validating the


solution to go to production. This whole process involves users, not only in the ideate phase but throughout the whole implementation process and afterwards, to make sense of the data output, to know which data to add, and to decide which features to build in next iterations. This involvement also leads to a constant inflow of feedback, improved organizational embedding, and better business understanding, because the solution needs to stay close to the end user.

In some cases the validation stage gate was a final test of whether the solution was good enough to launch to the users before improving it further. This is crucial for user adoption: “It is just like when you install an app on your phone, and it does not work or not very well, then you uninstall it within seconds. This is not very different with people within this organization.” – Case 6

Other cases adopted more professional activities in line with traditional software development. These cases tested the solution against acceptance criteria, added documentation, performed quality assurance on the output of the model, and communicated to the business that it was delivered.

Implement Solution

This phase concerns deployment: after validation, the solution is promoted to the production environment. This process is standardized in most of the cases, although it is somewhat more delicate with machine learning because not everything can be foreseen; with machine learning, what moves to production is more a configuration than traditional code.

Furthermore, the interview data indicate that in multiple projects (6, 8, and 9) the product is integrated with an application used by the users. The machine learning model itself is not the end product.

For the embedding in the organization it is important to start already during the implementation, by making the users part of the process and creating a form of ownership. Furthermore, training, explaining what the tool does, and integrating it into the way of working all contribute to embedding. When people are not trained or do not know what the tool does, there is a risk of having built a nice tool that ultimately has no impact on daily operations. It is important to explain how the tool works and how to interpret or use its output, because to many, machine learning is still a black box.
