
Eindhoven University of Technology

MASTER

Discovery and analysis of field service engineer process using process mining

Rusu, S.M.

Award date: 2011


Disclaimer

This document contains a student thesis (bachelor's or master's), as authored by a student at Eindhoven University of Technology. Student theses are made available in the TU/e repository upon obtaining the required degree. The grade received is not published on the document as presented in the repository. The required complexity or quality of research of student theses may vary by program, and the required minimum study period may vary in duration.

General rights

Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.

• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.

• You may not further distribute the material or use it for any profit-making activity or commercial gain.


in partial fulfilment of the requirements for the degree of

Master of Science in Business Information Systems

Stefania Rusu

Supervisor: prof. dr. ir. Wil M.P. van der Aalst
Tutor: R.P. Jagadeesh Chandra Bose (JC)

Examination Committee:

prof. dr. ir. Wil M.P. van der Aalst
dr. A.J.M.M. (Ton) Weijters

dr. Guillaume Stollman
R.P. Jagadeesh Chandra Bose (JC)

Eindhoven, December 2010


Acknowledgements

… my work.

First of all, I would like to thank my supervisor Prof. Wil van der Aalst for giving me the opportunity to conduct my master's project at Philips Healthcare. Moreover, thank you for the constant guidance and constructive feedback; it was really motivating. Second, I would like to thank my tutor R.P. Jagadeesh Chandra Bose for his night and day guidance throughout the project. JC, you were a great tutor, even a friend, always trying to give me the most critical feedback and endless support; it was your conduct and professionalism that motivated me. Furthermore, I would like to thank Guillaume Stollman and Eugene Ivanov for their assistance and support in my master's project, and Prof. Ton Weijters for taking part in my master's project evaluation.

Many thanks to all my friends here in the Netherlands: my special friend Cristian, Vanessa, Yaqing, Amalia, Andreea, Cem; and my team back home: Roxu, Alina, Ralu and Crisu. Even more thanks to my home away from home, Chu, for all the support and care that you showed, in good and bad times.

Last, but not least, I would like to thank my family for having so much faith in me and for their good thoughts; if it wasn't for you all, I would not be where I am today. Bi, this is for you, so you know where good thoughts can get you.

Stefania Rusu December 2010


Abstract

Process mining is an emerging field providing means to gain insights into business processes supported by information systems by analyzing event logs. For this thesis, process mining techniques have been applied to get insights into the workflow of the Field Service Engineer (FSE) in Philips Healthcare. In order to get insights into his/her work there is a need to create hierarchical models that abstract from the low levels at which events are stored to higher levels of activities, based on the role of the analyst. To get additional insights, performance analysis helps to identify bottlenecks and thereby the scope for improvement regarding the work of the FSE. Unfortunately, existing approaches to project performance onto process models are limited to flat process models and do not support the annotation of hierarchical process models.

In this master's thesis we propose an approach to annotate process maps with performance information extracted from event logs. The performance information is obtained by replaying event logs onto process maps. Moreover, to support the approach, we have implemented a new plugin in the ProM framework. The plugin provides both the annotation of process maps with performance information, which has a high visual impact for the user, and the display of all computed Key Performance Indicators (KPIs) in a tabular pane.


Contents

1.2 Product Line Allura Xper
    1.2.1 Philips Remote Services (PRS)
    1.2.2 Field Service Corrective Maintenance
1.3 Problem Definition
    1.3.1 Problem Identification
    1.3.2 Problem Generalization
1.4 Research objective
1.5 Research questions
1.6 Research Methodology
1.7 Thesis Overview
1.8 Outline

2 Analysis of Field Service Engineer Process Using Existing Process Mining Techniques
    2.1 Process Mining
    2.2 ProM Framework
    2.3 Process mining in the context of Philips
    2.4 Data collection
        2.4.1 XML Log Data
        2.4.2 Case definition
    2.5 Data pre-processing
        2.5.1 Log conversion
    2.6 Data analysis
        2.6.1 Case study: Converter Velara 8E
    2.7 Problem Identification

3 Mining Hierarchical Process Models and Measuring Performance
    3.1 Event Logs Formalization
    3.2 Process Maps
    3.3 Activity-Process Node Association Graph
    3.4 Replay algorithm
    3.5 Key Performance Indicators (KPIs)
        3.5.1 Process KPIs
        3.5.2 Node KPIs
        3.5.3 Edge KPIs

4 Implementation
    4.1 Plugins Overview
    4.2 Pattern Abstractions Plugin
    4.3 Fuzzy Map Performance Analysis Plugin
        4.3.1 Performance projection - Graph Panel
        4.3.2 Performance Information - KPIs Tabbed Pane

5 Case Study
    5.1 Case Study: Tube MRC

6 Conclusions
    6.1 Summary
    6.2 Future work

A KPI formalization
    A.1 Process KPIs
    A.2 Node KPIs
    A.3 Edge KPIs


Chapter 1

Introduction

Process mining is an emerging field providing means to gain insights into business processes supported by information systems by analyzing event logs. For this thesis, process mining techniques have been applied to event logs generated by the cardiovascular X-ray systems of Philips Healthcare to get insights into the workflow of the Field Service Engineer (FSE).

In the context of process mining, processes can be analyzed from different perspectives, such as the control-flow, organization and data elements of a process, by analyzing event logs. Process discovery techniques are used to extract process models using the information from event logs. There also exist extensions to such process models to get information about resources, decisions taken in the process, etc. Such information is valuable for a business analyst to better understand a business process.

Process models play an important role in the analysis of business processes. Additional information about the way activities are performed can be provided by performance analysis. Process models annotated with performance information can show where bottleneck activities are located in the process. There exist several techniques for performance analysis with process models in the ProM framework [1], such as Performance Analysis with Petri Net [13], which annotates performance information on Petri nets, and Performance Analysis of Business Processes [3], which annotates performance information on Simple Precedence Diagrams.

Traditional process discovery techniques have problems dealing with less structured processes and generate spaghetti-like process models which are hard to comprehend. The primary cause of this is that events are logged at low levels of granularity, which in the majority of cases is not the desired granularity from the point of view of an analyst. Recently, means to address the problem of dealing with less structured process models have been proposed by enabling the discovery of hierarchical process models [12], [11], [16], [6].

In the context of Philips Healthcare, the complexity of event logs is very high due to the complexity of the systems and the flexibility of system use. The input for analysis is the Field Service Engineer process: the activities that he/she executes in the field during the diagnosis of a faulty system. Moreover, Philips is interested in knowing on which activities the FSE spends more time, whether mandatory activities are being performed by FSEs, etc. In order to get insights into his/her work there is a need to create hierarchical models that abstract from the low levels at which events are stored to higher levels of activities, based on the role of the analyst. To get additional insights, performance analysis helps to identify bottlenecks and thereby the scope for improvement regarding the work of the FSE.

After constructing a hierarchical process model from event logs and extracting additional performance information, the challenge is to combine them in a single output.

This master's project deals with the computation and estimation of various relevant KPIs and the annotation of hierarchical process models with performance information.

This chapter gives an overview of the environment in which this master's thesis has been conducted and defines the research problem. Section 1.1 describes the business and organizational background of the environment where the master's project has been carried out. Section 1.2 introduces the Allura Xper systems, whose event logs are used for the analysis in this project. The problems tackled in this thesis are explained in Section 1.3; in particular, Section 1.3.2 generalizes the problem such that the analysis and techniques developed/proposed in this project can be applied to similar problems in other domains. The objective, questions and methodology of this research are presented in Section 1.4, Section 1.5 and Section 1.6, respectively. Section 1.7 gives an overview of the implemented approach. The organization of this thesis is presented in Section 1.8.

1.1 Business Context

The following subsections introduce the business context in which this master's project is carried out.

1.1.1 Royal Philips Electronics

Royal Philips Electronics of the Netherlands consists of three business sectors: Consumer Lifestyle, Healthcare and Lighting, as illustrated in Figure 1.1. The foundations of Philips were laid in Eindhoven in 1891 as a carbon-lamps manufacturer.

Now it is one of the largest global diversified industrial companies, with sales in 2008 of €26.4 billion. Philips has around 116,000 employees worldwide.

The mission of Philips is to improve the quality of life by making it more enjoyable and productive through understanding people's real needs. It strives to bring 'Sense and Simplicity' to consumers by designing products that specifically meet their needs.


Figure 1.1: Business of Royal Philips Electronics

Figure 1.2: Organizational structure of PMS

1.1.2 Philips Healthcare

The healthcare business line of Philips started its medical activities in 1918, when the first X-Ray tube was introduced. In 1933 Philips was manufacturing medical equipment in Europe and the United States. Philips Healthcare has around 32,500 employees spread over 60 countries and an annual sales income of €7.6 billion. It is a market player in the business areas Home Healthcare and Professional Healthcare, and a global leader in diagnostic imaging systems, healthcare information technology solutions, patient monitoring, and cardiac devices.

Figure 1.2 highlights the division of the Philips Healthcare business in the context of this project. Imaging Systems is a business module which offers solutions to doctors in the diagnosis of problems, by taking images of the patient's body at various levels of detail.

1.1.3 Business Unit Interventional X-Ray

This master's project is carried out in the Business Unit (BU) Interventional X-Ray (iXR), and more specifically in the Research & Development (R & D) division.


Figure 1.3: Organizational structure of iXR BU

The organizational structure of the iXR BU is presented in Figure 1.3. The mission of the iXR BU is to develop medical equipment and perform research in the areas of interventional neurology, interventional cardiology, interventional radiology, pediatric cardiology, and electrophysiology. The R & D division is responsible for designing, architecting, and developing systems, as well as integrating and validating systems under development. This division consists of two major sub-divisions: Innovation Programs and Resource and Process.

The Innovation Programs department is responsible for the development of practical concepts and requirements from different perspectives: clinical science, marketing and services. This department consists of four sub-departments, as illustrated in Figure 1.3: X-Ray & Geometry, Imaging Applications, Ease of Use & Multi-modality, and Services. This thesis is carried out in the Innovation Service team, a sub-department responsible for the reliability, installability and enhancement of services for the iXR systems, such as upgrade services and repair services.

1.2 Product Line Allura Xper

The iXR BU makes products for two main clinical areas: Cardio and Vascular. It consists of X-Ray systems designed to diagnose and possibly assist in the treatment of all kinds of diseases, like heart or lung diseases, by generating images of the internal body.

There are two types of Allura Xper systems: mono-plane systems (systems that have one arm and are capable of acquiring images from one side), and bi-plane systems (systems that have two arms and can acquire images from two sides simultaneously).

In Figure 1.4, the Allura Xper FD10 system is presented. The prefix FD in FD10 stands for Flat Detector. The suffix 10 (it can also be 20) refers to the size, in inches, of the installed detector (a larger detector can scan a wider area at once).

Figure 1.4: Allura Xper FD10 system

The Allura Xper systems can operate in two different modes: Application and Field Service. Application mode refers to the use by the doctors and assistants, i.e. for the treatment of patients. Field Service mode is used to install, update, configure, adjust, and do other administrative activities to maintain the systems [8]. These operation modes and their role in the analysis will be described in detail in Chapter 2.

During the daily operations, a system continuously logs events that are triggered by users, as well as events triggered from/within the system, like commands executed internally, errors, warnings, etc. The log information is embedded in the system, and most of the information is retained for a certain period. The log files can be collected remotely if an agreement is made between the customer hospital and Philips [17].

The infrastructure of data collection currently in place is presented in Section 1.2.1. This data represents the input for the analysis conducted in this project.

1.2.1 Philips Remote Services (PRS)

Philips has Allura Xper systems connected to the PRS network throughout the world, based on an agreement between the customer hospital and Philips. The Remote Service Network (RSN) is responsible for the storage of log data and the data transmission between hospitals and Philips. The log information is normalized into a proprietary CDF (Common Data Format) and stored in the Remote Services Data Warehouse (RSDW). The log files in RSDW are retrieved by the Questra and Remote Analysis, Diagnostics and Reporting (RADAR) systems. RADAR systems use the retrieved log files for multiple applications (e.g., monitoring the system performance, patient throughput, etc.) and convert the log files from CDF to XML format. Figure 1.6 shows the top-level infrastructure of the remote collection of log files. RADAR also contains a knowledge database built according to the experience of development specialists, Field Service Engineers (FSEs) and Zone Technical Specialists (ZTSs).

Figure 1.5: Data collection

If any signs of a system defect are detected, further analysis can be performed using the knowledge base. This database is populated with relevant log messages, possible causes, and solutions for a limited number of customer calls; it is based on successfully solved customer calls in the past and suggestions from the developers.

Figure 1.6: Systems connected to PRS


Figure 1.7: Data sources

1.2.2 Field Service Corrective Maintenance

Allura Xper systems are complex systems that can malfunction during their lifetime, and customers complain when faults occur during the operation of the systems. In such cases, the FSE is required to fix the problem within the system in the hospital. Depending on the type of problem, the field service process can consist of the following corrective actions:

• Configuration: hardware/software reconfiguration

• Calibration: correction of abnormal settings according to standards

• Field Replaceable Unit (FRU) Replacement: replace the detected faulty FRU with a new one

In general, the maintenance of the systems by FSEs consists of Installation Activities, Planned Maintenance (PM), Corrective Maintenance (CM), or Field Change Orders (FCO). Installation activities consist of the hardware/software configuration of the system in the hospital, when the customer buys the equipment. PM is an arranged maintenance carried out at regular time intervals and consists of activities such as the calibration of the X-Ray tube. CM handles the complaints of the customers when a system malfunctions and is usually associated with the replacement of an FRU.

There exists a job sheet database, called the Customer Support Data Warehouse (CSDWH), that records repair-relevant information like all customer complaints, customer call open/close dates, replaced FRUs, call type (PM, CM, etc.), maintenance hours, etc.

Due to the complexity of the system architecture and the flexibility of system use, the diagnostic procedure is not always easy, and efforts are needed to help FSEs during corrective maintenance cases:


Figure 1.8: Corrective Maintenance case

• Sometimes FSEs cannot fix the problem correctly the first time; the system continues to behave abnormally in an intermittent manner and the FSE has to pay additional visits to the customer. For example, the first-time-correct-fix rate is quite low for some components.

• Sometimes FSEs replace the wrong FRUs. The reason is either that they are not able to pinpoint the root cause of the problem, or that there was pressure from the customer to fix the problem quickly: the time spent to solve the problem might be too long for the customer to bear. In these situations, FSEs do not have enough time to diagnose the problem thoroughly and they replace all potentially faulty parts.

• Some FRUs have a high variation in the CM hours spent during diagnosis, which implies that some of the cases are not easy to diagnose. Figure 1.8 depicts the CM hours spent on a system for the replacement of the part Converter Velara 8E. The FSE spent 4 hours in the fault isolation step to identify the cause of the failure, then 1.5 hours on repairing the system (i.e., the actual replacement of the part), 0.5 hours on calibrating and configuring the system, and 0.5 hours on verifying the system functionality, i.e., 6.5 hours in total. Thus, the duration of a CM case is defined as the time spent in fault isolation, repair, configuration & calibration, and verification. Typically, the last three steps (i.e., repair, configuration & calibration, and verification) have a low variability and are mainly predefined steps related to the procedure of part replacement, while the fault isolation time, as well as the activities performed during this step, vary a lot from case to case.

The actions performed by an FSE during diagnosis (maintenance) are expected to be manifested in the event logs. The malfunction of a certain system is indicated in the job sheet records (the customer complaint and the FRU replacements). Having both data sources available, viz. event logs and customer complaints, there is an opportunity to link them.

With the availability of event logs, and by correlating them with the job sheet database, we can use process mining techniques to get insights into the work of the FSE. In Figure 1.7, examples are given of a log file and a job sheet.

The motivation for this project arises out of the necessity to understand the work of the FSE and to identify the bottlenecks in the process (discover which activities take more time), check whether all mandatory activities are performed, etc., in order to help the organization better understand the workflow of the FSE and identify means to reduce the repair time. The ultimate objective is to help improve customer service quality, to reduce costs and, more importantly, to improve customer satisfaction.

1.3 Problem Definition

1.3.1 Problem Identification

The goal of this thesis is to better understand the workflow of the Field Service Engineer in Philips Healthcare. The FSE performs activities like installation, update, configuration and other administrative activities for the maintenance of the Philips Healthcare X-Ray systems. Currently, there are certain issues and questions about the activities performed by the FSE: the time spent on the maintenance of systems varies a lot and is sometimes very high, and wrong replacements of FRUs are not uncommon.

It is unclear to domain experts what the causes of these problems are, so there is a need to understand the way engineers work. The main requirement is to discover the workflow from the log files and better understand what the FSE does in the field during diagnosis.

Traditionally, process mining has focused on discovery, i.e. deriving information about the original process model, the organizational context, and the execution properties from enactment logs [19]. Traditional discovery algorithms have problems dealing with unstructured processes and result in process models which are hard to comprehend. Multiple activities in the event log are composed of a number of sub-activities. Considering these activities in isolation contributes to the 'spaghettiness' of process models. There is a need for abstractions, by discovering common patterns of activities, that can help in improving the discovery of process models and assist in defining the conceptual relationship between activities. Constructing hierarchical process models can provide different views on a process model, hiding irrelevant content from users while uncovering comprehensible models with a seamless zoom-in/out facility.

In real-life event logs, the granularity at which events are logged is typically different from the desired granularity. For example, in the Philips case there are over 500 commands FSEs can execute, and these commands are stored in event logs. Constructing a process model from these event logs (at the level of commands) can result in complex process models which are hard to comprehend, due to the flexibility of system use. Moreover, the perspective of analysis differs according to the role of the analyst; for example, a manager may prefer a high-level view on processes while a specialist may be interested in detailed aspects [16]. The perspective from which the management in Philips is interested in understanding the work of the FSE is at the level of the procedures that engineers are executing, and not at the level of commands.

Apart from discovering process models at different hierarchical levels, process mining techniques should facilitate the extraction and projection of performance information at every level of such a hierarchical process. In order to understand the bottlenecks in the process and identify areas of improvement, there is a need for performance analysis of the discovered processes. Defining the appropriate performance metrics, Key Performance Indicators (KPIs), computing them from event logs and projecting them onto hierarchical process models is necessary for such an analysis. Currently there are no techniques that facilitate this. In the context of Philips, the requirements are to check whether certain mandatory tasks are being performed and to identify the bottlenecks/hotspots in the workflow of the FSE. In order to identify the activities on which the engineer spends more time, there is a need for performance analysis.

The identified main problems can be summarized as follows:

• Low level of granularity of event logs

• Activities considered in isolation and not as sub-processes of an abstract activity

• Process mining techniques should allow context-dependent views of process models

• Performance information is not available on process models at different levels of abstraction

1.3.2 Problem Generalization

Analyzing the workflow of the FSE by extracting information from event logs can be translated into a generic problem that has wide applicability in various organizations and domains. More precisely, the generation of hierarchical process models and the annotation of models with performance metrics is an analysis technique that can be applied to event logs which may originate from all kinds of systems, like enterprise information systems or hospital information systems.

Real-life event logs tend to be less structured than expected and can create problems for process mining techniques, i.e. discovering a process model out of event logs at a low level of granularity will result in spaghetti-like models [16]. No matter the origin of such event logs, the creation of hierarchical models will potentially overcome the problem of complicated models which are difficult to understand.

Current process mining techniques that project performance information onto process models are restricted to flat process models. In [13] and [3], two approaches to obtain performance information through the use of process mining are proposed. The first approach [13] projects performance information onto Petri nets; the performance information is obtained by replaying an event log onto the Petri net, and such process models can be generated using techniques like the α-algorithm [25]. The second approach [3] projects performance information onto simple precedence diagrams by replaying an event log onto the process model. These techniques do not support the annotation of hierarchical process models. Thus, in this thesis, we propose an approach to project performance metrics onto hierarchical process models.

1.4 Research objective

The objective of this project is to discover the workflow of the FSE, and to compute and project performance information onto hierarchical process models. To create the process models, already existing process mining techniques will be explored and adapted appropriately to enable the annotation with performance metrics. For this, a new plugin needs to be developed.

The starting point for this thesis is the technique for discovering process maps [16], with an additional visualization add-on consisting of projecting performance information obtained by replaying event logs onto the process maps at different levels of abstraction. For this performance analysis technique, we define/identify certain Key Performance Indicators and propose an algorithm for replaying an event log onto hierarchical process models and thereby computing the KPIs. As depicted in Figure 1.9, in the process discovery step the process maps are created based on abstractions defined over common execution patterns in the logs. The discovered process maps are used together with an event log to calculate the performance information of the process; the obtained information is then projected onto the process map such that performance insights can be obtained intuitively.
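To make the replay idea concrete, the following minimal sketch in Python (illustrative only, not the actual plugin code; the trace format and the node_of mapping are assumptions of this example) replays traces onto one level of a process map: low-level events are lifted to their higher-level nodes, node visit frequencies are counted, and mean transition times are computed as a simple edge KPI.

    # Illustrative sketch of replaying an event log onto one level of a
    # process map; names and data layout are assumptions for this example.
    from collections import defaultdict

    def replay(traces, node_of):
        """traces: list of [(activity, timestamp), ...] sorted by time.
        node_of: maps a low-level activity to its higher-level node."""
        node_freq = defaultdict(int)    # visit count per node
        edge_times = defaultdict(list)  # observed delays per edge
        for trace in traces:
            prev_node, prev_ts = None, None
            for activity, ts in trace:
                node = node_of.get(activity, activity)  # unmapped events stay as-is
                node_freq[node] += 1
                if prev_node is not None and node != prev_node:
                    edge_times[(prev_node, node)].append((ts - prev_ts).total_seconds())
                prev_node, prev_ts = node, ts
        # a simple edge KPI: mean transition time in seconds
        edge_kpi = {e: sum(d) / len(d) for e, d in edge_times.items()}
        return dict(node_freq), edge_kpi

Events that stay within the same higher-level node generate no edge here; the actual replay algorithm of Chapter 3 also has to handle such intra-node moves and the other KPIs of Section 3.5.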


Figure 1.9: Overview of the analysis approach

1.5 Research questions

• What activities does the FSE perform in the field during diagnosis? Does he/she perform the mandatory activities in a corrective maintenance case?

The workflow consists of the activities that are performed by the FSE during diagnosis. The activities that an FSE performs to diagnose the cause of a system failure can vary for each case, but once the cause is found, the engineer has to follow certain procedures to change one or more parts in the system. In this case, there are mandatory activities that need to be performed, and it is important to track whether these activities are indeed being performed.

• On which activities does he/she spend more time?

The analysis focuses on retrieving performance information about the FSE's actions in order to better understand the work of FSEs: whether there are bottlenecks in the process, which activities take more time and need to be optimized, etc.

• Is there an influence of the version of the system or the geographical region on the workflow of the FSE? In other words, do we see a difference between the workflows of FSEs mined on different versions or in different geographical regions?

The Allura Xper systems are constantly improved and new versions of the software are rolled out to these systems. It is expected that with newer versions bugs are fixed and the systems are improved. One of the goals of the research is to understand whether there is an influence of these changes on the work of the FSE.

Another aspect is that, being spread globally, the systems are maintained by different engineers coming from different cultures, who took part in different training programs, etc. Considering event logs from systems in different geographical regions will help in understanding whether there is an influence of geography on the work of the FSE. Mining workflows from different perspectives, i.e., system version and geographical region, can help understand the main differences in the way of work of the FSE.

The answers to the research questions are meant to help understand the work of the FSE in order to offer better support for their work, to reduce the time to repair systems and to avoid false replacements that end in customer dissatisfaction.

1.6 Research Methodology

To give an answer to the research questions, the following steps are taken:

1. Conduct a case study on event logs taken from Philips X-Ray systems, using existing process mining techniques


2. Identify limitations of existing techniques and challenges of mining/analyzing event logs

3. Develop an approach for discovering and conducting analysis on the workflow of the FSE

(a) Mine interactive hierarchical workflows of the FSE with potential incorporation of domain knowledge

(b) Analyze the workflow of the FSE to answer the research questions (i.e. is there any deviation in the workflow of the FSE considering different perspectives?)

4. Develop a technique for estimating and projecting performance information onto hierarchical process models

(a) Develop an algorithm to replay the event logs onto hierarchical process models

(b) Define and compute performance metrics

(c) Visualize and project performance metrics on the process models

5. Implement the approach as a new plugin in the ProM framework [2]

6. Evaluate the implemented methods using real life event logs

1.7 Thesis Overview

To support this methodology, implementations were realized using the ProM framework.

The ProM framework is developed by the process mining group and is a generic framework designed to support process mining. It consists of a range of plugins corresponding to different mining and analysis techniques. For this thesis, a new plugin is developed to annotate process maps, expressed as fuzzy maps, with performance metrics. The plugin takes as input:

• A hierarchical process model: hierarchical process models are meant to represent complex processes by abstracting activities to higher levels. The hierarchical process models are represented by process maps, constructed using the Fuzzy Miner [12], [11] based on the technique proposed in [16]. In the context of Philips, these process maps represent the workflow of the FSE.

• A given event log: in the Philips context, an event log representing the activities performed by the FSE.

The output of the plugin is a process map annotated with performance information obtained by replaying the event log onto the process map.

1.8 Outline

The remainder of this thesis is structured as follows:

In Chapter 2, we provide the preliminary case study conducted at Philips Healthcare …


Chapter 2

Analysis of Field Service Engineer Process Using Existing Process Mining Techniques

This chapter provides the details of the preliminary analysis of the event logs and the observations at Philips Healthcare. Section 2.1 provides an introduction to process mining as an approach to gain insights into processes by analyzing event logs. Section 2.4 provides a description of the data collection phase. Section 2.5 presents the pre-processing of the logs, with a focus on converting and preparing the data for the MXML (Mining XML) format needed by the ProM framework and the corresponding ProMImport plugin. The results of the analysis are presented in Section 2.6.

The identified problems are presented in Section 2.7.

2.1 Process Mining

Process mining combines process modelling and analysis techniques with data mining and machine learning. In the context of Workflow Management (WFM) and Business Process Management (BPM), it can help in the diagnosis of business processes; in the system test area it can be applied to extract test scenarios. Process mining techniques aim at extracting different kinds of knowledge from event logs. Event logs from different environments differ in nature, but they have one thing in common: they show the occurrence of events at specific moments in time, where each event refers to a specific process and an instance, i.e. a case [9].

The goal of process mining is to exploit the information recorded in the event logs by using it for a series of analysis techniques. Typically these analysis techniques assume that it is possible to sequentially record events.

In process mining there are three perspectives from which process analysis can be performed: the process perspective, the case perspective and the organizational perspective [9].

• Process Perspective: The process perspective focuses on the control-flow, i.e., the ordering of activities. The goal of mining this perspective is to find a good …

• Organizational Perspective: … The goal is to either structure the organization by classifying people in terms of roles and organizational units, or to show the relation between individual performers.

Figure 2.1: Overview of Process Mining

Figure 2.1 gives an overview of the process mining domain. Depending on the desired outcome of the analysis, there are three basic types of process mining [25]:

• Process discovery: derives information about the original process model, the organizational context, and execution properties from enactment logs. An example of a technique addressing the control-flow perspective is the α-algorithm [25], which constructs a Petri net model describing the behaviour observed in the event log. In the case of process discovery there is no a priori model; the model is constructed based on the event log. The process discovery technique used in this chapter is the Fuzzy Miner [12], [11].

• Conformance: compares an a priori model with the observed behaviour as recorded in the log, i.e. reality is compared with process models or business rules and deviations are detected. Conformance checking based on Linear Temporal Logic (LTL) [22] can be used to detect deviations, and to locate, explain, and measure the severity of these deviations/violations [19].


• Extensions: involve extending an a priori model with additional information that is extracted from the log, such as information about timing, resources, decisions, etc. An example is the extension of a process model with performance-related metrics, i.e. projecting information about bottlenecks in a process onto an a priori process model [19]. There also exist approaches that help visualize performance information [10].

The environment that includes these process mining techniques and the one that is used in this master's thesis is the ProM framework [1], [2].

2.2 ProM Framework

ProM is an extensible framework that supports a wide variety of process mining techniques in the form of plugins. The latest stable version of the ProM framework is ProM 6 [2]. Currently it has more than 170 plugins for discovery and analysis, such as plugins supporting control-flow mining techniques, analysing the organizational perspective, mining less-structured flexible processes, verification of process models, verification of Linear Temporal Logic formulas on log files, and performance analysis techniques. Besides these techniques, the current version of ProM also supports related functionality such as model conversion and log filters for cleaning event logs.

The ProM framework enables functionalities to be added or removed easily in the form of plugins. The ProM architecture can be described as depicted in Figure 2.2. The Object Pool component consists of a repository of all objects that are either output or input of the plugins, e.g., various process models, event logs, etc. Connections between these objects are also stored in the repository. The list of objects can be visualized using the User Interface component, limited to the Visible Objects.

Hidden Objects, such as semantics, are not shown by the User Interface component.

Elements of the Plugins component are all ProM plugins, e.g., mining, analysis, etc.

Each of these plugins has its own interface in the form of input and output parameters. Process mining techniques available in the ProM framework, like Dotted Chart Analysis [20], Conformance Checking [22] and Pattern Abstractions [6], are going to be used for the analysis in this chapter.

2.3 Process mining in the context of Philips

In the context of Philips, all three basic types of process mining presented before are used for the analysis conducted on the event logs. The problems addressed by the analysis are based on the following research questions presented in Chapter 1: 'What are the activities that the FSE performs in the field during diagnosis?', 'Does the FSE perform the mandatory activities in a corrective maintenance case?' and 'What is the workflow of the FSE?'. While trying to provide answers to these questions, we have to keep in mind the identified problems: the low-level granularity of event logs and activities considered in isolation. The main goal of the analysis is to discover the workflow of the FSE during a case of corrective maintenance on an Allura X-Ray system.

Figure 2.2: The ProM framework architecture

To mine the control-flow perspective describing the FSE's work, the Fuzzy Miner² is chosen. Using this mining technique, a process model is constructed that describes the behaviour of the FSE as observed from the event logs: the series of activities that he/she performs in the field during diagnosis. To give an answer to the research question 'Does the FSE perform the mandatory activities in a corrective maintenance case?', conformance checking based on linear temporal logic is used to detect deviations from the predefined mandatory activities that an FSE has to perform after replacing an FRU. To get additional insights about the process of the FSE and to get a helicopter view of the process, the Dotted Chart Analysis [20] is used. This discovery technique emphasizes the time dimension, helps in discovering performance issues and puts no requirements on the structure of the process. Based on the observations, the main issues discovered in the event logs will be underlined, along with possible solutions considering existing mining techniques and possible improvements.

2.4 Data collection

As mentioned in Chapter 1, in the RADAR system the event logs are stored in an internal database in XML format. These event logs are the main input for the analysis conducted in this chapter.

² A modelling formalism that can provide a highly simplified and abstract view on processes [12], [11].


Figure 2.3: UML model of the XML log file

2.4.1 XML Log Data

Each system that is connected to the PRS system generates log files that record every operation done on the system, on a daily basis. These systems are identified by a unique identifier. A Log.XML file is stored for each day the system is under operation. Figure 2.3 represents the UML class diagram of the XML log files [21].

Each system can create one or more XML log files, which in turn consist of one or more LogEntry XML elements. Each log file has the same XML data structure. This is shown in the UML model as the classes System, XMLFile, LogEntry, and their relationships.

The elements contained in a LogEntry are explained in Table 2.1.

The LogEntry element can be classified into three types:

• User Message: The entry is logged when a message is shown to the end user on the screen, e.g., 'Geometry restarting'.

• Command: The entry is logged when a command is invoked in the system, either by the system itself or by the user. A command always has a Name attribute, which represents the name of the given command. In some cases, a command can also have a Params attribute, which gives more information about the logged command, e.g., the voltage values, or the 'requested'/'completed' type of the command.

• Warning & Errors: These entries are logged when the systems observe devi- ations from the expected behaviour and it reports warnings, usually followed

(28)

Elements of Log Entry

Description

Index It refers to the index of the log entry. For each log le, the index starts from 1.

Unit According to the system architecture, the CV system consists of several subsystems, and these subsystems comprise of units, like Session Manager, Reviewing, X-Ray control etc. The value for this element corresponds to the unit that generated this log entry.

Date On which date the message was recorded by the logger.

Time At what time the message was recorded by the logger.

Severity The severity of a log entry. It can either be information, error, or warning. Fatal is another kind of severity though uncommon.

EventID It uniquely identies each message generated from a particular system unit.

Description Description (information) of the event.

Memo More detailed information about the event is included. Sup- pose a log entry description eld refers to the start of some ser- vice, then the corresponding memo eld could depict the software module related with the started service.

SystemMode The system can have several modes, namely Startup, Shutdown, Shutdown Completed, Warm Restart, Normal Operation, and Field Service.

LogID They are about the logger. LogID refers to the identier of the logger, while line number and thread describes the properties of the logger from the software perspective.

Line Number Thread

Module They are related to source code of the software inside the system. The module records the directory of the executable le.

SourceFile

Table 2.1: Descriptions of the Elements in a Log Entry

(29)

2.4. Data collection 25 by related errors.

Information, User Message and Warnings & Errors LogEntries are not used in the remainder of this chapter; however, Command entries are used for the conversion of event logs because they represent the actions executed on the system by the user. Out of the system modes presented in Table 2.1, only Field Service mode and Normal Operation mode will be considered. Field Service is the mode in which the FSE performs the necessary operations/actions for the maintenance of a system. Normal Operation is the mode in which the user (doctor/laboratory technician and even the FSE) operates the system. For example, when the doctor performs surgery on a patient, the system logs the commands selected by the doctor and the commands/events generated automatically by the system under Normal Operation mode. The FSE can also execute actions under Normal Operation mode, e.g. when testing the functionality of the system or performing verification of its behaviour.
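As a concrete illustration of this filtering, the sketch below (Python; the exact XML layout of the Philips logs is an assumption based on Table 2.1, with Name and Params as attributes of a Command child element) extracts the Command entries logged under the two retained system modes:

    import xml.etree.ElementTree as ET

    KEPT_MODES = {"Field Service", "Normal Operation"}

    def commands(xml_path):
        """Yield (date, time, name, params) for Command entries in a kept mode."""
        for entry in ET.parse(xml_path).getroot().iter("LogEntry"):
            cmd = entry.find("Command")
            if cmd is not None and entry.findtext("SystemMode") in KEPT_MODES:
                yield (entry.findtext("Date"), entry.findtext("Time"),
                       cmd.get("Name"), cmd.get("Params"))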

The event logs are selected from the Allura Xper systems considering the replacement of an FRU, using different perspectives such as the version of the system, release, region and geographical distribution.

A complicating factor in the case of the X-Ray systems' log files is the highly complex and flexible execution of activities. At the same time, the granularity of events stored in the log files is at the lowest level.

2.4.2 Case definition

In this section we describe in detail the corrective maintenance case, which is the basis of the data collection step. In the situation of a system malfunction, the result is a corrective maintenance case which, as described in Section 1.2.2, is usually associated with the replacement of a unit in the system (an FRU) or software updates. The FSE visits the hospital and performs diagnostics on the system to identify the cause of the failure, and most of the time replaces a part or a set of parts. The process of such a replacement represents the focus of our analysis: what the FSE does in the field during the diagnosis of a faulty system. Whether it is the part replacement, performing calibration or verification of the system, all the activities that the FSE performs represent the FSE process. Every corrective maintenance case is recorded in the job sheets database, the CSDWH (described in Section 1.2.2), which contains information about the customer complaint, the replaced FRU, maintenance hours, etc.

A corrective maintenance in which an FSE is required to service a faulty system represents a corrective maintenance call. These calls come from the customer who complains about the malfunction of the system. Each call has a unique entry in the CSDWH database and is stored with information like the callID, open date and close date, the id of the system on which maintenance was performed (configID), etc.

As Figure 2.4 presents, the analysis starts with selecting a part of interest and identifying the set of systems on which this part had been replaced, i.e. for this analysis the chosen part is the Converter Velara 8E, presented in detail in Section 2.6.1.

Figure 2.4: Data collection

The reason for choosing the Converter Velara 8E is that the mean time to repair (MTTR) for the replacement of this part is much longer than expected, and it also has a large variation across different calls.

A part can be replaced on multiple systems and each system can have multiple associated calls; it can be the case that the same part had to be replaced several times on the same system during a period of time. Furthermore, each call has an associated set of log files. The data collection represents the selection of the log files corresponding to each call related to the replaced part.

The time window of a case is defined to be k days before the call open date and k days after the call close date. This is a parameter that can be changed in the data collection phase. This time window is considered because the calls are sometimes stored in the database by the engineers with different dates than those on which they were performed, because the systems store the logged data at later dates, and because in some cases the customer may not report the problem immediately. In the case of the Converter Velara 8E part, since it is a critical component and a customer is expected to call immediately after experiencing a problem, k was chosen to be three days before the call open date and three days after the call close date, and is common for every call.

Figure 2.5: Time perspective associated with calls

Figure 2.6: Case definition

Figure 2.5 shows an example of a case from a system along a time window. A call is unique per system; this is the reason why a single case is represented by the unique association between a call and a system. Each call is represented by a process instance consisting of a list of ordered activities, which are either FSE commands or system commands. For each case, the FSE performs activities in both Field Service Mode (FS Mode) and Normal Operation Mode (NO Mode), and the commands performed by the FSE under these two modes are considered from the event logs. As can be seen in Figure 2.6, in a case the first and last FS modes are identified. It can be the case that in between these there are other FS or NO modes, which are also considered. Moreover, because the FSE sometimes starts diagnosing the system under NO mode before logging into FS mode, and also tests the system in NO mode after logging off, we consider three NO modes before the first FS mode and three NO modes after the last FS mode.
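The selection rule itself is simple; here is a sketch with the k = 3 window used for the Converter Velara 8E case (the date handling is our own simplification, assuming one Log.XML per system per day as described in Section 2.4.1):

    from datetime import date, timedelta

    K = timedelta(days=3)  # window chosen for the Converter Velara 8E case

    def in_case_window(log_date, call_open, call_close):
        """True iff a daily log file falls inside the case's time window."""
        return call_open - K <= log_date <= call_close + K

    # e.g. a log written two days before the call open date is still selected
    assert in_case_window(date(2009, 5, 2), date(2009, 5, 4), date(2009, 5, 6))
    assert not in_case_window(date(2009, 4, 30), date(2009, 5, 4), date(2009, 5, 6))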

2.5 Data pre-processing

The data pre-processing phase focuses on the preparation of the data necessary for the rest of the analysis, in order to give an answer to the research questions presented in Section 1.5.

2.5.1 Log conversion

The original event logs of PH are in XML format, and in order to be used for analysis we have to convert them into the common MXML (Mining XML) format. This is possible by means of a custom-built ProMImport plugin, which facilitates the log transformation. ProMImport is a generic framework that assists the transformation of data from different formats into the MXML format.


Figure 2.7: ProMImport plugin

Figure 2.7 depicts the ProMImport tool. On the left-hand side of the figure, the highlighted FieldService ConfigId Filter import plugin is the one developed in the context of this thesis. In the center part of the figure we can see the properties of the selected plugin.

The following properties are required as inputs for the plugin:

• LogDirectory represents the location of the XML event log files that will be converted.

• LogFileRegEx represents the regular expression matching the names of the log files that need to be converted.

• CallInfo represents a text file containing the call information needed to select the log files. An example of such a file is presented in Figure 2.8; it consists of the identification number of the call (callID), followed by the rsnID, the call open date and call close date, the system type and the version of the system for a chosen part.

• Database Mapping represents a database with domain knowledge necessary for the conversion. The information in the database enables the abstraction of the low-level activities (commands) stored in event logs to high-level activities. For field service specific commands, the defined abstraction levels are Procedure, Group and Main Menu, as presented in Figure 2.9. For normal user specific commands, the defined abstraction levels are CommandGroup and CommandGroup2, as presented in Figure 2.10. The database is constructed upon discussions with experts and documentation available in Philips. Currently this database contains mappings for 461 field service specific commands and 315 user commands.

Figure 2.8: Call Info file

Figure 2.9: FS specific commands mapping database

The Main Menu mapping of commands represents the menu of the application that the FSE uses to operate the system, named the Field Service Framework (FSF). Once the FSE logs into the system, he/she selects the options available in this menu to perform the necessary activities.

Figure 2.9 presents the conversion of data with additional information taken from the Database Mapping: the mapping of the FS command FSCGenerator\FSCXNTubeAdaptationAdjustment to the higher levels of abstraction Procedure 'Tube Adaptation', Group 'Generator' and Main Menu 'Adjustments'. Figure 2.10 presents the mapping of the normal user command DeselectAnalysisMethod to the higher levels of abstraction CommandGroup 'SelectQA' and CommandGroup2 'Quantitative Analysis'. In order to include the mapping information in the resulting MXML file, the ProMImport plugin takes as input this database, built on domain knowledge, that contains the required information.
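In code, the mapping database boils down to lookup tables like the following sketch (the two entries are the examples from Figures 2.9 and 2.10; the dictionary layout is an assumption of this illustration, not the actual database schema):

    # FS command -> (Procedure, Group, Main Menu)
    FS_MAPPING = {
        r"FSCGenerator\FSCXNTubeAdaptationAdjustment":
            ("Tube Adaptation", "Generator", "Adjustments"),
    }
    # user command -> (CommandGroup, CommandGroup2)
    USER_MAPPING = {
        "DeselectAnalysisMethod": ("SelectQA", "Quantitative Analysis"),
    }

    def main_menu_level(fs_command):
        """Lift a raw FS command to the Main Menu level used by management."""
        procedure, group, main_menu = FS_MAPPING[fs_command]
        return main_menu

    assert main_menu_level(r"FSCGenerator\FSCXNTubeAdaptationAdjustment") == "Adjustments"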

In Philips, domain experts are more acquainted with higher levels of abstraction than with the raw commands. Specialists analyze the work of FSEs at the level of the procedures that are executed, and management is interested in the main menu level of the FSF application. For this reason, the database is very important for the mapping of commands to higher levels of abstraction.

Figure 2.10: User specific commands mapping database

The developed ProMImport plugin is used to convert the log files into the MXML format. Figure 2.11 shows the MXML format, in which a log is composed of process instances (i.e. cases), and within each instance there are audit trail entries (ATEs, i.e. events). Process instances and audit trail entries can have various attributes that may refer to data fields, timestamps or some additional information. Figure 2.12 depicts an example of how the original data in XML is converted into the MXML format: the LogEntry of the original XML file is mapped to the ATE of a process instance. In this example, the LogEntry corresponds to a field service specific command Login under the SystemMode Field Service. The Index, SystemMode, Command Name and Command Params are kept as the data attributes of the ATE. Two files are created: the command file FSCommands, which has the WorkflowModelElement at the level of the command Login, and the main menu file FSMainMenu, which has the WorkflowModelElement at the level of the main menu Field Service Window. The EventType of the ATE is created using the attribute Command Params of the LogEntry, and the Timestamp of the ATE is created using the attributes Date and Time of the LogEntry. In order to perform analysis on log files at different hierarchical levels of activities, it was decided to create two different MXML files: one at the command level and another at the main menu level.
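A minimal sketch of this conversion step (the element names and their order follow the standard MXML schema of Figure 2.11; the helper itself and its simplified handling of the EventType are assumptions of this example):

    from xml.sax.saxutils import escape

    def audit_trail_entry(wfm_element, event_type, timestamp, data):
        """Render one MXML AuditTrailEntry as a string."""
        # note: attribute names/values are assumed not to contain quotes
        attrs = "".join(f'<Attribute name="{escape(k)}">{escape(v)}</Attribute>'
                        for k, v in data.items())
        return ("<AuditTrailEntry>"
                f"<Data>{attrs}</Data>"
                f"<WorkflowModelElement>{escape(wfm_element)}</WorkflowModelElement>"
                f"<EventType>{escape(event_type)}</EventType>"
                f"<Timestamp>{escape(timestamp)}</Timestamp>"
                "</AuditTrailEntry>")

    # the Login example of Figure 2.12, at the command level
    print(audit_trail_entry("Login", "complete", "2009-06-15T09:30:00+01:00",
                            {"Index": "1", "SystemMode": "Field Service"}))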


Figure 2.11: MXML Schema


Figure 2.12: Data Conversion Example


Figure 2.13: Distribution of systems per versions

2.6 Data analysis

In this section, we present some results obtained through a detailed analysis of the Philips Healthcare event logs from the Allura Xper systems. More specifically, we elaborate on mining results based on the three basic types of process mining presented in Section 2.1 (process discovery, conformance checking and extensions).

2.6.1 Case study: Converter Velara 8E

The case study of Converter Velara 8E represents all the corrective maintenance calls that involved the replacement of this part for the Allura Xper systems of Philips.

For the year 2009, the total number of such calls was 172. Out of these calls, there are log files available for 58 systems. For the rest of the systems, it can be the case that they were not connected to the Philips RSN network in the specified period, or that there is no mapping available: in the CSDWH database the systems are stored with the identification configID, and in the RADAR database with the identification rsnID. For some systems the mapping between these two IDs is currently not available, and hence it is not possible to identify the proper log files for the corresponding calls.

Out of these 58 systems, the logs of only 53 systems are usable. The reason why 5 of the systems are not usable is that the FSE did not log into the system. In that case it is not possible to make a distinction between the operation of the system by a normal user (doctor, technician) and by the engineer: if the FSE does not log into the system, there are no activities logged under Field Service mode.

Further on, the selected systems are split for the analysis on different versions of the system. Figure 2.13 gives an overview of the distribution of systems per version.

The event logs are transformed using the ProMImport plugin described earlier. We have chosen different process mining techniques to get insights into the process of the replacement of the Converter Velara 8E part: Conformance checking [22], Dotted Chart Analysis [20] and the Fuzzy miner [12], [11].

• Conformance checking: Does the FSE perform the mandatory activities in a corrective maintenance case?

To answer this question we propose the use of the LTL checker plugin. The plugin is based on Linear Temporal Logic and provides means for checking temporal formulas against event logs [22]. An example of such a formula is to check whether an activity is executed in each process instance of the log.

Order of activities   Activity Name            Status
1                     Tube Adaptation          mandatory
2                     Tube Yield Calibration   optional
3                     EDL Verification         mandatory

Table 2.2: Mandatory activities for replacement of Converter Velara 8E

Table 2.2 shows the mandatory activities that need to be executed for the Converter Velara replacement process, as obtained from the domain experts. In this process there are mandatory activities, i.e. Tube Adaptation and EDL Verification, but also optional activities such as Tube Yield Calibration. In the current case the check is done to see whether the activity Tube Adaptation is performed in each case.

The formula used in the LTL Checker plugin is eventually_activity_A with A = Tube Adaptation, which checks whether the activity Tube Adaptation has been performed.
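The following minimal sketch illustrates, under the assumption that traces are plain lists of activity names, the kind of property such a formula expresses: eventually() mirrors eventually_activity_A, and eventually_then() mirrors the order checks used later for Table 2.4. The example traces are invented.

```python
# Sketch of the two LTL-style checks on a toy log of activity-name traces.

def eventually(trace, a):
    """True iff activity a occurs somewhere in the trace (F a in LTL)."""
    return a in trace

def eventually_then(trace, a, b):
    """True iff some b occurs after the first occurrence of a."""
    return a in trace and b in trace[trace.index(a) + 1:]

traces = [
    ["Login", "Tube Adaptation", "EDL Verification"],
    ["Login", "Tube Adaptation"],
]
hits = sum(eventually(t, "Tube Adaptation") for t in traces)
print(f"Tube Adaptation performed in {hits} of {len(traces)} cases")
```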

The result is that the activity was performed in over 73% of the cases (38 cases), as can be seen in Table 2.3.

Activity Name            Number of systems   Percentage
Tube Adaptation          38                  73.037%
Tube Yield Calibration   3                   5.66%
EDL Verification         3                   5.66%

Table 2.3: Mandatory and optional activities performed

The conformance of the reference process was further checked with the following formulas: eventually_activity_A with A = Tube Yield Calibration and eventually_activity_A with A = EDL Verification. The first formula checks whether the Tube Yield Calibration activity was performed, while the second checks whether the EDL Verification activity was performed. As can be seen in Table 2.3, both activities are performed for only 3 systems, less than 6% of the total number of cases. Considering that Tube Yield Calibration is optional whereas EDL Verification is mandatory, the answer to the research question is negative: mandatory activities are not always performed. The results of the conformance checking for the replacement procedure are surprising for people in Philips, considering that the Converter is an important and expensive part.

Further on, the order of activities in the reference process has been verified using the LTL Checker. The formula used is eventually_activity_A_and_eventually_activity_B, where for the first check A = Tube Adaptation and B = EDL Verification (i.e., it checks whether the activity EDL Verification was performed after the activity Tube Adaptation), for the second check A = Tube Adaptation and B = Tube Yield Calibration, and for the last check A = Tube Yield Calibration and B = EDL Verification. The results of the checks are presented in Table 2.4.

As we can see, the order of activities is not respected in all cases.

Order of activities                         Respected   Not respected
Tube Adaptation > EDL Verification          1           1
Tube Adaptation > Tube Yield Calibration    3           0
Tube Yield Calibration > EDL Verification   0           0

Table 2.4: Number of systems that respect/do not respect the order of activities

• Dotted Chart analysis: What are the activities that the FSE performs in the field during diagnosis?

This plugin has been developed to analyze the performance of business processes and to provide insights by showing the overall process execution at a glance. The dotted chart shows the process events in a graphical way such that analysts get a helicopter view of the process. The advantage of the plugin is that, unlike other discovery techniques, it emphasizes the time dimension and puts no requirements on the structure of the process [20].

In Figure 2.14, we use the dotted chart analysis to show the overall events in the event logs. The events are displayed as dots; time is measured along the horizontal axis of the chart and the vertical axis depicts the process instances. The time option is set to Relative (Ratio) in order to see the relative distribution of events in each process instance. The events that took a long time to complete can be easily identified by the long lines connecting two dots of the same color, which represent the start and completion of the event [19].

Figure 2.14: Dotted Chart with relative time of activities

Figure 2.15: Dotted Chart for Tube Adaptation

The user can also obtain information about patterns in the log by looking at the dotted chart. In Figure 2.15, the Tube Adaptation pattern is identified. The pattern reflects the situation that the Tube Adaptation activity is composed of a series of events which are executed by the system in between the start and complete events of this activity. It is often the case that the execution of a user command implies the execution of other system activities. Thus we can underline the need for abstracting these sub-activities into a single abstract activity consisting of the start event, intermediary events and the complete event.

Using the Dotted Chart Analysis plugin we can answer the research question "What are the activities that the FSE performs in the field during diagnosis?". Moreover, we can see the time distribution of these activities and understand which activities have a high throughput time, but there is no performance information on individual activities.

Although this gives answers to some of the research questions, the dotted chart depicts just a helicopter view of the process and does not give a detailed picture of the work of the FSE. There is a need for a more detailed view containing performance information for each individual event.
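The following minimal sketch illustrates the Relative (Ratio) view of a dotted chart using matplotlib: one row per case, one dot per event, with each case's timestamps normalised by its total duration. The tiny example log is invented, and the plugin's actual rendering is considerably richer.

```python
# Sketch of a dotted chart in Relative (Ratio) mode on a toy event log.
import matplotlib.pyplot as plt

log = {
    "case 1": [("Login", 0), ("Tube Adaptation", 40), ("EDL Verification", 90)],
    "case 2": [("Login", 0), ("Tube Adaptation", 120)],
}

fig, ax = plt.subplots()
for row, (case, events) in enumerate(log.items()):
    duration = max(t for _, t in events) or 1   # avoid division by zero
    xs = [t / duration for _, t in events]      # normalise to case duration
    ax.scatter(xs, [row] * len(xs))
ax.set_yticks(range(len(log)))
ax.set_yticklabels(list(log))
ax.set_xlabel("relative time within case")
ax.set_ylabel("process instance")
plt.show()
```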

• Process discovery using the Fuzzy Miner: What is the workflow of the FSE?

The Fuzzy Miner [12], [11] is a process discovery technique suitable for mining less-structured processes which exhibit a large amount of unstructuredness and conflicting behaviour. By expressing behaviour in a fuzzy, non-concrete manner, fuzzy models are able to simplify complex patterns of behaviour, which makes them preferable for the task of process analysis and exploration.

The Fuzzy Miner addresses the issues of unstructured processes by using abstraction and aggregation techniques for the representation of the process, thereby making the mined models understandable to an analyst. The miner provides a high-level view of the process by abstracting from undesired details, limiting the amount of information by aggregating interesting details and emphasizing the most important details.

The Fuzzy Miner was applied on two different logs (FSCommands.mxml and FSMainMenu.mxml) at different levels of abstraction in order to construct process models. These levels were controlled in the pre-processing phase by storing files both at the command level (the lowest granularity possible in the log), as in Figure 2.16, and at the Main Menu level (the highest granularity level, using the mappings described in Section 2.5.1), as in Figure 2.17. As can be seen from both figures, the resulting process models are unstructured and hard to comprehend. Moreover, the model at the Main Menu level was expected to be more structured, since the hierarchical level of activities was raised in the preprocessing phase, but the contrary turned out to be the case.

We attribute this behaviour to the following reasons:

• In the event log there are system commands as well as user activities which do not have a corresponding mapping to the Main Menu.

• As described in the analysis using the Dotted Chart, there are activities that consist of several sub-activities. Mapping the commands in the preprocessing phase does not completely cover this situation: only the start and end activities that have a direct mapping will be considered, while the rest of the activities remain at the lowest level of abstraction (see the sketch after this list). This underlines the need to abstract the event log in a more flexible way, in which the user has the possibility to select the sub-activities of an abstract activity.

• The fault isolation processes during CM calls do not have a single kind of flow but many variants, due to the high number of hardware and software components that can be faulty; therefore, it is not surprising that the processes derived from the event logs are spaghetti-like. As can be seen in both cases, the event log at the Command level and at the Main Menu level, the resulting process models are hard, almost impossible, to comprehend.
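The following minimal sketch illustrates the mapping limitation raised in the second bullet: commands with a direct mapping are lifted to their Main Menu activity, while unmapped system commands remain at the lowest level of abstraction, so the resulting model does not become simpler. All command names and mapping entries here are hypothetical.

```python
# Sketch of the command-to-Main-Menu lifting done in pre-processing.
# Unmapped commands fall through unchanged, mixing abstraction levels.

MAIN_MENU_MAP = {
    "Login": "Field Service Window",
    "StartTubeAdapt": "Tube Adaptation",
}

def lift_to_main_menu(trace):
    """Replace each command by its Main Menu activity where a mapping exists."""
    return [MAIN_MENU_MAP.get(cmd, cmd) for cmd in trace]

print(lift_to_main_menu(["Login", "StartTubeAdapt", "SysCmd042"]))
# -> ['Field Service Window', 'Tube Adaptation', 'SysCmd042']
```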


Figure 2.16: Fuzzy Model on Command level

Figure 2.17: Fuzzy Model on Main Menu level


2.7 Problem Identification

The findings of the analysis have raised a series of questions regarding the event logs of Philips, the availability of data and the work of the FSE. Considering the event logs, the main problems mentioned in Section 1.3.1 turned out to be highly relevant in this initial case study. The granularity at which the events are logged is too fine-grained. This creates problems in constructing the workflow of the FSE and in unraveling answers to the research questions. Also, there are activities which consist of a series of sub-activities, and there is a need to abstract such activities to higher levels.

Currently, the database containing the command mapping is not complete: it does not contain all field service specific commands but mainly normal user commands. This results in a high number of events even in the preprocessed logs, which therefore remain too complex for analysis.

Considering the availability of data, there are serious limitations due to the non-existent mapping between the identifications used in different data sources: different identifications are used to identify a system in the RADAR database and in the job sheet. This mapping is currently an issue that is being addressed by Philips.

Another reason for the non-availability of data is that FSEs perform operations on a system without logging into the FSF, which makes it difficult or impossible to identify the proper data: the exact time when an FSE was working on the system cannot be established, and the corresponding activities cannot be differentiated from those of a normal user (i.e., doctor, technician).

This chapter presented the results of the preliminary analysis of the FSE process with the corresponding case study. During the analysis, we have discovered certain problems that need to be addressed along with the problems identified in Section 1.3.1. In order to tackle some of these problems, in the next chapter we present an approach for dealing with the unstructuredness of the FSE workflow and a means to identify bottlenecks in the workflow. As depicted in Figure 2.18, the starting point of the approach is the conversion of the data into the MXML format (i.e., ProMImport); the next steps are the creation of abstractions (i.e., Pattern Abstractions) and the discovery of hierarchical process models as process maps (i.e., Fuzzy Miner). The discovered process maps are used together with the event logs to calculate performance information of the process and to project it onto the process maps (i.e., Fuzzy Map Performance Analysis).


Figure 2.18: The approach for analysis and plugins of the FSE workflow


Chapter 3

Mining Hierarchical Process Models and Measuring Performance

In this chapter, we explain the approach to measure the performance of a business process represented as a process map based on an event log, and to annotate the map with performance information extracted from the event log. Section 3.1 gives a formal representation of event logs. Section 3.2 gives an overview of process maps. Section 3.3 provides the representation of a process map as a graph structure used for performance computation, and Section 3.4 presents an algorithm to replay the event log and compute the Key Performance Indicators (KPIs). Section 3.5 provides a complete description of the KPIs considered.

3.1 Event Logs Formalization

The goal of performance analysis techniques is to extract and analyse additional information from the event logs, such as the performer or originator of an event (i.e., the person/resource executing or initiating an activity), the timestamp of the event, or data elements recorded with the event (e.g., the size of an order) [23]. For the presented approach to measuring performance information, we define the concepts of event log and trace that will be used further on in this chapter.

Definition 3.1.1 (Event Log)
An event log L is defined as L = (Σ, (T, f), E, time), where:

Σ denotes the set of distinct activities/event classes;
E is the set of all events in the log;
time : E → ℝ⁺₀ is a function assigning a timestamp to events, with time(e) the time of an event e;
T is the set of traces;
f : T → ℕ≥1 denotes the number of occurrences of a trace t ∈ T.

Definition 3.1.2 (Trace)
A trace t ∈ E* is a finite sequence of events such that each event appears only once and time is non-decreasing, i.e., for 1 ≤ i < j ≤ |t|: t(i) ≠ t(j) and time(t(i)) ≤ time(t(j)).
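The following minimal sketch encodes Definitions 3.1.1 and 3.1.2 in Python: an event carries an activity from Σ and a timestamp, a trace is a finite sequence of distinct events with non-decreasing timestamps, and a trace-to-frequency dictionary stands in for (T, f). The example events are invented.

```python
# Sketch of the event-log formalisation of Definitions 3.1.1 and 3.1.2.
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    eid: int          # unique identifier of the event within E
    activity: str     # element of the alphabet Sigma
    time: float       # time(e), a non-negative real

def is_trace(events):
    """Definition 3.1.2: each event appears once, time is non-decreasing."""
    appears_once = len({e.eid for e in events}) == len(events)
    non_decreasing = all(
        a.time <= b.time for a, b in zip(events, events[1:])
    )
    return appears_once and non_decreasing

t = (Event(1, "Login", 0.0), Event(2, "Tube Adaptation", 5.0))
log = {t: 3}          # f(t) = 3: this trace occurs three times in the log
assert is_trace(t)
```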

