
2.6 Data analysis

2.6.1 Case study: Converter Velara 8E

The case study of the Converter Velara 8E covers all the corrective maintenance calls that involved the replacement of this part in Philips Allura Xper systems.

For the year 2009, the total number of such calls was 172. Out of these calls, log files are available for 58 systems. For the remaining systems, either they were not connected to the Philips RSN network in the specified period, or no mapping is available. In the CSDWH database the systems are stored with the identifier congID and in the RADAR database with the identifier rsnID. For such systems the mapping between these two IDs is currently not available, and hence it is not possible to identify the proper log files for the corresponding calls.

Out of these 58 systems, the logs of only 53 systems are usable. The remaining 5 systems are not usable because the FSE did not log into the system. In that case it is not possible to distinguish between operation of the system by a normal user (doctor, technician) and by the engineer: if the FSE does not log into the system, no activities are logged under field service mode.

Further on, the selected systems are split for the analysis according to the different versions of the system. Figure 2.13 gives an overview of the distribution of systems per version.

The event logs are transformed using the ProMImport plugin described earlier. We have chosen different process mining techniques to get insights into the process of replacing the Converter Velara 8E part: Conformance checking [22], Dotted Chart Analysis [20], and the Fuzzy miner [12], [11].

• Conformance checking: Does the FSE perform the mandatory activities in a corrective maintenance case?

To answer this question we propose the use of the LTL checker plugin. The plugin is based on Linear Temporal Logic and provides means for checking temporal formulas against event logs [22]. An example of such a formula is to check whether an activity is executed in each process instance of the log.

Order of activities   Activity Name            Status
1                     Tube Adaptation          mandatory
2                     Tube Yield Calibration   optional
3                     EDL Verification         mandatory

Table 2.2: Mandatory activities for replacement of Converter Velara 8E

Table 2.2 shows the activities that need to be executed for the Converter Velara replacement process, as obtained from the domain experts. In this process there are mandatory activities, i.e., Tube Adaptation and EDL Verification, but also optional activities such as Tube Yield Calibration. In the current case the check is done to see whether the activity Tube Adaptation is performed in each case.

The formula for the LTL Checker plugin is eventually_activity_A, where A = Tube Adaptation; it checks whether the activity Tube Adaptation has been performed.
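The semantics of the eventually_activity_A formula can be sketched as follows; this is a minimal re-implementation for illustration, not the ProM LTL Checker itself, and the trace contents are invented examples.

```python
# Sketch of the eventually_activity_A check: the formula holds for a
# trace if the given activity occurs at least once in that trace.

def eventually_activity(log, activity):
    """Return the fraction of traces in which `activity` occurs at least once."""
    hits = sum(1 for trace in log if activity in trace)
    return hits / len(log)

# Each trace is the sequence of activity names recorded for one system
# (illustrative data, not the actual case-study log).
log = [
    ["Tube Adaptation", "EDL Verification"],
    ["Tube Yield Calibration"],
    ["Tube Adaptation"],
]
print(eventually_activity(log, "Tube Adaptation"))  # satisfied in 2 of 3 traces
```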

The result is that it was performed in over 73% of the cases (38 cases), as can be seen in Table 2.3.

Activity Name            Number of systems   Percentage
Tube Adaptation          38                  73.037%
Tube Yield Calibration   3                   5.66%
EDL Verification         3                   5.66%

Table 2.3: Mandatory and optional activities performed

The conformance of the reference process was further checked with the following formulas: eventually_activity_A where A = Tube Yield Calibration, and eventually_activity_A where A = EDL Verification. The first formula checks whether the Tube Yield Calibration activity was performed, while the second checks whether the EDL Verification activity was performed. As can be seen in Table 2.3, both activities are performed for only 3 systems, less than 6% of the total number of cases. Considering that Tube Yield Calibration is an optional activity whereas EDL Verification is mandatory, the answer to the research question is negative: mandatory activities are not always performed. The results of the conformance checking for the replacement procedure are surprising for people at Philips, considering that the Converter is an important and expensive part.

Further on, the order of activities in the reference process has been verified using the LTL Checker. The formula for conformance checking is eventually_activity_A_and_eventually_activity_B, where for the first case A = Tube Adaptation and B = EDL Verification (i.e., checking whether the activity EDL Verification was performed after the activity Tube Adaptation), for the second case A = Tube Adaptation and B = Tube Yield Calibration, and for the last case A = Tube Yield Calibration and B = EDL Verification. The results of the checks are presented in Table 2.4.

As we can see, the order of activities is not respected in all cases.

Order of activities                          Respected   Not respected
Tube Adaptation > EDL Verification           1           1
Tube Adaptation > Tube Yield Calibration     3           0
Tube Yield Calibration > EDL Verification    0           0

Table 2.4: Number of systems that respect/do not respect the order of activities
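An ordering check of this kind can be sketched as follows; the function name and example traces are illustrative assumptions, not the LTL Checker's actual API.

```python
# Sketch: does activity A precede activity B within a trace? Traces in
# which one of the two activities is missing are reported as "order not
# applicable", mirroring the small counts in Table 2.4.

def a_before_b(trace, a, b):
    """True if both a and b occur and the first a precedes the last b."""
    if a not in trace or b not in trace:
        return None  # order not applicable for this trace
    first_a = trace.index(a)
    last_b = len(trace) - 1 - trace[::-1].index(b)
    return first_a < last_b

# Illustrative traces, not the actual case-study data.
log = [
    ["Tube Adaptation", "Tube Yield Calibration", "EDL Verification"],
    ["EDL Verification", "Tube Adaptation"],
]
print([a_before_b(t, "Tube Adaptation", "EDL Verification") for t in log])
# -> [True, False]
```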

• Dotted Chart analysis: What are the activities that FSEs perform in the field during diagnosis?

This plugin has been developed to analyze the performance of business processes and to provide insights by showing the overall process execution at a glance. The dotted chart shows the process events in a graphical way such that the analysts get a helicopter view of the process. The advantage of the plugin is that, unlike other discovery techniques, it emphasizes the time dimension and puts no requirements on the structure of the process [20].

In Figure 2.14, we use the dotted chart analysis to show the overall events in the event logs. The events are displayed as dots; time is measured along the horizontal axis of the chart and the vertical axis depicts the process instances. The time option is set to Relative (Ratio) in order to see the relative distribution of events in each process instance. The events that took a long time to complete can be easily identified by the long lines connecting two dots of the same color, which represent the start and completion of the event [19].
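The Relative (Ratio) option amounts to the following time transformation, sketched here under the assumption that each trace's timestamps are numeric; the function name is ours, not the plugin's.

```python
# Sketch of the "Relative (Ratio)" time option: each event's timestamp
# is rescaled to [0, 1] within its own trace, so traces of different
# durations become directly comparable along the horizontal axis.

def relative_ratio(trace_times):
    """Map a trace's event timestamps to their relative position in [0, 1]."""
    start, end = trace_times[0], trace_times[-1]
    span = end - start or 1  # guard against single-event traces
    return [(t - start) / span for t in trace_times]

print(relative_ratio([100, 150, 300]))  # -> [0.0, 0.25, 1.0]
```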

Users can also obtain information about patterns in the log by looking at the dotted chart. In Figure 2.15, the Tube Adaptation pattern is identified. The pattern reflects the situation that the Tube Adaptation activity is composed of a series of events which are executed by the system in between the start and complete events of this activity. It is often the case that the execution of a user command implies the execution of other system activities. Thus we can underline the need for abstracting these sub-activities into a single abstract activity consisting of the start event, intermediary events, and the complete event.

Figure 2.14: Dotted Chart with relative time of activities

Figure 2.15: Dotted Chart for Tube Adaptation

Using the Dotted Chart Analysis plugin we can answer the research question What are the activities that FSEs perform in the field during diagnosis? Moreover, we can see the time distribution of these activities and understand which activities have a high throughput time, but there is no performance information on individual activities.

Although this answers some of the research questions, the dotted chart depicts just a helicopter view of the process and does not give a detailed picture of the work of the FSE. There is a need for a more detailed view containing performance information for each individual event.

• Process discovery using Fuzzy miner: What is the workflow of the FSE?

The Fuzzy Miner [12], [11] is a process discovery technique suitable for mining less-structured processes which exhibit a large amount of unstructuredness and conflicting behaviour. By expressing behaviour in a fuzzy, non-concrete manner, fuzzy models are able to simplify complex patterns of behaviour, which makes them preferable for the task of process analysis and exploration.

The Fuzzy miner addresses the issues of unstructured processes by using abstraction and aggregation techniques for the representation of the process, thereby making the mined models understandable to an analyst. The miner provides a high-level view on the process by abstracting from undesired details, limiting the amount of information by aggregating interesting details, and emphasizing the most important details.

The Fuzzy Miner was applied on two different logs (FSCommands.mxml and FSMainMenu.mxml) on different levels of abstraction in order to construct process models. These levels were controlled in the preprocessing phase by storing files for both the command level (the lowest granularity possible in the log), as in Figure 2.16, and the Main Menu level (the highest granularity level, using the mappings described in Section 2.5.1), as in Figure 2.17. As can be seen from both figures, the resulting process models are unstructured and hard to comprehend. Moreover, at the Main Menu level the model was expected to be more structured owing to the change of the hierarchical level of activities in the preprocessing phase, but it turned out to be the contrary.

We attribute this behaviour to the following reasons:

• In the event log there are system commands as well as user activities which do not have a corresponding mapping to Main Menu.

• As described in the analysis using the Dotted Chart, there are activities that consist of several sub-activities. Mapping the commands in the preprocessing phase does not completely cover this situation: only the start and end activities that have a direct mapping will be considered, while the rest of the activities remain at the lowest level of abstraction. This underlines the need to abstract the event log in a more flexible way, in which the user has the possibility to select the sub-activities of an abstract activity.

• The fault isolation processes during CM calls do not have a single kind of flow but many variants, due to the high number of hardware and software components that can be faulty; therefore, it is not surprising that the processes derived from the event logs are spaghetti-like. As can be seen in both cases, the event log at Command level and at Main Menu level, the resulting process models are hard and almost impossible to comprehend.
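The mapping step behind the Main Menu level can be sketched as follows; the mapping table and command names are invented for illustration, not the actual Philips command database.

```python
# Sketch: raising the abstraction level of a trace by mapping low-level
# commands to Main Menu activities, as in the preprocessing phase.
# Unmapped commands stay at command level, which is one reason the
# Main Menu model remains unstructured.

COMMAND_TO_MENU = {  # hypothetical mapping entries
    "StartTubeAdapt": "Tube Adaptation",
    "MeasureDose":    "Tube Adaptation",
    "RunEDLCheck":    "EDL Verification",
}

def abstract_trace(trace):
    """Replace mapped commands by their Main Menu activity, collapsing repeats."""
    out = []
    for cmd in trace:
        act = COMMAND_TO_MENU.get(cmd, cmd)  # unmapped: keep command level
        if not out or out[-1] != act:
            out.append(act)
    return out

print(abstract_trace(["StartTubeAdapt", "MeasureDose", "SysLogFlush", "RunEDLCheck"]))
# -> ['Tube Adaptation', 'SysLogFlush', 'EDL Verification']
```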

Figure 2.16: Fuzzy Model on Command level

Figure 2.17: Fuzzy Model on Main Menu level


2.7 Problem Identification

The findings of the analysis have raised a series of questions regarding the event logs of Philips, the availability of data, and the work of the FSE. Considering the event logs, the main problems mentioned in Section 1.3.1 turned out to be highly relevant in this initial case study. The granularity at which the events are logged is too fine-grained. This creates problems in constructing the workflow of the FSE and in unraveling answers to the research questions. Also, there are activities which consist of a series of sub-activities, and there is a need to abstract such activities to higher levels.

Currently the database containing the command mapping is not complete: it does not contain all field-service-specific commands but mainly normal user commands. This results in a high number of events even in the preprocessed logs; as a result, the preprocessed logs are still too complex for analysis.

Considering the availability of data, there are serious limitations due to the non-existent mapping between the identifiers used in different data sources. For storing the events, different identifiers are used to identify a system in the RADAR database and in the job sheet. The mapping of these is currently an issue that is being addressed by Philips.

Another reason for the non-availability of data is that the FSE performs operations on a system without logging into the FSF, which makes it difficult or impossible to identify the proper data: the exact time when an FSE was working on the system cannot be determined, and the corresponding activities cannot be differentiated from those of a normal user (i.e., doctor, technician).

This chapter presented the results of the preliminary analysis of the FSE process with the corresponding case study. However, during the analysis, we have discovered certain problems that need to be addressed along with the problems identified in Section 1.3.1. In order to tackle some of these problems, in the next chapter we will present an approach for dealing with the unstructuredness of the FSE workflow and a means to identify bottlenecks in the workflow. As depicted in Figure 2.18, the starting point of the approach is the conversion of data into MXML format (i.e., ProMImport); the next step is the creation of abstractions (i.e., Pattern Abstractions) and the discovery of hierarchical process models as process maps (i.e., Fuzzy Miner). The discovered process maps are used together with the event logs to calculate performance information of the process and project it onto the process maps (i.e., Fuzzy Map Performance Analysis).
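The chain of stages in the approach can be sketched as a simple pipeline; the stage bodies here are placeholders, since the real steps are ProM plugins (ProMImport, Pattern Abstractions, Fuzzy Miner, Fuzzy Map Performance Analysis), and all names are illustrative.

```python
# Sketch of the analysis pipeline: each stage consumes the previous
# stage's output, ending with a process map annotated with KPIs.

def convert_to_mxml(raw_events):        # ProMImport
    return {"log": raw_events}

def abstract_patterns(log):             # Pattern Abstractions
    log["abstracted"] = True
    return log

def mine_process_map(log):              # Fuzzy Miner
    return {"map": "hierarchical process map", "log": log}

def annotate_performance(model):        # Fuzzy Map Performance Analysis
    model["kpis"] = "performance projected onto the map"
    return model

result = annotate_performance(mine_process_map(abstract_patterns(convert_to_mxml("raw events"))))
print(result["map"])
```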


Figure 2.18: The approach for analysis and plugins of the FSE workflow

Chapter 3

Mining Hierarchical Process Models and Measuring Performance

In this chapter, we explain the approach to measure the performance of a business process represented as a process map based on an event log, and to annotate the map with performance information extracted from the event log. Section 3.1 gives a formal representation of event logs. Section 3.2 gives an overview of process maps. Section 3.3 provides the representation of a process map as a graph structure used for performance computation, and Section 3.4 presents an algorithm to replay the event log and compute the KPIs. Section 3.5 provides a complete description of the Key Performance Indicators (KPIs) considered.

3.1 Event Logs Formalization

The goal of performance analysis techniques is to extract and analyse additional information from the event logs, such as the performer or originator of an event (i.e., the person/resource executing or initiating an activity), the timestamp of the event, or data elements recorded with the event (e.g., the size of an order) [23]. For the presented approach to measuring performance information, we first define the concepts of event log and trace that will be used further on in this chapter.

Definition 3.1.1 (Event Log)
An event log L is defined as L = (Σ, (T, f), E, time), where:

• Σ denotes the set of distinct activities/event classes;
• E is the set of all events in the log;
• time : E → R+0 is a function assigning a timestamp to each event; time(e) is the time of event e;
• T is the set of traces;
• f : T → N≥1 denotes the number of occurrences of a trace t ∈ T.

Definition 3.1.2 (Trace)
A trace t ∈ T is a finite sequence of events from E such that each event appears only once and time is non-decreasing, i.e., for 1 ≤ i < j ≤ |t|: t(i) ≠ t(j) and time(t(i)) ≤ time(t(j)).
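Definitions 3.1.1 and 3.1.2 translate directly into a small data structure; the class and function names below are our own illustrative choices.

```python
# Sketch of Definitions 3.1.1/3.1.2: a trace is a finite sequence of
# distinct, timestamped events whose timestamps are non-decreasing.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    activity: str   # element of the alphabet Sigma
    time: float     # time(e), a non-negative timestamp

def is_valid_trace(trace):
    """Check distinctness of events and non-decreasing timestamps."""
    distinct = len(set(trace)) == len(trace)   # each event appears only once
    ordered = all(trace[i].time <= trace[i + 1].time
                  for i in range(len(trace) - 1))
    return distinct and ordered

t = [Event("Tube Adaptation", 1.0), Event("EDL Verification", 2.5)]
print(is_valid_trace(t))  # -> True
```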


In the next section we give an overview of process maps.