
4.3 Fuzzy Map Performance Analysis Plugin

4.3.2 Performance Information − KPIs Tabbed Pane

The visualization can be customized based on the needs and preferences of users.

Moreover, detailed performance information is needed that does not fit in the diagram. Therefore, the visualization of the plugin displays on the bottom panel eight tabs containing either KPIs or settings for the adjustment of KPIs.

1. The first tab, the Process KPI tab, shows all process KPIs which were described in Section 3.5.1. Figure 4.5 depicts the detailed KPIs in the Process KPI tab:

the event log consists of 6000 cases, out of which 1661 are unique cases. There are 0 fitting cases in the event log and the arrival rate is approximately 0.5 cases/minute.

Figure 4.5: Process KPI tab

2. The second and third tabs are the Node KPI tab and the Edge KPI tab respectively, which show the node and edge related KPIs described in Section 3.5.2 and Section 3.5.3 respectively. Depending on the node or the edge selected in the model, the KPIs of this element are displayed in the corresponding tab.

Figure 4.6 depicts the detailed selection of elements in the annotated process map: two nodes, Register (complete) and [High] (complete), and the connecting edge (flow), Register (complete)→[High] (complete).

Figure 4.6: Detailed example of elements in the annotated process map

If the user selects the first node, Register (complete), the result will be displayed in the corresponding Node KPI tab as presented in Figure 4.7. The number of executions of node Register (complete) is 6000 and the initialization frequency is 6000 (thus the pink color of the corresponding INIT box).

The node is a primitive node that corresponds to an atomic activity in the event log; therefore, its total execution (throughput) time is 0.

Figure 4.7: Node KPI tab for node Register (complete)

If the user selects the node [High] (complete), the result is displayed in the corresponding Node KPI tab as presented in Figure 4.8. The number of executions of node [High] (complete) is 2425. The node is an abstract node and its total execution (throughput) time is approximately 1775 hours. For this node several statistics are computed, such as the average, minimum, maximum, and standard deviation of the execution (throughput) time: 1.45, 0.02, 21.45 and 1.82 hours respectively. Moreover, the throughput time threshold values are computed: the lower bound is 1.45 hours and the upper bound 2.9 hours. The computation of these bounds is explained later on in this section. The user can change these values for the node by selecting the corresponding button and inserting the new value.

Figure 4.8: Node KPI tab for node [High] (complete)

If the user selects the edge Register (complete)→[High] (complete), the result is displayed in the corresponding Edge KPI tab as presented in Figure 4.9. The number of executions of this edge is 2134 and the total execution (throughput) time is approximately 7366 hours. For this edge several statistics are computed, such as the average, minimum, maximum and standard deviation of the execution (throughput) time: 3.45, 0.72, 26.95 and 1.95 hours respectively. Moreover, the throughput time threshold values are computed: the lower bound is 3.45 hours and the upper bound 6.9 hours. The user can change these values for the edge by selecting the corresponding button.

Figure 4.9: Edge KPI tab for edge Register (complete)→[High] (complete)

3. The fourth tab is the Activities not in model tab which shows the list of activities that are encountered in the log but not in the process map, with the corresponding counts, as described in Section 3.5.1. For the running example all activities in the event log are present in the model.

4. The fifth tab is the Activities not in log tab which shows the list of nodes in the process map that do not have a corresponding activity in the event log.

For the running example all activities in the model are present in the event log.

5. The sixth tab is the Activities Statistics tab which provides for each activity encountered in the process map: total number of executions, total execution time, average, minimum, maximum, and the standard deviation of the execution time, as depicted in Figure 4.10.

Figure 4.10: Activities Statistics tab

6. The seventh tab is the Edges Statistics tab which provides for each flow (edge) encountered in the process map: total number of executions, total execution time, average, minimum, maximum, and the standard deviation of the execution time, as depicted in Figure 4.11.

7. The eighth tab is the Global Settings tab, which provides the following functionalities:

• change time unit

• change scenario for threshold computation

• save and load threshold values

In the Global Settings tab the user can change the time unit of the performance metrics as shown in Figure 4.12. Event logs may record event timestamps in the order of milliseconds which might not be a convenient time unit for conducting analysis. There are several time unit options provided in the tab by a combo box: millisecond, second, minute, hour and day. With these options, performance values can be displayed in the most convenient time unit according to the user's preference.

Figure 4.12: Interface to select the time unit in Global Settings tab
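To make the unit selection concrete, the sketch below shows one way such a conversion could be implemented. It is only an illustration: the enum and method names are assumptions made for this example and not the plugin's actual API; only the millisecond conversion factors are standard.

/**
 * Illustrative sketch: rescaling raw millisecond durations to the time unit
 * chosen in the Global Settings combo box. Type and method names are
 * assumptions for this example, not the plugin's actual API.
 */
public enum TimeUnitOption {
    MILLISECOND(1L),
    SECOND(1_000L),
    MINUTE(60_000L),
    HOUR(3_600_000L),
    DAY(86_400_000L);

    private final long millisPerUnit;

    TimeUnitOption(long millisPerUnit) {
        this.millisPerUnit = millisPerUnit;
    }

    /** Rescale a duration recorded in milliseconds to this unit. */
    public double fromMillis(long durationMillis) {
        return (double) durationMillis / millisPerUnit;
    }
}

For example, TimeUnitOption.HOUR.fromMillis(6_389_039_000L) yields approximately 1774.7, i.e., the total execution time of node [High] (complete) expressed in hours rather than milliseconds.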

The second functionality in the Global Settings tab is to change the scenario for threshold computation. The threshold values consist of two values, the low and high threshold boundaries, and are used to compute the performance coloring of nodes and edges in the process map, based on which we can identify the bottlenecks in the process. These values correspond to the initial setting, before the analyst decides to change them. The thresholds can be computed based on two different scenarios that the user can select from the Global Settings tab, as illustrated in Figure 4.13.

For all the nodes/edges in the process map we compute: µ_n, the average, and σ_n, the standard deviation of the execution time of a node n; and µ_e, the average, and σ_e, the standard deviation of the execution time of an edge e.

Figure 4.13: Interface to select the threshold computation scenario in Global Settings tab

• Scenario 1 - Performance coloring based on standard deviation: Average and standard deviation are computed separately for each node/edge in the map. Each node/edge will have individual thresholds. The reasoning behind this scenario is to help the user understand the variations in the execution of a node/edge, by having the standard deviation as a base for performance coloring. For example, some activities take very little time in some cases and a much longer time in other cases, thus they will have a high variation in their execution time. These activities can be highlighted by performance coloring in order to be identified by the analyst.

For this scenario, the user has to select two parameters, δ1 and δ2, for the threshold computation. The user can choose these values based on the characteristics of the process; they can be any real values, with the condition that δ1 < δ2.

The lower bound will then be lb = µ × δ1 and the upper bound ub = µ × δ2. The thresholds lb and ub are used as lower and upper bounds for the performance coloring of nodes/edges as follows:

– if σ ≤ lb then the color of the element is green

– if lb < σ ≤ ub then the color of the element is yellow

– if σ > ub then the color of the element is red

• Scenario 2 - Performance coloring based on average execution time at a level: Average and standard deviation are computed for the set of nodes/edges per level in the map. For example, for all the nodes n ∈ N_lev at a certain level lev in the process map, the average and standard deviation of the execution time are computed from f_n, the total number of executions, and tt_n, the total execution (throughput) time of each node n. Here µ_lev denotes the average execution time of all nodes at level lev and σ_lev the standard deviation of the execution time of all nodes at level lev, i.e., they are constant values for all nodes at a certain level in the process map.

Similar computations are used for the edge boundaries. The boundaries are based on these averages and standard deviations per level, computed for the nodes and separately for the edges. In this way, all nodes at a certain level in the process map will have the same boundaries, and likewise for the edges. The performance coloring rules for nodes/edges are as follows:

– if µ ≤ lb then the color of the element is green

– if lb < µ ≤ ub then the color of the element is yellow

– if µ > ub then the color of the element is red

where µ and σ are the average and standard deviation of the element in question. A small code sketch of both coloring rules is given below.
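The following sketch summarizes how the two coloring rules could look in code. It is a minimal illustration under the definitions above, not the plugin's implementation: the class and method names are invented for the example, the factors δ1 and δ2 are passed in as chosen by the user, and for Scenario 2 the level-wide bounds are assumed to be supplied by the level statistics described earlier.

/** Minimal sketch of the threshold-based performance coloring rules. */
public class PerformanceColoring {

    public enum Color { GREEN, YELLOW, RED }

    /** Generic rule: compare a value against a lower and an upper bound. */
    public static Color color(double value, double lb, double ub) {
        if (value <= lb) return Color.GREEN;   // value <= lb: green
        if (value <= ub) return Color.YELLOW;  // lb < value <= ub: yellow
        return Color.RED;                      // value > ub: red
    }

    /** Scenario 1: the element's standard deviation sigma is compared against
     *  its own bounds lb = mu * delta1 and ub = mu * delta2. */
    public static Color colorByDeviation(double mu, double sigma,
                                         double delta1, double delta2) {
        return color(sigma, mu * delta1, mu * delta2);
    }

    /** Scenario 2: the element's average mu is compared against the
     *  level-wide bounds shared by all nodes (or edges) at that level. */
    public static Color colorByLevelAverage(double mu, double levelLb, double levelUb) {
        return color(mu, levelLb, levelUb);
    }

    public static void main(String[] args) {
        // Node [High] (complete): mu = 1.454 h, sigma = 1.817 h. The factors
        // delta1 = 1 and delta2 = 2 are assumed here only because they
        // reproduce the reported bounds of 1.454 and 2.907 hours.
        System.out.println(colorByDeviation(1.454, 1.817, 1.0, 2.0)); // YELLOW
    }
}

The main method reproduces the worked example discussed later in this section: the standard deviation of node [High] (complete) falls between its two bounds, so the node is colored yellow.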

The third functionality in the Global Settings tab is to save/load throughput time threshold values, as depicted in Figure 4.14. We propose a methodological approach for the comparison of models by analysing the deviations in performance.

Figure 4.14: Interface to save/load throughput time threshold values in Global Settings tab

The user can adjust the thresholds for each node/edge and then export the values. These values can then be used as a reference for another process map/event log, by loading the values into the process map. This approach to comparing processes can help in identifying the bottlenecks in a process by taking the threshold values from an ideal case as reference and using them on different cases to identify the activities that took more time than expected.

By saving the threshold values for a process map, a .txt file is created, containing the list of all nodes, edges and the corresponding threshold values.

This file can later be used as a reference for another process map by loading the threshold values. For each node in the second process map that is identified in the threshold file, the boundaries will be updated with the new values. In this way, the loaded threshold values will be used for the performance coloring of nodes and edges. Thus we can identify the elements that change their performance coloring. For example, the nodes that were previously coloured green and are now coloured red can highlight one of the differences between two process models, such as activities that took less time in one process and have high execution times in the other.
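As a rough illustration of the export/import step, the sketch below writes and reads such a threshold file. The tab-separated layout (element name, lower bound, upper bound per line) is an assumption made for this example; the plugin's actual .txt format is only described above as listing all nodes, edges and their threshold values.

import java.io.*;
import java.util.*;

/** Illustrative sketch of saving and loading per-element threshold values. */
public class ThresholdFile {

    /** Write one line per node/edge: name, lower bound, upper bound. */
    public static void save(Map<String, double[]> thresholds, File file) throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter(file))) {
            for (Map.Entry<String, double[]> entry : thresholds.entrySet()) {
                out.printf(Locale.US, "%s\t%.3f\t%.3f%n",
                        entry.getKey(), entry.getValue()[0], entry.getValue()[1]);
            }
        }
    }

    /** Read the file back into a map from element name to {lb, ub}. */
    public static Map<String, double[]> load(File file) throws IOException {
        Map<String, double[]> thresholds = new HashMap<>();
        try (BufferedReader in = new BufferedReader(new FileReader(file))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] parts = line.split("\t");
                thresholds.put(parts[0], new double[] {
                        Double.parseDouble(parts[1]), Double.parseDouble(parts[2]) });
            }
        }
        return thresholds;
    }
}

When the loaded map is applied to a second process map, every node or edge whose name appears in the file has its bounds overwritten, so the reference thresholds drive the performance coloring of the new model, as in the example below.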

For example, node [High] (complete) has a total execution time of 1774.733 hours with an average of 1.454 hours and a standard deviation of 1.817 hours.

The threshold boundaries for this node are 1.454 and 2.907 hours, the lower and upper bound respectively, thus the color of the node is yellow (based on Scenario 1, the standard deviation has a value higher than the lower bound and lower than the upper bound). Next, the user exports the threshold values for all elements in the process map, including node [High] (complete).

Let us assume that these thresholds are used as reference in a second process map that also contains node [High] (complete), this time with a total execution time of 2474.5 hours, an average of 2.454 hours and a standard deviation of 3.217 hours. Using the exported bounds for this node and based on Scenario 1, the standard deviation is now higher than the upper bound, so the color of the node will be red. Thus, we identify node [High] (complete) as taking more time than expected. Similarly, for all nodes and edges the threshold boundaries are exported and used as reference.
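Using the generic rule from the earlier coloring sketch with the loaded reference bounds (rather than bounds recomputed from the second map's own average), this comparison can be checked directly:

// Loaded reference bounds for [High] (complete): lb = 1.454 h, ub = 2.907 h.
// The second map's standard deviation of 3.217 h exceeds ub, so the node is red.
PerformanceColoring.Color c = PerformanceColoring.color(3.217, 1.454, 2.907); // RED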

As shown, the Fuzzy Map Performance Analysis Plugin provides two means of displaying performance information: the Graph Panel and the KPIs Tabbed Pane. The user can easily see some performance metrics on the annotated elements and at the same time identify all the other KPIs in the corresponding tabs.

Moreover, the plugin has no constraints on the number of hierarchical levels of the process map, nor on the level of the input event log (i.e., the log can be at any level with respect to the process map). The example in Figure 4.15 takes as input a process map that has two hierarchical levels and an event log at log level 0 (as defined in Section 3.4), i.e., the deepest level in the map. Using the same process map but a different event log, this time at log level 2, we obtain the result depicted in Figure 4.16. As we can see in Figure 4.15, when the event log is at the lowest level in the hierarchy, performance metrics are available for each level of the process map. Moreover, all abstract nodes have execution time information, while the primitive nodes do not, because they are mapped to atomic activities in the event log. Using the event log at level 2, with activities that map directly to nodes at the highest hierarchical level of the map (Figure 4.16), the lower levels do not have performance information available. Moreover, in this case only flows in the process map have time information, while nodes do not, since every node corresponds to an atomic activity in the event log.

Figure 4.15: Annotated process map using the event log with level of log 0

Figure 4.16: Annotated process map using the event log with level of log 2

Chapter 5 Case Study

In this chapter, a case study analysing the Field Service Engineer process is described. Here we use the FMPA plugin and already existing process mining techniques. The tube MRC was chosen as the Field Replaceable Unit (FRU) for this case study because it is the most expensive part in the Allura X-ray systems, with a high frequency of replacement and significant variations in the time to repair. During this case study we will try to answer the research questions posed in Chapter 1.

5.1 Case Study – Tube MRC

The case study of the tube MRC covers all the corrective maintenance calls that are related to the replacement of this part on an Allura Xper FD20 system with software version 2.0.0. The total number of systems for which log files exist is 28. Moreover, we split these 28 systems into two sets: the set of calls that took less than the mean time to repair (LMTR) and the set of calls that took more than the mean time to repair (MMTR). The reasoning behind this partitioning is that there is a high variation in corrective maintenance hours spent on systems for the replacement of the tube, and we want to understand what the causes for this deviation are.

We first consider the cases for which the declared corrective maintenance time is less than the mean time to repair (12 hours). The less-than-mean-time-to-repair cases consist of 14 systems, out of which, after preprocessing, 7 systems are left. The other 7 cases do not have the Field Service mode logged, which implies that we cannot identify the actual work of the FSE. Thus, the preprocessed event log consists of 7 cases, 3950 events, 134 event classes and 2 event types.

For this log, let us try to address the following research questions:

• Does the FSE perform the mandatory activities in a corrective maintenance case?

According to domain experts, the following actions need to be performed by an FSE during the replacement of a defective tube:


– Hand over to the customer

There are two major steps that the FSE has to execute: perform adjustments and perform verifications in the Field Service Framework. The two steps consist of a series of procedures that the engineers have to perform in the Field Service Framework: adjustment procedures and verification procedures.

The actions performed during these two steps are stored in the event logs.

As illustrated in Chapter 2, we can use the LTL checker plugin in order to check whether the FSE performed the mandatory activities. The results of this analysis step are presented in Table 5.1 for the adjustment procedures and in Table 5.2 for the verification procedures. As we can see, FSEs do not always perform the mandatory activities required in the replacement of a part; deviations/violations from the expected replacement procedures were discovered. Activities like Tube Conditioning and Tube Adaptation were performed in the required order in 43% of the cases (i.e., in three cases). The rest of the activities occur in even fewer cases, or not at all, such as Alignment of tube/collimator assembly and Detector Entranceplane Adjustment. It is quite surprising for the people at Philips that the adjustment and verification procedures are not strictly followed during the replacement of the tube.

Order  Procedure Name                          Number of systems  Percentage
1      Beam Propeller Current Adjustment       1                  14%
2      Beam CArm Current Adjustment            1                  14%
3      Alignment of tube/collimator assembly   0                  0%
4      Detector Entranceplane Adjustment       0                  0%
5      Tube Conditioning                       3                  43%
6      Tube Adaptation                         3                  43%
7      Tube Yield Calibration                  2                  28%
8      Entrance Doserate Limitation            1                  14%

Table 5.1: Conformance checking of mandatory adjustment procedures

Order  Procedure Name                    Number of systems  Percentage
1      FT Beam Propeller Movement Test   0                  0%
2      FT Beam CArm Movement Test        0                  0%
3      Air Kerma Rate Verification       -                  -
4      Level One IQ Test                 -                  -

Table 5.2: Conformance checking of mandatory verification procedures

• What is the workflow of the FSE?

Figure 5.1 presents the workflow of the FSE on the raw event log, mined using the Fuzzy miner plugin. As we can see, the model generated from the raw log is really complex and hard to comprehend. The workflow of the FSE is not a streamlined process: the engineers perform activities in a repetitive manner, especially in the fault isolation step. This, plus the low granularity at which the events are stored, containing a high number of system commands besides the commands of the FSE, ends up in a spaghetti-like process model.

In order to generate a more comprehensible process model, we use the iterative method presented in Chapter 3 and abstract the log to form multiple levels.

This is done by using the Process Abstractions plugin. In the first iteration we identify the loop constructs and replace occurrences of the loop manifestations with abstract activities; 14 abstract activities were chosen in this step of log transformation. The resulting event log contains 7 cases, 770 events and 65 event classes. The number of abstract activities in this log is 54, each of them having a corresponding sub-log. A second iteration of base maximal repeat pattern identification was performed and 14 abstract activities were chosen for log transformation and pattern discovery. The resulting abstract log contains 7 cases, 532 events and 23 event classes. The number of abstract activities in the abstract log is 18, each of them having a corresponding sub-log.

Taking this log as input, we construct the process map as depicted in Figure 5.2. The resulting process map is easy to comprehend. Moreover, the names of the activities are generated using domain knowledge that the analysts in Philips are used to, and represent the actions that the FSE executes in the Field Service Framework. The resulting process map provides the functionality of drilling down in the hierarchy to see the underlying sub-processes.

• On what activities does the FSE spend more time?

Having both data sources available, an event log consisting of the activities performed by FSEs, and a process map representing the workflow of the FSE, the next step is to replay the event log onto the process map and obtain performance information. This is done by using the Fuzzy Map Performance Analysis plugin.
