Towards a Higher Understandability of Declarative Process Models Through Declarative Process Modelling Guidelines

Chris Hazeu

Master's Thesis – Final Version

MSc. Information Sciences – Track Business Information Systems

Date: 29-07-2015

First supervisor: Hajo Reijers (VU)

Acknowledgements

In this paper, I proudly present my master’s thesis, which could not have been realised without the help of many people.

First of all, many thanks to my supervisor, Hajo Reijers. Your connections, feedback and support have been very helpful. The meetings we had were inspiring and I have really enjoyed working on the project. Also, I would like to thank you for the opportunity to work on my thesis in the Computer Science department at VU University. I've met a lot of great people and it made it easier to complete the project.

Therefore, I would like to thank the BPM group, the Software Development group and the various guests at both groups. I’ve enjoyed all the coffee breaks, lunches and Friday afternoon drinks including a lot of good conversations, discussions and humour. Special thanks to the guys of the BPM group: Ermeson, Han and Henrik, for helping me throughout the project.

Thanks to all the BPM experts and students who participated in this research. I’ve experienced the sessions as fun and always filled with interesting exchanges of ideas afterwards.

Looking back on the whole study year, many thanks go to the fellow students with whom I cooperated the most in projects throughout the year: Amir, Baraa, Dorothea, Laith, Sander and Stefanie. You all made the year a lot easier for me and I've enjoyed all the group projects and social activities. Also many thanks to all my friends who were there to support me this year.

Lots of love to my family, especially Mom, Dad and Thomas, and my girlfriend's family, for giving me the best support I could imagine. Last but not least, much love and thanks to Elze, for putting up with my periods of endless complaining and always brightening me up with hugs, pep talks, episodes of Modern Family and pictures of penguins.

Abstract

In this thesis, we research how declarative process models, and in particular Declare models, can be made more understandable. First, we provide background information about business process modelling languages and about currently used declarative modelling languages. Next, we describe our evaluation of the declarative modelling language Declare against the principles of effective visual notation. Based on this evaluation, on earlier research results on Declare and on theories from cognitive psychology, we propose a set of Declarative Process Modelling Guidelines (DPMG) to make Declare models more understandable. To test the effect of these guidelines, we conducted a two-condition quasi-experimental study measuring the effect of the DPMG on the interpretation, required mental effort and user acceptance of Declare models. We also asked participants to mention challenges for understanding Declare and points of improvement to make Declare more understandable.

The results show that the DPMG partly have a positive effect on the interpretation of Declare models. In the interpretation of Declare models, unrelated activities are responsible for most errors. The DPMG do not have an effect on the required mental effort. Also, both Declare and specific DPMG constructs are rated as useful and easy to use, which indicates a high user acceptance. Finally, the most mentioned challenges related to the process model size and the lack of knowledge of Declare, whereas the most mentioned points of improvement regarded hidden dependencies, visual richness and the order of activities. Following these results, we conclude that the DPMG are promising for improving the understandability of Declare models, but that further development is necessary. Finally, we discuss the limitations of our research and implications for future research.


Contents

1 Introduction
1.1 Business Process Models
1.2 Imperative and Declarative Modelling Languages
1.3 Problem Statement and Thesis Outline
1.3.1 Problem Statement
1.3.2 Thesis Outline
2 Declarative Modelling Languages
2.1 Declarative Modelling Languages in General
2.2 Declare
2.3 DCR Graphs
2.4 BPCN
2.5 Evaluation of Declare in Earlier Research
3 Declarative Modelling Problems and Proposed Solution
3.1 Evaluating Declare against the Principles of Effective Visual Notation
3.2 Research Direction
3.3 Creating the Declarative Process Modelling Guidelines
4 Methodology
4.1 Main Research Question
4.2 Independent Variables
4.3 Research Questions and Dependent Variables
4.3.1 Model Interpretation
4.3.2 Mental Effort
4.3.3 User Acceptance
4.3.4 Challenges and Points of Improvement
4.4 Defining and Planning of the Studies
4.4.1 Subjects
4.4.2 Objects
4.4.3 Procedure
4.5 Performing of the Studies
4.5.1 Execution
4.5.2 Data Validation
4.5.3 Data Analysis
4.6 Findings of the Pilot Study
5 Results
5.1 Demographics and Experience
5.3 Mental Effort
5.4 User Acceptance
5.5 Challenges and Points of Improvement
6 Conclusion and Discussion
6.1 General Conclusion
6.2 Limitations and Future Work
6.3 Concluding Thoughts
7 Appendices
7.1 Appendix A: The Declarative Process Modelling Guidelines
7.2 Appendix B: The Full List of Elements of Declare and BPCN
7.2.1 Declare
7.2.2 BPCN
7.3 Appendix C: The Evaluation of Declare against the Principles of Effective Visual Notation from [17]
7.4 Appendix D: The Process Models used in the Studies
7.4.1 Pilot Study
7.4.2 Main Study
7.5 Appendix E: The Complete Tables of Results from the Studies
7.5.1 Pilot Study


List of Figures

Figure 1 The difference between prespecified and loosely-specified (constraint-based) process models, from [23].
Figure 2 The chapters in this thesis.
Figure 3 Basic Declare elements, from [23].
Figure 4 Semantic meaning of basic Declare constraints, from [10].
Figure 5 DCR Graphs: Constraints and an example model, from [12].
Figure 6 Basic BPCN constraints, from [13].
Figure 7 Process model example with and without DPMG1.
Figure 8 The spatial arrangement of activities that predisposes people to interpret a causality relation, from [17].
Figure 9 How external memory works with process models, from [29].
Figure 10 Process model example with and without DPMG2 and DPMG3.
Figure 11 Process model example with and without DPMG4.
Figure 12 How chunking works with process models, from [29].
Figure 13 Process model example with and without DPMG5.
Figure 14 Process model example with and without DPMG6.
Figure 15 The chronological steps in our research procedure.
Figure 16 The mean scores and 95% confidence intervals for the different types of interpretation questions, specified per condition.
Figure 17 The mean usage and 95% confidence intervals for the cheat sheet usage, specified per condition.
Figure 18 The distribution of points of the relation between the total interpretation scores and the familiarity with Declare.
Figure 19 The mean ratings and 95% confidence intervals for the mental effort ratings, specified for each question type and for each condition.
Figure 20 The mean ratings and 95% confidence intervals for the user acceptance of Declare, specified for the user acceptance constructs and for each condition.
Figure 21 The mean ratings and 95% confidence intervals for the user acceptance of DPMG constructs, specified for the user acceptance constructs and for the DPMG constructs.
Figure 22 An example of a more complex process model, from [10].
Figure 23 Process model example with and without DPMG1.

Figure 26 Process model example with and without DPMG5.
Figure 27 Process model example with and without DPMG6.
Figure 28 Existence constraints in Declare, from [19].
Figure 29 Relation constraints in Declare, from [19].
Figure 30 Negation constraints in Declare, from [19].
Figure 31 Choice constraints in Declare, from [19].
Figure 32 Explanation of the graphical notation in Declare, from [19].
Figure 33 Selection constraints in BPCN, from [13].
Figure 34 Scheduling constraints in BPCN, from [13].
Figure 35 The example process model which the participants had to analyse in both conditions in the pilot study.
Figure 36 The main process model which the participants had to analyse in the experimental condition in the pilot study.
Figure 37 The main process model which the participants had to analyse in the control condition in the pilot study.
Figure 38 The example process model which the participants had to analyse in both conditions in the main study.
Figure 39 The main process model which the participants had to analyse in the experimental condition in the main study.
Figure 40 The main process model which the participants had to analyse in the control condition in the main study.

List of Tables

Table 1 Overview of the pilot study findings for each research question.
Table 2 Modelling experience of the participants.
Table 3 The Declare constructs that the true/false interpretation questions were measuring and the number of correct answers for each question.
Table 4 The challenges mentioned (more than once) and the number of participants that mentioned a given point, per condition and the total sum.
Table 5 The points of improvement mentioned (more than once) and the number of participants that mentioned a given point, per condition and the total sum.
Table 6 Overview of the main study findings for each research question.
Table 7 Evaluation of Declare against the principles of effective visual notation from [17].
Table 8 Modelling experience of the participants in the pilot study.
Table 9 Interpretation scores in the pilot study: µ (SD).
Table 10 The Declare constructs that the different true/false questions were measuring in the pilot study.
Table 11 The number of correct answers per true/false question in the pilot study, per condition and the sum of both conditions.
Table 12 The cheat sheet usage in the pilot study, per condition: µ (SD).
Table 13 The correlations of the interpretation scores with the various experience measures in the pilot study: r.
Table 14 The average mental effort rating per question type in the pilot study, per condition: µ (SD).
Table 15 The correlations of the mental effort ratings with the various experience measures in the pilot study: r.
Table 16 The average user acceptance per construct in the pilot study, per condition and the total average: µ (SD).
Table 17 The average user acceptance per construct in the pilot study, for the wavy lines and the constraint annotations: µ (SD).
Table 18 All challenges mentioned in the pilot study and the number of participants that mentioned a given point, per condition and the total sum.
Table 19 All points of improvement mentioned in the pilot study and the number of participants that mentioned a given point, per condition and the total sum.
Table 20 Modelling experience of the participants in the main study.
Table 21 Interpretation scores in the main study: µ (SD).
Table 22 The number of correct answers per question in the main study, per condition and the sum of both conditions.
Table 23 The cheat sheet usage in the main study, per condition: µ (SD).
Table 24 The correlations of the interpretation scores with the various experience measures in the main study.
Table 25 The average mental effort rating per question type in the main study, per condition: µ (SD).
Table 26 The correlations of the mental effort ratings with the various experience measures in the main study.
Table 27 The average user acceptance per construct in the main study, per condition and the total average: µ (SD).
Table 28 The average user acceptance per construct in the main study, for the subprocesses and the constraint annotations: µ (SD).
Table 29 All challenges mentioned in the main study and the number of participants that mentioned a given point, per condition and the total sum.
Table 30 All points of improvement mentioned in the main study and the number of participants that mentioned a given point, per condition and the total sum.

Acronyms

BPCN Business Process Constraint Network
BPM Business Process Management
BPMN Business Process Model and Notation
DCR Dynamic Condition Response
LTL Linear Temporal Logic
PEOU Perceived Ease Of Use
PU Perceived Usefulness
RQ Research Question
TAM Technology Acceptance Model

1 Introduction

1.1 Business Process Models

For as long as 30,000 years, humans have used abstract models to make sense of reality [26]. A model, a simplified version of something real, can be used to explain phenomena, to make predictions and decisions, and to communicate. Using models has proved useful in the field of Business Process Management (BPM). BPM is defined as a collection of methods, techniques and tools to discover, analyse, redesign, execute and monitor business processes [6]. A business process is defined as a set of one or several related events, activities and decisions, involving a number of actors and objects, that helps realise a business goal and is thus of value to at least one customer [6, 23]. BPM is currently a strong worldwide asset: the global market value of BPM was 4.71 billion in 2014 and is expected to grow to 10.73 billion in 2019 [1]. It is therefore important to keep optimising the elements of such a strong asset, and especially the modelling of business processes, as this is the way that users make sense of the business processes.

At some point in BPM, the processes have to be mapped in one or several process models, which describe the business processes at a higher level of abstraction. Such models should include all the activities in the process, their attributes and the links between the activities [23]. These process models have several uses. First, using process models increases comprehension of the processes through greater simplicity and understandability. Second, process models can improve the communication between people involved in BPM projects. Finally, process models can be implemented in BPM systems to automate the execution of the process [6]. However, in this thesis we focus on process models which are not necessarily executable.

For business processes, a distinction can be made between two types: processes that are repetitive and can be fully prespecified, and processes that are knowledge-intensive and should thus be more loosely-specified [23]. Prespecified processes are usually modelled in an imperative way and loosely-specified processes in a declarative way. An imperative modelling language specifies how to do something, whereas a declarative modelling language states what should be done without saying how to do it [21].


Figure 1: The difference between prespecified and loosely-specified (constraint-based) process models, from [23].
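As a rough illustration of this distinction (our own sketch with hypothetical activity names, not taken from [23]), the same process can be captured imperatively as one fixed sequence, or declaratively as constraints that admit many sequences:

```python
# Hypothetical three-activity ordering process, sketched both ways.

def imperative_valid(trace):
    # Imperative: exactly one prespecified sequence is allowed.
    return trace == ["receive_order", "check_stock", "ship_order"]

def declarative_valid(trace):
    # Declarative: any sequence is allowed, as long as two constraints hold.
    # Existence: "ship_order" occurs at least once.
    if "ship_order" not in trace:
        return False
    # Precedence: every "ship_order" is preceded by some "check_stock".
    for i, activity in enumerate(trace):
        if activity == "ship_order" and "check_stock" not in trace[:i]:
            return False
    return True

# The declarative model accepts traces the imperative one rejects:
trace = ["check_stock", "receive_order", "ship_order", "ship_order"]
print(imperative_valid(trace), declarative_valid(trace))  # False True
```

The declarative variant deliberately says nothing about when `receive_order` happens or how often `ship_order` repeats; only the stated constraints restrict behaviour.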

1.2 Imperative and Declarative Modelling Languages

An imperative process model describes the exact steps that have to be followed in a process [20]. This is often visualised in terms of activities and the control-flow dependencies between the activities [9]. In imperative process models, a continuous forward trajectory is followed [7]. The most popular imperative modelling language currently in use is Business Process Model and Notation (BPMN), developed by the Object Management Group (OMG), which also developed the popular general-purpose modelling language Unified Modeling Language (UML). One of the main problems with imperative process models is that they tend to lead to over-specification. Furthermore, the static nature of the models limits flexibility [20, 22]. This is problematic because, in today's society, the success of organisations depends more and more on the ability to cope with a continuously changing environment, so flexibility is becoming increasingly important [23]. A declarative modelling approach has the potential to overcome these issues with imperative modelling.

Declarative modelling languages specify what should be done, without saying how it should be done. Users thus have more control and flexibility in the way they want to fulfil the process. Whereas the relationships between tasks in imperative modelling languages are control-flow dependencies, in declarative modelling languages the relations are defined by constraints, which represent the policies that have to be taken into account in the process and which can thus prevent undesired behaviour [10, 20]. The main problems with declarative modelling languages are the understandability of the models [9, 10] and proper tool support [24]. The lack of understandability has several reasons. First of all, the visualisation of the elements lacks clarity [24], and some elements have the same look as, but a different semantic meaning than, elements in imperative modelling languages, which can lead to confusion [10]. Furthermore, the number of constraints in a model has a negative impact on understandability [9]. Thus the current declarative approaches are less applicable to complex process models. The declarative modelling approaches are further explained in section 2.1.

1.3 Problem Statement and Thesis Outline

1.3.1 Problem Statement

As stated in the introduction, there is a need for flexible BPM. Imperative modelling languages, which are currently the most used for BPM models, lack flexibility and can lead to over-specification. Declarative modelling languages have the potential to replace imperative BPM languages, but they are still too underdeveloped and suffer from several problems as well. A solution for these problems might be to improve the understandability of declarative process models. Therefore, in this thesis we research how the understandability of declarative process models can be improved.

1.3.2 Thesis Outline

Figure 2 shows an overview of our thesis outline.

Figure 2: The chapters in this thesis.

Chapter 2: Declarative Modelling Languages

First, in this chapter we give a general explanation of declarative modelling languages, followed by an overview of the three declarative modelling languages that are currently most used. Next, we give an overview of the evaluations of the language Declare in previous research.

Chapter 3: Declarative Modelling Problems and Proposed Solution

In this chapter, we evaluate Declare against the principles of effective visual notation. Based on this evaluation and on theories from cognitive psychology, we propose a solution in the form of the Declarative Process Modelling Guidelines (DPMG).

Chapter 4: Methodology

To check for the effect of the DPMG, we conducted sessions with students in which they analysed Declare models. The setup of the study is thoroughly described in this chapter.

Chapter 5: Results

In this chapter, we describe the most interesting results from the study, combined with the implications of these results.

Chapter 6: Conclusion and Discussion

In this final chapter we review the results, followed by the limitations of these results. Based on these results and limitations, we suggest future research directions.

2 Declarative Modelling Languages

In this chapter, we describe the current declarative modelling languages. In section 2.1 we provide more basic information on declarative modelling languages. This is followed by a description of three declarative modelling languages: Declare in section 2.2, Dynamic Condition Response (DCR) Graphs in section 2.3 and Business Process Constraint Network (BPCN) in section 2.4. We finish this chapter in section 2.5 with an overview of Declare evaluations in earlier research.

2.1 Declarative Modelling Languages in General

As stated in the introduction, a declarative process model describes what should be done, without saying how to do it. This is done by defining the activities in a business process and specifying a set of constraints, business rules, event conditions and other (logical) expressions that define properties of and dependencies between activities [9]. In [9], the authors examined multiple declarative modelling languages and, based on that, identified core declarative modelling characteristics. The most interesting findings are that declarative modelling makes it easier to model business rules, but that declarative models quickly become less readable as their size grows.

The exact constraints differ among declarative modelling languages. Three declarative modelling languages are Declare [10, 20, 19, 24], DCR Graphs [12, 24] and BPCN [13]. These languages are further explained in the next sections.

2.2 Declare

Declare (formerly ConDec, DecSerFlow) has been the subject of several studies. This language makes use of Linear Temporal Logic (LTL) for the constraints, by using logical operators and temporal operators as a base for the constraints [20, 10]. Figure 3 shows a graphical overview of some basic Declare constraints.

Figure 3: Basic Declare elements, from [23].

The Declare constraints can be divided into four categories. The number of executions of an activity in a given process instance is stated through existence constraints, the order of activities through relation constraints, negative relations between activities through negation constraints, and the choice between different activities through choice constraints [20]. Figure 4 shows an overview of the semantic meaning of some basic constraints. Appendix B contains all 38 Declare constraints.


Figure 4: Semantic meaning of basic Declare constraints, from [10].
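To make the four categories concrete, one constraint from each can be written as a predicate over a finished trace. This is our own minimal sketch of the intended meaning, not the official Declare/LTL machinery, and the activity names are hypothetical:

```python
# One example constraint per Declare category, as trace predicates.
# A trace is a list of activity names.

def existence(a):            # existence: a occurs at least once
    return lambda trace: a in trace

def response(a, b):          # relation: every a is eventually followed by a b
    return lambda trace: all(b in trace[i + 1:]
                             for i, x in enumerate(trace) if x == a)

def not_coexistence(a, b):   # negation: a and b never occur in the same trace
    return lambda trace: not (a in trace and b in trace)

def exclusive_choice(a, b):  # choice: exactly one of a and b occurs
    return lambda trace: (a in trace) != (b in trace)

# A model is simply a conjunction of constraints.
model = [existence("A"), response("A", "B"), exclusive_choice("C", "D")]

def satisfies(trace):
    return all(constraint(trace) for constraint in model)
```

For instance, `satisfies(["A", "B", "C"])` holds, while `satisfies(["A", "C"])` fails because the `response("A", "B")` constraint is violated.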

2.3 DCR Graphs

DCR Graphs was created as a reaction to ConDec/Declare [12]. The authors tried to improve the visualisation possibilities and the understanding of the end user. This is done by reducing the number of constraints to a set of four, and by expressing the semantics directly instead of indirectly through LTL. Figure 5 shows the four DCR constraints (which are all the constraints that DCR Graphs consists of) and an example process model. This example shows a process in which a prescription of medicine should be signed. If it is not trusted, it should be reworked until it is approved, after which the medicine is given.


Figure 5: DCR Graphs: Constraints and an example model, from [12].
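The execution semantics behind such a model can be sketched as a small interpreter. The following is our own simplified reading of the DCR marking rules from [12] (executed, included and pending event sets), with a hypothetical model loosely based on the medicine example; it is not the authors' implementation:

```python
class DCRGraph:
    """Simplified DCR Graphs marking semantics with four relation types."""

    def __init__(self, events, conditions=(), responses=(), includes=(), excludes=()):
        self.conditions = list(conditions)  # (a, b): b requires a executed (or a excluded)
        self.responses = list(responses)    # (a, b): executing a makes b pending
        self.includes = list(includes)      # (a, b): executing a re-includes b
        self.excludes = list(excludes)      # (a, b): executing a excludes b
        self.executed, self.pending = set(), set()
        self.included = set(events)         # all events start included

    def enabled(self, e):
        if e not in self.included:
            return False
        return all(a in self.executed or a not in self.included
                   for a, b in self.conditions if b == e)

    def execute(self, e):
        assert self.enabled(e), f"{e} is not enabled"
        self.executed.add(e)
        self.pending.discard(e)
        for a, b in self.responses:
            if a == e:
                self.pending.add(b)
        for a, b in self.includes:
            if a == e:
                self.included.add(b)
        for a, b in self.excludes:
            if a == e:
                self.included.discard(b)

    def accepting(self):
        # A run may stop only when no included event is still pending.
        return not (self.pending & self.included)

# Hypothetical fragment: medicine may only be given after the prescription
# is signed, and prescribing obliges an eventual signature.
g = DCRGraph(["prescribe", "sign", "give"],
             conditions=[("sign", "give")],
             responses=[("prescribe", "sign")])
g.execute("prescribe")   # "sign" is now pending; "give" is not yet enabled
g.execute("sign")
g.execute("give")
```

After executing only `prescribe`, `accepting()` is `False` because `sign` is pending; once `sign` and `give` have been executed, the run is accepting.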

2.4 BPCN

In 2009, another taxonomy of constraints was developed: BPCN [13]. The constraints in this language are divided into two categories: selection constraints, which specify the restrictions and inter-dependencies among tasks, and scheduling constraints, which specify how tasks are performed in terms of order and temporal dependencies. Figure 6 shows some basic BPCN constraints. Appendix B contains all 17 BPCN constraints.

Figure 6: Basic BPCN constraints, from [13].

2.5 Evaluation of Declare in Earlier Research

Of the three discussed declarative modelling languages, Declare has received the most attention and evaluation in earlier research. Therefore, we will mainly focus on Declare in the remainder of this thesis.

Declare can be most useful when high flexibility is required [19]. However, in the research that has addressed Declare, several possible issues have been identified, with tool support and understandability as the main problems. Where BPMN is supported by a large number of tools [22], Declare lacks this kind of tool support [24]. In [9], the authors show that declarative process models become less readable as they become larger. This implies that declarative process models are not suitable for large, complex processes, which diminishes the chance that declarative process modelling becomes widely adopted. Therefore, we will look at why Declare models have understandability issues and how these issues can be overcome. We did not take tool support into account in our research, as the scope would otherwise have become too broad.

3 Declarative Modelling Problems and Proposed Solution

In this chapter we present a more extensive explanation of the problems with Declare, the research direction to tackle these problems and our proposed solution. We start in section 3.1 with an explanation of how we evaluated Declare against the principles of effective visual notation. Based on this evaluation and results from earlier research, in section 3.2 we define the research direction. In section 3.3 we explain our proposed solution for the problems with Declare: the Declarative Process Modelling Guidelines (DPMG).

3.1 Evaluating Declare against the Principles of Effective Visual Notation

In [10], the authors identified an issue that is particularly problematic in Declare: aspects that are similar to imperative modelling languages at the graphical level, but differ in semantics. This matches one of the principles developed for effective visual notation [17]: the authors developed nine principles based on ontological and cognitive research, with the goal of making visual notations as comprehensible as possible. For example, for the specific problem stated above, the principle of semiotic clarity can be applied, which states that there should be a 1:1 correspondence between semantic constructs and graphical symbols. Furthermore, modelling experts find the graphical notations in Declare unintuitive [24]. Therefore, we critically evaluated Declare against all principles of effective visual notation [17]. The goal of this evaluation was to find possible violations of the principles, which could impede the understandability of Declare. Appendix C contains the thorough evaluation of Declare against the principles of effective visual notation.


Following the evaluation, we identified the following visual notation issues with Declare:

• Declare lacks perceptual discriminability and visual expressiveness of different constraints, for example in positioning, size and colour.

• Declare has no set spatial arrangements of visual elements which could make models more semantically transparent.

• Declare has no set way to deal with the complexity of process models.

• Declare could make more use of text to complement the graphical notation.

• Declare has too many different constraints (38).

3.2 Research Direction

We divided the identified visual notation issues of Declare into two categories: the visualisation of the constraints and the structure of the declarative process models. Due to the scope of our research, we could only evaluate a solution to one of the two problem categories. Therefore, we mainly focused on finding a solution for the structure of Declare models.

A main understandability issue of Declare comes from the combination of constraints. In the first place, a large number of constraints can lead to a lower quality of declarative models [24]. Furthermore, when certain constraints are combined in a Declare model, new implicit constraints can arise from the explicit ones. Such an implicit constraint is called a hidden dependency. When these hidden dependencies have to be kept in mind, the capacity of the working memory can be exceeded, which leaves model understanding prone to errors [7, 10, 29]. In our solution, we therefore included diminishing the negative effects caused by hidden dependencies, for which we specifically looked at making optimal use of human cognition. Furthermore, in the visual notation evaluation of Declare, we found points of interest regarding the structure of the process model: the positioning of objects and semantically transparent relationships.
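A hidden dependency can be made visible by brute-force enumeration. In the following sketch (our own illustration; the constraint names follow Declare), the explicit constraints are response(A, B) and not coexistence(B, C). Neither constraint mentions A and C together, yet no valid trace can contain both: any A forces a later B, and B rules out C:

```python
from itertools import product

def response_ok(trace):         # every A is eventually followed by a B
    return all("B" in trace[i + 1:] for i, x in enumerate(trace) if x == "A")

def not_coexistence_ok(trace):  # B and C never occur in the same trace
    return not ("B" in trace and "C" in trace)

def valid_traces(max_len=4, alphabet="ABC"):
    # Enumerate every trace up to max_len that satisfies both constraints.
    for length in range(max_len + 1):
        for trace in product(alphabet, repeat=length):
            if response_ok(trace) and not_coexistence_ok(trace):
                yield trace

# The hidden dependency: A and C never co-occur in any valid trace,
# even though no single explicit constraint forbids it.
hidden = all(not ("A" in t and "C" in t) for t in valid_traces())
print(hidden)  # True
```

A reader of the model has to derive this consequence mentally; the enumeration merely makes explicit the reasoning that otherwise burdens working memory.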

Also, in the past it has proven useful to create modelling guidelines for imperative process models. A generic approach for increasing the understandability of imperative process models is the set of seven guidelines for process modelling (7PMG) [15]. The authors developed these guidelines to help modellers create processes that are easy to analyse and understand. Given the high number of citations of their research, this has been a well-followed approach. However, such guidelines do not yet exist specifically for declarative process models. Our research direction will therefore be the development of a set of guidelines for declarative process modelling: the Declarative Process Modelling Guidelines (DPMG).

3.3 Creating the Declarative Process Modelling Guidelines

In this section, we give an overview of the different DPMG and their underpinning. The theoretical and empirical underpinning is derived from earlier research on declarative process modelling, on the understandability of declarative process models and on the optimal use of human cognition.

The human working memory is an important part of human cognition, as it is used for processing information in the short term and keeping this information in mind. A long-standing consensus in cognitive psychology research is that the working memory can generally hold seven pieces of information, plus or minus two [16]. The working memory can be used more efficiently through different strategies [16, 17, 25, 29].

One cognitive strategy is computational offloading [25, 29]: a representation should reduce the amount of cognitive effort needed to comprehend a process model. In process models, this particularly expresses itself in the identification of all possible traces that can be followed. Where these traces are explicit in imperative process models due to their sequential nature, in declarative process models they have a more implicit nature. Therefore, more cognitive effort is required to comprehend sequential information in declarative process models [7]. This also manifests itself in the facts that a large number of constraints can diminish the understandability of process models [24] and that hidden dependencies following from these constraints make the understandability even worse [7, 10].

A solution that uses computational offloading is the ordering of activities. This has been suggested in [10], where the authors evaluated the understandability of Declare models. They identified different ways in which the understandability could be improved. Among these was a change in layout, by sorting the tasks in different orders. They found that readers generally have a sequential style of reading a process model, which means that readers tend to look for the start event in the upper-left corner and expect to find the last event in the bottom-right corner. We identified the first guideline based on this reading style:


DPMG1: If there is a start event, it should always be placed in the top left corner. If there is an end event, it should always be placed at the bottom, preferably in the right corner.

Figure 7: Process model example with and without DPMG1.

The next step for the reader is to try to find the activities after the start event and/or before the end event, with an emphasis on the order of these activities. In [17], the authors identified that "certain spatial arrangements of visual elements predispose people to a particular interpretation of the relationship among them". For example, a sequence/causality is generally depicted through a preceding activity on the left with an arrow pointing to a following activity on the right, as depicted in figure 8. This is reflected in the most widely used imperative modelling language, BPMN, where sequential activities are generally placed from left to right and non-sequential activities (e.g. in an XOR split) underneath each other.

Figure 8: The spatial arrangement of activities to which people are predisposed to interpret a causality relation, from [17].


Figure 9: How external memory works with process models, from [29].

Another cognitive strategy is using external memory: any information that is stored outside the human cognitive system [25, 29]. Information that is placed in this external memory is referred to as a cognitive trace. The usefulness of external memory expresses itself in process models when trying to find the path of a trace of activities and being able to see the already passed activities in this trace. Using this, it is not required to store all the previous steps of a given trace at a given point in the diagram in the working memory [29]. An example of how this works is shown in figure 9, where the thick line indicates the trace of activities that has already been followed. Guidelines which improve the positioning of activities can make such a cognitive trace easier to follow. Based on the predisposed spatial arrangements and the use of external memory, we identified the next three guidelines:


DPMG2: The first activity of a sequence should be placed at the leftmost position.

DPMG3: Activities which sequentially follow another activity should be placed to the right of the preceding activity.


DPMG4: Non-sequentially related activities should be placed underneath each other.

Figure 11: Process model example with and without DPMG4.

Finally, one more cognitive strategy is chunking [29]. When humans memorise information, it is stored in 'chunks' in the long-term memory. A chunk in turn takes up one piece of the working memory, instead of the several pieces that would have been taken up if the information were stored separately. This can be useful, as the working memory generally can only contain around seven pieces of information [16]. When modelling processes, it can therefore prove useful to show elements in a way that makes them easy to remember in chunks. The concept of chunking is depicted in figure 12.


As stated earlier, declarative process models tend to become less understandable when they become larger [9]. Chunking might provide a (partial) solution for this by placing elements in chunks where possible, i.e. through modularisation: dividing the model into smaller parts. Empirical research has shown that modularisation of diagrams can improve understanding [17].

This can be expressed by making the distinction between unrelated elements more clear and thus making chunks of the related elements, for example by dividing unrelated activities and placing them in separate blocks or rectangles. An important downside to this, though, might be confusion with the pool/lane construct in the popular imperative modelling language BPMN [22]; as earlier research has shown, confusion can arise from elements that are graphically the same in different languages but differ semantically [10]. The first solution that we suggested was dividing the unrelated elements through wavy lines. Regular lines could also cause confusion with the pool/lane construct, while a dashed line is already used for messages in BPMN and for notes in BPMN and UML [8, 18]. A wavy line does not have a similar construct at the graphical level in other popular business process modelling languages [6, 8, 18, 27].

However, the results of our pilot study showed that the wavy lines by themselves were not enough. Therefore, for making the distinction between unrelated process parts, we chose to extend the wavy lines with decomposition of the model into subprocesses. The authors of [10] showed that hierarchy can lead to a higher understanding of Declare models. Hierarchy can also enhance the cognitive strategy of chunking [29]. Due to a lack of research on this topic, it is not clear where the threshold should lie for the decomposition of the models. We chose to decompose the main model if there are more than 10 activities: partly following the rule that the working memory can contain around seven elements at the same time [16], but also partly following practical usability. A process model can become unreadable if the decomposition threshold is too low, due to a chaos of too many subprocesses. Another point that we added to the guideline is the use of labels. With labels at both the main process and the subprocess, both the context of the processes and the distinction between the main process and subprocess(es) are made more clear.

Based on the wavy lines, the decomposition and the process labels, we identified the following guideline:

DPMG5: If a model consists of more than 10 activities, it should be decomposed into one main process and one or more subprocesses. The subprocesses should be depicted inside wavy lines. Labels should be added to the main process and the subprocess(es).


Figure 13: Process model example with and without DPMG5.

In our evaluation of Declare against the principles of effective visual notation [17], we identified that the different constraints in Declare use a low number of visual variables and may thus be hard to distinguish. To resolve this, more textual elements can be used. According to dual channel theory, the human cognition uses different systems for processing textual and visual information, so using both can reduce the cognitive load [14, 17]. One end of the textual-graphical notation continuum is the use of fully textual process models. However, in [11] it was shown that a combination of graphical and textual elements is more understandable than only using textual elements. A less thorough solution is to oblige the use of annotations for the constraints. In some Declare research these annotations have already been used [9, 19, 24], but it is not the standard, as it has not been used in other Declare research [10, 21, 28]. Also, researchers have shown that it is more effective to show annotations on the diagram itself than separately. Furthermore, earlier research shows that confusion can arise if graphical elements from different languages are graphically on the same level but have different semantic meanings. The arrows in Declare contain a high graphical overlap with the flow construct in BPMN, so the constraint annotations can be useful to distinguish the Declare constraints with arrows from the BPMN flow construct. Based on the dual channel theory and the results in earlier research on the use of annotations, we identified the last guideline:

DPMG6: Textual annotations should be added to the constraints.


4

METHODOLOGY

In this chapter we present the setup we used to test the effect of the DPMG. First, in section 4.1 we define the main research question. Next, in section 4.2 we define the independent variables that were used. This is followed by section 4.3, where we break down the main research question into smaller research questions and explain the dependent variables with which we measured them. We describe the subjects, objects and procedure of our research in section 4.4, followed by a description of the execution of our research, the data validation and the data analysis in section 4.5. Finally, in section 4.6 we present the results from the pilot study.

4.1 Main Research Question

Before performing the main study, we conducted a pilot study with BPM modelling experts. The reason for this was to get an initial impression of how Declare, with or without DPMG, is received by people with a high knowledge of BPM. To be eligible for this pilot study, the experts had to have significant experience with BPM models. The results from the pilot study have been used to improve the main study.

We performed the main study with students, as our goal with DPMG was to make Declare understandable for a broad audience. We used the following main research question (RQ):

Main RQ: What is the effect of the Declarative Process Modelling Guidelines on the under-standability of Declare models?

This question was broken down into various smaller research questions in four categories: model interpretation, mental effort, user acceptance and challenges/points of improvement.

4.2 Independent Variables

In our research, two conditions were used, based on one independent variable: the application of DPMG to Declare models. This independent variable was applied for all research questions.


4.3 Research Questions and Dependent Variables

4.3.1 Model Interpretation

RQ1.1: What is the effect of the DPMG on the interpretation correctness when analysing Declare models?

First, we checked the interpretation of the Declare models by measuring the correctness of the answers to interpretation questions. These questions were divided into two categories: first, descriptive open questions about the models, answered by thinking out loud, and second, true/false questions about the correctness of given (sub)traces and statements about the process models. Next to the options 'True' and 'False', participants could select 'Don't know'. As for most items included in our research, this was adopted from [10]. Their argumentation for including this 'Don't know' option was to prevent a forced guess by the subjects. After our pilot study, we replaced or removed some true/false questions due to ceiling effects.

Use of Traces to Measure Correctness

We measured the interpretation of declarative process models through activity traces: a completed process instance [10, 23]. These traces can be valid (all constraints are satisfied) or invalid (one or more constraints are violated). A sub-trace, a sub-sequence of a trace, can also be temporarily violated (not all constraints are satisfied, but this can still become valid by adding activities before or after the trace). The minimal trace of a process model is the valid trace that has the lowest number of activities.
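To make the distinction between valid, invalid and temporarily violated (sub-)traces concrete, the semantics above can be sketched in code. This is an illustrative sketch of our own, not the study's tooling; the constraint functions and the example constraints are hypothetical, with names following common Declare terminology.

```python
# Illustrative sketch of checking a trace against a few Declare-style
# constraints. The constraints themselves are hypothetical examples.

def existence(trace, a):
    """existence(a): activity a must occur at least once in the trace."""
    return a in trace

def precedence(trace, a, b):
    """precedence(a, b): b may only occur after a has occurred."""
    seen_a = False
    for activity in trace:
        if activity == a:
            seen_a = True
        elif activity == b and not seen_a:
            return False
    return True

def is_valid(trace, constraints):
    """A trace is valid when all constraints are satisfied."""
    return all(check(trace) for check in constraints)

constraints = [
    lambda t: existence(t, "C"),
    lambda t: precedence(t, "A", "B"),
]

print(is_valid(["A", "B", "C"], constraints))  # True: a valid trace
print(is_valid(["B", "C"], constraints))       # False: B occurs before any A
# The sub-trace below is only temporarily violated: existence of C is not yet
# satisfied, but appending C would still make it a valid trace.
print(is_valid(["A", "B"], constraints))       # False
```

Note that a simple check like this cannot distinguish an invalid trace from a temporarily violated sub-trace; that requires reasoning about possible future extensions of the trace.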

RQ1.2: Which Declare constructs are responsible for the most interpretation errors?

Second, we examined which types of Declare constraints were specifically causing trouble: the existence, negation or relation constraints. The true/false interpretation questions relate to one or more of these different constraint types. Based on the correctness of these questions, we determined what caused the most errors. For answering this question, we also took 'Hidden Dependencies' and 'Unrelated Activities' into account as Declare constructs. Hidden dependencies proved to be a source of model interpretation problems in previous research [10], and unrelated activities are one of the major differences between declarative and imperative process models.

RQ1.3: What is the effect of the DPMG on the added value of a cheat sheet when analysing Declare models?

Because the goal of DPMG is to make declarative process models more understandable, these models should require less theoretic help while analysing a process model. We operationalised this by asking participants how often they had to check the theoretic background of Declare on a cheat sheet. After analysing the process models, the participants self-rated how often they used this cheat sheet, on a seven-point Likert scale ranging from 'All of the Time' to 'Never'. We deliberately chose this self-rating over tallying the usage ourselves.


RQ1.4: Is there a relation between the model interpretation and the experience with imperative models, experience with declarative models and familiarity with Declare?

We also checked if previous experience and familiarity with (declarative) process models play a role. In the introduction questions, next to more general questions about demographics, the participants entered their years of process modelling experience and their recent exposure to both imperative and declarative models. Also, participants rated their familiarity with Declare based on three seven-point Likert scales, ranging from 'Strongly Agree' to 'Strongly Disagree'. We checked these scores for relations with the total interpretation scores.

4.3.2 Mental Effort

RQ2.1: What is the effect of the DPMG on the mental effort when analysing Declare models?

As stated in the theoretical underpinning of the DPMG, cognitive elements play a large role in the understanding of process models. As adapted from earlier research [10], we measured the cognitive load through the self-rated mental effort of participants, measured in two ways. First, after a question, participants self-assessed the mental effort required to answer the question on a seven-point Likert scale, ranging from 'Extremely high mental effort' to 'Extremely low mental effort'. Second, at the descriptive model interpretation questions, participants gave a spoken explanation of their mental effort rating. We used these spoken answers in answering RQ4.1.

RQ2.2: Is there a relation between the mental effort and the experience with imperative models, experience with declarative models and familiarity with Declare?

For the mental effort we also analysed if previous experience and familiarity with (declarative) process models play a role. Therefore, we checked the average score of the self-rated mental effort questions for relations with the modelling experience and Declare familiarity questions.

4.3.3 User Acceptance

RQ3.1: What is the effect of the DPMG on the User Acceptance of Declare models?

Though Declare can be sound on paper, it will only be a success when people actually use it. Well-established predictors for the use of a (new) technology come from the Technology Acceptance Model (TAM) [5]. The foundation of the TAM derives from the theory of planned behaviour, which states that given behaviour follows from the intention to perform that behaviour [4]. Specific to the TAM, this implies that the usage of a given technology follows from the intention to use that technology. The intention to use a given technology can in turn be predicted by the perceived usefulness (PU) and the perceived ease of use (PEOU) of that technology.

Thus, we wanted to see what the effects of the DPMG are on the user acceptance of Declare. We measured user acceptance through the two main predictors of user acceptance, PU and PEOU. Participants rated Declare on these predictors on a seven-point Likert scale, ranging from 'Strongly Agree' to 'Strongly Disagree'. Furthermore, participants rated the easiness of reading the constraints, on the same type of scale as the other user acceptance questions. Though the original scales for measuring PU and PEOU contained multiple items [5], we measured both constructs using one question each. As we had to cope with a limited time-frame for each participant, we were restricted in the total number of questions that we could ask.

RQ3.2: How high is the User Acceptance of specific DPMG constructs?

We also looked at the intention of using specific DPMG constructs. However, we only asked questions about the two visually most distinctive constructs: the subprocesses and the constraint annotations. As the other guidelines all relate to the ordering/positioning of activities, they are less visually distinctive, making it less likely that questions about them would provide useful answers. Thus, in the experimental condition, participants rated the PU and PEOU of the subprocesses and constraint annotations.

4.3.4 Challenges and Points of Improvement

Finally, we collected the thoughts of the participants about Declare. These thoughts have been divided into two categories: the challenges that participants encountered when analysing the process models and the suggestions that participants gave for the improvement of Declare.

RQ4.1: What are challenges when trying to understand Declare models?

For the mental effort rating of the descriptive interpretation questions, participants gave a spoken explanation for their answers. The comments from the different questions have been bundled into a total set of comments about which process parts were challenging.

RQ4.2: What are suggestions for improvement of understanding Declare models?

We asked two open questions about suggestions for improvement, which were answered by thinking out loud: one specific to the process model and one for Declare in general. The answers to these two questions were bundled into a total set of comments about points of improvement.


4.4 Defining and Planning of the Studies

4.4.1 Subjects

For the pilot study, we selected experts on business process modelling as the target group. These experts have multiple years of experience in business process modelling and have read and created a considerable number of process models. We accepted experts who only have experience with imperative process models, as experts with significant experience in declarative process modelling are rare and thus hard to find.

For the main study, our target group consisted of students with some experience in BPM. We estimated that participation in the research would be too complicated without basic knowledge of BPM. The participating students have zero or little experience with Declare, as declarative process modelling languages are not (yet) included in the core teaching material of BPM courses.

4.4.2 Objects

The complexity of Declare was initially brought down from 38 constraints to 14 constraints, following our evaluation of Declare against the principles of effective visual notation [17], in which we identified that Declare contained too many constraint types. For comparison, the most popular imperative modelling language, BPMN, also suffers from a superfluity of elements, as presented in [30, 22]. These studies showed that out of the 52 elements presented in BPMN 1.1, only 12 elements were used in more than 25% of 120 checked process models and only five elements were used in more than 50% of these process models.

However, we concluded from the pilot study that the amount of 14 constraints was still too high. The pilot study participants rated the PEOU of Declare low and commented that the language was difficult due to the high number of different constraint types. Furthermore, we expected that non-experts on BPM modelling would have more trouble understanding the language. Therefore, in the main study we reduced the number of different constraint types from 14 to six. We expected the constraints that were kept in this core set to make the most sense and to still make it possible to transform (most of) the business rules into a declarative process model that made practical sense.

In the research sessions, we let the participants analyse two process models. First, we gave a small process model to familiarise the participants with the Declare language. This model was equal in both conditions. Second, we gave a larger process model, which differed per condition. We adapted the process models from [10] and modelled these through CPN tools [2]. For the experimental condition, we applied changes to the model design following the DPMG. Appendix D contains the process models used in our research.

For the main study, the example process consisted of four activities, two precedence constraints, one direct succession constraint and one existence constraint. In the control condition, the main process model consisted of 24 activities, 13 precedence constraints, two existence constraints, four chained succession constraints, one co-existence constraint and two not-co-existence constraints. In the experimental condition, the main process model contained seven activities, of which three were collapsed subprocesses, two chained succession constraints and three existence constraints. The three subprocesses contained a total of 20 activities, 13 precedence constraints, two chained succession constraints, one co-existence constraint and two not-co-existence constraints.

4.4.3 Procedure

We let the participants work with sheets of paper and an online questionnaire. The online questionnaire showed the steps to guide the participants through the research and the different questions that the participants had to answer. The papers consisted of a small booklet containing the process models and a paper (cheat sheet) explaining the concepts of Declare. We printed the papers for the different conditions, put these in blank envelopes and shuffled them, to provide random allocation of the participants among the conditions. It was not possible to provide double blindness, as we had to answer questions of the participants about the concepts of Declare. However, in both conditions we only answered questions from participants about the concepts of Declare, not about the answers to the online questionnaire. Figure 15 shows a basic outline of our research procedure.

Figure 15: The chronological steps in our research procedure.
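The random allocation through shuffled envelopes can be sketched as follows. The even 9/9 split below is an assumption for illustration only; the realised group sizes in the main study were 10 (experimental) and 8 (control).

```python
import random

# Sketch of random allocation through shuffled, blank envelopes.
# The 9/9 split is a hypothetical illustration.
envelopes = ["experimental"] * 9 + ["control"] * 9
random.shuffle(envelopes)  # blank envelopes hide the condition from everyone

# Each arriving participant simply receives the next envelope from the pile.
first_participant_condition = envelopes.pop()
print(first_participant_condition)
```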

We started the sessions by explaining the basic structure of the research, followed by asking the participant to open the sealed envelope and to start reading the handout. We then checked whether the concepts of Declare were actually clear to the participant. When these questions were asked and possible misconceptions were corrected, the participant continued with the online questionnaire, in which he/she first filled in the demographic and experience questions.

Next, the participant worked on analysing the example process. Descriptive questions included giving a description of the process model to ensure proper reading of the model, followed by questions about the minimal trace, a valid trace and an invalid trace. One true/false question relating to a sub-trace was asked, after which the participant received feedback to emphasise the difference between a sub-trace and a whole trace.

Questions about analysing the main process model consisted first of descriptive interpretation questions, regarding a description of the process model, the minimal trace, two valid traces and two invalid traces. These descriptive interpretation questions were all followed by a Likert-scale rating of the required mental effort and a spoken explanation of this mental effort rating. Next, true/false interpretation questions were asked, followed by only a Likert-scale rating of the required mental effort. After the analysing questions, closing questions were asked. These contained Likert-scale ratings of the user acceptance and open questions about the challenging elements of the models and the points of improvement for Declare. After the session, we explained the goal of the research to the participant.

4.5 Performing of the Studies

4.5.1 Execution

The location of the research sessions was highly variable, as the only requirement for the experiment location was a quiet area, which made it possible to perform the sessions at different locations. The pilot study sessions were performed over the timeframe of one week and took place in Amsterdam (2), Eindhoven (2), Deventer (2), Arnhem (1) and Maastricht (1). The main study sessions, 18 in total, were executed over the course of three weeks. Of these sessions, 17 took place in Amsterdam and one in Utrecht.

4.5.2 Data Validation

Various steps were taken to ensure the quality of the data. First, we held one test session with a preliminary version. In this session we found that the participant tended to skip thinking-out-loud questions, which led to an incomplete data set. As a consequence, we included an extra step for the participants: after each spoken answer, they had to type 'ok' in the online questionnaire to confirm that the spoken answer was finished.

To ensure that the participants were using the process models belonging to their condition and that they were using the right process model at the right questions, we added code words to the process model booklet. These code words were stated on the introduction page, the example process page and the main process page. Before continuing in the online questionnaire, participants had to enter the code word from the relevant page. Neutral code words were used, adopted from the NATO phonetic alphabet [3]: the experimental condition showed the words 'Alpha', 'Bravo' and 'Charlie', while the control condition showed the words 'Delta', 'Echo' and 'Foxtrot'. When we checked the data after all sessions had taken place, we found the code words correctly entered at all checkpoints.

To ensure that the participants in the pilot study were actually experts on BPM modelling, we asked questions about their current status and their experience. All pilot study participants have significant experience, either academic or professional, in the number of studied and/or created models and in years of experience. We excluded the data of one participant in the pilot study. During this session, the online questionnaire suffered from a technical error. The delay caused by this error was of such a length that it would lead to an invalid comparison with the other data. The final pilot study data set consists of seven participants, of which four are in the experimental condition and three are in the control condition.

In the main study, we added a control question for the students to ask if they had actually followed a BPM course. All the participants have followed such a course. The main study data set consists of 18 participants, of which 10 are in the experimental condition and eight are in the control condition.

4.5.3 Data Analysis

To analyse the data, we first transcribed all the thinking-out-loud answers. We then coded the descriptive model interpretation answers as correct/incorrect, to make further analysis possible. When participants were asked for multiple traces in one question, each trace was scored separately for correctness. For the closing opinion questions, e.g. regarding challenging model parts, we collected and summed up the statements of the participants. Next, we converted the true/false/don't know answers to correct/incorrect, where 'Don't know' was also converted to 'incorrect'. We analysed the descriptive scores of this data in SPSS, transformed these into charts and performed quantitative tests where this might prove useful.
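The coding step for the true/false/don't know answers can be sketched as below. This is an assumed illustration, not the actual analysis script; the answer key and the given answers are made up.

```python
# Assumed sketch of coding true/false/don't know answers against an answer
# key: 'Don't know' counts as incorrect, as described above.

def score(answers, key):
    """Return 1 for each answer matching the key, 0 otherwise."""
    return [1 if given == expected else 0 for given, expected in zip(answers, key)]

key     = ["true", "false", "true", "true", "false", "true", "false"]
answers = ["true", "false", "dont_know", "true", "true", "true", "false"]

per_question = score(answers, key)  # 'dont_know' scores 0, like a wrong answer
total = sum(per_question)
print(per_question, total)
```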

4.6 Findings of the Pilot Study

Table 1 contains an overview of the results for the research questions in the pilot study. More extensive tables with the results from the pilot study can be found in Appendix E. Following the pilot study results, the preliminary conclusion was that the DPMG show interesting results for the understandability of Declare models. Promising are the high PU of Declare and the suggestions of participants that align with the DPMG: the use of hierarchy/subprocesses and grouping, the reduction of different constraint types and improving the visual richness. Points of improvement that participants have given relate to improving the practical sense, making the activity labels more clear and grouping the activities more. We took several of these points of improvement into account when we designed the main study.

RQ Result

1.1 High overall interpretation scores. No effect of the DPMG.

1.2 Hidden dependencies are responsible for most true/false model interpretation errors.

1.3 Moderately high usage of cheat sheet. No significant effect of DPMG.

1.4 High positive relation between interpretation scores and the amount of read and created Declare models.

2.1 Medium overall mental effort ratings. No significant effect of DPMG.

2.2 High positive relation between required mental effort and work days of training in the past year. High negative relation between required mental effort and years of experience.

3.1 High PU of Declare, low PEOU of Declare, medium PEOU of reading of constraints.

3.2 Moderately high PU and PEOU for specific DPMG constructs.

4.1 Most challenging for participants have been the large process size/lack of subprocesses, unclear activity labels/lack of practical sense and the high amount of different constraint types.

4.2 Most given suggestions for improvement are the reduction of objects/use of subprocesses, improvement of visual richness and the grouping of activities.

Table 1: Overview of the pilot study findings for each research question.


5

RESULTS

In this chapter, we discuss the results for the various research questions: demographics and experience in section 5.1, model interpretation in section 5.2, mental effort in section 5.3, user acceptance in section 5.4 and challenges and points of improvement in section 5.5. Appendix E contains more extensive tables of the results.

5.1 Demographics and Experience

Of the participants, 13 are male and 5 are female. All participants are in the age range of 18-29. The participants are all students and have all followed a course in BPM, which were requirements for the data validation. Table 2 contains the experience of the participants, showing only the overall numbers. There are some differences between the conditions, but these differences are not significant or only significant due to outliers.

It is surprising that one or more participants answered 'zero' for the imperative models experience, as this is surely taught in courses on BPM. An explanation might be that participants were unaware of which modelling languages can be categorised as imperative modelling languages. Overall, the experience is lower than in the pilot study.

Measured Construct Min Max Mean (SD)

Years of experience 1 6 2.28 (1.84)

Imperative Models read last year 0 200 37.22 (46.72)

Imperative Models created last year 0 84 16.11 (24.57)

Declarative Models read last year 0 30 1.78 (7.06)

Declarative Models created last year 0 20 1.17 (4.71)

Work days of training last year 0 60 7.89 (13.96)

Work days of self-education last year 0 100 13.86 (23.95)


5.2 Model Interpretation

RQ1.1: What is the effect of the DPMG on the interpretation correctness when analysing Declare models?

We summed up the descriptive and the true/false interpretation scores and calculated the mean scores for each condition, as shown in figure 16. For the descriptive questions, a maximum score of five points is possible, while this is seven points for the true/false questions. The error bars in figure 16 suggest a difference between the conditions for the descriptive questions, which made us perform a two-sided t-test.

This test shows that the difference for the descriptive questions is significant (p ≈ 0.01).

Thus we conclude that the DPMG have a positive effect on the correctness of descriptive interpretation questions. The true/false interpretation scores are generally high, but show no difference among the conditions.

Figure 16: The mean scores and 95% confidence intervals for the different types of interpretation questions, specified per condition.
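The analysis above was performed in SPSS. As an illustration only, a Welch-style t statistic for two independent samples could be computed as below; the scores are hypothetical and do not reproduce the reported p ≈ 0.01.

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2  # sample variances
    se2 = va / na + vb / nb                # squared standard error of the difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical descriptive-question scores (max 5) per condition;
# made-up numbers, not the study's data.
experimental = [5, 4, 5, 5, 4, 5, 4, 5, 5, 4]  # n = 10
control = [3, 4, 3, 2, 4, 3, 3, 4]             # n = 8

t, df = welch_t(experimental, control)
print(round(t, 2), round(df, 1))
```

The two-sided p-value would then be obtained from the t-distribution with df degrees of freedom, which a statistics package handles directly.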


RQ1.2: Which Declare constructs are responsible for the most interpretation errors?

We identified which true/false interpretation questions measured which Declare construct and counted the number of participants that answered a given true/false interpretation question correctly. Table 3 contains the results. Questions that contain unrelated activities have the lowest total scores (12). Questions with hidden dependencies also have low total scores, but contrary to the pilot study results, this was not the key construct that caused errors.

Measured Construct Q1 Q2 Q3 Q4 Q5 Q6 Q7

Existence X X X X

Relation/Negation X X X

Hidden Dependency X X X X X

Unrelated activities X X X

Total Amount of Correct Answers (Max = 18): 12 16 15 16 12 13 17

Table 3: The Declare constructs that the true/false interpretation questions were measuring and the number of correct answers for each question.

RQ1.3: What is the effect of the DPMG on the added value of a cheat sheet when analysing Declare models?

The participants self-assessed their usage of the cheat sheet on a scale ranging from 1 (All of the time) to 7 (Never). We calculated the mean usage of the cheat sheet in each condition, shown in the error bar chart in figure 17. The mean usage is slightly higher in the control condition than in the experimental condition, but the confidence interval of the control condition fully overlaps with the confidence interval of the experimental condition. Thus, no effect of the DPMG is found on the added value of the cheat sheet.


Figure 17: The mean usage and 95% confidence intervals for the cheat sheet usage, specified per condition.

RQ1.4: Is there a relation between the interpretation scores and the experience with imperative models, experience with declarative models and familiarity with Declare?

We calculated the relations between the interpretation scores and the experience of the participants and found one high relation: between the interpretation scores and the familiarity with Declare. Figure 18 shows the distribution of points in this relation. As the familiarity with Declare has been rated on a scale ranging from 1 (Strongly agree) to 7 (Strongly disagree), this relation implies that the less familiar a participant is with Declare, the higher the interpretation scores of this participant are. We did not find an explanation for this result.
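The thesis does not state here which correlation measure was used; for Likert-style ordinal ratings, Spearman's rank correlation is a common choice. A standard-library sketch with hypothetical ratings (the positive sign matches the reported direction: higher familiarity rating, i.e. less familiar, goes with higher scores):

```python
# Hypothetical ratings: familiarity with Declare (1 = strongly agree,
# i.e. very familiar; 7 = strongly disagree) and interpretation scores.
familiarity = [2, 5, 1, 4, 7, 3, 6]
scores = [4, 8, 3, 6, 9, 5, 7]

def spearman_rho(x, y):
    """Spearman rank correlation via the classic no-ties formula.

    Sketch only: assumes no ties within a sample (average ranks and
    Pearson-of-ranks would be needed otherwise).
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

rho = spearman_rho(familiarity, scores)
print(f"rho = {rho:.3f}")  # positive: less familiar, higher score
```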


Figure 18: The distribution of points of the relation between the total interpretation scores and the familiarity with Declare.

5.3 Mental Effort

RQ2.1: What is the effect of the DPMG on the mental effort when analysing Declare models?

After each descriptive and true/false question, the participants rated their mental effort on a scale from 1 (Extremely high mental effort) to 7 (Extremely low mental effort). Figure 19 shows the mean ratings that we calculated for the different question types and for each condition, and the 95% confidence intervals for these mean ratings. Overall, participants rated a medium mental effort to answer the questions. The mean ratings for the mental effort at the true/false questions are somewhat lower in the experimental condition than in the control condition, though the confidence intervals for both conditions still overlap. Following these results, we conclude that the DPMG do not have an effect on the mental effort.


Figure 19: The mean ratings and 95% confidence intervals for the mental effort ratings, specified for each question type and for each condition.

RQ2.2: Is there a relation between the mental effort and the experience with imperative models, experience with declarative models and familiarity with Declare?

We calculated the relations between the mental effort ratings and the experience of the participants. No strong relations were found, and the highest correlations are to a large extent caused by outliers. Therefore, we conclude that there are no relations between the mental effort and the experience of the participants.


5.4 User Acceptance

RQ3.1: What is the effect of the DPMG on the user acceptance of Declare models?

The participants rated the user acceptance factors on a scale ranging from 1 (Strongly agree) to 7 (Strongly disagree). Figure 20 shows the mean scores that we calculated for the constructs, for each condition, and the 95% confidence intervals for these mean scores. The PU of Declare and the PEOU of reading the constraints are high, while the PEOU of Declare is moderately high. No effect of the DPMG is found, as the differences are too small and the confidence intervals too large. Thus the user acceptance is high in general, but the DPMG do not have an effect on it.


RQ3.2: How high is the user acceptance of specific DPMG constructs?

The participants rated the user acceptance factors on a scale ranging from 1 (Strongly agree) to 7 (Strongly disagree). Figure 21 shows the mean ratings and 95% confidence intervals that we calculated. Both the subprocesses and the constraint annotations have high ratings on PU and PEOU, which indicates a high user acceptance of these specific DPMG constructs.

Figure 21: The mean ratings and 95% confidence intervals for the user acceptance of DPMG constructs, specified per user acceptance construct and per DPMG construct.


5.5 Challenges and Points of Improvement

RQ4.1: What are the challenges when trying to understand Declare models?

We collected the comments from the open mental effort questions and from the open questions after analysing the process models, in which the participants stated the challenges that they experienced. To maintain readability, table 4 only contains the comments that were mentioned more than twice; the full table of comments can be found in Appendix E. The most mentioned challenge relates to the size of the process model: even though the process model is decomposed in the experimental condition, its large size is still frequently mentioned. Also interesting is the large number of positive comments about understanding the process model and the constraints. Finally, other frequently mentioned challenges are the concept of traces, the lack of language knowledge and the combination of unrelated activities.

Comment Exp. (10) Cont. (8) Total (18)

Difficult due to large process size. 8 3 11

Process model is clear. 4 4 8

Easy to violate constraints. 4 3 7

Difficult due to concept of traces. 3 4 7

Constraints are easy to understand. 5 1 6

Constraints are intuitive. 4 2 6

Difficult due to lack of knowledge of language. 4 2 6

Difficult due to lack of order. 4 2 6

Constraints are useful. 3 3 6

Difficult due to unrelated activities. 4 1 5

Difficult due to unclear structure. 1 2 3

Table 4: The challenges mentioned more than twice and the number of participants that mentioned a given point, per condition and in total.
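Counts like those in Table 4 come from tallying coded comments per condition and summing across conditions. A minimal sketch with hypothetical comment codes (the labels are illustrative, not the actual coding scheme):

```python
from collections import Counter

# Hypothetical coded comments per condition; labels are illustrative.
experimental = ["large process size", "model is clear", "large process size",
                "constraints intuitive", "large process size"]
control = ["model is clear", "concept of traces", "large process size"]

per_condition = {"exp": Counter(experimental), "cont": Counter(control)}
totals = per_condition["exp"] + per_condition["cont"]  # Counters add up
print(totals.most_common(2))
```

Sorting the summed counter by frequency and cutting it off at a threshold produces exactly the kind of table shown above.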


RQ4.2: What are suggestions for the improvement of Declare models?

We collected the improvement suggestions from two open questions at the end of the research sessions. Table 5 contains the comments that were mentioned more than once; the full table can be found in Appendix E. The most mentioned comments relate to making the hidden dependencies clearer or removing them, improving the visual richness of the constraints and putting more order in the activities. Interestingly, the comments on visual richness and the order of activities were only mentioned in the control condition, which advocates for the DPMG: the ordering of activities is an important part of the DPMG, and the visual richness could be less necessary due to the constraint annotations. Some participants in the experimental condition suggested the removal of constraint annotations, which we attribute to the reduction of complexity: with fewer different constraint types, the constraint annotations are less necessary. Finally, another comment related to including more existence constraints.

Comment Exp. (10) Cont. (8) Total (18)

Reduce/highlight hidden dependencies. 2 1 3

Improve visual richness. 0 3 3

Order the activities. 0 3 3

Remove constraint notations. 2 0 2

Include more existence constraints. 2 0 2

Table 5: The points of improvement mentioned more than once and the number of participants that mentioned a given point, per condition and in total.
