
Identifying and Classifying Approaches for Sharing Architecture Decisions

A report on how to plan and conduct a systematic literature review in the field of software engineering.

Rayan Brouwer & Sophie Hugenholtz 8-7-2016

Version 6


Revision History

Version 1, 19/5/2016, Sophie: Outline of the thesis.

Version 2, 6/6/2016, Rayan: Start writing on Introduction, Method search strategy, results preliminary search.

Version 3, 16/6/2016, Sophie: Rewriting introduction, research questions, refining search strategy and results preliminary search. Rayan: Refining search strategy, adding QGS, evaluation of search string, start on results selection criteria.

Version 4, 30/6/2016, Sophie: Write Quality assessment method, revise search strategy and selection criteria, revise results preliminary search. Rayan: Refining parts in the Method section, writing about data extraction strategy.

Version 5, 6/7/2016, Rayan: Refine structure of thesis, rewriting introduction, writing Research method and Results section. Sophie: Review everything, write validation, intro, background and evaluation.

Version 6, 8/7/2016, Sophie & Rayan: Finalizing thesis.


Contents

Revision History
1 Introduction
2 Background
   2.1 Systematic Literature Review
   2.2 Review Protocol
   2.3 Related work
3 Research Method
   3.1 Context
   3.2 Research questions
   3.3 Methods
   3.4 Project overview
4 Results
   4.1 RQ1: Search Strategy
      4.1.1 Establishing the quasi-gold standard
      4.1.2 Automated and manual search
   4.2 RQ2: Study Selection Criteria and Procedures
   4.3 RQ3: Study Quality Assessment
   4.4 RQ4: Data Extraction Strategy
5 Validation
   5.1 Research Method of Pilot Study
      5.1.1 Pilot of the quasi-gold standard
      5.1.2 Pilot of the Automated Search
      5.1.3 Pilot of the Selection Criteria
      5.1.4 Pilot of the Quality Assessment
      5.1.5 Pilot of the Data Extraction
   5.2 Results of Pilot Study
      5.2.1 Results of pilot of the Quasi-Gold Standard
      5.2.2 Results of pilot of the Automated Search
      5.2.3 Results of pilot of the Selection Criteria
      5.2.4 Results of pilot of the Quality Assessment
      5.2.5 Results of pilot of the Data Extraction
6 Evaluation
   6.1 Evaluation of results
   6.2 Evaluation of project
7 Conclusion
Bibliography
Appendices
   Appendix A. Preliminary search
   Appendix B. Quasi-gold standard of ScienceDirect
   Appendix C. Evaluation of search strings for ScienceDirect
   Appendix D. Reflection
   Appendix E. Protocol


1 Introduction

A literature review is an important type of research study, used mainly for summarizing existing knowledge or identifying unexplored subjects [1]. Such a review is systematic when it is performed in an unbiased and replicable way and makes use of a review protocol. Several approaches to writing (parts of) this protocol have been published, some of which contradict one another. This study therefore looks into how a protocol for such a systematic literature review (SLR) is constructed.

As an SLR should be conducted in a replicable and unbiased way, a review protocol should explain exactly and in detail how every step of the SLR is to be taken. The protocol should contain many specific sections, but this paper focuses on some of the most important ones: the search strategy, the selection criteria, the quality assessment criteria, and the data extraction strategy. The result of this study is a protocol (cf. Appendix E) that can be used for conducting an SLR on identifying approaches to architectural knowledge sharing (AKS). Architectural knowledge can be defined as the sum of design and design decisions, and AKS as capturing, documenting and communicating architectural knowledge.

This paper looks into the construction of a review protocol, discusses the resulting protocol, and validates this result through a pilot study. Among other things, the paper and the resulting protocol present a search strategy that aims to identify all studies relevant to the SLR, including an objective approach for evaluating searches.

The remainder of this paper is structured as follows: Section 2 gives background information on the subject, Section 3 specifies the methodology and approach of the research, Section 4 presents the results, Section 5 describes and discusses the pilot study used to validate the results, Section 6 discusses and evaluates all results, and Section 7 presents the conclusions, along with some suggestions for future work.1

2 Background

This section goes into more detail about topics important to this study, the protocol, and the SLR. It explains what an SLR and a review protocol are exactly and gives some information about related work; the related-work part does not discuss specific papers, but gives an overview of what is out there and some brief background on the construction of review protocols. Section 2.1 explains what an SLR is, including its objective, Section 2.2 describes the function and objective of a review protocol, and Section 2.3 gives information about related work.

2.1 Systematic Literature Review

Literature reviews are secondary studies of existing knowledge and previously published articles, with the purpose of gaining more insight or obtaining new knowledge about the studied subject. Data, results and findings from several studies are combined in order to draw new conclusions. If researchers start searching for studies without any guidelines, selecting relevant articles along the way, the selection of articles is prone to researcher bias. It is tempting to select only studies beneficial to the conclusions the researcher wants to draw, leaving possibly contradicting articles out. Even when researchers have no intention of doing this, researcher bias is likely without guidelines. To this end, Kitchenham [1] wrote a document with guidelines for performing an SLR. These describe in detail

1 All sections of this paper were written by its two authors together, except for Section 4, the results, and Section 5, the validation. Section 4 was mainly written by Rayan Brouwer, except for one subsection, and Section 5 was mainly written by Sophie Hugenholtz.


how an SLR should be performed, the importance of every step, and the preparatory work that needs to be (largely) completed before conducting the actual SLR.

2.2 Review Protocol

The guidelines by Kitchenham [1], mentioned in the previous section, explain the importance of using a review protocol. Such a protocol describes in detail all the necessary steps for conducting an SLR, along with a description of how those steps should be taken. When conducting an SLR it is important to follow the protocol precisely, or to adjust the protocol whenever this turns out to be impossible, ineffective or incorrect. Writing a review protocol is an iterative process, in which all methods and processes are carefully tested several times along the way. A review protocol is also likely to be adjusted and refined as the SLR itself is being conducted.

A review protocol specifies how to conduct an SLR and the guidelines [1] suggest using the following sections:

• Background

A section giving insight into the background of the subject of the SLR: for instance, related work, a brief history, definitions, challenges in the field, etc.

• Research Questions

This section states the research questions for the SLR, along with an explanation of each question, why it is asked, and what it aims for.

• Search Strategy

This section specifies in detail how relevant studies for the SLR are to be searched for and found. This detailed explanation is necessary for a replicable and transparent SLR.

• Study Selection Criteria

As the search strategy is likely to return both relevant and irrelevant studies, this section defines selection criteria.

• Study Selection Procedures

This section specifies the process by which the selection criteria from the previous section are applied.

• Study Quality Assessment

This section gives the means by which the empirical quality of relevant studies can be assessed, along with an explanation of this quality assessment.

• Data Extraction Strategy

This section carefully describes how relevant data should be extracted from the studies.

• Synthesis of the Extracted Data

This section describes how the extracted data should be analysed.

• Dissemination Strategy

This section specifies how the results of the analysed data are reported, as well as how the entire SLR should be reported.

• Project Timetable

As the title suggests, this section contains a planning for conducting the SLR, including when to start with which part of the research.

The last three sections of the list above are not in the scope of this paper and can therefore not be found in the resulting review protocol. In addition to the sections mentioned above, the created protocol includes an introduction to the subject and the rationale behind the SLR, as well as a revision history. This paper focuses on the construction of the Search Strategy, the Study Selection Criteria and Procedures, the Study Quality Assessment, and the Data Extraction Strategy.


2.3 Related work

Many protocols for SLRs have already been written, but few studies of the construction of specific protocols have been done. Protocols are often written without a paper specifying and justifying how and why they were written the way they are. There do exist studies of SLRs in software engineering in general.

For instance, Zhang et al. [2] conducted an empirical study of how well SLRs have been adopted within the field, Brereton et al. [3] investigated challenges in performing SLRs and presented possible solutions, MacDonnell et al. [4] investigated how reliable SLRs within the field of software engineering generally are, and Kitchenham et al. [5] conducted a tertiary study of the current practice, or state of the art, of SLRs in software engineering.

Several papers also exist on how to write a protocol in general, or about specific parts of a protocol. The most important and well-known one is the document by Kitchenham [1] giving guidelines for performing an SLR in software engineering, including the suggestion to use a review protocol. These guidelines were used for the construction of the review protocol resulting from this paper. Several other papers about specific strategies or parts of the protocol, for instance the search strategy, are mentioned elsewhere in this paper, because they were used to answer the research questions and construct the protocol.

3 Research Method

The goal of this study was to create transparent and repeatable strategies for conducting an SLR.

These strategies are defined in a review protocol, which is the final result of this study (cf. Appendix E). This paper will discuss the protocol and how the protocol was piloted. In this section we describe the context of the study, the research questions, the methods used, and an overview of the project.

3.1 Context

This study was done as a Bachelor project assignment. The assignment was to create a review protocol for a systematic literature review. Conducting the systematic literature review was not in the scope of the assignment. The SLR will be conducted by the supervisor of this project, with the aim to create an abstract overview of AKS approaches in the form of a pattern language. We created the main part of the protocol with input from our supervisor.

This paper looks into how to perform an SLR in the field of software engineering on the subject of AKS, by researching the construction, writing and piloting of a protocol. The result is a carefully constructed and piloted protocol, useful for an SLR about sharing architectural knowledge. The methods suggested in the protocol are discussed in this paper, together with a validation of those methods through a pilot study.

3.2 Research questions

Four parts of the protocol were considered most important in a review protocol. Based on these four parts, the following research questions are defined:

RQ1: What is a strategy that will identify all relevant primary studies for this SLR on AKS approaches in a transparent and replicable way?

RQ2: What are the criteria that include appropriate studies and exclude inappropriate studies according to the topic?

RQ3: What are the criteria to assess the quality of the primary studies found for this SLR?

RQ4: What is the data extraction strategy for the SLR?


For conducting an SLR it is very important to identify as many studies as possible that are related to the research topic. Therefore it is crucial to define a good and detailed search strategy in the protocol.

RQ1 aims at defining such a strategy, to identify all the relevant primary studies for an SLR on AKS approaches. This strategy should be transparent and replicable.

For RQ2, selection criteria should be defined that will include all the relevant studies for the SLR, and exclude irrelevant ones. A selection process should also be defined here, such that these criteria can be applied in a systematic way.

When all the relevant studies for the SLR are identified, the quality of those studies should be assessed.

RQ3 aims at defining criteria for this assessment, so that this can be done in an unbiased way.

The last research question aims to define a strategy for extracting the data from the primary studies.

RQ4 is answered with such a strategy. The result of applying this strategy to the relevant studies will be used to answer the research questions of the SLR.

3.3 Methods

To start the study, and at every step along the way, the guidelines for performing Systematic Literature Reviews in Software Engineering [1] were consulted. These guidelines helped define the different methods in the protocol in a transparent and replicable way. In addition to the guidelines, substantial research was done into existing SLRs and into studies suggesting methods for performing an SLR.

While writing the protocol, many articles on architectural knowledge sharing were read in order to get a better understanding of the research topic.

The guidelines [1] describe the use of a preliminary search prior to conducting an SLR, and, accordingly, such a search is suggested in the protocol. In parallel with constructing the protocol, a preliminary search was conducted. The aim of this unsystematic search was to gain insight into the volume of potentially relevant studies, in order to assess whether the scope of the SLR was too broad or too narrow. The databases defined in the scope of the search (cf. Protocol Section 3.2.2) were tested by running automated searches on them, trying out several different search strings.

The preliminary search was used to define the first selection criteria, test the scope of the search (including the selected databases and venues, and the specified time period) and establish the first search strings that are used for the automated search. A main search string is stated in the protocol (cf. Protocol Section 3.2.3), which is derived from the results of the preliminary search. Multiple search terms were identified during this search which formed an initial search string to be used for the automated search.

In order to gain insight into the volume of potentially relevant studies related to the research questions, articles from the databases selected in the protocol were retrieved using the suggested initial search string. We assessed the relevancy of the retrieved articles by applying the defined selection criteria liberally to the titles, keywords and abstracts. This resulted in a rough overview of how many primary studies can be found for the SLR. Because this search was preliminary, and not intended to identify all relevant primary studies, it was time-boxed to one man-week. Results of the preliminary search can be found in Appendix A.

To validate the protocol created, a pilot study was done to test several strategies suggested in the protocol. The whole pilot study was done by two people. One database that was selected in the scope of the search was chosen for piloting the protocol. The Search Strategy was piloted in one man-week and the Study Selection Criteria and Procedures also in one man-week. The Study Quality Assessment and Data Extraction Strategy were piloted together in four man-weeks. The total duration of the pilot study was four weeks, because alongside the pilot, the protocol was refined. More about the pilot study can be found in Section 5.


3.4 Project overview

Figure 1 shows an overview of the whole project. The total duration of the project was thirteen weeks.

The focus in the first eleven weeks was mainly on creating and piloting the protocol, while in the last two weeks the focus was on this paper. In order to process the results of the preliminary search and the pilot study, writing this paper started earlier, as shown in Figure 1.

The project started with some preparatory work and writing an initial version of the protocol. To prepare, we familiarized ourselves with the subject by reading about SLRs and AKS, and we created an outline for the project to set some deadlines. At the end of week 3, we established an initial version of the protocol. Because of the iterative process, the protocol was refined several times until the end of the project. The pilot study started in the eighth week and ended before the final presentation of this project, in which the results were presented.

Figure 1. Timeline of project

4 Results

This section will give the results of the research questions defined in Section 3. A more detailed description of what is suggested in the protocol is stated in this section. The first subsection will state which strategy was used to identify the primary studies for this SLR. The second subsection is about how the selection criteria were defined to include relevant studies and exclude irrelevant studies and how these criteria were applied to the studies. The third subsection gives the criteria that were used to assess the quality of the relevant studies and the last subsection is about the strategy that was used to extract data from the primary studies found for the SLR.

4.1 RQ1: Search Strategy

To establish the transparency and replicability of an SLR, a detailed description of the search strategy is included in the protocol. This is crucial, since an SLR aims to identify as many articles related to the research questions as possible, without any bias.

We based the search strategy suggested in the protocol on the quasi-gold standard concept [6], which helped to construct an objective, repeatable and transparent search approach. This concept was also used to evaluate our search strategy. As stated in the protocol, the search strategy consists of three stages. Prior to conducting the SLR, an unsystematic preliminary search is performed (cf. Section 3.3.1). While conducting the SLR, two stages remain: first, a subset of the primary studies is established manually, called the quasi-gold standard, and used for evaluation; second, the protocol suggests using an integrated search, an automated search complemented with a manual search.


4.1.1 Establishing the quasi-gold standard2

Search strategies for SLRs in software engineering are often evaluated in a subjective way [6], which means that the strategy defined in a protocol is evaluated by external experts before an automated search is performed. After the automated search these experts evaluate the results based on their own knowledge of the domain, making it difficult to test whether the search strategy is reliable.

To overcome these limitations, the quasi-gold standard (QGS) was introduced by Zhang et al. [6] as an objective way to assess the search strategy. The concept is called ‘quasi’ because, whereas the gold standard represents all the primary studies that can be found, the quasi-gold standard represents only a subset of all relevant primary studies [6]: it is a gold standard restricted by time and venues. A relevant time period has to be chosen and venues have to be selected according to the research topic. The QGS is established by manually searching all the selected venues defined in the scope of the SLR; the identified studies constitute the QGS. Because different databases represent different venues, the identified studies are grouped by their respective database (the selected databases defined in the scope), forming a QGS set for every database. The QGS for a database can then be used to evaluate the search strings used for the automated search on that database. When evaluating a search string, the automated search is performed only on the specific venues in the database that were chosen to represent the QGS. The evaluation is done by calculating the quasi-sensitivity and the precision of the results of a search string, in the following way:

quasi-sensitivity = (the number of articles from the QGS retrieved with the search string / the number of articles in the QGS) × 100%

precision = (the number of relevant articles retrieved with the search string / the total number of articles retrieved with the search string) × 100%

The higher the quasi-sensitivity and precision, the better the search string is. An optimum search string will have a quasi-sensitivity range of 80-99% and precision range of 20-25% [7]. Once a satisfactory search string is found, it can be used on the entire database, to find more relevant articles.

The quasi-sensitivity is derived from sensitivity, a term used mostly in the medical domain. Sensitivity is the proportion of relevant articles found among all relevant articles. As the set of all relevant articles is not known, the sensitivity cannot be calculated. For this reason the quasi-sensitivity was introduced, which can be calculated when evaluating the search strings. One could find all relevant articles in a specific database by manually searching all articles, but this would be too time-consuming and would also make the automated search irrelevant.

Unlike the quasi-sensitivity, the precision can also be calculated for an automated search over all venues available in the database, and it is expected to be close to the precision of the search limited to the selected venues.

Figure 2 shows a QGS, which is the intersection between circle 1, all articles in the database; circle 2, all articles of the selected venues in the database; and circle 3, all relevant articles in the database.

Since circles 2 and 3 lie entirely within circle 1, the QGS is also given by the intersection of circles 2 and 3. The area marked ‘X’ contains all articles found with a search string that are in the QGS. As can be seen in Figure 2, the more articles from the QGS are found with a search string, the larger X becomes, which results in a higher quasi-sensitivity.

In mathematical notation, a QGS, ‘X’, the quasi-sensitivity, and the precision are given by:

2 This subsection was written by both authors of this document.


QGS = 1 ∩ 2 ∩ 3 = 2 ∩ 3

X = QGS ∩ 4

quasi-sensitivity = (#X / #QGS) × 100%

precision = (#X / #4) × 100%

where the numbers refer to the circles, and # denotes the number of articles in a set.
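As a worked sketch of this evaluation, the two metrics can be computed from sets of article identifiers. The set contents below are invented for illustration; the circle numbers follow Figure 2.

```python
# Hypothetical sketch of QGS-based search-string evaluation. The sets hold
# article identifiers: 'venues' = articles from the selected venues (circle 2),
# 'relevant' = relevant articles (circle 3), 'retrieved' = articles returned
# by a candidate search string (circle 4).

def quasi_sensitivity(qgs: set, retrieved: set) -> float:
    """Share of QGS articles that the search string retrieved, in percent."""
    x = qgs & retrieved  # the region marked 'X' in Figure 2
    return 100.0 * len(x) / len(qgs)

def precision(relevant: set, retrieved: set) -> float:
    """Share of retrieved articles that are relevant, in percent."""
    return 100.0 * len(relevant & retrieved) / len(retrieved)

# Invented data: venue articles v1..v10, of which five are relevant.
venues = {f"v{i}" for i in range(1, 11)}
relevant = {"v1", "v2", "v3", "v4", "v5"}
qgs = venues & relevant                            # QGS = 2 ∩ 3
retrieved = {"v1", "v2", "v3", "v4", "v7", "v8"}   # circle 4

print(quasi_sensitivity(qgs, retrieved))  # 4 of 5 QGS articles found -> 80.0
print(precision(relevant, retrieved))     # 4 of 6 retrieved relevant -> ~66.7
```

With these invented numbers the string would meet the 80–99% quasi-sensitivity target but overshoot the 20–25% precision range, suggesting the string could be narrowed.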

4.1.2 Automated and manual search

According to the guidelines by Kitchenham [1], searching only digital databases is not sufficient for an SLR; the guidelines suggest also conducting manual searches to increase the search perimeter. In order to increase the accuracy of the identification of the primary studies, an integrated search strategy is defined in the protocol, adopted from the QGS concept by Zhang et al. [6]. The strategy consists of an automated search complemented with a manual search.

The goal of the manual search is to find primary studies that are not, or cannot be, found with the automated search. For example, journals related to the research topic, at least the ones defined in the scope, should be searched. Related conference proceedings or other kinds of grey literature could also be searched, and reference lists of relevant studies that have already been found should be reviewed (snowballing).

The goal of the automated search is to find the majority of the relevant studies on the research topic, and also to retrieve relevant studies from the venues that are not included in the QGS.

The search string found after evaluating with the QGS should be used on the entire database, without the restriction of the venues selected for the QGS. This should be done on all the selected databases defined in the protocol.

All the retrieved studies, from both the manual and the automated search, will be put in a spreadsheet, to keep track of which studies are included and which are excluded after study selection. For every retrieved study, this spreadsheet should include at least the title, authors, year of publication, abstract and keywords, and a URL to the PDF.
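This bookkeeping can be sketched as a small CSV writer. The column names and the example row below are our own illustration; the protocol only prescribes which fields must be recorded, not these exact names.

```python
import csv
import io

# Hypothetical column names for the tracking spreadsheet; 'source' and
# 'included' are our own additions to record how a study was found and
# the outcome of study selection.
FIELDS = ["title", "authors", "year", "abstract", "keywords", "pdf_url",
          "source", "included"]

studies = [
    {"title": "An Example AKS Approach", "authors": "Doe, J.",
     "year": 2014, "abstract": "...",
     "keywords": "architectural knowledge; sharing",
     "pdf_url": "http://example.org/paper.pdf",
     "source": "automated",   # automated or manual search
     "included": ""},         # filled in after study selection
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(studies)
print(buf.getvalue())
```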

Figure 2. Quasi-Gold Standard, for evaluating search strings. (Legend: 1 = all articles in the database; 2 = articles from the selected venues; 3 = relevant articles; 4 = articles found with the search string; X = found articles that are in the QGS.)


4.2 RQ2: Study Selection Criteria and Procedures

In the protocol, multiple selection criteria are defined and a three-stage process for applying them is presented. The guidelines suggest using study selection criteria as a means to evaluate the relevancy of the articles found. These criteria describe which articles to include in, and which to exclude from, the list of retrieved studies. Some selection criteria were already defined in the first weeks of constructing the protocol; because of the iterative process these were not yet final, and some changes were made along the way.

One inclusion criterion is defined in the protocol. It includes studies that present an approach for sharing architectural knowledge, studies that evaluate or compare one or more such approaches, studies about the state of the art of AKS, and studies about industrial practices with regard to AK sharing. This inclusion criterion was first defined as “A study is directly related to at least one of the research questions in some way”. Because the research questions ask specifically about a suggested approach, this first formulation excluded relevant studies about AKS in general and comparative studies covering multiple approaches, which is why it was revised.

There are multiple exclusion criteria defined in the protocol that exclude articles based on, for example, language, year of publication, or the kind of article (e.g. editorials or abstracts). An exclusion criterion added at a later stage excludes studies about knowledge sharing in general that are not specifically about architectural knowledge. When piloting the protocol, we encountered multiple such studies and decided to exclude them, because the research topic is mainly about sharing design and design decisions. The final selection criteria can be found in Section 3.3 of the protocol.

The selection criteria should be applied by at least two reviewers in a three-stage process. The process suggested in the protocol entails the following steps:

• First, make a broad call: interpret the criteria very liberally and assess the relevancy based on the title, keywords and abstract of an article. This results in a list of potentially relevant articles.

• Second, obtain the full text of the articles on the list established in the first step, and judge the relevancy by precisely applying the selection criteria to the full text.

• Lastly, compare the inconsistencies between the assessments of the different reviewers. Try to come to an agreement; if this is not possible, another reviewer comes in and makes the final call.
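The three stages above can be sketched as follows. The criteria are modelled here as hypothetical predicate functions, and the study records and keyword tests are invented for illustration.

```python
# Sketch of the three-stage selection process; criteria, abstracts and
# keyword tests are hypothetical, not the protocol's final criteria.

def stage_one(studies, looks_relevant):
    """Liberal first pass on title, keywords and abstract only."""
    return [s for s in studies if looks_relevant(s)]

def stage_two(candidates, reviewers):
    """Each reviewer strictly applies the criteria to the full text."""
    return {s["id"]: [judge(s) for judge in reviewers] for s in candidates}

def stage_three(verdicts, tiebreaker):
    """Keep unanimous decisions; a third reviewer resolves disagreements."""
    return {sid: votes[0] if len(set(votes)) == 1 else tiebreaker(sid)
            for sid, votes in verdicts.items()}

studies = [
    {"id": 1, "abstract": "an approach for sharing architectural knowledge"},
    {"id": 2, "abstract": "sharing architecture decisions in hospitals"},
]
candidates = stage_one(studies, lambda s: "sharing" in s["abstract"])
verdicts = stage_two(candidates, [
    lambda s: "architectural" in s["abstract"],  # reviewer 1, strict reading
    lambda s: "architect" in s["abstract"],      # reviewer 2, looser reading
])
final = stage_three(verdicts, tiebreaker=lambda sid: False)
print(final)  # {1: True, 2: False}
```

Study 2 illustrates the third stage: the two reviewers disagree, so the tiebreaking reviewer makes the final call.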

4.3 RQ3: Study Quality Assessment

Quality assessment criteria are defined in the protocol and piloted, in order to objectively assess the quality of each relevant article. The quality assessment can serve two purposes: excluding articles with a poor quality score, or weighting studies by importance [3]. Since the selection criteria were modified to also include less empirical studies, and because an article of poor quality might still present useful information, it was decided not to use the quality assessment as an exclusion criterion.

The quality criteria in the protocol are a means to assess the empirical value of an article and are based on the criteria defined by Dybå and Dingsøyr [8]. Besides assessing the empirical value of an article, the criteria by Dybå and Dingsøyr cover three more quality issues: “Relevance, Rigour and Credibility” [8]. We did not include relevance in our quality assessment, but our criteria do relate to validity, trustworthiness and meaningfulness [8]. The final quality criteria are defined in Protocol Section 3.4.

The quality criteria are defined as yes/no questions and are answered with 0, 0.5 or 1, where 0 denotes ‘no’, 1 ‘yes’, and 0.5 ‘to some extent’. The sum of all the answers gives the final quality score. The quality assessment defined in the protocol consists of nine questions, which are put in a form to increase usability when performing the quality assessment.

The quality assessment should be done by one reviewer, and at least one other reviewer should check the assessment. For every criterion, a motivation for the score should be given, which makes it easier for the second reviewer to check the assessment. The quality assessment is conducted in parallel with the data extraction.
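The scoring scheme can be sketched as follows. The two criteria shown are hypothetical placeholders for the nine questions in Protocol Section 3.4.

```python
# Sketch of the 0/0.5/1 quality scoring described above; the criteria texts
# are invented placeholders, not the protocol's actual questions.

ALLOWED_SCORES = {0, 0.5, 1}  # 'no' / 'to some extent' / 'yes'

def quality_score(answers):
    """Sum per-criterion scores; every answer must carry a motivation so a
    second reviewer can check the assessment."""
    total = 0.0
    for criterion, (score, motivation) in answers.items():
        assert score in ALLOWED_SCORES, f"invalid score for: {criterion}"
        assert motivation, f"missing motivation for: {criterion}"
        total += score
    return total

answers = {
    "Is there a clear statement of the aims?": (1, "aims stated up front"),
    "Is the study design appropriate?": (0.5, "design only partly justified"),
}
print(quality_score(answers))  # 1.5
```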

4.4 RQ4: Data Extraction Strategy

For the extraction of the data from the primary studies, a form was created, as suggested by Kitchenham [1]. The form defined in the protocol contains question blocks, each containing one or more questions. These questions were all derived from the research questions defined in the protocol for the SLR; the extracted data will in turn be used to answer those research questions.

The research questions that were defined are the following:

• RQ1: What approaches for sharing AK have been proposed in the literature?

• RQ2: What are the characteristics of the approaches identified in RQ1?

  o RQ2.1: What is the intended project context of the approach?

  o RQ2.2: Which knowledge management strategy does the approach support?

  o RQ2.3: What are the mechanisms of the approach to support sharing of AK?

  o RQ2.4: What are the architectural knowledge entities captured by the approach?

  o RQ2.5: How prescriptive or descriptive is the approach?

  o RQ2.6: What are reported benefits and limitations of the approaches?

  o RQ2.7: What empirical evidence do we have for each approach?

• RQ3: How are the approaches identified in RQ1 related to each other?

• RQ4: What is a pattern language for architectural knowledge sharing?

Most of the research questions were split into a question complemented with an explanation, together forming a question block. The final data extraction form contains twelve question blocks (cf. Protocol Section 4). The first question block is for documentation purposes only; it contains information such as the title, authors, year of publication, and the abstract. The second question block records who performed the data extraction, how the study was found, and the study's final quality score. The quality of a study was assessed with a separate form (cf. Section 4.3).

Question block three answers the first research question in the protocol: “What approaches for sharing AK have been proposed in the literature?” To answer it, the name of the approach proposed in the study and a short description are documented in the form.

For question block four, we proposed to use a model to clearly describe the project context of the approach; this block is related to research question 2.1. For describing the project context, the Octopus model by Philippe Kruchten is used [9]. Using this model, the different approaches can be classified with their corresponding project context, which is useful for the final goal of the SLR: creating a pattern language on AKS approaches. The factors used in this model to describe the project context are: size, criticality, business model, stable architecture, team distribution, governance, rate of change, and age of system.
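To illustrate, the project context of an approach could be recorded along these eight factors. The sketch below is our own illustration, not part of the protocol; the field names are shorthand for the Octopus factors and the example values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ProjectContext:
    """One record per approach, following Kruchten's Octopus model factors."""
    size: str
    criticality: str
    business_model: str
    architecture_stability: str
    team_distribution: str
    governance: str
    rate_of_change: str
    age_of_system: str

# Hypothetical context extracted for one approach:
ctx = ProjectContext("small", "high", "in-house", "stable",
                     "co-located", "simple", "low", "greenfield")
print(ctx.criticality)  # high
```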

Question blocks five through ten cover research questions 2.2 through 2.7, and question block eleven relates to RQ3, comparing the approaches found and identifying relations between them.

The last question block provides space for additional notes. We did not want to miss relevant information that we did not foresee, so we added question block twelve. This final block also helped us to refine our form.



5 Validation

To validate the results of the study, a pilot study was conducted for most methods included in the protocol. The manual search was not piloted, as it is outside the scope of this study. The methods that were piloted are the quasi-gold standard, the automated search, the selection criteria, the quality assessment, and the data extraction. This section describes the pilot study: Section 5.1 presents the research method per subject, and Section 5.2 the results.

5.1 Research Method of Pilot Study

The methods of the protocol were piloted on the database ScienceDirect only. The pilot studies of the individual parts of the protocol were done sequentially: first the QGS was piloted, then the automated search, and finally the quality assessment and the data extraction. ScienceDirect was chosen because the preliminary search showed that many potentially relevant studies can be found there, most of these are articles, and every study found came with a clearly identifiable venue, as opposed to, for instance, Google Scholar, which returned too many irrelevant articles, including a lot of grey literature. This section describes the different parts of the pilot and how they were approached and executed. Section 5.1.1 describes the piloting of the quasi-gold standard, Section 5.1.2 the automated search, Section 5.1.3 the selection criteria, Section 5.1.4 the quality assessment, and Section 5.1.5 the data extraction strategy.

5.1.1 Pilot of the quasi-gold standard

Piloting the quasi-gold standard consists of two parts: first, establishing the QGS, and second, using it to evaluate the search strings. The QGS for ScienceDirect was established by manually searching for relevant articles from the Journal of Systems and Software (JSS). The scope of the search, defined after the preliminary search in the protocol, lists several relevant venues. Since more than half of the potentially relevant studies from ScienceDirect are from JSS, and none of the remaining potentially relevant studies are from any of the other venues in this scope, only JSS was used to establish a QGS for ScienceDirect (cf. Appendix A). Alongside the pilot of the QGS concept, the selection process and criteria were piloted; more about this in Section 5.1.3.

All articles published since 2004 were evaluated for their relevance, according to the selection process defined in Section 4.2 of this document and Section 3.3 of the review protocol. The relevant articles found constitute a quasi-gold standard for ScienceDirect. This QGS can be found in Section 5.2.1 and Appendix B of this document.

Once the QGS was established, several search strings were tried and evaluated against the QGS until a sufficient result was obtained. The search strings were run on JSS only in ScienceDirect, with a time period of 2004 until now. Starting from the search string found in the preliminary search and the definition of AKS given in the review protocol, different strings were evaluated on a trial-and-error basis, based on their quasi-sensitivity and precision. The final, optimal search string for ScienceDirect and the remaining results of this pilot can be found in Section 5.2.1 and Appendix C.
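To make the evaluation concrete, the two metrics can be sketched as follows, assuming quasi-sensitivity is the percentage of QGS studies retrieved by the search string and precision the percentage of relevant studies among all retrieved ones. The function names and placeholder identifiers below are our own, not part of the protocol.

```python
def quasi_sensitivity(retrieved, qgs):
    """Percentage of quasi-gold-standard studies the search retrieved."""
    hits = len(set(retrieved) & set(qgs))
    return 100.0 * hits / len(qgs)

def precision(retrieved, relevant):
    """Percentage of retrieved studies that are relevant."""
    hits = len(set(retrieved) & set(relevant))
    return 100.0 * hits / len(retrieved)

# Example with the pilot's numbers: 69 retrieved, all 18 QGS articles among
# them (placeholder identifiers stand in for the actual articles).
qgs = {f"qgs-{i}" for i in range(18)}
retrieved = qgs | {f"other-{i}" for i in range(51)}   # 69 articles in total
print(round(quasi_sensitivity(retrieved, qgs), 1))    # 100.0
print(round(precision(retrieved, qgs), 1))            # 26.1
```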

5.1.2 Pilot of the Automated Search

After an optimal search string for ScienceDirect had been defined, the automated search was piloted on the entire ScienceDirect database. All articles retrieved with this search string were evaluated according to the selection process and criteria defined in Section 4.2 of this document and Section 3.3 of the review protocol. This resulted in a list of all relevant articles from ScienceDirect and a final precision of the search string for this database. The results can be found in Section 5.2.2.


5.1.3 Pilot of the Selection Criteria

The selection process and criteria were piloted alongside the pilots of the QGS and the automated search. To say something about the relevance of a study, selection criteria and a selection process are needed. Each article found went through the selection process described in Section 4.2, but relevance was not only assessed by means of the selection criteria; we also judged whether the article was really within the scope of the SLR. This way, flaws in the criteria were noticed and adjustments were made accordingly. After a change in the criteria, the list of potentially relevant studies, i.e. the ones obtained after executing step one of the selection process, was assessed again with the new criteria. The results of this pilot can be found in Section 5.2.3.
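The multi-step selection process can be thought of as a staged filter, where each step is applied only to the studies that survived the previous step. The sketch below is a simplified illustration with hypothetical predicates, not the actual criteria from the protocol.

```python
def select(studies, steps):
    """Apply each selection step in order to the surviving studies."""
    remaining = list(studies)
    for step in steps:
        remaining = [s for s in remaining if step(s)]
    return remaining

# Toy example: screen on title/abstract first, then on the full text.
studies = [
    {"title_relevant": True,  "fulltext_relevant": True},
    {"title_relevant": True,  "fulltext_relevant": False},
    {"title_relevant": False, "fulltext_relevant": True},
]
steps = [lambda s: s["title_relevant"], lambda s: s["fulltext_relevant"]]
print(len(select(studies, steps)))  # 1
```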

5.1.4 Pilot of the Quality Assessment

The quality assessment criteria and the data extraction strategy were piloted at the same time, for three weeks. This was possible since the two parts are independent of one another. For some of the relevant studies found in ScienceDirect, a quality assessment was done. A Google Form was used to answer the questions described in Section 3.4 of the review protocol. By piloting the quality assessment on several different articles, more insight was gained into how well the criteria work for the different kinds of articles included. The results of this pilot can be found in Section 5.2.4.

5.1.5 Pilot Data Extraction

For all studies for which a quality assessment was done, the data was also extracted, to pilot the data extraction form. The aim was to evaluate whether the form is complete, the questions are clear, and the form is usable [1]. A digital form was created with Google Forms, containing all questions of the data extraction form in Section 4 of the protocol. The studies were read carefully in order to answer all questions. Whenever a flaw in the form was discovered, the form was adjusted accordingly, and all articles whose data had already been extracted were evaluated again with the changes. The results can be found in Section 5.2.5.

5.2 Results Pilot Study

This section describes the results of the different pilot studies described in the previous section.

5.2.1 Results of pilot of the Quasi-Gold Standard

After the relevance of all articles published by JSS within the given time period was evaluated with the selection process, and after some changes in the selection criteria, the final QGS for ScienceDirect contained 18 articles. The authors, title, and year of publication of these 18 articles can be found in Appendix B.

With this QGS, several search strings were evaluated by means of their quasi-sensitivity and precision. After seven trials, the string from trial six turned out to be optimal. It retrieved 69 articles from JSS, including all 18 articles of the QGS, resulting in a quasi-sensitivity of 100% and a precision of 26.1%. The search string is:

"software architecture"

AND

("architectural knowledge" OR "architecture knowledge" OR "architectural design decision" OR

"architecture design decision") AND

("capturing" OR "sharing" OR "communicating" OR "documenting") The other trials, with their results, can be found in Table 9 in Appendix C.


5.2.2 Results of pilot of the Automated Search

The search string defined in the previous section was used for the automated search in ScienceDirect, across all available venues. This resulted in 123 articles and 41 book chapters. Since there was no way of obtaining the full chapters, they were excluded from the pilot; a full copy is needed to correctly assess the relevance of a study. The 123 articles went through the selection process to assess their relevance, which finally resulted in a precision of 21.1%, close to the precision calculated during the evaluation of the search string.

5.2.3 Results of pilot of the Selection Criteria

The selection process and criteria defined in the protocol were applied to all articles published by JSS available on ScienceDirect within the given time period, and to the results of the automated search on ScienceDirect. Since the relevance of the articles was assessed not only with the selection criteria but also with a general judgment of what we considered relevant, we gained a lot of insight into the performance of the selection criteria. This extra assessment showed that the selection criteria included no irrelevant studies and excluded no relevant ones.

By applying the first step of the selection process to all articles published by JSS from 2004 till now, a list of 96 potentially relevant articles was obtained (the combined results of three reviewers). After steps two and three of the process were applied to these articles, the result was a QGS of 23 articles. When changes were made to the selection criteria, the QGS was reviewed again, leaving 18 relevant articles. Of the 123 articles retrieved from ScienceDirect, 26 remained after the selection process.

5.2.4 Results of pilot of the Quality Assessment

For fifteen of the 26 relevant articles from ScienceDirect, the online form for the quality assessment was filled in. Since the quality assessment checks the empirical value of a study, it was difficult to assess non-empirical studies; those articles therefore often obtained a very poor quality score. This was mainly an issue for articles presenting the current practice of AKS. For all other articles it was fairly straightforward to answer the questions.

At first, it was not possible to motivate the answer to each question. Since there are no exact guidelines on when to award how many points, this resulted in quality scores that appeared quite random. To solve this, an explanation area was added to each question, in which the given answer must be motivated.

The lack of exact guidelines causes another difficulty: their absence makes the scores not only appear random, but sometimes actually be somewhat random. It is, for instance, difficult to assess exactly when something is explained well enough. It is not possible to create such exact guidelines, because each article is different, each author describes things differently, and the guidelines would have to be applicable to every relevant article found. This makes it all the more important that the answers to the questions are motivated.

5.2.5 Results of pilot of the Data Extraction

For each of the fifteen articles for which the quality was assessed, the data was also extracted. Since the questions in the data extraction form focus on specific approaches proposed in studies, it was known beforehand that the form would not work for all studies. This turned out to be true: the form was hardly useful for studies describing the state of the art of AKS, where most questions had to remain unanswered. Since those articles do contain relevant information, the additional-notes question is very useful for this type of study.

Because the questions in the form are so focused on a specific approach, they were also slightly difficult to answer when a study was of the comparing type. This was solved by numbering the different approaches compared in the study and answering each question as accurately as possible. Since these articles do not propose anything, the answers may differ slightly from those for articles that do propose an approach, as the form is designed for the latter.

A last difficulty concerned the questions referring to the problem context (RQ2.1). Often the answers were not explicitly stated in the study. Giving a specific answer was therefore hard, and since the answer often came from interpreting parts of the description of the proposed approach, the answers may not always be completely accurate and/or objective. It was therefore important to motivate the answers, especially when they are based on interpretation. Sometimes the answers were based on the problem context of a case study, in which the approach was applied and tested in an industrial environment. The disadvantage is that the approach was probably not designed for that specific industrial environment alone, so answers based on interpretations of the case study probably limit the approach too much. To avoid such inaccuracies, the motivation clearly states when an answer is based on a case study.

6 Evaluation

The results presented in Section 4 were carefully validated with a pilot study, as described in Section 5. Since the pilot study was performed on only one database, and sometimes on only one venue, there is no way of knowing whether the protocol works for all databases and venues. Nevertheless, we assume that the results of the pilot are a good representation of the expected results for the databases and venues that were not piloted.

6.1 Evaluation of results

The protocol suggests aiming for an optimum search string with a quasi-sensitivity between 80% and 99% and a precision between 20% and 25%. In the pilot study, the objective evaluation using the QGS concept resulted in a search string for ScienceDirect with a quasi-sensitivity of 100% and a final precision of 26.1%. This search string (cf. Section 5.2.1) is well within the limits of the optimum search string, and even exceeds expectations with a quasi-sensitivity above the stated optimum. The pilot of the automated search on ScienceDirect resulted in 26 relevant articles among the 123 retrieved, giving a precision of 21.1%. As expected, this value is close to the precision found during the evaluation and within the limits of the optimum search string.

The number of relevant articles found with the automated search on ScienceDirect is considerably lower than the number of potentially relevant studies found in the preliminary search. As one of the goals of the preliminary search was to gain insight into the volume of relevant studies available, one would expect these numbers to lie closer together. However, since the selection criteria were only liberally applied in the preliminary search (only to the title, keywords, and abstract), which yielded a list of merely potentially relevant studies, and since the selection criteria changed later on, this is not an unexpected result.

The selection process, in which the selection criteria are applied, was applied to over a thousand articles. During the process we, including the supervisor, got a better understanding of the scope of the SLR: which articles we wanted to include, and which we wanted to exclude. Therefore, and due to the iterative nature of the process, some changes had to be made to the selection criteria, which was expected. With the final selection criteria, as defined in the review protocol, no relevant studies were excluded and no irrelevant studies included during the pilot. This result suggests that the criteria work well for this SLR.

As expected, the quality assessment does not work well for all relevant studies identified, since it assesses the empirical value of a study and non-empirical studies are also included in the SLR according to the protocol. Since the main goal of the quality assessment is to assign an importance weight to the selected studies, some caution is needed: before discarding a study based on a low quality score, the researcher should look at why the one who extracted the data gave the study its poor score.

As with the quality assessment, the data extraction form does not work well for all types of articles selected for the SLR. The biggest struggle lies with articles describing the current practices of AKS, since hardly any of the questions in the form apply to this type of study. The additional notes are therefore necessary, and when the data are interpreted, the researcher should take the limitations of the form and the additional notes into account.

Overall, the methods and strategies suggested in the protocol can probably be used for other SLRs, except, of course, for the subject-specific parts. The search strategy in particular is probably not only effective for this SLR but can likely be applied, with only minor changes, to SLRs on different subjects. The selection process (apart from the selection criteria), the quality assessment, and the data extraction strategy (except for the form) are not subject-specific and are likely at least a good starting point for other researchers interested in conducting an SLR in software engineering.

6.2 Evaluation of project

The project ran over 13 weeks; a lot of time went into writing and refining the protocol. While first versions of most sections of the protocol were on paper within two weeks, it still took quite some time before the pilot started. It might have been more efficient to start the pilot earlier, while still refining the protocol, as that would have led to an earlier identification of problems. While the late start of the pilot had no direct drawbacks, it was not beneficial to the efficiency of the researchers, nor to the perception of having enough time left to finish the project. In the end, better planning and looking further ahead than just the next two weeks would have made the project more structured.

First versions of the protocol and the thesis were finished quite soon, allowing much time for review and refinement, which benefited the quality of both documents. Some parts of the pilot study took a lot of time: the data extraction and quality assessment were piloted together and were very time-consuming. This was anticipated, however, as every article had to be read carefully, and some parts more than once, before one could confidently answer the quality assessment and data extraction questions. As the data extraction form was changed several times during the pilot, extracting data again from the articles already processed slowed things down even more. Since the pilot study was time-boxed, this posed no danger to our planning, but rather to the number of articles used for the pilot. In the end, the quality assessment and data extraction strategy were piloted on over half of the relevant articles from ScienceDirect.

The pilot of the QGS was expected to take a very long time, as the selection process had to be applied to over a thousand articles. It still took quite some time, but was finished quicker than expected. The same holds for the automated search: as it was done on ScienceDirect only, the search string was used only once, after which the resulting articles, minus those from JSS, went through the selection process. This was all done in one day, which allowed us to move on quickly to the next elements.

At the beginning of the project, it was often somewhat unclear what exactly was expected of the protocol, but as progress was made the objective became clearer, which made it easier to refine the protocol. The pilot study was very straightforward, and as we had learned a lot about protocols for SLRs, evaluating and assessing the results was not a complex task. Another reason for this is that clear objectives were often defined beforehand.

A personal reflection on the project of each of the authors of this document can be found in Appendix D.



7 Conclusion

This study resulted in a review protocol for an SLR on Architectural Knowledge Sharing approaches. In this protocol we suggest an objective method for evaluating the search strategy and constructing an objective search query. We adopted the quasi-gold standard concept introduced by Zhang et al., which helped create a systematic and unbiased search strategy. This strategy also increases the search perimeter by integrating an automated search with a complementary manual search. For the automated search, an optimum search string can be found by evaluating the results of the search against the quasi-gold standard. For the manual search, snowballing is mainly used, which means the reference lists of the relevant primary studies will be reviewed; conference proceedings and other kinds of grey literature related to the research topic can also be searched. Non-empirical studies will not be excluded, because we want as much relevant information as possible.

The four main parts of the review protocol, the Search Strategy, the Study Selection Criteria and Procedures, the Study Quality Assessment, and the Data Extraction Strategy, were tested in a four-week pilot study. Piloting on ScienceDirect, we showed that the objective evaluation of the search and the construction of an optimum search string led to strong results for quasi-sensitivity and precision. After a few iterations, final selection criteria were established that include the relevant articles and exclude the irrelevant ones for the research topic. These criteria were tested when establishing the QGS for ScienceDirect and when retrieving articles from ScienceDirect with the automated search. The automated search resulted in 26 relevant articles from ScienceDirect after the selection criteria were applied in the three-stage process. Using these 26 articles, the quality assessment and data extraction were piloted in parallel, which resulted in clear and usable forms for both methods.

Since the resulting protocol is not yet finished, as several sections were outside the scope of this study, there is no way of knowing whether the quality assessment and the data extraction will have their desired effect in the next stages. We tested whether the form worked and whether the quality assessment questions were answerable, but we do not know whether the extracted data will be useful for drawing conclusions. What is known is that the strategies proposed, the quality assessment questions, and the data extraction form work at least reasonably well for the studies they were piloted on.

The pilot study is assumed to be a good representation of how well the protocol works for the entire SLR, and hence we conclude that the resulting protocol works very well for the first stages of conducting an SLR.

The protocol developed for this study will be further refined and finalized by the supervisor of this project, after which the SLR is conducted. It is important to note that the protocol is not finished until the SLR is finished, and that there will likely be small revisions in the future. The methods suggested in this paper and in the review protocol have been carefully piloted and will therefore be useful for others interested in performing an SLR.



Bibliography

[1] B. Kitchenham and S. Charters, “Guidelines for performing Systematic Literature Reviews in Software Engineering,” EBSE Technical Report EBSE-2007-01, 2007.

[2] H. Zhang and M. Ali Babar, “Systematic reviews in software engineering: An empirical investigation,” Inf. Softw. Technol., vol. 55, no. 7, pp. 1341–1354, 2013.

[3] P. Brereton, B. A. Kitchenham, D. Budgen, M. Turner, and M. Khalil, “Lessons from applying the systematic literature review process within the software engineering domain,” J. Syst. Softw., vol. 80, no. 4, pp. 571–583, 2007.

[4] S. MacDonell, M. Shepperd, B. Kitchenham, and E. Mendes, “How Reliable Are Systematic Reviews in Empirical Software Engineering?,” IEEE Trans. Softw. Eng., vol. 36, no. 5, pp. 676 – 687, 2010.

[5] B. Kitchenham, R. Pretorius, D. Budgen, O. P. Brereton, M. Turner, M. Niazi, and S. Linkman, “Systematic literature reviews in software engineering - A tertiary study,” Inf. Softw. Technol., vol. 52, no. 8, pp. 792–805, 2010.

[6] H. Zhang, M. A. Babar, and P. Tell, “Identifying relevant studies in software engineering,” Inf. Softw. Technol., vol. 53, no. 6, pp. 625–637, 2011.

[7] O. Dieste and A. G. Padua, “Developing Search Strategies for Detecting Relevant Experiments for Systematic Reviews,” Proc. - 1st Int. Symp. Empir. Softw. Eng. Meas. ESEM 2007, pp. 215–224, 2007.

[8] T. Dybå and T. Dingsøyr, “Empirical studies of agile software development: A systematic review,” Inf. Softw. Technol., vol. 50, no. 9–10, pp. 833–859, 2008.

[9] P. Kruchten, “The Frog and the Octopus — Experience Teaching Software Project Management,” Proc. Can. Eng. Educ. Assoc., 2011.



Appendices

Appendix A. Preliminary search

The preliminary search resulted in a table of synonyms and alternative terms useful in our domain.

These terms, shown in Table 1, and the research questions were used to construct the initial search string for the automated search. This search string can be found in Section 3.2.3 of the protocol.

Table 1. Synonyms of search terms

Synonym                                      Source
Software development management              Clerc et al. (2007)
Knowledge engineering                        Babar et al. (2007)
Knowledge-based approach                     Li et al. (2012)
Architectural knowledge sharing approach     Research questions
Architecture knowledge management            Ali Babar et al. (2009)
Architectural decisions making process       Dragomir et al. (2014)
Architectural design decisions               Chen et al. (2010)

The databases considered in the preliminary search are the following:

• IEEE Xplore
• ScienceDirect
• SpringerLink
• ACM digital library
• Google Scholar

Searching these databases with Search string 1 already gave good results; only the ACM digital library returned just two articles. These results are shown in Table 2. The relevance was checked by two reviewers, and the precision, i.e. the percentage of relevant articles among all retrieved articles, was calculated. When the search was conducted with Search string 2, the ACM digital library returned more relevant articles, but also more irrelevant ones. We decided that we prefer a search string that returns more articles, even if more irrelevant articles are among them; this way it is more certain that we are not missing many relevant articles. The results of the search with Search string 2 are shown in Table 3.

Not all of the 1290 articles retrieved by Google Scholar with Search string 2 were checked. It was decided that the results retrieved with Search string 1 were sufficient, and that string was chosen for the automated search on Google Scholar. Table 4 shows which search string was used for each database, and Table 5 the results of the searches conducted with these strings.

Search string 1.
("architecture knowledge sharing" OR "architectural knowledge sharing" OR "decision sharing") AND "software architecture"

Search string 2.
("software architecture" AND "architectural knowledge" AND ("architectural design decisions" OR "sharing"))



Table 2. Preliminary search results with Search string 1.

Database              Articles/chapters   Relevant articles   Precision (%)
IEEE Xplore           48                  44                  92
ScienceDirect         16                  11                  69
SpringerLink          36                  29                  81
ACM digital library   2                   2                   100
Google Scholar        206                 141                 68

Table 3. Preliminary search results with Search string 2.

Database              Articles/chapters   Relevant articles   Precision (%)
IEEE Xplore           48                  44                  92
ScienceDirect         93                  51                  55
SpringerLink          192                 111                 58
ACM digital library   92                  56                  61
Google Scholar        1290                -                   -

Table 4. Databases with the corresponding search strings. The scope is the full text for all databases.

IEEE Xplore: ("architecture knowledge sharing" OR "architectural knowledge sharing" OR "decision sharing") AND "software architecture"

ScienceDirect: ("software architecture" AND "architectural knowledge" AND ("architectural design decisions" OR "sharing"))

SpringerLink: ("software architecture" AND "architectural knowledge" AND ("architectural design decisions" OR "sharing"))

ACM digital library: any field matches all of "software architecture", "architectural knowledge"; any field matches any of "architectural design decisions", "sharing"

Google Scholar: ("architecture knowledge sharing" OR "architectural knowledge sharing" OR "decision sharing") AND "software architecture"

Table 5. Search results for the databases and search strings in Table 4.

Database              Articles   Relevant articles   Precision (%)
IEEE Xplore           48         44                  92
ScienceDirect         93         51                  55
SpringerLink          192        111                 58
ACM digital library   92         56                  61
Google Scholar        206        141                 68

Of the 111 potentially relevant studies found with SpringerLink (Table 5), only 6 are articles; the rest are book chapters. ACM (Table 5) also returned only 6 relevant studies published in a journal; the rest are mostly proceedings, which can be considered grey literature. As for Google Scholar, it was often difficult to determine the exact origin and type of a study, and the list of sources in the Excel sheet is not complete. Both ScienceDirect and IEEE Xplore gave many relevant articles. Because ScienceDirect gave more relevant articles, the protocol was piloted on this database. Table 6 below dissects the potentially relevant results from ScienceDirect by venue:

Table 6. Results ScienceDirect, relevance per venue.

Venue                                      % of all relevant studies
JSS                                        56.86
Information and Software Technology        15.69
Book Chapter                               9.80
Science of Computer Programming            7.84
Future Generation Computer Systems         3.92
Information Sciences                       1.96
Computers in Industry                      1.96
Accounting, Organizations and Society      1.96



Appendix B. Quasi-gold standard of ScienceDirect

Table 7 shows the final QGS that was established for ScienceDirect. Table 8 shows articles that were excluded from the QGS in a later stage.

Table 7. The QGS for ScienceDirect.

Author | Title | Year
Capilla, Rafael; Jansen, Anton; Tang, Antony; Avgeriou, Paris & Babar, Muhammad Ali | 10 years of software architecture knowledge management: Practice and future | 2016
Manteuffel, Christian; Tofan, Dan; Avgeriou, Paris; Koziolek, Heiko & Goldschmidt, Thomas | Decision architect - A decision documentation tool for industry | 2016
Gaubatz, Patrick; Lytra, Ioanna & Zdun, Uwe | Automatic enforcement of constraints in real-time collaborative architectural decision making | 2015
van Heesch, U.; Avgeriou, P. & Tang, A. | Does decision documentation help junior designers rationalize their decisions? A comparative multiple-case study | 2013
van Heesch, U.; Avgeriou, P. & Hilliard, R. | A documentation framework for architecture decisions | 2012
Weinreich, Rainer & Buchgeher, Georg | Towards supporting the software architecture life cycle | 2012
Zimmermann, Olaf; Miksovic, Christoph & Küster, Jochen M. | Reference architecture, meta-model, and modeling principles for architectural knowledge management in information technology services | 2012
Hoorn, Johan F.; Farenhorst, Rik; Lago, Patricia & van Vliet, Hans | The lonesome architect | 2011
Tang, Antony; Avgeriou, Paris; Jansen, Anton; Capilla, Rafael & Babar, Muhammad Ali | A comparative study of architecture knowledge management tools | 2010
Unphon, Hataichanok & Dittrich, Yvonne | Software architecture awareness in long-term software product evolution | 2010
Christensen, Henrik Bærbak & Hansen, Klaus Marius | An empirical investigation of architectural prototyping | 2010
Jansen, Anton; Avgeriou, Paris & van der Ven, Jan Salvador | Enriching software architecture documentation | 2009
López, Claudia; Inostroza, Pablo; Cysneiros, Luiz Marcio & Astudillo, Hernán | Visualization and comparison of architecture rationale with semantic web technologies | 2009
Zimmermann, Olaf; Koehler, Jana; Leymann, Frank; Polley, Ronny & Schuster, Nelly | Managing architectural decision models with dependency relations, integrity constraints, and production rules | 2009
Jansen, Anton; Bosch, Jan & Avgeriou, Paris | Documenting after the fact: Recovering architectural design decisions | 2008
Tang, Antony; Jin, Yan & Han, Jun | A rationale-based architecture model for design traceability and reasoning | 2007
Tang, Antony; Babar, Muhammad Ali; Gorton, Ian & Han, Jun | A survey of architecture design rationale | 2006
Roeller, Ronny; Lago, Patricia & van Vliet, Hans | Recovering architectural assumptions | 2006

Table 8. Excluded papers from the QGS.

Authors | Title | Year
Van Vliet, Hans & Tang, Antony | Decision making in software architecture | 2016
Liang, Peng; Jansen, Anton; Avgeriou, Paris; Tang, Antony & Xu, Lai | Advanced quality prediction model for software architectural knowledge sharing | 2011
Kruchten, Philippe | What do software architects really do? | 2008
Sowe, Sulayman K.; Stamelos, Ioannis & Angelis, Lefteris | Understanding knowledge sharing activities in free/open source software projects: An empirical study | 2008
Corbin, Richard D.; Dunbar, Christopher B. & Zhu, Qiuming | A three-tier knowledge management scheme for software engineering support and innovation | 2007

References
