
Look before you leap

Supporting decisions in software evolution by modeling expensive changes first

Pavol Kincel

secult@gmail.com 19-10-2015, 51 pages

Supervisor: Dr. Ana Lucia Varbanescu
Host organisation: Quality Positioning Services BV

Universiteit van Amsterdam

Faculteit der Natuurwetenschappen, Wiskunde en Informatica
Master Software Engineering


Contents

Abstract
1 Introduction
  1.1 Problem statement
  1.2 Research objective
  1.3 Research plan
  1.4 Context
  1.5 Terminology
2 Related work
  2.1 Theoretical basis
    2.1.1 The need to change
    2.1.2 Architecture reconstruction
    2.1.3 Requirements
    2.1.4 Decision-making
    2.1.5 Operational prototyping
  2.2 Related experiments
  2.3 Reporting guidelines
3 Design
  3.1 Research question
  3.2 Approach
4 Results
  4.1 Data collection
  4.2 Architecture reconstruction
    4.2.1 Tools
    4.2.2 Dependency search process
  4.3 Requirements engineering
    4.3.1 Domain understanding
    4.3.2 Requirements elicitation
    4.3.3 Evaluation
    4.3.4 Specification and documentation
    4.3.5 Consolidation
  4.4 Model-based evaluation framework (MBEF)
    4.4.1 Rationale behind using MBEF
    4.4.2 Architecture
  4.5 Proof of concept
    4.5.1 Selection of QA scenarios
    4.5.2 MBEF example usage with RabbitMQ back end
    4.5.3 MBEF example usage with ZeroMQ back end
  4.6 Verification
    4.6.1 Results from the latency measurement in the real application
    4.6.2 Application and the model comparison
  4.7 Validation
5 Conclusion and Future work
  5.1 Conclusion
  5.2 Relation to existing evidence
  5.3 Implications and Outcome
  5.4 Limitations and future work
Bibliography
A Project proposal
B Project plan
C Configuration script examples


Abstract

Maintainability issues in complex systems have been well known for decades. To effectively handle the issues arising from increasing complexity, a proper investigation has to be done before decisions are made. Research question: How to support decision-making while facing a major change in the software? This thesis describes, in the form of a case study, a process of gathering information useful for decision-making purposes. The process consists of a partial architecture reconstruction of the existing software, gathering requirements, and building operational prototypes based on a model of the application. To support the feasibility of this approach, the usage of two inter-process communication back ends is investigated in a proof of concept. Latency measurements based on these back ends provide useful evidence about two out of three quality attribute scenarios.

In conclusion, the described process is able to produce rationale to support decision-making, although there is no short-term way to verify it in the production environment.


Chapter 1

Introduction

Maintenance and software evolution are among the main topics in the software engineering domain [CE15]. Quality Positioning Services BV (QPS) has given me the opportunity to work with an example of a complex application, with a large code base continuously developed and maintained over approximately 20 years.

The application consists of dozens of various submodules. There are complex data flows and interactions between the data acquisition, the processing and the interfacing with the user. Strict requirements are placed on latency and throughput due to the need for immediate data processing. At the same time, the application has to be responsive and convenient for the user.

As the software changes under the influence of new requirements, and the number of communicating modules in the application grows, the inter-process communication has been put under investigation. Stakeholders are seeking improvements in important quality attributes. This investigation also provides an opportunity to replace the current, shared-memory-based solution.

1.1 Problem statement

To summarize the problems with the inspected software, it seems appropriate to cite the second and fifth of Lehman's "Laws of Software Evolution", as stated by Lehman et al. [LRW+97].

“As an E-type system evolves its complexity increases unless work is done to maintain or reduce it.”

Although some maintenance had been done on the inter-process communication (IPC) subsystem in the company's software, the developers said that it had become increasingly difficult to work with. Tackling the increasing complexity of a system is a problem not only in this company. This thesis documents one way in which decisions regarding software evolution can be supported.

“As an E-type system evolves all associated with it, developers, sales personnel, users, for example, must maintain mastery of its content and behavior to achieve satisfactory evolution. Excessive growth diminishes that mastery. Hence the average incremental growth remains invariant as the system evolves.”

Over 20 years, people have joined and left the development in the company, taking parts of the implicit knowledge about the system with them. Our interest is in a situation that developers face more and more often, and not only in this company: an important architectural decision has to be made during the modernization of complex software. This situation, its context, and one possible way to successfully go through this process are examined in this research.

1.2 Research objective

The Goal Question Metric (GQM) template is used to describe the research objective:

Analyze the proposed process of information gathering
for the purpose of providing rationale to support decision-making during a software modernization
with respect to the ability to accurately predict the application's behavior
from the viewpoint of a system analyst
in the context of the environment of a software development company.

In other words, the objective of this study is to provide empirical evidence about one of the approaches to supporting software modernization in a company environment. As decision-making during the modernization of working software is difficult, mainly due to the risk of threatening an already working business, any additional real-world experience from this domain could help future engineers to mitigate the risk.

As proposed by Jedlitschka and Pfahl [JP05], the objective of the research is stated according to the GQM method. This is done for the "comparability of research objectives, e.g., for the purpose of systematic reviews."

1.3 Research plan

The assignment consists of several elements stated in the research proposal (Appendix A):

• Investigation of the data flow between modules in the application
• Identification of the requirements for the application, with interest in those related to the IPC
• Evaluation of the available middleware solutions
• Implementation of a proof of concept based on the selected solution

The problem was analyzed and a plan to address the issues was prepared.

Data flow investigation was expected to be addressed by an architecture reconstruction approach based on the Reverse Architecting approach for complex systems from Krikhaar [Kri97] and the research of Ducasse and Pollet [DP09].

Requirements for the IPC were expected to be addressed using recent knowledge available in the requirements engineering domain.

The evaluation of possible middleware solutions was planned as a replication of the research done in 2011 by Dworak et al. [DESS11]; subsequently, the proof of concept was planned to be implemented.

All of these parts were addressed, at least in part, during the research. The rest of the thesis presents how, and to what extent, the research followed the plan, and what the results are.

1.4 Context

The research described in this thesis was done during the author's three-month internship at the QPS company and shortly afterwards. The project proposal described the company's need to explore a possible upgrade of the inter-process communication subsystem in a Windows-based C++ application (Appendix A). The researcher was on-site daily during the three-month period. Around 20 people were involved in the development of the software and interacted with the researcher.

This research involved an investigation of the current situation and an architecture reconstruction of the subsystem responsible for the communication. 35 requirements related to the IPC were gathered, and 29 different IPC tools and libraries were at least partially evaluated for compliance with the requirements.

1.5 Terminology

Several terms are used often throughout this thesis. They are explained below; their meaning might be specific to this application.

inter-process communication (IPC) "Exchange of data between one process and another, either within the same computer or over a network." [IPC15] There are multiple types of inter-process communication techniques available, e.g. shared memory, (named) pipes, message passing, signals and semaphores. In our case, we are interested in MS Windows-based inter-process communication methods that are useful in near real-time systems and able to transfer large amounts of data.

model - a simplified representation; in our case, of the software under investigation

future system, solution - the desired system after the change of the inter-process communication subsystem is made

application node - an executable program or process within the application that communicates through inter-process communication with other application nodes


Chapter 2

Related work

Literature related to this study is of three types:

• studies explaining the theoretical concepts and methodological approaches that are either used in or related to this study;
• experiments or studies similar to our research, either in design or in the common goal of the study;
• guidelines for writing this document, explaining the rationale behind the document's structure and the motivation behind some of the applied concepts.

2.1 Theoretical basis

2.1.1 The need to change

Changes in software evolution emerge from two sources, according to Burge et al. [BCMM08]: from the continuing changes in hardware capabilities, and from the volatility of user requirements. Other sources of new requirements are the management and development teams, with demands on maintenance growing as the software grows. This is in line with the studies of Lehman et al. [LRW+97], who also recognize the continuing growth of software over time as an issue.

2.1.2 Architecture reconstruction

There is a widely accepted assumption that, to be able to modify software, a person should be able to comprehend it [Kri99]. Complex software is often developed over years or decades and can be the joint effort of multiple people working on the project at different times. One way to understand the system and to keep track of its changes is to maintain an architectural representation. It often happens (as in our case) that the architecture is outdated, or that it is not thorough enough at particular points of interest.

Krikhaar [Kri99] explains the theory of Relation Partition Algebra (RPA), which is a theoretical basis for software architecture reconstruction tools. In another article [Kri97], he explains the Reverse Architecting approach, which is said to be applicable to various complex systems. Three steps are described in his article: analysis of import relations, analysis of the part-of hierarchy and analysis of use relations at code level.

In 2012, Ganesan [Gan12] reused some of Krikhaar's former data extraction scripts for extracting hierarchy and file name extensions. He also developed a tool called FACE, which was used to extract data from C++ source code, and mentions the use of the Apache Lucene search tool to analyze similarities and provide searchability through the data.

Bass et al. [BCK13] explain four basic steps of software architecture reconstruction:

1. Raw view extraction - obtaining data from the source code and from people
2. Database generation - standardization of the data for further use
3. View fusion - combination of the data into valuable information
4. Architecture analysis - using the information for the intended purpose

Note that in their book, the architecture analysis part is explained for different purposes than in our research - their intention is to use it mostly for conformance checking. In our case, the purpose is more exploratory.

For our research, these sources of information about architecture reconstruction are valuable due to the need to identify the parts of the software directly involved in the inter-process communication, the work-flow of the communication, and the interfaces used when IPC is needed.

2.1.3 Requirements

During the internship, requirements related to the IPC were gathered and prioritized. These requirements serve as a basis for the decision-making.

Requirements engineering, according to van Lamsweerde [vL09], has a life cycle in which different activities are executed and yield various products. These activities are:

• domain understanding
• requirements elicitation
• evaluation and agreement
• specification and documentation
• requirements consolidation

Domain understanding

Studying the current application and understanding the problems and challenges in the domain are an essential part of requirements engineering, due to the need to capture the scope of the research, identify the right stakeholders, and identify the constraints given by the environment or legacy systems.

Requirements elicitation

This process consists of discovering the requirements, which can be gathered in multiple ways.

An important difference between classical requirements elicitation and the requirements elicitation done in our case is that there is already working software - the knowledge of what is needed has been used and gathered over years. The problem is that this knowledge is not systematically gathered and stored. Also, as time goes by, new constraints and expectations are put on the system, with performance demands, as always, growing over time. Multiple techniques for requirements elicitation can be used, such as apprenticing with the customers [BH95] or interviewing the stakeholders. Developers, end users, management - they can all provide better insight into the product. Specification sheets can be consulted to understand the constraints given by third parties.

Evaluation

Prioritization, making sure that the right requirements are captured, and negotiation about conflicting requirements are steps that help with creating the right product. It is a way to focus the execution energy on the right goals and to prevent some conflicts in the development stage.


Specification and documentation

Multiple aspects influence the documentation of the requirements. If the intended audience is composed of non-technical users (e.g. for the purpose of validation or prioritization), the form of the documentation should be understandable to them.

Persona stories [Hud13] are one possible way to describe requirements in an understandable manner. People can identify with the personas or imagine a real situation of a persona using the system in a particular way. This leads to a better understanding of the requirements and makes the reasoning more consistent, as requirements can be directly mapped to a situation. Furthermore, a more precise specification of the requirements can be made with Quality Attribute scenarios, explained in Software Architecture in Practice by Bass et al. [BCK13].

Consolidation

According to van Lamsweerde [vL09], validation and verification should be done for quality assurance purposes. Validation means checking whether the actual needs are represented by the requirements; verification means checking whether the requirements are consistent, testable, etc.

2.1.4 Decision-making

As software architecture can be defined as a set of decisions, the drivers of these decisions play a significant role in the software architecture domain. "There are three main factors that drive software architecture design: reuse, method, and intuition" [FCKK11].

For every valid requirement, there should be a way to determine whether it is satisfied or not, or the extent of its satisfaction. In an ideal case, the requirements are all clear and well defined, and their satisfaction is possible without any trade-offs. In the real world, trade-offs are often made in favor of some requirements for various reasons. There are multiple ways to handle these kinds of situations. One of them is requirements prioritization, which explicitly makes some requirements more important than others. Prioritization can be done to various extents, depending on the intentions.

“A decision-making technique describes a systematic way of choosing an alternative among a finite number of given ones.”[FCKK11]

There are various decision-making techniques available for use in software development - most of them are useful, but none of them can be said to be universally better or worse than another [YHNM11, KPBP12, KPBP13]. There is one well-known decision-making paradox, explained by Triantaphyllou and Mann [TM89]. In the article, the problem - which decision-making method to use - is lifted to the meta-level, and decision-making methods are used to determine which method is the best for choosing the best decision-making method. The paradox lies in the fact that method X points to method Y as the best, but when method Y is used, method Z is marked as the best one.

Another insight into the reasoning during the time when a software architecture is being changed was summarized in an empirical study by Dersten, Axelson and Froberg [SJJ12]. The authors gathered questions asked by architects and technical managers about refactorings that should be made. The questions were then sorted by the area of interest to which they belong, e.g. development, sales, planning.

The results showed that both managers and architects were interested in technical details, cost, profit, supplier information and requirements prior to making a decision. Some of the domains were often not asked about or were missed by some of the decision-makers, and their own experience was used to estimate some of the information needed to decide. At the same time, these "soft factors" were difficult to estimate and not present in the release planning tools. Also, to support decisions for higher management, well-reasoned arguments and facts were said to be important.

Based on this evidence, the authors believe that instead of computer-aided decision-making tools, decision-makers should use support packages including guidelines for impact investigation.


Figure 2.1: Difference between throwaway and evolutionary prototyping by Davis[Dav92]

2.1.5 Operational prototyping

Prototyping, and the difference between evolutionary and throwaway prototyping, is explained by Davis [Dav92]: "A prototype is the partial implementation of a system built expressly to learn more about a problem or a solution to a problem". He states that the two have almost nothing in common, as the motivation and intended use are different (see Figure 2.1).

As both approaches had some problems with ineffectiveness, the author proposes operational prototyping as an advancement in the methodology. Operational prototyping is said to be different: some of the requirements are well understood and important, and therefore should be taken care of during the design of the prototype. The software evaluated by the stakeholders is composed of a quality baseline software and a throwaway prototype used together with it. User experiences should be gathered, and even if users consider the new prototyped feature useful, the prototype is thrown away. A related feature request is written down and used as a reference for future implementation in the quality baseline software.

2.2 Related experiments

Virtual network simulator The authors propose a system to study network algorithms [DF84]. They implemented a C program on Unix, with virtualized nodes as separate processes. Messages were sent between nodes and message-queuing processes. The authors implemented a simulator driver to handle the execution of the rest of the application, and introduced a specifications program that specifies the nodes, their number, topology, names and the number of messages. The article is mostly about how the program works. The authors' intention was to support rapid prototyping of network algorithms, their development in a normalized environment and a reduction of overhead and effort, in contrast to the simulation languages of that time. Unfortunately, there is no form of validation of their simulator in the article.

Middleware trends and market leaders 2011 Dworak et al., in the article Middleware trends and market leaders 2011 [DESS11], capture an evaluation of the middleware solutions available in 2011 that could replace an old, CORBA-based solution used to control particle accelerators at CERN. Maintainability and extensibility were issues with the old solution, and they selected new libraries that replaced the CORBA-based solution over time. However, the research was done in 2011 and a lot of new development has happened in the domain since then. For example, the AMQP-based product family was not taken into account because the AMQP standard was still evolving [sic]. At present (2015), the AMQP standard has reached version 1.0 and there are several mature solutions based on both the old and the new AMQP versions.

Application of Windows Inter-process Communication in Software System Integration Yuan and Gu in 2010 [YG10] explain another approach to solving an inter-process communication problem in an application with real-time requirements. Their application of interest was a surface modeling tool in the aerospace engineering domain, and they explain the usage of pipes and shared memory with a proxy module. They also describe seven types of IPC in Win32: "COM/DCOM, File Mapping, WM_COPYDATA, Pipe, Mail slots, RPC and Windows Sockets". They then explain pipes in more detail.

2.3 Reporting guidelines

Guidelines for conducting and reporting case study research in software engineering Runeson and Host in 2008 [RH09] justify the usage of case studies in software engineering. They explain that case studies, and empirical studies in general, are not extensively used in the domain. Criticism of case studies is also mentioned - "for being of less value, impossible to generalize from, being biased by researchers etc". The authors explain that the objects of study are hard to study in isolation, and that case studies are not intended to generate the same results as controlled experiments. The motivation seems to be based on the belief that "knowledge is more than statistical significance". The authors also gather definitions of the case study, all focused on the same goal - to investigate a contemporary phenomenon in its context. A lack of experimental control and an unclear distinction between the context and the point of interest are also mentioned as common identifiers.

Reporting Guidelines for Controlled Experiments in Software Engineering The reporting guidelines for empirical software engineering from Jedlitschka and Pfahl [JP05] are partially used to form the structure of this document, and their proposed usage of the Goal Question Metric in the research objective section was adopted.

The creation of a laboratory package is supposed to be a major improvement in the usability and replicability of a study, as Brooks et al. explained in 2008 [BRW+08].


Chapter 3

Design

3.1 Research question

The case that we investigate in this research is the following:

There are seemingly tens of equivalent design alternatives. The stakeholders need to decide which alternative is the most suitable for their use. An incorrect decision could cause a significant waste of time and effort, and therefore money, with a possible risk of threatening a working business. To convince management about the adoption and implementation of a design alternative, extensive rationale-supported evidence has to be used. The alternatives vary in reliability, performance and other quality attributes.

The research question follows:

RQ: How to support decision-making while facing a major change in the software?

With information from the research done by Dworak et al. in 2011 [DESS11], we can try to answer this question: an evaluation of design alternatives, with consideration of the requirements, has to be done. Depending on the requirement types, an even more detailed comparison can be made. In the end, sufficient rationale can support the best alternative.

In this thesis, a process of obtaining this kind of evidence is presented. The following null hypothesis is expected to be falsified:

H0: Usage of the proposed evaluation framework does not influence the decision-making in this situation.

To falsify the null hypothesis, we need to show that at least one decision about the software was based on the rationale acquired during the work with the evaluation framework.

3.2 Approach

Data collection

Runeson and Host [RH09] propose the usage of a case study protocol in which decisions are documented. The case study protocol should also serve as a logbook.

Analysis

Based on the process of implementing a possible solution in the evaluation framework and running the measurements, the researcher should be able to use this information as support for the architectural decisions. The analysis of the results is done by determining whether any decision is made or not, especially decisions leading to the exclusion of a particular solution from further evaluation.


Validity

For every decision made, the evidence supporting the decision has to be traceable and documented. Any decision made without appropriate evidence should either not be taken into consideration or be returned for additional documentation. Also, triangulation should be used to validate results from the measurements, for example by calculating the estimated results.

“For case studies, the intention is to enable analytical generalization where the results are extended to cases which have common characteristics and hence for which the findings are relevant, i.e. defining a theory.”[RH09]


Chapter 4

Results

4.1 Data collection

Contrary to the proposed usage of a case study protocol to guide the data collection procedure, the project plan served as the basis for the research. Important deadlines and the theoretical bases for the research were stated in it. At the beginning of the on-site research, general guidelines from the plan were put on a whiteboard (see Figure 4.1) with time estimates. General observations and experiences from the study were collected in a log and an internal knowledge base website. The knowledge base website has been used as long-term documentation of the research; it also serves as an introduction to the laboratory package. The laboratory package was made to simplify possible replication or to help with other research purposes.

4.2 Architecture reconstruction

Prior to the detailed technical inquiry into the software, preparation for a change in the software started with understanding the purpose of the software, the users' needs and the concerns of the stakeholders. Understanding the motivation and the context behind previous design decisions was also in scope. This important part of the investigation was mostly done through formal and informal interviews within the company, by using the software as an end user and by attending the company's meetings. A clearer picture of the software was formed this way, and all the information was written into a logbook. Hands-on experience from a developer's point of view - e.g. an attempt to do a small refactoring (changing a namespace) of one part of the software - also enriched the knowledge of the application.

The motivations behind executing the software architecture reconstruction (SAR) were:

• the need to identify the interfaces of the inter-process communication middleware
• the need to get acquainted with the software

The reconstruction was mainly focused on the details regarding inter-process communication and the modules using it, specifically on the module parts that were responsible for the IPC.

4.2.1 Tools

It was given that the reconstruction would be done purely on an application built in C++. A representative sample of libraries was available only for this language. Therefore, tools that support information extraction from this language were searched for and then used. The platform was also relevant for our research, as Windows 7 and 8 were the operating systems used to run the software; therefore, any dynamic-analysis-based tools not working on these platforms were not taken into consideration.

The search for tools supporting architecture reconstruction started with the paper from Armstrong and Trudeau [AT98], where the authors mention and review some of the tools used for extracting information from the software architecture: Rigi, the Dali workbench, the Software Bookshelf, CIA and SNiFF+. In the PhD thesis of Krikhaar [Kri99], the tools Rigi and Dali were also mentioned. The Software Architecture Group (SWAG) from the University of Waterloo, besides the Software Bookshelf implementation called The Portable Bookshelf, developed some other tools supporting SAR.

Figure 4.2: Example of a dependency graph generated by CppDepend

All of the tools mentioned were difficult to obtain or run, as they were built more than 10 years ago for different systems. I was able to run lsedit, a graph visualization tool.

Various Architecture Analysis & Description Language (AADL) tools were found, but not used, as the motivation behind using these tools is clearly different (the software is already built).

Nowadays, architecture support tools are more and more integrated into integrated development environments (IDEs) - for example, the generation of class dependency diagrams in Microsoft's Visual Studio 2013 Ultimate and 2015 Enterprise RC. This helped to understand the structure of the project, and it could be possible to reuse the graph to describe some upcoming changes. Using the profiler to generate caller/callee dependencies was also possible.

However, there were some problems with visualizing some of the information:

In the C++ language, namespaces are used to structure the code into logical blocks and to prevent naming conflicts. When these namespaces are not used systematically in the whole project, tools have problems identifying the module to which a piece of code belongs.

Some logical modules in the real code are flat and unstructured - and over time, they grow to a size at which an attempt to render them hangs the computer.

With a static code analysis tool called CppDepend, it was possible to parse the code and visualize the information in a dependency structure matrix and in dependency graphs (see the example in Figure 4.2). It was also possible to submit search queries in a query language called CQLinq.

4.2.2 Dependency search process

Static code

12 header files were provided as containing the core of the inter-process communication information. A header parser tool called CppHeaderParser was used to extract the defined variables and classes from the header files. A small Python script was created to automate the extraction process and to generate search queries for the CppDepend tool.

These search queries were then executed on the application code base by the CppDepend tool to locate other classes using or inheriting the IPC classes and variables. The results were exported as XML files for further analysis.
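As a rough illustration of this step, the sketch below parses a header with CppHeaderParser and emits one search term per extracted class or variable. The query format actually fed to CppDepend (CQLinq) is not reproduced in the thesis, so the generated strings here are hypothetical placeholders.

# Illustrative sketch only: extract class and variable names from IPC headers
# with CppHeaderParser and turn them into search terms for a dependency tool.
# The query template below is a hypothetical placeholder, not the CQLinq used in the thesis.
import sys
import CppHeaderParser  # pip install cppheaderparser

def extract_symbols(header_path):
    header = CppHeaderParser.CppHeader(header_path)
    class_names = list(header.classes.keys())
    variable_names = [v["name"] for v in header.variables]
    return class_names + variable_names

def make_queries(symbols):
    # Placeholder query template; adapt to the query language of the analysis tool.
    return ["// find code depending on {0}\nSEARCH {0}".format(s) for s in symbols]

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for query in make_queries(extract_symbols(path)):
            print(query)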

Identifiers defined by the pre-processor directive #define were not processed by this approach. As there were only a few of them, they were searched for with Visual Studio's "Find in files" tool and added to the database separately.

Dynamic analysis

Another approach was used to triangulate the results and to obtain some of the information needed to create sequence diagrams.

Visual Studio has an option to attach a profiler to running code. This was done on the software, with parts of it built with the compiler set to include debug information. Caller/callee information could then be exported as CSV from the processed data in Visual Studio. As a result, we were able to obtain the subset of binary relations involving IPC classes that were used during run-time.
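A minimal sketch of how such an export could be filtered is shown below; the CSV column names ("Caller", "Callee") and the IPC class names are assumptions made purely for illustration, since the exact layout of the Visual Studio export is not reproduced in the thesis.

# Illustrative sketch: keep only caller/callee rows that touch known IPC classes.
# Column names "Caller" and "Callee" are assumed for illustration; adjust them to
# match the actual CSV exported from the Visual Studio profiler.
import csv

IPC_CLASSES = {"SharedMemoryChannel", "IpcMessage"}  # hypothetical class names

def ipc_relations(csv_path):
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            caller, callee = row["Caller"], row["Callee"]
            if any(cls in caller or cls in callee for cls in IPC_CLASSES):
                yield caller, callee

for caller, callee in ipc_relations("profiler_export.csv"):
    print(f"{caller} -> {callee}")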

People

It is crucial to involve people who have knowledge of the system in the reconstruction, as no source analysis tool is able to take all aspects into consideration, nor to verify the findings.

4.3 Requirements engineering

4.3.1 Domain understanding

This was done by using the old application, talking to the stakeholders, and reading specification sheets, manuals and online knowledge bases for the old application, but also by getting general knowledge about maritime geodesy and navigation - the domain in which the software is used.

One of the important parts was the architecture reconstruction described above. Reconstructing the architecture of the old software greatly enhanced the understanding of the system, showing pitfalls that a person can learn from, and it explained the flow of the data through the system. This data flow was abstracted into the model of the application.

4.3.2 Requirements elicitation

Interviews with stakeholders played the most important part in this step. The interviews were of various durations, and with some of the stakeholders interviews were done multiple times to ask more questions or to specify given information in more detail. The information was written down during the interview. Possibilities like recording an interview on camera were considered but not executed, mainly due to the disturbing effect on the stakeholders' openness.

The stakeholder community was composed of the company staff available on-site, including people from the development, testing and support departments. Real use situations were mediated by the product manager. As the rest of the staff was also trained in using the software in production as end users, they were able to provide information from the user's point of view. Apprenticing, as proposed by Beyer, was done with a person from the testing department to seek requirements regarding the testability of the system.

4.3.3 Evaluation

The evaluation was done during one of the meetings with the stakeholders, where the requirements were presented and a discussion about them was initiated. During the discussion, the proposed requirements were refined and new requirements were added. After the meeting, feedback forms about the requirements were given to the stakeholders. The requirements were prioritized with the MoSCoW method. The reasons for using the MoSCoW prioritization approach were the minimized time effort on the stakeholders' side (compared to the Weighted Sum Model or the Analytic Hierarchy Process) and the sufficient precision of this way of prioritization for our case.

The MoSCoW prioritization method is easy to understand and execute - respondents state whether a requirement is of one of the following types:


Must The requirement is compulsory, in the sense that the project could be void if the requirement is not met.

Should The requirement is important, but not critical to the project.

Could The requirement is a nice-to-have.

Won't/Would The requirement will not be addressed in this stage, or it is not desirable in the application at all.

After the feedback forms were gathered, the following rules were used to refine the priority of a requirement:

In general, the stakeholders in this case were people who understand the software and the needs of the users, so their opinions were quite similar. But there were cases in which a stakeholder had a different opinion about a requirement. The stakeholder was then interviewed to explain his opinion. If the explanation brought more insight into the problem, it was discussed with the other stakeholders; otherwise the final priority was assigned based on the modus (most frequent value) of the votes. Each stakeholder had one vote, and the feedback forms were not disclosed to the rest of the stakeholders, to prevent anchoring, biasing and politics inside the company.
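The rule of taking the modus of the votes and following up on dissenting stakeholders can be written down compactly; the sketch below is an illustration of that rule only (the requirement names and vote data are invented), not a tool used in the thesis.

# Illustrative sketch: combine MoSCoW feedback forms into a final priority per
# requirement, flagging requirements where a stakeholder disagrees with the majority
# so that the dissenting stakeholder can be interviewed.
from collections import Counter

def consolidate(votes_per_requirement):
    """votes_per_requirement: dict mapping requirement name -> list of MoSCoW votes."""
    final, follow_up = {}, []
    for req, votes in votes_per_requirement.items():
        counts = Counter(votes)
        priority, _ = counts.most_common(1)[0]   # the modus of the votes (ties broken arbitrarily)
        final[req] = priority
        if len(counts) > 1:                      # at least one dissenting vote
            follow_up.append(req)
    return final, follow_up

votes = {
    "Latency under normal load": ["Must", "Must", "Must", "Should"],
    "Transmission of large chunks": ["Must", "Must", "Must"],
}
print(consolidate(votes))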

4.3.4 Specification and documentation

Only some requirements from the overall requirements set are shown. Based on the article from Hudson [Hud13], persona stories were used as a way of representing the requirements in context. This way of representing requirements helps stakeholders gain better understanding and deeper insight. My addition to this approach was the usage of familiar or well-known characters - namely characters from the Star Wars movies. The underlying theory is that remembering new information about familiar objects takes less effort than remembering new information about new objects. By eliminating the need to remember new objects and by simply using the familiar characters, no new names need to be remembered.

For the purposes of requirements prioritization, the persona stories were presented first, and afterwards, the more detailed requirements derived from these stories were presented to be prioritized.

4.3.5 Consolidation

Verification was done by enforcing the description of a requirement into a predefined template. The predefined fields are related to the measurability of the satisfaction of the requirement and the degree of its satisfaction. The template is in Appendix D. Validation was done partially during the prioritization - requirements that did not add value in the eyes of the stakeholders were categorized as won't and therefore were not used. The other part of the validation - determining whether all relevant requirements are captured - is an ongoing challenge in the business modeling domain in general.

It was assumed by the author that the satisfaction of a requirement can be predicted by an analysis of the available information about the system, its modules, and design decisions before the actual implementation. The information can be obtained from various sources, including:

• GitHub repositories
• issue trackers
• documentation of the system
• measurement
• hands-on experience

The focus was put mostly on the last two - measurement and hands-on experience - to determine the satisfaction of the requirements. Requirements are written in a form similar to Quality Attribute scenarios as proposed by Bass, Clements and Kazman [BCK13]. The focus is on the requirements being "unambiguous and testable".


Luke (Module developer)


Luke is a module developer. He develops driver and monitor modules for the application. It is not his job to struggle with the complexity of the rest of the application, because he is obliged to deliver the modules as fast as possible.

To develop driver modules, he needs to know the structure of the incoming data, the settings applicable to the driver, and the structure of the data that the driver sends further into the system. He does not care which process will consume the data afterwards (L1). To successfully test and prove the functionality of a driver, he has to be able to create and modify settings and input data, and to see the output data (L2). He does not need to know anything about the inter-process communication; for him, it is just a black box to which the module sends the data (L3).

He needs to create a driver for a new laser scanner with a scan rate defined in the specifications (L4). He is able to create the driver, but he does not know how much data from the scanner is required to be processed afterwards (L5).

To develop monitor modules, he needs to know which data are available to be used. He wants to have simple access to the available sources of data and the structure of the data (L6) (L7) (L8).


• observations during the process of evaluation and measurement run-time
• configuration script files and measurement results files
• possible decisions made based on the data

4.4 Model-based evaluation framework (MBEF)

4.4.1 Rationale behind using MBEF

During the architecture reconstruction phase, interfaces for the inter-process communication module were searched for, but no clear separation was found. It was difficult to simply separate the part of the system responsible for the inter-process communication, mostly due to the years-long entangling of the communication with the other modules. A lot of time and effort was spent introducing the latency measurement code into the application (used in Section 4.6.1), experiencing the burden of the complexity of the application.

During a preliminary search for possible inter-process communication alternatives that could be used in the system, dozens of possibilities were found and had to be investigated. To be able to evaluate them and see whether they are suitable as a replacement of the current IPC, a common platform was needed. Assuming that it would be inefficient to use the existing legacy application as that base, an alternative in the form of a model of the application was chosen. The assumed advantages were:

• comparability of the measurements
• faster development
• a better ability to simulate future challenges - the current application could limit the potential of the usage of the IPC
• the ability to measure only the communication module's behavior, without side effects caused by the application (ceteris paribus assumption)

Intended use

Jeruchim et al. [JBS02] state in the preface of their book: "the objective of modeling and simulation is to study and evaluate the behavior and performance of systems of current interest". The intended use of the Model-Based Evaluation Framework (MBEF) is to aid the decision-making process by predicting the satisfaction of some requirements by an application in the process of evolution. The purpose is to provide evidence for qualified decisions regarding the evolution of the inter-process communication subsystem in the existing software, because wrong decisions may be costly and time-consuming.

4.4.2 Architecture

In accordance with the throwaway prototype approach, the model is not particularly well designed. This can mostly be seen in the spread of the measurement tools across the application and in the inclusion of the Controller unit within the measurement tools, as shown in Figure 4.4. Sharing the same virtual environment among the Python-made tools (Memory usage measurement, Controller, CPU usage measurement) speeds up the development, as third-party libraries can be shared among the units.

Elements catalog

ApplicationModel The main framework library. There are three basic types of nodes implementing the abstract IApplicationNode: GeneratorNode, RepeaterNode and SinkNode. These nodes are used to represent the subprocesses of the application. Furthermore, GeneratorNode and SinkNode implement SaveResultsMixin, which is used to save timestamping information, such as the time of emission and the time of delivery of the transmitted block.


Figure 4.4: Module view


Figure 4.7: Abstract implementation of the communication back end

GeneratorNode Represents a data generation node. In the real application, this could be an arbitrary process handling data coming from a source outside of our application. Its responsibility is mainly to generate data within predefined parameters.

RepeaterNode Besides receiving data from either GeneratorNodes or other RepeaterNodes, the RepeaterNode has the ability to introduce a delay into the transmission process. This feature simulates the work load - the processing of the data in the real application.

SinkNode This node type is responsible for receiving data from the other nodes and generating delivery time-stamps for the data.
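To make the catalog more concrete, the sketch below shows one possible shape of these abstractions in Python. It is an illustrative reconstruction based only on the names above; the actual MBEF code, its method signatures and its transport layer are not shown in the thesis, so everything beyond the class names is an assumption.

# Illustrative sketch, not the actual MBEF code: one possible shape of the node
# abstractions named in the catalog. The transport (send/receive) is left abstract
# so that different IPC back ends (e.g. RabbitMQ, ZeroMQ) can be plugged in.
import time
from abc import ABC, abstractmethod

class SaveResultsMixin:
    """Collects emission/delivery timestamps for later latency analysis."""
    def __init__(self):
        self.results = []  # list of (block_id, timestamp_ns)

    def record(self, block_id):
        self.results.append((block_id, time.perf_counter_ns()))

class IApplicationNode(ABC):
    @abstractmethod
    def run(self):
        """Drive the node until its configured message count is reached."""

class GeneratorNode(SaveResultsMixin, IApplicationNode):
    def __init__(self, transport, message_size, count):
        super().__init__()
        self.transport, self.message_size, self.count = transport, message_size, count

    def run(self):
        for block_id in range(self.count):
            self.record(block_id)                        # time of emission
            self.transport.send(bytes(self.message_size))

class RepeaterNode(IApplicationNode):
    def __init__(self, transport, work_load_ms=0):
        self.transport, self.work_load_ms = transport, work_load_ms

    def run(self):
        while True:
            data = self.transport.receive()
            time.sleep(self.work_load_ms / 1000.0)       # simulated processing delay
            self.transport.send(data)

class SinkNode(SaveResultsMixin, IApplicationNode):
    def __init__(self, transport, expected_count):
        super().__init__()
        self.transport, self.expected_count = transport, expected_count

    def run(self):
        for block_id in range(self.expected_count):
            self.transport.receive()
            self.record(block_id)                        # time of delivery

In this shape, a concrete back end only has to provide an object with send() and receive(), which is also how the RabbitMQ and ZeroMQ sketches later in this chapter are framed.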

Latency measurement The latency measurement unit is responsible for creating time-stamps with 100 ns precision. Due to the aspect-oriented nature of this unit, it is difficult to categorize it in an object-oriented manner.

Configuration scripts Configuration scripts, written in YAML, are used to set up the evaluation, control the existence of the nodes and define the settings required for communication. Pre-specified scripting of the scenarios was preferred over the approach taken by Domka and Freiling [DF84], mainly because of:

• the ability to run scripts multiple times after each other
• the ability to document the configuration in a Don't Repeat Yourself (DRY) manner
• the ability to reuse parts of the configuration in other scripts

The data representation language used for setting up the scenarios was chosen for its simple usage and conciseness. Greenfield et al. [GDB15] propose this language as the basis for a new data format in astronomy, evaluating the language more thoroughly. JSON syntax is referred to as an official subset of YAML.

YAML YAML is a data representation language with a number of useful features. Data types such as lists and dictionaries can be written in this language in an easily readable way. Nesting hierarchies are clearer because of the compulsory whitespace indentation. References within a document greatly help to reduce duplication of text, e.g. when the sender address of one node is the same as the receiver address of another node. This reduces the space for errors in the document.


The biggest downside of the language is the missing schema validation mechanism - it can be more difficult to validate the structure of a document (as one can, for example, in XML). However, this is not a big problem in our case, as our documents' structure is not very complex. Another problem was that some text editors insert tab characters instead of space characters for indentation, which is not a valid way of structuring a YAML document. This can be a convenience problem for the developer.

Script structure An example is given in Appendix C.

The script structure is mixed, partially linear and partially hierarchical. Node types, their configuration and behavior can be defined in the script, as well as the measurement tools to be used and their parameters. The script is then loaded by the main entry point of the application, validated and executed.
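As a minimal sketch of this loading step - assuming a Python entry point and PyYAML, neither of which is confirmed by the thesis - the snippet below reads a small configuration with an address anchor and performs a trivial validation:

# Illustrative sketch: load and minimally validate a scenario script with PyYAML.
# The top-level keys mirror Appendix C; the validation rules here are assumptions.
import yaml  # pip install pyyaml

EXAMPLE_SCRIPT = """
generator:
  - sender address: &G1address tcp://127.0.0.1:5555/
    message size: 12348
    frequency: 10
repeater:
  - senders addresses: *G1address   # the alias resolves to the same address string
    repeater work load: 20
sink:
  - senders addresses: tcp://127.0.0.1:5563/
    expected message count: 1800
"""

def load_scenario(text):
    config = yaml.safe_load(text)
    for section in ("generator", "repeater", "sink"):
        if section not in config:
            raise ValueError(f"missing section: {section}")
    return config

config = load_scenario(EXAMPLE_SCRIPT)
print(config["repeater"][0]["senders addresses"])  # -> tcp://127.0.0.1:5555/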

Reference IPC implementations During the research, two reference back ends were implemented. They serve as a working demonstration of the evaluation framework's functionality. A learning-by-example approach was chosen to explain the usage and possible extension of the framework.

Measurement tools

CpuUsageMeasurement A Python script using the psutil library to measure CPU utilization. It was empirically discovered that the results are updated every 125 ms.

MemoryUsageMeasurement A Python script using the psutil library to measure RAM utilization.

Controller Contrary to the package name Measurement tools, the Controller is used to run the evaluation process. It controls the initialization of the nodes and the measurement tools, and partially their termination in the end. It is similar to the simulator driver from the research of Domka and Freiling [DF84].
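A minimal sketch of such a sampler, assuming psutil and a 125 ms polling interval (the concrete measurement scripts are not reproduced in the thesis):

# Illustrative sketch: sample CPU and memory utilization at a fixed interval
# with psutil and keep the samples for later plotting.
import time
import psutil  # pip install psutil

def sample_usage(duration_s=10.0, interval_s=0.125):
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        cpu_percent = psutil.cpu_percent(interval=None)         # usage since the previous call
        mem_used_mb = psutil.virtual_memory().used / (1024 ** 2)
        samples.append((time.monotonic(), cpu_percent, mem_used_mb))
        time.sleep(interval_s)
    return samples

for t, cpu, mem in sample_usage(duration_s=1.0):
    print(f"{t:.3f}s cpu={cpu:.1f}% mem={mem:.0f}MB")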

4.5 Proof of concept

This section demonstrates the usage of the MBEF. Two alternative inter-process communication back ends were evaluated against three Quality Attribute (QA) scenarios.

4.5.1 Selection of QA scenarios

A more thorough description of the requirements explained in subsection 4.3.4 was made using the "quality attribute scenario" approach proposed by Bass et al. in the Software Architecture in Practice book [BCK13]. The reason for doing this in our case is to have a standardized way of describing a condition that determines whether a requirement is satisfied. The scenarios shown below were chosen for their illustrativeness; the ease of explaining the impact of the results was also a reason to use them.

Terminology

Quality attribute (QA) "is a measurable or testable property of a system that is used to indicate how well the system satisfies the needs of its stakeholders" [BCK13]

Stimulus "is a condition that requires a response when it arrives at a system" [BCK13]

Source of stimulus "is some entity (human, system, or any other actuator) that generated the stimulus" [BCK13]

Environment means the conditions under which the stimulus occurs.

Artifact "is a portion of a system to which the requirement applies." [BCK13]

Response measure "Determining whether a response is satisfactory" [BCK13]

Satisfaction of a requirement The ability to meet a defined response measure when a particular solution is used, or to provide sufficient evidence that the particular solution will meet the QA response measure.

R1 - Latency under normal load

Source of stimulus: External data acquisition device (e.g. GPS module)
Stimulus: Data block arrival from the device
Environment: Normal configuration (as defined in script)
Artifact: System
Response: Data processed through the system
Response measure: End-to-end latency under 100 ms (best); end-to-end latency under 500 ms (acceptable)

R2 - Latency under high load

Source of stimulus: External data acquisition device (e.g. GPS module)
Stimulus: Data block arrival from the device
Environment: High load configuration (as defined in script, see also Figure 4.5.1)
Artifact: System
Response: Data processed through the system
Response measure: End-to-end latency under 500 ms (acceptable)

R3 - Transmission of large chunks of data

Source of stimulus: External data acquisition device emitting megabyte-order sized data chunks
Stimulus: Data block arrival from the device
Environment: Large chunks configuration (as defined in script)
Artifact: System
Response: Data processed through the system
Response measure: No memory buildups

4.5.2 MBEF example usage with RabbitMQ back end

Installation and implementation notes
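The installation notes themselves are not reproduced here. As a rough illustration of what a RabbitMQ-based transport for the model's nodes could look like, the sketch below uses the Python pika client; the thesis does not state which client library its back end used, so both the library choice and the queue naming are assumptions.

# Illustrative sketch only (not the thesis implementation): a minimal RabbitMQ
# transport with the send/receive interface the model nodes need, using pika.
import pika  # pip install pika

class RabbitMqTransport:
    def __init__(self, queue, host="127.0.0.1"):
        self._connection = pika.BlockingConnection(pika.ConnectionParameters(host))
        self._channel = self._connection.channel()
        self._channel.queue_declare(queue=queue)
        self._queue = queue

    def send(self, data: bytes):
        self._channel.basic_publish(exchange="", routing_key=self._queue, body=data)

    def receive(self) -> bytes:
        # Poll until a message is available; a blocking consumer could be used instead.
        while True:
            method, _properties, body = self._channel.basic_get(self._queue, auto_ack=True)
            if method is not None:
                return body

    def close(self):
        self._connection.close()

transport = RabbitMqTransport("model-node-queue")
transport.send(b"x" * 12348)     # a block of the size used in the scenario scripts
print(len(transport.receive()))  # -> 12348
transport.close()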


Figure 4.9: Scenario R1 latency measurement - RabbitMQ back end

Figure 4.10: Scenario R2 latency measurement - RabbitMQ back end

Results related to the requirement R1

The latency is on average lower than 2 ms; no problems were noticed.

Results related to the requirement R2

The latency is also on average lower than 2 ms, with a median of 1 ms.

Results related to the requirement R3

On the memory usage graph of the R3 scenario, we can see that the memory usage increases steadily over time, with a peak memory usage of 3233 MB (710 MB above the baseline). This is not a problem in applications where data are sent in bursts, but it is in our case, where data are sent at this pace over long periods of time. This may cause main memory depletion over a short period of time.

Further investigation showed that the latency also increases over time. This supports our suspicion that this IPC implementation (not the back end itself) has problems dealing with this scenario. With an average latency of 3134 ms and growing over time, software based on this implementation would be useless for the intended near real-time use.

Further investigation By further investigation and by simplifying the scenario (removing the repeater nodes), we discovered that the bottleneck in our implementation is the Repeater nodes. We can assume that this is due to the need to receive and send the data sequentially. It is expected that this simplistic, single-threaded implementation of the Repeater node is much slower than either a single Generator or Sink node.

Figure 4.11: Scenario R3 memory usage measurement - RabbitMQ back end

Figure 4.13: Scenario R1 latency measurement - ZeroMQ back end

Figure 4.14: Scenario R2 latency measurement - ZeroMQ back end

4.5.3 MBEF example usage with ZeroMQ back end

Installation and implementation notes

The zeromq-c 4.0.4 32-bit library and cppzmq (its C++ binding) were used in the implementation. The installation and usage procedures are in Appendix C.
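The back end itself was written against cppzmq; purely for illustration, and to keep the examples in Python, an equivalent transport could look like the sketch below using pyzmq (an assumption, not the code used in the thesis).

# Illustrative sketch only: a ZeroMQ PUSH/PULL transport equivalent in spirit to
# the cppzmq-based back end, shown in Python (pyzmq) rather than C++.
import zmq  # pip install pyzmq

class ZeroMqSender:
    def __init__(self, address="tcp://127.0.0.1:5555"):
        self._socket = zmq.Context.instance().socket(zmq.PUSH)
        self._socket.bind(address)

    def send(self, data: bytes):
        self._socket.send(data)

class ZeroMqReceiver:
    def __init__(self, address="tcp://127.0.0.1:5555"):
        self._socket = zmq.Context.instance().socket(zmq.PULL)
        self._socket.connect(address)   # ZeroMQ connects asynchronously in the background

    def receive(self) -> bytes:
        return self._socket.recv()

sender, receiver = ZeroMqSender(), ZeroMqReceiver()
sender.send(b"x" * 252)          # a GPS-sized block from the scenario scripts
print(len(receiver.receive()))   # -> 252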

Results related to the requirement R1

As with the RabbitMQ back end, this scenario did not pose a problem to the ZeroMQ back end implementation. There is a spike at the beginning of the measurement caused by the asynchronous connection set-up done by ZeroMQ. The average latency is lower than 7 ms (lower than 2 ms when neglecting the initial spike).

Results related to the requirement R2

Again, we can see the initial latency spike and no problems with the latency afterwards. The average latency was lower than 1 ms, neglecting the initial spike.

Results related to the requirement R3

We see the same problems as with the RabbitMQ back end, resulting in latency and memory buildup.


Figure 4.15: Scenario R3 latency measurement - ZeroMQ back end


4.6 Verification

4.6.1 Results from the latency measurement in the real application

To verify the approach, the end-to-end latency of the original application was measured under conditions similar to scenario R1. The measured latency was ~60 ms. However, due to the inability to separate the transmission time from the processing time, we cannot say how much time was spent purely on the communication.

Latency measurement points were introduced:

• just before the data have been copied to the shared memory buffer from the data generator
• immediately after the data have been copied from the shared memory buffer at the final receiver module

Minimizing the influence of the data generation and processing was the main reason to introduce the measurement points in these places. The places were identified during the architecture reconstruction phase, after the sequence diagrams were constructed.

4.6.2 Application and the model comparison

One of the relevant noticeable differences between the model of the application and the actual application is that the model does not do any real computations with the data - data are just passed to the next node. To mimic the latency introduced by the work load of the data processing nodes, an intermediate node (RepeaterNode) can be configured to postpone the transmission for a specified amount of time after the data are received and before they are sent further.

The configuration (see Figure 4.17) was adjusted by introducing a 20 ms delay to RepeaterNode2, RepeaterNode3 and RepeaterNode4 (60 ms in total). The ZeroMQ back end was used for this purpose. The results of the measurements are shown in Figure 4.18.

After these adjustments to the configuration of the model, the measured average latency was 73 ms. As we can see from this example, the model can mimic delays of the application caused by computation. The difference between the configured latency (60 ms) and the measured latency (73 ms) is probably a combination of overhead, task scheduling and other unknown factors. For the declared purposes of the model, the precision seems sufficient; for more precise measurements, the task scheduling (process priority) could be altered, and the precision is then estimated to reach millisecond accuracy. Based on this information, we may assume that the framework is adequate for simulating and measuring latency-related scenarios when predicting the ability of the IPC subsystem to satisfy a requirement.

4.7 Validation

The effect of the results on the decision-making is not yet clear. It was not possible to determine whether the decisions influencing the application have been made within the company, because the decisions will be made after this thesis is written. From the researcher's point of view, the data gathered in the proof of concept section (4.5) provide sufficient rationale to say that the two alternatives will both satisfy the requirements behind QA scenarios R1 and R2. The satisfaction of the requirement behind QA scenario R3 cannot be determined, probably due to the immaturity of the MBEF.


# Average scenario
# Latency utilisation measurement
generator:
  - sender address: &G1address tcp://127.0.0.1:5555/  # Multibeamer
    message size: 12348
    frequency: 10
    defined message count: 600
    serialize: no
    identifier: Multibeamer
  - receivers addresses: &I1address tcp://127.0.0.1:5560/  # GPS
    message size: 252
    frequency: 20
    defined message count: 1200
    serialize: no
    identifier: GPS
repeater:
  - senders addresses: *G1address  # preprocessor
    receivers addresses: *I1address
  - receiver address: *I1address  # adjustment
    repeater work load: 20
    sender address: &I2address tcp://127.0.0.1:5561/
  - senders addresses: *I2address  # position filter
    repeater work load: 20
    sender address: tcp://127.0.0.1:5562/
  - senders addresses: tcp://127.0.0.1:5562/  # multibeamer
    repeater work load: 20
    sender address: tcp://127.0.0.1:5563/
sink:
  - senders addresses: tcp://127.0.0.1:5563/  # display
    expected message count: 1800
    serialize: no


Chapter 5

Conclusion and Future work

5.1 Conclusion

From the results it can be observed that it was possible to obtain the information needed to predict the satisfaction of two out of three quality attribute scenarios. We can now relate the main research question to the results of our observations.

RQ: How to support decision-making while facing a major change in the software? The answer resulting from this research is:

By the described process of architecture reconstruction, requirements engineering, creation of a model of the existing application and evaluation of the alternatives by operational prototyping, it is possible to gain some rationale to support decision-making.

A null hypothesis was also proposed:

H0: Usage of the proposed evaluation framework does not influence the decision-making in this case. There is not enough evidence to say whether this hypothesis is falsified, as no design decision was changed or rejected based on this proof of concept. However, the gathered rationale strengthened the case for using one of the alternative back ends.

5.2 Relation to existing evidence

Several theories and concepts explained by other authors were used.

We can spot some similarities between our study and the study by Domka and Freiling from 1984 [DF84]. This article turned out to be closely related to the evaluation framework proposed in this thesis: there are similarities in the design of their simulator and the evaluation framework, especially in the concepts of the simulation driver and the specifications program. Note that this article was discovered after the design of the evaluation tool was made; similar requirements led to similar design decisions in this case.

From the gathered data, it is possible to add to the article by Dworak et al. about inter-process communication [DESS11] that the AMQP-based RabbitMQ has undergone four more years of development and is used in production environments, so it can be considered a mature tool. Our results provide no evidence of a major performance disadvantage compared to ZeroMQ, which was favoured in the original research, although such a comparison was not the primary intention. The planned replication of their research was not completed within this project, but in principle a similar approach was taken. The performance tests from their research can be understood as an alternative to the evaluation framework from this thesis; in our case, more concerns than performance alone had to be addressed.

The tools mentioned by Krikhaar [Kri99] were in principle similar to the tools used today, albeit less convenient to use. The transitive closures explained in the chapter about Relation Partition Algebra were a proven basis for the dependency search.

The motivation behind software evolution, as described by Burge et al. [BCMM08], could be applied to our case directly: increasing capabilities of the hardware over time were reported by the stakeholders. Consequently, more and more data are generated by the measurement devices, which naturally results in customer demands to process these data.

The structure of this thesis is based on the structure proposed for case studies by Runeson and Höst in 2008 [RH09], with some use of the earlier concepts of Jedlitschka and Pfahl [JP05]. This structure appears to be sufficiently standardized at present. Because the studied case is a process, the meaning of some of the document sections was adapted.

5.3 Implications and Outcome

With the proposed evaluation framework we can not only model and evaluate possible inter-process communication candidates, but also compare them, thanks to the common measurement tools, which are independent of the IPC implementation.
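
As an illustration of this design choice, a minimal sketch of such a back-end-independent interface is given below (hypothetical names; the concrete MBEF classes may differ). The generator, repeater and sink nodes, as well as the measurement tools, are written against this interface only, so RabbitMQ-based and ZeroMQ-based implementations can be measured and compared with exactly the same code.

import time
from abc import ABC, abstractmethod

class Sender(ABC):
    """Sending side of an IPC back end, implemented once per candidate (e.g. RabbitMQ, ZeroMQ)."""

    @abstractmethod
    def send(self, data: bytes) -> None:
        ...

class Receiver(ABC):
    """Receiving side of an IPC back end."""

    @abstractmethod
    def receive(self) -> bytes:
        ...

class MeasuredReceiver:
    """Wraps any Receiver and records arrival times, independent of the back end."""

    def __init__(self, receiver: Receiver) -> None:
        self.receiver = receiver
        self.arrival_times: list[float] = []

    def receive(self) -> bytes:
        data = self.receiver.receive()
        self.arrival_times.append(time.perf_counter())
        return data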

The whole research was documented on the internal web of the company; therefore, this research could be replicated or extended within the company. A laboratory package was made to simplify use of the software. This could reduce the cost of development and the time spent on it.

Before the internship started, a research plan had been made reflecting the project proposal. The plan is included in Appendix B, and it can be seen that this thesis deviated from it. This was mostly due to unrealistic time estimates, underestimation of the extent of the research and insufficient theoretical preparation. The critical moment, when the rest of the plan was abandoned, came after the architecture reconstruction was done and the real extent of the research became clear. The replication would have taken much more time than was available.

In the project proposal, the following parts of the assignment are stated:

1. Investigating the current data flow between processes and identifying the requirements of these connections (f.i. minimum bandwidth, maximum latency)

2. Identifying and documenting middleware requirements

3. Researching which middleware solutions are available and identifying their strengths and weak-nesses

4. Selecting a middleware solution that best fits QPS needs

5. Implementing a proof of concept of the selected middleware

Goals 1 and 2 were considered done. The identification of strengths and weaknesses for goal 3 was shortened: middleware solutions that could not satisfy an important requirement by design were not considered further. Fewer than five middleware solutions were pre-selected and recommendations were made related to goal 4. The two proofs of concept were not implemented on the real application as stated in goal 5, but on its model.

Overall, the major part of the assignment was considered done.

The findings were presented to the company during the last meeting and recommendations were made. The reaction to the findings was mostly positive; however, the fact that no final solution was proposed was somewhat disappointing. The most important recommendations were:

• refactoring leading to more modular code would help to satisfy some of the requirements without changing the underlying middleware solution

• in case the decision to change the underlying IPC middleware is made, the required effort to do so will be much lower after the refactoring

5.4 Limitations and future work

There are severe limitations of this research.

Relevancy of the requirements. Despite any declared usefulness of the model, an inability to capture the relevant requirements could undermine the whole requirements engineering effort.


Precision due to scheduling. By default, the use of the "sleep" function in C++ or Python gives no guarantees at millisecond precision. According to the Python Standard Library documentation, "The actual suspension time may be less than that requested because any caught signal will terminate the sleep() following execution of that signal's catching routine. Also, the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system."[Pyt] Windows' task scheduler significantly influences the precision of the measurement tools and also the ability to adjust the latency of the Repeater nodes in steps finer than 15 ms. Implementation of the spinning technique was considered, but not used due to its excessive use of the CPU, which would affect the measurements.
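
For illustration, a sketch of the spinning technique that was considered (and rejected) is shown below. It reaches sub-millisecond accuracy but keeps one CPU core fully busy for the whole delay, which would distort the measurements; this is why time.sleep() was used despite its roughly 15 ms granularity on Windows.

import time

def spin_wait(delay_seconds: float) -> None:
    # Busy-wait until the deadline; accurate well below a millisecond,
    # but burns 100% of one CPU core while waiting.
    deadline = time.perf_counter() + delay_seconds
    while time.perf_counter() < deadline:
        pass

# time.sleep(delay_seconds) yields to the scheduler instead, so its accuracy
# is limited by the Windows timer resolution (about 15 ms by default).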

Verification in production. With only a few months available, the work on this research was constrained, so it was not the intention to complete the study with a verification of the results over time: it was not possible to see whether the results from the proposed evaluation framework would match the experience from a final implementation in the real application.

Other aspects. The research was done on only a part of the code base, so generalized assumptions about the whole code base could be imprecise. A replication that captures the requirements of a different application may be desirable.

Comparing the described process with other approaches in terms of effort and accuracy could help to extend the answer to the research question.

By the Occam's razor principle, there might be different, simpler hypotheses explaining what could influence the decision-making; it might be argued that the rationale could have been acquired by other approaches. Whether the rationale could be acquired in a different, perhaps more cost-efficient or more precise, way could be the basis for future work. The author believes that using the described process is a significant step toward efficiency compared to implementing a new IPC prototype in the application directly. Therefore, it is better to look before you leap.


Bibliography

[AT98] M. N. Armstrong and C. Trudeau. Evaluating architectural extractors. In Proceedings of the Fifth Working Conference on Reverse Engineering, 1998.

[BCK13] Len Bass, Paul Clements, and Rick Kazman. Software Architecture in Practice. Addison-Wesley, 2013.

[BCMM08] J. E. Burge, J. M. Carroll, R. McCall, and I. Mistrik. Rationale-Based Software Engineering. Springer Berlin Heidelberg, 2008.

[BH95] Hugh R. Beyer and Karin Holtzblatt. Apprenticing with the customer. Communications of the ACM, 38(5):45–52, 1995.

[BRW+08] A. Brooks, M. Roper, M. Wood, J. Daly, and J. Miller. Replication's Role in Software Engineering (chapter of Guide to Advanced Empirical Software Engineering). Springer, 2008.

[CE15] Gerardo Canfora and Sebastian Elbaum. Report on the technical track of ICSE 2015, April 2015. URL: http://www.icse-conferences.org/sc/ICSE/2015/ICSE2015-Technical-Track-Report-Canfora-Elbaum.pdf.

[Dav92] Alan M. Davis. Operational prototyping: A new development approach. IEEE Software, 9(5):70–78, 1992.

[DESS11] A. Dworak, F. Ehm, W. Sliwinski, and M. Sobczak. Middleware trends and market leaders 2011. 2011.

[DF84] Walter F. Domka and Michael J. Freiling. VNS - a virtual network simulator. ACM SIGSIM Simulation Digest, 15(1), January 1984.

[DP09] Stéphane Ducasse and Damien Pollet. Software architecture reconstruction: a process-oriented taxonomy. IEEE Transactions on Software Engineering, 35(4):573–591, July/August 2009. doi:10.1109/TSE.2009.19.

[FCKK11] D. Falessi, G. Cantone, R. Kazman, and P. Kruchten. Decision-making techniques for software architecture design: A comparative survey. ACM Computing Surveys, 43(4), 2011.

[Gan12] Dharmalingan Ganesan. Software architecture discovery for testability, performance, and maintainability of industrial systems. PhD thesis, VU Amsterdam, 2012.

[GDB15] P. Greenfield, M. Droettboom, and E. Bray. ASDF: A new data format for astronomy. Astronomy and Computing, 2015. URL: http://www.sciencedirect.com/science/article/pii/S2213133715000645, doi:10.1016/j.ascom.2015.06.004.

[Hud13] William Hudson. User stories don't help users: introducing persona stories. Interactions, 20:50–53, 2013. URL: http://interactions.acm.org/archive/view/november-december-2013/user-stories-dont-help-users-Introducing-persona-stories, doi:10.1145/2517668.
