
MASTER

A model based approach for software testing audit on software product quality

Basu, R.

Award date:

2020


Disclaimer

This document contains a student thesis (bachelor's or master's), as authored by a student at Eindhoven University of Technology. Student theses are made available in the TU/e repository upon obtaining the required degree. The grade received is not published on the document as presented in the repository. The required complexity or quality of research of student theses may vary by program, and the required minimum study period may vary in duration.

General rights

Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.


A model based approach for software testing audit on software product quality

Automotive Technology Master Thesis

Rishav Basu [1280414]

Research Group

Supervisors:

Marc Hamilton
dr. ir. Ion Barosan

Project Phase Report

Eindhoven, August 2020


I have read the TU/e Code of Scientific Conduct [i].

I hereby declare that my Master’s thesis has been carried out in accordance with the rules of the TU/e Code of Scientific Conduct

Date: 12-08-2020

Name: Rishav Basu

ID-number: 1280414

Signature: ………..…………..

Submit the signed declaration to the student administration of your department.

[i] See: http://www.tue.nl/en/university/about-the-university/integrity/scientific-integrity/

The Netherlands Code of Conduct for Academic Practice of the VSNU can be found there as well.

More information about scientific integrity is published on the websites of TU/e and VSNU

Abstract

With the growth in the number of cars equipped with ADAS (Advanced Driver Assistance Systems) comes an increasing reliance on complex embedded software in automobiles. Software is more crucial than ever for performance, safety, and efficiency. Due to the high demand for high-quality software, various global software standards have emerged, outlining processes to follow for software development and testing, as well as standards for software product quality. Examples include ASPICE (Automotive Software Process Capability dEtermination) and CMMI (Capability Maturity Model Integration) for software development processes, and TMMi (Testing Maturity Model integration) for software testing. Examples of software product quality standards include ISO/IEC 9126 and ISO/IEC 25010.

Companies have different processes that they use to develop and test systems and software.

These processes should comply with specific standards in order to make software robust. Based on their adherence to these process standards, companies' processes are certified. Assessors conduct interviews and gather data from companies to provide this certification. Using model-driven software engineering, tools can be developed to support this assessment process.

In the past, such a software process and product assessment tool, the CAPPassessor [12], was built based on the CAPPMM (Common Automotive Process and Product Meta-Model) [12]. This tool can be used for doing assessments and for checking the impact of the software process on software product quality characteristics. However, the CAPPassessor tool needs to be improved, because it allows the score calculation to be influenced in the same editor that is used to register the assessment data. This is due to problems in the meta-model, which affect the assessment tool: the meta-model combines the levels of defining standards, defining assessments, and conducting assessments all in one.

The tool had to be improved to solve this problem. Improving the tooling starts with re-defining the meta-model(s) to support the separation of concerns that is inherent to the assessment process.

As a solution to this problem, three new meta-models were created, each having a different purpose.

As a result, the assessment definition level data is clearly separated from the scoring data. These three meta-models together are called the CAPPTMM (Common Automotive Process and Product Testing Meta-Model). Based on the CAPPTMM, a graphical tree editor was built which helps in doing assessments by building audit trees. This is useful for collecting data and analyzing the impact that following standard processes has on software product quality characteristics such as reliability, security, maintainability, etc.

The assessment tool is validated through two industrial case studies, in which it reproduces the results obtained with the Excel sheets that are the current means of conducting assessments.

Acknowledgements

Throughout the duration of this project I have received an abundance of help and support from various people. Writing this page to acknowledge them may not be enough, but I shall try.

First and foremost, I would like to thank my project advisor, Marc Hamilton at Altran B.V.

His constant support and encouragement went a long way in helping me complete this thesis.

Several concepts that I used in this project were unknown to me at the beginning.

Furthermore, I had not worked before with the software tools that I used in this project. Marc's expert guidance and helpful attitude helped me pick up a lot of knowledge about meta-modelling and Ecore/EMF. There were several times when I needed direction regarding which approach to take. No matter how many times I asked questions, they were always answered patiently and all my doubts were cleared by him. In March, when a lockdown was imposed in the Netherlands due to the Corona crisis, most people had to work from home and were advised not to go to their offices. Unfortunately this led to a delay in my thesis, as life suddenly changed a lot.

Marc’s positive and patient attitude helped a lot in making me get back on track with the project.

His feedback on the report was also of immense help, and overall I have learnt a lot from him about hard work and helping people.

I would also like to express my deep gratitude towards dr. ir. Ion Barosan. Throughout the duration of my Master’s at TU Eindhoven, there were different times when I needed guidance regarding courses or my internship at the university. His mentorship and friendly attitude have been invaluable during the duration of my study. His feedback to me while writing this report was crucial for its improvement.

There are some other colleagues at Altran B.V. that I would like to thank. Firstly, I want to thank Leslie Aerts at Altran B.V. for helping me out during the lockdown period when I needed some encouragement to get back on track with the project. The stand-up meetings he conducted were always a good method of self-evaluation and planning. I would also like to thank Egbert Touw at Altran for his valuable insights into how assessments are done at Altran and for his helpful attitude. His inputs helped to create the software tool that is the end result of this project.

Adrian Yankov, a friend and engineer at Altran, also offered to help me at times during my thesis.

I appreciate the advice that was given to me by dr. Yaping Luo at the beginning of this project, guiding me about potential directions that this project could take.

My gratitude also goes out to my family members, especially Shyamali Basu and Sandip Basu for their unconditional and constant support and encouragement at every step. Talking to me whenever I doubted myself and encouraging me helped me to complete this thesis.

Last but not least, I want to thank my friends in the Netherlands, as well as back home in India, for taking time out for me whenever I wanted to meet or get on a phone call with them.

Some of them especially helped a lot whenever I needed some motivation or fun, so that I could be recharged to work enthusiastically the next day.


Contents iii

List of Figures v

List of Tables vii

1 Introduction 1

1.1 Introduction. . . 1

1.2 Project Objectives . . . 1

1.3 Research Questions . . . 3

1.4 Structure of the Thesis Report . . . 3

2 Overview of Process and Product Standards 4

2.1 CMMI . . . 4

2.1.1 Benefits of CMMI . . . 4

2.1.2 Architectural Components of CMMI . . . 5

2.1.3 CMMI Representations . . . 5

2.1.4 Levels of CMMI . . . 6

2.2 TMMi . . . 7

2.2.1 Maturity Levels of TMMi . . . 8

2.2.2 Benefits of TMMi . . . 8

2.3 TMMi and CMMI . . . 10

2.3.1 Comparison between CMMI and TMMi . . . 10

2.4 Product Standard ISO 25010 . . . 10

2.5 Relation between CMMI/TMMi and ISO 25010 Quality Model . . . 10

3 Assessment Process Support 12

3.1 What is an Assessment Process? . . . 12

3.1.1 What does an assessor do? . . . 12

3.2 Shortcomings of CAPPMM-The Old Tool . . . 12

4 Building the Common Automotive Process, Product and Testing Meta-Models 14

4.1 What is EMF? . . . 14

4.2 Improving the Meta-Models . . . 15

4.3 The standards Meta-Model . . . 17

4.3.1 Description of the Classifiers . . . 18

4.3.2 Dynamic Instance of the Meta-Model . . . 20

4.4 The assessment Meta-Model . . . 21

4.4.1 Description of the Classifiers . . . 21

4.4.2 Operations and Scoped Relations Used. . . 25

4.4.3 Dynamic Instance of the meta-model. . . 34

4.4.4 Run-time Instance Testing of the Meta-Model. . . 34

4.5 The actual Meta-Model . . . 36


4.5.1 Description of the Classifiers . . . 36

4.5.2 Operations and Scoped Relations used . . . 39

4.5.3 Run-time Instance of the Meta-Model . . . 42

5 Changing the Editor 44

5.1 Changing the Text in the Tree Editor for Instances of assessment . . . 44

5.2 Changing the Text in the Tree Editor for Instances of actual . . . 46

6 Validation of the Model 48

6.1 Case Study for CMMI . . . 48

6.2 Case Study for TMMi . . . 50

7 Conclusions and Recommendations 53

7.1 Conclusions . . . 53

7.2 Recommendations . . . 55

Bibliography 56

List of Figures

2.1 An overview of the Process Areas in CMMI-DEV [12] . . . 6

2.2 The Maturity Levels and Process Areas present in TMMi [19] . . . 7

2.3 The Structure and Components present in TMMi [17] . . . 9

2.4 The benefits of TMMi [3] . . . 9

2.5 The Product Quality Model in ISO/IEC 25010 [10] . . . 10

2.6 Software Audit Reliability Matrix [12] . . . 11

3.1 Consequence levels in the CMMI process audit table [12]. . . 13

4.1 A Basic Family meta-model [6] . . . 15

4.2 The Altran Excel sheet that supports process audit for the quality characteristic of Reliability . . . 16

4.3 The standards Meta-Model . . . 19

4.4 A dynamic instance of standards showing the editor and Properties view. . . 21

4.5 The assessment Meta-Model . . . 23

4.6 An example of how Scoped can be used . . . 25

4.7 The Ecore view of the Scoped Shop example . . . 26

4.8 The Ecore view of the Scoped relation in the AssessmentPracticeContribution class . . . 27

4.9 A run-time instance of assessment showing the scoped relation being used for the element AssessmentPracticeContribution . . . 28

4.10 The Ecore view of the Scoped relation in the AssessmentGroupDefinition class . . 29

4.11 A run-time instance of assessment showing the scoped relation being used for the element AssessmentGroupDefinition . . . 29

4.12 The Scoring class in assessment in the Ecore tree editor . . . 30

4.13 The Scoring class and its overridden form in the actual meta-model . . . 30

4.14 The Value of Score contained within the Pivot of Score in the Scoring class of assessment . . . 31

4.15 The ScoreAverage class in assessment in the Ecore tree editor . . . 32

4.16 The body of the convertScore function . . . 32

4.17 The body of the calculateScoreAverageValue function . . . 33

4.18 The ScoreAverage class and its overridden form in the actual meta-model . . . 33

4.19 A run-time instance of assessment . . . 35

4.20 A run-time instance of assessment showing the option of selecting Consequence Levels for a Specific Practice . . . 35

4.21 The actual meta-model . . . 37

4.22 The scoped relations in assessment . . . 39

4.23 The body of the findMeasurement operation contained in the AssessmentAspectScoring class . . . 40

4.24 The body of the findAssessmentPracticeContribution operation contained in the AssessmentAspectScoring class . . . 40

4.25 The body of the findRelevantScores(AssessmentProcessScore) function in the class in actual . . . 40


4.26 The functions in the AssessmentScore class in actual meta-model . . . 41

4.27 The body of calculateScoreAverageValue in the actual meta-model . . . 41

4.28 A run-time instance of the actual meta-model showing the Assessment Score and the Assessment Process Scores . . . 42

4.29 A run-time instance of the actual meta-model . . . 43

5.1 The changes to the AssessmentGroupDefinitionItemProvider.java . . . 45

5.2 The changes to the AssessmentPracticeContributionItemProvider.java . . . 45

5.3 The changes to the AssessmentProcessItemProvider.java . . . 46

5.4 The changes to the AssessmentItemProvider.java . . . 46

5.5 The changes to the AssessmentGroupItemProvider.java . . . 47

6.1 The Excel sheet of the CMMI assessment . . . 48

6.2 Editors- run-time instances needed for a CMMI Reliability assessment . . . 49

6.3 The Excel sheet of the TMMi assessment . . . 51

6.4 Editors- run-time instances needed for a TMMi Reliability assessment . . . 52

List of Tables

2.1 A summary of the cost benefits and Impact of CMMI [7]. . . 5

2.2 A summary of the Schedule benefits and Impact of CMMI [7] . . . 5

2.3 Capability Levels within CMMI [17] . . . 6

2.4 Maturity Levels within CMMI [17] . . . 6

2.5 Comparing the two models . . . 10

4.1 Concepts in a Process Audit for product Quality in the Altran Excel Template (for Assessments) . . . 17

4.2 The different Classifiers in the standards Meta-Model . . . 20

4.3 The different classes in the assessment meta-model . . . 24

4.4 The different classifiers in the assessment meta-model . . . 25

4.5 The different classes in the actual meta-model. . . 38


1 Introduction

This chapter introduces the need for model-driven software engineering (MDSE) and how it can be used to help conduct assessments. It elaborates on the objectives of this thesis project, the research questions that need to be answered, and the overall structure of the report.

1.1 Introduction

Due to the increasing demands for comfort and efficiency, there are more electronic and electrical systems present in automobiles today than ever before. With this dependence comes more reliance on software for the embedded systems present in automobiles. Advanced Driver Assistance Systems (ADAS) are the latest application of complex software to help the driver of the vehicle. Examples of ADAS include Adaptive Cruise Control (ACC), Parking Assistance Systems, and Lane Departure Warning Systems. In order to satisfy consumers' fast-growing demand for advanced features, efficiency, and convenience, more and more software-intensive systems are being developed. Software is also increasingly applied to power safety-critical systems.

The failure of software systems in the automotive domain can have catastrophic consequences, as well as grave financial losses for automotive companies because of lawsuits and vehicle recalls.

In order to prevent such events, it is imperative to assess and improve software quality. Manufacturers and suppliers follow various processes to produce equipment, parts, and software that meet safety requirements, functional requirements, and so on.

Companies have various engineering processes that they use to develop systems/software.

These processes should comply with certain specific standards. Based on their adherence to these process standards, the companies' processes are certified. Assessors conduct interviews and gather data from those companies to provide this certification. In some cases, there are tools available to support the organization in collecting and processing the data. Model-based techniques can make these processes easier.

1.2 Project Objectives

The process assessments that Altran B.V. performs for other companies are done using Excel sheets by assessors who are subject matter experts on process standards such as CMMI [17], ASPICE [15], and TMMi [17]. The challenges that the company faces while doing assessments with Excel sheets are as follows:

1. Every time an assessment is to be done for a specific product/process standard, a new Excel spreadsheet has to be created.

2. Every time a new project is defined, a new Excel Spreadsheet has to be created from scratch, or time and effort is needed to modify existing templates.


3. Using different versions of Excel may lead to compatibility issues.

4. The relationship between the concepts in the standards and the Excel sheets cannot be visualized.

Altran has its own Altran Quality Assessment (AQA) program according to which it assesses various processes of other companies. To automate this assessment process and make it easier, tools can be developed, as has already been done by Narayan [13] and Tummalapalli [12]. Using model-driven software engineering, we want to create the modeling infrastructure that supports the assessment process.

The approach in this thesis is based on the work presented in two Master's theses by Tummalapalli and Narayanan. Both projects were done at Altran Netherlands B.V., and their results are used as the basis for the research done in this thesis. Tummalapalli used model-driven software engineering to build conceptual meta-models of software process and product standards, in order to support a process audit for various product quality characteristics from the ISO 25010 standard [10]. Based on these meta-models, a Common Automotive Process and Product Meta-Model (CAPPMM) [12] was built. Using the CAPPMM, two graphical tools were built, the CAPPeditor and the CAPPassessor [12], both of which were validated by case studies. The CAPPeditor is used to build domain-specific models (DSMs) of the standards. The CAPPassessor enables users to build process audit tables based on these DSMs. Furthermore, users can collect and analyze the data required for software product quality from the process point of view. As a result, the CAPPassessor gives a process score and project scores based on product quality characteristics from a process viewpoint, specifically for CMMI [17] and ASPICE [15].

The CAPPassessor and CAPPeditor mix up the different levels: standard definition, assessment definition, and conducting an assessment and processing its results have all been built at the same level, in one meta-model, the CAPPMM. This results in a mixture of the various stages of an assessment process in one and the same resulting tool. Although an attempt was made to separate the stages by creating different (table) editors, the effectiveness of the tool is hampered by unwanted side effects resulting from this fundamental problem.

Excel sheets are currently used by the assessor to enter data that is collected from the company being assessed. The tool currently available to Altran has only been tested for process adherence to the CMMI and ASPICE standards. At the end of this project, there will be a tool that can assess adherence to the TMMi standard [17], along with the CMMI standard. This will be similar to how the existing tool made during Tummalapalli's thesis tests adherence to CMMI and checks the process impact on product characteristics. For this, we will first study the structure of the TMMi standard. We shall also define the TMMi assessment knowledge, i.e. how the various processes listed in the standard affect ISO 25010 product characteristics, according to the assessment experts at Altran.

Once the tool changes and the new tool have been completed, we shall apply it to the reference case of Altran for both TMMi and CMMI assessments, in order to validate it. As noted above, the CAPPMM tool mixed up the three different levels of the assessment in one single meta-model, i.e. the Define Standard level, the Define Assessment level, and the Actual Assessment level. There are three use cases for the project:

• Define the standards knowledge in a meta-model. For this, the documentation of the standards (CMMI, TMMi, and ASPICE) must be studied.

• Define the assessment knowledge. A clear picture is needed of which product quality aspects are affected, and to what extent, by the various processes of the standards, both CMMI (for the CAPPassessor) and TMMi (which we are going to work on). This includes two parts, as described in the problem definition above. Firstly, the consequence levels in the process audit tables are fixed, so that the assessor no longer has the option of changing them for a specific project while doing an assessment; these changes should also be reflected in the meta-model. Secondly, we can define an assessment for a specific case; for this, we can refer to the quality and testing assessments done by Altran.


• Execute the assessment by providing a new tool and documentation for assessment, including an improved tool for CMMI, TMMi, and ASPICE assessment, obtained by modifying the CAPPassessor and CAPPeditor. Furthermore, since the TMMi standard is similar to the CMMI standard, present a tool that can redo the TMMi assessment as well. This step involves taking a previous assessment done by Altran B.V. and reproducing the same end result with the tool resulting from this project.

1.3 Research Questions

• RQ1: How do the current tools (CAPPassessor and CAPPeditor) work in real-world assessments?

What are the problems of the current tools in supporting an assessment process? What is needed to support an assessment process?

• RQ 2: How can the current tool be improved for better process assessments?

How can we re-define the meta-models to improve the assessment tool?

• RQ 3: How can the improved tool be extended to support TMMi assessments as well?

Can the extended tool be applicable for the TMMi assessment done in the Altran reference case?

1.4 Structure of the Thesis Report

Chapter 2 of the thesis report gives the reader an explanation of the different process standards involved and of the ISO 25010 standard used in industry for gauging product quality. It also deals with how software development and testing processes affect the product characteristics of software. Chapter 3 deals with how the entire assessment process is set up, with the phases of capturing the relevant standards, defining an assessment and the result score calculations, and conducting the actual assessments. In Chapter 4, the CAPPMM (Common Automotive Process and Product Meta-Model) is reworked and modified to build the CAPPTMM (Common Automotive Process, Product and Testing Meta-Model). This includes building three new meta-models to replace the one CAPPMM. In Chapter 5, the editor of the CAPPTMM is modified to make it more user-friendly for the person who will conduct the assessments, for example by changing the text the user sees so that they have a better understanding of the tool while doing test and/or process audits. In Chapter 6, the tool is validated by doing two industrial case studies, i.e. projects from Altran. Lastly, in Chapter 7, the project is concluded and recommendations for further improvement are elaborated upon as future work.


2 Overview of Process and Product Standards

This chapter discusses the various process and product standards that are dealt with for assessments, namely CMMI, TMMi, and the ISO 25010 Quality Model. It elaborates on the structure of these standards, their components, and the benefits of using them. It also outlines the relationship between process and product standards.

2.1 CMMI

CMMI (Capability Maturity Model Integration) models are a globally proven set of practices that drive business performance through the building and benchmarking of key capabilities. The models were developed by combining the efforts of product teams from industry in conjunction with the Software Engineering Institute (SEI) [18]. In this report and for the purpose of this project, we have dealt with CMMI version 1.3, which is also the version Altran B.V. currently uses for its assessments. That version of CMMI addresses three areas, as different models, namely [9]:

• CMMI for Development (CMMI-DEV): guides organizations that work in development in applying CMMI best practices. The practices in this model focus on activities for developing high-quality products and services for end users.

• CMMI for Services (CMMI-SVC): guides organizations in improving organizational capability, in order to provide quality services for customers and end users.

• CMMI for Acquisition (CMMI-ACQ): guides organizations that acquire products and services in applying best practices. The practices in this model focus on activities that initiate and manage the acquisition of products and services to meet the requirements of customers.


2.1.1 Benefits of CMMI

Table 2.1 shows instances where organizations have reported reductions in the cost of work products and in the cost of processes, and general savings, by using model-based process improvement.


Result | Model
33% decrease in the average cost to fix a defect (Boeing, Australia) | CMMI
20% reduction in unit software costs (Lockheed Martin M&DS) | CMMI
15% decrease in defect find and fix costs (Lockheed Martin M&DS) | CMMI
4.5% decline in overhead rate (Lockheed Martin M&DS) | CMMI
Improved and stabilized Cost Performance Index (Northrop Grumman IT1) | CMMI

Table 2.1: A summary of the cost benefits and Impact of CMMI [7]

Table 2.2 shows the improvement in schedule, in two aspects: reductions in the time required to do tasks, as well as being able to predict schedules better. Further details can be found in the report of Goldenson and Gibson [7].

Result | Model
Reduced by half the amount of time required to turn around releases (Boeing, Australia) | CMMI
60% reduction in work and fewer outstanding actions following pre-test and post-test audits (Boeing, Australia) | CMMI
Increased the percentage of milestones met from approximately 50% to approximately 95% (General Motors) | CMMI
Decreased the average number of days late from approximately 50 to fewer than 10 (General Motors) | CMMI
Increased through-put resulting in more releases per year (JP Morgan Chase) | CMMI
30% increase in software productivity (Lockheed Martin M&DS) | CMMI
Improved and stabilized Schedule Performance Index (Northrop Grumman IT1) | CMMI
Met every milestone (25 in a row) on time, with high quality and customer satisfaction (Northrop Grumman IT2) | CMMI

Table 2.2: A summary of the Schedule benefits and Impact of CMMI [7]

2.1.2 Architectural Components of CMMI

The CMMI model gives users the options of staged and continuous representations, and the architectural components [17] are present in both.

1. Process Areas - A process area is a set of related practices in an area which, when satisfied, also satisfy a set of goals that are important for that area.

2. Specific Goals - 'Describes the unique characteristics that must be present to satisfy the process area.'

3. Generic Goals - 'These goals are called "generic" because the same goal statement applies to multiple process areas.'

4. Specific Practices - 'The description of an activity that is considered important in achieving the associated specific goal.'

5. Generic Practices - 'These practices are called "generic" because the same practice applies to multiple process areas.'

2.1.3 CMMI Representations

The CMMI model offers staged and continuous improvement of processes, using maturity levels and capability levels respectively. CMMI supports improvement through these two paths, and both representations have the same architectural components. In the continuous representation, capability levels are used to gauge an organization's process improvement in individual process areas; the four capability levels are numbered 0 to 3. Maturity levels apply to an organization's process improvement across multiple process areas. Each maturity level is a defined evolutionary plateau and comprises a given set of process areas, preparing the organization to move to the next maturity level. There are five maturity levels in the staged representation [17].

2.1.4 Levels of CMMI

Capability Level 0 | Incomplete
Capability Level 1 | Performed
Capability Level 2 | Managed
Capability Level 3 | Defined

Table 2.3: Capability Levels within CMMI [17]

Table 2.3 shows the capability levels in the continuous representation of the CMMI.

The continuous representation allows an organization to choose a specific process area (PA) and improve it by achieving the Specific and Generic Goals in that PA. Once Capability Level 3 has been achieved in certain process areas, organizations can improve their processes further by implementing higher maturity level process areas (by achieving their respective goals).

Maturity Level 1 | Initial | Process unpredictable, poorly controlled
Maturity Level 2 | Managed | PP, PMC, SAM, REQM, MA, PPQA, CM
Maturity Level 3 | Defined | IPM, RSKM, RD, TS, PI, VER, VAL, DAR, OPF, OPD, OT
Maturity Level 4 | Quantitatively Managed | QPM, OPP
Maturity Level 5 | Optimizing | CAR, OPM

Table 2.4: Maturity Levels within CMMI [17]

Table 2.4 shows the maturity levels in the staged representation of CMMI. At each maturity level, a number of process areas are defined within the standard; these process areas can also be seen in Figure 2.1. A maturity level is achieved by satisfying the Specific Goals and Generic Goals of the key process areas defined at that level.

Figure 2.1: An overview of the Process Areas in CMMI-DEV [12]

Figure 2.1 shows an overview of the 22 key Process Areas (PAs) [17] contained within the CMMI-DEV model.


2.2 TMMi

Even though testing is known to account for 30-40 percent of project costs, very little attention is given to testing in software improvement models such as CMMI. Thus the testing community came up with its own improvement models, one of them being the Testing Maturity Model integration (TMMi) [19]. It is a detailed model for improving the test process and is designed to be complementary to CMMI. The TMMi framework was built by the TMMi Foundation as a reference framework and set of guidelines for improving test processes, addressing issues that are important to test managers, test engineers, and professionals who work in the field of software quality.

What is Testing?

According to TMMi, testing is defined as "the process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects" [19].

Figure 2.2: The Maturity Levels and Process Areas present in TMMi [19]


2.2.1 Maturity Levels of TMMi

CMMI has both a staged and a continuous representation. Since the development of TMMi was guided by the work done on CMMI, TMMi was developed as a staged model.

The staged model comprises levels/stages that an organization progresses through as its testing process evolves. If the TMMi model is followed, the test process starts out unmanaged and evolves into one that is managed, defined, measured, and finally in a state of continuous improvement, known as Optimization. Figure 2.2 shows the process areas for each maturity level of the TMMi. There are five levels in the TMMi that mark an evolutionary path to test process improvement. Each of these levels has a number of process areas that the organization needs to implement to gain the maturity of that level.

Components of TMMi

The following are the components present in the TMMi model:

1. Maturity Levels - These 'can be regarded as a degree of organizational test process quality. It is defined as an evolutionary plateau of test process improvement.' Each maturity level defines what must be done to obtain that level. The higher the maturity level the organization achieves, the higher the maturity of its test process. In order to reach a particular maturity level, the organization must achieve all generic and specific goals of the process areas of that level along with those of all lower levels. All organizations already have a minimum level of 1, since this level does not contain any goals to be satisfied.

2. Process Areas - These 'identify the issues that must be addressed to achieve a maturity level. Each process area identifies a cluster of test related activities.' Except for Level 1, each maturity level comprises a number of process areas that show where an organization should focus to improve its test process. For a maturity level to be achieved, all process areas of that maturity level as well as the lower maturity levels must be satisfied. For example, to be certified at Level 3, an organization must have satisfied the process areas of TMMi Level 2 as well as TMMi Level 3.

3. Specific Goals - It ’describes the unique characteristic that must be present to satisfy the process area. A specific goal is a required model component and is used in assessments to help determine whether a process area is satisfied.’

4. Generic Goals - It ’describes the characteristics that must be present to institutionalize the processes that implement a process area’. These are called generic because these goals are applied to all process areas in TMMi.

5. Specific Practices - A specific practice 'is the description of an activity that is considered important in achieving the associated specific goal'. If the activities described in the specific practice are done, then the associated specific goal within that process area has been achieved.

6. Generic Practices - These 'appear near the end of a process area and are called "generic" because the same practice appears in all process areas.' Generic practices are the activities which, upon completion, result in the achievement of the associated generic goal.

Figure 2.3 summarizes the components and the relationships between them.

2.2.2 Benefits of TMMi

The reasons why an organization can benefit from a TMMi assessment can be broadly grouped as shown in Figure 2.4.

The improvements indicated in Figure 2.4 are based on a few sample projects. TMMi focuses on testing practices and enables an enhancement in quality through excellence in testing. It also provides tangible benefits to organizations, considering that the testing phase of a project accounts for 40 to 50 percent of project-related effort and cost [3].

Figure 2.3: The Structure and Components present in TMMi [17]

Figure 2.4: The benefits of TMMi [3]


2.3 TMMi and CMMI

TMMi is designed and positioned to complement CMMI. In several cases, a given TMMi level needs support specifically from the process areas at its corresponding CMMI level or lower CMMI levels. In some exceptional cases, a TMMi level is even related to higher CMMI levels.

2.3.1 Comparison between CMMI and TMMi

The similarities between the two models have been elaborated upon already: since TMMi was designed to complement CMMI, they have the same architectural elements. Table 2.5 shows the key differences between the two models.

Capability Maturity Model Integration | Test Maturity Model integration
Limited focus on test improvements | Limited focus on non-testing improvements
Has both staged and continuous representations, so uses both maturity and capability levels to measure improvement | Has only a staged representation, so uses only maturity levels to measure improvement
CMMI Version 1.3 has 3 frameworks: CMMI for Development, CMMI for Acquisition, CMMI for Services | No additional TMMi frameworks
Developed by the Software Engineering Institute at Carnegie Mellon University | Developed by the TMMi Foundation

Table 2.5: Comparing the two models

2.4 Product Standard ISO 25010

ISO/IEC 25010 is the updated version of the ISO/IEC 9126 standard model [4]. This version of ISO/IEC 25010 was released in March 2011 and comprises eight product quality characteristics.

The model contains 31 quality sub-characteristics, in contrast to the 21 quality sub-characteristics of ISO/IEC 9126.

Figure 2.5: The Product Quality Model in ISO/IEC 25010 [10]

2.5 Relation between CMMI/TMMi and ISO 25010 Quality Model

As explained earlier in this chapter, CMMI and TMMi are process standards, and the way a software development process or a software testing process is executed will have a major effect on software product characteristics such as Reliability, Maintainability, Usability, and so on.

For its client assessments, Altran B.V. has a matrix for Software Quality Audits. Using this matrix, the reliability level/score (on a scale of 1-4) of a process practice can be obtained, based on the consequence level (which is predetermined before the assessment, while defining it) and the satisfaction level (which is found as a result of the assessment) of that particular practice. Further information on this can be found in Chapter 4 and Chapter 6.

Figure 2.6: Software Audit Reliability Matrix [12]
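Conceptually, the matrix is a two-dimensional lookup from a (consequence level, satisfaction level) pair to a score between 1 and 4. The following Java sketch illustrates this idea; the level names follow the definitions used in this thesis, but the cell values are placeholders and not the actual values of the Altran matrix shown in Figure 2.6.

    // Illustrative sketch of a reliability-matrix lookup.
    // The score values below are placeholders, NOT Altran's actual matrix.
    enum ConsequenceLevel { INSIGNIFICANT, MINOR, MODERATE, MAJOR, EXTREME }
    enum SatisfactionLevel { NOT_SATISFIED, PARTIALLY, LARGELY, FULLY }

    final class ReliabilityMatrix {
        // Rows: consequence level; columns: satisfaction level; scores 1 (worst) to 4 (best).
        private static final int[][] SCORE = {
            // NOT PART LARG FULL
            {  3,   4,   4,   4 }, // Insignificant (placeholder values)
            {  2,   3,   4,   4 }, // Minor
            {  2,   3,   3,   4 }, // Moderate
            {  1,   2,   3,   4 }, // Major
            {  1,   1,   2,   4 }, // Extreme
        };

        static int score(ConsequenceLevel c, SatisfactionLevel s) {
            return SCORE[c.ordinal()][s.ordinal()];
        }
    }

Fixing this table in an assessment definition, rather than letting the assessor edit it during an assessment, is exactly the separation of concerns that the new meta-models enforce (see Chapter 3 and Chapter 4).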

The goal of this project is to support process and testing audits for software quality assessments.

To achieve this goal, the impact of software development and testing processes on product quality needs to be assessed. Keeping in mind the time limit of this project, we considered certain specific parts of the CMMI and TMMi standards and the impact that they have on the ISO 25010 software product quality characteristic of Reliability.


3 Assessment Process Support

This chapter discusses what support is needed for the whole assessment process. This is done by outlining how an assessment process occurs at the company, with the steps that are involved, and in which aspects the older tool falls short.

3.1 What is an Assessment Process?

An assessment is an activity to gather and audit proof of consistency and implementation against a reference standard, reference model, or reference structure [11]. The reference standards for this tool could vary from CMMI to ASPICE to TMMi. The output of an assessment is an assessment report. The report provides evidence of compliance or non-compliance with the standard, along with suggestions for improvement. It may or may not provide a final quantitative score to give the reader a clear idea, but it always gives a summary of the overall status of compliance.

The different phases of an assessment process in terms of working on a software tool are as follows:

• Defining the relevant standards. This is done once per standard, according to the text documentation available and released by the respective standards governing body.

• Defining the assessment to be conducted and the score calculations. These steps, among others, select the relevant parts of the standards for the purpose of the assessment and connect them where relevant for the client. This is done once per client or type of assessment.

• Conducting the assessments. This may be done multiple times for the same assessment definition.

3.1.1 What does an assessor do?

In order to do the assessment, a company assigns an audit team with a lead assessor. The assessor is in charge of the overall assessment being done. Altran Netherlands as a company is involved as an independent party to supply insight into the quality aspects of software for various companies.

It could be a process assessment or a code assessment or even a test assessment. It could include steps such as documentation study, interviews and reporting. Often an assessment helps to identify risk areas and provides recommendations for improvement.

3.2 Shortcomings of CAPPMM-The Old Tool

The process audit tables of the CAPPassessor have consequence levels. These levels are the measure of how much a particular process within a process standard affects a product quality or sub-quality. In the current version of the CAPPassessor, the consequence levels can be changed by the user of the tool: the assessor can choose from the consequence levels Null, Insignificant, Minor, Moderate, Major, and Extreme. This should be modified to a fixed level for each process, set beforehand. The assessor should not be the one to decide, at the time of the assessment, how much a specific practice or a generic practice (of a process) affects a certain product quality; an unwanted effect of this could be biased results. This consequence level should be predetermined according to the assessment knowledge. Figure 3.1 shows a table with a CMMI process audit for Reliability, in which the user can choose the consequence level in the 'Practice Consequence Level' column.

Figure 3.1: Consequence levels in the CMMI process audit table [12]

In the CAPPMM, the levels were not clearly demarcated in terms of defining the standards involved, defining the assessment, and finally executing an assessment itself. The defining of the assessment and its execution are done by the lead assessor and the assessment team. The implementation of these steps was merged into one single level in terms of the meta-model, which can be confusing and inconvenient for somebody who wants to define a new assessment for a company assignment, or even for someone who wants to incorporate a new standard into the tool. In this project we have attempted to clarify the confusion and incorporate more traceability between the meta-models and our version of the assessment tool.

Problems in the meta-model affect the entire tool-set. Improving the tooling starts with re-defining the meta-model(s) to support the separation of concerns that is inherent to the assessment process.

To improve the assessment process by the use of a software tool instead of Excel, this 'single meta-model' solution is not enough. In Chapter 4, we present the new meta-models, which solve the aforementioned problems.


4 Building the Common Automotive Process, Product and Testing Meta-Models

This chapter describes the approach that was taken to create the three new meta-models necessary for the development of a new software tool for assessments. The three meta-models are presented along with detailed explanations of the various elements in each meta-model and the relationships between them. The functions present in the classes are described as well. Thereafter, the dynamic instances used to test the tool are elaborated upon, and finally run-time instances, which show the tree editor for each meta-model, give real examples of how the models are used.

4.1 What is EMF?

EMF (Eclipse Modeling Framework) [8] is a modeling framework and code generation facility for building software tools and other applications based on a structured data model.

From a model specification described in XMI, EMF provides tools and run-time support to produce Java classes for the model, along with a set of adapter classes that allow users to view and perform command-based editing of the model, and a basic editor.

Ecore

Under the Eclipse Modeling tools falls Ecore, the core of EMF, which is used to build meta-models. The concepts that a user wants to incorporate into their project can be modeled under EMF in an Ecore file. The structure is stored in files with the 'ecore' extension, i.e. ending in .ecore. Eclipse has a file creation wizard that helps users create such files. By default, when such a (new) file is opened, it opens in a tree-based editor, which enables one to build the structure of Ecore model elements. Other options include the Ecore Tools Diagram Editor and the EMF Forms-based Ecore editor [14].

To summarize, Ecore is EMF's object-oriented language. When an Ecore model is built (regardless of which editor is being used), the user is in essence building an object structure, a structure of 'meta-objects'. They are called meta because they are at a conceptual level above the instances that they model [14].

Figure 4.1 shows an example of an Ecore model, representing the structure of a family. The basic concept of a family is that it includes several Persons. This is shown with a composition relationship between the Family class and the Person class. A Person could be a Man or a Woman, shown with the supertype/inheritance relationship from the Person class. Each Person also has a name attribute and the following references: mother, father, two parents, and children.


The concepts and relationships of a family have been modeled here using the Palette's Classifier, Relation, and Feature tools. A minimal programmatic sketch of the same meta-model is given after Figure 4.1.

Similarly, the requirements for any project can be modeled using EMF, in an .ecore file, as a meta-model. The later parts of this chapter explain how we built the various meta-models for the software tool that is used for assessments.

Figure 4.1: A Basic Family meta-model [6]
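The same Family meta-model can also be constructed programmatically through EMF's Ecore API instead of through the editors. The following minimal Java sketch builds the Person class with its name attribute and the Family containment; the reference name members and the namespace URI are placeholders chosen for this example.

    import org.eclipse.emf.ecore.EAttribute;
    import org.eclipse.emf.ecore.EClass;
    import org.eclipse.emf.ecore.EPackage;
    import org.eclipse.emf.ecore.EReference;
    import org.eclipse.emf.ecore.EcoreFactory;
    import org.eclipse.emf.ecore.EcorePackage;

    public class FamilyMetamodel {
        public static EPackage build() {
            EcoreFactory f = EcoreFactory.eINSTANCE;

            // EClass Person with a String attribute 'name'
            EClass person = f.createEClass();
            person.setName("Person");
            EAttribute name = f.createEAttribute();
            name.setName("name");
            name.setEType(EcorePackage.Literals.ESTRING);
            person.getEStructuralFeatures().add(name);

            // EClass Family owning Persons through a containment (composition) reference
            EClass family = f.createEClass();
            family.setName("Family");
            EReference members = f.createEReference();
            members.setName("members"); // placeholder reference name
            members.setEType(person);
            members.setContainment(true);
            members.setUpperBound(-1); // unbounded multiplicity
            family.getEStructuralFeatures().add(members);

            // The containing EPackage, the programmatic counterpart of a .ecore file
            EPackage pkg = f.createEPackage();
            pkg.setName("family");
            pkg.setNsPrefix("family");
            pkg.setNsURI("http://example.org/family"); // placeholder namespace URI
            pkg.getEClassifiers().add(person);
            pkg.getEClassifiers().add(family);
            return pkg;
        }
    }

The tree editor performs essentially these create-and-add steps behind the scenes whenever a classifier or feature is added to an .ecore file.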

4.2 Improving the Meta-Models

The first realization at the outset of this project was that more clarity was needed between the meta-model(s) and the tool with which assessments would be done. In Tummalapalli's tool, only one meta-model was built, in which everything was integrated, as explained earlier in Chapter 3. With his tool, it is difficult to differentiate between the levels of requirements that the tool fulfils. In our tool, in order to deal with this 'combined one model' problem, the levels being referred to are as follows:

1. Defining Standards - Assessments are always done with reference to a specific standard (e.g. CMMI, ASPICE). The various classes in this model can only be concepts that are already defined in standards such as CMMI and TMMi. Those standards need to be clearly defined in an instance of the meta-model, so that a person defining an assessment in the next step has access to the material and information given in the standards. This meta-model level deals with defining standards; in our project we have named it 'standards'.

2. Defining Assessments - This model level deals with defining an assessment before going to a client to do the assessment. It involves keeping a ’template’ ready that makes the job of the assessor (or the assessment team that will do the actual assessment task) easy. This model has been named ’assessment’.

3. Doing an Actual Assessment - This model level deals with the assessment that is being done, that is, gathering the data, doing surveys, conducting interviews, and so on, in order to get a final assessment score. This model has been named 'actual'.

Figure 4.2 shows an assessment sheet from the Altran Excel template for a client. In Table 4.1, the various concepts from the Excel sheet are explained. All these concepts need to be taken into account while making the meta-models, so that whatever data was represented in this Excel template can also be replicated in our software assessment tool.

Figure 4.2: The Altran Excel sheet that supports process audit for the quality characteristic of Reliability

Concept | Description | EMF Relationship
ID | Identification | Attribute
Description | Description of the Process Group/Process/Practice that is being evaluated | Attribute
Documentation | A short description of the documentation relating to the practice being evaluated | Attribute
Interviews | A note on the response received in the interview | Attribute
Doc (Documentation Score) | Represents the score on the documentation. Y: documentation is available, compliant. X: documentation is available, but not compliant. M: documentation is missing | Enumeration
Act (Satisfaction Level) | Represents the satisfaction level of the process or practice; this is decided by the assessor at the time of the assessment. Fully satisfied: practices consistently executed, appropriate artifacts are present, no major weaknesses found. Largely satisfied: practices almost always executed and almost all appropriate artifacts present, some improvements possible. Partially satisfied: the practices are frequently (or recently) executed, some artifacts are present. Not satisfied: the practices are not executed, no artifacts are present | Enumeration
Consequence (Consequence Level) | Represents the impact of the process on the software product quality characteristic. Insignificant: process or practice may affect the software quality characteristic in exceptional circumstances (may happen after 10 years). Minor: process or practice might affect the software quality characteristic in 5-10 years. Moderate: process or practice might affect the software quality characteristic in 2-5 years. Major: process or practice could affect the software quality characteristic in 1-2 years. Extreme: process or practice may affect the software quality characteristic within a short period | Enumeration
Score | Represents the software product quality assessment for each process/practice/goal | Enumeration

Table 4.1: Concepts in a Process Audit for product Quality in the Altran Excel Template (for Assessments)

The following sections explain the three meta-models that were made to build the software assessment tool, starting with the standards meta-model.

4.3 The standards Meta-Model

Figure 4.3 represents the first-level meta-model, which is used to facilitate the definition of the standards that are needed for assessments. The various classes, relations, and data types in it can be seen in Figure 4.3.


4.3.1 Description of the Classifiers

In Table 4.2, the different EMF classifiers (classes, enumerations, etc.) from Figure 4.3 are listed along with the relationships that they have with each other. The StandardDefinitions class is the parent class of the whole diagram and is named so because it is through this meta-model that we want to define all relevant standards for process and product quality assessments. It contains the StandardDefinition abstract superclass, whose child subclasses represent types of process and product standards; hence it is an abstract class. The classes under it are concrete, since they are the classes to be instantiated [2]. Furthermore, for the purpose of this tool, every occurrence of StandardDefinition is either a product standard or a process standard. For this reason, the ProcessStandard and ProductStandard classes are specializations of StandardDefinition.

The ProductStandard class is the one that incorporates product standards, which have characteristics, sub-characteristics, and metrics, so the corresponding classes are in the meta-model. As can be seen in Figure 4.3, it contains the ProductCharacteristic class, which contains the SubCharacteristic class, which in turn contains the Metric class.

The ProcessStandard class is the one used to represent process standards such as CMMI, TMMi, and ASPICE. Some aspects of ASPICE have been incorporated into this model, but due to a shortage of time and data we stopped working on ASPICE; future modifications can easily be made in the meta-model to include other standards that have different structures. The GenericPracticeGroups, Process, and ProcessGroup classes are contained within the ProcessStandard class. As explained in Chapter 2, Generic Goals are the goals that need to be achieved for every process in both CMMI and TMMi, so we have placed this class directly within Process, instead of GenericPracticeGroups. The PracticeGroup class is contained within the Process class as well as the GenericPracticeGroups class. It is abstract, and is a generalization of the Goal class; since Goals in the standards are a way of grouping Practices, we have made this generalization. ProcessGroup was initially made to resemble Process Groups as described in ASPICE. However, it can also be used for CMMI and TMMi assessments, to group processes together while defining standards. An example of this would be grouping processes for CMMI-DEV (Development) or CMMI-SVC (Services).

The MaturityLevel class is contained within Process and is used to give each process its maturity level. It contains an attribute maturity of EType Maturity, which is used to allocate maturity levels to processes, as pre-defined in the CMMI and TMMi standards. In the meta-model, there is an enumeration (the aforementioned EType) named Maturity, which has the five literals One, Two, Three, Four, Five, describing the five maturity levels defined in CMMI and TMMi for processes. A sketch of this enumeration in Ecore API terms is shown below.
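As an illustration, the Maturity enumeration could be created through the Ecore API as in the following Java sketch; this is only a sketch, as the thesis meta-models themselves were defined in .ecore files through the editor.

    import org.eclipse.emf.ecore.EEnum;
    import org.eclipse.emf.ecore.EEnumLiteral;
    import org.eclipse.emf.ecore.EcoreFactory;

    public class MaturityEnum {
        // Builds the Maturity EEnum with the five literals named in the text.
        public static EEnum build() {
            EcoreFactory f = EcoreFactory.eINSTANCE;
            EEnum maturity = f.createEEnum();
            maturity.setName("Maturity");
            String[] literals = {"One", "Two", "Three", "Four", "Five"};
            for (int i = 0; i < literals.length; i++) {
                EEnumLiteral lit = f.createEEnumLiteral();
                lit.setName(literals[i]);
                lit.setValue(i + 1); // numeric value of the maturity level
                maturity.getELiterals().add(lit);
            }
            return maturity;
        }
    }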

Attributes such as name, description, and ID are common to almost all the classes in this meta-model, so the NamedElement class is a supertype of those classes, which thus inherit these attributes. Hence, instances of those classes can each have their own name, ID, and description. Since there is a generalization relationship with all the other classes, there would be several more arrows showing those relations in Figure 4.3; to avoid clutter in the diagram, those relations have been hidden.

The Vocabulary class is used to describe the meanings of new terms or abbreviations. The Note class is used to add detail, justification, or background on any other component from the CMMI model. The Annexure class is used to add additional information about the process if required.

The WorkProduct class is from the CMMI standard and represents the output result of a practice. The ProcessAttribute class is related to the ASPICE standard and is used as a way of scoring processes in that standard. However, this aspect has not been elaborated upon further, since ASPICE aspects were not added due to the time constraints of the project.


Figure 4.3: The standards Meta-Model

Classifier Name | EMF Relationship to Other Classes | Type of Classifier
StandardDefinitions | Composition (owner) of StandardDefinition | Concrete Class
StandardDefinition | Supertype/generalization of ProcessStandard and ProductStandard; composition (owned by) StandardDefinitions | Abstract Class
ProcessStandard | Specialization of StandardDefinition; composition (owner of) ProcessGroup, Process and GenericPracticeGroups | Concrete Class
ProductStandard | Specialization of StandardDefinition; composition (owner of) ProductCharacteristic | Concrete Class
ProductCharacteristic | Composition (owner of) SubCharacteristic; composition (owned by) ProductStandard | Concrete Class
SubCharacteristic | Composition (owner of) Metric; composition (owned by) ProductCharacteristic | Concrete Class
Metric | Composition (owned by) SubCharacteristic | Concrete Class
ProcessGroup | Composition (owned by) ProcessStandard | Concrete Class
Process | Composition (owned by) ProcessStandard and ProcessGroup; composition (owner of) MaturityLevel, ProcessAttribute, WorkProduct, Note, Vocabulary, Annexure, PracticeGroup, Outcome | Concrete Class
Outcome | Composition (owned by) Process; reference from SpecificPractice | Concrete Class
GenericPracticeGroups | Composition (owned by) ProcessStandard; composition (owner of) PracticeGroup | Concrete Class
SpecificPractice | Specialization of Practice; reference to Outcome | Concrete Class
PracticeGroup | Composition (owned by) GenericPracticeGroups; supertype of Goal; composition (owner of) Practice | Abstract Class
Practice | Composition (owned by) PracticeGroup | Abstract Class
WorkProduct | Composition (owned by) Process | Concrete Class
Note | Composition (owned by) Process | Concrete Class
Vocabulary | Composition (owned by) Process | Concrete Class
Annexure | Composition (owned by) Process | Concrete Class
MaturityLevel | Composition (owned by) Process | Concrete Class
Maturity | EType that is used by MaturityLevel | Enumeration
ProcessAttribute | Composition (owned by) Process | Concrete Class
NamedElement | Supertype of every class in this meta-model except MaturityLevel and ProcessAttribute | Abstract Class

Table 4.2: The different Classifiers in the standards Meta-Model

4.3.2 Dynamic Instance of the Meta-Model

The term dynamic instance refers to the fact that the mechanism does not rely on generated Java classes; instead, it uses a special EObject subclass that supports all aspects of Ecore, including EAttributes and EReferences, based on the .ecore meta-model definition. Hence, dynamic instances have the same behaviour as instances of generated Java classes [5]. A dynamic instance is an easy way to create an instance of a meta-model during the initial development stages, so that the meta-model behaviour can be tested [1] [16].
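As an illustration of this reflective mechanism, the sketch below loads a meta-model and instantiates one of its classes without any generated code, using EMF's standard reflective API. The file name standards.ecore and the attribute name name are assumptions made for this example, following the naming used in this chapter.

```java
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.util.EcoreUtil;
import org.eclipse.emf.ecore.xmi.impl.EcoreResourceFactoryImpl;

public class DynamicInstanceDemo {
    public static void main(String[] args) {
        // Register a resource factory so ".ecore" files can be loaded.
        ResourceSet rs = new ResourceSetImpl();
        rs.getResourceFactoryRegistry().getExtensionToFactoryMap()
          .put("ecore", new EcoreResourceFactoryImpl());

        // Load the meta-model (file name assumed for this example).
        Resource res = rs.getResource(URI.createFileURI("standards.ecore"), true);
        EPackage pkg = (EPackage) res.getContents().get(0);

        // Instantiate the Process EClass reflectively: no generated Java classes needed.
        EClass processClass = (EClass) pkg.getEClassifier("Process");
        EObject process = EcoreUtil.create(processClass);

        // Attributes inherited from NamedElement are set through the reflective
        // eSet API; the feature name "name" is assumed here.
        process.eSet(processClass.getEStructuralFeature("name"), "Requirements Management");
        System.out.println(process.eGet(processClass.getEStructuralFeature("name")));
    }
}
```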

Dynamic instances can be created from the Sample Ecore Model Editor; the generic instance editor, named the Sample Reflective Ecore Model Editor, can then edit these Ecore model instances. The instances are object graphs, and the editor provides tree-based editing of the main hierarchical structure of elements conforming to the EClasses found in the corresponding Ecore model [5]. There are commands for creating, deleting, copying and pasting elements, and a property sheet for editing details. Alongside the tree editor is the Properties view, which is used for editing.

An example of the editor and Properties view is shown in Figure 4.4. The root node represents the file, which contains a Standard Definition, in this case the CMMI Process Standard, and below it a Product Standard, the ISO/IEC 25010 Quality Model. The CMMI Process Standard contains Generic Practice Groups and the different Process elements (Requirements Management and Process and Product Quality Assurance). Contained within the Process elements are the Goal elements (for example, Specific Goal 1 under Requirements Management), and within these are the Specific Practice elements (Understand Requirements, Obtain Commitment to Requirements, etc.). Contained within the Product Standard are a number of Product Characteristic elements, namely Reliability and Functional Suitability. Since Process Standard is selected, its attributes (inherited from NamedElement) are shown in the Properties view. To edit a value, one simply clicks on the Value column.

To create new elements, one can right-click on the (to-be) parent and select New Child, followed by the type of element to add. Only legal types of elements are shown; e.g. one can add Generic Practice Groups or a Process to a Process Standard, but not a Process to a Generic Practice Group. This 'legality' is based on the meta-model and the various relationships between classifiers, as shown in Figure 4.3 and Table 4.2. The new element is inserted at the bottom of the list of children underneath the parent, so it may have to be moved using drag and drop. To place the new element in the middle of a list of children, one can alternatively right-click on the child just above the desired position and select New Sibling and the element type [5].
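The editor derives this legality from the containment references of the parent element's EClass, and the same information can be computed reflectively. Below is a minimal sketch; in the real editor, concrete subtypes of an abstract reference type are offered as well.

```java
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EReference;

public final class LegalChildren {
    /** Prints which element types may be created under the given parent,
        mirroring the editor's "New Child" menu. */
    public static void printLegalChildTypes(EObject parent) {
        EClass parentClass = parent.eClass();
        // Each containment reference of the parent's EClass contributes
        // one legal child type (plus its concrete subtypes).
        for (EReference ref : parentClass.getEAllContainments()) {
            System.out.println(ref.getName() + " -> " + ref.getEReferenceType().getName());
        }
    }
}
```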

Figure 4.4: A dynamic instance of standards showing the editor and Properties view

4.4 The assessment Meta-Model

The second-level meta-model has been made so that assessments can be defined before they are carried out. The various classes in it, along with their relations and data types, can be seen in Figure 4.5.

4.4.1 Description of the Classifiers

Tables 4.3 and 4.4 list the different classifiers in the meta-model of Figure 4.5, along with their respective EMF relationships.

The AssessmentDefinitions class is the parent class of this meta-model. Contained within it is the AssessmentDefinition class, which is abstract and groups all process assessment definitions under it. The ProcessAssessmentDefinition class, a specialization of AssessmentDefinition, is the one used to define assessments. For example, when a CMMI/TMMi assessment is coming up for a company, the assessment team decides, a few weeks or months beforehand, which processes to pick from the standards: the ones they believe impact the particular product characteristic that the client wants to target.

Contained within the ProcessAssessmentDefinition class is AssessmentProductCharacteristic, because at this level, while defining an assessment, the assessor and their team must choose the product characteristic that the client company wants assessed. This class has a reference to ProductCharacteristic, which is contained within the ProductStandard class.

Furthermore, the AssessmentProductCharacteristic class contains AssessmentProcess and AssessmentProcessGroup. After selecting the Product Characteristic, the assessor can either add processes directly, or add process groups first and then processes under them. Process groups are added so that it can be identified on what basis subsequent processes are added, for example 'Processes in CMMI-DEV + Processes in CMMI-SVC affecting Reliability'. In this example, we are grouping processes that affect the product quality characteristic Reliability.

The AssessmentProcessGroup class is a specialization of GenericPracticeGroupsReference. The latter class refers to the GenericPracticeGroups class, which has been loaded from the standards meta-model, and the person defining the assessment can use it to add the Generic Goals defined there.

The AssessmentProcess class is used to add processes from the standards. It contains the AssessmentGroupDefinition class, and it has a reference to the Process class, which has been loaded from the standards meta-model. The AssessmentGroupDefinition class is the intermediate class used to reference practice groups; these are called practice groups because, in the standards we are dealing with, practices are grouped together under goals. AssessmentGroupDefinition also contains the AssessmentPracticeContribution class, which has an attribute called consequence of Etype ConsequenceLevel.

AssessmentPracticeContribution is used by the team defining the assessment to record what consequence level a particular practice has on a product quality characteristic. This stage of 'Assessment Definition' has been explained in earlier chapters. AssessmentPracticeContribution has a reference to Practice, which is contained within the PracticeGroup class; the PracticeGroup class, in turn, is contained within the GenericPracticeGroups class. Essentially, practice groups are the goals defined in the standards. AssessmentGroupDefinition has a reference to PracticeGroup, so that the user can look up goals while defining an assessment. PracticeGroup contains Practice, which is a generalization of SpecificPractice and GenericPractice, all three of which have been loaded from the previous meta-model.

The Scoring class is the one in which the assessor will calculate individual practice scores; it is also used in the actual assessment meta-model, which is our third and final level meta-model. It has references to the AssessmentPracticeContribution and Measurement classes, since in order to calculate individual practice scores, consequence levels need to be decided beforehand for practices, and satisfaction levels need to be determined as a result of an assessment (refer to Figure 2.6 and Chapter 3). The Measurement class has an attribute satisfaction of Etype SatisfactionLevel, and the AssessmentPracticeContribution class has an attribute consequence of Etype ConsequenceLevel.

The Scoring class has two operations, findAssessmentPracticeContribution and findMeasurement, which are explained later in this chapter. It also has an attribute score of Etype Score.

The SatisfactionLevel enumeration is what will be used while performing an actual assessment, i.e. in the next meta-model, but it has been defined at this level, since this is where assessments are being defined. The satisfaction level is what the assessor enters while performing an assessment, choosing between 4 levels, according to the Altran reliability matrix shown in Figure 2.6.


Figure 4.5: The assessment Meta-Model

The ConsequenceLevel enumeration has also been added because Altran uses consequence levels for its assessments. It has 5 levels and, unlike the previous enumeration, is actually used at this level, while defining assessments: the team defining an assessment assigns a consequence level to each individual practice. Within this enumeration are 5 literals (Insignificant, Minor, Moderate, Major, Extreme), taken from the Altran Scoring Matrix in Figure 2.6, described in Chapter 2.

The GroupScore class is an abstract class with a reference to AssessmentGroupDefinition. It is also a specialization of the ScoreAverage class, which contains 3 operations that are used to calculate the scores of individual practices and processes; ScoreAverage also contains an attribute scoreAverageValue of Etype EDouble.

Finally, the Score enumeration is used to express the final score of each practice defined in the assessments, according to the Altran scoring matrix shown in Figure 2.6. Within this enumeration are 4 literals, One, Two, Three and Four, which are the scores from the Altran scoring matrix. The Score values are calculated from a combination of consequence levels and satisfaction levels.
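To illustrate the calculation mechanism, the sketch below models the lookup of a Score from a consequence/satisfaction pair. The ConsequenceLevel and Score literal names come from this section; the SatisfactionLevel names and all matrix cell values are placeholders, since the exact Altran matrix of Figure 2.6 is not reproduced here.

```java
public final class ScoringSketch {

    enum ConsequenceLevel { INSIGNIFICANT, MINOR, MODERATE, MAJOR, EXTREME }
    enum SatisfactionLevel { NONE, PARTIAL, LARGE, FULL }   // 4 levels; names assumed
    enum Score { ONE, TWO, THREE, FOUR }

    // Placeholder matrix: rows = ConsequenceLevel, columns = SatisfactionLevel.
    // The real cell values come from the Altran scoring matrix (Figure 2.6).
    private static final Score[][] MATRIX = {
        { Score.FOUR,  Score.FOUR,  Score.FOUR,  Score.FOUR },  // Insignificant
        { Score.THREE, Score.THREE, Score.FOUR,  Score.FOUR },  // Minor
        { Score.TWO,   Score.THREE, Score.THREE, Score.FOUR },  // Moderate
        { Score.ONE,   Score.TWO,   Score.THREE, Score.FOUR },  // Major
        { Score.ONE,   Score.ONE,   Score.TWO,   Score.FOUR }   // Extreme
    };

    /** Looks up the score for one practice from its consequence and satisfaction levels. */
    static Score score(ConsequenceLevel c, SatisfactionLevel s) {
        return MATRIX[c.ordinal()][s.ordinal()];
    }
}
```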

Classifier Name | EMF Relationship to Other Classes | Type of Classifier
AssessmentDefinitions | Parent class of the meta-model; Composition (owner of) AssessmentDefinition | Concrete Class
AssessmentDefinition | Composition (owned by) AssessmentDefinitions | Abstract Class
ProcessAssessmentDefinition | Specialization of AssessmentDefinition; Composition (owner of) AssessmentProductCharacteristic | Concrete Class
AssessmentProductCharacteristic | Composition (owner of) AssessmentProcessGroup; Reference to ProductCharacteristic; Composition (owner of) AssessmentProcess | Concrete Class
GenericPracticeGroupsReference | Generalization of AssessmentProcessGroup; Generalization of AssessmentProductCharacteristic | Abstract Class
AssessmentProcessGroup | Specialization of GenericPracticeGroupsReference; Composition (owner of) AssessmentProcess | Concrete Class
ProcessGroup | Reference from AssessmentProcessGroup; Composition (owner of) Process | Concrete Class
Process | Composition (owner of) PracticeGroup; Reference from AssessmentProcess | Concrete Class
AssessmentProcess | Reference to Process; Composition (owner of) AssessmentGroupDefinition; Composition (owned by) AssessmentProductCharacteristic | Concrete Class
ProductCharacteristic | Reference from AssessmentProductCharacteristic | Concrete Class
ProductStandard | Owner of ProductCharacteristic | Concrete Class
AssessmentGroupDefinition | Owner of AssessmentPracticeContribution; Reference from GroupScore | Concrete Class
AssessmentPracticeContribution | Composition (owned by) AssessmentGroupDefinition; Reference to Practice; Reference from Scoring | Concrete Class

Table 4.3: The different classes in the assessment meta-model


Classifier Name | EMF Relationship to Other Classes | Type of Classifier
Scoring | Reference to AssessmentPracticeContribution and Measurement | Concrete Class
Measurement | Reference from Scoring | Concrete Class
Practice | Generalization of SpecificPractice and GenericPractice; Composition (owned by) PracticeGroup | Abstract Class
PracticeGroup | Reference from AssessmentGroupDefinition | Abstract Class
GenericPracticeGroups | Reference from GenericPracticeGroupsReference | Concrete Class
GenericPractice | Specialization of Practice | Concrete Class
SpecificPractice | Specialization of Practice | Concrete Class
SatisfactionLevel | Etype used in Measurement | Enumeration
ConsequenceLevel | Etype used in AssessmentPracticeContribution | Enumeration
Score | Etype used by Scoring | Enumeration
GrScore | Etype used in the ASPICE standard | Enumeration
GroupScore | Specialization of ScoreAverage | Abstract Class
ScoreAverage | Supertype of GroupScore | Abstract Class
NamedElement | Supertype (Generalization) of every class in this meta-model | Abstract Class

Table 4.4: The different classifiers in the assessment meta-model

In the following section, the various operations that have been implemented in this meta-model are explained, along with the scoped relation, which comes from a plugin provided by Altran.

4.4.2 Operations and Scoped Relations Used

A scoped relationship is used to restrict which instances of a class can be selected. Figure 4.6 shows a simple example of a scoped relation in a meta-model: a Shopper can only select items that are available in the Shop where he is shopping. Therefore, the shopItem relation is scoped by the shop relation.

Figure 4.6: An example of how Scoped can be used
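The plugin that evaluates this constraint is introduced below; conceptually, the check amounts to verifying that the object referenced through shopItem is contained in the object referenced through shop. The following is a reflective sketch of that check, assuming single-valued references; it only conveys the idea and is not the plugin's actual implementation.

```java
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EReference;

public final class ScopedCheck {
    /** Returns true if the value of 'scopedRef' (e.g. shopItem) is contained in the
        object reached through 'scopeRef' (e.g. shop), mirroring the scoped constraint. */
    public static boolean isInScope(EObject owner, EReference scopedRef, EReference scopeRef) {
        EObject target = (EObject) owner.eGet(scopedRef);   // e.g. the chosen Item
        EObject scope  = (EObject) owner.eGet(scopeRef);    // e.g. the chosen Shop
        if (target == null || scope == null) {
            return true; // nothing selected yet, so nothing to violate
        }
        // In scope when the scope object (here: the Shop) directly contains the target.
        return scope.eContents().contains(target);
    }
}
```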

Figure 4.7 shows the Ecore tree view of the Shopper example meta-model in Figure 4.6. The ext annotation with the scoped keyword 'binds' the scoped constraint (which is pre-programmed in the Altran EcoreExt plugin) to the Shopper class. Since the shopItem relation is scoped by the shop relation, an EAnnotation must be added to the shopItem reference within the Shopper class. The Ecore EAnnotation indicates which constraints to check for the class; this is a list of constraints, to which scoped needs to be added in order to have the scoped constraint evaluated for the Shopper class.
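Such annotations can also be attached programmatically. In the sketch below, the constraints annotation with source http://www.eclipse.org/emf/2002/Ecore follows the documented EMF convention for listing a class's named constraints; the ext source name and the scoped detail key are assumptions based on the tree view in Figure 4.7, not confirmed details of the Altran plugin.

```java
import org.eclipse.emf.ecore.EAnnotation;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EReference;
import org.eclipse.emf.ecore.EcoreFactory;

public final class ScopedAnnotationSketch {
    public static void annotate(EClass shopper, EReference shopItem) {
        // 1. Tell EMF validation that Shopper has a constraint named "scoped"
        //    (standard EMF "constraints" annotation).
        EAnnotation constraints = EcoreFactory.eINSTANCE.createEAnnotation();
        constraints.setSource("http://www.eclipse.org/emf/2002/Ecore");
        constraints.getDetails().put("constraints", "scoped");
        shopper.getEAnnotations().add(constraints);

        // 2. Mark the shopItem reference as scoped by the shop reference.
        //    Source name and detail key are assumptions for illustration.
        EAnnotation ext = EcoreFactory.eINSTANCE.createEAnnotation();
        ext.setSource("ext");
        ext.getDetails().put("scoped", "shop");
        shopItem.getEAnnotations().add(ext);
    }
}
```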

Handling such scoped relations is made possible by a plugin provided by Altran. The following part explains how the scoped relation is used in various relations of the assessment meta-model.

Presented in Figure 4.8 is the Ecore tree view of the scoped relations used in the assessment meta-model.

Figure 4.7: The Ecore view of the Scoped Shop example


Figure 4.8: The Ecore view of the Scoped relation in the AssessmentPracticeContribution class

AssessmentPracticeContribution class

The relation practice in AssessmentPracticeContribution is scoped by practicegroup in AssessmentGroupDefinition, as indicated in Figure 4.8. The scoped relation is used in this class to select from the Specific Practices that have been defined in the standard at the previous-level meta-model. In Figure 4.9, on the left is the CMMI standard, defined in an instance of the standards meta-model, and on the right-hand side is an instance of the assessment meta-model in which an assessment has been defined. Only the practices that have been defined in the instance of the standard, within the goal PPQA SG1, can be selected as practices in the instance of the assessment.
