
Eindhoven University of Technology

MASTER

Model-based testing and data combination

Vishal, V.

Award date:

2012



Model-Based Testing and Data Combination

Vivek Vishal

August 21, 2012

Master Thesis

Eindhoven University of Technology

Department of Mathematics and Computer Science

In cooperation with:

Philips Healthcare Best, The Netherlands

Supervisor at TU/e:

Dr. Mohammad Reza Mousavi
Assistant Professor
Department of CS, TU/e, Eindhoven
M.R.Mousavi@tue.nl

Supervisors at Philips Healthcare:

Rachid Kherazi

Project Manager, FXD Department
Philips Healthcare, Best

Rachid.Kherazi@philips.com

Supervisor at MIT:

Dr. Preetham Kumar
Associate Professor and Head of Department
Department of I & CT, MIT, Manipal
Preetham.Kumar@manipal.edu

Mehmet Kovacioglu, M.Sc., PDEng.

Verification Designer, FXD Department
Philips Healthcare, Best

Mehmet.Kovacioglu@philips.com


Acknowledgments

This master thesis is carried out in the Formal System Analysis group at Eindhoven University of Technology (TU/e), in cooperation with Philips Healthcare, Best, The Netherlands. I would like to take this opportunity to thank all who have directly or indirectly supported me.

I would like to convey my sincere thanks to Dr. Mohammad Reza Mousavi, my supervisor at TU/e for his immense support and guidance throughout my work.

My sincere thanks to Rachid Kherazi (project manager) and Mehmet Kovacioglu (mentor) for giving me the opportunity to perform this research at Philips Healthcare and guiding me in my work. I would like to extend my gratitude to all my colleagues at the FXD Department, Philips Healthcare, for providing me with all the information required about the subsystem and a very friendly working environment.

I would also like to thank Geert van Birgelen, Engineering Group Leader, for providing all the equipment required for this project.

I am grateful to Prof.dr. Manohara M. Pai for providing us with the opportunity to pursue our study at the TU/e. I am thankful to Dr. Preetham Kumar for being my project coordinator from Manipal University and for his evaluation. I am grateful to Prof.dr. Mark van den Brand, the program director for the Manipal Program, for his valuable guidance at every step of this master's study at TU/e.

Finally, I would like to thank all members of the Spec Explorer forum for providing valuable feedback in a timely manner.



Abstract

Model-based testing (MBT) has attracted major interest in recent years. It relies on models, derived from a system's specifications, that encode the intended behavior of the system. The models are used to generate test cases, which verify the expected behavior of the System Under Test (SUT) against its implementation.

We apply MBT to test the Image-Detection Subsystem (IDS) of X-Ray machines against its specification. Initially, we prepared the infrastructure necessary for MBT.

Then we modeled the behavior of the IDS using its Product Requirement Specification (PRS). Test cases were generated from the model and executed against the implementation. In the second step of our modeling process, we incorporated data explicitly.

Data is an important aspect of testing in the IDS, and the interdependencies among the data make the modeling process even more challenging. We describe a methodology that aims at capturing the dependencies among various parameters while performing testing.

We also develop a tool that implements the described methodology. Various combinations of interdependent data, reflecting the dependencies among parameters, are generated by the tool. These combinations are then provided as input to the MBT tool to generate test harnesses. The developed tool also automates the manual and error-prone process of extracting data from the PRS.

In most practical applications, input parameters are interdependent: a parameter affects, or is affected by, the values of other parameters.

Most current testing methods, which apply several combination strategies in order to reduce the input space, do not take data dependency into account, and this may lead to an inappropriate selection of data combinations. We also developed a generalized tool that addresses these problems by using a generalized approach to handling data dependency. It generates test suites that can be used by any testing infrastructure and is not limited to MBT.

We also describe the advantages of MBT over traditional testing approaches and show how MBT can increase the efficiency and effectiveness of testing.



Contents

1 Introduction
  1.1 Problem statement
  1.2 Research contribution
  1.3 Thesis outline

2 Background
  2.1 The Image Detection Subsystem
  2.2 Infrastructure
  2.3 Model-Based Testing and Spec Explorer Overview
    2.3.1 Model Creation
    2.3.2 Model Analysis
    2.3.3 Test Case Generation
    2.3.4 Execution of Test Cases
    2.3.5 Evaluation of Test Results

3 Approach

4 Basic Model and Adapter
  4.1 Introduction
  4.2 Construction of the model
  4.3 Test-Case Generation
  4.4 Execution of Test-Cases
  4.5 Results

5 Data Dependency
  5.1 Data Handling
    5.1.1 Equivalence Partitioning
    5.1.2 Boundary Value Analysis
    5.1.3 Combination Schemes
  5.2 Open Issues
  5.3 Data Model for SUT
  5.4 Analysis
  5.5 Implementation
    5.5.1 Separate tool for data combination
    5.5.2 The Tool
  5.6 Tool Improvement
    5.6.1 Tools Combination
    5.6.2 Tool Functionality Enhancement
  5.7 Spec Explorer and the Tool
  5.8 Results

6 Generalized Tool
  6.1 Generalization of data dependency
    6.1.1 Dependency Representation
  6.2 Conclusion

7 Results
  7.0.1 Comparison

8 Conclusions and Future work
  8.1 Conclusions
  8.2 Future Work


Chapter 1

Introduction

Developing a system is a notoriously error-prone activity. To detect and fix errors in good time, most development processes rely on testing. There is great value in testing the correctness of any system before it is put into actual use, where a failure may result in heavy losses. Testing is especially important for safety-critical and commercially critical systems: Intel lost millions of dollars after releasing Pentium chips with the FDIV error [1], which had gone undetected due to insufficient testing.

Formally, testing is "an investigation conducted to provide stakeholders with information about the quality of the product or service under test. It is an independent view of the product to allow the business to appreciate and understand the risks of system implementation" [2]. It helps to gain confidence in the correct behavior of a system, or it shows the difference between the expected and actual behavior of the system.

However, testing is an expensive and labour-intensive activity. It accounts for 50% of the total development cost, and even more for safety-critical systems [3]. One of the goals is therefore to automate as much of testing as possible, thereby significantly reducing its cost, minimizing human error, and allowing tests to be completed well ahead of time.

Model-Based Testing (MBT) [4], [5] is a step towards test automation. It is an automatic testing technique that relies on models. Models are abstractions of real-life objects.

In the context of MBT, a model is used to encode the intended behavior of a system.

Traces of the model serve as test cases for an implementation. These test cases capture the executable paths of the system and hence provide information about its quality. MBT also offers possibilities for reducing the cost of test generation, increasing the effectiveness of the tests, and shortening the testing cycle [6]. Furthermore, traditional and current testing techniques involve the execution of a system and the observation of its behavior against the specification, all of which are automated in the MBT approach.


Specification documents often tend to be incomplete, ambiguous, and sometimes even contradictory. Designing tests from such documents is inadequate for providing information about the quality of a product. MBT offers a promising way to overcome such drawbacks in the current testing process, as the process of modeling resolves ambiguities, incompletenesses, and contradictions in the requirement specification.

This thesis applies MBT to the Image-Detection Subsystem (IDS), also known as Flat X-Ray Detection (FXD), of Philips Healthcare. The FXD group of Philips Healthcare is responsible for the subsystem that generates, detects, and translates X-Rays into images. The subsystem consists of a controller and a flat X-Ray detector (a digital photo-sensitive plate with layers that convert X-Rays to light). Many different medical applications use different configurations of this subsystem. The customers are system developers in the medical domain who integrate the subsystem into a complete X-Ray system. Important requirements enforced by regulatory authorities in the medical domain (such as the Food and Drug Administration in the US) include providing consistent tracking of requirements to implementation and tests. For this reason, work has to be done within a strictly regulated domain and a heavy test and quality-assurance process.

Until recently, most of the tests were performed manually, with only part of the functionality covered within the given amount of time. The current test environment consists of a test tool that can simulate (parts of) the hardware needed for the subsystem to work and an integrated solution based on a well-known unit-test framework (NUnit) to perform (semi-)automatic tests. The framework is also used to generate clear and consistent test logging and tracing. Although the current setup of the test environment has provided a flexible solution for (semi-)automatic testing of various configurations and setups, there is still ample room for increasing the effectiveness and efficiency of the tests with respect to coverage versus time. MBT turns out to be an ideal candidate for this purpose: it offers a promising approach to test-generation automation. Using MBT, testers can update the model and rapidly regenerate a new test suite, avoiding the tedious and error-prone editing of a suite of hand-crafted tests. It can be particularly effective for systems that change frequently.

Data is an important aspect of testing in the IDS. The parameters determine, at any point in time, the amount of X-Ray radiation generated. Since exposure to X-Rays is harmful to human beings, the parameter values must be kept well within their prescribed ranges.

Testing this aspect involves choosing data values for different parameters of the system.

As the SUT has a large number of parameters, the combination of all parameters soon becomes unmanageable. Moreover, the parameters are not independent: the selection of a value for one parameter determines the valid ranges of the other parameters.

Most MBT tools offer very limited support for combining parameters in a way that takes the dependencies among them into account. A major focus of this thesis is the development of an algorithm that effectively tests the input domain of the parameters on the basis of their mutual dependencies, and the implementation of this algorithm for the current SUT. The combinations of parameters serve as input for the MBT tool, which converts them into test harnesses (executable test code).

Different applications have different forms of data dependencies. This thesis provides a generic approach to handle data dependencies and generate parameter combinations fully automatically. A tool based on this generic approach was also developed; the combinations it generates can be used by any testing technique and are not limited to MBT.
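The thesis' tools are built around Spec Explorer in C#; as a language-neutral illustration of the underlying idea (not the actual algorithm of the thesis tool), the following Python sketch generates only those parameter combinations that respect dependencies, by computing each parameter's valid domain as a function of the values already chosen. The parameter names and ranges are hypothetical.

```python
def combinations_with_dependencies(params, domains):
    """Generate all parameter combinations that respect dependencies.

    `params` is an ordered list of parameter names; `domains` maps each
    name to a function that, given the values chosen so far, returns
    the currently valid values for that parameter.
    """
    def extend(partial, remaining):
        if not remaining:
            yield dict(partial)
            return
        name, rest = remaining[0], remaining[1:]
        for value in domains[name](partial):
            partial[name] = value
            yield from extend(partial, rest)
            del partial[name]
    yield from extend({}, params)

# Hypothetical dose parameters: valid pulse widths shrink at high voltage.
domains = {
    "kv": lambda chosen: [40, 80, 120],
    "pulse_ms": lambda chosen: [5, 10] if chosen["kv"] < 120 else [5],
}
combos = list(combinations_with_dependencies(["kv", "pulse_ms"], domains))
# 5 combinations; (kv=120, pulse_ms=10) is never generated.
```

Because invalid ranges are never enumerated, no post-filtering step is needed; a dependency-unaware cross product would have produced six combinations, one of them invalid.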

1.1 Problem statement

The overall problem can be divided into the following subproblems:

• Designing a model of the SUT:

– Development of infrastructure necessary for MBT.

– Designing a model for the SUT using its Product Requirement Specification (PRS).

– Generating test suite from the model.

– Establishing direct communication with SUT (to execute test cases).

– Executing the test suite against implementation.

• Efficient data modeling:

– Designing an algorithm that effectively combines parameters of the IDS in such a way that the dependencies among parameters are reflected.

– Implementing the algorithm.

– Integrating the algorithm with MBT.

• Developing a generalized tool to reflect data dependencies:

– Development of a generalized tool to model dependencies among parameters.

– Generalizing the format for expressing dependencies.

• Comparison:


– Comparing MBT with the current testing practice at FXD.

1.2 Research contribution

The purpose of this research is to design an effective behavioral model of the IDS.

The model, together with its environment, can be used to automate most of the testing activities in the entire test cycle. As the SUT is available in different versions, and various other versions are expected in the near future, MBT facilitates fast test generation for all versions with only a small change to the model according to the specification. The present research demonstrates the advantages of MBT over the current (semi-)automatic testing practice at FXD. It provides and implements an algorithm that reflects the dependencies among parameters of the current SUT, resulting in the effective selection of data combinations. The generalized tool developed as part of this research is also applicable to testing other applications with different forms of data dependencies.

1.3 Thesis outline

Chapter 2 includes the description of the SUT, MBT and the MBT tool Spec Explorer.

We also discuss the infrastructure necessary to communicate with the IDS.

Chapter 3 presents the approach applied to solve the problems mentioned in Chapter 1.

Chapter 4 describes the process of creation, exploration, and generation of test cases from the model of the SUT.

Chapter 5 discusses the influence of interdependent parameters and how we effectively handle them in the proposed approach. We discuss the limitations of a number of combination strategies being currently used. We finally present our approach to generate parameter combinations and its integration with MBT.

Chapter 6 presents the requirements and the design of a generalized tool developed to reflect data dependencies.

Chapter 7 presents the results obtained. It also compares MBT with the current testing strategy in the Image Detection Subsystem.

Chapter 8 provides a conclusion of our work and the recommendations for future work.


Chapter 2

Background

In this chapter, we will give a brief description of the Image Detection Subsystem (IDS), the testing infrastructure and Model-Based Testing (MBT). While describing MBT, we also provide a brief overview of our MBT test tool Spec Explorer (SE).

X-Ray systems are used for medical imaging. The whole system is used in a number of X-Ray interventions, such as Cardio, Vascular, and Neuro procedures. The system under test is the Flat Detector (FD) subsystem, also known as the Image Detection Subsystem (IDS), responsible for the generation, detection, and translation of X-Rays into images.

2.1 The Image Detection Subsystem

The IDS consists of the following components:

• Flat Detector: responsible for detecting X-Rays.

• Anti-scatter Grid: ensures that X-Rays fall perpendicularly (at a 90-degree angle) onto the flat detector.

• (Optional) Temperature Control Unit (TCU): Maintains the temperature of the flat detectors.

• Flat Detector Controller: responsible for a number of activities:

– Flat Detector control via a number of commands.

– Image Pre-Processing: Correction of image-artifacts.


Figure 2.1: System and Flat Detector

Figure 2.2: Controller

– Image Transformations: Basic image processing functions.

– X-Ray Dose measurement and control.

– Temperature control.

– Subsystem timing generation.

The system and the SUT within it are shown in Figures 2.1 and 2.2. The IDS is controlled by the Front End Control (FEC) through an interface known as the FEC-IDS interface. Figure 2.3 contains the architectural view of the IDS, showing its internal components and the interfaces with its environment. We do not discuss the details of the environment and focus only on the subsystem.


Figure 2.3: The FEC-IDS interface


Figure 2.4: MBT Infrastructure

2.2 Infrastructure

The MBT infrastructure is shown in Figure 2.4. The IDS is wrapped by a test environment known as Bellephoron (Bello). Bello contains everything (hardware and software) needed to test the SUT. It is responsible for creating an environment that enables test scripts to communicate with the different interfaces of the SUT.

The infrastructure for MBT also requires the creation of a test adapter. The test adapter is a communicator between the model and the implementation: a thin software layer that sits between the model and the SUT. When we create a test adapter, we declare the adapter methods and events that communicate directly with the SUT. As a result, the generated test code will call methods (and expect events) in the adapter.

The adapter has the task of turning these calls into SUT calls. This architecture is very helpful when the implementation is not managed code, when the communication is remote, or when there is a test infrastructure in the middle [7] (all of which apply in our case). Therefore, we have created a test adapter for the SUT.
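As a rough illustration of this layering (the real adapter is a C# layer talking to Bello and the IDS; the class, command, and state names below are hypothetical), a test adapter can be sketched as a thin translator between model-level actions and concrete SUT calls:

```python
class FakeIDS:
    """Stand-in for the real subsystem (which is reached via Bello);
    the command names and states are hypothetical."""
    def __init__(self):
        self.state = "Idle"

    def execute(self, command):
        # Accept the exposure command only from the Idle state.
        if command == "StartExposure" and self.state == "Idle":
            self.state = "Exposing"
            return "Accepted"
        return "Rejected"

class TestAdapter:
    """Thin layer between generated test code and the SUT: the tests
    call these methods, and the adapter turns them into SUT calls."""
    def __init__(self, sut):
        self.sut = sut

    def start_exposure(self):
        # Translate the model-level action into the SUT's command protocol.
        return self.sut.execute("StartExposure")

adapter = TestAdapter(FakeIDS())
verdict = adapter.start_exposure()  # "Accepted" on the fresh fake SUT
```

Because the generated test code only sees the adapter's methods, the SUT's wire protocol, remoting, or test infrastructure can change without touching the model.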

2.3 Model-Based Testing and Spec Explorer Overview

Models are abstractions of real-world objects. They can be used to represent the expected behavior of a system. Model-Based Testing (MBT) is an automatic testing technique that relies on behavioural models. Test cases are derived from the model and executed against the actual implementation. The expected behavior of the system according to the model is compared against its actual behavior: if they match, the test passes; otherwise it fails. Figure 2.5 shows the general structure of MBT. The process of modeling the behavior of a system resolves most of the misconceptions, incompletenesses, and ambiguities of a system specification.

Figure 2.5: MBT Structure
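The pass/fail comparison at the heart of this structure can be sketched as follows (a minimal Python illustration, not the thesis infrastructure; the toy model and the deliberately faulty implementation are invented for the example): each step of a test case is executed against both the model and the SUT, and the first mismatch yields a fail verdict.

```python
def run_test_case(steps, model_step, sut_step):
    """Execute a test case step by step; fail on the first mismatch
    between the model's expected output and the SUT's actual output."""
    for action in steps:
        expected = model_step(action)
        actual = sut_step(action)
        if expected != actual:
            return ("fail", action, expected, actual)
    return ("pass",)

# Toy model and a faulty implementation that mishandles "stop".
model = {"start": "started", "stop": "stopped"}.get
faulty_sut = {"start": "started", "stop": "started"}.get

verdict = run_test_case(["start", "stop"], model, faulty_sut)
# ("fail", "stop", "stopped", "started"): the model expected "stopped".
```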

Spec Explorer is a Model-Based Testing tool from Microsoft. It extends the Visual Studio Integrated Development Environment with the ability to define a model describing the expected behavior of a system. Models are defined as model programs, written in C#. Modeling in C# facilitates the use of high-level data types such as sets, maps, sequences, and bags, as well as comprehension notations such as universal and existential boolean quantifiers, non-deterministic choice, and contracts in the form of preconditions, postconditions, and invariants. Its ability to enumerate all instances created from a particular object type makes it a good candidate for object-oriented software systems.

MBT is a five-step process: creating a model, analyzing the model, generating test cases from the model, executing the test cases, and evaluating the verdict.

We describe each of these steps in detail:

2.3.1 Model Creation

Creating a model is an important and challenging task that requires many different skills (such as abstraction, understanding of requirements, and familiarity with the modeling language). The model should be abstract enough that it does not contain unnecessary implementation-level details of the system; at the same time, it should not be so abstract that it misses requirements crucial to imitating the system's behavior. There are basically three kinds of models [8]:

• State-based models: Z, B, Java Modeling Language (JML)

• State-machine-based models: Finite State Machine (FSM), Labelled Transition System (LTS), Process Algebra

• Algebraic models: Abstract Data Types (ADTs)

All of the above-mentioned notations can be used for constructing models; our selection is strongly influenced by the tool used for MBT. All the notations are based on the idea of states and the transitions between them. At any moment, the system is in one of the allowed states, and a transition can take place upon execution of an action if the system satisfies some precondition.

Spec Explorer uses action machines [9], which fall in the category of state-machine-based models. Action machines are a variation of LTSs, where labels represent observable activities (henceforth actions) of the described artifact and states capture full data models.
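For illustration only (Spec Explorer models are written in C#; the states and actions below are hypothetical), a labelled transition system can be represented as a set of action-labelled transitions, with an action enabled exactly when its precondition, here simply being in the right source state, is satisfied:

```python
# A labelled transition system for a hypothetical detector controller:
# transitions map (state, action) -> next state.
lts = {
    "initial": "Idle",
    "transitions": {
        ("Idle", "Connect"): "Connected",
        ("Connected", "StartExposure"): "Exposing",
        ("Exposing", "StopExposure"): "Connected",
        ("Connected", "Disconnect"): "Idle",
    },
}

def enabled_actions(lts, state):
    """Actions whose precondition (being in the source state) holds."""
    return sorted(a for (s, a) in lts["transitions"] if s == state)

def step(lts, state, action):
    """Execute an enabled action, returning the successor state."""
    return lts["transitions"][(state, action)]
```

An action machine additionally attaches full data models to the states; this sketch keeps only the state/transition skeleton.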

2.3.2 Model Analysis

Once the model has been defined in C#, it can be explored in the form of a graph representing the SUT's expected behavior. The graph can be analyzed for correctness of the model against the specification. If a mismatch is found between the behavior of the model and the specification, the model has to be redefined; otherwise we can proceed to generate test cases from the model.

2.3.3 Test Case Generation

A generic test case generation process accepts two main inputs:

• A formal model of the system under test, and

• A set of test generation directives which guides the tool.

As output, the process generates the test cases for the system under test.

Spec Explorer uses the model defined earlier in C# and offers three test-case generation algorithms [10]:


Figure 2.6: Spec Explorer Work flow

• Test Code Generation (TCG): A test suite (a set of test cases) is derived from the model by translating the exploration results into test cases according to the selected traversal algorithm and the test generation criteria.

• Dynamic Traversal (DT): DT is a special case of TCG. Users can either use the default dynamic traversal class or create their own.

• On-The-Fly (OTF): An experiment is described as a single trace of interactions with the system under test (SUT). A test run is a collection of experiments generated against the model. In a nutshell, the exploration and the interaction with the SUT happen simultaneously.
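As a simplified illustration of offline test-suite derivation (a generic bounded depth-first traversal, not Spec Explorer's actual algorithm; the transition relation is a toy example), every action sequence enumerated from the explored state graph becomes one test case:

```python
def generate_tests(transitions, initial, max_depth):
    """Enumerate all action sequences up to max_depth from the initial
    state; each sequence is one offline test case."""
    tests = []
    def dfs(state, trace):
        if trace:
            tests.append(list(trace))
        if len(trace) == max_depth:
            return
        for (src, action), dst in sorted(transitions.items()):
            if src == state:
                trace.append(action)
                dfs(dst, trace)
                trace.pop()
    dfs(initial, [])
    return tests

# Toy explored state graph with two states.
transitions = {
    ("Idle", "Connect"): "Connected",
    ("Connected", "Disconnect"): "Idle",
}
suite = generate_tests(transitions, "Idle", max_depth=3)
```

The traversal criterion (here a simple depth bound) plays the role of the test generation directives mentioned above: tightening or loosening it trades suite size against coverage.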

A general work flow of testing with Spec Explorer is shown in Figure 2.6.

2.3.4 Execution of Test Cases

After test cases have been generated, they can be executed against the SUT. The test cases provide the model's recommended stimuli to the SUT, and the responses are monitored to see whether they match the expectations established by the model. This is done in the generated test code by treating actions sent to the SUT as calls to the implementation's methods and by interpreting the methods' parameters as inputs to and outputs from the SUT.


2.3.5 Evaluation of Test Results

The evaluation of the results depends greatly on the support provided by the tool. Spec Explorer gives a detailed trace of all the actions that were executed as part of the test, which makes analysis and follow-up activities easier. In this phase we look for false negatives and false positives (how to find them is described in Chapter 3). If they are found, we correct the model. In the case of a true negative, we write a problem report for the SUT.


Chapter 3

Approach

In this chapter, we discuss the methodology adopted in solving the problems described in Section 1.1.

Initially, all subproblems were carried out independently. We subsequently combine the basic model (the model without data) of the subsystem with its data model. To develop the basic model and the data model of the subsystem, we follow an iterative and incremental approach; Figure 3.1 shows this iterative and incremental model development process. To develop the generalized tool, which supports test-case generation for applications with dependencies among parameters, we follow an incremental approach.

Figure 3.1: Iterative & incremental development model

We initially consider a basic set of requirements in the specification, design a model to capture those requirements, generate a test suite from the model, and evaluate the results after executing the test suite. This cycle continues until all the requirements have been covered correctly, i.e., all requirements are reflected in the model. A detailed description of the steps involved is given below:

• Construction of the model:


Construction of the model is the most important and crucial step of MBT. A clear understanding of the SUT and its specification is crucial to designing a correct model. Incorrect modeling may lead to false positives or false negatives (described later).

• Analysis of the model:

The SUT’s expected behavior can be viewed in the form of a graph. The graph can be used to analyze the correctness of the model. If any inconsistency with respect to the specification is detected in the graph, the model needs to be corrected before we can generate test cases from it.

• Generate test suite from the model:

Once we have constructed the model, the tool can generate a suite of test cases from it.

• Execute and evaluate tests: The generated test suite can run automatically against the implementation, and a verdict is generated together with the test-run details. Depending on the correct or incorrect implementation of both the model and the SUT, the test results fall into four categories [11]:

– A false negative: The result of incomplete or incorrect modeling. Since the implementation has correctly implemented the requirements, the failing test gives rise to a contradiction; hence the modeling error is easy to detect.

– A false positive: The result of both incorrect modeling and incorrect implementation of the requirements by the SUT. There is no contradiction, so these errors are hard to find.

– A true negative: The modeling is correct. However, there is a fault in the implementation. This is a desired outcome of the testing process.

– A true positive: Both the model and the implementation have correctly implemented the requirements, and the test case passes. This is a desired outcome of the testing process.
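The four categories can be summarized by crossing model correctness with implementation correctness; a small sketch of this classification (illustrative only, using the chapter's own terms):

```python
def classify(model_correct, impl_correct):
    """Map model/implementation correctness to the verdict categories
    used in this chapter."""
    if model_correct and impl_correct:
        return "true positive"   # test passes; desired outcome
    if model_correct:
        return "true negative"   # genuine SUT fault; write a problem report
    if impl_correct:
        return "false negative"  # failure contradicts the spec; fix the model
    return "false positive"      # both wrong, no contradiction; hard to find
```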

During the evaluation phase, our task is to look for every false negative (these are not hard to find and can be detected by scrutinizing the traces of failed test cases) and to be careful with false positives. One recommendation for dealing with false positives is to verify the model generated in the MBT process with the requirements engineer. As the requirements engineer has deep insight into the system's requirements, they can easily catch any inconsistency in the model that might lead to a false positive. We start from a small and less complex model to make our task easier. Only when we have resolved all false negatives and verified the model for false positives do we add new requirements to the model.

For the construction of the data model, we follow an incremental approach, adding new functionality on top of the old, verified model.


Chapter 4

Basic Model and Adapter

This chapter introduces the state behavior of the SUT and shows how we model it within our MBT tool. We discuss the level of abstraction used for modeling and the motivation behind it. We also mention the advantages of using an adapter (especially with respect to raising the abstraction level) within the current MBT framework. Finally, we show how to generate and execute test cases from the model.

4.1 Introduction

The basic model aims to capture all the states and transitions of the IDS. The subsystem offers a finite set of commands, which provide a means to interact with it. The subsystem may reside in a number of states during its execution.

Only a certain number of commands (actions) can be executed from a particular state, and their execution may or may not cause a transition to another state, according to the specification. However, the subsystem is expected to issue an appropriate message for any command requested from any state.

The infrastructure for MBT uses instance-based methods, i.e., there can be several instances of the infrastructure at a time. Therefore, while making the model, we have to make sure that the entire model executes on one instance of the infrastructure; otherwise the results will be unexpected, as different instances of the subsystem can be in different states. The graph obtained as the result of exploring a model in Spec Explorer (SE) reflects this phenomenon by explicitly enumerating each instance of the adapter. Each action in the graph is preceded


by its corresponding instance as shown below:

ModelProgram#0.DoSomething(P1, ..., Pn) / Output

where:

– ModelProgram#0 represents the instance number.

– DoSomething(P1,...,Pn) represents an action with parameters P1...Pn.

– Output represents the expected outcome.

4.2 Construction of the model

Construction of the model is the most important aspect of MBT. In Spec Explorer, a model is constructed using a piece of C# code known as a model program. While constructing a model, we often encounter the state-space explosion problem, which causes an unmanageable number of states in the model and is mainly caused by data. To keep our basic model relatively simple, we have not explicitly incorporated data into it. However, not all states of the SUT can be reached without data, as the actions responsible for transitions to those states require data as parameters, and we intend to cover as many states of the SUT as possible.

There are a number of parameters that have to be tested in the SUT. Each combination of the parameters is either valid or invalid; when an action with a valid combination is executed, it results in a transition different from that of an invalid one. In order to reach all states, we need a mechanism that generates valid and invalid combinations of parameters. To avoid stating the parameter combinations explicitly in the basic state-transition model, we added an extra function in the adapter that takes a boolean parameter. When we call that function with true from the model program, it internally makes a valid combination of all parameters and fills those parameters into their respective buffers; it does the opposite for false. As the behavior of the SUT is similar for every valid combination of parameters (and likewise for every invalid combination), one valid and one invalid combination are sufficient to verify the system's behavior. These combinations can easily be derived from the specification. This kind of flexibility can only be achieved with an adapter between the implementation and the model program.
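A sketch of such an adapter function (in Python for illustration, with hypothetical parameter names and ranges; the real adapter is written in C#): the model program passes only a boolean, and the adapter internally builds one representative valid or invalid combination.

```python
# Hypothetical parameter ranges, as they might be taken from the PRS.
SPEC_RANGES = {"kv": (40, 125), "ma": (1, 1000)}

def fill_parameter_buffers(valid):
    """Adapter helper: build one representative combination of all
    parameters, valid or invalid, so the model program only has to
    pass a single boolean instead of explicit values."""
    combo = {}
    for name, (lo, hi) in SPEC_RANGES.items():
        combo[name] = lo if valid else hi + 1  # out of range when invalid
    return combo

valid_combo = fill_parameter_buffers(True)     # {"kv": 40, "ma": 1}
invalid_combo = fill_parameter_buffers(False)  # {"kv": 126, "ma": 1001}
```

Keeping this choice inside the adapter leaves the state-transition model free of data, which is exactly what keeps its state space small.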

We cover data explicitly in the next chapter.

Next, we consider the construction of the model from Spec Explorer's perspective. Only a certain number of actions are valid from each state. However, the SUT should respond to each action according to its specification. A fragment of Spec Explorer's model program is presented in Figure 4.1. The whole model program is a collection of such rules, which define the possible transitions. A fully explored state space for the image detection subsystem is shown in Figure 4.2. The model shown in Figure 4.2 merely illustrates how a model looks in Spec Explorer.

Figure 4.1: Code Fragment

Figure 4.2: Resulting Model from the model program


4.3 Test-Case Generation

After having built a model, we can generate test cases from it. Spec Explorer supports both online and offline testing. In the context of MBT, online testing refers to exploring the model concurrently with generating and executing test cases. Offline testing requires a complete suite of test cases before they can be executed. For online testing, we can test the system directly from the model obtained from the model program (even without exploring it).

However, for offline testing we first have to generate the test suite. The model obtained from the model program is not suitable for generating a test suite, because from each state (in our case) there can be a number of transitions: we can non-deterministically go into two or more states from a single state. In order to generate a test suite from a non-deterministic model, the non-determinism has to be resolved.

The non-determinism is resolved using the tool's implicit functions. We can derive one or more sequences of actions without any branching from the model. These sequences of actions serve as test cases for the implementation. Executable test cases are automatically generated as a collection of C# files that can be used by any .NET-capable test framework (e.g., Visual Studio).

4.4 Execution of Test-Cases

The next step in the testing process is to execute the test cases against the implementation. This is as simple as clicking a button in Visual Studio. We get a pass or fail verdict for each test case, and we can also see the trace of actions executed against the implementation and their outcomes.

Based on the verdict, we either refine our model (in case of a false positive or false negative) or generate a fault report (in case of a true negative).

4.5 Results

In the whole process of MBT of the basic model of the IDS, a number of iterations had to be done, as explained in Chapter 3. This was primarily due to a poor understanding of the system's specification, which resulted in a number of false positives and false negatives. For the final model, the tool generated a total of 192 test cases, covering about 10000 transitions (resulting from different sequences of actions) within seconds. The execution of those test cases took less than 10 minutes. Unfortunately (from a tester's perspective), all the test cases passed.


Chapter 5

Data Dependency

In this chapter, we discuss the importance of data in MBT and the impact of dependencies among data values. We describe a number of parameter-combination strategies along with their limitations. Finally, we present an algorithm that takes data dependencies into account while selecting a combination of parameters.

5.1 Data Handling

Data is an important factor in testing. There can be various parameters that need to be tested in the SUT. Furthermore, there can be mutual dependencies among the values assigned to those parameters. Due to the many possible combinations of data, the number of test cases soon becomes unmanageable and needs to be handled in an efficient way. Every possible combination of parameters is a potential test case. In an actual implementation this may range from thousands to millions of combinations, resulting in a large number of test cases.

In cases where we have several data fields, each of which can have a large number of valid values, the number of combinations of these values can be huge. This potential explosion in the number of possible combinations is why testing every possible combination is not feasible.

Therefore, several techniques have been developed to cope with this problem. The most important of them are Equivalence Partitioning and Boundary Value Analysis, which are explained below.


5.1.1 Equivalence Partitioning

Equivalence partitioning is one of the most widely used test techniques in practice. The basic idea is to divide the input domain into equivalence classes or partitions of data which, according to the specification, lead to the same behavior. The basis of the technique is that any value chosen from an equivalence partition is as good as any other, since all values in the partition should be processed in a similar way. In this way we can obtain a small but effective test set. Consider the example of a grading system with the following rules concerning the value of the parameter mark:

– If a student scores 90% or more but less than or equal to 100%, the grade is A.

– If a student scores 80% or more but less than 90%, the grade is B.

– If a student scores 70% or more but less than 80%, the grade is C.

– If a student scores 60% or more but less than 70%, the grade is D.

– If a student scores less than 60%, the grade is F.

The only parameter in this application is mark, which has valid values from 0 to 100. The parameter mark decides the grade. Therefore, all valid values in the domain of mark plus some invalid values (e.g., -1 or 101) make potential test cases, resulting in a total of 102 test cases. Now, we apply equivalence partitioning to this application. The resulting partitions are shown in Figure 5.1.

Figure 5.1: Equivalence Partitioning (partitions: invalid below 0; F for [0, 60); D for [60, 70); C for [70, 80); B for [80, 90); A for [90, 100]; invalid above 100)

Every value in a particular partition is expected to yield the same result, so we test just a single value from each partition. Therefore, the test suite is now limited to seven test cases.
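The grading rules and the seven partition representatives can be written down directly. A minimal sketch; the representative values are arbitrary picks from each partition:

```python
def grade(mark: int) -> str:
    """Grade assignment following the rules listed above."""
    if mark < 0 or mark > 100:
        return "invalid"
    if mark >= 90:
        return "A"
    if mark >= 80:
        return "B"
    if mark >= 70:
        return "C"
    if mark >= 60:
        return "D"
    return "F"

# One representative per partition: the two invalid partitions plus F, D, C, B, A.
representatives = {-1: "invalid", 30: "F", 65: "D", 75: "C",
                   85: "B", 95: "A", 101: "invalid"}
```

Running `grade` on each representative exercises every partition once, giving the seven test cases mentioned above.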

5.1.2 Boundary Value Analysis

Boundary Value Analysis is a methodology for designing test cases that concentrates testing effort on cases near the limits of valid ranges. It refines equivalence partitioning. Boundary value analysis generates test cases that highlight specific types of errors better than equivalence partitioning, as most errors are likely to occur near the boundaries. For example, errors in the code concerning the use of the < operator in place of ≤ are very common.

Consider the example mentioned in Section 5.1.1. There are six boundaries, at 0, 60, 70, 80, 90 and 100, to be tested. For a complete boundary value analysis, we have to test three values around every boundary value [12]. Test cases for the boundaries at 0 and 60 are shown in Figure 5.2.

Figure 5.2: Boundary value analysis (three test values around each boundary, e.g., 59, 60 and 61 around the boundary at 60)

In a similar way, test cases can be created around every boundary. Therefore, a total of 18 test cases are required for a complete boundary value analysis. However, in most applications only two values are sufficient to test the correct implementation of a boundary, as two out of the three values have similar behavior [12].
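The three-values-per-boundary rule can be sketched as a small generator (an illustrative helper, not from the thesis tooling):

```python
def boundary_values(boundaries, step=1):
    """For each boundary b, produce the three test values b-step, b, b+step."""
    return [b + offset for b in boundaries for offset in (-step, 0, step)]

# The six boundaries of the grading example yield 6 x 3 = 18 test values,
# e.g. 59, 60 and 61 around the boundary at 60.
cases = boundary_values([0, 60, 70, 80, 90, 100])
```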

5.1.3 Combination Schemes

By using either Equivalence Partitioning or Boundary Value Analysis, the number of test cases can be reduced drastically while maintaining the effectiveness of testing. However, consider a simple scenario where a tester has to test a combination of 20 parameters. Suppose that using equivalence partitioning one obtains two values (one valid and one invalid) for each parameter. The number of test cases obtained will still be 2^20, i.e., more than a million. Even when the test generation process is automated, the process is quite tedious, and it is unimaginable when the test cases have to be obtained manually.

Hence, a number of other combination schemes have been proposed. A comparison of more than 15 combination strategies is given in [13].

Each Choice Combination Strategy

While applying the Each Choice Combination Strategy, a tester has to include each value of every input parameter in at least one test case. Suppose we have 3 input parameters A, B and C, where A has 10 valid values (say 1 to 10), B has 7 valid values (say 1 to 7) and C has 5 valid values (say 1 to 5). Then a total of 10 test cases, as shown in Table 5.1, is sufficient for the Each Choice Combination Strategy. The combinations in the table are randomly selected.

Table 5.1: Each Choice Combination Strategy

A  B  C
1  1  1
2  2  2
3  3  3
4  4  4
5  5  5
6  6  1
7  7  1
8  1  1
9  1  1
10 1  1
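The construction of Table 5.1 can be sketched as follows: domains are paired up positionally, and once a shorter domain is exhausted its first value is reused. A minimal illustrative sketch:

```python
from itertools import zip_longest

def each_choice(*domains):
    """Each Choice: every value of every parameter appears in at least one
    test case; exhausted domains fall back to their first value."""
    firsts = [d[0] for d in domains]
    rows = zip_longest(*domains)  # pads shorter domains with None
    return [tuple(firsts[i] if v is None else v for i, v in enumerate(row))
            for row in rows]

cases = each_choice(list(range(1, 11)), list(range(1, 8)), list(range(1, 6)))
# 10 test cases: the size of the largest domain, as in Table 5.1.
```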

Base Choice Combination Strategy

Applying the Base Choice Combination Strategy starts by identifying one base test case manually. This choice can be influenced by factors such as being the simplest or smallest case from an end user's point of view, or it can be based on the developer's perspective with respect to the most complex behavior (a combination that exercises some complex structure of the code).

New test cases are then generated by varying the value of one parameter at a time, keeping the other parameters of the base choice intact. For the example used for the Each Choice Combination Strategy, applying Base Choice results in a total of 20 test cases, shown in Table 5.2. The base test case identified was (1, 1, 1) for parameters A, B and C, respectively.

All Combination Strategy

Applying the All Combination Strategy results in all possible combinations of the input parameters. In the above-mentioned example, this strategy results in 350 test cases, as shown in Table 5.3. This strategy is normally not used, as it results in a very large number of test cases.

Table 5.2: Base Choice Combination Strategy

A  B  C
1  1  1
1  1  2
1  1  3
1  1  4
1  1  5
1  2  1
1  3  1
1  4  1
1  5  1
1  6  1
1  7  1
2  1  1
3  1  1
4  1  1
5  1  1
6  1  1
7  1  1
8  1  1
9  1  1
10 1  1
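Table 5.2 can be reproduced by the following sketch of the Base Choice construction (an illustrative helper, not from the thesis tooling):

```python
def base_choice(base, domains):
    """Base Choice: the base test case plus one new case per non-base value
    of each parameter, keeping the other parameters at their base values."""
    cases = [tuple(base)]
    for i, domain in enumerate(domains):
        for value in domain:
            if value != base[i]:
                varied = list(base)
                varied[i] = value
                cases.append(tuple(varied))
    return cases

cases = base_choice((1, 1, 1), [range(1, 11), range(1, 8), range(1, 6)])
# 1 base case + 9 + 6 + 4 variations = 20 test cases, as in Table 5.2.
```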

Pairwise Testing

Pairwise testing is a combinatorial testing method that tests all possible combinations of values for a pair of parameters. The values of the other parameters can be randomly selected. The number of test cases generated is of the order of n × m, where n and m are the numbers of valid values of the two parameters. For the above-mentioned example, a total of 70 test cases results if we choose parameters A and B for pairwise testing.

The motivation for pairwise testing is to catch bugs resulting from the interaction between two parameters, as bugs in a program are generally triggered by the interaction of a combination of two parameters. Bugs involving interactions between three or more parameters are progressively less common.

A number of other combination strategies, together with their comparison and implementation-level details, are given in [14].
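The variant described above, exhaustive over one chosen pair of parameters with the remaining parameters chosen at random, can be sketched as follows. Note this is a sketch of the text's formulation, not a general n-wise covering-array generator:

```python
import random

def pairwise_for(domain_a, domain_b, other_domains, seed=0):
    """All combinations of values for the chosen pair (A, B); the remaining
    parameters receive randomly selected values."""
    rng = random.Random(seed)  # seeded for reproducible test cases
    return [(a, b) + tuple(rng.choice(d) for d in other_domains)
            for a in domain_a for b in domain_b]

cases = pairwise_for(range(1, 11), range(1, 8), [list(range(1, 6))])
# 10 x 7 = 70 test cases when A and B are chosen for pairwise testing.
```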


Table 5.3: All Combination Strategy

A B C

1 1 1

1 1 2

... ... ...

1 1 5

1 2 1

... ... ... ... ... ...

10 7 5

Random Testing

One of the alternatives to Equivalence Partitioning is random testing. Test cases are simply chosen at random from the entire input domain, without any criteria.

Random testing has proved to be less effective than partition testing methods like Equivalence Partitioning [15]. On the positive side, however, it is dynamic in nature, while all the other combination strategies are static. It is more likely to detect a bug when test cases are generated multiple times, as bugs become immune to static testing methods after some iterations. We elaborate more on random testing, and on how it can be customized so that it is as efficient as Equivalence Partitioning while remaining dynamic in nature, when constructing the data model for our SUT.

5.2 Open Issues

Most combination strategies apply several techniques to reduce the input space of test cases. However, they treat parameters independently of each other. The test cases generated by these strategies are not sufficient to verify the dependencies among parameters. In this section we demonstrate the concept of dependency among parameters with the help of the classical triangle problem.

Suppose we have an application that accepts three input parameters A, B and C as the sides of a triangle. On the basis of the parameter values, the application determines whether the combination of parameters can form a triangle and, if so, the category of the triangle formed, i.e.,

– Scalene: no two sides of the triangle are equal.

– Isosceles: two sides of the triangle are equal.

– Equilateral: all sides are equal.

In order to test such an application, we have to determine the values that parameters A, B and C can take. If the requirement specification explicitly mentions the range of values that a particular parameter can take, we use that range; otherwise, ideally any positive number is a valid value for a parameter (as the sides of a triangle cannot be negative).

Now let us have a look at the interdependencies among the parameters.

– For being a triangle, the sum of any two sides should be greater than the third side, i.e.,

A < B + C and B < A + C and C < A + B

– For being a scalene triangle:

A ≠ B and B ≠ C and A ≠ C

– For being an isosceles triangle:

A = B or B = C or C = A

– For being an equilateral triangle:

A = B and B = C
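The conditions above can be turned directly into an executable test oracle. A minimal sketch; the bounds min_value and max_value stand in for the specification's MinValue and MaxValue, which are not given numerically in the text:

```python
def classify_triangle(a, b, c, min_value=1, max_value=100):
    """Classify a candidate triangle according to the conditions above."""
    # c1-c3: every side must be within its valid range.
    if not all(min_value <= side <= max_value for side in (a, b, c)):
        return "value out of range"
    # c4-c6: each side must be smaller than the sum of the other two.
    if not (a < b + c and b < a + c and c < a + b):
        return "not a triangle"
    # c7-c9: the type follows from the equalities among sides.
    if a == b and b == c:
        return "equilateral"
    if a == b or b == c or c == a:
        return "isosceles"
    return "scalene"
```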

The question that comes next is how to combine the parameters and how many combinations are required to reflect all dependencies. The answer lies in one of the oldest and most efficient ways to represent and analyze complex logical relationships: decision tables. A decision table is a table consisting of conditions, rules and expected outcomes. More on decision tables can be found in [16].


Next, we construct a decision table for this particular application and gradually prune it. Pruning a decision table is the process of excluding the cases that make no sense for testing; which cases these are depends on the application at hand. The initial decision table is shown in Table 5.4. The upper-left quadrant contains the conditions. The upper-right quadrant contains the condition rules for the alternatives, i.e., T means the condition holds and F means it does not. The lower-left quadrant contains the actions to be taken as a result of a combination (all possible outcomes of the application), and the lower-right quadrant contains the action rules, which show the outcome for a specific combination of condition rules.

MinValue and MaxValue represent the range of values of any side of the triangle.

Table 5.4: Initial Decision Table

c1: MinValue ≤ A ≤ MaxValue   T T T T T ...
c2: MinValue ≤ B ≤ MaxValue   T T T T T ...
c3: MinValue ≤ C ≤ MaxValue   T T T T T ...
c4: A < B + C                 T T T T T ...
c5: B < A + C                 T T T T T ...
c6: C < A + B                 T T T T T ...
c7: A = B                     T T T T F ...
c8: A = C                     T T F F T ...
c9: B = C                     T F T F T ...
a1: Value out of range
a2: Not a triangle
a3: Scalene
a4: Isosceles                 × in column 4
a5: Equilateral               × in column 1
a6: Impossible                × in columns 2, 3 and 5

Each column of T and F values forms a test case. As there are nine conditions and each condition can be either True or False, 512 possible combinations of the conditions exist. Some of them are feasible while some are not (e.g., cases where A = C, A = B and B ≠ C all hold). Such combinations can be pruned.

We can reduce the total number of combinations in a number of steps:

– Step 1: If the length of any side (A, B or C) of the triangle is out of its range (say, negative), the (expected) outcome of the system should always be Value out of range. We assign to one constraint (c1, c2 or c3) the value False and to the other two the value True at a time, and put don't care (shown as '-' in the decision table) for the rest of the constraints. If the implementation has not correctly implemented the first three constraints, these three combinations are sufficient to reveal this fact. More on the pruning of combinations can be found in [16].

As a result, the total number of combinations comes down to 67 (2^6 + 3).

– Step 2: The constraints c4, c5 and c6 check whether a combination of sides can form a triangle, and the constraints c7, c8 and c9 check the type of the triangle. If one of the constraints c4, c5 or c6 does not hold, there is no point in checking constraints c7, c8 and c9. Therefore, we again assign to one constraint (c4, c5 or c6) the value False and to the other two the value True at a time, and put don't care for the rest of the constraints. In this way we reduce the total number of combinations to 14 (2^3 + 3 + 3).

The resulting complete table is shown in Table 5.5.

Table 5.5: Final Decision Table

c1: MinValue ≤ A ≤ MaxValue   F T T T T T T T T T T T T T
c2: MinValue ≤ B ≤ MaxValue   T F T T T T T T T T T T T T
c3: MinValue ≤ C ≤ MaxValue   T T F T T T T T T T T T T T
c4: A < B + C                 - - - F T T T T T T T T T T
c5: B < A + C                 - - - T F T T T T T T T T T
c6: C < A + B                 - - - T T F T T T T T T T T
c7: A = B                     - - - - - - T T T T F F F F
c8: A = C                     - - - - - - T T F F T T F F
c9: B = C                     - - - - - - T F T F T F T F
a1: Value out of range        × in columns 1, 2 and 3
a2: Not a triangle            × in columns 4, 5 and 6
a3: Scalene                   × in column 14
a4: Isosceles                 × in columns 10, 12 and 13
a5: Equilateral               × in column 7
a6: Impossible                × in columns 8, 9 and 11

Hence, we have shown how we can prune the decision table from 512 columns to 14, leaving us with a small but effective number of cases that covers the same testing domains as the original table. Each column (except the first, which lists the conditions and actions) of the decision table eventually becomes a test case before the actual testing activity.

Now, once we have the decision table in place, we have to assign values to the parameters such that all constraints of one column are satisfied at a time. In this particular example of triangles, when we tried to manually assign values to parameters A, B and C for all columns of decision table 5.5, it took us several minutes even though the dependencies are of a very primitive nature. Most practical applications, however, have complex dependencies among parameters. Assigning values to parameters for every column of the decision table can consume several hours, depending on the number of parameters, the number of dependencies and the complexity of the dependencies. A graph representing the relationship between the time required to generate test cases and the dependencies is shown in Figure 5.3. The graph was obtained experimentally and might differ slightly for different forms of dependencies (simple or complex) in an application.

Figure 5.3: Test case generation time (y-axis: Time) against number of parameters + number of dependencies (x-axis)

Decision tables can efficiently represent dependencies in any form. However, if the values of the parameters have to be assigned manually, this is a tedious task and is often impossible in cases where there are too many dependencies among parameters (in the triangle case we have only 6 dependencies among the parameters).

In order to exploit the full potential of decision tables, we need to automate the assignment of values to the parameters in such a way that for each column of the decision table (which represents a test case) we have a satisfying assignment for its parameters. This automation makes it possible to handle any number of dependencies among parameters in an application.

Not much work has been done in this area in the literature. The only relevant work can be found in [17]. The authors describe a method (CECIL: Cause-Effect Coverage Incorporating Linear boundaries) for generating test data for problems involving complex linear dependencies between two or more variables. The CECIL method was integrated in the test generation tool set IDATG (Integrated Design and Automated Test case Generation) [18]. In [19], the authors used TEMPPO Designer, a model-based test automation tool which integrates IDATG, to model the electronic interlocking system for railways, and they used the CECIL method to model data. However, the CECIL method described in this work only allows linear dependencies, i.e., 5 * A ≤ B is allowed but C * A ≤ B is not. The dependencies in most practical applications, including the current SUT, are non-linear in nature. Therefore, the proposed method was inadequate for generating test data for our application.

We propose an algorithm that effectively assigns values to the parameters for every column of the decision table for the current SUT. However, before going into the details of the algorithm or of the dependencies among the parameters in the SUT, we describe some concepts required to understand the algorithm.

Data dependent equivalence classes

Suppose we have an application with two parameters (say A and B) and one dependency of the form A = B + 10, where A has valid values between 5 and 15 and B has valid values between 0 and 10, i.e.,

5 ≤ A ≤ 15
0 ≤ B ≤ 10

The decision table for this application is shown in Table 5.6. We deliberately did not write the action rules in the table, as they are application-specific and irrelevant in the present context.

Table 5.6: Decision Table

c1: 5 ≤ A ≤ 15    T T T F
c2: 0 ≤ B ≤ 10    T T F T
c3: A = B + 10    T F - -

Four test cases, corresponding to the four columns of the decision table, have to be generated in order to verify the dependency. For the last two columns of the decision table, we can simply assign values to parameters A and B based on equivalence partitioning, as these columns do not involve the dependency. However, in order to assign values for the first two columns of the decision table, we have to repartition the original equivalence classes of the parameters.

The motivation behind repartitioning the original partition of valid values of a variable is to make sure that if we select any value from the valid partition of one parameter, we are always left with some valid values of the other parameters that can reflect the dependency among them. We do not lose anything by repartitioning, as the new range of valid values remains a subset of the original range of valid values.


Suppose we are constructing a test case for the first column of the decision table, where all conditions are true. Constructing such a combination leads to the conclusion that B + 10 should have the same range as A, because selecting B as 9, which is valid, leads to assigning A the value 19, which is not a valid value of A. Therefore, the ranges of valid values of A and B have to be redefined to reflect this kind of dependency.

If we do not repartition the original equivalence classes of the parameters and select the values randomly from each equivalence class, then in most cases the combination of values will not reflect the required dependency, and in some cases we are not even left with valid values of the parameters that can reflect the dependency. We show how to repartition the original equivalence classes of the parameters through several examples before going into the actual algorithm.

There can be several categories of data dependency. We mainly concentrate on two of them:

1. Equality dependency
2. Inequality dependency

Equality dependency

An equality dependency is a kind of data dependency in which a combination of zero or more parameters has to be equal (or not equal) to another combination of zero or more parameters.

Example 1 Consider the following kind of data dependency:

A = B + 10, where 5 ≤ A ≤ 15 and 0 ≤ B ≤ 10.

In this particular case we have to select a value of A (within its valid range) in such a way that there exists a value for B (within its valid range) that satisfies the equation, and the same holds the other way around (if one selects the value of B first). The minimum value of A derived from the equation is 10 (the minimum value of B plus 10). Similarly, its maximum value is 20. However, the original range of valid values for A is between 5 and 15. If we intersect the two ranges, we get the new range of valid values for parameter A (10 ≤ A ≤ 15). We follow the same procedure for calculating the new range of valid values for parameter B. The invariant used in this process is that the left-hand side of the equation (A) should have the same range of valid values as its right-hand side (B + 10) and vice versa.

(41)

32 Chapter 5. Data Dependency

The reconstruction of the range of valid values of A results in:

10 ≤ A ≤ 15

The range of valid values of B is redefined as:

0 ≤ B ≤ 5

The new ranges for the parameters satisfy the motivation behind the repartitioning, i.e., if we select any value from the valid partition of one parameter, we are always left with some valid values of the other parameter. In case one of the valid ranges (for any parameter) turns out to be empty, the dependency cannot hold in any scenario. A generalized mechanism to repartition the original ranges of valid values for the parameters is given in Algorithm 1.
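For the simple dependency A = B + offset of Example 1, the repartitioning amounts to interval intersection. A minimal sketch (illustrative code, not the thesis implementation):

```python
def repartition_offset(range_a, range_b, offset):
    """Repartition the ranges of A and B for the equality A = B + offset:
    each side keeps only the values for which the other side can match."""
    a_lo, a_hi = range_a
    b_lo, b_hi = range_b
    new_a = (max(a_lo, b_lo + offset), min(a_hi, b_hi + offset))
    new_b = (max(b_lo, a_lo - offset), min(b_hi, a_hi - offset))
    # An empty interval (lo > hi) would mean the dependency can never hold.
    return new_a, new_b

new_a, new_b = repartition_offset((5, 15), (0, 10), 10)
# new_a == (10, 15) and new_b == (0, 5), the ranges derived above.
```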

Example 2 Consider the following type of data dependency:

A = B + C

where A and B have the same original ranges of valid values as in Example 1, and C has the range of valid values:

10 ≤ C ≤ 20

The data dependency requires that the range of valid values of A be equal to the range of valid values of B + C, leading to the reconstruction of the equivalence classes of A, B and C as:

10 ≤ A ≤ 15
0 ≤ B ≤ 5
10 ≤ C ≤ 15

If one selects the value of parameter A as 8 (within its original range), then one is not left with any values of parameters B and C that satisfy the equation A = B + C.

Example 3 Consider the following kind of data dependency:

A + B = C + D

where A, B and C have the same ranges of valid values as before, and D has the range of valid values:

7 ≤ D ≤ 12


Now it is required that A + B have the same range of valid values as C + D, i.e., between 17 and 32, and vice versa, leading to the reconstruction of the valid equivalence classes of A, B, C and D as:

7 ≤ A ≤ 15
2 ≤ B ≤ 10
10 ≤ C ≤ 18
7 ≤ D ≤ 12

This roughly covers the types of dependencies that may occur in any application domain. Two major variations can occur. First, mathematical operators other than '+' may appear, giving rise to a more complex algebraic structure. Second, the data set of the variables may vary: in the above examples the variables range over an interval, but other forms of input can be a single number (a constant), a set of valid values, or booleans.

We can treat a constant as a range whose minimum and maximum values are both the constant itself, and a set of valid values as a range whose minimum is the smallest constant in the set and whose maximum is the largest constant in the set. If the set of valid values contains strings, there is no need to apply the algorithm, as no algebraic operations can be performed on strings. For booleans we do not apply the algorithm either.

Inequality dependency

An inequality dependency is a kind of data dependency in which an algebraic combination of zero or more parameters has to be greater than, less than, greater than or equal to, or less than or equal to another algebraic combination of zero or more parameters.

Example 4 Consider the following form of data dependency:

A ≤ B + 10, where

5 ≤ A ≤ 15
0 ≤ B ≤ 10

In this particular example, A has to be less than or equal to B + 10 and B + 10 has to be greater than or equal to A. Applying the repartitioning and intersecting with the original ranges leaves both ranges unchanged:

5 ≤ A ≤ 15
0 ≤ B ≤ 10

(43)

34 Chapter 5. Data Dependency

Example 5 Consider the following form of data dependency:

A ≥ B + C, where

5 ≤ A ≤ 15
0 ≤ B ≤ 10
10 ≤ C ≤ 20

Applying Algorithm 1 gives rise to the following redefined ranges of valid values:

10 ≤ A ≤ 15
0 ≤ B ≤ 5
10 ≤ C ≤ 15

Example 6 Consider the following form of data dependency:

A + B ≥ C + D, where

5 ≤ A ≤ 15
0 ≤ B ≤ 10
10 ≤ C ≤ 20
7 ≤ D ≤ 12

Again applying Algorithm 1, we obtain the following redefined ranges of valid values:

7 ≤ A ≤ 15
2 ≤ B ≤ 10
10 ≤ C ≤ 18
7 ≤ D ≤ 12

Next we show how we repartition the equivalence classes of Example 3 using Algorithm 1:

The equation E is A + B = C + D.

MinA = 5.

MaxA = 15.

MinB = 0.

(44)

5.2. Open Issues 35

Figure 5.4: Flow diagram of Algorithm 1 (variables with ranges and dependency equations → split each equation into LHS and RHS → define ranges for LHS and RHS → assign a new range to every variable → intersect it with the original range)


Algorithm 1 Data Dependent Equivalence Class
Input: Variables with their ranges
Input: Dependencies in the form of equations

 1: procedure DataDependentEquivalenceClass
 2:   for all equations E do
 3:     for all variables V in E do
 4:       MinV = minimum value of V in its defined range
 5:       MaxV = maximum value of V in its defined range
 6:     end for
 7:     split E into LHS and RHS
 8:     if V is preceded by + or * or any increasing sequence then
 9:       LHSmin = substitute the minimum value of each variable in LHS
10:       LHSmax = substitute the maximum value of each variable in LHS
11:       RHSmin = substitute the minimum value of each variable in RHS
12:       RHSmax = substitute the maximum value of each variable in RHS
13:     else
14:       LHSmin = substitute the maximum value of each variable in LHS
15:       LHSmax = substitute the minimum value of each variable in LHS
16:       RHSmin = substitute the maximum value of each variable in RHS
17:       RHSmax = substitute the minimum value of each variable in RHS
18:     end if
        ▷ LHSmin and LHSmax become the new range of the corresponding RHS and vice versa
19:     LHSmin ≤ RHS ≤ LHSmax
20:     RHSmin ≤ LHS ≤ RHSmax
21:     for all variables V in LHS do
22:       for all variables V1 ≠ V do
23:         if V1 is preceded by + or * or any increasing sequence then
24:           CorrectedMinV = substitute the max value of V1 in RHSmin = LHS
25:           CorrectedMaxV = substitute the min value of V1 in RHSmax = LHS
26:         else
27:           CorrectedMinV = substitute the min value of V1 in RHSmin = LHS
28:           CorrectedMaxV = substitute the max value of V1 in RHSmax = LHS
29:         end if
30:       end for
31:     end for
32:     for all variables V in RHS do
33:       for all variables V1 ≠ V do
34:         if V1 is preceded by + or * or any increasing sequence then
35:           CorrectedMinV = substitute the max value of V1 in LHSmin = RHS
36:           CorrectedMaxV = substitute the min value of V1 in LHSmax = RHS
37:         else
38:           CorrectedMinV = substitute the min value of V1 in LHSmin = RHS
39:           CorrectedMaxV = substitute the max value of V1 in LHSmax = RHS
40:         end if
41:       end for
42:     end for
        ▷ the two ranges of the same variable V are R1 and R2
43:     R3 = R1 ∩ R2
44:     if R3 == NULL then
45:       V does not have any valid value
46:     else
47:       R3 is the new range for V
48:     end if
49:   end for
50: end procedure

MaxB = 10.

MinC = 10.

MaxC = 20.

MinD = 7.

MaxD = 12.

LHS of E is A + B.

RHS of E is C + D.

LHSmin = 5.

LHSmax = 25.

RHSmin = 17.

RHSmax = 32.

Defining the ranges for the RHS and LHS:

RHSmin ≤ LHS ≤ RHSmax, i.e., 17 ≤ A + B ≤ 32
LHSmin ≤ RHS ≤ LHSmax, i.e., 5 ≤ C + D ≤ 25

For the variables in the LHS:

CorrectedMinA = 7 (17 = A + 10)
CorrectedMaxA = 32 (32 = A + 0)
CorrectedMinB = 2 (17 = 15 + B)
CorrectedMaxB = 27 (32 = 5 + B)

For the variables in the RHS:

CorrectedMinC = -7 (5 = C + 12)
CorrectedMaxC = 18 (25 = C + 7)
CorrectedMinD = -15 (5 = 20 + D)
CorrectedMaxD = 15 (25 = 10 + D)

To get the final range, we intersect the original range and the new range of each variable:

MinA = 7.

MaxA = 15.

MinB = 2.

(47)

38 Chapter 5. Data Dependency

MaxB = 10.

MinC = 10.

MaxC = 18.

MinD = 7.

MaxD = 12.
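The '+'-only case of Algorithm 1 can be sketched in code; for an equality between two sums of variables it reproduces the corrected ranges of the worked example above. This is a simplified sketch under the assumption that all terms are additions, not the full algorithm with decreasing operators:

```python
def repartition_sum_equality(lhs_ranges, rhs_ranges):
    """Repartition ranges for sum(LHS variables) = sum(RHS variables):
    bound each variable by solving the opposite side's min/max for it,
    then intersect with its original range."""
    def side_bounds(ranges):
        return (sum(lo for lo, _ in ranges), sum(hi for _, hi in ranges))

    def corrected(ranges, other_min, other_max):
        new_ranges = []
        for i, (lo, hi) in enumerate(ranges):
            # Bounds of the sum of the remaining variables on the same side.
            rest_lo = sum(l for j, (l, _) in enumerate(ranges) if j != i)
            rest_hi = sum(h for j, (_, h) in enumerate(ranges) if j != i)
            new_ranges.append((max(lo, other_min - rest_hi),
                               min(hi, other_max - rest_lo)))
        return new_ranges

    lhs_min, lhs_max = side_bounds(lhs_ranges)
    rhs_min, rhs_max = side_bounds(rhs_ranges)
    return (corrected(lhs_ranges, rhs_min, rhs_max),
            corrected(rhs_ranges, lhs_min, lhs_max))

# Example 3: A + B = C + D with A in [5,15], B in [0,10], C in [10,20], D in [7,12].
lhs, rhs = repartition_sum_equality([(5, 15), (0, 10)], [(10, 20), (7, 12)])
# lhs == [(7, 15), (2, 10)] and rhs == [(10, 18), (7, 12)].
```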

The pseudocode for inequality dependencies, as illustrated by Examples 4, 5 and 6, is not included in Algorithm 1 in order to keep it simple and readable. For such a dependency (containing ≤), lines 19 and 20 of Algorithm 1, which represent an equality dependency, have to be replaced by the following:

LHSmin ≤ LHS ≤ RHSmax
LHSmin ≤ RHS ≤ RHSmax

If the dependency equation contains ≥, the following pseudocode represents it:

RHSmin ≤ LHS ≤ LHSmax
RHSmin ≤ RHS ≤ LHSmax
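The three cases can be sketched as a small helper that returns the (lower, upper) bounds imposed on each side before the per-variable correction runs. The function name `side_bounds` and its signature are illustrative assumptions, not part of Algorithm 1:

```python
# Sketch of the bounds used on lines 19-20 of Algorithm 1, extended
# to the inequality dependencies described above.

def side_bounds(lhs_min, lhs_max, rhs_min, rhs_max, op):
    """Return the (lower, upper) bound pairs imposed on LHS and RHS."""
    if op == '<=':  # LHS <= RHS: both sides bounded by [LHSmin, RHSmax]
        return (lhs_min, rhs_max), (lhs_min, rhs_max)
    if op == '>=':  # LHS >= RHS: both sides bounded by [RHSmin, LHSmax]
        return (rhs_min, lhs_max), (rhs_min, lhs_max)
    # Equality, as in the original lines 19-20: LHS is bounded by the
    # RHS interval and the RHS by the LHS interval.
    return (rhs_min, rhs_max), (lhs_min, lhs_max)

# Using the interval values of the worked example above:
print(side_bounds(5, 25, 17, 32, '<='))  # ((5, 32), (5, 32))
print(side_bounds(5, 25, 17, 32, '=='))  # ((17, 32), (5, 25))
```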

Remark: Algorithm 1 only provides an initial domain of values from which the parameters should be valuated in order to reflect the dependency; the actual assignment of values to the parameters to construct test cases is a separate issue.

The algorithm may be executed several times during the process of assigning actual values to the parameters. We explain the overall working of the process with the help of the dependencies among the parameters in the current SUT.

Figure 5.5 represents the geometrical solution of Algorithm 1 for Example 1. The dependency equation, i.e.,

A = B + 10

has its solutions on the line segment A = B + 10. The initial ranges of parameters A and B are shown by the brown and blue strips, respectively. The dependency between the parameters is only reflected if the values of A and B are selected from the area where the brown strip, the blue strip, and the line segment A = B + 10 intersect. This area is shown as a red segment in Figure 5.5. Any other value selected from the line segment A = B + 10 certainly satisfies the equation A = B + 10; however, such a value lies outside the range of parameter A or parameter B.

If an application has only a few parameters, we can easily solve the problem graphically. However, as the number of parameters grows, the dimension of the graph also grows, and it soon becomes very difficult to solve the problem graphically.

Algorithm 1 solves this problem in a generic way.



Figure 5.5: Geometrical Overview



5.3 Data Model for SUT

In the Image Detection Subsystem, there are more than 20 parameters that together result in a particular state of the subsystem. Most of the parameters are independent of each other and can be tested in isolation. However, there are five parameters, say A, B, C, D and E, that are interdependent. The dependency is of the form:

MinValue < B * (C/100) * D ≤ MaxValue

MinValue and MaxValue are predefined for every valid combination of parameters A and E.

The given valid ranges of the parameters are as follows:

A ∈ {set of values based on a particular configuration}
1 ≤ B ≤ 9999999
100 ≤ C ≤ 10000
2 ≤ D ≤ 32
E ∈ {set of values based on a particular configuration}

Now, in order to generate efficient test cases for the given five parameters, a decision table can be used. The decision table for the SUT can be constructed as shown in Table 5.7.

Table 5.7: Initial Decision Table for SUT

A ∈ {set}                              T  T  T  T  T  ...
1 ≤ B ≤ 9999999                        T  T  T  T  T  ...
100 ≤ C ≤ 10000                        T  T  T  T  T  ...
2 ≤ D ≤ 32                             T  T  T  T  F  ...
E ∈ {set}                              T  T  F  F  T  ...
MinValue < B * (C/100) * D ≤ MaxValue  T  F  T  F  T  ...
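The construction of the rows of the decision table and the evaluation of the last condition can be sketched as follows. MinValue, MaxValue and the sample valuation of B, C and D are placeholder values, since the real limits depend on the chosen (A, E) combination:

```python
from itertools import product

# Each of the six conditions of Table 5.7 is either satisfied (T) or
# violated (F), so enumerating all truth-value assignments yields the
# candidate rows of the decision table.
conditions = [
    "A in set", "1 <= B <= 9999999", "100 <= C <= 10000",
    "2 <= D <= 32", "E in set", "MinValue < B*(C/100)*D <= MaxValue",
]
rows = list(product([True, False], repeat=len(conditions)))
print(len(rows))  # 64

# Evaluating the dependency condition for one candidate valuation,
# with placeholder limits for a hypothetical (A, E) pair:
MinValue, MaxValue = 1, 100000
B, C, D = 50, 200, 4
print(MinValue < B * (C / 100) * D <= MaxValue)  # True (B*(C/100)*D = 400)
```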

5.4 Analysis

The number of test cases derived from the above table is 64 (6 conditions, each with valuation True or False, making 2^6 = 64 test cases). However, this cannot be considered a complete set of test cases, as the last condition involves MinValue and MaxValue, and these values vary for every combination of A and E. Hence, ideally, for every valid A and E combination we should have a separate decision table. So
