
Evaluating the Efficiency of GUI

Ripping for Automated Testing of

Android Applications

Santiago Carrillo

sancarbar@gmail.com

August 1, 2014, 25 pages

Supervisor: Jan van Eijck Host organisation: Minddistrict

Universiteit van Amsterdam

Faculteit der Natuurwetenschappen, Wiskunde en Informatica Master Software Engineering


Contents

Abstract

1 Problem statement and motivation
   1.1 Problem context

2 Research Details
   2.1 Research Goal
   2.2 Research Question
   2.3 Hypothesis

3 Background and context
   3.1 Automated testing of GUI Applications: State of the Art
   3.2 Model-based Testing
   3.3 UI/Application Exerciser Monkey
   3.4 Testing Frameworks Comparison
   3.5 Android Testing
       3.5.1 Testing Structure

4 The Applications Under Test
   4.1 Minddistrict Android App
   4.2 Fun Menu

5 The Empirical Experiment
   5.1 Experiment details
   5.2 Experiment configuration
       5.2.1 GUI Ripping
       5.2.2 Random Events Generator: The Monkey
       5.2.3 Additional Experiments

6 Results
   6.1 Minddistrict App
       6.1.1 GUI Ripping
       6.1.2 Random Events Generator: The Monkey
       6.1.3 Test coverage
   6.2 Test Results of the Additional Experiments
       6.2.1 Minddistrict Android App with errors
       6.2.2 Fun Menu V2 App
       6.2.3 Test coverage

7 Analysis and conclusions
   7.1 Analysis
   7.2 Conclusions
   7.3 Further work
   7.4 Recommendations to Minddistrict


Abstract

The number of mobile devices and applications continues to grow at a fast rate. Android is the leading mobile operating system, with more than 1 billion active devices worldwide [16,20]. In order to improve the quality of Android applications, we must explore and find cost-efficient solutions to automate part of their testing process.

GUI Ripping is a technique that allows a systematic and automated exploration of an application's Graphical User Interface (GUI). During this exploration, test cases are generated and executed. To evaluate GUI Ripping, I conducted an empirical experiment using the GUI Ripper [12] and the Monkey [19]. The efficiency of these testing tools was evaluated by comparing the code coverage and the number of faults detected on two Android applications. The experiment results showed that GUI Ripping was more efficient, achieving higher code coverage and a higher fault detection rate.


Chapter 1

Problem statement and motivation

1.1 Problem context

Android is the dominant mobile operating system in the world, holding 81% of the smartphone market [23]. Android runs not only on smartphones, but also on other devices: tablets, laptops, TVs, smartwatches, car computers and game consoles. Currently, the Google Play Store (the official Android application store) has more than 1.3 million applications [15]. The success of this platform shows the necessity of providing a cost-efficient solution to automate part of the testing process for its applications.

The Android SDK provides a testing tool called the Monkey [19]. This tool generates random user events on the User Interface (UI), as well as system-level events, in order to stress the applications. Although the Monkey can detect some defects, it does not guarantee large code coverage or effectiveness in fault detection. The tool's limited configuration settings give the developer little control for properly testing the applications.

Another technique used for automated testing of event-based applications is GUI Ripping [25]. Using reverse engineering, a workflow model is created from the information and events of the widgets (GUI objects). GUI Ripping automatically traverses the application's GUI, generating and executing test cases as new events are found [12]. This approach makes the exploration of the application more exhaustive, which could increase the code coverage and the number of faults detected.

The host organization for this research is Minddistrict. This company specializes in seamless e-Health. Their main product is a web-based application, divided into modules that help their customers provide e-health services for mental health treatments. Currently, they are developing two mobile applications for Android and iOS to extend their platform services. These applications will support some functionality of the current modules of the web application.

The Minddistrict development team created a large GUI-based test suite for their web application, using the software testing framework Selenium. This framework provides the domain-specific language Selenese for writing tests using a programming language like Java, C#, Groovy, Perl, PHP, Python or Ruby. The Minddistrict development team is very satisfied with the quality of their software and therefore expects the same high quality from their mobile applications. GUI ripping is a technique that can help the organization achieve this goal: by generating automated tests, the test coverage for the mobile applications can be increased. In order to evaluate this tool, the Minddistrict Android Application will be used as Application Under Test (AUT).

To have more data to evaluate the efficiency of the testing tools, an additional application was tested. The second AUT, Fun Menu, was developed for another research project [17]. Although the functionality of this application is not connected to the domain of e-health, Fun Menu presents some characteristics that make it a good test candidate for the experiment. The characteristics and details of the Applications Under Test (AUTs) can be found in Chapter 4.


Chapter 2

Research Details

2.1 Research Goal

Measure and compare the efficiency of GUI Ripping versus the Monkey for automated GUI testing of Android applications.

In order to compare both tools, the following data will be collected when testing the Android applications:

• Coverage: metrics based on the source code that was executed by the testing tool.

• Efficiency: the number of defects detected by the testing tool.

2.2 Research Question

• Primary research question:

How efficient is GUI Ripping compared to the Monkey for automated GUI testing of Android applications?

• Secondary research question 1:

How efficient is GUI Ripping at detecting defects compared to the Monkey?
Metric: number of defects detected.

• Secondary research question 2:

Which tool reaches the highest code coverage for GUI testing: GUI Ripping or the Monkey?

Metrics:

– classes where code was executed / total classes
– methods where code was executed / total methods
– blocks where code was executed / total blocks
– lines of code executed / total lines of code

2.3 Hypothesis

If GUI testing is performed on an Android application using GUI Ripping [12] and the Monkey [18], then:

• Code coverage of the tests will be higher when testing with GUI Ripping.

• The number of bugs detected that originate from GUI events will be higher when testing with GUI Ripping.


Chapter 3

Background and context

3.1 Automated testing of GUI Applications: State of the Art

Graphical User Interfaces allow users to interact with a software system. The user generates different events, such as key presses, mouse clicks or scrolling, and the GUI reacts to these events through method calls or messages. A large part of a system's code is dedicated to the GUI: the GUI of an application can make up as much as 60% of its code [24,29,30,26]. Testing the GUI of an application can help to determine the system's correctness and improve the quality of the software [26,28,27,31].

Existing GUI testing techniques are still insufficient. One of the most used techniques is record-playback. In this technique, the tester interacts with the application by generating events on the UI (e.g. clicking a button, typing some text, dragging a UI object). The events created by the user are recorded in a file which can be replayed later with different inputs. The record-playback test process requires a lot of work from the tester: recording 50 events for different widgets takes 20-30 minutes [28]. The process also relies on the tester's ability to define execution paths that can lead to errors in the system, so the generated test cases are usually few and insufficient [26, 25].

Another alternative for testing the GUI of an application is to release beta versions of the software and let users help with the testing. However, not all software companies have a large community of users willing to test their software for free.

Another approach is to use model-based automated techniques [27, 14]. A model helps to specify, understand and develop a system; test cases can be generated from a model that describes the behaviour of the System Under Test (SUT) [14]. A model of a system gives the tester knowledge about the expected behaviour, and this information can be used to write test cases. However, generating a model of a system can be expensive [27]: many applications do not have any specifications or documentation, and it requires a different set of skills to abstract the functionality of the system and define it in a graphical format.

An event-flow model represents events, event interactions and all possible sequences of events that can be executed on the GUI [27]. Event-flow models provide sufficient information to generate a large set of test cases, executing all the possible combinations of the different events.

The term GUI Ripping was introduced 11 years ago as "a dynamic process in which the software's GUI is automatically 'traversed' by opening all its windows and extracting all their widgets (GUI objects), properties, and values" [28]. The information collected from this process was verified and used by the tester to create the test cases using platform-specific libraries [25,28]. The tests produced by this manual process are not sufficient to properly test the application; the test suite becomes dependent on the resources used for the manual creation of tests [25]. A few years later, GUI test automation was taken a step further: the term GUI ripping was redefined as a "technology [that] takes as input an executing GUI-based application and produces, as output, its workflow model(s)" [25]. A research group at the University of Maryland used the GUI Ripper workflow for model-based testing, to develop techniques to auto-generate a large set of test cases [25]. The GUI Ripper tool Guitar [22,25,31] was created with the following main features:


1. GUI reverse engineering
2. Automated test case generation
3. Automated execution of test cases
4. Support for platform-specific customization
5. Support for addition of new algorithms as plugins
6. Support for integration into other test harnesses and quality assurance workflows

The redefinition of GUI Ripping allowed others to apply this methodology to other technologies. Amalfitano et al. applied the GUI ripping technique to test Android applications by using the high-level automation library Robotium [12,10].

3.2 Model-based Testing

Graphical User Interfaces (GUIs) are crucial in Android applications. Users interact with the application through events on the GUI, e.g. tapping a button, swiping or dragging a UI object, or pinching to zoom. When these events occur, the code of the application is executed. GUI testing is a critical part of evaluating the correctness of an application [27,28,26]. Testing the GUI events manually consumes a lot of time. Test cases can also be written to interact with the application; however, mobile applications change rapidly, so test cases need to be adjusted every time the app is modified.

A model-based testing approach provides a method that allows automatic generation of test cases for the AUT [9,28,33,10]. This process analyses the application's GUI at run-time, interacting with and opening all the different windows of the AUT. During this process the UI objects and their properties are extracted [25]. As a result, two models of the GUI are created:

1. GUI Tree:

This component represents the tree structure of all the different windows/screens of the application and the hierarchical relationships among them. The relationships between the nodes (screens of the AUT) are built based on the application's navigation. The root of the tree is the first screen of the application. The children are all the possible new windows that can be opened by interacting with the UI objects of this first screen.


2. Event-Flow Graph:

This graph represents the interaction among the events of the UI components. The root of the tree is connected to all the events that can be triggered from the application's first screen.

Figure 3.2: Event Flow Graph Minddistrict App, zoomed in (see full-size figure 6.2)

The information obtained from the GUI models can be used to generate different test cases [10,25,28,32].

3.3 UI/Application Exerciser Monkey

The Monkey [19] is an automated testing tool provided as part of the Android SDK. This tool runs on any emulator or real device that uses Android. The tool randomly generates:

• pseudo-random streams of user events: clicks, touches or gestures.

• system-level events: device rotation, sensor state changes, low-battery notifications and other events.

The Monkey has 6 types of events. Each of these event types has different actions to interact with the device (e.g. a key event action can be to press down on a certain area of the screen). The total number of event/action combinations that the Monkey can generate is 27:

Event type    No. of actions
flip          2
key           15
source        2
touch         5
trackball     3

The Monkey is a command-line program that runs on your emulator or device. You can configure the execution with the following categories of options:

• Basic configuration options: define the number of events to generate.
• Event types and frequencies.
• Operational constraints (e.g. restrict the test to a single package).
• Debugging options.

During the execution, the Monkey monitors and handles 3 different types of conditions:

• It blocks attempts to navigate to a package other than the AUT's.
• It reports on the generated events and the execution progress.
• It stops execution and reports the error if:


– The application crashes.

– The application generates an application-not-responding (ANR) error.

The Monkey can be executed from the command line or from a script. It must be launched from the same environment where the emulator/device runs. The basic syntax is:

$ adb shell monkey [options] <event-count>
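As an illustration, a typical invocation might look like the sketch below; the package name and event count are assumptions, the flags come from the Monkey's documented options, and the command is printed rather than executed:

```shell
# Sketch of a Monkey run: 5000 pseudo-random events restricted to a single
# (hypothetical) package, verbose output, and a 100 ms pause between events.
PACKAGE="com.example.app"   # assumed package name
EVENTS=5000
MONKEY_CMD="adb shell monkey -p $PACKAGE -v --throttle 100 $EVENTS"
echo "$MONKEY_CMD"          # printed for illustration instead of executed
```

The `-p` constraint is what keeps the random events from wandering into other installed applications, matching the "operational constraints" category above.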

3.4 Testing Frameworks Comparison

Framework name   Model generation   Model verification   Test case generation   Test oracle        Supported platforms
GUITAR           Rev. eng. (A)      Manual               Model-based (A)        Custom             Multiple
Monkey           None               None                 None                   Supported events   Android

(A) = automated. [31]

3.5 Android Testing

Android applications are structured into 4 different components; these components are the building blocks that define the application's behaviour. Each component serves a specific purpose and provides an entry point for the system to access the application. Each component has its own lifecycle that defines how it is created and destroyed [1].

• Activities: an Activity represents a single screen with a user interface, and is implemented as a subclass of Activity.

• Services: execute long-running operations in the background.

• Content providers: manage the app's shared data. The data can be stored using a file system, an SQLite database, or any other persistent storage accessed by the application.

• Broadcast receivers: react to system-wide broadcast messages. The system notifies the application when different events happen (for example, when the battery is low, or when the user unlocks the device's screen).

Android applications are composed of different Activities. Each Activity is responsible for creating a window for the User Interface (UI). An Activity provides the methods to communicate with the different UI objects and the system services. The window of an application is filled with a parent view that holds the views that compose the UI. Each view defines its own drawing behaviour and handles the different events that happen at the UI level [1]. The Android SDK offers several types of views with a default behaviour. A developer can also extend and create custom views, as well as customize a view's behaviour.


3.5.1 Testing Structure

Android applications are tested using JUnit. The tests are decomposed into methods, each of which tests a particular part of the AUT. These methods are organized into classes known as test suites [6]. The tests rely on the Instrumentation testing framework [18], which provides the methods to interact with the Activities and GUI objects of the AUT.
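As a sketch of how such a suite is launched, an instrumented test package is typically run on a device or emulator with `adb shell am instrument`; the package and runner names below are assumptions, and the command is only printed:

```shell
# Hypothetical command to run an instrumented JUnit suite on a device/emulator.
TEST_PACKAGE="com.example.app.tests"              # assumed test package name
RUNNER="android.test.InstrumentationTestRunner"   # legacy SDK test runner
RUN_CMD="adb shell am instrument -w $TEST_PACKAGE/$RUNNER"
echo "$RUN_CMD"   # printed for illustration instead of executed
```

The `-w` flag keeps the command attached until the whole suite finishes, so the test results are streamed back to the caller.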


Chapter 4

The Applications Under Test

The applications under test were chosen based on common characteristics of an Android application:

• Collects data from the user with different types of input views: EditText, SeekBar, RatingBar, Camera and Media Images.

• Displays collected data in a list view.

• Contains several Activities, and data is shared among those Activities.

• Executes tasks in a background thread using the ExecutorService (a class to execute long-running processes in the background).

Two applications were tested using the Monkey and Android GUI Ripper:

4.1 Minddistrict Android App

The main application under test (AUT) was the Minddistrict Android application. The Minddistrict app allows you to register important moments on the go, anytime, anywhere: just say how you feel, describe the situation you are in, and add a picture if you like. You can access and add your moments from the web application as well. An overview of your moments is shown on your timeline. You need an account to log in [4].

The first version of the Minddistrict Android Application is a commercial product. Although it has not been released officially to clients, the application is already deployed to the Play Store. This application shares several characteristics and design patterns with popular social mobile applications like Facebook, Twitter and Instagram. All these applications have a common behaviour: different types of data are collected from the user (text, pictures or video), stored locally (on the mobile device), and uploaded to a server using a REST API. The data entered by the user is displayed as a list of items, and the user can browse all the entries ordered by time (a timeline). The user's data is associated with the account that is logged in. When the user logs out, the data is removed; when the user logs in, the data is synchronized and downloaded from the server using the REST API.

Because of the different limitations on the mobile devices (storage, processing capacity, battery life, etc), mobile applications rely on a server API to persist the data. Additional processing load is also delegated to the mobile device, in order to minimize the amount of data that needs to be transferred through the cellular network (e.g. re-size large images, compress files). The Minddistrict Android application holds the following common characteristics of Android Applications:

• Collects different types of data: images, text and intensity (a number within a range).

• Displays the information entered by the user in a timeline.


• Uses the system service to take pictures.

• Uses the system service to retrieve images from the device's media folder.

• Executes asynchronous processes on a separate thread.

• Requires user authentication.

• Contains 9 different screens and 10 Activities.

• Uses the Shared Preferences to preserve the data when the application is put in the background or closed.


4.2 Fun Menu

This application allows users to browse a restaurant's menu and make orders. The information of each menu item is displayed: price, description, photo and the rating given by other customers. The user picks the items and the quantity, and the items are immediately added to the order list. The Android application communicates with a REST API in order to submit the orders and authenticate the users. The application was created for research purposes [17]. It has the following main characteristics that make it similar to the Minddistrict Android Application, and a good candidate to evaluate the testing methods:

• It contains 9 different screens and 9 Activities.

• The navigation to complete an action of the application's core functionality (adding items to the order) is 3 screens/Activities deep. This makes the exploration harder for both testing tools when searching for defects.

• The application does not use the background thread. Therefore, a large part of the code can be reached by interacting with the application’s UI.

• Uses the Shared Preferences to preserve the data when the application is put on background or closed.

• Supports screen rotation.

• Collects text data from the user.

• Displays and stores the data collected from the user in a list view.

• Uploads the data to the server via the REST API.


Chapter 5

The Empirical Experiment

5.1 Experiment details

The main application under test (AUT) was the Minddistrict Android application. In order to test the hypothesis, the code coverage of the AUT was measured using the coverage tool Emma [2], which is integrated into the Android SDK. The number of bugs detected was also recorded. The goal of the experiment was to compare the efficiency of the Monkey testing tool versus the GUI Ripping technique. Two main experiment configurations were used to evaluate each testing technique.

The original source code of the Minddistrict App was modified because:

1. The Android GUI Ripper uses a different build tool based on Apache Ant.

2. The Minddistrict Android application uses different frameworks based on annotations and post-compile processes. This created run-time errors and made the application unstable when trying to test it with the Android GUI Ripper tool.

Annotations over classes, methods and variables were removed, and equivalent code was added to achieve the same expected run-time behaviour. The testing tools do not allow the tester to provide custom data for the authentication forms. Therefore, the test user's credentials were hard-coded in the login method of the AUT.

Two additional experiments were conducted to collect more data to evaluate the testing methods:

1. 12 bugs were introduced in different locations of the code of the Minddistrict Android application. The application was re-tested using the two experiment configurations.

2. The Fun Menu Android application [17] was used as the AUT with the two experiment configurations.

5.2 Experiment configuration

5.2.1 GUI Ripping

The tool used was the configurable Android GUI Ripper v1.1 for Java 7 [11]. The experiment was executed using the default settings of the tool, with the application's compiled .apk file as input (see figure 5.1).

The experiment was run on a Windows-based system with the following modifications to the original batch scripts:

• Added an additional step to uninstall previous versions of the AUT.

• Ran the tests in the faster emulator Genymotion [21]. This emulator uses x86 architecture virtualization and offers images of the different Android versions, emulated as virtual machines using VirtualBox.

• Removed additional steps to restart the emulator after each test case execution.

• Removed some execution pauses used to wait for the emulator.

Figure 5.1: GUI Ripper configuration

These modifications made the execution of the Android GUI Ripper more stable and 3 times faster. To measure the test coverage, the generated Test Suite class was manually modified. When Android applications are compiled, the file R.java is generated. This file contains variables with unique identifiers for the application's resources, including the UI objects. The identifier values were hard-coded in the auto-generated test suite; these values were replaced with references to the R.java variables instead. These changes made the generated test suite work independently of the build version of the application.

5.2.2 Random Events Generator: The Monkey

To generate the random events, the Monkey Testing Tool Library [13] was used. This library is a copy of the original Android Monkey tool [19], implemented at a higher level on top of the Android Instrumentation methods [18]. The library lets you add code analytics and customization for the random tests. This second experiment was executed under the following conditions:

• An execution pause of 100 milliseconds was added before the execution of each random event. This pause allowed the application to finish the computations before the next random event was triggered.

• The experiment was executed twice for each configuration, in intervals of one thousand random events (1000-10000, plus 15000).

To automate the execution process, a bash script was created for building the application, running the tests and generating the test coverage report. A sample project can be found in the thesis's public repository [3]; this example uses the open source application Tomdroid [7] as the AUT.
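A minimal sketch of such a script is shown below, assuming an Ant-based project layout and the legacy `ant emma debug install test` coverage workflow; all paths and package names are assumptions, and each command is printed (dry run) rather than executed:

```shell
#!/bin/sh
set -e
CMDS=""
run() { CMDS="$CMDS$* ; "; echo "$@"; }   # dry run: record and print each command

run adb uninstall com.example.app                    # remove any previous AUT build
run ant -f app/build.xml emma debug install          # build + install EMMA-instrumented AUT
run ant -f tests/build.xml emma debug install test   # run instrumented tests with coverage
run adb pull /data/data/com.example.app/files/coverage.ec coverage.ec
run java -cp emma.jar emma report -r html -in coverage.em,coverage.ec
```

In a real script the `run` helper would be dropped so the commands execute directly; the final step turns the pulled coverage data into the HTML report used for the measurements in Chapter 6.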

5.2.3 Additional Experiments

The additional experiments to evaluate the effectiveness in fault detection were executed under the same conditions mentioned before, with the following two variations:

• The code of the Minddistrict application was modified; 12 bugs were introduced in different locations of the application's code.

• The Fun Menu Android application [17] was used as the AUT.


Chapter 6

Results

6.1 Minddistrict App

6.1.1 GUI Ripping

After executing the ripping process on the application, 3 relevant outputs were generated:

1. A GUI tree that shows the different paths of the application (see figure 6.1).

2. An event-flow graph that shows the sequence of executed GUI events [28] (see figure 6.2).

3. A test report:

   Traces processed: 102
   Success:          86
   Fail:             9
   Crash:            1
   Exit:             6


Figure 6.1: GUI Tree Minddistrict App


Figure 6.2: Event Flow Graph Minddistrict App


6.1.2 Random Events Generator: The Monkey

No errors were detected with any of the different experiment setups of the Monkey.

6.1.3 Test coverage

Figure 6.3: Test Coverage Minddistrict App

The GUI Ripper had a coverage of 82% over the classes, 63% over the methods, 57% over the blocks and 57% over the lines of code. The Monkey reached its highest coverage with the execution of 8000 events: 72% over the classes, 58% over the methods, 54% over the blocks and 54% over the lines of code. The GUI Ripper thus had higher coverage on all measurements, with an additional 10% over classes, 5% over methods, and 3% over blocks and lines of code.

6.2 Test Results of the Additional Experiments

6.2.1 Minddistrict Android App with errors

Figure 6.4: Faults detected in the Minddistrict App with introduced errors

The GUI Ripper detected a total of 8 out of the 12 introduced errors. The Monkey's best runs detected 4 bugs, for the executions of 6000, 9000, 10000 and 15000 events. The Monkey therefore found only half as many of the introduced errors as the GUI Ripper.


6.2.2 Fun Menu V2 App

Figure 6.5: Failures detected in the Fun Menu V2 App

The GUI Ripper detected a total of 4 failures. The Monkey's best runs detected 2 failures, for the executions of 7000, 9000, 10000 and 15000 events. Again, the Monkey found only half as many failures as the GUI Ripper.

6.2.3 Test coverage

Figure 6.6: Test Coverage Fun Menu V2 App

The GUI Ripper had a coverage of 55% over the classes, 51% over the methods, 55% over the blocks and 55% over the lines of code. The Monkey reached its highest coverage with the execution of 7000 events: 52% over the classes, 46% over the methods, 35% over the blocks and 39% over the lines of code. The GUI Ripper had higher coverage on all measurements, with an additional 3% over classes, 5% over methods, 20% over blocks and 16% over lines of code.


Chapter 7

Analysis and conclusions

7.1 Analysis

In the first experiment, conducted with the Minddistrict Android Application, the fault detection rate was very low: the Android GUI Ripper detected only 1 failure in the application and the Monkey detected none. These results can be explained by two main reasons:

1. The version of the Minddistrict App used is very stable. During the development phase, several unit tests and functional tests were written to validate the correctness of the application.

2. The application does not support rotation. Several bugs in Android applications originate when Activities are re-created, which happens mainly when the device is rotated and the application supports rotation.

The Minddistrict App has the structure of a very generic mobile application:

• Commonly used UI elements:
  – ListView
  – Grids
  – ImageView and Buttons
  – Forms with different EditText fields and data validations
  – Alert dialogs

• User data is stored in a server application. The data is retrieved and cached after the user is logged in successfully.

• Network operations

• Login screen

• Usage of Android Services

• Device Media Manager and Camera

These characteristics show evidence that GUI Ripping can be a reliable method to support part of the testing process of many Android applications. Additional corner cases and validations should be considered in order to improve the code coverage and the failure detection:

• Android fragmentation

• Testing on real devices


7.2 Conclusions

1. GUI Ripping is a reliable automated process to test commercial Android applications. The experiment showed that the code that handles the events and creates the GUI makes up 50%-80% of the application's total code. These numbers correspond with other findings, where the amount of code composed by the UI was up to 60% [24,29,30,26]. The effectiveness of the GUI ripping method for testing Android applications can depend on the following conditions:

• The percentage of the application's code that corresponds to the GUI.

• The number of lines of code that are executed on the UI thread (which handles the execution of UI-triggered events and is the application's main thread).

• The number of long processes and tasks that run as Services or Asynchronous Tasks (some cannot be tested by the GUI Ripping method).

2. GUI Ripper was more effective than the Monkey [12]:

• Code coverage was 10% higher on the Minddistrict Android Application, and 16% higher over the lines of code on the Fun Menu Application.

• Detection rate was 2 times higher on the version of the Minddistrict App with bugs added, and on the Fun Menu App.

3. The test suite generated by the GUI Ripping process cannot be 100% reused. If the application changes, several test cases will fail; the process relies on the application's data and state.

4. The generated test suite does not work on some real devices, such as Samsung's (this manufacturer holds a 65% share of all Android devices [5]). The Action Bar menu used on Samsung devices is not standard; test cases that interact with this UI component will fail.

5. The generated test suite contains code duplication. A post-processing step could be added to detect and extract repeated steps into methods. This can help to detect test cases that execute the same part of the code; duplicated test cases can then be removed to reduce the execution time of the test suite.

6. Automated tests are not yet sufficient to fully test an Android application. However, GUI Ripping is a great solution for detecting bugs at an early stage of an application. The results correspond with other findings where different applications were tested [25,12].

7. The GUI Ripping testing method could be integrated into large applications that use a continuous integration system, to contribute to the quality assurance process.

7.3 Further work

GUI Ripper can be used with any application that relies on a GUI [25,12]. A new version of the tool could be built on GUI crawling frameworks such as Selenium. These frameworks are platform independent; their implementation is based on the WebDriver protocol [8], which allows introspecting and querying all the elements of the UI. The data is sent as a JSON object containing the information of all UI objects, and it is updated whenever a UI change happens in the application. The data returned by the driver provides sufficient information to build the models and execute the GUI Ripping algorithm. This approach allows the creation of a generic ripping algorithm that can explore and test not only Android applications but also other popular platforms such as iOS.
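As a sketch of this idea, the fragment below walks a JSON UI snapshot, such as a WebDriver-style driver might return, and collects the events a generic ripper could fire next. The field names and the snapshot's shape are assumptions chosen for illustration, not the actual WebDriver wire format:

```python
import json

# Hypothetical JSON snapshot of the UI, as a driver might return it.
# The field names ("id", "clickable", "children") are illustrative only.
SNAPSHOT = json.loads("""
{"id": "root", "clickable": false, "children": [
  {"id": "login_button", "clickable": true, "children": []},
  {"id": "menu", "clickable": true, "children": [
    {"id": "settings_item", "clickable": true, "children": []}
  ]}
]}
""")

def fireable_events(node):
    """Walk the UI tree and collect the ids of all clickable elements.

    In a real ripper, each id would become a candidate event to fire;
    firing it yields a new snapshot, and unvisited states are queued."""
    events = []
    if node.get("clickable"):
        events.append(node["id"])
    for child in node.get("children", []):
        events.extend(fireable_events(child))
    return events

print(fireable_events(SNAPSHOT))  # ['login_button', 'menu', 'settings_item']
```

Because the snapshot is plain JSON, the same traversal works regardless of whether the driver behind it is talking to Android, iOS, or a web page, which is what makes the ripping algorithm generic.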

Recommendations to improve the Android GUI Ripper tool:

1. Access the UI elements in the generated test suite by using the variables from the R.java file. This allows running the generated test suite against any compiled version of the AUT.


2. Implement a platform-independent version of the Ripper based on Java that uses a scripting language such as Python, avoiding the dependency on operating-system-specific scripts.

3. Use Genymotion as the emulator to reduce the execution time of the GUI Ripping process.

4. Add an additional step to the installation process to uninstall any previous version of the AUT.

7.4 Recommendations to Minddistrict

In order to improve the testing process for the mobile applications, two additional testing processes could be added to the continuous integration system:

1. GUI Ripping is a testing technique that can complement the testing process. The constant changes to the application require additional work for testing, rewriting tests that failed, and creating new tests; GUI Ripping can help test the system against these changes efficiently. However, the Android GUI Ripper tool is not yet ready for use in a commercial environment: the current version only supports Windows systems and is not very stable. Research on GUI Ripping is still being conducted, so it will be important to consider such a tool as a future addition to the testing process.

2. The Monkey testing tool could also complement the testing process, although its coverage and fault detection rate are lower. This tool can help stress the application and detect failures. The Monkey is part of the Android SDK, ready to use, and fairly easy to configure. A daily random-testing task could be added to the testing process to help detect failures caused by constant changes to the application's code. In order to improve the effectiveness of this tool, it is recommended to:

• Use between 10,000 and 15,000 random events.

• Configure the tool to add a small pause between the execution of consecutive random events.
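For illustration, both recommendations can be combined in a single Monkey invocation (the package name is taken from the Minddistrict Play Store listing cited in the bibliography; the 300 ms throttle and the seed are assumed values, not tuned figures):

```shell
# Fire 10000 random events against the app, pausing 300 ms between
# events (--throttle) and fixing the seed (-s) so the run is repeatable.
adb shell monkey -p com.minddistrict.android -s 42 --throttle 300 -v 10000
```

Fixing the seed makes a crashing event sequence reproducible, which is useful when the daily run is triggered from the continuous integration system.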


Bibliography

[1] Application Fundamentals, Android documentation. URL: https://developer.android.com/guide/components/fundamentals.html.

[2] EMMA: a free Java code coverage tool. URL: http://emma.sourceforge.net/.

[3] Master thesis public repository. URL: https://github.com/sancarbar/master-thesis.

[4] Minddistrict Android app. URL: https://play.google.com/store/apps/details?id=com.minddistrict.android&hl=en.

[5] Samsung remains king of the Android market with 65% share of all Android devices, Localytics. URL: http://www.localytics.com/blog/2014/samsung-remains-king-of-the-android-market/.

[6] Testing Fundamentals, Android documentation. URL: http://developer.android.com/tools/testing/testing_android.html.

[7] Tomdroid - Tomboy note-taking on Android. URL: https://launchpad.net/tomdroid.

[8] WebDriver, W3C. URL: https://dvcs.w3.org/hg/webdriver/raw-file/tip/webdriver-spec.html.

[9] Pekka Aho, Matias Suarez, and Atif M Memon. Industrial adoption of automatically extracted GUI models for testing.

[10] Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana, Salvatore De Carmine, and Gennaro Imparato. A toolset for GUI testing of Android applications. In Software Maintenance (ICSM), 2012 28th IEEE International Conference on, pages 650–653. IEEE, 2012.

[11] Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana, Salvatore De Carmine, and Atif M Memon. GUIRipperWiki. URL: http://wpage.unina.it/ptramont/GUIRipperWiki.htm.

[12] Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana, Salvatore De Carmine, and Atif M Memon. Using GUI ripping for automated testing of Android applications. In Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering, pages 258–261. ACM, 2012.

[13] androidmonkey. Android Monkey library. URL: https://code.google.com/p/androidmonkey/.

[14] Larry Apfelbaum and John Doyle. Model based testing. In Software Quality Week Conference, pages 296–300, 1997.

[15] AppBrain. Number of Android applications. URL: http://www.appbrain.com/stats/free-and-paid-android-applications.

[16] Business Insider. Google: We have 1 billion monthly active Android users, June 2014. URL: http://www.businessinsider.com/google-we-have-1-billion-monthly-active-android-users-2014-6.


[18] Android Documentation. Android instrumentation. URL: http://developer.android.com/tools/testing/testing_android.html#Instrumentation.

[19] Android Documentation. Android Monkey testing tool. URL: http://developer.android.com/tools/help/monkey.html.

[20] Gartner. Gartner says worldwide traditional PC, tablet, ultramobile and mobile phone shipments to grow 4.2 percent in 2014, July 2014. URL: http://www.gartner.com/newsroom/id/2791017.

[21] Genymotion. The fastest Android emulator. URL: http://www.genymotion.com/.

[22] Daniel R Hackner and Atif M Memon. Test case generator for GUITAR. In Companion of the 30th International Conference on Software Engineering, pages 959–960. ACM, 2008.

[23] IDC. Android pushes past 80% market share while Windows Phone shipments leap 156.0% year over year in the third quarter, November 2013. URL: http://www.idc.com/getdoc.jsp?containerId=prUS24442013.

[24] Rohit Mahajan and Ben Shneiderman. Visual and textual consistency checking tools for graphical user interfaces. Software Engineering, IEEE Transactions on, 23(11):722–735, 1997.

[25] Atif Memon, Ishan Banerjee, Bao N Nguyen, and Bryan Robbins. The first decade of GUI ripping: Extensions, applications, and broader impacts. In Reverse Engineering (WCRE), 2013 20th Working Conference on, pages 11–20. IEEE, 2013.

[26] Atif M Memon. GUI testing: Pitfalls and process. Computer, 35(8):87–88, 2002.

[27] Atif M Memon. An event-flow model of GUI-based applications for testing. Software Testing, Verification and Reliability, 17(3):137–157, 2007.

[28] Atif M Memon, Ishan Banerjee, and Adithya Nagarajan. GUI ripping: Reverse engineering of graphical user interfaces for testing. In WCRE, volume 3, page 260, 2003.

[29] Brad A Myers. User interface software tools. ACM Transactions on Computer-Human Interaction (TOCHI), 2(1):64–103, 1995.

[30] Brad A Myers and Dan R Olsen Jr. User interface tools. In Conference companion on Human factors in computing systems, pages 421–422. ACM, 1994.

[31] Bao N Nguyen, Bryan Robbins, Ishan Banerjee, and Atif Memon. GUITAR: an innovative tool for automated testing of GUI-driven software. Automated Software Engineering, 21(1):65–105, 2014.

[32] Tommi Takala, Mika Katara, and Julian Harty. Experiences of system-level model-based GUI testing of an Android application. In Software Testing, Verification and Validation (ICST), 2011 IEEE Fourth International Conference on, pages 377–386. IEEE, 2011.

[33] Wei Yang, Mukul R Prasad, and Tao Xie. A grey-box approach for automated GUI-model generation of mobile applications. In Fundamental Approaches to Software Engineering, pages 250–265. Springer, 2013.
