
SonarQube Lua Analyzer

for Code Smell Detection

Fatemeh Ahmadi

fati.ahmadi66@gmail.com, July 17, 2017, 73 pages

Supervisor: Clemens Grelck


Universiteit van Amsterdam

Faculteit der Natuurwetenschappen, Wiskunde en Informatica

Master Software Engineering


Contents

Abstract
1. Introduction
1.2 Motivation
1.2.1 Personal Motivation
1.2.2 Research Motivation
1.3 Research Problem
1.3.1 Problem Analysis
1.3.2 Proposed Solution
1.4 Research questions
1.5 Research methods
1.6 Outline
2. The Lua Programming Language
2.1 Types and Values
2.2 Table constructor
2.3 Function
2.4 Coroutines
2.5 TailCall
3. Background
3.1 ProRail
3.2 SonarQube
3.3 SonarQube architecture
3.4 Quality Model
3.4.1 ISO/IEC 25010
3.5 SQALE
3.5.1 Introduction
3.5.2 SQALE Quality Model
3.5.3 The SQALE Analysis Model
3.6 SonarQube Quality Model
3.6.1 SonarQube Rule definition
3.6.2 The remediation cost
3.6.3 How Rules Measure the Source Code?
3.6.4 Technical Debt
3.6.5 Technical Debt Ratio
3.6.6 SQALE Ratio
3.7 SonarQube's Seven Axes of Quality
3.7.1 Comments
3.7.2 Duplications
3.7.3 Complexity
3.7.4 Code Coverage
3.7.5 Coding Rule
3.7.6 Bugs
3.8 Summary
4. Code Smells
4.1 Introduction
4.2 Code Smell in Lua
4.3 Lua Coding Rule
4.3.1 Lua Rule Definition
5.1 Requirements
5.2 Source Code Metric
5.2.1 Source Code Metric Definition
5.2.2 LuaMetric
5.2.3 Source Code Metric Collection
5.3 Implementation of the SonarQube Lua Analyzer
5.3.1 Grammar
5.3.2 LuaAstScanner
5.3.3 Measuring Unit Size
5.4 Visitors
5.4.1 Visitor to apply the rules
5.4.2 Visitors to compute the metrics
5.5 Plugin Architecture
5.5.1 Lua Plugin
5.5.2 CoberturaReportParser class
5.5.3 Continuous Integration
6. Result
6.1 Introduction
6.2 Size
6.3 Complexity
6.3.1 Average CC in function
6.3.2 Average CC in file
6.3.3 Average CC in Class
6.4 Documentation
6.5 Issues
6.6 Duplication
6.6.1 Duplicated Blocks
6.6.2 Duplicated Lines
6.7 Maintainability
6.8 Rules
6.9 Test Coverage
7. Discussion
7.1 Discussion SonarQube Quality model
7.2 Discussion Function call complexity
7.3 Discussion Functions
7.4 Discussion Cyclomatic Complexity
7.5 Discussion Severity
8. Related Work
9. Conclusions
9.1 Summary
9.2 Contribution
9.3 Evaluation
9.4 Evaluation of the Results
9.5 Future Work


Abstract

The aim of this project is to create a tool that analyzes Lua source code within SonarQube, an open source platform for continuous inspection of code quality, and gives Lua programmers feedback about the quality of their software system. Lua is a scripting language widely used in the gaming industry as well as in other industrial applications, yet the number of tools for analyzing Lua code is relatively limited. SonarQube Lua Analyzer, the tool created in this project, collects different code metrics, calculates them, and assigns a value to each. The analyzer applies coding rules to a Lua project's code in order to detect Code Smells and present them in a report that can be browsed via the SonarQube user interface. Based on the severity of each issue, its maintenance cost is calculated and displayed. Automatic detection of Code Smells helps Lua programmers decide when and where to refactor the source code, and introducing such a tool is expected to have a positive impact on the quality of the system. Technical quality is measured using the SonarQube quality model, which is based on the SQALE quality model and meets the ISO/IEC 25010 standard [1]. The Lua analyzer is implemented in Java and provides features compatible with those of other SonarQube language analyzers, such as the analyzers for Python and Java. The SonarQube Lua Analyzer is an open source tool that has been successfully deployed and integrated into the SonarQube Update Center.


Chapter 1

1. Introduction

Detecting significant issues and removing them from the source code are vital tasks in the maintenance phase of the software life cycle, as software programs have become difficult to understand and evaluate. Consequently, the market now offers multiple analysis tools for improving software quality and visualizing issues. This project was initiated to analyze the source code of a Lua project with the goal of improving its readability and understandability. In this chapter, we introduce the Lua language in general; chapter two covers functionality such as tables, coroutines, and tail calls. We discuss only the functionality that has an impact on complexity and understandability; other aspects of the Lua language relevant to this thesis are discussed in more detail in the related chapters.

The Lua programming language was created in 1993 by Roberto Ierusalimschy, Luiz Henrique de Figueiredo, and Waldemar Celes, members of the Computer Graphics Technology Group (Tecgraf) at the Pontifical Catholic University of Rio de Janeiro, Brazil [83]. Lua was designed as a dynamic general-purpose language intended to be embedded in a host application. It is dynamic in that it performs most common behavior at run time, in contrast to static programming languages, which perform it during compilation.

Lua does not have a notion of "main". It is provided as a library of C functions that can be linked into the host application. The host application can read and write Lua variables and run pieces of Lua code by invoking functions in the library, and it can register C functions to be called from Lua code. Lua is simple, powerful, fast, portable, and lightweight [81]. Through the table mechanism, Lua enables programmers to combine procedural features with powerful data-description facilities [82].

1.2 Motivation

1.2.1 Personal Motivation

While working as an IT engineer in different software companies, I have personally experienced software that was difficult to understand and develop further. Lack of software documentation is commonplace: most of the time the original developer had already left the organization without providing any technical documentation for their colleagues or new developers. Some developers felt their code had high quality and did not warrant review. Sometimes a programmer would abandon a project because fellow programmers thought their own code was superior. The creation of an independent source code analyzer would prevent such problems; an unbiased judgment would hedge against team discord.

At the time of this project I was present once a week as an IT engineer at btd-planner. While I was looking for a project for my Master's thesis, a btd-planner architect suggested that I focus on a "Lua analyzer". That was the launching point of this project.

1.2.2 Research Motivation

Studies have shown that a software product that does not change continuously cannot keep its competitive edge [4]. An example is the app Killer [5], created at the end of the 1980s. The app was very popular at first, but after a while the company could only release new features rarely: adding many features had made the code messy, so it was difficult for the programmers to read and understand, and the code kept getting worse. As a result, the developers were no longer able to fix the bugs; bugs reappeared after each release and crashes increased. Killer was no longer the favorite app of professionals, and very soon the company went out of business. Adding new features to a program, to make the product conform to the requirements of the business or client, makes the software structure more complex and can split the code into two or more pieces [5]. Issues of this kind motivated Lehman's laws of software evolution (Lehman, 1980) [3]. Refactoring is a technique that can be used to reduce complexity; its aim is to improve the software structure without changing the software's observable behavior [4]. Refactoring enhances the readability and understandability of the source code, which is the main characteristic of maintainable code.

Fowler and Beck have identified a list of bad smells that can exist in source code [6]; these can be removed through refactoring. For this project, the Code Smells are identified based on the needs of the Lua programmers on the btd-planner project.

1.3 Research Problem

Code Smells are bad coding practices. They are poor structures that should be removed from the source code by means of refactoring, to improve the understandability of the source code [5]. One example of such a structure is a "long parameter list". A function with a long parameter list is difficult to understand and test, as it suggests the function does many things, and more test cases are required to cover all paths through the function.
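As a concrete illustration (a small sketch of our own, not taken from the thesis; the function names are hypothetical), a common Lua idiom for avoiding a long parameter list is to pass a single table of named fields:

```lua
-- smell: a long parameter list is hard to read and easy to call incorrectly
local function createWindowLong (x, y, width, height, title, resizable)
  return { x = x, y = y, width = width, height = height,
           title = title, resizable = resizable }
end

-- refactored: one table argument with named fields
local function createWindow (opts)
  return { x = opts.x, y = opts.y, width = opts.width, height = opts.height,
           title = opts.title, resizable = opts.resizable }
end

-- the call site now documents itself; Lua allows omitting the parentheses
-- when the sole argument is a table constructor
local w = createWindow{ x = 0, y = 0, width = 640, height = 480,
                        title = "demo", resizable = false }
print(w.title) --> demo
```

The refactored call site names every value, so a reader no longer has to remember the parameter order.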

In the next section, we discuss bad coding practices in more detail with regard to the Lua language. We limit the scope of the thesis to the btd-planner project mentioned in the background.

1.3.1 Problem Analysis

Code readability and understandability are among the key challenges that the Lua programmers of btd-planner face at the time of this project. For almost every piece of code, the programmers have to write extensive documentation to make the code clear to their colleagues; there is almost more documentation than code in the project. One of the items that can have an impact on complexity is the table. Tables of large size are difficult to understand and maintain: a large table may hold a lot of data or many methods and responsibilities, and a large object contains a lot of code, which can lead to duplicated and chaotic code [6]. The following example shows how a table can become difficult and complex to understand as it grows. We create a table and store its reference in 'Test'.

1. Test = {}
2. foo = function (x, y) return x + y end
3. Test[foo] = function (x, y) return x + y end
4. goo = function () print("table can even contain functions") end
5. Test[goo] = function () print("table can even contain functions") end
6. Test = {
7.   foo = function (x, y) return x + y end,
8.   goo = function () print("table can even contain functions") end
9. }


In the code listing above, line 1 creates a table called Test. At line 2 a function is created and assigned to the variable 'foo'. At line 3 the function stored in 'foo' is added to the table, where it now serves as a key of the table Test. At line 4 another function is created and stored in the variable 'goo', and at line 5 it is added to the table. Lines 6 to 9 show the same table written with a single constructor. We can store as many elements as we need in the table Test, but if we store many items and functions, as we are able to do, the table becomes very difficult to read and understand. We have to keep it small to prevent further complexity.

In the next section, we discuss how to solve these kinds of problems. Chapter four further discusses the Code Smells that cause complexity in Lua code and need to be checked.

1.3.2 Proposed Solution

To solve the problems mentioned in the previous section and improve code quality, the btd-planner team has to refactor the code and remove Code Smells, in order to reduce complexity, make the code easier to understand, and reduce future maintenance cost. But it is difficult for developers to determine which parts of the code should be refactored. The solution is an engineering tool able to analyze the code and detect Code Smells. Since ProRail uses SonarQube to analyze code in other languages, the btd-planner team is interested in a SonarQube analyzer able to perform static code analysis on Lua source code.

1.4 Research questions

From the numerous points highlighted in the preceding sections that require further investigation, this thesis aims to answer the following research questions:

● RQ1 How can we design and implement a Lua analyzer which can be run within SonarQube to analyze the Lua source code?

● RQ2 Can the Lua analyzer provide the same features as other SonarQube language analyzers, such as Python?

● RQ3 Can different Code Smells be detected by the analyzer? How?

● RQ4 How can we measure the maintenance cost of Code Smells?

1.5 Research methods

The research questions in section 1.4 are answered by means of empirical research and static analysis. The data supporting the research are gathered from source code management systems of open source projects, as well as one actual project provided by the btd-planner team at ProRail.

An extensive literature study of the Lua language and the SonarQube architecture was performed. In addition, Lua programmers from the btd-planner project team were interviewed about what constitutes complexity in Lua and which bad Code Smells they want removed from the source code. To validate the results, we cover each piece of the analyzer code with a unit test. The analyzer was tested locally at ProRail for one week. After that, a 72-hour pre-release was planned, during which the analyzer was tested by the SonarQube community; the feedback and issues reported by users were then addressed. Once released, the analyzer is added to the SonarQube plugin library for public use.


To answer RQ1 and RQ2, a Lua analyzer is designed and implemented based on the libraries and documentation offered by SonarQube [9][10]. After creating the Lua analyzer, the plugin is published to the SonarQube Update Center. We analyze several projects and show that the features provided by the analyzer are similar to those of other SonarQube language analyzers, such as the one for Python.

Regarding RQ3 and RQ4: to calculate the metrics and detect Code Smells, we first create an AST (Abstract Syntax Tree) using the SonarQube framework. The AST consists of nodes that can be visited by visitors, which build up the metrics. Visitors are classes that perform operations on other classes without changing those classes' algorithms. Calculating the code metrics and Code Smells for each project is done by a Sensor, a Java class located in the Lua plugin that executes the visitors to gather the metrics from the source code. After calculating the metrics for a project, the Sensor sends the final report to the compute engine component of the SonarQube server (see SonarQube Architecture). In the compute engine component, ratings are assigned to the metrics and the result is sent to the SonarQube web server.

Interviews were arranged with Lua programmers of the btd-planner team to create a checklist for checking the code with regard to Code Smells. For each check on that checklist, a rule is defined based on the SonarQube quality model, which is based on the SQALE quality model [2]. SQALE uses rules as the lowest-level component of the quality hierarchy. A rule measures characteristics of the source code and then determines whether the result of the measurement is correct or incorrect; a bad result constitutes a 'rule violation'. To calculate the maintenance cost, we define a SQALE remediation function at the rule level and pass a unit of time as its parameter.

1.6 Outline

The thesis is structured as follows. The next chapter gives information about the Lua programming language. Chapter 3 provides background on SonarQube and its architecture and on the quality models used in the context of the research. Chapter 4 covers Code Smells and coding rules in Lua. Chapter 5 presents the SonarQube Lua analyzer. Chapter 6 describes the results and the features provided by the SonarQube Lua analyzer. Chapter 7 provides a discussion of the preceding chapters, chapter 8 reviews related work, and chapter 9 presents the conclusions and future work of the thesis.


Chapter 2

2. The Lua Programming Language

In the previous chapter, we discussed the history of the Lua programming language. In this chapter we discuss the Lua functionality that we encounter in this thesis.

2.1 Types and Values

Lua is a dynamically typed language: the language has no type definitions, and each value carries its own type. Lua supports eight types: nil, boolean, number, string, userdata, function, thread, and table.

Nil: a type with a single value. Lua uses nil as a kind of non-value, representing the absence of a useful value.

Boolean: a type with two values, true and false.

Number: represents real numbers as double-precision floating-point values. Lua has no dedicated integer type; nevertheless, doubles can represent any 32-bit integer exactly, without rounding problems.

Strings: sequences of characters. Strings in Lua may contain characters of any numeric value, including embedded zeros.
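As a quick illustration of dynamic typing (a minimal sketch of our own, not from the thesis), the built-in `type` function reports the type of any value at run time, and a variable may hold values of different types over its lifetime:

```lua
-- type() returns the name of a value's type as a string
print(type(nil))      --> nil
print(type(true))     --> boolean
print(type(42.5))     --> number
print(type("hello"))  --> string
print(type({}))       --> table
print(type(print))    --> function

-- a variable has no fixed type; it simply holds a value of some type
local v = 10          -- v currently holds a number
v = "now a string"    -- the same variable may later hold a string
print(type(v))        --> string
```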

2.2 Table Constructor

Constructors are expressions that create and initialize tables. The simplest is the empty constructor, {}, which creates an empty table. In Lua, constructors are also used to initialize arrays (also called sequences or lists). For example, the following statement builds the table colors with a constructor:

colors = {"red", "green", "blue"}

This initializes colors[1] with the string "red": the first element always has index 1. In a table constructor, the value of each element can be any kind of expression, not only a constant value.
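To illustrate that last point (a small sketch of our own, not from the thesis), constructor fields may be computed from arbitrary expressions, which are evaluated when the constructor runs:

```lua
local base = 8
-- each field value below is an expression, not a constant
local sizes = {
  base,               -- a variable
  base * 2,           -- an arithmetic expression
  math.max(base, 10)  -- a function call
}
print(sizes[1], sizes[2], sizes[3]) --> 8  16  10
```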

2.3 Function

A function in Lua is a first-class value: it can be passed as an argument to other functions, stored in a variable, nested, and returned as a result [46]. Lua can call functions written in Lua as well as functions written in C; the entire standard library is written in C. It contains functions for string manipulation, table manipulation, I/O, access to basic operating system facilities, mathematical functions, and debugging. Functions are created with the function keyword, as in function (args) body end.

The following example shows a simple function that receives a single argument and returns its value plus two:
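The listing itself appears to have been lost to a page break in the source; a minimal version matching the description (the name `addTwo` is our own) would be:

```lua
-- a simple function: takes one argument and returns that value plus two
local addTwo = function (x)
  return x + 2
end

print(addTwo(3)) --> 5
```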


2.4 Coroutines

Another Lua feature worth mentioning is the concept of coroutines. A coroutine in Lua represents an independent thread of execution [52]. A coroutine is constructed with coroutine.create, which takes a single argument, a function; it returns a new coroutine, which is typically assigned to a variable. The following code snippet shows how to create a coroutine:

co = coroutine.create(function ()
  print("test")
end)

The snippet above creates a coroutine from a function and stores it in the variable co. A coroutine has three statuses: suspended, running, and dead. A newly created coroutine automatically has the status suspended. The status of a coroutine can be inspected with the function coroutine.status:

print(coroutine.status(co)) --> suspended

We can use coroutine.resume to change the status of the coroutine from suspended to running. The following code snippet shows how to start the coroutine:

coroutine.resume(co) --> test

After the coroutine's body has run to completion, its status changes to dead.
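For completeness, the whole lifecycle can be seen in one self-contained sketch (our own combination of the snippets above):

```lua
-- create a coroutine; it starts out suspended
local co = coroutine.create(function ()
  print("test")
end)
assert(coroutine.status(co) == "suspended")

-- resume runs the body: here it prints "test" and then finishes
coroutine.resume(co)

-- once the body has returned, the coroutine is dead and cannot be resumed
assert(coroutine.status(co) == "dead")
```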

2.5 TailCall

A tail call happens when a function calls another function as its last action; it is a kind of goto. In Lua, only a call of the form return f(args) is a tail call. The following code snippet shows a tail call:

function test (x)
  if x > 0 then
    return test (x - 1)
  end
end
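Because a tail call reuses the caller's stack frame, such a function can recurse to any depth without overflowing the stack. The following sketch (our own illustration, not from the thesis) counts down from a very large number:

```lua
-- a properly tail-recursive countdown: each recursive call has the
-- form `return countdown(...)`, so the stack does not grow
local function countdown(x)
  if x > 0 then
    return countdown(x - 1)  -- tail call: the current frame is reused
  end
  return "done"
end

-- a million levels of recursion would overflow the stack without
-- proper tail calls; in Lua this runs in constant stack space
print(countdown(1000000)) --> done
```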


Chapter 3

3. Background

3.1 ProRail

ProRail is responsible for the rail network in the Netherlands: in conjunction with other partners, it provides the network and stations, including construction, maintenance, management, and security. It independently allocates track capacity to rail service providers and is accountable for the maintenance of existing tracks, switches, signals, and crossings [13]. In order to provide customers with the best service quality, ProRail has to review code, either manually or with monitoring tools; code analysis enhances the quality and sustainability of the delivered product. Manual code review, like peer review, is both expensive and time-consuming, so ProRail has adopted automated tools, and SonarQube is one of ProRail's most-used monitoring instruments. At the time of this thesis project, SonarQube did not support Lua, hence the need for a Lua analyzer.

btd-planner (Buiten Vrije Tijd planner) is a ProRail project that offers stakeholders a program in which they can view and book train availability. The architecture of the btd-planner project is based on three languages: C#, Javascript, and Lua. The project uses SonarQube to analyze the C# and Javascript code, but the part written in Lua could not be analyzed, since no Lua analyzer existed for SonarQube.

3.2 SonarQube

SonarQube is an open source platform for managing technical code quality. It measures the technical quality of a project's source code and provides summary reports for projects. SonarQube can detect issues in the code ranging from minor styling differences to critical design errors. It drills down into the source code layer by layer, moving from the module level down to the class level [20]. SonarQube is published and licensed under the GNU LGPL (GNU Lesser General Public License) v3 [19]. The SonarQube platform is developed in Java, and more than 24 languages are supported by adding plugins to SonarQube.

SonarQube can perform both static and dynamic code analysis. Static code analysis examines the source code without executing the program in order to compute metrics [20]; it also provides programmers with various warnings, which may relate to software design, such as coupling between modules, or to poor code style, such as a long parameter list in a method [21].

Dynamic code analysis involves running the program in a runtime environment. One example is unit test line coverage, a metric that calculates the percentage of lines of code covered by unit tests [31].


3.3 SonarQube architecture

The SonarQube platform consists of four components [23]:

1. The Scanner is responsible for analyzing the source code and running the plugins.

2. The Sonar-Plugins are hosted in the SonarQube Server, where all the language plugins are located.

3. The SonarQube database stores the project information after the compute engine calculates the final result; this data can then be sent to the web server and copied to the database.

4. The SonarQube Server:

● Compute engine: the result of the scanner is sent to the compute engine component, located in the SonarQube server. It is responsible for calculating the measurements for each project and, where necessary, giving the metrics a rating. The result can be displayed to the user in the user interface of the SonarQube web server.

● The SonarQube web server is the part of the SonarQube server responsible for displaying the results to users in the SonarQube user interface.

● Search server: used to search and reply to requests from the user interface.

Figure 3.1 represents the upper-level components of the SonarQube platform and how they interact with each other.

1. There are five possible methods available when an analysis request is triggered. To run an analysis of the source code, the user uses one of the tools below, which work with the sonar runner and fetch the source code from the repository:

i. SonarQube scanner for Maven Plugin
ii. SonarQube scanner for Ant Task
iii. SonarQube scanner for Gradle
iv. SonarQube scanner for MSBuild
v. SonarQube scanner

2. The analysis request is received by the scanner, which starts analyzing the project's source code. The analyzer takes the source code and selects the correct plugin based on the language of the source code. After analyzing each file, the result is sent to the compute engine component.

3. The compute engine calculates the metrics over the whole report from the scanner and rates the whole project.

4. After completing the whole analysis process, the result is sent to the database.

5. The database is connected to a web server. After analysis of the source code, the user interface is updated with the new data. The web server displays the analysis report, by default at http://localhost:9000, from where users can browse the analysis results.

3.4 Quality Model

The SonarQube quality model is based on the SQALE quality model and meets the ISO/IEC 25010 standard [84]. The next sections discuss ISO/IEC 25010 and the SQALE quality model in further detail; section 3.6 discusses the SonarQube quality model.

3.4.1 ISO/IEC 25010

Figure 3.2 Product quality characteristics specified by ISO/IEC 25010

ISO/IEC 25010 is provided by the International Organization for Standardization. As figure 3.2 shows, it defines eight product quality characteristics that can be measured and evaluated to improve the technical quality of a software product [24]. ISO/IEC 25010 itself does not measure these characteristics. The next section discusses the SQALE method and shows that its main characteristics meet ISO/IEC 25010.


3.5 SQALE

3.5.1 Introduction

SQALE ('Software Quality Assessment based on Life Cycle Expectations') [26] is a generic method that depends neither on a particular language nor on any analysis tool [27]. The lack of a standard method for evaluating the quality of source code objectively was a motivation for creating the SQALE quality model [29]. One of the fundamental principles of the method is that the quality of the source code is a non-functional requirement. Since software development projects have objectives regarding cost and deadlines that must be achieved, these objectives need to be formalized; this formalization is translated into requirements, and the requirements related to the source code are called non-functional requirements [27]. For the code's quality, SQALE provides the SQALE Quality Model for formulating and organizing the non-functional requirements; the model is organized in three hierarchical levels. SQALE also provides an Analysis Model, which uses defined rules to normalize the measurements and to control the source code. After measuring the source code, the next step is the aggregation of the measured values.

In the following section, we study the three levels of the SQALE Quality Model; afterwards, we take a deeper look at the SQALE Analysis Model.

3.5.2 SQALE Quality Model

1. Level 1 (Characteristics Level): This level contains the eight characteristics displayed in figure 3.3. These characteristics derive from the ISO/IEC 25010 standard [24] (which replaced ISO 9126) and are a 'projection of the ISO 9126 model on the chronology of a software application's life cycle' [27]. This means the chronological sequence of the characteristics in the model is very important [28]: each of them corresponds to a phase in the software life cycle, and a failure at any level has an impact on the subsequent and previous levels [28]. For example, if the software is not maintainable, it cannot be portable, and therefore it is not reusable.

Figure 3.3 represents the characteristics of the SQALE quality model.

2. Level 2 (Sub-characteristics Level): At this level, each characteristic of figure 3.3 is divided into one or more sub-characteristics [27]. For example, figure 3.4 shows how the characteristics maintainability and testability are each divided into two sub-characteristics: Maintainability into understandability and readability, and Testability into integration testing and unit testing. Each sub-characteristic is linked to one or more requirements (see figure 3.5).

Figure 3.4 represents the characteristics maintainability and testability, each divided into sub-characteristics.

3. Level 3 (Requirements Level): This level contains the source code requirements, or rules [26][27][28][29]. The rules are measurable and relate to internal attributes of the source code, such as complexity and documentation [30]. The requirements are defined to improve the quality of the source code, so the source code requirements differ per programming language. Some rules can be defined and added to the requirements list based on the programmers' needs regarding difficult parts of the code.

3.5.3 The SQALE Analysis Model

Introduction: The goal of the analysis model is to measure the source code and aggregate the measured values based on the rules [29][27]. The analysis model includes three parts, which we discuss in detail in the following sections.

Remediation Function: The remediation function's goal is to normalize the violations of a rule. It is a parameter, or factor, defined at the rule level whose value represents the average remediation cost for fixing one violation. The value of the remediation function depends on the activities needed to fix the violation; for example, the remediation cost of a violation related to a file differs from that of a violation related to a function [27].

Aggregation Rule: All aggregation in SQALE is performed by addition, based on the remediation cost [27]. The SQALE remediation cost for each characteristic is the sum of all remediation costs needed to fix the violations of the associated rules. The aggregated remediation cost for a single characteristic is referred to as an 'index'; the remediation costs summed over the whole are referred to as 'indices'.

The SQALE Indicators: SQALE defines three indicators [27], which help the user decide whether to fix the violations or rewrite the project from scratch. The SQALE indicators relate to the characteristics of the quality model. For example, the density index for maintainability is defined by dividing the remediation cost in man-hours related to the maintainability characteristic by the size of the whole project in man-hours. The result is mapped onto a rating scale that shows whether it is good or bad. For example, if a project requires 15 man-hours to correct rule violations, the project size is 5 KLOC, and the average cost of developing each KLOC is estimated at 100 man-hours, then the SQALE rating for this project, per the table of figure 3.6, is 15/500 = 0.03, i.e. C.
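The calculation above can be written out as a short sketch (our own illustration of the arithmetic, not tooling from the thesis):

```lua
-- SQALE density index = remediation cost / development cost of the project
local remediation_cost  = 15   -- man-hours needed to fix all violations
local project_size_kloc = 5    -- project size in thousands of lines of code
local cost_per_kloc     = 100  -- estimated man-hours to develop one KLOC

local development_cost = project_size_kloc * cost_per_kloc  -- 500 man-hours
local density_index    = remediation_cost / development_cost

print(density_index)  --> 0.03
```

The resulting ratio is then looked up on the rating scale of figure 3.6 to obtain the letter rating.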

Figure 3.6 represents an example of SQALE rating color mapping [27]

There are two other indicators, the SQALE Pyramid and the SQALE Debt Map [27]. We do not study them here, as they are not relevant to this thesis.
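The worked example above can be reproduced as a small calculation. The following Python sketch is illustrative only: the function and grid names are ours, and the rating boundaries are hypothetical placeholders chosen so that the example's density of 0.03 lands on C; the authoritative cut-offs are those of figure 3.6.

```python
def sqale_density(remediation_hours, project_kloc, hours_per_kloc):
    """Density index: remediation effort divided by the estimated
    effort to build the project, both in man-hours."""
    return remediation_hours / (project_kloc * hours_per_kloc)

# Hypothetical rating boundaries (placeholders, not figure 3.6's real grid),
# chosen so that the worked example (0.03) maps to C.
RATING_GRID = [(0.01, "A"), (0.02, "B"), (0.05, "C"), (0.10, "D")]

def sqale_rating(density):
    for upper_bound, letter in RATING_GRID:
        if density <= upper_bound:
            return letter
    return "E"

density = sqale_density(15, 5, 100)    # 15 man-hours, 5 KLOC, 100 h/KLOC
print(density, sqale_rating(density))  # 0.03 C
```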

3.6 SonarQube Quality Model

At the time of this project, the SonarQube Quality Model supports three characteristics of the SQALE Quality Model. Each characteristic has a direct link with the rules, which means that SonarQube no longer supports the sub-characteristics (see the discussion in chapter 7.1).


Figure 3.7 shows the three characteristics that have a direct link with the requirements. In SonarQube, the term rule is used instead of requirement.

Figure 3.7 represents the SonarQube Quality Model with three characteristics that are connected to example rules.

As we studied in the SQALE section, when an analysis is done, the issues related to the rules associated with each characteristic are gathered and coupled to that characteristic. In SonarQube, the term "issue" refers to all violations found during the analysis of all characteristics. The issues related to the characteristic of Maintainability are called "Code Smells", the issues related to the characteristic of Reliability are called "Bugs", and the issues related to the characteristic of Security are called "Vulnerabilities."

For this thesis, we focus on the quality characteristic of Maintainability, which is defined as: “Modification of a software product after delivery to correct faults, to improve the performance or other attributes, or to adapt the product to a modified environment” [25].

3.6.1 SonarQube Rule definition

Rules are sets of conventions that cover file organization, comments, statements, and programming practices [86]. Each rule has a key, tags, a description, a type, and a priority.

Each rule has a type. The issue types defined by SonarQube are:

● Bugs are related to reliability issues.

● Vulnerabilities are related to security issues.

● Code Smells are related to maintainability issues: something that confuses a developer or makes the code difficult to read.

Each rule has at least one tag. Some tags are defined to be common across languages [35]. For this project, we use the following tags:

● Convention: coding conventions regarding formatting and naming.

● Brain-overload: there is too much information to keep track of at once.

SonarQube defines many more tags, which can be found in the SonarQube documentation or via reference [35].


3.6.2 The remediation cost

Each issue has a remediation cost. SonarQube calculates the remediation cost based on the effort needed to fix an issue, expressed in units of time. SonarQube classifies the effort as: Trivial, Easy, Medium, Major, High, Complex [86].

Trivial | Easy   | Medium | Major | High | Complex
5 min   | 10 min | 20 min | 1 h   | 3 h  | 1 d

Table 3.1 displays the base effort estimated to fix issues of various severities.

Based on this table, we define the effort needed to fix an issue. For example, changing a function name is trivial, as fixing that kind of issue does not take much time; it is estimated at 5 minutes of effort.

3.6.3 How do rules measure the source code?

As we studied in the section about the SQALE analysis model, we use rules to measure aspects of the source code, but a rule is not equivalent to a metric [28]. We use metrics to analyze the source code: the input of a metric is source code and the output is a value. A rule, however, is not only a measure of the source code; it is also able to show whether the value of the measurement is good or bad. In other words, it can determine whether a value is an issue (violation) or not. Code listing 3.1 represents a rule definition to control complexity in a function call.

@Rule(
  key = "FuncCaLL",
  name = "FunctionCaLL",
  description = "FunctionCall should not be too complex.",
  priority = Priority.MAJOR,
  tags = Tags.BRAIN_OVERLOAD)
@SqaleLinearWithOffsetRemediation(
  coeff = "1min",
  offset = "10min",
  effortToFixDescription = "per complexity point above the threshold")
public class FunctionCallComplexityCheck extends LuaCheck {

  private static final int DEFAULT_MAXIMUM_FUNCCALL_COMPLEXITY_THRESHOLD = 5;

  @RuleProperty(
    key = "maxFuncCallComplexityThreshold",
    description = "The max authorized call that is allowed for a functionCall.",
    defaultValue = "" + DEFAULT_MAXIMUM_FUNCCALL_COMPLEXITY_THRESHOLD)

Code listing 3.1 represents a rule definition for a function call.

Linear with offset: To determine whether the result of a measurement is good or not, a rule collaborates with metrics, which measure the source code, and parameters, which determine what is good and what is bad [28]. A rule is also associated with the SQALE method to define the remediation function related to repairing an issue. For the rule "function call" shown in code listing 3.1 above, the rule uses the parameter threshold to define the maximum complexity of a function call. When the result of the complexity metric for a function call is greater than 5, which is defined as the maximum complexity in the threshold, an issue (rule violation) is registered for the rule "function call." The remediation function from SQALE that is associated with the rule allows us to calculate the effort needed to fix an issue. For the "function call" rule the offset is 10 minutes: "It takes a certain amount of time to analyze an issue of this type (this is the offset)" [87]. So we need 10 minutes to analyze what we should do, plus, for each complexity point above the defined threshold, a coefficient of 1, meaning 1 minute per complexity point above the threshold. For example, if the complexity of a function call is 6, then the remediation cost needed to fix that function call is 11 minutes. This kind of remediation cost is called linear with offset, with the formula [87]:

Total remediation cost = offset + (number of points above the threshold * coefficient)

Total remediation cost for the function call = 10 + 1 * 1 = 11 minutes

For the rules related to complexity we use linear remediation.
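The linear-with-offset calculation can be sketched in a few lines of Python. This is an illustrative sketch of ours, with defaults mirroring the FunctionCallComplexityCheck parameters (threshold 5, offset 10 min, coefficient 1 min); it is not the analyzer's implementation.

```python
def linear_with_offset_cost(measured, threshold=5, offset_min=10, coeff_min=1):
    """Remediation cost in minutes for a linear-with-offset rule:
    a fixed analysis time (the offset) plus coeff per point above
    the threshold. Only applied once an issue exists."""
    points_above = max(0, measured - threshold)
    return offset_min + points_above * coeff_min

print(linear_with_offset_cost(6))  # 10 + 1*1 = 11 minutes
print(linear_with_offset_cost(9))  # 10 + 4*1 = 14 minutes
```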

Constant remediation function: A remediation cost can also be defined as a constant value. Code listing 3.2 shows a rule definition for checking the line length of files.

1. @Rule(key = "LineLength",
2.   name = "Lines should not be too long",
3.   priority = Priority.MINOR,
4.   tags = Tags.CONVENTION)
5. @SqaleConstantRemediation("1min")
6. public class LineLengthCheck extends SquidCheck<LexerlessGrammar>
7.   implements CharsetAwareVisitor {
8.   private static final int DEFAULT_MAXIMUM_LINE_LENGTH = 80;

Code listing 3.2 represents part of a rule definition with a constant remediation cost.

As shown in the code listing at line 5, one minute of effort is needed to normalize a file which contains a line longer than 80 characters. That means that for each such issue we have a fixed unit of time to fix the violation.

3.6.4 Technical Debt

In SonarQube, Technical Debt, or the SQALE density index, is defined as the "effort to fix all maintainability issues" [33], in minutes; in other words, the effort needed to fix all Code Smells in the source code.

3.6.5 Technical Debt Ratio

The effort needed to fix the maintainability issues divided by the effort needed to develop the software is called the Technical Debt Ratio [33].

The ratio formula according to SonarQube is:

Remediation cost / Development cost

which can be restated as:

Remediation cost / (Cost to develop 1 line of code * Number of lines of code)

The value of the cost to develop one line of code is 0.06 days [33].
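As a sketch of this ratio, assuming all effort is expressed in days: the constant of 0.06 days per line comes from the text, while the example figures (1.8 days of remediation on a 1000-line project) are our own illustrative numbers.

```python
COST_TO_DEVELOP_ONE_LINE_DAYS = 0.06  # SonarQube's value, per the text

def technical_debt_ratio(remediation_cost_days, lines_of_code):
    """Remediation cost divided by development cost, both in days."""
    development_cost = COST_TO_DEVELOP_ONE_LINE_DAYS * lines_of_code
    return remediation_cost_days / development_cost

ratio = technical_debt_ratio(1.8, 1000)
print(ratio)  # approximately 0.03, i.e. a debt ratio of 3%
```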


3.6.6 SQALE Ratio

SQALE ratings are calculated by mapping ranges of the density index to a scale of ratings. Figure 3.8 represents the set of ranges that are mapped to the rating scale used by SonarQube [33]. What is considered 'good' and 'bad' is left to the user of the model [28].

Figure 3.8 represents SQALE rating color mapping for SonarQube
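A rating lookup over such ranges can be sketched as follows. The grid used here is the commonly documented SonarQube default (A ≤ 5%, B ≤ 10%, C ≤ 20%, D ≤ 50%, E otherwise); this is an assumption on our part, and the authoritative ranges are those shown in figure 3.8.

```python
# Assumed SonarQube default maintainability rating grid (not figure 3.8):
# A <= 5%, B <= 10%, C <= 20%, D <= 50%, E otherwise.
DEFAULT_GRID = [(0.05, "A"), (0.10, "B"), (0.20, "C"), (0.50, "D")]

def maintainability_rating(debt_ratio, grid=DEFAULT_GRID):
    """Map a technical debt ratio onto a letter rating."""
    for upper_bound, letter in grid:
        if debt_ratio <= upper_bound:
            return letter
    return "E"

print(maintainability_rating(0.03))  # A
print(maintainability_rating(0.25))  # D
```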

3.7 SonarQube’s Seven Axes of Quality

The SonarQube software quality model specifies that the quality of a software system can be measured and evaluated using seven different quality attributes, called the Seven Axes of Quality. These are addressed by SonarQube as bugs, coding rules, test coverage, duplications, documentation, complexity, and architecture [16]. Figure 3.9 displays SonarQube's Seven Axes of Quality.

Figure 3.9 Seven Axes of quality characteristics specified by SonarQube

The characteristics are determined by measuring the following source code properties:

3.7.1 Comments

This is a measure of maintainability: a comment explains what is happening in a program and makes it readable for programmers in case the code is too complex for colleagues to understand on their own [36]. Adding comments makes the code easier to understand, which is why comments are measured: they make working with the program easier. Comments are measured with the idea that programmers should document the code and the decisions they made during its development. This helps other team members understand much faster what the previous programmer was thinking, or what he was doing, when he wrote the part of the code that they need to call [16].

3.7.2 Duplications

Having outdated copies is the most obvious problem caused by duplication. When we copy and paste code in different places, we make the program larger and more complex [16]. Studies show that about 5-10% of the source of large-scale programs is duplicated code [40]. We copy a part of the code and paste it elsewhere in the program to reuse it, saving time and making programming faster, but this is considered a bad practice: if the original code contains bugs, we spread those bugs with every copy, and if we want to change one instance, all other copies have to be changed as well [41]. Large blocks of code that differ only in a couple of lines, or only a few characters, are a sign of copy-paste. Understanding such code is a time-consuming task, which makes further modification of the code extremely difficult. This means the maintainability of the software decreases dramatically [16].

3.7.3 Complexity

This is calculated based on McCabe's Cyclomatic Complexity (CC) and measures the number of independent execution paths in a computer program. It was introduced by Thomas McCabe [8]. The cyclomatic complexity is calculated as follows:

V(G) = e − n + 2

where

V(G) = cyclomatic complexity of graph G
e = number of edges
n = number of nodes

As an example, consider the following piece of code:

function printTheNumber(x,y,z)
  if x == 10 then
    if y > z then
      x = y
    else
      x = z
    end
  end
  print(x,y,z)
end
-- call the function
printTheNumber(10,5,30)
-- output: 30 5 30

As the code snippet above shows, the complexity of this piece of code is 4:

V(G) = 11 − 9 + 2 = 4
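The computation V(G) = e − n + 2 is mechanical once the edge and node counts of the control-flow graph are known, as this small Python sketch of ours shows:

```python
def cyclomatic_complexity(edges, nodes):
    """McCabe's V(G) = e - n + 2 for a connected control-flow graph."""
    return edges - nodes + 2

# printTheNumber's control-flow graph has 11 edges and 9 nodes:
print(cyclomatic_complexity(11, 9))  # 4
```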

Figure 3.10 displays the graph for the function printTheNumber.

McCabe argues that a small cyclomatic number in a program module increases the testability, analyzability, readability, and understandability of the module.

3.7.4 Code Coverage

Coverage of lines of code by unit tests shows that those lines are exercised, and if there is a problem with them we become aware of it earlier. Code coverage is a useful means for finding untested parts of a codebase [37]. In this way, we can improve the quality of the code by removing the errors or failures found through code coverage. There are different kinds of coverage measures. Statement coverage means that each line of the program is tested at least once [38]. Branch coverage tests both the true and false cases of each decision in the code; branch coverage subsumes statement coverage [39].
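The difference between the two measures can be illustrated with a one-branch function (a Python sketch of ours, not from the thesis):

```python
def describe(n):
    result = "small"
    if n > 100:
        result = "large"
    return result

# A single test with n = 200 executes every statement of describe()
# (100% statement coverage), yet the false branch of `n > 100` is never
# taken. Full branch coverage additionally requires a test such as
# describe(5) -- which is why branch coverage subsumes statement coverage.
assert describe(200) == "large"
assert describe(5) == "small"
```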

3.7.5 Coding Rule

Rules are a set of conventions that cover file organization, comments, statements, programming practices, and classes. They are a set of standards and guidelines that should be used when writing the source code for a program [16]. These rules help improve the readability and understandability of the code. Therefore, the complexity of the code is reduced, and the code is easier to manage should it be extended or changed because of new needs of the client; in short, it makes changing the code easier.


3.7.6 Bugs

These are errors which produce an unexpected result. In this project, we do not cover bugs.

3.8 Summary

In this chapter, we studied the SonarQube architecture in section 3.2, and in section 3.4.1 the ISO/IEC 25010 quality model; although this model indicates the main characteristics related to software quality, it does not provide an approach to measure the quality of software code. In section 3.5 we discussed the SQALE model, which provides a quality model with three levels: characteristics, which meet the ISO/IEC 25010 quality model; sub-characteristics, which link to the lowest level; and requirements, which are measurable and relate to internal properties of the source code. We studied the SQALE analysis model, which uses the remediation cost to measure the effort related to the parts of the code that violate the rules. In section 3.6 we studied the SonarQube quality model, which meets three characteristics of the SQALE model and does not use any sub-characteristics; the SonarQube analysis model, however, is still based on the SQALE model, and we showed with examples how a remediation function can be implemented in a rule to calculate the effort. In section 3.7 we studied the features provided by SonarQube to measure the technical quality of the source code.


Chapter 4

4. Code Smells

4.1 Introduction

Code Smells are not syntax errors that can be found by a compiler at compile time. Rather, code smells are flags indicating that something has gone wrong somewhere in the code. Such problems are not necessarily bugs and not technically incorrect, because they may still allow the program to function correctly. Instead, they signal flaws in the design of the software and can be a reason why development may slow down in the near future [47]. Although Code Smells do not have any impact on the execution of the program, they can cause potential problems for maintainability and understandability. Code Smells become an obstacle for developers when they need to change some part of the code to add new functionality or support a new platform in the future. From the software engineer's point of view, detecting Code Smells is a major concern for the enhancement of maintainability [47].

In this section, we discuss the concept of Code Smells in general, and in the next section we discuss the Code Smells as implemented in this project. An overview of the techniques used by the analyzer to detect these smells is presented in chapter 5.

4.2 Code Smell in Lua

Fowler has introduced a list of Code Smells [6] which provides a good basic explanation. Code Smells can vary greatly depending on the programming language, code style, and project [43]. Based on this fact, and because there is no standard Code Smell list that can be used for every programming language, we define the Code Smells based on the needs of the Lua programmers. Some types of Code Smells, such as 'long parameter lists', are common to many different programming languages. In this project we consider the complexity of some features of Lua that have an impact on the readability and understandability of the source code. Later, we design the rules to control and reduce this complexity.

Function Complexity: Complexity in functions increases code complexity, which decreases the understandability and readability of the code and makes the code more difficult to analyze. A function is a group of statements that together perform a task [44]. A function becomes difficult to read when it uses many different statements. The following example shows how a function that contains nested if statements becomes difficult to understand.

function PrintNumToString(number)
  if number == 5 then
    print("five")
  elseif number == 6 then
    print("six")
  elseif number == 7 then
    print("seven")
  else
    print("nothing")
  end
end

The code listing above displays the function PrintNumToString, which converts an integer to its equivalent value as a string. The input of the function is a number, which is checked against several conditions. The function PrintNumToString is responsible for checking the three conditions necessary to perform its task. The function can grow to check even more conditions, which results in adding more elseif and if statements and makes it more difficult to understand and test. Figure 4.1 displays the flow chart of the function PrintNumToString.

Figure 4.1 displays how a function containing more statements becomes more complicated. As shown in figure 4.1, the function PrintNumToString contains four independent paths, and each path needs to be tested. According to McCabe [8], each independent path adds extra complexity to the program, since each path represents a situation that needs to be understood, analyzed, and finally tested [45].

Complexity in Nested Functions: A nested function is a function defined within another function [48]. The nesting can grow to any depth. These functions cause complexity since they create coupling between the functions: they depend on the parent function and share its lexical scope [49]. Nesting also increases the CC, because the parent function carries the complexity of the inner functions. We consider the complexity of nested functions using the next example:

local function a(x)
  local function b(y)
    local function c(z)
      return z + z
    end
    return c(y) * c(y)
  end
  return b(x) + b(x)
end

V(G) = e − n + 2
V(G) = 5 − 4 + 2 = 3

Figure 4.2 displays the graph for the nested functions.

As the code listing shows, function a(x) is the parent function that contains function b(y), which in turn contains function c(z). Figure 4.2 shows the graph for the code listing on the left side. The CC of the code is calculated and assigned to V(G). Conforming to the single responsibility rule [79], this function should be broken into three independent functions to become easier to read and understand.

Long Parameter List in Function: A long parameter list is difficult to understand and modify [6] [50], because it provides a significant amount of data that makes understanding and analyzing very difficult for users [6]. A parameter is a variable that refers to one of the pieces of data provided as input to a function; that piece of data is called the argument [51]. Arguments can be passed by value or by reference. In the case where an argument is passed as a reference, the argument is a variable which is an address of a piece of code such as a function, a table, etc.

The following code listing displays how calling by value or by reference can become confusing.

1.  testPrint = function(x)
2.    print(x)
3.  end
4.  function add(num1, num2, aFunction)
5.    result = num1 + num2
6.    aFunction(result)
7.  end
8.  -- call the functions
9.  testPrint(11)            --> output = 11
10. add(10, 12, testPrint)   --> output = 22

Code listing 4.1 displays passing by reference in a function.

At line 1, function(x) is defined and stored in the variable testPrint. At line 4, the function add is defined: the sum of the two parameters num1 and num2 is stored in the variable result, and at line 6 result is passed to the parameter aFunction. This means that the parameter aFunction is the address of a function that can be called and used at line 6. The last line of the code listing shows an example of calling the function add in the form add(10, 12, testPrint). In this case, the function passed by reference is testPrint, but it could be the address of any other function usable at line 6.

As we have demonstrated, handling parameters that refer to addresses can be a confusing and time-consuming process, as they are not self-explanatory and are therefore difficult to read and use [6]. A call may disorder the arguments, or we can make a mistake by supplying too many or too few arguments. Each of these situations causes a mismatch between the parameter and argument lists, resulting in the function returning an incorrect answer or generating a runtime error [51].

Function Call Complexity: In the previous Code Smell we discussed how a long parameter list in a function can increase complexity and should be limited by a rule. In this section, we discuss the complexity within a function call. A function call can have many different forms in Lua: since each argument can be a function, a table, a coroutine, etc., the readability of a function call can suffer greatly.

The following examples display function calls that pass a function as a parameter.

The first example shows at line 1 the table list, which stores four other tables as elements. At line 2, the call table.sort(list) passes the table list as the parameter to be sorted. The Lua library contains functions to manipulate a table, such as the function sort, which is able to sort the elements of a table in a given order. To sort the elements inside list according to our own ordering, we pass the table list as the first parameter of table.sort and define a function as the second parameter, which specifies the sorting criterion we wish to apply to the table (line 3).

1. list = {{3}, {5}, {2}, {-1}}
2. table.sort(list)
3. table.sort(list, function (a, b) return a[1] < b[1] end)
-- output:
-- -1
-- 2
-- 3
-- 5

In situations where we need to define a function that accomplishes a difficult task inside a call, the call becomes even more difficult to understand. In this example, we have shown only a single function as a parameter in a call, but a call can contain more complex items that need to be controlled.

In the next example, we create a function and pass it to coroutine.create, which creates a new coroutine that is assigned to the variable co.

co = coroutine.create(function()
  for i = 1, 10 do
    print("co", i)
    coroutine.yield()
  end
end)

When such functions contain more statements and conditions, understanding the coroutine can become very difficult.

Control Nested Flow: The order in which individual statements are executed is called control flow [53]. When statements are nested, the structure of the program becomes difficult to understand [54]. We do not measure the complexity of the nesting structure with the McCabe complexity approach, because the CC number only considers the decision structure of a program [54][55]: the control paths in the graph only consider which nodes contribute to other nodes' complexities [56]. It does not recognize the complexity of the individual blocks within the program. The McCabe approach indicates the difficulty that would be encountered in debugging a program through testing, but offers no way to determine how difficult the static program text is to understand [56]. Deeply nested code is a common source of complexity and creates a structural programming issue, since it is hard to read and understand [58]. Every added level of nesting is another piece of context that your brain has to keep track of, and each nested block becomes another item that you have to line up by eye to see which condition it belongs to [57]. Therefore, nested control flow such as conditional blocks (if) and loops (for, while) is hard to understand when it contains more than three levels of nesting [65] [66]. This is known as "Dangerously Deep Nesting" [59] and is a reason that justifies a redesign.
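As an illustration of how such a check might work, the following Python sketch (our own naive approximation, not the analyzer's implementation) scans Lua source for block keywords and reports the deepest nesting of control-flow blocks. It deliberately ignores comments, strings, and repeat/until blocks; the function name is ours.

```python
import re

CONTROL = {"if", "for", "while"}

def max_control_nesting(lua_source):
    """Naive sketch: push block-opening keywords, pop on 'end', and
    report the deepest stack of control-flow blocks seen. 'function'
    is tracked too because 'end' also closes function blocks."""
    stack, deepest = [], 0
    for tok in re.findall(r"\b(?:if|for|while|function|end)\b", lua_source):
        if tok == "end":
            if stack:
                stack.pop()
        else:
            stack.append(tok)
            depth = sum(1 for t in stack if t in CONTROL)
            deepest = max(deepest, depth)
    return deepest

src = """
function f(a)
  for i = 1, 10 do
    if a then
      while a > 0 do
        a = a - 1
      end
    end
  end
end
"""
print(max_control_nesting(src))  # 3 -> would violate a threshold of 3? No: equal to it
```

A real check would compare this value against the rule's threshold (3 in our rule table) and raise an issue when it is exceeded.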

Complexity in Nested Tables: A deeper nesting level in a table increases the CC of the outer table. It has an impact on the readability and testability of the table, and nested tables lead to data access problems. For example, consider a table Test which contains three nested tables:

Test = {table1 = {table2 = {table3 = {a = 1}}}}

To access the value of the data a in table3, we have to go through the other tables: Test.table1.table2.table3.a. It is difficult to access the data a because so many layers must be traversed, and if one of the layers is moved, our code is no longer correct. Another source of complexity is that tables do not have a fixed size and can grow dynamically, which creates even more complexity. To control this kind of problem, we have to check the depth of nesting within a table.

Too Many Fields in Tables: A table is a composition of (key, value) pairs [60]; the fields are keys that refer to values. The values can be a function, a call, a table, etc. A table with many fields is very difficult to analyze, and its CC increases. As we mentioned before, a table can be treated as an object and can therefore have its own operations. The following example demonstrates how a field can be defined as an operation, making the table more complex.

1. Account = {balance = 500}
2. function Account.withdraw (v)
3.   Account.balance = Account.balance - v
4. end
5. -- call the function
6. Account.withdraw(300.00)

This code listing shows at line 1 the table Account, which contains a field with the key balance and the value 500. At line 2, a function is created, which is called as Account.withdraw(v) and stored in the field withdraw.

The main problem here is that if the field withdraw is removed from the table, the function will no longer work [7]. Larger tables contain more objects that might be removed, leaving the code with many calls that no longer work. More objects mean more responsibility, so a control on the number of table fields is needed.

Too Many Lines in Files: A file is a unit of functionality and should be kept small in order to stay focused and easy to understand and test. A file that contains many lines can become very complex. Increasing the number of lines of code has a linear relation to technical debt, as mentioned in chapter 3, section 3.6.4 on Technical Debt: adding more lines of code increases the Code Smells in a file. If we reduce the number of lines, or split the file into several smaller files, the files are easier to understand and maintain. Checking the number of lines in a file helps to keep it small.

Local Function Name: Bad naming is another issue that causes confusion about what a function exactly does. Sharing naming conventions is a key point in enabling a team to collaborate efficiently. For this project, we check local function names in Lua against the camelCase naming convention, which the Lua programmers have proposed to use [32].

4.3 Lua Coding Rule

For the Code Smells discussed above, we provide rules that check the complexity of functions, function calls, files, and table constructors, and that determine the nesting level for deep control flow, nested functions, and nested tables. As mentioned in chapter three, the rules are created based on the SonarQube rule definition. In the next section, we discuss the rules related to using tables in Lua. The default values and thresholds are chosen based on the opinions of the programmers and on what is common for other languages in SonarQube. The following code listing displays how a rule is defined in the code.

@Rule(
  key = "LineLength",
  name = "Lines should not be too long",
  priority = Priority.MINOR,
  tags = Tags.CONVENTION)
@RuleProperty(
  key = "maximumLineLength",
  description = "The maximum authorized line length.",
  defaultValue = "" + DEFAULT_MAXIMUM_LINE_LENGTH)

Code listing 4.1 shows how a rule is defined in the code.

4.3.1 Lua Rule Definition

Rule | Name | Tag | Priority | Threshold | Offset (min) | Coeff | Constant remediation cost (min)
FunctionComplexity | Functions should not be too complex | BRAIN_OVERLOAD | Major | 10 | 10 | 1 | -
MethodComplexity | Methods should not be too complex | BRAIN_OVERLOAD | Major | 10 | 10 | 1 | -
LocalFunctionComplexity | Local functions should not be too complex | BRAIN_OVERLOAD | Major | 10 | 10 | 1 | -
TableComplexity | Tables should not be too complex | BRAIN_OVERLOAD | Major | 10 | 10 | 1 | -
FunctionCallComplexity | FunctionCalls should not be too complex | BRAIN_OVERLOAD | Major | 5 | 10 | 1 | -
FunctionWithTooManyParameters | Functions/Methods should not have too many parameters | BRAIN_OVERLOAD | Major | 7 | - | - | 20
NestedControlFlowDepth | Control flow statements "if", "for", "while" should not be nested too deeply | BRAIN_OVERLOAD | Major | 3 | - | - | 10
LocalFunctionName | Local function names should comply with a naming convention | CONVENTION | Minor | - | - | - | 5
FileComplexity | Files should not be too complex | BRAIN_OVERLOAD | Major | 200 | 30 | 1 | -
TableWithTooManyFields | Tables should not have too many fields | BRAIN_OVERLOAD | Major | 5 | - | - | 10
NestedFunctionsDepth | Functions should not be nested | BRAIN_OVERLOAD | Major | 1 | - | - | 10
NestedTablesDepth | Tables should not be nested | BRAIN_OVERLOAD | Major | 3 | - | - | 10
TooManyLinesInFile | Files should not have too many lines | BRAIN_OVERLOAD | Major | 1000 | - | - | 60

As shown in the table, if the complexity or the nesting level exceeds what is defined as the threshold, a violation occurs. In chapter seven, we discuss the reason why we defined three different rules related to functions.
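The table's two remediation schemes (linear with offset versus constant) can be sketched in a few lines of Python; the rule names and data structure below are illustrative encodings of a few table rows, not the analyzer's actual code.

```python
# A few rows of the rule table, encoded for illustration.
RULES = {
    "FunctionComplexity":     {"threshold": 10,   "offset": 10, "coeff": 1},
    "FunctionCallComplexity": {"threshold": 5,    "offset": 10, "coeff": 1},
    "NestedControlFlowDepth": {"threshold": 3,    "constant": 10},
    "TooManyLinesInFile":     {"threshold": 1000, "constant": 60},
}

def remediation_minutes(rule_name, measured):
    """Return 0 when compliant; otherwise the constant cost, or a
    linear-with-offset cost for rules that define offset/coeff."""
    rule = RULES[rule_name]
    if measured <= rule["threshold"]:
        return 0
    if "constant" in rule:
        return rule["constant"]
    return rule["offset"] + (measured - rule["threshold"]) * rule["coeff"]

print(remediation_minutes("FunctionComplexity", 13))     # 10 + 3*1 = 13
print(remediation_minutes("NestedControlFlowDepth", 5))  # 10
print(remediation_minutes("TooManyLinesInFile", 900))    # 0 (compliant)
```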


Chapter 5

5. SonarQube Lua Analyzer

In this chapter, we first discuss the requirements that should be realized by the analyzer, and then how we collect metrics and apply the rules to the source code.

5.1 Requirements

The SonarQube Lua analyzer is required to collect metrics and apply rules to the source code. We collect the metrics provided by SonarQube, as mentioned in chapter three, section 3.7, and apply the rules defined in chapter four, section 4.3.1.

● Collect the following metrics:

○ Lines of code
○ Source lines of code
○ Documentation
○ Average units of complexity
○ Complexity of the source code
○ Statements
○ Duplication
○ Test coverage
○ Code Smells (issues)

● Apply the following coding rules:

o TooManyLinesInFileCheck
o FunctionComplexityCheck
o MethodComplexityCheck
o FunctionWithTooManyParametersCheck
o LocalFunctionNameCheck
o TableComplexityCheck
o NestedControlFlowDepthCheck
o LineLengthCheck
o FileComplexityCheck
o TableWithTooManyFieldsCheck
o FunctionCallComplexityCheck
o NestedFunctionsDepthCheck
o NestedTablesDepthCheck
o LocalFunctionComplexityCheck

– Compute the metrics and display the results on the SonarQube user interface. Display the distribution of complexity over files and functions on the SonarQube user interface.

– Integrate the analyzer within the SonarQube server.

– Make it possible to explore results sorted by project, and by the files of a project, on the SonarQube user interface.
