Generating feedback on comments in the code from novice programmers


Marek Thomas van der Hoeven

Student number: 10837787
Bachelor Thesis
Credits: 12 EC

University of Amsterdam
Bachelor Information Science

Supervisors:

drs. M.S. (Martijn) Stegeman
Jelle van Assema, MSc.


Abstract

Code quality is often one of the learning objectives of an introductory programming course. One of the criteria of code quality is 'commenting in code'. We built a proof of concept that is able to generate feedback on the overuse of comments. This feedback is generated using the comment density and certain comment characteristics, such as the comment length or the placement of the comment in the code. The feedback focuses on the main comment error type. We evaluated the proof of concept for validity and usefulness. The results indicate that the tool's feedback can assist students in properly adjusting the overuse of comments in the code, but the feedback still lacked some in-depth solutions in comparison with a teaching assistant's feedback.


Contents

1 Introduction
2 Theoretical foundation
  2.1 Comments in the code
  2.2 Comment density
  2.3 Feedback
3 Research question
  3.1 Approach
4 Recognising the problematic comments
  4.1 Method
  4.2 Results
  4.3 Analysis
5 Structuring the feedback
  5.1 Method
  5.2 Results
  5.3 Analysis
6 Proof of Concept design
  6.1 The algorithm
  6.2 Building a proof of concept
    6.2.1 Classes
    6.2.2 Static feedback
7 Evaluation
  7.1 Method: validity
  7.2 Results: validity
  7.3 Method: usefulness
  7.4 Results: usefulness
8 Discussion
  8.1 Validity
  8.2 Usefulness
  8.3 Overall experience
  8.4 Limitations


Chapter 1

Introduction

Well-written code should not only work properly, it should also be simple to understand, maintain and reason about. These qualities are part of what is defined as code quality. High-quality code makes maintaining software projects easier and less time-consuming, which ultimately saves money. Code quality can be succinctly defined as the understandability of code (Boehm et al., 1976).

Writing high-quality code is often part of the learning objectives of introductory programming courses. Students improve by getting feedback on their code assignments from a teacher.

Stegeman et al. (2016) developed a rubric to standardise the process of giving feedback on code quality. ‘Comments in code’ is one of the nine criteria derived from this rubric.

Several tools (Steidl et al., 2013; Binkley, 2007) have been developed to analyse comments in the code. These tools check whether comments have valid syntax or conform to a certain set of guidelines. Our goal is to develop a tool (a proof of concept) that provides feedback on the overuse of comments in the code. Such overuse occurs when too many errors are made in the comments. One example of such an error is a comment that unnecessarily repeats the code; Figure 1.1 shows this example.

# List of all the female names
female_names = ['Alex', 'Anna', 'Joyce']

Figure 1.1: An unnecessary comment that merely repeats the code.

Our target users are students of an introductory programming course. The proof of concept works with the same programming language used during this course, which is Python.

Chapter 2

Theoretical foundation

2.1 Comments in the code

Commenting code is viewed as a good practice by many programmers (Van De Vanter, 2002). Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin (2009) is a respected book about commenting techniques and the use of proper commenting in code. We derive six types of comments from this book. Certain types are not recommended for use in the code, while other types are advised. This results in two lists (advised and discouraged) that a comment may fall under:

Advised:

• Summary of the code

Comments that summarise code distill a few lines of code into one or two sentences. This kind of comment makes it easy and pleasant to scan through code, especially when someone other than the code's original author tries to read or modify it.

• Description of code’s intent

Comments that operate at the level of the problem instead of at the level of the solution. Repeating the code in a comment states the solution (already given in the code); specifying the intent of the code helps people understand the problem that the code needs to solve.

• Information that cannot be expressed by code itself (non-expressive)

This kind of comment is given for information that cannot be expressed in the code. Author names, version information and legal terms are examples of this kind of comment.

Discouraged:

• Repeat of the code

A comment with repetitious content: it merely repeats the code, gives no additional information to the reader and provides no value. Removing this kind of comment is recommended.

• Explanation of the code

This kind of comment is often given to explain tricky or complicated pieces of code. Rewriting the code and removing the comment is recommended: improve the code and use summary or intent comments if needed.


• Marker in the code

Comments that mark certain code, often to remind the programmer to take another look at it. This kind of comment should never be used in released software, but is often unintentionally left in.

2.2 Comment density

Using too few or too many comments can make code hard to understand (Jones, 2000). The comment density of a file can be used to express the overall comment usage in the code. Comment density is calculated by dividing the number of lines containing a comment by the number of lines of code in a certain file (Fenton and Pfleeger, 1997). Arafat and Riehle (2009) found a link between successful open source projects and their comment density. They noticed that the comment density varies by programming language but remains invariant with respect to team size and project size. For successful open source projects written in the Python programming language, an average comment density of 0.1150 was found, with a standard deviation of 0.0803.
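As a minimal sketch of this calculation (assuming the file's source is available as a string; the function name and the use of Python's tokenize module are our own choices, not part of the cited definition):

import io
import tokenize

def comment_density(source: str) -> float:
    # Lines containing a comment, divided by the total number of lines
    # in the file (after Fenton and Pfleeger, 1997).
    comment_lines = {
        token.start[0]  # 1-indexed line number of the comment token
        for token in tokenize.generate_tokens(io.StringIO(source).readline)
        if token.type == tokenize.COMMENT
    }
    total_lines = len(source.splitlines())
    return len(comment_lines) / total_lines if total_lines else 0.0

print(comment_density("# a comment\nx = 1\n"))  # prints 0.5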

2.3 Feedback

The importance of giving feedback is shown by Hattie (2008). He analysed multiple aspects that influence students' learning outcomes and achievements. His results demonstrate that providing feedback is the most influential aspect.

According to Sadler (1989), effective feedback describes what good performance on a task looks like, tells students their current level relative to that good performance and provides a way to improve at the given task.

Two types of feedback are distinguished. The first type focuses on the errors made in the task (negative feedback). The second type praises a student for the parts done correctly (positive feedback). Positive feedback is more likely to help a student learn if the student is motivated and wants to do the task. Negative feedback is more likely to help if the student is undertaking a task that the student is not committed to (and hence has to do) (Hattie and Timperley, 2007).

Chapter 3

Research question

A system can use the comment density to decide when too many comments are used in the code. However, the comment density alone cannot explain the underlying problem behind a higher-than-expected comment density value. Our goal is to enable a system to find the (most likely) problem that causes a higher comment density value and to provide feedback based on the problem found. This goal results in the following research question:

How can a tool be developed to give useful feedback on the overuse of comments in the code based on the comment density?

Six different types of comments were put into two lists: 'advised' and 'discouraged' (Section 2.1). The comment error types found in the 'discouraged' list provide an explanation of why these kinds of comments are considered 'discouraged' and how to solve them. To provide feedback on the main underlying problem for the overuse of comments, a system needs to detect (as well as possible) whether a comment falls under one of the error types in the 'discouraged' list. Such a system cannot understand the 'human meaning' of a comment to determine the possible error type. Therefore, the system will have to look at other characteristics, for example the length of the comment or the placement of the comment. This results in the first sub-question:

1. What characteristics of comments can be used to identify each of the error types?

Section 2.3 shows the importance of providing well-structured feedback. We found no literature about how to structure feedback on the comments in students' code. The second sub-question tries to give some insight into how such feedback might be structured:

2. What does feedback look like that will help a student actually improve their use of comments in the code?

3.1 Approach

We answer the research question by developing a proof of concept, built using the answers to our two sub-questions as input.

To answer sub-question 1, we examine a dataset of code assignments to find the characteristics of comments that the proof of concept could use to determine the main comment error type that is (likely) causing the overuse of comments.


To answer sub-question 2, we examine the feedback on the comments given by a teaching assistant. We analyse this feedback to reveal how the feedback on commenting code could be structured.

The proof of concept is based on an algorithm. This underlying algorithm uses the thresholds and methods suggested by the literature and by our pretests to detect the overuse of comments in the code and to find the error type that most likely causes this higher comment density value. The proof of concept provides feedback to help students resolve this most frequently found error type.

Chapter 4

Recognising the problematic comments

This chapter describes the process of answering sub-question 1. We answer this sub-question by finding characteristics of comments that could help a system detect the likely problem causing a higher comment density than preferred. Two comment error types could cause this overuse (Chapter 2): a student explaining the code too much, or a student repeating the code too much in the comments.

4.1 Method

We examine 2165 Python code assignments made by students. We notice that small comments placed near certain pieces of code are often considered 'discouraged'. This leads to the hypothesis that single-line comments (comments that occupy only one line in the program) placed near if-statements, for/while-loops and variable declarations are a good indicator for detecting the two comment error types (repeat of the code or explanation of the code, Section 2.1) and the most common context in which they can be found. The length of such single-line comments also seems a good indicator for detecting the exact type of error made in the comment. We explore this hypothesis by constructing and analysing three JSON datasets, each containing single-line comments and their characteristics. We briefly explain how each dataset was established.

Datasets

We write a Python script to automatically extract all the single-line comments and their context characteristics from the 2165 code assignments. We only extract the comments that are placed one line above an if-statement, loop statement or variable declaration (a context), or on the same line as such a context. The script extracts two properties: the code of the context (the if-statement, the loop statement or the variable declaration) and the text of the single-line comment near the context.

Each context and its data is put into a separate dataset. This results in 1763 single-line comments near if-statements, 372 single-line comments near loop statements and 314 single-line comments near variable declarations. The reason the number of if-statements is higher than the other two is that every instance found in the dataset was reviewed by hand. After reviewing all the if-statements, it became clear that this process would take too much time, so some concessions had to be made for time management. This resulted in fewer hand-reviewed instances in the other datasets. The hand review determines the possible comment error made in an instance, and this information is added to the dataset.
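A hypothetical reconstruction of this extraction script is sketched below; the keyword prefixes and the '=' test for variable declarations are our own simplifications, not the original implementation:

import io
import tokenize

CONTEXT_PREFIXES = ("if ", "for ", "while ")

def extract_pairs(source: str) -> list[tuple[str, str]]:
    # Collect (comment, context) pairs for single-line comments placed one
    # line above, or on the same line as, an if-statement, a loop or a
    # variable declaration.
    lines = source.splitlines()
    pairs = []
    for token in tokenize.generate_tokens(io.StringIO(source).readline):
        if token.type != tokenize.COMMENT:
            continue
        row = token.start[0]  # 1-indexed line of the comment
        own_line = lines[row - 1].lstrip().startswith("#")
        # A comment on its own line refers to the next line of code; an
        # inline comment refers to the code on the same line.
        context_index = row if own_line else row - 1
        if context_index >= len(lines):
            continue
        context = lines[context_index].strip()
        is_declaration = "=" in context.split("#")[0]  # crude heuristic
        if context.startswith(CONTEXT_PREFIXES) or is_declaration:
            pairs.append((token.string, context))
    return pairs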

We analyse the three datasets and calculate the probabilities of finding a wrongly used single-line comment near a certain context, as well as the influence of the comment density on these probabilities. Lastly, we examine the influence of the length of a single-line comment near a certain context (if-statement, loop or variable declaration) on the exact comment error type.

4.2 Results

Table 4.1 shows an overview of the results of analysing the three constructed datasets. It shows the number of instances that were reviewed and the probability of finding a 'discouraged' single-line comment.

                       Number of instances   Discouraged probability
if-statement                          1763                     81.5%
loop-statement                         372                     77.8%
variable declaration                   314                     80.0%

Table 4.1: Results from the datasets

Figure 4.1 shows the relationship between the probability of finding a single-line comment error and the minimum comment density value of the file where the instance was found.

Figure 4.1: Probability of finding a single-line comment error against the minimum comment density of the file where the instance was found

Figures 4.2, 4.3 and 4.4 show the probability of finding a certain type of single-line comment error based on the comment's character length. The intersection in Figure 4.2 is at comment length 41, the intersection in Figure 4.3 at comment length 37 and the intersection in Figure 4.4 at comment length 30.

Figure 4.2: Error type probabilities near if-statements

Figure 4.3: Error type probabilities near for-loops

Figure 4.4: Error type probabilities near variable declarations

4.3 Analysis

The characteristics of the comments lead to two conclusions:

Context of single-line comments

Single-line comments placed near if-statements, loops or variable declarations are very likely (around 80% for every context) to be instances of one of the two 'discouraged' comment types. This probability increases slightly as the comment density increases.

Error type and comment length

Figures 4.2-4.4 show the probability of finding each type of error (in single-line comments) based on the comment's length. Short single-line comments (placed near an if-statement, loop statement or variable declaration) show a higher probability of being a 'repeat of the code' error. Every graph shows a flipping point between comment length and the error type likely made. These flipping points are the intersection values found in the figures.
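A classifier built on these flipping points could look like the following sketch; the context labels are our own, and the cut-off values are the intersections read from Figures 4.2-4.4:

# Flipping points (comment length in characters) from Figures 4.2-4.4.
FLIPPING_POINT = {"if": 41, "loop": 37, "variable": 30}

def likely_error_type(comment: str, context: str) -> str:
    # Short single-line comments tend to repeat the code; longer ones
    # tend to (unnecessarily) explain it.
    if len(comment) <= FLIPPING_POINT[context]:
        return "repeat of the code"
    return "explanation of the code"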

Chapter 5

Structuring the feedback

This chapter describes the process of answering sub-question 2. The proof of concept provides feedback to a student; we base the structure of this feedback on the analysis of the results for this sub-question.

5.1 Method

The purpose of the tool is to generate useful feedback for a student. We want to get a sense of what useful feedback on the comment use in the code looks like. In general, the feedback that a teacher or assistant gives a student is considered best practice. We therefore ask one assistant of an introductory programming course to provide feedback on a small Python code sample. This code sample contained a few errors in the comments. Most errors fell under the category 'repeat of the code', but the category 'explanation of the code' was also present (Chapter 2). The assistant only had to give feedback on the comments in the code example. Lastly, we review the feedback obtained from the assistant.

5.2 Results

• The feedback was not specific: no individual erroneous lines were pointed out. The assistant gave a general recommendation.

• The feedback focused on the most occurring error (repeat of code).

• The feedback mentioned the context where the most occurring error was made, for example “A lot of comments just repeat what the variable name already is saying”.

• The feedback gave suggestions to solve the most occurring error, for example, “Try to remove the repeating comments and before adding any comments try to think about the value it’s adding to your program”.

• The placement of a comment was deemed important. The feedback suggested always using the same placement for a comment (above a statement or next to it).

5.3 Analysis

The analysis of the assistant's feedback provides guidelines for the feedback on comment usage that a tool generates. The feedback on comments in code should not be too specific: it should point out the most frequently occurring error type in the comments and the context in which it is found. A general suggestion on how to improve the comment usage in the code can be added as well. Lastly, the feedback on commenting code should contain some information about the consistency of comment placement.

Chapter 6

Proof of Concept design

We develop a proof of concept to answer the main research question. This proof of concept is based on an algorithm. This underlying algorithm uses the thresholds and methods suggested by the literature and by our pretests: the comment density, the properties of the single-line comments and their context.

6.1 The algorithm

1. Checking the overuse of comments in the code. By calculating the comment density of a piece of code, an estimate is established for knowing when too many comments are used. The threshold of the comment density for overuse of commenting is set as follows: the proof of concept is programmed in the Python programming language, and the results (in Chapter 2) show that the average comment density in successful open source Python projects is 0.1150. Since the proof of concept is aimed at students, the threshold for determining too many comments is adjusted (more comments are allowed): we add the standard deviation of 0.0803 to the comment density mentioned before, resulting in a comment density of 0.1953. This is rounded to 0.20.

2. Where are the most comment errors made? Counting all the single-line comments near every context (if-statements, loops, variable declarations) shows where the most problematic comments are (likely) located (Chapter 4).

3. What kind of error is the most problematic? After knowing the placement where those single-line comments are prevalent, an analysis of the single-line comments in that context (found in step 2) is conducted. The length of the single-line comment determines whether the comment error is 'repeat of the code' or 'explanation of the code'. Every context has its own threshold (Chapter 4). The total number of each error type (in the context found in step 2) is calculated and the largest number is selected.

4. Are comments consistently placed? All the comments are tested by looking at their placement in the code. A comment can be placed next to a line of code or above it. When one or more comments are placed inconsistently, the feedback is adjusted (Chapter 5).

5. Putting it all together. The algorithm now knows four properties about a reviewed code sample:

1. Whether there is an overuse of comments in the code.
2. In which context the most comment errors are made.
3. What the most frequent comment error type is in this context.
4. Whether the comments are placed consistently.

Based on these values, feedback is selected from a static list. If the comment density is lower than the set threshold, the feedback compliments the student on using a proper number of comments, and a small suggestion is given about watching the comments in the error context found in step 2.

If the comment density is higher than the set threshold, the feedback is more specific about the most frequently counted error type in the comments and the context in which it is found the most. The feedback gives a general example of how to properly solve this kind of error in the found context. A feedback example, based on certain values found in a Python code sample, is shown below.

Example of generated feedback. The algorithm determines:

• Too many comments (comment density above 0.20): Yes
• Context with most error comments: variable declarations
• Most counted error type at this context: Repeat of the Code
• Consistent placement of comments: No

Feedback given by the tool:

1) You are probably using too many comments in the code, especially near variables. Don't just repeat what the code is doing. Let the code speak for itself.

   General example for improving comments:

   # List of all the female names
   female_names = ["Alex", "Anna", "Joyce"]

   The comment can be removed, since the comment simply repeats what the code is already showing. The variable name speaks for itself.

2) Try to be consistent with the placement of comments. Place comments always above the code or next to the code.
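Putting the five steps together, the top-level flow of the algorithm could be sketched as follows. This is a simplified reading of the design, not the thesis implementation: the context heuristics are reduced to string tests and only two of the twelve static feedback strings are shown.

import io
import tokenize
from collections import Counter

DENSITY_THRESHOLD = 0.20
FLIPPING_POINT = {"if": 41, "loop": 37, "variable": 30}
STATIC_FEEDBACK = {
    (True, "variable", "repeat"):
        "You are probably using too many comments in the code, especially "
        "near variables. Don't just repeat what the code is doing.",
    (False, "if", "explanation"):
        "Good number of comments; still, watch the explanatory comments "
        "near if-statements.",
    # ... the remaining 10 of the 12 combinations ...
}

def generate_feedback(source: str) -> str:
    lines = source.splitlines()
    comments = [t for t in tokenize.generate_tokens(io.StringIO(source).readline)
                if t.type == tokenize.COMMENT]

    def context_of(token):
        # Heuristic context of a comment: the code on the same line, or on
        # the line below when the comment stands on its own line.
        row = token.start[0]
        own_line = lines[row - 1].lstrip().startswith("#")
        if own_line and row >= len(lines):
            return None
        code = (lines[row] if own_line else lines[row - 1]).strip()
        if code.startswith("if "):
            return "if"
        if code.startswith(("for ", "while ")):
            return "loop"
        if "=" in code.split("#")[0]:
            return "variable"
        return None

    # Step 1: overuse of comments?
    density = len({t.start[0] for t in comments}) / max(len(lines), 1)
    # Step 2: the context where most single-line comments are found.
    counts = Counter(c for c in map(context_of, comments) if c)
    if not counts:
        return "No comments near an if-statement, loop or variable found."
    context = counts.most_common(1)[0][0]
    # Step 3: dominant error type in that context, decided by length.
    in_context = [t.string for t in comments if context_of(t) == context]
    repeats = sum(len(c) <= FLIPPING_POINT[context] for c in in_context)
    error = "repeat" if 2 * repeats >= len(in_context) else "explanation"
    # Step 4: placement consistency (all inline, or all on their own line).
    consistent = len({lines[t.start[0] - 1].lstrip().startswith("#")
                      for t in comments}) <= 1
    # Step 5: select the static feedback string and adjust for placement.
    feedback = STATIC_FEEDBACK.get(
        (density > DENSITY_THRESHOLD, context, error),
        "See the static feedback list.")
    if not consistent:
        feedback += (" Try to be consistent with the placement of comments: "
                     "place comments always above the code or next to it.")
    return feedback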

6.2 Building a proof of concept

We build a Python program to use and test the designed algorithm. This proof of concept (the tool) is briefly explained in this section.

6.2.1 Classes

There is a main class on which every other class is based. This class, named SourceFile, contains the general methods that every other class needs, for example a method that extracts all the code from a code sample.
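A skeleton of this structure might look as follows; only the class name SourceFile comes from the thesis, the method names are assumptions:

class SourceFile:
    # Base class holding the general methods that every other class needs.
    def __init__(self, path: str):
        self.path = path

    def extract_code(self) -> str:
        # General method: read all the code from a code sample.
        with open(self.path, encoding="utf-8") as handle:
            return handle.read()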


6.2.2 Static feedback

The Feedback class follows the principles of the algorithm described earlier. Based on the values that the algorithm extracts, a feedback message is selected (as a string) from a static list. The list contains all possible kinds of feedback that the tool can give. The given feedback depends on the comment density (too high or too low), the context of the most frequent errors (if, loop or variable statements) and the most frequent error type made in this context (repeat of the code or explanation of the code). This results in a list of 12 (2 x 3 x 2) possible static feedback strings.
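The 2 x 3 x 2 structure can be made explicit by generating one key per combination; the label strings below are assumed, not taken from the tool:

from itertools import product

# One static feedback string for each (density, context, error type) key.
FEEDBACK_KEYS = list(product(("too high", "low"),
                             ("if", "loop", "variable"),
                             ("repeat of the code", "explanation of the code")))
assert len(FEEDBACK_KEYS) == 12  # 2 x 3 x 2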


Chapter 7

Evaluation

We evaluate the proof of concept, built from the designed algorithm, with two main goals: measuring its validity and its usefulness. The results are discussed in Chapter 8.

7.1 Method: validity

We ask three assistants from an introductory programming course at the University of Amsterdam to give feedback on four Python code samples. Three of the samples originated from real-life code assignments made by students, each of which had a specific kind of general commenting problem. These problems were determined beforehand (by hand). The last code sample was written by the author of this paper and had no problematic comments in it. We compare the feedback from the assistants to the feedback that the tool gave for each code sample.

7.2 Results: validity

• Code snippet 1:

Main problem: The code snippet contained too many comments in general, with most comments repeating the code. These repeating errors were mostly placed around if-statements.

Assistant feedback: All the assistants found every wrong usage of a comment in the code. The feedback they gave was similar to each other's; no feedback contradicted another. The feedback came down to the advice to not repeat every logical step in the program: “A logical equation often speaks for itself”.

Difference with tool: The tool's feedback found the same main problem in the use of comments and does not differ much from the assistants' feedback. The biggest difference was the amount of feedback on the consistency of comments. The assistants deemed consistency in every property of a comment very important, while the tool only checks for consistency of comment placement.

• Code snippet 2:

Main problem: Too many comments were used and unnecessary explanations of variables were given.


Difference with tool: The advice from the assistants was a bit more in-depth. The tool cannot give the recommendation to use summary comments.

• Code snippet 3:

Main problem: Overall comment usage in the code is low; most comments are placed at for-loops, repeating what the code states.

Assistant feedback: “I don't mind the comments too much. Again, try to summarise larger parts of the code when commenting about a for-loop.”

Difference with tool: The tool correctly detects a low number of comments and gives a small suggestion for improvement: it suggests removing certain comments. This suggestion differs from that of the assistants, who gave proper advice about how and when to use summarising comments instead.

• Code snippet 4:

Main problem: None

Assistant feedback: Almost no feedback was needed; one assistant found it important that comment lines consistently end with a dot.

Difference with tool: The tool detects a low amount of comment usage and gives only a small suggestion about repeating the code and how to solve it. This feedback was deemed unnecessary in comparison with the feedback from the assistants.

7.3 Method: usefulness

We evaluated the usefulness of the tool by giving two students feedback, generated by the proof of concept, on two pieces of Python code. Their task was to improve the code (after looking at the feedback) however they saw fit. At the end of the evaluation, the students were asked for their opinion of the feedback, asked to describe useful feedback, and asked for their opinion on very specific feedback versus more general feedback.

7.4 Results: usefulness

• All the unnecessary 'repeating' comments were addressed and correctly changed by both students (16/16).

• 'Explanation' errors in comments were mostly (correctly) solved (10/16). The rest of the errors (6/16) were not corrected appropriately.

• The placement of comments was in all cases correctly changed to be more consistent.

Both students recommended the use of general feedback. The main reason is that students are then forced to think critically (again) about their commenting use. If the feedback is too specific, they feared the possibility of just changing the lines pointed out by the feedback, without thinking about the specific error and how to avoid it next time.


Chapter 8

Discussion

An algorithm was built upon the analysis of the results of the two sub-questions. Sub-question 1 showed how a system could recognise the (likely) main error type behind a student's overuse of comments. Sub-question 2 gave insight into the shape and form of the feedback the tool provides. The algorithm was tested by building it into a Python program. This Python program (the proof of concept) is the answer to the main research question: it shows that by using the comment density, the context of single-line comments and the length of those comments, useful feedback can be generated for students about the most (likely) prevalent comment error type. However, some limitations of this proof of concept were found during the evaluation.

8.1 Validity

The validity of the tool was evaluated by looking at the similarities between the feedback of the tool and the feedback from the assistants. The results show that the general feedback of the tool was as accurate as that of the assistants. The tool pointed out the same main error type made in the comments and provided decent suggestions for improvement. The suggestions given by the tool were mostly the same as the assistants', but sometimes lacked a more in-depth solution. Assistants provided more reliable suggestions for improvement on when and how to use a summary comment to solve the overuse of explanatory comments. This kind of suggestion was not present in the tool's feedback.

The feedback from the tool on the consistency of the comments was similar to that given by the assistants. The tool only looked at the consistency of placement, but the assistants also looked at other properties of the comments and their consistency. They checked whether the comments were all in the same language, whether they all started with (or without) a capital letter and whether they all ended the same way (with a dot or not).

After this evaluation, a few suggestions for improvement were identified. First, the tool should provide better solutions to the different error types. Currently, only one solution exists per error type; researching more and better-suited solutions to each type of error is needed to match the feedback of an assistant. Second, the tool should check more aspects of consistency in comments.
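The extra consistency properties the assistants checked could be added along these lines (a sketch covering capitalisation and trailing punctuation; checking that all comments are in the same language would need more machinery):

def starts_consistently(comments: list[str]) -> bool:
    # True when all comments start with the same case.
    firsts = [c.lstrip("# ").strip()[:1] for c in comments]
    return len({f.isupper() for f in firsts if f}) <= 1

def ends_consistently(comments: list[str]) -> bool:
    # True when all comments end the same way (with or without a dot).
    return len({c.rstrip().endswith(".") for c in comments}) <= 1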

8.2 Usefulness

Students often removed an explanatory comment as a solution, but other times changing the code is a better solution to this error. The static advice of the tool was not always enough for the students to figure out how to solve the given errors. The consistency of the placement of the comments, however, was edited correctly by the students in all cases.

8.3 Overall experience

The students had a positive attitude towards the use of this proof of concept. They experienced the feedback it gave as helpful and thought-provoking: it makes students think twice about their commenting style and helps them improve their comments through small suggestions on how to solve the most prevalent comment error type in a file. The evaluation showed some promising possibilities for the tool, but the tool still lacked in-depth solutions and feedback in certain areas in comparison with the feedback of an assistant. More research into providing different solutions to certain comment error types, and a stronger focus on the consistency of the comments, are the main improvements for this proof of concept.

8.4 Limitations

The thresholds set for detecting comment overuse and detecting the comment error type were based on the literature review and the results obtained for sub-question 1. Although these thresholds were carefully chosen, one cannot be sure that they are truly optimal; more research is needed to find more precise values.

The three datasets obtained from the code samples were reviewed and edited by the author of this paper. This was done to the best of the author's abilities, but having multiple code-quality experts review these datasets may lead to more accurate results for sub-question 1 and perhaps better threshold values.

The evaluation of the proof of concept was done with three assistants and two students. More subjects and code samples might have measured the usefulness and validity of the proof of concept more accurately, and perhaps in a more quantitative way, but this was not possible due to time constraints and the number of assistants and students available.

The characteristics found for sub-question 1 were derived from code assignments written in Python, and the algorithm is based on these results. It should be noted that the results for sub-question 1 could differ for code assignments written in a different programming language; these comment characteristics may not hold for other programming languages. This is supported by Arafat and Riehle (2009), who show differences in comment density between programming languages.

The second sub-question was constructed to get a sense of the feedback structure. The feedback from the assistant gave some insight into how assistants might give feedback, specifically on the use of comments. It is advisable to perform this research with more than one assistant and a wider variety of code samples, to get more reliable results and a better understanding of the best feedback structure to use.


Bibliography

Arafat, O. and Riehle, D. (2009). The commenting practice of open source. In Proceedings of the 24th ACM SIGPLAN conference companion on Object oriented programming systems languages and applications, pages 857–864. ACM.

Binkley, D. (2007). Source code analysis: A road map. In Future of Software Engineering, 2007. FOSE’07, pages 104–119. IEEE.

Boehm, B. W., Brown, J. R., and Lipow, M. (1976). Quantitative evaluation of software quality. In Proceedings of the 2nd international conference on Software engineering, pages 592–605. IEEE Computer Society Press.

Fenton, N. and Pfleeger, S. (1997). Software metrics: A rigorous and practical approach. International Thomson Computer Press, London, UK.

Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.

Hattie, J. and Timperley, H. (2007). The power of feedback. Review of educational research, 77(1):81–112.

Jones, C. (2000). Software assessments, benchmarks, and best practices. Addison-Wesley Longman Publishing Co., Inc.

Martin, R. C. (2009). Clean code: a handbook of agile software craftsmanship. Pearson Education.

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional science, 18(2):119–144.

Stegeman, M., Barendsen, E., and Smetsers, S. (2016). Designing a rubric for feedback on code quality in programming courses. In Proceedings of the 16th Koli Calling International Conference on Computing Education Research, pages 160–164. ACM.

Steidl, D., Hummel, B., and Juergens, E. (2013). Quality analysis of source code comments. In Program Comprehension (ICPC), 2013 IEEE 21st International Conference on, pages 83–92. IEEE.

Van De Vanter, M. L. (2002). The documentary structure of source code. Information and Software Technology, 44(13):767–782.
