
MarkWrite: Standardised feedback on ESL student writing via a computerised marking interface

Thesis submitted for the degree PhD in Applied Linguistics at the North-West University, Potchefstroom Campus.

Henk Louw

Supervisor: Professor A. J. (Bertus) van Rooy

January 2011

Potchefstroom


ACKNOWLEDGEMENTS

I would like to thank the following people for their assistance, support and encouragement in the completion of this thesis:

Bertus van Rooy

Susan Coetzee-Van Rooy

My colleagues and friends at the School of Languages

My family

God

All the participants in the experiments.


KEY WORDS

CALL

Second language writing

Writing pedagogy

Feedback

Error correction

Checklists


ABSTRACT

The research reported on in this thesis forms part of the foundation of a bigger research project in which an attempt is made to provide better, faster and more efficient feedback on student writing.

The introduction presents the localised and international context of the study, and discusses some of the problems experienced with feedback practice in general. The introduction also gives a preview of the intended practical implementation of the research reported on in this thesis.

From there, the thesis is presented in article format, with each article investigating and answering part of two main guiding questions. These questions are:

1. Does feedback on student writing work?

2. How can feedback on student writing be implemented as effectively as possible?

The abstracts for the five individual articles are as follows:

Article 1

Article 1 presents a rubric for the evaluation of Computer-Assisted Language Learning (CALL) software based on international recommendations for effective CALL. The rubric is presented after a brief overview of the pedagogical and implementation fundamentals of CALL, and a discussion of what needs to be included in a needs analysis for CALL evaluation. It is then illustrated how the evaluation criteria in the rubric can be used in the design of a new CALL system.
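At its core, such a rubric amounts to scoring a software package against weighted criteria. As a purely hypothetical sketch of that idea (the criterion names and weights below are invented for illustration and are not taken from the article's actual rubric), the scoring could look like this:

```python
# Hypothetical sketch of rubric-based CALL software evaluation.
# Criteria and weights are illustrative only, not the article's rubric.

RUBRIC = {
    "pedagogical_soundness": 3,   # weight: how central the criterion is
    "curriculum_integration": 2,
    "ease_of_use": 1,
}

def score_software(ratings: dict) -> float:
    """Weighted average of per-criterion ratings (each on a 0-5 scale)."""
    total_weight = sum(RUBRIC.values())
    weighted = sum(RUBRIC[c] * ratings[c] for c in RUBRIC)
    return weighted / total_weight

# Rate a candidate package on the rubric's criteria.
package_a = {"pedagogical_soundness": 4, "curriculum_integration": 3, "ease_of_use": 5}
print(round(score_software(package_a), 2))  # → 3.83
```

The same weighted criteria can serve double duty, as the article argues: as an evaluation instrument for existing software, or as a design checklist for new software.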

Article 2

Providing feedback on student writing is a much-debated topic. One group of researchers argues that it is ineffective and another group remains convinced that it is effective, while at ground level teachers and lecturers simply carry on “marking” texts. The author of this article contends that both arguments have valid contributions to make and uses the arguments both for and against feedback to create a checklist for effective feedback practice. Adhering to this checklist should counter most of the arguments against feedback while supporting and improving the positive arguments in favour of feedback.

Article 3

This article reports on an experiment which tested how effectively standardised feedback could be used when marking L2 student writing. The experiment was conducted using a custom-programmed software tool and a set of standardised feedback comments. The results of the experiment show that standardised feedback can, to a degree, be used consistently and effectively, even though some refinements are still needed. Using standardised feedback in a standard marking environment can assist markers in raising their awareness of errors and in identifying more accurately where students lack knowledge. With some refinements, it may also be possible to speed up the marking process.
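In computational terms, standardised feedback amounts to a fixed set of reusable comment "tags" that markers attach to spans of the student's text. The sketch below illustrates that idea in miniature; the tag codes and comment wordings are invented for the example and are not the thesis's actual tag set (an extract of which appears in an addendum to this article):

```python
# Minimal sketch: standardised feedback tags attached to text spans.
# The tag codes and comment wordings here are invented for illustration.

from dataclasses import dataclass

TAG_SET = {
    "SVA": "Subject-verb agreement error.",
    "ART": "Article (a/an/the) missing or incorrect.",
    "SP":  "Spelling error.",
}

@dataclass
class Annotation:
    start: int      # character offset in the student text
    end: int
    tag: str        # key into TAG_SET

def render_feedback(text: str, annotations: list) -> list:
    """Pair each marked span with its standardised comment."""
    return [(text[a.start:a.end], TAG_SET[a.tag]) for a in annotations]

essay = "The student write good."
marks = [Annotation(12, 17, "SVA")]
print(render_feedback(essay, marks))  # → [('write', 'Subject-verb agreement error.')]
```

Because every marker draws comments from the same fixed set, tag frequencies can later be counted and compared across markers, which is exactly the kind of consistency analysis the article reports.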


Article 4

This article describes an experiment in which Boolean feedback (a kind of checklist) was used to provide feedback on the paragraph structures of first-year students in an academic literacy course. The major problems with feedback on L2 writing are introduced and it is established why a focus on paragraph structures in particular is of importance.

The experiment conducted was a two-draft assignment in which three different kinds of feedback (technique A: handwritten comments; technique B: consciousness raising through generalised Boolean feedback; and technique C: specific Boolean feedback) were presented to three different groups of students. The results indicate that specific Boolean feedback is more effective than the other two techniques, partly because a higher proportion of the instances of negative feedback on the first draft were corrected in the second draft (improvements), but more importantly because far fewer of the changes made during revision attracted negative feedback on the second draft (regressions). With non-specific feedback, almost as many regressions occurred as improvements. In combination with the automatic analytical techniques made possible by software, the results of this study make a case for using such checklists to give feedback on student writing.
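The improvement/regression comparison can be made concrete in a few lines. Assuming each checklist item yields a yes/no (Boolean) judgement on each draft, an improvement is a "no" that becomes a "yes" between drafts, and a regression is a "yes" that becomes a "no" (the item names below are invented for illustration):

```python
# Sketch: counting improvements and regressions between two drafts,
# given Boolean (yes/no) checklist judgements per item. Illustrative only.

def compare_drafts(draft1: dict, draft2: dict):
    """Return (improvements, regressions) across shared checklist items."""
    improvements = sum(1 for k in draft1 if not draft1[k] and draft2[k])
    regressions = sum(1 for k in draft1 if draft1[k] and not draft2[k])
    return improvements, regressions

first = {"has_topic_sentence": False, "single_main_idea": True, "has_support": False}
second = {"has_topic_sentence": True, "single_main_idea": True, "has_support": False}
print(compare_drafts(first, second))  # → (1, 0): one item fixed, none regressed
```

Aggregating these counts per feedback technique gives exactly the kind of improvement and regression rates on which the article's comparison of techniques rests.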

Article 5

This article describes an experiment in which a series of statements answerable simply with yes or no (labelled Boolean feedback) was used to provide feedback on the introductions, conclusions and paragraph structures of student texts. A write-rewrite assignment (with the same structure as in article 4) was used and the quality of the student revisions was evaluated. The results indicate that the students who received Boolean feedback showed greater improvement and fewer regressions than students who received feedback using the traditional method.

The conclusion provides a brief summary as well as a preview of the extensive future research possibilities opened up by this project.


OPSOMMING

Die navorsing waaroor in hierdie tesis verslag gedoen word, vorm deel van die fondasie van ʼn heelwat groter navorsingsprojek. Hierdie projek het ten doel om beter, vinniger en meer effektiewe terugvoer op studente se skryfwerk te lewer.

In die inleiding van die tesis word die plaaslike en internasionale konteks van die studie uitgestip, sowel as ʼn aantal probleme rakende terugvoer op skryfwerk. Die inleiding gee ook ʼn vooruitskouing van die praktiese implementering van die navorsing waaroor daar in die tesis verslag gelewer word.

Die tesis word in artikelformaat aangebied, met elke artikel wat ʼn deel van die rigtende vrae ondersoek en beantwoord. Die twee vrae is:

1. Werk terugvoer op studente se skryfwerk?

2. Hoe kan terugvoer op studente se skryfwerk meer effektief geïmplementeer word?

Die opsommings van die vyf individuele artikels is as volg:

Artikel 1

Artikel 1 stel ʼn rubriek bekend wat gebruik kan word vir die evaluering van sagteware-pakkette vir rekenaargesteunde-taalonderrig, oftewel “Computer-Assisted Language Learning” (CALL). ʼn Kort oorsig oor die fundamentele pedagogiese beginsels van CALL word hier voorsien, sowel as die implementeringsbeginsels daarvan. Die behoeftebepaling wat met die implementering van die nuwe CALL gepaard moet gaan word ook bespreek. Die rubriek word dan bekend gestel en daar word geïllustreer hoe dieselfde rubriek gebruik kan word wanneer oorweging geskenk word aan ʼn nuwe sagteware-pakket se ontwerp.

Artikel 2

Die lewering van terugvoer op studente se skryfwerk is ʼn veelbesproke onderwerp. Een groep navorsers beweer dit is oneffektief; ʼn ander groep navorsers beweer dit is funksioneel, en op grondvlak gaan onderwysers en dosente bloot voort om tekste na te sien. Die skrywer argumenteer dat sekere aspekte van beide argumente geldig is, en as gevolg hiervan word insigte van beide argumente in hierdie artikel gebruik om ʼn oorsiglysie op te stel wat effektiewe terugvoer definieer. Deur te hou by die vereistes van hierdie oorsiglysie sal die meeste van die argumente wat teen terugvoer gemaak word teengewerk word, terwyl die positiewe aspekte met betrekking tot die voorsiening van terugvoer versterk en ondersteun sal word.

Artikel 3

Artikel 3 rapporteer oor ʼn eksperiment waarin getoets word hoe effektief gestandaardiseerde terugvoer gebruik kan word wanneer tweedetaalstudente se skryfwerk nagesien word. Die eksperiment is uitgevoer deur van ʼn pasgemaakte sagtewarepakket gebruik te maak, sowel as ʼn stel voorafvervaardigde kommentaar. Die resultate van die eksperiment bewys dat gestandaardiseerde terugvoer tot ʼn mate konsekwent en effektief gebruik kan word, alhoewel sekere afronding steeds nodig is. Die gebruik van gestandaardiseerde terugvoer in ʼn standaard-nasienomgewing kan die nasieners help om hul bewustheid van foute te verhoog en om meer akkuraat te identifiseer waar studente se kennis ontbreek. Die tegniek kan ook die nasienproses versnel.

Artikel 4

Die artikel beskryf ʼn eksperiment waarin Booleaanse terugvoer (ʼn soort oorsiglysie) gebruik is om terugvoer te lewer op die paragraafstrukture van eerstejaarstudente in ʼn module van Akademiese Geletterdheid. Die grootste probleme wat ten opsigte van terugvoer op tweedetaal-skryfwerk ervaar word, word uitgewys, waarna die fokus op paragraafstrukture regverdig word.

Die eksperiment was ʼn werksopdrag in twee weergawes, waar drie soorte terugvoer gebruik is vir drie verskillende groepe studente. Die drie soorte terugvoer is handgeskrewe kommentaar, bewusmakingskommentaar deur algemene Booleaanse terugvoer, en spesifieke Booleaanse terugvoer. Die resultate toon dat spesifieke Booleaanse kommentaar meer effektief is as die ander twee tegnieke, omdat ʼn groter gedeelte van die negatiewe kommentaar op die eerste weergawes van die tekste ná die tweede weergawes gekorrigeer is. ’n Belangriker aspek is egter dat die hersiene weergawes minder regressie toon. Vir algemene kommentaar was daar amper net soveel regressie as verbetering. In kombinasie met outomatiese analitiese tegnieke wat moontlik gemaak word deur sagteware, ondersteun hierdie eksperiment die stelling dat sulke oorsiglysies gebruik kan word om effektiewe terugvoer te lewer op studente se skryfwerk.

Artikel 5

Hierdie artikel beskryf ʼn eksperiment waarin ʼn aantal stellings, wat beantwoord kan word met “ja” of “nee” (genaamd Booleaanse terugvoer) gebruik is om terugvoer te lewer oor die inleiding, slot en paragraafstrukture van studente se tekste. ʼn Skryf-herskryf-oefening is gebruik en die kwaliteit van die studente se hersiene weergawes is geëvalueer. Die resultate dui daarop dat die studente wat Booleaanse terugvoer ontvang het, beter kwaliteit hersiene weergawes kon lewer en minder regressie in hulle skryfwerk getoon het as studente wat op die tradisionele wyse terugvoer ontvang het.

Die tesis se gevolgtrekking bied ʼn kort oorsig oor die bevindinge van die navorsing, sowel as ʼn oorsig oor die verdere navorsingsmoontlikhede wat deur die studie moontlik gemaak word.


Contents

ACKNOWLEDGEMENTS ... I

KEY WORDS ... II

ABSTRACT ... III

OPSOMMING ... V

LIST OF FIGURES ... XIII

LIST OF TABLES ... XIV

CHAPTER 1

INTRODUCTION ... 1

1.1 BACKGROUND ... 1

1.2 WHAT SEEMS TO BE THE PROBLEM: GLOBAL CONTEXT? ... 2

1.3 SYNTHESISING FEEDBACK TECHNIQUES ... 4

1.4 LOCALISED CONTEXT OF THIS STUDY ... 5

1.5 RESEARCH QUESTIONS ... 6

1.6 AIMS OF THE STUDY ... 6

CHAPTER 2

OVERVIEW OF THE THESIS ... 8

2.1 ARTICLE 1: DESIGN CONSIDERATIONS FOR CALL BASED ON EVALUATION CRITERIA FOR CALL ... 8

2.1.1 Main question ... 8

2.1.2 Purpose of the article ... 8

2.1.3 Methodology ... 8

2.2 ARTICLE 2: MOVING TO MORE THAN EDITING – A CHECKLIST FOR EFFECTIVE FEEDBACK ... 8

2.2.1 Main question ... 8

2.2.2 Purpose of the article ... 9

2.2.3 Methodology ... 9

2.3 ARTICLE 3: MOVING TO MORE THAN EDITING – STANDARDISED FEEDBACK IN PRACTICE ... 10

2.3.1 Main question ... 10

2.3.2 Purpose of the article ... 10

2.3.3 Methodology ... 10

2.4 ARTICLE 4: YES/NO/MAYBE – A BOOLEAN ATTEMPT AT FEEDBACK ... 11

2.4.1 Main question ... 11

2.4.2 Purpose of the article ... 12

2.4.3 Methodology ... 12

2.5 ARTICLE 5: YES AGAIN – ANOTHER CASE FOR BOOLEAN FEEDBACK, OR “HOW TO MARK ESSAYS WITH STRATEGIC ‘YES’ AND ‘NO’” ... 13

2.5.1 Main question ... 13

2.5.2 Purpose of the article ... 13

2.5.3 Methodology ... 13

2.6 OUTLINE OF ARGUMENT ... 13


CHAPTER 3

ARTICLE 1 – DESIGN CONSIDERATIONS FOR CALL BASED ON EVALUATION CRITERIA FOR CALL ... 17

3.1 PRELUDE TO ARTICLE 1 ... 17

3.2 INTRODUCTION... 17

3.3 BASIC UNDERSTANDINGS AND TERMINOLOGY OF CALL FROM THE LITERATURE ... 19

3.4 CALL SYSTEMS NEED TO HAVE A SOLID EDUCATIONAL BASE AND BE INTEGRATED INTO THE WHOLE TEACHING CURRICULUM ... 20

3.5 TEACHING TOOL ... 21

3.6 COACHES VERSUS TOOLS ... 23

3.7 SPECIALISATION ... 24

3.8 POLICY ... 24

3.9 ENVIRONMENT ... 25

3.10 STAFF ... 26

3.11 RELEVANCE ... 27

3.12 NEEDS ANALYSIS ... 29

3.13 LECTURERS ... 29

3.14 STUDENT NEEDS ... 30

3.15 SYSTEMS ADMINISTRATOR AND IT PERSONNEL ... 30

3.15.1 Systems administrator ... 30

3.15.2 IT specialist ... 31

3.16 IT INFRASTRUCTURE AND SOFTWARE COSTS... 31

3.17 BUDGET ... 32

3.18 EVALUATING SOFTWARE ... 32

3.18.1 Content ... 33

3.18.2 Format ... 33

3.18.3 Operation ... 34

3.19 WRITING A PROGRAM EVALUATION RUBRIC ... 34

3.20 CAN EVALUATION CRITERIA BE USED IN THE CREATION OF CALL? ... 41

3.21 CONCLUSION ... 55

3.22 BIBLIOGRAPHY... 56

CHAPTER 4

ARTICLE 2 – MOVING TO MORE THAN EDITING: A CHECKLIST FOR EFFECTIVE FEEDBACK ... 59

4.1 PRELUDE TO ARTICLE 2 ... 59

4.2 INTRODUCTION... 59

4.3 METHODOLOGY... 60

4.3 WHAT EXACTLY IS FEEDBACK? ... 61

4.4 WHAT IS EFFECTIVE FEEDBACK PRACTICE? ... 64

4.4.1 Feedback should be clear and understandable ... 64

4.4.2 Feedback should be consistent, complete and thorough ... 65

4.4.3 Feedback should be correct ... 66

4.4.4 Feedback should indicate error status ... 66

4.4.5 Feedback should aim at improvement, not just correctness ... 67


4.4.7 Feedback should be purposeful ... 69

4.4.8 Feedback should place responsibility on the learner ... 69

4.4.9 Feedback should encourage communication and rewriting ... 70

4.4.10 Feedback should encourage language awareness ... 71

4.4.11 Feedback should be individualised... 72

4.4.12 Feedback should be time effective ... 72

4.4.13 Feedback should be searchable, archiveable or recordable and also allow for research ... 73

4.5 CONCLUSION ... 73

4.6 ACKNOWLEDGMENTS ... 74

4.7 BIBLIOGRAPHY... 74

CHAPTER 5

ARTICLE 3 – MOVING TO MORE THAN EDITING: STANDARDISED FEEDBACK IN PRACTICE ... 76

5.1 PRELUDE TO ARTICLE 3 ... 76

5.2 INTRODUCTION AND BACKGROUND TO THE PROJECT ... 77

5.3 WHAT IS EFFECTIVE FEEDBACK? ... 78

5.2 CAN FEEDBACK BE STANDARDISED? ... 79

5.4 METHODOLOGY... 80

5.5 RESULTS ... 83

5.5.1 Can standardised feedback be used consistently? ... 83

5.5.1.1 Marker tendencies ... 83

5.5.1.2 The markers’ personal favourites ... 85

5.5.1.3 Least used tags ... 87

5.5.2 Close analysis of the use of one tag ... 88

5.5.3 Doubles: More than one possible tag ... 89

5.5.4 Personal preferences ... 90

5.5.5 Incorrectly tagged... 90

5.5.6 Errors that were difficult to classify ... 90

5.5.7 Consistency: Marker comments ... 91

5.5.8 Ease of use for the marker ... 91

5.6 CONCLUSION ... 92

5.7 AUTHOR’S NOTE ... 93

5.8 BIBLIOGRAPHY... 93

5.12 ADDENDUM A: EXTRACT FROM TAG SET ... 95

5.13 ADDENDUM B: QUESTIONS TO MARKERS WHO USED THE MARKING SYSTEM ... 97

5.14 ADDENDUM C: EXAMPLES OF STUDENT WRITING ... 99

5.15 ADDENDUM D: PERSONAL FAVOURITES ... 101

CHAPTER 6

ARTICLE 4 – YES/NO/MAYBE: A BOOLEAN ATTEMPT AT FEEDBACK 104

6.1 PRELUDE TO ARTICLE 4 ... 104

6.2 INTRODUCTION... 106

6.3 PROBLEMS WITH FEEDBACK ... 106

6.3.1 Earlier attempts at improving the effectiveness of feedback ... 108


6.3.3 What are the qualities of effective paragraphs? ... 109

6.4 RESEARCH METHOD ... 111

6.4.1 Study population ... 111

6.4.2 Design of the experiment ... 111

6.4.3 Measuring improvement ... 113

6.4.4 Results ... 114

6.4.4.1 Hypothesis 1: Effectiveness of feedback ... 114

6.4.4.2 Hypothesis 2: Relative merit of individual feedback techniques ... 116

6.4.5 The effectiveness of specific Boolean feedback ... 118

6.4.6 Possible criticism ... 121

6.7 CONCLUSION ... 123

6.8 ACKNOWLEDGEMENTS ... 123

6.9 BIBLIOGRAPHY... 123

6.10 POSTSCRIPT TO ARTICLE 4 ... 126

6.10.1 Proposed implementation of radio buttons in MarkWrite: “Automatic discussion” ... 126

CHAPTER 7

YES AGAIN: ANOTHER CASE FOR BOOLEAN FEEDBACK, OR “HOW TO MARK ESSAYS WITH STRATEGIC ‘YES’ AND ‘NO’” ... 127

7.1 PRELUDE TO ARTICLE 5 ... 127

7.2 WHY THE FOCUS ON PARAGRAPHS, INTRODUCTIONS AND CONCLUSIONS ... 127

7.3 INTRODUCTION... 131

7.4 HUMAN FALLIBILITY AND CHECKLISTS ... 131

7.4.1 Marking scheme as a checklist? ... 132

7.5 WHY THE FOCUS ON INTRODUCTIONS AND CONCLUSIONS? ... 132

7.6 EFFECTIVE INTRODUCTIONS AND CONCLUSIONS ... 133

7.7 THE EXPERIMENT ... 134

7.7.1 The test group ... 134

7.7.2 Aim of the experiment ... 134

7.7.3 The structure of the experiment ... 134

7.7.4 Thesis ... 138

7.8 RESULTS ... 139

7.8.1 Improvement of marks after revision ... 139

7.9 CONTRIBUTION OF FEEDBACK CHECKLIST ... 140

7.10 RELATIONSHIP BETWEEN MARKS AND SECTIONS FROM FEEDBACK CHECKLIST ... 143

7.11 WHY DOES IT WORK? ... 144

7.12 MARKER COMMENTS... 146

7.13 PROPOSED IMPLEMENTATION ... 147

7.14 CONCLUSION AND FUTURE RESEARCH ... 149

7.15 BIBLIOGRAPHY... 149

CHAPTER 8

CONCLUSION ... 152

8.1 THE FINDINGS OF THIS STUDY ... 152


8.2.1 The contribution of the MarkWrite Project in national and international context 155

8.3 CURRENT STATE OF MARKWRITE ... 155

8.4 FUTURE DEVELOPMENTS AND FURTHER RESEARCH ... 156

8.4.1 Future developments: MarkWrite Marker and MarkWrite Student ... 156

8.4.2 Future development: innovation and technologies ... 156

8.4.2.1 Global development ... 157

8.4.2.2 Automatic error identification ... 158

8.4.2.3 Batch scanning... 158

8.4.2.4 Nosey thesaurus and phrase analyser ... 158

8.4.2.5 Style analyses ... 158

8.4.2.6 Text comparison and plagiarism detection ... 159

8.4.2.7 Additional radio buttons: Boolean argument analysis ... 159

8.4.2.8 Student prompts in MarkWrite Student ... 159

8.4.2.9 Customised remedial exercises ... 159

8.4.2.10 No exercise, no mark ... 159

8.4.2.11 Selective marking ... 160

8.4.2.12 Improved feedback categories and shared feedback sets ... 160

8.4.2.13 Assessment assistance: Radio buttons and marker assistance ... 160

8.4.2.14 Voice recognition ... 161

8.4.2.15 Audio feedback ... 161

8.4.2.16 Peer review ... 161

8.4.2.17 Order of development feedback ... 161

8.4.2.18 Free up lecturers’ time ... 161

8.4.2.19 Type/token ratio ... 162

8.4.2.20 Reading ease score ... 162

8.4.2.21 Level of importance of various feedback categories ... 162

8.4.2.22 Research on user friendliness ... 163

8.4.2.23 Pre-checks for students ... 163

8.4.2.24 Type/token ratio feedback ... 163

8.4.2.25 Style analyser ... 163

8.4.2.26 Teacher check-ups ... 163

8.4.2.27 Mobi sites and cellphone usage ... 164

8.4.2.28 Effect of reading comprehension on feedback interpretation ... 164

8.4.2.29 Screen capture ... 164

8.5 IMPLEMENTING MARKWRITE IN WRITING ACROSS THE CURRICULUM ... 164

8.6 PROBLEMS TO OVERCOME ... 165

8.6.1 Funding ... 165

8.6.2 Research volume and time ... 165

8.6.3 Theoretical/pedagogical inconsistencies in techniques ... 165

8.6.4 Standardisation is difficult ... 165

8.6.5 Technophobia ... 165

8.6.6 Lack of access to computers ... 165

8.6.7 Mindset shift ... 166

8.6.8 What about fully automatic marking? ... 166

8.6.9 Beware “work to rule” ... 167


8.7 FINAL REMARKS ... 167

8.8 BIBLIOGRAPHY ... 167


LIST OF FIGURES

Figure 2.1: Halliday and Matthiessen’s strata of language ... 11

Figure 4.1: The communication timeline of feedback on writing in a learning context ... 62

Figure 5.1: Example of hieroglyphic feedback ... 79

Figure 5.2: Essay marker screenshot ... 81

Figure 5.3: Student view illustrating how students will receive feedback ... 82

Figure 6.1: Example of marking grid ... 110

Figure 6.2: A typical student text ... 112

Figure 6.3: Percentage improvement per feedback technique ... 117

Figure 6.4: Percentage regression per feedback technique ... 118

Figure 7.1: Graphic illustration of cohesion and coherence in a written text ... 129

Figure 7.2: Original average marks out of ten for two groups of assignments, with average


LIST OF TABLES

Table 3.1: Design criteria taken into account during the planning and programming stages of

MarkWrite ... 41

Table 5.1: Top 20 tags used by the markers ... 84

Table 5.2: Tags that occurred in the markers’ personal favourites ... 86

Table 5.3: Least used tags ... 87

Table 6.1: Classification of data ... 113

Table 6.2: Differences in mean number of YES-scores for original and revised paragraphs per feedback technique... 115

Table 6.3: Classification of individual responses per marking technique ... 116

Table 6.4: The effectiveness of specific Boolean feedback ... 118

Table 7.1: Marking scheme ... 135

Table 7.2: Extract from raw data sheet ... 137

Table 7.3: Classification of the data ... 138

Table 7.4: Distribution of differences between original and revised versions for all responses to elements from the introduction checklist, with observed numbers followed in brackets by expected values ... 141

Table 7.5: Distribution of differences between original and revised versions for all responses to elements from the paragraph 1 checklist, with observed numbers followed in brackets by expected values ... 142

Table 7.6: Distribution of differences between original and revised versions for all responses to elements from the paragraph 2 checklist, with observed numbers followed in brackets by expected values ... 142

Table 7.7: Distribution of differences between original and revised versions for all responses to elements from the paragraph 3 checklist, with observed numbers followed in brackets by expected values ... 142

Table 7.8: Distribution of differences between original and revised versions for all responses to elements from the conclusion checklist, with observed numbers followed in brackets by expected values ... 142


CHAPTER 1

INTRODUCTION

1.1 Background

The work reported on in this thesis is part of a long-term project. The main aim of the project is to improve the efficiency and speed of providing feedback on student writing in a way that will assist lecturers, students and researchers. The research revolves around the development of the computerised marking interface, MarkWrite, in collaboration with the Centre for Text Technology (CTexT®) at the North-West University.

Research on this project commenced in 2004 with a Master’s dissertation (Louw, 2006) which investigated whether it would be possible to standardise written feedback on second-language student writing. The dissertation identified problems with the practice of feedback as elaborated on in international research, investigated the reasons for continuing with the practice, and postulated that, despite these problems, the principle of feedback is sound. It hypothesised that standardising feedback and incorporating it into a computerised marking system would improve the practice.

The dissertation then investigated in an experiment what markers typically focus on while providing feedback. Using that information, as well as a literature review, a list of standardised feedback “tags” was created and tested in a revision exercise with actual students. The results showed that students were more likely to improve texts marked with the standardised feedback than texts marked with normal, handwritten feedback. Both these groups also outperformed a “blank” group in which students were asked to identify and correct errors in a text that had no feedback marked on it. The study concluded that it was indeed possible to standardise written feedback on student writing, but the implementation thereof still had to be developed.

Feedback is, however, a two-way communicative process, and having established that standardised feedback could assist the student, the effect of standardised feedback on the other role-players in this communication process (the markers or lecturers) had to be investigated. In addition, implementing standardised feedback necessitates the use of computers and software, since some of its characteristics are difficult to achieve by hand. This moves the research into the realm of Computer-Assisted Language Learning (CALL), a research field in its own right. Furthermore, the standardised feedback was not effective in all areas, as students experienced difficulty revising the higher-order (structural) elements of their texts.

This thesis therefore continues the research and development initiated in 2004. It refines and implements some of the findings from the previous study, by investigating lecturer behaviour when using standardised feedback, piloting a new technique for providing feedback on the structural elements of texts, and establishing whether the whole project adheres to best practice in CALL.


Firstly, the qualities of effective feedback, as established from international research, were reworked into a checklist which could be used to design and evaluate the effectiveness of any one specific feedback technique. Thereafter, an experiment was conducted to establish what markers typically focus on; in other words, the lecturer side of the two-way communication process was investigated. Based on problems identified in this experiment and those identified by Louw (2006), a subsequent experiment was conducted in which a new technique (aimed at structural qualities of texts) was tested. When positive results were obtained, a similar experiment aimed at further structural elements was conducted. At the same time, the software for the implementation of the standardised feedback was being programmed by the Centre for Text Technology (CTexT®) at the Potchefstroom Campus of the North-West University. It was therefore necessary to ensure that the program met the requirements of best practice in CALL, and an investigation into this best practice was undertaken. The end result of the two studies was a software marking system (MarkWrite) incorporating a tag set of standardised feedback and a set of checklists for the effective marking of introductions, conclusions and paragraph structures. Both the tag set and the set of checklists can also be implemented effectively on their own, in a more limited fashion.

1.2 What seems to be the problem: global context?

There are many problems with current feedback practice, not least of which is the question “what exactly is the problem?” Writing is a very complex human action and research on feedback specifically has pointed out numerous problems. This is exactly what the problem is: there are many different problems, compounded by many different variables. Ferris (2004) claims that research on feedback on student writing suffers from a lack of consistency in both research methodologies and findings. The conclusions reached to date by various researchers do not agree (for example the “Grammar Correction Debate” discussed in the Journal of Second Language Writing by Truscott, 1996; Chandler, 2009 and Ferris, 2004). As early as 1985, Zamel (in a much-quoted paper) bemoaned feedback practice with statements such as:

ESL writing teachers misread student texts, are inconsistent in their reactions, make arbitrary corrections, write contradictory comments, provide vague prescriptions, impose abstract rules and standards, respond to texts as fixed and final products, and rarely make content-specific comments or offer specific strategies for revising the text. What is particularly striking about these ESL teachers’ responses, however, is that the teachers overwhelmingly view themselves as language teachers rather than writing teachers (Zamel, 1985:86).

While attempts at improvements have been made, the problem (as stated above) is compounded by an astounding number of variables. Among these is the difference between first-language writers and second-language writers. This difference is considered to be so profound that a whole journal is devoted to the teaching of writing to L2 students – the Journal of Second Language Writing. Other variables include individual student preferences and learning styles, student differences across cultures, the influence of language acquisition on writing, and many more, each of which can be considered a “problem” based on the effect it has on the efficiency of feedback.

In addition to the above variables, there are practical matters of concern as well. In some of the classes taught by the author, there were up to 450 students. While marking assistants were available in some cases, the time needed for marking assignments in this situation was still excessive. For example, marking a single 500-word essay assignment takes about five minutes (if done fairly superficially). That amounts to 2 250 minutes (almost 38 hours) to complete the marking of the whole class’s assignments, or practically a whole working week used up. More efficient marking methods need to be found, if not to save time outright, then at least to make better use of the time spent.
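The arithmetic behind that estimate is easily checked:

```python
# Verify the marking-time estimate: 450 essays at 5 minutes each.
students = 450
minutes_per_essay = 5

total_minutes = students * minutes_per_essay
total_hours = total_minutes / 60
print(total_minutes, total_hours)  # → 2250 37.5 (i.e. almost 38 hours)
```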

Much research has gone into the teaching of writing to second-language students of English (for example Kroll, 2003). Different techniques are used in this research process, for example the process approach to writing (Krapels, 1990), error analysis (Ellis, 1996:48; James, 1997) and corpus linguistics (Granger, 2002; Wible, Kuo, Chien, Liu & Tsao, 2001). It is rare to find a study that attempts to incorporate elements from the different techniques into one approach, and rarer still to find one that attempts to implement the findings from all the different methods in practice. The aim of the present study is to contribute to the body of knowledge by utilising insights gained from Computer-Assisted Language Learning (CALL), the process approach to writing, writing across the curriculum, error analysis, academic literacy and corpus linguistics, and implementing them practically.

Since the 1970s a movement called “Writing across the Curriculum” has been gaining ground. Its main aim is to promote general as well as discipline-specific learning through writing (Deng, 2009). Students need to be reminded of the importance of accurate and effective writing throughout their entire education (Snively, Freeman, & Prentice, 2006, quoted by Deng, 2009). Experience has taught us that it is not possible to address students’ lack of proficiency in writing adequately in one or two semesters of writing instruction, since writing proficiency needs time to develop and students need to be able to practise writing often (Deng, 2009). Students need to receive feedback on their writing so that they know what to improve, but the problems with feedback are numerous (cf. Ferris, 2002; Hyland, 2003 and 1998; Krapels, 1990; and Louw, 2006).

In addition, there has been an ongoing global debate since the 1980s about whether or not feedback is effective – i.e. does feedback lead to a demonstrable long-term improvement in student writing? The researchers participating in this debate are once again those involved in the Grammar Correction Debate mentioned above, now joined by Zamel (1985) and Askew and Lodge (2001). In Louw (2006) this debate is dismissed as irrelevant to the MarkWrite project since the practice of providing feedback is firmly established. The practice will not disappear, because society, lecturers and students expect it, and the broad definition of feedback and error in Louw (2006) implies that the mere act of indicating improvement or assigning a mark is feedback in itself.

While Truscott (2007) argues that it is bad practice to provide feedback simply because learners expect it, others claim that it is indeed necessary to provide learners with what they think they need as well. Feedback is, however, not provided merely because students want it; it is also provided for the following reasons:

1. The students expect feedback (Chandler, 2009: 58).

2. Feedback is an old and established technique of teaching (although the practical application thereof will affect the effectiveness of the pedagogy).

3. Feedback is a communication process indicating to the writer how a reader interprets his or her text (Askew & Lodge, 2000).

4. Feedback can be effective, although research (e.g. Truscott & Yi-Ping Hsu, 2008) has also illustrated that this is not always the case.

Although feedback has not always been found to be effective, it is still expected and can still have advantages. While a few variables and problems have been mentioned in passing in the discussion so far, it is not the purpose of this introduction to discuss the totality of feedback problems in detail. Rather, the problems with feedback relevant to this study are identified as follows:

1. The lack of consistency in technique and error identification by markers.

2. Incorrect focus by markers.

3. Unclear comments by markers.

4. Students’ inability to understand and use feedback independently.

5. The amount of time it takes lecturers to comment effectively on students’ texts.

6. Lecturers are not always consciously aware of how to provide students with effective writing pedagogy through feedback. This is especially relevant in content subjects where the lecturers are not trained in writing.

(Kasanga, 2004; Louw, 2006; Spencer, 1998; Deng, 2009.)

1.3 Synthesising feedback techniques

Many different techniques have been developed in attempts to improve feedback, enhance the learning gained from it, involve students more, speed up language acquisition and so forth. Some of these techniques are selective marking, audio feedback, feedback conferences, marking codes (correction codes), peer review and writing seminars. All of these have their advantages and disadvantages, but it is outside the scope of this thesis to discuss them in detail. What this thesis does is borrow some aspects of these techniques in order to improve feedback. If any of these techniques has some positive aspect (read: “it works”), it should not be neglected, and a way should be found to implement it consistently so as to gain the maximum benefit from it.

To establish what is considered the most effective way to provide feedback, this thesis identifies the best and worst aspects of feedback. In Chapter 1 these aspects are used to construct a checklist for effective feedback. It is argued later in the thesis that it is not possible to adhere to this checklist using conventional feedback techniques. The only way to provide feedback as effectively as prescribed is to use computer applications to compensate for human shortcomings.

1.4 Localised context of this study

Louw (2006) argues that by combining CALL with insights from research into feedback, the teaching of writing can be much more effective. He points out that this can be achieved by standardising feedback comments to an extent, and implementing this feedback in a computerised marking interface. He tested the first assumption and found that standardised feedback did indeed lead to greater improvement than traditional marking techniques when marking surface level errors. However, the standardised feedback tags used for issues of coherence, paragraph structure and textual cohesion were less effective and needed revision and re-evaluation.

For the experiment mentioned in Louw (2007), a first version of the electronic marking system was created. The first version was not user friendly and contained too many programming errors to be useful, so the Centre for Text Technology (CTexT®) at the North-West University, Potchefstroom Campus, commenced with programming the newer versions.

The ultimate aim of MarkWrite is to have a student side (MarkWrite Student) and a marker side (MarkWrite Marker). The student side of the marking system will be a pre-warning system where students receive automated feedback on features which the computer can reliably identify, such as spelling errors, commonly confused words, incorrect use of fixed expressions, and a host of other text features (cf. an early attempt by Trushkina, 2006). These still need to be developed or researched and fall outside the scope of this thesis. They are merely mentioned as an indication of the scope of the MarkWrite project.

MarkWrite Marker is the focus of the current project (this PhD), and this study views feedback on student writing as the intersection point between knowledge from the different approaches used in the past to study this phenomenon. Weideman (2007) refers to the responsibility of applied linguistics in this regard, in which practical implementation and testing of solutions are attempted. The intention of testing and implementation is to alleviate real-world problems in society. The focus of this study (and of the end product, MarkWrite Marker) is exactly that – testing solutions and applying them practically to real-world problems.

Before continuing, it should be noted that the notion of marking and evaluating texts by hand is not an outdated concept. Electronic assessment and rating tools such as Criterion and E-Rater from ETS, as well as commercial products such as WhiteSmoke and other advanced grammar and style checkers, have become available in the last 20 years. These products are not yet without problems, and it may still be a number of years before the human marker is replaced by a fully automatic system.

Apart from being a practical solution, MarkWrite is also intended to be a research tool. The system contains tracking and other features which can be used to great effect for research.


It is long overdue that the immense amount of effort that goes into the marking of student texts be used for other purposes as well. When marking student texts, the marker is in fact doing work which is very similar to that of a corpus annotator annotating a corpus text. Techniques used in MarkWrite, such as radio button marking and assessment, can also generate vast amounts of data which could all be used for further research (see the work done by Wible et al., 2001).

1.5 Research questions

With this background in mind, the research questions of this thesis are:

1. How does one evaluate a Computer-Assisted Language Learning (CALL) software package, and does MarkWrite qualify as effective CALL software?

2. What are the qualities of effective feedback on student texts?

3. What do lecturers focus on when marking student texts?

4. Can Boolean feedback (also called “radio button feedback” in this thesis) be used to provide useable feedback on paragraph structures quickly and efficiently? If the technique works, why does it work?

5. Can radio buttons be used to provide useable feedback on introductions, conclusions and overall textual coherence quickly and efficiently? If so, why?

1.6 Aims of the study

The aims of the study are to:

1. Establish a metric which can be used for the evaluation of CALL software to justify MarkWrite as acceptable CALL software.

2. Establish the qualities of effective feedback in such a way that it can be used to evaluate the effectiveness of feedback practices in student writing.

3. Determine what lecturers choose to focus on when marking student texts while using standardised feedback.

4. Evaluate the effectiveness of Boolean feedback on paragraph structure when students revise texts, and attempt to explain the findings.

5. Evaluate the effectiveness of Boolean feedback on introductions, conclusions and overall textual coherence, when students revise texts, and attempt to explain the findings.

This research project aims to demonstrate that it is possible to integrate and practically implement insights from writing pedagogy from a variety of approaches in a way that will benefit both student and teacher and will solve (or at the very least, alleviate) some of the problems associated with the teaching of writing to students.

The research revolves around the partial creation and partial testing of MarkWrite. As such it forms only a part of MarkWrite and should not be seen in isolation, while at the same time most of the techniques can easily be applied in other teaching contexts without the use of a dedicated software system.


CHAPTER 2

OVERVIEW OF THE THESIS

The study is structured by answering the different questions in article format, with the articles flanked by an introductory and a concluding section. Each of the articles is briefly outlined below. The methodology for each article is introduced, but also explained in more detail in the article itself.

2.1 Article 1: Design considerations for CALL based on evaluation criteria for CALL

2.1.1 Main question

Since the implementation of the research on feedback in this study revolves around the computerised marking system, it is necessary to establish what the qualities of an effective Computer-Assisted Language Learning (CALL) system are. This article poses the question: what are the qualities of a good CALL system? This is necessary to evaluate whether MarkWrite has the best possible chance of working in a language learning environment.

2.1.2 Purpose of the article

The main aim of the article is to establish which traits of CALL systems are internationally recommended, so that an evaluation rubric can be created to assess the suitability of a CALL system for a specific pedagogical purpose.

2.1.3 Methodology

This article comprises two steps. In step one, a literature study on the evaluation of CALL systems was conducted to establish what the qualities of good CALL systems are. The overlapping and vague definitions used in the international literature are synthesised into a practically useful evaluation rubric which can be used by language lecturers or systems administrators to evaluate whether a software system is suitable for their situation. International literature referred to includes the works of the highly acclaimed Graham Davies of EuroCall (Davies and Hewer, 2009; and Davies, Hamilton, Weideman, Gabel, Legenhausen, Meus and Myers, 2009) as well as Ngu and Rethinasamy (2006).

Using this rubric, MarkWrite was then evaluated as a CALL system in step two to demonstrate that it satisfies international requirements for effective CALL. Effective pedagogy was also identified as a vital part of CALL, which justifies determining the most effective methods of providing feedback if it is to be used in a CALL environment.

2.2 Article 2: Moving to more than editing – a checklist for effective feedback

2.2.1 Main question


Louw (2006) identified the qualities of effective feedback. A re-evaluation of these qualities in view of new, practical insights revealed that the qualities mentioned overlap and that the responsibility of the marker tends to be underestimated. The list of qualities was therefore tightened up and improved to provide a rubric for evaluating the effectiveness of a feedback technique.

2.2.2 Purpose of the article

Effective feedback has numerous characteristics. One technique of providing feedback on student writing may be more (or less) effective in a particular area of student writing than another technique. The differences between various techniques and theories are influenced by the practicalities of implementing the technique, such as the time available, the level of competence of the marker and the intended purpose of the feedback. Since the whole study attempts to create feedback which is clear, user-friendly, consistent and above all effective, it is necessary to establish what exactly is meant by effective feedback, in such a way that the relative effectiveness of a specific technique can be checked systematically. This article therefore serves as the central standard for evaluating the effectiveness of techniques experimented with in the later articles.

2.2.3 Methodology

This article is mainly a literature review, attempting to establish the status quo of research on feedback. It draws together information from various perspectives to establish what the qualities of effective feedback practice are. The various perspectives come from:

researchers who believe feedback is effective (e.g. Hyland, 2003:219 and Askew & Lodge 2000:2)

researchers who believe feedback is ineffective (e.g. Truscott, 1996)

researchers who try to establish which technique of feedback is more effective than others (Spencer, 1998)

researchers who attempt different kinds of writing instruction in order to facilitate better writing, such as the process approach, for example Matsuda (2003:21)

researchers who use technology in various formats to enhance their pedagogy (Wible, Kuo, Chien, Liu & Tsao, 2001).

The argument is that, since these researchers base their findings on systematic research, there have to be points where they agree on what effective feedback entails. By establishing what those criteria are, the creation of standardised, computerised feedback can be done more effectively.


2.3 Article 3: Moving to more than editing – standardised feedback in practice

2.3.1 Main question

Can standardised feedback work in practice?

An experiment was conducted by Louw (2006) which showed that standardised feedback is more effective than normal feedback when students revise their texts. However, the effectiveness of feedback is not measured just by whether students are able to use it, but also by the type of student problems on which feedback is presented by the lecturers. Excellent feedback on low-value problems is still largely a waste of lecturers’ and students’ time. What exactly do markers mark?

2.3.2 Purpose of the article

This article therefore attempts to establish what markers focus on when using standardised feedback, so that possible shortcomings in the feedback loop (from the lecturers’ side) can be identified and addressed. Truscott (1996) indicated that feedback is ineffective because markers are often not capable of providing effective feedback. If this is the case, what do markers focus on, and what do they ignore? How can markers be assisted to focus on the important aspects, or on the aspects they choose to ignore?

2.3.3 Methodology

Four markers were asked to mark 400 L2 essays from the Tswana Learner English Corpus and the Afrikaans Learner English Corpus, using the standardised feedback incorporated into the very first version of MarkWrite. The intensive nature of the marking made it impossible to use more than four markers. Markers were shown how the system worked but were not given any additional instruction on how to mark or on what to provide feedback. Two of the markers were experienced at marking student texts, while two were relatively inexperienced. Once the marking had been done, all the feedback was tabulated and analysed. This was done to investigate what the “natural tendencies” in marking are.
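The tabulation step described above amounts to tallying which standardised feedback tags each marker applied. A minimal sketch in Python follows; the marker names, tag labels and counts are invented examples, not the study's actual data or the MarkWrite export format.

```python
# Tally standardised feedback tags per marker. Marker names, tags and
# counts below are invented examples, not the study's actual data.
from collections import Counter

# (marker, feedback_tag) pairs as they might be exported from the interface
tags_applied = [
    ("marker1", "spelling"), ("marker1", "concord"), ("marker1", "spelling"),
    ("marker2", "paragraph structure"), ("marker2", "spelling"),
]

by_marker = {}
for marker, tag in tags_applied:
    by_marker.setdefault(marker, Counter())[tag] += 1

for marker, counts in sorted(by_marker.items()):
    print(marker, counts.most_common())
```

A table like this makes a marker's "natural tendencies" visible at a glance, for example a heavy skew towards surface-level tags.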

A second step in the experiment was to interview the markers to establish their thoughts on the marking process and the effectiveness of computerised marking. The results were analysed and the findings indicated that lecturers were not consciously aware of what they focused on, were not able to identify their students’ most frequently recurring errors, and tended to focus on surface level errors (Louw, 2007). The interviews also indicated problems with the marking interface which needed to be addressed, but confirmed that from a marker’s perspective, the marking of student texts on computer is a feasible option. These problems prompted the next stage of the investigation, where an attempt had to be made to direct markers’ attention to those aspects they tended to neglect.


2.4 Article 4: Yes/no/maybe – a Boolean attempt at feedback

2.4.1 Main question

Louw (2006) illustrated that feedback on surface structure elements is more effective than feedback on textual organisation. Feedback on matters of relatively lower importance was therefore more effective than feedback on matters of higher importance. This problem had to be addressed for standardised feedback to be considered effective overall. This raised the question of how to encourage or remind lecturers to focus on elements of higher importance, such as the effective structuring of a paragraph, or textual cohesion and coherence. Viewed in terms of Halliday and Matthiessen’s (2004:24) strata of language, feedback tends to be provided only in terms of expression, while both strata of content and the context are ignored. This oversight on the part of markers needed to be rectified.


The system used to provide feedback therefore needed to focus lecturers’ attention on the other important issues and enable them to comment on those issues in a standardised way without burdening them unnecessarily with more work. The feedback should at the same time be effective in that students are able to use it to improve their writing.

Radio buttons (a type of checklist) were considered a good way to assist lecturers, as they are one of the quickest and easiest ways to select options in a computer interface. If the qualities of a good paragraph can be captured by a finite list, a checklist of these statements can be used to provide feedback. The statements will cover enough of the characteristics of a good paragraph that feedback can be deduced from a combination of yes/no answers on these statements.
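The deduction step can be sketched as follows. This is only an illustration of the mechanism: the checklist statements and the canned comments attached to them are invented here and are not the actual MarkWrite items.

```python
# Boolean ("radio button") feedback: each checklist statement about a good
# paragraph is answered yes/no, and comments are deduced from the answers.
# Statements and comments are invented examples for illustration only.

PARAGRAPH_CHECKLIST = {
    "has_topic_sentence": "Add a topic sentence stating the paragraph's main idea.",
    "sentences_support_topic": "Remove or revise sentences that do not support the main idea.",
    "uses_linking_words": "Use linking words to connect your sentences.",
    "single_main_idea": "Split this paragraph: it contains more than one main idea.",
}

def deduce_feedback(answers):
    """Turn yes/no (True/False) answers into feedback comments.

    Only statements answered 'no' generate a comment, so a paragraph
    that satisfies every statement yields no comments at all.
    """
    return [comment for key, comment in PARAGRAPH_CHECKLIST.items()
            if not answers.get(key, True)]

# A marker's answers for one paragraph
answers = {
    "has_topic_sentence": True,
    "sentences_support_topic": False,
    "uses_linking_words": False,
    "single_main_idea": True,
}
for comment in deduce_feedback(answers):
    print(comment)
```

Because every marker selects from the same finite set of statements, the resulting feedback is standardised by construction, and the yes/no pattern itself can feed directly into assessment.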

The main question investigated in this article is to what extent radio buttons can be used as feedback on paragraph structure to: (a) assist lecturers to focus on the important aspects, (b) generate assessment assistance to lecturers, and (c) help learners to understand and to revise their writing more effectively.

2.4.2 Purpose of the article

The aim of the article is to test a computer-replicable technique for providing more standardised feedback which should solve some of the problems identified in Article 2, as well as some of the earlier established problems with feedback. The technique encompasses elements from text linguistics, feedback research, consciousness raising, assessment research, composition training and reading and writing research.

2.4.3 Methodology

First-year students of Academic Literacy were assigned a topic on which to write two paragraphs. The paragraphs of three different groups were marked using three different techniques:

1. Marking the text with a single set of radio buttons for all paragraphs.

2. Marking each paragraph with radio buttons.

3. Marking the text by hand using the technique referred to as “hieroglyphic marking” by Louw (2006), which is in essence marking with scribbled notes.

The students then had to revise their paragraphs. The original and the revised paragraphs were retyped, randomly shuffled and then marked again by four different markers using radio buttons. The results obtained were used to compare the original and revised versions of the paragraphs to establish how effective the various marking techniques were.

A pre-intervention and post-intervention comparison was done with χ² to determine whether a statistically significant improvement had been obtained, and whether any one of the techniques had led to improvement. The proposed technique was compared to practices adopted by lecturers in Academic Literacy.
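For readers unfamiliar with the test, the pre/post comparison can be sketched as a χ² test on a 2×2 contingency table of paragraphs meeting a criterion versus not, before and after the intervention. The counts below are invented for illustration, not the study's data, and the statistic is computed directly (without Yates' continuity correction) to show the arithmetic; in practice a statistics package such as scipy.stats.chi2_contingency would be used.

```python
# Chi-square test of independence on a 2x2 table [[a, b], [c, d]]:
# rows = pre/post intervention, columns = criterion met / not met.
# Counts are invented for illustration; no continuity correction applied.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    observed = [a, b, c, d]
    # Expected counts under independence: (row total * column total) / n
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts: 30/100 paragraphs met the criterion before feedback,
# 55/100 after revision.
stat = chi_square_2x2(30, 70, 55, 45)
# Critical value for df = 1 at alpha = 0.05 is 3.841
print(f"chi-square = {stat:.2f}, significant = {stat > 3.841}")
```

A statistic above the critical value indicates that the difference between the pre- and post-intervention proportions is unlikely to be due to chance.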


2.5 Article 5: Yes again – another case for Boolean feedback, or “how to mark essays with strategic ‘yes’ and ‘no’”

2.5.1 Main question

Article 4 above focuses on the use of Boolean feedback for improving and partially standardising feedback and assessment on paragraphing skills. A similar technique was used to provide feedback on overall textual cohesion, which includes feedback on introductions, conclusions and cohesion within the text. Once again, a pre-intervention and post-intervention comparison was done with χ² to establish the validity of the results.

The main question of this article is therefore to what extent radio buttons can be used to provide improved and more standardised feedback and assessment on overall text structures, introductions and conclusions, to the benefit of both the lecturer and the student. The argumentation structure of a text is in this sense operationalised as the relation between the introduction, the execution of stated intent and the conclusion.

2.5.2 Purpose of the article

The purpose of the article is to test and refine a series of radio button-based questions (Boolean feedback) which could be used to provide effective, relatively standard feedback on and assessment of text cohesion and structure. The article incorporates research findings from the overlapping areas of text linguistics (Halliday, 1976), composition training (Ferris, 2003; Gennrich, 1997), assessment (Bacha, 2001) and SLA (Katznelson, Perpignan and Rubin, 2001).

2.5.3 Methodology

Short essays by first-year students of Academic Literacy were marked using Boolean feedback dealing specifically with introductions, conclusions and textual coherence. The essays were then rewritten by the students. A control group received feedback using hieroglyphics. The results of the two groups were compared statistically.

2.6 Outline of argument

The argument in this thesis revolves around two main problems – what works as feedback, and how should one apply this knowledge? The two issues are investigated in article format, with some findings leading to new research questions.

Due to the article format of the thesis and the intended computerised consolidation of the techniques, considerable overlap occurs between the problem statements and literature reviews of the five articles. Article 1 is an attempt at situating the research within the broader context of CALL. Most of the overall literature review is covered by Articles 1 and 2. Article 2 (a checklist for effective feedback) then serves as a guiding principle for the design and investigation of the technique reported on in Articles 4 and 5. In other words, Article 2 refines the approach and establishes the benchmark for feedback practice. Article 2 identified additional problems which had to be covered by the technique reported on in Articles 4 and 5.


2.7 Bibliography for chapters 1 and 2¹

Askew, S. & Lodge, C. 2000. Gifts, ping-pong and loops – linking feedback and learning. In Askew, S., (ed.) Feedback for learning, pp. 1-17. London: RoutledgeFalmer.

Bacha, N. 2001. Writing evaluation: what can analytic versus holistic essay scoring tell us? System, 29:371-383.

Chandler, J. 2009. Response to Truscott. Journal of Second Language Writing, 18:57-58.

Davies, G. & Hewer, S. 2009. Introduction to new technologies and how they can contribute to language learning and teaching. Module 1.1 in Davies, G. (ed.) Information and communications technology for language teachers (ICT4LT), Slough, Thames Valley University [Online]. Available from: http://www.ict4lt.org/en/en_mod1-1.htm [Accessed 16 September 2009].

Davies, G., Hamilton, R., Weideman, B., Gabel, S., Legenhausen, L., Meus, V. & Myers, S. 2009. Managing a multimedia language centre. Module 3.1 in Davies, G. (ed.) Information and communications technology for language teachers (ICT4LT), Slough, Thames Valley University [Online]. Available from: http://www.ict4lt.org/en/en_mod3-1.htm [Accessed 4 August 2009].

Deng, X. 2009. The case for writing centres. English Language Teaching World Online. Website: http://blog.nus.edu.sg/eltwo/2009/08/12/the-case-for-writing-centres/html

Ellis, R. 1994. Understanding second language acquisition. Oxford: Oxford University Press.

Ferris, D.L. 2002. Treatment of error in second language student writing. Michigan: The University of Michigan Press.

Ferris, D.L. 2003. Response to student writing. Implications for second language students. London: Lawrence Erlbaum.

Ferris, D. 2004. The “Grammar Correction” debate in L2 writing: where are we, and where do we go from here? (And what do we do in the meantime …?) Journal of Second Language Writing, 13:49-62.

Gennrich, D. 1997. Teaching writing for 2005: new models of assessment and feedback. The NAETE Journal 12:16-31.

Granger, S. 2002. A bird’s-eye view of learner corpus research. In Granger, S. Hung, J and Petch-Tyson, S. (eds.) Computer learner corpora, second language acquisition and foreign language teaching. Amsterdam: John Benjamins Publishing Company.

Halliday, M.A.K. & Matthiessen, C.M.I.M. 2004. An introduction to functional grammar. Third ed. London: Arnold.

¹ The author inserted separate bibliographies for the separate sections of the thesis, since the article model has been followed. Each article already had its own bibliography. The additional sections (the two introductory chapters, the preludes to the articles and the postscripts to the articles) sometimes contain additional sources not in the articles, which necessitated separate bibliographies.


Halliday, M.A.K. 1976. Cohesion in English. Oxford: Oxford University Press.

Hattingh, K. 2009. The validation of a rating scale for the assessment of compositions of ESL. Potchefstroom: North-West University (PhD thesis).

Hyland, F. 2003. Focusing on form: student engagement with teacher feedback. System, 31:217-230.

Hyland, F. 1998. The impact of teacher written feedback on individual writers. Journal of Second Language Writing, 7:255-286.

James, C. 1997. Errors in language learning and use – exploring error analysis. Edinburgh Gate: Longman.

Katznelson, H., Perpignan, H. & Rubin, B. 2001. What develops along with the development of second language writing? Exploring the “by-products.” Journal of Second Language Writing, 10:141-159.

Kasanga, L. 2004. Students’ response to peer and teacher feedback in a first-year writing course. Journal for Language Teaching, 38(1):64-100.

Krapels, A.R. 1990. An overview of second language writing process research. In Kroll, B., ed. Second language writing – research insights for the classroom. Cambridge University Press. p. 11-23.

Kroll, B. ed. 2003. Exploring the dynamics of second language writing. Cambridge University Press.

Louw, H. 2006. Standardising written feedback on L2 student writing. North-West University (Potchefstroom Campus). MA dissertation.

Louw, H. 2007. Moving to more than editing: standardised feedback in practice. Ensovoort, 11(2):83-104.

Matsuda, P.K. 2003. Second language writing in the twentieth century: a situated historical perspective. In Kroll, B. ed. Exploring the dynamics of second language writing. Cambridge University Press.

Ngu, B.H. & Rethinasamy, S. 2006. Evaluating a CALL software on the learning of English prepositions. Computers & Education 47: 41–55.

Spencer, B. 1998. Responding to student writing: strategies for a distance-teaching context. Pretoria: University of South Africa. D.Litt thesis.

Truscott, J. 1996. The case against grammar correction in L2 writing classes. Language Learning, 46:327-369.

Trushkina, J. 2006. Automatic error detection in second language learners’ writing. Language Matters, 37(2): 125-140.


Weideman, A. 2007. A responsible agenda for applied linguistics: confessions of a philosopher. Per Linguam, 23(2): 29-53

Wible, D., Kuo, C., Chien, F., Liu, A. & Tsao, N. 2001. A Web-based EFL writing environment: integrating information for learners, teachers and researchers. Computers & Education, 37:297-315


CHAPTER 3

ARTICLE 1 – DESIGN CONSIDERATIONS FOR CALL BASED ON EVALUATION CRITERIA FOR CALL

3.1 Prelude to Article 1

While this thesis focuses first and foremost on the provision of effective feedback on student writing, the ultimate goal is to implement the knowledge gained in a computerised marking interface. This immediately broadens the study and places it within a much-researched field – Computer-Assisted Language Learning (CALL).

As part of the development and planning, an investigation was launched to answer the question: “What constitutes effective CALL?” The research followed an approach in which both positives and negatives associated with the concept of CALL were investigated in order to create a rubric which could be used to evaluate CALL software. Based on the common business practice of first finding out what the end user wants, and then designing the product, this set of evaluation considerations was then used as design considerations for CALL.

CALL is, however, an immense research field in itself, spanning numerous pedagogical and linguistic research fields. This article is only a very brief overview of the concepts.

Publication information for Article 1

The article was submitted for review and publication to the Journal for Language Teaching, but at the time of writing the reviewing process was still in progress. Minor editorial changes were necessary for adherence to the general format and layout of this thesis.

Abstract

This article presents a rubric for the evaluation of Computer-Assisted Language Learning (CALL) software based on international recommendations for effective CALL. After a brief overview of the pedagogical and implementation fundamentals of CALL, and a discussion of what should be included in a needs analysis for CALL evaluation, the rubric is presented. The author then illustrates how the evaluation criteria in the rubric can be used in the design of a new CALL system.

Keywords

Software evaluation, CALL, language laboratory, MarkWrite, writing across the curriculum, software development

3.2 Introduction

Computer-assisted language learning (CALL) came onto the scene of language pedagogy almost at the same time as the advent of the personal computer but, much as in the case of automatic translation, has not made as much headway as was once enthusiastically expected (Hémard, 2006). A disappointingly small number of lecturers espouse the use of CALL. Just as new language learning books are continually published, new computer programs for learning and coaching languages are also produced annually. While the criteria for the creation of a textbook are relatively fixed, the criteria for evaluating and designing CALL are not as cast in stone, largely because of the immense possibilities of the medium and the number of variables to take into account. Just as a book is written with the reader in mind, and evaluated accordingly, this article argues that a CALL system should be evaluated as well as designed with a set of detailed considerations in mind.

While many articles have been written on the evaluation of software (many of them are provided in the bibliography), none of them is complete enough to use in practice to make an informed decision on the best CALL package to purchase. Worse, none of them makes explicit the link between what is evaluated and what is designed. To put it bluntly: first, a complete evaluation grid could not be found for CALL software packages; secondly, it seems as if designers and evaluators work from two different rule books – designers design what they think is needed, whereas evaluators evaluate according to their needs. It is easy to identify a few reasons for this. For example, the complex nature of language pedagogy and the individual needs dictated by different contexts may explain why software is evaluated differently in specific situations.

The cost of both purchasing software (when compared to books) and developing software (also compared to books) simply increases the urgency of establishing a detailed set of considerations for evaluating and designing software.

The purpose of the article is twofold – to establish which considerations should be used to evaluate a CALL system and to illustrate how these considerations could be used in the design of a CALL system. The design of the new software program MarkWrite is used as an example to illustrate how these considerations may be used.

Although selecting a CALL system resembles selecting a new course book for a module in many respects, it is the more difficult task. One obvious difference is that a book can be paged through quickly to gain an overview of its contents, whereas buying, installing and evaluating a piece of software is a considerably bigger and costlier undertaking. The more interactive nature of software also introduces many variables that play no role in the evaluation of a textbook, such as graphics, sound and audiovisual material, navigation, system requirements and other computer-specific considerations. The long-term implications of the choice of software may also be more severe, since it is often an institution that has to buy software, whereas students buy their own new textbooks annually.

This article first surveys general considerations for the use of CALL from national and international literature, after which a set of evaluation criteria for CALL is proposed. These evaluation criteria are then applied as design criteria, using the MarkWrite software as an example.
