Linking Process to Product: The Relationship between Collaboration and Writing in CSCW

V.A.L. Spithorst, S2918412
Word count: 13,840
Prof. Dr. M.C. Michel, Dr. S. Jager
June 19, 2020

MA thesis, Department of Applied Linguistics
Faculty of Arts, Rijksuniversiteit Groningen


Acknowledgements

First and foremost, I would like to sincerely thank my supervisor Marije Michel and Laura Stiefenhöfer for guiding me through this thesis, answering the (lists of) questions I had during the process. Our Google Meet sessions have helped me tremendously, so that even during a pandemic I could always rely on your valuable advice. I also want to thank my dear friend Laura for helping me during the past months and our countless online writing sessions.

Finally, I am extremely grateful to Jesse for your continuous encouragement and support throughout this thesis. Thank you.

Valentine Spithorst, Groningen, June 19, 2020


List of abbreviations

Academic writing (AW)
Collaborative writing (CW)
Complexity, accuracy and fluency (CAF)
Computer-supported collaborative writing (CSCW)
English for Academic Purposes (EAP)
International English Language Testing System (IELTS)
Mean length of clauses (MLC)
Measure of textual lexical diversity (MTLD)
Noun phrase (NP)
Postgraduate (PG)
Second language (L2)
Undergraduate (UG)
Weighted clause ratio (WCR)


Abstract

Digitally mediated tools have opened new avenues for second language (L2) teaching, as their affordances may enhance L2 learning, for example in the context of computer-supported collaborative writing (CSCW). Previous literature has proposed that the collaborative processes inherent to CSCW benefit text quality, although questions remain regarding various factors that could mediate this association. The present study aims to provide further insights into the relationship between collaborative patterns and the quality of written products, as part of a larger study by Stiefenhöfer (2018). 128 international students conducted two data commentary tasks in pairs using Google Docs as part of an English for Academic Purposes (EAP) course. Collaboration was scrutinized using Meier et al.'s (2007) rating scheme to describe peer interaction, drawing on the chat logs and screen-recordings of the writing sessions. Text quality of the writing products was evaluated using several measures targeting complexity, accuracy and fluency (CAF), comparing the chat interactions and texts across different task orders and topics. Lastly, the relationship between collaborative processes and text quality was assessed through correlation analyses targeting chat functions and CAF measures. Findings reveal that pairs approach the writing tasks differently, illustrating a wide variety of collaborative processes and writing products. Task topic is found to affect several CAF measures, and task order influences chat interaction predominantly during the first task. No overall correlation was found between the overarching categories of collaborative processes and text quality, although pairs adopting a more collaborative approach were found to produce less accurate texts.

Keywords: L2 learning, CSCW, collaborative process, text quality, CAF


Table of contents

1. Literature review
1.1 Collaborative processes in CW
1.2 Technology and writing digitally: CSCW in the L2 classroom
1.3 Text quality in writing products through the lens of CAF
1.4 Differences in CSCW: the influences of learner and task
1.5 Language development: a product vs. process approach
1.6 Statement of purpose
2. Methodology
2.1 Participants
2.2 Materials
2.3 Procedures and design
2.4 Coding and analysis
3. Results
3.1 Collaborative processes
3.2 Text quality
3.3 Task differences
3.4 Relationship between writing process and product
4. Discussion
4.1 Collaborative patterns in CSCW
4.2 Text quality through CAF indices
4.3 The influences of task order and topic
4.4 Exploring the relationship between process and product
5. Conclusion
5.1 Results of the present study
5.2 Implications
5.3 Future directions
References
Appendices
Appendix A
Appendix B


Introduction

The use of technology to enhance learning has been studied extensively, as digital tools are increasingly implemented in the second language (L2) classroom to stimulate language development. Online learning has played a vital role in education in recent times, as the COVID-19 pandemic urged society to educate and be educated from home. Thus far, previous literature has explored the affordances that digitalized learning may provide, as technology allows for learning possibilities that were formerly inconceivable (Juffs & Friedline, 2014; Stockwell, 2012). Writing collaboratively through online platforms (e.g., Google Docs) allows pairs to pool their 'shared knowledge' (Swain, 2000; Storch, 2019), since students can discuss their ideas whilst performing the collaborative writing (CW) task. Moreover, CSCW was found to stimulate L2 development as it allows for immediate peer feedback (Elola & Oskoz, 2010). Empirical findings demonstrate collaboratively written texts to be of a higher quality, for example because these writings are shorter and more accurate (Wigglesworth & Storch, 2009; Fernández Dobao, 2012). Moreover, there is evidence that collaboratively written texts are more coherent and contain more appropriately selected content (Strobl, 2014; Abrams, 2019). However, questions remain regarding the relationship between collaborative processes in CW and text quality in writing products, and, consequently, how these findings can be used optimally in L2 teaching.

The current research is part of a larger study by Stiefenhöfer (2018), based on data collected within an EAP course. The main research purpose is to provide further insights into the relationship between collaborative processes in CW and text quality of writing products.

The following research foci have been incorporated to describe this relationship. Firstly, collaborative processes evolving from the two CSCW tasks are described, followed by an assessment of text quality through several CAF measures targeting syntactical and lexical complexity and accuracy. Subsequently, the influences of task differences with regards to task order and topic on both chat interaction and CAF indices are explored. Lastly, to examine the relationship between collaborative processes and text quality, a correlation analysis was implemented. Based on previous literature, expectations are that both the process of writing collaboratively and the performance of CW tasks show highly variable patterns.

This study begins by outlining previous literature, providing a theoretical overview with a focus on collaborative processes in CSCW and text quality, in addition to empirical findings thus far. The methodological approaches will be established, including information on models used to evaluate collaborative processes and the assessment of text quality through CAF measures. Subsequently, the results of quantitative and qualitative analyses are outlined regarding collaborative processes and text quality, followed by an examination of the effects of task differences. Results are presented and contextualized through the use of examples from the present study in addition to previous literature. Finally, limitations with regards to the operationalization of constructs and future directions are delineated.


1. Literature review

In order to explore computer-supported collaborative writing (CSCW) and the collaborative processes underlying final texts, several aspects should be considered to fully understand the role of technological tools in teaching practices within the L2 classroom. The following section elaborates on existing literature regarding the learning opportunities of digital tools, focusing on CSCW as a result of the growing use of technology in L2 classrooms. Secondly, collaborative processes are contextualized in academic writing (AW) tasks, distinguishing between writing processes and final texts. On the one hand, collaborative writing (CW) constitutes a process that is illustrated through collaborative patterns, which have been shown to affect writing products. On the other hand, from a product-oriented view, texts can be explored by means of complexity, accuracy and fluency (CAF) measures. Finally, the role of task order and topic for both collaborative processes and written texts is explored, followed by a statement of purpose outlining the research purposes and hypotheses.

1.1 Collaborative processes in CW

Writing collaboratively in CW tasks may provide learning opportunities to enhance L2 development due to the collaborative aspect of these tasks. Crucial to CW is that learners contribute to their writing products, in that they share "the responsibility for and the ownership of the entire text produced" (Storch, 2019, p. 40). Contrastingly, a cooperative approach to the writing task, in which there is little collaboration between peers, does not require this mutual authority. A shared responsibility promotes collaborative processes since learners are required to work together to find solutions to content and language-related problems encountered during the task. This enhances the "collaborative dialogue" (Swain, 2000), which subsequently aids in developing L2 proficiency through "languaging" (Swain, 2006). Put differently, the collaborative process as a result of CW stimulates the discussion of content or linguistically related topics in the target language and is therefore beneficial to the learners' L2 development.

Theoretical frames have been developed to analyze collaborative processes in CW.

Firstly, Storch’s model of “dyadic interaction” (2013) describes collaboration by means of four categories, as shown in Figure 1.

Figure 1. Storch’s model of dyadic interaction (2013, p. 62)

Horizontally, a distinction is made between the learners' equality in contribution to a CW task, while mutuality (i.e., the extent to which learners contribute to each other's writings) is demonstrated vertically in this model, creating four categories. Firstly, low equality and low mutuality describe a dominant/passive pattern, with one learner carrying predominant authority over the writing process, leaving the other participant with relatively little contribution to the writing process and product. Contrastingly, high equality and mutuality describe a collaborative approach to performing CW tasks, in which both learners participate actively in the writing process, sharing the responsibility by discussing task-related issues frequently. Thirdly, learners may participate highly equally whilst their mutuality is low, approaching the task cooperatively; distinctive to this approach is the limited degree to which learners engage with each other's work. Lastly, the expert/novice pattern describes low equality and high mutuality between learners, as they do not participate to a similar level. This pattern is characterized by one learner taking the role of a tutor, supporting their peer during the task (Storch, 2013).

Collaborative processes have been analyzed qualitatively using this model, although the complex nature of CW should be emphasized, as collaborative patterns may be influenced by various factors (Storch, 2013). Therefore, in addition to this model, a quantitative approach could be used to establish further insights into the quality of peer interaction during CW tasks (Strobl, 2015). In a study on learners of German as an L2, Strobl implemented a rating scheme originally designed by Meier, Spada and Rummel (2007) to reflect "the effects of evaluation support in addition to instructional measures for computer-supported interdisciplinary problem-solving" (Strobl, 2015, p. 196). This model was adapted to suit CSCW contexts (Kahrimanis et al., 2009), resulting in a scheme that describes collaborative patterns in a similar manner. Nevertheless, it remains based on theoretical findings and empirical observations, allowing for a qualitative analysis of collaborative patterns in CSCW tasks. In total, 8 chat dimensions were delineated to illustrate a range of topics inherent to CSCW, for instance regarding the communication flow or cooperative approaches between pairs, or the degree to which peers discuss task-related issues in terms of content or language pooling (Strobl, 2015). The performance of CSCW tasks in pairs or groups is analyzed through chat functions categorized into several phases to illustrate the writing process (see Table 1). For example, sustaining a mutual understanding of the task is expected to occur solely in the planning phase of the CSCW task, meaning that pairs focus on achieving a common baseline before starting their writing phase. Similarly, language pooling is claimed to arise in the translation and revision phases, although it is not expected to be found in other phases of the writing process (Strobl, 2015).

Table 1

Chat dimensions according to Strobl (2015, p. 200-201)

Chat dimension Phase

(1) Sustaining mutual understanding Planning

(2) Communication flow All

(3) Content pooling Planning, translating

(4) Language pooling Translating, revising

(5) Argumentation All

(6) Structuring All

(7) Cooperative approach All

(8) Individual orientation All

As can be seen in Table 1, the majority of chat dimensions are categorized as occurring in 'All' phases, meaning that these types of interaction emerge in various stages of the writing process. Contrastingly, sustaining mutual understanding and content or language-related pooling are limited to the planning, translating and revising stages of writing. However, as discussed by Michel et al. (2020), a difference in the dynamic development of phases in CSCW may occur depending on the learners' individual contexts. Proficiency level, for instance, was found to affect phases, in that high proficiency students are more variable in their writing activities, whereas lower proficiency learners demonstrate a more uniform approach to their writing process (Roca de Larios, Manchón, Murphy, & Marín, 2008). However, as this study was not conducted in a CSCW context, it is relevant to explore the effects of writing collaboratively on phases in the current study.

1.2 Technology and writing digitally: CSCW in the L2 classroom

The role of technology in L2 classrooms is becoming increasingly evident. For example, in the Netherlands, the National Expertise Centre for Curriculum Development (SLO, 2019) has emphasized the importance of digital literacy, promoting the use of technological resources in contemporary education. This development is reflected by a growing body of literature on the implementation of technology in L2 classrooms, which underlines the need for further research on the use and effects of digital tools in L2 teaching practices (Juffs & Friedline, 2014). Thus far, existing research has demonstrated a wide variety of ways of implementing digital tools in relation to their effect on L2 development (Stockwell, 2012), as technological appliances could facilitate learning opportunities that were previously inconceivable. To optimize the learning opportunities provided by CSCW tasks, a clear approach to and objective of the learning task is required. Elola and Oskoz (2017) have demonstrated the importance of understanding and implementing technological appliances appropriately, as the affordances evolving from these writing tools may only benefit L2 learning if applied correctly. Affordances can be defined as the opportunities supported by the learners' context, although it was claimed that these affordances, in order to benefit L2 development, should be sufficiently suitable for the learners' individual environments (Elola & Oskoz, 2017).

Technology in the L2 classroom has been described by means of a distinction between computer-as-tutor and computer-as-tool (Levy, 1997). Whereas computer-as-tutor implementations focus on the final assessment of the learners' product, with computers acting as a replacement of the teacher, computer-as-tool enhances L2 learning to a greater extent, adopting computers as tools to pursue learners' objectives (Stockwell, 2012). Furthermore, peer interaction through a computer-supported platform (e.g., in Google Docs) benefits L2 development since it enables learners to participate actively, emphasizing the discussion of linguistic constructs in the process (Isbell, 2018). Digital tools have been used to facilitate CW tasks requiring learners to write collaboratively, that is, in pairs or groups. Furthermore, CSCW provides learning opportunities in the target language (learning-to-write), whilst also enabling the learner to focus on writing proficiency in general (writing-to-learn). Academic writing (AW) tasks in which students are required to summarize or compare sources are located at the center point of these functions, thus serving L2 development favorably (Strobl, 2015). Moreover, CW tasks have been argued to enhance learning opportunities, promoting both content and language pooling between learners as a result of the peer interaction inherent to collaborative processes (Elola & Oskoz, 2017).

The social aspect of CSCW is favorable for L2 development since writing collaboratively through online communication allows for continuous peer interaction throughout the task. Elola and Oskoz (2010) argued that this collaborative aspect results in an increased focus on content compared to learners working individually. Additionally, they proposed that online interaction between peers requires learners to focus on linguistic forms from a more structured perspective, allowing for peer feedback simultaneously as well. The collaborative factor in CSCW stimulates peer interaction, which, in turn, allows for scaffolding. Scaffolding has been demonstrated to favor L2 development as a consequence of peer interaction and feedback regarding both content and linguistic aspects in the writing process (Li, 2018). In other words, reviewing and developing collaboratively written texts positively affects language development since it facilitates immediate peer feedback.

Empirical evidence suggests that learning opportunities in CSCW are closely tied to the advantages provided by technology in CW, since digital tools were found to enhance the collaborative process. CSCW tasks also represent everyday contexts more closely, as they reflect real-life situations by involving peer interaction (Wigglesworth & Storch, 2009). Therefore, peer interaction and collaborative processes can be analyzed to explore the relation between these factors and L2 development. Wigglesworth and Storch (2009) examined the effects of CSCW on language development, suggesting that peer interaction enhances the learners' ability to express their ideas by allowing for communication when working collaboratively. Furthermore, interactional processes promote language learning by stimulating "learning mechanisms" as facilitated by content and language pooling (Elola & Oskoz, 2017, p. 446).

Moreover, Wigglesworth and Storch (2009) found that the social aspect of writing collaboratively results in a higher quality of the produced texts (i.e., more accurate), enabling learners to acquire more knowledge throughout the task. However, due to the inherent complexity of CSCW, learning opportunities as a result of CW differ per individual context. Nevertheless, writing collaboratively was argued to promote language-related episodes (LREs, i.e., discussions of linguistic forms encountered in CSCW tasks). This, in turn, was argued to positively affect L2 development, as students creatively assess grammatical and lexical task-related issues by accessing the individual knowledge of both learners to collaboratively build their L2 competency (Fernández Dobao, 2012). However, empirical evidence underlines the importance of implementing CSCW tasks appropriately, as different factors have been found to affect the learning opportunities provided by the task. Group dynamics, for instance, may affect the learning outcomes (Hassaskhah & Mozaffari, 2015), as learners that are already acquainted with each other were found to adopt a more collaborative approach. Moreover, the extent to which supportive tools such as visualizations are provided can influence collaborative patterns in CSCW as well (Krishnan, Cusimano, Wang, & Yim, 2018). In this sense, offering students a list of commonly used sentences in academic English, as done in the present study, may enhance learning opportunities and aid L2 development.

1.3 Text quality in writing products through the lens of CAF

Quality of the writing product may be assessed through a range of CAF measures. Although these measures may be difficult to delineate when aiming to provide a clear description of each construct, complexity can be considered the most disputed index. Previous literature has adopted complexity measures to evaluate the "diversity" in language products (Michel, 2017, p. 50). Nevertheless, this term remains relatively unclear, as there are several manners of interpreting linguistic variation. Amongst others, the developmental, cognitive and linguistic aspects of complexity have been distinguished, the latter being the predominant focus of the majority of studies that have incorporated CAF measures (Michel, 2017). Accuracy is considered to be more straightforward and can be defined as the extent to which a text differs from the established norm (Michel, 2017). Accuracy has previously been operationalized through the use of rating scales to examine the learners' L2 errors (e.g., Kuiken & Vedder, 2008; Foster & Wigglesworth, 2016), as will be discussed subsequently. Fluency, on the other hand, is generally defined as the smoothness and pace of language production and has frequently been studied with regards to speaking competencies, operationalized by means of speech rate or hesitations (Michel, 2017). Therefore, analyzing fluency in writing products shows high levels of variation and may be inaccurate without the use of appropriate measurement tools, such as key-stroke logging (Skehan, 2009; Leijten & Van Waes, 2013).

Pallotti (2009) argues for using CAF to assess L2 competencies dynamically, since these measures provide insights into the highly variable process of language development. According to Wigglesworth and Storch (2009), this dynamic approach to evaluating students' learning abilities may be implemented to support the learner's L2 development. In contrast, traditional assessment tools determine the students' L2 competencies at a specific point in time, missing the learning opportunities that are facilitated by providing feedback throughout the writing process. However, it is important to note that learners should be familiar with the context of working collaboratively and being assessed through CAF measures (Wigglesworth & Storch, 2009).


Notwithstanding the aforementioned advantages CSCW tasks may provide, recent studies have shown ambiguous results as to whether writing collaboratively benefits text quality as measured by CAF. On the one hand, Elola and Oskoz (2010) did not find greater text quality in their study on CW using wikis, operationalizing the quality of writing products through CAF indices. However, they emphasize the fundamental distinction between writing tools, as the use of wikis may have resulted in different collaborative patterns compared to chat interaction through Google Docs. Furthermore, while Strobl (2014) and Abrams (2019) did not encounter significant differences in text quality regarding CAF measurements, they did observe that collaboratively written texts showed better internal organization, with more appropriately selected content and a more coherent narrative. On the other hand, Villarreal and Gil-Sarratea's (2019) study of students in secondary education found that collaboratively written text products are shorter, more accurate and slightly more complex grammatically and lexically.

1.3.1 Complexity.

As stated previously, complexity can be interpreted in various ways, which complicates the clarity of its measurement. The present study will thus define complexity as "the size, elaborateness, richness and diversity of the L2 performance" (Michel, 2017, p. 50). Several subtypes can be distinguished within this construct, among which syntactical and lexical complexity (Skehan, 2009). Both syntactical and lexical complexity target the learners' capabilities regarding linguistic diversity, providing insights into L2 quality on different levels as well. On the one hand, syntactical or structural complexity aims at illustrating language writing on a sentence level, whereas lexical complexity describes the learners' degree of variation on a word level (Bui & Skehan, 2018). Although uncertainty remains regarding the appropriate method to select complexity measures, several factors are valuable in assessing learners' L2 competencies, targeting a "multidimensional" technique in evaluating text quality (Norris & Ortega, 2009). Syntactical complexity can be measured using both general measures and more specific factors. Firstly, mean length of clause (MLC) was argued to accurately represent language development from a broader perspective whilst simultaneously allowing for a more specific analysis of the syntactical complexity of L2 production on a sentence level. Moreover, it was considered particularly valuable when analyzing relatively high-proficiency learners who are in the process of developing a more extensive and diverse L2 (Norris & Ortega, 2009). Nevertheless, it is important to note that, when implementing a complexity measure based on clausal units, careful consideration should be given to the definition of a clause, as this unit may appear ambiguous without careful delineation (Bulté & Housen, 2012).
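
To make the MLC index concrete, the sketch below shows how the ratio could be computed once word and clause counts are available; the counts used are illustrative only, and the study itself obtained MLC values from the web-based L2 Syntactic Complexity Analyzer (Lu, 2011).

```r
# Minimal illustration of mean length of clause (MLC): total number of words
# divided by total number of clauses. The counts below are invented for the
# example; in the study, MLC was computed by the L2 Syntactic Complexity Analyzer.
mlc <- function(n_words, n_clauses) n_words / n_clauses
mlc(250, 55)  # ~4.5, within the range of MLC values reported in Table 6
```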

Although L2 complexification may be illustrated using general complexity measures that focus on the learners' syntactical development, AW has been argued to challenge L2 learners in a different manner than informal speech does (see also Biber, 2006; Biber, Gray, & Poonpon, 2011). Thus, incorporating a syntactical complexity measure that specifically targets the assessment of AW skills is essential to provide an accurate illustration of text quality. Whereas previous research has underlined the elaborate and complex nature of both L2 speech production and the AW genre, these processes may be considered inherently distinct with regards to how they differ in complexity (Biber & Gray, 2010). More specifically, whereas L2 speech was found to be complex in that it requires an "elaborate" language production, AW challenges the learner to produce linguistic forms that are essentially "structurally compressed" (Biber & Gray, 2010, p. 3). Due to this intrinsic discrepancy between language production methods, complexity measurement tools reflect these linguistic characteristics as well. Biber and Gray (2010) have argued that AW is consolidated predominantly through the use of phrasal modifiers included in noun phrases (NPs). Moreover, the use of modifiers in NPs was argued to provide an accurate representation of L2 development in AW, as it largely depends on the inclusion of information which, in turn, is of great importance in this text genre. In other words, AW concerns a language register that is highly dense in information, presented in "structurally compressed" linguistic forms (Biber & Gray, 2010, p. 2).

In addition to analyzing syntactic complexity, lexical complexity indices provide further insights into L2 development in terms of the "density, diversity and sophistication" of the learners' vocabulary (Bulté & Housen, 2012, p. 28). Lexical complexity serves as a tool to describe L2 competencies by illustrating proficiency (Verspoor, Lowie, Chan, & Vahtrick, 2017), and is valuable in evaluating the text quality of intermediate to high proficiency L2 learners (Norris & Ortega, 2009). Lexical complexity in terms of diversity can be defined as the array of different words that an L2 learner uses in their text, in which a greater range of words corresponds to a higher lexical diversity (McCarthy & Jarvis, 2010). In their study, McCarthy and Jarvis (2010) validated the measure of textual lexical diversity (MTLD), concluding that, in contrast to other lexical diversity measures such as type-token ratio, MTLD does not vary as a consequence of text length.
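
For illustration, the sketch below outlines the core of the MTLD calculation as described by McCarthy and Jarvis (2010): the text is traversed while a running type-token ratio (TTR) is tracked, and every time the TTR drops below the threshold of 0.72 a 'factor' is counted; MTLD is the number of tokens divided by the number of factors, averaged over a forward and a backward pass. This is a simplified sketch, not the implementation used in the present study, which relied on the Coh-Metrix tool.

```r
# Simplified sketch of MTLD (McCarthy & Jarvis, 2010); 'tokens' is assumed to be
# a character vector of lower-cased words. The study itself used Coh-Metrix;
# this only illustrates the logic of the measure.
mtld_pass <- function(tokens, threshold = 0.72) {
  factors <- 0; types <- character(0); count <- 0
  for (tok in tokens) {
    count <- count + 1
    types <- union(types, tok)
    if (length(types) / count < threshold) {  # TTR dropped: one full factor completed
      factors <- factors + 1
      types <- character(0); count <- 0
    }
  }
  if (count > 0) {                            # partial factor for the remaining segment
    ttr <- length(types) / count
    factors <- factors + (1 - ttr) / (1 - threshold)
  }
  length(tokens) / factors                    # assumes at least one (partial) factor
}

mtld <- function(tokens) mean(c(mtld_pass(tokens), mtld_pass(rev(tokens))))
```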

Empirical research suggests that lexical complexity may be increased by providing the students with ideas to write about (Révész, Kourtali, & Mazgutova, 2017). In other words, offering students graphs or examples to base their writing upon yields a more cultivated vocabulary, as the presence of suggestions aids the development of linguistic complexity.

Moreover, the academic word list (Coxhead, 2000) has been used in earlier studies to determine the learners' lexical sophistication in the AW genre in particular. In a longitudinal study on the development of academic English in three students, Verspoor et al. (2017) found an increase in the use of academic words, providing an accurate indication of their proficiency level. Moreover, academic words, unique words and average word length were found to correlate, since words that are generally used less often tend to be more academic.
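
As a rough illustration of how lexical sophistication can be quantified with the academic word list, the sketch below computes the percentage of tokens in a text that appear on the list; it assumes the list is already loaded as a character vector and ignores the word-family matching a full analysis would require. The study itself used the web-based tool Vocabprofile (Cobb, 2003).

```r
# Hedged sketch: proportion of tokens that appear on the academic word list
# (Coxhead, 2000). 'awl' is assumed to be a character vector of list words;
# word-family matching and lemmatization are deliberately omitted here.
academic_word_share <- function(text, awl) {
  tokens <- unlist(strsplit(tolower(text), "[^a-z']+"))
  tokens <- tokens[nchar(tokens) > 0]
  100 * mean(tokens %in% awl)   # percentage of tokens that are academic words
}
```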

1.3.2 Accuracy.

Accuracy in written text is often defined as the number of errors that occur within different units of a text, or, in other words, the deviance of the produced writing from traditional norms (Michel, 2017). Tasks that allow for interaction between learners as well as pre-task planning, as in the present study, are generally assumed to positively affect both complexity and accuracy (Skehan, 2009). However, there is no straightforward consideration of what this norm entails, or to what extent errors that do not affect the understandability of a text should be treated as inaccuracies. Empirical research on the influence of collaboration on written texts has found CW to result in more accurate texts (Wigglesworth & Storch, 2009; Fernández Dobao, 2012), although more research is needed to further explore the relations between collaboration and writing. To allow for analyzing texts of varying lengths, ratios are used to provide valuable insights into text quality. The weighted clause ratio (WCR) allows for an in-depth representation of the learners' L2 development, as it requires a scaled classification of errors in clauses depending on the degree of comprehensibility (Foster & Wigglesworth, 2016). Foster and Wigglesworth found this measure to provide an enhanced overview of the degree to which clauses contain errors, since even relatively small errors that do not necessarily influence the texts' understandability become apparent. Empirical findings have validated WCR as an accurate illustration of written accuracy, although this was proposed to hold especially for lower proficiency learners. Moreover, further research incorporating WCR values is needed in order to substantiate the insights into text quality that WCR may provide (Evans, Hartshorn, Cox, & Martin de Jel, 2014).
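
To illustrate how such a ratio could be computed once clauses have been rated, the sketch below weights each clause by the severity of its errors and divides the summed weights by the number of clauses. The specific weights are assumptions made for illustration and are not necessarily those specified by Foster and Wigglesworth (2016); in the present study, WCR was assigned manually by two trained raters.

```r
# Hedged sketch of a weighted clause ratio (WCR): each clause receives a severity
# rating from the human coders; the weights below (1 for error-free down to 0.1
# for clauses whose errors impede comprehension) are illustrative assumptions.
wcr <- function(clause_ratings) {
  weights <- c(error_free = 1.0, minor = 0.8, moderate = 0.5, severe = 0.1)
  sum(weights[clause_ratings]) / length(clause_ratings)
}

# Example: eight clauses, mostly accurate, give a WCR of about 0.78,
# close to the mean values reported for the study's texts (~0.73-0.75).
wcr(c("error_free", "error_free", "minor", "error_free",
      "moderate", "error_free", "minor", "severe"))
```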


1.4 Differences in CSCW: the influences of learner and task

Various aspects may influence the execution of CW tasks, and subsequently L2 development, when incorporating CSCW tasks in teaching practices, emphasizing the highly variable nature of CSCW and the importance of contextual differences in the L2 classroom. Task differences have been studied in relation to their effect on L2 development in CSCW tasks, although no straightforward rationale can be established due to the variety in implementing technological appliances in different contexts. In a study on the effects of task design on vocabulary learning, Swain and Lapkin (2001) found that jigsaw tasks yielded texts of lower accuracy compared to dictogloss tasks. More specifically, learners who described a story collaboratively using various images committed more errors in the use of pronominal verbs. Furthermore, no disparity was encountered between the focus on language forms in the different tasks, although it was speculated that this may have occurred due to the invariable objective of the jigsaw and dictogloss tasks, as the aim in both was to produce written text (Swain & Lapkin, 2001). Contrastingly, when comparing the same task in two contexts (i.e., an ESL and an EFL learning environment), it was concluded that, overall, students start by clarifying the task, after which the execution and discussion of task-related issues is approached differently; this contrasts with the idea that tasks are performed similarly notwithstanding the learners' method of execution, as proposed by Storch and Sato (2020).

Task complexity may influence the collaborative processes in CSCW and the writing product. It was proposed that an increase in task complexity may result in writing products of greater complexity and accuracy, at the expense of fluency (Révész, 2011). However, not only the content of a task can affect text products, as task differences in terms of order or topic can be argued to influence the final writings as well. The learners' approach towards performing their CSCW tasks can be described as their developed writing strategy, which was previously found to be affected by a variety of factors (Kessler, 2020). For instance, computer-related factors such as the online writing tool Google Docs can affect the process of CSCW, but it could be argued that, as learners develop their initial writing strategy, this might influence the remaining writing process as well. With this, the role of task objectives and methodological approaches to implementing digital tools becomes evident. Task effects may be used advantageously to derive maximum benefit for L2 development. Thus, adopting an efficient methodological approach to CSCW tasks is essential when implementing digital tools for language development.

1.5 Language development: a product vs. process approach

An increase in the use of technology in language classrooms not only provides advanced learning opportunities, but also allows for insights into the process of language development, for instance by means of screen-recordings and online applications. Amongst others, Penris and Verspoor (2017) proposed that the process should be the focal point, rather than a traditional product-oriented approach in which attention is centered on language development as a fixed, final level. According to Strobl (2015), this change in perspective should be incorporated in teaching practices as well, including a process-focused viewpoint rather than solely emphasizing the final product of the learner. Recent developments in L2 teaching practices suggest that this is advantageous to L2 learning processes, as CW was considered to benefit not only the writing product, but also the learners' individual development (Wigglesworth & Storch, 2009). More specifically, writing collaboratively allows learners to discuss their ideas and content with regard to the writing task, which additionally complements the writing process in later stages. With the aim of finding out whether there is a relationship between the collaborative processes evolving from CSCW and the participants' final texts, both a process-oriented approach (i.e., collaborative processes) and a product analysis of the quality of the texts are incorporated.


1.6 Statement of purpose

Previous research has focused on the learning opportunities provided by CSCW, often comparing individually written texts to CW. In addition to analyzing the process of writing collaboratively, existing literature has assessed final texts through the use of CAF measures to illustrate L2 competencies. Empirical evidence has demonstrated the learning opportunities that are provided by CSCW, although the extent to which this form of practicing language proficiency positively affects L2 development remains uncertain. Notwithstanding the fact that a handful of articles have explored the influence of writing collaboratively on text quality, the quality of writing as a result of collaborative processes in relation to task differences has yet to be researched. Moreover, CSCW was found to be affected by several contextual factors that influence the writing process and final texts, delineating the purpose of the present study. This study analyzes the relationship between collaborative processes and text quality in an attempt to explore whether there are collaborative patterns that result in qualitatively better texts, in order to understand which learning opportunities optimally enhance language learning. Therefore, the present study aims to explore the following research purposes:

1. How do international L2 learners of English collaborate in academic CSCW tasks, as illustrated by an adaptation of Strobl's (2015) rating scheme?

2. What is the quality of the writing products (i.e., the texts produced) in terms of several CAF measures?

3. To what extent do task order and topic affect the collaborative processes and final writing products?

4. How do collaborative processes in CSCW and text quality relate to each other?


Expectations are that pairs will execute the writing tasks by adopting various methodological approaches due to the complex and dynamic nature of CSCW, thus demonstrating large discrepancies in the collaborative processes and text quality. Task differences could potentially influence CSCW and the final writings as well, where task order is more likely to influence collaborative processes and task topic may affect text quality. Based on previous literature, text quality as operationalized by the selected CAF measures (viz., MLC, modifiers per NP, MTLD, academic words and WCR) is expected to correlate differently with learners' collaborative processes. However, previous findings indicate a possible relationship between collaboration and writing products, as tasks that allow for interaction between learners and pre-planning phases affect accuracy in texts, for instance. The present study aims to describe this relationship with the objective of providing further insights into the use and effects of CSCW in the L2 classroom.


2. Methodology

This study adopts a mixed-methods approach to analyzing collaborative processes and text quality. With the objective of exploring the relationship between collaborative patterns in terms of chat interactions and the quality of writing products in terms of CAF measures, the chats and screen-recordings of two data commentary tasks conducted in pairs are evaluated. Data come from a larger study by Stiefenhöfer (2018), in which international students participated in a CSCW activity as part of an English for Academic Purposes (EAP) course. Collaborative processes were evaluated using an existing rating scheme (Strobl, 2015; Meier et al., 2007), followed by a CAF analysis of the final texts targeting syntactic and lexical complexity, and accuracy. Subsequently, differences due to task order and topic in chat interactions and final writings were explored. Lastly, the relationship between collaborative processes and text quality was examined through correlation analyses.

2.1 Participants

Participants were enrolled in an EAP course of 4 weeks that consisted of 3 modules (Academic Reading and Writing; Listening, Reading and Discussion; and Oral Presentation).

Participants were between 18 and 20 years old (SD = 2.63 years), and the majority were female and native speakers of Chinese (see Figures 2 and 3). In total, 128 participants conducted the writing tasks, of which 62 were Postgraduate (PG) and 66 were Undergraduate (UG) students. On average, participants had studied English for 12.3 years (SD = 4.21), and as a requirement to participate in the EAP course, all students (PG and UG) had obtained an International English Language Testing System (IELTS) score of at least 5.5 in all subcategories of the test. Additionally, PG students had obtained an average score of 6.0. Participants had varied experience in writing collaboratively or performing tasks using Google Docs: some had practiced both CW and writing with Google Docs, whilst others had never done so.


Figure 2. Gender distribution
Figure 3. Native language distribution

2.2 Materials

Questionnaires were deployed online using Qualtrics (2005) or on paper to obtain participants' background information. Stiefenhöfer (2018) had created 30 anonymous Google accounts that students could use during the task. Two data commentary tasks were specifically developed for the current study (see Appendix A) and were piloted with two different cohorts of students. Task design was similar in both tasks; they differed only in topic, and both topics were closely related to the central themes of the EAP course. Learning objectives were to practice writing data commentary and to acquire knowledge in using quantitative data argumentatively, with topics referring to plagiarism and reported grades in relation to group dynamics. Instructions were provided as images in a shared Google Docs file in which pairs were also required to write their task, sharing the authority over their document. The use of Google Docs as an online writing tool allowed peers to interact with each other through a built-in chat function, facilitating online communication between pairs throughout the tasks.

2.3 Procedures and design

Students were paired up randomly with a peer of the same educational level (i.e., pairs consisted of either UG or PG participants) in an attempt to avoid disparities in language proficiency between learners. Data were obtained in computer laboratories, with either UG or PG students in each room. Since the experiment was planned in week 3 of the module Academic Reading and Writing in the EAP course, students were acquainted with the tasks' content, as they had discussed related topics (i.e., plagiarism and groupwork dynamics) during earlier weeks of the course. Tasks were conducted in a counterbalanced manner during 5 consecutive days in order to control for task order and topic effects and to enhance reliability. For example, on the first day, participants started with Task A (plagiarism) and then continued with Task B (group dynamics); on the next day, participants executed the tasks in reverse order. In total, 10 groups completed both tasks, with 2 groups on each day of the week. Furthermore, the CW tasks were highly similar to what participants would have been required to do in the EAP course in any case, aiding the ecological validity of the study. To minimize laboratory effects due to the fact that participants would not commonly communicate through chat interaction whilst located in the same room, pairs were seated at opposite ends of the room.

The experiment started at 9.30 AM with an explanation of the research project, at which point participants signed consent forms which informed them about the right to withdraw from the experiment. All students continued with the tasks as part of the EAP course, but data of students who withdrew were not included in the study. This took approximately 20 minutes, after which the students were introduced to the data commentary tasks. Before starting the experiment, an introduction to the task was provided, which included an example text of data commentary and a list of frequently used constructions in academic English. Participants were unaware of who they would be working with before starting the experiment, although there were no restrictions on discussing personal details during the task. Students were instructed to start with task 1 at 10.00 AM, and after 25 minutes they were informed to continue with task 2. In total, the session lasted approximately 90 minutes. Chat interactions were recorded using the screen-recording tool Camtasia to generate screen-recordings of the participants' writings. During the experiment, participants were not allowed to use any additional sources such as (online) dictionaries or other web pages. However, they were provided with a list of constructions commonly used for data commentary in academic English. After the experiment was completed, chats were copied and pasted into a Microsoft Word file by two research assistants.

2.4 Coding and analysis

Based on a preliminary selection by Stiefenhöfer (2018), 7 chats were complemented with 5 additional chats consisting of at least 150 words, in order to analyze a meaningful subset of 12 pairs and provide an accurate representation of collaborative processes in the participants' chat communication. First, the number of words and turns per chat were counted. Subsequently, Strobl's (2015) rating scheme, which was based on an earlier model by Meier and colleagues (2007), was used to evaluate the quality of collaboration evolving from the CW tasks. This scheme was developed to assess collaboration quality by coding peer interaction with regard to several chat dimensions that have been categorized according to phases inherent to the writing process. For example, chat instances were labelled for content or language pooling, structuring the writing process, or the cooperative orientation of individuals. Table 2 demonstrates all chat categories delineated in Strobl's model in addition to the corresponding phases and examples from the present study for illustration.


Table 2

Chat dimensions according to Strobl (2015, p. 200-201) and illustrative examples

Chat dimension (phase) and examples

(1) Sustaining mutual understanding (Planning)
P1: so what do we have to do?
P2: I think our task is to summarize the data and discuss

(2) Communication flow (All)
P1: but I don’t know what 13(65%) mean
P1: what is the number in front of the percentage?
P2: the number of student who receive individual grade

(3) Content pooling (Planning, translating)
P1: so how about the conclusion?
P2: give a summarize and recommendations?
P2: the school should increase the regulatory levels of plagiarism in school
P1: conclude both of them
P1: I think the university should pay more attention to the sham

(4) Language pooling (Translating, revising)
P1: I corrected `to sum up` because is not academic
P2: ok

(5) Argumentation (All)
P1: what I think is to divide high percentage and low perc
P2: yes I agree but these two kinds of plagiarism have the highest percentage

(6) Structuring (All)
P1: shall we move on the task two?
P2: just one implication that could leads us to one recommendation
P1: because we are running out of time
P2: wait

(7) Cooperative approach (All)
P1: so which one you want to write
P2: you choose first
P2: I’m both ok
P1: me again??? hhh nice of you

(8) Individual orientation (All)
P1: we need to tell more in the introduction
P2: it is vividly that most students got Cr and D in the assignments?
P2: vivid, (sorry)

For each participant, all utterances were coded manually using this scheme in Atlas.ti (2019), in addition to any emergent codes for patterns that occurred throughout the coding process (e.g., agreements or disagreements, see Table 3). Phases were additionally coded to capture time-related aspects of the chat interaction. However, due to very low numbers, these codes were excluded from further quantitative analyses.

Table 3

Emergent coding based on chat interactions and illustrative examples

Category and examples

Phases

Unrelated
P1: hello, who r u
P2: xxx
P1: and you ?
P2: i am xxx

Planning
P1: so we r going to finish the task 1 together?
P2: yeah
P1: where do we write our task?

Writing
P1: you have idea about conclusion
P2: the summary?
P1: you can type

Editing
P1: I see you copy the same paragrath?
P1: you can delete one of them
P2: yeah I delete it!
P2: if you found something need to correct you can delete and revise directly:)

Emergent coding

Dominant/passive collaborative patterns
P1: complete ur sentanse as soon as possible
P1: complete please
P1: complete ur sentancs
P1: thanks mate
P2: finsh

Getting acquainted (e.g., greeting each other)
P1: whats your name?
P2: xxx
P1: where are you sitting?
P2: 2

(Dis)agreement
P1: so we can add it as the last implication part
P2: yep, agree
P1: ok i saw that

Note. Personal information has been replaced with ‘xxx’ for privacy reasons.


Whereas the majority of chat dimensions were coded using the copied chat interaction, the categories of communication flow and cooperative orientation were rated using the participants' screen-recordings to accurately illustrate the collaborative processes that were highly dependent on the social interactional flow between pairs. Individual orientation was rated for each student separately, as each participant may show a different collaborative approach during the task. Subsequently, values were assigned to the ratings of the chat dimensions on a scale of 1 to 3 (1 indicating a minimal presence of the chat function and 3 illustrating a high representation of the related category), following Strobl (2015).

Text quality was assessed in a subset of 100 texts containing more than 50 words, in order to accurately represent the quality and inherent coherence of the texts by disregarding substantially short writings. To examine the quality of the (academic) English writing products, several CAF measures were selected based on previous empirical findings (see section 1.3). Firstly, syntactical complexity was operationalized using MLC ratings and the number of modifiers per NP. Moreover, lexical complexity was assessed with regard to MTLD and the academic word list, followed by an analysis of the accuracy of the writing products in terms of WCR. All CAF indices were computed on corrected texts, meaning that errors in punctuation or spelling were revised in order to illustrate text quality accurately, following Révész, Kourtali and Mazgutova (2017). For MLC, the web-based L2 Syntactical Complexity Analyzer (Lu, 2011) was used. The number of modifiers per NP and MTLD values were provided by the online tool Coh-Metrix (Graesser, McNamara, Louwerse, & Cai, 2004), and academic words were analyzed using the web-based tool Vocabprofile (Cobb, 2003). Accuracy (WCR) was assessed manually by two linguistically trained raters who individually coded 24 randomly selected texts. Subsequently, disparities in ratings were discussed until agreement was reached, after which the remaining 76 texts were coded.


Using RStudio (2018), descriptive statistics were computed for all variables. Paired samples t-tests compared task differences with regard to task order and topic (α = .05), the null hypothesis being that there is no difference. Spearman correlations were used to evaluate the relationship between collaborative processes and text quality, the null hypothesis being that there is no relationship between the related factors.
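
As an illustration of these analyses, the sketch below shows how such comparisons could be run in R; the data frame and column names are hypothetical and merely stand in for the per-pair CAF indices and chat-dimension ratings described above.

```r
# Hedged sketch of the analyses described above; 'caf' is a hypothetical data
# frame with one row per pair, holding a CAF index for each task and an overall
# collaboration rating. Column names are assumptions, not the study's variables.
t.test(caf$mlc_task1, caf$mlc_task2, paired = TRUE)                # task-order effect on MLC
cor.test(caf$collaboration_rating, caf$wcr, method = "spearman")   # process-product relationship
```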


3. Results

The present study aims to describe collaborative processes and text quality from a process- and product-oriented perspective, as well as to explore the relationship between these factors. First, collaborative processes are evaluated through quantitative and qualitative descriptive analyses. Subsequently, text quality is assessed by means of syntactical and lexical complexity measures in addition to an accuracy ratio. The effects of task differences regarding task order and topic are compared, after which the relationship between collaborative processes and text quality is explored.

3.1 Collaborative processes

Collaborative processes are evaluated by means of a descriptive analysis of the word and turn counts in the chat interactions of 24 participants (12 pairs) in tasks 1 and 2, as presented in Table 4.

Table 4

Descriptive analysis of word and turn count of each participant per task

M SD Min Max

Number of words Task 1 139 48.5 51 220

Task 2 46.5 35 0 126

Number of turns Task 1 26 10.4 6 50

Task 2 11.3 7.8 0 25

It can be seen that pairs appear to be more active in chat communication with their peer in task 1 than in task 2, as reflected by the average word count and number of turns. This finding is also reflected by the maximum and minimum values in each task, with the maximum word count and frequency of turns being higher for task 1 than for task 2. In task 1, the minimum word count and frequency of turns are also higher than in task 2. Additionally, the SD values illustrate considerable variability in chat interaction during both tasks, as demonstrated in Figures 4 and 5.

Figure 4. Boxplot on the number of words produced in each chat per task

Figure 5. Boxplot on the frequency of turns of each participant per task

The boxplots illustrate the difference in peer communication between task 1 and task 2. To explore collaborative processes more qualitatively, all chat utterances were coded according to the previously developed scheme that incorporates 8 categories (Meier et al., 2007; Strobl, 2015). Each chat dimension was then assigned a rating on a scale from 1 to 3, with 1 meaning that there was essentially no presence of that category and 3 indicating a very strong presence of this dimension (see Appendix B for all ratings per pair), as shown in Table 5.

Table 5

Descriptives of the qualitative rating scheme to illustrate collaborative processes

Chat dimensions M SD Min Max

(1) Sustaining mutual understanding 2.50 0.67 1 3

(2) Communicative flow 2.42 0.67 1 3

(3) Language pooling 1.75 0.87 1 3

(4) Content pooling 2.25 0.87 1 3

(5) Argumentation 1.75 0.62 1 3

(6) Structuring 2.33 0.49 2 3

(7) Cooperative approach 2.33 0.89 1 3

(8a) Individual orientation P1 2.25 0.62 1 3

(8b) Individual orientation P2 2.33 0.49 2 3

Note. Individual orientation was evaluated per participant, with P1 referring to the first and P2 to the second participant of each pair. Scores ranging from 1 (no presence of this dimension) to 3 (strong presence of this dimension).

As can be seen in Table 5, pairs adopted different approaches to writing collaboratively, as reflected by the disparity in average scores across the chat dimensions. Language pooling and argumentation occurred less frequently in comparison to other categories, indicating that participants discussed language-related issues less often. Moreover, this indicates that there were also few instances of discussion or reasoning between peers, implying that pairs did not provide an argumentative justification for their decisions. Contrastingly, on average, participants demonstrated higher means for sustaining a mutual understanding and for communication flow in both tasks. In other words, there was frequent discussion about task execution and comprehension, and the high communication flow indicates that pairs responded quickly to each other. Structuring resulted in a higher minimum score than the remaining categories, suggesting that all pairs were concerned with the tasks' practical aspects. Lastly, the chat dimensions of language and content pooling and cooperative approach revealed great variety in ratings compared to other categories, meaning that there was a considerable disparity between pairs regarding the discussion of language or content-related issues inherent to the task and the collaborative approach between pairs.

3.2 Text quality

Text quality was assessed for the 100 texts produced by participants in tasks 1 and 2 that contained more than 50 words. Table 6 provides the descriptives of all CAF measures per task, targeting syntactical complexity (MLC and number of modifiers per NP), lexical complexity (MTLD and academic word list scores) and accuracy (WCR).

Table 6

Descriptive analysis of text quality (CAF) regarding task order and topic

Order Task 1 Task 2

M SD Min Max M SD Min Max

MLC 4.56 0.50 3.55 5.94 4.58 0.60 3.34 5.75

Modifiers per NP 0.89 0.21 0.47 1.53 0.85 0.21 0.46 1.29

MTLD 53.52 11.37 31.81 98.58 56.19 13.55 32.59 87.26

Academic words 12.39 6.31 2.70 27.78 11.51 6.74 2.82 24.23

WCR 0.73 0.10 0.29 0.88 0.75 0.08 0.54 0.92

Topic Group dynamics Plagiarism

M SD Min Max M SD Min Max

MLC 4.55 0.56 3.34 5.70 4.61 0.56 3.55 5.94

Modifiers per NP 0.95 0.20 0.47 1.43 0.79 0.19 0.46 1.53

MTLD 52.63 12.35 31.81 87.26 57.08 12.41 32.75 98.58

Academic words 15.12 6.03 2.82 26.09 8.77 5.34 2.7 27.78

WCR 0.73 0.10 0.29 0.88 0.75 0.07 0.58 0.92

Note. All CAF indices are provided per text.


At this point, no clear patterns can be established in terms of text quality when comparing task order and topic. These findings are presented in Figures 6 and 7, distinguishing between MLC, modifiers per NP and WCR on the one hand and lexical complexity measures (MTLD and academic word list) on the other hand to accurately illustrate the descriptive analysis as demonstrated in Table 6.

Figure 6. Boxplot on text quality in task order regarding MLC, modifiers per NP and WCR

Figure 7. Boxplot on text quality in task order regarding lexical complexity


Figures 6 and 7 illustrate that text quality as operationalized by the CAF indices shows similar scores in terms of task order. Subsequently, Figures 8 and 9 provide a representation of these measurements regarding task topic.

Figure 8. Boxplot on text quality in task topic regarding MLC, modifiers per NP and WCR

Figure 9. Boxplot on text quality in task topic regarding lexical complexity


Notwithstanding the broad range of mean values presented in Figures 8 and 9, there appears to be little variation in the CAF indices of the writing products with regard to the differences in task order and topic. Nevertheless, Figures 8 and 9 do show a slight difference in the academic word list and modifiers per NP values, suggesting that task topic affected these indices, with the group dynamics texts showing higher values (cf. Table 6). However, due to the discrepancies in the linguistic forms targeted by these CAF measures, the quality of the texts may be difficult to determine at this point. Task differences are further explored using inferential analysis by comparing the differences in text quality regarding task order and topic, as demonstrated in the next section.

3.3 Task differences

Task differences can be explored in terms of task order and topic, regarding both collaborative processes and text quality. The process of CW was first analyzed with regard to task order, comparing chat interaction and text quality in task 1 to task 2. Considering the results provided in Table 4, the number of words produced and the frequency of turns per participant in each chat were compared across tasks 1 and 2 using paired samples t-tests. Results indicated that, on average, students showed higher participation in chat communication, as measured by the number of words produced in each chat, during task 1 than during task 2. This difference was significant, t(23) = 6.97, p < .005, 95% CI [65.26, 120.41], and gave a large effect size, r2 = 0.68, as illustrated in Figure 4. Furthermore, participants were also more active in communicating with their peers in terms of the frequency of turns per participant in task 1 (M = 26, SD = 10.4) than in task 2 (M = 11.3, SD = 7.8). This difference was significant, t(23) = 4.9978, p < .005, 95% CI [8.62, 20.796], and gave a large effect size, r2 = 0.52, as visualized in Figure 5.
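
For reference, the reported effect sizes are consistent with the common conversion r2 = t2 / (t2 + df) for paired t-tests; this formula is an assumption, as the thesis does not state how the effect sizes were derived.

```r
# Sanity check of the reported effect sizes, assuming r^2 = t^2 / (t^2 + df)
# with df = 23 (24 participants, paired design).
t_words <- 6.97;   t_words^2 / (t_words^2 + 23)   # ~0.68 (number of words)
t_turns <- 4.9978; t_turns^2 / (t_turns^2 + 23)   # ~0.52 (number of turns)
```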

To assess whether there are discrepancies in text quality with regard to task order, paired samples t-tests were conducted to compare the writing products of task 1 and task 2. Findings show that, on average, students did not write qualitatively better texts in either task (cf. Table 7), as differences were non-significant with small effect sizes.

Table 7

Inferential analysis on the influences of task order on text quality

MLC Modifiers per NP Academic words MTLD WCR

t -0.19 0.79 0.53 -1.42 -0.79

p .85 .43 .60 .16 .43

95% CI [-2.11, 1.75] [-0.056, 0.13] [-2.47, 4.22] [-6.44, 1.10] [-0.046, 0.0202]

r2 .001 .013 .006 .04 .013

Table 7 shows that writing performance was not found to be affected by task order regarding the CAF indices as presented here. A similar analysis was conducted to assess task topic effects on writing products in Table 8.

Table 8

Inferential analysis on the influences of task topic on text quality

MLC Modifiers per NP Academic words MTLD WCR

t -0.88 4.54 5.78 -2.23 -1.09

p .39 <.001*** <.001*** .031* .28

95% CI [-2.59, 1.016] [0.089, 0.23] [4.15, 8.57] [-8.45, -0.45] [-0.049, 0.014]

r2 .015 .296 .405 .092 .024

It can be seen that syntactical complexity (in terms of MLC) and accuracy do not show a significant difference when comparing task topics, whereas lexical complexity and the use of modifiers per NP do. The significant topic differences for modifiers per NP and academic words show large effect sizes and that for MTLD a medium effect size, whereas the non-significant differences show small effect sizes.
