
Unrooting the illusion of one-size-fits-all feedback in digital learning environments

Brummer, Leonie

DOI: 10.33612/diss.171647919

IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to cite from it. Please check the document version below.

Document Version

Publisher's PDF, also known as Version of record

Publication date: 2021

Link to publication in University of Groningen/UMCG research database

Citation for published version (APA):

Brummer, L. (2021). Unrooting the illusion of one-size-fits-all feedback in digital learning environments. University of Groningen. https://doi.org/10.33612/diss.171647919

Copyright

Other than for strictly personal use, it is not permitted to download or to forward/distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license (like Creative Commons).

Take-down policy

If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim.

Downloaded from the University of Groningen/UMCG research database (Pure): http://www.rug.nl/research/portal. For technical reasons the number of authors shown on this cover page is limited to 10 maximum.


“Feedback, like rain, should be gentle enough to nourish a person’s growth without destroying its roots”

Shute, 2018, p. xv

Introduction

This quote captures the complexity and richness of instructional feedback as a concept. In particular, it captures how learners perceive, process, and act upon feedback. Feedback can be considered a catalyst in the learning process and a key component across instructional settings (Hattie & Timperley, 2007). Instructional feedback can be defined as suggestions on how to bridge the current and desired level of performance/comprehension (Butler & Winne, 1995; Hattie & Timperley, 2007; Narciss, 2013; Roos & Hamilton, 2005). Even though delivering instructional feedback has been widely acknowledged as an opportunity for adapting learning performance, not all instructional feedback is considered equally effective (Hattie, 1999; Jaehnig & Miller, 2007; Narciss, 2013; Nicol & MacFarlane-Dick, 2006; Van der Kleij, Feskens, & Eggen, 2015). Despite the fact that researchers agree upon this key concept of instructional feedback (i.e., feedback as an opportunity for adapting learning performance), many different conceptualizations and definitions exist, describing many different features varying in their degree of specificity (see e.g., Hattie & Timperley, 2007; Jaehnig & Miller, 2007; Narciss, 2008). As a result, the feedback message—stemming from the conceptualization of instructional feedback—can be broadly described as having both an evaluative and an informative component (Narciss, 2008). The evaluative or verification component relates to the learning outcome and indicates the current level of comprehension, such as by sharing whether responses are (in)correct, which percentage of responses is correct, and how large the gap is between the current and desired level of performance/comprehension. The informative or elaborated component(s) consists of additional information related to the task (e.g., task rules, task constraints, and task requirements), topic (e.g., conceptual, procedural, and metacognitive knowledge), errors and mistakes, or solutions. Because the instructional feedback message typically consists of both components, a large variety exists in feedback contents that interact with relevant factors of the feedback situation (Narciss, 2008).

Internal and External Feedback Loop

When a learner is confronted with instructional feedback, two feedback loops are involved: an internal feedback loop and an external feedback loop. Narciss’ (2013) detailed model of instructional feedback includes both feedback loops and comprises five essential aspects for regulating and controlling the performance and the desired level of performance/comprehension (for a more elaborate description and visualization of the model, see Narciss, 2013). The internal feedback loop is based on the learner’s decision to obtain feedback and is inherent to (self-)regulation and engagement (Clark, 2012). The contribution of each feedback loop to the processing of feedback becomes apparent when the model is explained by discussing the five essential aspects for regulating and controlling the performance and the desired level of performance/comprehension. The first aspect comprises the determination of the internal and external standards. The internal standards consist of the learner’s subjective representation of competencies. The external standards are determined according to an external representation of competencies and are considered situational (i.e., depending on context and task factors). An example of external standards are the learning continuity pathways in primary and secondary education for a range of subjects. The second aspect of the model pertains to a continuous assessment of the current level of comprehension by both the learner (internal assessment) and an external feedback source (external assessment; Narciss, 2013). The third aspect comprises controllers—with the internal controller for the internal feedback loop and the external controller for the external feedback loop—which compare the current level of comprehension with the desired level of performance/comprehension. The external controller generates an external feedback message. This message is sent by a feedback source, such as a teacher, peer, or digital system (Clark, 2012; Debuse, Lawley, & Shibl, 2007; Narciss, 2013). If there is no gap between the current and desired level of performance/comprehension, this external feedback message can simply appraise or confirm reaching the desired level of performance/comprehension. If there is a gap between the current and desired level of performance/comprehension, this external feedback message can provide evaluative feedback, such as the percentage of correct responses, or may be more informative when more information is provided to close that gap. As a result, the contents of the external feedback message can vary greatly. The fourth aspect is the processing of the external feedback message by the internal controller along with the internal feedback. The result of this process is a series of comparisons: (a) the learner’s desired level of performance/comprehension with their internal feedback (their current level of comprehension as a result of internal assessment), (b) the learner’s desired level of performance/comprehension with an external feedback message, and (c) the learner’s internal feedback with an external feedback message. These comparisons lead to internal control actions selected by the learner. The fifth aspect is selecting and transmitting these actions to the controlled process, where learners have to implement the selected control actions, such as error correction strategies and revision activities. If the learner is able to accurately identify any discrepancies resulting from the three comparisons, the learner can generate specific actions to reduce those discrepancies. Examples are prioritizing subsequent steps needed for reaching specific goals, changing the learner’s perceptions of their own ability, and seeking feedback from teachers, peers, or from a Digital Learning Environment (DLE) for confirmation, clarification, or attaining goals (Narciss, 2013).

Within the external feedback loop, the processing of the external feedback can either be mandatory or optional. Mandatory external feedback is provided by an external source and lies outside of the control of the learner. In contrast, optional external feedback relies more heavily on monitoring processes, because the learners have to decide whether external feedback is necessary and, if so, which feedback would be suitable, when it should be delivered, and who or what can deliver the feedback. Regardless of feedback being mandatory or optional—as a result of receiving external feedback and contrasting it with their internal feedback—the learner can ignore, adjust, or implement the external feedback (Butler & Winne, 1995; Narciss, 2008, 2013). Nevertheless, optional external feedback—compared to mandatory external feedback—allows the learner to close the gap between the current and desired level of performance/comprehension with more willingness to improve, because the learner is investing effort in obtaining or seeking instructional feedback. The decision to seek feedback can be the result of applying self-evaluation strategies (Zimmerman, 2008) and is linked to both metacognition (Vandewalle, 2003) and motivation (Ashford, De Stobbeleir, & Nujella, 2008; Herold & Fedor, 2003; Kitsantas & Chow, 2007).

Feedback-Seeking Behaviour and Intrapersonal Factors

The learner’s actions directed towards seeking feedback can be conceptualized as Feedback-Seeking Behaviour (FSB; Crommelinck & Anseel, 2013; Papi, Rios, Pelt, & Ozdemir, 2019; Tanaka, Murakami, Okuno, & Yamauchi, 2002). Two main types of FSB can be distinguished. The first main type is monitoring, which consists of in-depth observations of the teachers and/or peers to obtain feedback for personal use. If the instructional feedback triggers a necessity to adapt one’s own learning performance, the feedback will probably be processed and acted upon. For example, the learner overhears the teacher giving feedback to a fellow student that the signalling words ‘therefore’ and ‘nevertheless’ were used incorrectly in a summary writing assignment. The learner can decide to review the use of signalling words in his/her own summary—even though the teacher’s feedback was initially directed to a peer. The second type of FSB is inquiry, which involves explicitly asking others for instructional feedback (Crommelinck & Anseel, 2013). For example, the learner can ask the teacher directly why the homework assignment was given a certain grade.

The seeking and processing of feedback is influenced by intrapersonal factors related to metacognition (e.g., planning, prioritizing) and facets of motivation (e.g., goal-setting, self-efficacy); yet, the seeking and processing of feedback also influences these intrapersonal factors (see Crommelinck & Anseel, 2013; Hwang & Arbaugh, 2006). Given this bidirectional influence, the learner needs to continuously monitor cognitive, metacognitive, and motivational processes related to the instructional feedback while applying cognitive, metacognitive, and motivational strategies that vary according to the task (Narciss, 2013; Rouet, 2006). Thus, intrapersonal factors determine how instructional feedback is sought, perceived, processed, and acted upon (Butler & Winne, 1995; Cutumisu, 2019; Winstone, Nash, Parker, & Rowntree, 2017). When the setting, either digital or face-to-face, is considered, the role of intrapersonal factors becomes more apparent—and of particular interest—given the increased use of DLEs in educational settings (Mandernach, 2005; Smith, Sorensen, Gump, Heindel, Caris, & Martinez, 2011). The emphasized role of the learner’s intrapersonal factors matches learner-oriented study approaches (Bergman & Trost, 2006).

Digitally Delivered Instructional Feedback

Digitally delivered instructional feedback appeals to different psychological processes as compared to face-to-face feedback (Wu, Xu, Kang, Zhao, & Liang, 2009; Winstone et al., 2017), which impacts the learner’s engagement with this feedback. For example, feedback-seeking in DLEs is considered less threatening because the learner’s ‘inadequacy’ is not openly displayed as compared to face-to-face settings (Wu et al., 2009). In other words, DLEs allow for a certain degree of anonymity (Bergstrom, Harris, & Karahalios, 2011). Furthermore, FSB is embodied differently in digital settings compared to face-to-face settings (Hwang & Arbaugh, 2006); that is, monitoring might be impossible if the digital system does not provide options to inspect the feedback that peers have received. Directly asking others for instructional feedback can be supported with a chat forum or by adding options that deliver feedback upon the learner’s request. Thus, the digital setting influences the seeking and processing of feedback.

DLEs can function as an external feedback source in two ways. First, DLEs can serve mainly as the initial feedback sender by delivering preprogrammed feedback, typically also mandatory feedback, such as evaluative feedback for a multiple-choice question. In case of an incorrect answer, the feedback can be extended with the correct answer or an explanation of why that particular answer is (in)correct. DLEs serving as feedback senders can also send preprogrammed instructional feedback upon a learner’s request, thereby allowing feedback to be optional. Second, DLEs can also function as an intermediary, in which the DLE is used as a channel by an external agent, such as a teacher or peer, to communicate feedback to the learner (Debuse, Lawley, & Shibl, 2007). An example is the feedback a teacher delivers to a learner by commenting on an uploaded document, such as a summary writing assignment. The teacher can, if applicable, highlight sentences that need to be revised and comment on the use of domain-specific words or content. DLEs functioning as initial feedback senders rather than as intermediaries appeal more to the learner’s intrapersonal factors than to interpersonal factors, because other agents’ intrapersonal factors cannot confound the process.
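To make the distinction between mandatory and optional preprogrammed feedback concrete, the minimal sketch below models both delivery modes in Python. All names and feedback strings are hypothetical illustrations; no actual DLE API, Gazelle’s included, is implied.

```python
# Sketch of the two DLE feedback-delivery modes described above.
# All identifiers are illustrative; no real system's API is implied.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackMessage:
    evaluative: str                    # verification component, e.g. "Incorrect."
    informative: Optional[str] = None  # elaborated component, e.g. an explanation

def mandatory_feedback(answer_correct: bool) -> FeedbackMessage:
    """DLE as initial sender: pushed to every learner after each response."""
    return FeedbackMessage(evaluative="Correct!" if answer_correct else "Incorrect.")

def optional_feedback(answer_correct: bool, requested: bool) -> Optional[FeedbackMessage]:
    """Delivered only upon the learner's request, keeping feedback under learner control."""
    if not requested:
        return None
    return FeedbackMessage(
        evaluative="Correct!" if answer_correct else "Incorrect.",
        informative="Re-read the paragraph and look for signalling words such as 'because'.",
    )
```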

In general, DLEs have unique features for both learners and teachers when it comes to instructional feedback (Cheung & Slavin, 2012; Luschei, 2014; Mandernach, 2005; Munshi & Deneen, 2018; Panadero, 2016; Smith et al., 2011; Warren, Lee, & Najmi, 2014). For learners, DLEs allow feedback to be delivered rapidly in various formats and, if necessary, upon a learner’s and/or teacher’s request (Swart, Nielen, & Sikkema-de Jong, 2019). Thus, a DLE can meet the learner’s needs for instructional feedback in terms of frequency and timing (Dillon & Jobst, 2005). Furthermore, DLEs differ with regard to the degree of learner control they offer or allow. For example, learners can determine whether they would like to receive feedback and, if so, how specific that feedback should be. Other examples include determining the order in which learners would like to access or read information, the content of this information, and its display (Deeley, 2017; Scheiter & Gerjets, 2007). Allowing learners to control the activities they engage in—including feedback-seeking—activates their (self-)regulation and engagement with the subject matter (Clark, 2012). In addition, logistical barriers that are present in face-to-face settings are alleviated when it concerns digitally delivered feedback (Wu et al., 2019). DLEs can allow learners more flexibility in deciding where and when they want to work, and where and when they decide to search for and obtain instructional feedback (Kitsantas & Chow, 2007). Besides the advantages of DLEs for learners, teachers can also benefit from DLEs. Features for teachers include flexible access to the DLE and rapidly generated overviews of learners’ progress and current levels of understanding to adapt and/or direct instructional feedback or instruction (Kitsantas & Chow, 2007; Pellegrino, Chudowsky, & Glaser, 2001).

The Effects of Digitally Delivered Instructional Feedback

The increased use of DLEs in educational settings allows researchers to examine the contributions of the digital setting compared to face-to-face settings, and—within the digital setting—the contribution of different types of digitally delivered instructional feedback to learning (e.g., Hattie & Timperley, 2007; Jaehnig & Miller, 2007; Kluger & DeNisi, 1996; Swart et al., 2019; Van der Kleij et al., 2015). Kluger and DeNisi (1996) found a strong effect of evaluative feedback in the digital setting on learning performance compared to a face-to-face setting and reported a Cohen’s d of .41. This finding is underlined by Hattie and Timperley (2007), who have shown that feedback provided by a computer yielded a high effect size (Cohen’s d = .52); however, they did not specify whether the feedback was evaluative and/or informative. Despite these seemingly positive contributions of digitally delivered feedback, the effects of feedback are situation-dependent (Hattie & Timperley, 2007; Kluger & DeNisi, 1996). This situation-dependency is expressed in factors such as the feedback type and degree of specificity, amongst other factors that can be categorized in context, content, and task factors. Research has shown that the effects of evaluative feedback—more specifically the knowledge of result (KR) type of feedback—were stronger for simple tasks when compared to more complex tasks (Kluger & DeNisi, 1996). Jaehnig and Miller (2007) devoted their review to determining which type of feedback was effective in programmed instruction on a non-specified learning task and found KR to be as effective as no feedback. In addition, knowledge of correct response (KCR; similar to KR extended with the correct result) and elaborated/informative feedback (feedback with additional information besides KR/KCR) were found to be more effective than KR. However, their results are difficult to interpret without information about the assessment used to measure learning performance. Whereas Kluger and DeNisi (1996) included situational and methodological characteristics in their review, these characteristics were absent in the review of Jaehnig and Miller (2007). Moreover, Jaehnig and Miller (2007) are vague about how they selected their study pool, which raises questions in terms of replicability and generalizability. In contrast, Van der Kleij et al. (2015) provide sufficient situational and methodological characteristics of the included studies as well as descriptions of the assessments in the studies, which allows for a better interpretation of results. They concluded that elaborated/informative feedback was more effective in improving learning performance than KR and KCR. However, due to the small sample size (70 effect sizes from 40 studies), assumptions regarding the effectiveness of KR and KCR on lower order learning outcomes (i.e., tasks in which learners have to recognize, recall, and understand concepts without applying this knowledge) could neither be rejected nor confirmed. Elaborated/informative feedback was found to be effective for both lower and higher order learning outcomes, where the latter are typically reflected in learners’ application of the acquired knowledge in new situations (i.e., transfer; Van der Kleij et al., 2015). Last, Swart et al. (2019) examined the effects of feedback on learning from text (i.e., expository texts). They found that evaluative and informative feedback are most effective when delivered directly after reading; feedback delivered during reading is less effective.
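For reference, the Cohen’s d values reported above are standardized mean differences. The formula is not spelled out in the dissertation itself; a standard formulation is:

$$
d = \frac{M_1 - M_2}{SD_{\text{pooled}}}, \qquad
SD_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,SD_1^2 + (n_2 - 1)\,SD_2^2}{n_1 + n_2 - 2}},
$$

where $M_i$, $SD_i$, and $n_i$ are the mean, standard deviation, and size of group $i$ (e.g., the digital versus the face-to-face condition).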

Conceptual and Contextual Issues as a Motivation for this Dissertation

Over the last decade, research has provided meaningful insights regarding the processing of instructional feedback. For example, the task-dependency of the effects of instructional feedback on learning has been widely acknowledged (see Hattie & Timperley, 2007; Kluger & DeNisi, 1996; Van der Kleij et al., 2015). This dependency is expressed in the inclusion of factors such as the type of assignment, the type of learning outcomes (e.g., dichotomized into lower or higher outcomes, cf. Van der Kleij et al., 2015), and the specificity of the feedback message. Furthermore, Narciss’ (2008, 2013) model provides a detailed description of how the internal and external feedback loops operate (see section Internal and External Feedback Loop), allowing the clusters (or collections) of context, content, and task factors in their combination—as well as clusters of intrapersonal factors of the learner—to play a role in the feedback process (Butler, Godbole, & Marsh, 2013; Butler & Winne, 1995; Cutumisu, 2019; Gordijn & Nijhof, 2002; Maier, Wolf, & Randler, 2016; Winstone et al., 2017). However, the quantity of potentially influential clusters of (intrapersonal) factors—and their subsequent interactions within and between these clusters—makes it difficult to fully grasp the complexity of the feedback process. As a result, providing meaningful practical recommendations is thwarted. An example of a learner working on an assignment can illustrate this complex conjoint influence of factors. Imagine that a learner is reading an expository text with several questions about the content of the text. The learner has his/her intrapersonal factors, such as his/her initial reading level and preferences for topics. In addition, the learner has knowledge of relevant strategies to work through the text, to increase comprehension when (s)he does not fully comprehend the text, and to finish reading the text in a specific amount of time. The quality of these strategies can differ. The learner—with his/her intrapersonal factors—interacts with or responds to situational factors. Examples of situational factors are context factors (such as the educational level), content factors (such as the amount of specificity), and task factors (such as the format or display). The learner can respond to a collection of factors or respond specifically to one factor. As a result, there are numerous interactions between situational factors and the learner’s intrapersonal factors.

Feedback studies have mainly focused or placed emphasis on one cluster of (intrapersonal) factors, stemming from either too narrowly or too widely conceptualized clusters of context, content, task, and/or intrapersonal factors. Too narrowly conceptualized clusters lead to recommendations that are only transferable to specific learning settings, leaving them unsuitable for other settings (e.g., Cutumisu, 2019). Too widely conceptualized clusters are generally—and sometimes even vaguely—phrased. These recommendations are applicable across many educational settings, but without sufficient specification of how they can positively contribute to learning (e.g., Gibbs & Simpson, 2004a). Appropriate research methods need to be adopted to step away from this fragmented approach to studying context, content, task, and/or (learner’s) intrapersonal factors in the processing of feedback and/or for learning performance. One way to examine, or to at least acknowledge, the role of intrapersonal factors in the processing of feedback and/or in learning performance—for example, when reading texts presented in a DLE—is by adopting a learner-oriented approach. A learner-oriented approach can be described as taking “a holistic and dynamic view of the individual as an integrated totality over time” (Bergman & Trost, 2006, p. 604). At a theoretical level, a learner-oriented approach involves a collection of factors, such as (clusters of) intrapersonal factors, that interact and develop as a system all together. At a methodological level, this approach can identify a subsystem relevant for the task at hand (e.g., the processing of feedback or learning performance). At both levels—theoretical and methodological—a learner-oriented approach fits the complexity that individual learners bring to the processing of feedback and/or learning performance.


The difference in the educational setting—when comparing digitally delivered feedback with face-to-face feedback—needs to be acknowledged and considered, because learners may experience different psychological and logistical processes in both settings (Winstone et al., 2017; Wu et al., 2009). As a result, guidelines derived from face-to-face practices cannot always be seamlessly matched to guidelines obtained from digital settings (Van der Kleij et al., 2015). The ecological validity of studies investigating digitally delivered feedback is also subject to debate. Studies conducted in “artificially developed” settings—here defined as studies not implemented in or combined with existing lessons, classes, and/or curriculum topics—are problematic in terms of their ability to provide practical recommendations. Furthermore, these simplified educational settings show little resemblance to the realistic settings found in education. For example, the study by Hung, Hwang, Lin, Wu, and Su (2013) involved field trips to teach observational skills to 11- and 12-year-old learners. This study meets the condition of being conducted in a naturalistic setting, raising enthusiasm amongst the learners, but the learning objectives—observing ecological systems and posing questions—are not explicitly linked to other activities within the curriculum, making this field trip an isolated activity. Regardless of a study’s ecological validity, the learner still determines to a large extent what will be done with the instructional feedback in terms of perceiving, processing, and/or acting upon the feedback (Narciss, 2008; Shute, 2008; Strijbos & Müller, 2014). Allowing this additional variation in the feedback situation—stemming from learners’ intrapersonal factors and their passive and/or active participation in that situation—requires feedback research with a sufficiently detailed conceptualization of clusters of (intrapersonal) factors of the learner—including subsequent interactions within and between these clusters—and an ecologically valid methodological approach across educational settings.

It is time for feedback researchers to face this seemingly paradoxical and complex challenge. Delivering instructional feedback cannot be conceived as a one-size-fits-all solution nested upon the assumption that the learner is merely a passive recipient of instructional feedback. Delivering instructional feedback does not necessarily mean that the feedback will be processed or understood, nor does it improve or adapt learning or performance per se. The need to dispose of this incorrect conceptualization is supported by previous research, which reported that learners have difficulties implementing digitally delivered instructional feedback if it is delivered too late, is too generally phrased, is perceived as too authoritative, and/or is delivered in expert language (e.g., the use of academic, profession-, or competence-specific language; Jonsson, 2013). In addition, it has been reported that learners generally receive insufficient guidance or training to implement feedback (Hattie & Timperley, 2007; Shute, 2008; Weaver, 2006), but would benefit from feedback training if they received it (Clark-Gordon, Bowman, Watts, Banks, & Knight, 2018).

Conceptual and Contextual Issues in a Demanding Setting

A major type of instruction in nearly all school subjects is the use of expository texts (Alexander, 2005), which requires learners to combine domain-specific skills with reading strategies to build a coherent mental text representation (Aydin, 2011). Subjects that primarily use expository texts as their main means of learning are particularly challenging. The learner is confronted with many multisyllabic words, a high informational density, and a large number of abstract and logical relations (Berkeley, King-Sears, Vilbas, & Conklin, 2016; Gregg & Sekeres, 2006). These characteristics complicate the process of building a coherent mental text representation (Aydin, 2011; Van den Broek, 2010). An example of a school subject experiencing this challenge is geography, in which learners have to tackle expository texts alongside domain-specific skills. Moreover, many learners struggle to comprehend what they read due to insufficient knowledge regarding metacognitive, motivational, and reading strategies and/or their inability to implement these strategies in a correct or timely way (Alexander, 2005; Pearson, Roehler, Dole, & Duffy, 1992), which additionally complicates the process of building a coherent mental text representation (Aydin, 2011; Van den Broek, 2010). Yet, instructional feedback can be a means to support readers in building this representation.

External feedback—either delivered to or sought by learners—can support them during reading. The instructional feedback can target, for example, prior knowledge about the contents of the text, or knowledge of relevant strategies to establish coherence (e.g., reading, metacognitive, and motivational strategies; Alexander, 2005; Pearson et al., 1992). Research has already shown that metacognition and motivation are essential for the processing of feedback (Butler, Godbole, & Marsh, 2013; Butler & Winne, 1995; Cutumisu, 2019; Gordijn & Nijhof, 2002; Maier, Wolf, & Randler, 2016; Narciss, 2008, 2013; Timms, DeVelle, Schwanter, & Lay, 2015; Shute, 2008; Winstone et al., 2017), for the act of seeking feedback (Ashford et al., 2008; Crommelinck & Anseel, 2013; Papi et al., 2019), and for expository text comprehension (Hwang & Arbaugh, 2006; Fordham, 2006). The possible interactions of these factors—and their subsequent effects on a learner’s mental text representation—are numerous and complex. For example, a typical expository text is lengthy and contains challenging and/or multisyllabic words. The learner has to implement strategies to retrieve the meaning of those words to create more coherence, while monitoring reading progress and comprehension. During or after reading the text, the learner might decide to ask a peer or teacher for feedback to confirm or clarify his/her comprehension and/or strategies that might help to improve reading and comprehension. However, due to the typical length of expository texts, the learner might become demotivated after some time and, consequently, employ strategies focused on task value (e.g., the learner reminds him-/herself that reading is helpful for learning new topics and for other school subjects) or goal-setting (e.g., the learner is going to read one chapter per day with corresponding questions from the workbook) to continue reading. Furthermore, time constraints—such as the typical lesson duration of 50 or 60 minutes—may influence the learner’s approach to the text. These examples show that the included factors—from the intrapersonal clusters of metacognition and motivation, and feedback-seeking, combined with context, content, and task factors—create a complex situation in the case of reading expository texts, with numerous possible interactions between the (clusters of) factors. The composition of these interactions, and therefore also their effects on a learner’s mental text representation, is different for digital settings. The digital medium requires implementing different strategies to comprehend the text compared to the printed medium, regardless of whether the requirements of these strategies positively or negatively influence the mental text representation (Singer & Alexander, 2017). These strategies are related to, for example, the visual ergonomic characteristics of the digital medium, such as the lighting source in LCD computer screens, refresh rates, and scrolling (see Garland & Noyes, 2004; Lee, Ko, Shen, & Chao, 2011; Proaps & Bliss, 2014). Thus, the digital setting brings additional challenges to the already challenging nature of expository text comprehension, including the conjoint influence of context, content, and task factors, and the learners’ intrapersonal factors.

Research Rationale

Due to the central role of the individual learner in the feedback situation, I emphasize—in line with Aoun, Vatanasakdakul, and Ang (2018), Narciss (2008, 2013), Nicol (2019), and Singh (2016)—that instructional feedback and its subsequent interactions with and between (clusters of) context, content, task, and intrapersonal factors should be viewed and operationalized as a complex concept. For example, one can acknowledge this complexity by including one factor of each cluster, or by including more than one factor per cluster. Including one or more factors per cluster may seem common sense; however, developments in research and practice always follow a slower pace in implementing this conceptualization. The explicit acknowledgement of this complexity is partly expressed in stepping away from a one-size-fits-all solution nested upon the assumption that the learner is merely a passive recipient of instructional feedback. Conversely, by focusing solely on one (type of) context, content, or task factor(s) and/or one (cluster of) intrapersonal factor(s) at a time, research has provided a scattered repository of information concerning which intrapersonal factors or context, content, or task factors might positively influence learning performance. As a result, research should be designed and operationalized by including factors stemming from at least two different clusters of factors, for example, from context, content, task, and intrapersonal factors, to make an attempt to capture the complexity of the feedback situation.

Digitally delivered instructional feedback, or the opportunity to seek such feedback, can be helpful when reading expository texts typical of many school subjects, and especially in the highly demanding context that learners face at the end of primary education and the beginning of secondary education. Reading is the means of learning in education, especially in senior general secondary education and pre-university education¹. When reading, learners have to be able to build a coherent mental text representation. In addition, they have to combine reading strategies with domain-specific skills by activating and utilizing their cognitive, metacognitive, and motivational resources (i.e., intrapersonal factors). In particular, the combination of influential intrapersonal factors—stemming from clusters such as metacognition and motivation—on both the processing of instructional feedback and on text comprehension has not yet been studied extensively. So far, the isolated manner of investigating (clusters of) context, content, task, and intrapersonal factors has not fully acknowledged the complexity of (subsequent) interactions between these (clusters of) factors, whilst their combination is a crucial part of conceptualizing instructional feedback and its processing by the learner; perhaps even more essential, this complex conjoint influence resembles the feedback situation as experienced by learners and teachers in everyday educational settings. Furthermore, feedback-seeking behaviour is embodied differently in digital settings compared to face-to-face settings (Hwang & Arbaugh, 2006). Monitoring might be impossible if the digital system does not provide options to inspect the feedback that peers received. Directly asking others for instructional feedback can be supported with a chat forum or by adding options that deliver feedback upon the learner’s request.


Research Project Gazelle

The topic of this dissertation was derived from a research project² that explored which types of hints—containing evaluative feedback or optional informative feedback focused on cognitive, metacognitive, motivational, and reading strategies—in the digital learning environment called Gazelle³ positively contributed to learners’ expository text comprehension, self-regulated learning, and motivation (see Ter Beek, Spijkerboer, Brummer, & Opdenakker, 2018). The project—covering 2015 to 2018—implemented a yearly intervention for the school subjects of history and geography. In the following sections, this research project will be described, including the link between this research project and the dissertation.

The start of the research project, covering its first year, was mainly focused on the development of materials: constructing the expository texts for geography and history and the corresponding assessments (multiple-choice questions, summary writing assignments), the construction of the digital learning environment Gazelle, including the content of the feedback and hints, and the selection and translation of self-report questionnaires—including tailoring the items to the subjects geography and history—for metacognition and motivation. The next two years focused on studying the effects of different combinations of hints. Each year consisted of two intervention periods of eight weeks each for both school subjects, implemented halfway through semester one and halfway through semester two, and each intervention adopted a pretest-posttest design. In the first and final week of each intervention period, questionnaires were administered to measure domain-specific metacognitive reading strategies and motivational orientations. The pretest and posttest of expository text comprehension were administered in the second and seventh week. The four weeks in between consisted of practice in reading and of assessment tasks, with a similar procedure for each week. From the second until the seventh week, learners started with answering a few questions related to motivation for the subject, topic, and reading. Subsequently, they read an expository text, followed by writing a summary and answering ten multiple-choice questions directed towards five domain-specific skills (i.e., causal relations, explaining, questioning, arranging, and perspective taking; Schöps, 2017; Van der Schee, 2012). They were then asked to reflect on their summary, and they completed the weekly assignments by giving themselves advice for the next week. This advice was repeated the following week.

In school year 2016-2017, the experimental group received mandatory evaluative feedback (except in the first and eighth week) for each multiple-choice question. In addition, the experimental group was granted access to hints focused on cognitive and metacognitive strategies during the first intervention period. The access to cognitive and metacognitive hints was extended with access to motivational hints in the second intervention period. The hints were designed with two layers: the first layer consisted of a more general description of the information that can help the learner (e.g., “For a question starting with ‘why’ you are looking for an explanation for a phenomenon. You can look at signalling words like ‘because’ or ‘therefore’.”), whereas the second layer contained a more specific description (e.g., “Search in the text for the signalling word ‘because’ and link this to what has been said about the beneficial position of Rotterdam. Linking these together will provide the contribution of Rotterdam to Europe.”). Figure 1.1 shows examples of a cognitive hint for layers 1 and 2, separated by the dashed line in the pop-up message. The control group only received evaluative feedback in the first intervention period. During the second intervention period, the control group continued receiving evaluative feedback (see Figure 1.2), but was also granted access to cognitive and metacognitive hints. In school year 2017-2018, a similar set-up was implemented: all groups had access to cognitive, metacognitive, and motivational hints. Teachers were implicitly (intervention period one) and explicitly (intervention period two) instructed to use the visualizations of student performance data that were collected and displayed in Gazelle. These data included overviews of scores per week, per domain-specific skill, and of hint use. Teachers were prompted to implement this information in their lessons and complement it with strategy instruction. This strategy instruction targeted similar strategies as the ones covered by the hints. In addition, prior to intervention period two, the teachers from the experimental group received a training about implementing the visualization of student performance data from Gazelle into their lessons, whereas teachers from the control group did not receive this training.

¹ In Dutch these tracks are called havo and vwo respectively.
² Licensed under grant 405-15-551 from the Netherlands Organization for Scientific Research (NWO).
³ Gazelle is a Dutch acronym for ‘Gemotiveerd en Actief Zelfstandig Lezen’, which can be roughly translated into ‘Motivated and Active Independent Reading’.

In the present dissertation, geography materials from the Gazelle research project were implemented in Chapters 4 and 5. More specifically, the study in Chapter 4 used one geography text and ten multiple-choice questions from week five in intervention period one of school year 2017-2018. The study was conducted in the same schools as the project, but with learners from different classrooms in the same age group. Chapter 5 adopted the complete research approach of the project. This study used the data from self-report questionnaires, performance scores on multiple-choice questions and summary writing assignments, and log files covering students’ navigational patterns while seeking feedback by accessing hints, from the second intervention period of school year 2017-2018.


Figure 1.1. Screenshot from GAZELLE with optional feedback, visible in the pop-up message, for the multiple-choice question.


Figure 1.2. Screenshot from GAZELLE with evaluative feedback, visible in the pop-up message, about the multiple-choice question.


This dissertation includes four interrelated studies, each focusing on distinct aspects of the feedback process and the learning process. The complexity of these processes is captured in two conceptual review studies (Chapters 2 and 3) and two empirical studies (Chapters 4 and 5). The choice to include both conceptual and empirical studies in my dissertation is founded on a need to step away from a one-size-fits-all solution nested upon the assumption that the learner is merely a passive recipient of instructional feedback. To this end, I will integrate factors—stemming from the clusters of context, content, task, and intrapersonal factors—to avoid further contributing to a scattered repository of information. These goals will be met by revisiting and systematically reviewing existing research regarding the effects of digitally delivered feedback, while making a distinction between the impact of (clusters of) context, content, and task factors on learning performance (Chapter 2) and the clusters of intrapersonal factors included in that research (Chapter 3). Furthermore, the empirical studies in Chapters 4 and 5—both viewed as an extension of Chapter 3—also allow for the investigation of the complex influence of clusters of intrapersonal factors to justify the complexity of the feedback situation and, again, to avoid further contribution to an already scattered repository of information. These studies are centred around the reading of expository texts in the school subject geography, which requires learners to combine reading strategies with domain-specific skills by investing cognitive, metacognitive, and motivational resources. Instructional feedback is reasoned to support learners during reading to help them build a coherent mental text representation. In the following sections, I will describe the goal(s) of each study, explain why the study was conducted, elaborate on the unique contributions to the overarching theme, and present the research questions per study. Overlap of theoretical and methodological information throughout the chapters is inevitable due to the overlap of concepts.

Chapter 2 presents a meta-analysis regarding the effectiveness of (clusters of) context, content, and task-related factors of digitally delivered feedback on learning performance. The conjoint influence of (clusters of) content, context, and task-related factors—and their subsequent interactions—positively contributes to learning performance. Factors related to the context, content, and task clusters were included with a focus on analysing their unique contributions to learning performance in the meta-analysis and the interactions of these (clusters of) factors in a meta-regression. In particular, the addition of a meta-regression—besides the main meta-analysis of a range of (clusters of) context, content, and task factors—will prevent further contributing to a scattered repository of information by combining the most effective factors that positively contribute to learning performance. This collection of (clusters of) factors has the potential to provide sufficient detail for practical recommendations, but will also allow generalizability to other domains. The following research questions (RQs) were addressed in Chapter 2:

RQ1: Which context factors (educational level, feedback timing, learner control, rewards, study setting) moderate digitally delivered feedback and are most effective in improving learning performance in digital learning environments?

RQ2: Which content factors (form, focus, function) moderate digitally delivered feedback and are most effective in improving learning performance in digital learning environments?

RQ3: Which task factors (assessment developers, assignment, discipline, feedback display) moderate digitally delivered feedback and are most effective in improving learning performance in digital learning environments?

RQ4: Which combination of context, content, and task factors moderates digitally delivered feedback and contributes to learning performance in digital learning environments?
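As an illustration of how a combination question like RQ4 can be analysed, the sketch below fits an inverse-variance weighted meta-regression of study effect sizes on dummy-coded context, content, and task moderators. The data, column names, and the fixed between-study variance are hypothetical; this is a generic sketch, not the chapter’s actual analysis pipeline.

```python
# Minimal sketch of a random-effects meta-regression over coded study factors.
# Hypothetical data and column names; not the dissertation's actual pipeline.
import pandas as pd
import statsmodels.api as sm

# One row per study effect size (Cohen's d), its sampling variance, and
# dummy-coded moderators drawn from the context/content/task clusters.
studies = pd.DataFrame({
    "d":            [0.41, 0.52, 0.10, 0.33, 0.25, 0.60],
    "var_d":        [0.02, 0.03, 0.01, 0.02, 0.015, 0.025],
    "elaborated":   [0, 1, 0, 1, 0, 1],   # content: elaborated/informative feedback
    "learner_ctrl": [0, 0, 1, 1, 0, 1],   # context: learner-controlled feedback
    "complex_task": [0, 1, 1, 0, 1, 0],   # task: complex vs. simple assignment
})

# Between-study variance, fixed here for brevity; in practice it is estimated
# (e.g., with a DerSimonian-Laird or REML estimator) from many more studies.
tau2 = 0.01
weights = 1.0 / (studies["var_d"] + tau2)  # inverse-variance weights

X = sm.add_constant(studies[["elaborated", "learner_ctrl", "complex_task"]])
fit = sm.WLS(studies["d"], X, weights=weights).fit()
print(fit.params)  # each coefficient: shift in d associated with that factor
```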

Chapter 3 tapped into the operationalization of the learner’s (clusters of) intrapersonal factors in existing research by means of a systematic review, because the processing of digitally delivered instructional feedback is—to a large extent—determined by the individual learner. Whereas the meta-analysis focused on the (clusters of) factors surrounding the learner (i.e., context, content, and task factors), this systematic review focuses on the operationalization of the learner’s intrapersonal factors. With the increased use of DLEs in education, the focus shifts from the learners’ inter- and intrapersonal factors to predominantly intrapersonal factors, because the learner interacts only with a relatively fixed set of features in the DLE rather than with other agents, such as a teacher or peer with their own intrapersonal factors. The decision to focus on the operationalization of intrapersonal factors—as a consequence of the increased use of DLEs in education that requires a focus on predominantly intrapersonal factors—provided an overview of whether this focus is present in existing research. The research aim of the current systematic review was to explore how (clusters of) intrapersonal factors have been operationalized—according to five themes: cluster of intrapersonal factors, type of measure, use of multiple measures, measurement goal, and measurement focus—in previous research involving digitally delivered feedback. Furthermore, a distinction was made between studies emphasizing the essential role of (clusters of) intrapersonal factors in feedback-processing in their theoretical rationale and studies in which this essential role is not emphasized.

Chapter 4 reports on the influence of metacognition, motivation (i.e., two clusters of intrapersonal factors), and feedback-seeking behaviour on expository geography text comprehension. In education, each of these aspects has been extensively studied in isolation, but research on feedback-seeking behaviour in particular is absent in the context of geography education. This study investigates their unique and shared contributions to three question formats of expository text comprehension: (a) performance on multiple-choice questions, (b) performance on a summary writing assignment, and (c) a combination of both indicators. These three question formats contribute differently to building a coherent mental text representation—with multiple-choice questions signifying a more passive contribution and writing a summary a more active contribution—and the composite text comprehension indicator resembles the type of assessment learners receive in practice to indicate the grade for the school subject. To this end, path analysis was employed. Simple models were included to examine the direct relations of the intrapersonal factors metacognition and motivation, and of feedback-seeking behaviour, with expository text comprehension. Complex models were included to examine potential mediation effects of feedback-seeking behaviour (in terms of monitoring and inquiry) on the relationship between the clusters of intrapersonal factors metacognition and motivation, and expository text comprehension.
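A minimal sketch of the mediation logic behind the complex models is given below, assuming simulated data and a single mediator; the chapter itself estimates path models over several indicators, so this is illustrative only, and all variable names are hypothetical stand-ins.

```python
# Minimal mediation sketch: metacognition -> feedback-seeking -> comprehension.
# Simulated data and hypothetical variable names; the chapter's path analysis
# covers multiple intrapersonal and feedback-seeking indicators.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
meta = rng.normal(size=n)                             # metacognition score
fsb = 0.4 * meta + rng.normal(scale=0.9, size=n)      # feedback-seeking (inquiry)
compr = 0.3 * meta + 0.25 * fsb + rng.normal(size=n)  # text comprehension
df = pd.DataFrame({"meta": meta, "fsb": fsb, "compr": compr})

a = smf.ols("fsb ~ meta", df).fit().params["meta"]         # path a: predictor -> mediator
b = smf.ols("compr ~ fsb + meta", df).fit().params["fsb"]  # path b: mediator -> outcome
c = smf.ols("compr ~ meta", df).fit().params["meta"]       # total effect

print(f"indirect (a*b) = {a * b:.3f}; direct = {c - a * b:.3f}; total = {c:.3f}")
```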


Chapter 4 addressed the following research questions (RQs):

RQ1: What is the direct contribution of feedback-seeking behaviour (monitoring and inquiry), metacognition, and motivation to expository text comprehension for geography (operationalized as a multiple-choice question, summary, and composite text comprehension indicator)?

RQ2: To what extent is the contribution of metacognition and motivation to expository text comprehension for geography (operationalized as a multiple-choice question, summary, and composite indicator) mediated by feedback-seeking behaviour (monitoring and inquiry)?

Chapter 5 presents a learner-oriented approach to determine the impact of the access to hints on expository geography text comprehension by deriving meaningful profiles—based on the clusters of intrapersonal factors metacognition and motivation, feedback-seeking, and expository text comprehension—to which learners can be allocated. Data from self-report questionnaires were used to measure the (clusters of) intrapersonal factors, navigational patterns derived from log files served as indicators of feedback-seeking, and performance was measured with multiple-choice questions, the writing of a summary, and a composite indicator. In addition, an effect size was calculated to test the predictive value of these profiles for expository text comprehension. The following research question was addressed: Which meaningful profiles can be derived from feedback-seeking behaviour, metacognitive reading strategies, and motivational facets, and can these profiles predict expository text comprehension?

Chapter 6 provides a summary and an integration of the findings of the four studies in this dissertation. Furthermore, theoretical and practical implications and limitations of the four studies will be discussed and extended with directions for future research.
