EFFECTIVENESS OF BLENDED LEARNING

Factors Facilitating Effective Behavior in a Blended Learning Environment

Nynke Bos

Copyright Nynke Bos 2016
ISBN/EAN 9789492231338

Effectiveness of Blended Learning: Factors Facilitating Effective Behavior in a Blended Learning Environment

DOCTORAL THESIS (PROEFSCHRIFT)

to obtain the degree of doctor at the Open Universiteit, on the authority of the rector magnificus, prof. mr. A. Oskamp, before a committee appointed by the Board for Doctorates, to be defended in public on Friday 16 December 2016 in Heerlen, at 16.00 hours precisely,

by Nynke Renate Bos, born on 1 September 1976 in Meppel

Promotor
Prof. dr. F.L.J.M. Brand-Gruwel, Open Universiteit

Other members of the assessment committee
Prof. dr. P.J. van Baalen, Universiteit van Amsterdam
Prof. dr. J. Elen, Katholieke Universiteit Leuven
Prof. dr. J.M. Voogt, Universiteit van Amsterdam
Prof. dr. P.A. Kirschner, Open Universiteit
Dr. H. Drachsler, Open Universiteit


CONTENTS

Chapter 1  Introduction
Chapter 2  A Systematic Review of Recorded Lectures and Its Influence on Course Performance
Chapter 3  The Use of Recorded Lectures and the Impact on Lecture Attendance and Course Performance
Chapter 4  Who Will Benefit from Blended Learning? Individual Differences in the Use of Blended Learning and Its Impact on Course Performance
Chapter 5  Student Differences in Regulation Strategies and Their Use of Learning Resources: Implications for Educational Design
Chapter 6  A Path Analysis of Self-Regulation, Learning Activities, and Course Performance in a Blended Learning Course
Chapter 7  Discussion
Chapter 8  Summary
Chapter 9  Samenvatting (Summary in Dutch)
Appendix A
List of Publications
Acknowledgements / Dankwoord


Chapter 1

Introduction

BACKGROUND OF THE DISSERTATION

Generally, each new medium seems to attract its own set of advocates who make claims for improved learning and stimulate research questions which are similar to those asked about the previously popular medium (Clark, 1983, p. 447).

It was almost 35 years ago that Richard Clark wrote his restatement of the claim that there were “no differences [to be] expected” when learning with the use of media as compared to the traditional face-to-face teaching methods (Clark, 1983). By combining several meta-analyses, Clark concludes that there are no learning benefits to be gained from employing any media to deliver instruction. In an attempt to redefine this debate a decade later, Kozma (1994) advocates a refocus on the conditions under which media might influence learning by concentrating on particular students, learning tasks and/or situations. In the meantime, learning with the use of media has transformed into learning using technology, yet a recent report published by the Organisation for Economic Co-operation and Development (OECD, 2015) concludes that technologies are not widely adopted in formal education, and the impact of technology on education is mixed, at best. The real contributions technology can make to teaching and learning have yet to be fully realized and explored.

The theoretical potential of technology in education to enhance teaching and learning is well described in the literature, where it is hailed as a modernization of education, making subject matter more attractive, leading to enhancements in student satisfaction, creating increased enrollments and income, and even achieving improvements in academic performance (Owston, 2013; Poon, 2013; Stockwell, Stockwell, Cennamo, & Jiang, 2015). However, research on the actual utilization of educational technologies within hybrid/blended¹ or fully online courses shows little or no effect on course performance when compared to offline courses (OECD, 2015; Sitzmann, Kraiger, Stewart, & Wisher, 2006; Spanjers, Könings, Leppink, & Van Merriënboer, 2014; Zhao, Lei, Yan, Lai, & Tan, 2005), creating the 'no significant difference' phenomenon (Russell, 1999). Despite this lack of positive results to reinforce the importance of the use of educational technology, it continues to be an important priority for governments, politicians and policymakers, especially in terms of national efforts to steer towards 'new' and 'improved' education and learning (Ministerie van onderwijs, cultuur en wetenschappen, 2015; Obama, 2011). The main reason this divide persists is the difference between the theoretical utilization and application of technology that could be used in education, versus the actual utilization and application of technology

¹ This dissertation uses the term blended learning. Other known synonyms are hybrid learning, technology-


in education to enhance teaching and learning (Selwyn, 2013). This difference is nicely illustrated in a recently published report about the actual utilization and employment of educational technology by universities and lecturers (Voogt, Sligte, Van den Beemt, Van Braak, & Aesaert, 2016). Their survey amongst lecturers revealed that, when employing technology in their education, lecturers are convinced of the impact that educational technology can have on enhancing teaching and learning. However, this belief is mainly based on experience with a certain technology, rather than on qualitative measures or specific affordances of said technology, causing a suboptimal utilization and application of educational technology.

Besides lecturers, students, too, struggle to find how to use educational technology in a way that enhances learning. When students enter university, they are often considered to be 'digital natives' (Prensky, 2001). Although the notion of students as 'digital natives' has been debunked (Kirschner & Van Merriënboer, 2013), the belief remains that students nowadays are more digitally adept, which should have led to a change in the way they interact with and respond to digital devices, the way they think and the way they process information. As such, and despite the critique on the 'digital native', there is still the assumption that students are able to use digital technologies effectively throughout all aspects of their university studies, so that this use will enhance the quality of learning and increase course performance. However, it is important to recognize the difficulties that both universities and students face in making 'good' use of educational technology (Henderson, Selwyn, & Aston, 2015).

Blended learning

Despite the gulf between the theoretical potential and the struggle both lecturers and students face in daily practice to make good use of educational technology, the technology itself has expanded rapidly. The use of the internet, computers, mobile devices and learning management systems (LMSs) is daily practice at universities (Siemens & Long, 2011), despite the lack of proof that these technologies lead to enhancements in course performance or in the quality of teaching and learning. The current development aims at implementing these known technologies in a form of blended learning (Johnson et al., 2016). The term blended learning is not clearly defined and can relate to combinations of instructional methods (e.g. discussions, recorded lectures, simulations, serious games or small workgroups), different pedagogical approaches (e.g. cognitivism, connectivism), various educational transfer methods (online and offline) or various technologies used (e.g. e-learning, podcasts or short video lectures) (Bliuc, Goodyear, & Ellis, 2007). The common distinction lies in the two different methods used in the learning environment: face-to-face versus online learning resources. In a more explicit definition, "blended learning describes learning activities that involve a systematic combination of co-present (face-to-face) interactions and technology-mediated interactions between students, teachers and learning resources" (Bliuc et al., 2007, p. 242). Blended learning is often associated with student-oriented learning, in which students have varying degrees of control over their own learning process. Blended learning could contribute to the autonomy of students, as it enables them to have more control over their learning path, and this autonomy should encourage students to take responsibility for their own learning process (Lust, Elen, & Clarebout, 2013). However, current approaches to blended learning within universities are related to logistics, providing students with additional resources that can be accessed at the student's own pace to preface or complement face-to-face learning (Henderson, Finger, & Selwyn, 2016). These approaches are not the creative, connected and collective forms of learning that are often described in the literature. This dissertation focuses on the current approach, where the focus of blended learning is to preface or complement face-to-face learning.

One of the most basic forms of complementing face-to-face learning is the deployment of recorded lectures. These recorded lectures mostly comprise integral recordings of face-to-face lectures, which are made available to students as a supplement directly after the lecture. In these cases, face-to-face lectures are often not mandatory. State-of-the-art technology has made recording lectures routine, and the recordings are subsequently embedded in many institutions. However, the impact of the usage of recorded lectures on exam performance is not clear. Moreover, research in blended learning environments shows that students do not interact with learning technologies in the same way (Inglis, Palipana, Trenholm, & Ward, 2011; Lust, Vandewaetere, Ceulemans, Elen, & Clarebout, 2011), and the same is the case for interaction with recorded lectures. There is little insight into why students do or do not use certain learning technologies and what the consequences of these (un)conscious choices are for a student's course performance, although research suggests that goal orientation (Lust et al., 2013) and different approaches to learning (Ellis, Goodyear, Calvo, & Prosser, 2008) may be important predictors of the frequency and engagement of use of the technologies in the educational environment.

Learning analytics

One of the advantages of blended learning is that the learning activities take place in an online environment, which easily generates data about these online activities. The methods and tools that aim to collect, analyze and report learner-related educational data, for the purpose of informing evidence-based educational design decisions, are referred to as learning analytics (Siemens & Long, 2011). Learning analytics measures variables such as total time online, number of online sessions or hits in the learning management system (LMS) as a reflection of student effort, engagement and participation (Zacharis, 2015). Interaction with these technologies creates learner-produced data trails (trace data) of the use of these technologies. Although initially considered a byproduct, the current transition is towards explicitly capturing these data, since analyzing them provides important and timely information on how students interact with the technology. Moreover, it presents insight into temporal aspects of the learning process. The interpretations of these data trails can be used to improve the understanding of teaching and learning, to predict course performance, and to inform support interventions (Gašević, Dawson, Rogers, & Gasevic, 2016).
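As a purely illustrative sketch of the kind of trace-data aggregation described above, the usage indicators named here (total time online, number of sessions, hits) can be computed directly from raw LMS click-stream records. The event format, field names and numbers below are hypothetical and not taken from this dissertation:

```python
from collections import defaultdict

# Hypothetical LMS click-stream records: (student_id, session_id, seconds_online).
events = [
    ("s1", "a", 300), ("s1", "a", 120), ("s1", "b", 600),
    ("s2", "c", 200), ("s2", "d", 50),
]

def usage_metrics(events):
    """Aggregate raw trace data into per-student usage indicators:
    total time online, number of distinct sessions, and number of hits."""
    time_online = defaultdict(int)
    sessions = defaultdict(set)
    hits = defaultdict(int)
    for student, session, seconds in events:
        time_online[student] += seconds   # total seconds spent online
        sessions[student].add(session)    # distinct login sessions
        hits[student] += 1                # raw click/event count
    return {s: {"time_online": time_online[s],
                "n_sessions": len(sessions[s]),
                "hits": hits[s]} for s in time_online}

metrics = usage_metrics(events)
```

In practice such indicators are only proxies for effort and engagement; as the surrounding text notes, their interpretation depends on the instructional design of the course.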

However, up to the present time the application of learner data analysis has had a strong focus on predicting course performance and student retention, but these predictions lack scalability across courses, domains and contexts, and hardly provide insight into the actual learning process (Gašević, Dawson, & Siemens, 2015).

To improve the understanding of teaching and learning, learning analytics can be used to model the dynamic process of learning. However, this modeling of the learning process should build on existing knowledge about the learning process and account for both external and internal conditions for learning, as well (Gašević et al., 2015).

Considering the external conditions is important, since these conditions – amongst others, the instructional design of a course, or a student's previous history with the use of a particular learning tool – can change the interpretation of learning data analysis and the results of the research findings. For example, a fully online course will use educational technology more intensely than a blended learning course, wherein face-to-face contact still plays a substantial role. Moreover, considering external conditions could provide valuable information for universities struggling to make 'good' use of educational technology. The focus on external conditions should target distinctive elements in a course, such as differences in how, and the extent to which, different tools are utilized within a course (Gašević et al., 2015). In order to do so, generic capabilities should be determined at the software application level, rather than at the variable level. The focus at the software application level should analyze the software at a course level and explain key risk variables of the software given its instructional aims.

Also, the internal conditions for learning with educational technology are yet to be fully understood in relation to the collection and measurement of trace data. Considering these internal conditions, such as the previously mentioned achievement goal orientation or cognitive load, could support students in making 'good' use of educational technology.

To summarize: the application of learning data analysis, combined with existing educational research about how internal and external conditions influence learning, could identify causes of the struggle that universities and students face in making 'good' use of educational technology, thereby achieving better utilization and application of educational technology in the future.

AIM OF THE DISSERTATION

The aim of this dissertation is to determine which factors facilitate or hinder effective learning for students while they interact in a hybrid or blended learning environment. These factors could be related to the instructional conditions of the course (external conditions) or based on student characteristics (internal conditions). This dissertation uses learning analytics to determine these factors and combines the results with existing educational research about internal and external conditions that influence learning.

The research reported in this dissertation aims to identify these factors by addressing the central research question: which student characteristics and external conditions could explain the degree of adoption of blended learning and the effect it has on course performance?

OUTLINE OF THE DISSERTATION

This dissertation contains seven chapters. The next two chapters, Chapters 2 and 3, focus on external conditions, with the aim of determining generic capabilities at the software application level. Chapter 2 presents a systematic review of the literature based on empirical studies on the use of lecture capturing. Through a review of recent literature, this chapter explores the added value recorded lectures (might) have on course performance. It tries to determine to what extent recorded lectures are associated with improved course performance. Moreover, it tries to determine which instructional conditions of the course (external conditions), such as subject field, duration of the course, duration of the lecture, and class size, are associated with improved course performance.

Chapter 3 addresses the first empirical study on the use of recorded lectures in an authentic setting. It specifically focuses on the relationship between lecture attendance and the use of the recorded lectures, and aims to provide insight into the added value that recorded lectures have for academic achievement. It investigates the individual use of recorded lectures and its relationship with actual lecture attendance, taking into account the different learning objectives of the course. It tries to determine to what extent students make voluntary use of face-to-face or recorded lectures and whether there is a difference in course performance between different usage patterns and different learning objectives (knowledge base vs. higher order thinking skills). Moreover, it determines whether time on task has an effect on exam performance, both on an exam assessing the knowledge base and on an exam assessing higher order thinking skills.

To answer these research questions, data on 396 students' use of recorded lectures were paired with data on their attendance at the face-to-face lectures for a course on Biological Psychology. These data were matched with the two summative assessment scores students could obtain during the course. The first assessment covered the first part of the course and focused on assessing the knowledge domain. The second assessment covered the second part of the course and focused on assessing higher order thinking skills.

The next three chapters focus on the internal conditions for learning, to enhance our insight into teaching and learning processes. Chapter 4 elaborates on individual differences between students and their use of (digital) learning resources. The study explores how students use different learning resources, online and offline, throughout a course in a blended learning setting by determining 'user profiles' by means of a cluster analysis. These clusters are based on data from the use of various learning resources: lecture attendance, use of recorded lectures, use of formative assessments and the resulting scores, and basic LMS data about duration of LMS use and number of hits within the LMS. Moreover, it tries to explain these differences in the use of learning resources by combining data on the use of the learning resources with data about the internal conditions for learning, or more specifically the regulation of the learning process. Subsequently, it determines which combinations of the use of learning resources contribute most to course performance and what the actual impact of these distinct user profiles is on course performance.

To answer these research questions, data about the use of the different learning resources, besides that of recorded lectures, and self-reported data about regulation of the learning process were added to the existing data set.
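The clustering step described above can be illustrated with a minimal k-means sketch. The feature names, the student data and the choice of two clusters are invented here for illustration; the dissertation's actual analysis may use different variables and dedicated clustering software:

```python
import math
import random

# Hypothetical per-student usage features:
# [lecture attendance, recorded-lecture views, formative-assessment attempts]
students = {
    "s1": [12, 2, 8],
    "s2": [11, 1, 9],
    "s3": [2, 14, 3],
    "s4": [3, 15, 2],
}

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: repeatedly assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            dists = [math.dist(p, c) for c in centroids]
            labels[i] = dists.index(min(dists))
        for j in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == j]
            if members:
                centroids[j] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels

names = list(students)
labels = kmeans([students[n] for n in names], k=2)
profiles = dict(zip(names, labels))  # cluster label ('user profile') per student
```

In real analyses the features would first be standardized so that, for example, LMS hit counts do not dominate attendance counts purely because of scale.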

Chapter 5 tries to determine whether differences in regulation strategies cause differences in the use of learning resources. This chapter focuses on differences in regulation strategies and analyzes how these differences are reflected in the use of different learning resources within a blended learning course. When examining the use of different learning resources, the use of offline learning resources (face-to-face activities) was combined with the use of digital learning resources, since these two modes of delivery are inextricably linked in a blended learning setting.

This research first identifies which 'regulation profiles' can be determined based on differences in regulation strategies, by means of cluster analysis. Next, it tries to establish whether differences in regulation strategies are reflected in differences in the use of (digital) learning resources and which combination of (digital) learning resources contributes most to course performance for each individual profile. Finally, it determines whether differences in regulation strategies are reflected in differences in course performance.

Chapter 6 describes the last study of this dissertation. The goal of this study was to establish a general prediction model in order to achieve a more general view of the intertwined relationships between the different external and internal variables in the study: the use of the learning resources, regulation of the learning process, and course performance. Often, multiple linear regression is used for prediction purposes, but the current study explores these interrelated variables by subjecting them to path modeling, with course performance as the outcome variable, in an attempt to identify the decisive factors that impact academic achievement.

To establish this general prediction model, Structural Equation Modeling (SEM) was used to determine which variables have a direct influence on course performance, and more specifically, how the use of different LMS tools influences course performance. Moreover, it determined which aspects of regulation of the learning process influenced the use of different LMS tools and/or course performance.

For the purpose of generalization of the results across courses and domains, an extra data set (n=516) about regulation of the learning process, the use of different learning resources, and the impact on course performance, was collected from a course on Contract Law.
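The idea behind path modeling can be sketched in miniature: a path model decomposes a variable's total effect on performance into a direct path and an indirect path through a mediator, with each structural equation estimated by least squares. All variable names, coefficients and data below are simulated for illustration only; the dissertation's actual SEM analysis involves more variables and dedicated software:

```python
import random

# Simulated toy path model (not the dissertation's data):
#   regulation -> tool_use -> performance, plus a direct regulation -> performance path.
rng = random.Random(1)
n = 2000
regulation = [rng.gauss(0, 1) for _ in range(n)]
tool_use = [0.6 * r + rng.gauss(0, 0.5) for r in regulation]
performance = [0.4 * t + 0.3 * r + rng.gauss(0, 0.5)
               for r, t in zip(regulation, tool_use)]

def center(xs):
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

r_c, t_c, p_c = center(regulation), center(tool_use), center(performance)

# Equation 1: tool_use = a * regulation + noise; simple OLS slope.
a = dot(r_c, t_c) / dot(r_c, r_c)

# Equation 2: performance = b * tool_use + c * regulation + noise;
# two-predictor OLS solved via the normal equations on centered data.
s_tt, s_rr, s_tr = dot(t_c, t_c), dot(r_c, r_c), dot(t_c, r_c)
s_tp, s_rp = dot(t_c, p_c), dot(r_c, p_c)
det = s_tt * s_rr - s_tr ** 2
b = (s_tp * s_rr - s_rp * s_tr) / det   # direct effect of tool use on performance
c = (s_rp * s_tt - s_tp * s_tr) / det   # direct effect of regulation on performance
indirect = a * b                         # regulation's effect mediated by tool use
total_effect = c + indirect              # direct + indirect paths
```

The point of the decomposition is exactly what the chapter is after: separating how much of regulation's influence on course performance runs through the use of the learning tools and how much is direct.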

Finally, the general discussion (Chapter 7) presents an overview of the main findings and conclusions of the studies described in this dissertation. It concludes with implications for educational practice and recommendations for further research.

REFERENCES

Bliuc, A. M., Goodyear, P., & Ellis, R. A. (2007). Research focus and methodological choices in studies into students' experiences of blended learning in higher education. The Internet and Higher Education, 10(4), 231-244.

Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445-459.

Ellis, R. A., Goodyear, P., Calvo, R. A., & Prosser, M. (2008). Engineering students' conceptions of and approaches to learning through discussions in face-to-face and online contexts. Learning and Instruction, 18(3), 267-282.

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2016). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. The Internet and Higher Education, 28, 68-84.

Gašević, D., Dawson, S., & Siemens, G. (2015). Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), 64-71.

Henderson, M., Finger, G., & Selwyn, N. (2016). What's used and what's useful? Exploring digital technology use(s) among taught postgraduate students. Active Learning in Higher Education.

Henderson, M., Selwyn, N., & Aston, R. (2015). What works and why? Student perceptions of 'useful' digital technology in university teaching and learning. Studies in Higher Education, 1-13.

Inglis, M., Palipana, A., Trenholm, S., & Ward, J. (2011). Individual differences in students' use of optional learning resources. Journal of Computer Assisted Learning, 27(6), 490-502.

Johnson, L., Adams Becker, S., Cummins, M., Estrada, V., Freeman, A., & Hall, C. (2016). NMC Horizon Report: 2016 Higher Education Edition. Austin, Texas: The New Media Consortium.

Kirschner, P. A., & Van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48(3), 169-183.

Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7-19.

Lust, G., Elen, J., & Clarebout, G. (2013). Students' tool-use within a web enhanced course: Explanatory mechanisms of students' tool-use pattern. Computers in Human Behavior, 29(5), 2013-2021.

Lust, G., Vandewaetere, M., Ceulemans, E., Elen, J., & Clarebout, G. (2011). Tool-use in a blended undergraduate course: In search of user profiles. Computers & Education, 57(3), 2135-2144.

Ministerie van onderwijs, cultuur en wetenschappen (2015). De waarde(n) van weten: Strategische agenda hoger onderwijs 2015-2025. The Hague, The Netherlands.

Obama, B. H. (2011). State of the Union address. Washington, DC: Office of the President.

OECD (2015). Students, Computers and Learning: Making the Connection. Paris: OECD. Retrieved June 6, 2016 from http://dx.doi.org/10.1787/9789264239555-en

Owston, R. (2013). Blended learning policy and implementation: Introduction to the special issue. The Internet and Higher Education, 18, 1-3.

Poon, J. (2013). Blended learning: An institutional approach for enhancing students' learning experiences. Journal of Online Learning and Teaching, 9(2), 271.

Prensky, M. (2001). Digital natives, digital immigrants part 1. On the Horizon, 9(5), 1-6.

Russell, T. L. (1999). The no significant difference phenomenon: A comparative research annotated bibliography on technology for distance education: As reported in 355 research reports, summaries and papers. North Carolina State University.

Selwyn, N. (2013). Distrusting educational technology: Critical questions for changing times. New York, NY, USA: Routledge.

Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE review, 46(5), 30.

Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness of web‐based and classroom instruction: A meta‐analysis. Personnel psychology, 59(3), 623-664.

Spanjers, I. A. E., Könings, K. D., Leppink, J., & Van Merriënboer, J. J. G. (2014). Blended leren: Hype of verrijking van het onderwijs? [Blended learning: Hype or enrichment of education?] Rapportage voor Kennisnet. Retrieved June 6, 2016 from http://www.kennisnet.nl/onderzoek/nieuws/blended-leren-hype-of-verrijking-van-het-onderwijs.

Stockwell, B. R., Stockwell, M. S., Cennamo, M., & Jiang, E. (2015). Blended learning improves science education. Cell, 162(5), 933-936.

Voogt, J., Sligte, H. W., Van den Beemt, A., Van Braak, J., & Aesaert, K. (2016). E-didactiek: Welke ict-applicaties gebruiken leraren en waarom? [E-didactics: Which ICT applications do teachers use and why?] Amsterdam: Kohnstamm Instituut. Retrieved June 6, 2016 from http://www.kohnstamminstituut.uva.nl/rapporten/pdf/ki950.pdf.

Zhao, Y., Lei, J., Yan, B., Lai, C., & Tan, S. (2005). What makes the difference? A practical analysis of research on the effectiveness of distance education. The Teachers College Record, 107(8), 1836-1884.


Chapter 2

A systematic review of recorded lectures and its influence on course performance

This chapter has been submitted for publication as:

Bos, N. R. & Brand-Gruwel, S. (2016). A systematic review of recorded lectures in higher education and its influence on academic achievement.

Manuscript submitted for publication.

INTRODUCTION

In the early twentieth century, Révész and Hazewinkel (1924) published their research on how lantern slides and video can have an impact on learning. In their study they explore the added value of the use of video in education and compare the impact of film with the more commonly used lantern slides. Although their results are inconclusive, they already claim great potential for video in education, mainly because of its naturalistic character.

Almost a century later, video in education is intertwined with daily practice. According to McGarr (2009), there are three distinct approaches in which video can impact learning. First, there is substitutional use, in which video is used to capture a traditional face-to-face lecture for later playback by students. Second, there is supplemental use, where video is used to complement, enhance or summarize learning material. These videos can be produced by lecturers themselves or by other instructors or institutions. Finally, there is the creative use of video in education, where students produce their own videos as part of a more formal learning approach.

The distinction between substitutional use and supplemental use is not always straightforward. This lack of distinction also applies to recorded lectures. Recorded lectures are integral recordings of face-to-face lectures and are provided to students with the aim of supplementing the face-to-face lectures. However, as research shows, some students will use these recordings as a substitute for lecture attendance (Bos, Groeneveld, Van Bruggen & Brand-Gruwel, 2016).

However, in surveys students indicate that they do not use recorded lectures as a substitute for lecture attendance (Brotherton & Abowd, 2004; McNulty et al., 2009; Gorissen, Van Bruggen & Jochems, 2012). Students report that they use the recordings primarily to elaborate their lecture notes or to prepare for the final exam (Brotherton & Abowd, 2004; Fernandes, Maley, & Cruickshank, 2008; Gorissen et al., 2012; McNulty et al., 2009; Traphagan, Kucsera, & Kishi, 2010; Van den Bossche, Verliefde, Vermeyen, & Van den Bunder, 2012). In addition, students report that the availability of recorded lectures has a positive effect on their exam performance (Cramer, Collins, Snider, & Fawcett, 2007; Van den Bossche et al., 2012), helps them to study more effectively (Brotherton & Abowd, 2004) or allows them to perform equally well with less work (Davis, Connolly, & Linfield, 2009). Results of these surveys should be interpreted with caution, since students are found to overestimate their actual lecture attendance (Fernandes et al., 2008) or the percentage of recorded lectures they have reportedly watched (Gorissen et al., 2012).

Surveys about the perceptions of the use of recorded lectures in education from a lecturer's perspective are scarcer, but mostly reflect concerns about procrastination (Elvers, Polzella, & Graetz, 2003) and lower attendance rates at the face-to-face lectures (Jensen, 2007; Silverstein, 2006).


Although students express that recorded lectures have a positive effect on their course performance or allow them to perform equally well with less work, the actual impact recorded lectures have on course performance is less clear. This lack of clarity complicates attempts to deploy policy regarding the use of recorded lectures at an institutional level, or to justify the sizable investments an institution must make to facilitate these recorded lectures. The disciplinary homogeneity of most research into the deployment of recorded lectures adds further complexity to the interpretation of the research findings, leaving open the possibility that disciplinary, contextual and course-specific effects may contribute to their positive or negative impact on course performance (Gašević, Dawson, Rogers, & Gasevic, 2016). When establishing the impact that the deployment of recorded lectures has on course performance, it is important to consider these contributing factors, since they could help to identify key risk or success variables for deployment given the instructional aims.

The different disciplines at universities can broadly be described in four areas, each with its own practice and knowledge (Becher, 1994). First, there are the 'hard-pure' disciplines, like mathematics and chemistry, where knowledge is cumulative and simplified, and results in explanation or discovery. Second, there are the 'soft-pure' disciplines, like history and philosophy, where knowledge is holistic and results in understanding or interpretation. Next, there are the 'hard-applied' disciplines, like engineering and medicine, where knowledge is pragmatic and results in products or techniques. And last, there are the 'soft-applied' disciplines, like social sciences and law, where knowledge is functional and results in procedures and protocols. Since the concepts of knowledge differ between these disciplines, we hypothesize that the impact of lectures, and subsequently the impact of recordings of these, is likely to differ between disciplines. Moreover, since the concepts of knowledge evolve from undergraduate towards graduate education, this disciplinary effect is also likely to influence the deployment of recorded lectures and course performance.

The contextual effects of a course consist of elements related to the context of the course: duration of the course, duration of the lecture and class size. The duration of the course and the duration of the lecture can affect the use of recorded lectures, since it is likely that either a longer course or longer lectures will increase the cognitive load, and more specifically the germane load (Sweller, Van Merriënboer, & Paas, 1998).

The class size might have an impact on the use of recorded lectures, since students report that they use recorded lectures to manage distractions during lectures (Gorissen et al., 2012), and such distractions might increase with class size.

Third, the course specific elements concern the availability of the recorded lectures during a course and lecture attendance. First, the recorded lectures can be made available to students throughout the course or only for a limited amount of time. Second, lecture attendance may or may not be mandatory; mandatory attendance automatically leads to supplemental use of the recorded lectures instead of substitutional use.


Through a review of the literature, this paper explores the added value recorded lectures might have on course performance while considering disciplinary, contextual and course specific elements. This review aims to examine two key questions:

1. Can we identify specific elements, disciplinary, contextual or course specific, that are associated with improvements in course performance when deploying recorded lectures?

2. To what extent are recorded lectures associated with improvements on course performance?

METHODS AND MATERIALS

For the purpose of this study we attempt to identify all experimental studies that examined the use of recorded lectures and made an assertion about their impact on course performance.

Eligibility criteria

The study eligibility criteria included articles published in English that examined the effectiveness of recorded lectures on course performance. The inclusion criteria for the systematic review are:

o The recorded lectures are made available after the lecture with the aim of supplementing the lecture (not substituting it). They comprise integral recordings of the face-to-face lectures.

o Studies that draw conclusions about the relation between the deployment of recorded lectures and its influence on course performance, and that provide objective data on final grades of the course (not self-assessment or surveys).

o Studies which include data on actual use of these recorded lectures based on objective data (trace data and logs) instead of self-assessment, since previous research has shown that students either overestimate their lecture attendance (Fernandes et al., 2008) or overestimate their use of recorded lectures (Gorissen et al., 2012).

o Studies that report on multiple recordings during a course, not on the recording of a single lecture.

o Articles published after 2002, since preliminary research on video in education began to emerge in 2002 (Kay, 2012).
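The screening logic implied by these criteria can be sketched as a simple filter. This is an illustrative sketch only; the field names and example records are hypothetical and do not come from the actual screening procedure.

```python
# Hypothetical sketch of the eligibility screen; field names are illustrative.

def meets_criteria(study: dict) -> bool:
    """Return True if a candidate study satisfies all inclusion criteria."""
    return (
        study["supplemental_recordings"]   # integral recordings offered alongside lectures
        and study["objective_grades"]      # final grades, not self-reported performance
        and study["objective_usage_data"]  # trace data / logs, not surveys
        and study["multiple_recordings"]   # more than a single recorded lecture
        and study["year"] >= 2002          # research on video in education emerged in 2002
    )

candidates = [
    {"supplemental_recordings": True, "objective_grades": True,
     "objective_usage_data": True, "multiple_recordings": True, "year": 2010},
    {"supplemental_recordings": True, "objective_grades": False,  # self-reported grades
     "objective_usage_data": True, "multiple_recordings": True, "year": 2012},
]

included = [s for s in candidates if meets_criteria(s)]
print(len(included))  # 1
```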


Information sources, search and selection

A converging procedure was followed to ensure a high quality review of the literature on recorded lectures based on the above-mentioned criteria. First, a database search was completed with a wide range of key terms including recorded lecture(s), webcast(s), vodcast(s) and podcast(s), lecture method, virtual lecture(s) and web lecture(s), but also including brand names of several lecture capture systems such as Mediasite, Presentations 2 Go, Kaltura, Echo 360, Panopto and Tegrity.

The systematic literature search was conducted in PsycINFO (Ovid), ERIC (Ovid), Medline (Ovid MEDLINE In-Process & Other Non-Indexed Citations and Ovid MEDLINE version), and Web of Science by one of the authors and a qualified librarian. PsycINFO is one of the most widely used databases in the behavioral and social sciences, while ERIC is one of the largest databases in educational science. Web of Science was selected because of its interdisciplinary nature and the high quality of its indexed journals. Because this type of educational research is relatively common in the medical field, we also added Medline, one of the most widely used medical databases. We developed a search strategy in PsycINFO and adapted it for the other databases (see Appendix). No language limits were applied. Since preliminary research on the use of video in education began to emerge in 2002 (Kay, 2012), a limit for publication year was applied.

The records were imported into reference management software, and exact duplicates were identified and discarded. All titles and abstracts identified through the search strategy were screened. We obtained full reports of any title or abstract that seemed likely to meet the inclusion criteria. The search strategy yielded a total of 1735 citations after duplicate citations were removed using RefWorks. The eligibility criteria guided the selection of studies with the potential to be included in the review.

The third step of the search was a forward search, checking the studies that cite the articles selected in the first two steps. This added one title to the systematic review.

The last step was a manual search in key journals that publish about educational technology or educational innovation, including, but not limited to, Computers and Education, British Journal of Educational Technology, Australasian Journal of Educational Technology and Computers in Human Behavior, although the majority of these titles are also covered by the searched databases. At the end of the search and selection process, 14 peer-reviewed articles met the criteria (see Table 1).


Table 1

Overview of the Included Studies: authors, year and journal of publication

Authors | Year | Journal
Bacro, Gebregziabher, & Ariail | 2013 | Anatomical Sciences Education
Bacro, Gebregziabher, & Fitzharris | 2010 | Anatomical Sciences Education
Bollmeier, Wenger, & Forinash | 2010 | American Journal of Pharmaceutical Education
Brooks, Erickson, Greer, & Gutwin | 2014 | Computers and Education
Cramer, Collins, Snider, & Fawcett | 2007 | British Journal of Educational Technology
Drouin | 2014 | Teaching of Psychology
Ford, Burns, Mitch, & Gomez | 2012 | Active Learning in Higher Education
Johnston, Massa, & Burne | 2013 | Nurse Education in Practice
McNulty, Hoyt, Gruener, Chandrasekhar, Espiritu, Price, & Naheedy | 2009 | BMC Medical Education
Traphagan, Kucsera, & Kishi | 2010 | Educational Technology Research and Development
von Konsky, Ivins, & Gribble | 2009 | Australasian Journal of Educational Technology
Wieling & Hofman | 2010 | Computers and Education
Wiese & Newton | 2013 | Canadian Journal for the Scholarship of Teaching and Learning
Williams, Birch, & Hancock | 2012 | Australasian Journal of Educational Technology

Data items and analysis

First, each selected study was analyzed on its disciplinary elements: the subject area taught and the level of the course (undergraduate or graduate). Next, the contextual elements were scored: duration of the course in weeks, duration of the lecture in minutes, and class size. Since the number of students enrolled in a course (the class size) sometimes differs from the number of students who eventually participated in the research and analysis, the population over which an analysis was carried out was scored separately. Last, the course specific elements were scored: duration of the availability of the recorded lectures throughout the course and whether or not lecture attendance was mandatory. A meta-analysis was not conducted since the methods of population sampling, data collection and analysis varied considerably among the studies.

Table 2 shows the coding scheme of the selected studies examining the impact of recorded lectures on course performance.


Table 2

Coding of the reviewed research papers on recorded lectures

Variable | Description | Scoring criteria
Year | Year the study was conducted | Y – Year
Subject Area | Subject area that was taught using recorded lectures | HP – Hard Pure; SP – Soft Pure; HA – Hard Applied; SA – Soft Applied
Level of the course | Level of the course taught | Grad – Graduate; Under – Undergraduate
Duration of the course | The duration of the course in weeks | DC – Duration Course
Duration of the (recorded) lecture | The duration of the lecture in minutes | DL – Duration Lecture
Size | The number of students that were enrolled in the course | N – Number; NS – Not Specified
Population | Group size over which the analysis of the course grade has been made | P – Number; NS – Not Specified
Availability | Is the recorded lecture available throughout the course or only for a limited time period? | T – Throughout the course; Lim – Limited Time; NS – Not Specified
Attendance | Is lecture attendance mandatory? | Yes; No; NS – Not Specified
Performance | What was the impact of recorded lectures on performance in the course (final grade)? | Neg – Negative; Pos – Positive; Incon – Inconclusive; NI – No Impact


RESULTS

First, the main characteristics of the studies will be discussed and the key study features will be presented. Next, the main results will be discussed, subdivided into disciplinary, contextual and course specific elements. Lastly, we will describe the general results concerning improved course performance.

Study characteristics

The studies were published between 2007 and 2014. All selected studies were peer- reviewed journal papers. Sample sizes ranged from 57 to 1341 students. Table 3 summarizes the key study features.

Disciplinary elements

Subject area

The five studies that reported a positive impact on course performance were taught in different subject fields: Science & Technology (Brooks, Erickson, Greer, & Gutwin, 2014), Psychology (Cramer et al., 2007), Medicine (Wiese & Newton, 2013) and Business and Law (Wieling & Hofman, 2010; Williams, Birch, & Hancock, 2012). These subjects were also taught in studies that reported a negative impact (Drouin, 2014) or no impact on course performance (Traphagan et al., 2010; Von Konsky, Ivins, & Gribble, 2009; Ford, Burns, Mitch, & Gomez, 2012; Johnston, Massa, & Burne, 2013; McNulty et al., 2009). This would indicate that subject area has no impact when determining the effect recorded lectures have on course performance. However, only applied disciplines were represented in this research, indicating that the hard-pure and soft-pure disciplines either do not tend to use recorded lectures or do not tend to evaluate their use by means of research.

Level of the course

All the reviewed studies were taught at an undergraduate level. Therefore, we cannot conclude whether the level of the course has an influence on course performance.


Table 3

Coded articles included in the review

Authors | Year | Subject | Level | DC (weeks) | DL (min) | N | P | Availability | Attendance | Performance
Bacro et al. | 2013 | HA | Under | 11 | 50 | 57 | 57 | NS | Yes | NI
Bacro et al. | 2010 | HA | Under | 18 | 50 | NS | NS | T | NS | NI
Bollmeier et al. | 2010 | HA | Under | Semester | 180 | 195 | 122 | Lim | No | NI
Brooks et al. | 2014 | HA | Under | 16 | NS | 636; 546; 197 | 232; 333; NS | T | No | Pos
Cramer et al. | 2007 | SA | Under | Semester | 50 | 839 | 140 | T | NS | Pos
Drouin | 2014 | SA | Under | Semester | NS | 141 | 141 | T | No | Neg
Ford et al. | 2012 | SA | Under | Semester | NS | 119 | 119 | NS | No | NI
Johnston et al. | 2013 | HA | Under | NS | 180 | 499 | 499 | NS | No | NI
McNulty et al. | 2009 | HA | Under | 10 | NS | 143; 139 | 143; 139 | NS | NS | NI
Traphagan et al. | 2010 | SA | Under | NS | 50 | 364 | 364 | T | No | NI
Von Konsky et al. | 2009 | SA | Under | 15 | 120 | 148 | 108 | NS | No | NI
Wieling & Hofman | 2010 | SA | Under | NS | 90 | 1341 | 474 | T | No | Pos
Wiese & Newton | 2013 | HA | Under | Semester | NS | 288; 602; 597 | 288; 602; 597 | T | NS | Pos
Williams et al. | 2012 | SA | Under | 13 | 45 | 866 | 371 | T | No | Pos


Contextual elements

Duration of the course

The duration of the course varied from 10 weeks (McNulty et al., 2009) to 16 weeks (Brooks et al., 2014). Four studies did not report a specific duration of the course (Cramer et al., 2007; Drouin, 2014; Wiese & Newton, 2013; Wieling & Hofman, 2010). Seven studies used the general expression 'semester' to indicate the length of the course without specifying this duration any further. Since only six studies reported the exact duration of the course, this number of studies is too small to draw a conclusion about the relation between the duration of the course and the use and impact of recorded lectures on course performance.

Duration of the lecture

Lecture duration was not specified in two of the studies (Brooks et al., 2014; Wiese & Newton, 2013). The remaining studies report a lecture duration varying from 45 minutes (Williams et al., 2012) up to 180 minutes (Bollmeier, Wenger, & Forinash, 2010; Johnston et al., 2013). These lecture durations were similar to those in studies that reported no significant difference in course performance. The assumption that recorded lectures would be more effective for longer lectures is not confirmed in this review, although this preliminary conclusion is based on a small number of studies due to a lack of reporting of lecture duration.

Class size

The class sizes in the studies that reported a positive effect were considerable, ranging from 197 students (Brooks et al., 2014) to 1341 students (Wieling & Hofman, 2010). However, four of the five positive reports (Brooks et al., 2014; Cramer et al., 2007; Wieling & Hofman, 2010; Williams et al., 2012) used only a proportion of the student population of the course to assess the impact of recorded lectures on course performance. These four studies required informed consent by students for research purposes. This informed consent means that only part of the population was studied, while the majority of the other studies report on the entire population. These differences in class size and in the actual population studied make any conclusion about the relation between class size and the deployment of recorded lectures premature.

Course specific elements

Availability of the recorded lectures

The studies can be grouped according to the level at which they report the use of recorded lectures: from a general class level to a student specific level. Four studies reported on the availability of recorded lectures on a class level and disregarded the actual use on a student level, by reporting the number of users and non-users or


reporting a duration of the use of the recordings on a class level (Drouin, 2014; Ford et al., 2012; Traphagan et al., 2010; Wiese & Newton, 2013). Four studies measured the use of recorded lectures on an individual level by means of accessing the recording (clicking on the link), but disregarded the duration of that use (Johnston et al., 2013; McNulty et al., 2009; Von Konsky et al., 2009; Williams et al., 2012). Another four studies reported on accessing the lecture (clicking on the link) on a student level and also took the duration of that use in minutes into account (Bacro, Gebregziabher, & Ariail, 2013; Bacro, Gebregziabher, & Fitzharris, 2010; Bollmeier et al., 2010; Cramer et al., 2007). One study distinguished whether or not a student had used the recording of each specific lecture in a course (Brooks et al., 2014) and one distinguished whether or not a student had used a lecture recording at all during the course (Wieling & Hofman, 2010). Only one study reported limited availability of the recorded lectures (Bollmeier et al., 2010) and, despite its lengthy lectures, it reported no impact on course performance.

Lecture attendance

Whether the recorded lecture is used as a supplement or a substitute has an impact on achievement: students who use the recorded lectures as a substitute are at a disadvantage in terms of achievement (Bos et al., 2016; Whitley-Grassi & Baizer, 2010). In one course, attendance was required (Bacro et al., 2013), so students could only use the recorded lectures as a supplement. In some courses attendance at the face-to-face lectures was not mandatory but lecture attendance was registered (Bollmeier et al., 2010; Drouin, 2014; Traphagan et al., 2010; Von Konsky et al., 2009). In these cases, it could be determined whether students used the recorded lectures as a supplement or as a substitute for the face-to-face lectures. In other courses attendance at the face-to-face lectures was not mandatory either, but attendance was not registered, making it impossible to determine whether students used the recorded lectures as a supplement or substitute (Brooks et al., 2014; Johnston et al., 2013; McNulty et al., 2009; Wieling & Hofman, 2010; Wiese & Newton, 2013; Williams et al., 2012). In three studies it was not clear whether attendance at the face-to-face lectures was mandatory and/or registered (Bacro et al., 2010; Cramer et al., 2007; Ford et al., 2012).

Several authors mention a decline in lecture attendance (Bollmeier et al., 2010; Drouin, 2014; Wiese & Newton, 2013; Williams et al., 2012), indicating that students use the recorded lectures as a substitute. Other studies report this drop in lecture attendance as well, with attendance being 50% lower in the lecture capture condition (Johnston et al., 2013; Traphagan et al., 2010). A generally low usage of recorded lectures is reported, showing that approximately 70% of students can be classified as non-users or low users (Bacro et al., 2010; Brooks et al., 2014).


Improved course performance

Of the 14 selected studies, five found a positive relationship between the use of recorded lectures and course performance (Brooks et al., 2014; Cramer et al., 2007; Wieling & Hofman, 2010; Wiese & Newton, 2013; Williams et al., 2012), one study found a negative relationship (Drouin, 2014) and eight studies found no significant relationship between the use of recorded lectures and course performance (Bacro et al., 2010; Bacro et al., 2013; Bollmeier et al., 2010; Ford et al., 2012; Johnston et al., 2013; McNulty et al., 2009; Traphagan et al., 2010; Von Konsky et al., 2009).
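This outcome tally can be reproduced directly from the Performance codes in Table 3; the sketch below is purely illustrative, with study labels abbreviated for readability.

```python
from collections import Counter

# Performance codes per study as coded in Table 3; labels abbreviated.
performance = {
    "Bacro 2013": "NI", "Bacro 2010": "NI", "Bollmeier 2010": "NI",
    "Brooks 2014": "Pos", "Cramer 2007": "Pos", "Drouin 2014": "Neg",
    "Ford 2012": "NI", "Johnston 2013": "NI", "McNulty 2009": "NI",
    "Traphagan 2010": "NI", "Von Konsky 2009": "NI", "Wieling 2010": "Pos",
    "Wiese 2013": "Pos", "Williams 2012": "Pos",
}

tally = Counter(performance.values())
print(tally["Pos"], tally["Neg"], tally["NI"])  # 5 1 8
```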

DISCUSSION

The current review explored the impact that recorded lectures have on course performance. Its aim was to identify specific elements, disciplinary, contextual or course specific, that are associated with improvements in course performance. Although it initially seems that this topic has been covered extensively in the literature, a closer look at the research methods of those articles suggests otherwise. The majority of the research on the relation between the use of recorded lectures and course performance uses self-reports to determine the effect on either attendance (Brotherton & Abowd, 2004; Kinlaw, Dunlap, & D'Angelo, 2012) or course performance (Gorissen, Van Bruggen, & Jochems, 2013; Larkin, 2010) and was therefore not included in the systematic review. Only 14 studies met the eligibility criteria of the current literature review.

The studies included in the systematic review are all based in the applied disciplines, so the results of the current review are restricted to those disciplines and do not reflect the hard-pure and soft-pure disciplines. However, no effect of either the hard-applied or the soft-applied discipline could be determined.

The effect of the contextual elements concerning the duration of the course and the duration of the lecture could not be determined due to a lack of studies that reported these durations specifically. Regarding class size and population, some selection bias occurs in the results. Four of the five studies that reported a positive impact of the use of recorded lectures (Brooks et al., 2014; Cramer et al., 2007; Wieling & Hofman, 2010; Williams et al., 2012) used only a sample of the entire population to determine the effect, due to informed consent issues. It is plausible that only students with a positive attitude towards recorded lectures gave their consent to participate in the research. This selection bias may explain the positive relationship between the use of recorded lectures and course performance found in those studies. Earlier, Von Konsky et al. (2009) reported self-selection issues when students were asked to consent to the research. However, a positive attitude towards something that is of value for learning has been found to be both associated with higher actual use (Von Konsky et al., 2009) and non-predictive of actual use (Lust, Elen, & Clarebout, 2012).


The majority of the reported issues seem to be related to course specific factors. A majority of the studies report a relatively low number of students who actually use the recorded lectures (Bacro et al., 2010; Bacro et al., 2013; Bollmeier et al., 2010; Cramer et al., 2007; Drouin, 2014; McNulty et al., 2009). The actual usage is lower than anticipated, and seemingly contradicts the positive attitude students express towards recorded lectures in surveys (Van den Bossche et al., 2012; Cramer et al., 2007; Davis et al., 2009). Students indicate that they would like recorded lectures to be available, but access does not directly imply usage. Actual usage is not influenced by course elements such as difficulty of the subject, length of the course or lecture, or class size. It is, however, influenced by the quality of teaching, for instance teaching at a fast pace or murmuring by the lecturer (Bacro et al., 2013; Bollmeier et al., 2010).

In surveys students also indicate that they do not use the recorded lectures as a substitute for lecture attendance (Brotherton & Abowd, 2004; McNulty et al., 2009; Gorissen et al., 2012), although a considerable number of authors mention a decline in lecture attendance (Bollmeier et al., 2010; Drouin, 2014; Traphagan et al., 2010; Johnston et al., 2013; Wiese & Newton, 2013; Williams et al., 2012), indicating that students do use the recorded lectures as a substitute for lecture attendance. This drop in attendance was not reflected in lower course performance. The drop in attendance might be caused by the fact that recorded lectures are not used as a supplement to the face-to-face lectures: a recording duplicates the lecture and therefore reduces the need for attendance. The decline in lecture attendance varies across fields of study, with medical students showing a tendency to attend lectures frequently (Bacro et al., 2010; Von Konsky et al., 2009; Wiese & Newton, 2013), reflecting different student populations and their varying needs. The deployment of recorded lectures thus has an impact on lecture attendance that differs between fields of study. Attendance mediates achievement and mainly affects groups who might already be at risk of failure (Drouin, 2014; Williams et al., 2012).

Summarized, the majority of the selected studies reported no significant relation between the use of recorded lectures and course performance. Eight studies found no significant relation between the use of recorded lectures and final grade (Bacro et al., 2010; Bacro et al., 2013; Bollmeier et al., 2010; Ford et al., 2012; Johnston et al., 2013; McNulty et al., 2009; Traphagan et al., 2010; Von Konsky et al., 2009), while five studies found a positive effect (Brooks et al., 2014; Cramer et al., 2007; Wieling & Hofman, 2010; Wiese & Newton, 2013; Williams et al., 2012). Only one study reported a negative effect of the use of recorded lectures on course performance (Drouin, 2014).

Some of the studies report different usage patterns of the recorded lectures, which might be caused by differences in learning strategies or approaches to learning (Bacro et al., 2010; Bacro et al., 2013; Brooks et al., 2014; Johnston et al., 2013; Von Konsky et al., 2009; Wiese & Newton, 2013). Wiese and Newton (2013) make a distinction between surface learners and deep learners and show that students with a surface learning approach watch the entire recording while deep learners only watch snippets of the recording. Brooks et al. (2014) see similar strategies, with high activity viewers, students who watch the recordings on a regular basis during the course, and low activity users who show cramming behavior right before the exam. This cramming behavior is related to the educational immaturity of the student (Drouin, 2014) and can be counterproductive (Bacro et al., 2013; McNulty et al., 2009; Williams et al., 2012). It seems necessary to guide less mature students in how to use the recordings for revision and review in order to benefit learning. Students do not automatically have the background to judge which learning activities best support their learning needs (Johnston et al., 2013).

To sum up, it is unfair to say that the deployment of recorded lectures within the curriculum inevitably leads to higher or lower course performance. Specific elements, whether disciplinary, contextual or course specific, are not responsible for its success or failure. Despite these challenges, as the current results show, it is possible to identify key risk and success variables given the instructional aims. For recorded lectures these variables are: the negative relationship between lecture attendance and the use of recorded lectures, the large individual differences in the use of recorded lectures, and the differences in impact on course performance, indicating that the educational goals and learning resources must be well aligned in order to enhance course performance.

However, the studies in the systematic review lack a focus on actual use of the recorded lectures, and most studies confound the deployment of recorded lectures with their actual use.

Deployment of recorded lectures leads to a drop in lecture attendance, accompanied by a low usage of the recordings. Lecture attendance seems to depend on the field of study and is not directly linked to the deployment of recorded lectures. Actual use of the recorded lectures does not depend on the educational setting but arises from teaching quality and the learning strategies used by students. Research conducted on a more granular level shows that for some groups with a specific usage pattern, recorded lectures can be beneficial (Brooks et al., 2014; Wiese & Newton, 2013).

Limitations

The limitations of the current study lie mainly in the methodological limitations of the reviewed studies. Five of the studies used a quasi-experimental design, with the course being given twice at the same location, e.g. a morning and an afternoon group (Drouin, 2014; Ford et al., 2012; Traphagan et al., 2010), with the course being taught at two different campuses simultaneously (Johnston, Massa, & Burne, 2013), or with students being randomly assigned to either a group with access to recorded lectures within a course or a group without access to recorded lectures (Wieling & Hofman, 2010).

Two studies used a historical control group (Bollmeier et al., 2010; Wiese & Newton, 2013). Two studies used modeling techniques to provide more insight into the relationship between various independent variables and the dependent variable, by either performing a regression analysis (Williams et al., 2012) or clustering student data based on their usage of the lecture recording system (Brooks et al., 2014). These differences in methodology make comparisons of recorded lectures and course performance across studies challenging.

Most research does not consider lecture attendance in relation to the use of recorded lectures, making it impossible to establish whether students use the recordings as a supplement to or a substitute for the actual lectures. Many studies do not segment the users from the non-users but calculate a correlation on a classroom level, in which case no distinction is made between supplemental and substitutional use. In addition, the designs of the reviewed studies vary, which makes it harder to draw empirical conclusions.

Implications for daily practice

Despite the variations in designs and methods, most studies show that offering lecture recordings alongside face-to-face lectures does not have a significant impact on course performance. Students use recorded lectures as a substitute for face-to-face lectures instead of a supplement, leading to a decline in lecture attendance (Drouin, 2014; Johnston et al., 2013; Traphagan et al., 2010).

Studies reporting a positive effect of the use of recorded lectures on course performance used a selection of the population, causing a selection bias. Given that bias, we can conclude that recorded lectures can be beneficial for certain groups of students, but will rarely improve the results of an entire cohort. Future research should focus more on identifying those groups and determining which behaviors and personal characteristics (such as motivation and self-regulation) may be responsible for this positive relationship. Future studies might also want to consider whether a student is a novice at using recorded lectures in his or her learning strategy. As K-12 education increasingly uses video in teaching, new students may approach the use of recorded lectures with already shaped habits and expectations, influencing their use of recorded lectures in a way previous cohorts have not experienced.

Recorded lectures can have a positive impact on course performance when used as a supplement, but students have a tendency to use them as a substitute. Limiting the availability of the recordings in time could diminish this substitution effect, but more research is needed to determine the optimal scenario for availability: after the lecture, before the exam, and the amount of time made available.

Besides ensuring that students use recorded lectures as a supplement rather than a substitute for the face-to-face lectures, the educational design must also keep overall use in check, as excessive use of recorded lectures can be counterproductive (McNulty et al., 2009; Williams et al., 2012).


REFERENCES

Bacro, T. R., Gebregziabher, M., & Fitzharris, T. P. (2010). Evaluation of a lecture recording system in a medical curriculum. Anatomical Sciences Education, 3(6), 300-308.

Bacro, T. R., Gebregziabher, M., & Ariail, J. (2013). Lecture recording system in anatomy. Anatomical Sciences Education, 6(6), 376-384.

Becher, T. (1994). The significance of disciplinary differences. Studies in Higher Education, 19(2), 151-161.

Bollmeier, S. G., Wenger, P. J., & Forinash, A. B. (2010). Impact of online lecture-capture on student outcomes in a therapeutics course. American Journal of Pharmaceutical Education, 74(7), 127.

Bos, N. R., Groeneveld, C., Bruggen, J., & Brand-Gruwel, S. (2016). The use of recorded lectures in education and the impact on lecture attendance and exam performance. British Journal of Educational Technology, 47(5), 906-917.

Brooks, C., Erickson, G., Greer, J., & Gutwin, C. (2014). Modelling and quantifying the behaviours of students in lecture capture environments. Computers & Education, 75, 282-292.

Brotherton, J. A., & Abowd, G. D. (2004). Lessons learned from eClass: Assessing automated capture and access in the classroom. ACM Transactions on Computer-Human Interaction (TOCHI), 11(2), 121-155.

Cramer, K. M., Collins, K. R., Snider, D., & Fawcett, G. (2007). The virtual lecture hall: Utilisation, effectiveness and student perceptions. British Journal of Educational Technology, 38(1), 106-115.

Davis, S., Connolly, A., & Linfield, E. (2009). Lecture capture: Making the most of face-to-face learning. Engineering Education, 4(2), 4-13.

Day, J., & Foley, J. (2006, April). Evaluating web lectures: A case study from HCI. In CHI'06 Extended Abstracts on Human Factors in Computing Systems (pp. 195-200). ACM.

Drouin, M. A. (2014). If you record it, some won't come: Using lecture capture in introductory psychology. Teaching of Psychology, 41(1), 11-19.

Elvers, G. C., Polzella, D. J., & Graetz, K. (2003). Procrastination in online courses: Performance and attitudinal differences. Teaching of Psychology, 30(2), 159-162.

Fernandes, L., Maley, M., & Cruickshank, C. (2008). The impact of online lecture recordings on learning outcomes in pharmacology. Journal of the International Association of Medical Science Educators, 18(2), 62-70.

Ford, M. B., Burns, C. E., Mitch, N., & Gomez, M. M. (2012). The effectiveness of classroom capture technology. Active Learning in Higher Education, 13(3), 191-201.
