
The Use of Learning Analytics and its Effect on Students’ Learning Outcomes in Adult Education

Charlotte Marks (10506942)
Bachelor thesis Educational Sciences

University of Amsterdam
Mentor: Tessa van Schijndel

Date: 25-06-2018
Number of words: 10717


Index

1. Abstract

2. Method

3. Introduction

4. How could Learning Analytics be used to enhance students’ learning outcomes?

5. What is the current evidence of the positive effects of Learning Analytics on students’ learning outcomes?

6. Conclusion and Discussion


1. Abstract

In the last decade, the amount of available data has grown enormously. This data is used for many purposes in different areas of life, including education. Gathering, analyzing, and acting on learners’ data is called Learning Analytics. Learning Analytics is expected to transform education on different levels by improving learning for students, teaching for instructors, and decision-making for educational institutions and government. It is also expected to be a powerful means for researchers to better understand the learning process. However, Learning Analytics also faces technical, practical, and ethical limitations. In this thesis, the potential of Learning Analytics in adult education, its current effects on learning outcomes, and its limitations are investigated. The different factors that play a role in every Learning Analytics intervention are elaborated, namely the people, data, and technologies. Also, the evidence of positive effects of Learning Analytics on students’ learning outcomes is investigated. Results show that in order to implement a Learning Analytics intervention successfully, all factors should be integrated and considered carefully. The results also show that Learning Analytics interventions could have a positive effect on students’ learning outcomes, but that the current evidence is inconclusive. There are also some serious limitations that need attention, such as the limited availability of comprehensive data, the data literacy of students and instructors, the limited use of educational theory, and the lack of a clear ethical framework. More research is necessary to address these limitations and to help Learning Analytics reach its potential.


2. Method

Because this thesis is a literature study, the method used is a literature search. In the initial stage of the process, literature was searched via Google Scholar using the keywords ‘Learning Analytics’. This resulted in the first articles to explore the subject. In this stage, the goals were to learn more about the subject and to formulate a research question. After the research question was formulated and some familiarity with the subject was gained, a deeper and more specific literature search was necessary. The search was extended with the database PsycINFO. The keywords ‘Learning Analytics’ were used, but also other keywords like ‘data mining’, ‘social network analysis’, ‘data visualization’, ‘statistics’, ‘theoretical framework’, ‘ethic*’, etc. The AND/OR operators were also used to combine keywords. The results were filtered on publication year (less than ten years old) and publication type (peer-reviewed journal). Besides Google Scholar and PsycINFO, snowballing was used to find more literature. Snowballing was applied to the articles that were found via Google Scholar and PsycINFO. The goal was to build a body of knowledge to write the introduction and investigate the first main question.

When literature for the second main question was needed, difficulties arose. Google Scholar and PsycINFO did not provide the necessary type and amount of articles to investigate the second main question. Therefore, the search was extended to another database, namely Scopus. This database was found via the UvA Database-selector under the field of Information Studies. It provided articles from more technical fields too, and in this database enough articles were found. In Scopus, the same strategy and filters were used. The previous keywords were extended with ‘implementation*’, ‘learning outcome*’, ‘effect*’, ‘adult education’, ‘student success’, ‘performance’, ‘universit*’, etc.
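As an illustration, a combined query of the kind described above could look as follows; this particular string is a hypothetical reconstruction for the reader, not one recorded during the actual searches:

```
("learning analytics") AND ("learning outcome*" OR effect* OR "student success") AND ("adult education" OR universit*)
```

After this, a basic collection of articles was assembled to investigate the main questions of this thesis.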


When more articles were needed, Google Scholar, PsycINFO, Scopus, and snowballing were used to find additional literature.


3. Introduction

All over the world, data is becoming increasingly important. With the rapid development of technology and people using those technologies, people leave traces of data everywhere: for example, when they like a post on Facebook, search for something on Google, or transfer money. The amount of data that is collected and published has grown fast and doubles every two years (Lohr, 2012). This data is used for all kinds of purposes and is intensively used in economics, business, technology, healthcare, government, and also education. All these fields use data for their own purposes. For instance, in healthcare, data is used to recognize illnesses at earlier stages, to help doctors decide which treatment to use, and to make health management more efficient (Raghupathi & Raghupathi, 2014). But the process is basically similar in all fields: gathering data, cleaning the data, analyzing the data to find patterns, trying to make predictions for the future, and making informed decisions based on those predictions (Chatti, Dyckhoff, Schroeder, & Thüs, 2012).

Even though this development towards the use of data started decades ago, the field of education has made limited use of data in comparison to other fields (Siemens, 2013). Despite this lag, there is a big growth of interest in Learning Analytics. Siemens (2010) defined Learning Analytics as: “the use of intelligent data, learner-produced data, and analysis models to discover information and social connections, and to predict and advise on learning” (p. 1). Different definitions of Learning Analytics are described in the literature (EDUCAUSE, 2010; Elias, 2011; Johnson, Smith, Willis, Levine, & Haywood, 2011). Those definitions all have in common that Learning Analytics is about converting data into useful interventions to stimulate learning.

In the Learning Analytics literature, several models for the Learning Analytics process have been identified (Clow, 2012; Chatti et al., 2012; Elias, 2011). Clow (2012) described the Learning Analytics process as an iterative cycle of four steps, namely: learners, data, metrics, and interventions (see Figure 1). The cycle usually starts with learners. These could be learners of all kinds, such as students in primary or secondary school or university, but also a learner who takes online classes or a working professional who takes part in education for the sake of professional development. In other words: anyone engaged in learning.

When those learners interact with the educational environment and content, they generate data, which is the next step in the cycle. This step focuses on collecting the data generated by the learners. Examples of this kind of data are demographics and previous academic performance of the learner, login and clickstream information in the Learning Management System (LMS), or interaction and collaboration with peer students in forum postings. Mahnegar (2012) defines an LMS as: “software used for delivering, tracking and managing training/education. LMSs range from systems for managing training/educational records to software for distributing courses over the Internet and offering features for online collaboration.” (p. 148). Examples of LMSs are Blackboard, Canvas, and Moodle. Consequently, this data needs to be converted into metrics or analytics; most of the technology comes into play in this step. The goal of the metrics is to provide insight into the learning process of the analyzed learners. An example could be a list of students who are at risk of failing or dropping out. In the last step, interventions, the metrics are actually acted upon. To continue with the last example, this step could be a teacher devoting more attention to the students who are at risk. After this last phase of intervention, the intervention should reach the learners and have an effect on them, which makes the cycle run again.


Figure 1. The Learning Analytics cycle, from Clow (2012).

Other models are quite similar. Chatti et al. (2012) also described the Learning Analytics process as an iterative cycle, which consists of learners, the capture of the data these learners generate, processing the data so it can be analyzed, and making decisions and interventions based on these analytics. After this last phase of intervention, the cycle moves to the learners again and starts anew. Elias (2011) describes the cycle of Learning Analytics in a similar iterative way, beginning with the gathering of educational data, followed by information processing, and concluding with knowledge application. All three models have a slightly different perspective, but the cycle and the steps it consists of are very similar.

Prior research has focused on different aspects of Learning Analytics. The first aspect that has been investigated is its theoretical foundation, namely the definitions, goals, technologies, and limitations of Learning Analytics (Campbell, DeBlois & Oblinger, 2007; Chatti et al., 2012; Elias, 2011; Ferguson, 2012; Greller & Drachsler, 2012; Siemens, 2013). In order to describe and organize those aspects, some of those studies developed reference models in which a framework for the use of Learning Analytics is built (Chatti et al., 2012; Greller & Drachsler, 2012). The second aspect of prior research is practical and concerns the assessment of the effectiveness of Learning Analytics implementations on students’ learning outcomes (Arnold & Pistilli, 2012; Chen, Chang, Ouyang & Zhou, 2018; Huberth, Chen, Tritz, & McKay, 2015; Liu, McKelroy, Corliss & Carrigan, 2017; Tabuenca, Kalz, Drachsler, & Specht, 2015; Saqr, Fors, Tedre, & Nouri, 2018; Van Horne et al., 2017; Wise, Zhao, & Hauskrecht, 2014). Another subject that has been investigated is the accuracy of prediction models of student performance and the identification of students who are at risk (Di Mitri et al., 2017; Guarín, Guzmán, & González, 2015; Kondo, Okubo, & Hatanaka, 2017; Yang, Lu, Huang, Huang, Ogata, & Lin, 2018). However, in this thesis the focus will be on students’ learning outcomes. The last aspect of research performed within the field of Learning Analytics concerns the ethical worries and implications of the collection and use of students’ data in order to put Learning Analytics interventions to work. With the emergence of the Learning Analytics field, concerns have been raised about student privacy, the transparency of decisions and interventions based on data, student control over the data, and the security of students’ data (Lawson, Beer, Rossi, Moore, & Fleming, 2016; Pardo & Siemens, 2014; Scholes, 2016; Slade & Prinsloo, 2013). The ethical aspect of Learning Analytics will be further elaborated in the discussion of this thesis.

Because research on Learning Analytics is spread over multiple disciplines, such as computer science, educational science, technology-enhanced education, and the social sciences, and is approached from different perspectives, the goal of this thesis is to provide an overview of the literature for the field of educational sciences, in order to understand the processes, technology, potential, and limitations of Learning Analytics. To do so, two main questions will be investigated. The first question is: “How could Learning Analytics be used to enhance students’ learning outcomes?”. In order to understand the use of Learning Analytics, it is important to have knowledge about its processes, the technologies used, and how these can contribute to the goal of enhancing learning outcomes. However, articles that elaborate those technologies have a very technical focus, which makes them hard to integrate into non-technical fields such as educational sciences. In this thesis, the goal is to investigate these technologies from an educational perspective. In chapter four, all important elements of a Learning Analytics intervention will be elaborated by looking at various key concepts and explaining the most commonly used technologies in Learning Analytics. Subsequently, a model of Learning Analytics will be elaborated.

Second, various studies have already been conducted on the potential of Learning Analytics and the development of models for using Learning Analytics. However, the question is whether this use of technology really meets its goal of improving education by enhancing students’ learning outcomes. Therefore, the second question is: “What is the current evidence of the positive effects of Learning Analytics on students’ learning outcomes?”. The current evidence of the positive effects of Learning Analytics on students’ learning outcomes will be investigated and an overview of this evidence will be provided in chapter five. This overview will be used to answer the second research question.


4. How could Learning Analytics be used to enhance students’ learning outcomes?

In order to answer the question how Learning Analytics could be used to enhance students’ learning outcomes, it is important to have a comprehensive understanding of what Learning Analytics contains. Learning Analytics contains people, data, and technologies (Elias, 2011). People are the stakeholders in the process. Different stakeholders can have different interests and objectives. Those interests and objectives determine what kind of data needs to be gathered and from which sources. They also determine which technologies are best suited to serve the goal of the Learning Analytics intervention. When it is clear who the stakeholders are, what the objectives are, and what data needs to be gathered, technology is needed to convert the data into valuable information and insights to act upon. All these aspects together, and the decisions that are made during the process, determine the outcome of the Learning Analytics intervention on students’ learning outcomes. Therefore, the different kinds of people, data, and technologies will be elaborated in this chapter to build an understanding of how Learning Analytics could enhance learning outcomes.

4.1 People

4.1.1 Stakeholders

There are different stakeholders that have an interest in, or are influenced by, Learning Analytics. Peña-Ayala (2017) identified the following stakeholders: educational institutions, teachers, and learners, but also governmental leaders, policy-makers, researchers, and educational developers. Greller and Drachsler (2012) added a distinction between data subjects and data clients. Data subjects are the people who generate the data that will be processed and acted upon, such as learners. Data clients are the people who gain insights into the learning process and can act upon the outcomes of Learning Analytics, such as teachers. However, sometimes these two groups of stakeholders are the same. For example, when a learner receives real-time, personalized feedback generated by the Learning Analytics intervention and acts upon it by adapting his learning path, the learner is both data subject and data client. Apart from learners and teachers, other stakeholders can be identified, mostly in the group of data clients. Examples are educational institutions, which can use Learning Analytics to monitor student performance indicators like grades, drop-out rates, and graduation rates; by using Learning Analytics they can make data-informed decisions to improve courses. Another example of data clients is researchers, who use educational data and Learning Analytics tools to investigate their subject of research.

Besides identifying the stakeholders and their distinctive goals and interests in Learning Analytics, Chatti et al. (2012) also stressed the importance of integrating Learning Analytics into the daily life of these stakeholders in order for them to use it. Learning Analytics in itself is quite technical, but Learning Analytics interventions should be developed in such a way that the stakeholders do not need skills in data science, statistics, or programming to be able to use them. Also, sufficient support, guidelines, and feedback should be provided to stakeholders for reflection, raising self-awareness, and decision support.

4.1.2 Objectives

In the Learning Analytics literature, various objectives and goals of Learning Analytics are described at different levels. Some are at the institutional level and consist of using Learning Analytics for increasing administrative efficiency and productivity, and for evidence-based decision-making: for example, how to allocate resources and how to improve curricula (Klašnja‐Milićević, Ivanović, & Budimac, 2017). In this thesis, the focus will be on the objectives in service of the learning process itself, in order to contribute to answering the main question how Learning Analytics could enhance students’ learning outcomes. Different authors emphasize different objectives, but the main recurrent objectives are prediction, support, and reflection (Chatti et al., 2012; Clow, 2013; Greller & Drachsler, 2012; Klašnja‐Milićević et al., 2017; Peña-Ayala, 2017; Siemens, 2013).

4.1.2.1 Prediction

The first objective concerns the prediction of student success based on available data and current performance. This prediction can be used to identify students who are at risk of failing and provide these students in need with additional support. Also, when a prediction is not only available for teachers or educational institutions, but also for the learners itself, the prediction of student outcome (e.g. grade) could also help the student to regulate their learning (Huberth et al., 2015). When a student is not satisfied with the predicted grade, he could adapt his efforts to improve the outcome.
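As a minimal sketch of this prediction objective, the following Python fragment trains a simple model on hypothetical past-student data and flags current students with a low predicted probability of passing as at risk. All feature names, values, and the 0.5 threshold are illustrative assumptions, not taken from any study discussed in this thesis.

```python
# Sketch: flag at-risk students from hypothetical activity features.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical past students with a known pass (1) / fail (0) outcome.
past = pd.DataFrame({
    "logins_per_week":  [1, 5, 2, 7, 0, 6],
    "assignments_done": [2, 9, 4, 10, 1, 8],
    "forum_posts":      [0, 3, 1, 5, 0, 4],
    "passed":           [0, 1, 0, 1, 0, 1],
})
model = LogisticRegression().fit(past.drop(columns="passed"), past["passed"])

# Current students: a low predicted probability of passing marks them at risk.
current = pd.DataFrame({
    "logins_per_week":  [1, 6],
    "assignments_done": [3, 9],
    "forum_posts":      [0, 4],
})
p_pass = model.predict_proba(current)[:, 1]
at_risk = p_pass < 0.5  # illustrative cut-off
print(list(zip(p_pass.round(2), at_risk)))
```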

4.1.2.2 Support

Support consists of assisting students in their learning process or providing teachers with insights to support learners. One form of support is monitoring. By monitoring the progress of students and analyzing the results, students who are falling behind can be identified and actions can be taken to assist them. Monitoring can be a goal in itself, but it can also be a first step towards further intervention, such as tutoring and mentoring. The goal of tutoring and mentoring is to guide students throughout their learning process. Tutoring mostly concerns content-specific help, for example helping students to get acquainted with a learning environment for a specific course. Mentoring is more general, and its aim is to guide students in career planning and in monitoring their goals and achievements. In tutoring, the control lies with the tutor and the focus is on the content of the course, while in mentoring the control lies with the student and the focus is on the learning process itself. Last, monitoring can also be the basis for adaptation, personalization, and recommendation. Here, the goal is to adapt educational content to its learners. Combined with a tutoring system as described above, teachers can adapt educational content based on the data. For example, in a statistics course the data can show that learners have more trouble with one specific subject than with another. As a result, the teacher can adapt his lectures by spending more time on the subject students struggle with. In most current systems without Learning Analytics, this kind of information only becomes available after students have filled in course evaluation forms, or when the final test shows a lack of knowledge and skill in one particular subject. In personalization, the goal is to support an individual learner in his learning process. The big difference with adaptation is that adaptation is very teacher-centered: the control for adapting educational content lies with the teacher. In personalization, the control lies with the learner, who can decide for himself what to do next. The goal of recommendation is to support the learner in making decisions about what to do next by providing recommendations based on data.

4.1.2.3 Reflection

The final objective is reflection. The goal of this objective is to use Learning Analytics to reflect on the learning process, for both teachers and students. Teachers and students can use data-based reflection to improve the effectiveness of their teaching and learning by comparing it with the work of others across the class, faculty, or institution. Based on reflection, teachers and students can improve their performance in the future.

4.2 Data

In order to develop Learning Analytics interventions that predict students’ learning outcomes accurately and support students and teachers with appropriate recommendations about course content and learning paths, the data should truly reflect the complexity of the learning process (Siemens, 2013). The data used for implementations of Learning Analytics in institutions and in Learning Analytics research comes from various sources. Roughly, the sources can be divided into two categories: centralized educational systems and distributed learning environments (Chatti et al., 2012). A centralized educational system is a system that is managed by the supplier of the education (e.g. universities or providers of online education like edX.org). Examples of data that come from centralized educational systems are activities of students within the LMS (Blackboard, Canvas, Moodle). Those activities can be login activity and interaction with the LMS, like reading content, writing posts, completing tests, downloading educational content such as research articles, or uploading written papers.

Distributed learning environments, on the other hand, are various environments that are not formally managed by the educational institution. Data in distributed learning environments is usually generated by the user (learner) and can come from both formal and informal sources (Chatti et al., 2012). Examples are data from social media, educational apps, and mobile devices that can track activity and location (Siemens, 2013). When the Learning Analytics field had just emerged, most data came from centralized educational systems. However, relying only on these data sources does not reflect the complexity of the learning process (Siemens, 2013). Nowadays, more varied sources of data beyond the LMS are used, such as data from distributed learning environments. A huge challenge in using all this data is identifying which type of data is useful for the goal of the Learning Analytics intervention. However, as the sources of data grow, the data also becomes richer and more holistic. It may be more work to clean and process the data, but it can also give deeper insight into the learning process, especially when combined with observation and human manipulation of the available data sets (Siemens, 2013).

A special environment where data is collected is that of MOOCs. MOOCs (Massive Open Online Courses) are online courses provided by universities from all over the world. The instruction, interaction, and assessment take place online. MOOCs are openly available and can be followed for free; however, if a student wants a formal certificate after successfully completing the course, he needs to pay for it. Based on the previous definitions of centralized learning systems and distributed learning environments, the environments where MOOCs are provided can be classified as centralized learning systems; however, the data that is collected is much richer than traditional LMS data. The rich nature of the data can be explained by the nature of the online learning environment (Siemens, 2013). Because all learning activities take place online, an enormous amount of data is gathered: the time it took the learner to complete an activity, the reading speed, playback, pauses, and repetitions while watching a video lecture, performance and errors made in exercises and tests, the interaction with peer students to solve problems, and so on. Because all this data is gathered in one environment, it is useful for both instructors and researchers to analyze.

4.3 Techniques

The heart of Learning Analytics is the set of techniques used to extract the meaningful and useful patterns and information that can support stakeholders in improving the learning process. However, those techniques cannot function without human judgment to choose the right technique for the right goal and to interpret the results. Moreover, these techniques will only work if they are based on and embedded in sound educational and analytical theory. Essentially, Learning Analytics techniques rely on computers, people, theory, and organizations (Elias, 2011). The most commonly used techniques are statistics, information visualization, data mining, and social network analysis. These techniques will be elaborated below, with a short explanation of each technique and the goal it can serve in Learning Analytics.

4.3.1 Statistics

Basic descriptive statistics are widely used in Learning Analytics to report information about a students’ dataset. Descriptive statistics can be computed from interactions with the LMS, such as the number of logins, the quantity of visited pages, the distribution of logins over time, the quantity of the students’ postings, and the percentage of the content that is actually downloaded and read. Descriptive statistics consist mostly of means and standard deviations. The goal of calculating these statistics is to support stakeholders in interpreting the data.
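A minimal sketch of such descriptive statistics, computed in Python from a hypothetical LMS activity log; the column names and action labels are assumptions made for illustration:

```python
# Sketch: per-student login and posting counts from a toy clickstream log.
import pandas as pd

log = pd.DataFrame({
    "student": ["s1", "s1", "s2", "s2", "s2", "s3"],
    "action":  ["login", "post", "login", "login", "download", "login"],
})

logins = log[log["action"] == "login"].groupby("student").size()
posts = log[log["action"] == "post"].groupby("student").size()

print("logins per student:\n", logins)
print("mean logins:", logins.mean(), "sd:", logins.std())
```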

4.3.2 Data mining

Data mining is the discovery and extraction of useful information or patterns from large datasets, texts, images, and the Web (Liu, 2011). Data mining is the core activity of Learning Analytics, namely extracting meaningful data from the data source and performing analysis on this data. Although data mining overlaps with statistics and serves the same aims, namely the discovery and identification of structures in data, the two are separate fields. The main difference between the two approaches is that, on the one hand, statistics quantifies data; it has its foundations in mathematics and provides tools for data mining. On the other hand, data mining focuses on developing models to discover patterns and associations in data. Furthermore, statistics is mostly applied to structured and relatively small data sets, while data mining entails the whole data analysis process, including the cleaning and preprocessing of large, unstructured datasets and the visualization of the results (Harshini, 2017). In order to clean, preprocess, and visualize these large datasets, the data mining field uses not only statistical tools but also tools from computer science and artificial intelligence. The data mining techniques most commonly used in Learning Analytics can be divided into three categories: classification, clustering, and association rule mining (Liu, 2011).

Classification consists of predicting a certain outcome based on a given input. In order to predict the outcome, an algorithm processes a data set containing a set of characteristics and the respective outcome. The algorithm attempts to discover relationships between the characteristics that would make it possible to predict the outcome (Chen, Han, & Yu, 1996). In other words, with classification, predictive models of interest can be extracted from the data. For example, if a teacher wants to know whether certain characteristics of a student can predict the final grade, he could use classification on the dataset to help discover the classes (sets of characteristics) that predict the final grade. Possible characteristics in this case could be hours per week spent studying, average GPA, etc. Methods used for classification are decision trees, neural networks, naïve Bayesian classification, support vector machines, and k-nearest neighbor classification. For more information and explanation about these methods, see Liu (2011).
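A minimal sketch of this teacher's example, using a decision tree (one of the classification methods listed above); the dataset, feature names, and grade classes are hypothetical:

```python
# Sketch: predict a final-grade class from two student characteristics.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

students = pd.DataFrame({
    "study_hours_per_week": [2, 10, 6, 12, 1, 8],
    "gpa":                  [2.1, 3.6, 2.9, 3.8, 1.9, 3.2],
    "final_grade":          ["fail", "pass", "pass", "pass", "fail", "pass"],
})

X = students[["study_hours_per_week", "gpa"]]
y = students["final_grade"]
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Predict the class for a new, unseen student.
new = pd.DataFrame({"study_hours_per_week": [4], "gpa": [2.5]})
print(tree.predict(new))  # e.g. ['fail']
```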

Clustering is a technique that tries to find structure in the dataset without knowing the variables in advance. That is also the biggest difference with classification: with classification, the stakeholder knows which variable (e.g. final grade) is being predicted, while with clustering, these variables are unknown at the start of the analysis. Clustering can show which structures occur naturally in the data by looking at data points that group together. Those groups are called clusters. If a particular data point belongs to one cluster, the data point is more similar to other data points in that cluster than to data points in other clusters, or to data points that do not belong to a cluster (Baker & Inventado, 2014). An example of clustering in Learning Analytics is provided by Amershi and Conati (2009). They found characteristic patterns in how learners use exploratory learning environments, and used this analysis to distinguish effective and ineffective learning strategies.
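A minimal sketch of clustering with the common k-means algorithm, assuming two hypothetical activity features per learner; the data and the choice of two clusters are illustrative, not drawn from the studies cited here:

```python
# Sketch: group learners by activity pattern without a predefined outcome.
import numpy as np
from sklearn.cluster import KMeans

# Rows: learners; columns: e.g. time-on-task (hours) and forum posts.
activity = np.array([[1, 0], [2, 1], [1, 1],    # low-activity learners
                     [9, 6], [8, 7], [10, 5]])  # high-activity learners

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(activity)
print(kmeans.labels_)  # cluster membership per learner, e.g. [0 0 0 1 1 1]
```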

Association rule mining is used to identify meaningful associations and correlations in the dataset. The most common form is the if-then rule. This means that if a variable has a certain value, the associated variable will also have a specific value linked to it (Baker & Inventado, 2014). An example could be that if a student shows a high degree of self-regulative behavior, the student is also more likely to perform well on finals.
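As a minimal sketch, the support and confidence of such an if-then rule can be estimated from a hypothetical boolean dataset. Full association rule mining algorithms such as Apriori search over many candidate rules; this fragment only checks the single rule from the example above, and both the data and the variable names are assumptions:

```python
# Sketch: support and confidence of "high self-regulation -> good grade".
import pandas as pd

df = pd.DataFrame({
    "high_self_regulation": [True, True, True, False, False, True],
    "good_final_grade":     [True, True, False, False, True, True],
})

support = (df["high_self_regulation"] & df["good_final_grade"]).mean()
confidence = support / df["high_self_regulation"].mean()
print(f"support={support:.2f}, confidence={confidence:.2f}")
```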

4.3.3 Information visualization

Information visualization is an important technique in Learning Analytics because it is mostly used to make the results of the analysis interpretable and user-friendly for stakeholders. Numbers and tables alone can be hard to interpret, and visualizations can help to make interpretation easier and more effective (Mazza, 2009). This is because humans can process a large amount of information if it is represented in the right way (Shemwell, 2005). This makes information visualization an essential step in moving from analysis to intervention (Elias, 2011). In early implementations of Learning Analytics, most visualizations were charts, scatterplots, 3D representations, and maps (Romero & Ventura, 2007).

Nowadays, these techniques have mostly been replaced by dashboards. A dashboard is a specific data visualization tool that shows one or more real-time metrics for a learner or group of learners, such as current performance compared to peers, the percentage of completed assignments, and the predicted final grade. The dashboard is tailored to provide learners or teachers with real-time information. This can help to provide insight into the learner’s progress and support decision-making, such as providing extra support from teacher to learner, or changing the learning strategy of the learner. For an example, see Figure 2. This dashboard is part of a Learning Analytics intervention and was developed to present the Learning Analytics to the student in a graphical way. The student’s homework, discussion activity, exam results, and overall performance are extracted from the dataset, visualized, and compared to his class. Also, a prediction of the final grade is visualized, and based on the student’s data, there is a personalized message with a recommendation for action. The intervention itself will be discussed in detail in the next chapter.
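A minimal sketch of one dashboard element of this kind: a student's current score plotted against the class distribution. The numbers are made up, and the chart is deliberately far simpler than the dashboards used in the real interventions discussed later:

```python
# Sketch: a student's score against the class distribution (dashboard style).
import matplotlib.pyplot as plt
import numpy as np

class_scores = np.random.default_rng(0).normal(70, 10, 200)  # fake class data
student_score = 62

fig, ax = plt.subplots()
ax.hist(class_scores, bins=20, color="lightgray")
ax.axvline(student_score, color="red", label=f"you: {student_score}")
ax.set_xlabel("course points so far")
ax.set_ylabel("number of students")
ax.legend()
plt.show()
```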


Figure 2. Example dashboard, from Van Horne et al. (2018).

4.3.4 Social network analysis

Social network analysis is a technique used to manage, analyze, and visualize interactions in groups of people (i.e. networks) (Chatti et al., 2012). In Learning Analytics, this technique is therefore used to support collaborative learning, student engagement, and engagement in the peer group (Chen et al., 2018; Saqr et al., 2018; Wise et al., 2014). Social network analysis has its roots in sociology, and although it is independent of the techniques mentioned above, it has links with data visualization, because an important part of social network analysis is to provide the user with a clear visualization of the analysis to make it interpretable. Variables that can be analyzed are density, centrality, connectivity, betweenness, and degrees (for a thorough explanation, see Galaskiewicz & Wasserman (1993)). For social network analysis, data can come from chat logs, posts on a discussion forum, posts on a blog, and the comments that are written. The social network analysis technique extracts, compares, evaluates, and visualizes this data into a form that is interpretable for stakeholders. If a social network analysis visualization is accessible to students, they can be encouraged to adapt their interactions in a desirable way. Teachers can use the visualization to guide interactions between students and provide additional support if necessary. For an example, see Figure 3. This visualization of a social network analysis is part of a Learning Analytics intervention to monitor and guide collaborative learning. Each green circle represents a stakeholder, such as a teacher or a student. This circle is called a node. The size of the node shows how many interactions this person has with the rest of the network (centrality). The lines between the nodes (arrows) show the interactions themselves: the thicker the arrow, the higher the frequency of the interactions, and the head of the arrow points at the target of the interaction. The color range shows betweenness centrality, which is the frequency with which a person coordinated interactions between persons who did not interact before (Saqr et al., 2018).
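A minimal sketch of such an analysis with the networkx library, computing degree and betweenness centrality from a hypothetical set of forum replies (who replied to whom); the interaction data is invented for illustration:

```python
# Sketch: forum replies as a directed graph, with centrality per person.
import networkx as nx

# (source, target) = who replied to whom in the discussion forum.
replies = [("teacher", "s1"), ("s1", "teacher"), ("teacher", "s2"),
           ("s2", "s1"), ("s3", "s1"), ("s1", "s3")]

g = nx.DiGraph(replies)
print("degree centrality:", nx.degree_centrality(g))
print("betweenness:", nx.betweenness_centrality(g))
```

In a real intervention, these numbers would feed a visualization like Figure 3, with node sizes scaled by centrality and edge widths by interaction frequency.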


Figure 3. Example visualization of a social network analysis, from Saqr et al. (2018).

4.4 Summary

In this chapter, the aim was to answer the question how Learning Analytics could be used to enhance students’ learning outcomes. First, the different stakeholders and their possible objectives in Learning Analytics were elaborated. Second, a closer look was taken at the different sources of data. Finally, an overview of the most commonly used techniques in Learning Analytics was provided. These subjects were elaborated with the goal of providing a basic understanding of what Learning Analytics is and how it can be used.

In order to enhance students’ learning outcomes, the first important factor is to consider every aspect of Learning Analytics carefully and in relation to the others. A teacher, educational institution, or researcher cannot initiate a successful Learning Analytics intervention without doing so. The first question that should be considered is: ‘who are the different stakeholders and what are their interests and goals in this context?’. Second, the initiator should think about where to gather the data that will be used for analysis. This data should reflect the complexity of the learning process (Siemens, 2013). In order to reflect this complexity, data from the centralized learning system will most likely not be enough, and other data from distributed learning environments should be considered. Third, depending on the objective of the Learning Analytics intervention and the data that is gathered, an appropriate technique for analysis should be chosen. For example, if the goal is to analyze students’ interactions with each other and the teacher and to make them aware of those interactions in order to adapt their behavior, a combination of social network analysis and data visualization could be a good choice. But if the goal is to predict which students are at risk of falling behind and to provide the teacher with those insights to help him decide how to allocate his time and efforts, a combination of data mining (for the prediction model) and data visualization (to provide the teacher with the insights) could be a better choice.

In conclusion, Learning Analytics has the potential to enhance students’ learning outcomes when the Learning Analytics intervention is carefully developed and thought out, and considers all the aspects: stakeholders, objectives, data, and techniques. However, there are also some important limitations and constraints that could limit this enhancement, which will be further elaborated in the discussion chapter.


5. What is the current evidence of the positive effects of Learning Analytics on students’ learning outcomes?

This question will be investigated by reviewing eight research articles about implementations of Learning Analytics and their effects on students’ learning outcomes. The different implementations targeted different kinds of students’ learning outcomes, namely student engagement, collaborative learning, self-regulated learning, and academic success. The research articles will be reviewed by discussing each learning outcome separately. After that, a summary will be provided in which all the outcomes are synthesized into a conclusion about the current evidence of the positive effects of Learning Analytics on students’ learning outcomes.

5.1 Student engagement

Two articles investigated the effect of the implementation of Learning Analytics on student engagement (Chen et al., 2018; Wise et al., 2014). Both studies investigated how student engagement in online discussions could be enhanced.

The study of Chen et al. (2018) was conducted in an undergraduate online course over 14 weeks. The goal of the study was to foster student engagement in online discussions by designing a tool and investigating its effects on student engagement and student perceptions of the usability of the tool. A total of 39 students at a large public university in the United States of America, from various disciplines such as psychology, medicine, journalism, biochemistry, management, and economics, were observed in two courses (20 students in one course and 19 in the other) while participating in weekly online discussions. The intervention consisted of two phases. In the first phase, data from the discussions was extracted from the LMS by the teacher. After that, the teacher provided the students with a visualization of their interactions and a word cloud showing words that appeared in the discussions; the size of a word represented how often it was used. Together with these visualizations, the teacher wrote a guiding message to help students interpret the visualizations and motivate them to improve their interactions. In the second phase, the data was not gathered and analyzed by the teacher before being reported back to the students; instead, a tool was implemented that interacted automatically and directly with the students. Based on the data that students generated by interacting in the online discussions, the tool provided students with a social network analysis visualization of their interactions. Students could click on a node that represented a particular student to see the basic statistics of that student’s interactions. The term cloud was also implemented in this tool: students could interact with it by clicking on a term to see how many times that term was used. Overall, the tool of phase two was more interactive than that of phase one and gave students more information and opportunities to reflect on and compare their discussion behavior. For the analysis, both data mining and social network analysis were used. The students’ attitudes about the usability of the tool were measured with an established scheme of software usefulness and usability, filled in by the students (Davis, 1989). The results were mixed. Although most students used the tool, not everyone found it useful or usable. The students who found the tool useful used it to set personal engagement goals. But at the end of phase one, where only one of the two groups of students had access to the tool, the group with access did not show more engagement than the second group. In the second phase, where both groups had access to the tool, both groups increased in engagement. These results are contradictory; a possible explanation is that only the tool of phase two is effective for improving student engagement. This study revealed some promising but incomplete evidence of the efficacy of the social learning analytics tool in online discussions. Based on the results of this study, the authors emphasize the simplicity of tools, the importance of sufficient support, and the need to enhance the data literacy of students.


The study of Wise et al. (2014) also investigated the effect of Learning Analytics on student engagement. The study was conducted in a blended seminar for graduate students in educational technology that lasted one semester. A total of nine students participated in the study. The seminar consisted of one meeting a week and online discussions between the meetings. The intervention consisted of three elements, namely providing the students with embedded analytics, providing them with extracted analytics, and keeping a reflective journal. In the first five weeks of the course, students were provided with embedded analytics of their behavior in the online discussion. Embedded analytics are real-time metrics that are integrated in the learning situation itself, in this case the discussion environment. In this intervention, the discussion was visualized in a tree structure. The analytics provided a real-time visualization of the students’ activity in the discussion. In this way, students could see the structure of the discussion and the location of their posts in it. They could see how posts were connected to each other, which posts had a lot of replies and which did not, and they could identify read and unread posts by different colors. The goal of providing the students with these analytics was to give them more insight into their own behavior in the discussion. After the first five weeks, the extracted analytics were added to the course. Unlike the embedded analytics, the extracted analytics were not shown in real time during the learning activity, but were extracted from the system and fed back to the student and teacher. The reason for the delay is usually that an extra step of analysis is required; the dashboard visualization discussed in chapter four is an example of extracted analytics. The extracted analytics were provided to students at the end of each week and consisted of a table of nine metrics, such as average post length and the percentage of posts read. Metrics were provided for the individual and for the group as a whole. The goal of those analytics was to make student engagement more meaningful by making students aware of the structure of the discussion and of their engagement compared to the group, and to support them in setting goals for engagement. Third, students were asked to use the analytics to write a reflective journal every week and to use the journal to set their personal engagement goals. The techniques used in this intervention are a combination of data mining and statistics for gathering the data (i.e. logfiles from the discussion forum) and calculating and analyzing the metrics, and data visualization for showing the metrics to the students and teacher. Other data, collected not for the intervention itself but for the study, were the reflective journals and semi-structured interviews with each student and the teacher. Those were, together with the logfiles, analyzed with a mixed-method design. The results show that the intervention had a positive effect on student discussion engagement, in particular on reading the posts of other students instead of scanning them. The study also shows that the students felt that the extracted analytics were more useful than the embedded analytics. Although these results are promising, they need to be interpreted with caution: because the number of participants was so small, it was not possible to perform a thorough statistical analysis.

5.2 Collaborative learning

One study investigated the effect of the implementation of Learning Analytics on collaborative learning (Saqr et al., 2018). The goal of this study was to investigate how social network analysis can be used to monitor online collaborative learning and guide an informed intervention. First, collaborative learning in an online environment was investigated in three courses with a total of 82 medical school students. The goal of the use of collaborative learning was to enhance clinical reasoning and critical thinking skills by discussing hypothetical cases with peer students in the online environment. Interaction data was gathered and analyzed using data mining and social network analysis techniques. The results showed that there was a low level of collaboration among students, that interactions were teacher-centered, and that the level of information exchange was low. After analyzing these results, an intervention was developed based on the outcomes of the social network analysis. Five actions were addressed: to increase awareness about engagement; to promote collaboration by informing about attitudes, promoting interactions, and evaluating group contributions; to improve the educational content; to prepare teachers for guiding the collaboration; and to practice with feedback. The intervention provided students and teachers with a full-day training to inform them about awareness, train them to use the online environment effectively, and let them practice doing so. This training was given to enhance the users’ data literacy, because providing good support to the actual users of the intervention (e.g. students and teachers) is crucial for meaningful use (Slade & Prinsloo, 2013). Participants were shown the visualizations of the social network analysis to help them better understand how their behavior in the online environment could help them collaborate and make the most of the learning opportunity. Results of the social network analysis after the implementation of the intervention show that student-student and teacher-student interactions were significantly enhanced, whereas most interactions before the intervention were teacher-centered. Also, the level of collaboration was significantly improved.

5.3 Self-regulated learning

One study investigated the effect of the implementation of Learning Analytics on self-regulated learning (Tabuenca et al., 2015). The goal of this study was to investigate whether tracking and monitoring the time spent on learning with a mobile tool could enhance self-regulated learning. For a period of four months, a total of 36 graduate students from three different online courses were provided with a mobile tool named LearnTracker. This tool tracked the time they spent on learning, where they could choose between asynchronous and synchronous tracking. Asynchronous tracking means that students recorded the duration of their activity after they finished; synchronous tracking means that students started tracking their activity when they started, by clicking a ‘start’ button in the tool, and clicked the ‘end’ button when they finished. Based on the data the students put in (the time tracking), the students were sent two notifications per week to plan, evaluate, and stimulate their learning days. Notifications could be either customized (using Learning Analytics) or generic. To investigate whether the timing of these notifications was important, and whether fixed or random times were preferable, notifications were sent at the beginning and end of the day, and at random times. This data was analyzed and notifications were customized using data mining techniques. Also, within the tool the students got to see graphs of the time they spent on learning in different contexts, for example how much time they had spent across different tasks or how much time they had spent compared to other students. To measure self-regulated learning and time management, students filled in the Online Self-Regulated Learning Questionnaire (OSLQ) (Barnard, Lan, To, Paton, & Lai, 2009) and the Time Management Questionnaire (VRTMQ) (Alay & Kocak, 2002). Self-regulated learning was measured at four time points during the four-month intervention. The results show that tracking time spent on learning increased the skill of time management over time. Also, the results showed increased values of time planning (a subscale of the VRTMQ). The results also showed that the notifications sent at fixed times had a positive moderating effect on time management skill. Finally, the notifications that were customized resulted in higher scores on time management compared to generic notifications; however, this difference was small.

5.4 Student success

Four articles investigated the effect of the implementation of Learning Analytics on student success (Arnold & Pistilli, 2012; Van Horne et al., 2018; Huberth et al., 2015; Liu et al., 2017).

The study of Arnold and Pistilli (2012) investigated the implementation of Course Signals, which was developed to provide real-time feedback to students to increase student success. The study investigated the outcomes of the use of Course Signals at Purdue University in the USA in the period 2007-2009. The intervention consisted of an algorithm that identified whether a student was at risk of failing a course or dropping out, using data mining and statistical techniques. These calculations were based on data from large datasets, using metrics such as recent performance in the course, efforts made by the student, prior academic history, and student characteristics. After that, data visualization was used to provide the student with a red, yellow, or green signal. This signal could be seen by both student and teacher. If the signal was red or yellow, actions could be initiated, such as the teacher sending e-mails, reminders, or text messages, referral to an academic advisor, or face-to-face meetings with the teacher. Results are classified in two categories: academic performance and student retention. The results showed a significant increase in high grades for students who participated in at least one course supported by Course Signals, and a significant decrease in low grades compared to non-users. Results also showed significantly higher student retention for Course Signals users compared to non-users. This article shows that even a relatively simple use of Learning Analytics can have a positive impact on students’ learning outcomes.

The study of Van Horne et al. (2018) investigated the implementation of another Learning Analytics intervention, the Elements of Success platform. The goal of this study was to investigate whether the frequency of visiting a dashboard was related to learning outcomes. A total of 840 students from a chemistry course at a university in the United States of America participated. The intervention consisted of three elements. First, a visualization on the LMS homepage showing the points achieved by the student. Second, a dashboard page that summarized the student’s performance in the chemistry course, consisting of a histogram showing the current performance of the student compared to other students in the course, an estimate of the student’s final grade and percentile rank, and customized feedback on the student’s online homework. Third, an application, accessible only to students who had gained less than 50% of the possible points so far, with information about resources for support provided by the university. This intervention made use of data mining and statistical techniques to extract and analyze the data, and of data visualization techniques to build the student dashboard. Results showed that students who used the dashboard with high or moderate frequency earned higher final grades than students who did not use the dashboard or used it only to a low degree. The difference between the high-use and low/no-use groups, and the difference between the moderate-use and low/no-use groups, were significant. However, the difference between the high-use and moderate-use groups was not significant. This could indicate that more usage of the dashboard does not necessarily result in a stronger effect.

The study of Huberth et al. (2015) also investigated the effect of Learning Analytics on student success. They investigated the implementation of E²Coach. The goal of the study was to investigate the effect of the implementation of E²Coach on student performance in large courses. A total of 2234 students participated, divided over four introductory physics courses, for the duration of one semester. The intervention consisted of two main attributes that were based on gathered information about the student, like demographics, characteristics, previous academic performance, study habits, and the planned approach for the course. The first attribute was a message system which sent personalized messages to students based on the student data, using data mining techniques. The purpose of these messages was to coach and motivate the student by providing personalized feedback. The second attribute was a visualization of current performance and an interactive grade prediction tool that predicted the final grade based on the student information and current performance, and showed what effort would be necessary to increase the predicted grade. Results showed that high and moderate usage of the tool had a positive effect on final grades, improving GPA significantly compared to students who did not use the tool.


Last, the study of Liu et al. (2017) investigated the effect of an adaptive learning intervention named Brightspace LeaP™ on student success. A total of 128 students participated, divided over four different courses that prepared students for pharmacy degree entrance. The intervention consisted of an interactive system that offered a personalized learning path based on student data (such as previous performance and demographics) and the goals set by the teacher. For example, one of the courses was chemistry. The online environment consisted of both content and tests. If a student scored poorly on one particular subject, he was directed to content about that subject and advised to put more time into it to do better next time. To measure the effect of the intervention, both students who did and who did not participate in the intervention took a pre-test and post-test on the content of the courses they participated in. The results show that only the chemistry course showed a significant effect on student success: students who participated in the intervention while following the chemistry course showed a bigger increase in score between pre-test and post-test than students who did not participate. The other three courses did not show a significant difference.

5.5 Summary

When synthesizing these eight articles, a number of interesting observations can be made. First, the evidence of learning analytics tools on different kinds of students’ learning outcomes seems promising, but inconclusive. This is mainly imputable to the small sample sizes of some of the studies that made it difficult to provide power of evidence (Chen et al., 2018; Wise et al., 2014; Tabuenca et al., 2015), and the premature stage of some of the studies (Chen et al., 2018; Wise et al., 2014). Second, related to the first main question of this thesis, there was a clear line in the used techniques of the interventions. All but one study (Chen et al., 2018) used data mining techniques to extract and analyze useful data out of the large datasets (Arnold & Pistilli, 2012; Van Horne et al., 2018; Huberth et al., 2015; Liu et al.,

(33)

2017; Saqr et al., 2018; Wise et al., 2014). All studies used data visualization techniques to make the data interpretable for students and teachers. Furthermore, interventions where student engagement and collaboration were investigated also used of social network analysis (Chen et al., 2018; Saqr et al., 2018; Wise et al., 2014). Another notable observation is that interventions that were relatively easy to use had the most effect (Arnold & Pistilli, 2012) and that interventions where the effect was weaker showed that students found that the content was too much to handle (Liu et al., 2017) or was difficult to understand (Chen et al., 2018). For this reason, most authors stressed the importance of embedding the intervention into the institution and to provide both students and teachers with clear information and sufficient support to use the intervention. As Wise et al. 2014 noted: students seem to be more likely to use the intervention is the usage is clear and the system is transparent.

In conclusion, the current evidence of the positive effects of Learning Analytics on students' learning outcomes is promising but inconclusive. More research is necessary to discover which interventions, and which elements of those interventions, are effective in enhancing students' learning outcomes.


6. Conclusion and Discussion

6.1 How could Learning Analytics be used to enhance students' learning outcomes?

In this thesis, the potential of Learning Analytics in adult education and the evidence of its positive effects on students' learning outcomes were investigated through a literature study. The first main question that was examined was: "How could Learning Analytics be used to enhance students' learning outcomes?". This thesis elaborated the process of Learning Analytics, its stakeholders, objectives and instruments. Greller & Drachsler (2012) stressed that all of these separate factors need to be considered carefully in order to implement a Learning Analytics intervention successfully. This chapter showed that, depending on the stakeholders and the objectives of the Learning Analytics intervention, different kinds of techniques can be used, and that within one Learning Analytics intervention different techniques are needed to serve all goals. For instance, data mining and statistics are often necessary for the analysis, while data visualization is needed to make those analytics interpretable. This chapter also elaborated how algorithms and data mining can help institutions set up their strategy and make decisions, how researchers can learn about the learning process, and how teachers can improve their practice, based on using Learning Analytics for prediction models, personalized feedback, support and recommendation, and reflection. In conclusion, when all factors are in tune and considered carefully, Learning Analytics has the potential to enhance students' learning outcomes.
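This division of labor can be made concrete with a small sketch: a data-mining step (here k-means clustering, one common technique) groups students into behavioral profiles, after which a visualization step makes the result interpretable for teachers and institutions. The data and features are synthetic assumptions.

```python
# Sketch: data mining for analysis, data visualization for interpretation.
# Features and data are synthetic; k-means is one of many possible techniques.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Per-student features: [logins per week, average quiz score]
activity = np.vstack([
    rng.normal([2, 55], [1, 8], size=(30, 2)),   # low-engagement group
    rng.normal([9, 80], [2, 6], size=(30, 2)),   # high-engagement group
])

# Analysis step: cluster students into behavioral profiles.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(activity)

# Interpretation step: visualize the profiles for non-technical stakeholders.
plt.scatter(activity[:, 0], activity[:, 1], c=labels)
plt.xlabel("Logins per week")
plt.ylabel("Average quiz score")
plt.title("Student engagement profiles (synthetic data)")
plt.show()
```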

Despite all the potential that Learning Analytics has to offer, there are some serious issues that limit the implementation of Learning Analytics in the current reality. Siemens (2013) stated that the most important limitations of Learning Analytics are not technical, but include worries about data quality and scope, the lack of theory-driven approaches in Learning Analytics, the lack of stakeholders' data literacy, privacy, and ethics. The first challenge of successful implementation of Learning Analytics is the limited availability of comprehensive datasets about the learner. In order to create accurate prediction models, provide institutions and teachers with analyses they can base their decisions on, and offer students a personalized and reflective learning path, the data should reflect the complexity of the learning processes that learners go through. The data that is mostly used at this moment is data obtained from centralized learning systems such as the LMS. However, this provides a one-sided and incomplete view of a student and misses out on valuable information. Data from centralized learning systems, like login information, student characteristics, current performance, and participation, should be combined with data from distributed environments, like mobile data, biometric data and mood data. Richer datasets can help stakeholders understand the learning process better (Ferguson, 2012) and help to improve the quality of predictions and recommendations.
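A minimal sketch of what such a combination could look like in practice is given below; the field names and sources are hypothetical, and any real integration would of course require consent and anonymization.

```python
# Hypothetical sketch: enriching centralized LMS data with a distributed source.
import pandas as pd

lms = pd.DataFrame({
    "student_id": [1, 2, 3],
    "logins": [14, 3, 9],        # centralized LMS data
    "quiz_avg": [82, 51, 74],
})

mobile = pd.DataFrame({
    "student_id": [1, 2, 3],
    "study_app_minutes": [320, 45, 180],  # distributed (mobile) data
})

# Join on a shared identifier to obtain a richer per-student dataset.
combined = lms.merge(mobile, on="student_id", how="left")
print(combined)
```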

The second challenge is the lack of a sound theoretical basis upon which Learning Analytics interventions are built. The field of Learning Analytics is data-driven rather than theory-driven. However, in order to provide interventions that enhance learning, a theoretical basis is critical (Clow, 2013). Therefore, the field of Learning Analytics should build strong connections with the field of educational sciences, to embed interventions in theory (Ferguson, 2012) and to validate those interventions from an educational perspective.

The third challenge is the lack of stakeholders' data literacy. Although the technology that drives Learning Analytics is quite sophisticated, without accurate interpretation of the results and translation of those results into proper actions, there is no added value in Learning Analytics. That is why the data clients (e.g. teachers, institutions, researchers and sometimes also learners) should have knowledge of how to interpret and act upon the Learning Analytics results. In a survey conducted by Drachsler & Greller (2012) among Learning Analytics experts, only 11% of the 111 respondents held the view that learners would be able to interpret and act upon Learning Analytics results accurately. Data literacy should consist of basic numeric skills, ethical knowledge, and the skill to critically evaluate the Learning Analytics results (Greller & Drachsler, 2012). Data literacy is one of the factors that should be considered before implementing a Learning Analytics intervention. A possibility could be to train all the stakeholders, to provide them with the necessary data literacy skills before the intervention is actually used. In order to use Learning Analytics in a meaningful way for all stakeholders, further research needs to be conducted. First, meaningful Learning Analytics interventions embedded in educational theory should be developed. Second, applied research should be performed to assess the effectiveness of these interventions and their implications for different stakeholders.

6.2 What is the current evidence of the positive effects of Learning Analytics on students’ learning outcomes?

The second main question was: "What is the current evidence of the positive effects of Learning Analytics on students' learning outcomes?". A total of eight Learning Analytics interventions were assessed on their effectiveness for a range of different learning outcomes, namely: student engagement (Chen et al., 2018; Wise et al., 2014), collaborative learning (Saqr et al., 2018), self-regulated learning (Tabuenca et al., 2015), and student success (Arnold & Pistilli, 2012; Van Horne et al., 2018; Huberth et al., 2015; Liu et al., 2017). The results of these studies showed that the effects of Learning Analytics on students' learning outcomes are promising. These studies also gave insight into the technologies currently most used in Learning Analytics implementations and their purposes. For example, statistics and data mining are mostly used in the analysis, while data visualization is used for interpretation and decision making. There is also a visible development towards the use of a more comprehensive set of technologies. In the first years after Learning Analytics emerged, the interventions were strongly centered on the data clients. The focus was on providing the institution or the teacher (who was often familiar with data mining) with insights to act upon. These interventions used mostly statistics and data mining, and sometimes social network analysis. Later, the focus shifted to providing the data subjects with insights they could act upon. From then on, data visualization took a more prominent place in Learning Analytics interventions, because of the need to make the data interpretable for all stakeholders (Chatti et al., 2012).

Another notable result was that the simpler and more transparent the intervention was, the more effective it was. An example is the intervention of Arnold & Pistilli (2012), which was very simple but showed impressive results. This could be explained by the fact that, if students perceive the intervention to be understandable and transparent, they are more motivated to really use it (Slade & Prinsloo, 2013). Conversely, if students perceive the intervention not to be transparent, or do not understand it, they are less inclined to use the intervention or to act upon the feedback it gives. For example, in the study of Wise et al. (2014), students stated in the interviews that they did not understand or disagreed with the calculation of the metric 'posts read': they stated that they read more posts than the metric showed. That demotivated them to use and appreciate the intervention.

While the results are promising, there are some limitations that need to be addressed. Currently, the part of Learning Analytics research in which interventions are assessed on effectiveness is in an early stage. A lot of the research on the effects on students' learning outcomes is conducted with very small samples of participants. Also, most research is conducted in STEM courses. This is understandable, because the researchers who develop and use these interventions currently tend to originate from technical backgrounds such as computer science and physics. However, the small sample sizes and the specific type of student with whom the interventions were implemented could lower both internal and external validity. External validity could be increased by doing research in different fields of education that represent the student population better, including sociology, economics, psychology, law, linguistics, and medicine. To increase internal validity, the use of larger sample sizes is desirable.

6.3 Ethics

Last, one of the biggest and most important challenges in the implementation of Learning Analytics is the ethical considerations. Because the technologies used are developing fast and have not existed for long, Learning Analytics has a low level of 'legal maturity' (Kay, Korn, & Oppenheim, 2012). This means that, because of the short existence of the field, issues like privacy, informed consent and data ownership have not been fully addressed yet. Data ownership is a clear example. Currently, when data about a learner is collected, the owner of the data is the owner of the implemented Learning Analytics intervention; in most cases, that is the teacher or the educational institution. Before the emergence of Learning Analytics, when data was collected through questionnaires or procedures that required signing up, this was not much of an issue, because participants actively gave permission for using their data via informed consent. However, with the new technologies, data is collected without the participants' approval or even awareness. In this case, informed consent is at stake (Greller & Drachsler, 2012). There is no clear method for researchers or teachers to obtain informed consent in Learning Analytics interventions. It is also not clear what rights learners have regarding their own data (Ferguson, 2012). Furthermore, there is the concern of what happens to the data when the learner graduates, switches institutions or drops out; concerns arise around the accessibility of this data (Siemens, 2013). For example, if a student switches from one university to another, should the universities exchange this information? How long can the university store and use this data for research purposes? There are no clear guidelines for handling those questions.


Another issue is the privacy of the learner, and thereby the access to learners' data. It is unclear who should have access to the data within the institution. Questions that may arise are: does the learner have the same access to the data as the institution or teacher? Can teachers from courses other than the one where the data was generated have access? When the data is analyzed, the concern arises to what extent the teacher or educational institution may act upon the results (Pardo & Siemens, 2014). For example, if a learner applies to enroll in a course, but a predictive model indicates that this particular student has a very low probability of succeeding in the course, may that data be used to make the decision not to admit this student? This could also be seen as a reinforcement of an already present power structure that might have played a role in the predictive model in the first place (Slade & Prinsloo, 2013), for example when the socioeconomic background of the learner is used as a variable. Although the intentions of the data client may be good, data can also be abused for the sake of efficiency and profit (Greller & Drachsler, 2012). It could possibly lead to an environment in which learners no longer feel safe to make mistakes, because every mistake leaves a data trail that could be analyzed and used in a way that is undesirable for the learner. These concerns stress the importance of the development and use of an ethical framework. It is the responsibility of researchers, teachers and educational institutions to use and protect learners' data in an ethical way. That is why an ethical framework should be elaborated before implementing a Learning Analytics intervention.

6.4 Conclusion

In conclusion, Learning Analytics is a new, emerging field with the potential to improve our understanding of learning and to enhance students' learning outcomes through the use of sophisticated technology. However, there are still multiple limitations that threaten the usefulness and meaningfulness of Learning Analytics interventions. Further research is needed to resolve those limitations, although the question is if, and to what extent, these limitations can be fully resolved. Also, the constant development of new technologies can help to improve the usefulness and meaningfulness of Learning Analytics interventions. Only time will tell if Learning Analytics can really fulfill the high expectations (Reyes, 2015).


7. References

Alay, S., & Kocak, S. (2002). Validity and reliability of time management questionnaire. Hacettepe Üniversitesi Eğitim Fakültesi Dergisi, 22(22). Retrieved from http://dergipark.ulakbim.gov.tr/hunefd/article/view/5000048816/5000046136

Amershi, S., & Conati, C. (2009). Combining unsupervised and supervised classification to build user models for exploratory learning environments. JEDM | Journal of Educational Data Mining, 1, 18-71. Retrieved from https://jedm.educationaldatamining.org/index.php/JEDM/article/view/9

Arnold, K. E., & Pistilli, M. D. (2012). Course signals at Purdue: Using learning analytics to increase student success. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (pp. 267-270). ACM. http://dx.doi.org/10.1145/2330601.2330666

Baker, R. S., & Inventado, P. S. (2014). Educational data mining and learning analytics. In J. Larusson & B. White (Eds.), Learning Analytics (pp. 61-75). New York: Springer.

Barnard, L., Lan, W. Y., To, Y. M., Paton, V. O., & Lai, S. L. (2009). Measuring self-regulation in online and blended learning environments. The Internet and Higher Education, 12(1), 1-6. https://doi.org/10.1016/j.iheduc.2008.10.005

Campbell, J. P., DeBlois, P. B., & Oblinger, D. G. (2007). Academic analytics: A new tool for a new era. EDUCAUSE Review, 42(4), 40. Retrieved from https://er.educause.edu/articles/2007/7/academic-analytics-a-new-tool-for-a-new-era

Chatti, M. A., Dyckhoff, A. L., Schroeder, U., & Thüs, H. (2012). A reference model for learning analytics. International Journal of Technology Enhanced Learning, 4(5-6), 318-331. http://dx.doi.org/10.1504/IJTEL.2012.051815
