Investigating the characteristics and impact of training exercises on study success

Mehdi Güneş 11035021

Bachelor thesis
Credits: 18 EC

Bachelor's Programme in Artificial Intelligence

University of Amsterdam
Faculty of Science
Science Park 904
1098 XH Amsterdam

Supervisors:

Dhr. dr. B. Bredeweg
Informatics Institute, Faculty of Science, University of Amsterdam
Science Park 904, 1098 XH Amsterdam

Dhr. dr. A.J.P. Heck
Korteweg-de Vries Institute, Faculty of Science, University of Amsterdam
Science Park 107, 1090 GE Amsterdam

Dhr. M. Cohen
SOWISO
Science Park 400, 1098 XH Amsterdam

June 29th, 2018


Abstract

Learning analytics is cited as one of the key emerging trends in higher education and has provided opportunities for institutions to aid student development and personalise learning. Learning analytics has contributed to the classification of students based on general student behaviour data. Likewise, problems within education, such as gaming the system, have been documented. The research presented in this thesis investigates whether insight can be gained into the nature and quality of exercises in relation to the effectiveness of students' learning. The methods used are cluster analysis and feature correlation. The results suggest that learning platforms offering solutions for training exercises face a potential threat of students gaming the system. No discriminating characteristics of quality exercises were discovered; however, potential difficulties were identified.


Contents

1 Introduction
2 Learning environment
3 Data
4 Method
5 Extracted features
6 Plotting features and cluster analysis
7 Total exam score correlations
8 Chapter specific exam score correlations
9 Analysing solution behaviour
10 Discussion and conclusion

1 Introduction

In recent years, analytics in education has grown due to increased data, more analytics tools and advances in computing [1]. Learning analytics is defined as "the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs" [7]. It is also cited as one of the key emerging trends in higher education [8] and has provided opportunities for institutions to aid student development and personalise learning [9].

One of the components of learning analytics is learner modelling. A learner model is "a model of the knowledge, difficulties and misconceptions of the individual" [3]. Previous research modelling the knowledge of a student used the Quantity of Participation and the Quality of Participation [6]. The Quantity of Participation includes features like time spent and viewed content. The Quality of Participation includes features like test scores and the number of hints requested. This selection of features was based on a series of interviews conducted with several teachers [6], from which it was concluded that these features were valid for classifying students. However, no details about the interviews were given, which weakens the credibility of this selection. Likewise, the data used in that research was unsupervised and the classification therefore not verified. Nevertheless, the paper mentions the assumption(s) upon which the feature selection was performed. These assumptions and the feature selection could further the understanding of feature selection within the field of education.

Another study on learner modelling attempted to determine which user modelling technology performs better in capturing a student's level of knowledge. The Partially Observable Markov Decision Process (POMDP) model was more accurate in tracing the knowledge beliefs than the Bayesian Knowledge Tracing (BKT) model [5]. This was because the POMDP model allows more than two possible states for a knowledge component. Knowledge components are "certain pieces of knowledge belonging to a specific subject which represent a certain part of the course" [5]. The BKT model only covered 'learned' and 'not learned' as potential states, whereas the POMDP model covered 'bad', 'medium' and 'good'. The POMDP model can thus be used to model the knowledge level of a student. Building on this research, an adaptive way of selecting exercises for specific students based on their knowledge level was also implemented, to increase the gain in knowledge [4]. Even though no results on the effectiveness of an adapting environment were obtained in terms of student performance, this does suggest that exercises might have to be personalised.

Tempelaar et al. [10] try to classify students into different at-risk groups using general student behaviour data and dispositional data. The features extracted from the general student behaviour exhibited within the online education platform were effective for this classification. These features can be summarised as follows: the number of correctly solved exercises, the number of attempts at solving exercises, the time spent on the platform, the number of calls for the solution and the scores obtained. These features for classifying students into different at-risk groups might also be useful for classifying other elements within the field of education.

Baker [2] discusses how students actually use interactive learning environments in ways different than intended and how these different usages can lead to poorer learning. 'Gaming the system' is a well-known behaviour exhibited by students, which is defined as "attempting to succeed in an educational environment by exploiting properties of the system rather than by learning the material and trying to use that knowledge to answer correctly" [2].

Instead of focusing on the student, the focus can also be placed on the educational material used in a module, in order to gain insight into its nature and quality. The research question addressed in this thesis is whether insight can be gained into the nature and quality of exercises in relation to the effectiveness of the student's learning.

First, the learning environment of the student is explained in Section 2, after which the data is presented and described in Section 3. Section 4 covers the method, namely clustering and correlations, and its justification. The extracted features are specified in Section 5, after which the features are plotted against each other and clustered to gain initial insight in Section 6. Section 7 evaluates the correlations between student behaviour and the exam score. Section 8 evaluates the correlations of the poorly performed chapters to discover discriminating characteristics. The student behaviour in relation to requesting the solution is analysed in Section 9. Finally, Section 10 discusses and concludes on the obtained results.

2 Learning environment

The SOWISO learning environment is an online educational platform for mathematics that supports students during a course.

The platform consists of theory pages, training exercises and tests, which are organised by category and subcategory. There is also a forum containing questions.

Every exercise and student has an ELO rating within the learning environment. ELO ratings are used in chess to indicate the skill level of a player: the winner of a game receives a higher rating, whereas the loser's rating decreases. The same concept is used here; a student's rating increases when he/she solves an exercise and decreases when he/she makes mistakes. The ELO rating has no effect on the final grade.
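The thesis does not specify SOWISO's exact rating formula. As a minimal sketch, the standard Elo update below illustrates the mechanism; the K-factor, the 400-point scale and the binary solved/failed outcome are assumptions.

```python
def expected_score(student_elo: float, exercise_elo: float) -> float:
    """Standard Elo expectation: the probability the student 'beats' the exercise."""
    return 1.0 / (1.0 + 10 ** ((exercise_elo - student_elo) / 400.0))


def elo_update(student_elo: float, exercise_elo: float, solved: bool,
               k: float = 32.0) -> tuple[float, float]:
    """Update both ratings after one attempt; k is an assumed K-factor."""
    actual = 1.0 if solved else 0.0
    delta = k * (actual - expected_score(student_elo, exercise_elo))
    # The student gains what the exercise loses, mirroring a chess game.
    return student_elo + delta, exercise_elo - delta


# Example: a 1500-rated student solves a 1400-rated exercise.
print(elo_update(1500.0, 1400.0, solved=True))
```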

Theory pages consist of informative texts and example exercises. They are positioned above the corresponding training exercises. A vote can be cast on these theory pages: a thumbs up or a thumbs down. A forum question can also be opened to ask questions about the theory page. Every theory page is located within a certain category and subcategory. See figure 1.

Figure 1: Example of a theory page

Training exercises are voluntary exercises which have no effect on the final grade. They do, however, affect the ELO rating, which is recorded within the platform and indicates the knowledge level. The ELO rating is calculated based on the penalty obtained. The penalty depends on the answer submitted; a wrong answer carries a penalty whereas a correct answer has a penalty of 0. Wrong answers cause a short explanation to appear. A student can also request a hint and the solution. When the solution is requested, the student receives a full penalty and the opportunity to try a similar exercise. Students can also cast a vote on exercises and open a forum question.


Multiple training exercises can be part of a single package. Every package is located within a certain category and subcategory. See figure 2.

Figure 2: Example of a training exercise page

The test exercises are like the training exercises, but offer no option to request a hint or the solution. Secondly, an adequate score must be obtained on these tests to pass the course. This adequate score is calculated based on the obtained penalties. However, once the threshold is passed, the score itself does not count towards the overall course grade.

As mentioned before, the students can open a forum question. These forum questions can be answered both by the teacher and the students themselves. Likewise, votes can be cast on the opened questions and the answers given. See figure 3.

Figure 3: Example of a forum page

Lastly, there is the final exam. An adequate score must also be obtained to pass the course, but unlike the test exercises, the obtained score does count towards the overall course grade.

3 Data

The data analysed in this thesis is from a first-year Bachelor of Science mathematics course provided by the University of Amsterdam and available via the SOWISO platform. The data consists of the following:

• ELO rating of the students and exercises: This data set consists of the user and exercise IDs with the corresponding ELO ratings.

• Student views of the theory pages: Every theory page view is recorded, containing the ID of the theory page, user, category and subcategory, and the total time viewed.

• Student tries of the training exercises: General information about each exercise try is recorded, i.e. the exercise, student, category, subcategory and package ID. Besides the general information, the order of the exercise in the package, whether the package was finished after this exercise, and the state of the exercise are recorded. The state indicates whether the exercise was only opened or also sent in. The more important values are the penalty the student obtained, whether the student pressed the solution or hint button, and the time spent on the exercise.

• Student tries of the test exercises: The test exercise try set contains roughly the same data types as the training exercise try set. A test ID and a graded penalty are added. The graded penalty is the penalty corrected by the teacher in case of a mistake in the automatically calculated penalty. It should also be noted that students have no option to press the hint or solution button, hence these fields are absent.

• Forum posts: The user ID, the page from which the post was opened and the text of the post itself are recorded.

• Exercise votes: This set contains the student ID, the content voted on and the vote itself, i.e. thumbs up or thumbs down.

• Student exam scores: The student exam scores consist of the obtained scores on the individual questions for every student. Every question also has the corresponding category and subcategory noted.

A total of 917 test and training exercises originated from the course. 247 students took part in the course, of whom 196 sat the exam. There were 63,211 theory page views, 208,945 training exercise tries and 121,887 test exercise tries. The final exam consisted of 27 questions from 13 separate categories.

The forum posts and exercise votes contained little data, namely 58 and 29 items respectively. This data is therefore not used in the study reported in this thesis.

4 Method

The data is essentially unsupervised, meaning there is no label categorising each exercise as being of quality or not. A quality exercise should increase the level of knowledge of a student and thus prepare the student well for the final exam. Hence the exam data is used as a criterion for judging the quality of an exercise.

Basic clustering of the data takes few resources and can yield valuable initial insight. After this initial clustering, the focus is on calculating correlations and visualising student behaviour. This is due to the lack of quality supervised data and a research question broader than just classifying exercises. Correlations between features may yield insight into inner relations of student behaviour and relations with the final grade, potentially indicating gaming of the system [2]. Likewise, certain correlations could be discriminating characteristics of quality exercises. To calculate the correlation and significance, the Pearson product-moment correlation coefficient (PPMCC) was used. A correlation was regarded as significant if the probability value was under the generally accepted threshold of 0.05.
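As a minimal sketch of this test, the PPMCC and its p-value can be computed with scipy; the data frame and column names below are hypothetical stand-ins for the thesis data.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-student features; the real data has 196 exam participants.
students = pd.DataFrame({
    "elo_rating": [1450, 1520, 1380, 1610, 1490],
    "exam_score": [62, 74, 51, 88, 70],
})

r, p_value = pearsonr(students["elo_rating"], students["exam_score"])
significant = p_value < 0.05  # the generally accepted threshold used here
print(f"r = {r:.4f}, p = {p_value:.4f}, significant: {significant}")
```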

The correlations between the features could be extracted in multiple ways. The exam data has the scores obtained by the students for every individual question. Every question has the corresponding category (chapter) and subcategory (subchapter) registered. Thus the exam data feature could be the total exam score, the chapter-specific score or the subchapter-specific score. Likewise, the student behaviour features could be taken from the behaviour exhibited in the whole course, or within a specific chapter, subchapter or exercise. First, the total exam score is correlated with the behaviour exhibited within the course. Based on that evaluation, the next correlation is selected. Any noticeable correlations raising questions about the exhibited behaviour can be visualised to expose potential gaming of the system.

5 Extracted features

Usable features can be extracted from the exam data, the ELO rating data, the theory data, the training exercise data and the test exercise data.

From the exam data the exam score can be extracted. This can be the total score obtained over the whole exam, but can also be specified to chapter and subchapter level, i.e. using the scores the student obtained on the specific questions corresponding to the chapter or subchapter. For these chapter-specific scores, the feature is the average score obtained on the exam questions of the same (sub)chapter. The average is used because of the variability in the number of questions per chapter or subchapter.
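A minimal sketch of this chapter-level averaging, assuming a long-format table with one row per (student, exam question); all column names are hypothetical.

```python
import pandas as pd

# Hypothetical exam records: one row per answered exam question.
exam = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2, 2],
    "chapter":    [59, 59, 104, 59, 59, 104],
    "score":      [6, 8, 4, 10, 9, 7],
})

# Averaging (rather than summing) compensates for chapters that
# contribute different numbers of exam questions.
chapter_score = exam.groupby(["student_id", "chapter"])["score"].mean().unstack()
print(chapter_score)
```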

The ELO score data yields the student ELO score and the exercise ELO score. These ELO scores are roughly normally distributed; noticeably, the mean ELO score of the exercises is lower than that of the students. See figure 4.

Figure 4: Boxplots of the ELO ratings

The theory behaviour data yields three features: the count of theory page openings, the total view time and the normative view time. The normative view time is like the total view time, but regards theory page views of less than a minute as a single minute, whereas such views are not added to the total view time at all. This feature was added because a high number of such short view records were found, as can be seen in figure 5. The figure contains two plots of the students' total theory view time against the obtained final grade. 'Theory view time (norm)' is the normative view time. Noticeable is the huge difference in view time when the 'less than a minute' records are weighted too.
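A sketch of both view-time features under this reading, where sub-minute views are dropped from the plain total but counted as a full minute in the normative total; the column names are hypothetical.

```python
import pandas as pd

views = pd.DataFrame({
    "student_id":   [1, 1, 1, 2],
    "view_seconds": [12, 95, 40, 300],
})

# Plain total: views shorter than a minute are not counted at all.
total = views[views["view_seconds"] >= 60].groupby("student_id")["view_seconds"].sum()

# Normative total: every view counts for at least one minute.
normative = views["view_seconds"].clip(lower=60).groupby(views["student_id"]).sum()

print(pd.DataFrame({"total": total, "normative": normative}).fillna(0))
```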

Figure 5: Significance of theory view time (norm)

Five features are extracted from the training exercises: the count of exercises finished (finish count), the average time spent per exercise (average time), the average penalty obtained (average penalty), the ratio of requesting the solution (ratio solution requests) and the ratio of asking for a hint (ratio hint requests). It should be noted that not every exercise try is processed, since not every try entails that an answer was submitted or the solution was requested.

As for the test exercises, since there is no possibility to ask for a hint or the solution, the features are the finish count, average time and average penalty.
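A sketch of how the per-student training exercise features could be aggregated from the try records; the column names are hypothetical stand-ins for the export described in Section 3.

```python
import pandas as pd

# Hypothetical training exercise tries, already filtered to tries in which
# an answer was submitted or the solution was requested.
tries = pd.DataFrame({
    "student_id":    [1, 1, 2, 2, 2],
    "time_seconds":  [120, 45, 300, 60, 90],
    "penalty":       [0.0, 1.0, 0.0, 0.5, 1.0],
    "used_solution": [False, True, False, False, True],
    "used_hint":     [True, False, False, True, False],
})

features = tries.groupby("student_id").agg(
    finish_count=("penalty", "size"),
    average_time=("time_seconds", "mean"),
    average_penalty=("penalty", "mean"),
    ratio_solution_requests=("used_solution", "mean"),
    ratio_hint_requests=("used_hint", "mean"),
)
print(features)
```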

6 Plotting features and cluster analysis

To clarify the relations between the above-mentioned features, the features were plotted against the total exam score. This shows which features potentially correlate with the exam score and thus might indicate good or poor performance on the final exam. For example, the left plot in figure 6 suggests a correlation whereas the right plot does not. Every data point represents a student and his or her behaviour over the whole platform.

Figure 6: Left shows a plot of the student ELO rating against the total exam score. Right shows a plot of the theory view time against the total exam score.

The next step was clustering the data, since this potentially yields valuable initial information on the weight of the features for the classification without using many resources, mainly time. A hierarchical clustering algorithm was used to obtain a dendrogram, which helps in selecting the right number of clusters (see figure 7). A dendrogram has all the data points on the x-axis. The dendrogram in the figure is shortened: it starts at the layer containing 12 clusters. Every two points or clusters can be joined together, and the distance between them (the cost) is on the y-axis. Thus a dendrogram shows the cost of clustering the data points. The dendrogram indicates that three clusters have the most potential, since the cost difference is huge in comparison to two clusters.
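A minimal sketch of this step using scipy's hierarchical clustering; the feature matrix is random stand-in data, and Ward linkage is an assumption since the thesis does not name the linkage method.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, fcluster, linkage

rng = np.random.default_rng(0)
X = rng.normal(size=(196, 5))  # 196 students x 5 behaviour features (stand-in)

Z = linkage(X, method="ward")  # assumed linkage criterion
dendrogram(Z, truncate_mode="lastp", p=12)  # show only the last 12 clusters
plt.ylabel("cost (merge distance)")
plt.show()

# Cut the tree into the three clusters suggested by the dendrogram.
labels = fcluster(Z, t=3, criterion="maxclust")
```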

Figure 7: Dendrogram of the data using the total exam score

The clusters were visualised in the previously mentioned plots to see which features correspond with the clusters and hence have the most effect on the clustering. The plots seen in figure 8 corresponded with the clusters.

Notably, there seems to be a correlation between these features. The blue cluster has a higher exercise finish count than the other clusters, but spends less time on the exercises and has a higher average penalty, a higher ratio of solution requests and a lower ratio of hint requests. The blue cluster also seems to have a lower total exam score. This is an indication that students who request the solution a lot spend less time on the exercises and request fewer hints, which could suggest gaming the system. The higher average penalty is expected, since students who request the solution receive a full penalty.

After noticing a couple of promising plots and a couple of illegible plots, due to data points stacked on top of each other, it was desirable to calculate the correlations and their significance.


Figure 8: Plots of features against the total exam score, with data points coloured according to the clusters. The features are, respectively: exercise finish count, exercise average time, exercise average penalty, exercise average solution and exercise average hint

7 Total exam score correlations

The correlations were set out in a table as shown in figure 19 (see appendix). The yellow-marked cells contain significant correlations. The significant correlations found for the total exam score were not very high; however, this is generally the case for this type of data [10]. Two of the features with a relatively high correlation with the exam score were the ELO rating of the student and the average test exercise penalty: 0.4852 and -0.5165 respectively. These were as expected; a high ELO rating of the student and a low average test exercise penalty correlate with a high total exam score, since the ELO rating and test scores should indicate a level of knowledge.

Of the theory features, only the theory view time had a significant (negative) correlation with the total exam score, i.e. high view times indicated a lower score. No significant correlation was found for the normative time or for the view count. This raises the possibility that long view times of the theory pages are an indication of weaker students, hence the lower total exam score.

For the behaviour within the training exercises, it is notable that a higher average time has a positive correlation, whereas the finish count, the average penalty and the average solution usage have a negative correlation. An explanation is that a group of students uses the solution button to receive an answer quickly, without pondering over the exercise thoroughly, and then tries to pattern-match the past solution onto the new question, hence the negative correlation. Likewise, correlations between the average solution usage and the average time and penalty were discovered, supporting this claim. However, no positive correlation was found between the average solution usage and the training exercise finish count. This may be because there are two groups of students: those who simply use the solution button on the exercises and find that sufficient, and those who keep filling in wrong answers and learn through trial and error instead of pondering over the material.

Of the test exercise features, the test average penalty correlation was already evaluated. The count and the average penalty had a significant negative correlation. The test exercise count feature also had a positive correlation with the average training exercise solution usage, potentially indicating that misuse of the platform also causes students to take the test exercises more often.

Due to the suspicion of misuse of the solution button, an average penalty was calculated for the training set in which the penalties caused by requesting the solution were omitted. The difference in correlations could give more insight into the usage of the solution button. See figure 9 for the differences in correlation. The yellow marks are the significant correlations of the original average penalty and the orange marks are changes in the correlations in terms of significance or correlation value.
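A sketch of this recalculated average penalty, in which tries where the solution was requested are dropped before averaging; the columns are again hypothetical.

```python
import pandas as pd

tries = pd.DataFrame({
    "student_id":    [1, 1, 1, 2, 2],
    "penalty":       [1.0, 0.0, 1.0, 0.5, 1.0],
    "used_solution": [True, False, False, False, True],
})

regular = tries.groupby("student_id")["penalty"].mean()

# Omit the full penalties that were caused by requesting the solution.
normalised = tries[~tries["used_solution"]].groupby("student_id")["penalty"].mean()

print(pd.DataFrame({"regular": regular, "normalised": normalised}))
```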

Remarkably, the average penalty now has a positive correlation with the total exam score. Likewise, students with a lower average penalty spend more time pondering over the exercises, use the hints more often, submit fewer exercises and use the solution button significantly less. This is another indicator of gaming of the system.


Figure 9: Differences in correlation for regular and normalised penalty

8 Chapter specific exam score correlations

After evaluating the correlations for the total exam score, a more specific approach was taken to see differences between poorly and well performing exercises. The average score on exam questions of the same chapter was used instead of the total exam score. Likewise, only the student behaviour exhibited in the training exercises, tests and theory pages corresponding to the chapter was used. The goal was to find characteristics of worse performing chapters.

The exam covers 13 chapters, of which some consist of a single exam question and some of up to five.

The chapters first had to be labelled by performance. To achieve this, boxplots of the students' average score were made for every individual chapter; see figure 10. Only chapter 55 is missing from the figure, since that chapter had some outliers above an average score of 100 (see figure 20 in the appendices), which could be due to the question weights. The students performed well on the questions corresponding to chapter 55.

Figure 10: Student average score on chapter specific exam questions

The chapters with a poor score are chapters 59, 104, 111 and 145. A correlation table was created, similar to the correlation table in figure 19, but for every individual chapter. Correlations which were similar for the poorly performing chapters but different for the well performing chapters were explored. Figure 11 contains the correlation values for the low performing chapters; the yellow marks indicate positive correlations and the orange marks negative correlations. The correlations relate the individual features to the average score obtained on the final exam questions corresponding to the chapter.


Figure 11: Correlations of poor performing chapters

Only the student ELO feature has a correlation shared between these chapters; however, this similarity is shared across all chapters and is thus not characterising. Four features were shared among three of the four poorly performing chapters: the theory view time, the exercise finish count, the exercise average time and the test exercise average penalty. All of these correlations were also found in the well performing chapters. The theory view time and exercise finish count features had a shared correlation with chapter 103, and chapter 103 had no distinctive characteristic that would set it apart. The remaining features had multiple well performing chapters sharing the correlation. Likewise, the strength of the correlations did not discriminate between the well and poorly performing chapters either.

The lack of discriminating correlations could be because the chapters have an inherently different character, rather than being due to how the students behaved within them. Some chapters might have introduced new, difficult subjects and were therefore performed on poorly, regardless of the quality of the corresponding training exercises.

9 Analysing solution behaviour

The behaviour of the students using the solution button is analysed next, due to the suspicion that students are gaming the system.

Therefore, the spread of the students in terms of the ratio of solving an exercise versus requesting the solution was visualised. In figure 12 it is remarkable that students generally request the solution more often than they solve the exercise themselves. Also, some students seem to have almost never solved an exercise while requesting the solution quite often. This might indicate that these students find it easier to request the solution than to fill in their answer, since the training exercises are not obligatory. However, these are only a couple of outlying points; the general population does have a certain count of solved exercises and thus does not exhibit the above-mentioned behaviour.

Figure 12: Students’ solved:solution ratio

The previous analysis was from the perspective of the students, not the exercises themselves. It would also be interesting to see the ratio of solution requests per exercise. In figure 13 the individual exercises are ordered by the total number of times they were finished. Firstly, it is noticeable that some exercises are submitted a lot, which is to be expected: certain exercises resemble the test exercises and are thus used more. Secondly, the count of solved exercises is quite constant, whereas for the popular exercises the number of solution requests is high. The extreme ratio for these popular exercises could indicate gaming of the system, since this is not the initially intended use of the platform. If we accept the speculation that these exercises are popular due to their similarity to the test exercises, this becomes even more evident.
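A sketch of the per-exercise solved versus solution-request counts behind this kind of plot; the column names are hypothetical.

```python
import pandas as pd

tries = pd.DataFrame({
    "exercise_id":   [10, 10, 10, 11, 11, 12],
    "used_solution": [True, True, False, False, False, True],
})

counts = tries.groupby("exercise_id")["used_solution"].agg(
    finished="size",
    solution_requests="sum",
)
counts["solved"] = counts["finished"] - counts["solution_requests"]

# Order by popularity, as in figure 13.
print(counts.sort_values("finished", ascending=False))
```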


Figure 13: Training exercises’ solved:solution ratio

Lastly, it is insightful to see the solution request behaviour over time. Figure 14 shows the counts of each submission type ordered over time. Note that the data is ordered by time, so the scale on the x-axis does not represent a real time scale. The usage of the solution button is quite constant over time. The plot shows that there is no big difference in the exhibited behaviour in terms of requesting the solution early or late in the course.

10 Discussion and conclusion

The research question was whether insight can be gained into the nature and quality of the exercises in relation to the effectiveness of a student's learning. No distinct characteristics of quality exercises were discovered. However, relations between student behaviour and exam features were identified. The cluster analysis indicates that students who request the solution a lot spend less time on the exercises and request fewer hints, which can suggest that students are gaming the system. The correlation with the total exam score presents a pattern within the training exercises: a higher average time has a positive correlation, whereas the finish count, the average penalty and the average solution usage have a negative correlation. This indicates that a group of students uses the solution button to receive an answer quickly, without pondering over the exercise thoroughly, and then tries to pattern-match the past solution onto the new question. Due to this suspicion, the correlations were analysed for both the regular training exercise penalty and the penalty from which the solution requests are omitted. The latter average penalty turns out to have a positive correlation with the total exam score. Likewise, students with a lower average penalty use hints more often, submit fewer exercises and use the solution button significantly less. This supports the previous claim.

The solution usage behaviour was analysed, from which it is concluded that generally more solutions are requested than exercises are solved. Also, some exercises are submitted far more than others, and a huge chunk of those submissions is due to requesting the solution. Thus, for learning platforms offering solutions for training exercises, there is a potential threat of students gaming the system. The ratio of sending in an answer versus requesting a solution seems to be constant over time. The correlation of the total exam score with the student behaviour features shows that the ELO rating and the test exercise average penalty are potential indicators of success on the final exam.

There seem to be multiple reasons why no quality characteristics of exercises were discovered. First, the meaning of a quality exercise is not clear, since the effectiveness of an exercise depends on the level of knowledge of the student; hence the advice to implement an adaptive way of selecting exercises for students [4]. This relates to the second reason: there are many factors involved. Students with a math A or math B background come into the course with different knowledge levels. An exercise increasing the knowledge of a student with a math B background might not (yet) be effective for a student with a math A background. Likewise, there are motivated and less motivated students. Since the data consists of roughly 200 students, it is hard to split them up into multiple data sets while keeping enough records in every set. Lastly, the exam score can be specified to subchapter level at most, whereas multiple training exercises are submitted within a single subchapter. Judging the quality of a single exercise is thus difficult; the exam data is too general.


In future work, user modelling technology can be used to model the level of knowledge of the student. Instead of looking at the final exam, the increase in the knowledge of the student can then be used as a means to classify quality exercises. It would also be possible to analyse students' behaviour in requesting the solution in more detail, for example whether students go directly for the solution, or first try the exercise and request the solution afterwards. Perhaps in this pattern of behaviour a characteristic of quality exercises can be discovered.

References

[1] Ryan Shaun Baker and Paul Salvador Inventado. "Educational data mining and learning analytics". In: Learning analytics. Springer, 2014, pp. 61–75.

[2] Julie Berry Cullen and Randall Reback. "Tinkering toward accolades: School gaming under a performance accountability system". In: Improving School Accountability. Emerald Group Publishing Limited, 2006, pp. 1–34.

[3] Susan Bull. "Supporting learning with open learner models". In: Planning 29.14 (2004), p. 1.

[4] van Veen, de Ceuninck van Capelle, van den Brink and Goosens. "Adaptive E-learning Technical Report at University of Amsterdam". In: (2015).

[5] van Oenen, Froberg, Timmer and Vranken. "SOWISO Adaptive E-learning Technical Report at University of Amsterdam". In: (2015).

[6] van der Molen, Stap, Koster, Smith and Terstall. "SOWISO Technical Report at University of Amsterdam". In: (2016).

[7] Simon Buckingham Shum and Rebecca Ferguson. "Social learning analytics". In: Journal of Educational Technology & Society 15.3 (2012), p. 3.

[8] Sharon Slade and Paul Prinsloo. "Learning analytics: Ethical issues and dilemmas". In: American Behavioral Scientist 57.10 (2013), pp. 1510–1529.

[9] Dirk T. Tempelaar, Bart Rienties, and Bas Giesbers. "In search for the most informative data for feedback generation: Learning Analytics in a data-rich context". In: Computers in Human Behavior 47 (2015), pp. 157–167.

[10] Dirk Tempelaar et al. "Student profiling in a dispositional learning analytics application using formative assessment". In: Computers in Human Behavior 78 (2018), pp. 408–420.


Appendices

