Academic year: 2021

Share "Gamification and Virtual Reality in e-learning Platforms Do gamification and virtual reality techniques improve the learning process in e-learning platforms?"

Copied!
10
0
0

Bezig met laden.... (Bekijk nu de volledige tekst)

Hele tekst

(1)

Gamification and Virtual Reality in e-learning Platforms

Do gamification and virtual reality techniques improve the learning process in

e-learning platforms?

Miltos Nedelkos

University of Amsterdam Information Studies

nedelkosm@gmail.com

ABSTRACT

With the rise of virtual reality, human-computer interaction can be more immersive and more personal than ever before. Both are essential qualities in learning and result in better transfer. Next to classical learning, e-learning platforms have been in use for many years, and are even starting to replace it in some cases. These platforms are steadily being improved by personalizing training and by incorporating game elements, as games have been shown to assist learning in various ways. However, despite the continuous effort to make e-learning platforms a complete learning system, they remain problematic in numerous aspects, the most important being user interaction. This paper studies whether virtual reality and game elements can improve the learning process in e-learning platforms.

Categories and Subject Descriptors

K.3.1 [Computers and Education]: Computer Uses in Education; K.8.0 [Personal Computing]: Games; H.5.1 [Information Interfaces and Presentation]: Artificial, augmented, and virtual realities

General Terms

Master thesis

Keywords

learning, e-learning platforms, virtual reality, game elements, gamification

1. INTRODUCTION

E-learning refers to the category of computer-mediated learning, which incorporates the use of personal computers for education and training purposes (Anaraki, 2004). In recent years, one of the most commonly used categories of e-learning software is e-learning platforms (eLearning Industry, 2016). Many platforms are generic and offer support for various courses, such as Blackboard or Moodle. These platforms offer hosting for course material as well as organizational tools, such as online tests, project submissions and grading schemes, which makes them ideal for schools and universities. Other platforms focus on personal or specialized training, such as Codeacademy (Codeacademy, 2016) for programming.

These platforms are best for individual training and in-depth knowledge transfer in a particular domain.

E-learning platforms are online repositories of information for a particular course that give users (usually students) the ability to study, practice, take exams and collaborate with other users and course instructors. Even though these platforms have been constantly updated over the years, both in the materials they provide and the tools they incorporate, the user interaction and environment have remained the same. For this reason, in this paper they will be further referred to as classic e-learning platforms.

The progress of e-learning platforms suggests that it may one day be possible to revolutionize learning in such a way that classical learning, a classroom setting with a professor as the main coordinator, would cease to exist. While learning software is able to provide the same material as classical learning, or even more, it has some distinct problems:

• Most e-learning platforms are based mainly on text (Burd, 2000)

• Lack of rich content for good understanding (Anaraki, 2004)

• Insufficient interactivity, which makes e-learning less likely to hold learners' attention compared to classical learning (Hammond, 1995).

A distinct advantage of e-learning platforms over classical learning is that learners define how, where, when and at what pace they are taught. When the means of learning is a software application, interaction is the most important component. In that context, recent years saw the development of game-based learning. It has been shown to facilitate high knowledge transfer and a steep learning curve (Banerjee & Stone, 2007) as well as significant cognitive benefits (Granic, Lobel, E, & Engels, 2014). Many applications, in the form of a game, a simulation or a gamified task, manage to train and teach players successfully. Such games are usually referred to as serious games or applied games. Many suggest that using video games as a means to educate the upcoming generation of students is an aspect education should focus on (Alshanbari, 2013) (Torrente, 2009). One of the most frequently discussed benefits is the ability of games to increase student motivation towards learning, as they capture students' attention and keep them engaged and immersed (Gee, 2003) (Calfee, 1981).

It is important to distinguish between learning and training. Learning refers to the acquisition of knowledge or skills through study or experience, while training is the reinforcement of a particular skill or type of behavior. Games used in training usually imitate a real-life situation or even simulate one (Tan & Biswas, 2007). Even in high-risk situations such as surgeries (Sturm, 2008) or warfare (Knerr, 2007), game simulations have been proven to effectively train individuals. In the context of learning, games also fostered two other concepts: gamification and virtual reality. Gamification is the application of game-design elements and game principles in non-game contexts (Deterding, Dixon, Khaled, & Nacke, 2011) (Huotari & Hamari, 2012). The idea is that game elements help users engage better with the material to be learned and enhance the learning environment so that users stay in it longer.

Virtual Reality can be defined in various ways. The definitions seem to be changing over the years, influenced by how technology advanced, especially in the early stages when VR was first being developed. The following are examples of such definitions:

• Virtual reality is electronic simulations of environments experienced via head mounted eye goggles and wired clothing enabling the end user to interact in realistic three-dimensional situations (Coates, 1992).

• Virtual Reality is an alternate world filled with computer-generated images that respond to human movements. These simulated environments are usually visited with the aid of an expensive data suit which features stereophonic video goggles and fiber-optic data gloves. (Greenbaum, 1992)

• ...the term therefore typically refers to three-dimensional realities implemented with stereo viewing goggles and reality gloves. (Krueger, 1991)

Jonathan Steuer correctly argues that these definitions (and any other definition that is similarly based on a particular hardware instantiation) are thereby limited to these technologies; their units of analysis and potential for variance are left unspecified. It is possible to define virtual reality without reference to particular hardware (Steuer, 1992).

Virtual reality should not be defined by the hardware, nor by a special type of interface, but by the application and user experience. Therefore, the definition we will be using is ”Virtual reality is a computer-mediated technology that simulates a user’s physical presence in a replicated environment, real or imagined, which can be explored and interacted with by that user”.

VR focuses the user’s attention on the tasks and elements at hand, excluding extraneous information and reducing distraction (Bricken, 1991). What distinguishes VR from all preceding technology is the sense of immediacy and control created by immersion: the feeling of ”being there” (Psotka, 1995). By introducing a virtual environment and a virtual interface to a classic e-learning platform, the platform will provide an immersive, flexible learning environment.

There are countless studies and books on learning, how people learn, teaching and learning environments (Bransford, Brown, & Cocking, 1999), adult learning (Mezirow, 1991), e-learning (Anaraki, 2004), and many other learning-related fields. Similarly, there are countless studies on games, serious games, games in learning (Gåsland, 2011) (Kiili, 2005) (Alshanbari, 2013), cognitive benefits in games (Granic et al., 2014), virtual reality games, and virtual reality in learning (Bricken, 1991) (Psotka, 1995). Games in learning and virtual reality immersiveness can have significant cognitive benefits, as already discussed in the introduction.

Despite the richness of studies in those fields, there are still very few studies that combine virtual reality with learning and, especially, with game elements. Monahan, McArdle & Bertolotto (2008) is an example of this type of research. They developed a web-based multi-user 3D environment for real-time teaching, which also serves as a tool for students to communicate and collaborate with each other: essentially, a web-based VR e-learning platform. While they did not explicitly implement any game elements, their program did feature a virtual environment in which the user controlled an avatar and could walk around a room, with the course material projected on the walls of that room.

Another relevant study featured the development of an educational virtual reality game using web services (Katsionis & Virvou, 2008). This project was in fact heavily based on game elements and gameplay. The game included dynamic elements that aimed to provide a personalized experience for the player. They found that users were inherently motivated to play and found the game more interesting to use than a non-personalized, generalized e-learning platform.

Following the introduction, there is a methodology section (chapter 2) that explains the purpose of this study, the main research question, details about the two learning platforms that are compared and how they are designed, and the setup of the experiment. In chapter 3 - Results, we present the results of the experiments after analysis and test the hypotheses stated in the previous chapter. In chapter 4 - Discussion, we critically examine the presented results as well as the study as a whole. After the discussion, we present the final conclusions that can be drawn from this study. Finally, we suggest future research based on our findings as well as alternative methods for answering similar research questions.

2. METHODOLOGY

Following the introduction, this chapter describes the purpose of this study, the research methodology and design, as well as the means of obtaining data.

The main goal of this study is to investigate whether the gamification of an e-learning platform can make it more engaging for the user (compared to a non-gamified platform), while at the same time maximizing learning transfer, hence achieving higher performance scores and a higher retention rate.

2.1 Research design

Research has shown that gamification and virtual reality can each benefit learning, immersion and stimulation. It is therefore interesting to investigate whether the two combined can bring similar benefits to e-learning platforms.


2.1.1 Research question

Based on the analysis provided in the introduction, we believe that gamified virtual environments result in higher immersion for the user, and therefore, there is a higher knowledge transfer as well as more long-term memorization effects when using gamified virtual e-learning platforms compared to traditional ones. Thus, we formed our research question as ”Do gamification and virtual reality techniques improve the learning process in e-learning platforms?”.

2.1.2 Research design

In order to answer this question we investigate the effect that gamification and virtual reality combined have on a classic learning platform.

For the purpose of this study we therefore create two e-learning platforms. One represents the classic e-learning platform, as described in most literature, and the other is a gamified, virtual version of it. Both feature the exact same content but with different media and representation of information. To avoid potential bias, we will refer to the gamified virtual platform as platform A and to the classic platform as platform B. More information about the design of these platforms and the material they feature can be found later in this chapter.

Our goal is to assess whether platform A has better results in terms of memorization, memory retention and the user experience that it provides. Therefore, platform B is used as a control measure for platform A in order to study the above mentioned qualities.

We use two different groups to compare the two platforms: the group that uses platform B is the control group, while the group that uses platform A is the experimental group. Each group uses its assigned platform once, then takes an assessment test, and finally fills in a questionnaire regarding their experience.

Each participant is given a maximum of 15 minutes to use their assigned platform. Both platforms feature the same content, but with a different representation. Immediately after using the platform, the participant takes an assessment test covering the various types of information communicated through the platform (see chapter 2.2.1). The assessment test is exactly the same for both groups, and both groups use the same interface to answer it. Following the test, the participant fills in a questionnaire that gathers information about their experience of using the platform, whether they found it engaging, and which components they liked or disliked the most. This concludes the session for each participant.

Finally, in order to study the retention of information over longer periods, each participant is asked to retake the assessment test two weeks after their first session, this time without using the platform. The test is exactly the same as the first one (for both platforms). By comparing the two result sets we can determine which platform results in a higher, more stable retention rate.

2.1.3 Hypotheses

Considering our research question and our design, four essential hypotheses emerge.

Hypothesis 1: Participants that use platform A remember the studied subject better than those that use platform B.

As a gamified platform, platform A offers better flow and stronger motivation for participants, while the virtual environment offers high immersion. This leads us to believe that participants that use this platform can remember more information in total than those who use the classic one, which has no game or virtual elements.

Hypothesis 2: Participants that use platform A retain more spatial information than those that use platform B.

We believe that the virtual component of platform A strongly reinforces spatial information in one’s memory. From a game perspective, playful exploration in a safe environment strongly contributes to memorizing spatial information.

Hypothesis 3: Participants that use platform A have a better long-term retention of information (tested over the period of two weeks) than those that use platform B.

As it is crucial in learning not only to memorize but also to retain information over longer periods, we investigate that as well. Since platform A can potentially result in a better overall experience, we hypothesize that it also has better long-term effects. Matching the scope of this study, we chose to investigate this effect over a period of two weeks.

Hypothesis 4: Participants that use platform A have a more positive user experience than those that use platform B.

It is equally important that the learning process be engaging as well as motivating. Thus, we also investigate whether participants have a more positive user experience using a gamified, virtual platform in comparison to a classic one.

2.2 Development

This section describes the development process that followed the research design. It contains information about selecting and creating the material for the learning platforms, the design and development of the platforms and creating the assessment test and questionnaire.

2.2.1 Material and scenario

As defined in our research design, we want to investigate whether gamification and virtual reality improve the learning process in e-learning platforms. In that context, the type of information that the platforms aim to teach or train is irrelevant and is not taken into consideration. It is not in our scope to investigate whether those techniques work best for certain courses or certain types of information, but to understand whether they work better for learning in general.

Even though the course subject is irrelevant, the platforms still have to convey some information to the participants. The material therefore has to be generic, neutral and non-specific, so that it does not require prior knowledge of a certain field and avoids bias due to personal preferences for a certain subject, making it accessible to all participants. Ideally, the material should be mostly unknown to all participants, so that the information they receive while using the platform is new to them. The material also has to be small enough to be taught within a short time period, in our case a 15-minute research session.


Finally, the material has to be understandable by all participants, so that low performance is not caused by participants failing to grasp the concepts behind the material.

To sum up, the material of the platforms for our study has to be:

• Neutral and generic
• New or unknown to most
• Understandable to most
• Small in size

Taking these factors into consideration, a fictional scenario was designed in which the participants start working at an established company and receive a first-day orientation. A job introduction is something the vast majority of people have experienced, and it is rather generic itself; this covers the first criterion. Since the scenario is fictional and was created specifically for this study, the material is certain to be new and unknown to all participants. A generic job orientation contains no field-specific information, so it is understandable to everyone. Finally, the material was tailored in size to fit the 15-minute window set for this research.

While studying this material, the participants receive the following types of information:

• General (Company): Various information concerning the scenario they are in and the hypothetical company that they would be working at.

• Project: In their job orientation, the participants are assigned to a hypothetical project. While studying, they receive various information about that project, such as what their task would be, what it relates to, and client information.

• Coworkers: Information about their soon-to-be coworkers in the company: their name, appearance (image), job position, age, working location and a random fact about them.

• Workspace: Information about the layout of the floor on which the participant would hypothetically be working. It contains information about emergency exits and the purpose of each room, and is directly related to the working locations of the coworkers.

2.2.2 Platforms

Both platforms were developed to fit the description above. In order to compare them directly to each other, both feature the exact same content, but in a different form of representation.

Classic Platform

In order to have the most resemblance with the definition of an e-learning platform, the classic platform (or platform B) was developed using Moodle (Moodle, 2016a). Moodle is currently the most used open-source e-learning platform with over 77000 registered platforms in 230 countries and over 88 million users (Moodle, 2016b).

Most of the content in this platform is presented in the form of text, usually in PDF documents. The material is structured as a course and features text, images, video and links to other sources. The interface is that of a classic website, where users are free to navigate and choose which link to follow and which information to focus on at any time. Figure 1 presents an example of the platform’s layout.

Figure 1: The classic platform

While using the classic platform, each participant is asked to spend a maximum of 15 minutes on it. They are not given specific instructions on how much time to spend on each segment or which information to remember most. This ensures that learners are free to spend their time during the learning procedure as they see fit.

Virtual Platform

The virtual platform was more complex to design, since there was no standard model (or most common representation) to use or to draw inspiration from. To design the platform we applied techniques most commonly used in gamification and in virtual reality. The participants were likewise given a maximum of 15 minutes to use the platform, but they had to complete certain objectives within that time window. They were free to explore the virtual environment, even after completing all objectives, as long as they stayed within the 15 minutes. The platform was implemented in Unity (Unity, 2016), an engine specifically designed for creating games and virtual environments. The development took two weeks.

The gamification elements that were used are:

• Objectives: In the top-right corner of the screen (see figure 2) the player can see three objectives. To complete the orientation, the player has to finish all three objectives; no specific reward is given upon completion.

• Minimap: As a common visualization technique in video games (Zammitto, 2008), a minimap was included. In the bottom-left corner of the screen (see figure 2) the player sees a map of the floor layout showing their relative position in it.


Figure 2: An example view of the VR platform

• Control of a virtual character: The player controls a virtual character using a gamepad or the keyboard and mouse.

Concerning virtual reality techniques, the whole platform was designed from the ground up as a virtual environment of the office in which the player would be working. Nowadays, when people think of virtual reality they instantly picture a VR headset. As made clear in the introduction, however, particular hardware or a special type of interface should not be the defining factor for virtual reality. We originally planned to use a VR headset in this study, but the hardware was not available at the time at the company where we arranged the sessions. We are confident that this does not compromise the results of this research since, as mentioned in the introduction, VR is defined not by the hardware but by the environment and the interaction. The user therefore interacts with a computer screen and headphones, and uses either a gamepad or keyboard/mouse to control the virtual character.

During the orientation, the player receives voice messages that contain information. The voice messages play when the player completes part of an objective.

2.2.3 Platform differences

The platforms were designed to differ only in the methods of representation, the interface and the interaction. Table 1 highlights the differences in the representation of information.

2.3 Population

For the purpose of our study we chose to focus mainly on adults. A large part of training and e-learning takes place in the professional world rather than just in schools. As already mentioned, games and virtual reality simulations train many adults, including doctors, soldiers and pilots.

We therefore aim for adults aged 18-67 (working age), with no requirement of prior knowledge in a certain field of expertise. We also aim to include participants that already have some working experience, since they can provide better insight into applications similar to ours than people without working experience. In terms of size, in order to get meaningful results within the scope of our study, we aim for at least 30 participants in total, 15 per group.

Information        | Classic platform            | Virtual platform
Company (General)  | Text                        | Voice message, UI
Workspace          | Text, image                 | Virtual environment, voice message, coworker card placement
Coworkers          | Text, pictures              | Coworker 3D card, voice message, user interface
Project            | Text, video, external links | Video projection, voice message

Table 1: Representation of information for each platform

2.4 Data

From the participant sessions we aim to gather and analyze specific data that will help us assess our hypotheses and answer our main research question.

2.4.1 Assessment test

The assessment test is the main and most important source of our data. It is exactly the same for both groups and features 28 questions of the following types:

• Multiple choice
• Fill-in
• Select all that apply

Each question is assigned to a category based on its content. The categories stem from the types of information in the material:

• Company
• Project
• Workspace
• Coworkers

Each question is weighted based on difficulty and importance. For example, the question ”In which room will you be working?” is very easy to answer and is therefore weighted 0.5 points, while the question ”Which coworker does not work on the same floor with you?” requires remembering multiple pieces of information (namely your working floor, each coworker’s working floor, and coworker names) and is therefore weighted 2 points. As another example, the question ”Which project will you be working on?” is not difficult to answer but is very important; it is also weighted 2 points.

Each participant receives a total score for their performance. The total score consists of the sum of all correct answers multiplied by their weights. Next to the total score, a sub-score is calculated for each category. Not all categories contribute equally to the calculation of the final score. For example, the ”Company” category has few questions and therefore contributes less.

In order to better present and understand the data, the grading scale is transformed from 0-38 to 0-100.
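As an illustration, the weighted scoring and the 0-38 to 0-100 transformation described above can be sketched as follows; the question identifiers and weights below are hypothetical, chosen to mirror the examples given earlier, and this is not the study's actual grading code:

```python
def total_score(answers, weights, max_raw=38):
    """Sum the weights of correctly answered questions, then rescale to 0-100.

    answers: question id -> True if answered correctly
    weights: question id -> weight in points
    max_raw: maximum raw score of the test (38 in this study)
    """
    raw = sum(weights[q] for q, correct in answers.items() if correct)
    return raw / max_raw * 100  # transform the 0-38 grading scale to 0-100

# Hypothetical questions using the example weights mentioned above.
weights = {"room": 0.5, "coworker_floor": 2.0, "project": 2.0}
answers = {"room": True, "coworker_floor": False, "project": True}
print(round(total_score(answers, weights), 2))  # → 6.58 on the 0-100 scale
```

Category sub-scores would be computed the same way, restricting the sum to the questions of one category.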

2.4.2 Questionnaire

The questionnaire is designed to gather data about user experience, in order to address the fourth hypothesis. It comprises 11 quantitative questions, 4 qualitative questions and 3 quantitative questions answered only by those that used the gamified platform. Every quantitative question is modeled on a 5-point Likert scale.

While the qualitative questions are not directly designed to answer a specific hypothesis, they can point out possible problematic points about the platforms or give us information that we would have otherwise missed.

Similarly to performance, an aggregated ”likeness” score is calculated for each participant, but it is not the sole deciding factor for answering the hypothesis related to this measure.

2.4.3 Participant information

While answering the assessment test and the questionnaire, each participant also provides some demographic information. That includes gender, age and whether they have previous working experience. While that information is not crucial for answering our main research question, it can help us find out if there were weak points in our statistical analysis, or if there is a strong correlation between platform performance and demographics.

3. RESULTS

This section describes how the data was gathered and analyzed, and presents the results of the statistical analyses of that data.

3.1 Population

Based on our desired population, we invited 30 participants for our study, whose demographics are presented in table 2.

The people who voluntarily participated in this study were mostly either students or professionals working in a firm. Because of the type of platform and the material that was selected (see 2.2.1 - material and scenario), it was appropriate to recruit participants who were in a working environment.

While there is a high imbalance between males and females, this does not impact our results, because none of our research questions or hypotheses are gender-related and because the same level of imbalance is present in both groups, with males being the vast majority.

3.2 Statistical analysis

We assess each of our hypotheses one by one by analyzing the appropriate data set.

3.2.1 Statistical inference

In order to properly answer the research question and test the hypotheses, the appropriate statistical analysis has to be done, with methods suitable to our dataset.

To test each hypothesis, we compare two appropriate sub-sets of the data to one another and look at the significance rate. We accept the hypothesis if that rate is equal to or less than p=0.05. The significance rate is calculated using a T-test where the data follow a normal distribution and a Wilcoxon test where they do not. The normality of each data set is tested using the Shapiro-Wilk normality test. For binary data, in order to get a more reliable result, the McNemar test is also used to calculate the significance rate.

Measure                             | Platform A | Platform B
Count                               | 15         | 15
Mean age                            | 27         | 28
Males                               | 12         | 13
Females                             | 3          | 2
Had any previous working experience | 80%        | 60%

Table 2: Participant demographics

Measure            | Platform A | Platform B
Count              | 15         | 15
Mean               | 69         | 61
Median             | 72         | 64
Minimum            | 55         | 24
Maximum            | 78         | 95
Mode               | 72         | 66
Standard deviation | 6.47       | 19.27
Normality          | Yes        | Yes

Table 3: Platform performance: first session
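The test-selection procedure described above can be sketched as follows. This is a minimal illustration using SciPy, not the analysis code used in the study; the sample values are hypothetical, and the McNemar test for binary data (available in statsmodels) is omitted:

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Pick the significance test based on Shapiro-Wilk normality, then apply it."""
    # Both samples must pass the Shapiro-Wilk normality test to use a T-test.
    normal = stats.shapiro(a)[1] > alpha and stats.shapiro(b)[1] > alpha
    if normal:
        name, p = "t-test", stats.ttest_ind(a, b)[1]
    else:
        # Wilcoxon rank-sum test for two independent, non-normal samples.
        name, p = "wilcoxon", stats.ranksums(a, b)[1]
    # The hypothesis is accepted when the significance rate is at most alpha.
    return name, p, bool(p <= alpha)

# Hypothetical score samples for the two groups.
name, p, accepted = compare_groups(
    [69.0, 72.0, 65.0, 70.0, 74.0, 68.0], [61.0, 64.0, 40.0, 66.0, 58.0, 63.0]
)
print(name, round(p, 4), accepted)
```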

3.2.2 Hypothesis 1

Participants that use platform A remember the studied subject better than those that use platform B.

We address this hypothesis by comparing the total performance of the two groups on the first session, the results of which are presented in table 3.

Since both sets pass the normality test, a T-test is applied. The T-test value is p=0.1255 > 0.05. Therefore hypothesis 1 is rejected.

In figure 3 the distribution of the performance for each platform is shown. It is notable that the standard deviation of Platform A (the gamified, virtual platform) is much smaller than that of Platform B. This essentially means that while participants that used Platform A performed better, though not significantly so, they performed more consistently.

3.2.3 Hypothesis 2

Participants that use platform A retain more spatial information than those that use platform B.


Figure 3: Aggregated performance distribution per platform

This hypothesis concerns the spatial information that participants received during the learning process. More specifically, the assessment test was structured so that all space-related questions fall under the ”Workspace” category. Therefore, in order to test this hypothesis, we compare the performance of each platform for that specific category.

Figure 4: Aggregated performance per platform segregated by topic

In figure 4, the total score for each platform is broken down into sub-scores, one per category. Note that the score for each category is scaled to 0-100 to make comparison in the graphic easier; the categories do not contribute equally to the calculation of the total score (which is analyzed in Hypothesis 1).

Just by looking at this graph (fig. 4) one can see that the biggest difference in performance is in the "Workspace" category. Table 4 contains all the relevant statistics for it.

Both data sets pass the normality test and therefore the T-test is applied, giving p=0.00000005 < 0.05. Therefore hypothesis 2 is accepted.

Spatial performance
Measure             Platform A   Platform B
Count               15           15
Mean                79           52
Median              83           57
Minimum             40           17
Maximum             100          93
Mode                77           55
Standard deviation  16.79        21.16
Normality           Yes          Yes

Table 4: Platform performance on spatial information

3.2.4 Hypothesis 3

Participants that use platform A have better long-term retention of information (tested over a period of two weeks) than those that use platform B.

Participants were asked to retake the assessment test two weeks (14 days) after the first session. The results of that session are shown in table 5.

Performance (2nd session)
Measure             Platform A   Platform B
Count               15           15
Mean                52           41
Median              51           42
Minimum             31           11
Maximum             68           71
Mode                43           11
Standard deviation  11.96        18.19
Normality           Yes          Yes

Table 5: Platform performance: second session

Applying a T-test to this data set, simply to check whether there is a significant difference in overall performance between the two platforms in the second session, gives p=0.0002 < 0.05. The fact that the performance of the virtual platform is now significantly better suggests that the retention rate itself may be significant. Figure 5 shows the performance of each platform for each session. By comparing the results of the two sessions for each individual we can calculate a retention rate for each participant. The result is shown in table 6.
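The per-participant retention rate can be sketched as follows; the exact formula is not spelled out in this excerpt, so the natural reading (second-session score as a percentage of the first) is assumed, and the scores are hypothetical.

```python
# Sketch of the per-participant retention-rate calculation.
# Assumption: retention = second-session score / first-session score, as a
# percentage; the scores below are hypothetical, not the study's data.
def retention_rate(first, second):
    return 100.0 * second / first

session1 = [80, 65, 90]                    # hypothetical first-session scores
session2 = [60, 50, 72]                    # same participants, two weeks later
rates = [retention_rate(s1, s2) for s1, s2 in zip(session1, session2)]
print([round(r, 1) for r in rates])        # [75.0, 76.9, 80.0]
```
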

Both sets pass the normality test. The T-test is applied with a value of p=0.0322 < 0.05 and so hypothesis 3 is accepted.

3.2.5 Hypothesis 4

Participants that use platform A have a more positive user experience than those that use platform B.

To address this hypothesis we use the quantitative data gathered from the questionnaire. User experience is not a single factor but a combination of multiple factors, all of which the participants were asked to rate in the questionnaire. Similarly to performance, an aggregated score is calculated for each individual, the result of which is shown in table 7. This time, the scale is 8-48 instead of 0-100.

Figure 5: Performance per session per platform

Retention rate
Measure             Platform A   Platform B
Count               15           15
Mean                74           69
Median              76           75
Minimum             49           17
Maximum             93           95
Mode                N/A          N/A
Standard deviation  13.89        21.16
Normality           Yes          Yes

Table 6: Retention rate
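The 8-48 scale suggests an aggregation of eight Likert-style items; a minimal sketch under that assumption (the actual questionnaire items are not listed in this excerpt):

```python
# Sketch: aggregating questionnaire answers into one user-experience score.
# Assumption: eight items rated 1-6 each, so the sum ranges from 8 to 48,
# matching the scale reported in table 7. Ratings below are hypothetical.
def ux_score(ratings):
    assert len(ratings) == 8 and all(1 <= r <= 6 for r in ratings)
    return sum(ratings)

print(ux_score([5, 6, 4, 5, 6, 5, 4, 6]))  # 41
```
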

Since at least one of the data sets does not pass the normality test, the Wilcoxon significance test is applied, giving p=0.082 > 0.05, which indicates that the difference is not significant. Based on this measure, hypothesis 4 is rejected.

3.3 Gamification

In order to observe the effect of gamification itself, we study the correlation between performance and user experience for each gamification element. As table 8 shows, there are a few weak (around 0.25) positive or negative correlations, but no medium (around 0.5) or strong (close to 1.0) ones.
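Computing such a coefficient can be sketched as follows; the per-participant data are hypothetical, and Pearson's r is assumed (the excerpt does not state which correlation coefficient was used):

```python
# Sketch: correlating performance with a gamification-element rating, as in
# table 8. Data are hypothetical; Pearson's r is assumed here.
from scipy.stats import pearsonr

performance = [79, 83, 62, 90, 71, 55, 88, 67]   # hypothetical test scores
objectives = [5, 6, 4, 6, 5, 3, 6, 4]            # hypothetical ratings of the
                                                 # "objectives" game element
r, p_value = pearsonr(performance, objectives)
print(-1.0 <= r <= 1.0)  # True: Pearson's r is always in [-1, 1]
```
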

3.4 Notable results

These results are not crucial in order to address the main research question but are notable and can help indicate possible points for future research.

• The correlation between performance and user experience is 0.49 for the virtual platform and 0.60 for the classic platform.

• The performance in emergency-exit related questions is significantly better (p=0.006) for the virtual platform (measured by both the Wilcoxon and the McNemar tests).

• Participants found the virtual platform easier to use than the classic one.

• For the question "I would use this method again", the virtual platform scored significantly better (p=0.0045).

• The virtual platform performed better on all user-experience related questions.

User experience
Measure             Platform A   Platform B
Count               15           15
Mean                39           34
Median              41           36
Minimum             25           16
Maximum             45           44
Mode                44           41
Standard deviation  6.31         8.41
Normality           No           Yes

Table 7: User experience per platform

Gamification correlations
Factor 1      Factor 2                         Corr. coefficient
Performance   Combined gamification elements   -0.09
Performance   Objectives                        0.39
Performance   Minimap                          -0.38
Performance   Controls                          0.16
Age           Combined gamification elements   -0.33

Table 8: Gamification correlations

4. DISCUSSION

As seen in the results (chapter 3), two out of four hypotheses are accepted.

In the first hypothesis the compared performances were not significantly different. While this means the hypothesis is rejected, it does not mean that the opposite (that the classic platform outperforms the virtual one) is true. It simply means that, although the virtual platform performed better in our study, the difference is not significant enough to conclude that this holds in general.

In table 3 (performance) it is interesting to note that the standard deviation of the virtual platform is much smaller than that of the classic one, but the maximum score is also much lower. This means that learning and training on a gamified, virtual platform produces more consistent and stable results, but possibly without the highest potential. For practical applications, if one would like to teach or train a group of people so that most of them reach a sufficient level, an application like platform A is more suitable. On the other hand, if one would like to train a group with the goal of producing a few high-achievers while compromising the outcome for some of the rest of that group, platform B is more suitable.

The second hypothesis is accepted with a very small p-value. We are almost certain that gamification and virtual reality result in better remembering of spatial information. The goal of this study was to see whether the combination of the two would result in a better outcome, and that was achieved, but our data cannot tell which specific game mechanic or virtual reality technique assisted in that.

The third hypothesis is also accepted by calculating the retention rate from the first session to the second for each participant. Another strong reason to support this hypothesis is that the performance of the virtual platform on the retake is significantly better than that of the classic one, while that was not the case on the first test. This tells us that gamification and virtual reality help trigger the memorization process during learning, which leads to better long-term retention. We are not certain, however, whether that remains the case as more time passes.

The final hypothesis is rejected, meaning that participants did not have a significantly better user experience based on an aggregated score over all user-experience related questions. While the results show that overall the participants preferred using the virtual platform, the difference is not significant enough to support that argument in the general case. As discussed in the introduction, other studies find that gamification and virtual reality do result in a better user experience. This can mean either that this is not the case for e-learning platforms, or that the design of our virtual platform was not refined enough to reproduce that result; further research is suggested here. Despite the fact that the overall user experience is not significantly better, the virtual platform proved significantly better on some of the related questions, as seen in the notable results for the question "I would use this method again" (p=0.0045). This is a clear indication that, in some regards, a virtual e-learning platform provides a better user experience.

The gamification correlations are not strong enough to determine which game elements assisted the learning process the most, but they give an indication that can be used in future research. For example, the objectives have a weak-to-medium positive correlation with performance, while the opposite holds for the minimap. A better design for a future platform might therefore focus more on the objectives and omit the minimap. From the notable results we see that there is a medium-strength positive correlation between user experience and performance. This suggests that the more a participant likes the platform, the better he or she performs. While that may sound logical to some, we cannot be certain it is the case: not all high-performing students enjoy studying as much. We also notice that performance on the emergency-exit questions is significantly better on the virtual platform. We suspect this is mainly because of the virtual environment and the objective "Locate all emergency exits". This highlights the potential of VR and gamification to safely and efficiently train individuals for emergency situations.

We would like to acknowledge potential weaknesses of the experiment: the small group sizes, bias toward the field, and the design of the two platforms. While the classic platform was modeled after a popular and established e-learning platform, the virtual one had to be imagined and designed from scratch, based of course on established gamification and virtual reality techniques. The virtual platform was developed by a single person within two weeks to fit the time schedule of this research. Improvements to the design, or more meticulous and professional development, could potentially yield even better results.

5. CONCLUSION

To address the main research question "Do gamification and virtual reality techniques improve the learning process in e-learning platforms?" we look at the statistical analysis of our data and the results of our hypotheses. While some of the results were significantly better, we cannot state in general that those techniques improve the learning process. We can, however, specify which parts of the learning process they improve and how.

We conclude with certainty that gamification and virtual reality techniques in e-learning platforms result in better memorization of spatial information. In our case, such spatial information includes the layout of a floor, the location of emergency exits, the position of working spaces and information about certain rooms.

We are also able to conclude that those techniques improve the long term results in learning. The participants of this study that used the gamified and virtual platform had significantly better memorization results two weeks after they had their learning experience, and a better retention rate, compared to those that used the classic e-learning platform.

6. FUTURE WORK

Many future studies can build on the findings and the software developed for this one. For more reliable results at a larger scale, we suggest a study with more participants from various backgrounds and disciplines.

Furthermore, we suggest similar studies with the following alterations:

• Different age groups, especially children
• Different material and scenario
• Application in specific disciplines
• Different game mechanics
• Different VR equipment
• Different VR interface
• Comparison to other classic platforms
• Comparison to classical learning


Other studies can experiment with specific gamification elements to investigate which are more appropriate for learning platforms. Similarly, virtual reality interfaces can be compared to one another to find the ones most fitting.

7. ACKNOWLEDGMENTS

I would first like to thank my thesis advisor Dr. Frank Nack of the Faculty of Science at the University of Amsterdam. He consistently instructed me and steered me in the right direction whenever I asked for it.

I thank the participants who were involved in the survey for this research project. Without their participation and valuable input, my thesis could not have been successfully conducted.

I thank and acknowledge Dr. Anja van der Hulst, Senior Consultant for Serious Gaming at TNO as the second reader, and I am grateful to her for her valuable comments on this thesis.

I would also like to thank Sebas de Jongh, Hardware Architect at CGI Nederland, for his valuable guidance and input throughout researching and writing this thesis, as well as helping me find participants especially suitable for my topic.

Finally, I must express my very profound gratitude to my parents for providing me with unfailing support and continuous encouragement throughout my years of study and through the process of researching and writing this thesis. This accomplishment would not have been possible without them. Thank you.

References

Alshanbari, H. (2013). Video Games in Education. Hassacc, 198–202.

Anaraki, F. (2004). Developing an Effective and Efficient eLearning Platform. International Journal of The Computer, the Internet and Management , 12 (2), 57–63.

Banerjee, B., & Stone, P. (2007). General game learning using knowledge transfer. Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI), 672–677.

Bransford, J. D., Brown, A. L., & Cocking, R. R. (1999). How people learn: Brain, mind, experience, and school. National Academy Press.

Bricken, M. (1991). Virtual reality learning environments: potentials and challenges. ACM SIGGRAPH Computer Graphics, 25 (3), 178–184.

Calfee, R. C. (1981). Toward a theory of intrinsically motivating instruction. Cognitive Science, 4, 333–369.

Coates, V. T. (1992). The Future of Information Technology. The Annals of the American Academy of Political and Social Science, 522 (May), 45–56.

Codeacademy. (2016). Learn to code interactively. Retrieved 2016-06-26, from https://www.codecademy.com/

Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness. Proceedings of the 15th International Academic MindTrek Conference on Envisioning Future Media Environments, 9–11.

eLearning Industry. (2016). The top elearning statistics and facts for 2015 you need to know. Retrieved 2016-06-22, from https://elearningindustry.com

Gåsland, M. (2011). Game mechanic based e-learning. Science and Technology, Master thesis (June), 77.

Gee, J. P. (2003). What video games have to teach us about learning and literacy. Computers in Entertainment (ACM), 1(1), 1–4.

Granic, I., Lobel, A., & Engels, R. C. M. E. (2014). The benefits of playing video games. American Psychologist, 69(1), 66–78.

Huotari, K., & Hamari, J. (2012). Defining gamification. Proceeding of the 16th International Academic MindTrek Conference on Envisioning Future Media Environments, 17.

Katsionis, G., & Virvou, M. (2008). Personalised e-learning through an educational virtual reality game using Web services. Multimedia Tools and Applications, 39 (1), 47–71.

Kiili, K. (2005). Digital game-based learning: Towards an experiential gaming model. Internet and Higher Education, 8 (1), 13–24.

Knerr, B. W. (2007). Immersive simulation training for the dismounted soldier. U.S. Army Research Institute (February).

Mezirow, J. (1991). Transformative dimensions of adult learning. ERIC.

Moodle. (2016a). Moodle e-learning platform. Retrieved 2016-06-25, from https://moodle.net/

Moodle. (2016b). Moodle statistics. Retrieved 2016-06-25, from https://moodle.net/stats/

Psotka, J. (1995). Immersive training systems: Virtual reality and education and training. Instructional Science, 23 (5-6), 405–431.

Steuer, J. (1992). Defining Virtual Reality: Dimensions Determining Telepresence. Journal of Communication, 42 (4), 73–93.

Sturm, L. P. (2008). A Systematic Review of Surgical Skills Transfer After Simulation-Based Training. Annals of Surgery, 248 (2), 166–179.

Tan, J., & Biswas, G. (2007). Simulation-based game learning environments: Building and sustaining a fish tank. Proceedings: First IEEE International Workshop on Digital Game and Intelligent Toy Enhanced Learning, 73–80.

Torrente, ´A. D. E. M., J Blanco. (2009).

Unity. (2016). Unity game development platform. Retrieved 2016-06-26, from http://unity3d.com/

Zammitto, V. (2008). Visualization techniques in video games. , 267–276.
