
The effect of gamification on student investment in the context of career choice

SUBMITTED IN PARTIAL FULFILLMENT FOR THE DEGREE OF MASTER OF SCIENCE

Jorim Bakker

10359060

MASTER INFORMATION STUDIES: GAME STUDIES

FACULTY OF SCIENCE

UNIVERSITY OF AMSTERDAM

August 28, 2016

1st Supervisor: Dr. Frank Nack (ISLA, UvA)

2nd Supervisor: Dr. Dan Buzzo (ISLA, UvA)


The effect of gamification on student investment in the context of career choice

Jorim Bakker

University of Amsterdam
Science Park, Amsterdam, Netherlands
+31 304242
jorimbakker@hotmail.com

1. INTRODUCTION

In the Dutch education system, students need to decide on their future education during middle school. Earlier in life, these types of difficult decisions were made by their parents, but now students must pursue their own dreams and desires. The student is assumed to know his or her own desires, skills and characteristics best, but the matter is not as straightforward as that. Career decisions are complex, and there is a considerable chance of an adolescent student making a wrong choice (Gati & Saka, 2001).

In the first year of their generally self-elected higher education, almost 40% of college and university students drop out of their current study (Van den Broek et al., 2015). This survey revealed that the main reason for dropping out or switching was that students felt they had made the wrong study choice. The group of students that would eventually cease their studies mentioned that they had not made a conscious decision. The chances of dropping out or switching studies are much smaller for students that attend so-called ‘open door days’, during which a potential student can experience what a regular day in the study programme is like. The dropout rate increases for students that base their choice on the potential high pay or social status that the career would yield. However, Van den Broek et al. found that when students truly base their choice on a mix of personal interests, their capacities, the career perspective and job options, the dropout rate plummets.

In order to lower the dropout numbers, it can be reasonably assumed that students should be taught to base their choice on the aforementioned elements: they should learn what their interests and capacities are, and what the career perspective and job possibilities for their desired study look like. They also need to be invested, so that they start taking action to prepare for the future, such as attending open door days. In the following paragraphs, some examples are given of methods and programs that attempt to address these points and combat the high dropout rates.

A primary career choice method used in the Netherlands is the Qompas method (Qompas, 2016). This method, named after the company that developed it, consists of several questionnaires that attempt to determine the interests and competencies of students. However, these tests are all separate and not integrated in one system or tool. The integration has to be done manually by the student or teacher; e.g. competencies are not automatically linked to personal desires or motivations. There is also no aspect that causes students to be invested in their future and to actively consider and investigate their options. A small sample of teachers who were asked about the program reported that the activation of the student accounts was very complicated and created much confusion for teachers and students alike. In other words, usability is a facet that hampers this method. The system has been in use for several years now, by many schools, but the dropout rate has remained stable (Onderwijsinspectie, 2014; 2015).

Qompas has recently announced a new program that claims to integrate the aforementioned tests, among other things (Nationale Onderwijsgids, 2015). Though definitely an improvement, simply compiling these tests does nothing to address the degree of investment of the student concerning his or her future. Little information is publicly available for this tool, so it is difficult to compare it to the tools and methods mentioned and explored in this article. In addition, the effectiveness of the program has yet to be determined.

Another method for decreasing the high dropout rate is counselling with a (personal) study guide. This method attempts to let students make a more conscious decision and encourages them to explore universities and colleges. Though there is no hard data on this particular method, it is arguably effective, as it addresses some of the main causes of the high dropout rate specified in the aforementioned report by Van den Broek et al. (2015). First, students are made more conscious of their decision by interacting with a study guide. Second, the study guide attempts to help students discover their personal interests, capacities, career perspective and job options.

However, counselling is often unavailable, and where it is available it can cost a significant amount of money when the counsellor comes from a third-party organisation instead of the school. It can be argued that these costs deter a great number of parents from using this option. Additionally, optimal counselling cannot be performed by class tutors for all their students due to time constraints.

All the aforementioned methods have one or more significant shortcomings that hold them back from being effective study/career choice methods. Therefore, a method should be devised that lacks these shortcomings but still contains the elements necessary for improving student career choice, as described by Van den Broek et al. (2015) earlier in this section.

An element that might improve methods aimed at lowering the dropout rates is gamification. According to Deterding et al. (2011a, 2011b), gamification is the use of game-design elements in a non-gaming context. This definition is suited for the purpose of this research and will therefore be used. Gamification has been shown to increase the investment and motivation of individuals in a wide range of contexts, including educational environments (Deterding et al., 2011a; Hamari et al., 2014b), although this depends on how well it is implemented.

This study will determine if middle school students feel they are more invested in their future career choice after using a gamified pilot platform that was designed with this goal in mind. As a result, the following research question was established:

RQ 1: Does gamification in career choice programs increase the investment of students in their future career?

Investment is defined here as the time students spend on their future choices and the degree to which they are interested in and concerned with their future.

In this context, we also analyse the experimental tool used (“Yubu”; Yubu, 2016) with respect to potential flaws that might influence the experimental findings. This part of the work should also provide insights into general recommendations for the design of such study choice support environments.

RQ2: How can gamification tools be improved to achieve more student investment?

The remainder of this paper is structured as follows. In section 2, the approach to answering the research questions is explained. Section 3, “Tool and experimental set-up”, describes the details of the experiment, such as the setup and the procedure. The results of the experiment can be found in section 4, together with the figures, tables and statistics describing the data. The data and their implications are discussed in section 5, “Discussion”. The article closes with section 6, “Conclusion and future work”, where a final statement is made on the results of the experiment and research possibilities are suggested, based on the limitations of the experiment.

2. METHOD

This study first examines whether the investment of students increases after using a gamified career choice program such as the current platform. Second, it investigates how gamification in this context can best be implemented.

To answer the research questions, we follow an experimental approach with quantitative and qualitative elements.

The overall experimental approach makes use of a second-grade class in a middle school environment, with a pupil population that faces the task of deciding on their studies for the year to come.

The experiment is based on an existing tool, namely “Yubu” (Yubu, 2016), an experimental platform that provides classroom assignments and individual assignments. The platform relies heavily on gamification: all the assignments the tool offers are gamified in some way. For example, students are awarded points for assignments they complete, and new assignments are unlocked as the user accumulates more points. Depending on the challenge, some are individual while others require a group of 2 or more. A more in-depth look at the platform is provided in section 3.1, “The Yubu platform”.

To answer how the gamified tool influences the investment of students (research question 1), a questionnaire on investment is applied. This is a mostly quantitative method, with 2 qualitative questions. The participants perform group challenges under the guidance and supervision of the researcher. The individual challenges can be done during class, but also at any time outside of class, as the experimental tool is available online.

The second research question arises from the experimental character of the Yubu platform. As the current state of the platform might influence the planned experiment, it is necessary to investigate any shortcomings. A mixture of quantitative and qualitative methods is applied, in the form of a questionnaire, focused user-testing and a focus group session. The aim is then to generalize those findings and, besides identifying shortcomings relevant to the experiment, also distil recommendations for the design of environments that facilitate student investment.

3. TOOL AND EXPERIMENTAL SET-UP

3.1 The Yubu platform

The platform used in this experiment is an experimental subpart of the study choice tool “Yubu” (Yubu, 2016). Hereafter, the tool used for this experiment will be referred to as ‘the (pilot) platform/tool’ or ‘the (experimental) platform/tool’, as it is only an experimental platform and not representative of the entirety of “Yubu”. Created by the company BeInvolved, the platform provides classroom games and individual games or assignments, which help students assess their own interests and capabilities and link these two factors with what they want to achieve or do in the future. If implemented correctly, this tool would be expected to improve the student's career choice, as it addresses the underlying problems of career choice and uses gamification to get students invested. Figure 1 exemplifies the general look of Yubu.

Figure 1. The “explore” subpage as presented in the online tool Yubu.

The platform consists of 3 main subpages (translated): “overview”, “explore” and “challenges”. “Challenges” is where the student works on the assignments the platform offers, which are divided into 4 categories: “qualities”, where students discover or develop their skills; “motives”, where students discover or define their interests in life and school; “prospects”, where students learn about their future possibilities and how to achieve them; and finally “contacts”, where students learn to build a network of people by interacting and communicating with others.


Most of these challenges are digital, but some, usually group assignments, are analogue and must be printed out.

“Overview” shows the progress of the user so far. This includes the level the student has attained in certain categories of the tool (qualities, motives etc.) and the career options or study options that apply to them.

“Explore” allows a student to look for information about jobs, middle school profiles and high school or university courses. The tool also allows for some personalisation, for example by giving students the option to change their profile picture. By completing challenges, students gather points and eventually “level up”. At that point, new, more in-depth challenges become playable, which the student must complete (in any order he/she desires) to advance to the next level.
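To make this mechanic concrete, the progression loop described above can be sketched in a few lines of Python. This is a minimal illustration only: the point rewards, level thresholds and challenge names are hypothetical, as the platform's actual values are not documented here.

# Minimal sketch of the progression loop described above: completing
# challenges awards points, and accumulated points unlock the next level,
# whose challenges may be completed in any order. The rewards and level
# thresholds below are hypothetical; Yubu's actual values are not documented.

LEVEL_THRESHOLDS = [0, 50, 120, 220]  # points needed for levels 1-4 (assumed)

class Student:
    def __init__(self):
        self.points = 0
        self.completed = set()

    @property
    def level(self):
        # The highest level whose point threshold has been reached.
        return sum(1 for t in LEVEL_THRESHOLDS if self.points >= t)

    def complete(self, challenge, reward):
        # A challenge only awards points the first time it is completed.
        if challenge not in self.completed:
            self.completed.add(challenge)
            self.points += reward

student = Student()
student.complete("qualities: party game", reward=30)
student.complete("motives: interests quiz", reward=30)
print(student.points, student.level)  # 60 points -> level 2 under these thresholds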

The pilot platform is unique in its gamification, personalized approach and streamlined, modern interface. Besides arguably being more user-friendly than similar methods, the tool attempts to use gamification elements to increase the investment that students have in their own future education and career.

3.2 Setup

All participants (n=21), 13 male and 8 female, are part of the same class, where the experiment also takes place: this goes for the surveys as well as the intervention phase. All 21 participants are familiar with the default method of career choice assistance, the Qompas method.

The overall experiment design contains two parts. In the first part, the participants fill in a questionnaire before the intervention, then use the Yubu tool, and finally fill in the questionnaire one more time. The second part of the experiment covers the usability of the tool.

The first questionnaire, taken before the intervention, contains 12 items: 4 background questions (name, age etc.) and 8 items pertaining to the investment of the students. An example of the latter is: “I have already decided on a study profile.” The Likert scale used in the majority of items in both surveys ranges from 1 (“I strongly disagree with the statement”) to 5 (“I strongly agree with the statement”), with 3 meaning “I neither agree nor disagree with the statement”.

The participants also answer questions about the time they spend on, and their degree of attention to, their own future career, whether near or far off (their investment). The chosen questions measure investment according to the given definition. These questions are asked in what shall be called the first questionnaire, which is filled in before the intervention, i.e. before the participants have used the gamified tool. The complete questionnaire is available in Appendix 1.

After filling in the questionnaire, the students use the pilot platform for 5 lessons of 45 minutes each, so they have the opportunity to achieve a solid understanding of the program and its elements. These lessons replace the standard career choice lessons and take place at the same times as the originally planned lessons. The exact content of these classes can be found in the following section, section 3.3, “Procedure”.

Finally, several measurements take place following the previous phase. First, the students are asked to fill in a second questionnaire, which contains the same 8 questions on investment from the first questionnaire, plus 34 extra questions on the usability of the experimental platform. The results are used to compare the students' investment in the context of career choice before and after using the platform, by comparing the second questionnaire with the first.

In the second part of the second and final questionnaire, the students will answer questions about the specific parts of the tool: the games, the levelling, the personalization, the system of filling in skills and motivations and linking them to possible courses. These questions serve to determine the aspects of the tool that are considered the most useful and the most enjoyable to the participants. The user-friendliness of the separate aspects will also be inquired about. This questionnaire can be found in Appendix 2.

Following the questionnaire, the experimental tool is user-tested by the participants, followed by a general focus group session to handle any remaining questions. During the user-testing phase, the participants are asked to complete certain tasks or processes within the program on their own. Determining whether they can complete each task is the main objective of this phase, which is used to measure the platform's user-friendliness and its degree of self-explanation. The 4 tasks are as follows:

1. Complete a single (specified) challenge in Yubu.

2. Add the characteristic “want to make people happy” to your account.

3. Look up what the study “administrator” entails.

4. Change your profile picture.

During testing, the students are asked to be critical and to write down anything they think could use improvement. After every task, the participants are asked whether they were able to complete it without the help of a fellow student. Students who did not complete the task are asked why they were unable to do so, and the rest of the class is then asked, by a show of hands, whether they agree with what the student said. This is recorded in an audio-visual format to ascertain the number of participants that agreed with a certain statement and the number that successfully completed a task. In-depth or follow-up questions are asked for the sake of specification as the need arises, depending on the situation.

During the focus group session, several questions are posed to the students to inquire about the remaining, untouched issues concerning the gamified tool they have been using. These questions depend on which information was already discovered during the user-testing phase. Some of them are control questions for the questionnaire taken earlier, to check whether the responses are similar.


Figure 2. Participants working together on a class assignment.

As there is an absence of surveys that address the investment of students in the context of study or career choice, the relevant items on the questionnaire were self-constructed. The items were all focused around the definition of ‘investment’ as specified earlier. In other words, the items aimed to determine the concern students had for their future, in the context of career.

The 34 items constructed to find out how the platform could be improved were designed to be as concise as possible without failing to cover all aspects of the tool. Since this questionnaire was aimed at discovering how the tool could be improved, it mentioned very specific elements of the tool. A user-friendliness survey was therefore used as a basis for constructing this questionnaire (Stephan et al., 2006). However, certain elements of the experimental platform were unique, like the presence of print-out assignments and the personalisation options of the tool, so additional or different items had to be constructed.

3.3 Procedure

Ultimately, 8 lessons of 45 minutes were used for the experiment and measurements combined. During lesson 1, the participants filled in the first online survey, which measured their level of investment before ever having used the experimental study choice platform. Immediately after, the experiment was explained to the participants: what it was about, what they would be doing, what would be expected of them and how many lessons it would take. For clarification, the experimental platform works with levels: the participants were told they had to complete the tasks up to at least level 2.

The main pages of the tool were shown and briefly explained to the participants. This introduction was concluded by telling the students what part of the program had to be finished before the next lesson. Following the introduction, the students were put to work in the program.

During lesson 2, the participants completed an assignment in the form of a ‘party’ game, under supervision of the researcher (see Figure 2). This was a form of group assignment, as explained in section 3.2, “Setup”. As usual, the participants were told what part of the platform they were expected to have finished by the start of the next lesson.

At the start of the 3rd lesson, the class was divided into groups of 3 or 4 participants each. Within these groups, a (physical) game from the study choice tool was played for the duration of the class. No homework was assigned that lesson.

Lesson 4 was centred around the participants working on assignments within the online section of the platform. The participants were assigned to finish up what they were expected to complete, as specified at the beginning of the experiment: all tasks up to level 2.

In the 5th and final experimental lesson, the participants played another group assignment of the experimental platform, explained and led by the researcher. Afterwards, they had a short period of time to finish level 2 if they had not done so already. Though most finished at this point, the homework assignment was, again, to finish up to at least level 2. This was to ensure that all would be done by the time the measurement phase started at the next lesson.

At the start of lesson 6, the participants filled out the second questionnaire (see section 3.2). Even though the participants had been instructed beforehand to ask for clarification if they did not understand a question, some answers contradicted other answers given by the same participants. Therefore, the decision was made to have all participants retake the questionnaire after an explanation was given of all misunderstood questions. Afterwards, the participants were subjected to the user-testing method specified in section 3.2, “Setup”. This process was not finished in one lesson, so it was split over two lessons.

Lesson 7 was used to finish up user testing and to have the focus group session. Afterwards, the students were thanked for their participation and the data was ready for analysis.

4. RESULTS

4.1 Data analysis procedure

The mean of the investment questions from the first questionnaire is compared with the mean of the investment questions from the second questionnaire. As two groups of dependent, ordinal data are being compared, the answers to the investment-related questions are statistically compared using a Wilcoxon signed-rank test and a significance threshold of p < 0.05. All other data are discussed without statistical tests, as such tests cannot be applied to the data in question. This refers to items on the survey that yield non-testable data, as well as the user-testing and group session data: these data are the result of open-ended questions, not some variation of a multiple-choice question.
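As an illustration of this procedure, a minimal sketch in Python (using SciPy) is given below. The response vectors are placeholders rather than the study's data, and the one-tailed alternative mirrors the one-tailed significance values reported in Table 1.

# Sketch of the analysis described above: a Wilcoxon signed-rank test on
# paired before/after Likert responses, with a significance threshold of 0.05.
# The response vectors are placeholders, not the study's actual data.
from scipy.stats import shapiro, wilcoxon

before = [3, 2, 4, 3, 3, 2, 4, 3, 3, 2, 3, 4, 2, 3, 3, 4, 2, 3]  # n = 18
after  = [3, 3, 4, 3, 2, 2, 4, 4, 3, 2, 3, 4, 3, 3, 3, 4, 2, 3]

# Normality check on the paired differences; section 4.2 reports that none
# of the investment items were normally distributed, which motivates the
# nonparametric test.
diffs = [a - b for a, b in zip(after, before)]
print("Shapiro-Wilk p =", shapiro(diffs).pvalue)

# One-tailed paired comparison: did the scores increase after the intervention?
stat, p = wilcoxon(after, before, alternative="greater")
print("Wilcoxon statistic =", stat, "p =", p)
print("significant at p < 0.05:", p < 0.05)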

4.2 Analysis Questionnaire 1

During the execution of the first questionnaire, 3 participants were absent, which led to a loss of 14% of the investment-related data in the survey. As a result, the data analysis for the questionnaire was performed on the remaining participants (n=18), while the analysis of the user-testing and group session was performed on all participants (n=21).
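A minimal sketch of this pairwise-complete filtering is shown below; the column names and scores are hypothetical, but the counts match those reported above.

# Sketch of the filtering implied above: participants absent for the first
# questionnaire have no 'before' score, so their rows are dropped before the
# paired analysis. Column names and scores are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "participant": range(1, 22),              # the full class, n = 21
    "q1_before": [3, 2, 4] * 6 + [None] * 3,  # 3 absentees -> missing scores
    "q1_after": [3, 3, 4] * 7,
})

paired = responses.dropna(subset=["q1_before", "q1_after"])
print(len(paired))                                 # 18 complete pairs (n = 18)
print(round(1 - len(paired) / len(responses), 2))  # 0.14, i.e. the 14% loss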

None of the survey questions pertaining to investment were normally distributed. The comparison of the measure of investment before and after using the platform is shown in Figures 3, 4 and 5. Figure 3 displays the questions that used a Likert scale and therefore yield ordinal variables.


Figure 3. Investment scores per survey question (x-axis: question number). Shown are the mean investment scores and standard error from the survey before and after the intervention. Only questions that resulted in ordinal variables are included in this graph. *, p < 0.05.

Table 1. Statistical data of the investment score comparisons. Shown in this table are the mean difference, standard error, t or Z value, degrees of freedom and significance for questions 1 through 8. *, p < 0.05.

Figure 4. Participant choice of future education. Shown are the numbers of students that aimed for an MBO or HBO level of education, before and after the intervention. These results relate to question 3 of the survey. *, p < 0.05.

Figure 5. Time that students spent considering their future education and career (y-axis: time in hours). Shown is the time in hours the participants spent on their future career before and after using the pilot platform. *, p < 0.05.

A Wilcoxon signed-rank test was performed on this dataset and any significant difference (p < 0.05) between a ‘before’ and an ‘after’ score is marked with an asterisk (*). Figures 4 and 5 display the data of the two questions that did not use a Likert scale (questions 3 and 8).

There was no significant difference found between the response scores of any of the statements before and after the participants had used the pilot platform. The relevant statistical data for each comparison can be seen in Table 1.

This experiment also recorded the feedback participants had on the tool. The questionnaire that the participants were asked to fill in after using the pilot platform for 5 lessons contained a variety of questions about the user experience and potential feedback. The participants could respond to the statements using a Likert scale, with the exception of statements 4, 16, 20, 24 and 32, which were open questions.

Table 1:

Comparison             Δ mean    SE        t         Z        df    Sig. (one-tailed)
Q1before - Q1after      0.167    0.283     0.589     n/a      17    0.282
Q2before - Q2after      0.556    0.459     1.211     n/a      17    0.122
Q3before - Q3after      n/a      n/a       n/a      -0.707    n/a   0.240
Q4before - Q4after     -0.111    0.351    -0.316     n/a      17    0.378
Q5before - Q5after     -0.267    0.419    -0.636     n/a      14    0.268
Q6before - Q6after      0.444    0.259     1.719     n/a      17    0.052
Q7before - Q7after     -0.222    0.482    -0.461     n/a      17    0.651
Q8before - Q8after

The scores on the aforementioned statements can be seen in Figure 6, with the exception of the open questions, which are treated in the “Discussion” section of this article. This dataset was not used in a statistical comparison, so its implications are also discussed there.

Figure 6. Scores on feedback statements related to the pilot platform. Shown are the mean scores on statements 1 through 35, which relate to feedback. Five of the total 35 statements are absent from this figure as they are open questions.

4.3 Analysis User testing

The results of the tasks the participants were asked to complete are noted below, with success rates in percentages:

1. Complete a single (specified) challenge in Yubu. Success rate: 100%. The participants elaborated that it was very simple for them, as they had already done a multitude of challenges. Notes: 2 participants had trouble downloading game instructions: the site would return an error report. 2 participants found that, in challenges where they had to choose between 2 options to indicate a preference, sometimes no options were displayed at all. Additionally, 7 participants noted that they often had to choose between 2 options that were identical.

2. Add the characteristic “want to make people happy” to your account.

Success rate: 71%. The participants that failed all agreed that it was not clear to them where on the site they should look to add a personal trait. It was not self-explanatory to them.

3. Look up what the study “administrator” entails. Success rate: 86%. The participants that failed (3 in total) all agreed that it was not self-explanatory where they should have done this; these 3 had not executed this process before.

4. Change your profile picture.

Success rate: 100%. Everyone had done this at least once, as it was, according to the students, an enjoyable part of the tool.

The focus group session was the final part of the measurement phase. What follows are the questions and subsequent responses of the group session.

1. How did you feel about the games and challenges?

100% of participants encountered problems with the platform's challenges that prevented them from completing the challenges, or that made crucial elements disappear, rendering the assignment useless.

86% of all participants encountered those problems at later levels, about level 3 or above.

81% of participants noticed that it was very easy to gain points without being serious: they were progressing through levels without learning anything.

95% of participants felt that there were too many challenges that required more than 1 player. This made leveling slow, as those games had to be played to continue to a higher level.

52% of participants also found the colours of the platform to be “too bright”.

71% of participants thought there should be more images in the platform to fill up the emptiness in many of the screens.

91% of participants would like the challenges to be more game-like, instead of assignment-like.

33% of participants felt the language used in the tool that addressed them was too childish.

86% of participants enjoyed the group assignments like Quonimo and would like to see those types of games made available on their computers, so they can play with each other there.

43% of participants mentioned they enjoyed Quonimo.

How did you feel about the leveling system?

67% of all participants admitted they did not like the leveling system. It was also noted that the system could glitch at times: 52% of participants experienced at least once that they did not receive points for completing a task.

2. Who has used the “explore” part of the website?

38% of participants did not know what was meant when the “explore” part of the website was explained. When the “explore” tab was demonstrated for the class, 24% of the participants admitted to not having been on that page at all, while only 10% of the participants mentioned they had made use of the page and its function outside of assignments.

3. Was the information button self-explanatory?

33% of participants did not find the information button. 52% of participants did not find it self-explanatory.

4. Is there something else you would like to see in Yubu?

No participants came up with a suggestion.

5. Did anyone use the “question mark” on the home page?

Out of the 9 participants (43%) that clicked on the button, 38% of participants mentioned that the button at least once did not work when they clicked on it; that is 18% of total participants. None of the participants actually made use of the function; they only clicked on it out of curiosity.

(8)

6. Was the part “overview” self-explanatory?

91% of participants found the “overview” part self-explanatory. The 9% that did not found it ‘too busy’.

7. Did you enjoy using Yubu?

76% of participants liked using the tool more than doing another subject.

33% of participants liked the platform more than the Qompas method, which they normally use for study/career choice, while 48% liked the Qompas method better than the experimental platform. The participants that preferred their default method (Qompas) had the following to say: P1: “I did not like [the tool], because I was used to [Qompas].” P2: “[The experimental tool] was not self-explanatory.” Those that preferred the tool said: P3: “Doing the tasks was fun.” P4: “I liked doing the games together with others.”

19% of participants claimed they had no opinion on the matter.

8. Do you feel that Yubu has helped you prepare to make your career choice or choices related to it?

86% of participants felt that the tool had NOT helped them prepare for a career choice.

52% of participants already had an idea about what study or profile they wanted to choose before using the pilot platform.

It should be noted that after question 1, which drew many more follow-up questions and notes from the participants, the participants became less participatory, likely due to exhaustion and/or lack of focus: fewer hands were raised and fewer opinions were expressed voluntarily.

5. DISCUSSION

This study aimed to find an increase in student investment in relation to career choices when gamification was applied. It also attempted to find points of improvement in the gamified tool, the pilot platform. The experiment has shown that there was no increase in student investment after the participants used a gamified career choice program. None of the investment-related questions asked in the questionnaire changed significantly after the participants used the experimental tool, and none of the answers given during the focus group session indicate that they felt more concerned with their future. As a result, the null hypothesis corresponding to RQ1 in section 1 (that gamification does not increase investment) was not rejected.

The findings suggest that gamified career choice programs do not increase the investment of students concerning choices related to their future career. However, there are some methodological notes that add nuance to this conclusion. For one, the result may be strongly influenced by the fact that the experiment lasted only 5 lessons. This was the minimum amount of time advised by the company that designed the platform. In the future, it should be investigated whether a significantly longer usage of the experimental tool does demonstrate an effect on students' ability to make a correct career choice.

Second, the participants could be uncooperative at times. During the second questionnaire, all open questions received some degree of ‘joke answers’. Since participants did not take these questions seriously, it may be assumed that an indeterminate portion of them did not fill in the questionnaire accurately. In general, a high number of responses on the Likert-scale questions were ‘3’: neither agreement nor disagreement. This is an easy way out for those that do not intend to answer questions honestly, and in this case some may have abused it.

Third, this was an experimental tool: it had not yet received thorough polishing, and the resulting technical problems (e.g. bugs) caused many hindrances for the users. This arguably fuels frustration and ultimately affects the users' degree of investment.

There is more to say about the second research question (“How can gamification tools be improved to achieve more student investment?”). The questions on the second questionnaire that relate to the pilot platform specifically shed light on the points of improvement for the platform. Although a great multitude of conclusions can be drawn from the data, only the main ones are discussed here.

Most questionnaire responses average around 3 points on the Likert scale, as can be seen in Figure 6. This indicates neither agreement nor disagreement with the statement that was responded to. There are some exceptions, notably statements 27, 31 and 33. Statement 27 has a mean score of 2.3. The statement was as follows: “I enjoyed that I could favorite interesting things, like motives and qualities.” The low score indicates that students did not agree and therefore did not enjoy being able to favorite interesting things.

Statement 31 has a mean score of 1.8 on the Likert scale. The statement was as follows: “Yubu always functioned correctly.” The below-3 score indicates that the students generally disagree with this statement. Item 32 is an open question which expands on this issue and asks what exactly went wrong. Most explanations touch on the fact that certain challenges did not work and gave an error report when the participants attempted to access them. In the user-testing phase and the group session, it became clear that these were mainly challenges of level 3 and up. Another point mentioned was that certain students did not receive points for completing a task. This too was confirmed during the user-testing phase and the group session, and was an issue shared by a majority of the participants.

Item 33 has a mean score of 3.5 on the Likert scale. The statement was as follows: “After potential clarification, the questionnaire was clear to me.” With a score higher than 3, this suggests that the participants understood the questions and statements.

No significant discoveries were made in the user-testing phase. The majority of participants succeeded in performing each major process available in the experimental tool. However, many participants made remarks and notes that were often confirmed in the group session; these are therefore discussed as part of the focus group findings.

The focus group session of the experiment yielded several major results. One is the fact that the tool is largely unpolished, mostly at higher levels. Almost all participants experienced at some point that challenges did not work correctly and that it was easy to achieve higher levels without putting in actual effort. They also felt that there were too many challenges that required more than 1 user. This made it difficult to progress at times, as another user would also have to be available while occupied with their own work. However, players did like playing with each other during class assignments, and they would like to see those types of games integrated in the tool. Those class assignments were the more “game-like” challenges.

A solution to this paradox might be to shift the focus of the experimental tool mainly to group challenges. During class, students could do the class assignments with each other, while out of class they could do the personal challenges that require only 1 user, as a form of homework. More research would be needed to test the effect of this proposed set-up.

Based on the mean score of item 35 of the questionnaire, students were indifferent to the experimental platform. However, during the focus group session it became clear that some enjoyed the platform, while a small majority preferred their alternative career choice method. This may be influenced by the fact that they had little time to adjust to the experimental platform, while they are already familiar with, and used to, their default method, the Qompas method. Having students participate in only 5 sessions of 45 minutes may not be sufficient to adjust to a new study tool: a longer experimental phase may overcome this problem.

Earlier research indicated that gamification can be an asset when applied correctly (Von Ahn & Dabbish, 2008; Hamari et al., 2014a; 2014b). This also goes for educational contexts (Huang & Soman, 2013; Iosup & Epema, 2014). For example, in higher technical education, gamification has been shown to improve participation in voluntary assignments, increase class participation and increase the students' attention to the design of the course. This suggests an increase in student investment in the course. Since student investment can be enhanced using gamification, it arguably follows that the same could happen in the context of career choice. In this study, it has not been shown that gamification improves student investment or confidence in choices related to their future career, which is, to a certain degree, in contrast with those findings. Some explanations for this result have been given earlier in this section, but it is not far-fetched to assume the results of this study are in accordance with reality. Gamification has not always been beneficial (Hamari, 2013). In certain cases, like in Hamari (2013), gamification has not had any added value, and that might have been the case here. Comments on why the gamification did not work here would be purely speculative, as there is no research that focuses on why gamification might fail in this specific context. Finding out what exactly works and what does not in the context of career choice for students would contribute greatly to gamification attempts in this situation.

To reiterate, this study fails to show an increase in the investment of students. It does show, however, that there are several key elements the students want to see changed. With these changes implemented, it is possible that a polished, gamified career choice tool would add to the investment of students. Ultimately, gamification might still have added value in this context when executed correctly.

6. CONCLUSION AND FUTURE WORK

Due to the issues explored in section 5, not many valid conclusions can be drawn from the collected data. However, this study has yielded some findings that add knowledge to the scientific community and are therefore of value. In this section, those conclusions are drawn and explored.

First, the experiment revealed that students greatly enjoyed working together during tasks. Oftentimes, the work method in schools focuses on individual work: even when dividing the class into groups, the exercises are usually not designed for group work. In the case of the experimental platform used in this experiment, the assignments were designed for cooperation. Besides being gamified, the tasks required interaction between the students, partly in the form of conversation. This form of interaction is often frowned upon in a classroom situation, yet it is a very natural form of interaction, one that teenage students tend to engage in even when it is punished. Based on this study, it can be concluded that classroom assignments, and possibly homework, can potentially be improved by designing them to be more group-focused. This is a promising avenue for future research.

Second, the students delighted in doing the assignments that were more “game-like”. Many assignments the students were expected to do were still in the form of question-and-answer. A question could be: “How would you describe your personality? Choose one of the following.”, followed by a list of personality traits (e.g. kind, straightforward, expressive). It could be said that gamification was not applied to its full potential in these types of assignments. Similar tasks in the experimental version of Yubu are already designed to be “game-like”, so it clearly can be done. Finding ways to gamify the assignments might be a challenge, but considering the activity and engagement of the students when doing the “game-like” tasks, it is worth attempting.

Finally, this study used a within-subject approach to discover whether gamification had an effect on student investment in the context of career choice. This meant that there was no control group to compare the intervention group with. Adding a control group wherein the participants only make use of the default career choice method (Qompas) should be considered in future research. The inclusion of a control group would be a more valid method to determine whether or not gamification has an effect on student investment in the context of future career.

7. REFERENCES

Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011a). From game design elements to gamefulness: Defining gamification. In Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments, 9-15.

Deterding, S., Sicart, M., Nacke, L., O'Hara, K., & Dixon, D. (2011b). Gamification: Using game-design elements in non-gaming contexts. In CHI '11 Extended Abstracts on Human Factors in Computing Systems, 2425-2428.

Gati, I., & Saka, N. (2001). High school students' career-related decision-making difficulties. Journal of Counseling and Development, 79, 331.

Hamari, J. (2013). Transforming homo economicus into homo ludens: A field experiment on gamification in a utilitarian peer-to-peer trading service. Electronic Commerce Research and Applications, 12, 236-245.

Hamari, J., Koivisto, J., & Pakkanen, T. (2014a). Do persuasive technologies persuade? A review of empirical studies. In International Conference on Persuasive Technology, 118-136.

Hamari, J., Koivisto, J., & Sarsa, H. (2014b). Does gamification work? A literature review of empirical studies on gamification. In System Sciences (HICSS), 2014 47th Hawaii International Conference, 3025-3034.

Huang, W. H. Y., & Soman, D. (2013). Gamification of Education. Research Report Series: Behavioural Economics in Action.

Iosup, A., & Epema, D. (2014). An experience report on using gamification in technical higher education. In Proceedings of the 45th ACM Technical Symposium on Computer Science Education, 27-32.

Nationale Onderwijsgids (2015). Qompas komt met vernieuwde lob methode voor profielkeuze. https://www.nationaleonderwijsgids.nl/voortgezet-onderwijs/nieuws/29407-qompas-komt-met-vernieuwde-lob-methode-voor-profielkeuze.html, consulted on 16 July 2016.

Onderwijsinspectie (2014). Onderwijsverslag 2012-2013. http://www.onderwijsinspectie.nl/binaries/content/assets/Onderwijsverslagen/2014/onderwijsverslag-2012-2013.pdf, consulted on 16 July 2016.

Onderwijsinspectie (2015). Onderwijsverslag 2013-2014. http://www.onderwijsinspectie.nl/binaries/content/assets/Onderwijsverslagen/2015/onderwijsverslag-2013-2014.pdf, consulted on 16 July 2016.

Qompas (2016). http://www.qompas.nl/scholieren.html, consulted on 16 July 2016.

Stephan, E., Cheng, D. T., & Young, L. M. (2006). A usability survey at the University of Mississippi Libraries for the improvement of the library home page. The Journal of Academic Librarianship, 35-51.

Van den Broek, A., Tholen, R., Wartenbergh, F., Bendig-Jacobs, J., Brink, M., & Braam, C. (2015). Monitor Beleidsmaatregelen 2014. https://www.rijksoverheid.nl/binaries/rijksoverheid/documenten/rapporten/2014/12/01/monitor-beleidsmaatregelen-2014/monitor-beleidsmaatregelen-2014.pdf, consulted on 16 July 2016.

Von Ahn, L., & Dabbish, L. (2008). Designing games with a purpose. Communications of the ACM, 51, 58-67.


APPENDIX 1: Investment questionnaire

Q1: I feel confident about my future.

Q2: I have already decided which profile I will pick.

Q3: My goal is to study at an MBO/HBO level [pick one].

Q4: I feel confident about my choice in the previous question. [Note: this question refers to Q3, the results of which are shown in Figure 4 in the “Results” section.]

Q5: Answer only if you intend to go into higher education: I am not sure about which study I will do.

Q6: I intend to start working after middle school, instead of going off to pursue a higher education.

Q7: I am not interested in determining my future.

Q8: How many hours have you spent on your future in the past year, on average? [options: 0, 1-3, 4-10, 30]

All statements besides 3 and 8 used a Likert scale to interpret the participants' answers.

APPENDIX 2: User-friendliness questionnaire (translated from Dutch)

1: It was self-explanatory what the various buttons and images on the "Overzicht" (overview) page represented.

2: I found it illogical how the various buttons and images in the "Overzicht" section were arranged on the page.

3: There were too many buttons and images present on the screen.

4: Could you briefly describe what information was shown in the two rectangles on the right half of the screen (see the image above)?

5: It was self-explanatory what the various buttons and images on the "Trainen" (train) page represented.

6: I found it illogical how the various buttons and images in the "Trainen" section were arranged on the page.

7: There were too many buttons and images present on the screen.

8: For my profile choice, I saw no use in forming a picture of my future.

9: I found it useful for my profile choice to determine my qualities.

10: I found it useful for my profile choice to discover my motives.

11: For my profile choice, I saw no use in learning to make contacts.

12: I liked the level system.

13: I found the level system useful for making my profile choice.

14: The level system was unclear.

15: Could you briefly describe where on the page the challenges are shown once you have clicked on "toekomstbeelden", "kwaliteiten", "motieven" or "contacten"?

16: It was self-explanatory what the buttons and images on the "Ontdekken" (explore) page represented.

17: I found it illogical how the buttons and images in the "Ontdekken" section were arranged on the page.

18: There were too many buttons and images present on the screen.

19: Without looking at the image at the top, can you briefly describe how to add a new discovery to your earlier discoveries?

20: It was self-explanatory what the buttons and images on the "Informatie" (information) page represented.

21: I found it illogical how the buttons and images in the "Informatie" section were arranged on the page.

22: There were too many buttons and images present on the screen.

23: Can you briefly describe everything you would click if you wanted to use "Informatie" to search for a job you could do with an MBO diploma?

24: I did not like the fact that I was addressed personally.

25: Yubu's use of language came across to me as cloying.

27: I liked the fact that I could mark interesting things, such as certain motives and qualities, as favourites.

28: I found "favouriting" certain jobs or subjects, for example, useful for making my profile choice.

29: I liked customising Yubu to my preferences.

30: Yubu always functioned correctly.

31: Answer only if Yubu did not always function correctly: what went wrong, and how much time did that take up, in hours?

32: The questions in this survey were, after clarification where needed, clear.

33: Yubu helped me well in making my profile choice.

34: I did not enjoy using Yubu.
