
Gamification of the Corsi block tapping task

through the use of existing game theory

Master Thesis – Information Studies: Game Studies

Author: Alexander Chatzizacharias

Student Number: 10573054

Supervisor: Sebastiaan Dovis

Second Reader: Frank Nack

Final version: 29th of August 2014


Abstract

With the growing popularity of gamification (applying game elements in non-game contexts), several studies have tried to gamify neuropsychological assessment tools in order to increase participant motivation and, through motivation, participant performance. These studies use a trial-and-error style of gamification, copying game elements into their own game without knowing the effects of those elements on the player. The current study explores an approach in which this trial and error and copying of other game elements is replaced by the deliberate design of game elements for certain emotional responses (increased enjoyment, immersion and motivation) from the player through the use of game theory, thus creating game elements which improve player enjoyment and motivation. A game version of the Corsi block tapping task was created and presented to 25 participants. A majority of the participants found the game version the most enjoyable, indicating that our approach indeed created mechanics which increased player enjoyment and motivation. It is argued that our approach to gamification is more efficient, in that the researcher knows the effect of the inserted game elements.

Keywords: Gamification, Neuropsychological assessment tools, Corsi block tapping task, game, game theory, game elements

Introduction

When acquiring data from neuropsychological assessment tools, it is crucial that the obtained data are as valid as possible. To accomplish this goal many researchers attempt to induce optimal performance in the participants by explaining the main purpose of the test and by providing test intervals for better focus (Hebben & Milberg, 2009). A key factor that affects performance is participant motivation: motivated participants perform better than unmotivated participants (Callahan, Brownlee, Brtek, & Tosi, 2003). In order to maximize participant motivation, additional rewards can be provided, such as a cash reward (Boksem, Meijman, & Lorist, 2006).

Recently, studies have used standardized game mechanics to increase participant motivation in non-game contexts. This process is called gamification: the use of design elements characteristic of games in non-game contexts (Deterding, Dixon, Khaled, & Nacke, 2011). The use of gamification is swiftly gaining popularity (Mallon, 2013). The positive potential of gamification for participant motivation is supported by the finding that gaming increases the release of striatal dopamine (Koepp et al., 1998; Kühn et al., 2011). This increased release of striatal dopamine promotes the long-term potentiation of neural connections inside the striatum (Reynolds, Hyland, & Wickens, 2001), which possibly improves motivation and one's ability to learn (Gray, 2010).

Several studies have focused on the gamification of neuropsychological assessment tools (Dovis, van der Oord, Wiers, & Prins, 2012; Hawkins, Rae, Nesbitt, & Brown, 2013; Miranda & Palmer, 2013). Defining the degree of gamification of such tasks is complicated because of the relatively young age of the phenomenon and the high level of subjectivity and contextuality that the term brings with it (Deterding, Dixon, Khaled, & Nacke, 2011). The most common method to gamify such a tool is to implement aesthetic enhancements (appealing colours, animations, sound effects and/or appealing textures and interface), create a narrative for the tool and use a performance-based point system for feedback (Hawkins et al., 2013). It is argued that the addition of more advanced game elements and mechanics (e.g. skill-based reflex systems, adaptive challenge, etc.) will further increase participant motivation (Sweetser & Wyeth, 2005).

The results of the studies that integrate gamification into their neuropsychological assessment tools are diverse: results generally vary in effects on performance but are consistent in effects on motivation and enjoyment. Miranda & Palmer (2013) conducted a study in which an experimental task (a visual search task) was gamified through the use of aesthetic enhancements and a performance-based point reward system. Two versions of this task were created with different degrees of aesthetic enhancement, and afterwards an intrinsic motivation questionnaire was completed by the participants. Participants who finished the task version with the more salient game elements scored higher on intrinsic motivation than participants who completed the version with the less salient game elements. Although Miranda and Palmer did not investigate effects on performance, in a demanding task such as the visual search task increased participant motivation could indicate improved participant performance (Engelmann, Damaraju, Padmala, & Pessoa, 2009), suggesting that gamification may increase participant performance through its influence on participant motivation.

Hawkins et al. (2013) gamified their neuropsychological tasks (a choice task and a change detection task) by using a point system, aesthetic enhancements (appealing colours, sound effects and/or appealing textures and interface) and an incorporated narrative. Results showed no significant difference in participant performance between the normal and gamified versions, but participants did report increased enjoyment and interest in the gamified version of the task. Another study that exclusively researched


participant performance on neuropsychological assessment tools was conducted by Dovis et al. (2012). They improved on earlier gamification of neuropsychological assessment tools by creating a full game environment around a visual-spatial working memory task rather than just adding aesthetic enhancements and a narrative. The gamification used by Dovis et al. (2012) featured animation, adaptive difficulty, competition (comparison of scores), aesthetic enhancements, a compelling narrative, point systems, multiplier systems and upgrades. Compared to previous research, which only used aesthetic enhancements, basic point systems and narrative, gamification was taken to a new and improved level. Dovis et al. (2012) found that gamification had an increased effect on the performance of participants diagnosed with Attention-Deficit/Hyperactivity Disorder (ADHD) compared to a baseline feedback-only condition and a small monetary rewards condition (it was found to be equally effective as a strong monetary rewards condition, where children could win 10 euros). This indicates increased motivation in participants (Callahan, Brownlee, Brtek, & Tosi, 2003). However, Dovis et al. (2012) found that normally developing children performed equally well on their baseline condition, which provided only feedback, and their gamified condition.

The aforementioned studies used basic, straightforward game elements (aesthetic enhancements, points as a reward system and narrative). These methods are known to increase enjoyment in games (Sweetser & Wyeth, 2005). However, as discussed by Dovis et al. (2012), any one of these game elements could have had the desired effect on motivation, or it could have been a combination of some or all of them. Finding out which of these game elements are beneficial to performance is a process of trial and error, since the aforementioned elements were picked without investigating their effect on participants. This means that it is still unknown which and how many elements are needed to motivate a participant to perform optimally. This study strives to apply game theory in order to create meaningful game elements for gamifying neuropsychological tools, providing a method without trial and error, since the game elements implemented should have the desired effect by design. We aim to achieve this by answering the following research questions:

1. Which game theory can be used in order to create game elements which increase participant motivation and enjoyment?

2. How can we use the selected game theory in order to create and implement game elements upon a neuropsychological assessment tool?

3. How do participants react to the implemented game elements?

To answer these research questions two different game-specific models are used: the model of enjoyment created by Sweetser & Wyeth (2005), in combination with the Mechanics-Dynamics-Aesthetics (MDA) model created by Hunicke et al. (2004).

The model of enjoyment integrates the many heuristics related to enjoyment in the game literature into one model which can be used for analysing and reviewing games. Using this model we can identify some key features that must be present in games (Sweetser & Wyeth, 2005):

- Clear goals for the player
- Adaptive challenge levels related to player skill
- Visual and auditory stimuli
- No distracting stimuli when the player needs to concentrate
- Rewards in line with player effort
- Immediate feedback on player-based actions
- Sense of competition
- Emotional involvement

Knowing these key features allows us to correctly use the MDA model to select the right game elements for the gamification of our neuropsychological tool. The MDA model (Hunicke, LeBlanc, & Zubek, 2004) is a widely known model used within the game industry for implementing meaningful game mechanics. The model has three levels of abstraction: the mechanics (specific game elements, e.g. point systems), the dynamics (the behaviour of the mechanics within the game world, acting upon player input, e.g. a character that jumps after the player presses the jump button) and the aesthetics (the emotional responses of the player, e.g. a sense of challenge). One can move through the levels of the model to conceptualize the dynamic behaviour of game systems, enabling one to predict and design the aesthetic outcome of the game mechanics. This allows the game designer to create specific game mechanics which would, for example, make the player feel challenged or frustrated.
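To make the three levels concrete, here is a small, purely illustrative sketch in plain JavaScript (the study itself was built in UnityScript; the element and wording below are our own example, not taken from Hunicke et al.):

```javascript
// Illustrative only: one game element described along the three
// MDA levels of abstraction (mechanic -> dynamic -> aesthetic).
const pointSystem = {
  mechanic: "points are awarded for each correctly reproduced sequence",
  dynamic: "scores climb as the player keeps performing well",
  aesthetic: "sense of achievement and motivation to continue",
};

// A designer works backwards through the model: pick the desired
// aesthetic, derive the dynamic, then build a mechanic producing it.
function describe(element) {
  return `${element.mechanic} -> ${element.dynamic} -> ${element.aesthetic}`;
}

console.log(describe(pointSystem));
```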

We believe that using the previously mentioned list of criteria from the model of enjoyment to identify the necessary dynamics and aesthetics allows us to create meaningful mechanics which will raise participant motivation and enjoyment. In this study four different versions of a visuospatial short-term and working memory task are used. Each version has a higher degree of gamification, ranging from no game elements at all to a game environment resembling the gamification used by Dovis et al. (2012). Different versions are used in order to assess whether participants respond with the same levels of enjoyment and/or motivation to less gamified versions, which would render fuller gamification unnecessary.

It is expected that participant motivation and enjoyment will increase with a higher degree of gamification. Therefore it is expected that participants will be more motivated to perform on the full game version than on the partial game version, meaning that the game elements present in the fourth condition (or a combination of these elements) are the best motivators when implemented in neuropsychological assessment tools. It is also expected that, as time on task increases (Tanner et al., 2003), participants will perform better when conditions containing game elements are presented (Boksem et al., 2006), meaning that participant motivation to perform in the third and fourth presented conditions will be higher for the partial game and full game versions of the visuospatial short-term and working memory task than for the first and second versions.

The objective of the current study is to explore the possibilities of implementing gamification through the use of game theory, in order to minimize the process of trial and error in the gamification of neuropsychological assessment tools.

Method

Participants

A total of 32 students from multiple universities in the Netherlands participated in this study. Seven participants were excluded due to technical difficulties within the beta version of the task (the game version crashed and would not restart). Eventually, 25 participants (M = 22.84 years, SD = 2.95; 13 male, 12 female) remained. One participant suffered from deuteranopia (red-green colour blindness), which did not affect participant enjoyment and motivation.

Software & Hardware

Four versions of the task were created using Unity 4.2 (Unity Technologies). Programming was conducted within MonoDevelop (http://monodevelop.com/) using the UnityScript programming language (a slightly modified version of JavaScript for use within the Unity game engine). Testing was done on two computers:

1. An Intel i5 laptop (2.1 GHz) with a 15.6 inch display, a refresh rate of 60 Hz and a screen resolution of 1366x768 pixels.

2. An Intel i5 desktop (3.1 GHz) with a 22 inch display, a refresh rate of 60 Hz and a screen resolution of 1680x1050 pixels.

The Corsi Block Tapping Task

For this study the Corsi block tapping task (CBTT) was gamified. The CBTT is a widely used neuropsychological assessment tool for measuring visuospatial short-term and working memory (Corsi, 1972; Kessels, Van Zandvoort, Postma, Kappelle, & De Haan, 2000). While conducting the CBTT the goal of the participant is to memorize and reproduce the presented sequence. In the physical version of the task one taps the tops of the blocks in the desired order. The participant conducts three trials at each sequence length. If at least one of the three trials is reproduced correctly, the sequence length increases by one after those three trials. The starting sequence length is three blocks. The task ends if all three trials within the same sequence length are reproduced incorrectly, or if the participant successfully reproduces one sequence at each sequence length (eight being the maximum sequence length). Using the CBTT one can determine the memory span of the participant: the highest sequence length at which the participant correctly reproduced two out of three trials.
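The span rule above can be sketched as follows. This is a plain JavaScript illustration of the scoring logic only (the actual task was implemented in UnityScript inside Unity), and the `results` data shape is our own assumption:

```javascript
// Sketch of the CBTT memory-span rule: the span is the highest
// sequence length at which the participant correctly reproduced
// at least two out of three trials.
function memorySpan(results) {
  // `results` maps sequence length -> number of correct trials (0-3).
  let span = 0;
  for (const [length, correct] of Object.entries(results)) {
    if (correct >= 2 && Number(length) > span) span = Number(length);
  }
  return span;
}

// Example: all trials correct up to length 5, only one correct at 6.
const results = { 3: 3, 4: 3, 5: 2, 6: 1, 7: 0 };
console.log(memorySpan(results)); // 5
```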

Task Versions

In this study the CBTT was digitized and gamified. The digital version of the CBTT is more stable and consistent with regard to internal and external validity (Chua & Don, 2012), meaning our baseline condition is still internally and externally valid.

Figure 1: Position of the blocks (in mm) measured from the left bottom corner of the board to the left bottom corner of each cube (Kessels, Van Zandvoort, Postma, Kappelle, & De Haan, 2000).

Figure 2: Trial overview of the Corsi Block Tapping Task (sequence length of 2 blocks). a1. To start a trial the begin-button in the middle of the screen had to be clicked. b. A sequence of stimuli (blocks that lit up) was presented one by one. Each stimulus lit up for 1100 ms and was followed by an inter-stimulus interval of 600 ms. c. After the stimulus sequence was presented the participant responded by using the left mouse-click on the squares. To respond correctly the presented stimuli had to be reproduced either in the exact same order (forward CBTT) or in the reversed order (backward CBTT). d. Except for the no-feedback condition, response feedback was then presented. a2. After feedback presentation, the participant was able to start the next trial by clicking the next-button.

In total four different versions of the CBTT were created, each containing a certain degree of gamification. The baseline condition is a digital copy of the default setting of the physical CBTT: the no-feedback version. The second version is the feedback-only version, giving the participant minimal feedback, in the form of green checkmarks and red crosses, on correct and incorrect responses. The third version, the partially gamified version, replicates game elements used by previous studies (Hawkins et al., 2013; Miranda & Palmer, 2013), such as a point system, a narrative and aesthetic improvements. The fourth version, the completely gamified version, is the version in which we use game theory, namely the game enjoyment model (Sweetser & Wyeth, 2005) and the MDA model (Hunicke, LeBlanc, & Zubek, 2004), to create and add meaningful game elements on top of the elements used in the third, partially gamified, version.

All four versions share core features which are crucial in order to maintain the validity of the CBTT. First of all, all four versions have 9 blocks, all within a set distance from each other and at a standard place on the board (fig. 1). Secondly, all versions have a 1100 ms stimulus presentation period and a 600 ms inter-stimulus interval. The response interval has no time limit; it ends once the participant has clicked on the same number of blocks as the current sequence length. The feedback screen also has no time limit: once the participant is ready he can click the "next" button to proceed. This screen also serves as the focus screen for the participant (fig. 2). Third and last, all versions contain a tutorial prior to the trials. The tutorial uses a set sequence of three blocks which the participant has to answer correctly in order to proceed, and shows instructions telling the player what is expected of him.
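As a quick check on the shared timing, the presentation phase of a sequence of n blocks lasts n stimulus periods plus n - 1 inter-stimulus intervals. A minimal JavaScript sketch (the real task ran in Unity and was not structured this way):

```javascript
// Shared timing constants from the task description.
const STIMULUS_MS = 1100; // each block lights up for 1100 ms
const INTERVAL_MS = 600;  // inter-stimulus interval

// Total presentation time for a sequence of `n` blocks:
// n stimuli with (n - 1) intervals between them.
function presentationTimeMs(n) {
  return n * STIMULUS_MS + (n - 1) * INTERVAL_MS;
}

console.log(presentationTimeMs(3)); // 4500 (the starting sequence length)
```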

In addition to the normal conditions, backward conditions were used. In the backward conditions everything is identical, except that the participant needs to replicate the presented sequences in the reversed order instead of the same order. Furthermore, all rules of the CBTT (as discussed earlier) apply to all four versions. Apart from these universal features, each version has some unique features, such as point multipliers. All four versions can be seen and played online (http://www.alex-ch.nl/CBTTWeb/CBTTWeb.html).

1. No-feedback version: This version is an exact replica (fig. 3) of the CBTT as defined by Kessels et al. (2000). During the feedback screen no feedback is shown, so the participant has no way of knowing whether he answered correctly or incorrectly (fig. 4).

2. Feedback-only version: This version of the CBTT is almost identical to the no-feedback version of the task, the only difference being the feedback message (fig. 5). The participant is shown a green checkmark if he answered correctly or a red cross if he answered incorrectly.

3. Partially gamified version:

This is the first version to which a form of gamification was added. The game elements used in this version are inspired by the previously discussed studies (Dovis, van der Oord, Wiers, & Prins, 2012; Hawkins, Rae, Nesbitt, & Brown, 2013; Miranda & Palmer, 2013). First, a narrative for the CBTT was provided in order to raise player immersion and involvement (Sweetser & Wyeth, 2005). The player was instructed to help a fictive creature collect supplies, a task which was achieved by clicking and

Figure 3: The physical setup of the CBTT (left) vs the digitized version (right)

Figure 4: The feedback screen in the no feedback version of the CBTT. The screen is identical for correct and incorrect responses.

Figure 5: The feedback messages in the feedback only version of the CBTT. Left being correct and right incorrect

Figure 6: Visually enhanced CBTT, featuring rocks instead of blocks, a lake environment and multiple textures


eventually destroying the blocks. The CBTT was visually enhanced in order to match the narrative (fig. 6). The board colour was replaced with a ground and moss texture, the blocks were reformed to make them look more rock-like and received a rock texture, and the background became a lake with moving water and water lilies. Additionally, a point system was added in order to reward the player for his effort, in line with the game enjoyment model (Sweetser & Wyeth, 2005). The participant received 1000 points for each correct trial and 100 points for each incorrect trial; the point difference served as an extra incentive for the player to perform optimally. Furthermore, the feedback screen was improved with complementing auditory feedback. On a correct response the player received a positive and cheerful sound as feedback, along with a green checkmark and an indication of the earned points and the point balance, because, as theorized in the game enjoyment model, it is important to constantly show the score to the player. If the player reproduces a trial sequence incorrectly he receives a negative sound along with a red cross on the feedback screen, with the same point messages as on the correct-response feedback screen (fig. 7).
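The point rule of the partially gamified version (1000 points for a correct trial, 100 for an incorrect one) can be sketched as a minimal JavaScript example; the function names are ours, not from the thesis:

```javascript
// Point rule of the partially gamified version.
const POINTS_CORRECT = 1000;
const POINTS_INCORRECT = 100;

function trialPoints(isCorrect) {
  return isCorrect ? POINTS_CORRECT : POINTS_INCORRECT;
}

// Running point balance over a series of trial outcomes.
function pointBalance(outcomes) {
  return outcomes.reduce((sum, ok) => sum + trialPoints(ok), 0);
}

console.log(pointBalance([true, true, false])); // 2100
```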

4. Completely Gamified version:

In the completely gamified version, the game enjoyment model (Sweetser & Wyeth, 2005) was applied to determine which dynamics (the behaviour of the mechanics within the game world, acting upon player input) and aesthetics (the emotional responses of the player) we preferably wanted to invoke in our participants. Then, using the MDA model (Hunicke, LeBlanc, & Zubek, 2004), the fitting mechanics (the actual game elements) were created and combined in order to increase player enjoyment, immersion and motivation. For example, we want the player to feel enjoyment (aesthetic), which is invoked by adaptive challenge levels (dynamic); therefore a mechanic is created which takes player skill into consideration and raises or lowers the challenge presented in the levels. From the game enjoyment model, the following points were deemed applicable to our gamification tool:

1. Clear goals for the player
2. Adaptive challenge levels related to player skill
3. Visual and auditory stimuli
4. No distracting stimuli when the player needs to concentrate
5. Rewards in line with player effort
6. Immediate feedback on player-based actions
7. Sense of competition
8. Emotional involvement

1. Clear goals: This dynamic is important for player motivation. If a player is aware of his goal he is more likely to pursue it, thereby increasing his motivation to continue. The goal in the completely gamified version is to improve the highscore by collecting as many points as possible. In order to collect points the player needs to correctly reproduce the sequence of a trial. Only then can he enter a mini-game where he can use his skills to collect points. In summary the goals are (in order of importance): correctly answer a sequence to enter the mini-game, use and improve skills to collect more points

Figure 7: Correct (left) feedback screen & incorrect feedback screen (right) of the partially gamified version of the CBTT, featuring sounds, point progression messages and visual feedback

Figure 8: Game elements added in the completely gamified version of the CBTT. 1: An indicator showing the current power level of the player; the higher the power level, the more the player unlocks. 2: Flying objects the player needs to click in order to collect points. 3: A cute creature for the player to emotionally attach to. 4: Text telling the player exactly what is expected of him. 5: A power meter that increases with each object clicked; once full, the power level of the player increases.


and to beat the highscore. To make sure the importance of these goals was understood, a power system was created as an extension of the point system used in the partially gamified version (fig. 8, point 1). The points collected by the player add up until a pre-set value has been reached; once that value is reached, the player gains a level. The highscore is a pre-set power level (currently set at 9). The player still sees his score on the feedback screen (fig. 9), allowing him to track how many points he gathered in the current round (making it possible for him to determine whether he did better or worse compared to other trials). The highscore is a power level instead of a point amount because it is much easier for the player to remember a simple number (e.g. nine as the highscore and three as his own level) than a complicated number (e.g. 54300, an amount of points which is feasible to collect).
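A minimal sketch of the power-level mechanic, assuming points convert to levels at a fixed threshold (the thesis does not state the pre-set value, so POINTS_PER_LEVEL below is a made-up number):

```javascript
// Sketch of the power-level mechanic: collected points fill a meter,
// and every POINTS_PER_LEVEL points grants one power level.
const POINTS_PER_LEVEL = 5000; // assumed; the actual pre-set value is not given
const HIGHSCORE_LEVEL = 9;     // the pre-set highscore power level

function powerLevel(totalPoints) {
  return Math.floor(totalPoints / POINTS_PER_LEVEL);
}

// A large point total maps onto a small, easy-to-remember level.
console.log(powerLevel(54300));                   // 10
console.log(powerLevel(54300) > HIGHSCORE_LEVEL); // true: highscore beaten
```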

2. Adaptive challenge: The player needs to feel challenged during the game in order to stay interested. If the game is too easy interest will be lost, and if it is too hard he won't feel like continuing. The CBTT starts out with a set difficulty (which is subjective to the participant) and gets linearly harder; participants with poorer working memory capabilities will experience more difficulty with the longer CBTT trials. In order to create adaptive difficulty the player needs to have the feeling he is improving a certain skill. In the game we chose to make use of the clicking "skill": clicking speed is widely known as a skill which indicates player efficiency in e-sports games, measured in clicks per minute (CPM). In the current game, once the player correctly answers a sequence trial, he enters a screen with multiple blocks. After a short countdown (which serves as a focus screen) the blocks explode and objects fly through the air (fig. 8, point 2). Once the player clicks on an object, he receives a pre-set amount of points. The more objects he clicks, the more points he receives; thus the higher his CPM, the higher his score. In the early stages of the game only a few objects appear. The higher the sequence length of the trials, the more objects appear, meaning that the player must achieve a high CPM for a higher score.
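A sketch of how the mini-game could scale with sequence length; both constants are assumptions for illustration (the thesis specifies neither the growth rate nor the per-object points):

```javascript
// Sketch of the clicking mini-game: more objects appear at higher
// sequence lengths, and each clicked object is worth a fixed amount.
const POINTS_PER_OBJECT = 100;  // assumed per-object reward
const OBJECTS_PER_LENGTH = 2;   // assumed growth per sequence length

function objectsForSequence(sequenceLength) {
  return sequenceLength * OBJECTS_PER_LENGTH;
}

// The score depends on how many objects the player manages to click,
// i.e. on the player's clicks per minute (CPM).
function miniGameScore(objectsClicked) {
  return objectsClicked * POINTS_PER_OBJECT;
}

console.log(objectsForSequence(6)); // 12 objects at sequence length 6
console.log(miniGameScore(9));      // 900 points for 9 clicked objects
```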

3. Visual and auditory stimuli: Stimuli are important cues which point the player to the right parts of the screen, or give the player feedback about his input. The completely gamified version makes sure that all player input is matched with a visual stimulus. Once the player clicks on a block during a trial he sees a small flash, acknowledging the input. Through unlockables, this effect can become a thunder strike or a water splash. If the player clicks on a flying object a "ping" sound is heard and a firework explosion is shown. Furthermore, special events such as reaching a power level-up play positive music, which also changes through unlockables.

4. No distracting stimuli: This dynamic is very important in the gamification of neuropsychological assessment tools, because the tool needs to keep its validity. Therefore, no distracting stimuli were presented during the trials. During the trial itself there is no audio whatsoever, and the only visual feedback shown is for the input (the click) of the player.

5. Rewards in line with player effort: In order to keep the player interested, rewards need to be offered accordingly. In addition to the previously mentioned mechanics (power level, highscore and points), unlockables and a multiplier mechanic were included. With each power level-up the player receives a new unlockable item (such as new music, more clicking time, other click effects, etc.). A slider is shown on the game screen which the player can use to track his progression towards the next unlockable (fig. 8, point 5). Once the slider is full the player gains a power level and an unlockable, rewarding him for his effort. Another mechanic we use to reward player effort is the multiplier mechanic. Each correct trial executed by the player adds one to the multiplier, and the points earned by the player are then multiplied by the multiplier number. Players who put more effort into the trials of the CBTT are thus rewarded with more points.
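The multiplier mechanic can be sketched as follows, assuming the multiplier is raised by the correct trial before the points for that trial are multiplied (the thesis leaves the exact order unstated):

```javascript
// Sketch of the multiplier mechanic: each correct trial adds one to
// the multiplier, and earned points are multiplied by it.
function scoreWithMultiplier(trials, basePoints) {
  let multiplier = 1;
  let total = 0;
  for (const correct of trials) {
    if (correct) {
      multiplier += 1;                   // each correct trial adds one
      total += basePoints * multiplier;  // points are then multiplied
    }
  }
  return total;
}

// Three correct trials at 100 base points: 200 + 300 + 400.
console.log(scoreWithMultiplier([true, true, true], 100)); // 900
```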

6. Immediate feedback on player input: The player needs to be aware that the game acknowledges his input, in order to be sure that his actions had meaning. The only activity the player performs during the game is clicking. During the trials, once the player clicks on a block a flash effect is shown (changeable through unlockables) at the click position, showing the player where he clicked and that the click was acknowledged. During the clicking game in the feedback screen, once the player clicks on an object, fireworks pop at that location accompanied by a "ping" sound. A floating text is also displayed showing the amount of points earned by the click, and the power slider (fig. 8, point 5) is filled accordingly.

7. Sense of competition: In order to create a sense of competition a highscore mechanic (fig. 9) was created, which is shown during the feedback screen. The highscores we used are fake, in the sense that they are not set by other players but inserted by the designer. We constantly show the player his own score and the highscore we want him to beat. We also show the player a fake "next to beat" score, representing the score closest to the player's as if there were leaderboards.

8. Emotional involvement: The designers added a virtual creature (fig. 8, point 3) for the player to relate to, making him more emotionally involved in the narrative of the game. The creature is the player's guide, always telling him what is expected of him. In the tutorial the creature is the one telling the player the narrative, and once the task has ended, the creature congratulates the player. The

Figure 9: Feedback screen of the completely gamified version. The player sees his points, total points, highscore, next best score and his own score


creature is animated and textured with multiple faces for multiple emotional responses, increasing immersion.

Procedure

This study consisted of one 60-75 minute session (depending on participant performance). Before initiating the task, the participant read an information letter explaining the goal of this study and signed an informed consent form.

During the session, the participant was administered the four different versions of the CBTT: the no-feedback version, the feedback-only version, the partially gamified version and the completely gamified version. Each version started with the forward CBTT rule set. Once the forward CBTT condition was completed, the same version of the CBTT was administered with the backward rule set. Once both rule sets were completed within the same version, the participant was administered the next version. In order to counter order effects, the order of the four versions was counterbalanced across participants.
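The thesis does not describe the exact counterbalancing scheme; one standard way to counterbalance four versions is a balanced Latin square, sketched here in JavaScript with assumed labels:

```javascript
// Counterbalancing sketch: assign each participant one row of a
// balanced 4x4 Latin square, cycling through rows across participants.
const VERSIONS = ["no-feedback", "feedback-only", "partial game", "full game"];

// Each version appears once per row and once per serial position,
// and each ordered pair of versions is adjacent exactly once.
const LATIN_SQUARE = [
  [0, 1, 2, 3],
  [1, 3, 0, 2],
  [2, 0, 3, 1],
  [3, 2, 1, 0],
];

function orderForParticipant(i) {
  const row = LATIN_SQUARE[i % LATIN_SQUARE.length];
  return row.map(v => VERSIONS[v]);
}

console.log(orderForParticipant(1)); // second participant starts with feedback-only
```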

All tests were conducted between 09:00 and 19:00. Testing rooms were silent and had all windows covered. The participant was welcomed upon arrival and given a brief explanation; at this time the participant also received the information letter and informed consent form. During the test itself the experimenter stayed out of the participant's line of sight, covertly observing the participant and noting his behaviour (engagement, enjoyment, confusion and attention). During the observation we tried to focus on which game elements the player responded to and what his response was, much as game designers do during play testing. Since play testing is a very important aspect of successful game design (Salen & Zimmerman, 2004), we argue that this approach is necessary to gather valuable data on successful and unsuccessful mechanics. All participants were asked to perform at their best before each test session.

After completing the test session the participant was asked some short questions. First, the participant was asked to rate each version of the CBTT on a scale from 0 to 10 (zero being the lowest possible grade and ten the highest); the participant could give any mark he saw fit. This gives insight into which version the participant enjoyed most. Then the participant was asked his opinion on all the versions, with the researcher remaining silent. The participant was also asked for general feedback and any remarks. Once the questionnaire was completed, the participant was debriefed and thanked.

Results

During this study results were gathered from two sources: the participants were covertly observed during the sessions, and they answered a short series of questions afterwards. The feedback received was categorized, following the MDA model, into feedback about the mechanics, the dynamics and the aesthetics. This allows us to see what the participants paid most attention to (according to the MDA model). The feedback comments were categorized subjectively. As seen in figure 10, the players paid most attention to the aesthetics (the emotions invoked during the completely gamified version, such as increased enjoyment and/or frustration) and much less to the dynamics and the mechanics, meaning that the mechanics introduced using the game enjoyment model and the MDA model did invoke an aesthetic response from the player. Most noticeably, many participants commented on how fun the game was compared to the other versions. A noticeable number also found the clicking game a bit too distracting, which they believed lowered their performance on the task itself. Many participants acknowledged that clicking on the objects was a nice addition. The little feedback we received on mechanics and dynamics consisted mostly of individual desires for the game, such as more interaction with the animated creature. Of all the aesthetic feedback comments, the majority was positive (76%, Table 1). The most common keywords used (or synonyms thereof) were fun, motivated, distracted and immersed. We interpret the keywords "fun", "motivated" and "immersed" as positive (Sweetser & Wyeth, 2005) and "distracting" as negative, since there should not be any distracting stimuli (Sweetser & Wyeth, 2005).

Keyword       Comments    %     Valence
Fun               8      38%    Positive
Motivated         7      33%    Positive
Distracting       5      24%    Negative
Immersed          1       5%    Positive
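The split reported in Table 1 can be reproduced with a small tally. A minimal Python sketch, assuming the keyword counts and valence labels above (the variable names are our own, not part of the study's materials):

```python
from collections import Counter

# Keyword tallies from the aesthetic feedback comments (Table 1).
# Valence assignments follow Sweetser & Wyeth (2005): fun, motivated
# and immersed are positive; distracting is negative.
valence = {"fun": "positive", "motivated": "positive",
           "immersed": "positive", "distracting": "negative"}
comments = Counter({"fun": 8, "motivated": 7, "distracting": 5, "immersed": 1})

total = sum(comments.values())
positive = sum(n for kw, n in comments.items() if valence[kw] == "positive")
for kw, n in comments.most_common():
    print(f"{kw:12s} {n:2d}  {n / total:4.0%}  {valence[kw]}")
print(f"positive share: {positive / total:.0%}")  # -> 76%, as reported
```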

Figure 10: The received feedback categorized using the MDA model. Mechanics received 4 comments (12%), dynamics 9 (26%) and aesthetics 21 (62%).

Table 1: Table showing how many times certain keywords were used in the feedback comments.


Figure 11: Graph showing the amount of positive and negative comments received per game mechanic in the completely gamified version


With the feedback received, we can also check how well the mechanics we created were received by the players. The feedback comments were subjectively categorized into positive and negative comments, and by the game mechanic they applied to. For example, a comment like "felt the urge to beat the highscore" was categorized as positive towards the highscore mechanic. General comments like "the game version was most fun" were excluded from this categorization, since they cannot be related to a single game mechanic. The highscore mechanic received the most positive feedback (fig. 11) and no negative feedback at all, followed by the unlockables mechanic. The points mechanic received two positive comments, the clicking skill mechanic two positive and one negative comment, the cute creature two positive comments and the multiplier one positive and one negative comment.

We also asked the participants to subjectively grade each version. Results show that the completely gamified version was graded highest by a majority of the participants (81%) (fig. 12), the partially gamified version second best by a majority (67%), the feedback only version third best by a majority (69%) and the no feedback version worst by a majority (75%). The strongest consensus in the grade order concerned the completely gamified version: zero percent of the players found it the worst version.

To make sure that the gamification did not interfere with the validity of the task (since some participants deemed the completely gamified version distracting), we analysed performance across all four versions. A repeated measures one-way analysis of covariance (ANCOVA) was conducted, with task version as the within-subjects factor. The results show that participant performance on the task was not significantly affected by the task version, F(3, 72) = .99, p = .40, η² = .04.
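The reported degrees of freedom, F(3, 72), follow from 25 participants each measured on 4 task versions (df_effect = 4 − 1 = 3, df_error = 24 × 3 = 72). As an illustration, the F statistic of a one-way repeated-measures analysis (without covariates) can be computed from a subjects × conditions score matrix as in the numpy sketch below; the scores here are simulated, since the raw data are not reproduced in this text:

```python
import numpy as np

def rm_anova_oneway(scores: np.ndarray):
    """One-way repeated-measures ANOVA F test.
    scores: (n_subjects, k_conditions) matrix of task scores.
    Returns (F, df_effect, df_error)."""
    n, k = scores.shape
    grand = scores.mean()
    ss_cond = n * ((scores.mean(axis=0) - grand) ** 2).sum()    # between conditions
    ss_subj = k * ((scores.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_err = ((scores - grand) ** 2).sum() - ss_cond - ss_subj  # residual
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_err / df_err), df_cond, df_err

# 25 participants x 4 versions (NF, FO, PG, CG), simulated scores:
# a per-subject baseline plus noise, i.e. no true version effect.
rng = np.random.default_rng(42)
scores = rng.normal(6.0, 1.0, size=(25, 1)) + rng.normal(0, 0.5, size=(25, 4))
F, df1, df2 = rm_anova_oneway(scores)
print(f"F({df1}, {df2}) = {F:.2f}")  # degrees of freedom match the reported F(3, 72)
```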

Through observation we noticed that participants were surprised when they first encountered the clicking mini game. All participants made remarks of surprise when first presented with the completely gamified version, such as "what is this?" and "what?" Once the first feedback screen was presented, participants noticed the highscore and asked questions similar to "who has the highscore?", indicating that they felt a sense of competition. It also appeared that beating the highscore (and setting a new one) was their primary motivator during the completely gamified version of the game, since this event was a cause for celebration. Participants did not seem to pay much attention to the points, which quickly lost meaning to them (once they noticed that the power level was the metric for the highscore). Also, three participants became demotivated once they lost their multiplier, giving comments such as "can I even beat the highscore now?", indicating that this mechanic works as a good motivator as long as it is not broken.

Discussion

In this study we researched a new way of creating meaningful game mechanics for the gamification of the CBTT through the use of established game theory. With the help of game theory, game elements were designed to invoke certain feelings in the player, such as enjoyment, motivation and immersion. This should allow researchers to gamify neuropsychological tools with greater knowledge of the effect the game elements have on the participants. Results show that the players reported more enjoyment and motivation in the completely gamified version of the CBTT. The highscore and unlockables mechanics were the biggest contributors to this effect; the other mechanics had a minimal effect. In line with the aforementioned studies (Dovis, van der Oord, Wiers, & Prins, 2012; Hawkins, Rae, Nesbitt & Brown, 2013; Miranda & Palmer, 2014), a majority of the players (81%) reported that the completely gamified version of the task invoked increased enjoyment in typically developing participants. Our results suggest that the versions were graded in line with our expectations, meaning that the majority of the players graded the completely gamified version as most enjoyable, the partially gamified version as second best, the feedback only version as third best and the no feedback version as worst. Contrary to expectations, the highscore mechanic was the most popular one instead of the clicking skill mechanic, which shows that the target audience values a competitive environment. This is in line with the model of enjoyment (Sweetser & Wyeth, 2005). We used two models for the gamification of the CBTT, namely the game enjoyment model (Sweetser & Wyeth, 2005) and the MDA model (Hunicke, LeBlanc, & Zubek, 2004).

Through the use of the MDA model we were able to systematically create mechanics (game elements) that would invoke the desired emotions in the player. We aimed at increased enjoyment and motivation. Results indicate that this was achieved, with 38% of the feedback comments describing how fun the completely gamified version is and 33% mentioning increased motivation. This finding supports the idea that understanding how every mechanic influences the emotional response of a player is key to creating meaningful gamification; the MDA model should therefore be considered by future studies trying to accomplish the same.

The game enjoyment model served as an

Figure 12: The grades the players awarded each version. Most participants awarded the completely gamified version the highest grade and the no feedback version the lowest grade.


indicator of which dynamics and aesthetics are important in order to maximize player enjoyment. For example, we used the "clear goals" dynamic to create mechanics which provide clear goals for the player. The model is heavily geared towards entertainment games and can also be seen as very genre specific (Sweetser & Wyeth, 2005). One can argue that, since gamification of psychological tasks has other goals than entertainment games, the use of this model for gamification is misplaced, which can be supported by our finding that 24% of the feedback comments called the completely gamified version distracting. Nevertheless, our results indicate that the mechanics created using the dynamics and aesthetics identified in this model received more positive ratings from our participants. Especially the "sense of competition" and "rewards should be in line with player effort" criteria invoked positive comments. Other criteria, such as "adaptive challenge", invoked negative comments, accounting for 66% of all negative comments specific to game mechanics.

We propose a new list of criteria, based on the game enjoyment model (Sweetser & Wyeth, 2005) and on our finding that the highscore mechanic (created for a sense of competition) was received most positively by the participants, followed by the unlockables (rewards in line with player effort and skill), meaning that participants value a sense of competition and want to be rewarded for their skill. Furthermore, these criteria are specifically geared towards the gamification of neuropsychological tasks:

1. Presence of competition with relatable persons

2. Clear competitive goals for the player

3. Challenge of player skill (favourably in line with the task (e.g. clicking speed))

4. Rewards should be in line with player performance

5. Immediate feedback on player input

6. Visual and auditory feedback on important happenings (e.g. level up)

7. Player should always know his score and / or progress

8. Player should always be able to recover from a failed attempt

9. Competition should always seem beatable

10. Presented stimuli should always be meaningful

We believe that the proposed criteria are a cost-effective way of gamifying neuropsychological tasks that will raise player enjoyment and motivation. The aforementioned studies added game elements and only afterwards wondered which of them increased enjoyment and motivation in their participants (Dovis, van der Oord, Wiers, & Prins, 2012; Hawkins, Rae, Nesbitt & Brown, 2013; Miranda & Palmer, 2014). Using the above criteria, we argue, one can design simple, effective game elements which maximize player enjoyment and motivation, allowing future studies to avoid game elements that would have no effect. We are not implying that this is the only effective way of gamifying neuropsychological tasks, since gamification relies heavily upon context, subjective goals and the target audience (Deterding, Dixon, Khaled, & Nacke, 2011).
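As an illustration of criteria 4, 8 and 9, consider how the scoring logic of a points-and-multiplier mechanic could be written. This is a hypothetical sketch, not the implementation used in our gamified CBTT; the class name, point values and multiplier cap are our own assumptions. The key design choice is that a failed attempt steps the multiplier down one level instead of resetting it, so the player can always recover (criterion 8) and the highscore always seems beatable (criterion 9):

```python
class ScoreKeeper:
    """Hypothetical scoring mechanic illustrating criteria 4, 8 and 9."""

    def __init__(self, base_points: int = 100, max_multiplier: int = 4):
        self.base_points = base_points
        self.max_multiplier = max_multiplier
        self.multiplier = 1
        self.score = 0

    def correct_sequence(self, sequence_length: int) -> int:
        # Reward in line with player performance (criterion 4):
        # longer reproduced sequences yield more points.
        gained = self.base_points * sequence_length * self.multiplier
        self.score += gained
        self.multiplier = min(self.multiplier + 1, self.max_multiplier)
        return gained

    def failed_sequence(self) -> None:
        # Recoverable failure (criterion 8): step the multiplier down
        # one level, never reset the score or drop below 1.
        self.multiplier = max(self.multiplier - 1, 1)

sk = ScoreKeeper()
sk.correct_sequence(3)   # 300 points, multiplier rises to 2
sk.correct_sequence(4)   # 800 points, multiplier rises to 3
sk.failed_sequence()     # multiplier drops to 2, score is kept
print(sk.score, sk.multiplier)  # prints "1100 2"
```

Under this design, the demotivating "broken multiplier" comments we observed would be less likely, since no single failure puts the highscore out of reach.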

Looking at our own proposed criteria, we are able to improve our own gamification. An improved gamification would focus more on the sense of competition. We could create a version similar to the partially gamified version that allows players to asynchronously challenge each other (or a fake entity) over the internet, showing who had the highest clicking speed and the highest score, and rewarding the winner with extra points or unlockables, as popular apps such as DrawSomething (http://en.wikipedia.org/wiki/Draw_Something) and WordFeud (http://wordfeud.com/) do. The aesthetic enhancements would remain the same, including the animated creature, in order to maintain the same level of player involvement.

The sample we used in this study was not the intended target audience for the gamification of the CBTT. This could have resulted in weakened responses to certain mechanics, meaning that mechanics created with children (8 – 12 years old) in mind might not have had the same effect on the participants we used, who were students, as on the intended target audience.

Future studies should selectively look at the game enjoyment model (Sweetser & Wyeth, 2005) and the MDA model (Hunicke, LeBlanc & Zubek, 2004) when applying gamification, or could use our proposed criteria for the gamification of neuropsychological tasks. The use of these models allows one to create game elements with a known effect on the player, which is supported by our result that 76% of the feedback comments on the completely gamified version indicated increased enjoyment and motivation.

If one needs to gamify a neuropsychological tool in order to maximize motivation, our criteria can be used. If one does not know which emotional response to invoke in the player, the game enjoyment model (Sweetser & Wyeth, 2005) can be used to identify the needed dynamics and aesthetics. If one knows which dynamics and aesthetics the gamification needs, the MDA model (Hunicke, LeBlanc & Zubek, 2004) can be used to create the best fitting game mechanics.

Future studies can also explore other game theory in order to achieve the same goal, such as the Bartle player types (Bartle, 1996), which were outside the scope of this study because of their focus on role playing games. Studies which implement gamification with role playing elements should look at the Bartle player types (Bartle, 1996).

Conclusion

In this study we explored the possibilities of using game theory when gamifying neuropsychological assessment tools; the CBTT was used as a test case. We used the game enjoyment model (Sweetser & Wyeth, 2005) and the MDA model (Hunicke, LeBlanc & Zubek, 2004) to create game mechanics intended to maximize player motivation and enjoyment. Results show that our approach did indeed increase player enjoyment and motivation. This shows that it is possible to add meaningful game mechanics to a task while knowing their effect on the participant, without the need to rely on trial and error and supplementary studies to discover the effects of arbitrarily added game elements. This conclusion is based upon the finding that 76% of the comments were positive towards our completely gamified version, indicating increased enjoyment and motivation. Since we designed the mechanics for increased enjoyment and motivation, this finding supports this way of gamifying neuropsychological assessment tools.

References

Bartle, R. (1996). Hearts, clubs, diamonds, spades: Players who suit MUDs. Journal of MUD Research, 1(1), 19.

Boksem, M. A., Meijman, T. F., & Lorist, M. M. (2006). Mental fatigue, motivation and action monitoring. Biological Psychology, 72(2), 123-132.

Callahan, J. S., Brownlee, A. L., Brtek, M. D., & Tosi, H. L. (2003). Examining the unique effects of multiple motivational sources on task performance. Journal of Applied Social Psychology, 33(12), 2515-2535.

Chua, Y. P., & Don, Z. M. (2013). Effects of computer-based educational achievement test on test performance and test takers' motivation. Computers in Human Behavior, 29(5), 1889-1895.

Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011, September). From game design elements to gamefulness: Defining gamification. In Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments (pp. 9-15). ACM.

Dovis, S., Van der Oord, S., Wiers, R. W., & Prins, P. J. (2012). Can motivation normalize working memory and task persistence in children with attention-deficit/hyperactivity disorder? The effects of money and computer-gaming. Journal of Abnormal Child Psychology, 40(5), 669-681.

Engelmann, J. B., Damaraju, E., Padmala, S., & Pessoa, L. (2009). Combined effects of attention and motivation on visual task performance: Transient and sustained motivational effects. Frontiers in Human Neuroscience, 3.

Gray, P. O. (2010). Psychology (6th edition). New York: Worth Publishers.

Hawkins, G. E., Rae, B., Nesbitt, K. V., & Brown, S. D. (2013). Gamelike features might not improve data. Behavior Research Methods, 45(2), 301-318.

Hebben, N., & Milberg, W. (2009). Essentials of neuropsychological assessment (Vol. 70). John Wiley & Sons.

Hunicke, R., LeBlanc, M., & Zubek, R. (2004, July). MDA: A formal approach to game design and game research. In Proceedings of the AAAI Workshop on Challenges in Game AI (pp. 04-04).

Kessels, R. P., van Zandvoort, M. J., Postma, A., Kappelle, L. J., & de Haan, E. H. (2000). The Corsi block-tapping task: Standardization and normative data. Applied Neuropsychology, 7(4), 252-258.

Koepp, M. J., Gunn, R. N., Lawrence, A. D., Cunningham, V. J., Dagher, A., Jones, T., ... & Grasby, P. M. (1998). Evidence for striatal dopamine release during a video game. Nature, 393(6682), 266-268.

Kühn, S., Romanowski, A., Schilling, C., Lorenz, R., Mörsen, C., Seiferth, N., ... & Gallinat, J. (2011). The neural basis of video gaming. Translational Psychiatry, 1(11), e53.

Mallon, M. (2013). Gaming and gamification. Public Services Quarterly, 9(3), 210-221.

Miranda, A. T., & Palmer, E. M. (2014). Intrinsic motivation and attentional capture from gamelike features in a visual search task. Behavior Research Methods, 46(1), 159-172.

Reynolds, J. N., Hyland, B. I., & Wickens, J. R. (2001). A cellular mechanism of reward-related learning. Nature, 413(6851), 67-70.

Salen, K., & Zimmerman, E. (2004). Rules of play: Game design fundamentals. MIT Press.

Sweetser, P., & Wyeth, P. (2005). GameFlow: A model for evaluating player enjoyment in games. Computers in Entertainment (CIE), 3(3), 3-3.

Tanner, B. A., Bowles, R. L., & Tanner, E. L. (2003). Detection of intentional sub-optimal performance on a computerized finger-tapping task. Journal of clinical
