Achieving Balance in Asymmetrical Video Games

MSc Artificial Intelligence

Master Thesis

Achieving Balance in

Asymmetrical Video Games

by

Philipp Beau

10664092

42 ECTS
January 2015 - February 2016
Supervisor/Examiner
Assessor


Abstract

We address the problem of achieving balance in asymmetrical multiplayer games by proposing a novel method which assigns each action an impact attribute. We define the impact of an action as how vital that action is to winning strategies and to other actions: roughly speaking, how often an action is part of a winning strategy and how important it is within those strategies. In this thesis we show how the calculated impact can be used to spot actions which cause the game to be unbalanced and which, when adjusted, lead to a balanced game. To calculate the impact, our method exclusively uses game logs and refrains from human player feedback, making it reliable and easy to use for current multiplayer games. We tested the proposed method on a tower defence game, where it successfully identified unbalanced actions, reliably leading to a balanced game.


Contents

1 Introduction
  1.1 Research Question
  1.2 Thesis outline
2 Related work
3 Method
  3.1 Action history
  3.2 The action history tree
  3.3 Fair / balanced game
  3.4 Unfair / unbalanced game
  3.5 Discovering high impact actions in a game
4 Domain description
  4.1 Experimental Implementation
  4.2 Game field
  4.3 Towers
    4.3.1 Alien - Chain lightning tower
    4.3.2 Alien - Parasite tower
    4.3.3 Alien - Shock tower
    4.3.4 Human - Fire tower
    4.3.5 Human - Ice tower
    4.3.6 Human - Archer tower
  4.4 Game loop
  4.5 Actions
    4.5.1 Upgrading units
    4.5.2 Sending unit(s)
    4.5.3 Placing towers
5 Experiments
  5.1 Data
  5.2 Experiment 1
    5.2.1 Experimental setup
    5.2.2 Result 3x3
    5.2.3 Result 4x4
    5.2.4 Discussion
  5.3 Experiment 2
    5.3.1 Experimental setup
    5.3.2 Result
    5.3.3 Discussion
6 Conclusion
7 Future work


1 Introduction

To be enjoyable, a multiplayer game has to be balanced [18]. This means that every player, given they possess equal skill, should start with the same likelihood of winning the game in the end. In symmetrical games, where each player can always choose from the same action set and always starts at the same entry point, this is given by default (e.g., rock, paper, scissors). In games containing asymmetrical choices however, like most multiplayer realtime strategy games (MRSG), where players start with sometimes vastly different sets of actions, balancing can be very difficult. Those games usually contain different races, where each race consists of many different buildings and units, allowing a near endless number of strategies and implying that the number of reachable states in a modern commercial game title is enormous [12]. Balancing those games requires not only that the designers take each and every one of those strategies into account, but also the ability to find out why a certain race is, at a given point, more powerful than another. Finding out why is the difficult part of this process and an interesting scientific challenge, since it is hard to attribute to an action the strength or impact it has in a game. This is mainly because actions in an MRSG, like building a unit or a building, can never be directly pitted against each other to automatically determine their individual strength. Their strength is always determined by the context they are currently operating in. For example, a unit of type A wins against a unit of type B or C individually, but not if B and C come in combination. Directly pitting A against B and A against C would show that A is the strongest unit of the game, while it might in fact be the weakest unit if B and C are very easy to produce.
Current MRSGs can contain hundreds of different types of units, buildings and actions, rendering it very difficult for a human designer to find the component which is too strong and causes the game to be unbalanced. Currently, game designers rely on extensive and expensive human testing [15], requiring a considerable amount of time and effort which can extend far past the public release of a game. Developing a method which can automatically find the cause of an unbalanced game would speed up this process tremendously and could prevent heavily unbalanced games from entering the market.

One consequence of unbalanced factions could be observed in the MRSG Company of Heroes (CoH), a real time strategy game set in the Second World War. In its original form CoH consists of two factions, the Wehrmacht and the United States. Both have a very different approach to the game. Without going into too much detail, the units of the United States are generally very versatile but lack experts in a specific role, which forces them to overwhelm the enemy by working together, as they are often going to lose in one to one combat against similar units of the Wehrmacht. The Wehrmacht, on the other hand, is a very specialised faction: every unit has its specific role on the battlefield and is stronger than a unit of the United States, but also generally more expensive and available later in the game. After CoH was released in 2006 the two factions seemed generally well balanced, but soon one action of the United States called "strafing run" (a plane coming from outside of the battlefield and shooting at your units) proved to have a significant impact on the game. This action would very often lead to the United States being in a favourable position and likely winning the game, even if the Wehrmacht had been in the favourable position before. This was extremely frustrating for Wehrmacht players and, until it got patched, made many players stop playing the game, which is one reason why CoH was never as successful as other games of the genre like Starcraft. One has to ask what would have happened if the designers of CoH had been able to patch this one action much sooner and create a better balance, before many people stopped playing CoH in competitive multiplayer.


1.1 Research Question

Can we provide a method which identifies the cause of an imbalance within a multiplayer game?

If an imbalance in win probability is observed in a complex multiplayer game like CoH, finding the action causing this imbalance is a difficult task, mainly because the strength of an action cannot be determined in a discrete setting where one action is pitted against another. Usually the strength of a particular action lies in the combination with other actions and their place inside the overall strategy. Game designers therefore rely on extensive and expensive human testing [15] to determine problematic actions. This process, however, can be very cumbersome and is prone to error, as it is difficult for a player to judge whether they themselves took a wrong decision or the game as a whole is unbalanced.

In this master thesis we investigate the possibility of attributing to actions a value for the strength/impact they have on a game, using only data gathered while the game is played rather than relying on feedback players give. To do this we focus on three steps:

1. Organise the game logs in an easily analysable format.

2. Find a mathematical model which attributes to actions a value of the strength or impact they have on a game.

3. Test the method using game logs of an asymmetrical multiplayer game.

1.2 Thesis outline

The following work is structured in three parts. In the first part we elaborate on related work: we first take a look at academic work involving game logs and afterwards at the current progress in (automatic) game balancing. We then introduce all terminology used throughout the rest of the thesis and explain the developed method in detail.

In the second part, we introduce the domain used for the experiments. We explain why the tower defence environment was chosen as a basis for our thesis and give a high level understanding of the simulator built. For explanations on how to extend the environment we refer to the code and the documentation within.

In the third and last part we test the previously introduced method in experiments using the simulator described in part two, discuss the results, suggest promising future work and conclude the thesis.


2 Related work

Using game logs to gain insight into player behaviour and the game itself is not a new phenomenon, but due to the massive increase of gaming consoles that are constantly connected to high speed internet, developers are now able to gather extensive amounts of data about their game even well past its release date [12]. This data ranges from crash reports to improve stability, over player behaviour, to the exact actions taken within a game, and finds a variety of use cases within game development.

Dixit and Youngblood use player data to identify the best spots to place key artefacts and information within a game, to ensure that they were seen and absorbed by the player [8, 9]. The places are identified using generated observation densities which mark surfaces that generally receive high attention from players.

Kim et al. presented a system to collect and visualise data from user studies, called TRUE. Their system analyses player deaths to find the cause of unintended difficulty increases introduced during development [13]. They do that using an extensive screening process where they link player deaths to the recorded actions taken by the enemies. Within their work they presented a case study on the popular first-person shooter (FPS) game Halo 2, where they were able to identify several unbalanced elements which could be corrected before the public release.

Debeauvais et al. use data collected from more than 200,000 players and 24 million races of the Xbox 360 racing game Forza Motorsport 4. They use this data to analyse how players use and customise driving assists, such as automatic gear shifting or assisted braking, over time, which lets them describe the skill progression of a player [7]. Using those insights they were able to predict when a player was capable enough to successfully disable a given assist, and thus increase the game's difficulty, with a precision ranging between 60 and 90 percent.

Lewis and Wardrip-Fruin [16] collected and analysed large quantities of game data from the popular massively multiplayer online role-playing game World of Warcraft which they used to empirically study the truth behind commonly believed legends like the class which is fastest in reaching the maximum level or which weapons are most commonly used.


When talking about automatically balancing video games, the academic community has had a large focus on developing methods which dynamically adjust the game towards the ability of the current player, creating a rather personalised game. Dynamic difficulty adjustment (DDA) approaches include the dynamic adaptation of the game mechanics or the difficulty [3], the game AI [11][21], but also more recent methods of adapting the game environment itself [4]. All of them aim to create a more custom experience, tailored to a specific player, which has been shown to bring a higher immersion factor and can make the difference towards (commercial) success of a game [20].

For multiplayer games however, especially competitive multiplayer games, achieving balance means creating a game where the difference between winning and losing is to a high percentage determined by your own level of skill and that of your opponent. Even though most successful games these days contain a significant competitive multiplayer part, research such as ours, which investigates the (automated) balancing of games with the focus on creating a fair game, is rare.

Fang et al. [10] propose a method to achieve team balance in their 3D team combat game MagePowerCraft. They evolve artificial neural network controllers using particle swarm optimisation to control two opposing teams in MagePowerCraft, followed by many play outs and a manual decrease of one character's abilities in the team with the higher win ratio, until the difference was insignificant and the game considered balanced.

Chan et al. [6] focused on massively multiplayer online role-playing games (MMORPGs) and the automatic balancing of their ability-increasing functions (the function describing the change in ability strength depending on the character's level) using coevolutionary programming. Their method seems to perform well, but the extremely simplified "MMORPG" of just one available action per character raises the question whether the tests were meaningful.

Currently, Jeffrey J. Beyerl [5] is working on the creation of a convex program to optimally balance offensive skills in the action adventure role-playing game Diablo 3. He argues that an important aspect of balance is the diversity of build selections players can choose from, to enforce meaningful decisions which impact the effectiveness of their character. His current process seems promising and could help to create games with a higher replayability value by enabling game designers to include more versatile strategies.


3 Method

When talking about why a video game is unbalanced, players frequently pinpoint one action which gives one faction an unfair advantage as the cause of the imbalance. If such an action exists in a game, it should be possible to analyse large numbers of game logs, or more precisely the histories of actions taken within a game, and find actions which, when played, lead to a win more often than other actions. Based on that, we propose a method which calculates how big an impact actions have on a game. We define the impact of an action as how vital the specific action is to winning strategies and other actions: roughly speaking, how often an action is part of a winning strategy and how important the action is within those strategies. In the upcoming sections we describe in detail how we first organise the action histories in action history trees, which then get used to calculate the impact of each action within the game. We also give a definition of balanced and unbalanced games, showing their corresponding action history trees using best of three rock, paper, scissors as an example game.

3.1 Action history

An action history describes the concatenation of actions taken by a player inside a game leading to a certain outcome. For example, consider a game of best of three rock, paper, scissors (RPS). If player one plays scissors, rock, scissors and player two plays paper, paper, rock, this leads to player two winning the game and player one losing.

Figure 1: Action history of a best of three rock, paper, scissors game

The action history of this RPS game is thus (scissor × paper), (rock × paper), (scissor × rock), which leads to a loss for player one and a win for player two.


3.2 The action history tree

Given the action histories of both players, they are stored in a tree, starting with the first action of player one, followed by the first action of player two, followed by the second action of player one, and so on. In every node the observed number of wins, draws and losses subsequent to that action is stored. Figure 2a shows the action history tree for the action history shown in section 3.1. Adding another action history to the tree, player one (Scissor, Rock, Scissor) vs. player two (Paper, Rock, Rock), results in Figure 2b.

Figure 2: Action history trees: (a) one action history; (b) two action histories
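To make the insertion procedure concrete, the following Java sketch builds such a tree from two interleaved action histories. All class and method names here are hypothetical illustrations rather than the thesis implementation; each node stores the win:draw:lose counts of all games whose history passed through it.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of an action history tree node. Counts are stored
// from player one's perspective: wins, draws and losses observed in all
// games whose action history passed through this node.
public class HistoryTree {
    final Map<String, HistoryTree> children = new HashMap<>();
    int wins, draws, losses;

    // Interleave the two players' actions (p1[0], p2[0], p1[1], p2[1], ...)
    // and walk/extend the tree, updating the counters along the path.
    // outcome: 1 = player one wins, 0 = draw, -1 = player one loses.
    void add(List<String> p1, List<String> p2, int outcome) {
        HistoryTree node = this;
        for (int i = 0; i < p1.size(); i++) {
            node = node.step(p1.get(i), outcome);
            node = node.step(p2.get(i), outcome);
        }
    }

    private HistoryTree step(String action, int outcome) {
        HistoryTree child = children.computeIfAbsent(action, k -> new HistoryTree());
        if (outcome > 0) child.wins++;
        else if (outcome == 0) child.draws++;
        else child.losses++;
        return child;
    }
}
```

Inserting the two histories of Figure 2 would, under this sketch, leave the root "Scissor" node with counts 0:1:1, matching Figure 2b.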

To further clarify the tree structure, a three-play-out RPS game was run 10,000 times with two players picking actions at random (this setup was used for all data gathered about the RPS games in the following sections unless stated otherwise). The results were used to populate the trees partially shown in Figure 3. In the case of rock, paper, scissors not one but three trees were created, one for each starting action of player one (Figures 3 & 5).

3.3 Fair / balanced game

Looking at the RPS game of the previous section, the win and lose ratios are equal for both players (Table 1b). This makes sense: both players have the same action space, where at each play an action wins, draws or loses with a chance of 1/3 (Table 1a). This makes rock, paper, scissors a fair game for both players.

(a) Results of RPS:

                      P2
             Rock   Paper   Scissor
P1  Rock     Tie    P2      P1
    Paper    P1     Tie     P2
    Scissor  P2     P1      Tie

(b) Outcome of 10,000 games of best of three RPS:

Winner       Ratio
Player one   37%
Tie          26%
Player two   37%

Table 1: RPS

Figure 3: Action history trees populated with 10,000 play outs of rock, paper, scissors


3.4 Unfair / unbalanced game

To generate an unfair game, a fourth action called spock is introduced. Spock only loses against paper, and replaces rock of player one in the second play of the best of three RPS game (Table 2a). As a result the win ratio is skewed towards player one (Table 2b), making RPS + spock an unfair game.

(a) Results of play two of RPS + spock:

                      P2
             Rock   Paper   Scissor
P1  Rock     Tie    P2      P1
    Paper    P1     Tie     P2
    Scissor  P2     P1      Tie
    Spock    P1     P2      P1

(b) Outcome of 10,000 games of best of three RPS + spock:

Winner       Ratio
Player one   42%
Tie          25%
Player two   33%

Table 2: RPS + spock

In this case spock is the high impact action discussed in the introduction of section 3, skewing the win probability in favour of player one.

Figure 4: Action history trees populated with 10,000 play outs of RPS + spock


3.5 Discovering high impact actions in a game

In a multiplayer strategy game like CoH or Starcraft, the information of Tables 1a and 2a is rarely available, essentially because actions cannot be directly pitted against each other. On the other hand, observing the win ratio and monitoring the played actions to build the action history tree is doable without too much overhead for most games. In the following section we explain the developed method and elaborate on how to spot actions with a high impact on the game using only the action history tree.

Given an unfair game like RPS + spock of section 3.4, the goal is to spot the high impact actions which lead to a win disproportionately more often than to a loss. In an effort to calculate the impact of an action on a game, we first look at the average win/draw/lose ratio (AWR, ADR, ALR) following an action of a player. In other words: does an action in general lead to a win more often than other actions of the same player?

    AWR_action = Σ_{a ∈ A} a.win  / Σ_{b ∈ A} (b.win + b.draw + b.lose)

    ADR_action = Σ_{a ∈ A} a.draw / Σ_{b ∈ A} (b.win + b.draw + b.lose)

    ALR_action = Σ_{a ∈ A} a.lose / Σ_{b ∈ A} (b.win + b.draw + b.lose)

where A = all nodes of the action in the action history tree
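Note that the three ratios share one denominator: the total number of outcomes observed over all nodes of the action, so AWR + ADR + ALR = 1. A minimal Java sketch of this computation (the class and method names are hypothetical, not the thesis implementation):

```java
// Hedged sketch: compute the average win/draw/lose ratio of one action
// from the counters of all tree nodes labelled with that action.
// Each node is represented here simply by its {win, draw, lose} triple.
public class Ratios {
    // nodes: one {win, draw, lose} row per node of the action in the tree
    static double[] awrAdrAlr(int[][] nodes) {
        double w = 0, d = 0, l = 0;
        for (int[] n : nodes) { w += n[0]; d += n[1]; l += n[2]; }
        double total = w + d + l;            // same denominator for all three
        return new double[] { w / total, d / total, l / total };
    }
}
```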

(a) Rock, paper, scissors:

        Action    AWR   ADR   ALR
P1      Rock      0.37  0.26  0.37
        Paper     0.37  0.26  0.37
        Scissor   0.36  0.26  0.38
P2      Rock      0.38  0.25  0.37
        Paper     0.37  0.26  0.37
        Scissor   0.36  0.27  0.37

(b) Rock, paper, scissors, spock:

        Action    AWR   ADR   ALR
P1      Rock      0.42  0.24  0.33
        Paper     0.40  0.25  0.34
        Scissor   0.39  0.25  0.35
        Spock     0.49  0.21  0.29
P2      Rock      0.32  0.24  0.44
        Paper     0.34  0.25  0.40
        Scissor   0.35  0.24  0.41

Table 3: Average win, draw and lose ratio after a specific action got played.

In the example of RPS + spock in Table 3b, spock outperforms all other actions of player one in AWR and ALR, with its AWR lying above the overall win ratio of 42% (Table 2b). Second, we introduce AWR_{a,w}, ADR_{a,w} and ALR_{a,w}, defined as the AWR, ADR and ALR of an action a given the absence of action w, calculated with Algorithm 1 on the action history tree. The result is a vector containing:

    [ AWR_{a,w}, ADR_{a,w}, ALR_{a,w} ]


Algorithm 1: Calculate AWR_{a,w}, ADR_{a,w} and ALR_{a,w}

procedure CalcWDLRatio(a, w)
    result = [0, 0, 0]
    for all node in actionHistoryTree.startingNodes do
        result += CalcWDLWithout(a, w, node, false)
    end for
    total = result[0] + result[1] + result[2]
    return result / total
end procedure

procedure CalcWDLWithout(a, w, node, found)
    result = [0, 0, 0]
    if node.player != a.player then
        for all child in node.children do
            result += CalcWDLWithout(a, w, child, found)
        end for
        return result
    end if
    if node.action = a then
        result += node.result
        found = true
    else if node.action = w then
        if found then
            return -node.result
        else
            return result
        end if
    end if
    for all child in node.children do
        result += CalcWDLWithout(a, w, child, found)
    end for
    return result
end procedure
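A possible Java translation of Algorithm 1 is sketched below. The Node class and all names are hypothetical; a node is assumed to store the acting player, the action label, its aggregated {win, draw, lose} counts and its children, so subtracting a w-node's counts removes that whole subtree from a's already-added total.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of Algorithm 1 (hypothetical names, not the thesis code base).
// An action is identified by (player, label); node.result is assumed to
// aggregate the {win, draw, lose} counts of the node's whole subtree.
public class ImpactCalc {
    static class Node {
        final int player;
        final String action;
        final long[] result;                      // {win, draw, lose}
        final List<Node> children = new ArrayList<>();
        Node(int player, String action, long w, long d, long l) {
            this.player = player;
            this.action = action;
            this.result = new long[] { w, d, l };
        }
    }

    // AWR/ADR/ALR of action a (of player aPlayer) in the absence of action w.
    static double[] calcWDLRatio(List<Node> startingNodes, int aPlayer, String a, String w) {
        long[] r = new long[3];
        for (Node n : startingNodes) add(r, calcWDLWithout(aPlayer, a, w, n, false));
        double total = r[0] + r[1] + r[2];
        return new double[] { r[0] / total, r[1] / total, r[2] / total };
    }

    static long[] calcWDLWithout(int aPlayer, String a, String w, Node node, boolean found) {
        long[] r = new long[3];
        if (node.player != aPlayer) {             // other player's move: just recurse
            for (Node c : node.children) add(r, calcWDLWithout(aPlayer, a, w, c, found));
            return r;
        }
        if (node.action.equals(a)) {              // saw a: count its subtree
            add(r, node.result);
            found = true;
        } else if (node.action.equals(w)) {
            if (found) return negate(node.result); // cancel w's subtree out again
            return r;                              // a not seen yet: prune this branch
        }
        for (Node c : node.children) add(r, calcWDLWithout(aPlayer, a, w, c, found));
        return r;
    }

    static void add(long[] r, long[] s) { for (int i = 0; i < 3; i++) r[i] += s[i]; }
    static long[] negate(long[] s) { return new long[] { -s[0], -s[1], -s[2] }; }
}
```

On a toy tree where action a's aggregated counts are {2, 0, 1} and a w-branch below it contributes {1, 0, 0}, the sketch yields the ratios of the remaining {1, 0, 1}, i.e. [0.5, 0, 0.5].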


Finally, we define the impact I of an action b as how much b shifts other actions towards a higher win ratio when used in the same game. In other words: how big a positive or negative impact does a specific action have on other actions and ultimately the game?

    I(b) = Σ_{a ∈ A} (AWR_a - AWR_{a,b}) + 0.5 × (ADR_a - ADR_{a,b}) - (ALR_a - ALR_{a,b})

where A = all actions in the action history tree

We argue that in an unfair game, where the win probability is skewed towards one player/race, the action with the highest impact among the winning player's actions is the source of the imbalance. In fact, looking at the calculated impact of every action player one can use in RPS + spock (Figure 5), the imbalanced action spock is identified as the action with the highest impact on the game.

Figure 5: Impact of each of player one's actions (rock, paper, scissors, spock) in RPS + spock
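The impact formula reduces to a weighted sum of ratio shifts. A hedged Java sketch, assuming the per-action ratio vectors [AWR, ADR, ALR] have already been computed both with and without action b (all names here are hypothetical):

```java
import java.util.Map;

// Sketch of the impact score I(b): sum, over all actions a, of how much
// removing b shifts a's win/draw/lose ratios. `with` maps each action to
// its [AWR, ADR, ALR]; `withoutB` maps each action to its
// [AWR_{a,b}, ADR_{a,b}, ALR_{a,b}] (hypothetical representation).
public class Impact {
    static double impact(Map<String, double[]> with, Map<String, double[]> withoutB) {
        double i = 0;
        for (Map.Entry<String, double[]> e : with.entrySet()) {
            double[] full = e.getValue();
            double[] absent = withoutB.get(e.getKey());
            i += (full[0] - absent[0])             // win-ratio shift, full weight
               + 0.5 * (full[1] - absent[1])       // draw shift, half weight
               - (full[2] - absent[2]);            // lose shift, negative weight
        }
        return i;
    }
}
```

For example, if removing b drops one action's AWR from 0.5 to 0.4 and raises its ALR from 0.3 to 0.4, b's contribution is 0.1 + 0 + 0.1 = 0.2: a high-impact action.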


4 Domain description

As a foundation for the research done in this thesis, a multiplayer game containing asymmetrical choices had to be found. While there are many open source games available, all of them lack the possibility to decouple graphics from actual gameplay. This aspect significantly slows down the process of simulating many rounds of a game, rendering it virtually impossible to generate a sizeable amount of analysable data. To overcome this problem and to build a solid base for future research, a custom game was developed with the focus of making play-outs as fast as possible while still offering the option to visualise the game and play it.

After careful consideration, tower defence (TD) was chosen as the genre for this game. Tower defence is typically characterised by static units placed by the player (the towers) and mobile units (the creeps) placed by the game or another player. The creeps commonly walk a predefined path from start to finish, where upon arrival they inflict some kind of penalty on the player (e.g. losing a life). To prevent the creeps from reaching their goal, the player has to strategically place towers along the way which shoot at the creeps to destroy them (Figure 6).

Figure 6: Illustration of a typical tower defence game. Units walk along the brown path while towers are placed on and shoot from the green ground.


TD has three features making it the preferred genre for this research. First, towers can only be placed on predefined positions, reducing the complexity needed for an agent to play the game. Nevertheless, the action space can easily be expanded by increasing the size of the game fields or adding more towers to the game. Second, creeps walk a predefined path, taking away the necessity of manoeuvring units, which can be a tedious task to implement and has a big impact on how well a game is played. Third, tower defence is a genre which is popular within the gaming community and offers a variety of research opportunities, including map generation, dynamic difficulty adjustment and player modelling [2][17][19], meaning that the developed simulator could be used far beyond this research.

4.1 Experimental Implementation

The custom TD game (CTD) developed for this research contains two races, the human race and the alien race. The races are distinguished by the towers they can build (4.5.3), while having the same selection of units and upgrades to choose from (4.5). The game was developed in Java and will be publicly available on GitHub. In this section we first give a brief overview of how the developed game works before explaining each aspect in detail.

In CTD two players play against each other, one playing the human, the other the alien race. Both players have their own copy of the game field where they can place their towers, while sending creeps to the opposing player's map. At every step of the game each player has to choose between building a tower, upgrading the creeps or sending them to the opposing player's map. Both players start with the same amount of lives, and a player has won once the other player is at or below 0 lives. If both players arrive at 0 or fewer lives at the same time, or if the game runs over a predefined number of rounds, the result is a tie.


4.2 Game field

The game field or map consists of an n × n grid of fields. A field can be either a tower field or a unit field. The tower fields belong to the player playing on that map, who places their towers there, while the unit fields are where the opposing player's units walk. Units are spawned at the start field and walk towards the end field. Each tower field can be uniquely identified by its position, starting with zero at the top left corner and ending at N-1 in the bottom right, where N denotes the total amount of tower fields of the map (Figure 7).

Figure 7: Example 6 × 6 game field, showing unit fields, tower fields, the start and end fields, and the walking direction

New game fields are easily created by writing the notation of Figure 8 into a text file and placing it inside the root directory. The framework will automatically parse the file and convert it into a map if all of the following conditions are met.

OOOSOO    O: Tower field
OXXXOO    X: Unit field
OXOOOO    S: Start unit field
OXXXXO    E: End unit field
OOOOXO
OEXXXO

Figure 8: Notation used to define game fields


First, exactly one start field S and exactly one end field E must be present at the boundary regions of the map. Second, the start and the end fields must be connected via unit fields.

(a)

OOOSOO
OXXXOO
OXOXXX
OXXXOX
OOOOOX
OEXXXX

(b)

OOOSOO
OOOXOO
OOXXOO
OOXXOO
OOOXOO
OOOEOO

Figure 9: Not allowed maps

Third, in every map the way units walk must be uniquely identifiable. This means a unit field must be adjacent to exactly two other unit fields, with the exception of the start and end fields, which must have exactly one adjacent unit field. Maps as in Figures 9a and 9b are therefore not allowed.
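The three map rules lend themselves to a simple automated check. The following Java sketch verifies the third rule; the helper is hypothetical, and it assumes adjacency means orthogonal neighbours, since units walk horizontally and vertically:

```java
// Hypothetical sketch of the third map rule: every unit field must touch
// exactly two other unit fields orthogonally, except S and E, which must
// touch exactly one, so the walking path is unique.
public class MapCheck {
    static boolean pathIsUnique(char[][] map) {
        int n = map.length;
        for (int r = 0; r < n; r++) {
            for (int c = 0; c < map[r].length; c++) {
                char f = map[r][c];
                if (f != 'X' && f != 'S' && f != 'E') continue;
                int adj = 0;                         // orthogonal unit-field neighbours
                int[][] dirs = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
                for (int[] d : dirs) {
                    int rr = r + d[0], cc = c + d[1];
                    if (rr < 0 || cc < 0 || rr >= n || cc >= map[rr].length) continue;
                    char g = map[rr][cc];
                    if (g == 'X' || g == 'S' || g == 'E') adj++;
                }
                boolean endpoint = (f == 'S' || f == 'E');
                if (endpoint ? adj != 1 : adj != 2) return false;
            }
        }
        return true;
    }
}
```

Under this check, the map of Figure 8 passes, while both maps of Figure 9 fail (Figure 9a has a unit field with three neighbours, Figure 9b has a 2 × 2 block of unit fields).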


4.3 Towers

In CTD the races are distinguished by the towers available to them. Every tower in the game is exclusive to one race: the human race can build the fire, ice and archer tower, while the alien race has the chain lightning, parasite and shock tower.

Every tower has two attributes: the damage dealt with each shot and the distance it can shoot. The distance or range of a tower is denoted in game fields reachable from the place the tower was built using horizontal, vertical or diagonal movement (Figures 10a, 10b).

(a) Range of one:

OOOSOO
OXXXOO
OXOTOO
OXXXXO
OOOOXO
OEXXXO

(b) Range of two:

OOOSOO
OXXXOO
OXOTOO
OXXXXO
OOOOXO
OEXXXO

Figure 10: Tower ranges (T marks the tower)
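Since range counts horizontal, vertical and diagonal steps equally, a field is in range exactly when its Chebyshev distance to the tower is at most the tower's range. A small Java sketch (hypothetical names):

```java
// Range as described above is a Chebyshev distance: a field is reachable
// in `range` horizontal/vertical/diagonal steps iff the larger of the
// row and column offsets is at most `range`. Names are hypothetical.
public class Range {
    static boolean inRange(int towerRow, int towerCol, int row, int col, int range) {
        int dr = Math.abs(towerRow - row);
        int dc = Math.abs(towerCol - col);
        return Math.max(dr, dc) <= range;
    }
}
```

For example, a diagonally adjacent field is in range 1, while a field two columns away needs range 2.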

On top of that, most towers have a unique special ability described in the following section.

4.3.1 Alien - Chain lightning tower

Figure 11: The chain lightning tower

The chain lightning tower shoots lightning bolts which jump from unit to unit, dealing damage to all of them. The attributes of this tower are the amount of jumps and the length it will jump. E.g., if the amount of jumps is set to three and the length is set to two, the bolt will damage its first targeted unit, then jump to another unit within range of two fields, deal its damage, and jump to another unit within two fields and deal its damage again, damaging three units in total. If at one point there is no other unit to jump to, the bolt will stay with the current unit and deal its damage up to three times.

At every step of the game the tower will try to shoot at the unit it previously targeted. If there is no such unit, or the unit was destroyed or walked out of range, a new unit is picked at random.


4.3.2 Alien - Parasite tower

Figure 12: The parasite tower

If a unit gets hit by a parasite tower, it walks backwards with the same speed it would normally walk forwards.

At every step of the game the tower will shoot at a unit within its range which is currently not under the influence of the parasite. If no such unit exists, it will choose a unit within its range at random.

4.3.3 Alien - Shock tower

Figure 13: The shock tower

If a unit is hit by this tower, the unit will not move for a certain amount of time.

Like the parasite tower, this tower prefers to shoot at a unit which is currently not affected by its special ability.

4.3.4 Human - Fire tower

Figure 14: The fire tower

The fire tower deals its damage to all units on the field it shoots at. At every step this tower picks, like the chain lightning tower, the unit it previously targeted or a random one.

4.3.5 Human - Ice tower

Figure 15: The ice tower

The ice tower slows down units it shoots at by a certain percentage. This tower prefers to shoot at units currently not influenced by its special ability.

4.3.6 Human - Archer tower

Figure 16: The archer tower

The archer tower has no special ability by itself but exceeds all other towers in range. Like the chain lightning tower, the archer tower will shoot at the unit it previously targeted or a random one.


4.4 Game loop

In this section the different steps of the CTD game are described in detail. At the start of every game both players enter with the same amount of lives and a copy of the same n × n map.

Figure 17: The game loop of CTD: 1. fulfil actions of players, 2. towers shoot, 3. units walk

After the game is set up, the game loop begins.

1. Both players choose one action, which gets executed right away. An action can either be to send units, to upgrade all subsequent units in health, movement or amount, or to build a tower on one of the free tower fields of the player's map.

2. The already placed towers each pick a unit inside their range to shoot at. Which unit is picked at a given point depends on the type of tower choosing the target. Fire, archer and chain lightning towers will shoot at their last damaged target; if their last target is either dead or out of range, they choose a target at random. Ice, parasite and shock towers will preferably shoot at a unit currently not influenced by their special ability; if no such unit exists, they too choose a target at random.

3. All units walk, according to their movement attribute, towards the end of the game field. If a unit walks out of the map, the player playing on that map loses a life.

If after step 3 none of the three criteria in Table 4 are met, the game continues with step 1. Otherwise the game terminates with the appropriate result.

Criteria                                 Result
1. A player has 0 or fewer lives         The other player wins
2. Both players have 0 or fewer lives    Tie
3. Game exceeded max amount of loops     Tie

Table 4: Termination criteria
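The three-phase loop and the termination criteria of Table 4 can be sketched as follows. This is a minimal illustration, not the thesis implementation: `apply_action`, `towers_shoot` and `units_walk` are simplified stand-ins (here a "send" action simply leaks one unit past the opponent's towers).

```python
def apply_action(state, player, action):
    # Stand-in for phase 1: a "send" action costs the opponent one life.
    if action == "send":
        state["lives"][1 - player] -= 1

def towers_shoot(state):
    pass  # stand-in for phase 2

def units_walk(state):
    pass  # stand-in for phase 3

def run_game(policy_a, policy_b, lives=10, max_loops=100):
    """Run the three-phase loop until a criterion of Table 4 is met."""
    state = {"lives": [lives, lives], "loop": 0}
    while True:
        # 1. Both players choose one action, executed right away.
        for player, policy in enumerate((policy_a, policy_b)):
            apply_action(state, player, policy(state))
        towers_shoot(state)   # 2. towers pick targets and shoot
        units_walk(state)     # 3. units walk; leaked units cost lives
        state["loop"] += 1

        a_dead = state["lives"][0] <= 0
        b_dead = state["lives"][1] <= 0
        if a_dead and b_dead:
            return "tie"                      # criterion 2
        if a_dead:
            return "player 2 wins"            # criterion 1
        if b_dead:
            return "player 1 wins"            # criterion 1
        if state["loop"] >= max_loops:
            return "tie"                      # criterion 3
```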


4.5 Actions

The CTD game contains two types of actions: actions available to both races, such as upgrading and sending units, called global actions, and actions only available to players of a certain race, such as placing a specific tower, called race-specific actions.

4.5.1 Upgrading units

At every step of the game, a player can choose to upgrade one of the attributes of all future units, or the amount of units spawned at the other player's starting position. A unit has two attributes, health and movement. Health describes the amount of damage a unit can take before it is removed from the map, while movement describes the number of fields a unit walks along its path at every step.

Id  Attribute  Initial  Increase per upgrade
H   Health     1.0      +0.7
M   Movement   1.0      +0.4
A   Amount     1.0      +0.75

Table 5: Upgrades

4.5.2 Sending unit(s)

If at any step a player chooses to send units, the game will create units using the attributes of Table 5. Every attribute a will be set to:

    I_a + t_a * U_a    (1)

where I_a is the initial value of attribute a, t_a denotes the number of times a was upgraded and U_a the increase per upgrade. While all three attributes hold floating point values, only the integer part of the amount attribute is taken into account; e.g. if amount is 2.5, only 2 units will be spawned.
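Equation (1) and the integer-part rule for the amount attribute translate directly into code. The dictionary below mirrors Table 5; this is an illustrative sketch, not the simulator's code.

```python
# Initial values and per-upgrade increments, as in Table 5.
UPGRADES = {
    "health":   {"initial": 1.0, "increase": 0.7},
    "movement": {"initial": 1.0, "increase": 0.4},
    "amount":   {"initial": 1.0, "increase": 0.75},
}

def attribute_value(name, times_upgraded):
    """Equation (1): I_a + t_a * U_a."""
    u = UPGRADES[name]
    return u["initial"] + times_upgraded * u["increase"]

def units_to_spawn(times_amount_upgraded):
    # Only the integer part of the amount attribute counts: 2.5 -> 2 units.
    return int(attribute_value("amount", times_amount_upgraded))
```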

4.5.3 Placing towers

Besides the actions available to both races, the races are distinguished by the towers available to them. Every tower in the game is exclusive to one race: where the human race can build the fire, ice and archer tower, the alien race has the chain lightning, parasite and shock tower.

(24)

Every tower has two attributes, the damage dealt with each shot and the distance, in game fields, it can shoot. On top of that, most towers have a unique special ability (Tables 6 and 7).

Id  Tower   Damage  Range  Special
F   Fire    0.6     1      Damages all units on one field
I   Ice     1       1      Reduces movement speed by 90% for 3 steps
A   Archer  1       3      -

Table 6: Human race towers

Id  Tower            Damage  Range  Special
C   Chain lightning  0.4     1      Damages 3 units less than 3 fields apart
P   Parasite         0       1      Lets the hit unit walk backwards for 2 steps
S   Shock            1       1      Reduces movement speed to 0 for 3 steps

Table 7: Alien race towers

In the game the action to place a tower on a specific field is described by the tower id and the position on which the tower should be placed. E.g. the action C4 will place a chain lightning tower at the fourth tower position of the map.
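As a small illustration of this encoding, a placement action string can be decoded by splitting the tower id from the position. `parse_placement` is a hypothetical helper for this sketch, not part of the game code.

```python
# Tower ids of Tables 6 and 7.
TOWER_IDS = {
    "F": "Fire", "I": "Ice", "A": "Archer",                  # human race
    "C": "Chain lightning", "P": "Parasite", "S": "Shock",   # alien race
}

def parse_placement(action):
    """Decode e.g. "C4" into the tower name and the tower position."""
    tower_id, position = action[0], int(action[1:])
    return TOWER_IDS[tower_id], position
```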


5 Experiments

In Section 3, a simple game of rock, paper, scissors played by random agents was used to introduce the method. But rock, paper, scissors (RPS) is not necessarily a good game for analysing whether the method works: best of three RPS is a concatenation of one-shot games, rendering tactics irrelevant. Current multiplayer games, however, offer the player a whole variety of different tactics and synergies between actions.

To evaluate whether our method works in a complex multiplayer game, we developed two experiments using the game described in Section 4. The action histories are generated by two autonomous agents playing against each other on two different maps. In each experiment we will first balance the game using the impact calculated with the algorithm introduced in the previous section, then investigate whether the game could also have been balanced by focusing on the other actions.

5.1 Data

The generation of player data was done with two Monte Carlo tree search (MCTS) [14] agents playing against each other, one playing the alien race and one playing the human race. The two agents learned for 100000 rounds while playing against each other, and the last 5000 action histories were used to populate the action history tree.

The MCTS algorithm used by the agents is referred to as Upper Confidence Tree (UCT) [14], which extends the Upper Confidence Bound algorithm by Auer et al. [1] to trees. UCT combines the exploration and the building of the tree. The tree starts at the root node, from where three phases are repeatedly executed: the bandit phase, the tree building phase and the random walk phase.

The bandit phase starts in the root node, where the agent continually chooses an action/child node until arriving in a leaf node. The decision which action is taken at every step is handled as a multi-armed bandit problem. The set A_s of possible actions a in a node s defines the child nodes (s, a) of s. The selected action a* maximises the Upper Confidence Bound:

    r̂_{s,a} + sqrt(c_e * log(n_s) / n_{s,a})    (2)

over all a in A_s, with r̂_{s,a} describing the average reward accumulated by selecting action a in state s, n_s the total number of times node s was visited and n_{s,a} the number of times action a was taken from node s. The term c_e handles the exploration vs. exploitation trade-off, where a high c_e favours exploration and a low c_e favours exploitation.
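The bandit-phase selection rule of equation (2) can be sketched as follows. The node representation (a dict from actions to visit statistics) is a simplifying assumption of this sketch, and unvisited actions are given an infinite score so that each arm is tried at least once.

```python
import math

C_E = 3 * math.sqrt(2)  # exploration constant, as in Table 8

def ucb_select(node):
    """Pick the action maximising r_hat_{s,a} + sqrt(c_e * log(n_s) / n_{s,a}).

    node maps each action to {'r_hat': average reward, 'n': visit count};
    n_s is the total number of visits to the node."""
    n_s = sum(child["n"] for child in node.values())

    def score(action):
        child = node[action]
        if child["n"] == 0:
            return float("inf")  # force at least one visit per action
        return child["r_hat"] + math.sqrt(C_E * math.log(n_s) / child["n"])

    return max(node, key=score)
```

A rarely tried arm can outscore a well-explored arm with a higher average reward, which is exactly the exploration behaviour the c_e term controls.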

The tree building phase is entered upon arrival in a leaf node. An action is selected uniformly at random and added as a child node of s.

The random walk phase begins after the new child node was added to the tree. At every step an action is taken (uniformly or heuristically) until the game ends. At this point the acquired reward r_u is back-propagated towards the root node and all nodes visited in this tree run are updated:

    r̂_{s,a} ← (n_{s,a} * r̂_{s,a} + r_u) / (n_{s,a} + 1)    (3)

    n_{s,a} ← n_{s,a} + 1;    n_s ← n_s + 1    (4)
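Equations (3) and (4) amount to folding the terminal reward into a running average along the visited path. A minimal sketch, where the statistics-dict representation is an assumption of this sketch rather than the thesis code:

```python
def backpropagate(path, r_u):
    """Update every (state, action) statistics dict visited in this run.

    Equation (3): r_hat <- (n * r_hat + r_u) / (n + 1)
    Equation (4): n <- n + 1 (n_s is implicitly the sum of the children's n).
    """
    for stats in path:
        stats["r_hat"] = (stats["n"] * stats["r_hat"] + r_u) / (stats["n"] + 1)
        stats["n"] += 1
```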

Both MCTS agents were initialised with the same values for all experiments in the following sections (Table 8).

Parameter          Value
r̂_{s,a} initial    0.5
c_e                3 * sqrt(2)

Table 8: MCTS parameters

The terminal reward r_u for an agent depends on the outcome of the game:

    r_u =  10                  if the agent won
           L_played - L_max    if draw
           -2 * L_max          if the agent lost

with L_max denoting the maximum allowed game length and L_played the actual length of the played game.
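The terminal reward translates directly into code; the outcome labels are chosen for this sketch.

```python
def terminal_reward(outcome, l_played, l_max=100):
    """Reward r_u at the end of a game, as defined above."""
    if outcome == "won":
        return 10
    if outcome == "draw":
        return l_played - l_max  # longer draws are punished less
    return -2 * l_max            # outcome == "lost"
```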

5.2 Experiment 1

In this first experiment we analyse in depth whether a game can be balanced by looking at actions with a high impact.

5.2.1 Experimental setup

All tower attributes were initialised manually with the ambition to create a fair game (Table 9). The game runs on two different maps, a three by three map (Figure 18a) and a four by four map (Figure 18b), where each player starts with 10 lives and the maximum game length is set to 100 steps. Every run was repeated three times and all numbers in the following sections are the average over those runs.

Id  Tower            Damage  Range  Special
F   Fire             0.6     1      Damages all units on one field
I   Ice              1       1      Reduces movement speed by 90% for 3 steps
A   Archer           1       3      -
C   Chain lightning  0.5     1      Damages 3 units less than 3 fields apart
P   Parasite         0       1      Lets the hit unit walk backwards for 3 steps
S   Shock            0.5     1      Reduces movement speed to 0 for 3 steps

Table 9: Tower attributes

Once the agents have played against each other, the resulting win ratio is calculated, the attributes of the tower with the highest impact of the winning faction are decreased, and the experiment is started again, until the difference in win ratio is smaller than 1% and the game is considered balanced. After balance is achieved, we try to balance the towers excluding the tower with the highest impact, to see if balance could also be attained by lowering the attributes of the other towers.
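The balancing procedure just described can be sketched as a loop. `play_matches` and `compute_impact` are hypothetical stand-ins for the simulator and the impact calculation of Section 3, and the fixed damage decrement is an assumption of this sketch.

```python
def balance(attributes, play_matches, compute_impact, threshold=0.01):
    """Lower the highest-impact tower of the winning faction until the
    win ratios differ by less than the threshold."""
    steps = 0
    while True:
        human_ratio, alien_ratio = play_matches(attributes)
        if abs(human_ratio - alien_ratio) < threshold:
            return attributes, steps            # game considered balanced
        winner = "human" if human_ratio > alien_ratio else "alien"
        impact = compute_impact(winner)         # impact per tower of the winner
        tower = max(impact, key=impact.get)     # highest-impact tower
        attributes[tower]["damage"] -= 0.1      # assumed decrement for the sketch
        steps += 1
```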


Figure 18: Illustrations of the maps used in the experiments: (a) three by three map (3x3), (b) four by four map (4x4)

5.2.2 Result 3x3

The result of this setup is a considerable inequality in win ratio between the alien and the human player, with the alien player being 34% more likely to win against the human player on the 3x3 map (Table 10).

Winner  Ratio
Human   38%
Tie     11%
Alien   51%

Table 10: Win ratio human vs. alien agent using the attributes of Table 9 on map 3x3

This leads to the conclusion that this game is unbalanced in favour of the alien player. Calculating the impact of each tower (Figure 19) reveals the chain lightning tower as the tower with by far the biggest positive impact on the game of the alien player.


Figure 19: Impact of every tower in CTD using the attributes in Table 9

Following the hypothesis that in an unbalanced game a high positive impact action is the reason for that imbalance, the damage of the chain lightning tower was gradually lowered from 0.6 to 0.3 (Table 11).

Damage  0.6  0.5  0.4  0.3
Human   38%  40%  42%  44%
Tie     11%  11%  12%  12%
Alien   51%  49%  46%  44%

Table 11: Win ratio human vs. alien agent depending on the damage of the chain lightning tower

Lowering the damage output of the chain lightning tower ultimately resulted in a fair game (Table 11). Interestingly, a direct correlation between the decline of the chain lightning tower's impact (Figure 20) and the decline of the alien player's win ratio could be observed, suggesting that the chain lightning tower was in fact the action causing the imbalance.


Figure 20: Impact of the chain lightning tower in CTD while lowering its damage output from 0.6 to 0.3

However, one could argue that the same effect would have been achieved by lowering the attributes of the shock and/or parasite tower. To investigate this, the experiment was repeated using three different settings (Table 12), in which the attributes of the parasite and/or shock tower were lowered.

Setting
A  Parasite effect reduced to 2 steps
B  Parasite and shock effect reduced to 1 step
C  Setting B and shock tower damage reduced to 0.25

Table 12: The different settings the experiment was repeated with. Each setting uses Table 9 as a foundation.

Looking at the results of each setting in Table 13, even substantially lowering the attributes of any alien tower other than the chain lightning tower does not have the same effect on the game as lowering the attributes of the chain lightning tower.


        Setting A  Setting B  Setting C
Human   39%        40%        41%
Tie     11%        11%        11%
Alien   50%        49%        48%

Table 13: Win ratio human vs. alien agent given setting A, B or C

Calculating the impact of the towers over the course of the different settings (Figure 21) shows that none of the settings has a significant effect on the values. The chain lightning tower remains the tower with the highest impact, leading to the conclusion that it was in fact the chain lightning tower which caused the game to be imbalanced.

Figure 21: Impact of every tower in CTD given settings A, B and C

5.2.3 Result 4x4

In comparison to the 3x3 map, the result of this setup is a win ratio in favour of the human player (Table 14), who wins the game 17% more often than the alien player, showing that this game is in fact unbalanced in favour of the human race.

Winner  Ratio
Human   48%
Tie     11%
Alien   41%

Table 14: Win ratio human vs. alien agent using the attributes of Table 9

To find the action responsible for the imbalance, we again calculate the impact of every tower (Figure 22). Looking at the impact of the human towers, the archer tower is the tower with the highest impact on the game.

Figure 22: Impact of every tower in CTD using the attributes in Table 9

To verify that the archer tower is the responsible action for the imbalance, we employ the same process as in the previous section. First the damage of the archer tower was gradually lowered until a balanced game was achieved (Table 15).


Damage  1.0  0.8  0.6  0.4
Human   48%  47%  46%  44%
Tie     11%  11%  11%  11%
Alien   41%  42%  43%  45%

Table 15: Win ratio human vs. alien agent depending on the damage of the archer tower

When lowering the damage of the archer tower to 0.4, balance was achieved. Again, a clear correlation between a lower damage and a decrease in impact of the archer tower can be observed (Figure 23).

Figure 23: Impact of the archer tower in CTD while lowering the damage from 1.0 to 0.4

To verify that the archer tower was in fact the source of the imbalance in the human race, we again try to achieve balance by lowering the attributes of the fire and/or ice tower (Table 16).


Setting
A  Fire tower damage set to 0.4
B  Setting A and frozen effect reduced to 40%
C  Setting B and ice tower damage reduced to 0.5
D  Setting C and fire tower damage reduced to 0.2

Table 16: The different settings the experiment was repeated with. Each setting uses Table 9 as a foundation.

As on the 3x3 map, even substantially lowering the attributes of the other towers did not result in a balanced game. The win ratio changed slightly in favour of the alien player, but the human player still won the game around 7% more often.

        Setting A  Setting B  Setting C  Setting D
Human   47%        48%        46%        46%
Tie     11%        11%        11%        11%
Alien   42%        41%        43%        43%

Table 17: Win ratio human vs. alien agent given setting A, B, C or D

This leads to the conclusion that the archer tower was in fact the source of the imbalance, as suggested by its impact.


5.2.4 Discussion

In the first experiment, the method was tested on two different maps using the same tower attributes. On both maps a balanced game was achieved using the calculated impact, which accurately spotted the unbalanced towers. In hindsight, when comparing the two maps, it makes sense that the chain lightning tower is very strong on the small 3x3 map: the lightning bolt will almost certainly hit units with its maximum available damage (maximum amount of lightning jumps), since units are always close together, in comparison to the 4x4 map where the maximum jump length is more likely to be exceeded. On the 4x4 map the archer tower is the only tower capable of using tower positions 1, 2, 4 and 5 while still being able to hit units with its range of 3, explaining why this tower gives the human faction such an advantage there.

On both maps a correlation between the decrease of tower attributes and the decrease of impact could be seen when balancing the highest impact tower (HIT), while its impact even increased if the attributes of other towers were lowered. This makes sense, since the HIT becomes more and more important as the other towers get weaker.


5.3 Experiment 2

In the last experiment we showed that using the impact to find unbalanced actions works in the setups used above. However, all attributes were handpicked, and one could argue that those were favourable values. This second experiment was therefore designed to show that using the impact for balancing works reliably regardless of the tower attributes used initially.

5.3.1 Experimental setup

To test this, a system was set up which automatically balances a game of CTD. This happens in two steps: first, the tower attributes are initialised at random using the ranges described in Table 18. Second, the game so created is balanced by three different methods in turn, while recording the balancing steps needed by each of them.

Id  Tower            Attribute  Range      Balancing step
F   Fire             damage     0.0 - 1.0  *0.9
I   Ice              damage     0.0 - 1.0  *0.9
                     hindrance  0.0 - 1.0  *0.9
                     duration   1 - 9      -1
A   Archer           damage     0.0 - 1.0  *0.9
C   Chain lightning  damage     0.0 - 1.0  *0.9
                     jumps      1 - 5      -1
                     length     1 - 5      -1
P   Parasite         damage     0.0 - 1.0  *0.9
                     duration   1 - 9      -1
S   Shock            damage     0.0 - 1.0  *0.9
                     duration   1 - 9      -1

Table 18: Tower attributes

A balancing step is defined as the decrease of one attribute's value of a chosen tower using the arithmetic in Table 18, to slowly lower the strength of a faction. The attributes are selected in turn. For example, if the shock tower is selected three times, the damage attribute will be lowered first, followed by the duration attribute, followed by the damage attribute again. The experiment was repeated 10 times on the same 3 × 3 map used in experiment 1, where every player starts with 10 lives and the maximum game length is set to 100 steps. A game is considered balanced if the difference between the human and the alien win ratio is below 1%.
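A balancing step with round-robin attribute selection can be sketched as follows. The rule table mirrors Table 18; the class itself is an illustrative assumption, not the thesis code.

```python
import itertools

# How one balancing step lowers each attribute, following Table 18.
BALANCING_RULES = {
    "damage":    lambda v: v * 0.9,
    "hindrance": lambda v: v * 0.9,
    "duration":  lambda v: v - 1,
    "jumps":     lambda v: v - 1,
    "length":    lambda v: v - 1,
}

class TowerBalancer:
    """Applies balancing steps to one tower, visiting attributes in turn."""

    def __init__(self, attributes):
        self.attributes = attributes                       # e.g. {"damage": 1.0, "duration": 3}
        self._cycle = itertools.cycle(sorted(attributes))  # round-robin order

    def step(self):
        name = next(self._cycle)
        self.attributes[name] = BALANCING_RULES[name](self.attributes[name])
        return name
```

Selecting the shock tower three times then lowers damage, duration and damage again, matching the example above.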


The three methods used for the balancing are as follows:

1. Balance the tower with the highest impact of the winning faction.
2. Balance a random tower of the winning faction.
3. Balance a random tower of the winning faction, excluding the tower with the highest impact.

If the impact attribute does accurately predict the unbalanced tower, then the first method should need considerably fewer balancing steps than the one choosing randomly, while the method excluding the high impact action should need more.

5.3.2 Result

The results show that using the impact to balance a game, at an average of 8.1 balancing steps, is approximately 65% faster than choosing random towers (Table 19), outperforming it in 10 out of 10 runs. Avoiding the high impact tower in method 3 is 35% worse than picking randomly (excluding the runs in which it did not result in a balanced game) and over 2 times worse than using the impact to balance the game. Method 3 never outperforms method 1 but does outperform method 2 in 2 out of 10 runs. Method 3 was also the only method which did not result in a balanced game, in 3 out of 10 runs.

Method             1.  2.  3.  4.  5.  6.  7.  8.  9.  10.  Avg.
1. Using impact    3   5   18  5   2   8   11  14  1   14   8.1
2. Randomly        4   9   19  6   9   18  15  26  3   25   13.4
3. Without impact  14  41  23  24  *   9   13  *   3   *    18.1

Table 19: The number of times a tower's attributes had to be lowered in order to achieve a balanced game. (*) = balance could not be achieved.

5.3.3 Discussion

In this section we challenged the previous experiment, which gave a very elaborate view of how a high impact action can be spotted and exploited within a game, with the assumption that the tower values picked in the experimental setup were favourable. If this had been the case, choosing random actions should have outperformed our method at least once in the course of this experiment. This, however, was not the case. Balancing the action with the highest calculated impact consistently outperformed choosing randomly and was on average 65% faster in creating a balanced game, showing that our method works regardless of the initial parameters.


6 Conclusion

Creating a well balanced multiplayer game is a difficult and tedious task requiring lots of human player feedback. The difficulty, however, can be significantly reduced by understanding which actions cause a game to be unbalanced. At the beginning of this thesis we therefore asked the research question:

Can we provide a method which identifies the cause of an imbalance within a multiplayer game?

To answer this, we identified three main steps which had to be taken in order to design such a method.

1. Organise the game logs in an easily analysable format.

2. Find a mathematical model that attributes to actions a value for the strength or impact they have on a game.

3. Test the method using game logs of an asymmetrical multiplayer game.

Over the course of this thesis we described step by step how we first organise the game logs in an action history tree, which is then used to calculate the impact of each action. With our experiments we showed that, using the impact attribute, we were able to reliably find unbalanced actions and therefore balance an asymmetrical multiplayer game in different scenarios, regardless of the initial attributes. We showed that our method does not only work for manual balancing, as in experiment 1, but can also be used to automatically balance a game, as in experiment 2.

All in all, we conclude that we found a technique which identifies the cause of an imbalance in a multiplayer game, and we hope that the developed method finds its way into the game development community, where it can aid developers in creating better multiplayer game experiences.


7 Future work

Human players. Besides building the CTD game, a lot of time was spent developing agents which play the game. Even though a system was created in which agents could learn and play the game so that a sizeable amount of data could be generated, an important next step would be to test our method in a game where the gathered data comes from human players, especially to analyse how important it is to use data of equally skilled players when calculating the impact of actions, and how this affects the overall achieved balance.

Effect of synergies. Some games contain actions which are only strong in combination because they create certain synergies. CTD in the form used here does not contain actions with very strong synergies, but they could be included, e.g. if a unit is hit by a fire tower first, the ice tower does double damage. It would be interesting to see if and how the algorithm would have to be adjusted to find out whether those synergies are too strong.

More or no races. In this thesis we tested balancing a game containing two races using the impact. Many games, however, contain more than two races, or no specific races at all, like the card game Magic: The Gathering. Testing the applicability of calculating the impact in those games would certainly be interesting. Perhaps calculating the impact of a specific card in Magic: The Gathering would already give a good indicator of its strength.

Genetic algorithms. We have shown that, using the method developed in this thesis, it is possible to find actions within a game which are unbalanced. Using this knowledge, one could very precisely evolve fair multiplayer games, using the impact attribute as an indicator of which gene should be mutated next.

Acknowledgements

I would like to greatly thank Sander Bakkes for all his support during the thesis project, in terms of both technical knowledge and motivation; Timon Kanters and Luisa Zintgraf for valuable input while both designing the simulator and writing the thesis; and Lill Eva Johansen for always knowing a word which sounds better in that context.


References

[1] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine learning, 47(2-3):235–256, 2002.

[2] Phillipa Avery, Julian Togelius, Elvis Alistar, and Robert Pieter Van Leeuwen. Computational intelligence and tower defence games. In Evolutionary Computation (CEC), 2011 IEEE Congress on, pages 1084–1091. IEEE, 2011.

[3] Sander Bakkes, Chek Tien Tan, and Yusuf Pisan. Personalised gaming: a motivation and overview of literature. In Proceedings of the 8th Australasian Conference on Interactive Entertainment: Playing the System, page 4. ACM, 2012.

[4] Sander Bakkes, Shimon Whiteson, Guangliang Li, George Viorel Visniuc, Efstathios Charitos, Norbert Heijne, and Arjen Swellengrebel. Challenge balancing for personalised game spaces. In Games Media Entertainment (GEM), 2014 IEEE, pages 1–8. IEEE, 2014.

[5] Jeffrey J Beyerl. Achieving balance in the arpg genre of video games with asymmetrical choices.

[6] Haoyang Chen, Yasukuni Mori, and Ikuo Matsuba. Solving the balance problem of massively multiplayer online role-playing games using coevolutionary programming. Applied Soft Computing, 18:1–11, 2014.

[7] Thomas Debeauvais, Thomas Zimmermann, Nachiappan Nagappan, Kevin Carter, Ryan Cooper, Dan Greenawalt, and Tyson Solberg. Off with their assists: An empirical study of driving skill in forza motorsport 4. Proc. FDG 2014.

[8] Priyesh N Dixit and G Michael Youngblood. Optimal information placement in an interactive 3d environment. In Proceedings of the 2007 ACM SIGGRAPH symposium on Video games, pages 141–147. ACM, 2007.

[9] Priyesh N Dixit and G Michael Youngblood. Understanding information observation in interactive 3d environments. In Proceedings of the 2008 ACM SIGGRAPH symposium on Video games, pages 163–170. ACM, 2008.

[10] Shih-Wei Fang and Sai-Keung Wong. Game team balancing by using particle swarm optimization. Knowledge-Based Systems, 34:91–96, 2012.

[11] Ya’nan Hao, Suoju He, Junping Wang, Xiao Liu, Jiajian Yang, and Wan Huang. Dynamic difficulty adjustment of game ai by mcts for the game pac-man. In Natural Computation (ICNC), 2010 Sixth International Conference on, volume 8, pages 3918–3922. IEEE, 2010.


[12] Kenneth Hullett, Nachiappan Nagappan, Eric Schuh, and John Hopson. Empirical analysis of user data in game software development. In Empirical Software Engineering and Measurement (ESEM), 2012 ACM-IEEE International Symposium on, pages 89–98. IEEE, 2012.

[13] Jun H Kim, Daniel V Gunn, Eric Schuh, Bruce Phillips, Randy J Pagulayan, and Dennis Wixon. Tracking real-time user experience (true): a comprehensive instrumentation solution for complex systems. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, pages 443–452. ACM, 2008.

[14] Levente Kocsis and Csaba Szepesvári. Bandit based monte-carlo planning, 2006.

[15] Ryan Leigh, Justin Schonfeld, and Sushil J Louis. Using coevolution to understand and validate game balance in continuous games. In Proceedings of the 10th annual conference on Genetic and evolutionary computation, pages 1563–1570. ACM, 2008.

[16] Chris Lewis and Noah Wardrip-Fruin. Mining game statistics from web services: a world of warcraft armory case study. In Proceedings of the Fifth International Conference on the Foundations of Digital Games, pages 100– 107. ACM, 2010.

[17] Steven KC Lo, Huan-Chao Keh, and Chia-Ming Chang. A multi-agents coordination mechanism to improving real-time strategy on tower defense game. Journal of Applied Sciences, 13(5):683, 2013.

[18] Andrew Rollings and Dave Morris. Game architecture and design: a new edition. 2003.

[19] Paul Rummell et al. Adaptive ai to play tower defense game. In Computer Games (CGAMES), 2011 16th International Conference on, pages 38–40. IEEE, 2011.

[20] Ching-I Teng. Customization, immersion satisfaction, and online gamer loyalty. Computers in Human Behavior, 26(6):1547–1554, 2010.

[21] Xinrui Yu, Suoju He, Yuan Gao, Jiajian Yang, Lingdao Sha, Yidan Zhang, and Zhaobo Ai. Dynamic difficulty adjustment of game ai for video game dead-end. In Information Sciences and Interaction Sciences (ICIS), 2010 3rd International Conference on, pages 583–587. IEEE, 2010.
