
Evolutionary dynamics of the infinitely

repeated Minority Game

Jens Klooster

10059229

MSc in Economics

Specialization: Behavioural Economics and Game Theory Date: June 14, 2018

Supervisor: Prof. dr. C.M. van Veelen Second reader: Prof. dr. J.H. Sonnemans


Statement of Originality

This document is written by Student Jens Klooster who declares to take full responsibility for the contents of this document. I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it. The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Abstract

In this thesis we study the evolutionary dynamics of the infinitely repeated Minority Game. We start by introducing the game and equilibrium concepts in a formal manner. Next, we prove the existence of strategies that are Nash and/or NSS in the undiscounted case. We also present some results regarding the properties of strategies that are ESS in the undiscounted case. We conclude by studying the evolutionary dynamics of the infinitely repeated Minority Game with discounting. We introduce a conjecture and show that, if this conjecture is true, there are no strategies that are evolutionarily stable in the infinitely repeated Minority Game with discounting.

Keywords: Evolutionary Game Theory, Minority Game, Evolutionary Dynamics

Acknowledgements


Contents

1 Introduction

2 Related literature
   2.1 The Minority Game

3 Formal set up
   3.1 Notation and definitions
       3.1.1 Evolutionarily Stable Strategy
       3.1.2 Neutrally Stable Strategy
       3.1.3 Robustness Against Indirect Invasions
   3.2 Set up of the Minority Game
       3.2.1 The repeated game
       3.2.2 Overview of assumptions

4 Theoretical Analysis
   4.1 The stage game
       4.1.1 Nash Equilibria
   4.2 The infinitely repeated game
       4.2.1 Nash Equilibria
       4.2.2 Neutrally Stable Strategies
       4.2.3 Robustness Against Indirect Invasions
       4.2.4 Evolutionary stability
       4.2.5 Evolutionary stability with discounting

5 Conclusion


1 Introduction

In their book Minority games: interacting agents in financial markets, Challet, Zhang and Marsili (2005) present their findings on the Minority Game, which first appeared in 1997. The Minority Game is a seemingly simple game in which an odd number of agents compete to be in the minority. At the beginning of each round each agent independently chooses a side, L(eft) or R(ight). At the end of the round the players on the side that is in the minority win, and then the game starts again, for an arbitrary number of rounds. A property of the Minority Game is that actions are strategic substitutes: when more players play action L, there is an incentive to switch to action R.

As suggested by the title of the book, the Minority Game can be used to model interacting agents in financial markets. In financial markets buyers and sellers interact with each other to determine stock prices. It is often the case that when everyone is buying, causing prices to rise, agents would prefer to sell. In this case there is again an advantage for the agents that are in the minority. The dynamics of the game can therefore be used as a stylized model for the dynamics of traders in a financial market. A more thorough analysis of how to use the Minority Game to model a financial market is presented in Challet et al. (2001). There are more situations that can be modelled using Minority Games. Some examples include firms deciding whether to enter a certain market, commuters choosing a route to work, or even the description of a spin system (Challet and Marsili, 1999).

In previous research the Minority Game has usually been studied with the use of simulations; see for example Challet and Zhang (1997) and Cavagna (1999). In this thesis we do a theoretical analysis and study the game using equilibrium concepts from Evolutionary Game Theory such as the NSS and ESS. The main question that inspired our research is as follows: does the infinitely repeated (three-player) Minority Game have an Evolutionarily Stable Strategy? To try to answer this question, we start out by studying the undiscounted infinitely repeated Minority Game and introduce strategies that are Nash and/or NSS. Unfortunately, we were not able to show that there are (or are no) strategies that are ESS in the undiscounted case. However, we are able to prove some interesting propositions that give us insight into how an ESS should behave, if there is one. The research we did on the undiscounted case also inspired us to study the evolutionary dynamics of the infinitely repeated Minority Game with discounting. In this case we were able to introduce a conjecture and show that, if this conjecture is true, there are no strategies in the discounted infinitely repeated Minority Game that are ESS.

The remainder of this thesis is organized as follows. In the second chapter, we start with a literature review covering the most important papers and results related to the Minority Game. Subsequently, in chapter three, we go over some definitions from Evolutionary Game Theory that will be used in our theoretical analysis. Next, we give a formal mathematical introduction to the Minority Game. In chapter four, we prove our main results regarding the evolutionary dynamics of the game. We prove that there is an NSS in the infinitely repeated Minority Game without discounting. We also introduce a conjecture and show that, if this conjecture is true, there are no strategies that are ESS in the infinitely repeated Minority Game with discounting. In chapter five, we conclude by giving a brief summary of our findings and offering some ideas for future research.


2 Related literature

This chapter will briefly summarize the results of some papers related to our research.

2.1 The Minority Game

The Minority Game was first introduced by Challet and Zhang (1997) as an adaptation of the El Farol bar problem introduced by Arthur (1994). The Minority Game is a binary game in which N players, N = 2k + 1 with k ≥ 1, each equipped with a finite set of strategies S, have to choose to be at a side L or a side R (sometimes instead of 'side' we say 'room'). After everyone has chosen a side independently, the players who are in the minority win. In the simplest version of the game the winners obtain 1 point and the losers obtain 0 points. When the payoffs are announced the game starts again, and this is repeated for a given number of rounds.

Challet and Zhang used several simplifying assumptions for the analysis of the game. The first assumption was to limit the information given back to the players to only whether or not a player was in the minority that round, and not the actual number of players in the minority. In this way the system can be represented as a binary sequence: if L is the winning side it gets assigned a 1, and a 0 otherwise. The players then all share the same bit string of information on which they can base their next decision, given their strategies.

The second assumption concerns the memory of the players, which determines how many previous outcomes a player can remember. Challet and Zhang give an example (see below) where they set the memory, denoted by M ≥ 1, M ∈ ℕ, equal to three. The following is an example of a strategy given that M = 3.

signal   prediction
000      1
001      0
010      0
011      1
100      1
101      0
110      1
111      0


Note that when M = 3 the total number of strategies is 2^(2^3) = 256, and in general there are 2^(2^M) possible strategies.
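The lookup-table representation can be sketched in code. The dictionary below transcribes Challet and Zhang's example table, and the counts confirm that a memory of M = 3 gives 2^(2^3) = 256 possible strategy tables (the variable names are ours).

```python
# A memory-M strategy maps each M-bit signal (the last M winners) to a
# binary prediction, so there are 2^(2^M) distinct strategy tables.
from itertools import product

M = 3
signals = ["".join(bits) for bits in product("01", repeat=M)]

# The example strategy table shown above (signal -> prediction).
strategy = {"000": 1, "001": 0, "010": 0, "011": 1,
            "100": 1, "101": 0, "110": 1, "111": 0}

num_signals = 2 ** M                 # 8 signals for M = 3
num_strategies = 2 ** num_signals    # 2^(2^M) = 256 strategy tables

print(num_signals, num_strategies)   # 8 256
print(strategy["011"])               # predicted winning side: 1 (= L)
```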

With this set-up Challet and Zhang used simulations to study several properties of the system. The first property they studied was the size of the fluctuations, denoted by σ, of the attendance at a specific side, for different M. They show that when N = 1001, the fluctuations decrease when M increases. Another property that was studied is what happens in a mixed population. In a mixed population not all players are equipped with the same memory parameter M; instead each is given an M ∈ {1, 2, . . . , 10} randomly. They show that the success of an agent increases as M increases up to M = 6, after which the success stays constant for higher M.

After the introduction of the Minority Game several papers on the topic were published. A remarkable result on the study of the Minority Game was introduced by Cavagna (1999), who shows that the memory of agents is irrelevant. He shows this by running simulations using the same set-up as Challet and Zhang (1997), but at each time step the past history is simply invented (by invented we mean that a random sequence of M bits is drawn). With this set-up he shows that the behaviour of the fluctuations σ is the same. That is, with a random memory at each time step the fluctuations still decrease over time and the minimum value of σ is still attained when every agent is given a memory M of length 6.

Several adaptations of the Minority Game have been suggested. We will discuss one adaptation called the Evolutionary Minority Game (Lo et al., 2000). Although it is called the Evolutionary Minority Game (EMG), it has nothing to do with Evolutionary Game Theory. In the Minority Game introduced by Challet and Zhang, agents are stuck with a fixed strategy set from which they pick a strategy for the next round. According to Lo et al. (2000) a system like this cannot avoid this in-built frustration. Therefore, they introduce an adaptation of the game where agents are not necessarily stuck with the same strategies, but are allowed to let their strategies evolve over time, hence the name Evolutionary Minority Game.

Although most papers that have appeared on the Minority Game use simulations to study the behaviour of the system, the Minority Game has also been studied in an experimental setting. The main reason for doing so is that the strategies used in the simulation models are selected by the researchers, and it is not clear whether decision makers would actually use these strategies. Therefore, Linde et al. (2013) use a strategy-method experiment to elicit explicit strategies in a repeated 5-player Minority Game. Participants had to program strategies that were used in a tournament between all submitted strategies, where the five best-ranked strategies would obtain a monetary payoff. One finding is that the submitted strategies lead to aggregate outcomes comparable to those under the symmetric mixed-strategy Nash equilibrium. Another finding is that many of the strategies employ randomization, something that is excluded in most studies that use simulations.

To the best of our knowledge, the Minority Game has not previously been studied using an evolutionary theoretical approach. There are, however, two earlier master theses, written by Boulema (2012) and Gao (2012), that have also made an attempt at a theoretical analysis of the Minority Game. Boulema and Gao both study the infinitely repeated Minority Game, without discounting (Gao) and with discounting (Boulema), and offer some useful insights that will be used in this thesis. The specific details of these results will be given in chapter 4, the theoretical analysis part of this thesis. In the next chapter we continue with the formal set-up of the Minority Game.


3 Formal set up

The goal of this chapter will be twofold. First, we generalize some definitions of evolutionary game theory to N ∈ ℕ players instead of the usual two-player games. Second, we give a formal introduction to the N-player Minority Game.

3.1 Notation and definitions

In this section we will generalize concepts from Evolutionary Game Theory, such as the Evolutionarily Stable Strategy (ESS), to games with N players instead of the usual 2-player games. However, we will also repeat some definitions for the 2-player case.

For the notation we will mainly follow Weibull (1995). In the following sections ∆ will denote the mixed-strategy set of a game and u the mixed-strategy payoff function. Given a game with N players, where N ∈ ℕ, the payoff of strategy x ∈ ∆ when played against y_1, . . . , y_{N−1} ∈ ∆ is denoted by u(x, y_1, . . . , y_{N−1}). Note that the order of the y strategies does not matter.

3.1.1 Evolutionarily Stable Strategy

We start with the definition of an ESS in a two player game.

Definition 3.1 (Evolutionarily Stable Strategy (2 players)). A strategy x ∈ ∆ is an evolutionarily stable strategy if for every strategy y ≠ x there is an ε̄_y ∈ (0, 1) such that

u[x, εy + (1 − ε)x] > u[y, εy + (1 − ε)x] for all ε ∈ (0, ε̄_y).

A useful corollary of this definition is that there is an equivalent way of stating it that is easier to work with. In fact, when the definition of the ESS was introduced it was originally defined as in the following corollary (Maynard Smith, 1974).

Corollary 3.2. A strategy x is an ESS if and only if the following two conditions hold:

1. u(x, x) ≥ u(y, x) ∀y,
2. u(x, x) = u(y, x) =⇒ u(x, y) > u(y, y) ∀y ≠ x.


Definition 3.3 (Evolutionarily Stable Strategy (N players)). A strategy x ∈ ∆ is an evolutionarily stable strategy if for every strategy y ≠ x there is an ε̄_y ∈ (0, 1) such that

u[x, εy + (1 − ε)x, . . . , εy + (1 − ε)x] > u[y, εy + (1 − ε)x, . . . , εy + (1 − ε)x] for all ε ∈ (0, ε̄_y),

where the term εy + (1 − ε)x appears N − 1 times on each side.

It is also possible to give the equivalent definition of an ESS in an N-player game; however, in this study we will focus on the 3-player Minority Game. Therefore, we will only give the equivalent definition for 3 players. From the 3-player version, one should easily be able to derive the N-player version.

Corollary 3.4. A strategy x is an ESS in a 3-player game if and only if the following three conditions hold:

1. u(x, x, x) ≥ u(y, x, x), ∀y,
2. u(x, x, x) = u(y, x, x) =⇒ u(x, x, y) ≥ u(y, x, y), ∀y,
3. u(x, x, x) = u(y, x, x) and u(x, x, y) = u(y, x, y) =⇒ u(x, y, y) > u(y, y, y), ∀y ≠ x.
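The three conditions can be checked numerically for the 3-player stage game defined in section 3.2, identifying a strategy with its probability of choosing room L. The grid sketch below (the function name u and the grid are ours) checks the candidate x = (0.5, 0.5) against a grid of mutants; a grid check suggests, but of course does not prove, the conditions.

```python
# Grid check of Corollary 3.4 for the candidate x = (0.5, 0.5) in the
# 3-player stage game; strategies are probabilities of choosing room L.

def u(x, y, z):
    """Expected stage payoff of x against y, z: x wins iff it is alone."""
    return x * (1 - y) * (1 - z) + (1 - x) * y * z

x, tol = 0.5, 1e-12
for i in range(101):
    y = i / 100
    if y == x:
        continue
    assert u(x, x, x) >= u(y, x, x) - tol                # condition 1
    if abs(u(x, x, x) - u(y, x, x)) <= tol:              # premise of cond. 2
        assert u(x, x, y) >= u(y, x, y) - tol            # condition 2
        if abs(u(x, x, y) - u(y, x, y)) <= tol:          # premise of cond. 3
            assert u(x, y, y) > u(y, y, y)               # condition 3 (strict)
print("x = (0.5, 0.5) passes the grid check")
```

On this grid every mutant y ties condition 1 (u(y, x, x) = 0.25 for all y), so the corollary's later conditions do the work.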

Given a game we will denote its set of Evolutionarily Stable Strategies as ∆^ESS. It is possible for a game to have no ESS, in which case ∆^ESS = ∅. We continue with the definition of a Neutrally Stable Strategy for N ∈ ℕ players.

3.1.2 Neutrally Stable Strategy

Next, we go over the definition of a Neutrally Stable Strategy (NSS). With the definitions of an ESS in mind, this will be straightforward. We continue in a similar way by starting with the definition of an NSS in a 2-player game.

Definition 3.5 (Neutrally Stable Strategy (2 players)). A strategy x ∈ ∆ is a Neutrally Stable Strategy if for every strategy y ≠ x there is an ε̄_y ∈ (0, 1) such that

u[x, εy + (1 − ε)x] ≥ u[y, εy + (1 − ε)x] for all ε ∈ (0, ε̄_y).

Again, we can show that the following corollary could be used as an equivalent definition of the NSS.

Corollary 3.6. A strategy x is an NSS if and only if the following two conditions hold:

1. u(x, x) ≥ u(y, x) ∀y,
2. u(x, x) = u(y, x) =⇒ u(x, y) ≥ u(y, y) ∀y.

In an N-player game, N ∈ ℕ, we have the following.


Definition 3.7 (Neutrally Stable Strategy (N players)). A strategy x ∈ ∆ is a Neutrally Stable Strategy if for every strategy y ≠ x there is an ε̄_y ∈ (0, 1) such that

u[x, εy + (1 − ε)x, . . . , εy + (1 − ε)x] ≥ u[y, εy + (1 − ε)x, . . . , εy + (1 − ε)x] for all ε ∈ (0, ε̄_y),

where the term εy + (1 − ε)x appears N − 1 times on each side.

We will also give the equivalent definition in the case where N = 3.

Corollary 3.8. A strategy x is an NSS in a 3-player game if and only if the following three conditions hold:

1. u(x, x, x) ≥ u(y, x, x), ∀y,
2. u(x, x, x) = u(y, x, x) =⇒ u(x, x, y) ≥ u(y, x, y), ∀y,
3. u(x, x, x) = u(y, x, x) and u(x, x, y) = u(y, x, y) =⇒ u(x, y, y) ≥ u(y, y, y), ∀y.

Given a game we will denote its (possibly empty) set of Neutrally Stable Strategies as ∆^NSS. Since any ESS is also an NSS we have ∆^ESS ⊂ ∆^NSS.

3.1.3 Robustness Against Indirect Invasions

In this section we will go over the definition of Robustness Against Indirect Invasions (RAII) as introduced by van Veelen (2012). We start with the introduction of three sets that will be used in the definition. Let x ∈ ∆ be a strategy. We split the space of all strategies into three disjoint sets that together form the full strategy space, namely the (evolutionarily) worse (1), equal (2) and better (3) performers. We have

1. SW(x) = {y | u(y, x) < u(x, x) or (u(y, x) = u(x, x) and u(y, y) < u(x, y))}

2. SE(x) = {y | u(y, x) = u(x, x) and u(y, y) = u(x, y)}

3. SB(x) = {y | u(y, x) > u(x, x) or (u(y, x) = u(x, x) and u(y, y) > u(x, y))}

With the use of these sets we are able to define the concept.

Definition 3.9. A strategy x ∈ ∆ is Robust Against Indirect Invasions (RAII) if the following two conditions hold:

1. SB(x) = ∅,
2. there is no sequence y_1, . . . , y_m, m ≥ 2, such that y_1 ∈ SE(x), y_i ∈ SE(y_{i−1}) for 2 ≤ i ≤ m − 1, and y_m ∈ SB(y_{m−1}).

Note that we have defined this concept for two-player games, but it can easily be generalized to N-player games. If a strategy x is an ESS, then it is RAII as well, so that ∆^ESS ⊂ ∆^RAII. An immediate consequence is that if we prove that a game has no strategy that is RAII, then it also does not contain an ESS. We have ∆^ESS ⊂ ∆^RAII ⊂ ∆^NSS ⊂ ∆^NE.
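The three-way split into SW, SE and SB can be illustrated with a small 2-player example. The payoff matrix below is a hypothetical coordination game, chosen only so that each class actually occurs; the matrix, function names and tolerance are ours.

```python
# Classify a mutant y against a resident x into SW / SE / SB for a
# symmetric 2-player game. Strategies are probabilities of the first action.

A = [[1.0, 0.0],
     [0.0, 1.0]]   # hypothetical coordination game: u(pure i, pure j) = A[i][j]

def u(x, y):
    """Mixed payoff of x against y."""
    px, py = (x, 1 - x), (y, 1 - y)
    return sum(px[i] * py[j] * A[i][j] for i in range(2) for j in range(2))

def classify(y, x, tol=1e-12):
    """Return 'W', 'E' or 'B': is y a worse, equal or better performer?"""
    d1 = u(y, x) - u(x, x)
    if d1 < -tol:
        return "W"
    if d1 > tol:
        return "B"
    d2 = u(y, y) - u(x, y)
    return "W" if d2 < -tol else ("B" if d2 > tol else "E")

print(classify(0.0, 1.0))   # 'W': earns 0 against the resident
print(classify(1.0, 1.0))   # 'E': identical play
print(classify(1.0, 0.5))   # 'B': ties against the mixed resident, beats itself
```

The last line shows the pattern RAII is designed to rule out: a mutant that is neutral against the resident but strictly better against itself.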


3.2 Set up of the Minority Game

The Minority Game, which will be denoted by Γ_N, is an N-player game, where N = 2k + 1 with k ≥ 1, characterized by a set of players I = {1, . . . , N}, an action space A = {L, R}, equal for all players, and a payoff function π : A^N → {0, 1}. We can write out the payoff explicitly as follows. Let a_i denote the action played by player i ∈ I, and let L, R ∈ ℕ denote the number of players that chose action L and R respectively. Then the payoff of player 1 is given by the following function:

π(a_1, . . . , a_N) =
    1 if a_1 = L and L < N/2,
    1 if a_1 = R and R < N/2,
    0 otherwise.

Note that each player views itself as the first player.

It is also possible for a player to use a mixed strategy, in which case we compute the expected value of the strategy. In the Minority Game each player can either choose L or R, or choose L with some probability. Therefore each strategy can be represented as a vector x ∈ ∆, where ∆ = {x ∈ ℝ²₊ | x_1 + x_2 = 1}. Now given N ∈ ℕ of these vectors, say x, y_1, . . . , y_{N−1} ∈ ∆, we can compute the probability of each outcome a ∈ A^N, which we denote by P(a). The mixed payoff function u : ∆ × · · · × ∆ (N times) → [0, 1] is then defined as follows. The expected payoff of strategy x is given by:

u(x, y_1, . . . , y_{N−1}) = Σ_{a ∈ A^N} P(a) · π(a).
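The two formulas above translate directly into code by enumerating all 2^N outcomes; a self-contained sketch with our own function names.

```python
# The stage payoff pi and its mixed extension u for the N-player
# Minority Game. A strategy is its probability of playing 'L'.
from itertools import product

def stage_payoff(actions):
    """pi: player 1's payoff; 1 iff player 1's room holds a strict minority."""
    n = len(actions)
    return 1 if actions.count(actions[0]) < n / 2 else 0

def mixed_payoff(x, *opponents):
    """u(x, y_1, ..., y_{N-1}): expectation of pi over all outcomes in A^N."""
    strategies = (x,) + opponents
    total = 0.0
    for outcome in product("LR", repeat=len(strategies)):
        p = 1.0
        for s, a in zip(strategies, outcome):
            p *= s if a == "L" else 1 - s    # P(a), independent choices
        total += p * stage_payoff(outcome)
    return total

print(stage_payoff(("L", "R", "R")))    # 1: player 1 is the minority
print(mixed_payoff(0.5, 0.5, 0.5))      # 0.25, the symmetric value
```

The same function works for any odd N, e.g. `mixed_payoff(0.5, 0.5, 0.5, 0.5, 0.5)` for the 5-player game.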

3.2.1 The repeated game

Using a discount factor δ ∈ (0, 1) the one-shot game becomes a repeated one, which we will denote by Γ_N(δ). A history at time t is a list of outcomes that took place before time t. Let a_{i,t} denote the action played by player i ∈ I at time t ∈ ℕ; then the histories are defined as follows:

h_1 = (),
h_t = ( (a_{1,1}, {a_{2,1}, . . . , a_{N,1}}), . . . , (a_{1,t−1}, {a_{2,t−1}, . . . , a_{N,t−1}}) ), t = 2, 3, . . . ,

where the empty brackets denote that there is no history. Note that each player views itself as the first player in all histories. An assumption we have made is that players do not only know whether they were in the minority or majority in a particular round, but are also capable of observing how many other players are in their room. However, players are not able to distinguish different players, which is why we introduced a set within a history list. For a set the order does not matter: for example, {L, R} and {R, L} are the same set, and therefore (L, {L, R}) and (L, {R, L}) are the same list. A consequence of this property is that it reduces the number of possible histories, since some are the same. The set of possible histories at time t is:

H_1 = {h_1},   H_t = ∏_{i=1}^{t−1} ( A × {A^{N−1}} ),   t = 2, 3, . . . ,

and the set of all possible histories is:

H = ⋃_{t=1}^{∞} H_t.

A strategy S in the repeated game is a function from the set of all possible histories H to the set of mixed actions, that is S : H → ∆. Given a history h_t, the strategy function S outputs a probability vector that will be used in the following round. This probability vector then yields an action a_{i,t} that will be included in the history list of h_{t+1}. We will denote the space of all strategies in the repeated game by S.

Now that we have defined strategies, we are able to define the payoff of a strategy in the repeated game. In this study we will mainly focus on the undiscounted repeated game where δ = 1. In this special case there are several possibilities for defining the payoff of a strategy. We will use the most common one, the limit of means. Given three strategies S_1, S_2, S_3 ∈ S we denote the payoff of strategy S_1 as follows:

U(S_1, S_2, S_3) = lim_{T→∞} (1/T) Σ_{n=1}^{T} u( S_1(h_n^{(S_1,{S_2,S_3})}), S_2(h_n^{(S_2,{S_1,S_3})}), S_3(h_n^{(S_3,{S_1,S_2})}) ),

where h_n^{(S_1,{S_2,S_3})} denotes the history up until time n − 1 in which S_1 views itself as the first player and S_2, S_3 as the second and third players. For the second and third player the order does not matter, which is why we use a set. In Evolutionary Game Theory we are often interested in how strategies perform against themselves. Notice that if we were to compute U(S, S, S) or U(S, S, S′) the ordering of the histories would be ambiguous, since we would end up with histories of the form h_n^{(S,{S,S})} or h_n^{(S,{S,S′})}. Therefore, we assume that when we compute U(S, S, S) or U(S, S, S′), we place an ordering on the three strategies in advance. We label each S with a separate number i ∈ {1, 2, 3} and then use the previous notation to compute the payoff of strategy S_1 against S_2 and S_3; then U(S, S, S) = U(S_1, S_2, S_3) by construction and this last expression is properly defined.
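The limit-of-means payoff can be illustrated with a Monte Carlo sketch for the strategy that mixes evenly after every history: the running average of stage payoffs should settle near the symmetric value 0.25. The horizon T and the seed are our choices.

```python
# Monte Carlo sketch of the limit-of-means payoff U(S, S, S) for the
# strategy that plays (0.5, 0.5) after every history.
import random

def play_round(rng):
    """One stage: three players mix 50/50; return player 1's payoff."""
    a1, a2, a3 = (rng.choice("LR") for _ in range(3))
    # Player 1 is in the minority iff it differs from both opponents.
    return 1 if (a1 != a2 and a1 != a3) else 0

rng = random.Random(0)
T = 20000
average = sum(play_round(rng) for _ in range(T)) / T
print(round(average, 3))   # close to 0.25
```

Since the stage payoffs here are i.i.d., the time average converges to 0.25 by the law of large numbers, matching the value used in chapter 4.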

Although the focus of this thesis is on the infinitely repeated undiscounted Minority Game, we will also present findings on the infinitely repeated Minority Game with discounting. In that case the payoff is as follows: for δ ∈ [0, 1) the discounted, normalized payoff of strategy S_1 ∈ S given S_2, S_3 ∈ S is

U(S_1, S_2, S_3) = (1 − δ) Σ_{t=1}^{∞} δ^{t−1} u( S_1(h_t^{(S_1,{S_2,S_3})}), S_2(h_t^{(S_2,{S_1,S_3})}), S_3(h_t^{(S_3,{S_1,S_2})}) ).


In section 3.1 we did not define anything for repeated games, but the extension follows naturally: when we replace ∆ by S and u by U in section 3.1, everything is defined for the repeated game. Therefore, we will not repeat all definitions for the repeated game.

3.2.2 Overview of assumptions

In this subsection we give a brief overview of the assumptions we have made about the players. We have made the following four assumptions.

1. Every round a player knows whether or not it was in the minority.
2. Every round a player knows whether or not a minority occurred.
3. Players have an infinite memory.
4. Players cannot recognize other players.

These assumptions give the players some useful abilities that we can translate into functions in the analysis of the game. The first thing players can do is check whether there was a minority in the previous round. This is due to assumptions 1 and 3. In the theoretical analysis we will call this the Win function, denoted by W, which is defined as follows.

Definition 3.10 (Win function). Let W : H_t → {0, 1} be such that W(h_1) := 0 and for t > 1 we set

W(h_t) =
    0 if (a_{1,t−1}, {a_{2,t−1}, a_{3,t−1}}) = (L, {L, L}),
    0 if (a_{1,t−1}, {a_{2,t−1}, a_{3,t−1}}) = (R, {R, R}),
    1 otherwise.

We will also use a variant of the Win function that checks if there was a minority in room R or L in the previous round. We have

Definition 3.11 (Win-R function). Let W_R : H_t → {0, 1} be such that W_R(h_1) := 0 and for t > 1 we set

W_R(h_t) =
    1 if (a_{1,t−1}, {a_{2,t−1}, a_{3,t−1}}) = (R, {L, L}),
    1 if (a_{1,t−1}, {a_{2,t−1}, a_{3,t−1}}) = (L, {L, R}),
    0 otherwise.

The function W_L is defined in a similar fashion.

Players can also count how many times they have been in the minority. Given a history h_t, the Count function, denoted by C, returns the number of minorities the player has been in. We can create such a function due to assumptions 2 and 3. Note that a player only knows how many times he or she has been in the minority and can only estimate how many times another player has been in the minority (this is due to assumption 4).


Definition 3.12 (Count function). Let C : H_t → ℕ be such that C(h_1) = 0 and for t > 1 we set

C(h_t) = Σ_{i=1}^{t−1} π(a_{1,i}, a_{2,i}, a_{3,i}).

The last function we need is one that checks whether a majority of all three players has occurred. The Majority function checks whether all three players were in the same room in the previous round.

Definition 3.13 (Majority function). Let M : H_t → {0, 1} be such that M(h_1) := 0 and for t > 1 we set

M(h_t) =
    1 if (a_{1,t−1}, {a_{2,t−1}, a_{3,t−1}}) = (L, {L, L}),
    1 if (a_{1,t−1}, {a_{2,t−1}, a_{3,t−1}}) = (R, {R, R}),
    0 otherwise.

We will also need a majority function that checks whether the three-player majority was in room L or in room R. We start by defining the Majority-L function:

Definition 3.14 (Majority-L function). Let M_L : H_t → {0, 1} be such that M_L(h_1) := 0 and for t > 1 we set

M_L(h_t) =
    1 if (a_{1,t−1}, {a_{2,t−1}, a_{3,t−1}}) = (L, {L, L}),
    0 otherwise.
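The history functions of Definitions 3.10–3.14 are straightforward to implement. In the sketch below a history is a list of rounds (a1, (b, c)), where (b, c) is the unordered pair of opponent actions stored as a sorted tuple; the representation and function names are ours.

```python
# Win, Win-R, Count and Majority functions for the 3-player game,
# operating on histories stored as lists of (own_action, (b, c)).

def _last(h):
    a1, pair = h[-1]
    return a1, tuple(sorted(pair))

def W(h):
    """1 iff a minority occurred last round (i.e. not unanimous)."""
    if not h:
        return 0
    return 0 if _last(h) in (("L", ("L", "L")), ("R", ("R", "R"))) else 1

def WR(h):
    """1 iff the minority last round was in room R."""
    if not h:
        return 0
    return 1 if _last(h) in (("R", ("L", "L")), ("L", ("L", "R"))) else 0

def M(h):
    """1 iff all three players shared one room last round."""
    return 1 if h and W(h) == 0 else 0

def ML(h):
    """1 iff all three players were in room L last round."""
    return 1 if h and _last(h) == ("L", ("L", "L")) else 0

def C(h):
    """Number of rounds in which this player was in the minority."""
    total = 0
    for a1, pair in h:
        other = "R" if a1 == "L" else "L"
        if tuple(sorted(pair)) == (other, other):
            total += 1
    return total

history = [("L", ("R", "R")), ("L", ("L", "L"))]
print(W(history), M(history), ML(history), C(history))   # 0 1 1 1
```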


4 Theoretical Analysis

In this chapter we will go over the results of our theoretical analysis. We start with an analysis of the one-shot game, where we analyse Nash Equilibria. Subsequently, we analyse the undiscounted repeated Minority Game.

4.1 The stage game

We start with an analysis of the stage game. We will focus on the case where N = 3 and in some cases we will also give a more general result for N = 2k + 1 with k > 1. Since the game is one shot, we do not have to worry about histories, which makes the analysis easier. We start with some basic results about Nash Equilibria.

4.1.1 Nash Equilibria

Given the 3-player Minority Game a Nash Equilibrium occurs when two players pick room L and the other player picks room R. In this case the two players in room L will obtain a payoff of 0, but have no incentive to switch to the other room. The player in room R will obtain a payoff of 1 and has no incentive to switch since this will reduce his payoff. Obviously, we could also switch the labels of the rooms and let two players pick room R and one player pick room L.

We can generalize this result to the Minority Game with N = 2k + 1, k > 1 players as follows. Let k players pick room R and let k + 1 players pick room L. Then the k players obtain a payoff of 1 and have no incentive to switch. The other k + 1 players obtain a payoff of 0, but also have no incentive to switch: if one of them switches to the other room, he will be in the majority again and still obtain a payoff of 0.
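The no-profitable-deviation argument above can be checked by brute force for small k; the helper names below are ours.

```python
# Check the pure Nash Equilibrium described above: with N = 2k + 1
# players, k in room R and k + 1 in room L, no unilateral switch pays.

def payoff(i, rooms):
    """1 iff player i's room holds a strict minority of the players."""
    return 1 if rooms.count(rooms[i]) < len(rooms) / 2 else 0

for k in (1, 2, 3):
    rooms = ["L"] * (k + 1) + ["R"] * k
    for i in range(len(rooms)):
        deviated = rooms.copy()
        deviated[i] = "R" if rooms[i] == "L" else "L"
        # Deviating never pays: minority players would drop from 1 to 0,
        # majority players stay at 0.
        assert payoff(i, deviated) <= payoff(i, rooms)
print("no profitable unilateral deviation for k = 1, 2, 3")
```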

In the 3-player Minority Game another interesting Nash Equilibrium occurs where one player picks L, another picks R, and the last player picks L with probability p_l ∈ [0, 1]. Note that the two players who always choose L or R have no incentive to switch, because they would then be in the majority. The last player will always be in the majority, so it does not matter which room he picks; he can therefore choose any mix. A corollary of this result is that there are an infinite number of Nash Equilibria in the one-shot Minority Game.

Corollary 4.1. There are an infinite number of Nash Equilibria in Γ_N(0).

Proof. We know that N = 2k + 1. Let k players pick room L and let another k players pick room R. Now whatever room the last player picks, he will always be in the majority; therefore he can pick any mix between the two rooms. Let the last player pick room L with probability p_l ∈ [0, 1]. This is a Nash Equilibrium for any fixed p_l, of which there are (uncountably) many.

Symmetric Nash Equilibrium

Evolutionary Game Theory is primarily interested in symmetric equilibria, since these are used in the definition of an ESS. Therefore, we will now show some results concerning symmetric Nash Equilibria. The following proposition captures the most important case in the one-shot game.

Proposition 4.2. There is a unique symmetric Nash Equilibrium in Γ_3(0): each player picks room L with probability p_l = 0.5.

Proof. We first show that this is a Nash Equilibrium. Let x ∈ ∆ be such that x = (0.5, 0.5)ᵀ; then u(x, x, x) = 0.5³ + 0.5³ = 0.25. Let y ∈ ∆ with y ≠ x; then we can write y = (p_y, 1 − p_y)ᵀ, where p_y ∈ [0, 1] and p_y ≠ 0.5. We have u(y, x, x) = 0.5²p_y + 0.5²(1 − p_y) = 0.5² = 0.25. This shows that x is a Nash Equilibrium.

Next we show that x is the unique symmetric Nash Equilibrium. Assume that z = (p_z, 1 − p_z)ᵀ ∈ ∆ is Nash, with p_z ∈ [0, 1] and p_z ≠ 0.5. Then we must have u(z, z, z) ≥ u(x, z, z). However, u(z, z, z) = p_z(1 − p_z)² + (1 − p_z)p_z² = p_z(1 − p_z) and u(x, z, z) = 0.5p_z² + 0.5(1 − p_z)² = p_z² − p_z + 0.5. Now note that

u(z, z, z) ≥ u(x, z, z) =⇒ p_z(1 − p_z) ≥ p_z² − p_z + 0.5 =⇒ −2(p_z − 0.5)² ≥ 0 =⇒ p_z = 0.5,

which contradicts p_z ≠ 0.5. Hence x is the unique symmetric Nash Equilibrium.
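The implication chain in the uniqueness argument amounts to the identity u(z, z, z) − u(x, z, z) = −2(p − 0.5)², which is strictly negative except at p = 0.5. The grid sketch below (function name and grid are ours) verifies this numerically.

```python
# Numeric companion to the uniqueness argument of Proposition 4.2:
# gap(p) = u(z,z,z) - u(x,z,z) = p(1-p) - (p^2 - p + 0.5) = -2(p - 0.5)^2.

def gap(p):
    u_zzz = p * (1 - p) ** 2 + (1 - p) * p ** 2     # = p(1 - p)
    u_xzz = 0.5 * p ** 2 + 0.5 * (1 - p) ** 2       # = p^2 - p + 0.5
    return u_zzz - u_xzz

for i in range(101):
    p = i / 100
    assert abs(gap(p) - (-2 * (p - 0.5) ** 2)) < 1e-12
    if p != 0.5:
        assert gap(p) < 0
print("u(z,z,z) >= u(x,z,z) holds only at p = 0.5")
```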

4.2 The infinitely repeated game

In this section we will present the main result of this study, which is an analysis of the undiscounted repeated three-player Minority Game. We check whether the game has any strategies that are Nash, NSS, RAII or ESS. Throughout this section, S denotes the space of all strategies in the repeated game and we use the limit-of-means payoff as defined in section 3.2.1.

Remark. In the following section(s) we will often use the abbreviation 'f.h.f.', which means 'for histories from'.


4.2.1 Nash Equilibria

We start with an analysis of the symmetric Nash Equilibria of the infinitely repeated Minority Game. Recall that in the infinitely repeated Minority Game a strategy S ∈ S is a Nash Equilibrium if U(S, S, S) ≥ U(S′, S, S) for all S′ ∈ S.

The following strategy is a Nash Equilibrium in the infinitely repeated Minority Game.

Proposition 4.3. Let S ∈ S be a strategy such that for all t ∈ ℕ we have

S(h_t) = (0.5, 0.5)ᵀ   ∀h_t ∈ H_t;

then S is a Nash Equilibrium in the infinitely repeated Minority Game.

Proof. Let S′ ∈ S with S′ ≠ S. Assume the mutant plays S′(h_n) = (p, 1 − p)ᵀ in some round n ∈ ℕ, for some h_n ∈ H_n and p ∈ [0, 1]. Then the expected payoff of that round is p · 0.5² + (1 − p) · 0.5² = 0.25. Since the round and the probabilities were arbitrary, this holds for every round, so that U(S′, S, S) = 0.25. In particular it also holds for p = 0.5, so that U(S, S, S) = 0.25. This implies that S is a Nash Equilibrium.

Since we are using a limit of means to determine the payoff, it is fairly easy to come up with an infinite number of other strategies that are also Nash Equilibria. We will give two examples of how this can be done.

Let S ∈ S be a strategy that first plays a fixed room for a finite number of rounds and then starts mixing evenly; then this strategy is a Nash Equilibrium due to the limit-of-means payoff.

Proposition 4.4. The following strategy is a Nash Equilibrium. Let n ∈ ℕ be fixed.

S(h_t) = (1, 0)ᵀ   f.h.f. H = {h_t | t ≤ n},
S(h_t) = (0.5, 0.5)ᵀ   f.h.f. H = {h_t | t > n}.

Proof. Since we are using a limit of means to determine the payoff, the first n rounds have no effect on the payoff. Therefore, we only have to analyse what happens after round n. After round n we are in the same situation as in proposition 4.3, and therefore this strategy is Nash.

The following proposition presents another strategy that is a Nash Equilibrium. The intuition is as follows. We construct a strategy that in the limit tends to the same strategy as in proposition 4.3. We do so by constructing a strategy that mixes evenly, except in rounds whose number can be written as t = n², n ∈ ℕ. As the number of rounds increases, the rounds where we do not mix evenly appear less and less frequently, and as t → ∞ these rounds have no effect on the payoff. More formally, we have the following.


Proposition 4.5. The following strategy is a Nash Equilibrium. Let k ∈ N be fixed.

S(h_t) = (1, 0)      f.h.f. H = {h_t | t = n² + k, n ∈ N},
S(h_t) = (0.5, 0.5)  f.h.f. H = {h_t | t ≠ n² + k, n ∈ N}.

Proof. We note that the number of rounds between two consecutive rounds in which we do not mix evenly equals (n + 1)² + k − (n² + k) = 2n + 1, which depends on n. As n → ∞, the number of rounds between two rounds where S plays a fixed room also tends to infinity. Therefore, we are in the same situation as in proposition 4.3, since the rounds in which we do not mix evenly have no effect on the payoff. Note that since k was arbitrary, we have written down infinitely many strategies that are Nash.
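The density argument can be checked numerically (a sketch of ours, not from the thesis): the fraction of rounds t ≤ T of the form t = n² + k is roughly √T / T and vanishes as T grows.

```python
import math

def square_round_fraction(T, k=0):
    """Fraction of rounds 1..T of the form t = n^2 + k, n a positive integer."""
    count = sum(1 for t in range(1, T + 1)
                if t - k >= 1 and math.isqrt(t - k) ** 2 == t - k)
    return count / T

print(square_round_fraction(100))        # 0.1
print(square_round_fraction(1_000_000))  # 0.001
```

Since the fraction of fixed-room rounds tends to 0, they contribute nothing to the limit-of-means payoff.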

In this section we have given some examples of strategies that are Nash in the infinitely repeated undiscounted Minority Game and we have shown that there are infinitely many of them. Note that the absence of discounting is important: the two proposed examples are not Nash in the Minority Game with discounting, since a mutant could always win in the first round and thereby obtain a better payoff. The next question is whether the proposed strategies are also Neutrally Stable. This question is answered in the next section.

4.2.2 Neutrally Stable Strategies

In this section we analyse whether there are strategies in the undiscounted infinitely repeated Minority Game that are Neutrally Stable. Recall that a strategy is Neutrally Stable if the conditions introduced in section 3.1.2 hold. The first candidates that could also be Neutrally Stable are the Nash Equilibria introduced in the previous section. We are able to show that these strategies are not Neutrally Stable.

Proposition 4.6. The Nash Equilibrium introduced in proposition 4.3 is not a Neutrally Stable Strategy.

Proof. The following mutant, first introduced by Boulema (2012), is able to invade. The mutant strategy, denoted by Y, makes use of the win function introduced in definition 3.10. The Nash Equilibrium introduced in proposition 4.3 is denoted by S. Let τ denote the round in which the first minority occurs.

Y(h_t) = (0.5, 0.5)  f.h.f. H = {h_t | W(h_i) = 0 ∀i ≤ t},
Y(h_t) = a_{τ,1}     f.h.f. H = {h_t | ∃i ≤ t s.t. W(h_i) > 0}.

Since S is a Nash Equilibrium we have that U(S, S, S) ≥ U(Y, S, S); in this case we have even seen that U(S, S, S) = U(Y, S, S). Now in an SSY-group the S players will mix evenly. The Y player will mix evenly until a minority occurs and will then stay in that room. Since the S players keep mixing evenly, this does not affect the probability of being in a minority for the Y player or the S players, so that U(S, S, Y) = 0.25. However, in a YYS-group the probability that any given player is in the minority first is 1/3. If a Y player is in the minority first, both Y players stop mixing and stay in different rooms. The S player keeps mixing evenly forever after and is always in the majority from then on. In this case a Y player obtains a payoff of 1/2. The probability that one of the Y players is in the minority first is 2/3, so that U(Y, S, Y) ≥ 2/3 · 1/2 = 1/3. We conclude that U(S, S, S) = U(S, S, Y) and U(S, S, Y) < U(Y, S, Y), so that S is not a Neutrally Stable Strategy.
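The payoff claims in this proof can be checked by simulation. The sketch below is our own (all names are hypothetical): it approximates the limit-of-means payoff of one Y player in a YYS-group, which should come out near 2/3 · 1/2 = 1/3.

```python
import random

def yys_payoff(games=1000, rounds=500, seed=7):
    """Approximate limit-of-means payoff of Y-player 0 in a YYS-group.

    Players 0 and 1 play the mutant Y: mix evenly until the first
    minority round, then sit in fixed rooms.  Player 2 plays the
    evenly mixing Nash strategy S forever.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(games):
        locked = None          # fixed rooms of (player 0, player 1), once set
        wins = 0
        for _ in range(rounds):
            a = [locked[0] if locked else rng.choice("LR"),
                 locked[1] if locked else rng.choice("LR"),
                 rng.choice("LR")]                 # S always mixes evenly
            counts = {"L": a.count("L"), "R": a.count("R")}
            minority = [i for i in range(3) if counts[a[i]] == 1]
            if minority and minority[0] == 0:
                wins += 1
            if minority and locked is None:
                # First minority round: the Y players stop mixing.
                w = a[minority[0]]
                other = "L" if w == "R" else "R"
                if minority[0] == 0:
                    locked = (w, other)            # Y players take different rooms
                elif minority[0] == 1:
                    locked = (other, w)
                else:
                    locked = (other, other)        # S was first: both Y stay put
        total += wins / rounds
    return total / games

payoff = yys_payoff()   # close to 1/3
```

With probability 2/3 a Y player is in the minority first and each Y player then wins half of the rounds; with probability 1/3 the S player is first and the Y players win nothing, matching the 1/3 bound in the proof.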

A corollary of this result is that the two other strategies introduced in propositions 4.4 and 4.5 are also not Neutrally Stable. A natural question that arises is whether it is possible to construct a strategy that is an NSS. The answer turns out to be yes, but the strategies do get more complex. The following proposition presents a result regarding the existence of an NSS.

Proposition 4.7. There is a strategy in the infinitely repeated Minority Game without discounting that is Neutrally Stable.

Proof. First we will give a strategy and then show that it is an NSS. We split this strategy up into several parts to keep a clear overview.

Part 1

This strategy starts out by differentiating players using minority rounds. Players will start mixing evenly until a minority occurs. The player that is in the minority first, let’s call him player 1, will then stay in this room until a new minority occurs in a different room. The other players, let’s call them player 2 and 3, will keep mixing evenly until this new minority occurs. More formally,

S1(h_t) = (0.5, 0.5)  f.h.f. H = {h_t | Σ_{i=1}^t W(h_i) = 0},
S1(h_t) = a_{1,t−1}   f.h.f. H = {h_t | Σ_{i=1}^t W(h_i) − C(h_t) = 0 and C(h_t) ≥ 1},
S1(h_t) = (0.5, 0.5)  f.h.f. H = {h_t | Σ_{i=1}^t W(h_i) > 0, C(h_t) = 0 and min(Σ_{j=1}^t W_R(h_j), Σ_{j=1}^t W_L(h_j)) = 0}.

Now the strategy is able to distinguish players by using minority rounds: there is one player that was in the minority first; there is one player that was at some point in the minority, but in a different room than the first player; and there is one player that has seen a minority occur in both rooms but was never in the minority itself. Next, we introduce the following punishment rule for part 1:

S1p(h_t) = (0.5, 0.5)  f.h.f. H = {h_t | ∃ i, t′ such that i ≤ t′ ≤ t, Σ_{j=1}^i W(h_j) = 1 and W(h_i) = 1, C(h_{t′}) = 0, min(Σ_{j=1}^{t′} W_R(h_j), Σ_{j=1}^{t′} W_L(h_j)) = 0, and Σ_{j=i}^{t′} M_{a_{1,i−1}}(h_j) = 1}.

This punishment rule starts if players 2 and 3 ever find out that player 1 switched rooms after the first minority round. If player 1 switched, there is a positive probability that the other players find out, because both other players mix and with probability 0.25 they both play the room that player 1 would switch to.

Part 2

The second part assigns different fixed rooms to each player. We divide the minority rounds equally by letting first player 1 be in the minority, then player 2, then player 3, then player 1 again, etc. Here player 1 is the player that was in the minority first in part 1, player 2 is the player that was in the minority in the other room in part 1, and player 3 is the player that was never in the minority in part 1. If at any point in time anyone deviates, causing some player not to obtain the payoff from that player's designated minority round, then all players start mixing evenly forever after (this punishment rule is introduced in part 3). When we arrive at part 2, we know that at some point a minority has occurred in both rooms. We assume this happens in round k. We obtain the following sub-strategies for each player.

S2¹(h_t) = (1, 0)  f.h.f. H = {h_t | t = k + i, i ∈ {1, 4, 7, …} and ∃! n < k : W(h_n) = 1, C(h_n) = 1},
S2¹(h_t) = (0, 1)  f.h.f. H = {h_t | t = k + i, i ∈ N \ {1, 4, 7, …} and ∃! n < k : W(h_n) = 1, C(h_n) = 1}.

S2²(h_t) = (1, 0)  f.h.f. H = {h_t | t = k + i, i ∈ {2, 5, 8, …} and u(a_{1,k}, a_{2,k}, a_{3,k}) = 1},
S2²(h_t) = (0, 1)  f.h.f. H = {h_t | t = k + i, i ∈ N \ {2, 5, 8, …} and u(a_{1,k}, a_{2,k}, a_{3,k}) = 1}.


S2³(h_t) = (1, 0)  f.h.f. H = {h_t | t = k + i, i ∈ {3, 6, 9, …} and C(h_k) = 0},
S2³(h_t) = (0, 1)  f.h.f. H = {h_t | t = k + i, i ∈ N \ {3, 6, 9, …} and C(h_k) = 0}.

Part 3

Now we construct a punishment rule, after which we can glue all sub-strategies together to obtain the full strategy. Once round k has occurred, we want to punish a player if he or she deviates after round k. Note that if a player deviates after round k, there will either be a majority of 2 players in room L, or a majority of 3 players in room R. Therefore, if this ever happens, all players start mixing evenly. If there is a majority of two players in room L, then there is a winner in room R, which we can check with the Win-R function. We can also check whether a majority of three players occurs in any room by using the Majority function. We introduce the following punishment rule:

S3(h_t) = (0.5, 0.5)  f.h.f. H = {h_t | ∃ t′ : k + 1 < t′ ≤ t s.t. W_R(h_{t′}) = 1 or M(h_{t′}) = 1}.

Now we write down the full strategy in an easily understandable manner. Again, let round k ∈ N ∪ {∞} denote the first round by which a minority has occurred in both rooms. If this never happens we set k = ∞.

S(h_t) = S1(h_t)   if t ≤ k and there was no deviation before round k,
S(h_t) = S1p(h_t)  if there was a deviation before round k,
S(h_t) = S2¹(h_t)  if t > k, the player identifies as player 1, and there was never a deviation,
S(h_t) = S2²(h_t)  if t > k, the player identifies as player 2, and there was never a deviation,
S(h_t) = S2³(h_t)  if t > k, the player identifies as player 3, and there was never a deviation,
S(h_t) = S3(h_t)   if t > k and there is a deviation after round k, but not before round k.

Part 4

It remains to show that this strategy is a Neutrally Stable Strategy. Assume it is not. Then there is some strategy, say Y, that can invade. Note that U(S, S, S) = 1/3. We start by checking what strategy Y is allowed to do in an SSY group. When Y plays against two S players and ever deviates after round k, both S players start mixing evenly, which causes the payoff of Y to drop to 1/4. This implies that a Y player is not allowed to deviate after round k: by not deviating it obtains 1/3, while deviating drops its payoff to 1/4. Before round k it is allowed to deviate, but whatever it does, at some point round k occurs with probability 1 and from then on it cannot deviate anymore. It is also not allowed to deviate when it is the first player in the minority, because if it does, again both S players start mixing evenly forever after and the payoff drops to 1/4 instead of 1/3.

The question remains whether it is possible to communicate with another Y player before round k and then try to invade. It turns out that this is not possible, for the following reason. Before round k the S players mix evenly, so every outcome is, in theory, possible. Therefore, if a Y strategy has some built-in rule to deviate after round k whenever some specific history has happened, because it 'knows' it is playing with other Y players, there is actually a positive probability that it is in an SSY group. Then with positive probability a Y player deviates after round k while playing in an SSY group, which drops the expected payoff to U(Y, S, S) < 1/3. Therefore, we must have U(S, S, Y) ≥ U(Y, S, Y) and U(S, Y, Y) ≥ U(Y, Y, Y) for all Y ∈ S.

A corollary of this proposition is that there are infinitely many strategies that are NSS. An easy way to construct them is as follows: let p ∈ (0, 1) with p ≠ 0.5, take the strategy presented in the previous proposition, and instead of mixing evenly in the first round pick room L with probability p.

4.2.3 Robustness Against Indirect Invasions

In the previous section we showed that there is a strategy that is Neutrally Stable. The next question is whether there are strategies that are RAII. The NSS presented in proposition 4.7 is a good first candidate, but it turns out not to be RAII.

Proposition 4.8. The NSS presented in proposition 4.7 is not RAII.

Proof. Let Y1 be a mutant strategy that is exactly the same as the NSS, except that it does not have the punishment rule introduced in Part 3 in section 4.2.2. Then Y1 performs evolutionarily equal to S. Let Y2 be a strategy that is the same as S in Part 1 of the proof of proposition 4.7; however, after round k it deviates whenever it is assigned to a minority for the first time. Then there are three options.

1. It is the only one that deviates after round k; then it keeps doing the same as strategy Y1.

2. Two players have deviated. If it was the first in the minority, then after round k + 3 it keeps going into the minority rooms it was assigned to, and it also goes into the minority rooms assigned to the player that did not deviate. If it was not in the minority first, it does the same as strategy Y1. This causes the other mutant to obtain all the payoff of the player that did not deviate.

3. If all three players deviated, it keeps sharing the minority rooms the same way as Y1.

Now we have constructed a sequence Y1, Y2 such that Y1 ∈ SE(S) and Y2 ∈ SB(Y1), so S is not RAII.


This last proof gives us an interesting insight into strategies that are RAII or ESS. We already know that a strategy that does not punish deviating players cannot be ESS. But if a strategy does use punishment rules, it may still fail to be ESS, as the example of the NSS that is not RAII shows. The problem is that a mutant can appear that is a copy of the incumbent without the punishment rule, and other mutants may then appear that can invade this mutant. For a strategy to be ESS, it should not even be possible for a mutant that is a copy of the incumbent without the punishment rule to invade the incumbent. So even if the mutant is an exact copy of the ESS except for the punishment rule, the incumbent ESS should find out that the mutant copy does not punish, and then punish this mutant for not punishing!

4.2.4 Evolutionary stability

We start by showing that if there is an ESS in the infinitely repeated Minority Game without discounting, then it must always play a mixed strategy when playing against itself. An interesting corollary of the next proposition is that, if there is an ESS, then every history occurs with positive probability when the ESS plays against itself.

Proposition 4.9. Let S be a strategy and assume that there is a history h̃_n ∈ H_n, n ∈ N, that occurs with positive probability when S plays against itself, such that at least one of the players plays S(h̃_n) = (1, 0) or S(h̃_n) = (0, 1). Then S cannot be an ESS.

Proof. Assume that S is an ESS. Without loss of generality we assume that S(h̃_n) = (1, 0). Let Y be a mutant strategy that is exactly the same as S, except for the history h̃_{n+1} = (h̃_n, (R, {R, R})). There we pick any mix such that Y(h̃_{n+1}) ≠ S(h̃_{n+1}). We know that when history h̃_n occurs, at least one of the players plays L with probability 1. Therefore, when S plays against itself, or against Y, the history h̃_{n+1} never occurs. Since Y is equal to S in all other cases we must have U(S, S, S) = U(Y, S, S) = U(S, S, Y) = U(Y, S, Y) = U(S, Y, Y) = U(Y, Y, Y), which shows that S cannot be an ESS.

Proposition 4.10. If a strategy S ∈ S is an ESS, it is never able to tell with certainty whether it is playing against a mutant or against another copy of itself.

Proof. By the assumptions in section 3.2.2, players cannot recognize other players. From the previous proposition we know that an ESS always mixes against itself, so in particular it mixes in the first round. Therefore, all possible outcomes can appear after the first round when playing against other copies of the ESS. If it were playing against a mutant strategy, it could thus not tell from the first-round outcome whether it was playing against a copy of itself or against a mutant. So after the first round has been played it cannot know against whom it played. Using proposition 4.9 we can extend this argument to rounds 2, 3, … in the same way.

There are two important corollaries that follow from propositions 4.9 and 4.10. First, we have that an ESS must play a mixed strategy in every round.

Corollary 4.11. If S is an ESS, then whenever it plays against any strategy it always uses a mix. That is, S(h_t) = (p, 1 − p), p ∈ (0, 1), ∀ h_t ∈ H_t.

Proof. This follows directly from propositions 4.9 and 4.10.

The second corollary shows that, due to the mixing, an ESS cannot be 'efficient': there will always be some loss, since there is always a positive probability that players end up in the same room.

Corollary 4.12. If S is an ESS, then U(S, S, S) < 1/3.

Proof. First, note that the highest expected payoff for a strategy against itself is 1/3. Second, because S is an ESS it must always play a mix, by proposition 4.9. Due to the mixing there is, in every round, a positive probability that all players choose the same room, so that U(S, S, S) < 1/3.
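The loss from mixing can be quantified with a small computation of our own for the symmetric case: if all three players independently choose room L with probability p, a round has no winner exactly when all three land in the same room, so the expected number of winners per round is 1 − p³ − (1 − p)³ ≤ 3/4 < 1.

```python
def expected_winners(p):
    """Expected number of winners in a round where each of the three
    players independently picks room L with probability p."""
    all_same = p ** 3 + (1 - p) ** 3   # only profiles with no winner
    return 1 - all_same

# Even the best symmetric mix wastes a quarter of the rounds, so the
# average payoff per player is at most 0.75 / 3 = 0.25 < 1/3.
print(expected_winners(0.5))   # 0.75
```

Any mixing therefore leaves a strictly positive probability of a wasted round, which is the loss the corollary refers to.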

These corollaries show that the NSS presented in proposition 4.7 cannot be an ESS: when it plays against itself it stops mixing at some point, and by proposition 4.9 it therefore cannot be an ESS. Of course, we already knew this, because earlier we found that the NSS is not RAII.

Corollary 4.13. The NSS presented in proposition 4.7 is not an ESS.

Proof. This follows directly from proposition 4.9.

The fact that an ESS must mix in every round is a very strict result. Assume there is a strategy S that is an ESS. Then we know that in the first round S(h_1) = (p, 1 − p) for some p ∈ (0, 1). Now let Y be some mutant strategy that is exactly the same as S, except that it does something else in the first round; for example, Y(h_1) = (1, 0), and Y(h_n) = S(h_n) for all other h_n ∈ H_n. Then this deviation of Y in the first round must at some point, in an SSY or SYY group, have a negative influence on the payoff of Y, because S is an ESS. In other words, S must at some point be able to detect that it is not playing against another S player, and punish this player so that it cannot invade. This must hold even when Y deviates in only one round, or when it deviates only very slightly from S_ESS in the first round, say Y(h_1) = (p + ε, 1 − p − ε) for ε > 0 very small, and plays the same as S_ESS in all other rounds.

In the discounted case this thought turns out to be even stronger, and it has inspired us to also work on the discounted case. This work is presented in the next section.

When the outcome of a round does not matter for the future a strategy can also not be an ESS. We will show this for the first round. The argument for an arbitrary round works the same way. We will use the following histories.

h^1_t := ((L, {L, L}), …, (a_{1,t−1}, {a_{2,t−1}, a_{3,t−1}})),
h^2_t := ((L, {R, L}), …, (a_{1,t−1}, {a_{2,t−1}, a_{3,t−1}})),
h^3_t := ((L, {R, R}), …, (a_{1,t−1}, {a_{2,t−1}, a_{3,t−1}})),
h^4_t := ((R, {R, R}), …, (a_{1,t−1}, {a_{2,t−1}, a_{3,t−1}})),
h^5_t := ((R, {L, R}), …, (a_{1,t−1}, {a_{2,t−1}, a_{3,t−1}})),
h^6_t := ((R, {L, L}), …, (a_{1,t−1}, {a_{2,t−1}, a_{3,t−1}})).

Proposition 4.14. Let S ∈ S be such that S(h^1_t) = S(h^2_t) = S(h^3_t) = S(h^4_t) = S(h^5_t) = S(h^6_t) for all t > 1. Then S cannot be an ESS.

Proof. A mutant Y that is exactly the same as S, except that it does not mix in the first round, is a neutral mutant, since the outcome of the first round does not set the mutant on a different path.

Corollary 4.15. If S is an ESS, there must be a t > 1 and i, j ∈ {1, 2, 3, 4, 5, 6} such that S(h^i_t) ≠ S(h^j_t).

Proof. This follows directly from proposition 4.14.

The last corollary shows how sensitive to the behaviour of the others an ESS must be, if it exists. During the game it must use information from past rounds and change its behaviour based on this information. There can never be a point where this sensitivity stops, because then we could construct a mutant that is the same up to that point and then changes its behaviour. Therefore, it is very difficult to write down possible candidates that could be ESS, and we suspect that they do not exist.

We want to conclude our analysis of the infinitely repeated Minority Game without discounting with the following thought. From the previous results we have established that an ESS must be highly sensitive and is not able to distinguish between mutants and copies of itself. Therefore, whenever an ESS plays against a mutant strategy, that mutant must automatically be set on a path that is, on average, worse than the average path the ESS is on. It follows that an ESS must adapt its play in such a way that certain round outcomes are better or worse for a player. Hence there must be a ranking of these outcomes that is not invadable by mutants. This might turn out to be difficult, since the only way to distinguish players is by minority rounds, and it seems that mutants can always turn the odds of being in the minority in their favour, because the ESS always has to use a mixed strategy (especially when the ESS does not mix evenly!). Unfortunately, we were not able to prove this, and it turns out to be very difficult. However, this last sequence of thoughts seems to work even better when the game is played with discounting. In the next section we present these results.

4.2.5 Evolutionary stability with discounting

Although we have focused mainly on the infinitely repeated Minority Game without discounting, we will also present some results on the game with discounting that came out of our research on the undiscounted case. For the discounted Minority Game we use the same assumptions made in section 3.2.2, together with the discounted payoff functions introduced in section 3.2.1.

Since propositions 4.9 and 4.10 are independent of which payoff function is used (limit of means or discounted payoff), they also hold for the infinitely repeated discounted Minority Game. Therefore, if S_ESS is an ESS in the infinitely repeated discounted Minority Game, it must use a mix in every round; in particular, it must mix in the first round. We now present a conjecture that may help in proving that there is (or is no) ESS in the infinitely repeated discounted Minority Game.

A conjecture

Assume S_ESS ∈ S is an ESS in the infinitely repeated discounted Minority Game. Then we know it starts out mixing in the first round. Assume S_ESS(h_1) = (p, 1 − p), where p ∈ (0, 1) and p ≠ 0.5; without loss of generality, p > 0.5. Now we can construct a mutant Y that is exactly the same as S_ESS, except that in the first round it plays Y(h_1) = (0, 1). The mutant then has a higher probability of winning the first round than S_ESS. We also know that S_ESS is not able to recognize from the outcome of the first round that it is playing against a mutant. However, since S_ESS is an ESS, we must have U(S_ESS, S_ESS, S_ESS) ≥ U(Y, S_ESS, S_ESS), so S_ESS will have to correct for this first round and eventually punish the mutant. A question we have thought about is: is this possible? And what happens when it is not? We start by assuming it is not possible and introduce the following conjecture.

Conjecture 4.16. It is not possible for a strategy S ∈ S to punish deviations of the kind described in the paragraph above.
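The first-round advantage the conjecture refers to can be computed in closed form (our own illustration, with hypothetical function names): if all three players choose room L with probability p, an incumbent wins the first round with probability p(1 − p)² + (1 − p)p² = p(1 − p), while a mutant that plays R for sure wins whenever both incumbents choose L, i.e. with probability p².

```python
def incumbent_win(p):
    """First-round win probability of an incumbent when all three
    players choose room L independently with probability p."""
    return p * (1 - p) ** 2 + (1 - p) * p ** 2   # = p * (1 - p)

def mutant_win(p):
    """First-round win probability of a mutant that plays R for sure
    against two incumbents who choose L with probability p."""
    return p ** 2                                 # both incumbents in L

# For any p > 0.5 the mutant wins the first round strictly more often.
for p in (0.6, 0.7, 0.9):
    assert mutant_win(p) > incumbent_win(p)
```

For p = 0.7 the mutant wins the first round with probability 0.49 against 0.21 for the incumbent; under discounting this early gain matters, which is why the conjecture's "no punishment possible" assumption would rule out an ESS with uneven mixing.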


Definition 4.17. Let S ∈ S be a strategy. We say that S is history independent in round t ∈ N if S(h_t) is the same for all h_t ∈ H_t.

Proposition 4.18. If conjecture 4.16 is true, then an ESS must mix evenly in history independent rounds.

Proof. Let S ∈ S be an ESS that does not mix evenly in history independent rounds, and assume conjecture 4.16 is true. Without loss of generality we can assume that S(h_1) = (p, 1 − p), p ∈ (0.5, 1). Let S′ ∈ S be a mutant strategy that is exactly the same as S, except that S′(h_1) = (0, 1). Then S′ has a higher probability of winning the first round, and by conjecture 4.16 this is not detectable and punishable. Therefore, U(S, S, S) < U(S′, S, S).

Proposition 4.19. If conjecture 4.16 is true, then any strategy S ∈ S that plays 2 or more history independent rounds is not an ESS.

Proof. We know that the first round is history independent. Now assume that round n ∈ N, n ≠ 1, is also history independent. Then we can construct a strategy S′ as follows: whenever a history h*_n = ((L, {R, R}), (a_{1,2}, {a_{2,2}, a_{3,2}}), …, (a_{1,n−1}, {a_{2,n−1}, a_{3,n−1}})) occurs, where a_{i,k} is any action in {L, R} with i ∈ {1, 2, 3} and k ∈ {2, …, n − 1}, we set S′(h*_n) = (1, 0). For all other h_n ∈ H_n with h_n ≠ h*_n we set S′(h_n) = S(h_n). Now we have a strategy S′ that does not mix in round n whenever she was in the minority in room L in the first round. This cannot be detected by S, since S would mix evenly in this case, and therefore it is not punishable. We also have

U(S, S, S) = U(S′, S, S) = U(S, S, S′) = U(S′, S, S′) = U(S, S′, S′) = U(S′, S′, S′),

so that S is not an ESS.

Theorem 4.20. If conjecture 4.16 is true, then there is no ESS in the infinitely repeated Minority Game with discounting.

Proof. Assume S is an ESS in the infinitely repeated Minority Game with discounting. From the results above we know that S must mix evenly in the first round. We will show that a mutant S′ that is the same as S, except for histories where a minority occurs in the first round, can always invade S. We assume S′ also mixes evenly in the first round.

Without loss of generality we assume a minority occurs in the first round. Let player 1 denote the player that is in the minority first and players 2 and 3 the players that are not. In the second round there are several possibilities for S. We go over all of them and show that each can be invaded by S′. First, from proposition 4.19 we know that not all players are allowed to mix evenly in the second round, since then we could invade S. Therefore, either player 1, or players 2 and 3, have to play a different mix in the second round. We also know that in the second round each player has to play a mix, by corollary 4.11.

Let h*_2 ∈ H_2 denote the history in which player 1 is in the minority in the first round and players 2 and 3 are not. Let S_{2,3}(h*_2) denote what players 2 and 3 play in the second round and let S_1(h*_2) denote what player 1 plays in the second round. For the second round S has the following options:

1. Players 2 and 3 mix evenly and player 1 plays a different mix. That is, S_{2,3}(h*_2) = (0.5, 0.5) and S_1(h*_2) = (p, 1 − p), p ∈ (0, 1), p ≠ 0.5.

2. Players 2 and 3 mix, but not evenly, and player 1 plays a (possibly even) mix. That is, S_{2,3}(h*_2) = (p′, 1 − p′), where p′ ∈ (0, 1), p′ ≠ 0.5, and S_1(h*_2) = (p″, 1 − p″), where p″ ∈ (0, 1).

(1) The first situation is invadable by a mutant S′ constructed as follows. Without loss of generality we may assume that p > 0.5. Let S′ be a strategy that is exactly the same as S, except when history h*_2 occurs and S′ identifies herself as player 2 or 3; in that case we set S′_{2,3}(h*_2) = (0, 1). Since S would mix evenly here, S′ shifts the probability of being in the minority in that round in her favour. Assuming conjecture 4.16 to be true, which covers a similar situation, this cannot be punished. Therefore U(S, S, S) < U(S′, S, S), so S cannot be an ESS.

(2) We can invade this strategy by constructing an invader S′ that, as player 1, does not mix but plays a fixed room in the second round, and that is the same as S in all other rounds. Without loss of generality we may assume that S_{2,3}(h*_2) = (p′, 1 − p′), where p′ ∈ (0.5, 1). Now we set S′_1(h*_2) = (0, 1). By conjecture 4.16 this deviation is not punishable and therefore U(S, S, S) < U(S′, S, S), since we are discounting.

We have now shown that if conjecture 4.16 is true, then there is no strategy that is an ESS in the infinitely repeated Minority Game with discounting. The question that remains open is: is conjecture 4.16 true? Unfortunately, due to limited time we were not able to study this further, so we only give an intuitive argument for why the conjecture might be true. Let S ∈ S denote a strategy that is an ESS in the infinitely repeated Minority Game with discounting. Without loss of generality, assume S(h_1) = (p, 1 − p), where p ∈ (0.5, 1). Let Y1 denote a strategy that is the same as S except for the first round, where it plays Y1(h_1) = (0, 1), and let Y2 denote a strategy that is the same as S except for the first round, where it plays Y2(h_1) = (1, 0). If conjecture 4.16 is true, then even though Y1 wins the first round more often, on average, than S, we must still have U(S, S, S) ≥ U(Y1, S, S). This implies that Y1 must somehow be punished by strategy S; however, we have already established that S cannot recognize that it is playing against a mutant. Therefore, the outcome of the first round must, on average, set the mutant on a path that is worse than the average path S is on. Recall that there are 6 possible outcomes a player can observe, namely (R, {R, R}), (R, {R, L}), (R, {L, L}), (L, {R, R}), (L, {L, R}), (L, {L, L}). There must be a ranking of these outcomes that determines the utility of the future path, where (R, {L, L}), (L, {L, L}) and (L, {L, R}) cannot be ranked too highly, otherwise mutants Y1 and Y2 could invade. Future research should study whether such a ranking can exist, so that it is not invadable by a mutant strategy. We think it might be the case that for every proposed ranking one could come up with a mutant strategy that uses the ranking to its advantage and invades.


5 Conclusion

The main goal of this thesis was to study the evolutionary dynamics of the infinitely repeated Minority Game. We did so through an analysis of the infinitely repeated three-player Minority Game without discounting, and we also introduced some ideas for the discounted version. For the analysis we used two assumptions. The first is that the payoff of a strategy is computed as a limit of means of the payoffs per round. The second is that at the end of each round a player only knows how many players chose their side, but not who chose their side. With this set-up we were interested in the question whether there exists a strategy in the game that is evolutionarily stable. To analyse this question, we studied the existence of strategies that are Nash, NSS, RAII and ESS. To do so we first had to generalize some equilibrium concepts of Evolutionary Game Theory to 3-player games (this was done in chapter 3). Using these tools we were able to show that there do exist strategies that are Nash and/or NSS. However, we have not found any strategies that are RAII or ESS. We were able to prove some interesting results regarding strategies that are ESS in the undiscounted case; these results give insight into how an ESS must behave if it exists.

The research on the undiscounted case inspired us to also analyse the infinitely repeated Minority Game with discounting. We introduced a conjecture and showed that, if this conjecture turns out to be true, there is no strategy that is an ESS in the discounted case. We also gave an intuitive argument for why this conjecture might be true.

It remains an open question whether there is an ESS in the infinitely repeated Minority Game (both with and without discounting). Therefore, there are still some interesting questions for future research. The first one is: is conjecture 4.16 true? Second, due to time constraints we were not able to analyse the Nash, NSS and RAII concepts in the discounted case; it would be interesting to know whether there are any strategies that are Nash, NSS or RAII there. Another possibility for future research is the analysis of the N-player game instead of the 3-player game.


References

Arthur, W.B. (1994). Inductive Reasoning and Bounded Rationality. The American Economic Review 84(2). 406 - 411.

Boulema, K. (2012). Finding an evolutionarily stable strategy of the three-player repeated minority game (Unpublished master’s thesis). University of Amsterdam, Amsterdam, The Netherlands.

Cavagna, A. (1999). Irrelevance of memory in the minority game. Physical Review E 59(4), R3783 - R3786.

Challet, D., Chessa, A., Marsili, M., Zhang, Y.-C. (2001). From Minority Games to real markets. Quantitative Finance 1(1), 168 - 176.

Challet, D., Marsili, M. (1999). Phase transition and symmetry breaking in the minority game. Physical Review E 60(6), R6271 - R6274.

Challet, D., Marsili, M., Zhang, Y.-C. (2005). Minority Games: interacting agents in financial markets. Oxford University Press.

Challet, D., Zhang, Y.-C. (1997). Emergence of cooperation and organization in an evolutionary game. Physica A 246, 407 - 418.

Gao, X. (2012). Finding an evolutionarily stable strategy of the three-player repeated minority game without discounting (Unpublished master's thesis). University of Amsterdam, Amsterdam, The Netherlands.

Linde, J., Sonnemans, J., Tuinstra, J. (2014). Strategies and evolution in the minority game: A multiround strategy experiment. Games and Economic Behavior 86, 77 - 95.

Lo, T.S., Hui, P.M., Johnson, N.F. (2000). Theory of the evolutionary minority game. Physical Review E 62(3), 4393 - 4396.

Maynard Smith, J. (1974). The theory of games and the evolution of animal conflicts. Journal of Theoretical Biology 47(1), 209 - 221.

van Veelen, M. (2012). Robustness against indirect invasions. Games and Economic Behavior 74(1), 382 - 393.
