
Discrete Games: An Introduction

Luca Di Martile (10230726)
Supervisor: Prof. Jo Seldeslachts
Universiteit van Amsterdam, MSc Economics


Abstract

In this paper, I investigate the very active literature of discrete games. Discrete games provide a useful platform to study several economic and social behaviors. However, these models will generally present multiple equilibria and imply the coherency problem – the probabilities of all the possible outcomes will not sum to one. I explain in detail the shortcomings related to discrete games and I present seven methodologies through which researchers have attempted to solve the coherency problem. I find that in two decades the literature made significant improvements towards identification and estimation in discrete games, which proves the importance of the accumulation of collective knowledge in the discipline.

1 Introduction

In this paper, I investigate the very active literature of discrete games. These differ from the regular form games present in microeconomic theory as they allow the actions of all players to influence each other’s pay-off. As I will show in the following section, from an econometric perspective a discrete game is a generalization of discrete choice models.

Several economic studies that cover a wide spectrum of subjects have applied discrete games. Jia (2008) employs discrete games in a study of Wal-Mart's entry decision and localization. Sacerdote (2001) extends the literature of social interactions by applying discrete games in his study of students' behavior. Mazzeo (2002) studies motels' quality choice by modeling their relations as a discrete game. Manuszak and Cohen (2004) apply discrete games to technology choices, while Sweeting (2009) applies discrete games in a study about radio advertising. Other useful references can be found in Berry and Tamer (2006).

Discrete games provide a useful platform to study several economic and social behaviors. However, the study of discrete games entails several hurdles for the researcher, both from a theoretical and an empirical perspective. Generally, a discrete game will present multiple equilibria, and it will imply the coherency problem – the probabilities of all the possible outcomes will not sum to one.

In this paper, I explain in detail the shortcomings related to discrete games. My investigation of the literature found seven methodologies through which researchers attempted to solve the coherency problem. I find that in two decades the literature made significant improvements towards identification and estimation in discrete games, which proves the importance of the accumulation of collective knowledge in the discipline.

In the following, I will often treat discrete games as applied to entry games. I will also refer to studies of social interactions, an interesting field of research that has some peculiarities compared to the canonical study of discrete games. More on the social interaction literature can be found in Manski (1993) and Manski (2000), as well as in the papers cited in the following sections.

The paper is organized as follows. Section 2 explains the main idea behind discrete games. Sections 3 and 4 describe the fundamental assumptions and the modeling decisions the researcher has to take for the analysis, while Sections 5 and 6 present the problems that arise in the study of discrete games. Section 7 introduces the methodologies explored by the literature to solve the limitations of the model. Finally, Section 8 presents the conclusion.

2 Discrete games

Forecasting an event in the future based on information from the past has been the topic of many economic studies, both in macroeconomics and microeconomics. One branch of these studies focuses on estimating the probability of an agent's choice when her set of possible actions is discrete. For example, the likelihood of 'Buying' rather than 'Not Buying' if the agent is a consumer; or the likelihood of choosing to produce 'High Quality' products rather than 'Low Quality' products if the agent is a firm. These discrete choice models (see McFadden (1974), Hausman and Wise (1978)) assume that the agents are influenced only by their own preferences. That is, they act as single agents.

Discrete games differ from discrete choice models because they allow the agents to influence each other's decisions. Discrete games try to give an answer to the question: how does the choice of firm A to produce 'High Quality' rather than 'Low Quality' products influence the quality choice of firm B? The interaction between the agents produces a game-theoretic framework where the concept of equilibrium plays a central role. In discrete games, the economist's objective is not to estimate the likelihood of an agent's decision, but rather to estimate the likelihood of the equilibrium of the game. That is, the set of choices that represent the best response of all the agents in the game.

To give the intuition, following Bresnahan and Reiss (1991a), if agent i acts as a single-agent, one could describe her decision making as a discrete choice model. Agent i ’s pay-off function could be represented as:

$$\Pi_i = f_i(X_i, \theta) + \epsilon_i \tag{1}$$

where f_i is a function that depends on X_i, a matrix of observed covariates, and θ, a set of unknown parameters. The error term, ε_i, represents the set of variables that influence agent i's pay-off but are unobserved by the econometrician. In order to identify agent i's decision, the researcher sets threshold conditions on her pay-off function. For example, if the researcher is studying firms' entry decisions, the pay-off function in (1) will represent firm i's profits, and the condition for firm i to enter the market will be Π_i ≥ 0.
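Spelling out this threshold condition, and assuming a known CDF F_ε for ε_i, the single-agent entry probability follows directly from the distribution of the error term:

$$\Pr(\text{entry}) = \Pr(\Pi_i \ge 0) = \Pr\big(\epsilon_i \ge -f_i(X_i, \theta)\big) = 1 - F_{\epsilon}\big(-f_i(X_i, \theta)\big),$$

which reduces to the familiar probit or logit expression once a distribution for ε_i is chosen.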

If, instead, the researcher assumes that the actions of other agents influence agent i's decision, then equation (1) becomes:

$$\Pi_i(a_i, a_{-i}) = f_i(a_i, a_{-i}, X_i, \theta) + \epsilon_i(a_i, a_{-i}) \tag{2}$$

where a = (a_i, a_{-i}) is the N-dimensional vector describing all agents' actions, with N being the number of agents in the game. In an entry game, a_{-i} = 1 if firm −i (i.e. not firm i) enters the market, and a_{-i} = 0 otherwise.

Therefore, agent i's pay-off will depend on the other N − 1 agents' actions. An important element of a game is the equilibrium concept: the set of rules that determines the equilibrium of the game, such as the well known Nash equilibrium. The Nash equilibrium requires each player's action to be her best response. That is, a*_i is agent i's best response if:

$$\Pi_i(a_i^*, a_{-i}^*) \ge \Pi_i(a_i, a_{-i}^*)$$

for all a_i. Given that each agent plays her best response, the equilibrium of the game will be a* = (a*_i, a*_{-i}), and its probability, Pr(a*|X, θ), will be the object of the study.

Estimating the likelihood of the equilibrium is often a difficult exercise for the researcher. Besides the problems of endogeneity that the econometrician has to face, in discrete games the equilibrium is not always unique, or it may not exist in pure strategies. The researcher is often required to assume an equilibrium selection mechanism to reach a feasible estimation.

Before going deeper into the analysis of discrete games, it is convenient to list the three assumptions the researcher has to make prior to constructing the model. As I will show in the following sections, these are not the only assumptions needed for the analysis; they represent, however, the basis of the model from which the researcher can build a meaningful estimation.

3 Three assumptions to model discrete games

First, the researcher has to choose whether the players in the game have complete or incomplete information. In a game with complete information, each player knows her own pay-off as well as the other players' pay-offs. In other words, in (2) each player knows the realizations of f_i(.) and ε_i for all i (even if the error terms are unobserved by the econometrician). Conversely, in a game with incomplete information, each player's pay-off is private information. Player i knows the realization of ε_i (i.e. her own error term) but does not observe ε_{-i} (i.e. the other players' error terms). Hence, each agent will base her action upon expectations of the other players' pay-offs. Whether the game has complete or incomplete information changes the econometric analysis substantially. The equilibrium concept is closely related to the set of information given to the players. A model with complete information usually employs the concept of Nash equilibrium, while models with incomplete information employ a different equilibrium concept, the Bayesian-Nash equilibrium.

Second, the researcher has to decide whether the game is static or dynamic. In a static game (often referred to as a one-shot game) players take their actions once. Instead, a dynamic game allows the players to interact repeatedly. That is, the agents play the same game over multiple periods.

The choice between a static and a dynamic game is typically driven by the topic under study. If it can be safely assumed that the agents' decisions are uncorrelated over time, then a static game will better fit the model. If, instead, the action of an agent depends on her previous actions, then a dynamic game is more appropriate. Consider, for example, an entry model where firms can enter in two different geographic markets, A and B. A static game can be used if entry in A is independent of entry in B for all firms. If, instead, for at least some firms, entry in A affects the likelihood of entry in B in the next period, then a dynamic game should be adopted.

Another important determinant of the choice between static and dynamic game is the data available to the researcher. The obvious requirement to adopt a dynamic framework is the availability of panel data (i.e. observations of the agents' decisions over time). If the researcher has only cross-section data, the analysis of a dynamic game is not feasible and the researcher is forced to assume a static framework.

Lastly, the researcher can either allow only for pure strategies equilibria or also for mixed strategies equilibria. A (Nash) pure strategies equilibrium is an equilibrium where players take their actions with probability equal to one, whereas in a mixed strategies equilibrium players assign a probability less than one to each of their possible actions. The difference between these two types of equilibria is better explained with an example. Consider the following 2x2 game:

Table 1: 2x2 Game

Player 1 \ Player 2     L        R
U                       (1,1)    (0,0)
D                       (0,0)    (1,1)

In this simultaneous game there are two pure strategies equilibria ({U, L} and {D, R}) and one mixed strategies equilibrium where both players assign probability 1/2 to each of their actions.
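To see where the mixing probabilities come from: if player 2 plays L with probability q, player 1 is indifferent between U and D only when

$$q \cdot 1 + (1-q)\cdot 0 \;=\; q \cdot 0 + (1-q)\cdot 1 \quad\Longrightarrow\quad q = \tfrac{1}{2},$$

and, by symmetry, player 1 must also randomize with probability 1/2 for player 2 to be willing to mix.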

Generally, allowing for mixed strategies equilibria complicates the analysis and implies an additional computational burden to calculate the likelihood of the game's equilibria. However, not allowing for mixed strategies may imply a substantial loss of information in the model. It is up to the researcher to evaluate this trade-off.

In the following sections, I will focus on static games with complete information allowing only for pure strategies equilibria.

4 Specify the pay-off

Once the assumptions outlined above have been specified, it is possible to give form to the pay-off functions of the model. That is, to specify f_i(a_i, a_{-i}, X_i, θ) in (2).

In the literature, researchers have usually assumed this function to be linearly dependent on actions and covariates. In my study I will keep the linear relation, but it is useful to bear in mind that linearity is an assumption, and the dependency of the pay-off on the underlying variables may follow a different relation.

The specification of the pay-off function is related to the case under study. In entry models the function will express the firms’ profits, while in social interaction models the function will take the form of utility.

For example, Ciliberto and Tamer (2009) in their study of entry in airline markets assume the following profit function:1

$$\pi_i = \alpha_i' S + \beta_i' Z + \gamma_i' W + \sum_{j \ne i} \delta_j y_j + \sum_{j \ne i} \phi_j' Z_j y_j + \epsilon_i.$$

The set of observed covariates X = {S, Z, W} includes S, a vector of market characteristics common among firms, Z, a matrix of firm characteristics that enter all firms' profit functions, and W, a vector of characteristics specific to firm i (i.e. firm j is not directly affected by firm i's W). The variable y_j is equal to 1 if firm j enters the market, and 0 otherwise. The set of unknown parameters is θ = {α, β, γ, δ, φ}; in particular, the δ's are the coefficients that measure the effects of competitors' entry on firm i's profits.

1 Ciliberto and Tamer (2009) assume the firms' profits to be market specific. I suppress this dependency for clarity of exposition.

In the social interaction literature, Soetevent and Kooreman (2007) (see also Brock and Durlauf (2001) and Brock and Durlauf (2007)) studied discrete games with social interactions in high school teen behaviors, and specified the individual utility function as:

$$V_i(y_i, y_{-i}) = u(y_i, X_i) + S(y_i, y_{-i}) + \epsilon_i(y_i) = \beta_{y_i}' X_i + \frac{\gamma}{2(N-1)}\, y_i \sum_{j \ne i} y_j + \epsilon_i(y_i)$$

where X_i is a vector of individual characteristics that affects the private utility u(.) of teen i. The vector y = {y_i, y_{-i}} is formed by variables equal to 1 if a teen engages in a certain behavior (e.g. smokes cigarettes), and −1 otherwise.2 Here γ is the coefficient that measures the social interaction among teens, such as the effect of friends smoking on i's utility from smoking.

2 The choice of using y_i = {−1, 1} instead of y_i = {0, 1}, as in entry games, is due to the specification of the 'social utility' S(.). The social utility would be null in case y_i = 0, implicitly assuming that the alternative behavior does not produce any social interaction. However, as noted by Soetevent and Kooreman (2007), using y_i = {−1, 1} produces qualitatively the same results as adopting y_i = {0, 1}.

The social interaction cases are particularly interesting because it is arduous for the researcher to put a threshold condition on the individual utility. That is, the researcher does not know at what level of utility the teen will start smoking. In addition, this threshold is likely to be different among teens. A useful strategy is to focus on the difference between the utility of, say, smoking and the utility of non-smoking. One can assume that if this difference is greater than zero, the teen will start to smoke. Therefore, the relation under study becomes:

$$V_i(1, y_{-i}) - V_i(-1, y_{-i}) = \beta' X_i + \frac{\gamma}{N-1} \sum_{j \ne i} y_j + \epsilon_i$$

where β = β_1 − β_{−1} and ε_i = ε_i(1) − ε_i(−1).

As mentioned before, the pay-offs are generally unobserved by the researcher. This is due to the unknown error terms (the ε's). In an entry model, the researcher generally does not have at her disposal data on costs to calculate the profits. More problematic from this standpoint are studies on social interactions, where the utility functions are unobservable by definition.

The researcher, however, observes the agents' decisions, which can be used to construct meaningful and econometrically tractable equations using multinomial latent variable models. These models find the relation between the players' actions and their pay-off functions by exploiting the threshold conditions. In an entry game, for example, even if information about the profits is unavailable, if the researcher observes that firm i entered the market, she can assume that firm i's profits were positive.

Thus, a discrete game is written as a system of simultaneous latent variable equations:

$$\begin{cases} y_i^* = \beta_i' X_i + \sum_{j \ne i} \delta_j y_j + \epsilon_i \\ y_i = 1 \ \text{if } y_i^* \ge 0 \\ y_i = 0 \ \text{if } y_i^* < 0 \end{cases} \tag{3}$$

The latent variable y_i* in (3) represents the profits in an entry model, or the difference in utility in a social interaction model. This system of equations underlines how these two branches of economics are alike, in that they share a similar econometric specification to study two different aspects of the discipline.

In the following section, I will use a simple example of a discrete game with two players to underline the challenges that the researcher encounters in estimating the equilibrium of a discrete game.

5 Multiple Equilibria in discrete games

Consider the following simultaneous bivariate game:

Table 2: 2x2 Discrete Game

            y2 = 0                    y2 = 1
y1 = 0      (0, 0)                    (0, β_2'X_2 + ε_2)
y1 = 1      (β_1'X_1 + ε_1, 0)        (β_1'X_1 + δ_2 + ε_1, β_2'X_2 + δ_1 + ε_2)

The game in Table 2 is the classic example of a discrete game found in the literature of entry models (see, among others, Bresnahan and Reiss (1991a), Tamer (2003)). The game can be written in an econometrically tractable specification similar to equation (3):

$$\begin{cases} y_1^* = \beta_1' X_1 + \delta_2 y_2 + \epsilon_1 \\ y_2^* = \beta_2' X_2 + \delta_1 y_1 + \epsilon_2 \\ y_i = 1 \ \text{if } y_i^* \ge 0, \quad y_i = 0 \ \text{otherwise} \end{cases} \tag{4}$$

with i = 1, 2. Similarly to equation (3), X = {X_1, X_2} are exogenous covariates that influence i's pay-off, and ε = {ε_1, ε_2} are random variables unobservable to the econometrician.

The important aspect of the game described in (4) is that, with a large enough support of ε, the game shows multiple equilibria. That is, the mapping between variables and equilibrium is not one-to-one. This multiplicity entails several difficulties for the researcher, from both a theoretical and an empirical perspective. In (4), as in discrete games in general, the sum of the probabilities of all the possible equilibria of the game is greater than one. This is the so called coherency problem (Heckman (1978)), and it represents the theoretical paradox of discrete games. The empirical complication derives from the inability to estimate the likelihood of the game's equilibria by using only the estimates of the unknown parameters. In the region of multiplicity, the researcher needs an equilibrium selection mechanism which picks one of the multiple equilibria. These issues will become clearer in the next sections, where they will be discussed in more detail.
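To make the multiplicity concrete, the following minimal Python sketch enumerates, for one draw of the unobservables, which outcome profiles of game (4) are pure strategies Nash equilibria. All parameter values are illustrative, not estimates from any of the cited studies.

```python
from itertools import product

# Illustrative parameters for game (4): strategic substitutes (negative deltas)
# and one draw of (eps1, eps2) that falls in the central region of Figure 1.
b1x1, b2x2 = 0.5, 0.5        # beta_1'X_1 and beta_2'X_2
d1, d2 = -1.0, -1.0          # delta_1 and delta_2
eps1, eps2 = 0.0, 0.0        # one draw of the unobservables

equilibria = []
for y1, y2 in product((0, 1), repeat=2):
    pay1 = b1x1 + d2 * y2 + eps1   # firm 1's latent pay-off given y2
    pay2 = b2x2 + d1 * y1 + eps2   # firm 2's latent pay-off given y1
    # (y1, y2) is a pure Nash equilibrium iff each action obeys y_i = 1{pay_i >= 0}
    if (y1 == 1) == (pay1 >= 0) and (y2 == 1) == (pay2 >= 0):
        equilibria.append((y1, y2))

print(equilibria)   # [(0, 1), (1, 0)] -> two pure equilibria for this draw
```

For this draw both (1, 0) and (0, 1) satisfy the best-response conditions, which is exactly the multiplicity discussed below.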

The first thing to note about the game in (4) is that its equilibria depend on the signs of δ_1 and δ_2. Several empirical works on discrete games make assumptions about the sign of the interactions (e.g. Bresnahan and Reiss (1990), Bresnahan and Reiss (1991b), Soetevent and Kooreman (2007)), while the estimation methodologies employed in other studies do not require knowledge of these signs (e.g. Ciliberto and Tamer (2009)).

Economic theory can help to predict the sign of the interaction. For example, in firms’ entry games one would expect the entry of a competitor to have a negative effect on the firm’s profits, and therefore to have a negative effect on the likelihood of entry by that firm.

In other fields, however, the sign of the δ's may not be easily predictable, as in studies on social interactions. How would truancy by some classmates influence the likelihood of another student playing truant? The effect might be either positive or negative. Even more, the effect might be positive for some students and negative for others.

I will now examine how the different signs of the δ's imply different mappings of the equilibria in (4).

5.1 Game Equilibria with δ1 < 0 and δ2 < 0

Figure 1 depicts in the ε-space the equilibria of the game in (4), assuming δ_1 < 0 and δ_2 < 0. That is, each player receives a negative externality from the other player's action: the players' decisions are strategic substitutes. Assume that (4) describes an entry decision by two firms, where each firm can either 'Enter' (y_i = 1) or 'Not Enter' (y_i = 0) the market.

From the equations in (4), it can be seen that the condition for y_i = 1 is:

$$\epsilon_i \ge -\beta_i'X_i - \delta_j$$

with i, j = {1, 2} and i ≠ j. If this condition holds, each firm will enter the market regardless of the other firm's decision. Therefore, if it holds for both firms, the two will enter the market. Conversely, since δ_j is negative, a firm will not enter the market regardless of the action of the other firm if:

$$\epsilon_i < -\beta_i'X_i$$

with i = 1, 2. When this condition holds for both firms, both of them will stay out of the market. The two cases correspond, respectively, to the top-right and the bottom-left areas of Figure 1.

Figure 1: Equilibria in ε-space with δ_1 < 0 and δ_2 < 0

Mixing these two extreme conditions, we find that if:

$$\epsilon_1 \ge -\beta_1'X_1 - \delta_2, \qquad \epsilon_2 < -\beta_2'X_2,$$

firm 1 will enter the market, while firm 2 will not enter. This case is depicted in the bottom-right area of Figure 1. The top-left area has symmetric conditions, with firm 2 entering and firm 1 staying out.

In the areas examined so far all the strategies of the firms were dominant. That is, each firm would have played its strategy irrespective of the other firm’s action. In the remaining areas the strategy of at least one firm is not dominant. However, in most cases it is still possible to find a unique equilibrium.

In the bottom-center area of Figure 1, the ε's are such that:

$$-\beta_1'X_1 \le \epsilon_1 < -\beta_1'X_1 - \delta_2, \qquad \epsilon_2 < -\beta_2'X_2.$$

Here firm 1's strategy depends on firm 2: if firm 2 entered the market, firm 1 would stay out. However, the second inequality dictates that firm 2 does not enter the market, therefore firm 1 will enter. The center-left area is the mirror image for firm 2.

This intuition applies also in the center-right area, where:

$$\epsilon_1 \ge -\beta_1'X_1 - \delta_2, \qquad -\beta_2'X_2 \le \epsilon_2 < -\beta_2'X_2 - \delta_1.$$

If firm 1 did not enter the market, firm 2 would. However, the first condition implies that entering the market is the dominant strategy for firm 1. Hence, firm 2 will stay out of the market. This case is symmetric in the top-center area.

Finally, the central area of Figure 1 presents multiple equilibria. In this area the error terms are such that:

$$-\beta_1'X_1 \le \epsilon_1 < -\beta_1'X_1 - \delta_2, \qquad -\beta_2'X_2 \le \epsilon_2 < -\beta_2'X_2 - \delta_1.$$

Since there are no dominant strategies, without further assumptions it is not possible to predict a unique outcome. What is known is that if one of the firms entered, the other would stay out of the market. Hence both outcomes are possible: either firm 1 enters and firm 2 stays out, or the opposite, firm 2 enters and firm 1 stays out.

5.2 Game Equilibria with δ1 > 0 and δ2 > 0

Assuming δ_1 > 0 and δ_2 > 0, each player in game (4) receives a positive externality from the other player's action. The strategies of the players are strategic complements. In this setting, the mapping of the equilibria of the game changes, as depicted in Figure 2. Following the same entry game as in the previous example, the strategy of each firm to enter the market (y_i = 1 with i = 1, 2) is dominant if:

$$\epsilon_1 \ge -\beta_1'X_1, \qquad \epsilon_2 \ge -\beta_2'X_2,$$

Figure 2: Equilibria in ε-space with δ_1 > 0 and δ_2 > 0

which corresponds to the top-right area of Figure 2. Similarly, 'Not Enter' will be the dominant strategy for both firms in the bottom-left area, where:

$$\epsilon_1 < -\beta_1'X_1 - \delta_2, \qquad \epsilon_2 < -\beta_2'X_2 - \delta_1.$$

Like the case with strategic substitutes strategies, combining the conditions for the firms' dominant strategies will result in only one firm entering the market. This is the case in the top-left and the bottom-right areas, where:

$$\epsilon_i \ge -\beta_i'X_i, \qquad \epsilon_j < -\beta_j'X_j - \delta_i,$$

with i, j = {1, 2} and i ≠ j. Specifically, in the bottom-right area only firm 1 enters, while in the top-left area only firm 2 enters.

Examining the dominant strategies of the firms has produced the same results as in the case of strategic substitutes strategies (Figure 1): either both firms enter the market, both stay out, or only one firm enters.

The difference between strategic substitutes and strategic complements strategies can be appreciated when one of the firms does not have a dominant strategy, as in the bottom-center area of Figure 2, where the error terms are such that:

$$-\beta_1'X_1 - \delta_2 \le \epsilon_1 < -\beta_1'X_1, \qquad \epsilon_2 < -\beta_2'X_2 - \delta_1.$$

Firm 1 would need the positive effect of the entry of firm 2 (δ_2 > 0) in order to enter the market. However, the second inequality implies that firm 2 will not enter, hence neither firm will enter the market. The center-left area is the analogous case for firm 2.

The center-right and top-center areas of Figure 2 depict the opposite situation, where the positive effect carried by the entry of one firm (for which entry is the dominant strategy) prompts the other to enter the market.

Lastly, in the center area no player has a dominant strategy since:

$$-\beta_1'X_1 - \delta_2 \le \epsilon_1 < -\beta_1'X_1, \qquad -\beta_2'X_2 - \delta_1 \le \epsilon_2 < -\beta_2'X_2.$$

This is the region of multiplicity. Each firm would enter only if the other firm entered as well. However, since the game is simultaneous, without imposing further restrictions the outcome is undetermined. Two equilibria are possible: either both firms enter the market, or neither does.

5.3 Game Equilibria with δ1 < 0 and δ2 > 0

The last possible case consists of mixed effects of the interactions between the players. I will assume δ_1 < 0 and δ_2 > 0; the opposite case (δ_1 > 0 and δ_2 < 0) has symmetric equilibria. Here, firm 1 receives a positive externality from the entry of firm 2 (δ_2 > 0), but firm 2 receives a negative effect from firm 1's entry (δ_1 < 0).3 Figure 3 shows the mapping of the equilibria.

3 It is difficult to believe that the entry of a competitor would have opposite effects depending on which firm enters the market. A more realistic example could be the decision of two students to attend a math class. Assume that student 2 has high ability, while student 1 has low ability. It is possible that student 1 would benefit from the attendance of student 2 in the class, while student 2 would be harmed by the attendance of student 1.

Figure 3: Equilibria in ε-space with δ_1 < 0 and δ_2 > 0

The areas where both firms have dominant strategies (the corner areas) show the same equilibria as in the two previous cases, while the areas where the strategy of one firm is not dominant show equilibria which are a mixture between the case of strategic substitutes and the case of strategic complements. The pattern of the equilibria at the borders can be easily recovered following the intuition outlined above.

The interesting part of Figure 3 is the center area. In both the strategic substitutes and the strategic complements cases the center area showed multiple equilibria. In the case where the effects of the interactions are mixed, however, no outcome is possible. The reason can be easily understood from the inequalities in this area:

$$-\beta_1'X_1 - \delta_2 \le \epsilon_1 < -\beta_1'X_1, \qquad -\beta_2'X_2 \le \epsilon_2 < -\beta_2'X_2 - \delta_1.$$

Firm 1 would enter the market only if firm 2 entered as well (δ_2 > 0), but firm 2 would enter the market only if firm 1 stayed out of the market (δ_1 < 0). The two conditions are clearly in contradiction.4 Therefore, in this area no equilibrium exists (in pure strategies).

6 The Coherency problem

As stated previously, the objective of the researcher in the study of discrete games is to estimate the likelihood of all the equilibria. Hence, in the game examined in (4), the objects under study are Pr((0,0)|X), Pr((1,1)|X), Pr((1,0)|X) and Pr((0,1)|X). Assume that the observations are drawn from a random sample and that the δ's in (4) are negative, so that the mapping of the equilibria follows Figure 1. The probabilities of all the outcomes are then:

$$\begin{aligned}
\Pr((0,0)\mid X) &= \Pr(\epsilon_1 < -\beta_1'X_1;\ \epsilon_2 < -\beta_2'X_2)\\
\Pr((1,1)\mid X) &= \Pr(\epsilon_1 \ge -\beta_1'X_1 - \delta_2;\ \epsilon_2 \ge -\beta_2'X_2 - \delta_1)\\
\Pr((1,0)\mid X) &= \Pr(\epsilon_1 \ge -\beta_1'X_1;\ \epsilon_2 < -\beta_2'X_2 - \delta_1)\\
\Pr((0,1)\mid X) &= \Pr(\epsilon_1 < -\beta_1'X_1 - \delta_2;\ \epsilon_2 \ge -\beta_2'X_2)
\end{aligned} \tag{5}$$

In probability theory, the total probability of all the possible outcomes in a sample space must be equal to one. However, given the probabilities in (5) it is possible to see that

$$\Pr((0,0)\mid X) + \Pr((1,1)\mid X) + \Pr((1,0)\mid X) + \Pr((0,1)\mid X) > 1.$$

The reason behind this paradox is directly connected to the multiplicity of equilibria. Intuitively, the area of multiplicity in Figure 1 is double counted in the probability space, because it lies in both Pr((1,0)|X) and Pr((0,1)|X). This paradox is known as the coherency problem, and it represents the primary theoretical obstacle for the researcher.
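A quick simulation makes the double counting visible. The sketch below evaluates the four region probabilities in (5) by Monte Carlo for illustrative parameter values; independent standard normal errors are an assumption made here purely for the illustration.

```python
import numpy as np

# Monte Carlo check that the naive outcome probabilities in (5) sum to more
# than one when delta_1, delta_2 < 0. All parameter values are illustrative,
# and the errors are assumed independent standard normal for simplicity.
rng = np.random.default_rng(0)
n = 1_000_000
b1x1, b2x2 = 0.5, 0.5
d1, d2 = -1.0, -1.0
e1, e2 = rng.normal(size=n), rng.normal(size=n)

p00 = np.mean((e1 < -b1x1) & (e2 < -b2x2))
p11 = np.mean((e1 >= -b1x1 - d2) & (e2 >= -b2x2 - d1))
p10 = np.mean((e1 >= -b1x1) & (e2 < -b2x2 - d1))
p01 = np.mean((e1 < -b1x1 - d2) & (e2 >= -b2x2))

print(p00 + p11 + p10 + p01)   # about 1.15: the multiplicity region is counted twice
```

The excess over one is exactly the probability mass of the multiplicity region, which appears in both p10 and p01.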

Moreover, the coherency problem implies a more pragmatic empirical issue. In the literature of discrete games, the inability to estimate is explained by the mapping between parameters and equilibria not being one-to-one, so that it is not possible to point-identify the equilibria. In other words, the same values of the explanatory variables are related to different values of the dependent variable: the same data explain different outcomes. Hence, without further restrictions it is not possible to estimate the causal relation between the dependent variable and the explanatory variables.

To take a step forward in the analysis of discrete games it is useful to see how the probabilities in (5) can be disentangled, to underline the three components that generate the inability of estimation. As explained above, the coherency problem derives from the multiplicity region where both outcomes (1,0) and (0,1) are possible. It follows that the outcomes (0,0) and (1,1) are unique, and their probabilities can be point-identified.5 Focusing on the outcome (1,0), define

$$\begin{aligned}
U &= \{(\epsilon_1, \epsilon_2) : (1,0) \text{ is the unique outcome}\}\\
  &= \{(\epsilon_1 \ge -\beta_1'X_1 - \delta_2;\ \epsilon_2 < -\beta_2'X_2 - \delta_1) \cup (\epsilon_1 \ge -\beta_1'X_1;\ \epsilon_2 < -\beta_2'X_2)\}\\
M &= \{(\epsilon_1, \epsilon_2) : (1,0) \text{ is a potentially observable outcome}\}\\
  &= \{-\beta_1'X_1 \le \epsilon_1 < -\beta_1'X_1 - \delta_2;\ -\beta_2'X_2 \le \epsilon_2 < -\beta_2'X_2 - \delta_1\}
\end{aligned} \tag{6}$$

The set U represents the area in the ε-space where the outcome (1,0) is the unique equilibrium, while the set M is the multiplicity region where (1,0) is one of the potentially observable outcomes. The two sets are graphically depicted in Figure 4.

Figure 4: Graphical representation of the U set (the shaded area in panel (a)), where the outcome (1,0) is unique, and of the M set (the shaded area in panel (b)), where the outcome (1,0) is one of the potentially observable outcomes.

Exploiting the two sets, it is possible to rewrite the probabilities in (5)

as:

$$\begin{aligned}
\Pr((0,0)\mid X) &= \Pr(\epsilon_1 < -\beta_1'X_1;\ \epsilon_2 < -\beta_2'X_2)\\
\Pr((1,1)\mid X) &= \Pr(\epsilon_1 \ge -\beta_1'X_1 - \delta_2;\ \epsilon_2 \ge -\beta_2'X_2 - \delta_1)\\
\Pr((1,0)\mid X) &= \Pr((\epsilon_1,\epsilon_2) \in U) + \int_M \Pr((1,0)\mid \epsilon_1,\epsilon_2, X)\, dF_{\epsilon_1,\epsilon_2}\\
&= \underbrace{\int_U dF_{\epsilon_1,\epsilon_2}}_{\text{unique outcome}} + \underbrace{\int_M \underbrace{\Pr((1,0)\mid \epsilon_1,\epsilon_2, X)}_{\text{Equilibrium Selection Mechanism}}\, dF_{\epsilon_1,\epsilon_2}}_{\text{potential outcome}}
\end{aligned} \tag{7}$$

where F_{ε_1,ε_2} is the joint distribution of ε_1 and ε_2, generally unknown to the researcher. The probability of observing the outcome (1,0) is now composed of two additive parts: the probability that (1,0) is the unique equilibrium (the first term in the last equation of (7)), which is the probability of the area U, and the probability that (1,0) is the observed outcome in the region of multiplicity M (the second term). In the latter part of the equation, Pr((1,0)|ε_1, ε_2, X) represents the equilibrium selection mechanism: the probability that the outcome (1,0) is 'picked' among the possible multiple equilibria.

The equilibrium selection mechanism is probably the most important element in the study of discrete games. It lies at the source of the multiplicity problem and, once it is accounted for as in (7), the likelihood of the last possible outcome is easily derived:

$$\Pr((0,1)\mid X) = 1 - \Pr((0,0)\mid X) - \Pr((1,1)\mid X) - \Pr((1,0)\mid X).$$

That is, the introduction of the equilibrium selection mechanism (assuming that it is known) automatically solves the coherency problem.

Therefore, the study of discrete games involves three elements unknown to the researcher:

• The set of unknown parameters θ = {β, δ}.

• The equilibrium selection mechanism.

• The joint distribution of the unobservables, F_ε.

Without making assumptions on at least one of these elements, identification and estimation of the parameters of interest in discrete games are not feasible.

6.1 Further Complications

Before investigating the various methods and assumptions through which the literature dealt with the coherency problem, it is important to extend the analysis to more general games.

To explain the study of discrete games, I have so far used the example of a two players game. As I have shown, several problems emerge even in this simple case. The mapping of the equilibria depends on the signs of the δ's. The interaction among players generates a multiplicity region with twofold consequences: the coherency problem and the empirical issue whereby the mapping between parameters and equilibria is not one-to-one. Finally, tracking back the origin of the multiplicity, discrete games contain an equilibrium selection mechanism unknown to the econometrician, which is at the basis of the coherency problem. Two restrictions can be relaxed to extend this example to more general games, and both increase the complexity of the analysis of discrete games.

The first generalization is to allow the game to have N players. Generally, allowing for more than two players does not imply strong complications. However, in the study of discrete games this generalization is not straightforward. Passing from two to N players increases the dimensions of the game. That is, the ε-space that I have used to draw the mapping of the equilibria will be N-dimensional in the N players game. As a consequence, the multiplicity region will contain several sub-regions with possibly different multiple equilibria.

The second generalization is to allow for players' heterogeneity. So far the analysis has been tacit about players' heterogeneity, mainly because this generalization does not affect games with two players. It becomes central, however, when assuming N players.

Players' heterogeneity not only affects the unobservable error terms, but also the interaction among players. Two types of heterogeneity can be assumed:

• Heterogeneity of the identities (δ_i), whereby each player has a constant, different effect on the others.

• Heterogeneity of the effects (δ_ij), which allows each player to have a different effect on each other player.

A simple example clarifies the complications implied by allowing for more than two players and for players' heterogeneity. Imagine an entry game (δ's negative) with three players: one big firm and two firms of moderate size. Assuming only heterogeneity of the identities, it can be shown that there is a sub-region in the region of multiplicity which admits two equilibria: either the big firm enters as a monopolist, or the two small firms enter as a duopoly. The number of entrants is not constant in the region of multiplicity, and this creates problems particularly for a methodology in the literature (examined in Section 7.4) which heavily relies on the number of entrants in the market. Additionally, by further allowing for heterogeneity of the effects, the areas in the ε-space that define the equilibria are not well shaped (they are non-rectangular).
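The monopoly-versus-duopoly multiplicity is easy to reproduce by brute-force enumeration. The sketch below uses purely illustrative numbers (intercepts with the ε's folded in, and a big firm whose entry hurts rivals more than a small firm's entry does) and checks every entry profile of the three-player game:

```python
from itertools import product

# Three-player entry game with heterogeneity of the identities only: firm j's
# entry shifts every rival's pay-off by delta[j]. The intercepts stand in for
# beta_i'X_i + eps_i at one point of the epsilon-space; all numbers are illustrative.
firms = ["big", "small1", "small2"]
intercept = {"big": 1.0, "small1": 1.0, "small2": 1.0}
delta = {"big": -1.5, "small1": -0.6, "small2": -0.6}

def payoff(i, entry):
    """Latent pay-off of firm i given an entry profile (a dict firm -> 0/1)."""
    return intercept[i] + sum(delta[j] * entry[j] for j in firms if j != i)

equilibria = []
for profile in product((0, 1), repeat=3):
    entry = dict(zip(firms, profile))
    # pure Nash equilibrium: every firm's action matches its best response
    if all((entry[i] == 1) == (payoff(i, entry) >= 0) for i in firms):
        equilibria.append(profile)

print(equilibria)   # [(0, 1, 1), (1, 0, 0)]: either the two small firms enter
                    # as a duopoly, or the big firm enters as a monopolist
```

Because one candidate equilibrium has a single entrant and the other has two, the number of entrants alone no longer pins down the outcome in this sub-region.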

In the following section, I will review the techniques that the researchers have adopted in the attempt to solve the coherency problem.

7 Solving the Coherency Problem

I found that the literature has developed seven methodologies to overcome the coherency problem. Namely:

I Impose the Coherency Condition.

II Assume the Equilibrium Selection Mechanism.

III Randomize the Equilibria.

IV Reduce the multiplicity to one event.

V Assume Sequential Decision.

VI Bound Estimation.

VII Estimate the Equilibrium Selection Mechanism.

I will examine these methodologies individually, indicating their benefits and their limitations.

7.1 Impose the Coherency Condition

Early studies (Heckman (1978)) solved the coherency problem by making the model recursive. That is, they imposed the coherency condition: δ1×δ2 = 0.

The coherency condition is a necessary and sufficient condition for the probabilities in (5) to sum to one. Essentially, the coherency condition requires one of the δ’s to be equal to zero. This restriction eliminates the non-uniqueness of the equilibria, solving the coherency problem.

Consider the modified version of the two players game in (4). Assuming δ_1 = 0 and δ_2 < 0, the game becomes:

$$\begin{cases} y_1^* = \beta_1' X_1 + \delta_2 y_2 + \epsilon_1 \\ y_2^* = \beta_2' X_2 + \epsilon_2 \\ y_i = 1 \ \text{if } y_i^* \ge 0, \quad y_i = 0 \ \text{otherwise} \end{cases}$$

Figure 5: Equilibria in ε-space with δ_1 = 0 and δ_2 < 0

Figure 5 depicts the mapping of the equilibria of this modified game. There are no areas of multiplicity. Each equilibrium is unique and its probability can be point-identified. Thus, the sum of the probabilities of the equilibria will be equal to one.
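Estimation of such a recursive system is straightforward. The sketch below simulates data from the recursive game and recovers the parameters with two ordinary probits, under the additional simplifying assumption (made here only for the illustration) that ε_1 and ε_2 are independent standard normal errors, which makes y_2 exogenous in firm 1's equation.

```python
import numpy as np
import statsmodels.api as sm

# Simulation sketch of the recursive game of Section 7.1 (delta_1 = 0,
# delta_2 < 0), assuming independent standard normal errors so that y2 is
# exogenous in firm 1's equation. Parameter values are illustrative.
rng = np.random.default_rng(0)
n = 5_000
beta1, beta2, delta2 = 1.0, 0.5, -1.5

x1, x2 = rng.normal(size=n), rng.normal(size=n)
e1, e2 = rng.normal(size=n), rng.normal(size=n)

y2 = (beta2 * x2 + e2 >= 0).astype(int)                 # firm 2 ignores firm 1
y1 = (beta1 * x1 + delta2 * y2 + e1 >= 0).astype(int)   # firm 1 reacts to firm 2

# With the recursive structure, each equation can be estimated as a standard probit.
fit2 = sm.Probit(y2, sm.add_constant(x2)).fit(disp=False)
fit1 = sm.Probit(y1, sm.add_constant(np.column_stack([x1, y2]))).fit(disp=False)
print(fit2.params)   # approx (0, beta2)
print(fit1.params)   # approx (0, beta1, delta2)
```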

The coherency condition is, however, a very strong assumption. It implies that one player is affected by the action of the other, while the latter is not affected by the action of the first – as if the entry of one competitor did not affect the profits of a firm.

This methodology forces the game to have unique outcomes. It does not overcome the coherency problem; it avoids it. Moreover, the coherency condition is not robust to a generalization to N players, unless the researcher is willing to assume that N − 1 firms are not affected by the other firms' entries.


7.2 Assume an Equilibrium Selection Mechanism

In certain circumstances, the researcher can assume the equilibrium selection mechanism. That is, the researcher sets a rule whereby a unique equilibrium is picked in the multiplicity region. The rule can be determined by economic reasoning, or by common sense.

For example, imagine that the researcher is interested in the interactions across individuals that consume a good; specifically, the researcher is studying the individuals' joint consumption. Each individual derives a positive utility from consuming the good, and joint consumption is beneficial for all the individuals, i.e. they increase their utility by consuming the good jointly. Therefore, the individuals prefer to consume the good with someone else rather than alone. It follows that two individuals would rather consume the good together than not consume the good at all.

The example fits perfectly the discrete game with strategic complements strategies (δ's > 0) examined in Section 5. Figure 2 (reported again below) shows the mapping of the equilibria in the two players discrete game assuming the players' strategies are strategic complements. In the multiplicity region, two equilibria are possible: (1,1) or (0,0). Following the example above, either the two individuals consume jointly ((1,1)) or they do not consume at all ((0,0)). Since the δ's are positive, both individuals would be better off with (1,1) (joint consumption) rather than (0,0) (no consumption). In fact, the equilibrium (1,1) Pareto-dominates the equilibrium (0,0). If the individuals were able to coordinate their decisions, they would choose to consume jointly.

The researcher can assume an equilibrium selection mechanism whereby only Pareto-dominant equilibria are picked in the multiplicity region. This is the strategy followed by Hartmann (2010) in his study about golf players. The study analyzes the effect of playing golf together rather than alone, within groups of golf players. It develops a discrete game with strategic complements strategies, and Hartmann (2010) assumes that in the region of multiplicity the golf players will coordinate to play golf together rather than not playing.

Figure 2 (repeated): Equilibria in ε-space with δ_1 > 0 and δ_2 > 0

Other equilibrium selection mechanisms can be assumed without imposing the coherency condition. For example, one that picks the equilibrium which maximizes the joint pay-off of the players, or one which maximizes the social welfare, etc.

Assuming an equilibrium selection mechanism is robust to the extensions to N players and to players' heterogeneity, as the uncertainty in the multiplicity region is resolved by assumption.

However, this strategy is an ad-hoc solution for discrete games. It solves the coherency problem on a case by case basis, often with several shortcomings. The assumption that only Pareto-dominant equilibria are picked in the region of multiplicity implies that there are no outside options for the players, which may be unobservable to the researcher but still influence the players' actions.

Following the example about joint consumption, assuming that the players will always choose joint consumption in the region of multiplicity rules out the possibility that players may decide not to consume the good jointly, for example because of an outside option that the researcher does not observe.

Therefore, assuming an equilibrium selection mechanism not only implies a loss of information but may also introduce a bias in the estimation.

Furthermore, this strategy of solving the coherency problem generally relies on assumptions about the signs of the interactions. Assuming that only Pareto-dominant equilibria are selected in the multiplicity region is not possible if there are no Pareto-dominant equilibria, as in the case of strategic substitutes strategies where (in the two players example) the region of multiplicity admits either the equilibrium (1,0) or the equilibrium (0,1).

7.3 Randomize the Equilibria

Randomizing the equilibria is, to a certain degree, similar to assuming an equilibrium selection mechanism. The researcher assigns a probability to each equilibrium in the multiplicity region. Thus, the researcher specifies the equilibrium selection mechanism and the coherency problem is solved.

The researcher can assign different probabilities to each equilibrium in the multiplicity region (i.e. the researcher can assume that some equilibria are more likely than others). Alternatively, the researcher can assume that all the equilibria in the multiplicity region are equally likely.

The latter approach has been adopted by Soetevent and Kooreman (2007) in their study on students' behavior. The authors pay particular attention to the maximum number of pure Nash equilibria the discrete game admits, and whenever an equilibrium lies in the multiplicity region, its probability is set to 1/E, with E representing the number of equilibria of the game. Therefore, the sum of the probabilities of all the equilibria will be equal to one.

In every discrete game, the number of possible outcomes is 2^N, with N being the number of players in the game. This is the number of all the possible sets of actions that could represent an equilibrium of the game. However, not all of them will be pure strategies equilibria; some will lie only on the support of mixed strategies equilibria. Thus, restricting the scope of the analysis to pure strategies equilibria only will always deliver a maximum number of equilibria below 2^N.

The authors show that the maximum number of equilibria depends not only on the number of players in the game, but also on whether the players' strategies are strategic complements or strategic substitutes. The maximum number of equilibria grows linearly in the number of players in the case of strategic complements strategies, while it grows exponentially in the case of strategic substitutes strategies. Moreover, in the latter case, the maximum number of equilibria differs depending on whether the number of players is odd or even.

The authors exploit a simulation based method of estimation. Random draws are taken from F_ε, the joint distribution of the unobservables, which is assumed to be known. For each random draw, the number of pure Nash equilibria (E) is computed.
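The following sketch illustrates the idea of such a frequency simulator for the two-player game (4): for each draw of the unobservables the pure Nash equilibria are enumerated and each one receives weight 1/E. The distributional assumption (independent standard normals) and all parameter values are illustrative and are not those of Soetevent and Kooreman (2007); with both δ's negative, every draw has at least one pure equilibrium, so the weights are well defined.

```python
import numpy as np
from itertools import product

def pure_equilibria(b1x1, b2x2, d1, d2, e1, e2):
    """Pure-strategy Nash equilibria of game (4) for one draw of (e1, e2)."""
    eqs = []
    for y1, y2 in product((0, 1), repeat=2):
        p1 = b1x1 + d2 * y2 + e1
        p2 = b2x2 + d1 * y1 + e2
        if (y1 == 1) == (p1 >= 0) and (y2 == 1) == (p2 >= 0):
            eqs.append((y1, y2))
    return eqs

def simulated_outcome_probs(b1x1, b2x2, d1, d2, n_draws=100_000, seed=0):
    """Frequency simulator that gives every equilibrium in a draw weight 1/E."""
    rng = np.random.default_rng(seed)
    probs = {(0, 0): 0.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.0}
    for e1, e2 in rng.normal(size=(n_draws, 2)):
        eqs = pure_equilibria(b1x1, b2x2, d1, d2, e1, e2)
        for eq in eqs:                        # each equilibrium gets weight 1/E
            probs[eq] += 1.0 / (len(eqs) * n_draws)
    return probs

probs = simulated_outcome_probs(0.5, 0.5, -1.0, -1.0)
print(probs, sum(probs.values()))             # the four probabilities sum to one
```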

However, the maximum number of pure Nash equilibria also depends on the heterogeneity of the players. Soetevent and Kooreman (2007) assume that the effect of the interaction is constant among all players. In other words, they assume no heterogeneity, and their estimate measures an "average" effect across players.

This restriction also depends on data limitations. To estimate the effect that players exert on each other in the presence of players' heterogeneity, the data have to show the same player in different games, sometimes with different other players. In an entry game, this is equivalent to observing the same firm in different geographical markets competing, at times, with different other firms. However, in a study of social interactions such richness of data is unlikely. For example, a student will be in the same class and will interact with the same classmates. This data limitation implies that the researcher can only estimate an average effect, and it will be impossible to disentangle players' heterogeneity.

In general, however, restricting the multiple equilibria to have the same probability may be a very strong assumption. The researcher should substantiate this assumption with some evidence. For example, the researcher could argue that the players exert symmetric effects in the region of multiplicity. Furthermore, the methodology also depends on assumptions about the signs of the δ's, and on assumptions about the distribution of the unobservables.

However, if the data limitations only allow estimating the average effect across players, and if it is possible to predict the signs of the players' effects, then randomizing the equilibria is a valid methodology to overcome the coherency problem.

7.4 Reduce the Multiplicity into one Event

The researcher can also estimate the multiplicity region as if it were one outcome. In the case with two players and strategic substitutes strategies examined in Section 5, the researcher can estimate the likelihood of the event "Either (1,0) or (0,1)".

This technique has been adopted in the seminal works on entry games of Bresnahan and Reiss (1990) and Bresnahan and Reiss (1991b). These studies focused on the number of entrants, rather than their identities. The approach circumvents the coherency problem because the number of entrants, in the case of strategic substitutes strategies, is constant within each region of the probability space, including the region of multiplicity.

Recall that, in the discrete game with two players and strategic substitutes strategies, the region of multiplicity admits only one entrant in the market. Figure 6 shows that in this case the number of firms that enter the market is unique in each region of the ε-space. In the bottom-left area no firm enters the market, while in the top-right area both firms enter. In all the other regions, including the multiplicity region, only one firm enters. Therefore, if the subject under study is the number of entrants in a market, three outcomes are possible: 0, 1 or 2 firms enter the market. Despite the region of multiplicity, the probabilities of these outcomes can be point-identified.
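Written out for the two-player case, the number-of-entrants probabilities follow directly from the thresholds in Figure 6 (this worked version assumes, as above, that the δ's are negative):

$$\begin{aligned}
\Pr(N = 0 \mid X) &= \Pr(\epsilon_1 < -\beta_1'X_1;\ \epsilon_2 < -\beta_2'X_2)\\
\Pr(N = 2 \mid X) &= \Pr(\epsilon_1 \ge -\beta_1'X_1 - \delta_2;\ \epsilon_2 \ge -\beta_2'X_2 - \delta_1)\\
\Pr(N = 1 \mid X) &= 1 - \Pr(N = 0 \mid X) - \Pr(N = 2 \mid X),
\end{aligned}$$

so the three probabilities sum to one by construction and no equilibrium selection mechanism is needed.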

Bresnahan and Reiss (1991b) generalize this intuition by allowing for N players. They exploit the fact that the threshold condition assumed in the two players case (a positive pay-off function) can hold also for N players.6

Other authors extended this analysis to study the competitive effects that the entry of firms in different – but connected – markets exert on one another. Most notably, Schaumans and Verboven (2008) studied the effects on the likelihood of entry across pharmacists and physicians; Cleeren, Verboven, Dekimpe, and Gielens (2010) studied the effect that the entry of a discounter has on the likelihood of entry of a supermarket, and vice versa.

Figure 6: Number of entrants in different regions of the probability space

Although this methodology produces appreciable results in certain studies, it relies heavily on a symmetry assumption. That is, it does not allow for heterogeneity of the firms/players in the game. Indeed, in some markets (in the case of an entry game) the heterogeneity of the firms may have only a marginal effect on the outcome.7 However, in other scenarios the identity of the players can drastically change the outcome. As mentioned previously, allowing for player heterogeneity may change the structure of the multiplicity region so that the number of firms that enter the market is not constant.

In addition, focusing on the number of entrants in a market relies on assumptions about the signs of the δ's. As shown in Figure 2, if the players' strategies are strategic complements, the multiplicity region allows the outcomes (1,1) or (0,0), i.e. either both firms enter the market or both stay out. The number of firms that enter the market is then not constant, hence it is not possible to employ this technique.


7.5 Assume Sequential Decision

Another seminal work in the literature of discrete games is the study of Berry (1992) on entry in the airline market. Berry (1992) recognizes the importance of players' heterogeneity in discrete games, although he underlines the challenges that allowing for players' heterogeneity implies for the estimation procedure.

Berry (1992) assumes a sequential order of actions in the discrete game to make inferences about how the players' identities influence the likelihood of entry. In doing so, he changes the nature of the game from simultaneous to sequential, and he determines the order in which the players move on the basis of observable covariates.

To understand how assuming a sequential game solves the coherency problem, recall the case of strategic substitutes strategies depicted in Figure 1. The region of multiplicity admits two equilibria, (1,0) or (0,1), as the ε of each player lies in the following intervals:

$$-\beta_1'X_1 \le \epsilon_1 < -\beta_1'X_1 - \delta_2, \qquad -\beta_2'X_2 \le \epsilon_2 < -\beta_2'X_2 - \delta_1.$$

As explained in Section 5.1, in this region of the probability space each firm would enter the market if and only if the other one did not. However, if one assumes that firm 1 moves first, it would foresee the reaction of firm 2 (No Entry) in the region of multiplicity, and therefore it would enter. More formally, if firm 1 moves first, it will enter the market if:

$$\epsilon_2 < -\beta_2'X_2 - \delta_1$$

and

$$-\beta_1'X_1 \le \epsilon_1 < -\beta_1'X_1 - \delta_2.$$

Firm 1 is willing to enter the market if firm 2 does not enter once firm 1 has entered (the first condition), and if entry is profitable (the second condition). The second condition always holds because the δ_2 term is equal to zero given the first condition: with firm 2 out of the market, ε_1 ≥ −β_1'X_1 is sufficient for firm 1's entry.

Figure 7: Game equilibria in case player 1 moves first

Figure 7 depicts the equilibria of the game if firm 1 moves first. The area of multiplicity now has a clear outcome. Hence, the probabilities of all the equilibria can be point-identified and the coherency problem is solved.

The probability that firm 1 enters the market if it moves first is always greater than or equal to the probability of the same action under simultaneous decisions. In a way, assuming sequential decisions in discrete games replicates the first mover advantage, which can be appropriate in scenarios with an incumbent already present in the market and a possible new entrant.8 Additionally, this methodology is robust to the generalization to N players, as early players will always gain the advantage over the multiplicity region relative to late players, and all the outcome probabilities can be point-identified.
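In terms of the decomposition in (7), letting firm 1 move first amounts to an equilibrium selection mechanism that assigns the whole multiplicity region to (1,0), so that

$$\Pr((1,0)\mid X) = \int_U dF_{\epsilon_1,\epsilon_2} + \int_M dF_{\epsilon_1,\epsilon_2},$$

and the four outcome probabilities again sum to one.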

The methodology of assuming a sequential rather than a simultaneous game to solve the coherency problem is similar to the methodology of assuming an equilibrium selection mechanism discussed in Section 7.2. The implied selection mechanism in a sequential game is that the region of multiplicity in the probability space is always assigned to the early movers.

Although it may be useful in some cases, changing the form of the game from simultaneous to sequential is a strong assumption that should be substantiated with enough evidence showing that the interactions between the players in the case under study are indeed sequential.

7.6 Bound estimation

In the literature of discrete games, bound estimation is a methodology developed by Tamer (2003) and further extended by Ciliberto and Tamer (2009) to make inferences about the effects that the players exert on one another in the presence of multiple equilibria.

The methodology estimates bounds around the probabilities of the equilibria that lie in the multiplicity region of the probability space, without imposing any restrictions on the equilibrium selection mechanism.

To see this, consider the bivariate discrete game with strategic substitutes strategies used in the examples of the previous sections. The multiplicity region admits (1,0) and (0,1) as possible equilibria. Following the graphical representation of the regions of the probability space presented in Section 6, bound estimation creates the confidence interval of, for example, P(1,0) by estimating the areas depicted in Figure 8.9

The lower bound includes the region of the ε-space where the outcome (1,0) is unique, and is equivalent to the set U defined in (6). The upper bound is composed of the region where the outcome (1,0) is unique and the region where (1,0) is one of multiple equilibria; hence it is equivalent to U ∪ M as defined in (6).

More formally, the identification strategy is based upon the following inequality:

$$\int_U dF_{\epsilon_1,\epsilon_2} \;\le\; P(1,0) \;\le\; \int_U dF_{\epsilon_1,\epsilon_2} + \int_M dF_{\epsilon_1,\epsilon_2}$$

9 For clarity of exposition, I suppress the dependency of the probabilities on the observable covariates. However, it is useful to keep in mind that all the probabilities in this section are conditional on their covariates (i.e. P(1,0) is P((1,0)|X)).

Figure 8: Lower and upper bounds of the bound estimation

where F_{ε_1,ε_2} is the joint distribution of the error terms, which is assumed to be known to the econometrician.10 Note that the sets U and M are functions of the observable covariates X and of the set of unknown parameters θ.

Bounds are symmetric for the case of P(0,1), and the methodology can also be applied to the case of strategic complements strategies (Section 5.2) and, to a certain extent, to the case of mixed effects (Section 5.3).11
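As an illustration of what the two integrals measure, the sketch below computes the lower bound (the probability of U) and the upper bound (the probability of U ∪ M) for P(1,0) by Monte Carlo, again under illustrative parameter values and independent standard normal errors; it is a sketch of the bounds themselves, not of Ciliberto and Tamer's estimation procedure.

```python
import numpy as np

# Monte Carlo evaluation of the lower and upper bounds on P(1,0) for the
# two-player game with strategic substitutes. Illustrative parameter values,
# with independent standard normal errors assumed for the joint distribution F.
rng = np.random.default_rng(0)
n = 1_000_000
b1x1, b2x2 = 0.5, 0.5
d1, d2 = -1.0, -1.0
e1, e2 = rng.normal(size=n), rng.normal(size=n)

# U: (1,0) is the unique equilibrium (the union of two rectangles in (6)).
in_U = ((e1 >= -b1x1 - d2) & (e2 < -b2x2 - d1)) | ((e1 >= -b1x1) & (e2 < -b2x2))
# M: the multiplicity region, where (1,0) is one of two possible equilibria.
in_M = (-b1x1 <= e1) & (e1 < -b1x1 - d2) & (-b2x2 <= e2) & (e2 < -b2x2 - d1)

lower, upper = np.mean(in_U), np.mean(in_U | in_M)
print(lower, upper)   # P(1,0) must lie between these two numbers
```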

As noted by Tamer (2003), the confidence interval obtained through this methodology is much tighter compared to the confidence interval estimated by "reducing the multiplicity into one event" (Section 7.4), which is:

$$0 \le P(1,0) \le 1 - P(1,1) - P(0,0).$$

The identification strategy employed by Tamer (2003), and subsequently by Ciliberto and Tamer (2009), is identification at infinity in conjunction with an exclusion restriction. Following the example of the game with two players and strategic substitutes strategies, let x_{1i} ∈ X_i be a covariate that enters i's pay-off function but not j's (the exclusion restriction).10 Additionally, assume that x_{1i} has large enough support. Then:

$$\Pr((0,0)\mid X) = \Pr(\epsilon_1 \le -\beta_1'X_1;\ \epsilon_2 \le -\beta_2'X_2) \;\xrightarrow{\;x_{11}\to-\infty\;}\; \Pr(\epsilon_2 \le -\beta_2'X_2) \tag{8}$$

10 Ciliberto and Tamer (2009) assume that the joint distribution of the unobservables is known up to a finite dimensional parameter which is part of the set of unknown parameters θ. In other words, they restrict the shape of the distribution to be known, although the parameters of the distribution (mean and variance) are part of the estimation.

11 In case of mixed effects, the region of multiplicity does not allow for any possible solution. This holds true if the researcher assumes only pure strategies equilibria. However, if the researcher also allows for mixed strategies equilibria, the support of the multiplicity region contains a mixed strategies equilibrium.

Intuitively, by driving the player-specific covariate to infinity (identification at infinity), the condition for player 1 not to enter the market (ε_1 ≤ −β_1'X_1) will always hold. In other words, player 1's dominant strategy will always be not to enter the market. Hence, the researcher is able to isolate the condition for player 2 and to identify the marginal distribution of ε_2 and β_2. Driving player 2's specific covariate x_{12} to infinity will point-identify the marginal distribution of ε_1 and β_1. At this point, the same identification strategy can be applied to P(1,1) to identify δ_1 and δ_2 as well as the joint distribution of (ε_1, ε_2).

Another important feature of bound estimation is the absence of any assumption on the equilibrium selection mechanism. The methodology circumvents the coherency problem by remaining agnostic about the equilibrium selection mechanism and estimating the bounds around the multiplicity region.

Ciliberto and Tamer (2009) extend the insights of Tamer (2003) to games with N players. Remarkably, their methodology is also robust to players' heterogeneity (of the identities and of the effects), and does not require any assumption on the signs of the δ's – the players' effects on one another.

The estimation technique employed by Ciliberto and Tamer (2009) is based on estimation through simulation. The upper and lower bound depicted in Figure 8 are computed, based on the covariates in the data, for every possible equilibrium of the game. This makes it possible to identify a set of unknown parameters (i.e. a set of θ's) that are consistent with the model and with the observations in the data. Since they estimate a multitude of parameter vectors that are consistent with the observables, in contrast to point-identifying one parameter vector, the identification method is said to partially identify the parameters, and it delivers confidence intervals for each of the unknown parameters. Note that each parameter vector estimated through this technique corresponds to an equilibrium selection mechanism that is consistent with the data.

Bound estimation is a very promising methodology in the study of discrete games. Although it is not able to point-identify the likelihood of entry or the effect that each player exerts on the others, the boundaries of these effects can be extremely useful, particularly when the signs of the δ's are unknown. For example, in a study about the effect that playing truant has on peers, if both the upper and the lower bound are positive it can be concluded that one classmate playing truant increases the likelihood that her peers will play truant as well.
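A stylized sketch of the simulation-based approach described above is given below: for each candidate value of the interaction parameter, the implied lower and upper bounds on P(1,0) are simulated, and the candidate is kept only if the “observed” frequency falls inside the bounds in every market. All numbers, the single-parameter grid search, and the acceptance rule are simplifying assumptions made for illustration; the actual estimator in Ciliberto and Tamer (2009) minimizes a distance between the empirical choice probabilities and the predicted bounds and builds confidence regions around the resulting set.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sim = 200_000
e1, e2 = rng.standard_normal(n_sim), rng.standard_normal(n_sim)

def bounds_p10(b1X1, b2X2, delta):
    """Simulated lower/upper bound on P(1,0); for brevity d1 = d2 = delta."""
    t1, t2 = -b1X1, -b2X2
    eq_10 = (e1 >= t1) & (e2 < t2 - delta)   # (1,0) is a Nash equilibrium
    eq_01 = (e2 >= t2) & (e1 < t1 - delta)   # (0,1) is a Nash equilibrium
    return np.mean(eq_10 & ~eq_01), np.mean(eq_10)

# Covariate indices (beta_1*X_1, beta_2*X_2) for three hypothetical markets.
markets = [(0.5, 0.3), (0.2, 0.6), (0.8, 0.1)]

# Generate "observed" frequencies from a true delta and an equilibrium selection
# rule unknown to the econometrician (here: the midpoint of the bounds, i.e.
# (1,0) is played half of the time in the multiplicity region).
true_delta = -0.7
p10_hat = [np.mean(bounds_p10(b1, b2, true_delta)) for b1, b2 in markets]

# Partial identification: keep every candidate delta whose implied bounds
# contain the observed frequency in every market.
identified_set = []
for delta in np.linspace(-1.5, 0.0, 31):
    ok = True
    for (b1, b2), p_hat in zip(markets, p10_hat):
        lo, up = bounds_p10(b1, b2, delta)
        if not (lo <= p_hat <= up):
            ok = False
            break
    if ok:
        identified_set.append(round(float(delta), 2))

print("True delta:         ", true_delta)
print("Accepted candidates:", identified_set)
```

In the actual application, θ includes all pay-off parameters, the bounds are simulated for every outcome, and inference accounts for sampling error in the observed frequencies.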

However, one shortcoming of bound estimation arises when the upper and lower bounds have different signs. In this case, the uncertainty about the overall effect that an action of one player has on the others remains; in particular, it is not possible to establish whether the effect differs from zero.

A further limitation of bound estimation, which, however, is very difficult to overcome, is its dependence on restrictions on the joint distribution of the error terms (F). Relaxing this assumption is, in principle, possible, as the researcher can use nonparametric estimation techniques. However, nonparametric estimation in conjunction with partially identified parameters is a field of study that still needs to be explored.

7.7 Estimate the Equilibrium Selection Mechanism

One of the most recent methodologies to surpass the coherency problem in the literature of discrete games is to directly estimate the equilibrium selection mechanism of the game. The technique was introduced by Bajari, Hong, and Ryan (2010), who generalize the early intuition of Bjorn and Vuong (1984).

The structure of the methodology in Bajari, Hong, and Ryan (2010) is slightly different from the general structure of discrete games examined so far. The technique allows for both pure strategies and mixed strategies equilibria. Additionally, the focus of the study is the estimation of structural parameters that determine the latent variable equation (such as the estimation of marginal costs in a profit function), rather than the estimation of the effects across players (the δ's). A detailed analysis of the study by Bajari, Hong, and Ryan (2010) is beyond the scope of this paper. However, I will outline the intuition behind this methodology's solution to the coherency problem, as well as its main shortcomings.

In essence, Bajari, Hong, and Ryan (2010) parameterize the equilibrium selection mechanism and include it in the model to be estimated. For reasons explained below, the parameterization of the equilibrium selection mechanism includes coefficients that indicate which equilibria are more likely to be played. For example, they estimate whether pure strategies equilibria are more likely than mixed strategies equilibria, or whether the equilibrium selection mechanism favors equilibria that maximize the joint profits of the players (e.g. as in coordination games) over equilibria that yield lower joint profits.

In the following, I present a simplified version of the model in order to remain consistent with the terminology and the symbols employed in this paper.

Let the pay-off function of player i have the same form as the one presented in Equation (2), that is:

\[
u_i(a, X_i, \theta) = f_i(a, X_i, \theta) + \epsilon_i(a) \tag{9}
\]

where a = {ai, a−i} is a vector of all the actions of the N players, with a ∈ A, the set of all possible outcomes (not equilibria) of the game, and let u = {ui, u−i} be the vector of the players' payoffs.^12

^12 Note that for a game with N players and two strategies each, the cardinality is #A = 2^N. That is, the number of elements in the set A is 2^N.

For a given realization of u there can be more than one equilibrium. This is the case when the mapping between parameters and equilibria is not one-to-one, and the discrete game shows multiple equilibria. Hence, keeping u fixed, let E(u) be the set of Nash equilibria of the game, and define λ(y|E(u)) as the equilibrium selection mechanism: the probability that the equilibrium y ∈ E(u) is selected from the (potentially non-singleton) set E(u). Here the definition of the equilibrium selection mechanism is slightly different from the one presented in Section 6, as it includes both the probability that y is a unique equilibrium and the probability that y is picked among the multiplicity of equilibria. It follows that for all u:

\[
\sum_{y \in E(u)} \lambda(y \mid E(u)) = 1.
\]

Given the above, the probability of observing the equilibrium y is then:

\[
P(y \mid X, \theta, \lambda) = \int \lambda(y \mid E(u)) \, \mathbf{1}[y \in E(u)] \, dF \tag{10}
\]

where F is the joint distribution of the error terms and 1[∗] is equal to 1 if the argument [∗] is true.

More generally, and following Bajari, Hong, and Ryan (2010), the probability of observing the outcome a in a play of the game is:

\[
P(a \mid X, \theta, \lambda) = \int \sum_{y \in E(u)} \Big\{ \lambda(y \mid E(u)) \prod_{i=1}^{N} y(a_i) \Big\} \, dF
\]

where F is the joint distribution of the error terms. The probability of observing a is composed of two components: the probability that y is selected in E(u) (i.e. the equilibrium selection mechanism λ(·)), and the probability that a is observed given the equilibrium y (i.e. the product ∏ y(ai), where y(ai) denotes the probability that equilibrium y assigns to player i choosing action ai). Essentially, this general form is designed to allow for mixed strategies equilibria. In particular, the outcome a can be observed both when y is a pure strategies equilibrium and when y is a mixed strategies equilibrium. This explains why the second component is necessary in the formula.
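As a hypothetical illustration of the formula, suppose that for a given draw of u the set of equilibria is E(u) = {(1,0), (0,1), m}, where m is the mixed strategies equilibrium in which player 1 enters with probability p1 and player 2 enters with probability p2. The term inside the integral for the outcome a = (1,0) is then:

\[
\sum_{y \in E(u)} \lambda(y \mid E(u)) \prod_{i=1}^{2} y(a_i)
= \lambda\big((1,0) \mid E(u)\big)\,(1 \cdot 1)
+ \lambda\big((0,1) \mid E(u)\big)\,(0 \cdot 0)
+ \lambda\big(m \mid E(u)\big)\, p_1 (1 - p_2),
\]

so the outcome (1,0) can be generated either because the pure strategies equilibrium (1,0) is selected, or because the mixed strategies equilibrium is selected and the realized actions happen to be (1,0); the pure strategies equilibrium (0,1) can never generate this outcome.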

Whether or not the researcher allows for mixed strategies equilibria, the estimation of these probabilities is not feasible, because there are more unknown parameters than equations. In other words, there are not enough degrees of freedom.

To reduce the number of unknowns in the model, Bajari, Hong, and Ryan (2010) restrict the equilibrium selection mechanism to be a function of a vector of parameters γ that describes the likely behavior of the equilibrium selection mechanism (i.e. λ(y|E(u)) becomes λ(y|E(u); γ)). As mentioned before, they are able, for example, to estimate whether pure strategies equilibria are more likely than mixed strategies equilibria. Additionally, they impose an exclusion restriction similar to Ciliberto and Tamer (2009), and the parameters of interest (i.e. the parameters governing the pay-off function, θ, and the parameters governing the equilibrium selection mechanism, γ) are estimated through simulation.^13
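To fix ideas, the following is a minimal Python sketch of what a parameterized selection mechanism λ(y|E(u); γ) might look like. The logit functional form, the two equilibrium features, and all numbers are assumptions made for illustration; they are not the specification estimated by Bajari, Hong, and Ryan (2010).

```python
import numpy as np

def selection_prob(equilibria, gamma):
    """
    Hypothetical parameterization of lambda(y | E(u); gamma).

    `equilibria` is a list of dicts, one per y in E(u), with two features:
      - 'pure': 1 if y is a pure strategies equilibrium, 0 if mixed
      - 'joint_payoff': the sum of the players' expected pay-offs under y
    gamma = (g_pure, g_joint) indexes how strongly the mechanism favours
    pure strategies equilibria and joint-pay-off maximizers.
    """
    g_pure, g_joint = gamma
    scores = np.array([g_pure * eq["pure"] + g_joint * eq["joint_payoff"]
                       for eq in equilibria])
    weights = np.exp(scores - scores.max())   # logit weights (numerically stabilized)
    return weights / weights.sum()            # probabilities sum to one over E(u)

# Example: a multiplicity region with two pure equilibria and one mixed one.
E_u = [
    {"name": "(1,0)", "pure": 1, "joint_payoff": 1.2},
    {"name": "(0,1)", "pure": 1, "joint_payoff": 0.9},
    {"name": "mixed", "pure": 0, "joint_payoff": 0.4},
]

gamma = (0.8, 1.5)   # made-up values; in practice gamma is estimated
for eq, p in zip(E_u, selection_prob(E_u, gamma)):
    print(f"lambda({eq['name']} | E(u)) = {p:.3f}")
```

Nesting such a λ(·; γ) inside the probability formulas above and simulating the integral over the error distribution yields the outcome probabilities that enter the estimation, so that θ and γ are estimated jointly.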

This methodology is a significant step forward in the study of discrete games. As the authors point out, estimating the equilibrium selection mechanism allows the researcher to simulate the model and, therefore, to produce counterfactuals.

However, this methodology still presents some limitations. First, the set of possible equilibria for a given u, E(u), has to be computed, which is not a trivial exercise. Second, Bajari, Hong, and Ryan (2010) impose several restrictions on the joint distribution of the unobservables in the identification procedure and in the simulation. Third, the inclusion of the “descriptive” parameters γ in the equilibrium selection mechanism may bias the estimation of the other parameters θ. In principle, the latter set of unknown parameters also affects the equilibrium selection mechanism. Hence, the estimates of θ can be biased if the researcher does not account for the effects that changes in the equilibrium selection mechanism have on θ, and vice versa.

8 Conclusion

In this paper, I examined the challenges the researcher faces in the study of discrete games. I analyzed the problem of multiple equilibria, illustrating the different types of multiplicity with simple examples. I disentangled the coherency problem to retrieve the three unknowns that generate the theoretical paradox of discrete games, namely: the set of unknown parameters θ, the equilibrium selection mechanism, and the joint distribution of the unobservables F.

My investigation of the literature identified seven methodologies that researchers have employed in the attempt to solve the coherency problem. I examined each of them individually, indicating the novelties it introduced and its limitations.

^13 However, Bajari, Hong, and Ryan (2010) do not use identification at infinity and an exclusion restriction in conjunction.


In my review, I tried to follow a chronological order, but I also tried to rank the methodologies according to the innovation they introduced in the literature. I find that in just two decades, since researchers started to adopt methodologies to solve the coherency problem in the early 1990s, the literature on discrete games has progressed remarkably. New methods of identification and new techniques of estimation through simulation have been adopted and improved. This demonstrates the importance of “collective knowledge”, which fuels contemporary research.

Despite the considerable progress during the last twenty years, the study of discrete games still presents several limitations and shortcomings. However, through the continued accumulation of collective knowledge, the economic discipline will surely find a definitive solution to the coherency problem. It is just a matter of time.


References

Bajari, P., H. Hong, and S. P. Ryan (2010): “Identification and estimation of a discrete game of complete information,” Econometrica, 78(5), 1529–1568.

Berry, S., and E. Tamer (2006): “Identification in models of oligopoly entry,” Econometric Society Monographs, 42, 46.

Berry, S. T. (1992): “Estimation of a Model of Entry in the Airline Industry,” Econometrica: Journal of the Econometric Society, pp. 889–917.

Bjorn, P., and Q. Vuong (1984): “Simultaneous models for dummy endogenous variables: a game theoretic formulation with an application to household labor force participation,” California Institute of Technology.

Bresnahan, T. F., and P. C. Reiss (1990): “Entry in Monopoly Markets,” Review of Economic Studies, 57(4), 531–553.

(1991a): “Empirical models of discrete games,” Journal of Econometrics, 48, 57–81.

(1991b): “Entry and competition in concentrated markets,” Journal of Political Economy, pp. 977–1009.

Brock, W. A., and S. N. Durlauf (2001): “Discrete choice with social interactions,” The Review of Economic Studies, 68(2), 235–260.

(2007): “Identification of binary choice models with social interactions,” Journal of Econometrics, 140(1), 52–75.

Ciliberto, F., and E. Tamer (2009): “Market structure and multiple equilibria in airline markets,” Econometrica, 77(6), 1791–1828.

Cleeren, K., F. Verboven, M. G. Dekimpe, and K. Gielens (2010): “Intra- and interformat competition among discounters and supermarkets,” Marketing Science, 29(3), 456–473.

Hartmann, W. R. (2010): “Demand estimation with social interactions and the implications for targeted marketing,” Marketing Science, 29(4), 585–601.


Hausman, J. A., and D. A. Wise (1978): “A conditional probit model for qualitative choice: Discrete decisions recognizing interdependence and heterogeneous preferences,” Econometrica, 46, 403–426.

Heckman, J. J. (1978): “Dummy Endogenous Variables in a Simultaneous Equation System,” Econometrica, 46(4), 931–959.

Jia, P. (2008): “What Happens When Wal-Mart Comes to Town: An Empirical Analysis of the Discount Retailing Industry,” Econometrica, 76(6), 1263–1316.

Manski, C. F. (1993): “Identification of endogenous social effects: The reflection problem,” The Review of Economic Studies, 60(3), 531–542.

(2000): “Economic Analysis of Social Interactions,” The Journal of Economic Perspectives, 14(3), 115–136.

Manuszak, M. D., and A. Cohen (2004): “Endogenous market structure with discrete product differentiation and multiple equilibria: An empirical analysis of competition between banks and thrifts,” Discussion paper, Carnegie Mellon University, Tepper School of Business.

Mazzeo, M. J. (2002): “Product choice and oligopoly market structure,” RAND Journal of Economics, pp. 221–242.

McFadden, D. (1974): “Conditional logit analysis of qualitative choice behavior,” in Frontiers of econometrics, ed. by P. Zarembka, pp. 105–142. Academic Press, New York, NY.

Sacerdote, B. (2001): “Peer Effects With Random Assignment: Results For Dartmouth Roommates,” The Quarterly Journal of Economics, 116(2), 681–704.

Schaumans, C., and F. Verboven (2008): “Entry and regulation: evidence from health care professions,” The RAND Journal of Economics, 39(4), 949–972.

Soetevent, A. R., and P. Kooreman (2007): “A discrete-choice model with social interactions: with an application to high school teen behavior,”
