
BIDDING FOR THE SAKE OF WINNING OR STATUS

AN EXPERIMENTAL APPROACH TO MOTIVATIONS IN A TULLOCK CONTEST

MSc Economics – Game Theory and Behavioural Economics – 15 ECTS

MARCO ANTONIO CASTELÃO SOARES

Student id: 11400145

I present an experimental analysis of the driving forces behind bidding in an all-pay auction, specifically a Tullock contest. This is done by isolating each dimension under investigation in distinguishable treatments. Attention is focused on the utility of winning and the utility of status, while other factors are held constant across treatments. This setup allows me to quantify these two mechanisms. The results support the theoretical predictions and point to the utility of status as the dominant mechanism.


1. Introduction

Experimental research has consistently shown that participants tend to bid higher than the Nash equilibrium level in many types of auctions. In particular, such behaviour has been repeatedly observed in all-pay auctions, and specifically in Tullock contests, across a variety of scenarios, including no budget constraints, fixed budget constraints, constant and randomized induced values, one-shot versus repeated games, monetary and non-monetary incentives, and team-based versus individual-based incentives (Masiliunas, Mengel, & Reiss, 2014). Standard rational behaviour theory fails to explain this overbidding, which has led to the incorporation of behavioural mechanisms to explain the phenomenon. Previous studies strongly suggest that overbidding may be affected by (1) a utility of winning, (2) judgement biases, (3) bounded rationality and (4) relative payoff maximization (R. M. Sheremeta, 2014). Different combinations of these mechanisms have been used to explain overbidding (e.g. Masiliunas, Mengel, & Reiss, 2014). Moreover, in many previous studies, relative payoff maximization has been seen as a proxy for a search for status – henceforth referred to as status-seeking (Dechenaux et al., 2014; Mago et al., 2014; R. M. Sheremeta, 2014). Though these distinct mechanisms have been put forward, the previous literature fails to pinpoint which has the most important impact on overbidding; their relative weight in the decision-making process remains unknown.

This thesis aims to fill this gap by proposing a novel laboratory experiment and using the collected data to test the relative importance of the utility of winning and the status-seeking mechanism.

This leads to the research question addressed in this thesis:

Which mechanism is the more important driver of overbidding in all-pay auctions: the utility of winning or status-seeking?

This thesis starts with an overview of the existing literature relevant to this research question, with a particular focus on the theoretical analysis of the utility of winning and status-seeking. After the Experimental Design and Procedures are presented, a theoretical analysis for the specific environment of this experiment allows for a more structured understanding of the experiment. The subsequent Results section starts with a brief overview and non-parametric results. The shortcomings of such an analysis are discussed, after which panel data regressions are presented that allow for a more thorough analysis. These regressions correct for the issue of multiple observations across treatments with between-subject and within-subject effects. The limitations of the method are carefully discussed and some corrections are applied to check the robustness of the findings. After the results are discussed, conclusions follow, along with a discussion of this study's limitations and possible future research.

2. Literature Review

This thesis will study the factors determining overbidding in the context of a Tullock contest (also referred to in the literature as Tullock lottery, lottery contest, rent-seeking lottery or rent-seeking contest), which is a form of all-pay auction in which the winner of the prize is determined probabilistically.

Specifically, the probability that a player wins the prize is the ratio of her bid to the sum of the bids of all players combined. Moreover, the case where the sum of bids is larger than the value of the prize has been described as rent-seeking, which denotes socially inefficient but personally profitable behaviour (Krueger, 1974). It is socially inefficient because the costs of bidding are lost; hence the socially optimal situation would be one where every player places a bid of zero. In the context of rent-seeking, the tendency to bid more than the predicted Nash equilibrium – overbidding – may lead to over-dissipation of rents (R. M. Sheremeta, 2013). A more thorough description and analysis of Tullock contests is given in the experimental design and theoretical analysis sections.

The main reason for using this specific type of game is that Tullock contests have generally been found to display a lot of behavioural variation, and the Nash equilibrium has less explanatory power than in other types of auctions and contests (e.g. Millner & Pratt, 1989; Potters, de Vries, & van Winden, 1998; Sheremeta, 2010). This suggests that behavioural factors might have a relatively strong influence on decision making in Tullock contests. Another reason for using the Tullock contest is that it resembles many economic, political and social environments in which competing agents expend considerable resources (money, effort, time) in order to increase their chances of winning a 'prize'. Examples range from competition for patents, research grants or mates, to lobbying politicians, promotions or other reward schemes in firms, sports competitions, elections and ethnic conflicts (see Andersson & Iwasa, 1996; Baye et al., 1993; Baye & Hoppe, 2003; Buchanan & Tullock, 1962; Chen, 2003; Esteban & Ray, 2011; Szymanski, 2003). The main focus of this thesis is on two of the behavioural mechanisms mentioned above, namely the utility of winning and status-seeking.

The utility of winning implies that, in addition to monetary incentives, subjects derive a non-monetary utility from winning. In other words, subjects simply like winning, regardless of other factors. This has been observed in a multitude of studies (Brookins & Ryvkin, 2014; Mago, Samek, & Sheremeta, 2014; Parco, Rapoport, & Amaldoss, 2005; R. M. Sheremeta, 2010). This effect was originally demonstrated by Sheremeta (2010) using a lottery contest in which the reward had a value of zero: around 40% of the participants exerted a positive amount of costly effort in order to win the reward with no monetary value. Moreover, these efforts were correlated with efforts in contests with a positive monetary prize value. In the previously cited literature the utility of winning, w, is modelled as additive and invariant to both the prize value v and the number of participants n.

The second mechanism under scrutiny for overbidding in this thesis is status-seeking. Note that this differs from the previous literature, which takes a broader approach and argues that subjects not only care about their own payoff but also about how it relates to the weighted average payoff of other group members (Herrmann & Orzen, 2008; Mago et al., 2014). The origin of relative payoff maximization is still unclear, and the theoretical underpinnings of the experimental research on it are incoherent. Relative payoff maximization is modelled by including in the utility function of a subject the average payoff to other group members, multiplied by a relative payoff parameter s, as in Formula 1 (Dechenaux, Kovenock, & Sheremeta, 2014; Mago et al., 2014; R. M. Sheremeta, 2014).

$EU_i(e_i, e_{-i}) = \omega + p_i^w(e_i, e_{-i})\,v - e_i + \frac{s}{n}\sum_j\left(\omega + p_j^w(e_j, e_{-j})\,v - e_j\right)$   Formula 1

Here the relative payoff parameter s represents how individuals weigh their payoffs relative to others, with s > 0 reflecting a pro-social attitude where one's utility increases with others' payoffs, and s < 0 reflecting the preference to obtain a higher payoff than other group members. Sheremeta (2014) proposes three possible origins for the relative payoff parameter s. Firstly, it may simply be due to other-regarding preferences, meaning that people derive proportional utility from other people's payoffs (Fehr & Schmidt, 1999). Secondly, individuals could care about their 'survival' payoff, a prediction from evolutionary game theory that is related to 'survival of the fittest' (Hehenkamp, Leininger, & Possajennikov, 2004). Lastly, there is the so-called 'spite effect', where one player's utility decreases when another player's payoff increases. Note that this spite is congruent with status-seeking if the quest for a relatively higher payoff is seen as such (Congleton, 1989; Hamilton, 1970).

However, I see it as problematic that in many studies relative payoff maximization is equated to status-seeking (Dechenaux et al., 2014; Mago et al., 2014; R. M. Sheremeta, 2014). These studies specifically define participants with s < 0 as status-seekers (e.g. see Sheremeta, 2014, p. 10). This is problematic because s is modelled as being invariant to how winners and losers are announced. Surely the amount of status one gains by winning a contest should depend on whether other group members are able to see that someone has won the contest? This is how status-seeking is viewed in the social psychology literature (e.g. Huberman et al., 2004). Moreover, psychologically speaking, status-seeking is a different motive from simple other-regarding preferences and the 'spite effect': those motivations do not depend on whether the other contestants see you winning or not. I further argue that status-seeking can be modelled similarly to the utility of winning, as an additive term on top of the possible monetary reward, in line with how it is modelled by psychologists (Huberman et al., 2004). However, status-seeking should be distinguished from the utility of winning, as status-seeking can only take place if the information about winners and losers of a game is public; otherwise one cannot gain status. In contrast, the utility of winning remains a plausible motive even if this information is not public, but only announced to the winner. Hence I propose a new theoretical model to distinguish status-seeking from both the utility of winning and relative payoff maximization.

For this experiment I will not model relative payoff maximization explicitly, since the experiment has been designed in such a way that it should be equal in all treatments. Relative payoff maximization should be equal in all treatments because it is typically assumed to depend only on the difference between one's own payoff and the average of others' payoffs (e.g. Mago et al., 2014). Relative payoff maximization, if pursued, will be equal in all the treatments of my experiment because participants are not informed of others' bids and are thus not informed of the payoffs of others. Being informed about the winners and losers also provides no information about the payoffs. While subjects will form beliefs about how much others are bidding, these beliefs should be the same in all treatments. Hence, in comparing the different treatments this term would drop out regardless of the model used.

3. Methodology

Experimental Design and Procedures

The experiment¹ was conducted at the CREED laboratory at the Universiteit van Amsterdam in two separate sessions, on May 30–31, 2017. The experiment consisted of 4 treatments with a total of 24 subjects, 12 participants per session. These treatments are labelled 'no result', 'private result', 'public result', and 'whole session'. Treatments were varied within-subject, with sessions 1 and 2 applying opposite orders for the first three. Subjects were randomly reallocated into 3 groups of 4 at the beginning of each of the first three treatments. The 'whole session' treatment was run last in both sessions.

¹ The program for the computerized experiment was written specifically for this thesis in PHP and SQLite. The code is written in a way that makes it easily adaptable to alternative environments. It is available upon request.


In this treatment there was a joint auction in which all 12 participants competed for the same prize with a public result. In the public result treatments, the participants are aware of who they are playing against. Each of these treatments has 20 rounds with a prize value of 25 Experimental Points (EP) and an initial endowment of 25 EP for every round. In total, there are 4 × 20 = 80 auctions per session.

This experiment used a standard form of the Tullock contest as described in Tullock (1980). Since this contest is an all-pay auction in which the winner is determined probabilistically, placing higher bids increases one's chance of winning the contest, but one can never be sure to win. Specifically, the probability of winning the contest for participant i is the ratio of i's bid to the aggregate of all bids in the group.

Chance of winning for participant i = Bid of participant i / Sum of all bids in the group

Participants were given an endowment of 25 points in each round in order to ensure positive earnings. All subjects had to pay their bid in each round and only the winner received an amount equal to the prize value of 25. So the earnings in points for each round were as follows:

Earnings:
Winner = Endowment + Prize Value – Bid = 25 + 25 – Lottery Tickets
Non-winner = Endowment – Bid = 25 – Lottery Tickets
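To make this payoff structure concrete, a minimal Python sketch of one contest round is given below. This is illustrative only and not the original PHP experiment code; the function and variable names are assumptions made for the example.

```python
import random

ENDOWMENT = 25  # EP given at the start of every round
PRIZE = 25      # EP awarded to the single winner of the round

def play_round(bids, rng=random):
    """Simulate one Tullock-contest round for a list of bids (tickets bought)."""
    total = sum(bids)
    if total == 0:
        # If nobody buys a ticket, the prize is assigned uniformly at random,
        # as stated in the experimental instructions (Appendix A).
        winner = rng.randrange(len(bids))
        probs = [1 / len(bids)] * len(bids)
    else:
        probs = [b / total for b in bids]  # p_i = bid_i / sum of all bids in the group
        winner = rng.choices(range(len(bids)), weights=bids)[0]
    # Every participant pays for their tickets; only the winner receives the prize.
    earnings = [ENDOWMENT - b + (PRIZE if i == winner else 0) for i, b in enumerate(bids)]
    return winner, probs, earnings

# Example with the bids used in the instructions: 3, 6, 0 and 9 tickets.
winner, probs, earnings = play_round([3, 6, 0, 9])
print(probs)     # roughly [0.17, 0.33, 0.0, 0.5]
print(earnings)  # winner: 25 + 25 - bid; everyone else: 25 - bid
```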

At the beginning of each treatment, the instructions (Appendix A) followed the conventional terminology used in lottery scenarios and were read aloud to all subjects. Most importantly, subjects were informed about the type of feedback regarding the winners and losers they would be given in each treatment. Subjects were given time to re-read the instructions and asked whether any questions or doubts remained about the instructions and whether further clarification was needed.

Subjects were not allowed to communicate among themselves. In every round, subjects had to decide on the number of lottery tickets they would buy.

In the 'no result' treatment, no information regarding the winners or losers was given to the subjects. Hence only monetary utility is at play here, because there is no way for the participants to know who has won or lost; only at the end of the experiment would they be able to learn the outcome. At the moment of the lottery draw, subjects cannot derive utility from winning, since they do not know whether they won, and they cannot be motivated by status-seeking, because other subjects will not know whether they are a winner or a loser. In the 'private result' treatment, the result of the contest was announced privately to each subject immediately after each auction. The winners were randomly selected based on their ticket numbers using a computer-based randomizer after everyone had made their decision. Specifically, each subject was privately shown "You won" or "You lost" on their screen at the end of each round. In this treatment, on top of the monetary utility, subjects could also derive utility from winning, as they are informed whether they won the round or not. However, there is no status-seeking, as the subjects could not know whether any particular group member had won or not; even if a bidder won the auction, others would not know.

The 'public result' treatment differs from the 'private result' treatment only in that the result of the contest was made public after each round. Specifically, the person who won would knock on the table and stand up for 5 seconds, for public acknowledgment. In this treatment, on top of the monetary utility and the utility of winning, participants can possibly derive utility from status-seeking, as all participants will know who the winner is.

The 'whole session' treatment results should be merely informative as to the relative importance of these two factors as the group size rises from 4 to 12. Compared to the 'public result' treatment, it should shed some light on how subjects behave under a different group size while the two mechanisms under study are at play.

TABLE I

Treatment        Monetary utility (v)   Utility of winning (w)   Status-seeking (Us)
No result                X
Private result           X                        X
Public result            X                        X                        X
Whole session            X                        X                        X


In each session, subjects were informed that two of them would be randomly selected for payment, from a randomly chosen treatment and round. These two subjects received their points from the selected round divided by a factor of 5, in euros (for example, 30 points / 5 = €6).

Theoretical Analysis and Predictions

The standard rent-seeking (lottery) contest assumes that n identical risk-neutral individuals compete for a prize v by exerting efforts. The probability that an individual i wins the prize is equal to individual i's own effort $e_i$ divided by the sum of all individuals' efforts:

$p_i^w = \dfrac{e_i}{e_i + \sum e_{-i}}$   Formula 2

The standard theoretical model is based on the assumption that people only care about the monetary value of the prize v. So, the expected payoff for an individual i in the first treatment is given by the initial endowment $\omega$, plus the probability of i winning, $p_i^w(e_i, e_{-i})$, times the monetary value of the prize v, minus the cost of the effort $c(e_i)$ exerted by the individual, which gives the following formula for the expected utility:

$EU_i(e_i, e_{-i}) = \omega + p_i^w(e_i, e_{-i})\,v - c(e_i)$   Equation 1

It is important to mention here that risk-neutral preferences are assumed in all the utility functions. This is mainly done for the sake of simplicity, as risk preferences are not expected to be relevant for the research question. In comparison to the No Result treatment, subjects in the Private Result treatment may also derive a non-monetary utility of winning, w. This is modelled for simplicity in the same way as in the literature, namely as being additive to the monetary prize (Dechenaux et al., 2014; Mago et al., 2014; R. Sheremeta, 2014). In Equation (2), the utility of winning parameter is thus expressed in the same units as the monetary prize.

$EU_i(e_i, e_{-i}) = \omega + p_i^w(e_i, e_{-i})\,(v + w) - c(e_i)$   Equation 2


Furthermore, as discussed in the previous sections, it was anticipated that bidding in the Public Result treatment may also be affected by status-seeking motivations. This is captured by adding the utility of status $U_s$ as an additional term to the model for the third treatment, in the same way it has been done in the psychology literature (e.g. Huberman et al., 2004). This gives:

$EU_i(e_i, e_{-i}) = \omega + p_i^w(e_i, e_{-i})\,(v + w + U_s) - c(e_i)$   Equation 3

The previous literature has established that there are no asymmetric equilibria in the lottery contest and that the symmetric equilibrium is unique (Szidarovszky & Okuguchi, 1997). Since the possible earnings depend on individual i's effort $e_i$ and the effort of all other individuals, individual i maximizes his or her expected utility function with respect to $e_i$. So, differentiating equations (1), (2) and (3) with respect to $e_i$ and setting these to zero gives the equilibrium effort, denoted in equations (4), (5) and (6) respectively.

$e_1^* = \frac{(n-1)}{n^2}\,v$   Equation 4

$e_2^* = \frac{(n-1)}{n^2}\,(v + w)$   Equation 5

$e_3^* = \frac{(n-1)}{n^2}\,(v + w + U_s)$   Equation 6
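For completeness, a minimal sketch of the step from the first-order condition to Equation 4 is given below, assuming a linear cost $c(e_i) = e_i$ and imposing symmetry; the cases with w and $U_s$ are identical, with v replaced by $v + w$ and $v + w + U_s$.

```latex
\begin{align*}
\frac{\partial EU_i}{\partial e_i}
  &= v\,\frac{\sum e_{-i}}{\bigl(e_i + \sum e_{-i}\bigr)^{2}} - 1 = 0
  && \text{(FOC of Equation 1 with } c(e_i) = e_i\text{)}\\[4pt]
v\,\frac{(n-1)\,e^{*}}{\bigl(n e^{*}\bigr)^{2}} &= 1
  && \text{(impose symmetry: } e_j = e^{*} \text{ for all } j\text{)}\\[4pt]
e_1^{*} &= \frac{(n-1)}{n^{2}}\,v
  && \text{(Equation 4)}
\end{align*}
```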

In the equations above, $\omega$ is the initial endowment, $p_i^w$ the probability of winning, $e_i$ the effort (number of tickets), and the symmetric pure-strategy Nash prediction is denoted by $e^*$.

Using the parameters of the experiment, equation (4) yields an equilibrium effort of 4.685 EP for the groups of four and 1.90972 EP for the groups of 12 participants. These values will be used to check the overbidding levels against the Nash prediction baseline in the present analysis.
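As a quick check of these benchmark values, a small illustrative computation is shown below; it is not part of the original analysis code.

```python
def nash_effort(n, v, w=0.0, u_s=0.0):
    """Symmetric pure-strategy Nash effort: e* = (n - 1) / n**2 * (v + w + u_s)."""
    return (n - 1) / n**2 * (v + w + u_s)

print(nash_effort(n=4, v=25))   # 4.6875 EP for the groups of four
print(nash_effort(n=12, v=25))  # 1.9097... EP for the whole-session group of 12
```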


H1: $e_1^* < e_2^*$ – the effort exerted when the utility of winning is present is higher than the effort exerted for the monetary utility alone.

H2: $e_2^* < e_3^*$ – the effort exerted when the utility of status is present is higher than the effort exerted for the utility of winning by itself.

Once these two hypotheses are verified, the third one follows:

H3: $e_1^* < e_2^* < e_3^*$ – since w and $U_s$ are predicted to be non-negative, the Nash prediction for effort increases from treatment 1 to treatment 2 and from treatment 2 to treatment 3, so bidding is predicted to increase across the treatments.

An increase in bidding from the No Result treatment to the Private Result treatment would signify that participants do indeed derive non-monetary utility from winning. This would confirm the findings of the literature on this topic. It would also validate the method of inducing and measuring the utility of winning, as this has previously been shown in this context by participants placing positive bids when the prize value was zero (R. Sheremeta, 2014).

If an increase in bidding from the Private Result treatment to the Public Result treatment is found, this is evidence that participants behave as status-seekers and that this effect is separate from both the utility of winning and relative payoff maximization. It is distinct from the utility of winning because the latter should already have played a role in the Private Result treatment. It is distinct from relative payoff maximization because in neither treatment do the participants know the payoffs of other group members. Moreover, the feedback that participants receive in both treatments does not provide them with information about other participants' earnings, and while winners have higher expected earnings, the resulting effect should be the same in all treatments. Additionally, the results from the Whole Session treatment will show how both mechanisms are affected by the increase in risk as the number of participants rises from four to 12. Lastly, I expect to find overbidding in all treatments compared to the Nash equilibrium prediction, due to the two behavioural dimensions that are not the topic of this thesis, namely judgemental biases and bounded rationality, which should be visible across all treatments.


4. Results

Here the collected data are confronted with the model predictions. A quick overview of the collected data is given in Subsection I. In Subsection II, the significance of differences between sessions is tested, along with an illustration of the collected data across rounds and treatments. Subsection III tests the hypotheses and investigates the statistical power of the present sample and the robustness of the results.

I – Preliminary Analysis

Table II shows that, in accordance with the predictions, the mean number of lottery tickets bought rose across the first three treatments, signalling, as predicted, that the utility of status is larger than the simple utility of winning, as discussed in the following sections.

TABLE II
SUMMARY STATISTICS OF LOTTERY TICKETS BOUGHT

Treatment        Mean tickets   Mean over Nash   Standard deviation   N
No result          7.989583       3.304583         6.8462           480
Private Result     9.216667       4.531667         7.071875         480
Public Result     11.5875         6.9025           7.713871         480
Whole Session     11.00417        9.094446         8.306246         480

The Whole Session treatment can only be directly compared to the Public Result treatment. To do so, we compare below the extent to which bids exceed the Nash-predicted level as calculated from Equation (4); note that this equilibrium differs between Public Result and Whole Session due to the different group sizes. Here we simply note the decrease in the mean bid. Finally, our data show that, clustered per treatment, the observed bids were on average 4.2 EP higher than the Nash prediction, significant with p < .005, as we would expect from the literature (Dechenaux, Kovenock, & Sheremeta, 2014; Mago et al., 2014; R. M. Sheremeta, 2014).



II – Robustness of the Experimental Design

In the Methodology section it was explained that the treatment order was reversed between the two sessions. By testing the differences between sessions per treatment, it is therefore possible to observe whether order effects were present. This was done with a t-test for each treatment individually between the two sessions (Table III). We observe an increase in means of 2.0625 EP in the second session, significant at the 1% level, in the No Result treatment (t = 3.3348, p = .0009). Given its high significance, this is indicative of order effects, as the No Result treatment was the third treatment applied in the second session. The Private and Public Result treatments do not exhibit significant differences between sessions, while the Whole Session treatment again shows a difference significant at the 10% level (t = −1.7512, p = .0806), which may reflect a higher risk aversion of the second session towards the Whole Session treatment. This is of particular relevance as the second session exhibited a higher mean bid in all other treatments.

TABLE III
T-TESTS OF DIFFERENCES BETWEEN SESSIONS

Treatment        Diff      Std Err   p-value   t-value   N
No result         2.0625   0.6184    0.0009     3.3348   480
Private Result    0.6917   0.6455    0.2845     1.0716   480
Public Result     0.8167   0.7039    0.2466     1.1602   480
Whole Session    -1.3250   0.7566    0.0806    -1.7512   480

One of the concerns that could arise from the design is that subjects might exhibit learning effects over the course of the experiment. To test this, we regress for each treatment the group average bid on the round number, with errors clustered at the group level. The results show no evidence of such learning effects, as mean bids do not significantly decrease or increase across rounds (Appendix D); no significant trend is observed. More generally, the dispersion of bids might seem rather large, with an average standard deviation of 7.51, but this is common when prizes are assigned probabilistically, as in Tullock contests (Dechenaux, Kovenock, & Sheremeta, 2014).
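As an illustration, one way such a round-trend check can be run is sketched below; the data-frame and column names (`bid`, `round_nr`, `treatment`, `group`) are assumed, and this is not the original analysis script.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per subject and round, with columns
# 'bid', 'round_nr', 'treatment' and 'group'.
def round_trend(df: pd.DataFrame, treatment: str):
    """OLS of the bid on the round number within one treatment,
    with standard errors clustered at the group level."""
    sub = df[df["treatment"] == treatment]
    return smf.ols("bid ~ round_nr", data=sub).fit(
        cov_type="cluster", cov_kwds={"groups": sub["group"]}
    )

# for t in df["treatment"].unique():
#     print(t, round_trend(df, t).params["round_nr"])  # a coefficient near zero means no trend
```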

Figure I shows the estimated utility of winning and utility of status obtained from the differences between treatments across rounds. It shows typically non-negative values for both, with means of w = 1.2 and Us = 2.4.


FIGURE I: Motivations Obtained from Differences in Treatments across rounds (As Per Table I)

The Public Result / Whole Session series is the difference between both treatments, considering the respective Nash prediction baselines. It is shown in a lighter tone, as it is a secondary analysis and will be addressed properly in the Discussion.

Figure II shows for each individual (numbered from 1 to 24) the average bid per treatment. Overall, participants behaved as predicted across treatments. Worth noting are participants 9 and 15, who stand out from the predictions and are discussed in Section 5.

Figure II: Mean Number of Lottery Tickets Bought by Participant across Treatments

[Figure I plots the Utility of Winning, Utility of Status and Public Result / Whole Session series across rounds; Figure II plots the mean number of tickets bought by each of the 24 participants per treatment.]


III – Test of Hypotheses

The hypotheses are tested using a regression on the bidding levels. Mixed random effects are included to account for the independent matching of subjects into groups, as per:

$e_{it} = \beta_0 + \beta_1 D_{1i} + \beta_2 D_{2i} + \beta_3 D_{3i} + u_j + u_i + \varepsilon_{it}$   Equation 7

where j represents the group, i denotes the subject, t stands for the round, and $D_1$, $D_2$ and $D_3$ are the dummies representing each treatment (the No Result treatment is absorbed in the constant term); the $\beta$'s are the coefficients to be estimated. The random terms $u_j$ and $u_i$ capture the panel structure in the data and are normally distributed, as is $\varepsilon_{it}$. With that in mind, observations are clustered both at the group and at the subject level. Table IV presents the maximum log-pseudolikelihood estimates for the coefficients of Equation 7.
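To illustrate how a model of the form of Equation 7 can be estimated, a minimal sketch using statsmodels is given below; the column names are assumptions made for the example, and this is not the code that produced Table IV.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per subject and round, with columns
# 'bid', 'treatment', 'group' and 'subject'.
def estimate_equation7(df: pd.DataFrame):
    """Mixed model with treatment dummies (No Result as baseline), a random
    intercept per group (u_j) and a subject variance component (u_i);
    in this simplified sketch the subject component is treated as nested within group."""
    model = smf.mixedlm(
        "bid ~ C(treatment, Treatment(reference='no result'))",
        data=df,
        groups=df["group"],
        vc_formula={"subject": "0 + C(subject)"},
    )
    return model.fit()

# result = estimate_equation7(df)
# print(result.summary())  # the treatment dummies correspond to the D's in Equation 7
```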

TABLE IV
CLUSTERED ANALYSIS OF ALL TREATMENTS

Variable                 No result (constant term)   Private Result   Public Result   Whole Session
Bid                          7.731***                  2.361*           4.140***        3.273**
                            (1.200)                   (1.358)          (1.596)         (1.300)
Bid above Nash Pred.         3.304**                   2.361*           4.140***        6.048***
                            (1.200)                   (1.358)          (1.596)         (1.300)

Note: The table presents coefficients from mixed random effects panel data models clustered both at the group and the individual level. Standard errors are in parentheses; *p<.10, **p<.05, ***p<.01. The model is estimated separately for the bidding levels and for bidding above the Nash prediction.

H1: $e_1^* < e_2^*$. H2: $e_2^* < e_3^*$. H3: $e_1^* < e_2^* < e_3^*$.

Both regressions support H1, H2 and H3, well within the 10% significance level, while controlling for mixed random effects clustered per group and per individual. The No Result treatment shows a bid that is significantly larger than zero with p < .011, while the hypothesis concerning the utility of winning finds significance at the 10% level in both mixed regressions, because the Private Result dummy is significant at this level. For status-seeking, significance is observed at the 1% level. Finally, significance for the Whole Session is observed at the 5% level in both regressions.

The power of the sample is a concern given our limited number of observations. The current sample is clustered in groups of 4 participants, with 6 groups per treatment, and 2 groups of 12 participants for the last treatment. This experiment is indeed severely underpowered, as shown in Appendix C: it would require between 19 and 112 groups to achieve 80% power for the various treatments. Thus, to give power to the analysis, Table V shows permutation tests, resampled according to the sample sizes shown in Appendix C. In order to run the permutation tests while making full use of the present sample, the multiple observations were averaged per subject for each treatment.
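For concreteness, a minimal sketch of a permutation test on per-subject treatment averages is shown below; it is illustrative only, and the exact resampling scheme used for Table V follows Appendix C.

```python
import numpy as np

def permutation_test(x, y, n_shuffles=10_000, seed=0):
    """Two-sided permutation test for the difference in mean bids between
    two treatments, using one averaged observation per subject."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    observed = y.mean() - x.mean()
    pooled = np.concatenate([x, y])
    extreme = 0
    for _ in range(n_shuffles):
        rng.shuffle(pooled)  # randomly reassign observations to the two treatments
        diff = pooled[len(x):].mean() - pooled[:len(x)].mean()
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_shuffles

# Example usage with per-subject mean bids in two treatments (arrays of length 24):
# diff, p = permutation_test(no_result_means, private_result_means)
```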

TABLE V
PERMUTATION RESULTS

Comparison                         Diff     N shuffles   p-value   SE (p)
No Result vs Private Result        1.2271   448          0.0848    0.0132
Private Result vs Public Result    2.3142   168          0.0417    0.0154
Public Result / Whole Session      2.1919   320          0.0781    0.0150

Note: The Whole Session treatment is compared to the Public Result using the Nash Predictions baseline

Thus, although some precision is lost, the number of observations is reduced to 24 × 4 = 96 (24 subjects in 4 treatments) and resampled to produce the results above. Significance at the 10% level is observable, even considering the standard errors that control for the lost precision, giving additional support to our results.

5. Discussion

The analysis of Figure I suggests that the utility of winning is unstable across rounds, which is shown by regression (2) of Appendix B. This might be a consequence of the introduction of the lottery result, as people may take the previous lottery outcome as input for their subsequent decision. This is confirmed by regression (3) of Appendix B and is in accordance with the previous literature on Tullock contests, which reports people getting discouraged from further bidding if they lose (Dechenaux et al., 2014). This inconsistency seems, however, to be the result of the order effects observed in Table III, as the model proves robust once the hypotheses are tested in Table IV.

As the model is estimated with mixed effects, we are able to quantify the monetary utility at (v) = 7.731 EP, the utility of winning at (w) = 2.361 EP and the utility of status at (Us) = 1.779 EP, once corrected for the clustering of the design. As a result of the clustering, the corrected mean bid levels per treatment are 7.731 EP for the No Result treatment, 10.092 EP for the Private Result treatment and 11.871 EP for the Public Result treatment. This corroborates the three hypotheses of this thesis, which are further substantiated through the resampling of the current sample.

While the internal validity of the analysis follows from its theoretical underpinning, the artificiality of the laboratory environment may be an issue for its external validity. An additional limitation is the sample, as 14 out of 24 participants are economics graduate students; they were nevertheless accepted since none of them had any particular training in either behavioural economics or game theory. This did not produce any significant difference between 'econs' and 'non-econs', suggesting that the data have not been compromised by an economics-background bias (regression (4), Appendix B).

This experiment builds on previous research by the author with colleagues Ivar Kolvoort and Raghav Wasan, where we observed a high volatility of behaviour as a consequence of too much randomization and confusion. In order to mitigate the shortcomings of the previous project, both the initial endowment and the prize of the lottery were fixed at one single value. Moreover, in each treatment the exerted effort was measured over 20 rounds, in an attempt to reduce possible confusion and to increase the number of observations. These amendments indeed appear to have produced a dataset that is more robust with respect to the predictions derived from this thesis's hypotheses, as can be observed in Figure II. However, some confusion may still exist, as witnessed by the comments of two participants:

• Participant 15 reported to have bid higher amounts in the No Result treatment, as he expected other participants to bid less, thus increasing his chances of winning. This subject seemed mostly focused on winning, not paying attention to his reduced earnings, as he bid an average of 22.5 EP in the No Result treatment, his highest across treatments. Note that this participant took part in the second session, where the No Result treatment was third in the order; as shown in the analysis of Table III, this may be the product of order effects from the treatment design.
• Participant 9 reported to have bid smaller amounts in both public result treatments, as he did not want people to know his earnings, in contradiction with our premise of a non-negative utility of status. This occurred despite the fact that the earnings were not made public. This could also be the result of a strategy whereby, expecting other participants to buy large amounts of lottery tickets, a participant could opt to maximize profits by keeping his entire initial endowment, as it was worth the same amount as the prize.

These two subjects provide a good example that helps explain the high dispersion of bids observed in the standard deviations in Table II. Such a large spread is common when prizes are assigned probabilistically, as in the Tullock contest (Dechenaux, Kovenock, & Sheremeta, 2014).

As mentioned in the Methodology, the results from the Whole Session treatment are meant to be informative about the relative importance of the two motivations under scrutiny in this thesis, as compared to the Public Result treatment.

Thus, in terms of the absolute number of tickets, as per the values of Table IV: Whole Session – Public Result = 3.273 – 4.140 = −0.867. This amount should be representative of the level of risk aversion induced by playing against a bigger group (i.e. the monetary utility channel). Considering instead the Nash prediction baseline, Whole Session – Public Result = 6.048 – 4.140 = 1.908; this resulting deviation from Nash should account for the increase in both the utility of winning and the status-seeking utility drawn from playing in a group of 12 participants instead of four.

The design of the Whole Session treatment did not try to model the utility of winning and the utility of status separately in each session, due to the expected small sample. This proved wise, as doing so would have led to erroneous conclusions, as can be seen from the analysis of Table III, where the sessions exhibited a significant difference under the same treatment.


6. Conclusions

Consistent with the existing literature on this topic, overbidding was observed in all treatments (Dechenaux et al., 2014; Mago et al., 2014). Additionally, this thesis has been able to provide evidence for its hypothesis that status-seeking is a behavioural factor separate from both the utility of winning and relative payoff maximization. As shown in Subsection I of the Results section, a significant difference in exerted effort across treatments was found, as predicted. These findings were confirmed through our panel data analysis and several tests of the robustness of the present model, leading to the conclusion that overbidding is indeed affected by the utility of winning and the utility of status. Contrary to what would be expected from the previous literature on Tullock contests (Dechenaux et al., 2014), over the 20 rounds in each treatment we did not observe any significant decrease in overbidding. We did, however, observe a significant level of overbidding and a change in the number of tickets bought in response to the lottery result of the previous round. Both effects are consistent with the literature on the Tullock contest.

In line with many results in behavioural economics, it does not come as a surprise that participants do not behave according to the Nash predictions. Of course, the fact that others are not playing according to Nash means that the Nash strategy is no longer a best response.

Two participants, given their beliefs about others' behaviour, seem to have opted for a best-response strategy of staying out of the lottery and sticking to their initial endowment, especially in the last treatment. This may be due to a better understanding of what is at stake (perhaps the case of participant 8) or to other-regarding preferences, as discussed in the case of participant 9.

The only conclusion to be drawn from the Whole Session treatment is that the increase in the risk aversion of playing against a higher number of participants played a bigger part in the bidding decision process than both mechanisms under scrutiny here.

Hence various avenues for future research are worth considering. In additional sessions, the larger group of treatment 4 (Whole Session) could be applied to the other treatments, particularly to treatment 2, to see how the utility of winning alone would be affected by a larger group. With status out of the picture, the difference in results might shed some light on either the judgemental-bias or the bounded-rationality dimension. Another possibility would be to model the saliency of the information feedback explicitly with a specific parameter and then use this parameter as a partial mediator of the utility of winning or relative payoff maximization.

Furthermore, little is still known about which behavioural dimensions are most important in predicting decision-making in Tullock contests, and when. Ideally, future experiments would focus on the specifics of these behavioural factors, as this would help in understanding a wide variety of phenomena observable outside the laboratory environment.

7. References

Andersson, M., & Iwasa, Y. (1996). Sexual selection. Trends in Ecology and Evolution. https://doi.org/10.1016/0169-5347(96)81042-1

Baye, M. R., & Hoppe, H. C. (2003). The strategic equivalence of rent-seeking, innovation, and patent-race games. Games and Economic Behavior, 44(2), 217–226. https://doi.org/10.1016/S0899-8256(03)00027-7

Baye, M. R., Kovenock, D., & de Vries, C. G. (1993). Rigging the lobbying process: An application of the all-pay auction. The American Economic Review, 83(1), 289–294.

Brookins, P., & Ryvkin, D. (2014). An experimental study of bidding in contests of incomplete information. Experimental Economics, 17(2), 245–261. https://doi.org/10.1007/s10683-013-9365-9

Buchanan, J., & Tullock, G. (1962). The Calculus of Consent. Ann Arbor, MI: University of Michigan Press. https://doi.org/10.3998/mpub.7687

Chen, K.-P. (2003). Sabotage in promotion tournaments. Journal of Law, Economics, and Organization, 19(1), 119–140.

Congleton, R. D. (1989). Efficient status seeking: Externalities, and the evolution of status games. Journal of Economic Behavior & Organization, 11(2), 175–190. https://doi.org/10.1016/0167-2681(89)90012-7

Dechenaux, E., Kovenock, D., & Sheremeta, R. M. (2014). A survey of experimental research on contests, all-pay auctions and tournaments. Experimental Economics, 18(4), 609–669. https://doi.org/10.1007/s10683-014-9421-0

Esteban, J., & Ray, D. (2011). A model of ethnic conflict. Journal of the European Economic Association, 9(3), 496–521. https://doi.org/10.1111/j.1542-4774.2010.01016.x

Fehr, E., & Schmidt, K. M. (1999). A theory of fairness, competition, and cooperation. The Quarterly Journal of Economics, 114(3), 817–868. https://doi.org/10.1162/003355399556151

Hamilton, W. D. (1970). Selfish and spiteful behaviour in an evolutionary model. Nature, 228(5277), 1218–1220. https://doi.org/10.1038/2281218a0

Hehenkamp, B., Leininger, W., & Possajennikov, A. (2004). Evolutionary equilibrium in Tullock contests: Spite and overdissipation. European Journal of Political Economy, 20(4), 1045–1057. https://doi.org/10.1016/j.ejpoleco.2003.09.002

Herrmann, B., & Orzen, H. (2008). The appearance of homo rivalis: Social preferences and the nature of rent seeking. Retrieved from https://www.econstor.eu/handle/10419/49653

Huberman, B. A., Loch, C. H., & Önçüler, A. (2004). Status as a valued resource. Social Psychology Quarterly, 67(1), 103–114. https://doi.org/10.1177/019027250406700109

Krueger, A. O. (1974). The political economy of the rent-seeking society. American Economic Review, 64(3), 291–303.

Mago, S., Samek, A., & Sheremeta, R. (2014). Facing your opponents: Social identification and information feedback in contests. Artefactual Field Experiments.

Masiliunas, A., Mengel, F., & Reiss, J. P. (2014). Behavioral variation in Tullock contests. Working Paper Series in Economics. Retrieved from https://ideas.repec.org/p/zbw/kitwps/55.html

Millner, E. L., & Pratt, M. D. (1989). An experimental investigation of efficient rent-seeking. Public Choice, 62(2), 139–151. https://doi.org/10.1007/BF00124330

Parco, J. E., Rapoport, A., & Amaldoss, W. (2005). Two-stage contests with budget constraints: An experimental study. Journal of Mathematical Psychology, 49(4), 320–338. https://doi.org/10.1016/j.jmp.2005.03.002

Potters, J., de Vries, C. G., & van Winden, F. (1998). An experimental examination of rational rent-seeking. European Journal of Political Economy, 14(4), 783–800. https://doi.org/10.1016/S0176-2680(98)00037-8

Sheremeta, R. M. (2010). Experimental comparison of multi-stage and one-stage contests. Games and Economic Behavior, 68(2), 731–747. https://doi.org/10.1016/j.geb.2009.08.001

Sheremeta, R. M. (2013). Overbidding and heterogeneous behavior in contest experiments. Journal of Economic Surveys, 27(3), 491–514. https://doi.org/10.1111/joes.12022

Sheremeta, R. M. (2014). Behavioral dimensions of contests. Retrieved from https://mpra.ub.uni-muenchen.de/57751/

Szidarovszky, F., & Okuguchi, K. (1997). On the existence and uniqueness of pure Nash equilibrium in rent-seeking games. Games and Economic Behavior, 18(1), 135–140. https://doi.org/10.1006/game.1997.0517

Szymanski, S. (2003). The economic design of sporting contests. Journal of Economic Literature, 41(4), 1137–1187. https://doi.org/10.1257/002205103771800004

Tullock, G. (1980). Efficient rent seeking. In J. M. Buchanan, R. D. Tollison, & G. Tullock (Eds.), Toward a Theory of the Rent-Seeking Society (pp. 97–112). College Station, TX: Texas A&M University Press.


8. Appendix

A: Instructions of Experiment

General instructions

• Welcome to this experiment. Please read these instructions very carefully. They will be read aloud by the experimenter.

• You are not allowed to communicate with other participants during the experiment.

• You can earn money in this experiment. The amount depends on your own choices and the choices of other participants. Your earnings will be denoted in Experimental Points (EP).

• The experiment has four parts, each with one or more rounds. Every round you will start with your initial endowment, not with what you have remaining from the previous round. In each part you will be placed in a different group.

Keep the paper with your participant number with you at all times during the experiment.

Payment: at the end of the experiment, 2 random participants are chosen for payment. For each of these participants a random part of the experiment and a random round will be chosen. The payment will be based on your earnings in that round. Thus each round in every part is equally likely to be chosen for payment.

• You will be paid your earnings from this round divided by 5.

Example: Let's say you get chosen for payment in a round in which your earnings are 30 Experimental Points (EP). Then you will get paid 30/5 = 6.00 real Euros in cash.

Description of task of part 1

• In this part of the experiment you will be randomly placed in a group of 4 participants (including you). You will remain in the same group for part 1 of the experiment

• Part 1 of the experiment consists of 20 rounds.

• Each period, you and all other participants in your group will be given an initial endowment of 25 EP and you will be asked to decide how much you want to bet for the prize. The prize value is 25 EP and will be the same for all participants in your group. You may buy any integer number of lottery tickets between 0 and 25. You have to enter the number of tickets you want to buy in the form on your computer. You cannot communicate your purchase/bet to the other participants.

• After all participants have made their decisions, your earnings for the period are calculated. These earnings will be converted to cash and paid at the end of the experiment if the current period is the period that is randomly chosen for payment. If you win the prize, your period earnings are equal to your endowment plus the reward minus the number of tickets you bought. If you do not win the prize, your period earnings are equal to your endowment minus the amount you invested. So you always have to pay for the tickets you bought.

In this period you will not be informed of the result of the contest.

If you win the contest:

Earnings = Endowment + Prize Value – Your Investment = 25 + 25 – Lottery Tickets

If you do not win the contest:

Earnings = Endowment – Your Investment = 25 – Lottery Tickets

The more tickets you buy, the more likely you are to win the contest. The more tickets the other participants in your group buy, the less likely you are to win the contest. Specifically, for each EP you spend you will receive one lottery ticket. At the end of each period the computer randomly draws one ticket among all the tickets purchased by the participants in your group, including you. The owner of the drawn ticket wins the contest and receives the prize. Thus, your chance of winning the contest is given by the number of tickets you bought divided by the total number of tickets all participants in your group bought. You can never be guaranteed to win. However, by increasing the amount of tickets you buy, you can increase your chances of winning. Regardless of who receives the reward, all participants will have to pay for their tickets.

Chance of winning the contest = Your Tickets / Sum of all Tickets in your group

In case no participant buys a ticket, the reward is randomly assigned to one of the 4 participants in the group.

Example: Let's say participant 1 buys 3 tickets, participant 2 buys 6 tickets, participant 3 buys 0 tickets and participant 4 buys 9 tickets. Then the computer randomly draws one lottery ticket out of 18 (3 + 6 + 0 + 9). As you can see, participant 4 has the highest chance of winning the contest: 0.50 = 9/18. Participant 2 has a 0.33 = 6/18 chance, participant 1 has a 0.17 = 3/18 chance, and participant 3 has a 0 = 0/18 chance of receiving the prize.

• After all participants have bought their tickets the computer will make a random draw which will decide who wins the contest. Then the computer will calculate your period earnings based on the amount of tickets you bought and whether you won the contest. You will not be informed whether you have won the reward or not.

• There will be 20 rounds like this. After all participants finish these 20 rounds the experimenter will announce that part 1 is over.

Raise your hand if you have a question and the experimenter will come to you.


Description for task in part 2

• Just like in part 1 you are randomly placed in a group of 4 participants and will be playing a contest against them.

• The instructions are the same as in part 1

• After each round, all participants are informed privately on their screen as to whether they won the contest in that round.

• There are 20 rounds in this part of the experiment. After all participants finish these 20 rounds the experimenter will announce that part 2 is over.

---

Description for task in part 3

• Contrary to the previous parts, the members of your group are seated in your row and you will be playing a contest against them.

• The instructions are the same as in part 1.

• After each round, all participants are informed privately on their screen as to whether they won the contest in that round, and the person who won will stand up while being acknowledged by the other members in the same row.

• There are 20 rounds in this part of the experiment. After all participants finish these 20 rounds the experimenter will announce that part 3 is over.

---

Description for task in part 4

• Contrary to the previous parts, you are now placed in a group of all 12 participants in this lab (including you) and will be playing a contest against them.

• The instructions are the same as in part 1

• After each round, all participants are informed privately on their screen as to whether they won the contest in that round, and the person who won will stand up while being acknowledged by the other members.

• There are 20 rounds in this part of the experiment. After these 20 rounds the experimenter will announce that the experiment is over.


B: Additional Regressions

TABLE VI
DETERMINANTS OF TULLOCK CONTEST

Variable                   (1)          (2)          (3)          (4)
Constant                  4.2031***    7.2891***    0.6072***   11.0125***
                         (0.5365)     (0.6216)     (0.1198)     (1.4343)
Above Nash Prediction     0.9644***
                         (0.0224)
Treatment                              1.2271*
                                      (0.7221)
Round                                  0.0667
                                      (0.0479)
Result                                             -2.6658***
                                                   (0.5735)
Econ                                                            -1.8223
                                                                (1.8896)

Note: The dependent variable is the bid. Regression (1) is clustered per treatment; (2) analyses only the No Result and Private Result treatments; (3) analyses the variation of the exerted effort according to the result of the previous round; (4) analyses the difference in exerted effort between subjects with an economics background and the rest of the sample, clustered per treatment (p = .345). Standard errors are in parentheses; *p<.10, **p<.05, ***p<.01.

C: Sample Power and size

TABLE VII
COMPARISON OF TREATMENTS WITH NASH PREDICTION

Comparison                         Correlation   ICC      Power   Desirable sample size
No Result vs Private Result        0.4588        0.0357    9%     448 (112 groups)
Private Result vs Public Result    0.2241        0.0357   17%     168 (42 groups)
Public Result / Whole Session      0.3776        0.0317   21%     324 (81/27 groups)


D: Possible Learning Effects across Rounds (clustered per group)

Round   No result             Private Result        Public Result        Whole Session
1       -0.7589 (1.0709)       0.4642 (1.6281)       1.3446 (1.8930)      0.4908 (0.1632)
2        2.1187*** (0.2120)    1.0448 (0.5401)       2.7909 (1.6072)      1.0909* (0.0874)
3       -0.9721*** (0.0726)   -0.7773 (0.4831)      -1.7680 (1.7155)      0.0755 (0.3641)
4        0.2017 (0.3037)       0.7557** (0.1996)    -1.0353 (2.3540)     -0.7192* (0.0934)
5       -1.6001** (0.4301)    -0.3653 (0.4148)       0.9427 (1.5716)      0.7219 (0.1637)
6        0.9050** (0.2517)     0.2390 (0.6686)       0.5557 (2.6081)     -0.3109 (0.3316)
7       -0.4026 (0.1824)      -0.3954* (0.1646)      0.1825 (3.1998)     -0.2837 (0.2378)
8        0.1643 (0.1723)       0.8229* (0.3805)      0.1085 (1.3805)      0.0337 (0.0458)
9        1.5483*** (0.2335)   -0.5399 (0.4918)      -0.1755 (0.6954)      0.1739 (0.2852)
10      -1.4968*** (0.1636)    0.8077 (0.4548)       0.0908 (0.8463)      0.2359 (0.1095)
11       0.3646 (0.2802)       0.1183 (0.2404)       0.8705 (1.3556)     -0.1704 (0.1063)
12      -0.0725 (0.1621)      -0.4113 (0.7505)       0.3295 (0.3886)      0.0942 (0.1052)
13      -1.164186 (0.6684)    -0.0848 (0.4246)      -1.0189 (1.0540)     -1.2209 (0.4514)
14      -0.1596 (0.3707)       0.3036 (0.1627)      -0.0505 (1.2763)      0.5163* (0.0633)
15       1.3521*** (0.2281)   -0.4831** (0.1791)     0.6624 (1.4933)     -1.6329** (0.0474)
16      -0.1759 (0.1757)      -1.3819*** (0.3059)   -1.1106 (1.1577)      0.3671 (0.0861)
17       0.3444 (0.3965)       0.5761* (0.2352)     -0.1341 (0.8444)      0.9408* (0.0858)
18       0.3189** (0.0827)     0.3346** (0.1077)    -0.0369 (0.2993)     -0.1305 (0.2430)
19       0.0343 (0.2129)      -0.1318 (0.1931)      -0.1767 (2.1588)      0.7786** (0.0471)
20      -0.1262* (0.0493)      0.6012 (0.4849)      -0.2041 (0.8897)      0.3953 (0.1311)

Note: Coefficients exhibit deviations from the mean level. Standard errors are in parentheses; *p<.10, **p<.05, ***p<.01.
