
How Identification Distorts the Processing of Socially Derived

Information

An Experimental Study on the Intergroup Anchoring Bias

Master’s Thesis 15 ECTS

University of Amsterdam, MSc in Economics

Track: Behavioral Economics and Game Theory

Name: Ivar Renzo Kolvoort

Student number: 11353589

Supervisor: Joël van der Weele

Date: 13/08/2017

Statement of Originality

This document is written by Ivar Renzo Kolvoort who declares to take full responsibility for the contents of this document.

I declare that the text and the work presented in this document are original and that no sources other than those mentioned in the text and its references have been used in creating it.

The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.

Abstract

This paper describes an experiment that measures the effect of induced group identity on the use of socially derived anchors. In an estimation task incentivized for accuracy, subjects repeatedly received feedback from an in- or outgroup. This feedback served as an anchor, and the results show that participants assigned more weight to anchors derived from ingroup members. Hence this study provides evidence for the existence of an ingroup bias in the use of socially derived information. Furthermore, there is evidence that this bias persisted for the length of the experiment and that subjects scoring high on the Cognitive Reflection Test exhibited a smaller bias. This study extends the recent findings about social identity to non-strategic decision-making. As a whole, this experiment demonstrates that an incentivized MGP experiment can be run efficiently online and that quantitative estimation tasks are suited to studying intergroup biases.

Table of Contents

1. Introduction

2. Empirical and Theoretical Background

2.1 Why Economists Should Care More About Social Anchors

2.2 Group Identity, Ingroup Bias and the Minimal Group Paradigm

2.3 Ingroup bias and Cognitive reflection

2.4 Social identity and Economics

2.5 Social Context, Estimation and Anchoring

2.6 Social Identity and Information Processing

3. Experimental Design

3.1 Overview

3.2 Minimal Group Induction

3.3 Quantitative Estimation Tasks

3.4 Incentive scheme

3.5 Cognitive Reflection Test

3.6 Measures of Group Identification

3.7 Data Analysis

3.7.1 Dependent Variable Weight

3.7.2 Observations of Interest

3.7.3 Modeling

4. Hypotheses

5. Results

5.1 Descriptives

5.1.1 Sample and Attrition

5.1.2 Invalid Observations

5.1.3 Group Manipulation Check

5.1.4 Descriptives Weight

5.2 Panel Model Estimation

5.2.1 Intergroup Anchoring Bias

5.2.2 Intergroup Anchoring Bias and Cognitive Reflection

5.2.3 Robustness of Estimates

5.3 Measures of Identification

5.3.1 Identification and time

5.3.2 Correlation with Intergroup Bias

6. Discussion

6.1 Summary Findings

6.2 Relation to Existing Literature

6.3 Limitations and Future Research

6.4 Concluding Remarks

7. References

8. Appendices

Appendix A: Links to the online sessions

Appendix B: Instructions

Appendix C: Sample characteristics per session

Appendix D: Check independence of treatment and observation validity

Appendix E: Robustness check without attrition and with invalid observations

Appendix F: Extra check change within sessions

Appendix G: Robustness check different model specifications

1. Introduction

Many parents have, unknowingly, lied to their children. Imagine you are standing on a bridge with a group of your friends. Everybody suddenly jumps off. Would you? I would. If those people, my friends, most of whom I believe to be levelheaded, some of whom I believe to be exceptionally astute, all decided at once that it would be a good idea to jump off a bridge, then I would hastily follow, thinking they must have some reason for it. I believe most, if not all, people would. That includes parents.

The situation would be a little different if I were just passing over a bridge and saw a group of strangers suddenly jumping off. I would be confused, certainly, but I think not immediately inclined to follow suit. Of course, this is a specific (and rather clichéd) example, but the point it illustrates is universal: we trust the people close to us and their judgements. It is almost instinctual. Any piece of information we get from people near us seems to be just a little bit more believable. A recommendation by a friend is worth a thousand anonymous online reviews. Is this reasonable? And if not, does it cause problems? Does it result in suboptimal decision-making?

We are at a point where we know that homo economicus, the rational agent, is largely a fiction. The field of behavioral economics has taught us over the last decades that we deviate from rational decision-making in many systematic ways. Most of this research has focused on isolated decision-making. In reality, however, decisions are not made in isolation.

Unfortunately, economists have in the past tended to neglect contextual factors in experimental research (Levitt & List, 2007). Due to our social nature, many of the decisions we make and the beliefs we hold are based on the decisions and beliefs of the social groups we inhabit. The consequences of this dynamic are vast.

The possible outcomes of these social effects are well described in the literature. Pluralistic ignorance and group polarization are examples known to have severe economic consequences (e.g. Miller & McFarland, 1991; O'Gorman, 1975). An analogy for these phenomena would be a situation where everyone jumped off the bridge, but no one really knew why. Researchers have indicated that such phenomena underlie many modern crises, such as the catastrophic failure of Enron and NASA's space shuttle disasters (Bénabou, 2012).

Underlining the importance of research into this are the recent findings of Kahan and colleagues (2013; 2017) that people who are good at reflective and analytic reasoning are more susceptible to group polarization and ideologically motivated reasoning. These are the type of people often found in positions of power.

Economists have also experimentally studied these social effects, albeit with a different focus. In behavioral economics the focus has mostly been on the effects of group identity on social preferences and on behavior in strategic contexts (e.g. Goette et al., 2012; Chen & Li, 2009; Chen & Chen, 2011). These studies have shown that when people are assigned to groups, even on the basis of arbitrary criteria, they show more reciprocal and altruistic behavior towards ingroup members and try to maximize social welfare with them.

While the effects of induced social identity on social preferences are already well established, research on the influence of group identity on information processing is in its infancy. The first to investigate this, very recently, were Cacault and Grieder (2016) and Le Coq et al. (2015). Cacault and Grieder found evidence that people systematically overweigh positive information regarding their social group's ability. This type of work is very important, as it seeks to uncover the underlying processes of decision-making in a social context, which describes most actual decision-making.

To date there are no studies known to me that examine intergroup biases1 in relation to anchoring. This is rather unfortunate, especially because it has long been known that in daily life candidate answers for problems are often provided by our social environment (Strack, 1992). These candidate answers are anchors. It is the rule, rather than the exception, that we use information provided by the people around us to guide our decision-making.

The anchoring bias itself has been one of the most studied behavioral biases since the seminal paper by Tversky and Kahneman (1975). I believe it is also very relevant for decisions in a social context. However, it has been pointed out that social context, as an aspect of anchoring specifically, has been understudied (Furnham & Boo, 2011). Fortunately, this is changing. Recently, Meub and Proeger (2015) found evidence that the anchoring bias is stronger when the anchor is derived from social context. We already knew that anchoring matters for decision-making, and we now know that social context is relevant to anchoring. Given what we know about social context, the next natural question to ask is: is there an intergroup bias in anchoring?

This study seeks to combine the literatures on social identity, anchoring and processing biases. Specifically, I am interested in whether there is an ingroup bias in the use of anchors in an objective and incentivized task. Do we process candidate answers for a problem differently if they come from people we know? Is this the same for all people? In investigating this, I extend the recent findings about the effect of social identity on information processing to a more natural decision-making context: a context in which people use information obtained from others. Moreover, this study uses novel methods to study these questions. In this way it serves as a proof of concept, showing the feasibility of 1) investigating intergroup biases using estimation tasks and 2) conducting a minimal group manipulation in an online experiment.

The experiment was run online in three sessions over three weeks. At the start, subjects were placed into groups using a novel minimal group induction procedure and subsequently performed quantity estimation tasks. After providing an initial estimate on a trial, they were shown a number as feedback, framed as the average estimate of three other participants. In actuality this number was the correct answer. It was experimentally manipulated whether this number was presented as the average estimate of ingroup or of outgroup participants. Subjects could then use this information to provide a second estimate. Both the initial and second estimates were incentivized for accuracy.

The results provide the first evidence for what I call the intergroup anchoring bias. Participants moved their answers more towards feedback that came from their own group. This shows that even novel and arbitrary groups make people prone to using information differentially based on which group it came from. As the tasks were incentivized for accuracy, this bias was not in line with participants' own material self-interest. Furthermore, I found some evidence that this bias persisted over the three-week duration of the experiment, despite possible learning effects. Lastly, the results indicate that participants more inclined to engage in effortful and analytic thinking are less prone to the intergroup anchoring bias. It has to be noted, however, that given the exploratory nature of this study, these findings should be regarded only as preliminary results.

The remainder of this paper is organized as follows. Section 2 gives an overview of the related literature, section 3 presents the experimental design, section 4 outlines the specific hypotheses, section 5 presents the results, and section 6 discusses these results, the limitations of this experiment and possible avenues for future research.

2. Empirical and Theoretical Background

2.1 Why Economists Should Care More About Social Anchors.

Humans are intrinsically social beings. Being part of social groups is an innate human need, and for good reason: it used to be essential to survival. In modern times, that need is still strong. There are many examples indicating that social isolation can result in a range of behavioral problems and even severe physical or mental illness (Baumeister & Leary, 1995). Group identification is essential both for the individual's wellbeing and for social groups themselves (e.g. Ellemers et al., 2003).

As social identities are such a large part of what it is to be human, it should come as no surprise that they play a significant role in economic decision-making. Akerlof and Kranton (2000) were the first to argue that group belonging and group identity are very important for economics. One example of this is that recruiters generally prefer to hire candidates belonging to similar social groups, resulting in sub-optimal hiring practices for their firm (Rivera, 2012). On the other side of that coin, candidates belonging to minority groups experience severe discrimination when applying for jobs (e.g. Carlsson & Rooth, 2007).

The number of examples of social group influences on individual decision-making studied within psychology is large (for an overview see Hewstone, Rubin & Willis, 2002). From an economic perspective, such effects by themselves have significant economic consequences for the actors involved. What makes these effects even more pernicious, and their economic consequences even larger, is that they can be self-reinforcing within social groups due to informational cascades (Bikhchandani, Hirshleifer & Welch, 1992).

An informational cascade occurs when a sequence of actors, to some extent, ignore their private information when making a decision. Instead, actors base their decisions on the decisions of other actors, assuming that those actors have information justifying their choices (Shiller, 1995). Informational cascades can, among other things, result in what is known as group polarization (Sunstein, 2002). Group polarization is the phenomenon whereby deliberation moves a group towards a more extreme point of view than would be expected from the individual pre-deliberation inclinations. This phenomenon has a large influence on the conduct of executive boards and management teams (Sunstein, 2002).
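The mechanics of such a cascade can be illustrated with a short simulation. The sketch below is my own illustration of the classic binary-choice setup and is not taken from the cited sources: each actor receives a private signal that is correct with probability p, observes all earlier choices, and rationally ignores the signal once the observed majority is two or more choices ahead.

```python
import random

def run_cascade(n_agents=20, p=0.7, true_state=1, seed=3):
    """Illustrative simulation of a binary informational cascade."""
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        # Private signal: matches the true state with probability p.
        signal = true_state if rng.random() < p else 1 - true_state
        # Net difference among all predecessors' observed choices.
        diff = sum(1 if c == 1 else -1 for c in choices)
        if diff >= 2:
            choice = 1   # cascade on action 1: the private signal is ignored
        elif diff <= -2:
            choice = 0   # cascade on action 0: the private signal is ignored
        else:
            choice = signal  # public evidence is weak: follow the signal
        choices.append(choice)
    return choices

choices = run_cascade()
```

With p > 0.5 an early run of identical choices locks in every later actor, whether or not the locked-in action matches the true state, which is exactly why private information can stop flowing into the group's decisions.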

Related to group polarization is the phenomenon of pluralistic ignorance, or collective delusion. Pluralistic ignorance occurs when the majority of a group privately rejects some norm or idea but mistakenly assumes that most group members accept it, and for that reason publicly accepts it as well (Miller & McFarland, 1991). The effects of pluralistic ignorance have been shown to influence the perceptions of whole populations (e.g. O'Gorman, 1975). Moreover, it has been widely documented that pluralistic ignorance can lead to strategic over-persistence in low-performing firms, with destructive consequences (Westphal & Bednar, 2005).

Such collective delusion has brought huge organizations to (the brink of) collapse. Famous examples are the culture of risk denial at NASA before the space shuttle accidents and the overconfident behavior of boardroom members of Enron and General Motors. A similar pattern found in these and other instances of collective delusion is the emergence of interdependent cognition and “echo chamber” group dynamics (Bénabou, 2013).

Interdependent cognition here refers to how individuals' thinking about and perception of events becomes dependent on how other group members think and perceive. This is then reinforced by echo chamber dynamics, as the group insulates its members from outside thoughts and perceptions. All judgements by individuals in such a group become anchored in previous judgements by other group members.

Forecasters provide an illustrative example of these dynamics. Market forecasters often anchor their predictions on the consensus values that are publicly available (Campbell & Sharpe, 2009; Fujiwara et al., 2013). These consensus values are in turn derived from all previous individual forecasts. Hence aggregated prior decisions constitute the anchor values for current decisions. Such endogeneity of anchors can cause problems after multiple cycles if the use of these anchors is biased; by analogy with compound interest, we could say that this results in a 'compound bias'. Forecasters, admittedly, have strong incentives for unbiased predictions and so probably take a possible bias into account when using consensus values. Decision-makers, however, do not always possess this prudent attitude, as the examples of Enron and NASA illustrate.
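The compounding intuition can be made concrete with a toy model. Everything below is an illustrative assumption of mine, not taken from the cited forecasting studies: each round the new consensus is a weighted average of the old consensus (the anchor, with weight w) and an unbiased signal, so any initial distortion decays geometrically across cycles, the mirror image of compound interest.

```python
def consensus_path(truth=100.0, start_consensus=80.0, w=0.6, rounds=8):
    """Toy model of endogenous anchoring: each round the consensus is
    a weighted average of the previous consensus (the anchor) and the
    truth (an unbiased signal). The initial bias of 20 units shrinks
    by a factor w per round."""
    consensus = start_consensus
    path = [consensus]
    for _ in range(rounds):
        consensus = w * consensus + (1 - w) * truth
        path.append(consensus)
    return path

path = consensus_path()
remaining_bias = [100.0 - c for c in path]  # bias left after each round
```

In this sketch the bias after k rounds equals the initial bias times w**k; the higher the anchoring weight w, the more slowly the group's consensus converges on the truth.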

The use of socially derived information (or anchors) by people in groups lies at the heart of all the aforementioned phenomena. Some of these phenomena are only worst-case scenarios, but social cognition is of course also at work in more mundane circumstances. Because of its ubiquity, studying cognitive biases in the processing of socially derived information is of major importance for economics in general and behavioral economics in particular. In this endeavor economists can learn a lot from social psychology, which has been studying social cognition for decades. The current experiment builds upon the methods, procedures and theories developed in social psychology, which are discussed next.

2.2 Group Identity, Ingroup Bias and the Minimal Group Paradigm

The term ingroup (or intergroup) bias refers to the tendency to systematically judge one's own group, the ingroup, more favorably than outgroups, resulting in either ingroup favoritism or outgroup derogation (for an overview see Hewstone, Rubin & Willis, 2002). This tendency lies at the heart of most research into social identity, and it can be seen in behavior, attitudes and cognition (Mackie & Smith, 1998). The main2 psychological theory used to explain the ingroup bias is social identity theory.

This theory has three major components: categorization, identification and differentiation (Tajfel, 2010). Categorization is the process of labeling and sorting people, ourselves included, into social categories. These social categories are in turn intertwined with our self-image. Identification happens when one associates with and starts relating to specific groups. These groups become ingroups with whom we identify; other groups become outgroups with whom we do not identify. Lastly, differentiation happens when we compare ourselves and our groups with others. In doing so we create a favorable bias towards ourselves and our group: we come to think of ourselves and our group as generally better and more able. There is substantial evidence that the main motivation for doing so is to fulfill self-esteem needs (Aberson et al., 2000; Rubin & Hewstone, 1998). An economist's phrasing of that notion would be that we gain utility from increasing our self-esteem through ingroup favoritism and outgroup derogation.

2 Other explanatory theories include optimal distinctiveness theory, subjective uncertainty reduction theory and social dominance theory; for an overview see Hewstone, Rubin & Willis (2002). At the core of all these motivational theories, including social identity theory, lies the notion that people gain some utility when they believe their own group is in some way or form superior to others.

For intergroup relations in general, and intergroup biases in particular, the Minimal Group Paradigm (MGP) has been one of the most influential tools available to social psychologists since its development almost fifty years ago (Otten, 2016). It was originally used to study categorization effects on intergroup discrimination and was developed in conjunction with social identity theory (Tajfel et al., 1971). This discrimination was measured using allocation matrices in which participants had to choose among options allocating rewards to ingroup and outgroup members. The basis of the MGP is the novel and arbitrary categorization of participants into groups. The only difference between ingroup and outgroup members should be the fact that they belong to different groups; hence the resulting effects of a minimal group manipulation are often referred to as mere categorization effects. The following criteria were established for MGP experiments (from Tajfel et al., 1971, pp. 153-154):

1. No face-to-face interaction among subjects, either within or between groups.

2. Group membership should be anonymous.

3. There should be no instrumental link between the basis for group categorization and the nature of requested responses.

4. A strategy of responding differentially to groups should be in competition with a strategy based on more rational or utilitarian principles.

While initially developed to investigate explicit discrimination, the MGP has more recently been used as a tool to investigate automatic processes and biases under conditions where one's social identity is made salient (Otten, 2016). Greenaway and colleagues (2015) found evidence for a group-based processing bias using a communication approach. In their experiments, subjects consistently rated instructions allegedly coming from their ingroup as more reliable and trustworthy than those coming from outgroups. This mere categorization effect has also been documented in children (MacDonald et al., 2013).

Multiple studies have used implicit measurements to examine how mere categorization creates a processing advantage for information related to the ingroup. For example, Ratner and Amodio (2013) found using EEG that faces of ingroup members were assigned more processing weight. Moreover, multiple studies have evidenced an ingroup bias using masked priming tasks (e.g. Otten & Wentura, 1999).

2.3 Ingroup bias and Cognitive reflection

One thing that should be clear from this evidence is that ingroup favoritism and outgroup discrimination do not happen because people are ignorant, incapable or, as economists like to phrase it, suffering from bounded rationality. In reality, everyone exhibits intergroup biases. Traits such as intelligence and analytic or reflective thinking have been associated with being less prone to biases such as hyperbolic discounting, loss aversion and the endowment effect (e.g. Frederick, 2005; Toplak, West & Stanovich, 2011). In these studies the Cognitive Reflection Test (CRT) is used to measure reflective and deliberative thinking versus heuristic and intuitive thinking. CRT performance is known to be a strong predictor of conscious, effortful information processing, also called System 2 processing (Frederick, 2005). Of all the biases, however, the intergroup bias seems to be a different story.

Somewhat counterintuitively, there is evidence that the opposite is actually the case. In an experiment by Kahan (2013), participants who scored high on the CRT were less likely to display self-serving rationalizations when faced with neutral questions. But when these participants were faced with ideologically relevant issues such as gun control or climate change, they were more likely to display ideologically motivated cognition. In a follow-up study these findings were confirmed: participants who were better at working with quantitative data became more polarized when that data was about something political (Kahan et al., 2017).

Kahan explains these findings by proposing that "ideologically motivated cognition is a form of information processing that promotes individuals' interests in forming and maintaining beliefs that signify their loyalty to important affinity groups" (Kahan, 2013, pg. 407). He dubbed this theory the Identity-protective Cognition Thesis (ICT). In other words, people use their capabilities selectively to conform their interpretation of information to that of their social ingroup. Hence people with better analytical and reflective thinking are able to re-interpret information to a larger extent and thus can become more socially biased. This makes research into intergroup biases all the more important, as people with these traits are often found in positions of power and make decisions with large societal consequences.

The psychological literature reviewed so far can guide economists in their investigation of social identity. However, there are two main reasons why the extensive psychological literature on ingroup biases cannot simply be copied by economists.

First, most of these studies use subjective judgements to measure an ingroup effect. These subjective judgements can be about in- or outgroup members themselves (see Diehl, 2002, for examples) or about the information coming from different groups (e.g. Greenaway et al., 2015). This form of decision-making, rating people or their information on a Likert scale, is not something executive boards or consumers do daily. Studies using implicit measurements of bias, for example through priming, often do not incorporate a behavioral or decision-making component; and when they do, the implicit measures often do not correlate with explicit behavioral measures (Maass, Castelli & Arcuri, 2000).

Second, besides the fact that the type of decision-making in these experiments does not reflect economic decisions, there is also the issue of a lack of incentives. Monetary incentives are nowadays the norm in experimental economics. They are used to align decisions in experiments with genuine decisions that have monetary consequences. Most psychological experiments do not incorporate incentives in such a way.

2.4 Social identity and Economics

Akerlof and Kranton (2000) were the first economists to argue that identity should be systematically incorporated into economic analysis. They incorporated identity into a utility function. Simply put, in their model one's identity is tied to social categories that come with specific expectations about behaviors and norms. Experiences in line with these behaviors and norms result in positive utility, while deviations cause disutility. They applied this model quite successfully to a multitude of economic phenomena, ranging from discrimination and poverty to contract theory and education (Akerlof & Kranton, 2000; 2002; 2005).
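Their identity-augmented utility function can be sketched as follows (a simplified paraphrase of the notation in Akerlof & Kranton, 2000; details are omitted):

```latex
% Identity-augmented utility (sketch after Akerlof & Kranton, 2000):
U_j = U_j\!\left(a_j,\ a_{-j},\ I_j\right),
\qquad
I_j = I_j\!\left(a_j,\ a_{-j};\ c_j,\ \epsilon_j,\ P\right)
```

Here a_j denotes person j's actions, a_{-j} the actions of others, and I_j is j's identity or self-image, which depends on the social categories c_j that j is assigned to, j's own characteristics ε_j, and the prescriptions (norms) P attached to each category. Actions that deviate from the prescriptions of one's category reduce I_j and hence utility, which is how conforming to group norms enters the economic calculus.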

Since the seminal work of Akerlof and Kranton the economics literature on social identity has steadily grown. In one line of research, pre-existing social identities are used to investigate the effect of identity. Fershtman and Gneezy (2001), for example, looked at how ethnicity affects discriminatory behavior in the trust, ultimatum and dictator games. A study by Ruffle and Sosis (2006) showed that the type of community one lives in relates to discriminatory behavior, in a public goods game, towards people from different types of communities.

The first researchers to experimentally manipulate social identity and use monetary incentives were Ball, Eckel, Grossman and Zame (2001). They find that in a simple market setting participants assigned to the high social status group consistently capture a larger share of the surplus, both as sellers and as buyers. Eckel and Grossman (2005) went on to study the effects of social identity in a repeated public goods game. They experimentally induced different degrees of group identification and found that groups with higher degrees of identification show less individual free-riding behavior. Charness, Rigotti and Rustichini (2006) also induced different degrees of social identification, by varying the salience of the group structures. By pitting participants from different groups against each other, they find that increased identification results in less coordination in the Prisoner's Dilemma while increasing coordination in the Battle of the Sexes game.

These studies, however, do not follow the criteria set out for MGP experiments (Tajfel et al., 1971). In all three studies, responding differentially to groups is not in competition with a strategy based on simple utilitarian principles; in other words, individual incentives are aligned with group incentives. Because of this, a possible ingroup bias cannot be ascribed purely to social identity. Participants in the market-setting experiment discriminate by either asking higher prices or making lower offers, both of which increase their payoff (Ball et al., 2001). In the experiment by Eckel and Grossman (2005), participants played a repeated public goods game; the repeated nature of this game means that cooperative behavior could be the result of self-interest, since cooperation could sway other participants to cooperate in future rounds as well. Similar issues arise in the study by Charness, Rigotti and Rustichini (2006). Contrary to MGP experiments from social psychology, they only find a significant intergroup effect in the conditions with face-to-face interaction or with a common payoff structure that aligns individual and group incentives. These two elements do not fit the criteria for MGP experiments, so the effects cannot be as easily ascribed to group identity alone.

The first true economic MGP experiment was done by Chen and Li (2009). In their seminal paper they investigate the effects of induced social identity on social preferences. After grouping participants based on their painting preferences, they matched participants with either an ingroup or an outgroup member in different games. Their results show that participants behaved more altruistically towards ingroup matches and were more willing to punish outgroup matches (Chen & Li, 2009), completely in line with the ingroup favoritism and outgroup discrimination predicted by social identity theory. Most important here is that both the altruistic behavior and the punishment went against the subjects' self-interest. Additionally, a follow-up study with a similar setup found that increased group identification resulted in more efficient equilibria being reached in coordination games where participants played with ingroup members (Chen & Chen, 2011).

These results3 show that group identity manipulations based on arbitrary criteria increase the motivation to maximize social welfare, positive reciprocity and altruism towards ingroup members. They all fall in line with the predictions about ingroup favoritism and outgroup discrimination. Together, the researchers cited in this section have firmly placed social identity on the radar of economists. This paper, however, investigates the effect of social identity not specifically on social preferences or on decision-making in a strategic context, but on general decision-making and the underlying information processing. This is discussed in the following sections.

3 For more empirical evidence of similar mere categorization effects on social preferences see: Le Coq, et al., 2015; Li, Dogan & Haruvy, 2011; Goette, Huffman & Meier, 2012; Goette et al., 2012.

2.5 Social Context, Estimation and Anchoring

The destructive consequences of the phenomena mentioned earlier, group polarization and collective delusion, do not result from social preferences or from direct judgements of other people's ability, which is what the aforementioned studies on social identity and information processing measure. CEOs don't lead their companies into bankruptcy by directly deciding on and judging the ability of their colleagues. Such strategic decisions are based on beliefs about the world at large. Some of these beliefs may be provided by colleagues, in such a way that overconfidence in those beliefs is attributable to overconfidence in the colleagues, but ultimately the harmful beliefs are not necessarily about those colleagues.

A CEO will not choose to invest in some product because he thinks his advisors are very smart, but because he ultimately believes it will result in positive returns. In our everyday decision-making we don't judge the ability of the people around us; instead, we judge their judgements. We form beliefs about the information the people around us provide and use it to guide our decision-making. In fact, socially derived information is so essential that our mental model of the world is to a large extent the result of judgements made by others before us (Strack, 1992). This is why this study examines the ingroup bias in the context of judgements about exogenous facts, while using information provided endogenously, by in- or outgroups.

Specifically this research focuses on quantitative estimation. The reason for this is that quantities are ubiquitous and very important for our understanding of the world. Estimating quantities is essential in everyday decision making. Some examples of quantitative estimation problems are:

• How much does it cost to rent a three bedroom apartment in the center of Amsterdam?

• How many kilometers will I be able to drive with half a tank of gas?

• What percentage of Google’s revenue will come from advertisements in the year 2025?

• How many denim jeans does a company need to sell in Europe to be able to exercise market power?


The estimation of uncertain quantities is a large part of economic decision-making; in many situations the decision can depend entirely on a quantity that has to be estimated. Because quantitative estimation is such a large part of everyday decision-making, it is widely used by clinicians to test the reasoning and executive functioning of their patients (Strauss, Sherman & Spreen, 2006).

From a cognitive perspective quantitative estimation is a complex reasoning and memory task: multiple candidate answers are activated, each is given some processing weight, and the final estimate is often a combination of these weighted candidates (Wilson & Brekke, 1994).
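This weighting of candidate answers can be sketched as a simple convex combination. This is an illustrative model only, not a formal specification from the literature; the function name and values are made up for the example.

```python
def combine(own_answer: float, anchor: float, w: float) -> float:
    """Combine one's own candidate answer with an anchor (e.g. feedback).

    w is the processing weight given to the anchor:
    w = 0 ignores the anchor, w = 1 copies it outright.
    """
    return w * anchor + (1 - w) * own_answer

# An agent whose own answer is 200 and who gives an anchor of 100
# a quarter of the processing weight ends up at 175.
combine(200, 100, 0.25)  # 175.0
```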

The experiment described in this paper includes an estimation task in which subjects receive feedback from other subjects after providing an initial estimate. Hence a participant's initial estimate and the feedback given to them serve as candidate answers that can each be assigned some processing weight to arrive at the second estimate.

This mirrors to some extent how we form judgements in real life. Candidate answers can come from individual knowledge of relevant rules or examples, but in many judgements they are provided by the people in our social environment (Strack, 1992).

The main research paradigm for quantitative estimation in economics has been that of the anchoring-and-adjustment heuristic (e.g. Tversky & Kahneman, 1975; Jacowitz & Kahneman, 1995). In this model, when making an estimation the agent first considers an anchor. This anchor serves as a candidate answer and is typically provided by the researcher. The agent judges whether this anchor is higher or lower than the required answer and subsequently adjusts the value of the anchor in the appropriate direction to derive a final estimate. The main, robust finding has been that this adjustment is generally insufficient, leading to an answer too close to the anchor (Tversky & Kahneman, 1975; Northcraft & Neale, 1987). This finding is referred to as the anchoring bias.

While this paper does not directly investigate the anchoring bias, the extensive literature on it does confirm the cognitive psychological description of weighing candidate answers in estimation tasks. Anchors function as candidate answers, and adjusting relative to them is similar to weighing them relative to some other number. This last claim is evidenced by the fact that when multiple anchors are introduced, the anchoring bias is reduced and final answers are reached by combining the available anchors (Whyte & Sebenius, 1997).

Within the anchoring-and-adjustment literature, however, the influence of social context has been relatively understudied (Furnham & Boo, 2011). The exception is a study by Meub and Proeger (2015), which compares neutral anchors with social anchors. Neutral anchors were the correct answers on previous estimation trials; social anchors were created by averaging the estimates of other participants on the previous estimation trial. It is important to note that this previous trial was unrelated to the trial for which the anchor was used, meaning that the anchor did not provide any information. Meub and Proeger found that the anchoring bias resulting from socially derived anchors is larger than for neutral anchors: participants adjusted their estimate even less when the anchor was socially derived. In other words, the social anchor was assigned more weight, and final estimates were thus closer to social anchors than to neutral anchors. This finding highlights the importance of our research: if social anchors are even more influential than anchors in general, intergroup biases in the weighting of such anchors are also likely to have significant consequences for decision-making.

2.6 Social Identity and Information Processing

While understanding social preferences would certainly help in explaining decision-making, it is only a piece of the puzzle. Behavioral economics has from the start focused on investigating and explaining economic decision-making. Originally the emphasis was placed on cognitive heuristics and biases, as in the groundbreaking work of Tversky and Kahneman (1975). These heuristics and biases were explained as cognitive mistakes wired into our brains, such as the overweighting of small probabilities, and are now commonly found as assumptions in economic models.

Specifically, a lot of research has been done into how self-related information is differentially processed and how this affects incentivized decision-making (for an overview see Bénabou & Tirole, 2016). Möbius and his colleagues found that people become overconfident (2011): they tracked subjects' beliefs about their own relative performance on an IQ test after they received generated, noisy feedback. Subjects over-weighted positive feedback about their performance relative to negative feedback, which created overconfidence. Eil and Rao (2011) found the same asymmetric processing of self-related information not only for ability but also for attractiveness. In a similar vein, Gregg, Mahadevan and Sedikides (2017) recently found that people find a theory more credible when the theory is attributed to themselves than when it is not.

This is only a handful of examples from the extensive literature showing that self-related information is processed differently. Earlier it was discussed how intrinsically social we humans are; one could almost say our 'social self' is as important as our 'personal self'. Hence our information processing is likely to suffer from distortions based on our social identity, similar to how it is distorted by our personal identity.

Recently, the first evidence for such a social processing bias was found in a study by Cacault and Grieder (2016). Specifically, they found that beliefs about ingroup ability are distorted similarly to beliefs about individual ability, as shown in the experiments of Möbius and his colleagues (2011).

After going through a minimal group induction procedure, participants in the experiment of Cacault and Grieder had to complete an IQ test. In order to elicit beliefs, three ingroup members and three outgroup members were selected, and subjects could effectively place bets on whether the three ingroup members had done better on the IQ test than the outgroup members. After an initial bet, subjects received noisy information about the relative performance of their ingroup on a single question of the IQ test. Subsequently, more random groups of in- and outgroup members were selected and participants could place more bets. This repeated elicitation made it possible to track subjects' beliefs about their ingroup's relative performance. The findings suggest that group identification distorts beliefs about the ingroup's ability in much the same way as beliefs about individual ability are distorted (Möbius et al., 2011; Eil & Rao, 2011).

Moreover, Cacault and Grieder found evidence that the ingroup bias can be split into two distinct effects. First, there is a 'prior ingroup bias': subjects were immediately more confident in their ingroup's relative performance after the group induction phase. Second, there is a 'dynamic ingroup bias': subjects in the treatment condition processed the feedback about their group's performance differently than the control group, putting more weight on positive feedback about their group than on negative feedback. This dynamic effect means that the initial ingroup bias can persist in the face of contradictory evidence (Cacault & Grieder, 2016) and of a fading effect of the group manipulation (Chen & Li, 2009). It is important to note, however plausible these conclusions may seem, that most of them hinge on small effect sizes and only marginally significant results. Hence the robustness of the effect of social identity on information processing is still very much in question.

Lastly, there is one other paper that provides evidence of group identity influencing decision-making through a channel different from the well-established channel of social preferences. Le Coq and colleagues (2015) studied the behavior of participants in a series of centipede and stag hunt games using a minimal group manipulation. Their initial results showed that, when matched with an outgroup member, subjects were more likely to behave as if their partner was uniformly randomizing between his or her strategies. The researchers dubbed this the Best Response to Uniform Randomization (BRUR) effect. They tested it further in another experiment by systematically varying what the best response to uniform randomization was. This confirmed that participants indeed believed that outgroup members behaved more randomly.

Moreover, their results indicated that this effect is separate from the effect of identification on social preferences (Le Coq et al., 2015). In this way social identity can have an effect opposite to what would be expected through the channel of social preferences. Specifically, Le Coq and colleagues found that in the centipede game, given a specific payoff structure, participants actually continued for longer when playing with an outgroup member, which results in larger payoffs.

The researchers offer two possible explanations of why subjects respond to outgroup members as if they were uniformly randomizing over their strategy space (Le Coq et al., 2015). The first is that outgroup members are believed to act randomly because they are perceived to be less (strategically) sophisticated, which would be in line with social identity theory. The second is that people may find it harder to predict the behavior of outgroup members because they are more dissimilar; they then make the simplest possible prediction and believe each of their actions to be equally likely.

While the literature reviewed in this section provides the first evidence that social identity has a real effect on information processing, all decision-making took place in strategic games. The current experiment, described in the next section, is designed to extend these findings to decisions outside such a strategic context.


3. Experimental Design

3.1 Overview

The experimental design differs considerably from any existing experiment. First, the minimal group induction is novel. Second, repeating an MGP experiment with the same groups has not been done before in similar fashion. Third, the combination of quantitative estimation tasks with group-based feedback is completely new. Lastly, the whole procedure was done online. As the experimental design includes multiple novel elements, it is described in this section at length.

The experiment consisted of three similar online surveys that participants filled in by themselves during three consecutive weeks⁴. Participants were recruited by sharing a short explanation of the experiment, with a link to the first session, in multiple Facebook groups that the author is a member of. Multiple sessions were implemented to increase group salience, which is explained further at the end of the next section. All three surveys had a common part that consisted of numerical estimation tasks and allocation matrices⁵. In addition, session 1 included the group induction phase at the start and, at the end, a questionnaire on background variables and the Cognitive Reflection Test (CRT). Lastly, sessions 1 and 3 also included a group identification scale at the end. All these elements of the design are explained in the following sections.

3.2 Minimal Group Induction

At the start of the first survey the minimal group induction took place. The group induction was based on subjects' eye color and their painting preferences. The subjects were informed of this and the groups were framed as 'teams'. The use of painting preferences for group formation in the minimal group paradigm has a long history (Diehl, 1990). Specifically,

4 Links to the webpages with the experimental sessions are provided in appendix A

5 The common part of the sessions actually also included a ‘subjective estimation’ task based on willingness to pay questions, which is not described here. A description of this task and short analysis of the data is offered in appendix H. As this task ultimately was not relevant for the research goals of the study it was left out the main body of this paper.


paintings from Paul Klee and Wassily Kandinsky have been used successfully many times (e.g. Tajfel et al., 1971; Billig & Tajfel, 1973; Chen & Li, 2009; Le Coq et al., 2015). The current experiment used the same paintings and procedure as Chen and Li did (2009). Participants were presented with five pairs of paintings, each pair consisting of one painting by Klee and one by Kandinsky⁶. Subjects did not know which painting was by which artist, and for each pair they selected their preferred painting. Participants were placed in the Klee or the Kandinsky group if they selected three or more paintings by that painter.

Next, the second step of the group induction was based on eye color. The main reason for including a second grouping criterion was to form more, smaller groups, instead of just a Klee and a Kandinsky group. Laboratory MGP experiments classically use groups of 3-6 participants (e.g. Tajfel et al., 1971). Moreover, it has been shown that members of a group that is a numerical minority exhibit a larger intergroup bias (Leonardelli & Brewer, 2001). While participants could not know the actual size of the groups, having only two groups could signal to them that the groups were rather large. Eye color was chosen as the second criterion because it is trivial, uninformative and has been used successfully before (e.g. Anderson, Fryer & Holt, 2006). Participants were given three options (brown, blue and green) and were asked which was most similar to their own eye color.

Consequently participants were placed into groups based on having a similar eye color and preference for the same painter. In total there were six groups and they were named according to the relevant eye color and painter (e.g. Team Brown Kandinsky or Team Blue Klee). Participants were then informed about which team they were in and about the names of the other participating teams.

This minimal group induction procedure carefully followed requirements for minimal group paradigms as set out by Tajfel and his colleagues (1971). The group categorization is novel, participants have no history of experiences with groups based on this specific painter

6 The five pairs consisted of the following paintings: 1A Gebirgsbildung, 1924, by Klee; 1B Subdued Glow, 1928, by Kandinsky; 2A Dreamy Improvisation, 1913, by Kandinsky; 2B Warning of the Ships, 1917, by Klee; 3A Dry-Cool Garden, 1921, by Klee; 3B Landscape with Red Splashes I, 1913, by Kandinsky; 4A Gentle Ascent, 1934, by Kandinsky; 4B A Hoffmannesque Tale, 1921, by Klee; 5A Development in Brown, 1933, by Kandinsky; 5B The Vase, 1938, by Klee.


preference and eye color. Moreover, the categorization was completely anonymous and subjects had no face-to-face interaction with other participants.

Surprisingly, recent research using the MGP utilizes the exact same group induction procedures that were developed in the late 1960s (Pinter & Greenwald, 2011). This means that relatively little is known about the efficacy of different induction procedures. One thing that is known is that group salience is crucial for minimal group manipulations to work (Chen & Li, 2006). To my knowledge there are no minimal group studies done solely using online tasks that participants could do from home. All studies familiar to the author have used a lab and thus had more methodological freedom to make the groups salient.

In order to increase group salience, all team names were always presented highlighted in brown, green or blue, depending on the eye color. The relevant team name was always presented this way when participants got feedback from a team. Moreover, participants' own team name was shown at the bottom of the screen at all times while doing the tasks, as is standard in the literature (e.g. Cacault & Grieder, 2016; Chen & Li, 2009).

Furthermore, repetition was used as a novel way to increase group salience. Simply being reminded of and exposed to one's group name more often increases its salience. In addition, it has been shown that in natural social groups identification increases over time (e.g. Hall & Schneider, 1972). For these reasons the task was repeated three times over three weeks. Participants stayed in the same group over these three weeks and hence were exposed to their own team name more often than they would be in a single session. In addition, in all e-mails sent to the participants during these three weeks their team name was displayed prominently (they received a copy of the instructions and reminders to finish the surveys before the end of the week; these can be found in appendix B). Lastly, this setup mimics more realistically how we are part of groups in real life: normally our membership of a group lasts longer than a single experimental session.


3.3 Quantitative Estimation Tasks

The main part of all three surveys consisted of the estimation tasks and WTP questions. To participants it was framed as if their quantitative estimation abilities were tested in three different domains. The first domain was quantitative estimation based on photos, with 8 trials per survey. Based on a photo, participants had to estimate a quantity, for example the number of shopping trolleys pictured or the official capacity of a pictured swimming pool. The second domain was framed as quantitative estimation based on computer generated images and consisted of 6 trials. Here participants had to estimate the number of elements, such as dots or circles. While these two domains were framed as being different, in order to reduce perceived monotony, they are theoretically the same and all trials are analyzed together as the estimation tasks. So in total there were 14 estimation trials in each session.

Quantitative estimation based on images and photos was specifically used so that participants could not gain an advantage from prior knowledge: with these questions there is no domain-specific knowledge in the form of rules or examples that could help participants. In contrast, with trivia-type quantitative estimation, such as "How long is the Nile?", participants can have specific advantages, resulting in unexplained variance.

After participants were presented with the photo or computer image they provided their numerical estimate of the required quantity. On the next screen, subjects were provided with a fake average answer framed as being from three other participants. Half the time, either on all the even or on all the odd trials, this feedback was framed as being the average answer of three participants from the subject's own team; the other half it was presented as being from a randomly chosen other team. After receiving this feedback participants had to submit a new estimate. In order to reduce possible confusion and make sure that the subjects would use this information, the instructions mentioned that "generally average estimates of multiple participants are relatively close to the true value", as has been done before in an estimation study (Falk & Zimmerman, 2016). In addition, for each estimate subjects had to rate on a 7-point Likert scale how confident they were that their estimate fell within 10% of the correct answer. Both the initial and revised estimates were incentivized for accuracy (further explained in the next section). This was done so that subjects had to try to provide an accurate estimate based on what they thought before receiving feedback. As a result, participants have two useful candidate answers to use for the second estimate, and when this is the case it has been shown that people actually combine the two numbers (Whyte & Sebenius, 1997). This forces participants to weigh the information they received as feedback relative to their own initial judgement.

In actuality, the average estimate shown to the participants was not an average estimate. For all objective estimation trials the number given to the subjects after their initial estimate was the true answer; it was only framed as being an average from either in- or outgroup members. While deception is generally avoided in experimental economics, and for good reasons, it could not be avoided in this instance. Due to constraints on time and resources it was not possible to design the experiment in such a way that actual averaged estimates were shown to participants⁷. However, I would argue that the deception does not necessarily harm the internal validity of this experiment, as there is no reason for it to systematically alter the decision-making processes involved. Moreover, care was taken to make sure that all other aspects of the design, i.e. the group formation, the tasks and the monetary incentive, were free from deception. Ultimately I chose to use the true answers as feedback so that participants would not get the sense over time that the feedback was biased in any way, and so that the feedback was actually useful.

7 This study being part of a master’s thesis meant that I did not have access to resources an actual university research group might have. In a similar future study deception can, and should, be avoided. In discussion section 6.3 I discuss two feasible ways to do this.


For each of the three surveys two versions were made in which the only difference was whether the feedback on a specific trial came from the ingroup or an outgroup. So for each question where version A had the feedback framed as coming from the subject's own team, version B had it framed as being from a competing team. Participants were randomly assigned to a version of each session. This design results in the data being completely counterbalanced for the effect of ingroups versus outgroups.
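The counterbalancing rule above can be sketched as follows. Which trial parity carries ingroup feedback in which version is not specified in the text, so the mapping below is an illustrative assumption.

```python
def feedback_source(version: str, trial: int) -> str:
    """Framed source of the feedback on a given trial.

    Illustrative assumption: version "A" shows ingroup feedback on
    odd-numbered trials and outgroup feedback on even-numbered trials;
    version "B" is the mirror image. Across the two versions every trial
    is therefore answered under both framings, counterbalancing the design.
    """
    ingroup_on_odd = (version == "A")
    is_odd = (trial % 2 == 1)
    return "ingroup" if is_odd == ingroup_on_odd else "outgroup"

feedback_source("A", 1)  # "ingroup"
feedback_source("B", 1)  # "outgroup"
```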

It is known that the visual properties of a stimulus used in quantity estimation influence the estimates people give (Gebuis & Reynvoet, 2012). These visual properties, such as the size of elements, aggregate surface, average density and spread of elements in the image, can make people over- or underestimate the required quantities. In order to diminish this bias, the pictures and images used in this study were chosen so that all these properties differed: for example, some pictures had large circles as elements and others had small ones; some were organized in a square, others around a circle. Having the stimuli differ in properties that could induce a bias also ensures that the feedback the participants got, the actual answer, is not consistently interpreted as being too high or too low, since the stimuli were chosen so that participants do not consistently over- or underestimate the quantity themselves. For this same reason the trials differed on non-visual properties such as scale (estimating ~100 dots versus estimating ~1000 dots) and type (estimating the official capacity of a pool versus estimating the number of dots in a picture).

3.4 Incentive scheme

On all estimation trials both the initial and the revised estimate were incentivized for accuracy. This monetary incentive was used to align stated estimates with genuine estimates. Participants could earn 10 points with each estimate and lost 1 point for every 5% they were off from the true answer; participants who were more than 50% off received no points. Adding all these points together yields the subject's personal score.
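The scoring rule can be sketched as below. Whether a partial 5% band already costs a point is not stated above, so rounding the error down to full 5% bands is an assumption of this sketch.

```python
def estimate_points(estimate: float, true_value: float) -> int:
    """Points for a single estimate: start at 10, lose 1 point per full 5%
    of percentage error; more than 50% off earns nothing.

    Assumption: partial 5% bands are rounded down (floor division).
    """
    pct_error = abs(estimate - true_value) / true_value * 100
    if pct_error > 50:
        return 0
    return 10 - int(pct_error // 5)

estimate_points(100, 100)  # 10 (exactly right)
estimate_points(110, 100)  # 8  (10% off)
estimate_points(200, 100)  # 0  (more than 50% off)
```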

An important aspect of the payout scheme has to do with inducing a 'common fate' for the members of each team. Previous research has shown that group identification increases when members of a group are interdependent (Gaertner et al., 2006; Brewer, 1979). To create interdependence within teams, the final score of each participant was calculated by adding his or her personal score to the average score of their team⁸.

Monetary prizes were awarded based on each participant's final score. Participants with the three highest final scores received €50, €20 and €10 respectively. Next, from the remaining participants five were chosen at random for payment. Of these five, the participant with the highest score received €5, the second highest €4, and so on down to the participant with the lowest final score of these five, who received €1. This was done so that participants would not get discouraged, as everyone, regardless of final score, had a chance of winning money.
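The final-score rule can be sketched as follows. Whether the team average includes the participant's own personal score is not specified above, so including it is an assumption of this sketch; names and data are illustrative.

```python
def final_score(personal: dict, team_of: dict, participant: str) -> float:
    """Final score = own personal score + average personal score of own team.

    `personal` maps participant -> personal score; `team_of` maps
    participant -> team name. Assumption: the team average includes
    the participant's own score.
    """
    team = team_of[participant]
    members = [p for p, t in team_of.items() if t == team]
    team_avg = sum(personal[p] for p in members) / len(members)
    return personal[participant] + team_avg

personal = {"p1": 100, "p2": 80, "p3": 60}
team_of = {"p1": "Blue Klee", "p2": "Blue Klee", "p3": "Brown Kandinsky"}
final_score(personal, team_of, "p1")  # 100 + (100 + 80) / 2 = 190.0
```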

Participants were told that they were only eligible for a monetary reward if they completed all three sessions. The payout scheme was explained at the start of every session in the instructions and was also sent by email to the participants after completion of the first session. It was emphasized that the payout scheme came down to this: more accurate estimates increase both the amount of money and the chance of winning money.

3.5 Cognitive Reflection Test

The first session concluded with the Cognitive Reflection Test (CRT). Rather than placing this at the very end of the experiment, as is usually done, it was placed at the end of the first session because of possible attrition. The CRT was developed by Frederick (2005) and is a three-item test in which participants solve problems that require them to reflect and resist reporting their initial response. As such it measures reflective and deliberative thinking versus heuristic and intuitive thinking. The CRT consists of the following three open-ended questions:

8 It is important to note here that this interdependence would not make a rational agent use the feedback from their own team differently than the feedback from another team. It is still the case that more accurate estimates always earn more points. One could think of a strategy where a participant gives an estimate known to be inaccurate in order to make their team's average estimate more accurate. However, this would only reduce the average amount of points the team gets, as it would reduce the points earned by the participant him- or herself. Such a strategy also would not help your team members by giving them better feedback, as the other teams would be shown this same feedback.


1) If it takes 5 machines 5 minutes to make 5 widgets, how long does it take 100 machines to make 100 widgets?

2) A bat and a ball cost €1,10 in total. The bat costs €1,00 more than the ball. How much does the ball cost?

3) In a lake, there is a patch of lilypads. Every day the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how many days would it take for the patch to cover half of the lake?
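The correct answers are 5 minutes, 5 cents and 47 days; the intuitive but wrong answers are 100 minutes, 10 cents and 24 days. A quick numerical check of the reasoning behind each item:

```python
# Item 1: 5 machines make 5 widgets in 5 minutes, i.e. each machine makes
# 1 widget per 5 minutes, so 100 machines make 100 widgets in 5 minutes.
minutes = 5

# Item 2: bat = ball + 1.00 and bat + ball = 1.10 imply 2 * ball = 0.10.
ball = (1.10 - 1.00) / 2   # 0.05 euro, i.e. 5 cents (not 10)

# Item 3: a daily-doubling patch covers half the lake exactly one day
# before it covers all of it.
days_to_half = 48 - 1      # 47 days (not 24)
```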

3.6 Measures of Group Identification

At the end of each session there was a group manipulation check that was framed to participants as a 'bonus round' in which they could hand out and receive points. It was framed as a bonus round because previous research has shown that, when framed in this way, respondents are more likely to differentially allocate money between ingroups and outgroups (Bornstein, 1983). This bonus round consisted of two allocation matrices adapted from Tajfel and colleagues (1971)⁹. With these matrices, shown in table 1, subjects have to choose a specific allocation that gives an amount of points to a random ingroup and a random outgroup member. These bonus points were added to the final scores of the in- and outgroup members, so that the chosen allocation did not influence a participant's own score.

These matrices were constructed so that different motivations are pitted against each other (Tajfel et al., 1971). In this experiment matrices type A and type B were chosen (for an in depth analysis of these matrices see Leonardelli & Brewer, 2001). Matrix type A makes participants choose between allocations that either give a larger total amount of bonus points or that give more to the ingroup member. Matrix type B has participants choose an allocation that gives either a larger relative amount to their ingroup member or an allocation that gives both the largest amount to the ingroup member and the largest total amount. What is similar for both matrices is that allocation option 7 represents an equal distribution and all lower numbered allocations indicate ingroup favoritism.

9 Rather than using multiple alternative matrices, which are more common in recent MGP research, Tajfel matrices were used as this study does not focus on the different types of motivation associated with the multiple alternative matrices.


As a more explicit measure of group identification two questions were added to sessions 1 and 3: (a) "How much does being a member of your group indicate something about who you are?" and (b) "How much do you identify with your team?" (from Gaertner & Insko, 2000). Participants responded on a 7-point Likert scale ranging from "none at all" to "a great deal". These two items were added only at the end of sessions 1 and 3, so that data was available for all participants, and not in session 2, in order not to have participants focus too explicitly on how they felt about their groups.

3.7 Data Analysis

3.7.1 Dependent Variable Weight

Of main interest in this experiment is how participants change their initial estimate after receiving feedback. The dependent variable of our main analysis is therefore called Weight and is computed as follows:

Weight = (Estimate 2 − Estimate 1) / (True answer − Estimate 1)

Here Estimate 1 refers to the estimate before feedback, Estimate 2 is the estimate after feedback and True answer refers to the actual answer on the estimation trial, which is also the feedback. Weight has a value of 0 when participants disregard the feedback and do not revise their answer. It has a value of 1 when participants ignore their initial estimate and revise their answer to be exactly what was given as feedback. A value of 0.5 represents a trial in which a subject submitted the average of their initial estimate and the feedback as a second estimate. Hence Weight can be interpreted as the weight subjects assign to the feedback, relative to the weight they assign to their own initial estimate, in order to arrive at a final estimate. Another interpretation is that Weight (*100%) represents the actualized change in estimate as a percentage of the total possible change due to the feedback.

Table 1: Bonus allocation matrices

Matrix Type A
Allocation option:       1   2   3   4   5   6   7   8   9  10  11  12  13
Bonus ingroup member:   19  18  17  16  15  14  13  12  11  10   9   8   7
Bonus outgroup member:   1   3   5   7   9  11  13  15  17  19  21  23  25

Matrix Type B
Allocation option:       1   2   3   4   5   6   7   8   9  10  11  12  13
Bonus ingroup member:    7   8   9  10  11  12  13  14  15  16  17  18  19
Bonus outgroup member:   1   3   5   7   9  11  13  15  17  19  21  23  25

Note: For each matrix subjects had to choose one allocation option at the end of each session.
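As a concrete sketch, Weight can be computed per trial as follows (the function name is illustrative, not taken from the thesis's analysis code):

```python
def weight(estimate1: float, estimate2: float, true_answer: float) -> float:
    """Weight = (Estimate 2 - Estimate 1) / (True answer - Estimate 1).

    Undefined when the initial estimate already equals the feedback
    (the true answer), since no revision toward the feedback is possible.
    """
    if true_answer == estimate1:
        raise ValueError("Weight is undefined when Estimate 1 equals the feedback")
    return (estimate2 - estimate1) / (true_answer - estimate1)

weight(100, 100, 150)  # 0.0 -> feedback disregarded
weight(100, 150, 150)  # 1.0 -> feedback copied outright
weight(100, 125, 150)  # 0.5 -> average of initial estimate and feedback
```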

3.7.2 Observations of Interest

By setting up the dependent variable in this manner it is possible to check whether subjects actually weighted the two candidate answers, their initial estimate and the feedback, in order to arrive at a second estimate. Only when Weight is between 0 and 1 can the behavior be interpreted as a weighting of the two candidate answers, because only then could the final estimate be a combination of the two anchors.

The goal is to investigate the processes of normal, real-life decision-making: decisions in which agents use all the information at hand. This holds especially for important decisions, where one would not disregard a piece of information completely. Hence, in realistic decision-making, every signal that is considered informative is reasonably assigned a nonzero processing weight.

In an observation with Weight = 0 there is no evidence of relative weighting of the candidate answers or anchors. The subject clearly did not use the feedback and simply repeated his or her initial estimate [10]. Such observations are expected, as people are known to be stubborn. When Weight equals 1 we similarly cannot interpret the observation as the initial estimate and the feedback each being assigned some processing weight: it could be that a subject forgot their initial estimate, or was so uncertain about it that he or she simply copied the feedback. Observations where Weight is smaller than 0 or larger than 1 are clearly no rational combination of the two candidate answers.

[10] The only exception would be a situation in which the initial estimate is exactly equal to the feedback. However, in this case it would still be impossible to check the relative weight assigned to the initial estimate and the feedback.


To investigate whether there is a difference in how in- and outgroup-derived anchors are weighted, it is necessary to have observations for which we can reasonably conclude that the feedback was assigned some processing weight. This is only the case for observations where Weight is between 0 and 1. For this reason these will be deemed valid observations and will be the focus of the analysis.
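The validity rule above amounts to a simple filter on the strict interior of the unit interval. A minimal sketch (the data values are made up for illustration):

```python
def is_valid(weight: float) -> bool:
    """A trial counts as valid only if the final estimate is a strict
    interior combination of the initial estimate and the feedback."""
    return 0.0 < weight < 1.0

# Illustrative Weight scores from six trials; exactly repeating the
# initial estimate (0), copying the feedback (1), overshooting (1.4)
# and moving away from the feedback (-0.2) are all excluded.
weights = [0.0, 0.3, 1.0, 1.4, -0.2, 0.75]
valid = [w for w in weights if is_valid(w)]
print(valid)  # [0.3, 0.75]
```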

If the aforementioned causes of invalid responses are correct and indeed have nothing to do with whether the feedback came from an in- or outgroup, we should not see a difference in the frequency of invalid responses between ingroup and outgroup trials.

3.7.3 Modeling

Due to the structure of the data, a panel approach is used for the main analysis [11]. As is standard in repeated measures designs, standard errors are clustered by subject. Maximum likelihood is used to estimate the coefficients.

Three predictors are included. First is the treatment dummy ingroup, as we predict an ingroup bias in Weight. Second is the interaction between ingroup and CRT score (ingroup*CRT), representing the hypothesis that the ingroup bias increases with the score on the CRT. The last predictor is the interaction between ingroup and time (ingroup*Session). The variable Session takes the values 1, 2 and 3; we hypothesize the ingroup bias to persist over time and thus be similar in each session. Sessions are used as the time variable instead of individual trials because this allows unobserved trial effects to be controlled for.

Individual effects will be included in the model, as the design allows them to be incorporated without losing the treatment effect. This way effectively all between-subject variation in Weight is removed.
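The effect of including individual (subject) effects can be illustrated with the within transformation: demeaning Weight per subject leaves every subject with mean zero, so all between-subject variation is removed. A toy sketch with made-up data:

```python
from collections import defaultdict

def within_transform(observations):
    """Subtract each subject's mean Weight from that subject's
    observations, removing all between-subject variation in Weight."""
    by_subject = defaultdict(list)
    for subject, weight in observations:
        by_subject[subject].append(weight)
    means = {s: sum(ws) / len(ws) for s, ws in by_subject.items()}
    return [(s, w - means[s]) for s, w in observations]

# Two subjects whose average Weights differ (a between-subject difference)
data = [("s1", 0.2), ("s1", 0.4), ("s2", 0.6), ("s2", 0.8)]
demeaned = within_transform(data)
print(demeaned)  # each subject's demeaned scores now sum to zero
```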

Another important aspect of the data is that the observations will likely be correlated per trial. This is the case because each trial was the same for all participants except for the treatment, while the trials differed considerably from each other: they varied substantially in visual properties and in the order of magnitude of the estimation (see subsection Estimation Tasks). Hence trial effects [12] are added to remove all between-trial variation that is unrelated to the group treatment effect.

Lastly, subjects' confidence in their initial estimate will be included as a covariate, as much of the variance in Weight will probably be due to how confident subjects are in their initial estimate. When participants have little confidence in their initial estimate, they will most likely revise their answer to a larger extent than when they are very confident in their answer.

In order to check whether unobserved heterogeneity of these three control variables (Trial, Subject and Confidence) is related to the predictor variables, a Hausman test will be used. If it is not, the control variables will be added as random effects in the model; if there is correlated unobserved heterogeneity, they will be added as fixed effects.

[11] In fact, with our data it is impossible to perform repeated measures ANOVA or t-tests while controlling for exogenous effects, as our independent variable, treatment, varies both within and between subjects.
[12] Traditionally, in this type of model such an effect would be called a time effect. However, as time is not actually related to the between-trial effects, a different name was chosen here.
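For a single coefficient, the Hausman test described above reduces to a scalar comparison of the fixed-effects and random-effects estimates. The sketch below uses made-up numbers purely to show the mechanics of the test:

```python
def hausman_scalar(b_fe, var_fe, b_re, var_re):
    """Scalar Hausman statistic: H = (b_FE - b_RE)^2 / (Var_FE - Var_RE).
    Under the null of no correlated unobserved heterogeneity, H is
    approximately chi-squared with 1 degree of freedom, so H > 3.84
    suggests rejecting random effects in favour of fixed effects at 5%."""
    return (b_fe - b_re) ** 2 / (var_fe - var_re)

# Made-up estimates: the FE and RE coefficients are close, so H is small
H = hausman_scalar(b_fe=0.12, var_fe=0.004, b_re=0.10, var_re=0.002)
print(H)  # 0.2 -> well below 3.84, random effects are not rejected
```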


4. Hypotheses

Hypothesis 1

β_ingroup > 0

There will be an ingroup bias in the use of feedback on the estimation task. I hypothesize that subjects will change their answer more in the direction of the feedback when it comes from ingroup members than when it comes from outgroup members. Participants will thus assign more weight to anchors derived from ingroup members than to anchors derived from outgroup members; I will refer to this bias as the Intergroup Anchoring Bias. Ultimately this bias results in higher Weight scores for ingroup trials than for outgroup trials. This hypothesis is based on three effects posited in recent literature:

1) Prior ingroup bias: Social identification creates overconfidence in the ability of ingroup members (Cacault & Grieder, 2016). Consequently the averaged estimates from ingroup members are judged as more accurate which makes participants assign them more weight in order to give a more accurate estimate themselves.

2) Dynamic ingroup bias: Due to biased updating of beliefs about the ability of ingroup members people become more confident in the abilities of ingroup members (Cacault & Grieder, 2016). Biased updating refers to the overweighting of positive signals and underweighting of negative signals about the ability of one’s ingroup. From the subject’s perspective a positive signal would be if the feedback is similar to his or her own initial estimate, a negative signal would be if the feedback was completely different. This asymmetric updating, in turn, makes participants judge ingroup estimates as more accurate over time.

3) BRUR effect: People are more likely to behave as if a partner is uniformly randomizing when this partner is from an outgroup (Le Coq et al., 2015). Any form of randomization would result in less accurate estimates, so participants would trust the averaged estimates from their ingroup members more, as they expect outgroup members to randomize to a larger extent.
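The asymmetric updating posited in (2) can be made concrete with a toy belief-updating rule in which positive signals (feedback close to one's own estimate) are overweighted relative to negative signals. All numbers and step sizes here are illustrative, not taken from Cacault & Grieder (2016):

```python
def update_confidence(confidence, positive_signal,
                      up_step=0.2, down_step=0.1):
    """Toy asymmetric updating: confidence in the ingroup's ability rises
    more after a positive signal than it falls after a negative one."""
    if positive_signal:
        confidence += up_step * (1.0 - confidence)   # overweight good news
    else:
        confidence -= down_step * confidence         # underweight bad news
    return confidence

# Alternating positive and negative signals: confidence drifts upwards
# even though half of the signals are negative.
c = 0.5
for signal in [True, False, True, False, True, False]:
    c = update_confidence(c, signal)
print(round(c, 3))
```

The drift illustrates why, under such updating, ingroup estimates come to be judged as more accurate over time even when the evidence is balanced.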
