
The Self-Deception of Altruists:

Unconsciously Mitigating the External Cost of Fairness

Theodoros Saroglou (11799994)
MSc Economics
Track: Behavioral Economics and Game Theory, 15 ECT


TABLE OF CONTENTS

Abstract

1. Introduction

2. Literature Review

2.1. Fairness

2.2. Self-deception

2.2.1. Self-deception in the lab

3. Hypotheses

4. Experiments and Results

4.1. Experiment 1

4.1.1. Procedure and Design

4.1.2. Results of Experiment 1

4.1.2.1. Between-Treatments Analysis

4.1.2.2. Within-Treatment Analysis

4.1.2.3. Confidence

4.1.3. Discussion of Results, Experiment 1

4.1.3.1. Hypothesis 1

4.1.3.2. Hypothesis 2

4.1.3.3. Hypothesis 3


4.1.3.4. Hypothesis 4

4.1.3.5. Conclusion

4.2. Experiment 2

4.2.1. Procedure and Design

4.2.2. Results of Experiment 2

4.2.2.1. Between-Treatments Analysis

4.2.2.2. Within-Treatment Analysis

4.2.2.3. Confidence

4.2.3. Discussion of Results, Experiment 2

4.2.3.1. Hypothesis 1

4.2.3.2. Hypothesis 2

4.2.3.3. Hypothesis 3

4.2.3.4. Hypothesis 4

5. General Discussion and Conclusion

5.1. Summary of Results

5.2. Theoretical Implications

5.3. Limitations

5.4. Future Research

5.5. Conclusion

References

Appendix


Abstract

Using two experiments, this paper attempts to shed light on the relationship between fairness and self-deception. Similar to Mazar and Hawkins (2015), the experiments consisted of a series of 20 stimulus identification trials followed by a confidence report. The stimuli were associated with equally profitable payoff matrices. The main difference between the matrices was that in one of them subjects had to give up some money in order to increase the payoff of a charity and implement an equal split of the funds. Based on self-concept theory and self-signaling models, it is hypothesized that self-deception is positively correlated with fairness: fairer individuals are expected to make more mistakes, identifying the favorable scenario more often. Confidence was also expected to be higher among self-deceivers. The results of the experiments were not statistically significant in favor of the hypotheses, but the main expected trends were observed. Fairer individuals made more biased mistakes, suggesting that they might have been self-deceiving in order to retain their self-concept. At statistically insignificant levels, confidence was relatively higher among self-deceivers in the first experiment and lower among self-deceivers in the second.


1. Introduction

One hot summer day, in our living room, my mother was boasting to one of her friends about her charitable giving. She told her friend that she always gives money to homeless people. I, a 10-year-old kid at the time, popped up and said: “No, she doesn’t!”. My mother was furious. I was sent to my room, but I did not know why, since I believed I was telling the truth. The next day, my mother and I were running errands in the city. While we were walking around, I made sure to record each homeless person I saw. At the end of our trip, it was time for me to strike back. I told her that we had passed five homeless people and that she did not give a cent to any of them, proving the point I had made the previous day. To my surprise, my mother did not believe that we had passed any homeless people. After all, if she had seen a homeless person, she would have happily given some of her money. She looked so convinced of her lie that I did not dare confront her about it. Looking back at that story, I know that we were both right. My mother was self-deceiving. She wanted to be a good person and believed she was one, but she did not want to give all of her money to homeless people. As a result, she avoided noticing homeless people altogether. The question that I am trying to answer with this paper is closely related to this story and many stories like it: do fair people self-deceive at higher rates in order to retain their self-concept of fairness and compensate for the external costs of their preferences?

If we see the above story as an isolated event, it appears to have few economic consequences. Nevertheless, self-deception is a widespread behavior with large-scale implications. We know that people use motivated reasoning to justify their desired conclusions (Kunda, 1990), and willful ignorance is regularly employed by individuals who want to avoid prosocial behavior (Grossman and van der Weele, 2017). When ignorance is willful, it is easier to attach accountability to it; in cases of self-deception, however, things get more complicated. Self-deception is not detected by the people who employ it, and it is therefore hard to hold those individuals directly accountable for their actions. In fact, it has been argued that self-deception should be considered a regular mistake and that assigning moral responsibility to self-deceiving individuals is inconsequential (Levy, 2004).

Whether willful or subconscious, biased processing of information as a means of avoiding prosocial behavior is damaging to society. For instance, Stoll-Kleemann et al. (2001) found that personal concerns, such as comfort, prevent people from honestly assessing the dangers of climate change. Norgaard (2006) discovered that people were in denial of climate change facts due to their unwillingness to process information that could negatively affect their perceptions of their own morality. Biased information processing can affect other aspects of prosocial behavior, such as charitable giving (Niehaus, 2014) and preferences for redistribution (Cruces et al., 2013). Moreover, the level of biased self-perception in a society is a better predictor of economic inequality than individualism (Loughnan et al., 2011). The above findings point to the existence and the cost of biased information processing. Most of these behaviors are identified at the conscious level; information denial, however, can be a deeply rooted human behavior that determines our decisions at a subconscious level.

The inability to place responsibility on self-deception suggests that its elimination can only be achieved through external factors (Mazar and Ariely, 2006). Indeed, some researchers have tried to limit self-deception in the lab by adding physical and psychological barriers or by reducing ambiguity (Mazar and Hawkins, 2015; Pittarello et al., 2015). A large part of self-deception research has focused on cheating; little research, however, has examined the relationship between self-deception and fairness. In order to find ways to eliminate self-deception when it comes to fairness, we need to figure out who the self-deceivers are, namely whether fairer individuals self-deceive more than selfish ones. Models such as self-concept theory (Mazar et al., 2008) and self-signaling (Ainslie, 1986; Bodner and Prelec, 2003; Mijovic-Prelec and Prelec, 2010) predict that a person who believes she is fair will also tend to self-deceive more in order to avoid making an internally costly unfair decision. People who do not want to find out that they are not as fair as they believe will not notice the opportunity to be fair as often.


To my knowledge, this is the first piece of research that experimentally tests the relationship between fairness and self-deception. A great deal of theoretical and empirical evidence indicates that a fair individual would self-deceive more in order to avoid internalizing the harm she causes to the other person. If self-deception plays an important role in fairness outcomes, policy has to focus on eradicating this type of conflicting situation and aim at limiting the incentives that lead to self-deception (Mazar and Ariely, 2006). In this paper I have conducted two experiments in order to find out whether self-deception increases with the fairness attitudes of an individual. Both experiments were conducted online and were adapted from the existing literature on fairness and self-deception (Dana et al., 2007; Mazar and Hawkins, 2015). In these experiments, stimuli were associated with two possible outcome scenarios, each allowing the participant to make a choice between two distributions. One scenario required giving up some money in order to be fair to a charity. The other scenario did not require a conflicting decision. The stimuli were presented to participants for varying durations and at varying levels of ambiguity. The goal of the experiments is to find out whether favorable-scenario identification mistakes are higher among participants who choose the fair outcome when they believe they are in the unfavorable scenario. The main difference between the two experiments is the type of stimulus used and the fact that the second experiment does not include any variation in stimulus duration. The first experiment’s stimulus exhibited a natural perceptual bias and was therefore replaced by a simpler stimulus in the second experiment.

Mistakes did not differ significantly between subjects in the control and dictator treatments of either experiment. Participants in the two experiments who made more altruist choices also made more biased mistakes, namely mistakes that can serve both their self-concept and their monetary reward. Nevertheless, the results were, for the most part, not statistically significant. Reaction time did not correlate with biased mistakes in either experiment; it did, however, exhibit the expected trend in the second experiment. Finally, confidence levels appear to have increased with the incentive to self-deceive in the first experiment, while they decreased in the second. These results leave a few questions unanswered, and methodological improvements could be achieved in the future. The remainder of the paper proceeds as follows. The methodological procedure of the first experiment is described, then its results are reported and briefly discussed. Subsequently, the same process is followed for the second experiment. Finally, a general discussion summarizes the findings, connects the results to previous theories and identifies possible limitations.

2. Literature Review

2.1. Fairness

For a long time, economists depicted humans as entirely rational and selfish agents. We referred to this type of agent as the “homo economicus”. Of course, anyone who has spent even the smallest amount of time on this planet knows that the homo economicus either never existed or, if it did, has long since gone extinct. Thankfully, economics has evolved and it now paints a different picture of humanity. Thanks to the development of behavioral economics, the economic man is no longer depicted as a profit-seeking machine. Selfishness is not an immediate assumption, and rationality refers to the utilization of current information in order to satisfy one’s preferences; preferences that do not always have to be egotistical. Fairness experiments played a crucial part in shifting the economic view of humanity. For decades now, subjects in those experiments have exhibited fairness towards other subjects, even when there is no monetary incentive to do so. The most commonly used game in that literature - and the game that this paper is going to utilize - is the dictator game. In the dictator game, a subject is asked to distribute a fixed amount of funds between herself and another individual. The receiver has no power to block an unfavorable deal; a homo economicus therefore has no reason to share any of the available funds. In practice, a number of individuals do choose to share some of the funds. Engel (2011) analyzed 616 treatments, conducted over the past 25 years, and found that, on average, dictators share 28.35% of the total endowment (Figure A). As expected, not all of the 616 experiments followed the exact same procedure. Engel points out that there are many factors which can help explain these results, such as incentive structure, social control, framing effects, demographics and context. Other authors have attributed dictator giving solely to demand characteristics (Bardsley, 2008), while others have produced evidence against that claim (Bolton et al., 1998). Nevertheless, dictator games have taught us that people act in a relatively fair manner under controlled experimental circumstances.

Figure A​ - Average transfers by dictators (Engel, 2011)

During the 1990s and 2000s, an effort was made to deconstruct, and ultimately explain, the fair behavior of dictators. For instance, Fehr and Schmidt (1999) created a theory of fairness and attributed the positive transfers from dictators to receivers to a form of advantageous inequity aversion. Although some authors expanded on the theory by Fehr and Schmidt (Frohlich et al., 2004), others opposed it. Engelmann and Strobel (2004) argued that efficiency interests drive fair behavior. Their research also shows that concerns about the relative maximum and minimum allocation play a vital part in altruistic redistribution. Engelmann and Strobel’s paper provides a different view of the economic man, one that resembles a typical economist. Andreoni (1995) attributes the positive transfers from dictators to a form of “warm glow”, namely the satisfaction derived from performing a generous act. A few years later, Andreoni and Miller (2002) used the Generalized Axiom of Revealed Preference and posited that a utility function increasing in other people’s payoffs renders the fair behavior exhibited in dictator games rational. These papers are part of a literature that tried to explain altruistic behavior based on existing experimental evidence. The next part involves papers that attempted to understand fair behavior by eliminating or reducing it.

Several of the economic and game-theoretic assumptions regarding human behavior are often violated in dictator game experiments. For instance, when subjects are given the opportunity to anonymously exit a dictator game with $9 instead of the maximum of $10, one-third of them choose to do so (Dana et al., 2006). These subjects could simply stay in the game and transfer the remaining $1; it appears, however, that social expectation poses too heavy a burden, and people often choose to give nothing anonymously rather than give too little identifiably. When subjects are presented with a design that allows them to never find out whether they implemented a fair outcome while at the same time allocating the maximum payoff to themselves, the proportion of altruist choices is almost halved (Dana et al., 2007). These results came despite the fact that Dana et al. (2007) provided their subjects with the opportunity to costlessly access information about the kind of allocation they were implementing. Two other cases of induced fairness reduction were observed in their paper. In the first, two dictators chose an allocation between themselves and a third subject. The default outcome was the most equitable one; an inequitable outcome, which shared most of the funds between the two dictators (6:6:1), could be implemented when both dictators chose it. In the second, subjects were interrupted when they took too long to make a distribution selection, and as a result the inequitable outcome was selected. The experimenters ensured that more than enough time was provided for subjects to make their choice. As a result of these two treatments, fair outcome implementations were reduced by about 50% in each case (the study was successfully replicated by Larson and Capra, 2009). The results by Dana et al. support the idea that allowing a person to retain her self-concept when making an unfair decision can increase selfish behavior, and that individuals could be “performing” fairness, with themselves in the roles of both actor and audience (Mazar et al., 2008; Murnighan et al., 2001).

Most of the papers presented in the previous paragraph are examples of willful ignorance. Willful ignorance protects individuals from having to update their beliefs about their own preferences when they make an unfair choice, allowing them to receive a higher monetary reward without any self-image cost (Grossman and van der Weele, 2017). The behavior in question might appear irrational, but it is part of a mechanism that allows us to get away with actions that do not reflect our self-image. Kunda (1990) explains that motivated reasoning helps us reach desired conclusions while avoiding cognitive dissonance, namely the morally costly situation in which someone acts against her own values. Consequently, willful ignorance can be an important tool. In the field of honesty, Mazar et al. (2008) showed that, when possible, people cheat just enough to earn a higher reward but not enough to harm their self-concept. The subjects in their experiment appeared to be aware of their cheating; their self-concept with regard to honesty, however, remained unaffected. Similar cheating patterns were observed by Fischbacher and Föllmi-Heusi (2013), while Murnighan et al. (2001) suggest that dictator giving is a result of self-impression management, an idea very similar to self-concept theory. This segment of the literature points to a complicated internal process that attempts to balance external and internal rewards.

2.2. Self-deception

The self-concept theory proposed by Mazar et al. (2008) can be formalized in a self-signaling model (Ainslie, 1986) such as the one proposed by Bodner and Prelec (2003) and later reformulated by Mijovic-Prelec and Prelec (2010) in order to describe self-deception. Self-deception refers to a process similar to interpersonal deception; the only difference is that in self-deception the individual is both the deceiver and the deceived. Two beliefs, p and not-p, coexist within an individual, but one of the two beliefs is not held consciously (Gur and Sackeim, 1979). The key to this conceptualization of self-deception is that the individual is motivated not to become aware of the non-conscious belief. According to Mijovic-Prelec and Prelec (2010), a self-signaling model can explain self-deception as well as it can explain interpersonal deception. In the self-signaling model, total utility is not only determined by the outcome itself, but also by the diagnostic utility that this outcome provides in terms of a held belief. Thus, in the case of Mazar et al. (2008), individuals earn extra money by cheating, but cheating also affects the diagnostic utility of their action, namely their belief that they are honest. As a result, what they gain in monetary outcome by cheating, they lose in diagnostic utility. If the weight that an individual puts on diagnostic utility is low enough, small amounts of cheating can be carried out without a significant change in held belief, culminating in the behavior observed in Mazar et al. (2008).
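In a stylized form, and not the exact specification used by Bodner and Prelec, the trade-off described above can be written as follows (the notation is mine):

```latex
U(a) \;=\; u\big(x(a)\big) \;+\; \theta \, D\big(\hat{p}(a)\big)
```

Here x(a) is the material payoff of action a, u(·) the outcome utility, \hat{p}(a) the belief about one’s own type (for example honesty or fairness) that is inferred from having chosen a, D(·) the diagnostic utility of holding that belief, and θ the weight placed on diagnostic utility. With a small θ, a modest amount of cheating raises u while barely lowering D, which is consistent with the behavior observed by Mazar et al. (2008).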

Self-deception might have served an important evolutionary purpose. The evolutionary approach states that self-deception is a tool that evolved to facilitate interpersonal deception (Trivers, 1985; von Hippel and Trivers, 2011). According to this view, deceiving one’s own self allows the individual to avoid revealing her true motives, facilitating more successful deception. The cognitive load of interpersonal deception is reduced by keeping the information constant for deceiver and deceived, eliminating the need to present a different narrative from the one held internally. Believing one’s own lie can also be useful if that lie is detected, since it is hard to punish someone who does not appear to be aware of the crime she has committed. The theory by von Hippel and Trivers identifies a variety of ways to achieve self-deception, including biased information search (in both the amount and the kind of information sought), selective attention, biased interpretation, misremembering and rationalization. It is evident that self-deception is a deeply rooted human behavior.

2.2.1. Self-deception in the lab

A significant section of the literature has examined self-deception and its different manifestations. Schwardmann and van der Weele (2016) provided evidence that overconfidence about one’s own score in a test facilitates convincing others of a superior score. Lynch and Trivers (2012) showed that self-deceivers laugh less at comedic material, which hints at a mask of preferences similar to the mask of honesty that self-deception lends to successful deceivers. Fernbach et al. (2013) elicited self-deception in female subjects who were led to believe that higher pain tolerance signals better skin quality; their desire to enjoy superior skin quality increased their pain tolerance while their reports of effort decreased. Balcetis and Dunning (2010) produced evidence that desired items are perceived as being closer in proximity.


A few years earlier, Balcetis and Dunning (2006) showed that people see the desired interpretation of a stimulus even if that interpretation changes after the stimulus is presented to them. In their experiment, subjects were led to believe that one stimulus was associated with a desired outcome. After the stimulus was presented to them, and before they had the chance to indicate which type of stimulus they saw, they were informed that the stimulus which previously represented the desirable outcome now represented the undesirable outcome. Despite this reversal, subjects still indicated seeing that stimulus more often than the other one. Their results are evidence that information can be stored in a biased manner, without the need to be advantageously misinterpreted afterwards.

Research has also shown that the more justified a favorable mistake is, the higher the probability that an individual will employ self-deception in order to increase her reward (Pittarello et al., 2015). On a similar note, Mazar and Hawkins (2015) decreased self-deception by designing an experiment that added psychological and physical barriers. Subjects had to identify whether there were more dots on the left or the right side of a diagonal line. The best scenario for the subjects was to see more dots on the right, since that was the most profitable outcome. In the first treatment, the default option was also the favorable one. In the second treatment, there was no default option. In the third treatment, the default option was the unfavorable outcome. The first treatment had the highest rate of cheating, followed by the second treatment. In the third treatment, where the unfavorable outcome was the default, cheating was eliminated. Another interesting part of Mazar and Hawkins’ (2015) research is the analysis of confidence. After participants completed all trials, they were asked to report how many trials they believed they had answered correctly. Requesting confidence reports allowed the researchers to find out whether participants were aware of their own self-deception. Additionally, they could observe the effect that self-deception has on confidence overall. Their results revealed increased confidence among self-deceiving subjects; the authors, however, did not incentivize an honest report, which might have distorted the measure (Gächter and Renner, 2010). The experimental designs of the two experiments in this paper closely resemble the experimental design of Mazar and Hawkins (2015). Nevertheless, there are a few differences between our designs. For instance, the stimuli that I used are slightly different, there are no defaults and the outcomes are associated with matrix scenarios. Finally, an accurate confidence report is monetarily incentivized. More information about the designs of the experiments can be found in Section 4.

3. Hypotheses

When individuals reduce their fair decisions through self-deception, the effect on the system as a whole can be detrimental, especially when the fair option is also the most efficient one. The experiments presented in this paper are a combination of previous dictator games and self-deception tasks. The goal of the experiments is to reveal who the self-deceivers are, namely whether fairer individuals are also greater self-deceivers. The designs of the experiments do not allow for monetary gain as a result of self-deception; the sole benefit that an individual can reap from self-deceiving is self-concept maintenance. Moreover, all decisions made by participants are incentivized in order to ensure honesty. The paper also aims at finding out whether self-deceiving individuals can spot their own deception, by measuring their confidence levels. Self-signaling (Mijovic-Prelec and Prelec, 2010) is the main basis for the hypotheses of this paper. The mechanism assumed resembles the mechanism that Grossman and van der Weele (2017) describe in regard to self-signaling and fairness. Grossman and van der Weele utilize a variety of experiments in order to show that willful ignorance can be a valuable tool for people who want to prevent a negative signal about their preferences from materializing. As a result, people choose to ignore some information in order to be able to make a choice which will only affect the external-reward part of their self-signaling equation. The same mechanism should be involved at the subconscious level, where self-deception takes the place of willful ignorance. Similarly to honesty beliefs (Mazar et al., 2008), individuals tend to internally retain a specific level of perceived personal fairness (Farwell and Weiner, 1996). In accordance with self-signaling, when individuals make a fair choice they earn less monetarily, but they positively update their internal fairness perception. When the true fairness level of an individual is high enough, this monetary loss is not as hurtful; the less truly fair a person is, however, the more it hurts to give money up for the sake of others. On the other hand, making the unfair choice causes the individual to update her beliefs about her true fairness downwards, resulting in a utility decrease. The desire to feel fair and the desire to earn more money could potentially be reconciled through the employment of self-deception.

Before analyzing the hypotheses of this research, it is useful to define some terms that will be used consistently throughout the paper. A biased mistake describes the situation in which a subject identifies the favorable scenario as correct when, in fact, it is not. A fair decision refers to a decision that a subject makes in order to provide a more equal distribution of funds between herself and the charity, despite the fact that she has to give up some funds in order to do so. Finally, in this experiment, confidence is defined as the difference between reported performance and actual performance. A positive confidence measure indicates overconfidence and a negative confidence measure indicates underconfidence. For example, if a subject made 15 correct scenario identifications but reported that she had made 18, she was overconfident by 3 points.
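Written out explicitly, with notation that is mine rather than the thesis’s, the confidence measure for subject i is:

```latex
\mathrm{Confidence}_i \;=\; \mathrm{Reported\ correct}_i \;-\; \mathrm{Actual\ correct}_i ,
\qquad
\mathrm{Confidence}_i > 0 \Rightarrow \text{overconfident}, \quad
\mathrm{Confidence}_i < 0 \Rightarrow \text{underconfident}.
```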

Based on the previous literature on self-deception (Balcetis and Dunning, 2006, 2010; Mazar and Hawkins, 2015; Pittarello et al., 2015; Mijovic-Prelec and Prelec, 2010) and the evidence from research that has incorporated ambiguity into dictator games (Dana et al., 2006, 2007; Larson and Capra, 2009), I predict that stimulus identification mistakes will be higher in the dictator treatment. This difference in identification mistakes will be driven by the increased biased mistakes of dictators. Mazar and Hawkins’ (2015) results lead to the expectation that an increased level of confidence will be observed among the self-deceiving subjects of the dictator treatment. Additionally, reaction time is hypothesized to be positively correlated with biased mistakes, in line with Mazar and Hawkins’ (2015) results and due to the increased cognitive load that surrounds self-deception. Finally, altruists and non-altruists are anticipated to exhibit distinct levels of self-deception, with increased levels of self-deception among altruists. The last hypothesis is derived from the self-concept and self-signaling models. An indicator of how fair someone considers herself is the share of fair choices she makes in order to maintain her self-concept, measured in the easier parts of the experiments, which are excluded from the rest of the analysis in order to avoid endogeneity. Of course, the share of fair choices is not a direct measure of a fairness self-concept; it is, however, the closest proxy we have for spotting fairness attitudes. People with a lower self-concept of fairness do not need to self-deceive in order to achieve the higher outcome, since they can already do that costlessly just by choosing the selfish option. On the other hand, the cost of being selfish is higher for an individual with a higher self-concept of fairness, driving her to self-deception in order to achieve a higher external reward while maintaining a positive self-concept.

The above reasoning leads to the following four hypotheses:

1. Subjects in the dictator treatment will make more biased mistakes than subjects in the control.
2. Reaction time will be positively correlated with biased mistakes in the dictator treatment.
3. Individuals who make more fair decisions will also make more biased mistakes.
4. Subjects in the dictator treatment will exhibit higher confidence than subjects in the control, and overconfidence will be positively correlated with biased mistakes.


4. Experiments and Results

4.1. Experiment 1

4.1.1. Procedure and Design

Image A - Example of the experiment’s stimulus (majority red)

In order to determine a sufficient sample size, I used the statistical software G*Power 3.0.1. For a .05 criterion of statistical significance, a total sample of 119 participants was suggested. Ultimately, 191 participants (77 females) were recruited in order to ensure robustness. The experiment was conducted through Amazon Mechanical Turk, an online microtask marketplace. One hundred subjects were assigned to the control treatment and ninety-one to the dictator treatment. The experiment lasted between four and fifteen minutes, depending on the assigned treatment. Participants earned between 15₵ (participation fee) and $2.15 (participation fee plus maximum bonus). All participants were informed that only 1 of the 21 rounds would be paid (20 trials and 1 additional confidence question). Subjects in both treatments had to identify whether a screen contained a majority of blue or red dots, which were evenly distributed on the screen (Image A).
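As an illustration of this kind of a priori power calculation, the sketch below uses statsmodels in place of G*Power; the thesis does not report the effect size or target power it entered, so those inputs are assumptions and the result does not reproduce the figure of 119 exactly.

```python
# Hypothetical re-creation of an a priori power analysis for a two-sample comparison.
# Effect size and power are assumed values, not the thesis's actual G*Power inputs.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,        # assumed medium effect (Cohen's d)
    alpha=0.05,             # significance criterion used in the thesis
    power=0.80,             # assumed target power
    ratio=1.0,              # equal group sizes
    alternative="two-sided",
)
print(f"Required sample per group: {n_per_group:.0f}")  # roughly 64 per group with these inputs
```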

(18)

The ambiguity of the stimuli varied in two ways. First, the number of red and blue dots changed: half of the stimuli included 1, 3, 5, 7 or 9 more red dots than blue dots, and the other half the reverse. Second, the stimuli were presented to the participants for a duration of either 500ms or 250ms. In total, each participant completed 20 trials of this task, 10 for each time interval and 2 for each variation in the number of blue and red dots.
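A minimal sketch of that trial structure is shown below; variable names are mine, and the actual experiment was run in an online survey environment rather than Python.

```python
# Illustrative enumeration of the 20 trial conditions in Experiment 1:
# each signed dot difference appears once at each display duration.
from itertools import product
import random

dot_differences = [-9, -7, -5, -3, -1, 1, 3, 5, 7, 9]  # negative = more blue, positive = more red
durations_ms = [250, 500]

trials = [{"dot_difference": d, "duration_ms": t}
          for d, t in product(dot_differences, durations_ms)]
random.shuffle(trials)  # per-participant randomization of order is an assumption

assert len(trials) == 20
```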

Control Treatment

In the control treatment, subjects were paid 15₵ for participating in the experiment. Each participant completed 20 trials of the red and blue dot task. For each trial the subject could earn a bonus of 20₵ if she was correct and the trial was randomly selected for payout. Following the completion of the trials, each participant was asked to report how many correct responses she had made in the preceding 20 trials. If the report was accurate, she could earn a bonus of $2.

Dictator Treatment

Image B - The two possible scenarios

In the dictator treatment, both of the possible outcomes of the stimulus were associated with a matrix scenario (Image B). Both scenarios shared a bonus between the subject and the charity Give Directly. Give Directly transfers money to underprivileged individuals in the developing world. The charity was chosen because it has a minimal effect on the attitudes of the participants. First, Give Directly does not identify with a specific cause such as global warming or warfare, which limits the potential bias that a subject could hold due to an affiliation. Second, Give Directly is not as well known as other charities, which helps ensure that most of the participants did not have previous experience with it. Finally, the personal character of Give Directly is close to the personal character that a dictator game has when the receiver is a person. For these reasons, Give Directly was deemed the most appropriate charity to place in the position of receiver.

In the blue scenario, subjects have to choose between an inequitable - although more profitable for themselves - outcome (A: 100₵ for them and 10₵ for the charity) and an equitable outcome (B: 60₵ for them and 60₵ for the charity). In the red scenario, one outcome serves as the best option, since both sides receive the maximum amount of money they can earn (A: 100₵ for the subjects and 60₵ for the charity). Note that in the blue scenario the equitable outcome (choice B) is also the more efficient one. Subjects are incentivized to report which scenario they were in for a bonus of 20₵. Finally, confidence is measured at the end of the experiment. The use of the confidence question was inspired by Mazar and Hawkins (2015), according to whose results people who self-deceive are more confident in their responses. The difference between the design of this experiment and Mazar and Hawkins’ design is that an honest confidence report is incentivized through a monetary payment of $2 when the subject is accurate.
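A compact summary of the two scenarios as described above, written as a sketch; the alternative option in the red scenario is dominated and its exact values are not restated in this section, so it is left unspecified here.

```python
# Payoff scenarios in the dictator treatment, in cents, as (subject, charity) pairs.
scenarios = {
    "blue": {               # conflict scenario: fairness is personally costly
        "A": (100, 10),     # selfish, inequitable option
        "B": (60, 60),      # equitable (and more efficient) option
    },
    "red": {                # no-conflict scenario
        "A": (100, 60),     # both sides receive their maximum payoff
        # the alternative option is dominated; its values are not specified here
    },
}

def is_fair_choice(scenario: str, choice: str) -> bool:
    """A 'fair decision' in the thesis's sense: choosing the equal split
    in the blue scenario despite the personal cost."""
    return scenario == "blue" and choice == "B"
```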


4.1.2. Results of Experiment 1

4.1.2.1. Between-Treatments Analysis

Figure B - Share of mistakes and biased mistakes per treatment

On average, participants in the control made a scenario identification mistake in 22.2% of the trials, while participants in the dictator treatment made a mistake in 21.26% of the trials. Since the data are continuous and non-normally distributed, I used a Wilcoxon rank-sum test in order to find out whether average mistakes were significantly different between the two treatments. First I compared individual average mistakes, then I specifically compared individual average biased mistakes. This test allows me to test the first hypothesis, which stated that biased mistakes would be higher in the dictator treatment. The results revealed that the difference in average mistakes was not statistically significant (p=0.412). Biased mistakes, which comprised 14.55% of total responses in the control and 13.19% in the dictator treatment, were also not statistically different between the two groups (p=0.19). The results of the above tests fail to reject the null hypotheses that there is no difference in mistakes or biased mistakes between the two groups. As a result, the first hypothesis, which stated that biased mistakes would be higher in the dictator treatment, is not confirmed.
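In Python, the same comparison could be run roughly as follows; this is a sketch assuming per-subject mistake shares are stored in two arrays (the placeholder numbers are not the thesis’s data, and the actual analysis was presumably done in a statistical package).

```python
# Two-sided Wilcoxon rank-sum (Mann-Whitney U) test comparing per-subject
# mistake shares between the control and dictator treatments.
import numpy as np
from scipy.stats import mannwhitneyu

control_mistakes = np.array([0.20, 0.25, 0.15, 0.30, 0.10])   # placeholder data
dictator_mistakes = np.array([0.20, 0.15, 0.25, 0.20, 0.25])  # placeholder data

stat, p_value = mannwhitneyu(control_mistakes, dictator_mistakes, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```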


A random-effects probit regression (Appendix A, Table 1) was carried out in order to determine the effects of display time, dot ambiguity, dot color, treatment and reaction time on the probability of a mistake. This type of regression was chosen because the dependent variable is binary and its distribution is non-normal. Since I am analyzing panel data, the random-effects model allows me to account for individual effects that might exist. The goal of the following analysis is to further test the first hypothesis, namely that biased mistakes are higher among dictators. Moreover, the analysis provides insight into the second hypothesis, which stated that reaction time would be correlated with biased mistakes in the dictator treatment. The results indicated that there was not a significant effect on the probability of a mistake from treatment and its interaction with the color of the dots (p=0.379), failing to reject the null hypothesis that the interaction of stimulus color and treatment has no effect on the probability of a mistake. As a result, the first hypothesis, which predicted that biased mistakes would be higher among subjects in the treatment, is again not confirmed.
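The thesis does not say which software was used for this regression. As a rough Python stand-in, the sketch below fits a pooled probit with subject-clustered standard errors, since statsmodels has no built-in random-effects probit; it only approximates the panel structure, and all column names and the data file are illustrative assumptions.

```python
# Pooled probit with subject-clustered standard errors as an approximation of the
# random-effects probit described in the text. Variable names are illustrative.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("experiment1_trials.csv")  # hypothetical trial-level data file

df["treat_x_blue"] = df["dictator_treatment"] * df["answer_is_blue"]
X = sm.add_constant(df[["difficulty", "duration_500ms", "answer_is_blue",
                        "dictator_treatment", "reaction_time", "treat_x_blue"]])
y = df["mistake"]  # 1 if the scenario was identified incorrectly

model = sm.Probit(y, X).fit(cov_type="cluster",
                            cov_kwds={"groups": df["subject_id"]})
print(model.summary())
```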

Difficulty significantly increased the probability of a mistake (p=0.000). Stimulus duration significantly decreased the probability of a mistake (p=0.000). Dot color also had a significant positive effect on the probability of a mistake (p=0.000) and played the most important role among the independent variables, indicating that a perceptual bias might have existed. When the stimulus dots are blue in majority, the marginal probability of a mistake increases by 0.11 compared to when the stimulus dots are red in majority. Reaction time was significantly and positively correlated with incorrect responses (p=0.000). The interaction between reaction time, stimulus color and treatment did not produce a significant effect (p=0.722); a significant effect was found, however, in the interaction between reaction time and treatment (p=0.002). Specifically, reaction time in the dictator treatment was negatively correlated with the probability of a mistake. As a result of the above analysis, we reject the null hypothesis that the interaction between reaction time and treatment has no effect on the probability of a mistake. Due to this finding, we fail to confirm the second hypothesis, which predicted that reaction time would exhibit a positive correlation with mistakes in the dictator treatment.

4.1.2.2. Within-Treatment Analysis

Figure C - Share of Biased Mistakes (bias share) in Relation with the Share of Altruist Choices (giver score)

Figure D - Average Share of Biased Mistakes for Givers (giver score ≥ 0.5) and Non-Givers (giver score < 0.5)

In order to test the third hypothesis, which stated that fairness will be positively correlated with biased mistakes, a within-subjects analysis was conducted. The following analysis also further tests the second hypothesis, which stated that reaction time would be positively correlated with biased mistakes. A new variable, called “giver score”, was generated. The variable indicates the share of altruist choices a subject made when she reported being in the unfavorable scenario. Each subject was given a score between 0 and 1. Only the first two difficulty levels were used to create this variable. These two levels were easy enough that most people made very few mistakes, so subjects were usually correct about which scenario they were in. Since the first two difficulty levels were used to create the new variable, they were excluded from the rest of the analysis in order to avoid endogeneity and reverse causality. As can be seen in Figure C, no apparent pattern was detected in the data. Moreover, a Spearman’s rank correlation did not reveal a statistically significant relationship between giver score and bias share (p=0.404). The results of the test fail to reject the null hypothesis that there is no relationship between biased mistakes and fairness. As a result, we fail to confirm the third hypothesis that biased mistakes would be higher among fairer individuals.
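A sketch of how the giver score and its correlation with the bias share could be computed is given below; the column names and data file are illustrative assumptions, not the thesis’s actual variables.

```python
# Construct the per-subject "giver score" from the two easiest difficulty levels
# and correlate it with the share of biased mistakes from the remaining levels,
# using Spearman's rank correlation.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("experiment1_dictator_trials.csv")  # hypothetical data file

easy = df[df["difficulty_level"] <= 2]
hard = df[df["difficulty_level"] > 2]

# Share of fair (altruist) choices when the subject reported the unfavorable scenario.
giver_score = (easy[easy["reported_scenario"] == "blue"]
               .groupby("subject_id")["chose_fair_option"].mean())

# Share of trials in which the favorable scenario was wrongly identified.
bias_share = hard.groupby("subject_id")["biased_mistake"].mean()

merged = pd.concat([giver_score.rename("giver_score"),
                    bias_share.rename("bias_share")], axis=1).dropna()
rho, p_value = spearmanr(merged["giver_score"], merged["bias_share"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```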

Just as in the between-treatments analysis, a random-effects probit regression was performed using the rest of the difficulty levels (Appendix A, Table 3). The results of the regression showed that difficulty increased the probability of a mistake (p=0.000). Stimulus duration decreased the probability of a mistake when moving from 250ms to 500ms (p=0.002). The answer being blue exhibited the largest, and positive, effect on the probability of a mistake (p=0.000). The effect sizes of these three variables were similar to those in the between-treatments analysis. The rest of the variables and their interactions did not show a significant effect (p>0.05). Nevertheless, the interaction between the answer being blue and the giver score revealed a positive - but statistically insignificant - correlation with the probability of a mistake (p=0.122). The significance of the interaction might have been suppressed by the natural bias that the color red produced; the null hypothesis that the interaction between color and giver score has no effect on the probability of a mistake, however, cannot be rejected. Consequently, we fail to confirm the third hypothesis, which predicted that biased mistakes would be positively correlated with altruist choices.


4.1.2.3. Confidence

Figure E – ​Average Confidence Levels, Control and Dictator Treatments

Subjects in both treatments exhibited underconfidence. On average, subjects in the control reported 1.4 fewer correct answers than they actually made, while dictators underestimated themselves by 1.1 correct answers (Figure E). A two-sample Wilcoxon rank-sum test was employed in order to test whether there was a difference in confidence between the two treatments. The test was selected because the data are continuous and non-normally distributed. The results of the test did not reveal a significant difference in confidence between the control and dictator treatments (p=0.399), failing to reject the null hypothesis that confidence was the same in the two groups. As a result, the fourth hypothesis, that confidence would be higher among subjects in the dictator treatment, is not confirmed.


Figure F - Confidence and Bias Share, Givers and Non-Givers within the dictator treatment. Non-Givers (Giver Score < 0.5) are presented in red.

Figure G - Average Confidence Levels for Non-Givers (Giver Score < 0.5) and Givers (Giver Score ≥ 0.5)

Overconfidence did not appear to be correlated with biased mistakes (Figure F). It should be noted that the giver score still refers to the first two difficulty levels, while the bias share refers to the rest of the difficulty levels. The dashed line in Figure F represents the threshold of overconfidence: a report above the line is classified as overconfident, a report below the line as underconfident. There does not appear to be a specific pattern in the data, although givers were on average less confident than non-givers. In order to test for a possible correlation between biased mistakes and overconfidence, I used Spearman’s rank correlation test, which is appropriate because it does not make a normality assumption. The test found no significant effect of biased mistakes on overconfidence (p=0.43); we therefore fail to reject the null hypothesis of no correlation between overconfidence and biased mistakes. As a result, we fail to confirm the second part of the fourth hypothesis, which stated that overconfidence and biased mistakes would be positively correlated.


4.1.3. Discussion of Results, Experiment 1

4.1.3.1. Hypothesis 1

The results of the first experiment did not confirm the first hypothesis, that subjects in the treatment would make more biased mistakes than those in the control. The strongest predictor of a mistake was the color of the stimulus. The effect that the color red had on the probability of a mistake was larger than the effects of the rest of the variables combined. This result leads me to the conclusion that the color red might have generated a perceptual bias, which I had not previously taken into account.

4.1.3.2. Hypothesis 2

The effect of reaction time remained positive for both types of mistakes in the between-treatments analysis; reaction time in the dictator treatment, however, was found to have a negative effect on the probability of a mistake. The effect is almost exactly reversed, although at a lower confidence level (p=0.002 versus p=0.000). The results of the experiment did not confirm the second hypothesis, which predicted that reaction time would positively correlate with mistakes in the dictator treatment. The second part of the hypothesis predicted that reaction time and biased mistakes would be positively correlated as well. In the within-treatment part of the analysis, reaction time did not play a significant role, leading to a failure to confirm the second part of the second hypothesis.

The inconsistent results that the reaction time analysis produced should be carefully examined. It is true that reaction time analysis can be very beneficial, but the interpretation of its results can be problematic (Spiliopoulos and Ortmann, 2017). Spiliopoulos and Ortmann provide some suggestions in order to ensure the validity of reaction time results, a few of which the experimental designs in this paper did not follow. The most important omission is the fact that there was no time pressure to complete the task; as a result, measurements of reaction time might be inconsistent. Moreover, reaction time analysis is more accurate when the data are collected in lab experiments and in a within-subjects design, neither of which was done in this paper. On the plus side, this experiment included two types of difficulty variation, which adds to the validity of the present reaction time analysis. Nevertheless, reaction time remains an endogenous variable and is subject to reverse causality. Thus, a second round of random-effects probit regressions was carried out in order to ensure robustness. Two additional regressions were performed, one for the between-treatments and one for the within-treatment analysis. The results of the second set of regressions were similar to the first set and can be found in Tables 2 and 4 of Appendix A.

4.1.3.3. Hypothesis 3

The third hypothesis predicted that the share of fair choices would be positively correlated with biased mistakes. The hypothesis was not confirmed. Nevertheless, an insignificant positive effect of fairness on biased mistakes was observed. The natural bias that the color red might have produced appears to have played a very strong role, and the effect of altruism has the potential to be statistically significant in future research that utilizes a stimulus which does not cause a natural bias.

4.1.3.4. Hypothesis 4

Overall, subjects in this experiment were underconfident. Dictators were slightly less underconfident than subjects in the control. The fourth hypothesis predicted that confidence would be higher in the dictator treatment. Nevertheless, confidence levels were statistically identical across treatments, an expected outcome given that no significant self-deception was spotted and performance levels were indistinguishable. This result fails to confirm the first part of the hypothesis, that confidence would be higher in the dictator treatment. The second part of the hypothesis, which stated that biased mistakes in the dictator treatment would be positively correlated with overconfidence, was also not confirmed.


4.1.3.5. Conclusion

Figure H ​– Predictive Margins derived from Probit Regression (​Appendix A, Table 4)

The above findings point to a lack of self-deception among the participants of the dictator treatment; it would be an omission, however, not to take into account the clear bias that the color red created in both the control and the treatment. This experiment was designed with the goal of allowing for self-deception in about 5-10 trials out of a total of 20. Due to the perceptual bias that the color red caused, mistakes towards red were already very high in the control, making it hard to identify self-deception in the dictator treatment. Although there is no experimental evidence regarding possible biases caused by the color red when used as a stimulus, there is evidence that the color can improve performance in detail-oriented tasks (Mehta and Zhu, 2009). The stimulus is at the heart of the experimental design, and if there is an inherent bias accompanying it, any conclusions derived from the experiment are inconsequential. The greatest support for the third hypothesis comes from the fact that, despite the natural bias, the insignificant effect of fair decisions on biased mistakes was positive (Figure H).

I also suspect that the screen transitions in the dictator treatment might have been slower than those in the control treatment. Participants in the dictator treatment took on average more than 10 minutes to complete the task, while in the control the completion time was about 5 minutes. In pilot trials, the expected completion time was 5 minutes for the control and 6-7 minutes for the dictator treatment, implying that during the online recruitment the dictator treatment lasted significantly longer than expected. Due to the above concerns, I decided that a second experiment should be conducted. The goal of the supplementary experiment is to mend the possible design errors of the first experiment and to observe how the results change in regard to the hypotheses when the biases are eliminated.

4.2. Experiment 2

4.2.1. Procedure and Design

Image C -​ Example of stimulus (more dots on the right)

Following the steps taken in the first experiment, the statistical software G*Power 3.0.1 was employed in order to determine a sufficient sample size. For a .05 criterion of statistical significance, a total sample of 107 participants was suggested. Nevertheless, only 66 individuals (28 female) were recruited due to a lack of resources. Thirty-five of them were assigned to the control and thirty-one to the dictator treatment. The main goal of this experiment is to eradicate the biases of the first. In order to improve on the first experiment, three major changes were made. First, the stimulus now includes black dots on the left and right of the screen, and participants’ goal is to identify which side has more dots. This stimulus design was adopted from Mazar and Hawkins (2015) and adapted in terms of the number of dots presented and the fact that the dots are separated vertically instead of diagonally (Image C). Second, the stimulus display time variation has been eliminated in order to help accelerate the screen transitions of the online experiment. Third, the array of differences between the number of dots on the left and right is wider and allows for additional mistakes. More specifically, the difference between left and right dots ranges from 1 to 10 in favor of the left side and from 1 to 10 in favor of the right side. All other factors, such as the number of trials, the distribution scenarios and the instructions, were retained.

4.2.2. Results of Experiment 2

4.2.2.1. Between-Treatments Analysis

Figure I ​– Average total and biased mistakes per treatment

In order to facilitate the comparison between the first and second experiments, the same analysis was followed in both. The goal of the following analysis is to test the first hypothesis, which stated that biased mistakes would be higher in the dictator treatment. I first tested for a difference in overall mistakes, then for a difference in biased mistakes. On average, participants in the control made a mistake in 19.29% of the trials, while participants in the dictator treatment made a mistake in 24.52% of the trials (Figure I). A Wilcoxon rank-sum test was used in order to compare average mistakes and the share of biased mistakes as part of total mistakes in the two groups. The test was chosen because the variables in question, average mistakes and the share of biased mistakes, are both continuous and the data are non-normally distributed. The results of the test revealed that the difference in mistakes was not statistically significant (p=0.305). Biased mistakes, which comprised 9.14% of total responses in the control and 12.58% in the dictator treatment, were also not significantly different between the two groups (p=0.79). The results of the tests fail to reject the null hypothesis of no difference between the groups in terms of either overall or biased mistakes. As a result, we fail to confirm the first hypothesis, which predicted that biased mistakes would be higher in the dictator treatment.

The following analysis aims at further testing the first hypothesis, which predicted that subjects in the treatment would make more biased mistakes. It also tests the second hypothesis, that reaction time would be positively correlated with biased mistakes in the dictator treatment. A random-effects probit regression (Appendix A, Table 5) was carried out in order to determine the effects of dot ambiguity, dot side, treatment and reaction time on the probability of a mistake. Similarly to the first experiment’s analysis, the model was chosen because the dependent variable is binary, there are possible subject-specific effects and the data are non-normally distributed. According to the results of the regression, there was not a significant effect of dot side (p=0.728) on the probability of a mistake, implying that this time a significant natural bias did not exist. Treatment did not have a significant effect on the probability of a mistake (p=0.495). As expected, difficulty increased the probability of a mistake significantly (p=0.000). Reaction time increased the probability of a mistake at a statistically significant level (p=0.028). The rest of the variables and their interactions all had statistically insignificant effects on the probability of a mistake (p>0.05). According to the results of the above analysis, we fail to confirm the first hypothesis, which predicted that treatment would be positively correlated with biased mistakes. Finally, we also fail to confirm the second hypothesis, which stated that reaction time would be positively correlated with biased mistakes in the dictator treatment.


4.2.2.2. Within-Treatment Analysis

Figure J - Share of Biased Mistakes (bias share) in Relation with the Share of Altruist Choices (giver score)

Figure K - Average Share of Biased Mistakes for Givers (giver score ≥ 0.5) and Non-Givers (giver score < 0.5)

This part of the analysis aims at testing the second and third hypotheses. The second hypothesis stated that reaction time would be positively correlated with biased mistakes. The third hypothesis stated that the share of altruist choices would be positively correlated with biased mistakes. Parallel to the analysis carried out in the first experiment, a “giver score” variable was generated. The variable indicates the share of altruist choices of each participant. The first two difficulty levels were used to generate the new variable and were therefore excluded from the rest of the analysis, so that endogeneity and reverse causality are prevented. As can be seen in Figure J, a positive correlation was observed between the share of biased mistakes and the share of altruist decisions. In order to test this relationship, I conducted Spearman’s rank correlation test, just as in the first experiment. The test did not reveal a significant relationship between the two variables; the relationship was, however, both positive and close to statistical significance (p=0.058). We fail to reject the null hypothesis of no relationship between the two variables and therefore fail to confirm the third hypothesis that biased mistakes are positively correlated with the share of altruist choices.

This part of the analysis further tests the second hypothesis, that reaction time is positively correlated with biased mistakes in the dictator treatment. The third hypothesis, which stated that biased mistakes will be correlated with the share of altruist choices, is also tested. Following the analytical blueprint of the first experiment, a random-effects probit regression was conducted (Appendix A, Table 7). According to the analysis, difficulty increased the probability of a mistake (p=0.000). When the answer was “left”, the probability of a mistake increased significantly (p=0.043), indicating that an opposite natural bias might have existed after all. Neither reaction time nor its interactions with other variables predicted mistakes (p>0.05). The rest of the variables and their interactions also did not reveal a significant effect (p>0.05) on the probability of a mistake. The analysis failed to reject the null hypothesis that reaction time has no relationship with the probability of a mistake when the true answer is left. As a result, we fail to confirm the second hypothesis, that reaction time is positively correlated with biased mistakes. The regression results also failed to reject the null hypothesis that the interaction between the answer being left and the giver score has no relationship with the probability of a mistake. Due to this result, we fail to confirm the third hypothesis, that biased mistakes would be positively correlated with the share of altruist choices.


4.2.2.3. Confidence

Figure L – Average Confidence Levels per Treatment

In contrast to the first experiment, subjects in the dictator treatment were more underconfident than those in the control (Figure I). In order to test the fourth hypothesis, which stated that there would be higher overconfidence among participants in the dictator treatment, I need to compare the confidence levels of the two groups. Confidence reports were weighted to account for actual performance. On average, participants in the control treatment underestimated their performance by 1.4 correct answers, while participants in the dictator treatment underestimated theirs by 2 correct answers. Since the data are again non-normally distributed and confidence is a continuous variable, a Wilcoxon rank-sum test was employed. The test did not reveal a statistically significant difference between the two groups (p=0.539). As a result, we fail to confirm the fourth hypothesis, which stated that dictators would be relatively more confident than participants in the control treatment.
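A minimal sketch of this comparison is shown below, assuming one row per subject with hypothetical column names; the adjusted confidence measure (reported minus actual correct answers) follows the weighting described above, so negative values indicate underconfidence.

```python
# Minimal sketch of the between-treatments confidence comparison.
# Hypothetical columns: subject, treatment ("control"/"dictator"),
# reported_correct (confidence report), actual_correct (true score).
import pandas as pd
from scipy.stats import ranksums

subjects = pd.read_csv("experiment2_subjects.csv")  # hypothetical file name

# Adjusted confidence: reported minus actual correct answers.
subjects["adj_confidence"] = subjects["reported_correct"] - subjects["actual_correct"]

control = subjects.loc[subjects["treatment"] == "control", "adj_confidence"]
dictator = subjects.loc[subjects["treatment"] == "dictator", "adj_confidence"]

stat, p_value = ranksums(control, dictator)  # Wilcoxon rank-sum test
print(f"Wilcoxon rank-sum: z = {stat:.3f}, p = {p_value:.3f}")
```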


Figure M - Overconfidence and Bias Share, Givers and Non-Givers within the Dictator Treatment. Non-Givers (Giver Score < 0.5) are presented in red.

Figure N - Average Confidence Levels for Non-Givers (Giver Score < 0.5) and Givers (Giver Score ≥ 0.5)

The second part of the fourth hypothesis stated that biased mistakes would be positively correlated with overconfidence. Figure M presents the results of the confidence reports in the dictator treatment. Red dots represent non-givers (giver score < 0.5) and empty blue dots represent givers (giver score ≥ 0.5). It should be noted that the giver score still refers to the first two difficulty levels, while the bias share refers to the remaining difficulty levels. As can be seen, the only observable pattern is that none of the givers passed the overconfidence line; in other words, they were all underconfident. Average confidence was also lower for givers than for non-givers (Figure N). In order to test the fourth hypothesis, we need to test for a correlation between the share of biased mistakes and confidence. Just as in the first experiment, Spearman's rank correlation test was employed. The results did not indicate a significant relationship between confidence and biased mistakes (p=0.404), so we fail to reject the null hypothesis of no relationship between them. As a result, the fourth hypothesis, which predicted that biased mistakes would be positively correlated with confidence, is not confirmed.


4.2.3. Discussion of Results, Experiment 2

4.2.3.1. Hypothesis 1

Despite a small general bias towards the left (the unfavorable outcome), there were more mistakes in the dictator treatment, both total and biased. Nonetheless, the statistical analysis revealed that the difference was not significant, so the first hypothesis, which predicted more biased mistakes in the dictator treatment, is not confirmed. I see two possible explanations for these results. First, according to the sample size analysis, the existing sample was too small, which limited the statistical power of the tests. Second, there is a chance that only a subset of participants in the treatment self-deceived. If only a very small number of people self-deceived, it would be hard to capture that difference in the between-treatments analysis.

4.2.3.2. Hypothesis 2

Although reaction time was positively correlated with both types of mistakes, it did not have an additional positive effect on biased mistakes in the dictator treatment. Both the between-treatments and the within-treatment analyses found no evidence of such a correlation. As mentioned in section 4.1.3.2., reaction time analysis is hard to interpret, and several design elements that could improve its accuracy were not taken into account in this paper. Because of this weakness, and since the results of the current analysis are not exceptionally strong, a separate analysis has been conducted in Appendix B. That analysis revealed that, although not at statistically significant levels, reaction time predicted biased mistakes for altruist subjects in a distinct manner. Finally, a second set of random-effects probit regressions, which did not include reaction time, was conducted; their results can be found in Tables 6 and 8 of Appendix A.


4.2.3.3. Hypothesis 3

Figure O – Predictive Margins based on the probit regression of Table 8, Appendix A. Probability of Mistake per Share of Altruist Choices (Giver Score).

The main between-treatments and within-treatment analyses did not provide significant evidence in favor of the hypothesis that the share of altruist choices would be positively correlated with biased mistakes. Nevertheless, if we exclude reaction time, the effect of altruist choices on biased mistakes becomes significant in the within-treatment analysis (Appendix A, Table 8). A complete altruist (giver score = 1) has a 0.38 probability of making a mistake when the true answer is "left", while for a complete egoist (giver score = 0) the probability of making the biased mistake is 0.23. In relative terms, complete altruists were therefore about 65% more likely to commit a biased mistake ((0.38 − 0.23) / 0.23 ≈ 0.65). In total, this experiment produced mixed results with regard to the third hypothesis: it is not confirmed when reaction time is included in the regression, but when reaction time is excluded the within-treatment analysis supports it.
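For readers who want to reproduce such predictive margins, the sketch below shows one way to obtain predicted probabilities at giver scores of 0 and 1 from a fitted probit. It reuses the hypothetical within-treatment set-up from above, without reaction time (mirroring the Table 8 specification), and is not the code behind Figure O.

```python
# Sketch of a predictive-margins calculation, with the same hypothetical columns
# as in the within-treatment sketch above.
import pandas as pd
import statsmodels.formula.api as smf

dictator = pd.read_csv("experiment2_dictator_trials.csv")  # hypothetical file name
result = smf.probit(
    "mistake ~ difficulty + answer_left * giver_score", data=dictator
).fit(cov_type="cluster", cov_kwds={"groups": dictator["subject"]})

# Predicted probability of a biased mistake (true answer "left") at average
# difficulty, for a complete egoist (0) and a complete altruist (1).
grid = pd.DataFrame({
    "difficulty": [dictator["difficulty"].mean()] * 2,
    "answer_left": [1, 1],
    "giver_score": [0.0, 1.0],
})
p_egoist, p_altruist = result.predict(grid)
print(f"egoist: {p_egoist:.2f}, altruist: {p_altruist:.2f}, "
      f"relative increase: {(p_altruist - p_egoist) / p_egoist:.0%}")
```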


4.2.3.4. Hypothesis 4

It was hypothesized that overconfidence would be positively correlated with biased mistakes. The adjusted confidence measures did not reveal a statistically significant effect of biased mistakes on confidence. In comparison to the first experiment, confidence levels followed an opposite pattern: in the first experiment givers were less underconfident than non-givers (Figure G), while in this experiment confidence was lower among givers (Figure N). These opposing results raise some questions about self-deception and its effect on confidence. If some type of true self-deception is hiding behind the perceptual bias of the first experiment, the direction of confidence is in line with the literature. If self-deception was stronger in the second experiment, then the direction of confidence is at odds with the rest of the literature. According to my analysis, the biased mistakes of the first experiment were caused by a perceptual bias, while the biased mistakes of the second experiment were caused by self-deception. Consequently, it appears that when stronger self-deception was detected, confidence levels decreased.

5. General Discussion and Conclusion

I have conducted two experiments with the goal of improving our understanding of the relationship between fairness and self-deception. The results of this paper potentially fit within the frameworks of self-concept theory (Mazar et al., 2008) and self-signaling models (Mijović-Prelec and Prelec, 2010); however, they generally lack statistical significance. The following section summarizes the conclusions we can draw with regard to this paper's four hypotheses.

5.1. Summary of Results

Hypothesis 1:

“Subjects in the dictator treatment will make more biased mistakes than subjects in the control.” This hypothesis was not confirmed in the first experiment; the perceptual bias associated with the color red appears to have played a vital role in both treatments. In the second experiment, biased mistakes were more frequent among subjects in the dictator treatment, but the difference was not statistically significant. As a result, the hypothesis was not confirmed in the second experiment either.


Hypothesis 2:

“Reaction time will be positively correlated with biased mistakes in the dictator treatment.” Reaction time inferences can be unreliable, especially when specific design elements are missing (see section 4.1.3.2.), and both experiments omitted these elements. The most important reason to be skeptical about the reaction time results is that no time pressure was applied, so these results should be taken with a grain of salt. The first experiment revealed a general positive correlation between reaction time and mistakes, but no significant relationship was found for biased mistakes in the dictator treatment; in fact, reaction time exhibited a negative correlation with biased mistakes in that treatment. This result cannot be matched with any existing theory and could be attributed to measurement error due to the lack of time pressure. In the second experiment, the hypothesis was also not confirmed. Nevertheless, reaction time followed the expected trend at the extremes (complete egoists and complete altruists), albeit at a statistically insignificant level (Appendix B).

Hypothesis 3:

“Individuals who make more fair decisions will also make more biased mistakes.” A within-subjects analysis in the first experiment revealed an insignificant positive relationship. The fact that a positive relationship existed, even at a statistically insignificant level, is encouraging for the hypothesis, because it implies that, despite the very strong perceptual bias, fairer subjects still needed to self-deceive in order to achieve a higher external reward and maintain their self-concept. The second experiment confirmed the hypothesis, but only when reaction time was excluded from the analysis. The share of fair choices a subject made in the first two difficulty levels was significantly and positively correlated with biased mistakes in the rest of the trials, indicating that subjects maintained their self-concept of fairness partly by making fair choices and partly by self-deceiving away from making those choices.
