
Academic year: 2021


THE IMPACT OF OATH AND BAYESIAN TRUTH SERUM ON SELF-DECEPTION

Ursa Bernardic a,b *

a Neuroeconomics Lab, MIT Sloan School of Management, Massachusetts Institute of Technology
b Master Brain and Cognitive Sciences, University of Amsterdam

KEY WORDS: Self-Deception, Bayesian Truth Serum, Solemn Oath, Pragmatic perspective

ABSTRACT:

Behavioral research has documented that people do not always hold accurate perceptions and judgments of situations and the world, and yet believe their behavior is accurate when their decisions are favourable to the self. Recent studies have shown that such self-serving biases are self-deceptive in nature and can be harmful and costly in the long run. Self-deception also affects the quality of self-report surveys, which opens up an important methodological question with practical implications: can self-deception be reduced or minimized? The aim of the present study is to implement two truth-inducing mechanisms: the Bayesian Truth Serum (BTS), a survey scoring method that creates truth-telling incentives for participants answering subjective questions, and the Solemn Oath, a commitment method which posits that a person is more likely to tell the truth after a strong promise. Using a modified version of the Mijovic-Prelec and Prelec (2010) self-deception model, we found not only that BTS and the Solemn Oath increase honesty, but also that these truth-inducing mechanisms can reduce self-deception. These findings suggest that truth-incentivizing mechanisms might be implemented to improve the quality of respondents' answers in online settings by reducing self-deception and increasing honest answering.

1. INTRODUCTION

1.1 Self-Deception

Although honesty is central to the self-concept (Aquino & Reed, 2002), many empirical studies show that people deceive others and, more interestingly, also deceive themselves. This phenomenon is known as self-deception (Gur & Sackeim, 1979; Sloman et al., 2010). In the last few decades numerous philosophical articles and books (e.g., Davidson, 1985; Demos, 1960; Fingarette, 1969; Mele, 1997; Quattrone & Tversky, 1984; Deweese-Boyd, 2006; Levy, 2007) have tried to explain whether and how self-deception is possible, and to resolve definitional concerns: what is self-deception, and what is its intentionality (deceiving the self to deceive others) (Chance & Norton, 2015)? Evolutionary psychologists (Trivers, 2000, 2011), for their part, are interested in whether self-deception is adaptive, how it has evolved as a fitness-enhancing strategy, and what its benefits are. In contrast, mainstream psychologists have concentrated on discovering what mechanisms are involved in self-deception and what its function is (e.g., Greenwald, 1988; Sackeim, 1983; Paulhus & John, 1998). Likewise, the self-deception literature is controversial in the field of neuroscience. Some researchers propose that self-deception does not exist, suggesting instead a division of labor between different informational and motivational modules of sub-selves (Kenrick & White, 2011; Martindale, 1980). Others argue that self-deception exists and investigate how it differs from similar phenomena, such as impression management, at the neuronal level (Farrow et al., 2015).

The traditional model of self-deception is analogous to interpersonal deception, where some part of the self intentionally misleads another part (Deweese-Boyd, 2012). On this view, self-deceivers must (1) simultaneously hold two conflicting representations/beliefs about reality (p and non-p), and (2) although the individual is not aware of holding one of these beliefs, motivated acquisition and retention determine which belief is and which is not subject to awareness (Gur & Sackeim, 1979).

Corresponding author: Ursa Bernardic, Research Affiliate, Massachusetts Institute of Technology, Sloan School of Management, Cambridge, MA 02139. Email: ubernard@mit.edu

In one of the first empirical studies of self-deception (Gur & Sackeim, 1979), participants had to recognize a speaker's voice and identify it as either their own or someone else's. When participants misidentified their own voice, their verbal identification (e.g., "this is not my voice") was inconsistent with a physiologically based assessment: galvanic skin response increased for the sound of their own voice even when they failed to identify it as theirs. Gur and Sackeim (1979) interpret this pattern as a form of self-deception, as participants who consciously failed to recognize a voice (verbal assessment) unconsciously knew it was theirs (physiological assessment). This method was criticized by Mele (1997), who argued it was unclear whether physiological responses are demonstrative of belief and, even if they are, whether this is sufficient proof that the subjects held conflicting beliefs.

Arguably the most rigorous model of self-deception is the game-theoretical model by Mijovic-Prelec and Prelec (2010), which produced three main findings. First, they showed that it is possible to induce costly self-deception in experimental settings by presenting subjects with the prospect of winning a large financial bonus, contingent on overall performance relative to other subjects. Specifically, participants in their study were randomly assigned to two treatment groups, which received the same instructions, but the criteria for assigning the bonus differed between the groups. For participants in the Classification Bonus group the bonus was reserved for the top three participants according to classification accuracy in Phase II. Participants in the Anticipation Bonus group received the bonus if they were among the top three participants according to anticipation accuracy. Their findings confirmed that participants in the Anticipation Bonus condition had a stronger self-deceptive motive than participants in the Classification Bonus condition. Self-deceptive judgments can therefore be reliably and repeatedly elicited with incentives in the categorization task.

Second, their study addressed the psychological benefits of self-deception, measured by confidence ratings following each classification response. When participants were sorted according to self-deception rates, the findings suggest that participants with moderate self-deception are motivated by the benefits of confirmation, while participants with high self-deception are motivated by the costs of disconfirmation.

Thirdly, discounting of the confirming judgments was predicted by the game-theoretical model, which was based on self-signaling theory (Bodner & Prelec, 2003). Self-signaling theory suggests that total utility is the sum of outcome utility, the utility of the anticipated causal consequences of choice (e.g., action selection), and diagnostic utility, the value of the adjusted estimate of one's disposition in light of the choice (e.g., self-image in light of the choice). Using this theory, Mijovic-Prelec and Prelec (2010) proposed that both mechanisms collaborate to produce the overt expression of belief. Additionally, they suggest that differences in bias are due to individual differences in subject motivation.
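The decomposition above can be written compactly. The notation here is illustrative rather than taken verbatim from Bodner and Prelec (2003):

```latex
\underbrace{U(a)}_{\text{total utility}}
\;=\;
\underbrace{u(a)}_{\substack{\text{outcome utility:}\\ \text{anticipated consequences of action } a}}
\;+\;
\underbrace{V\!\big(\hat{\theta}\mid a\big)}_{\substack{\text{diagnostic utility:}\\ \text{value of the disposition estimate } \hat{\theta}\\ \text{adjusted in light of } a}}
```

Read this way, a self-deceptive choice is one in which the diagnostic term is large enough to outweigh a loss in outcome utility.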

1.2 Inducing the Truth

Although different disciplines (philosophy, psychology and neuroscience) propose distinguishable self-deception frameworks, there is some agreement that "the worst of all deceptions is self-deception" (Plato), which is "the source of much of the complexity, and tragedy, of human life" (Pinker, 2008, p. 184). Recent studies also support the idea that self-deception is costly in the long run (Mijovic-Prelec & Prelec, 2010; Chance et al., 2011; The Arbinger Institute, 2010; Borau et al., 2016), not only for the self-deceiving individual but also for others involved (Bachkirova, 2016). Speaking of costs, the first warning was made by Freud (1938/1950), who cautioned that the penalty for repression is repetition, and that it is necessary to uncover this mechanism through psychoanalysis for the benefit of the client. Although recent literature in this field has expanded the array of explanations of self-deception (Bachkirova, 2016), to the best of our knowledge few implications have been proposed with a pragmatic aim. In other words: how can self-deception be understood in order to minimize or reduce it?

In order to do so, the idea that the self-deceiver knows the truth at some level is appealing. Whisner (1993) and existential philosophers proposed that people have a demand for honesty, and that moral values, critical thinking and rationalization are the most important motivators for increasing honesty. Based on this, we decided to motivate self-deceivers to be more thoughtful and honest, and to search for this level of self, by introducing two distinct truth-inducing mechanisms: incentivizing subject honesty (the Bayesian Truth Serum) and committing subjects to truthfulness (the Solemn Oath).

BTS

The Bayesian Truth Serum (BTS) is a scoring method that provides incentives for supplying more honest answers to multiple-choice, subjective questions (Prelec, 2004). This method asks respondents not only for their own answers, but also for their percentage estimates of others' answers. The formula works by assigning high scores to answers that are surprisingly common. From this, a truthfulness score is created for each subject's response. At the end of the experiment, the top participants with the highest truthfulness scores (the percentage of top participants is known to participants) are incentivized with a bonus. Previous research has shown that BTS can induce more honest answers in problems of socially desirable over-claiming, including recognition questionnaires, charitable giving reports and nonmarket goods valuations (Weaver & Prelec, 2013), as well as self-enhancement (Hauser et al., in press). Although BTS incentives can be used to extract more honest subjective judgments from subjects, it remains unknown whether such a method and its truth incentives can override self-deception and improve honesty.
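As an illustration of the scoring logic, the following is a minimal sketch of Prelec's (2004) published formula for a single multiple-choice question; the function name and the default weight `alpha` are our own choices, not taken from this paper:

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0):
    """Bayesian Truth Serum scores for one multiple-choice question.

    answers:     (n,) array of chosen option indices, one per respondent.
    predictions: (n, k) array; row r is respondent r's predicted
                 frequency of each of the k options (rows sum to 1).
    Returns an (n,) array: information score + alpha * prediction score.
    """
    n, k = predictions.shape
    predictions = np.clip(predictions, 1e-9, None)   # guard log(0)
    endorsed = np.zeros((n, k))
    endorsed[np.arange(n), answers] = 1.0            # one-hot own answers
    xbar = np.clip(endorsed.mean(axis=0), 1e-9, None)  # empirical frequencies
    log_ybar = np.log(predictions).mean(axis=0)      # log geometric mean
    # Information score: rewards answers that are more common than predicted
    # ("surprisingly common").
    info = (endorsed * (np.log(xbar) - log_ybar)).sum(axis=1)
    # Prediction score: penalizes predictions far from actual frequencies.
    pred = (xbar * (np.log(predictions) - np.log(xbar))).sum(axis=1)
    return info + alpha * pred
```

The prediction term is the negative of a Kullback-Leibler divergence, so it is never positive and is maximized by predicting the actual empirical distribution; truthful answering maximizes the expected information score.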

Solemn Oath

In contrast with truth incentivizing, we also compare another truth-inducing method, the Solemn Oath. Social psychology offers commitment theory, which posits that a person is more likely to tell the truth after a strong promise (Jacquemet et al., 2011). Economic experiments show that people who make promises about future actions are more likely to keep them when playing different games (Ellingsen & Johannesson, 2004; Charness & Dufwenberg, 2006). Many researchers have found that the Solemn Oath is a formal way to create a bond between a person and truth-telling (see Sylving, 1959; Kiesler & Sakumura, 1996; Schlesinger, 2011). The Solemn Oath has been successful across a number of studies and in many settings, eliminating the hypothetical bias (Jacquemet et al., 2016) and improving the reliability of elicited preferences in value auctions (Jacquemet et al., 2013). Further evidence suggests that a person who is reminded of moral standards before a decision in economic games behaves less dishonestly, since he/she has to balance the desire to grab the monetary reward against the desire to preserve a positive self-view (Ariely & Jones, 2012). However, to the best of our knowledge, it remains unknown whether such a method can prevent or reduce self-deception.

2. METHOD

Participants

Three hundred residents of the United States (147 male; mean age = 34.38 years, SD = 9.90) were recruited through Amazon Mechanical Turk and paid $1.50 to complete the study. Unknown to participants, each was randomly placed into one of three conditions: Bayesian Truth Serum (n = 93), Solemn Oath (n = 105), or control (n = 102) (see Supplemental Materials). Individuals who had some familiarity with Korean characters, did not pass the attention or understanding checks, or had participated in a pilot study were excluded from the study.

Stimuli

Participants made decisions regarding ambiguous Korean signs. Size and resolution were matched between pictures. The pictures were selected from a set of 100 pictures used in a pilot behavioral study, in which 30 participants recruited via Amazon Mechanical Turk classified the signs as either more female- or more male-like. This was done primarily 1) to select sufficiently ambiguous Korean signs (Gur & Sackeim, 1979; Baumeister, 1993; Mijovic-Prelec & Prelec, 2010; Sloman et al., 2010), 2) to estimate the gender of each sign, and 3) to create a correct gender classification for each sign. Forty Korean signs (20 classified as more male-like and 20 as more female-like) were equally divided and repeated in the three phases (Figure 1).

Behavioural Task

All tasks were programmed using Qualtrics Survey Software and JavaScript.

The main experiment had three phases (Fig. 1), adapted from Mijovic-Prelec and Prelec (2010). In the first phase, 40 Korean characters were randomly presented on the screen and participants were asked to classify each as more male- or female-like. They were not given any additional instructions on how to do so, except to use their intuition and take the whole sign into account. To incentivize careful responding we (truthfully) instructed them that the correct answer for each sign was determined by the majority opinion of our pilot group; no additional information (regarding size, gender, etc.) was revealed. They would receive $0.025 for each correct gender classification (summing to $1 for all 40 correct classifications). The purpose of Phase 1 was to create a subjective answer key for each participant, which was later used as the main comparison for Phase 2 and Phase 3 answers.

Figure 1: Experimental task. (A) Participants made four responses in connection with each sign: an initial classification in Phase 1 (C1), followed by a blind anticipation (BA) and second classification (C2) in Phase 2, and a third classification (C3) in Phase 3.

The Phase 2 trials differed by requiring subjects first to predict/anticipate the gender of the sign inside an electronic envelope, before classifying it. Subjects were therefore forced to guess blindly when predicting the gender. When the electronic envelope opened they saw the sign and had to classify its gender again. To boost self-deception, participants in all three conditions (BTS, Control and Oath) were told (truthfully) that the top 30% of participants with the most accurate predictions would be rewarded with a $2 bonus. This incentive structure was set up to stimulate a potential motivation for self-deception. Note that in the original study the incentives for the anticipation group were larger ($40 for the top three participants). We also tested participant comprehension of what a correct prediction/anticipation means; those who did not pass the instruction test were screened out.

For example, imagine that you predicted that the sign in the envelope is female, but when the sign appears it has a more male-like shape. The dilemma would then be either to acknowledge the prediction error (report the honest answer and lose the opportunity to get a bonus for correct predictions) or to reinterpret the sign as more male-looking.

Lastly, Phase 3 repeated Phase 1, but without any incentive for correct classifications. Subjects saw the 40 signs again and had to classify each as more female- or male-like. We were interested in whether self-deception resulting from the anticipation can change memory, and in particular whether confirmation of a "wrong" anticipation would persist when there is no incentive to do so (in Phase 3). In short, can self-deception change how participants remember the sign?

To calculate the truthfulness score, participants also had to estimate how other people would classify the sign in four random trials across the three phases. At the end of the experiment, subjects completed a demographic questionnaire and were debriefed regarding the goals of the study.

3. RESULTS

Although the stimuli contained no ground truth and subjects never received any feedback on the accuracy of their classifications, we found considerable agreement in sign classification in Phase 1. Participants earned on average $0.664 (SD = 0.158), meaning that around 66% of characters in Phase 1 were classified correctly. As a reminder, participants were instructed that the winning answer is the one that matches majority opinion, as in a beauty-contest game.

Figure 2: Examples of four Korean signs classified as more female-like (a) or more male-like (b) by the majority of participants. There was a bias towards the male category (c) in C1 (55% male, 45% female), BA (59% male, 41% female), C2 (56% male, 44% female), and C3 (57% male, 43% female) across all participants, which was not driven by participant gender. Interestingly, there was significantly less bias (c) towards male anticipations in Oath < Control (p = 0.046, d = 0.27) and BTS < Control (p = 0.027, d = 0.31). This bias also persisted in C2, but only for BTS < Control (p = 0.031, d = 0.32).

The participant responses from Phase 1 (the first classification, whose purpose was to create a subjective answer key for the subsequent phases) and Phase 2 (blind anticipation and second classification) were sorted into four patterns (Fig. 3). A consistent pattern is when all three responses coincide, i.e., C1 = BA = C2 (FFF or MMM). A self-deceptive pattern corresponds to C1 ≠ BA = C2 (FMM or MFF); that is, the sign changes gender so as to make the anticipation correct. An inconsistent pattern corresponds to C1 = BA ≠ C2 (FFM or MMF); that is, the participant changes their mind about the gender although the anticipation was consistent with the original classification. Last, an honest pattern corresponds to C1 ≠ BA ≠ C2 (FMF or MFM); that is, the participant acknowledges the wrong anticipation and confirms the original classification.
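The sorting rule for these four patterns can be sketched as a small function; this is our own illustrative restatement of the definitions above, not code from the study:

```python
def classify_trial(c1, ba, c2):
    """Sort one trial's responses ('F' or 'M') into the four patterns.

    c1: initial classification (Phase 1)
    ba: blind anticipation (Phase 2, before the envelope opens)
    c2: second classification (Phase 2, after the envelope opens)
    """
    if c1 == ba == c2:
        return "consistent"        # FFF / MMM: all three coincide
    if c1 != ba and ba == c2:
        return "self-deceptive"    # FMM / MFF: sign "changes gender"
    if c1 == ba and ba != c2:
        return "inconsistent"      # FFM / MMF: mind changed despite match
    return "honest"                # FMF / MFM: wrong anticipation admitted
```

For instance, `classify_trial("F", "M", "M")` returns `"self-deceptive"`: the initial classification was female, the blind guess was male, and the sign was then re-seen as male so that the guess comes out correct.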

As expected, we observed (Fig. 3) more self-deceptive trials in the Control group than in the BTS (Control > BTS, p = 0.021, d = 0.32) or Oath condition (Control > Oath, p = 0.023, d = 0.31). Moreover, there were significantly fewer inconsistent trials in the Oath than in the Control group (Oath < Control, p = 0.006, d = 0.38), with no differences between BTS and Control, nor between BTS and Oath. Importantly, the two truth-incentivizing mechanisms, Oath and BTS, improved honesty, as both produced more honest trials than the Control condition (BTS > Control, p = 0.014, d = 0.35; Oath > Control, p = 0.013, d = 0.35).

Figure 3: The distribution of trial patterns as a table (top) and figure (bottom), by condition group: BTS, Control and Oath. Error bars represent intersubject SEM. Note. * = p < .05, ** = p < .01, *** = p < .001, NS = non-significant.

To better understand the computations involved in truth-incentivizing mechanisms, we computed individual log-response times of all three classifications for the different trial types (Fig. 4). Three results stand out. First, participants in the BTS and Oath conditions made faster responses when self-deceiving than those in the Control condition. Second, the direction reverses for honest trials, in which participants in the Control condition took less time for honest responses than those in BTS (Control > BTS, p = 0.027, d = 0.29) and Oath (Control > Oath, p = 0.006, d = 0.36). Third, for inconsistent trials participants in the Control condition had longer response times, although this was statistically significant only between Control and Oath (Control > Oath, p = 0.021, d = 0.08). Comparing self-deceptive response times for C1, BA, C2 and C3 separately, we found that participants in the BTS and Oath conditions spent much less time on the first classification C1 (BTS < Control, p = 0.006; Oath < Control, p = 0.004), and participants in Oath also on the second classification C2 (Oath < Control, p = 0.045). Participants in BTS spent more time on honest trials for C1 (BTS < Control, p = 0.013) and C2 (BTS < Control, p = 0.031).

Figure 4: The impact of condition on log-response times for different patterns. Error bars represent intersubject SEM. Note. * = p < .05, ** = p < .01, *** = p < .001, NS = non-significant.

Additionally, in Phase 3 (Fig. 5) we tested whether participants really see characters differently as a result of their anticipations and classifications, when there is no monetary motivation or reward to do so.

Figure 5: The distribution of trial patterns including Phase 3, by condition group: BTS, Control and Oath. Error bars represent intersubject SEM. Note. + = p = .05, * = p < .05, ** = p < .01, *** = p < .001, NS = non-significant.

Our expectation was that specific patterns (especially real self-deception) would change remembering, such that participants would classify the sign according to their anticipation and classification in Phase 2. To assess this, we separated response patterns according to Phase 3 (Fig. 5). In patterns marked 1 (e.g., Consistent1, SD1, Inconsistent1 or Honest1) the classifications in Phases 2 and 3 are the same (C2 = C3), while in patterns marked 2 (e.g., Consistent2, SD2, Inconsistent2 or Honest2) the classifications in Phases 2 and 3 differ (C2 ≠ C3). We found that for Consistent and Honest patterns, participants in all three conditions remembered and classified signs more consistently across Phases 2 and 3 (Consistent1 > Consistent2, Honest1 > Honest2). Next, compared with the Control condition, participants in the BTS condition had fewer Consistent2 patterns (BTS < Control, p = 0.017, d = 2.53), fewer SD1 patterns (BTS < Control, p = 0.053, d = 1.22), fewer SD2 patterns (BTS < Control, p = 0.038, d = 2.23), but more Honest1 patterns (BTS > Control, p = 0.031, d = 2.00). Results also show that, comparing Oath and Control, Oath participants had fewer SD1 (Oath < Control, p = 0.020, d = 2.63), Inconsistent1 (Oath < Control, p = 0.024, d = 1.97) and Inconsistent2 (Oath < Control, p = 0.022, d = 3.45), but more Honest1 trials (Oath > Control, p = 0.025, d = 2.64).
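The Phase 3 split of each pattern can be tallied over a set of trials as follows; this is our own illustrative restatement of the labeling scheme, with function and label names of our choosing:

```python
from collections import Counter

def pattern_distribution(trials):
    """Tally Phase-1/2 patterns split by the Phase 3 response.

    trials: iterable of (c1, ba, c2, c3) tuples with 'F'/'M' entries.
    Suffix '1' marks C2 == C3 (Phase 3 consistent with Phase 2);
    suffix '2' marks C2 != C3.
    """
    counts = Counter()
    for c1, ba, c2, c3 in trials:
        if c1 == ba == c2:
            base = "Consistent"
        elif ba == c2:             # c1 != ba here: self-deceptive
            base = "SD"
        elif c1 == ba:             # ba != c2 here: inconsistent
            base = "Inconsistent"
        else:
            base = "Honest"
        counts[base + ("1" if c2 == c3 else "2")] += 1
    return counts
```

For example, a trial (F, M, M, M) counts as SD1 (the sign "became" male and stayed male in Phase 3), while (F, F, F, M) counts as Consistent2.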

Figure 6: Average effort rate, honesty rate and happiness level reported by condition. Error bars represent intersubject SEM. Note. * = p < .05, NS = non-significant.

Since self-deception entails not only changing one's behaviour but also drawing a diagnostic inference from that behaviour, we included additional questions at the end of the experiment. All reported values (effort rate, honesty and happiness) were given on a 1-5 Likert scale. All three groups reported very high levels of effort and honesty in their ratings. Nonetheless, self-reported effort judgments differed significantly across conditions (BTS > Control, p = 0.034, d = 2.81; Oath > Control, p = 0.024, d = 3.18). However, there were no differences across conditions in reported happiness and honesty ratings.


4. DISCUSSION

The precise nature and definition of the computations involved in self-deception have been a long-standing problem within psychology, evolutionary biology, philosophy and neuroscience. To our knowledge, very few attempts have been made to document possible interventions that could help prevent self-deception in real-world scenarios. In this preliminary study, we investigated whether two truth-inducing mechanisms, BTS and the Solemn Oath, can reduce self-deception and/or increase honesty. The task (Fig. 1) is similar to previous work (Mijovic-Prelec & Prelec, 2010), except that there was a different incentive structure (the top 30% of participants could earn an additional $2 bonus), and in order to boost self-deception we did not incentivize correct classifications in Phase 2 (C2). We also added one more phase, a non-incentivized re-classification of the stimuli in Phase 3 (C3), in order to examine whether self-deception affects perception, namely whether subjects really 'see' what they want to see.

When relating our behavioral results to the original study (Mijovic-Prelec & Prelec, 2010), two results stand out. We found fewer self-deceptive patterns and more honest patterns when comparing our control condition and their Anticipation Bonus group, which both had very similar instructions (see supplemental information). Possible explanations are the different populations (they tested university students, while we tested MTurk workers), devices (lab experiment vs. online experiment), or blind-anticipation financial incentive structures. It might also be that the signs we collected in the preliminary study were not vague enough (Sloman et al., 2010), or that we did not have enough trials (participants in our study did 40 trials, while those in the original study did 100).

Our results suggest that both truth-incentivizing mechanisms, BTS and the Solemn Oath, increase honest trials and decrease self-deceptive trials. However, a more appropriate indicator of self-deception would be the comparison between the frequencies of self-deceptive and inconsistent patterns used in the original study as a baseline for estimating whether there is statistically significant self-deception or just variability of classifications. Using that formula, our modified design and the online subject pool do not show a significant difference between the three conditions (BTS, Control, Oath). This may be accounted for by a different underlying motivation among online subjects, for whom experiment participation is an important source of income, whereas laboratory subjects are motivated to maximize their earnings in a single experiment session. (Note: this explanation is reinforced by the results of ongoing studies at the MIT Neuroeconomics Lab, which show significantly lower levels of deceptive-dishonest responses in online vs. laboratory studies. That is by itself an important finding for assessing the role of cost vs. benefit in self-deception.)

Response time data provide additional evidence of different processing between the patterns, as well as between the incentivized and control conditions. Our results suggest that honest and consistent patterns require longer response times across conditions. Findings from Farrow and colleagues (2015) suggest that participants require more time for self-deception and that this is correlated with greater cognitive load (Farrow et al., 2015). Our results are consistent: self-deceptive trials require more time than inconsistent trials, and participants in the Control condition made slower responses in self-deceptive patterns than BTS and Oath subjects. Although our study did not assess the level of self-deception using the formula from the original paradigm, the fast response-time pattern in self-deception trials in both the BTS and Oath conditions is consistent with the fast self-deception responses in the Mijovic-Prelec and Prelec study. This suggests that self-deception might be even more costly when subjects are presented with signing an Oath or with the BTS. This latter explanation cannot be confirmed or rejected, as investigating a shift of perception is beyond the scope of this project. However, we suggest additional eye-tracking studies to address whether participants really 'see' characters differently.

Finally, we wanted to address whether subjects really remember the characters differently as a result of their anticipations. This is why we included the additional Phase 3 in our experiment, where subjects classified the signs without any monetary reward, i.e., without any external incentive. Our results show that, compared to controls, BTS and Oath subjects show better memory for signs in the honest than in the self-deception condition (where they are asked to recognize the sign following their anticipation/prediction of that sign). Although this is only speculation, it is possible to interpret these results in light of recent neuroscientific literature suggesting that the neural mechanism of motivated forgetting is achieved by inhibitory control at the encoding level (e.g., disrupting and truncating encoding) or at the retrieval level (e.g., memory retrieval as a self-protective mechanism), where retrieval suppression engages the lateral prefrontal cortex (Anderson & Hanslmayr, 2014). Interestingly, recent studies on deception and self-deception (Abe et al., 2007; Spence et al., 2001, 2004, 2008; Nunez et al., 2005; Farrow et al., 2015) have consistently reported that the ventrolateral prefrontal cortex (vlPFC) is associated with inhibition of the truthful response during deception (Miller & Cohen, 2001; Macdonald et al., 2000) and self-deception (Farrow et al., 2015).

To conclude, our findings suggest that truth-inducing mechanisms such as the Bayesian Truth Serum and the Solemn Oath significantly increase honest trials and reduce self-deceptive trials. Further studies that assess higher levels of self-deception are needed to confirm whether truth-inducing mechanisms can modulate self-deception. Higher levels of self-deception can be achieved with more ambiguous characters as well as with higher incentives for the desired outcome (in our case, for correct anticipations of the stimuli).

It remains for further studies to confirm the effectiveness of truth-incentive mechanisms such as BTS and the Solemn Oath in decreasing self-deception. The most immediate implementation would be in online studies, where more honest, more truthful and less self-deceptive responses would improve the quality of anonymous surveys. It would also be beneficial in assessing self-reports of medical illness and mental disorders, given that such reporting is commonly contaminated by self-deception. Although participants in our study were reminded throughout the study that they had signed the Solemn Oath or that they would receive a bonus for truthful answers (BTS), future studies should address the temporal limitations of these mechanisms.

Lastly, individual differences in self-deception should be investigated more thoroughly between subjects within the same treatment. Previous studies suggested that low self-deception is related to low self-esteem (Johnson et al., 1997) and better self-insight (Cook-Greuter, 1999). Therefore, investigating and understanding the self in self-deceivers, and the information and characteristics of the person involved in self-deception, remains an important question for further investigation if we wish to help individuals understand and minimize self-deception, and to help practitioners influence this intended change (Bachkirova, 2016, p. 5).


5. SUPPLEMENTAL INFORMATION

Experimental instructions

Figure 7: Three conditions: Solemn Oath, BTS and Control

6. ACKNOWLEDGEMENTS

We thank the MIT Sloan School of Management for their funding and support. Many thanks to Dr. Drazen Prelec and Dr. Danica Mijovic-Prelec for their great ideas and their willingness always to offer invaluable advice and support throughout the entire internship. Many thanks to Dr. Maël Lebreton for his helpful suggestions on the research proposal agreement.

7. DECLARATION OF CONFLICTING INTERESTS

The author(s) declared no potential conflicts of interest with re-spect to the research, authorship, and/or publication of this arti-cle.

8. FUNDING

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article.

9. REFERENCES

Abe, N., Suzuki, M., Mori, E., Itoh, M., & Fujii, T. (2007). Deceiving others: Distinct neural responses of the prefrontal cortex and amygdala in simple fabrication and deception with social interactions. Journal of Cognitive Neuroscience, 19(2), 287-295.

Anderson, M. C., & Hanslmayr, S. (2014). Neural mechanisms of motivated forgetting. Trends in Cognitive Sciences, 18(6), 279-292.

Arbinger Institute. (2010). Leadership and self-deception: Getting out of the box. Berrett-Koehler Publishers.

Ariely, D., & Jones, S. (2012). The (honest) truth about dishonesty: How we lie to everyone, especially ourselves. New York, NY: HarperCollins.

Aquino, K., & Reed II, A. (2002). The self-importance of moral identity. Journal of Personality and Social Psychology, 83(6), 1423.

Bachkirova, T. (2016). A new perspective on self-deception for applied purposes. New Ideas in Psychology, 43, 1-9.

Bodner, R., & Prelec, D. (2003). Self-signaling and diagnostic utility in everyday decision making. The Psychology of Economic Decisions, 1, 105-126.

Borau, S., & Nepomuceno, M. V. The self-deceived consumer: Women's emotional and attitudinal reactions to the airbrushed thin ideal in the absence versus presence of disclaimers. Journal of Business Ethics, 1-16.

Chance, Z., & Norton, M. I. (2015). The what and why of self-deception. Current Opinion in Psychology, 6, 104-107.

Chance, Z., Norton, M. I., Gino, F., & Ariely, D. (2011). Temporal view of the costs and benefits of self-deception. Proceedings of the National Academy of Sciences, 108(Supplement 3), 15655-15659.

Charness, G., & Dufwenberg, M. (2006). Promises and partnership. Econometrica, 74(6), 1579-1601.

Davidson, D. (1985). Deception and division. The multiple self, 79.

Demos, R. (1960). Lying to oneself. Journal of Philosophy, 57, 588-595.

Deweese-Boyd, I. (2006). Self-deception.

Farrow, T. F., Burgess, J., Wilkinson, I. D., & Hunter, M. D. (2015). Neural correlates of self-deception and impression-management. Neuropsychologia, 67, 159-174.

Fingarette, H. (2000). Self-deception. London: University of California Press.

Freud, S. (1950). Splitting of the ego in the defensive process. In J. Strachey (Ed.), Collected papers (Vol. V, pp. 372-375). London: Hogarth Press. (Original work published 1938)

Greenwald, A. G. (1988). Self-knowledge and self-deception. Self-deception: An adaptive mechanism, 113-131.

Gur, R. C., & Sackeim, H. A. (1979). Self-deception: A concept in search of a phenomenon. Journal of Personality and Social Psychology, 37(2), 147.

Hauser, R., Prelec, D., & Mijovic-Prelec, D. (2017). Self-assessment: Consistency and effects on projection. Working paper.

Jacquemet, N., James, A. G., Luchini, S., & Shogren, J. F. (2011). Social psychology and environmental economics: A new look at ex ante corrections of biased preference evaluation. Environmental and Resource Economics, 48(3), 413-433.

Jacquemet, N., James, A., Luchini, S., & Shogren, J. F. (2016). Referenda under oath. Environmental and Resource Economics, 1-26.

Jacquemet, N., Joule, R. V., Luchini, S., & Shogren, J. F. (2013). Preference elicitation under oath. Journal of Environmental Economics and Management, 65(1), 110-132.

Kenrick, D. T., & White, A. E. (2011). A single self-deceived or several subselves divided? Behavioral and Brain Sciences, 34(1), 29-30.

Kiesler, C. A., & Sakumura, J. (1966). A test of a model for commitment. Journal of Personality and Social Psychology, 3(3), 349.

Levy, N. (2007). Neuroethics: Challenges for the 21st century. Cambridge University Press.

MacDonald III, A. W., Cohen, J. D., Stenger, V. A., & Carter, C. S. (2000). Dissociating the role of the dorsolateral prefrontal and anterior cingulate cortex in cognitive control. Science, 288, 1835-1838.

Mele, A. R. (1997). Real self-deception. Behavioral and Brain Sciences, 20(1), 91-102.

Mijovic-Prelec, D., & Prelec, D. (2010). Self-deception as self-signalling: A model and experimental evidence. Philosophical Transactions of the Royal Society B: Biological Sciences, 365(1538), 227-240.

Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24, 167-202.

Nunez, J. M., Casey, B. J., Egner, T., Hare, T., & Hirsch, J. (2005). Intentional false responding shares neural substrates with response conflict and cognitive control. Neuroimage, 25(1), 267-277.

Palinko, O., Kun, A. L., Shyrokov, A., & Heeman, P. (2010, March). Estimating cognitive load using remote eye tracking in a driving simulator. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications (pp. 141-144). ACM.

Paulhus, D. L., & John, O. P. (1998). Egoistic and moralistic biases in self-perception: The interplay of self-deceptive styles with basic traits and motives. Journal of Personality, 66(6), 1025-1060.

Pinker, S. (2008). One on one with Steve Pinker. The Psychologist, 21(2), 184.

Prelec, D. (2004). A Bayesian truth serum for subjective data. Science, 306(5695), 462-466.

Quattrone, G. A., & Tversky, A. (1984). Causal versus diagnostic contingencies: On self-deception and on the voter's illusion. Journal of Personality and Social Psychology, 46(2), 237.

Sackeim, H. A. (1983). Self-deception, self-esteem, and depression: The adaptive value of lying to oneself. Empirical studies of psychoanalytic theories, 1, 101-157.

Sartre, J. P. (2012). Being and nothingness. Open Road Media.

Schlesinger, H. J. (2011). Promises, oaths, and vows: On the psychology of promising. Taylor & Francis.

Sloman, S. A., Fernbach, P. M., & Hagmayer, Y. (2010). Self-deception requires vagueness. Cognition, 115(2), 268-281.

Spence, S. A., Farrow, T. F., Herford, A. E., Wilkinson, I. D., Zheng, Y., & Woodruff, P. W. (2001). Behavioural and functional anatomical correlates of deception in humans. Neuroreport, 12(13), 2849-2853.

Spence, S. A., Hunter, M. D., Farrow, T. F., Green, R. D., Leung, D. H., Hughes, C. J., & Ganesan, V. (2004). A cognitive neurobiological account of deception: Evidence from functional neuroimaging. Philosophical Transactions of the Royal Society B: Biological Sciences, 359(1451), 1755.

Spence, S. A., Kaylor-Hughes, C., Farrow, T. F., & Wilkinson, I. D. (2008). Speaking of secrets and lies: The contribution of ventrolateral prefrontal cortex to vocal deception. Neuroimage, 40(3), 1411-1418.

Sylving, H. (1959). The oath: I. The Yale Law Journal, 68(7), 1329-1390.

Trivers, R. (2000). The elements of a scientific theory of self-deception. Annals of the New York Academy of Sciences, 907(1), 114-131.

Trivers, R. (2011). The folly of fools: The logic of deceit and self-deception in human life. Basic Books.

Weaver, R., & Prelec, D. (2013). Creating truth-telling incentives with the Bayesian truth serum. Journal of Marketing Research, 50(3), 289-302.

Whisner, W. (1993). Overcoming rationalization and self-deception: The cultivation of critical thinking. Educational Theory, 43(3), 309-321.
