IS HONESTY REALLY THE BEST POLICY? INVESTIGATING THE INTERACTION BETWEEN LOSS AVERSION AND ATTENTION-TO-STANDARDS MECHANISMS ON THE DECISION TO ACT DISHONESTLY
Abstract

It is not uncommon for people to act dishonestly in pursuit of their self-interest. This behaviour, however, more often seems to arise in situations in which an individual is attempting to circumvent a possible loss rather than approach a possible gain. Based on the predictions arising from Kahneman & Tversky’s (1979) Prospect Theory, the present contribution tests whether dishonest behaviour does in fact depend upon an individual’s reference point. Moreover, inspired by the effectiveness of the attention-to-standards mechanisms outlined in Mazar et al. (2008) in reducing dishonesty in the gain frame, the current research also investigates their efficacy in the loss frame. We employ a probability-based task that implements a 2 (Loss vs. Gain) by 3 (No Intervention, 10 Commandments, Code of Conduct) between-subjects design with 161 experimental participants to examine whether dishonest behaviour is reference-dependent, and whether it can be reduced by attention-to-standards mechanisms. Results indicate that, while there is no statistical difference in dishonesty between the Loss and Gain frames, the attention-to-standards mechanism significantly reduces dishonesty in both conditions.

Name: George Beardon
Student Number: 11753447
University: University of Amsterdam
Faculty: Faculty of Economics and Business
Programme: MSc Economics
Specialisation: Behavioural Economics and Game Theory
Supervisor: Ivan Soraperra


STATEMENT OF ORIGINALITY

I, George Beardon, declare to take full responsibility for the contents of this document, of which I am the sole author. I declare that the text presented in this document is original and that no sources other than those mentioned in the text and its references have been used in its creation. The Faculty of Economics and Business is responsible solely for the supervision of this work, not for its contents.


SECTION I–INTRODUCTION

“Cheaters never prosper”, they say. Yet, people still frequently seem to cheat to pursue their self-interest, oftentimes actually prospering in the process. Examining the countless acts of cheating behaviour in modern society, however, exposes a striking pattern: people often seem to be more motivated to cheat by a desire to circumvent losing something than they are to cheat to approach an equivalent gain. One can readily think of a number of different real-life scenarios in which this tendency seems to be true. Consider, for instance, the case of Lance Armstrong, who was found to be using performance-enhancing substances in 2012. Armstrong, a seven-time Tour de France winner, desperate to retain his former brilliance and protect his position at the top, decided that using steroids presented the best chance of maintaining his status. Moreover, outside of sporting competitions, this tendency seems to reveal itself in a number of different economic interest areas. Engström, Nordblom, Ohlsson, and Persson (2015), for instance, found that Swedish taxpayers tend to claim deductions from their end-of-year tax returns more frequently if they are in a deficit of their tax burden than if they are in surplus – that is, Swedish taxpayers seem to try harder to escape taxation if their compliance decision is framed in terms of a potential loss. Furthermore, research on performance-based incentives indicates that payments made in advance, to be returned if students’ results are substandard, can enhance the performance of teachers (e.g., Fryer et al., 2012; Levitt et al., 2016), and bonuses framed in terms of a possible loss are found to improve the productivity of Chinese factory workers (Hossain and List, 2009). Yet, whether such improvements in performance can exclusively be attributed to effort, and not to an increase in misreporting or corner-cutting, is a contentious issue. Welfare is another area in which it could be possible for individuals to lie in order to retain benefits for which they no longer meet the requirements (Grolleau, Kocher, & Sutan, 2016). Finally, in the academic sphere, many of today’s undergraduates are succumbing to the numerous forms of academic dishonesty in order to sidestep losing out on the most desirable employment positions and places on postgraduate courses at the most sought-after universities (McCabe, Treviño, & Butterfield, 1999).

Although it has previously been thought that individuals will always fully exploit situations in which it is possible to cheat for their self-interest (Becker, 1968), research by Mazar et al. (2008) suggests that cheating behaviour can be constrained because having to update one’s self-conception can constitute an internal cost of dishonesty. Yet, individuals are not always mindful of the moral component of their identity and, hence, the moral cost of dishonesty can fluctuate according to the decision-making context. Thus, it seems interesting to investigate whether steering an individual’s attention to standards of honesty is an effective mechanism to reduce dishonesty in both the Loss and Gain frames. We employ a 2x3 factorial, between-subjects design in order to explore this hypothesis. We vary the frame (Loss vs. Gain) as well as the moral intervention (No Intervention, 10 Commandments, and Code of Conduct). We find that loss-framed individuals do cheat more than gain-framed individuals, although this difference is not statistically significant. The statistical procedure, however, is considerably underpowered, and so it is not possible to conclusively determine that there is no behavioural difference between the framing scenarios. Furthermore, we find that implicit (i.e. recalling the 10 Commandments) and explicit (i.e. agreeing to the experiment’s Code of Conduct) moral cues are effective in reducing dishonesty in both frames, with the implicit cue outperforming its explicit counterpart. The remainder of the paper is structured as follows: Section II discusses the literature pertinent to this topic; Section III outlines the experimental design and methodology; Section IV presents the results; Section V considers the results in the context of prior research; and Section VI offers some concluding remarks, policy implications and avenues for future research.


SECTION II–LITERATURE REVIEW

2.1 – PERSPECTIVES AND PRIOR EVIDENCE ON DISHONESTY

Inspired by the thinking of political and economic philosophers such as Thomas Hobbes and Adam Smith, who contended that humankind is, at its core, self-interested, as well as by the conventional economic framework of Expected Utility Theory in which this self-interest is pursued and rationally maximised, Becker (1968) established the canonical economic model of dishonesty in which a decision-maker behaves dishonestly if it is to their material benefit and is honest otherwise. According to Becker’s (1968) Simple Model of Rational Crime (SMORC), the decision to act dishonestly depends exclusively upon three basic elements: (i) the expected benefit of the act; (ii) the probability of detection; and (iii) the magnitude of punishment levied if caught committing the offence. It is instructive to use Becker’s original example in the domain of criminality (for which SMORC was initially developed) to illustrate such a decision-making process. Becker’s (1968) example is as follows: if an individual is in a rush, say, to arrive at an important meeting on time, and they must choose between licitly parking in an inconvenient location or illicitly parking in a more convenient one, then inasmuch as the expected cost of the punishment is not more than the benefit derived from arriving punctually, the rational agent should opt for the illegal parking space in order to not arrive late. In terms of dishonesty, SMORC therefore postulates that an individual facing the opportunity to act dishonestly and boost their earnings should conduct a similar cost-benefit analysis and, subsequently, decide whether to cheat on the basis of its outcome. Moreover, if it transpires that the expected benefit derived from acting dishonestly exceeds the financial cost resulting from reprimand, SMORC also predicts that the decision-maker should cheat to the fullest extent, irrespective of the situational context. Becker (1968), therefore, conceptualises a prototypical dishonest human – Homo Improbus¹, if you will – whose decision-making process is one in which there is no intrinsic preference for truth-telling or innate concern about being dishonest, and, much like his brother Homo Economicus, is one in which utility depends solely on their material payoff. If such an individual is to be more inclined to act honestly, either the monetary benefit arising from dishonest behaviour should be decreased, or the probability of detection and magnitude of punishment should be increased, or both, until the cost associated with dishonesty exceeds its benefit. Hence, from this analysis our first hypothesis can be advanced:

Hypothesis 1: Individuals will act dishonestly if provided an opportunity – that is, if there are no, or few, significant exogenous costs related to dishonest behaviour.
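Under the assumptions just described, Becker’s cost-benefit logic can be sketched compactly; the symbols B, p and F below are introduced only for illustration and do not reproduce Becker’s (1968) own notation:

\[ \text{cheat} \iff B > p \cdot F, \]

where B is the expected benefit of the dishonest act, p the probability of detection, and F the magnitude of the punishment if caught. With p (or F) close to zero, as in the experimental settings discussed below, any positive B should induce maximal dishonesty – which is precisely the prediction that Hypothesis 1 captures.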

However, a recent quantitative meta-analysis by Abeler et al. (2016) suggests that people tend not to follow a Beckerian approach, instead opting only for “incomplete dishonesty” (Shalvi et al., 2011) if the opportunity arises – that is, a magnitude of dishonesty below the maximum predicted by SMORC. Abeler et al.’s (2016) investigation includes data from studies that have adopted approaches similar to Fischbacher and Föllmi-Heusi (2013) (henceforth referred to as FFH), wherein participants privately perform a random draw (e.g. a coin-toss, a die-roll, etc.) and then anonymously self-report the outcome to the experimenter. Hence, the FFH experimental paradigm allows individuals to behave dishonestly and increase their payoff free of the possibility of being caught and punished. Despite restricting their analysis only to experimental setups akin to FFH, Abeler et al. (2016) were, nonetheless, able to gather data from 72 studies, ultimately including the decisions of more than 32,000 subjects from 43 different countries. Abeler et al. (2016) discovered that, in spite of the absence of a detection mechanism, most individuals are inclined to sacrifice considerable amounts of money by being (almost) honest. Specifically, Abeler et al. (2016) found that people reported to have obtained only 21.6-percent of the maximum possible payoff, representing a notable departure from Becker’s prediction; if people really acted like Homo Improbus, we would expect all reports to be at the payoff-maximising state.

¹ Taken from Luigi Romeo’s Ecce Homo! A Lexicon of Man (1979), where Homo Improbus is defined as “Literally, someone bad, poor, or even enormous, excessive, in early Roman writers such as Plautus. In the post-Augustan period, the term was more frequent in connection with negative moral qualities; thus, a wicked, vile, dishonest person.” Note, also, that improbus is the etymological root of the noun ‘improbity’, which means to lack honesty or moral integrity.

Furthermore, the common finding that people are incompletely dishonest tends to be robust to the choice of experimental paradigm. Indeed, non-maximal dishonesty is also observed in so-called ‘performance-based’ tasks in which the experiment’s subjects are required to exert actual effort but are instructed that remuneration is conditional only on their anonymously self-reported performance (e.g., Mazar et al., 2008; Mead et al., 2009; Zhong et al., 2010; Shu & Gino, 2012; Friesen & Gangadharan, 2013; and, Gino & Mogilner, 2014). The observation that people are incompletely dishonest is also found in another subcategory of dishonesty experiments conducted outside of the laboratory, in which individuals find ‘misplaced’, or receive ‘misdirected’, items, unaware of the fact that they are actually involved in an experiment and that their behaviour is being scrutinized. Interestingly, the data demonstrate that most individuals are actually completely honest insofar that they often decide to return such items, despite it being impossible that the theft could be traced back to them (e.g., Steinberg et al., 1977; Yezer et al., 1996; West, 2005; Keizer et al., 2008; Mullen & Nadler, 2008; Franzen & Pointner, 2013; and, Stoop, 2014). Hence, the preponderance of empirical studies on honesty tends to refute Becker’s (1968) belief that individuals will cheat maximally when the material benefit of misconduct exceeds its potential cost. Indeed, in situations in which dishonesty is undetectable, most people either do not succumb to temptation, or exploit the situation only a little, suggesting that the decision to behave dishonestly is constrained by factors other than the external cost of punishment alone.

Abeler et al. (2016) identify three potential factors that can explain this effect: first, people may encounter an intrinsic cost of deviating from truth-telling; second, people may have reputational concerns which prevent them from self-reporting performance that could be interpreted as incriminating; third, people may find it important to act in a manner consistent with social norms.

Abeler et al.’s (2016) first account implies that there is a direct internal cost to acting dishonestly that should enter the cost-benefit analysis of an individual deciding whether to tell the truth or not. Essentially, some psychological disutility arises from falsely reporting one’s performance, and this disutility should either reduce, or eliminate completely, dishonest behaviour. Abeler et al. (2014) find, for instance, that in a field experiment in which dishonesty is completely imperceptible, the observed behaviour of their participants does not differ significantly from the honest outcome as predicted by probability, lending credence to the idea that there is indeed an intrinsic cost of dishonesty which is both pervasive and salient for people’s behaviour.² Abeler et al. (2016) suggest that this psychological disutility could arise because of an individual’s moral or religious predisposition, concerns about their self-image, an internalised injunctive norm of honesty, or some melange of all three. That not adhering to one’s moral or religious standards could impose a direct psychological cost upon oneself seems obvious. Honesty is oft-cited as one of the most important components of the ‘moral’ person, and a recent survey conducted by Geißler et al. (2003) indicates that a private norm of honesty is held by many. Research also demonstrates that individuals are motivated to be internally consistent with their ethical value system in order to maintain a positive self-evaluation (e.g., Pyszczynski et al., 2004). Greenwald (1980) pairs these perspectives and argues that honesty is an important component of a person’s self-image and that individuals also strive to maintain this component. Hence, any behaviour that challenges an individual’s standards for honesty should directly result in psychological disutility because it calls into question their self-image. Evidence from Battigalli, Charness, and Dufwenberg (2013) suggests that people do indeed experience negative affect upon violation of a personal norm of honesty. Turning to Abeler et al.’s (2016) injunctive norm perspective, psychologists have often contended that part of the socialisation process is the internalisation of the norms of one’s society (Campbell, 1964), which, in turn, become the criterion through which to assess one’s behaviour. Again, since these internalised social norms effectively constitute part of a person’s ethical value system, non-compliance should ultimately result in some form of psychological disutility. Interestingly, insights provided by the nascent field of Neuroeconomics, which uses functional magnetic resonance imaging (fMRI) to measure neural haemodynamic responses to certain economic stimuli, support the interpretation that humankind has an internal reward and punishment mechanism of the sort described above. De Quervain et al. (2004), for instance, find evidence that the brain’s primary reward centre (i.e., the ventral striatum) is activated when people altruistically punish infractions of social norms. Therefore, that there should be a direct psychological cost to acting dishonestly, because it contravenes either one’s private norms or internalised social norms, is supported both in principle and in actuality.

² In fact, Abeler et al. (2014) found that, if anything, people reported the payoff-maximising option slightly less often than honest reporting would predict.

According to Abeler et al.’s (2016) second proposition, a reputation for honesty is thought to restrict dishonest behaviour because individuals care about others’ perceptions of them. Abeler et al. (2016) suggest that a person’s utility is decreasing in the likelihood that a certain report will be construed as a lie and, hence, result in an unfavourable social image. In fact, reputational concerns can be such an important consideration that if somebody has legitimately received the payoff-maximising outcome, and this outcome is sufficiently improbable, such a person can actually be downwardly dishonest in order to circumvent any disrepute arising from reporting that they have received the maximum. Empirically, Gneezy et al. (2018) find evidence that people are most likely to act dishonestly in order to receive the payoff-maximising outcome where the chance that this outcome will be perceived to be dishonest is at its lowest. Akerlof (1983) suggests that such a mechanism is likely to have arisen as a heuristic that has evolved from humankind’s proclivity for repeated and non-anonymous interaction, in which the appearance of honesty is an important characteristic.

Finally, Abeler et al. (2016) suggest that social norms and, therefore, social comparison, can affect people’s reporting decisions. If others are behaving honestly, then it is posited that an individual might derive disutility by acting differently to them. Importantly, this account differs from the first in that the norm is merely ‘descriptive,’ meaning that it simply describes the behaviour of others. Conversely, an ‘injunctive’ social norm involves an additional component regarding whether such behaviour is perceived as acceptable. Hence, according to Abeler et al.’s (2016) first perspective, acting out of accordance with an internalised injunctive norm of honesty should decrease an individual’s utility, irrespective of whether others are actually behaving dishonestly at the time.

Extending beyond Becker’s Simple Model of Rational Crime (1968), and encompassing some of the points of contention outlined above, Mazar et al. (2008) offer a different theoretical perspective on dishonest behaviour that, at its core, includes the principle that people have some inherent preference for truth-telling. Mazar et al. (2008) propose a theory of Self-Concept Maintenance in which an individual’s internal system of standards can exert control over self-serving behaviour by influencing their ‘self-concept’ – that is, the manner in which a person perceives themselves (Baumeister, 1998).


Inspired by other psychological theories of morality that emphasise the ‘self’ – for instance, the theory of Objective Self-Awareness (Duval & Wicklund, 1972), in which concentrating on one’s self-image can be an important driver of moral decision-making (see Batson et al. (1999) for an empirical application), and Bandura’s (2001) Social-Cognitive Theory, in which people possess schemas about their moral character and seek to maintain self-consistency (Welsh & Ordonez, 2014) – Self-Concept Maintenance Theory (Mazar et al., 2008) is rooted in the belief that individuals wish to pursue numerous different objectives, whilst simultaneously maintaining certain aspects of their self-concept. Specifically, people can often be torn between the self-interested motivation to act dishonestly and the diametrically opposed and competing objective to maintain their self-concept as an honest person; this, then, represents a ‘win-lose’ situation, wherein, much like the forked road in Robert Frost’s poem ‘The Road Not Taken’, choosing one path necessarily involves sacrificing the other at some cost. According to Self-Concept Maintenance Theory, failure to comply with one’s standards for honesty should involve negatively updating one’s self-concept, which is aversive – that is, the erosion of one’s self-concept constitutes a non-pecuniary cost of dishonest behaviour, similar to those outlined by Abeler et al. (2016). Hence, people attempt to solve the motivational dilemma between self-interest and self-concept maintenance by striking a balance between these antithetical forces, such that they can benefit partially from dishonest behaviour, but only to the extent that they can at the same time maintain their self-image as an honest person (Mazar et al., 2008). Essentially, then, this theory predicts a range of dishonesty in which behaviours that would normally be considered dishonest do not bear any negative consequence on a person’s self-concept. People will act dishonestly, but the magnitude of dishonesty that allows them to maintain this aspect of their self-image will typically involve the sacrifice of considerable financial benefit. Note, however, that the relevance of this theory to a given individual ultimately depends on the extent to which their self-definition is organised around moral characteristics – that is, one needs to have a moral component of identity in order for dishonest behaviour to be reflected in their self-conception (Aquino et al., 2009). Ultimately, then, the preceding analysis motivates the current investigation’s second hypothesis:

Hypothesis 2: People will be incompletely dishonest – that is, cheating will be bounded away from the maximum.

2.2 – PROSPECT THEORY AND DISHONESTY

Since Becker (1968) introduced the study of dishonest behaviour to the field of economics, much experimental research has been conducted on the mechanisms that can either promote or restrict dishonesty. Recently, Jacobsen et al. (2017) reviewed the dishonesty literature to date and outlined a number of social, payment, cognitive, and micro-environmental determinants of dishonesty. It is instructive to provide a concise and non-exhaustive summary of some of these influences before proceeding. Socially, for instance, it has been found that the unethical behaviour of others can influence one’s own decision to act ethically either negatively or positively depending on the social identity of the ‘Other’ (Gino et al., 2009; Gino & Galinsky, 2012). Furthermore, dishonest behaviour is also said to increase if the benefits are communal rather than individual (Wiltermuth, 2011), if the cost of dishonesty imposed on another subject is made less severe (Gneezy, 2005), and if the dishonest individual has been treated unfairly previously in the experiment (Houser et al., 2012). Regarding the payment mechanism perspective, dishonesty is promoted by performance-based payment schemes (Cadsby et al., 2010; Belot and Schroder, 2013; Gravert, 2013), by shortening the length of time elapsed between dishonest behaviour and payment (Ruffle & Tobol, 2014), and by altering the method of payment – paying subjects in tokens rather than cold, hard cash, for example (Mazar et al., 2008). Cognitively, it has been shown that momentary positive affect can increase dishonesty (Vincent et al., 2013; Mazar & Zhong, 2010), and so too can exhaustion, in that it reduces the cognitive capacity to control one’s impulses (Mead et al., 2009; Gino et al., 2011). Finally, turning to the micro-environmental determinants, dishonesty rises if the dishonest act requires passivity instead of activity (Mazar & Hawkins, 2015), if the decision is time-pressured (Shalvi et al., 2012), and if there is an increased sense of anonymity (Zhong et al., 2010). Yet another micro-environmental factor that has garnered comparably less attention than those mentioned above is that of loss aversion, which the author finds rather peculiar considering that the topic has been thoroughly explored in other economic interest areas (see Kahneman, Knetsch, & Thaler, 1991). Hence, the current research, in part, seeks to better understand the effect of this micro-environmental mechanism on an individual’s decision to act dishonestly.

Kahneman and Tversky’s revolutionary Prospect Theory (1979) therefore provides an additional theoretical foundation of the current research. Although Prospect Theory was originally developed to deepen our understanding of individuals’ departures from rationality in situations involving probabilistic alternatives – something that Kahneman and Tversky coined ‘prospects’ – its principles and predictions have been extended to a range of different economic subject areas. Despite Kahneman and Tversky’s (1979) original article having amassed over 16,000 citations at the time of writing, it is a worthwhile endeavour to briefly explain the underlying features of their model for the unacquainted reader.

The foundational assumption of Kahneman and Tversky’s (1979) Prospect Theory is that the carriers of utility are changes in wealth or welfare relative to some ‘reference point.’ Hence, whereas Expected Utility Theory posits that utility is uniquely dependent on the absolute magnitude of an outcome, Prospect Theory instead postulates that whether a particular situation is evaluated positively or negatively depends ultimately on the change from the initial point of reference. According to Prospect Theory, then, utility is a function of two arguments: (i) the initial asset position, which serves as an individual’s reference point, and (ii) the magnitude of change relative to said point (Kahneman & Tversky, 1979). Note, also, that the reference point need not be one’s present asset position, but could also represent an aspirational level that differs from the current status quo. Kahneman and Tversky (1979) also propose that an individual’s utility function is non-linear insofar that above the reference point the utility function is concave and, conversely, below the reference point the function is convex. Such an “S”-shaped utility function implies that the marginal psychological response to both positive and negative changes in wealth exhibits diminishing sensitivity. Finally – and most crucially for the current research – Prospect Theory (1979) also hypothesises that there is an asymmetry in an individual’s evaluation of relative losses and gains – or, in the words of Kahneman and Tversky, “losses loom larger than gains” (1979). Particularly, it is thought that the aversive value of some loss is greater than the appetitive value of a gain of identical magnitude, a principle which is now referred to as loss aversion. Consequently, Prospect Theory predicts a utility function (such as that shown in Figure 1) which is defined by deviations from some reference point, which exhibits concavity for outcomes above the reference state, convexity for outcomes below the reference state, and which is also steeper in the latter circumstance.

[Figure 1. The Prospect Theory value function: value plotted against gains and losses relative to the reference point, with curves labelled ‘Value of Gain’ and ‘Value of Loss’.]
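For reference, a commonly used parametric form of this value function – taken from the later, cumulative version of the theory (Tversky & Kahneman, 1992) rather than from the source cited in this thesis – is:

\[
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \geq 0,\\
-\lambda (-x)^{\beta} & \text{if } x < 0,
\end{cases}
\]

where \(\alpha, \beta < 1\) capture diminishing sensitivity and \(\lambda > 1\) captures loss aversion; Tversky and Kahneman (1992) report median estimates of \(\alpha = \beta = 0.88\) and \(\lambda = 2.25\), implying that a loss is weighted roughly twice as heavily as an equivalent gain.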


One of the most interesting predictions arising from the third limb of Kahneman and Tversky’s Prospect Theory is that people’s behaviour should differ systematically between the gain and loss domains. Kahneman and Tversky (1979), for example, present evidence in their original thesis that people’s risk preferences are conditional on where along the “S”-shaped value function a particular prospect lies. Specifically, people tend to be risk-averse when choosing between positive prospects, and risk-seeking when choosing between negative ones. Furthermore, by merely manipulating the frame of the decision-making context (i.e., altering the presentation of transparently identical situations), Tversky and Kahneman (1981) engendered markedly different risk-preferences purely by summoning the inveterate psychological concept of loss aversion³.

Moreover, whilst Prospect Theory was originally developed in the domain of risky choice – that is, decisions in which the potential outcomes are ambiguous or uncertain – its assumptions and predictions are also applicable to the domain of riskless choice. Tversky and Kahneman themselves extended their theory to riskless options in Loss Aversion in Riskless Choice: A Reference-Dependent Model (1991), and evidence supporting the psychological principle that monetarily equivalent gains and losses are asymmetrically evaluated was found by Kahneman, Knetsch and Thaler (1991) when conducting economic research on framing and the endowment effect. Recently, it has been shown that people self-report to be more distressed when simply thinking about negative changes in wealth than they are excited about positive changes of the same amount (McGraw et al., 2010).

³ Consider, for instance, the two following hypothetical scenarios. In Scenario 1 you have been given £0 and must choose between Option A: a 15% chance of earning £10,000 (and, hence, an 85% chance of earning nothing), and Option B: a 100% chance of earning £1,500. In Scenario 2 you have been given £10,000 and must choose between Option A: a 15% chance of losing nothing (and, hence, an 85% chance of losing £10,000), and Option B: a 100% chance of losing £8,500. In both Scenarios 1 and 2, choosing Option A yields a 15% chance of receiving £10,000 and an 85% chance of walking away with nothing, whilst choosing Option B always delivers £1,500. Notice that the expected value of each of these options is exactly £1,500; the scenarios differ only in terms of their framing – that is, Scenario 1 is gain-framed, whilst Scenario 2 is loss-framed. Consequently, according to conventional economic theory, an individual’s choice in either scenario should only depend upon their tolerance for financial risk (Option A clearly being the riskier option) and there should be no difference in choice between them. However, what Tversky and Kahneman (1981) found was that in Scenario 1 people tended most often to choose Option B, whilst in Scenario 2 people tended to make the opposite choice.
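As a quick check of the arithmetic in footnote 3, the expected final positions do indeed coincide across the two frames:

\[
\begin{aligned}
\text{Scenario 1:}\;\; \mathbb{E}[\text{Option A}] &= 0.15 \times £10{,}000 + 0.85 \times £0 = £1{,}500 = \mathbb{E}[\text{Option B}],\\
\text{Scenario 2:}\;\; \mathbb{E}[\text{Option A}] &= £10{,}000 - 0.85 \times £10{,}000 = £1{,}500 = \mathbb{E}[\text{Option B}].
\end{aligned}
\]

Any systematic difference in choices between the scenarios therefore reflects the framing alone.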

Interestingly, one’s aversion to loss can also have an impact on self-interested behaviour. Brewer and Kramer (1986), for instance, showed that loss-framing of a Public Goods Game (PGG) was sufficient to elicit such behaviour. Brewer and Kramer (1986) manipulated the framing of the PGG by describing the situation as either a commons dilemma or a public goods dilemma. If the situation was framed as a commons dilemma, participants had to decide how much they should take from the common pool, simultaneously balancing their self-interest with the need to maintain a sufficiently large common resource to sustain themselves and others. Conversely, if the situation was instead framed as a public goods dilemma, participants had to contribute some portion of an initial endowment to the common pool, whilst still taking into account the same considerations as that of the former situation. Predictably, those participants that had to contribute resources to the common pool (i.e. those individuals whose decision was loss-framed) tended to behave less pro-socially than those who were simply able to take from it (i.e. those individuals whose decision was gain-framed), despite both situations being commensurate in terms of objective outcome. This result was later replicated by McCusker and Carnevale (1995) in an identical context, as well as by De Dreu and McCusker (1997) in a somewhat modified context. Furthermore, not only does loss aversion induce selfish behaviour, but it can also provide a rationalisation for said behaviour. For instance, Kahneman, Knetsch and Thaler (1986) found that people respond more positively to a firm’s decision to raise prices if the firm is trying to compensate for a financial loss than if it is seeking to increase profitability.

If loss aversion can create an impetus to act more self-interestedly, and also establish excuses for such behaviour, then it seems only natural to extend this finding into the area of unethical behaviour, such as cheating and dishonesty, since unethicality often arises in situations in which short-term self-interest is promoted over moral correctness. Intuitively, since Prospect Theory’s “S”-shaped utility function predicts that the aversive value of a loss is greater than the appetitive value of an equivalent gain, it seems rational that people will more frequently engage in dishonest behaviour to avoid such losses than to approach such gains. Previous research by Social Psychologists and Behavioural Economists alike has tended to study dishonesty in situations in which individuals have something to gain from cheating (e.g., Mazar et al., 2008), but hitherto only a handful of researchers have begun to explore dishonesty from the loss-based perspective (namely Cameron & Miller, 2008; also cited in Cameron & Miller, 2009; Kern & Chugh, 2008; Grolleau et al., 2016; and Schindler & Pfattheicher, 2017).

Cameron and Miller (2008) employ a performance-based cheating paradigm in order to explore the effect that loss aversion has on their participants’ appetite for honesty. Specifically, participants in their experiment were tasked with solving a series of nine six-letter anagrams, where “fiendishly difficult” ones were placed at the second and seventh positions. Participants were also instructed that they needed to solve this series in sequential order if they were to receive payment for correct solutions. By subtly manipulating the description of the payment structure, Cameron and Miller (2008) were able to induce loss- or gain-framing for their participants. Participants in the loss frame were instructed that they were starting the experiment with an initial endowment of $10 and would lose $1 per each anagram unsuccessfully solved, whilst those in the gain frame started with $1 and were to earn $1 per each anagram successfully solved. Because of the “essentially unsolvable” anagrams placed at the second and seventh positions, earning more than $2 (including the $1 show-up fee) would be incredibly difficult without resorting to dishonest behaviour, whilst earning more than $7 would require participants to lie twice. Participants were provided an opportunity to act dishonestly since payment would be based solely on their self-reported performance, which indicated that there was no possibility that the experimenter could check the veracity of their answers and, hence, detect any cheating behaviour. Overall, Cameron and Miller (2008) found that people acted dishonestly in both conditions, but that cheaters were disproportionately represented in the loss condition, where some 53-percent of participants acted dishonestly, relative to only 30-percent in the gain condition. Moreover, loss-framing not only increased the likelihood of acting dishonestly but also engendered dishonesty of a higher magnitude. Specifically, some 35-percent of participants in the loss condition claimed that they had solved the seventh anagram or more, therefore effectively cheating twice, compared to only 9-percent of those in the gain condition. Since Cameron and Miller decided to adopt a real-effort task, however, a potential confound is that the observed effect is merely a product of increased effort in the loss condition instead of greater dishonesty. Cameron and Miller (2008) defend their research against this criticism, contending that honest effort could not have helped their participants decipher the pair of unsolvable anagrams that were revealed to be too obscure in pretesting, but rather short-sightedly did not examine whether this was also the case during the actual experiment.

Kern and Chugh (2009), likewise, found that people’s unethicality can change according to the framing context. Specifically, they found that the decision-maker is more likely to either endorse or actually engage in dishonest and unethical behaviour if they are presented with a loss frame rather than a gain frame. However, a significant issue regarding Kern and Chugh’s research is that their participants’ decisions are not monetarily incentivised and, thus, their behaviour may deviate systematically from the behaviour they would have exhibited had an incentivised experiment instead been employed. Moreover, Kern and Chugh only ask their participants to “role play” hypothetical scenarios in their experiments rather than using the Experimental Economics field’s preferred method of script enactment and, hence, the participants’ decisions may reflect their knowledge of economic institutions outside the laboratory (Cox and Isaac, 1986) instead of an honest consideration of the real risks and benefits that could arise from behaving unethically.

Building upon this relatively nascent literature, Grolleau et al. (2016) implemented an incentive-compatible experiment, but also extended the analysis of Cameron and Miller (2009) by investigating loss aversion’s effect on cheating behaviour in a performance-based task where performance could be either monitored or unmonitored. If the participant had been assigned to the latter condition, there was a possibility to misrepresent their performance without fear of reproach. In this respect, Grolleau et al.’s (2016) experimental design addresses the concern that Cameron and Miller (2008) had not analysed the interaction between the framing situation and the possibility of cheating. Grolleau et al. found that when participants are monitored (i.e. when their self-reported performance in the task is checked against actual performance) there is no difference in performance between the frames. However, in the unmonitored conditions, they find that cheating is significantly higher for loss-framed individuals. Specifically, participants in the unmonitored loss condition reported an average of 9.56 correct responses, whereas those in the unmonitored gain condition reported 5.42, constituting an increase in dishonesty of approximately 76.5-percent between these conditions. Grolleau et al.’s research, therefore, rules out the concern that the finding of Cameron and Miller (2008) was driven predominantly by loss-framing’s additional motivational impact on actual effort, instead suggesting that a loss frame can indeed induce more mendacious behaviour.

Schindler and Pfattheicher (2017) sought to address some of the issues raised regarding the methodology of the studies delineated previously. First, they investigated dishonesty in a laboratory setting involving pecuniary incentives and thus align their research more closely with the prevailing standards of the Experimental Economics field. Second, they eschew a performance-based context and instead opt for studying dishonesty in situations involving probabilistic outcomes (e.g., the Die-Under-The-Cup paradigm and a coin-toss task). Study 1 of Schindler and Pfattheicher (2017) employed a multiple-roll dice task paradigm in which participants were asked to report the number of rolled “4s” after rolling a fair die 75 times. Participants earned 10c per rolled 4 in the gain condition, whilst in the loss condition they lost 10c from an initial endowment of €7.50 if the die displayed a number other than 4. Compared to one-shot tasks, the procedure in Study 1 allowed participants a continuous cheating range – that is, instead of only being able to cheat once, participants could cheat up to 75 times if they felt so compelled – allowing for a more precise estimate of cheating behaviour. By comparing mean reported outcomes with the statistical baseline of 12.5, Schindler and Pfattheicher (2017) were able to measure dishonest behaviour. Study 2 involved a simple one-shot coin-flip task with framing similar to that of Study 1. Since the outcome of flipping a coin is binary (except in the improbable case that it lands on its edge), it is not possible to conclude with certainty whether someone has been dishonest; hence, this yielded their participants a little more “moral wiggle room” (Dana, Weber & Kuang, 2007) with which to act dishonestly. Predictably, Schindler and Pfattheicher (2017) found in both studies that individuals demonstrate an increased proclivity for dishonesty when the subjects’ experimental setting is framed in terms of losses rather than gains.
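The baseline of 12.5 cited above follows directly from the binomial distribution of 75 fair rolls; the standard deviation below is my own calculation rather than a figure reported by Schindler and Pfattheicher (2017):

\[
\mathbb{E}[\#\,4\text{s}] = 75 \times \tfrac{1}{6} = 12.5, \qquad
\sigma = \sqrt{75 \times \tfrac{1}{6} \times \tfrac{5}{6}} \approx 3.23 .
\]

Treatment-level means materially above 12.5 are therefore the aggregate footprint of over-reporting that the multiple-roll design is built to detect.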

Briefly, it is worth mentioning a couple of other studies that find similar results with more unconventional approaches. Takahashi and Shen (2018) found that by anchoring an individual’s expected reward above their ‘fair’ reward, they were able to induce a loss frame in terms of an anticipated surfeit above what would otherwise have been considered a fair amount. Takahashi and Shen (2018) were, in this manner, able to show that such individuals tended to cheat more often than those whose expected reward was below their fair reward. Balasubramanian, Bennett and Pierce (2017) used a sample of online Indian workers from the MTurk platform to analyse the supply of cheating under different incentive structures. Balasubramanian et al. (2017) concluded that cheating behaviour was least prevalent in those individuals for whom the marginal benefit would yield an income greater than their self-reported daily income target. Balasubramanian et al. (2017) understand this finding in the context of a reference-dependent model, in which the daily income target acts as a reference point for their sample. This suggests that those whose incomes fall below the daily target are situated in a loss frame, where the benefit of a dishonest act is greater than for those who have already earned more than their daily target.

Now, although Mazar et al. (2008) postulate that people are basically honest and tend to be cheating averse, certain circumstances – for instance, facing a potential loss – can lead them to be more dishonest than usual. The results of Cameron and Miller (2008), Kern and Chugh (2009), Grolleau et al. (2016), Schindler and Pfattheicher (2017) and others confirm this perspective. Rick and Loewenstein (2008) refer to this effect as “hypermotivation” – that is, “a visceral state that leads a person to take actions he or she would normally deem unacceptable.” Simply, since “losses loom larger than gains” (Kahneman & Tversky, 1979), it seems only natural that people will be motivated to engage more frequently in dishonest behaviour to avoid such losses than to approach such gains.


However, there are a number of additional, non-mutually exclusive perspectives on this observed effect. Cameron and Miller (2008) believe their results can be comprehended within the original risk framework of Kahneman and Tversky’s (1979) Prospect Theory. If, for instance, an individual’s utility is derived from both pecuniary and non-pecuniary elements, dishonesty can be thought of as a somewhat ‘risky’ decision because detection could potentially have the consequence of being socially sanctioned by the experimenter and other participants, resulting in emotions such as shame, embarrassment, or guilt. Dishonesty could also result in some sort of financial risk, such as the forfeiture of any ill-gotten gains from the experiment, or exclusion from future sessions. Hence, from this perspective, that loss-framed individuals are more dishonest is simply a behaviour that is consistent with the risk-seeking attitude of individuals that are loss averse. Furthermore, it could be the case that loss aversion induces a sentiment of entitledness which provides an individual with a suitable rationalisation for behaving dishonestly in order to protect their entitlement (Cameron and Miller, 2009). Such an individual may proceed to act dishonestly without necessarily altering their self-concept because this rationalisation of entitlement should reduce the moral cost of acting dishonestly. Note, also, that the propensity to rationalise something can be considered a function of the motivation to do so (Rick & Loewenstein, 2008), so it is intuitive that hypermotivated individuals should be able to persuade themselves that acting dishonestly is acceptable in such circumstances. Finally, Grolleau et al. (2016) propose that behaving dishonestly in order to avoid a loss can also be justified from the perspective of a third party, and this, consequently, makes unethical behaviour more acceptable to a loss-focused individual. Kahneman et al.’s (1986) aforementioned finding suggests that third parties do indeed judge unethical behaviour more favourably if it arises in the context of loss-avoidance. Hence, this suggests that the social norm of not being dishonest is more flexible in the loss domain, allowing loss-focused participants to pursue their self-interest basically free from the special opprobrium that society reserves for cheaters that seek to increase what they already have. Grolleau et al.’s (2016) perspective reinforces the idea that the moral cost of dishonesty should be lower for loss-framed individuals. Despite the initial consensus of the literature discussed above, and the clear rationale behind the results, so little research has been done on this topic that it is important to perform both a confirmatory role as well as an exploratory one – especially given the presently ongoing debate regarding the replicability of laboratory experiments in economics (Camerer et al., 2016). Therefore, to this end, the following hypothesis has been formulated:

Hypothesis 3: Where a situation is framed in terms of potential loss instead of potential gain, people will be more likely to act dishonestly in order to avoid such a loss than to achieve an equivalent gain – despite these situations being identical in terms of expected material utility.

2.3 – CATEGORISATION, ATTENTION TO STANDARDS, AND THEIR INTERACTION WITH FRAMING

Finally, and turning to the most crucial element of the present research, Mazar et al. (2008) also propose a pair of mechanisms that are thought to affect the role of one’s self-concept in the decision-making process. Specifically, these mechanisms are ‘categorisation’ and ‘attention to standards.’ Categorisation denotes the capacity for people to categorise their actions in terms that are compatible with their self-concept and, hence, rationalise their behaviour. Consequently, if a certain unethical behaviour is malleable in terms of its categorisation, either because of the situational or social context that the decision-maker faces, such an individual will be able to reinterpret their behaviour in a manner that allows them to proceed without necessarily having to update their self-concept (Gur & Sackeim, 1979). Related to Mazar et al.’s (2008) mechanism of categorisation is the nascent stream of literature in Behavioural Economics and Social Psychology that suggests that people possess an internal process that allows them to interpret morally ambiguous situations in a self-serving manner (see Epley & Gilovich (2016) for the mechanics underlying motivated reasoning; and Gino, Norton & Weber (2016) for an application to the moral context). Interestingly, categorisation of a dishonest act can occur either pre- or post-violation of one’s internal standards of honesty, alleviating the expected or experienced dissonance related to such a transgression (Jacobsen et al., 2017). Naturally, though, Mazar et al. (2008) suggest that there is an inherent limit to categorisation malleability – that is, the truth can be stretched up to a certain point, but beyond that point an unethical behaviour is unquestionably immoral and would require that a person updates their self-concept negatively.

Mazar et al.’s second mechanism – that of ‘attention to standards’ – refers to the idea that attending to one’s moral standards can make dishonest actions less likely because the dishonest act is more likely to be reflected negatively in one’s self-concept post-transgression. Related to this mechanism is Langer’s (1989) concept of mindlessness, which proposes that, although people are capable of behaving mindfully, approaching a situation mindlessly can result in forgetting one’s ethical standards in the moment. Conversely, if the decision-making context makes an individual’s standards for honesty more immediately accessible, such an individual should be cognisant of a stricter definition of honest and dishonest behaviour in the moment of temptation. Ultimately, this mechanism is effective because it forces people to confront the ethicality of their behaviour, thereby increasing the sensitivity of one’s self-concept to dishonesty. Moreover, the attention-to-standards mechanism has an effect on moral decision-making because it reduces an individual’s propensity to categorise an unethical act as permissible conduct. Hence, Mazar et al.’s (2008) self-concept-influencing mechanisms are not mutually exclusive. It is also worth noting that the attention-to-standards mechanism is somewhat related to the idea that individuals are boundedly ethical, which asserts that some people simply are unaware of the ethical norms governing appropriate conduct and, therefore, unconsciously behave unethically (Chugh, Bazerman & Banaji, 2005). Hence, in steering such boundedly ethical individuals’ attention towards standards of honesty, such individuals should reflect on their behaviour and the prevalence of dishonesty should decline.

Note, moreover, that the notion that situational factors can influence behaviour is one of the foundational assumptions of the field of Social Psychology (for a comprehensive analysis of the situational component of behaviour see Ross & Nisbett, 1991), and there is already a significant body of empirical evidence suggesting that an individual’s decision-making context can legitimately affect moral conduct. Gino and Ariely (2016), for instance, detail Zimbardo’s (1969; also discussed in Zimbardo, 2007) and Milgram’s (1974) now-infamous experiments that offer situationist perspectives on human morality. Zimbardo’s (1969) research – commonly referred to as the Stanford Prison Experiment – which assigned Stanford undergraduates to the role of either a prison guard or a prisoner, highlights the incredible power of situational factors on the behaviour of otherwise moral human beings. Zimbardo’s (1969) experiment was abruptly halted because the guards’ behaviour became increasingly sadistic and the prisoners began suffering from depression and extreme stress. Milgram’s (1974) experiment involved participants adopting the role of a ‘teacher’ and administering an electrical shock to a confederate experimental assistant posing as the ‘learner’ each time they made an error on some exercise, where the shock progressively increased in intensity. Milgram (1974) found that over 60% of the participants continued to administer the shock up until its maximum voltage, despite it being clearly marked as dangerous. Obviously, the situational factors at play in the prior illustrations are hierarchism and incrementalism, respectively, but other, more subtle factors have also been found to either reduce or produce unethical behaviour.

Mazar et al. (2008), for instance, demonstrate that situational influences can have a pronounced effect on dishonesty by exposing some of their participants to ‘moral’ stimuli (e.g., by asking them to recall the 10 Commandments (Experiment 1) or agree to their university’s Honour Code (Experiment 2)). Indeed, such attention-to-standards mechanisms statistically significantly reduced dishonesty relative to participants that received neutral stimuli, because they reminded participants of their internal standards of honesty in the “moment of temptation.” Overall, then, the evidence presented above suggests that humankind’s morality can be both dynamic and malleable according to the situational context in which the decision-maker is placed.

It follows, therefore, that in the present research the prevalence of dishonesty should be significantly reduced when our participants’ attention is implicitly (e.g. recalling the Ten Commandments) or explicitly (e.g. agreeing to a Code of Conduct for the experiment) steered towards a set of moral standards, relative to when no such interventions are implemented. However, it is expected that the former will be somewhat less effective than the latter, since the latter should make appropriate conduct far less ambiguous for the current research’s participants. The current study also extends Mazar et al.’s (2008) research in a valuable way. Mazar et al.’s (2008) research only assesses the efficacy of increasing attention to standards of morality in a context where subjects have the opportunity to earn, rather than lose, some payoff. Therefore, it is interesting to explore whether supraliminal situational cues such as those detailed above can prove (equally) successful in the context of loss aversion. Importantly, this is the first attempt to explore such an interaction to date.

Finally, whilst there is not currently any literature that directly supports the following prediction, it is expected that the attention-to-standards mechanism will prove less effective at reducing dishonesty, or at worst ineffective, in the context of potential losses, owing to the sole difference between conditions being the asymmetry in valuation proposed by Prospect Theory (Kahneman & Tversky, 1979). Primarily, individuals might still be able to self-servingly categorise their action as not dishonest, despite having their attention steered to standards of honesty, because loss aversion places people in a state of hypermotivation (Rick & Loewenstein, 2008) that provides a powerful rationalisation for behaving dishonestly. Additionally, the attention-to-standards mechanism might prove ineffective since the psychological cost of losing one’s endowment might actually trump the internal cost of acting dishonestly, and so the objective of maintaining a positive self-concept might be overridden by the competing, self-interested motivation to circumvent a loss. Finally, then, the prior analysis gives rise to the following set of hypotheses:

Hypothesis 4a: If people’s attention is steered either explicitly or implicitly to their own standards of honesty, the prevalence of dishonesty should be reduced relative to individuals that are not mindful of these standards.

Hypothesis 4b: The implicit situational cue should be less effective than an explicit situational cue.

Hypothesis 4c: The attention-to-standards mechanism should be less effective for loss-framed individuals, ceteris paribus.

SECTION III–EXPERIMENTAL DESIGN AND METHODOLOGY

With the intention of investigating Hypotheses 1-4 outlined in the prior section, the current study implements an experiment via the online platform Qualtrics. Primarily, the experiment employs research methodologies adapted from Schindler and Pfattheicher (Experiment 1; 2017) and Mazar et al. (Experiments 1 & 2; 2008). Experiment 1 of Schindler and Pfattheicher (2017) employs a die-roll task (cf. Fischbacher & Föllmi-Heusi, 2013) to assess the prevalence of dishonest behaviour between treatments that differ only in their framing. The chief benefit of adopting a paradigm similar to Fischbacher & Föllmi-Heusi (2013) is that the expected outcomes of the die roll can serve as a statistical baseline of honest behaviour. If, for instance, a fair six-sided die is rolled only once, the probability that any one number is obtained is 1/6. Hence, if the results of an experiment indicate that a certain number is reported more frequently than this statistical baseline, this could indicate that dishonest behaviour has occurred. Naturally, however, such a methodology permits the detection of dishonest behaviour only in the aggregate, by comparing the experimental results to the counterfactual statistical distribution of the task's expected outcomes. It is for this reason that such a set-up is commonly referred to as a population-inferred cheating task (Jacobsen, 2017). It would, of course, be preferable to use an individually-inferred cheating task (e.g., Mazar et al., 2008) in order to detect dishonest behaviour at the individual level and, therefore, estimate more reliably the prevalence and magnitude of dishonesty across treatments. Yet, within the field of Experimental Economics, it is considered methodologically unacceptable to deceive one's participants, and employing an individually-inferred rather than a population-inferred cheating task requires deception insofar as the experimenter must eventually uncover the actual performance of a participant in order to detect dishonesty, even after informing them that this would not be the case. Hence, it is appropriate to sacrifice some of the precision of an individually-inferred cheating task in order to maintain the standards of the Experimental Economics field. Furthermore, adopting a probability-based task similar to that of Fischbacher & Föllmi-Heusi (2013) is also beneficial because it should make it absolutely clear to the experiment's participants that the experimenter cannot detect dishonest behaviour, since their reported performance cannot be linked directly to the probabilistic outcome they actually observed, hopefully facilitating dishonest behaviour.
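To make the population-inferred logic concrete, the sketch below compares a set of reported die outcomes against the fair-die baseline of 1/6 per face using a chi-square goodness-of-fit test. It is illustrative only: the counts and variable names are hypothetical and this is not the thesis's analysis code.

    # Illustrative sketch: detecting dishonesty only in the aggregate by comparing
    # reported outcomes against the fair-die baseline (hypothetical counts).
    from scipy import stats

    reported_counts = [20, 22, 19, 21, 23, 45]   # hypothetical reports of faces 1-6
    n = sum(reported_counts)
    expected = [n / 6] * 6                        # fair die: each face expected n/6 times

    chi2, p_value = stats.chisquare(reported_counts, f_exp=expected)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
    # A small p-value indicates that the reports deviate from the honest baseline
    # in aggregate; no individual participant can be identified as dishonest.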

Like Schindler and Pfattheicher (Experiment 1; 2017), the current study employs a repeated die-roll task to assess the prevalence of dishonest behaviour, instead of the one-shot die-roll task originated by Fischbacher & Föllmi-Heusi (2013). Specifically, participants are asked to report the number of rolled '6s' after having rolled a fair six-sided die twelve times by visiting the following link: https://www.random.org/dice/?num=12 and clicking 'Roll Again'. In so doing, the study's participants could earn themselves a certain benefit. Since the likelihood of obtaining a '6' on each roll is simply 1/6, the distribution of the number of 6s in twelve rolls, with an expected value of two (12 × 1/6 = 2), serves as the 'honestly-reported' statistical baseline with which to compare the participants' reported outcomes. Principally, the benefit of using a multiple-roll task is that each participant has the opportunity to cheat more than once and, hence, there is a continuous cheating range. Compared to the conventional Fischbacher & Föllmi-Heusi (2013) paradigm, or a dichotomous one-shot coin flip (e.g., Bucciol & Piovesan, 2011; Abeler et al., 2014), employing this methodology should also provide a more precise estimate of actual cheating behaviour insofar as the sample's confidence intervals should be tighter (Balasubramanian et al., 2017), while also requiring fewer participants to ensure sufficient statistical power to identify actual cheating behaviour (Schindler & Pfattheicher, 2017), since it is incredibly improbable to receive an 'extreme' outcome (e.g., twelve 6s).
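As a quick check on this baseline, the short sketch below (illustrative only, not part of the original design materials) computes the Binomial(12, 1/6) distribution that honest reporting would generate, together with the probability of an 'extreme' outcome such as twelve 6s.

    # Illustrative sketch: the honest-reporting baseline for the twelve-roll task.
    # The number of 6s in twelve fair rolls follows a Binomial(12, 1/6) distribution.
    from math import comb

    n, p = 12, 1 / 6
    pmf = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

    print(f"Expected number of 6s : {n * p:.2f}")                        # 2.00
    print(f"P(exactly two 6s)     : {pmf[2]:.3f}")                       # roughly 0.30
    print(f"P(six or more 6s)     : {sum(pmf[k] for k in range(6, n + 1)):.5f}")
    print(f"P(twelve 6s)          : {pmf[12]:.1e}")                      # roughly 5e-10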

Participants were randomly assigned to either a 'Loss' or a 'Gain' condition, which were identical in terms of material payoff but differed in terms of their instructions. Participants in both the Loss and Gain conditions were told that they could potentially earn up to £12, depending entirely on the decision they made in the multiple-roll dice-based task delineated above, provided they were also selected as one of seven individuals in the final pay-out lottery. However, participants in the Gain condition were informed that they started the decision-making part of the experiment with an initial endowment of £0 and that, depending on the number of reported 6s received during the multiple-roll dice task, it would be possible to gain up to the maximum amount of £12. Specifically, they would earn £1 for every '6' they reported to have received out of the twelve rolls. Participants in the Loss condition, however, were informed that they started this part of the experiment with an initial endowment of £12 and that it was possible to lose none, some, or all of this endowment during the multiple-roll task, where for each die that displayed a number other than '6' they would lose £1 from their initial endowment, up to the maximum of a £12 loss. Hence, in both conditions participants faced the same incentives to act dishonestly by reporting to have obtained a number of 6s greater than the actual outcome of the repeated rolls. Importantly, then, the sole difference between these conditions is the linguistic framing of the experimental procedure: in the Loss frame the reference point has been manipulated because the instructions express that the money is provided to the participant ex ante (in anticipated terms), whereas in the Gain frame it is provided ex post, implying a potential loss or a potential gain, respectively (the exact wording of the instructions in the Loss and Gain frames is included in the Appendix). Because the Loss and Gain conditions are otherwise transparently identical, employing a subtle manipulation of the frame of the decision-making context is an appropriate means of inducing a feeling of loss aversion, which should subsequently affect people's judgement and decisions, without actually modifying the underlying monetary incentives. Note that in both the Loss and Gain conditions an equivalent example was provided to help participants understand the task. Participants were then asked to follow the link provided and state their response in a space next to a sentence reading "Please report the total number of rolled 6s". Finally, in order to control for any inaccuracy in understanding of the experiment's instructions and payment structure, participants were also asked to report the amount of money that they had lost or gained in the Loss and Gain conditions, respectively. Participants were told that, if their final earnings according to this supplementary question did not correspond precisely with the reported number of 6s, they would be excluded both from the data analysis and the payment lottery.
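The material equivalence of the two frames, and the consistency check on the supplementary earnings question, can be summarised in the following sketch. It is an assumed illustration of the payoff rules described above, not code used in the experiment, and the function names are hypothetical.

    # Illustrative sketch of the payoff rules described above (not the experiment's code).
    ROLLS = 12
    ENDOWMENT = 12  # pounds; Loss-frame starting endowment

    def payoff_gain(reported_sixes: int) -> int:
        # Gain frame: start at £0 and gain £1 per reported '6'.
        return reported_sixes

    def payoff_loss(reported_sixes: int) -> int:
        # Loss frame: start at £12 and lose £1 per die showing a number other than '6'.
        return ENDOWMENT - (ROLLS - reported_sixes)

    def consistent(frame: str, reported_sixes: int, reported_amount: int) -> bool:
        # Supplementary check: the reported amount gained (Gain frame) or lost (Loss
        # frame) must match the reported number of 6s; otherwise the participant is
        # excluded from both the analysis and the payment lottery.
        if frame == "Gain":
            return reported_amount == reported_sixes
        return reported_amount == ROLLS - reported_sixes

    # The two frames imply identical material payoffs for every possible report.
    assert all(payoff_gain(k) == payoff_loss(k) for k in range(ROLLS + 1))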

Furthermore, participants not only had the incentive to cheat but also the opportunity to do so. Indeed, a number of measures were taken to ensure that the experimental environment was conducive to dishonest behaviour. Both Becker (1968) and Mazar et al. (2008) recognise that dishonesty is unlikely to prevail when there is a considerable chance of detection. Hence, participants were instructed that https://www.random.org/dice/?num=12 was a third-party link, making it clear that it would not be possible for the experimenters to monitor the true outcome of the rolls. Participants in both the Loss and Gain conditions could, therefore, self-report a dishonest outcome free of any fear of detection. The current research's experimental design thus captures a setting in which people have the ability to self-report their performance and in which monitoring possibilities are absent or limited, implying that the likelihood of detection is zero or trivial. Additionally, the present research adopts a technique described in Shalvi et al. (2011) and employed in Shalvi et al. (2012) that has been shown to increase dishonesty. It involves notifying participants that they are allowed to click 'Roll Again' as many times as they would like, to verify that the website does indeed produce fair outcomes, but that they must report only the outcome of the first roll. Clicking 'Roll Again', however, actually allows participants to observe counterfactuals and hence increases the justifiability of their dishonesty.

In addition to the baseline Loss and Gain conditions, participants could be randomly assigned to one of the two attention-to-standards conditions. According to the Self-Concept Maintenance Theory of Dishonesty (Mazar et al., 2008) outlined in Section II, placing individuals into a 'moral' mind-set before having them complete some task in which there is an opportunity to act unethically should significantly reduce the prevalence of unethical behaviour, because doing so reminds them of their internalised moral standards and, hence, would require them to update their self-concept if they acted in a manner discordant with such standards. At first glance, it seems counterintuitive that reminding an individual of the immorality of dishonesty should exhibit this effect since, after all, an injunction on such behaviour is almost universally central to one's self-conception as a moral person. The idea behind Mazar et al.'s (2008) self-concept maintenance perspective, however, is that what matters is not whether people know that it is dishonourable to act dishonestly, but whether they are cognisant of and attend to these standards in their "moment of temptation". Indeed, since it is thought that only a subset of the numerous facets that constitute an individual's identity can be held in consciousness at any particular moment (i.e. the 'working' self-concept; Markus & Kunda, 1986), the influence of any one facet – for instance, one's moral self-concept – is merely a function of its contextual accessibility (Aquino et al., 2009). Therefore, taking into account that an individual's self-conception is somewhat malleable, situational factors, such as whether one's attention is drawn to standards of honesty, should temporarily affect the accessibility of the moral component of one's working self-concept and, subsequently, produce actual behaviour consistent with this component. Conversely, if an individual is not made mindful of moral standards in their moment of temptation, it is instead likely that the diametrically opposed objectives of self-interest and self-enhancement will be more immediately accessible, even in individuals for whom morality is usually central to their self-conception (Aquino et al., 2009). The methodologies of Experiments 1 and 2 of Mazar et al. (2008), therefore, provide the rest of the foundation of the current research's experimental set-up to test the interaction between our participants' attention to moral standards, the framing situation, and their subsequent decision-making process. Prior to the procedure delineated above, there is an ostensibly separate task in both the implicit and explicit attention-to-moral-standards treatments. In the implicit treatment, subjects are asked to write down as many of the Ten Commandments as they can remember. In the explicit treatment, subjects are asked to read and state their agreement to the experimental 'Code of Conduct'. An exact copy of the Code of Conduct for the current research has been included in the Appendix. Otherwise, the procedure and analysis proceed exactly as outlined above.
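Taken together, the design therefore comprises six between-subject cells. The sketch below is purely illustrative of the 2 (Loss vs. Gain) by 3 (No Intervention, Ten Commandments, Code of Conduct) structure; the actual random assignment was handled within the online survey platform, and the function and variable names here are hypothetical.

    # Illustrative sketch of the 2 x 3 between-subject design (not the experiment's
    # implementation; assignment in the study was performed by the survey platform).
    import itertools
    import random

    FRAMES = ("Loss", "Gain")
    INTERVENTIONS = ("No Intervention", "Ten Commandments", "Code of Conduct")
    CELLS = list(itertools.product(FRAMES, INTERVENTIONS))  # six experimental cells

    def assign_condition() -> tuple[str, str]:
        # Each participant is drawn into one of the six cells with equal probability.
        return random.choice(CELLS)

    print(assign_condition())  # e.g. ('Loss', 'Code of Conduct')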

As outlined in Section I, Experiment 1 of Mazar et al. (2008) tested whether implicitly steering people's attention towards standards of honesty by simply recalling the Ten Commandments could constrain dishonesty. Mazar et al.'s (2008) rationale for using an obviously religious code instead of a secular one is that, regardless of an individual's religious affiliation, or whether they are particularly competent at remembering the commandments, the Ten Commandments are widely known to be a set of moral standards, and that even a mere awareness of this information should be enough to produce behaviour consistent with such standards. In a related study by Aquino et al. (2009), the Ten Commandments were found to be an effective means of morally priming participants to act benevolently. Like Mazar et al. (2008), Aquino et al. (2009) suggest that the Ten Commandments can influence participants' intentions to behave morally because thinking about them activates morally relevant knowledge structures in memory, thereby increasing the accessibility of the moral facet of their self-concept. It would, of course, be better to use an equivalent non-religious set of moral standards instead of the Ten Commandments in order to avoid any automatic activation of our participants' intrinsic religious orientation. However, despite philosophers such as Voltaire (1727) and Rousseau (1762) rejecting the notion that morality cannot exist outside the confines of religious tradition, it is unlikely that many of our participants would be able to recite these or other authors' perspectives on secular morality. Furthermore, priming participants either subliminally or supraliminally using religious content has been shown to produce moral behaviour irrespective of religious affiliation in a number of different studies. It has been shown, for instance, that implicitly priming individuals with God concepts using the scrambled-sentence paradigm of Srull and Wyer (1979) can increase cooperative behaviour between anonymous strangers (Shariff & Norenzayan, 2007) and that this increase in prosocial behaviour was independent of their participants' religiosity. Randolph-Seng and Nielsen (2007), in a study that is conceptually similar to that of Shariff and Norenzayan (2007), found that not a single participant who received the religious prime could be classified as a cheater, in contrast to those receiving neutral or sport-based primes, and, further, that the interaction between the prime and dishonesty was completely unrelated to religiosity. Employing the Ten Commandments recall task as a method to evoke moral sentiment, therefore, represents a satisfactory trade-off between accessibility and the risk of activating intrinsic religiosity. Additionally, adopting this approach facilitates comparison to the results of Mazar et al. (2008), whilst extending the analysis to a context in which subjects have the opportunity to lose, rather than earn, some payoff.

Alternatively, a 'Code of Conduct' can serve as an explicit reminder of an institution's standards for appropriate conduct by straightforwardly stipulating the rules that an individual must abide by. In this respect the Code of Conduct treatment differs from the Ten Commandments treatment insofar as, by explicitly defining appropriate conduct, the study's participants should have no doubt as to the ethical standards that they should adhere to, and, thus, it becomes increasingly difficult for potential cheaters to rationalise and justify dishonest behaviour. Inspiration for the Code of Conduct treatment is taken from Experiment 2 of Mazar et al. (2008), in which the authors employ a university-specific type of code called an 'honour code' in order to influence their participants' decision to act dishonestly when given the opportunity to do so. At Yale University, where Mazar et al.'s research was conducted, and at other academic institutions around the world, an honour code refers to a formal document in which the rubrics and ethical principles intended to steer the actions of a particular academic community are outlined. Often, a student's entry into an honour system requires a categorical declaration that they are committed to acting honestly when preparing an essay, laboratory report, or some other written assignment, or when participating in an examination.

Reviewing a decade's worth of research in the field of academic dishonesty, McCabe, Trevino and Butterfield (2001) point to evidence that these so-called 'honour codes' are associated with reduced levels of cheating (for more specific investigations see Bowers, 1964; McCabe & Trevino, 1993; and McCabe & Trevino, 1997). In particular, they find that the presence of an honour code can decrease cheating in examinations and other academic work by up to 38 percent. Theoretically, it is suggested that the adoption of a codified set of rules can be effective in decreasing academic dishonesty by directly altering students' perceptions of what constitutes academic integrity (McCabe et al., 1999). McCabe et al. (1999) also suggest that the efficacy of an honour code is underscored by the fact that students consequently feel that they are part of a moral community, the integrity of which they are directly responsible for maintaining.

Interestingly, the efficacy of an explicit set of ethical standards in reducing dishonest behaviour has been demonstrated in domains other than the academic. For instance, McCabe et al. (1996) find that business organisations that have adopted a 'Code of Ethics' report a statistically significant reduction in self-declared unethical behaviour among their staff compared to those that have not adopted one. Moreover, the effect of steering people's attention to a Code of Conduct has also been well demonstrated in laboratory settings. The study of Mazar et al. (2008) mentioned above found that asking participants to sign an honour code statement reading "I understand that this short survey falls under MIT's [Yale's] honor system" before being given the opportunity to behave dishonestly completely eliminated cheating, insofar as their participants' performance was indistinguishable from the control condition in which cheating was impossible. Likewise, Shu et al. (2011) found that reading an honour code can significantly reduce an individual's tendency to act dishonestly if presented the opportunity, but that it does not eradicate dishonesty completely (cf. Mazar et al., 2008). Hence, since the
