Evil, or weird? : an investigation of the moral stereotype of scientists


Evil, or weird? An investigation of the moral stereotype of scientists

Master Thesis

Alessandro Santoro

Student Number 10865135

University of Amsterdam

2015/16

Supervisor: Dr. Bastiaan Rutjens

Second Assessor: Dr. Michiel van Elk

University of Amsterdam


Alessandro Santoro

University of Amsterdam

Recent research by Rutjens and Heine (2016) investigated the moral stereotype of scientists and found them to be associated with immoral behaviors, especially purity violations. We developed novel hypotheses that were tested across two studies, integrating the original findings with two recent lines of research: one suggesting that the intuitive associations observed in the original research might have been influenced by the weirdness of the scenarios used (Gray & Keeney, 2015), and another using the dual-process theory of morality (Greene, Nystrom, Engell, Darley, & Cohen, 2004) to investigate how cognitive reflection, as opposed to intuition, influences moral judgment. In Study 1, we did not replicate the original results, and we found scientists to be associated more with weird than with immoral behavior. In Study 2, we did not find any effect of reflection on the moral stereotype of scientists, but we did replicate the original results. Together, our studies formed an image of a scientist who is not necessarily evil, but rather perceived as weird and possibly amoral. In the discussion, we acknowledge our studies’ limitations, which in turn helped us to meaningfully interpret our results and suggest directions for future research.

Keywords: Stereotyping, Moral Foundations Theory, Cognitive Reflection

How far would a scientist go to prove a theory? In 1802, a doctor in training named Stubbins Ffirth hypothesized that yellow fever was not an infectious disease, contrary to popular belief. To prove his theory, he poured infected vomit into his open wounds. When the wounds healed without problems, he continued to experiment on himself: he dropped additional ‘fresh black vomit’ into his eye, swallowed pills made from it, and even drank it in a solution with water. Ffirth never got sick from his disturbing series of experiments, but it was not because the disease was not infectious: it just requires direct transmission into the bloodstream, usually through the bite of a mosquito (Herzig, 2005).

The stereotype of the ‘evil scientist’ is quite pervasive in popular culture. Examples of such evil or immoral scientists can be found in contemporary series such as Dexter from Dexter’s Laboratory or Rick Sanchez from Rick and Morty, characters probably inspired by real cases of unscrupulous scientists such as Stubbins Ffirth. Such a negative stereotype can have serious consequences, as people might decide to distance themselves from scientists (Cuddy, Fiske, & Glick, 2008). This is even more important considering that a recent report from the European Commission (Hazelkorn et al., 2015) has shown a lack of interest among people in pursuing science-related careers. Additionally, negative perceptions of scientists can influence the extent to which people adhere to their recommendations. This influence is perfectly illustrated by the discrepancy between popular opinion and scientific evidence regarding genetically modified organisms (GMOs). Regardless of the available scientific evidence in favor of GMOs, public opposition remains strong due to several factors related to the way GMOs are perceived as dangerous and immoral (Blancke, Van Breusegem, De Jaeger, Braeckman, & Van Montagu, 2015). In turn, these negative representations have a large impact on both national and international development of regulatory frameworks concerning the import and cultivation of GM crops. It therefore seems important to directly address the issue of whether scientists are in fact perceived to be as immoral as they are depicted in popular culture.

The stereotype of the immoral scientist was the main focus of a recent study by Rutjens and Heine (2016), which is central to the current research. They investigated this moral stereotype by looking at the intuitive associations that people hold towards the morality of scientists. Indeed, they found that scientists were


intuitively associated with a variety of moral violations (especially purity violations), as compared to various control targets. The current project aimed to replicate and extend their findings in order to understand the true nature of such a negative stereotype, which in our research was operationalized as intuitive associations (Study 1) and as explicit judgments (Study 2) of scientists’ morality.

We developed novel hypotheses that were tested across two studies, integrating the original findings with recent lines of research: one suggesting that the intuitive associations observed in the original research might have been influenced by the weirdness of the scenarios used (Gray & Keeney, 2015), and another using the dual-process theory of morality (Greene et al., 2004) to investigate how cognitive reflection, as opposed to intuition, influences moral judgment. In the first study, we looked at the nature of the intuitive associations observed by Rutjens and Heine (2016), investigating whether scientists are perceived as immoral or rather as weird. In the second study, we tried to replicate the explicit associations observed by Rutjens and Heine (2016), while exploring whether these can be influenced by an induced and/or dispositional reflective state.

Study 1 – The moral stereotype of scientists: intuitive associations

Rutjens and Heine (2016) based their investigation of the perception of scientists’ morality on the Moral Foundations Theory (MFT), which is a central theoretical framework in morality research. MFT is rooted in anthropological research and aims to understand why, even though morality differs across cultures, recurrent themes and similarities can also be found. It was shaped into its present form by Haidt and Joseph (2004), who argued that morality is composed of universal and innate moral foundations.

A metaphor used to explain this concept is that morality is like a human tongue with its taste receptors (Haidt, 2012). In the same way we all have the same receptors but different tastes in food, MFT argues that we also have the same cognitive modules, or foundations, but different ‘tastes’ in morality. The extent to which these foundations are cultivated across cultures makes them more or less sensitive, which then results in different patterns of morality. Since its formulation, MFT has been used to account for a variety of

phenomena, such as differences in moral judgments among various cultures or political ideologies (for a review of the existing empirical findings, see Graham et al., 2012). MFT research has identified (at least) five moral foundations – Care, Fairness, Loyalty, Authority, and Purity – and has used the Moral Foundations Questionnaire (MFQ; Graham, Haidt, & Nosek, 2009) to assess the extent to which they are endorsed by individuals. Additionally, research in this field uses scenarios that describe violations specific to each moral foundation (Davies, Sibley, & Liu, 2014). In their research, Rutjens and Heine (2016) investigated how scientists’ morality is perceived using both scenarios taken from the MFT literature and the MFQ.

Our first study draws on studies 1–7 from Rutjens and Heine’s research (2016), in which they examined the stereotype of scientists’ morality by looking at the intuitive associations people hold towards scientists. More specifically, they used a design that combined moral scenarios with the conjunction fallacy (Tversky & Kahneman, 1983), a reasoning error that occurs when specific conditions are assumed to be more likely than more general ones (described below). Rutjens and Heine (2016) initially presented participants with a scenario describing a particular moral violation, such as the following:

On the way home from work, Jack decided to stop at the butcher shop to pick up something for dinner. He decided to roast a whole chicken. He got home, unwrapped the chicken carcass, and decided to make love to it. He used a condom, and fully sterilized the carcass when he was finished. He then roasted the chicken and ate it for dinner alongside a nice glass of Chardonnay. (Supplements, p. 1)

After reading the scenario, participants had to indicate which option was more probable: A) Jack is a sports fan or B) Jack is a sports fan and a [condition target]. Depending on the condition the participant was in, the target of option B would be either a scientist or one of several control targets (e.g., an atheist, a Muslim). Since it is impossible for a subcategory (option B) to be more likely than the whole category (option A), selecting option B would be a reasoning error (i.e., the conjunction fallacy). The likelihood of making such an


error is based on the participant’s intuitive associations between the description of the person in the scenario and the target selected. Therefore, these fallacies can be adopted as a measure of people’s moral stereotype towards the target.

To distinguish which moral foundations are associated with scientists the most (or the least), Rutjens and Heine (2016) used different moral scenarios taken from the MFT literature, with each scenario depicting a violation of a particular moral foundation. They found that, except for fairness and care violations, scientists were consistently associated with immoral behavior, in particular with violations of purity (e.g., a person making love to a dead chicken). However, it has recently been argued that such scenarios are not just immoral, but also very weird. Research has shown that impurity scenarios are considered both weirder and less severe than other types of scenarios, raising doubts about a possible sampling bias in MFT research (Gray & Keeney, 2015). Accordingly, it is possible that scientists are not necessarily associated with immorality, but rather with unusual behavior that may or may not be (im)moral. Given the discussed importance of negative public perceptions of scientists’ morality, it is important to determine whether they are truly perceived as immoral or only as capable of odd behavior.

Study 1 used the same design as Rutjens and Heine (studies 1–7; 2016), but extended their findings with a different set of moral scenarios designed to isolate impurity, weirdness, and severity. In doing so, we aimed to shed more light on the nature of the intuitive associations that people hold towards scientists.

Study 2 – The moral stereotype of scientists: explicit judgments

After establishing the role of morality in the intuitive associations with scientists in Study 1, Study 2 investigated the role of cognitive reflection in making explicit morality judgments about scientists. To this end, we draw on a dual-process theory of morality which has been used to test how cognitive reflection (as opposed to intuition) influences moral judgment. According to the dual-process theory, people generating a moral judgment experience a conflict between two distinct psychological/neural systems: an intuitive, automatic and emotionally-driven system, and a more reflective, controlled and reason-driven system (Greene et al., 2004). In accordance with this model, research has shown that deontological judgments (i.e., judgments concerned with rights and duties) are associated with intuitive responses, whereas utilitarian judgments (i.e., judgments concerned with maximizing utility) are associated with more deliberate responses (for an overview, see Paxton, Bruni, & Greene, 2014). To test this theory, researchers have often employed the Cognitive Reflection Test (CRT; Frederick, 2005), a test designed to assess participants’ ability to suppress intuitive but incorrect answers in favor of a deliberative and correct answer. An example item is the following: “A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?”. Even though one would intuitively think the ball costs $0.10, people who consider the problem more thoughtfully reach the correct answer, $0.05 (as 0.05 + 1.05 = 1.10).
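The arithmetic behind the correct answer amounts to solving two linear equations; a purely illustrative sketch (not part of the CRT materials):

```python
# Bat-and-ball item: bat + ball = total, and bat = ball + difference.
# Substituting gives (ball + difference) + ball = total,
# so ball = (total - difference) / 2.
def solve_bat_and_ball(total=1.10, difference=1.00):
    ball = (total - difference) / 2
    bat = ball + difference
    return bat, ball

bat, ball = solve_bat_and_ball()
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
```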

Interesting results have been obtained in research combining the CRT with moral scenarios. First, a robust positive correlation was found between the number of correct answers and utilitarian judgments (Hardman, 2008; Santoro, 2014), as well as a strong negative correlation between CRT scores and judgments of moral wrongness (Pennycook, Cheyne, Barr, Koehler, & Fugelsang, 2014). Second, the CRT has been used to induce a reflective state and investigate its effects on moral judgment. For instance, when participants completed the CRT either before or after rating the wrongness of moral dilemmas, those in the CRT-first condition judged the scenarios as more acceptable than those in the dilemmas-first condition, offering further support for the dual-process theory of morality (Paxton, Ungar, & Greene, 2012).

Study 2 combined this paradigm with the design of study 8 from Rutjens and Heine (2016), which directly explored which foundations are most strongly associated with scientists. We investigated whether participants who complete the CRT before making their judgments (consequently entering a reflective state) rate scientists’ morality differently from participants who complete it afterwards, and we also investigated whether scores on the CRT (a measure of dispositional reflectiveness) correlate with a specific pattern of judgments. Although the CRT has been used in the past to examine the effects of reflection on moral judgments (e.g., Paxton et al., 2012), it has never been employed to study the effects of reflection on a moral stereotype; thus this study


was a first step for this type of investigation. Practically, it is important to know whether reflection can influence the perception of scientists’ morality, as it might help people be more thoughtful about scientists’ suggestions. More generally, it is also important to see whether reflection can influence a moral stereotype similarly to the way it affects moral judgments.

Key Research Questions

In summary, our research draws on the work of Rutjens and Heine (2016) and aims to integrate it with the two other lines of research discussed above: the first investigating how the weirdness and severity of a scenario affect moral judgment, and the second looking at how (dispositional or induced) reflection influences moral judgment. To this end, two research questions were tested in two studies:

Study 1

Are the intuitive associations between scientists and immorality due to the immorality depicted in impurity scenarios, or rather due to the weirdness and/or severity of the scenarios?

Study 2

Does cognitive reflection influence the perceptions of scientists’ morality?

Because the research questions combine novel lines of research, making specific predictions was complex. Pertaining to the first research question, it is possible that for the associations to occur, perceived impurity is essential (i.e., scientists being considered immoral), that perceived weirdness is essential (i.e., scientists being considered odd people), or that both are necessary. However, Rutjens and Heine (2016) found that scientists are associated not only with purity violations, but also with other moral violations which might be perceived as less weird. Additionally, taking into account the results of the original research, we also reasoned that scientists should not be associated strongly with severely immoral behaviors (such as rape), even though this prediction was more exploratory. For these reasons, we predicted that in Study 1 scientists would be associated with impurity as well as with weirdness, expecting the strongest associations for scenarios that are both, and the weakest associations for scenarios that are neither and/or highly severe.

The second research question is more exploratory in nature, since the effect of reflection on moral stereotypes has not been investigated yet. However, reflection has been associated with utilitarian judgments, and this might lead to a less harsh perception of scientists’ morality if they are perceived to be more concerned with maximizing utility (e.g., GMOs can be good) than with what is right (e.g., GMOs are unnatural). If this is the case, it could be argued that both induced and dispositional reflection would result in less harsh judgments of scientists’ morality. We tested this idea in Study 2.

Study 1 Pilot

Initially, 25 moral scenarios were piloted (refer to Appendix A for a full description of the pilot’s methods and results) and categorized according to the following criteria: general immorality, impurity, severity, and weirdness. The ratings obtained in the pilot were then used to select the five scenarios to use in Study 1, one for each of the following categories:

1. Impure + weird (+ not severe);
2. Impure + severe (+ not weird);
3. Not impure + weird (+ not severe);
4. Not impure + severe (+ not weird);
5. Impure + not weird and not severe.

Except for some of the type 3 scenarios (which are not inherently immoral), the scenarios used were based on moral vignettes previously proposed and validated (e.g., Clifford, Iyengar, Cabeza, & Sinnott-Armstrong, 2015; Graham & Haidt, 2012; Gray & Keeney, 2015). The means and standard deviations of the ratings for each of the piloted moral scenarios were used to determine which were the best fit for our five categories. Except for the type 3 scenario (i.e., only weird), we could not find ones that perfectly matched our categories, due to the limited number of scenarios that we were able to pilot. Therefore, we chose the scenarios in which the relations between the ratings of interest (e.g., weird and impure but not severe) were best


represented, even though it meant using scenarios that were suboptimal for the respective category. For example, for the type 5 scenario (i.e., only impure), we could not find a scenario that was high in impurity but low in severity and weirdness, and thus we chose one that was rated lower on severity and weirdness than it was on impurity, despite its impurity rating being quite low. All the means and standard deviations for the scenarios used in Study 1 are shown in Table 1 below, whereas Appendix A reports all of the pilot’s results.

The ratings we obtained in our pilot study were consistent with those available in the literature, increasing their reliability:

1. The first scenario (necrobestiality) had an average rating of 3.24 (SD = 1.67) in immorality and of 3.88 (SD = 1.48) in impurity, which is similar to the results of Clifford and colleagues (2015), who found an average rating of 3 (out of 5) in immorality and where 88% of participants rated the scenario as impure.

2. The second scenario (rape) had an average rating of 4.76 (SD = 0.52) in severity and of 3.82 (SD = 1.37) in weirdness, which is similar to that observed by Gray and Keeney (2015), who found the scenario to be rated 7 (out of 7) in severity and 4.3 (out of 7) in weirdness.

3. The third scenario (man with hamster) was created by us, and thus we cannot compare it with previous ratings in the literature.

4. The fourth scenario (kicking a dog) had an average rating of 4.40 (SD = 0.81) in severity and of 3.62 (SD = 1.34) in weirdness, which is similar to that observed by Gray and Keeney (2015), who found the scenario to be rated 6 (out of 7) in severity and 4.5 (out of 7) in weirdness.

5. The fifth scenario (sex for drink) had an average rating of 2.82 (SD = 1.52) in immorality and of 3.10 (SD = 1.46) in impurity, which is similar to the results of Clifford and colleagues (2015), who had a comparable scenario – “A homosexual in a gay bar offering sex to anyone who buys him a drink” – that received an average rating of 2.6 (out of 5) in immorality, while 73% of participants rated the scenario as impure.

Even though these scenarios were the best fit for our categories out of the ones we piloted, they were not a perfect match. The first category was [impure, weird, not severe], but the scenario was also rated relatively high in severity (M = 3.70, SD = 1.40). The second category was [impure, severe, not weird], but the scenario was also rated relatively high in weirdness (M = 3.82, SD = 1.37). The third category was [weird, not impure, not severe], and the scenario used fit the category well. The fourth category was [severe, not impure, not weird], but the scenario was also rated relatively high in impurity (M = 3.90, SD = 1.36) and weirdness (M = 3.62, SD = 1.34). The fifth category was [impure, not weird, not severe], but the ratings of the scenario used were quite similar: 3.10 (SD = 1.46) for impurity, 2.48 (SD = 1.34) for severity, and 2.90 (SD = 1.39) for weirdness. However, the limited time and resources available for this project did not give us a chance to pilot additional scenarios in order to find better fits for our categories.

Table 1
Pilot ratings for the scenarios used in Study 1, M (SD)

Scenario                                         Immorality   Impurity     Severity     Weirdness
1. Jack has sex with a frozen dead chicken
   before cooking it for dinner.                 3.24 (1.67)  3.88 (1.48)  3.70 (1.40)  4.84 (0.37)
2. Jack forces another person to have sexual
   intercourse with him, without that person's
   consent.                                      4.76 (0.69)  4.54 (0.95)  4.76 (0.52)  3.82 (1.37)
3. Jack carries around his hamster in his
   pocket daily, regularly asking the hamster
   for advice.                                   1.32 (0.82)  1.40 (0.86)  2.00 (1.16)  4.32 (1.02)
4. Jack kicks a dog in the head, hard.           4.46 (0.81)  3.90 (1.36)  4.40 (0.81)  3.62 (1.34)
5. Jack is in a bar and offers to sleep with
   anyone who buys him a drink.                  2.82 (1.52)  3.10 (1.46)  2.48 (1.34)  2.90 (1.39)

Yet, bearing in mind that the scenarios were not perfectly representative of the respective categories, our results can still offer fruitful insights regarding our first research question.

Methods

Experimental design. In Study 1, each participant had to read a single scenario and then indicate whether the person portrayed in the scenario is more likely to be A) a sports fan or B) a sports fan and a scientist/atheist/Muslim. As previously discussed, choosing option B would indicate a reasoning error (i.e., a conjunction fallacy) due to the associations that the person intuitively holds towards the target depicted. Since the conjunction fallacy is a very brief measure, it allowed us to keep the study as short as possible (and consequently to test a high number of participants).

Even though Rutjens and Heine (2016) had three scientist targets (a scientist, a cell biologist, an experimental psychologist), we decided to use only a general scientist target, both because they did not find differences between the three scientist targets and because it allowed us to increase our statistical power. The two control groups (atheist/Muslim) are the same as those used in Rutjens and Heine (2016) and were included both to keep the design as similar as possible to the original study and because they offer a good comparison, since one (atheists) is consistently associated with moral violations while the other (Muslims) is not.

Hence, Study 1 had a between-subjects design with scenario type (1–5) and option B target (scientist/atheist/Muslim) as independent variables, and number of fallacies in each condition as dependent variable. This study was thus used to answer the first research question, testing whether people attribute impure behavior to scientists (compared to the other targets), or rather just associate them with strange behavior (comparing scientists across scenarios), and whether this attribution is affected by the severity of the scenario.

Participants. G*Power 3 (Faul, Erdfelder, Lang, & Buchner, 2007) was used to determine the number of participants needed. For each scenario condition in Study 1, at least 150 subjects were needed to detect medium effects (w = .3) with 95% power using chi-squared tests. Participants were recruited on Amazon’s Mechanical Turk (MTurk; Buhrmester, Kwang, & Gosling, 2011), which offers a diverse pool of subjects from the US population, thus avoiding culture effects. They were excluded if they failed an attention check or did not answer all the questions; this was the only exclusion criterion. A sample of 764 adults (i.e., over 18; age and gender were not recorded) took part in Study 1 in exchange for a monetary reward. Eight participants were excluded because they did not answer all the questions and two because they failed the attention check. This resulted in a total of 754 participants who were randomly assigned to one of the conditions; Table 2 below shows the specific number of participants in each condition of Study 1, which was relatively evenly distributed.

Table 2
Number of participants per condition

Scenario   Scientist   Atheist   Muslim   Total
1          56          46        48       150
2          31          64        57       152
3          46          55        50       151
4          51          47        51       149
5          49          55        48       152

Materials. In accordance with the results of the pilot, the following scenarios were used:

1. Impure + weird (+ not severe):

• “Jack has sex with a frozen dead chicken before cooking it for dinner”;

2. Impure + severe (+ not weird):

• “Jack forces another person to have sexual intercourse with him, without that person’s consent”;

3. Not impure + weird (+ not severe):

• “Jack carries around his hamster in his pocket daily, regularly asking the hamster for advice”;

4. Not impure + severe (+ not weird):

• “Jack kicks a dog in the head, hard”;

5. Impure + not weird and not severe:

• “Jack is in a bar and offers to sleep with anyone who buys him a drink”.

Besides the aforementioned focal materials of the study, both Study 1 and 2 included demographic questions regarding religious beliefs (i.e., “Do you believe in God or a higher power?”; 0 = not at all, 100 = very much), political orientation (i.e., “What is your political orientation?”; 0 = very liberal, 100 = very conservative), and nationality. Additionally, both studies contained the question “Are you a scientist, or working in academia?” (yes/no) in order to control for familiarity with science as a possible confounder.

Procedure. Upon accessing the survey from MTurk, participants were presented with a welcome page containing a short briefing. After reading the briefing, participants had to confirm they were 18 or older and that they agreed to take part in the study, and then click on “Next” to start the experiment. At this point, the website randomly assigned the participant to one of the conditions, and presented the corresponding moral scenario and target option. The participant then had to read the scenario and indicate whether it is more probable that the person in the scenario is A) a sports fan or B) a sports fan and a scientist/atheist/Muslim. After completing the main task of the study, participants had to complete an attention check to determine if they were paying attention (i.e., they were asked to select 5 on a 1–7 scale), after which they were presented with the demographic questions and the control question about familiarity with science. After these last questions, a final screen thanked the participants and gave them the chance to give feedback.
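The sample-size figure reported under Participants (medium effect w = .3, α = .05, 95% power for a chi-squared test) can be approximated without G*Power by searching the noncentral chi-squared distribution; a sketch assuming scipy is available (the degrees of freedom entered into the original G*Power calculation are not reported here, so df = 1 and df = 2 are both shown purely for illustration):

```python
from scipy.stats import chi2, ncx2

def required_n(w=0.3, df=2, alpha=0.05, power=0.95):
    """Smallest n at which a chi-squared test with Cohen's effect size w
    reaches the requested power (noncentrality lambda = n * w**2)."""
    crit = chi2.ppf(1 - alpha, df)  # critical value under the null
    n = 2
    while ncx2.sf(crit, df, n * w ** 2) < power:
        n += 1
    return n

print(required_n(df=1), required_n(df=2))
```

Since power grows with the noncentrality parameter n·w², a simple linear scan over n is sufficient; the returned n is on the order of 150–170 for these settings, consistent with the "at least 150 subjects" reported.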

Results

First of all, the participants who failed the attention check or did not complete all the questions were excluded from the analyses. Then, a dummy “Fallacies” variable was created: this variable contained the value 1 for participants who committed a fallacy and the value 0 for participants who did not, and served as the dependent variable in our analyses. To check for familiarity with science as a possible confounder, we conducted a chi-squared analysis only for the scientist condition across all scenarios, using familiarity with science and number of fallacies as variables. This analysis did not reveal any significant effect, possibly because of the small number of people who were familiar with science (17 out of 233), and we thus excluded familiarity with science from the following analyses. Next, we conducted five chi-squared analyses, one for each scenario type (1–5), using number of fallacies and target type (scientist, atheist, Muslim) as variables. The analyses revealed a significant overall difference in the number of conjunction fallacies between target conditions in all the scenario types: scenario 1 (χ2(2) = 18.34, p < .001, Cramer’s V = .35), scenario 2 (χ2(2) = 28.39, p < .001, V = .43), scenario 3 (χ2(2) = 15.02, p < .01, V = .32), scenario 4 (χ2(2) = 33.99, p < .001, V = .48), and scenario 5 (χ2(2) = 14.44, p < .01, V = .31). Subsequently, post-hoc comparisons were conducted between targets for all the scenario types. Table 3 below shows the results of these comparisons (with a Bonferroni-adjusted significance level of .01/15 ≈ .0007), together with those of the overall chi-squared tests and the percentage of fallacies in each condition. The percentages of conjunction fallacies per condition are also illustrated in Figure 1 below.
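Each of these tests is a standard 3 × 2 chi-squared test of independence on target (scientist/atheist/Muslim) × outcome (fallacy/no fallacy). As an illustrative check, the scenario 1 cell counts can be reconstructed from Tables 2 and 3 (e.g., 21.4% of the 56 scientist-condition participants ≈ 12 fallacies); a sketch assuming scipy and numpy:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Scenario 1 counts reconstructed from Tables 2 and 3:
# rows = target (scientist, atheist, Muslim), columns = (fallacy, no fallacy).
observed = np.array([
    [12, 44],  # scientist: 21.4% of 56
    [26, 20],  # atheist:   56.5% of 46
    [10, 38],  # Muslim:    20.8% of 48
])

chi2_stat, p, df, expected = chi2_contingency(observed)
n = observed.sum()
# Cramer's V for an r x c table: sqrt(chi2 / (n * (min(r, c) - 1))).
cramers_v = np.sqrt(chi2_stat / (n * (min(observed.shape) - 1)))

# Matches the reported scenario 1 result: chi2(2) = 18.34, V = .35.
print(f"chi2({df}) = {chi2_stat:.2f}, p = {p:.4f}, V = {cramers_v:.2f}")
```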


Table 3
Percentage of fallacies per condition with chi-squared test results

Scenario   Scientist   Atheist   Muslim   χ2(df = 2)   Cramer’s V
1          21.4%a      56.5%b    20.8%a   18.34*       .35
2          6.5%a       46.9%b    10.5%a   28.39*       .43
3          39.1%a      25.5%b    6.0%b    15.02*       .32
4          7.8%a       57.4%b    17.6%a   33.99*       .48
5          2.0%a       20.0%a    2.1%a    14.44*       .31

Note. * p < .01. The superscripts indicate the results of the post-hoc chi-squared tests between target conditions. Same superscripts indicate no significant differences (p > .05), whereas different ones indicate significant differences (p < .01).

As shown in Table 3, the number of conjunction fallacies in the atheist condition was significantly higher than in the scientist and Muslim conditions for scenario types 1, 2, and 4. For scenario type 3, participants in the scientist condition made significantly more fallacies than those in the atheist and Muslim conditions. For scenario type 5, no significant differences were found between targets.

Finally, a chi-squared analysis was conducted to look at differences in the number of conjunction fallacies between scenario types in the scientist condition. The analysis showed a significant overall difference (χ2(4) = 31.46, p < .001, Cramer’s V = .37), and subsequent post-hoc comparisons were conducted to look at specific differences between scenarios. These comparisons revealed that participants committed significantly more fallacies in scenario type 1 than in type 5 (χ2(1) = 9.06, p < .01, V = .29), and more fallacies in scenario type 3 than in type 2 (χ2(1) = 10.29, p < .005, V = .37), type 4 (χ2(1) = 13.50, p < .001, V = .37), and type 5 (χ2(1) = 20.40, p < .001, V = .46). While all these comparisons were significant with a Bonferroni-adjusted significance level of .005 (.05/10), none of the other comparisons were significant.
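The same reconstruction works for the scientist condition across scenario types (fallacy counts derived from Tables 2 and 3: roughly 12, 2, 18, 4, and 1 for types 1–5); a sketch, again assuming scipy, of the omnibus test and one Bonferroni-checked post-hoc comparison:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Scientist condition: (fallacies, non-fallacies) per scenario type,
# reconstructed from Tables 2 and 3 (e.g., 39.1% of 46 ≈ 18 for type 3).
counts = {1: (12, 44), 2: (2, 29), 3: (18, 28), 4: (4, 47), 5: (1, 48)}

# Omnibus 5 x 2 test across scenario types.
chi2_stat, p, df, _ = chi2_contingency(np.array(list(counts.values())))
print(f"omnibus: chi2({df}) = {chi2_stat:.2f}, p = {p:.6f}")

# Post-hoc comparison of type 3 vs. type 5, tested against the
# Bonferroni-adjusted alpha of .05 / 10 = .005 (correction=False so no
# Yates continuity correction is applied to the 2 x 2 table).
pair = np.array([counts[3], counts[5]])
chi2_pair, p_pair, df_pair, _ = chi2_contingency(pair, correction=False)
print(f"type 3 vs 5: chi2({df_pair}) = {chi2_pair:.2f}, "
      f"significant at .005: {p_pair < 0.005}")
```

With these reconstructed counts, the omnibus statistic comes out close to the reported χ2(4) = 31.46, and the type 3 vs. type 5 comparison close to the reported χ2(1) = 20.40.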

Besides the main analyses, we also checked for correlations between fallacies, religiosity, and political orientation. The analyses showed significant correlations between all of them (across conditions), and the results are presented in Table 4 below. A significant weak positive correlation was found between the number of fallacies and both religiosity (r(752) = .12, p < .01) and political orientation (r(752) = .09, p = .01), indicating that more fallacies were committed by people who were more religious or more conservative. Finally, a significant moderate positive correlation was found between religiosity and political orientation (r(752) = .38, p < .01), indicating that the more religious people were also the more conservative ones.

Table 4

Correlations between fallacies, religiosity and political orientation

Variables                  1      2      3
1 Fallacies                –
2 Religiosity              .12*   –
3 Political Orientation    .09**  .38*   –

Note. * p < .01, ** p = .01.

Discussion

In Study 1, we expected that scientists would be associated the most with scenarios that were either weird, impure, or both. Moreover, we expected them to be associated the least with scenarios that were severe. The first of these predictions was confirmed by our results, as scientists were associated the most with the weird-only scenario. When comparing this scenario with the others, we saw that scientists were significantly more associated with scenario type 3 (weird only) than with type 2 (impure + severe), 4 (severe only), and 5 (impure only). The results of the first two comparisons are in line with our prediction, since we expected the lowest associations for scenarios that were severe. One might wonder why, then, scientists were significantly more associated with the type 3 than the type 5 scenario. However, this is not surprising if we consider the actual ratings of the type 5 scenario, which was in practice only slightly impure and also slightly higher in severity than type 3.


Bearing this in mind also allows us to explain why we found that scientists were significantly more associated with scenario type 1 (impure + weird) than with type 5, since type 1 was higher in both weirdness and impurity than type 5; this result, too, is in line with our predictions.

Yet, our results differ from those of the original research by Rutjens and Heine (2016) in a number of ways. First, a much higher percentage of people in their research associated necrobestiality with scientists: in our study, 21.4% of participants in that condition committed a conjunction fallacy, whereas in the original research, up to 65.8% did. Second, we found atheists to be significantly more associated with immoral behavior than both scientists and Muslims (as in previous research; Gervais, 2014), whereas in the original research the number of fallacies for scientists (in the necrobestiality condition) was either similar to, or even significantly higher than, the number for atheists. Third, while in our research the fallacies for scientists and Muslims did not differ significantly across scenarios (except for type 3, which was not immoral), in the original research scientists were significantly more associated than Muslims with a number of moral violations.

Taken together, the results of Study 1 cast doubt on the findings by Rutjens and Heine (2016), and rather suggest that scientists are perceived more as weird than as immoral. Further support for this idea comes from the fact that the two highest numbers of fallacies we observed for scientists were committed in scenario type 3 (only weird) and type 1 (also highly weird), while the lowest number of fallacies was committed in the type 5 scenario (the least weird). Therefore, the results from Study 1 not only support our initial predictions, but also suggest that the original results of Rutjens and Heine (2016) might have been confounded by the weirdness of the scenarios, offering support to Gray and Keeney's (2015) hypothesis about a sampling bias in MFT research. However, our findings need to be interpreted with caution due to the limitations of our study, which are discussed below.

The main limitation of our research is that, due to the restricted time and resources available, we could only pilot 25 scenarios, and had to choose among those the ones that would best fit our scenario types. Our results are thus based on the scenarios we used, which were not as representative of their respective categories as we initially expected. To overcome this limitation, we took into account the actual ratings of the scenarios rather than their category, which allowed us to meaningfully interpret our results. Still, since our results rely on the scenarios used, it could be possible that scientists are not associated with weird behavior in general, but rather only with the specific behaviors we depicted in the one-sentence scenarios. This is something future research could look into, perhaps examining how different types of strange behavior are associated with scientists. Further research could also look at how scientists are associated with weird behavior compared to groups notorious for being strange and eccentric, such as rock stars.

Additionally, our study used the target 'scientists' in a general sense, with no label specifying the type of scientist. This should also be taken into account when interpreting our results, as it is possible that they cannot be extended to all the different types of scientists. However, we think that using a general 'scientist' target was the best way to approach our investigation, for three reasons: it gave us more statistical power (compared to having multiple scientist targets), Rutjens and Heine (2016) did not find differences between scientist conditions, and this is how scientists are usually referred to in the media (i.e., scientists rather than chemists or physicists). Therefore, we consider the use of the general 'scientist' target not necessarily a limitation, but rather something to keep in mind when generalizing the results.

Finally, one might wonder why the results of Study 1 did not replicate those of the original research. This discrepancy might have been caused by the materials used: even though we used the same scenario as the original research (i.e., necrobestiality), our scenario was worded differently. Our scenario descriptions were only one sentence long, whereas scenarios used in the original research (as well as in the MFT literature; e.g., Graham & Haidt, 2012) are usually longer. However, since we had to keep the study as short as possible, and since we could not find (longer) moral scenarios in the literature that would fit all our categories, we opted to use short moral vignettes.

The difference between the vignettes we used and the scenarios used in the original research can be clearly seen by looking back at the necrobestiality scenario mentioned in the introduction: in that scenario, the moral violation is meticulously described, with the person unwrapping the chicken carcass, using a condom, and sterilizing the carcass afterwards; our scenario, on the other hand, excluded all these details and just mentioned the necrobestiality act. Perhaps the fastidiousness of the scenario used in the original research makes it seem very methodical and analytical, which is then associated with the mentality of a scientist. Support for this idea comes from an independent replication of the original research that successfully replicated the original results using the same (i.e., longer) scenarios (Soetekouw, 2016). This suggests that Gray and Keeney's (2015) concerns regarding a possible scenario sampling bias in the MFT literature are legitimate, and also that we should investigate certain aspects of these scenarios (e.g., weirdness, severity, wording) in order to avoid confounded results. This idea is further discussed in the general discussion.

Study 2

Methods

Experimental design. In Study 2, participants were asked to complete the Cognitive Reflection Test (CRT) either before or after completing the moral judgment section of the Moral Foundations Questionnaire (MFQ) from the perspective of a scientist. The study thus had a single between-participants factor with two conditions (CRT-first / MFQ-first).

To answer the second research question, we looked at differences in the MFQ scores (i.e., explicit moral judgments of scientists) between the participants in the CRT-first condition and those in the MFQ-first condition, as well as at the correlation between the number of correct responses on the CRT and the MFQ scores (across conditions). This allowed us to explore whether induced (i.e., participants in the CRT-first condition) or dispositional (i.e., participants with a high CRT score) reflection affects the participants' moral stereotype of scientists.

Participants. G*Power 3 (Faul et al., 2007) was used to determine the number of participants needed. For each condition in Study 2, 50 participants were needed to detect medium effects (f² = .135) with 95% power using a regression analysis. As in Study 1, participants were recruited on Amazon's Mechanical Turk (MTurk; Buhrmester et al., 2011), and were excluded if they failed an attention check or did not answer all the questions. A sample of 107 adults (i.e., over 18; age and gender were not recorded) took part in Study 2 in exchange for a monetary reward. Nine participants were excluded because they did not answer all the questions and fourteen because they failed the attention check. This resulted in a total of 84 participants, who were randomly assigned to either the MFQ-first condition (N = 45) or the CRT-first condition (N = 39).

Materials. The moral judgment section of the Moral Foundations Questionnaire (MFQ30, part 2; Graham et al., 2009) was used to assess explicit judgments of scientists' morality; an example item of the MFQ is "Justice is the most important requirement for a society" (1 = strongly disagree; 5 = strongly agree). The Cognitive Reflection Test (CRT; Frederick, 2005) was used to induce and assess reflection. An example item of the CRT is "If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?" (5 minutes). The MFQ and the CRT are reported in full in Appendices B and C, respectively. Additionally, Study 2 used the same demographic and control questions as Study 1.
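The widget item trips people up because the intuitive answer (100 minutes) ignores that the machines work in parallel; the reflective answer of 5 minutes follows from a quick rate calculation, sketched here with variable names of our own choosing:

```python
# 5 machines making 5 widgets in 5 minutes means each machine makes
# one widget every 5 minutes, i.e., 0.2 widgets per machine-minute.
machines, minutes, widgets = 5, 5, 5
rate_per_machine = widgets / (machines * minutes)  # 0.2 widgets per machine-minute

# 100 machines making 100 widgets at that same per-machine rate:
time_for_100 = 100 / (100 * rate_per_machine)
print(time_for_100)  # → 5.0
```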

Procedure. Except for the main tasks, the overarching procedure of Study 2 was the same as that of Study 1. After the previously described introduction, the website randomly assigned participants to either the CRT-first or the MFQ-first condition, and presented either the CRT or the MFQ. Participants then had to read the instructions for the part presented (whether the CRT or the MFQ) and complete all the items before moving on to the next section. Within each section, the items were randomized to avoid order effects. Each item of the CRT was presented individually, on a screen containing both the question and an empty cell to enter the answer; participants had to answer the question to move on to the next one. All the MFQ items were presented on the same page, together with instructions asking participants to respond as John, who is a scientist, in order to make sure they answered from the perspective of a scientist; participants answered each item on a five-point Likert scale (1 = John strongly disagrees; 5 = John strongly agrees). After completing the MFQ (and independently of their condition), participants were asked to describe what they knew about John (i.e., that he is a scientist), in order to check that the manipulation was successful. Finally, the aforementioned control and demographic questions were presented, after which a final screen thanked the participants and gave them the chance to give feedback.

Results

First, participants who failed the attention check, those who did not complete all the questions, and those who failed the manipulation check were excluded from the analyses¹. Second, a 'CRT score' variable was created, which contained the number of correct answers on the CRT for each participant, ranging from 0 to 3. Third, the answers to the three MFQ items corresponding to each moral foundation (as illustrated in Appendix B) were averaged and computed into five new variables, one for each moral foundation, ranging from 1 to 5. Fourth, to control for familiarity with science as a possible confounder, we ran a MANOVA with the five moral foundations as dependent variables and familiarity with science as the independent variable. Since this analysis was not significant, we excluded familiarity with science from the subsequent analyses. Then, we conducted a MANOVA with the moral foundations as dependent variables and order condition (MFQ-first/CRT-first) as the independent variable. No significant differences were found between conditions; the results of the analysis are shown in Table 5 below, together with means and standard deviations for each foundation in each order condition.

Table 5

Analyses of Variance between Conditions and Moral Foundations

                    MFQ-First     CRT-First
Moral Foundation    M (SD)        M (SD)        F-value*
Fairness            3.47 (.62)    3.67 (.64)    2.14
Loyalty             3.16 (.63)    3.19 (.58)    0.06
Authority           3.28 (.56)    3.27 (.76)    0.00
Purity              2.79 (.91)    2.84 (1.00)   0.05
Care                3.47 (.70)    3.38 (.72)    0.40

Note. * Degrees of freedom were the same for each test: df = (1, 82). None of the tests were significant (p > .05).
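The scoring steps described at the start of this section (counting correct CRT answers, averaging the three MFQ items per foundation) can be sketched as follows. The item responses and the item-to-foundation grouping below are invented for illustration; the actual mapping is given in Appendix B. The keyed answers to the standard three-item CRT are 5 cents, 5 minutes, and 47 days (Frederick, 2005).

```python
import numpy as np

# One (invented) participant's answers to the three MFQ items per foundation,
# each on the 1-5 Likert scale used in the study.
foundation_items = {
    "Fairness":  [4, 3, 4],
    "Loyalty":   [3, 3, 4],
    "Authority": [3, 4, 3],
    "Purity":    [2, 3, 2],
    "Care":      [4, 4, 3],
}
foundation_scores = {f: float(np.mean(items))
                     for f, items in foundation_items.items()}

# CRT score: number of correct answers (0-3).
crt_responses = ["10 cents", "5 minutes", "47 days"]   # invented responses
crt_key = ["5 cents", "5 minutes", "47 days"]          # keyed correct answers
crt_score = sum(resp == key for resp, key in zip(crt_responses, crt_key))

print(foundation_scores["Purity"], crt_score)
```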

Subsequently, a multivariate regression analysis was conducted with CRT score as a predictor and the five moral foundations as dependent variables, but no significant differences were found between different CRT scores. In order to look for a possible interaction between CRT score and order condition, we used these two as predictors in a multiple regression with the five moral foundations as dependent variables, but no significant differences were found.

Finally, a one-way ANOVA was conducted to look at differences between moral foundations; this analysis included participants from both order conditions, since they did not differ in our previous analyses. The ANOVA showed a significant overall difference (F(4, 83) = 17.34, p < .001), and subsequent paired-samples t-tests were conducted to look at specific differences between foundations. These comparisons revealed that participants rated scientists significantly lower in Purity (M = 2.81, SD = .95) than in Fairness (M = 3.56, SD = .63; t(83) = -6.57, p < .001), Loyalty (M = 3.17, SD = .60; t(83) = -3.63, p < .001), Authority (M = 3.28, SD = .66; t(83) = -5.86, p < .001), and Care (M = 3.43, SD = .71; t(83) = -4.92, p < .001). Additionally, participants rated scientists significantly higher in Fairness (M = 3.56, SD = .63) than in Loyalty (M = 3.17, SD = .60; t(83) = 4.51, p < .001). While all these comparisons were significant at a Bonferroni-adjusted significance level of .001 (.01/10), none of the other comparisons were significant. The means for each moral foundation are shown in Figure 2, together with the outcome of the comparisons; same superscripts indicate no significant differences (p > .05), whereas different ones indicate significant differences (p < .001).
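These pairwise comparisons are paired-samples t-tests evaluated against a Bonferroni-adjusted alpha; a minimal sketch on simulated scores (not the study's data) looks like this:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(42)
n = 84  # sample size of Study 2

# Simulated per-participant foundation scores, loosely mirroring the
# reported pattern (Purity lower than Fairness); purely illustrative.
purity = rng.normal(2.81, 0.95, n)
fairness = rng.normal(3.56, 0.63, n)

t, p = ttest_rel(purity, fairness)  # paired-samples t-test
alpha = 0.01 / 10                   # Bonferroni-adjusted level used above
print(p < alpha)
```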

Figure 2. Averages for each moral foundation across conditions.

Besides the main analyses, we checked for correlations between CRT scores, moral foundations, religiosity, political orientation, and familiarity with science. The results of these analyses are shown in Table 6 below, together with means and standard deviations for each of the variables. As in Study 1, we found a significant positive correlation between religiosity and political orientation (r(82) = .26, p < .05), indicating that the more religious people were also the more conservative ones. Finally, we found a significant weak to moderate negative correlation between religiosity and the number of correct responses on the CRT, r(82) = -.30, p < .01, indicating that more religious people obtained lower scores on the CRT.

¹ Three additional participants answered only 1s, 3s, and 4s, respectively, to all items, and were later excluded from the analyses; this did not alter the results.

Discussion

Our second study was more exploratory in nature, as to our knowledge it was the first to investigate possible effects of cognitive reflection on a moral stereotype. We aimed to integrate the dual-process theory of morality (Greene et al., 2004) with our investigation of the (im)moral stereotype of scientists, and to see whether such a stereotype could be affected by induced or dispositional reflection. To this end, we reasoned that reflection, which has been associated with utilitarian judgments (Paxton et al., 2014), could improve the overall perception of a scientist's morality if scientists are perceived to be associated more with maximizing utility (i.e., utilitarian judgment) than with what is right (i.e., deontological judgment). However, this was not the case, since the ratings on the five moral foundations did not differ for participants in the CRT-first condition (compared to those in the MFQ-first condition) or for participants with different CRT scores (across conditions). Our results thus suggest that (induced or dispositional) reflection does not improve the moral stereotype of scientists.

Yet, it must be noted that our study had a smaller sample than expected (especially the CRT-first condition, which had 39 participants instead of 50), due to the number of participants that had to be excluded from the analyses; hence, it is advisable to replicate our study with a bigger sample to further validate our results. Additionally, it is possible that we did not observe an effect because our manipulation failed to elicit a reflective state, and this might have happened for two reasons. First, due to our limited time and budget, we could only use the three items of the CRT, whereas other research has used a wider battery of items to elicit and measure reflection (e.g., Pennycook et al., 2014). This should not be a problem, since the CRT alone has also been used to elicit cognitive reflection (Paxton et al., 2012), but a future replication attempt should use the full battery of items and see whether that changes the outcome of the study. Second, our research relied on online data collection, and this could have led to impersonal participation (Evans & Mathur, 2005), with participants taking part in the study while doing other things or responding superficially, thus failing to actually engage in reflection. This possibility could be investigated by future research, for instance by using the design of previous studies involving the CRT (e.g., Paxton et al., 2012) and trying to replicate their results with both a pen-and-paper version and an online version, to see whether the results are similar (as well as in line with the literature). Therefore, even though the results of our second study were not significant, we obtained a number of valuable insights that can be used to improve this type of investigation in the future.

Table 6

Means, standard deviations and correlations between the variables in Study 2

Variable                     M      SD     1      2     3     4      5      6     7      8    9
1 CRT                        1.90   1.22   –
2 Fairness                   3.56   .63    -.16   –
3 Loyalty                    3.17   .60    -.11   .17   –
4 Authority                  3.28   .66    -.04   -.06  .35*  –
5 Purity                     2.81   .95    -.10   .18   .39*  .64*   –
6 Care                       3.43   .71    .03    .50*  .08   .07    .06    –
7 Religiosity                34.63  40.64  -.30*  .15   .17   .20    .27**  .03   –
8 Political Orientation      37.00  26.31  -.02   .08   .09   .23**  .27**  .02   .26**  –
9 Familiarity With Science   1.89   0.31   .07    -.10  -.03  .23**  0.00   -.10  .01    .05  –

Note. * p < .01, ** p < .05.

The other aim of this study was to replicate the original results of Rutjens and Heine (2016), and we successfully did so. In fact, the average ratings for each moral foundation in our study were very similar to those observed in the original research, as shown in Table 7 below.

Table 7

Means for each Moral Foundation in the current and original research

                    Current*      Original**
Moral Foundation    M (SD)        M (SD)
Fairness            3.57 (.63)    3.66 (.68)
Loyalty             3.17 (.60)    3.04 (.57)
Authority           3.27 (.66)    3.33 (.79)
Purity              2.81 (.95)    2.76 (.90)
Care                3.42 (.71)    3.48 (.90)

Note. * Study 2, Santoro (2016); ** Study 8, Rutjens & Heine (2016).

Our results thus offer further support to those of Rutjens and Heine (2016), and provide a clear picture of the explicit moral stereotype of scientists: although our ratings suggest that they are not considered to be particularly evil, they do seem to be perceived as lacking in the purity foundation, which was significantly less associated with scientists than the other foundations. This suggests that, rather than necessarily immoral, scientists might be perceived to be amoral, in the sense that they do not mind 'getting their hands dirty' (i.e., do not mind impurity) for the sake of science; however, these are only speculations and should be investigated in the future, as discussed below.

General Discussion

Summary of the Studies

Taken together, our results yielded important insights. In our first study, we saw that scientists were associated the most with weird behavior and the least with severely immoral behavior, in line with our predictions. In the second study, we found no effect of cognitive reflection on the moral stereotype of scientists, but we found that scientists were not considered to be evil, although they somewhat lacked in purity. The two studies had some limitations that were discussed, and that proved useful in informing future research. Due to these limitations, our results should be replicated before drawing solid conclusions: for Study 1, future research should try to replicate and extend our results using different scenarios; for Study 2, a replication should be conducted with a bigger sample, with the full battery of items used to induce reflection in previous research (e.g., Pennycook et al., 2014), and using a pen-and-paper version of our design.

Additional Findings

In addition to the main results already discussed, our two studies also offered interesting correlational evidence on the relationship between CRT scores, religion, and politics. In both studies, we found a significant weak to moderate positive correlation between political orientation and religiosity: in Study 1, r(752) = .38, p < .01; in Study 2, r(82) = .26, p < .05. These results show that more conservative people tend to be more religious, which is in line with previous research in the field (e.g., Pennycook, Cheyne, Seli, Koehler, & Fugelsang, 2012). Moreover, we found a significant weak to moderate negative correlation between religiosity and the number of correct responses on the CRT, r(82) = -.30, p < .01. This result is also in line with previous research on the link between analytical thinking and religiosity, suggesting that an inclination to apply analytical thinking could increase people's willingness to question and be skeptical about religious beliefs (Pennycook, Fugelsang, & Koehler, 2015). Therefore, even though these results were not the main focus of our research, they provided significant evidence in support of relevant lines of research and further validated the quality of our data.


The Moral Stereotype of the Scientist

Summing up, our investigation of the intuitive and explicit perceptions of scientists' morality formed an image of a scientist that is not necessarily evil, but rather perceived as weird and possibly amoral, for example as someone who can disregard morality for the sake of science; this is in line with what was suggested by Rutjens and Heine (2016). Considering that the results of Study 2 successfully replicated those of the original research, it is important to understand why the results of Study 1 did not. As discussed, a possibility is that our results were confounded by the materials we used, since even though we used the same scenario as the original research (i.e., necrobestiality), our scenario had different wording.

To tackle this issue, future research should try to validate the scenarios used in MFT research on several scales, including those suggested in our research (i.e., weirdness, severity). These ratings can then be taken into account, together with the wording, to avoid confounded results when using the scenarios in a specific study. Furthermore, to investigate the moral stereotype of scientists more clearly, future research should use our Study 1 design with more scenarios and see which ones are most strongly associated with scientists. In particular, since our study contained only one example of weird behavior, research should use a variety of weird scenarios to investigate whether scientists are truly perceived as weird, and check which odd behaviors they are most associated with; notoriously weird (e.g., a rock star) or normal (e.g., an average Joe) targets could be used as control groups, to look at how they compare with scientists. Finally, considering that in Study 2 we found scientists to be rated low on purity, as in the original research, future research should investigate this perception in more detail. For instance, to further understand the nature of this moral stereotype, different types of impurity scenarios could be used to see which are associated the most with scientists. A possibility could be to investigate our hypothesis that scientists might be considered impure to the extent that they are unscrupulous for the sake of science, as illustrated in our example of the 'vomit-eating' Stubbins Ffirth.

It is necessary to discern whether the public perceives scientists as merely odd or as capable of immoral behavior, since a negative stereotype can have serious consequences. For instance, the general public's opinion of GMOs is affected by how these are perceived as unnatural and immoral, regardless of the scientific evidence offered in their support (Blancke et al., 2015). A negative stereotype could thus affect the public's adherence to new practices suggested by scientists, and before planning interventions to increase trust in scientists and their recommendations, it is crucial to truly understand the nature of this moral stereotype.

We now have an answer to our original question: "Evil, or weird?" Our results formed a more positive image of scientists than the 'evil scientist' we started with. We found that they can be perceived as weird, but also as somewhat lacking in purity, which could suggest a stereotype of scientists as amoral, perhaps in the sense that they could set aside morality if needed. However, further research is needed to confirm our results and to explain which (immoral and/or strange) behaviors are most associated with scientists. Until then, we can take the stereotype of the immoral scientist with irony, and conclude our investigation with a relevant joke:

“I’m a scientist who’s researching bestiality between humans and chickens... ...I’ll be in my lab.”


References

Blancke, S., Van Breusegem, F., De Jaeger, G., Braeckman, J., & Van Montagu, M. (2015). Fatal attraction: The intuitive appeal of GMO opposition. Trends in Plant Science, 20(7), 414–418.

Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1), 3–5.

Clifford, S., Iyengar, V., Cabeza, R., & Sinnott-Armstrong, W. (2015). Moral foundations vignettes: A standardized stimulus database of scenarios based on moral foundations theory. Behavior Research Methods, 47(4), 1178–1198.

Cuddy, A. J., Fiske, S. T., & Glick, P. (2008). Warmth and competence as universal dimensions of social perception: The stereotype content model and the BIAS map. Advances in Experimental Social Psychology, 40, 61–149.

Davies, C. L., Sibley, C. G., & Liu, J. H. (2014). Confirmatory factor analysis of the moral foundations questionnaire. Social Psychology, 45(6), 431–436.

Evans, J. R., & Mathur, A. (2005). The value of online surveys. Internet Research, 15(2), 195–219.

Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191.

Frederick, S. (2005). Cognitive reflection and decision making. The Journal of Economic Perspectives, 19(4), 25–42.

Gervais, W. M. (2014). Everything is permitted? People intuitively judge immorality as representative of atheists. PLoS ONE, 9(4), e92302.

Graham, J., & Haidt, J. (2012). Sacred values and evil adversaries: A moral foundations approach. The Social Psychology of Morality: Exploring the Causes of Good and Evil, 11–31.

Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., & Ditto, P. H. (2012). Moral foundations theory: The pragmatic validity of moral pluralism. Advances in Experimental Social Psychology, forthcoming.

Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96(5), 1029–1046.

Gray, K., & Keeney, J. E. (2015). Impure or just weird? Scenario sampling bias raises questions about the foundation of morality. Social Psychological and Personality Science, 1–10.

Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44(2), 389–400.

Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Vintage.

Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus, 133(4), 55–66.

Hardman, D. (2008). Moral dilemmas: Who makes utilitarian choices. Unpublished manuscript.

Hazelkorn, E., Ryan, C., Beernaert, Y., Constantinou, C. P., Deca, L., Grangeat, M., . . . Welzel-Breuer, M. (2015). Science education for responsible citizenship. Report to the European Commission of the Expert Group on Science Education.

Herzig, R. (2005). Suffering for science: Reason and sacrifice in modern America. Rutgers University Press.

Paxton, J. M., Bruni, T., & Greene, J. D. (2014). Are 'counter-intuitive' deontological judgments really counter-intuitive? An empirical reply to Kahane et al. (2012). Social Cognitive and Affective Neuroscience, 9(9), 1368–1371.

Paxton, J. M., Ungar, L., & Greene, J. D. (2012). Reflection and reasoning in moral judgment. Cognitive Science, 36(1), 163–177.

Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2014). The role of analytic thinking in moral judgements and values. Thinking & Reasoning, 20(2), 188–214.

Pennycook, G., Cheyne, J. A., Seli, P., Koehler, D. J., & Fugelsang, J. A. (2012). Analytic cognitive style predicts religious and paranormal belief. Cognition, 123(3), 335–346.

Pennycook, G., Fugelsang, J. A., & Koehler, D. J. (2015). Everyday consequences of analytic thinking. Current Directions in Psychological Science, 24(6), 425–432.

Rutjens, B. T., & Heine, S. J. (2016). The immoral landscape? Scientists are associated with violations of morality. PLoS ONE, 11(4), e0152798.

Santoro, A. (2014). Effects of reflection and time on moral judgment.

Soetekouw, R. (2016). Controlling the cliché: De invloed van subjectieve controle op stereotypering [The influence of subjective control on stereotyping].

Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4), 293–315.
