In a Moral Mood: How Emotional Engagement and Empathy shift the Balance between Self-Benefit and Harming Others

Academic year: 2021



UNIVERSITY OF AMSTERDAM

In a Moral Mood: How Emotional Engagement and Empathy shift the Balance between Self-Benefit and Harming Others

by

Lotte Warns

Master Internship 2

Research Master Brain and Cognitive Sciences

Supervisor: Dr. J. M. Van Baar
Examiner 1: Dr. O. FeldmanHall
Examiner 2: Dr. V. Gazzola

in the

Institute for Interdisciplinary Studies, FeldmanHall Lab


morally due to ulterior motives. Previous research has shown that moral judgments are strongly affected by situational cues, tangibility of rewards, real consequences, emotions and empathy. However, it is yet unclear how people actually learn to directly trade off harm to others against self-gain in moral dilemmas, and how learning to make such moral decisions is affected by emotions and empathy. Subjects performed the Pain versus Gain task, in which they had to trade off monetary gain against alleviation of pain for another individual while emotional engagement was modulated. We show that more empathic individuals are more susceptible to emotional cues, which promotes prosocial behavior and is associated with increased neural activity in brain areas implicated in helping behavior. A computational utility model captured the trade-off between personal gain and harming others and reproduced the behavioral results. In addition, emotional prediction errors (i.e. violations of how one imagines one will feel) postulated by a learning model were predictive of activity in a neural “approach pathway” theorized to support caregiving behavior, suggesting that emotional learning signals in this approach pathway contribute to how people learn to trade off conflicting interests in moral dilemmas. Keywords: Empathy, emotional engagement, morality, prosociality, emotional prediction error, learning.


Introduction

Morality is strongly ingrained in our society. From a young age we learn that there is a set of common moral values that we ought to follow on an everyday basis, such as to always tell the truth, seek justice, not harm others and be generous (Haidt and Kesebir, 2010). However, people frequently act immorally due to ulterior motives, including money and love, and thereby violate moral norms (Freiman, 2010), for instance when they lie to protect a loved one, harm others for personal monetary gain or - in light of recent events - do not socially isolate in order to meet up with friends or family despite COVID-19 guidelines. These situations lead to moral dilemmas and often result in a trade-off between one's own interests and those of others.

Committing to someone else's interests at a cost to oneself is considered an altruistic act (De Waal, 2008). The motives and psychological mechanisms behind altruistic actions vary. A study by FeldmanHall et al. (2015), which investigated these motives, has shown that it is other-oriented empathic emotions that drive altruism (Batson et al., 1983), rather than the desire to minimize one's own distress (Cialdini et al., 1987). A prerequisite for altruistic behavior is seeing another individual in need and acting upon the amount of felt distress (Preston, 2013). However, the amount of distress evoked in moral dilemmas depends on the context. A study by Greene et al. (2001) showed that the willingness to act in moral dilemmas is governed by the degree of elicited emotional response, which was reinforced by activation of brain areas associated with emotion when dilemmas were considered more “up close and personal”. However, there is a growing body of literature recognizing that people are poor predictors of their own and others' emotions in moral judgements (Ayton et al., 2007, Pollmann and Finkenauer, 2009, Teper et al., 2011, 2015). This discrepancy in affective and empathic forecasting could result in distorted moral judgements. For example, by overestimating another's feelings, one might excessively reduce harm inflicted on that person, thereby hurting one's own self-interest more than needed. However, these prior studies leave several important questions unanswered. Given that people cannot always predict how distressed they will feel in the face of another person's need, how do individuals learn to trade off harm


not act. Greene et al. (2001) argue that this discrepancy is governed by the degree of elicited emotional response. Whereas more “up-close-and-personal” dilemmas — i.e. dilemmas that feel more intimate and personal — elicited activation in areas associated with emotions and diminished the willingness to harm, less personal and non-moral judgements resulted in activity in areas associated with working memory. In contrast, when motivational forces (e.g. financial) increase in salience, aversion to harm appears to diminish (FeldmanHall et al., 2012b): even though people appear harm averse in studies, actual behavior tells us otherwise (FeldmanHall et al., 2012b).

Empathy - the capacity to share the feelings of another individual (Hoffman, 2001) - is an emotion shown to be an important driving force behind prosocial and moral behavior (Eisenberg and Miller, 1987, Preston and De Waal, 2002). Vicarious experiences of others' feelings (i.e. pain) are reached through resonating with another's emotional states at the neural level. These vicarious experiences help represent others' distress and, facilitated by empathy, motivate action. The logic behind this is built upon the findings of Rizzolatti et al. (1996), who showed that the same neurons fired when a monkey observed goal-directed actions of another monkey as when it performed the action itself. This concept of mirror neurons has been extended to the field of social cognition. In the context of emotions such as pain (Singer et al., 2004, Zaki et al., 2016), it is thought that simulated personified representations of emotions are used to accurately understand the emotional states of others based on past experiences (Preston and de Waal, 2017). However, more recent findings by Krishnan et al. (2016) show that vicarious pain does not involve re-activation of somatosensory representations, but rather a more specific vicarious pain signature consisting of the dorsomedial prefrontal cortex (dmPFC), amygdala, posterior cingulate cortex (PCC), and temporoparietal junction (TPJ).

Regardless of the underlying neural mechanism of vicarious pain, FeldmanHall et al.


of an agent in need, supported by activation of the approach pathway of the caregiving model (Preston, 2013). In the caregiving model, Preston accounts for the role of emotions in the complexity of human altruistic responding. This model consists of two pathways: (1) the avoidance pathway, to alleviate the agent's own distress, and (2) the approach pathway, to alleviate the distress of the agent in need. Two different neural pathways are thought to implement these two types of responses to a person in need. A pattern of avoidant behavior activates the amygdala, periaqueductal gray (PAG) and the dorsal anterior cingulate cortex (dACC), areas implicated in processing aversive stimuli during emotional conflict, social evaluations and emotional learning based on social signals (Etkin et al., 2011, Hooker et al., 2006, Phelps, 2006). On the other hand, the approach network consists of reward-related neural systems, including the nucleus accumbens (NAcc), ventral tegmental area (VTA), caudate nucleus, and the subgenual ACC (sgACC), which are involved in altruistic giving, regulating emotional reactions and learning (Lockwood et al., 2016, Preston, 2013, Schultz et al., 1997). This approach pathway offers a route for facilitating prosocial learning and, with that, a neural pathway for learning how to trade off prosocial concerns with self-interest.

This body of literature shows that moral decisions are strongly affected by situational cues, tangibility of rewards, real consequences, emotions and empathy. However, it is unclear how people actually learn to directly trade off harm to others with self-gain. To understand how people learn to make moral judgements, we can turn to the literature on reinforcement learning, which has to date been largely disconnected from empathy research. The reinforcement learning literature suggests that individuals learn to update the value of an action through the difference between expected and actual outcomes of their actions - prediction errors (PE) (Rescorla, 1972, Schultz et al., 1997, Sutton et al., 1998). Some researchers have given a theoretical account of what learning models of moral decisions might look like (Christopoulos et al., 2017, FeldmanHall and Chang, 2018, Wallach et al., 2010). For instance, the review by FeldmanHall et al. (2018) suggests that people learn about others' social values through a reinforcement learning mechanism, with social norm enforcement facilitated by emotions. The social-affective control model (FeldmanHall and Chang, 2018) posits that social goals are regulated through positive and negative emotional error signals impacting decision policies. However, this does not explain how learning works when personal goals conflict with social goals. Despite other efforts by researchers showing that empathy facilitates prosocial learning (Lockwood et al., 2016) and that people comply with social norms by not harming others despite the possibility of personal gain (Crockett et al., 2014), to date there have been no studies that investigate the moral learning process through emotional error signals.


operationalizes the hypothesis that more empathic individuals learn the trade-off between self-gain and harming others faster under conditions of heightened emotional engagement. Behaviorally, prosocial behavior is hypothesized to increase in more empathic individuals and when emotional engagement is heightened, because empathic individuals are more susceptible to emotional cues (Kang et al., 2017). Following the same logic, it is expected that the higher an individual's trait empathy, the higher the weight given to distress. Based on the neural mechanisms identified in Greene et al.'s (2001) research on emotional engagement, it is hypothesized that emotion-processing brain regions will be engaged more when emotional engagement is heightened, including the amygdala, the anterior insula (AI) and the ACC (Decety, 2011, Etkin et al., 2011, Hooker et al., 2006). Moreover, the overlap between regions of the approach pathway of the caregiving model and areas implicated in learning suggests a role for this pathway in reflecting the PE for distress, an emotional learning signal (Lockwood et al., 2016, Olsson et al., 2018,


Methods

Participants

In this study, data from 122 subjects had previously been collected from the volunteer panel at the Cognition and Brain Sciences Unit, Cambridge, UK, and the postgraduate student community. Of these 122 subjects (80/122 in the high EE condition), 38 completed the PvG task in the scanner, of whom 7 were excluded due to incomplete data (17/31 in the high EE condition). Subjects were compensated for their participation and could keep the earnings accumulated during the task. Subjects were right-handed, had no mental or neurological disorders and had normal or corrected-to-normal vision. Ethical approval was obtained from the University of Cambridge, Department of Psychology Research Ethics Committee.

Pain versus Gain Task

To realize a moral dilemma, subjects (the Deciders) were given a £20 bill and on 20 trials were asked how much money (£0 to £1, in steps of £0.20) they were willing to give up to decrease the shock intensity inflicted on the wrist of the confederate (the Receiver). The more money was kept, the more subjects prioritized their personal gain over the Receiver's pain. Each trial comprised eight screens. A trial began with a screen indicating the trial number and the subject's bank balance, followed by a decision screen of 3 seconds, during which subjects saw a visual analogue scale (VAS) with the possible choices (1 - 6). During the next 8 seconds, subjects could move along the VAS to select the amount of money they were willing to give up, corresponding to the shock intensity administered to the Receiver. The higher the choice, the more money was kept and the higher the shock intensity administered to the Receiver, whereas a lower choice indicated that more money was given up (see figure 1C). After their decision, subjects saw a screen with their choice for 3 seconds and then an anticipation screen for 8 seconds. Subjects were told that during the anticipation screen their choice would be transferred to the Receiver, who was connected to a Digitimer DS7A (i.e. a current stimulator). Before participating in the Pain versus Gain task, subjects first experienced the shock themselves, so that they knew the shocks were real. Following the anticipation


whereas in the high emotional engagement condition, subjects saw the facial expression and the hand being stimulated. The videos were prerecorded footage of real shocks and were rated by an independent group, allowing shock intensity and rated pain intensity to be matched. There was one video for each shock intensity and each engagement condition (12 in total). As a control task for the scanner, subjects performed a non-moral task in which they decided which finger of the right hand the confederate was to move. Instead of a video of the Receiver being electrically stimulated, the video showed the Receiver's hand with a finger moving. This task structurally and visually mimicked the design of the PvG task and comprised the same screens as described in figure 1B, except for the 'rate distress' screen. The non-moral control task was performed by all subjects.

Figure 1: Experimental setup.

A) The Decider had to decide how much money s/he wanted to spend in order to reduce the shock intensity administered to the hand of the Receiver. The subject saw a video of either the hand or the face of the Receiver. B) Trial sequence and the two analyzed events: the Decide and Video events. C) The decision scale subjects saw during the decision events, on which they rated how much money they wanted to give up in order to reduce shock intensity. A higher choice indicated that less money was given up and



Interpersonal Reactivity Index

To obtain measures of individuals' other-oriented concern and “egoistic” concern, the empathic concern and personal distress subscales of the affective dimension of the Interpersonal Reactivity Index (IRI) were used, respectively (Davis, 1983). Empathic concern (EC) is defined as the tendency to have other-oriented feelings of concern and sympathy, whereas personal distress (PD) is defined as the extent of self-oriented feelings of distress or discomfort in reaction to others' distress.

Computational Modeling

Using Python 3.7.6 (Van Rossum and Drake Jr, 1995), several utility and reinforcement learning models were fit to the observed choice data in the PvG task to capture the internal trade-off between monetary gain and harming others. According to these models, people take into account not only the monetary gain for each option but also the harm inflicted on others, weighing its value against the money to calculate a subjective value or utility (Schoemaker, 1982). In both models, the utility (U) for each choice option (1-6) is determined by the distress experienced due to the choice made by the Decider and its corresponding shock intensity level, and by the trade-off between the monetary gain for this choice and the experienced distress. This trade-off is controlled by a free parameter Wd that reflects the weight placed on distress relative to the reward:

U = (1 − Wd) · rewards² − Wd · distress_predicted    (1)

with rewards = {0, .2, .4, .6, .8, 1} for each choice option. Reward is squared to add non-linearity to the model. The distress for each choice option is specified as distress_predicted = σ · SI, where SI is the shock intensity, defined as the square of the rewards. The σ parameter maps a value for distress onto the shock intensities.

A softmax decision rule is applied to produce a choice in proportion to the total value of all choices:

P_choice = e^(β·U_choice) / Σ e^(β·U_choice)    (2)

with β, the inverse temperature parameter, representing the balance between exploring and exploiting different choice options, and P_choice the probability that the Decider will choose to administer a shock.
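As an illustration, equations (1) and (2) can be put together in a short Python sketch (the thesis's fits were done in Python, but the parameter values below are illustrative, not the fitted estimates):

```python
import numpy as np

def choice_probabilities(w_d, sigma, beta):
    """Softmax choice probabilities over the six PvG options.

    Utility (eq. 1): U = (1 - w_d) * rewards**2 - w_d * sigma * SI,
    with SI (shock intensity) defined as the square of the rewards.
    """
    rewards = np.array([0, .2, .4, .6, .8, 1.0])  # money kept per option
    shock_intensity = rewards ** 2                # SI rises with money kept
    distress_predicted = sigma * shock_intensity
    utility = (1 - w_d) * rewards ** 2 - w_d * distress_predicted
    exp_u = np.exp(beta * utility)
    return exp_u / exp_u.sum()                    # softmax (eq. 2)

# Illustrative parameters: a high weight on distress favours low-shock options.
p = choice_probabilities(w_d=0.7, sigma=2.0, beta=3.0)
```

With these illustrative values, the utility decreases with shock intensity, so the model places most probability mass on the low-shock, low-money options.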

In the Utility model the σ parameter is a free parameter which is not updated over trials, whereas in the reinforcement learning model the σt parameter is updated over trials:


with σt+1 indicating the parameter value on the next trial and σt on the current trial. Note that by updating σ in a linear fashion, the utilities of all choices are updated, not just the utility of one choice option, to account for the dependency between choice options. This ensures that any learning is spread across and generalized to all choice options, in correspondence with the continuous nature of the participant's choice space (shock levels 1-6). This way the distress function is used to globally update the distress.
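Since the update equation itself falls on a page boundary above, here is a hedged sketch of the kind of delta-rule update the surrounding text describes; the exact functional form is an assumption, not the thesis's verbatim equation (3):

```python
def update_sigma(sigma_t, alpha, distress_experienced, distress_predicted):
    """Delta-rule update of the distress scaling parameter sigma.

    ASSUMED form: the original equation (3) is not visible in this excerpt,
    so this sketches a standard Rescorla-Wagner-style update implied by the
    text: sigma is nudged by the emotional prediction error (experienced
    minus predicted distress), which shifts predicted distress (sigma * SI)
    for every shock level at once.
    """
    emotional_pe = distress_experienced - distress_predicted
    return sigma_t + alpha * emotional_pe
```

Because σ multiplies the shock intensity for every option, a single larger-than-expected distress experience raises predicted distress globally on the next trial, matching the "global update" described above.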

Model fitting. Given the descriptions above, the Utility and RL models have 3 and 4 free parameters, respectively. For the Utility model, the free parameters are β, Wd and σ; for the RL model they are α, β, Wd and the initial σ. To fit the free parameters, the negative log likelihood (NLL) is defined as the negative sum, over trials, of the natural logarithm of the predicted probability (under the current model parameters) of the Decider's choice on each trial:

NLL = − Σ_t^n ln(P_choice)    (4)

with t indicating current trial and n the total number of trials. This loss function serves as input to a SciPy optimizer (Virtanen et al.,2020) which uses a gradient descent algorithm to find the parameter combination that minimizes the NLL (i.e. describes the subject’s behavioral data best). The NLL is then used to calculate the Akaike information criterion (AIC) to penalize the model for its number of free parameters (Akaike,1974).
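A minimal sketch of this fitting pipeline, with toy choice data, a simplified one-step utility (σ folded directly into eq. 1) and SciPy's general-purpose `minimize` standing in for the thesis's exact optimizer settings:

```python
import numpy as np
from scipy.optimize import minimize

def negative_log_likelihood(params, choices):
    """NLL (eq. 4) of observed choices under the Utility model (eqs. 1-2)."""
    beta, w_d, sigma = params
    rewards = np.array([0, .2, .4, .6, .8, 1.0])
    utility = (1 - w_d) * rewards**2 - w_d * sigma * rewards**2
    logits = beta * utility
    p = np.exp(logits - logits.max())   # subtract max for numerical stability
    p /= p.sum()
    return -np.sum(np.log(p[choices]))

choices = np.array([0, 1, 0, 2, 1])     # illustrative option indices (0-5)
fit = minimize(negative_log_likelihood, x0=[1.0, 0.5, 1.0],
               args=(choices,), method="Nelder-Mead")
aic = 2 * 3 + 2 * fit.fun               # AIC = 2K + 2*NLL, with K = 3
```

The final line applies the standard AIC penalty for K free parameters, which is how the Utility (K = 3) and RL (K = 4) models are compared below.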


AIC = 2K + 2·NLL, where K is the number of free parameters in the model. AIC is then used to compare fits between models and determine the winning model across subjects on average. Different versions of the two models, obtained by lesioning free parameters, were compared; this analysis can be found in Supplementary Analysis A: Computational Modeling.

fMRI

Data acquisition. Using a 3-Tesla Trio Tim MRI scanner with a head coil gradient located at the Medical Research Council Cognition and Brain Sciences Unit in Cambridge, UK, whole-brain data were acquired with echoplanar T2*-weighted imaging (32 sagittal slices, 3 mm thickness; TR = 2000 ms; TE = 30 ms; flip angle = 78°; FOV = 192 mm). T1-weighted structural images were acquired at 1x1x1 mm resolution.

Data preprocessing. SPM12 was used to preprocess and analyze MRI data. The first 7 volumes were discarded to allow for equilibration effects before preprocessing. fMRI data were spatially realigned to the first image, after which slice-timing correction and coregistration were performed. Images were normalized to the Montreal Neurological Institute (MNI) template with a 3x3x3 mm voxel size and smoothed with a Gaussian kernel with a full width at half maximum (FWHM) of 8 mm. To remove low-frequency drifts, a high-pass temporal filter with a cutoff at 128 seconds was used.

Data analysis

Behavioral data. Analyses of behavioral data were performed in R version 3.6.1 (R Core Team). Linear mixed-effect models (Bates et al., 2015, Kuznetsova et al., 2017) were employed to test the effect of the emotional engagement manipulation on choice and situational distress. Interaction factors such as empathic concern were added to test whether increasing empathic concern reduces decisions to shock and increases situational distress. In addition, paradigm was added to control for differences in setting between the behavioral and MRI subjects (see Supplementary Analysis C: Additional paradigm analyses). Moreover, because the VAS for rating situational distress differed between paradigms (behavioral: 5-point scale; MRI: 13-point scale), MRI distress ratings were rescaled to a 5-point scale. Furthermore, the fitted model parameters were analyzed to test whether the model captured differences in behavior due to the emotional engagement manipulation and individual differences in empathic concern. EC and PD scores were mean-centered before entering the analyses as predictors of choice, situational distress or model parameters.
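The VAS rescaling step can be sketched as a simple linear map. The thesis only states that 13-point MRI ratings were rescaled to the 5-point behavioral scale, so the exact transform below is an assumption:

```python
def rescale_distress(rating_13pt):
    """Map a rating on a 1-13 VAS onto the 1-5 scale used behaviorally.

    ASSUMPTION: a straight linear rescaling; the thesis does not spell out
    the transform, only that MRI ratings were rescaled to a 5-point scale.
    """
    return 1 + (rating_13pt - 1) * (5 - 1) / (13 - 1)
```

Under this mapping the endpoints coincide (1 maps to 1, 13 maps to 5) and the midpoint 7 maps to 3.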


decision and video events were compared between emotional engagement conditions using a t-test. In addition, EC and PD scores were added as second-level covariates of interest to determine, with a factorial ANOVA, which brain areas correlated with EC/PD scores. To determine whether activation in certain brain areas is modulated by factors such as shock intensity, the PE as modelled by the RL model, or situational distress, GLMs were built in which these variables were added as mean-centered parametric modulators of the Video and Decision events. Results are reported at voxel-wise p < 0.001 uncorrected whole brain, and a cluster-wise threshold of p < 0.05 uncorrected — a relatively lenient threshold — is applied after this initial voxel-wise thresholding step to find significant clusters.

ROI analysis. Region of interest (ROI) analyses were performed using the MarsBaR toolbox (Brett et al., 2002) in SPM12. ROI analyses were used to test specific hypotheses about differences in brain activation related to emotional engagement and empathic concern. ROIs were defined as spheres with a radius of 10 mm; coordinates for the center of each sphere were taken from FeldmanHall et al. (2015, 2012a). Coordinates were taken from these papers because their results provided the hypotheses for the questions asked here, and their ROI coordinates were themselves based on the literature reviewed above. Activity during the video or decision event for each ROI was extracted using MarsBaR. The acquired beta weights were used in t-tests and correlation tests to determine, respectively, whether activity differed significantly between emotional engagement conditions or correlated with trait empathy. Results are reported at p < 0.05 uncorrected.


Results

Behavioral results

Trade-off between personal gain and harming others

To verify that the emotional engagement manipulation was effective, its effects on selfish decisions were examined. Subjects kept significantly more money in the low emotional engagement condition (mean ± s.d.: £13.75 ± £5.49) than in the high emotional engagement condition (mean ± s.d.: £10.80 ± £5.09; t(120) = 2.9, p = 0.005; see Figure 2a). These results support the findings of Greene et al. (2001) that the more 'up close and personal' a dilemma is, the lower the willingness to harm.

Furthermore, to explore whether empathic concern affected the trade-off between personal gain and harming others differentially across EE conditions, these two factors were added to a generalized estimating equation (GEE) (Halekoh et al., 2006). A GEE was used to make population-level inferences, as it does not fit slopes for each individual, and because a linear mixed-effect model did not provide enough power. Regression analyses showed that both EE (β = 0.74, SE = 0.23, p = 0.003; ref = low EE) and EC (β = -0.54, SE = 0.14, p = 0.003) predicted choice, as did their interaction (β = 0.40, SE = 0.22, p = 0.003; ref = low EE; see Figure 2b). Separate correlation tests show that the average choice to keep money decreases with empathic concern in the high EE condition (r(78) = -0.41, p < 0.001), but not in the low EE condition (r(40) = -0.11, p = 0.47). In summary, subjects with higher empathic concern scores were more prosocial than subjects with lower scores, and this effect was stronger in the high EE condition than in the low EE condition. These results suggest that highly empathic individuals are more susceptible to emotional cues when making moral decisions (Kang et al., 2017) and confirm the hypothesis that more empathic individuals are more prosocial in contexts with intense emotional cues.

Situational distress as a consequence of one’s decision

To quantify subjects' feelings when viewing the consequences of their decisions, ratings of situational distress were analyzed. It was expected that subjects would feel more


Figure 2: Behavioral results for choice behavior and distress.

Shown are the Pain versus Gain task behavioral results. a) The significant difference in average choice between emotional engagement conditions (β= 0.74, SE = 0.23, p = 0.003; ref = low EE). b) The significant interaction between empathic concern and emotional engagement on choice (β= 0.40, SE = 0.22, p = 0.003; ref = low EE). c) The significant difference in average situational distress rating between emotional engagement conditions when controlled for choice (β= 0.80, SE = 0.22, p < 0.001). d) The significant interaction between empathic concern and emotional engagement on situational distress ratings when controlling for choice (β= -0.30, SE = 0.04, p < 0.001).

. p <0.06; * p <0.05; ** p <0.01.

distressed when emotional engagement was heightened (observing the face and the body of the Receiver). A linear mixed-effects model showed that the emotional engagement manipulation did not significantly predict distress (β = 0.04, SE = 0.14, p = 0.75; reference is low EE). However, subjects were shown earlier to give up significantly more money in the high EE condition; therefore, to control for this factor, choice was added to the regression model. Linear mixed-effect models reveal that EE (β = 0.80, SE = 0.22, p < 0.001; reference is low EE), choice (β = 0.55, SE = 0.02, p < 0.001) and their interaction (β = -0.26, SE = 0.04, p < 0.001; reference is low EE; see Figure 2c) all predict distress. This suggests that subjects did in fact experience different distress levels in the high emotional engagement condition and that keeping less money in the high EE condition was just as distressing as keeping more money in the low EE condition. This provides evidence that the face video was indeed more emotionally engaging than the hand video, aligning with the notion that facial information is more salient (Chambers et al., 1999). Moreover, situational distress ratings decreased with EC scores when controlling for choice (β = -0.26, SE = 0.13, p = 0.04). However, when adding EE, it became apparent that the higher the EC score in the high EE condition, the higher the


distress rating when keeping more money, whereas with higher EC scores distress ratings went down for keeping more money in the low EE condition (EC*EE*choice interaction: β = -0.30, SE = 0.04, p < 0.001; ref = low EE; see Figure 2d). This suggests that the EE manipulation modulates the distress ratings - and thus the emotional response - more strongly in the high EE condition than in the low EE condition. This confirms the hypothesis that showing the consequence of one's choice as a facial reaction, which is more up close and personal than a hand motion, triggers a greater emotional response in the Decider.

Computational modeling

Figure 3: Posterior predictive check for the Utility model. Shown is the reproducibility of the emotional engagement effect on prosocial behavior under the Utility model. Each grey dot represents the average true choice per subject, whereas the black square indicates the average true choice over all subjects in the low or high EE condition. The red error bars represent the 95% confidence interval of the average choice from simulating the Utility model 1000 times per subject.

It is reasoned that the psychological mechanism behind decision making in the PvG paradigm is based on learning from previous experiences: did I make the right decision in the previous trial, or did I experience more distress than I anticipated? To test whether situational distress experienced on the previous trial influences choice on the next trial, it was added as a lagged predictor of choice. Regression showed that choice is significantly predicted by an individual's previous distress (β = -0.10, SE = 0.02, p < 0.001), and this result holds when controlling for previous choice (β = -0.15, SE = 0.02, p < 0.001). So regardless of whether the choice on the previous trial was also lower, the distress experienced on the previous trial predicted the amount of money kept on the next trial. These results suggest that learning effects are present. To characterize the components of a learning model and link these components to the brain, computational modeling was employed.

Model comparison using AIC showed that the Utility model with 3 free parameters (no α) was the winning model. For 82.8% of subjects the Utility model fit better than the RL model with 4 free parameters. A posterior predictive check of the Utility model confirmed the robustness of the model. Figure 3 shows that the effect of emotional engagement on prosocial behavior is accurately replicated by the Utility model.
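The posterior predictive check in Figure 3 can be sketched as follows: resimulate each subject's 20 trials from their fitted choice probabilities many times and take a 95% interval over the simulated average choice. The probabilities below are illustrative, not a subject's actual fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mean_choice(p, n_trials=20, n_sims=1000):
    """Posterior predictive check: simulate the task `n_sims` times from a
    subject's fitted choice probabilities `p` (length 6, summing to 1) and
    return the simulated average choice per simulation."""
    options = np.arange(1, 7)                 # choice options 1-6
    sims = rng.choice(options, size=(n_sims, n_trials), p=p)
    return sims.mean(axis=1)                  # one mean choice per simulation

# Illustrative fitted probabilities for one (prosocial-leaning) subject:
p = np.array([.30, .25, .20, .10, .10, .05])
means = simulate_mean_choice(p)
ci_low, ci_high = np.percentile(means, [2.5, 97.5])
```

If the observed average choice falls inside the simulated 95% interval, the fitted model reproduces that subject's behavior, which is what Figure 3 shows at the condition level.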


Figure 4: Model behavior for Wd, α and EPE. Shown are the Pain versus Gain task behavioral results. a) The significant correlation between the fitted Wd value and empathic concern in the high emotional engagement condition (ρ = 0.28, p = 0.01). b) The significant interaction between empathic concern and emotional engagement on choice (β = -4.64, SE = 0.16, p < 0.001). c) No significant predictors were found for the learning rate α. d) The significant difference between emotional engagement conditions on the prediction error (β = -0.41, SE = 0.20, p = 0.04; ref = low EE).

See Supplementary Analysis A: Computational Modeling for more information about model comparison and model quality checks. To test hypotheses about the influence of emotional engagement and empathic concern, the Wd parameter from the winning Utility model was analyzed. In addition, because some evidence for learning was found by showing that choice could be predicted by prior distress, the α parameter and the modelled prediction errors from the RL model were also analyzed.

Firstly, it was predicted that the weight given to distress would be higher for subjects watching the high EE video compared to the low EE video, because Greene et al. (2001) showed that more “up close and personal” moral dilemmas diminish the willingness to harm, which is specified as the trade-off between money and distress in the models described here. Wd parameter values were non-normally distributed (Shapiro-Wilk: W = 0.74, p < 0.001) and therefore non-parametric tests were performed. For the Utility model there was a significant difference in Wd between EE conditions (Z = 2131, p = 0.02), with a higher Wd for the high EE (mean: 0.25, SE: 0.02) compared to the low EE condition (mean: 0.22, SE: 0.04). This confirms the hypothesis that emotional engagement affects the willingness to harm, captured by the Wd parameter. This suggests that heightened emotional engagement modulates the trade-off between harming others and personal gain by altering the relative influence of these two constructs in the psychological decision process.


15 Secondly, it was reasoned that the more empathic the individual, the more s/he will care about the distress caused by watching another individual being harmed than about the money. Therefore, it was expected that the higher an individual’s empathic concern score, the higher the Wd parameter value. Collapsing over EE conditions showed no

significant relationship between empathic concern and the fitted Wd value (ρ = 0.18,

p = 0.052). However, in the high EE condition, the higher an individual's EC score, the higher the Wd parameter value (ρ = 0.28, p = 0.01; see figure 4a), whereas this relationship was not significant in the low EE condition (ρ = 0.03, p = 0.86). To determine whether the correlations differed significantly from each other, and thus whether the interaction effect of EE and EC on Wd was significant, a quantile regression was performed. The regression showed that the relationship between EC and Wd was not significantly weaker in the low than in the high EE condition (β = -0.02, SE = 0.03, p = 0.45, ref = low EE). Despite the absence of an interaction between EE and EC, these results point towards a possibly stronger effect of empathic concern on the weight of distress under heightened emotional engagement. This suggests that empathy and emotional engagement can affect the weight placed on distress as captured by the model.
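The per-condition rank correlations can be sketched as follows. All data below are simulated stand-ins for the EC scores and fitted Wd values (the real data are not reproduced here), and the rank-based correlation is implemented in plain NumPy:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank-
    transformed data (no tie correction; fine for continuous data)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

# Simulated data: an EC-Wd link in the "high EE" sample, none in "low EE".
rng = np.random.default_rng(0)
n = 40  # hypothetical subjects per condition
ec_high = rng.normal(20, 4, n)
wd_high = 0.15 + 0.010 * ec_high + rng.normal(0, 0.02, n)
ec_low = rng.normal(20, 4, n)
wd_low = 0.22 + rng.normal(0, 0.02, n)

rho_high = spearman_rho(ec_high, wd_high)  # clearly positive
rho_low = spearman_rho(ec_low, wd_low)     # near zero
```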

Thirdly, if individuals care more about distress than about money, as reflected in a higher Wd parameter, then this parameter should predict prosocial behavior. It was therefore tested with a quantile regression whether Wd could significantly predict how much subjects were willing to give up to reduce the shock intensity administered to the confederate, and whether this is modulated by emotional engagement. Whereas Wd of the Utility model significantly predicted money given up (β = -4.64, SE = 0.16, p < 0.001; see figure 4b), the interaction with the EE manipulation was not significant (β = -0.10, SE = 0.23, p = 0.65). This confirms that the weight-of-distress parameter captures the prosocial/selfish trade-off and explains moral judgements such that a higher Wd results in more prosocial behavior, but that emotional engagement does not differentially alter this effect.

Furthermore, model comparison in Supplementary Analysis A: Computational Modeling showed that the average log-likelihood (LL) per trial was better for the reinforcement learning model than for the Utility model, but that the RL model's LL could not overcome the penalization imposed by the AIC. However, because there were strong predictions about the learning rate and PE results, the α and PEs from the full reinforcement learning model were used to test whether empathy could affect the learning rate or whether emotional engagement could affect the prediction errors. In the RL model the σ parameter updates based on emotional feedback (discrepancies between experienced and anticipated distress) and uses that feedback to change the amount of distress felt for the different shock levels to make a better trade-off


0.01, SE = 0.02, p = 0.51; see figure 4c). Taken together, this suggests that subjects do not learn differently about their emotions regarding the internal trade-off between harming others and personal gain when emotional engagement is heightened, nor when they are more empathic.

For the model PEs, it was reasoned that in the high EE condition anticipated distress is underestimated more than in the low EE condition, such that experienced distress turns out higher because of heightened emotional engagement. Prediction error results per subject were analyzed to determine whether PEs are higher (i.e. more violations of anticipated distress) for the high EE condition than for the low EE condition. Regression results showed that PEs are significantly predicted by EE (β = -0.41, SE = 0.20, p = 0.04; ref = low EE; low EE mean: -0.91, high EE mean: -0.50; see figure 4d). This suggests that violations of anticipated distress are modulated by EE and that, on average, distress is overestimated more in the low EE condition than in the high EE condition. That the average PE in both conditions is non-zero is in line with the finding by Pollmann and Finkenauer (2009) that people overestimate the impact of an emotional event on another individual's affective state. However, due to the emotional feedback, distress is overestimated less in the high EE condition than in the low EE condition.
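The prediction-error logic can be sketched as a Rescorla-Wagner-style update. This is a deliberate simplification of the RL model described above (which updates a σ parameter per shock level), with hypothetical numbers:

```python
def update_anticipated_distress(anticipated, experienced, alpha):
    """Update anticipated distress from the emotional prediction error
    (EPE = experienced - anticipated distress), scaled by the learning
    rate alpha. Illustrative simplification of the thesis RL model."""
    epe = experienced - anticipated
    return anticipated + alpha * epe, epe

# A subject anticipates distress 3 but experiences 5: the positive EPE
# revises the anticipation upward for the next encounter with that shock.
anticipated, epe = update_anticipated_distress(3.0, 5.0, alpha=0.5)
# anticipated -> 4.0, epe -> 2.0
```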

MRI results

It has been shown that emotional engagement and empathic concern influence moral choice behavior. To test the first hypothesis that brain activity reflects costly moral decisions as affected by different levels of emotional engagement, MRI results were analyzed using a GLM on the Decision (trading off personal gain with harming others) and Video (viewing the consequence of one's decision) events.


Table 1: Decide event of PvG bidirectionally contrasted with the Non-Moral control task.

Region Peak MNI (X, Y, Z) Z-Value P uncorr // FWE
Moral > Non-Moral Decision

V1 L −20 −94 −14 4.53 0.018 // 0.106

ROI t-statistic

Amygdala R 28 −8 −28 2.67 0.008

TPJ R 50 −75 6 2.13 0.042

TPJ L −53 −71 6 2.78 0.009

Non-Moral > Moral Decision

Premotor L −22 2 54 4.71 0.000 // 0.000
SPL L −28 −84 38 4.51 0.000 // 0.000
MOG R 36 −84 32 4.17 0.008 // 0.050
Cerebellum R 26 −72 −42 3.98 0.008 // 0.047
MTG L 62 −56 6 3.83 0.014 // 0.067
Cerebellum L −18 −78 −38 3.66 0.033 // 0.186

Moral vs Non-moral activity

First, the EE conditions were collapsed to determine the difference in brain activity between the moral PvG and non-moral control task. Significant activation was found in the primary visual cortex (V1) for moral compared to non-moral decision making, potentially indicating more salient visual information during the PvG task than during the NM task. In addition, ROI analyses of the right amygdala and bilateral TPJ also showed significantly greater activation for moral versus non-moral decision making. The amygdala and TPJ are regions important for processing emotional and socially significant stimuli (Phelps, 2006). In contrast, activations in the left premotor area, middle temporal gyrus (MTG) and superior parietal lobule (SPL), the right middle occipital gyrus (MOG) and the bilateral cerebellum were greater for non-moral decisions (see Table 1). Activity in these regions could indicate stronger motor imagery during the decision-making event of the non-moral task (Cohen et al., 2014). Whole-brain analysis showed no significant clusters when comparing the video feed of the PvG task with that of the NM task. However, ROI analysis of the left TPJ showed greater activation for the moral than for the non-moral video event. The TPJ is involved in decoding social cues, such as the mental states of others (Young and Saxe, 2008, 2009). Moreover, the opposite contrast activated many regions involved in motor imagery and emotion (lenient threshold; see Table 2).

Emotional engagement

Second, to test whether the emotional engagement condition modulated activity in the ACC, the amygdala and the AI (regions involved in processing emotionally aversive stimuli; Etkin et al., 2011), the two EE conditions were compared to each other in a between-subjects factorial design.

No significant clusters were found for the decision event in the whole-brain analysis, possibly suggesting that subjects engaged the same mental processes while making a decision in the low and the high emotional engagement conditions. However, ROI analysis of the left TPJ did show significant activation for this contrast. This suggests


Table 3: Significant clusters for the emotional engagement contrasts for the decision and video feed.

Region Peak MNI (X, Y, Z) Z-Value P uncorr // FWE

High > Low EE Decision

ROI t-statistic

TPJ L −53 −71 6 2.05 0.049

High > Low EE Video

Lingual/Fusiform gyrus L −10 −78 0 4.48 0.000 // 0.000

Fusiform gyrus R 26 −66 −6 4.19 0.026 // 0.151

ROI t-statistic

AI R 56 24 0 2.51 0.018

TPJ R 52 −40 4 2.34 0.026

Low > High EE Video

Brainstem R 6 −30 −44 4.24 0.049 // 0.268

that mentalizing was stronger in the high EE condition than in the low EE condition. The whole-brain results may partly be explained by the fact that this study used a between-subjects design. There may not be enough distinct brain activity, or too much between-subject noise, between the EE conditions because subjects only experienced one of them and therefore could not compare the emotional impact of the two conditions. In Supplementary Analysis B: Additional fMRI analyses, brain activation for both EE conditions was tested separately. In addition, some significant clusters were found for the video feed. When contrasting the high EE video feed with the low EE video feed, results showed significant clusters in the bilateral fusiform gyri. This can be seen as a manipulation check indicating that subjects were indeed looking at a face or processing facial information in the high EE condition (Grelotti et al., 2002). ROI analysis for this contrast showed significantly more activation in the right anterior insula and the right TPJ for the high EE compared to the low EE condition. When comparing the low EE video feed with the high EE video feed, there was a significant cluster in the brainstem (see Table 3 and Figure S.6). Taken together, these results show that there is more neural activity in brain regions associated with emotional processing and mentalizing when emotional engagement is heightened, mirroring the behavioral results.


Figure 5: ROI results for the direct comparison between high and low emotional engagement for the decision and video events.

A) The significant differences in contrast weight between emotional engagement conditions for the decision and video events. B) ROIs for which the PvG > NM task contrast is significantly higher in the high than in the low EE condition. Left TPJ: x = -53, y = -71, z = 6; right AI: x = 56, y = 24, z = 0; right TPJ: x = 52, y = -40, z = 4.

Table 4: Significant clusters for emotional engagement contrasts for the decide event as modulated by money kept and correlating with EC/PD.

Region Peak MNI (X, Y, Z) Z-Value P uncorr // FWE
High EE > Low EE + EC modulated by money kept

sgACC 8 32 0 4.41 0.034 // 0.168

High EE > Low EE + PD modulated by money kept

PCC 2 −40 16 4.41 0.051 // 0.261

Personal gain vs prosocial behavior

To determine whether brain areas associated with conflict processing are more active for self-interested than for prosocial behavior, choice was added as a parametric modulator. A lower choice indicates a more prosocial decision, because the Decider was willing to give up more money to reduce the shock intensity administered to the Receiver than when choosing one of the higher choice options. No significant clusters were found for more prosocial choices, or when the high EE condition was compared to the low EE condition. Nor were there any significant clusters for the main effects of EC and PD on the parametrically modulated decision event. However, when comparing emotional engagement conditions, adding EC as a second-level covariate of interest resulted in greater activation in the subgenual ACC for the high EE condition compared to the low EE condition for less prosocial behavior at a lenient threshold (p = 0.034; see Table 4 and Figure 6). Moreover, when adding PD as a second-level covariate of interest, the posterior CC showed trending greater activation for the high EE condition compared to the low EE condition. These areas are involved in conflict processing (Etkin et al., 2011, Liston et al., 2006), and these results could therefore mean that the more empathic or distressed individuals are, the more conflict processing occurs when they make more self-interested decisions under heightened emotional engagement.
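The parametric-modulation setup used in these GLM analyses can be sketched as below. `parametric_modulator` is a hypothetical helper, not the actual analysis code; mean-centering the trial-wise modulator before scaling the event regressor (prior to HRF convolution) follows standard SPM practice:

```python
import numpy as np

def parametric_modulator(event_amplitudes, modulator):
    """Mean-center a trial-wise modulator and scale the event regressor
    by it, yielding a parametric-modulation column that is orthogonal
    to the unmodulated event regressor by construction."""
    m = np.asarray(modulator, float)
    return np.asarray(event_amplitudes, float) * (m - m.mean())

# Five decision events with unit amplitude, modulated by the choice made
# on each trial (lower = more prosocial, per the coding described above).
col = parametric_modulator(np.ones(5), [1, 2, 3, 4, 5])
# col -> [-2., -1., 0., 1., 2.]
```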


Figure 6: The beta weight for decision-making activity as modulated by choice/money kept.

Previous distress as modulator for choice activity

According to the caregiving model, distress should promote prosocial choices. Evidence for this relationship was found in the behavioral results, where it was shown that higher distress on the previous trial increases prosocial behavior. Therefore, it was tested how prior distress is encoded on the current trial by parametrically weighting the decision event with previous distress (situational distress rated on the previous trial). Results show significant activations in the dorsomedial PFC, left cerebellum, ACC and right dlPFC (see Table 5). Moreover, to determine whether previous distress might modulate decision-making activity more strongly in the high EE than in the low EE condition, a factorial design was implemented on the parametrically modulated decision event. No significant results were found for previous distress modulating decision-making activity differently across the EE conditions.

It was then explored whether empathic concern or personal distress from the IRI might interact with how previous distress affects moral choice behavior. Results show no significant clusters for EC. However, there is significantly more activity in many areas (see Table 5) when choices are more selfish and subjects have lower personal distress scores.

EPE in the approach pathway of the caregiving model facilitates learning of moral judgements.

The prediction error is defined as the experienced minus the predicted distress. Therefore, if subjects experienced more distress than they anticipated, they have a positive PE. This distress PE, or emotional prediction error (EPE), is thought to facilitate learning of the trade-off between personal gain and harming others (i.e. moral judgements). To test this, PE was added to the video event of the PvG task as a mean-centered parametric modulator. Results showed that there are no significant clusters in the


Table 5: Significant clusters for the decide event as parametrically weighted by previous distress and correlating with decreasing PD scores as a covariate.

Region Peak MNI (X, Y, Z) Z-Value P uncorr // FWE
Moral Decision modulated by previous distress

dmPFC −6 52 34 4.28 0.001 // 0.006

cerebellum L −14 −72 −20 3.98 0.006 // 0.049

ACC −12 38 2 3.75 0.023 // 0.183

dlPFC R 38 18 40 3.57 0.037 // 0.270

Moral Decision + PD modulated by previous distress

IPS R 40 50 46 5.06 0.000 // 0.000
IPS L −42 56 52 4.83 0.000 // 0.001
MTG L −54 −24 −18 4.69 0.000 // 0.001
IFG L −58 16 10 4.54 0.003 // 0.020
IFG L −46 40 −10 4.38 0.042 // 0.221
dmPFC −4 28 46 4.34 0.000 // 0.001
Putamen R 30 −6 6 4.22 0.003 // 0.016
ITG L 60 −42 −8 4.14 0.032 // 0.176
Premotor L −26 12 60 4.08 0.003 // 0.017
IFG R 42 32 12 4.08 0.000 // 0.001
Basal ganglia L −14 −8 −2 3.81 0.032 // 0.174

Table 6: Significant ROIs for video event parametrically weighted to negative PE.

Region Peak MNI (X, Y, Z) t-statistic P value
Video modulated by decreasing PE

VTA 2 −20 16 2.22 0.034

NAcc 6 20 −10 2.18 0.037

sgACC 6 36 −4 2.54 0.016

AI L −36 12 0 2.24 0.032

whole-brain analysis. However, ROI analysis showed significantly more activity in the VTA, NAcc, sgACC and the left anterior insula when the prediction error was lower (see Table 6 and Figure 7). No results were found when adding the PE of the previous trial as a modulator of the decision event. This suggests that an emotional learning signal is present.

Figure 7: ROI results for the video event parametrically weighted by negative PE.

A) The beta weight for the video event parametrically modulated by negative PE. B) ROIs for which the video event is significantly parametrically weighted. VTA: x = 2, y = -20, z = 16; NAcc: x = 6, y = 20, z = -10; sgACC: x = 6, y = 36, z = -4; left AI: x = -36, y = 12, z = 0.


process became apparent: more empathic participants cared more about distress in moral dilemmas that heightened emotional engagement. Although no difference in learning effects due to emotional engagement was found, the presence of emotional prediction errors (EPE) in learning-related brain regions suggests the possibility of learning the trade-off between harming others and personal gain in moral dilemmas through an emotional learning signal. Taken together, these findings demonstrate the interplay between emotional engagement and empathy in adapting moral decision making and highlight a possible role for an emotional learning signal (i.e. the EPE) in learning to make moral decisions.

Greene et al. (2001) already showed the importance of emotional engagement in moral decision making, and FeldmanHall et al. (2015) the relevance of empathy to costly altruism. Here these findings are extended by showing a specific interaction between empathy and emotional engagement in moral dilemmas. That is, empathic individuals act more prosocially and are less willing to harm in moral dilemmas that engage emotional processing, whereas this relationship does not hold for moral dilemmas that engage emotional processing to a lesser extent. These results were further supported by showing that the distress experienced from seeing another individual in pain at one's hands increases as a function of empathy when emotional engagement is heightened. This suggests that empathic individuals are more susceptible to emotional cues, which reduces the willingness to harm (Doherty, 1997, Kang et al., 2017).


Using computational modeling, this was also shown by the Wd parameter. The weight placed on distress is affected by the extent to which moral dilemmas engage emotional processing and by individual differences in empathic concern. The weight placed on distress, in relation to the weight placed on financial gain, captures the internal trade-off between harming others and personal gain when making moral judgements. By showing that the weight placed on distress is higher when emotional processing is engaged to a greater extent, and that this weight increases for more empathic individuals, but only in moral dilemmas with heightened emotional engagement, it is demonstrated that the weight placed on distress represents the willingness to harm. This would then suggest that more empathic individuals care more about distress than less empathic individuals, but only when emotional processing is engaged. Future research should further investigate this relationship. To further illustrate that the weight placed on distress represents the willingness to harm, it was shown that this weight could predict moral decision making and accordingly proves to be the underlying latent variable capturing the trade-off between personal gain and harming others. These results therefore further confirm the role of emotional engagement and empathy in value-based moral decision making. Future research should establish the neural correlates of the weight placed on distress, compared to the weight on financial gain, to further support the role of emotional engagement and empathy in this latent process. It could be that the weight placed on distress is reflected in the brain and interacts with prosocial/egoistic behavior. For example, people might display more activity in brain regions processing conflict, such as the dACC and amygdala (Egner et al., 2008, Etkin et al., 2011), or in the lateral PFC, which is implicated in cognitive control (Ochsner and Gross, 2005), when they place a lot of weight on distress but nevertheless make more egoistic moral judgements. This phenomenon might further interact with emotional engagement and empathy, such that increased emotional processing and empathy might result in even more conflict processing. Furthermore, besides the factors found here to affect moral decision behavior, individual differences in moral judgements might be a product of some people regulating their emotions better. A study by Eisenberg et al. (1994) demonstrated that people with higher perspective-taking scores on the IRI are less emotional because they are better at regulating their emotions. In that case, moral decision making should interact with perspective taking such that for individuals who are worse at regulating their emotions, the emotional engagement manipulation should increase prosocial behavior, whereas for individuals who are better at regulating their emotions, the manipulation should not affect their moral judgements. Future research can explore these hypotheses.


(Preston, 2013). Therefore, the sgACC could reveal the complex interplay between emotional processing and empathy and its underlying neural mechanism in moral decision making. However, this brain region was found at a lenient threshold, and further research is needed to confirm the role of the sgACC in moral decision making. In addition, distress on the previous trial was shown to predict prosocial behavior, which was corroborated by neuroimaging findings showing increased activation in the dmPFC and left cerebellum and trending activation in the ACC and right dlPFC. These findings suggest that higher prior distress results in more conflict monitoring and mentalizing. The dlPFC has also been suggested to be essential for integrating contextual cues to overrule financial gain and help others in need (FeldmanHall et al., 2015). Furthermore, increased activation was found in brain areas related to working memory for less distressed individuals, as indicated by the PD IRI measure, when higher prior distress was experienced. A lower PD score means that subjects experience less discomfort in reaction to others' distress. These results therefore agree with the finding by Greene et al. (2001) that impersonal dilemmas activate working memory areas, assuming that subjects with lower PD scores regard moral dilemmas as less personal.

Surprisingly, when comparing the control task to the PvG task, significant activity in motor and somatosensory areas became apparent. Recent work by Gallo et al. (2018) has shown that disrupting the primary somatosensory cortex (SI) distorts the transformation of visual feedback into an accurate understanding of pain intensity. This area is also involved in feeling pain, suggesting that the SI transforms observed pain into an accurate representation of the victim's pain (Keysers et al., 2010, Lamm et al., 2011). Therefore, activation in the SI could reflect both the finger movement and a simulated, personified representation of pain. Not finding this area, but instead finding it in the opposite contrast, is concerning. However, it could be that activation for the non-moral finger-moving control task was more uniform and therefore elicited clearer activation patterns, whereas the PvG task could have resulted in noisier activation


patterns across participants due to varying emotional responses, and thus non-uniform patterns after group-level statistics. Another possibility is that regions known to represent physical pain do not represent vicarious pain, a topic recently debated due to mixed findings in the literature (Corradi-Dell'Acqua et al., 2016, Krishnan et al., 2016, Zaki et al., 2016). The TPJ activation found in this study would also suggest that vicarious pain representations are reached through mentalizing rather than personified representations of pain.

Contrary to expectations, model comparison showed that subject behavior was not best explained by a model that incorporated learning. Nor were differences in learning rate found between emotional engagement conditions or across individual differences in empathy when investigating the RL model. Together, these findings would suggest a non-learning mechanism for moral decision making that is not affected by emotional engagement or empathy. Interestingly, however, emotional prediction errors (Gilbert and Wilson, 2009) were shown to activate the VTA, NAcc, sgACC and AI. That is, increased BOLD activation was found in these areas when people experienced less distress than anticipated. Previous research by Ayton et al. (2007) and Teper et al. (2011, 2015) has already shown that people are poor predictors of their own emotions about moral judgements, and that learning from one's emotions is theoretically possible (FeldmanHall and Chang, 2018, Velásquez, 1998). These findings suggest that there is an opportunity to learn to minimize emotional prediction errors and thereby accurately learn to trade off personal gain with harm to others in congruence with one's own sense of morally right and wrong behavior. Whereas the PE commonly reflects a discrepancy in reward and expresses a learning signal to maximize reward, in this study the PE signals incongruity in the emotional experience of distress. This means that when people experienced less distress than anticipated, reflected as a negative PE, they experienced this as a form of reward. People might view a negative distress PE as relief from a distressing feeling and therefore as something pleasurable (Leknes et al., 2011). An emotional prediction error might in that case facilitate norm compliance, which is thought to be influenced by moral emotions (FeldmanHall et al., 2018). These brain activations for the EPE are supported by the roles ascribed to these areas in the literature. The NAcc is a major component of the ventral striatum and is associated with mediating reward and satisfaction (Salgado and Kaplitt, 2015). The VTA is also tied to mediating reward, more specifically to the anticipation of reward, and it signals value learning (Ruff and Fehr, 2014). Moreover, Lockwood et al. (2016) found that the sgACC computed a prosocial prediction error for outcomes delivered to others, and Rudebeck et al. (2014) showed that the sgACC is related to positive affect. In addition, Wiech et al. (2013) found that sgACC activation reflects violation of moral norms and aversive feelings, such as guilt due to caring about harming others. These regions are


of morality. We develop our morality from a very young age (Killen and Smetana, 2015) and continue to develop it across a lifetime. The participants in this experiment possibly came into the lab with a strong idea of their moral values, which could not be altered enough to affect their behavior during the experiment. More commonly, probabilities are used to learn associations between actions and rewards in reinforcement learning paradigms, whereas in this study subjects rather learned about their internal model of moral behavior. However, a recent study by Nostro et al. (2020) has shown that learning about moral conflicts in the lab is possible, with a (non-emotional) learning signal in the vmPFC. Another possibility for why the model failed to capture a moral learning effect is that the learning parameter was fit on only 20 trials. Potentially, subjects needed only a subset of the trials to learn about their internal moral model and became consistent in their moral behavior afterwards. A different type of model would be needed to capture these different processes. Future studies could delve further into the role of emotional prediction errors in moral judgements. By creating a design more suitable for capturing learning effects, the influence of emotional engagement and empathy on emotional prediction errors could be explored further. Moreover, this study used a between-subjects design. There may not be enough distinct brain activity, or too much between-subject noise, between the EE conditions because subjects only experienced one of them and therefore could not compare the emotional impact of the two conditions. A within-subject design could shed more light on the differences in brain activity between emotional engagement conditions. Another limitation of this study is its use of a lenient threshold. The results therefore provide preliminary evidence for certain relationships, and future research is needed to confirm the role of the implicated brain regions.

Social norms are ubiquitous but do not always correspond with personal interests, providing fertile ground for moral dilemmas. To understand how people learn to make trade-offs between competing interests in moral dilemmas, and how emotions and personality traits such as empathic concern affect this, this study used the PvG task to


model the learning of trade-offs in moral judgements, revealing a plausible link between emotional prediction errors and learning to make moral trade-offs. These results have implications for understanding the train of thought leading up to moral judgements and the function of emotions in this process. In addition, they could improve our understanding of the emergence of "amoral" behavioral patterns in psychopaths

(Hosking et al., 2017, Koenigs et al., 2012, Young et al., 2012). This study therefore helps pave the way to understanding how emotional prediction errors contribute to learning to trade off conflicting interests in moral dilemmas.


Bates, D., Mächler, M., Bolker, B., and Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1):1–48.

Batson, C. D., O’Quin, K., Fultz, J., Vanderplas, M., and Isen, A. M. (1983). Influence of self-reported distress and empathy on egoistic versus altruistic motivation to help. Journal of personality and social psychology, 45(3):706.

Brett, M., Anton, J.-L., Valabregue, R., and Poline, J.-B. (2002). Region of interest analysis using the marsbar toolbox for spm 99. Neuroimage, 16(2):S497.

Chambers, K. W., McBeath, M. K., Schiano, D. J., and Metz, E. G. (1999). Tops are more salient than bottoms. Perception & psychophysics, 61(4):625–635.

Christopoulos, G. I., Liu, X.-X., and Hong, Y.-y. (2017). Toward an understanding of dynamic moral decision making: Model-free and model-based learning. Journal of Business Ethics, 144(4):699–715.

Cialdini, R. B., Schaller, M., Houlihan, D., Arps, K., Fultz, J., and Beaman, A. L. (1987). Empathy-based helping: Is it selflessly or selfishly motivated? Journal of personality and social psychology, 52(4):749.

Cohen, O., Koppel, M., Malach, R., and Friedman, D. (2014). Controlling an avatar by thought using real-time fMRI. Journal of neural engineering, 11(3):035006.


Corradi-Dell’Acqua, C., Tusche, A., Vuilleumier, P., and Singer, T. (2016). Cross-modal representations of first-hand and vicarious pain, disgust and fairness in insular and cingulate cortex. Nature communications, 7(1):1–12.

Crockett, M. J., Kurth-Nelson, Z., Siegel, J. Z., Dayan, P., and Dolan, R. J. (2014). Harm to others outweighs harm to self in moral decision making. Proceedings of the National Academy of Sciences, 111(48):17320–17325.

Davis, M. H. (1983). Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of personality and social psychology, 44(1):113.

De Waal, F. B. (2008). Putting the altruism back into altruism: the evolution of empathy. Annu. Rev. Psychol., 59:279–300.

Decety, J. (2011). The neuroevolution of empathy. Annals of the New York Academy of Sciences, 1231(1):35–45.

Doherty, R. W. (1997). The emotional contagion scale: A measure of individual differ-ences. Journal of nonverbal Behavior, 21(2):131–154.

Egner, T., Etkin, A., Gale, S., and Hirsch, J. (2008). Dissociable neural systems resolve conflict from emotional versus nonemotional distracters. Cerebral cortex, 18(6):1475– 1484.

Eisenberg, N., Fabes, R. A., Murphy, B., Karbon, M., Maszk, P., Smith, M., O’Boyle, C., and Suh, K. (1994). The relations of emotionality and regulation to dispositional and situational empathy-related responding. Journal of personality and social psychology, 66(4):776.

Eisenberg, N. and Miller, P. A. (1987). The relation of empathy to prosocial and related behaviors. Psychological bulletin, 101(1):91.

Etkin, A., Egner, T., and Kalisch, R. (2011). Emotional processing in anterior cingulate and medial prefrontal cortex. Trends in cognitive sciences, 15(2):85–93.

FeldmanHall, O. and Chang, L. J. (2018). Social learning: emotions aid in optimizing goal-directed social behavior. In Goal-Directed Decision Making, pages 309–330. Elsevier.

FeldmanHall, O., Dalgleish, T., Evans, D., and Mobbs, D. (2015). Empathic concern drives costly altruism. Neuroimage, 105:347–356.

FeldmanHall, O., Dalgleish, T., Thompson, R., Evans, D., Schweizer, S., and Mobbs, D. (2012a). Differential neural circuitry and self-interest in real vs hypothetical moral decisions. Social cognitive and affective neuroscience, 7(7):743–751.


Gallo, S., Paracampo, R., Müller-Pinzler, L., Severo, M. C., Blömer, L., Fernandes-Henriques, C., Henschel, A., Lammes, B. K., Maskaljunas, T., Suttrup, J., et al. (2018). The causal role of the somatosensory cortex in prosocial behaviour. Elife, 7:e32740.

Gilbert, D. T. and Wilson, T. D. (2009). Why the brain talks to itself: Sources of error in emotional prediction. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1521):1335–1341.

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., and Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537):2105–2108.

Grelotti, D. J., Gauthier, I., and Schultz, R. T. (2002). Social interest and the development of cortical face specialization: What autism teaches us about face processing. Developmental Psychobiology: The Journal of the International Society for Developmental Psychobiology, 40(3):213–225.

Haidt, J. and Kesebir, S. (2010). Morality.

Halekoh, U., Højsgaard, S., and Yan, J. (2006). The r package geepack for generalized estimating equations. Journal of Statistical Software, 15/2:1–11.

Hoffman, M. L. (2001). Empathy and moral development: Implications for caring and justice. Cambridge University Press.

Hooker, C. I., Germine, L. T., Knight, R. T., and D’Esposito, M. (2006). Amygdala response to facial expressions reflects emotional learning. Journal of Neuroscience, 26(35):8915–8922.

Hosking, J. G., Kastman, E. K., Dorfman, H. M., Samanez-Larkin, G. R., Baskin-Sommers, A., Kiehl, K. A., Newman, J. P., and Buckholtz, J. W. (2017). Disrupted prefrontal regulation of striatal subjective value signals in psychopathy. Neuron, 95(1):221–231.

Kang, J., Ham, B.-J., and Wallraven, C. (2017). Cannot avert the eyes: reduced attentional blink toward others' emotional expressions in empathic people. Psychonomic bulletin & review, 24(3):810–820.

Keysers, C., Kaas, J. H., and Gazzola, V. (2010). Somatosensation in social perception. Nature Reviews Neuroscience, 11(6):417–428.

Killen, M. and Smetana, J. G. (2015). Origins and development of morality. Handbook of child psychology and developmental science, pages 1–49.

Koenigs, M., Kruepke, M., Zeier, J., and Newman, J. P. (2012). Utilitarian moral judgment in psychopathy. Social cognitive and affective neuroscience, 7(6):708–714.

Krishnan, A., Woo, C.-W., Chang, L. J., Ruzic, L., Gu, X., Lopez-Sola, M., Jackson, P. L., Pujol, J., Fan, J., and Wager, T. D. (2016). Somatic and vicarious pain are represented by dissociable multivariate brain patterns. Elife, 5:e15166.

Kuznetsova, A., Brockhoff, P. B., and Christensen, R. H. B. (2017). lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software, 82(13):1–26.

Lamm, C., Decety, J., and Singer, T. (2011). Meta-analytic evidence for common and distinct neural networks associated with directly experienced pain and empathy for pain. Neuroimage, 54(3):2492–2502.

Leknes, S., Lee, M., Berna, C., Andersson, J., and Tracey, I. (2011). Relief as a reward: hedonic and neural responses to safety from pain. PloS one, 6(4):e17870.

Liston, C., Matalon, S., Hare, T. A., Davidson, M. C., and Casey, B. (2006). Anterior cingulate and posterior parietal cortices are sensitive to dissociable forms of conflict in a task-switching paradigm. Neuron, 50(4):643–653.

Lockwood, P. L., Apps, M. A., Valton, V., Viding, E., and Roiser, J. P. (2016). Neurocomputational mechanisms of prosocial learning and links to empathy. Proceedings of the National Academy of Sciences, 113(35):9763–9768.

Nostro, A., Ioumpa, K., Paracampo, R., Gallo, S., Fornari, L., De Angelis, L., Gentile, A., Spezio, M., Keysers, C., and Gazzola, V. (2020). Neuro-computational mechanisms of action-outcome learning under moral conflict. bioRxiv.

Obeso, I., Moisa, M., Ruff, C. C., and Dreher, J.-C. (2018). A causal role for right temporo-parietal junction in signaling moral conflict. Elife, 7:e40671.


Preston, S. D. (2013). The origins of altruism in offspring care. Psychological bulletin, 139(6):1305.

Preston, S. D. and De Waal, F. B. (2002). Empathy: Its ultimate and proximate bases. Behavioral and brain sciences, 25(1):1–20.

Preston, S. D. and de Waal, F. B. (2017). Only the PAM explains the personalized nature of empathy. Nature Reviews Neuroscience, 18(12):769.

R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.

Rescorla, R. A. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. Current research and theory, pages 64–99.

Rizzolatti, G., Fadiga, L., Gallese, V., and Fogassi, L. (1996). Premotor cortex and the recognition of motor actions. Cognitive brain research, 3(2):131–141.

Rudebeck, P. H., Putnam, P. T., Daniels, T. E., Yang, T., Mitz, A. R., Rhodes, S. E., and Murray, E. A. (2014). A role for primate subgenual cingulate cortex in sustaining autonomic arousal. Proceedings of the National Academy of Sciences, 111(14):5391–5396.

Ruff, C. C. and Fehr, E. (2014). The neurobiology of rewards and values in social decision making. Nature Reviews Neuroscience, 15(8):549–562.

Salgado, S. and Kaplitt, M. G. (2015). The nucleus accumbens: a comprehensive review. Stereotactic and functional neurosurgery, 93(2):75–93.

Saxe, R. and Kanwisher, N. (2003). People thinking about thinking people: the role of the temporo-parietal junction in “theory of mind”. Neuroimage, 19(4):1835–1842.

Schoemaker, P. J. (1982). The expected utility model: Its variants, purposes, evidence and limitations. Journal of economic literature, pages 529–563.

Schultz, W., Dayan, P., and Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275(5306):1593–1599.

Singer, T. and Klimecki, O. M. (2014). Empathy and compassion. Current Biology, 24(18):R875–R878.

Singer, T., Seymour, B., O'Doherty, J., Kaube, H., Dolan, R. J., and Frith, C. D. (2004). Empathy for pain involves the affective but not sensory components of pain. Science, 303(5661):1157–1162.

Sutton, R. S., Barto, A. G., et al. (1998). Introduction to reinforcement learning, volume 135. MIT press Cambridge.

Teper, R., Inzlicht, M., and Page-Gould, E. (2011). Are we more moral than we think? exploring the role of affect in moral behavior and moral forecasting. Psychological Science, 22(4):553–558.

Teper, R., Zhong, C.-B., and Inzlicht, M. (2015). How emotions shape moral behavior: Some answers (and questions) for the field of moral psychology. Social and Personality Psychology Compass, 9(1):1–14.

The Mathworks, Inc. (2018). MATLAB version 9.5 (R2018b). Natick, Massachusetts.

Van Rossum, G. and Drake Jr, F. L. (1995). Python tutorial. Centrum voor Wiskunde en Informatica, Amsterdam, The Netherlands.

Velásquez, J. (1998). Modeling emotion-based decision-making. Emotional and intelligent: The tangled knot of cognition, pages 164–169.

Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., van der Walt, S. J., Brett, M., Wilson, J., Jarrod Millman, K., Mayorov, N., Nelson, A. R. J., Jones, E., Kern, R., Larson, E., Carey, C., Polat, İ., Feng, Y., Moore, E. W., VanderPlas, J., Laxalde, D., Perktold, J., Cimrman, R., Henriksen, I., Quintero, E. A., Harris, C. R., Archibald, A. M., Ribeiro, A. H., Pedregosa, F., van Mulbregt, P., and SciPy 1.0 Contributors (2020). SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17:261–272.

Wallach, W., Franklin, S., and Allen, C. (2010). A conceptual and computational model of moral decision making in human and artificial agents. Topics in cognitive science, 2(3):454–485.


Young, L. and Saxe, R. (2009). An fMRI investigation of spontaneous mental state inference for moral judgment. Journal of cognitive neuroscience, 21(7):1396–1405.

Zaki, J., Davis, J. I., and Ochsner, K. N. (2012). Overlapping activity in anterior insula during interoception and emotional experience. Neuroimage, 62(1):493–499.

Zaki, J., Wager, T. D., Singer, T., Keysers, C., and Gazzola, V. (2016). The anatomy of suffering: understanding the relationship between nociceptive and empathic pain. Trends in cognitive sciences, 20(4):249–259.
