
Face Perception in Context:

The Neural Representation of Value Learning in the Human Brain

Mona Zimmermann

11099119

Internship Report 2020

Scholte Lab


Abstract

The context in which we interact with other humans can influence how we perceive

their faces and ultimately how we behave towards our peers. One such context is affective

learning, which is the process of assigning value to a face according to past (rewarding or

punishing) outcomes associated with that face (Bliss-Moreau, Barrett & Wright, 2008). To

date, it is unclear how exactly the processing of faces is influenced by such learning: how the brain tracks the value of faces, and at what stages of face processing, i.e. at a perceptual or higher level, such value tracking is evident. To address these questions, this

functional MRI study used representational similarity analysis (RSA) to test how five models

of value learning relate to the neural representations of faces in 10 ROIs (number of subjects:

5). Results showed that the Rescorla Wagner (RW) rule frequently fit the neural data best in

areas such as the MFG, SFG and PCC. Interestingly, in the visual cortex (OCC) as well, the representational structure of two models of value learning was able to explain the neural

data. Results are discussed in light of recent studies involving the RW rule and hypotheses of

enhanced visual processing of beneficial faces.

Introduction

Humans are experts at perceiving faces. Within milliseconds of viewing a face we are

able to understand a person’s emotional state (De Sonneville et al., 2002; Streit et al., 2003),

can determine whether he or she is familiar or not (Dobs, Isik, Pantazis & Kanwisher, 2019)

and retrieve or make inferences about the characteristics of that person (Todorov, Gobbini, Evans & Haxby, 2007). This rapid processing enables us to interact with our peers in a

sensible and dynamic way and is crucial for effective social communication (Jack & Schyns,

2015).


The context in which we interact with others has an effect on how we express ourselves and perceive our peers’ faces (Barrett, Adolphs, Marsella, Martinez & Pollak, 2019;

Martinez, 2019; but see Stein et al., 2017). Here, context is defined as the conditions that

make up and influence the interactions between people. For example, the physical

environment is a rich source of information for us to interpret another person’s emotional

state (Martinez, 2019). Likewise, the knowledge we form about another person, or learning the affective value of another person based on previous (positive/negative) experiences (i.e. affective learning; Bliss-Moreau, Barrett & Wright, 2008), might influence the way in which

we perceive and interpret another person’s face (e.g. Suess, Rabovsky & Rahman, 2015).

Therefore, learning about another person – e.g. learning about their characteristics or forming

associations with a person through experience – can be one such form of context. The

overarching aim of the research presented here was to understand how context in the form of

affective learning influences face perception.

Behavioral and neuroimaging studies have shown that learning about another person’s

past positive or negative behaviors or associating specific experiences with a person has an

effect on the perception and processing of their face (e.g. Bliss-Moreau et al., 2008; Morel,

Beaucousin, Perrin & George, 2012; Petrovic, Kalisch, Pessiglione, Singer & Dolan, 2008;

Suess et al., 2015). For example, in a study conducted by Suess et al. (2015), subjects viewed

neutral faces and rated the valence of their facial expressions before and after associating

positive, negative and neutral behaviors with that person. Neutral faces that were paired with

negative stories were rated as depicting significantly more negative facial expressions than

neutral faces paired with neutral stories (Suess et al., 2015). This effect illustrates that

learning about a person’s past behavior influences the subject’s perception and processing of that person’s facial expressions (Suess et al., 2015). Similarly, studies using

classical conditioning paradigms pairing neutral faces with aversive stimuli have found both

behavioral and neural evidence for a changed processing of such faces after learning

(Petrovic et al., 2008; Visser, Scholte & Kindt, 2011). For example, Petrovic et al. (2008) found

that subjects rated faces that were paired with an aversive stimulus as less likeable after

conditioning, while rating faces that were not paired with an aversive stimulus as more

likeable after conditioning. This effect was present even in participants who did not remember

which face was paired with an aversive stimulus (Petrovic et al., 2008). Furthermore, the

neural activations in areas involved in the emotional processing of faces changed as a

function of value attributed to such faces after classical conditioning (Petrovic et al., 2008).

Overall, these findings indicate that context in the form of affective learning has an

effect on the processing of faces that is evident on both the behavioral and neural level

(Petrovic et al., 2008; Suess et al., 2015). However, the specific mechanisms by which

context influences such information processing are still poorly understood (Petrovic et al.,

2008). Firstly, it is unclear how the brain calculates and keeps track of the value associated

with a specific face (Petrovic et al., 2008). Secondly, it remains to be elucidated how such value attribution is encoded in the brain, at what level of processing, and how it thereby influences the neural representation of faces (i.e. changes information processing). Therefore, this study set out to answer

the following research question:

How is the value of faces computed and updated during affective learning and what

are the neural correlates of such mechanisms?

To answer this question, the multivariate analysis technique of representational

similarity analysis (RSA) was used. This method makes it possible to investigate the manner in which the brain represents information. This representation can then be compared to models of the information processing underlying affective learning and face perception (Kriegeskorte & Kievit, 2013).

Methods

The research presented here was part of the ‘FEED: Facial expression encoding and

decoding’ project and used data obtained from the reinforcement learning session of that

project.

Participants

Thirteen healthy subjects (six males and seven females) with normal or corrected-to-normal vision participated in this study. All subjects gave written consent and were screened

before scanning to ensure their safety. Data analysis focused on five subjects (two males,

three females) due to time constraints of the project.

Procedure

Subjects had to do a reinforcement learning task twice, once outside the scanner (the

“offline” session) and a day later in the scanner (the “online” session), in order to investigate

the potential difference between the temporally distant and immediate effects of value

learning on face perception (which is not the topic of the current study). In the online session, subjects did the task in a 7T MRI scanner to measure the blood-oxygenation level dependent (BOLD) signal.

Materials

Four faces were used as stimuli in the reinforcement learning task. The faces were

generated with a computer graphics toolbox developed by Glasgow University (Yu, Garrod

& Schyns, 2012). This toolbox contains 3D photos of scanned faces of people. The toolbox

makes it possible to apply the movement of different action units (AUs) of the face to these 3D

photos. AUs are (groups of) muscles that underlie specific facial movements such as raising

the upper lip (Barrett et al., 2019). In this task four faces were used for which no AUs were

activated (i.e. faces without any dynamic expression). The set of faces used in the task

differed per subject. In a rating session before the task, subjects rated each face on the

dimension of valence. The four faces that differed the least in their valence ratings were

chosen for each subject to control for this possible covariate.
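To make this selection step concrete, the following is a minimal sketch in Python (the language used for all analyses below). The face labels and ratings are hypothetical, and “differing the least” is operationalized here as the smallest standard deviation of the four ratings, which is an assumption:

```python
from itertools import combinations
import numpy as np

# Hypothetical valence ratings for a pool of candidate faces (one subject).
ratings = {'face_a': 4.1, 'face_b': 4.3, 'face_c': 3.9,
           'face_d': 4.2, 'face_e': 2.5, 'face_f': 5.8}

# Choose the set of four faces whose valence ratings differ the least,
# here taken to mean the smallest spread (standard deviation) of ratings.
best_set = min(combinations(ratings, 4),
               key=lambda faces: np.std([ratings[f] for f in faces]))
print(best_set)  # -> ('face_a', 'face_b', 'face_c', 'face_d')
```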

Task

This experiment used a “contextual two-armed bandit” reinforcement learning task, in

which participants had to learn the optimal action (out of two choices) in different “contexts”

(or “states”). The contexts, in this experiment, were cued by the presentation of the different

faces. In each trial, participants were randomly presented with one of the four faces and had

to make a choice between pressing one of two keys (associated with the right index and

middle finger). Each face was shown for 1200 ms. After each key press, participants received

feedback on whether they won or lost money (1 Euro) or whether their balance remained the

same. The goal of the task was for participants to associate half of the faces with obtaining

money (‘rewarding faces’) and the other half with losing money (‘punishing faces’). Each


key was associated with a certain chance of winning money (for the rewarding faces) or

losing money (for punishing faces), which stayed the same throughout the experiment. For

example, for one of the two rewarding faces, pressing the left key was associated with

winning money in 90% of the trials, while not winning anything in 10% of the trials. The other

key was associated with opposite chances: in 10% of the trials the participant would win

money, while in 90% of the trials the participant would not win anything. The goal was to

learn which key for each face would give the best outcome most of the time and ultimately to

associate two faces with positive outcomes and the other two faces with negative outcomes.

For rewarding faces the best outcome would mean winning money most of the time, while

for punishing faces this would mean maintaining one’s current balance and avoiding losing

money. For one of the two faces in each category (reward and punishment) it was harder to

learn that contingency as the chances of winning or losing were 60 to 40 instead of 90 to 10

(see Figure 1 for a schematic visualization of the task). Overall, there were two factors to associate

with the faces: valence (reward vs. punishment) and uncertainty (faces associated with 90/10 vs. 60/40 outcome contingencies).

Computational Models

Several models of value computation and tracking were developed and compared to

each other in their ability to explain the neural data. Note that, unlike most behavioral

reinforcement learning models, the models in the current study aim to estimate the value of the stimulus (often called the state value) rather than that of the actions (often called the action value).

The models included models used in the reinforcement learning literature (e.g.

Rescorla-Wagner Rule; Rescorla, 1972) and simpler models that were developed for this study.

Generally, the models output value on a trial-by-trial basis as a function of the rewards,

punishments or neutral outcomes received for a specific face. Because the models are based on this shared premise, it is not surprising that they correlate highly (see Figure 2).

Nevertheless, their specific structure and mechanisms do differ in certain aspects and we

therefore deemed it interesting to investigate whether one of the models actually captures the

way in which the brain encodes and keeps track of value best. The five models below were chosen because they were the least correlated ones from the initial pool of models.


Model 1

Model 1 operationalizes value updating and tracking as summing up rewards and

punishments received over the course of the experiment for each face (for value

developments per subject see Figure 3). Neutral outcomes are disregarded, and do not count

towards the value development:

$$V = \sum_{t} R(t), \qquad R(t) = \begin{cases} +1 & \text{if the outcome for the face is } +1 \text{ Euro} \\ -1 & \text{if the outcome for the face is } -1 \text{ Euro} \\ 0 & \text{if the outcome is neutral } (0 \text{ Euro}) \end{cases}$$

Equation 1 Equation for Model 1, where t = trial, R(t) = outcome received for the face at trial t; neutral outcomes do not change the value.

Figure 3 Value development for Model 1 per subject
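A minimal sketch of Model 1 in Python (the function name and the +1/−1/0 Euro outcome coding are illustrative):

```python
import numpy as np

def model1_values(outcomes):
    """Model 1: cumulative value of a face over trials.

    outcomes: per-trial outcomes for one face, coded +1 (reward),
    -1 (punishment) or 0 (neutral). Because neutral outcomes are coded
    as 0, they are effectively disregarded by the running sum.
    """
    return np.cumsum(outcomes)

print(model1_values([1, 0, 1, -1]))  # -> [1 1 2 1]
```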

Model 2

In Model 2, value is again operationalized as the sum of rewards and punishments received

for a face. However, in this model neutral outcomes do have an influence on the value

development. This influence is operationalized as a form of ‘forgetting’, in which after

receiving the neutral outcome the value is updated in the direction of the initial value of a

face (i.e. in the direction of 0). For rewarding faces this means that the value of a face goes

down, while for punishing faces the value of a face rises. This computation also incorporates

our assumption that a neutral outcome for rewarding faces might be perceived as confusing

or more negative than receiving a reward. For punishing faces this omission of punishment

however might be perceived as something rewarding in itself. Perceiving the omission of

punishment as rewarding has been previously found in another study (Seymour et al., 2005 as

cited in Petrovic et al., 2008). To set the weight of confusion following neutral outcomes, a

parameter search was conducted. The parameters were set using a form of cross-validation in

which Subject 05’s data served as ‘training set’ to set the parameters. The other subjects’ data

served as a form of independent ‘testing set’. This avoided overfitting given that the data

from different subjects are independent.

Different parameter settings for the different regions of interest (ROIs) were tested for their optimal outcome in the analysis. Then, the median of the two optimal parameter spaces over all ROIs was taken. We acknowledge that this method of choosing

parameters might not be the most optimal one as such parameters might differ between ROIs

and/or subjects. However, we decided to take the median of the parameters to make further

analysis more interpretable and to avoid overfitting to some extent. Furthermore, not enough

data was available to do an exhaustive search of the parameter space. The final medians of

the optimal parameter spaces were -0.6 for the rewarding faces and 0.35 for the punishing

faces:

$$V = \sum_{t} R(t)$$

$$R(t) = \begin{cases} 1 & \text{if the outcome for a rewarding face is } 1 \text{ Euro} \\ -0.6 & \text{if the outcome for a rewarding face is } 0 \text{ Euro} \\ -1 & \text{if the outcome for a punishing face is } -1 \text{ Euro} \\ 0.35 & \text{if the outcome for a punishing face is } 0 \text{ Euro} \end{cases}$$

Equation 2 Equation for Model 2, where t = trial, R(t) = outcome added/subtracted from the face value at trial t.
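A sketch of Model 2 under the same coding, using the neutral-outcome weights (−0.6 and 0.35) reported above; function and variable names are illustrative:

```python
import numpy as np

def model2_values(outcomes, face_is_rewarding,
                  w_neutral_reward=-0.6, w_neutral_punish=0.35):
    """Model 2: like Model 1, but neutral outcomes (0 Euro) pull the value
    back towards its initial value of 0 ('forgetting'/'confusion')."""
    increments = []
    for r in outcomes:
        if r != 0:
            increments.append(r)                 # reward (+1) or punishment (-1)
        elif face_is_rewarding:
            increments.append(w_neutral_reward)  # omitted reward: confusing
        else:
            increments.append(w_neutral_punish)  # omitted punishment: rewarding
    return np.cumsum(increments)
```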

Model 3

In Model 3, the number of trials since one last saw the current face has an influence on the weight of the new reward or

punishment one receives for that face. This model is based on the idea that not having seen a

face for a longer time might slow down the value learning for that face (i.e. leads to

forgetting, flattens the slope of value development) which is operationalized through a lower

weight assigned to the current reward or punishment received. Having seen the same face in

subsequent trials might lead to a stronger learning of the contingency of that face (i.e. the

value), therefore the reward or punishment might have a higher weight:

$$V = \sum_{t} \frac{1}{t_s} \cdot R(t)$$

Equation 3 Equation for Model 3, where t = trial, t_s = number of trials since the face was last seen, R(t) = usual outcome for the face (i.e. rewarding face (+1 €) / punishing face (−1 €) / neutral outcome (0 €)) at trial t.

Figure 5 Value development for Model 3 per subject

Rescorla Wagner Rule

The fourth model used in this study was the Rescorla-Wagner Rule (RW; Rescorla, 1972):

$$Q(t+1) = Q(t) + \alpha \cdot \delta(t)$$

$$\delta(t) = R(t) - Q(t)$$

Equation 4 Rescorla-Wagner Rule, where t = trial, Q(t) = value at trial t, α = learning rate, δ = prediction error, R(t) = received outcome at trial t.


In this model of associative learning, the value of each stimulus (Q) is updated according to

the reward/punishment (R) following the stimulus and the expected value of that stimulus

that was learned up until then, by including a prediction error term in the model

(𝛿). Furthermore, a learning rate term (𝛼) assigns a weighting to the prediction error to

update the face’s value (Rescorla, 1972). It is important to mention that we were not able to

fit the free parameters of the model (including the learning rate) in the way usually done, i.e. by using behavioral data, as we are investigating state values instead of action values.

Therefore, even though this learning rate might differ between participants, in this study one

learning rate was set for all subjects, determined in the same way as described above for

Model 2. The median of the parameters yielding the most optimal results for all subjects was

α = 0.07. Previous conditioning studies have often reported a learning rate of 0.1 (Petrovic et al., 2008).
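A sketch of the trial-by-trial RW update with the learning rate reported above (α = 0.07); names are illustrative:

```python
def rescorla_wagner_values(outcomes, alpha=0.07, q_init=0.0):
    """Rescorla-Wagner rule: Q(t+1) = Q(t) + alpha * delta(t),
    with prediction error delta(t) = R(t) - Q(t)."""
    q, values = q_init, []
    for r in outcomes:
        delta = r - q            # prediction error
        q += alpha * delta       # update weighted by the learning rate
        values.append(q)
    return values
```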

Figure 6 Value development for the Rescorla Wagner model per subject

Leaky Integrator Model

The Leaky Integrator model was the fifth model included in this study to describe value learning. In this model, the tracking of value is imagined to be leaky, in the sense that events that happened a longer time ago add to the value of a stimulus to a lesser extent than events that have just happened (Sugrue et al., 2004). Therefore, the rewards/punishments that were received earlier during learning are weighted less and less over the course of learning.

$$V = \sum_{t} R(t) \cdot e^{-t_s}$$

Equation 5 Leaky Integrator model, where t = trial, R(t) = outcome (i.e. reward (+1 €) / punishment (−1 €) / neutral (0 €)), t_s = number of trials between the current trial and trial t.

Figure 7 Value development for the Leaky Integrator model per subject
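A sketch of the Leaky Integrator computation, assuming the exponential weighting e^(−t_s) reconstructed above (i.e. the most recent outcome has weight 1 and older outcomes decay); names are illustrative:

```python
import numpy as np

def leaky_integrator_values(outcomes):
    """Leaky Integrator: V after trial t is the sum of all past outcomes,
    each weighted by exp(-t_s), where t_s is the number of trials between
    the current trial and the trial on which the outcome was received."""
    outcomes = np.asarray(outcomes, dtype=float)
    values = []
    for t in range(1, len(outcomes) + 1):
        ts = np.arange(t - 1, -1, -1)            # t_s for each past trial
        values.append(float(np.sum(outcomes[:t] * np.exp(-ts))))
    return values
```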

Functional Magnetic Resonance Imaging

Functional Image acquisition

Functional magnetic resonance (fMRI) images were acquired with a Philips 7T MRI

scanner. A 32-channel receive coil and an 8-channel multitransmit RF coil were used. Head

motion was reduced by using custom 3D printed head cases. Functional T2*- weighted

sequences during the reinforcement learning session were acquired in five runs using 3D

gradient-echo, echo-planar imaging over the entire brain with a TR of 1.317 s (TE: 17 ms,

FOV: 175 x 200 x 200 mm, 320 volumes per run, voxel size: 1.8 x 1.786 x 1.786 mm). Apart

from the reinforcement learning session, a functional localizer session was conducted to

determine the functional ROI Fusiform Face Area (FFA) on the subject-level. The session

consisted of three (Subject-03) to four runs (other subjects) in which 232 volumes with a TR

of 1.32 seconds of T2*-weighted images were obtained using 3D gradient-echo echo planar

imaging over the full brain (same settings as described above for the functional images


acquired during the reinforcement learning task). During this session, participants did a

one-back task in which several photos of faces, houses, bodies and scrambled scenes were shown

after one another. Subjects had to indicate with a button press when the same photo was shown twice in a row.

Data Analysis

All data was preprocessed and analyzed in Python 3.7.

Representational Similarity Analysis (RSA)

As mentioned in the introduction, data analysis focused on RSA as its main technique. RSA is a multivariate analysis technique that makes it possible to examine the way in which the brain

represents information through patterns of activity. The technique can help the development

of theories about the brain’s information processing mechanisms by comparing the

representational ‘structure’ of neural data with the ‘structure’ of models of information

processing (such as value tracking) (Kriegeskorte & Kievit, 2013; Popal, Wang & Olson,

2019). To do so, so-called ‘representational geometries’ are examined and compared

to each other. When imagining neural patterns as points in a coordinate system, where the

coordinates represent the (de)activations of voxels, the representational geometries are

uncovered by computing the distances/differences between these patterns in space and

visualizing these in a representational dissimilarity matrix (RDM) (Kriegeskorte & Kievit,

2013; Popal et al. 2019). RDMs are constructed of the distance values between each of the

patterns. Common measures to examine the dissimilarity between these patterns are, for example, the Euclidean, correlation and cosine distances. In this study, the cosine distances between the activity patterns of all trials (i.e. every face) were computed to yield the RDM.

In the same manner, the representational geometries of behavioral data and models

can be constructed by computing the distances between such data points. In this case, these are referred to as ‘feature’ RDMs. In this study, such feature RDMs represent

the differences between the values that were computed by the models of value tracking for all

of the trials.

Finally, these neural representational geometries and the feature representational

geometries can be compared to understand to what extent the theory of information

processing (in this case value tracking and updating) - as operationalized by the behavioral

models - might be implemented in the brain (see Figure 8 for schematic overview of RSA).

For this study, this might ultimately give insight into the way in which the brain implements

value learning and how such value learning affects face perception processes.
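The core of this pipeline can be sketched in a few lines of Python (toy data; the real analysis uses the estimated trial-by-voxel patterns per ROI and the trial-wise model values):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
patterns = rng.standard_normal((200, 500))  # trials x voxels (toy data)
model_values = rng.standard_normal(200)     # one model value per trial (toy)

# A. Neural RDM: pairwise cosine distances between the trial patterns.
neural_rdm = squareform(pdist(patterns, metric='cosine'))

# B. Feature RDM: pairwise Euclidean distances between the model values.
feature_rdm = squareform(pdist(model_values[:, None], metric='euclidean'))

# C. Compare the lower triangles (RDVs) with a Spearman rank correlation.
tril = np.tril_indices_from(neural_rdm, k=-1)
rho, _ = spearmanr(neural_rdm[tril], feature_rdm[tril])
```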



Figure 8: Schematic visualization of RSA. A. Stimuli (here: faces) are represented as patterns of activity in the brain

which are captured using fMRI and pattern estimation methods to obtain a pattern matrix (of size Trials x Voxels). The

(dis) similarity of these patterns or representations can then be calculated to compare the representations to each other and

over time. This is captured in an RDM that is symmetric about the diagonal. B. Behavioral models of, for example, value

learning can also be investigated in their (dis)similarity by constructing a model RDM from the model’s output for each

trial. C. The dissimilarity structures of the model RDM and the Neural RDMs can be compared to get insight into the

brain’s information processing. To do so, representational dissimilarity vectors (RDVs) can be created and correlated. If they

correlate highly, their structures are similar. This can tell whether the brain for example represents value development in a

similar manner as operationalized by the model.


Preprocessing of fMRI images

FMRIPREP version stable (Esteban et al., 2019; fMRIPrep), a Nipype (Gorgolewski

et al., 2011; 2017) based tool was used to perform preprocessing. Each T1w (T1-weighted)

volume was corrected for INU (intensity non-uniformity)

using N4BiasFieldCorrection v2.1.0 (Tustison et al., 2010) and skull-stripped

using antsBrainExtraction.sh v2.1.0 (using the OASIS template). Brain surfaces were

reconstructed using recon-all from FreeSurfer v6.0.1 (Dale, Fischl & Sereno, 1999), and the

brain mask estimated previously was refined with a custom variation of the method to

reconcile ANTs-derived and FreeSurfer-derived segmentations of the cortical gray-matter of

Mindboggle (Klein et al., 2017). Spatial normalization to the ICBM 152 Nonlinear

Asymmetrical template version 2009c (Fonov, Evans, McKinstry, Almli & Collins, 2009)

was performed through nonlinear registration with the antsRegistration tool of ANTs v2.1.0

(Avants, Epstein, Grossman & Gee, 2008), using brain-extracted versions of both T1w

volume and template. Brain tissue segmentation of cerebrospinal fluid (CSF), white-matter

(WM) and gray-matter (GM) was performed on the brain-extracted T1w using fast (Zhang,

Brady & Smith, 2001; FSL v5.0.9).

Functional data was motion corrected using mcflirt (Jenkinson, Bannister, Brady &

Smith, 2002; FSL v5.0.9). Distortion correction was performed using an implementation of

the TOPUP technique (Andersson, Skare & Ashburner, 2003) using 3dQwarp (Cox, 1996;

AFNI v16.2.07). This was followed by co-registration to the corresponding T1w using

boundary-based registration (Greve & Fischl, 2009) with six degrees of freedom,

using bbregister (FreeSurfer v6.0.1). Motion correcting transformations, field distortion

correcting warp, BOLD-to-T1w transformation and T1w-to-template (MNI) warp were

concatenated and applied in a single step using antsApplyTransforms (ANTs v2.1.0) using

Lanczos interpolation.


Physiological noise regressors were extracted applying CompCor (Behzadi, Restom,

Liau & Liu, 2007). Principal components were estimated for the two CompCor variants:

temporal (tCompCor) and anatomical (aCompCor). A mask to exclude signal with cortical

origin was obtained by eroding the brain mask, ensuring it only contained subcortical

structures. Six tCompCor components were then calculated including only the top 5%

variable voxels within that subcortical mask. For aCompCor, six components were calculated

within the intersection of the subcortical mask and the union of CSF and WM masks

calculated in T1w space, after their projection to the native space of each functional run.

Frame-wise displacement (Power et al., 2014) was calculated for each functional run using

the implementation of Nipype.

Many internal operations of FMRIPREP use Nilearn (Abraham et al., 2014),

principally within the BOLD-processing workflow. For more details of the pipeline see

https://fmriprep.readthedocs.io/en/stable/workflows.html.

Denoising and Pattern estimation

As mentioned above, RSA is based on the premise that the patterns of (de)activation

of several elements (e.g. neurons) of the brain serve to represent information (Kriegeskorte &

Kievit, 2013). Therefore, the first step of RSA in this study was to estimate the patterns of

activity that are associated with a specific stimulus. Before estimating the patterns of activity,

it was necessary to thoroughly denoise the data due to relatively high levels of

physiology-related noise in 3D gradient-echo EPI sequences (Reynaud et al., 2007) and strong pattern drift, in which the patterns of trials close in time are sequentially correlated (Visser et al., 2016). When pattern drift is present, the patterns of

different trials are correlated. This can be due to properties of the experimental design, for

example if the BOLD responses overlap between trials. In this study the ISIs between trials

were appropriately long to avoid such overlap; however, initial inspection of the neural data showed that pattern drift was nevertheless present.

Therefore, a PCA-based denoising algorithm developed by Snoek & Knapen (in

prep.; https://github.com/lukassnoek/pybest) was used to denoise the neural data which

helped to alleviate the drift. Generally, this algorithm applies a voxel-specific denoising

paradigm that leads to the optimal denoising of each voxel. The first step in the procedure

was to high-pass filter the neural data with a discrete cosine transform set to remove

low-frequency noise (cutoff value was 0.01Hz). Then, the nuisance factors (capturing head

motion and physiology-related factors) that caused most of the noise present in each voxel

were identified using principal component analysis (PCA) and simple linear regression (OLS) analyses. In a next step, this set of noise components was then regressed out for each

voxel using OLS.

To finally estimate the neural patterns on a trial-by-trial basis, a least-squares all

estimator was used, which fits a general linear model (GLM) with each trial as separate

regressor. Additionally, this model included two regressors modeling the button press to give a response and the feedback received after each trial, to account for any

brain activity due to these factors. Before fitting the estimator, the design matrix (DM) was

high-passed filtered using a discrete-cosine transform set (cutoff value was 0.01 Hz). Then

the estimator was fit voxel-wise to account for the voxel-specific denoising approach.

Subsequently, the neural patterns were whitened to remove correlations that might be due to the DM, by using the covariance matrix of the DM (Soch et al., 2020). As a last step,

the patterns were multivariate noise normalized (MNN). MNN helps to scale the amount of


activation/effect of a voxel by its noise (i.e., using the diagonal of the noise covariance

matrix) in order to down-value noisier voxels in comparison to less noisy voxels.

Furthermore, MNN decorrelates the patterns (i.e., using the off-diagonal of the noise

covariance matrix).
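A sketch of MNN, assuming the noise covariance is estimated from GLM residuals (how the covariance was estimated is not detailed above, so this is an assumption; the shrinkage towards the diagonal is added here for numerical stability):

```python
import numpy as np
from scipy.linalg import sqrtm

def multivariate_noise_normalize(patterns, residuals, shrinkage=0.1):
    """Whiten trial-by-voxel patterns by the inverse square root of the
    voxel noise covariance, down-weighting noisy voxels (diagonal) and
    decorrelating voxels (off-diagonal)."""
    cov = np.cov(residuals, rowvar=False)                  # voxels x voxels
    cov = (1 - shrinkage) * cov + shrinkage * np.diag(np.diag(cov))
    whitener = np.linalg.inv(np.real(sqrtm(cov)))
    return patterns @ whitener
```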

Patterns were estimated for ten regions of interest (ROIs) that were chosen based on

previous research that showed their involvement in learning, tracking of value and face

perception (Jahfari, Theeuwes & Knapen, 2020; Visser et al., 2011; Wang et al., 2020). The

chosen ROIs were: amygdala (AMG), anterior and posterior cingulate cortex (ACC, PCC),

orbitofrontal cortex (OFC), superior frontal gyrus (SFG), middle frontal gyrus (MFG),

ventral striatum (vStriatum), fusiform face area (FFA), occipital cortex (OCC) and the insula.

All masks except the FFA mask were subject-specific structural delineations of the regions constructed

with the software package ‘Freesurfer’. The FFA mask was functionally defined by

intersecting the univariate results of the functional localizer task with the fusiform cortex

(structurally defined by ‘Freesurfer’).

Neural RDM/RDV construction

Before constructing the neural RDMs, the patterns of activity for all trials over all five

runs were stacked to be able to get an RDM over all 200 trials. This was important to this

study as the value learning of participants continued from run to run. The aim was to

understand how the representation of faces changes from the beginning of learning (i.e. trial

1, run 1) until the end (i.e. trial 200, run 5). Neural RDMs (200 x 200) were obtained by

taking the pairwise distances between the patterns of all trials using the cosine distance metric. The RDMs were subsequently flattened into representational dissimilarity vectors (RDVs) that contain the distance

measures of the lower triangle of the RDMs.

Figure 9: Example Neural RDMs and RDVs for OCC, FFA and MFG (Subject 01 and 02).


Feature RDM/RDV construction

To construct the feature RDMs (see Figure 10) for each of the five models, we first

outputted the values using the models for value tracking for each trial. Then, we used

Euclidean distances to compute the dissimilarities between each of the values for the trials.

The Euclidean distance metric calculates the distance between the vectors of the values in

coordinate space instead of the angle between those vectors (as does the cosine metric). As

the values in this study are one dimensional (i.e. only have one property) they fall on a line in

the coordinate space. Therefore, in this case, the Euclidean distance is a more appropriate

measure than the cosine distance as the angle between the features would be zero and the

same for all comparisons (i.e. not contain any information on their dissimilarity). The feature

RDVs for each of the five models were constructed in the same manner as described above

for the neural RDVs.
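Because the model values are one-dimensional, the Euclidean distance reduces to the absolute difference, so the feature RDM can be written directly (a sketch; values are illustrative):

```python
import numpy as np

values = np.array([0.0, 0.07, 0.13, 0.05])   # toy trial-wise model values
# For 1-D values, the Euclidean distance is simply the absolute difference.
feature_rdm = np.abs(values[:, None] - values[None, :])
feature_rdv = feature_rdm[np.tril_indices_from(feature_rdm, k=-1)]
```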

Analysis

Due to the small sample size, all analyses were conducted for each subject separately (i.e. a fixed-effects analysis).

Data analysis was divided into three steps. The first step in the analysis was to

compute Spearman’s rank correlations between the neural RDVs of the different ROIs and

the feature RDVs that represented the value development as operationalized by the different

value learning models. This was done to get a first impression on the manner in which the

representational structure of the different value learning models is related to the developing

neural code of the different faces.

Due to the fact that four different faces were used in the reinforcement learning task,

some of the correlations – especially in visual areas such as the OCC and the FFA – might

have been driven by the difference in visual input or facial identity instead of the difference

in value assigned to those faces. Therefore, we covariate controlled the correlations as a

second step. This was done by constructing a separate categorical RDM in which seeing the

same face repeatedly in following trials was coded with 0 whereas seeing a different face was

coded with 1. The dissimilarity between the trials was calculated using the Manhattan metric,

a metric used when constructing feature RDMs from categorical data. To uncover the

variance explained by seeing the different faces, the feature RDV was fitted with an intercept

(vector of ones) as predictor to each neural RDV and each of the model RDVs using an

ordinary least-squares estimator (OLS). After the beta weights were obtained, this unique

variance was then subtracted from each of the RDVs to remove the influence of the covariate.


Finally, Spearman’s rank correlations between the newly covariate controlled neural and

model RDVs were obtained (see Figure 11 for schematic visualization).
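A sketch of this covariate-removal step (names are illustrative; the identity RDV would be the Manhattan-distance RDV of the face-identity coding described above):

```python
import numpy as np

def regress_out(rdv, covariate_rdv):
    """Remove the variance explained by a covariate RDV from another RDV,
    using OLS with an intercept, and return the residual RDV."""
    X = np.column_stack([np.ones_like(covariate_rdv), covariate_rdv])
    beta, *_ = np.linalg.lstsq(X, rdv, rcond=None)
    return rdv - X @ beta

# Applied to the neural RDV and to each model RDV before correlating them:
# neural_clean = regress_out(neural_rdv, identity_rdv)
```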

As Spearman’s rank correlations fall under the category of non-parametric tests,

p-values for all correlations were calculated by using permutation testing with 1000

permutations. In this approach, the patterns of each neural RDM were shuffled and the

shuffled RDVs then correlated with the feature RDVs. This made it possible to construct a null distribution of correlations resulting from the random shuffling, against which the observed correlation was compared. The p-value then was the proportion of null correlations that were higher than or equal to the observed correlation.
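A sketch of this permutation scheme (simplified: here the entries of the neural RDV are permuted directly, whereas the analysis above shuffled the underlying patterns before recomputing the RDM):

```python
import numpy as np
from scipy.stats import spearmanr

def permutation_pvalue(neural_rdv, model_rdv, n_perm=1000, seed=0):
    """One-sided permutation p-value for a Spearman correlation: the
    proportion of null correlations >= the observed correlation."""
    rng = np.random.default_rng(seed)
    observed, _ = spearmanr(neural_rdv, model_rdv)
    null = np.array([spearmanr(rng.permutation(neural_rdv), model_rdv)[0]
                     for _ in range(n_perm)])
    return np.mean(null >= observed)
```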

In order to understand how the different model RDVs relate to the neural RDVs and to compare the models directly against each other, the

final step in the analysis was to use linear regression to uncover the unique variances

explained by each of the model RDVs. An OLS estimator was used to fit the model RDVs

together with an intercept onto the neural RDVs of the different ROIs (see Figure 12 for a schematic visualization). The resulting beta weights were used to examine which models explain any variance in the representational geometry. Additionally, a contrast for pairwise comparisons between the unique variances explained by the models was constructed to directly compare the model fits to each other. Finally, all p-values were corrected for multiple comparisons (based on the number of ROIs) using Bonferroni correction. Results were deemed significant if p < 0.004.

Figure 11: Schematic visualization of the correlational analysis. The RDVs of the different ROIs (in this example the OCC) are correlated with the RDVs of the five models (in this example the RW model) using Spearman’s rank correlation.

Figure 12 Schematic visualization of the OLS analysis. The neural RDV (of the OCC in this example) is predicted by the model RDVs. The model RDVs together with an intercept are fitted with an OLS estimator onto the neural RDV.
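A sketch of this final model-comparison fit (illustrative; it returns one beta weight per model, which can then be tested against baseline and against the other models):

```python
import numpy as np

def fit_models_ols(neural_rdv, model_rdvs):
    """Fit all model RDVs plus an intercept onto a neural RDV with OLS."""
    X = np.column_stack([np.ones(len(neural_rdv))] + list(model_rdvs))
    betas, *_ = np.linalg.lstsq(X, neural_rdv, rcond=None)
    return betas[1:]  # drop the intercept: one weight per value model
```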

Results

RSA

Correlations

The first step in the data analysis was to compute Spearman’s rank correlations

between the neural RDMs and the feature RDMs in order to get a first impression of how the

different representational structures of the value models map onto the neural representations.

These correlations were then controlled for the covariate of seeing different faces. Generally,

as expected, covariate controlling the RDVs led the correlations in the occipital cortex to decrease, indicating that part of these correlations was driven by differences in visual input (i.e. the different faces). Reporting of results focuses on the

covariate-controlled Spearman’s rank correlations (see Figures 13 – 17 and Appendix 1.2;

the uncontrolled correlations can be found in Appendix 1.1).

Overall, significant correlations (Bonferroni corrected significance level p<0.004)

between the model RDVs and the neural RDVs were small for all subjects; correlations of this magnitude are, however, commonly reported in the RSA literature. Correlations were very similar between the different models, and no single model seemed to be remarkably more correlated to the neural data than the other models in any of the ROIs. This is not surprising as the models were very similar (i.e.

highly correlated).

Whether one model was statistically more correlated to the neural data than another

could however not be tested statistically in this analysis, as the number of subjects was too small to feasibly conduct a random-effects analysis on the group level.

For all subjects, several ROIs could be repeatedly identified for which the

representational geometry significantly correlated with the dissimilarity structure of several

of the models. These ROIs included the PCC, SFG, MFG, OFC, Insula and interestingly the

OCC. This gave an initial idea of which brain areas might specifically encode value tracking

as operationalized by the different models.


Figure 13 Correlations between neural RDVs and model RDVs Subject 01 (* = p< 0.05, ** = p< 0.01, *** = p<

0.004 (Bonferroni corrected significance level)). AMG: amygdala, ACC: anterior cingulate cortex, PCC: posterior

cingulate cortex, OFC: orbitofrontal cortex, SFG: superior frontal gyrus, MFG: middle frontal gyrus, vStriatum:

ventral striatum, FFA: Fusiform Face Area, OCC: Occipital cortex

Figure 14 Correlations between neural RDVs and Model RDVs Subject 02 (* = p< 0.05, ** = p< 0.01, *** = p< 0.004

(Bonferroni corrected significance level)). AMG: amygdala, ACC: anterior cingulate cortex, PCC: posterior cingulate

cortex, OFC: orbitofrontal cortex, SFG: superior frontal gyrus, MFG: middle frontal gyrus, vStriatum: ventral striatum,

FFA: Fusiform Face Area, OCC: Occipital cortex


Figure 16 Correlations between neural RDVs and Model RDVs Subject 04 (* = p< 0.05, ** = p< 0.01, *** = p<

0.004 (Bonferroni corrected significance level)). AMG: amygdala, ACC: anterior cingulate cortex, PCC: posterior

cingulate cortex, OFC: orbitofrontal cortex, SFG: superior frontal gyrus, MFG: middle frontal gyrus, vStriatum:

ventral striatum, FFA: Fusiform Face Area, OCC: Occipital cortex

Figure 15 Correlations between neural RDVs and Model RDVs Subject 03 (* = p< 0.05, ** = p< 0.01, *** = p<

0.004 (Bonferroni corrected significance level)). AMG: amygdala, ACC: anterior cingulate cortex, PCC: posterior

cingulate cortex, OFC: orbitofrontal cortex, SFG: superior frontal gyrus, MFG: middle frontal gyrus, vStriatum:

ventral striatum, FFA: Fusiform Face Area, OCC: Occipital cortex


OLS

In order to compare the fit of each model to the representational geometry of the

ROIs, an OLS estimator was fitted subject-wise for each ROI. Here we focus on the models

that were significant against baseline and explained significantly more of the variance than all of the other models at a Bonferroni corrected significance level of p<0.004

(see Appendix 2 for t-values and p-values).

The dissimilarity structure of the RW model was significantly related to the representational geometry of the neural data of certain ROIs for four of the five subjects. The model also explained significantly more variance than all of the other models in those ROIs. More

specifically, the ROIs for which the RW model was significant included the MFG (three of

five subjects, see Figures 18, 21, 22), the PCC (two of five subjects, see Figures 19, 22), the

SFG (two of five subjects, see Figures 21, 22). For subject five – on whose data the learning rate of the rule was tuned – the representational geometry of the ACC and the OFC was additionally significantly predicted by the RW model (see Figure 22). It is important to note that for

Figure 17 Correlations between neural RDVs and Model RDVs Subject 05 (* = p< 0.05, ** = p< 0.01, *** = p<

0.004 (Bonferroni corrected significance level)). AMG: amygdala, ACC: anterior cingulate cortex, PCC: posterior

cingulate cortex, OFC: orbitofrontal cortex, SFG: superior frontal gyrus, MFG: middle frontal gyrus, vStriatum:

ventral striatum, FFA: Fusiform Face Area, OCC: Occipital cortex.


most subjects (three) the model was only significantly related to one or two ROIs.

Model 3 also seemed to be significantly related to the representational geometry of

two ROIs in two of the five subjects. The model was especially related to the representational

geometry of the OCC (in both subjects, in one subject it was significantly more related to the

RDV than 3 of the 4 other models, see Figures 18, 20). For subject 3 it additionally

significantly predicted the RDV of the MFG (see Figure 20). For subject 4, the Leaky

Integrator Model significantly predicted the RDV of the OCC and the insula (see Figure 21).

Thus, in three of the five subjects the activity of the OCC was significantly related to value

learning.

Overall, it seems that especially the RW model explains the representational

geometries of certain ROIs. However, it is important to note that the ROIs for which the RW model is predictive differ between subjects. In accordance with the correlational analysis, the

model was mostly predictive for ROIs that showed significant correlations with the models.

In particular, the MFG, the PCC and the SFG seem to show a dissimilarity structure that is similar to that of the value developments operationalized by the RW model. Interestingly, the representational geometry of the OCC is also predicted by the dissimilarity

structures of certain value models (for three of the five subjects), even after covariate

controlling for the different visual input and identity of the faces. In particular, Model 3 and the LI model are predictive of the representational geometry of the OCC.


Figure 19 Results from the OLS analysis for Subject 02. Black stars indicate significance against baseline. Blue stars indicate

significance against other models. (* = p<0.05, ** = p< 0.01, *** = p< 0.004 (Bonferroni corrected significance level).

AMG:

amygdala, ACC: anterior cingulate cortex, PCC: posterior cingulate cortex, OFC: orbitofrontal cortex, SFG: superior frontal

gyrus, MFG: middle frontal gyrus, vStriatum: ventral striatum, FFA: Fusiform Face Area, OCC: Occipital

cortex.

Figure 18 Results from the OLS analysis for Subject 01. Black stars indicate significance against baseline. Blue stars

indicate significance against other models. (* = p<0.05, ** = p< 0.01, *** = p< 0.004 (Bonferroni corrected significance

level). AMG: amygdala, ACC: anterior cingulate cortex, PCC: posterior cingulate cortex, OFC: orbitofrontal cortex, SFG:

superior frontal gyrus, MFG: middle frontal gyrus, vStriatum: ventral striatum, FFA: Fusiform Face Area, OCC: Occipital

cortex.


Figure 20 Results from the OLS analysis for Subject 03. Black stars indicate significance against baseline. Blue stars

indicate significance against other models. (* = p<0.05, ** = p< 0.01, *** = p< 0.004 (Bonferroni corrected

significance level). AMG: amygdala, ACC: anterior cingulate cortex, PCC: posterior cingulate cortex, OFC:

orbitofrontal cortex, SFG: superior frontal gyrus, MFG: middle frontal gyrus, vStriatum: ventral striatum,

FFA:

Fusiform Face Area, OCC: Occipital cortex.

Figure 21 Results from the OLS analysis for Subject 04. Black stars indicate significance against baseline. Blue stars

indicate significance against other models. (* = p<0.05, ** = p< 0.01, *** = p< 0.004 (Bonferroni corrected

significance level). AMG: amygdala, ACC: anterior cingulate cortex, PCC: posterior cingulate cortex, OFC:

orbitofrontal cortex, SFG: superior frontal gyrus, MFG: middle frontal gyrus, vStriatum: ventral striatum,

FFA:

Fusiform Face Area, OCC: Occipital cortex.


Discussion

This study set out to investigate how context in the form of affective learning

influences face perception. More specifically, we set out to investigate the neural correlates

of value tracking and face perception during affective learning using the multivariate analysis

technique RSA. The aim of this exploratory study was to understand how the brain represents

and updates the value of faces during learning and to understand at which ‘stages’ of face

perception value tracking is represented in the neural patterns. Five models of value learning

were investigated and compared to each other in their ability to explain the neural data.

Overall, this study yielded two interesting findings. Firstly, value tracking as operationalized

by the RW model was often represented in the neural patterns of areas such as the MFG, SFG

and the PCC. Secondly, an effect of value learning was observable in the OCC for several

subjects, showing that the value of faces is potentially also represented in the neural code of perceptual areas.

Figure 22 Results from the OLS analysis for Subject 05. Black stars indicate significance against baseline. Blue stars indicate significance against other models (* = p<0.05, ** = p<0.01, *** = p<0.004 (Bonferroni corrected significance level)). AMG: amygdala, ACC: anterior cingulate cortex, PCC: posterior cingulate cortex, OFC: orbitofrontal cortex, SFG: superior frontal gyrus, MFG: middle frontal gyrus, vStriatum: ventral striatum, FFA: Fusiform Face Area, OCC: Occipital cortex.

Generally, these findings are in line with previous studies showing that

affective learning has an influence on face perception on the neural level (e.g. Suess et al.,

2015). Furthermore, it extends such findings by pointing towards more specific information

processing mechanisms and by showing that such mechanisms are implemented at different

stages of processing ranging from frontal to visual areas. These two findings are discussed in

more detail below.

The Rescorla Wagner rule as a plausible model of value tracking in the brain

The first finding of this study was that value learning as operationalized by the RW

Rule was significantly related to the representational geometry of the neural data for four out of

five subjects (one of which served to set the learning rate of the model). The model’s

dissimilarity structure over all trials was significantly more related to the neural

representational geometry than all of the other models in the MFG, SFG and the PCC. Even

though results should be interpreted with caution as no random effects analysis was

conducted, this is a notable result considering that the tested models are highly correlated and

very similar to each other. Generally, the finding implies that the neural representations of the

faces in these areas change over the course of learning similarly to the manner in which the

values develop as defined by the RW rule, which gives an interesting insight into the possible

information processing taking place in the brain.

As described above, the RW model operationalizes value learning as a process in

which a value is updated based on a prediction error formed from previous rewards and

stimulus values (Jahfari et al., 2020; Wilson & Collins, 2019; Rescorla, 1972). Over the

course of learning, this prediction error becomes smaller. Learning thus occurs in a non-linear fashion. The present results suggest that this non-linear development is incorporated in the development of the neural representation of the different faces during

learning.

Previous studies on value learning in the context of face perception have also found

that the RW rule is related to the neural activity in several brain areas (Jahfari et al., 2020;

Petrovic et al., 2008). For example, Petrovic et al. (2008) used a conditioning paradigm in

which participants saw four faces in the scanner, two of which were paired with a shock.

Learning of this association was operationalized with the RW rule where the learning rate

was set to 0.1 as proposed by previous studies. Data was analyzed with a simple univariate

analysis in which the conditioned and unconditioned stimuli served as predictors. The main

finding of this study was that activations in the fusiform gyrus and the amygdala increased as

the value outputted by the RW decreased, implying that this model somewhat captures the

way in which the brain updates value connected to faces (Petrovic et al., 2008).

In another study by Jahfari et al. (2020), the RW rule was also used to operationalize

value learning of faces in a task in which participants had to choose the best option out of two faces. Jahfari et al. (2020) found that the value associated with the chosen face had

an effect on the trial-wise BOLD response in the dorsal striatum and the perceptual areas

FFA and OCC. This was particularly true in trials in which the available options were valued

very differently as computed by the RW model. As a next step, Jahfari et al. (2020) used

machine learning to investigate whether the choice for the most valuable options (as

calculated by the model) could be predicted by the neural data and found that this was

possible with an accuracy of 70%. This again indicated that value as modeled by the RW rule

is represented in the neural data. Visual areas such as the FFA and the OCC were particularly

important for the accuracy of classification, especially on trials in which a decision was hard

for the participant (Jahfari et al., 2020).

It should be noted that the exact areas in which value learning was reflected in the neural data differed between Jahfari et al.’s (2020), Petrovic et al.’s (2008) and the current study. This could be due to many factors, such as the experimental paradigms used (e.g. using a classical

conditioning paradigm with painful shocks instead of monetary punishment might have an

influence on the areas representing learned value to some extent) or the manner in which the

neural data was analyzed. Furthermore, it needs to be considered that the studies discussed here only investigated the RW rule and did not compare its fit to other possible models.

Nevertheless, the fact that the output of the RW model is related to the neural data in Petrovic et al.’s (2008), Jahfari et al.’s (2020) and the current study (in which

the model was significant against other models) lends support to the idea that certain areas of

the brain might implement value tracking of faces in a manner similar as operationalized by

the RW rule, thus especially making use of expected outcomes and prediction errors.

Activity in frontal and visual areas is modulated by learning the value of faces

Areas for which effects in the correlational and OLS analysis of value learning were

found included the MFG, SFG and PCC. Interestingly, as in Jahfari et al.’s (2020) study, the OCC also proved to be an area in which value learning was incorporated in the

neural representation of the different faces. Whereas the value development was most often

captured by the RW rule in the MFG, SFG and the PCC, in the OCC, Model 3 and the LI

model showed significance. Why the models differed for the OCC and the MFG, SFG and

PCC in most subjects and whether this is a meaningful finding remains to be elucidated. It

might be that different regions encode value in a different manner that is captured by these different models. It must also be noted that an effect was only found for a few of these ROIs in each subject and the areas differed between

subjects. Whether this is a measured effect or due to practical reasons needs to be further

investigated. In terms of practical reasons, the signal-to-noise ratio might have differed

between subjects for the different ROIs due to the location in the scanner or the differences in

the effectiveness of the denoising. Overall, the two areas that showed an effect in most

subjects (four out of five subjects) were the MFG and the OCC.

The MFG has been shown to encode value, both for faces and other stimuli in several

studies (Serences, 2008; Visser et al., 2011). For example, Visser et al. (2011) used a

classical conditioning paradigm and RSA to understand how the representation of stimuli

changes over the course of associative learning i.e. when certain value is assigned to different

stimuli. The neural representations of stimuli that were paired with shocks became more

similar over the course of learning and this effect was most prevalent in areas including the

MFG, SFG and OCC.

What the exact interaction is between frontal areas and visual areas for the learning of

value remains to be elucidated. The question remains whether the effects seen in visual areas

are due to feedback signals from frontal areas, and what exact purposes such encoding of

value in visual areas serves (Jahfari et al., 2020; Serences, 2008; Stănişor, van der Togt, Pennartz & Roelfsema, 2013). Several studies have investigated this and suggested different

purposes.

Serences (2008) investigated whether the values of two stimuli in a decision task have

an influence on visual areas and found that indeed activity in the visual cortex is modulated

by the objective reward history of the different choices. The activity in frontal areas such as the MFG and the PCC seemed to be scaled by the difference in values assigned to the

available choices as well. The authors suggested that these frontal areas are part of a network that keeps track of reward outcomes and prediction errors and influences future behavior (Serences, 2008). According to Serences

(2008), this network might then signal to spatially specific parts of the visual cortex to

modulate the valuable stimuli’s representations to update future behavior that is beneficial for

a valuable interaction. In situations in which just one valued stimulus is present, as was the

case in this study, such biasing of spatially selective visual areas might not seem as relevant

as in a situation that requires a direct choice towards one of several stimuli. However, in this case the activity might serve to prepare the visual system for future situations in which

spatially selective enhanced processing is indeed relevant.

Another view on the role of value modulation of visual activity is connected to

attentional benefits towards valuable options (Jahfari et al., 2020; Stănişor et al., 2013). For

example, Stănişor et al. (2013), investigated the influence of value and attention on V1

activity in macaque monkeys. They found that activity in V1 was influenced by the values of

the presented stimuli, especially the relative values between several options when more than

one stimulus was shown. The modulatory effect of value proved to be the same as the effect that

selective attention has on V1 activity, implying that more highly valued stimuli attract

attention that is then mirrored in the modulated visual activity towards such stimuli. Stănişor

et al. (2013) further propose that frontal areas likely enable this modulation by providing

feedback.

Jahfari et al. (2020) propose that the role of perceptual areas in value tracking -

specifically in choice situations in which the options are very close in value - is to fine tune

the recognition of features associated with the more valuable choice i.e. to tune attention to

facial features that are associated with higher value.

Taken together, these accounts suggest that more attentional resources are allocated to faces associated with beneficial

outcomes. Arguably, such enhanced processing is adaptive in social situations as it might

encourage interactions with persons that are potentially beneficial for the perceiver.

Limitations

This study had several limitations. Firstly, due to the small sample size it was not

feasible to do a random-effects analysis on the group level, which would have allowed us to draw more general conclusions. Nevertheless, subject-based small-N studies have their

benefits (for example see Smith & Little, 2018). Arguably, finding similar results in

individual subjects without doing a random effects analysis lends convincing support for

some of the concepts discussed in this study.

Secondly, the way in which the model parameters were set for Model 2 and the RW

model might not have been entirely appropriate. Model parameters were set by conducting

the correlational analysis over all ROIs on one subject for different parameter settings and

taking the median of the best parameter settings. However, it could be that the parameter

settings differ between subjects or the ROIs, as they might implement value learning

differently. A more appropriate way to set the parameters might have been to use behavioral

data and set them for every subject separately. However, we did not have enough data to do

so and we focused on state values instead of action values (which can be captured by

behavior) which makes fitting more complicated. This should be more thoroughly addressed

in future research.

Thirdly, an OLS estimator was used in this study to investigate how the value models

relate to the neural data. However, a non-negative least-squares estimator (NNLS) would

have potentially been more appropriate. NNLS is similar to OLS but forces the weights of the regression model to be non-negative. This is more appropriate in the context of RSA, as a negative

relation between the dissimilarity structure of the models and the dissimilarity structure of the

neural patterns (i.e. negative regression weights) is not meaningful.
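A sketch of such an NNLS fit using scipy (here the intercept column is also constrained to be non-negative, which is a simplification):

```python
import numpy as np
from scipy.optimize import nnls

def fit_models_nnls(neural_rdv, model_rdvs):
    """Like the OLS fit, but with all regression weights constrained to be
    non-negative, matching the interpretation of RSA model fits."""
    X = np.column_stack([np.ones(len(neural_rdv))] + list(model_rdvs))
    weights, _ = nnls(X, neural_rdv)
    return weights[1:]  # drop the intercept: one weight per value model
```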

It is further problematic that the OLS assumption of independent observations of the explained variable is not met, because the observations in this RSA-based model are pairwise distances and therefore inherently correlated. A nonparametric approach to computing the associated t- and p-values of the regression model (such as permutation testing) would have accounted for this, but was not conducted in this study. The results should therefore be interpreted with caution.
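One standard nonparametric remedy is a condition-label permutation test: the rows and columns of the model RDM are shuffled together, and the test statistic is recomputed to build a null distribution. The sketch below does this for a Spearman RDM correlation with placeholder RDMs; an analogous scheme could be applied to the regression weights.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(42)

def toy_rdm(n):
    """Placeholder symmetric RDM; real ones come from data or models."""
    m = rng.random((n, n))
    m = (m + m.T) / 2
    np.fill_diagonal(m, 0.0)
    return m

n_cond = 20
neural_rdm, model_rdm = toy_rdm(n_cond), toy_rdm(n_cond)
observed, _ = spearmanr(squareform(neural_rdm), squareform(model_rdm))

# Null distribution: permute the condition labels of the model RDM
# (rows and columns together), preserving its internal dependencies.
n_perm = 5000
null = np.empty(n_perm)
for i in range(n_perm):
    p = rng.permutation(n_cond)
    null[i], _ = spearmanr(squareform(neural_rdm),
                           squareform(model_rdm[np.ix_(p, p)]))

p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
print(f"rho = {observed:.3f}, p = {p_value:.4f}")
```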

Lastly, although a large set of ROIs was investigated in this study, there may still be regions that are highly important in value learning and that should be investigated in the future. For example, we only included the ventral striatum in this analysis, even though studies have found that the dorsal striatum is also involved in value tracking and appears to encode value to guide future action (Jahfari et al., 2020). A more exploratory approach might be worth adopting in order to ultimately understand the entire network of neural correlates that underlies value learning of faces.

Future research

Future research should address the limitations discussed above and build on the results of this study. Firstly, this study found that activity in the visual area followed a pattern similar to Model 3 and the LI model. To what extent this finding is stable, and whether there is a significant difference in the manner in which these areas encode value, remains to be elucidated.

Secondly, the exact interactions between these areas during value learning should be further investigated. The research discussed in this study largely assumes what these interactions are; more research is needed to support these assumptions.

Thirdly, the ROIs showing an effect of value learning differed between subjects. It should be determined whether this reflects a genuine finding or more practical factors (such as differences in signal-to-noise ratio).

Fourthly, the specific behavioral effects of modulated activity in the OCC should be examined further. It remains unclear whether the modulation indeed serves to enhance attention towards a more valuable option, and whether it also influences the manner in which a specific face is actually perceived. As discussed above, research has previously shown evidence of changed perception of faces after learning (e.g. Suess et al., 2015), but it would be interesting to connect this directly to the modulated visual activity.

Furthermore, it would be compelling to investigate how facial expressions or social dimensions might influence the rate and manner of value learning. Similarly, how value learning might influence the perception of facial expressions and social dimensions should be investigated further, as these topics have practical implications for social interaction. RSA would be an effective method for exploring such questions.

Conclusion

This study suggests that the value of faces is tracked and updated in a manner similar to that operationalized by the RW rule, in which a prediction error serves to update value according to past learning. The findings further suggest that value learning influences the representations of faces in different brain areas, ranging from frontal to perceptual regions. Thus, face processing is influenced by value learning at different stages of processing, with the modulation of activity in more visual areas potentially serving to heighten the perception of more valuable faces (Jahfari et al., 2020; Serences, 2008; Stănişor, van der Togt, Pennartz & Roelfsema, 2013).

References

Abraham, A., Pedregosa, F., Eickenberg, M., Gervais, P., Mueller, A., Kossaifi, J., ... & Varoquaux, G. (2014). Machine learning for neuroimaging with scikit-learn. Frontiers in Neuroinformatics, 8, 14. doi:10.3389/fninf.2014.00014

Andersson, J. L., Skare, S., & Ashburner, J. (2003). How to correct susceptibility distortions in spin-echo echo-planar images: application to diffusion tensor imaging. NeuroImage, 20(2), 870-888. doi:10.1016/S1053-8119(03)00336-7

Avants, B. B., Epstein, C. L., Grossman, M., & Gee, J. C. (2008). Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Medical Image Analysis, 12(1), 26-41. doi:10.1016/j.media.2007.06.004

Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional expressions reconsidered: challenges to inferring emotion from human facial movements. Psychological Science in the Public Interest, 20(1), 1-68. doi:10.1177/1529100619832930

Behzadi, Y., Restom, K., Liau, J., & Liu, T. T. (2007). A component based noise correction method (CompCor) for BOLD and perfusion based fMRI. NeuroImage, 37(1), 90-101. doi:10.1016/j.neuroimage.2007.04.042

Bliss-Moreau, E., Barrett, L. F., & Wright, C. I. (2008). Individual differences in learning the affective value of others under minimal conditions. Emotion, 8(4), 479. doi:10.1037/1528-3542.8.4.479

Cox, R. W. (1996). AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Computers and Biomedical Research, 29(3), 162-173. doi:10.1006/cbmr.1996.0014

Dale, A. M., Fischl, B., & Sereno, M. I. (1999). Cortical surface-based analysis: I. Segmentation and surface reconstruction. NeuroImage, 9(2), 179-194. doi:10.1006/nimg.1998.0395

De Sonneville, L. M. J., Verschoor, C. A., Njiokiktjien, C., Op het Veld, V., Toorenaar, N., & Vranken, M. (2002). Facial identity and facial emotions: speed, accuracy, and processing strategies in children and adults. Journal of Clinical and Experimental Neuropsychology, 24(2), 200-213. doi:10.1076/jcen.24.2.200.989

Dobs, K., Isik, L., Pantazis, D., & Kanwisher, N. (2019). How face perception unfolds over time. Nature Communications, 10(1), 1-10. doi:10.1038/s41467-019-09239-1

Esteban, O., Markiewicz, C. J., Blair, R. W., Moodie, C. A., Isik, A. I., Erramuzpe, A., ... & Oya, H. (2019). fMRIPrep: a robust preprocessing pipeline for functional MRI. Nature Methods, 16(1), 111-116. doi:10.1038/s41592-018-0235-4

fMRIPrep. Available from: doi:10.5281/zenodo.852659

Fonov, V. S., Evans, A. C., McKinstry, R. C., Almli, C. R., & Collins, D. L. (2009). Unbiased nonlinear average age-appropriate brain templates from birth to adulthood. NeuroImage, 47, S102. doi:10.1016/S1053-8119(09)70884-5

Gorgolewski, K., Burns, C., Madison, C., Clark, D., Halchenko, Y., Waskom, M., & Ghosh, S. (2011). Nipype: A flexible, lightweight and extensible neuroimaging data processing framework in Python. Frontiers in Neuroinformatics, 5(13). doi:10.3389/fninf.2011.00013

Gorgolewski, K., Esteban, O., Ellis, D., Notter, M., ... & Ghosh, S. (2017). Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in Python. Frontiers in Neuroinformatics. doi:10.5281/zenodo.581704

Greve, D. N., & Fischl, B. (2009). Accurate and robust brain image alignment using boundary-based registration. NeuroImage, 48(1), 63-72.
