Searching for beauty in the brain of the beholder

N/A
N/A
Protected

Academic year: 2021

Searching for beauty in the brain of the beholder:

Searchlight-based multi-voxel pattern analysis to find neural representations of

perceived attractiveness and personal preference in attractiveness

Adelheid M. M. Graat

University of Amsterdam


Abstract

Attractiveness impressions are made quickly and automatically and have far-reaching consequences for human behavior. While attractiveness has been of interest in many studies, few studies have investigated the private component of attractiveness perception. In the current study, we used a multi-voxel pattern analysis to search for the neural representations of subjective attractiveness and personal preferences in attractiveness. Participants rated natural face photographs four times to create a reliable estimate of subjective attractiveness. Personal preference was measured by subtracting the average attractiveness score of a large group of observers from the subjective attractiveness score. A whole-brain searchlight algorithm was used in combination with a linear regression to identify clusters of voxels with linearly predictive information about subjective attractiveness and personal preferences. Whereas our behavioral results support the robust finding that attractiveness ratings consist of both inter-judge agreement and personal preferences, the searchlight analysis did not yield any significant voxels. An exploratory analysis was done to investigate whether subjective attractiveness and personal preferences could be decoded if they were discretized. Using the searchlight analysis in combination with a support vector classifier, still no significant center voxels were found. The same analysis with a control variable (gender of faces) did reveal voxels with significant information. A possible explanation for our results is that attractiveness and personal preferences might be represented in widespread, weakly informative voxels, but more research is needed to investigate this.

The moment we meet a new person, impressions are made quickly and automatically about wide-ranging traits, including attractiveness (Ritchie et al., 2017). These first impressions influence behavior towards the stranger and have been shown to affect employment decisions (Luxen & Van De Vijver, 2006), voting patterns (Berggren et al., 2010) and sentencing decisions (Sigall & Ostrove, 1975; Zebrowitz & McDonald, 1991). Facially attractive people are generally treated better than unattractive people. They get help more often, are treated more honestly, get a job promotion sooner and are adulated (Kościński, 2008). Being perceived as attractive thus conveys various cultural advantages. Some people can even make big money with their good looks, because they appear attractive to a broad public.

The fact that such people exist suggests that there must be some inter-judge agreement about attractiveness. Indeed, certain facial characteristics have been shown to be predictive of attractiveness. Facial symmetry, smooth skin, large eyes, geometrical averageness and sexually dimorphic features are all examples of factors that lead to higher attractiveness scores (Foo et al., 2017; Kościński, 2007; Rhodes, 2006). These stimulus characteristics have been the focus of many studies in order to understand attractiveness perception. However, focusing on inter-judge agreement only may miss the private component of attractiveness perception in individuals. A literature review by Kościński (2008) describes that about 25% of the total variation in attractiveness ratings can be explained by inter-judge agreement. 50% seems to be a result of variation within judges, caused by, for example, swings in physiological or psychological state or circumstances during the assessment of faces. The final 25% of the total variation in attractiveness of faces is explained by personal preferences of judges. Hönekopp (2006) drew a similar conclusion by finding that shared taste and private taste contribute approximately equally to the variance in attractiveness ratings. Shared taste was defined as the sum of all attractiveness standards that enable two judges to agree about the attractiveness of faces, and private taste incorporates all attractiveness standards of a single judge that give rise to disagreement between judges.

Despite the importance of individual differences in attractiveness perception, personal preferences have not been given as much attention in research as agreements or standards in attractiveness have. A twin study by Germine et al. (2015) showed that personal preferences in facial attractiveness mainly result from individual experiences instead of genetic variation, whereas another core aspect of facial processing, identity processing, was mainly explained by variation in genes. They concluded that individual life history and experience might be the driving force behind personal preferences in faces. Other research specifically showed that the rater's own facial characteristics (DeBruine, 2004; Hinsz, 1989), personality preferences (Little et al., 2006), the socioeconomic and cultural environment (DeBruine et al., 2010; Little et al., 2007), previous visual experience (Cooper & Maurer, 2008; Little et al., 2011; Rhodes et al., 2005) and social learning (Little et al., 2015; Verosky & Todorov, 2010) are factors that influence personal preference in facial attractiveness.

These behavioral studies suggest that many factors are involved in the perception of attractiveness and the question arises how attractiveness is computed in the brain. A recent EEG study by Kaiser and Nyga (2020) reported that representations of facial attractiveness are already present in early perceptual stages, as early as 150-200ms after presentation of the faces. Moreover, they showed that these early attractiveness representations are related to personal attractiveness preferences, suggesting that features are weighted in an early stage to compute an individual representation of attractiveness. This is in line with fMRI studies reporting activity related to attractiveness perception in areas involved in visual processing of
faces, such as the fusiform face area (FFA) and superior temporal sulcus (STS) (Iaria et al., 2008; Winston et al., 2007). Even when subjects were not attending to attractiveness explicitly, Chatterjee et al. (2009) found that activation in the ventral occipital regions, including the FFA and lateral occipital cortex, varied with attractiveness of the perceived faces. One of the few studies that investigated personal preferences in attractiveness specifically, compared participants who on average gave higher versus lower attractiveness scores to faces (Vartanian et al., 2013). They reported that activity in the middle temporal gyrus (MTG) was related to individual differences in perceived attractiveness and suggested that the MTG might be involved in integrating information from a variety of sources to compute attractiveness. However, their conclusion was based on an exploratory analysis and has not been replicated yet.

While attractiveness representations in the brain have been widely studied, little attention has been given to personal attractiveness preferences and their neural representations. Moreover, studies investigating subjective attractiveness often use univariate analyses, although multivariate analyses can be more informative about neural representations (Haxby, 2012; Popal et al., 2019). Therefore, in the current study we used a multi-voxel pattern analysis to investigate which brain regions contain predictive information about subjective attractiveness and personal attractiveness preferences. To get a reliable estimate of subjective attractiveness, participants rated every natural face photograph on attractiveness four times. Personal attractiveness preferences were operationalized by subtracting the average attractiveness score of a large group of raters from the subjective attractiveness score. We hypothesized that we would be able to find clusters of voxels that contain predictive information about subjective attractiveness and personal preferences. Based on prior studies, we expected to find these voxels in ventral occipital regions for subjective attractiveness. Information about personal preferences may be present in the middle temporal gyrus. However, no strong predictions about the location of these clusters were formulated given the exploratory nature of previous research. Since voxel selection based on prior research was not possible, a whole-brain searchlight technique was used (Kriegeskorte et al., 2006).

Methods

Participants

Sixteen participants took part in this study (10 females). All participants had normal or corrected-to-normal vision. Participants were recruited from the university community of the University of Amsterdam and were financially compensated for participation. Three participants were excluded because of missing data, leaving 13 complete datasets for analysis. All participants gave informed consent and the study was approved by the ethical committee of the University of Amsterdam.

Stimuli

The stimulus set consisted of 80 naturalistic face photographs with direct gaze, taken from the Face Research Lab London Set (for examples, see Figure 1) (DeBruine & Jones, 2017). These images included 40 identities, each with both a neutral and a smiling expression. Every identity comes with the photographed person's self-reported gender (20 females), age (M = 28.13, SD = 7.95) and ethnicity (20 white, 9 black, 5 west-Asian, 5 Asian, 1 east-Asian/white). The Face Research Lab London Set also includes an attractiveness score for every identity, rated by a large set of observers (n = 2531).

Experimental procedure

The participants performed a task in the fMRI scanner that consisted of passively viewing the face stimuli. The experiment was split over two sessions. Both sessions contained 6 runs of the task, with 40 stimulus presentations in every run. Every stimulus was presented for 1.25 seconds, followed by a fixed interstimulus interval (ISI) of 3.75 seconds
with a fixation dot. To keep the participants attentive, a random selection of 5 stimuli per run was followed by a rating on either attractiveness, dominance or trustworthiness. This rating lasted 2.5 seconds and was done using a button box with eight buttons (four per hand). For a visualization of the paradigm, see Figure 1. Each of the 80 face photographs (40 identities, both smiling and neutral) was shown 3 times per session. The stimuli were counterbalanced across the runs in terms of gender and ethnicity.

In both sessions, participants were asked to rate the neutral faces on attractiveness, trustworthiness and dominance on a computer outside of the scanner. Only the attractiveness ratings are used in this study. Participants responded by clicking on a continuous rating scale ranging from -4 ("not at all attractive") to 4 ("very attractive"). There was no time pressure to respond. To get a reliable estimate of subjective attractiveness, every face identity was rated four times per subject (twice per session).

Figure 1. An example of the timeline of trials. Each trial consisted of a stimulus presentation of 1250ms followed by an ISI of 3750ms. A random selection of 5 stimuli per run was followed by an ISI of 1500ms and a rating of 2500ms on attractiveness, dominance or trustworthiness. When a button was pressed, the corresponding circle colored orange to provide visual feedback. After the rating, an ISI of 3750ms appeared again before a new stimulus was presented.

Rating analysis

Subjective attractiveness was calculated by taking the mean of the four ratings per participant. This leaves us with one subjective attractiveness score per face identity, per subject. These attractiveness scores were standardized with a mean of zero and unit variance.
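As an illustration (not the analysis code used in this study), the computation of one participant's subjective attractiveness and personal preference scores could be sketched as follows, with random stand-in ratings and illustrative variable names:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical ratings: (n_faces, 4) — four attractiveness ratings per identity
ratings = rng.uniform(-4, 4, size=(40, 4))

# subjective attractiveness: mean of the four ratings per face identity
subjective = ratings.mean(axis=1)
# standardize to zero mean and unit variance
subjective = (subjective - subjective.mean()) / subjective.std()

# stand-in for the average attractiveness scores, standardized the same way
average = rng.uniform(-4, 4, size=40)
average = (average - average.mean()) / average.std()

# personal preference: subjective minus average score, per face identity
preference = subjective - average
```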


The same standardization was done with the attractiveness scores taken from the Face Research Lab London set. Per face identity, the average of all ratings in the Face Research Lab London set was taken; we will refer to this as the "average attractiveness score". Then, personal preferences of every participant were calculated per face identity by subtracting the average attractiveness score from the subjective attractiveness score.

fMRI data acquisition and preprocessing

Functional MRI data were obtained at the University of Amsterdam, Spinoza Centre for Functional Magnetic Resonance Imaging using a 3-T Philips Achieva scanner with a 32-channel SENSE headcoil. Foldable foam pads and medical tape on the forehead were used to minimize head motion. In every run, 347 brain volumes were acquired using a multiband gradient echo EPI sequence with the following parameters: repetition time (TR) = 700 ms, echo time (TE) = 30 ms, flip angle (FA) = 55°, SENSE reduction factor = 1.5, voxel size 2.7 x 2.7 x 2.97mm. A high-resolution T1-weighted anatomical scan was also acquired for each subject (TR = 8.1ms, TE = 3.7 ms, voxel size 1 x 1 x 1mm).

Preprocessing was done using fMRIPrep 1.5.8 (Esteban et al., 2019). In sum, each T1-weighted image was corrected for intensity non-uniformity and skull-stripped. Volume-based spatial normalization to a standard space (MNI152NLin2009cAsym) was done through nonlinear registration with antsRegistration (ANTs 2.2.0) (Avants et al., 2008).

Functional data were corrected for susceptibility distortions by estimating a fieldmap based on two EPI references with opposing phase-encoding directions. Based on the susceptibility distortion estimation, a corrected EPI reference was calculated and co-registered to the T1w reference using bbregister (FreeSurfer), which implements boundary-based registration (Greve & Fischl, 2009). Head-motion parameters with respect to the BOLD reference (transformation matrices, and six corresponding rotation and translation parameters) were estimated before any spatiotemporal filtering, using mcflirt (FSL 5.0.9; Jenkinson et al., 2002). The BOLD time-series were resampled onto their original, native space by applying a single, composite transform to correct for head motion and susceptibility distortions. The BOLD time-series were then resampled into standard space (MNI152NLin2009cAsym). Several confounding time-series were calculated based on the preprocessed BOLD data, but only the head-motion estimates are used in this study as confounds. All details about the preprocessing steps can be found in Appendix A.

Multivariate decoding analysis

As a first step, a high-pass filter (0.01 Hz) was applied to remove low-frequency drift in the data. Subsequently, the functional data were spatially smoothed using a 4-mm FWHM Gaussian kernel. To estimate the activity patterns, a first-level GLM was fitted using a least-squares-all (LSA) technique. The model included a regressor for every trial, for the six motion confounds (translation and rotation in all directions) and for the rating events (to control for motor-related activity). The resulting activity patterns were decorrelated using a ZCA-based whitening approach, adapted from Soch et al. (2020). For this whitening approach, an uncorrelation matrix (D) is estimated:

D = K_XX^(1/2), with K_XX = cov(X)

In python code:

D = sqrtm(np.cov(X.T))

where X is the first-level single-trial design matrix with dimensions T (time points) x N (trials). The new, uncorrelated pattern matrix is then calculated by taking the dot product of the uncorrelation matrix (D) and the previously estimated pattern matrix (R, with dimensions N x K, where K is the number of voxels):

R̃ = DR

In python code:

R = D @ R

Finally, for decoding purposes, the parameter estimate maps are standardized per run so that all voxels have a mean of zero and unit variance per run. The target variable (subjective attractiveness or the difference between subjective attractiveness and average attractiveness) is also standardized per run.
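A self-contained sketch of this whitening and per-run standardization procedure (random stand-in data and toy dimensions; not the original analysis code) might look like:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
T, N, K = 200, 40, 50              # time points, trials, voxels (toy sizes)
X = rng.standard_normal((T, N))    # single-trial design matrix (T x N)
R = rng.standard_normal((N, K))    # trial-wise pattern estimates (N x K)

# uncorrelation matrix: matrix square root of the trial covariance of X
D = np.real(sqrtm(np.cov(X.T)))

# uncorrelated pattern matrix (the dot product D . R)
R_uncorr = D @ R

# per-run standardization of the patterns (toy example: 4 runs of 10 trials)
runs = np.repeat(np.arange(4), 10)
for run in np.unique(runs):
    m = runs == run
    R_uncorr[m] = (R_uncorr[m] - R_uncorr[m].mean(axis=0)) / R_uncorr[m].std(axis=0)
```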

To find clusters of voxels with decoding information about the target variable, a searchlight approach was used in combination with a linear regression (Kriegeskorte et al., 2006). A linear regression was chosen because it takes a continuous dependent variable and multiple independent variables (the voxel-wise parameter estimates). This means that the searchlight looks for clusters of voxels whose activity pattern is linearly predictive of the target variable. Based on Kriegeskorte et al. (2006), the chosen radius of the spherical searchlight was 5.4mm (2-voxel radius, 33 voxels included in the sphere). We used a leave-one-run-out cross-validation: the linear regression model was trained on 11 runs and tested on the data of the 12th run. This process was iterated so that each of the runs was used as a test run once, resulting in 12 cross-validation folds. R² was used as a measure of prediction performance and the mean R² across cross-validation folds was assigned to the center voxel of the searchlight. The center of the searchlight is "moved" through the brain, resulting in a whole-brain prediction performance map.
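The searchlight logic described above can be sketched at toy scale (random stand-in data, an illustrative 4x4x4 voxel grid, and scikit-learn for the cross-validated regression; not the original analysis code):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(2)
shape = (4, 4, 4)                     # toy voxel grid
runs = np.repeat(np.arange(12), 10)   # 12 runs of 10 trials
n_trials = len(runs)
patterns = rng.standard_normal((n_trials,) + shape)
y = rng.standard_normal(n_trials)     # target, e.g. subjective attractiveness

# voxel offsets within a 2-voxel radius sphere (33 offsets)
r = 2
offsets = [(dx, dy, dz)
           for dx in range(-r, r + 1)
           for dy in range(-r, r + 1)
           for dz in range(-r, r + 1)
           if dx ** 2 + dy ** 2 + dz ** 2 <= r ** 2]

logo = LeaveOneGroupOut()
score_map = np.zeros(shape)
for cx in range(shape[0]):
    for cy in range(shape[1]):
        for cz in range(shape[2]):
            # gather the in-bounds voxels of the sphere around this center
            idx = [(cx + dx, cy + dy, cz + dz)
                   for dx, dy, dz in offsets
                   if 0 <= cx + dx < shape[0]
                   and 0 <= cy + dy < shape[1]
                   and 0 <= cz + dz < shape[2]]
            X = np.stack([patterns[:, i, j, k] for i, j, k in idx], axis=1)
            # mean R^2 over the 12 leave-one-run-out folds
            scores = cross_val_score(LinearRegression(), X, y,
                                     groups=runs, cv=logo, scoring='r2')
            score_map[cx, cy, cz] = scores.mean()
```

With random data the mean R² values are expected to hover around or below zero, which is exactly why the group-level test against zero described below is needed.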

A one-tailed one-sample t-test was performed to identify regions with a mean prediction performance higher than zero. To correct for multiple comparisons, we used a false discovery rate (FDR) correction (α = 0.05).
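A minimal sketch of this group-level test (random stand-in performance maps, and a hand-rolled Benjamini-Hochberg procedure since the exact implementation used here is not specified):

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(3)
# hypothetical group data: mean prediction performance per subject and voxel
performance_maps = rng.standard_normal((13, 1000)) * 0.05  # (subjects, voxels)

# one-tailed one-sample t-test per voxel against zero
t_vals, p_vals = ttest_1samp(performance_maps, 0.0, axis=0,
                             alternative='greater')

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR: boolean mask of significant tests."""
    order = np.argsort(pvals)
    m = len(pvals)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = pvals[order] <= thresholds
    significant = np.zeros(m, dtype=bool)
    if below.any():
        cutoff = np.max(np.nonzero(below)[0])
        significant[order[:cutoff + 1]] = True
    return significant

significant_voxels = fdr_bh(p_vals, alpha=0.05)
```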


Results

Behavioral results

To check whether the attractiveness ratings from the Face Research Lab London set were representative of the average ratings from our sample, we computed a Pearson correlation between the attractiveness ratings averaged across all our participants (Figure 2a) and the average attractiveness ratings from the Face Research Lab London set (Figure 2b). The attractiveness scores from our participants correlated significantly with the average ratings from the Face Research Lab London set (r(38) = 0.91, p < .001). The average correlation between participants was modest (0.64), indicating that there is some inter-rater agreement as well as personal preference, as expected. See Figure 2c for a visualization of the inter-rater variation.
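These behavioral checks could be computed as follows (sketched with random stand-in ratings; the array sizes are illustrative):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
# hypothetical standardized ratings: (n_participants, n_faces)
ratings = rng.standard_normal((13, 40))
reference_avg = rng.standard_normal(40)  # stand-in for the London-set averages

# correlation between our sample's mean ratings and the reference averages
r, p = pearsonr(ratings.mean(axis=0), reference_avg)

# mean pairwise correlation between participants (inter-rater agreement)
corr = np.corrcoef(ratings)                    # (13, 13) correlation matrix
off_diag = corr[~np.eye(13, dtype=bool)]       # drop the self-correlations
mean_inter_rater = off_diag.mean()
```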

Figure 2. A) Standardized attractiveness score per face identity. The size of the bar represents the attractiveness score, averaged across all participants. The error bars represent the standard error of the mean. B) Standardized average attractiveness score per face identity. The size of the bar represents the average attractiveness score, rated by the participants of the Face Research Lab London dataset. The error bars represent the standard error of the mean. C) Boxplot of the standardized subjective attractiveness score per face identity.

Searchlight results

We performed a searchlight analysis combined with a linear regression to investigate whether we could find groups of voxels with an activity pattern that is linearly predictive of subjective attractiveness. An example of a resulting prediction performance map for one participant can be found in Figure 3. To identify voxels with a prediction performance higher than zero at the group level, a one-tailed one-sample t-test was performed with a multiple comparison correction (FDR, α = 0.05). No significant voxels were found. A similar analysis was performed to investigate whether we could find a cluster of voxels with an activity pattern linearly predictive of personal preferences in attractiveness. Again, no significant voxels were found when tested with a one-tailed one-sample t-test (FDR, α = 0.05).


Figure 3. An example of the R²-map of one participant resulting from the searchlight analysis for subjective attractiveness (A) and personal preferences (B). Because the prediction performances are derived with a cross-validation method, the R² values can be below zero. All voxels with a negative R² value are made transparent in these maps, for visualization purposes.

Control analysis and results

Since it was unexpected to find no significant voxels for either subjective attractiveness or personal preferences, we performed a control analysis with a different target variable (gender of faces). This control analysis was done to check whether the results could be caused by a methodological mistake. Methods were identical to the previous analyses, with the only difference that the control target variable was binary and therefore the searchlight was combined with a support vector classifier with a linear kernel. The one-sided one-sample t-test resulted in significant balanced accuracy scores in the primary visual cortex (Figure 4). This is in line with previous research showing that the primary visual cortex conveys gender information of faces (Petro et al., 2013). Although these results do not show all areas involved in gender perception (Kaul et al., 2011), the control analysis does show that using a different target variable (gender of faces) leads to voxels with significant decoding information. It therefore provides evidence that our initial results are not caused by methodological mistakes.

Figure 4. Results from the control analysis with gender of the presented face photographs. This figure shows the t-values of the significant voxels, derived from a one-sample, one-sided t-test (FDR, α = 0.05) that tests whether the balanced accuracy scores are significantly above chance level.

Exploratory analysis

An evident difference between our initial analysis and the control analysis is the prediction technique used: linear regression versus linear support vector classifier. Predicting continuous variables is often more challenging than binary classification because it requires accurate modeling over the whole range of the variable (X. Shen et al., 2017). Therefore, we did an exploratory analysis with the exact same settings as in our control analysis. In this exploratory analysis we investigated whether subjective attractiveness and personal preferences could be decoded from the data if a classifier was used. For this analysis we discretized the subjective attractiveness scores and personal preferences into three equal-sized bins per participant. The data in the two extreme bins were used in this exploratory analysis. For subjective attractiveness, the bins can be interpreted as the most attractive and the most unattractive faces according to the participant. For personal preferences, the faces with the biggest difference between the subjective attractiveness and the average attractiveness were selected per participant. These bins can be interpreted as the faces that were rated more attractive than the average score and the faces that were rated less attractive than the average score.
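The binning and classification logic might be sketched for a single searchlight sphere as follows (random stand-in data; not the original analysis code):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(5)
runs = np.repeat(np.arange(12), 20)            # 12 runs of 20 trials
X = rng.standard_normal((len(runs), 33))       # one searchlight sphere
scores_cont = rng.standard_normal(len(runs))   # continuous target scores

# discretize into three equal-sized bins; keep only the two extreme bins
edges = np.quantile(scores_cont, [1 / 3, 2 / 3])
bins = np.digitize(scores_cont, edges)         # 0 = low, 1 = middle, 2 = high
keep = bins != 1
y = (bins[keep] == 2).astype(int)

# leave-one-run-out classification with a linear support vector classifier,
# scored with balanced accuracy to control for class imbalances per run
acc = cross_val_score(LinearSVC(), X[keep], y, groups=runs[keep],
                      cv=LeaveOneGroupOut(), scoring='balanced_accuracy')
mean_balanced_accuracy = acc.mean()
```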

Again, a spherical searchlight with a radius of 5.4mm and a leave-one-run-out cross-validation was used. The searchlight was combined with a support vector classifier with a linear kernel. Balanced accuracy scores were used to control for class imbalances in the runs. Finally, a one-tailed one-sample t-test was performed to identify voxels with a classification accuracy above chance. FDR (α = 0.05) was used to correct for multiple comparisons.

Exploratory results

To visualize the classification accuracy at the group level, we averaged the performance maps of all participants in Figure 5. No voxels were found with a classification accuracy significantly above chance, as tested with a one-tailed one-sample t-test with a multiple comparison correction (FDR, α = 0.05).

Figure 5. Classification accuracy maps averaged across all participants for subjective attractiveness (A) and personal preferences in attractiveness (B). The scores are expressed relative to chance level, so a value of 0 indicates performance at chance and higher values indicate better classification. Values below zero are therefore possible; these voxels are plotted as transparent for visualization purposes.


Discussion

In this study we used a multivariate searchlight analysis to identify regions containing information about attractiveness perception and personal attractiveness preferences. Contrary to our predictions, we did not find any voxels with predictive information about subjective attractiveness or personal preferences. Subsequently, an exploratory analysis was done to investigate whether subjective attractiveness and personal preferences could be decoded if they were discretized, but still no significant voxels were found. The same analysis with a control variable (gender of faces) did reveal voxels with significant information.

Our behavioral results showed that personal preferences are present in attractiveness ratings. This is in line with previous research (Hönekopp, 2006; Kościński, 2008). Moreover, previous research suggested that personal preferences arise due to computations that differ between individuals (Kaiser & Nyga, 2020). However, we did not identify neural representations of personal attractiveness preferences. It is possible that these personal attractiveness preferences are better measured with different methods. For example, representational similarity analysis (RSA) was specifically developed to investigate similarities and differences between responses (Kriegeskorte et al., 2008). Furthermore, in contrast to MVPA decoding analyses, RSA can investigate the entire multi-dimensional representational space of information (Diedrichsen & Kriegeskorte, 2017; Popal et al., 2019). RSA can also be combined with a whole-brain searchlight analysis, by creating a neural representational dissimilarity matrix (RDM) for every spherical cluster of voxels. This neural RDM can then be compared with an RDM of the target variable (Popal et al., 2019).
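For a single searchlight sphere, this RDM comparison could be sketched as follows (random stand-in data; the correlation-distance and rank-correlation choices are illustrative, not prescribed by the cited work):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n_conditions = 40                                  # e.g. face identities

# neural patterns for one searchlight sphere: (conditions x voxels)
sphere_patterns = rng.standard_normal((n_conditions, 33))
# hypothetical target values, e.g. subjective attractiveness per face
target = rng.standard_normal(n_conditions)

# neural RDM: pairwise correlation distances between condition patterns
neural_rdm = pdist(sphere_patterns, metric='correlation')

# model RDM: pairwise absolute differences in the target variable
model_rdm = pdist(target[:, None], metric='euclidean')

# compare the two RDMs with a rank correlation; this value would be
# assigned to the sphere's center voxel
rho, p = spearmanr(neural_rdm, model_rdm)
```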

Especially for subjective attractiveness, it was unexpected to find no informative voxels, since many regions have been reported as being related to attractiveness perception in previous studies. We mentioned face processing regions earlier, reported by univariate fMRI studies (Chatterjee et al., 2009; Iaria et al., 2008; Winston et al., 2007) and supported by a recent EEG study of Kaiser and Nyga (2020). Reward processing areas such as the orbitofrontal cortex (OFC), amygdala and ventral striatum are also often reported to be related to attractiveness perception (Ishai, 2007; H. Shen et al., 2016; Winston et al., 2007). However, Kranz and Ishai (2006) showed that activity in these regions is moderated by the sexual preference of the perceiver. Sexual preference and gender of the perceiver were not taken into account in the current study. If this explained our results, one would expect to find informative voxels at the subject level that are overshadowed at the group level due to variation between subjects. However, our subject-specific performance maps do not indicate highly informative voxels, making this explanation unlikely. A more plausible explanation would be that subjective attractiveness might be computed by a distributed network, resulting in a spread of weakly informative voxels.

A widely distributed representation of subjective attractiveness and personal preferences would be problematic, given the limitations of a searchlight analysis. A searchlight analysis requires a chosen radius and shape of the volume, and weakly informative voxels can only be detected by a searchlight if the radius and shape of the volume match the cluster of informative voxels (Etzel et al., 2013). If the radius is too small or the shape of the volume does not match the shape of the informative cluster, not enough information will be present to reach significant predictive performance. We based our choice of the searchlight radius on Kriegeskorte et al. (2006), who showed that a 2-voxel radius yields the best results across multiple simulations. However, that study assumed a uniform distribution of information across all voxels, an assumption that often does not hold in fMRI studies. If the information is not present equally at all spatial frequencies, the searchlight results will depend heavily on the chosen radius. Etzel et al. (2013) demonstrated this problem with the following example. In one participant, a voxel yielded an accuracy of 0.17 with a one-voxel radius searchlight, but an accuracy of 0.67 with a two-voxel radius (chance performance was 0.5). The same voxel showed the opposite pattern in a different participant, where the voxel was classified as informative in a searchlight with a one-voxel radius and uninformative with a two-voxel radius. Therefore, choosing the searchlight radius is essential, but it can be challenging to find the best radius, especially when dealing with distributed, weakly informative voxels. For future studies, we recommend systematically tuning the searchlight radius and shape. This process can be very time-consuming, but will teach us more about the distribution and shape of the activity patterns related to subjective attractiveness and personal attractiveness preferences. Another opportunity for future studies is using methods where tuning of these parameters is not necessary, such as the data-driven search algorithm proposed by Asadi et al. (2020).
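To make the dependence on radius concrete, the number of voxels in a spherical searchlight can be enumerated with a small helper (assuming an isotropic voxel grid); a 2-voxel radius reproduces the 33-voxel sphere used in our analysis:

```python
import numpy as np

def sphere_offsets(radius_vox):
    """Voxel offsets inside a sphere of the given radius (in voxels)."""
    r = int(np.floor(radius_vox))
    return [(dx, dy, dz)
            for dx in range(-r, r + 1)
            for dy in range(-r, r + 1)
            for dz in range(-r, r + 1)
            if dx ** 2 + dy ** 2 + dz ** 2 <= radius_vox ** 2]

# sphere size grows quickly with the radius (in voxels)
for radius in (1, 2, 3):
    print(radius, len(sphere_offsets(radius)))
```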

In conclusion, our behavioral results support the robust finding that attractiveness ratings consist of both inter-judge agreement and personal preferences. However, we could not find neural representations of subjective attractiveness or personal preferences in attractiveness. One possible explanation is that subjective attractiveness and personal preferences might be represented in widespread, weakly informative voxels. This would be in line with studies reporting a distributed network of attractiveness perception (Chatterjee et al., 2009; Said et al., 2011; Vartanian et al., 2013). Nevertheless, more research is needed to investigate this. We have proposed several methodological possibilities for future studies, including RSA, parameter tuning and the search algorithm of Asadi et al. (2020). Hopefully these future studies can teach us more about the perception of attractiveness, a topic that has long been of interest but is still not fully understood: "Everything has beauty, but not everyone sees it" (Confucius, 551 - 479 B.C.).


References

Abraham, A., Pedregosa, F., Eickenberg, M., Gervais, P., Mueller, A., Kossaifi, J., Gramfort, A., Thirion, B., & Varoquaux, G. (2014). Machine learning for neuroimaging with scikit-learn. Frontiers in Neuroinformatics, 8. https://doi.org/10.3389/fninf.2014.00014

Asadi, N., Wang, Y., Olson, I., & Obradovic, Z. (2020). A heuristic information cluster search approach for precise functional brain mapping. Human Brain Mapping, 41, 2263–2280. https://doi.org/10.1002/hbm.24944

Avants, B. B., Epstein, C. L., Grossman, M., & Gee, J. C. (2008). Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Medical Image Analysis, 12, 26–41. https://doi.org/10.1016/j.media.2007.06.004

Behzadi, Y., Restom, K., Liau, J., & Liu, T.T., (2007) A Component Based Noise Correction Method (CompCor) for BOLD and Perfusion Based fMRI. NeuroImage 37(1), 90-101. https://doi.org/10.1016/j.neuroimage.2007.04.042.

Berggren, N., Jordahl, H., & Poutvaara, P. (2010). The looks of a winner: Beauty and electoral success. Journal of Public Economics, 94(1-2), 8–15. https://doi.org/10.1016/j.jpubeco.2009.11.002

Chatterjee, A., Thomas, A., Smith, S. E., & Aguirre, G. K. (2009). The neural response to facial attractiveness. Neuropsychology, 23(2), 135–143. https://doi.org/10.1037/a0014430

Cooper, P. A., & Maurer, D. (2008). The influence of recent experience on perceptions of attractiveness. Perception, 37(8), 1216–1226. https://doi.org/10.1068/p5865

Cox, R. W., & Hyde, J. S. (1997). Software tools for analysis and visualization of fMRI data. NMR in Biomedicine, 10(4-5). https://doi.org/10.1002/(SICI)1099-1492(199706/08)10:4.

Dale, A. M., Fischl, B., & Sereno, M. I. (1999). Cortical surface-based analysis: I. Segmentation and surface reconstruction. NeuroImage, 9(2). https://doi.org/10.1006/nimg.1998.0395

DeBruine, L. M. (2004). Resemblance to self increases the appeal of child faces to both men and women. Evolution and Human Behavior, 25(3), 142-154.


DeBruine, L.M., Jones, B. C. (2017). Face Research Lab London Set. figshare. https://doi.org/10.6084/m9.figshare.5047666.v3

DeBruine, L. M., Jones, B. C., Crawford, J. R., Welling, L. L., & Little, A. C. (2010). The health of a nation predicts their mate preferences: Cross-cultural variation in women's preferences for masculinized male faces. Proceedings of the Royal Society B: Biological Sciences, 277(1692), 2405-2410. https://doi.org/10.1098/rspb.2009.2184

Diedrichsen, J., & Kriegeskorte, N. (2017). Representational models: A common framework for understanding encoding, pattern-component, and representational-similarity analysis. PLoS Computational Biology, 13(4), e1005508. https://doi.org/10.1371/journal.pcbi.1005508

Esteban, O., Markiewicz, C. J., Blair, R. W., Moodie, C. A., Isik, A. I., Erramuzpe, A., ... & Oya, H. (2019). fMRIPrep: a robust preprocessing pipeline for functional MRI. Nature methods, 16(1), 111-116. https://doi.org/10.1038/s41592-018-0235-4

Etzel, J. A., Zacks, J. M., & Braver, T. S. (2013). Searchlight analysis: Promise, pitfalls, and potential. NeuroImage, 78, 261–269. https://doi.org/10.1016/j.neuroimage.2013.03.041

Fonov, V.S., Evans, A.C., McKinstry, R.C., Almli, C.R., & Collins D.L. (2009). Unbiased Nonlinear Average Age-Appropriate Brain Templates from Birth to Adulthood. NeuroImage 47. https://doi.org/10.1016/S1053-8119(09)70884-5.

Foo, Y. Z., Simmons, L. W., & Rhodes, G. (2017). Predictors of facial attractiveness and health in humans. Scientific Reports, 7(1), 39731. https://doi.org/10.1038/srep39731

Germine, L., Russell, R., Bronstad, P. M., Blokland, G. A. M., Smoller, J. W., Kwok, H., … Wilmer, J. B. (2015). Individual aesthetic preferences for faces are shaped mostly by environments, not genes. Current Biology, 25(20), 2684–2689. https://doi.org/10.1016/j.cub.2015.08.048

Gorgolewski, K., Burns, C. D., Madison, C., Clark, D., Halchenko, Y. O., Waskom, M. L., & Ghosh, S. (2011). Nipype: A flexible, lightweight and extensible neuroimaging data processing framework in Python. Frontiers in Neuroinformatics, 5(13). https://doi.org/10.3389/fninf.2011.00013

Gorgolewski, K.J., Esteban, O., Markiewicz, C.J., Ziegler, E., Ellis, D.G., Notter, M.P, … Jarecka D. (2018). Nipype. Software. Zenodo. https://doi.org/10.5281/zenodo.596855.


Greve, D. N., & Fischl, B. (2009). Accurate and robust brain image alignment using boundary-based registration. NeuroImage, 48, 63–72. https://doi.org/10.1016/j.neuroimage.2009.06.060

Haxby, J. V. (2012). Multivariate pattern analysis of fMRI: The early beginnings. NeuroImage, 62(2), 852–855. https://doi.org/10.1016/j.neuroimage.2012.03.016

Hinsz, V. B. (1989). Facial resemblance in engaged and married couples. Journal of Social and Personal Relationships, 6(2), 223-229.

Hönekopp, J. (2006). Once more: Is beauty in the eye of the beholder? Relative contributions of private and shared taste to judgments of facial attractiveness. Journal of Experimental Psychology, 32(2), 199–209. https://doi.org/10.1037/0096-1523.32.2.199

Iaria, G., Fox, C. J., Waite, C. T., Aharon, I., & Barton, J. J. (2008). The contribution of the fusiform gyrus and superior temporal sulcus in processing facial attractiveness: neuropsychological and neuroimaging evidence. Neuroscience, 155(2), 409-422. https://doi.org/10.1016/j.neuroscience.2008.05.046

Ishai, A. (2007). Sex, beauty and the orbitofrontal cortex. International Journal of Psychophysiology, 63(2), 181-185. https://doi.org/10.1016/j.ijpsycho.2006.03.010

Jenkinson, M., Bannister, P., Brady, M., & Smith, S. (2002). Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage, 17, 825–841. https://doi.org/10.1006/nimg.2002.1132

Kaiser, D., & Nyga, K. (2020). Tracking cortical representations of facial attractiveness using time-resolved representational similarity analysis. bioRxiv. https://doi.org/10.1101/2020.05.21.105916

Kaul, C., Rees, G., & Ishai, A. (2011). The gender of face stimuli is represented in multiple regions in the human brain. Frontiers in Human Neuroscience, 4, 1–12. https://doi.org/10.3389/fnhum.2010.00238

Klein, A., Ghosh, S. S., Bao, F. S., Giard, J., Häme, Y., Stavsky, E., & Lee, N. (2017). Mindboggling morphometry of human brains. PLOS Computational Biology, 13(2). https://doi.org/10.1371/journal.pcbi.1005350

Kościński, K. (2007). Facial attractiveness: General patterns of facial preferences. Anthropological Review, 70(1), 45-79. https://doi.org/10.2478/v10044-008-0001-9

Kościński, K. (2008). Facial attractiveness: Variation, adaptiveness and consequences of facial preferences. Anthropological Review, 71(1), 77-105. https://doi.org/10.2478/v10044-008-0012-6


Kranz, F., & Ishai, A. (2006). Face perception is modulated by sexual preference. Current biology, 16(1), 63-68. https://doi.org/10.1016/j.cub.2005.10.070

Kriegeskorte, N., Goebel, R., & Bandettini, P. (2006). Information-based functional brain mapping. PNAS Proceedings of the National Academy of Sciences of the United States of America, 103(10), 3863–3868. https://doi.org/10.1073/pnas.0600244103

Kriegeskorte, N., Mur, M., & Bandettini, P. (2008). Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2, 1–28. https://doi.org/10.3389/neuro.06.004.2008

Lanczos, C. (1964). Evaluation of noisy data. Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis, 1(1), 76-85.

Little, A. C., Burt, D. M., & Perrett, D. I. (2006). What is good is beautiful: Face preference reflects desired personality. Personality and Individual Differences, 41(6), 1107-1118. https://doi.org/10.1016/j.paid.2006.04.015

Little, A. C., Caldwell, C. A., Jones, B. C., & DeBruine, L. M. (2015). Observer age and the social transmission of attractiveness in humans: Younger women are more influenced by the choices of popular others than older women. British Journal of Psychology, 106(3), 397-413. https://doi.org/10.1111/bjop.12098

Little, A. C., Cohen, D. L., Jones, B. C., & Belsky, J. (2007). Human preferences for facial masculinity change with relationship type and environmental harshness. Behavioral Ecology and Sociobiology, 61(6), 967-973. https://doi.org/10.1007/s00265-006-0325-7

Little, A. C., DeBruine, L. M., & Jones, B. C. (2011). Exposure to visual cues of pathogen contagion changes preferences for masculinity and symmetry in opposite-sex faces. Proceedings of the Royal Society B: Biological Sciences, 278(1714), 2032-2039. https://doi.org/10.1098/rspb.2010.1925

Luxen, M. F., & Van De Vijver, F. J. (2006). Facial attractiveness, sexual selection, and personnel selection: When evolved preferences matter. Journal of Organizational Behavior, 27(2), 241-255. https://doi.org/10.1002/job.357

Popal, H., Wang, Y., & Olson, I. R. (2019). A Guide to Representational Similarity Analysis for Social Neuroscience. Social Cognitive and Affective Neuroscience, 14(11), 1243– 1253. https://doi.org/10.1093/scan/nsz099

Power, J. D., Mitra, A., Laumann, T. O., Snyder, A. Z., Schlaggar, B. L., & Petersen, S. E. (2014). Methods to detect, characterize, and remove motion artifact in resting state fMRI. Neuroimage, 84, 320-341. https://doi.org/10.1016/j.neuroimage.2013.08.048.


Rhodes, G. (2006). The evolutionary psychology of facial beauty. Annual Review of Psychology, 57(1), 199–226. https://doi.org/10.1146/annurev.psych.57.102904.190208

Rhodes, G., Halberstadt, J., Jeffery, L., & Palermo, R. (2005). The attractiveness of average faces is not a generalized mere exposure effect. Social Cognition, 23(3), 205-217.

Ritchie, K. L., Palermo, R., & Rhodes, G. (2017). Forming impressions of facial attractiveness is mandatory. Scientific Reports, 7(1), 469. https://doi.org/10.1038/s41598-017-00526-9

Said, C. P., Haxby, J. V, & Todorov, A. (2011). Brain systems for assessing the affective value of faces. Philosophical Transactions of the Royal Society B, 366, 1660–1670. https://doi.org/10.1098/rstb.2010.0351

Satterthwaite, T. D., Elliott, M. A., Gerraty, R. T., Ruparel, K., Loughead, J., Calkins, M. E., ... Wolf, D. H. (2013). An improved framework for confound regression and filtering for control of motion artifact in the preprocessing of resting-state functional connectivity data. NeuroImage, 64, 240-256. https://doi.org/10.1016/j.neuroimage.2012.08.052

Shen, H., Chau, D. K. P., Su, J., Zeng, L.-L., Jiang, W., He, J., … Hu, D. (2016). Brain responses to facial attractiveness induced by facial proportions: Evidence from an fMRI study. Scientific Reports, 6(1), 1–13. https://doi.org/10.1038/srep35905

Shen, X., Finn, E. S., Scheinost, D., Rosenberg, M. D., Chun, M. M., Papademetris, X., & Constable, R. T. (2017). Using connectome-based predictive modeling to predict individual behavior from brain connectivity. Nature Protocols, 12(3), 506-518. https://doi.org/10.1038/nprot.2016.178

Sigall, H., & Ostrove, N. (1975). Beautiful but dangerous: Effects of offender attractiveness and nature of the crime on juridic judgment. Journal of Personality and Social Psychology, 31(3), 410–414. https://doi.org/10.1037/h0076472

Soch, J., Allefeld, C., & Haynes, J. D. (2020). Inverse transformed encoding models: A solution to the problem of correlated trial-by-trial parameter estimates in fMRI decoding. NeuroImage, 209, 116449. https://doi.org/10.1016/j.neuroimage.2019.116449

Tustison, N. J., Avants, B. B., Cook, P. A., Zheng, Y., Egan, A., Yushkevich, P. A., & Gee, J. C. (2010). N4ITK: improved N3 bias correction. IEEE transactions on medical imaging, 29(6), 1310-1320. https://doi.org/10.1109/TMI.2010.2046908.


Vartanian, O., Goel, V., Lam, E., Fisher, M., & Granic, J. (2013). Middle temporal gyrus encodes individual differences in perceived facial attractiveness. Psychology of Aesthetics, Creativity, and the Arts, 7(1), 38–47. https://doi.org/10.1037/a0031591

Verosky, S. C., & Todorov, A. (2010). Generalization of affective learning about faces to perceptually similar faces. Psychological Science, 21(6), 779-785. https://doi.org/10.1177/0956797610371965

Winston, J. S., O’Doherty, J., Kilner, J. M., Perrett, D. I., & Dolan, R. J. (2007). Brain systems for assessing facial attractiveness. Neuropsychologia, 45(1), 195–206. https://doi.org/10.1016/J.NEUROPSYCHOLOGIA.2006.05.009

Zebrowitz, L. A., & McDonald, S. M. (1991). The impact of litigants' baby-facedness and attractiveness on adjudications in small claims courts. Law and Human Behavior, 15(6), 603–623. https://doi.org/10.1007/BF01065855

Zhang, Y., Brady, M., & Smith, S. (2001). Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Transactions on Medical Imaging, 20(1), 45-57. https://doi.org/10.1109/42.906424


Appendix A

Results included in this report come from preprocessing using fMRIPrep 1.5.8

(Esteban et al., 2019), which is based on Nipype 1.4.1 (Gorgolewski et al., 2011; Gorgolewski et al., 2018).

Anatomical data preprocessing. The T1-weighted (T1w) image was corrected for intensity non-uniformity (INU) with N4BiasFieldCorrection (Tustison et al., 2010),

distributed with ANTs 2.2.0 (Avants et al. 2008), and used as T1w-reference throughout the workflow. The T1w-reference was then skull-stripped with a Nipype implementation of the antsBrainExtraction.sh workflow (from ANTs), using OASIS30ANTs as target template. Brain tissue segmentation of cerebrospinal fluid (CSF), white-matter (WM) and gray-matter (GM) was performed on the brain-extracted T1w using fast (FSL 5.0.9; Zhang, Brady & Smith, 2001). Brain surfaces were reconstructed using recon-all (FreeSurfer 6.0.1, Dale, Fischl, & Sereno, 1999), and the brain mask estimated previously was refined with a custom variation of the method to reconcile ANTs-derived and FreeSurfer-derived segmentations of the cortical gray-matter of Mindboggle (Klein et al., 2017). Volume-based spatial

normalization to one standard space (MNI152NLin2009cAsym) was performed through nonlinear registration with antsRegistration (ANTs 2.2.0), using brain-extracted versions of both T1w reference and the T1w template. The following template was selected for spatial normalization: ICBM 152 Nonlinear Asymmetrical template version 2009c (Fonov et al., 2009; TemplateFlow ID: MNI152NLin2009cAsym).

Functional data preprocessing. For each of the 17 BOLD runs found per subject (across all tasks and sessions), the following preprocessing was performed. First, a reference volume and its skull-stripped version were generated using a custom methodology of

fMRIPrep. A B0-nonuniformity map (or fieldmap) was estimated based on two (or more) echo-planar imaging (EPI) references with opposing phase-encoding directions, with


3dQwarp (Cox & Hyde, 1997). Based on the estimated susceptibility distortion, a corrected EPI reference was calculated for a more accurate co-registration with the anatomical reference. The BOLD reference was then co-registered to the T1w reference using bbregister (FreeSurfer), which implements boundary-based registration (Greve & Fischl, 2009). Co-registration was configured with six degrees of freedom. Head-motion parameters with respect to the BOLD reference (transformation matrices, and six corresponding rotation and translation parameters) are estimated before any spatiotemporal filtering using mcflirt (FSL 5.0.9; Jenkinson et al., 2002). The BOLD time-series were resampled to surfaces on the following space: fsaverage6. The BOLD time-series (including slice-timing correction when applied) were resampled onto their original, native space by applying a single, composite transform to correct for head-motion and susceptibility distortions. These resampled BOLD time-series will be referred to as preprocessed BOLD in original space, or just preprocessed BOLD. The BOLD time-series were resampled into standard space, generating a preprocessed BOLD run in MNI152NLin2009cAsym space. Several confounding time-series were calculated based on the preprocessed BOLD: framewise displacement (FD), DVARS and three region-wise global signals. FD and DVARS are calculated for each functional run, both using their implementations in Nipype (following the definitions by Power et al., 2014). The three global signals are extracted within the CSF, the WM, and the whole-brain masks. Additionally, a set of physiological regressors were extracted to allow for component-based noise correction (CompCor; Behzadi et al., 2007). Principal components are estimated after high-pass filtering the preprocessed BOLD time-series (using a discrete cosine filter with 128s cut-off) for the two CompCor variants: temporal (tCompCor) and anatomical (aCompCor).
tCompCor components are then calculated from the top 5% variable voxels within a mask covering the subcortical regions. This subcortical mask is obtained by heavily eroding the brain mask, which ensures it does not include cortical GM regions. For


aCompCor, components are calculated within the intersection of the aforementioned mask and the union of CSF and WM masks calculated in T1w space, after their projection to the native space of each functional run (using the inverse BOLD-to-T1w transformation). Components are also calculated separately within the WM and CSF masks. For each

CompCor decomposition, the k components with the largest singular values are retained, such that the retained components' time series are sufficient to explain 50 percent of variance across the nuisance mask (CSF, WM, combined, or temporal). The remaining components are dropped from consideration. The confound time series derived from head motion estimates and global signals were expanded with the inclusion of temporal derivatives and quadratic terms for each (Satterthwaite et al., 2013). Frames that exceeded a threshold of 0.5 mm FD or 1.5 standardised DVARS were annotated as motion outliers. All resamplings can be

performed with a single interpolation step by composing all the pertinent transformations (i.e. head-motion transform matrices, susceptibility distortion correction when available, and co-registrations to anatomical and output spaces). Gridded (volumetric) resamplings were performed using antsApplyTransforms (ANTs), configured with Lanczos interpolation to minimize the smoothing effects of other kernels (Lanczos, 1964). Non-gridded (surface) resamplings were performed using mri_vol2surf (FreeSurfer).
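The framewise displacement measure used above for outlier annotation follows Power et al. (2014): the sum of absolute frame-to-frame changes in the six motion parameters, with rotations converted to arc length on a 50 mm sphere. A minimal sketch, assuming a motion array ordered as three translations (mm) followed by three rotations (radians); the toy `motion` values are hypothetical:

```python
import numpy as np

def framewise_displacement(motion, radius=50.0):
    """FD per Power et al. (2014): summed absolute frame-to-frame motion,
    with rotations (radians) converted to arc length on a 50 mm sphere."""
    diffs = np.abs(np.diff(motion, axis=0))
    diffs[:, 3:] *= radius               # radians -> mm of arc length
    fd = np.zeros(len(motion))
    fd[1:] = diffs.sum(axis=1)           # FD of the first volume is 0
    return fd

# columns: 3 translations (mm), then 3 rotations (rad); values are made up
motion = np.array([[0.0, 0.0, 0.0, 0.0,  0.0, 0.0],
                   [0.1, 0.0, 0.0, 0.0,  0.0, 0.0],
                   [0.1, 0.2, 0.0, 0.01, 0.0, 0.0]])
fd = framewise_displacement(motion)
outliers = fd > 0.5                      # the 0.5 mm FD threshold used above
```

Here the third frame combines a 0.2 mm translation change with a 0.01 rad rotation change (0.5 mm of arc length), so its FD of 0.7 mm crosses the threshold.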

Many internal operations of fMRIPrep use Nilearn 0.6.1 (Abraham et al., 2014), mostly within the functional processing workflow.
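The CompCor retention rule described above (keep the components with the largest singular values until they explain 50 percent of variance within the nuisance mask) can be sketched with a plain PCA via SVD. This is a toy illustration on random data, not fMRIPrep's implementation; the function name and array sizes are hypothetical:

```python
import numpy as np

def compcor_components(noise_ts, var_explained=0.5):
    """Toy CompCor retention rule: PCA on the noise-ROI time series, keeping
    the fewest leading components whose cumulative variance reaches
    `var_explained` (50% in the pipeline described above)."""
    X = noise_ts - noise_ts.mean(axis=0)           # center each voxel's series
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    var_ratio = s**2 / (s**2).sum()
    k = int(np.searchsorted(np.cumsum(var_ratio), var_explained)) + 1
    return U[:, :k] * s[:k]                        # component time series

rng = np.random.default_rng(1)
noise_ts = rng.normal(size=(200, 300))   # time points x voxels in a CSF/WM mask
comps = compcor_components(noise_ts)
```

The returned component time series are what would enter the confound regression; everything beyond the 50 percent cutoff is dropped, mirroring the rule stated above.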
