Academic year: 2021
Does Reinforcement Learning Influence Population Receptive Fields in the Visual Cortex? A Validation of the Model-Based Approach

Evan Lewis-Healey (11676760)

Research Project 1

Master’s Brain and Cognitive Sciences University of Amsterdam


Abstract

Previous behavioural evidence has demonstrated that valuable stimuli can capture attention (e.g. Hickey, Chelazzi & Theeuwes, 2010; Theeuwes & Belopolsky, 2012). Computational studies finding that valuable stimuli can modulate neural responses in early visual areas have speculated that this may contribute to a sharper sensory representation of those stimuli (Serences & Saproo, 2010). However, no published studies have sought to quantify this sharpened sensory representation, and that is the gap this report addresses. We used a pRF model (Dumoulin & Wandell, 2008) combined with a decoding model (van Bergen et al., 2015) in order to accurately quantify how the sensory representation of stimuli can change as a function of value. In this report, a pRF model based analysis is used to perform decoding on BOLD fMRI data from each separate trial. This encoding and decoding approach successfully decoded the location of the visual stimuli presented to participants, thus validating the approach for subsequent analyses. Future directions of the research are discussed.


Introduction

One of the goals of neuroscience is to understand how the brain encodes and perceives the external world. Encoding models are increasingly being used in sensory neuroscience, as they quantitatively and explicitly conceptualise how populations of neurons encode sensory information. Population receptive field (pRF) mapping (Dumoulin & Wandell, 2008) is an exemplary encoding model that can be used to estimate the joint receptive field of a population of neurons. pRF mapping can be used with a variety of neuroimaging techniques, but due to its noninvasive nature, functional magnetic resonance imaging (fMRI) is primarily used. Within the context of fMRI, the pRF is a model of the aggregate response of all neurons within a single voxel. The parameters of the pRF model provide information about the properties of the receptive fields, such as position and size (Dumoulin & Wandell, 2008). A Gaussian model is typically used, although more complex models can explain more of the variance within the fMRI time course (Zuiderbaan, Harvey & Dumoulin, 2012; Kay et al., 2013).
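To make the Gaussian pRF concrete, a minimal sketch in Python is given below. The grid extent, resolution and parameter values are illustrative assumptions, not figures taken from this report.

```python
# Minimal sketch of an isotropic 2D Gaussian pRF (Dumoulin & Wandell, 2008).
# The field extent and pixel resolution here are illustrative assumptions.
import numpy as np

def gaussian_prf(x0, y0, sigma, extent=10.0, n_pix=50):
    """Evaluate a 2D Gaussian receptive field on a square pixel grid.

    x0, y0 : pRF centre in degrees of visual angle.
    sigma  : Gaussian spread in degrees.
    extent : half-width of the modelled visual field in degrees.
    n_pix  : grid resolution (n_pix x n_pix pixels).
    """
    coords = np.linspace(-extent, extent, n_pix)
    x, y = np.meshgrid(coords, coords)
    rf = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return rf / rf.sum()  # normalise so the pRF sums to 1

# A pRF centred at (2, -1) degrees with a spread of 1.5 degrees:
rf = gaussian_prf(x0=2.0, y0=-1.0, sigma=1.5)
```

The three free parameters (x0, y0, σ) are exactly those estimated per voxel in the model-fitting procedure described later in the report.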

pRF Mapping in Sensory Neuroscience

pRF mapping has contributed to sensory neuroscience because it is a simple computational model grounded in the biology of how the visual field is encoded in the brain. Using this pRF technique with fMRI analysis, retinotopic maps have been found in the visual cortex (Dumoulin & Wandell, 2008; Amano, Wandell & Dumoulin, 2009), the parietal cortex (Harvey et al., 2013), the cerebellum (van Es, van der Zwaag & Knapen, 2018), and some subcortical nuclei (DeSimone et al., 2015). This demonstrates the usefulness of pRF mapping as a technique, as it has provided a simple computational framework to elucidate retinotopic maps in a variety of brain regions.


Much of pRF research is dedicated to investigating how pRF parameters are reconfigured under a range of cognitive and clinical circumstances. Previous research has found that top-down attentional control can modulate pRFs (Klein, Harvey & Dumoulin, 2014; van Es, Theeuwes & Knapen, 2018; Kay, Weiner & Grill-Spector, 2015). Altered pRF properties have also been observed in a variety of ophthalmological (Haak, Cornelissen & Morland, 2012; Papanikolaou et al., 2015; Baseler et al., 2011) and neurological (Anderson et al., 2017; Schwarzkopf et al., 2014) disorders (for review, see Dumoulin & Knapen, 2018). To gain a more comprehensive understanding of the factors that influence the way the brain encodes information, it is necessary to further investigate what other factors can modulate pRFs.

Decoding Models

Encoding models, such as the pRF model, strive to quantitatively describe the representation of information in the brain (Naselaris et al., 2011) and to accurately predict the response to a stimulus. Conversely, decoding models aim to accurately predict the stimulus given the response (Bialek et al., 1991). There are many different types of decoding models. With the advent of computational neuroimaging, classification-based pattern analysis has become ubiquitous in fMRI research. Within the context of fMRI analysis, a classifier is trained on a set of blood-oxygenation level dependent (BOLD) data evoked by certain stimuli or cognitive states. The classification algorithm then decodes a test set of BOLD data to predict what stimulus the participant is viewing (Haxby et al., 2001; Kamitani & Tong, 2005). This approach has prevailed over univariate analyses, as univariate analyses may be subject to inferential error (Serences & Saproo, 2012). However, classification-based pattern analysis is limited, as the decoding algorithm does not explicitly define what is driving the variation in the response of the BOLD data (Naselaris & Kay, 2015).

To circumvent this issue, an encoding model can be combined with a decoding model to form a more holistic analysis pipeline (Naselaris et al., 2011; Serences & Saproo, 2012). This is an important step in computational neuroscience, as much of it is not based on conventional statistical hypothesis testing (Wandell & Winawer, 2015). By using both encoding and decoding models in tandem, both models can be validated. An encoding model can quantitatively describe how stimuli are encoded in specific regions of the brain. The encoding model can then be used to perform decoding analysis on a test set of data (van Gerven, 2017; Naselaris et al., 2011).

Previous studies using both encoding and decoding models have allowed researchers to identify visual images (Zuiderbaan, Harvey & Dumoulin, 2017; Kay et al., 2008), reconstruct visual images (Sprague & Serences, 2013; Miyawaki et al., 2008; Naselaris et al., 2009) and reconstruct colours (Brouwer & Heeger, 2009) that participants viewed. Other research has successfully reconstructed movies that participants viewed (Nishimoto et al., 2011), and geometric patterns that participants imagined (Thirion et al., 2006).

Rewarding Stimuli and their Role in Visual Perception

In the separate area of research investigating visual attention, the dichotomous framework of attentional control has dominated the literature for a number of decades. The framework posits that attentional control is driven either by the subject's goals (top-down, endogenous factors) or by salient features of the stimuli (bottom-up, exogenous factors; Posner, 1980; Jonides, 1981; Posner & Petersen, 1990; Theeuwes, 1994). Recently, however, there has been increasing focus on the effect of reward on the modulation of attention. Literature reviews criticising the dichotomous framework have called for the inclusion of selection history (Awh, Belopolsky & Theeuwes, 2012; Failing & Theeuwes, 2018) and value (Anderson, 2013; Anderson, 2016) within a more robust organisational structure.

The aforementioned literature has provided mounting evidence that learned reward associations can capture attention, even when the reward is no longer relevant to a subject's goals (Hickey et al., 2010; Theeuwes & Belopolsky, 2012). Further to this, the magnitude of reward associated with the stimulus can modulate the strength of attentional capture (Theeuwes & Belopolsky, 2012; Failing & Theeuwes, 2015; Anderson, Laurent & Yantis, 2011). This value-driven attentional capture has been corroborated in oculomotor experiments (Pearson et al., 2015; Le Pelley et al., 2015) and animal experiments (Franko, Seitz & Vogels, 2010; Raiguel et al., 2006).

Despite the mounting evidence, few studies have attempted to uncover how valuable stimuli influence neural mechanisms in early sensory cortices. Using animal models and electrophysiological methods, evidence demonstrates that reward value can influence neuronal activity in V1 in macaques (Stănişor et al., 2013) and rats (Shuler & Bear, 2006). Using fMRI in humans, Serences (2008) demonstrated how prior rewards can influence modulations in the visual system, whilst Serences & Saproo (2010) demonstrated how response profiles in the visual cortex can differ due to the association of value. Recently, Itthipuripat et al. (2019) used an encoding model to investigate attentional capture within the cortex, finding that increased value associations modulated the neural representation in early visual cortex, even when the stimuli were no longer task relevant. These findings suggest that rewarding stimuli can modulate neural activation in early sensory cortices, serving to sharpen the sensory representation of said stimuli.

The Current Study

In light of this, the motivation for the current study is as follows. Research demonstrating that valuable stimuli can induce neural modulations in primary visual cortex has speculated that this allows rewarding stimuli to have a sharper representation in the visual cortex (Serences & Saproo, 2010). However, to the author's knowledge, there has been a) no research that accurately quantifies this sharpened sensory representation, and b) no research that uses pRF mapping and a decoding model to reconstruct stimuli that possess different perceived levels of value. Therefore, this study aims to fill this gap in the literature by combining a pRF model with a decoding model to quantify how the perception of stimuli changes as a function of value. Using this intertwined computational approach, the pRF model provides an explicit model of pRFs in every voxel, whilst the decoding model reconstructs the stimuli, allowing the visual processing of high and low value stimuli to be compared quantitatively.

Previous research has demonstrated how pRFs can change under various circumstances, ultimately influencing the sensory representation of stimuli in the visual cortex. Moreover, other research has demonstrated that valuable stimuli can modulate neural responses in the visual cortex. Therefore, it can be hypothesised that stimuli with a higher perceived value will be more accurately reconstructed in the decoding analysis. This modulation in sensory representation could be influenced by other brain regions that are central to reward and decision-making. For example, dopaminergic structures such as the ventral tegmental area and substantia nigra signal reward (Montague, Dayan & Sejnowski, 1996; Schultz, 1998; 2013) and project onto striatal areas (Balleine, Delgado & Hikosaka, 2007). Corticostriatal loops facilitate interaction between reward-related areas in the striatum, such as the caudate, and the visual cortex (Seger, 2013). Therefore, due to the degree of reward associated with the stimuli, these distal areas of the brain may play a role in the modulation of population receptive fields in the visual cortex.

The steps in the current report are as follows. Firstly, pRF mapping will be used to quantitatively conceptualise how neurons in the visual cortex encode information about low-level visual stimuli. Secondly, voxels that explain enough of the variance within the fMRI time course will be used in a subsequent decoding analysis. If the decoding analysis is successful, this will validate the pRF model used in the first step. As stated before, because computational neuroscience is not based on conventional statistical hypothesis testing (Wandell & Winawer, 2015), the importance of validating the models used in an analysis cannot be overstated.

Due to time constraints, this report will only detail these first two steps. However, by validating the model, the approach can be used in subsequent analyses to investigate how the sensory representation of the visual field may sharpen as a function of value. By doing so, the study can contribute to a greater theoretical understanding of how perception may change due to reward. This may have important implications for studies of addiction disorders, as many of these disorders are underpinned by dysfunctional attentional biases that interfere with abstinence goals (Robinson & Berridge, 2008; Garavan & Hester, 2007; Field & Cox, 2008).

Method

Subjects

Thirty-nine subjects participated in the study. All subjects were in good health with no neurological or psychiatric disorders, and each gave informed written consent. All thirty-nine subjects were included in the pRF model based analysis. Issues with the design matrix in the pRF model based analysis prevented the inclusion of seventeen participants in the decoding analysis; therefore, twenty-one participants' data were included in the decoding analysis.

Ethical approval was granted by Vrije Universiteit Amsterdam. Scanning of each subject took place at the Spinoza Center for Neuroimaging in Amsterdam. Within this report, two experimental trials within the encompassing study will be summarised: the pRF mapper trial and the location mapper trial.

Stimulus Presentation in the pRF Mapper Trial

The protocol for the pRF mapper used the same stimuli and procedure as van Es et al. (2019). Subjects were shown a bar-shaped stimulus traversing a mid-grey screen. The stimulus was composed of 2000 separate Gabor elements, each assigned a random orientation, colour, spatial frequency and location within the bar. The stimulus travelled across the screen in four directions, in temporal order: top-bottom, left-right, bottom-top and right-left, with the bar stepping on every TR and a 1 TR inter-bar interval. Bar width was 1/8th of the screen height in top-bottom/bottom-top bar passes. For left-right/right-left bar passes, the width of the bar was increased to maintain an equal amount of space covered by both vertical and horizontal bars, compensating for the aspect ratio of the 120 Hz, full HD (1920x1080) 32-inch BOLD screen. The screen extended 20 by 11 degrees of visual angle. The Gabor elements within the bar were randomly assigned one of two colour combinations (cyan/magenta or red/green) in the middle epoch of every TR (three epochs of 433 ms); the elements were greyscale in the first and last epochs.

Each participant was instructed to focus on a small white fixation circle (0.15 degrees of visual angle) located at the centre of the screen, whilst indicating the colour of the majority of the Gabor elements. A 3-up-1-down staircase procedure, manipulating the ratio of the colour combinations of the Gabor elements, was used to ensure 79% accuracy at four different stimulus eccentricities. Attending to the bar elevated fMRI BOLD responses (Bressler et al., 2013), and the staircase ensured an equivalent attentional load at every stimulus location (van Es et al., 2018), minimising the influence of attention as an extraneous variable on the model fit. The data acquired from this trial were used to fit the pRF model for V1, V2 and the cortical surface of each participant. Figure 1A illustrates the traversing bar stimulus used in this trial. The trial lasted approximately 177 s.
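The staircase can be sketched as follows. The starting ratio and step size are assumptions, as the report does not specify them; the rule shown makes the discrimination harder (ratio closer to chance) after three consecutive correct responses and easier after a single error, which converges near 79% accuracy.

```python
# Hedged sketch of a 3-up-1-down staircase on the majority-colour ratio of
# the Gabor elements. Starting ratio and step size are illustrative.
class Staircase:
    def __init__(self, ratio=0.75, step=0.02):
        self.ratio = ratio    # proportion of elements in the majority colour
        self.step = step
        self.correct_streak = 0

    def update(self, correct):
        """Update the colour ratio after one response; returns the new ratio."""
        if correct:
            self.correct_streak += 1
            if self.correct_streak == 3:               # three correct: harder
                self.ratio = max(0.5, self.ratio - self.step)
                self.correct_streak = 0
        else:                                          # one error: easier
            self.ratio = min(1.0, self.ratio + self.step)
            self.correct_streak = 0
        return self.ratio
```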


Stimulus Presentation in the Location Mapper Trial

The location mapper trial was almost identical to the pRF mapper trial; the only difference was the stimulus that was presented. Wedge-shaped stimuli were presented at six different orientations: 0°, 60°, 120°, 180°, 240° and 300° of polar angle. Each orientation was presented six times throughout the trial, with one wedge on screen at any one time. The wedges were composed of the Gabor elements described above. The decoding analysis was performed on the dataset acquired from this task. Wedge-shaped stimuli have been used to map pRFs in previous experiments (e.g. Dumoulin & Wandell, 2008), which made them appropriate stimuli for the subsequent decoding analysis. Figure 1B illustrates the stimuli used in this trial. The trial lasted approximately 175 s.

Scanning Details

The fMRI data were acquired on a 7T system (Philips Achieva, NL) with an 8Tx/32Rx rf-coil for transmit and receive (Nova Medical Inc, USA). The following parameters were used: FOV = 224*216*120 mm, resolution = 2*2*2 mm, TR = 1300 ms, TE = 22 ms, flip angle = 62°, in-plane SENSE factor 2 (AP).


Figure 1: A) A schematic illustration of the traversing bar stimulus used in the pRF mapper trial. As time progresses, the stimulus traverses the screen viewed by the participant. B) An illustration of the wedge stimuli used in the location mapper trial. Each stimulus was presented to the participant six times, with one wedge appearing on the screen at any one time.

Preprocessing of Structural and Functional Images

Results included in this manuscript come from preprocessing performed using FMRIPREP (Esteban et al., 2019), a Nipype (Gorgolewski et al., 2011) based tool. Each T1w (T1-weighted) volume was corrected for INU (intensity non-uniformity) using N4BiasFieldCorrection v2.1.0 (Tustison et al., 2010) and skull-stripped using antsBrainExtraction.sh v2.1.0 (using the OASIS template). Brain surfaces were reconstructed using recon-all from FreeSurfer v6.0.1 (Dale, Fischl & Sereno, 1999), and the brain mask estimated previously was refined with a custom variation of the Mindboggle method to reconcile ANTs-derived and FreeSurfer-derived segmentations of the cortical gray matter (Klein et al., 2017). Spatial normalization to the ICBM 152 Nonlinear Asymmetrical template version 2009c (Fonov et al., 2009) was performed through nonlinear registration with the antsRegistration tool of ANTs v2.1.0 (Avants et al., 2008), using brain-extracted versions of both the T1w volume and the template. Brain tissue segmentation of cerebrospinal fluid (CSF), white matter (WM) and gray matter (GM) was performed on the brain-extracted T1w using fast (Zhang, Brady & Smith, 2001; FSL v5.0.9).

Functional data was slice time corrected using 3dTshift from AFNI v16.2.07 (Cox, 1996) and motion corrected using mcflirt (FSL v5.0.9; Jenkinson et al., 2002). This was followed by co-registration to the corresponding T1w using boundary-based registration (Greve & Fischl, 2009) with 9 degrees of freedom, using bbregister (FreeSurfer v6.0.1). Motion correcting transformations, BOLD-to-T1w transformation and T1w-to-template (MNI) warp were concatenated and applied in a single step using antsApplyTransforms (ANTs v2.1.0) using Lanczos interpolation.

Physiological noise regressors were extracted by applying CompCor (Behzadi et al., 2007). Principal components were estimated for the two CompCor variants: temporal (tCompCor) and anatomical (aCompCor). A mask to exclude signal with cortical origin was obtained by eroding the brain mask, ensuring it only contained subcortical structures. Six tCompCor components were then calculated, including only the top 5% variable voxels within that subcortical mask. For aCompCor, six components were calculated within the intersection of the subcortical mask and the union of the CSF and WM masks calculated in T1w space, after their projection to the native space of each functional run. Frame-wise displacement (Power et al., 2014) was calculated for each functional run using the implementation of Nipype.

Many internal operations of FMRIPREP use Nilearn (Abraham et al., 2014), principally within the BOLD-processing workflow. For more details of the pipeline see https://fmriprep.readthedocs.io/en/latest/workflows.html.

pRF model-based analysis

The predictable nature of the stimuli used in the pRF mapping procedure elicits a predictable response in the fMRI time course. The model sought to fit optimal parameters in order to output a predicted time course that explains the most variance in the measured BOLD fMRI data. For every parameter combination there is a predicted time course, which is then convolved with a haemodynamic response function (HRF) (Boynton et al., 1996; Friston et al., 1998; Worsley et al., 2002). This mirrors the traditional approach of using a general linear model (GLM), commonly used in neuroimaging analyses.


Figure 2: A schematic illustration of the method used in this report for each participant. A) The pRF model-based analysis. Stimuli used in the pRF mapper trial elicit a BOLD time course for each voxel. The pRF model-based analysis finds the best-fitting predicted time course and receptive field for each voxel. B) The decoding analysis. Stimuli used in the location mapper trial elicit a BOLD time course for each voxel. The time courses of voxels with R² ≥ 0.45 were included in the decoding analysis. Receptive fields from these voxels and the covariance matrix were generated, which facilitated the calculation of probability distributions. C) The probability distributions were used to generate a decoded image. A separate matrix was subdivided into 72 bins and a GLM was fit on the average, rotated and de-meaned decoded image. The final output was a graph illustrating the average β-weight for the corresponding bin.


The outputs of the pRF model are the optimal parameters for every voxel. A two-dimensional Gaussian pRF was used, with three output parameters: x0, y0 and σ, where (x0, y0) is the pRF centre and σ is the Gaussian spread. A grid of 8000 different parameter combinations was constructed to limit computation time. The x and y positions of the pRFs were constrained to between -10° and 10° of eccentricity from the centre of the visual field, and the size of the pRF was constrained to between 0.5° and 10°. The optimal parameter combination within these limits was chosen for each voxel.

Optimal pRF parameters were found through minimisation of the residual sum of squares between the predicted and measured time series; goodness of fit is reported as the coefficient of determination (R²). To fit the optimal pRF parameters, the sweeping bar stimulus used in the pRF mapping procedure is defined in terms of its x and y position at each time point. Predicting the fMRI time course was done in two steps. First, the overlap was calculated between the model pRF and the stimulus definition. Next, this overlap was convolved with a haemodynamic response function, in order to account for the haemodynamic response found in fMRI time series data (Friston, Jezzard & Turner, 1994).
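The two prediction steps and the least-squares criterion can be sketched as follows, under simplifying assumptions: binary stimulus apertures, a single voxel's pRF, and a toy double-gamma HRF built from scipy.stats.gamma (not the HRF actually used in this analysis).

```python
# Sketch of pRF time-course prediction: (1) overlap between pRF and stimulus
# aperture at each time point, (2) convolution with an HRF. The HRF shape
# parameters below are illustrative, not those used in the report.
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(tr=1.3, duration=30.0):
    t = np.arange(0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # peak minus undershoot
    return hrf / hrf.max()

def predict_timecourse(prf, apertures, hrf):
    """prf: (n_pix, n_pix); apertures: (n_t, n_pix, n_pix) binary masks."""
    neural = (apertures * prf).sum(axis=(1, 2))      # step 1: overlap
    return np.convolve(neural, hrf)[: len(neural)]   # step 2: HRF convolution

def rss(measured, predicted):
    """Residual sum of squares minimised during the grid fit."""
    return ((measured - predicted) ** 2).sum()
```

In a grid fit, predict_timecourse would be evaluated for every (x0, y0, σ) combination and the combination minimising the RSS retained per voxel.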

Gifti images were generated for each subject to denote the optimal x location, y location and σ. Images displaying the R² for every voxel, along with the polar angle and eccentricity maps, were also generated. For this fitting procedure, every voxel on the cortical surface was fitted with a pRF model. Nifti images were created for each subject, with the model fitting procedure undertaken only for V1 and V2. In addition to the aforementioned outputs, the residuals of the predicted time course were also calculated for every voxel in V1 and V2. More information on the pRF model based analysis can be found in Dumoulin & Wandell (2008).

Decoding using a Bayesian framework

Using a Bayesian framework within decoding analyses has been demonstrated to be more effective at reconstructing the stimulus than using an inverted encoding model (Gardner & Liu, 2019). Under a Bayesian framework, the probability distribution of the stimulus given a pattern of BOLD responses is as follows:

p(f(s) | b) ∝ p(b | f(s)) p(f(s))

where p(f(s) | b) expresses a probability distribution over the stimulus given the BOLD response, p(b | f(s)) denotes the probability of the BOLD response given the stimulus (the encoding distribution), and p(f(s)) is a uniform prior. This equation states that the decoding distribution is proportional to the product of the encoding distribution and the prior (Naselaris et al., 2011).
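For a discrete set of candidate stimuli, and assuming multivariate Gaussian residuals, this decoding rule can be sketched as follows. The function and variable names are illustrative, not taken from the analysis code.

```python
# Sketch of Bayesian decoding with a uniform prior: the posterior over a
# discrete set of candidate stimuli is the normalised Gaussian likelihood
# of the observed BOLD pattern under each candidate's predicted response.
import numpy as np
from scipy.stats import multivariate_normal

def decode(b, candidate_responses, omega):
    """b: (n_vox,) observed BOLD pattern.
    candidate_responses: (n_stim, n_vox) predicted responses f(s).
    omega: (n_vox, n_vox) residual covariance matrix.
    Returns the posterior p(f(s) | b) over the candidates."""
    log_lik = np.array([
        multivariate_normal.logpdf(b, mean=f_s, cov=omega)
        for f_s in candidate_responses
    ])
    post = np.exp(log_lik - log_lik.max())   # subtract max for stability
    return post / post.sum()                 # uniform prior: just normalise
```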

To calculate this probability distribution, the residuals and generated receptive fields from voxels in V1 and V2 with R² ≥ 0.45 were collated for each subject. This threshold was used not only to minimise computation time, but also to ensure that the generated receptive fields were valid for decoding the visual stimuli. Figures 3A-3C illustrate the generated receptive fields of three separate voxels within a (50x50) pixel matrix; the receptive fields differ only in their centre (x0, y0) and size (σ). By computing the residuals and generating the receptive fields, it was possible to capture voxel covariance. This report used the covariance models developed by van Bergen et al. (2015).

Figure 3: An example of the receptive fields that are generated and used for the decoding model. A-C) The receptive fields of adjacent voxels in participant four, as represented in a (50x50) matrix. The receptive fields differ in the position of their centre and in their Gaussian spread. D) The averaged sum of every voxel with a pRF fit of R² ≥ 0.45 for the same participant. As expected, the receptive fields generated from the pRF fit indicate that most voxels are dedicated to processing the centre of the visual field, with the density of receptive fields decreasing as a function of eccentricity.

The covariance model calculates two things. Firstly, the model calculates the overlap between the receptive fields of different voxels. Intuitively, if the pattern of BOLD response indicated that two (or more) voxels with an overlap in their receptive fields were simultaneously active, then the likelihood of a stimulus being present in the overlapping receptive fields would increase.

Secondly, it estimates the residual covariance. This is necessary because the responses of neurons are inherently noisy (Schiller, Finlay & Volman, 1976; Dean, 1981), with many different sources of noise also present in the fMRI signal (Liu, 2016; Greve et al., 2013). Previous models have assumed the independence of noise between voxels in fMRI studies (Serences et al., 2009; Jehee et al., 2012; Brouwer & Heeger, 2009); however, there is mounting evidence that this assumption is invalid within the cortex (Arcaro et al., 2015; de Zwart et al., 2008; Smith & Kohn, 2008). Furthermore, evidence has demonstrated that decoding algorithms produce inaccurate probability distributions if noise correlations in the data are ignored (van Bergen et al., 2015; van Bergen & Jehee, 2018). Therefore, when trying to ascertain the decoding distribution, it is important to factor in how much the residuals of separate voxels covary, as well as the receptive field overlap.
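As the author understands the van Bergen et al. (2015) model, the residual covariance combines a globally shared noise component (ρ), voxel-specific noise standard deviations (τ), and noise correlated through receptive-field overlap (σ²WWᵀ). The sketch below is a hedged reading of that structure, with illustrative parameter values:

```python
# Hedged sketch of the van Bergen et al. (2015) noise-covariance structure:
# Omega = rho * tau tau' + (1 - rho) * diag(tau^2) + sigma^2 * W W'.
import numpy as np

def noise_covariance(tau, rho, sigma, W):
    """tau: (n_vox,) voxel noise SDs; rho: shared-noise proportion in [0, 1);
    W: (n_vox, n_pix) flattened receptive fields of the included voxels."""
    omega = rho * np.outer(tau, tau)           # noise shared across all voxels
    omega += (1 - rho) * np.diag(tau ** 2)     # independent voxel-wise noise
    omega += sigma ** 2 * W @ W.T              # noise correlated via RF overlap
    return omega
```

Because each term is positive semi-definite (and the diagonal term is positive definite for τ > 0), the resulting matrix is a valid covariance for the Gaussian likelihood used in decoding.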


To illustrate this point, take a hypothetical example. Suppose the BOLD response pattern indicates that two voxels are simultaneously active at a single time point, and that these voxels had highly covarying residuals in the data used to fit the pRF model. Given the BOLD response, the likelihood that their receptive fields were stimulated decreases, as the joint response is more likely to be due to noise in the time course. If this step were not taken, the decoding model would be based on an inaccurate probability distribution.

After using the covariance matrix to attain the Bayesian probability distributions under multivariate Gaussian residuals, the decoder could reconstruct the stimulus from the BOLD data. This produced a (2500 x 227) matrix, denoting the number of pixels (50² = 2500) by the number of time points (227) in the location mapper trial. This array could be visualised as an animation using matplotlib (Hunter, 2007).

Fitting a GLM on the decoded image

The decoded animation was de-meaned, and a general linear model (GLM) was used to explain it, using the equation:

f(s) = Xβ + ε

where f(s) is the decoded animation, X is a (7 x 227) design matrix indicating at which time points the relevant subject saw wedge stimuli at specific orientations (0°, 60°, 120°, 180°, 240° and 300°), stacked onto a vector of ones (serving as a constant), ε is an error term and β is a weighting factor. Ordinary least squares (OLS) was used to minimise the sum of squared errors and calculate the β-weight for every pixel in the decoded animation. Reshaping this into the original (50x50) format created a (7 x 50 x 50) array containing the β-weights for the six stimulus orientations at each location, plus the constant.
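The per-pixel OLS fit can be carried out for all pixels at once. The sketch below assumes a time-by-regressor layout (227 x 7), i.e. the transpose of the (7 x 227) matrix quoted above, so that a single least-squares solve returns every pixel's β-weights:

```python
# Sketch of the per-pixel GLM fit f(s) = X @ beta + eps, solved by ordinary
# least squares for all 2500 pixels simultaneously.
import numpy as np

def fit_glm(X, decoded):
    """X: (n_t, 7) design matrix (six orientations plus a constant column).
    decoded: (n_t, 2500) de-meaned decoded animation, pixels flattened.
    Returns beta-weights reshaped to (7, 50, 50)."""
    betas, *_ = np.linalg.lstsq(X, decoded, rcond=None)   # (7, 2500)
    return betas.reshape(-1, 50, 50)
```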

The array was separated by stimulus orientation, and each orientation's β-weights were plotted as a (50 x 50) matrix (figure 4A). Starting at the orientation of 60°, each image was rotated counter-clockwise by the orientation of its stimulus. For example, the matrix of β-weights for the row of X denoting when the wedge stimuli oriented at 120° appeared was rotated counter-clockwise by 120°. This brought all of the stimulus locations into alignment. The mean was then calculated over all six of these images (figure 4B).
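The alignment step can be sketched with scipy.ndimage.rotate; the choice of this particular function is an assumption, not taken from the report. Setting reshape=False keeps each rotated map at (50 x 50):

```python
# Sketch of the alignment step: rotate each orientation's beta-map
# counter-clockwise by its wedge orientation, then average the six maps.
import numpy as np
from scipy.ndimage import rotate

def align_and_average(beta_maps, orientations=(0, 60, 120, 180, 240, 300)):
    """beta_maps: (6, 50, 50), one map per wedge orientation in degrees."""
    rotated = [
        rotate(bmap, angle=ang, reshape=False, order=1)   # CCW rotation
        for bmap, ang in zip(beta_maps, orientations)
    ]
    return np.mean(rotated, axis=0)
```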

Separately, a (50x50) matrix was subdivided into 72 bins (figure 7B). The final output for every participant was the mean of the β-weights located in each corresponding bin of the (50x50) matrix. This was then averaged over the 21 participants included in the decoding analysis (figure 7C).
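The binning step can be sketched as follows. The angular convention used here (0° pointing right, angles increasing counter-clockwise) is an assumption; the report does not state where the first bin begins.

```python
# Sketch of the binning procedure: assign each pixel of the (50, 50) matrix
# to one of 72 polar-angle bins (5 degrees each) around the matrix centre,
# then average the beta-weights within each bin.
import numpy as np

def bin_by_polar_angle(betas, n_bins=72):
    """betas: (50, 50) beta-weight map. Returns (n_bins,) bin means."""
    n = betas.shape[0]
    centre = (n - 1) / 2.0
    ys, xs = np.indices(betas.shape)
    angles = np.degrees(np.arctan2(centre - ys, xs - centre)) % 360.0
    bin_idx = (angles // (360.0 / n_bins)).astype(int)
    return np.array([betas[bin_idx == b].mean() for b in range(n_bins)])
```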


Figure 4: A-F) An example of the (50x50) matrices representing the value of the β-weights of every pixel in the matrices. The matrices presented are for a single participant. The matrix with title 0.0° represents the β-weights of the pixels for wedges presented at 60°. Every succeeding matrix was rotated counter-clockwise by its orientation. G) The final average rotated matrix for the same participant. This represents the β-weights for all stimuli presented at the orientation of 60°.


Results

pRF Model Based Analysis

The model fit is illustrated in figure 5, which shows the mean R² value of every voxel across all participants. The red boundary encloses V1, and the blue boundaries enclose V2. The heat map displays the R² value (0.100 ≤ R² ≤ 0.359) for each voxel from the model fit on an inflated cortical surface. As expected, the pRF model explained the most variance within the occipital lobe, where the visual cortices are located. The model fit remained above the threshold value in both the parietal cortex (Figure 5B) and the temporal cortex (Figure 5C); however, the R² value of voxels diminished further up the dorsal and ventral streams.


Figure 5: A) The average R² value for each voxel across all participants, visualised on an inflated cortical surface. The left side displays the left hemisphere, and the right side displays the right hemisphere. V1 is labelled and located within the confines of the red boundary; V2 is labelled and located within the confines of the blue boundaries. The R² values range from 0.1 to 0.359. B) The model fit displayed for the parietal cortex. The model fit diminishes as a function of travelling up the dorsal stream from the occipital cortex. C) The model fit displayed for the temporal cortex and part of the occipital cortex. The model fit diminishes as a function of travelling down the ventral stream from the occipital cortex.


More variance explained in the right hemisphere caused a bias in the generated receptive fields

The pRF model based analysis led to a bias in the average receptive fields generated for each participant (figure 6A). The receptive fields that contributed to figure 6A belonged to voxels in V1 and V2 that had an R² ≥ 0.45, and were therefore included in the decoding analysis. Figure 6B illustrates that areas V1 and V2 in the right hemisphere contained more voxels with a high R² value than those in the left hemisphere. This concurs with previous research demonstrating hemispheric asymmetry in early visual areas in fMRI (Hougaard et al., 2015). As the left visual field is processed by the contralateral hemisphere, the inclusion of more voxels from the right hemisphere in the decoding analysis led to an over-representation of the left visual field for many participants.


Figure 6: A) The average receptive field across all participants, showing a cumulative bias towards the left visual field. B) The R² value of all voxels in the left and right hemispheres (separated by the vertical line at 0.5 on the x-axis). Each transparent line displays the R² values of one participant; the black dotted line displays the average R² value per hemisphere. The graph exemplifies the hemispheric asymmetry in the pRF model-based analysis, which likely contributed to the bias in receptive field generation.
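The asymmetry in Figure 6B can be summarised by counting, per hemisphere, the voxels whose fit clears the inclusion threshold and comparing mean R². A sketch under the assumption that per-voxel R² values are available as one array per hemisphere (names and synthetic values hypothetical):

```python
import numpy as np

def hemisphere_summary(r2_left, r2_right, threshold=0.45):
    """Count suprathreshold voxels per hemisphere and compute the
    difference in mean R^2 (positive = right-hemisphere advantage)."""
    n_left = int(np.sum(r2_left >= threshold))
    n_right = int(np.sum(r2_right >= threshold))
    return n_left, n_right, float(np.mean(r2_right) - np.mean(r2_left))

# Synthetic example reproducing the direction of the observed bias.
rng = np.random.default_rng(1)
r2_left = rng.uniform(0.0, 0.5, size=1000)
r2_right = rng.uniform(0.1, 0.6, size=1000)
n_l, n_r, diff = hemisphere_summary(r2_left, r2_right)
```

With a higher right-hemisphere R² distribution, more right-hemisphere voxels survive the threshold, which is exactly how the inclusion criterion converts a fit asymmetry into a visual-field bias.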

The beta-weights indicate the location of the stimulus

Figure 7A displays the β-weight of every pixel after each matrix was rotated so that all stimuli were aligned at the same location in the visual field. The figure shows the β-weights averaged over the 21 participants included in the decoding analysis, and illustrates how averaging over all participants facilitated an accurate visual reconstruction of the stimulus presented in the location mapper trial. The decoded image had a stronger representation towards the centre of the visual field.
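The rotation step can be sketched with a nearest-neighbour image rotation in pure NumPy (the actual analysis may have used a library routine; the 50 × 50 map size follows Figure 7 and the function names are illustrative). Each per-trial β-weight map is rotated so that its stimulus location lands at a common polar angle before averaging:

```python
import numpy as np

def rotate_map(m, angle_deg):
    """Rotate a square map about its centre (nearest-neighbour,
    counter-clockwise) by inverse coordinate mapping."""
    n = m.shape[0]
    c = (n - 1) / 2.0
    th = np.deg2rad(angle_deg)
    ys, xs = np.indices((n, n))
    x0, y0 = xs - c, ys - c
    # For each output pixel, find the input pixel it came from.
    xi = np.round(np.cos(th) * x0 + np.sin(th) * y0 + c).astype(int)
    yi = np.round(-np.sin(th) * x0 + np.cos(th) * y0 + c).astype(int)
    out = np.zeros_like(m)
    valid = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
    out[valid] = m[yi[valid], xi[valid]]
    return out

def align_and_average(beta_maps, stim_angles, target_angle=270.0):
    """Rotate each trial's map so its stimulus sits at target_angle,
    then average across trials (or participants)."""
    aligned = [rotate_map(m, target_angle - a)
               for m, a in zip(beta_maps, stim_angles)]
    return np.mean(aligned, axis=0)
```

Inverse mapping (sampling the source pixel for every output pixel) avoids the holes that forward mapping would leave in the rotated map.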


Figure 7B displays how the matrix was subsequently subdivided into 72 bins. For each participant, the β-weights of all pixels that fell within the same bin were averaged together, yielding 72 β-weights per participant. Figure 7C displays each bin's β-weight averaged over all participants. As there are 72 bins, each bin covers 5° of polar angle. Figure 7C peaks at bin 54 (270° of polar angle), which corresponds to the stimulus location presented to participants.
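The binning procedure described above can be sketched as follows, assuming the same 50 × 50 grid and 72 bins of 5°. The exact angle convention of the original analysis is not specified; here polar angle runs counter-clockwise from the right horizontal meridian, putting 270° at the lower vertical meridian:

```python
import numpy as np

def bin_by_polar_angle(beta_map, n_bins=72):
    """Average every pixel's beta-weight into polar-angle bins
    (72 bins -> 5 degrees per bin)."""
    n = beta_map.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.indices((n, n))
    # Screen y grows downward, so flip it to get conventional angles.
    theta = np.degrees(np.arctan2(c - ys, xs - c)) % 360.0
    bins = (theta // (360.0 / n_bins)).astype(int)
    return np.array([beta_map[bins == b].mean() for b in range(n_bins)])

# A stimulus just below fixation should peak at bin 54 (270 degrees).
stim = np.zeros((50, 50))
stim[45:50, 25] = 1.0
profile = bin_by_polar_angle(stim)
```

Taking the mean (rather than the sum) within each wedge compensates for the unequal number of pixels per bin on a square grid.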


Figure 7: A) A 50 × 50 matrix displaying the β-weights of the rotated matrices, averaged over every participant included in the decoding analysis. B) A 50 × 50 matrix illustrating the 72 bins used in the binning procedure. C) The β-weight of each corresponding bin, averaged over every participant.


Discussion

This report demonstrates how a pRF model can be combined with a decoding model to successfully decode the location of synthetic stimuli from the BOLD fMRI time course. This decoding approach differs from classification-based analyses, which typically use machine learning to train a classifier to dissociate between discrete states (Haxby et al., 2001; Kamitani & Tong, 2005). A disadvantage of such approaches is that the features of the stimuli are not explicit, so it is not entirely clear what drives the different patterns of brain activity on which the classification algorithm relies (Naselaris & Kay, 2015). The approach used in this report overcomes this disadvantage of pattern-based analyses, as the pRF model explicitly outputs parameters that define how areas of the brain encode visual information (Dumoulin & Knapen, 2018).

As suggested previously, combining decoding with encoding models serves to validate the encoding model used, and should become common practice in fMRI experiments (Naselaris et al., 2011). Furthermore, because computational neuroscience often lacks conventional hypothesis testing, it increasingly requires its models to be validated in other ways (Wandell & Winawer, 2015). This report found that a decoding model, built on a pRF model, could decode the location of stimuli in the visual field from the BOLD data, thereby validating the use of the pRF model. The report thus achieves two things. Firstly, it contributes to the relatively small body of literature corroborating pRF mapping as an accurate method (e.g. Schwarzkopf, Moutsiana & Panesar, 2019; Zuiderbaan et al., 2017; Senden et al., 2014; van Dijk et al., 2016). This contribution is necessary, as pRF mapping is becoming a more ubiquitous technique in computational neuroscience and therefore


requires empirical validation. Secondly, the report contributes a more novel finding: it is the first of its kind to use a pRF model in tandem with a decoding analysis to successfully reconstruct stimuli solely from the BOLD signal. The study therefore contributes both to the subsequent analyses and to computational neuroscience more broadly, as it demonstrates the efficacy of using a pRF model-based analysis within a decoding model.

As the tandem approach of combining decoding with pRFs has been validated, subsequent analyses can use it to investigate how reinforcement learning influences the sensory representation of valuable stimuli in the visual cortex. This approach provides two novel insights. Firstly, to the author's knowledge, no previous studies have used an encoding and decoding approach to investigate the effect of value on visual perception. Secondly, studies that have investigated this phenomenon have speculated that the sensory representation of valuable stimuli is sharpened in the visual cortex, but have not sought to quantify this. By decoding stimuli from the test set of data and fitting a GLM on the decoded stimulus, it is possible to quantitatively compare how the representations of high- and low-value stimuli differ (e.g. by comparing the full width at half maximum at two differing stimulus locations).
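The full width at half maximum mentioned above could be computed from a binned β-weight profile along these lines (a simple contiguous-peak sketch on a hypothetical profile; a real analysis might first fit a smooth tuning function):

```python
import numpy as np

def fwhm(profile, bin_width=5.0):
    """Full width at half maximum of a unimodal tuning profile,
    in degrees of polar angle, counting bins at or above half max."""
    p = np.asarray(profile, dtype=float)
    p = p - p.min()                     # baseline-correct the profile
    above = np.where(p >= p.max() / 2.0)[0]
    return (above[-1] - above[0] + 1) * bin_width

# Example: a Gaussian profile over 72 bins, peaked at bin 54.
x = np.arange(72)
profile = np.exp(-((x - 54) ** 2) / (2 * 2.0 ** 2))
width = fwhm(profile)  # a sharper representation -> a smaller width
```

Comparing this width between profiles decoded for high- and low-value stimuli would give the quantitative sharpening measure the paragraph proposes.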

One limitation of this report is as follows. Although it was possible to identify at which visual field location stimuli appeared, it was not possible to accurately reconstruct what the subjects saw in movie format. There are two possible reasons for this. Firstly, only V1 and V2 were included in the decoding analysis. Including more extrastriate areas of the visual cortex would have pooled more voxels into the decoding algorithm, increasing the signal-to-noise ratio and potentially increasing the accuracy of the


probability distributions. Secondly, the pRF model-based analysis was based on a single trial. Had it been based on multiple trials, the BOLD signal could have been averaged together for each participant. This would increase the signal-to-noise ratio and potentially increase the R² value of the model fit, so that more voxels, and thus more data, would have been included in the decoding analysis, improving the certainty of the probability distributions outputted by the decoding model and perhaps allowing a more accurate reconstruction of the visual field.
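The proposed trial averaging is straightforward: if each run is z-scored per voxel and the noise is independent across runs, averaging k runs scales the noise standard deviation by 1/√k. A sketch with hypothetical arrays and shapes:

```python
import numpy as np

def average_runs(runs):
    """Z-score each run's time course per voxel, then average over
    runs; independent noise shrinks by 1/sqrt(n_runs)."""
    runs = np.asarray(runs, dtype=float)  # (n_runs, n_voxels, n_trs)
    mu = runs.mean(axis=-1, keepdims=True)
    sd = runs.std(axis=-1, keepdims=True)
    return ((runs - mu) / sd).mean(axis=0)

# Hypothetical check: a common signal plus independent noise per run.
rng = np.random.default_rng(2)
signal = np.sin(np.linspace(0, 8 * np.pi, 120))
runs = signal + 0.5 * rng.standard_normal((4, 1, 120))
averaged = average_runs(runs)
```

Z-scoring before averaging prevents runs with larger overall signal amplitude from dominating the mean time course.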

It is noteworthy, however, that despite the inherently noisy fMRI signal, the pRF mapping trial in this experiment took under three minutes. This is a relatively fast model-fitting procedure in comparison to other studies of visual perception that use an encoding model (e.g. Itthipuripat et al., 2019; Vo, Sprague & Serences, 2017; Kay et al., 2008). Because fMRI is costly, it is important to optimise the time spent in the scanner; pRF modelling should therefore be strongly considered when using encoding models in fMRI studies.

Finally, one other possible limitation is that generating Figure 7C required decoding the visual space of all 21 participants and averaging the result. The sample size of this report is nevertheless similar to that of other decoding studies (van Bergen et al., 2015; Kok, Jehee & De Lange, 2012; Ester et al., 2013).

Future Directions

This report has validated the method that will be used in a subsequent analysis pipeline. As stated before, using the approach within this report will provide novel insights into how


reinforcement learning can modulate pRFs in the visual cortex. This will form one aspect of future analysis. To investigate how learned rewards influence pRFs, participants were trained on a value-based decision-making (VBDM) task. In experiments investigating VBDM, it has previously been difficult to accurately track the internal state of participants (Rangel, Camerer & Montague, 2008). However, developments in pupillometry and eye tracking have shown that pupil dilation can be used as a proxy for noradrenergic modulation in the locus coeruleus (Murphy et al., 2014; Joshi et al., 2016) and for dopaminergic activity in the midbrain (de Gee et al., 2017). Since reward in reinforcement learning is accompanied by a dopamine response in the midbrain (Schultz, Dayan & Montague, 1997), tracking pupil dilation provides a quantitative estimate of participants' internal states, overcoming these previous difficulties. Tracking the pupil during the VBDM task in this study can therefore provide rich information about participants' internal states and, in conjunction with Q-learning, allows accurate computational modelling of VBDM processes (van Slooten et al., 2018).
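The Q-learning component can be illustrated with the standard delta-rule update, in which the prediction error (obtained reward minus expected value) plays the role attributed to the phasic dopamine signal (Schultz, Dayan & Montague, 1997). A minimal sketch with hypothetical stimulus labels, reward magnitudes, and learning rate:

```python
def q_update(q, stimulus, reward, alpha=0.1):
    """One Q-learning step: move the value estimate of the chosen
    stimulus towards the obtained reward by a fraction alpha of the
    prediction error."""
    prediction_error = reward - q[stimulus]  # the 'dopamine-like' term
    q = dict(q)
    q[stimulus] = q[stimulus] + alpha * prediction_error
    return q

# Two stimuli paired with rewards of different magnitude.
q = {"high": 0.0, "low": 0.0}
for _ in range(100):
    q = q_update(q, "high", reward=1.0)
    q = q_update(q, "low", reward=0.2)
```

After repeated pairings the value estimates converge on the respective reward magnitudes, which is what makes the learned values usable as trial-by-trial regressors for pupil and BOLD responses.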

The subsequent report therefore encapsulates a multitude of phenomena within one experiment, in order to accurately conceptualise the mechanistic processes of reinforcement learning within a VBDM task. Firstly, by accurately quantifying the pRF properties of neurons in the visual cortex, the visual field can be decoded in subsequent trials using stimuli associated with different levels of reward. By using multiple stimuli associated with different scales of reward (see van Slooten et al., 2018), it is possible to investigate whether attentional capture in the visual cortex increases as a function of the scale of reward. Secondly, Maunsell (2004) argued that the cognitive phenomena of attention and reward are seldom


disentangled in neurophysiological studies. The subsequent analyses, however, afford a method that may distinguish whether neural activation in the visual cortex is underpinned by value-driven attentional capture or by the expectation of reward. This can be done by decoding the stimuli on a trial-by-trial basis and comparing participants' choices between stimuli. If the participant chose a stimulus, any observed shift in sensory representation may be explained by the mechanism of reward expectation. However, if the participant did not choose a stimulus, yet the decoded stimulus showed a sharpened representation, this may be explained by value-driven attentional capture.

Conclusion

In summary, this report sought to validate the model-based analysis that will be used in subsequent reports, which will investigate how the sensory representation of stimuli changes as a function of perceived value. This model-based analysis makes it possible to quantify such changes accurately. In the pRF model-based analysis, the model fit was most successful in explaining the BOLD time course in the occipital lobe, and in the decoding analysis, the location of the stimulus was successfully decoded.


References

Abraham, A., Pedregosa, F., Eickenberg, M., Gervais, P., Mueller, A., Kossaifi, J., ... & Varoquaux, G. (2014). Machine learning for neuroimaging with scikit-learn. Frontiers in neuroinformatics, 8, 14.

Amano, K., Wandell, B. A., & Dumoulin, S. O. (2009). Visual field maps, population receptive field sizes, and visual field coverage in the human MT+ complex. Journal of neurophysiology.

Anderson, B. A. (2013). A value-driven mechanism of attentional selection. Journal of vision, 13(3), 7-7.

Anderson, B. A. (2016). The attention habit: How reward learning shapes attentional selection. Annals of the new York Academy of Sciences, 1369(1), 24-39.

Anderson, B. A., Laurent, P. A., & Yantis, S. (2011). Learned value magnifies salience-based attentional capture. PloS one, 6(11), e27926.

Anderson, E. J., Tibber, M. S., Schwarzkopf, D. S., Shergill, S. S., Fernandez-Egea, E., Rees, G., & Dakin, S. C. (2017). Visual population receptive fields in people with schizophrenia have reduced inhibitory surrounds. Journal of Neuroscience, 37(6), 1546-1556.


Avants, B. B., Epstein, C. L., Grossman, M., & Gee, J. C. (2008). Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Medical image analysis, 12(1), 26-41.

Awh, E., Belopolsky, A. V., & Theeuwes, J. (2012). Top-down versus bottom-up attentional control: A failed theoretical dichotomy. Trends in cognitive sciences, 16(8), 437-443.

Balleine, B. W., Delgado, M. R., & Hikosaka, O. (2007). The role of the dorsal striatum in reward and decision-making. Journal of Neuroscience, 27(31), 8161-8165.

Baseler, H. A., Gouws, A., Haak, K. V., Racey, C., Crossland, M. D., Tufail, A., ... & Morland, A. B. (2011). Large-scale remapping of visual cortex is absent in adult humans with macular degeneration. Nature neuroscience, 14(5), 649.

Behzadi, Y., Restom, K., Liau, J., & Liu, T. T. (2007). A component based noise correction method (CompCor) for BOLD and perfusion based fMRI. Neuroimage, 37(1), 90-101.

Bialek, W., Rieke, F., Van Steveninck, R. D. R., & Warland, D. (1991). Reading a neural code. Science, 252(5014), 1854-1857.

Boynton, G. M., Engel, S. A., Glover, G. H., & Heeger, D. J. (1996). Linear systems analysis of functional magnetic resonance imaging in human V1. Journal of Neuroscience, 16(13), 4207-4221.


Bressler, D. W., Fortenbaugh, F. C., Robertson, L. C., & Silver, M. A. (2013). Visual spatial attention enhances the amplitude of positive and negative fMRI responses to visual stimulation in an eccentricity-dependent manner. Vision research, 85, 104-112.

Brouwer, G. J., & Heeger, D. J. (2009). Decoding and reconstructing color from responses in human visual cortex. Journal of Neuroscience, 29(44), 13992-14003.

Cox, R. W. (1996). AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Computers and Biomedical research, 29(3), 162-173.

Dale, A. M., Fischl, B., & Sereno, M. I. (1999). Cortical surface-based analysis: I. Segmentation and surface reconstruction. Neuroimage, 9(2), 179-194.

de Gee, J. W., Colizoli, O., Kloosterman, N. A., Knapen, T., Nieuwenhuis, S., & Donner, T. H. (2017). Dynamic modulation of decision biases by brainstem arousal systems. Elife, 6, e23232.

Dean, A. F. (1981). The variability of discharge of simple cells in the cat striate cortex. Experimental Brain Research, 44(4), 437-440.

DeSimone, K., Viviano, J. D., & Schneider, K. A. (2015). Population receptive field estimation reveals new retinotopic maps in human subcortex. Journal of Neuroscience, 35(27), 9836-9847.


de Zwart, J. A., Gelderen, P. V., Fukunaga, M., & Duyn, J. H. (2008). Reducing correlated noise in fMRI data. Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine, 59(4), 939-945.

Dumoulin, S. O., & Knapen, T. (2018). How visual cortical organization is altered by ophthalmologic and neurologic disorders. Annual review of vision science, 4, 357-379.

Dumoulin, S. O., & Wandell, B. A. (2008). Population receptive field estimates in human visual cortex. Neuroimage, 39(2), 647-660.

Esteban, O., Markiewicz, C. J., Blair, R. W., Moodie, C. A., Isik, A. I., Erramuzpe, A., ... & Oya, H. (2019). FMRIPrep: a robust preprocessing pipeline for functional MRI. Nature methods, 16(1), 111.

Ester, E. F., Anderson, D. E., Serences, J. T., & Awh, E. (2013). A neural measure of precision in visual working memory. Journal of Cognitive Neuroscience, 25(5), 754-761.

Failing, M. F., & Theeuwes, J. (2015). Nonspatial attentional capture by previously rewarded scene semantics. Visual Cognition, 23(1-2), 82-104.

Failing, M., & Theeuwes, J. (2018). Selection history: How reward modulates selectivity of visual attention. Psychonomic bulletin & review, 25(2), 514-538.


Field, M., & Cox, W. M. (2008). Attentional bias in addictive behaviors: a review of its development, causes, and consequences. Drug and alcohol dependence, 97(1-2), 1-20.

Fonov, V. S., Evans, A. C., McKinstry, R. C., Almli, C. R., & Collins, D. L. (2009). Unbiased nonlinear average age-appropriate brain templates from birth to adulthood. NeuroImage, (47), S102.

Frankó, E., Seitz, A. R., & Vogels, R. (2010). Dissociable neural effects of long-term stimulus– reward pairing in macaque visual cortex. Journal of Cognitive Neuroscience, 22(7), 1425-1439.

Friston, K. J., Fletcher, P., Josephs, O., Holmes, A., Rugg, M. D., & Turner, R. (1998). Event-related fMRI: characterizing differential responses. Neuroimage, 7(1), 30-40.

Friston, K. J., Jezzard, P., & Turner, R. (1994). Analysis of functional MRI time‐series. Human brain mapping, 1(2), 153-171.

Garavan, H., & Hester, R. (2007). The role of cognitive control in cocaine dependence. Neuropsychology review, 17(3), 337-345.

Gardner, J. L., & Liu, T. (2019). Inverted encoding models reconstruct an arbitrary model response, not the stimulus. eNeuro, 6(2).


Gorgolewski, K., Burns, C. D., Madison, C., Clark, D., Halchenko, Y. O., Waskom, M. L., & Ghosh, S. S. (2011). Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in python. Frontiers in neuroinformatics, 5, 13.

Greve, D. N., & Fischl, B. (2009). Accurate and robust brain image alignment using boundary-based registration. Neuroimage, 48(1), 63-72.

Greve, D. N., Brown, G. G., Mueller, B. A., Glover, G., & Liu, T. T. (2013). A survey of the sources of noise in fMRI. Psychometrika, 78(3), 396-416.

Haak, K. V., Cornelissen, F. W., & Morland, A. B. (2012). Population receptive field dynamics in human visual cortex. PLoS One, 7(5), e37686.

Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539), 2425-2430.

Harvey, B. M., Klein, B. P., Petridou, N., & Dumoulin, S. O. (2013). Topographic representation of numerosity in the human parietal cortex. Science, 341(6150), 1123-1126.

Hickey, C., Chelazzi, L., & Theeuwes, J. (2010). Reward changes salience in human vision via the anterior cingulate. Journal of Neuroscience, 30(33), 11096-11103.


Hougaard, A., Jensen, B. H., Amin, F. M., Rostrup, E., Hoffmann, M. B., & Ashina, M. (2015). Cerebral asymmetry of fMRI-BOLD responses to visual stimulation. PloS one, 10(5), e0126477.

Hunter, J. D. (2007). Matplotlib: A 2D graphics environment. Computing in science & engineering, 9(3), 90.

Itthipuripat, S., Vo, V. A., Sprague, T. C., & Serences, J. (2019). Value-driven attentional capture enhances distractor representations in early visual cortex. BioRxiv, 567354.

Jehee, J. F., Ling, S., Swisher, J. D., van Bergen, R. S., & Tong, F. (2012). Perceptual learning selectively refines orientation representations in early visual cortex. Journal of Neuroscience, 32(47), 16747-16753.

Jenkinson, M. (2003). Fast, automated, N‐dimensional phase‐unwrapping algorithm. Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine, 49(1), 193-197.

Jenkinson, M., Bannister, P., Brady, M., & Smith, S. (2002). Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage, 17(2), 825-841.


Jonides, J. (1981). Voluntary versus automatic control over the mind's eye's movement. Attention and performance, 187-203.

Joshi, S., Li, Y., Kalwani, R. M., & Gold, J. I. (2016). Relationships between pupil diameter and neuronal activity in the locus coeruleus, colliculi, and cingulate cortex. Neuron, 89(1), 221-234.

Kamitani, Y., & Tong, F. (2005). Decoding the visual and subjective contents of the human brain. Nature neuroscience, 8(5), 679.

Kay, K. N., Naselaris, T., Prenger, R. J., & Gallant, J. L. (2008). Identifying natural images from human brain activity. Nature, 452(7185), 352.

Kay, K. N., Weiner, K. S., & Grill-Spector, K. (2015). Attention reduces spatial uncertainty in human ventral temporal cortex. Current Biology, 25(5), 595-600.

Kay, K. N., Winawer, J., Mezer, A., & Wandell, B. A. (2013). Compressive spatial summation in human visual cortex. Journal of neurophysiology, 110(2), 481-494.

Klein, A., Ghosh, S. S., Bao, F. S., Giard, J., Häme, Y., Stavsky, E., ... & Keshavan, A. (2017). Mindboggling morphometry of human brains. PLoS computational biology, 13(2), e1005350.


Klein, B. P., Harvey, B. M., & Dumoulin, S. O. (2014). Attraction of position preference by spatial attention throughout human visual cortex. Neuron, 84(1), 227-237.

Kok, P., Jehee, J. F., & De Lange, F. P. (2012). Less is more: expectation sharpens representations in the primary visual cortex. Neuron, 75(2), 265-270.

Le Pelley, M. E., Pearson, D., Griffiths, O., & Beesley, T. (2015). When goals conflict with values: Counterproductive attentional and oculomotor capture by reward-related stimuli. Journal of Experimental Psychology: General, 144(1), 158.

Liu, T. T. (2016). Noise contributions to the fMRI signal: an overview. NeuroImage, 143, 141-151.

Maunsell, J. H. (2004). Neuronal representations of cognitive state: reward or attention?. Trends in cognitive sciences, 8(6), 261-265.

Miyawaki, Y., Uchida, H., Yamashita, O., Sato, M. A., Morito, Y., Tanabe, H. C., ... & Kamitani, Y. (2008). Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron, 60(5), 915-929.

Montague, P. R., Dayan, P., & Sejnowski, T. J. (1996). A framework for mesencephalic dopamine systems based on predictive Hebbian learning. Journal of neuroscience, 16(5), 1936-1947.


Murphy, P. R., O'connell, R. G., O'sullivan, M., Robertson, I. H., & Balsters, J. H. (2014). Pupil diameter covaries with BOLD activity in human locus coeruleus. Human brain mapping, 35(8), 4140-4154.

Naselaris, T., & Kay, K. N. (2015). Resolving ambiguities of MVPA using explicit models of representation. Trends in cognitive sciences, 19(10), 551-554.

Naselaris, T., Kay, K. N., Nishimoto, S., & Gallant, J. L. (2011). Encoding and decoding in fMRI. Neuroimage, 56(2), 400-410.

Naselaris, T., Prenger, R. J., Kay, K. N., Oliver, M., & Gallant, J. L. (2009). Bayesian reconstruction of natural images from human brain activity. Neuron, 63(6), 902-915.

Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19), 1641-1646.

Papanikolaou, A., Keliris, G. A., Lee, S., Logothetis, N. K., & Smirnakis, S. M. (2015). Nonlinear population receptive field changes in human area V5/MT+ of healthy subjects with simulated visual field scotomas. NeuroImage, 120, 176-190.

Pearson, D., Donkin, C., Tran, S. C., Most, S. B., & Le Pelley, M. E. (2015). Cognitive control and counterproductive oculomotor capture by reward-related stimuli. Visual Cognition, 23(1-2), 41-66.


Posner, M. I. (1980). Orienting of attention. Quarterly journal of experimental psychology, 32(1), 3-25.

Posner, M. I., & Petersen, S. E. (1990). The attention system of the human brain. Annual review of neuroscience, 13(1), 25-42.

Power, J. D., Mitra, A., Laumann, T. O., Snyder, A. Z., Schlaggar, B. L., & Petersen, S. E. (2014). Methods to detect, characterize, and remove motion artifact in resting state fMRI. Neuroimage, 84, 320-341.

Raiguel, S., Vogels, R., Mysore, S. G., & Orban, G. A. (2006). Learning to see the difference specifically alters the most informative V4 neurons. Journal of Neuroscience, 26(24), 6589-6602.

Rangel, A., Camerer, C., & Montague, P. R. (2008). A framework for studying the neurobiology of value-based decision making. Nature reviews neuroscience, 9(7), 545.

Robinson, T. E., & Berridge, K. C. (2008). The incentive sensitization theory of addiction: some current issues. Philosophical Transactions of the Royal Society B: Biological Sciences, 363(1507), 3137-3146.

Schiller, P. H., Finlay, B. L., & Volman, S. F. (1976). Short-term response variability of monkey striate neurons. Brain research, 105(2), 347-349.


Schultz, W. (1998). Predictive reward signal of dopamine neurons. Journal of neurophysiology, 80(1), 1-27.

Schultz, W. (2013). Updating dopamine reward signals. Current opinion in neurobiology, 23(2), 229-238.

Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275(5306), 1593-1599.

Schwarzkopf, D. S., Anderson, E. J., de Haas, B., White, S. J., & Rees, G. (2014). Larger extrastriate population receptive fields in autism spectrum disorders. Journal of Neuroscience, 34(7), 2713-2724.

Schwarzkopf, D., Moutsiana, C., & Panesar, G. (2019, March 24). Validation of population receptive field estimates in human visual cortex. https://doi.org/10.31219/osf.io/479cr

Seger, C. A. (2013). The visual corticostriatal loop through the tail of the caudate: circuitry and function. Frontiers in systems neuroscience, 7, 104.

Senden, M., Reithler, J., Gijsen, S., & Goebel, R. (2014). Evaluating population receptive field estimation frameworks in terms of robustness and reproducibility. PloS one, 9(12), e114054.


Serences, J. T. (2008). Value-based modulations in human visual cortex. Neuron, 60(6), 1169-1181.

Serences, J. T., & Saproo, S. (2010). Population response profiles in early visual cortex are biased in favor of more valuable stimuli. Journal of neurophysiology, 104(1), 76-87.

Serences, J. T., & Saproo, S. (2012). Computational advances towards linking BOLD and behavior. Neuropsychologia, 50(4), 435-446.

Serences, J. T., Saproo, S., Scolari, M., Ho, T., & Muftuler, L. T. (2009). Estimating the influence of attention on population codes in human visual cortex using voxel-based tuning functions. Neuroimage, 44(1), 223-231.

Shuler, M. G., & Bear, M. F. (2006). Reward timing in the primary visual cortex. Science, 311(5767), 1606-1609.

Smith, M. A., & Kohn, A. (2008). Spatial and temporal scales of neuronal correlation in primary visual cortex. Journal of Neuroscience, 28(48), 12591-12603.

Sprague, T. C., & Serences, J. T. (2013). Attention modulates spatial priority maps in the human occipital, parietal and frontal cortices. Nature neuroscience, 16(12), 1879.


Stănişor, L., van der Togt, C., Pennartz, C. M., & Roelfsema, P. R. (2013). A unified selection signal for attention and reward in primary visual cortex. Proceedings of the National Academy of Sciences, 110(22), 9136-9141.

Theeuwes, J. (1994). Endogenous and exogenous control of visual selection. Perception, 23(4), 429-440.

Theeuwes, J., & Belopolsky, A. V. (2012). Reward grabs the eye: Oculomotor capture by rewarding stimuli. Vision research, 74, 80-85.

Thirion, B., Duchesnay, E., Hubbard, E., Dubois, J., Poline, J. B., Lebihan, D., & Dehaene, S. (2006). Inverse retinotopy: inferring the visual content of images from brain activation patterns. Neuroimage, 33(4), 1104-1116.

Tustison, N. J., Avants, B. B., Cook, P. A., Zheng, Y., Egan, A., Yushkevich, P. A., & Gee, J. C. (2010). N4ITK: improved N3 bias correction. IEEE transactions on medical imaging, 29(6), 1310.

van Bergen, R. S., Ma, W. J., Pratte, M. S., & Jehee, J. F. (2015). Sensory uncertainty decoded from visual cortex predicts behavior. Nature Neuroscience, 18(12), 1728.

van Dijk, J. A., de Haas, B., Moutsiana, C., & Schwarzkopf, D. S. (2016). Intersession reliability of population receptive field estimates. NeuroImage, 143, 293-303.


van Es, D. M., Theeuwes, J., & Knapen, T. (2018). Spatial sampling in human visual cortex is modulated by both spatial and feature-based attention. eLife, 7, e36928.

van Es, D. M., van der Zwaag, W., & Knapen, T. (2019). Topographic Maps of Visual Space in the Human Cerebellum. Current Biology.

van Gerven, M. A. (2017). A primer on encoding models in sensory neuroscience. Journal of Mathematical Psychology, 76, 172-183.

van Slooten, J. C., Jahfari, S., Knapen, T., & Theeuwes, J. (2018). How pupil responses track value-based decision-making during and after reinforcement learning. PLoS computational biology, 14(11), e1006632.

Vo, V. A., Sprague, T. C., & Serences, J. T. (2017). Spatial tuning shifts increase the discriminability and fidelity of population codes in visual cortex. Journal of Neuroscience, 37(12), 3386-3401.

Wandell, B. A., & Winawer, J. (2015). Computational neuroimaging and population receptive fields. Trends in cognitive sciences, 19(6), 349-357.

Worsley, K. J., Liao, C. H., Aston, J., Petre, V., Duncan, G. H., Morales, F., & Evans, A. C. (2002). A general statistical analysis for fMRI data. Neuroimage, 15(1), 1-15.


Zhang, Y., Brady, M., & Smith, S. (2001). Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE transactions on medical imaging, 20(1), 45-57.

Zuiderbaan, W., Harvey, B. M., & Dumoulin, S. O. (2012). Modeling center–surround configurations in population receptive fields using fMRI. Journal of vision, 12(3), 10-10.

Zuiderbaan, W., Harvey, B. M., & Dumoulin, S. O. (2017). Image identification from brain activity using the population receptive field model. PloS one, 12(9), e0183295.
