
Feed-forward and recurrent connectivity during evidence accumulation

Extending the search for the neural correlates of evidence accumulation with a dynamic causal modelling approach

Master's project by Rose Nasrawi

Research Master Brain and Cognitive Sciences

Project carried out from January to June 2019

University of Amsterdam

Integrative model-based cognitive neuroscience (IMCN) Research Unit

University of Amsterdam


Content

Abstract
Introduction
Methods
    Experiment
    Data analysis
Results
    Behavioural results
    EEG results
Discussion
References


Abstract

Research into the neural correlates of decision making has investigated how task difficulty and corresponding evidence accumulation rates relate to signals from electro-encephalography (EEG). The results repeatedly revealed a centro-parietal event-related potential (ERP) component that increased with a decrease in task difficulty. Specifically, an increase in evidence accumulation rate was associated with an increase in the amplitude and build-up (i.e., slope) of the signal. Previous research on the neural correlates of evidence accumulation has focused on local cortical activity, ignoring the role that connectivity within a network of brain regions might play during evidence accumulation. The current study investigated how changes in the EEG signal due to task difficulty relate to changes in feed-forward and recurrent effective connectivity in a cortical network. A cat versus dog categorisation task with a between-trials task difficulty manipulation was used. In line with previous research, the results from this study revealed that evidence accumulation rates increased as task difficulty decreased. Furthermore, a centro-parietal EEG component was shown to increase in amplitude when task difficulty decreased. In addition, dynamic causal modelling (DCM) of the EEG data revealed that forward connections were sufficient for more difficult decisions, while feedback connections became necessary for easier decisions. Moreover, both forward and backward connections were shown to change in connectivity strength between different task difficulty levels. It can be concluded that, with changes in the difficulty of a perceptual decision, a broad cortical network reorganisation takes place. This reorganisation is possibly not specific to either forward or backward connections, but includes both.



Introduction

In our daily lives, we are constantly confronted with situations where we are forced to make a decision between two (or more) alternatives. For example, when we see a traffic light go from green to yellow: do we speed up, or do we stop? We only have the information at hand as a basis for our decision, and we need to make it within a reasonable timeframe. We see the yellow light, the person in front of us deciding to speed up, traffic from the left and right starting to move, the light eventually turning red, and we decide to stop. Perceptual decision making refers to this process of deciding on the basis of a noisy stream of information, and it has received a tremendous amount of attention in the field of cognitive neuroscience over the past decades (Mulder, Van Maanen, & Forstmann, 2014).

Research on perceptual decision making particularly progressed with the development of a class of models called sequential sampling models (Stone, 1960). Both the quality of information (affected by the stimulus) and the quantity of information needed to make a decision (set by the decision maker) influence the course of the decision process. Sequential sampling models account for this by modelling the interplay between reaction time (RT) and accuracy during a decision (Ratcliff & Smith, 2004). In general, RTs are longer and accuracy is lower for difficult decisions, while RTs are shorter and accuracy is higher for easy decisions. By considering RT and accuracy jointly, rather than treating these behavioural measures separately, a distinction can be made between different types of decision processes. Hence, the development of sequential sampling models opened up the opportunity to decompose behavioural data into the latent processes underlying decision making (Forstmann, Ratcliff, & Wagenmakers, 2016).

The currently most widely used sequential sampling model is the diffusion decision model (DDM; Ratcliff, 1978). The DDM presumes that when we make a perceptual decision based on noisy information, evidence is accumulated over time until a response threshold is reached. Based on the response time distributions for both correct and incorrect responses, the DDM estimates four model parameters: drift rate, representing evidence accumulation over time; boundary separation, serving as the amount of evidence needed for a response; starting point, as the amount of bias towards one or the other response alternative; and non-decision time, reflecting the time needed for sensory processing, and motor preparation and execution (Ratcliff, Smith, Brown, & McKoon, 2016).
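To make the accumulation mechanism concrete, the following minimal simulation sketch illustrates how the four parameters shape single-trial behaviour (the parameter values are made up for illustration; this is not the fitting procedure used later in this report):

```python
import numpy as np

def simulate_ddm_trial(v, a, z, t0, dt=0.001, s=1.0, rng=None):
    """Simulate one diffusion decision model trial.

    v: drift rate, a: boundary separation, z: starting point (0..a),
    t0: non-decision time (s), s: diffusion noise, dt: step size (s).
    Returns (response, reaction_time): 1 = upper boundary, 0 = lower.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, t = z, 0.0
    while 0.0 < x < a:
        # Accumulate noisy evidence in small time steps.
        x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), t + t0

# Example: an easy (high drift) versus a hard (low drift) condition.
rng = np.random.default_rng(0)
easy = [simulate_ddm_trial(v=2.0, a=1.0, z=0.5, t0=0.3, rng=rng) for _ in range(1000)]
hard = [simulate_ddm_trial(v=0.5, a=1.0, z=0.5, t0=0.3, rng=rng) for _ in range(1000)]
print("easy: acc=%.2f, mean RT=%.3f s" % (np.mean([r for r, _ in easy]),
                                          np.mean([t for _, t in easy])))
print("hard: acc=%.2f, mean RT=%.3f s" % (np.mean([r for r, _ in hard]),
                                          np.mean([t for _, t in hard])))
```

With a higher drift rate, simulated accuracy rises and responses become faster, which is exactly the pattern of results the DDM is meant to capture.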

Nearing the end of the 20th century, it became clear that the DDM could not only reliably explain behaviour, but that it also had a neuroscientific relevance and application. Shadlen and colleagues investigated single-cell recordings in the lateral intraparietal area (LIP) in the parietal cortex of monkeys performing a random-dot kinematogram (RDK) task. The monkeys were trained to discriminate whether the overall dot motion was directed towards one or the other target. It was found that the firing rates of neurons in area LIP increased to a maximum while the random-dot motion was being viewed, which was ultimately predictive of the monkeys' decision (Shadlen & Newsome, 2001). Similar results were observed for recordings from the superior colliculus (SC) and dorsolateral pre-frontal cortex (dlPFC) (Horwitz & Newsome, 1999; Kim & Shadlen, 1999). Crucially, activity from single-cell recordings during evidence accumulation closely resembles the way accumulation of sensory evidence is modelled by the DDM: evidence is accumulated over time, until a response threshold is reached. Consequently, the DDM became a useful and valid tool for (cognitive) neuroscientists to study the connection between behaviour and the brain.

Several studies have investigated how evidence accumulation can be related to signals from electro-encephalography (EEG), and in particular event-related potential (ERP) components. In this context, ERPs are highly suitable due to their high temporal resolution, on a millisecond scale (Woodman, 2010). Philiastides, Ratcliff, and Sajda (2006) investigated how cortical activity measured with EEG could be related to differences in task difficulty during decision making. To do so, a face versus car categorisation task with a task difficulty manipulation was used. Differences in task difficulty were brought about by manipulating the phase coherence levels of the images: with a decrease in phase coherence, images became increasingly difficult to discriminate, and as a consequence, task difficulty increased. The authors found a centro-parietal ERP component that was related to task difficulty: the magnitude of this component increased with an increase in phase coherence, that is, when task difficulty decreased (Philiastides et al., 2006). Moreover, an increase in the drift rate estimated from the behavioural data (i.e., RT distributions for correct and incorrect responses) was also associated with an increase in the amplitude of centro-parietal positivity in the EEG signals (Philiastides et al., 2006; Ratcliff, Philiastides, & Sajda, 2009).

In addition to an increase in amplitude, these signals have also been shown to display a gradual build-up that is proportional to the amount of evidence in the stimulus (Philiastides, Heekeren, & Sajda, 2014). Using a similar face versus car categorisation task, it was found that the ERP component build-up (i.e., the slope of the signal) was faster for stimuli containing a higher amount of sensory evidence (i.e., a higher phase coherence), as opposed to those containing a lower amount. These results are similar to the previously discussed findings from single-cell recordings (e.g., Shadlen & Newsome, 2001), and are consistent with the way evidence accumulation is modelled in the DDM (Gold & Shadlen, 2007; Ratcliff et al., 2016). However, although the build-up of the EEG signal increases with phase coherence, the amplitude also differs. This means that, as opposed to findings from single-cell recordings and drift rate modelling, a common threshold is not reached. Furthermore, it has been demonstrated that the previously discussed EEG potentials are specific to evidence accumulation: they are independent of sensory modality (visual versus auditory stimuli) and motor requirements (presence versus absence of a motor response) (O'Connell, Dockree, & Kelly, 2012). Thus, previous research robustly reveals a domain-general neural signal that relates to evidence accumulation in perceptual decision making.

Research on the neural correlates of evidence accumulation has so far mainly focused on separate brain regions in isolation, ignoring the role that connectivity within a network of brain regions might play during evidence accumulation. This raises the following important question: how do changes in the EEG signal due to task difficulty relate to changes in feed-forward and recurrent effective connectivity? Previous research showed that a feed-forward sweep suffices for simple object recognition (Serre, Oliva, & Poggio, 2007), while recurrent feedback processing becomes necessary as object recognition becomes more complex (Wyatte, Curran, & O'Reilly, 2012; O'Reilly, Wyatte, Herd, Mingus, & Jilk, 2013). This suggests that differences in evidence accumulation rates, as a consequence of variations in task difficulty, could potentially be related to differences in feed-forward and feedback connectivity strengths.

Effective connectivity between brain regions (i.e., the directionality of functional connections) can be estimated from EEG data with dynamic causal modelling (DCM; Friston, Harrison, & Penny, 2003). DCM uses a biologically plausible model that explains the development of ERPs based on the temporal dynamics in membrane potentials of neural cell populations. These dynamics are described by interconnected canonical microcircuits (CMC; Friston et al., 2017), with each circuit consisting of interacting pyramidal, spiny stellate, and interneuron cell populations (David et al., 2006; Moran, Pinotsis, & Friston, 2013). Crucially, DCM provides the possibility to estimate network reorganisation in terms of effective connectivity, as a consequence of some experimental manipulation (Kiebel, Garrido, Moran, Chen, & Friston, 2009). For example, previous research on mismatch-negativity made use of this possibility by demonstrating that feed-forward connections were sufficient to explain early ERP components, while feedback connections were necessary for later ones (Garrido, Kilner, Kiebel, & Friston, 2007; Garrido et al., 2008). Thus, introducing DCM to perceptual decision making could further unravel the neural mechanisms behind evidence accumulation, as it has previously done for research on mismatch-negativity.

The current study investigates in what way evidence accumulation rates can be related to feed-forward and feedback effective connectivity strengths, using a combined EEG and DCM approach. In accordance with previous research, it is hypothesised that task difficulty influences evidence accumulation rate, and that this is reflected in late components of the EEG signal. It is expected that stimuli with higher phase coherence levels result in higher drift rate parameter estimates. For higher drift rate parameter estimates, higher centro-parietal positivity amplitudes with a faster build-up are expected. Furthermore, task difficulty and the rate of evidence accumulation are hypothesised to relate to changes in effective connectivity strength. It is expected that feed-forward connectivity strength increases as task difficulty decreases, that is, for easy decisions with a fast evidence accumulation rate. In contrast, feedback connectivity strength is expected to increase as task difficulty increases, that is, for more difficult decisions with a slow evidence accumulation rate.



Methods

Experiment


Participants

For the study, 21 participants (of whom 15 female, two left-handed; mean age = 20.94, SD = 2.07) were recruited from the Department of Psychology of the University of Amsterdam. All participants received an information letter a few days before the experiment. In this letter, the in- and exclusion criteria were set out, and the course of the experiment was explained. Preceding the experiment, participants signed an informed consent form. All participants had normal or corrected-to-normal vision, and had no (history of) neurological or psychiatric diseases. All experimental procedures had been approved by the Ethical Committee of the Department of Psychology at the University of Amsterdam (UvA).

Stimuli and apparatus

For the cat versus dog categorisation task, images of dogs and cats were taken from the online ImageNet database (http://image-net.org). For each category, 20 different images were selected from the database, while ensuring variety in the types of cats and dogs (regarding colour, size, breed, et cetera). The 40 selected images were then edited in Adobe Photoshop (version 20.0.2, Adobe Inc., Mountain View, California, United States).

Figure 1. Illustration of the image manipulation process. The change from the natural image (1st column) to a greyscale cut-out (2nd column) to the phase-scrambled image (3rd column; a 40% phase coherence was used in the example images) is shown.

Figure 1 above shows the image editing process. First, the animal was cut out of the original image, and this animal cut-out was then changed to a greyscale image. Furthermore, luminance, contrast, and size for all 40 greyscale cut-outs were equalised. This was done to ensure that the images did not differ in terms of their physical properties (which could have an influence on the discriminability of the images). The animal cut-out was then pasted onto a grey background (500 x 500 pixels; RGB: 97, 97, 97) with the head of the animal centralised. Furthermore, the cut-outs were scaled so that the head of each animal had the same size. Finally, the phase coherence levels of the images were manipulated in MATLAB (version R2018b, The MathWorks Inc., Natick, Massachusetts, United States) in order to influence the task difficulty (as described by Philiastides et al., 2006).
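The stimulus-generation code itself is not reproduced in this report; it was implemented in MATLAB following Philiastides et al. (2006). Purely as a hedged sketch of the general idea, weighted phase scrambling of a greyscale image could look as follows (function and variable names are illustrative, and this is not necessarily the exact weighting scheme that was used):

```python
import numpy as np

def phase_scramble(image, coherence, rng=None):
    """Return a phase-scrambled copy of a 2D greyscale image.

    coherence = 1.0 keeps the original phase spectrum (fully intact image);
    coherence = 0.0 replaces it entirely with random phase. The amplitude
    spectrum, and hence low-level image statistics, is preserved.
    """
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.fft2(image)
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    # Random phase field taken from the spectrum of a random noise image.
    random_phase = np.angle(np.fft.fft2(rng.random(image.shape)))
    # Mix original and random phase according to the coherence level.
    mixed_phase = coherence * phase + (1.0 - coherence) * random_phase
    scrambled = np.fft.ifft2(amplitude * np.exp(1j * mixed_phase)).real
    return scrambled

# Example: a 40% phase coherence version of a random test "image".
test_image = np.random.default_rng(1).random((500, 500))
stimulus = phase_scramble(test_image, coherence=0.40)
```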

The images were presented on a BenQ monitor (model XL2420T, resolution 1920 x 1080, refresh rate 120 Hz), at a viewing distance of 90 cm. Responses were recorded using two response boxes, attached to the left and right armrests of the chair.

Experimental design

Participants performed a cat versus dog categorisation task, with a manipulation of task difficulty between trials. Each trial started with a fixation cross (RGB: 220, 220, 220) which was presented for a duration that varied randomly between 1500-2000 ms. The fixation cross was followed by a stimulus (i.e., an image of a cat or a dog), presented for 30 ms. The participant was asked to indicate whether they perceived a dog or a cat by pressing a button on the left or right response box with the index finger of their left or right hand, respectively. The assignment of the dog and cat responses to the left and right response box buttons was counter-balanced between subjects. The participants were asked to respond as fast as possible, without sacrificing accuracy. A deadline was implemented 2500 ms after the image was presented. If the participant failed to respond before this deadline, the text "Too slow! Please respond faster." was presented on screen for 1000 ms. The next trial started 1000 ms after the participant had given their response (i.e., the inter-trial interval, ITI). Below, figure 2 shows an illustration of the experimental design, and figure 3 shows an illustration of the influence of phase coherence on task difficulty for four example phase coherence levels and one example image.

Figure 2. Illustration of the experimental design. A trial started with a fixation cross (1500-2000 ms), followed by the stimulus (30 ms), which was followed by a response (RT). A deadline was implemented (2500 ms after the stimulus). The inter-trial interval (ITI) was 1000 ms.


The experiment was divided into three sessions: a practice session, a calibration session, and an experimental session. The practice session was implemented for participants to get used to the task, and for the experimenter to see whether the participant fully understood the task. The practice session consisted of one block of 27 trials, where images with 16, 19, 22, 25, 28, 31, 34, 37, and 40% phase coherence were randomly presented (3 trials per coherence level).

A calibration session was implemented in order to find individual task difficulty levels. The calibration session used images from the same nine phase coherence levels, and consisted of 10 blocks of 27 trials each, leading to a total of 270 trials with 30 trials per coherence level. Halfway through the calibration session, after five blocks, the participant could take a short break. Based on the RTs and accuracies from the calibration session, coherence levels corresponding to four performance levels (60, 70, 80, and 90% accuracy) were extracted for each individual using a Palmer fit (Palmer, Huk, & Shadlen, 2005). A chance performance level (50%) was added as a control condition, and consisted of images with a 0% phase coherence level. The coherence levels corresponding to the five performance levels (50, 60, 70, 80, 90%) are referred to as coherence 1, 2, 3, 4, and 5, respectively. As some participants found the task more difficult than others, the extracted phase coherence levels differed considerably between participants.
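As a rough illustration of this step, a simplified Palmer-style psychometric function can be fitted to the calibration accuracies and then inverted to obtain the coherence levels for the target accuracies. The accuracy function below collapses sensitivity and boundary separation into a single parameter, and the calibration data are invented, so this is only a sketch of the procedure, not the fit that was actually used:

```python
import numpy as np
from scipy.optimize import curve_fit

def accuracy_model(coherence, beta):
    """Simplified Palmer-style psychometric function for 2AFC accuracy.

    Accuracy rises from 0.5 (chance) towards 1.0 as coherence increases;
    beta lumps together sensitivity and boundary separation.
    """
    return 1.0 / (1.0 + np.exp(-beta * coherence))

# Hypothetical calibration data: proportion correct per coherence level.
coherences = np.array([0.16, 0.19, 0.22, 0.25, 0.28, 0.31, 0.34, 0.37, 0.40])
accuracies = np.array([0.62, 0.66, 0.70, 0.73, 0.77, 0.80, 0.83, 0.85, 0.88])

(beta,), _ = curve_fit(accuracy_model, coherences, accuracies, p0=[5.0])

# Invert the fitted function to find coherences for the target accuracies.
targets = np.array([0.60, 0.70, 0.80, 0.90])
extracted = np.log(targets / (1.0 - targets)) / beta
print(dict(zip(targets, np.round(extracted, 3))))
```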

Figure 4 shows the Palmer fit for two example subjects (3 and 10), and illustrates a considerable inter-individual difference in task performance. Performance between the two example participants differed markedly, and so did the extracted coherence levels. For the 60, 70, 80 and 90% accuracy levels, coherence levels of 19.7, 22.3, 24.2 and 26.2% respectively were extracted from the Palmer fit for subject 3, while coherence levels of 28.4, 32.3, 35.5 and 41.7% were extracted for subject 10. For the experimental session, the images were scrambled according to the coherence levels extracted from the calibration session for each subject. The experimental session consisted of 36 blocks of 25 trials each, leading to a total of 900 trials with 180 trials per performance level, with an equal number of cats and dogs. Participants could take a break every six blocks.

Figure 3. Illustration of the influence of phase coherence on task difficulty. As the phase coherence level in the image increased, the image became more easily visible, leading to a decrease in task difficulty.


EEG data acquisition

EEG data were recorded using a 64-channel BioSemi ActiveTwo system (BioSemi B.V., Amsterdam, The Netherlands). Additionally, six external electrodes were placed: (1) next to the left eye, (2) next to the right eye, (3) above the right eye, (4) below the right eye, (5) on the left mastoid, and (6) on the right mastoid. A conductive gel was used for all electrodes, and the skin around the eyes and behind the ears was cleaned with an alcohol wipe before securing the electrodes. EEG data were acquired with a 256 Hz low-pass filter and a sampling rate of 1024 Hz.

Procedure

The experiment started with the practice session, which was performed in the presence of the experimenter. If the task was clear, and if this was reflected in the performance of the participant, the 10-minute calibration session would start. When the participant was finished, the Palmer fit was fitted to the behavioural data (mean RT and accuracy for the nine coherence levels from the calibration block). The performance-based coherence levels were then used to create the stimulus set for the experimental session. After setting up the EEG equipment, the participant was informed about the effects of artefacts on the EEG data. The participants were instructed to keep their jaw muscles relaxed, and to refrain from eye blinks during the trials (i.e., from the moment the fixation cross was presented until the participant had given a response). When all instructions were clear, the experimental session started, lasting approximately 45-50 minutes. The total duration of the experiment was 2 hours. For their participation, first-year students of the Psychology Bachelor's programme received two research credits.

Figure 4. Illustration of the Palmer fit to the data from the calibration block. The left panel shows the fit for subject 3, the right panel the fit for subject 10. The first row shows accuracy on the y-axis, against coherence on the x-axis. The second row shows RT on the y-axis, against coherence on the x-axis. The blue dots represent the data for each coherence level in the calibration block, and the red lines represent the Palmer fit. As coherence increased, accuracy increased and RT decreased.


Data analysis

All analyses of the behavioural data were performed in R (R Core Team, 2017), and the Dynamic Models of Choice toolbox (Heathcote et al., 2019) was used for the diffusion modelling. For the analyses of the EEG data, the SPM12 toolbox (Penny, Friston, Ashburner, Kiebel, & Nichols, 2011) was used, supported by MATLAB (version R2018b, The MathWorks Inc., Natick, Massachusetts, United States).


Diffusion modelling¹

The DDM was fitted to the behavioural data from the experimental session. Eight DDMs with different sets of varying parameters were set up, in which: (1) none of the DDM parameters were allowed to vary; (2) only drift rate (v) was allowed to vary; (3) only non-decision time (t0) was allowed to vary; (4) both v and t0 were allowed to vary; (5) only starting point (z) was allowed to vary; (6) both v and z were allowed to vary; (7) both z and t0 were allowed to vary; and finally, (8) v, z and t0 were allowed to vary. Each of these models was fitted to the data from each individual subject.
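The eight designs can be summarised by which parameters were free to vary over the coherence conditions; the representation below is purely illustrative (the actual fits were done with the Dynamic Models of Choice toolbox in R):

```python
# Which DDM parameters (drift rate v, starting point z, non-decision time t0)
# were allowed to vary over coherence conditions in each of the eight models.
ddm_designs = {
    1: set(),             # no parameter varies
    2: {"v"},             # drift rate only
    3: {"t0"},            # non-decision time only
    4: {"v", "t0"},
    5: {"z"},             # starting point only
    6: {"v", "z"},
    7: {"z", "t0"},
    8: {"v", "z", "t0"},  # the winning model in this study
}

for model_id, free in sorted(ddm_designs.items()):
    print(f"model {model_id}: varying {sorted(free) or ['none']}")
```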

EEG pre-processing

EEG data from the experimental session for each participant were pre-processed as follows. First, a 0.5 Hz high-pass filter was applied to the data, followed by band-stop filters at 50, 100, 150 and 200 Hz (cut-off frequencies: -2 Hz, +2 Hz around each centre frequency). The data were then down-sampled to 256 Hz, epoched between -2000 ms pre- and 2000 ms post-stimulus, and labelled for each condition in the experiment. Then, trials with early responses (RT < 50 ms) and missed trials (trials with no response) were flagged as bad trials. Trials in which eye blinks occurred within a critical time range were also flagged as bad trials, as follows: when the signal from the upper EOG channel reached an amplitude above 100 µV, or the signal from the lower EOG channel reached an amplitude below -100 µV, within the time range from 500 ms pre-stimulus until the RT of that trial post-stimulus, the trial was flagged as bad. A baseline correction from -500 until 0 ms was then performed on the data. Furthermore, the data were robust-averaged per condition. With robust averaging, weights between 0 and 1 are estimated for fragments of data, to give an indication of how noisy (i.e., due to artefacts) each data fragment is. The lower the weight of a fragment, the smaller its influence on the average. Trials that had been manually flagged as bad (as described above) were excluded entirely from the averaging procedure. The baseline correction was then repeated to account for a shift due to the robust averaging, followed by a 40 Hz low-pass filter.
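Pre-processing was carried out in SPM12. Only as a hedged sketch of a comparable pipeline, the filtering, down-sampling, epoching, and amplitude-based artefact rejection could be approximated in MNE-Python as follows (the file name, event codes, and external-channel names are hypothetical, and a plain average stands in for SPM's robust averaging):

```python
import mne

# Load the raw BioSemi recording (hypothetical file name).
raw = mne.io.read_raw_bdf("sub-01_task-catdog.bdf", preload=True)

# Mark two of the external electrodes as EOG (hypothetical channel names).
raw.set_channel_types({"EXG3": "eog", "EXG4": "eog"})

# 0.5 Hz high-pass filter and notch filters at the line frequency and harmonics.
raw.filter(l_freq=0.5, h_freq=None)
raw.notch_filter(freqs=[50, 100, 150, 200])

# Down-sample to 256 Hz and cut epochs around stimulus onset.
raw.resample(256)
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events,
                    event_id={"coh1": 1, "coh2": 2, "coh3": 3,
                              "coh4": 4, "coh5": 5},
                    tmin=-2.0, tmax=2.0, baseline=(-0.5, 0.0), preload=True)

# Simple amplitude-based rejection on the EOG channels, a stand-in for the
# trial-by-trial blink flagging described above.
epochs.drop_bad(reject={"eog": 100e-6})

# 40 Hz low-pass filter and per-condition averaging (plain averaging here,
# not SPM's robust averaging).
epochs.filter(l_freq=None, h_freq=40.0)
evokeds = {cond: epochs[cond].average() for cond in epochs.event_id}
```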


EEG statistical analysis

Regular event-related potential (ERP) analysis was performed on the pre-processed data. The statistical analysis consisted of a one-way within-subject ANOVA (factor: task difficulty, levels: coherence 1-5). Based on this analysis, it was determined at which points in time and space the signal differed significantly between any of the five task difficulty conditions in the experiment.

¹ The diffusion modelling analyses were performed by the supervisor, Bernadette van Wijk.


Source reconstruction

A source reconstruction was performed on averaged EEG data across participants, in order to obtain source priors for the dynamic causal modelling. First, a head model was created by registering the data in EEG sensor space to a template cortical mesh. After successful co-registration, a forward model was created: for each dipole in the cortical mesh, the effect on the sensor-level EEG signal was calculated. The forward model was then inverted: based on the sensor-level EEG signal, activity for each dipole in the cortical mesh was calculated. This was done using the multiple sparse priors (MSP) approach: based on a large number of pre-specified source priors, the MSP inversion chooses the source priors that are necessary to explain the data. Hence, depending on the data at hand, the solutions the inversion provides can be sparse or distributed (Friston et al., 2008).

The inversion was performed for four different time windows: 0-200 ms, 200-400 ms, 400-600 ms, and 600-800 ms, post-stimulus. For each time window and each condition in the experiment, MNI coordinates from peak activity were extracted. Several extracted MNI coordinates from the same brain region were averaged for a more robust source prior. Using the SPM Anatomy Toolbox (version 2.2b) and its probabilistic cytoarchitectonic mapping function, it was determined which brain areas the MNI coordinates corresponded with. In table 1 and figure 5 below, the extracted MNI coordinates and corresponding brain areas are shown.

Table 1. DCM source priors.

MNI coordinates    Brain area                       Abbreviation
-39, -91, -6       Left Middle Occipital Gyrus      L MOG
34, -92, -8        Right Inferior Occipital Gyrus   R IOG
-33, -23, -29      Left Fusiform Gyrus              L FG
32, -20, -28       Right ParaHippocampal Gyrus      R PHG
-37, -38, 61       Left PostCentral Gyrus           L PCG
38, -36, 59        Right PostCentral Gyrus          R PCG
-45, 30, -8        Left Inferior Frontal Gyrus      L IFG
46, 29, -6         Right Inferior Frontal Gyrus     R IFG

Figure 5. Visualisation of DCM source prior locations. The left and middle panels show cortical locations with peak activations, with the area labels as shown in table 1. The left panel shows the location of the occipital, parietal, and frontal sources. The middle panel shows the location of the temporal sources. The right panel shows a schematic representation of all eight sources.


Dynamic causal modelling

Dynamic causal modelling (DCM) of EEG data offers the possibility to estimate effective connectivity between cortical brain regions: the causal influence one neuronal cell population has over another. Moreover, the effect of some experimental manipulation (such as task difficulty) on these connections can be investigated (Friston, Harrison, & Penny, 2003). The neuronal model of DCM specifies how a network of interacting brain regions processes sensory information. The temporal dynamics within each of these regions are modelled by a canonical microcircuit, consisting of interacting pyramidal, spiny stellate, and interneuron cell populations (Friston et al., 2017). Furthermore, a forward model specifies how cortical activity turns into the ERP responses that are measured at the scalp with EEG (Kiebel et al., 2009). For the model inversion, the free-energy bound on the model log-evidence is optimised, in order to estimate the model parameters that most accurately describe the data. This model evidence can then be compared using Bayesian model selection, in order to differentiate between various DCM architectures (Kiebel et al., 2009; David et al., 2006).
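Under such a fixed-effects comparison, models are ranked by differences in their (free-energy approximated) log-evidence, which can be converted into posterior model probabilities. A small sketch with placeholder values, not the values obtained in this study:

```python
import numpy as np

# Placeholder free-energy approximations to the log-evidence of four models.
log_evidence = np.array([-1234.0, -1238.5, -1241.2, -1247.9])

# Log Bayes factors relative to the best model; a difference of about 3
# is conventionally treated as strong evidence for the better model.
relative = log_evidence - log_evidence.max()

# Posterior model probabilities under a uniform prior over models (softmax).
posterior = np.exp(relative) / np.exp(relative).sum()

for i, (dF, p) in enumerate(zip(relative, posterior), start=1):
    print(f"model {i}: relative log-evidence {dF:6.1f}, posterior P {p:.3f}")
```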

The extracted MNI coordinates from the source reconstruction were used as source priors for the dynamic causal modelling (DCM) of the EEG data (as shown in table 1, and figure 5). A post-stimulus time window of 0-800 ms was modelled. Activity within this time window was multiplied with a Hanning window to reduce the influence of activity early and late in the time window of the modelling. A canonical microcircuit (CMC) was used as ERP model, and equivalent current dipoles (ECD) as a spatial model. Input to the two occipital areas (see table 1) was modelled to occur at 90 ms post-stimulus (duration = 30 ms).

In order to determine which connections to specify between the sources, four different DCMs were set up, differing in their connectivity structure (see figure 6). In model 1, connections between the occipital and temporal, occipital and parietal, and parietal and frontal sources were specified. In model 2, a connection between the temporal and parietal sources was added. In model 3, a connection between the temporal and frontal sources was added (to the connections present in model 1). Finally, in model 4, all connections were specified. For each model type, a model where only forward connections were specified, and a model where both forward and backward connections were specified, were fitted to the averaged data across subjects² (i.e., eight models in total). With a family-wise Bayesian model comparison (Penny et al., 2011), the best connectivity structure was determined.

Figure 6. Schematic representation of the four different DCM structures. Each node represents a brain area, and each blue line represents the presence of a connection between two areas. In each model, the nodes on the left represent the left hemisphere, and the nodes on the right represent the right hemisphere.

For the winning model structure, a model with only forward connections and a model with both forward and backward connections were fitted to averaged data across subjects for each experimental condition (i.e., 10 models in total). Model fit of the forward (F) and forward-backward (FB) model was then compared for each coherence level (see figure 7 for a schematic illustration of these two models).

Additionally, it was investigated which connections vary in strength between a harder and an easier decision. Coherence level 2 was chosen for the harder decision, and coherence level 5 for the easier decision. Here, a single model inversion was used to explain both conditions, as opposed to an inversion per condition. Four different models (as shown in figure 8) were compared: (1) the none model, where no connections were allowed to vary between the two experimental conditions; (2) the forward model, where only forward connections were allowed to vary; (3) the backward model, where only backward connections were allowed to vary; and finally, (4) the both model, where forward and backward connections were both allowed to vary.

² Due to time constraints, the DCM models were fitted to averaged data across all subjects, instead of to individual subjects' data.

Figure 7. Schematic representation of the forward and forward-backward DCM. Each node represents a brain area, grey arrows represent input, pink arrows represent forward connections, and yellow arrows represent backward connections.

Figure 8. Schematic representation of the four DCMs used for the coherence 2 versus coherence 5 comparison. Each node represents a brain area, pink arrows represent forward connections, and yellow arrows represent backward connections.
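Conceptually, the four comparison models described above differ only in which connections are switched on in the condition-specific modulation structure. The boolean masks below are a schematic illustration of that idea (the region ordering, connection pairs, and matrix representation are assumptions for illustration, not the actual SPM specification used):

```python
import numpy as np

regions = ["L MOG", "R IOG", "L FG", "R PHG", "L PCG", "R PCG", "L IFG", "R IFG"]
n = len(regions)

# Forward connections of the winning structure (model 1): occipital -> temporal,
# occipital -> parietal, parietal -> frontal, within each hemisphere
# (illustrative index pairs into the region list above).
forward_pairs = [(0, 2), (1, 3), (0, 4), (1, 5), (4, 6), (5, 7)]

forward = np.zeros((n, n), dtype=bool)
for src, dst in forward_pairs:
    forward[dst, src] = True       # row = target region, column = source region
backward = forward.T               # backward connections mirror the forward ones

# Which connections may change between the coherence 2 and coherence 5 conditions.
modulation_models = {
    "none": np.zeros((n, n), dtype=bool),
    "forward": forward,
    "backward": backward,
    "both": forward | backward,
}

for name, mask in modulation_models.items():
    print(f"{name:8s}: {int(mask.sum())} modulated connections")
```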


Results

Behavioural results


Accuracy and reaction time


Three one-way within-subjects ANOVAs were used to investigate the effect of coherence level on accuracy, RTs for correct responses, and RTs for incorrect responses. It was found that as the phase coherence level of the images increased (i.e., task difficulty decreased), accuracy significantly increased (F(4,20) = 59.09, p < .001). Furthermore, it was found that as phase coherence increased, RTs for correct responses did not significantly decrease (F(4,20) = 1.41, p = .23). Similarly, RTs for incorrect responses also did not significantly decrease with an increase in phase coherence (F(4,20) = 0.19, p = .94). These results are visualised in figure 9 below, with the graph on the left showing the effect of task difficulty on accuracy, and the graph on the right showing the effect of task difficulty on RT. Additionally, descriptive statistics from these behavioural results can be found in table 2 below.
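The repeated-measures ANOVAs reported here were run in R. For illustration only, an analogous one-way within-subjects ANOVA on a long-format table of per-participant condition means could be set up in Python with statsmodels (the data frame, column names, and values are hypothetical):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one mean accuracy per subject and coherence level.
rng = np.random.default_rng(2)
rows = [{"subject": s, "coherence": c,
         "accuracy": 0.50 + 0.08 * c + rng.normal(0, 0.03)}
        for s in range(1, 22) for c in range(1, 6)]
df = pd.DataFrame(rows)

# One-way within-subjects ANOVA with coherence level as the repeated factor.
result = AnovaRM(data=df, depvar="accuracy", subject="subject",
                 within=["coherence"]).fit()
print(result)
```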


Table 2. Descriptive statistics of the behavioural data. Mean and SD for accuracy (proportion), RTs (ms) for correct responses, and RTs (ms) for incorrect responses, for each experimental condition.

             Coherence 1      Coherence 2      Coherence 3      Coherence 4      Coherence 5
             Mean     SD      Mean     SD      Mean     SD      Mean     SD      Mean     SD
Accuracy     0.53     0.04    0.63     0.08    0.70     0.08    0.77     0.09    0.84     0.05
RT correct   591.90   83.71   589.91   70.98   578.69   68.52   563.90   60.81   552.51   56.84

Figure 9. Mean accuracy and reaction times. The plot on the left shows mean accuracy across participants on the y-axis, against coherence level on the x-axis (yellow line). The plot on the right shows mean RT across participants on the y-axis, against coherence level on the x-axis (blue line for correct responses, red line for incorrect responses). Error bars represent standard errors (SE).


Diffusion modelling

As described in the data analysis section, eight different DDMs were fitted to the data from each individual participant, and model fits were compared. Model evidence values for these eight models, for each individual participant, are shown in figure 10 below. The results illustrate that model 4 (i.e., where both v and t0 were allowed to vary) and model 8 (i.e., where v, z, and t0 were allowed to vary) are strong competitors. For instance, for subject 14 model 4 has stronger evidence than model 8, whereas the opposite is observed for subject 9. Nonetheless, when the model evidence is aggregated across all subjects, the evidence for model 8 is stronger than for model 4. Model fit for model 8 and example subject 4 can be found in figure 11 below, and reveals that the model fitted the data well: the thick lines represent the data, the thin lines represent the model, and the data and model lines largely overlap.
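One common way to aggregate such evidence across participants, assumed here purely for illustration, is to sum the DIC values per model and prefer the model with the lowest total (a lower DIC indicates a better penalised fit); the values below are placeholders, not the values from this study:

```python
import numpy as np

# Placeholder DIC values: rows = 21 participants, columns = models 1..8.
rng = np.random.default_rng(3)
dic = rng.normal(loc=1500.0, scale=50.0, size=(21, 8))
dic[:, 7] -= 30.0   # pretend model 8 (index 7) tends to do best, as reported above

total_dic = dic.sum(axis=0)
best_model = int(np.argmin(total_dic)) + 1
print("summed DIC per model:", np.round(total_dic, 1))
print("preferred model:", best_model)
```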



Figure 10. Model evidence for the eight different DDMs. Model evidence for each participant and each model type is shown. Each bar represents one participant, with model type on the x-axis, and model evidence (DIC) on the y-axis.

Figure 11. Model fit for model 8 and example subject 4. Plots show cumulative RT distributions, with stimulus type in the columns, and experimental condition in the rows. The thick lines represent the data, and the thin lines the model. Dashed lines represent incorrect responses, and solid lines correct responses.


The model parameter estimates for response threshold, non-decision time, drift rate, and bias for model 8 are plotted in figure 12. Three one-way within-subjects ANOVAs were used to analyse the effect of coherence level on non-decision time, drift rate, and bias. The results revealed that non-decision time did not differ significantly with phase coherence level (F(4,20) = 2.6, p = .12). Furthermore, the drift rate parameter increased significantly with phase coherence level (F(4,20) = 195.5, p < .001). Finally, the bias parameter decreased significantly with phase coherence level (F(4,20) = 24.8, p < .001).

Figure 12. Parameter estimates from diffusion model 8. In the upper graph, the parameter estimates are plotted, with the value shown on the y-axis, and the parameter for each coherence level on the x-axis. The lower graphs show the parameter values on the y-axis, against coherence level on the x-axis, for each individual subject in the experiment.

EEG results

Event-related potentials

After completing the pre-processing steps described in the data analysis section, EEG activity for the five coherence levels was compared. Using a one-way within-subject ANOVA, it was determined at which points in time and space the signal differed significantly between the coherence levels in the experiment. The results from this analysis revealed a significant effect of phase coherence level on the EEG signal for a centro-parietal cluster (k = 14835), with a peak effect located at 17 x -57 mm, 512 ms post-stimulus (see the lower right image in figure 13). The EEG signal significantly increased in amplitude with the phase coherence level of the stimuli (F(4,80) = 25.11, p < .001, family-wise error rate (FWE) corrected).

This result from the ERP analysis is visualised in the figures below. Figure 13 shows the average ERP for one centro-parietal electrode (Pz, lying close to where the peak effect was found), with each line representing the data from one of the five coherence levels. With an increase in phase coherence, an increase in the amplitude of the EEG signal is apparent between 400 and 600 ms post-stimulus (with the stimulus presented at 0 ms). Furthermore, topographies for each condition, averaged over the time range of 462-562 ms post-stimulus, are shown in figure 14. Within this time range, the centro-parietal activity becomes increasingly positive (as represented by the yellow colour) with an increase in phase coherence level.

Figure 13. Averaged ERP for electrode Pz. Voltage (mV) is plotted on the y-axis, against time (s) on the x-axis. Each line represents averaged data across participants for a certain coherence level. The yellow line shows the data for coherence 1, the green line for coherence 2, the blue line for coherence 3, the purple line for coherence 4, and the red line for coherence 5. The topography plot on the top right shows the location of electrode Pz on the scalp. The plot on the lower right shows the statistically significant effect in space.

Figure 14. Topographies for each experimental condition. Averaged scalp topographies for the time range 462-562 ms post-stimulus are shown for each phase coherence level in the experiment. Yellow indicates positive activation, blue indicates negative activation.


Dynamic causal modelling

As described in the data analysis section, four different DCM models, differing in their connectivity structure, were fitted to the data. In order to find a model structure that was not biased towards one of the experimental conditions, data were used that had been averaged over both participants and conditions. For each model structure type, a forward model and a forward-backward model were fitted. A family-wise Bayesian model comparison revealed that model 1, the simplest model in terms of the number of connections, had the highest relative log-evidence compared to the other, more complex model families.

These results are illustrated in figure 15. Log-evidence for the model with the least evidence is set at zero: in this case, that is model 4 with forward and backward connections specified. Log-evidence for the other seven models is relative to this zero model. The model with the highest evidence is shown to be model 1, where only forward connections have been specified. Furthermore, when gathering the evidence from the forward and forward-backward models for each model type, model 1 has the highest cumulative log-evidence.

Figure 15. Model evidence for the different DCM structures. For each model type, relative log-evidence (fixed-effects, FFX) is shown for the forward model and the forward-backward model. Log-evidence for the model with the least evidence is set at zero (in this case, FB model 4), with evidence for the other models relative to zero.

Additionally, model fits for each model can be found in figure 16. Model fit can be inspected for each mode, that is, for each of the spatial components in sensor space that are used to reduce the data. Model fit for mode 2 (out of eight in total) is presented here as an example, as it most clearly illustrated the differences in fit. The thick lines represent the predicted response, and the dashed lines represent the observed response, with model fit determined by the amount of overlap between the two. It is apparent that the fit for forward model 1 is best, while the largest misfit is observed for forward-backward model 4. Again, model 1 generally has the best fit, confirming the results obtained from the Bayesian model comparison. Based on these results, it was decided to use the connectivity structure of model 1 in further analyses.

Next, using the connectivity structure from model 1, a forward and a forward-backward model were fitted to the data from each experimental condition. The results from Bayesian model comparison demonstrated the following. For the first four coherence levels, the forward model had a higher log-evidence than the forward-backward model. However, for the highest coherence level, this result flipped: the forward-backward model now had a higher log-evidence than the forward model. These results are illustrated in figure 17 below.

Figure 16. Model fit for the different DCM structures. For each model type (columns), model fit is shown for the forward model (first row), and the forward-backward model (second row). Activation is shown on the y-axis, against time on the x-axis (ms). The thick lines show the predicted response, the dashed lines show the observed response.

Figure 17. Model evidence for the forward and forward-backward model. For each coherence level (from left to right), relative log-evidence (FFX) is shown for the forward model (F), and the forward-backward model (FB).


Model fit for each of these models is shown in figure 18 below. Once more, the thick lines represent the predicted response, and the dashed lines the observed response. For the three lowest coherence levels (coherence 1-3), model fit for the forward model is evidently better than for the forward-backward model, with a large misfit observed for the forward-backward model. For coherence 4 and 5, the previously observed misfit for the forward-backward model has decreased. In line with the results from the model evidence, the model fits demonstrate a gradual decrease in the amount of misfit for the forward-backward model with an increase in phase coherence level.

Additionally, it was investigated which connections vary in strength between phase coherence level 2 (a harder decision) and phase coherence level 5 (an easier decision): none of the connections, forward connections, backward connections, or both. The results from Bayesian model comparison show the highest log-evidence for the both model (i.e., where both forward and backward connections were allowed to vary between the two conditions). This is followed by the forward model, the none model, and finally, the backward model (see figure 19 below).

These results are further illustrated by the model fits, shown in figure 20. The blue lines represent coherence level 2, the red lines coherence level 5, with thick lines for the predicted response, and dashed lines for the observed response. The none model shows the largest misfit, for both coherence 2 and 5. For the forward model and the backward model, the fit improves slightly, especially for coherence 5. Although the forward model has a better fit than the backward model, a misfit is nonetheless observed for both of these models. Model fit is clearly best for the both model, although a slight misfit is apparent for coherence level 2.

Figure 18. Model fit for the forward and forward-backward model. For each coherence level (columns), model fit is shown for the forward model (first row), and the forward-backward model (second row). The thick lines show the predicted response, the dashed lines show the observed response.


Figure 19. Model evidence for the coherence 2 versus coherence 5 comparison. Relative log-evidence is shown for the four models where different connections were allowed to vary between the coherence 2 and coherence 5 conditions.

Figure 20. Model fit for the coherence 2 versus coherence 5 comparison. Model fit is shown for the four models where different connections were allowed to vary between the coherence 2 and coherence 5 conditions. The thick lines show the predicted response, the dashed lines show the observed response, with blue for coherence 2 and red for coherence 5.


Discussion

The results obtained in this study provide new insights into the neural mechanisms behind evidence accumulation during decision making. The results revealed changes in effective connectivity within a cortical network consisting of occipital, temporal, parietal and frontal areas. As described in the results section, a model with forward connections was compared with a model including both forward and backward connections. The results demonstrated that, for more difficult decisions (i.e., stimuli with a lower phase coherence level), the forward model won over the forward-backward model, while the opposite was observed for an easier decision. Moreover, it was investigated which connections vary in strength between a more difficult and an easier decision. Both forward and backward connections were shown to change in connectivity strength between different task difficulty levels. These results suggest that forward connections are sufficient for more difficult decisions, while feedback connections become increasingly necessary for easier decisions. Moreover, the results provide evidence for the idea that both forward and backward connections change in strength between an easy and a difficult decision.

To some extent, these results contrast with findings from previous research. Previous research showed a feed-forward sweep to be sufficient for simple object recognition (Serre et al., 2007), whereas feedback processing becomes necessary for more complex object recognition (Wyatte et al., 2012; O'Reilly et al., 2013). The results from the current study show evidence in favour of a different story: one where forward connections are sufficient for difficult decisions, whereas feedback connections are potentially more important for easy decisions. In a sense, this story is counter-intuitive: one could expect that, for difficult stimuli with a lower phase coherence level, a larger amount of feedback is needed from higher to lower cortical areas in order to suppress a preliminary incorrect response. Based on this line of reasoning, backward connections would be expected to be necessary for difficult decisions, while forward connections are sufficient for easy decisions.


However, the following alternative explanation predicts the opposite. Although images with a lower phase coherence level are harder to discriminate, these images are less detailed and contain less information. Thus, a lower amount of sensory information needs to be processed. As a consequence, it could be the case that the strength of the feed-forward sweep is lower, causing backward connections to be less prominent. Nonetheless, it is likely that reality is more nuanced. Perhaps it is more reasonable to suggest that, with changes in the difficulty of a perceptual decision, a broad network reorganisation takes place. This reorganisation is possibly not specific to either forward or backward connections, but includes both connection types. The finding in the current study that both forward connections and backward connections varied between a harder and an easier decision also substantiates this suggestion. However, further research is needed to investigate the observed changes in connectivity strength more quantitatively. For future analyses, it would be interesting to investigate more specifically which connections differ significantly in strength with changes in task difficulty, and whether they increase or decrease. In addition, one could investigate whether the results differ between forward and backward connections, or for connections between different brain areas.

In addition to these new findings, the results from previous research by Philiastides and colleagues (Philiastides et al., 2006; 2009; 2014) have also by and large been replicated in this study. Once again, evidence accumulation rates, as reflected by the DDM drift rate parameter, were shown to increase with a decrease in task difficulty. Furthermore, a centro-parietal EEG component was shown to increase in amplitude with a decrease in task difficulty. Although a similar categorisation task was used in the experiment, the stimulus set was altered by using images of cats and dogs as opposed to the previously used faces and cars. Thus, the replication of previous findings shows that the effects are not stimulus-set specific, but can in fact be extended to stimuli from the same category (i.e., mammals). Moreover, preceding the experiment, a calibration session was implemented in order to facilitate similar task difficulty levels for each individual participant. This alteration of the experimental design proved to be much needed: as illustrated in figure 4, inter-individual differences in task performance were considerable. Together, these changes in the experimental set-up provide evidence for the robustness of the replicated findings, as the effects have been shown to persist regardless of substantial changes in the task design.

One unexpected result in the current study was that the DDM bias parameter significantly decreased with an increase in phase coherence level (as shown in figure 12). This means that, as task difficulty decreased, people were more prone to respond that they had seen a cat than a dog. One explanation for this finding could be that images of cats and dogs are not of equal valence, in the sense that cats and dogs are not emotionally neutral. It is plausible that cats and dogs have the ability to evoke some sort of emotional reaction, with this effect potentially being stronger for one or the other category. However, it is important to note that the unexpected finding cannot fully be accounted for by a general cat bias; in that case, one would expect the bias to be present across all conditions, while the bias for cat was in fact related to task difficulty. Even so, it is important to consider the influence of stimulus valence in future research. It would also be interesting to investigate whether the effect persists when using more neutral stimulus sets, such as tables and chairs.

An important limitation of this study that needs to be addressed is the fact that, due to time constraints, the DCM modelling was performed on averaged data across all subjects. Consequently, the conclusions that can be drawn from the analyses are limited. When fitting the DCM to data of individual participants, it becomes possible to directly relate estimates of evidence accumulation rates from individual participants to effective connectivity strengths. It would, for instance, be interesting to see whether these connectivity strengths increase or decrease linearly, in a similar fashion to the drift rate parameter. It could also be the case that the changes in connectivity more closely resemble a sigmoid function, with a clearer cut-off boundary. In fact, when inspecting the results from the forward versus forward-backward DCM comparison, the effect is shown to flip from coherence 4 to coherence 5 (see figure 17). This could be seen as evidence in favour of the latter option. Further analysis of the data at hand is needed to investigate this, enabling more quantitative conclusions about the observed changes in effective connectivity.

All in all, the results obtained in the study contribute to the scientific field of perceptual decision making. Previous research has mainly focussed on relating the process of evidence accumulation to local neural activity (O’Connell et al., 2012; Philiastides et al., 2006; 2009; 2014), ignoring the role of connectivity within a network of brain regions. It has become clear that EEG scalp activity outside of the centro-parietal area is not directly influenced by modulations in task difficulty — the activity in these areas does not significantly increase or decrease. However, unsurprisingly, the results from the current study demonstrated that occipital, temporal, and frontal areas are also involved in the decision process. Although the measured local activity from these areas remains similar with changes in task difficulty, effective connectivity between them is nonetheless influenced by task difficulty.

Based on these results, it can be concluded that a network approach can be beneficial for research on the neural correlates of evidence accumulation. In this case, a network approach enabled the possibility to further unravel the neural mechanism behind evidence accumulation. Moreover, dynamic causal modelling has been shown to be a useful tool to do so, as it provided new insights in terms of connectivity. In the past, DCM has already been used in research on mismatch-negativity (Garrido et al., 2007; 2008), but this can now be extended to the field of perceptual decision making. Consequently, this opens up many new research possibilities. This is not limited to research on evidence accumulation, but could also be extended to other decision making phenomena, such as the speed-accuracy trade-off (SAT; Fitts, 1954). To conclude, by investigating the network involved in evidence accumulation, this study has placed the neural correlates of evidence accumulation in a broader perspective. Moreover, many new opportunities for future research have been enabled.



References

David, O., Kiebel, S. J., Harrison, L. M., Mattout, J., Kilner, J. M., & Friston, K. J. (2006). Dynamic causal modeling of evoked responses in EEG and MEG. NeuroImage, 30(4), 1255-1272.


Forstmann, B. U., Ratcliff, R., & Wagenmakers, E. J. (2016). Sequential sampling models in cognitive neuroscience: Advantages, applications, and extensions. Annual Review of Psychology, 67, 641-666.

Friston, K. J., Harrison, L., & Penny, W. (2003). Dynamic causal modelling. Neuroimage, 19(4), 1273-1302.

Friston, K., Harrison, L., Daunizeau, J., Kiebel, S., Phillips, C., Trujillo-Barreto, N., ... & Mattout, J. (2008). Multiple sparse priors for the M/EEG inverse problem. NeuroImage, 39(3), 1104-1120.

Friston, K. J., Preller, K. H., Mathys, C., Cagnan, H., Heinzle, J., Razi, A., & Zeidman, P. (2017). Dynamic causal modelling revisited. NeuroImage. https://doi.org/10.1016/j.neuroimage.2017.02.04

Garrido, M. I., Kilner, J. M., Kiebel, S. J., & Friston, K. J. (2007). Evoked brain responses are generated by feedback loops. Proceedings of the National Academy of Sciences, 104(52), 20961-20966.

Garrido, M. I., Friston, K. J., Kiebel, S. J., Stephan, K. E., Baldeweg, T., & Kilner, J. M. (2008). The functional anatomy of the MMN: A DCM study of the roving paradigm. NeuroImage, 42(2), 936-944.

Gold, J. I., & Shadlen, M. N. (2007). The neural basis of decision making. Annual Review of Neuroscience, 30, 535-574.

Heathcote, A., Lin, Y. S., Reynolds, A., Strickland, L., Gretton, M., & Matzke, D. (2019). Dynamic models of choice. Behavior research methods, 51(2), 961-985.


Ho, T. C., Brown, S., & Serences, J. T. (2009). Domain general mechanisms of perceptual decision making in human cortex. Journal of Neuroscience, 29(27), 8675-8687.

Horwitz, G. D., & Newsome, W. T. (1999). Separate signals for target selection and movement specification in the superior colliculus. Science, 284(5417), 1158-1161.

Kiebel, S. J., Garrido, M. I., Moran, R. J., Chen, C., & Friston, K. J. (2009). Dynamic causal modelling for EEG and MEG. Human Brain Mapping, 30, 1866-1876.

Kim, J. N., & Shadlen, M. N. (1999). Neural correlates of a decision in the dorsolateral prefrontal cortex of the macaque. Nature Neuroscience, 2(2), 176.

MATLAB (2018). Version R2018b. The MathWorks Inc., Natick, Massachusetts, United States.

Moran, R. J., Pinotsis, D. A., & Friston, K. J. (2013). Neural masses and fields in dynamic causal modeling. Frontiers in Computational Neuroscience, 7(57), 1-12.

Mulder, M. J., Van Maanen, L., & Forstmann, B. U. (2014). Perceptual decision neurosciences — a model-based review. Neuroscience, 277, 872-884.

O'Connell, R. G., Dockree, P. M., & Kelly, S. P. (2012). A supramodal accumulation-to-bound signal that determines perceptual decisions in humans. Nature Neuroscience, 15(12), 1729–1735.

O'Reilly, R. C., Wyatte, D., Herd, S., Mingus, B., & Jilk, D. J. (2013). Recurrent processing during object recognition. Frontiers in Psychology, 4, 1-14.

Palmer, J., Huk, A. C., & Shadlen, M. N. (2005). The effect of stimulus strength on the speed and accuracy of a perceptual decision. Journal of Vision, 5(5), 376-404.

Penny, W. D., Friston, K. J., Ashburner, J. T., Kiebel, S. J., & Nichols, T. E. (Eds.). (2011). Statistical parametric mapping: The analysis of functional brain images. Elsevier.

Philiastides, M. G., Ratcliff, R., & Sajda, P. (2006). Neural representation of task difficulty and decision making during perceptual categorisation: A timing diagram. Journal of Neuroscience, 26(35), 8965-8975.

Philiastides, M. G., Heekeren, H. R., & Sajda, P. (2014). Human scalp potentials reflect a mixture of decision-related signals during perceptual choices. Journal of Neuroscience, 34(50), 16877-16889.

R Core Team (2017). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.

Ratcliff, R. (1978). A theory of memory retrieval. Psychological Review, 85(2), 59-108. 


Ratcliff, R., Philiastides, M. G., & Sajda, P. (2009). Quality of evidence for perceptual decision making is indexed by trial-to-trial variability of the EEG. Proceedings of the National Academy of Sciences, 106(16), 6539-6544.


Ratcliff, R., & Smith, P. L. (2004). A comparison of sequential sampling models for two- choice reaction time. Psychological review, 111(2), 333-367.

Ratcliff, R., Smith, P. L., Brown, S. D., & McKoon, G. (2016). Diffusion decision model: Current issues and history. Trends in Cognitive Sciences, 20(4), 260-281.

Serre, T., Oliva, A., & Poggio, T. (2007). A feedforward architecture accounts for rapid categorization. Proceedings of the national academy of sciences, 104(15), 6424-6429.

Shadlen, M. N., & Newsome, W. T. (2001). Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. Journal of Neurophysiology, 86(4), 1916-1936.


Woodman, G. F. (2010). A brief introduction to the use of event-related potentials in studies of perception and attention. Attention, Perception, & Psychophysics, 72(8), 2031-2046.

Wyatte, D., Curran, T., & O'Reilly, R. (2012). The limits of feedforward vision: Recurrent processing promotes robust object recognition when objects are degraded. Journal of Cognitive Neuroscience, 24(11), 2248-2261.
