Academic year: 2021

An Explorative Study on the Diffusion Process Postulated by the Central Bottleneck Theory: Applying Multiple Stages to a Single Process

K. Ho

Institute: University of Amsterdam

Mentor: L. van Maanen


Contents

Abstract
Introduction
The Drift Diffusion Model
The Psychological Refractory Period
Description of the Current Study
Expectancies
Methods
Explored Variables
Results
Discussion
Dynamic DDM
Future Suggestions
References
Appendix A
Appendix B
Appendix C


An Explorative Study on the Diffusion Process Postulated by the Central Bottleneck Theory: Applying Multiple Stages to a Single Process

Abstract

In the current study, it was examined whether there are any markers in response behavior that could reflect the presence of the three perceptual processing stages hypothesized by the central bottleneck theory in the drift diffusion model. The original 1-stage model was extended to 2- and 3-stage variants for the purpose of explorative comparisons. For all 3 models, responses were simulated for 100 people performing 1000 tasks each. Simulations were written and executed in R, a language for statistical computing. Statistical analyses were performed to test the expectancies of the models, which were made based on how the drift diffusion model is intended to simulate behavior. Results showed a decrease in variance as the number of stages per model increased. It was concluded that this decrease might indicate the presence of the three processing stages.

Introduction

Imagine that in your left hand you are holding a dish plate with a considerably large sandwich on top, which you have taken your delicate time to arrange. Meanwhile, your right hand is occupied with holding a full glass of water. On your way to the dinner table you trip over a shoe that was not put away after you came back home, causing the sandwich to slip from the plate on an inevitable course to the floor. Keep in mind that you haven't had anything to eat for the last few hours, which made the sandwich you just prepared seem exponentially delectable. In such a situation, where time pressure is present, what course of action would you take? You could release your right hand and drop the glass to save the sandwich from hitting the floor, or you could prevent any broken glass by letting the sandwich fall and settle for something else to eat. Whichever decision you make, time was needed to process the situation and to decide which course of action would be optimal. Thus, decision-making can be seen as a process by which incoming sensory information about the situation is accrued and used to influence behavior (Heekeren, Marrett, & Ungerleider, 2008). This is also known as perceptual decision-making.

The Drift Diffusion Model

The drift diffusion model (DDM) is a sequential sampling model commonly used to explain or predict perceptual decision-making, particularly when the decisions being made have two independent options (binary decisions). This model postulates that before a decision can be made, evidence about the presented stimuli (be it visual or auditory) must be accrued until the decision maker has enough to reach a decision threshold (Ratcliff, 1978). Here, noisy sensory evidence is accrued over time until it reaches one of two decision thresholds, after which a decisional action is taken (figure 1). The height of these thresholds or bounds (a) represents the amount of evidence that the decision maker has to accrue before reaching a decision. By convention, the upper bound is considered the correct decision, while the lower bound is considered the incorrect decision. The average rate at which evidence is accrued is called the drift rate (v). The drift is modeled as a continuous process, the Wiener process, which can be approximated by a discrete random walk (Szabados, 1994). Together, these parameters govern the trade-off between speed and accuracy: decision makers either sacrifice accuracy to make quicker decisions or sacrifice speed for more accurate decisions.
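The accumulation process described above can be sketched as a discrete random walk toward two bounds. The following is a minimal Python sketch (the thesis's own simulations were written in R); the function name, the Euler step size `dt`, and the unbiased starting point are illustrative choices, not taken from the original code.

```python
import random

def simulate_ddm_trial(v=0.05, a=0.1, s=0.1, dt=0.001, seed=None):
    """One drift-diffusion trial as an Euler random walk.

    Evidence starts midway between the bounds (unbiased start, z = a/2)
    and drifts with rate v plus Gaussian noise of scale s until it hits
    the upper bound a ("correct") or the lower bound 0 ("incorrect").
    Returns (decision_time_in_seconds, correct).
    """
    rng = random.Random(seed)
    x, t = a / 2, 0.0
    while 0.0 < x < a:
        # Euler step: mean drift v*dt, noise sd s*sqrt(dt)
        x += v * dt + s * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t, x >= a
```

With a positive drift rate the upper (correct) bound is reached more often than the lower one, and raising a trades speed for accuracy, which is the trade-off described above.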

Figure 1: Drift diffusion model with corresponding parameters. z denotes the starting point of the decision-making process (from Wagenmakers et al., 2007).

The DDM and other models have been applied in many studies that were operationalized with perceptual tasks such as numeracy judgment tasks (Ratcliff et al., 2001), categorization tasks (Nosofsky et al., 2011), and text processing and priming tasks (McKoon & Ratcliff, 2012; 2013). Aside from studies that only included behavioral tasks, the DDM has also been applied in studies that used noninvasive neuroimaging and stimulation methods to gather empirical data, such as functional magnetic resonance imaging (fMRI; Frank et al., 2015; Mulder et al., 2014), electroencephalography (EEG; Ratcliff et al., 2009; Bode et al., 2012; Polania et al., 2014), and transcranial magnetic stimulation (TMS; Philiastides et al., 2011). These studies have covered a wide variety of phenomena, including alcohol use (van Ravenzwaij et al., 2012), sleep deprivation (Ratcliff & van Dongen, 2009), individual differences in IQ (Ratcliff et al., 2010), and working memory (Schmiedek et al., 2007).

The Psychological Refractory Period

When people have to respond to two different types of stimuli simultaneously, the response to the second stimulus often becomes delayed when the stimulus onset asynchrony (SOA) between the stimuli is shortened (figure 2). The SOA denotes the amount of time between the onset of the first and second stimulus. The delay caused by shortening the SOA has been termed the psychological refractory period (PRP) (Telford, 1931). This paradigm is supported by the central bottleneck theory, which argues that some mental operations might only allow a single stream of sensory information to be processed within a certain time interval, during which those operations cannot process other streams of sensory information. Thus, when two tasks require their respective streams of sensory information to be processed by the same mental operation, an RT delay will ensue for one or both tasks (Pashler, 1994; Ferreira & Pashler, 2002). Additionally, the authors of these studies argued that there could be a single bottleneck or even multiple bottlenecks, each associated with distinctive stages of processing or types of mental operations. PRP tasks have been used in studies examining the interference caused by performing multiple multimodal tasks at once, for instance, picture-word interference (Dell'Acqua et al., 2007; van Maanen, van Rijn & Borst, 2009).


Figure 2: Gantt diagrams illustrating the SOA effect with regard to the PRP paradigm.

On the basis of the central bottleneck theory, it has been postulated that the processing required for PRP tasks can be divided into three stages: perceptual identification, response selection, and response execution (Pashler & Johnston, 1989; Levelt, Roelofs, & Meyer, 1999; Voss, Rothermund, & Voss, 2004). Perceptual identification is an initial perceptual stage that can proceed in parallel, where sensory information about an administered task is acquired and stored (Sigman & Dehaene, 2006). This can entail the visual elements of an administered task (e.g., shapes, pictures, words, or concepts) and the purpose of the task (e.g., to decide whether a bundle of randomly moving arrows is moving left or right). Response selection is a central decisional stage and is the first to be affected by the central bottleneck. In this stage, evidence about the task at hand is integrated (e.g., "Some arrows in the bundle are moving rightward, but it seems that a majority of them are moving leftward") and a decision boundary is reached. Response execution is a motor stage that entails the actual execution of the decision, given which decision boundary was reached.

In the academic literature, studies implementing PRP tasks are often accompanied by some form of the DDM. However, the DDM is assumed to proceed as a single process (with 1 stage), even though these studies justify its use with the central bottleneck theory, which postulates 3 stages (Dehaene & Sigman, 2012; van Maanen, van Rijn, & Taatgen, 2012; Feng et al., 2014; Janczyk, Büschelberger, & Herbort, 2017). Thus, the incongruence between the drift diffusion process in PRP tasks (1-stage DDM) and the drift diffusion process as postulated by the central bottleneck theory (3-stage DDM) shall be addressed in this paper. Particularly, I will explore whether there are any markers in response behavior that would reflect the presence of the three abovementioned stages within the DDM.

To date, no study has attempted to reflect these stages in the DDM. When using a predictive model in psychological studies, it is imperative that it simulates data that best reflect those one would find empirically. Otherwise, the predictive model might compromise the coherence and reliability of the results. Thus, in this case, it is imperative that a comparison is made between the classic 1-stage DDM and its 3-stage extension, which is a better representation of the processing stages postulated by the bottleneck theory. This 3-stage model will be developed in the course of this paper.

The performance of the DDM relies on the responses that it simulates. Thus, to be able to compare the models, differences in response behavior should be examined. The comparison can be approached by thinking about how the models should and should not differ with regard to response behavior. For starters, the 3 models should be representative of each other. This means that all of them should simulate response behavior in the same manner. Thus, there are some statistical aspects of the models that should not differ. For example, the length of RTs should not differ. More specifically, the RTs of people simulated under the 3-stage model should equate to the RTs simulated under the 1- and 2-stage models. If these RTs differ, it would mean that the models no longer represent each other. However, the RT data distributions could still differ in some aspects. Thus, relying on the central bottleneck theory, it is expected that there will be certain markers in the simulated data showing that the 3-stage model is possibly more viable than the 1-stage model.

Description of the Current Study

In this paper, the classic 1-stage DDM was conceptually equated to the postulations of the central bottleneck theory by extending it to a 3-stage model. Additionally, an extra model with 2 stages was developed as an intermediate reference. For all 3 models, data were simulated and then examined in an explorative manner. This entailed the examination of various statistics. Initial suggestions for statistical explorations came from Wagenmakers, van der Maas, and Grasman (2007). They studied the performance of a version of the DDM, called the EZ-model, in terms of parameter recovery. There, the mean response time (MRT) and variance of response time (VRT) were primarily examined. Following suit, these statistics were also chosen for examination in this paper. Furthermore, skewness and kurtosis were examined, as these are also informative properties exhibited by data distributions. In the EZ-model, MRT and VRT only encompass correct decisions. However, complete distributions will be explored in this paper, including RTs of both correct and incorrect decisions.

When making comparisons between models, a distinction was made between within- and between-groups statistics regarding MRT, VRT, skewness, and kurtosis. Within-group refers to statistics that concern the behavior of a single group of individuals across models, while between-groups refers to statistics that compare the behavior of multiple groups or samples of individuals. To be able to compare these statistics and draw conclusions from my expectancies of them, z-scores, 95% confidence intervals, and effect sizes were computed. For effect size magnitudes, a value of 0.1 is considered a small effect, 0.3 a medium effect, and 0.5 a large effect (Cohen, 1992). After the comparative statistics had been computed, nonparametric tests were performed to test the expectancies. Nonparametric tests are statistical analyses that are not reliant on the assumptions that usually constrain parametric tests, such as the assumption of equal variance or the assumption of normally distributed data.
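As a concrete illustration of these comparative statistics, the sketch below computes a z-score, a 95% confidence half-width, and a standardized effect size for two samples of a group-level statistic (e.g., per-group MRTs from two models). The formulas are common textbook choices, a plausible reading of the procedure rather than the thesis's exact R code.

```python
import math

def compare_group_stats(xs, ys):
    """z-score for the difference in means, 95% CI half-width for the
    first sample's mean, and a Cohen's-d-style standardized effect size.
    Illustrative formulas, not recovered from the original R script."""
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    mx, my, vx, vy = mean(xs), mean(ys), var(xs), var(ys)
    se = math.sqrt(vx / len(xs) + vy / len(ys))   # standard error of the difference
    z = (mx - my) / se
    ci_half = 1.96 * math.sqrt(vx / len(xs))      # 95% CI half-width for mean(xs)
    d = (mx - my) / math.sqrt((vx + vy) / 2)      # pooled-SD effect size
    return z, ci_half, d
```

Under Cohen's (1992) guidelines quoted above, the returned effect size would then be read against the 0.1 / 0.3 / 0.5 benchmarks.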

Expectancies

Based on how the DDM is supposed to simulate response behavior, several expectancies were formed. Within-MRT was not expected to differ across the models. More specifically, the RTs of a group simulated under the 3-stage model should equate to the RTs simulated under the 1- and 2-stage models (for the same group). If a difference in within-MRT is found, that would mean that the models do not represent each other. This expectancy was also extended to between-MRT, but formulated for multiple groups or samples. Next, skewness was not expected to differ (both within- and between-groups), because this is supposed to be solely dependent on the drift rate and boundary separation parameter values (given that the same values were inputted in all the models). There were no expectancies for the within- and between-kurtosis (these were formed after the initial observations were made). These statistics involve the 'tail-heaviness' or spread of the data distributions, which relates to how much variance there is in the data. Within-VRT was not expected to differ across the models for the same reason as within-MRT; if a difference is found, the models would not be representative of each other. However, there were no initial expectancies for between-VRT. Similarly to the within- and between-kurtosis, these expectancies were formed after the initial observations were made. Finally, parameter values were not assumed to affect the differences that could be expressed across the models. The observed differences in statistics should be equally expressed in all models regardless of parameter values (given that all 3 models simulated data with the same parameter values).

Methods

Simulations were done for 100 people, each performing 1000 trials. The statistics computed from these simulations were the within-group statistics. Between-groups statistics were obtained by reiterating the 100×1000 matrix 1000 times. All simulations were done with R (R Core Team, 2016), a programming language used especially for statistical computing. The R code for the DDM came with default parameter values for drift rate and boundary separation, v = 0.05 and a = 0.1. Keep in mind that these parameter values are dependent on the type of response behavior being simulated. Different types of tasks will elicit different response behaviors, which will recover different parameter values. With the default values shown above, the simulated data resembled response behavior that could be found in tasks where one has to respond quickly under time pressure (e.g., the random dot motion task). Furthermore, since the differences in data distributions should be expressed across the models regardless of parameter values, it was decided to proceed with explorations using these default values. Within-trial noise (s) was kept at 0.1, as suggested by Wagenmakers, van der Maas, and Grasman (2007). These authors also argued that non-decision time (Ter) is an arbitrary additive constant, which does not affect the results of parameter recovery. Thus, Ter was kept at 0.

Donders discovered that mental operations occur in subsequent components rather than instantaneously (Donders, 1868). Thus, completing a task entails the sequential occurrence of these components. He found that measuring the time for such tasks simply required the summation of each respective component that the task-related mental operation consisted of. With this assumption he developed what is known as the subtractive method. In this method, measuring the time it takes to complete a specific component requires subtracting a time that does not contain the component from a time that does contain it. Afterwards, the components can be summated to acquire the total time it took to complete the task. However, the subtractive method had difficulties with adding or removing certain components without affecting the other components as well. Sternberg found a more dynamic way to extend Donders' idea of measuring processing stages, called the additive factors method (Sternberg, 1969). This was proposed as a method for inferring the structure of mental operations from RT without the requirement of component subtractions. Here, RT represents the time needed to execute different internal processing components or stages, for example, the 3 processing stages proposed in this paper. Under the assumption that these components occur in separate, independent sequences, the total observed RT can be expressed simply by summing the times of each component.
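A toy numeric example of the two methods, with invented stage durations (only the additive/subtractive logic is Donders' and Sternberg's; the numbers and task labels are hypothetical):

```python
# Hypothetical stage durations in ms, invented for illustration.
stages = {"perceptual_identification": 180,
          "response_selection": 150,
          "response_execution": 120}

# Additive factors logic: total RT is the sum of independent stage times.
rt_choice_task = sum(stages.values())  # a task engaging all three stages

# Subtractive method: isolate one stage by subtracting the RT of a task
# assumed to lack it from the RT of a task that contains it.
rt_simple_task = (stages["perceptual_identification"]
                  + stages["response_execution"])
response_selection_time = rt_choice_task - rt_simple_task

print(rt_choice_task, response_selection_time)  # → 450 150
```

The difficulty noted in the text corresponds to the assumption baked into `rt_simple_task`: removing response selection must leave the other stage durations untouched for the subtraction to be valid.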

Based on the methods just mentioned, the multiple-stage models were developed by summing multiple instances of the drift diffusion function and dividing the sum by the number of stages. Like so for the 2-stage model: (ddm(tasks, v, a) + ddm(tasks, v, a))/2. And like so for the 3-stage model: (ddm(tasks, v, a) + ddm(tasks, v, a) + ddm(tasks, v, a))/3. This normalization was done because the duration of a task trial is not diagnostic for the number of processing stages. Thus, to make comparisons between the 1-stage model and the others, it is required to normalize the RTs of the 2- and 3-stage models with regard to the duration of a task trial in the 1-stage model. Furthermore, the processing stages were assumed to be of equal length in each corresponding model. The R script containing the algorithm for simulating response behavior for all 3 models can be found in Appendix C.
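The effect of this normalization on the simulated distributions can be previewed with a stand-in RT sampler. The exponential-plus-shift form below is a hypothetical placeholder for the thesis's R `ddm()` function, chosen only to give a right-skewed, RT-like distribution: summing k independent stage RTs and dividing by k leaves the mean RT intact while shrinking the variance.

```python
import random

rng = random.Random(42)

def rt_one_stage():
    # Hypothetical stand-in for one drift-diffusion RT draw (ms).
    return 750 + rng.expovariate(1 / 200)

def rt_k_stage(k):
    # Mirrors (ddm(...) + ... + ddm(...)) / k from the text.
    return sum(rt_one_stage() for _ in range(k)) / k

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

for k in (1, 2, 3):
    m, v = mean_var([rt_k_stage(k) for _ in range(20000)])
    print(f"{k}-stage: mean ~ {m:.0f} ms, variance ~ {v:.0f}")
```

Under these assumptions the mean stays roughly constant across the three models while the variance drops with the number of stages, qualitatively matching the pattern reported in the Results.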

Explored Variables

The main manipulable variable in this exploration was model type (1, 2, or 3 stages). Here, additional stages of processing were added to the classic DDM with the help of the subtractive and additive factors methods. The observed variable in this paper was response time (RT). Skewness statistics were computed with Geary's method (Geary, 1947), and kurtosis statistics were computed with Pearson's method (Pearson, 1905). Following the initial observations shown in the results section, z-scores, confidence intervals, and effect sizes were computed for the statistics mentioned above. Afterwards, Kruskal-Wallis (K-W) tests were performed to examine whether within- and between-groups statistics differed across the models. This nonparametric test ranks the values of each sample to check whether they belong to the same distribution. Lastly, Jonckheere-Terpstra (J-T) tests were performed to determine decreasing or increasing trends of the statistics across the models. The R packages used to perform these tests were 'moments' and 'clinfun'.
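The thesis ran these tests in R via 'moments' and 'clinfun'; to make the ranking logic of the Kruskal-Wallis test concrete, here is a small pure-Python sketch of the H statistic (without the tie correction that full implementations apply):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H: rank all observations jointly (ties get the
    average rank), then compare rank sums across groups. A sketch of
    the statistic only; compare H to a chi-squared distribution with
    k - 1 degrees of freedom to get a p-value."""
    data = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    n_total = len(data)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n_total:
        # Find the run of tied values starting at i and share the
        # average of ranks i+1 .. j among them.
        j = i
        while j < n_total and data[j][0] == data[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2
        for k in range(i, j):
            rank_sums[data[k][1]] += avg_rank
        i = j
    return 12 / (n_total * (n_total + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n_total + 1)

h = kruskal_wallis_h([1, 2, 3], [10, 11, 12])
print(round(h, 3))  # → 3.857, maximal separation for two groups of three
```

Identical groups give H near 0; fully separated groups give the maximal H for their sample sizes, which is the sense in which the test checks whether the samples come from the same distribution.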


Results

To initiate the explorative comparison, line charts and histograms of the within-RT and within-MRT were plotted. From these plots, several things were noted: the within-RTs seemed to be of similar magnitudes, and the data distributions seemed equally skewed (figure 3). However, in figure 4 it can be seen that the within-MRTs of the 2- and 3-stage models seemed more compressed toward the mean than those of the 1-stage model. In other words, the kurtoses and VRTs seemed to differ across the models. For now, these initial observations seemed to cohere with the expectancies mentioned above.

Figure 4: Histograms and rug plots of the within-MRTs for each model. Notice how the shapes of the distributions differ.

To further examine these expectancies, z-scores, confidence intervals, and effect sizes were computed for the within- and between-groups MRT, VRT, skewness, and kurtosis. For the purpose of efficiency, only the between-groups descriptive statistics shall be shown in the core of this paper, along with their respective nonparametric tests. The within-groups descriptive statistics can be found in Appendix A. The descriptives for the between-groups statistics are shown in table 1.


The first remarkable statistic noticed from this table was the variance of MRT: a noticeable decline can be seen for these values across the 3 models. This is further reflected in the means of VRT as well; here, there is also a decline from the 1- to the 3-stage model. Furthermore, the z-scores for VRT yielded exceptionally high values in comparison to any of the other statistics in table 1, which is a sign that this statistic might differ across the models. Looking at the magnitudes of the VRT z-scores, it can be seen that the differences between the 1- and 3-stage models were more pronounced than the differences between the 1- and 2-stage models. These were in turn more pronounced than the differences between the 2- and 3-stage models. Also, all 3 effect sizes for the model comparisons of VRT considerably exceeded the guideline that would constitute a large effect (which is also remarkable). The z-scores, confidence intervals, and effect sizes of skewness, kurtosis, and MRT did not show any initial signs of difference. For the within-groups statistics, these comparative statistics (including VRT) did not show any signs of differences either. The results of the nonparametric tests that were performed to validate the abovementioned expectancies are shown in table 2. Similar to before, only the results of the between-groups statistics are shown in this table; the results for the within-groups statistics can be found in Appendix B. In this table, the first remarkable results are for VRT. The K-W test showed that VRT significantly differed across the 3 models, H(2) = 261.41, p < 0.01. Additionally, the J-T test revealed a significant decreasing trend for VRT, J = 136640000, p < 0.01. Lastly, the tests for the other statistics in table 2 showed no signs of statistical differences. Thus, explicitly, no differences were found for within- and between-groups MRT (which was expected due to the normalization). Similarly, no differences were found for the within- and between-groups skewnesses, and none for the within- and between-groups kurtoses either. Overall, these results correspond with the expectancies mentioned above.

[between-groups]     1-stage     2-stage     3-stage
Skewness
  Mean                  0.050       0.029       0.027
  Variance              0.058       0.057       0.050
  Z-score               1.990       0.173       2.218
  95% CI (+/-)          0.015       0.015       0.014
  Effect size           0.089       0.008       0.099
Kurtosis
  Mean                 -0.098      -0.116      -0.125
  Variance              0.218       0.213       0.182
  Z-score               1.990       0.173       2.218
  95% CI (+/-)          0.029       0.029       0.026
  Effect size           0.038       0.021       0.060
MRT
  Mean                956.303     956.167     956.214
  Variance              5.806       2.999       1.796
  Z-score               1.449      -0.681       1.020
  95% CI (+/-)          0.149       0.107       0.083
  Effect size           0.065      -0.030       0.046
VRT
  Mean                593.046     296.357     196.001
  Variance           7040.878    1727.859     762.572
  Z-score             100.192      63.592     142.133
  95% CI (+/-)          5.201       2.576       1.712
  Effect size           4.481       2.844       6.356

Table 1: Between-groups descriptive and comparative statistics. The z-scores, confidence intervals, and effect sizes in the three columns were computed from the comparisons between the 1- and 2-stage, 2- and 3-stage, and 1- and 3-stage models, respectively.


Discussion

In this paper, the drift diffusion model was examined in light of the observation that studies making use of PRP tasks agree with the postulations of an underlying central bottleneck theory but, nonetheless, still use the 'classic' 1-stage model to predict response behavior. Through explorative means, it was found that the variance of response behavior decreased across the models. More specifically, variance decreased as the number of processing stages increased. In other words, simulating response behavior under the 3-stage model resulted in less variance than under the other models. As mentioned before, within- and between-groups responses were simulated separately. Variance only differed across the models for the responses between groups; there was no difference in variance for the responses within groups. Thus, variability between people stayed the same across the models, while variability between samples of people decreased. In line with the further expectations, it was established that the data distributions across the models did not differ in ways that would compromise their purpose of simulating response behavior. This relates to all the within-groups statistics that were explored, and to most of the between-groups statistics as well (except for variance).

Dynamic DDM

In this paper, the DDM was assumed to have static decision boundaries. However, there are studies that assume that more dynamic properties are at play during the decision-making process. These dynamic drift diffusion models generally assume that the amount of evidence that needs to be accrued fluctuates as time passes. Some of them assume that the decision boundaries bend or collapse inwards as decisional time passes (Drugowitsch et al., 2012; Gluth et al., 2013). Thus, as time passes, less and less evidence needs to be accrued to reach a decision. Other dynamic models assume static boundaries, but with an urgency signal that multiplies the evidence being accrued over time (Deneve, 2012; Thura & Cisek, 2014). However, it has been found that these dynamic extensions of the static decision boundary do not provide a better fit for perceptual decision-making behavior in humans (Boehm et al., 2016).

Depending on the type of task being performed, the DDM will use corresponding parameter values to simulate response behavior accordingly. Thus, when empirical data are applied to a parameter recovery model, corresponding parameter values are obtained. When these values are used to simulate data from a predictive model, it is assumed that these data are representative of, even predictive of, the data that have been acquired empirically. Relating to the results found in the current study, when data are simulated under the 3-stage model (as compared to the 1- and 2-stage models), the resulting variance might also better reflect the variance that would have been observed in empirical data. So, for instance, if one group of people is assumed to elicit

[between-groups, df = 2]

           K-W chi2     p-value     J-T          p-value
Skewness      0.680     0.712       149390000    0.228
Kurtosis      0.152     0.927       149730000    0.629
MRT           0.416     0.812       149600000    0.628
VRT         261.41      < 2.2e-16   136640000    < 2.2e-16

Table 2: Results of the nonparametric tests. These were the Kruskal-Wallis and Jonckheere-Terpstra tests.


a single (slow) processing stage, while another group is assumed to elicit three (fast) processing stages, evidence for the presence of these three processing stages could be obtained by observing equal response times, but with decreased variances for the latter group. In conclusion, the decrease in variance between the 1-, 2-, and 3-stage drift diffusion models could function as a behavioral marker that reflects the presence of the three processing stages.

Future Suggestions

A parameter recovery model for the DDM algorithm that I worked on would be very useful in order to further verify the empirical viability of the 3-stage model. Attempts to use the parameter recovery model offered by Wagenmakers, van der Maas, and Grasman (2007) were in vain due to incompatible variances that affected parameter recovery: the within-groups variances computed in this paper were severalfold larger than the ones used in their study, which affected the parameter values that would be computed from the parameter recovery model. Thus, in future studies I might pursue the task of writing a compatible drift diffusion recovery algorithm, or the task might be taken up by anyone interested in this study. Another suggestion would be to examine the 3-stage model with differing lengths of processing stages. In this paper, the stages were of equal length, but this does not necessarily have to be how perceptual processing proceeds. For instance, the response selection stage might take longer than perceptual identification. For a task such as the random dot motion task, it could take longer to accrue evidence to reach a decision than to encode the visual elements of the task. It could also be that the lengths of the 3 stages fluctuate differently for each individual. Regardless, these are some interesting topics to pursue in future studies. A last suggestion, and an important one at that, would be to apply the 3-stage model empirically. Presently, predictive models in psychology are capable of accurately simulating a myriad of behaviors. However, comparisons to actual data are imperative to validate their performance. Thus, until this has been accomplished, the viability of the extended drift diffusion models over the classic DDM should be discussed with a tone of skepticism.

Lastly, when using predictive models to simulate behavior, it is important to be aware of the details specified by the supporting theories or paradigms. Although a particular predictive model might seem to properly represent the behavior it is supposed to simulate, its performance might still not be optimal due to some overlooked details in the underlying theories that were not applied to the model.


References

Bode, S., Sewell, D. K., Lilburn, S., Forte, J. D., Smith, P. L., & Stahl, J. (2012). Predicting perceptual decisions from early brain activity. Journal of Neuroscience, 32, 12488–12498.

Boehm, U., Hawkins, G. E., Brown, S., van Rijn, H., & Wagenmakers, E.-J. (2016). Of monkeys and men: Impatience in perceptual decision-making. Psychonomic Bulletin & Review, 23, 738-749.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159.

Dehaene, S., & Sigman, M. (2012). From a single decision to a multi-step algorithm. Current Opinion in Neurobiology, 22, 937-945.

Dell'Acqua, R., Job, R., Peressotti, F., & Pascali, A. (2007). The picture-word interference effect is not a Stroop effect. Psychonomic Bulletin & Review, 14, 717-722.

Deneve, S. (2012). Making decisions with unknown sensory reliability. Frontiers in Neuroscience, 6. doi:10.3389/fnins.2012.00075.

Donders, F. C. (1868). Over de snelheid van psychische processen. Het Physiologisch Laboratorium der Utrechtsche Hoogeschool, 2, 92-120.

Drugowitsch, J., Moreno-Bote, R., Churchland, A. K., Shadlen, M. N., & Pouget, A. (2012). The cost of accumulating evidence in perceptual decision making. Journal of Neuroscience, 32, 3612-3628.

Feng, S. F., Schwemmer, M., & Gershman, S. J. (2014). Multitasking versus multiplexing toward a normative account of limitations in the simultaneous execution of control-demanding behaviors. Cognitive, Affective, & Behavioral Neuroscience, 14, 129-146.

Ferreira, V. S., & Pashler, H. (2002). Central bottleneck influences on the processing stages of word production. Journal of Experimental Psychology, 28, 1187-1199.

Frank, M. J., Gagne, C., Nyhus, E., Masters, S., Wiecki, T. V., Cavanagh, J. F., & Badre, D. (2015). fMRI and EEG predictors of dynamic decision parameters during human reinforcement learning. Journal of Neuroscience, 35, 484–494.

Geary, R. C. (1947). Testing for normality. Biometrika, 34, 209-242.

Gluth, S., Rieskamp, J., & Büchel, C. (2013). Classic EEG motor potentials track the emergence of value-based decisions. NeuroImage, 79, 394–403.

Pashler, H., & Johnston, J. C. (1989). Chronometric evidence for central postponement in temporally overlapping tasks. The Quarterly Journal of Experimental Psychology Section A, 41, 19-45.

Heekeren, H. R., Marrett, S., & Ungerleider, L. G. (2008). The neural systems that mediate human perceptual decision making. Nature Reviews Neuroscience, 9, 467-479.

Janczyk, M., Büschelberger, J., & Herbort, O. (2017). Larger between-task crosstalk in children than in adults: behavioral results from the backward crosstalk paradigm and a diffusion model analysis. Journal of Experimental Child Psychology, 155, 95-112.


Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, 1-75.

Maanen van, L., Rijn van, H., & Borst, J. P. (2009). Stroop and picture-word interference are two sides of the same coin. Psychonomic Bulletin & Review, 16, 987-999.

Maanen van, L., Rijn van, H., & Taatgen, N. (2012). RACE/A: an architectural account of the interactions between learning, task control, and retrieval dynamics. Cognitive Science, 36, 62-101.

McKoon, G., & Ratcliff, R. (2012). Aging and IQ effects on associative recognition and priming in item recognition. Journal of Memory and Language, 66, 416-437.

McKoon, G., & Ratcliff, R. (2013). Aging and predicting inferences: a diffusion model analysis. Journal of Memory and Language, 68, 240-254.

Mulder, M. J., Maanen van, L., & Forstmann, B. U. (2014). Perceptual decision neurosciences – a model-based review. Neuroscience, 277, 872–884.

Nosofsky, R. M., Little, D. R., Donkin, C., & Fific, M. (2011). Short-term memory scanning viewed as exemplar-based categorization. Psychological Review, 118, 280-315.

Pashler, H. (1994). Dual-task interference in simple tasks: data and theory. Psychological bulletin, 116, 220-244.

Pearson, K. (1905). Das Fehlergesetz und seine Verallgemeinerungen durch Fechner und Pearson. Biometrika, 4, 169-212.

Philiastides, M. G., Auksztulewicz, R., Heekeren, H. R., & Blankenburg, F. (2011). Causal Role of Dorsolateral Prefrontal Cortex in Human Perceptual Decision Making. Current Biology, 21, 980-983.

Polania, R., Krajbich, I., Grueschow, M., & Ruff, C. C. (2014). Neural Oscillations and Synchronization Differentially Support Evidence Accumulation in Perceptual and Value-Based Decision Making. Neuron, 82, 709-720.

R Core Team (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL: https://www.R-project.org/.

Ratcliff, R. (1978). A theory of memory retrieval. Psychological Review, 85, 59-108.

Ratcliff, R., & Dongen van, H. P. A. (2009). Sleep deprivation affects multiple distinct cognitive processes. Psychonomic Bulletin & Review, 16, 742-751.

Ratcliff, R., Philiastides, M. G., & Sajda, P. (2009). Quality of evidence for perceptual decision making is indexed by trial-to-trial variability of the EEG. PNAS, 106, 6539–6544.

Ratcliff, R., Thapar, A., & McKoon, G. (2001). The effects of aging on reaction time in a signal detection task. Psychology and Aging, 16, 323-341.

Ratcliff, R., Thapar, A., & McKoon, G. (2010). Individual differences, aging, and IQ in two-choice tasks. Cognitive Psychology, 60, 127-157.


Ravenzwaij van, D., Dutilh, G., & Wagenmakers, E-J. (2012). A diffusion model decomposition of the effects of alcohol on perceptual decision making. Psychopharmacology, 219, 1017-1025.

Schmiedek, F., Oberauer, K., Wilhelm, O., Süß, H-M., & Wittmann, W. W. (2007). Individual differences in components of reaction time distributions and their relations to working memory and intelligence. Journal of Experimental Psychology, 136, 414-429.

Sigman, M., & Dehaene, S. (2006). Dynamics of the central bottleneck: dual-task and task uncertainty. PLoS Biology, 4, 1227-1238.

Sternberg, S. (1969). The discovery of processing stages: extensions of Donders’ method. Acta Psychologica, 30, 276-315.

Szabados, T. (1994). An elementary introduction to the Wiener process and stochastic integrals. Studia Scientiarum Mathematicarum Hungarica, 31, 249-297.

Telford, C. W. (1931). The refractory phase of voluntary and associative responses. Journal of Experimental Psychology, 14, 1-36.

Thura, D., & Cisek, P. (2014). Deliberation and commitment in the premotor and primary motor cortex during dynamic decision making. Neuron, 81, 1401–1416.

Voss, A., Rothermund, K., & Voss, J. (2004). Interpreting the parameters of the diffusion model: an empirical validation. Memory & Cognition, 32, 1206-1220.

Wagenmakers, E-J., Maas van der, H., & Grasman, R. P. P. P. (2007). An EZ-diffusion model for response time and accuracy. Psychonomic Bulletin & Review, 14, 3-22.


Appendix A

[within-groups]

                1-stage model   2-stage model   3-stage model
Skewness
  Mean          1.924           1.917           1.892
  Variance      0.063           0.027           0.014
  Z-score       0.217           1.307           1.186
  CI (+/-)      1.875           0.032           0.023
  Effect size   0.030           0.185           0.168
Kurtosis
  Mean          8.592           8.418           8.049
  Variance      7.011           2.288           0.867
  Z-score       0.569           2.078           1.934
  CI (+/-)      0.518           0.296           0.182
  Effect size   0.080           0.294           0.273
MRT
  Mean          957.771         958.438         955.362
  Variance      545.486         271.080         158.928
  Z-score       -0.233          1.483           0.908
  CI (+/-)      4.578           3.227           2.471
  Effect size   -0.033          0.210           0.128
VRT
  Mean          587516.9        589825.5        584923.1
  Variance      2138630810      1323818057      762189321
  Z-score       -0.392          1.073           0.482
  CI (+/-)      9064.085        7131.325        5411.124
  Effect size   -0.055          0.152           0.068
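For reference, between-model comparisons of the kind tabulated above can be derived from per-model means and variances alone. The helper below is an illustrative sketch of the standard two-sample formulas (a z-score for a difference in means, and Cohen's d with a pooled SD), assuming n = 100 simulated participants per model; the function name compare_models is hypothetical, and this is not necessarily the exact procedure used in the thesis.

```r
# Two-sample z and Cohen's d from summary statistics (illustrative sketch).
# Assumes equal group sizes n; the example values are the skewness row above.
compare_models <- function(m1, v1, m2, v2, n = 100) {
  z <- (m1 - m2) / sqrt(v1 / n + v2 / n)  # z-score for the difference in means
  d <- (m1 - m2) / sqrt((v1 + v2) / 2)    # Cohen's d with pooled SD
  c(z = z, d = d)
}

# Skewness, 1-stage vs. 2-stage model:
compare_models(1.924, 0.063, 1.917, 0.027)
```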


Appendix B

[within-groups] (DF = 2)

            K-S chi2   P-value   J-T          P-value
Skewness    0.005      0.997     149970000    0.968
Kurtosis    0.012      0.993     149950000    0.522
MRT         0.177      0.915     1493400      0.797
VRT         0.093      0.954     1496600      0.448

Appendix C

#---

# data generation

ddm = function(Trials, mu, a, NDT = 0, s = .1, dt = 1e-03)
{
  x = rep(0, Trials); count = rep(0, Trials);
  rt = correct = c();
  sd = sqrt(s^2 * dt);  # within-step standard deviation
  mu = mu * dt;         # within-step drift
  nwon = TRUE;
  while (any(nwon)) {


    x = x + rnorm(length(x), mu, sd); # one diffusion step for all pending trials
    count = count + 1;
    won2 = x < -a;           # did x reach the lower boundary (error)?
    won1 = (x > a | won2);   # did x reach either boundary?
    won = won1 + won2;       # 0 if not ended, 1 if upper boundary, 2 if lower
    rt = c(rt, count[won1]);
    correct = c(correct, won[won1]);
    # continue only with the trials that didn't end yet
    nwon = !won;
    x = x[nwon];
    count = count[nwon];
  }
  data.frame(rt = rt + NDT, correct = correct == 1)
}
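As a sanity check on the generator above, the following standalone sketch re-implements the same first-passage logic trial by trial. The function name ddm_sketch and the loop-based form are illustrative simplifications, not the thesis code; with the parameter values used below (drift rate 0.05, boundary separation 0.1), accuracy should land well above chance.

```r
# Trial-by-trial sketch of the diffusion process (illustrative re-implementation).
ddm_sketch <- function(trials, mu, a, ndt = 0, s = 0.1, dt = 1e-3) {
  sd_step <- sqrt(s^2 * dt)  # within-step noise
  mu_step <- mu * dt         # within-step drift
  rt <- numeric(trials); correct <- logical(trials)
  for (i in seq_len(trials)) {
    x <- 0; steps <- 0
    while (abs(x) < a) {               # accumulate evidence until a boundary is hit
      x <- x + rnorm(1, mu_step, sd_step)
      steps <- steps + 1
    }
    rt[i] <- steps + ndt               # RT in time steps, plus non-decision time
    correct[i] <- x >= a               # upper boundary counts as a correct response
  }
  data.frame(rt = rt, correct = correct)
}

set.seed(1)
sim <- ddm_sketch(200, mu = 0.05, a = 0.1)
mean(sim$rt)       # mean first-passage time in steps
mean(sim$correct)  # proportion correct, above .5 for a positive drift rate
```

The vectorized ddm() above is considerably faster because it advances all pending trials in a single rnorm() call per step; the per-trial loop here only serves to make the stopping rule explicit.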

#---

task.n = 1000      # number of tasks (trials) per person
col.n = 100        # number of people performing tasks
drift.rate = 0.05
boundary.sep = .1

#---

# data matrices
data1.1 = matrix(numeric(), nrow = task.n, ncol = col.n)
data2.1 = matrix(numeric(), nrow = task.n, ncol = col.n)
data2.2 = matrix(numeric(), nrow = task.n, ncol = col.n)
data3.1 = matrix(numeric(), nrow = task.n, ncol = col.n)
data3.2 = matrix(numeric(), nrow = task.n, ncol = col.n)
data3.3 = matrix(numeric(), nrow = task.n, ncol = col.n)
ddm.rt1 = matrix(numeric(), nrow = task.n, ncol = col.n)
ddm.rt2.raw = matrix(numeric(), nrow = task.n, ncol = col.n)
ddm.rt3.raw = matrix(numeric(), nrow = task.n, ncol = col.n)
ddm.cor1raw = matrix(numeric(), nrow = task.n, ncol = col.n)
ddm.cor1 = matrix(numeric(), nrow = task.n, ncol = col.n)
ddm.cor2 = matrix(numeric(), nrow = task.n, ncol = col.n)
ddm.cor2.1 = matrix(numeric(), nrow = task.n, ncol = col.n)
ddm.cor2.2 = matrix(numeric(), nrow = task.n, ncol = col.n)
ddm.cor3 = matrix(numeric(), nrow = task.n, ncol = col.n)
ddm.cor3.1 = matrix(numeric(), nrow = task.n, ncol = col.n)
ddm.cor3.2 = matrix(numeric(), nrow = task.n, ncol = col.n)
ddm.cor3.3 = matrix(numeric(), nrow = task.n, ncol = col.n)

skew = numeric()
kurt = numeric()
var.mean1 = numeric()
var.mean2 = numeric()
var.mean3 = numeric()
z13 = numeric()
z23 = numeric()
z12 = numeric()

#---

# Data simulations

set.seed(666)

#---

# Generating sets of RTs, separately for each model

# the 1-stage model
for (j in 1:col.n) {
  data1.1raw = ddm(task.n, drift.rate, boundary.sep)
  # rt data per stage
  data1.1[, j] = data1.1raw$rt
  # correct response data per stage
  ddm.cor1raw[, j] = data1.1raw$correct
}
ddm.rt1 = data1.1
ddm.rt1.colmeans = colMeans(ddm.rt1)
ddm.cor1 = ddm.cor1raw
prop.cor1 = colMeans(ddm.cor1) # proportion of correct responses per set

# the 2-stage model
for (j in 1:col.n) {
  data2.1raw = ddm(task.n, drift.rate, boundary.sep)
  data2.2raw = ddm(task.n, drift.rate, boundary.sep)
  # rt data per stage
  data2.1[, j] = data2.1raw$rt
  data2.2[, j] = data2.2raw$rt
  # correct response data per stage
  ddm.cor2.1[, j] = data2.1raw$correct
  ddm.cor2.2[, j] = data2.2raw$correct
}
# Normalization: average the two stage RTs
ddm.rt2.raw = data2.1 + data2.2
ddm.rt2 = ddm.rt2.raw / 2
ddm.rt2.colmeans = colMeans(ddm.rt2)
ddm.cor2 = ddm.cor2.1 * 10 + ddm.cor2.2

# the 3-stage model
for (j in 1:col.n) {
  data3.1raw = ddm(task.n, drift.rate, boundary.sep)
  data3.2raw = ddm(task.n, drift.rate, boundary.sep)
  data3.3raw = ddm(task.n, drift.rate, boundary.sep)
  # rt data per stage
  data3.1[, j] = data3.1raw$rt
  data3.2[, j] = data3.2raw$rt
  data3.3[, j] = data3.3raw$rt
  # correct response data per stage
  ddm.cor3.1[, j] = data3.1raw$correct
  ddm.cor3.2[, j] = data3.2raw$correct
  ddm.cor3.3[, j] = data3.3raw$correct
}
# Normalization: average the three stage RTs
ddm.rt3.raw = data3.1 + data3.2 + data3.3
ddm.rt3 = ddm.rt3.raw / 3
ddm.rt3.colmeans = colMeans(ddm.rt3)
ddm.cor3 = ddm.cor3.1 * 100 + ddm.cor3.2 * 10 + ddm.cor3.3

# RT variances per model (per simulated person)
rt.vars1 = numeric(); rt.vars2 = numeric(); rt.vars3 = numeric()
rt.vars2.raw = numeric(); rt.vars3.raw = numeric()
for (j in 1:col.n) {
  rt.vars1[j] = var(ddm.rt1[, j])
  rt.vars2[j] = var(ddm.rt2[, j])
  rt.vars3[j] = var(ddm.rt3[, j])
  rt.vars2.raw[j] = var(ddm.rt2.raw[, j])
  rt.vars3.raw[j] = var(ddm.rt3.raw[, j])
}
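The decrease in RT variance across models reported in the Results follows directly from the normalization step above: ddm.rt2 and ddm.rt3 are averages of 2 and 3 independent, identically distributed stage RTs, and the variance of an average of k iid variables is the single-stage variance divided by k. A minimal standalone check of this scaling, using exponential draws as a stand-in for diffusion first-passage times:

```r
# Averaging k iid stage RTs scales the variance by 1/k.
set.seed(42)
stage_rt <- function(n) rexp(n, rate = 1 / 900)  # stand-in stage RTs, mean ~900
n <- 1e5
v1 <- var(stage_rt(n))                                    # 1 stage
v2 <- var((stage_rt(n) + stage_rt(n)) / 2)                # 2 stages, normalized
v3 <- var((stage_rt(n) + stage_rt(n) + stage_rt(n)) / 3)  # 3 stages, normalized
round(c(v2, v3) / v1, 2)  # close to 0.50 and 0.33
```

The same 1/k scaling applies to the per-person variances rt.vars2 and rt.vars3 relative to rt.vars1.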
