Collapsing Boundaries in Drift Diffusion Models with the Use of Deadlines

Jannes Overtoom
Student no: 10092560


Abstract

In this paper we focus on drift diffusion models with collapsing bounds, which can be used to describe the evidence needed for a decision as a function of reaction time when deadlines are used. Several authors have proposed adding a dynamic component to drift diffusion models so that they fit data acquired in dynamic environments better. Creating a dynamic environment for this purpose has proven difficult, and model fitting has not given a clear picture of when dynamic bounds are used and when static bounds are used. We used deadlines to create a dynamic environment and a direct method to measure the boundaries of drift diffusion models. From our theoretical framework we hypothesised that the boundaries of evidence needed against reaction time would collapse faster when the deadline was shorter. The analysis of the results did not support this hypothesis, but the hypothesis could not be discarded either, because several difficulties with our analysis were found. We conclude that no evidence for collapsing bounds was found in this paper, but that with a change of analysis the method used here could work.


Introduction

Psychologists measure the outcome of decision-making, namely overt conscious behaviour, often and well. This behaviour is easily measurable, because overt behaviour can be observed directly. It is less well known which models can explain covert behaviour, namely the process of decision-making itself. Multiple variables play a role in decision-making, for instance stress (Janis & Mann, 1977), priming (Mandel, 2003), motivation (Kuhl, 1986), time and rewards. All these influencing variables need theories and models before we can fully comprehend how the decision process works. This study looks at the influence of time pressure and rewards on the process of decision-making.

In this paper we focus on models that can be used to explain the evidence needed for a decision against Reaction Time (RT). The basic model relating reaction time to evidence in the decision-making process is the Random Walk Model (RWM; Heath, 1981; Ashby, 1983). This model is part of the class of Sequential Sampling Models (SSM; Ratcliff, 1978). SSM are models based on the accumulation of evidence for making a decision: the model assumes that humans sample evidence from a noisy environment until a critical amount of evidence has been acquired to make a decision. The RWM describes the growth of evidence over time in discrete steps towards two choice options. Each piece of evidence for one of the options is added over time towards an upper or lower boundary of evidence needed for making a decision. Depending on which choice is correct, evidence will, on average, favour one option over the other. Consequently, the accumulated evidence will tend towards one of the two bounds. When this process is run with infinitely small increments of evidence, the amount of evidence becomes measurable on a continuous scale. At this point the RWM is called a Drift Diffusion Model (DDM; Ratcliff & Smith, 2004). DDM are easier to model mathematically and provide better experimental possibilities. When research into DDM started, the models were used such that decision-makers set the boundaries, and these boundaries were then fixed over time. This is called a Static Decision Criterion (SDC; Busemeyer & Townsend, 1993). The DDM was successfully used this way for decades, and explained much of the outcome of decision-making well. In more physiological studies of brain activity the RWM and DDM could also explain measurements such as single-cell recordings (Gerstein & Mandelbrot, 1964). This was later shown to fit well empirically with monkeys (Shadlen & Newsome, 2001; Huk & Shadlen, 2005).
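As a minimal illustration of this accumulation-to-bound idea, the sketch below simulates a DDM with a static criterion; the drift, noise, bound and time-step values are illustrative assumptions, not estimates from any study discussed here.

```python
# A minimal sketch of a drift diffusion process with static bounds (SDC).
# Drift, noise, bound and time step are illustrative assumptions.
import numpy as np

def simulate_ddm(drift=0.8, noise=1.0, bound=1.5, dt=0.001, max_t=3.0, rng=None):
    """Accumulate noisy evidence until an upper/lower bound or max_t is reached."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = "upper" if x >= bound else ("lower" if x <= -bound else "timeout")
    return choice, t

rng = np.random.default_rng(1)
trials = [simulate_ddm(rng=rng) for _ in range(1000)]
correct_rts = [t for c, t in trials if c == "upper"]
print("proportion 'upper' (correct) choices:", np.mean([c == "upper" for c, _ in trials]))
print("mean RT of correct responses:", np.mean(correct_rts))
```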

Sometimes adjustments of the decision boundaries were needed. The static nature of the boundaries was challenged by researchers who put time pressure on the decision-making process (Swensson & Thomas, 1974). They noted that when different stopping rules were chosen, the speed-accuracy trade-off changed: with extra time pressure the slope of the speed-accuracy trade-off regression line went down more steeply. This was not expected with a DDM with SDC, where no change in the slope was expected at all. The changing slope of the speed-accuracy trade-off could be explained by an adjustment of the boundaries in the model. The idea was that people are less accurate when there is less evidence for making the right choice, so if people are less accurate when RT is higher, they used less evidence at that point. Other research showed that over time less evidence was needed for making a decision, which is not possible with SDC either (Busemeyer & Rapoport, 1988). These inconsistencies created the need for a dynamic component in the DDM to explain these studies.

It was proposed that this dynamic component was based on the cost of two-alternative forced choice (2AFC) decisions (Busemeyer & Rapoport, 1988). This cost was based on the effort required to acquire each piece of new evidence. This results in a Reward Rate (RR): the amount of reward someone gets per unit of RT. People tend to maximise rewards, so if optimising rewards does not mean being as accurate as possible, participants tend not to be (Balci et al., 2011). The idea is that people trade off the amount of evidence needed against the amount of RT to get the highest reward. This way the perceived cost per piece of evidence goes up over time, and RR becomes ever lower over time. The way to compensate for this cost is to use a Dynamic Decision Criterion (DDC; Boehm et al., 2016). A DDC is a component in the model that changes over time to compensate for the changing perceived reward over time.
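To make the reward-rate intuition concrete, the sketch below computes RR for an assumed saturating accuracy curve; the accuracy function, point values and inter-trial interval are illustrative assumptions, not quantities taken from the literature discussed here.

```python
# A minimal sketch of the reward-rate (RR) intuition: reward per unit time
# eventually drops as decision time grows. All numbers are illustrative.
import numpy as np

def accuracy(t):
    """Assumed saturating accuracy: more deliberation helps, with diminishing returns."""
    return 0.5 + 0.45 * (1 - np.exp(-2.0 * t))

REWARD_CORRECT, REWARD_WRONG, ITI = 750, -750, 2.0   # points and inter-trial interval (s)

for t in [0.25, 0.5, 1.0, 2.0, 4.0]:
    expected_reward = accuracy(t) * REWARD_CORRECT + (1 - accuracy(t)) * REWARD_WRONG
    rr = expected_reward / (t + ITI)                  # reward per second spent
    print(f"decision time {t:.2f} s: accuracy {accuracy(t):.2f}, RR {rr:.1f} points/s")
```

With these assumed values RR first rises and then falls as deliberation continues, which is the trade-off the DDC is meant to compensate for.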

Researchers tried to find support for the existence of DDC in different ways. First, support was found by biological researchers who showed that making 2AFC decisions with a different difficulty per trial resulted in collapsing boundaries in monkeys (Shadlen & Kiani, 2013). Because monkeys can do 2AFC tasks for days and can thus complete a great many trials, biologists noted that with enough data DDM with collapsing bounds could be identified. A further study explored whether DDM with DDC account for the speed and accuracy of perceptual decisions in a reaction-time random-dot motion direction discrimination task, and whether this is also measurable in the brain (Ditterich, 2006). For this analysis, data from decision-related activity of neurons recorded from the parietal cortex (area LIP) of monkeys were used. For 2AFC with short RT the static models fitted well, but when longer RT were recorded the model with DDC fitted better, and this could also be measured in the brain. In biological research, collapsing bounds for DDM are thus well established.

Psychologists took a different approach, based on the cost of evidence instead of the difficulty of trials. This is a more useful approach when dealing with humans, because the biological approach takes days of testing for a single participant to get results (Shadlen & Kiani, 2013; Roitman & Shadlen, 2002), and paying participants to test for days would cost a small fortune. Busemeyer and Rapoport (1988) theorised that every behaviour needed to get evidence is an extra cost for getting the right answer. Costs could consist of computing the evidence or moving to get the evidence. This way cost can be measured and manipulated, because the amount of evidence provided can be measured and manipulated. This cost of evidence is compared to the RT and accuracy, which results in an RR. This research showed that DDC were used by participants, but that the participants had a hard time figuring out the RR optimisation for the task. Only a model with myopic stopping rules seemed to fit. The myopic model says that humans only take small blocks of evidence and decide after each block whether new evidence is worth it, based on the knowledge acquired so far. This model fitted, unlike a full model in which all possible steps are calculated in advance to obtain the optimal RR. The researchers concluded that humans seem to have too little capacity for finding the optimal stopping rule and processing all the information provided.

Later, an attempt was made to combine the biological approach with the psychological approach (Drugowitsch et al., 2012). The researchers manipulated the difficulty of the items and used the sampling cost as a reward. They checked whether this resulted in a change of RR optimisation in humans and monkeys. It did, so it was shown that a combination of the two approaches could be used for testing whether RR optimisation would result in a DDM with a DDC.

The next step was to show what the DDC actually is. There are two different approaches to implementing a DDC in a DDM. The first is to have dynamic boundaries that make the critical amount of evidence needed decrease over time; for instance, boundaries that collapse over time (Drugowitsch et al., 2012). The second is to implement an urgency variable in the model (Cisek et al., 2009). Urgency behaves as a growing gain on the amount of evidence, so that evidence weighs more and more heavily over time. Was it collapsing boundaries or an urgency signal? Both models could explain neurological activity in monkey brains, and so could a DDM with SDC (Hanks, Kiani, & Shadlen, 2014). Fitting the different models to neurological data was therefore not an option for finding the best model; a mathematical basis for the use of the different models was needed. Mathematical analyses concluded that urgency models are less usable than models with dynamic bounds (Boehm et al., 2016). The urgency model has a problem with ever-growing standard deviations over time, which results in a totally unpredictable outcome because of the randomness of these standard deviations. This makes the model hardly usable. Dynamic boundaries can be modelled better, because the problem of growing deviations does not occur. This does not mean urgency cannot be incorporated in the collapsing-bounds model. Frazier and Yu (2007) showed mathematically that RR-optimality under stochastic deadlines is achieved by collapsing bounds; this happens because of growing urgency towards a deadline. This means that collapsing bounds should appear when conditions with deadlines are used. Deadlines would induce a sense of urgency that grows towards the deadline. The regression line of this growing urgency over time could explain collapsing boundaries if urgency and the amount of evidence needed are correlated.
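The two implementations can be contrasted in a small simulation sketch: the same noisy accumulator is stopped either by a linearly collapsing bound or by an urgency gain applied to the accumulated evidence. All parameter values are illustrative assumptions.

```python
# A minimal sketch contrasting a collapsing bound with urgency gating.
# Parameters (drift, noise, bound, collapse rate, urgency slope) are assumed.
import numpy as np

def decide(drift=0.5, noise=1.0, bound=1.5, dt=0.001, max_t=2.0,
           collapse=0.0, urgency=0.0, rng=None):
    """Run one accumulator; stop when the (possibly gated) evidence hits the
    (possibly collapsing) bound, or when max_t is reached."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        b = max(bound - collapse * t, 0.05)   # collapsing bound: b(t) = b0 - k*t
        gated = (1.0 + urgency * t) * x       # urgency gating: growing gain on evidence
        if abs(gated) >= b:
            return ("upper" if gated > 0 else "lower"), t
    return "timeout", t

rng = np.random.default_rng(2)
for label, kwargs in [("static bound", {}),
                      ("collapsing bound", {"collapse": 0.6}),
                      ("urgency gating", {"urgency": 1.5})]:
    rts = [decide(rng=rng, **kwargs)[1] for _ in range(500)]
    print(f"{label:16s} mean RT: {np.mean(rts):.3f} s")
```

Both mechanisms shorten decision times relative to the static bound, which is why they are so hard to tell apart from fits to behavioural or neural data alone.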

A meta-study on the subject of DDC versus SDC in DDM showed other difficulties with finding collapsing bounds (Hawkins et al., 2015). Many studies fitted models to the data and did not measure boundaries directly. The model fitting gave a comprehensive but somewhat unclear conclusion: the results were in favour of SDC models, but showed that in certain instances DDC fitted the data better. DDM with SDC or DDC could both be fitted to data from many different studies with 2AFC tasks. To avoid the indirect work of model fitting, in this paper a 2AFC task is used with discrete steps of evidence and RT. This way the bounds can be measured directly, with linear regression lines of evidence needed against RT for correct decisions on the trials. Participants are assigned to one of three conditions with increasingly short deadlines, and for each participant the boundaries are measured. It is expected that for each shorter deadline the boundaries will collapse sooner, and thus that the slope of the linear regression lines will go down more steeply when deadlines are shorter. Our hypothesis according to the theory is as follows:

H1: |β1| > |β2| > |β3|, where β1, β2 and β3 are the regression slopes of evidence against RT in the 1 s, 1.5 s and 2.5 s deadline conditions, respectively.

Methods

Participants

A group of 24 participants (10 male, 14 female; age M = 22.2, SD = 2.7) participated in this experiment. Four others also participated, but their data were incomplete and the wrong software was used when testing them. The participants were offered mandatory participation points for students as a reward for participating in this research. As a result, the participants consisted of healthy students from the University of Amsterdam.


Participants were randomly assigned to one of three experimental conditions: a one-second deadline condition, a 1.5-second deadline condition and a 2.5-second deadline condition. Written informed consent was obtained from the participants before the start of the experiment. Ethical approval for the study was given by the University of Amsterdam's Ethics Review Board.

Procedure

Each participant was given forms for informed consent and demographic data before the test. When the forms were filled in, the experimenter decided which condition the participant was put into. The participant then did the test in eighteen blocks. The first block was a quick practice block without deadlines to get the participant acquainted with the task. The second block was another practice block in which the deadline was introduced. After these practice rounds the trials for the analyses began: the participant did 16 blocks of 50 trials each. Each trial ended with the number of points earned as direct reward. After a block of trials the participants were shown their total score for that block. This went on for 16 blocks, with pauses after each block. At the end of all the blocks the combined total score was presented as final feedback. After the testing the participants were thanked for their participation and debriefed about the experiment.

Materials

The amount of evidence needed for a decision over time was measured with a program running in PsychoPy version 1.84.2 (Peirce, 2007, 2008). This program was run on an ASUS VG236 23-inch screen. The refresh rate of the screen was 60 Hz and the resolution was set to 1920 x 1020. The participants were seated at a viewing distance of 70 cm from the screen. On the screen two black squares were shown, one on the left and one on the right. These squares appeared below or above an imaginary horizontal line through the middle of the screen. The two squares reappeared either high or low every 100 ms: a square appeared above or below for 50 ms, followed by a pause of 50 ms. The participant needed to note which of the two squares appeared in the upper position most often. Figure 1 shows how this looked on the screen. The task was to choose the side, left or right, where the square appeared in the upper position most often over time.

Figure 1. The four types of stimulus combinations shown to the participants. Each picture shows the two squares being above or below the middle line. Above: the two distractor configurations, with both squares either up or down. Below: the two types of stimuli where evidence is provided for left or right.

These squares were 4.5 degrees of visual angle wide and 1.7 degrees high, and were placed 15 degrees apart around the middle of the screen. The probability of a square appearing high or low was determined by the rate parameters θT and θD. These θ's are the probabilities of an upper or lower flash appearing: θT is the probability of a flash providing evidence for the target, θD is the probability of a flash providing evidence for the distractor. The probabilities of receiving evidence for the target, P(C), evidence for the distractor, P(M), or unclear evidence, P(U), are given in Equation 1:

Equation 1.
P(C) = θT (1 − θD)
P(M) = θD (1 − θT)
P(U) = θT θD + (1 − θT)(1 − θD)

The evidence increments were then distributed as in Equation 2:

Equation 2.
λH1 = +1 with probability P(C), 0 with probability P(U), −1 with probability P(M)
λH2 = +1 with probability P(M), 0 with probability P(U), −1 with probability P(C)
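A small numerical sketch of Equations 1 and 2 is given below; the rate parameters are illustrative assumptions, not the values used in the experiment.

```python
# A minimal sketch (not the original analysis code) of Equations 1 and 2,
# assuming illustrative rate parameters theta_T and theta_D.
import numpy as np

theta_T, theta_D = 0.6, 0.4          # assumed rate parameters
p_C = theta_T * (1 - theta_D)        # clear evidence for the target
p_M = theta_D * (1 - theta_T)        # clear evidence for the distractor
p_U = theta_T * theta_D + (1 - theta_T) * (1 - theta_D)  # unclear evidence

rng = np.random.default_rng(0)
# Evidence increment per 100 ms flash pair under H1 (target on this side):
# +1 with P(C), 0 with P(U), -1 with P(M), as in Equation 2.
increments = rng.choice([1, 0, -1], size=25, p=[p_C, p_U, p_M])
print("accumulated evidence over 2.5 s:", np.cumsum(increments))
```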

Evidence was provided until the participant made a final choice for the left or right option with the 'q' or 'p' key, or until the deadline was reached. After the deadline given by the participant's condition had passed, the participant could no longer respond and the trial was recorded as an incorrect response. To integrate the RR principle over RT, points were awarded for the participants' answers: a correct answer was shown as '+750' in green, a wrong answer as '-750' in orange, and '-1000' in red if the participant missed the deadline. The result of each trial was shown briefly. After each block the total sum of points from that block was shown and a pause was offered to the participant. After all blocks the total sum of all points was given as final feedback on the test.
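A rough sketch of one trial of this procedure is given below. It is not the original experiment script: the window settings, square positions, rate parameters and deadline value are illustrative assumptions, and the original presumably used frame-based timing at the 60 Hz refresh rate rather than wait-based timing.

```python
# A minimal sketch of one trial, assuming PsychoPy and illustrative values;
# not the original experiment script.
import random
from psychopy import visual, core, event

win = visual.Window(size=(1920, 1080), color='grey', units='pix', fullscr=False)
left_sq = visual.Rect(win, width=180, height=70, fillColor='black', lineColor='black')
right_sq = visual.Rect(win, width=180, height=70, fillColor='black', lineColor='black')

THETA_T, THETA_D = 0.6, 0.4              # assumed rate parameters
DEADLINE = 1.0                           # assumed deadline in seconds (condition 1)
target_side = random.choice(['left', 'right'])

clock = core.Clock()
response, rt = None, None
while clock.getTime() < DEADLINE and response is None:
    # one 100 ms cycle: 50 ms flash (each square up or down), then 50 ms blank
    left_up = random.random() < (THETA_T if target_side == 'left' else THETA_D)
    right_up = random.random() < (THETA_T if target_side == 'right' else THETA_D)
    left_sq.pos = (-300, 60 if left_up else -60)
    right_sq.pos = (300, 60 if right_up else -60)
    left_sq.draw(); right_sq.draw(); win.flip()
    core.wait(0.05)
    win.flip()                           # blank screen for the 50 ms pause
    core.wait(0.05)
    keys = event.getKeys(keyList=['q', 'p'], timeStamped=clock)
    if keys:
        key, rt = keys[0]
        response = 'left' if key == 'q' else 'right'

# feedback: +750 correct, -750 wrong, -1000 missed deadline
if response is None:
    feedback = visual.TextStim(win, text='-1000', color='red')
elif response == target_side:
    feedback = visual.TextStim(win, text='+750', color='green')
else:
    feedback = visual.TextStim(win, text='-750', color='orange')
feedback.draw(); win.flip(); core.wait(0.5)
win.close()
```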

Data Analyses

Only correct responses will be extracted for the analyses, because this data is assumed to contain less randomness than incorrect responses. First we will examine the RT against the density of answers from the participants: if the distributions differ in form per condition, this will show that our manipulation was successful. Linear regression lines will be calculated over RT and the evidence needed for the 2AFC, separately for the three conditions, and these regression lines will be plotted. After inspecting these graphs, we will run statistical tests on the data to test the hypothesis |β1| > |β2| > |β3|. These tests will be done with hierarchical Bayesian regression using order restrictions (Morey & Wagenmakers, 2014). Using the BayesFactor package (Morey, Rouder, & Jamil, 2014), a B10 will be calculated between H1: |β1| > |β2| > |β3| and H0: not (|β1| > |β2| > |β3|). This will be done using Equation 3:

Equation 3. Posterior odds = Prior odds × BF10:
p(H1 | Data) / p(H0 | Data) = [p(H1) / p(H0)] × [p(Data | H1) / p(Data | H0)]

Results

From the 28 participants, four were excluded because of faulty data. Their data were unusable because a significant portion of trials was missing, or because the wrong software was used, which lacked the deadline manipulation. Table 1 shows RT and accuracy per condition.

Condition   Deadline   Mean RT (s)   SD RT   Mean accuracy   SD accuracy
1           1 s        0.652         0.237   0.600           0.490
2           1.5 s      0.800         0.268   0.629           0.483
3           2.5 s      1.317         0.464   0.665           0.472

Table 1. Mean and standard deviation of RT (in seconds) and accuracy per condition.

From the usable data, only accurate responses were extracted for the analyses, because this data is assumed to contain less randomness than inaccurate responses. The reasoning is that wrong responses will include more random button presses and lapses of attention on the trials, which on average result in more wrong decisions.

The histograms in Figure 2 plot RT bins on the x-axis and density on the y-axis. These histograms were made by density averaging the data. The plot of the first condition shows an approximately normal distribution of responses over RT bins. The second and third conditions show the same pattern, but the distributions spread out more and more. We can see that the deadlines influence how the number of answers is distributed over RT. If the form of the distribution changes with each condition, we can conclude that this is because of our successful manipulation and changing bounds. If a participant's strategy on the trials stayed the same across conditions, the plot of the number of answers against RT with a deadline would look like a cut-off version of the plot without deadlines. If the deadlines are taken seriously by the participants, they will tend to answer with the deadline in mind, and will thus produce a distribution of answers against RT that differs from the distribution without deadlines. The histograms follow the expectation that the distributions change because of our manipulation, which suggests that the manipulation worked.
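As a sketch of this check, the code below computes a density histogram of correct-trial RTs per participant and averages these densities within each condition; the data layout, bin edges and placeholder RT arrays are assumptions.

```python
# A small sketch (assumed data layout) of the "density averaging" check:
# a density histogram per participant, averaged within each condition.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
# assumed: dict condition -> list of per-participant RT arrays (correct trials only)
rts_per_condition = {c: [rng.gamma(4, 0.08 * c + 0.08, 300) for _ in range(8)]
                     for c in (1, 2, 3)}

bins = np.linspace(0, 2.5, 26)
fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)
for ax, (cond, participants) in zip(axes, rts_per_condition.items()):
    densities = [np.histogram(rt, bins=bins, density=True)[0] for rt in participants]
    ax.bar(bins[:-1], np.mean(densities, axis=0), width=np.diff(bins), align="edge")
    ax.set_title(f"Condition {cond}")
    ax.set_xlabel("RT bin (s)")
axes[0].set_ylabel("Average density")
plt.tight_layout()
plt.show()
```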


Figure 2. Reaction times and density of answers. These histograms were made with density averaging on conditions 1, 2 and 3 to account for outliers. Condition 1, with the shortest deadline, shows an approximately normal distribution. From the second to the third condition the distribution shows an increasingly long right tail.

In Figure 3 the evidence (posterior probability) at decision commitment is plotted against RT for every condition. Within each condition the least-squares regression line for each participant is plotted. From the plots of RT against evidence we can see that some participants followed the expectations, but others did exactly the opposite, requiring more evidence before committing to a decision as deliberation time increased. No clear common pattern of regression lines can be seen. What can be seen in the graphs is that the regression lines are more heterogeneous when deadlines are shorter. It can also be seen that the intercepts of the regression lines differ across conditions: shorter deadlines seem to bring lower intercepts.
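The per-participant bound estimate described here amounts to an ordinary least-squares fit of evidence at commitment on RT. A minimal sketch, with an assumed data layout and placeholder trial rows, is given below.

```python
# A minimal sketch (assumed data layout) of the per-participant bound estimate:
# a least-squares line of evidence at commitment against RT, per participant.
import numpy as np
from scipy import stats

# assumed layout: one row per correct trial with
# (participant_id, condition, rt, evidence_at_decision); values are placeholders
trials = np.array([
    (1, 1, 0.52, 0.78), (1, 1, 0.71, 0.74), (1, 1, 0.90, 0.70),
    (2, 1, 0.48, 0.81), (2, 1, 0.66, 0.83), (2, 1, 0.85, 0.86),
], dtype=[("pp", int), ("cond", int), ("rt", float), ("evidence", float)])

slopes = {}
for pp in np.unique(trials["pp"]):
    rows = trials[trials["pp"] == pp]
    fit = stats.linregress(rows["rt"], rows["evidence"])  # evidence ~ RT
    slopes[pp] = fit.slope                                # negative slope = collapsing bound
    print(f"participant {pp}: slope {fit.slope:.3f}, intercept {fit.intercept:.3f}")
```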


Figure 3. RT bins are plotted on the x-axis, and the amount of evidence toward a correct decision on the y-axis. The linear regression lines are the bounds of evidence needed over RT per participant in that condition.

The analyses with Bayesian hierarchical regression models with order restrictions did not work as planned. The BayesFactor package did not allow for order restrictions. MCMC sampling was then used with a Cauchy distribution as prior. This resulted in a prior probability of almost 0, which made the posterior almost infinitely better and the B10 a disproportionately large number. Because of this it was decided to run t-tests on the average slopes between conditions, testing the hypothesis in two parts: H1a: |β1| > |β2| and H1b: |β2| > |β3|. The Bayesian t-tests on the slope coefficients gave no evidence for |β1| > |β2| (B10 = 0.748), and even some small evidence against |β2| > |β3| (B01 = 3.831). A frequentist analysis showed no significant effect for either hypothesis: |β1| > |β2|, t(14.992) = 0.190, p = .574, and |β2| > |β3|, t(11.685) = 1.066, p = .846.
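A rough sketch of these follow-up tests is given below: one-sided Welch t-tests on the absolute slope magnitudes between adjacent deadline conditions. The slope values are placeholders, not the study's data, and the closing comment names Bayes-factor packages only as possible tools, not the ones used here.

```python
# A minimal sketch (not the original analysis) of the follow-up tests on the
# per-participant slope magnitudes; the arrays below are placeholders.
import numpy as np
from scipy import stats

# |beta| per participant, grouped by deadline condition (assumed values)
abs_slopes_1s = np.abs(np.array([-0.41, -0.10, 0.22, 0.05, -0.33, 0.18, -0.07, 0.29]))
abs_slopes_15s = np.abs(np.array([-0.25, -0.15, 0.02, -0.18, 0.09, -0.21, 0.11, -0.04]))
abs_slopes_25s = np.abs(np.array([-0.12, -0.20, -0.08, 0.03, -0.15, -0.06, 0.01, -0.10]))

# One-sided Welch t-tests for H1a: |beta_1| > |beta_2| and H1b: |beta_2| > |beta_3|
t_a = stats.ttest_ind(abs_slopes_1s, abs_slopes_15s, equal_var=False, alternative="greater")
t_b = stats.ttest_ind(abs_slopes_15s, abs_slopes_25s, equal_var=False, alternative="greater")
print(f"H1a: t = {t_a.statistic:.3f}, p = {t_a.pvalue:.3f}")
print(f"H1b: t = {t_b.statistic:.3f}, p = {t_b.pvalue:.3f}")
# A JZS Bayes factor for the same contrasts could be obtained with, e.g., the
# BayesFactor package in R or the pingouin package in Python (not shown here).
```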

Discussion

How RR influences RT has been much researched by psychologists and biologists. In biological research it was shown that during a 2AFC task monkeys' DDM bounds would collapse because of varying difficulty of the trials (Shadlen & Kiani, 2013). Psychologists hoped to find this effect with humans too, but failed to find it consistently. In some research only item difficulty was manipulated, but this did not make it possible to fit DDM with collapsing bounds to the data (Drugowitsch et al., 2012). Other psychological research on collapsing bounds in 2AFC tasks focused on RR optimisation. In these experiments it was difficult for participants to know when their rewards were optimised (Busemeyer & Rapoport, 1988), because of the limited capacity of the human brain to constantly calculate RR optimisation.

Previous psychological studies on the subject did not show the existence of collapsing bounds on a consistent basis, but noted that models with collapsing bounds could sometimes fit the data. They showed that there are certain dynamic testing environments where DDM with collapsing bounds fit the data better (Ratcliff & Smith, 2004; Boehm, Hawkins, Brown, van Rijn, & Wagenmakers, 2016). In this paper different experiments were analysed that could explain dynamic influences on DDM. Static bounds were compared to dynamic bounds, and the influence of difficulty, RR optimisation and urgency was noted. A problem with much of this research was that it was difficult to show whether participants used SDC or DDC (Hawkins et al., 2015). It seemed DDC could be taught, or the dynamic environment could be optimised for the use of collapsing bounds, but this was hard to do, and in most cases DDM with SDC proved sufficient as the standard fit to the data of 2AFC tasks (Hawkins et al., 2015). Recent mathematical work by Frazier and Yu (2007) posed a new option for an experiment to find an effect of collapsing bounds in human decision making. They proposed the use of deadlines as a simple yet elegant way to induce a clear RR and a naturally growing sense of urgency toward the deadline. This should result in collapsing bounds, which would mean that biological research and psychological research are congruent on the subject.

Based on previous research, in this paper it was investigated whether deadlines make bounds collapse in a 2AFC task. It was expected that when a condition had a shorter deadline, the bound would collapse sooner. Participants in three conditions with different deadlines performed a simple 2AFC task. The manipulation was successful, as shown in the density-averaged graphs of correct answers against RT. The task was structured in such a way that evidence against RT could be measured directly, so there was no need for model fitting. Regression lines between RT bins and evidence were calculated per participant, averaged per condition, and tested against each other. The analyses do not support the hypothesis that the bounds collapse because of the deadlines; the tests gave no indication of collapsing bounds. The results did look different between conditions, though. The heterogeneity of the regression lines per participant seemed larger in conditions with a shorter deadline than in conditions with a later deadline. The intercepts also seemed to differ across conditions, with shorter deadlines showing lower intercepts than later deadlines. The heterogeneity of the regression lines is based on variance, and variance is based on squared differences, which inflates differences easily; speculation on this point is therefore not very useful. The different intercepts could support SDC, because under SDC it is expected that people lower the amount of evidence needed when deadlines become shorter (Hawkins et al., 2015).

No important findings were found that provide good evidence for our hypothesis, but the results do not disprove it either. There are two problems with our analyses that may have hidden possible support for our theory: the weight of evidence, which grows when less evidence is provided, and the inevitable positive slope at the beginning of a linear bound.

The problem with the weight of the evidence can explain why the condition with a shorter deadline had slopes expanding more than a condition with a later deadline. When a deadline is shorter there is less evidence available for making a decision, because the evidence is provided at set intervals. When fewer pieces of evidence are available, each piece that is there becomes more important for making a decision, like the law of supply and demand. When evidence becomes more important, people will likely wait as long as possible to gather as much evidence as possible and avoid unnecessary uncertainty. This could lead to more responses later, closer towards the deadline, which would result in an expanding slope of evidence against RT. If this is truly a relevant effect it needs to be researched, and if it is found to be relevant it must be counteracted to allow a clear analysis of the hypothesis of collapsing bounds in dynamic environments. This might be done by making very clear to participants how to optimise their RR, and thus teaching them how to use a DDC, which is in line with what Hawkins et al. (2015) found.

The first RT bins contain reactions that are just fast guesses, because too little evidence has been acquired for a well thought-out response. There are therefore very few responses at the beginning of the RT distribution. This makes the bound at the beginning of the DDM expand rapidly, because of the lack of responses at the beginning of trials. This initially growing bound will affect the overall bound if the bound is measured in a linear way: the overall slope of the bound will seem more expanding than it would be if this initial expanding period were excluded from the linear regression. To counteract this effect, a certain RT bin could be chosen to begin measuring from, at the point where this effect of the initial growing slope stops. To show how this could work, Figure 4 plots the evidence (posterior probability) at decision commitment against RT for every condition, with the least-squares regression line for each participant, when the first 500 ms are excluded from the data.


Figure 4. RT bins are plotted on the x-axis, and the amount of evidence toward a correct decision on the y-axis, with the first 500 ms excluded. The linear regression lines are the bounds of evidence needed over RT per participant in that condition.

These regression lines seem to point more in the direction of bounds collapsing because of deadlines, especially in the first condition. The cut-off of 500 ms was chosen arbitrarily; if the real peak caused by the initial expansion at the beginning of the bound could be calculated, a more precise cut-off point could be used. Perhaps a cut-off point could be computed from the mode of the bound, because this is the peak of the bound, just like the point where the initial expansion stops.
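A minimal sketch of the proposed correction is given below: the same least-squares fit, but with trials faster than an assumed cut-off (here 500 ms) excluded. The data rows are placeholders.

```python
# A minimal sketch (assumed data layout and cut-off) of refitting the
# per-participant regression lines after excluding early RTs.
import numpy as np
from scipy import stats

CUTOFF = 0.5  # seconds; alternatively, the mode of the RT distribution could be used

def bound_slope(rt, evidence, cutoff=CUTOFF):
    """Least-squares slope of evidence vs RT, ignoring trials faster than cutoff."""
    keep = rt >= cutoff
    if keep.sum() < 2:              # not enough trials left to fit a line
        return np.nan
    return stats.linregress(rt[keep], evidence[keep]).slope

# placeholder data for one participant
rt = np.array([0.21, 0.35, 0.55, 0.70, 0.82, 0.95, 1.10])
evidence = np.array([0.55, 0.60, 0.80, 0.78, 0.74, 0.71, 0.68])
print("slope with all trials:   ", stats.linregress(rt, evidence).slope)
print("slope with RT >= 500 ms: ", bound_slope(rt, evidence))
```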

To conclude, in this paper the hypothesis that deadlines make bounds collapse in 2AFC tasks was tested. The manipulation was successful, but the expectations according to the theory did not come through in the analysis of the data. The analysis of the results showed no evidence for the hypothesis that the average bounds collapse more in conditions with a shorter deadline. The hypothesis is not rejected, though, because there are reasons to expect that no collapsing bounds would be found with the analysis used in this paper. If the analysis were redone with this in mind and corrected for it, collapsing bounds in DDM might yet be found.

Literature

Ashby, F. G. (1983). A biased random walk model for two choice reaction times. Journal of Mathematical Psychology, 27(3), 277-297.

Balci, F., Simen, P., Niyogi, R., Saxe, A., Hughes, J. A., Holmes, P., & Cohen, J. D. (2011). Acquisition of decision making criteria: reward rate ultimately beats accuracy. Attention, Perception, & Psychophysics, 73(2), 640-657.

Boehm, U., Hawkins, G. E., Brown, S., van Rijn, H., & Wagenmakers, E. J. (2016). Of monkeys and men: Impatience in perceptual decision-making. Psychonomic Bulletin & Review, 23(3), 738-749.

Busemeyer, J. R., & Rapoport, A. (1988). Psychological models of deferred decision making. Journal of Mathematical Psychology, 32(2), 91-134.

Cisek, P., Puskas, G. A., & El-Murr, S. (2009). Decisions in changing conditions: the urgency-gating model. Journal of Neuroscience, 29(37), 11560-11571.

Ditterich, J. (2006). Evidence for time-variant decision making. European Journal of Neuroscience, 24(12), 3628-3641.

Drugowitsch, J., Moreno-Bote, R., Churchland, A. K., Shadlen, M. N., & Pouget, A. (2012). The cost of accumulating evidence in perceptual decision making. Journal of Neuroscience, 32(11), 3612-3628.

Frazier, P. I., & Yu, A. J. (2007). Sequential hypothesis testing under stochastic deadlines. In J. C. Platt, D. Koller, Y. Singer, & S. Roweis (Eds.), Advances in Neural Information Processing Systems 20 (pp. 465-472). Cambridge, MA: MIT Press.

Gerstein, G. L., & Mandelbrot, B. (1964). Random walk models for the spike activity of a single neuron. Biophysical journal, 4(1), 41-68.

Hanks, T., Kiani, R., & Shadlen, M. N. (2014). A neural mechanism of speed-accuracy tradeoff in macaque area LIP. Elife, 3, e02260.

Hawkins, G. E., Forstmann, B. U., Wagenmakers, E. J., Ratcliff, R., & Brown, S. D. (2015). Revisiting the evidence for collapsing boundaries and urgency signals in perceptual decision-making. Journal of Neuroscience, 35(6), 2476-2484.

Heath, R. A. (1981). A tandem random walk model for psychological discrimination. British Journal of Mathematical and Statistical Psychology, 34(1), 76-92.

Huk, A. C., & Shadlen, M. N. (2005). Neural activity in macaque parietal cortex reflects temporal integration of visual motion signals during perceptual decision making. Journal of Neuroscience, 25(45), 10420-10436.

Janssen, P., & Shadlen, M. N. (2005). A representation of the hazard rate of elapsed time in macaque area LIP. Nature Neuroscience, 8(2), 234-241.

Janis, I. L., & Mann, L. (1977). Decision making: A psychological analysis of conflict, choice, and commitment. Free press.

Kuhl, J. (1986). Motivation and information processing: A new look at decision making, dynamic change, and action control.

Mandel, N. (2003). Shifting selves and decision making: The effects of self-construal priming on consumer risk-taking. Journal of Consumer Research, 30(1), 30-40.

Morey, R. D., & Wagenmakers, E. J. (2014). Simple relation between Bayesian order-restricted and point-null hypothesis tests. Statistics & Probability Letters, 92, 121-124.

Morey, R. D., Rouder, J. N., & Jamil, T. (2014). BayesFactor: Computation of Bayes factors for common designs. R package version 0.9, 8.

Ratcliff, R., & Smith, P. L. (2004). A comparison of sequential sampling models for two-choice reaction time. Psychological Review, 111(2), 333.

Roitman, J. D., & Shadlen, M. N. (2002). Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. Journal of Neuroscience, 22(21), 9475-9489.

Shadlen, M. N., & Newsome, W. T. (2001). Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. Journal of Neurophysiology, 86(4), 1916-1936.

Shadlen, M. N., & Kiani, R. (2013). Decision making as a window on cognition. Neuron, 80(3), 791-806.

Swensson, R. G., & Thomas, R. E. (1974). Fixed and optional stopping models for two-choice discrimination times. Journal of Mathematical Psychology, 11(3), 213-236.
