Machine-learning classification in studying neural spiking rates

Merijn Testroote 11173106

Bachelor thesis, Credits: 18 EC

Bachelor Opleiding Kunstmatige Intelligentie
University of Amsterdam, Faculty of Science
Science Park 904, 1098 XH Amsterdam

Supervisors: dr. T.R. Walstra, dr. U. Olcese
Faculty of Science, University of Amsterdam
Science Park 904, 1098 XH Amsterdam

January 31st, 2020


Abstract

Previously, support vector machines (SVMs) have been used to decode information from neural spiking data. From neural spiking data recorded in the primary visual cortex of mice, we predicted expected and unexpected external stimuli. In this thesis, we compare the performance of XGBoost (XGB) and the SVM in predicting external stimuli.

Using cross-validation, we trained and evaluated XGB and SVM models. We constructed pseudo-recordings to increase the size of the dataset. Using an XGB classifier, we were able to predict external stimuli from neural spiking data in 5 out of 6 conditions. Our results show that XGB outperforms the SVM in predicting visual and auditory stimuli presented in an oddball paradigm: it found more significant time windows and was less prone to overfitting.


Contents

1 Introduction
2 Background
  2.1 Neuroscientific background
  2.2 Machine-learning classifiers
  2.3 Support-vector machine (SVM)
  2.4 (Extreme) Gradient boosted trees (XGB)
3 Methods
  3.1 Dataset
  3.2 Hyperparameter selection
  3.3 Training & Evaluation
  3.4 Combining recordings
4 Results
  4.1 Single-recording
  4.2 Combined-recording
  4.3 Classifier comparison
5 Discussion
6 Conclusion
7 Future work
A Hyperparameter space
B Extra figures

1 Introduction

It is well known that the biological neuron is one of the building blocks of intelligence in animals. Today we have a good understanding of the workings of the single neuron. How populations of neurons process information and ultimately give rise to cognition, on the other hand, is much less understood.

The neuron is a complex cell and comes in many varieties. One thing all neurons share is that they are excitable: they receive input from neighbouring neurons, and if their internal state meets some criterion they pass the signal on. This signal is known as an action potential or spike, a peak in the membrane potential of a neuron. The spike transfers information from one neuron to another. Spikes are not unique events: neurons spike frequently and modulate their spiking rate according to their input. It is therefore common to study the number of spikes a neuron fires in a certain time window.

Hubel and Wiesel, now Nobel laureates, studied the spiking rates of neurons. They recorded spikes from single neurons using electrodes in the cat primary visual cortex. They found neurons that increase their spiking rate in response to specific visual input (fig. 1b; Hubel et al. 1959, 1962). This traditional analysis of neural spiking data studies the spiking rate of single neurons, discarding the information distributed over populations of neurons. To incorporate that information, we want to combine the activity of many neurons in a model. Models that correctly predict some external stimulus from the spiking data of many neurons can contain information on the workings of neural populations. Since modern probes (fig. 1a) allow the spiking activity of tens to hundreds of neurons to be recorded simultaneously, the only thing missing is suitable models.

Previously, support vector machines (SVMs) have been used successfully to predict external stimuli from population spiking data (Raposo et al. 2014). However, modern machine-learning classifiers have been shown to outperform classical methods such as the SVM in many tasks.

Preliminary results comparing modern machine-learning classifiers with classically used ones show that modern classifiers perform better in regression from neural spiking data (Glaser et al. 2017). One of the classifiers in that comparison is XGBoost (XGB), a popular gradient-boosted-trees implementation. XGB is also suited for classification and hence for our specific problem. To our knowledge, there is no comparison of machine-learning classifiers on neural spiking data. This calls for a comparison between XGB and SVM.


(a) Multi-electrode probe on the left, spiking data on the right

(b) A neuron showing a response to a specific visual input

Figure 1: (a) Adapted from: https://web.archive.org/web/20191210095432/https://neuralynx.com/documents/Summary_Sheet_-_Silicon_Probes.pdf (b) Adapted from Bear et al. 2016

The Cognitive & Systems Neuroscience lab at the Swammerdam Institute for Life Science (SILS-CSN) has recorded neuronal responses to oddball trials in the mouse primary visual cortex. This dataset is used in this thesis.

In this thesis, we try to answer two questions:

1. Can we predict external stimuli from neural spiking data in the SILS-CSN dataset?

2. Does XGBoost outperform a support-vector-machine classifier in predicting external stimuli from neural spiking data in the SILS-CSN dataset?

2 Background

In this chapter, we try to make the current problem accessible to both the machine-learning expert and the neuroscientist. The first part explains the neuroscientific motivation behind this thesis. We will then cover the basics of machine-learning classifiers.

2.1 Neuroscientific background

When presenting human participants with unexpected stimuli, researchers found a strong change in electroencephalography (EEG) measurements compared with the measurements for expected stimuli (fig. 2a; Harms et al. 2016). This response found in EEG recordings is called the mismatch negativity (MMN).

The MMN is believed to be the result of local prediction networks, which show a sudden change in activity when unexpected stimuli are presented. To uncover the underlying networks responsible for the MMN, one needs to measure closer to the actual neurons.

In EEG, electrical activity on the scalp is measured using electrodes. It is non-invasive and commonly used on human participants. Invasive experiments, which measure and control brain activity at a neuronal scale, are rare in human participants. A much-used substitute for humans are rodents such as mice and rats, for which invasive technology is well developed.

Modern multi-electrode probes, very fine needles with multiple electrodes stacked along their depth, allow the local electrical activity to be measured at a neuronal scale (fig. 1a). After drilling small holes in the skull of a rodent, the probe is lowered into the brain (Covey et al. 2015), and neuronal activity can be recorded.

After the recording session is finished, the data is preprocessed and split into frequency bands. The band between 500 Hz and 9000 Hz contains the spiking activity of neurons around the probe. Using a method called spike-sorting, all recorded spikes are clustered, each cluster containing all spikes belonging to one neuron (Wallisch et al. 2014). The brain of the animal is removed and kept for histology, to cross-reference the recorded neurons with the position of the probe.

(a) A classical mismatch negativity response, where N1 is the unexpected stimulus and P3a the expected.

(b) Flip-flop. Every dot is a stimulus. The first green dot is the unexpected first. The following two are the expected stimuli. The pink dot is the unexpected mismatch.

Figure 2: (a) Adapted from Sikkens et al. 2019 (b) Adapted from Daltrozzo et al. 2014

The oddball paradigm is the name for a collection of stimulus sequences. Every sequence of stimuli is designed to test a hypothesis; one is the so-called flip-flop control. In the flip-flop control, two different stimuli are used. One stimulus is presented n times (fig. 2b). The first time it is presented we expect an unconditioned response; after n presentations we expect a weaker (conditioned) response. After this, an unexpected stimulus is presented once, for which we expect a stronger (unconditioned) response.
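As an illustration, one flip-flop block can be generated as follows (a sketch; the function and the role names are ours, not part of the recording software):

```python
def flip_flop(standard, deviant, n):
    """One flip-flop block: `standard` shown n times, then `deviant` once.

    Roles follow the text: the first presentation is the 'unexpected first',
    the following ones are 'expected', and the deviant is the
    'unexpected mismatch'. Names are illustrative, not from the dataset.
    """
    trials = [(standard, "unexpected_first")]
    trials += [(standard, "expected")] * (n - 1)
    trials.append((deviant, "unexpected_mismatch"))
    return trials
```

Swapping the roles of the two stimuli in the next block gives the "flip" and the "flop" of the control.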

Traditionally, single-neuron responses have been used in oddball research. This led Chen et al. to the finding that specific neurons respond "early" after mismatch presentation and others "late" (I. Chen et al. 2015).

Current multi-electrode probes and machine-learning methods allow many neurons to be analysed together, also known as population analysis. This accounts for the fact that neurons encode information together, as if each is part of one picture. Population analysis is becoming more popular in neuroscience and is used in research on cognition (Arandia-Romero et al. 2017).

Population analysis performed on responses to oddball trials shows which neuronal populations encode information about stimuli. It also shows when, relative to stimulus onset, a neuronal population encodes information about the stimuli. This can tell us something about the hierarchy of the neuronal populations.

2.2 Machine-learning classifiers

Machine-learning models are flexible models that learn to predict a continuous or discrete value from input features. They exist both for learning from labelled data (supervised machine learning) and from unlabelled data (unsupervised machine learning).

Learning to predict discrete outcomes from labelled data is referred to as classification. A special case is binary classification, where there are only two classes.

To train a classifier we need a training set. Using neural spiking rates as inputs and a stimulus type as the class we want to predict, we can define the training set as

X = \{(x_t, r_t)\}_{t=1}^{T} \quad (1)

where x_t is a vector of spiking rates, r_t is the stimulus type and T is the number of trials. The spiking-rate vector is defined as

x = [r_1, r_2, \ldots, r_n], \quad r_i \in \mathbb{N} \quad (2)

where n is the number of neurons and r_i the spiking rate of an individual neuron. The label is defined as

r = \begin{cases} 0 & \text{if } x \text{ is not of the target stimulus type} \\ 1 & \text{if } x \text{ is of the target stimulus type} \end{cases} \quad (3)

Using this training set we can define a classifier h:

h(x) = \begin{cases} 0 & \text{if } h \text{ classifies } x \text{ as the incorrect stimulus type} \\ 1 & \text{if } h \text{ classifies } x \text{ as the correct stimulus type} \end{cases} \quad (4)

The performance of the model can serve as a measure of the information encoded by the neural spiking rates. We define the performance of the classifier as the accuracy

A(h \mid X) = \sum_{t=1}^{T} \mathbb{1}(h(x_t) = r_t) \quad (5)

where \mathbb{1}(a = b) is 1 if a = b and 0 if a \neq b.
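Equation 5 sums an indicator over trials; dividing by the number of trials turns the sum into a rate in [0, 1]. A minimal sketch (our helper, not the thesis code):

```python
def accuracy(h, X):
    """A(h|X): fraction of trials where classifier h matches the label.

    X is a list of (spike_rate_vector, label) pairs; h maps a vector to 0/1.
    Eq. 5 is the bare sum; normalising by len(X) gives a rate.
    """
    return sum(h(x) == r for x, r in X) / len(X)
```

For example, the threshold classifier `h = lambda x: 1 if sum(x) >= 1 else 0` scores 0.75 on the toy set `[([1, 2], 1), ([0, 0], 0), ([3, 1], 1), ([0, 1], 0)]`.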


There are many classifiers to choose from. Raposo et al. decode animal choice and modality from neural spiking rates using an SVM (Raposo et al. 2014). They use the learned weights of the model to quantify the importance of individual neurons. The model performance over time shows when the neural population encoded information about choice or modality.

Viejo et al. show that non-linear encoders, in particular gradient boosted trees, outperform linear encoders on neural spiking data for predicting a continuous variable (Viejo et al. 2018). Preliminary results by Glaser et al. confirm this (Glaser et al. 2017). They compared multiple traditional and modern machine-learning algorithms for prediction of a continuous variable and concluded that modern algorithms such as neural networks and ensembles outperform traditional methods, including Wiener and Kalman filters. However, their results comparing gradient boosted trees and support vector machines for regression are mixed: in two out of three datasets the performance is not significantly different, while in the third the XGB outperforms the SVM.

2.3 Support-vector machine (SVM)

The support-vector machine (SVM) is a kernel-machine classifier that tries to find a maximum margin between classes using a weighted sum of the features (Alpaydin 2009). If the data is linearly separable, the SVM finds the maximum-margin hyperplane. If only two input dimensions are used, the hyperplane is a line: the SVM finds the line that separates the data points with the biggest margin. The hyperplane can be defined as

\vec{w} \cdot \vec{x} - b = 0

where \vec{w} is the weight vector (the normal of the hyperplane), \vec{x} is the input data and b determines the offset of the hyperplane from the origin. A linear SVM finds the \vec{w} and b that maximise the margin between the classes.

If the data is not linearly separable, one can use the so-called kernel trick: the features are mapped by a nonlinear transformation to a new space in which they can be linearly separated (fig. 3).

A popular nonlinear kernel is the radial-basis function (RBF), defined as

k(x, x') = \exp(-\gamma \lVert x - x' \rVert^2) \quad (6)

The SVM takes two parameters, referred to as C and γ. C is a regularisation parameter: a low C promotes a simpler solution than a high C. The RBF kernel uses the parameter γ, which defines the inverse radius of influence of a single training example. A lower γ can thus result in a more generalised solution, while a high γ can lead to overfitting.

The SVM is low in time and space complexity, and the model is explainable through its fitted weights. As shown by Raposo et al. (2014), such an explainable model can lead to additional results.


Figure 3: Transforming non-linear data to a linear space using kernel φ.

By Alisneaky, SVG version by User:Zirguezi, own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=47868867

Figure 4: Gradient boosted trees combine the results of many smaller and weaker decision trees. Adapted from T. Chen et al. 2016

2.4 (Extreme) Gradient boosted trees (XGB)

Gradient boosted trees (GBT) is a machine-learning method for regression and classification. GBT models have been very successful in many applications and have been the choice of many competitors in AI competitions (T. Chen et al. 2016).

Like random forests, GBT combines multiple "weak" decision trees. GBT starts out with a best guess: the most frequently occurring class. It then repeatedly adds new decision trees to improve this guess (fig. 4). On every iteration, GBT takes the errors of the previous trees into account. Every tree's contribution is weighted by a learning rate, commonly a number around 0.1. Besides the learning rate, GBT usually also takes a limit on the depth of a single tree and on the total number of trees. If a new tree does not add any information to the guess, GBT stops adding trees.

XGBoost (XGB) is an open-source implementation of gradient boosted trees. XGB adds additional regularisation, which helps to smooth the final learned weights and avoids over-fitting. XGB is highly optimised and can run on multiple cores or workers. From an XGB model, the importance of the features can easily be extracted.
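The boosting loop described above can be sketched for regression on one feature. This is a toy illustration with depth-1 "stumps", not the actual XGBoost algorithm; the helper names are ours:

```python
def fit_stump(xs, residuals):
    """Depth-1 tree: best threshold split minimising squared error;
    each leaf predicts the mean residual on its side."""
    best = None
    for thr in xs:
        left = [r for x, r in zip(xs, residuals) if x <= thr]
        right = [r for x, r in zip(xs, residuals) if x > thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda x: lm if x <= thr else rm

def boost(xs, ys, n_trees=20, lr=0.1):
    """Gradient boosting for squared loss: start from the mean (the best
    constant guess) and repeatedly fit a stump to the current residuals."""
    base = sum(ys) / len(ys)
    pred = [base] * len(xs)
    trees = []
    for _ in range(n_trees):
        resid = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, resid)
        trees.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: base + lr * sum(t(x) for t in trees)
```

Because every stump's contribution is shrunk by the learning rate, many rounds are needed before the ensemble converges toward the targets, which is why a small learning rate is paired with a larger number of trees.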

3 Methods

3.1 Dataset

For training and evaluation of the classifiers, we used a dataset recorded by the Cognitive & Systems Neuroscience lab at the Swammerdam Institute for Life Science. In this dataset, extracellular laminar probe recordings were made in the primary visual cortex of awake, head-fixed mice. Part of the recordings were made with a 32-channel probe, the rest with a 64-channel probe. The dataset consists of neural spiking rates in response to flip-flop oddball trials. In total, 19 recordings were made in 15 mice. The oddball trials include both visual and auditory stimuli: the visual stimuli were horizontal and vertical drifting gratings, and the auditory stimuli varied in pitch.

Within a recording session, many oddball sequences were presented to the animal. Every single presentation of a stimulus is called a trial. Every trial has three parameters:

1. Stimulus type (visual 1, visual 2, audio 1, audio 2)
2. Context modality (audio, visual or audio-visual)
3. Oddball type (unexpected first, expected, unexpected mismatch)

The stimulus type is limited by the context modality. A trial runs from −0.512 s to 1.472 s relative to stimulus onset, 0 s being stimulus presentation. Every recording has its own set of neurons. This results in a T × N × t matrix, where T is the number of trials, N the number of neurons and t time. The values of the matrix are spiking frequencies. The time points are spaced in 100 ms windows, meaning that the first time point is −0.512 s, the second −0.412 s, and so on.
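The mapping from window index to trial time is then a simple affine function (a sketch; the constants come from the text, the helper name is ours):

```python
TRIAL_START, TRIAL_END, WINDOW = -0.512, 1.472, 0.100

def window_time(i):
    """Start time (s, relative to stimulus onset) of the i-th 100 ms window."""
    return TRIAL_START + i * WINDOW

# 19 full 100 ms windows fit between -0.512 s and 1.472 s; how the dataset
# handles the final partial window is an assumption we leave open.
n_windows = int((TRIAL_END - TRIAL_START) / WINDOW)
```

Windows with a start time below 0 s fall in the pre-stimulus baseline, which the base-rate estimate in section 3.3 relies on.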

3.2 Hyperparameter selection

The classifiers require a set of parameters to tune them to the task at hand. One recording was taken out of the dataset and used for hyperparameter selection (see Appendix A for the hyperparameter space). The SVM was tested with both linear and non-linear kernels; the non-linear RBF kernel showed the best results.

The selected hyperparameters for the SVM are:

Parameter | Value
C         | 100
gamma     | 0.001
kernel    | rbf


Figure 5: Neurons per recording

The selected hyperparameters for XGBoost are:

Parameter        | Value
learning_rate    | 0.05
max_depth        | 6
n_estimators     | 100
colsample_bytree | 0.4
min_child_weight | 1
subsample        | 1

3.3 Training & Evaluation

For every 100 ms time window, we trained and evaluated the classifiers. For every recording, we have a T × N feature matrix, T being the number of trials and N the number of neurons. For every trial, we have three parameters: stimulus type, context modality and oddball type. We use the stimulus type as our labels. If a classifier is able to predict the stimulus type (orientation or pitch), the time window might encode information about the modality (auditory or visual).

Before classification, all neurons with a variance lower than 1 are removed, where the threshold of 1 was chosen through experimentation (see eq. 7):

\mathrm{Var}(X) = E[(X - \mu)^2] < 1 \quad (7)

To evaluate a classifier in one time window, we use K-fold cross-validation with k = 6. The cross-validation is stratified, meaning the distribution of the labels is equal over all folds. The mean score over the folds is kept, together with the 5% and 95% percentiles of the score for the confidence interval. As the score function, we used the mean accuracy, defined as:

\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}

where TP is the number of true positives, FP false positives, TN true negatives and FN false negatives.

The cross-validation is repeated in all time windows, resulting in a score and confidence interval per 100ms.

In this dataset, we are doing binary classification, which has a theoretical base rate of P(X = x) = 0.5: the chance of predicting a stimulus type without any prior knowledge. To account for any biases in the data, we need a way to determine a more accurate base rate. We use the average score in the time windows between −0.512 s and 0 s as the base rate (Combrisson et al. 2015).

If the score of the classifier in a time window was significantly above the base rate (p = 0.05), this is reported in the result tables (see table 1). A significant time window is a sign that the neurons are encoding information about the external stimuli.
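The pre-stimulus base-rate estimate described above can be sketched as follows (helper name and argument layout are ours):

```python
def base_rate(scores, times, onset=0.0):
    """Mean classifier score over time windows that start before stimulus
    onset, used as an empirical chance level (cf. Combrisson et al. 2015)."""
    pre = [s for s, t in zip(scores, times) if t < onset]
    return sum(pre) / len(pre)
```

Scoring the classifier on windows where no stimulus information can be present exposes any bias that would inflate the theoretical 0.5 chance level.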

3.4 Combining recordings

Every recording session has an arbitrary number of neurons. Because the firing rates of the neurons are used as features in our classification (T × N), and the number of neurons varies between recordings, we cannot simply combine all trials. To solve this, we construct pseudo-trials.

Pseudo-trials are built by sampling trials from neurons, randomly and without replacement. The only constraint is that we do not mix neuron responses to different stimulus types and modalities. The set of pseudo-trials is of size T \times \sum_{i=1}^{r} \dim(N_i), where r is the number of recordings. Stated differently, the number of neurons in the pseudo-trials is equal to the number of neurons in all recordings together.

Because one set of pseudo-trials is a random combination of trials, we want to account for any bias it introduces. To do this, we create 20 sets of pseudo-trials, where 20 was chosen with respect to run-time. We treat every set as a single recording, but only after cross-validating all 20 sets do we take the mean and the percentiles of the scores.
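The pseudo-trial construction can be sketched as follows (a simplification: we draw one trial per recording per pseudo-trial and omit the without-replacement bookkeeping; the data layout is an assumption of ours):

```python
import random

def pseudo_trial(recordings, condition, rng=random):
    """Build one pseudo-trial for a condition by drawing, independently per
    recording, one real trial of that condition and concatenating the
    neurons' firing rates. `recordings` is a list of dicts mapping a
    condition to that recording's trials (lists of per-neuron rates)."""
    rates = []
    for rec in recordings:
        trial = rng.choice(rec[condition])  # never mixes conditions
        rates.extend(trial)
    return rates
```

The resulting vector has as many entries as there are neurons across all recordings, which is what makes trials from different sessions combinable.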

4 Results

4.1 Single-recording

For every recording, we trained and evaluated an XGB classifier. In 2 out of 12 recordings the XGB found 11 significant time windows (table A2). In the first recording we were able to predict the visual stimulus type in the unexpected-first and expected trials, and in the audio-only condition the expected trials (fig. A8a, e, d). In the second recording, we were able to predict the unexpected-first trials for both visual and audio stimuli, both including a late time component. The single-recording results are mixed, which shows the need for combined recordings.

4.2 Combined-recording

The XGB was able to significantly predict stimuli in all visual conditions (table 1, fig. 6). In the audio conditions, it was only able to predict stimuli in the unexpected-first and unexpected-mismatch trials (fig. 7). In total, the XGB found 18 significant time windows over all conditions.

As in the single recordings, we see predictability in the unexpected-first and unexpected-mismatch visual trials. The predictability in the expected trials was not seen in the single recordings. Using the combined recordings, the XGB found 18 significant time windows; using single recordings this was at most 11 in one recording.

4.3 Classifier comparison

We trained and evaluated both the SVM and the XGB on the combined recordings.

The XGB was able to significantly predict stimuli in seven time windows in the visual-only mismatch condition (table 1 and fig. 6c). The SVM was able to significantly predict stimuli in only three time windows in these trials (fig. A10e). The XGB was able to predict the late time component where the SVM was not.

Out of all 12 conditions, the XGB found 21 significant time windows in 6 conditions. The SVM found 7 significant time windows in 2 conditions.

In 5 out of 12 conditions the base rate was higher for the SVM than for the XGB. The XGB only had a higher base rate in 2 out of 12 conditions, suggesting that the XGB is less prone to overfitting the data.


Modality | Context | Oddball  | NS XGB | B XGB | NS SVM | B SVM
vis      | vis     | first    | 5      | 0.5   | 4      | 0.6
vis      | vis     | expected | 3      | 0.6   | 0      | 0.7
vis      | vis     | mismatch | 7      | 0.5   | 3      | 0.6
aud      | aud     | first    | 3      | 0.6   | 0      | 0.7
aud      | aud     | expected | 0      | 0.6   | 0      | 0.6
aud      | aud     | mismatch | 1      | 0.6   | 0      | 0.7
aud      | av      | first    | 0      | 0.7   | 0      | 0.6
aud      | av      | expected | 0      | 0.7   | 0      | 0.6
aud      | av      | mismatch | 2      | 0.6   | 0      | 0.6
vis      | av      | first    | 0      | 0.7   | 0      | 0.7
vis      | av      | expected | 0      | 0.7   | 0      | 0.7
vis      | av      | mismatch | 0      | 0.7   | 0      | 0.7

Table 1: Combined-recordings results. NS = number of significant time windows, B = base rate.


(a) Unexpected first visual (b) Expected visual

(c) Unexpected mismatch visual

Figure 6: Combined-recordings classification using XGB. The thick green line is the model score; the thin green line is the base rate. The band around the thick green line is a 95% confidence interval. The XGB is able to significantly predict expected and unexpected visual stimuli.


(a) Unexpected first audio (b) Expected audio

(c) Unexpected mismatch audio

Figure 7: The XGB is only able to significantly predict unexpected auditory stimuli, not expected ones.

5 Discussion

We used machine-learning classifiers to predict external stimuli in an oddball paradigm. For every 100 ms time window, we trained and evaluated a classifier. Within every time window, we used the firing rates of neurons as features and an external stimulus as the class. Both the SVM and the XGB were able to significantly predict stimuli. The XGB was able to predict more stimuli than the SVM and was less prone to overfitting the data.

The data was recorded in the primary visual cortex, and we were able to predict visual stimuli. This is expected, as it reflects the processing of visual stimuli in the visual cortex. We were also able to predict auditory stimuli, but only unexpected ones (first and mismatch). This suggests that information about auditory stimuli may reach the primary visual cortex. In some conditions, we report a high base rate of 0.7 where we would expect a theoretical base rate of 0.5. It is unclear whether this is the result of an uneven class distribution or an insufficient amount of data.

6 Conclusion

In our introduction, we posed the following questions:

1. Can we predict external stimuli from neural spiking data in the SILS-CSN dataset?

2. Does XGBoost outperform a support-vector-machine classifier in predicting external stimuli from neural spiking data in the SILS-CSN dataset?

From our results and discussion we draw the corresponding answers. We could predict external stimuli from neural spiking data in 5 out of 6 conditions using an XGB classifier. The XGB outperformed the SVM in predicting external stimuli from neural spiking data: it was able to predict stimuli in 2.3 times as many trials as the SVM.

7 Future work

The number of trials in a single recording is low. This might be the reason we had mixed results in the single-recording classification. To improve the performance of the classifiers, we wrote an algorithm to build pseudo-recordings. The implementation of such an algorithm is laborious and prone to induce bugs, and the methods for it are rarely described in the literature. Future work should therefore standardise methods to create pseudo-datasets from small neural spiking datasets. Because of a lack of resources, we could only build 20 pseudo-recordings. Future work can increase this number in order to decrease possible bias.

We used a non-public dataset. This is troublesome, as it makes our results hard to verify and translate. Future work should verify our results using publicly available datasets.

The field of machine learning knows many methods for classification. Future work should broaden the scope by including more classification methods than XGB and SVM.


References

[Alp09] Ethem Alpaydin. Introduction to Machine Learning. MIT Press, 2009.

[Ara+17] Iñigo Arandia-Romero et al. "What can neuronal populations tell us about cognition?" In: Current Opinion in Neurobiology 46 (2017), pp. 48–57. doi: 10.1016/j.conb.2017.07.008.

[BCP16] Mark F. Bear et al. Neuroscience: Exploring the Brain, Fourth Edition. Wolters Kluwer, 2016, pp. 4–14.

[CC15] Ellen Covey et al. Basic Electrophysiological Methods. Oxford University Press, 2015.

[CG16] Tianqi Chen et al. "XGBoost: A Scalable Tree Boosting System". In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16). Association for Computing Machinery, 2016, pp. 785–794. doi: 10.1145/2939672.2939785.

[CHL15] IWen Chen et al. "Specific Early and Late Oddball-Evoked Responses in Excitatory and Inhibitory Neurons of Mouse Auditory Cortex". In: Journal of Neuroscience 35.36 (2015), pp. 12560–12573. doi: 10.1523/JNEUROSCI.2240-15.2015.

[CJ15] Etienne Combrisson et al. "Exceeding chance level by chance: The caveat of theoretical chance levels in brain signal classification and statistical assessment of decoding accuracy". In: Journal of Neuroscience Methods 250 (2015), pp. 126–136. doi: 10.1016/j.jneumeth.2015.01.010.

[DC14] Jerome Daltrozzo et al. "Neurocognitive mechanisms of statistical-sequential learning: what do event-related potentials tell us?" In: Frontiers in Human Neuroscience 8 (2014), p. 437. doi: 10.3389/fnhum.2014.00437.

[Gla+17] Joshua I. Glaser et al. Machine learning for neural decoding. 2017. arXiv: 1708.00909 [q-bio.NC].

[HMN16] Lauren Harms et al. "Criteria for determining whether mismatch responses exist in animal models: Focus on rodents". In: Biological Psychology 116 (2016), pp. 28–35. doi: 10.1016/j.biopsycho.2015.07.006.

[HW59] D. H. Hubel et al. "Receptive fields of single neurones in the cat's striate cortex". In: The Journal of Physiology 148.3 (1959), pp. 574–591. doi: 10.1113/jphysiol.1959.sp006308.

[HW62] D. H. Hubel et al. "Receptive fields, binocular interaction and functional architecture in the cat's visual cortex". In: The Journal of Physiology 160.1 (1962), pp. 106–154. doi: 10.1113/jphysiol.1962.sp006837.

[RKC14] David Raposo et al. "A category-free neural population supports evolving demands during decision-making". In: Nature Neuroscience 17.12 (2014), pp. 1784–1792. doi: 10.1038/nn.3865.

[SBO19] Tom Sikkens et al. "The Role of Top-Down Modulation in Shaping Sensory Processing Across Brain States: Implications for Consciousness". In: Frontiers in Systems Neuroscience 13 (2019), p. 31. doi: 10.3389/fnsys.2019.00031.

[VCP18] Guillaume Viejo et al. "Brain-state invariant thalamo-cortical coordination revealed by non-linear encoders". In: PLOS Computational Biology 14.3 (2018), pp. 1–25. doi: 10.1371/journal.pcbi.1006041.

[Wal+14] Pascal Wallisch et al. MATLAB for Neuroscientists: An Introduction to Scientific Computing in MATLAB. Academic Press, 2014.

A Hyperparameter space

SVM

param_grid = [
    {'C': [1, 10, 100, 1000], 'kernel': ['linear']},
    {'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]

XGBoost

param_grid = [
    {'max_depth': np.arange(6, 16, 1, dtype=int),
     'learning_rate': np.arange(0.05, 0.31, 0.05),
     'n_estimators': [100],
     'n_jobs': [-1],
     'min_child_weight': np.arange(1, 8, 1, dtype=int),
     'colsample_bytree': np.arange(0.3, 0.8, 0.1),
     'subsample': [0.8, 1]}
]

B Extra figures

Animal | Recording | # Significant | # Neurons
1      | Rec 2     | 2             | 16
2      | Rec 1     | 2             | 54
3      | Rec 2     | 2             | 36
4      | Rec 1     | 3             | 51
5      | Rec 1     | 11            | 43
5      | Rec 2     | 1             | 10
6      | Rec 1     | 11            | 5
7      | Rec 2     | 4             | 15
8      | Rec 1     | 8             | 23
9      | Rec 4     | 8             | 105
10     | Rec 1     | 1             | 104
11     | Rec 1     | 8             | 130


(a) Unexpected first visual (b) Unexpected first audio


(a) Unexpected first visual (b) Unexpected first audio

(c) Expected visual (d) Expected audio


(a) Unexpected first visual (b) Unexpected first audio
