
En route towards understanding the brain: An integration of unifying neuroscientific theories

October 29, 2015

Sven A. van der Burg (10181091), Behavioural Neuroscience track, Brain and Cognitive Science Master Program, University of Amsterdam

sven.vanderburg@student.uva.nl

Supervisor: Dr. Conrado A. Bosman
Co-assessor: Dr. Umberto Olcese

Word count: 9011

Generally, neuroscientific research is reductionistic: the idea is that by focusing on the separate constituents of the brain, this knowledge will eventually add up to a coherent explanation of how the brain works. This has led to a fragmentation of neuroscientific information across experimental fields, which prevents a comprehensive understanding of the brain. It is necessary to overcome this fragmentation problem and to construct and test unifying theories that (1) generalize different processes to more fundamental laws and (2) integrate processes at different scales of the brain. This thesis discusses the necessity of unifying theoretical approaches for a thorough understanding of the brain. Furthermore, it provides an overview of recent advances in unifying theories on global brain function and architecture. Finally, pointers for the integration of unifying neuroscientific theories are provided, thereby contributing to a more comprehensive understanding of how the brain works.


Contents

1 Introduction
2 The fragmentation problem
  2.1 Generalisation
  2.2 Scale-integration
  2.3 A demand for unifying theories
3 How does the brain encode information?
  3.1 The single neuron
  3.2 Population coding
4 Statistical regularities
  4.1 The log-dynamic brain
  4.2 Criticality and emergence
5 Universal global connectivity patterns
  5.1 Hierarchy
  5.2 Small-world networks
  5.3 Cortical high-density counterstream architecture: Is the cortical network really small-world?
  5.4 Hubs and rich clubs
  5.5 The brain as a large-scale dynamic functional network
  5.6 Segregation and integration
  5.7 Conclusions
6 Zooming in on network building blocks
  6.1 Structural and functional circuit motifs
  6.2 The dynamic circuit motif hypothesis
  6.3 The canonical neocortical circuit
  6.4 Canonical operations
7 Generative models of perception and the free-energy principle
  7.1 Bayesian coding hypothesis
  7.2 Hierarchical generative theories
  7.3 Free-energy
  7.4 How can free-energy be minimized?
  7.5 Neuronal implementation of the free-energy principle
8 Integrating unifying theories
  8.1 Relationships between theories
  8.2 Generalizing and scale-integrating theories
  8.3 Structural versus functional explanations
  8.4 Abstract principles versus descriptive theories


1 Introduction

Neuroscientific research is typically reductionistic: by focusing on separate elements of the brain, such as specific brain regions, processes, or receptor types, it is thought that this knowledge will eventually accumulate into a comprehensible explanation of brain function. For example, the majority of functional MRI studies are concerned with modular, single-brain-area structure-function mapping, leading to a myriad of brain functions being ascribed to single regions (Behrens et al., 2013; Stalnaker et al., 2015). Related to this, Ramachandran (1990) proposed that the brain is essentially a bag of tricks, a combination of independent mechanisms that together produce its functions. Although we have learned much from the reductionistic approach, we are still far from a comprehensive understanding of the brain (Adolphs, 2015; Fuster, 2000; Yuste, 2015). Moreover, neuroscientists who do research on a particular neural scale often fail to look beyond that scale, which some refer to as Neural Epicentrism (Gordon, 2003). In the past decade, some researchers have proposed to combine all the different scales in the brain to form an integrative understanding of the brain as a whole, a discipline termed Integrative Neuroscience (Gordon, 2003; Grillner et al., 2005a). Furthermore, the recently launched Journal of Integrative Neuroscience promotes this approach by publishing neuroscientific studies across different scales. A related recent development in the neuroscientific field is the start of big science projects, such as the Brain Activity Map, the MindScope project, the BRAIN Initiative, and the Human Brain Project (Kandel et al., 2013). These large collaborative projects aim to reach a more integrated understanding of the brain. Nonetheless, the road to understanding the brain by integrating neuroscientific information is first and foremost reserved for theoretical neuroscience, which generalizes and links different empirical observations to form general theories and unifying principles for the brain.
The core of this thesis is devoted to giving an overview of unifying neuroscientific theories, but first I will discuss why exactly unifying theories are so important for today's neuroscience and what kind of theories are needed.

2 The fragmentation problem

"(...) we might appear to have in brain science a modern-day 'Tower of Babel', built with a single purpose, but nevertheless doomed by jargon, misunderstandings between the various builders, as well as superficial parodies and denigrations about each other's context and content." (Gordon (2003), p. S2)

A consequence of centuries of reductionistic neuroscience, with an exponential acceleration in the last decades, is that very detailed information is available on a myriad of small topics. As most neuroscientists are specialized in a narrowed-down section of the neuroscientific field, and often fail to look beyond their own section, this information is rarely integrated into unifying accounts of brain function. Most of the information is there, but it is scattered in small, non-integrated patches. This is what I call the fragmentation problem of neuroscience (see Zittoun et al. (2008) for a similar formulation of the fragmentation problem in the field of psychology). The fragmentation problem can be subdivided into (1) a lack of generalization and (2) a lack of scale-integration.

Firstly, the diverse empirical neuroscientific evidence that is out there is scattered across scientific fields, with researchers often focusing on a specific process (e.g. memory or vision), often implying a specific region or system (e.g. the hippocampus or the visual system). Consequently, we do reasonably well in understanding how a specific process works, or what is happening in a specific region, but fail to integrate this information into fundamental principles of brain function. Secondly, there is fragmentation of neuroscientific data into different scales, which is mostly a consequence of the fact that most experimental techniques can only be implemented at a certain level. For example, MRI measurements are confined to the mesoscale study of systems, regions, or relatively big patches of regions, whereas membrane potential recordings are limited to the single-cell level. Although technical advances increase the possibilities of obtaining neuronal data simultaneously across scales (see for example Silvestri et al. (2015)), the vast majority of data will stay confined to a specific scale. Importantly, the limits of measuring on a single scale have tremendously shaped our thinking about the brain, and led to the modularity crisis and the single-neuron doctrine (Barlow, 1972; Fuster, 2000; Yuste, 2015).

Paradoxically, the brain is in essence a system based on the integration of information across regions and levels, and the core of understanding the brain is to understand how all processes act together to accomplish brain function. A possible solution for the fragmentation problem is reserved for theoretical neuroscience, because it can link the fragmented neuronal data and provide unifying concepts. Theoretical neuroscientific approaches have two main roles: (1) generalization, finding fundamental principles that explain a range of empirical findings, and (2) scale-integration, connecting different levels of explanation. Before discussing the different theories that provide unifying accounts of brain function and architecture, let me explain why exactly integrative theories are necessary for understanding the brain, using the human cardiovascular system as an analogy for the brain, and assuming that we have, to a large extent, a global understanding of how the cardiovascular system works.

2.1 Generalisation

First of all, the main function of the cardiovascular system is to transport substances around the body. For an organ as complex as the brain, such an essential principle of function is hard to find, but the free-energy principle (Friston et al., 2006; Friston, 2009, 2010) (section 7.3) might come very close. Nonetheless, knowing the main function of a system does not come close to understanding how the system actually works. Transportation could be done by any kind of system, and only understanding the fundamental properties of a system that govern its functions would lead to a full understanding. Thus, in addition to the understanding in terms of main function, a lower level of understanding of how the system actually accomplishes its main function is needed. At this level, a Grand Unifying Theory (GUT) will not suffice to explain the full function of a system. Several theories or principles are needed to explain the mechanisms behind the main purpose. For the cardiovascular system, this means for example understanding the mechanisms of heart function, the anatomy of arteries and veins, and the principles of osmosis and diffusion for blood-tissue transmission. Note that a reductionistic approach has been very useful to unravel specific parts of the puzzle, for example of cardiovascular anatomy. But we only get a real understanding once we put all the pieces together and compose fundamental laws, such as the distinction between arteries and veins or the branching of ducts into capillaries. Similarly, it is for example useful to reductionistically study a specific connection between any two parts of the brain. However, to actually understand brain anatomy, reductionistic studies should be generalized into fundamental principles such as small-world brain architecture (Bassett and Bullmore, 2006; Bullmore and Sporns, 2009) (section 5.2) or structural circuit motifs (Sporns and Kötter, 2004) (section 6.1). Thus, for a full understanding of a complex system such as the brain, generalizing theories are needed.

2.2 Scale-integration

Supplementary to extracting general principles from reductionistic knowledge, an important role reserved for theoretical science is to provide links between mechanisms operating at different scales, as emphasized by Gordon (2003). In the cardiovascular example, linking different scales of explanation is essential in understanding how, for example, the active excretion of hormones in capillaries, the pumping of the heart, and vascular anatomy together lead to the binding of hormones to receptors in target organs. Because mechanisms in the brain span several orders of magnitude in scale, this is even more important in understanding how the brain works. Firstly, this is accomplished by theories that provide general principles that can be applied to different scales, such as the log-dynamic brain hypothesis (Buzsáki and Mizuseki, 2014) (section 4.1) and the concept of self-organized criticality (Beggs and Timme, 2012; Chialvo, 2010; Shew and Plenz, 2013) (section 4.2). Secondly, scale-integration is fulfilled by theories that describe how mechanisms at one level can explain properties at higher or lower levels. A good example of this is the dynamic circuit motif hypothesis (Womelsdorf et al., 2014) (section 6.2).

Next to theoretical approaches, an important role in solving the scale problem is reserved for neural modelling (Gordon, 2003). Simulations of large numbers of neurons bridge the limitations of measurement techniques and thereby provide a solution for the scale-integration problem.

2.3 A demand for unifying theories

It is important to note that the majority of theoretical neuroscientific approaches are actually very focused on a specific process or region in the brain. Take, for example, the two-stage model for memory trace formation of Buzsáki (1989), or reviews of the functions of a specific area (Burgess et al., 2002; Wilson et al., 2014). These theories are crucial for understanding these specific sub-parts of brain function, but are still part of the fragmentation problem: they are often not generalizable and focus on a specific brain scale, the two sub-problems of fragmentation. Fortunately, a small minority of influential theories strives to overcome these problems. These theories unite the sub-part-focused theories into unifying concepts and could therefore be called meta-theories or unifying theories of brain function. Paradoxically, these unifying theories themselves influence different subsets of experimental fields and have only partially been compared and integrated by a few endeavours (Friston, 2010; Gordon, 2003). Therefore, an overview of unifying brain theories is provided in sections 3-7. Links between theories and overarching principles are discussed in section 8. The thesis mainly focuses on recent, influential theories; older or lesser-known theories might not be covered. An additional constraint is that only theories that explain mechanisms or organizational principles of brain function and structure are included. More cognitive theories that do not (yet) have mechanistic explanations for brain function, such as embodied cognition (Anderson, 2003; Wilson, 2002), will not be covered here.

3 How does the brain encode information?

"The cell assembly can be a bridge between the neurophysiological description of nervous activity in the brain and a psychological description of information processing in the mind, indicating that representing the processing of information in terms of cell assemblies has the potential to become the central framework for neuroscience research." (Sakurai (1996), p. 15)

3.1 The single neuron

Perhaps one of the most grounded principles of neuroscience, and perhaps a very trivial one as well, is that of the neuron as the building block of the brain. The single neuron is a powerful machine with a diverse range of functions. It can, for example, integrate multiple senses to create multisensory output (Stein and Stanford, 2008), perform complicated oscillatory transformations (Vaidya and Johnston, 2013), adapt to the contrast of visual stimuli (Sanchez-Vives et al., 2000), and even code for abstract concepts such as a specific person (Quiroga, 2012). A large range of cell types exists, but the general principles of dendritic integration and of action potential generation and propagation are similar for all types (Kandel et al., 2000). Traditionally, information is thought to be coded by the firing rate of a cell. However, additional coding mechanisms, such as spike timing (VanRullen et al., 2005) and phase coding (O'Keefe and Recce, 1993), have gained more attention in the last decades.


3.2 Population coding

Although it is tempting to suggest that the single neuron is the entity for information encoding in the brain, there is now a general consensus that the main carrier of information is the population level (referred to as population coding). The single-neuron coding premise is generally referred to as a doctrine, one that was highly influenced by the single-neuron-focused methodologies available in the last century (Barlow, 1972; Yuste, 2015). Firstly, coding in a population of cells is far more robust than in single cells, as the loss of a few cells will not immediately result in loss of function. Secondly, population coding ameliorates the filtering of noise that is present in single-neuron responses (Pouget et al., 2000; Sakurai, 1996). Thirdly, population coding results in emergent functional states that have much more complex properties than could possibly be obtained by single cells. Hebb (1949) proposed that information is stored in the connections between cells that form ensembles through Hebbian learning (i.e. neurons that fire together, wire together). This view is especially important for memory encoding and retrieval: neurons that represent different attributes of a memory form an ensemble, which can be retrieved when a subset of the neurons gets activated, because of attractor properties (Hebb, 1949; Hopfield, 1994). It is now generally assumed that cellular ensembles are dynamic and are bound together depending on the context through correlated firing (Von Der Malsburg, 1994), presumably in a rhythmically coherent fashion (Singer and Gray, 1995). In general, a network of cells can be thought of as a vector containing the state of each cell, with every state of the vector coding for specific information (Churchland, 2013). By adding the dimension of time, the state of the network can be viewed as traversing a path in network state space, dynamically altering the information that the network state is holding (Harvey et al., 2012; Yuste, 2015).
Population coding is in essence probabilistic, as noise is an important contributor to neuronal firing, leading to uncertainty (Averbeck et al., 2006; Pouget et al., 2000). The way the brain reads out its own code must thus be Bayesian, reconstructing a posterior probability distribution of the information encoded by the network state (Knill and Pouget, 2004). This will be described in more detail in section 7.1, where the Bayesian coding hypothesis is discussed. According to Yuste (2015), the single-neuron doctrine is directly related to the single-neuron-focused methodologies of the previous centuries and is the main cause of the absence of a unified theory of the brain. Furthermore, the recent paradigm shift towards the neuronal network as the functional unit of the brain, together with advancements in the techniques to measure neuronal networks, should help generate a general theory of brain function (Yuste, 2015).
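The Bayesian read-out described here can be illustrated with a toy decoder. The sketch below is purely illustrative: the Gaussian tuning curves, the gain and width values, and the assumption of independent Poisson spiking are my own simplifications, not taken from the cited work. Given a population's spike counts, it reconstructs the log-posterior over stimulus values under a flat prior and reports its peak (the MAP estimate).

```python
import math

def tuning(pref, s, gain=20.0, width=1.0):
    """Mean spike count of a neuron preferring `pref` when the stimulus is s.
    The small baseline keeps the Poisson rate strictly positive."""
    return gain * math.exp(-0.5 * ((s - pref) / width) ** 2) + 0.5

prefs = [i * 0.5 for i in range(-10, 11)]   # 21 neurons tiling stimulus space
true_s = 1.3
# For clarity we use (rounded) mean counts instead of sampling noisy spikes.
counts = [round(tuning(p, true_s)) for p in prefs]

# Log-posterior over a stimulus grid, assuming independent Poisson spiking
# and a flat prior: log P(s | r) is proportional to sum_i [r_i * log f_i(s) - f_i(s)]
grid = [i * 0.01 for i in range(-300, 301)]

def log_post(s):
    return sum(r * math.log(tuning(p, s)) - tuning(p, s)
               for r, p in zip(counts, prefs))

s_hat = max(grid, key=log_post)
print(s_hat)   # close to true_s = 1.3
```

Swapping the noiseless counts for Poisson samples turns this into a full simulation of probabilistic population decoding; the width of the posterior then reflects the uncertainty contributed by neuronal noise.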

4 Statistical regularities

"Instead of searching for ad hoc laws for the brain, under the pretence that biology is special, a good understanding of universal laws might very well provide a breakthrough, because brains must share some of the fundamental laws of nature." (Chialvo (2010), p. 749)

4.1 The log-dynamic brain

It is generally assumed that parameters in the brain, such as the firing rates of cells or synaptic efficacy, are normally distributed. In fact, these parameters follow a log-normal distribution, with a high probability of low values and a heavy tail of low-probability but significant high values. In a recent review, Buzsáki and Mizuseki (2014) propose that the log-normal regularity found in the brain is a critical organizational principle. They show that human behaviour, network synchrony, firing rates, burst properties, synaptic strengths and efficacy, corticocortical connections, and axonal diameters all follow log-normal distributions. They propose that the log-normal distributions found at all these levels are interrelated, and that a log-normal distribution at one level entails log-normal distributions at other levels. An important consequence of the log-dynamic properties of the brain is that a small proportion (about 10% of all cells in the cortex) of fast-firing cells that are highly connected, with high synaptic strengths and efficacy, dominates brain function. According to Buzsáki and Mizuseki (2014), this minority functions as a backbone of brain function. These cells are involved in standard, non-adaptive behaviours, in which the weak majority of cells plays almost no role. The weak majority is, however, important for complex adaptive behaviour. Importantly, the long tail of a log-normal distribution often follows a power-law relation, which has scale-free properties that have been implicated in emergent complex dynamics (Chialvo, 2010). Power-law relations and emergent complex dynamics will be discussed in the next section.
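The dominance of the fast-firing minority can be made tangible with a quick numerical sketch. The log-normal parameters below are arbitrary illustrative choices, not values fitted to neural data:

```python
import math
import random

random.seed(0)
# If log(rate) ~ Normal(mu, sigma), then rate follows a log-normal distribution.
mu, sigma = 0.0, 1.5   # hypothetical spread, for illustration only
rates = [math.exp(random.gauss(mu, sigma)) for _ in range(100_000)]

rates.sort(reverse=True)
total = sum(rates)
top10_share = sum(rates[:10_000]) / total   # activity carried by the top 10% of cells
median = rates[len(rates) // 2]
mean = total / len(rates)

print(f"mean/median ratio: {mean / median:.1f}")
print(f"top 10% of cells carry {100 * top10_share:.0f}% of total activity")
```

With a spread of this width, the mean sits far above the median and the top decile of cells carries over half of all activity, mirroring the "backbone" minority described above.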

4.2 Criticality and emergence

In addition to log-normality, many parameters in the brain show power-law relations, or scale-invariance. This means that parameters show similar patterns at different scales of the measured variable. A well-known example is the 1/f relation between oscillatory frequencies of the local field potential (LFP). The exhibition of power-law relationships is an indication that the brain is a complex system. There is no consensus on how to define complexity, but a working definition of a complex system could be: a large number of interacting components that exhibits properties that are hard to predict from the properties of lower levels (Johnson, 2007; Laughlin, 2014; Spier, 2011). Complexity is highly related to emergence, which refers to the arising of network patterns that are unexpected when observing the individual components of the system (Chialvo, 2010; Johnson, 2007; Spier, 2011). In other words, the whole is greater than the sum of its parts. The brain can be viewed as a complex system, as its global properties arise from the interactions between the individual elements it is composed of in an unexpected and emergent fashion. Quantitative research on complex systems has led to universal principles that underlie complexity and can be applied to many sciences, including biology, geology, and economics (Bak and Paczuski, 1995).

An important dynamical-system trait that can lead to complex dynamics is criticality. The concept of criticality is best explained using the Ising model (Cipra, 1987). The Ising model explains ferromagnetism by modeling lattice sites in a piece of iron. At every site there is an electron whose spin exhibits a certain orientation. At low temperature (T), the spin of the electrons is subject to nearest-neighbour interactions, making them all align in the same orientation, so that the iron bar is magnetic. There is thus a high degree of order, in a state that is very stable over time. This is the subcritical state. As T rises, thermal fluctuations overshadow the nearest-neighbour interactions and all spins start orienting in a random, unstable manner. This supracritical state is characterized by random disorder; the bar loses its magnetic property as there is no net magnetic field. In between these two states there is a critical T at which patches of spins with similar orientation emerge that change dynamically over time. In this critical state the thermal fluctuations and nearest-neighbour interactions are in balance, such that complex patterns emerge.
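The subcritical and supracritical regimes can be reproduced with a minimal Metropolis simulation of the 2D Ising model. The lattice size, temperatures, and sweep count below are arbitrary illustrative choices (in these units the critical temperature lies near T ≈ 2.27):

```python
import math
import random

def magnetization(T, L=16, sweeps=400, seed=1):
    """Metropolis-sample the 2D Ising model and return |mean spin|."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]          # start fully ordered
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nb            # energy cost of flipping this spin
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spin[i][j] *= -1
    m = sum(sum(row) for row in spin) / (L * L)
    return abs(m)

print(magnetization(T=1.5))   # subcritical: ordered, |m| stays near 1
print(magnetization(T=4.0))   # supracritical: disordered, |m| collapses toward 0
```

Near the critical temperature itself, the simulation instead produces the dynamically changing patches of aligned spins described above, which is the regime the brain is hypothesized to self-organize into.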

Substantial computational and empirical evidence suggests that the brain operates at a critical point (Beggs and Timme, 2012; Chialvo, 2010), although this is still under debate (Stumpf and Porter, 2012). At the critical point the network exhibits scale-free behaviour, which favours communication, information storage, computational power, phase synchrony, and dynamic range (the ability of the network to deal with input of different scales) (Beggs and Timme, 2012; Chialvo, 2010; Shew and Plenz, 2013). Because of these ideal circumstances, the brain is thought to maintain criticality through homeostasis. This self-organized criticality might rely on neuromodulatory influence or on the excitation-inhibition balance (Beggs and Timme, 2012; Chialvo, 2010; Shew and Plenz, 2013). By maintaining criticality, the brain creates ideal circumstances that supposedly lead to its complex characteristics. Thus, describing the brain as a self-organized critical system is a powerful way to capture brain function in a unifying principle.


5 Universal global connectivity patterns

"The emerging field of complex brain networks raises a number of interesting questions and provides some of the first quantitative insights into general topological principles of brain network organization." (Bullmore and Sporns (2009), p. 196)

Our knowledge of brain anatomy is incredibly large, and is growing rapidly because of large collaborative initiatives such as the Human Connectome Project (Van Essen et al., 2012). Key to understanding the universal principles underlying structure has been a graph-theoretical approach to the network of connections in the brain. In this approach, a network is represented as a set of nodes that are interconnected by links. Nodes can represent elements of different sizes, such as molecules, cells, or regions. Using mathematical measures, universal principles can be extracted from the network structure.

5.1 Hierarchy

An important and long-known principle of brain organization is hierarchy. The early models of hierarchical visual processing by Hubel and Wiesel already show how information about simple visual features in V1 ascends the cortical hierarchy to higher-order visual areas, step by step creating an increasingly complex representation of what is seen (Hubel and Wiesel, 1962). Based on a meta-analysis of many connectivity studies in macaque cortex, Felleman and Van Essen proposed a hierarchical model of the macaque cortex that still dominates our thinking in neuroscience (Felleman and Essen, 1991). In the cortical hierarchy, feed-forward (FF) or bottom-up connections ascend from lower to higher levels of the hierarchy, while feedback (FB) or top-down connections descend from higher to lower levels. It has generally been assumed that FF connections drive information integration and computation, leading to higher degrees of abstraction when ascending the hierarchical tree, whereas FB connections are traditionally thought to modulate lower-level areas (Hubel and Wiesel, 1962; Felleman and Essen, 1991). In contradiction with this, it has become clear that FB connections also have driving properties, introducing a reverse hierarchy that is important for attention and conscious perception (Hochstein and Ahissar, 2002). FF and FB streams are relatively segregated due to the specific laminar targeting of projections (Markov et al., 2014). Hierarchy is an important unifying principle of structural organization, providing both segregation and integration of processing streams. We will see how hierarchical processing is related to generative models of the brain in section 7. First, small-world architecture is discussed.

5.2 Small-world networks

By analysing network structures throughout biology, technology, and sociology, it became clear that a characteristic, universal structure is prominent (Watts and Strogatz, 1998). This type of network is characterized by high clustering (highly connected groups of nodes are relatively separated from other highly connected groups of nodes) and sparse long-range connections between clusters. This configuration leads to small wiring costs (the total physical distance between nodes is small) and, importantly, surprisingly small path lengths. This network topology is named small-world, after the well-known six degrees of separation in social networks (Milgram, 1967). It has been suggested that anatomical as well as functional connectivity graphs of the brain resemble small-world networks, both at the cellular and at the mesoscale level (Bassett and Bullmore, 2006; Bullmore and Sporns, 2009). Small-world organization allows high functional segregation in combination with specific integration, both essential for complex brain function (section 5.6) (Tononi et al., 1994). Furthermore, small-world topology results in dynamical complexity with high computational power, fast information transmission, and ideal oscillatory properties (Bassett and Bullmore, 2006; Bullmore and Sporns, 2009).
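The small-world signature (clustering stays high while path lengths collapse once a few shortcuts are added) can be checked with a self-contained sketch in the spirit of the Watts-Strogatz rewiring procedure; the network size, node degree, and rewiring probability below are illustrative choices:

```python
import random
from collections import deque

def ring_lattice(n, k):
    # each node linked to its k nearest neighbours on each side
    return {v: {(v + d) % n for d in range(-k, k + 1) if d != 0} for v in range(n)}

def rewire(adj, p, seed=0):
    """Rewire each edge with probability p to a random node (Watts-Strogatz style)."""
    rng = random.Random(seed)
    n = len(adj)
    g = {v: set(nb) for v, nb in adj.items()}
    for v in range(n):
        for w in list(g[v]):
            if w > v and rng.random() < p:       # handle each edge once
                u = rng.randrange(n)
                if u != v and u not in g[v]:
                    g[v].discard(w); g[w].discard(v)
                    g[v].add(u); g[u].add(v)
    return g

def avg_path_length(g):
    total, pairs = 0, 0
    for s in g:                                  # BFS from every node
        dist = {s: 0}; q = deque([s])
        while q:
            v = q.popleft()
            for w in g[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values()); pairs += len(dist) - 1
    return total / pairs

def clustering(g):
    cs = []
    for v, nb in g.items():
        nb = list(nb); k = len(nb)
        if k < 2:
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k) if nb[j] in g[nb[i]])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

lattice = ring_lattice(200, 5)       # regular: highly clustered but long paths
sw = rewire(lattice, p=0.1)          # a few shortcuts: small-world regime
print(clustering(lattice), avg_path_length(lattice))
print(clustering(sw), avg_path_length(sw))
```

Rewiring only about 10% of the edges leaves the clustering coefficient close to the lattice value while cutting the average path length sharply, the combination that defines a small-world network.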


5.3 Cortical high-density counterstream architecture: Is the cortical network really small-world?

Most of the evidence in favour of small-world topology is based on diffusion tensor imaging (DTI) connectivity analyses. Recent viral tracer studies of 29 regions of macaque cortex paint a different picture (Markov et al., 2013b). They show that the density of the cortical graph is 66% (i.e. 66% of the connections that might exist actually exist). This is considerably less sparse than proposed by small-world theories. Furthermore, connectivity is best described by an exponential distance rule: 80% of the projections to a point on the cortex come from neurons within 1 or 2 mm, and only a few percent of connections link distant areas (Markov et al., 2013a). Thus, local circuitry dominates cortical connectivity. Nevertheless, the small minority of long-range connections is very specific and crucial for the connectivity profile of an area (Markov et al., 2013a). This is in line with the log-dynamic brain theory of Buzsáki and Mizuseki (2014). In addition to high density, an important feature of the cortical network is a hierarchy shaped by FF and FB streams that are separated in different layers of the cortex (Markov et al., 2013a). This concept is very important for hierarchical generative models and the free-energy principle (section 7). The separate FF and FB streams, together with the high density of the cortical graph, form the cortical high-density counterstream architecture.
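The exponential distance rule itself is easy to simulate. The decay constant below is a hypothetical value chosen so that roughly 80% of projections fall within 2 mm, echoing the figures above; it is not the constant fitted in the tracer studies:

```python
import random

random.seed(42)
# Exponential distance rule: P(projection spans distance d) is proportional to exp(-decay * d)
decay = 0.8   # per mm; illustrative, not the empirically fitted value
dists = [random.expovariate(decay) for _ in range(100_000)]

short = sum(1 for d in dists if d <= 2.0) / len(dists)
far = sum(1 for d in dists if d >= 10.0) / len(dists)
print(f"within 2 mm: {100 * short:.0f}%   beyond 10 mm: {100 * far:.2f}%")
```

Even under this steep fall-off, a thin tail of long-range projections survives, and it is exactly this sparse, specific minority that shapes an area's connectivity profile.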

5.4 Hubs and rich clubs

Graphs of the neocortex appear to contain so-called hubs: nodes that are highly connected to other nodes. Many of the shortest paths between other nodes pass through hubs, making them important for efficient communication (Bassett and Bullmore, 2006; Bullmore and Sporns, 2009). For example, the lateral prefrontal cortex (PFC) is highly connected to other regions in the brain, making it an important hub for information transmission (Pessoa, 2014). It is presumed that hubs in the brain are organized in a so-called rich club: a number of highly connected hub regions are themselves highly interconnected and form a central (in terms of path length to other areas) core (van den Heuvel and Sporns, 2011; Markov et al., 2013a). Such a core-periphery architecture is reminiscent of many complex self-organizing networks, boosting emergent computational properties (Markov et al., 2013a). Presumably, information flow in the brain is highly influenced by these core areas.

5.5 The brain as a large-scale dynamic functional network

In contrast with traditional modular views of regional one-to-one structure-function mapping, there is now growing consensus that cognitive functions arise from dynamic large-scale interactions between brain areas (Bressler and Menon, 2010; Buzsaki, 2006; Pessoa, 2014; Varela et al., 2001). Many operations are performed simultaneously throughout the brain, making the brain a parallel processing machine, which grants it powerful computational properties (Alexander and Crutcher, 1990; Rumelhart et al., 1988). In the long search for structure-function mapping, it is now becoming evident that structure-function relationships are both pluripotent (one-to-many mapping, i.e. one structure has many functions) and redundant (many-to-one mapping, i.e. one function is related to many structures) (Anderson, 2010; Pessoa, 2014). Notably, structure-function relationships thus appear as many-to-many mappings. From a network view, one can distinguish distinct functional networks, such as the default network or the central executive network (Bressler and Menon, 2010). The recruitment of functional networks is, however, dependent on the context and is therefore very dynamic (Bressler and Menon, 2010; Pessoa, 2014). In addition, synchronous oscillations are important in large-scale communication between brain areas and in the binding of information (Varela et al., 2001). Moreover, large-scale interactions in the brain might lead to emergent states, such as consciousness or feelings, that themselves can influence the ongoing activity in the brain (Damasio, 2008; Thompson and Varela, 2001). Concluding, it is important to understand the brain as a complex, dynamic large-scale network with many processes running in parallel.


5.6 Segregation and integration

Two common concepts related to almost all organizational principles of the brain described in this section are those of segregation and integration. As an example, in the visual system color, motion, and form are processed in highly segregated streams. At the same time, these features are themselves the result of integration as the hierarchical tree is ascended; furthermore, to create a unified percept the different features need to be integrated (Zeki and Shipp, 1988). Regarding the hierarchical anatomy of the neocortex, segregation and integration thus seem to be unifying principles (Felleman and Essen, 1991). Because of their relative sparseness, small-world networks show high segregation as well. Because they connect segregated clusters, hubs and rich clubs are proposed to play a critical role in the integration of information (Sporns, 2013). Furthermore, large-scale phase synchrony is thought to play an important role in integration (Varela et al., 2001). We will see in section 6.1 that segregation and integration are also important in smaller network building blocks. Tononi et al. (1998) proposed a unifying theoretical framework in which functional integration and segregation are described with information-theoretic statistics. In short, if a system is organized in such a way that its modules are neither completely independent (complete segregation) nor completely dependent (complete integration), complex patterns emerge. This combination of high segregation and high integration was termed neural complexity (Tononi et al., 1998) and highlights the functional benefits of a segregated but integrated system (Tononi et al., 1994). The optimal balance between segregation and integration at which the brain operates might be an implementation of self-organized criticality, which was described in section 4.2 as an important principle for the emergence of beneficial complex properties (Beggs and Timme, 2012; Chialvo, 2010).
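Neural complexity can be stated compactly. In one standard formulation (notation here is generic: X is the full system of n units, X_j^k the j-th subset of size k), it sums the average mutual information between subsets and their complements over subset sizes:

```latex
C_N(X) \;=\; \sum_{k=1}^{n/2} \left\langle \mathrm{MI}\!\left(X_j^{k};\, X - X_j^{k}\right) \right\rangle_{j}
```

Every mutual-information term vanishes when the modules are statistically independent (complete segregation), and the measure also stays low for fully homogeneous systems; it is high when small subsets behave relatively independently while larger subsets are strongly integrated, which is exactly the balance described above.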

5.7 Conclusions

Independent of whether brain architecture resembles small-world topology or not, it is apparent that cortical connectivity is subject to general principles that govern the computational properties of the brain. How the structural rules of brain wiring translate exactly to dynamical and functional properties is, however, still ambiguous in the field of connectomics. This has recently led to the proposition of building a human dynome instead of a connectome, incorporating the important dynamical aspects of brain function instead of looking only at structural connectivity (Kopell et al., 2014). In addition, it may prove useful to zoom in on the circuits that compose the connectome to find recurring elements that might be easier to link to functional properties. This will be discussed in the next section.

6 Zooming in on network building blocks

"What rules underlie the organization of the particular types of networks that we see in complex brains? It is likely that, as networks become more complex, already existing simpler networks are largely preserved, extended, and combined (...) We may gain insight into the rules governing the structure of complex networks by investigating their composition from smaller network building blocks." (Sporns and Kötter (2004); pg. 1910)

6.1 Structural and functional circuit motifs

To better understand the building blocks of complex networks and their functional properties, Milo et al. (2002) developed a method to search for recurring small subgraphs consisting of 3-4 nodes, termed structural motifs. Gene regulation networks, food webs, electronic circuits, the World Wide Web, and the connectome of C. elegans all contained similar structural motifs that were significantly more abundant than expected by chance. It was shown that these structural motifs share the dynamical property of robustness in response to small perturbations (Prill et al., 2005). Sporns and Kötter (2004) further explored the distribution of structural motifs in the brain. They found that the brains of different species, from C. elegans to macaque, are built up of similar network motifs. One of the most abundant structural motifs is that of a chain of recurrently connected nodes, the ends of the chain being unconnected. This type of structural motif allows for high integration between local nodes, whereas there is high segregation between other pairs of nodes. As discussed in section 5.6, this leads to emergent complex properties (Tononi et al., 1994).
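As a toy illustration of motif counting in the spirit of Milo et al. (2002), the recurrent-chain motif described above can be detected by enumerating node triples. The graph and the single motif class here are invented for illustration; real analyses enumerate all 3-4-node subgraph classes and compare counts against randomized networks:

```python
from itertools import permutations

# Toy directed graph containing the recurrent-chain motif described above:
# A <-> B <-> C <-> D, with the ends of each 3-node chain unconnected.
edges = {("A", "B"), ("B", "A"), ("B", "C"), ("C", "B"),
         ("C", "D"), ("D", "C")}
nodes = {n for e in edges for n in e}

def is_recurrent_chain(a, b, c):
    """a <-> b <-> c, with ends a and c unconnected (in either direction)."""
    linked = lambda x, y: (x, y) in edges and (y, x) in edges
    ends_open = (a, c) not in edges and (c, a) not in edges
    return linked(a, b) and linked(b, c) and ends_open

# Count motif instances over unordered end pairs (a-b-c equals c-b-a).
count = sum(is_recurrent_chain(a, b, c)
            for a, b, c in permutations(nodes, 3) if a < c)
print(count)  # → 2: the chains A-B-C and B-C-D
```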

6.2 The dynamic circuit motif hypothesis

Womelsdorf et al. (2014) recently connected (1) structural motifs with specific cellular and synaptic properties with (2) explicit rhythmic activation signatures that (3) can perform canonical circuit computations. They termed this the dynamic circuit motif hypothesis. A well-known example of a dynamic circuit motif is the feedforward inhibition (FFI) circuit. FFI refers to input that targets both inhibitory interneurons and principal cells; the inhibitory cells themselves project to the principal cells as well, forming the FFI. FFI circuits can filter input of a specific frequency and facilitate its propagation. Additionally, they open a window for multiplicative processes. Beyond FFI, many circuit motifs exist whose cellular properties form the basis for theta, alpha, beta and gamma band synchronized activity and that can perform various computational operations such as gain control, context-dependent gating, and state-specific integration of synaptic input. The authors propose that, depending on local functional demands, the circuit is an assembly of the proper dynamic circuit motifs, resulting in the appropriate behaviour of the local network. In essence, the building blocks are similar throughout the brain, similar to what is proposed by the graph theoretical analyses of Sporns and Kötter (2004).

6.3 The canonical neocortical circuit

Through many years of extensive functional and anatomical research it became clear that a common laminar organization of specific subsets of cells is found throughout the neocortex. This fundamental organizational principle is referred to as the canonical neocortical circuit (Douglas and Martin, 2004; Harris and Shepherd, 2015). Neocortical excitatory cells can be subdivided into three classes that are all heavily interconnected within their own class: (1) intratelencephalic (IT) cells found in L2-6 that primarily project within the telencephalon, (2) pyramidal tract (PT) neurons in L5B that project to subcerebral areas, and (3) corticothalamic (CT) cells in L6 projecting mainly to the ipsilateral thalamus. Local cortical processing can be summarized in three subsequent stages: (1) Primary thalamocortical (TC) input mainly targets L4; L4 IT cells project broadly to L2/3 and L5/6. (2) IT cells in L2/3 and L5/6 integrate input from L4, thalamus and cortex; interareal (hierarchical) processing is mainly implemented at this level. (3) PT neurons in L5 integrate input from local IT cells and TC cells and project to subcerebral target areas. L6 CT cells receive cortical input and project back to the thalamus, closing thalamocortical loops and perhaps modulating the gain of thalamic input. In general, inhibitory interneurons can tune the local circuit depending on behavioral context (Douglas and Martin, 2004; Harris and Shepherd, 2015). Distinct inhibitory interneuron classes are organized in specific circuits as well (Kepecs and Fishell, 2014; Pfeffer et al., 2013).

The homology of the neocortical microcircuit throughout the neocortex is a fundamental organizational principle that bridges the gap between the single-cell level and large networks. An important implication of this homology is that the operations performed are to some degree similar as well. The circuit motifs described in section 6.1, including their functional properties, could be implemented within the canonical neocortical circuit, making these theories mutually plausible. In addition, the microcircuits that underlie oscillatory patterns in central pattern generators, which generate motor and respiratory activity in the brain stem and spinal cord, the neocortical microcircuit, and the hippocampal microcircuit are to some degree very similar (Grillner et al., 2005b). This indicates that the brain uses similar structures as building blocks for a diverse set of functions. The organization of the neocortical microcircuit is also in concordance with the constraints of predictive coding (Bastos et al., 2012), which will be further discussed in section 7.


6.4 Canonical operations

Neurons and circuits throughout the brain can perform a variety of operations. Traditionally, specific operations are ascribed to specific circuits that serve specific goals. Understanding the commonalities between operations and categorizing them into canonical operations might lead to a more unifying account of brain function. In general, there is consensus that a small set of general operations is performed throughout the brain, although explicit literature on the topic is sparse. Kouh and Poggio (2008) describe a set of canonical visual cortical operations and propose a microcircuit model that can perform them, although their paper is principally computational. Carandini and Heeger (2012) argue that divisive normalization is a canonical operation used for many purposes in the brain. Divisive normalization is the process in which the responses of neurons are divided by the summed activity of a group of neurons. In addition to divisive normalization, van Atteveldt et al. (2014) propose that phase resetting is also part of the brain's canonical operation tool kit. Excitability of cells varies as a function of ongoing oscillations; by resetting the phase of oscillations, neurons can influence upcoming processing. In conclusion, it seems that at least some operations are used throughout the brain for multiple purposes. Understanding the full arsenal of operations the brain uses, and finding commonalities between them, might lead to more insight into overall brain function.
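Divisive normalization can be sketched numerically. The form below follows a commonly used expression, R_i = d_i^n / (σ^n + Σ_j d_j^n); the exponent n, semi-saturation constant σ, and the drive values are illustrative choices, not fitted parameters:

```python
# A minimal numerical sketch of divisive normalization: each unit's driven
# response is divided by a pool of summed activity plus a saturation constant.
def divisive_normalization(drives, sigma=1.0, n=2.0):
    pool = sigma**n + sum(d**n for d in drives)
    return [d**n / pool for d in drives]

responses = divisive_normalization([4.0, 2.0, 2.0])
print(responses)  # → [0.64, 0.16, 0.16]
```

The strongest input dominates, yet every response stays bounded below 1, and scaling all drives together changes the relative responses far less than the raw drives themselves change; this kind of contrast-invariant gain control is what makes the operation useful for so many purposes.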

7 Generative models of perception and the free-energy principle

"The free-energy principle is a simple postulate with complicated implications. It says that any adaptive change in the brain will minimize free-energy. (...) If one looks at the brain as implementing this scheme (minimising a variational bound on disorder), nearly every aspect of its anatomy and physiology starts to make sense." (Friston (2009); pg. 293)

Traditionally, the brain is thought of as a stimulus-driven machine, and perception as a sheer representation of the information entering the senses. In contrast, Kant (1724-1804) already insisted on the spontaneity of cognition: perception is the result of spontaneous and synthesizing self-activity of the mind, as opposed to mere receptivity of the sensory input (Fazelpour and Thompson, 2015; Kant and Guyer, 1998). The more modern concept of autopoiesis, the self-organizing, autonomic organization of a system that maintains the organization of the system itself, comprises a similar view (Rudrauf et al., 2003; Varela et al., 1974). It is now commonly accepted that the brain actively generates a probabilistic model of the world, which is updated based on sensory information. I refer to theories that share this view as generative theories of perception. Two popular concepts are the Bayesian coding hypothesis (Knill and Pouget, 2004) and predictive coding (Rao and Ballard, 1999). I will describe predictive coding in the broader framework of hierarchical generative theories. The free-energy principle incorporates these theories and can even explain other important processes such as attention, action, and memory.

7.1 Bayesian coding hypothesis

The Bayesian coding hypothesis is mainly based on psychometric evidence showing that human perception follows Bayes' theorem. The idea is that the brain does not discretely or directly represent the sensory information in its neural code (although this might seem intuitive), but instead encodes a posterior probability of the causes of the incoming sensory information. According to the Bayesian coding hypothesis, perception is the construction of this posterior probability based on sensory input and a generative model of the world. This generative model consists of a likelihood model for the probability of sensations given specific causes, and a prior, the a priori probability density of those causes. Using Bayes' rule, the likelihood model can be inverted to construct the posterior probability (Knill and Pouget, 2004).
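For a Gaussian prior and likelihood this inversion has a closed form: the posterior precision is the sum of the prior and sensory precisions, and the posterior mean is their precision-weighted average. A minimal sketch with toy numbers (the function name and values are illustrative):

```python
# Combine a Gaussian prior over a cause with a noisy Gaussian observation.
def gaussian_posterior(prior_mu, prior_var, obs, obs_var):
    prior_prec, obs_prec = 1.0 / prior_var, 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)               # precisions add
    post_mu = post_var * (prior_prec * prior_mu + obs_prec * obs)
    return post_mu, post_var

# Vague prior about a stimulus location vs. a more precise sensory observation:
mu, var = gaussian_posterior(prior_mu=0.0, prior_var=4.0, obs=2.0, obs_var=1.0)
print(mu, var)  # posterior mean is pulled toward the more precise observation
```

Because the observation is four times more precise than the prior, the posterior mean (1.6) lies much closer to the observation (2.0) than to the prior mean (0.0), mirroring the psychometric cue-weighting results the hypothesis rests on.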


7.2 Hierarchical generative theories

A vast number of theories are dedicated to hierarchical implementations of the generative nature of the brain. An influential hierarchical generative theory is Adaptive Resonance Theory (ART) (Grossberg, 1976, 1999, 2013). ART states that top-down expectations are continuously compared with bottom-up sensory input. In the case of a match, the bottom-up activity is amplified as a result of resonant synchrony, leading to important learning processes. A recently very popular implementation of the Bayesian coding hypothesis in a hierarchical setting is predictive coding (Rao and Ballard, 1999; Clark, 2013). In this view, top-down feedback (FB) connections carry predictions about lower-level neuronal activities. Feedforward (FF) bottom-up connections carry the difference between the predictions and the actual neuronal activity, called prediction errors. These prediction errors are thought to be encoded by specialized prediction error units. Because the interplay between predictions and prediction errors takes place at every level, a self-organized hierarchy emerges with increasingly complex predictions. The strong advantage of hierarchical generative theories is that their implementation in the brain is very straightforward, and convincing empirical evidence is accumulating (see for example Makino and Komiyama (2015)). The hierarchical organizing principles of neocortex discussed in section 5.1 are perfectly in line with hierarchical generative models of the brain. In addition, FF and FB streams are relatively segregated due to specific laminar targeting of projections (Markov et al., 2014). Furthermore, the anatomical constraints of the canonical neocortical microcircuit (section 6.3) complement the connectivity implied by predictive coding (Bastos et al., 2012).
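The core interplay of predictions and prediction errors can be caricatured in a few lines: a single top-down prediction is repeatedly revised in proportion to the feedforward error until that error is explained away. This is a didactic sketch, not the Rao and Ballard (1999) model itself; the learning rate, step count, and input values are arbitrary:

```python
# Minimal predictive-coding loop for one prediction unit and one input.
def settle(prediction, sensory_input, lr=0.2, steps=50):
    for _ in range(steps):
        error = sensory_input - prediction   # carried by feedforward units
        prediction += lr * error             # feedback revises the prediction
    return prediction, sensory_input - prediction

pred, residual = settle(prediction=0.0, sensory_input=1.0)
print(round(pred, 3), round(residual, 3))
```

The prediction converges on the input and the residual error shrinks geometrically toward zero; in the full hierarchical scheme the same dynamic runs at every level simultaneously, with each level's settled activity serving as the "input" that the level above must predict.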

As an addition to traditional hierarchical generative models, Engel et al. (2001) proposed that intrinsically synchronous activity between groups of neurons is crucial for the selection of input. Correspondingly, Friston et al. (2015) argue that prediction errors are selected by mechanisms that boost the oscillatory coherence between the sending and receiving units, thereby increasing the gain. Furthermore, they propose that prediction errors are carried by fast gamma oscillations, whereas top-down predictions use slower alpha or beta oscillations. Bastos et al. (2015) recently proposed a similar mechanism for communication through coherence (CTC). CTC theory proposes that communication between cellular ensembles is facilitated by coherence in oscillation between the ensembles (Fries, 2005). Bastos et al. (2015) proposed that interareal CTC is separated both spectrally and anatomically, similar to Friston et al. (2015). In conclusion, hierarchical generative theories provide a unifying account of learning, attention, and perception, and are in line with important organizational principles of the brain, such as cortical hierarchy and the canonical microcircuit.

7.3 Free-energy

Perhaps one of the most unifying global brain theories is the free-energy principle, first proposed by Friston et al. (2006) and later extended by Friston (2009, 2010). The free-energy principle is a rather abstract yet straightforward mathematical formulation from which many interesting implications emerge. In order to live in an ever-changing environment, a biological agent must maintain its organization, thereby heavily limiting the states it can be in. An example Friston (2009) uses is that of a fish, which needs to be in water to live. It thus limits its possible states to those in which it is in water. The probability of a small subset of fish-states is thus much higher than that of the large majority of states, which have very low probability; in other words, there is low entropy. A fish out of water is thus in an unexpected, surprising state. Entropy is the long-term average of surprise. Minimization of surprise in the short term thus leads to low entropy in the long run; in other words, the biological agent can keep its states within physiological boundaries. Because surprise itself is unknown to the biological agent (it would have to know all possible states and their probabilities), it must construct an equivalent: free-energy. Free energy is an upper bound on surprise, with the advantage that it can be known by the biological system. An agent can construct free energy by combining its sensory states and an internal probabilistic representation of the causes of specific sensory states, called the recognition density. Free energy is the difference between surprise about the joint occurrence of sensory states and their causes, and the recognition density encoded by the agent's internal states. By minimizing free energy, the agent implicitly also minimizes surprise, thereby maintaining homoeostasis. But how can free-energy be minimized?
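The construction just described can be summarized in a standard decomposition (notation generic: s denotes sensory states, ϑ their hidden causes, and q the recognition density):

```latex
\begin{aligned}
F &= \mathbb{E}_{q(\vartheta)}\big[\ln q(\vartheta) - \ln p(s,\vartheta)\big] \\
  &= D_{\mathrm{KL}}\big[q(\vartheta)\,\big\|\,p(\vartheta \mid s)\big] \;-\; \ln p(s).
\end{aligned}
```

Because the Kullback-Leibler divergence is non-negative, F ≥ −ln p(s): free energy upper-bounds surprise, and it equals surprise exactly when the recognition density matches the true posterior over causes.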

7.4 How can free-energy be minimized?

To minimize free energy, an agent can do two things: (1) change the internal recognition density, or (2) change its sensory input by acting on the environment. For the first option we must consider that the recognition density is based on a generative model of the world. The recognition density is continuously compared with the conditional or posterior density of the actual sensory information and its most likely causes. A small divergence between the two results in minimal free-energy. Thus, by adapting the recognition density so that it accurately predicts sensory information, free-energy can be suppressed. Formulated like that, the free-energy principle shows remarkable resemblance to predictive coding: minimizing free-energy just becomes explaining away prediction errors by changing predictions. The second way of minimizing free-energy is by acting on the environment. Because of the situatedness of an agent in the environment, it can change its sensory states by acting. According to the free-energy principle it should act in such a way that the consequent sensory states lead to minimal free-energy. In addition to action and perception, the free-energy principle can explain a whole range of processes, and many theories can be reformulated using it, such as attention (reframed as precision or inverse variance, see next section), memory (causal regularities of the recognition density), and value learning (value maximization is the same as free-energy minimization).

7.5 Neuronal implementation of the free-energy principle

The brain has to encode a model for the causes of its sensory states, the recognition density. This can take the form of neuronal dynamics on a fast time-scale for environmental states, and of connection strengths for causal regularities in the environment. In addition, the brain needs to represent the uncertainty, precision or inverse variance of states. This could correspond to processes to which we generally ascribe an attentional role, such as synaptic gain, neuromodulatory influences, and oscillatory coherence. Because applying the free-energy principle to updating the recognition density leads to predictive coding, a similar self-organizing hierarchical organization emerges. The oscillatory dynamics associated with generative hierarchical models can also be implemented within the free-energy framework.

8 Integrating unifying theories

The theories outlined in sections 3-7 are only marginally integrated in the literature (Friston, 2010; Gordon, 2003), but their integration might prove very important for understanding the brain. Some pointers for the integration of unifying neuroscientific theories will be given in this section.

8.1 Relationships between theories

Stating the relations between the unifying theories discussed here might contribute to a more thorough understanding of the brain. Most similarities or relatedness between theories were already discussed in sections 3-7, so the full extent of all relations will not be discussed here. Instead, an overview will be given, so as to provide a global picture of the relationships between theories. To create an overview of the links between theories, a network representation is shown in fig. 1. The nodes represent the different theories, whereas the connections represent the relationships between theories. An important notion is that the overall relatedness between theories (the average density of the graph) is quite low; most theories are related to 3, 4 or 5 other theories. There are no theories that are related to (almost) all other theories, which indicates that there is not, and probably will not be, a Grand Unifying Theory (GUT) for brain function. Apparently,


Figure 1: Network of relationships between unifying theories. The relationships between unifying theories discussed in this thesis are represented in a network format. Nodes represent theories, connections represent relationships between theories. The nodes included in the yellow area indicate theories to which the segregation and integration principle can be applied. Blue nodes represent generalising theories, red nodes represent theories that are both generalising and scale-integrating.

the brain is such a complicated system that several theories are needed to explain all of its mechanisms and functions. Notably, almost all theories discussed here are mutually plausible: only the high-density counterstream architecture and small-world networks are opposing theories. An integrated understanding of brain function could thus conveniently encompass all theories discussed here.

A common principle underlying most of the unifying structural theories is that of segregation and integration (section 5.6). In short, an optimal balance between segregation of information into different streams or clusters and integration of information through, for example, convergence leads to complex functional properties. The concept of hierarchy (section 5.1), small-world network theory (section 5.2), the concept of hubs and rich clubs (section 5.4), high-density counterstream architecture theory (section 5.3), large-scale dynamic network theory (section 5.5), and the concept of circuit motifs (section 6.1) can all be understood in terms of segregation and integration. This does not mean that these theories are redundant and that the only principle needed to understand brain structure is that of segregation and integration. The segregation and integration principle provides a global understanding of the laws of brain structure, but the different theories provide more extended accounts of how this principle translates to brain structure. Additionally, the segregation and integration principle can be understood


Figure 2: Classification plot of unifying theories. Position on the x-axis represents the degree of abstractness of the theory: abstract principles are positioned on the left, whereas detailed descriptive theories are positioned on the right. Position on the y-axis symbolizes the extent to which a theory is more structural (bottom) or more functional (top).

in terms of self-organized criticality (section 4.2), which reduces structural organization even further to a general principle in the brain.

8.2 Generalizing and scale-integrating theories

As discussed in section 2, unifying neuroscientific theories can solve the fragmentation problem by either generalizing different mechanisms or by integrating different scales. All unifying theories discussed here are in essence generalizing: they generalize findings in different areas or regarding different processes. A smaller proportion is also scale-linking. The following theories provide principles or laws that can be applied at different scales: the log-dynamic brain (section 4.1), self-organized criticality (section 4.2), small-world network theory (section 5.2), the concept of hubs and rich clubs (section 5.4), the segregation and integration principle (section 5.6), and the free-energy principle (section 7.3). The dynamic circuit motif hypothesis (section 6.2) and predictive coding (section 7.2) integrate different scales by explaining how structural circuits lead to functional properties. Theories in fig. 1 and fig. 2 are color-coded according to whether they are only generalizing (blue) or also scale-integrating (red).


8.3 Structural versus functional explanations

A gradual classification can be made between theories that explain brain architecture at a more structural, organizational level (i.e. how is the brain built?) on one side, and theories that explain the brain in functional terms (i.e. how does the brain work?) on the other side. There is a continuum between theories that are in essence structural, such as the concept of hierarchy (section 5.1), and theories that almost exclusively provide functional explanations, such as the Bayesian coding hypothesis (section 7.1). Some theories, such as the log-dynamic brain hypothesis (section 4.1), link structural with functional explanations. The degree to which a theory is more structural or functional is represented in fig. 2 for all discussed theories. Arguably, understanding the brain requires understanding structure-function relationships. Therefore, scale-integrating theories that also integrate structure with function are crucial for understanding the brain. These include the dynamic circuit motif hypothesis (section 6.2), predictive coding (section 7.2), large-scale dynamic network theory (section 5.5), and the free-energy principle (section 7.3).

8.4 Abstract principles versus descriptive theories

In addition to the distinction between more structural and more functional theories, the discussed unifying theories can be classified according to their degree of abstractness. Some of the discussed theories, such as self-organised criticality (section 4.2), come down to abstract principles, whereas other theories provide very detailed descriptive explanations, such as the dynamic circuit motif hypothesis (section 6.2). A classification based on the degree of abstractness is presented in fig. 2, together with the distinction between structural and functional explanation. An advantage of abstract principles is that they are often straightforward and applicable to a range of different processes, but they tend to be vulnerable to oversimplification and overgeneralisation. In contrast, descriptive theories are generally complex and confined to specific processes, but provide more satisfying, precise explanations. To come back to the analogy of the cardiovascular system: there is an influential principle, the physiological principle of minimum work, that explains the vascular system in terms of minimizing cost or work (Murray, 1926). This principle is very useful in explaining many facets of the vascular tree, but it only serves the comprehension of the cardiovascular system in combination with more descriptive theories that provide a more detailed explanation. Correspondingly, understanding the brain requires a combination of abstract principles and detailed descriptive theories.

9 Concluding remarks

There is a fragmentation problem in neuroscience: information is scattered across experimental fields. To gain a comprehensive understanding of how the brain works it is crucial to integrate this fragmented information. The unifying theories discussed in this thesis accomplish this either by generalising empirical findings into general theories or laws that are applicable to many processes in the brain, or by integrating information from different scales of the brain. Integrating the unifying theories discussed here is a daunting task. Some have proposed that a Grand Unifying Theory (GUT) can account for all brain function (Gordon, 2003; Clark, 2013; Friston, 2010). In my view, and that of others (Anderson and Chemero, 2013; Dupré, 2002), it is clear that a GUT cannot be formulated for such a complex organ as the brain. There might be universal principles that can be applied to all aspects of brain function, but they will always draw on more descriptive theories for a more detailed explanation. In my view, this is exactly what will eventually lead to a comprehensive understanding of the brain. Different facets of brain function ask for different theories and principles, which complement each other. Understanding the brain would require a hierarchy of theories. At the top, abstract principles such as self-organized criticality and the free-energy principle provide global laws for all brain function. As one moves down the hierarchy, more specific, descriptive theories are recruited to give more mechanistic explanations. At the lowest levels, theories lose


their generalisability and focus on specific aspects of the brain. Understanding the relationships between different unifying theories is crucial for understanding the brain. Some pointers for these relations were given in this thesis, but a crucial role in linking theories is reserved for empirical neuroscientific research. By testing hypotheses drawn from integrated unifying theories, empirical research can contribute to linking theories together. A tight interplay between empirical research and the advancement of unifying theories will eventually lead to a comprehensive understanding of how the brain works.

Acknowledgements

I want to thank Conrado Bosman for his enthusiasm in the interesting discussions we had and the extensive feedback he provided for this thesis. I want to thank Umberto Olcese for evaluating this thesis as co-assessor.

References

Adolphs, R. (2015). The unsolved problems of neuroscience. Trends in Cognitive Sciences 19, 173–175.

Alexander, G. E. and Crutcher, M. D. (1990). Functional architecture of basal ganglia circuits: neural substrates of parallel processing. Trends in Neurosciences 13, 266–271.

Anderson, M. L. (2003). Embodied Cognition: A field guide. Artificial Intelligence 149, 91–130.

Anderson, M. L. (2010). Neural reuse: A fundamental organizational prin-ciple of the brain. Behavioral and Brain Sciences 33, 245–266. Anderson, M. L. and Chemero, T. (2013). The problem with brain GUTs:

Conflation of different senses of “prediction” threatens metaphysical disaster. Behavioral and Brain Sciences 36, 204–205.

Averbeck, B. B., Latham, P. E. and Pouget, A. (2006). Neural correlations, population coding and computation. Nature Reviews Neuroscience 7, 358–366.

Bak, P. and Paczuski, M. (1995). Complexity, contingency, and criticality. Proceedings of the National Academy of Sciences 92, 6689–6696.

Barlow, H. B. (1972). Single units and sensation: a neuron doctrine for perceptual psychology? Perception 1, 371–394.

Bassett, D. S. and Bullmore, E. (2006). Small-World Brain Networks. The Neuroscientist 12, 512–523.

Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P. and Friston, K. J. (2012). Canonical Microcircuits for Predictive Coding. Neuron 76, 695–711.

Bastos, A. M., Vezoli, J. and Fries, P. (2015). Communication through co-herence with inter-areal delays. Current Opinion in Neurobiology 31, 173–180.

Beggs, J. M. and Timme, N. (2012). Being Critical of Criticality in the Brain. Frontiers in Physiology 3.

Behrens, T. E. J., Fox, P., Laird, A. and Smith, S. M. (2013). What is the most interesting part of the brain? Trends in Cognitive Sciences 17, 2–4.

Bressler, S. L. and Menon, V. (2010). Large-scale brain networks in cognition: emerging methods and principles. Trends in Cognitive Sciences 14, 277–290.

Bullmore, E. and Sporns, O. (2009). Complex brain networks: graph the-oretical analysis of structural and functional systems. Nature Reviews Neuroscience 10, 186–198.

Burgess, N., Maguire, E. A. and O’Keefe, J. (2002). The human hippocampus and spatial and episodic memory. Neuron 35, 625–641.

Buzsáki, G. (1989). Two-stage model of memory trace formation: a role for “noisy” brain states. Neuroscience 31, 551–570.

Buzsáki, G. (2006). Rhythms of the Brain. Oxford University Press.

Buzsáki, G. and Mizuseki, K. (2014). The log-dynamic brain: how skewed distributions affect network operations. Nature Reviews Neuroscience 15, 264–278.

Carandini, M. and Heeger, D. J. (2012). Normalization as a canonical neural computation. Nature Reviews Neuroscience 13, 51–62.

Chialvo, D. R. (2010). Emergent complex neural dynamics. Nature Physics 6, 744–750.

Churchland, P. M. (2013). Matter and Consciousness. MIT Press.

Cipra, B. A. (1987). An Introduction to the Ising Model. The American Mathematical Monthly 94, 937–959.

Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences 36, 181–204.

Damasio, A. (2008). Descartes’ Error: Emotion, Reason and the Human Brain. Random House.

Douglas, R. J. and Martin, K. A. (2004). Neuronal Circuits of the Neocortex. Annual Review of Neuroscience 27, 419–451.

Dupré, J. (2002). The Lure of the Simplistic. Philosophy of Science 69, S284–S293.

Engel, A. K., Fries, P. and Singer, W. (2001). Dynamic predictions: Oscillations and synchrony in top–down processing. Nature Reviews Neuroscience 2, 704–716.

Fazelpour, S. and Thompson, E. (2015). The Kantian brain: brain dynamics from a neurophenomenological perspective. Current Opinion in Neurobiology 31, 223–229.

Felleman, D. J. and Essen, D. C. V. (1991). Distributed Hierarchical Processing in the Primate Cerebral Cortex. Cerebral Cortex 1, 1–47.

Fries, P. (2005). A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in Cognitive Sciences 9, 474–480.

Friston, K. (2009). The free-energy principle: a rough guide to the brain? Trends in Cognitive Sciences 13, 293–301.


Friston, K. (2010). The free-energy principle: a unified brain theory? Na-ture Reviews Neuroscience 11, 127–138.

Friston, K., Kilner, J. and Harrison, L. (2006). A free energy principle for the brain. Journal of Physiology-Paris 100, 70–87.

Friston, K. J., Bastos, A. M., Pinotsis, D. and Litvak, V. (2015). LFP and oscillations—what do they tell us? Current Opinion in Neurobiology 31, 1–6.

Fuster, J. M. (2000). The Module: Crisis of a Paradigm. Neuron 26, 51–53.

Gordon, E. (2003). Integrative Neuroscience. Neuropsychopharmacology 28, S2–S8.

Grillner, S., Kozlov, A. and Kotaleski, J. H. (2005a). Integrative neuroscience: linking levels of analyses. Current Opinion in Neurobiology 15, 614–621.

Grillner, S., Markram, H., De Schutter, E., Silberberg, G. and LeBeau, F. E. N. (2005b). Microcircuits in action – from CPGs to neocortex. Trends in Neurosciences 28, 525–533.

Grossberg, S. (1976). Adaptive pattern classification and universal recoding: I. Parallel development and coding of neural feature detectors. Biological Cybernetics 23, 121–134.

Grossberg, S. (1999). The Link between Brain Learning, Attention, and Consciousness. Consciousness and Cognition 8, 1–44.

Grossberg, S. (2013). Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world. Neural Networks 37, 1–47.

Harris, K. D. and Shepherd, G. M. G. (2015). The neocortical circuit: themes and variations. Nature Neuroscience 18, 170–181.

Harvey, C. D., Coen, P. and Tank, D. W. (2012). Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature 484, 62–68.

Hebb, D. O. (1949). The organization of behavior: A neuropsychological theory. Psychology Press.

Hochstein, S. and Ahissar, M. (2002). View from the Top: Hierarchies and Reverse Hierarchies in the Visual System. Neuron 36, 791–804.

Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences 79, 2554–2558.

Hubel, D. H. and Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of Physiology 160, 106–154.

Johnson, N. F. (2007). Two’s company, three is complexity: A simple guide to the science of all sciences. Oneworld Pubns Ltd.

Kandel, E. R., Markram, H., Matthews, P. M., Yuste, R. and Koch, C. (2013). Neuroscience thinks big (and collaboratively). Nature Reviews Neuroscience 14, 659–664.

Kandel, E. R., Schwartz, J. H., Jessell, T. M. and others (2000). Principles of Neural Science, vol. 4. McGraw-Hill, New York.

Kant, I. and Guyer, P. (1998). Critique of Pure Reason. Cambridge Univer-sity Press.

Kepecs, A. and Fishell, G. (2014). Interneuron cell types are fit to function. Nature 505, 318–326.

Knill, D. C. and Pouget, A. (2004). The Bayesian brain: the role of uncertainty in neural coding and computation. Trends in Neurosciences 27, 712–719.

Kopell, N. J., Gritton, H. J., Whittington, M. A. and Kramer, M. A. (2014). Beyond the Connectome: The Dynome. Neuron 83, 1319–1328.

Kouh, M. and Poggio, T. (2008). A canonical neural circuit for cortical nonlinear operations. Neural Computation 20, 1427–1451.

Laughlin, R. B. (2014). Physics, Emergence, and the Connectome. Neuron 83, 1253–1255.

Makino, H. and Komiyama, T. (2015). Learning enhances the relative impact of top-down processing in the visual cortex. Nature Neuroscience 18, 1116–1122.

Markov, N. T., Ercsey-Ravasz, M., Essen, D. C. V., Knoblauch, K., Toroczkai, Z. and Kennedy, H. (2013a). Cortical High-Density Counterstream Architectures. Science 342, 1238406.

Markov, N. T., Ercsey-Ravasz, M., Lamy, C., Gomes, A. R. R., Magrou, L., Misery, P., Giroud, P., Barone, P., Dehay, C., Toroczkai, Z., Knoblauch, K., Essen, D. C. V. and Kennedy, H. (2013b). The role of long-range connections on the specificity of the macaque interareal cortical network. Proceedings of the National Academy of Sciences 110, 5187–5192.

Markov, N. T., Vezoli, J., Chameau, P., Falchier, A., Quilodran, R., Huissoud, C., Lamy, C., Misery, P., Giroud, P., Ullman, S., Barone, P., Dehay, C., Knoblauch, K. and Kennedy, H. (2014). Anatomy of hierarchy: Feedforward and feedback pathways in macaque visual cortex. Journal of Comparative Neurology 522, 225–259.

Milgram, S. (1967). The small world problem. Psychology Today 2, 60–67.

Milo, R., Shen-Orr, S., Itzkovitz, S., Kashtan, N., Chklovskii, D. and Alon, U. (2002). Network Motifs: Simple Building Blocks of Complex Networks. Science 298, 824–827.

Murray, C. D. (1926). The physiological principle of minimum work: I. The vascular system and the cost of blood volume. Proceedings of the National Academy of Sciences of the United States of America 12, 207.

O’Keefe, J. and Recce, M. L. (1993). Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus 3, 317–330.

Pessoa, L. (2014). Understanding brain networks and brain organization. Physics of Life Reviews 11, 400–435.

Pfeffer, C. K., Xue, M., He, M., Huang, Z. J. and Scanziani, M. (2013). Inhibition of inhibition in visual cortex: the logic of connections between molecularly distinct interneurons. Nature Neuroscience 16, 1068–1076.

Pouget, A., Dayan, P. and Zemel, R. (2000). Information processing with population codes. Nature Reviews Neuroscience 1, 125–132.

Prill, R. J., Iglesias, P. A. and Levchenko, A. (2005). Dynamic Properties of Network Motifs Contribute to Biological Network Organization. PLoS Biology 3.

Quiroga, R. Q. (2012). Concept cells: the building blocks of declarative memory functions. Nature Reviews Neuroscience 13, 587–597.

Ramachandran, V. S. (1990). Interactions between motion, depth, color and form: The utilitarian theory of perception. In Vision: Coding and Efficiency, pp. 346–360.
