
MSc Mathematics

Mathematical Physics

Master Thesis

Conley Index Theory in Neuroscience

by

Ábel Ságodi

10325948

April 2020

36 ECTS

Supervisor: Kathryn Hess

Examiner: Ale Jan Homburg


ABSTRACT

Dynamical systems have played a huge role in modelling neural dynamics. Conley index theory has been used to analyse the topological structure of invariant sets of diffeomorphisms and of smooth flows. A combinatorial version of the theory can help to describe and capture the global qualitative behaviour of dynamical systems. In this paper, we show applications of (combinatorial) Conley index theory to certain models used in theoretical neuroscience. We give an introduction to the relevant concepts of Conley index theory for flows. Further, a combinatorial approach to the Conley index is described.

First, we propose a novel, simple approach to determine the existence of connecting orbits between the invariant sets of small Competitive Threshold-Linear Networks. This relies on studying the vector field of the ODE and includes a result for computing the relative homology of the pair consisting of a cube and a subset of its boundary.

The second application is a demonstration of the usefulness of the continuation graph for analysing computational models in neuroscience, through which we can determine which parameter values result in which kinds of behaviour. We analyse the dynamics of two connected excitatory-inhibitory Wilson-Cowan networks over a range of possible parameter values. With this, we investigate which parameters result in the occurrence of seizures in this simple epilepsy model. In the Wilson-Cowan network, a seizure is defined as a state with a higher (average) activity than the baseline, i.e. the fixed point of the system. Further, we identify a parameter value for which the Wilson-Cowan network exhibits chaos, i.e. for which one can find a Poincaré section from whose Poincaré map there exists a surjection onto a subshift of finite type on two symbols.

Title: Conley Index Theory in Neuroscience
Author: Ábel Ságodi, abel.sagodi@student.uva.nl
Supervisor: Kathryn Hess
Examiner: Ale Jan Homburg
Second reader: Lenny Taelman
Date: August 20, 2020

Korteweg-de Vries Institute for Mathematics
Faculty of Science, University of Amsterdam


ACKNOWLEDGEMENTS

I would like to thank my supervisor Kathryn Hess for being both very supportive and helpful throughout my dissertation. I enjoyed working with her very much and am grateful for all her advice.

Further thanks go to Konstantin Mischaikow, Robert Vandervorst and Thomas Rot for their help on some topics in Conley index theory.

Moreover, I would like to thank Daniela Egas Santander, Nicole Sanderson and Skander Moualla for our discussions on CTLNs.


Contents

Notation and index of definitions vi

1 Introduction 1

1.1 Oscillations . . . 1

1.2 Neural dynamics . . . 2

1.3 Dynamical systems analysis in neuroscience . . . 3

1.4 A topological approach . . . 4

1.5 Goal . . . 4

1.6 Outline . . . 5

2 Conley Index Theory 6

2.1 Isolation . . . 7

2.2 The Conley index . . . 9

2.3 Continuation . . . 11

2.4 Ważewski principle . . . 12

2.5 Stability of invariant sets . . . 12

2.6 Attractor-repeller decomposition . . . 13

2.7 Morse decomposition . . . 15

2.8 Conley’s decomposition theorem . . . 17

2.9 Existence of connecting orbits . . . 20

2.10 Connection matrix . . . 22

2.11 Periodic orbits . . . 23

2.12 Symbolic dynamics and chaos . . . 25

3 Combinatorial Conley index theory 27

3.1 Discretisation of space . . . 27

3.1.1 Cubical Sets . . . 28

3.2 Multivalued maps . . . 31

3.3 Combinatorial multivalued maps . . . 32

3.4 Morse sets for combinatorial multivalued maps . . . 34

3.5 Invariance . . . 36

3.6 Index pairs and index maps . . . 38


3.8 Computability and convergence . . . 42

3.9 Continuation graphs . . . 43

4 Transients for Competitive Threshold-Linear Networks 47

4.1 Transient dynamics . . . 48

4.2 Fixed points . . . 49

4.3 A network with N = 2 neurons . . . 51

4.4 A network with N = 3 neurons . . . 58

4.4.1 No connections . . . 58

5 A continuation graph for a Wilson-Cowan network 67

5.1 Oscillations . . . 69

5.2 Synchronisation . . . 71

5.3 Chaos . . . 71

5.4 Control . . . 72

5.5 Epilepsy . . . 73

5.6 Analysis . . . 74

5.6.1 Methods . . . 75

5.7 Two oscillating populations . . . 76

5.7.1 Fixed points . . . 78

5.7.2 Periodic orbits . . . 79

5.7.3 Investigating chaos . . . 80

5.7.4 Connecting orbits . . . 82

5.7.5 Continuation graph . . . 82

6 Discussion 92

7 Conclusion 95

8 Popular summary 98

References 101

Appendices 116

A Explaining oscillations 116


B Algebraic Topology 119

B.1 Homology . . . 120

B.1.1 Singular homology . . . 121

B.1.2 Cubical homology . . . 121

B.2 Homology of maps . . . 123

B.3 Homotopy . . . 126

B.4 Reduced and relative homology . . . 127

B.5 Determining the topology of index pairs . . . 128

C Dynamical Systems 131

C.1 Discrete dynamical systems . . . 131

C.2 Symbolic dynamics . . . 132

C.3 System of ODEs . . . 132

C.4 Stability . . . 133

C.5 Nullclines . . . 135

C.6 Poincaré sections and maps . . . 135

C.7 Homo- and heteroclinic orbits . . . 137

C.8 Lyapunov function . . . 137

C.9 Lyapunov exponents . . . 138

C.10 Chain recurrence . . . 139

C.11 Bifurcation theory . . . 140

D Conley index theory for discrete dynamical systems 142


Index of Notation

Abbreviations

CTLN Competitive threshold-linear network

HH Hodgkin-Huxley (model)

mv map Multivalued map

ODE Ordinary differential equation

TLN Threshold-linear network

usc upper semicontinuous, page 123

WC Wilson-Cowan (network or model)

Mathematical concepts

α Alpha limit set, page 13

cl(A) Closure of the topological space A

diam(G) Diameter of a grid G

K̂ Set of all elementary k-chains, page 122

Q̂ Elementary k-chain, page 121

int(A) Interior of topological space A

Inv Maximal invariant set, page 7

Kmax(X) The set of maximal faces in X, page 29

Q̊ Elementary cell, page 123

P(A) Power set for a set A

R Real numbers

R+ Non-negative real numbers

Z Integers

Z+ Non-negative integers

K Set of all elementary cubes, page 29

Kd The set of all elementary cubes in Rd, page 29

R(S, ϕ) Chain recurrent set of S under the flow ϕ, page 18


ω Omega limit set, page 13

∂(A) Boundary of topological space A

Σn Symbol space on n symbols, page 132

∨ Wedge sum

wrap(A) The cubical wrap of A ⊂ Rd, page 30

A\B Set-theoretic difference of the sets A and B

A ∩ B Set-theoretic intersection of the sets A and B

A ∪ B Set-theoretic union of the sets A and B

Ac Absolute complement of the set A

Br(p) Open ball of radius r > 0 centred at a point p

C(R, A; S) The set of connecting orbits from the repeller R to the attractor A in the invariant set S, page 14

C1 Continuously differentiable (functions or manifold)

Cdk The k-dimensional chains of Rd

CH∗(S) Homology Conley index of S

dH(A, B) The Hausdorff distance between the sets A and B

h(S) Homotopy Conley index of S

CG Continuation graph, page 46

Definitions

(ε, τ)-chain Definition 2.42 on page 18

k-chains Definition B.16 on page 122

Acyclic strongly connected components Definition 3.56 on page 41

Alpha limit set Definition 2.23 on page 13

Attractor for flows, Definition 2.25 on page 14

for mv maps, Definition 3.34 on page 34


Autonomous system Definition on page 132

Chain recurrent set Definition 2.45 on page 18

Chain selector Definition B.29 on page 124

Clutching function Definition 3.70 on page 46

Clutching graph Definition 3.70 on page 46

Cofibration Definition B.45 on page 127

Combinatorial enclosure Definition 3.29 on page 33

Combinatorial multivalued map Definition 3.22 on page 32

Complete image Remark 2.3 on page 6

Connecting orbit Definition 2.29 on page 14

Continuation class Definition 3.72 on page 46

Continuation graph Definition 3.73 on page 46

Contractible Definition B.46 on page 127

Cubical set Definition 3.8 on page 29

Cubical wrap Definition 3.14 on page 30

Dual repeller for flows, Definition 2.28 on page 14

for mv maps, Definition 3.34 on page 34

Elementary cube Definition 3.7 on page 28

Elementary interval Definition 3.6 on page 28

Equivalence of Morse decompositions, Definition 3.67 on page 44

of Morse graphs, Definition 3.71 on page 46

Evaluation map Definition 3.3 on page 27

Flow Definition 2.1 on page 6

Flow-ordering Definition 2.38 on page 17

Forward image Remark 2.3 on page 6

Full cubical set Definition 3.13 on page 30

Gradient-like Definition C.28 on page 137


Hessian matrix Definition C.10 on page 134

Heteroclinic orbit Definition C.27 on page 137

Homoclinic orbit Definition C.27 on page 137

Homological Conley index for flows, Equation 7 on page 10

for continuous maps, Definition D.14 on page 143

for mv maps, Equation ?? on page ??

Homology map Definition B.31 on page 124

Homotopic functions, Definition B.42 on page 126

spaces, Definition B.43 on page 126

Homotopy Conley index Definition 2.14 on page 9

Homotopy extension property Definition B.44 on page 126

Hyperbolic fixed point Definition C.11 on page 134

Index pair for flows, Definition 2.7 on page 8

for continuous maps, Definition D.5 on page 142

Index map for continuous maps, Definition D.6 on page 143

for mv maps, Definition 3.49 on page 39

Invariant set for flows, Definition 2.4 on page 7

for continuous maps, Definition D.1 on page 142

for mv maps, Definition 3.19 on page 31

for combinatorial mv maps, Definition 3.33

Maximal invariant set for flows, Definition 2.5 on page 7

for continuous maps, Definition D.2 on page 142

for mv maps, Definition 3.41 on page 36

Isolated invariant set Definition 2.6 on page 7

Isolating neighbourhood for flows, Definition 2.6 on page 7

for continuous maps, Definition D.3 on page 142

for combinatorial mv maps, Definition 3.45 on page 38

Jacobian matrix Definition C.10 on page 134


Local section Definition C.21 on page 136

Long exact sequence of a triple Defined in Proposition 2.54

Lyapunov function Definition C.28 on page 137

Lyapunov spectrum Definition C.31 on page 139

Manifold Definition C.18 on page 135

Maximal Lyapunov exponent Definition C.30 on page 138

Minimal mv map Definition 3.25 on page 33

Morse decomposition for flows, Definition 2.34 on page 16

for continuous maps, Definition D.4 on page 142

for mv maps, Definition 3.36 on page 35

Multivalued map Definition 3.16 on page 31

Nullcline Definition C.16 on page 135

Omega limit set Definition 2.23 on page 13

Orbit for flows, Definition 2.2 on page 6

for ODEs, Definition C.8 on page 133

for continuous maps, Definition C.2 on page 131

Ordinary differential equations Definition C.6 on page 132

Outer approximation Definition 3.26 on page 33

Poincaré map Definition C.26 on page 136

Poincaré section Definition C.24 on page 136

Pointed space Definition B.41 on page 126

Quotient space Definition B.4 on page 119

Repeller for flows, Definition 2.27 on page 14

for mv maps, Definition 3.34 on page 34

Scalar product Definition B.17 on page 122

Shift equivalence for group homomorphisms, Definition 3.52 on page 40

for continuous maps, Definition D.8 on page 143

for homotopy classes, Definition D.8 on page 143

for endomorphisms, Definition D.11 on page


Shift map Definition C.4 on page 132

Simple intersection property Definition 3.2 on page 27

Solution (of a differential equation) Definition C.7 on page 133

Stable and unstable manifold Definition C.13 on page 134

Strict partial order Definition 2.33 on page 15

Symbol space on n symbols Definition C.4 on page 132

Tangent space Definition C.19 on page 135

Topological conjugacy Definition 2.69 on page 25

Topological space Definition B.1 on page 119

Trajectory for ODEs, Definition C.8 on page 133

for mv maps, Definition 3.18 on page 31

for combinatorial mv maps, Definition 3.23 on page 32

Transversal Definition C.20 on page 135


1 Introduction

If an eternal traveller should journey in any direction, he would find after untold centuries that the same volumes are repeated in the same disorder - which, repeated, becomes order: the Order.

The Library of Babel, Jorge Luis Borges

Dynamical systems have played a major role in theoretical neuroscience since the contributions of Hodgkin and Huxley. The nonlinearity of neural activity and the complexity of neural networks make the analysis of such systems difficult. In the following sections we will argue why the nature of neural networks lends itself well to analysis through Conley index theory.

1.1 Oscillations

Oscillations can be found in single-cell dynamics and on the network scale, and are widespread in the biological world in general [10] and the brain specifically [20]. They are therefore also fundamental in computational models of neural dynamics. There is a considerable body of work in mathematical neuroscience investigating neural oscillators, and these studies predict a number of interesting single-neuron and network properties [183]. One of the first of these models, the Hodgkin–Huxley model, was developed to characterise the action potential of a squid axon. The Hodgkin–Huxley model describes how action potentials are initiated¹ and propagated by means of a set of differential equations that describe ion concentrations around the neural membrane. Different oscillatory varieties of these neuronal models have been determined, allowing for the classification of types of neuronal responses. The oscillatory dynamics of neuronal spiking as identified in the Hodgkin–Huxley model closely agree with empirical findings. In Appendix A, we present a short argument why the Hodgkin–Huxley model gives an explanation of oscillations rather than solely putting forward a phenomenological model. There, we also point to some possible useful contributions of Conley theory to discussions of explanation in neuroscience.

It is challenging to explain higher-level systems neuroscience findings in terms of low-level single-neuron biophysics. Viewing neurons as oscillators can partially bridge the divide between these two levels of understanding. Still, interacting oscillators can produce chaotic behaviour, which makes the analysis of networks of them difficult. In dynamical systems theory, oscillations are characterised as periodic orbits in phase space.

Neurons communicate with one another via synapses and affect the timing of spike trains in the post-synaptic neurons. Depending on the properties of the

¹ While the morphological structure of neurons is ignored. These models are generally


connection, such as the coupling strength, time delay and whether coupling is excitatory or inhibitory, the spike trains of the interacting neurons may become synchronised. Certain network structures promote oscillatory activity at specific frequencies. For example, neuronal activity generated by two populations of interconnected inhibitory and excitatory cells can show spontaneous oscillations that are described by the Wilson-Cowan model.

The role of recurrent, as opposed to feedforward, connectivity is to shape neural responses into meaningful patterns of activity. The precise relationship between the dynamics of individual neurons and the brain as a whole remains extremely complex. The question of how connectivity shapes dynamics is of particular interest in the study of local recurrent networks. Many approaches that try to tackle this problem make use of dynamical systems. The complexity of the problem, however, makes analysis (especially in the mathematical analytic sense) very difficult. We propose that Conley index theory, with the involvement of combinatorial techniques, can be very useful in supporting such analysis and making the investigation of large and complex systems possible.

1.2 Neural dynamics

This example of modelling spiking as attracting periodic orbits is part of a larger theory of neural behaviour described by the idea of attractors in general [109, 101]. Intuitively, attractors are the parts of phase space to which the dynamics converges over time. In Section 2 we will introduce the formal notion of an attractor.

Many attractor theories revolve around the idea of considering memory as attractors. Most studies of the mechanisms underlying learning and memory focus on changing synaptic efficacy (connection strength), and are based on the ideas laid out by Hebb [89]. However, network dynamics also depends on complex interactions among intrinsic membrane properties, synaptic strengths, and membrane-voltage time variation. Furthermore, neuronal activity itself modifies not only synaptic efficacy but also the intrinsic membrane properties of neurons. Every stored object in memory corresponds to a stable state of the memory system, resulting in a system with multistability. Multistability in a dynamical system means that there are multiple attractors which coexist separately in phase space, at the same value of the system’s parameters. In such a system, qualitative changes in dynamics can result from changes in the initial conditions. Any physical system whose dynamics in phase space is dominated by a substantial number of locally stable states to which it is attracted can therefore be regarded as a general content-addressable memory [2,92].

In a biological nervous system, recurrent loops involving two or more neurons are found quite often and are particularly prevalent in cortical regions important for memory, as they exhibit multistability [68, 187]. The occurrence of multistability has also been observed in the Hodgkin-Huxley neuron model whose recurrent inputs are delayed versions of their output spike trains [68]. However,


even in simple models, the effects of connectivity on neural activity are poorly understood [48]. Despite progress in modelling the dynamics of neural networks with linear or locally linear models, it remains unclear how a single network of neurons can produce the observed features. In particular, there are multiple clusters, or fixed points, observed in the data which cannot be characterised by a single linear model [146]. In Sections 4 and 5, we further describe multistability in neural networks.

Apart from the attractor states themselves, the transient paths, i.e. the paths towards and between the attractors, seem relevant in neural dynamics and computation. Large-scale neural recordings have established that the transformation of sensory stimuli into motor outputs relies on low-dimensional dynamics at the population level, while individual neurons exhibit complex selectivity [130]. In [56], it was argued that there are many situations in which the transient neural behaviour, while hopping between different attractor states, carries most of the computational and behavioural significance, rather than the attractor states that are eventually reached. In [160], it is remarked that the transient path from the initial condition to the attractor is an important phase in which the brain could exploit its remarkable repertoire of behaviours. In Section 4, we further describe transient dynamics in neural networks.

1.3 Dynamical systems analysis in neuroscience

Dynamical modelling of neural systems and brain functions has a history of success since the 1950s. These models, inspired by the Hodgkin–Huxley model, and their analyses are mostly based on traditional approaches and paradigms of nonlinear dynamics [99, 137].² Neural systems are difficult to analyse for several reasons. First of all, even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. Second, in contrast to traditional physical systems described by well-known basic principles, the principles governing the dynamics of neural systems are unknown. Third, the network architecture and connection strengths are usually not known in detail, and therefore the dynamical analysis must either be probabilistic or must include the analysis of a large range of possible parameter values. Finally, neural systems can exhibit similar dynamics despite having different architectures and levels of complexity. The next section will give a (biological) justification of why it is adequate to focus on the attractor dynamics of neural networks when analysing their behaviour.

Some “classical” dynamical systems approaches to neural networks include bifurcation analysis of a neural network model [14, 63, 161, 159] and mean field approximations [62]. However, the mean field approximation fails for small networks. Conley index theory, a topological tool for analysing dynamical systems, bypasses some of the intrinsic difficulties of bifurcation analysis.

² For some remarks on the justification of such mathematical models in neuroscience, see


1.4 A topological approach

As we will see in the next sections, Conley index theory can easily identify periodic orbits, multiple attractors, chaos and transients. As these are all important in neuroscientific models, the Conley index is suitable for application to theoretical neuroscience. Attractors are central to our theoretical understanding of global nonlinear dynamics as they form the basis for robust decompositions into gradient-like structures [33]. Both these approaches attempt to give a description of the behaviour of a dynamical system in terms of recurrent and gradient-like parts. The recurrent parts are a more general notion of attractor; these recurrent parts are invariant under the dynamics under consideration. The gradient-like parts are analogous to transient dynamics.

Conceptual models for most physical systems are based on a continuum, i.e. the values of the states of a system are assumed to be real numbers. At the same time, science is increasingly becoming data driven and thus based on finite information. This suggests the need for tools that seamlessly and systematically provide descriptions of continuous structures from finite data, and accounts for the rapid rise in the use of methods from topological data analysis [24, 80]. However, there are significant challenges associated with the sampling or generation of data versus the coverage necessary to draw the appropriate conclusions. Identifying the space is only one part of the challenge of understanding dynamics. We also need to capture the behaviour of the nonlinear map that generates the dynamics. Most of the data comes from dynamical processes that are traditionally modelled in terms of continua, i.e. differential equations, manifolds, differentiable maps. The process of collecting data is inherently a discretisation process. Combinatorial Conley index theory can describe attractor dynamics in discrete dynamical systems and in time series data [3, 5, 9].

On a more philosophical note, modern-day science is about collecting and processing data. Science moves more and more towards data-based representations of (natural) phenomena and away from analytical model-based representations. These developments make a computational Conley theory a valuable tool in applied dynamical systems theory. Besides, the continuation graph, which tracks the dynamics of a dynamical system over a range of parameter values, can give us insight into bifurcations.

1.5 Goal

The aim of this report is to show the usefulness of the Conley index and Morse decompositions in neuroscience. This will be demonstrated by means of the analysis of computational models of neural networks. First, the Conley index will be used to show the existence of connecting orbits between invariant sets in the Competitive Threshold-Linear Network (CTLN). These connecting orbits are relevant as they describe the process of recall in memory models. In CTLNs,


the relationship between the structure of the network and its fixed points is well-studied [44, 48, 45]. However, the transient dynamics of these models are still unexplored. The second application is the creation of a continuation graph for a Wilson-Cowan network with two excitatory-inhibitory pairs. This is then used to identify parameter values (weights) for which epileptic seizures occur in this computational epilepsy model. Such continuation graphs are especially useful in biology, where it can be difficult to examine the influence of parameters due to their large number.

1.6 Outline

This report is structured as follows. We begin in Section 2 with a brief introduction to Conley index theory. In Section 3, we discuss a combinatorial approach to the Conley index. The CTLN system with 2 and 3 neurons is analysed in Section 4, where the existence of transient orbits is shown analytically, while periodic orbits are identified with the aid of the computer. The WC model with two interacting excitatory-inhibitory pairs is analysed through combinatorial Conley index theory in Section 5. A continuation graph is created to identify transitions in parameter space to epileptic seizures in this simple model. The appendices can be consulted for definitions and background information: for algebraic topology see Appendix B, for dynamical systems see Appendix C, and for Conley index theory for discrete dynamical systems see Appendix D.


2 Conley Index Theory

The purpose of Conley index theory is to understand the structure of isolated invariant sets of a dynamical system [34, 33]. Conley index theory is a generalisation of Morse theory to the setting of not necessarily gradient or gradient-like flows³ on locally compact metric spaces [33]. In this theory, the concepts of a non-degenerate critical point and its Morse index are replaced by the more general concepts of an isolated invariant set and its Conley index. For a more comprehensive introduction to Conley index theory, see [176].

Conley index theory has proven useful for the analysis of various problems in dynamical systems theory. Applications of Conley index theory include the outer approximation of the dynamics of multiparameter systems [3, 5], determining a conjugacy or semiconjugacy between an attractor and a model system [50, 49, 51], the study of switching networks [77] and fast-slow systems [76, 78], and the detection of bifurcations [100].

This chapter will focus on flows⁴, which formalise the idea of the motion of particles in a fluid. These are elementary when one considers diffusion approximations of ion currents and molecules, for example when describing the dynamics of neural spiking. For this purpose, we will fix X as a locally compact metric space with metric µ. In this setting X is referred to as the phase space. Let us now define a flow.

Definition 2.1. For X a locally compact metric space with metric µ, a continuous map

ϕ : R × X → X (1)

is a flow if ϕ(0, x) = x and ϕ(t, ϕ(s, x)) = ϕ(t + s, x) for all s, t ∈ R and x ∈ X. If the time t ranges only over R+, then the dynamical system is called a semiflow.
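To make the definition concrete, here is a minimal numerical check (an illustration added here, not taken from the thesis): the solution map of the linear saddle system x' = x, y' = −y is a flow, and both axioms can be verified directly.

```python
import numpy as np

# Sketch (illustrative, not from the thesis): the solution map of the
# linear saddle system x' = x, y' = -y is the flow
#   phi(t, (x, y)) = (x e^t, y e^{-t}).
def phi(t, p):
    x, y = p
    return np.array([x * np.exp(t), y * np.exp(-t)])

x0 = np.array([0.3, -1.2])
s, t = 0.7, -1.4

# Flow axiom 1: phi(0, x) = x.
assert np.allclose(phi(0.0, x0), x0)

# Flow axiom 2 (group property): phi(t, phi(s, x)) = phi(t + s, x).
assert np.allclose(phi(t, phi(s, x0)), phi(t + s, x0))
```

The group property is what distinguishes a flow from an arbitrary family of maps: time-t and time-s evolution compose to time-(t+s) evolution.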

When we start at an initial point x0 ∈ X in a flow, we obtain a unique path: a curve in phase space, parametrised by time t, that passes through the point x0 at initial time t = 0. This is the orbit.

Definition 2.2. A (complete) orbit through x ∈ X is the image of the function t 7→ ϕ(t, x),

ϕ(R, x) := {ϕ(t, x) | t ∈ R},

and is denoted by γx. The forward orbit γx+ is defined by restricting time to R+.

Remark 2.3. For simplicity of notation, γx will be used both to denote the function γx : R → X through x and its image {γx(t) ∈ X | t ∈ R}, which is the trajectory. The complete image of a subset S ⊂ X under the flow ϕ is then

ϕ(R, S) := ⋃t∈R ϕ(t, S). (2)

³ See Definition C.28.

⁴ One can, however, consider the more general case of continuous maps, as presented in Appendix D. The general theory is similar to the Conley index theory for flows, but requires a more technical approach.


The forward image is defined similarly with time restricted to R+.

The fundamental objects of study in Conley index theory are invariant subsets of dynamical systems. Let us first look at what such invariant sets are. In this report the focus is on flows, so the corresponding definition of an invariant set for flows is as follows.

Definition 2.4. A set S ⊂ X is an invariant set for the flow ϕ if

ϕ(R, S) = S. (3)

A set S ⊂ X is forward invariant if

ϕ(t, S) ⊂ S for all t ∈ R+. (4)

Definition 2.5. The maximal invariant set in N ⊂ X under the flow ϕ is

Inv(N, ϕ) := {x ∈ N | ϕ(R, x) ⊂ N }.

If it is clear which flow ϕ is meant, it is dropped from the notation and we write Inv(N).

For simplicity, we will sometimes refer to the maximal invariant set of some subset N ⊂ X as its invariant set. Consider for instance the simplest example of an invariant set, that of a fixed point or equilibrium point, i.e. a point x ∈ X such that ϕ(R, x) = {x}. For many differential equations, proving the existence of fixed points either analytically or numerically is a nontrivial task. Degree theory is by now a standard tool for this type of problem. To successfully apply this theory requires a few essential steps. Direct computation of the degree of a map is often quite difficult. Therefore, the first step is to choose a continuous family of maps interpolating from the map of interest to a simpler map for which the fixed points are explicitly known. One then chooses a region containing one or more equilibria of the simple map for which the resulting degree is nonzero. The third step typically involves analysis; one must show that each map in the continuous family does not possess a fixed point on the boundary of the region. The application of the Conley index typically involves a process similar to the one taken in degree theory. The regions of interest are called isolating neighbourhoods. Henceforth, ϕ refers to a flow on a locally compact metric space X.
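The defining condition of Inv(N) can be probed numerically by discarding points whose forward or backward trajectory leaves N. The following sketch (an illustration with an assumed example system, not from the thesis) does this for x' = x(1 − x) on N = [0.5, 1.5], where Inv(N) consists of the single equilibrium x = 1.

```python
import numpy as np

# Sketch (illustrative, not from the thesis): approximate Inv(N) for the
# flow of x' = x(1 - x) on N = [0.5, 1.5].  A point lies in Inv(N) iff
# its orbit stays in N for all forward AND backward time; here only the
# equilibrium x = 1 qualifies.

def rk4_step(f, x, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def stays_in(f, x, lo, hi, h=0.01, T=20.0):
    """True if the trajectory from x remains in [lo, hi] up to time T."""
    for _ in range(int(T / h)):
        x = rk4_step(f, x, h)
        if not (lo <= x <= hi):
            return False
    return True

f = lambda x: x * (1.0 - x)   # forward-time vector field
fb = lambda x: -f(x)          # backward-time vector field

inv_approx = [x for x in np.linspace(0.5, 1.5, 101)
              if stays_in(f, x, 0.5, 1.5) and stays_in(fb, x, 0.5, 1.5)]
# Only the grid point at the equilibrium x = 1 survives both directions.
```

The finite integration time T only approximates the condition ϕ(R, x) ⊂ N; making such approximations rigorous is precisely what the combinatorial machinery of Section 3 is designed for.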

2.1 Isolation

In order to characterise invariant sets, we can study the topological properties of a certain neighbourhood around them: the isolating neighbourhood. This neighbourhood must properly contain the invariant set we want to study.

Definition 2.6. A compact set N ⊂ X is an isolating neighbourhood if

Inv(N) ⊂ int N,

where int N denotes the interior of N. S is an isolated invariant set if S = Inv(N) for some isolating neighbourhood N.

The most important property of an isolating neighbourhood is that it is robust with respect to perturbations; see Section 2.3. The following definition is essential for the definition of the Conley index.

Definition 2.7. Let S be an isolated invariant set. A pair of compact sets (N, L) with L ⊂ N is called an index pair for S if:

1. S = Inv(cl(N\L)), and N\L is a neighbourhood of S;

2. L is positively invariant in N; that is, if x ∈ L and ϕ([0, t], x) ⊂ N, then ϕ([0, t], x) ⊂ L;

3. L is an exit set for N; that is, if x ∈ N and there exists t1 > 0 such that ϕ(t1, x) ∉ N, then there exists t0 ∈ [0, t1] for which ϕ([0, t0], x) ⊂ N and ϕ(t0, x) ∈ L.

Isolating neighbourhoods and index pairs are very general objects and, as such, useful for computational purposes, since they are relatively easy to find. A much stricter notion is that of an isolating block.

Definition 2.8. An isolating block is a compact set B such that for all T > 0,

InvT(B, ϕ) ⊂ int B,

where

InvT(B, ϕ) := {x ∈ B | ϕ([−T, T], x) ⊆ B}.

Note that InvT is a weaker notion than Inv: Inv(Y, ϕ) ⊂ InvT(Y, ϕ). This leads to a stricter notion of isolation through the inclusion in the definition. Furthermore, if we define the subset of the isolating block

B− := {x ∈ B | ϕ([0, T], x) ⊄ B for all T > 0}, (5)

which is closed, we obtain the following results on the existence of an index pair.

Proposition 2.9. Isolating blocks are isolating neighbourhoods.

This is easy to show.

Theorem 2.10 (Existence of index pair). Given a smooth flow ϕ and an isolated invariant set S, there exists an isolating block B such that S = Inv(B, ϕ). Furthermore, (B, B−) is an index pair for S.


2.2 The Conley index

We can learn about the structure of an isolated invariant set by looking at how the flow behaves in its isolating neighbourhood. Even though the Conley index is an index for isolating neighbourhoods, we shall define it in terms of the isolated invariant set. If N and N′ are isolating neighbourhoods for the flow and Inv(N, ϕ) = Inv(N′, ϕ), then the Conley index of N is the same as the Conley index of N′ [33]. This implies that we can also consider the Conley index as an index of isolated invariant sets. This property gives great freedom in the choice of regions in phase space on which one will perform the analysis. To define the index, we will make use of the following definitions.

Definition 2.11. A pointed space (Y, y₀) is a topological space Y with a distinguished point y₀ ∈ Y.

Definition 2.12. A map f : X → Y is a quotient map if it is surjective, and a subset U of Y is open if and only if f⁻¹(U) is open.

Given a pair (N, L) of topological spaces with L ⊂ N, the pointed space (N/L, [L]) is defined as follows. First of all, as a set we have

N/L := (N\L) ∪ [L],

where [L] denotes the equivalence class of the points of L under the equivalence relation x ∼ y if and only if x = y or x, y ∈ L. Let q : N → N/L denote the quotient map. The topology on (N/L, [L]) is defined as follows: a set U ⊂ N/L is open if q⁻¹(U) is open in N. If L = ∅, then

(N/L, [L]) := (N ∪ {∗}, {∗}),

where {∗} denotes the equivalence class consisting of the empty set. For a pointed space (X, x₀), let [(X, x₀)] denote its pointed homotopy type.

Theorem 2.13. Let (N, L) and (N′, L′) be index pairs for an isolated invariant set S. Then

[(N/L, [L])] = [(N′/L′, [L′])],

i.e., the two pointed topological spaces (N/L, [L]) and (N′/L′, [L′]) are homotopy equivalent.

This theorem shows that the Conley index is well defined.

Definition 2.14. Let S be an isolated invariant set with index pair (N, L). The homotopy Conley index of S is

h(S) = h(S, ϕ) := [(N/L, [L])].    (6)

Observe from the definition that the homotopy Conley index is the homotopy type of a pointed topological space. Unfortunately, working with homotopy classes of spaces is extremely difficult. To get around this, it is useful to consider the homological Conley index defined by

CH_∗(S) := H_∗(N/L, [L]) ≅ H_∗(N, L),    (7)

the singular homology of the pair (N, L) (see Definitions B.14 and B.9). The isomorphism holds if the inclusion of L into N is a cofibration or (N, L) is a good pair [192], see Definition B.45. Since the homology of two homotopy equivalent spaces is the same, the homological Conley index is well defined. For a cohomological treatment of Conley index theory, see [176].

Remark 2.15. It is not true that for any index pair (N, L), H_∗(N/L, [L]) ≅ H_∗(N, L). However, it is always possible to find an index pair for which this is true [34]. The examples we consider in this report all satisfy this equality. These include only cases where the exit set is a subset of the boundary of the isolating neighbourhood and consists of a finite set of connected components.

The usefulness of the index depends on how much information about the structure of the dynamics of the invariant set can be deduced from knowledge of the Conley index. The index captures little information about the structure of the invariant set itself; rather, it describes its surroundings.

Let us mention one of the main properties of the index, which is fundamental to many of the applications of the theory.

Proposition 2.16 (Additivity). If S = ⊔_{j=1}^n S_j, the disjoint union of isolated invariant sets S_j, then

CH_k(S) ≅ ⊕_j CH_k(S_j).

Proof. Since the S_i are disjoint invariant sets, there exist disjoint isolating neighbourhoods N_i such that (N_i, L_i) is an index pair for S_i for i = 1, 2, . . . , n. Then, by the additivity of singular homology [59], we have that

CH_∗(S) = H_∗(⋃_{i=1}^n N_i, ⋃_{i=1}^n L_i) ≅ ⊕_{i=1}^n H_∗(N_i, L_i) = ⊕_{i=1}^n CH_∗(S_i).


Remark 2.17. We might restrict our attention to a subset of the space on which the flow was originally defined, for example when we restrict the flow to only non-negative values X = R^n_{≥0}. Note that the Conley index cannot be computed in certain cases, for example, when a centre fixed point⁵ is considered. In this situation there is no isolating neighbourhood that has this fixed point as its invariant set; every neighbourhood intersects the limit cycles around the fixed point.

Further problems can arise when we do not know whether the chosen set X, the space to which we restrict the flow, is indeed an isolating neighbourhood. In such a situation, one can only be sure that N ⊂ X is an isolating neighbourhood if actually N ⊂ int X. This problem is one of the reasons why the Conley index cannot be computed in some cases, e.g. for the origin, which is often an isolating neighbourhood but lies on the boundary of X. This can be problematic in biological models, where it is normally assumed that the size of the population is non-negative.

2.3 Continuation

Now, we will discuss some useful properties of index pairs, isolating neighbourhoods, and the Conley index itself. First of all, these objects themselves persist when we consider a continuous family of dynamical systems

ϕ_λ : R × X → X,  λ ∈ [−1, 1].

Proposition 2.18 ([33]). If N is an isolating neighbourhood for the flow ϕ₀, then, for sufficiently small δ > 0, N is an isolating neighbourhood for all ϕ_λ with |λ| < δ.

Definition 2.19. Let N ⊂ X be a compact set and let S_λ = Inv(N, ϕ_λ). Two isolated invariant sets S_{λ₀} and S_{λ₁} are related by continuation, or S_{λ₀} continues to S_{λ₁}, if N is an isolating neighbourhood for all ϕ_λ, λ ∈ [λ₀, λ₁].

Theorem 2.20 (Continuation Property [33, 170]). Let S_{λ₀} and S_{λ₁} be isolated invariant sets that are related by continuation. Then CH_∗(S_{λ₀}) ≅ CH_∗(S_{λ₁}).

The Conley index is a purely topological index and as such a very coarse measure of the dynamics. Typically, if it can be computed directly at a particular parameter value, then one's knowledge of the dynamics at that parameter value is reasonably complete. The power of the index (as in degree theory) comes from being able to continue it to a parameter value where one's understanding of the dynamics is much less complete.


2.4 Ważewski principle

We can use the Conley index to show the existence of a nonempty invariant subset in phase space [33]. In this section we discuss how this can be achieved. Observe that the empty set is an isolated invariant set for any flow. Furthermore, (∅, ∅) is an index pair for the empty set. Hence

CH_∗(∅) ≅ 0.

The contrapositive of this example, although trivial, leads to a significant implication, and hence will be designated as a theorem.

Theorem 2.21 (Ważewski principle). If N is an isolating neighbourhood such that CH_∗(Inv N) ≇ 0, then Inv N ≠ ∅.

This result provides the simplest example of an existence result for an invariant set obtained via the Conley index. This property of the Conley index allows one to pass from the isolating neighbourhood to an understanding of the dynamics of the isolated invariant set.

2.5 Stability of invariant sets

Consider the simplest example of an invariant set, that of a fixed point or equilibrium, i.e., a point x ∈ X such that ϕ(R, x) = {x}. For many differential equations, proving the existence of fixed points either analytically or numerically is a nontrivial task.

The Conley index and the relation of the forward image of N to N can be used to classify each computed isolating neighbourhood N on the basis of its stability. The problem of assessing stability in dynamical systems defined by differential equations is discussed in Section C.4.

Theorem 2.22 ([165]). Let ẋ = f(x) with f ∈ C¹ and x ∈ R^{n+m}. Let S be a hyperbolic fixed point with an unstable manifold of dimension n. Then

CH_k(S) = { Z   if k = n,
          { 0   otherwise.

Proof. By the Hartman-Grobman theorem (see Theorem C.15), the flow of the nonlinear system is homeomorphic to the flow of the linear system

ẏ = Df(S)y.    (8)

Therefore it suffices to compute the Conley index of the origin under this linear dynamics⁶. With a linear change of variables, Equation (8) transforms to the block-diagonal system

ż₁ = A z₁,  ż₂ = B z₂,

where A is an m × m matrix all of whose eigenvalues have real part less than zero, and B is an n × n matrix all of whose eigenvalues have real part greater than zero. An isolating neighbourhood of the origin is given by [−1, 1]^m × [−1, 1]^n. The exit set is given by [−1, 1]^m × ∂([−1, 1]^n). Finally, we get

H_∗([−1, 1]^m × [−1, 1]^n, [−1, 1]^m × ∂([−1, 1]^n)) = H_∗([−1, 1]^n, ∂([−1, 1]^n)).
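Theorem 2.22 reduces the index of a hyperbolic fixed point to a single number: the dimension of the unstable manifold, i.e. the number of Jacobian eigenvalues with positive real part. A minimal numerical sketch (the function name is ours):

```python
import numpy as np

def conley_index_betti(jacobian, max_k=None):
    """Betti numbers of the homology Conley index of a hyperbolic fixed
    point: CH_k = Z when k equals the number of eigenvalues of the
    Jacobian with positive real part, and 0 otherwise (Theorem 2.22)."""
    eigs = np.linalg.eigvals(np.asarray(jacobian, dtype=float))
    if np.any(np.isclose(eigs.real, 0.0)):
        raise ValueError("fixed point is not hyperbolic")
    n_unstable = int(np.sum(eigs.real > 0))
    dim = len(eigs) if max_k is None else max_k
    return {k: (1 if k == n_unstable else 0) for k in range(dim + 1)}

# A planar saddle: one unstable direction, so CH_1 = Z.
print(conley_index_betti([[1.0, 0.0], [0.0, -1.0]]))  # {0: 0, 1: 1, 2: 0}
```

An attracting node would instead give index Z in degree 0, and a planar repeller Z in degree 2, matching the classification used in Section 2.5.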

We say that an isolating neighbourhood N is attracting if the forward image of N is entirely contained in N. Otherwise, if the forward image of N is not fully contained in N, we say that N is unstable. If N has the Conley index of a hyperbolic fixed point with d-dimensional unstable manifold, then we say that N is of the type of the corresponding point. For a typical system, it is likely that N indeed contains an equilibrium of the expected stability, but the dynamics in N may turn out to be much more complicated than seen from outside (that is, from the perspective of the isolating neighbourhood and the index pair). We will see some examples where this is the case in Section 4. If N ⊂ R^n is of the type of a fixed point with n-dimensional unstable manifold, then we say that N is repelling. Other types of indices are possible; for example, the index of a periodic trajectory (see Section 2.11) differs from the index of any fixed point [65].

2.6 Attractor-repeller decomposition

One of the fundamental theorems in dynamical systems is the decomposition theorem of Conley, which states that any compact invariant set can be divided into its chain recurrent part and the rest. Furthermore, on the latter part one can define a strictly decreasing Lyapunov function (see Definition C.28). These two portions are therefore precisely the sets on which one has recurrent dynamics and gradient-like dynamics [141, 115]. In the following three sections, we explain this decomposition. First, however, we will introduce a simpler decomposition, namely into attractors and their dual repellers, which is the coarsest way to decompose.

Definition 2.23. Let ϕ be a flow on X, and let Y ⊂ X. The omega limit set of Y is

ω(Y) = ω(Y, ϕ) := ⋂_{t>0} cl(ϕ([t, ∞), Y)),

while the alpha limit set of Y is

α(Y) = α(Y, ϕ) := ⋂_{t>0} cl(ϕ((−∞, −t], Y)).
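An omega limit set can be estimated numerically by integrating far forward in time and keeping only the tail of the trajectory. The sketch below (names and tolerances are ours) does this with a hand-rolled RK4 integrator for the Hopf normal form, whose unit circle is an attracting periodic orbit:

```python
import numpy as np

def f(p):
    # Hopf normal form: the unit circle is an attracting periodic orbit
    x, y = p
    r2 = x * x + y * y
    return np.array([x - y - x * r2, x + y - y * r2])

def rk4_step(p, h):
    k1 = f(p); k2 = f(p + 0.5 * h * k1)
    k3 = f(p + 0.5 * h * k2); k4 = f(p + h * k3)
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def omega_limit_sample(p0, h=0.01, n_steps=20000, tail=1000):
    """Crude numerical stand-in for omega(p0): integrate far forward in
    time and keep only the tail of the trajectory."""
    p = np.array(p0, dtype=float)
    traj = []
    for i in range(n_steps):
        p = rk4_step(p, h)
        if i >= n_steps - tail:
            traj.append(p.copy())
    return np.array(traj)

tail = omega_limit_sample([0.1, 0.0])
radii = np.linalg.norm(tail, axis=1)
print(radii.min(), radii.max())  # tail points hug the unit circle
```

Such tail samples are only heuristic approximations of ω(Y); the rigorous counterpart is the interval-arithmetic machinery discussed in Section 3.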


Definition 2.24. Let ϕ be a flow on X. A compact set N ⊂ X is a trapping region if N is forward invariant and there exists a T > 0 such that ϕ(T, N ) ⊂ int(N ).

Definition 2.25. Let ϕ be a flow on X. A compact set A ⊂ X is an attractor if there exists a trapping region N ⊂ X such that

A = Inv(N, ϕ).

The concept of a repeller is not simply a matter of time reversal, and its definition therefore requires a slight modification.

Definition 2.26. Let ϕ be a flow on X. A compact set N ⊂ X is a repelling region if N is backward invariant and there exists a T < 0 such that ϕ(T, N ) ⊂ int(N ).

Definition 2.27. A subset R ⊂ S is a repeller if there exists a repelling region N ⊂ X such that

R = Inv⁺(N, ϕ) := {x ∈ N | ϕ(R₊, x) ⊂ N}.

Definition 2.28. Let A be an attractor for the flow ϕ with trapping region U. Then

A∗ = α(Uᶜ)

is a repeller with repelling region Uᶜ and is called the dual repeller of A in S. The pair (A, A∗) is called an attractor-repeller pair decomposition of S.

For an attractor-repeller decomposition, one can consider the orbits connecting the repeller to the attractor.

Definition 2.29. Let (A, R) be an attractor-repeller decomposition for the invariant set S. The set of connecting orbits from R to A in S is denoted by

C(R, A; S) := {x ∈ S | ω(x) ⊂ A, α(x) ⊂ R}.

From the definition of an attractor-repeller pair decomposition, it is clear that if x ∈ S\(A ∪ R), then ω(x) ⊄ R and α(x) ⊄ A, i.e., there can be no connecting orbits from the attractor to the repeller. The following result is intuitive yet important. It was stated in [33], however without proof.

Theorem 2.30. Let (A, R) be an attractor-repeller pair decomposition of S. Then

S = A ∪ R ∪ C(R, A; S).

Proof. First note that we can write R = α(S\A). If S ≠ A ∪ R ∪ C(R, A; S), then there is x ∈ S such that x ∉ A, x ∉ R, and ω(x) ⊄ A or α(x) ⊄ α(S\A). In the first case, if ω(x) ⊄ A, then x ∈ R by definition of R. In the second case, if α(x) ⊄ α(S\A), then α(x) ⊂ α(A), and so x ∈ A. In both cases we end up with a contradiction.


The following theorem shows a similar robustness for attractor-repeller decompositions as we have seen for invariant sets; see Section 2.3.

Theorem 2.31. Let {ϕ_λ} be a continuous family of flows and let S be an invariant set for the flow ϕ₀. Then an attractor-repeller pair decomposition continues, i.e., there exists λ₀ > 0 such that (A_λ, R_λ) is an attractor-repeller pair for S_λ for all λ ∈ [−λ₀, λ₀].

The following theorem indicates that Lyapunov functions and attractor-repeller decompositions are in some sense equivalent concepts.

Theorem 2.32 (Lemma 5.3 in [170]). Let S be a compact invariant set with attractor-repeller pair (A, R). Then there exists a continuous (Lyapunov) function V : S → [0, 1] such that:

1. V⁻¹(1) = R;

2. V⁻¹(0) = A;

3. for x ∈ C(R, A; S) and t > 0, V(x) > V(ϕ(t, x)).

2.7 Morse decomposition

The Morse decomposition is the fundamental structure underlying Conley's decomposition theorem. A Morse decomposition of a flow is a finite collection of disjoint, compact invariant sets which together contain all the recurrent behaviour of the flow, in the sense that if they are identified to points, a gradient-like flow is obtained. As with the structures before (see for example the continuation property of invariant sets), the Morse decomposition is stable in the sense that nearby flows admit similar decompositions. Before we can introduce the Morse decomposition, we need the notion of a partial ordering.

Definition 2.33. A strict partial order on a set P is a relation < satisfying the following conditions:

1. p < p never holds for p ∈ P;

2. if p < q and q < r, then p < r.

If, in addition, the partial order satisfies for all p, q ∈ P either p < q or q < p, then < is called a total order. We will use the terminology poset to indicate a partially ordered set, and (P, <) will be used to denote a finite indexing set P with a partial order <.


Definition 2.34. A Morse decomposition of a flow on a compact metric space X with invariant set S is a finite collection of nonempty, pairwise disjoint isolated invariant sets M(1), . . . , M(n) with a strict partial ordering < on the index set P = {1, . . . , n} such that for every

x ∈ S \ ⋃_{p∈P} M(p)

there exist indices i < j such that the α- and ω-limit sets of x are contained in M(j) and M(i), respectively. Any ordering on P with the above property is called admissible. Having chosen an admissible order <, we shall write

M(S) = {M(p) | p ∈ (P, <)}.    (9)

The sets in a Morse decomposition are called Morse sets.

The following definition allows us to define a useful representation of the Morse decomposition.

Definition 2.35. A directed graph or digraph is a pair (V, E), where the collection of vertices V is a finite set and the edges E are ordered pairs of vertices from V. The numbers of elements in V and E are denoted by |V| and |E|, respectively. Furthermore, we define the graph with reversed edges as

Eᵀ := {(u, v) | (v, u) ∈ E}.

A path in a directed graph is a sequence of edges having the property that the ending vertex of each edge in the sequence is the starting vertex of the next edge in the sequence. A path forms a cycle if the starting vertex of its first edge equals the ending vertex of its last edge. A directed acyclic graph is a directed graph that has no cycles.

Definition 2.36. A Morse decomposition can be represented in terms of a directed graph, the Morse graph, with vertices V = {M(i)}_{i∈P} and edges

E = {(M(i), M(j)) | there exists a connecting orbit from M(i) to M(j)}.

If each Morse set is assigned its Conley index (homotopy or homology), then such a structure is called a Conley-Morse decomposition, and the corresponding graph is called a Conley-Morse graph. Such a Conley-Morse graph captures the global behaviour of a dynamical system [4].
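Since a Morse graph with an admissible order must be a directed acyclic graph, one can check a candidate decomposition, and extract an admissible total order, with a topological sort. A sketch using Python's standard graphlib, on a made-up Morse decomposition with a repeller R, two saddles S1 and S2, and an attractor A:

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical Morse sets of a toy gradient-like flow, with edges
# recording the observed connecting orbits (from source to target).
edges = {
    ("R", "S1"), ("R", "S2"),
    ("S1", "A"), ("S2", "A"),
}

# graphlib expects predecessor lists: node -> set of nodes it depends on.
preds = {}
for src, dst in edges:
    preds.setdefault(dst, set()).add(src)
    preds.setdefault(src, set())

try:
    order = list(TopologicalSorter(preds).static_order())
    print("acyclic Morse graph; one admissible total order:", order)
except CycleError:
    print("not a Morse graph: the connecting orbits form a cycle")
```

Any admissible order extends this sorted order's underlying partial order, in the sense of Proposition 2.40 below.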

Remark 2.37. Note that a Morse decomposition of an invariant set S is not unique. In particular, if i, j are indices such that i < j but there is no other index k with i < k < j, then one can create a coarser Morse decomposition by replacing S_i and S_j with S_i ∪ S_j ∪ C(S_i, S_j).

As mentioned before, there may be many admissible orders for a given Morse decomposition. However, there is a unique minimal (in the sense of the number of order relations) admissible order which is called the flow-defined order.


Definition 2.38. The flow on S defines a natural partial order < on P in the following way: π′ < π if and only if there exists a sequence of distinct elements of P, π′ = π₀, . . . , π_n = π, such that C(π_j, π_{j−1}) ≠ ∅ for each j = 1, . . . , n. This admissible ordering < of P is called the flow-ordering of P.

To relate admissible orders to the flow-ordering, we need the concept of the extension of an order.

Definition 2.39. An order <′ on P is an extension of < if p < q implies p <′ q.

Proposition 2.40 (Proposition 2.3 in [69]). Every admissible ordering of M(S) is an extension of the flow-ordering of M(S).

This section began with the statement that Morse decompositions are generalisations of attractor-repeller pair decompositions. To see this, observe that an attractor-repeller pair decomposition (A, R) of the invariant set S defines a Morse decomposition

M(S) = {M(p) | p = 1, 2, with 2 > 1},

where M(1) = A and M(2) = R.

Attractors can be ordered as well, for example by the inclusion relation.

Definition 2.41. A partial order relation can be represented by a directed graph, which is often useful for visualisation. It is a common convention to depict these graphs in a Hasse diagram. This diagram is a planar representation⁷ which is constructed as follows. In a diagram of (P, <) we associate a node to each element of P, and an edge is drawn between the nodes for p, q ∈ P if p < q and there is no r ∈ P with p < r < q. The nodes are placed so that if p < q, then the node for p is vertically lower than the node for q, and every edge intersects no nodes other than its endpoints.

We note without proof that such a diagram is always possible.
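Drawing a Hasse diagram only requires the covering relation, the pairs p < q with nothing strictly between them. A small sketch of ours that extracts it from any finite poset, illustrated on the divisibility order:

```python
def covering_relation(elements, less_than):
    """Edges of the Hasse diagram of a finite poset: p is covered by q
    when p < q and no r satisfies p < r < q."""
    edges = []
    for p in elements:
        for q in elements:
            if less_than(p, q) and not any(
                less_than(p, r) and less_than(r, q) for r in elements
            ):
                edges.append((p, q))
    return edges

# Divisibility order on {1, 2, 3, 4, 6, 12}: a < b iff a properly divides b.
divs = [1, 2, 3, 4, 6, 12]
lt = lambda a, b: a != b and b % a == 0
print(covering_relation(divs, lt))
# [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```

The full order relation can be recovered from these edges by taking the transitive closure, which is why the Hasse diagram is a faithful, uncluttered picture of the poset.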

2.8 Conley's decomposition theorem

The following treatment considers the problem of splitting flows into gradient-like and recurrent parts, but in a more general setting; here we do not require the chain-recurrent part to consist only of equilibria.

Let M = {M (p) | p ∈ P} be a finite collection of disjoint compact invariant sets. For these sets to form a Morse decomposition it is required that all of the recurrent behaviour of the flow takes place inside them. In other words, orbits cannot cycle back and forth between two of the sets. This can be achieved by insisting that orbits run downhill with respect to some partial order on the collection [141].

⁷In the sense of representation as a planar graph. A planar graph is a graph that can be drawn in the plane without edge crossings.

Definition 2.42. Let ϕ be a flow on the locally compact metric space X, let x, y ∈ X, and let ε, τ > 0. An (ε, τ)-chain from x to y is a finite sequence

{(x_i, t_i)} ⊂ X × [τ, ∞),  i = 1, . . . , n,

such that x = x₁, t_i ≥ τ, µ(ϕ(t_i, x_i), x_{i+1}) ≤ ε for all i = 1, . . . , n − 1, and µ(ϕ(t_n, x_n), y) ≤ ε. If there exists an (ε, τ)-chain from x to y, then we write x ⇝_{(ε,τ)} y. Further, if x ⇝_{(ε,τ)} y for all (ε, τ), then we write x ⇝ y.

Definition 2.43. Let x, y ∈ S. Set x ∼_{(ε,τ)} y if and only if there exists a periodic (ε, τ)-chain η_x : Z → S such that η_x(j) = y for some j ∈ Z. The (ε, τ)-chain recurrent set of S is

R(S, ϕ, ε, τ) = {x ∈ S | x ⇝_{(ε,τ)} x}.

Let P be an indexing set for the resulting collection of equivalence classes, that is, let

R(S, ϕ, ε, τ) = ⋃_{p∈P} R_p(S, ϕ, ε, τ),

where x, y ∈ R_p(S, ϕ, ε, τ) if and only if x ∼_{(ε,τ)} y.

A strict partial order < is an admissible order on R(S, ϕ, ε, τ) if, for p ≠ q, the existence of an (ε, τ)-chain from an element of R_p(S, ϕ, ε, τ) to an element of R_q(S, ϕ, ε, τ) implies that p > q.

Theorem 2.44 ([33]). Let S be an invariant set for ϕ. Let {R_p(S, ϕ, ε, τ) | p ∈ P} be the set of equivalence classes of the (ε, τ)-chain recurrent set of S. Let

M(p) := Inv(R_p(S, ϕ, ε, τ), ϕ).

Then M(S) := {M(p) | p ∈ P} is a Morse decomposition of S. Furthermore, if > is an admissible order for the equivalence classes of R(S, ϕ, ε, τ), then > is an admissible order for M(S).
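In the combinatorial spirit of Theorem 2.44, chain recurrence can be over-approximated by discretizing phase space into boxes, connecting box i to box j whenever the ε-fattened image of box i meets box j, and declaring a box recurrent when it can reach itself in this digraph. The sketch below (all parameters are our own choices) does this for the circle map f(x) = x + 0.1 sin(2πx) (mod 1), which has a repelling fixed point at 0 and an attracting one at 1/2; the recurrent boxes cluster around the two fixed points.

```python
import math

def g(x):
    # Lift of the circle map f(x) = x + 0.1 sin(2*pi*x) to the real line;
    # g is increasing, so the image of a box is an interval [g(a), g(b)].
    return x + 0.1 * math.sin(2 * math.pi * x)

N = 200          # number of boxes covering the circle
eps = 1e-3       # fattening, playing the role of epsilon

def box_image(i):
    """Indices (mod N) of boxes meeting the eps-fattened image of box i."""
    a, b = i / N, (i + 1) / N
    lo, hi = g(a) - eps, g(b) + eps
    js = set()
    k = math.floor(lo * N)
    while k / N < hi:
        js.add(k % N)
        k += 1
    return js

edges = {i: box_image(i) for i in range(N)}

def reachable_from(i):
    # Depth-first search for all boxes reachable from box i.
    seen, stack = set(), list(edges[i])
    while stack:
        j = stack.pop()
        if j not in seen:
            seen.add(j)
            stack.extend(edges[j])
    return seen

# A box is combinatorially chain recurrent if it can reach itself.
recurrent = [i for i in range(N) if i in reachable_from(i)]
print(sorted(recurrent))
```

The resulting recurrent boxes are an outer approximation: they contain the true chain recurrent set, and shrinking the boxes and the fattening eps tightens the enclosure.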

Theorem 2.44 shows that an (ε, τ)-chain recurrent set produces a Morse decomposition.

Definition 2.45. The chain recurrent set of the set S ⊂ X under the flow ϕ is defined by

R(S, ϕ) = {x ∈ S | x ⇝ x}.

The chain recurrent set is fundamental for two reasons. First, it is minimal in the sense of the following theorem.

Theorem 2.46 ([33]). If S is a compact invariant set for the flow ϕ, then R(R(S, ϕ), ϕ|_{R(S,ϕ)}) = R(S, ϕ).


Again, one can define equivalence classes R_p(S, ϕ), p ∈ P, of R(S, ϕ) by x ∼ y if and only if x ∼_{(ε,τ)} y for all ε, τ > 0.

The theorem tells us that the (ε, τ)-chain recurrent set consists, essentially, of those points which, having been allowed a finite number of errors of size ε, return to themselves under the dynamics. Second, it captures all the recurrent dynamics, as is indicated by the following theorem.

Theorem 2.47 (Fundamental Decomposition Theorem, [33]). Let S be a compact invariant set for ϕ. Let R_i(S, ϕ), i = 1, 2, . . . , denote the connected components of R(S, ϕ). Then there exists a continuous Lyapunov function

V : S → [0, 1]

satisfying:

1. if x ∉ R(S, ϕ) and t > 0, then V(x) > V(ϕ(t, x));

2. for each i = 1, 2, . . . , there exists σ_i ∈ [0, 1] such that R_i(S, ϕ) ⊂ V⁻¹(σ_i), and the {σ_i} can be chosen such that σ_i ≠ σ_j if i ≠ j.

This theorem suggests that, to understand the global dynamics of ϕ, it is sufficient to understand the dynamics in the equivalence classes R_p(S, ϕ) of R(S, ϕ) and the structure of the set of connecting orbits between these equivalence classes.

One can think of an (ε, τ)-chain of ϕ from x to y in S as a map η_x : {0, . . . , n} → S satisfying:

1. η_x(0) = x and η_x(n) = y;

2. µ(η_x(i + 1), ϕ(η_x(i), t_i)) < ε for i = 0, . . . , n − 1.

One can define equivalence classes on the points in the invariant set S. Let x, y ∈ S. Set x ∼_ε y if and only if there exists a periodic (ε, τ)-chain η_x : Z → S such that η_x(j) = y for some j ∈ Z.

We will now discuss some connections between the chain recurrent set, attractors and repellers, and Morse decompositions. First, we will analyse the relation between chain recurrence and attractors, for which we will use the following definition.

Definition 2.48. For Y ⊂ X we define the chain limit set as

Ω(Y) := {z ∈ X | there is y ∈ Y such that y ⇝ z}.

Proposition 2.49. For Y ⊂ X the set Ω(Y ) is the intersection of all attractors containing ω(Y ).

Finally, we obtain the following relation between the chain recurrent set and attractors [34, 6].


Theorem 2.50. Let ϕ be a flow on the locally compact metric space X with invariant set S ⊂ X. The chain recurrent set R(S, ϕ) satisfies

R(S, ϕ) = ⋂ {A ∪ A∗ | A is an attractor}.

Remark 2.51. By the previous theorem, there exists a finest Morse decompo-sition {M (1), . . . , M (n)} if and only if the chain recurrent set R(S, ϕ) has only finitely many connected components. In this case, the Morse sets coincide with the chain recurrent components of R(S, ϕ).

2.9 Existence of connecting orbits

The existence of connecting orbits between invariant sets (attractor-repeller pairs in particular) can be proved by comparison of the Conley indices of the invariant sets and the joint invariant set.

Definition 2.52. Let S be an isolated invariant set and let (A, R) be an attractor-repeller pair decomposition. An index triple for (A, R) is a triple of compact sets (N₂, N₁, N₀) with N₀ ⊂ N₁ ⊂ N₂ such that:

• (N₂, N₀) is an index pair for S;

• (N₂, N₁) is an index pair for R;

• (N₁, N₀) is an index pair for A.

Theorem 2.53 (Proposition 3.3 in [117]). Let (A, R) be an attractor-repeller pair decomposition of an isolated invariant set S for a flow ϕ. Then there exists an index triple (N2, N1, N0).

The following proposition will prove very useful to construct a long exact sequence for the index triple.

Proposition 2.54 (Section 8, Theorem 5 in [181]). Let X″ ⊂ X′ ⊂ X be topological spaces. Then there is a long exact sequence

· · · → H_n(X′, X″) → H_n(X, X″) → H_n(X, X′) --∂_n--> H_{n−1}(X′, X″) → · · · ,

called the long exact sequence associated to a triple, or the long exact sequence of a triple.

From Proposition 2.54 and the fact that N₀ ⊂ N₁ ⊂ N₂, we get the long exact sequence

· · · → H_n(N₁, N₀) → H_n(N₂, N₀) → H_n(N₂, N₁) --∂_n--> H_{n−1}(N₁, N₀) → · · · ,

which is equivalent to

· · · → CH_n(A) → CH_n(S) → CH_n(R) --∂_n--> CH_{n−1}(A) → · · · .    (10)


This long exact sequence introduces the concept of the connecting homomorphism, which will be useful in order to show the existence of connecting orbits in attractor-repeller pairs. The following theorem relates this exact sequence to the underlying dynamics.

Theorem 2.55 ([69]). Let (A, R) be an attractor-repeller pair decomposition of an isolated invariant set S. If S = A ∪ R, then ∂_n = 0 for all n in the long exact sequence (10).

The contrapositive of this result is useful to show the existence of connecting orbits.

Corollary 2.56 ([117]). If ∂_n ≠ 0 for some n ∈ Z_{>0}, then S ≠ A ∪ R, i.e., C(R, A; S) ≠ ∅.
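With field coefficients, Theorem 2.55 and Corollary 2.56 combine into a simple computable criterion: if all ∂_n vanished, the long exact sequence would split and dim CH_n(S) = dim CH_n(A) + dim CH_n(R) for every n, so any mismatch forces some ∂_n ≠ 0 and hence a connecting orbit. A sketch (the function name and the worked example are ours):

```python
def forces_connection(betti_A, betti_R, betti_S):
    """betti_* lists dim CH_n in degrees 0, 1, ...  Returns True when
    the Betti numbers are incompatible with S = A u R, i.e. when some
    connecting homomorphism must be nonzero (Corollary 2.56)."""
    n_max = max(len(betti_A), len(betti_R), len(betti_S))
    get = lambda b, n: b[n] if n < len(b) else 0
    return any(get(betti_S, n) != get(betti_A, n) + get(betti_R, n)
               for n in range(n_max))

# Flow x' = x(1 - x) on R: attractor A = {1} with CH_0 = F, repeller
# R = {0} with CH_1 = F, and S = Inv([-1/2, 3/2]) = [0, 1] with CH_* = 0.
print(forces_connection([1], [0, 1], []))  # True: a connecting orbit exists
```

For this example the exact sequence degenerates to 0 → CH₁(R) --∂₁--> CH₀(A) → 0, so ∂₁ is an isomorphism, which is exactly the heteroclinic orbit from 0 to 1.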

In preparation for the algebra of connection matrices that will be introduced in Section 2.10, we will now present the information carried by ∂_∗ in the form of a matrix. To simplify the presentation, we shall from now on only consider homology with field coefficients.

Definition 2.57. The connecting homomorphism or boundary map

∂_∗(A, R) : CH_∗(R) → CH_∗(A)

is a degree −1 operator; that is, it sends n-level homology to (n − 1)-level homology.

Proposition 2.58. If ∆_∗ : CH_∗(A) ⊕ CH_∗(R) → CH_{∗−1}(A) ⊕ CH_{∗−1}(R) is defined by

∆_∗ = [ 0  ∂_∗(A, R) ; 0  0 ],

then H∆_∗ ≅ CH_∗(S), i.e., the homology of the complex (CH_∗(A) ⊕ CH_∗(R), ∆) recovers the Conley index of S.

The matrix in Proposition 2.58 is the simplest example of a connection matrix, which will be described further in Section 2.10. Just as one can decompose isolated invariant sets, one can decompose sets of connecting orbits. Let (A, R) be an attractor-repeller pair decomposition of S with associated boundary operator ∂(A, R).

Definition 2.59. A separation of C(R, A; S) is a collection

{C_j(R, A; S) | j = 1, . . . , J}

of disjoint open subsets of C(R, A; S) such that

C(R, A; S) = ⋃_{j=1}^J C_j(R, A; S).

If N is an isolating neighbourhood for S, then there exists N_j ⊂ N such that N_j is an isolating neighbourhood for S_j := A ∪ R ∪ C_j(R, A; S). Observe that (A, R) is an attractor-repeller pair for S_j, and hence there is an associated boundary operator ∂(A, R; j).

Theorem 2.60 ([133]). For any separation of C(R, A; S) as above,

∂(A, R) = Σ_{j=1}^J ∂(A, R; j).

2.10 Connection matrix

The general framework for the Conley index and the connection matrix relies on the decomposition of an invariant set S into smaller invariant sets [71, 134]. These smaller invariant sets are subsets of S that have the dynamical characteristic of being an attractor or a repeller, with any other point in S lying on a heteroclinic orbit⁸.

In the previous section we constructed a connection matrix for the case of an attractor-repeller pair. One can do the same for a Morse decomposition of a flow on a compact metric space X as in (9) with index set P. The connection matrix is a linear map given by

∆_P = [∆_{pq}]_{p,q∈P} : ⊕_{p∈P} CH_∗(M(p)) → ⊕_{p∈P} CH_∗(M(p)).    (11)

If N is the number of elements in P, then ∆_P can be represented as the following N × N matrix:

∆_P = [ ∆_{11}  · · ·  ∆_{1N} ]
      [   ⋮       ⋱      ⋮    ]
      [ ∆_{N1}  · · ·  ∆_{NN} ].    (12)

Each entry of the matrix (12) is given by

∆_{ij} : CH_∗(M(j)) → CH_∗(M(i)),  where  CH_∗(M(i)) = ⊕_{i′=1}^{C_i} CH_∗(M(ii′)),

and C_i is the number of connected components of the Morse set M(i); we write M(i) = ⋃_{i′=1}^{C_i} M(ii′).

⁸For the definition of a heteroclinic orbit for differential equations, see Appendix C.27.


The entries ∆_{ij} of the matrix (12) are C_i × C_j matrices given by

∆_{ij} = [ δ_{11}     · · ·  δ_{1C_j}   ]
         [   ⋮          ⋱       ⋮       ]
         [ δ_{C_i 1}  · · ·  δ_{C_i C_j} ].    (13)

Furthermore, each entry of the matrix (13) is a linear map

δ_{i′j′} : CH_∗(M(jj′)) → CH_∗(M(ii′)),

where the graded homology decomposes at each homology level as

CH_∗(M(ii′)) = ⊕_{k=0}^{N_{i′}} CH_k(M(ii′)),

with N_{i′} denoting the highest nontrivial homology level of M(ii′). The entries δ_{i′j′} are (N_{i′} + 1) × (N_{j′} + 1) matrices

δ_{i′j′} = [ d_{00}       · · ·  d_{0N_{j′}}      ]
           [   ⋮            ⋱       ⋮             ]
           [ d_{N_{i′}0}  · · ·  d_{N_{i′}N_{j′}} ],

where d_{mn} : CH_n(M(jj′)) → CH_m(M(ii′)) is a linear map between vector spaces.

Proposition 2.61 (Proposition 3.2 in [71]). If ∆ is a connection matrix, then it satisfies the following two properties:

1. ∆_{pq} ≠ 0 implies p ≤ q (∆ is upper triangular);

2. each ∆_{pq} is of degree −1 and ∆ ◦ ∆ = 0 (∆ is a boundary operator).
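The attractor-repeller pair of the flow ẋ = x(1 − x) gives a minimal concrete connection matrix. Flattening the grading into a single 2 × 2 real matrix (degree 0 carrying CH₀(A) for the attractor {1}, degree 1 carrying CH₁(R) for the repeller {0}), one can verify the properties of Proposition 2.61 and recover CH_∗(S) = 0 as the homology of the complex, in the sense of Proposition 2.58. A sketch of ours:

```python
import numpy as np

# Basis order: (generator of CH_0(A), generator of CH_1(R)).  The only
# possibly nonzero entry is the connecting homomorphism, here d = 1,
# reflecting the heteroclinic orbit from the repeller to the attractor.
delta = np.array([[0.0, 1.0],     # strictly upper triangular, degree -1
                  [0.0, 0.0]])

assert np.allclose(delta @ delta, 0.0)   # Delta is a boundary operator

# Total rank of the homology of (C, Delta): dim ker Delta - rank Delta.
rank = np.linalg.matrix_rank(delta)
h_rank = delta.shape[0] - rank - rank
print(h_rank)  # 0: matches CH_*(S) = 0 for S = [0, 1]
```

Had the entry been 0 instead, the homology rank would have been 2, contradicting CH_∗(S) = 0; this is the linear-algebra shadow of Corollary 2.56.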

Theorem 2.62 ([71]). For any isolated invariant set S with Morse decomposi-tion M(S), the set of connecdecomposi-tion matrices is not empty.

Information about the set of connecting orbits between elements of a Morse decomposition can be obtained via the connection matrices of the Morse decomposition [71]. For continuation properties of the connection matrix, see [70]. A computational framework for connection matrix theory can be found in [87, 182]. In biology, the connection matrix has, for example, been used for the classification of flows arising from ecological models; see [164, 163].

2.11 Periodic orbits

Periodic solutions to differential equations in R^n are among the basic objects of interest in the theory of dynamical systems. In this section we will see how periodic orbits can be identified. An essential ingredient of the approach is the Poincaré section; see Section C.6. First, let us formally introduce periodic orbits for flows.


Definition 2.63. Let ϕ be a flow on a locally compact metric space X. A point x ∈ X is called a periodic point with period τ if there exists τ > 0 such that ϕ(τ, x) = x. The orbit {ϕ(t, x) | 0 ≤ t ≤ τ} through a periodic point x is called a periodic orbit.

There are many situations in which the flow admits a Poincaré section, for which one can study the dynamics of the associated Poincaré map⁹. The Conley index for the Poincaré map carries more information than the index for the flow. For the Conley index for maps, see Appendix D. One can prove the existence of periodic orbits by observing the Conley index and the Poincaré map [135].

Theorem 2.64 (Theorem 4.20 in [64]). If N is an isolating neighbourhood which admits a Poincaré section Ξ and for all n ∈ Z either

dim CH_{2n}(Inv N) = dim CH_{2n+1}(Inv N)

or

dim CH_{2n}(Inv N) = dim CH_{2n−1}(Inv N),

where not all of the above dimensions are zero, then Inv N contains a periodic trajectory.

In general, one can determine the Conley index of a normally hyperbolic invariant set by considering its local unstable manifold (see Definition ).

Theorem 2.65 ([33]). Assume that a manifold S is a normally hyperbolic invariant set. Let E be the vector bundle over S defined by the local unstable manifold of S. If E is a rank-n orientable bundle and one uses homology with field coefficients, then

CH_k(S) ≅ H_{k+n}(S).

As a corollary we obtain the following result.

Corollary 2.66 (Corollary 3.17 in [138]). Let S be a hyperbolic invariant set that is diffeomorphic to a circle. Assume that S has an oriented unstable manifold of dimension n. Then

CH_k(S) = { Z   if k = n, n + 1,
          { 0   otherwise.

We will call the above Conley index the Conley index of a hyperbolic periodic orbit. From Theorem 2.64 and Theorem 2.65 we get the following.

Corollary 2.67 (Corollary 4.21 in [64]). Under the hypotheses of Theorem 2.64, if Inv N has the Conley index of a hyperbolic periodic orbit, then Inv N contains a periodic orbit.


A key step in the proof of Theorem 2.64 is the following exact sequence, which relates the index of an isolated invariant set admitting a Poincaré section to the index of the invariant set under the Poincaré map.

Theorem 2.68 (Theorem 1.2 in [135]). Assume N is an isolating neighbour-hood for the flow ϕ and assume that N admits a Poincar´e section Ξ. Let Π denote the corresponding Poincar´e map, S = Inv(N, ϕ), and K = Ξ ∩ S. Then, there is the following exact sequence of homology Conley indices:

· · · → CH_n(S, ϕ) → CH_n(K, Π) → CH_n(K, Π) → CH_{n−1}(S, ϕ) → · · ·

where the middle map is id − χ_n(K, Π), and χ_n(K, Π) is the graded automorphism on CH_*(K, Π); see Definition D.14.

A well-known approach to proving the existence of periodic orbits is to construct a Poincaré section and to use suitable topological theorems to find a fixed point of the Poincaré map. However, as observed in [86], it is usually difficult to locate a Poincaré section or to compute the Poincaré map, which makes these methods difficult to apply. The problem is that in most concrete applications our knowledge of the Poincaré map is based mostly on numerical experiments. The only way to obtain rigorous information from numerical computations is to perform rigorous error analysis by means of interval arithmetic. In Section 3 we will discuss methods to analyse, in a rigorous manner, structures (like the Poincaré section) that are constructed with numerical computations. In Sections 4 and 5 we will show how the existence of a periodic trajectory in a dynamical system can be proven from the numerical observation of an attracting periodic orbit.

2.12 Symbolic dynamics and chaos

Up to this point we have concentrated on finding relatively simple dynamical structures such as fixed points and periodic orbits. However, nonlinear systems can exhibit extremely complicated structures, often referred to as chaotic dynamics or simply chaos. While there is no universally accepted definition of chaos, there are particular examples that everyone agrees are chaotic. The best understood definitions of chaos are those that can be expressed in terms of symbolic dynamics.

Definition 2.69. Let f : X → X and g : Y → Y be continuous maps. A homeomorphism ρ : X → Y is a topological conjugacy from f to g if ρ ◦ f = g ◦ ρ, or equivalently if the square

    X --f--> X
    |ρ       |ρ
    v        v
    Y --g--> Y

commutes. The relationship between f and g can be weakened by assuming only that ρ is a continuous surjective map; in this case it is called a topological semiconjugacy.
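A classical illustration of a topological conjugacy, not taken from the results above, is the tent map and the logistic map on [0, 1], which are conjugate via ρ(x) = sin²(πx/2). The following sketch verifies the conjugacy relation ρ ◦ T = g ◦ ρ numerically on a grid of sample points:

```python
import math

def tent(x):
    """Tent map T(x) = 2x for x <= 1/2, and 2 - 2x otherwise."""
    return 2 * x if x <= 0.5 else 2 - 2 * x

def logistic(x):
    """Logistic map g(x) = 4x(1 - x)."""
    return 4 * x * (1 - x)

def rho(x):
    """Conjugating homeomorphism rho(x) = sin^2(pi x / 2) of [0, 1]."""
    return math.sin(math.pi * x / 2) ** 2

# Check rho(T(x)) == g(rho(x)) up to floating-point error on a grid.
for i in range(1, 200):
    x = i / 200
    assert abs(rho(tent(x)) - logistic(rho(x))) < 1e-12
```

Since ρ is a homeomorphism, every orbit of the tent map corresponds to exactly one orbit of the logistic map, as in the orbit correspondence discussed below the definition.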


If γ_x : Z → X is a full trajectory of f through x and ρ : X → Y is a homeomorphism, then σ_{ρ(x)} : Z → Y given by

σ_{ρ(x)} := ρ ◦ γ_x

is a full trajectory of g through ρ(x). Since ρ is a homeomorphism, ρ^{−1} exists, and hence, given a full trajectory γ_y of g through y, the map σ_x := ρ^{−1} ◦ γ_y is a full trajectory of f through x := ρ^{−1}(y). Therefore, if two dynamical systems are related by a topological conjugacy, then there is an exact correspondence among all the orbits of each system.

To show the existence of an invariant set that exhibits chaotic dynamics, one can study the dynamical properties of a Poincaré section of the system [139, 140].

Theorem 2.70 (Theorem 4.25 in [64]). Let f : X → X be a homeomorphism on a locally compact metric space. Let M = M_0 ∪ M_1 be an isolating neighbourhood

under f, where M_0 and M_1 are disjoint compact sets. Assume that

Con_n(M_k) =
    (Z, id)   if n = 1,
    0         otherwise,

for k = 0, 1. Then the sets (M_k ∩ f(M_k)) ∪ (M_k ∩ f(M_l)) ∪ (M_l ∩ f(M_l)), for k, l ∈ {0, 1}, k ≠ l, are isolating neighbourhoods. For k, l ∈ {0, 1}, k ≠ l, let M_kl = M_k ∩ f(M_l). If additionally χ_*(M_kl, f) (the graded automorphism on CH_*(M_kl, f), see Definition D.14) is not conjugate to the identity, then there exists d ∈ N and a continuous surjection ρ : Inv(N, f) → Σ_2, the symbol space on 2 symbols (see Definition C.4), such that

ρ ◦ f^d = σ ◦ ρ,

where σ : Σ_2 → Σ_2 is the shift dynamics on Σ_2.

However, it must be emphasised that Theorem 2.70 shows only the existence of an unstable invariant set which maps onto a horseshoe. Though the set may lie within a strange attractor, we do not yet have sufficiently strong abstract results to prove that the whole attractor is chaotic. The semiconjugacy onto the shift dynamics is a continuous surjective map ρ : Inv(N, f) → Σ_k such that the square

    Inv(N, f) --f--> Inv(N, f)
    |ρ               |ρ
    v                v
    Σ_k ------σ----> Σ_k

commutes.
