• No results found

Ramp approximations of finitely steep sigmoid control functions in soft-switching ODE networks

N/A
N/A
Protected

Academic year: 2021

Share "Ramp approximations of finitely steep sigmoid control functions in soft-switching ODE networks"

Copied!
57
0
0

Bezig met laden.... (Bekijk nu de volledige tekst)

Hele tekst

(1)

Ramp Approximations of Finitely Steep Sigmoid Control

Functions in Soft-Switching ODE Networks

by

Graham Quee

B.Sc., University of Victoria, 2017

A Thesis Submitted in Partial Fulfillment of the

Requirements for the Degree of

MASTER OF SCIENCE

in the Department of Mathematics and Statistics

c

Graham Quee, 2019

University of Victoria

All rights reserved. This thesis may not be reproduced in whole or in part,

by photocopy or other means, without the permission of the author.

(2)

Ramp Approximations of Finitely Steep Sigmoid Control

Functions in Soft-Switching ODE Networks

by

Graham Quee

B.Sc., University of Victoria, 2017

Supervisory Committee

Dr. Roderick Edwards, Supervisor

Department of Mathematics and Statistics

Dr. Junling Ma, Departmental Member

Department of Mathematics and Statistics

(3)

Abstract

In models for networks of regulatory interactions in biological molecules, the sigmoid relationship between concen-tration of regulating bodies and the production rates they control has lead to the use of continuous time ‘switching’ ordinary differential equations (ODEs), sometimes referred to as Glass networks. These Glass networks are the re-sult of a simplifying assumption that the switching behaviour occurs instantaneously at particular threshold values. Though this assumption produces highly tractable models, it also causes analytic difficulties in certain cases due to the discontinuities of the system, such as non-uniqueness. In this thesis we explore the use of ‘ramp’ functions as an alternative approximation to the sigmoid, which restores continuity to the ODE and removes the assumption of infinitely fast switching by linearly interpolating the focal point values used in a corresponding Glass network. A general framework for producing a ramp system from a certain Glass network is given. Solutions are explored in two dimensions, and then in higher dimensions under two different restrictions. Periodic behaviour is explored in both cases using mappings between threshold boundaries. Limitations in these methods are explored, and a general proof of the existence of periodic solutions in negative feedback loops is given.

(4)

Contents

Introduction 1

1 Introduction 1

1.1 Ramps and Shifts . . . 2

1.2 Taking a Glass Network From Heaviside Switches to Ramp Control Functions . . . 4

2 Solutions in Two Dimensions 6 2.1 Example 1 – Equation 10 of Plahte and Kjøglum 2005 [24] . . . 6

2.2 Cases by Region . . . 9

2.3 Boundary Mappings 1 . . . 14

2.4 Example 2 – A Two Dimensional Periodic Solution . . . 15

3 Higher Dimensions With Region Restrictions 21 3.1 Restricted Solutions in Higher Dimensions . . . 21

3.2 Region Escape Times . . . 23

3.3 Boundary Mappings 2 . . . 26

3.4 Example 3 - The Symmetric 3-Loop . . . 28

4 Affine Systems Without Region Restrictions 34 4.1 Representations Using Piece-wise Constant Matrices . . . 34

4.2 Feedback Loops . . . 36

4.3 Existence of Periodic Solutions in Negative Feedback Loops . . . 37

(5)

List of Figures

1 Trajectories of equation (3) with various ramp widths. . . 6

2 Zoomed trajectories of (3) showing only the nonlinear region R1∩ R2, and R2∩ {x | x1≥ ϕ}. . . 8

3 Trajectories of equation (3) in the specific case that ∆i= 1/γ1− 1/γ2. . . 9

4 Phase plane of non-self-regulatory 2-d systems in the case where k1 and c2are both positive. . . 14

5 The stable periodic solution of example 2. . . 19

6 Plots of the Poincar´e map and flow-field with trajectories, nullclines, and thresholds of rotations on the focal points of a simple negative feedback loop. . . 20

7 Ti(t) viewed as the difference between the line, l(t), and etfor k1i > 1 and ki1= 1. . . 24

8 Ti(t) viewed as the difference between the line, l(t), and etfor k1i < 1. . . 26

9 Digraph representation of equation (10) with corresponding regions represented by the graph. . . 28

10 Plot of all required conditions on the existence of a fixed point of the Poinar´e map for (10) with various ramp widths. . . 32

(6)

Dedicated to my sister Heather. She will never understand a word of this, yet she was essential to it eventually being written.

(7)

1

Introduction

The theory of regulatory networks of cellular DNA interacting with its genetic protein products, and other various chemical species, has developed since the publication of Jacob’s and Monod’s ground breaking experiments in 1961 [17]. Qualitatively, some simple biological control networks exhibit behaviour that is characteristic of a sigmoid relationship between the concentration of regulating bodies and the production rates of their end products. Hill functions of the form

H(x) = x

q

ϕq+ xq

have been used as control functions to approximate this behaviour in Ordinary Differential Equation (ODE) models of these regulation networks. It has been argued that in these simple cases, models which include Hill functions do not significantly differ from those that use Heaviside step functions. For example, in simple negative feedback loops neither the periods nor the phase relationships change appreciably in their oscillatory behaviour [13, 11, 12].

The use of these discontinuous approximations was developed by Glass into a class of piece-wise linear switching networks [7, 8, 9, 10], more recently referred to as Glass networks. In Glass networks, the sigmoid driven soft switching networks,

˙

xi= Si(x) − γixi,

where Siis some regulatory function that includes the Hill functions of the system state variables x = hx1, x2, . . . , xni

and γi is the natural decay rate of xi, become the hard switching Glass networks,

˙

xi = Gi(˜x) − γixi,

where ˜x = h˜x1, ˜x2, . . . , ˜xni is a Boolean vector that is threshold dependent in each state variable via the step

functions ˜ xi= ( 0 xi< mi 1 xi> mi .

In practice, each coordinate variable can affect the system at multiple different thresholds, and there is clear reason to believe that the effect of a chemical species in one interaction should express at a different threshold from another. However, we choose to avoid this by assuming that in this case we can increase the dimensionality of the system by defining a new variable as equal to the old variable but with a threshold that is different. In the body of this paper it will become clear that this is advantageous, as solutions become much simpler when the number of simultaneously soft switching interactions is low. It is also believed that this will have no effect on the end result, as previous work has shown that the behaviour of a multi-threshold system is not changed when embedded into a higher-dimensional single-threshold system [2].

Since Gi(˜x) can only take 2n possible values, {ki0, k1i, . . . , k 2n−1

i }, the result is a bounded system of decoupled

piece-wise constant ODE’s in each region where all xi6= mi. In these regular regions, if Gi(˜x) = kji the solution in

each coordinate is simply

xi(t) = (x0i − φ j i)e −γit+ φj i, where φ j i = kji γi . If we take the focal point for region j, φj= hφj1, φj2, . . . , φj

ni, then it is easy to see that x → φj as t → ∞. This

behaviour is maintained if the focal point lies inside the region, in which it is an asymptotically stable fixed point, or until a threshold boundary is crossed. In the latter case, the focal point changes, resulting in an abrupt change in the direction of the trajectory at this point [1].

The lack of definition for this switching function at the threshold value xi = mi is intentional, in the sense

that it is viewed as a limiting case for the Hill functions as their steepness parameter q → ∞. The discontinuity at the threshold values of the limiting case violates the conditions of standard uniqueness-existence theorems for conventional ODE models. This can lead to behaviour that is difficult to analyze in examples outside of the original notion of “simple” biological networks.

One such case occurs when the local flow on both sides of a threshold points towards the threshold boundary. Here the flow cannot escape the boundary, which is referred to as a black wall. Likewise, if the local flow on both sides of a threshold boundary points away from the threshold, then this white wall cannot be reached. In both cases the behaviour along the walls must be analyzed in terms the behaviour of the original soft switching network where q is taken to be large but finite [25]. Single threshold boundaries could be considered reasonably straightforward; however, at intersections of threshold boundaries a black wall in one variable may transition into a white wall as the other threshold is crossed. It is in these small regions that one area of complexity arises, as the flow along the black

(8)

wall can lead down towards this intersection where the solution beyond this point is non-unique in the limiting case. In this example we would be faced with looking at the interactions between two finitely steep sigmoids, and much of the tractability which motivated the original Glass networks is lost.

In spite of these issues, a body of work developed on Glass networks, including the existence and stability of periodic orbits [13, 14, 23] and chaotic dynamics in higher dimensional systems [22]. By characterizing some assumptions which avoid the above complications and insure that trajectories are well behaved when crossing the threshold boundaries, a thorough analysis excluding these white and black walls was summarized and solidified by Edwards [1]. Others worked directly on these problem thresholds, believing that the discontinuities present a major problem in cases such as homeostatic regulation, where stationary points can occur very close to thresholds. This lead to the development of set valued Filippov solutions by Gouz´e and Sari [15], and the singular perturbation methods of Plahte, Mestl, and Omholt [26].

Though the solutions in the sense of Filippov provided a rigorous method of filling the threshold gaps, the lack of uniqueness is troubling when trying to connect the outcome to the biological basis of the models. It was immediately argued by Plahte that there appears to be no additional criterion which would allow us to know which solution is favourable. Problem examples have been explored in both frameworks, and contrasting the results has shown that the Filippov methods can produce solutions outside of what is considered to be biologically realistic, leaving the singular perturbation methods to appear more favoured. This is discussed in more detail by Machina, and colleagues [20, 19].

However, this is no guarantee that the singular perturbation methods retain the tractability which motivated the initial Glass networks, nor that the method will lead to a solution at all. Moreover, this type of analysis is performed to determine how best to handle the limiting case of q → ∞ by examining when q is finite, but arbitrarily large. It should be remembered that the motivation for the use of these infinitely fast switches was to simplify the analysis of biological switching behaviour that is certainly not infinitely fast. In many cases, such as Michaelis–Menten kinetics, the control functions are not steep at all and correspond to small values of q. In these cases, it seems that a method which better approximates the soft switching of the original sigmoid functions may lead to solutions which are more biologically realistic; particularly in examples that diverge from the initial simple systems studied by Glass and Kauffman. Achieving such an approximation by continuous functions would avoid the complication of the threshold regions altogether. In an attempt to retain as much tractability of the original Glass networks as possible while continuously approximating the behaviour of a finitely steep sigmoid switch, we explore the use a piece-wise linear ramp control function. These functions continuously connect the regular regions of the Glass networks, which remain the same outside of an interval between two-thresholds for each state variable, each equidistant from the original mi threshold.

1.1

Ramps and Shifts

Following a similar change of variables in Edwards 2000, except without the shift by the threshold denoted here as mi, the general form for the Glass network with equal decay rates can be expressed by

˙ xi= −xi+ Gi(˜x) i = 1, . . . , n, where ˜ x = h˜x1, ˜x2, . . . , ˜xni and x˜i= ( 0 xi< mi 1 xi> mi . This is viewed as a limiting case of a smooth network in which

˜

xi= s(xi),

where s is any sigmoid function with range [0,1) and s(mi) = 1/2.

Instead of this limiting case, we will analyze the network where the soft-sigmoid switch is approximated using a ramp function, s(xi) ≈ x¯i=        0 x ≤ θi x−θi ϕi−θi θi< x < ϕi 1 ϕi≤ x ,

where θi= mi− di and ϕi= mi+ di represent thresholds corresponding to the beginning and end of the switching

region or ramp region of xi, respectively. The new system,

˙

(9)

¯

x = h¯x1, ¯x2, . . . , ¯xni,

maintains a continuous right hand side at every point in the vector field and can linearly switch finitely fast in each variable, at a rate dependent on di. It should be noted, however, that the system as a whole is not piece-wise linear

in general, due to potential non-linear interactions when two variables are switching at a time.

We can shift the system by any constant so that the analysis is more convenient. Defining ∆i= ϕi− θi = 2di,

let us examine the change to the system when we apply an arbitrary leftward translation by si,

ui= xi− si.

We can define ¯ui so that

¯ ui =        0 ui≤ θi− si ui−θi+si ∆i θi− si< ui< ϕi− si 1 ϕi− si≤ ui =        0 xi− si≤ θi− si xi−si−θi+si ∆i θi− si< xi− si< ϕi− si 1 ϕi− si≤ xi− si =        0 xi≤ θi xi−θi ϕi−θi θi< xi< ϕi 1 ϕi≤ xi = x¯i,

and the differential equation (DE) system becomes, ˙

xi= −xi+ Gi(¯x) = ˙ui= −ui− si+ Gi(¯u).

There are two immediately obvious shifts that make certain elements of the analysis more easy. The first is the translation by the first threshold, which allows for the ramp functions to satisfy ¯ui(0) = 0, reduces the number of

parameters by setting the first threshold to zero, and is an appropriate approximation for Michaelis-Menten type functions when disallowing negative values. The second is translation by the midpoint of the ramp and original threshold mi, which is very useful for studying systems with a large amount of symmetry.

First Threshold Shift

Shifting by the first threshold, θi defines the new system in terms of

ui= xi− θi, so that ˙ ui= −ui− θi+ Gi(¯u), where ¯ ui =        0 ui≤ 0 ui ∆i 0 < ui< ∆i 1 ∆i≤ ui.

Midpoint shift

This shift gives the new system in terms of

ui= xi− mi, so that ˙ ui= −ui− mi+ Gi(¯u), where ¯ ui=        0 ui≤ −di ui+di 2di −di< ui< di 1 di< ui.

(10)

1.2

Taking a Glass Network From Heaviside Switches to Ramp Control Functions

Recalling that in the fast switching system each region could be defined by the binary vector ˜

x = h˜x1, ˜x2, . . . , ˜xni,

we can think of Gi as a mapping from Zn2 to the set of possible ith coordinates of the focal points for a given

non-shifted system, {φ0

i, φ1i, . . . , φ 2n−1

i }. It is convenient to order these regions in a coherent way so that the numbering

of the region corresponds to the conventional ordering of these binary vectors, as appears in Hastings 1977 [16]. Given an integer r ∈ [0, 2n− 1], we can express r as the binary expansion r = b1(r)b2(r) . . . bn(r), and let

Cr= {x ∈ Rn| (−1)bi(r)(xi− mi) < 0, 1 ≤ i ≤ n},

so that x ∈ Crif and only if ˜x = hb1(r), b2(r), . . . , bn(r)i := ˜xr. We then take the ith coordinate of the focal point

to be φr

i for the ODE system in Cr.

Given any ˜xrwe can define a function gr: Zn2 → Z2 that takes the value 1 at ˜xrand 0 otherwise. Let,

gr(˜x) = n Y i=1 fir(˜xi), where fir(xi) = ( xi bi(r) = 1 1 − xi bi(r) = 0.

We can use this function to give the focal point in the region, φri, by taking the product φrigr(˜x) = ( φr i x = ˜˜ xr 0 x 6= ˜˜ xr, so that 2n−1 X r=0 φrigr(˜xj) = φji ∀j = 0, . . . 2 n− 1, and thus Gi(˜x) = 2n−1 X r=0 φrigr(˜x). (1)

For example, a 2-d system in the ramp framework, the general form of the network given by (1) is ˙

x1= −x1+ φ01(1 − ¯x1)(1 − ¯x2) + φ11(1 − ¯x1)¯x2+ φ21x¯1(1 − ¯x2) + φ31x¯1x¯2 (2)

˙

x2= −x2+ φ02(1 − ¯x1)(1 − ¯x2) + φ12(1 − ¯x1)¯x2+ φ22x¯1(1 − ¯x2) + φ32x¯1x¯2.

Certainly, with the firdefined as they are, there is no issue in evaluating each for x ∈ [0, 1], which extends the domain of each gr, and so Gi, to be [0, 1]n. Thus we are justified in using the expression Gi(¯x) = Gi(¯x1, ¯x2, . . . , ¯xn).

However, there are infinitely many more extensions that, even continuously, agree with Gi on the binary vector

space; such as applying any non-zero power to the xi term in each fir.

The reason we choose to focus on this particular Giis that if we select one xj, the ¯xj term only appears once in

the product of fr

i functions which determines each gr, and Gi is given by a linear combination of the grfunctions.

So when keeping all xi fixed for i 6= j, this is a line in ¯xj which connects the two points (θj, Gi◦ ¯xj(θj)) and

(ϕj, Gi◦ ¯xj(ϕj)). Therefore this is a representation of the unique extension of Giover [0, 1]n which is affine in each

xjon the region xj∈ [θj, ϕj]. Since the motivation for using the ramp control functions was to attempt to maintain

tractability via their piece-wise linearity, restricting ourselves to such Gifunctions ensures that this linearity ‘passes

through’ the regulatory function so that any such tractability is not lost.

The other issue that needs to be addressed when converting a Glass network from Heaviside switches to ramps is that the domain of each variable must be necessarily split into three logical regions instead of just two. We refer to the region

Ri= {x ∈ Rn| θi≤ xi≤ ϕi},

as the ramp region for the variable xi, which overlap in regions corresponding to the previous threshold intersections.

It is convenient to apply a shift to the original ˜x switches so that the ramp functions can be expressed as the sum of a switch and a linear segment. That is,

¯

(11)

where x =ˆ  x − θ ϕ − θ  1(θ,ϕ)(x) = (x−θ ϕ−θ, θ < x < ϕ 0, otherwise and x˜0 =1[ϕ,∞)(x) = ( 0 x < ϕ 1 x ≥ ϕ.

This allows us to express that x is not in the ramp region of any variable simply as ˆx = 0, or that x ∈ Ri as

ˆ

xi6= 0. Note that the regions where ˆx = 0 are precisely the regions where the ramp system agrees with the Glass

network. These regions retain the focal points of the original system, and therefore it is still useful to apply the ordering to them as we did before. We modify them slightly to exclude the ramp regions, and set

Cr0 = {x ∈ Rn| (−1)bi(r)(x

i− mi) < −di, 1 ≤ i ≤ n},

which we refer to as the corner regions. From here forward we will not use the original definitions of ˜x and Cr, and

(12)

2

Solutions in Two Dimensions

We will begin by considering networks of only 2 dimensions to compare the dynamics of the ramp control functions to known examples of Glass networks. The number of cases for ramp interactions is small in this dimension, so we can analyze all the possible situations quite easily, even for tricky examples of hard switching systems. The solutions we find here eventually become very useful for understanding large sections of phase space in higher dimensional examples without much alteration; but first we study an example.

2.1

Example 1 – Equation 10 of Plahte and Kjøglum 2005 [24]

Given the system,

˙

y1= Z11+ Z21− 2Z11Z21− γ1y1

˙

y2= 1 − Z12Z22− γ2y2,

where Zi,j represents some soft-switching sigmoid function Zi,j = Σ(yi, mi, qi,j) with threshold mi and steepness

parameter qi,j, we may study a similar system using ramp control functions instead. By observing the focal points

of the original system we can obtain an analogous system using (2) up to a scaling using, φ0= h0, 1/γ2i

φ1= h1/γ1, 1/γ2i

φ2= h1/γ1, 1/γ2i

φ3= h0, 0i,

where φr= hφr

1, φr2i but keeping degradation rates equal. Then (2) reduces to

˙ x1= −x1+ (1 − ¯x1)¯x2 γ1 +x¯1(1 − ¯x2) γ1 (3) ˙ x2= −x2+ (1 − ¯x1)(1 − ¯x2) γ2 +(1 − ¯x1)¯x2 γ2 +¯x1(1 − ¯x2) γ2 ,

The behaviour of trajectories shown in Fig.2 of [24] can be recovered using different values of ∆i corresponding

to the different steepness parameters for the sigmoid control functions, as shown by numerical simulations in Fig. 1 here. The trajectories appear overall similar, but there are apparent differences that seem most extreme near the nullclines within the ramp regions, and in the basins of attractions of the two stable fixed points.

Figure 1: Trajectories of (3) with γ1 = 0.6, γ2 = 0.9 m1 = m2 = 1, and from left to right, ∆1 = ∆2 =

(13)

If we assume that the threshold values are equal across variables, and after shifting the system by the first threshold, in R1∩ R2 we can write the system as

˙ u1= −u1− θ + 1 γ1  1 −u1 ∆  u2 ∆ + 1 γ1  1 − u2 ∆  u1 ∆ = −u1− θ + u1 ∆γ1 + u2 ∆γ1 −2u1u2 ∆2γ 1 ˙ u2= −u2− θ + 1 γ2  1 −u1 ∆   1 −u1 ∆  + 1 γ2  1 − u1 ∆  u2 ∆ + 1 γ2  1 −u2 ∆  u1 ∆ = −u2− θ + 1 γ2 − u1u2 ∆2γ 2 ,

Steady states, u∗ = hu∗1, u∗2i, can be found as the simultaneous roots of these equations, which results in the polynomial, u∗21 (γ1∆ − 1) ∆2 + u ∗ 1  γ1γ2∆ − γ2+ γ1θ + 2(1 − γ2θ) ∆  + γ1γ2θ∆ + θγ2− 1 = 0,

where solutions are valid provided 0 ≤ u1, u2≤ ∆. Explicit general solutions become a bit of a mess of parameters,

but are quite easily solved when using example parameters. For small values of ∆i, the behaviour in the central

box is qualitatively the same as that reached using singular perturbation methods. For example, when using the parameters given in Fig.1 with ∆ = 0.1, the polynomial and the resulting solutions are

−94u∗21 + 7.754u∗1− 0.0937 = 0

and hu∗1, u∗2i ≈ h0.01470, 0.06117i = u∗1 or h0.06778, 0.01888i = u∗2. The Jacobian matrix of the system in R1∩ R2is

J = "∆−2u 2 ∆2γ1 − 1 ∆−2u1 ∆2γ1 − u2 ∆2γ 2 − u1 ∆2γ 2 − 1 # ,

and evaluating at the first equilibrium point results in two complex eigenvalues with negative real part corresponding to the spiral sink in this region.

J1=   ∆−2u∗2 ∆2γ1 − 1 ∆−2u∗1 ∆2γ1 − u∗2 ∆2γ2 − u∗1 ∆2γ2 − 1  ≈ −3.7222 11.7648 −6.7963 −2.6340  → λ1,2 = −3.1781 ± 8.9253i.

At the second equilibrium, the result is two real eigenvalues of opposite sign corresponding to the saddle point in the same region.

J2≈

 10.3719 −5.9279 −2.0982 −8.5315 

→ λ1= 11.0085 λ2= −9.1681.

In all other regions the analysis is much more simple. In the four corner regions, the ODE agrees with the Glass network. Checking fixed points can be done by verifying whether or not the focal point lies within it’s corresponding region (φr ∈ Cr). As before, if it does, then the focal point is a stable fixed point and all solutions in the region

approach it asymptotically. If it does not, then there is no fixed point in the region. The latter is true for all the corner regions in this example. The regions in at most one Riare at worst linear. In R2∩ {x | x1≥ ϕ}, (3) reduces

to ˙ x1= −x1+ (1 − ¯x2) γ1 ˙ x2= −x2+ (1 − ¯x2) γ2 , and the first threshold shift results in

˙ u1= −u1− θ + (∆ − u2) ∆γ1 ˙ u2= −u2− θ + (∆ − u2) ∆γ2 .

(14)

Figure 2: Zoomed trajectories of (3) showing only the nonlinear region R1∩ R2(left), and R2∩ {x | x1≥ ϕ} (right).

Parameters the same as Fig.1, except fixed ∆i= 0.1.

The equilibrium of the second equation does not depend on u1, and so both can be easily solved in terms of u2as,

u∗2= ∆(1 − γ2θ) (1 + γ2∆) , u∗1= ∆ − u ∗ 2 γ1∆ − θ = γ2ϕ γ1(γ2∆ + 1) − θ. (4)

These solutions are valid as long as 0 ≤ u2≤ ∆ ≤ u1. For ∆ = 0.1 we find that u∗≈ h0.49495, 0.013303i is a valid

fixed point in this region and the Jacobian matrix is J = " −1 − 1 γ1∆ 0 −1 − 1 γ2∆ # .

The eigenvalues can be read off of the diagonal of the Jacobian, and are both negative, so this fixed point is stable when it exists. This stable equilibrium, and the previous equilibria from the other region can be seen in Fig. 2. The limit of this equilibrium as ∆ → 0 is u?= h0, (γ

2/γ1)ϕ − θi, which corresponds to x∗= hθ, (γ2/γ1)ϕi = h1, 1.5i

and agrees with the limiting equilibrium of the original example, noting that if ∆ → 0, then both θ and ϕ → 1. However, this is one case of a bifurcation that occurs when the switching behaviour is not assumed to occur at near infinite speed, and the ramps can be used to simplify the analysis in this case. If u∗1< ∆, which, from (4), is equivalent to the condition

1 γ1

− 1 γ2

< ∆,

then this equilibrium does not exist. It can be shown that the u1nullcline is an increasing function of u1in R1∩ R2

and a decreasing function of u1 in R2∩ {x | x1 ≥ ϕ}. Then it must be that the nullcline, viewed with u2 as a

function of u1, starts and returns to u2 = θ continuously as u1 is increased through these regions. Since the u2

nullcline is a decreasing function of u1 in R1∩ R2 and constant and greater than θ in u1 in R2∩ {x | x1≥ ϕ}, if

they cross, then it must happen exactly twice; once in R1∩ R2 and once in R2∩ {x | x1 ≥ ϕ}. Otherwise, there

must be no crossings, or exactly one crossing that occurs at the boundary.

In the case that there are no crossings, the now-unique equilibrium in R1∩ R2 becomes the unique stable

equilibrium in the entire domain. When ∆ is chosen exactly equal to the bifurcation point, ∆ = 1/γ1− 1/γ2, the

other equilibrium is exactly on the boundary x2 = θ. Here, the Jacobian is undefined because, although the right

hand side (RHS) of the system is continuous at this threshold, its left and right derivatives disagree, so we cannot conclude that the equilibrium is stable from the Jacobian used in R2∩{x | x1≥ ϕ}. Interestingly, the point becomes

(15)

Figure 3: Trajectories of (3) with parameters the same as Fig.1, except ∆i= 1/γ1− 1/γ2.

asymptotically as predicted by the ‘right’ Jacobian, and trajectories on the left of the threshold follow the saddle behaviour predicted by the ‘left’ Jacobian of R1∩ R2, as can be seen in Fig. 3.

A case worth considering for (3) in general is if γ1 = γ2. Then 1/γ1 − 1/γ2 = 0 < ∆ for any positive

∆, so the equilibrium in R2∩ {x | x1 ≥ ϕ} cannot exist. For ∆ = 0, the equilibrium is that given previously,

x∗= hθ, ϕi = hm, mi, which is the threshold intersection of the corresponding Glass network.

2.2

Cases by Region

For the general 2d case, it is most convenient to work under the assumption that the system has been shifted by the first threshold. One unfortunate outcome of this shift is that the focal points for the shifted system will also be shifted, resulting in abundant notational clutter of φ − θ as the focal points of the new system. To avoid this, let us assume that x represents a system with θi= 0, ϕi= ∆i and φri the focal points in this shift for every i and r.

Case 1 covers all of the corner regions, where these systems agree with the original Glass network. If ˆx = 0, then ¯xi∈ {0, 1}, so the RHS of the system is affine in xi for each coordinate i, and the system becomes

˙

x =−1 0 0 −1

 x + b.

From (1), we know that this will always result in bi= φri, where r is determined by the region and fixed for each i.

This system is easily solvable using the formula

x(t) = xh(t) + Ψ(t)

Z

Ψ−1(τ )b(τ )dτ, (5)

where Ψ is the fundamental solution to the homogeneous system, xh. Alternatively, we can note that the system is

decoupled and easily solve each equation independently, but we will continue to make heavy use of the formula (5) as we continue. Either method results in the expected solution, with a stable equilibrium if and only if φr∈ Cr,

x(t) = (x(0) − φr)e−t+ φr.

(16)

becomes, depending on ˜x1, either ˙ x1= −x1+ φ01(1 − ¯x2) + φ11x¯2 = −x1+ φ01+ (φ1 1− φ01) ∆2 x2 ˙ x2= −x2+ φ02(1 − ¯x2) + φ12x¯2 = −x2+ φ02+ (φ1 2− φ02) ∆2 x2 if ˜x1= 0, or ˙ x1= −x1+ φ21+ (φ31− φ21) ∆2 x2 ˙ x2= −x2+ φ22+ (φ3 2− φ22) ∆2 x2.

if ˜x1= 1. For either, the system can be written as

˙ x =−1 a1 0 a2− 1  x + b, where a = 1

∆2(φr2−φr1), b = φr1 for r1, r2corresponding to the adjacent regions to R2, Cr1 and Cr2, where ˜x2= 0,

and 1 respectively. Though it is possible for a coordinate of φ not to change across R2, if neither change, this is

equivalent to the regulatory function being constant with respect to that variable. This degenerate case reduces to case 1, and the ramp region in question is essentially non-existent. If a1= 0, the system is diagonal and is solvable

using the same method as in case 1, but with a marginally different solution in the second component. This solution will be the same in the following case, due to x2being independent of the system in this upper-diagonal case. For

a fundamentally different solution it is required that a16= 0, for which there are two different solutions depending

on a2.

If only a2= 0, then the eigenvalues of the homogeneous system are repeated with λ1,2= −1. This corresponds

to the case where the 2ndcoordinates of the focal points do not change across the R

2 ramp region, i.e., x2 is not

self-dependent. Then the corresponding eigenvectors are given by 0 a1 0 0  v1= 0 ⇒ v1= 1 0  and 0 a1 0 0  v2= v1 ⇒ v2=  0 1/a1  , resulting in the homogeneous solution

xh(t) = c1e−t 1 0  + c2e−t  t 1/a1  . Then we have the fundamental solution matrix and its inverse

Ψ =e −t te−t 0 e−t/a1  and Ψ−1=e t −a 1tet 0 a1et  , so that Ψ−1b =b1e t− b 2a1tet b2a1et  Z Ψ−1b =b1e t− b 2a1et(t − 1) b2a1et  Ψ Z Ψ−1b =e −t te−t 0 e−t/a1  b1et− b2a1et(t − 1) b2a1et  =b1+ b2a1−b2a1t +b2a1t b2  =1 a1 0 1  b = xp.

(17)

Therefore the complete solution is given by x = xh+ xp, x(t) = c1e−t 1 0  + c2e−t  t 1/a1  +1 a1 0 1  b. We also have x(0) =x1(0) x2(0)  =  c1 c2/a1  +b1+ a1b2 b2  ⇒ c1= x0− b1− a1b2, c2= a1(y0− b2),

so that the finals solution for these regions is

x1(t) = a1(x2(0) − b2)te−t+ (x1(0) − b1− a1b2)e−t+ b1+ a1b2

x2(t) = (x2(0) − b2)e−t+ b2.

The asymptotics of this result are similar to that of the corners in that we can check if the non-decaying terms are within the region to see if the trajectory may stay within the region. If that point is not within the region, then the trajectory must eventually leave. However, unlike the corner regions, if this point is within the region it does not guarantee that the trajectory eventually approaches it. Due to the te−tterm, the initial increase in magnitude of this term may cause the trajectory to reach a threshold boundary before it decays.

Otherwise, the x2variable does regulate itself, causing the second coordinate of the focal point to change across

R2 and a2 6= 0. For the homogeneous system, the eigenvalues are λ1 = −1, λ2 = a2− 1, so the corresponding

eigenvectors are given by 0 a1 0 a2  v1= 0 ⇒ v1= 1 0  and −a2 a1 0 0  v2= 0 ⇒ v2= a1 a2  , resulting in the homogeneous solution

xh(t) = c1e−t 1 0  + c2e(a2−1)t a1 a2  . Here we have Ψ = " e−t a1e(a2−1)t 0 a2e(a2−1)t # ⇒ Ψ−1= " et −a1et/a2 0 e(1−a2)t/a 2 # , so that Ψ−1b = " b1et− a1b2et/a2 b2e(1−a2)t/a2 # Z Ψ−1b = " b1et− a1b2et/a2 b2e(1−a2)t/a2(1 − a2) # Ψ Z Ψ−1b = " e−t a1e(a2−1)t 0 a2e(a2−1)t # " b1et− a1b2et/a2 b2e(1−a2)t/a2(1 − a2) # =b1+ a1b2/(1 − a2) b2/(1 − a2)  =1 a1/(1 − a2) 0 1/(1 − a2)  b. Therefore the complete solution is given by x = xh+ xp,

x(t) = c1e−t 1 0  + c2e(a2−1)t a1 a2  +1 a1/(1 − a2) 0 1/(1 − a2)  b. We also have, x(0) =x0 y0  =c1+ a1c2 a2c2  +b1+ a1b2/(1 − a2) b2/(1 − a2)  ⇒ c2= y0 a2 − b2 a2(1 − a2) , c1= x0+ a1 a2 (b2− y0) − b1,

(18)

which gives a final solution in these regions of x1(t) =  x1(0) + a1(b2− x2(0)) a2 − b1  e−t+a1 a2  x2(0) − b2 1 − a2  e(a2−1)t+ b 1+ a1b2 1 − a2 x2(t) =  x2(0) − b2 1 − a2  e(a2−1)t+ b2 (1 − a2) .

As before, the constant terms can be checked to determine whether they are within the region of validity for the solution. It is still possible that the solution does not head to this point, but only in the case that a2> 1. Recalling

that a2= (φr22− φ r1

2 )/∆2, this could only occur if this coordinate of the focal point for the region where x2is above

the ramp is substantially larger than the focal point for the region where x2 is below the ramp, proportionally to

the width of the ramp itself. In particular, the distance between these focal points must be larger than the width of the ramp. If this is the case, the x2variable is strongly self-promoting, and the blow-up in x2is somewhat expected.

The end result is not significantly different from the case in which the focal point is above ϕ2, since this threshold

will eventually be hit and the dynamics will change.

In R1∩Rc2the system is just the lower diagonal analogue of the systems we have studied so far. The solutions are

analogous and can be found by performing a simple transformation on the original system to switch the variables. To summarize, there are exactly four solution types outside of R1∩ R2. If the region is such that Gi(x) is constant,

then the solution is

xi(t) = (xi(0) − φri)e −t+ φr

i, (S1)

where including if ˆxi6= 0 and xi is not self regulated. In any case, the ODE is of the form

˙

x = Ax + b

If xiis regulated by xj, where xj is not self-regulated, then taking ai= ai,j results in

xi(t) = ai(xj(0) − bj)te−t+ (xi(0) − bi− aibj)e−t+ bi+ aibj. (S2)

If xiis self-regulated, then ai,i= ai− 1, and the solution for xi is

xi(t) =  xi(0) − bi 1 − ai  e(ai−1)t+ bi (1 − ai) . (S3)

Finally, if xi is regulated by xj, which itself is self-regulated, then ai,j = ai, aj,j = aj− 1, and

xi(t) = xi(0) + ai(bj− xj(0)) aj − bi ! e−t+ai aj xj(0) − bj 1 − aj ! e(aj−1)t+ b i+ aibj 1 − aj . (S4) Case 3, the last region, is R1∩ R2, where explicit solutions are not guaranteed. We can still perform some general

analysis of the region in the 2-d case, as it is not unreasonable to expand (2). This will allow us to write the system in this region as ˙ xi= −xi+ φ0i + (φ 2 i − φ 0 i)¯x1+ (φ1i − φ 0 i)¯x2+ (φ0i − φ 1 i − φ 2 i + φ 3 i)¯x1x¯2 = −xi+ φ0i + (φ2 i − φ0i) ∆1 x1+ (φ1 i − φ0i) ∆2 x2+ (φ0 i − φ1i − φ2i + φ3i) ∆1∆2 x1x2 = −xi+ c0+ c1x1+ c2x2+ c3x1x2. (6)

We can look for equilibria of the general system by considering

0 = −x1+ c0+ c1x1+ c2x2+ c3x1x2

0 = −x2+ k0+ k1x1+ k2x2+ k3x1x2.

The symmetry of these equations makes solving for either variable essentially equivalent. Solving for x2 in each

results in x2= (1 − c1)x1− c0 c2+ c3x1 and x2= k0+ k1x1 1 − k2− k3x1 . Equating the two results in the polynomial,

(19)

so we can see that there will be at most two equilibria, each only valid if both coordinates satisfy 0 ≤ xi ≤ ∆i.

Analyzing the location of these equilibria in terms of the original focal points, either as the root of the polynomial or the intersection of the two rational functions, becomes extremely unwieldy considering each constant hides a sum of the focal coordinates. The Jacobian of the system is

c1− 1 + c3x2 c2+ c3x1

k1+ k3x2 −1 + k2+ k3x1

 .

If the determinant of this Jacobian is negative at the equilibrium values, then it is a saddle. We have seen from the previous example that this is possible. If it is positive, then the stability of the equilibrium is determined by the sign of the trace, with stability if T r(J ) < 0.

One tractable case of interest is when the system lacks self input. In this case, the ithcoordinate of the focal

points do not change across Ri, resulting in c1= k2= c3= k3= 0. The solutions for x2 reduce to

x2=

x1− c0

c2

and x2= k0+ k1x1,

the corresponding solution for x1reduces from a polynomial to the easily solvable linear equation

(c2k1− 1)x1+ c2k0+ c0= 0,

and the Jacobian becomes

−1 c2

k1 −1

 .

The trace is −2, so the equilibrium will be stable so long as the determinant, 1 − c2k1, is non-negative. This will

always be the case if the signs of c2 and k1 are opposite, which is so for any simple negative feedback loop. For

the determinant to be negative, c2 and k1 must be the same sign, and large enough in magnitude. Since their

magnitude is determined by the size of the change in the focal points for by each variable proportional to the width of each ramp, this change must be quite large in at least one of the variables for this to occur.

These facts hint that periodic solutions in 2-d may not be possible without self regulation, as is the case with Glass networks [1]. The nullclines for each variable are straight lines within their ramp region, which become constant outside of the ramp region. If k1 and c2 have the same sign, these nullclines either both increase or both

decrease as a function of x1, and then both nullclines connect the same two corner regions across the diagonal

(Fig.4). This means that the constant parts of the null clines must intersect in these corners, creating a stable equilibrium in each of these regions. Solutions in all adjacent regions head towards these two regions, and away from the unstable equilibrium of the centre, so periodic solutions appear impossible.

Otherwise, the signs of k1 and c2 are opposite and the system is a simple negative feedback loop. In this case

the equilibrium for the centre region, even if it does not sit in the region, is stable. Trajectories in this region will, though perhaps indirectly, spiral towards this equilibrium. Moreover, because the flow field is continuous, this must be true of the points along the boundary of this square region with corners, (θx, θy), (θx, ϕy), (θy, ϕx), and (ϕx, ϕy).

But since these four points are also part of their respective corner regions, the behaviour in these corner regions is in some way determined by the behaviour at these points. For example, if the equilibrium of the centre region were to lie in C0, then the flow at the point (θx, θy) would roughly be pointing towards the interior of C0, so there would

have to also be a stable equilibrium within C0under the dynamics of C0, and this point would be stable. However,

this is far from a rigorous proof, which so far remains elusive.

Another way to see that these two cases exhaust the options when no self regulation occurs is to note that the coordinate of a focal point cannot change across the ramp region for that coordinate. This means that φ0and φ2 must sit along the same line x1= c, as do φ1 and φ3, and that φ0 and φ1 sit on the same line x2 = c, as do φ2

and φ3. This means that the focal points must form the corners of a rectangle in R2. Starting with any orientation

where the above is satisfied, the possible arrangements of focal points can be studied as the symmetries of this rectangle using only axial flips along perpendicular bisectors. This leaves only the orientations:

φ2 φ3 φ0 φ1 φ3 φ2 φ1 φ0 φ1 φ0 φ3 φ2 φ0 φ1 φ2 φ3

When positioned so that there is one focal point in each region, the first and third correspond to the unstable central equilibrium, where the second and fourth are both negative feedback loops in opposite directions. These orientations are exhaustive up to translations, but it is these translations that lead to possible cases we have yet to study.

(20)

Figure 4: Phase plane of non-self-regulatory 2-d systems in the case where k1 and c2 are both positive.

With self regulation, there are substantially more possibilities which are not restricted to these square formations, even with the additional symmetries. Here, periodic solutions are possible, as we will see in example 2, which achieves a stable periodic orbit using a simple rotation of focal points in a square pattern corresponding to the second orientation above. We will analyze this example in terms of mappings between the appropriate threshold boundaries of each region the trajectory passes through in order, and ensuring that this trajectory stays inside regions we can solve analytically. First, some general statements about such mappings to make things easier.

2.3

Boundary Mappings 1

We continue as in [5], by solving the ODE in a region and finding the time it would take to reach each threshold boundary. The mapping is defined by the threshold crossing point occurring in the minimum positive time. We will proceed using cases 1 and 2 above (in Section 2.2).

In case 1 regions each coordinate of the system has solution (S1). If we assume that the exit boundary is some threshold x1

i, then we can solve for the time it takes to reach this point as

x1i = (x 0 i − φ r i)e−t ∗ 1+ φr i =⇒ t1= − ln x1 i − φri x0 i − φri ! . Then the value of each state variable on the exit boundary will be

x1j(x01, x02) =    x1 i i = j (x0j− φr j) x1 i−φri x0 i−φri  + φrj i 6= j. (7) In case 2, we can have solutions of all four types (S1-S4), and so will break the maps down into two sub-cases based on a2. If a2= 0, the solutions will be (S1) and (S2) only. We assume wlog that we are in Rj, and if the exit

point is calculated for a boundary that is one of the ramp thresholds, then set x1j= (x0j− φr1 j )e−t1+ φ r1 j =⇒ t1= − ln x1 j− φ r1 j x0 j− φ r1 j ! .

(21)

Recalling that bj= φrj1 for (S1) solutions x1i = ai(x0j− bj)t1e−t1+ (x0i − bi− aibj)e−t1+ bi+ aibj = ai(x0j− bj)t1e−t1+ x0ie −t1− (b i+ aibj)(e−t1− 1) = −ai(x0j− bj) ln x1 j− bj x0 j− bj ! x1 j− bj x0 j− bj ! + x0i x1 j − bj x0 j − bj ! − (bi+ aibj) x1 j− x0j x0 j− bj ! . (8)

However, it is different if the exit boundary is at a threshold that is orthogonal to the ramp boundaries. In this case, to solve for the time spent in the region, set

x1i = ai(x02− bj)t1e−t1+ (xi0− bi− aibj)e−t1+ bi+ aibj,

but this results in the transcendental equation et1 = x 0 i − bi− aibi x1 i − bi− aibj + ai(x 0 j− bj) x1 i − bi− aibj t1= r1+ r2t1,

which poses a problem for comparing the time values to reach each boundary. The time to exit the region cannot be explicitly solved for in this case, but the equation may be used to compare against other time solutions to verify which is smallest. This idea will be explored further later, after considering some higher dimensional solutions. For now, under the assumption that trajectories cross both threshold boundaries of Rj without intersecting any other

thresholds, this is enough to give the necessary mappings.

If a26= 0, the solutions can be (S3) and (S4). We assume again wlog that we are in Rj, and if the exit point is

calculated for a boundary that is one of the ramp thresholds, then set x1j = x 0 j− bj 1 − aj ! e(aj−1)t1+ bj 1 − aj =⇒ t1= 1 (aj− 1) ln x 1 j(1 − aj) − bj x0 j(1 − aj) − bj ! , so that x1i = x0i +ai aj (bj− x0j) − bi ! x1 j(1 − aj) − bj x0 j(1 − aj) − bj !1−aj1 +ai aj x0j− bj 1 − aj ! x1 j− bj/(1 − aj) x0 j− bj/(1 − aj) ! + bi+ aibj 1 − aj = x0i +ai aj (bj− x0j) − bi ! x1 j(1 − aj) − bj x0 j(1 − aj) − bj !1−aj1 +ai aj x1j− bj (1 − aj) ! + bi+ aibj 1 − aj . (9)

Otherwise, similar to before, when mapping to the orthogonal boundary, set x1i = x0i +ai aj (bj− x0j) − bi ! e−t1+ai aj x0j− bj 1 − aj ! e(aj−1)t1+ b i+ aibj 1 − aj =⇒ 1 = x 0 i + ai aj(bj− x 0 j) − bi x1 i + ai aj(bj− x 0 j) − bi ! e−t1+    ai aj  x0 j− bj 1−aj  x1 i + ai aj(bj− x 0 j) − bi   e (aj−1)t1 1 = r1e−t1+ r2e(aj−1)t1,

another transcendental equation in t1.

2.4

Example 2 – A Two Dimensional Periodic Solution

With the choice of θ1= θ2= 1/3, ϕ1= ϕ2= 2/3, and focal points

φ0= h1/2, 0i φ1= h0, 1/2i

(22)

we produce the system ˙ x1= −x1+ ¯ x1x¯2 2 + ¯x1(1 − ¯x2) + (1 − ¯x1)(1 − ¯x2) 2 ˙ x2= −x2+ ¯x1x¯2+ ¯ x1(1 − ¯x2) 2 + (1 − ¯x1)¯x2 2 .

This system has a periodic solution that stays in the linear regions, which we can find via mapping between the appropriate threshold boundaries. Starting by choosing the Poincar´e section of the boundary R1∩C1, where x01= θ1

and x02> ϕ2, we consider the dynamics of C1. The ODE becomes

˙ x1= −x1 ˙ x2= −x2+ 1 2,

so that a1= a2= b1= 0, and b2= 1/2. The focal point h0, 1/2i does not lie in C1, and there are only two possible

exit boundaries, x11= θ1which is also the start boundary, and x12= ϕ2. Evaluating the time to escape at the start

boundary predictably results in

t1= − ln x1 1 x1 1 ! = 0, but the time to reach the other boundary must be positive because

x02> ϕ2 x02− 1/2 > ϕ2− 1/2 1 > ϕ2− 1/2 x0 2− 1/2 ⇒ 0 < − ln ϕ2− 1/2 x0 2− 1/2 ! = t1.

The resulting map given by (7) reduces to x11= M1(x02) = 1 18x0 2− 9 and x12= ϕ2= 2 3.

For the next mapping, we look at the dynamics of R2∩ {x1< θ1}, where the system becomes

˙ x1= −x1− 3x2 2 + 1 ˙ x2= −x2+ 3x2 2 − 1 2,

so that a1 = −3/2, b1 = 1, a2 = 3/2, and b2 = −1/2. We will assume the solution passes through R2 so that the

exit boundary is x22 = θ2 to avoid examining the transcendental equations for the time being and verify that the

exit boundary is reached at a point so that x21 ≤ θ1. This time, since a2 6= 0, the mapping for x21 is given by (9),

which reduces to x21= M2(x11) = x1 1 4 + 5 24 and x 2 2= θ2.

Proceeding this way to generate mappings for the trajectory that passes through regions in the order of C1, R2, C0, R1, C2, R2, C3, R1, C1.

These mappings can be summarized as:

Mapping 1: Diagonal case, where the mapping (θ1, x02) → (x11, ϕ2) is given by (7), and x11= M1(x02).

M1(x02) = 1 18x0 2− 9 and d dx0 2 [M1] = −2 9(1 − 2x0 2)2 .

(23)

Mapping 2: Upper triangular case, where the mapping (x11, ϕ2) → (x21, θ2) is given by (9), and x12= M2(x11). M2(x11) = x1 1 4 + 5 24 and d dx1 1 [M2] = 1 4. Mapping 3: Diagonal case, where the mapping (x2

1, θ2) → (θ1, x32) is given by (7), and x32= M3(x21). M3(x21) = −1 18x2 1− 9 and d dx2 1 [M3] = 2 9(1 − 2x2 1)2 .

Mapping 4: Lower triangular case, where the mapping (θ1, x32) → (ϕ1, x24) is given by (9), and x42= M4(x32).

M4(x32) = x3 2 4 + 5 24 and d dx3 2 [M4] = 1 4.

Mapping 5: Diagonal case, where the mapping (ϕ1, x42) → (x51, θ2) is given by (7), and x51= M5(x42).

M5(x42) = 1 18x4 2− 9 + 1 and d dx4 2 [M5] = −2 9(1 − 2x4 2)2 . Mapping 6: Upper triangular case, where the mapping (x5

1, θ2) → (x61, ϕ2) is given by (9), and x61= M6(x51). M6(x51) = x51 4 + 13 24 and d dx5 1 [M6] = 1 4.

Mapping 7: Diagonal case, where the mapping (x61, ϕ2) → (ϕ1, x27) is given by (7), and x72= M7(x61).

M7(x61) = −1 18x6 1− 9 + 1 and d dx6 1 [M7] = 2 9(1 − 2x6 1)2 .

Mapping 8: Lower triangular case, where the mapping (ϕ1, x72) → (θ1, x28) is given by (9), and x82= M8(x72).

M8(x72) = x7 2 4 + 13 24 and d dx7 2 [M8] = 1 4.

We can now look at the composition of this mapping to attempt to find a fixed point. The full composition of these mappings can be reduced to

M8◦ M7◦ · · · ◦ M1(x02) =

4543 − 8122x02

6200 − 11088x0 2

. Setting M8◦ M7◦ · · · ◦ M1(x02) = x02 results in the quadratic

11088(x02)2− 14322x0 2+ 4543 = 0, with roots x02=31 ± √ 17 48 . Noting that (31 −√17)/48 u 0.56 < ϕ2, we will focus on x02 = (31 +

17)/48 u 0.732, as it is the only solution satisfying the requirement that ϕ2 ≤ x02. Checking numerically that the mapping does not cut the corners on the

centre region yields:

x02 0.7317314 > 2/3 = ϕ2 x11 0.2397412 < 1/3 = θ1 x2 1 0.2682686 < 1/3 = θ1 x3 2 0.2397412 < 1/3 = θ2 x4 2 0.2682686 < 1/3 = θ2 x5 1 0.7602588 > 2/3 = ϕ1 x6 1 0.7317314 > 2/3 = ϕ1 x7 2 0.7602588 > 2/3 = ϕ2 x8 2 0.7317314 > 2/3 = ϕ2

(24)

This ensures that our assumption that trajectories in ramp regions exit through the opposing ramp boundary and not the side boundary is valid.

To determine stability, we look at d dx0 2 [M8◦ M7◦ · · · ◦ M1(x02)] = M80(x 7 2) · M70(x 6 1) . . . M10(x 0 2) = 2−8· −2 9(1 − 2x02)2 · 2 9(1 − 2x21)2 · −2 9(1 − 2x42)2 · 2 9(1 − 2x61)2 = 2−4· 1 9(1 − 2x0 2)2 · 1 9(1 − 2x2 1)2 · 1 9(1 − 2x4 2)2 · 1 9(1 − 2x6 1)2 .

Note that each of the xi

1, xi2 satisfy xi1, xi2< 1/3 or xi1, xi2> 2/3, so we consider each case:

x > 2/3 −2x < −4/3 1 − 2x < −1/3 (1 − 2x)2> 1/9 1 = (1/9)(9) > 1 9(1 − 2x)2 and, x < 1/3 −2x > −2/3 1 − 2x > 1/3 (1 − 2x)2> 1/9 1 = (1/9)(9) > 1 9(1 − 2x)2.

Thus, the last 4 terms must each be between 0 and 1, and therefore d dx0 2 [M8◦ M7◦ · · · ◦ M1(x02)] = 2−4· 1 9(1 − 2x0 2)2 · 1 9(1 − 2x2 1)2 · 1 9(1 − 2x4 2)2 · 1 9(1 − 2x6 1)2 < 2−4.

So the fixed point of the Poincar´e map is stable, as can be seen in Fig. 5, along with many other points of this analysis.

As mentioned before, this set of focal points was chosen using a simple rotation on one of the negative feedback loops with no self regulation. Specifically, the rotation was given by using a value of r = 1/4 in the focal points

φ0= h3/4 − r, 1/4 − ri φ1= h1/4 − r, 1/4 + ri

φ2= h3/4 + r, 3/4 − ri φ3= h1/4 + r, 3/4 + ri,

for r ≥ 0. Leaving r as a parameter and shifting by the first threshold to make use of (6) (u = x − θ), we come to ˙ u1= −u1+ 5 12− r − 3 2u1+ 6ru2 ˙ u2= −u2− 1 12− r + 6ru1+ 3 2u2,

(25)

Figure 5: The stable periodic solution and some example trajectories of example 2. Purple X’s mark the entrance and exit points of example mappings across corner regions to verify each mapping. Yellow X’s do the same across ramp regions. The stable fixed point of the Poincar´e map is marked in red.

the equilibrium solution reduces to the linear equation

(c2k1− (1 − c1)(1 − k2))u1+ c2k0+ c0(1 − k2) = 0 (6r)2− 5 2   −1 2 ! u1+ 6r  −1 12 − r  + 5 12− r   −1 2  = 0 6  6r2+ 5 24  u1− 6r2− 5 24 = 0, ⇒ u1= 1 6, ⇒ x1= u1+ 1 3 = 1 2.

Plugging u1 = 1/6 into either of the previous equations yields u2 = 1/6, x2 = 1/2 as well. The Jacobian,

Determinant, and Trace of the system are J = " 6r − 1 −32 3 2 6r − 1 # , Det(J ) = (6r − 1)2+ 3 2 2 , and T r(J ) = 2(6r − 1).

The Determinant is always positive, so the equilibrium is a node with stability determined by the trace. For r < 1/6 the equilibrium is stable, and for r > 1/6 it is unstable. The bifurcation that occurs at r = 1/6 is quite interesting, as can be seen in Fig. 6. Because the system is linear in the central region, r = 1/6 corresponds with the node becoming a centre, and the appearance of many nested periodic solutions, all within the central region. This corresponds with the largest amplitude exactly fitting inside the central square, and all solutions from outside spiralling in to this stable periodic solution. As soon as r > 1/6, the equilibrium becomes unstable, and all trajectories in the centre spiral out to a stable periodic solution with amplitude increasing with r from the largest periodic orbit when r = 1/6.

(26)

Figure 6: Plots of the Poincar´e map (Top), approximated by numerically integrating along trajectories, and flow-field with trajectories, nullclines, and thresholds (bottom), of rotations on the focal points of a simple negative feedback loop with r = 1/8, 1/6, 1/4.

(27)

3

Higher Dimensions With Region Restrictions

In higher dimensions self regulation is not required to produce interesting behaviour, as it is thought to be for only two dimensions. We saw previously that self regulation results in additional solution types due to the independent eigenvector in the solution method for solving ˙x = Ax + b when some ai,i 6= −1. This is still true in higher

dimensions, which means the number of distinct cases for combinations of solution types expands quickly. For now, let us make the assumption that we have no direct self regulation, and that at most one ramp is switching at a time. Traditionally the non-self regulating assumption is written as

Gi(˜x1, . . . , ¯xi= 0, . . . , ˜xn) = Gi(˜x1, . . . , ¯xi= 1, . . . , ˜xn) ∀i ∀¯x,

but it may be useful to note that, for the soft-switching case, this is equivalent to the statement that for any fixed values of ¯xj, j 6= i, we have ¯x = ¯x(xi),

Gi(¯x1, . . . , ¯xi, . . . , ¯xn) = Gi(¯x(xi)) = Constant ∀i.

This includes the lines xj = cj for all j 6= i through all of the ramp regions Rj and Ri itself. These two additional

assumptions allow us to achieve a simple set of two solution types where trajectories are in at most one Ri at a

time.

3.1

Restricted Solutions in Higher Dimensions

If at most one variable is switching at a time, suppose the switching variable is xj, and note that the previous

assumption implies that Gj is constant in xj. Noting that ¯xi= ˜xi implies ˆxi= 0, let

Gi(˜x1, . . . , ¯xj = 0, . . . , ˜xn) = Gi(˜xr1) = φ r1 i Gi(˜x1, . . . , ¯xj = 1, . . . , ˜xn) = Gi(˜xr2) = φ r2 i , where r2= r1+ 2n−j and φrj1 = φ r2

j . Since all the ¯xk for k 6= j remain fixed across both these corner regions, and

the Rj region which connects them, gr(˜x) = 0 ∀ r 6= r1, r2in (1). Then Gi reduces to

Gi(¯x(xj)) = φri1(1 − ¯xj) + φri2x¯j= φri1+ (φ r2 i − φ r1 i )¯xj, so that in Rj ˙ ui= −ui− si+ φri1+ (φ r2 i − φ r1 i )¯uj = −ui+ (φr2 i − φ r1 i ) ∆j uj+ φri1− si+ (φr2 i − φ r1 i )(sj− θj) ∆j

= −ui+ ai,juj+ bi,j.

We can always take ai,j =

(φr2i −φr1i )

∆j , so the shift is absorbed by the bi,j, and we can see how this simplifies for

certain shifts: si= arbitrary bi,j= φri1− si+ (φr2i −φr1i )(sj−θj) ∆j si= mi bi,j= (φr1i +φr2i ) 2 − mi si= θi bi,j= φri1− θi

Then for an n-dimensional system under these assumptions, the system can always be transformed so that the switching variable is in the first coordinate, and ai,j = ai, bi,j = bi for simplicity, with bj,j = φjr1− sj = φrj2− sj,

we have ˙ u =         −1 0 0 · · · 0 a2 −1 0 a3 0 −1 .. . . .. ... an · · · −1         u +         b1 b2 b3 .. . bn         , θ1− s1< u1< ϕ1− s1, and ui≤ θ1− s1 or ϕ1− s1≤ ui, ∀i 6= 1.

(28)

It is only moderately more difficult to solve the system ˙u = Au + b using formula (5) as before. To solve the homogeneous system ˙u = Au, note that the characteristic polynomial of A is (−1 − λ)n= 0. Then λi= −1 with

multiplicity n and the eigenvectors are the solutions to

(A − λI)v =         0 0 0 · · · 0 a2 0 0 a3 0 0 .. . . .. ... an · · · 0         v = 0.

It is clear that any vector with a 1 in any coordinate but the first and zeros elsewhere provides a suitable eigenvector, which results in n − 1 linearly independent eigenvectors. Any vector independent of these first n − 1 would need to be non-zero in the first coordinate, but such a vector is not an eigenvector of this system. The nth

eigenvector must be a generalized eigenvector, which can be found with a careful selection of vector from the n − 1 dimensional λ = −1 eigenspace. Letting w = h0, a2, a3, . . . , ani the system (A − λI)v = w has a non-zero solution

v = h1, 0, 0, . . . i. So the homogeneous solution is

u(t) = c1         1 a2t a3t .. . ant         e−t+ c2         0 1 0 .. . 0         e−t+ · · · + cn         0 0 0 .. . 1         e−t = c1         0 a2 a3 .. . an         te−t+         c1 c2 c3 .. . cn         e−t,

and the fundamental solution and its inverse are given by

Ψ(t) =         e−t 0 0 · · · 0 a2te−t e−t 0 a3te−t 0 e−t .. . . .. ... ante−t · · · e−t         and Ψ−1(t) =         et 0 0 · · · 0 −a2tet et 0 −a3tet 0 et .. . . .. ... −antet · · · et         .

Computation of the particular solution results in

Ψ(t) Z Ψ−1b(τ )dτ =         b1 a2b1+ b2 a3b1+ b3 .. . anb1+ bn         ,

so that the full solution is

u(t) = c1         0 a2 a3 .. . an         te−t+         c1 c2 c3 .. . cn         e−t+         b1 a2b1+ b2 a3b1+ b3 .. . anb1+ bn         , where ci= ( ui(0) − bi i = 1 ui(0) − aib1− bi i 6= 1.

(29)

So in general, if uj is the switching variable and u(0) = u0, there are only two solution types,

uj(t) = (u0j − bj)e−t+ bj (S1)

ui(t) = ai(u0j− bj)te−t+ (u0i − aibj− bi)e−t+ aibj+ bi, (S2)

which are identical to (S1) and (S2) of the two dimensional case. Note also that if no variable is in its ramp regions, then ai= 0 ∀ i; or if ui is independent of uj, then ai = 0, and the solution (S2) reduces to agree with the solution

of the regular regions of the fast-switching system in each component for which this is true, which is the same as (S1),

ui(t) = (u0i − bi)e−t+ bi,

recalling that for any ui described by (S1), bi = φri − si is the shifted focal point for the variable in that region.

These two equations give the general solution in all regions where at most one variable is switching through the ramp phase at a time. If the ramp control functions are considered as small perturbations of the original Glass network, then each ∆ishould be small, and then so should be the collection of regions where these solutions become

invalid. In order to ensure that a given trajectory remains valid using these solutions, a condition will be required to determine that when the trajectory crosses one threshold to enter some Rj, that it will leave Rj before it crosses

another threshold into Ri for i 6= j. This idea is explored in the next section.

3.2

Region Escape Times

In Boundary Maps 1 (Section 2.3), we saw that mappings between the intersections of trajectories from threshold boundary to threshold boundary could be derived using the time of intersection with the boundary. However, we had not developed a method for confirming that a given mapping is valid over other potential mappings in general, and relied on numeric verification of the validity of certain maps when we used them in example 2. In particular, we would like to know for what initial points does a trajectory cross a ramp region from one side to the other before any other variable enters its ramp region. Here, we will do a thorough investigation of this issue in the case of solutions of type (S1) and (S2) to determine when these solutions will remain valid.

In every C_r, the solution to the ODE in each component is given by (S1), which is the same as in the regular regions of the fast-switching system. As before, for each u_i we can set u_i(t_i) = u_i^1, where u_i^1 is the value of u_i at each threshold boundary of the region in this coordinate (in a corner there will only be one such boundary). We can then solve for t_i as

t_i = −ln( (b_i − u_i^1) / (b_i − u_i^0) ).

It is easy to see that if the escape boundary u_i^1 is farther away from the shifted focal point than the initial point, then

|b_i − u_i^1| > |b_i − u_i^0|,

so the resulting time is negative, and thus escape through this threshold boundary cannot occur. Similarly, if u_i^0 and u_i^1 are on opposite sides of b_i, then the logarithm is undefined. If the distances are the same, then u_i^0 = u_i^1, in which case the escape boundary was chosen to be the same as the entry boundary and the associated traversal time is 0. In the fast-switching system this was an issue, but in our case continuity of the vector field implies that these boundaries must all be transparent walls unless the focal point is on the boundary, in which case it cannot be reached in finite time. Therefore there is no reason to give this case any special consideration. Otherwise, the distance between the escape boundary and the focal point is less than that of the initial point, and so t_i is positive.

The time spent in the region is

t* = min_i { t_i | 0 < t_i < ∞ }.

Note that if t_i is either negative or undefined for every i, then the focal point for the system is in this region, and so the solution will head to this point asymptotically.
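A minimal sketch of this escape-time computation (Python; the helper names are ad hoc, and returning infinity to mean "no escape through this boundary" is an implementation choice rather than notation used in the text):

```python
import numpy as np

def s1_escape_time(u0_i, u1_i, b_i):
    """Traversal time t_i = -ln((b_i - u1_i)/(b_i - u0_i)) for an (S1) coordinate.
    Returns inf when escape through this boundary cannot occur: the ratio is
    negative (initial and escape values on opposite sides of b_i), zero or one
    (zero traversal time), or greater than one (t_i would be negative)."""
    ratio = (b_i - u1_i) / (b_i - u0_i)
    if not (0.0 < ratio < 1.0):
        return np.inf
    return -np.log(ratio)

def region_escape_time(u0, u1, b):
    """t* = min_i { t_i | 0 < t_i < inf }.  Returns inf when every t_i is
    negative or undefined, i.e. the focal point lies inside the region."""
    return min(s1_escape_time(x0, x1, bi) for x0, x1, bi in zip(u0, u1, b))
```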

If the trajectory is not in a corner region, but still at most one variable (u_j) is passing through its ramp region, then it is possible that some coordinate has solution given by (S2). Similarly to before, we can set u_i(t_i) = u_i^1 to be the possible escape boundary and get

u_i^1 = a_i (u_j^0 − b_j) t_i e^{−t_i} + (u_i^0 − a_i b_j − b_i) e^{−t_i} + a_i b_j + b_i.


Figure 7: T_i(t) viewed as the difference between the line, l(t), and e^t, for k_1^i > 1 (left) and k_1^i = 1 (right) in Case 1. The positive intersection occurs at t = t_i.

This can be rearranged to give

0 = (u_i^0 − a_i b_j − b_i)/(u_i^1 − a_i b_j − b_i) + [ a_i (u_j^0 − b_j)/(u_i^1 − a_i b_j − b_i) ] t_i − e^{t_i} = k_1^i + k_2^i t_i − e^{t_i} = T_i(t_i).

This is a transcendental equation in t_i, and so the traversal time cannot be solved for explicitly. If this were the minimum positive time associated with the region, or if two or more solutions of this form must be compared, then the traversal time and mappings for this region cannot be determined without invoking a numerical method to find the first positive root of these T_i functions.
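Where such a root is needed, one possible numerical sketch is a coarse sign-change scan followed by bisection (Python; the function name, the search horizon t_max, and the grid size are arbitrary choices made here, and a tangency root would be missed):

```python
import numpy as np

def first_positive_root_T(k1, k2, t_max=50.0, n_scan=20000, tol=1e-12):
    """First positive root of T(t) = k1 + k2*t - exp(t) on (0, t_max], or inf
    if no sign change is found there."""
    T = lambda t: k1 + k2 * t - np.exp(t)
    ts = np.linspace(0.0, t_max, n_scan)
    vals = T(ts)
    for lo, hi, v_lo, v_hi in zip(ts[:-1], ts[1:], vals[:-1], vals[1:]):
        if v_lo * v_hi < 0.0:                   # bracketed a sign change
            while hi - lo > tol:                # plain bisection
                mid = 0.5 * (lo + hi)
                if T(lo) * T(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        if lo > 0.0 and v_lo == 0.0:            # root exactly at a grid point
            return lo
    return np.inf
```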

However, we are guaranteed that at least one variable has an escape time that is explicitly solvable in the form of (S1): the ramp-phase variable itself, which has no self-regulation. If another variable is also of the form (S1), and if this variable has a traversal time that is less than that of the ramp variable, the trajectory will enter a region where more than one variable is switching at a time. We can build a necessary and sufficient condition to guarantee that the ramp variable is also the exit variable, so that we avoid all of these complications. We begin by breaking it down into two conditions to handle the cases more succinctly. Let u_j be the switching variable.

Condition 1: t_j = min_i { t_i | 0 < t_i < ∞ and u_i(t) is of type (S1) with u_i^1 = (u_i^0 − b_i) e^{−t_i} + b_i }.

This condition guarantees that the time t_j for the switching variable to reach the boundary is no greater than the escape time of any other variable governed by (S1).

Condition 2

Condition 2 ensures that the escape time t_j is shorter than any of the escape times for variables governed by (S2), which are the first positive roots of the T_i functions, viewed here as the difference between a line and the exponential. The condition is broken up into two cases based on k_1^i for clarity, and all of the cases in Condition 2 are written as if Condition 1 is already satisfied. If any one of the sub-cases is satisfied in either of these two main cases, then Condition 2 is satisfied.

Case 1 is for k_1^i ≥ 1. Viewing T_i(t) as the difference between a line l_i(t) = k_1^i + k_2^i t and the exponential, k_1^i > 1 means that T_i(0) > 0. A visualization of this idea can be seen in Fig. 7. Since both the line and the exponential are monotonic, and lim_{t→∞} T_i = −∞, this condition guarantees a unique positive root of T_i, which is equal to t_i. If T_i(t_j) ≥ 0, then t_j ≤ t_i, so u_i cannot escape before the ramp variable.


If k_1^i = 1, then u_i^1 = u_i^0. Unlike variables governed by (S1), it is possible that u_i has a positive escape time even in this situation, though this would only occur in the rare case that the entry point to the region is the corner where both u_i and u_j have thresholds, and the trajectory at that point leads into the region where u_j is in its ramp and u_i is not. The corresponding visual for this case can be seen in Fig. 7. If k_2^i ≤ 1, we have no positive intersection and t_i = 0 is the only root of T_i. Otherwise, there is a positive root of T_i, and we require T_i(t_j) ≥ 0.

To summarize this case, if k_1^i ≥ 1, any one of the following ensures that either t_j ≤ t_i or t_i is undefined:

• k_1^i > 1 and T_i(t_j) ≥ 0
• k_1^i = 1 and k_2^i ≤ 1
• k_1^i = 1, k_2^i > 1, and T_i(t_j) ≥ 0

Otherwise, t_i < t_j.

Note that if u_i^0 < θ_i − s_i, then the region necessitates that u_i^1 = θ_i − s_i, and so u_i^0 < u_i^1. It follows that

u_i^0 − a_i b_j − b_i < u_i^1 − a_i b_j − b_i,

and so

1 < k_1^i = (u_i^0 − a_i b_j − b_i)/(u_i^1 − a_i b_j − b_i) ⟺ u_i^1 − a_i b_j − b_i < 0.

Writing out the right-hand side,

u_i^1 − a_i b_j − b_i = u_i^1 − (φ_i^{r_2} − φ_i^{r_1})(φ_j − s_j)/Δ_j − ( φ_i^{r_1} − s_i + (φ_i^{r_2} − φ_i^{r_1})(s_j − θ_j)/Δ_j ) < 0

⟺ θ_i − φ_i^{r_1} + (φ_i^{r_2} − φ_i^{r_1})(θ_j − φ_j)/Δ_j < 0,

using u_i^1 = θ_i − s_i. So in the regions defined by u_i^0 < θ_i − s_i we have

k_1^i > 1 ⟺ θ_i − φ_i^{r_1} + (φ_i^{r_2} − φ_i^{r_1})(θ_j − φ_j)/Δ_j < 0.

Likewise, if u_i^0 > ϕ_i − s_i, it must be that u_i^1 = ϕ_i − s_i in this region, and so u_i^0 > u_i^1. By a similar argument, we have

k_1^i > 1 ⟺ ϕ_i − φ_i^{r_1} + (φ_i^{r_2} − φ_i^{r_1})(θ_j − φ_j)/Δ_j > 0.
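A parameter-only check of these two conditions might be sketched as follows (Python; the function name, the argument names, and the boolean flag distinguishing the two region types are ad hoc, with varphi_i standing in for ϕ_i):

```python
def k1_exceeds_one(theta_i, varphi_i, phi_i_r1, phi_i_r2, theta_j, phi_j, delta_j, below):
    """Decide whether k_1^i > 1 (Case 1) from the parameters alone.
    below=True  corresponds to regions with u_i^0 < theta_i - s_i (escape at theta_i - s_i);
    below=False corresponds to regions with u_i^0 > varphi_i - s_i (escape at varphi_i - s_i)."""
    slope_term = (phi_i_r2 - phi_i_r1) * (theta_j - phi_j) / delta_j
    if below:
        return theta_i - phi_i_r1 + slope_term < 0.0
    return varphi_i - phi_i_r1 + slope_term > 0.0
```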

These give a necessary and sufficient condition to distinguish between Case 1 and Case 2 that depends only on the parameters of the system, so the condition on k_1^i can be checked without any knowledge of the solution or its initial conditions other than the region of interest.

For Case 2, we have k_1^i < 1, and the corresponding visual can be seen in Fig. 8. If k_2^i ≤ 1, then

dT_i/dt = k_2^i − e^t ≤ 0 and d²T_i/dt² = −e^t < 0,

and T_i has no positive root, so t_i is undefined and we can ignore this i. Assuming k_2^i > 1,

dT_i/dt = k_2^i − e^t = 0 ⟺ t = ln(k_2^i),

so the difference of the line and the exponential has a unique maximum at a positive value of t. If we evaluate T_i at this point, we can determine whether T_i has a positive root, because if

T_i(ln k_2^i) = k_1^i + k_2^i (ln k_2^i − 1) < 0, then T_i(t) < 0 for all t > 0.

Then this also leads to T_i having no positive root, and so t_i is undefined and we can ignore this i. If instead

T_i(ln k_2^i) = k_1^i + k_2^i (ln k_2^i − 1) ≥ 0,

then there are 1 or 2 positive roots of T_i (one when the line is tangent to the exponential). In either case, we need T_i(t_j) ≤ 0 and dT_i/dt(t_j) ≥ 0. If either T_i(t_j) > 0 or dT_i/dt(t_j) < 0, then there is a root of T_i that is less than t_j.


Figure 8: T_i(t) viewed as the difference between the line, l(t), and e^t, for k_1^i < 1. For small values of k_2^i, t_i is undefined, but for k_2^i large enough there can be 1 or 2 intersections. The first intersection is t_i, if it exists.

To summarize this case, if k_1^i < 1, any one of the following ensures that either t_j ≤ t_i or t_i is undefined:

• k_2^i ≤ 1
• T_i(ln k_2^i) < 0
• T_i(t_j) ≤ 0 and dT_i/dt(t_j) ≥ 0

Satisfying both Condition 1 and Condition 2 is necessary and sufficient to ensure that t_j is the escape time of the region, and that the ramp region R_j is fully traversed before any other variable enters its ramp region. We can rephrase this as the following proposition.

Proposition: Necessary and Sufficient Condition for t* = t_j in R_j

The escape time t* for the region R_j ∩ (∪_{i≠j} R_i)^c is t_j iff

t_j = min_i { t_i | 0 < t_i < ∞ and u_i(t) = (u_i^0 − b_i) e^{−t} + b_i },

and, for each coordinate i governed by (S2), any one of the following 6 cases is satisfied:

1. k_1^i > 1 and T_i(t_j) ≥ 0.
2. k_1^i = 1 and k_2^i ≤ 1.
3. k_1^i = 1, k_2^i > 1, and T_i(t_j) ≥ 0.
4. k_1^i < 1 and k_2^i ≤ 1.
5. k_1^i < 1 and T_i(ln k_2^i) < 0.
6. k_1^i < 1, T_i(t_j) ≤ 0, and dT_i/dt(t_j) ≥ 0.
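As a compact summary, the proposition can be checked numerically along the lines of the following sketch (Python; it assumes t_j, the finite positive (S1) escape times of the other coordinates, and the pairs (k_1^i, k_2^i) for the (S2) coordinates have already been computed, and all function names are ad hoc):

```python
import numpy as np

def T(t, k1, k2):
    """T_i(t) = k_1^i + k_2^i t - e^t for one (S2) coordinate."""
    return k1 + k2 * t - np.exp(t)

def dT(t, k2):
    """dT_i/dt = k_2^i - e^t."""
    return k2 - np.exp(t)

def s2_case_holds(k1, k2, t_j):
    """True iff one of the six cases of the proposition holds for this coordinate."""
    if k1 > 1.0:
        return T(t_j, k1, k2) >= 0.0                      # case 1
    if k1 == 1.0:
        return k2 <= 1.0 or T(t_j, k1, k2) >= 0.0         # cases 2 and 3
    if k2 <= 1.0:                                         # k1 < 1 from here on
        return True                                       # case 4
    if T(np.log(k2), k1, k2) < 0.0:
        return True                                       # case 5
    return T(t_j, k1, k2) <= 0.0 and dT(t_j, k2) >= 0.0   # case 6

def ramp_variable_escapes_first(t_j, other_s1_times, s2_ks):
    """t* = t_j iff t_j is minimal among the finite positive (S1) escape times
    (Condition 1) and every (S2) coordinate satisfies one of the six cases
    (Condition 2).  other_s1_times: escape times of the other (S1) coordinates;
    s2_ks: iterable of (k_1^i, k_2^i) pairs for the (S2) coordinates."""
    cond1 = all(t_j <= t for t in other_s1_times)
    cond2 = all(s2_case_holds(k1, k2, t_j) for k1, k2 in s2_ks)
    return cond1 and cond2
```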

3.3 Boundary Mappings 2

Under the assumptions of the previous section, we can revise our boundary mappings using fewer cases and higher dimensionality. The goal here is to write the full mapping for every coordinate in a fractional linear form as appears in [1], and we can do this by finding the mapping for each coordinate individually. If we assume that the above proposition always holds for the trajectory of interest (i.e., ramp regions are always traversed completely before another variable switches), then the result is only two types of mapping: the mapping for (S1) solutions and the mapping for (S2) solutions.
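As an illustration of the intended form (a direct consequence of (S1) rather than the full construction), consider a coordinate u_i of type (S1) in a region whose switching variable u_j is also of type (S1). Eliminating the traversal time through e^{−t*} = (b_j − u_j^1)/(b_j − u_j^0) gives

u_i^1 = b_i + (u_i^0 − b_i)(b_j − u_j^1)/(b_j − u_j^0),

which is fractional linear in the entry values u_i^0 and u_j^0.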
