
Diffusion generated by dynamical systems

Rens Dudink

July 12, 2020

Bachelor's thesis in Mathematics

Supervisor: prof. dr. Ale Jan Homburg

Korteweg-de Vries Instituut voor Wiskunde


Abstract

In this thesis we analyse how the behaviour of dynamical systems can give insight into different types of diffusion. As in [1], we compute the Mean Squared Displacement (MSD) and determine how the growth rate of the MSD in time characterises different types of diffusion.

Acknowledgements

I am grateful for the support of my thesis supervisor, prof. dr. Ale Jan Homburg; his comments and explanations throughout the process helped me write this thesis.

Title: Diffusion generated by dynamical systems

Author: Rens Dudink, rensdudink@icloud.com, 10539794
Supervisor: prof. dr. Ale Jan Homburg

Date: July 12, 2020

Korteweg-de Vries Instituut voor Wiskunde
University of Amsterdam
Science Park 904, 1098 XH Amsterdam
http://www.kdvi.uva.nl


Contents

1 Introduction
  1.1 Preliminaries

2 Dynamical systems and diffusion
  2.1 Dynamical systems
    2.1.1 A piecewise linear map
    2.1.2 Climbing sine map
    2.1.3 Pomeau–Manneville map
  2.2 Diffusion
    2.2.1 Mean Square Displacement
    2.2.2 Types of diffusion

3 Iterated function systems
  3.1 Random dynamical system
    3.1.1 Anomalous diffusion
  3.2 Il’yashenko

4 Conclusion


1 Introduction

1.1 Preliminaries

Throughout this thesis we iterate three maps: the piecewise linear map M_a, the climbing sine map S_a and the Pomeau–Manneville map P_{a,z}. We use these maps to describe all sorts of dynamics; they can be seen as the core of this thesis. To analyse the three maps we first need some tools. First we need to be able to find fixed points: these points stay fixed under iteration of a map and are key to understanding its dynamics.

Let f : R → R.

Definition 1.1. A point x is called a fixed point of a map f if f(x) = x. We denote the set of fixed points by Fix(f). If there is an open neighbourhood U of x such that lim_{n→∞} f^n(y) = x for every y ∈ U, we call x an attracting fixed point. If for all small open intervals U around x we have |x − y| < |f(x) − f(y)| for all y ∈ U, then we call x repelling.

In particular, if f is continuously differentiable on an open neighbourhood U of a fixed point x, it suffices to check the size of |f'(x)|. If |f'(x)| < 1 then x is an attracting fixed point; similarly, if |f'(x)| > 1 then x is a repelling fixed point. If |f'(x)| = 1 we can compute |f'(x + ε)| and |f'(x − ε)| for small ε > 0 and check whether the derivative is bigger or smaller than 1 close to x.

Example 1.1. Let f(x) = x^3. Solving f(x) = x gives Fix(f) = {−1, 0, 1}. Because the derivative is f'(x) = 3x^2, the fixed point x = 0 is an attracting fixed point and the fixed points x = −1 and x = 1 are repelling fixed points.

Besides fixed points we need the sequence space, whose elements are infinite lists of symbols. These lists are useful because one can divide the domain of a map into parts and, by assigning a symbol to every part, describe the iteration of the map as a sequence of symbols. Calculating with such symbols is sometimes easier and quicker to work and program with. Although we only use this occasionally in this thesis, symbolic sequences are widely used in dynamical systems; we refer to [6] or [2] for more examples.
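The derivative test above is easy to automate. Below is a small sketch (not part of the thesis; the function names, difference step and tolerance are our own choices) that classifies the fixed points of Example 1.1 numerically:

```python
# Numerically classify the fixed points of f(x) = x^3 from Example 1.1.

def derivative(f, x, h=1e-6):
    """Central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def classify_fixed_point(f, x, tol=1e-6):
    """Return 'attracting', 'repelling' or 'neutral' for a fixed point x of f."""
    assert abs(f(x) - x) < tol, "x is not a fixed point"
    d = abs(derivative(f, x))
    if d < 1 - tol:
        return "attracting"
    if d > 1 + tol:
        return "repelling"
    return "neutral"

f = lambda x: x ** 3

print(classify_fixed_point(f, 0.0))   # attracting, since |f'(0)| = 0 < 1
print(classify_fixed_point(f, 1.0))   # repelling, since |f'(1)| = 3 > 1
print(classify_fixed_point(f, -1.0))  # repelling, since |f'(-1)| = 3 > 1
```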

Definition 1.2. The sequence space on two symbols is defined by Σ_2 = {s = (s_0 s_1 s_2 ...) | s_j ∈ {0, 1}}, and on n symbols by Σ_n = {s = (s_0 s_1 s_2 ...) | s_j ∈ {0, 1, ..., n − 1}}.


Definition 1.3. For s, t ∈ Σ_n define a metric d by

    d[s, t] = Σ_{i=0}^∞ |s_i − t_i| / n^i.   (1.1)

Since |s_i − t_i| is at most n − 1, this infinite series is dominated by the geometric series

    Σ_{i=0}^∞ (n − 1)/n^i = n,

and therefore it converges for all finite n.

Definition 1.4. The shift map σ : Σ_n → Σ_n is given by σ(s_0 s_1 s_2 ...) = (s_1 s_2 s_3 ...).

The shift map simply "forgets" the first entry in a sequence and shifts all other entries one place to the left. Note that on Σ_n the map σ is an n-to-one map, as s_0 ∈ {0, 1, ..., n − 1}, and that σ is continuous with respect to the metric in Definition 1.3.
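As an illustration (our own sketch, not from the thesis), the metric of Definition 1.3 and the shift map can be approximated on finite prefixes of sequences; truncating the series at length k changes d by at most Σ_{i≥k} (n − 1)/n^i = n^{1−k}:

```python
# Approximate the metric d[s, t] of Definition 1.3 and the shift map of
# Definition 1.4 on finite prefixes of sequences in Sigma_n.

def d(s, t, n):
    """Approximate d[s, t] from the common finite prefix of s and t."""
    k = min(len(s), len(t))
    return sum(abs(s[i] - t[i]) / n ** i for i in range(k))

def shift(s):
    """The shift map sigma: forget the first symbol."""
    return s[1:]

s = [1, 3, 1, 1, 4, 4]
t = [1, 3, 1, 2, 4, 4]
# s and t agree on the first three symbols, so d[s, t] <= 4^(1-3) = 1/16.
print(d(s, t, n=4))  # 1/4^3 = 0.015625
print(shift(s))      # [3, 1, 1, 4, 4]
```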

We collect a few results from ergodic theory used in our analysis. We refer to [8] for more information.

Definition 1.5. For a set X and any subset A ⊂ X the characteristic function χ_A : X → R of A is defined by

    χ_A(x) = 1 if x ∈ A,
    χ_A(x) = 0 if x ∉ A.   (1.2)

We use the characteristic function in paragraph 2.1.

Definition 1.6 (Measure-preserving). Let (X, B, µ) and (Y, C, ν) be probability spaces. A map φ from X to Y is measurable if φ^{-1}(A) ∈ B for every A ∈ C, and is measure-preserving if it is measurable and µ(φ^{-1}B) = ν(B) for all B ∈ C. If in addition φ^{-1} exists almost everywhere and is measurable, then φ is called an invertible measure-preserving map. If T : (X, B, µ) → (X, B, µ) is measure-preserving, then the measure µ is said to be T-invariant, (X, B, µ, T) is called a measure-preserving system and T a measure-preserving transformation.

Definition 1.7 (Isomorphic systems). Let (X, B_X, µ, T) and (Y, B_Y, ν, S) be measure-preserving systems on probability spaces. The system (Y, B_Y, ν, S) is isomorphic to (X, B_X, µ, T) if there are sets X_0 ∈ B_X and Y_0 ∈ B_Y with µ(X_0) = 1, ν(Y_0) = 1, T X_0 ⊂ X_0, S Y_0 ⊂ Y_0, and a measure-preserving map φ : X_0 → Y_0 with

    φ ∘ T(x) = S ∘ φ(x)   (1.3)

for all x ∈ X_0.


Example 1.2. Define the measure µ_(1/2,1/2) on the set {0, 1} by

    µ_(1/2,1/2)(0) = µ_(1/2,1/2)(1) = 1/2.   (1.4)

Let X = {0, 1}^N with the infinite product measure µ_(1/2,1/2) = Π_N µ_(1/2,1/2). This space is a natural model for the set of possible outcomes of the infinitely repeated toss of a fair coin. The left shift map σ : X → X of Definition 1.4 preserves µ_(1/2,1/2). The map φ : X → [0, 1] defined by

    φ(x_0, x_1, ...) = Σ_{n=0}^∞ x_n / 2^{n+1}

is measure-preserving from (X, µ) to ([0, 1], ν), where ν is the Lebesgue measure, and φ(σ(x)) = T_2(φ(x)) for T_2(x) = 2x mod 1.

One of the ways in which a measure-preserving transformation may be studied is via its induced action on some natural space of functions. Given any function f : X → R and map T : X → X, write f ∘ T for the function defined by (f ∘ T)(x) = f(T x). As usual we write L¹_µ for the space of (equivalence classes of) measurable functions f : X → R with ∫ |f| dµ < ∞, and L^∞ for the space of bounded measurable functions.

Theorem 1.1 (Birkhoff). Let (X, B, µ, T) be a measure-preserving system. If f ∈ L¹_µ, then the averages

    (1/n) Σ_{j=0}^{n−1} f(T^j(x))

converge almost everywhere and in L¹_µ to a T-invariant function f* ∈ L¹_µ, with

    ∫ f* dµ = ∫ f dµ.

If T is ergodic, then f*(x) = ∫ f dµ almost everywhere.

Birkhoff's theorem precisely describes the relationship sought between the space average of a function and the time average along the orbit of a typical point.
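For the coin-toss system of Example 1.2 with f(x) = x_0, Birkhoff's theorem predicts that the time average converges to the space average ∫ f dµ = 1/2 for almost every x. A small simulation illustrates this (a sketch; the seed and sample length are arbitrary choices of ours):

```python
import random

# Time average of f(x) = x_0 along an orbit of the shift map on coin-toss
# sequences (Example 1.2). Birkhoff's theorem gives the limit 1/2.
random.seed(0)

n = 100_000
# Applying the shift j times and then evaluating f just reads the j-th coin
# toss, so we can generate the symbols of x on the fly.
time_average = sum(random.randint(0, 1) for _ in range(n)) / n

print(time_average)  # close to the space average 1/2
```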


2 Dynamical systems and diffusion

In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in a geometrical space. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, and the number of fish each springtime in a lake. A map from a space to itself generates a dynamical system by iteration. In this chapter we will explain the link between the behaviour of iterated maps and diffusion. Diffusion is the movement of a substance from an area of high concentration to an area of low concentration. It is useful to have maps generating diffusion, because analysing these maps is easier than collecting data from a physical experiment.

Assume you want to track the movement of a particle in a closed space with two small openings, where for every second t you plot the height of the particle. Then there is a moment where the particle moves up to the next closed space or moves down. See the figures below.

Figure 2.1: Moving particle

Figure 2.2: Multiple particles

There are many different maps that could be used to describe this process, but as soon as you have found a suitable map you can use it to analyse the behaviour of the moving particle.

2.1 Dynamical systems

In this section we provide examples of maps defined on the real line that, when iterated, give rise to dynamics reminiscent of random walks.

We start with analysing the maps on the unit interval [0, 1] to find localising and constrained behaviour; for the other three behaviours we extend the maps to R. The first map we analyse is the piecewise linear map; although the map is piecewise linear, the dynamics appearing for different values of a can be rather complex.

2.1.1 A piecewise linear map

The piecewise linear map is continuous everywhere except at the points {k + 1/2 | k ∈ Z}; see Figures 2.3 and 2.4.

Definition 2.1. For a > 0, define the map m_a on the interval [0, 1) by

    m_a(x) = ax           for 0 ≤ x < 1/2,
    m_a(x) = ax + 1 − a   for 1/2 ≤ x < 1,   (2.1)

and extend it to a map M_a : R → R using M_a(x + k) = M_a(x) + k for k ∈ Z.

From now on we will write m_a(x) for the map on [0, 1) and M_a(x) for the map on R; you will encounter similar notation for the other maps introduced later. Below are two examples.

Figure 2.3: M_{1/2} for x ∈ [0, 5]. Figure 2.4: M_4 for x ∈ [0, 5].

We are interested in the fixed points of a function because these points are stationary under iteration of f. For a fixed point x of f we have

    f^n(x) = f ∘ ... ∘ f(x) = x   (n-fold composition).

From here onward we will write x* if x is a fixed point.

Lemma 2.1. For a ≠ 1, the set of fixed points of M_a is Z; in other words, Fix(M_a) = Z. If in addition a < 1, then all fixed points are attracting; on the contrary, if a > 1 all fixed points are repelling.


Proof. Solving m_a(x) = x gives x ∈ {0, 1}. Together with the lift onto R by M_a(x + 1) = M_a(x) + 1 we find that for k ∈ Z

    M_a(k) = m_a(0) + k = k.

For l ∉ Z we can write l = k + d with k ∈ Z and d ∈ (0, 1), so that

    M_a(l) = m_a(d) + k ≠ d + k = l.

Thus Fix(M_a) = Z. Because M_a' = a, the fixed points are attracting for a < 1 and repelling for a > 1.

Note that if a = 1, the map M_1 is the identity map f(x) = x, and we would have M_1(x) = x for every x ∈ R. In the remainder of this chapter we exclude a = 1 to make statements easier.

As soon as a fixed point changes from attracting to repelling, the dynamics of a map changes from localising to constrained behaviour. A third type of behaviour, which we call random walk behaviour, appears when m_a(x) ∉ [0, 1] for some x ∈ [0, 1].

• Figure 2.5: For 0 < a < 1 the fixed points of M_a are attracting and we obtain localising behaviour.

• Figure 2.6: For 1 < a ≤ 2 the fixed points are repelling, but m_a(x) is mapped into [0, 1], so we obtain constrained behaviour.

• Figure 2.7: For a > 2 the fixed points are repelling and m_a(x) is mapped out of [0, 1], so we obtain random walk behaviour.

Figure 2.5: m_a with a < 1. Figure 2.6: m_a with 1 < a ≤ 2. Figure 2.7: m_a with a > 2.
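The three regimes can be checked numerically. The sketch below (the parameter values, starting points and tolerances are our own choices) iterates the lifted map M_a on R:

```python
import math

def M(a, x):
    """The lifted piecewise linear map M_a on R, via M_a(x + k) = M_a(x) + k."""
    k = math.floor(x)
    y = x - k  # fractional part, in [0, 1)
    my = a * y if y < 0.5 else a * y + 1 - a
    return my + k

def orbit(a, x, n):
    xs = [x]
    for _ in range(n):
        xs.append(M(a, xs[-1]))
    return xs

loc = orbit(0.5, 5.3, 60)  # localising: converges to the fixed point 5
con = orbit(1.5, 0.6, 60)  # constrained: stays inside [0, 1]
rw = orbit(3.0, 0.6, 60)   # random walk: leaves [0, 1]

print(abs(loc[-1] - 5) < 1e-6)          # True
print(all(0 <= x <= 1 for x in con))    # True
print(any(x < 0 or x > 1 for x in rw))  # True
```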


In the figures below we iterate four particles x_1 = −2.8, x_2 = 0.6, x_3 = 1.9 and x_4 = 5.3. We plotted the four trajectories for the three different behaviour types.

Figure 2.8: (a) Figure 2.9: (b) Figure 2.10: (c)

Figure 2.11: (a) Localising behaviour generated by M_{1/2}. (b) Constrained behaviour generated by M_{3/2}. (c) Random walk behaviour generated by M_3.

For localising behaviour the iterates f^i(x) converge to a fixed point x* for all x ∈ R, as you can see in Figure 2.8. For constrained behaviour the fixed points are not attracting, but the iterates are contained in an interval [k, k + 1] for some k ∈ Z. Why we call the behaviour of M_a(x) for a > 2 random walk behaviour is explained next. For a > 2 define m_a : [0, 1] → [0, 1] by m_a(x) = M_a(x) mod 1.

Figure 2.12: m_4 for x ∈ [0, 1]. Figure 2.13: The pre-image of [a, b].

To understand the dynamics of M_a for a > 2, we start with analysing m_4. Notice that if we take an interval [a, b] in [0, 1] with 0 < a < b < 1, then m_4^{-1}[a, b] consists of 4 intervals, m_4^{-1}[a, b] = I_1 ∪ I_2 ∪ I_3 ∪ I_4. Notice that for the Lebesgue measure ν

    ν(m_4^{-1}[a, b]) = ν(I_1 ∪ I_2 ∪ I_3 ∪ I_4)
                      = ν(I_1) + ν(I_2) + ν(I_3) + ν(I_4)
                      = (b − a)/4 + (b − a)/4 + (b − a)/4 + (b − a)/4
                      = b − a = ν([a, b]).

This implies that the Lebesgue measure is invariant under m_4, by Theorem A.8 from [8].

Denote the four branch intervals of m_4 by

    J_1 = [0, 0.25],  J_2 = [0.25, 0.5],  J_3 = [0.5, 0.75],  J_4 = [0.75, 1].

If we identify the path of x through {J_1, J_2, J_3, J_4} under iteration of m_4 with an element s ∈ Σ_4^+, as in Example 1.2, we get that (Σ_4^+, σ, µ_(1/4,1/4,1/4,1/4)) is isomorphic to ([0, 1], m_4, ν). In other words, iterating x by m_4 is isomorphic to shifting the quaternary expansion s by σ.

Take for example the initial condition x_0 = 0.1287. The first iterates are: x_0 = 0.1287 ∈ J_1 gives s_0 = 1; x_1 = 0.5148 ∈ J_3 gives s_1 = 3; x_2 = 0.0592 ∈ J_1 gives s_2 = 1; x_3 = 0.2368 ∈ J_1 gives s_3 = 1; ...; x_10 = 0.9329 ∈ J_4 gives s_10 = 4.

Now we can identify x = 0.1287 with the quaternary expansion s = 1311441322.... Note that this quaternary sequence s is unique for the initial condition x_0: if we take the initial condition y_0 = 0.1288 we get the quaternary expansion t = 1311443211..., and we see that after six iterations x_0 and y_0 end up in different intervals J_i. Because of this unique relation there exists a homeomorphism S : [0, 1] → Σ_4^+ with

    S ∘ m_4(x) = σ ∘ S(x).   (2.2)
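The itinerary of a point can be sketched in a few lines of code (our own illustration; note that floating-point errors grow by a factor 4 per iterate, so long itineraries of a decimal initial condition are not exact):

```python
def m4(x):
    """m_4(x) = 4x mod 1 on the unit interval."""
    return (4 * x) % 1.0

def itinerary(x, steps):
    """Symbols s_i in {1, 2, 3, 4}: which quarter J_i contains each iterate."""
    symbols = []
    for _ in range(steps):
        symbols.append(min(int(4 * x) + 1, 4))  # J_1..J_4; clamp x = 1.0 to 4
        x = m4(x)
    return symbols

print(itinerary(0.1287, 5))  # starts 1, 3, 1, 1, 4 as in the text
```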

The map S is measure-preserving from ([0, 1], m_4, ν) to (Σ_4^+, σ, µ_(1/4,1/4,1/4,1/4)). Birkhoff's theorem implies that for k ∈ {1, 2, 3, 4},

    lim_{n→∞} (1/n) Σ_{i=0}^{n−1} χ_{J_k}(m_4^i(x)) = f*(x),

where f* is the constant function 1/4. We can conclude that the iterates of m_4(x) have a probability of 1/4 to be in each interval J_1, J_2, J_3 or J_4, which results in random walk behaviour.

Figure 2.14: M_4 for x ∈ [0, 5]. Figure 2.15: Random walk property.

The same result can be obtained for all a > 2; what makes it more difficult is that the Lebesgue measure ν is in general not invariant under the map M_a(x). This is shown for a = 2.4 below, where I_1, I_2, I_3 are the intervals making up the pre-image of [0.6, 1] under M_{2.4}:

    ν(M_{2.4}^{-1}[0.6, 1]) = ν(I_1 ∪ I_2 ∪ I_3)
                            = ν(I_1) + ν(I_2) + ν(I_3)
                            = 0.4/2.4 + 0.4/2.4 + 0.2/2.4
                            ≠ 0.4 = ν([0.6, 1]).

Even though the Lebesgue measure is not invariant under M_a for most a > 2, we can still prove that M_a shows random walk behaviour for a > 2. Consider m_a for 2 < a < 4. Looking at M_a on the interval [0, 1] we get a subdivision into intervals J_k, k = −1, 0, 1, on which M_a maps into [−1, 0], [0, 1], [1, 2] respectively.

This is characterised as random walk behaviour with positive "chance" to move to the previous or next "integer interval", as explained by the next theorem.

Theorem 2.1. Let 2 < a < 4. There are positive constants C_k, k ∈ {−1, 0, 1}, such that for almost all x ∈ [0, 1],

    lim_{n→∞} (1/n) Σ_{i=0}^{n−1} χ_{J_k}(m_a^i(x)) = C_k.

Proof. By Birkhoff's theorem, lim_{n→∞} (1/n) Σ_{i=0}^{n−1} χ_{J_k}(m_a^i(x)) = ∫_0^1 χ_{J_k} dµ_a = ∫_{J_k} dµ_a for µ_a-almost every x, where the absolutely continuous invariant measure µ_a exists by Lasota-Yorke. For more results and examples of absolutely continuous invariant measures we refer to [10].

Below we sketch the same picture for M_a with 2 < a < 4 as we had for M_4.

Figure 2.16: m_a for a > 2 and x ∈ [0, 1]. Figure 2.17: Random walk property.

The dynamics of M_a for a ∈ R^+ are now understood: we have localising behaviour for 0 < a < 1, constrained behaviour for 1 < a ≤ 2 and random walk behaviour for a > 2. In the next paragraph we see similar dynamics for the climbing sine map, but also another type of dynamics that could not be described by the piecewise linear map.

2.1.2 Climbing sine map

The main focus of this paragraph is to go through the same steps as we did for M_a.

Definition 2.2. We define the climbing sine map on the unit interval [0, 1] by

    s_a(x) = x + a sin(2πx).   (2.3)

We can lift s_a, like m_a, onto R by S_a(x + k) = S_a(x) + k for k ∈ Z; note that because sin(2πx) has period 1, we have S_a(x) = x + a sin(2πx) on all of R.

Note that S_0 = M_1 is the identity map f(x) = x, and that S_{−a}(x) has dynamics similar to S_a(x + 1/2); that is why we only analyse the climbing sine map for a > 0.


Figure 2.18: S_{1/2} for x ∈ [0, 5]. Figure 2.19: S_4 for x ∈ [0, 5].

Like we did in paragraph 2.1.1, we want to find the fixed points and conclude for which a the fixed points are attracting and for which a they are repelling.

Lemma 2.2. Fix(S_a) = {k/2 | k ∈ Z}.

Proof. S_a(x) = x gives x + a sin(2πx) = x, so a sin(2πx) = 0, hence x = k/2 for k ∈ Z.

The fixed points are also visible in Figures 2.18 and 2.19. To determine for which a the fixed points are attracting or repelling we need S_a', calculated below:

    S_a(x) = x + a sin(2πx),
    S_a'(x) = 1 + 2aπ cos(2πx).

Note that the fixed points x* were calculated by solving sin(2πx) = 0, which implies that cos(2πx*) ∈ {1, −1} for x* a fixed point.

Proposition 2.1. The climbing sine map has attracting fixed points only for a < 1/π, and only the fixed points x = k + 1/2 for k ∈ Z are attracting; the other fixed points x ∈ Z are always repelling.

Proof. The fixed points x ∈ Z are repelling because S_a'(x) = 1 + 2aπ cos(2πx) = 1 + 2aπ > 1. If x = k + 1/2 for k ∈ Z, then S_a'(x) = 1 + 2aπ cos(2πx) = 1 − 2aπ; these fixed points are attracting if |S_a'(x)| < 1, which holds only if a < 1/π.


Like in the previous paragraph, the map S_a changes from localising behaviour towards constrained behaviour as soon as a exceeds the value 1/π. Below are three examples that describe the same behaviour as detected for m_a.

• Figure 2.20: For 0 < a < 1/π the fixed points of s_a are attracting and we obtain localising behaviour.

• Figure 2.21: For 1/π < a < α the fixed points are repelling, but s_a(x) is mapped into [0, 1], so we obtain constrained behaviour.

• Figure 2.22: For α < a the fixed points are repelling and s_a(x) is mapped out of [0, 1], so we obtain random walk behaviour.

From the figures we can see that there exists a number 0.7 < α < 0.9 such that the behaviour of the climbing sine map s_a changes from constrained to random walk behaviour. We explain how α can be calculated.

Figure 2.20: s_{0.2} for x ∈ [0, 1]. Figure 2.21: s_{0.7} for x ∈ [0, 1]. Figure 2.22: s_{0.9} for x ∈ [0, 1].

Note that s_a on the unit interval [0, 1] has one local maximum and one local minimum. We can find this local max and min by solving s_a'(x) = 0:

    s_a'(x) = 1 + 2aπ cos(2πx) = 0,
    cos(2πx) = −1/(2aπ).

For a ∈ (1/(2π), ∞) there are two solutions in the unit interval. Write x_max(a) for the local maximum depending on a and x_min(a) for the local minimum. The derivative with respect to a, given by

    (d/da) S_a(x) = sin(2πx),

does not depend on a and is positive for x ∈ (0, 1/2) and negative for x ∈ (1/2, 1). We can conclude that the local maximum is increasing when a is increasing. The moment this local maximum exceeds 1 is the moment when s_a no longer maps the unit interval into itself. This generates random walk behaviour. To find this value of a we solve the system of equations S_a(t) = 1 and S_a'(t) = 0.


    S_a'(t) = 1 + 2aπ cos(2πt) = 0,   (2.4)
    S_a(t) = t + a sin(2πt) = 1.   (2.5)

Solving (2.4) for t gives

    t = arccos(−1/(2aπ)) / (2π).

Substituting into (2.5) gives

    arccos(−1/(2aπ)) / (2π) + a sin(arccos(−1/(2aπ))) = 1.

If we approximate the solution for a we find α ≈ 0.73264413.... Just as in the previous paragraph, we found localising, constrained and random walk behaviour. Now iterate the four starting values x_1 = −2.8, x_2 = 0.6, x_3 = 1.9 and x_4 = 5.3; similar to what we did with M_a, we get four different trajectories for the three different behaviour types.

Figure 2.23: (a) Figure 2.24: (b) Figure 2.25: (c)

Figure 2.26: (a) Localising behaviour generated by S_a for a < 1/π. (b) Constrained behaviour generated by S_a for 1/π < a < α. (c) Random walk behaviour generated by S_a for α < a.
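The critical value α can be approximated numerically, for instance by bisection on the equation above (a sketch; the bracket [0.7, 0.9] comes from the figures and the number of bisection steps is an arbitrary choice of ours):

```python
import math

def local_max_value(a):
    """Value of S_a at its local maximum in (0, 1/2), minus 1.
    Positive once the maximum exceeds 1, i.e. once s_a escapes [0, 1]."""
    t = math.acos(-1.0 / (2.0 * a * math.pi)) / (2.0 * math.pi)
    return t + a * math.sin(2.0 * math.pi * t) - 1.0

# Bisection on [0.7, 0.9], where the figures locate the transition.
lo, hi = 0.7, 0.9
for _ in range(60):
    mid = (lo + hi) / 2.0
    if local_max_value(mid) < 0.0:
        lo = mid
    else:
        hi = mid

alpha = (lo + hi) / 2.0
print(alpha)  # approximately 0.73264413
```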

For most values of a > α the climbing sine map shows random walk behaviour, but something unexpected happens for some specific values of a. We will not go into the details, but try to explain the dynamics through some figures. For some values of a close to a = 1 the map s_a(x) mod 1 has an attracting fixed point, causing the iterates of S_a(x) to consistently increase by 1 or decrease by 1. Below we give an example.

Figure 2.27: (a) Figure 2.28: (b) Figure 2.29: (c)

Figure 2.30: (a) s_{1.1} mod 1 on the unit interval. (b) First iterates of x = 0.3 under S_{1.1}(x). (c) 1000 iterates of S_{1.1}.

This new behaviour describes another type of diffusion, called ballistic diffusion, which is why we call this ballistic behaviour.

2.1.3 Pomeau–Manneville map

The third and final map is the Pomeau-Manneville map.

Definition 2.3. We define the Pomeau–Manneville map by

    p_{a,z}(x) = x + a x^z          for x ∈ [0, 1/2],
    p_{a,z}(x) = x − a(1 − x)^z     for x ∈ (1/2, 1].   (2.6)

We can lift p_{a,z}(x) onto R by P_{a,z}(x + k) = P_{a,z}(x) + k for k ∈ Z.

The map p_{a,z} consists of a linear part x together with a disturbance a x^z of degree z. We start with z = 2; below are three examples.

Figure 2.31: p_{−1.3,2} Figure 2.32: p_{1.2,2} Figure 2.33: p_{2.5,2}

There are similarities with the figures of the piecewise linear map (Figures 2.5, 2.6 and 2.7). When a > 2, p_{a,2} maps out of the unit interval, with random walk behaviour as a result. This is because p_{2,2}(1/2) = 1 and (d/da) p_{a,2} = x^2, which is positive for x ∈ (0, 1/2], so we can conclude that p_{a,2}(1/2) > 1 for a > 2.


Solving p_{a,z}(x) = x:

    For x ∈ [0, 1/2):  x + a x^z = x,  so  a x^z = 0,  giving  x = 0.
    For x ∈ (1/2, 1]:  x − a(1 − x)^z = x,  so  a(1 − x)^z = 0,  giving  x = 1.

With the same reasoning as in Lemma 2.1 we have that Fix(P_{a,z}(x)) = Z. To make notation easier we will write p_1 for the branch on x ∈ [0, 1/2) and p_2 for the branch on x ∈ (1/2, 1].

To understand the types of behaviour of p_{a,z} it does not suffice to just check whether the derivative is bigger or smaller than 1, because the derivative of p_{a,z} is equal to 1 at the fixed points x ∈ Z of p_{a,z}, as shown below:

    p_1(x) = x + a x^z,        p_1'(x) = 1 + a z x^{z−1},        p_1'(0) = 1,
    p_2(x) = x − a(1 − x)^z,   p_2'(x) = 1 + a z (1 − x)^{z−1},  p_2'(1) = 1.

On the other hand, it is directly clear from the graph of p_{a,z} that p_{a,z} has attracting behaviour when −2 < a < 0 and repelling behaviour when 0 < a. Iterating the functions in Figures 2.31, 2.32 and 2.33 for the same four starting values x_1 = −2.8, x_2 = 0.6, x_3 = 1.9 and x_4 = 5.3 as we used in the previous paragraphs, we obtain:

• Figure 2.34: For −2 < a < 0 the fixed points of P_{a,2} are attracting and we obtain localising behaviour.

• Figure 2.35: For 0 < a < 2 the fixed points are repelling, but p_{a,2} is mapped into [0, 1], so we obtain constrained behaviour.

• Figure 2.36: For 2 < a the fixed points are repelling and p_{a,2} is mapped out of [0, 1], so we obtain random walk behaviour.

Figure 2.34: (a) Figure 2.35: (b) Figure 2.36: (c)

Figure 2.37: (a) Localising behaviour generated by P_{a,2} for −2 ≤ a < 0. (b) Constrained behaviour generated by P_{a,2} for 0 < a < 2. (c) Random walk behaviour generated by P_{a,2} for a > 2.


The behaviour of the P_{a,2} map for a > 2 looks different from the random walk behaviour of the previous paragraphs. This has to do with the fact that the trajectories move away from the fixed points very slowly. We call this type of behaviour anomalous behaviour; in paragraph 2.2 we will explain the difference between random walk behaviour and anomalous behaviour.

As a increases, so does the movement of the iterates of p_{a,2}. See P_{8,2} in the figure below.

Figure 2.38: P_{8,2} for x ∈ [0, 1]. Figure 2.39: 1000 iterates of P_{8,2}.
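The slow escape from the marginal fixed point at 0 can be observed numerically. The sketch below (the parameter values and the threshold are our own choices) counts how many iterates of p_{a,z} a point near 0 needs to leave [0, 1/2):

```python
def p(a, z, x):
    """The Pomeau-Manneville map p_{a,z} on [0, 1]."""
    return x + a * x ** z if x <= 0.5 else x - a * (1.0 - x) ** z

def escape_time(a, z, x, bound=0.5, max_steps=100_000):
    """Number of iterates until the orbit of x leaves [0, bound)."""
    steps = 0
    while x < bound and steps < max_steps:
        x = p(a, z, x)
        steps += 1
    return steps

# Near the fixed point 0 the derivative is 1 + a z x^(z-1), barely above 1,
# so orbits linger there: a higher z means a much longer escape.
t_z2 = escape_time(2.5, 2, 0.01)
t_z3 = escape_time(2.5, 3, 0.01)
print(t_z2)  # dozens of steps for z = 2
print(t_z3)  # thousands of steps for z = 3
```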

Finally, we would like to understand how z affects P_{a,z}. We will do that through some figures. Note that on the unit interval, a x^z decreases when z increases; that is why, to compare p_{a,2} with p_{a,4}, we need a to increase when z does.

Figure 2.40: P_{2,2} and P_{8,4}. Figure 2.41: P_{5,2} and P_{32,4}. Figure 2.42: 100000 iterates of P_{32,4}.

For x close to the fixed points {0, 1} the derivative of p_{32,4} is so close to 1 that we need to iterate 1000000 times to see some iterates jump out of the unit interval. Take for example x = 0.01; then

    p_{32,4}'(0.01) = 1 + 32 · 4 · (0.01)^3 = 1.000128.

The consequence is that particles are trapped around the fixed points for a long time, but they slowly move away from the fixed point. This is the main reason we only analyse p_{a,2}: for large z the map p_{a,z} needs many more iterates to show the behaviour of the iterates.

Before reading the next paragraph it is helpful to go over the figures again, and to think about iterations as the path a certain particle travels in a one-dimensional direction. In the next paragraph we calculate the displacement of these iterates (particles). So far we have analysed three different maps, M_a, S_a and P_{a,z}, and we found five different types of behaviour: localising, constrained, anomalous, random walk and ballistic behaviour. These five types of behaviour will be used to describe different types of diffusion. If we have maps that generate types of diffusion, we can analyse these maps to understand real-life phenomena, like water flows, trajectories of atoms, and a lot more.


2.2 Diffusion

Recall that diffusion is the movement of a substance from an area of high concentration to an area of low concentration. The five behaviour types described in paragraph 2.1 will provide different diffusion types. The five behaviour types were stated as follows.

• Localising behaviour: For all x ∈ R the iterates f^n(x) converge to a fixed point. In other words, all orbits are attracted to a fixed point.

• Constrained behaviour: The fixed points are repelling but all iterates f^n(x) are contained in a closed interval. That means that there exist a finite k and a b ∈ R such that f^n(x) ∈ [b, b + k] for all n ∈ N. For M_a, S_a and P_{a,z} we had k = 1.

• Random walk behaviour: For all x ∈ R the iterates f^n(x) randomly walk over the real line R.

• Ballistic behaviour: The map itself has no stable fixed points, but the map modulo an integer has a stable fixed point on the unit interval, causing the iterates to increase or decrease by a constant. This behaviour was generated by the climbing sine map.

• Anomalous behaviour: This looks a lot like random walk behaviour, but with a certain delay. This is a special behaviour generated by the Pomeau-Manneville map.

In the previous paragraph we analysed these types of behaviour by iterating the maps for different values of a. These types of behaviour can be seen as the movement of a particle in a one-dimensional space. In this paragraph we try to link the types of behaviour to types of diffusion. In the first paragraph below we introduce the MSD, the Mean Square Displacement, with which we will say something about the diffusion, and we give some calculations using M_a, S_a and P_{a,z}. In the second paragraph we use the MSD to link the five behaviours of the previous chapter to different types of diffusion.

2.2.1 Mean Square Displacement

To measure the average squared distance a particle travels while iterating a map, we use the Mean Square Displacement function, defined for a given map f, initial points x_i and given time t by

    MSD(t) = lim_{n→∞} (1/n) Σ_{i=1}^{n} (f^t(x_i) − x_i)^2.   (2.7)

This calculates the average squared distance a particle moves if x_i is iterated t times by the map. Here t can be seen as the time the particle moves around before we measure its displacement. We will calculate the MSD for all behaviour types of M_a, S_a and P_{a,z}.


• Figure 2.43: In these three figures we iterate 100 random points x_i ∈ [0, 1] under the maps M_{0.8} (localising), M_{1.4} (constrained) and M_{2.2} (random walk). In these figures we plotted the iterates of x_i to show the positions f^t(x_i) of multiple iterates.

• Figure 2.44: We again take 100 random initial points x_i and plot the absolute displacement with respect to the initial point, |f^t(x_i) − x_i|.

• Figure 2.45: In the last three figures we plotted the MSD against the time t for the same three maps M_{0.8} (localising), M_{1.4} (constrained) and M_{2.2} (random walk).

Figure 2.43: From left to right we have the particle positions for M_{0.8}, M_{1.4} and M_{2.2} respectively.

Figure 2.44: From left to right we have the squared displacement of M_{0.8}, M_{1.4} and M_{2.2} respectively.


Note that for the localising map the displacement becomes constant, and so the MSD converges. Also for constrained behaviour the displacements of the iterates of x_i are contained in a closed interval [k, k + 1], so for large t the MSD converges. For random walk behaviour it looks like the MSD grows linearly. The growth rate of the MSD will tell us something about the type of diffusion, but before talking about diffusion we first plot the same figures for S_a.
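The MSD of equation (2.7) is easy to approximate with finitely many random initial points. The sketch below (our own illustration, with n = 1000 points and our own seed) contrasts the localising map M_{0.8} with the random walk map M_{2.2}:

```python
import math
import random

def M(a, x):
    """The lifted piecewise linear map M_a on R."""
    k = math.floor(x)
    y = x - k
    return (a * y if y < 0.5 else a * y + 1 - a) + k

def msd(a, t, n=1000, seed=1):
    """Approximate MSD(t) of equation (2.7) with n random points in [0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x0 = rng.random()
        x = x0
        for _ in range(t):
            x = M(a, x)
        total += (x - x0) ** 2
    return total / n

m_loc = msd(0.8, 100)
m_rw = msd(2.2, 100)
print(m_loc)  # bounded: every orbit converges to a fixed point in Z
print(m_rw)   # much larger: the iterates spread over R
```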

• Figure 2.46: In these four figures we iterate 100 random points x_i ∈ [0, 1] under the maps S_{0.2} (localising), S_{0.7} (constrained), S_{0.8} (random walk) and S_{1.01} (ballistic). In these figures we plotted the iterates of x_i to show the positions f^t(x_i) of multiple iterates.

• Figure 2.47: We again take 100 random initial points x_i and plot the absolute displacement with respect to the initial point, |f^t(x_i) − x_i|.

• Figure 2.48: In these last figures we plotted the MSD; notice that the ballistic curve does not grow linearly.

Figure 2.46: From left to right we have the particle positions for S_{0.2}, S_{0.7}, S_{0.8} and S_{1.01} respectively.

Figure 2.47: From left to right we have the squared displacement of S_{0.2}, S_{0.7}, S_{0.8} and S_{1.01} respectively.

Figure 2.48: From left to right we have the MSD of S_{0.2}, S_{0.7}, S_{0.8} and S_{1.01} respectively.

• Figure 2.49: In these four figures we iterate 100 random points x_i ∈ [0, 1]. We use the four maps P_{−1,2} (localising), P_{2,2} (constrained), P_{6,3} (anomalous) and P_{32,2} (random walk). The x-axis is the time we iterate and the y-axis is the position of each particle.

• Figure 2.50: The x-axis is still time, but the y-axis is the squared displacement, also written as (P^t_{a,z}(x_i) − x_i)^2.

• Figure 2.51: We plot t against the MSD. We use 1000 random points x_i instead of 100 to make the curves smoother, and to show that the anomalous diffusion does not grow linearly we iterate 1000 times instead of 100.

Figure 2.49: From left to right we have the particle positions for P_{−1,2}, P_{2,2}, P_{6,3} and P_{32,2} respectively.

Figure 2.50: From left to right we have the squared displacement of P_{−1,2}, P_{2,2}, P_{6,3} and P_{32,2} respectively.

Figure 2.51: From left to right we have the MSD of P_{−1,2}, P_{2,2}, P_{6,3} and P_{32,2} respectively.

In the next paragraph we use the MSD to define the diffusion constant; with the diffusion constant we can link the types of behaviour to types of diffusion, which was the aim of this chapter.


2.2.2 Types of diffusion

In this paragraph we introduce the growth rate of MSD, the aim of this paragraph is to connect the growth rate of MSD to the types of diffusion.

We define the growth rate α by:

M SD ∼ tα

We will not give explicit calculation or approximations of α, we rather state some facts about α. We can confirm the statement below with the figures of the previous paragraph.

• Localising diffusion: α = 0. The MSD converges to a finite value; see the localising and constrained behaviour examples in the previous paragraph.

• Sub-diffusion: 0 < α < 1. Commonly encountered in crowded environments, e.g. for organelles moving in biological cells. An example is the anomalous behaviour of P6,3: the trajectories wander away from their initial point, but with some sort of delay.

• Normal diffusion: α = 1. Generated by maps that show random walk behaviour.

• Super-diffusion: 1 < α < 2. Displayed by a variety of other systems, like animals searching for food and light propagating through disordered matter. An example is P6,5/3.

• Ballistic diffusion: α = 2. See the example S1.01.
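A sketch of how α could be estimated from a measured MSD curve: fit a straight line to log(MSD) against log(t) and read off the slope. The two curves below are synthetic power laws, not output of the maps of this thesis.

```python
import math

def growth_rate(msd_curve):
    """Estimate alpha in MSD ~ t^alpha by least-squares fitting
    log(MSD) against log(t); the t = 0 entry is skipped."""
    pts = [(math.log(t), math.log(m))
           for t, m in enumerate(msd_curve) if t > 0 and m > 0]
    n = len(pts)
    sx = sum(p[0] for p in pts)
    sy = sum(p[1] for p in pts)
    sxx = sum(p[0] * p[0] for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Synthetic curves MSD = t^alpha, to check the estimator:
alpha_super = growth_rate([t ** 1.5 for t in range(101)])   # super-diffusive
alpha_normal = growth_rate([float(t) for t in range(101)])  # normal diffusion
```

On exact power-law data the fitted slope recovers α up to floating-point error; on simulated MSD curves it gives a rough classification into the types listed above.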

To give an example of super-diffusion, see figure 2.52: we plotted P6,5/3 for 1000 randomly chosen initial points xi ∈ [0, 1].

Figure 2.52: From left to right we have the particle positions, the squared displacement and the MSD for P6,5/3.

3 Iterated function systems

In this chapter we analyse iterated function systems (IFS); an IFS is a system where we iterate between multiple maps. In the first paragraph we explain how anomalous diffusion can be obtained by iterating multiple maps randomly. In the second paragraph we discuss a topological property of iterated function systems.

3.1 Random dynamical system

If we take an iterated function system and attach probabilities to iterating each map, we call this a random dynamical system. Take for example M1/2 (localising) and S0.9 (random walk), and iterate with probability p = 1/2 the map M1/2 and with probability 1 − p = 1/2 the map S0.9. In the left figure we plotted the two functions, and in the right figure we iterated the initial point x0 = 0.3 four times under the two maps. So red, blue, yellow and purple are constructed by different combinations of iterates of M1/2 and S0.9, chosen randomly with probability 1/2.

Figure 3.1: M1/2 and S0.9 for x ∈ [0, 5]. Figure 3.2: 1000 iterates of x0 = 0.3.

We are interested in the dynamics that appear when we take two maps, one localising and the other with random walk behaviour. Changing p changes the displacement of the system: take the same maps M1/2 and S0.9 but different values of p. Below we have p = 0.7 (left) and p = 0.3 (right). We refer to [1] for more information on these random dynamical systems and how they can generate anomalous diffusion.


Figure 3.3: p = 0.7 Figure 3.4: p = 0.3
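The random iteration scheme can be sketched as follows. The explicit forms M1/2(x) = x/2 and S0.9(x) = x + 0.9 sin(2πx) are assumptions for the sake of illustration (the maps are defined in chapter 2); only the random selection mechanism is the point here.

```python
import math
import random

def iterate_random(maps, probs, x0, n_steps, seed=1):
    """Iterate x0, at each step picking maps[i] with probability probs[i]."""
    rng = random.Random(seed)
    x, orbit = x0, [x0]
    for _ in range(n_steps):
        f = rng.choices(maps, weights=probs)[0]
        x = f(x)
        orbit.append(x)
    return orbit

# Assumed explicit forms, used only as a sketch:
M_half = lambda x: 0.5 * x                            # localising
S_09 = lambda x: x + 0.9 * math.sin(2 * math.pi * x)  # random walk

orbit = iterate_random([M_half, S_09], [0.5, 0.5], x0=0.3, n_steps=1000)
```

Changing the `probs` argument to `[0.7, 0.3]` or `[0.3, 0.7]` reproduces the experiment of figures 3.3 and 3.4.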

3.1.1 Anomalous diffusion

As stated in [1]: ”Many diffusion processes in nature and society were found to behave profoundly different from Brownian motion, which describes the random-looking flickering of a tracer particle in a fluid. Brownian dynamics provided a long-standing powerful paradigm to understand spreading in terms of normal diffusion.” There we saw that the MSD of a group of particles increases linearly in the infinite time limit, MSD ∼ t^α with α = 1. Experimental data exhibiting anomalous diffusion are often modeled successfully by advanced concepts of stochastic theory: continuous time random walks, superdiffusive Lévy walks, generalized Langevin equations, or fractional Fokker-Planck equations; for more explanation of these equations we refer to [9]. For a more detailed analysis of anomalous diffusion we refer to [1] and [11].

3.2 Il’yashenko

The aim of this paragraph is to understand the dynamics of an iterated function system close to a common fixed point. This can be applied in several contexts, including iterated function systems containing Ma, Sa and Pa,z. The theorem of Il’yashenko is a topological result: it shows that a certain composition of functions exists. It does not say anything about random iterations of functions. We write

f1(x) = λx + a1 x^k + o(x^k),
f2(x) = µx + a2 x^k + o(x^k),

so that f1(0) = f2(0) = 0 is a common fixed point. If g is a finite composition of the maps f1 and f2, we say that g ∈ ⟨f1, f2⟩: the function g(x) is generated by taking a composition of multiple f1 and f2. The following lemma is needed to prove the main theorem of this section.

Lemma 3.1 (Linearisation). Let f(x) = λx + a x^k be a map with linear part λ and a perturbation of order k. Then there exists a continuous function y = h(x) with h'(0) = 1 such that h^{-1} ∘ f ∘ h(x) = λx + o(x^k). Here o(x^k) denotes terms of order higher than x^k.

Proof. Given xn+1 = λ xn + a xn^k we make an educated guess that y = h(x) = x + b x^k for some b, with h^{-1}(y) = y − b y^k + o(y^k). In the new coordinate y we get the following expression for h^{-1} ∘ f ∘ h:

yn+1 = h^{-1} ∘ f ∘ h(yn)
     = h^{-1} ∘ f(yn + b yn^k)
     = h^{-1}(λ(yn + b yn^k) + a(yn + b yn^k)^k)
     = h^{-1}(λ yn + (a + λb) yn^k + o(yn^k))
     = λ yn + (a + λb) yn^k + o(yn^k) − b(λ yn + (a + λb) yn^k + o(yn^k))^k + o(yn^k)
     = λ yn + (a + λb − b λ^k) yn^k + o(yn^k).

So far b was arbitrary; now choose b = a/(λ^k − λ), so that

yn+1 = h^{-1} ∘ f ∘ h(yn) = λ yn + o(yn^k).

Applying this lemma repeatedly, and using that lim_{k→∞} x^k = 0 for x ∈ [0, 1), we obtain g^{-1} ∘ f ∘ g(x) = λx for x close to zero, where g is the composition of the successive coordinate changes hi(x).

Example 3.1. Given f(x) = 3x + 4x^2, find h(x) such that h^{-1} ∘ f ∘ h(x) = 3x + o(x^2). We have λ = 3, a = 4 and k = 2, so b = a/(λ^k − λ) = 4/6 = 2/3, h(x) = x + (2/3)x^2 and h^{-1}(x) = x − (2/3)x^2 + o(x^2). Then

h^{-1} ∘ f ∘ h(x) = h^{-1} ∘ f(x + (2/3)x^2)
                  = h^{-1}(3(x + (2/3)x^2) + 4(x + (2/3)x^2)^2)
                  = h^{-1}(3x + 6x^2 + o(x^2))
                  = (3x + 6x^2 + o(x^2)) − (2/3)(3x + 6x^2 + o(x^2))^2 + o(x^2)
                  = 3x + o(x^2).

It is important to understand that, close to zero, repeated application of this method makes h^{-1} ∘ f ∘ h(x) converge to the linear map 3x.
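Example 3.1 can be checked numerically: conjugating f(x) = 3x + 4x² by h(x) = x + (2/3)x² should leave 3x up to terms of order x³ near zero. Since only the series expansion of h^{-1} is given above, the sketch below inverts h by a simple fixed-point iteration.

```python
def f(x):
    return 3 * x + 4 * x ** 2      # f(x) = 3x + 4x^2

def h(x):
    return x + (2 / 3) * x ** 2    # h(x) = x + (2/3)x^2, with b = 4/(3^2 - 3)

def h_inv(y, iters=60):
    """Invert h numerically via the fixed-point iteration x <- y - (2/3)x^2,
    which contracts for small y."""
    x = y
    for _ in range(iters):
        x = y - (2 / 3) * x ** 2
    return x

# Near zero the conjugated map agrees with 3x up to order x^3.
x = 1e-4
error = abs(h_inv(f(h(x))) - 3 * x)   # of order x^3, so roughly 1e-12
```

The residual error shrinks like x³, confirming that only the order-x² term has been removed, exactly as the lemma predicts.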


We will use Lemma 3.1 in the following theorem.

Theorem 3.1 (Il’yashenko). Consider two smooth maps f1, f2 from R to R satisfying the following properties:

1. f1(0) = f2(0) = 0;

2. for x ∈ (0, a), f1(x) < x and f2(x) > x;

3. either

   a) ln f1'(0)/ln f2'(0) ∉ Q, or,

   b) there is an integer k ≥ 2 with

      f1^(k)(0)/(f1'(0) − (f1'(0))^k) ≠ f2^(k)(0)/(f2'(0) − (f2'(0))^k).

Then for any x ∈ (0, a) and open U ⊂ (0, a), there is g ∈ ⟨f1, f2⟩ with g(x) ∈ U.

Proof. Write

f1(x) = λx + a1 x^k + o(x^k),
f2(x) = µx + a2 x^k + o(x^k).

Consider a coordinate change y = i(x) that linearizes f1, i ∘ f1 ∘ i^{-1} = λ, as in Lemma 3.1. Write i(x) = x + b x^k + o(x^k); in particular i removes an order-k term. Using x1 = f1(x0), y1 = i(x1) and y0 = i(x0), calculate

y1 = i(x1)
   = x1 + b x1^k + o(x1^k)
   = λ x0 + a1 x0^k + b λ^k x0^k + o(x0^k)
   = λ y0 + (a1 + b λ^k − b λ) y0^k + o(y0^k).

So b satisfies a1 + b λ^k − b λ = 0 and thus

b = a1/(λ − λ^k).

If we apply the same coordinate change to f2, we get for u1 = f2(u0), v1 = i(u1) and v0 = i(u0) that v1 = µ v0 + (a2 + b µ^k − b µ) v0^k + o(v0^k). If and only if b = a2/(µ − µ^k) does the order-k term vanish.

So one can assume that one map, f1, is linear. We get j ∘ f2(x) = µ(j(x)) with j(x) = x + d x^k + o(x^k). If f2 has a nonvanishing order-k term for some minimal k, then d ≠ 0.

Non-resonant case: ln λ/ln µ ∉ Q. In this case the multiplicative group generated by λ and µ lies dense in R+. For suitable k_n, l_n we have µ^{l_n} λ^{k_n} x ∈ U, and for large enough n we have j ∘ f2^{l_n} ∘ f1^{k_n}(x) = µ^{l_n} j(λ^{k_n} x) ∈ U. Hence there is a g ∈ ⟨f1, f2⟩ with g(x) ∈ U.
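The non-resonant argument can be illustrated numerically: when ln λ/ln µ is irrational, the products µ^l λ^k x come arbitrarily close to any point of R+. The brute-force search below uses the sample values λ = 1/2 and µ = 3 (so ln λ/ln µ = −ln 2/ln 3 is irrational); these values are assumptions for illustration only.

```python
def find_exponents(lam, mu, x, lo, hi, max_n=200):
    """Brute-force search for k, l >= 0 with mu**l * lam**k * x in (lo, hi)."""
    for k in range(max_n):
        for l in range(max_n):
            y = (mu ** l) * (lam ** k) * x
            if lo < y < hi:
                return k, l, y
    return None

# Land x = 0.3 in U = (0.2, 0.25) via contractions by 1/2 and expansions by 3:
hit = find_exponents(0.5, 3.0, x=0.3, lo=0.2, hi=0.25)
```

Here the first solution found is k = 2, l = 1, since 3 · (1/2)² · 0.3 = 0.225 ∈ U.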

Resonant case: ln λ/ln µ ∈ Q. By taking iterates we may assume λ = 1/µ. Note that f2^n ∘ f1^n converges to j^{-1} as n → ∞. We now consider

h ∈ ⟨f1, f2, j^{-1}⟩, (3.1)

where the orbits of h consist of iterations of f1, f2 and of f2^n ∘ f1^n = j^{-1} for large n. Divide the interval (0, x) into parts by the points x, f1(x), f1^2(x), . . . , f1^{n-1}(x), f1^n(x); for large n the interval (f1^n(x), f1^{n-1}(x)) is close to zero.

Applying j^{-1}(x) = x + b x^k + o(x^k) to f1^n(x) increases the value of f1^n(x) only slightly, because (f1^n(x))^k ≈ 0:

j^{-1}(f1^n(x)) = f1^n(x) + b (f1^n(x))^k > f1^n(x). (3.2)

We can now find points in (f1^n(x), f1^{n-1}(x)) by repeatedly applying j^{-1} to f1^n(x): the points j^{-1}(f1^n(x)), j^{-2}(f1^n(x)), . . . divide the interval (f1^n(x), f1^{n-1}(x)) into smaller parts. Note that the larger we choose n, the closer the points j^{-k}(f1^n(x)) are to each other.

Next we apply f2 to the two points f1^n(x) and f1^{n-1}(x). We do this until for all y ∈ U we have f2^l ∘ f1^n(x) < y < f2^l ∘ f1^{n-1}(x); we may assume this happens, because otherwise we have already found an iterate of x that ends up in U. Suppose there is no iterate f2^l ∘ j^{-k} ∘ f1^n(x) ∈ U. Then we can increase n so that the steps of j^{-1} get smaller; these steps shrink because f1 scales approximately linearly while j^{-1} moves points only by a term of order x^k. We can make the steps in the interval (f1^n(x), f1^{n-1}(x)) as small as we want, so that some iterate has to land in U. Hence we find a suitable h ∈ ⟨f1, f2, j^{-1}⟩ with h(x) ∈ U, with which the density of the orbits is proved.
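The claim that f2^n ∘ f1^n converges to j^{-1} can be illustrated numerically. The sample pair f1(x) = x/2, f2(x) = 2x + x² (an assumed example with λ = 1/µ = 1/2, not taken from the thesis) is first contracted n times and then expanded n times; the result stabilises as n grows.

```python
def f1(x):
    return x / 2           # linear contraction, lambda = 1/2

def f2(x):
    return 2 * x + x ** 2  # expansion with mu = 2 = 1/lambda

def composed(n, x):
    """Compute f2^n(f1^n(x)): contract n times, then expand n times."""
    for _ in range(n):
        x = f1(x)
    for _ in range(n):
        x = f2(x)
    return x

# Successive values approximate j^{-1}(0.1), which is slightly above 0.1.
c30 = composed(30, 0.1)
c40 = composed(40, 0.1)
```

The difference between `c30` and `c40` is tiny, and the limit exceeds 0.1, matching inequality (3.2): j^{-1} pushes points slightly upward.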

Il’yashenko’s theorem gives insight into the density of orbits of iterated function systems.

Example 3.2. Take f2 = M4 and f1 = M1/2. Then for x = 0.3 and U = (0.2, 0.25) there is no g ∈ ⟨f1, f2⟩ with g(x) ∈ U.

Proof. Since f2 ∘ f1^2(x) = x, the possible values of g(0.3) form the set {0.3 · 2^k : k ∈ Z}, and there is no k ∈ Z with 0.3 · 2^k ∈ (0.2, 0.25). The theorem does not apply here: the derivatives are f2'(x) = 4 and f1'(x) = 1/2, so for 3a) we get ln(1/2)/ln(4) = −1/2 ∈ Q, and for 3b) no such k exists because the kth derivatives vanish for k ≥ 2. So both 3a) and 3b) fail.

Example 3.3. Take f2 = 2x + 3x^2 and f1 = (1/2)x − x^2. Then for every x > 0 and open interval U ⊂ (0, ∞) there exists a g ∈ ⟨f1, f2⟩ such that g(x) ∈ U.

Proof. The first two conditions are satisfied: f2(0) = f1(0) = 0, and for x > 0 we have f1(x) < x and f2(x) > x. To check condition 3b) we substitute f1'(0) = 1/2, f1''(0) = −2, f2'(0) = 2 and f2''(0) = 6:

f1''(0)/(f1'(0) − (f1'(0))^2) = −2/(1/2 − (1/2)^2) = −8 ≠ −3 = 6/(2 − 2^2) = f2''(0)/(f2'(0) − (f2'(0))^2).

We can conclude that there exists a g ∈ ⟨f1, f2⟩ with g(x) ∈ U; the iterates of x under ⟨f1, f2⟩ lie dense in R+.
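Condition 3b) in Example 3.3 can be verified with a quick computation:

```python
# Derivatives at 0 of f1(x) = x/2 - x^2 and f2(x) = 2x + 3x^2:
f1_d1, f1_d2 = 0.5, -2.0   # f1'(0), f1''(0)
f2_d1, f2_d2 = 2.0, 6.0    # f2'(0), f2''(0)

# Condition 3b) of Il'yashenko's theorem with k = 2:
lhs = f1_d2 / (f1_d1 - f1_d1 ** 2)   # -2 / (1/2 - 1/4) = -8
rhs = f2_d2 / (f2_d1 - f2_d1 ** 2)   # 6 / (2 - 4) = -3
condition_3b_holds = lhs != rhs
```

Since −8 ≠ −3, condition 3b) holds and the theorem applies.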

Example 3.4. If the two functions g1(x) and g2(x) are each other’s inverse, it makes sense that the iterates of g1 and g2 cannot lie dense in any interval. Take g1(x) = (1/e)x/(1 + (1/e − 1)x) and g2(x) = ex/(1 + (e − 1)x); then g1 ∘ g2(x) = x, so they are each other’s inverse. Checking Il’yashenko’s theorem gives:

1. g1(0) = g2(0) = 0;

2. for x ∈ (0, 1), g1(x) < x and g2(x) > x;

3. but

   a) ln g1'(0)/ln g2'(0) ∈ Q, and,

   b) there is no integer k ≥ 2 with

      g1^(k)(0)/(g1'(0) − (g1'(0))^k) ≠ g2^(k)(0)/(g2'(0) − (g2'(0))^k).

So neither 3a) nor 3b) holds, consistent with the fact that the iterates of ⟨g1, g2⟩ do not lie dense in (0, 1).
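That g1 and g2 in Example 3.4 are each other’s inverse can be checked numerically; compositions in ⟨g1, g2⟩ then collapse to pure powers g1^n or g2^n, so the orbits cannot be dense.

```python
import math

E = math.e

def g1(x):
    return (x / E) / (1 + (1 / E - 1) * x)

def g2(x):
    return E * x / (1 + (E - 1) * x)

# g1(g2(x)) and g2(g1(x)) should equal x up to floating-point error.
max_err = max(abs(g1(g2(x)) - x) for x in [0.1, 0.3, 0.5, 0.9])
```

The residual `max_err` is of the order of machine precision, confirming the inverse relation on (0, 1).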

Checking whether iterates lie dense can be difficult; the theorem of Il’yashenko gives a natural way to determine whether they do.


4 Conclusion

This thesis starts with the introduction of three dynamical systems in chapter 2: the piecewise linear map, the climbing sine map and the Pomeau-Manneville map. We analysed the behaviour of the iterates of these three maps and found multiple types of behaviour. The piecewise linear map shows localising, constrained and random walk behaviour; the climbing sine map shows in addition ballistic behaviour; and the Pomeau-Manneville map showed anomalous behaviour. To distinguish between the behaviour types we computed the Mean Squared Displacement in paragraph (2.2.1). Using the MSD we could link the behaviour types to specific types of diffusion.

In chapter 3 we introduced iterated function systems, and showed how random dynamical systems are constructed by iterating maps with a probability p. We saw that the dynamic types introduced in chapter 2 change behaviour as soon as we change p.

In paragraph (3.2) we proved Il’yashenko’s theorem about the density of iterated function systems close to a common fixed point.


Bibliography

[1] Y. Sato, R. Klages. Anomalous Diffusion in Random Dynamical Systems (American Physical Society, 2019).

[2] A. J. Homburg, M. Gharaei. Random Interval Diffeomorphisms.

[3] Yu. S. Il’yashenko. Thick attractors of step skew products. Regular and Chaotic Dynamics 15:328-334, 2010.

[4] J. Milnor. On the concept of attractor. Comm. Math. Phys. 99(2):177-195, 1985.

[5] A. Lasota, J. A. Yorke. On the Existence of Invariant Measures for Piecewise Monotonic Transformations, 1973.

[6] R. L. Devaney. Chaotic Dynamical Systems (2003).

[7] R. C. Robinson. An Introduction to Dynamical Systems (2012).

[8] M. Einsiedler, T. Ward. Ergodic Theory: With a View Towards Number Theory. Springer-Verlag, 2010.

[9] L. Vlahos, H. Isliker. Normal and Anomalous Diffusion: A Tutorial (2008).

[10] A. Boyarsky, P. Góra. Absolutely Continuous Invariant Measures. In: Laws of Chaos. Probability and Its Applications. Birkhäuser, Boston, MA, 1997.

[11] F. A. Oliveira, R. M. S. Ferreira, L. C. Lapas, M. H. Vainstein. Anomalous Diffusion: A Basic Mechanism for the Evolution of Inhomogeneous Systems.
